Converting to Floating Point Image from .tif - C++

I am relatively new to C++ and coding in general and have run into a problem when attempting to convert an image to a floating point image. I am doing this to eliminate round-off errors when calculating the mean and standard deviation of pixel intensity for images, as they start to affect the data quite substantially. My code is below.
Mat img = imread("Cells2.tif");
cv::namedWindow("stuff", CV_WINDOW_NORMAL);
cv::imshow("stuff",img);
CvMat cvmat = img;
Mat dst = cvCreateImage(cvGetSize(&cvmat),IPL_DEPTH_32F,1);
cvConvertScale(&cvmat,&dst);
cvScale(&dst,&dst,1.0/255);
cvNamedWindow("Test",CV_WINDOW_NORMAL);
cvShowImage("Test",&dst);
And I am running into this error:
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in an unknown function, file ......\modules\core\src\array.cpp, line 1238
I've looked everywhere and everyone says to convert img to CvMat, which I attempted in the code above.
When I do that, as the code above shows, I get
OpenCV Error: Bad argument (Unknown array type) in unknown function, file ......\modules\core\src\matrix.cpp line 697
Thanks for your help in advance.

Just use the C++ OpenCV interface instead of the C interface, and use the convertTo function to convert between data types.
Mat img = imread("Cells2.tif");
cv::imshow("source",img);
Mat dst; // destination image
// check if we have RGB or grayscale image
if (img.channels() == 3) {
    // convert 3-channel (BGR) 8-bit uchar image to 32-bit float
    img.convertTo(dst, CV_32FC3);
}
else if (img.channels() == 1) {
    // convert 1-channel (grayscale) 8-bit uchar image to 32-bit float
    img.convertTo(dst, CV_32FC1);
}
// display output, note that to display dst image correctly
// we have to divide each element of dst by 255 to keep
// the pixel values in the range [0,1].
cv::imshow("output",dst/255);
cv::waitKey();
Second part of the question To calculate the mean of all elements in dst
cv::Scalar avg_pixel;
double avg;
// note that Scalar is a vector.
// If your image is RGB, Scalar will contain 3 values,
// representing color values for each channel.
avg_pixel = cv::mean(dst);
if (dst.channels() == 3) {
    // if 3 channels, average the per-channel means
    avg = (avg_pixel[0] + avg_pixel[1] + avg_pixel[2]) / 3;
}
else if (dst.channels() == 1) {
    avg = avg_pixel[0];
}
cout << "average element of m: " << avg << endl;

Here is my code for calculating the average in C++ OpenCV.
int NumPixels = img.total();
double avg;
double c = 0; // accumulator must be initialized
for (int y = 0; y < img.cols; y++)
    for (int x = 0; x < img.rows; x++)
        c += img.at<uchar>(x, y); // at<uchar> assumes an 8-bit, single-channel image
avg = c / NumPixels;
cout << "Avg Value\n" << 255 * avg;
For MATLAB I just load the image and take Q = mean(img(:));, which returns 1776.23.
And for the return of 1612.36 I used cv::Scalar z = mean(dst);
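As an aside, a mean of 1776.23 is well above 255, which suggests the .tif stores 16-bit samples. A sketch of a direct check, assuming the file really is 16-bit (OpenCV 2.4 load flags):
// keep the native bit depth instead of truncating to 8-bit on load
cv::Mat img16 = cv::imread("Cells2.tif", CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_GRAYSCALE);
cv::Scalar m = cv::mean(img16); // cv::mean works for any depth, no uchar assumption
std::cout << "mean: " << m[0] << std::endl;
If the two programs load the image at different depths (8-bit vs 16-bit), the averages will not match.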

Related

Crop image with OpenCV\C++

I'm using OpenCV (v 2.4.9.1, Ubuntu 16.04) to do a resize and crop on an image, the original image is a JPEG file with dimensions 640x480.
cv::Mat _aspect_preserving_resize(const cv::Mat& image, int target_width)
{
cv::Mat output;
int min_dim = ( image.cols >= image.rows ) ? image.rows : image.cols;
float scale = ( ( float ) target_width ) / min_dim;
cv::resize( image, output, cv::Size(int(image.cols*scale), int(image.rows*scale)));
return output;
}
cv::Mat _center_crop(cv::Mat& image, cv::Size& input_size)
{
cv::Rect myROI(int(image.cols/2-input_size.width/2), int(image.rows/2-input_size.height/2), input_size.width, input_size.height);
cv::Mat croppedImage = image(myROI);
return croppedImage;
}
int min_input_size = int(input_size.height * 1.14);
cv::Mat image = cv::imread("power-dril/47105738371_72f83eeb37_z.jpg");
cv::Mat output = _aspect_preserving_resize(image, min_input_size);
cv::Mat result = _center_crop(output, input_size);
After this I display the images, and it looks perfect - as I would expect it to be:
The problem comes when I stream this image, where I notice that the size (in elements) of the cropped image is only a third of what I would expect. It looks as if there is only one channel in the resulting crop. It should have 224*224*3 = 150528 elements, but I get only 50176 when I do
std::cout << cropped_image.total() << " " << cropped_image.type() << endl;
>>> 50176 16
Any idea what's wrong here? The type of the resulting cv::Mat looks okay, and also visually it looks ok, so how there is only one channel?
Thanks in advance.
Basic Structures — OpenCV 2.4.13.7 documentation says:
Mat::total
Returns the total number of array elements.
C++: size_t Mat::total() const
The method returns the number of array elements (a number of pixels if
the array represents an image).
Therefore, the return value is the number of pixels, 224*224 = 50176, and your expected value is wrong.
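If what you want is the element or byte count rather than the pixel count, total() can be combined with channels() or elemSize(); a small sketch for the 224x224 CV_8UC3 crop:
size_t pixels   = cropped_image.total();                            // 50176
size_t elements = cropped_image.total() * cropped_image.channels(); // 150528
size_t bytes    = cropped_image.total() * cropped_image.elemSize(); // 150528 for 8-bit, 3 channels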
My terminology was wrong, as pointed out by @MikeCAT, and it seems that the issue should be solved in the serialization logic. I went with a solution along the lines of this one:
Convert Mat to Array/Vector in OpenCV
My original code didn't check the channels() function.
if (curr_img.isContinuous()) {
    // the buffer is one contiguous block: copy it in a single call
    int totalsz = curr_img.dataend - curr_img.datastart;
    array.assign(curr_img.datastart, curr_img.datastart + totalsz);
} else {
    // copy row by row; rowsz is the row size in bytes, covering all channels
    int rowsz = CV_ELEM_SIZE(curr_img.type()) * curr_img.cols;
    for (int i = 0; i < curr_img.rows; ++i) {
        array.insert(array.end(), curr_img.ptr<uint8_t>(i), curr_img.ptr<uint8_t>(i) + rowsz);
    }
}

OpenCV 3 C++ Mat fetching with pointer goes random

I'm quite new to OpenCV and I'm now using version 3.4.1 with the C++ implementation. I'm still exploring, so this question is not specific to a project, but is more of a "try to understand how it works". Please consider, with the same idea in mind, that I know that I'm somehow "reinventing the wheel" with this code, but I wrote this example to understand "HOW IT WORKS".
The idea is:
Read an RGB image
Make it binary
Find Connected areas
Colour each area differently
As an example I'm using a 5x5 pixel RGB image saved as BMP. The image is a white box with black pixels all around its contour.
Up to the point where I get the ConnectedComponents matrix, named Mat::Labels, it all goes fine. If I print the Matrix I see exactly what I expect:
11111
10001
10001
10001
11111
Remember that I've inverted the threshold so it is correct to get 1 on the edges...
I then create a Mat with same size of Mat::Labels but 3 channels to colour it with RGB. This is named Mat::ColoredLabels.
The next step is to instantiate a pointer that runs through Mat::Labels and, for each position in Mat::Labels where the value is 1, fill the corresponding Mat::ColoredLabels position with a color.
HERE THINGS GO VERY WRONG! The pointer does not fetch Mat::Labels row by row as I would expect, but follows some other order.
Questions:
Am I doing something wrong, or is it "obvious" that the pointer fetching follows some "unpredictable" order?
How could I set values of a Matrix (Mat::ColoredLabels) based on the values of another matrix (Mat::Labels) ?
#include "opencv2\highgui.hpp"
#include "opencv2\opencv.hpp"
#include <stdio.h>
using namespace cv;
int main(int argc, char *argv[]) {
char* FilePath = "";
Mat Img;
Mat ImgGray;
Mat ImgBinary;
Mat Labels;
uchar *P;
uchar *CP;
// Image acquisition
if (argc < 2) {
printf("Missing argument");
return -1;
}
FilePath = argv[1];
Img = imread(FilePath, CV_LOAD_IMAGE_COLOR);
if (Img.empty()) {
printf("Invalid image");
return -1;
}
// Convert to Gray...I know I could convert it right away while loading....
cvtColor(Img, ImgGray, CV_RGB2GRAY);
// Threshold (inverted) to obtain black background and white blobs-> it works
threshold(ImgGray, ImgBinary, 170, 255, CV_THRESH_BINARY_INV);
// Find Connected Components and put the 1/0 result in Mat::Labels
int BlobsNum = connectedComponents(ImgBinary, Labels, 8, CV_16U);
// Just to see what comes out with a 5x5 image. I get:
// 11111
// 10001
// 10001
// 10001
// 11111
std::cout << Labels << "\n";
// Prepare to fetch the Mat(s) with pointer to be fast
int nRows = Labels.rows;
int nCols = Labels.cols * Labels.channels();
if (Labels.isContinuous()) {
nCols *= nRows;
nRows = 1;
}
// Prepare a Mat as big as LAbels but with 3 channels to color different blobs
Mat ColoredLabels(Img.rows, Img.cols, CV_8UC3, cv::Scalar(127, 127, 127));
int ColoredLabelsNumChannels = ColoredLabels.channels();
// Fetch Mat::Labels and Mat::ColoredLabes with the same for cycle...
for (int i = 0; i < nRows; i++) {
// !!! HERE SOMETHING GOES WRONG !!!!
P = Labels.ptr<uchar>(i);
CP = ColoredLabels.ptr<uchar>(i);
for (int j = 0; j < nCols; j++) {
// The coloring operation does not work
if (P[j] > 0) {
CP[j*ColoredLabelsNumChannels] = 0;
CP[j*ColoredLabelsNumChannels + 1] = 0;
CP[j*ColoredLabelsNumChannels + 2] = 255;
}
}
}
std::cout << "\n" << ColoredLabels << "\n";
namedWindow("ColoredLabels", CV_WINDOW_NORMAL);
imshow("ColoredLabels", ColoredLabels);
waitKey(0);
printf("Execution completed succesfully");
return 0;
}
You used the connectedComponents function with the CV_16U parameter. This means that a single element of the image consists of 16 bits (hence '16') and has to be interpreted as an unsigned integer (hence 'U'). And since ptr returns a pointer, you have to dereference it to get the value.
Therefore you should access label image elements in the following way:
unsigned short val = *Labels.ptr<unsigned short>(i); // or uint16_t
unsigned short val = Labels.at<unsigned short>(y, x);
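Applied to the loop in the question, a minimal sketch of the corrected fetch (keeping the original variable names and assuming Labels stays CV_16U; the continuous-memory row collapse is dropped for clarity):
for (int i = 0; i < Labels.rows; i++) {
    const unsigned short *P = Labels.ptr<unsigned short>(i); // one 16-bit label per pixel
    uchar *CP = ColoredLabels.ptr<uchar>(i);                 // three 8-bit values per pixel
    for (int j = 0; j < Labels.cols; j++) {
        if (P[j] > 0) {
            CP[j * 3]     = 0;   // B
            CP[j * 3 + 1] = 0;   // G
            CP[j * 3 + 2] = 255; // R
        }
    }
}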
Regarding your second question, it is as simple as that, but of course you have to understand which type casts result in loss of precision or overflow, and which ones do not.
mat0.at<int>(y, x) = mat1.at<int>(y, x);  // both matrices have CV_32S type
mat2.at<int>(y, x) = mat3.at<char>(y, x); // CV_32S and CV_8S: implicit widening cast, no information loss
// mat4.at<unsigned char>(y, x) = mat5.at<int>(y, x); // CV_8U and CV_32S
// Implicit narrowing cast occurs. Possible information loss: assigning 32-bit integer values to 8-bit ints.

Convert OGRE::Image to cv::Mat

I am new to the OGRE library. I have a human model in OGRE, and I get the projection of the model in the 'orginalImage' variable. I would like to perform some image processing using OpenCV, so I am trying to convert Ogre::Image to cv::Mat.
Ogre::Image orginalImage = get2dProjection();
//This is an attempt to convert the image
cv::Mat destinationImage(orginalImage.getHeight(), orginalImage.getWidth(), CV_8UC3, orginalImage.getData());
imwrite("out.png", destinationImage);
I get the following error:
realloc(): invalid pointer: 0x00007f9e2ca13840 ***
On a similar note, I tried the following as my second attempt:
cv::Mat cvtImgOGRE2MAT(Ogre::Image imgIn) {
//Result image initialisation:
int imgHeight = imgIn.getHeight();
int imgWidth = imgIn.getWidth();
cv::Mat imgOut(imgHeight, imgWidth, CV_32FC1);
Ogre::ColourValue color;
float gray;
cout << "Converting " << endl;
for(int r = 0; r < imgHeight; r++){
for(int c = 0; c < imgWidth; c++){
color = imgIn.getColourAt(r,c,0);
gray = 0.2126 * color.r + 0.7152 * color.g + 0.0722 * color.b;
imgOut.at<float>(r,c) = gray;
}
}
return imgOut;
}
I get the same error when I do one of the following:
imshow("asdfasd", imgOut);
imwrite("asdfasd.png", imgOut);
Unfortunately I have no experience with OGRE, so I can only talk about OpenCV and what I've seen in the OGRE documentation and the poster's comments.
The first thing to mention is that the Ogre image's PixelFormat is PF_BYTE_RGBA (from the comments), which is (according to the OGRE documentation) a 4-byte pixel format, so the cv::Mat type should be CV_8UC4 if the image data is given by pointer. In addition, OpenCV works best with BGR images, so a color conversion might be best for saving/displaying.
please try:
Ogre::Image orginalImage = get2dProjection();
//This is an attempt to convert the image
cv::Mat destinationImage(orginalImage.getHeight(), orginalImage.getWidth(), CV_8UC4, orginalImage.getData());
cv::Mat resultBGR;
cv::cvtColor(destinationImage, resultBGR, CV_RGBA2BGR);
imwrite("out.png", resultBGR);
In your second example I wondered what was wrong, until I saw color = imgIn.getColourAt(r,c,0); which might be wrong, since most image APIs use .getPixel(x,y), and I confirmed that this is the same for OGRE. Please try this:
cv::Mat cvtImgOGRE2MAT(Ogre::Image imgIn)
{
//Result image initialisation:
int imgHeight = imgIn.getHeight();
int imgWidth = imgIn.getWidth();
cv::Mat imgOut(imgHeight, imgWidth, CV_32FC1);
Ogre::ColourValue color;
float gray;
cout << "Converting " << endl;
for(int r = 0; r < imgHeight; r++)
{
for(int c = 0; c < imgWidth; c++)
{
// next line was changed
color = imgIn.getColourAt(c,r,0);
gray = 0.2126 * color.r + 0.7152 * color.g + 0.0722 * color.b;
// this access is right
imgOut.at<float>(r,c) = gray;
}
}
// depending on what you want to do with the image, the "float" Mat type assumes
// that intensity values are within 0..1 (for display) or 0..255 (for imwrite)
return imgOut;
}
If you still get realloc errors, can you please try to find the exact line of code where it happens?
One thing I didn't consider yet is the real memory layout of OGRE images. It is possible that they use some kind of aligned memory, where each pixel row is padded to a size that is a multiple of 4 or 16 bytes or so (which can be more efficient, e.g. for SSE instructions). If that is the case, you can't use the first method as-is, but would have to change it to cv::Mat destinationImage(orginalImage.getHeight(), orginalImage.getWidth(), CV_8UC4, orginalImage.getData(), STEPSIZE); where STEPSIZE is the number of BYTES for each pixel ROW! The second version should work either way.
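A minimal sketch of that variant, assuming Ogre::Image::getRowSpan() reports the row pitch in bytes (please verify against your OGRE version):
Ogre::Image orginalImage = get2dProjection();
// pass the row pitch explicitly so any row padding is skipped correctly
size_t stepBytes = orginalImage.getRowSpan(); // assumed: bytes per row, including padding
cv::Mat destinationImage(orginalImage.getHeight(), orginalImage.getWidth(),
                         CV_8UC4, orginalImage.getData(), stepBytes);
cv::Mat resultBGR;
cv::cvtColor(destinationImage, resultBGR, CV_RGBA2BGR);
cv::imwrite("out.png", resultBGR);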

Copying two images pixel by pixel

I am trying to work with each pixel from a depth map (I am implementing image segmentation), and I don't know how to work with pixels from an image with depth higher than 1.
This sample code copies a depth map to another cv::Mat pixel by pixel. It works fine if I normalize it (the depth of the normalized image is 1), but it doesn't work with depth = 3, because .at<uchar> is the wrong access for this depth.
cv::Mat res;
cv::StereoBM bm(CV_STEREO_BM_NORMALIZED_RESPONSE);
bm(left, right, res);
std::cout<<"type "<<res.type()<<" depth "<<res.depth()<<" channels "<<res.channels()<<"\n";// type 3 depth 3 channels 1
cv::Mat tmp = cv::Mat::zeros(res.rows, res.cols, res.type());
for(int i = 0; i < res.rows; i++)
{
for(int j = 0; j < res.cols; j++)
{
tmp.at<uchar>(i, j) = res.at<uchar>(i, j);
//std::cout << (int)res.at<uchar>(i, j) << " ";
}
//std::cout << std::endl;
}
cv::imshow("tmp", normalize(tmp));
cv::imshow("res", normalize(res));
The normalize function:
cv::Mat normalize(cv::Mat const &depth_map)
{
double min;
double max;
cv::minMaxIdx(depth_map, &min, &max);
cv::Mat adjMap;
cv::convertScaleAbs(depth_map, adjMap, 255 / max);
return adjMap;
}
left image - tmp, right image - res
How can I get the pixel from image with depth equal to 3?
Mat::depth() returns a constant identifying the bit depth of the image. If you get a depth equal to, for example, CV_32F, then to access the pixels you would need to use float instead of uchar.
CV_8S -> schar
CV_8U -> uchar
CV_16U -> ushort
CV_16S -> short
CV_32S -> int
CV_32F -> float
CV_64F -> double
Mat::channels() tells you how many values of that type are assigned to each coordinate. These multiple values can be extracted as a cv::Vec. So if you have a two-channel Mat with depth CV_8U, instead of using Mat.at<uchar> you would need to go with Mat.at<Vec2b>, or Mat.at<Vec2f> for a CV_32F one.
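A short illustration of the Vec access (a sketch with made-up sizes):
// two-channel float Mat: each coordinate holds a cv::Vec2f
cv::Mat m(4, 4, CV_32FC2, cv::Scalar(0, 0));
cv::Vec2f v = m.at<cv::Vec2f>(0, 0);           // read both channel values at (row 0, col 0)
m.at<cv::Vec2f>(1, 2) = cv::Vec2f(0.5f, 1.0f); // write both channels at (row 1, col 2)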
Note that in your printout depth() is 3, which is CV_16S, while channels() is 1, so the disparity map holds 16-bit signed values in a single channel. Copy it pixel by pixel with:
tmp.at<short>(i, j) = res.at<short>(i, j);
(For a genuinely 3-channel 8-bit image you would use tmp.at<Vec3b>(i,j) = res.at<Vec3b>(i,j); instead.)
However, if you are copying the whole image, I do not understand the point of copying each pixel individually, unless you want to do different processing with different pixels.
You can just copy the whole image res to tmp by this:
res.copyTo(tmp);

OpenCV: How to visualize a depth image

I am using a dataset in which each image pixel is a 16-bit unsigned int storing the depth value of that pixel in mm. I am trying to visualize this as a greyscale depth image by doing the following:
cv::Mat depthImage;
depthImage = cv::imread("coffee_mug_1_1_1_depthcrop.png", CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR ); // Read the file
depthImage.convertTo(depthImage, CV_32F); // convert the image data to float type
namedWindow("window");
float max = 0;
for(int i = 0; i < depthImage.rows; i++){
for(int j = 0; j < depthImage.cols; j++){
if(depthImage.at<float>(i,j) > max){
max = depthImage.at<float>(i,j);
}
}
}
cout << max << endl;
float divisor = max / 255.0;
cout << divisor << endl;
for(int i = 0; i < depthImage.rows; i++){
for(int j = 0; j < depthImage.cols; j++){
cout << depthImage.at<float>(i,j) << ", ";
max = depthImage.at<float>(i,j) /= divisor;
cout << depthImage.at<float>(i,j) << endl;
}
}
imshow("window", depthImage);
waitKey(0);
However, it is only showing two colours. This is because all of the values are close together, i.e. in the range of 150-175, plus the small values which show up black (see below).
Is there a way to normalize this data such that it will show various grey levels to highlight these small depth differences?
According to the documentation, the imshow function can be used with a variety of image types. It supports 16-bit unsigned images, so you can display your image using
cv::Mat map = cv::imread("image", CV_LOAD_IMAGE_ANYCOLOR | CV_LOAD_IMAGE_ANYDEPTH);
cv::imshow("window", map);
In this case, the image value range is mapped from [0, 65535] to [0, 255].
If your image only contains values in the low part of this range, you will observe a dark image. If you want to use the full display range (from black to white), you should adjust the image to cover the expected dynamic range; one way to do it is
double min;
double max;
cv::minMaxIdx(map, &min, &max);
cv::Mat adjMap;
cv::convertScaleAbs(map, adjMap, 255 / max);
cv::imshow("Out", adjMap);
Adding to samg's answer, you can expand the range of your displayed image even more.
double min;
double max;
cv::minMaxIdx(map, &min, &max);
cv::Mat adjMap;
// expand your range to 0..255. Similar to histEq();
map.convertTo(adjMap,CV_8UC1, 255 / (max-min), -min);
// this is great. It converts your grayscale image into a tone-mapped one,
// much more pleasing for the eye
// function is found in contrib module, so include contrib.hpp
// and link accordingly
cv::Mat falseColorsMap;
applyColorMap(adjMap, falseColorsMap, cv::COLORMAP_AUTUMN);
cv::imshow("Out", falseColorsMap);
The result should be something like the one below
If the imshow input has a floating point data type, then the function assumes that pixel values are in the [0, 1] range, and all values higher than 1 are displayed white.
So you should scale into [0, 1] rather than [0, 255], i.e. divide by max instead of by max / 255.
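A minimal fix sketch for the code in the question, assuming max still holds the maximum value (from the scan, or from cv::minMaxIdx):
// map the float depth values into [0, 1] so imshow shows the full range
cv::Mat display = depthImage / max;
cv::imshow("window", display);
cv::waitKey(0);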
Adding to Sammy's answer, if the original value range is [min, max] and you want to perform histogram stretching and display the depth with a colormap, the code should be like below:
double min;
double max;
cv::minMaxIdx(map, &min, &max);
cv::Mat adjMap;
// Linear stretch of [min, max] to [0, 255] (similar in spirit to histogram equalization)
float scale = 255 / (max - min);
map.convertTo(adjMap, CV_8UC1, scale, -min * scale);
// this is great. It converts your grayscale image into a tone-mapped one,
// much more pleasing for the eye
// function is found in contrib module, so include contrib.hpp
// and link accordingly
cv::Mat falseColorsMap;
applyColorMap(adjMap, falseColorsMap, cv::COLORMAP_AUTUMN);
cv::imshow("Out", falseColorsMap);