Using CImg; I'll keep this quick and simple.
CImg<float> i = *spectralImages->at(currentImage);
disp.display(i);
float* f = i.data();
disp displays a black image, even though stepping through *(f), *(f+1), *(f+2), etc. retrieves the correct values (255.0, 245.0, etc.).
I've been working on this all day. Is there a quirk with CImg that I'm missing?
EDIT:
Saving the image as a BMP produces the correct result, so the issue is just with drawing it.
If your CImg image contains only a single value, or several equal values, the default display will show it as a black image, because of the normalization applied to the pixel values for display.
As CImg is able to manage images of any type (including float-valued), it always normalizes the pixel values to [0,255] for display (it does not change the pixel values in your object, of course; it just normalizes them internally for its display).
So if your image has a single pixel value, the normalization will always result in 0, hence the black image.
That means you probably didn't construct your CImgDisplay disp with the right pixel normalization argument (by default, it is enabled).
disp should be constructed like this:
CImgDisplay disp(100,100,"my display",0);
to disable the default normalization of pixel values.
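For reference, a minimal self-contained sketch of this (the uniform test image is just a placeholder):
#include "CImg.h"
using namespace cimg_library;

int main() {
  CImg<float> img(100,100,1,1,255.0f);        // uniform float test image
  CImgDisplay disp(100,100,"my display",0);   // 4th argument 0 = no normalization
  disp.display(img);
  while (!disp.is_closed()) disp.wait();      // keep the window open
  return 0;
}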
Related
I am trying to convert some bitmap files into custom images (exr, pfm, whatever), and after that, back to bitmap:
CImg<float> image(_T("D:\\Temp\\test.bmp"));
image.normalize(0.0, 1.0);
image.save_exr(_T("D:\\Temp\\test.exr"));
and it goes fine; I mean the exr file is OK (same for the .pfm file).
But when I try to convert this exr (or pfm) file back to bitmap:
CImg<float> image;
image.load_exr(_T("D:\\Temp\\test.exr")); // image.load_pfm(_T("D:\\Tempx\\test.pfm"));
image.save_bmp(_T("D:\\Temp\\test2.bmp"));
the result, test2.bmp, is completely black. Why? What am I doing wrong?
Some image formats support saving as float, but most formats save as unsigned 8 bit integer (or uint8), meaning normal image values are from 0 to 255. If you try to save an array that is made up of floats from 0 to 1 into a format that does not support floats, your values will most likely be converted to integers. When you display your image with most image-viewing software, it'll appear entirely black since 0 is black and 1 is almost black.
Most likely, when you save your image to bitmap, it is converting the values to uint8 without scaling them properly. You can fix this by multiplying the normalized values (between 0 and 1) by 255 before saving, e.g. img = int(img*255), or with numpy, img = (img*255).astype(np.uint8).
It is also possible that your save function somehow preserves floating-point values in the bitmap format, but your image-viewing software does not know how to display a float image. Perhaps use some imshow function (matplotlib.pyplot can easily display floating-point grayscale arrays) between each line of code to check that the arrays are what you expect them to be.
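Applied to the CImg code above, a minimal sketch of that fix might look like this (it assumes your CImg build has OpenEXR support enabled; paths are placeholders):
#include "CImg.h"
using namespace cimg_library;

int main() {
  CImg<float> image;
  image.load_exr("D:\\Temp\\test.exr");
  image.normalize(0,255);                 // rescale the [0,1] floats back to [0,255]
  image.save_bmp("D:\\Temp\\test2.bmp");  // now quantizes to sensible 8-bit values
  return 0;
}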
I was wondering what the unit of my boundRect[].tl() output is.
topleft = boundRect[largest_contour_index].tl();
My assumption is that it is in pixels.
If so, do I need to look at the pixels of my camera and the format it outputs to calculate the position of my object?
Or do the pixel coordinates that the function outputs change because OpenCV converts the image to an 8-bit image? I can imagine that the number of pixels the image consists of becomes smaller when the image is converted to 8-bit.
Please correct me if I'm wrong.
Thank you!
First of all, boundingRect returns x and y coordinates, width, and height, all in pixels. You can refer to its documentation: docs.opencv.org/2.4/modules/core/doc/basic_structures.html#rect
Second, the 8-bit conversion changes the pixel values (the color depth) and has no relation to the pixel count. So converting a 100x100 image to an 8-bit image will still give a 100x100 px image.
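A minimal sketch to illustrate (the contour is assumed to come from your own cv::findContours call):
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

void report(const std::vector<cv::Point>& contour) {
  cv::Rect box = cv::boundingRect(contour);
  cv::Point topleft = box.tl();   // pixel coordinates of the top-left corner
  std::cout << "top-left: (" << topleft.x << ", " << topleft.y
            << "), size: " << box.width << "x" << box.height << " px\n";
}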
I have a black area around my image and I want to create a mask using OpenCV C++ that selects just this black area so that I can paint it later. How can I do that without affecting the image itself?
I tried converting the image to grayscale and then thresholding it to binary, but that affects my image, since the result contains black pixels from inside the image as well.
Another question: if I want to crop the image instead of painting it, how can I do that?
Thanks in advance,
I would solve the problem like this:
1. Inverse-binarize the image with a threshold of 1 (i.e. all pixels with the value 0 are set to 1, all others to 0).
2. Use cv::findContours to find the white segments.
3. Remove the segments that don't touch the image borders.
4. Use cv::drawContours to draw the remaining segments into a mask.
There is probably a more efficient solution in terms of runtime efficiency, but you should be able to prototype my solution quite quickly.
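A rough prototype of those steps might look like this (OpenCV 3+ names; the helper name and the border test are my own choices, and it assumes the surrounding area is pure black):
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat makeBorderMask(const cv::Mat& image) {
  cv::Mat gray, bin;
  cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
  // Inverse-binarize: pixels with value 0 become white, all others black.
  cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV);
  std::vector<std::vector<cv::Point>> contours;
  cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
  cv::Mat mask = cv::Mat::zeros(image.size(), CV_8UC1);
  for (size_t i = 0; i < contours.size(); ++i) {
    // Keep only segments whose bounding box touches an image border.
    cv::Rect box = cv::boundingRect(contours[i]);
    bool touchesBorder = box.x == 0 || box.y == 0 ||
                         box.x + box.width == image.cols ||
                         box.y + box.height == image.rows;
    if (touchesBorder)
      cv::drawContours(mask, contours, (int)i, cv::Scalar(255), cv::FILLED);
  }
  return mask;  // white = the black border region, ready for painting
}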
I want to know what actually happens in the following code.
cv::Rect rectangle(x,y,width,height);
cv::Mat result;
cv::Mat bgModel,fgModel;
cv::grabCut(image,
result,
rectangle,
bgModel,fgModel,
1,
cv::GC_INIT_WITH_RECT);
cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
// Generate output image
cv::Mat foreground(image.size(),CV_8UC3,cv::Scalar(255,255,255));
image.copyTo(foreground,result);
According to my knowledge, when we define the rectangle, the outside of the rectangle is considered known background and the inside unknown foreground.
Then bgModel and fgModel are Gaussian mixture models which model the background and foreground pixels separately.
The parameter we pass as 1 means we are asking the iterative segmentation process to run only once.
What I can't understand is:
cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
What actually happens in this call?
If anyone could explain, that would be a great help.
Thanks.
I've found this book, which says that this code compares the pixel values in result to the value GC_PR_FGD: pixels that are equal become white (255) in the output, all other pixels become black (0), so you end up with a binary foreground mask. Here's a citation from it:
The input/output segmentation image can have one of the four values:
cv::GC_BGD, for pixels certainly belonging to the background (for example, pixels outside the rectangle in our example)
cv::GC_FGD, for pixels certainly belonging to the foreground (none in our example)
cv::GC_PR_BGD, for pixels probably belonging to the background
cv::GC_PR_FGD, for pixels probably belonging to the foreground (that is the initial value for the pixels inside the rectangle in our example).
We get a binary image of the segmentation by extracting the pixels having a value equal to cv::GC_PR_FGD:
// Get the pixels marked as likely foreground
cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
// Generate output image
cv::Mat foreground(image.size(),CV_8UC3,cv::Scalar(255,255,255));
image.copyTo(foreground,result); // bg pixels are not copied
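A tiny standalone sketch (values written by hand) showing what cv::compare does here:
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
  // One pixel of each of the four grabCut labels (0, 1, 2, 3).
  cv::Mat result = (cv::Mat_<uchar>(1,4) <<
      cv::GC_BGD, cv::GC_FGD, cv::GC_PR_BGD, cv::GC_PR_FGD);
  cv::compare(result, cv::GC_PR_FGD, result, cv::CMP_EQ);
  std::cout << result << std::endl;  // prints [0, 0, 0, 255]
  return 0;
}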
There are plenty of tutorials showing how to blend two images in OpenCV:
http://opencv.itseez.com/doc/tutorials/core/adding_images/adding_images.html
http://aishack.in/tutorials/transparent-image-overlays-in-opencv/
But all of them are based on this equation:
g(x) = (1 - alpha) * f0(x) + alpha * f1(x)
which means that I will be combining the two images by averaging them, and consequently I'll be losing intensity in both images.
For instance, let alpha = 0.5, f0(x) = 255, and f1(x) = 0. After applying this equation, the result image g(x) = 127. That is not what I need. The first image should remain unchanged. And the transparency must be applied in the second one.
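For reference, that equation is what cv::addWeighted implements; a tiny sketch reproducing the arithmetic above (OpenCV rounds 127.5 to 128, but the intensity loss is the same):
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
  cv::Mat f0(1,1,CV_8UC1, cv::Scalar(255));
  cv::Mat f1(1,1,CV_8UC1, cv::Scalar(0));
  cv::Mat g;
  double alpha = 0.5;
  cv::addWeighted(f0, 1.0 - alpha, f1, alpha, 0.0, g);
  std::cout << (int)g.at<uchar>(0,0) << std::endl;  // 128: f0's intensity is halved
  return 0;
}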
My problem is:
the first image f0(x) should not be changed and an alpha should be applied to the second image f1(x) when it overlays the first image f0(x).
I cannot figure out how to do this. Any help?
Unfortunately, alpha channels are not supported by OpenCV. From the imread documentation:
Note that in the current implementation the alpha channel, if any, is stripped from the output image. For example, a 4-channel RGBA image is loaded as RGB if flags > 0.
See this SO post for a possible workaround using ImageMagick.
Hope that is helpful!