I'm trying to access CImg pixel values to print the pixel intensity under my mouse, as well as to calculate a histogram. However, I get all zeros from the CImg object.
The CImg image is initialized from a memory buffer; it is a 12-bit grayscale image, but padded to 16 bits in memory.
The code below is defined in a function that is called multiple times. I want to refresh the image in the current display rather than produce a new one every time the function is called, so the CImgDisplay is defined outside the function.
#include "include\CImg.h"
int main(){
CImg <unsigned short> image(width,height,1,1);
CImgDisplay disp(image);
//showImg() get called multiple times here
}
void showImg(){
unsigned short* imgPtr = (unsigned short*) (getImagePtr());
CImg <unsigned short> img(imgPtr,width,height);
img*=(65535/4095);//Renormalise from 12 bit input to 16bit for better display
//Display
disp->render(img);
disp->paint();
img*=(4095/65535);//Normalise back to get corect intensities
CImg <float> hist(img.histogram(100));
hist.display_graph(0,3);
//find mouse position and disp intensity
mouseX = disp->mouse_x()*width/disp->width();//Rescale the position of the mouse to true position of the image
mouseY = disp->mouse_y()*height/disp->height();
if (mouseX>0&mouseY>0){
PxIntensity = img(mouseX,mouseY,0,0);}
else {
PxIntensity = -1;}
}
All the intensities I retrieve are zero and the histogram is also zero.
The line img*=(4095/65535); // Normalise back to get correct intensities is incorrect: (4095/65535) evaluates to 0 in C/C++, because integer division of a smaller integer by a larger one truncates to zero.
Maybe img*=(4095/65535.); ? The trailing dot makes the divisor a double, so the division is carried out in floating point.
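A minimal standalone check of the arithmetic (plain C++, nothing CImg-specific):

#include <iostream>

int main() {
    std::cout << (65535 / 4095) << "\n";   // 16: integer division, which is why the upscale appears to work
    std::cout << (4095 / 65535) << "\n";   // 0: so the downscale multiplies every pixel by zero
    std::cout << (4095 / 65535.) << "\n";  // ~0.0625: the trailing dot forces floating-point division
    return 0;
}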
If you just want to scale between 12-bit and 16-bit and back, then using bit-shifts might be better:
img <<= 4;  // Renormalise from 12-bit input to 16-bit for better display
// Display
disp.render(img);
disp.paint();
img >>= 4;  // Normalise back to get correct intensities
So I've generated a binary JPEG image, as shown here:
As far as I know, this image has only 255 or 0 values (I don't know why I find a few pixels with different values at the edges in Paint, but I ignored them since they are so few, and I'm not sure whether they come from the JPEG compression).
My target is to convert this image to 0x00 for the dark spots and 0x01 for the light spots (note: I mean 1 as one, not as 255).
for (int i = 0; i < eroded_dilated_binary.rows; i++)
{
    for (int j = 0; j < eroded_dilated_binary.cols; j++)
    {
        pix = eroded_dilated_binary.at<char>(i, j); // pix is defined as char here
        eroded_dilated_binary.at<char>(i, j) = (pix / 255);
    }
}
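(For reference, a sketch of the same loop with unsigned access, under the assumption that the Mat is CV_8U: at<char> reads 255 back as -1, and -1 / 255 truncates to 0.)

for (int i = 0; i < eroded_dilated_binary.rows; i++)
{
    for (int j = 0; j < eroded_dilated_binary.cols; j++)
    {
        uchar upix = eroded_dilated_binary.at<uchar>(i, j);  // unsigned: 255 is read as 255
        eroded_dilated_binary.at<uchar>(i, j) = upix / 255;  // 255 -> 1, 0 -> 0
    }
}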
The output is here:
I checked the output and deduced that it is all 0's. But how is that possible?
I even tried to divide the first image by 2 and got a non-logical answer. I also tried to subtract some value from each pixel, but still got weird values.
I tried the other syntax shown in this topic, OpenCV: What's the easiest way to divide a Mat by a Scalar, but it gives totally wrong values.
1. What is going on behind the scenes? I don't get how arithmetic operations work here!
2. How do I guarantee that my image contains only 1's and 0's, or only 0's and 255's?
3. When I AND this binary mask with the original image to hide the background, I get the middle of the image correctly, but the body of the bottle shows fluctuations and random pixelation in the middle, and even a Gaussian filter does not smooth it well. Does the ANDing function work in a different way too?
I've written code which detects (white) squares in real time and draws a frame around them. Each side of length l of a square is divided into 7 parts. Then I draw a line of length h = l/7 at each of the six resulting points, perpendicular to the side of the square (blue). The corners are marked in red. It then looks something like this:
For drawing the blue lines and circles I have a 3-channel (CV_8UC3) matrix drawing, which is zero everywhere except at the positions of the red, blue, and white lines. To lay this matrix over my webcam image, I use OpenCV's addWeighted function:
addWeighted(drawing, 1, webcam_img, 1, 0.0, dst); (description of addWeighted here).
But then, as you can see, the colors of my dashes and circles come out wrong outside the black area (probably also not quite right inside the black area, but it looks better there). It makes total sense why this happens, as addWeighted just adds the matrices with a weight.
I'd like to lay the matrix drawing with the correct colors over my image. The problem is that I don't know how to fix it. I somehow need a mask drawing_mask with which my dashes are, sort of, superimposed onto my camera image. In MATLAB it would be something like dst = webcam_img; dst(drawing>0) = drawing(drawing>0);
Does anyone have an idea how to do this in C++?
1. Custom version
I would write it explicitly:
const int cols = drawing.cols;
const int rows = drawing.rows;
for (int j = 0; j < rows; j++) {
    const uint8_t* p_draw = drawing.ptr(j); // Pointer to the j-th row of the image to be drawn
    uint8_t* p_dest = webcam_img.ptr(j);    // Pointer to the j-th row of the destination image
    for (int i = 0; i < cols; i++) {
        // Check all three channels (BGR)
        if (p_draw[0] | p_draw[1] | p_draw[2]) { // Using binary OR should ease the optimization work for the compiler
            p_dest[0] = p_draw[0]; // If the pixel is not zero,
            p_dest[1] = p_draw[1]; // copy it (overwrite) into the destination image
            p_dest[2] = p_draw[2];
        }
        p_dest += 3; // Move to the next pixel
        p_draw += 3;
    }
}
Of course, you can move this code into a function with arguments (const cv::Mat& drawing, cv::Mat& webcam_img).
2. OpenCV "purist" version
But the pure OpenCV way would be the following:
cv::Mat mask;
// Create a single-channel image where each pixel is != 0 if it is colored in your "drawing" image
cv::cvtColor(drawing, mask, CV_BGR2GRAY);
// Copy to the destination image only the pixels that are != 0 in the mask
drawing.copyTo(webcam_img, mask);
Less efficient (the color conversion used to create the mask is somewhat expensive), but certainly more compact. Small note: it won't work if you have a very dark color, like (0,0,1), which converts to 0 in grayscale.
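If that corner case matters, a sketch of one workaround is to build the mask as "any channel non-zero" instead of going through grayscale, so dark-but-colored pixels survive:

std::vector<cv::Mat> channels;
cv::split(drawing, channels);     // Separate the B, G, R planes
cv::Mat nzB = (channels[0] != 0); // 255 where the blue channel is non-zero
cv::Mat nzG = (channels[1] != 0);
cv::Mat nzR = (channels[2] != 0);
cv::Mat mask = nzB | nzG | nzR;   // A pixel is kept if any of its channels is non-zero
drawing.copyTo(webcam_img, mask);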
Also note that it might be cheaper to simply redraw the same overlays (lines, circles) onto your destination image, repeating the draw calls you used to create your drawing image, for example:
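(The coordinates and colors here are hypothetical; the point is that the draw calls target webcam_img directly.)

cv::Point p1(10, 10), p2(60, 10);                                        // hypothetical dash endpoints
cv::line(webcam_img, p1, p2, cv::Scalar(255, 0, 0), 2);                  // blue dash, drawn straight onto the camera frame
cv::circle(webcam_img, cv::Point(10, 10), 3, cv::Scalar(0, 0, 255), -1); // filled red corner marker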
I need to apply a gradient operator to an RGB bitmap image. It works for an 8-bit image, but I'm having difficulty implementing the same for a 24-bit image. Here is my code. Can anyone see how to correct the horizontal gradient operation for an RGB image?
if (iBitPerPixel == 24) // RGB 24-bit image
{
    for (int i = 0; i < iHeight; i++)
        for (int j = 1; j < iWidth-4; j++)
        {
            //pImg_Gradient[i*Wp+j] = pImg[i*Wp+j+1] - pImg[i*Wp+j-1];
            int level = pImg[i*Wp+j*3+1] - pImg[i*Wp+j*3-1];
            pImg_Gradient[i*Wp+j*3] = level;
            // pImg_Gradient[i*Wp+j*3] = level;
            // pImg_Gradient[i*Wp+j*3+1] = level;
            // pImg_Gradient[i*Wp+j*3+2] = level;
        }
    for (int i = 0; i < iHeight; i++)
        for (int j = 0; j < iWidth; j++)
        {
            // Copy the converted values to the original image.
            pImg[i*Wp+j] = (BYTE) pImg_Gradient[i*Wp+j];
        }
    //delete pImg_Gradient;
}
Unfortunately, it is not clear how to define the gradient of an RGB image. The best way to go is to transform the image into a color space that separates intensity from color, such as HSV, and compute the gradient of the intensity component. Alternatively, you can compute the gradient of each color channel separately and then combine the results in some way, such as taking the average.
Also see Edge detectors for RGB images?
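A short sketch of the first approach in OpenCV (an assumption, since the original code works on raw buffers): take the intensity channel of an HSV conversion and differentiate it.

#include <opencv2/opencv.hpp>

cv::Mat hsv, gradX;
std::vector<cv::Mat> channels;
cv::cvtColor(bgrImage, hsv, CV_BGR2HSV);     // bgrImage: a hypothetical CV_8UC3 input
cv::split(hsv, channels);                    // channels[2] is V, the intensity component
cv::Sobel(channels[2], gradX, CV_16S, 1, 0); // horizontal derivative of the intensity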
In order to calculate the gradient of an image (which is a vector) you need to calculate both the horizontal and vertical derivatives of the image.
Since we're dealing with a discrete image, we should use finite-difference approximations of the derivative.
There are many ways to approximate it; many of them are listed on the Wikipedia pages:
http://en.wikipedia.org/wiki/Finite_difference
http://en.wikipedia.org/wiki/Finite_difference_method
http://en.wikipedia.org/wiki/Finite_difference_coefficients
Basically, those are spatial coefficients, hence you can define a filter using them and just filter the image.
This would be the most efficient way to calculate the gradient.
So all you need is to find a library (such as OpenCV) which supports filtering images, and you're done.
For color images you usually just calculate the gradient per color channel, as in the sketch below.
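A minimal sketch of that filtering idea in OpenCV (the central-difference kernel is one choice among the listed coefficients; filter2D applies it to each color channel independently):

#include <opencv2/opencv.hpp>

// Central-difference kernel [-1/2, 0, 1/2] for the horizontal derivative
cv::Mat kernel = (cv::Mat_<float>(1, 3) << -0.5f, 0.f, 0.5f);
cv::Mat gradX;
cv::filter2D(rgbImage, gradX, CV_32F, kernel); // rgbImage: hypothetical CV_8UC3 input; one gradient per channel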
Good Luck.
From your code, you are trying to calculate the gradient from RGB, but there is nothing to indicate how the RGB data is stored in your image. A complete guess is that your image holds BGRBGRBGR... etc.
In that case, your code is taking the gradient of the green channel and storing it in the red channel of the gradient image. You also don't show the gradient image being cleared to 0; if you don't do this, it will probably be full of junk.
My suggestion is to convert to a greyscale image first; then you can use your original code.
Or calculate a gradient for each colour channel, as sketched below.
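A sketch of the per-channel variant on the raw buffer, reusing the question's names (which are assumptions here) with BGR interleaving and Wp as the row stride in bytes:

for (int i = 0; i < iHeight; i++)
    for (int j = 1; j < iWidth-1; j++)
        for (int c = 0; c < 3; c++)
        {
            int idx = i*Wp + j*3 + c;               // same channel of the left/right neighbours
            int level = pImg[idx+3] - pImg[idx-3];  // horizontal central difference, range [-255, 255]
            pImg_Gradient[idx] = (level + 255) / 2; // one way to map it into the displayable [0, 255]
        }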
The plan
My project is able to capture the bitmap of a target window, convert it into an IplImage, and then display that image in a cvNamedWindow, where further processing can take place.
For the sake of testing, I've loaded an image into MSPaint like so:
The user is then allowed to click and drag the mouse over any number of pixels within the image, creating a vector<cv::Scalar_<BYTE>> containing these RGB color values.
Then, with the help of ColorRGBToHLS(), this array is sorted from left to right by hue, like so:
// PixelColor is just a cv::Scalar_<BYTE>
bool comparePixelColors(PixelColor& pc1, PixelColor& pc2) {
    WORD h1 = 0, h2 = 0;
    WORD s1 = 0, s2 = 0;
    WORD l1 = 0, l2 = 0;
    ColorRGBToHLS(RGB(pc1.val[2], pc1.val[1], pc1.val[0]), &h1, &l1, &s1);
    ColorRGBToHLS(RGB(pc2.val[2], pc2.val[1], pc2.val[0]), &h2, &l2, &s2);
    return (h1 < h2);
}

//..(elsewhere in code)
std::sort(m_colorRange.begin(), m_colorRange.end(), comparePixelColors);
...and then shown in a new cvNamedWindow, which looks something like:
The problem
Now, the idea here is to create a binary threshold image (or "mask") in which this selected range of colors becomes white and the rest of the source image becomes black, similar to the way the "Select By Color" tool operates in GIMP or the "magic wand" tool works in Photoshop, except that instead of limiting ourselves to a specific contoured selection, we are operating on the image as a whole.
I've read up on cvInRangeS, and it sounds like precisely what I need.
However, for whatever reason, the thresholded image always ends up totally black...
VOID ShowThreshedImage(const IplImage* src, const PixelColor& min, const PixelColor& max)
{
    IplImage* imgHSV = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 3);
    cvCvtColor(src, imgHSV, CV_RGB2HLS);

    cvNamedWindow("T1");
    cvShowImage("T1", imgHSV); // <-- Shows up like the image below

    IplImage* imgThreshed = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
    cvInRangeS(imgHSV, min, max, imgThreshed);

    cvNamedWindow("T2");
    cvShowImage("T2", imgThreshed); // <-- SHOWS UP PITCH BLACK!
}
This is what the "T1" window ends up looking like (which I suppose is correct?):
Bearing in mind that the color range vector is stored as RGB (and that OpenCV internally reverses this order into BGR), I have converted the min/max values into HLS before passing them into ShowThreshedImage(), like so:
CvScalar rgbPixelToHSV(const PixelColor& pixelColor)
{
    WORD h = 0, s = 0, l = 0;
    ColorRGBToHLS(RGB(pixelColor.val[2], pixelColor.val[1], pixelColor.val[0]), &h, &l, &s);
    return PixelColor(h, s, l);
}

//...(elsewhere in code)
if (m_colorRange.size() > 0)
    m_minHSV = rgbPixelToHSV(m_colorRange[0]);
if (m_colorRange.size() > 1)
    m_maxHSV = rgbPixelToHSV(m_colorRange[m_colorRange.size() - 1]);

ShowThreshedImage(m_imgSrc, m_minHSV, m_maxHSV);
...But even without this conversion, simply passing RGB values instead, the result is still an entirely black image. I've even tried manually plugging in certain min/max values, and the best result I got was a few lit pixels (albeit the incorrect ones).
The question:
What am I doing wrong here?
Is there something that I don't understand about the cvInRangeS method?
Do I need to step through each and every single color in order to properly threshold the selected range out of the source image?
Are there any other ways of accomplishing this?
Thank you for your time.
Update:
I have discovered that cvInRangeS expects every component of min to be lower than the corresponding component of max. But when a range of colors is selected, there doesn't appear to be any guarantee that this will be the case, which often results in a black thresholded image.
And swapping values to enforce this rule may bring unwanted colors into the new range (in some cases, this could include all colors instead of just the desired ones).
So I suppose the real question here would be:
"How do you segment an array of RGB colors, and use them to threshold an image?"
Your problem might be caused by the simple fact that OpenCV maintains different value ranges than, for instance, MS Paint. For example, the HSV color space in Paint spans 360, 100, 100, while in OpenCV it is 180, 255, 255. Check your input values in OpenCV by outputting the pixel value when clicking on a certain pixel. cvInRangeS should be the correct tool for the job. That said, in RGB it should work just as well, because the range is the same as in Paint.
cvSetMouseCallback("MyWindow", mouseEvent, (void*) &myImage);

void mouseEvent(int evt, int x, int y, int flags, void *param) {
    if (evt == CV_EVENT_LBUTTONDOWN) {
        printf("%d %d\n", x, y);
        IplImage* imageSource = (IplImage*) param;
        Mat image(imageSource);
        cout << "Image cols " << image.cols << " rows " << image.rows << endl;
        Mat imageHSV;
        cvtColor(image, imageHSV, CV_BGR2HSV);
        Vec3b p = imageHSV.at<Vec3b>(y, x);
        char text[20];
        sprintf(text, "H=%d, S=%d, V=%d", p[0], p[1], p[2]);
        cout << text << endl;
    }
}
Once you have an idea of the HSV values obtained this way, use them as lower and upper bounds for the inRange method after converting the image to HSV using cvtColor(image, imageHSV, CV_BGR2HSV). That should enable you to get the desired result, for example:
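(The bounds below are hypothetical placeholders; substitute the H, S, V values printed by the mouse callback above, remembering that OpenCV's H channel runs 0..180.)

Mat imageHSV, mask;
cvtColor(image, imageHSV, CV_BGR2HSV);
// mask is white where the pixel lies inside the per-channel bounds
inRange(imageHSV, Scalar(100, 50, 50), Scalar(120, 255, 255), mask);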
It is not going to be too inefficient to iterate through every pixel; that is exactly what cvInRangeS would do. See this: http://docs.opencv.org/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-efficient-way (I do this all the time and it is instantaneous for reasonable-size images).
I would treat the colors in the array as points in 3D RGB space. Find two color points that specify a prism that includes all the other color points; that is just finding the min and max of all the r, g, and b values. If this idea is not acceptable, then you might have to check every image pixel against every pixel in the vector.
Then, for each pixel in the image: the result is black if (pixel.r < min.r) || (pixel.r > max.r) || (pixel.g < min.g) || (pixel.g > max.g) || (pixel.b < min.b) || (pixel.b > max.b); otherwise the result is the pixel value.
This all should be very easy, so long as it is actually what you want.
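A sketch of that min/max prism idea with the question's types (assuming m_colorRange holds the selected cv::Scalar_<BYTE> values in BGR order, and using the C++ API's inRange in place of cvInRangeS):

#include <algorithm>

cv::Scalar lo(255, 255, 255), hi(0, 0, 0);
for (size_t n = 0; n < m_colorRange.size(); n++)
    for (int k = 0; k < 3; k++)
    {
        lo[k] = std::min(lo[k], (double)m_colorRange[n][k]); // per-channel minimum
        hi[k] = std::max(hi[k], (double)m_colorRange[n][k]); // per-channel maximum
    }

cv::Mat mask;
cv::inRange(bgrImage, lo, hi, mask); // bgrImage: hypothetical CV_8UC3 source; mask is white inside the prism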