How to access the elements of a single-channel IplImage in OpenCV - C++

How can I access the elements of an IplImage (single channel, IPL_DEPTH_8U depth)?
I want to change the pixel value at a particular (x, y) position in the image.

OpenCV provides the CV_IMAGE_ELEM macro to access the elements of an IplImage. It is defined as:
#define CV_IMAGE_ELEM( image, elemtype, row, col ) \
    (((elemtype*)((image)->imageData + (image)->widthStep*(row)))[(col)])
The second parameter is the element type; for an IPL_DEPTH_8U image that is unsigned char.

Pixels are stored in the imageData array.
So, since your image is single channel, you just have to do:
myimage.imageData[y*myimage.widthStep + x] = 100;
This gives the right offset from the beginning of the buffer (widthStep accounts for any row padding; for an unpadded single-channel image it equals width), and it's more readable than any other pointer algebra operation.
For N-channel interleaved images it's enough to multiply the column index by N and add the index of the channel you want,
e.g. for an OpenCV BGR image:
myimage.imageData[y*myimage.widthStep + 3*x + 0] = 100; // Blue
myimage.imageData[y*myimage.widthStep + 3*x + 1] = 100; // Green
myimage.imageData[y*myimage.widthStep + 3*x + 2] = 100; // Red
Any optimization that avoids recomputing the index multiplication can be applied, depending on the goal you want to achieve.

The fast way to get a pixel value is to use the macro:
CV_IMAGE_ELEM( image_header, elemtype, y, x_Nc )
where, for a multi-channel image, the last argument is the pixel's x index times the channel count plus the channel index. In your case the image is single channel, so you can get (or set) the pixel at row i, column j with:
CV_IMAGE_ELEM(image, unsigned char, i, j)
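Putting the two answers together, a minimal sketch (the image size and the (x, y) position are just placeholders) showing both ways of touching a pixel in an 8-bit, single-channel IplImage:
#include <opencv2/core/core_c.h>
#include <cstdio>

int main()
{
    // Hypothetical 8-bit, single-channel image, zero-initialized.
    IplImage* img = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 1);
    cvZero(img);

    int x = 10, y = 20;

    // Write through the macro (row first, then column), then read it back.
    CV_IMAGE_ELEM(img, unsigned char, y, x) = 100;
    unsigned char v1 = CV_IMAGE_ELEM(img, unsigned char, y, x);

    // Equivalent manual indexing through imageData, using widthStep
    // so that any row padding is taken into account.
    unsigned char v2 = ((unsigned char*)(img->imageData + y * img->widthStep))[x];

    printf("%d %d\n", v1, v2); // prints: 100 100
    cvReleaseImage(&img);
    return 0;
}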

Related

OpenCV - RGB Channels in Float Data Type and Intensity Range within 0-255

How can I get the values of the RGB channels
as a float data type
with an intensity range within 0-255?
I used CV_32FC4 as the matrix type since I'll perform floating-point mathematical operations to implement Daltonization. I was expecting the intensity range to be the same as that of the RGB channels in CV_8UC3, just with a different data type. But when I printed the matrix, I noticed that the channel intensities are not within 0-255. I realized that this is due to the float matrix type.
Mat mFrame(height, width, CV_32FC4, (unsigned char *)pNV21FrameData);
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        Vec4f BGRA = mFrame.at<Vec4f>(y, x);
        // Algorithm Implementation
        mFrame.at<Vec4f>(y, x) = BGRA;
    }
}
Mat mResult;
mFrame.convertTo(mResult, CV_8UC4, 1.0/255.0);
I need to manipulate the pixels like BGRA[0] = BGRA[0] * n; then assign it back to the matrix.
From your comments and the link in them, I see that the data comes in as BGRA and is stored as uchar.
I assume this from this line:
Mat mResult(height, width, CV_8UC4, (unsigned char *)poutPixels);
To solve this you can create the matrix and then convert it to float.
Mat mFrame(height, width, CV_8UC4, (unsigned char *)pNV21FrameData);
Mat mFloatFrame;
mFrame.convertTo(mFloatFrame, CV_32FC4);
Notice that this keeps the current range (0-255); if you need a different one (like 0-1) you can pass a scaling factor to convertTo.
Finally you can convert back, but beware that this function does a saturate_cast. If you have a specific way you want to handle the overflow or the fractional part, you will have to do it before converting.
Mat mResult;
mFloatFrame.convertTo(mResult, CV_8UC4);
Note that the 1.0/255.0 factor is not there, since the data is already in the range 0-255 (at least before the operations).
One final comment: the link in your comments uses IplImage and other old, deprecated C versions of OpenCV. If you are working in C++, stick to the C++ types like Mat. This is not an issue in the code you show here, but it is in the code you linked. This comment is more for you, to avoid future headaches.
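Putting the pieces together, a minimal sketch of the whole round trip (pNV21FrameData, height, and width are assumed to come from your capture code, and the per-channel manipulation is only an example):
#include <opencv2/core/core.hpp>

// Wrap the incoming uchar BGRA buffer without copying, then convert to float.
cv::Mat mFrame(height, width, CV_8UC4, (unsigned char *)pNV21FrameData);
cv::Mat mFloatFrame;
mFrame.convertTo(mFloatFrame, CV_32FC4);            // values stay in 0-255

for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        cv::Vec4f BGRA = mFloatFrame.at<cv::Vec4f>(y, x);
        BGRA[0] *= 0.5f;                            // example manipulation of the blue channel
        mFloatFrame.at<cv::Vec4f>(y, x) = BGRA;
    }
}

// Convert back; convertTo saturate_casts, so anything outside 0-255 is clipped.
cv::Mat mResult;
mFloatFrame.convertTo(mResult, CV_8UC4);            // no 1.0/255.0 here, the range is already 0-255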

Thresholding a range of colors from an image

The plan
My project is able to capture the bitmap of a target window and convert it into an IplImage, and then display that image in a cvNamedWindow, where further processing can take place.
For the sake of testing, I've loaded an image into MSPaint like so:
The user is then allowed to click and drag the mouse over any number of pixels within the image to create a vector<cv::Scalar_<BYTE>> containing these RGB color values.
Then, with the help of ColorRGBToHLS(), this array is then sorted from left to right by hue, like so:
// PixelColor is just a cv::Scalar_<BYTE>
bool comparePixelColors( PixelColor& pc1, PixelColor& pc2 ) {
    WORD h1 = 0, h2 = 0;
    WORD s1 = 0, s2 = 0;
    WORD l1 = 0, l2 = 0;
    ColorRGBToHLS(RGB(pc1.val[2], pc1.val[1], pc1.val[0]), &h1, &l1, &s1);
    ColorRGBToHLS(RGB(pc2.val[2], pc2.val[1], pc2.val[0]), &h2, &l2, &s2);
    return ( h1 < h2 );
}
//..(elsewhere in code)
std::sort(m_colorRange.begin(), m_colorRange.end(), comparePixelColors);
...and then shown in a new cvNamedWindow, which looks something like:
The problem
Now, the idea here is to create a binary threshold image (or "mask") where this selected range of colors become white, and the rest of the source image becomes black... similar to the way the "Select By Color" tool operates in GIMP, or the "magic wand" tool works in Photoshop... except instead of limiting ourselves to a specific contoured selection, we are literally operating on the image as a whole.
I've read into cvInRangeS, and it sounds like it's precisely what I need.
However, for whatever reason, the thresholded image always ends up completely black...
VOID ShowThreshedImage(const IplImage* src, const PixelColor& min, const PixelColor& max)
{
    IplImage* imgHSV = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 3);
    cvCvtColor(src, imgHSV, CV_RGB2HLS);

    cvNamedWindow("T1");
    cvShowImage("T1", imgHSV); // <-- Shows up like the image below

    IplImage* imgThreshed = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
    cvInRangeS(imgHSV, min, max, imgThreshed);

    cvNamedWindow("T2");
    cvShowImage("T2", imgThreshed); // <-- SHOWS UP PITCH BLACK!
}
This is what the "T1" window ends up looking like (which I suppose is correct?):
Bearing in mind that the color range vector is stored as RGB (and that OpenCV internally reverses this order into BGR), I have converted the min/max values into HLS before passing them into ShowThreshedImage() like so:
CvScalar rgbPixelToHSV(const PixelColor& pixelColor)
{
    WORD h = 0, s = 0, l = 0;
    ColorRGBToHLS(RGB(pixelColor.val[2], pixelColor.val[1], pixelColor.val[0]), &h, &l, &s);
    return PixelColor(h, s, l);
}
//...(elsewhere in code)
if (m_colorRange.size() > 0)
    m_minHSV = rgbPixelToHSV(m_colorRange[0]);
if (m_colorRange.size() > 1)
    m_maxHSV = rgbPixelToHSV(m_colorRange[m_colorRange.size() - 1]);
ShowThreshedImage(m_imgSrc, m_minHSV, m_maxHSV);
...But even without this conversion and simply passing RGB values instead, the result is still an entirely black image. I've even tried manually plugging in certain min/max values, and the best result I got was a few lit pixels (albeit, the incorrect ones).
The question:
What am I doing wrong here?
Is there something that I don't understand about the cvInRangeS method?
Do I need to step through each and every single color in order to properly threshold the selected range out of the source image?
Are there any other ways of accomplishing this?
Thank you for your time.
Update:
I have discovered that cvInRangeS expects every channel value of min to be lower than the corresponding value of max. But when a range of colors is selected, there doesn't appear to be any guarantee that this will be the case, often resulting in a black thresholded image.
And swapping values to enforce this rule may result in unwanted colors within the new range (in some cases, this could include all colors instead of just the desired ones).
So I suppose the real question here would be:
"How do you segment an array of RGB colors, and use them to threshold an image?"
Your problem might be caused by the simple fact that OpenCV maintains different value ranges than, for instance, MSPaint. For example, the HSV color space in Paint tops out at 360,100,100 while in OpenCV it is 180,255,255. Check your input values in OpenCV by printing the pixel value when clicking on a certain pixel; inRangeS should be the correct tool for the job. That said, in RGB it should work just as well because the range is the same as in Paint.
cvSetMouseCallback("MyWindow", mouseEvent, (void*) &myImage);
void mouseEvent(int evt, int x, int y, int flags, void *param) {
if (evt == CV_EVENT_LBUTTONDOWN) {
printf("%d %d\n", x, y);
IplImage* imageSource = (IplImage*) param;
Mat image(imageSource);
cout << "Image cols " << image.cols << " rows " << image.rows << endl;
Mat imageHSV;
cvtColor(image, imageHSV, CV_BGR2HSV);
Vec3b p = imageHSV.at<Vec3b > (y, x);
char text[20];
sprintf(text, "H=%d, S=%d, V=%d", p[0], p[1], p[2]);
cout << text << endl;
}
}
Once you have an idea of the HSV values by using this callback, use them as lower and upper bounds for the inRange method after converting the image to HSV with cvtColor(image, imageHSV, CV_BGR2HSV). That should let you get the desired result, as sketched below.
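A minimal sketch of that last step (the lower and upper Scalar values are placeholders you would replace with the bounds found via the callback; image is the BGR source):
cv::Mat imageHSV, mask;
cv::cvtColor(image, imageHSV, CV_BGR2HSV);
cv::inRange(imageHSV,
            cv::Scalar(20, 100, 100),   // lower bound (H, S, V) - placeholder values
            cv::Scalar(40, 255, 255),   // upper bound (H, S, V) - placeholder values
            mask);                      // mask is 255 where the pixel falls in range, 0 elsewhere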
It is not going to be too inefficient to iterate through every pixel. That is exactly what cvInRangeS does - see this: http://docs.opencv.org/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-efficient-way (I do this all the time and it is instantaneous for reasonably sized images).
I would treat the colors in the array as points in 3D RGB space. Find two color points that specify a prism that includes all other color points. That is just finding the min and max of all r, g, and b values. If this idea is not ok then you might have to check every image pixel against every pixel in the vector.
Then for each pixel in the image: result is black if (pixel.r < min.r) || (pixel.r > max.r) || (pixel.g < min.g) || (pixel.g > max.g) || (pixel.b < min.b) || (pixel.b > max.b), result is the pixel value otherwise.
This all should be very easy, so long as it is actually what you want.
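A minimal sketch of that idea using the C++ API (assuming m_colorRange and m_imgSrc from the question, and that the selected colors are stored in the same channel order as the image; inRange does the per-pixel comparison described above, and computing the bounds this way also sidesteps the min/max ordering issue from the update):
#include <algorithm>
#include <opencv2/core/core.hpp>

// Per-channel bounding box of the selected colors.
cv::Scalar lower(255, 255, 255), upper(0, 0, 0);
for (size_t i = 0; i < m_colorRange.size(); ++i) {
    for (int c = 0; c < 3; ++c) {
        lower[c] = std::min(lower[c], (double)m_colorRange[i][c]);
        upper[c] = std::max(upper[c], (double)m_colorRange[i][c]);
    }
}

// White where the pixel lies inside the box, black elsewhere.
cv::Mat src(m_imgSrc);   // wraps the IplImage without copying
cv::Mat mask;
cv::inRange(src, lower, upper, mask);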

How do I determine if a pixel is black or white in OpenCV?

I have this code in Python:
width = cv.GetSize(img_otsu)[0]
height = cv.GetSize(img_otsu)[1]
#print width,":",height
for y in range(height):
    for x in range(width):
        if(img_otsu[y,x]==(255.0)):
            CountPixelW+=1
        if(img_otsu[y,x]==(0.0)):
            CountPixelB+=1
I want to convert this Python code to C++
This is what I have so far:
cv::threshold(img_gray, img_otsu, 0.0, 255.0, cv::THRESH_BINARY + cv::THRESH_OTSU);
for (int y = 0; y < img_otsu.size().height; y++)
    for (int x = 0; x < img_otsu.size().width; x++)
    {
        // Check Pixel 0 or 255 This is Problem
    }
How do I check if the pixel is black or white in C++?
You can use the at() function for Mat objects (see OpenCV docs).
img_otsu.at<uchar>(y,x) will return the value of the element in the matrix at that position. Note that you may have to change uchar to whatever type of matrix img_otsu is (e.g., float or double). Once you get the value, simply compare it to 0 or 255.
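A minimal sketch of the completed loop (assuming img_otsu is the CV_8U result of the threshold call above):
int CountPixelW = 0, CountPixelB = 0;
for (int y = 0; y < img_otsu.rows; y++) {
    for (int x = 0; x < img_otsu.cols; x++) {
        uchar v = img_otsu.at<uchar>(y, x);   // row first, then column
        if (v == 255) CountPixelW++;          // white pixel
        if (v == 0)   CountPixelB++;          // black pixel
    }
}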

converting float to unsigned char in OpenCV

I have designed a filter in the form of a horizontal 1D vector using OpenCV and C++. The vector consists of float data. The original uchar data of the grayscale image is multiplied with this float vector as a 1 dimensional window to obtain the result. However, I am not getting proper results.
When the vector elements are multiplied with the image pixel values, they exceed the range 0-255, and I think this is causing problems.
Is there any way to typecast this float data into uchar to get proper results?
I'm using Img.at<uchar> = (uchar)(floatVector) right now.
Thanks
I suggest you typecast after you have multiplied: convert your uchar image matrix to CV_32FC1 (since you say it's a grayscale image, the channel count is 1), do the convolution of the image with your filter, then typecast the values back to uchar, e.g. for displaying.
You want to carry out the multiplication in the float type, and only at the end convert back to unsigned char. Don't forget to also have your float vector normalized (all values add up to 1).
So basically you want
Data.at(coordinates) = (unsigned char)
    (floatVector(0)*Data.at(coord0) + ... + floatVector(last)*Data.at(coordLast));
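For instance, a minimal sketch of that approach using filter2D (the file name and kernel values are placeholders; a real filter would be designed for your application and, as noted above, typically normalized):
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Grayscale uchar input image.
cv::Mat img = cv::imread("input.png", CV_LOAD_IMAGE_GRAYSCALE);

// 1x3 horizontal float kernel (placeholder values that sum to 1).
cv::Mat kernel = (cv::Mat_<float>(1, 3) << 0.25f, 0.5f, 0.25f);

// Filter in float, then convert back; convertTo saturate_casts to 0-255.
cv::Mat imgFloat, resultFloat, result;
img.convertTo(imgFloat, CV_32FC1);
cv::filter2D(imgFloat, resultFloat, -1, kernel);   // -1 keeps the float depth
resultFloat.convertTo(result, CV_8UC1);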

getting Y value[Ycbcr] of one Pixel in opencv

I'm trying to get the Y value of a pixel from a frame that is in YCbCr color mode.
Here is what I wrote:
cv::Mat frame, yCbCrFrame, helpframe;
........
cvtColor(frame, yCbCrFrame, CV_RGB2YCrCb); // converting to YCbCr
Vec3b intensity = yCbCrFrame.at<uchar>(YPoint);
uchar yv = intensity.val[0]; // I thought this was my Y value, but it isn't; I think it gives me the blue channel of the RGB color space instead
Any idea what the correct way to do this is?
What about the following code?
Vec3b Y_pix = yCbCrFrame.at<Vec3b>(row, col);
int pixelval = Y_pix[0];
(P.S. I haven't tried it yet.)
You need to know both the depth (numerical format and precision of channel sample) as well as the channel count (typically 3, but can also be 1 (monochrome) or 4 (alpha-containing)), ahead of time.
For 3-channel, 8-bit unsigned integer (a.k.a. byte or uchar) pixel format, each pixel can be accessed with
mat8UC3.at<cv::Vec3b>(pt);
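A minimal sketch putting this together for the question's setup (assuming frame is an 8-bit, 3-channel BGR image and YPoint is the cv::Point of interest):
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

cv::Mat yCbCrFrame;
cv::cvtColor(frame, yCbCrFrame, CV_BGR2YCrCb);   // note: OpenCV images are usually BGR, not RGB

// The converted frame is CV_8UC3, so read it as Vec3b; channel 0 is Y, 1 is Cr, 2 is Cb.
cv::Vec3b ycrcb = yCbCrFrame.at<cv::Vec3b>(YPoint);
uchar yv = ycrcb[0];   // the Y value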