Checking if a pixel is black in a grayscale image (OpenCV) - c++

I need to figure out whether a given pixel is black or white in a grayscale image that I have already put through a thresholding algorithm. The result is essentially blobs of black on a white background.
Mat falsetest;
...
cv::cvtColor(detected_edges, falsetest, CV_BGR2GRAY);
threshold(falsetest, falsetest,128, 255,THRESH_BINARY);
...
printf("x:%d y:%d %d\n",x,y,falsetest.at<uchar>(x,y));
I expected the results to be either 0 or 255; however, that is not the case. The output for different pixels looks something like this:
x:1259 y:175 111
x:1243 y:189 184
x:1229 y:969 203
x:293 y:619 255
x:1123 y:339 183
Am I going about this the wrong way, or does the error lie elsewhere?

Are you sure that falsetest contains uchar pixels and not floats? In that case, you would need to access the values of falsetest with:
falsetest.at<float>(x,y)
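A quick way to check is to inspect the Mat's type before picking the accessor (a sketch, not part of the original answer):
// Confirm the element type before choosing the template argument for at<>().
CV_Assert(falsetest.type() == CV_8UC1 || falsetest.type() == CV_32FC1);
printf("depth=%d channels=%d\n", falsetest.depth(), falsetest.channels());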

The OpenCV code looks fine. However, you are passing a uchar to printf with %d.
Either use %hhu or cast the value explicitly:
static_cast<int>(falsetest.at<uchar>(x, y))
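For example (a sketch using the variable names from the question; note that Mat::at takes the row index first, i.e. (y, x)):
printf("x:%d y:%d value:%d\n", x, y, static_cast<int>(falsetest.at<uchar>(y, x)));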

I have finally figured out what the problem was. I thought that when I called
cv::cvtColor(detected_edges, falsetest, CV_BGR2GRAY);
all the matrix data was copied to falsetest. However, that turned out not to be the case: when I later modified detected_edges, falsetest was affected as well. Cloning the matrix solved the problem.
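A minimal sketch of the clone-based fix (the exact placement in the original code is not shown, so this is only illustrative):
// clone() allocates new pixel data, so the copy is independent of the Mat it came from.
cv::Mat snapshot = falsetest.clone();
// Later modifications to detected_edges or falsetest leave snapshot untouched.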

Related

Vignetting correction on RGB image with OpenCV

First of all: I'm new to opencv :-)
I want to perform vignetting correction on a 24-bit RGB image. I used an area scan camera as a line camera and stitched an image together from 1780x2 px parts to get a complete image of 1780x3000 px. Because of the vignetting, I took a white reference picture of 1780x2 px to calculate a LUT (with the correction factors in it) for the vignetting removal. Here is my code idea:
Mat white = imread("WHITE_REF_2L.bmp", 0);
Mat lut(2, 1780, CV_8UC3, Scalar(0));
lut = 255 / white;
imwrite("lut_test.bmp", lut*white);
As I understand it, what the second-to-last line will (hopefully) do is divide 255 by every intensity value of every channel and store the result in the lut matrix.
I then want to use that LUT to calculate the "real" (undistorted) intensity level of each pixel by multiplying every element of the source image with the corresponding element of the LUT matrix.
Obviously it is not working the way I intended; I get a memory exception.
Can anybody help me with this problem?
Edit: I'm using OpenCV 3.1.0 and I solved the problem like this:
// read white reference image
Mat white = imread("WHITE_REF_2L_D.bmp", IMREAD_COLOR);
white.convertTo(white, CV_32FC3);
// calculate LUT with vignetting correction factors
Mat vLUT(2, 1780, CV_32FC3, Scalar(0.0f));
divide(240.0f, white, vLUT);
Of course that's not optimal; I will read in more white references and calculate the mean value to improve it.
Here is the two-line white reference; you can see the shading at the image borders that I want to correct.
When I multiply vLUT with the white reference, I obviously get a homogeneous image as the result.
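For reference, applying the LUT to one of the image strips might look roughly like this (a sketch; the file name, variable names, and the final 8-bit conversion are assumptions not taken from the post):
Mat frame = imread("scan_part.bmp", IMREAD_COLOR);   // one 1780x2 strip (file name assumed)
frame.convertTo(frame, CV_32FC3);
Mat corrected;
multiply(frame, vLUT, corrected);                    // element-wise: pixel * correction factor
corrected.convertTo(corrected, CV_8UC3);             // back to 24-bit for saving
imwrite("corrected.bmp", corrected);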
Thanks, maybe this can help someone else ;)

Filter away grayscale colors, remove light gray, keep black

I have a Mat which is a frame containing an image of grayscale objects. I want to turn everything in this image that is light gray into white, more precisely anything lighter than R:50 G:50 B:50 (I'm not the best with color scales, but roughly: make gray objects white and keep everything that is almost black).
The image was converted to grayscale with CV_BGR2GRAY.
I have tried to use inRange() etc. but I don't really understand how to use the channels, therefore an example with some very basic explanation is highly appreciated!
The inRange function takes the source image plus two parameters you should know about, a lower bound and an upper bound, which are just 3-element scalars containing the BGR values you want the pixels to lie between.
So in your case, you would call it like this:
inRange(src, Scalar(0, 0, 0), Scalar(50, 50, 50), dest);
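If the frame has already been converted to a single-channel grayscale image, the bounds become single values; a hedged sketch of the whole step (variable names are mine):
Mat darkMask;
inRange(gray, Scalar(0), Scalar(50), darkMask);      // 255 where 0 <= pixel <= 50, else 0
Mat result(gray.size(), gray.type(), Scalar(255));   // start with an all-white image
gray.copyTo(result, darkMask);                       // copy back only the near-black pixels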

Multiply Images In OpenCV & Apply Laplacian Filter On It

In my previous question (here's the link), I followed the answer and obtained the desired image, which is white flood-filled.
Now, after applying the morphological erosion operation to the white flood-filled image, I get the new masked image.
Your answer helped a lot. What I am trying to do now is multiply the new masked image with the original grayscale image in order to get the vein pattern. But the result is the same image I get after performing erosion on the white flood-filled image. After completing this step I have to apply the Laplacian to get the vein pattern. I am attaching the original image and the result image that I want. I hope you will look into the matter.
Original Image.
Result Image.
If I understand you correctly, you only want to extract the veins from the grayscale hand image, right? To do something like this, you would multiply the two, as in:
finalimg = grayimg * veinmask;
If you have already done the above, it would be more helpful to post a portion of your code so that people here can point out what's wrong; the output image you're getting and the one you want would also help.
I hope I understand you correctly. You have a grayscale image showing a hand (the first image in your question).
You create a mask image that looks like the second image you posted.
Multiplication of both results in the mask image?
If that is the case, check your values. If you work with a byte image, your mask must contain the values 0 and 1, not 0 and 255, because otherwise the multiplication results for non-zero mask pixels exceed 255!
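A sketch of that scaling (variable names are assumed; copyTo with a mask is an alternative that avoids the division):
Mat maskBinary = veinMask / 255;          // CV_8U mask scaled from 0/255 to 0/1
Mat veins = grayImg.mul(maskBinary);      // keeps the gray value wherever the mask is 1
// Alternative: Mat veins2; grayImg.copyTo(veins2, veinMask); // any non-zero mask value counts as "on"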

How to thin an image's borders by a specific pixel size? OpenCV

I'm trying to thin an image by turning the 16x24 border pixels to 0. I'm not trying to get a skeletal image; I'm just trying to reduce the size of the white area. Are there any methods I could use? Enlighten me, please.
This is the sample image that I'm trying to thin. It is made of 16x24 white blocks.
EDIT
I tried to use this:
cv::Mat img=cv::imread("image.bmp", CV_LOAD_IMAGE_GRAYSCALE);//image is in binary
cv::Mat mask = img > 0;
Mat kernel = Mat::ones( 16, 24, CV_8U );
erode(mask,mask,kernel);
But the result I got was this,
which is not exactly what I wanted. I want to maintain the exact same shape with just 16x24 pixels of white shaved off the border. Any idea what went wrong?
You want to erode your image.
Late answer, but you should erode your image using a kernel which is twice the size you want to get rid of plus one, like:
Mat kernel = Mat::ones( 24*2+1, 16*2+1, CV_8U );
Notice that I swapped the height and width of the block, since Mat::ones takes (rows, cols). I only know OpenCV from Python, but I am pretty sure the order is the same as in Python.
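Putting it together with the code from the question (a sketch that only swaps in the larger kernel):
cv::Mat img  = cv::imread("image.bmp", CV_LOAD_IMAGE_GRAYSCALE); // binary input
cv::Mat mask = img > 0;
// Mat::ones takes (rows, cols); a (2k+1)-sized kernel erodes k pixels from each side,
// here 24 vertically and 16 horizontally.
cv::Mat kernel = cv::Mat::ones(24 * 2 + 1, 16 * 2 + 1, CV_8U);
cv::erode(mask, mask, kernel);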

Weird behaviour saving an image in OpenCV

After doing some OpenCV operations, I initialize a new image that I'd like to use. Saving this empty image gives a weird result.
The lines I use to save this image are:
Mat dst2 (Size (320, 240), CV_8UC3);
imwrite("bla.jpg", dst2);
I should get a black image, but this is what I get instead. If I move these two lines to the start of the program, everything works fine.
Has anyone had this problem before?
I just noticed that these white lines contain portions of other images I'm processing in the same program.
Because you did not initialize the image with any values (you only defined the size and type), you get garbage pixels (or not-so-random garbage: it is probably showing pieces of other pixels still in memory).
It is the same concept as using or accessing an uninitialized variable.
To paint the image black you can use Mat::setTo, docs here:
http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-setto
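A minimal sketch of both options (the Scalar value is the only addition to the code from the question):
// Option 1: construct the Mat already filled with black.
cv::Mat dst2(cv::Size(320, 240), CV_8UC3, cv::Scalar(0, 0, 0));
// Option 2: zero an existing Mat before saving it.
// dst2.setTo(cv::Scalar(0, 0, 0));
cv::imwrite("bla.jpg", dst2);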