Why does setTo not work (assertion failed)? - c++

I am just learning OpenCV and, since I have some experience with Matlab's logical indexing, I was really interested to see the matrix method setTo. My initial attempt doesn't work though, and I can't work out why, so I'd be very grateful for your help!
I have a Mat containing image data, and want to set all values greater than 10 to zero. So, I did:
Mat not_relevant = abs(difference - frame2) > 10;
difference = difference.setTo(0, not_relevant);
This however gives me:
OpenCV Error: Assertion failed (mask.empty() || mask.type() == CV_8U) in
cv::Mat::setTo, file
C:\builds\2_4_PackSlave-win32-vc12-shared\opencv\modules\core\src\copy.cpp, line 347
I have tried converting not_relevant, difference and frame2 before doing this using, e.g.:
frame2.convertTo(frame2, CV_8UC1);
but that did not fix the error, so I'm not sure what else I could try. Does anyone have any idea what might be wrong?
Thank you for your help!

I think the error is pretty clear: the type of your mask image should be CV_8U.
So you need to convert not_relevant to grayscale.
Mat not_relevant = abs(difference - frame2) > 10;
cv::cvtColor(not_relevant, not_relevant, CV_BGR2GRAY);
difference = difference.setTo(0, not_relevant);
Why doesn't convertTo work here?
CV_8U (or CV_8UC1) is the type of an image with a single channel of uchar values.
convertTo cannot change the number of channels of an image, so converting an image with more than one channel to CV_8U with convertTo does not work.
Check this answer for a more detailed explanation.
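As an aside, if the comparison is done on single-channel images, the resulting mask is already CV_8UC1 and no cvtColor on the mask is needed. A minimal sketch, assuming difference and frame2 are 3-channel BGR Mats (the expression mirrors the one in the question):
cv::Mat diffGray, frameGray;
cv::cvtColor(difference, diffGray, CV_BGR2GRAY);
cv::cvtColor(frame2, frameGray, CV_BGR2GRAY);
// Comparing single-channel images yields a single-channel CV_8U mask.
cv::Mat not_relevant = cv::abs(diffGray - frameGray) > 10;
CV_Assert(not_relevant.type() == CV_8UC1); // exactly what setTo's assertion checks
diffGray.setTo(0, not_relevant);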

Related

OpenCV, get image information

I am playing around with an open-source OpenCV application. With the provided image sets it works great, but when I attempt to pass it a live camera stream, or even recorded frames from that camera stream, it crashes. I assume this has to do with the cv::Mat type, differing image channels, or some conversion that I am not doing.
The provided dataset is grey-scale, 8 bit, and so are my images.
The application expects grayscale (CV_8U).
My question is:
Given one of the (working) provided images and one of my recorded (not working) images, what is the best way to compare them using OpenCV to find out what difference might be causing my crashes?
Thank you.
I have tried:
Commenting out this code (which gave assertion errors):
if (mImGray.channels() == 3)
{
    cvtColor(mImGray, mImGray, CV_BGR2GRAY);
    cvtColor(imGrayRight, imGrayRight, CV_BGR2GRAY);
}
else if (mImGray.channels() == 4)
{
    cvtColor(mImGray, mImGray, CV_BGRA2GRAY);
    cvtColor(imGrayRight, imGrayRight, CV_BGRA2GRAY);
}
And replacing it with:
cv::Mat TempL;
mImGray.convertTo(TempL, CV_8U);
cvtColor(TempL, mImGray, CV_BayerGR2BGR);
cvtColor(mImGray, mImGray, CV_BGR2GRAY);
And the program crashes with no error...
You can try this code:
if (mImGray.depth() != CV_8U)
    mImGray.convertTo(mImGray, CV_8U);

if (mImGray.channels() == 3)
{
    cvtColor(mImGray, mImGray, COLOR_BGR2GRAY);
}
Or you can define a new Mat with the create function and use that.
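To answer the comparison part of the question directly, printing the basic properties of both Mats usually reveals the mismatch (size, channel count, or depth). A hedged diagnostic sketch; providedImg and myImg are placeholder names, not from the original post:
#include <iostream>
#include <string>
#include <opencv2/core/core.hpp>

// Print the Mat properties that most often differ between a dataset image
// and a captured frame.
static void describe(const cv::Mat& m, const std::string& name)
{
    std::cout << name
              << ": size=" << m.cols << "x" << m.rows
              << ", channels=" << m.channels()
              << ", depth=" << m.depth()        // CV_8U is 0, CV_32F is 5, ...
              << ", type=" << m.type()
              << ", continuous=" << m.isContinuous() << std::endl;
}

// Usage:
//   describe(providedImg, "provided");
//   describe(myImg, "mine");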

How to thin an image borders with specific pixel size? OpenCV

I'm trying to thin an image by setting its 16x24 border pixels to 0. I'm not trying to get the skeletal image; I'm just trying to reduce the size of the white area. Are there any methods I could use? Enlighten me, please.
This is the sample image that I'm trying to thin. It is made of 16x24 white blocks.
EDIT
I tried to use this
cv::Mat img = cv::imread("image.bmp", CV_LOAD_IMAGE_GRAYSCALE); // image is in binary
cv::Mat mask = img > 0;
Mat kernel = Mat::ones(16, 24, CV_8U);
erode(mask, mask, kernel);
But the result I got was this, which is not exactly what I wanted. I want to maintain the exact same shape, with just 16x24 pixels of white shaved off from the border. Any idea what went wrong?
You want to erode your image. Here is another description of erosion.
Late answer, but you should erode your image using a kernel that is twice the size you want to get rid of, plus one, like:
Mat kernel = Mat::ones( 24*2+1, 16*2+1, CV_8U );
Notice I swapped the places of the height and width of the block. I only know OpenCV from Python, but I am pretty sure the order is the same as in Python.
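For completeness, a hedged sketch of the full erosion call with that kernel, using getStructuringElement (note that cv::Size takes width then height, the reverse of Mat::ones):
// Remove 16 px from the left/right and 24 px from the top/bottom of the
// white region; Size is (width, height).
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(16 * 2 + 1, 24 * 2 + 1));
cv::erode(mask, mask, kernel);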

OpenCV and working with Fourier transform dft()

I am trying to point-wise multiply the Fourier transforms of two separate images and then convert the result back to a normal image. I'm not very familiar with using the Fourier transform in OpenCV, but this is what I have at the moment. The last line, where the output is shown, causes an exception of type 'System.Runtime.InteropServices.SEHException', but I can't figure out how to fix it. I have tried various different parameters and functions at each stage, but they all seem to either give an exception or an empty output. What am I doing wrong? Thanks for any help you can give me!
Mat dftInput1, dftImage1, dftInput2, dftImage2, multipliedDFT, inverseDFT, inverseDFTconverted;
image1.convertTo(dftInput1, CV_32F);
dft(dftInput1, dftImage1, DFT_COMPLEX_OUTPUT);
image2.convertTo(dftInput2, CV_32F);
dft(dftInput2, dftImage2, DFT_COMPLEX_OUTPUT);
multiply(dftImage1, dftImage2, multipliedDFT);
idft(multipliedDFT, inverseDFT, DFT_SCALE);
inverseDFT.convertTo(inverseDFTconverted, CV_8U);
imshow("Output", inverseDFTconverted);
imshow can't show 2-channel images, only 1-, 3-, or 4-channel ones.
If you use DFT_COMPLEX_OUTPUT for the dft, you get a 2-channel (complex) image, and applying the inverse idft to that again produces a 2-channel (complex) Mat.
No idea why you get a 'System.Runtime.InteropServices.SEHException', though (is that managed C++?).
convertTo() changes the type of the channels, but not their count (yes, surprise).
So, either restrict the idft to the real part:
idft(multipliedDFT, inverseDFT, DFT_SCALE | DFT_REAL_OUTPUT);
or split the result and pass only the real part to imshow:
Mat chan[2];
split( inverseDFTconverted, chan );
imshow("lalala", chan[0]);
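Putting that together, a hedged sketch of the corrected pipeline. image1 and image2 come from the question; the normalize call is an addition so the float result fits the 0..255 display range, and the per-channel multiply is kept exactly as in the question:
Mat dftInput1, dftImage1, dftInput2, dftImage2, multipliedDFT, inverseDFT, display;

image1.convertTo(dftInput1, CV_32F);
image2.convertTo(dftInput2, CV_32F);

dft(dftInput1, dftImage1, DFT_COMPLEX_OUTPUT);
dft(dftInput2, dftImage2, DFT_COMPLEX_OUTPUT);

// Per-channel product of the two spectra, as in the question.
multiply(dftImage1, dftImage2, multipliedDFT);

// Ask idft for a real (single-channel) result so imshow can display it.
idft(multipliedDFT, inverseDFT, DFT_SCALE | DFT_REAL_OUTPUT);

// Bring the float result into a displayable range before converting to 8-bit.
normalize(inverseDFT, inverseDFT, 0, 255, NORM_MINMAX);
inverseDFT.convertTo(display, CV_8U);

imshow("Output", display);
waitKey(0);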

Checking if a pixel is black in a grayscale image (OpenCV)

I need to figure out if any given pixel is black or white on a gray-scale image that I put through a thresholding algorithm before that. The image becomes basically blobs of black on a white background.
Mat falsetest;
...
cv::cvtColor(detected_edges, falsetest, CV_BGR2GRAY);
threshold(falsetest, falsetest,128, 255,THRESH_BINARY);
...
printf("x:%d y:%d %d\n",x,y,falsetest.at<uchar>(x,y));
I expected the results to be either 0 or 255, however, that is not the case. The output for different pixels looks something like this:
x:1259 y:175 111
x:1243 y:189 184
x:1229 y:969 203
x:293 y:619 255
x:1123 y:339 183
Am I trying to do this the wrong way, or does the error lie elsewhere?
Are you sure that falsetest contains uchar pixels and not floats? If it contains floats, you would need to access the values of falsetest with:
falsetest.at<float>(x,y)
The OpenCV code looks good. However, you are using %d in printf to display a uchar.
Either use %hhu or do
static_cast<int>(falsetest.at<uchar>(x,y)).
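A hedged one-liner combining the two suggestions, keeping the indices exactly as in the question:
printf("x:%d y:%d value:%d\n", x, y, static_cast<int>(falsetest.at<uchar>(x, y)));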
I have finally figured out what the problem was. I thought that when I called
cv::cvtColor(detected_edges, falsetest, CV_BGR2GRAY);
all the matrix data was copied to falsetest. However, it seems that was not the case: when I proceeded to modify detected_edges, falsetest also became contaminated. Cloning the matrix solved the problem.
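For reference, a hedged one-line sketch of the kind of fix described; the exact place where the clone was taken is not shown in the thread:
// Take an independent deep copy so later changes to the source data
// cannot affect falsetest.
falsetest = falsetest.clone();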

masking in openCV

Mat mask = Mat::zeros(img1.rows, img1.cols, CV_8UC1);
This piece of code is supposed to create a mask, I think, using C++. What is the equivalent way to create a mask like this in C? Also, can someone explain to me what this piece of code is actually doing, please?
With the C API, we would call
IplImage *mask = cvCreateImage(cvGetSize(img1), IPL_DEPTH_8U, 1);
cvSetZero(mask);
The C API is easier to read, IMO. What it does is create an image with 8 bits per pixel and 1 channel (grayscale), of the same size as img1, and then set all its pixel values to zero.
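For context, a hedged C++ usage sketch of such a mask; dst is a placeholder name and the rectangle is just an example region. copyTo only writes the pixels where the mask is non-zero:
cv::Mat mask = cv::Mat::zeros(img1.rows, img1.cols, CV_8UC1);
// Mark an example region of interest in the mask (thickness -1 fills the rectangle).
cv::rectangle(mask, cv::Rect(10, 10, 100, 100), cv::Scalar(255), -1);

cv::Mat dst;
img1.copyTo(dst, mask); // dst receives img1's pixels only inside the marked region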