OpenCV Error: Assertion failed on some pixel values - C++

I loaded an image into a Mat:
Mat Mask = cvLoadImage(filename);
It's a 3744 x 5616 RGB image. In the next step I convert it to grayscale:
cvtColor(Mask,Mask,CV_BGR2GRAY);
After this I normalize it, to use the full grayscale range later:
normalize(Mask,Mask,0,255,NORM_MINMAX,CV_8U);
Now I need the specific grayscale values, but I get an error on some of them:
for (int i = 0; i < Mask.rows; i++)
{
    for (int j = 0; j < Mask.cols; j++)
    {
        Vec3b masked = Mask.at<Vec3b>(i, j);
        // some stuff
    }
}
I'm getting the following error on some pixels:
OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1*DataType<_Tp>::channels) < (unsigned)(size.p[1]*channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in unknown function, file c:\opencv\build\include\opencv2\core\mat.hpp, line 537
Can anyone tell me what I did wrong? It's strange that it only appears for some pixel values.
Edit:
Additional Information:
If I load my mask as grayscale, everything works fine. But when I use cvtColor() or Mat Mask = imread(filename, CV_LOAD_IMAGE_GRAYSCALE); on the image, the error appears. Very strange...

I think your problem is that you are accessing a single-channel grayscale image with .at<Vec3b>(i,j). Instead, you want to access each pixel with .at<uchar>(i,j). cvtColor(Mask,Mask,CV_BGR2GRAY); changes the 3-channel BGR image into a one-channel grayscale image, so .at<Vec3b>(i,j) tries to read 3 channels per pixel and will eventually run past the end of the image data in memory, causing problems or tripping those assertions.
The inner part of your for loop should look like this:
unsigned char masked = Mask.at<uchar>(i,j);
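
For reference, here is a minimal sketch of the whole pipeline with single-channel access after the conversion (the filename is a placeholder, and imread replaces the legacy cvLoadImage):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat Mask = imread("mask.jpg");                     // loads a 3-channel BGR image
    cvtColor(Mask, Mask, CV_BGR2GRAY);                 // now 1 channel (CV_8UC1)
    normalize(Mask, Mask, 0, 255, NORM_MINMAX, CV_8U); // stretch to the full 0-255 range

    for (int i = 0; i < Mask.rows; i++)
    {
        for (int j = 0; j < Mask.cols; j++)
        {
            uchar masked = Mask.at<uchar>(i, j);       // uchar, not Vec3b
            // some stuff
        }
    }
    return 0;
}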

Related

OpenCV crashes when aruco::detectMarkers is called

I am trying to track my ArUco markers, but when I call the detectMarkers() function my application stops and I have absolutely no idea why.
I am calling it like this:
aruco::detectMarkers(colorMat, markerDictionnary, markerCorners, markerIds);
The variables are declared like this:
vector<vector<Point2f>> markerCorners;
Ptr<aruco::Dictionary> markerDictionnary = aruco::getPredefinedDictionary(aruco::PREDEFINED_DICTIONARY_NAME::DICT_4X4_50);
vector<int> markerIds;
My colorMat is declared and populated in previous functions, so I'm just going to copy every line where it is used:
cv::Mat colorMat;
colorMat = Mat(colorHeight, colorWidth, CV_8UC4, &colorBuffer[0]).clone();
cv::flip(colorMat, colorMat, 1);
cv::imshow("Color", colorMat);
The error I get in my console is:
OpenCV(4.3.0) Error: Assertion failed (_in.type() == CV_8UC1 || _in.type() == CV_8UC3) in cv::aruco::_convertToGrey, file C:\Users\...\Librairies\opencv_contrib-4.3.0\modules\aruco\src\aruco.cpp, line 107
OpenCV(4.3.0) C:\Users\...\Librairies\opencv_contrib-4.3.0\modules\aruco\src\aruco.cpp:107: error: (-215:Assertion failed) _in.type() == CV_8UC1 || _in.type() == CV_8UC3 in function 'cv::aruco::_convertToGrey'
Does anyone know where this error is coming from? Thanks in advance!
As you can see here:
colorMat = Mat(colorHeight, colorWidth, CV_8UC4, &colorBuffer[0]).clone();
You're creating a cv::Mat that has 4 channels: blue, green, red, and alpha. So your Mat is holding a BGRA image.
As you can see in the error, detectMarkers wants either a BGR (or RGB) image (3 channels) or a grayscale image (1 channel).
So you should convert your image before passing it to detectMarkers. One way to do so is, for example:
cvtColor(colorMat, colorMat, COLOR_BGRA2GRAY);
which converts your image into a grayscale picture.
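
Putting that together, a minimal sketch using the variable names from the question (converting into a separate Mat so the original BGRA frame stays available for imshow):

Mat detectionMat;
cvtColor(colorMat, detectionMat, COLOR_BGRA2GRAY);  // CV_8UC1, accepted by detectMarkers
// or, to keep 3 channels: cvtColor(colorMat, detectionMat, COLOR_BGRA2BGR);
aruco::detectMarkers(detectionMat, markerDictionnary, markerCorners, markerIds);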

How can I make a matrix the same as the MNIST image dataset?

I'm trying to make a vector of matrices which is the same as the MNIST image dataset.
Each image from the webcam is captured and stored in the vector. However, the matrix I created is different from the MNIST dataset, so the main code doesn't work for the matrix I created.
I was thinking that maybe it's because the pixel type is different.
What I noticed is that when I looked at a single matrix from the MNIST data, it had 15 decimal places. However, I was not able to get 15 decimal places. When I set the image to CV_64FC1, it shows the following error message:
"Assertion failed in cv::cvtColor, file C:\file path. "
The main code works for the MNIST dataset... I'm not sure what to do.
Please advise me.
while (1)
{
    cap >> src;
    src.convertTo(src, CV_64FC1);
    src = src / 256;
    cvtColor(src, src_gray, CV_RGB2GRAY);
    resize(src_gray, src_N, size);
    testX.push_back(src_N);
}
cvtColor only allows 8U, 16U and 32F bit-depths. So after you convertTo(..., CV_64FC1), the bit-depth is 64F and the assertion fails: https://github.com/opencv/opencv/blob/84699e0e1860a3485e3dfc12230fbded955dba13/modules/imgproc/src/color.cpp#L8676:
CV_Assert( depth == CV_8U || depth == CV_16U || depth == CV_32F );
If you really need 64F, it would make sense to cvtColor first and then increase the bit depth to 64F using convertTo.
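
A minimal sketch of the reordered loop, keeping the variable names from the question (webcam frames come out of OpenCV as 8-bit BGR, so the conversion happens while the depth is still 8U, and the scaling from the original code is folded into convertTo):

while (1)
{
    cap >> src;                                  // 8-bit BGR frame
    cvtColor(src, src_gray, CV_BGR2GRAY);        // convert first, at 8U depth
    resize(src_gray, src_N, size);
    src_N.convertTo(src_N, CV_64FC1, 1.0 / 256); // then raise the depth to 64F and scale
    testX.push_back(src_N);
}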

Error when converting from HSV to BGR, or HSV to JPEG, in OpenCV after using inRange

I am using OpenCV 3.1.0 (I have tried with 2.4.9, with the same problem). I want to output an HSV Mat to JPEG:
// .. Getting JPEG content into memory
// JPEG to mat
Mat imgBuf = Mat(1, jpegSize, CV_8UC3, jpegContent);
Mat imgMat=imdecode(imgBuf, CV_LOAD_IMAGE_COLOR);
free(jpegContent);
if(imgMat.data == NULL) {
// Some error handling
}
// Now the JPEG is decoded and reside in imgMat
cvtColor(imgMat, imgMat, CV_BGR2HSV); // Converting to HSV
Mat tmp;
inRange(imgMat, Scalar(0, 0, 0), Scalar(8, 8, 8), tmp); // Problem goes here
cvtColor(tmp, imgMat, CV_HSV2BGR);
// Mat to JPEG
vector<uchar> buf;
imencode(".jpg", imgMat, buf, std::vector<int>());
outputJPEG=(unsigned char*)malloc(buf.size());
memcpy(outputJPEG, &buf[0], buf.size());
// ... Output JPEG
The problem is, when I do cvtColor(tmp, imgMat, CV_HSV2BGR) after inRange, my program fails with:
OpenCV Error: Assertion failed (scn == 3 && (dcn == 3 || dcn == 4) && (depth == CV_8U || depth == CV_32F)) in cvtColor, file /home/pi/opencv/src/opencv-3.1.0/modules/imgproc/src/color.cpp, line 8176
terminate called after throwing an instance of 'cv::Exception'
what(): /home/pi/opencv/src/opencv-3.1.0/modules/imgproc/src/color.cpp:8176: error: (-215) scn == 3 && (dcn == 3 || dcn == 4) && (depth == CV_8U || depth == CV_32F) in function cvtColor
If I remove inRange, the program works just fine. I have also tried removing the cvtColor call, letting imencode do its job and convert HSV to BGR and then to JPEG automatically. This time there is no more assertion failure, but I get a corrupted JPEG image, as GStreamer complains:
gstrtpjpegpay.c(581): gst_rtp_jpeg_pay_read_sof ():
/GstPipeline:pipeline0/GstRtpJPEGPay:rtpjpegpay0
WARNING: from element /GstPipeline:pipeline0/GstRtpJPEGPay:rtpjpegpay0: Wrong SOF length 11.
Again, removing inRange also solves this issue; it produces good JPEG data.
So, is it that I am invoking inRange improperly, causing the corrupted image data? If so, what is the correct way to use inRange?
inRange produces a single channel binary matrix, i.e. a CV_8UC1 matrix with values either 0 or 255.
So you cannot convert tmp with HSV2BGR, because the source image tmp doesn't have 3 channels.
OpenCV is telling you exactly this: scn (source channels) is not 3.
Since you probably want to keep, and then convert to BGR, only the part of the image within your range, you can:
set to black everything outside the range: imgMat.setTo(Scalar(0,0,0), ~tmp);
convert the resulting image to BGR: cvtColor(imgMat, imgMat, CV_HSV2BGR);
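
A minimal sketch of those two steps, reusing the names from the question:

cvtColor(imgMat, imgMat, CV_BGR2HSV);                    // BGR -> HSV
Mat tmp;
inRange(imgMat, Scalar(0, 0, 0), Scalar(8, 8, 8), tmp);  // tmp is a CV_8UC1 mask
imgMat.setTo(Scalar(0, 0, 0), ~tmp);                     // black out everything outside the range
cvtColor(imgMat, imgMat, CV_HSV2BGR);                    // imgMat still has 3 channels, so this works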

OpenCV cvtColor set/get matrix element

cv::cvtColor(dst,bwImage, CV_RGB2GRAY);
How can I set a specific pixel value, for example at (0,0), in this bwImage after using cvtColor?
I usually use:
bwImage.at<float>(0,0) = 0;
but now it throws an exception:
OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1*DataType<_Tp>::channels) < (unsigned)(size.p[1]*channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in cv::Mat::at, file C:\Users\gdarmon\Downloads\opencv\build\include\opencv2/core/mat.hpp, line 538
Update: I have found a workaround:
bwImage.convertTo(bwImage,CV_32F);
bwImage.at<float>(0,0)=0;
As you already found out, your black-and-white image does not hold floats but 8-bit unsigned integers. Besides your workaround, you could also do
bwImage.at<uint8_t>(0,0) = 0;
if you include stdint.h. As this is just a typedef for unsigned char, you can also skip the header and do this:
bwImage.at<unsigned char>(0,0) = 0;
On a side note: the default channel ordering in OpenCV is BGR, so using CV_RGB2GRAY here would be wrong if you did not reorder the channels beforehand.
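
Combining these points, a minimal sketch (using CV_BGR2GRAY per the side note above; the CV_Assert is just an optional sanity check on the element type):

cv::cvtColor(dst, bwImage, CV_BGR2GRAY);  // bwImage is now CV_8UC1
CV_Assert(bwImage.type() == CV_8UC1);     // optional: verify before element access
bwImage.at<unsigned char>(0, 0) = 0;      // matches the 8-bit element type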

OpenCV: Error copying one image to another

I am trying to copy one image to another, pixel by pixel (I know there are sophisticated methods available; I am trying to solve another problem, and the answer to this will be useful).
This is my code:
int main()
{
    Mat Img;
    Img = imread("../../../stereo_images/left01.jpg");
    Mat copyImg = Mat::zeros(Img.size(), CV_8U);
    for (int i = 0; i < Img.rows; i++) {
        for (int j = 0; j < Img.cols; j++) {
            copyImg.at<uchar>(j, i) = Img.at<uchar>(j, i);
        }
    }
    namedWindow("Image", CV_WINDOW_AUTOSIZE);
    imshow("Image", Img);
    namedWindow("copyImage", CV_WINDOW_AUTOSIZE);
    imshow("copyImage", copyImg);
    waitKey(0);
    return 0;
}
When I run this code in Visual Studio, I get the following error:
OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1*DataType<_Tp>::channels) < (unsigned)(size.p[1]*channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in cv::Mat::at, file c:\opencv\opencv-2.4.9\opencv\build\include\opencv2\core\mat.hpp, line 537
I know for a fact that Img's type is CV_8U. Why does this happen?
Thanks!
// will read in a 3-channel BGR image, no matter what the content is
Img = imread("../../../stereo_images/left01.jpg");
To make it read grayscale images, use:
Img = imread("../../../stereo_images/left01.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Then you don't need to copy per pixel (and you should even avoid that); just use:
Mat im2 = Img.clone();
If you do per-pixel loops, watch out to get the indices right. It's a row-col world here, not x,y, so it should be:
copyImg.at<uchar>(i,j) = Img.at<uchar>(i,j);
in your case
"I know for a fact that Img's type is CV_8U."
But CV_8U is just the image depth (8-bit U-nsigned). The type also specifies the number of channels, which is usually three: one for blue, one for green, and one for red, in that order by default in OpenCV. The type would be CV_8UC3 (C-hannels = 3). imread will convert even a black-and-white image to a 3-channel image by default. imread(filename, CV_LOAD_IMAGE_GRAYSCALE) will load a 1-channel image (CV_8UC1). But if you're not sure, the easiest solution is
Mat copyImg = Mat::zeros(Img.size(), Img.type());
To access the array elements, you have to know their size. Using .at<uchar>() on a 3-channel image will only access the first channel, because you have 3*8 bits per pixel. So on a 3-channel image you have to use
copyImg.at<Vec3b>(i,j) = Img.at<Vec3b>(i,j);
where Vec3b is a cv::Vec<uchar, 3>. You should also note that the first argument of at<>() is the index along dimension 0, which is the rows, and the second argument is the columns. In other words, in classic 2D x-y chart order you access a pixel with .at<>(y,x) == .at<>(Point(x,y)).
Your problem is with this line:
copyImg.at<uchar>(j,i) = Img.at<uchar>(j,i);
It should be :
copyImg.at<uchar>(i,j) = Img.at<uchar>(i,j);
Note that if you want to copy an image, you can simply do this:
Mat copyImg = Img.clone();
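
For completeness, a minimal sketch of the per-pixel version with both fixes applied (matching type and row-major indexing):

Mat Img = imread("../../../stereo_images/left01.jpg");  // CV_8UC3 by default
Mat copyImg = Mat::zeros(Img.size(), Img.type());       // same type as the source
for (int i = 0; i < Img.rows; i++)                      // i = row (y)
    for (int j = 0; j < Img.cols; j++)                  // j = column (x)
        copyImg.at<Vec3b>(i, j) = Img.at<Vec3b>(i, j);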