Copying cv::Mat to another creates "assertion failed 0 <= _colRange.start && .." - c++

A pretty simple concept: I have a 640x480 Mat and an 800x480 screen, so I am trying to copy the original image to the center of a black 800x480 image so that the aspect ratio is maintained but the whole screen is used.
I followed this post and tried both solutions (direct copy-to and region of interest) and got the same error:
OpenCV Error: Assertion failed (0 <= _colRange.start && _colRange.start <= _colRange.end && _colRange.end <= m.cols) in Mat, file /home/pi/opencv-3.0.0/modules/core/src/matrix.cpp, line 464
terminate called after throwing an instance of 'cv::Exception'
what(): /home/pi/opencv-3.0.0/modules/core/src/matrix.cpp:464: error: (-215) 0 <= _colRange.start && _colRange.start <= _colRange.end && _colRange.end <= m.cols in function Mat
Aborted
The offending code:
cv::Mat displayimage = cv::Mat(800, 480, CV_16U, cv::Scalar(0));
modimage1.copyTo(displayimage.rowRange(1,480).colRange(81,720));
I first attempted it with start/end row and column ranges of (0,480) and (80,720), but the error made it sound like it couldn't start at 0, so of course I thought I was off by 1 and started at 1, with the same results. But in actuality the error is for the COLUMNS and not the ROWS, and being off by 1 in the columns wouldn't even matter. So what doesn't it like about where I'm trying to copy this image to?

Duh, this one was easier than I thought. The cv::Mat() arguments are height THEN width, not width then height. Tricky. But I also ran into an error about the wrong number of channels for my Mat type, so to make the code bulletproof I just initialized it with the same type as the image that would be copied into it, so the code below works fine:
cv::Mat displayimage = cv::Mat(480, 800, modimage1.type(), cv::Scalar(0));
modimage1.copyTo(displayimage.rowRange(0,480).colRange(80,720));
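For what it's worth, a version that computes the offsets from the image sizes instead of hard-coding them (a minimal sketch, assuming modimage1 always fits inside the 800x480 canvas) looks like this:
    // Sketch: centre modimage1 on a black canvas without hard-coded offsets
    cv::Mat displayimage(480, 800, modimage1.type(), cv::Scalar::all(0));
    int x = (displayimage.cols - modimage1.cols) / 2;   // 80 for a 640-wide image
    int y = (displayimage.rows - modimage1.rows) / 2;   // 0 for a 480-tall image
    modimage1.copyTo(displayimage(cv::Rect(x, y, modimage1.cols, modimage1.rows)));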

You can use cv::copyMakeBorder:
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include "iostream"
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
Mat src = imread(argv[1]);
if (src.empty())
{
cout << endl
<< "ERROR! Unable to read the image" << endl
<< "Press a key to terminate";
cin.get();
return 0;
}
imshow("Source image", src);
Mat dst;
Size dst_dims = Size(800,480);
int top = ( dst_dims.height - src.rows ) / 2;
int bottom = ( (dst_dims.height + 1) - src.rows ) / 2;
int left = ( dst_dims.width - src.cols ) / 2;
int right = ( ( dst_dims.width + 1 ) - src.cols ) / 2;
copyMakeBorder(src, dst, top, bottom, left, right, BORDER_CONSTANT, Scalar(0,0,0));
imshow("New image", dst);
waitKey();
return 0;
}

Related

OpenCV error in codes

I'm doing a project with OpenCV on image matching.
The lines
std::vector<cv::Keypoint> keypoints1;
std::vector<cv::Keypoint> keypoints2;
have the error: namespace "cv" has no member "Keypoint"
How do I solve this?
Another error is in the code
//Define feature detector
cv::FastFeatureDetector fastDet(80);
//Keypoint detection
fastDet.detect(image1, keypoints1);
fastDet.detect(image2, keypoints2);
where the error says:
object of abstract class type "cv::FastFeatureDetector" is not allowed:
function "cv::FastFeatureDetector::setThreshold" is a pure virtual function
function "cv::FastFeatureDetector::getThreshold" is a pure virtual function
function "cv::FastFeatureDetector::setNonmaxSuppression" is a pure virtual function
function "cv::FastFeatureDetector::getNonmaxSuppression" is a pure virtual function
function "cv::FastFeatureDetector::setType" is a pure virtual function
function "cv::FastFeatureDetector::getType" is a pure virtual function
Can someone please help?
Here is the whole code:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2\features2d\features2d.hpp"
#include"opencv2\core.hpp"
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <vector>
using namespace cv;
using namespace std;
void main(int argc, const char** argv)
{
    Mat image1 = imread("image1.jpg", CV_LOAD_IMAGE_UNCHANGED);
    Mat image2 = imread("image2.jpg", CV_LOAD_IMAGE_UNCHANGED);
    //Define keypoints vector
    std::vector<cv::Keypoint> keypoints1;
    std::vector<cv::Keypoint> keypoints2;
    //Define feature detector
    cv::FastFeatureDetector fastDet(80);
    //Keypoint detection
    fastDet.detect(image1, keypoints1);
    fastDet.detect(image2, keypoints2);
    //Define a square neighbourhood
    const int nsize(11); //size of the neighbourhood
    cv::Rect neighbourhood(0, 0, nsize, nsize); //11x11
    cv::Mat patch1;
    cv::Mat patch2;
    //For all points in first image
    //find the best match in second image
    cv::Mat result;
    std::vector<cv::DMatch> matches;
    //for all keypoints in image 1
    for (int i = 0; i < keypoints1.size(); i++)
    {
        //define image patch
        neighbourhood.x = keypoints1[i].pt.x - nsize / 2;
        neighbourhood.y = keypoints1[i].pt.y - nsize / 2;
        //if neighbourhood of points outside image,
        //then continue with next point
        if (neighbourhood.x < 0 || neighbourhood.y < 0 || neighbourhood.x + nsize >= image1.cols || neighbourhood.y + nsize >= image1.rows)
            continue;
        //patch in image 1
        patch1 = image1(neighbourhood);
        //reset best correlation value;
        cv::DMatch bestMatch;
        //for all keypoints in image 2
        for (int j = 0; j < keypoints2.size(); j++)
        {
            //define image patch
            neighbourhood.x = keypoints2[j].pt.x - nsize / 2;
            neighbourhood.y = keypoints2[j].pt.y - nsize / 2;
            //if neighbourhood of points outside image,
            //then continue with next point
            if (neighbourhood.x < 0 || neighbourhood.y < 0 || neighbourhood.x + nsize >= image2.cols || neighbourhood.y + nsize >= image2.rows)
                continue;
            //patch in image 2
            patch2 = image2(neighbourhood);
            //match the 2 patches
            cv::matchTemplate(patch1, patch2, result, CV_TM_SQDIFF_NORMED);
            //check if it is best match
            if (result.at<float>(0, 0) < bestMatch.distance)
            {
                bestMatch.distance = result.at<float>(0, 0);
                bestMatch.queryIdx = i;
                bestMatch.trainIdx = j;
            }
        }
        //add the best match
        matches.push_back(bestMatch);
    }
    //extract the 25 best matches
    std::nth_element(matches.begin(), matches.begin() + 25, matches.end());
    matches.erase(matches.begin() + 25, matches.end());
    //Draw matching results
    cv::Mat matchImage;
    cv::DrawMatchesFlags();
}
There are some mistakes in your code.
Replace the lines below
std::vector<cv::Keypoint> keypoints1;
std::vector<cv::Keypoint> keypoints2;
with this
std::vector<cv::KeyPoint> keypoints1;
std::vector<cv::KeyPoint> keypoints2;
For cv::FastFeatureDetector fastDet(80); you may need to link against the opencv_features2d library.
After these changes your code should run successfully.
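If this is OpenCV 3.x (which the pure-virtual errors suggest), cv::FastFeatureDetector is an abstract interface and has to be created through its factory function rather than constructed directly; a minimal sketch of that change:
    // OpenCV 3.x: create the detector via the factory method instead of a constructor
    cv::Ptr<cv::FastFeatureDetector> fastDet = cv::FastFeatureDetector::create(80);
    fastDet->detect(image1, keypoints1);
    fastDet->detect(image2, keypoints2);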

OpenCV Harris Corner Detection crashes

I'm trying to use the Harris corner detection algorithm in OpenCV to find corners in an image. I want to track them across consecutive frames using Lucas-Kanade pyramidal optical flow.
I have this C++ code, which doesn't seem to work for some reason:
#include <stdio.h>
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/contrib/contrib.hpp"
#include "opencv2/opencv.hpp"
using namespace cv;
using namespace std;
void main()
{
    Mat img1, img2;
    Mat disp1, disp2;
    int thresh = 200;
    vector<Point2f> left_corners;
    vector<Point2f> right_corners;
    vector<unsigned char> status;
    vector<float> error;
    Size s;
    s.height = 400;
    s.width = 400;
    img1 = imread("D:\\img_l.jpg", 0);
    img2 = imread("D:\\img_r.jpg", 0);
    resize(img2, img2, s, 0, 0, INTER_CUBIC);
    resize(img1, img1, s, 0, 0, INTER_CUBIC);
    disp1 = Mat::zeros( img1.size(), CV_32FC1 );
    disp2 = Mat::zeros( img2.size(), CV_32FC1 );
    int blockSize = 2;
    int apertureSize = 3;
    double k = 0.04;
    cornerHarris( img1, disp1, blockSize, apertureSize, k, BORDER_DEFAULT );
    normalize( disp1, disp1, 0, 255, NORM_MINMAX, CV_32FC1, Mat() );
    for( int j = 0; j < disp1.size().height ; j++ )
    {
        for( int i = 0; i < disp1.size().width; i++ )
        {
            if( (int) disp1.at<float>(j,i) > thresh )
            {
                left_corners.push_back(Point2f( j, i ));
            }
        }
    }
    right_corners.resize(left_corners.size());
    calcOpticalFlowPyrLK(img1, img2, left_corners, right_corners, status, error, Size(11,11), 5);
    printf("Vector size : %d", left_corners.size());
    waitKey(0);
}
When I run it, I get the following error message:
Microsoft Visual Studio C Runtime Library has detected a fatal error in OpenCVTest.exe.
(OpenCVTest being the name of my project)
OpenCV Error: Assertion failed ((npoints = prevPtsMat.checkVector(2, CV_32F, true)) >= 0) in unknown function, file ..\..\OpenCV-2.3.0-win-src\OpenCV-2.3.0\modules\video\src\lkpyramid.cpp, line 71
I have been trying to debug this since yesterday, but in vain. Please help.
As we can see in the source code, this error is thrown if the previous points array is in someway faulty. Exactly what makes it bad is hard to say since the documentation for checkVector is a bit sketchy. You can still look at the code to find out.
But my guess is that your left_corners variable has either the wrong type (not CV_32F) or the wrong shape.
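As a quick sketch of things worth checking before the call (assuming the rest of the pipeline stays as posted): cv::Point2f takes (x, y), i.e. (column, row), so the posted loop pushes the coordinates swapped, and it is also worth making sure the point vector is not empty before handing it to calcOpticalFlowPyrLK:
    // Sketch: collect corners as (x, y) = (column, row) and skip the flow call
    // if nothing exceeded the threshold.
    left_corners.clear();
    for( int j = 0; j < disp1.rows; j++ )
        for( int i = 0; i < disp1.cols; i++ )
            if( (int) disp1.at<float>(j,i) > thresh )
                left_corners.push_back(Point2f( (float)i, (float)j )); // x = column, y = row
    if( !left_corners.empty() )
        calcOpticalFlowPyrLK(img1, img2, left_corners, right_corners, status, error, Size(11,11), 5);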

Detection ROI using openCV

I'm working on a task where I have to find the region of interest (ROI) and then apply a threshold to the image:
As I am not from the field of computing, I'm having some difficulties.
I started trying to find the ROI through the following code:
// code
string filename = "2011-06-11-09%3A12%3A15.387932.bmp";
Mat img = imread(filename);
if (!img.data)
{
    std::cout << "!!! imread failed to open the image: " << filename << std::endl;
    return -1;
}
cv::Rect roi;
roi.x = 0;
roi.y = 90;
roi.width = 400;
roi.height = 90;
cv::Mat crop = img(roi);
cv::imwrite("2011-06-11-09%3A12%3A15.387932.bmp", crop);
Thank you very much.
I assume that you are interested in isolating the digits of the image and that you want to specify the ROI manually (given what you wrote).
You can use better coordinates for the ROI and crop it into a new cv::Mat to get something like the output below:
Executing a threshold on this image only makes sense if you want to isolate the digits to do some recognition later. A good technique for doing that is offered by cv::inRange() which performs a threshold operation on all channels (RGB image == 3 channels).
Note: cv::Mat stores pixels in the BGR order, this is important to remember when you specify the values for the threshold.
As a simple test, you can perform a threshold from B:70 G:90 R:100 to B:140 G:140 R:140 to get the following output:
Not bad! I changed your code a little to get these results:
#include <cv.h>
#include <highgui.h>
#include <iostream>
int main()
{
    cv::Mat image = cv::imread("input.jpg");
    if (!image.data)
    {
        std::cout << "!!! imread failed to load image" << std::endl;
        return -1;
    }
    cv::Rect roi;
    roi.x = 165;
    roi.y = 50;
    roi.width = 440;
    roi.height = 80;
    /* Crop the original image to the defined ROI */
    cv::Mat crop = image(roi);
    cv::imwrite("colors_roi.png", crop);
    /* Threshold the ROI based on a BGR color range to isolate yellow-ish colors */
    cv::Mat dest;
    cv::inRange(crop, cv::Scalar(70, 90, 100), cv::Scalar(140, 140, 140), dest);
    cv::imwrite("colors_threshold.png", dest);
    cv::imshow("Example", dest);
    cv::waitKey();
    return 0;
}

sYSMALLOc: Assertion Failed error in opencv

The code compiles successfully but I am getting the following error when I try to execute the code with some images.
malloc.c:3096: sYSMALLOc: Assertion `(old_top == (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) (old_size) >= (unsigned long)((((__builtin_offsetof (struct malloc_chunk, fd_nextsize))+((2 * (sizeof(size_t))) - 1)) & ~((2 * (sizeof(size_t))) - 1))) && ((old_top)->size & 0x1) && ((unsigned long)old_end & pagemask) == 0)' failed.
Aborted
My code is:
#include "opencv2/modules/imgproc/include/opencv2/imgproc/imgproc.hpp"
#include "opencv2/modules/highgui/include/opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
using namespace cv;
/// Global variables
int const min_BINARY_value = 0;
int const max_BINARY_value = 255;
Mat src, src_gray, new_image;
const char* window_name = "Web Safe Colors";
/**
 * @function main
 */
int main( int argc, char** argv )
{
    double sum = 0, mean = 0;
    /// Load an image
    src = imread( argv[1], 1 );
    /// Convert the image to Gray
    cvtColor( src, src_gray, CV_RGB2GRAY );
    /// Create new image matrix
    new_image = Mat::ones( src_gray.size(), src_gray.type() );
    /// Calculate sum of pixels
    for( int y = 0; y < src_gray.rows; y++ )
    {
        for( int x = 0; x < src_gray.cols; x++ )
        {
            sum = sum + src_gray.at<Vec3b>(y,x)[0];
        }
    }
    /// Calculate mean of pixels
    mean = sum / (src_gray.rows * src_gray.cols);
    /// Perform conversion to binary
    for( int y = 0; y < src_gray.rows; y++ )
    {
        for( int x = 0; x < src_gray.cols; x++ )
        {
            if(src_gray.at<Vec3b>(y,x)[0] <= mean)
                new_image.at<Vec3b>(y,x)[0] = min_BINARY_value;
            else
                new_image.at<Vec3b>(y,x)[0] = max_BINARY_value;
        }
    }
    /// Create a window to display results
    namedWindow( window_name, CV_WINDOW_AUTOSIZE );
    imshow( window_name, new_image );
    /// Wait until user finishes program
    while(true)
    {
        int c;
        c = waitKey( 20 );
        if( (char)c == 27 )
        { break; }
    }
}
Can you please help me identify the problem?
I cannot reproduce the exact error message you get. On my computer your program stopped with a segmentation fault.
The reason for this was that you are accessing the pixels of your gray-value images as if they were RGB images. So instead of
new_image.at<Vec3b>(y,x)[0]
you need to use
new_image.at<uchar>(y,x)
Because in a grayscale image every pixel only has a single value instead of a vector of 3 values (red, green and blue). After I applied these changes your program ran without errors and produced the expected output of a thresholded binary image.
It is possible that because of this you were overwriting some other memory OpenCV was currently using, and that this memory corruption then led to your error message.
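For what it's worth, the same mean-based binarisation can be done without any per-pixel loops; a minimal sketch using cv::mean() and cv::threshold() on the gray image:
    // Sketch: mean gray value, then a single threshold call on the gray image
    double mean_val = cv::mean(src_gray)[0];
    cv::threshold(src_gray, new_image, mean_val, max_BINARY_value, cv::THRESH_BINARY);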

Assertion failed with accumulateWeighted in OpenCV

I am using openCV and trying to calculate a moving average of the background, then taking the current frame and subtracting the background to determine movement (of some sort).
However, when running the program I get:
OpenCV Error: Assertion failed (func != 0) in accumulateWeighted, file /home/sebbe/projekt/opencv/trunk/opencv/modules/imgproc/src/accum.cpp, line 431
terminate called after throwing an instance of 'cv::Exception'
what(): /home/sebbe/projekt/opencv/trunk/opencv/modules/imgproc/src/accum.cpp:431: error: (-215) func != 0 in function accumulateWeighted
I can't see what could be wrong with the arguments to accumulateWeighted.
Code inserted below:
#include <stdio.h>
#include <stdlib.h>
#include "cv.h"
#include "highgui.h"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "cxcore.h"
using namespace cv;
int main( int argc, char **argv )
{
    Mat colourFrame;
    Mat frame;
    Mat greyFrame;
    Mat movingAverage;
    Mat difference;
    Mat temp;
    int key = 0;
    VideoCapture cap(0);
    /* always check */
    if ( !cap.isOpened() ) {
        fprintf( stderr, "Cannot open initialize webcam!\n" );
        return 1;
    }
    namedWindow("Camera Window", 0);
    // Initialize
    cap >> movingAverage;
    while( key != 'q' ) {
        /* get a frame */
        cap >> colourFrame;
        /* Create a running average of the motion and convert the scale */
        accumulateWeighted(colourFrame, movingAverage, 0.02, Mat() );
        /* Take the difference from the current frame to the moving average */
        absdiff(colourFrame, movingAverage, difference);
        /* Convert the image to grayscale */
        cvtColor(difference, greyFrame, CV_BGR2GRAY);
        /* Convert the image to black and white */
        threshold(greyFrame, greyFrame, 70, 255, CV_THRESH_BINARY);
        /* display current frame */
        imshow("Camera Window", greyFrame);
        /* exit if user press 'q' */
        key = cvWaitKey( 1 );
    }
    return 0;
}
Looking at the OpenCV sources, specifically at modules/imgproc/src/accum.cpp line 431, the lines that precede this assertion are:
void cv::accumulateWeighted( InputArray _src, CV_IN_OUT InputOutputArray _dst,
                             double alpha, InputArray _mask )
{
    Mat src = _src.getMat(), dst = _dst.getMat(), mask = _mask.getMat();
    int sdepth = src.depth(), ddepth = dst.depth(), cn = src.channels();
    CV_Assert( dst.size == src.size && dst.channels() == cn );
    CV_Assert( mask.empty() || (mask.size == src.size && mask.type() == CV_8U) );
    int fidx = getAccTabIdx(sdepth, ddepth);
    AccWFunc func = fidx >= 0 ? accWTab[fidx] : 0;
    CV_Assert( func != 0 ); // line 431
What's happening in your case is that getAccTabIdx() is returning -1, which in turn makes func be ZERO.
For accumulateWeighted() to work properly, the depth of colourFrame and movingAverage must be one of the following options:
colourFrame.depth() == CV_8U && movingAverage.depth() == CV_32F
colourFrame.depth() == CV_8U && movingAverage.depth() == CV_64F
colourFrame.depth() == CV_16U && movingAverage.depth() == CV_32F
colourFrame.depth() == CV_16U && movingAverage.depth() == CV_64F
colourFrame.depth() == CV_32F && movingAverage.depth() == CV_32F
colourFrame.depth() == CV_32F && movingAverage.depth() == CV_64F
colourFrame.depth() == CV_64F && movingAverage.depth() == CV_64F
Anything different than that will make getAccTabIdx() return -1 and trigger the exception at line 431.
From the OpenCV API documentation you can see that the output image from accumulateWeighted is
dst – Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point.
So your initialization is wrong. You should retrieve the colourFrame size first and then do this:
cv::Mat movingAverage = cv::Mat::zeros(colourFrame.size(), CV_32FC3);
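Putting it together, a minimal sketch of the loop with a correctly typed accumulator (this assumes 8-bit 3-channel camera frames; the accumulator is converted back to 8-bit before absdiff, since absdiff expects both inputs to have the same type):
    // Sketch: allocate the accumulator as floating point and convert it back
    // to 8-bit before taking the per-frame difference.
    cap >> colourFrame;                                  // first frame just gives us the size
    movingAverage = cv::Mat::zeros(colourFrame.size(), CV_32FC3);
    cv::Mat background8u;
    while( key != 'q' ) {
        cap >> colourFrame;
        accumulateWeighted(colourFrame, movingAverage, 0.02);
        movingAverage.convertTo(background8u, CV_8UC3);  // back to 8-bit so the types match
        absdiff(colourFrame, background8u, difference);
        cvtColor(difference, greyFrame, CV_BGR2GRAY);
        threshold(greyFrame, greyFrame, 70, 255, CV_THRESH_BINARY);
        imshow("Camera Window", greyFrame);
        key = cvWaitKey(1);
    }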
In Python, a working solution is to initialise movingAverage from the FIRST colourFrame, using colourFrame.copy().astype("float").
I found the solution on this website