How to use shift and offset attributes in function fillPoly - c++

I'm looking for the easiest way to apply an offset to a minimized mask on the image. I tried to use the function fillPoly. The names of the parameters 'shift' and 'offset', especially the latter, sounded promising. Unfortunately I got the error -215 Assertion failed when I tried to use them.
I cannot find any examples of how these parameters are used. Can anybody point me to one?
This is not the whole code, but it should help in understanding my issue.
std::vector<std::vector<cv::Point> > fillContAll;
fillContAll.push_back(fillContSingle);
cv::Mat maska;
srcImage.copyTo(maska);
maska.setTo(0);
cv::fillPoly(maska, fillContAll, cv::Scalar(255, 255, 255),8,50);
// I tried also cv::fillPoly(maska, fillContAll, cv::Scalar(255, 255, 255),8,(5,5));
The whole error message is:
exception thrown!
OpenCV(4.2.0) C:\build\master_winpack-build-win64-vc15\opencv\modules\imgproc\src\drawing.cpp:2005: error: (-215:Assertion failed) pts && npts && ncontours >= 0 && 0 <= shift && shift <= XY_SHIFT in function 'cv::fillPoly'
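For reference, in this fillPoly overload the fifth parameter is shift (the number of fractional bits in the point coordinates, which the assertion limits to the range 0..16) and offset is a separate sixth parameter of type cv::Point that is added to every contour point. A minimal sketch, assuming the same fillContAll as above:
cv::fillPoly(maska, fillContAll, cv::Scalar(255, 255, 255),
             cv::LINE_8,       // lineType (8 and cv::LINE_8 are equivalent)
             0,                // shift: fractional bits in the point coordinates, must be 0..16
             cv::Point(5, 5)); // offset: moves every contour point by (5, 5) before drawing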

Related

What does 'faces.size()' mean in this openCV loop?

I'm trying to teach myself OpenCV and C++, and this example program for face and eye detection includes the line:
for(size_t i = 0; i < faces.size(); i++)
I don't understand what faces.size() means, and, following from that, at what point i could become greater than faces.size().
How does it acquire a numerical value?
I see plenty of instances of faces throughout the rest of the program, but the only time I see Size is as a parameter for face_cascade.detectMultiScale. It is capitalized, though, which makes me think it has nothing to do with faces.size().
faces.size()
Returns the size of 'faces', i.e. how many faces there are in 'faces'.
In general a basic for loop is structured like so:
for ( init; condition; increment )
{
//your code...
}
It will run as long as the condition is true, i.e. as long as 'i' is less than faces.size() (which might be 10 or some other integer value).
'i' gets bigger because 1 is added to it on each loop iteration; this is what the i++ instruction does.
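As a small illustration (assuming faces is the std::vector<Rect> filled by detectMultiScale and frame is the image being processed), the loop body typically uses each element like this:
for (size_t i = 0; i < faces.size(); i++) {
    // faces[i] is a cv::Rect describing one detected face
    cv::rectangle(frame, faces[i], cv::Scalar(0, 255, 0)); // draw a green box around it
}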
I'd suggest that if you're struggling with loop syntax, OpenCV might not be the best place to start learning C++, as many of the examples expect a level of competence above 'beginner' (both intentionally and unintentionally, through terse coding, lack of comments, etc.).
faces is being populated here:
//-- Detect faces
face_cascade.detectMultiScale( frame_gray, faces, 1.1, 2, 0|CASCADE_SCALE_IMAGE, Size(30, 30) );
According to the OpenCV documentation:
void cv::CascadeClassifier::detectMultiScale( InputArray image,
                                              std::vector<Rect>& objects,
                                              double scaleFactor = 1.1,
                                              int minNeighbors = 3,
                                              int flags = 0,
                                              Size minSize = Size(),
                                              Size maxSize = Size() )
where std::vector< Rect > & objects (faces in your case) is a
Vector of rectangles where each rectangle contains the detected
object, the rectangles may be partially outside the original image.
As you can see, objects is passed by reference to allow its modification inside the function.
Also, std::vector<Type>::size() gives you the size of your vector, so i < faces.size() is needed to keep the index i within the bounds of the vector.
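Putting the two together, a minimal sketch of the flow (assuming frame_gray and face_cascade are set up as in the example program):
std::vector<Rect> faces;   // starts empty, so faces.size() == 0
face_cascade.detectMultiScale( frame_gray, faces, 1.1, 2, 0|CASCADE_SCALE_IMAGE, Size(30, 30) );
// detectMultiScale fills 'faces' through the reference parameter,
// so faces.size() is now the number of detected faces.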

OpenCV : Fast Template matching algorithm

I am currently trying to look for an image inside a video. The main goal is to follow some actions in the video, like a button being pressed or a pop-up window being displayed on the screen.
The code I'm using relies on OpenCV's template matching function:
// For SQDIFF and SQDIFF_NORMED, the best matches are lower values. For all the other methods, the higher the better.
if( matchingMethod == CV_TM_SQDIFF || matchingMethod == CV_TM_SQDIFF_NORMED )
    matchLoc = minLoc;
else
    matchLoc = maxLoc;
if( !((matchLoc.x == 0) && (matchLoc.y == 0)) || maxVal >= 0.8 )
    return TRUE;
return FALSE;
}
The test is done with these two templates:
And the full image is a 3840x2160 image (I can't post the whole image here since it is too big as a BMP):
1) How is it possible that, for two templates with only a few pixels of difference, the algorithm can detect the first one but completely miss the second one?
2) Is it possible that color depth could cause problems in the detection?
Both templates are loaded as BMP files with 24-bit depth. The source image is converted to 24-bit depth.
Threshold is set to 0.92 for good accuracy.
MaxLevels is set to 1 for very good accuracy, since 2 does not find any matches.
Thank you for your help and advice.
For those who may have the same issue: I just had to handle the return value differently.
Instead of
if( !((matchLoc.x == 0) && (matchLoc.y == 0)) || maxVal >= 0.8)
return TRUE;
which returns true as soon as a potential match (80% match) has been found.
Now I return true only if maxVal is greater than 0.99, which means a very strong match.
if( maxVal >= 0.99)
return TRUE;
The second element I've changed is the threshold value, which is used to classify the pixel values. I've decreased it to 0.82 instead of 0.94 to get more possible matches and then filter them using maxVal.
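For context, here is a minimal sketch of the surrounding matchTemplate/minMaxLoc flow with the stricter check described above (the names image and templ are placeholders, not taken from the original code):
cv::Mat result;
cv::matchTemplate(image, templ, result, CV_TM_CCOEFF_NORMED);
double minVal, maxVal;
cv::Point minLoc, maxLoc;
cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);
// Only accept a very strong match.
bool found = (maxVal >= 0.99);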

Modifying Mat attribute in class with mouse callback

Going to explain using bullets to make it easy to read (hopefully):
- I'm writing a program which needs to be able to draw on top of images, using the mouse.
- The way I have organized the program is that each image is stored in its own instance of a class. The instance includes a cv::Mat attribute where the image is saved, and a blank cv::Mat (which I refer to as the canvas) where I want whatever is drawn to be saved. The canvas is the same size and type as the image cv::Mat.
- I've written a mouse callback function to draw rectangles; however, I get an error (which I believe has to do with retrieving the stored canvas from the object).
OpenCV Error: Assertion failed (0 <= _dims && _dims <= CV_MAX_DIM) in setSize, file /tmp/opencv-0tcS7S/opencv-2.4.9/modules/core/src/matrix.cpp, line 89
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /tmp/opencv-0tcS7S/opencv-2.4.9/modules/core/src/matrix.cpp:89: error: (-215) 0 <= _dims && _dims <= CV_MAX_DIM in function setSize
Here is my code:
void draw(int event, int x, int y, int flags, void* params){
    ImgData* input = (ImgData*)params; // Convert the void pointer passed in back to an ImgData pointer.
    if(event == EVENT_LBUTTONDOWN){
        printf("Left mouse button clicked.\n");
        input->ipt = Point(x,y); // Store the initial point; this point is saved in memory.
        printf("Original position = (%i,%i)\n",x,y);
    }else if(event == EVENT_LBUTTONUP){
        cv::Mat temp_canvas;
        input->getCanvas().copyTo(temp_canvas);
        printf("Left mouse button released.\n");
        input->fpt = Point(x,y); // Final point.
        cv::rectangle(temp_canvas, input->ipt, input->fpt, Scalar(200,0,0));
        input->setCanvas(temp_canvas);
    }
}
The way I'm trying to do it is to copy the canvas from the object instance, draw the rectangle on this copy, then overwrite the old canvas with this modified canvas.
Any help with this or explanation as to why it is happening will be greatly appreciated.
Thank you
I've solved the problem; it was simply that I wasn't passing the object correctly:
setMouseCallback("Window", draw, &images->at(imageShown)); // replaced the code: setMouseCallback("Window", draw, &images[imageShown]);
Silly mistake...
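To spell out the difference (assuming images is a pointer to a std::vector<ImgData>, as the corrected call suggests), a small sketch:
// &images[imageShown]      -> pointer arithmetic on the vector pointer itself, i.e. not a valid ImgData*
// &images->at(imageShown)  -> address of the element inside the vector, which is what draw() expects
cv::setMouseCallback("Window", draw, &images->at(imageShown));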

Possibly a MatIterator_ error in a Region of Interest

I'm working on some object detection using OpenCV 2.4.4 with C++. Unfortunately, the objects that I must detect don't have a unique shape, but they do have a really specific color range in the HSV color space (right now I'm detecting some red objects).
Also, the object should be present only in some part of the image, so I'm not scanning the whole image; moreover, within this ROI I have sub-ROIs (little rectangles), and I need to know in which of these rectangles the object is in a given frame.
So, first I tried to detect whether something was found in the large ROI, and if it was, I set it to some color alien to my scene. I did it with the following code:
cvtColor(frame(Range(0,espv),Range::all()), framhsv, CV_BGR2HSV);
MatIterator_<Vec3b> it, ith, end, endh;
ith = framhsv.begin<Vec3b>();
endh = framhsv.end<Vec3b>();
it = (frame(Range(0,espv),Range::all())).begin<Vec3b>();
for (; ith != endh; ++it, ++ith){
    if( (((*ith)[0] <= 7 && (*ith)[0] >= 0) || ((*ith)[0] > 170)) && (*ith)[1] >= 160){
        (*it)[0] = 63;
        (*it)[1] = 0;
        (*it)[2] = 255;
    }
}
Everything worked right and the object was "detected", so I moved on to the next step, which is detecting whether the object is in each sub-ROI. To do so I created the following function, which returns the number of "detected" pixels in each sub-ROI, so I can decide which is the correct sub-ROI.
Here is the function:
Vector<int> CountRed(cv::Mat &frame, int hspa, int vspa, int nlines, int nblocks){
    Vector<int> count(nblocks*nlines, 0);
    for(int i = 1; i <= nlines; i++){
        for(int j = 1; j <= nblocks; j++){
            Mat framhsv;
            cvtColor(frame(Range((i-1)*vspa,i*vspa),Range((j-1)*hspa,j*hspa)), framhsv, CV_BGR2HSV); // Convert only the area of interest to HSV
            MatIterator_<Vec3b> it, ith, endh;
            ith = framhsv.begin<Vec3b>();
            endh = framhsv.end<Vec3b>();
            it = (frame(Range((i-1)*vspa,i*vspa),Range((j-1)*hspa,j*hspa))).begin<Vec3b>();
            for (; ith != endh; ++it, ++ith){
                if( (((*ith)[0] <= 7 && (*ith)[0] >= 0) || ((*ith)[0] > 170)) && (*ith)[1] >= 160){
                    (*it)[0] = 63;
                    (*it)[1] = 0;
                    (*it)[2] = 255;
                    count[(j)+nblocks*(i-1)-1]++;
                }
            }
        }
    }
    return count;
}
The code compiles without warnings or errors and the video shows up, but if I insert the object that I must detect into any part of the ROI the code stops working; this also happens if I remove the
count[(j)+nblocks*(i-1)-1]++;
line. I get the following error:
Access violation writing location 0xFFFFF970.
I really think the problem must be in accessing the iterator when not using Range::all().
To try to clarify what I'm doing, here is an image of a frame:
http://i.imgur.com/Irvr8bk.jpg
The purple region is the frame, the red region is the ROI, and each of the black ones is a sub-ROI.
I've also tried using frame(Rect(0, 0, frame.cols, 2*vspa)) to define the ROI and it also worked, but I got the same error when trying to work with the sub-ROIs using cv::Rect. So, I really think this must be a MatIterator_ error when not accessing the full row structure.
So, what should I do to work with those sub-ROIs ?
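As one possible alternative to hand-rolled iterators (a sketch only, assuming the same i/j loops and the same hspa/vspa block layout as in CountRed), cv::inRange plus cv::countNonZero can do the per-sub-ROI counting and marking:
cv::Mat roi = frame(cv::Rect((j-1)*hspa, (i-1)*vspa, hspa, vspa));
cv::Mat roiHsv, maskLow, maskHigh, redMask;
cv::cvtColor(roi, roiHsv, CV_BGR2HSV);
cv::inRange(roiHsv, cv::Scalar(0, 160, 0),   cv::Scalar(7, 255, 255),   maskLow);  // hue 0..7, saturation >= 160
cv::inRange(roiHsv, cv::Scalar(171, 160, 0), cv::Scalar(180, 255, 255), maskHigh); // hue > 170, saturation >= 160
redMask = maskLow | maskHigh;
count[j + nblocks*(i-1) - 1] = cv::countNonZero(redMask); // number of "red" pixels in this sub-ROI
roi.setTo(cv::Scalar(63, 0, 255), redMask);               // mark the detected pixels in the original frame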

can't understand following c++ code line

I'm new to C++ and am trying to figure out what these lines of code mean:
cur_rect = cv::Rect(cur_rect) & cv::Rect(0, 0, mat->cols, mat->rows); // here
if( cv::Rect(cur_rect) == cv::Rect() ) //here
{
.......
}
The Rect & Rect part intersects two rectangles and gives a non-empty rectangle back when the two inputs overlap.
So you can compare the result to Rect() to see whether there was an intersection. Your code crops cur_rect to (0, 0, mat->cols, mat->rows) and then checks whether it is empty or not.
Sources:
http://opencv.willowgarage.com/documentation/cpp/core_basic_structures.html?highlight=rect
How can one easily detect whether 2 ROIs intersects in OpenCv?
Edit
An alternative implementation, a bit cleaner:
// crop cur_rect to rectangle with matrix 'mat' size:
cur_rect &= cv::Rect(0, 0, mat->cols, mat->rows);
if (cur_rect.area() == 0) {
    // result is empty
    ...
}
I am assuming that the cv::Rect(...) methods (or family of them) return a rectangle object. The line that you do not understand is, I assume, an overloaded operator (==) that compares rectangles.
But I am making a lot of assumptions here, as I do not have the code for the cv classes.
As for the overloaded & operator, one assumes that it performs an intersection or a union. Once again, without the code it is hard to say.
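To make those guesses concrete, here is a small sketch of what cv::Rect's overloaded operators actually do (values chosen purely for illustration):
cv::Rect a(0, 0, 100, 100);
cv::Rect b(50, 50, 100, 100);
cv::Rect inter = a & b;                 // intersection: (50, 50, 50, 50)
cv::Rect bbox  = a | b;                 // minimal bounding rectangle: (0, 0, 150, 150)
bool overlaps  = (inter != cv::Rect()); // true, because the intersection is non-empty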