I am trying to track an object in a video stream using the SIFT algorithm. I want to detect the object and track it by drawing a rectangle around it. The problem is that the rectangle gets skewed and is not accurately drawn most of the time. I am using the following code to draw the rectangle around the detected object (videoImage is the frame from the video stream):
line(videoImage, sceneCorners[0], sceneCorners[1], Scalar(255, 0, 0), 2);
line(videoImage, sceneCorners[1], sceneCorners[2], Scalar(255, 0, 0), 2);
line(videoImage, sceneCorners[2], sceneCorners[3], Scalar(255, 0, 0), 2);
line(videoImage, sceneCorners[3], sceneCorners[0], Scalar(255, 0, 0), 2);
I also tried the following code (imgMatches is the image with only the good matches):
line(imgMatches, sceneCorners[0] + Point2f( object.cols, 0), sceneCorners[1] + Point2f( object.cols, 0), Scalar(0, 255, 0), 2);
line(imgMatches, sceneCorners[1] + Point2f( object.cols, 0), sceneCorners[2] + Point2f( object.cols, 0), Scalar(0, 255, 0), 2);
line(imgMatches, sceneCorners[2] + Point2f( object.cols, 0), sceneCorners[3] + Point2f( object.cols, 0), Scalar(0, 255, 0), 2);
line(imgMatches, sceneCorners[3] + Point2f( object.cols, 0), sceneCorners[0] + Point2f( object.cols, 0), Scalar(0, 255, 0), 2);
Both seem to give the same result. So my question is: how do I draw a rectangle around the tracked object that stays consistent with it? By the way, I am using OpenCV (C++) with Visual Studio 2010 on Windows 7.
The problem is not drawing the rectangle, but detecting the object correctly. Detections in single images are commonly noisy when you only get a few keypoints, even if you filter them with RANSAC and a fundamental matrix or a homography.
If you want a more accurate rectangle around the object, you must write a better detection algorithm. For example, you can look for more correspondences once you have a first hint of the object's position in the image, as in the sketch below.
Maybe have a look at the question "SIFT matches and recognition?", which deals with the same problem. The solution there is a 4D Hough space.
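For instance, a minimal sketch of that "first hint" idea (prevBox, the margin sizes, and restricting SIFT to a region of interest are hypothetical illustrations, not part of the answer):
// #include <opencv2/nonfree/features2d.hpp>  // cv::SIFT in OpenCV 2.4
// Grow the previous detection box, clip it to the frame, and detect
// keypoints only inside that region of interest.
cv::Rect searchRoi = prevBox + cv::Size(40, 40);   // enlarge...
searchRoi -= cv::Point(20, 20);                    // ...keeping it centred
searchRoi &= cv::Rect(0, 0, videoImage.cols, videoImage.rows);

cv::SIFT sift;
std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
sift(videoImage(searchRoi), cv::noArray(), keypoints, descriptors);

// Keypoint coordinates are relative to the ROI; shift them back into
// full-frame coordinates before matching.
for (size_t i = 0; i < keypoints.size(); i++)
    keypoints[i].pt += cv::Point2f((float)searchRoi.x, (float)searchRoi.y);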
Related
I am trying to port code from Python to C++. I need to paint part of an image (a polygon) black.
In Python, I had the list of points and then called fillConvexPoly to do that.
I tried the same in C++, but I get this exception:
OpenCV Error: Assertion failed (points.checkVector(2, CV_32S) >= 0) in fillConvexPoly, file /home/user/template_matching/repositories/opencv/modules/core/src/drawing.cpp, line 2017
terminate called after throwing an instance of 'cv::Exception'
what(): /home/user/template_matching/repositories/opencv/modules/core/src/drawing.cpp:2017: error: (-215) points.checkVector(2, CV_32S) >= 0 in function fillConvexPoly
I am able to paint lines on the image, and I have points in the list used as the polygon contour:
drawMatches(img_object, TSMART_BANKNOTE[i].kp, img_orig, BankNote_t.kp,
            good, img_matches, Scalar::all(-1), Scalar::all(-1),
            vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
line(img_matches, scene_corners[0] + Point2f(img_object.cols, 0),
     scene_corners[1] + Point2f(img_object.cols, 0),
     Scalar(0, 255, 0), 4); // Left side
line(img_matches, scene_corners[2] + Point2f(img_object.cols, 0),
     scene_corners[3] + Point2f(img_object.cols, 0),
     Scalar(0, 255, 0), 4); // Right side
line(img_matches, scene_corners[1] + Point2f(img_object.cols, 0),
     scene_corners[2] + Point2f(img_object.cols, 0),
     Scalar(0, 255, 0), 4); // Down line
line(img_matches, scene_corners[3] + Point2f(img_object.cols, 0),
     scene_corners[0] + Point2f(img_object.cols, 0),
     Scalar(0, 255, 0), 4); // Up line

vector<Point2f> pt;
pt.push_back(Point2f(scene_corners[0].x + img_object.cols, scene_corners[1].y));
pt.push_back(Point2f(scene_corners[2].x + img_object.cols, scene_corners[3].y));
pt.push_back(Point2f(scene_corners[1].x + img_object.cols, scene_corners[2].y));
pt.push_back(Point2f(scene_corners[3].x + img_object.cols, scene_corners[0].y));
std::cout << "PT POINTS: " << pt.size() << "\r\n";

//fillConvexPoly(img_matches, pt, pt.size(), Scalar(1,1,1), 16, 0);
fillConvexPoly(img_matches, pt, cv::Scalar(255), 16, 0);

//-- Show detected matches
std::cout << "H5\r\n";
imshow("Good Matches & Object detection", img_matches);
waitKey(0);
In the documentation I found a fillConvexPoly overload that receives the size of the vector, but mine doesn't seem to accept that. I am using OpenCV 2.4.6.
The assertion says the function was checking for the CV_32S type (signed integer), whereas your points are of float type.
Replace your
std::vector<cv::Point2f> pt;
with
std::vector<cv::Point2i> pt;
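Alternatively, if you would rather keep the float points coming out of perspectiveTransform, a minimal sketch of converting them just before the call (based on the code above):
// fillConvexPoly expects integer points (CV_32S); round the floats.
std::vector<cv::Point> ptInt(pt.size());
for (size_t i = 0; i < pt.size(); i++)
    ptInt[i] = cv::Point(cvRound(pt[i].x), cvRound(pt[i].y));
cv::fillConvexPoly(img_matches, ptInt, cv::Scalar(255), 16, 0);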
I have two rectangles around the eyes. Now I want to draw three more rectangles: one on the left side of the head, one on the upper side, and one on the right side, all positioned relative to the eyes. I've tried to shift the eye's rectangle using Rect = Rect +- Point, but what I get is a bigger (or smaller) rectangle at the same location.
Any idea how I could do this? Thanks in advance.
Here my code:
for (i = 0; i < eyes.size(); i++) {
    Rect eye_i = eyes[i];
    rectangle(frame, eye_i, CV_RGB(255, 0, 255), 1);
    DrawRectangle(eye_i, frame);
    Mat eyeroi = frame(eyes[i]);
    imwrite("eye.jpeg", eyeroi);
}

void DrawRectangle(Rect rect, Mat frame) {
    Rect up = rect + Point(-8, 10);
    Rect right = rect + Point(20, 0);
    Rect left = rect + Point(-8, 0);
    rectangle(frame, up, CV_RGB(0, 255, 255), 1);
    rectangle(frame, right, CV_RGB(0, 255, 255), 1);
    rectangle(frame, left, CV_RGB(0, 255, 255), 1);
}
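For reference, cv::Rect arithmetic depends on the operand type: adding a cv::Point translates the rectangle, while adding a cv::Size resizes it. A minimal sketch (the offsets are arbitrary example values):
// rect + Point shifts the rectangle; rect + Size resizes it.
cv::Rect eye(100, 100, 40, 20);
cv::Rect above  = eye + cv::Point(0, -30);  // same size, moved up
cv::Rect left   = eye + cv::Point(-50, 0);  // same size, moved left
cv::Rect right  = eye + cv::Point(50, 0);   // same size, moved right
cv::Rect bigger = eye + cv::Size(10, 10);   // same origin, 10 px wider/taller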
I have an OpenCV program which uses SURF to detect whether a template object appears in the video stream. I want to print the object name when the object is detected, but at the moment the name seems to print whenever a "good" feature match is found, and the vast majority of those are false positives.
My program is as follows:
//Step 1: Detect keypoints using SURF detector
//Step 2: Calculate descriptors (feature vectors)
//Step 3: Matching descriptor vectors using FLANN matcher
//Step 4: Localise the object
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for (int i = 0; i < good_matches.size(); i++)
{
    //-- Get the keypoints from the good matches
    obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
    scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
}
Mat H = findHomography(obj, scene, CV_RANSAC);

std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0, 0);
obj_corners[1] = cvPoint(img_object.cols, 0);
obj_corners[2] = cvPoint(img_object.cols, img_object.rows);
obj_corners[3] = cvPoint(0, img_object.rows);
std::vector<Point2f> scene_corners(4);
perspectiveTransform(obj_corners, scene_corners, H);

//-- Draw lines between the corners (the mapped object in the scene - image_2)
line(img_matches, scene_corners[0] + Point2f(img_object.cols, 0), scene_corners[1] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[1] + Point2f(img_object.cols, 0), scene_corners[2] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[2] + Point2f(img_object.cols, 0), scene_corners[3] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
line(img_matches, scene_corners[3] + Point2f(img_object.cols, 0), scene_corners[0] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);

if() {
    std::cout << fileNamePostCut << std::endl;
}
...
I'm not sure which condition to test in order to print the object name (fileNamePostCut).
Your goal is to dismiss false positives, and you should pursue two approaches.
First, use the SIFT ratio test to dismiss unclear matches.
Second, you are already calculating a homography from your matches; use it as a model for correct matches. cv::findHomography has an optional output called mask. Use it to determine how many matches actually contributed to the homography (these are called inliers). The more, the better: for example, only print the object's name if you have more than 10 inliers. A sketch of both steps follows.
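A minimal sketch of both checks, assuming the descriptors from steps 1-2 are in descriptors_object and descriptors_scene, and that obj and scene are then filled from good_matches as in the question (the 0.75 ratio threshold is a common choice, not a fixed rule):
// Lowe's ratio test: query the two nearest neighbours per descriptor
// and keep a match only if it clearly beats the runner-up.
cv::FlannBasedMatcher matcher;
std::vector<std::vector<cv::DMatch> > knnMatches;
matcher.knnMatch(descriptors_object, descriptors_scene, knnMatches, 2);

std::vector<cv::DMatch> good_matches;
for (size_t i = 0; i < knnMatches.size(); i++)
{
    if (knnMatches[i].size() == 2 &&
        knnMatches[i][0].distance < 0.75f * knnMatches[i][1].distance)
        good_matches.push_back(knnMatches[i][0]);
}

// ... fill obj and scene from good_matches as before ...

// RANSAC inlier count: the mask marks the matches consistent with H.
std::vector<uchar> inlierMask;
cv::Mat H = cv::findHomography(obj, scene, CV_RANSAC, 3, inlierMask);
if (!H.empty() && cv::countNonZero(inlierMask) > 10)
{
    std::cout << fileNamePostCut << std::endl;
}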
I am using OpenCV to write code that can find one image inside another and replace it with a different image.
Here is my 1st image
Now i have 2nd image as this
I need to replace the second image with this
and final output should be like this
So how do I start? I am not sure how to find it. I tried template matching, but template matching expects the images to be practically identical, and it doesn't work when my images are distorted or skewed in some manner.
How can I match the image, get its bounds with OpenCV, and replace it with another image?
Any help would be appreciated
Thank you
The SURF algorithm is what you want; see the OpenCV SURF example. You can use SURF to match the images as shown there.
The code
line(img_scene, scene_corners[0] + Point2f(img_object.cols, 0), scene_corners[1] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
offsets the corners by the object's width, which is only correct when drawing on the side-by-side matches image; on the scene image itself it draws in the wrong place. So use this instead:
line(img_scene, scene_corners[0], scene_corners[1], Scalar(0, 255, 0), 4);
line(img_scene, scene_corners[1], scene_corners[2], Scalar(0, 255, 0), 4);
line(img_scene, scene_corners[2], scene_corners[3], Scalar(0, 255, 0), 4);
line(img_scene, scene_corners[3], scene_corners[0], Scalar(0, 255, 0), 4);
Now, to replace the object, warp the replacement image with the same homography:
Mat temp;
// Resize the replacement so it matches the template object's size
cv::resize(mReplacementImage, temp, img_object.size());
// Warp the replacement into scene coordinates using the homography H
warpPerspective(temp, mReplacementImage, H, img_scene.size());
// Warp a mask of the object region the same way, so only the
// projected quadrilateral is overwritten in the scene
Mat mask = cv::Mat::ones(img_object.size(), CV_8U);
Mat temp2;
warpPerspective(mask, temp2, H, img_scene.size());
mReplacementImage.copyTo(img_scene, temp2);
cv::imwrite("output.bmp", img_scene);
I am making a program that tracks features with ORB from OpenCV (2.43). I followed
this tutorial and used advice from here.
My goal is to track an object (a face) in the video feed and draw a rectangle around it.
My program finds keypoints and matches them correctly, but when I try to use findHomography + perspectiveTransform to find the new corners of the object, it usually returns nonsense values (though it sometimes returns a correct homography).
Here is an example picture:
Here is the corresponding problematic part:
Mat H = findHomography(obj, scene, CV_RANSAC);
//-- Get the corners from the image_1 (the object to be "detected")
std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0, 0);
obj_corners[1] = cvPoint(img_object.cols, 0);
obj_corners[2] = cvPoint(img_object.cols, img_object.rows);
obj_corners[3] = cvPoint(0, img_object.rows);
std::vector<Point2f> scene_corners(4);
perspectiveTransform( obj_corners, scene_corners, H);
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0), scene_corners[1] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0), scene_corners[2] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0), scene_corners[3] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0), scene_corners[0] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
The rest of the code is practically the same as in the links I provided.
The lines drawn seem completely random; my goal is only to get the minimal rectangle of the source object in the new scene, so if there is an alternative to using a homography, that works too.
P.S. The source image to track is a region copied from the video input and then tracked in new frames from that input; does that matter?
The function getPerspectiveTransform estimates the homography under the assumption that your corresponding point set is not error prone. With real-world data you cannot assume that. The solution is to use a robust estimation method such as RANSAC, which solves the homography problem as an overdetermined system of equations.
Use the findHomography function instead, which returns a homography estimated from a set of points. This set needs at least 4 points, but a larger set is better. The homography is then only an estimate, but one that is more robust against errors: with the CV_RANSAC flag it can remove outliers (wrong point correspondences) internally.
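If bad frames still slip through, one simple sanity check (a sketch, not part of the original answer) is to reject homographies whose projected corners do not form a convex quadrilateral, since a camera view of a planar rectangle always produces one:
// Plausibility check on the estimated homography; obj_corners is the
// same vector as in the question's code.
if (!H.empty()) {
    std::vector<cv::Point2f> scene_corners(4);
    cv::perspectiveTransform(obj_corners, scene_corners, H);
    // A real view of the planar object maps its rectangle to a convex
    // quadrilateral; skip drawing on frames where that fails.
    if (cv::isContourConvex(scene_corners)) {
        // homography looks plausible, draw the rectangle here
    }
}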