Find and replace an image using OpenCV - C++

I am using OpenCV to write a program that can find one image inside another and replace it with a different image.
Here is my 1st image.
Now I have a 2nd image like this.
I need to replace the second image with this one,
and the final output should look like this.
So how do I start? I am not sure how to find the region: I tried template matching, but template matching requires the images to be almost exactly equal, and it fails when my images are distorted or skewed.
How can I match the image, get its bounds using OpenCV, and replace it with another image?
Any help would be appreciated.
Thank you

The SURF algorithm is what you want. See the OpenCV SURF example.

You can use the SURF algorithm for matching the images, as shown here.
The tutorial's line
line( img_scene, scene_corners[0] + Point2f( img_object.cols, 0), scene_corners[1] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
offsets the corners by the object's width. That offset is only correct for the side-by-side match image, so on the scene image it draws the box in the wrong place. Drop the offset when drawing on img_scene:
line( img_scene, scene_corners[0], scene_corners[1], Scalar(0, 255, 0), 4 );
line( img_scene, scene_corners[1], scene_corners[2], Scalar(0, 255, 0), 4 );
line( img_scene, scene_corners[2], scene_corners[3], Scalar(0, 255, 0), 4 );
line( img_scene, scene_corners[3], scene_corners[0], Scalar(0, 255, 0), 4 );
Now, to replace the detected region, warp the replacement image with the same homography H and copy it into the scene through a warped mask:
Mat temp;
cv::resize(mReplacementImage, temp, img_object.size());
warpPerspective(temp, mReplacementImage, H, img_scene.size());
Mat mask = cv::Mat::ones(img_object.size(), CV_8U);
Mat temp2;
warpPerspective(mask, temp2, H, img_scene.size());
mReplacementImage.copyTo(img_scene, temp2);
cv::imwrite("output.bmp", img_scene);

Related

How do I draw a green rectangle with opencv?

I'm really new to C++, so my apologies for such a question. I've been trying these out, but they don't seem to work.
(I'm executing the template matching function in OpenCV - https://docs.opencv.org/3.4/de/da9/tutorial_template_matching.html)
Edit: Here is my code for the image, template and mask I used!
cv::Mat image = cv::Mat(height, width, CV_16UC1, image); // image data is in short
cv::Mat temp;
image.convertTo(temp, CV_32F); // convert image to 32-bit float
cv::Mat image_template = cv::Mat(t_width, t_height, CV_32F, t_image); // template
cv::Mat mask_template = cv::Mat(t_width, t_height, CV_32F, m_image); // mask
cv::Mat img_display, result;
temp.copyTo(img_display); // image to display
int result_cols = temp.cols - image_template.cols + 1;
int result_rows = temp.rows - image_template.rows + 1;
result.create(result_rows, result_cols, CV_32FC1);
// all the other code
matchTemplate(temp, image_template, result, 0, mask_template);
normalize( result, result, 0, 1, cv::NORM_MINMAX, -1, cv::Mat());
// localize the minimum and maximum values in the result matrix
double minVal;
double maxVal;
cv::Point minLoc;
cv::Point maxLoc;
cv::Point matchLoc;
minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc, cv::Mat());
// for match_method TM_SQDIFF we take lowest values
matchLoc = minLoc;
// display source image and result matrix , draw rectangle around highest possible matching area
cv::rectangle( img_display, matchLoc, cv::Point( matchLoc.x + image_template.cols, matchLoc.y + image_template.rows), cv::Scalar::all(255), 2, 8, 0);
cv::rectangle( result, matchLoc, cv::Point(matchLoc.x + image_template.cols, matchLoc.y + image_template.rows), cv::Scalar::all(255), 2, 8, 0);
This is the given code:
cv::rectangle( img_display, matchLoc, cv::Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), cv::Scalar::all(0), 2, 8, 0 );
I tried changing it to the following snippets, but they don't seem to work.
cv::rectangle( img_display, matchLoc, cv::Point( matchLoc.x + templ.cols , matchLoc.y + templ.rows ), cv::Scalar(0,255,0) , 2, 8, 0 );
This doesn't work either
rectangle(ref, maxloc, Point(maxloc.x + tpl.cols, maxloc.y + tpl.rows), CV_RGB(0,255,0), 2);
Do let me know where I am wrong!
First of all, you are trying to use colors in the 0-255 range, but you can't, because your image is a float image (32FC1); float image pixel values are expected to lie in the range 0.0 to 1.0.
You need to convert your image to 8-bit (CV_8U) to be able to colorize it easily, though that approach also has several problems, as mentioned here. The OpenCV matchTemplate function always produces its result in 32FC1 format, so it is difficult to draw colored shapes directly on that result image.
On your source image you can draw rectangles in any color you like, just not while it is float type. You can also check this link.
Simply use OpenCV's CV_RGB(r, g, b) macro, which is already defined in the OpenCV headers; it builds a cv::Scalar with the channels reordered into OpenCV's default BGR layout, so you can specify the color in RGB order.
Then draw your green rectangle this way.
rectangle(frame, Point(startX, startY), Point(endX, endY), CV_RGB(0, 255, 0), 2);

OpenCV exception in fillConvexPoly

I am trying to port code from Python to C++. I need to paint part of an image (a polygon) with black.
In Python, I had the list of points and then called fillConvexPoly to do that.
I tried the same in C++ but I get this exception:
OpenCV Error: Assertion failed (points.checkVector(2, CV_32S) >= 0) in fillConvexPoly, file /home/user/template_matching/repositories/opencv/modules/core/src/drawing.cpp, line 2017
terminate called after throwing an instance of 'cv::Exception'
what(): /home/user/template_matching/repositories/opencv/modules/core/src/drawing.cpp:2017: error: (-215) points.checkVector(2, CV_32S) >= 0 in function fillConvexPoly
I am able to paint lines on the image, and I do have points in the list used as the polygon contour:
drawMatches(img_object, TSMART_BANKNOTE[i].kp, img_orig, BankNote_t.kp,
good, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
line(img_matches, scene_corners[0] + Point2f(img_object.cols, 0),
scene_corners[1] + Point2f(img_object.cols, 0),
Scalar(0, 255, 0), 4); //Left side
line(img_matches, scene_corners[2] + Point2f(img_object.cols, 0),
scene_corners[3] + Point2f(img_object.cols, 0),
Scalar(0, 255, 0), 4); //right side
line(img_matches, scene_corners[1] + Point2f(img_object.cols, 0),
scene_corners[2] + Point2f(img_object.cols, 0),
Scalar(0, 255, 0), 4); //Down line
line(img_matches, scene_corners[3] + Point2f(img_object.cols, 0),
scene_corners[0] + Point2f(img_object.cols, 0),
Scalar(0, 255, 0), 4); //Up line
vector<Point2f> pt;
pt.push_back(Point2f(scene_corners[0].x + img_object.cols,
scene_corners[1].y));
pt.push_back(Point2f(scene_corners[2].x + img_object.cols,
scene_corners[3].y));
pt.push_back(Point2f(scene_corners[1].x + img_object.cols,
scene_corners[2].y));
pt.push_back(Point2f(scene_corners[3].x + img_object.cols,
scene_corners[0].y));
std::cout << "PT POINTS: " << pt.size() <<"\r\n";
//fillConvexPoly(img_matches, pt, pt.size(), Scalar(1,1,1), 16, 0);
fillConvexPoly(img_matches, pt, cv::Scalar(255), 16, 0);
//-- Show detected matches
std::cout <<"H5\r\n";
imshow( "Good Matches & Object detection", img_matches);
waitKey(0);
In the documentation I found a fillConvexPoly overload that receives the size of the vector, but mine doesn't seem to accept it. I am using OpenCV 2.4.6.
The exception says it was checking for the CV_32S type (signed integer), whereas your points are of float type.
Replace your
std::vector<cv::Point2f> pt;
with
std::vector<cv::Point2i> pt;

OpenCV - print object name only when homography is drawn

I have an OpenCV program which uses SURF to detect whether a template object appears in the video stream. I want to print the object's name when the object is detected, but at the moment it prints whenever a "good" feature match is found, and the vast majority of those are false positives.
My program is as follows:
//Step 1: Detect keypoints using SURF detector
//Step 2: Calculate descriptors (feature vectors)
//Step 3: Matching descriptor vectors using FLANN matcher
//Step 4: Localise the object
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( int i = 0; i < good_matches.size(); i++ )
{
//-- Get the keypoints from the good matches
obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, CV_RANSAC );
std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0,0); obj_corners[1] = cvPoint( img_object.cols, 0 );
obj_corners[2] = cvPoint( img_object.cols, img_object.rows ); obj_corners[3] = cvPoint( 0, img_object.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform( obj_corners, scene_corners, H);
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0), scene_corners[1] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0), scene_corners[2] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0), scene_corners[3] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0), scene_corners[0] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
if() {
std::cout << fileNamePostCut << std::endl;
}
...
I'm not sure which condition to state in order to print the object name (fileNamePostCut)
Your goal is to dismiss false positives, and you should pursue two approaches:
First, use the SIFT ratio test to discard ambiguous matches.
Second, you are already calculating a homography from your matches, so use it as a model for correct matches: cv::findHomography has an optional output parameter called mask. Use it to determine how many matches actually contributed to the homography (these are called inliers). The more, the better; for example, only print the object's name if you have more than 10 inliers.

SIFT object detection bounding box

I am trying to track an object in a video stream using the SIFT algorithm. I want to detect the object and track it by drawing a rectangle around it. The problem is that the rectangle gets skewed and is not drawn accurately most of the time. I am using the following code to draw the rectangle around the detected object (videoImage is the frame from the video stream).
line(videoImage, sceneCorners[0], sceneCorners[1], Scalar(255, 0, 0), 2);
line(videoImage, sceneCorners[1], sceneCorners[2], Scalar(255, 0, 0), 2);
line(videoImage, sceneCorners[2], sceneCorners[3], Scalar(255, 0, 0), 2);
line(videoImage, sceneCorners[3], sceneCorners[0], Scalar(255, 0, 0), 2);
I also tried the following code (imgMatches is the image with only the good matches)
line(imgMatches, sceneCorners[0] + Point2f( object.cols, 0), sceneCorners[1] + Point2f( object.cols, 0), Scalar(0, 255, 0), 2);
line(imgMatches, sceneCorners[1] + Point2f( object.cols, 0), sceneCorners[2] + Point2f( object.cols, 0), Scalar(0, 255, 0), 2);
line(imgMatches, sceneCorners[2] + Point2f( object.cols, 0), sceneCorners[3] + Point2f( object.cols, 0), Scalar(0, 255, 0), 2);
line(imgMatches, sceneCorners[3] + Point2f( object.cols, 0), sceneCorners[0] + Point2f( object.cols, 0), Scalar(0, 255, 0), 2);
Both seem to give the same result. So my question is: how do I draw a bounding rectangle that stays consistent with the tracked object? By the way, I am using OpenCV (C++) with Visual Studio 2010 on Windows 7.
The problem is not drawing the rectangle, but detecting the object correctly. Detections in single images are commonly noisy when you only get a few keypoints, even after filtering them with RANSAC and a fundamental matrix or a homography.
If you want a more accurate rectangle around the object, you need a better detection algorithm. For example, you can look for additional correspondences once you have a first hint of the object's position in the image.
Maybe have a look at this question: SIFT matches and recognition?. It is about the same problem; the solution there is a 4D Hough space.

OpenCV's findHomography produces nonsense results

I am making a program that tracks features with ORB from OpenCV (2.4.3). I followed
this tutorial and used advice from here.
My goal is to track the object (a face) in the video feed and draw a rectangle around it.
My program finds keypoints and matches them correctly, but when I try to use findHomography + perspectiveTransform to find new corners for the image, it usually returns nonsense values (though it sometimes returns a correct homography).
Here is an example picture:
Here is the corresponding problematic part:
Mat H = findHomography( obj, scene, CV_RANSAC );
//-- Get the corners from the image_1 ( the object to be "detected" )
std::vector<Point2f> obj_corners(4);
obj_corners[0] = cvPoint(0,0); obj_corners[1] = cvPoint( img_object.cols, 0 );
obj_corners[2] = cvPoint( img_object.cols, img_object.rows ); obj_corners[3] = cvPoint( 0, img_object.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform( obj_corners, scene_corners, H);
//-- Draw lines between the corners (the mapped object in the scene - image_2 )
line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0), scene_corners[1] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0), scene_corners[2] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0), scene_corners[3] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0), scene_corners[0] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
The rest of the code is practically the same as in the links I provided.
The lines drawn seem completely random; my goal is only to get a minimal rectangle around the source object in the new scene, so an alternative to using a homography would work too.
P.S. The source image to track is a region copied from the video input and then tracked in new frames from that input; does that matter?
Estimating the homography directly (e.g. with a plain least-squares fit) assumes that your corresponding point sets contain no errors. With real-world data you cannot assume that. The solution is a robust estimation method such as RANSAC, which treats the homography problem as an overdetermined system of equations and tolerates outliers.
That is what findHomography offers: it takes a set of point correspondences (at least 4 pairs, though a larger set is better) and returns an estimated homography that is more robust against errors. By passing the CV_RANSAC flag you let it remove outliers (wrong point correspondences) internally.