OpenCV warpPerspective issue - C++

I am currently trying to implement basic image stitching in C++ (OpenCV) in Eclipse. The feature detection part shows great results for SURF features. However, when I attempt to warp the two images together, I get only half the image as the output. I have tried to find a solution everywhere, but to no avail. I even tried to offset the homography matrix, as in this answer: OpenCV warpperspective. Nothing has helped so far.
I'll attach the output images in the comments since I don't have enough reputation points.
For feature detection and homography, I used the exact code from here
http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html
And then I added the following piece of code after the given code,
Mat result;
// Warp the object image into the scene frame; the output canvas is twice the object's width
warpPerspective(img_object, result, H, Size(2 * img_object.cols, img_object.rows));
// ROI over the left part of the warped canvas, the size of the scene image...
Mat half(result, Rect(0, 0, img_scene.cols, img_scene.rows));
// ...and copy the scene into it so both images share one canvas
img_scene.copyTo(half);
imshow("Warped Image", result);
I'm quite new at this and just trying to put the pieces together, so I apologize if there's some basic error.

If you're only trying to put the pieces together, you could try the built-in OpenCV image stitcher class: http://docs.opencv.org/modules/stitching/doc/high_level.html#stitcher
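For what it's worth, a minimal sketch of how that class is used (this assumes the 2.4-era API described by the linked page, with placeholder image paths; in newer OpenCV versions Stitcher::create() returns a smart pointer instead):

#include <opencv2/opencv.hpp>
#include <opencv2/stitching/stitcher.hpp>
#include <vector>

int main()
{
    // Images to be stitched (paths are placeholders)
    std::vector<cv::Mat> imgs;
    imgs.push_back(cv::imread("left.jpg"));
    imgs.push_back(cv::imread("right.jpg"));

    // Let the stitcher estimate features, homographies and blend the panorama
    cv::Stitcher stitcher = cv::Stitcher::createDefault();
    cv::Mat pano;
    cv::Stitcher::Status status = stitcher.stitch(imgs, pano);
    if (status != cv::Stitcher::OK)
        return -1;   // stitching failed (e.g. not enough matching features)

    cv::imwrite("pano.jpg", pano);
    return 0;
}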

I found a related question here: Stitching 2 images in opencv, and implemented the additional code given there. It worked!
For reference, the edited code I wrote was
Mat result;
// Warp the scene image with the homography; give the output extra room in both directions
warpPerspective(img_scene, result, H, Size(img_scene.cols * 2, img_scene.rows * 2), INTER_CUBIC);
// Blank canvas large enough to hold both images side by side
Mat final(Size(img_scene.cols + img_object.cols, img_scene.rows * 2), CV_8UC3);
// ROI for the unwarped object image (top-left corner of the canvas)
Mat roi1(final, Rect(0, 0, img_object.cols, img_object.rows));
// ROI covering the warped result (must fit inside the canvas)
Mat roi2(final, Rect(0, 0, result.cols, result.rows));
result.copyTo(roi2);
img_object.copyTo(roi1);
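For anyone who still ends up with a cropped panorama, a more general trick (not part of the answer above, just a sketch reusing the same variable names) is to warp the corners of img_object with H first and size the output canvas from their bounding box:

// Sketch: estimate how much room the warped object needs before calling warpPerspective.
std::vector<cv::Point2f> corners(4), warped_corners(4);
corners[0] = cv::Point2f(0, 0);
corners[1] = cv::Point2f((float)img_object.cols, 0);
corners[2] = cv::Point2f((float)img_object.cols, (float)img_object.rows);
corners[3] = cv::Point2f(0, (float)img_object.rows);
cv::perspectiveTransform(corners, warped_corners, H);
// Bounding box of the warped corners; if it has negative x or y, pre-multiply H by a
// translation so nothing is mapped outside the canvas.
cv::Rect bounds = cv::boundingRect(warped_corners);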

Rotated image when using "findHomography" and "warpPerspective"

I am currently developing an ALPR system. To detect the plate, I am first using the method described here.
The problem comes with certain images: after using findHomography and warpPerspective, I get the plate image rotated.
This is the original image that gives me issues.
This is the detected plate contour
And this is the warped image
As you can see, it is rotated 90 degrees. In other examples the detection works great.
The specific piece of code:
// Destination image for the rectified plate
cv::Mat warpped_plate( PLATE_HEIGHT, PLATE_WIDTH, CV_8UC3 );
// Destination corners: bottom-right, bottom-left, top-left, top-right
std::vector<cv::Point> real_plate_polygons;
real_plate_polygons = {cv::Point(PLATE_WIDTH, PLATE_HEIGHT), cv::Point(0, PLATE_HEIGHT), cv::Point(0, 0), cv::Point(PLATE_WIDTH, 0)};
// Map the detected plate corners onto the destination corners and warp
cv::Mat homography = cv::findHomography( plate_polygons, real_plate_polygons );
cv::warpPerspective(source_img, warpped_plate, homography, cv::Size(PLATE_WIDTH, PLATE_HEIGHT));
Where plate_polygons contains the four points of the plate (and are correct, because were used to draw the white box in the mask).
Any ideas? Thanks in advance!
As nico mentioned, the problem was with the order of the points in plate_polygons. The algorithm generating them was not consistent about the starting point (in my case, it started from the lower corner).
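For reference, here is a small sketch of one way to enforce a consistent order (only an illustration with a hypothetical helper, not the exact code from my detector); it sorts the four detected corners into the same order as real_plate_polygons above (bottom-right, bottom-left, top-left, top-right):

#include <opencv2/core/core.hpp>
#include <vector>

// Hypothetical helper: order four detected corners as bottom-right, bottom-left,
// top-left, top-right (the order used for real_plate_polygons above).
std::vector<cv::Point> orderPlateCorners(const std::vector<cv::Point>& pts)
{
    CV_Assert(pts.size() == 4);
    cv::Point tl = pts[0], tr = pts[0], br = pts[0], bl = pts[0];
    for (size_t i = 1; i < pts.size(); ++i)
    {
        const cv::Point& p = pts[i];
        if (p.x + p.y < tl.x + tl.y) tl = p;   // smallest x+y -> top-left
        if (p.x + p.y > br.x + br.y) br = p;   // largest x+y  -> bottom-right
        if (p.x - p.y > tr.x - tr.y) tr = p;   // largest x-y  -> top-right
        if (p.x - p.y < bl.x - bl.y) bl = p;   // smallest x-y -> bottom-left
    }
    std::vector<cv::Point> ordered;
    ordered.push_back(br);
    ordered.push_back(bl);
    ordered.push_back(tl);
    ordered.push_back(tr);
    return ordered;
}

// Usage: plate_polygons = orderPlateCorners(plate_polygons); before calling findHomography.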

Threshold function does not work OpenCV C++

I just want to convert a gray image to a binary image, but the threshold function gives me a totally black image as the binary output. I want to keep the dark gray object.
What is wrong here?
Code:
Mat theFrame = imread("C:\\asdsss.png");           // load the input image
Mat gray, binary;
cvtColor(theFrame, gray, CV_BGR2GRAY);             // convert BGR to grayscale
threshold(gray, binary, 150, 255, THRESH_BINARY);  // pixels above 150 become white
imwrite("result.jpg", binary);
Input image:
The code works perfectly fine. I ran your exact code on the image provided; there is no issue with it.
I got the following output by running your code. The only problem I can think of is the loading of the image. Try viewing your image with cv::imshow after loading it. Also try converting your image to JPG format and loading it again. You can also try compiling and running the OpenCV thresholding sample.
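As a quick check, something like the following (just a sketch, reusing your path and assuming the same headers and using-directives as your snippet) would confirm whether the image loads at all:

Mat theFrame = imread("C:\\asdsss.png");
if (theFrame.empty())
{
    std::cout << "Could not load the image - check the path." << std::endl;
    return -1;                       // bail out of main if loading failed
}
imshow("loaded image", theFrame);    // visually confirm the image was read correctly
waitKey(0);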

Segmentation of the source image in OpenCV based on a Canny edge outline obtained from the same source image

I have a source image. I need a particular portion to be segmented from it and saved as another image. I have the Canny outline of the portion I need segmented out, but how do I use it to cut that portion out of the source image? I have attached both the source image and the Canny edge outline. Please help me and suggest a solution.
EDIT 1: Alexander Kondratskiy, is this what you meant by filling the boundary?
EDIT 2: According to Kanat, I have done this.
Now how do I separate the regions that are outside and inside of the contour into two separate images?
EDIT 3: I thought of AND-ing the mask and the contour-lined source image. Since I am using the C API, I am having a little difficulty.
This is the code I use for the AND:
// Convert the segmented image to a single-channel grayscale image
hsv_gray = cvCreateImage( cvSize(seg->width, seg->height), IPL_DEPTH_8U, 1 );
cvCvtColor( seg, hsv_gray, CV_BGR2GRAY );
hsv_mask = cvCloneImage( hsv_gray );
// Allocate the contour image and the output of the AND (both 3-channel)
IplImage* contourImg = cvCreateImage( cvSize(hsv_mask->width, hsv_mask->height), IPL_DEPTH_8U, 3 );
IplImage* newImg = cvCreateImage( cvSize(hsv_mask->width, hsv_mask->height), IPL_DEPTH_8U, 3 );
// This fails: cvAnd needs all images to have the same size and number of channels
cvAnd( contourImg, hsv_mask, newImg, NULL );
I always get an error about mismatched size or type. I adjusted the size, but I can't seem to adjust the type, since one image (hsv_mask) has 1 channel and the others have 3 channels.
#kanat - I also tried your boundingRect suggestion but could not seem to get it right in the C API.
Use cv::findContours on your second image to find the contour of the segment. Then use cv::boundingRect to find the bounding box of this segment. After that you can create a matrix and save the cropped bounding box from your second image into it (as I see it, it is a binary image). To crop the needed region, use this:
cv::getRectSubPix(your_image, BB_size,
                  cv::Point(BB.x + BB.width/2, BB.y + BB.height/2), new_image);
Then you can save new_image using cv::imwrite. That's it.
EDIT:
If you found only one contour then use this (otherwise you will have to iterate through the found contours). The following code shows the steps, but sorry, I can't test it right now:
std::vector<std::vector<cv::Point> > contours;
// cv::findContours(..., contours, ...);
// Bounding box of the first (and only) contour
cv::Rect BB = cv::boundingRect(cv::Mat(contours[0]));
cv::Mat new_image;
// Crop the bounding box around its centre point
cv::getRectSubPix(your_image, BB.size(),
                  cv::Point(BB.x + BB.width/2, BB.y + BB.height/2), new_image);
cv::imwrite("new_image_name.jpg", new_image);
You could fill the boundary created by the Canny edge detector and use that as an alpha mask on the original image.
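A sketch of that suggestion with the C++ API (canny_edges and source_img are placeholder names, not the variables from the question): fill the outer contour of the edge map into a mask, then copy the source image through it.

// Fill the Canny boundary and use it as a mask (placeholder names)
std::vector<std::vector<cv::Point> > contours;
cv::Mat edges = canny_edges.clone();   // findContours modifies its input, so work on a copy
cv::findContours(edges, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

cv::Mat mask = cv::Mat::zeros(canny_edges.size(), CV_8UC1);
cv::drawContours(mask, contours, -1, cv::Scalar(255), CV_FILLED);   // fill every outer contour

cv::Mat inside, outside, inverted;
source_img.copyTo(inside, mask);        // pixels inside the outline
cv::bitwise_not(mask, inverted);
source_img.copyTo(outside, inverted);   // pixels outside the outline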

OpenCV: Why create multiple Mat objects to transform an image's format?

I have not worked with OpenCV for a while, so please bear with my beginner questions. Something occurred to me as I was looking through OpenCV tutorials and sample code.
Why do people create multiple Mat images when going through multiple transformations? Here is an example:
Mat mat, gray, thresh, equal;
mat = imread("E:/photo.jpg");
cvtColor(mat, gray, CV_BGR2GRAY);
equalizeHist(gray, equal);
threshold(equal, thresh, 50, 255, THRESH_BINARY);
An example that uses only two Mat images:
Mat mat, process;
mat = imread("E:/photo.jpg");
cvtColor(mat, process, CV_BGR2GRAY);
equalizeHist(process, process);
threshold(process, process, 50, 255, THRESH_BINARY);
Is there any real difference between the two examples? Also, another beginner question: will OpenCV run faster when it creates only two Mat images, or will it be the same?
Thank you in advance.
The question comes down to this: do you still need the unequalized image later on in the code? If you want to further process the gray image, then the first option is better. If not, use the second option.
Some functions might not work in-place; specifically, ones that transform the matrix to a different format, either by changing its dimensions (such as copyMakeBorder) or number of channels (such as cvtColor).
For your use case, the two blocks of code perform the same number of calculations, so the speed wouldn't change at all. The second option is obviously more memory efficient.
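For example, a small sketch of the distinction (the image path is a placeholder): cvtColor gets its own destination because the output has a different number of channels, while equalizeHist and threshold can reuse the same Mat.

Mat color = imread("photo.jpg");        // 3-channel BGR image
Mat process;
cvtColor(color, process, CV_BGR2GRAY);  // separate destination: 3 channels in, 1 channel out
equalizeHist(process, process);         // same size and type, so in-place is fine
threshold(process, process, 50, 255, THRESH_BINARY);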

OpenCV: findContours exception

My MATLAB code is:
h = fspecial('average', filterSize);
imageData = imfilter(imageData, h, 'replicate');
bwImg = im2bw(imageData, grayThresh);
cDist=regionprops(bwImg, 'Area');
cDist=[cDist.Area];
The OpenCV code is:
cv::blur(dst, dst, cv::Size(filterSize, filterSize));
dst = im2bw(dst, grayThresh);
cv::vector<cv::vector<cv::Point> > contours;
cv::vector<cv::Vec4i> hierarchy;
cv::findContours(dst, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);
Here is my image-to-black-and-white function:
cv::Mat AutomaticMacbethDetection::im2bw(cv::Mat src, double grayThresh)
{
    cv::Mat dst;
    // Note: maxval is 1 here, so the "white" pixels have the value 1, not 255
    cv::threshold(src, dst, grayThresh, 1, CV_THRESH_BINARY);
    return dst;
}
I'm getting an exception in findContours(): C++ exception: cv::Exception at memory location 0x0000003F6E09E0A0.
Can you please explain what I am doing wrong?
dst is a cv::Mat that I have used all along; it holds my original values.
Update: here is my matrix written into a *.txt file:
http://www.filedropper.com/gili
UPDATE 2:
I have added dst.convertTo(dst, CV_8U); as Micka suggested, and I no longer get an exception. However, the values are nothing like what I expected.
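For reference, here is where that conversion sits relative to the code above (just a restatement of Micka's suggestion, not new logic):

cv::blur(dst, dst, cv::Size(filterSize, filterSize));
dst = im2bw(dst, grayThresh);
dst.convertTo(dst, CV_8U);   // findContours requires an 8-bit single-channel image
cv::findContours(dst, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);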
Take a look at this question which has a similar problem to what you're encountering: Matlab and OpenCV calculate different image moment m00 for the same image.
Basically, the OP in the linked post is trying to find the zeroth image moment for both x and y of all closed contours - which is actually just the area, by using findContours in OpenCV and regionprops in MATLAB. In MATLAB, that can be accessed by the Area property from regionprops, and judging from your MATLAB code, you wish to find the same quantity.
From the post, there is most certainly a difference between how OpenCV and MATLAB finds contours in an image. This boils down to the way both platforms consider what is a "connected pixel". OpenCV only uses a four-pixel neighbourhood while MATLAB uses an eight-pixel neighbourhood.
As such, there is nothing wrong with your implementation, and converting to 8UC1 is good. However, the areas (and ultimately the total number of connected components and the contours themselves) found by MATLAB and OpenCV are not the same. The only way for you to get exactly the same result is to manually draw the contours found by findContours on a black image and use the cv::moments function directly on this image.
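A sketch of that idea, with binary_img standing in for your thresholded CV_8UC1 image (the variable names are placeholders):

// Rasterise each contour, filled, onto a black image and measure its area via the zeroth moment
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::Mat work = binary_img.clone();     // findContours modifies its input
cv::findContours(work, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);

std::vector<double> areas(contours.size());
for (size_t i = 0; i < contours.size(); ++i)
{
    cv::Mat filled = cv::Mat::zeros(binary_img.size(), CV_8UC1);
    cv::drawContours(filled, contours, (int)i, cv::Scalar(255), CV_FILLED);
    cv::Moments m = cv::moments(filled, true);   // binaryImage = true
    areas[i] = m.m00;                            // zeroth moment = area in pixels
}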
However, because cv::blur() and fspecial handle an even-sized averaging mask differently, you still may not be able to get the same results along the borders of the image. If there are no important contours around the borders of your image, then hopefully this will give you the right result.
Good luck!