OpenCV 3D Stitching Panorama - C++

I have 7 images from GoPro cameras (5 cameras in a rig, plus one for the top and one for the bottom; all of them are GoPros). I want to stitch all these images together to create a 3D panorama. I have been able to stitch the 5 rig images by using OpenCV's stitching_detailed.cpp. Link to the file:
https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/stitching_detailed.cpp
But I'm not sure how to stitch the top and bottom images (for me, right now, the bottom is not that important, but I have to do something about the top). Any idea how this can be done? Please let me know if I can use the same stitching_detailed.cpp to stitch the top as well.
The following link contains the images I'm using. It also contains the results I got from stitching the images in the rig.
https://drive.google.com/folderview?id=0B_Bl8s2ePunQcnBaM3A4WDlDcXM&usp=sharing

So first you need to understand how stitching_detailed.cpp works.
1. Feature keypoints are detected in each image using SURF/ORB/SIFT or similar. Then, for each image pair, the best feature matches are found, a homography matrix is calculated, and the number of inliers for each pair is obtained:
(*finder)(img, features[i]);
BestOf2NearestMatcher matcher(try_cuda, match_conf);
matcher(features, pairwise_matches);
2. All these pairs are passed into leaveBiggestComponent to obtain the largest set of images that belong to the panorama.
3. Camera parameters for each image are estimated from the set obtained above, and warping and blending are done.
Step 1 finds a homography for each pair and yields a number of inliers. Step 2 removes every image pair whose confidence factor (derived from the number of inliers) is below the threshold. Since the cam7 image has so few features and almost no overlapping region with any other image, it gets rejected in the leaveBiggestComponent step.
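Condensed, steps 1 and 2 look roughly like this (a minimal sketch assuming the OpenCV 3.x detail API; the 1.0 confidence threshold is the sample's default):
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/stitching.hpp>
#include <iostream>
#include <vector>

int main(int argc, char** argv)
{
    using namespace cv;
    using namespace cv::detail;

    int num_images = argc - 1;
    std::vector<ImageFeatures> features(num_images);
    Ptr<FeaturesFinder> finder = makePtr<OrbFeaturesFinder>();

    // Step 1: detect keypoints and descriptors in every input image.
    for (int i = 0; i < num_images; ++i)
    {
        Mat img = imread(argv[i + 1]);
        (*finder)(img, features[i]);
        features[i].img_idx = i;
    }

    // Pairwise matching: each MatchesInfo carries a homography,
    // its inliers and a confidence score.
    std::vector<MatchesInfo> pairwise_matches;
    BestOf2NearestMatcher matcher(false /*try_cuda*/, 0.3f /*match_conf*/);
    matcher(features, pairwise_matches);

    // Step 2: keep only the largest subset of images whose pairwise
    // confidence exceeds the threshold; a low-overlap image like cam7's
    // is dropped at this point.
    float conf_thresh = 1.0f;
    std::vector<int> kept = leaveBiggestComponent(features, pairwise_matches, conf_thresh);
    std::cout << kept.size() << " of " << num_images << " images kept" << std::endl;
    return 0;
}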
You can see the features and matching at this link (I have used ORB features):
https://drive.google.com/open?id=0B2wDitsftUG9QnhCWFIybENkbDA
Also, I have not changed the image size, but I guess reducing it a bit (by half, maybe) would yield more feature points.
What you can do is reduce the time interval at which you grab frames for stitching. To obtain good results, there must be at least a 40% overlapping region between adjacent images.

Related

Matching contours to level images - OpenCV

Is it possible to only match & level the contours in an image? Possibly symmetric matching? If so, what matcher would you use for this purpose?
Demonstration Image:
In this image of my lovely iMac, you can see that the images passed in are unleveled. This is because I took the first image at a different height than the second one. For example:
(First Image capture)
(Second Image capture)
So, instead of matching features all over the image, I was wondering if OpenCV has any feature matcher that could limit the matching to the edges where one image ends and the other begins. That way, I could hopefully straighten them up.
What I currently use:
BestOf2NearestMatcher
GridAdaptedFeatureDetector with GFTTDetector
SiftDescriptorExtractor
Refinement of camera parameters based on features, as in the basic OpenCV sample
What I hope would be the result:
My target result would be to align the images in the demonstration image above.
I am working on much the same issue. You can certainly limit the feature points using a mask. In my case I know there is 50% overlap between adjacent images, so I have used only the features in the right half (x > im.cols / 2) of the first image and the left half (x < im.cols / 2) of the second image. You can make this change in matchers.cpp of the stitching pipeline. It eliminates the chance of false matches; the same idea is sketched below.
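Here is a minimal sketch of that idea using detector masks instead of editing matchers.cpp (OpenCV 2.4-style API to match the GFTT/SIFT setup above; function and variable names are illustrative):
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>  // SiftDescriptorExtractor
#include <vector>

// Detect features only where the two images are expected to overlap:
// the right half of the left image and the left half of the right image.
void overlapFeatures(const cv::Mat& left, const cv::Mat& right,
                     std::vector<cv::KeyPoint>& kpLeft,
                     std::vector<cv::KeyPoint>& kpRight,
                     cv::Mat& descLeft, cv::Mat& descRight)
{
    cv::Mat maskLeft = cv::Mat::zeros(left.size(), CV_8U);
    maskLeft(cv::Rect(left.cols / 2, 0, left.cols - left.cols / 2, left.rows)).setTo(255);

    cv::Mat maskRight = cv::Mat::zeros(right.size(), CV_8U);
    maskRight(cv::Rect(0, 0, right.cols / 2, right.rows)).setTo(255);

    cv::GFTTDetector detector;
    detector.detect(left, kpLeft, maskLeft);
    detector.detect(right, kpRight, maskRight);

    cv::SiftDescriptorExtractor extractor;
    extractor.compute(left, kpLeft, descLeft);
    extractor.compute(right, kpRight, descRight);
}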
But this will fail when there are few or no feature points. I would suggest going for line-based stitching if aligning object edges is the main concern.
Line features in OpenCV 3.0:
http://docs.opencv.org/3.0-beta/modules/line_descriptor/doc/tutorial.html

Stitching images from 2 overlapping cameras stationary relative to each other

I'm new to CV and trying to stitch together a video from two cameras that are stationary relative to one another. The details:
The cameras sit side by side, and I can adjust the rotation angle between them. The cameras will be moving with respect to the world, so the scene will be changing.
The number of frames to be stitched is roughly 300 (each frame is composed of two pictures, one from each camera).
I don't need to do the stitching in real time, but I want it to be as fast as possible, using the fact that I know the relative positions of the cameras. The resolution of each picture is relatively high, around 900x600.
Right now I'm at the stage where I have code to stitch 2 single pictures, courtesy of http://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/
The main stages (condensed into code below) are:
Using a SURF detector to find SURF keypoints and descriptors in both images
Matching the SURF descriptors using a FLANN matcher
Postprocessing the matches to keep only good ones
Using RANSAC to estimate the homography matrix from the matched SURF descriptors
Warping the images based on the homography matrix
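For reference, the five stages condense to something like this (a sketch along the lines of the linked tutorial, using the OpenCV 2.4-era API since SURF lives in the nonfree module there):
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <algorithm>
#include <vector>

cv::Mat stitchPair(const cv::Mat& imgA, const cv::Mat& imgB)
{
    // 1. SURF keypoints and descriptors in both images.
    cv::SurfFeatureDetector detector(400.0 /*Hessian threshold*/);
    std::vector<cv::KeyPoint> kpA, kpB;
    detector.detect(imgA, kpA);
    detector.detect(imgB, kpB);

    cv::SurfDescriptorExtractor extractor;
    cv::Mat descA, descB;
    extractor.compute(imgA, kpA, descA);
    extractor.compute(imgB, kpB, descB);

    // 2. Match the descriptors with FLANN.
    cv::FlannBasedMatcher matcher;
    std::vector<cv::DMatch> matches;
    matcher.match(descA, descB, matches);

    // 3. Postprocess: keep matches whose distance is close to the minimum.
    double minDist = 100.0;
    for (size_t i = 0; i < matches.size(); ++i)
        minDist = std::min(minDist, (double)matches[i].distance);
    std::vector<cv::Point2f> ptsA, ptsB;
    for (size_t i = 0; i < matches.size(); ++i)
        if (matches[i].distance < 3 * minDist)
        {
            ptsA.push_back(kpA[matches[i].queryIdx].pt);
            ptsB.push_back(kpB[matches[i].trainIdx].pt);
        }

    // 4. RANSAC homography mapping imgB's points onto imgA's frame.
    cv::Mat H = cv::findHomography(ptsB, ptsA, CV_RANSAC);

    // 5. Warp imgB onto a canvas and paste imgA over the left part.
    cv::Mat pano;
    cv::warpPerspective(imgB, pano, H, cv::Size(imgA.cols + imgB.cols, imgA.rows));
    imgA.copyTo(pano(cv::Rect(0, 0, imgA.cols, imgA.rows)));
    return pano;
}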
My question is: How can I optimize the process based on the fact that I already know the camera positions?
Ideally I would like to do some initial calculation once to find the transform between the camera perspectives, and then reuse it. But I'm not sure, with my rudimentary CV knowledge, whether this is indeed possible, and which transform I could use if so.
I understand that calculating the homography matrix once and reusing it won't work, since the scene is changing.
Two other possibilities:
I found a similar case (but with a stationary scene) where the transform is computed once and reused. Which transform is this, and could it work in my case?
The other possibility I found is to use the initial knowledge to find the overlapping region between the two pictures and ignore the rest of each picture to save time (relevant thread; a sketch of this idea follows).
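On that second possibility: since the rig is fixed, the overlapping band is roughly constant, so detection can be restricted to it and the keypoints shifted back afterwards. A rough sketch (overlapWidth is a value you would calibrate once for your rig; ORB is used here just as a fast stand-in detector):
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Detect keypoints only in the shared band of each frame and shift them
// back to full-image coordinates.
void detectInOverlap(const cv::Mat& leftFrame, const cv::Mat& rightFrame,
                     int overlapWidth,
                     std::vector<cv::KeyPoint>& kpLeft,
                     std::vector<cv::KeyPoint>& kpRight)
{
    cv::Rect bandLeft(leftFrame.cols - overlapWidth, 0, overlapWidth, leftFrame.rows);
    cv::Rect bandRight(0, 0, overlapWidth, rightFrame.rows);

    cv::ORB orb;
    orb.detect(leftFrame(bandLeft), kpLeft);
    orb.detect(rightFrame(bandRight), kpRight);

    // The left crop starts at x = bandLeft.x, so add that offset back.
    for (size_t i = 0; i < kpLeft.size(); ++i)
        kpLeft[i].pt.x += (float)bandLeft.x;
}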
Any help would be greatly appreciated!
Ron

Dilation Gradient w/ different ROIs (blob optimization) - OpenCV

I'm working on a dilation problem in C++ with OpenCV. I've captured video frames of a car park, and in order to obtain the best blobs I came up with this:
Erosion (5x5 kernel rectangular), 3 iterations
Dilation GRADIENT (think of it like a color gradient along the y-axis)
So what did I do to get this working? First I needed to know two points (x, y) and two good dilation kernel sizes at those points. With this information one can interpolate and extrapolate those values over the whole image, so I calculated ROIs (size and dilation kernel size) from those parameters. Each ROI thus has its own predefined kernel size used for dilation. Note that there isn't any space between two consecutive ROIs (OpenCV rectangles). Everything is working fine, but there are two side effects:
Bulges on the sides of the blobs. The black line is the border of the ROI! (bulges picture)
Blobs which are 'cut off' from the main blob. These aren't actually cut off; the ROI below the one containing the upper part of the blob dilates (picking up pixel information from the ROI above, I think) into blobs that end up separated, when it should be one massive blob. (picture of a blob that shouldn't be there)
I've tried all sorts of changes to the ROI sizes and left some space between them, but the disadvantage is that a blob spanning two separated ROIs is not dilated.
So my questions are:
What causes those side effects exactly?
What do I have to do to make them go away?
EDIT
So I found my solution: when you call the OpenCV dilate function, you need to be sure whether the same cv::Mat can be used as the destination image. If not, you'll be mixing parts of the original and the new image. So all I had to do was pass in a separate destination cv::Mat, as sketched below.
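Sketched, the fix looks like this (illustrative names; the essential point is only that the source and destination are different cv::Mat objects):
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

void gradientDilate(const cv::Mat& src, cv::Mat& dst,
                    const std::vector<cv::Rect>& rois,
                    const std::vector<int>& kernelSizes)  // one size per ROI
{
    dst.create(src.size(), src.type());
    for (size_t i = 0; i < rois.size(); ++i)
    {
        cv::Mat kernel = cv::getStructuringElement(
            cv::MORPH_RECT, cv::Size(kernelSizes[i], kernelSizes[i]));
        cv::Mat dstRoi = dst(rois[i]);
        // Reading from src while writing into dst means ROIs that were
        // already dilated can never bleed into the input of later ROIs.
        cv::dilate(src(rois[i]), dstRoi, kernel);
    }
}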
This doesn't answer your first question (what causes those side effects exactly), but to make them go away you can do some variant of the following, assuming the ROI parameters are discrete and not continuous (as seems to be the case).
You can compute the dilation of the entire image using every possible kernel size. Then, after all of those binary images are computed, you can combine them, taking the correct samples from the correct image to get the desired output image. This certainly wastes a good deal of time, but it should work with no artifacts.
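A sketch of that brute-force variant, assuming (as in the question) one predefined kernel size per ROI:
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <map>
#include <vector>

cv::Mat compositeDilate(const cv::Mat& src,
                        const std::vector<cv::Rect>& rois,
                        const std::vector<int>& kernelSizes)  // one size per ROI
{
    // Dilate the full image once for every distinct kernel size.
    std::map<int, cv::Mat> dilated;
    for (size_t i = 0; i < kernelSizes.size(); ++i)
        if (dilated.find(kernelSizes[i]) == dilated.end())
        {
            cv::Mat kernel = cv::getStructuringElement(
                cv::MORPH_RECT, cv::Size(kernelSizes[i], kernelSizes[i]));
            cv::dilate(src, dilated[kernelSizes[i]], kernel);
        }

    // Composite: each ROI copies from the full-image result for its own
    // kernel size, so ROI borders see the correct neighbourhood.
    cv::Mat out = cv::Mat::zeros(src.size(), src.type());
    for (size_t i = 0; i < rois.size(); ++i)
        dilated[kernelSizes[i]](rois[i]).copyTo(out(rois[i]));
    return out;
}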
Once you've confirmed the results you get above (which are pretty much guaranteed to be of as-good-as-possible quality), you can start optimizing. One thing I'd try is expanding each of the ROIs used for computing the dilation by the kernel size. This might get around artifacts that arise from strange boundary conditions.
This leads to my guess as to what causes the artifacts in the first place: whenever you take a finite image and run a convolution (or a morphological operator), you need to choose what to do with the edge pixels. Normally, accessing the pixel at (-4, -1) is meaningless, but to apply the operator you have to when your kernel overlaps it. If OpenCV is doing this edge padding for your subregions, it could very easily give you the artifacts you're seeing.
Hope this helps!

Image comparison method with C++ and OpenCV

I am new to OpenCV. I would like to know if we can compare two images (one of them made in Photoshop, i.e. the source image, and the other one taken from a camera) and find out whether they are the same or not.
I tried to compare the images using template matching. It does not work. Can you tell me what other procedures we can use for this kind of comparison?
Comparison of images can be done in different ways, depending on which purpose you have in mind:
If you just want to compare whether two images are approximately equal (with a few luminance differences), but with the same perspective and camera view, you can simply compute a pixel-to-pixel squared difference, per color band. If the sum of squares over the two images is smaller than a threshold, the images match, otherwise not. (A sketch of this case follows below.)
If one image is a black-and-white variant of the other, conversion of the color image is needed first (see e.g. http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale). Afterwards, simply perform the step above.
If one image is a subimage of the other, you need to perform registration of the two images. This means determining the scale, possible rotation and XY-translation that are necessary to lay the subimage on the larger image (for methods to register images, see: Pluim, J.P.W., Maintz, J.B.A., Viergever, M.A., "Mutual-information-based registration of medical images: a survey", IEEE Transactions on Medical Imaging, 2003, Volume 22, Issue 8, pp. 986-1004).
If you have perspective differences, you need an algorithm for deskewing one image to match the other as well as possible. For ways of doing deskewing, see for example http://javaanpr.sourceforge.net/anpr.pdf from page 15 onwards.
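A minimal sketch of the first case (the threshold is arbitrary and would need tuning to your images):
#include <opencv2/core/core.hpp>

// Per-band sum of squared differences; images must match in size and type.
bool approxEqual(const cv::Mat& a, const cv::Mat& b, double threshold)
{
    if (a.size() != b.size() || a.type() != b.type())
        return false;
    cv::Mat diff;
    cv::absdiff(a, b, diff);      // |a - b| per pixel, per color band
    diff.convertTo(diff, CV_32F);
    diff = diff.mul(diff);        // square the differences
    cv::Scalar s = cv::sum(diff); // summed separately per band
    double ssd = s[0] + s[1] + s[2] + s[3];
    return ssd < threshold;
}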
Good luck!
You should try SIFT. You apply SIFT to your marker (the image saved in memory) and you get some descriptors (points robust enough to be recognized). Then you can use the FAST algorithm on the camera frames in order to find the corresponding keypoints of the marker in the camera image. A rough sketch of this follows.
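Here is one way that combination might look (a sketch assuming OpenCV 2.4 with the nonfree module; SIFT describes both the marker keypoints and the FAST keypoints found in the frame, so the two descriptor sets are comparable):
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>
#include <vector>

void matchMarker(const cv::Mat& marker, const cv::Mat& frame,
                 std::vector<cv::DMatch>& matches)
{
    cv::SiftFeatureDetector sift;
    cv::SiftDescriptorExtractor extractor;

    // Marker descriptors could be computed once and cached.
    std::vector<cv::KeyPoint> kpMarker, kpFrame;
    cv::Mat descMarker, descFrame;
    sift.detect(marker, kpMarker);
    extractor.compute(marker, kpMarker, descMarker);

    // FAST finds frame keypoints cheaply; SIFT describes them so they
    // can be matched against the marker's SIFT descriptors.
    cv::FastFeatureDetector fast;
    fast.detect(frame, kpFrame);
    extractor.compute(frame, kpFrame, descFrame);

    cv::BFMatcher matcher(cv::NORM_L2);
    matcher.match(descMarker, descFrame, matches);
}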
You have many threads about this topic:
How to get a rectangle around the target object using the features extracted by SIFT in OpenCV
How to search the image for an object with SIFT and OpenCV?
OpenCV - Object matching using SURF descriptors and BruteForceMatcher
Good luck

Stitching aerial images

I am trying to stitch 2 aerial images together with very little overlap, probably <500 px of overlap. These images have 3600x2100 resolution. I am using the OpenCV library to complete this task.
Here is my approach:
1. Find feature points and match points between the two images.
2. Find homography between two images
3. Warp one of the images using the homography
4. Stitch the two images
Right now I am trying to get this to work with two images. I am having trouble with step 3 and possibly step 2. I used findHomography() from the OpenCV library to get the homography between the two images. Then I called warpPerspective() on one of my images using that homography.
The problem with this approach is that the transformed image is all distorted. It also seems to transform only a certain part of the image, and I have no idea why it is not transforming the whole image.
Can someone give me some advice on how I should approach this problem? Thanks
In the results that you have posted, I can see that you have at least one keypoint mismatch. If you use findHomography(src, dst, 0), such a mismatch will mess up your homography; you should use findHomography(src, dst, CV_RANSAC) instead.
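The difference in a nutshell (pre-3.0 constant names, matching the question's era):
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

cv::Mat robustHomography(const std::vector<cv::Point2f>& src,
                         const std::vector<cv::Point2f>& dst)
{
    // A method of 0 does a plain least-squares fit, so a single mismatch
    // skews the whole result; CV_RANSAC treats mismatches as outliers.
    return cv::findHomography(src, dst, CV_RANSAC, 3.0 /*reprojection error in px*/);
}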
You can also try to use warpAffine instead of warpPerspective.
Edit: From the results you posted in the comments to your question, I had the impression that the matching worked quite stably. That means you should be able to get good results with this example as well. Since you mostly seem to be dealing with translation, you could try to filter out the outliers with the following sketched algorithm (implemented in code after the list):
calculate the average (or median) motion vector x_avg
calculate the normalized dot product <x_avg, x_match>
discard x_match if the dot product is smaller than a threshold
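In code, that filter might look like this (a sketch; kpA/kpB are the keypoints behind the matches, and the 0.9 cosine threshold is a guess to be tuned):
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <cmath>
#include <vector>

std::vector<cv::DMatch> filterByMotion(const std::vector<cv::KeyPoint>& kpA,
                                       const std::vector<cv::KeyPoint>& kpB,
                                       const std::vector<cv::DMatch>& matches,
                                       float minCos = 0.9f)
{
    std::vector<cv::DMatch> kept;
    if (matches.empty())
        return kept;

    // Average motion vector x_avg over all matches.
    cv::Point2f avg(0.f, 0.f);
    for (size_t i = 0; i < matches.size(); ++i)
        avg += kpB[matches[i].trainIdx].pt - kpA[matches[i].queryIdx].pt;
    avg *= 1.0f / (float)matches.size();
    float avgNorm = std::sqrt(avg.dot(avg));

    for (size_t i = 0; i < matches.size(); ++i)
    {
        cv::Point2f v = kpB[matches[i].trainIdx].pt - kpA[matches[i].queryIdx].pt;
        float vNorm = std::sqrt(v.dot(v));
        // Normalized dot product <x_avg, x_match>; keep if directions agree.
        if (avg.dot(v) / (avgNorm * vNorm + 1e-6f) >= minCos)
            kept.push_back(matches[i]);
    }
    return kept;
}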
To make it work for images with smaller overlap, you would have to look at the detector, the descriptors and the matches. You do not specify which descriptors you work with, but I would suggest using SIFT or SURF descriptors with the corresponding detectors. You should also set the detector parameters to make a dense sampling (i.e., try to detect more features).
You can refer to this answer which is slightly related: OpenCV - Image Stitching
To stitch images using a homography, the most important thing to take care of is finding correspondence points in both images. The fewer the outliers in the correspondence points, the better the generated homography.
Using a robust technique such as RANSAC along with OpenCV's findHomography() function (use CV_RANSAC as the option) will still generate a reasonable homography, provided the percentage of inliers is greater than the percentage of outliers. Also make sure that there are at least 4 inliers among the correspondence points passed to the findHomography function.