Stitching images from 2 overlapping cameras stationary relative to each other - C++

I'm new to CV and am trying to stitch together video from two cameras that are stationary relative to each other. The details:
The cameras are side by side and I can adjust the rotation angle between them. The cameras will be moving with respect to the world, so the scene will be changing.
The number of frames to be stitched is roughly 300 (each frame is composed of two pictures, one from each camera).
I don't need to do the stitching in real time, but I want to do it as fast as possible by using the fact that I know the relative positions of the cameras. The resolution of each picture is relatively high, around 900x600.
Right now I'm at the stage where I have code to stitch 2 single pictures, courtesy of http://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/
The main stages are (a condensed code sketch follows the list):
Using a SURF detector to find SURF keypoints and descriptors in both images
Matching the SURF descriptors using a FLANN matcher
Post-processing the matches to keep only the good ones
Using RANSAC to estimate the homography matrix from the matched keypoints
Warping the images based on the homography matrix
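For reference, here is a condensed sketch of those stages using the OpenCV 2.4-era API (as in the linked tutorial). The Hessian threshold, the 3 * minDist match filter and the output canvas size are assumptions, not values taken from your setup:

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <opencv2/nonfree/features2d.hpp>   // SURF lives in the nonfree module in 2.4.x
    #include <algorithm>
    #include <vector>

    using namespace cv;

    Mat stitchPair(const Mat& img1, const Mat& img2)
    {
        // 1. SURF keypoints and descriptors in both images
        SurfFeatureDetector detector(400);      // Hessian threshold (placeholder)
        std::vector<KeyPoint> kp1, kp2;
        detector.detect(img1, kp1);
        detector.detect(img2, kp2);

        SurfDescriptorExtractor extractor;
        Mat desc1, desc2;
        extractor.compute(img1, kp1, desc1);
        extractor.compute(img2, kp2, desc2);

        // 2. FLANN matching
        FlannBasedMatcher matcher;
        std::vector<DMatch> matches;
        matcher.match(desc1, desc2, matches);

        // 3. keep only "good" matches (distance close to the minimum distance)
        double minDist = 100;
        for (size_t i = 0; i < matches.size(); ++i)
            minDist = std::min(minDist, (double)matches[i].distance);

        std::vector<Point2f> pts1, pts2;
        for (size_t i = 0; i < matches.size(); ++i)
            if (matches[i].distance < 3 * minDist) {
                pts1.push_back(kp1[matches[i].queryIdx].pt);
                pts2.push_back(kp2[matches[i].trainIdx].pt);
            }

        // 4. homography (img2 -> img1 coordinates) estimated with RANSAC
        Mat H = findHomography(pts2, pts1, CV_RANSAC);

        // 5. warp img2 into img1's frame and paste img1 on top
        Mat result;
        warpPerspective(img2, result, H, Size(img1.cols + img2.cols, img1.rows));
        img1.copyTo(result(Rect(0, 0, img1.cols, img1.rows)));
        return result;
    }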
My question is: How can I optimize the process based on the fact that I already know the camera positions?
Ideally I would like to do some initial calculation once to find the transform between the camera perspectives, and then reuse it. But with my rudimentary CV knowledge I'm not sure whether this is indeed possible, and which transform I could use if so.
I understand that calculating the homography matrix once and reusing it won't work, since the scene is changing.
Two other possibilities:
I found a similar case (but stationary scene) where the transform is computed once and reused. Which transform is this, and could it work in my case?
The other possibility I found is to use the initial knowledge to find the overlapping region between the two pictures, and ignore the rest of each picture to save time. Relevant thread
Any help would be greatly appreciated!
Ron

Related

OpenCv 3d Stitching Panorama

I have 7 images from GoPro cameras (5 cameras in a rig, plus one for the top and one for the bottom). I want to stitch all these images together to create a 3D panorama. I have been able to stitch the 5 rig images by using OpenCV's stitching_detailed.cpp. Link to file:
https://raw.githubusercontent.com/opencv/opencv/master/samples/cpp/stitching_detailed.cpp
But I'm not sure how to stitch the top and bottom images (for me, right now, the bottom is not that important, but I have to do something about the top). Any idea how this can be done? Please let me know if I can use the same stitching_detailed.cpp to stitch the top image as well.
The following link contains the images I'm using. It also contains the results I got from stitching the images in the rig.
https://drive.google.com/folderview?id=0B_Bl8s2ePunQcnBaM3A4WDlDcXM&usp=sharing
So first you need to understand how stitching_detailed.cpp works.
1. Feature keypoints are detected in each image using SURF/ORB/SIFT or the like. Then, for each image pair, the best feature matches are found, a homography matrix is calculated, and the number of inliers for each pair is obtained:

    (*finder)(img, features[i]);
    BestOf2NearestMatcher matcher(try_cuda, match_conf);
    matcher(features, pairwise_matches);
2. All these pairs are passed into leaveBiggestComponent to obtain the largest set of images that belong to a panorama.
3. Camera parameters for each image are estimated from the set obtained above, and warping and blending are done.
Step 1 finds a homography for each pair and generates the number of inliers. Step 2 removes all image pairs whose confidence factor (based on the number of inliers) is less than the threshold. Since the cam7 image has so few features and almost no overlapping region with any other image, it gets rejected in the leaveBiggestComponent step.
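If you want to reproduce those two steps outside of stitching_detailed.cpp, a rough sketch using the OpenCV 3.x detail API could look like this (the helper name and parameter values are assumptions; conf_thresh is the pairwise confidence threshold passed to leaveBiggestComponent):

    #include <opencv2/core.hpp>
    #include <opencv2/stitching/detail/matchers.hpp>
    #include <opencv2/stitching/detail/motion_estimators.hpp>
    #include <vector>

    using namespace cv;
    using namespace cv::detail;

    // returns the indices of the images that survive leaveBiggestComponent
    std::vector<int> selectPanoramaImages(const std::vector<Mat>& images,
                                          float match_conf, float conf_thresh)
    {
        // step 1a: per-image features (ORB here, as in the matches linked below)
        Ptr<FeaturesFinder> finder = makePtr<OrbFeaturesFinder>();
        std::vector<ImageFeatures> features(images.size());
        for (size_t i = 0; i < images.size(); ++i) {
            (*finder)(images[i], features[i]);
            features[i].img_idx = (int)i;
        }
        finder->collectGarbage();

        // step 1b: pairwise matching, homography and inlier count per pair
        std::vector<MatchesInfo> pairwise_matches;
        BestOf2NearestMatcher matcher(false /*try_cuda*/, match_conf);
        matcher(features, pairwise_matches);
        matcher.collectGarbage();

        // step 2: keep the largest connected set of images whose pairwise
        // confidence exceeds conf_thresh; low-overlap images (e.g. the top
        // camera) are dropped here
        return leaveBiggestComponent(features, pairwise_matches, conf_thresh);
    }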
You can see the features and matching in this link (I have used ORB features):
https://drive.google.com/open?id=0B2wDitsftUG9QnhCWFIybENkbDA
Also, I have not changed the image size, but I guess reducing the image size a bit (by half, maybe) will yield more feature points.
What you can do is reduce the time interval at which you take frames for stitching. To obtain good results, there must be at least a 40% overlapping region between images.

findHomography usage opencv

I am using OpenCV C++ and am a new user. I am interested in object detection problems. So far I have studied and implemented the use of sparse optical flow (the Lucas-Kanade method) on video from a stationary camera. After trying k-means and background subtraction, I have decided to move to a more difficult problem: the moving camera.
So far I have studied some documentation and found out that I could use cv::findHomography in order to find the inliers or outliers over the sequence of frames in my video, and then understand from the returned values which movement is caused by camera motion and which by object motion. In addition, I could use SURF features to track some objects and then decide which of them are good points.
However, I was wondering how I could implement this theory. For example, should I use the first frame as ground truth, detect some features using SURF, and then use findHomography for each of the remaining frames of the video? Any ideas/help is welcome!
Detecting moving objects from a moving camera is quite a challenging task that requires a solid understanding of multiple-view geometry; besides, there is less information available on this topic (than, for example, on structure from motion), so be warned!
Anyway, a homography matrix will not be a good choice for detecting moving objects (unless you are 100% sure that your background can be represented accurately enough by a flat surface). You should probably use a fundamental matrix or a trifocal tensor.
The fundamental matrix is computed from point correspondences between 2 frames. It associates points in one image with lines in the other image (so-called epipolar lines), and this way it is independent of scene structure. After you have obtained the F matrix using some robust estimation method, like RANSAC or LMEDS (RANSAC seems to be better for this kind of task), you can calculate the reprojection error for each point. Objects that move independently of the scene will not be accurately described by the F matrix and will have a bigger error. So, outliers of the F matrix calculated from image matches over two frames can be considered moving objects. One note though: objects that move along epipolar lines will not be detected by this approach, since their parallax can also be explained by some depth level.
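As an illustration, here is a minimal sketch of that idea with findFundamentalMat; the thresholds and the helper name are placeholders, and a real detector would also look at the magnitude of the epipolar error rather than only the RANSAC mask:

    #include <opencv2/core/core.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <vector>

    using namespace cv;

    // pts1/pts2: matched points from two consecutive frames
    // returns indices of matches that are NOT explained by the camera motion
    std::vector<int> findMovingPoints(const std::vector<Point2f>& pts1,
                                      const std::vector<Point2f>& pts2)
    {
        std::vector<uchar> inlierMask;
        // 1.0 px epipolar distance and 0.99 confidence are placeholder values
        Mat F = findFundamentalMat(pts1, pts2, CV_FM_RANSAC, 1.0, 0.99, inlierMask);

        std::vector<int> movingIdx;
        for (size_t i = 0; i < inlierMask.size(); ++i)
            if (!inlierMask[i])          // outlier of F -> candidate moving point
                movingIdx.push_back((int)i);
        return movingIdx;
    }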
The trifocal tensor does not have the depth/motion ambiguity with objects that move along epipolar lines, but it is harder to estimate and it is not included in OpenCV. It can be calculated from correspondences over 3 frames, and its usage can be conceptually described as triangulating a point from 2 views and then calculating the reprojection error on the third view.
As for the matching: I still think that LK tracking will be better than SURF matching if you work with video sequences, since in that case you don't need to consider very distant points as matches, and tracking is usually faster than detection + matching.
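If you go the LK route, the correspondences fed into findFundamentalMat above could be produced roughly like this (parameter values and the helper name are only assumptions):

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/video/tracking.hpp>
    #include <vector>

    using namespace cv;

    // prevGray/currGray: consecutive grayscale frames
    // fills pts1/pts2 with the point pairs that were tracked successfully
    void trackWithLK(const Mat& prevGray, const Mat& currGray,
                     std::vector<Point2f>& pts1, std::vector<Point2f>& pts2)
    {
        std::vector<Point2f> prevPts, nextPts;
        goodFeaturesToTrack(prevGray, prevPts, 500, 0.01, 8);   // parameters are placeholders

        std::vector<uchar> status;
        std::vector<float> err;
        calcOpticalFlowPyrLK(prevGray, currGray, prevPts, nextPts, status, err);

        pts1.clear();
        pts2.clear();
        for (size_t i = 0; i < status.size(); ++i)
            if (status[i]) {
                pts1.push_back(prevPts[i]);
                pts2.push_back(nextPts[i]);
            }
    }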

Image comparison method with C++ and OpenCV

I am new to OpenCV. I would like to know whether we can compare two images (one of the images made with Photoshop, i.e. the source image, and the other one taken from the camera) and find out whether they are the same or not.
I tried to compare the images using template matching. It does not work. Can you tell me what other procedures we can use for this kind of comparison?
Comparison of images can be done in different ways depending on which purpose you have in mind:
- If you just want to compare whether two images are approximately equal (with a few luminance differences), but with the same perspective and camera view, you can simply compute a pixel-to-pixel squared difference, per color band. If the sum of squares over the two images is smaller than a threshold the images match, otherwise not. (See the sketch after this list.)
- If one image is a black-and-white variant of the other, conversion of the color image is needed (see e.g. http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale). Afterwards, simply perform the step above.
- If one image is a subimage of the other, you need to perform registration of the two images. This means determining the scale, possible rotation and XY-translation that are necessary to lay the subimage onto the larger image (for methods to register images, see: Pluim, J.P.W., Maintz, J.B.A., Viergever, M.A., Mutual-information-based registration of medical images: a survey, IEEE Transactions on Medical Imaging, 2003, Volume 22, Issue 8, pp. 986-1004).
- If you have perspective differences, you need an algorithm for deskewing one image to match the other as well as possible. For ways of doing deskewing, look for example at http://javaanpr.sourceforge.net/anpr.pdf from page 15 onwards.
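For the first case, a minimal sketch of the per-band squared-difference test (the normalization and the threshold handling are assumptions):

    #include <opencv2/core/core.hpp>

    using namespace cv;

    // returns true if two images of the same size and type are "approximately equal"
    // in the per-pixel, per-color-band squared-difference sense
    bool imagesMatch(const Mat& a, const Mat& b, double threshold)
    {
        CV_Assert(a.size() == b.size() && a.type() == b.type());

        Mat diff;
        absdiff(a, b, diff);
        diff.convertTo(diff, CV_32F);
        diff = diff.mul(diff);             // per-pixel, per-band squared difference

        Scalar perBand = sum(diff);        // one sum of squares per color band
        double total = perBand[0] + perBand[1] + perBand[2] + perBand[3];

        // normalize by the number of samples so the threshold is resolution-independent
        return total / (a.total() * a.channels()) < threshold;
    }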
Good luck!
You should try SIFT. You apply SIFT to your marker (the image saved in memory) and you get some descriptors (points robust enough to be recognized). Then you can use the FAST algorithm on the camera frames in order to find the corresponding keypoints of the marker in the camera image.
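A rough sketch of that idea with OpenCV 2.4-style classes (FAST keypoints on the frame, SIFT descriptors matched against the marker's descriptors; the thresholds, the ratio test and the helper name are my own additions):

    #include <opencv2/core/core.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <opencv2/nonfree/features2d.hpp>   // SIFT lives in the nonfree module in 2.4.x
    #include <vector>

    using namespace cv;

    // markerDesc: SIFT descriptors computed once, offline, from the stored marker image
    // frame: current (grayscale) camera image
    std::vector<DMatch> matchMarkerInFrame(const Mat& markerDesc, const Mat& frame)
    {
        // detect keypoints in the camera frame (FAST is cheap)
        FastFeatureDetector detector(20);        // threshold is a placeholder
        std::vector<KeyPoint> frameKp;
        detector.detect(frame, frameKp);

        // describe them with SIFT so they are comparable to the marker descriptors
        SiftDescriptorExtractor extractor;
        Mat frameDesc;
        extractor.compute(frame, frameKp, frameDesc);

        // brute-force matching with a ratio test to keep only distinctive matches
        BFMatcher matcher(NORM_L2);
        std::vector<std::vector<DMatch> > knn;
        matcher.knnMatch(markerDesc, frameDesc, knn, 2);

        std::vector<DMatch> good;
        for (size_t i = 0; i < knn.size(); ++i)
            if (knn[i].size() == 2 && knn[i][0].distance < 0.75f * knn[i][1].distance)
                good.push_back(knn[i][0]);
        return good;
    }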
You have many threads about this topic:
How to get a rectangle around the target object using the features extracted by SIFT in OpenCV
How to search the image for an object with SIFT and OpenCV?
OpenCV - Object matching using SURF descriptors and BruteForceMatcher
Good luck

stitching aerial images

I am trying to stitch 2 aerial images together with very little overlap, probably <500 px of overlap. These images have 3600x2100 resolution. I am using the OpenCV library to complete this task.
Here is my approach:
1. Find feature points and match points between the two images.
2. Find homography between two images
3. Warp one of the images using the homography
4. Stitch the two images
Right now I am trying to get this to work with two images. I am having trouble with step 3 and possibly step 2. I used findHomography() from the OpenCV library to get the homography between the two images. Then I called warpPerspective() on one of my images using that homography.
The problem with this approach is that the transformed image is all distorted. It also seems to transform only a certain part of the image. I have no idea why it is not transforming the whole image.
Can someone give me some advice on how I should approach this problem? Thanks
In the results that you have posted, I can see that you have at least one keypoint mismatch. If you use findHomography(src, dst, 0), it will mess up your homography. You should use findHomography(src, dst, CV_RANSAC) instead.
You can also try to use warpAffine instead of warpPerspective.
Edit: In the results that you posted in the comments to your question, I had the impression that the matching worked quite stably. That means that you should be able to get good results with this example as well. Since you mostly seem to have to deal with translation, you could try to filter out the outliers with the following sketched algorithm (a small code sketch follows the steps):
calculate the average (or median) motion vector x_avg
calculate the normalized dot product <x_avg, x_match>
discard x_match if the dot product is smaller than a threshold
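A possible implementation of those three steps (the cosine threshold and the helper name are placeholders):

    #include <opencv2/core/core.hpp>
    #include <cmath>
    #include <vector>

    using namespace cv;

    // src/dst: matched point pairs (src in the first image, dst in the second)
    // keeps only matches whose motion vector roughly agrees with the average motion
    void filterByMotionDirection(std::vector<Point2f>& src, std::vector<Point2f>& dst,
                                 float minCosine)   // e.g. 0.9f -- an assumption
    {
        if (src.empty() || src.size() != dst.size()) return;

        // 1. average motion vector x_avg
        Point2f xAvg(0.f, 0.f);
        for (size_t i = 0; i < src.size(); ++i)
            xAvg += dst[i] - src[i];
        xAvg *= 1.0f / (float)src.size();

        // 2. + 3. normalized dot product <x_avg, x_match>; discard small values
        std::vector<Point2f> srcKept, dstKept;
        for (size_t i = 0; i < src.size(); ++i) {
            Point2f x = dst[i] - src[i];
            float denom = std::sqrt(x.x * x.x + x.y * x.y) *
                          std::sqrt(xAvg.x * xAvg.x + xAvg.y * xAvg.y);
            float cosine = denom > 1e-6f ? x.dot(xAvg) / denom : 0.f;
            if (cosine >= minCosine) {
                srcKept.push_back(src[i]);
                dstKept.push_back(dst[i]);
            }
        }
        src.swap(srcKept);
        dst.swap(dstKept);
    }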
To make it work for images with smaller overlap, you would have to look at the detector, descriptors and matches. You do not specify which descriptors you work with, but I would suggest using SIFT or SURF descriptors and the corresponding detectors. You should also set the detector parameters to make a dense sampling (i.e., try to detect more features).
You can refer to this answer which is slightly related: OpenCV - Image Stitching
To stitch images using a homography, the most important thing to take care of is finding the correspondence points in both images. The fewer the outliers among the correspondence points, the better the generated homography.
Using a robust technique such as RANSAC along with OpenCV's findHomography() function (use CV_RANSAC as the method) will still generate a reasonable homography, provided the percentage of inliers is higher than the percentage of outliers. Also make sure that there are at least 4 inliers among the correspondence points passed to the findHomography function.
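For reference, a minimal usage sketch with the RANSAC option and the inlier mask (the reprojection threshold and the helper name are just placeholders):

    #include <opencv2/core/core.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <vector>

    using namespace cv;

    // srcPts/dstPts: correspondence points (at least 4 pairs are required)
    // returns the homography and reports how many correspondences were RANSAC inliers
    Mat robustHomography(const std::vector<Point2f>& srcPts,
                         const std::vector<Point2f>& dstPts, int& inlierCount)
    {
        CV_Assert(srcPts.size() >= 4 && srcPts.size() == dstPts.size());

        std::vector<uchar> inlierMask;
        // 3.0 px reprojection threshold is the OpenCV default; adjust for your images
        Mat H = findHomography(srcPts, dstPts, CV_RANSAC, 3.0, inlierMask);

        inlierCount = countNonZero(inlierMask);
        return H;
    }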

What is the best way to divide a video into scenes (segments)

I've been requested to take a given video, probably a simple cartoon, and return an array of its scenes.
I need to use the OpenCV library in order to do it, and the result format is irrelevant (i.e. I can return the time spans of each scene or actually split the video).
Any help would be appreciated.
Thanks
Technically, a scene is a group of shots which are taken successively at a single location. A shot is a basic narrative element of the video, composed of a number of frames presented from a continuous viewpoint.
Automatically dividing a video into its shots is called the shot boundary detection problem in which the basic idea is identifying consecutive frames that form a transition from one shot to another.
Identifying transitions generally involves calculating a similarity (or distance) value between two frames. This value can be calculated using low-level image features such as color, edges or motion. A simple distance metric could be:
s(f1, f2) = sum(i in all pixel locations)(abs(f1color(i) - f2color(i))) / N
where f1 and f2 represent two distinct video frames and N represents the number of pixels in those frames. This is the average first-order (Manhattan) pixel color distance between the two frames.
Say you have a video composed of frames { f1, f2 ... fM } and you have calculated this distance between neighboring frames. A simple decision rule could be labeling the transition from fa to fb as a shot boundary if s(fa, fb) exceeds a certain threshold (i.e. the two frames are sufficiently dissimilar).
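A minimal sketch of that decision rule (the threshold handling and the helper name are placeholders; OpenCV 2.4-style includes):

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>   // VideoCapture (videoio module in OpenCV 3.x)
    #include <string>
    #include <vector>

    using namespace cv;

    // returns the frame indices at which s(f_prev, f_curr) exceeds the threshold
    std::vector<int> detectShotBoundaries(const std::string& videoPath, double threshold)
    {
        VideoCapture cap(videoPath);
        std::vector<int> boundaries;
        if (!cap.isOpened()) return boundaries;

        Mat prev, curr;
        int idx = 0;
        while (cap.read(curr)) {
            if (!prev.empty()) {
                Mat diff;
                absdiff(prev, curr, diff);       // per-pixel, per-band |f1 - f2|
                Scalar s = sum(diff);
                double dist = (s[0] + s[1] + s[2]) / (double)(diff.total() * diff.channels());
                if (dist > threshold)            // large distance -> likely shot boundary
                    boundaries.push_back(idx);
            }
            curr.copyTo(prev);
            ++idx;
        }
        return boundaries;
    }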
A successful shot boundary detector uses distances of second order (or higher), such as the Euclidean distance or the Pearson correlation coefficient, and utilizes a combination of different features instead of only one, say color.
Usually, camera or object movement breaks the pixel correspondence between frames. Using the frequencies of low-level features with the help of histograms is a cure here.
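For example, a histogram-based frame distance could look like this (the bin counts and the Bhattacharyya metric are assumptions; 8-bit BGR frames and OpenCV 2.4-style constants):

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>

    using namespace cv;

    // histogram-based distance between two frames: less sensitive to small
    // camera/object motion than a per-pixel difference
    double histogramDistance(const Mat& f1, const Mat& f2)
    {
        const int channels[] = {0, 1, 2};
        const int histSize[] = {8, 8, 8};              // 8 bins per color band (assumption)
        float range[] = {0, 256};
        const float* ranges[] = {range, range, range};

        Mat h1, h2;
        calcHist(&f1, 1, channels, Mat(), h1, 3, histSize, ranges);
        calcHist(&f2, 1, channels, Mat(), h2, 3, histSize, ranges);
        normalize(h1, h1, 1, 0, NORM_L1);
        normalize(h2, h2, 1, 0, NORM_L1);

        // Bhattacharyya distance: 0 = identical histograms, 1 = no overlap
        return compareHist(h1, h2, CV_COMP_BHATTACHARYYA);
    }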
Also, performing the decision making over more than two frames helps in finding smooth transitions where one shot dissolves into or replaces another over a duration. Deciding over a group of frames also helps in identifying false transitions caused by light flashes or fast-moving cameras.
For your problem, please start from basic approaches like comparing RGB colors and edge responses between video frames. Analyze your results and data together and try to adapt new features, distance metrics and decision making methods for better performance.
The best way of segmenting a video into shots will vary depending on your data. Machine learning approaches like probabilistically modeling frame transitions with Gaussian mixture models or classification through support vector machines are expected to perform better than hand selected thresholds. However it is important that you learn the basics before effectively choosing input features.
Automatically finding shot boundaries will be sufficient to divide your video into meaningful parts. Dividing your video into scenes, on the other hand, is considered a harder, semantic problem. Nevertheless, shot segmentation is the first step towards it.
There is a huge research area focusing on this. Search for papers, you can find various algorithms described in detail.
Here some examples:
Scene detection in Hollywood movies and TV shows
A framework for video scene boundary detection
Video summarization and scene detection by graph modeling
There are plenty more out there, just google.