Interpolation warp - c++

I use OpenCV with C++.
I have a std::vector<std::pair<cv::Point2d, cv::Point2d> > which represents a warp.
For each point of image A, I associate a point of image B.
I don't know the association for every point of image A; the known points of image A form a sparse set, and the data also probably contain a small (epsilon) error.
So I would like to interpolate.
I haven't found a function in OpenCV that simply does such an interpolation.
How can I do this?
I found the function cv::warpPoint, but I know neither the cv::Mat camera intrinsic parameters nor the cv::Mat camera rotation matrix.
How can I compute these matrices from my data?

I think the best way is a piecewise affine warp:
https://code.google.com/p/imgwarp-opencv/
I have my own fast implementation, but the comments are in Russian; you can find it here.

So there are 2 questions:
How to warp the points from one image to the other.
Try cv::remap for that, once you have a dense (interpolated) description. See http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/remap/remap.html for an example.
How to compute the non-given point pairs by interpolation.
I don't have a ready solution for this, but some ideas:
Don't use point pairs but displacement vectors; displacements might be easier to interpolate.
Use an inverse formulation to get a dense description of the second image (otherwise there might be pixels that aren't touched at all).
But I guess the "real" method to do this would be some kind of spline interpolation. A rough sketch combining the two ideas above follows below.
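A minimal sketch of that combination, assuming the pairs are stored in the inverse direction (first = destination pixel, second = where to sample in the source image) and interpolated into a dense map with simple inverse-distance weighting; a thin-plate spline would be the more principled interpolant. Only cv::remap is an actual OpenCV call, the function name and weighting scheme are illustrative:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>

// Sparse correspondences, inverse formulation: for a point in the destination image,
// where to sample in the source image. This way every destination pixel gets a value.
cv::Mat warpSparse(const cv::Mat& src,
                   const std::vector<std::pair<cv::Point2d, cv::Point2d>>& pairsDstToSrc)
{
    cv::Mat mapX(src.size(), CV_32FC1), mapY(src.size(), CV_32FC1);

    for (int y = 0; y < src.rows; ++y) {
        for (int x = 0; x < src.cols; ++x) {
            // Inverse-distance-weighted interpolation of the displacement vectors.
            double wSum = 0.0, dx = 0.0, dy = 0.0;
            for (const auto& pr : pairsDstToSrc) {
                const cv::Point2d d = pr.second - pr.first;               // displacement dst -> src
                const double dist2 = std::pow(x - pr.first.x, 2) + std::pow(y - pr.first.y, 2);
                const double w = 1.0 / (dist2 + 1e-6);                    // epsilon avoids division by zero
                wSum += w;
                dx += w * d.x;
                dy += w * d.y;
            }
            mapX.at<float>(y, x) = static_cast<float>(x + dx / wSum);
            mapY.at<float>(y, x) = static_cast<float>(y + dy / wSum);
        }
    }

    cv::Mat dst;
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR);  // sample the source image at the interpolated positions
    return dst;
}
```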

Related

ransac with homography vs 8/5 point algorithms

I'm beginning to learn computer vision and I'm confused about the difference between the two.
I know that the 8-point algorithm is used to compute the fundamental matrix and the 5-point algorithm is used to compute the essential matrix. Both can be used to determine the relative camera pose.
I also found that the relative camera pose can be determined using RANSAC with a homography (see the RANSAC method at https://inspirit.github.io/jsfeat/#multiview).
Is there a difference between using RANSAC with a homography as opposed to using those algorithms?
First of all, note that you still need RANSAC with the 8-point or 5-point algorithms, since in practice outliers are to be expected in the matching process.
I think the main downside of pose from homography is that the point matches you use need to be coplanar. Additionally, if I'm not mistaken, in a scene with more than one plane, you might get different homographies depending on which planes you select in the scene. That is why applying a homography to correct perspective adds distortion to some other parts of the image (see the example in this video). So in complex scenes (e.g. urban environments) where matching is more difficult, I'd use the 8-point or the 5-point algorithm.
Note that you can also recover the relative pose directly (up to scale, obviously), and compute the essential matrix from that (see this paper). It's easier than computing the fundamental/essential matrix and then extracting the relative pose.
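For what it's worth, in OpenCV (3.x and later) the 5-point-plus-RANSAC path looks roughly like the sketch below; the matched point vectors and the intrinsics K are assumed to come from your own feature matching and calibration:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// pts1, pts2: matched keypoints (with outliers), K: known camera intrinsics.
void relativePoseFivePoint(const std::vector<cv::Point2f>& pts1,
                           const std::vector<cv::Point2f>& pts2,
                           const cv::Mat& K)
{
    cv::Mat inlierMask;
    // 5-point algorithm wrapped in RANSAC to reject outlier matches.
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, inlierMask);

    // Decompose E and resolve the fourfold ambiguity with a cheirality check.
    cv::Mat R, t;
    cv::recoverPose(E, pts1, pts2, K, R, t, inlierMask);
    // R, t now hold the relative pose (t only up to scale).
}
```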

Calculating the precision of homography on 2D plane

I am trying to find a way to parametrize the precision of my homography calculation. I would like to obtain a value that describes the precision of the homography calculation for a measurement taken at a certain position.
I have successfully calculated the homography (with cv::findHomography) and I can use it to map a point on my camera image onto a 2D map (using cv::perspectiveTransform). Now I want to track these objects on my 2D map, and to do this I want to take into account that objects in the back of my camera image have a less precise position on my 2D map than objects all the way in the front.
I have looked at the following example on this website that mentions plane fitting but I don't really understand how to fill the matrices correctly using this method. The visualisation of the result does seem to fit my needs. Is there any way to do this with standard OpenCV functions?
EDIT:
Thanks Francesco for your recommendations, but I think I am looking for something different from your answer. I am not looking to test the precision of the homography itself, but the relation between the density of measurements in one real camera view and the actual size on the map I create. I want to know, when I am 1 pixel off in my detection in the camera image, how many meters that corresponds to on my map at that point.
I can of course calculate this by taking some pixels around my measurement in the camera image and using the homography to see how many meters on the map they represent, but I don't want to do that for every measurement. What I would like is a formula that tells me the relation between pixels in my image and pixels on my map, so I can take it into account for my tracking on the map.
What you are looking for is called "predictive error bars" or "prediction uncertainty". You should definitely consult a good introductory book on estimation theory for details (e.g. this one). But briefly, the predictive uncertainty is the probability that...
A certain pixel p in image 1 is the mapping H(p') of a pixel p' in image 2 under the homography H, ...
Given the uncertainty in H which is due to the errors in the matched pairs (q0, q0'), (q1, q1'), ..., that have been used to estimate H, ...
But assuming the model is correct, that is, that the true map between images 1 and 2 is, in fact, a homography (although the estimated parameters of the homography itself may be affected by errors).
In order to estimate this probability distribution you'll need a model for the errors in the measurements, and a model for how they propagate through the (homography) model.
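A minimal sketch of the simplest such propagation model, a first-order ("delta method") push of a pixel covariance through a fixed homography, which also bears on the question in the edit (how far 1 pixel in the image moves on the map): only the 3x3 H is taken from cv::findHomography, the point and the assumed 1-pixel detection uncertainty are illustrative.

```cpp
#include <opencv2/opencv.hpp>

// 2x2 Jacobian of the homography mapping (x, y) -> (u, v) at a given image point.
// u = (h11*x + h12*y + h13) / w,  v = (h21*x + h22*y + h23) / w,  w = h31*x + h32*y + h33
cv::Matx22d homographyJacobian(const cv::Matx33d& H, const cv::Point2d& p)
{
    const double w = H(2,0)*p.x + H(2,1)*p.y + H(2,2);
    const double u = (H(0,0)*p.x + H(0,1)*p.y + H(0,2)) / w;
    const double v = (H(1,0)*p.x + H(1,1)*p.y + H(1,2)) / w;
    return cv::Matx22d((H(0,0) - H(2,0)*u) / w, (H(0,1) - H(2,1)*u) / w,
                       (H(1,0) - H(2,0)*v) / w, (H(1,1) - H(2,1)*v) / w);
}

int main()
{
    cv::Matx33d H = cv::Matx33d::eye();  // replace with the homography from cv::findHomography (image -> map)
    cv::Point2d p(320, 400);             // detection in the camera image (illustrative)

    cv::Matx22d J = homographyJacobian(H, p);

    // Assumed 1-pixel isotropic detection uncertainty in the camera image.
    cv::Matx22d sigmaImage = cv::Matx22d::eye();

    // First-order propagation: covariance on the map is J * Sigma * J^T.
    cv::Matx22d sigmaMap = J * sigmaImage * J.t();

    // The singular values of J (equivalently, the square roots of the eigenvalues of
    // sigmaMap) give "map units per image pixel" at this point.
    return 0;
}
```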

How to move epipole to the outside of the image

Hi, I have computed the fundamental matrix from two images and found that the epipoles lie within the images. I cannot do the rectification in MATLAB if an image contains an epipole.
May I know how to compute the fundamental matrix such that the epipole is not in the image?
The epipolar geometry is the intrinsic projective geometry between two
views. It is independent of scene structure, and only depends on the
cameras' internal parameters and relative pose.
So the intrinsics/extrinsics of the cameras define the fundamental matrix that you get (i.e. you cannot compute a different fundamental matrix such that the epipoles are not in the image).
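As a side note, a minimal sketch of how to locate the epipoles from a given F (they are the right and left null vectors of F), so you can check whether they fall inside the image bounds; F is assumed to be a 3x3 CV_64F matrix, and the same three lines work with MATLAB's svd:

```cpp
#include <opencv2/opencv.hpp>

// Epipoles are the null vectors of F: F*e = 0 and F^T*e' = 0.
void epipolesFromF(const cv::Mat& F, cv::Point2d& e1, cv::Point2d& e2)
{
    cv::Mat w, u, vt;
    cv::SVD::compute(F, w, u, vt);

    cv::Mat e  = vt.row(2).t();   // right null vector -> epipole in image 1
    cv::Mat ep = u.col(2);        // left  null vector -> epipole in image 2

    // De-homogenize; if the third coordinate is ~0 the epipole is at infinity
    // (i.e. already "outside" any finite image).
    e1 = cv::Point2d(e.at<double>(0)  / e.at<double>(2),  e.at<double>(1)  / e.at<double>(2));
    e2 = cv::Point2d(ep.at<double>(0) / ep.at<double>(2), ep.at<double>(1) / ep.at<double>(2));
}
```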
What you can do is take a different pair of images (with a different camera geometry, for example); then you may get epipoles outside of the image.
The problem you're actually having is that the rectification algorithm you're using is limited and doesn't work when the epipole is inside the image. Note that there exist other algorithms that do not have this limitation. I have implemented such an algorithm in the past and may be able to find the (MATLAB) code, so please let me know if you're interested.
If you're in the mood to learn more about epipolar geometry and the fundamental matrix, I recommend you take a look here:

Affine homography computation

Suppose you have a homography H between two images. The first image is the reference image, where the planar object covers the entire image (and is parallel to the image plane). The second image depicts the planar object from another arbitrary view (the run-time image). Now, given a point p=(x,y) in the reference image, I have a rectangular region of pixels of size SxS (with S <= 20 pixels) around p (call it a patch). I can unwarp this patch using the pixels in the run-time image and the inverse homography H^(-1).
Now, what I want to do is to compute, given H, an affine homography H_affine suitable for the patch around the point p. The naive way I am using is to compute 4 point correspondences: the four corners of the patch and the corresponding points in the run-time image (computed using the full homography H). Given these four point correspondences (all belonging to a small neighborhood of the point p), one can compute the affine homography by solving a simple linear system (using the gold standard algorithm). The affine homography computed this way approximates the full projective homography with reasonable precision (below 0.5 pixel), since we are in a small neighborhood of p (if the scale is not too unfavorable, that is, if the SxS patch does not correspond to a large region in the run-time image).
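A minimal sketch of that naive approach, assuming H maps the reference image to the run-time image; cv::perspectiveTransform maps the patch corners and the overdetermined 6-unknown system is solved in a least-squares sense. The function name and corner layout are illustrative:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Local affine approximation of H around p, from the 4 corners of the SxS patch.
cv::Mat affineAroundPoint(const cv::Mat& H, const cv::Point2f& p, float S)
{
    const float h = S / 2.0f;
    std::vector<cv::Point2f> corners = {
        {p.x - h, p.y - h}, {p.x + h, p.y - h},
        {p.x + h, p.y + h}, {p.x - h, p.y + h}
    };

    // Map the corners with the full projective homography.
    std::vector<cv::Point2f> mapped;
    cv::perspectiveTransform(corners, mapped, H);

    // Solve the 8x6 system [x y 1 0 0 0; 0 0 0 x y 1] * a = [u; v] for the
    // 6 affine parameters in the least-squares sense (gold standard algorithm).
    cv::Mat A(8, 6, CV_64F, cv::Scalar(0)), b(8, 1, CV_64F);
    for (int i = 0; i < 4; ++i) {
        A.at<double>(2*i, 0)   = corners[i].x;  A.at<double>(2*i, 1)   = corners[i].y;  A.at<double>(2*i, 2)   = 1;
        A.at<double>(2*i+1, 3) = corners[i].x;  A.at<double>(2*i+1, 4) = corners[i].y;  A.at<double>(2*i+1, 5) = 1;
        b.at<double>(2*i)   = mapped[i].x;
        b.at<double>(2*i+1) = mapped[i].y;
    }
    cv::Mat a;
    cv::solve(A, b, a, cv::DECOMP_SVD);

    // Repack into the usual 2x3 affine matrix.
    return (cv::Mat_<double>(2, 3) << a.at<double>(0), a.at<double>(1), a.at<double>(2),
                                      a.at<double>(3), a.at<double>(4), a.at<double>(5));
}
```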
Is there a faster way to compute H_affine given H (related to the point p and the patch SxS)?
You say that you already know H, but then it sounds like you're trying to compute it all over again, only this time calling the result H_affine. The correct H would be a projective transformation, and it can be uniquely decomposed into 3 parts representing the projective, the affine, and the similarity components. If you already know H and only want the affine part and below, then decompose H and drop its projective component. If you don't know H, then the 4-point correspondence approach is the way to go.
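For reference, a sketch of the decomposition referred to above (the stratified decomposition from Hartley & Zisserman, Sec. 2.4); the exact normalization of the factors is a matter of convention:

H = H_S * H_A * H_P = [ sR  t ; 0^T  1 ] * [ K  0 ; 0^T  1 ] * [ I  0 ; v^T  v ]

with K upper-triangular and det(K) = 1. H_S is the similarity part, H_A the affine part, and H_P the purely projective part; dropping H_P leaves the affinity H_S * H_A.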

Convert Polar Image to a Cartesian Image

I am attempting to convert an image in polar coordinates (axes are angle x radius) to an image in cartesian coordinates (axes are x and y).
This is simple enough in MATLAB using pcolor(), but the issue is that I must do this in a mex file (C++ interface to MATLAB). This seems easy enough except that MATLAB ONLY uses array containers, so I can't think of a clever or elegant way of doing this.
I do have access to the image dimensions, and I can imagine a very messy way of repackaging the input image array as a matrix in C++ and carrying out the conversion, but this would be messy and problematic.
Also, I need to be able to interpolate gaps between points in the xy plane.
Any ideas?
This is reasonably standard in image processing, particularly in registration. However, it takes some thought and isn't "obvious". It wasn't obvious to me the first time either.
I'm assuming you have two images, in different "domains", in your case a source image in polar coordinates and a target image in Cartesian coordinates. I'm assuming you know the region in the target image you want to populate.
The standard approach in image processing is to loop over the coordinates in the known area of the target image that you want to populate. For each of these positions (x, y), compute the corresponding polar position, typically r = sqrt(x*x + y*y) and theta = atan2(y, x) or something like that. Then sample from that position in the polar image with interpolation (a sketch follows after the list below).
Among choices of interpolation are:
Nearest neighbor - just round to the nearest r and theta and take that value.
Bilinear - weight the four surrounding samples by their distances.
Bicubic
...
Of course you should take care of boundary conditions and what happens if your r and theta go out of your image.
This procedure is the same (loop over the target image, apply the reverse transform, and sample from the source image) for all kinds of coordinate transformations. The nice thing is that you don't leave holes where your source image is relevant.
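A minimal sketch of that inverse-mapping loop in plain C++, written to work on the raw column-major double arrays you get from mxGetPr in a mex file; the axis ranges, centering, and array layout are illustrative assumptions you'd adapt to your data:

```cpp
#include <algorithm>
#include <cmath>

// Polar image: rows index radius r in [0, rMax], columns index angle theta in [0, 2*pi).
// Both images are column-major double arrays, as MATLAB stores them
// (element (row, col) lives at data[col * numRows + row]).
void polarToCartesian(const double* polar, int nR, int nTheta,
                      double* cart, int nX, int nY, double rMax)
{
    const double kPi = 3.14159265358979323846;

    for (int ix = 0; ix < nX; ++ix) {
        for (int iy = 0; iy < nY; ++iy) {
            // Center the Cartesian grid on the origin.
            const double x = ix - nX / 2.0;
            const double y = iy - nY / 2.0;

            // Inverse transform: Cartesian target position -> polar source position.
            const double r     = std::sqrt(x * x + y * y);
            const double theta = std::atan2(y, x) + kPi;        // shift into [0, 2*pi)

            // Continuous source coordinates in the polar image.
            const double rf = r / rMax * (nR - 1);
            const double tf = theta / (2.0 * kPi) * (nTheta - 1);

            double value = 0.0;                                  // background for out-of-range pixels
            if (rf <= nR - 1) {
                // Bilinear interpolation between the four surrounding polar samples.
                const int r0 = (int)rf, r1 = std::min(r0 + 1, nR - 1);
                const int t0 = (int)tf, t1 = std::min(t0 + 1, nTheta - 1);
                const double fr = rf - r0, ft = tf - t0;
                value = (1 - fr) * (1 - ft) * polar[t0 * nR + r0]
                      + (1 - fr) * ft       * polar[t1 * nR + r0]
                      + fr       * (1 - ft) * polar[t0 * nR + r1]
                      + fr       * ft       * polar[t1 * nR + r1];
            }
            cart[ix * nY + iy] = value;   // column-major output as well
        }
    }
}
```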
Hope this helps with the image part.
As for the mex part, here are some links:
Mex tutorial
Mex tutorial
Can you be more specific about what you need about the mex part?