Bilinear interpolation on fisheye filter - c++

I have to implement a fisheye transformation with bilinear interpolation. After transforming a pixel I no longer have integer coordinates, and I would like to map this pixel onto integer coordinates using bilinear interpolation. The problem is that everything I found about bilinear interpolation on the internet (see for example Wikipedia) does the opposite: it gives the value of a pixel at non-integer coordinates from the four neighbours that have integer coordinates. I would like to do the opposite, i.e. map one pixel with non-integer coordinates onto the four neighbours with integer coordinates. Surely there is something I am missing, and it would be helpful to understand where I am wrong.
EDIT:
To be more clear: let's say I have the pixel (i,j)=(2,2) of the starting image. After the fisheye transformation I obtain non-integer coordinates, for example (2.1,2.2). I want to save this new pixel to a new image, but obviously I don't know in which pixel to store it because of the non-integer coordinates. The easiest way is to truncate the coordinates, but the image quality is not very good: I have to use bilinear interpolation. Still, I don't understand how it works, because I want to split my non-integer pixel among the neighbouring pixels with integer coordinates of the new (transformed) image, but I only found descriptions of the opposite operation, i.e. estimating the value at non-integer coordinates from four integer pixels (http://en.wikipedia.org/wiki/Bilinear_interpolation)

Your question is a little unclear. From what I understand, you have a regular image which you want to transform into a fisheye-like image. To do this, I am guessing you take each pixel coordinate {xr,yr} from the regular image and use the fisheye transformation to obtain the corresponding coordinates {xf,yf} in the fisheye-like image. You would like to assign the initial pixel intensity to the destination pixel, but you do not know how to do this since {xf,yf} are not integer values.
If that's the case, you are actually taking the problem backwards. You should start from integer pixel coordinates in the fisheye image, use the inverse fisheye transformation to obtain floating-point pixel coordinates in the regular image, and use bilinear interpolation to estimate the intensity of the floating point coordinates from the 4 closest integer coordinates.
The basic procedure is as follows:
Start with integer pixel coordinates (xf,yf) in the fisheye image (e.g. (2,3) in the fisheye image). You want to estimate the intensity If associated to these coordinates.
Find the corresponding point in the "starting" image, by mapping (xf,yf) into the "starting" image using the inverse fisheye transformation. You obtain floating-point pixel coordinates (xs,ys) in the "starting" image (e.g. (2.2,2.5) in the starting image).
Use bilinear interpolation to estimate the intensity Is at coordinates (xs,ys), based on the intensities of the 4 closest integer pixel coordinates in the "starting" image (e.g. (2,2), (2,3), (3,2), (3,3) in the starting image).
Assign Is to If.
Repeat from step 1 with the next integer pixel coordinates, until the intensities of all pixels of the fisheye image have been found.
Note that deriving the inverse fisheye transformation might be a little tricky, depending on the equations... However, that is how image resampling has to be performed.
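A minimal sketch of this backward-warping procedure, assuming a single-channel 8-bit image; the inverse fisheye mapping is taken as a user-supplied callback (a placeholder, not something defined in the original answer):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <functional>

// Bilinear lookup of a non-integer position (xs, ys) in a single-channel
// 8-bit "starting" image. Returns 0 for samples outside the image.
static float bilinearSample(const cv::Mat& src, float xs, float ys)
{
    int x0 = static_cast<int>(std::floor(xs));
    int y0 = static_cast<int>(std::floor(ys));
    if (x0 < 0 || y0 < 0 || x0 + 1 >= src.cols || y0 + 1 >= src.rows)
        return 0.0f;

    float dx = xs - x0, dy = ys - y0;

    // Weighted average of the 4 closest integer neighbours.
    return (1 - dx) * (1 - dy) * src.at<uchar>(y0,     x0)
         +      dx  * (1 - dy) * src.at<uchar>(y0,     x0 + 1)
         + (1 - dx) *      dy  * src.at<uchar>(y0 + 1, x0)
         +      dx  *      dy  * src.at<uchar>(y0 + 1, x0 + 1);
}

// Backward warp: iterate over the integer pixels of the *fisheye* image and
// pull intensities from the starting image through the inverse transformation.
void backwardWarp(const cv::Mat& starting, cv::Mat& fisheye,
                  const std::function<void(int, int, float&, float&)>& inverseFisheye)
{
    for (int yf = 0; yf < fisheye.rows; ++yf)
        for (int xf = 0; xf < fisheye.cols; ++xf)
        {
            float xs, ys;
            inverseFisheye(xf, yf, xs, ys);   // placeholder for your inverse transform
            fisheye.at<uchar>(yf, xf) =
                cv::saturate_cast<uchar>(bilinearSample(starting, xs, ys));
        }
}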

You need to find the inverse fisheye transform first, and use "backward warping" to go from the destination image to the source image.
I'll give you a simple example. Say you want to expand the image by a non-integral factor of 1.5. So you have
x_dest = x_source * 1.5, y_dest = y_source * 1.5
Now if you iterate over the coordinates of the original image, you'll get non-integral coordinates in the destination image. E.g., (1,1) will be mapped to (1.5, 1.5). This is your problem, and in general the problem with "forward warping" an image.
Instead, you reverse the transformation and write
x_source = x_dest / 1.5, y_source = y_dest / 1.5
Now you iterate over the destination image pixels. For example, pixel (4,4) in the destination image comes from (4/1.5, 4/1.5) ≈ (2.67, 2.67) in the source image. These are non-integral coordinates, and you use the 4 neighbouring pixels in the source image to estimate the color at this coordinate (in our example the pixels at (2,2), (2,3), (3,2) and (3,3)).
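For this scaling example, the same backward lookup with bilinear interpolation can also be expressed with OpenCV's cv::remap, which takes per-pixel source coordinates; a minimal sketch (the file names and the 1.5 factor are only illustrative):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat source = cv::imread("input.png", cv::IMREAD_GRAYSCALE);  // placeholder input
    const float scale = 1.5f;

    int newW = static_cast<int>(source.cols * scale);
    int newH = static_cast<int>(source.rows * scale);

    // For every destination pixel, store the (non-integral) source coordinate.
    cv::Mat mapX(newH, newW, CV_32FC1), mapY(newH, newW, CV_32FC1);
    for (int y = 0; y < newH; ++y)
        for (int x = 0; x < newW; ++x)
        {
            mapX.at<float>(y, x) = x / scale;   // x_source = x_dest / 1.5
            mapY.at<float>(y, x) = y / scale;   // y_source = y_dest / 1.5
        }

    // Bilinear lookup of the 4 neighbouring source pixels for every destination pixel.
    cv::Mat dest;
    cv::remap(source, dest, mapX, mapY, cv::INTER_LINEAR);
    cv::imwrite("output.png", dest);
    return 0;
}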

Related

Film coordinate to world coordinate

I am working on building a 3D point cloud from feature matching using OpenCV 3.1 and OpenGL.
I have implemented 1) camera calibration (hence I have the intrinsic matrix of the camera) and 2) feature extraction (hence I have 2D points in pixel coordinates).
I was going through a few websites, but generally all of them suggest the flow for converting 3D object points to pixel points, whereas I am doing the completely backward projection. Here is the ppt that explains it well.
I have computed film coordinates (u,v) from pixel coordinates (x,y) (with the help of the intrinsic matrix). Can anyone shed light on how I can recover the "Z" of the camera coordinate (X,Y,Z) from the film coordinate (u,v)?
Please guide me on how I can use OpenCV functions such as solvePnP, recoverPose, findFundamentalMat and findEssentialMat for the desired goal.
With a single camera and a rotating object on a fixed rotation platform I would implement something like this:
Each camera has a resolution xs,ys and a field of view FOV defined by two angles FOVx,FOVy, so either check your camera's data sheet or measure it. From that and the perpendicular distance (z) you can convert any pixel position (x,y) into a 3D coordinate relative to the camera (x',y',z'). So first convert the pixel position to angles:
ax = (x - (xs/2)) * FOVx / xs
ay = (y - (ys/2)) * FOVy / ys
and then compute cartesian position in 3D:
x' = distance * tan(ax)
y' = distance * tan(ay)
z' = distance
That is nice, but on a common image we do not know the distance. Luckily, on such a setup, if we turn our object then any convex edge will produce a maximum ax angle on the sides when crossing the plane perpendicular to the camera. So check a few frames, and if a maximal ax is detected you can assume it is an edge (or convex bump) of the object positioned at distance.
If you also know the rotation angle ang of your platform (relative to your camera), then you can compute the un-rotated position by using the rotation formula around the y axis (the Ay matrix in the link) and the known platform center position relative to the camera (just a subtraction before the un-rotation)... As I mentioned, all this is just simple geometry.
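A minimal sketch of the pixel-to-3D conversion described above, assuming FOVx/FOVy are given in radians and the perpendicular distance is already known (the function name is just for illustration):

#include <opencv2/opencv.hpp>
#include <cmath>

// Convert a pixel (x, y) to a 3D point relative to the camera, given the
// resolution (xs, ys), the field of view (FOVx, FOVy) in radians and the
// perpendicular distance of the object from the camera.
cv::Point3d pixelTo3D(double x, double y, int xs, int ys,
                      double FOVx, double FOVy, double distance)
{
    // pixel position -> viewing angles
    double ax = (x - xs / 2.0) * FOVx / xs;
    double ay = (y - ys / 2.0) * FOVy / ys;

    // angles + known distance -> Cartesian position
    return cv::Point3d(distance * std::tan(ax),
                       distance * std::tan(ay),
                       distance);
}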
In a nutshell:
obtain calibration data
FOVx,FOVy,xs,ys,distance. Some camera datasheets list only FOVx, but if the pixels are square you can compute FOVy from the resolution as
FOVx/FOVy = xs/ys
Beware: with multi-resolution camera modes the FOV can be different for each resolution !!!
extract the silhouette of your object in the video for each frame
you can subtract the background image to ease up the detection
obtain platform angle for each frame
so either use IRC data or place known markers on the rotation disc and detect/interpolate...
detect ax maximum
just inspect the x coordinate of the silhouette (for each y line of the image separately) and if a peak is detected, add its 3D position to your model. Let's assume a rotating rectangular box; some of its frames could look like this:
So inspect one horizontal line across all frames and find the maximal ax. To improve accuracy you can run a closed-loop regulation, turning the platform until the peak is found "exactly". Do this for all horizontal lines separately.
btw. if you detect no change in ax over a few frames, that means a circular shape with the same radius ... so you can handle each such frame as an ax maximum.
Easy as pie, resulting in a 3D point cloud. You can sort it by platform angle to ease up conversion to a mesh ... That angle can also be used as a texture coordinate ...
But do not forget that you will lose some concave details that are hidden inside the silhouette !!!
If this approach is not enough, you can use the same setup for stereoscopic 3D reconstruction, because each rotation behaves as a new (known) camera position.
You can't, if all you have is 2D images from that single camera location.
In theory you could use heuristics to infer a Z stacking. But mathematically your problem is under-defined and there are literally infinitely many different Z coordinates that would satisfy your constraints. You have to supply some extra information. For example, you could move your camera around over several frames (Google "structure from motion"), use multiple cameras, or use a camera that has a depth sensor and gives you complete XYZ tuples (Kinect or similar).
Update due to comment:
For every pixel in a 2D image there is an infinite number of points that project onto it. The technical term for that is a ray. If you have two 2D images of roughly the same volume of space, each image's set of rays (one for each pixel) intersects with the set of rays corresponding to the other image. Which is to say, if you determine the ray for a pixel in image #1, this maps to a line of pixels covered by that ray in image #2. Selecting a particular pixel along that line in image #2 will give you the XYZ tuple for that point.
Since you're rotating the object by a certain angle θ around a certain axis a between images, you actually have a lot of images to work with. All you have to do is derive the camera location by an additional transformation inverse(translate(-a)·rotate(θ)·translate(a)).
Then do the following: Select an image to start with. For the particular pixel you're interested in, determine the ray it corresponds to. For that, simply assume two Z values for the pixel; 0 and 1 work just fine. Transform them back into the space of your object, then project them into the view space of the next camera you chose to use; the result will be two points in the image plane (possibly outside the limits of the actual image, but that's not a problem). These two points define a line within that second image. Find the pixel along that line that matches the pixel in the first image you selected, and project it back into space as done with the first image. Due to numerical round-off errors you're not going to get a perfect intersection of the rays in 3D space, so find the point where the rays are closest to each other (this involves solving a quadratic polynomial, which is trivial).
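As a sketch of that last step (not part of the original answer): minimising the squared distance between the two rays reduces to a small linear system, whose solution gives the closest points and their midpoint as the 3D estimate:

#include <opencv2/opencv.hpp>
#include <cmath>

// Midpoint of closest approach between rays o1 + t1*d1 and o2 + t2*d2,
// obtained by minimising |(o1 + t1*d1) - (o2 + t2*d2)|^2 over t1, t2.
cv::Point3d closestPointBetweenRays(const cv::Point3d& o1, cv::Point3d d1,
                                    const cv::Point3d& o2, cv::Point3d d2)
{
    d1 = d1 * (1.0 / cv::norm(d1));            // normalise directions
    d2 = d2 * (1.0 / cv::norm(d2));

    const cv::Point3d r = o1 - o2;
    const double b = d1.dot(d2);
    const double denom = 1.0 - b * b;          // 0 when the rays are parallel

    double t1 = 0.0, t2 = 0.0;
    if (std::abs(denom) > 1e-12)
    {
        const double c = d1.dot(r);
        const double f = d2.dot(r);
        t1 = (b * f - c) / denom;
        t2 = (f - b * c) / denom;
    }

    const cv::Point3d p1 = o1 + d1 * t1;       // closest point on ray 1
    const cv::Point3d p2 = o2 + d2 * t2;       // closest point on ray 2
    return (p1 + p2) * 0.5;                    // midpoint as the 3D estimate
}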
To select which pixel you want to match between images you can use a feature motion tracking algorithm, as used in video compression or similar. The basic idea is that, for every pixel, a correlation of its surroundings is computed with the same region in the previous image. Where the correlation peaks is where it most likely moved from.
With this pixel tracking in place you can then derive the structure of the object. This is essentially what structure from motion does.

Generate subpixel accuracy disparity map in OpenCV

The purpose is to generate a disparity map for a pair of calibrated stereo images.
A 3D model is projected onto a pair of calibrated stereo images (left/right) using the OpenCV function cv::projectPoints(). cv::projectPoints() gives points in 2D image coordinates as cv::Point2f, which is subpixel accurate.
As some 3D points are projected onto the same pixel region, I only keep the point with the smaller depth/Z, since farther points are occluded by the closer one.
By doing this, I get two index images, one for the left and one for the right image. Each pixel of an index image refers to its position in the 3D model (stored in a std::vector) or in 2D (std::vector).
The following snippet should briefly explain the procedure:
std::vector<cv::Point3f> model_3D;
std::vector<cv::Point2f> projectedPointsL, projectedPointsR;
cv::projectPoints(model_3D, rvec, tvec, P1.colRange(0,3), cv::noArray(), projectedPointsL);
cv::projectPoints(model_3D, rvec, tvec, P2.colRange(0,3), cv::noArray(), projectedPointsR);
// Each pixel in indexImage is an index pointing to a position in the vector of projected points
cv::Mat indexImageL, indexImageR;
// This function filter projected points and return the index image
filterProjectedPoints(projectedPointsL, model_3D, indexImageL);
filterProjectedPoints(projectedPointsR, model_3D, indexImageR);
In order to generate the disparity map, I can either:
1. For each pixel in the disparity map, find the corresponding pixel position in the left/right index images and subtract their positions. This gives an integer disparity (not subpixel accuracy);
2. For each pixel in the disparity map, find its 2D (floating-point) position in both the left/right projected points and calculate the difference along the x axis as the disparity. This gives a subpixel-accuracy disparity.
The first way is straightforward but introduces error due to ignoring the subpixel positions of the projected points.
However, the second way also introduces error, as a pair of projected pixels (from the same 3D point) may be projected to different locations within a pixel. For example, a projected point in the left image is (115.289, 80.393) and in the right image it is (145.686, 79.883). Its position in the disparity map will be (115, 80) and the disparity can be: 145.686 - 115.289 = 30.397. As you can see, they may not be exactly row-aligned so as to have the same y coordinate.
Questions are:
1. Are both ways correct (apart from introducing error)?
2. If the 2nd way is correct, is the error negligible when computing the subpixel-accuracy disparity?
Well, you can also tell me how you would calculate a subpixel disparity map in this scenario.
In a rectified image pair, the disparity is simply d = f*b/(z+f), where f is the focal length, b is the baseline between the two cameras and z is the distance to the object normal to the image planes. This assumes a basic pinhole camera model.
By approximating z >> f, this becomes d = f*b/z, i.e. d is inversely proportional to z.
So you can just calculate the disparity map analytically.
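A minimal sketch of that analytic computation using the approximate formula d = f*b/z, assuming the index images from the question store an index into model_3D (expressed in the left, rectified camera frame) or -1 where nothing was projected, and that the focal length and baseline come from the calibration:

#include <opencv2/opencv.hpp>
#include <vector>

// Build a subpixel disparity map analytically from the depth of the
// projected 3D points: d = f * b / Z (rectified pinhole model).
cv::Mat disparityFromDepth(const cv::Mat& indexImageL,               // CV_32S, -1 where empty
                           const std::vector<cv::Point3f>& model_3D, // points in left camera frame
                           float focal, float baseline)
{
    cv::Mat disparity(indexImageL.size(), CV_32FC1, cv::Scalar(0));
    for (int y = 0; y < indexImageL.rows; ++y)
        for (int x = 0; x < indexImageL.cols; ++x)
        {
            int idx = indexImageL.at<int>(y, x);
            if (idx < 0) continue;                       // no projected point here
            float Z = model_3D[idx].z;                   // depth of the visible point
            if (Z > 0.0f)
                disparity.at<float>(y, x) = focal * baseline / Z;
        }
    return disparity;
}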

Opencv Warp perspective whole image

I'm struggling with this problem:
I have an image and I want to apply a perspective warp to it (I already have the transformation matrix), but instead of the output containing only the transformed area (like the example below), I want to be able to see the whole image instead.
EXAMPLE http://docs.opencv.org/trunk/_images/perspective.jpg
Instead of obtaining only the transformed region, as in this example, I want to transform the whole original image.
How can I achieve this?
Thanks!
It seems that you are computing the perspective transform by selecting the corners of the sudoku grid in the input image and requesting them to be warped at fixed location in the output image. In your example, it seems that you are requesting the top-left corner to be warped at coordinates (0,0), the top-right corner at (300,0), the bottom-left at (0,300) and the bottom-right at (300,300).
This will always result in cropping the image area to the left of the two left corners and above the two top corners (i.e. the image area where x<0 or y<0 in the output image). Also, if you specify an output image size of 300x300, this crops the image area to the right of the right corners and below the bottom corners.
If you want to keep the whole image, you need to use different output coordinates for the corners. For example warp TLC to (100, 100), TRC to (400,100), BLC to (100,400) and BRC to (400,400), and specify an output image size of 600x600 for instance.
You can also calculate the optimal coordinates as follows:
Compute the default perspective transform H0 (as you are doing now)
Transform the corners of the input image using H0, and compute the minimum and maximum values for the x and y coordinates of these corners. Let's denote them xmin, xmax, ymin, ymax.
Compute the translation necessary to map the point (xmin,ymin) to (0,0). The matrix of this translation is T = [1, 0, -xmin; 0, 1, -ymin; 0, 0, 1].
Compute the optimised perspective transform H1 = T*H0 and specify an output image size of (xmax-xmin) x (ymax-ymin).
This way, you are guaranteed that:
the four corners of your input sudoku grid will form a true square
the output image will be translated so that no useful image data is cropped above or to the left of the grid corners
the output image will be sized so that no useful image data is cropped below or to the right of the grid corners
However, this will generate black areas, since the output image is no longer a perfect rectangle, hence some pixels in the output image won't have any correspondence in the input image.
Edit 1: If you want to replace the black areas with something else, you can initialize the destination matrix as you wish and then set the borderMode parameter of the warpPerspective function to BORDER_TRANSPARENT.
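A sketch of steps 1-4 above in OpenCV, assuming H0 is a CV_64F perspective transform (e.g. as returned by cv::getPerspectiveTransform); the function name is just for illustration:

#include <opencv2/opencv.hpp>
#include <vector>

// Warp the whole input image: translate the default transform H0 so that
// nothing falls at negative coordinates, and size the output to fit.
cv::Mat warpWholeImage(const cv::Mat& input, const cv::Mat& H0)
{
    // Corners of the input image.
    std::vector<cv::Point2f> corners = {
        {0.f, 0.f},
        {static_cast<float>(input.cols), 0.f},
        {static_cast<float>(input.cols), static_cast<float>(input.rows)},
        {0.f, static_cast<float>(input.rows)} };

    // Transform the corners with H0 and take their bounding box
    // (xmin, ymin, xmax-xmin, ymax-ymin).
    std::vector<cv::Point2f> warpedCorners;
    cv::perspectiveTransform(corners, warpedCorners, H0);
    cv::Rect box = cv::boundingRect(warpedCorners);

    // Translation T mapping (xmin, ymin) to (0, 0).
    cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, -box.x,
                                           0, 1, -box.y,
                                           0, 0, 1);

    // Optimised transform H1 = T*H0 and output size (xmax-xmin) x (ymax-ymin).
    cv::Mat output;
    cv::warpPerspective(input, output, T * H0, box.size());
    return output;
}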

Projection of set of 3D points into virtual image plane in opencv c++

Does anyone know how to project a set of 3D points onto a virtual image plane in OpenCV C++?
Thank you
First you need to have your transformation matrix defined (rotation, translation, etc.) to map the 3D space onto the 2D virtual image plane; then just multiply your 3D point coordinates (x, y, z) by the matrix to get the 2D coordinates in the image.
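With OpenCV specifically, cv::projectPoints wraps this up; a minimal sketch with placeholder intrinsics and pose (all values here are just examples):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // 3D points to project (placeholder values).
    std::vector<cv::Point3f> points3D = { {0.f, 0.f, 5.f}, {1.f, -1.f, 4.f} };

    // Virtual camera: intrinsics K, no lens distortion, pose rvec/tvec.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                           0, 800, 240,
                                           0,   0,   1);
    cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);   // rotation (Rodrigues vector)
    cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);   // translation

    std::vector<cv::Point2f> points2D;
    cv::projectPoints(points3D, rvec, tvec, K, cv::noArray(), points2D);

    for (const auto& p : points2D)
        std::cout << p << std::endl;
    return 0;
}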
Registration (OpenNI 2) or the alternative viewpoint capability (OpenNI 1.5) does indeed help to align depth with RGB using a single line of code. The price you pay is that you cannot really restore the exact X, Y point locations in 3D space, since the row and col are moved after alignment.
Sometimes you need not only Z but also X, Y, and want them to be exact; plus you want the alignment of depth and RGB. Then you have to align RGB to depth. Note that this alignment is not supported by Kinect/OpenNI. The price you pay for this: there are no RGB values at the locations where depth is undefined.
If one knows the extrinsic parameters, that is, the rotation and translation of the depth camera relative to the color one, then alignment is just a matter of creating an alternative viewpoint: restore 3D from depth, and then look at your point cloud from the point of view of the color camera, that is, apply the inverse rotation and translation. For example, moving the camera to the right is like moving the world (points) to the left. Reproject 3D into 2D and interpolate if needed. This is really easy and is just the inverse of 3D reconstruction; below, Cx is close to w/2 and Cy to h/2:
col = focal*X/Z+Cx
row = -focal*Y/Z+Cy // this is because row in the image increases downward
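A sketch of that alignment, assuming a depth image in millimetres, known intrinsics for both cameras, and extrinsics (R, t) mapping depth-camera coordinates into the color camera frame; every name here is an assumption for illustration:

#include <opencv2/opencv.hpp>

// Align depth to the color camera: back-project each depth pixel to 3D,
// move it into the color camera frame with (R, t), and reproject.
cv::Mat alignDepthToColor(const cv::Mat& depth,                        // CV_16U, depth in mm
                          double focalD, double cxD, double cyD,       // depth intrinsics
                          double focalC, double cxC, double cyC,       // color intrinsics
                          const cv::Matx33d& R, const cv::Vec3d& t)    // color <- depth
{
    cv::Mat aligned(depth.size(), CV_16UC1, cv::Scalar(0));
    for (int row = 0; row < depth.rows; ++row)
        for (int col = 0; col < depth.cols; ++col)
        {
            double Z = depth.at<ushort>(row, col);
            if (Z <= 0) continue;                          // undefined depth

            // Restore 3D in the depth camera frame (note the -Y convention above).
            cv::Vec3d P((col - cxD) * Z / focalD,
                        -(row - cyD) * Z / focalD,
                        Z);

            // Move into the color camera frame.
            cv::Vec3d Q = R * P + t;
            if (Q[2] <= 0) continue;

            // Reproject into the color image (col = f*X/Z + Cx, row = -f*Y/Z + Cy).
            int c = cvRound( focalC * Q[0] / Q[2] + cxC);
            int r = cvRound(-focalC * Q[1] / Q[2] + cyC);
            if (c >= 0 && c < aligned.cols && r >= 0 && r < aligned.rows)
                aligned.at<ushort>(r, c) = static_cast<ushort>(Q[2]);
        }
    return aligned;
}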
A proper but also more expensive way to get a nice depth map after rotating the point cloud is to trace rays from each pixel until they intersect the point cloud or come sufficiently close to one of the points. This way you will have fewer holes in your depth map due to sampling artifacts.

creating image using polar coordinates (image transformations)

I'm working on image warping. x and y are the transformed versions of the real coordinates of an image, and r and theta are the polar coordinates of the transformed image
(cylindrical anamorphosis). I have the transformation functions, but I'm confused about a certain thing. I'm getting polar coordinates from the transformation functions, which can easily be converted to Cartesian. But how do I draw this transformed image, since the new size will be different from the old image size?
EDIT: I have the image as shown on the cylinder. I have the transformation function to convert it into the illusion image as shown. As this image's size is different from the original image's, how do I ensure that all points of the main image are being transformed? Moreover, the coordinates of those points in the transformed image are polar. Can I use OpenCV to form the new image using the transformed polar coordinates?
REF: http://www.physics.uoguelph.ca/phyjlh/morph/Anamorph.pdf
You have two problems here. In my understanding, the bigger problem arises because you are converting discrete integer coordinates into floating-point coordinates. The other problem is that the resulting image's size is larger or smaller than the original image's size. Additionally, the resulting image does not have to be rectangular, so it will have to be either cropped or filled with black pixels along the corners.
According to http://opencv.willowgarage.com/documentation/geometric_image_transformations.html there is no radial transformation routine.
I'd suggest you do the following:
Upscale the original image to have width*2, height*2. Set the new image to black. (cvResize, cvZero)
Run over each pixel in the original image. Find the new coordinates of the pixel. Add 1/9 of its value to all 8 neighbors of the new coordinates, and to the new coordinates itself. (CV_IMAGE_ELEM(...) += 1.0/9 * ....)
Downscale the new image back to the original width, height.
Depending on the result, you may want to use a sharpening routine.
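Here is a sketch of this forward-mapping idea; instead of the fixed 1/9 weights described above it splats each source pixel onto its 4 integer neighbours with bilinear weights and normalises by the accumulated weight (which avoids brightness artifacts). transformCoords is a placeholder for your cylindrical transformation:

#include <opencv2/opencv.hpp>
#include <cmath>
#include <functional>

// Forward-map a grayscale image: each source pixel lands on non-integer
// destination coordinates and is split among its 4 integer neighbours
// with bilinear weights. Accumulated weights are used for normalisation.
cv::Mat forwardWarp(const cv::Mat& src, cv::Size dstSize,
                    const std::function<cv::Point2f(int, int)>& transformCoords)
{
    cv::Mat acc(dstSize, CV_32FC1, cv::Scalar(0));
    cv::Mat weight(dstSize, CV_32FC1, cv::Scalar(0));

    for (int y = 0; y < src.rows; ++y)
        for (int x = 0; x < src.cols; ++x)
        {
            cv::Point2f p = transformCoords(x, y);       // non-integer destination
            int x0 = static_cast<int>(std::floor(p.x));
            int y0 = static_cast<int>(std::floor(p.y));
            float dx = p.x - x0, dy = p.y - y0;
            float w[4] = { (1 - dx) * (1 - dy), dx * (1 - dy),
                           (1 - dx) * dy,       dx * dy };
            int nx[4] = { x0, x0 + 1, x0,     x0 + 1 };
            int ny[4] = { y0, y0,     y0 + 1, y0 + 1 };

            for (int k = 0; k < 4; ++k)
            {
                if (nx[k] < 0 || ny[k] < 0 || nx[k] >= dstSize.width || ny[k] >= dstSize.height)
                    continue;
                acc.at<float>(ny[k], nx[k])    += w[k] * src.at<uchar>(y, x);
                weight.at<float>(ny[k], nx[k]) += w[k];
            }
        }

    cv::Mat dst(dstSize, CV_8UC1, cv::Scalar(0));
    for (int y = 0; y < dstSize.height; ++y)
        for (int x = 0; x < dstSize.width; ++x)
            if (weight.at<float>(y, x) > 1e-6f)
                dst.at<uchar>(y, x) =
                    cv::saturate_cast<uchar>(acc.at<float>(y, x) / weight.at<float>(y, x));
    return dst;
}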
If you want to KEEP some pixels that go out of bounds, that's a different question. Basically you want to find the Min and Max of the coordinates you receive; so, for example, if your original image has Min,Max = [0,1024] and your new MinNew,MaxNew = [-200,1200], you make a function
normalize(float &convertedx, float &convertedy)
{
    // map the new range [MinNewX,MaxNewX] back into the original range [MinX,MaxX]
    convertedx = MinX + (MaxX - MinX) * (convertedx - MinNewX) / (MaxNewX - MinNewX);
    convertedy = MinY + (MaxY - MinY) * (convertedy - MinNewY) / (MaxNewY - MinNewY);
}