Generate subpixel accuracy disparity map in OpenCV - c++

The purpose is to generate a disparity map for a pair of calibrated stereo images.
A 3D model is projected onto a pair of calibrated stereo images (left/right) using the OpenCV function cv::projectPoints(). cv::projectPoints() returns points in 2D image coordinates as cv::Point2f, i.e. with subpixel accuracy.
As some 3D points are projected onto the same pixel region, I only keep the point with the smaller depth/Z, since farther points are occluded by closer ones.
By doing this, I get two index images, one for the left image and one for the right. Each pixel of an index image refers to the point's position in the 3D model (stored in std::vector<cv::Point3f>) or in 2D (std::vector<cv::Point2f>).
The following snippet briefly outlines the procedure:
std::vector<cv::Point3f> model_3D;
std::vector<cv::Point2f> projectedPointsL, projectedPointsR;
cv::projectPoints(model_3D, rvec, tvec, P1.colRange(0,3), cv::noArray(), projectedPointsL);
cv::projectPoints(model_3D, rvec, tvec, P2.colRange(0,3), cv::noArray(), projectedPointsR);
// Each pixel in indexImage is a index pointing to a position in the vector of projected points
cv::Mat indexImageL, indexImageR;
// This function filter projected points and return the index image
filterProjectedPoints(projectedPointsL, model_3D, indexImageL);
filterProjectedPoints(projectedPointsR, model_3D, indexImageR);
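For reference, filterProjectedPoints is essentially a z-buffer style filter that keeps, for every pixel, the index of the closest point. A minimal sketch (the extra imageSize argument and the use of model_3D[i].z as depth, instead of the depth in the camera frame after applying rvec/tvec, are simplifications for illustration):
#include <limits>

void filterProjectedPoints(const std::vector<cv::Point2f>& projected,
                           const std::vector<cv::Point3f>& model_3D,
                           cv::Size imageSize,
                           cv::Mat& indexImage)
{
    indexImage = cv::Mat(imageSize, CV_32S, cv::Scalar(-1));   // -1 = no point projects here
    cv::Mat depthBuffer(imageSize, CV_32F,
                        cv::Scalar(std::numeric_limits<float>::max()));
    for (int i = 0; i < static_cast<int>(projected.size()); ++i)
    {
        const int u = cvRound(projected[i].x);
        const int v = cvRound(projected[i].y);
        if (u < 0 || v < 0 || u >= imageSize.width || v >= imageSize.height)
            continue;
        const float z = model_3D[i].z;                          // depth (simplified, see note above)
        if (z < depthBuffer.at<float>(v, u))                    // keep the closer point only
        {
            depthBuffer.at<float>(v, u) = z;
            indexImage.at<int>(v, u) = i;
        }
    }
}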
In order to generate the disparity map, I can either:
1. For each pixel in the disparity map, find the corresponding pixel position in the left/right index images and subtract their positions. This gives integer disparity (not subpixel accuracy);
2. For each pixel in the disparity map, look up its 2D (floating-point) position among both the left and right projected points and take the difference in the x axis as the disparity. This gives subpixel-accuracy disparity.
The first way is straightforward but introduces error because it ignores the subpixel positions of the projected points.
However, the second way also introduces error, as a pair of projected points (from the same 3D point) may fall at different locations within their pixels. For example, a projected point is at (115.289, 80.393) in the left image and at (145.686, 79.883) in the right image. Its position in the disparity map will be (115, 80) and the disparity can be: 145.686 - 115.289 = 30.397. As you can see, the two projections may not be exactly row aligned, i.e. they may not have the same y coordinate.
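As a rough sketch, the second way would look something like this, using the index images and projected point vectors from the snippet above (the sign convention follows the example, right x minus left x):
cv::Mat disparity(indexImageL.size(), CV_32F, cv::Scalar(0.0f));
for (int v = 0; v < indexImageL.rows; ++v)
{
    for (int u = 0; u < indexImageL.cols; ++u)
    {
        const int idx = indexImageL.at<int>(v, u);   // index of the 3D point visible at this pixel
        if (idx < 0)
            continue;                                // no projected point here
        // The same index is valid in both vectors because both come from the same 3D point list.
        disparity.at<float>(v, u) =
            projectedPointsR[idx].x - projectedPointsL[idx].x;
    }
}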
My questions:
1. Are both ways correct (aside from the error they introduce)?
2. If the second way is correct, is the error negligible when computing a subpixel-accuracy disparity map?
Alternatively, you can also tell me how you would calculate a subpixel disparity map in this scenario.

In a rectified image pair, the disparity is simply d = f*b/(z+f), where f is the focal length, b is the baseline between the two cameras, and z is the distance to the object along the normal to the image planes. This assumes a basic pinhole camera model.
With the approximation z >> f, this becomes d = f*b/z, i.e. d is inversely proportional to z.
So you can just calculate the disparity map analytically.
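A minimal sketch of that analytic route, assuming a per-pixel depth image zMap (CV_32F, in the same unit as the baseline) rendered from the model, and placeholder values for f and b:
#include <opencv2/core.hpp>

cv::Mat zMap;            // per-pixel depth rendered from the model (assumed to be available)
const float f = 700.0f;  // focal length in pixels (placeholder value)
const float b = 0.12f;   // baseline (placeholder value, same unit as zMap)

cv::Mat disparity(zMap.size(), CV_32F, cv::Scalar(0.0f));
for (int v = 0; v < zMap.rows; ++v)
    for (int u = 0; u < zMap.cols; ++u)
    {
        const float z = zMap.at<float>(v, u);
        if (z > 0.0f)
            disparity.at<float>(v, u) = f * b / z;   // d = f*b/z
    }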

Film coordinate to world coordinate

I am working on building a 3D point cloud from feature matching using OpenCV 3.1 and OpenGL.
I have implemented 1) camera calibration (hence I have the intrinsic matrix of the camera) and 2) feature extraction (hence I have 2D points in pixel coordinates).
I was going through a few websites, but generally they all suggest the flow for converting 3D object points to pixel points, whereas I am doing the completely backward projection. Here is the ppt that explains it well.
I have computed film coordinates (u,v) from pixel coordinates (x,y) (with the help of the intrinsic matrix). Can anyone shed light on how I can recover the "Z" of the camera coordinates (X,Y,Z) from the film coordinates (x,y)?
Please guide me on how I can utilize OpenCV functions such as solvePnP, recoverPose, findFundamentalMat and findEssentialMat for this goal.
With a single camera and an object rotating on a fixed rotation platform I would implement something like this:
Each camera has resolution xs,ys and a field of view FOV defined by two angles FOVx,FOVy, so either check your camera data sheet or measure it. From that and the perpendicular distance (z) you can convert any pixel position (x,y) to a 3D coordinate relative to the camera (x',y',z'). So first convert the pixel position to angles:
ax = (x - (xs/2)) * FOVx / xs
ay = (y - (ys/2)) * FOVy / ys
and then compute the Cartesian position in 3D:
x' = distance * tan(ax)
y' = distance * tan(ay)
z' = distance
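A minimal C++ sketch of this conversion, assuming FOVx and FOVy are given in radians and distance is the perpendicular distance z:
#include <cmath>

struct Point3D { double x, y, z; };

Point3D pixelTo3D(double x, double y,         // pixel position
                  double xs, double ys,       // image resolution
                  double FOVx, double FOVy,   // field of view in radians
                  double distance)            // perpendicular distance to the point
{
    const double ax = (x - xs / 2.0) * FOVx / xs;   // horizontal angle
    const double ay = (y - ys / 2.0) * FOVy / ys;   // vertical angle
    Point3D p;
    p.x = distance * std::tan(ax);
    p.y = distance * std::tan(ay);
    p.z = distance;
    return p;
}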
That is nice, but for a common image we do not know the distance. Luckily, in such a setup, if we turn the object, any convex edge will produce a maximal ax angle at the sides when it crosses the plane perpendicular to the camera. So check a few frames, and if a maximal ax is detected you can assume it is an edge (or convex bump) of the object positioned at distance.
If you also know the rotation angle ang of your platform (relative to your camera), then you can compute the un-rotated position by using the rotation formula around the y axis (the Ay matrix in the link) and the known platform center position relative to the camera (just subtract it before the un-rotation)... As I mentioned, all this is just simple geometry.
In a nutshell:
obtain calibration data
FOVx,FOVy,xs,ys,distance. Some camera datasheets list only FOVx, but if the pixels are square you can compute FOVy from the resolution as
FOVx/FOVy = xs/ys
Beware: with multi-resolution camera modes the FOV can be different for each resolution!
extract the silhouette of your object in the video for each frame
you can subtract the background image to ease up the detection
obtain platform angle for each frame
so either use IRC data or place known markers on the rotation disc and detect/interpolate...
detect the ax maximum
Just inspect the x coordinate of the silhouette (for each y line of the image separately) and, if a peak is detected, add its 3D position to your model. Assume a rotating rectangular box.
Inspect one horizontal line across all frames and find the maximal ax. To improve accuracy you can run a closed-loop regulation, turning the platform until the peak is found "exactly". Do this for each horizontal line separately.
Btw. if you detect no change in ax over a few frames, that means a circular shape with the same radius ... so you can handle each such frame as an ax maximum.
Easy as pie, resulting in a 3D point cloud, which you can sort by platform angle to ease the conversion to a mesh ... That angle can also be used as a texture coordinate ...
But do not forget that you will lose some concave details that are hidden inside the silhouette!
If this approach is not enough, you can use the same setup for stereoscopic 3D reconstruction, because each rotation behaves as a new (known) camera position.
You can't, if all you have is 2D images from that single camera location.
In theory you could use heuristics to infer a Z stacking. But mathematically your problem is underdetermined and there are literally infinitely many different Z coordinates that would satisfy your constraints. You have to supply some extra information. For example, you could move your camera around over several frames (Google "structure from motion"), or you could use multiple cameras, or use a camera that has a depth sensor and gives you complete XYZ tuples (Kinect or similar).
Update due to comment:
For every pixel in a 2D image there is an infinite number of points that project to it. The technical term for this is a ray. If you have two 2D images of roughly the same volume of space, each image's set of rays (one for each pixel) intersects the set of rays corresponding to the other image. That is to say, if you determine the ray for a pixel in image #1, it maps to a line of pixels covered by that ray in image #2. Selecting a particular pixel along that line in image #2 gives you the XYZ tuple for that point.
Since you're rotating the object by a certain angle θ around a certain axis a between images, you actually have a lot of images to work with. All you have to do is derive each camera location by an additional transformation (inverse(translate(-a)·rotate(θ)·translate(a))).
Then do the following: select an image to start with. For the particular pixel you're interested in, determine the ray it corresponds to. For that, simply assume two Z values for the pixel; 0 and 1 work just fine. Transform them back into the space of your object, then project them into the view space of the next camera you chose to use; the result will be two points in the image plane (possibly outside the limits of the actual image, but that's not a problem). These two points define a line within that second image. Find the pixel along that line that matches the pixel in the first image you selected, and project it back into space as done with the first image. Due to numerical round-off errors you will not get a perfect intersection of the rays in 3D space, so find the point where the rays are closest to each other (this involves solving a quadratic polynomial, which is trivial).
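A minimal sketch of that last step, assuming each ray is given as an origin plus a (non-parallel) direction; the midpoint of the closest pair of points serves as the triangulated estimate:
#include <opencv2/core.hpp>

cv::Point3d closestPointBetweenRays(const cv::Point3d& P1, const cv::Point3d& d1,
                                    const cv::Point3d& P2, const cv::Point3d& d2)
{
    const cv::Point3d w0 = P1 - P2;
    const double a = d1.dot(d1), b = d1.dot(d2), c = d2.dot(d2);
    const double d = d1.dot(w0), e = d2.dot(w0);
    const double denom = a * c - b * b;        // ~0 when the rays are (nearly) parallel

    const double t = (b * e - c * d) / denom;  // parameter along ray 1
    const double s = (a * e - b * d) / denom;  // parameter along ray 2

    const cv::Point3d q1 = P1 + t * d1;        // closest point on ray 1
    const cv::Point3d q2 = P2 + s * d2;        // closest point on ray 2
    return 0.5 * (q1 + q2);                    // midpoint of the two closest points
}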
To select which pixel you want to match between images you can use a feature/motion tracking algorithm, as used in video compression and similar. The basic idea is that, for every pixel, a correlation of its surroundings is computed against the same region in the previous image. Where the correlation peaks is where the pixel most likely moved from.
With this pixel tracking in place you can then derive the structure of the object. This is essentially what structure from motion does.

OpenCV triangulatePoints - what are the correct coordinates to feed it with?

Previously, I was using another method to determine 3D positions from two 2D images. For that (mediocre) method I had to get 2D coordinates with the point of origin in the center of the image. Because of that, I get a lot of negative values. OpenCV normally uses the left bottom corner as the coordinate origin (no negative values at all).
The app user is supposed to be able to use either method. Can I keep collecting 2D coordinates that way, or do I have to change it? If I have to change it, do I have to use the new center point of the image (the result of cv::stereoCalibrate) instead of the default one, (frame.cols/2, frame.rows/2)?
When using OpenCV's triangulatePoints, you need to pass as arguments:
projectionMatrixA - implicitly contains the intrinsic camera parameters (focal length & principal point offset, i.e. the pixel offsets from the left and from the top that should be considered as 0,0)
projectionMatrixB - besides the intrinsic camera parameters, the projection matrix also reflects the position of the camera relative to some coordinate system. So even if you have two identical cameras, the two projection matrices would still differ because the cameras are positioned differently.
2D points that are a result of 3D points being projected using projectionMatrixA
2D points that are a result of 3D points being projected using projectionMatrixB
To answer the question, there is nothing wrong with the fact that 2D points have negative values.
AFAIK, in the calib module, when dealing with 2D points (pixel coordinates), the coordinate (0,0) should always be around the center of the image and not in the top left corner. So naturally, any point in the left region of the image has x < 0, and any point in the upper region of the image has y < 0.
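A minimal usage sketch, assuming P1 and P2 are the two 3x4 projection matrices and ptsA/ptsB are matched pixel coordinates expressed in the same convention as those matrices:
#include <opencv2/calib3d.hpp>
#include <vector>

std::vector<cv::Point2f> ptsA, ptsB;   // matched 2D points seen by camera A and camera B
cv::Mat P1, P2;                        // 3x4 projection matrices (e.g. from cv::stereoRectify)
// ... fill ptsA, ptsB, P1 and P2 ...

cv::Mat points4D;                      // 4xN homogeneous coordinates
cv::triangulatePoints(P1, P2, ptsA, ptsB, points4D);
points4D.convertTo(points4D, CV_32F);  // make sure we read floats below

std::vector<cv::Point3f> points3D;
for (int i = 0; i < points4D.cols; ++i)
{
    cv::Mat c = points4D.col(i) / points4D.at<float>(3, i);   // divide by w
    points3D.emplace_back(c.at<float>(0), c.at<float>(1), c.at<float>(2));
}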

Bilinear interpolation on fisheye filter

I have to implement a fisheye transformation with bilinear interpolation. After the transformation of one pixel I don't have integer coordinates anymore, and I would like to map this pixel onto integer coordinates using bilinear interpolation. The problem is that everything I found about bilinear interpolation on the internet (see for example Wikipedia) does the opposite thing: it gives the value of one non-integer pixel by using the values of four neighbors that have integer coordinates. I would like to do the opposite, i.e. map one pixel with non-integer coordinates to the four neighbors with integer coordinates. Surely there is something I am missing, and it would be helpful to understand where I am wrong.
EDIT:
To be more clear: let's say that I have the pixel (i,j)=(2,2) of the starting image. After the fisheye transformation I obtain non-integer coordinates, for example (2.1,2.2). I want to save this new pixel to a new image, but obviously I don't know in which pixel to save it because of the non-integer coordinates. The easiest way is to truncate the coordinates, but the image quality is not very good: I have to use bilinear interpolation. Despite this, I don't understand how it works, because I want to split my non-integer pixel among neighboring pixels with integer coordinates of the new (transformed) image, but I only found descriptions of the opposite operation, i.e. finding non-integer coordinates starting from four integer pixels (http://en.wikipedia.org/wiki/Bilinear_interpolation).
Your question is a little unclear. From what I understand, you have a regular image which you want to transform into a fisheye-like image. To do this, I am guessing you take each pixel coordinate {xr,yr} from the regular image, use the fisheye transformation to obtain the corresponding coordinates {xf,yf} in the fisheye-like image. You would like to assign the initial pixel intensity to the destination pixel, however you do not know how to do this since {xf,yf} are not integer values.
If that's the case, you are actually taking the problem backwards. You should start from integer pixel coordinates in the fisheye image, use the inverse fisheye transformation to obtain floating-point pixel coordinates in the regular image, and use bilinear interpolation to estimate the intensity of the floating point coordinates from the 4 closest integer coordinates.
The basic procedure is as follows:
Start with integer pixel coordinates (xf,yf) in the fisheye image (e.g. (2,3)). You want to estimate the intensity If associated with these coordinates.
Find the corresponding point in the "starting" image by mapping (xf,yf) into it using the inverse fisheye transformation. You obtain floating-point pixel coordinates (xs,ys) in the "starting" image (e.g. (2.2,2.5)).
Use bilinear interpolation to estimate the intensity Is at coordinates (xs,ys), based on the intensities of the 4 closest integer pixel coordinates in the "starting" image (e.g. (2,2), (2,3), (3,2), (3,3)).
Assign Is to If.
Repeat from step 1 with the next integer pixel coordinates, until the intensity of every pixel of the fisheye image has been found.
Note that deriving the inverse fisheye transformation might be a little tricky, depending on the equations... However, that is how image resampling has to be performed.
You need to find the inverse fisheye transform first, and use "backward warping" to go from the destination image to the source image.
I'll give you a simple example. Say you want to expand the image by a non-integral factor of 1.5. So you have
x_dest = x_source * 1.5, y_dest = y_source * 1.5
Now if you iterate over the coordinates in the original image, you'll get non-integral coordinates in the destination image. E.g., (1,1) will be mapped to (1.5, 1.5). And this is your problem, and in general the problem with "forward warping" an image.
Instead, you reverse the transformation and write
x_source = x_dest / 1.5, y_source = y_dest / 1.5
Now you iterate over the destination image pixels. For example, pixel (4,4) in the destination image comes from (4/1.5, 4/1.5) ≈ (2.67, 2.67) in the source image. These are non-integral coordinates, and you use the 4 neighboring pixels in the source image to estimate the color at this coordinate (in our example the pixels at (2,2), (2,3), (3,2) and (3,3)).
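A minimal sketch of this backward warping with bilinear interpolation, assuming a single-channel 8-bit source image and a hypothetical inverseTransform callback that maps destination coordinates back into the source image (for the fisheye case this would be the inverse fisheye mapping); in practice cv::remap with precomputed maps does the same job:
#include <opencv2/core.hpp>
#include <cmath>

cv::Mat backwardWarp(const cv::Mat& src /* CV_8UC1 */, cv::Size dstSize,
                     void (*inverseTransform)(double xd, double yd, double* xs, double* ys))
{
    cv::Mat dst(dstSize, CV_8UC1, cv::Scalar(0));
    for (int yd = 0; yd < dst.rows; ++yd)
        for (int xd = 0; xd < dst.cols; ++xd)
        {
            double xs, ys;
            inverseTransform(xd, yd, &xs, &ys);          // floating-point source coordinates

            const int x0 = static_cast<int>(std::floor(xs));
            const int y0 = static_cast<int>(std::floor(ys));
            if (x0 < 0 || y0 < 0 || x0 + 1 >= src.cols || y0 + 1 >= src.rows)
                continue;                                 // maps outside the source image

            const double fx = xs - x0, fy = ys - y0;      // fractional parts
            const double v =
                (1 - fx) * (1 - fy) * src.at<uchar>(y0,     x0)     +
                fx       * (1 - fy) * src.at<uchar>(y0,     x0 + 1) +
                (1 - fx) * fy       * src.at<uchar>(y0 + 1, x0)     +
                fx       * fy       * src.at<uchar>(y0 + 1, x0 + 1);
            dst.at<uchar>(yd, xd) = cv::saturate_cast<uchar>(v);
        }
    return dst;
}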

FInding the Z coordinate using disparity map

I have found the disparity map of two stereoscopic images, and now I have to write OpenGL code to visualize it for 3D reconstruction.
OpenGL has the function vertex3f(), for which three coordinates have to be specified.
Two of the dimensions are the point's coordinates on the image.
So how do I find the z dimension using the disparity map?
Please suggest something on this.
Since you have found the disparity map, I assume that you are working with rectified images. In that case, the Z coordinate is given by a simple similar-triangles formulation:
z = B*f/d, where f is the focal length of the camera used (in pixels), d is the obtained disparity value for the pixel of interest, and B is the baseline between the two stereo images.
Note that the unit of z will be the same as that of B.
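A minimal sketch, assuming a CV_32F disparity map and placeholder values for f and B; pixels with invalid (non-positive) disparity are left at zero:
#include <opencv2/core.hpp>

cv::Mat disparity;        // CV_32F disparity map (assumed to be available)
const float f = 700.0f;   // focal length in pixels (placeholder value)
const float B = 0.12f;    // baseline (placeholder value; z will have the same unit)

cv::Mat depth(disparity.size(), CV_32F, cv::Scalar(0.0f));
for (int r = 0; r < disparity.rows; ++r)
    for (int c = 0; c < disparity.cols; ++c)
    {
        const float d = disparity.at<float>(r, c);
        if (d > 0.0f)
            depth.at<float>(r, c) = B * f / d;   // z = B*f/d
    }
If you also have the Q matrix from cv::stereoRectify, cv::reprojectImageTo3D computes X, Y and Z for every pixel in one call.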

Projection of set of 3D points into virtual image plane in opencv c++

Does anyone know how to project a set of 3D points onto a virtual image plane in OpenCV C++?
Thank you
First you need to have your transformation matrix defined (rotation, translation, etc.) to map the 3D space to the 2D virtual image plane; then just multiply your 3D point coordinates (x, y, z) by the matrix to get the 2D coordinates in the image.
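In OpenCV this is essentially what cv::projectPoints does. A minimal sketch, with placeholder intrinsics and pose for the virtual camera:
#include <opencv2/calib3d.hpp>
#include <vector>

std::vector<cv::Point3f> objectPoints;                 // 3D points to project
objectPoints.push_back(cv::Point3f(0.f, 0.f, 5.f));    // example point 5 units in front of the camera

cv::Mat K = (cv::Mat_<double>(3, 3) << 700, 0, 320,    // placeholder intrinsics of the virtual camera
                                         0, 700, 240,
                                         0,   0,   1);
cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64F);           // rotation of the virtual camera (Rodrigues vector)
cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);           // translation of the virtual camera
cv::Mat distCoeffs;                                    // empty = no lens distortion

std::vector<cv::Point2f> imagePoints;
cv::projectPoints(objectPoints, rvec, tvec, K, distCoeffs, imagePoints);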
Registration (OpenNI 2) or the alternative-viewpoint capability (OpenNI 1.5) does indeed help to align depth with RGB using a single line of code. The price you pay is that you cannot really restore the exact X, Y point locations in 3D space, since the rows and columns are moved after alignment.
Sometimes you need not only Z but also X, Y, and you want them to be exact; plus you want the alignment of depth and RGB. Then you have to align RGB to depth. Note that this alignment is not supported by Kinect/OpenNI. The price you pay for this is that there are no RGB values at the locations where the depth is undefined.
If one knows the extrinsic parameters, that is, the rotation and translation of the depth camera relative to the color camera, then alignment is just a matter of creating an alternative viewpoint: restore 3D from depth, and then look at your point cloud from the point of view of the color camera, i.e. apply the inverse rotation and translation. For example, moving the camera to the right is like moving the world (points) to the left. Reproject 3D into 2D and interpolate if needed. This is really easy and is just the inverse of 3D reconstruction; below, Cx is close to w/2 and Cy to h/2:
col = focal*X/Z+Cx
row = -focal*Y/Z+Cy // this is because row in the image increases downward
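A minimal sketch of this reprojection, assuming the point cloud is already expressed in the color camera's coordinate frame (with the Y axis pointing up, as the -Y term above implies):
#include <opencv2/core.hpp>
#include <vector>

cv::Mat reprojectToDepth(const std::vector<cv::Point3f>& cloud,
                         float focal, float Cx, float Cy, cv::Size size)
{
    cv::Mat depth(size, CV_32F, cv::Scalar(0.0f));     // 0 = no measurement
    for (const cv::Point3f& p : cloud)
    {
        if (p.z <= 0.0f)
            continue;                                   // behind the camera
        const int col = cvRound( focal * p.x / p.z + Cx);
        const int row = cvRound(-focal * p.y / p.z + Cy);   // row increases downward
        if (col < 0 || row < 0 || col >= size.width || row >= size.height)
            continue;
        float& d = depth.at<float>(row, col);
        if (d == 0.0f || p.z < d)                       // keep the closest point per pixel
            d = p.z;
    }
    return depth;
}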
A proper but more expensive way to get a nice depth map after rotating the point cloud is to trace a ray from each pixel until it intersects the point cloud or comes sufficiently close to one of its points. This way you will have fewer holes in your depth map due to sampling artifacts.