Convert OpenCV stereoRectify camera output to a three.js camera - C++

I have calibrated my stereo cameras using OpenCV's stereoCalibrate() and rectified them using stereoRectify(). This is successful: the cameras' epipolar lines are correctly aligned, and I can obtain accurate 3D point locations from corresponding left/right 2D point pairs.
I'm now trying to use WebGL-based three.js to display the same 3D points in the same projection as I have in OpenCV. In other words, if I overlaid the calibrated and rectified 2D image from my left camera onto the three.js output, the three.js 3D points should visually align with where they are on the OpenCV 2D image. This could be used for augmented reality.
Just dealing with the left camera (camera 1):
stereoRectify() provides projection matrix P1 and rectification transform R1. Using only P1 I can convert a 3D point to the correct 2D screen position in OpenCV. However, I am having difficulty using P1 to get an equivalent camera.projectionMatrix in three.js.
I'm using the OpenCV camera matrix to OpenGL projection conversion suggested by Guillaume Noctua here, taking the camera matrix to be the top-left 3x3 of P1. That produces a three.js camera view that looks similar, but not quite aligned: the camera appears rotated about all axes by a degree or so, with perhaps some other small but clearly erroneous distortions/translations. So I'm missing something. Do I need to use the rectification transform R1 somehow too? I've tried using it as a rotation matrix and rotating the three.js camera by that amount, but it doesn't improve.
Any thoughts on using OpenCV's P1 and R1 matrices to make an equivalent camera view in three.js would be much appreciated.
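For reference, here is a minimal C++ sketch of that intrinsics-to-OpenGL-projection conversion, assuming the camera matrix is the top-left 3x3 of P1; the function name, image size, and near/far planes are placeholders, and sign conventions vary between write-ups, so treat it as a starting point rather than a definitive implementation:

#include <opencv2/core.hpp>

// Sketch: build an OpenGL-style projection matrix from pinhole intrinsics
// (fx, fy, cx, cy) taken from the top-left 3x3 of P1. width/height are the
// rectified image size; znear/zfar are clip planes you choose.
cv::Matx44d openGLProjectionFromK(const cv::Matx33d& K,
                                  double width, double height,
                                  double znear, double zfar)
{
    const double fx = K(0, 0), fy = K(1, 1);
    const double cx = K(0, 2), cy = K(1, 2);
    // Row-major; this variant flips y so OpenCV's y-down image maps to
    // OpenGL's y-up normalized device coordinates.
    return cv::Matx44d(
        2.0 * fx / width, 0.0,               1.0 - 2.0 * cx / width,           0.0,
        0.0,              2.0 * fy / height, 2.0 * cy / height - 1.0,          0.0,
        0.0,              0.0,               -(zfar + znear) / (zfar - znear), -2.0 * zfar * znear / (zfar - znear),
        0.0,              0.0,               -1.0,                             0.0);
}

Since three.js's Matrix4.set() also takes its 16 values in row-major order, these entries can be passed straight to camera.projectionMatrix.set(). Note that R1 describes how the rectified camera is rotated relative to the original one, so if it matters at all, it belongs in the three.js camera's world orientation, not in its projection matrix.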

Related

Aruco Pose Estimation from Stereo Setup

I am interested in finding the rotation matrix of an ArUco marker from a stereo camera.
I know that estimatePoseSingleMarkers gives a rotation vector (which can be converted to a matrix via Rodrigues) and a translation vector, but the values are not very stable, and it is apparently written for monocular cameras.
I can get stable 3D points of the marker from the stereo camera, but I am struggling to construct a rotation matrix from them. My main goal is to achieve what Ali has achieved in the following blog post: Relative Position of Aruco Markers.
I have tried working with Euler angles from here, creating a plane for the marker from the 3D points I get from the stereo camera, but in vain.
I know my algorithm is failing because the relative coordinates keep changing as the camera moves, which should not happen: the relative coordinates between the markers should remain constant.
I have a properly calibrated camera with all the required matrices.
I tried using solvePnP, but I believe it gives rvecs and tvecs which, combined, bring points from the model coordinate system to the camera coordinate system.
Any idea how I can create the rotation matrix of the marker from my fairly stable 3D points, so that the relative coordinates don't change when the camera moves?
Thanks in advance.
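One way to build such a rotation matrix directly from the triangulated corners is to form an orthonormal basis from the marker's edges. A minimal C++ sketch, assuming the four 3D corners are ordered the way cv::aruco::detectMarkers orders them (top-left, top-right, bottom-right, bottom-left); the helper name and the re-orthogonalization step are illustrative:

#include <opencv2/core.hpp>

// Sketch: rotation of a marker from its four triangulated 3D corners.
cv::Matx33d markerRotation(const cv::Point3d corners[4])
{
    cv::Vec3d x = cv::Vec3d(corners[1] - corners[0]); // top edge -> marker x axis
    cv::Vec3d y = cv::Vec3d(corners[0] - corners[3]); // left edge -> marker y axis
    x = cv::normalize(x);
    y = cv::normalize(y - y.dot(x) * x); // re-orthogonalize: triangulated corners are noisy
    cv::Vec3d z = x.cross(y);            // marker normal
    // Columns are the marker's axes expressed in camera coordinates
    return cv::Matx33d(x[0], y[0], z[0],
                       x[1], y[1], z[1],
                       x[2], y[2], z[2]);
}

With rotations R1 and R2 built this way for two markers, the relative rotation R1^T * R2 and the corner offsets expressed in marker 1's frame should stay constant as the camera moves.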

OpenCV undistorts only a central part of fisheye image

I'm trying to perform fisheye camera calibration via OpenCV 3.4.0 (C++, MS Windows).
I used cv::fisheye::calibrate to obtain K and D (the camera matrix and the radial distortion coefficients). Then I used cv::fisheye::initUndistortRectifyMap to produce maps for the X and Y coordinates, and finally cv::remap to undistort the fisheye image using those maps.
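For reference, that pipeline looks roughly like this (a sketch; the variable names, map type, and the choice of new camera matrix are assumptions):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>

// Sketch: undistort a fisheye image given K and D from cv::fisheye::calibrate.
cv::Mat undistortFisheye(const cv::Mat& fisheyeImage,
                         const cv::Mat& K, const cv::Mat& D)
{
    cv::Mat mapX, mapY;
    // Identity rectification; reusing K as the new camera matrix is a common
    // choice, but it keeps the center and pushes the edges out of frame,
    // which matches the behavior described below.
    cv::fisheye::initUndistortRectifyMap(K, D, cv::Matx33d::eye(), K,
                                         fisheyeImage.size(), CV_32FC1,
                                         mapX, mapY);
    cv::Mat undistorted;
    cv::remap(fisheyeImage, undistorted, mapX, mapY, cv::INTER_LINEAR);
    return undistorted;
}

cv::fisheye::estimateNewCameraMatrixForUndistortRectify can compute a new camera matrix instead, with a balance parameter that trades off a sharp center against retaining more of the field of view.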
Everything looks right, but OpenCV dewarps only a central part of the fisheye image.
Edges are moved outside.
I'd like to dewarp the whole image.
I tried to change the focal length in the K matrix manually and got undistorted edges, but they became very blurry.
I found some results for this task. For example:
https://www.youtube.com/watch?v=Ll8KCnCw4iU
and
https://www.youtube.com/watch?v=p1kCR1i2nF0
As you can see, these results are very similar to mine.
Does anybody have a solution to this problem?
I analyzed a lot of papers over the last two weeks, and I think I found the source of the problem: OpenCV 3.4.0's fisheye undistortion is based on a pinhole camera model. Consider the angle between the camera's optical axis and the ray of light from some object, and the angle between the optical axis and the direction to the corresponding point of that object in the undistorted image. If the fisheye image were undistorted correctly, these two angles would be equal. The FOV of my fisheye camera is 180 degrees, which means that the distance from the undistorted image center to the point corresponding to the edge of the undistorted image is infinite.
In other words, if we have a fisheye camera with a FOV of around 180 degrees, undistortion (via OpenCV) of 100% of the fisheye image surface is impossible.
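To see why: a pinhole model maps a ray at angle theta from the optical axis to image radius r = f * tan(theta), which diverges as theta approaches 90 degrees. A tiny self-contained demo (the focal length is a made-up value):

#include <cmath>
#include <cstdio>
#include <initializer_list>

int main() {
    const double f = 500.0; // hypothetical focal length in pixels
    const double pi = 3.14159265358979323846;
    for (double deg : {30.0, 60.0, 80.0, 89.0, 89.9}) {
        // Pinhole projection: the undistorted radius grows as tan(theta)
        std::printf("theta = %4.1f deg -> r = %.1f px\n", deg, f * std::tan(deg * pi / 180.0));
    }
    return 0;
}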
It can only be achieved by reprojecting to a different projection model instead of trying to undistort to a pinhole image.
More info here: OpenCV fisheye calibration cuts too much of the resulting image

OpenCV stereo vision 3D coordinates to 2D camera-plane projection different than triangulating 2D points to 3D

I get an image point in the left camera (pointL) and the corresponding image point in the right camera (pointR) of my stereo camera using feature matching. The two cameras are parallel and at the same height; there is only an x-translation between them.
I also know the projection matrices for each camera (projL, projR), which I got during calibration using initUndistortRectifyMap.
For triangulating the point, I call:
triangulatePoints(projL, projR, pointL, pointR, pos3D) (documentation), where pos3D is the output 3D position of the object.
Now, I want to project the 3D-coordinates to the 2D-image of the left camera:
2Dpos = projL*3dPos
The resulting x-coordinate is correct, but the y-coordinate is off by about 20 pixels.
How can I fix this?
Edit:
Of course, I need to use homogeneous coordinates in order to multiply with the 3x4 projection matrix. For that reason, I set:
3dPos[0] = x;
3dPos[1] = y;
3dPos[2] = z;
3dPos[3] = 1;
Is it wrong to set 3dPos[3] to 1?
Note:
All images are remapped; I do this in a preprocessing step.
Of course, I always use homogeneous coordinates.
You are likely projecting into the rectified camera. You need to apply the inverse of the rectification warp to get the point into the original (undistorted) linear camera's coordinates, and then apply the distortion to land in the original image.
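A minimal sketch of that, assuming the triangulated point is expressed in the rectified left camera frame and that R1 (rectification rotation), cameraMatrix1, and distCoeffs1 were kept from stereoRectify()/stereoCalibrate(); the names are illustrative:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Sketch: project a rectified-frame 3D point into the original distorted image.
cv::Point2f projectToOriginalImage(const cv::Point3f& pRect,
                                   const cv::Mat& R1,
                                   const cv::Mat& cameraMatrix1,
                                   const cv::Mat& distCoeffs1)
{
    // projectPoints rotates by rvec first, so pass the inverse rectification
    // rotation R1^T to move the point back into the unrectified camera frame.
    cv::Mat rvec;
    cv::Rodrigues(R1.t(), rvec);
    std::vector<cv::Point3f> obj{pRect};
    std::vector<cv::Point2f> img;
    cv::projectPoints(obj, rvec, cv::Vec3d(0, 0, 0), cameraMatrix1, distCoeffs1, img);
    return img[0];
}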

How to create a bird's-eye view of an image for a given plane?

I'm given a plane (a support vector and the plane's normal vector) and an image taken by a camera whose intrinsic parameters (fx, fy, cx, cy) I know. How do I obtain the transformation of this image to a bird's-eye-view-like image (so that the viewing direction is collinear with the plane's normal vector)? I'm confused about the coordinate systems I have to use: some matrices are in world coordinates and some are local. I know that there is warpPerspective() in OpenCV; would this do the job?
I'm using OpenCV 2.4.9.
Thanks a lot!
Update:
Do I have to calculate 4 points with the camera facing along the normal, then 4 points from the bird's-eye view, and pass them to findHomography() to obtain the transformation matrix?
Update:
Solved. Got it to work!
A rectangle on the world plane will appear as a quadrangle in your image.
In a bird's eye view, you want it to appear as a rectangle again.
You must know at least the aspect ratio of this world rectangle for the top view to be correctly and equally scaled along both axes.
Given the 4 quadrangle 2D points in your image and the destination rectangle corners (whose 2D coordinates you essentially choose), you can calculate the homography from your quadrangle to the rectangle and use warpPerspective() to render it.
This is often the easiest way to do it.
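A minimal sketch of that quadrangle-to-rectangle approach; the four source corners and the 200x100 destination rectangle are made-up values, and the destination aspect ratio should match the real-world rectangle:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Sketch: warp a ground-plane quadrangle in the image to a top-down rectangle.
cv::Mat birdsEyeView(const cv::Mat& image)
{
    std::vector<cv::Point2f> src = {
        {210.f, 120.f}, {560.f, 140.f},  // top-left, top-right (in the image)
        {640.f, 390.f}, {130.f, 360.f}   // bottom-right, bottom-left
    };
    std::vector<cv::Point2f> dst = {
        {0.f, 0.f}, {200.f, 0.f}, {200.f, 100.f}, {0.f, 100.f}
    };
    // For exactly 4 correspondences, getPerspectiveTransform works as well.
    cv::Mat H = cv::findHomography(src, dst);
    cv::Mat topView;
    cv::warpPerspective(image, topView, H, cv::Size(200, 100));
    return topView;
}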
You can also go through the camera and its matrices themselves. For this you will need to rotate the camera to be above the plane, with an orthographic projection. See the homography decomposition here.

In OpenCV, converting 2d image point to 3d world unit vector

I have calibrated my camera with OpenCV (findChessboard etc) so I have:
- Camera Distortion Coefficients & Intrinsics matrix
- Camera pose information (translation & rotation, computed separately via other means) as Euler angles & a 4x4 matrix
- 2D points within the camera frame
How can I convert these 2D points into 3D unit vectors pointing out into the world? I tried using cv::undistortPoints, but that doesn't seem to do it (it only returns 2D remapped points), and I'm not exactly sure what matrix math to use to model the camera via the intrinsics I have.
Convert your 2D point into a homogeneous point (give it a third coordinate equal to 1) and then multiply by the inverse of your camera intrinsics matrix. For example:
cv::Matx31f hom_pt(point_in_image.x, point_in_image.y, 1);
hom_pt = camera_intrinsics_mat.inv()*hom_pt; // back-project to a ray in camera coordinates
cv::Point3f origin(0,0,0);
cv::Point3f direction(hom_pt(0),hom_pt(1),hom_pt(2));
//To get a unit vector, direction just needs to be normalized
direction *= 1/cv::norm(direction);
origin and direction now define the ray in world space corresponding to that image point. Note that here the origin is centered on the camera; you can use your camera pose to transform it to a different origin. The distortion coefficients map from your actual camera to the pinhole camera model, and should be applied at the very beginning to find your true 2D coordinate. The steps then are:
Undistort the 2D coordinate with the distortion coefficients.
Convert it to a ray (as shown above).
Move that ray to whatever coordinate system you like.
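Putting the three steps together, a minimal sketch; K and distCoeffs come from your calibration, camToWorldR is the 3x3 rotation block of your 4x4 pose, and all names are illustrative:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Sketch: turn an image point into a unit ray in world coordinates.
cv::Vec3d imagePointToWorldRay(const cv::Point2f& px,
                               const cv::Mat& K,
                               const cv::Mat& distCoeffs,
                               const cv::Matx33d& camToWorldR)
{
    // Step 1: undistort. With no P argument, undistortPoints already returns
    // normalized camera coordinates, so no extra multiply by K.inv() is needed.
    std::vector<cv::Point2f> in{px}, out;
    cv::undistortPoints(in, out, K, distCoeffs);

    // Step 2: a normalized point (x, y) corresponds to the camera-space ray (x, y, 1).
    cv::Vec3d dir(out[0].x, out[0].y, 1.0);
    dir = cv::normalize(dir);

    // Step 3: rotate the ray into the world frame using the camera pose.
    return camToWorldR * dir;
}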