Projecting 3D points to an undistorted ROI using OpenCV - C++

Assume I have a camera that has been calibrated using the full camera frame to obtain the camera matrix and distortion coefficients. Also, assume that I have a 3D world point expressed in that camera's frame.
I know that I can use cv::projectPoints() with rvec = tvec = (0,0,0), the camera matrix, and the distortion coefficients to project the point into the (full) distorted frame. I also know that if I am receiving an ROI from the camera (a cropped portion of the full distorted frame), I can account for the ROI simply by subtracting the (x, y) coordinate of its top-left corner from the result of cv::projectPoints(). Lastly, I know that if I use cv::projectPoints() with rvec = tvec = (0,0,0), the camera matrix, and zero distortion coefficients, I can project the point into the full undistorted frame (correct me if I'm wrong, but I believe this requires using the same camera matrix in cv::undistort() and not passing a newCameraMatrix).
How do I handle the case where I want to project into the undistorted version of an ROI that I am receiving? That is, I get a (distorted) ROI, undistort it with cv::undistort() using the method described here to account for the fact that it is an ROI, and then want to project the 3D point into that resulting image.
If there is a better way to go about all this, I am open to suggestions as well. My goal is to be able to project 3D points into distorted and undistorted frames, with or without an ROI, where the ROI is always originally defined by the feed from the camera and therefore always defined in the distorted frame. That gives four cases: the distorted full frame, a distorted ROI, the undistorted full frame, and the undistorted version of a distorted ROI.
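For concreteness, here is a minimal sketch of the three cases I already know how to handle (cameraMatrix, distCoeffs, and roi are placeholder names for the full-frame calibration results and the ROI rectangle reported by the camera):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

std::vector<cv::Point2f> projectToFrame(const std::vector<cv::Point3f>& pts,
                                        const cv::Mat& cameraMatrix,
                                        const cv::Mat& distCoeffs)
{
    std::vector<cv::Point2f> out;
    // rvec = tvec = 0 because the 3D points are already in the camera frame.
    cv::projectPoints(pts, cv::Vec3d(0, 0, 0), cv::Vec3d(0, 0, 0),
                      cameraMatrix, distCoeffs, out);
    return out;
}

void projectAllCases(const std::vector<cv::Point3f>& pts,
                     const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                     const cv::Rect& roi)
{
    // Case 1: distorted full frame.
    std::vector<cv::Point2f> distortedFull =
        projectToFrame(pts, cameraMatrix, distCoeffs);

    // Case 2: distorted ROI -- same projection, shifted by the ROI corner.
    std::vector<cv::Point2f> distortedRoi = distortedFull;
    for (cv::Point2f& p : distortedRoi) { p.x -= roi.x; p.y -= roi.y; }

    // Case 3: undistorted full frame -- zero distortion coefficients and the
    // same camera matrix passed to cv::undistort() (no newCameraMatrix).
    std::vector<cv::Point2f> undistortedFull =
        projectToFrame(pts, cameraMatrix, cv::Mat::zeros(1, 5, CV_64F));
}
```

Case 4 (the undistorted version of a distorted ROI) is the one I am missing.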

Related

Aruco Pose Estimation from Stereo Setup

I am interested in finding the rotation matrix of an ArUco marker from a stereo camera.
I know that estimatePoseSingleMarkers gives a rotation vector (which can be converted to a matrix via cv::Rodrigues) and a translation vector, but the values are not very stable, and the function is apparently written for monocular cameras.
I can get stable 3D points of the marker from the stereo camera, but I am struggling to build a rotation matrix from them. My main goal is to achieve what Ali has achieved in the following blog: Relative Position of Aruco Markers.
I have tried working with Euler angles from here, by fitting a plane to the marker from the 3D points I get from the stereo camera, but in vain.
I know my algorithm is failing because the relative coordinates keep changing as the camera moves, which should not happen: the relative coordinates between the markers should remain constant.
I have a properly calibrated camera with all the required matrices.
I tried using cv::solvePnP, but I believe it gives rvecs and tvecs which, combined, bring points from the model coordinate system into the camera coordinate system.
Any idea how I can build the marker's rotation matrix from my fairly stable 3D points, so that the relative coordinates don't change when the camera moves?
Thanks in advance.
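For reference, the kind of construction I have been attempting looks roughly like the sketch below: building an orthonormal frame directly from three triangulated corners (corner names are placeholders, and I am assuming the corners are ordered around the marker):

```cpp
#include <opencv2/core.hpp>

// Build the marker's rotation matrix from three triangulated corners
// (placeholder names: c0 = top-left, c1 = top-right, c2 = bottom-right,
// all expressed in the camera frame).
cv::Matx33d markerRotation(const cv::Point3d& c0,
                           const cv::Point3d& c1,
                           const cv::Point3d& c2)
{
    cv::Vec3d x = cv::normalize(cv::Vec3d(c1 - c0));           // top edge
    cv::Vec3d z = cv::normalize(x.cross(cv::Vec3d(c2 - c1)));  // marker normal
    cv::Vec3d y = z.cross(x);  // re-orthogonalized in-plane axis
    // Columns are the marker axes in camera coordinates, so R maps
    // marker coordinates into camera coordinates.
    return cv::Matx33d(x[0], y[0], z[0],
                       x[1], y[1], z[1],
                       x[2], y[2], z[2]);
}
```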

Measure the real size of an object with a calibrated camera (OpenCV, C++)?

I am completing my thesis, which is related to OpenCV.
I want to measure the real size of an object (in mm) with a single camera, but I am having trouble converting between the camera's natural units (pixels) and real-world units.
After calibrating the camera, I have:
Camera matrix (3x3)
Distortion coefficients
Extrinsic parameters [rotation vector (1x3) + translation vector (1x3)]
I have read the following link but I can't find the formula to convert between the units:
https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
An example of measuring the size of an object would be appreciated.
Any suggestions?
Thanks so much.
As mentioned in the comments, you need the distance to the object to obtain 3D coordinates from pixels. A possible workflow would be:
1. Rectify the image using the distortion parameters, i.e., correct the distortion caused by the lens.
2. Deproject the pixels into 3D points in the camera coordinate frame using the camera matrix: multiply the inverse of the 3x3 camera matrix with the vector [pixel_x, pixel_y, 1]^T, then multiply the result [x', y', 1]^T by the depth (the z-component) to obtain the 3D point in the camera coordinate frame (see the sketch after this list).
3. Transform the point from the camera coordinate frame into the world coordinate frame using the extrinsic parameters.
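A minimal sketch of steps 2 and 3, assuming the extrinsics follow OpenCV's convention of mapping world coordinates into camera coordinates (all names are placeholders):

```cpp
#include <opencv2/calib3d.hpp>  // cv::Rodrigues
#include <opencv2/core.hpp>

// Step 2: deproject a pixel at a known depth using the inverse camera matrix K.
cv::Point3d deproject(const cv::Matx33d& K, double px, double py, double depth)
{
    cv::Vec3d ray = K.inv() * cv::Vec3d(px, py, 1.0);          // [x', y', 1]^T
    return cv::Point3d(ray[0] * depth, ray[1] * depth, depth); // scale by depth
}

// Step 3: camera frame -> world frame. If rvec/tvec map world -> camera
// (OpenCV's convention), then X_w = R^T * (X_c - t).
cv::Point3d cameraToWorld(const cv::Vec3d& rvec, const cv::Vec3d& tvec,
                          const cv::Point3d& Xc)
{
    cv::Matx33d R;
    cv::Rodrigues(rvec, R);
    return cv::Point3d(R.t() * (cv::Vec3d(Xc) - tvec));
}
```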
Obtaining depth values from an image alone is not possible; the only option is to use some additional information. Maybe your object is placed on a table and you know the distance between the camera and the table.
To measure distances between the camera and a table, or even the object itself, you could use ArUco markers, which are also available within OpenCV.

Mismatch between Point Projection and Warped 2-D Image [opencv]

I am using two different methods to render an image (as an OpenCV matrix):
1. an implemented projection function that uses the camera intrinsics (focal length, principal point; distortion is disabled) - this function is used in other software packages and is supposed to work correctly (repository)
2. a 2D-to-2D image warping (here, I determine the intersections of my camera's corner rays with the 2D image that should be warped into my camera frame); this backprojection of the corner points uses the same camera model as above
Now I overlay these two images, and what should happen is that the projected pen tip (method 1) lines up with the line drawn on the warped image (method 2). However, this is not happening.
There is a tiny shift in both directions, depending on the orientation of the pen that is writing, and it is reduced when I shift the principal point of the camera. My question: since I am not considering the principal point in the 2D-to-2D image warping, can this be the cause of the mismatch? Or is it generally impossible to align the two, since the image warping is a simplification of the projection process?
Grey Point: projected origin (should fall in line with the edges of the white area)
Blue Reticle: penTip that should "write" the Bordeaux-colored line
Grey Line: pen approximation
Red Edge: "x-axis" of white image part
Green Edge: "y-axis" of white image part
EDIT:
I also projected the origin of the coordinate system in the same way, and there the mismatch grows the further the origin moves away from the center of the image (i.e., delta[warp, project] gets larger near the image borders compared to the center).
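For reference, in the pinhole model used above (distortion disabled), the principal point enters the projection only as a pixel offset:

u = fx * X/Z + cx
v = fy * Y/Z + cy

So, as a hypothesis, a warp whose corner rays are computed without (cx, cy) would be translated relative to the true projection, which would be consistent with the observed shift changing as the principal point is moved.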

OpenCV undistorts only a central part of fisheye image

I'm trying to perform fisheye camera calibration via OpenCV 3.4.0 (C++, MS Windows).
I used cv::fisheye::calibrate to compute K and D (the camera matrix and the radial distortion coefficients). Then I used cv::fisheye::initUndistortRectifyMap to produce maps for the X and Y coordinates.
Finally, I used cv::remap to undistort the image from the fisheye camera using the maps from initUndistortRectifyMap.
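For reference, a minimal sketch of that pipeline (variable names are placeholders):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

void undistortFisheye(const cv::Mat& src, cv::Mat& dst,
                      const cv::Mat& K, const cv::Mat& D)
{
    cv::Mat mapX, mapY;
    // Identity rectification (R = I); K is reused as the new camera matrix.
    cv::fisheye::initUndistortRectifyMap(K, D, cv::Matx33d::eye(), K,
                                         src.size(), CV_32FC1, mapX, mapY);
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR);
}
```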
Everything looks right, but OpenCV dewarps only the central part of the fisheye image; the edges are pushed outside the frame.
I'd like to dewarp the whole image.
I tried changing the focal length in the K matrix manually; the edges came back into view, but they became very blurry.
I found some results for this task, for example:
https://www.youtube.com/watch?v=Ll8KCnCw4iU
and
https://www.youtube.com/watch?v=p1kCR1i2nF0
As you can see, these results are very similar to mine.
Does anybody have a solution of this problem?
I analyzed a lot of papers over the last two weeks and I think I found the source of the problem: the OpenCV 3.4.0 fisheye undistortion is based on a pinhole camera model. Consider the angle between the camera's optical axis and the ray of light from some object, and the angle between the optical axis and the direction to the corresponding undistorted point; if the fisheye image is undistorted correctly, these two angles are equal. In the pinhole model the undistorted radius grows as r = f * tan(theta), so it diverges as theta approaches 90 degrees. The FOV of my fisheye camera is 180 degrees, which means the distance from the undistorted image center to the point corresponding to the edge of the image would have to be infinite.
In other words, for a fisheye camera with a FOV around 180 degrees, undistorting 100% of the fisheye image surface via OpenCV is impossible.
Full coverage can only be achieved by using a different projection instead of trying to undistort to a pinhole image.
More info here OpenCV fisheye calibration cuts too much of the resulting image
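One partial remedy (a sketch, not a full fix: it still cannot reach the 180-degree edge, for the reason above) is to let OpenCV estimate a new camera matrix with a balance parameter that trades cropping against keeping the edges:

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// balance in [0, 1]: 0 crops to the valid center, 1 keeps as much of the
// original image as possible (at the cost of stretched, blurry edges).
void undistortKeepingEdges(const cv::Mat& src, cv::Mat& dst,
                           const cv::Mat& K, const cv::Mat& D, double balance)
{
    cv::Mat newK;
    cv::fisheye::estimateNewCameraMatrixForUndistortRectify(
        K, D, src.size(), cv::Matx33d::eye(), newK, balance);
    cv::Mat mapX, mapY;
    cv::fisheye::initUndistortRectifyMap(K, D, cv::Matx33d::eye(), newK,
                                         src.size(), CV_32FC1, mapX, mapY);
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR);
}
```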
Example result:

How do I find the new position of a feature after undistorting the image?

I have an image with a circle in it, and I use OpenCV methods to detect the circle and display its edges and center on the image before the image is rectified and undistorted.
I rectify and undistort the image using initUndistortRectifyMap in OpenCV. After remapping, the image is warped and the circle has an oval shape due to the change in perspective. The position coordinates of the center obviously change as well.
I cannot do the circle detection step after rectifying, because the perspective change would make the results inaccurate.
My question is: how can I find the position of the center after the image has been undistorted and rectified?
There is a cv::undistortPoints function, which can transform a vector of Point2f or Point2d.
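A minimal sketch, assuming you kept the R and P (new camera matrix) arguments you passed to initUndistortRectifyMap (all names are placeholders):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Map a point detected in the original image into the rectified, undistorted
// image. K and D are the intrinsics and distortion coefficients; R and P are
// the rectification and new projection matrices used for the remapping.
cv::Point2f remapCenter(const cv::Point2f& center,
                        const cv::Mat& K, const cv::Mat& D,
                        const cv::Mat& R, const cv::Mat& P)
{
    std::vector<cv::Point2f> in{center}, out;
    // With R and P supplied, the output is in pixel coordinates of the
    // rectified image rather than normalized coordinates.
    cv::undistortPoints(in, out, K, D, R, P);
    return out[0];
}
```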