Currently I am using the triangulatePoints function from OpenCV to triangulate two 2D points from two calibrated cameras into a 3D Point.
cv::triangulatePoints(calibration[camera1Index].getProjectionMatrix(), calibration[camera2Index].getProjectionMatrix(), points1, points2, points4D);
My code uses the projection matrices of the cameras and the undistorted points for each camera to find the 3D point.
The problem is that sometimes the cameras send points that can't be correlated.
My question is whether there is any method to find the zone where a point from the first camera should be projected in the second camera, so that I can discard all the points from the second camera outside that zone.
I've been searching and I guess a homography is the solution here, but I wasn't able to find an approach for this algorithm, or an existing method in OpenCV for it.
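While searching I also came across cv::computeCorrespondEpilines, which seems to compute, for a point in one image, the line in the other image on which its match should lie. This is only a rough sketch of what I am imagining, not tested code; the fundamental matrix F and the pixel threshold are my own assumptions (F could presumably come from the stereo calibration or cv::findFundamentalMat):

#include <cmath>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// Keep only the camera-2 points that lie close to the epipolar line of a
// camera-1 point. F maps camera-1 points to epipolar lines in camera 2.
std::vector<cv::Point2f> filterByEpipolarLine(const cv::Point2f& p1,
                                              const std::vector<cv::Point2f>& candidates2,
                                              const cv::Mat& F,
                                              double maxDistPx = 3.0)
{
    std::vector<cv::Vec3f> lines; // (a, b, c) with a*x + b*y + c = 0 and a^2 + b^2 = 1
    cv::computeCorrespondEpilines(std::vector<cv::Point2f>{p1}, 1, F, lines);
    const cv::Vec3f& l = lines[0];

    std::vector<cv::Point2f> kept;
    for (const cv::Point2f& p2 : candidates2)
    {
        const double dist = std::abs(l[0] * p2.x + l[1] * p2.y + l[2]);
        if (dist <= maxDistPx)
            kept.push_back(p2); // plausible match for p1
    }
    return kept;
}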
Thanks in advance.
Related
I am interested in finding the rotation matrix of an ArUco marker from a stereo camera.
I know that estimatePoseSingleMarkers gives a rotation vector (which can be converted to a matrix via Rodrigues) and a translation vector, but the values are not that stable, and the function is apparently written for a monocular camera.
I can get stable 3D points of the marker from a stereo camera; however, I am struggling to create a rotation matrix. My main goal is to achieve what Ali has achieved in the following blog post, Relative Position of Aruco Markers.
I have tried working with Euler angles from here by creating a plane of the ArUco marker from the 3D points I get from the stereo camera, but in vain.
I know my algorithm is failing because the values of the relative coordinates keep changing as the camera moves, which should not happen, since the relative coordinates between the markers should remain constant.
I have a properly calibrated camera with all the required matrices.
I tried using solvePnP, but I believe it gives rvecs and tvecs which, when combined, bring points from the model coordinate system to the camera coordinate system.
Any idea how I can create the rotation matrix of the marker from my fairly stable 3D points, so that the relative coordinates don't change when the camera moves?
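For concreteness, this is roughly what I mean by building the marker's rotation from the triangulated corners. It is only a sketch with placeholder names, and it assumes the usual ArUco corner order (top-left, top-right, bottom-right, bottom-left):

#include <opencv2/core.hpp>

// Build an orthonormal basis for the marker from its four triangulated corners.
// The columns of the returned matrix are the marker's x, y and z axes expressed
// in the camera (or stereo rig) frame, so it can serve as the marker's rotation.
cv::Matx33d markerRotationFromCorners(const cv::Point3d& tl, const cv::Point3d& tr,
                                      const cv::Point3d& br, const cv::Point3d& bl)
{
    // Average opposite edges to be a bit more robust to noisy 3D corners
    cv::Vec3d x = cv::Vec3d(tr - tl) + cv::Vec3d(br - bl); // left -> right
    cv::Vec3d y = cv::Vec3d(bl - tl) + cv::Vec3d(br - tr); // top  -> bottom

    x = cv::normalize(x);
    cv::Vec3d z = cv::normalize(x.cross(y)); // marker normal
    y = z.cross(x);                          // re-orthogonalise the second axis

    return cv::Matx33d(x[0], y[0], z[0],
                       x[1], y[1], z[1],
                       x[2], y[2], z[2]);
}

If I then take each marker's centre as its translation, the relative pose of marker B with respect to marker A should be R_A^T * R_B and R_A^T * (t_B - t_A), which is the quantity I expect to stay constant while the camera moves.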
Thanks in advance.
I have a rotatable camera mounted in a fixed position. The camera can be rotated through an API. I want to do 3D reconstruction of the environment around the camera.
Is it possible to do 3D reconstruction with my setup? I have read some theory about 3D reconstruction with two cameras. What are the major differences between my setup and a two-camera setup?
Any tutorials/blogs/samples are welcome ;)
Triangulating points isn't possible under pure rotational motion. Geometrically, you can think about how the direction vectors to a matched feature from the two camera poses start at the same camera centre, so they are either parallel (never meeting) or meet at an arbitrary point, and the results from triangulation won't mean anything. The best approach here would be some form of depth estimation from monocular images using deep learning.
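A quick way to see this algebraically (standard two-view geometry): put the camera centre at the origin, so a 3D point $X$ projects to $x \simeq K X$ before the rotation and, after rotating the camera by $R$,

$$x' \;\simeq\; K R X \;=\; K R K^{-1}(K X) \;\simeq\; K R K^{-1} x .$$

The second image point is therefore fixed by the first point, $R$ and the intrinsics $K$ alone (the so-called infinite homography); the depth of $X$ never appears, which is why such matches carry no triangulation information.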
I have a single calibrated camera pointing at a checkerboard at different locations, with known:
Camera intrinsics: fx, fy, cx, cy
Distortion coefficients: K1, K2, K3, T1, T2, etc.
Camera rotation & translation (R, T) from an IMU
After undistortion, I have computed point correspondences of the checkerboard points in all the images, together with known camera-to-camera rotation and translation vectors.
How can I estimate the 3D points of the checkerboard in all the images?
I think OpenCV has a function to do this, but I'm not able to understand how to use it:
1) cv::sfm::triangulatePoints
2) triangulatePoints
How can I compute the 3D points using OpenCV?
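For reference, this is the kind of call I think I need to end up with if I can build the projection matrices from my known intrinsics and IMU poses. It is only a sketch with placeholder names, and it assumes everything is stored as CV_64F and that R, t map world coordinates into each camera's frame:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// Triangulate matched, undistorted checkerboard corners from two views whose
// poses (R, t) relative to a common world frame are known (e.g. from the IMU).
cv::Mat triangulateCheckerboard(const cv::Mat& K,                     // 3x3 intrinsics
                                const cv::Mat& R1, const cv::Mat& t1, // pose of view 1
                                const cv::Mat& R2, const cv::Mat& t2, // pose of view 2
                                const cv::Mat& pts1,                  // 2xN corners, view 1
                                const cv::Mat& pts2)                  // 2xN corners, view 2
{
    cv::Mat Rt1, Rt2;
    cv::hconcat(R1, t1, Rt1);            // 3x4 [R1 | t1]
    cv::hconcat(R2, t2, Rt2);            // 3x4 [R2 | t2]
    cv::Mat P1 = K * Rt1, P2 = K * Rt2;  // projection matrices P = K [R | t]

    cv::Mat points4D;                    // 4xN homogeneous points
    cv::triangulatePoints(P1, P2, pts1, pts2, points4D);

    cv::Mat points3D;                    // one Euclidean 3D point per corner
    cv::convertPointsFromHomogeneous(points4D.t(), points3D);
    return points3D;
}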
Since you already have the matched points from the images, you can use findFundamentalMat() to get the fundamental matrix. Keep in mind you need at least 7 matched points to do this. If you have more than 8 points, CV_FM_RANSAC might be the best option.
Then use cv::sfm::projectionsFromFundamental() to find the projection matrix for each image, and check that the projection matrices are valid (e.g. check that the points are in front of the camera).
Then feed the projections and the points into cv::sfm::triangulatePoints().
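A rough sketch of that pipeline, assuming two already-matched point sets and the sfm module from opencv_contrib (the function and input names below are placeholders):

#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/sfm.hpp>

// pts1 and pts2 are the matched points from image 1 and image 2.
cv::Mat reconstructFromMatches(const std::vector<cv::Point2f>& pts1,
                               const std::vector<cv::Point2f>& pts2)
{
    // Fundamental matrix from the correspondences (needs at least 7-8 matches)
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC);

    // Projection matrices consistent with F; they are only defined up to a
    // projective ambiguity, so check that triangulated points end up in front
    // of the cameras, as mentioned above.
    cv::Mat P1, P2;
    cv::sfm::projectionsFromFundamental(F, P1, P2);

    // cv::sfm::triangulatePoints expects one 2xN matrix of points per view
    const int n = static_cast<int>(pts1.size());
    cv::Mat m1(2, n, CV_64F), m2(2, n, CV_64F);
    for (int i = 0; i < n; ++i)
    {
        m1.at<double>(0, i) = pts1[i].x;  m1.at<double>(1, i) = pts1[i].y;
        m2.at<double>(0, i) = pts2[i].x;  m2.at<double>(1, i) = pts2[i].y;
    }

    cv::Mat points3d; // 3xN reconstructed points
    cv::sfm::triangulatePoints(std::vector<cv::Mat>{m1, m2},
                               std::vector<cv::Mat>{P1, P2}, points3d);
    return points3d;
}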
Hope this helps :)
Edit
The rotation and translation matrices are needed to change reference frames, because the camera moves in SfM. The reference frame is at the position of the camera. Transforms are needed to make sure the positions of the points are coherent (under the same reference frame, which is usually the reference frame of the camera in the first image), so that all the points are in the same coordinate system.
I.e., to relate the points gathered in the second frame to the first frame, the third frame to the second, and so on.
So basically you can use the R and T vectors to construct a transform matrix for each frame and multiply it with your points to put them in the reference frame of the camera in the first image.
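A small sketch of that last step, assuming you have a rotation vector and a translation vector per frame stored as CV_64F (the names below are placeholders):

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// Build a 4x4 homogeneous transform [R t; 0 1] from a rotation vector and a
// translation vector (both CV_64F), e.g. the pose associated with each frame.
cv::Mat makeTransform(const cv::Mat& rvec, const cv::Mat& tvec)
{
    cv::Mat R;
    cv::Rodrigues(rvec, R);                             // 3x1 rotation vector -> 3x3 matrix

    cv::Mat T = cv::Mat::eye(4, 4, CV_64F);
    R.copyTo(T(cv::Rect(0, 0, 3, 3)));                  // top-left 3x3 block
    tvec.reshape(1, 3).copyTo(T(cv::Rect(3, 0, 1, 3))); // last column
    return T;
}

// Apply the transform to a 3D point to express it in the other reference frame.
cv::Point3d transformPoint(const cv::Mat& T, const cv::Point3d& p)
{
    cv::Mat ph = (cv::Mat_<double>(4, 1) << p.x, p.y, p.z, 1.0);
    cv::Mat q = T * ph;
    return cv::Point3d(q.at<double>(0), q.at<double>(1), q.at<double>(2));
}

Whether you need that transform or its inverse depends on whether your R and T map camera-to-world or world-to-camera, so it is worth checking the direction on a known point.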
I am trying to use OpenCV to correct an image for distortion and then calculate the real-world coordinates for a given pixel coordinate. I cannot find any examples online or in the OpenCV book of how to do this.
I have done the camera calibration with the chessboard image. Now I just need a basic function that I can give pixel coordinates to, and that will give me real-world coordinates based on the camera matrix, the distortion coefficients, and the rotation and translation vectors.
Does anyone know how to do this?
Take a look at the findHomography() function. If you know the real-world locations of a set of points, you can use this function to create a transformation matrix that you can then use with perspectiveTransform():
std::vector<Point2f> worldPoints;
std::vector<Point2f> cameraPoints;
// fill both vectors with corresponding points (same order in each)
Mat perspectiveMat_ = findHomography(cameraPoints, worldPoints, CV_RANSAC);
// use perspectiveTransform to map other image points to real-world coordinates
std::vector<Point2f> camera_corners;
// insert points from your camera image here
std::vector<Point2f> world_corners;
perspectiveTransform(camera_corners, world_corners, perspectiveMat_);
You can find more information about the function here
If I understand correctly, you need a world point from an image point. With a monocular camera this problem is unsolvable: you cannot determine the depth (distance) of the real-world point from the camera.
There are visual simultaneous localization and mapping (SLAM) algorithms that create a map of the world and compute the trajectory of the camera from a video, but they are a whole other thing.
Given a single image and a point on it, expressed in terms of 2D pixel coordinates, there are infinitely many 3D points in the real world, all lying on a line, which map to that point in your image... not just one point.
But if you know the distance of the object at pixel (x, y) from the camera, then you can calculate its location in 3D.
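For example, here is a minimal sketch of that last case, assuming the pixel has already been undistorted and Z is the known distance along the optical axis (function and variable names are just for illustration):

#include <opencv2/core.hpp>

// Back-project an undistorted pixel to a 3D point in the camera frame, given
// its depth Z along the optical axis. K is the 3x3 camera matrix (CV_64F).
cv::Point3d backProject(const cv::Point2d& pixel, double Z, const cv::Mat& K)
{
    const double fx = K.at<double>(0, 0), fy = K.at<double>(1, 1);
    const double cx = K.at<double>(0, 2), cy = K.at<double>(1, 2);
    const double X = (pixel.x - cx) / fx * Z;
    const double Y = (pixel.y - cy) / fy * Z;
    return cv::Point3d(X, Y, Z); // camera coordinates; apply R, t to reach world coordinates
}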
I am using OpenCV with C, and I am trying to get the extrinsic parameters (rotation and translation) between two cameras.
I'm told that a checkerboard pattern can be used to calibrate, but I can't find any good samples on this. How do I go about doing this?
Edit
The suggestions given are for calibrating a single camera with a checkerboard. How would you find the rotation and translation between 2 cameras given the checkerboard images from both views?
I was using code from http://www.starlino.com/opencv_qt_stereovision.html. It has some useful information, and the author's code is pretty easy to understand and analyze. It covers both chessboard calibration and getting a depth image from stereo cameras. I think it's based on this OpenCV book.
The OpenCV library here, and about 3 chapters of the OpenCV book.
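For the edited question specifically, the OpenCV routine that estimates the rotation and translation between two cameras from checkerboard views is stereoCalibrate. A minimal C++ sketch, assuming each camera has already been calibrated on its own and the corner lists have been collected (e.g. with findChessboardCorners); all names are placeholders:

#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// objectPoints: checkerboard corner positions in board coordinates, one copy per
// image pair; imagePoints1/2: the corresponding detected corners in each camera.
// K1, D1, K2, D2: intrinsics and distortion from calibrating each camera alone.
void calibrateStereoPair(const std::vector<std::vector<cv::Point3f>>& objectPoints,
                         const std::vector<std::vector<cv::Point2f>>& imagePoints1,
                         const std::vector<std::vector<cv::Point2f>>& imagePoints2,
                         cv::Mat K1, cv::Mat D1, cv::Mat K2, cv::Mat D2,
                         cv::Size imageSize, cv::Mat& R, cv::Mat& T)
{
    cv::Mat E, F; // essential and fundamental matrices, also returned by OpenCV
    double rms = cv::stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
                                     K1, D1, K2, D2, imageSize,
                                     R, T, E, F, cv::CALIB_FIX_INTRINSIC);
    // R and T take points from the first camera's frame into the second's;
    // rms is the reprojection error, useful as a sanity check.
    (void)rms;
}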
A picture from a camera is just a projection of a bunch of color samples onto a plane. Assuming that the camera itself creates pictures with square pixels, the possible positions of the point behind a given pixel lie along a ray from the camera's origin through the plane the pixel was projected onto. We'll refer to that plane as the picture plane.
One sample doesn't give you that much information. Two samples tell you a little bit more: the position of the camera relative to the plane created by three points, namely the two sample points and the camera position. And a third sample tells you the relative position of the camera in the world; this will be a single point in space.
If you take the same three samples and find them in another picture taken from a different camera, you will be able to determine the relative position of the cameras from the three samples (and their orientations, based on the right and up vectors of the picture plane). To get the correct distance, you need to know the distance between the actual sample points; in the case of a checkerboard, that's the physical dimensions of the checkerboard.