How to use camera calibration parameters in OpenCV for augmented reality? - C++

So far I've successfully calibrated a camera with OpenCV using the chessboard pattern from the tutorials, so I have the cameraMatrix, distCoeffs, rotation, and translation vectors.
Now I'd like to display an image on top of my chessboard using the calibration parameters. How can I do that?
These are the steps I've done so far:
1 - Get the chessboard corners
2 - projectPoints to map the world points (640x480) to the warped frame seen during calibration
3 - getPerspectiveTransform to get the transformation from world to warped image
4 - warpPerspective to warp the image I'd like to display on top of the chessboard
5 - Create a mask over the chessboard area
6 - Flip the image I'd like to display
7 - Finally, copy the warped image into the video frame over the area delimited by the mask
The corners and mask are working fine, but I'm not quite sure about the rest of the process.
Can anyone help me?

See this post; I think my code will help you. It has an error, but to understand how to do it, I think the code will be useful:
https://stackoverflow.com/questions/34785237/augmented-reality-projection-cube-error
Good luck!
PS: the code is in Python!
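For a C++ starting point, here is a minimal sketch of the pipeline the question describes: detect the corners, build a homography, warp the overlay, and copy it through a mask. The pattern size, the file names, and the shortcut of building the homography with getPerspectiveTransform on the four outer corners (instead of going through projectPoints) are assumptions, not code from the linked post:
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat frame = cv::imread("frame.jpg");      // video frame showing the chessboard
    cv::Mat overlay = cv::imread("overlay.jpg");  // image to display on top of the board

    // 1 - get the inner chessboard corners (9x6 pattern assumed)
    cv::Size patternSize(9, 6);
    std::vector<cv::Point2f> corners;
    if (!cv::findChessboardCorners(frame, patternSize, corners))
        return -1;

    // 2/3 - homography from the overlay's corners to the four outer detected
    // corners (top-left, top-right, bottom-right, bottom-left, assuming the
    // usual row-major corner ordering)
    std::vector<cv::Point2f> src = {
        {0.f, 0.f}, {(float)overlay.cols, 0.f},
        {(float)overlay.cols, (float)overlay.rows}, {0.f, (float)overlay.rows}};
    std::vector<cv::Point2f> dst = {
        corners[0], corners[patternSize.width - 1],
        corners[corners.size() - 1], corners[corners.size() - patternSize.width]};
    cv::Mat H = cv::getPerspectiveTransform(src, dst);

    // 4 - warp the overlay into the frame's coordinates
    cv::Mat warped;
    cv::warpPerspective(overlay, warped, H, frame.size());

    // 5/7 - build a mask from the warped quad and copy it onto the frame
    cv::Mat mask(frame.size(), CV_8UC1, cv::Scalar(0));
    std::vector<cv::Point> quad;
    for (const auto& p : dst)
        quad.push_back(cv::Point(cvRound(p.x), cvRound(p.y)));
    cv::fillConvexPoly(mask, quad, cv::Scalar(255));
    warped.copyTo(frame, mask);

    cv::imwrite("result.jpg", frame);
    return 0;
}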

Related

Get correspondence between flat and depth images from .oni file

I have a video captured from a Kinect in an .oni file.
I can extract an RGB image from it and then find a feature in the image.
What I need to do then is to find a point in 3D that corresponds to my point on 2D image.
Is this possible?
(I am going to be using C++)
It is possible. In OpenNI 2 there is a class called CoordinateConverter. There is no direct function from RGB to world coordinates, but if you have the correspondence in the depth image, this class should work. Also, I am almost sure that if image registration is enabled, the x,y points with valid depth in the depth image should be the same x,y as in the RGB image.
I hope this helps. If you use OpenNI 1, just tell me.
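A minimal sketch of that conversion with the OpenNI 2 C++ API, assuming a registered depth stream and a pixel (x, y) located in the RGB image:
#include <OpenNI.h>

// Convert a depth-map pixel (x, y) with depth value z into 3D world
// coordinates (in millimeters). depthStream must be a valid, started
// openni::VideoStream; with depth-to-color registration enabled, (x, y)
// from the RGB image can be used to index the depth map directly.
bool pixelToWorld(const openni::VideoStream& depthStream,
                  int x, int y, openni::DepthPixel z,
                  float& wx, float& wy, float& wz) {
    openni::Status rc = openni::CoordinateConverter::convertDepthToWorld(
        depthStream, x, y, z, &wx, &wy, &wz);
    return rc == openni::STATUS_OK;
}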

How to detect image location before stitching with OpenCV / C++

I'm trying to merge/stitch 2 images together but found that the default stitcher class in OpenCV could not handle my images.
So I started to write my own.
Unfortunately the images are too large to attach to this message (they are both 12600x9000 pixels in size), so I'll try to explain as well as possible.
The 2 images are not pictures taken by a camera but TIFF files extracted from a PDF file.
The images are actually CAD drawings, so there are not many gradients in them, and I think that is why the default stitcher class could not handle them.
So far, I managed to extract the features and match them.
Also, I used the following well-known example to stitch them together:
// Warp the second image into the first image's coordinate frame;
// the canvas is oversized to leave room for the warped result.
Mat WarpedImage;
cv::warpPerspective(img_2, WarpedImage, homography, cv::Size(2 * img_2.cols, 2 * img_2.rows));
// Copy the first image into the top-left corner of the canvas.
Mat half(WarpedImage, Rect(0, 0, img_1.cols, img_1.rows));
img_1.copyTo(half);
I sort of made it fit, but my problem is that the 2 images could be aligned vertically or horizontally.
By default, all stitch examples on the internet assume the first image is the left image and the 2nd image is the right image.
So my first question would be:
How can I detect if the image is to the left, right, above or below the first image and create a proper sized new image?
Secondly:
Currently I'm getting the proper image; however, because I don't have decent code to compute the ideal width and height of the new image, I get a lot of black/empty space in the new image.
What would be the best C++ code to remove those black areas?
(I'm seeing a lot of Python scripts on the net, but no C++ examples of this, and I have zero Python skills.)
Thank you very much in advance for your help.
Greetings,
Floris.
You can reproject the corners of the second image with perspectiveTransform. With the transformed points you can find the relative position of your image and calculate the new image size that will fit both images. This will also let you deal with the black areas, since you have the boundaries of the two images.
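A minimal sketch of that idea, assuming the img_1, img_2, and homography names from the question's snippet:
#include <opencv2/opencv.hpp>
#include <vector>

// Warp img_2 into img_1's frame on a canvas just large enough for both.
cv::Mat stitchPair(const cv::Mat& img_1, const cv::Mat& img_2,
                   const cv::Mat& homography) {
    // Reproject img_2's corners to find where it lands relative to img_1.
    std::vector<cv::Point2f> corners = {
        {0.f, 0.f}, {(float)img_2.cols, 0.f},
        {(float)img_2.cols, (float)img_2.rows}, {0.f, (float)img_2.rows}};
    std::vector<cv::Point2f> warped;
    cv::perspectiveTransform(corners, warped, homography);

    // Bounding rectangle of both images (img_1 sits at the origin), so
    // the result works whether img_2 lands left, right, above, or below.
    warped.push_back(cv::Point2f(0.f, 0.f));
    warped.push_back(cv::Point2f((float)img_1.cols, (float)img_1.rows));
    cv::Rect bounds = cv::boundingRect(warped);

    // Translate everything so the bounding box starts at (0, 0); sizing
    // the canvas to the bounding box also removes the black margins.
    cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, -bounds.x,
                                           0, 1, -bounds.y,
                                           0, 0, 1);
    cv::Mat result;
    cv::warpPerspective(img_2, result, T * homography, bounds.size());
    img_1.copyTo(result(cv::Rect(-bounds.x, -bounds.y, img_1.cols, img_1.rows)));
    return result;
}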

OpenCV Transform using Chessboard

I have only just started experimenting with OpenCV a little bit. I have a setup of an LCD with a static position, and I'd like to extract what is being displayed on the screen from the image. I've seen the chessboard pattern used for calibrating a camera, but it seems like that is used to undistort the image, which isn't totally what I want to do.
I was thinking I'd display the chessboard on the LCD and then figure out the transformations needed to convert the image of the LCD into the ideal view of the chessboard directly overhead and cropped. Then I would store the transformations, change what the LCD is displaying, take a picture, perform the same transformations, and get the ideal view of what was now being displayed.
Does that sound like a good idea? Is there a simpler way to achieve what I'm trying to do? And any tips on the functions I should use to figure out the transformations, perform them, and store them (maybe just keep the transform matrices in memory or write them to a file)?
I'm not sure I understood correctly everything you are trying to do, but bear with me.
Some cameras have lenses that introduce a little distortion into the image, and for this purpose OpenCV offers methods to aid in the camera calibration process.
Practically speaking, if you want to write an application that will automatically correct the distortion in the image, you first need to discover the magic values needed to undo this effect. These values come from a proper calibration procedure.
The chessboard image is used together with an application to calibrate the camera. So, after you have an image of the chessboard taken by the camera device, pass this image to the calibration app. The app will identify the corners of the squares, compute the distortion values, and return the magic values you need to counter the distortion effect. At this point, you are interested in 2 variables returned by calibrateCamera(): cameraMatrix and distCoeffs. Print them, and write the data down on a piece of paper.
In the end, the system you are developing needs a function/method to undistort the image, with these 2 variables hard-coded inside the function, followed by a call to cv::undistort() (if you are using the C++ API of OpenCV):
cv::Mat undistorted;
// cameraMatrix and distCoeffs are the values obtained during calibration.
cv::undistort(image, undistorted, cameraMatrix, distCoeffs);
and that's it.
Detecting rotation automatically might be a bit tricky, but the first thing to do is find the coordinates of the object you are interested in. If the camera is in a fixed position, this is going to be easy.
For more info on perspective change and rotation with OpenCV, I suggest taking a look at these other questions:
Executing cv::warpPerspective for a fake deskewing on a set of cv::Point
Affine Transform, Simple Rotation and Scaling or something else entirely?
Rotate cv::Mat using cv::warpAffine offsets destination image
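To address the original question about computing, applying, and storing the transformation, here is a minimal sketch. It detects the chessboard displayed on the LCD in a one-time calibration shot, maps the screen to an ideal overhead view with getPerspectiveTransform, and persists the matrix with cv::FileStorage. The pattern size, file names, and output size are assumptions:
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // One-time setup: a shot of the LCD showing a chessboard
    // (7x5 inner corners assumed).
    cv::Mat shot = cv::imread("lcd_chessboard.jpg");
    cv::Size patternSize(7, 5);
    std::vector<cv::Point2f> corners;
    if (!cv::findChessboardCorners(shot, patternSize, corners))
        return -1;

    // Map the four outer detected corners to an ideal 700x500 overhead view.
    std::vector<cv::Point2f> src = {
        corners[0], corners[patternSize.width - 1],
        corners[corners.size() - 1], corners[corners.size() - patternSize.width]};
    std::vector<cv::Point2f> dst = {
        {0.f, 0.f}, {700.f, 0.f}, {700.f, 500.f}, {0.f, 500.f}};
    cv::Mat M = cv::getPerspectiveTransform(src, dst);

    // Store the matrix so later captures can reuse it.
    cv::FileStorage fs("lcd_transform.yml", cv::FileStorage::WRITE);
    fs << "M" << M;
    fs.release();

    // Later: apply the stored transform to any new picture of the LCD.
    cv::Mat picture = cv::imread("lcd_new_content.jpg"), rectified;
    cv::warpPerspective(picture, rectified, M, cv::Size(700, 500));
    cv::imwrite("rectified.jpg", rectified);
    return 0;
}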
findHomography() is not a bad choice, but skew and distortion (from the camera lens) are the real problems.
C++: Mat findHomography(InputArray srcPoints, InputArray dstPoints, int method=0, double ransacReprojThreshold=3, OutputArray mask=noArray())
Python: cv2.findHomography(srcPoints, dstPoints[, method[, ransacReprojThreshold[, mask]]]) → retval, mask
C: void cvFindHomography(const CvMat* srcPoints, const CvMat* dstPoints, CvMat* H, int method=0, double ransacReprojThreshold=3, CvMat* status=NULL)
http://opencv.itseez.com/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findhomography
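A brief C++ usage sketch, assuming matched point sets from the two views are already available:
#include <opencv2/opencv.hpp>
#include <vector>

// Robustly fit a homography from matched points; RANSAC tolerates
// outliers among the correspondences.
cv::Mat fitHomography(const std::vector<cv::Point2f>& srcPoints,
                      const std::vector<cv::Point2f>& dstPoints) {
    cv::Mat inlierMask;
    return cv::findHomography(srcPoints, dstPoints, cv::RANSAC, 3.0, inlierMask);
}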

How to cut out faces after detection

I have been working on a college project using OpenCV. I made a simple program that detects faces by passing frames captured by a webcam to a detection function.
On detection it draws black boxes on the faces. However, my project does not end here; I would like to clip out the detected faces as soon as possible, save them as images, and then apply different image processing techniques [as per my need]. If this is too problematic, I could use a simple image instead of frames captured by a webcam.
I am just clueless about how to go about clipping out the detected faces.
For the C++ version, you can check this tutorial from the OpenCV documentation.
In the function detectAndDisplay you can see the line
Mat faceROI = frame_gray( faces[i] );
where faceROI is the clipped face, which you can save to a file with the imwrite function:
imwrite("face.jpg", faceROI);
http://nashruddin.com/OpenCV_Region_of_Interest_(ROI)
Check this link; you can crop the image using the dimensions of the black box, resize it, and save it as a new file.
Could you grab the frame and crop the photo with the X,Y coordinates of each corner?
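Putting the pieces together, a minimal detect-then-crop sketch; the cascade file path and input file name are assumptions (the XML ships with OpenCV, so adjust the path to your installation):
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main() {
    // Load the face detector.
    cv::CascadeClassifier faceCascade("haarcascade_frontalface_alt.xml");
    cv::Mat frame = cv::imread("frame.jpg");
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    // Detect faces and save each one as its own image file.
    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces);
    for (size_t i = 0; i < faces.size(); ++i) {
        cv::Mat faceROI = frame(faces[i]);  // crop from the color frame
        cv::imwrite("face_" + std::to_string(i) + ".jpg", faceROI);
    }
    return 0;
}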

OpenCV: combining an image with another image at given coordinates

I wrote face & eye detection code.
The next step is to put an image at the coordinates of the detected eye (for example: an eye patch, eyeglasses).
I couldn't find the function to combine the source frame and the image I want to add.
Any suggestions?
Thanks
You can use cvCopy with a mask to do this. If the images do not have the same height and width, set the ROI of the destination image before using cvCopy.
See OpenCV documentation:
cvCopy
cvSetImageROI
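With the C++ API, the equivalent is copyTo with a mask onto an ROI of the destination. A minimal sketch, assuming eyeRect comes from the eye detector and that patch/patchMask (the image to insert and its visibility mask) are supplied by the caller:
#include <opencv2/opencv.hpp>

// Paste `patch` onto `frame` at the detected eye rectangle. `patchMask`
// is an 8-bit mask that is non-zero where the patch should be visible.
void overlayPatch(cv::Mat& frame, const cv::Mat& patch,
                  const cv::Mat& patchMask, const cv::Rect& eyeRect) {
    // Resize the patch (and its mask) to the detected eye region,
    // since the two images will not have the same size in general.
    cv::Mat resizedPatch, resizedMask;
    cv::resize(patch, resizedPatch, eyeRect.size());
    cv::resize(patchMask, resizedMask, eyeRect.size());

    // The ROI is a view into `frame`, so copying into it modifies the frame.
    cv::Mat roi = frame(eyeRect);
    resizedPatch.copyTo(roi, resizedMask);
}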