OpenCV Transform using Chessboard - c++

I have only just started experimenting with OpenCV a little bit. I have a setup of an LCD with a static position, and I'd like to extract what is being displayed on the screen from the image. I've seen the chessboard pattern used for calibrating a camera, but it seems like that is used to undistort the image, which isn't totally what I want to do.
I was thinking I'd display the chessboard on the LCD and then figure out the transformations needed to convert the image of the LCD into the ideal view of the chessboard directly overhead and cropped. Then I would store the transformations, change what the LCD is displaying, take a picture, perform the same transformations, and get the ideal view of what was now being displayed.
I'm wondering if that sounds like a good idea? Is there a simpler way to achieve what I'm trying to do? Also, any tips on which functions I should use to figure out the transformations, perform them, and store them (maybe just keep the transform matrices in memory or write them to a file)?

I'm not sure I understood correctly everything you are trying to do, but bear with me.
Some cameras have lenses that cause a little distortion to the image, and for this purpose OpenCV offers methods to aid in the camera calibration process.
Practically speaking, if you want to write an application that will automatically correct the distortion in the image, you first need to discover the magical values that undo this effect. These values come from a proper calibration procedure.
The chessboard image is used together with an application to calibrate the camera. So, after you have an image of the chessboard taken by the camera device, pass this image to the calibration app. The app will identify the corners of the squares, compute the distortion, and return the magical values you need to counter it. At this point, you are interested in 2 variables returned by calibrateCamera(): cameraMatrix and distCoeffs. Print them, and write the data down on a piece of paper.
In the end, the system you are developing needs a function/method to undistort the image, where these 2 variables are hard coded inside the function, followed by a call to cv::undistort() (if you are using the C++ API of OpenCV):
cv::Mat undistorted;
cv::undistort(image, undistorted, cameraMatrix, distCoeffs);
and that's it.
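If you would rather not copy the numbers onto paper and hard code them, you can also persist them to a file, as the original question suggests. A small sketch, assuming the cameraMatrix and distCoeffs variables returned by calibrateCamera() and a file name of your own choosing:
// after calibrateCamera() has produced cameraMatrix and distCoeffs:
cv::FileStorage fsOut("calibration.yml", cv::FileStorage::WRITE);
fsOut << "cameraMatrix" << cameraMatrix << "distCoeffs" << distCoeffs;
fsOut.release();
// later, in the application that does the undistortion:
cv::Mat cameraMatrix, distCoeffs;
cv::FileStorage fsIn("calibration.yml", cv::FileStorage::READ);
fsIn["cameraMatrix"] >> cameraMatrix;
fsIn["distCoeffs"] >> distCoeffs;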
Detecting rotation automatically might be a bit tricky, but the first thing to do is find the coordinates of the object you are interested in. Since the camera is in a fixed position, though, this is going to be easy.
For more info on perspective change and rotation with OpenCV, I suggest taking a look at these other questions:
Executing cv::warpPerspective for a fake deskewing on a set of cv::Point
Affine Transform, Simple Rotation and Scaling or something else entirely?
Rotate cv::Mat using cv::warpAffine offsets destination image

findHomography() is not a bad choice, but skew and (camera lens) distortion are the real problem.
C++: Mat findHomography(InputArray srcPoints, InputArray dstPoints, int method=0, double ransacReprojThreshold=3, OutputArray mask=noArray())
Python: cv2.findHomography(srcPoints, dstPoints[, method[, ransacReprojThreshold[, mask]]]) → retval, mask
C: void cvFindHomography(const CvMat* srcPoints, const CvMat* dstPoints, CvMat* H, int method=0, double ransacReprojThreshold=3, CvMat* status=NULL)
http://opencv.itseez.com/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findhomography
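To tie this back to the original question: one possible sketch, assuming the LCD shows a chessboard with 9x6 inner corners and that the detected corner order matches the ideal grid you build (both are assumptions you would need to verify; gray and frame are placeholders for the captured image), is to fit a homography once and reuse it on every later frame:
// detect the chessboard corners in a grayscale image of the LCD
cv::Size patternSize(9, 6);                                  // inner corners of the displayed pattern
std::vector<cv::Point2f> imageCorners;
bool found = cv::findChessboardCorners(gray, patternSize, imageCorners);
// build the matching "ideal" corner positions for the rectified, cropped view
const int cell = 50;                                         // pixels per chessboard square in the output
std::vector<cv::Point2f> idealCorners;
for (int y = 0; y < patternSize.height; ++y)
    for (int x = 0; x < patternSize.width; ++x)
        idealCorners.push_back(cv::Point2f(float(x * cell), float(y * cell)));
// fit the homography once, then reuse it (and optionally save it with cv::FileStorage)
cv::Mat H = cv::findHomography(imageCorners, idealCorners, cv::RANSAC);
cv::Mat rectified;
cv::warpPerspective(frame, rectified, H,
                    cv::Size(patternSize.width * cell, patternSize.height * cell));
Note that the homography only removes the perspective of the fixed camera; lens distortion still has to be handled separately, e.g. with the calibration/undistort step from the answer above.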

Related

Depth/Disparity Map from a moving camera in OpenCV

Is it possible to get the depth/disparity map from a moving camera? Say I capture an image at location x, then travel, say, 5 cm and capture another picture, and from there I calculate the depth map of the scene.
I have tried using block matching in OpenCV but the result is not good. The first and second images are shown below:
[first image, second image, disparity map (colour), disparity map]
My code is as follows:
GpuMat leftGPU;
GpuMat rightGPU;
leftGPU.upload(left);
rightGPU.upload(right);
GpuMat disparityGPU;
GpuMat disparityGPU2;
Mat disparity;
Mat disparity1, disparity2;
Ptr<cuda::StereoBM> stereo = createStereoBM(256,3);
stereo->setMinDisparity(-39);
stereo->setPreFilterCap(61);
stereo->setPreFilterSize(3);
stereo->setSpeckleRange(1);
stereo->setUniquenessRatio(0);
stereo->compute(leftGPU,rightGPU,disparityGPU);
drawColorDisp(disparityGPU, disparityGPU2,256);
disparityGPU.download(disparity);
disparityGPU2.download(disparity2);
imshow("display img",disparityGPU);
How can I improve on this? In the colour disparity map there are quite a lot of errors (e.g. the tall circle is coloured red, the same as some parts of the table). Also, in the disparity map there is small-scale noise (all the black dots in the picture); how can I fill those black dots with nearby disparities?
It is possible if the object is static.
To properly do stereo matching, you first need to rectify your images! If you don't have calibrated cameras, you can do this from detected feature points. Also note that for cuda::StereoBM the default minimum disparity is 0. (I have never used cuda, but I don't think your setMinDisparity is doing anything; see this answer.)
Now, in your example images corresponding points are only about 1 row apart, so your disparity map actually doesn't look too bad. Maybe a larger blockSize would already help in this special case.
Finally, your objects have very low texture, therefore the block matching algorithm can't detect much.
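As a rough sketch of the uncalibrated route described above (the feature detector, the match filtering and the StereoBM parameters are all assumptions you would have to tune, and left/right are assumed to be 8-bit grayscale frames):
// match features between the two frames
cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
std::vector<cv::KeyPoint> kp1, kp2;
cv::Mat desc1, desc2;
orb->detectAndCompute(left, cv::noArray(), kp1, desc1);
orb->detectAndCompute(right, cv::noArray(), kp2, desc2);
cv::BFMatcher matcher(cv::NORM_HAMMING, true);      // cross-check matching
std::vector<cv::DMatch> matches;
matcher.match(desc1, desc2, matches);
std::vector<cv::Point2f> pts1, pts2;
for (const cv::DMatch &m : matches) {
    pts1.push_back(kp1[m.queryIdx].pt);
    pts2.push_back(kp2[m.trainIdx].pt);
}
// estimate the fundamental matrix and rectify without calibration
cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC, 3.0, 0.99);
cv::Mat H1, H2;
cv::stereoRectifyUncalibrated(pts1, pts2, F, left.size(), H1, H2);
cv::Mat leftRect, rightRect;
cv::warpPerspective(left, leftRect, H1, left.size());
cv::warpPerspective(right, rightRect, H2, right.size());
// block matching on the rectified pair (CPU version shown, with a larger blockSize)
cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(128, 21);
cv::Mat disparity;
bm->compute(leftRect, rightRect, disparity);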

OpenCV blur creating lines/blob-like areas

I'm using OpenCV with a very high kernel (50 and higher) to get a very exaggerated blur effect.
I am getting these weird line/area-like effects in the generated imagery. Please refer to the wall area in the image below.
Is this something that is inherent to blurring at a very high kernel size?
What would be some techniques to smooth out and eliminate this effect?
I am using OpenFrameworks with the ofxCV addon. The relevant part of my code is just
blur(camScaled, 51);
If you are not familiar with it, ofxCV is essentially a bridge and maps back to this OpenCV call in the end:
CV_EXPORTS_W void blur( InputArray src, OutputArray dst,
Size ksize, Point anchor=Point(-1,-1),
int borderType=BORDER_DEFAULT );
This effect is pretty normal, because blurring means averaging the pixel values over the kernel.
You should try an edge-preserving filter such as bilateral filter.
If you still want to use a "classic" blur, you could try the median blur instead of the mean blur; that should at least attenuate the effect.
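For reference, a minimal sketch of the two alternatives next to the original call (the parameter values are just illustrative starting points, and src is assumed to be an 8-bit image):
cv::Mat boxBlurred, medianBlurred, bilateralFiltered;
// box blur with a large kernel -- produces the flat, streaky areas described above
cv::blur(src, boxBlurred, cv::Size(51, 51));
// median blur: still heavy smoothing, but the artefacts are usually less pronounced
// (for kernel sizes above 5 the input must be 8-bit)
cv::medianBlur(src, medianBlurred, 51);
// bilateral filter: smooths flat regions while preserving edges
cv::bilateralFilter(src, bilateralFiltered, 15, 80.0, 80.0);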

Using Distortion Coefficients with findHomography in OpenCV

I am currently lost in the OpenCV documentation and am looking for some guidance on the possible ordering of functions, or perhaps a function within OpenCV that I haven't come across yet...
I am tracking a laser blob within a camera feed to a location on a projection screen. Up until now I have been using findHomography and then perspectiveTransform to accomplish this; however, the camera I was using had very little distortion. Now I am using a different camera with noticeable radial distortion. I have used cvCalibrateCamera to get the distortion coefficients, camera matrix, etc., but I am not sure how I should use this data with my current process, or perhaps I need to use different functions and/or a different ordering of functions from OpenCV altogether. Any suggestions would be appreciated...
My current code that works well (without distortion) is as follows:
Mat homog;
homog = findHomography(Mat(vCameraPoints), Mat(vTargetPoints), CV_RANSAC);
vector<Point2f> cvTrackPoint;
cvTrackPoint.push_back(Point2f(pMapPoint.fX, pMapPoint.fY));
Mat normalizedImageMat;
perspectiveTransform(Mat(cvTrackPoint), normalizedImageMat, homog);
Point2f normalizedImgPt;
normalizedImgPt = Point2f(normalizedImageMat.at<Point2f>(0,0));
normalizedImgPt.x /= szCameraSize.fWidth;
normalizedImgPt.y /= szCameraSize.fHeight;
I then of course multiply normalizedImgPt by my projection screen resolution.
So again, just to clarify: I do have what appears to be good data from calibrateCamera; how would I use this information to factor in the lens distortion? Perhaps the above process won't work at all; any help?
Thanks in advance
If you have acquired the distortion coefficients, then a simple (yet probably suboptimal) way to get back to the non-distorted case would be to undistort the image. The undistorted image is the image a camera with similar intrinsic and extrinsic parameters but without lens distortion would acquire.
The corresponding OpenCV function is undistort
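If undistorting every full frame turns out to be too costly, another possible sketch is to undistort only the points themselves, assuming cameraMatrix and distCoeffs are the outputs of your cvCalibrateCamera run and that the homography is also estimated from undistorted camera points (vUndistortedCameraPoints below is a hypothetical, pre-undistorted version of vCameraPoints):
// undistort the tracked laser point; passing cameraMatrix as the new projection
// matrix P keeps the result in pixel coordinates instead of normalized coordinates
std::vector<cv::Point2f> rawPt(1, cv::Point2f(pMapPoint.fX, pMapPoint.fY));
std::vector<cv::Point2f> idealPt;
cv::undistortPoints(rawPt, idealPt, cameraMatrix, distCoeffs, cv::noArray(), cameraMatrix);
// the rest of the pipeline stays the same, but built from undistorted points
cv::Mat homog = cv::findHomography(cv::Mat(vUndistortedCameraPoints), cv::Mat(vTargetPoints), CV_RANSAC);
cv::Mat normalizedImageMat;
cv::perspectiveTransform(cv::Mat(idealPt), normalizedImageMat, homog);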

Reverse Fish-Eye Distortion

I am working with a fish-eye camera and need to reverse the distortion before any further calculation. In this question this is done: Correcting fisheye distortion
src = cv.LoadImage(src)
dst = cv.CreateImage(cv.GetSize(src), src.depth, src.nChannels)
mapx = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
mapy = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
cv.InitUndistortMap(intrinsics, dist_coeffs, mapx, mapy)
cv.Remap(src, dst, mapx, mapy, cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
The problem with this is that the remap function goes through all the points and creates a new picture out of them; this is time-consuming to do every frame.
What I am looking for is a point-to-point translation from fish-eye picture coordinates to normal picture coordinates.
The approach we are taking is to do all the calculations on the input frame and just translate the result coordinates to world coordinates, so we don't want to go through all the points of the picture and create a new one out of it. (Time is really important for us.)
In the matrices mapx and mapy there are some point-to-point translations, but a lot of points do not have a complete translation.
I tried to interpolate these matrices, but the result was not what I was looking for.
Any help would be much appreciated, even other approaches that are more time-efficient than cv.Remap.
Thanks
I think what you want is cv.UndistortPoints().
Assuming you have detected some point features distorted in your distorted image, you should be able to do something like this:
cv.UndistortPoints(distorted, undistorted, intrinsics, dist_coeffs)
This will allow you to work with undistorted points without generating a new, undistorted image for each frame.

Extracting Depth images of Kinect using opencv

Does anyone know the simplest way to extract the gray-level depth images of the Kinect using OpenCV and C++? Any source code in this field?
If you use the OpenNI SDK, you can simply point to the buffer:
//on setup:
xn::DepthGenerator depthGenerator;
xn::DepthMetaData depthMD;
cv::Mat depthWrapper;
//on update loop,
//after context.WaitAnyUpdateAll();
depthGenerator.GetMetaData(depthMD);
depthWrapper = cv::Mat(depthMD.YRes(), depthMD.XRes(), CV_16UC1, (void*) depthMD.Data());
Note that the data wrapped by depthWrapper is const, so you need to clone it in order to manipulate it.
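For example (a small sketch of that cloning step):
cv::Mat depthCopy = depthWrapper.clone();   // deep copy that you can safely modify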
The documentation has everything you need. Can't elaborate better than this.
You need to do two things (apart from reading about context, depth generator and initialization of Kinect):
1. Create a Mat of type CV_16U:
a. context.WaitOneUpdateAll(depth_map);
b. Mdepth_original = Mat(h_depth, w_depth, CV_16U, (void*) depth_map.GetData());
c. copy the Mat, since it will be destroyed during the next read: Mdepth_original.copyTo(depth);
2. Map depth to gray or color. Color seems like a good idea (256^3 levels), but the human eye is more sensitive to luminance changes. Even with 256 levels you can map the 10,000 Kinect levels reasonably well using a histogram equalization technique. The simplest way, though, is to lose precision and just do I(x, y) = 255.0*z(x, y)/z_range.
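A small sketch of that simple scaling with OpenCV, assuming z_range is the largest depth value you care about (e.g. 10000 for Kinect depth in millimetres):
// Mdepth_original is the CV_16U depth Mat copied above
double z_range = 10000.0;                                   // assumed maximum depth
cv::Mat depth8;
Mdepth_original.convertTo(depth8, CV_8U, 255.0 / z_range);  // I(x, y) = 255 * z(x, y) / z_range
// optional: spread the 256 gray levels more evenly (histogram equalization)
cv::Mat depth8eq;
cv::equalizeHist(depth8, depth8eq);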
Here is how histogram equalization is implemented in openNI2:
https://github.com/OpenNI/OpenNI2/blob/master/Samples/Common/OniSampleUtilities.h