Warping images and a problem with the homography matrix, OpenCV - C++

I have a problem with the results of the function cvFindHomography().
The results contain negative coordinates; I will explain what I'm doing.
I'm working on video stabilization using the optical flow method. I have estimated the locations of the features in the first and second frames. Now my aim is to warp the images in order to stabilize them. Before this step I need to compute the homography matrix between the frames, for which I used the function mentioned above, but the result doesn't look realistic: it contains negative values, and the values can change to even stranger results:
0.482982 53.5034 -0.100254
-0.000865877 63.6554 -0.000213824
-0.0901095 0.301558 1
After obtaining these results I run into a problem when applying the warping with cvWarpPerspective(). The error says there is a problem with the matrices: an incorrect conversion in "cvarrToMat"?
So where is the problem? Can you give me another suggestion if one is available?
Note: if you can also help me with implementing the warping in C++, that would be great.
Thank you

A poor homography estimate can cause warping errors inside cvWarpPerspective().
The homography values you have posted describe a full projective transformation that moves points at infinity into the 2D Euclidean plane, which is probably wrong.
In video stabilization, a good homography model is usually computed from other features such as Harris corners, Hessian-affine regions, or SIFT/SURF keypoints, combined with a robust model estimator such as RANSAC or LMedS.
Check out this link for a MATLAB example of video stabilization...
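A rough sketch of that approach in the OpenCV 2.x C++ API, using ORB instead of SIFT/SURF so it works without the nonfree module (the function name stabilizeFrame and the parameter values are placeholders, not something from your code):

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Estimate a homography between two consecutive grayscale frames with ORB
// keypoints and RANSAC, then warp the current frame onto the previous one.
cv::Mat stabilizeFrame(const cv::Mat& prevGray, const cv::Mat& currGray)
{
    cv::ORB orb(500);
    std::vector<cv::KeyPoint> kpPrev, kpCurr;
    cv::Mat descPrev, descCurr;
    orb(prevGray, cv::Mat(), kpPrev, descPrev);
    orb(currGray, cv::Mat(), kpCurr, descCurr);

    cv::BFMatcher matcher(cv::NORM_HAMMING, true);   // cross-check matching
    std::vector<cv::DMatch> matches;
    matcher.match(descCurr, descPrev, matches);

    std::vector<cv::Point2f> ptsCurr, ptsPrev;
    for (size_t i = 0; i < matches.size(); ++i) {
        ptsCurr.push_back(kpCurr[matches[i].queryIdx].pt);
        ptsPrev.push_back(kpPrev[matches[i].trainIdx].pt);
    }

    // RANSAC rejects outlier correspondences that would otherwise corrupt H.
    cv::Mat H = cv::findHomography(ptsCurr, ptsPrev, cv::RANSAC, 3.0);

    cv::Mat warped;
    cv::warpPerspective(currGray, warped, H, prevGray.size());
    return warped;
}

A well-conditioned H estimated this way between consecutive frames should have its bottom row close to (0, 0, 1), unlike the matrix posted above.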

Related

Shadow removal by inverse Laplacian in C++

I have some questions about coding the algorithm in
"Removing shadows from images" by Finlayson.
I have taken the MATLAB code by Jose Alvarez for obtaining the illumination-invariant image and converted it to C++.
I have continued to follow the image-reconstruction steps outlined in Finlayson's algorithm, but got stuck at the Poisson-equation part, which comes after removing the shadow edges from the log-channel image.
How should I proceed after that? The discussion of this part is vague to me. I have read the following presentation slides. They say I must perform an inverse Laplacian operation on the image.
What should I do? The inverse Laplacian is not so common to code. I would appreciate any advice I can get.
Thanks
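For what it's worth, the "inverse Laplacian" step usually amounts to solving a Poisson equation, laplacian(I) = div(G), where G is the gradient field of the log channel with the shadow edges zeroed out. Below is a minimal sketch of how that reintegration could look; the function name poissonReconstruct and the plain relaxation solver are only illustrative and are not Finlayson's original code (a real implementation would use an FFT/DST or multigrid solver and proper boundary handling):

#include <opencv2/core/core.hpp>

// Reintegrate a log-channel image from a modified gradient field by solving
// the Poisson equation laplacian(I) = div(G) with simple relaxation sweeps.
// gx, gy: forward-difference gradients (CV_32F) with the shadow edges set to zero.
// init:   an initial guess, e.g. the original log image (CV_32F).
cv::Mat poissonReconstruct(const cv::Mat& gx, const cv::Mat& gy,
                           const cv::Mat& init, int iterations = 2000)
{
    // Divergence of the gradient field: d(gx)/dx + d(gy)/dy (backward differences).
    cv::Mat div = cv::Mat::zeros(init.size(), CV_32F);
    for (int y = 1; y < init.rows - 1; ++y)
        for (int x = 1; x < init.cols - 1; ++x)
            div.at<float>(y, x) = gx.at<float>(y, x) - gx.at<float>(y, x - 1)
                                + gy.at<float>(y, x) - gy.at<float>(y - 1, x);

    cv::Mat I = init.clone();
    for (int it = 0; it < iterations; ++it)
        for (int y = 1; y < I.rows - 1; ++y)
            for (int x = 1; x < I.cols - 1; ++x)
                I.at<float>(y, x) = 0.25f * (I.at<float>(y, x - 1) + I.at<float>(y, x + 1)
                                           + I.at<float>(y - 1, x) + I.at<float>(y + 1, x)
                                           - div.at<float>(y, x));
    return I;
}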

OpenCV stitching planar images

I'm writing a program that stitches aerial images in real time from a video taken by a drone. In fact, what I'm doing is:
get two consecutive frames
find features between these two frames
calculate the homography
calculate the image offset and stitch the frames together
The problem is the 3rd step, and here is why: both my code and OpenCV's stitching function compute the homography assuming that I have a rotation about some axis (because it's a panorama). Actually, the drone is acting like a "scanner", so instead of a rotation I have a translation of the camera. This influences my homography, and the stitching step then creates artifacts. Is there a way to model the camera translation instead of a rotation (something like an orthographic assumption)?
If it helps, I have the intrinsic camera parameters calculated with the calibration tutorial.
PS: I'm programming in C++.
EDIT
I've found this library written by NASA: http://ti.arc.nasa.gov/tech/asr/intelligent-robotics/nasa-vision-workbench/
Could it help? Can I use it together with OpenCV?
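If the scene is roughly planar and the camera mostly translates, one option is to fit a restricted motion model (translation + rotation + scale) instead of a full 8-DOF homography; the perspective terms are what usually produce the artifacts. A minimal sketch, assuming you already have the matched feature points from step 2 (the helper name stitchOffset is made up):

#include <opencv2/core/core.hpp>
#include <opencv2/video/tracking.hpp>   // estimateRigidTransform
#include <opencv2/imgproc/imgproc.hpp>

// Estimate a 4-DOF similarity transform between matched points and warp the
// second frame into the first frame's coordinates.
cv::Mat stitchOffset(const std::vector<cv::Point2f>& points1,
                     const std::vector<cv::Point2f>& points2,
                     const cv::Mat& frame2, cv::Size canvasSize)
{
    // fullAffine = false -> translation + rotation + uniform scale, no perspective.
    // Returns an empty matrix if the estimation fails, otherwise a 2x3 matrix.
    cv::Mat A = cv::estimateRigidTransform(points2, points1, false);

    cv::Mat warped;
    cv::warpAffine(frame2, warped, A, canvasSize);
    return warped;
}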

Projection matrix from homography

I'm working on stereo vision with the stereoRectifyUncalibrated() method under OpenCV 3.0.
I calibrate my system with the following steps:
Detect and match SURF feature points between the images from the 2 cameras
Apply findFundamentalMat() with the matching pairs
Get the rectifying homographies with stereoRectifyUncalibrated().
For each camera, I compute a rotation matrix as follows:
R1 = cameraMatrix[0].inv()*H1*cameraMatrix[0];
To compute 3D points, I need the projection matrices, but I don't know how I can estimate the translation vector.
I tried decomposeHomographyMat() and this solution https://stackoverflow.com/a/10781165/3653104, but the rotation matrix is not the same as what I get with R1.
When I check the rectified images with R1/R2 (using initUndistortRectifyMap() followed by remap()), the result seems correct (I checked with epipolar lines).
I am a little lost with my weak knowledge of vision, so if somebody could explain it to me, thank you :)
The code in the link that you have provided (https://stackoverflow.com/a/10781165/3653104) computes not the rotation but the full 3x4 pose of the camera.
The last column of the pose is your translation vector.
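A minimal sketch of that split, assuming the 3x4 pose from the linked code is stored in a CV_64F cv::Mat (the helper name splitPose is only illustrative):

#include <opencv2/core/core.hpp>

// Given a 3x4 camera pose P = [R | t] as computed in the linked answer,
// the rotation is the left 3x3 block and the translation is the last column.
// The projection matrix used for triangulation is then K * [R | t].
void splitPose(const cv::Mat& pose,          // 3x4, CV_64F
               const cv::Mat& cameraMatrix,  // 3x3 intrinsics K
               cv::Mat& R, cv::Mat& t, cv::Mat& P)
{
    R = pose(cv::Rect(0, 0, 3, 3)).clone();  // 3x3 rotation
    t = pose.col(3).clone();                 // 3x1 translation
    P = cameraMatrix * pose;                 // 3x4 projection matrix
}

Note that this rotation describes the camera pose recovered from the homography, which is not the same transformation as the rectifying rotation R1 computed above, so the two are not expected to match.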

3D reconstruction from 2 images with baseline and single camera calibration

My semester project is to calibrate stereo cameras with a large baseline (~2 m).
So my approach is to work without an exactly defined calibration pattern like a chessboard, because it would have to be huge and hard to handle.
My problem is similar to this: 3d reconstruction from 2 images without info about the camera
Program so far (a rough sketch of the pipeline is shown after this list):
Corner detection in the left image: goodFeaturesToTrack
Refine the corners: cornerSubPix
Find the corner locations in the right image: calcOpticalFlowPyrLK
Calculate the fundamental matrix F: findFundamentalMat
Calculate the rectification homographies H1, H2: stereoRectifyUncalibrated
Rectify the images: warpPerspective
Calculate the disparity map: SGBM
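A rough sketch of this pipeline in the OpenCV 3.x C++ API (the corner count, SGBM settings and other parameter values are just placeholders; left and right are assumed to be 8-bit grayscale frames):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <opencv2/calib3d.hpp>

void uncalibratedStereo(const cv::Mat& left, const cv::Mat& right, cv::Mat& disparity)
{
    // 1-2) Corners in the left image, refined to sub-pixel accuracy.
    std::vector<cv::Point2f> ptsL;
    cv::goodFeaturesToTrack(left, ptsL, 500, 0.01, 10);
    cv::cornerSubPix(left, ptsL, cv::Size(11, 11), cv::Size(-1, -1),
                     cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));

    // 3) Track the corners into the right image.
    std::vector<cv::Point2f> ptsR;
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(left, right, ptsL, ptsR, status, err);

    std::vector<cv::Point2f> goodL, goodR;          // keep only the tracked pairs
    for (size_t i = 0; i < status.size(); ++i)
        if (status[i]) { goodL.push_back(ptsL[i]); goodR.push_back(ptsR[i]); }

    // 4-5) Fundamental matrix and rectifying homographies.
    cv::Mat F = cv::findFundamentalMat(goodL, goodR, cv::FM_RANSAC, 3.0, 0.99);
    cv::Mat H1, H2;
    cv::stereoRectifyUncalibrated(goodL, goodR, F, left.size(), H1, H2);

    // 6) Rectify both images.
    cv::Mat rectL, rectR;
    cv::warpPerspective(left, rectL, H1, left.size());
    cv::warpPerspective(right, rectR, H2, right.size());

    // 7) Disparity map; SGBM outputs CV_16S fixed-point values (disparity * 16).
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 128, 5);
    sgbm->compute(rectL, rectR, disparity);
}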
So far so good; it works passably, but the rectified images are "jumping" in perspective if I change the number of corners.
I don't know whether this comes from imprecision, from mistakes I made, or whether it simply can't be computed well without known camera parameters or lens-distortion compensation (but it also happens on the Tsukuba pictures).
Suggestions are welcome :)
But that is not my main problem; now I want to reconstruct the 3D points.
However, reprojectImageTo3D needs the Q matrix, which I don't have so far. So my question is: how do I calculate it? I have the baseline, the distance between the two cameras. My feeling says that if I convert the disparity map into a 3D point cloud, the only thing I'm missing is the scale, right? So if I plug in the baseline I get the 3D reconstruction, right? If so, how?
I'm also planning to compensate for lens distortion as a first step for each camera separately, with a chessboard (small and close to one camera at a time, so I don't have to stand 10-15 m away with a big pattern in the overlapping area of both), so if it helps I could also use the camera parameters.
Is there any documentation besides http://docs.opencv.org where I can see and understand what the Q matrix is and how it is calculated? Or can I look at the source code (probably hard for me to understand ^^)? If I press F2 in Qt I only see the function declaration with the parameter types. (Sorry, I'm really new to all of this.)
(Images: left - the input with the found corners; top - the H1/H2 rectified images, which look good with this corner count ^^; SGBM - the disparity map.)
So I found out what the Q matrix contains, here:
Using OpenCV to generate 3d points (assuming frontal parallel configuration)
These parameters are given by the single-camera calibration:
c_x, c_y, f
and the baseline is what I have measured:
T_x
So this works for now; only the units are not clear to me. I have used the values from the single-camera calibration, which are in pixels, set the baseline in meters, and divided the disparity map by 16, but the scale still doesn't seem right.
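For reference, the Q matrix for the frontal-parallel case can be assembled directly from those values. A sketch (c_x, c_y and f must be in pixels; T_x in the world unit you want, e.g. meters, so the reprojected Z comes out in that unit; remember that SGBM disparities are stored as 16*d in CV_16S):

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// Build the reprojection matrix Q for a frontal-parallel stereo setup from the
// single-camera intrinsics (cx, cy, f in pixels) and the measured baseline Tx.
cv::Mat buildQ(double cx, double cy, double f, double Tx)
{
    cv::Mat Q = (cv::Mat_<double>(4, 4) <<
        1, 0,  0,        -cx,
        0, 1,  0,        -cy,
        0, 0,  0,          f,
        0, 0, -1.0 / Tx,    0);  // last entry is (cx - cx')/Tx, zero when both principal points coincide
    return Q;
}

// Usage sketch: convert the fixed-point SGBM disparity to float pixels first.
// cv::Mat disp32;   disparity.convertTo(disp32, CV_32F, 1.0 / 16.0);
// cv::Mat points3d; cv::reprojectImageTo3D(disp32, points3d, buildQ(cx, cy, f, Tx), true);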
By the way, the disparity map above was wrong ^^ and now it looks better. You also have to apply an anti-shearing transform, because stereoRectifyUncalibrated shears your image (not documented?).
This is described in section "7 Shearing Transform" of this paper by Charles Loop and Zhengyou Zhang:
http://research.microsoft.com/en-us/um/people/Zhang/Papers/TR99-21.pdf
Result:
http://i.stack.imgur.com/UkuJi.jpg

3d to 2d image transformation - PointCloud to OpenCV Image - C++

I'm collecting a 3D image of a hand from my Kinect, and I want to generate a 2D image using only the X and Y values so I can do image processing with OpenCV. The size of the 3D matrix is variable and depends on the Kinect output, and the X and Y values are not at a proper scale for generating a 2D image. My 3D points and my 3D image are here: http://postimg.org/image/g0hm3y06n/
I really don't know how to generate my 2D image for the image processing.
Can someone help me, or does anyone have a good example I can use to create the image and do the proper scaling for this problem? I want the HAND CONTOURS as output.
I think you should apply Delaunay triangulation to the 2D coordinates of the point cloud (depth ignored), then remove triangles with overly long edges. You can estimate the length threshold by counting the points in some area and taking the square root of that value. Once you have the triangulation, you can draw filled triangles and find the contours.
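A minimal sketch of that idea with OpenCV's cv::Subdiv2D; the point list is assumed to be already scaled into pixel coordinates inside imgSize, and the helper name handContours and the edge threshold maxEdge are placeholders:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cmath>
#include <vector>

static double edgeLength(const cv::Point& a, const cv::Point& b)
{
    return std::sqrt(double((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y)));
}

// Delaunay-triangulate the 2D projections of the cloud points, keep only
// triangles with short edges, rasterize them into a mask and extract contours.
std::vector<std::vector<cv::Point> > handContours(const std::vector<cv::Point2f>& pts,
                                                  cv::Size imgSize, float maxEdge)
{
    cv::Subdiv2D subdiv(cv::Rect(0, 0, imgSize.width, imgSize.height));
    for (size_t i = 0; i < pts.size(); ++i)
        subdiv.insert(pts[i]);                       // points outside the rect would throw

    std::vector<cv::Vec6f> triangles;
    subdiv.getTriangleList(triangles);

    cv::Mat mask = cv::Mat::zeros(imgSize, CV_8U);
    for (size_t i = 0; i < triangles.size(); ++i) {
        cv::Point p[3] = { cv::Point(cvRound(triangles[i][0]), cvRound(triangles[i][1])),
                           cv::Point(cvRound(triangles[i][2]), cvRound(triangles[i][3])),
                           cv::Point(cvRound(triangles[i][4]), cvRound(triangles[i][5])) };
        // The long-edge filter also discards the huge triangles that touch
        // Subdiv2D's virtual outer vertices.
        if (edgeLength(p[0], p[1]) < maxEdge &&
            edgeLength(p[1], p[2]) < maxEdge &&
            edgeLength(p[2], p[0]) < maxEdge)
            cv::fillConvexPoly(mask, p, 3, cv::Scalar(255));
    }

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return contours;
}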
I think what you are looking for is the OKPCL package. Also, make sure you check out this PCL post about the topic. There is also an OpenCV-PCL bridge class, but apparently the website is down.
And lastly, there has been official word that OpenCV and PCL are joining forces on a development platform that integrates GPU computing with 2D/3D perception and processing.
HTH
You could use PCL's RangeImage, or rather its RangeImagePlanar class, with its createFromPointCloud method (I think you do have a point cloud, right? Or what do you mean by a 3D image?).
Then you can create an OpenCV Mat using the getImagePoint functions.
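A rough sketch of that route; the 640x480 resolution and the 525 px focal length / (320, 240) principal point are typical Kinect defaults rather than values from this question, and instead of getImagePoint it simply builds a binary mask from the pixels that received a valid range:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/range_image/range_image_planar.h>
#include <opencv2/core/core.hpp>
#include <Eigen/Geometry>

// Project a Kinect point cloud into a planar range image and copy the valid
// pixels into an OpenCV mask that can be fed to findContours.
cv::Mat cloudToMask(const pcl::PointCloud<pcl::PointXYZ>& cloud)
{
    pcl::RangeImagePlanar rangeImage;
    rangeImage.createFromPointCloudWithFixedSize(cloud, 640, 480,
                                                 320.0f, 240.0f,   // principal point
                                                 525.0f, 525.0f,   // focal lengths
                                                 Eigen::Affine3f::Identity(),
                                                 pcl::RangeImage::CAMERA_FRAME);

    cv::Mat mask(480, 640, CV_8U, cv::Scalar(0));
    for (int y = 0; y < 480; ++y)
        for (int x = 0; x < 640; ++x)
            if (rangeImage.isValid(x, y))            // pixel got a finite range reading
                mask.at<uchar>(y, x) = 255;
    return mask;
}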