I am using OpenCV and C++. I have successfully obtained the transformation matrix between images A and B based on 3 common points in the images. Now I want to apply this transformation matrix to the whole image. I was hoping that warpAffine could do the job, but it gives me this error: http://i.imgur.com/T7Xl0cw.jpg. However, I used only the part of the AffineTransform code that computes the warped image, because I had already found the transformation matrix using another method. Can anybody tell me if this is the correct way to transform the whole image when I already have a transformation matrix? Here is the piece of code: http://pastebin.com/HFYSneG2
If you already have the transformation matrix, then cv::warpAffine is the right way to go. Your error message seems to be about the type of the transformation matrix and/or its size, which should be 2x3 float or double precision.
The matrix of the common points found in both images needed to be transposed, and then warpAffine could be used.
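For reference, here is a minimal sketch of that workflow, going from three point correspondences to the warped image; the file names and coordinates below are placeholders, and getAffineTransform returns exactly the 2x3 double-precision matrix that warpAffine expects:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Load the two images (paths are placeholders).
    cv::Mat A = cv::imread("A.jpg");
    cv::Mat B = cv::imread("B.jpg");

    // The three corresponding points found in A and B (example values).
    std::vector<cv::Point2f> ptsA = { {50.f, 50.f}, {200.f, 60.f}, {80.f, 220.f} };
    std::vector<cv::Point2f> ptsB = { {70.f, 40.f}, {220.f, 80.f}, {90.f, 240.f} };

    // getAffineTransform returns a 2x3 CV_64F matrix, the type/size warpAffine expects.
    cv::Mat M = cv::getAffineTransform(ptsA, ptsB);

    // Warp the whole image A into the coordinate frame of B.
    cv::Mat warped;
    cv::warpAffine(A, warped, M, B.size());

    cv::imwrite("warped.jpg", warped);
    return 0;
}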
I'm trying to implement a medical image (MRI .nii files) processing tool in C++ for educational purposes.
Specifically, I want to apply a version of the FFT to those images. I have already done it for 2D images, and I was wondering if the same approach is possible for the 4D case:
Transform the image into a matrix
Apply the FFT to the 4D matrix and do the computation on the transformed matrix
Inverse FFT and print out the modified image
I've found this, but I think having a tensor for each cell is not efficient for my specific problem.
To recap: is there a way, given a vtkImageReader, to retrieve a 4D tensor?
EDIT: I was wondering whether it is possible to cut the "cubic" image along one dimension to retrieve a 2D matrix, and to do this for every "dx" to get a vector of images. It's not a very elegant solution, but if that's the only way, is it possible to split 3D images in VTK?
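Regarding the EDIT: a volume can be split into 2D slices in VTK with vtkExtractVOI. A rough sketch, assuming the .nii file is read with vtkNIFTIImageReader (the file name is a placeholder) and the volume is cut along z, one vtkImageData per slice:

#include <vtkSmartPointer.h>
#include <vtkNIFTIImageReader.h>
#include <vtkExtractVOI.h>
#include <vtkImageData.h>
#include <vector>

int main()
{
    // Read the .nii volume (file name is a placeholder).
    auto reader = vtkSmartPointer<vtkNIFTIImageReader>::New();
    reader->SetFileName("volume.nii");
    reader->Update();

    int dims[3];
    reader->GetOutput()->GetDimensions(dims);

    // Extract one z slice at a time and keep a deep copy of each.
    std::vector<vtkSmartPointer<vtkImageData> > slices;
    for (int z = 0; z < dims[2]; ++z)
    {
        auto voi = vtkSmartPointer<vtkExtractVOI>::New();
        voi->SetInputConnection(reader->GetOutputPort());
        voi->SetVOI(0, dims[0] - 1, 0, dims[1] - 1, z, z);  // single z slice
        voi->Update();

        auto slice = vtkSmartPointer<vtkImageData>::New();
        slice->DeepCopy(voi->GetOutput());
        slices.push_back(slice);
    }

    // 'slices' can now be fed one by one to the existing 2D FFT code.
    return 0;
}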
I am currently facing a problem. To show you what my program does and should do, here is a copy/paste of the beginning of a previous post I made.
This program relies on the classic "structure from motion" method.
The basic idea is to take a pair of images, detect their keypoints and compute the descriptors of those keypoints. Then the keypoints are matched, with a certain number of tests to ensure the result is good. That part works perfectly.
Once this is done, the following computations are performed: fundamental matrix, essential matrix, SVD decomposition of the essential matrix, computation of the camera projection matrices and, finally, triangulation.
The result for a pair of images is a set of 3D coordinates, giving us points to be drawn in a 3D viewer. This works perfectly for a pair.
However, I have to perform a step manually, and this is not acceptable if I want my program to efficiently work with more than two images.
Indeed, I compute my projection matrices according to the classic method, described in the paragraph "Determining R and t from E": https://en.wikipedia.org/wiki/Essential_matrix
I then have 4 possible solutions for my projection matrix.
I think I have understood the geometrical point of view of the problem, portrayed in this extract from Hartley and Zisserman (chapters 9.6.3 and 9.7.1): http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook2/HZepipolar.pdf
Nonetheless, my question is: given the four possible projection matrices and the 3D points computed by the OpenCV function triangulatePoints() (for each projection matrix), how can I select the "true" projection matrix automatically? (without having to draw my points 4 times in my 3D viewer to check whether they are consistent)
Thanks for reading.
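For the question above, one common automatic criterion, following the geometric interpretation in the cited Hartley and Zisserman chapters, is to keep the candidate for which the triangulated points lie in front of both cameras. A simplified positive-depth sketch, assuming P1 and P2 are 3x4 CV_64F projection matrices and points4D is the 4xN output of cv::triangulatePoints() for that candidate:

#include <opencv2/opencv.hpp>

// Count how many triangulated points have positive depth in both cameras.
static int countPointsInFront(const cv::Mat& P1, const cv::Mat& P2,
                              const cv::Mat& points4D)
{
    cv::Mat pts;
    points4D.convertTo(pts, CV_64F);

    int inFront = 0;
    for (int i = 0; i < pts.cols; ++i)
    {
        cv::Mat X = pts.col(i) / pts.at<double>(3, i);   // normalize w to 1

        // z coordinate of the point seen from each camera (third row of P*X).
        double z1 = cv::Mat(P1 * X).at<double>(2);
        double z2 = cv::Mat(P2 * X).at<double>(2);
        if (z1 > 0 && z2 > 0)
            ++inFront;
    }
    return inFront;
}

// Run this for each of the four candidates and keep the one with the largest
// count (ideally all points end up in front of both cameras).

Note that in OpenCV 3.x, cv::recoverPose() performs this selection directly from the essential matrix.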
I'm working on stereo-vision with the stereoRectifyUncalibrated() method under OpenCV 3.0.
I calibrate my system with the following steps:
Detect and match SURF feature points between images from 2 cameras
Apply findFundamentalMat() with the matching pairs
Get the rectifying homographies with stereoRectifyUncalibrated() (a rough sketch of these steps is given below).
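A rough C++ sketch of these three steps (OpenCV 3.x with the xfeatures2d contrib module; image names and thresholds are placeholders):

#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <vector>

int main()
{
    cv::Mat img1 = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    // 1. Detect and match SURF features.
    auto surf = cv::xfeatures2d::SURF::create(400);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    surf->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    surf->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    std::vector<cv::Point2f> pts1, pts2;
    for (const auto& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }

    // 2. Fundamental matrix from the matched pairs.
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC);

    // 3. Rectifying homographies.
    cv::Mat H1, H2;
    cv::stereoRectifyUncalibrated(pts1, pts2, F, img1.size(), H1, H2);

    return 0;
}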
For each camera, I compute a rotation matrix as follows:
R1 = cameraMatrix[0].inv()*H1*cameraMatrix[0];
To compute 3D points, I need to get the projection matrices, but I don't know how I can estimate the translation vector.
I tried decomposeHomographyMat() and this solution https://stackoverflow.com/a/10781165/3653104, but the rotation matrix is not the same as what I get with R1.
When I check the rectified images with R1/R2 (using initUndistortRectifyMap() followed by remap()), the result seems correct (I checked with epipolar lines).
I am a little lost with my weak knowledge of vision, so I would be grateful if somebody could explain this to me. Thank you :)
The code in the link that you have provided (https://stackoverflow.com/a/10781165/3653104) computes not the rotation but the 3x4 pose of the camera.
The last column of the pose is your translation vector.
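For illustration, a minimal sketch of splitting such a 3x4 pose matrix [R|t] into its rotation and translation parts:

#include <opencv2/opencv.hpp>

// Split a 3x4 pose matrix [R|t] into rotation and translation.
void splitPose(const cv::Mat& P, cv::Mat& R, cv::Mat& t)
{
    CV_Assert(P.rows == 3 && P.cols == 4);
    R = P(cv::Rect(0, 0, 3, 3)).clone();  // left 3x3 block: rotation
    t = P(cv::Rect(3, 0, 1, 3)).clone();  // last column: translation vector
}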
I use OpenCV with C++.
I have a std::vector<std::pair<cv::Point2d, cv::Point2d> > which represents a warp.
For each point of image A, I associate a point of image B.
I don't know the association for every point of image A and image B; the known points of image A form a sparse grid. These data also probably have some epsilon of error.
So I would like to interpolate.
In OpenCV I haven't found a function that simply does such an interpolation.
How can I do this?
I found the function cv::warpPoint, but I don't know the cv::Mat camera intrinsic parameters nor the cv::Mat camera rotation matrix.
How can I compute these matrices from my data?
I think the best way is a piecewise affine warper:
https://code.google.com/p/imgwarp-opencv/
I have my own fast implementation, but the comments are in Russian; you can find it here.
So there are 2 questions:
How to warp the points from one image to the other.
Try cv::remap to do that once you have a dense (interpolated) description. See http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/remap/remap.html for an example.
How to compute non-given point pairs by interpolation.
I don't have a solution for this, but some ideas:
Don't use point pairs but displacement vectors; displacement might be easier to interpolate.
Use the inverse formulation to get a dense description of the second image (otherwise there might be pixels that aren't touched at all).
But I guess the "real" method to do this would be some kind of spline interpolation.
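To illustrate the remap idea above: once a dense inverse map has been interpolated from the sparse pairs (that interpolation step is still up to you), applying it is a single call. A minimal sketch, assuming mapX and mapY are CV_32FC1 images that give, for every output pixel, the position to sample in the source image:

#include <opencv2/opencv.hpp>

// Apply a dense inverse mapping: each output pixel (x, y) takes its value
// from src at (mapX(y, x), mapY(y, x)), with bilinear interpolation.
cv::Mat warpWithDenseMap(const cv::Mat& src,
                         const cv::Mat& mapX,   // CV_32FC1, size of the output
                         const cv::Mat& mapY)   // CV_32FC1, size of the output
{
    cv::Mat dst;
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR,
              cv::BORDER_CONSTANT, cv::Scalar());
    return dst;
}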
I have a problem with the results I'm getting from the function cvFindHomography().
The results contain negative coordinates; I will now explain what I'm doing.
I'm working on video stabilization using an optical flow method. I have estimated the locations of the features in the first and second frames. Now my aim is to warp the images in order to stabilize them. Before this step I need to calculate the homography matrix between the frames, for which I used the function mentioned above, but the results don't seem realistic, because they contain negative values (and these values can change to even weirder results):
0.482982 53.5034 -0.100254
-0.000865877 63.6554 -0.000213824
-0.0901095 0.301558 1
After obtaining these results, I run into a problem when applying the image warping with cvWarpPerspective(). The error says that there is a problem with the use of the matrices: incorrect transformation from "cvarrTomat"?
So where is the problem? Can you give me another suggestion, if one is available?
Note: if you could help me implement the warping in C++ it would be great.
Thank you
A poor homography estimation can generate warping errors inside cvWarpPerspective().
The homography values you have posted show that you have a full projective transformation that moves points at infinity to points in the 2D Euclidean plane, and it could be wrong.
In video stabilization, to compute a good homography model, other features are usually used, such as Harris corners, Hessian-affine regions, or SIFT/SURF, combined with a robust model estimator such as RANSAC or LMEDS.
Check out this link for a matlab example on video stabilization...
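For the C++ part of the question, here is a minimal sketch of the robust estimation suggested above, assuming pts1 and pts2 are the tracked feature locations from the optical flow step (frame 1 and frame 2 respectively):

#include <opencv2/opencv.hpp>
#include <vector>

// Estimate a homography with RANSAC so outlier tracks do not distort the
// model, then warp frame 2 into the coordinate frame of frame 1.
cv::Mat stabilizeFrame(const cv::Mat& frame2,
                       const std::vector<cv::Point2f>& pts1,
                       const std::vector<cv::Point2f>& pts2)
{
    cv::Mat H = cv::findHomography(pts2, pts1, cv::RANSAC, 3.0);

    cv::Mat stabilized;
    cv::warpPerspective(frame2, stabilized, H, frame2.size());
    return stabilized;
}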