vtk NIFTI image to tensor/3d matrix representation - c++

I'm trying to implement a medical image (MRI .nii files) processing tool in C++ for educational purposes.
Specifically, I want to apply a version of the FFT to those images. I already did it for 2D images, and I was wondering whether the same approach is possible for the 4D case:
Transform the image into a matrix
Apply the FFT to the 4D matrix and do the computation on the transformed matrix
Apply the inverse FFT and write out the modified image
I've found this, but I think having a tensor for each cell is not efficient for my specific problem.
To recap: given a vtkImageReader, is there a way to retrieve a 4D tensor?
EDIT: I was wondering whether it is possible to cut the "cubic" image along one dimension to retrieve a 2D matrix, and do this for every "dx" to get a vector of images. It's not a very elegant solution, but if that's the only way, is it possible to split 3D images in VTK?
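For context, here is a minimal sketch of how one might pull the voxel values of the whole volume into a flat 3D array with VTK. It assumes vtkNIFTIImageReader and a single-component scalar volume; the helper name and the float conversion are my own choices, not taken from the question.

```cpp
// Sketch (assumptions: vtkNIFTIImageReader, one scalar component per voxel):
// read a .nii file and copy its voxels into a flat array, x fastest, then y, then z.
#include <vtkSmartPointer.h>
#include <vtkNIFTIImageReader.h>
#include <vtkImageData.h>
#include <vector>

std::vector<float> loadVolume(const char* filename, int dims[3])
{
    auto reader = vtkSmartPointer<vtkNIFTIImageReader>::New();
    reader->SetFileName(filename);
    reader->Update();

    vtkImageData* image = reader->GetOutput();
    image->GetDimensions(dims);

    std::vector<float> volume(static_cast<size_t>(dims[0]) * dims[1] * dims[2]);
    for (int z = 0; z < dims[2]; ++z)
        for (int y = 0; y < dims[1]; ++y)
            for (int x = 0; x < dims[0]; ++x)
                // GetScalarComponentAsFloat works regardless of the stored scalar type.
                volume[(static_cast<size_t>(z) * dims[1] + y) * dims[0] + x] =
                    image->GetScalarComponentAsFloat(x, y, z, 0);
    return volume;
}
```

With this layout, a 2D slice at a fixed z is just a contiguous dims[0] x dims[1] block of the array, which also covers the EDIT about splitting the volume into a vector of 2D images.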

Related

Matching 3d models in a 2d scene using pcl or opencv

I have a 3d model obtained with a 3d scanner and I want to match it in a 2d scene (simple 2d video which contains the model).
I know PCL deals only with point clouds and OpenCV with 2D images; is it possible, though, to use either of them to extract the keypoints from the 3D model and then use them to find the model in a 2D image?
It depends on the kind of object. If you are looking for simple shapes such as boxes, you can detect corners in 3D and in 2D and match them together.
For more complex objects, you may have to mesh your point cloud to find robust interest points. For example, this paper https://hal.inria.fr/hal-00682775/file/squelette-rr.pdf explains a method to extract robust points from a shape or a surface, but I don't know whether the same keypoints would be extracted in 2D and 3D.
Find all keypoints and project them onto the ground plane to get an equivalent 2D image. You can also use PCL's 2D projection techniques. Possible duplicate of Generate image from an unorganized Point Cloud in PCL
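If it helps, here is a minimal sketch of PCL's plane projection with pcl::ProjectInliers, flattening a cloud onto the ground plane z = 0. The plane coefficients and the helper name are assumptions to adapt to your scene.

```cpp
// Sketch: project every point of the cloud onto the plane z = 0 using
// pcl::ProjectInliers (adapt the plane coefficients to your own ground plane).
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/filters/project_inliers.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
projectToGroundPlane(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
    // Plane ax + by + cz + d = 0; (0, 0, 1, 0) is the plane z = 0.
    pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients());
    coefficients->values = {0.0f, 0.0f, 1.0f, 0.0f};

    pcl::PointCloud<pcl::PointXYZ>::Ptr projected(new pcl::PointCloud<pcl::PointXYZ>());
    pcl::ProjectInliers<pcl::PointXYZ> proj;
    proj.setModelType(pcl::SACMODEL_PLANE);
    proj.setInputCloud(cloud);
    proj.setModelCoefficients(coefficients);
    proj.filter(*projected);   // every point now has z == 0
    return projected;
}
```

From the projected cloud you can then rasterize the (x, y) coordinates into an image if you need one for the 2D matching step.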

OpenCV Projection Matrix Choice

I am currently facing a problem. To show what my program does and should do, here is a copy/paste of the beginning of a previous post I made.
This program relies on the classic "structure from motion" method.
The basic idea is to take a pair of images, detect their keypoints, and compute the descriptors of those keypoints. Then the keypoint matching is done, with a certain number of tests to ensure the result is good. That part works perfectly.
Once this is done, the following computations are performed: fundamental matrix, essential matrix, SVD decomposition of the essential matrix, camera projection matrices, and finally triangulation.
The result for a pair of images is a set of 3D coordinates, giving us points to be drawn in a 3D viewer. This works perfectly, for a pair.
However, I have to perform a step manually, and this is not acceptable if I want my program to work efficiently with more than two images.
Indeed, I compute my projection matrices according to the classic method, described in the paragraph "Determining R and t from E": https://en.wikipedia.org/wiki/Essential_matrix
I then have 4 possible solutions for my projection matrix.
I think I have understood the geometrical point of view of the problem, portrayed in this extract from Hartley and Zisserman (sections 9.6.3 and 9.7.1): http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook2/HZepipolar.pdf
Nonetheless, my question is: given the four candidate projection matrices and the 3D points computed by the OpenCV function triangulatePoints() (for each projection matrix), how can I select the "true" projection matrix automatically, without having to draw my points four times in my 3D viewer to see whether they are consistent?
Thanks for reading.
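For anyone landing here: the disambiguation described in the Hartley and Zisserman sections linked above is usually done with a cheirality (positive depth) test: triangulate with each candidate and keep the one for which the points end up in front of both cameras. A minimal sketch, assuming the first camera is P1 = [I | 0], P2 is one candidate, and points4D is the 4xN CV_64F output of cv::triangulatePoints (the helper name is mine):

```cpp
// Hypothetical helper: count how many triangulated points lie in front of both
// cameras. P1 and P2 are 3x4 CV_64F projection matrices, points4D is the 4xN
// CV_64F homogeneous output of cv::triangulatePoints.
#include <opencv2/core.hpp>

int countPointsInFront(const cv::Mat& P1, const cv::Mat& P2, const cv::Mat& points4D)
{
    int inFront = 0;
    for (int i = 0; i < points4D.cols; ++i)
    {
        cv::Mat X = points4D.col(i).clone();
        X = X / X.at<double>(3);             // de-homogenize so the last entry is 1

        cv::Mat x1 = P1 * X;                 // project into camera 1
        cv::Mat x2 = P2 * X;                 // project into camera 2

        // A positive third coordinate means positive depth, i.e. the point is
        // in front of that camera (assuming the canonical P = K [R | t] form).
        if (x1.at<double>(2) > 0.0 && x2.at<double>(2) > 0.0)
            ++inFront;
    }
    return inFront;
}
```

You would call this once per candidate and keep the candidate with the highest count.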

Interpolation warp

I use OpenCV with C++.
I have a std::vector<std::pair<cv::Point2d, cv::Point2d> > which represents a warp.
For each point of an image A, I associate a point of an image B.
I don't know all associations between points of image A and points of image B. The points of image A form a sparse matrix. These data also probably have a small (epsilon) error.
So I would like to interpolate.
In OpenCV I haven't found a function that simply does such an interpolation.
How can I do this?
I found the function cv::warpPoint, but I don't know the cv::Mat camera intrinsic parameters or the cv::Mat camera rotation matrix.
How can I compute these matrices from my data?
I think the best way is a piecewise affine warper:
https://code.google.com/p/imgwarp-opencv/
I have my own fast implementation, but the comments are in Russian; you can find it here.
So there are 2 questions:
How to warp the points from one image to the other.
Try cv::remap to do that, once you have a dense (interpolated) description. See http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/remap/remap.html for an example, and the sketch after this answer.
How to compute non-given point pairs by interpolation.
I don't have a solution for this, but some ideas:
Don't use point pairs but displacement vectors; displacements might be easier to interpolate.
Use the inverse formulation to get a dense description of the second image (otherwise there might be pixels that aren't touched at all).
But I guess the "real" method to do this would be some kind of spline interpolation.
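For the cv::remap part, here is a minimal sketch of applying a dense displacement field; the field (dx, dy) is assumed to come from whatever interpolation scheme you end up choosing, and the function name is made up.

```cpp
// Sketch: warp image B back into A's frame with cv::remap, given dense
// per-pixel displacement fields dx, dy (CV_32F, same size as A). The maps tell
// remap where in B to sample the pixel that lands at (x, y) in the output.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

cv::Mat applyDenseWarp(const cv::Mat& imageB, const cv::Mat& dx, const cv::Mat& dy)
{
    cv::Mat mapX(dx.size(), CV_32FC1);
    cv::Mat mapY(dy.size(), CV_32FC1);

    for (int y = 0; y < dx.rows; ++y)
        for (int x = 0; x < dx.cols; ++x)
        {
            mapX.at<float>(y, x) = x + dx.at<float>(y, x);
            mapY.at<float>(y, x) = y + dy.at<float>(y, x);
        }

    cv::Mat warped;
    cv::remap(imageB, warped, mapX, mapY, cv::INTER_LINEAR, cv::BORDER_CONSTANT);
    return warped;
}
```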

3d to 2d image transformation - PointCloud to OpenCV Image - C++

I'm collecting a 3D image of a hand from my Kinect, and I want to generate a 2D image using only the X and Y values to do image processing with OpenCV. The size of the 3D matrix is variable and depends on the output from the Kinect, and the X and Y values are not properly scaled to generate a 2D image. My 3D points and my 3D image are: http://postimg.org/image/g0hm3y06n/
I really don't know how I can generate my 2D image to perform my image processing.
Can someone help me or point me to a good example that I can use to create my image and do the proper scaling for this problem? I want the HAND CONTOURS as output.
I think you should apply Delaunay triangulation to the 2D coordinates of the point cloud (depth ignored), then remove triangles with overly long edges. You can estimate the length threshold by counting points per unit area and taking the square root of the value you get. Once you have the triangulation, you can draw filled triangles and find the contours, as sketched below.
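A rough sketch of that idea with OpenCV's cv::Subdiv2D; the function name, the pixel-scaled input points, and the maxEdge threshold are assumptions you would adapt.

```cpp
// Sketch: Delaunay-triangulate the (x, y) coordinates, drop triangles with
// overly long edges, rasterize the rest into a binary mask, then extract the
// contours. 'points' must already be scaled to pixel coordinates inside
// imageSize; 'maxEdge' is the edge-length threshold discussed above.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

std::vector<std::vector<cv::Point>>
contoursFromPoints(const std::vector<cv::Point2f>& points, cv::Size imageSize, float maxEdge)
{
    cv::Subdiv2D subdiv(cv::Rect(0, 0, imageSize.width, imageSize.height));
    for (const auto& p : points)
        subdiv.insert(p);                       // points must lie inside the rect

    std::vector<cv::Vec6f> triangles;
    subdiv.getTriangleList(triangles);

    auto dist = [](float x0, float y0, float x1, float y1) {
        return std::hypot(x0 - x1, y0 - y1);
    };

    cv::Rect2f bounds(0.f, 0.f, (float)imageSize.width, (float)imageSize.height);
    cv::Mat mask = cv::Mat::zeros(imageSize, CV_8UC1);

    for (const auto& t : triangles)
    {
        // Skip triangles touching Subdiv2D's virtual outer vertices.
        if (!bounds.contains({t[0], t[1]}) || !bounds.contains({t[2], t[3]}) ||
            !bounds.contains({t[4], t[5]}))
            continue;
        // Skip triangles with an overly long edge (gaps in the cloud).
        if (dist(t[0], t[1], t[2], t[3]) > maxEdge ||
            dist(t[2], t[3], t[4], t[5]) > maxEdge ||
            dist(t[4], t[5], t[0], t[1]) > maxEdge)
            continue;

        std::vector<cv::Point> tri = {
            cv::Point(cvRound(t[0]), cvRound(t[1])),
            cv::Point(cvRound(t[2]), cvRound(t[3])),
            cv::Point(cvRound(t[4]), cvRound(t[5]))
        };
        cv::fillConvexPoly(mask, tri, cv::Scalar(255));
    }

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return contours;
}
```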
I think what you are looking for is the OKPCL package. Also, make sure you check out this PCL post about the topic. There is also an OpenCV-PCL bridge class, but apparently the website is down.
And lastly, there has been official word that OpenCV and PCL are joining forces on a development platform that integrates GPU computing with 2D/3D perception and processing.
HTH
You could use PCL's RangeImage or RangeImagePlanar class with its createFromPointCloud method (I think you do have a point cloud, right? Or what do you mean by 3D image?).
Then you can create an OpenCV Mat using the getImagePoint functions.

Generalising found transformation matrix to a whole image

I am using OpenCV and C++. I have successfully obtained the transformation matrix between images A and B based on 3 common points in the images. Now I want to apply this transformation matrix to the whole image. I was hoping that warpAffine could do the job, but it gives me this error: http://i.imgur.com/T7Xl0cw.jpg. However, I used only the part of the affineTransform code that computes the warped image, because I had already found the transformation matrix using another method. Can anybody tell me whether this is the correct way to transform the whole image if I already have a transformation matrix? Here is the piece of code: http://pastebin.com/HFYSneG2
If you already have the transformation matrix, then cv::warpAffine is the right way to go. Your error message seems to be about the type of the transformation matrix and/or its size, which should be 2x3 float or double precision.
The matrix of the common points found in both images needed to be transposed, and then warpAffine could be used.
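For reference, a minimal sketch of the workflow the answer describes: cv::getAffineTransform already returns the expected 2x3 CV_64F matrix from three point correspondences, which can then be passed straight to cv::warpAffine. The point values below are placeholders.

```cpp
// Sketch: build a 2x3 affine matrix from three point correspondences and warp
// the whole image with it. getAffineTransform returns a 2x3 CV_64F matrix,
// which is exactly the shape and depth warpAffine expects.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

cv::Mat warpWholeImage(const cv::Mat& imageA)
{
    // Three corresponding points in image A and image B (placeholder values).
    cv::Point2f srcTri[3] = { {10.f, 10.f}, {200.f, 30.f}, {50.f, 180.f} };
    cv::Point2f dstTri[3] = { {15.f, 20.f}, {210.f, 40.f}, {60.f, 190.f} };

    cv::Mat M = cv::getAffineTransform(srcTri, dstTri);   // 2x3, CV_64F

    cv::Mat warped;
    cv::warpAffine(imageA, warped, M, imageA.size());
    return warped;
}
```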