Matching 3D models in a 2D scene using PCL or OpenCV - C++

I have a 3D model obtained with a 3D scanner, and I want to match it in a 2D scene (a simple 2D video which contains the model).
I know PCL deals only with point clouds and OpenCV with 2D images. Is it possible, though, to use either of them to extract the keypoints from the 3D model and then use those keypoints to find the model in a 2D image?

It depends on the kind of objects. If you are looking for simple shapes such as boxes, you can detect corners in 3D and in 2D and match them together.
For more complex objects, you may have to mesh your point cloud to find robust interest points. For example, this paper https://hal.inria.fr/hal-00682775/file/squelette-rr.pdf explains a method to extract robust points from a shape or a surface, but I don't know whether the same keypoints would be extracted in 2D and 3D.

Find all keypoints and project them onto the ground plane to get an equivalent 2D image (see the sketch below). You can also use PCL's 2D projection techniques. Possible duplicate of Generate image from an unorganized Point Cloud in PCL.
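A minimal sketch of that projection using PCL's ProjectInliers filter, assuming the ground plane is z = 0 (the plane coefficients and the helper name are illustrative, not from the question):

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/filters/project_inliers.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
projectToGround(pcl::PointCloud<pcl::PointXYZ>::ConstPtr cloud) {
    // Plane ax + by + cz + d = 0; here the z = 0 ground plane.
    pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
    coeffs->values = {0.0f, 0.0f, 1.0f, 0.0f};

    // Project every point onto the plane, flattening the cloud.
    pcl::ProjectInliers<pcl::PointXYZ> proj;
    proj.setModelType(pcl::SACMODEL_PLANE);
    proj.setInputCloud(cloud);
    proj.setModelCoefficients(coeffs);

    pcl::PointCloud<pcl::PointXYZ>::Ptr projected(new pcl::PointCloud<pcl::PointXYZ>);
    proj.filter(*projected);
    return projected; // x/y now give the 2D layout; scale to pixels as needed
}

From there you can rasterize the x/y coordinates into an image and run your 2D keypoint detector on that.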

Related

How to get the convex hull of a binary image using DIPlib in C++?

I have a stack of binary images of an open porous structure, and I want to get a binary mask which covers the whole volume of the structure (the structure itself and the voids contained in it). I think a good way to achieve my goal would be to calculate the convex hull of the image. This works fine in Python using skimage.morphology.convex_hull_image (see images).
But I need this functionality in C++, and I want to use the DIPlib library. Unfortunately, I'm struggling with the correct implementation since the documentation confuses me a bit.
Could you provide a minimal example which explains how to derive the convex hull of a binary object as an image?
Does the DIPlib implementation also handle 3D images?
You'd want to use the function dip::MakeRegionsConvex2D(). For example:
dip::Image img = dip::ImageRead("yIFuP.jpg");
dip::Image bin = img > 128; // assuming img is scalar
dip::MakeRegionsConvex2D(bin, bin);
This function is explicitly written for 2D images, and will not work for 3D images.
For a 3D image, I would get a list of the coordinates of all set pixels (use dip::Find), pass that into a quickhull algorithm implementation such as the one in CGAL, and then draw the resulting 3D polyhedron into the image. This last step might be the most challenging one (I don't know if CGAL has functionality to render a polyhedron into an image). The quick and dirty solution would be to iterate over all pixels and do an in/out test for each, setting the pixel if it's inside the polyhedron.
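A rough sketch of that 3D route, assuming CGAL is available; the DIPlib iteration and CGAL calls here are written from memory, so check the signatures against the current documentation:

#include "diplib.h"
#include "diplib/iterators.h"
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/convex_hull_3.h>
#include <CGAL/Side_of_triangle_mesh.h>
#include <vector>

using Kernel = CGAL::Exact_predicates_inexact_constructions_kernel;
using Point3 = Kernel::Point_3;
using Mesh = CGAL::Surface_mesh<Point3>;

void MakeVolumeConvex3D(dip::Image& bin) { // hypothetical helper, bin is a 3D binary image
    // 1. Collect the coordinates of all set voxels.
    std::vector<Point3> points;
    dip::ImageIterator<dip::bin> it(bin);
    do {
        if (*it) {
            auto const& c = it.Coordinates();
            points.emplace_back(c[0], c[1], c[2]);
        }
    } while (++it);

    // 2. Quickhull on those voxel coordinates.
    Mesh hull;
    CGAL::convex_hull_3(points.begin(), points.end(), hull);

    // 3. Quick and dirty: in/out test for every voxel.
    CGAL::Side_of_triangle_mesh<Mesh, Kernel> inside(hull);
    dip::ImageIterator<dip::bin> out(bin);
    do {
        auto const& c = out.Coordinates();
        if (inside(Point3(c[0], c[1], c[2])) != CGAL::ON_UNBOUNDED_SIDE) {
            *out = true; // voxel is inside or on the hull
        }
    } while (++out);
}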

3d to 2d image transformation - PointCloud to OpenCV Image - C++

I'm collecting a 3D image of a hand from my Kinect, and I want to generate a 2D image using only the X and Y values so I can do image processing with OpenCV. The size of the 3D matrix is variable, since it depends on the Kinect's output, and the X and Y values are not properly scaled for generating a 2D image. My 3D points and my 3D image are: http://postimg.org/image/g0hm3y06n/
I really don't know how I can generate my 2D image to perform my image processing.
Can someone help me, or does anyone have a good example that I can use to create my image and do the proper scaling for this problem? I want the HAND CONTOURS as output.
I think you should apply Delaunay triangulation to the 2D coordinates of the point cloud (depth ignored), then remove triangles with overly long edges. You can estimate the edge-length threshold by counting points per some area and taking the square root of the value you get. Once you have the triangulation, you can draw filled triangles and find contours.
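A sketch of that pipeline with OpenCV's cv::Subdiv2D, assuming the cloud's X/Y values have already been scaled into pixel coordinates (the function name and the maxEdge threshold are illustrative):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

cv::Mat triangulateAndMask(const std::vector<cv::Point2f>& pts,
                           cv::Size imgSize, double maxEdge) {
    cv::Rect bounds(0, 0, imgSize.width, imgSize.height);
    cv::Subdiv2D subdiv(bounds);
    subdiv.insert(pts); // all points must lie inside 'bounds'

    std::vector<cv::Vec6f> tris;
    subdiv.getTriangleList(tris);

    cv::Mat mask = cv::Mat::zeros(imgSize, CV_8UC1);
    auto edge = [](const cv::Point& a, const cv::Point& b) {
        return std::hypot(double(a.x - b.x), double(a.y - b.y));
    };
    for (auto const& t : tris) {
        cv::Point p0(cvRound(t[0]), cvRound(t[1]));
        cv::Point p1(cvRound(t[2]), cvRound(t[3]));
        cv::Point p2(cvRound(t[4]), cvRound(t[5]));
        // Skip Subdiv2D's virtual outer triangles and any overly long edges.
        if (!bounds.contains(p0) || !bounds.contains(p1) || !bounds.contains(p2))
            continue;
        if (edge(p0, p1) > maxEdge || edge(p1, p2) > maxEdge || edge(p2, p0) > maxEdge)
            continue;
        std::vector<cv::Point> tri{p0, p1, p2};
        cv::fillConvexPoly(mask, tri, cv::Scalar(255));
    }
    return mask; // pass this to cv::findContours to get the hand contour
}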
I think what you are looking for is the OKPCL package. Also, make sure you check out this PCL post about the topic. There is also an OpenCVPCL bridge class, but apparently the website is down.
And lastly, there has been official word that OpenCV and PCL are joining forces on a development platform that integrates GPU computing with 2D/3D perception and processing.
HTH
You could use PCL's RangeImage or RangeImagePlanar class with its createFromPointCloud method (I assume you do have a point cloud, right? Or what do you mean by 3D image?).
Then you can create an OpenCV Mat using the getImagePoint functions.
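A sketch along those lines, using the documented createFromPointCloud parameters; the angular resolution and the conversion into a cv::Mat of range values are illustrative choices:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/angles.h>
#include <pcl/range_image/range_image.h>
#include <opencv2/core.hpp>
#include <cmath>

cv::Mat rangeImageToMat(const pcl::PointCloud<pcl::PointXYZ>& cloud) {
    pcl::RangeImage rangeImage;
    rangeImage.createFromPointCloud(
        cloud,
        pcl::deg2rad(0.5f),                         // angular resolution
        pcl::deg2rad(360.0f), pcl::deg2rad(180.0f), // max angle width/height
        Eigen::Affine3f::Identity(),                // sensor pose
        pcl::RangeImage::CAMERA_FRAME, 0.0f, 0.0f, 1);

    // Copy the range channel into an OpenCV Mat for 2D processing.
    cv::Mat img(int(rangeImage.height), int(rangeImage.width), CV_32FC1, cv::Scalar(0));
    for (int y = 0; y < int(rangeImage.height); ++y)
        for (int x = 0; x < int(rangeImage.width); ++x) {
            float r = rangeImage.getPoint(x, y).range;
            if (std::isfinite(r)) img.at<float>(y, x) = r; // unobserved pixels stay 0
        }
    return img;
}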

How to do the correspondance 2D-3D points

I'm working with the OpenCV API on an augmented reality project using one camera. I have:
The 3D points of my 3D object (I get 4 points from MeshLab).
The 2D points which I want to follow (I have 4 points): these points are not the projection of the 3D points.
The intrinsic camera parameters.
Using these parameters, I get the extrinsic parameters (rotation and translation, via the cvFindExtrinsicParam function), which I have used to render my model and set the modelView matrix.
My problem is that the 3D model is not shown in a particular position: it appears at different locations in my image. How can I fix the model's location, and thus the modelView matrix?
In other forums I was told that I should do the 2D-3D correspondence to get the extrinsic parameters, but I don't know how to correspond my 2D points with the 3D points.
Typically you would design the points you want to track in such a fashion that the 2D-3D correspondence is immediately clear. The easiest way to do this is to use points with different colors. You could also go with some sort of pattern (google "augmented reality cards"), which you would then have to analyze in order to find out how it is rotated in the image. The pattern, of course, cannot be rotationally symmetric.
If you can't do that, you can try out all the different permutations of the points, plug them into OpenCV to get a matrix for each, project your 3D points to 2D with those matrices, and see which one fits best.
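A brute-force sketch of that permutation idea using cv::solvePnP; cameraMatrix and distCoeffs stand for your intrinsics, and with exactly 4 points the P3P flag is the appropriate choice (the helper name is illustrative):

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

void bestCorrespondence(const std::vector<cv::Point3f>& objPts,  // 4 model points
                        const std::vector<cv::Point2f>& imgPts,  // 4 tracked points
                        const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                        cv::Mat& bestRvec, cv::Mat& bestTvec) {
    std::vector<int> idx = {0, 1, 2, 3};
    double bestErr = std::numeric_limits<double>::max();
    do {
        // Build the candidate 2D ordering for this permutation.
        std::vector<cv::Point2f> ordered(4);
        for (int i = 0; i < 4; ++i) ordered[i] = imgPts[idx[i]];

        cv::Mat rvec, tvec;
        if (!cv::solvePnP(objPts, ordered, cameraMatrix, distCoeffs,
                          rvec, tvec, false, cv::SOLVEPNP_P3P))
            continue;

        // Score the candidate by reprojection error.
        std::vector<cv::Point2f> proj;
        cv::projectPoints(objPts, rvec, tvec, cameraMatrix, distCoeffs, proj);
        double err = 0.0;
        for (int i = 0; i < 4; ++i)
            err += std::hypot(proj[i].x - ordered[i].x, proj[i].y - ordered[i].y);

        if (err < bestErr) { bestErr = err; bestRvec = rvec; bestTvec = tvec; }
    } while (std::next_permutation(idx.begin(), idx.end()));
}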

Reconstructing 3D from some images without calibration?

I want to do a 3D reconstruction from multiple images without using chessboard calibration. I'm using OpenCV and studying how to obtain a 3D model from 30 images without calibrating the camera with a chessboard pattern.
Is this possible? Where can I get the extrinsic parameters?
Can I do the 3D reconstruction without calibrating?
The calibration grid (chessboard in the typical OpenCV example) is simply an object of known dimensions that lets you estimate the camera's intrinsic parameters, i.e. the mapping from camera coordinates to the image coordinates of a point. This includes focal length, centre of projection, radial distortion parameters et cetera.
If you do away with the calibration object, you will need to find these parameters from the image observations themselves. This approach is called "self-calibration" or "auto-calibration" and can be fairly involved. Basically, you are trying to get a good starting point for the follow-up non-linear optimisation (i.e. bundle adjustment). For a start, you might want to refer to Marc Pollefeys, who came up with a simple linear algorithm for this problem:
http://www.cs.unc.edu/~marc/pubs/PollefeysIJCV04.pdf
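OpenCV doesn't ship Pollefeys-style self-calibration out of the box, but one common way to get that "good starting point" is to guess the intrinsics (focal length roughly the image width, principal point at the centre), recover a relative pose from matched features, and hand everything to bundle adjustment afterwards. A rough sketch, with all names illustrative:

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

void relativePoseFromGuess(const std::vector<cv::Point2f>& pts1,
                           const std::vector<cv::Point2f>& pts2,
                           cv::Size imgSize, cv::Mat& R, cv::Mat& t) {
    double focal = imgSize.width;                       // rough focal-length guess
    cv::Point2d pp(imgSize.width / 2.0, imgSize.height / 2.0);

    // Essential matrix from matched points under the guessed intrinsics.
    cv::Mat mask;
    cv::Mat E = cv::findEssentialMat(pts1, pts2, focal, pp,
                                     cv::RANSAC, 0.999, 1.0, mask);

    // Relative rotation and (unit-scale) translation between the two views.
    cv::recoverPose(E, pts1, pts2, R, t, focal, pp, mask);
}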

How to get curve from intersection of point cloud and arbitrary plane?

I have various point clouds defining RT-STRUCTs, called ROIs, from DICOM files. DICOM files are produced by tomographic scanners. Each ROI is formed by a point cloud and represents some 3D object.
The goal is to get the 2D curve formed by a plane cutting the ROI's point cloud. The problem is that I can't just use the points which are intersected by the plane. What I probably need is to intersect a 3D concave hull with some plane and get the resulting intersection contour.
Are there any libraries which have already implemented these operations? I've found the PCL library, and it can probably solve my problem, but I can't figure out how to achieve this with PCL. In addition, I can use Matlab as well - we use it through its runtime from C++.
Has anyone run into this problem already?
P.S. As I've mentioned above, I need to use the solution from my C++ code - so it should be some library, or a Matlab solution which I'll use through the Matlab Runtime.
P.P.S. Accuracy in this kind of calculation is really important - it will be used in medical software intended for working with brain tumors, so you can imagine the consequences of an error (:
You first need to form a surface from the point set.
If it's possible to pick a 2D viewing direction for the points (i.e., the surface is single-valued when seen from that direction), you can use a simple 2D Delaunay triangulation in those two coordinates.
Otherwise, you need a full 3D surfacing method (marching cubes or Poisson reconstruction).
Then, once you have the triangles, it's simple to calculate the contour line where a plane cuts them (a sketch follows below).
See links in Mesh generation from points with x, y and z coordinates
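For that last step, a minimal sketch of cutting the triangles with a plane n·x = d; Eigen types are used since PCL already depends on Eigen, and the Triangle struct is an illustrative placeholder for whatever your mesher outputs:

#include <Eigen/Dense>
#include <array>
#include <vector>

using Vec3 = Eigen::Vector3f;
struct Triangle { Vec3 v[3]; };

// Returns the segments where triangles cross the plane n.dot(x) = d
// (n must be unit length). Vertices lying exactly on the plane are ignored
// in this sketch; handle them explicitly if accuracy matters.
std::vector<std::array<Vec3, 2>> planeCut(const std::vector<Triangle>& tris,
                                          const Vec3& n, float d) {
    std::vector<std::array<Vec3, 2>> segments;
    for (auto const& t : tris) {
        std::vector<Vec3> hits;
        for (int i = 0; i < 3; ++i) {
            const Vec3& a = t.v[i];
            const Vec3& b = t.v[(i + 1) % 3];
            float da = n.dot(a) - d;   // signed distances to the plane
            float db = n.dot(b) - d;
            if (da * db < 0.0f) {      // edge crosses the plane
                float s = da / (da - db);
                hits.push_back(a + s * (b - a));
            }
        }
        if (hits.size() == 2)
            segments.push_back({hits[0], hits[1]});
    }
    return segments; // chain these segments to obtain the 2D contour
}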
Perhaps you could just discard the points that are far from the plane and project the remaining ones onto the plane. You'll still need to reconstruct the curve in the plane, but there are several good methods for that. See for instance http://www.cse.ohio-state.edu/~tamaldey/curverecon.htm and http://valis.cs.uiuc.edu/~sariel/research/CG/applets/Crust/Crust.html.