Extract points from PointCloud with PCL - c++

I'm using the library PCL to read .pcd point clouds.
I need to extract a profile cut of this point cloud.
Any advice on how to implement such a feature?
Basically, I want to move a box along the point cloud and project the points that fall inside the box onto one plane.
I already did the reading of the point cloud, but I'm a bit stuck with the extraction of the sub-point cloud.

You can use the pcl::ProjectInliers class, which does exactly that: it projects the points onto a parametric model (e.g. plane, sphere, ...). There's even a handy tutorial for it!
Here's an extract from the tutorial which creates a plane and projects the points on it:
// Create a set of planar coefficients with X=Y=0,Z=1
pcl::ModelCoefficients::Ptr coefficients (new pcl::ModelCoefficients ());
coefficients->values.resize (4);
coefficients->values[0] = coefficients->values[1] = 0;
coefficients->values[2] = 1.0;
coefficients->values[3] = 0;
// Create the output cloud and the filtering object, then project the cloud onto the plane
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_projected (new pcl::PointCloud<pcl::PointXYZ>);
pcl::ProjectInliers<pcl::PointXYZ> proj;
proj.setModelType (pcl::SACMODEL_PLANE);
proj.setInputCloud (cloud);
proj.setModelCoefficients (coefficients);
proj.filter (*cloud_projected);

If you don't need an actual box and a distance threshold to a plane will do, try this:
A plane can be represented by a unit normal vector (3D) and a distance, say norm = (0, 0, 1), d = 10. This defines the plane z = 10.
Create a point on the plane: d * norm = (10*0, 10*0, 10*1) -> point_on_plane = (0, 0, 10).
The signed distance of a point p from the plane is dist = (p - point_on_plane).dot(norm).
If fabs(dist) is less than the threshold, project the point onto the plane:
projection = p - dist * norm
To iterate over all cross sections, increase d. A minimal sketch of this is shown below.
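Here is one way that could look with PCL types; the function name, the cloud, and the parameter values are just placeholders for your own data.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <Eigen/Core>
#include <cmath>

// Keep only the points within `threshold` of the plane (unit normal `norm`,
// offset `d`) and project them onto that plane.
pcl::PointCloud<pcl::PointXYZ>::Ptr
sliceAndProject (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &cloud,
                 const Eigen::Vector3f &norm, float d, float threshold)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr slice (new pcl::PointCloud<pcl::PointXYZ>);
  const Eigen::Vector3f point_on_plane = d * norm;
  for (const pcl::PointXYZ &pt : cloud->points)
  {
    const Eigen::Vector3f p (pt.x, pt.y, pt.z);
    const float dist = (p - point_on_plane).dot (norm);  // signed distance to the plane
    if (std::fabs (dist) < threshold)
    {
      const Eigen::Vector3f proj = p - dist * norm;      // projection onto the plane
      slice->push_back (pcl::PointXYZ (proj.x (), proj.y (), proj.z ()));
    }
  }
  return slice;
}
Calling this repeatedly while increasing d walks the cut through the cloud.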

Why does pcl::PointNormal RANSAC cylinder model state that it does not contain normals when estimating model coefficients?

I have been trying to fit a cylinder model to a generated point cloud using the pcl tools. I have been adapting the example code that is provided in the documentation here.
From my understanding, the pcl::SampleConsensusModelCylinder requires normals data, so I have added a new check in the code that checks for a -cf argument. The code then calculates the normals of each point into a pcl::PointCloud<pcl::Normal> called cloud_normals and then concatenates this with the original pcl::PointXYZ-type point cloud, cloud. I save this to a new pcl::PointNormal-type point cloud called cloud_normalpoints, and with this I attempt to fit the pcl::SampleConsensusModelCylinder using RAndom SAmple Consensus (RANSAC).
I have included the code below:
else if (pcl::console::find_argument (argc, argv, "-cf") >= 0 )
{
//TODO: find fastest way to fit cylinder model to point cloud
// the cloud_normals point cloud will be used to store the normals of the point cloud
pcl::PointCloud<pcl::Normal>::Ptr cloud_normals (new pcl::PointCloud<pcl::Normal>);
cloud_normals->width = 500;
cloud_normals->height = 1;
// is_dense is True if no points have NaN or Inf in any of their floating points field
cloud_normals->is_dense = false;
cloud_normals->points.resize (cloud_normals->width * cloud_normals->height);
// the NormalEstimation object ne is created and will estimate the normals and curvature at each point
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
// the search::KdTree object pointer points to a search method for finding points in 3D space (3D point clouds)
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ> ());
// use the filtered cloud as an input to the normal estimator
ne.setInputCloud (cloud);
// set number of k-nearest neighbours to use for feature estimation to 50
ne.setKSearch (50);
// compute normals and save these to the clouds_normals point cloud
ne.compute (*cloud_normals);
pcl::PointCloud<pcl::PointNormal>::Ptr cloud_normalpoints (new pcl::PointCloud<pcl::PointNormal>);
cloud_normalpoints->width = 500;
cloud_normalpoints->height = 1;
// is_dense is True if no points have NaN or Inf in any of their floating points field
cloud_normalpoints->is_dense = false;
cloud_normalpoints->points.resize (cloud_normalpoints->width * cloud_normalpoints->height);
pcl::concatenateFields(*cloud,*cloud_normals,*cloud_normalpoints);
//TODO: Solve normals not given error
pcl::SampleConsensusModelCylinder<pcl::PointNormal, pcl::Normal>::Ptr
model_c (new pcl::SampleConsensusModelCylinder<pcl::PointNormal, pcl::Normal> (cloud_normalpoints));
// Declares ransac as a ransac implementation searching for a cylinder (according to model_c -> in cloud)
pcl::RandomSampleConsensus<pcl::PointNormal> ransac (model_c);
// Set distance threshold of .01 -> believe this is for inliers
ransac.setDistanceThreshold (.01);
// Compute model coefficients and find inliers
ransac.computeModel();
// Return indices of best set of inliers so far for this model
ransac.getInliers(inliers);
}
I also added some more code to generate an original point cloud containing a cylinder, but this works so I will not go into it here.
When I run my code it gets to the compute model stage and then throws the following error:
[pcl::SampleConsensusModelCylinder::computeModelCoefficients] No input dataset containing normals was given!
Does anyone know why this is? The cloud_normalpoints cloud includes the normal data that was found for each point. Should I be setting up the RANSAC estimator differently? Should I use a different point type? I am relatively new to pcl so any help would be greatly appreciated!
You have to call the function setInputNormals on your model_c, passing it cloud_normals.
The cloud you pass in the constructor of SampleConsensusModelCylinder only sets the XYZ information; it is not used for the normals.
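Using the variable names from your snippet, the fix would look roughly like this, right after the model is constructed:
pcl::SampleConsensusModelCylinder<pcl::PointNormal, pcl::Normal>::Ptr
  model_c (new pcl::SampleConsensusModelCylinder<pcl::PointNormal, pcl::Normal> (cloud_normalpoints));
// give the model the normals it needs for computeModelCoefficients
model_c->setInputNormals (cloud_normals);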
This tutorial could also be interesting for you: https://pcl.readthedocs.io/projects/tutorials/en/latest/cylinder_segmentation.html

Get known position in one image to another using 8-point algorithm

I have two images and know the position of a point in the first image. Now I want to get the corresponding position in the second image.
This is my idea:
I can use algorithms such as SIFT to match keypoints (as seen in the image)
I know the camera matrix using calibration with e.g. chessboards
Using the 8 point algorithm I calculate the fundamental matrix F
Can I now use F to calculate the corresponding point?
Using the fundamental matrix F alone is not enough. If you have a point in one image, you can't find its position in the second image, because it depends not only on the configuration of the cameras, but also on the distance from the camera to that point.
This can also be seen from the equation x2^T * F * x1 = 0. If you know x1 and F, then for x2 you get the equation x2^T * b = 0, where b = F * x1. This is the equation of a point x2 lying on the line b (points x1, x2 and line b are in homogeneous coordinates). So although you can't find the exact position of the point in the second image, you know that it must lie somewhere on that line (the epipolar line).
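As a small illustration (assuming you already have F as a 3x3 cv::Mat of doubles and the pixel coordinates of x1), the line could be computed like this; OpenCV also provides cv::computeCorrespondEpilines for the same purpose:
#include <opencv2/core.hpp>

// Epipolar line in image 2 for a point (u1, v1) in image 1: b = F * x1.
// The corresponding point x2 must satisfy x2^T * b = 0, i.e. it lies on this line.
cv::Mat epipolarLine (const cv::Mat &F, double u1, double v1)
{
  cv::Mat x1 = (cv::Mat_<double>(3, 1) << u1, v1, 1.0);  // homogeneous coordinates
  return F * x1;  // line coefficients (a, b, c): a*u + b*v + c = 0
}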
Hartley and Zisserman have a great explanation of these concepts in their book Multiple View Geometry in Computer Vision. Be sure to check it out for more details.

Calculate transformation for obtaining bounding box in perspective view

This problem does not completely depend on point cloud library. Instead, it is a computer graphics problem.
Given a perspective view of a scene which contains a ball, we need to obtain the bounding box of this ball. Please see a sample figure below-
If the yellow ball is projected, a bounding rectangle can be defined by two diagonal points s1 and s2. The 3D coordinates of these points were determined by the following procedure-
Visualize the point cloud in PCLVisualizer
Define 2D bounding box using two diagonal points specifically for this view and get the 3D coordinates of the points using registerPointPickingCallback() callback function
Define minimum and maximum x, y coordinates by comparing diagonal points
Define minimum and maximum z coordinate based on the experimental setup
From the above procedure, 3D coordinates of s1 and s2 were found as shown below.
Point s1: (0.774406, 0.295306, 2.703)
Point s2: ( 1.37865, 0.845276, 2.834)
By comparing s1 and s2, a box was defined and addCube was used but it didn't work. See two screenshots of the output below taken at different camera angles-
Below is the code snippet-
double x_min = 0.774406, x_max = 1.37865;
double y_min = 0.295306, y_max = 0.845276;
double z_min = 0.2, z_max = 3.0;
pcl::visualization::PCLVisualizer viewer("Cloud Viewer");
viewer.setCameraParameters(camera);
viewer.addCube(x_min, x_max, y_min, y_max, z_min, z_max, 1, 0, 0, "bounding_box");
Fortunately, the addCube function has another overload, which provides a way to apply a translation and a rotation.
My question is: using these two points, i.e. s1 and s2, and the camera, how do I define the transformation so that the bounding box contains the ball?
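For reference, that overload takes a pose rather than min/max extents; the values below are placeholders, not a worked-out answer:
// alternate addCube overload: extents of the box plus a pose (translation + rotation)
Eigen::Vector3f translation (1.0f, 0.5f, 2.7f);                 // placeholder cube centre
Eigen::Quaternionf rotation (Eigen::Quaternionf::Identity ());  // placeholder orientation
viewer.addCube (translation, rotation, 0.6, 0.55, 0.2, "bounding_box");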
Below are the links to download all required files, in order to run the project-
bounding_box.cpp: The CPP file
scene.pcd: The captured PCD file used in above CPP code
camera.cam: Camera file used in above CPP code
Dependency: PCL 1.8

How to get points in stereo image from extrinsic parameters

I've got a pair of cameras calibrated, knowing their intrinsic and extrinsic parameters.
Knowing that they are both looking at a plane, if I define some points in one of the images, how can I get the corresponding points in the other image?
The cameras are pretty close to one another, so suppose there is no occlusion; both can see the same object.
Is there an OpenCV function or set of functions to do this?
My point is on Z=0 in the world.
Basically:
P_CAM1=(200,300) -> P_CAM2= ?
The answer:
Undistort the images (cv::undistort).
Calculate P1ccdf = A1^(-1) * P1 -> P1 in normalized (f = 1) camera coordinates.
Calculate the corresponding world point on Z = 0. For that:
Calculate the optical centre of camera 1 in world coordinates: Copt1 = [0, 0, 0, 1]^T in camera coordinates, so Copt1w = -R1^T * T1.
Compute the direction of the ray from Copt1w through P1ccdf in world coordinates: V1w = R1^T * P1ccdf.
Solve Copt1w + lambda * V1w = [Pxw, Pyw, 0]^T -> lambda = -Copt1w(z) / V1w(z).
P1w = Copt1w + lambda * V1w = [Pxw, Pyw, 0]^T.
Project the world point into camera 2:
P1ccd2f = [R2 | T2] * [Pxw, Pyw, 0, 1]^T
P1ccd2f = P1ccd2f / P1ccd2f(3)
P2 = A2 * P1ccd2f
where
P means point,
A is the 3x3 intrinsic matrix,
[R | T] is the 3x4 extrinsic (camera) matrix.
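Put together as code (a sketch, assuming all matrices are CV_64F cv::Mat and the convention x_cam = R * x_world + T; the function name is just illustrative):
#include <opencv2/core.hpp>

// Map a pixel in camera 1 to the corresponding pixel in camera 2, assuming the
// observed point lies on the world plane Z = 0.
// A1, A2: 3x3 intrinsics; R1, T1, R2, T2: extrinsics of the two cameras.
cv::Point2d mapThroughPlane (const cv::Point2d &p1,
                             const cv::Mat &A1, const cv::Mat &R1, const cv::Mat &T1,
                             const cv::Mat &A2, const cv::Mat &R2, const cv::Mat &T2)
{
  // pixel -> normalized (f = 1) camera coordinates
  cv::Mat P1 = (cv::Mat_<double>(3, 1) << p1.x, p1.y, 1.0);
  cv::Mat P1ccdf = A1.inv () * P1;

  // optical centre of camera 1 and the ray direction, both in world coordinates
  cv::Mat Copt1w = -(R1.t () * T1);
  cv::Mat V1w = R1.t () * P1ccdf;

  // intersect the ray with the plane Z = 0
  double lambda = -Copt1w.at<double> (2) / V1w.at<double> (2);
  cv::Mat P1w = Copt1w + lambda * V1w;  // (Pxw, Pyw, 0)

  // project the world point into camera 2 and dehomogenize
  cv::Mat Pc2 = R2 * P1w + T2;
  cv::Mat P2 = A2 * (Pc2 / Pc2.at<double> (2));
  return cv::Point2d (P2.at<double> (0), P2.at<double> (1));
}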

Optimizing a pinhole camera rendering system

I'm making a software rasterizer for school, and I'm using an unusual rendering method instead of traditional matrix calculations. It's based on a pinhole camera. I have a few points in 3D space, and I convert them to 2D screen coordinates by taking the vector between each point and the camera and normalizing it:
Vec3 ray_to_camera = (a_Point - plane_pos).Normalize();
This gives me a directional vector towards the camera. I then turn that direction into a ray by placing the ray's origin on the camera and performing a ray-plane intersection with a plane slightly behind the camera.
Vec3 plane_pos = m_Position + (m_Direction * m_ScreenDistance);
float dot = ray_to_camera.GetDotProduct(m_Direction);
if (dot < 0)
{
float time = (-m_ScreenDistance - plane_pos.GetDotProduct(m_Direction)) / dot;
// if time is smaller than 0 the ray is either parallel to the plane or misses it
if (time >= 0)
{
// retrieving the actual intersection point
a_Point -= (m_Direction * ((a_Point - plane_pos).GetDotProduct(m_Direction)));
// subtracting the plane origin from the intersection point
// puts the point at world origin (0, 0, 0)
Vec3 sub = a_Point - plane_pos;
// the axes are calculated by saying the directional vector of the camera
// is the new z axis
projected.x = sub.GetDotProduct(m_Axis[0]);
projected.y = sub.GetDotProduct(m_Axis[1]);
}
}
This works wonderfully, but I'm wondering: can the algorithm be made any faster? Right now, for every triangle in the scene, I have to calculate three normals.
float length = 1 / sqrtf(GetSquaredLength());
x *= length;
y *= length;
z *= length;
Even with a fast reciprocal square root approximation (1 / sqrt(x)) that's going to be very demanding.
My questions are thus:
Is there a good way to approximate the three normals?
What is this rendering technique called?
Can the three vertex points be approximated using the normal of the centroid? ((v0 + v1 + v2) / 3)
Thanks in advance.
P.S. "You will build a fully functional software rasterizer in the next seven weeks with the help of an expert in this field. Begin." I ADORE my education. :)
EDIT:
Vec2 projected;
// the plane is behind the camera
Vec3 plane_pos = m_Position + (m_Direction * m_ScreenDistance);
float scale = m_ScreenDistance / (m_Position - plane_pos).GetSquaredLength();
// times -100 because of the squared length instead of the length
// (which would involve a squared root)
projected.x = a_Point.GetDotProduct(m_Axis[0]).x * scale * -100;
projected.y = a_Point.GetDotProduct(m_Axis[1]).y * scale * -100;
return projected;
This returns the correct results, however the model is now independent of the camera position. :(
It's a lot shorter and faster though!
This is called a ray tracer - a rather typical assignment for a first computer graphics course* - and you can find a lot of interesting implementation details in the classic Foley/van Dam textbook (Computer Graphics: Principles and Practice). I strongly suggest you buy/borrow this textbook and read it carefully.
*Just wait until you get started on reflections and refraction... Now the fun begins!
It is difficult to understand exactly what your code is doing, because it seems to be performing a lot of redundant operations! However, if I understand what you say you're trying to do, you are:
finding the vector from the pinhole to the point
normalizing it
projecting backwards along the normalized vector to an "image plane" (behind the pinhole, natch!)
finding the vector to this point from a central point on the image plane
doing dot products on the result with "axis" vectors to find the x and y screen coordinates
If the above description represents your intentions, then the normalization should be redundant -- you shouldn't have to do it at all! If removing the normalization gives you bad results, you are probably doing something slightly different from your stated plan... in other words, it seems likely that you have confused yourself along with me, and that the normalization step is "fixing" it to the extent that it looks good enough in your test cases, even though it probably still isn't doing quite what you want it to.
The overall problem, I think, is that your code is massively overengineered: you are writing all your high-level vector algebra as code to be executed in the inner loop. The way to optimize this is to work out all your vector algebra on paper, find the simplest expression possible for your inner loop, and precompute all the necessary constants for this at camera setup time. The pinhole camera specs would only be the inputs to the camera setup routine.
Unfortunately, unless I miss my guess, this should reduce your pinhole camera to the traditional, boring old matrix calculations. (ray tracing does make it easy to do cool nonstandard camera stuff -- but what you describe should end up perfectly standard...)
Your code is a little unclear to me (plane_pos?), but it does seem that you could cut out some unnecessary calculation.
Instead of normalizing the ray (scaling it to length 1), why not scale it so that the z component is equal to the distance from the camera to the plane-- in fact, scale x and y by this factor, you don't need z.
float scale = distance_to_plane/z;
x *= scale;
y *= scale;
This will give the x and y coordinates on the plane, no sqrt(), no dot products.
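Spelled out with the member names from the question (keeping the axis dot products the original already does, but dropping the normalization), that might look like this sketch:
Vec3 rel = a_Point - m_Position;            // vector from the camera to the point
float z = rel.GetDotProduct(m_Direction);   // depth along the view direction
if (z > 0)                                  // point is in front of the camera
{
    float scale = m_ScreenDistance / z;     // distance to the plane divided by depth
    projected.x = rel.GetDotProduct(m_Axis[0]) * scale;
    projected.y = rel.GetDotProduct(m_Axis[1]) * scale;
}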
Well, off the bat, you can calculate normals for every triangle when your program starts up. Then when you're actually running, you just have to access the normals. This sort of startup calculation to save costs later tends to happen a lot in graphics. This is why we have large loading screens in a lot of our video games!
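As a tiny sketch of that idea (assuming a Triangle struct that stores its vertices, and a Cross method on Vec3, both hypothetical here):
struct Triangle
{
    Vec3 v0, v1, v2;
    Vec3 normal;  // filled in once, before rendering starts
};

void PrecomputeNormals(std::vector<Triangle>& triangles)
{
    for (Triangle& t : triangles)
    {
        // the cross product of two edges gives the face normal
        t.normal = (t.v1 - t.v0).Cross(t.v2 - t.v0).Normalize();
    }
}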