PCL: Scale two Point-Clouds to the same size - c++

I have two point clouds and I am trying to scale them to the same size. My first approach was simply to divide the square roots of the largest eigenvalues:
pcl::PCA<pcl::PointNormal> pca;
pca.setInputCloud(model_cloud_ptr);
Eigen::Vector3f ev_M = pca.getEigenValues();
pca.setInputCloud(segmented_cloud_ptr);
Eigen::Vector3f ev_S = pca.getEigenValues();
double s = sqrt(ev_M[0])/sqrt(ev_S[0]);
This lets me scale my model cloud to approximately the same size as my segmented cloud, but the result is far from perfect; it is only a rough estimate. I also tried TransformationEstimationSVDScale and SampleConsensusModelRegistration as in this tutorial, but then I get the message that the number of source points/indices differs from the number of target points/indices.
What would be the best approach to scale the clouds to the same size when they contain different numbers of points?
Edit: I tried what @dspeyer proposed, but this gives me a scaling factor of almost 1.0:
pcl::PCA<pcl::PointNormal> pca;
pca.setInputCloud(model_cloud_ptr);
Eigen::Matrix3f ev_M = pca.getEigenVectors();
Eigen::Vector3f ev_M1 = ev_M.col(0);
Eigen::Vector3f ev_M2 = ev_M.col(1);
auto dist_M1 = ev_M1.maxCoeff()-ev_M1.minCoeff();
auto dist_M2 = ev_M2.maxCoeff()-ev_M2.minCoeff();
auto distM_max = std::max(dist_M1, dist_M2);
pca.setInputCloud(segmented_cloud_ptr);
Eigen::Matrix3f ev_S = pca.getEigenVectors();
Eigen::Vector3f ev_S1 = ev_S.col(0);
Eigen::Vector3f ev_S2 = ev_S.col(1);
auto dist_S1 = ev_S1.maxCoeff()-ev_S1.minCoeff();
auto dist_S2 = ev_S2.maxCoeff()-ev_S2.minCoeff();
auto distS_max = std::max(dist_S1, dist_S2);
double s = distS_max / distM_max;

I would suggest using the eigenvectors of each cloud to identify each one's primary axis of variation, and then scaling the clouds based on each cloud's variation along that axis. In my example I used an oriented bounding box (max minus min in eigenspace), but the mean value or standard deviation along the primary axis (the x axis in eigenspace) could be better metrics depending on the application.
I left some debug flags in the function in case they are helpful to you, but gave them the defaults I expect you will use. I tested with variable axis stretching and with arbitrary rotations of the sample and golden clouds, and the function handles all of that fine.
One caveat of this method: if the warping is axially variable AND it causes one axis to overtake another as the primary axis of variation, this function could scale the clouds improperly. I am not sure whether this edge case is relevant to you; as long as the scaling between your clouds is uniform, it should never occur.
Debug flags: debugOverlay leaves both input clouds scaled and in their respective eigen orientations (which makes comparison easier). primaryAxisOnly uses only the primary axis of variation for scaling if true; if false, all three axes of variation are scaled independently.
Function:
void rescaleClouds(pcl::PointCloud<pcl::PointXYZ>::Ptr& goldenCloud, pcl::PointCloud<pcl::PointXYZ>::Ptr& sampleCloud, bool debugOverlay = false, bool primaryAxisOnly = true)
{
    // analyze golden cloud
    pcl::PCA<pcl::PointXYZ> pcaGolden;
    pcaGolden.setInputCloud(goldenCloud);
    Eigen::Matrix3f goldenEVs_Dir = pcaGolden.getEigenVectors();
    Eigen::Vector4f goldenMidPt = pcaGolden.getMean();
    Eigen::Matrix4f goldenTransform = Eigen::Matrix4f::Identity();
    goldenTransform.block<3, 3>(0, 0) = goldenEVs_Dir;
    goldenTransform.block<4, 1>(0, 3) = goldenMidPt;
    pcl::PointCloud<pcl::PointXYZ>::Ptr orientedGolden(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::transformPointCloud(*goldenCloud, *orientedGolden, goldenTransform.inverse());
    pcl::PointXYZ goldenMin, goldenMax;
    pcl::getMinMax3D(*orientedGolden, goldenMin, goldenMax);

    // analyze sample cloud
    pcl::PCA<pcl::PointXYZ> pcaSample;
    pcaSample.setInputCloud(sampleCloud);
    Eigen::Matrix3f sampleEVs_Dir = pcaSample.getEigenVectors();
    Eigen::Vector4f sampleMidPt = pcaSample.getMean();
    Eigen::Matrix4f sampleTransform = Eigen::Matrix4f::Identity();
    sampleTransform.block<3, 3>(0, 0) = sampleEVs_Dir;
    sampleTransform.block<4, 1>(0, 3) = sampleMidPt;
    pcl::PointCloud<pcl::PointXYZ>::Ptr orientedSample(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::transformPointCloud(*sampleCloud, *orientedSample, sampleTransform.inverse());
    pcl::PointXYZ sampleMin, sampleMax;
    pcl::getMinMax3D(*orientedSample, sampleMin, sampleMax);

    // apply scaling to oriented sample cloud
    double xScale = (sampleMax.x - sampleMin.x) / (goldenMax.x - goldenMin.x);
    double yScale = (sampleMax.y - sampleMin.y) / (goldenMax.y - goldenMin.y);
    double zScale = (sampleMax.z - sampleMin.z) / (goldenMax.z - goldenMin.z);
    if (primaryAxisOnly) { std::cout << "scale: " << xScale << std::endl; }
    else { std::cout << "xScale: " << xScale << " yScale: " << yScale << " zScale: " << zScale << std::endl; }
    for (int i = 0; i < orientedSample->points.size(); i++)
    {
        if (primaryAxisOnly)
        {
            orientedSample->points[i].x = orientedSample->points[i].x / xScale;
            orientedSample->points[i].y = orientedSample->points[i].y / xScale;
            orientedSample->points[i].z = orientedSample->points[i].z / xScale;
        }
        else
        {
            orientedSample->points[i].x = orientedSample->points[i].x / xScale;
            orientedSample->points[i].y = orientedSample->points[i].y / yScale;
            orientedSample->points[i].z = orientedSample->points[i].z / zScale;
        }
    }

    // depending on your next step, it may be reasonable to leave this cloud at its eigen orientation,
    // but this transformation will allow this function to scale in place.
    if (debugOverlay)
    {
        goldenCloud = orientedGolden;
        sampleCloud = orientedSample;
    }
    else
    {
        pcl::transformPointCloud(*orientedSample, *sampleCloud, sampleTransform);
    }
}
Test Code (you will need your own clouds and visualizers):
pcl::PointCloud<pcl::PointXYZ>::Ptr golden(new pcl::PointCloud<pcl::PointXYZ>);
fileIO::loadFromPCD(golden, "CT_Scan_Nov_7_fullSpine.pcd");
CloudVis::simpleVis(golden);
double xStretch = 1.75;
double yStretch = 1.65;
double zStretch = 1.5;
pcl::PointCloud<pcl::PointXYZ>::Ptr stretched(new pcl::PointCloud<pcl::PointXYZ>);
for (int i = 0; i < golden->points.size(); i++)
{
    pcl::PointXYZ pt = golden->points[i];
    stretched->points.push_back(pcl::PointXYZ(pt.x * xStretch, pt.y * yStretch, pt.z * zStretch));
}
Eigen::Affine3f arbRotation = Eigen::Affine3f::Identity();
arbRotation.rotate(Eigen::AngleAxisf(M_PI / 4.0, Eigen::Vector3f::UnitY()));
pcl::transformPointCloud(*stretched, *stretched, arbRotation);
CloudVis::rgbClusterVis(golden, stretched);
rescaleClouds(golden, stretched,true,false);
CloudVis::rgbClusterVis(golden, stretched);

Seems like you should be able to:
Project everything onto the first two eigenvectors
Take the min and max for each
Subtract max-min for each eigenvector/dataset pair
Take the max of the two ranges (usually, but not always the first eigenvector -- when it isn't you'll want to rotate the final display)
Use the ratio of those maxes as the scaling constant
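For reference, here is a minimal sketch of those steps, assuming pcl::PCA's whole-cloud project() overload and the same pcl::PointNormal clouds as in the question (the helper name eigenExtent is just illustrative):
// needs <pcl/common/pca.h>, <pcl/common/common.h> and <algorithm>
// Extent of a cloud along its first two eigenvectors (x and y in eigenspace).
double eigenExtent(const pcl::PointCloud<pcl::PointNormal>::Ptr& cloud)
{
    pcl::PCA<pcl::PointNormal> pca;
    pca.setInputCloud(cloud);
    pcl::PointCloud<pcl::PointNormal> projected;
    pca.project(*cloud, projected);        // point coordinates are now expressed in the eigenvector basis
    pcl::PointNormal minPt, maxPt;
    pcl::getMinMax3D(projected, minPt, maxPt);
    double range1 = maxPt.x - minPt.x;     // range along the first eigenvector
    double range2 = maxPt.y - minPt.y;     // range along the second eigenvector
    return std::max(range1, range2);
}

// scaling factor to apply to the model cloud
double s = eigenExtent(segmented_cloud_ptr) / eigenExtent(model_cloud_ptr);
The key difference from the edit above is that the points themselves are projected onto the eigenvectors (pca.project) before taking min/max, rather than taking min/max of the eigenvector components.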

Related

Using The Dot Product to determine whether an object is on the left hand side or right hand side of the direction of the object

So I am currently trying to create a method which, given a simulated vehicle's position, direction, and an object's position, will determine whether the object lies on the right-hand side or the left-hand side of the vehicle's direction. This is what I have implemented so far (note that I am in a 2D coordinate system):
This is the code block that uses the method
void Class::leftOrRight()
{
    // Clearing both _lhsCones and _rhsCones vectors
    _rhsCones.clear();
    _lhsCones.clear();
    for (int i = 0; i < _cones.size(); i++)
    {
        if (dotAngleFromYaw(_x, _y, _cones[i].x(), _cones[i].y(), _yaw) > 0)
        {
            _lhsCones.push_back(_cones[i]);
        }
        else
        {
            _rhsCones.push_back(_cones[i]);
        }
    }
    return;
}
This is the code block which computes the angle
double Class::dotAngleFromYaw(double xCar, double yCar, double xCone, double yCone, double yawCar)
{
    double iOne = cos(yawCar);
    double jOne = sin(yawCar);
    double iTwo = xCone - xCar;
    double jTwo = yCone - yCar;
    // ensure to normalise vector two
    double magTwo = std::sqrt(std::pow(iTwo, 2) + std::pow(jTwo, 2));
    iTwo = iTwo / magTwo;
    jTwo = jTwo / magTwo;
    double theta = acos((iOne * iTwo) + (jOne * jTwo)); // in radians
    return theta;
}
My issue with this is that dotAngleFromYaw(0,0,0,1,0) = +pi/2 and dotAngleFromYaw(0,0,0,-1,0) = +pi/2, so the if statements fail to sort the cones.
Any help would be great
*Adjustments made from comment suggestions
I have changed the sort method as follows:
double Class::indicateSide(double xCar, double yCar, double xCone, double yCone, double yawCar)
{
    // Compute the i and j components of the yaw measurement as a unit vector, i.e. vector magnitude = 1
    double iOne = cos(yawCar);
    double jOne = sin(yawCar);
    // Create the car-to-cone vector
    double iTwo = xCone - xCar;
    double jTwo = yCone - yCar;
    // ensure to normalise the car-to-cone vector
    double magTwo = std::sqrt(std::pow(iTwo, 2) + std::pow(jTwo, 2));
    iTwo = iTwo / magTwo;
    jTwo = jTwo / magTwo;
    // // Using the transformation matrix with theta = yaw (angle in radians), transform the axes to the augmented 2D space
    // double Ex = cos(yawCar)*iOne - sin(yawCar)*jOne;
    // double Ey = sin(yawCar)*iOne + cos(yawCar)*jOne;
    // Take the cross product of < Ex, 0 > x < x', y' > where x', y' have the same location in the simulation space.
    double result = iOne*jTwo - jOne*iTwo;
    return result;
}
However, I am still having issues defining left and right. I have also become aware that objects behind the vehicle are still passed to the array of objects being evaluated, so I have implemented a dot-product check elsewhere that seems to work fine for now, which is why I have not included it here (I can edit the post to include that code if needed). I did try to implement the coordinate-system transformation, but I did not see any improvement compared to leaving the commented-out lines unused.
Any further feedback is greatly appreciated
If the angle does not matter and you only want to know "left or right", I'd go for another approach.
Set up a plane that has xCar and yCar on its surface. When setting it up, it's up to you how to define the plane's normal, i.e. the side it's facing.
After that you can apply the dot product to determine the sign, which indicates which side the object is on.
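A minimal sketch of that idea in 2D, to make it concrete (the names here are illustrative, not from your code): the "plane" degenerates to the line through the car along its heading, its normal is the heading rotated by 90 degrees, and the sign of the dot product of (cone - car) with that normal gives the side.
#include <cmath>

// > 0: cone on one side, < 0: on the other side, ~0: cone lies on the heading line
double sideOfHeading(double xCar, double yCar, double yawCar, double xCone, double yCone)
{
    // normal of the heading line: heading (cos yaw, sin yaw) rotated by +90 degrees
    double nx = -std::sin(yawCar);
    double ny =  std::cos(yawCar);
    // vector from car to cone, dotted with the normal = signed distance (up to scale)
    return nx * (xCone - xCar) + ny * (yCone - yCar);
}
Algebraically this is the same value as the 2D cross product iOne*jTwo - jOne*iTwo suggested in the next answer.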
Note that the dot product does not provide information about left/right position; its sign only tells you whether the position is ahead or behind.
To get the left/right side, you need to check the sign of the cross product
cross = iOne * jTwo - jOne * iTwo
(note subtraction and i/j alternation)
To see the difference between the dot and cross product information, here is a quick test. A mathematical (CCW) coordinate system is used; left/right depends on CW vs. CCW.
By the way, in kinematics simulations it is worth storing the components of the direction vector rather than the angle.
#define _USE_MATH_DEFINES // for C++ (needed for M_PI)
#include <cmath>
#include <iostream>
void check_target(float carx, float cary, float dirx, float diry, float tx, float ty) {
    float cross = (tx - carx) * diry - (ty - cary) * dirx;
    float dot = (tx - carx) * dirx + (ty - cary) * diry;
    if (cross >= 0) {
        if (dot >= 0)
            std::cout << "ahead right\n";
        else
            std::cout << "behind right\n";
    }
    else {
        if (dot >= 0)
            std::cout << "ahead left\n";
        else
            std::cout << "behind left\n";
    }
}

int main()
{
    float carx, cary, car_dir_angle, dirx, diry;
    float tx, ty;
    carx = 1;
    cary = 1;
    car_dir_angle = M_PI / 4;
    dirx = cos(car_dir_angle);
    diry = sin(car_dir_angle);
    check_target(carx, cary, dirx, diry, 2, 3);
    check_target(carx, cary, dirx, diry, 2, 1);
    check_target(carx, cary, dirx, diry, 1, 0);
    check_target(carx, cary, dirx, diry, 0, 1);
}

Sorting RHS/LHS Objects in Vehicle Path C++

So I am currently trying to create a method which, given a simulated vehicle's position, direction, and an object's position, will determine whether the object lies on the right-hand side or the left-hand side of the vehicle's direction. An image is shown here: Simple Diagram of Problem Situation.
So far I have tried to use the cross product and some other methods to solve the problem; I will include the relevant code blocks here:
void Class::sortCones()
{
    // Clearing both _lhsCones and _rhsCones vectors
    _rhsCones.clear();
    _lhsCones.clear();
    for (int i = 0; i < _cones.size(); i++)
    {
        if (indicateSide(_x, _y, _cones[i].x(), _cones[i].y(), _yaw) > 0)
        {
            _lhsCones.push_back(_cones[i]);
        }
        if (indicateSide(_x, _y, _cones[i].x(), _cones[i].y(), _yaw) == 0)
        {
            return;
        }
        else
        {
            _rhsCones.push_back(_cones[i]);
        }
    }
    return;
}
double Class::indicateSide(double xCar, double yCar, double xCone, double yCone, double yawCar)
{
    // Compute the i and j components of the yaw measurement as a unit vector, i.e. vector magnitude = 1
    double iOne = cos(yawCar);
    double jOne = sin(yawCar);
    // Create the car-to-cone vector
    double iTwo = xCone - xCar;
    double jTwo = yCone - yCar;
    // ensure to normalise the car-to-cone vector
    double magTwo = std::sqrt(std::pow(iTwo, 2) + std::pow(jTwo, 2));
    iTwo = iTwo / magTwo;
    jTwo = jTwo / magTwo;
    // - old method
    // Using the transformation matrix with theta = yaw (angle in radians), transform the axes to the augmented 2D space
    // Take the cross product of < Ex, 0 > x < x', y' > where x', y' have the same location in the simulation space.
    // double Ex = cos(yawCar)*iOne - sin(yawCar)*jOne;
    // double Ey = sin(yawCar)*iOne + cos(yawCar)*jOne;
    double result = iOne*jTwo - jOne*iTwo;
    return result;
}
The car currently just seems to run off in a straight line, and one of the funny elements is the left/right sorting method. Any direction is GREATLY appreciated.

Generate image from an unorganized Point Cloud in PCL

I have an unorganized point cloud of the scene; below is a screenshot of the point cloud.
I want to compose an image from this point cloud. Below is the code snippet:
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <opencv2/opencv.hpp>

int main(int argc, char** argv)
{
    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
    pcl::io::loadPCDFile("file.pcd", *cloud);
    cv::Mat image = cv::Mat(cloud->height, cloud->width, CV_8UC3);
    for (int i = 0; i < image.rows; i++)
    {
        for (int j = 0; j < image.cols; j++)
        {
            pcl::PointXYZRGBA point = cloud->at(j, i);
            image.at<cv::Vec3b>(i, j)[0] = point.b;
            image.at<cv::Vec3b>(i, j)[1] = point.g;
            image.at<cv::Vec3b>(i, j)[2] = point.r;
        }
    }
    cv::imwrite("image.png", image);
    return (0);
}
The PCD file can be found here. The above code throws the following error at runtime:
terminate called after throwing an instance of 'pcl::IsNotDenseException'
what(): : Can't use 2D indexing with a unorganized point cloud
Since the cloud is unorganized, its HEIGHT field is 1, which confuses me when defining the dimensions of the image.
Questions
How to compose an image from an unorganized point cloud?
How to convert pixels located in the composed image back to the point cloud (3D space)?
PS: I am using PCL 1.7 in Ubuntu 14.04 LTS OS.
An unorganized point cloud means that the points are not assigned to a fixed (organized) grid, so ->at(j, i) can't be used (height is always 1, and the width is simply the size of the cloud).
If you want to generate an image from your cloud, I suggest the following process:
Project the point cloud to a plane.
Generate a grid (organized point cloud) on that plane.
Interpolate the colors from the unorganized cloud to the grid (organized cloud).
Generate image from your organized grid (your initial attempt).
To be able to convert back to 3D:
When projecting to the plane save the "projection vectors" (vector from original point to the projected point).
Interpolate that as well to the grid.
methods for creating the grid:
Project the point cloud to a plane (unorganized cloud), and optionally save the reconstruction information in the normals:
pcl::PointCloud<pcl::PointXYZINormal>::Ptr ProjectToPlane(pcl::PointCloud<pcl::PointXYZINormal>::Ptr cloud, Eigen::Vector3f origin, Eigen::Vector3f axis_x, Eigen::Vector3f axis_y)
{
    pcl::PointCloud<pcl::PointXYZINormal>::Ptr aux_cloud(new pcl::PointCloud<pcl::PointXYZINormal>);
    pcl::copyPointCloud(*cloud, *aux_cloud);
    auto normal = axis_x.cross(axis_y);
    Eigen::Hyperplane<float, 3> plane(normal, origin);
    for (auto itPoint = aux_cloud->begin(); itPoint != aux_cloud->end(); itPoint++)
    {
        // project point to plane
        auto proj = plane.projection(itPoint->getVector3fMap());
        // optional: save the reconstruction information (residual vector = original point minus its projection) as the normal of the projected point
        itPoint->getNormalVector3fMap() = itPoint->getVector3fMap() - proj;
        itPoint->getVector3fMap() = proj;
    }
    return aux_cloud;
}
Generate a grid based on an origin point and two axis vectors (length and image_size can either be predetermined or calculated from your cloud):
pcl::PointCloud<pcl::PointXYZINormal>::Ptr GenerateGrid(Eigen::Vector3f origin, Eigen::Vector3f axis_x, Eigen::Vector3f axis_y, float length, int image_size)
{
    auto step = length / image_size;
    pcl::PointCloud<pcl::PointXYZINormal>::Ptr image_cloud(new pcl::PointCloud<pcl::PointXYZINormal>(image_size, image_size));
    for (auto i = 0; i < image_size; i++)
        for (auto j = 0; j < image_size; j++)
        {
            int x = i - int(image_size / 2);
            int y = j - int(image_size / 2);
            image_cloud->at(i, j).getVector3fMap() = origin + (x * step * axis_x) + (y * step * axis_y);
        }
    return image_cloud;
}
Interpolate to an organized grid (the normals store reconstruction information and the curvature is used as a flag to indicate an empty pixel, i.e. no corresponding point):
void InterpolateToGrid(pcl::PointCloud<pcl::PointXYZINormal>::Ptr cloud, pcl::PointCloud<pcl::PointXYZINormal>::Ptr grid, float max_resolution, int max_nn_to_consider)
{
    pcl::search::KdTree<pcl::PointXYZINormal>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZINormal>);
    tree->setInputCloud(cloud);
    for (auto idx = 0; idx < grid->points.size(); idx++)
    {
        std::vector<int> indices;
        std::vector<float> distances;
        if (tree->radiusSearch(grid->points[idx], max_resolution, indices, distances, max_nn_to_consider) > 0)
        {
            // Linear interpolation of:
            //   Intensity
            //   Normals - residual vector to inflate (reconstruct) the surface
            float intensity(0);
            Eigen::Vector3f n(0, 0, 0);
            float weight_factor = 1.0F / std::accumulate(distances.begin(), distances.end(), 0.0F);
            for (auto i = 0; i < indices.size(); i++)
            {
                float w = weight_factor * distances[i];
                intensity += w * cloud->points[indices[i]].intensity;
                auto res = cloud->points[indices[i]].getVector3fMap() - grid->points[idx].getVector3fMap();
                n += w * res;
            }
            grid->points[idx].intensity = intensity;
            grid->points[idx].getNormalVector3fMap() = n;
            grid->points[idx].curvature = 1;
        }
        else
        {
            grid->points[idx].intensity = 0;
            grid->points[idx].curvature = 0;
            grid->points[idx].getNormalVector3fMap() = Eigen::Vector3f(0, 0, 0);
        }
    }
}
Now you have a grid (an organized cloud), which you can easily map to an image. Any changes you make to the images, you can map back to the grid, and use the normals to project back to your original point cloud.
usage example for creating the grid:
pcl::PointCloud<pcl::PointXYZINormal>::Ptr original_cloud = ...;

// reference frame for the projection
// e.g. take the XZ plane around (0,0,0), of length 100, and map it to a 128*128 image
Eigen::Vector3f origin = Eigen::Vector3f(0,0,0);
Eigen::Vector3f axis_x = Eigen::Vector3f(1,0,0);
Eigen::Vector3f axis_y = Eigen::Vector3f(0,0,1);
float length = 100;
int image_size = 128;

auto aux_cloud = ProjectToPlane(original_cloud, origin, axis_x, axis_y);
// aux_cloud now contains the points of original_cloud, with:
//   xyz coordinates projected to the XZ plane
//   color (intensity) of the original_cloud (remains unchanged)
//   normals - we lose the normal information, as we use this field to save the projection information;
//             if you wish to keep the normal data, you should define a custom PointType
// note: for the sake of projection, the origin is only used to define the plane, so any arbitrary point on the plane can be used

auto grid = GenerateGrid(origin, axis_x, axis_y, length, image_size);
// organized cloud that can be trivially mapped to an image

float max_resolution = 2 * length / image_size;
int max_nn_to_consider = 16;
InterpolateToGrid(aux_cloud, grid, max_resolution, max_nn_to_consider);
// now you have a grid (an organized cloud), which you can easily map to an image; any changes you make to the image
// can be mapped back to the grid, and the normals used to project back to your original point cloud
additional helper methods for how I use the grid:
// Convert an organized cloud to cv::Mat (an image and a mask)
// point intensity is used for the image:
//   if as_float is true  => take the raw intensity (image is CV_32F)
//   if as_float is false => assume intensity is in range [0, 255] and round it (image is CV_8U)
// point curvature is used for the mask (assumed to be 1 or 0)
std::pair<cv::Mat, cv::Mat> ConvertGridToImage(pcl::PointCloud<pcl::PointXYZINormal>::Ptr grid, bool as_float)
{
    int rows = grid->height;
    int cols = grid->width;
    if ((rows <= 0) || (cols <= 0))
        return std::pair<cv::Mat, cv::Mat>(cv::Mat(), cv::Mat());

    // Initialize
    cv::Mat image = cv::Mat(rows, cols, as_float ? CV_32F : CV_8U);
    cv::Mat mask = cv::Mat(rows, cols, CV_8U);

    if (as_float)
    {
        for (int y = 0; y < image.rows; y++)
        {
            for (int x = 0; x < image.cols; x++)
            {
                image.at<float>(y, x) = grid->at(x, image.rows - y - 1).intensity;
                mask.at<uchar>(y, x) = 255 * grid->at(x, image.rows - y - 1).curvature;
            }
        }
    }
    else
    {
        for (int y = 0; y < image.rows; y++)
        {
            for (int x = 0; x < image.cols; x++)
            {
                image.at<uchar>(y, x) = (int)std::round(grid->at(x, image.rows - y - 1).intensity);
                mask.at<uchar>(y, x) = 255 * grid->at(x, image.rows - y - 1).curvature;
            }
        }
    }
    return std::pair<cv::Mat, cv::Mat>(image, mask);
}
// project an image back to a cloud (using the grid data)
// organized - whether the resulting cloud should be an organized cloud
pcl::PointCloud<pcl::PointXYZI>::Ptr BackProjectImage(cv::Mat image, pcl::PointCloud<pcl::PointXYZINormal>::Ptr grid, bool organized)
{
    if ((image.size().height != grid->height) || (image.size().width != grid->width))
    {
        assert(false);
        throw;
    }
    pcl::PointCloud<pcl::PointXYZI>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZI>);
    cloud->reserve(grid->height * grid->width);

    // order of iteration is critical for an organized target cloud
    for (auto r = image.size().height - 1; r >= 0; r--)
    {
        for (auto c = 0; c < image.size().width; c++)
        {
            pcl::PointXYZI point;
            // the grid's curvature field is the validity flag written by InterpolateToGrid
            if (grid->at(c, r).curvature > 0) // valid pixel
            {
                point.intensity = image.at<uchar>(image.rows - r - 1, c);
                point.getVector3fMap() = grid->at(c, r).getVector3fMap() + grid->at(c, r).getNormalVector3fMap();
            }
            else // invalid pixel
            {
                if (organized)
                {
                    point.intensity = 0;
                    point.x = std::numeric_limits<float>::quiet_NaN();
                    point.y = std::numeric_limits<float>::quiet_NaN();
                    point.z = std::numeric_limits<float>::quiet_NaN();
                }
                else
                {
                    continue;
                }
            }
            cloud->push_back(point);
        }
    }
    if (organized)
    {
        cloud->width = grid->width;
        cloud->height = grid->height;
    }
    return cloud;
}
usage example for working with the grid:
// image_mask is a std::pair<cv::Mat, cv::Mat>
auto image_mask = ConvertGridToImage(grid, false);
// ...
// do some work with the image/mask
// ...
auto new_cloud = BackProjectImage(image_mask.first, grid, false);
For an unorganized point cloud, height and width have different meanings, as you may have noticed: http://pointclouds.org/documentation/tutorials/basic_structures.php
It is not as simple to convert an unorganized point cloud to an image, as the points are stored as floats and there is no defined perspective. However, you can work around that by choosing a perspective and creating discrete bins for the points. A similar question and answer can be found here: Converting a pointcloud to a depth/multi channel image
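As a rough sketch of that binning idea (everything here is an arbitrary choice for illustration, not something prescribed by PCL or OpenCV: an orthographic view along Z, nearest-point-wins bins, a caller-chosen image size):
// needs <pcl/common/common.h>, <opencv2/opencv.hpp>, <limits> and <algorithm>
// Orthographic projection of an unorganized cloud onto the XY plane, binned into a depth image.
cv::Mat cloudToDepthImage(const pcl::PointCloud<pcl::PointXYZRGBA>::Ptr& cloud, int width, int height)
{
    pcl::PointXYZRGBA minPt, maxPt;
    pcl::getMinMax3D(*cloud, minPt, maxPt);
    cv::Mat depth(height, width, CV_32F, cv::Scalar(std::numeric_limits<float>::infinity()));
    for (size_t k = 0; k < cloud->points.size(); ++k)
    {
        const pcl::PointXYZRGBA& p = cloud->points[k];
        // map x/y into pixel coordinates of the chosen viewpoint
        int u = static_cast<int>((p.x - minPt.x) / (maxPt.x - minPt.x) * (width - 1));
        int v = static_cast<int>((p.y - minPt.y) / (maxPt.y - minPt.y) * (height - 1));
        // keep the closest point (smallest z) that falls into each bin
        depth.at<float>(v, u) = std::min(depth.at<float>(v, u), p.z);
    }
    return depth;
}
Pixels that stay at infinity received no points; going back to 3D then needs the inverse of the same mapping (or the projection-vector bookkeeping described in the answer above).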

OpenCV templates in 2D point data set

I was wondering what the best approach would be for detecting 'figures' in an array of 2D points.
In this example I have two 'templates': figure 1 is one template and figure 2 is another.
Each of these templates exists only as a vector of points with x,y coordinates.
Let's say we have a third vector of points with x,y coordinates.
What would be the best way to find and isolate the points in the third vector that match one of the first two templates (allowing for scaling and rotation)?
I have been trying nearest neighbours (FlannBasedMatcher) and even an SVM implementation, but I don't seem to get any results, and template matching doesn't seem to be the way to go either, I think. I am not working on images but only on 2D points in memory...
Especially because the input vector always has more points than the original data set to be compared with.
All it needs to do is find the points in the array that match a template.
I am not a 'specialist' in machine learning or OpenCV. I guess I am overlooking something from the beginning...
Thank you very much for your help/suggestions.
Just for fun I tried this:
Choose two points of the point dataset and compute the transformation mapping the first two pattern points to those points.
Test whether all transformed pattern points can be found in the data set.
This approach is very naive and has a complexity of O(m*n²), with n data points and a single pattern of size m (points). This complexity might increase for some nearest-neighbor search methods, so you have to consider whether it is efficient enough for your application.
Some improvements could include a heuristic to avoid trying all n² point combinations, but for that you need background information such as a maximal pattern scaling or something like that.
For evaluation I first created a pattern:
Then I created random points and added the pattern somewhere within (scaled, rotated and translated):
After some computation this method recognizes the pattern. The red line shows the chosen points for the transformation computation.
Here's the code:
// draw a set of points on a given destination image
void drawPoints(cv::Mat & image, std::vector<cv::Point2f> points, cv::Scalar color = cv::Scalar(255,255,255), float size=10)
{
    for(unsigned int i=0; i<points.size(); ++i)
    {
        cv::circle(image, points[i], 0, color, size);
    }
}
// assumes a 2x3 (affine) transformation (CV_32FC1). does not change the input points
std::vector<cv::Point2f> applyTransformation(std::vector<cv::Point2f> points, cv::Mat transformation)
{
    for(unsigned int i=0; i<points.size(); ++i)
    {
        const cv::Point2f tmp = points[i];
        points[i].x = tmp.x * transformation.at<float>(0,0) + tmp.y * transformation.at<float>(0,1) + transformation.at<float>(0,2);
        points[i].y = tmp.x * transformation.at<float>(1,0) + tmp.y * transformation.at<float>(1,1) + transformation.at<float>(1,2);
    }
    return points;
}
const float PI = 3.14159265359;

// similarity transformation uses the same scaling along both axes, a rotation and a translation part
cv::Mat composeSimilarityTransformation(float s, float r, float tx, float ty)
{
    cv::Mat transformation = cv::Mat::zeros(2,3,CV_32FC1);
    // compute rotation matrix and scale entries
    float rRad = PI*r/180.0f;
    transformation.at<float>(0,0) = s*cosf(rRad);
    transformation.at<float>(0,1) = s*sinf(rRad);
    transformation.at<float>(1,0) = -s*sinf(rRad);
    transformation.at<float>(1,1) = s*cosf(rRad);
    // translation
    transformation.at<float>(0,2) = tx;
    transformation.at<float>(1,2) = ty;
    return transformation;
}
// create random points
std::vector<cv::Point2f> createPointSet(cv::Size2i imageSize, std::vector<cv::Point2f> pointPattern, unsigned int nRandomDots = 50)
{
    // subtract center of gravity to allow more intuitive rotation
    cv::Point2f centerOfGravity(0,0);
    for(unsigned int i=0; i<pointPattern.size(); ++i)
    {
        centerOfGravity.x += pointPattern[i].x;
        centerOfGravity.y += pointPattern[i].y;
    }
    centerOfGravity.x /= (float)pointPattern.size();
    centerOfGravity.y /= (float)pointPattern.size();
    pointPattern = applyTransformation(pointPattern, composeSimilarityTransformation(1,0,-centerOfGravity.x, -centerOfGravity.y));

    // create random points
    //unsigned int nRandomDots = 0;
    std::vector<cv::Point2f> pointset;
    srand(time(NULL));
    for(unsigned int i=0; i<nRandomDots; ++i)
    {
        pointset.push_back(cv::Point2f(rand()%imageSize.width, rand()%imageSize.height));
    }
    cv::Mat image = cv::Mat::ones(imageSize,CV_8UC3);
    image = cv::Scalar(255,255,255);
    drawPoints(image, pointset, cv::Scalar(0,0,0));
    cv::namedWindow("pointset"); cv::imshow("pointset", image);

    // add the point pattern at a random location
    float scaleFactor = rand()%30 + 10.0f;
    float translationX = rand()%(imageSize.width/2) + imageSize.width/4;
    float translationY = rand()%(imageSize.height/2) + imageSize.height/4;
    float rotationAngle = rand()%360;
    std::cout << "s: " << scaleFactor << " r: " << rotationAngle << " t: " << translationX << "/" << translationY << std::endl;

    std::vector<cv::Point2f> transformedPattern = applyTransformation(pointPattern, composeSimilarityTransformation(scaleFactor,rotationAngle,translationX,translationY));
    //std::vector<cv::Point2f> transformedPattern = applyTransformation(pointPattern,trans);
    drawPoints(image, transformedPattern, cv::Scalar(0,0,0));
    drawPoints(image, transformedPattern, cv::Scalar(0,255,0),3);
    cv::imwrite("dataPoints.png", image);
    cv::namedWindow("pointset + pattern"); cv::imshow("pointset + pattern", image);

    for(unsigned int i=0; i<transformedPattern.size(); ++i)
        pointset.push_back(transformedPattern[i]);
    return pointset;
}
void programDetectPointPattern()
{
    cv::Size2i imageSize(640,480);

    // create a point pattern, this can be in any scale and any relative location
    std::vector<cv::Point2f> pointPattern;
    pointPattern.push_back(cv::Point2f(0,0));
    pointPattern.push_back(cv::Point2f(2,0));
    pointPattern.push_back(cv::Point2f(4,0));
    pointPattern.push_back(cv::Point2f(1,2));
    pointPattern.push_back(cv::Point2f(3,2));
    pointPattern.push_back(cv::Point2f(2,4));

    // transform the pattern so it can be drawn
    cv::Mat trans = cv::Mat::ones(2,3,CV_32FC1);
    trans.at<float>(0,0) = 20.0f; // scale x
    trans.at<float>(1,1) = 20.0f; // scale y
    trans.at<float>(0,2) = 20.0f; // translation x
    trans.at<float>(1,2) = 20.0f; // translation y

    // draw the pattern
    cv::Mat drawnPattern = cv::Mat::ones(cv::Size2i(128,128),CV_8U);
    drawnPattern *= 255;
    drawPoints(drawnPattern, applyTransformation(pointPattern, trans), cv::Scalar(0), 5);

    // display and save the pattern
    cv::imwrite("patternToDetect.png", drawnPattern);
    cv::namedWindow("pattern"); cv::imshow("pattern", drawnPattern);

    // draw the points and the included pattern
    std::vector<cv::Point2f> pointset = createPointSet(imageSize, pointPattern);
    cv::Mat image = cv::Mat(imageSize, CV_8UC3);
    image = cv::Scalar(255,255,255);
    drawPoints(image, pointset, cv::Scalar(0,0,0));

    // normally we would have to use some nearest neighbor distance computation, but to make it easier here,
    // we create a small area around every point, which allows us to test for point existence in a small neighborhood very efficiently (for small images)
    // in the real application this "inlier" check should be performed by a k-nearest-neighbor search with a distance threshold,
    // efficiently evaluated by a kd-tree
    cv::Mat pointImage = cv::Mat::zeros(imageSize,CV_8U);
    float maxDist = 3.0f; // how exactly must the pattern be recognized; can there be some "noise" in the position of the data points?
    drawPoints(pointImage, pointset, cv::Scalar(255), maxDist);
    cv::namedWindow("pointImage"); cv::imshow("pointImage", pointImage);

    // choose two points from the pattern (can be arbitrary, so just take the first two)
    cv::Point2f referencePoint1 = pointPattern[0];
    cv::Point2f referencePoint2 = pointPattern[1];
    cv::Point2f diff1; // difference vector
    diff1.x = referencePoint2.x - referencePoint1.x;
    diff1.y = referencePoint2.y - referencePoint1.y;
    float referenceLength = sqrt(diff1.x*diff1.x + diff1.y*diff1.y);
    diff1.x = diff1.x/referenceLength; diff1.y = diff1.y/referenceLength;
    std::cout << "reference: " << std::endl;
    std::cout << referencePoint1 << std::endl;

    // now try to find the pattern
    for(unsigned int j=0; j<pointset.size(); ++j)
    {
        cv::Point2f targetPoint1 = pointset[j];
        for(unsigned int i=0; i<pointset.size(); ++i)
        {
            cv::Point2f targetPoint2 = pointset[i];
            cv::Point2f diff2;
            diff2.x = targetPoint2.x - targetPoint1.x;
            diff2.y = targetPoint2.y - targetPoint1.y;
            float targetLength = sqrt(diff2.x*diff2.x + diff2.y*diff2.y);
            diff2.x = diff2.x/targetLength; diff2.y = diff2.y/targetLength;
            // with nearest-neighbor search this line will be similar, or the maximal neighbor distance must be relative to targetLength!
            if(targetLength < maxDist) continue;

            // scale:
            float s = targetLength/referenceLength;
            // rotation:
            float r = -180.0f/PI*(atan2(diff2.y,diff2.x) + atan2(diff1.y,diff1.x));
            // scale and rotate the reference point to compute the translation needed
            std::vector<cv::Point2f> origin;
            origin.push_back(referencePoint1);
            origin = applyTransformation(origin, composeSimilarityTransformation(s,r,0,0));
            // compute the translation which maps the two reference points on the two target points
            float tx = targetPoint1.x - origin[0].x;
            float ty = targetPoint1.y - origin[0].y;
            std::vector<cv::Point2f> transformedPattern = applyTransformation(pointPattern, composeSimilarityTransformation(s,r,tx,ty));

            // now test if all transformed pattern points can be found in the dataset
            bool found = true;
            for(unsigned int i=0; i<transformedPattern.size(); ++i)
            {
                cv::Point2f curr = transformedPattern[i];
                // here we check whether there is a point drawn in the image. If you have no image you will have to perform a nearest neighbor search.
                // this can be done with a balanced kd-tree in O(log n) time
                // building such a balanced kd-tree has to be done once for the whole dataset and needs O(n*log n) afair
                if((curr.x >= 0)&&(curr.x <= pointImage.cols-1)&&(curr.y>=0)&&(curr.y <= pointImage.rows-1))
                {
                    if(pointImage.at<unsigned char>(curr.y, curr.x) == 0) found = false;
                    // if working with a kd-tree: if nearest neighbor distance > maxDist => found = false;
                }
                else found = false;
            }

            if(found)
            {
                std::cout << composeSimilarityTransformation(s,r,tx,ty) << std::endl;
                cv::Mat currentIteration;
                image.copyTo(currentIteration);
                cv::circle(currentIteration,targetPoint1,5, cv::Scalar(255,0,0),1);
                cv::circle(currentIteration,targetPoint2,5, cv::Scalar(255,0,255),1);
                cv::line(currentIteration,targetPoint1,targetPoint2,cv::Scalar(0,0,255));
                drawPoints(currentIteration, transformedPattern, cv::Scalar(0,0,255),4);
                cv::imwrite("detectedPattern.png", currentIteration);
                cv::namedWindow("iteration"); cv::imshow("iteration", currentIteration); cv::waitKey(-1);
            }
        }
    }
}

Conversion from PointCloud to Mat

Let's say I initialize a point cloud. I want to store its RGB channels in OpenCV's Mat data type. How can I do that?
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGBA>); //Create a new cloud
pcl::io::loadPCDFile<pcl::PointXYZRGBA> ("cloud.pcd", *cloud);
Do I understand it right that you are only interested in the RGB values of the point cloud and don't care about its XYZ values?
Then you can do:
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGBA>); // Create a new cloud
pcl::io::loadPCDFile<pcl::PointXYZRGBA> ("cloud.pcd", *cloud);

cv::Mat result;
if (cloud->isOrganized()) {
    result = cv::Mat(cloud->height, cloud->width, CV_8UC3);
    if (!cloud->empty()) {
        for (int h = 0; h < result.rows; h++) {
            for (int w = 0; w < result.cols; w++) {
                pcl::PointXYZRGBA point = cloud->at(w, h);
                Eigen::Vector3i rgb = point.getRGBVector3i();
                result.at<cv::Vec3b>(h, w)[0] = rgb[2];
                result.at<cv::Vec3b>(h, w)[1] = rgb[1];
                result.at<cv::Vec3b>(h, w)[2] = rgb[0];
            }
        }
    }
}
I think it's enough to show the basic idea.
BUT this only works if your point cloud is organized:
An organized point cloud dataset is the name given to point clouds that resemble an organized image (or matrix) like structure, where the data is split into rows and columns. Examples of such point clouds include data coming from stereo cameras or Time Of Flight cameras. The advantages of an organized dataset is that by knowing the relationship between adjacent points (e.g. pixels), nearest neighbor operations are much more efficient, thus speeding up the computation and lowering the costs of certain algorithms in PCL. (Source)
I know how to convert from a Mat (3D image) to XYZRGB, and I think you can figure out the other way around. Here Q is the disparity-to-depth matrix.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr point_cloud_ptr (new pcl::PointCloud<pcl::PointXYZRGB>);
double px, py, pz;
uchar pr, pg, pb;
for (int i = 0; i < img_rgb.rows; i++)
{
    uchar* rgb_ptr = img_rgb.ptr<uchar>(i);
    uchar* disp_ptr = img_disparity.ptr<uchar>(i);
    double* recons_ptr = recons3D.ptr<double>(i);
    for (int j = 0; j < img_rgb.cols; j++)
    {
        // Get 3D coordinates
        uchar d = disp_ptr[j];
        if (d == 0) continue; // Discard bad pixels
        double pw = -1.0 * static_cast<double>(d) * Q32 + Q33;
        px = static_cast<double>(j) + Q03;
        py = static_cast<double>(i) + Q13;
        pz = Q23;
        // Normalize the points
        px = px / pw;
        py = py / pw;
        pz = pz / pw;
        // Get RGB info
        pb = rgb_ptr[3*j];
        pg = rgb_ptr[3*j+1];
        pr = rgb_ptr[3*j+2];
        // Insert info into the point cloud structure
        pcl::PointXYZRGB point;
        point.x = px;
        point.y = py;
        point.z = pz;
        uint32_t rgb = (static_cast<uint32_t>(pr) << 16 |
                        static_cast<uint32_t>(pg) << 8 |
                        static_cast<uint32_t>(pb));
        point.rgb = *reinterpret_cast<float*>(&rgb);
        point_cloud_ptr->points.push_back(point);
    }
}
point_cloud_ptr->width = (int) point_cloud_ptr->points.size();
point_cloud_ptr->height = 1;
I had the same problem and I managed to solve it!
You should first transform the coordinates so that your 'ground plane' is the X-O-Y plane.
The core API is pcl::getTransformationFromTwoUnitVectorsAndOrigin.
You can have a look at my question for details.
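If it helps, here is a hedged sketch of how that API is typically called (the three vectors below are placeholders; you would use your detected ground-plane normal and a point on the plane; the signature is from pcl/common/eigen.h as far as I recall):
#include <pcl/common/eigen.h>      // pcl::getTransformationFromTwoUnitVectorsAndOrigin
#include <pcl/common/transforms.h> // pcl::transformPointCloud

Eigen::Vector3f y_direction(0.0f, 1.0f, 0.0f); // placeholder: any unit vector lying in the ground plane
Eigen::Vector3f z_axis(0.0f, 0.0f, 1.0f);      // placeholder: the ground-plane normal
Eigen::Vector3f origin(0.0f, 0.0f, 0.0f);      // placeholder: a point on the ground plane

Eigen::Affine3f transform;
pcl::getTransformationFromTwoUnitVectorsAndOrigin(y_direction, z_axis, origin, transform);

// align the cloud so the ground plane becomes the X-O-Y plane
pcl::transformPointCloud(*cloud, *cloud_aligned, transform);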
good luck!