How can I segment a point cloud using normals? - c++

I am trying to segment a cylinder from a plane using conditional Euclidean Clustering (based on this documentation). To test this out I have generated a point cloud consisting of just a plane and a cylinder, shown below:
I have set up my custom Euclidean clustering as follows:
bool
enforceCurvatureSimilarity (const PointFull& point_a, const PointFull& point_b, float squared_distance)
{
  Eigen::Map<const Eigen::Vector3f> point_a_normal = point_a.getNormalVector3fMap (),
                                    point_b_normal = point_b.getNormalVector3fMap ();
  if (std::abs (point_a_normal.dot (point_b_normal)) > 0.7)
  {
    return (true);
  }
  return (false);
}
I calculate the normals and perform the clustering as follows:
// Set up a Normal Estimation class and merge data in cloud_with_normals
ne.setSearchMethod (search_tree);
// setKSearch seems faster than radius search -> is it less accurate?
ne.setKSearch (5);
//ne.setRadiusSearch (0.1);
// pcl::IntegralImageNormalEstimation<PointT, PointN> ne;
// ne.setMaxDepthChangeFactor(0.02f);
// ne.setNormalSmoothingSize(10.0f);
ne.setInputCloud (cloud);
ne.compute (*cloud_normals);
pcl::concatenateFields (*cloud, *cloud_normals, *cloud_with_normals);
// Set up a Conditional Euclidean Clustering class
pcl::ConditionalEuclideanClustering<PointFull> cec (true);
cec.setInputCloud (cloud_with_normals);
cec.setConditionFunction (&enforceCurvatureSimilarity);
//cec.setClusterTolerance (500.0);
cec.setClusterTolerance(0.05);
//Clusters that make up less than 0.1% of total cloud size considered too small
cec.setMinClusterSize (cloud_with_normals->size () / 1000);
//Clusters that make up more than 10% of total cloud size considered too large
cec.setMaxClusterSize (cloud_with_normals->size () *0.1);
cec.segment (*clusters);
cec.getRemovedClusters (small_clusters, large_clusters);
However, when I try and segment the point cloud, everything is seen as a single cluster. I can think of two reasons as to why this might be happening.
The first is incorrect tuning of the parameters. I have tried many different combinations for both the normal estimation and the clustering parameters, but have yet to see any real difference. At best, a few points on the cylinder end up in their own single-point clusters.
The second is that the normals around the base of the cylinder are being estimated incorrectly because of the hard edge. This could be giving them an angle they should not have, which breaks down the clustering. An example of this can be seen below:
How can I fix this so that the segmentation works correctly?

Maybe using DoN (Difference of Normals) segmentation would be better in this situation. Your point cloud is sparse, so even on the plane the normals are being estimated incorrectly.
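For reference, here is a minimal sketch of the DoN workflow, loosely following the PCL Difference-of-Normals segmentation tutorial. It assumes cloud is your pcl::PointCloud<pcl::PointXYZ>::Ptr, and the two radii are placeholders that need tuning for your data:
// needs <pcl/features/don.h>, <pcl/features/normal_3d_omp.h>, <pcl/common/io.h>, <limits>
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>);
tree->setInputCloud (cloud);

pcl::NormalEstimationOMP<pcl::PointXYZ, pcl::PointNormal> ne;
ne.setInputCloud (cloud);
ne.setSearchMethod (tree);
// The viewpoint must be identical for both scales so the two sets of normals are comparable.
ne.setViewPoint (std::numeric_limits<float>::max (),
                 std::numeric_limits<float>::max (),
                 std::numeric_limits<float>::max ());

// Normals computed with a small support radius (keeps fine detail such as the cylinder wall).
pcl::PointCloud<pcl::PointNormal>::Ptr normals_small (new pcl::PointCloud<pcl::PointNormal>);
ne.setRadiusSearch (0.02);   // placeholder small scale
ne.compute (*normals_small);

// Normals computed with a larger support radius (smooths over local detail).
pcl::PointCloud<pcl::PointNormal>::Ptr normals_large (new pcl::PointCloud<pcl::PointNormal>);
ne.setRadiusSearch (0.1);    // placeholder large scale
ne.compute (*normals_large);

// Difference of Normals: large magnitude = curved/edge regions, small magnitude = flat regions.
pcl::PointCloud<pcl::PointNormal>::Ptr don_cloud (new pcl::PointCloud<pcl::PointNormal>);
pcl::copyPointCloud (*cloud, *don_cloud);

pcl::DifferenceOfNormalsEstimation<pcl::PointXYZ, pcl::PointNormal, pcl::PointNormal> don;
don.setInputCloud (cloud);
don.setNormalScaleSmall (normals_small);
don.setNormalScaleLarge (normals_large);
if (don.initCompute ())
  don.computeFeature (*don_cloud);

// Afterwards, threshold on the DoN magnitude (e.g. with pcl::ConditionalRemoval)
// and run Euclidean clustering on the points that survive.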

Related

Why does pcl::PointNormal RANSAC cylinder model state that it does not contain normals when estimating model coefficients?

I have been trying to fit a cylinder model to a generated point cloud using the pcl tools. I have been adapting the example code that is provided in the documentation here.
From my understanding, pcl::SampleConsensusModelCylinder requires normals data, so I have added a new check in the code that looks for a -cf argument. The code then calculates the normals of each point into a pcl::Normal-type point cloud called cloud_normals and concatenates this with the original pcl::PointXYZ-type point cloud, cloud. I save this to a new pcl::PointNormal-type point cloud called cloud_normalpoints, and with this I attempt to fit the pcl::SampleConsensusModelCylinder using RAndom SAmple Consensus (RANSAC).
I have included the code below:
else if (pcl::console::find_argument (argc, argv, "-cf") >= 0)
{
  //TODO: find fastest way to fit cylinder model to point cloud
  // the cloud_normals point cloud will be used to store the normals
  pcl::PointCloud<pcl::Normal>::Ptr cloud_normals (new pcl::PointCloud<pcl::Normal>);
  cloud_normals->width = 500;
  cloud_normals->height = 1;
  // is_dense is true only if no points have NaN or Inf in any of their floating-point fields
  cloud_normals->is_dense = false;
  cloud_normals->points.resize (cloud_normals->width * cloud_normals->height);

  // the NormalEstimation object ne estimates the normals and curvature at each point
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  // the search::KdTree object is the search method for finding neighbours in 3D space
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ> ());
  // use the filtered cloud as the input to the normal estimator
  ne.setInputCloud (cloud);
  // set the number of k-nearest neighbours used for feature estimation to 50
  ne.setKSearch (50);
  // compute the normals and save them to the cloud_normals point cloud
  ne.compute (*cloud_normals);

  pcl::PointCloud<pcl::PointNormal>::Ptr cloud_normalpoints (new pcl::PointCloud<pcl::PointNormal>);
  cloud_normalpoints->width = 500;
  cloud_normalpoints->height = 1;
  // is_dense is true only if no points have NaN or Inf in any of their floating-point fields
  cloud_normalpoints->is_dense = false;
  cloud_normalpoints->points.resize (cloud_normalpoints->width * cloud_normalpoints->height);
  pcl::concatenateFields (*cloud, *cloud_normals, *cloud_normalpoints);

  //TODO: Solve normals not given error
  pcl::SampleConsensusModelCylinder<pcl::PointNormal, pcl::Normal>::Ptr
    model_c (new pcl::SampleConsensusModelCylinder<pcl::PointNormal, pcl::Normal> (cloud_normalpoints));
  // Declares ransac as a RANSAC implementation searching for a cylinder (according to model_c) in the cloud
  pcl::RandomSampleConsensus<pcl::PointNormal> ransac (model_c);
  // Set distance threshold of 0.01 for inliers
  ransac.setDistanceThreshold (.01);
  // Compute model coefficients and find inliers
  ransac.computeModel ();
  // Return indices of best set of inliers so far for this model
  ransac.getInliers (inliers);
}
I also added some more code to generate an original point cloud containing a cylinder, but this works so I will not go into it here.
When I run my code it gets to the compute model stage and then throws the following error:
[pcl::SampleConsensusModelCylinder::computeModelCoefficients] No input dataset containing normals was given!
Does anyone know why this is? The cloud_normalpoints cloud includes the normal data that was found for each point. Should I be setting up the RANSAC estimator differently? Should I use a different point type? I am relatively new to pcl so any help would be greatly appreciated!
You have to call setInputNormals on your model_c, passing it cloud_normals.
The cloud you pass in the constructor of SampleConsensusModelCylinder only provides the XYZ information; it is not used for the normals.
This tutorial could also be interesting for you: https://pcl.readthedocs.io/projects/tutorials/en/latest/cylinder_segmentation.html
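As a minimal sketch of that fix, reusing the variable names from the question (model_c, cloud_normals, cloud_normalpoints, inliers), the model setup could look like this:
pcl::SampleConsensusModelCylinder<pcl::PointNormal, pcl::Normal>::Ptr
  model_c (new pcl::SampleConsensusModelCylinder<pcl::PointNormal, pcl::Normal> (cloud_normalpoints));
// This is the call the error message is complaining about:
model_c->setInputNormals (cloud_normals);

pcl::RandomSampleConsensus<pcl::PointNormal> ransac (model_c);
ransac.setDistanceThreshold (0.01);
ransac.computeModel ();
ransac.getInliers (inliers);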

Wall detection using PCL and RANSAC

I have been working on wall detection in PCD (Point Cloud Data) file using PCL (Point Cloud Library). The PCD file has been generated through a depth camera. I found that in many of the similar applications e.g. floor detection, RANSAC has been used. So, I thought of applying RANSAC here as well and I tried my best to understand RANSAC in-general but I still have certain questions pertaining to my application:
In brief, RANSAC tries to remove the outliers in the given data and generalize the inliers through a model, iteratively. So, in the case of floor detection, would it consider the points corresponding to all other objects as outliers and the floor points as inliers? Is the same true for walls?
As per the Plane model segmentation tutorial by PCL, RANSAC gives the coefficients of the model plane, i.e. a, b, c, and d in the plane equation a*x + b*y + c*z + d = 0, through the coefficients->values vector. So I assume that in the case of wall detection it would try to give the equation of the plane corresponding to the wall. However, what if the depth camera is at the corner of a room and the top view of the walls looks like this:
             wall 1
       ______________
       |
       |
       | wall 2
       |
       |
        So, in this case, what would the resultant model plane look like? Would it be a kind of hypotenuse (making a triangle)?
             wall 1
       ---------------------
                                |
                                | wall 2
                                |
                                 ----------------------
                                
         wall 3
        Even in this case, what would it look like?
As per the Extracting indices from a PointCloud tutorial by PCL, the pcl::ExtractIndices filter is used to extract a subset of points from a point cloud based on the indices output by a segmentation algorithm. But what exactly is this filter doing? In the case of floor detection or wall detection (assuming there is only one straight wall), RANSAC already gives the equation of one plane. So is there any need to use that filter? If yes, then why and how?
How can I detect multiple walls in the following case? Can the pcl::ExtractIndices filter do this? If yes, then how?
             wall 1
       ---------------------
                                |
                                | wall 2
                                |
                                 ----------------------
                                
         wall 3
If you think there are better approaches than RANSAC, please let me know as well.
Answers to a few of the questions you asked:
As far as I know, when using the plane model RANSAC randomly chooses 3 points from the cloud and considers the plane through them (this is a simplified statement that is refined below). All points whose perpendicular distance to this plane is smaller than the given threshold are considered inliers.
The algorithm gives back the plane that contains the most points (the plane found also depends on the iteration count you choose; if it is too low, the largest plane may be missed).
For walls the story is the same. You can search for planes, but you should choose the search directions well. Walls are usually perpendicular to the x-y plane, and the parameters should be set with that in mind.
Example:
pcl::SACSegmentation<pcl::PointXYZI> seg;
// helper members used by the functions below
pcl::ExtractIndices<pcl::PointXYZI> extract;
pcl::ModelCoefficients::Ptr coefficients (new pcl::ModelCoefficients);
Eigen::Vector3f axis;
//HELPER VARIABLES
float angle = 12.0;

void set_segmentation (float threshold, int max_iteration, float probability)
{
  seg.setModelType (pcl::SACMODEL_PERPENDICULAR_PLANE);
  seg.setMethodType (pcl::SAC_RANSAC);
  // set threshold and other parameters
  seg.setDistanceThreshold (threshold); // distance needs to be adjusted according to the object
  seg.setMaxIterations (max_iteration);
  seg.setProbability (probability);
}

PlaneSegment (float x, float y, float z, float set_angle, float threshold,
              int max_iteration, float probability)
{
  axis = Eigen::Vector3f (x, y, z);
  angle = set_angle;
  seg.setAxis (axis);
  seg.setEpsAngle (angle * (3.1415 / 180.0f));
  //SET SEGMENTATION
  set_segmentation (threshold, max_iteration, probability);
}

pcl::PointIndices::Ptr segment_plane (pcl::PointCloud<pcl::PointXYZI>::Ptr cloud,
                                      pcl::PointIndices::Ptr inliers)
{
  seg.setInputCloud (cloud);
  seg.segment (*inliers, *coefficients);
  if (inliers->indices.size () == 0)
  {
    PCL_ERROR ("COULD NOT ESTIMATE PLANAR MODEL.\n");
    exit (-1);
  }
  return inliers;
}

pcl::PointCloud<pcl::PointXYZI>::Ptr
extraction (pcl::PointCloud<pcl::PointXYZI>::Ptr cloud, pcl::PointIndices::Ptr inliers)
{
  extract.setInputCloud (cloud);
  extract.setIndices (inliers);
  extract.setNegative (true);  // keep everything EXCEPT the plane inliers
  extract.filter (*cloud);
  return cloud;
}
You can set an acceptance angle, in this case 12 degrees, and also the searching directions based on the axis.
For your second point:
In the case of multiple walls, it will give back the one that contains the most points. But you should be able to extract the other planes as well if needed (advice: save all planes that contain more points than a threshold you choose).
I looked into your problem, and this is also a solution for it: pcl::RANSAC segmentation, get all planes in cloud?. The first answer there is very good.
Third point:
Check the example code. Note that this is a class, which is why there is a constructor. The segment_plane function returns the inliers; based on that you can call the extraction function, which removes the inliers from the cloud (see the usage sketch below). This is a very simple and fast solution, and you avoid having to work with the coefficient values. Also, if you don't want to remove the inliers, you can just colour them by iterating through them and setting their intensity to a chosen value.
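A possible usage sketch of the class above, peeling off planes one by one. It assumes the class is called PlaneSegment, that the usual PCL headers (including <pcl/common/io.h> for copyPointCloud) are available, and that the size thresholds and constructor arguments are placeholders:
pcl::PointCloud<pcl::PointXYZI>::Ptr working (new pcl::PointCloud<pcl::PointXYZI> (*cloud));
std::vector<pcl::PointCloud<pcl::PointXYZI>::Ptr> walls;

PlaneSegment segmenter (1.0f, 0.0f, 0.0f,    // axis: planes perpendicular to x (wall normals along x)
                        12.0f,               // acceptance angle in degrees
                        0.05f, 1000, 0.99f); // threshold, max iterations, probability

while (working->size () > 500)               // stop when too few points remain
{
  pcl::PointIndices::Ptr inliers (new pcl::PointIndices);
  segmenter.segment_plane (working, inliers);
  if (inliers->indices.size () < 500)        // plane too small to be a wall
    break;

  // keep a copy of the detected wall, then remove its points from the working cloud
  pcl::PointCloud<pcl::PointXYZI>::Ptr wall (new pcl::PointCloud<pcl::PointXYZI>);
  pcl::copyPointCloud (*working, inliers->indices, *wall);
  walls.push_back (wall);
  segmenter.extraction (working, inliers);
}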
The RANSAC algorithm can be robust, but sometimes it simply does not work, and it can be slow because of the iteration count.
There are also other ways to solve this problem.
Just an example: consider a grid below the cloud, i.e. a lot of equally sized square cells. In each cell you check the minimum and maximum point heights. Based on this you can identify the ground plane (the maximum and minimum heights in a ground cell are only slightly different and close to each other) or the walls (in a wall cell you can assume the points are distributed over the cell's height, so the difference between the maximum and minimum is large).
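A rough sketch of that grid idea (all names and thresholds here are illustrative, not from any PCL API): bin the points into x-y cells and use the height spread per cell to label it ground-like or wall-like.
#include <algorithm>
#include <cmath>
#include <map>
#include <utility>

struct CellStats { float min_z = 1e9f; float max_z = -1e9f; };

const float cell_size = 0.25f;                    // placeholder cell edge length
std::map<std::pair<int, int>, CellStats> grid;

for (const auto& p : cloud->points)
{
  const auto key = std::make_pair (static_cast<int> (std::floor (p.x / cell_size)),
                                   static_cast<int> (std::floor (p.y / cell_size)));
  CellStats& c = grid[key];
  c.min_z = std::min (c.min_z, p.z);
  c.max_z = std::max (c.max_z, p.z);
}

for (const auto& kv : grid)
{
  const float spread = kv.second.max_z - kv.second.min_z;
  const bool ground_like = spread < 0.05f;        // small vertical spread -> ground cell
  const bool wall_like   = spread > 0.5f;         // large vertical spread -> wall cell
  // ... collect the points of ground-like / wall-like cells as needed ...
}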
Best regards.

Which pcl filter to use to downsample a point cloud

I am getting a point cloud from a lidar on an autonomous driving robot, but it's too much data to process.
I already implemented a passthrough filter.
It gave a very good result, and I was asking myself whether there are other filters or methods I could dig into.
Of course I'm not looking for anything specific but rather a direction or advice, because I'm pretty new to the pcl library and it seems pretty huge.
Here is my filter now:
pcl::PointCloud<PointXYZIR>::Ptr cloudInput;
cloudInput.reset(new pcl::PointCloud<PointXYZIR> (cloud_in));
pcl::PointCloud<PointXYZIR>::Ptr cloudFiltered;
cloudFiltered.reset(new pcl::PointCloud<PointXYZIR>);
// Create the filtering object: downsample the dataset using a leaf size
pcl::VoxelGrid<PointXYZIR> avg;
avg.setInputCloud(cloudInput);
avg.setLeafSize(0.25f, 0.25f, 0.25f);
avg.filter(*cloudFiltered);
//Filter object
pcl::PassThrough<PointXYZIR> filter;
filter.setInputCloud(cloudFiltered);
filter.setFilterFieldName("x");
filter.setFilterLimits(-100, 100);
filter.filter(*cloudFiltered);
filter.setFilterFieldName("y");
filter.setFilterLimits(-100, 100);
filter.filter(*cloudFiltered);
cloud_out = *cloudFiltered;
I actually did find a solution, but there is no general one: in my case the right choice is very specific to the point cloud you get and what you want to do with it.
The passthrough filter is a very efficient way to downsample without taking too much risk of losing interesting data.
http://pointclouds.org/documentation/tutorials/passthrough.php
Then I tested StatisticalOutlierRemoval; it is efficient, but not relevant in my case.
http://pointclouds.org/documentation/tutorials/statistical_outlier.php
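For reference, a minimal StatisticalOutlierRemoval sketch (it assumes the PointXYZIR cloud pointers from the code above and needs <pcl/filters/statistical_outlier_removal.h>; the parameter values are placeholders):
pcl::StatisticalOutlierRemoval<PointXYZIR> sor;
sor.setInputCloud (cloudFiltered);
sor.setMeanK (50);               // number of neighbours analysed per point
sor.setStddevMulThresh (1.0);    // points farther than mean + 1.0 * stddev are removed
sor.filter (*cloudFiltered);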
Now I downsample the point cloud with a voxel grid (leaf size), then I create a kd-tree to filter the points by radius.
It is about the same amount of computation as the passthrough filter, but it makes more sense for my project to do it this way.
// Create the filtering object: downsample the dataset using a leaf size
pcl::VoxelGrid<PointXYZIR> avg;
avg.setInputCloud (cloudInput);
avg.setLeafSize (0.25f, 0.25f, 0.25f);
avg.filter (*cloudFiltered);

// searchPoint
PointXYZIR searchPoint = cloudFiltered->at (0);

// results from radiusSearch()
std::vector<int> pointIdxRadiusSearch;
std::vector<float> pointRadiusSquaredDistance;

// kdTree
pcl::KdTreeFLANN<PointXYZIR> kdtree;
kdtree.setInputCloud (cloudFiltered);
kdtree.setSortedResults (true);

if (kdtree.radiusSearch (searchPoint, 100, pointIdxRadiusSearch, pointRadiusSquaredDistance) > 0)
{
  // keep every point that lies within the search radius
  for (size_t j = 0; j < pointIdxRadiusSearch.size (); ++j)
  {
    cloud_out.push_back (cloudFiltered->points[pointIdxRadiusSearch[j]]);
  }
}
A voxel grid for downsampling should maintain a pretty good cloud distribution while reducing the number of points. You can set how small the voxels are along each axis in order to keep as much or as little resolution as you want. Each voxel replaces all the points that fall inside it with a single point averaged from them.
http://pointclouds.org/documentation/tutorials/voxel_grid.php#voxelgrid

Point Feature Histograms in PCL library output interpretation

I am using the Point Cloud Library (PCL). In particular, I am trying to compute point feature histograms. I followed this code from the website:
#include <pcl/point_types.h>
#include <pcl/features/pfh.h>
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal> ());

  ... read, pass in or create a point cloud with normals ...
  ... (note: you can create a single PointCloud<PointNormal> if you want) ...

  // Create the PFH estimation class, and pass the input dataset+normals to it
  pcl::PFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::PFHSignature125> pfh;
  pfh.setInputCloud (cloud);
  pfh.setInputNormals (normals);
  // alternatively, if cloud is of type PointNormal, do pfh.setInputNormals (cloud);

  // Create an empty kdtree representation, and pass it to the PFH estimation object.
  // Its content will be filled inside the object, based on the given input dataset (as no other search surface is given).
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ> ());
  //pcl::KdTreeFLANN<pcl::PointXYZ>::Ptr tree (new pcl::KdTreeFLANN<pcl::PointXYZ> ()); -- older call for PCL 1.5-
  pfh.setSearchMethod (tree);

  // Output datasets
  pcl::PointCloud<pcl::PFHSignature125>::Ptr pfhs (new pcl::PointCloud<pcl::PFHSignature125> ());

  // Use all neighbors in a sphere of radius 5cm
  // IMPORTANT: the radius used here has to be larger than the radius used to estimate the surface normals!!!
  pfh.setRadiusSearch (0.05);

  // Compute the features
  pfh.compute (*pfhs);

  // pfhs->points.size () should have the same size as the input cloud->points.size ()*
}
The output I get is an array of 125 values per point of the original point cloud. For example, if I have a point cloud with 1000 points, where each point contains XYZ, then there will be 1000 * 125 values. I was able to understand why I have 125 entries, each corresponding to a bin (assuming 3 features and 5 divisions each, 5^3 = 125).
This post helped some: PCL Point Feature Histograms - binning
Unfortunately I still have a few questions:
1) Why do I have a 125-bin histogram per point? Is it because it measures what percentage of the points in the k-nearest neighbourhood of the current point have similar features, and each point has its own neighbourhood?
2) For some points, all 125 entries are zeros. Why?
3) Graphing the point feature histogram values as shown in the paper and website:
Website:
http://pointclouds.org/documentation/tutorials/pfh_estimation.php#pfh-estimation
Paper:
https://pdfs.semanticscholar.org/5aee/411f0b4228ba63c85df0e8ed64cab5844aed.pdf
The graphs shown have the bin number on their X axis (in my case 125 bins), so the natural question is: how do we consolidate the 125 values per point into one graph?
I tried a simple summation of the corresponding columns, scaled by a constant, but I do not think it is right. By summation I mean: add bin[0] over every point, then bin[1] over every point, and so on up to bin[124].
I would really appreciate any help to clarify this.
Thank you.
The PFH descriptor is a local descriptor, so the histogram is computed for each point it is given. You will likely want to use only a keypoint or a set of keypoints.
The histogram will have entries of 0 if it has no nearest neighbors within the radius search.
As for graphing, try viewing the histogram for one point at a time. I don't think it makes sense to consolidate it into one graph.
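For example, a tiny sketch of inspecting a single point's 125-bin histogram (pfhs is the output cloud from the code above; index 0 is arbitrary):
const pcl::PFHSignature125& sig = pfhs->points[0];
for (int b = 0; b < 125; ++b)
  std::cout << b << " " << sig.histogram[b] << "\n";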
If you are interested in global descriptors that take into account all points, take a look at the CVFH descriptor (Clustered Viewpoint Feature Histogram) or other global ones.

OpenCV 3.0: Calibration not fitting as expected

I'm getting results I don't expect when I use OpenCV 3.0 calibrateCamera. Here is my algorithm:
Load in 30 image points
Load in 30 corresponding world points (coplanar in this case)
Use points to calibrate the camera, just for un-distorting
Un-distort the image points, but don't use the intrinsics (coplanar world points, so intrinsics are dodgy)
Use the undistorted points to find a homography, transforming to world points (can do this because they are all coplanar)
Use the homography and perspective transform to map the undistorted points to the world space
Compare the original world points to the mapped points
The points I have are noisy and only a small section of the image. There are 30 coplanar points from a single view so I can't get camera intrinsics, but should be able to get distortion coefficients and a homography to create a fronto-parallel view.
As expected, the error varies depending on the calibration flags. However, it varies opposite to what I expected. If I allow all variables to adjust, I would expect error to come down. I am not saying I expect a better model; I actually expect over-fitting, but that should still reduce error. What I see though is that the fewer variables I use, the lower my error. The best result is with a straight homography.
I have two suspected causes, but they seem unlikely and I'd like to hear an unadulterated answer before I air them. I have pulled out the code to just do what I'm talking about. It's a bit long, but it includes loading the points.
The code doesn't appear to have bugs; I've used "better" points and it works perfectly. I want to emphasize that the solution here can't be to use better points or perform a better calibration; the whole point of the exercise is to see how the various calibration models respond to different qualities of calibration data.
Any ideas?
Added
To be clear, I know the results will be bad and I expect that. I also understand that I may learn bad distortion parameters which lead to worse results when testing points that have not been used to train the model. What I don't understand is how the distortion model can have more error when using the training set as the test set. That is, cv::calibrateCamera is supposed to choose parameters that reduce the error over the training set of points provided, yet it produces more error than if it had just selected 0s for K1, K2, ... K6, P1, P2. Bad data or not, it should at least do better on the training set. Before I can say the data is not appropriate for this model, I have to be sure I'm doing the best I can with the data available, and I can't say that at this stage.
Here an example image
The points with the green pins are marked. This is obviously just a test image.
Here is more example stuff
In the following the image is cropped from the big one above. The centre has not changed. This is what happens when I undistort with just the points marked manually from the green pins and allowing K1 (only K1) to vary from 0:
Before
After
I would put it down to a bug, but when I use a larger set of points that covers more of the screen, even from a single plane, it works reasonably well. This looks terrible. However, the error is not nearly as bad as you might think from looking at the picture.
// Load image points
std::vector<cv::Point2f> im_points;
im_points.push_back(cv::Point2f(1206, 1454));
im_points.push_back(cv::Point2f(1245, 1443));
im_points.push_back(cv::Point2f(1284, 1429));
im_points.push_back(cv::Point2f(1315, 1456));
im_points.push_back(cv::Point2f(1352, 1443));
im_points.push_back(cv::Point2f(1383, 1431));
im_points.push_back(cv::Point2f(1431, 1458));
im_points.push_back(cv::Point2f(1463, 1445));
im_points.push_back(cv::Point2f(1489, 1432));
im_points.push_back(cv::Point2f(1550, 1461));
im_points.push_back(cv::Point2f(1574, 1447));
im_points.push_back(cv::Point2f(1597, 1434));
im_points.push_back(cv::Point2f(1673, 1463));
im_points.push_back(cv::Point2f(1691, 1449));
im_points.push_back(cv::Point2f(1708, 1436));
im_points.push_back(cv::Point2f(1798, 1464));
im_points.push_back(cv::Point2f(1809, 1451));
im_points.push_back(cv::Point2f(1819, 1438));
im_points.push_back(cv::Point2f(1925, 1467));
im_points.push_back(cv::Point2f(1929, 1454));
im_points.push_back(cv::Point2f(1935, 1440));
im_points.push_back(cv::Point2f(2054, 1470));
im_points.push_back(cv::Point2f(2052, 1456));
im_points.push_back(cv::Point2f(2051, 1443));
im_points.push_back(cv::Point2f(2182, 1474));
im_points.push_back(cv::Point2f(2171, 1459));
im_points.push_back(cv::Point2f(2164, 1446));
im_points.push_back(cv::Point2f(2306, 1474));
im_points.push_back(cv::Point2f(2292, 1462));
im_points.push_back(cv::Point2f(2278, 1449));
// Create corresponding world / object points
std::vector<cv::Point3f> world_points;
for (int i = 0; i < 30; i++) {
    world_points.push_back(cv::Point3f(5 * (i / 3), 4 * (i % 3), 0.0f));
}
// Perform calibration
// Flags are set out so they can be commented out and "freed" easily
int calibration_flags = 0
| cv::CALIB_FIX_K1
| cv::CALIB_FIX_K2
| cv::CALIB_FIX_K3
| cv::CALIB_FIX_K4
| cv::CALIB_FIX_K5
| cv::CALIB_FIX_K6
| cv::CALIB_ZERO_TANGENT_DIST
| 0;
// Initialise matrix
cv::Mat intrinsic_matrix = cv::Mat(3, 3, CV_64F);
// the matrix is CV_64F, so write doubles rather than floats
intrinsic_matrix.at<double>(0, 0) = 1;
intrinsic_matrix.at<double>(1, 1) = 1;
cv::Mat distortion_coeffs = cv::Mat::zeros(5, 1, CV_64F);
// Rotation and translation vectors
std::vector<cv::Mat> undistort_rvecs;
std::vector<cv::Mat> undistort_tvecs;
// Wrap in an outer vector for calibration
std::vector<std::vector<cv::Point2f>>im_points_v(1, im_points);
std::vector<std::vector<cv::Point3f>>w_points_v(1, world_points);
// Calibrate; only 1 plane, so intrinsics can't be trusted
cv::Size image_size(4000, 3000);
calibrateCamera(w_points_v, im_points_v,
image_size, intrinsic_matrix, distortion_coeffs,
undistort_rvecs, undistort_tvecs, calibration_flags);
// Undistort im_points
std::vector<cv::Point2f> ud_points;
cv::undistortPoints(im_points, ud_points, intrinsic_matrix, distortion_coeffs);
// ud_points have been "unintrinsiced", but we don't know the intrinsics, so reverse that
double fx = intrinsic_matrix.at<double>(0, 0);
double fy = intrinsic_matrix.at<double>(1, 1);
double cx = intrinsic_matrix.at<double>(0, 2);
double cy = intrinsic_matrix.at<double>(1, 2);
for (std::vector<cv::Point2f>::iterator iter = ud_points.begin(); iter != ud_points.end(); iter++) {
    iter->x = iter->x * fx + cx;
    iter->y = iter->y * fy + cy;
}
// Find a homography mapping the undistorted points to the known world points, ground plane
cv::Mat homography = cv::findHomography(ud_points, world_points);
// Transform the undistorted image points to the world points (2d only, but z is constant)
std::vector<cv::Point2f> estimated_world_points;
std::cout << "homography" << homography << std::endl;
cv::perspectiveTransform(ud_points, estimated_world_points, homography);
// Work out error
double sum_sq_error = 0;
for (int i = 0; i < 30; i++) {
    double err_x = estimated_world_points.at(i).x - world_points.at(i).x;
    double err_y = estimated_world_points.at(i).y - world_points.at(i).y;
    sum_sq_error += err_x*err_x + err_y*err_y;
}
std::cout << "Sum squared error is: " << sum_sq_error << std::endl;
I would take random samples of the 30 input points, compute the homography for each sample, and look at the errors under the estimated homographies, i.e. a RANSAC scheme, and then verify the consensus between error levels and homography parameters; this can simply serve as a verification of the global optimisation process. I know that might seem unnecessary, but it is just a sanity check for how sensitive the procedure is to the input (noise levels, location).
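For what it's worth, cv::findHomography can already run that random-sampling loop internally; a small sketch, assuming the coplanar world points have been flattened to 2D (z = 0) and ud_points are the undistorted image points from the question:
std::vector<cv::Point2f> world_2d;
for (const auto& w : world_points)
    world_2d.emplace_back(w.x, w.y);

// RANSAC-fit the homography; inlier_mask marks which of the 30 points reached consensus
cv::Mat inlier_mask;
cv::Mat H = cv::findHomography(ud_points, world_2d, cv::RANSAC, 3.0, inlier_mask);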
Also, it seems logical that fixing most of the variables gives you the smallest errors, since there are fewer degrees of freedom in the minimization process. I would try fixing different subsets to establish another consensus. At least this would tell you which variables are most sensitive to the noise level of the input.
Hopefully such a small section of the image is close to the image centre, as it will incur the least lens distortion. Is using a different distortion model possible in your case? A more viable way is to adapt the number of distortion parameters to the position of the pattern with respect to the image centre.
Without knowing the constraints of the algorithm I might have misunderstood the question; that is also a possibility, in which case I can roll back.
I would rather have posted this as a comment, but I do not have enough reputation points.
OpenCV runs the Levenberg-Marquardt algorithm inside calibrateCamera.
https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm/
This algorithm works fine on problems with a single minimum. With a single image, points located close to each other, and a high-dimensional problem (n = number of coefficients), the algorithm can be unstable (especially with a wrong initial guess of the camera matrix). The convergence of the algorithm is well described here:
https://na.math.kit.edu/download/papers/levenberg.pdf/
As you wrote, the error depends on the calibration flags: the number of flags changes the dimensionality of the problem being optimized.
Camera calibration also calculates the pose of the camera, which will be bad in models with a wrong calibration matrix.
As a solution I suggest changing the approach. You don't need to calculate the camera matrix and pose in this step. Since you know that the points are located on a plane, you can use the 3D-to-2D plane projection equation to determine the expected distribution of the points; by distribution I mean that all the points should land on some kind of trapezoid.
Then you can use cv::undistort with different distCoeffs on your test image and calculate the image point distribution and the distribution error.
The last step would be to use these steps as the objective function for an optimization algorithm, with the distortion coefficients as the variables being optimized.
This is not the easiest solution, but I hope it will help you.
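A hedged sketch of such an objective function, reusing the question's variable names; it stands in for the "plane projection plus distribution error" idea above by undistorting with candidate coefficients, fitting a homography to the known coplanar points, and returning the residual (cv::undistortPoints with P set to the camera matrix keeps the result in pixel-like coordinates):
double planarity_error(const cv::Mat& dist_coeffs,
                       const std::vector<cv::Point2f>& im_points,
                       const std::vector<cv::Point3f>& world_points,
                       const cv::Mat& intrinsic_matrix)
{
    // Undistort with the candidate coefficients, re-applying the intrinsics via P
    std::vector<cv::Point2f> ud_points;
    cv::undistortPoints(im_points, ud_points, intrinsic_matrix, dist_coeffs,
                        cv::noArray(), intrinsic_matrix);

    // Drop z (the world points are coplanar at z = 0)
    std::vector<cv::Point2f> world_2d;
    for (const auto& w : world_points)
        world_2d.emplace_back(w.x, w.y);

    // Homography from the undistorted points to the plane, then measure the residual
    cv::Mat H = cv::findHomography(ud_points, world_2d);
    std::vector<cv::Point2f> mapped;
    cv::perspectiveTransform(ud_points, mapped, H);

    double sum_sq = 0.0;
    for (size_t i = 0; i < mapped.size(); ++i) {
        const double dx = mapped[i].x - world_2d[i].x;
        const double dy = mapped[i].y - world_2d[i].y;
        sum_sq += dx * dx + dy * dy;
    }
    return sum_sq;   // feed this to a scalar optimizer over dist_coeffs
}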