Win 11 Pro
PCL 1.12.1
Visual Studio
I want to segment a plane within a point cloud using the SACSegmentation module, but I get an unexpected result. Although I have since worked around the problem, I would like to know what is causing it. I want to segment the ground (shown by the blue line set), but the code returns the wrong plane, shown with the red lines.
Here are the code and the PCD file:
https://github.com/PointCloudLibrary/pcl/files/8101342/compress.zip
After downsampling the point cloud with the UniformSampling module, the segmentation produces the correct result. Is it possible that the size of the point cloud has some influence?
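For reference, a minimal SACSegmentation setup for extracting a plane looks roughly like the sketch below; the distance threshold and iteration count are placeholders, not the values used in the linked code.

// Minimal sketch of RANSAC plane segmentation with SACSegmentation.
// The parameter values below are assumptions, not those from the linked code.
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/segmentation/sac_segmentation.h>

void segmentGround(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud)
{
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);

    seg.setOptimizeCoefficients(true);
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setMaxIterations(1000);        // assumed value
    seg.setDistanceThreshold(0.05);    // assumed value, in the cloud's units
    seg.setInputCloud(cloud);
    seg.segment(*inliers, *coefficients);  // inliers hold the detected plane
}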
I am trying to run a SLAM algorithm (ElasticFusion) using my custom .klg file.
I tried the following two ways:
The first way was to build the .klg file manually from separate depth and RGB image (.png) files and their timestamp information. I tried the conversion script on this 'freiburg1_desk' sequence dataset and then ran ElasticFusion; I got a good result and point cloud. But when I recorded the environment with my own device following the same steps, I did not get the desired result or point cloud. The result I get in live logging is much better. I guess it is because of the code I am using for depth image visualization:
import numpy as np

def raw_depth_to_depth_image(depth):  # wrapper added for completeness; the name is illustrative
    # Clamp the raw Kinect disparity values to the valid range.
    np.clip(depth, 0, 2**10 - 1, depth)
    # depth = 0.1236 * tan(raw / 2842.5 + 1.1863) - 0.037  (meters)
    depth2 = depth / 2842.5
    depth2 += 1.1863
    depth = np.tan(depth2)
    depth *= 0.1236
    depth -= 0.037
    depth *= 1000  # meters -> millimeters
    depth = depth.astype(np.uint16)
    return depth
I got the above formula from here:
A better approximation is given by Stéphane Magnenat in this post: distance = 0.1236 * tan(rawDisparity / 2842.5 + 1.1863) in meters. Adding a final offset term of -0.037 centers the original ROS data.
The second way I tried was using this Logger, which is suggested by Thomas Whelan (ElasticFusion). I ran the Logger without any error:
Number devices connected: 1
1. device on bus 001:14 is a Xbox NUI Camera (2AE) from Microsoft (45E) with serial id 'A00366911101042A'
searching for device with index = 1
Opened 'Xbox NUI Camera' on bus 1:14 with serial number 'A00366911101042A'
But I am getting a black screen for both the depth and RGB images.
I am using Ubuntu 16.04 and a Kinect v1. Any suggestion or help will be appreciated.
Solved
The second way worked after re-installing OpenNI. Probably in the previous runs the Logger was somehow unable to find OpenNI for streaming the depth and RGB images.
I've been trying to refine my camera parameters with CvLevMarq, but after reading about it, it seems to produce mixed results - which is exactly what I am experiencing. I read about the alternatives and came upon Eigen - and also found this library that utilizes it.
However, the library above seems to use a stitching class that doesn't support OpenCV and will probably require me to port it to OpenCV.
Before going ahead and doing so, which will probably not be an easy task, I figured I'd ask around first and see if anyone else had the same problem?
I'm currently using:
1. Detecting features with FastFeatureDetector
Ptr<FeatureDetector> detector = new FastFeatureDetector(5,true);
detector->detect(firstGreyImage, features_global[firstImageIndex].keypoints); // Previous picture
detector->detect(secondGreyImage, features_global[secondImageIndex].keypoints); // New picture
2. Extracting descriptors with SiftDescriptorExtractor
Ptr<SiftDescriptorExtractor> extractor = new SiftDescriptorExtractor();
extractor->compute(firstGreyImage, features_global[firstImageIndex].keypoints, features_global[firstImageIndex].descriptors); // Previous Picture
extractor->compute(secondGreyImage, features_global[secondImageIndex].keypoints, features_global[secondImageIndex].descriptors); // New Picture
3. Matching features with BestOf2NearestMatcher
vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(try_use_gpu, 0.50f);
matcher(features_global, pairwise_matches);
matcher.collectGarbage();
4. CameraParams.R quaternion passed from a device (slightly inaccurate, which causes the issue)
5. CameraParams.focal == 389.0f -- I played around with this value; 389.0f is the only value that matches the images horizontally, but not vertically.
6. Bundle Adjustment (cvLevMarq, calcError & calcJacobian)
Ptr<BPRefiner> adjuster = new BPRefiner();
adjuster->setConfThresh(0.80f);
adjuster->setMaxIterations(5);
(*adjuster)(features,pairwise_matches,cameras);
7. ExposureCompensator (GAIN)
8. OpenCV MultiBand Blender
What works so far:
SeamFinder - works to some extent, but it depends on the result of the cvLevMarq algorithm. I.e. if the algorithm is off, SeamFinder is going to be off too.
HomographyBasedEstimator works beautifully. However, since it "relies" on the features, it's unfortunately not the method that I'm looking for.
I wouldn't want to rely on the features since I already have the matrix; if there's a way to "refine" the current matrix instead, then that would be the targeted result.
Results so far:
cvLevMarq "Russian roulette" 6/10:
This is what I'm trying to achieve 10/10 times. But 4/10 times, it looks like the picture below this one.
By simply re-running the algorithm, the results change. 4/10 times it looks like this (or worse):
cvLevMarq "Russian roulette" 4/10:
Desired Result:
I'd like to "refine" my camera parameters with the features that I've matched, in the hope that the images would align perfectly. Instead of hoping that cvLevMarq will do the job for me (which it won't 4/10 times), is there another way to ensure that the images will be aligned?
Update:
I've tried these versions:
OpenCV 3.1: Using CvLevMarq with 3.1 is like playing Russian roulette. Sometimes it can align the images perfectly, and other times it estimates the focal as NaN, which causes a segfault in the MultiBand Blender (ROI = 0,0,1,1 because of the NaN).
OpenCV 2.4.9/2.4.13: Using CvLevMarq with 2.4.9 or 2.4.13 is unfortunately the same thing minus the NaN issue. 6/10 times it can align the images perfectly, but the other 4 times it's completely off.
My Speculations / Thoughts:
Template matching using OpenCV. Maybe I could template match the ends of the images (i.e. x = 0, y = 0, height = image.height, width = 50). Any thoughts about this?
I found this interesting paper about Levenberg-Marquardt applied to homography estimation. That looks like something that could solve my problem, since the paper uses corner detection and whatnot to detect the features in the images. Any thoughts about this?
Maybe the problem isn't in CvLevMarq but instead in BestOf2NearestMatcher? However, I've searched for days and I couldn't find another method that returns the pairwise matches to pass to BPRefiner.
Hough Line Transform: detecting the lines in the first/second image and using them to align the images. Any thoughts on this? -- One concern: what if the images don't have any lines, e.g. an empty wall?
Maybe I'm overcomplicating something simple... or maybe I'm not? Basically, I'm trying to align a set of images so I can warp them without overlapping each other. Drop a comment if it doesn't make sense :)
Update Aug 12:
After trying all kinds of combinations, the absolute best so far is CvLevMarq. The only problem with it is the mixed results shown in the images above. If anyone has any input, I'd be forever grateful.
It seems your parameter initialization is the problem. I would use a linear estimator first, i.e. ignore your noisy sensor, and then use the result as the initial values for the non-linear optimizer.
A quick method is to use getAffineTransform, as you have mostly rotation.
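For instance, a rough sketch of that linear initialization; srcPts/dstPts are assumed to hold matched keypoint locations from the two images, and the function name is illustrative:

// Build an initial 2x3 affine transform from three matched keypoint locations
// and use it as the starting point for the non-linear refinement.
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

cv::Mat initialAffine(const std::vector<cv::Point2f>& srcPts,
                      const std::vector<cv::Point2f>& dstPts)
{
    // getAffineTransform needs exactly three correspondences;
    // in practice, pick three well-spread, confident matches.
    cv::Point2f src[3] = { srcPts[0], srcPts[1], srcPts[2] };
    cv::Point2f dst[3] = { dstPts[0], dstPts[1], dstPts[2] };
    return cv::getAffineTransform(src, dst);  // 2x3 affine matrix
}

If three points turn out to be too noisy, estimating an affine/rigid transform over all the matches (e.g. with cv::estimateRigidTransform) is another linear option.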
Maybe you want to take a look at this library: https://github.com/ethz-asl/kalibr.
Cheers
If you want to stitch the images, you should see stitching_detailed.cpp. It will probably solve your problem.
In addition, I have used the Graph Cut seam finding method with Canny edge detection for better stitching results in this code. If you want to optimize this code, see here.
Also, SIFT is fine if you are going to use it for personal use. But you should know that SIFT is patented and will cost you if you use it for commercial purposes. Use ORB instead.
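For example, a minimal ORB substitution for the SIFT step, written against the same 2.4-style API as the code in the question (greyImage stands for whichever grey image you are processing; names are illustrative):

// ORB detects keypoints and computes binary descriptors in one call.
#include <opencv2/features2d/features2d.hpp>
#include <vector>

void detectAndDescribeORB(const cv::Mat& greyImage,
                          std::vector<cv::KeyPoint>& keypoints,
                          cv::Mat& descriptors)
{
    cv::ORB orb;  // default parameters
    orb(greyImage, cv::Mat(), keypoints, descriptors);
}

Note that ORB descriptors are binary, so match them with Hamming distance, e.g. cv::BFMatcher matcher(cv::NORM_HAMMING), rather than L2.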
Hope it helps!
I need to use Marching Cubes based on a Radial Basis Function, so I looked up this algorithm as implemented in PCL.
Actually I'm using PCL v1.6, so the class is:
pcl::MarchingCubesRBF
The problem is that it doesn't work; that is, it doesn't create any triangles. Sometimes the output is '0 triangles created', and at other times running it locks up my machine.
Anyway, my implementation is:
pcl::MarchingCubesRBF<pcl::PointNormal> mc;
pcl::PolygonMesh::Ptr triangles(new pcl::PolygonMesh);
mc.setInputCloud (cloud_with_normals);
mc.setSearchMethod (tree);
mc.reconstruct (*triangles);
I tried different files as input, but none of them works. One of them is https://github.com/FabiApfelkern/cloudfinish/blob/master/cat.pcd
I found there was a bug report about the implementation in PCL: http://dev.pointclouds.org/issues/768
However, I don't understand whether it has been fixed in PCL v1.6. Let me know how I could solve this, if that is possible.
I'm using C++ with VS2010
I had the same problem and I fixed it by setting the grid resolution:
mc.setGridResolution (100, 100, 100);
mc.reconstruct (*triangles);
The grid resolution is the number of voxels used in the x, y and z directions. So if you set it to 1, 1, 1, there will be only one voxel, and thus not a very good representation of your point cloud. The higher the resolution, the more expensive the reconstruction will be, but it also improves the quality of the resulting mesh.
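For completeness, a minimal end-to-end sketch, assuming cloud_with_normals already holds points with normals; the resolution is simply the value from above and the function name is illustrative:

// KD-tree search method plus MarchingCubesRBF with an explicit grid resolution.
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/marching_cubes_rbf.h>
#include <pcl/PolygonMesh.h>

pcl::PolygonMesh reconstructRBF(pcl::PointCloud<pcl::PointNormal>::Ptr cloud_with_normals)
{
    pcl::search::KdTree<pcl::PointNormal>::Ptr tree(new pcl::search::KdTree<pcl::PointNormal>);
    tree->setInputCloud(cloud_with_normals);

    pcl::MarchingCubesRBF<pcl::PointNormal> mc;
    mc.setInputCloud(cloud_with_normals);
    mc.setSearchMethod(tree);
    mc.setGridResolution(100, 100, 100);  // voxels in x, y and z

    pcl::PolygonMesh mesh;
    mc.reconstruct(mesh);
    return mesh;
}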
I'm trying to develop (using C++ - MSVS 12.0) a function that discovers which pixels (from a raster image) have their center inside a polygon (previously populated from a shapefile). I'm using GDAL 1.11.0 (just installed, using devinstall), built from source with the option INCLUDE_OGR_FRMTS=YES. I can use GDAL and most OGR functions without problems. However, when I use the following code:
if (polygon->Contains(tmpPoint))
I receive the error message: ERROR 6: GEOS support not enabled
Does anybody know how to solve this issue?
I’m using:
#include "ogrsf_frmts.h"
and my function is declared:
void FindPixels(GDALDataset *image, OGRLayer *poLayer, OGRPolygon *polygon)
and part of my code is:
OGRPoint *tmpPoint = NULL;
OGRSpatialReference *spatialReference = NULL;
spatialReference = polygon->getSpatialReference();
tmpPoint = new OGRPoint();
tmpPoint->assignSpatialReference(spatialReference);
loop begin:
tmpPoint->setX(imgTLX + (j * imgRes) + imgResHalf);
tmpPoint->setY(imgTLY - (i * imgRes) - imgResHalf);
if (polygon->Contains(tmpPoint))
Thanks in advance!
MB
Use GDALRasterizeLayers to burn the polygon onto a raster. This way you will find all the pixels that fall inside the polygon (and those that do not). The default is to burn a pixel only if its centre intersects a polygon.
If the source layer has multiple polygons, you may need to distinguish them by either setting an attribute filter, or using burned attribute ID fields (although this won't work if the polygons overlap).
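A minimal sketch of that approach (mine, not the asker's code), assuming the input image is georeferenced and the layer holds the polygon(s); the MEM driver and the function name are illustrative choices:

// Burn the polygons into an in-memory mask aligned with the input raster.
// Non-zero mask pixels have their centre inside a polygon.
// Assumes GDALAllRegister() has already been called.
#include "gdal_priv.h"
#include "gdal_alg.h"
#include "ogrsf_frmts.h"

GDALDataset* MakePolygonMask(GDALDataset *image, OGRLayer *poLayer)
{
    int nX = image->GetRasterXSize();
    int nY = image->GetRasterYSize();
    double geoTransform[6];
    image->GetGeoTransform(geoTransform);

    // Single-band byte raster in memory, aligned with the input image.
    GDALDriver *memDriver = GetGDALDriverManager()->GetDriverByName("MEM");
    GDALDataset *mask = memDriver->Create("", nX, nY, 1, GDT_Byte, NULL);
    mask->SetGeoTransform(geoTransform);
    mask->SetProjection(image->GetProjectionRef());

    int bandList[1] = { 1 };
    double burnValue[1] = { 1.0 };
    OGRLayerH layers[1] = { (OGRLayerH) poLayer };

    // By default a pixel is burned only if its centre intersects a polygon.
    GDALRasterizeLayers((GDALDatasetH) mask, 1, bandList,
                        1, layers,
                        NULL, NULL,        // default transformer
                        burnValue,
                        NULL,              // no extra options
                        NULL, NULL);       // no progress callback

    return mask;  // read band 1: non-zero pixels are inside the polygon
}

Since the point-in-polygon test happens during rasterization, this also sidesteps the Contains() call and therefore the GEOS dependency.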
I've got the BumbleBee 2 stereo camera and the two SDKs mentioned.
I've managed to capture video from it in my program, rectify the stereo images and get a disparity map. The next thing I'd like to have is a depth map similar to the one the Kinect gives.
The Triclops documentation is rather short; it only references functions, without a typical workflow description. The workflow is described in the examples.
Up to now I've found two relevant APIs: the family of triclopsRCDxxToXYZ() functions and the triclopsExtractImage3d() function.
Functions from the first family calculate the x, y and z coordinates for a single pixel. The z coordinate perfectly corresponds to the depth in meters. However, to use these functions I have to write two nested loops, as shown in the stereo3dpoints example. That gives too much overhead, because each call also returns two coordinates I do not need.
The second function, triclopsExtractImage3d(), always returns the error TriclopsErrorInvalidParameter. The documentation says only that "there is a geometry mismatch between the context and the TriclopsImage3d", which is not clear to me.
The examples in the Triclops 3.3.1 SDK do not show how to use it. Google brings up an example from Triclops SDK 3.2, which is absent in 3.3.1.
I've tried adding lines 253-273 from the link above to the current stereo3dpoints example and got that error.
Does anyone have experience with it?
Is it valid to use triclopsExtractImage3d() or is it obsolete?
I also tried plotting values of disparity vs. z, obtained from triclopsRCDxxToXYZ().
The plot shows an almost exact inverse proportionality, that is, z = k / disparity. But k is not constant across the image; it varies from approximately 2.5e-5 to 1.4e-3, roughly two orders of magnitude. Therefore, it is incorrect to calculate this value once and use it forever.
Maybe it is a bit too late and you have already figured it out yourself, but:
To use triclopsExtractImage3d you have to create a TriclopsImage3d first:
TriclopsImage3d *depthImage;
// Create the image from the same TriclopsContext used for extraction,
// so that its geometry matches the context.
triclopsCreateImage3d(triclopsContext, &depthImage);
triclopsExtractImage3d(triclopsContext, depthImage);  // fill it with the 3D points
triclopsDestroyImage3d(&depthImage);                  // release it when done