pcl::MarchingCubesRBF doesn't output mesh - c++

I need to use Marching Cubes based on a Radial Basis Function, so I looked up this algorithm implemented in PCL.
I'm using PCL v1.6, so the class is:
pcl::MarchingCubesRBF
The problem is that it doesn't work: it doesn't create any triangles. Sometimes the output is '0 triangles created', and at other times the run locks up my machine.
Anyway, my implementation is:
pcl::MarchingCubesRBF<pcl::PointNormal> mc;
pcl::PolygonMesh::Ptr triangles(new pcl::PolygonMesh);
mc.setInputCloud (cloud_with_normals);
mc.setSearchMethod (tree);
mc.reconstruct (*triangles);
I tried different files as input, but none of them works. One of them is https://github.com/FabiApfelkern/cloudfinish/blob/master/cat.pcd
I found a bug report about the implementation in PCL: http://dev.pointclouds.org/issues/768
However, I can't tell whether it is fixed in PCL v1.6. Let me know how I could solve this, if that is possible.
I'm using C++ with VS2010

I had the same problem and I fixed it by setting the grid resolution:
mc.setGridResolution (100, 100, 100);
mc.reconstruct (*triangles);
The grid resolution is the number of voxels used in the x, y and z directions. So if you set it to 1, 1, 1, there will be only one voxel - and thus not a very good representation of your point cloud. The higher the resolution, the more expensive the reconstruction, but it also improves the quality of the resulting mesh.
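For reference, a minimal sketch of the whole call sequence with the resolution set (PCL 1.6-era API; cloud_with_normals and tree set up as in the question, and 100 voxels per axis is just a starting point to tune):
pcl::MarchingCubesRBF<pcl::PointNormal> mc;
pcl::PolygonMesh::Ptr triangles (new pcl::PolygonMesh);
mc.setInputCloud (cloud_with_normals);
mc.setSearchMethod (tree);
mc.setGridResolution (100, 100, 100); // number of voxels in x, y, z
mc.reconstruct (*triangles);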

Related

Can't find GDK::InterpType members in gtkmm

I'm trying to make a Gtk::Image widget display a picture from a file, but prevent the widget from expanding in size, so I'm loading it from a Gdk::Pixbuf and then scaling the picture. I'm using Gdk::Pixbuf instead of GdkPixBuf because the latter one works on regular pointers, but Gtk::Image requires a Glib::RefPtr<Gdk::Pixbuf>. (Just mentioning all this in case there's a better way to achieve what I'm doing that I'm unaware of.)
auto pixbuf = Gdk::Pixbuf::create_from_file("/home/raitis/Music/WRLD/Awake EP/cover.jpg");
auto scaled = pixbuf->scale_simple(48, 48, Gdk::InterpType::NEAREST);
image->set(scaled);
Anyway, the problem is that although I'm following the documentation for Gdk::Pixbuf, line 2 of my code generates the error:
error: ‘NEAREST’ is not a member of ‘Gdk::InterpType’
auto scaled = pixbuf->scale_simple(48, 48, Gdk::InterpType::NEAREST);
^~~~~~~
Trying GDK_INTERP_NEAREST instead also leads to an error. :(
no known conversion for argument 3 from ‘GdkInterpType’ to ‘Gdk::InterpType’
From the stable gtkmm gdkmm documentation, Gdk::InterpType members are:
INTERP_NEAREST
Nearest neighbor sampling; this is the fastest and lowest quality
mode. Quality is normally unacceptable when scaling down, but may be OK when
scaling up.
INTERP_TILES
This is an accurate simulation of the PostScript image operator
without any interpolation enabled.
Each pixel is rendered as a tiny parallelogram of solid color, the
edges of which are implemented with antialiasing. It resembles nearest
neighbor for enlargement, and bilinear for reduction.
INTERP_BILINEAR
Best quality/speed balance; use this mode by default.
Bilinear interpolation. For enlargement, it is equivalent to
point-sampling the ideal bilinear-interpolated image. For reduction,
it is equivalent to laying down small tiles and integrating over the
coverage area.
INTERP_HYPER
This is the slowest and highest quality reconstruction function.
It is derived from the hyperbolic filters in Wolberg's "Digital Image
Warping", and is formally defined as the hyperbolic-filter sampling
the ideal hyperbolic-filter interpolated image (the filter is designed
to be idempotent for 1:1 pixel mapping).
And from the documentation of the Gdk::Pixbuf, in the scale_simple method you'll find a reference to the interpolation type:
Leaves src unaffected. interp_type should be Gdk::INTERP_NEAREST if
you want maximum speed (but when scaling down Gdk::INTERP_NEAREST is
usually unusably ugly). The default interp_type should be
Gdk::INTERP_BILINEAR which offers reasonable quality and speed.
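Given those docs, the enumerators in the stable gtkmm 3 API appear to live directly in the Gdk namespace (the Gdk::InterpType::NEAREST spelling belongs to the newer gtkmm 4 API), so a likely fix is the following untested sketch:
auto pixbuf = Gdk::Pixbuf::create_from_file("/home/raitis/Music/WRLD/Awake EP/cover.jpg");
// gtkmm 3 spells the enumerators Gdk::INTERP_*, not Gdk::InterpType::*
auto scaled = pixbuf->scale_simple(48, 48, Gdk::INTERP_BILINEAR);
image->set(scaled);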

Refining Camera parameters and calculating errors - OpenCV

I've been trying to refine my camera parameters with CvLevMarq, but after reading about it, it seems to cause mixed results - which is exactly what I am experiencing. I read about the alternatives and came upon Eigen - and also found this library that utilizes it.
However, the library above seems to use a stitching class that doesn't support OpenCV and will probably require me to port it to OpenCV.
Before going ahead and doing so, which will probably not be an easy task, I figured I'd ask around first and see if anyone else had the same problem?
I'm currently using:
1. Calculating features with FastFeatureDetector
Ptr<FeatureDetector> detector = new FastFeatureDetector(5,true);
detector->detect(firstGreyImage, features_global[firstImageIndex].keypoints); // Previous picture
detector->detect(secondGreyImage, features_global[secondImageIndex].keypoints); // New picture
2. Extracting features with SIFTDescriptorExtractor
Ptr<SiftDescriptorExtractor> extractor = new SiftDescriptorExtractor();
extractor->compute(firstGreyImage, features_global[firstImageIndex].keypoints, features_global[firstImageIndex].descriptors); // Previous Picture
extractor->compute(secondGreyImage, features_global[secondImageIndex].keypoints, features_global[secondImageIndex].descriptors); // New Picture
3. Matching features with BestOf2NearestMatcher
vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(try_use_gpu, 0.50f);
matcher(features_global, pairwise_matches);
matcher.collectGarbage();
4. CameraParams.R quaternion passed from a device (slightly inaccurate, which causes the issue)
5. CameraParams.Focal == 389.0f -- I've played around with this value; 389.0f is the only value that matches the images horizontally, but not vertically.
6. Bundle Adjustment (cvLevMarq, calcError & calcJacobian)
Ptr<BPRefiner> adjuster = new BPRefiner();
adjuster->setConfThresh(0.80f);
adjuster->setMaxIterations(5);
(*adjuster)(features,pairwise_matches,cameras);
7. ExposureCompensator (GAIN)
8. OpenCV MultiBand Blender
What works so far:
SeamFinder - works to some extent, but it depends on the result of the cvLevMarq algorithm. I.e. if the algorithm is off, SeamFinder is going to be off too.
HomographyBasedEstimator works beautifully. However, since it "relies" on the features, it's unfortunately not the method that I'm looking for.
I wouldn't want to rely on the features since I already have the matrix, if there's a way to "refine" the current matrix instead - then that would be the targeted result.
Results so far:
cvLevMarq "Russian roulette" 6/10:
This is what I'm trying to achieve 10/10 times. But 4/10 times, it looks like the picture below this one.
By simply just re-running the algorithm, the results change. 4/10 times it looks like this (or worse):
cvLevMarq "Russian roulette" 4/10:
Desired Result:
I'd like to "refine" my camera parameters with the features that I've matched - in hope that the images would align perfectly. Instead of hoping that cvLevMarq will do the job for me (which it won't 4/10 times), is there another way to ensure that the images will be aligned?
Update:
I've tried these versions:
OpenCV 3.1: Using CvLevMarq with 3.1 is like playing Russian roulette. Sometimes it aligns the images perfectly, and other times it estimates the focal length as NaN, which causes a segfault in the MultiBand Blender (ROI = 0,0,1,1 because of the NaN).
OpenCV 2.4.9/2.4.13: Using CvLevMarq with 2.4.9 or 2.4.13 is unfortunately the same thing, minus the NaN issue. 6/10 times it aligns the images perfectly, but the other 4 times it's completely off.
My Speculations / Thoughts:
Template Matching using OpenCV. Maybe if I template match the ends of the images (i.e. x = 0, y = 0, height = image.height, width = 50)? Any thoughts about this?
I found this interesting paper about Levenberg-Marquardt applied to homography estimation. That looks like something that could solve my problem, since the paper uses corner detection and whatnot to detect the features in the images. Any thoughts about this?
Maybe the problem isn't in CvLevMarq but instead in BestOf2NearestMatcher? However, I've searched for days and I couldn't find another method that returns the pairwise matches to pass to BPRefiner.
Hough Line Transform. Detecting the lines in the first/second image and using those to align the images. Any thoughts on this? -- One concern might be: what if the images don't have any lines? I.e. an empty wall?
Maybe I'm overkilling something so simple... or maybe I'm not? Basically, I'm trying to align a set of images so I can warp them without them overlapping each other. Drop a comment if it doesn't make sense :)
Update Aug 12:
After trying all kinds of combinations, the absolute best so far is CvLevMarq. The only problem with it is the mixed results shown in the images above. If anyone has any input, I'd be forever grateful.
It seems your parameter initialization is the problem. I would use a linear estimator first, i.e. ignore your noisy sensor, and then use the result as the initial values for the non-linear optimizer.
A quick method is to use cv::getAffineTransform, as you have mostly rotation.
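To make that concrete, a rough untested sketch using cv::getAffineTransform (pts1 and pts2 are hypothetical arrays; fill them from your pairwise_matches):
// Seed the optimizer with a linear estimate instead of the noisy
// sensor rotation: three well-spread matched keypoint locations
// are enough for an affine estimate.
cv::Point2f pts1[3], pts2[3];
// ... fill pts1/pts2 from matched keypoints here ...
cv::Mat affine = cv::getAffineTransform(pts1, pts2);
// derive initial rotation/translation values from the 2x3 'affine'
// and hand those to the non-linear refinement step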
Maybe you want to take a look at this library: https://github.com/ethz-asl/kalibr.
Cheers
If you want to stitch the images, you should see stitching_detailed.cpp. It will probably solve your problem.
In addition, I have used Graph Cut Seam Finding method with Canny Edge Detection for better stitching results in this code. If you want to optimize this code, see here.
Also, if you are going to use it for personal use, SIFT is fine. But you should know that SIFT is patented and will cost you if you use it for commercial purposes. Use ORB instead.
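For example (an OpenCV 3.x-style sketch; note that ORB produces binary descriptors, so the matcher setup may need adjusting):
// ORB as a patent-free replacement for the SIFT extractor above
cv::Ptr<cv::ORB> orb = cv::ORB::create();
orb->detectAndCompute(firstGreyImage, cv::noArray(),
                      features_global[firstImageIndex].keypoints,
                      features_global[firstImageIndex].descriptors);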
Hope it helps!

How to create a depth map from PointGrey BumbleBee2 stereo camera using Triclops and FlyCapture SDKs?

I've got the BumbleBee 2 stereo camera and two mentioned SDKs.
I've managed to capture video from it in my program, rectify the stereo images and get a disparity map. The next thing I'd like to have is a depth map similar to the one the Kinect gives.
The Triclops documentation is rather short: it only documents the functions, without describing a typical workflow. The workflow is shown in the examples.
Up to now I've found two relevant functions: the family of triclopsRCDxxToXYZ() functions and the triclopsExtractImage3d() function.
Functions from the first family calculate the x, y and z coordinates for a single pixel. The z coordinate corresponds perfectly to the depth in meters. However, to use this function I have to create two nested loops, as shown in the stereo3dpoints example. That adds overhead, because each call also returns two coordinates I don't need.
The second function, triclopsExtractImage3d(), always returns error TriclopsErrorInvalidParameter. The documentation says only that "there is a geometry mismatch between the context and the TriclopsImage3d", which is not clear for me.
The examples in the Triclops 3.3.1 SDK do not show how to use it. Google brings up an example from Triclops SDK 3.2, which is absent in 3.3.1.
I've tried adding lines 253-273 from the link above to the current stereo3dpoints example, and got that error.
Does anyone have an experience with it?
Is it valid to use triclopsExtractImage3d() or is it obsolete?
I also tried plotting values of disparity vs. z, obtained from triclopsRCDxxToXYZ().
The plot shows an almost exact inverse proportionality (plot omitted).
That is, z = k / disparity. But k is not constant across the image: it varies from approximately 2.5e-5 to 1.4e-3, i.e. two orders of magnitude. Therefore, it is incorrect to calculate this value once and reuse it everywhere.
Maybe it is a bit too late and you figured it out by yourself, but:
To use triclopsExtractImage3d you have to create a TriclopsImage3d first.
TriclopsImage3d *depthImage;
// create the image from the same context so its geometry matches
triclopsCreateImage3d(triclopsContext, &depthImage);
// extraction now succeeds (no geometry mismatch)
triclopsExtractImage3d(triclopsContext, depthImage);
// clean up when done
triclopsDestroyImage3d(&depthImage);

Matlab griddata equivalent in C++

I am looking for a C++ equivalent to Matlab's griddata function, or any 2D global interpolation method.
I have C++ code that uses Eigen 3. I will have an Eigen vector containing the x, y, and z values, and two Eigen matrices equivalent to those produced by meshgrid in Matlab. I would like to interpolate the z values from the vectors onto the grid points defined by the meshgrid equivalents (which will extend a bit past the outside of the original points, so minor extrapolation is required).
I'm not too bothered by accuracy--it doesn't need to be perfect. However, I cannot accept NaN as a solution--the interpolation must be computed everywhere on the mesh regardless of data gaps. In other words, staying inside the convex hull is not an option.
I would prefer not to write an interpolation from scratch, but if someone wants to point me to a pretty good (and explicit) recipe, I'll give it a shot. It's not the most hateful thing to write (at least in an algorithmic sense), but I don't want to reinvent the wheel.
Effectively what I have is scattered terrain locations, and I wish to define a rectilinear mesh that nominally follows some distance beneath the topography for use later. Once I have the node points, I will be good.
My research so far:
The question asked here: MATLAB functions in C++ produced a close answer, but unfortunately the suggestion was not free (SciMath).
I have tried understanding the interpolation function used in Generic Mapping Tools, and was rewarded with a headache.
I briefly looked into the Grid Algorithms library (GrAL). If anyone has commentary I would appreciate it.
Eigen has an unsupported interpolation package, but it seems to just be for curves (not surfaces).
Edit: VTK has matplotlib-like functionality. Presumably there must be an interpolation used somewhere in there for display purposes. Does anyone know if that's accessible and usable?
Thank you.
This is probably a little late, but hopefully it helps someone.
Method 1.) Octave: If you're coming from Matlab, one way is to embed the GNU Matlab clone Octave directly into the C++ program. I don't have much experience with it, but you can call the Octave library functions directly from a cpp file.
See here, for instance. http://www.gnu.org/software/octave/doc/interpreter/Standalone-Programs.html#Standalone-Programs
griddata is included in octave's geometry package.
Method 2.) PCL: The way I do it is to use the Point Cloud Library (http://www.pointclouds.org) and VoxelGrid. You can set the x and y bin sizes as you please, then set a really large z bin size, which gets you one z value for each x,y bin. The catch is that the x, y and z values are the centroid of the points averaged into the bin, not the bin centers (which is also why it works for this). So you need to massage the x,y values when you're done:
Ex:
#include <cstdio>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

// read in a list of comma separated values (x,y,z)
FILE *fp = fopen("points.xyz", "r");
// store them in PCL's point cloud format
pcl::PointCloud<pcl::PointXYZ>::Ptr basic_cloud_ptr (new pcl::PointCloud<pcl::PointXYZ>);
double x, y, z;
while (fscanf(fp, "%lg, %lg, %lg", &x, &y, &z) == 3)
{
    pcl::PointXYZ basic_point;
    basic_point.x = x; basic_point.y = y; basic_point.z = z;
    basic_cloud_ptr->points.push_back(basic_point);
}
fclose(fp);
basic_cloud_ptr->width = (int) basic_cloud_ptr->points.size ();
basic_cloud_ptr->height = 1;
// create object for result
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_filtered (new pcl::PointCloud<pcl::PointXYZ>);
// create filtering object and process
pcl::VoxelGrid<pcl::PointXYZ> sor;
sor.setInputCloud (basic_cloud_ptr);
// set the bin sizes here (dx,dy,dz). for 2d results, make one of the
// bins larger than the data set span in that axis
sor.setLeafSize (0.1f, 0.1f, 1000.0f);
sor.filter (*cloud_filtered);
So cloud_filtered is now a point cloud that contains one point for each bin. Then I just make a 2-D matrix and go through the point cloud, assigning points to their x,y bins if I want an image, etc., as would be produced by griddata. It works pretty well, and it's much faster than Matlab's griddata for large datasets.
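The final binning step might look something like this (a sketch, not the author's code; the grid origin and dimensions are hypothetical and should come from your data's bounding box; needs <Eigen/Dense> and <limits>):
// scatter the filtered centroids into a 2-D grid of z values,
// one cell per VoxelGrid bin
const int nrows = 100, ncols = 100;   // grid dimensions (hypothetical)
const double dx = 0.1, dy = 0.1;      // must match setLeafSize above
const double minx = 0.0, miny = 0.0;  // grid origin (hypothetical)
Eigen::MatrixXd zgrid = Eigen::MatrixXd::Constant(
    nrows, ncols, std::numeric_limits<double>::quiet_NaN());
for (size_t k = 0; k < cloud_filtered->points.size(); ++k)
{
    const pcl::PointXYZ &p = cloud_filtered->points[k];
    int i = (int)((p.y - miny) / dy);   // row index of this bin
    int j = (int)((p.x - minx) / dx);   // column index of this bin
    if (i >= 0 && i < nrows && j >= 0 && j < ncols)
        zgrid(i, j) = p.z;              // centroid z for this bin
}
// cells left as NaN had no data and still need to be filled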

Perlin's Noise with OpenGL

I was studying Perlin's Noise through some examples at http://dindinx.net/OpenGL/index.php?menu=exemples&submenu=shaders and couldn't help noticing that his make3DNoiseTexture() in perlin.c uses noise3(ni) instead of PerlinNoise3D(...).
Now why is that? Isn't Perlin's Noise supposed to be a summation of different noise frequencies and amplitudes?
Question 2 is: what do ni, inci, incj, inck stand for? Why use ni instead of x,y coordinates? And why is ni incremented with
ni[0]+=inci;
inci = 1.0 / (Noise3DTexSize / frequency);
I see Hugo Elias created his Perlin2D with x,y coordinates, and so does PerlinNoise3D(...).
Thanks in advance :)
I now understand why and am going to answer my own question in hopes that it helps other people.
Perlin's Noise is actually a synthesis of gradient noises. To produce it, we compute the dot product of the vector pointing from one of the corners flooring the input point to the input point itself, with a randomly generated gradient vector.
Now if the input point were a whole number, such as the xyz coordinates of a texture you want to create, the dot product would always return 0, which would give you flat noise. So instead we use inci, incj, inck as an alternative index. Yep, just an index, nothing else.
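To make that concrete, here is a toy sketch of the corner-contribution step (this is not the code from perlin.c; the gradient hash is made up purely for illustration):
// toy pseudo-random gradient for a lattice corner
static void cornerGradient(int ix, int iy, int iz, float g[3])
{
    unsigned h = 1619u * ix + 31337u * iy + 6971u * iz;
    h = (h >> 13) ^ h;
    g[0] = ((h & 0xFF) / 127.5f) - 1.0f;
    g[1] = (((h >> 8) & 0xFF) / 127.5f) - 1.0f;
    g[2] = (((h >> 16) & 0xFF) / 127.5f) - 1.0f;
}
// contribution of corner (ix,iy,iz) to the noise at point p:
// dot(gradient, p - corner). If p has whole-number coordinates,
// p - corner is (0,0,0) at the flooring corner, the dot product
// vanishes and the texture comes out flat, which is exactly why a
// fractional index like ni is used instead of texel x,y,z.
static float cornerContribution(float px, float py, float pz,
                                int ix, int iy, int iz)
{
    float g[3];
    cornerGradient(ix, iy, iz, g);
    return g[0] * (px - ix) + g[1] * (py - iy) + g[2] * (pz - iz);
}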
Now returning to question 1, there are two methods to implement Perlin's Noise:
1. Calculate the noise values separately and store them in the RGBA slots of the texture.
2. Synthesize the noises beforehand and store the result in one of the RGBA slots of the texture.
noise3(ni) is the actual implementation of method 1, while PerlinNoise3D(...) suggests the latter.
In my personal opinion, method 1 is much better because you have much more flexibility over how you use each octave in your shaders.
My guess on the reason for using noise3(ni) in make3DNoiseTexture() instead of PerlinNoise3D(...) is that when you use that noise texture in your shader, you want to be able to replicate and modify the functionality of PerlinNoise3D(...) directly in the shader.
My guess for the reasoning behind ni, inci, incj, inck is that using the x,y,z of the volume directly doesn't give a good result, so by scaling the noise with the frequency instead, it is possible to adjust the resolution of the noise independently of the volume size. A small sketch of that indexing follows.
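An untested sketch of that indexing (names follow perlin.c; the example values for Noise3DTexSize and frequency are mine):
// inci works out to frequency / Noise3DTexSize, so after
// Noise3DTexSize texels ni[0] has advanced by exactly 'frequency'
// lattice cells, independent of the texture size.
const int Noise3DTexSize = 64;   // texture size (example value)
const double frequency = 4.0;    // lattice cells per texture (example)
double ni[3] = {0.0, 0.0, 0.0};
double inci = 1.0 / (Noise3DTexSize / frequency);
for (int i = 0; i < Noise3DTexSize; ++i)
{
    // sample noise3(ni) here and write it into the texel
    ni[0] += inci;
}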