I have a set of Exact_predicates_exact_constructions_kernel::Point_3 points that are coplanar. I would like to serialize the points to a database in double precision and then read them back into an exact-precision space and have them still be coplanar. Specifically, my workflow is CGAL -> Postgres using PostGIS -> SFCGAL. SFCGAL requires that the points be coplanar and uses exact constructions.
I know how to write the geometry, but I'm not sure how to go about rounding the points so that they remain coplanar. Obviously the points read back will have lost precision and will be slightly shifted compared to the originals, but I only need them to be within roughly 1e-4 of their respective originals.
Unfortunately, projecting the points onto a plane after reading them is a last resort option. It doesn't work very well for my purposes.
I am trying to merge point clouds from two frames into one bigger point cloud. I will use ICP for that, but I understand I need to pre-align the point clouds. I am trying to do it with the PCL template_alignment code from:
https://pcl.readthedocs.io/projects/tutorials/en/latest/template_alignment.html#template-alignment
The program computes surface normals after loading the point cloud. It works fine for the sample data used in the code, but for my own data the "norm_est.compute(*normals_)" statement on line 89 returns NaN values. I read in the PCL library documentation that if the function can't find neighbouring points it will return NaN values. My question is: why is the program unable to find neighbouring points, and what can I do about it? I am using the same settings as in the code in the above link for radius search and other parameters for normal estimation. My left image and point cloud are given below. I have uploaded a coloured point cloud for better visualization, but for alignment purposes I am using the point cloud without RGB, and my pointcloud.ply file contains only xyz coordinates.
Simple fix: modify that line (89) as follows.
Old:
norm_est.setRadiusSearch (normal_radius_);
New:
norm_est.setKSearch(5);
What this does: instead of searching inside a sphere of a fixed size (which may contain an unknown, possibly zero, number of points), it looks for a fixed number of nearest neighbours.
Note that 5 is a fairly arbitrary number. You could go faster by lowering it to 3 (the minimum required) or slower but more accurate by increasing it. It is probably best not to hardcode the value right there; expose it as a parameter the way normal_radius_ was. But this should get you past the issue for now.
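For concreteness, here is a minimal sketch of what the normal-estimation setup might look like with a k-nearest-neighbour search. cloud_ and normals_ follow the tutorial's naming; k_search_ is a hypothetical parameter standing in for the hardcoded 5.

#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>

// Estimate normals from the k nearest neighbours instead of a radius search.
pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> norm_est;
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
norm_est.setSearchMethod(tree);
norm_est.setKSearch(k_search_);   // e.g. 5; replaces setRadiusSearch(normal_radius_)
norm_est.setInputCloud(cloud_);
norm_est.compute(*normals_);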
Other options:
1: Remove NaN points from the cloud after calculating normals (pcl::removeNaNFromPointCloud).
2: Run a preprocessing step with a statistical outlier removal filter, or an outright minimum-neighbours-within-radius filter. This removes points with too few neighbours, which are what is generating NaN values in your normal calculation (see the sketch after this list).
3: Increase the radius of the normal calculation, or perform a nearest-neighbour (not radius-based) normal calculation.
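For reference, a minimal sketch of options 1 and 2, assuming a pcl::PointXYZ cloud named cloud; the filter parameters are arbitrary starting values, not tuned settings.

#include <pcl/point_types.h>
#include <pcl/filters/filter.h>                        // pcl::removeNaNFromPointCloud
#include <pcl/filters/statistical_outlier_removal.h>

// Option 1: drop NaN points; `indices` maps the kept points back to the original cloud.
std::vector<int> indices;
pcl::removeNaNFromPointCloud(*cloud, *cloud, indices);

// Option 2: statistical outlier removal before computing normals.
pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
sor.setInputCloud(cloud);
sor.setMeanK(50);              // neighbours used to estimate the mean distance
sor.setStddevMulThresh(1.0);   // points further than 1 std dev from the mean are removed
pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
sor.filter(*filtered);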
I have a point cloud, and I would like to detect whether there is a corner in a 3D room. I would like to discuss my approach, and whether there is a better one in terms of speed, because I want to run it on mobile phones.
I will try to use a Hough transform to detect lines, then check whether there are three lines that intersect and form two intersecting planes.
If the point cloud data comes from a depth sensor, then you have a relatively dense sampling of your walls. One thing I found that works well with depth sensors (e.g. Kinect or DepthSense) is a robust version of the RANSAC procedure that @MartinBeckett suggested.
Instead of picking 3 points at random, pick one point at random, and get the neighboring points in the cloud. There are two ways to do that:
The proper way: use a 3D nearest neighbor query data structure, like a KD-tree, to get all the points within some small distance from your query point.
The sloppy but faster way: use the pixel grid neighborhood of your randomly selected pixel. This may include points that are far from it in 3D, because they are on a different plane/object, but that's OK, since this pixel will not get much support from the data.
The next step is to generate a plane equation from that group of 3D points. You can use PCA on their 3D coordinates to get the two most significant eigenvectors, which define the plane surface (the eigenvector with the smallest eigenvalue is the plane normal).
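As an illustration, a small PCA plane fit with Eigen (a hypothetical helper; it assumes the neighbourhood points have already been gathered):

#include <Eigen/Dense>
#include <vector>

// Fit a plane (centroid + unit normal) to a neighbourhood of 3D points via PCA.
void fitPlanePCA(const std::vector<Eigen::Vector3d>& pts,
                 Eigen::Vector3d& centroid, Eigen::Vector3d& normal)
{
    centroid = Eigen::Vector3d::Zero();
    for (const auto& p : pts) centroid += p;
    centroid /= static_cast<double>(pts.size());

    Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
    for (const auto& p : pts) {
        const Eigen::Vector3d q = p - centroid;
        cov += q * q.transpose();
    }
    // Eigenvalues come out in increasing order, so column 0 holds the
    // eigenvector with the smallest eigenvalue: the plane normal.
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(cov);
    normal = es.eigenvectors().col(0);
}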
From there, the RANSAC algorithm proceeds as usual: check how many other points in the data are close to that plane, and find the plane(s) with maximal support. I found it better to find the largest support plane, remove the supporting 3D points, and run the algorithm again to find other 'smaller' planes. This way you may be able to get all the walls in your room.
EDIT:
To clarify the above: the support of a hypothesized plane is the set of all 3D points whose distance from that plane is at most some threshold (e.g. 10 cm, should depend on the depth sensor's measurement error model).
After each run of the RANSAC algorithm, the plane that had the largest support is chosen. All the points supporting that plane may be used to refine the plane equation (this is more robust than just using the neighboring points) by performing PCA/linear regression on the support set.
In order to proceed and find other planes, the support of the previous iteration should be removed from the 3D point set, so that remaining points lie on other planes. This may be repeated as long as there are enough points and best plane fit error is not too large.
In your case (looking for a corner), you need at least 3 perpendicular planes. If you find two planes with large support which are roughly parallel, then they may be the floor and some counter, or two parallel walls. Either the room has no visible corner, or you need to keep looking for a perpendicular plane with smaller support.
The normal approach would be RANSAC (a sketch using PCL's built-in plane segmentation follows these steps):
Pick 3 points at random.
Make a plane.
Check if each other point lies on the plane.
If enough are on the plane, recalculate a best-fit plane from all these points and remove them from the set.
If not, try another 3 points.
Stop when you have enough planes, or too few points left.
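As mentioned above, here is a minimal sketch of one iteration of that loop using PCL's built-in RANSAC plane segmentation, assuming a pcl::PointXYZ cloud named cloud; the 0.05 distance threshold is just an example value.

#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/filters/extract_indices.h>

pcl::SACSegmentation<pcl::PointXYZ> seg;
seg.setOptimizeCoefficients(true);        // refit the plane to all of its inliers
seg.setModelType(pcl::SACMODEL_PLANE);
seg.setMethodType(pcl::SAC_RANSAC);
seg.setDistanceThreshold(0.05);           // how close a point must be to count as "on the plane"

pcl::ModelCoefficients coefficients;
pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
seg.setInputCloud(cloud);
seg.segment(*inliers, coefficients);      // the plane with the largest support

// Remove the supporting points, then run again on the remainder to find the next plane.
pcl::ExtractIndices<pcl::PointXYZ> extract;
extract.setInputCloud(cloud);
extract.setIndices(inliers);
extract.setNegative(true);
pcl::PointCloud<pcl::PointXYZ>::Ptr remaining(new pcl::PointCloud<pcl::PointXYZ>);
extract.filter(*remaining);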
Another approach if you know that the planes are near vertical or near horizontal.
Pick a small vertical range
Get all the points in this range
Try and fit 2d lines
Repeat for other Z ranges
If you get a parallel set of lines in each Z slice then they probably form a plane; recalculate the best-fit plane from those points.
Even though this is an old post, I would like to present a complementary approach, similar to Hough voting, that finds all corner locations (formed by plane intersections) jointly:
Uniformly sample the space. Ensure that there is at least a distance $d$ between the points (e.g. you can even do this in CloudCompare with a 'space' subsampling).
Compute the point cloud normals at these points.
Randomly pick 3 points from this downsampled cloud.
Each oriented point (point + normal) defines a hypothetical plane. Therefore, each set of 3 picked points defines 3 planes. Those planes, if not parallel and not intersecting in a line, always intersect at a single point (see the sketch after these steps).
Create a voting space to describe the corner: the intersection of the 3 planes (the point) might be a valid parameterization, so our parameter space has 3 free parameters.
For each triple of points, cast a vote in the accumulator space at the resulting corner point.
Go back to the random-picking step and repeat until all sampled points are exhausted, or enough iterations are done. This way we'll be casting votes for all possible corner locations.
Take the local maxima of the accumulator space. Depending on the votes, we'll be selecting the corners from intersection of the largest planes (as they'll receive more votes) to the intersection of small planes. The largest 4 are probably the corners of the room. If not, one could also consider the other local maxima.
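As referenced above, a small sketch of the plane-intersection step with Eigen (a hypothetical helper; each plane is given in the form n . x = d):

#include <Eigen/Dense>
#include <cmath>
#include <optional>

// Intersect three planes n_i . x = d_i. Returns nothing when the planes are
// (nearly) parallel or meet in a line instead of a single point.
std::optional<Eigen::Vector3d> intersectThreePlanes(
    const Eigen::Vector3d& n1, double d1,
    const Eigen::Vector3d& n2, double d2,
    const Eigen::Vector3d& n3, double d3)
{
    Eigen::Matrix3d N;
    N.row(0) = n1; N.row(1) = n2; N.row(2) = n3;
    if (std::abs(N.determinant()) < 1e-9)     // degenerate configuration
        return std::nullopt;
    Eigen::Vector3d corner = N.fullPivLu().solve(Eigen::Vector3d(d1, d2, d3));
    return corner;
}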
Note that the voting space is a quantized 3D space, so the corner location will be a rough estimate of the actual one. If desired, one could store the plane intersections at that very location and refine them (with iterative optimization, similar to ICP) to get a very fine corner location.
This approach will be quite fast and probably very accurate, given that you could refine the location. I believe it's the best algorithm presented so far. Of course this assumes that we could compute the normals of the point clouds (we can always do that at sample locations with the help of the eigenvectors of the covariance matrix).
Please also look here, where I have put together a list of plane-fitting related questions on Stack Overflow:
3D Plane fitting algorithms
I am currently reading into the topic of stereo vision, using the book by Hartley & Zisserman alongside some papers, as I am trying to develop an algorithm capable of creating elevation maps from two images.
I am trying to come up with the basic steps for such an algorithm. This is what I think I have to do:
If I have two images I somehow have to find the fundamental matrix, F, in order to find the actual elevation values at all points from triangulation later on. If the cameras are calibrated this is straightforward; if not, it is slightly more complex (plenty of methods for this can be found in H&Z).
It is necessary to know F in order to obtain the epipolar lines. These constrain where an image point x from the first image can be found in the second image.
Now comes the part where it gets a bit confusing for me:
I would start by taking an image point x_i in the first picture and trying to find the corresponding point x_i' in the second picture, using some matching algorithm. Using triangulation it is then possible to compute the real-world point X and, from that, its elevation. This process is repeated for every pixel in the right image.
In the perfect world (no noise etc) triangulation will be done based on
x1=P1X
x2=P2X
In the real world it is necessary to find a best fit instead.
Doing this for all pixels will lead to the complete elevation map as desired; some pixels, however, will be impossible to match and therefore can't be triangulated.
What confuses me most is that I have the feeling that Hartley & Zisserman skip the entire discussion on how to obtain your point correspondences (matching?), and the papers I read in addition to the book talk a lot about disparity maps, which aren't mentioned in H&Z at all. However, I think I understood correctly that the disparity is simply the difference x1_i - x2_i?
Is this approach correct, and if not where did I make mistakes?
Your approach is in general correct.
You can think of a stereo camera system as two points in space whose relative orientation is known. These are the optical centers. In front of each optical center you have a coordinate system; these are the image planes. When you have found two corresponding pixels, you can calculate a line for each pixel which goes through that pixel and the respective optical center. Where the two lines intersect is the object point in 3D. Because the world is not perfect, they will probably not intersect exactly, so one may use the point where the lines are closest to each other.
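To make this concrete, a minimal sketch (using Eigen) of the midpoint method described above, assuming the two back-projected rays p1 + s*d1 and p2 + t*d2 have already been computed from the corresponding pixels and the camera poses:

#include <Eigen/Dense>

// Return the midpoint of the closest points between two rays as the 3D estimate.
Eigen::Vector3d triangulateMidpoint(const Eigen::Vector3d& p1, const Eigen::Vector3d& d1,
                                    const Eigen::Vector3d& p2, const Eigen::Vector3d& d2)
{
    const Eigen::Vector3d w0 = p1 - p2;
    const double a = d1.dot(d1), b = d1.dot(d2), c = d2.dot(d2);
    const double d = d1.dot(w0), e = d2.dot(w0);
    const double denom = a * c - b * b;        // ~0 when the rays are parallel
    const double s = (b * e - c * d) / denom;  // parameter on ray 1
    const double t = (a * e - b * d) / denom;  // parameter on ray 2
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2));
}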
There exist several algorithms to detect which points correspond.
When using disparities, the two image planes need to be rectified so that they are parallel and each row in image 1 corresponds to the same row in image 2. Correspondences then only need to be searched on a per-row basis, and it is enough to know the difference along the x-axis between corresponding points. That difference is the disparity.
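For a rectified pair with focal length f (in pixels) and baseline B between the optical centers, a disparity d corresponds to a depth Z = f*B/d, which is how a depth/elevation map is recovered from a disparity map.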
I am looking to store a circle in a GeoDjango field so I can use the GeoDjango __contains query to find out if a point is in the circle (similar to what can be done with a PolygonField).
Currently I have it stored as a Decimal radius and a GeoDjango PointField, but I need a way to query a list of locations in the DB such that these varying circles (point field and radii) contain my search point (long/lat).
Hope it makes sense.
Technically speaking, PostGIS supports CurvePolygon and CircularString geometry types, which can be used to store curved geometries. For example, a 2-unit radius around x=10, y=10 that has been approximated by a 64-point buffered polygon is:
SELECT ST_AsText(ST_LineToCurve(ST_Buffer(ST_MakePoint(10, 10), 2, 16)));
st_astext
------------------------------------------------
CURVEPOLYGON(CIRCULARSTRING(12 10,8 10,12 10))
(1 row)
However, this approach is not typically taken, as support for these geometry types is very limited (e.g., ST_AsSVG and other functions won't work with them). They will likely cause plenty of grief, and I'd recommend not doing this.
Typically, all geometries are stored as a well supported type: POINT, LINESTRING or POLYGON (with optional MULTI- prefix). With these types, use the ST_DWithin function (e.g., GeoDjango calls this __dwithin, see also this question) to query if another geometry is within a specified distance. For example, if you have a point location, you can see if other geometries are within a certain distance (i.e., radius) from the point.
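As a sketch in raw SQL, assuming a hypothetical table locations(point, radius) whose radius is stored in the same units as the geometry's coordinate system, the query would look something like:

SELECT *
FROM locations
WHERE ST_DWithin(point, ST_GeomFromText('POINT(-73.99 40.73)', 4326), radius);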
For example, I could convert the positions of my entities into screen-relative coordinates before sending them to OpenGL, or I could give OpenGL the direct coordinates, relative to the game world, which are of course on a much larger scale.
Is drawing with large vertex coordinates slow?
There aren't speed issues, but you will run into accuracy issues, because the differences between your numbers become very small in relation to the size of the numbers themselves.
For example, if you model is centred at (0,0,0) and is 10 x 10 x 10 then two points at opposite sides of your model are at (-5,-5,-5) and (5,5,5). If that same model is centred at (1000,1000,1000) then your points are (995,995,995) and (1005,1005,1005).
While the absolute difference is the same, the difference relative to the coordinates isn't.
This could lead to rounding errors and display artefacts.
You should arrange the coordinates of models so that the centre of the model is at (0,0,0).
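A tiny C++ illustration of the underlying issue (nothing OpenGL-specific): the spacing between adjacent representable float values grows with magnitude, so the same vertex translated far from the origin can only be positioned much more coarsely. The 1,000,000-unit offset is just an example.

#include <cmath>
#include <cstdio>

int main() {
    const float near_origin = 5.0f;
    const float far_away    = 1000005.0f;   // the same vertex, offset by 1,000,000 units
    // Distance to the next representable float (one "ULP") at each magnitude.
    std::printf("ULP near 5:         %g\n", std::nextafterf(near_origin, INFINITY) - near_origin); // ~4.8e-07
    std::printf("ULP near 1,000,005: %g\n", std::nextafterf(far_away, INFINITY) - far_away);       // 0.0625
}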
Larger values are processed at the same speed as smaller values of the same data type. But if you want more than the normal 32-bit precision and enable something like GL_EXT_vertex_attrib_64bit, then there may be a performance penalty for that.
Large values are allowed, but one extra thing to watch out for other than the standard floating point precision issues is the precision of the depth buffer.
Depth precision is distributed non-linearly: it is more precise close up and less precise far away. So if you want to use a wide range of vertex coordinate values, be aware that you may see z-fighting artifacts if you intend to render giant objects very far away from the camera and very small things close to the camera. Setting your near and far clip planes as close as possible to your actual scene extents will help, but may not be enough to fix the problem. See http://www.opengl.org/resources/faq/technical/depthbuffer.htm, especially the last two points, for more details.
Also, I know you asked about vertex positions, but texture coordinates may have additional precision limits beyond normal floating point rules. Assigning a texture coordinate of (9999999.5, 9999999.5) and expecting it to wrap properly to the (0..1, 0..1) range may not work as you expect (depending on implementation/hardware).
The floating-point pipeline has constant speed on operations, regardless of the magnitude of the operands.
The only danger is reaching abnormally large values, where rounding error gets out of hand, or abnormally small ones, where you end up with subnormal numbers.