Hello,
I am trying to find the grid points which lie outside my data and are within 2 cm of it. The grid points are shown in red and the data is shown in blue. I can find the points inside with a test like dist(cubeGridPoint, SampleDataPoint) < r; now I am interested in finding the points which are within 2 cm of the sample. Any algorithm or pointer would definitely help me.
In summary, I want to find only the grid points which lie outside the sample and within 2 cm of it.
The data does not look very convex to me (it is more concave-like), so a convex hull will not give you what you want. I would use a different approach, similar to this one: find holes in 2D point set.

1) Create space for your grid map: a 3D voxel map with one cell per 2x2x2 cm voxel, cleared to zeroes.

2) Loop through all points of your data: compute the voxel map coordinates for each point and increment that element of the voxel map. If you have very many points, also guard the increment against overflow so the counters do not wrap to zero or negative values.

3) Now all voxels holding zero are either outside your object or inside some local hole of it.

4) Filter the anomalies. You can use dilation/erosion for that, or count the neighbouring voxels holding zero and, if the count is lower than a threshold, set the voxel to 1.

5) Find the boundary. Handle the voxel map as a set of 2D layers and, in each layer, process the horizontal lines, detecting and replacing zero runs like this:

find -> replace
?????0000000000????? -> ?????0111111110?????
000000000000000????? -> 111111111111110?????
?????000000000000000 -> ?????011111111111111

You can do this separately in all 3 axis directions and combine the results together to avoid projection-related holes. Now all the voxels still holding zero are your boundary points.
[notes]
You can improve this by additional filtering, such as smoothing; a sketch of the main steps follows.
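A minimal sketch of steps 1-2 and of one direction of step 5, in C++. The map dimensions, voxel size and the Pt struct are assumptions chosen for illustration; you would size them to cover your data's bounding box:

#include <cstdint>
#include <vector>

// hypothetical dimensions: a W x H x D map of 2x2x2 cm voxels
const int W = 128, H = 128, D = 128;
const double VOXEL = 0.02;                       // 2 cm in metres
struct Pt { double x, y, z; };

inline uint16_t& at(std::vector<uint16_t>& v, int x, int y, int z)
{
    return v[(z * H + y) * W + x];
}

// steps 1-2: allocate the map cleared to zero, then count points per voxel
std::vector<uint16_t> buildVoxelMap(const std::vector<Pt>& pts)
{
    std::vector<uint16_t> v(W * H * D, 0);
    for (const Pt& p : pts)
    {
        int x = int(p.x / VOXEL), y = int(p.y / VOXEL), z = int(p.z / VOXEL);
        if (x < 0 || x >= W || y < 0 || y >= H || z < 0 || z >= D) continue;
        if (at(v, x, y, z) < 0xFFFF) at(v, x, y, z)++;   // overflow guard
    }
    return v;
}

// step 5, x direction only: inside each horizontal line, fill the interior
// of every zero run with 1 so that only zeros touching a non-zero voxel
// (the boundary) survive; run ends touching the map border are filled too
void boundaryPassX(std::vector<uint16_t>& v)
{
    for (int z = 0; z < D; z++)
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; )
            {
                if (at(v, x, y, z) != 0) { x++; continue; }
                int x0 = x;                              // zero run start
                while (x < W && at(v, x, y, z) == 0) x++;
                int a = (x0 > 0) ? x0 + 1 : x0;          // keep run-start zero?
                int b = (x < W) ? x - 1 : x;             // keep run-end zero?
                for (int i = a; i < b; i++) at(v, i, y, z) = 1;
            }
}

The same pass repeated with the roles of x, y and z swapped gives the other two directions; combining the three (keeping a voxel zero if any pass kept it zero) avoids the projection-related holes mentioned above.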
I have an algorithmic problem on a Cartesian plane. I need to efficiently search for geometric shapes that intersect a given point. There are several shape types (rectangle, circle, triangle and polygon), but those are not important, because determining the actual point inclusion is not a problem here; I will implement those tests on my own. The problem lies in determining which shapes need to be tested for inclusion of the given point. Iterating through all of my shapes on the plane and running the point-inclusion method on each one of them is inefficient, as the number of shape instances will be quite large. My first idea was to divide the plane into segments (the plane is finite, but too large for any kind of 3D array) and, when adding a shape to the database, determine which segments it intersects and store them within the shape object. Then, when a point is given for inclusion verification, I would only need to determine the segment in which the point is located and verify inclusion only against the objects which intersect that segment.
Is that the way to go? I don't know if the method I described is optimal or if I am missing something. Any help would be appreciated.
Thanks in advance
P.S.: I will be writing this in C++. That is not really relevant, as this is more of an algorithmic problem, but I wanted to put it out there in case someone was curious.
The gridding approach can be used here.
See the plane as a raster image where you draw all your shapes using a scan conversion algorithm, making sure that all pixels even partially covered are filled. For every image pixel, keep a list of the shapes that filled it.
A query is then straightforward: find the pixel where the query point falls in time O(1) and check every shape in the list, in time O(K), where K is the list length, approximately equal to the number of intersecting shapes.
If your image is made of N² pixels and you have M objects with an average area of A pixels, you will need to store N² + M·A list elements (a shape identifier plus a link to the next). You will choose the pixel size to achieve a good compromise between accuracy and storage cost. In any case, you must limit yourself to N² < Q·M, where Q is the total number of queries; otherwise the cost of just initializing the image could exceed the total query time.
In case your scene is very sparse (more voids than shapes), you can use a compressed representation of the image, using a quadtree.
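A minimal sketch of this bucket-grid scheme in C++, with the caveat that shapes are registered here through their bounding boxes rather than exact scan conversion; CELLS, WORLD and the Shape interface are illustrative assumptions:

#include <vector>

const int    CELLS = 256;              // grid resolution; tune accuracy vs storage
const double WORLD = 10000.0;          // the plane spans [0, WORLD) in x and y

struct Shape                           // hypothetical shape interface
{
    double minx, miny, maxx, maxy;     // bounding box
    virtual bool contains(double x, double y) const = 0;
    virtual ~Shape() {}
};

struct Grid
{
    std::vector<std::vector<const Shape*>> cells =
        std::vector<std::vector<const Shape*>>(CELLS * CELLS);

    static int idx(double v)           // world coordinate -> cell index, clamped
    {
        int i = int(v / WORLD * CELLS);
        return i < 0 ? 0 : (i >= CELLS ? CELLS - 1 : i);
    }

    // register the shape in every cell its bounding box overlaps
    void add(const Shape* s)
    {
        for (int y = idx(s->miny); y <= idx(s->maxy); y++)
            for (int x = idx(s->minx); x <= idx(s->maxx); x++)
                cells[y * CELLS + x].push_back(s);
    }

    // O(1) cell lookup, then the exact test on the short candidate list
    std::vector<const Shape*> query(double x, double y) const
    {
        std::vector<const Shape*> hits;
        for (const Shape* s : cells[idx(y) * CELLS + idx(x)])
            if (s->contains(x, y)) hits.push_back(s);
        return hits;
    }
};

Bounding-box registration adds a few false candidates per cell but keeps insertion trivial; the exact contains() test filters them out at query time. For a sparse scene, the cells vector would be replaced by the quadtree mentioned above.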
I have a point cloud, and I would like to test whether there is a corner in a 3D room. I would like to discuss my approach, and whether there is a better approach in terms of speed, because I want to run it on mobile phones.
I will try to use the Hough transform to detect lines, then check whether three of the lines intersect and form two planes that intersect as well.
If the point cloud data comes from a depth sensor, then you have a relatively dense sampling of your walls. One thing I found that works well with depth sensors (e.g. Kinect or DepthSense) is a robust version of the RANSAC procedure that @MartinBeckett suggested.
Instead of picking 3 points at random, pick one point at random, and get the neighboring points in the cloud. There are two ways to do that:
The proper way: use a 3D nearest neighbor query data structure, like a KD-tree, to get all the points within some small distance from your query point.
The sloppy but faster way: use the pixel grid neighborhood of your randomly selected pixel. This may include points that are far from it in 3D, because they are on a different plane/object, but that's OK, since this pixel will not get much support from the data.
The next step is to generate a plane equation from that group of 3D points. You can use PCA on their 3D coordinates to get the two most significant eigenvectors, which define the plane surface (the last eigenvector should be the normal).
From there, the RANSAC algorithm proceeds as usual: check how many other points in the data are close to that plane, and find the plane(s) with maximal support. I found it better to find the largest support plane, remove the supporting 3D points, and run the algorithm again to find other 'smaller' planes. This way you may be able to get all the walls in your room.
EDIT:
To clarify the above: the support of a hypothesized plane is the set of all 3D points whose distance from that plane is at most some threshold (e.g. 10 cm, should depend on the depth sensor's measurement error model).
After each run of the RANSAC algorithm, the plane that had the largest support is chosen. All the points supporting that plane may be used to refine the plane equation (this is more robust than just using the neighboring points) by performing PCA/linear regression on the support set.
In order to proceed and find other planes, the support of the previous iteration should be removed from the 3D point set, so that remaining points lie on other planes. This may be repeated as long as there are enough points and best plane fit error is not too large.
In your case (looking for a corner), you need at least 3 perpendicular planes. If you find two planes with large support which are roughly parallel, then they may be the floor and some counter, or two parallel walls. Either the room has no visible corner, or you need to keep looking for a perpendicular plane with smaller support.
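A minimal sketch of the plane-fitting and support-counting steps, assuming the Eigen library for the 3x3 eigendecomposition; the function names are placeholders and the neighbourhood query itself (k-d tree or pixel grid) is left out:

#include <Eigen/Dense>
#include <cmath>
#include <vector>

// plane as n.dot(x) + d = 0, with unit normal n
struct Plane { Eigen::Vector3d n; double d; };

// fit a plane to a small neighbourhood by PCA: the eigenvector of the
// covariance matrix with the smallest eigenvalue is the plane normal;
// pts must contain at least 3 non-collinear points
Plane fitPlane(const std::vector<Eigen::Vector3d>& pts)
{
    Eigen::Vector3d mean = Eigen::Vector3d::Zero();
    for (const auto& p : pts) mean += p;
    mean /= double(pts.size());

    Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
    for (const auto& p : pts)
        cov += (p - mean) * (p - mean).transpose();

    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(cov);
    Eigen::Vector3d n = es.eigenvectors().col(0);   // smallest eigenvalue first
    return { n, -n.dot(mean) };
}

// RANSAC support: indices of all points within tol of the plane
// (e.g. 0.10 m, depending on the sensor's error model)
std::vector<int> support(const Plane& pl,
                         const std::vector<Eigen::Vector3d>& cloud,
                         double tol)
{
    std::vector<int> idx;
    for (int i = 0; i < (int)cloud.size(); i++)
        if (std::abs(pl.n.dot(cloud[i]) + pl.d) < tol) idx.push_back(i);
    return idx;
}

After the maximal-support plane is chosen, running fitPlane again on its full support set gives the refined plane equation described above.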
The normal approach would be RANSAC (see the sketch after these steps):
Pick 3 points at random.
Make a plane.
Check if each other point lies on the plane.
If enough are on the plane, recalculate the best plane from all of these points and remove them from the set.
If not, try another 3 points.
Stop when you have enough planes, or too few points are left.
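A minimal sketch of the plane-from-3-points and counting steps in plain C++; the sampling loop, the "enough points" thresholds and the small vector type are left as assumptions:

#include <cmath>
#include <vector>

struct V3 { double x, y, z; };
V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
V3 cross(V3 a, V3 b)
{ return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// plane through 3 points: normal n = (b-a) x (c-a), offset d = -n.a
bool planeFrom3(V3 a, V3 b, V3 c, V3& n, double& d)
{
    n = cross(sub(b, a), sub(c, a));
    double len = std::sqrt(dot(n, n));
    if (len < 1e-12) return false;          // degenerate (collinear) triple
    n = { n.x / len, n.y / len, n.z / len };
    d = -dot(n, a);
    return true;
}

// count points within tol of the plane
int countInliers(const std::vector<V3>& pts, V3 n, double d, double tol)
{
    int k = 0;
    for (const V3& p : pts)
        if (std::fabs(dot(n, p) + d) < tol) k++;
    return k;
}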
Another approach, if you know that the planes are nearly vertical or nearly horizontal:
Pick a small vertical range.
Get all the points in this range.
Try to fit 2D lines.
Repeat for other Z ranges.
If you get a parallel set of lines across the Z slices, then they probably form a plane; recalculate the best-fit plane from those points.
Even though this is an old post, I would like to present a complementary approach, similar to Hough Voting, to find all corner locations, composed of plane intersections, jointly:
Uniformly sample the space. Ensure that there is at least a distance $d$ between the points (e.g. you can even do this in CloudCompare with a 'space' subsampling).
Compute the point cloud normals at these points.
Randomly pick 3 points from this downsampled cloud.
Each oriented point (point + normal) defines a hypothetical plane. Therefore, each triple of picked points defines 3 planes. Those planes, if not parallel and not intersecting at a line, always intersect at a single point.
Create a voting space to describe the corner: the intersection of the 3 planes (a point) might be a valid parameterization. So our parameter space has 3 free parameters.
For each such triple, cast a vote in the accumulator space for the corner point (see the sketch after this list).
Go to (2) and repeat until all sampled points are exhausted, or enough iterations are done. This way we'll be casting votes for all possible corner locations.
Take the local maxima of the accumulator space. Depending on the votes, we'll be selecting the corners from intersection of the largest planes (as they'll receive more votes) to the intersection of small planes. The largest 4 are probably the corners of the room. If not, one could also consider the other local maxima.
Note that the voting space is a quantized 3D space, so the corner location will be a rough estimate of the actual one. If desired, one could store the plane intersections at that very location and refine them (with iterative optimization similar to ICP, etc.) to get a very fine corner location.
This approach will be quite fast and probably very accurate, given that you could refine the location. I believe it's the best algorithm presented so far. Of course this assumes that we could compute the normals of the point clouds (we can always do that at sample locations with the help of the eigenvectors of the covariance matrix).
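A minimal sketch of the intersection and vote-casting steps, assuming Eigen and per-point normals that are already computed; the accumulator cell size q is an assumption to be tuned:

#include <Eigen/Dense>
#include <array>
#include <cmath>
#include <map>

// intersection of three planes, each given as (point p, unit normal n);
// solves the 3x3 system n_i . x = n_i . p_i, failing when the normals
// are close to coplanar (no unique intersection point)
bool intersect3(const Eigen::Vector3d p[3], const Eigen::Vector3d n[3],
                Eigen::Vector3d& corner)
{
    Eigen::Matrix3d A;
    Eigen::Vector3d b;
    for (int i = 0; i < 3; i++)
    {
        A.row(i) = n[i].transpose();
        b(i) = n[i].dot(p[i]);
    }
    if (std::abs(A.determinant()) < 1e-9) return false;
    corner = A.partialPivLu().solve(b);
    return true;
}

// cast one vote into a quantized accumulator with cell size q (metres)
void vote(std::map<std::array<int, 3>, int>& acc,
          const Eigen::Vector3d& c, double q)
{
    std::array<int, 3> cell = { int(std::floor(c.x() / q)),
                                int(std::floor(c.y() / q)),
                                int(std::floor(c.z() / q)) };
    acc[cell]++;
}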
Please also look here, where I have put out a list of plane-fitting related questions at stackoverflow:
3D Plane fitting algorithms
I have a 3D mesh which represents a surface with some rough boundaries which I would like to smooth:
I am using a half-edge data structure for storing the geometry, so I can easily iterate over the boundary edges, vertices and faces. I can also quite easily determine whether a given pair of edges is convex or concave using dot and cross products.
What would be the best approach for smoothing the edges out, so that they form a continuous, curvy line rather than the sharp pattern seen in the pictures?
Compute the angle between two neighbouring faces. I call it ada, as in absolute delta angle. If it is bigger than a threshold, it means the point lies on a sharp edge. You can compute it as the maximum of all angles between the edge lines; in 2D that is just the angle between the two segments meeting at the point. In a 3D mesh there are more than 2 lines per point, so you have to check all combinations and select the biggest one:

ada = max(abs(acos(n(i) . n(j))))

where n(i), n(j) are the normal vectors of neighbouring faces and i != j.
Identify the problematic zones: find the points where ada > threshold and create a list of them.
Filter this list: if a point is too far from every other point in the list (distance > threshold), remove it, to preserve the geometric shape.
Smooth the points. You have to tweak this step to match your needs. I would find groups of points in the list which are close together and apply some geometric or numeric averaging to them, for example:

pnt(i) = 0.5*pnt(i) + 0.25*pnt(i-1) + 0.25*pnt(i+1)

This can be applied repeatedly; see the sketch below.
(In the figure, blue and red dots are the original points; green dots are the smoothed points.)
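A minimal sketch of that averaging pass, assuming the boundary points are stored in order along the boundary in a std::vector and treating the boundary as open (a closed loop would wrap the indices):

#include <vector>

struct P3 { double x, y, z; };

// one smoothing pass: pnt(i) = 0.5*pnt(i) + 0.25*pnt(i-1) + 0.25*pnt(i+1)
void smoothOnce(std::vector<P3>& pts)
{
    if (pts.size() < 3) return;
    std::vector<P3> out = pts;               // read from pts, write to out
    for (size_t i = 1; i + 1 < pts.size(); i++)
    {
        out[i].x = 0.5 * pts[i].x + 0.25 * pts[i-1].x + 0.25 * pts[i+1].x;
        out[i].y = 0.5 * pts[i].y + 0.25 * pts[i-1].y + 0.25 * pts[i+1].y;
        out[i].z = 0.5 * pts[i].z + 0.25 * pts[i-1].z + 0.25 * pts[i+1].z;
    }
    pts.swap(out);
}

// applied repeatedly, e.g. for (int k = 0; k < 5; k++) smoothOnce(boundary);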
I took the difference of two consecutive frames of a video. What I got is (as you would expect) a black frame except for the moving objects, which appear white. I want to count the number of white pixels in the frame. That is, I want to go through the image row by row and, if the value of the i-th pixel is greater than a specified number (say 50), store it in an array. Later on I will use this array to check whether there is actually an object or just noise. For example, if a car is moving in the video, then after frame differencing I will check each pixel of the frames containing the car, row by row, to detect that car; when there is a moving car in the video, those pixels' values are greater than 0 after frame differencing. Any idea how I can sum all the pixels of the moving car, so I can decide whether it is a car or just noise?
Thanks in advance :)
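A minimal sketch of the row-by-row thresholding pass, assuming an 8-bit grayscale difference image stored row-major in a plain buffer; the names and the threshold of 50 are taken from the question:

#include <cstdint>
#include <vector>

struct Pixel { int x, y; };

// collect all difference pixels above threshold and count them;
// a very small count can then be dismissed as noise
std::vector<Pixel> brightPixels(const uint8_t* diff, int width, int height,
                                int threshold /* e.g. 50 */)
{
    std::vector<Pixel> hits;
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            if (diff[y * width + x] > threshold)
                hits.push_back({ x, y });
    return hits;   // hits.size() is the white-pixel count
}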
You'll probably find that the difference is non-trivial. For instance, you will probably find that the biggest difference is near the edges of the car, perpendicular to the movement of the car. One of those two edges will have negative values, one positive. Therefore, the biggest advantage of the "difference image" is that you restrict your search area. In isolation it's not very useful.
So, what should you do? Well, use an edge detection algorithm on the normal image, and compare the edge found there with the 2 edges found in the difference image. The edges belonging to the car will connect the 2 edges from the difference image.
You could use blob detection: http://www.labbookpages.co.uk/software/imgProc/blobDetection.html
to detect a blob of white pixels in each "difference image". Once you have the blobs you can find their center by finding the average of their pixel positions. Then you can find the path swept out by these centers and check it against some criterion.
Without knowing more about your images I cannot suggest a criterion, but, for example, if you are watching them move down a straight road you might expect all the points to be roughly collinear. In this case, you can take the gradient and a point where a blob was found and use the point-gradient form to get the line's equation:
y - y_1 = m(x - x_1)
For example, given the point (4, 2) and gradient 3, you would get
y - 2 = 3(x - 4)
y = 3x - 10
You can then check all points against this line to see if they lie along it.
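A minimal sketch of that check, assuming 2D blob centres and a pixel tolerance; the perpendicular-distance form below avoids problems with steep gradients:

#include <cmath>
#include <vector>

struct Pt2 { double x, y; };

// distance from point p to the line through (x1, y1) with gradient m,
// i.e. m*x - y + (y1 - m*x1) = 0, normalized by sqrt(m^2 + 1)
double distToLine(Pt2 p, double m, double x1, double y1)
{
    return std::fabs(m * p.x - p.y + (y1 - m * x1)) / std::sqrt(m * m + 1.0);
}

// true if every blob centre lies within tol pixels of the line
bool roughlyColinear(const std::vector<Pt2>& centres,
                     double m, double x1, double y1, double tol)
{
    for (const Pt2& c : centres)
        if (distToLine(c, m, x1, y1) > tol) return false;
    return true;
}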
Say I have a closed shape as seen in image below and I have the edge pixels. What is the most efficient way to fill the shape, i.e. turn pixels 'on' inside the shape if:
1) I have all the edge pixels
2) I have most of the edge pixels and not all of them (as seen in the figure).
Construct the convex hull and add the missing pixels. Then use a scanline algorithm to fill the polygon.
It all depends on the situation.
If you manually created the framebuffer (basically using a byte array or something similar), you have to iterate over all the pixels you want to change. So, for example, starting at the leftmost edge of a row:
Find the start of the shape on the row.
Jump one pixel right and turn pixels on until you find the second edge of the shape on the row (or the end of the row).
Continue on the next row.
This will of course only work if you have all the edge pixels. Take a look at Marching Squares; it can be of some assistance.
And please, be more specific. "The most efficient way to fill the shape" depends a lot on your underlying rendering library, whether it's raster graphics, and so on...
EDIT
Note that the algorithm is much faster if you can generate the edge pixels yourself, since then there's no need to search for the start of the edge. A sketch of the row fill follows.
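A minimal sketch of that per-row fill, assuming a binary image in which the edge pixels are already marked; this handles the simple one-span case described above, so concave shapes crossing a row more than twice would need the full even-odd scanline treatment:

#include <vector>

// img[y][x] != 0 marks an edge pixel; fill the span between the first
// and second edge pixel of each row (or to the end of the row)
void fillRows(std::vector<std::vector<int>>& img)
{
    for (auto& row : img)
    {
        int w = (int)row.size();
        int start = -1;
        for (int x = 0; x < w && start < 0; x++)
            if (row[x]) start = x;                // first edge pixel
        if (start < 0) continue;                  // no shape on this row
        for (int x = start + 1; x < w && !row[x]; x++)
            row[x] = 1;                           // fill until second edge
    }
}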
A standard flood fill algorithm will be pretty efficient on a convex shape, and will handle the cases where the shape is less convex than you anticipated. Unfortunately it requires an unbroken outline.
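A minimal sketch of such a flood fill (iterative, with an explicit stack to avoid deep recursion), assuming a seed point known to be inside and an unbroken boundary already marked in the image:

#include <utility>
#include <vector>

// 0 = empty, 1 = boundary or filled; starts from an interior seed (sx, sy)
void floodFill(std::vector<std::vector<int>>& img, int sx, int sy)
{
    if (img.empty()) return;
    int h = (int)img.size(), w = (int)img[0].size();
    std::vector<std::pair<int, int>> stack{{sx, sy}};
    while (!stack.empty())
    {
        auto [x, y] = stack.back();
        stack.pop_back();
        if (x < 0 || x >= w || y < 0 || y >= h || img[y][x]) continue;
        img[y][x] = 1;                            // fill this pixel
        stack.push_back({x + 1, y});
        stack.push_back({x - 1, y});
        stack.push_back({x, y + 1});
        stack.push_back({x, y - 1});
    }
}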
The breaks in the boundary destroy the meaning of the word "inside".
A neural network like a human retina is very efficient at doing this processing.
On a computer you need to take time to define what you mean by "inside". How big a gap? How wiggly a boundary?
You could simulate a largish circular "bug" bouncing around the "inside": too big to go through the gaps, but smaller than the minimum radius of curvature of the boundary?
Before you can fill the inside of something you would need to determine the exact boundary, in this case that would constitute recognising the circle.
After that you can just check, for every pixel in a box around the circle, whether it is actually inside it. Since you have to do something with every pixel inside the circle, and the number of pixels in the circle is linear in the number of pixels of the bounding square (assuming the square's sides have length radius * constant for some constant), this should be close to optimal.