Hello, I am new to PCL (Point Cloud Library) and my task is to detect 3D objects in a box/bin from Kinect data using surflet pairs. I am able to segment and cluster some of the objects, but my algorithm also detects the box itself as a segment. How can I detect and remove only the box from the scene?
Should I use PCA or SIFT?
Thank you,
Saransh Vora
You could run a plane RANSAC and subtract all points that belong to planes of sufficiently large size. An additional refinement would be to subtract only planes whose normal vector is at nearly 90 degrees from unit-z. This would let you search for smaller planes without fear of cutting into the objects in your box too badly, since the filter becomes highly specific to vertical planes.
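That normal check can be sketched as follows, here in Python/NumPy rather than PCL (the function name and tolerance are illustrative, not PCL API; in PCL you would read the plane normal from the `pcl::ModelCoefficients` returned by the segmentation):

```python
import numpy as np

def is_vertical_plane(coefficients, tol_deg=10.0):
    """Return True if the plane normal is within tol_deg of being
    perpendicular to unit-z, i.e. a vertical plane such as a box wall."""
    a, b, c, _ = coefficients          # plane: a*x + b*y + c*z + d = 0
    normal = np.array([a, b, c], dtype=float)
    normal /= np.linalg.norm(normal)
    # vertical planes have a normal nearly orthogonal to z: |cos(angle)| ~ 0
    return abs(normal[2]) < np.sin(np.radians(tol_deg))

wall = (1.0, 0.0, 0.0, -0.5)   # box wall: normal along x, vertical plane
floor = (0.0, 0.0, 1.0, 0.0)   # box bottom: normal along z, horizontal plane
```

With this test you could keep the floor of the box (to avoid deleting points from objects resting on it) while still removing the walls.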
Another note: if your box doesn't move, you could just save an empty point cloud, i.e. the cloud when there are no objects in the box, and then, when you get a new cloud, use the saved cloud as a proximity filter to cut out all points that are sufficiently close to what was labeled as background.
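The background-proximity idea can be sketched like this, in plain NumPy with a brute-force nearest-neighbour search for clarity (in PCL a `pcl::KdTreeFLANN` radius search over the saved cloud would be the practical choice, and the 1 cm radius here is an assumed value):

```python
import numpy as np

def remove_background(cloud, background, radius=0.01):
    """Keep only points farther than `radius` (metres) from every point
    of the saved empty-box scan. Brute force: O(N*M)."""
    kept = []
    for p in cloud:
        nearest = np.linalg.norm(background - p, axis=1).min()
        if nearest > radius:
            kept.append(p)
    return np.array(kept)

background = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # empty-box scan
cloud = np.array([[0.0, 0.0, 0.001],    # ~on the box surface -> removed
                  [0.5, 0.5, 0.5]])     # new object -> kept
objects = remove_background(cloud, background)
```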
Reading from the ground truth, I have an initial bounding box. I then calculate a foreground mask and use cv2.goodFeaturesToTrack to get points I can follow with cv2.calcOpticalFlowPyrLK. I calculate the bounding box as the axis-aligned rectangle through the outermost points (roughly as described in: How to efficiently find the bounding box of a collection of points?).
However, every now and then I need to recalculate the goodFeaturesToTrack to avoid the person "losing" all the points over time.
Whenever I recalculate, points may land on other people if they stand within the bounding box of the person to track. Those people will then be followed too, so my bounding box fails to be of any use after such a recalculation. What are some methods to avoid this behavior?
I am looking for resources and general explanations and not specific implementations.
Ideas I had
Take the ratio of the previous bounding box size to the current bounding box size into account, and ignore the update if the size changes too much.
Take the ratio of the previous white-fullness (fraction of foreground pixels) of the foreground mask to the current one into account. Do not recalculate the bounding box if the foreground mask is too full; other people are probably crossing the box.
Calculate a general movement vector for the bounding box from the median of all points tracked by optical flow. Only alter the bounding box in some relation to that vector, to avoid rapid changes of the bounding box.
Filter the found goodFeaturesToTrack points using some additional metric.
In general, I guess I am looking for a method that calculates new goodFeaturesToTrack based more strongly on the previous goodFeaturesToTrack, or on the points derived from them via optical flow.
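The median-movement idea (point 3 in the list above) could look roughly like this, sketched in Python/NumPy; the function and the `max_jump` threshold are illustrative assumptions, not an OpenCV API:

```python
import numpy as np

def constrained_box_update(prev_box, prev_pts, new_pts, max_jump=20.0):
    """Move the bounding box by the median optical-flow vector instead of
    refitting it to the tracked points, and reject implausible jumps.
    prev_box is (x, y, w, h); the point arrays are N x 2 (from
    calcOpticalFlowPyrLK, for example)."""
    flow = new_pts - prev_pts
    dx, dy = np.median(flow, axis=0)     # median is robust to outlier points
    if np.hypot(dx, dy) > max_jump:      # implausible jump: keep the old box
        return prev_box
    x, y, w, h = prev_box
    return (x + dx, y + dy, w, h)

prev_pts = np.array([[10.0, 10.0], [20.0, 10.0], [15.0, 20.0]])
new_pts = prev_pts + np.array([2.0, 1.0])   # the whole person moved by (2, 1)
box = constrained_box_update((5.0, 5.0, 30.0, 40.0), prev_pts, new_pts)
```

Because the box translates rigidly, a few points that landed on a passer-by cannot stretch it; they only bias the median slightly.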
I'm trying to find spheres in a point cloud with pcl::SACSegmentation using RANSAC. The cloud was scanned with an accurate terrestrial scanner from one station, and the point spacing is about 1 cm. The best results so far are shown in the image below. As you can see, the cloud contains two spheres (r = 7.25 cm) and a steel beam the balls are attached to. I am able to find three sphere candidates whose inlier points are extracted from the cloud in the image (you can see two circle shapes on the beam near the spheres).
Input point cloud. Inlier points extracted
So it seems that I am close. Still, the found sphere centers are too far (~10 cm) from the truth. Any suggestion how I could improve this? I have been tweaking the model parameters for quite some time. Here are the parameters for the aforementioned results:
seg.setOptimizeCoefficients(true);       // refine the coefficients using all inliers
seg.setModelType(pcl::SACMODEL_SPHERE);
seg.setMethodType(pcl::SAC_RANSAC);
seg.setMaxIterations(500000);            // upper bound on RANSAC iterations
seg.setDistanceThreshold(0.0020);        // inlier threshold: 2 mm
seg.setProbability(0.99900);             // desired probability of an outlier-free sample
seg.setRadiusLimits(0.06, 0.08);         // accept radii between 6 and 8 cm
seg.setInputCloud(cloud);
I also tried to improve the results by including point normals in the model, with no better results. There are still a couple more parameters to adjust, so there might be some combinations I have not tried.
I'll happily give you more information if needed.
Thanks,
naikh0u
After some investigation I have come to the conclusion that I can't find spheres with SACSegmentation in a cloud that contains a lot of other points that don't belong to any sphere shape. In my case, the beam is too much for the algorithm.
Thus, I have to pre-select the points that show some potential of being part of a sphere. I also think I need to separate the points belonging to different spheres. I tested this and saw that my code works pretty well if the input cloud contains only the points of a single sphere, with some "natural" noise.
Some have solved this problem by first extracting all points belonging to planes and then searching for spheres. Others have used the colors of the target (in the case of a camera) to extract only the needed points.
Deleting plane points would work for my example cloud, but my application may involve more complex shapes too, so it may be too simplistic.
Finally, I got satisfactory results by clustering the cloud with pcl::EuclideanClusterExtraction and feeding the clusters to the sphere search one by one.
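Why the cluster-then-fit approach works can be illustrated with a plain algebraic least-squares sphere fit in Python/NumPy. This is not the PCL pipeline (there, pcl::EuclideanClusterExtraction produces the clusters and SACSegmentation fits each one); it just shows that once a cluster contains essentially one sphere, recovering the center and radius is straightforward:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit. Expanding |p - c|^2 = r^2 gives
    the linear system  2*c.x*x + 2*c.y*y + 2*c.z*z + (r^2 - |c|^2) = |p|^2,
    solved here for the center c and radius r."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# synthetic cluster: 200 points on a sphere of radius 0.0725 m at (1, 2, 3)
rng = np.random.default_rng(0)
d = rng.normal(size=(200, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 3.0]) + 0.0725 * d
center, radius = fit_sphere(pts)
```

With extra structure like the beam mixed in, the same fit (and RANSAC's scoring) gets pulled toward spurious circular cross-sections, which matches the behaviour described above.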
Using PCL, I'm trying to detect and localize a rectangular cut in a large steel frame (image below):
Right now I'm using the ConcaveHull class, and I do get the outline of the rectangle. However, the outer borders of the camera view are also included.
I used a passthrough filter to get rid of the borders, but that only works in specific cases.
What I'm asking is, do you happen to know any methods that could give a better result?
The holes are not always at the same height or location, but they are of a standard size (±1 cm), so a size criterion can eliminate false detections.
This is a Gazebo-simulated model, and the point cloud is captured from a simulated Kinect using ROS.
Using PCL, I applied SAC planar segmentation and then extracted a concave hull. As seen in the image, the edges of the camera view are also included in the concave hull.
pcl::SACSegmentation<pcl::PointXYZ> segmentation;
segmentation.setOptimizeCoefficients(true);   // refine the plane with its inliers
segmentation.setModelType(pcl::SACMODEL_PLANE);
segmentation.setMethodType(pcl::SAC_RANSAC);
segmentation.setMaxIterations(1000);
segmentation.setDistanceThreshold(0.01);      // 1 cm inlier threshold
segmentation.setInputCloud(cloud_ptr);        // changed
segmentation.segment(*inliers, *coefficients);

pcl::ConcaveHull<pcl::PointXYZ> chull;
chull.setInputCloud(cloud_projected);         // plane inliers projected onto the plane
chull.setAlpha(0.1);                          // smaller alpha -> more detailed hull
chull.reconstruct(*cloud_hull, hullPolygons);

Eigen::Vector4f centroid;                     // centroid of the hull cloud
pcl::PointXYZ minpt, maxpt;                   // min/max boundary of the hull cloud
pcl::compute3DCentroid(*cloud_hull, centroid);
pcl::getMinMax3D(*cloud_hull, minpt, maxpt);
To sum it up: I am looking for a robust method or ideas to detect a rectangular cut in the frame.
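Since the cut size is standard to ±1 cm, one simple filter is to check each candidate hull's extent against the known size. A sketch in Python/NumPy (the 30 × 20 cm nominal size is a made-up placeholder; in the PCL code the extent would come from the `minpt`/`maxpt` of `getMinMax3D`):

```python
import numpy as np

# hypothetical nominal cut size in metres, sorted longest side first;
# the real values would come from the drawing of the frame
NOMINAL = np.array([0.30, 0.20])
TOL = 0.01   # the size is standard to +/- 1 cm

def is_standard_hole(hull_pts):
    """Accept a candidate hull (N x 2 points in the plane) only if its
    axis-aligned extent matches the nominal cut size within TOL.
    The large camera-view border fails this test."""
    extent = np.sort(hull_pts.max(axis=0) - hull_pts.min(axis=0))[::-1]
    return bool(np.all(np.abs(extent - NOMINAL) <= TOL))

hole = np.array([[0.0, 0.0], [0.30, 0.0], [0.30, 0.20], [0.0, 0.20]])
border = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.5], [0.0, 1.5]])
```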
Thanks
An alternative tool could be CloudCompare. In version 2.12 alpha, Tools > Sand box (research) > Find biggest inner rectangle 2D can be used to detect rectangular holes.
I am trying to implement motion detection in OpenCV C++. I tried various methods like MOG and optical flow, which work fine, but is there a way to eliminate constant movements in the scene, like a constantly spinning fan? I have OpenCV's accumulateWeighted() in mind but am not sure it works. Is there a better way to do it?
I don't have a fully robust solution, and I don't have much experience with video processing either, but I'll share the idea I have so far for this problem:
First, take a few pairs of consecutive frames from the video and convert them to grayscale for more robust comparison.
For each pair, compute the difference image by comparing corresponding pixels.
The difference image gives the pixel locations where something changed between the two frames of a pair. Cluster these pixel locations and fit a bounding box over them, so that the bounding box marks an object that is translating/rotating.
After applying this difference operation over several pairs, we have a bounding box around the moving content in each pair's difference image.
Now compare the bounding boxes across the difference images. Compare a bounding box's central location in one difference image with the other difference images: if a bounding box exists across all difference images with only a very slight variation in its central location, the object it contains has in-place rotational motion (like a fan or leaves), while the remaining bounding boxes represent the actually translating objects in the video.
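The last step above can be sketched as follows (Python/NumPy; the 3-pixel jitter threshold is an assumed value you would tune per scene):

```python
import numpy as np

def classify_box(centers, jitter_thresh=3.0):
    """Classify a tracked bounding box as 'rotational' (fan, leaves) when
    its center barely moves across the difference images, and as
    'translational' otherwise. centers is an N x 2 array of the box
    center in each difference image."""
    spread = np.linalg.norm(centers - centers.mean(axis=0), axis=1).max()
    return "rotational" if spread < jitter_thresh else "translational"

fan_centers = np.array([[100.0, 50.0], [101.0, 50.5], [99.5, 49.8]])
person_centers = np.array([[10.0, 40.0], [30.0, 40.0], [55.0, 41.0]])
```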
I am working on a project to detect certain objects in an aerial image, and as part of this I am trying to utilize elevation data for the image. I am working with Digital Elevation Models (DEMs), basically a matrix of elevation values. When I am trying to detect trees, for example, I want to search for tree-shaped regions that are higher than their surrounding terrain. Here is an example of a tree in a DEM heatmap:
https://i.stack.imgur.com/pIvlv.png
I want to be able to find small regions like that that are higher than their surroundings.
I am using OpenCV and GDAL for my actual image processing. Does either of those already contain techniques for what I'm trying to accomplish? If not, can you point me in the right direction? One idea I've had is to go through each pixel and calculate the rate of change relative to its surrounding pixels, hoping that pixels with high rates of change/steep slopes would mark the edge of a raised area.
Note that the elevations will change from image to image, and this needs to work with any elevation. So the ground might be around 10 meters in one image but 20 meters in another.
Supposing you can put the DEM information into a 2D Mat where each "pixel" holds an elevation value, you can find local maxima by applying dilate and then subtracting the result from the original image; the pixels where the difference is zero are local maxima.
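The dilate-and-compare trick can be sketched in plain NumPy (in practice cv2.dilate with a suitable kernel would do the sliding-window maximum; the window size here is an assumption):

```python
import numpy as np

def local_maxima(dem, size=1):
    """Gray-scale dilation with a (2*size+1)^2 window, then keep pixels
    where the original equals the dilated image, i.e. where nothing in the
    neighbourhood is higher. Note: flat plateaus also pass this equality;
    a small threshold or a strict comparison handles them in practice."""
    padded = np.pad(dem, size, mode="edge")
    h, w = dem.shape
    dilated = np.full_like(dem, -np.inf, dtype=float)
    for dy in range(2 * size + 1):
        for dx in range(2 * size + 1):
            dilated = np.maximum(dilated, padded[dy:dy + h, dx:dx + w])
    return dem == dilated

dem = np.array([[10.0, 10.0, 10.0, 10.0],
                [10.0, 14.0, 10.0, 10.0],
                [10.0, 10.0, 10.0, 12.0],
                [10.0, 10.0, 10.0, 10.0]])
peaks = local_maxima(dem)   # True at the raised cells (1,1) and (2,3)
```

Because the comparison is against each pixel's own neighbourhood, it is independent of the absolute ground elevation, which addresses the 10 m vs 20 m concern above.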
There's a related post with code examples in: http://answers.opencv.org/question/28035/find-local-maximum-in-1d-2d-mat/