I'm writing a mobile robotics application in C/C++ on Ubuntu and, at the moment, I'm using a laser sensor to scan the environment and detect collisions with objects as the robot moves.
This laser has a scan area of 270° and a maximum radius of 4000mm.
It can detect objects within this range and report their distances from the sensor.
Each distance comes in polar coordinates, so to get more readable data I convert it from polar to Cartesian coordinates, write the points to a text file, and plot them in MATLAB to see what the laser detected.
This picture shows a typical detection in Cartesian coordinates.
Values are in meters, so 0.75 is 75 centimeters and 2 is two meters. The contiguous blue points are the detected objects, while the points near (0,0) correspond to the laser position and must be discarded. Blue points with y < 0 appear because the laser's scan area is 270°; I added the red square (1.5 x 2 meters) to mark the region within which I want to implement the collision check.
So, I would like to detect in real time whether there are points (objects) inside that area and, if so, call some functions. This is a little tricky because the check should also consider contiguous points, to determine whether the object is real (i.e. if it detects a point, it should search for the nearest points to determine whether they compose an object, or whether it is only a single point that may be a detection error).
This is the function I use to perform a single scan:
struct point pt[limit*URG_POINTS];
//..
for(i = 0; i < limit; i++){
    for(j = 0; j < URG_POINTS; j++){
        ang2 = kDeg2Rad*((j*240/(double)URG_POINTS)-120); // beam angle [rad]
        offset = 0.03; // depends on the sensor module [m]
        dis = (double)dist[cnt] / 1000.0; // distance [mm] -> [m]
        // THRESHOLD of RANGE
        //if(dis > MAX_RANGE) dis = 0; // MAX_RANGE = 4 [m]
        //if(dis < MIN_RANGE) dis = 0;
        pt[cnt].x = dis * cos(ang2) * cos(ang1) + (offset*sin(ang1)); // <-- X POINTS
        pt[cnt].y = dis * sin(ang2);                                  // <-- Y POINTS
        //pt[cnt].z = dis * cos(ang2) * sin(ang1) - (offset*cos(ang1)); // 3D mapping disabled at the moment
        cnt++;
    }
    ang1 += diff;
}
After each single scan, pt contains all the detected points in x-y coordinates.
I'd like to do something like this:
perform a single scan; then, at the end,
apply the collision check to each pt.x and pt.y;
if you find a point in the inner region, check for other nearby points to see whether they form an object; if they do, stop the robot;
if not, or if no other nearby points are found, start another scan.
I'd like to know an easy way to check for objects (composed of more than one point) inside the previously defined region.
Can you help me, please?
It seems very difficult for me :(
I don't think I can give a complete answer, but here are a few thoughts on where it might be possible to go.
What do you mean by real time? How long may any given algorithm take to run? And what processor does your program run on?
Filtering the points that are within your detection area should be quite easy: just check abs(x) < 0.75 && y > 0 && y < 2. Furthermore, you should only consider points that are far enough away from the origin, so x^2 + y^2 > d^2 for some minimum distance d.
But that should be the trivial part.
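In code, a minimal sketch of such a filter might look like this (the bounds match the red square described in the question; the minimum squared distance d2min is a placeholder for discarding the points around the laser at (0,0)):

#include <vector>

struct Point2D { double x, y; };

// Keep only the points inside the 1.5 x 2 m region in front of the robot,
// discarding points too close to the origin (the laser itself).
std::vector<Point2D> filterRegion(const Point2D* pt, int n,
                                  double halfWidth = 0.75, // |x| < 0.75 m
                                  double depth = 2.0,      // 0 < y < 2 m
                                  double d2min = 0.01)     // (0.1 m)^2, a guess
{
    std::vector<Point2D> inside;
    for (int i = 0; i < n; ++i) {
        double x = pt[i].x, y = pt[i].y;
        if (x > -halfWidth && x < halfWidth &&
            y > 0.0 && y < depth &&
            x * x + y * y > d2min)
            inside.push_back(pt[i]);
    }
    return inside;
}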
It gets more interesting when detecting groups of points. DBSCAN has proven to be a fairly good clustering algorithm for detecting 2-dimensional groups of points. The critical question here is whether DBSCAN is fast enough for real-time applications.
If not, you might have to think about optimizing the algorithm (you can push its complexity to O(n log n) using some clever indexing structures).
Furthermore, it might be worth thinking about how to incorporate the knowledge you have from your last iteration (assuming a high scan frequency, the data points should not change too much).
It might be worth looking at other robotics projects - I could imagine that interpreting sensor data to build up information about the surroundings is a rather common problem.
UPDATE
It is fairly difficult to give you good advice without knowing where you are stuck applying DBSCAN to your problem. But let me try to give a step-by-step guide for how an algorithm might work (a code sketch follows the list):
For each data point you receive, check whether it is inside the region you want observed. (The conditions I gave above should work.)
If the data point is within the region, save it to some sort of list.
After reading all data points, check whether the list is empty. If so, everything is good. Otherwise you have to check whether there are bigger groups of data points that you must navigate around.
Now comes the more difficult part. You throw DBSCAN at those points and try to find groups. Which parameters will work for the algorithm I do not know - that has to be tried. After that you should have some clusters of points. I'm not totally sure what you will do with the groups - one idea would be to find, for each group, the points with the minimum and maximum angle in polar coordinates. That way you could decide how far you have to turn your vehicle. Special care has to be taken if two groups are so close that it is not possible to navigate through the gap between them.
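To make the clustering step concrete, here is a minimal, unoptimized O(n^2) DBSCAN sketch - not a tuned implementation; eps (the neighbourhood radius in meters) and minPts are placeholders that have to be tuned to your sensor's point density:

#include <cstddef>
#include <vector>

struct Point2D { double x, y; };

// Naive O(n^2) DBSCAN. Returns one label per point:
// -1 = noise (isolated point, likely a detection error),
// 0..k-1 = cluster index.
std::vector<int> dbscan(const std::vector<Point2D>& pts, double eps, std::size_t minPts)
{
    const int UNVISITED = -2, NOISE = -1;
    std::vector<int> label(pts.size(), UNVISITED);

    auto neighbours = [&](std::size_t i) {
        std::vector<std::size_t> out;
        for (std::size_t j = 0; j < pts.size(); ++j) {
            double dx = pts[i].x - pts[j].x, dy = pts[i].y - pts[j].y;
            if (dx * dx + dy * dy <= eps * eps) out.push_back(j);
        }
        return out;
    };

    int cluster = 0;
    for (std::size_t i = 0; i < pts.size(); ++i) {
        if (label[i] != UNVISITED) continue;
        std::vector<std::size_t> seeds = neighbours(i);
        if (seeds.size() < minPts) { label[i] = NOISE; continue; }
        label[i] = cluster;
        for (std::size_t k = 0; k < seeds.size(); ++k) {   // expand the cluster
            std::size_t j = seeds[k];
            if (label[j] == NOISE) label[j] = cluster;     // border point
            if (label[j] != UNVISITED) continue;
            label[j] = cluster;
            std::vector<std::size_t> more = neighbours(j);
            if (more.size() >= minPts)                     // core point: grow the seed list
                seeds.insert(seeds.end(), more.begin(), more.end());
        }
        ++cluster;
    }
    return label;
}

Points labelled -1 can be treated as single-point detection errors; any cluster inside the region is a candidate real object.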
For an implementation of DBSCAN you could look here, or just ask Google for help. It is a fairly common algorithm that has been coded thousands of times. For further speed optimizations it might be helpful to create your own implementation; however, if one of the implementations you find seems usable, I would try that first before going all the way and implementing it on my own.
If you stumble on specific problems while implementing the algorithm, I would suggest creating a new question, as it is quite distinct from this one and you might reach more people who are willing to help.
I hope things are a bit clearer now. If not, please point out exactly what you have doubts about.
Related
I have a binary image/contour containing four human beings, and I want to detect/count all of them. Since there are occlusions, I think it is best to find the heads/maxima in the contour; that way the humans can be counted.
I am able to get the global maximum/topmost point (in calculus terms), but I want to get all the local maxima.
The code for finding the topmost point is as suggested by Adrian in his blog post, i.e.:
topmost = tuple(biggest_contour[biggest_contour[:,:,1].argmin()][0])
Can anyone please suggest how to get all the local maxima, instead of just the topmost location?
Here is the sample of my Image:
The definition of "local maximum" can be tricky to pin down, but if you start with a simple method you'll develop an intuition to look further. Even if there are methods available on the web that do this work for you, it's worth implementing a few basic techniques yourself before you go googling.
One simple method I've used in the past goes something like this:
Find the contours as arrays/lists/containers of (x,y) coordinates.
At each element N (a pixel) in the list, get the pixels at N - D and N + D; that is, the pixels D ahead of and D behind the current pixel.
Calculate the point-to-point (Euclidean) distance between those two pixels.
Calculate the distance along the contour from N - D to N + D.
Calculate the ratio (distance along contour)/(point-to-point distance).
...
There are numerous other ways to do this, but this one is quick to implement from scratch and, I think, a reasonable starting point: compare the "geodesic" distance along the contour with the Euclidean distance.
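As a sketch (not tested against your image), the ratio from the steps above could be computed like this; the Pt struct and the choice of D are assumptions:

#include <cmath>
#include <vector>

struct Pt { double x, y; };

// For each contour pixel N, compare the arc length from N-D to N+D with the
// straight-line (chord) distance between those two pixels. A large ratio
// means the contour bends sharply near N - a candidate for a local maximum
// such as a head. Assumes a closed contour with more than 2*D points.
std::vector<double> bendRatio(const std::vector<Pt>& c, int D)
{
    auto dist = [](const Pt& a, const Pt& b) {
        return std::hypot(a.x - b.x, a.y - b.y);
    };
    const int n = static_cast<int>(c.size());
    std::vector<double> ratio(n, 1.0);
    for (int i = 0; i < n; ++i) {
        double arc = 0.0;                        // distance along the contour
        for (int k = -D; k < D; ++k)
            arc += dist(c[(i + k + n) % n], c[(i + k + 1 + n) % n]);
        double chord = dist(c[(i - D + n) % n], c[(i + D) % n]);
        ratio[i] = (chord > 1e-9) ? arc / chord : 1.0;
    }
    return ratio;
}

Peaks of this ratio (above some threshold) then mark candidate locations for the local maxima.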
A few other possibilities:
Do a bunch of curve fits to chunks of pixels from the contour. (Lots of details to investigate here.)
Use Ramer-Douglas-Peucker to render the outlines as polygons, then choose parameters to ensure those polygons are appropriately simplified. (Second time I've mentioned R-D-P today; it's handy.) Check for vertices with angles that deviate much from 180 degrees.
Try a corner detector. Crude, but easy to implement.
Implement an edge follower that moves from one pixel to the next in the contour list, and calculate some kind of "inertia" as the direction shifts from pixel to pixel. This wouldn't be useful on a pixel-by-pixel basis, but you could compare, say, pixels N-1, N, N+1 to pixels N+1, N+2, N+3, or just calculate the angle between them.
For a given vehicle, I implemented a suspension system on four wheels.
The system is based on Hooke's Law.
The problem: the vehicle should not be able to touch the ground. When driving inside a spherical container, the suspension gets compressed up to 100%, making the vehicle chassis touch the ground, which leads to unwanted collisions that throw the vehicle around.
While that may be realistic behaviour, our game aims for an arcade feeling, so I am looking for a formula to implement a maximum compression, so that the vehicle chassis can't come closer to the ground than X percent of the suspension size at any given moment, without actually simulating physical contact between the two rigid bodies. Thus, I need to apply a fake force to the suspensions.
My current approach:
If the vehicle chassis were about to touch the suspension base (sorry, I don't know the proper term; I mean when the suspension is at maximum compression), a force equal in magnitude and opposite in direction to the force pushing onto the suspension would be applied to the vehicle chassis, forcing it to stop moving downwards.
Therefore, I take my vehicle's world velocity V.
To get the downwards velocity, I take the dot product of the velocity and the BodyUpVector.
float DownForceMagnitude = DotProduct(VelocityAtSuspension, BodyUpVector); // signed speed along the up axis
FVector DownForce = DownForceMagnitude * BodyUpVector; // velocity component along the up axis
FVector CounterForce = -DownForce * WeightOnSuspension; // opposing pseudo-force applied to the chassis
Okay, this pseudo-code works somewhat fine on even ground, e.g. when the vehicle lands on a plane after a jump. Driving on an increasing slope, however (like driving on the inside walls of a sphere), makes the suspension reach maximum compression anyway, so apparently my approach is not correct.
I am now wondering what the cause is. My weight calculation is only approximated as VehicleWeight / 4, since Unreal Engine 4 has no functionality to query the weight at a given location. I am no physics pro, so forgive me if this is easy to calculate. Could that be the issue?
I do not need a physically 100% plausible solution; I just need one that works and sufficiently stops the downward motion of the vehicle chassis.
Any help is appreciated.
Greetings,
I had this problem with a futuristic magnetic hovercraft.
I solved it by reducing the force with ln, depending on the suspension's extension level, like so:
y = ln(ln(x+e))
where:
x = suspension extension level in % (0 being fully compressed)
y = the factor you multiply the force with
e = Euler's number
Here is a graphic to show what it will look like:
https://ggbm.at/gmGEsAzE
ln is a very slowly growing function, which is why it works so well for this.
You probably want to clamp the values (maybe between 0 and 100; I don't know exactly how your code behaves or how you want this "brake" to behave).
Tailor the function to your needs; I just wanted to suggest using ln the way I did to solve this problem.
I added e to x first to make the curve pass through (0,0); if you want it to stop earlier, just subtract from x before applying ln.
Also note that, depending on when/how you calculate and update your suspension, this (and any function applied to the force based on the suspension's extension level) may not work under some circumstances, or at all.
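As a minimal sketch of the factor itself (the function name and the 0-100 clamp are my assumptions about your setup):

#include <cmath>

// y = ln(ln(x + e)): passes through (0,0) at full compression and grows
// slowly as the suspension extends. 'extensionPercent' is the extension
// level in % (0 = fully compressed); the result is multiplied with the force.
float SuspensionForceFactor(float extensionPercent)
{
    const float e = 2.7182818f;                // Euler's number
    float x = extensionPercent;
    if (x < 0.0f)   x = 0.0f;                  // clamp, as suggested above
    if (x > 100.0f) x = 100.0f;
    return std::log(std::log(x + e));
}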
The closest thing I've found to help explain what I need is here in this question: Draw equidistant points on a spiral
However, that's not exactly what I want.
The spiral to draw is an archimedean spiral and the points obtained must be equidistant from each other. (Quote: From the question linked above.)
This is precisely what I want, given the Archimedean spiral equation r = a + b·θ.
There is a specific set of data a user can input; it is NOT based on spirals but on circular figures in general: center point [X,Y,Z], radius, horizontal separation [can be called X separation, depending on the figure], vertical separation [can be called Y separation, depending on the figure], and, most importantly, degrees of rotation. I'd like the horizontal separation to be the distance between consecutive points, since those need to be the same distance from each other. I'd also like the vertical separation to be the distance between the 'parallel' curves.
So given that specific input selection (and yes, some of it can be ignored), how can I iterate through all of the consecutive, equidistant points it takes to reach the input degrees (which can be very large but is finite) and return the X and Y coordinates of each of those points?
Basically, what I'm trying to achieve is a loop from zero to the number of degrees in the input, given the rest of the input and my preferences noted above, drawing a point for each of the equidistant, consecutive points (if you represent this in code, just 'print' instead of drawing).
I'm having a hard time explaining, but I think I got it pretty much covered. The points on this graph are exactly what I need:
Assuming a 2D case and an Archimedean spiral centered around zero (a = 0), the equation is r = b·θ. Successive turns are then 2πb apart, so to obtain a 'vertical spacing' of h, set b = h/(2π).
The exact arc length from the centre to a point at a given angle is given by Wolfram, but that solution is difficult to work with. Instead, we can approximate the arc length (using a very rough, for-large-θ approximation) as s ≈ b·θ²/2. Rearranging gives θ = √(2s/b), which lets us determine the angles corresponding to the desired 'horizontal spacing' s. If this approximation is not good enough, I would look at using something like Newton-Raphson. The question you link to also uses an approximation, although not the same one.
Finally, polar coordinates translate to Cartesian as x = r·cos(θ) = b·θ·cos(θ) and y = b·θ·sin(θ).
I get the following:
This is generated by the following MATLAB code; it should be straightforward enough to translate to C++ if that is what you actually need (a sketch of such a translation follows the code).
% Entered by user
vertspacing = 1;
horzspacing = 1;
thetamax = 10*pi;
% Calculation of (x,y) - underlying archimedean spiral.
b = vertspacing/2/pi;
theta = 0:0.01:thetamax;
x = b*theta.*cos(theta);
y = b*theta.*sin(theta);
% Calculation of equidistant (xi,yi) points on spiral.
smax = 0.5*b*thetamax.*thetamax;
s = 0:horzspacing:smax;
thetai = sqrt(2*s/b);
xi = b*thetai.*cos(thetai);
yi = b*thetai.*sin(thetai);
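For reference, a minimal C++ translation of the equidistant-points part might look like this; printing the coordinates stands in for drawing them:

#include <cmath>
#include <cstdio>

int main()
{
    const double pi = 3.141592653589793;

    // Entered by user
    double vertspacing = 1.0;
    double horzspacing = 1.0;
    double thetamax = 10.0 * pi;

    // Underlying Archimedean spiral r = b*theta.
    double b = vertspacing / (2.0 * pi);

    // Equidistant points along the spiral, using s ~ b*theta^2/2.
    double smax = 0.5 * b * thetamax * thetamax;
    for (double s = 0.0; s <= smax; s += horzspacing) {
        double theta = std::sqrt(2.0 * s / b);
        double x = b * theta * std::cos(theta);
        double y = b * theta * std::sin(theta);
        std::printf("%f %f\n", x, y);   // 'draw' the point
    }
    return 0;
}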
I have 4 lists of the x and y coordinates of calibration points. These are in no particular order and not aligned on any axis (they come from a real calibration picture with slight rotation and distortion), but the lists share the same indexing and cannot be sorted in such a way that each list is ascending/descending. They also hold floating-point values, not integers. I am now trying to find the four neighbouring points for a given point.
E.g. searching for the neighbours of the point [150,150] would return [140,140], [140,160], [160,140], [160,160] (except they would actually be more like [139.581239, 138.28812]).
At the moment I have to look through all calibration points for each point I check. There are about 500 calibration points.
Later in the process, I need to know the 4 neighbours of a random point within the 1600x1400 grid several million times, so it is crucial to find those points as fast as possible to avoid calculation times of days or even weeks.
My first approach was checking each of the ~500 calibration points for every point to check, looking at their position relative to it (x_calib > x and y_calib > y would put a calibration point somewhere in the top-right region of the point) and calculating their distance to it. The closest point in each region (top left, top right, lower left, lower right) would then be the respective neighbour point. That does not seem efficient at all and takes a lot of time.
The second approach was creating a rainbow table for each of the 1600x1400 points and saving the respective neighbours (to be exact, saving their indices in the list of coordinates). Later on, the process checks this table at positions [x,y,0], [x,y,1], [x,y,2] and [x,y,3] to get the 4 indices of the 4 neighbour points. Though calculating the rainbow table takes some time (~20 minutes for those ~2 million points), this approach speeds up the later processing. Unfortunately, it makes the later steps of the process difficult to debug, because it takes this long before the rest even starts.
I still think there should be room for optimization, and I would appreciate any suggestion or help to speed the whole thing up. I already read about kd-trees but did not quite see how to use one here. I'm hoping there's an approach for this kind of unsorted (and unsortable) list of points which is more efficient than the rainbow table - or which is at least faster at creating the table. A sketch of the kind of bucketed lookup I'm imagining follows below.
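As a rough sketch of what I mean (the cell size and image extent are placeholder guesses; my calibration spacing is about 20 px judging from the example above):

#include <cstddef>
#include <vector>

struct Pt { float x, y; };

// Bucket the ~500 calibration points into a coarse uniform grid; each
// neighbour query then only scans the few cells around the query point
// instead of all points.
class CalibGrid {
public:
    CalibGrid(const std::vector<Pt>& points, float cellSize,
              float width, float height)
        : pts(points), cell(cellSize),
          nx(static_cast<int>(width / cellSize) + 1),
          ny(static_cast<int>(height / cellSize) + 1),
          cells(static_cast<std::size_t>(nx) * ny)
    {
        for (int i = 0; i < static_cast<int>(pts.size()); ++i)
            cells[cellIndex(pts[i].x, pts[i].y)].push_back(i);
    }

    // Nearest calibration point in the quadrant given by the signs (sx, sy),
    // mirroring the four-region search from my first approach.
    // Returns -1 if no point is found in the scanned neighbourhood.
    int nearestInQuadrant(float x, float y, int sx, int sy) const
    {
        int best = -1;
        float bestD2 = 1e30f;
        int cx = static_cast<int>(x / cell);
        int cy = static_cast<int>(y / cell);
        for (int gy = cy - 2; gy <= cy + 2; ++gy) {      // 5x5 cells around
            for (int gx = cx - 2; gx <= cx + 2; ++gx) {  // the query point
                if (gx < 0 || gy < 0 || gx >= nx || gy >= ny) continue;
                for (int i : cells[static_cast<std::size_t>(gy) * nx + gx]) {
                    float dx = pts[i].x - x, dy = pts[i].y - y;
                    if (dx * sx < 0.0f || dy * sy < 0.0f) continue; // wrong quadrant
                    float d2 = dx * dx + dy * dy;
                    if (d2 < bestD2) { bestD2 = d2; best = i; }
                }
            }
        }
        return best;
    }

private:
    std::size_t cellIndex(float x, float y) const
    {
        return static_cast<std::size_t>(y / cell) * nx
             + static_cast<std::size_t>(x / cell);
    }

    const std::vector<Pt>& pts;
    float cell;
    int nx, ny;
    std::vector<std::vector<int>> cells;
};

The four neighbours would then be nearestInQuadrant(x, y, -1, -1), (-1, +1), (+1, -1) and (+1, +1). Building the buckets is O(n), and each query touches only a handful of points, so the 20-minute precomputation would no longer be needed.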
Thanks in advance!
I have a wireless mesh network of nodes, each of which is capable of reporting its 'distance' to its neighbors, measured in (simplified) signal strength. The nodes are geographically in 3D space but, because of radio interference, the distances between nodes need not be geometrically consistent: given nodes A, B and C, the distance between A and B might be 10, between A and C also 10, yet between B and C 100.
What I want to do is visualize the logical network layout in terms of connectedness of the nodes, i.e. include the logical distance between nodes in the visual.
So far my research has shown that multidimensional scaling (MDS) is designed for exactly this sort of thing. Given that my data can be directly expressed as a 2D distance matrix, it's even a simpler form of the more general MDS.
Now, there seem to be many MDS algorithms; see e.g. http://homepage.tudelft.nl/19j49/Matlab_Toolbox_for_Dimensionality_Reduction.html and http://tapkee.lisitsyn.me/ . I need to do this in C++, and I'm hoping I can use a ready-made component, i.e. not have to re-implement an algorithm from a paper. So I thought this: https://sites.google.com/site/simpmatrix/ would be the ticket. And it works, but:
The layout is not stable: every time the algorithm is re-run, the positions of the nodes change (see the differences between images 1 and 2 below - this is from two runs with no other changes). This is due to the initialization matrix (which contains the initial location of each node, which the algorithm then iteratively corrects) that is passed to the algorithm - I pass an empty one, and the implementation then derives a random one. In general, the layout does approach what I expected from the given input data. Furthermore, between different runs, the direction of nodes (clockwise or counterclockwise) can change. See image 3 below.
The 'solution' I thought was obvious was to pass a stable default initialization matrix. But when I put all nodes initially in the same place, they're not moved at all; when I put them on one axis (node 0 at 0,0; node 1 at 1,0; node 2 at 2,0; etc.), they are moved along that axis only (see image 4 below). The relative distances between them are OK, though.
So it seems this algorithm only changes the distances between nodes, but doesn't otherwise change their locations.
Thanks for reading this far - my questions are (I'd be happy to get just one or a few of them answered as each of them might give me a clue as to what direction to continue in):
Where can I find more information on the properties of each of the many MDS algorithms?
Is there an algorithm that derives the complete location of each node in a network, without having to pass an initial position for each node?
Is there a solid way to estimate the location of each point so that the algorithm can then correctly scale the distances between them? I have no geographic locations for these nodes; that is the whole point of this exercise.
Are there any algorithms to keep the 'angle' at which the network is derived constant between runs?
If all else fails, my next option is to use the algorithm mentioned above, increase the number of iterations to keep the variability between runs to around a few pixels (I'd have to experiment with how many iterations that takes), and then 'rotate' each node around node 0 to, for example, align nodes 0 and 1 on a horizontal line from left to right. That way, I would 'correct' the locations of the points after their relative distances have been determined by the MDS algorithm. I would also have to correct for the order of connected nodes (clockwise or counterclockwise) around each node. This might get hairy quite quickly.
Obviously I'd prefer a stable algorithmic solution; increasing iterations to smooth out the randomness is not very reliable.
Thanks.
EDIT: I was referred to cs.stackexchange.com and some comments have been made there; for algorithmic suggestions, please see https://cs.stackexchange.com/questions/18439/stable-multi-dimensional-scaling-algorithm .
Image 1 - with random initialization matrix:
Image 2 - after running with same input data, rotated when compared to 1:
Image 3 - same as previous 2, but nodes 1-3 are in another direction:
Image 4 - with the initial layout of the nodes on one line, their position on the y axis isn't changed:
Most scaling algorithms effectively set "springs" between nodes, where the resting length of each spring is the desired length of the edge. They then attempt to minimize the energy of the system of springs. When you initialize all the nodes on top of each other, though, the amount of energy released when any one node is moved is the same in every direction, so the gradient of energy with respect to each node's position is zero and the algorithm leaves the node where it is. Similarly, if you start them all in a straight line, the gradient is always along that line, so the nodes only ever move along it.
(That's a flawed explanation in many respects, but it works for an intuition)
Try initializing the nodes to lie on the unit circle, on a grid or in any other fashion such that they aren't all co-linear. Assuming the library algorithm's update scheme is deterministic, that should give you reproducible visualizations and avoid degeneracy conditions.
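For example, a deterministic, non-collinear initialization on the unit circle could be built like this (a sketch; the exact matrix layout your MDS library expects may differ):

#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

// Place the N nodes evenly on the unit circle: deterministic, and points on
// a circle are never all collinear, which avoids the degenerate starts above.
std::vector<std::array<double, 2>> circleInit(std::size_t n)
{
    const double pi = 3.141592653589793;
    std::vector<std::array<double, 2>> init(n);
    for (std::size_t i = 0; i < n; ++i) {
        double a = 2.0 * pi * static_cast<double>(i) / static_cast<double>(n);
        init[i] = { std::cos(a), std::sin(a) };
    }
    return init;
}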
If the library is non-deterministic, either find another library which is deterministic, or open up the source code and replace the randomness generator with a PRNG initialized with a fixed seed. I'd recommend the former option though, as other, more advanced libraries should allow you to set edges you want to "ignore" too.
I read the code of the "SimpleMatrix" MDS library and found that it uses a random permutation matrix to decide the order of points. After fixing the permutation order (just use srand(12345) instead of srand(time(0))), the result for the same data is unchanged between runs.
Obviously there's no exact solution to this problem in general; with just 4 nodes A, B, C, D and distances AB=BC=AC=AD=BD=1, CD=10, you cannot draw a suitable 2D diagram (and not even a 3D one).
What those algorithms do is simply place springs between the nodes and then simulate repulsion/attraction (depending on whether the spring is shorter or longer than the prescribed distance), probably also adding spatial friction to avoid resonance and explosion.
To keep the diagram "stable", just build a solution once and then only update the distances, re-using the current positions from the previous solution as the starting point. Picking two fixed nodes and aligning them seems a good idea to prevent slow drift, but I'd say that spring forces never create a net rotational momentum, so I'd expect that just scaling and centering the solution should be enough anyway.