Using the function convexityDefects, I take the blob from a binary image and create a vector of all the points surrounding my hand. This vector always begins at the point with the highest Y location on the blob, which interferes with my ability to run K-Curvature on each finger to determine accurate finger positions.
I have thought about reshuffling the vector so that it begins at a location away from the fingers on my hand. The problem is that this is difficult to implement, because I don't have an effective way to pick a point on the hand at which to start the reordering. Do you have any suggestions for a simple way to fix this? My goal is to have no fingertip within ~30 points of the beginning of the array.
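One simple option is to rotate the vector so it starts at a point guaranteed to be far from the fingertips. A minimal sketch in Python, assuming image coordinates where y grows downward and the hand points upward, so the point with the largest y value sits at the bottom of the blob near the wrist (the function names are illustrative, and the contour is a plain list of (x, y) tuples):

```python
def rotate_contour(contour, start_index):
    # Rotate the point list so it begins at start_index.
    return contour[start_index:] + contour[:start_index]

def start_at_wrist(contour):
    # Assumption: fingertips have small y values, so the point with the
    # largest y (bottom of the blob, near the wrist) is far from them.
    wrist = max(range(len(contour)), key=lambda i: contour[i][1])
    return rotate_contour(contour, wrist)
```

Since the contour is a closed loop, rotating it does not change the geometry at all; it only moves the seam to a place where K-Curvature windows never straddle a fingertip.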
I have two arrays, currPoints and prevPoints. They are not necessarily the same size. For each element in currPoints, I want to find the closest value in prevPoints and replace it with the currPoints value.
Example:
prevPoints{2,5,10,13,84,22}
currPoints{1,15,9,99}
After applying the algorithm
prevPoints{1,5,9,15,99,22}
So what is the best algorithm/method for this? It needs to be fast.
Context: If it helps, I am trying to work on a tracking algorithm that takes points from two consecutive frames in a video and tries to figure out which points in the first frame correspond to points in the second frame. I hope to track objects and tag them with an ID this way. Speed is crucial as processing is to be done in realtime.
You need to sort both arrays first. But remember the original ordering of the prevPoints array, as you will need to restore it at the end.
So after sorting:
prevPoints{2,5,10,13,22,84}
currPoints{1,9,15,99}
Now you basically need to figure out which of the currPoints should get into prevPoints. The algorithm will be similar to merging two sorted arrays, except that you won't merge; instead you replace values.
Initially both pointers are at the start of their arrays. 1 from currPoints should replace 2 in prevPoints, because the value in currPoints is smaller and you know the following values in prevPoints will only be higher than 2 (sorted array, remember). Replace and move both pointers on.

Now the curr pointer is at 9 and the prev pointer is at 5. Calculate the absolute difference, and keep track of the minimum absolute difference encountered so far along with the value that produced it (4 in this case). Move the prev pointer forward, as the curr pointer points to the higher value.

Now the prev pointer is at 10 and the curr pointer is at 9. 9 is less than 10, so a replacement has to happen. Since this absolute difference is smaller than the earlier one (1 < 4), 10 is replaced by 9.

Now the prev pointer is at 13 and the curr pointer is at 15.
Proceed in the same fashion.
Finally, rearrange the prevPoints array back to its original ordering.
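The idea above can be sketched as follows. For clarity this version finds, for each currPoints value, the nearest not-yet-replaced prevPoints value by a plain linear scan (O(n·m)); the sorted two-pointer walk described above brings that down to roughly O(n log n):

```python
def replace_nearest(prev_points, curr_points):
    # For each currPoints value, replace the closest not-yet-replaced
    # prevPoints value, keeping prevPoints in its original ordering.
    # Assumes len(curr_points) <= len(prev_points).
    result = list(prev_points)
    used = [False] * len(prev_points)
    for c in curr_points:
        best = min((i for i in range(len(prev_points)) if not used[i]),
                   key=lambda i: abs(prev_points[i] - c))
        result[best] = c
        used[best] = True
    return result
```

Marking replaced slots as used prevents two currPoints values from landing on the same prevPoints slot, which matches the worked example in the question.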
Hope this helps!!!
We sort the first list by x position and the second list by y position, so each point has a position in each list. The way I do a nearest-neighbour search (at least what I came up with) is to find the query point's position in each list through a binary search. Then there are four directions to travel, ±1 in x or ±1 in y, and we keep travelling in each of these directions until the distance in that one coordinate alone exceeds the best full distance found so far.

So we search in each direction. Say the closest point so far is at a distance of 25; if the next point in the +x direction is more than 25 away in x alone, we can stop on that side, because even if its difference in y were 0 it could not be closer.

This makes for an effective and quick O(n log n) closest-point algorithm for a single query. Better still, once we have built the two sorted lists in O(n log n) time, we can find the nearest neighbour of each remaining point in something like O(log n) time: find its position in the x-sorted list, find its position in the y-sorted list, then spiral out until the pruning rule triggers and you have certainly found the nearest point. Since the scaffolding is the same for every query, it should end up being quite quick.
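A sketch of the one-axis version of this pruning idea, assuming the points are pre-sorted by x (the y-sorted list works symmetrically):

```python
import bisect
import math

def nearest_point(points_by_x, q):
    # points_by_x: list of (x, y) tuples sorted by x coordinate.
    xs = [p[0] for p in points_by_x]
    pos = bisect.bisect_left(xs, q[0])
    best, best_d = None, math.inf
    lo, hi = pos - 1, pos
    # Walk outward in both x directions; abandon a side once its
    # x-distance alone exceeds the best full distance found so far.
    while lo >= 0 or hi < len(points_by_x):
        if hi < len(points_by_x):
            if points_by_x[hi][0] - q[0] < best_d:
                d = math.dist(points_by_x[hi], q)
                if d < best_d:
                    best, best_d = points_by_x[hi], d
                hi += 1
            else:
                hi = len(points_by_x)  # prune the +x side
        if lo >= 0:
            if q[0] - points_by_x[lo][0] < best_d:
                d = math.dist(points_by_x[lo], q)
                if d < best_d:
                    best, best_d = points_by_x[lo], d
                lo -= 1
            else:
                lo = -1  # prune the -x side
    return best
```

On clustered data the walk terminates after a handful of candidates; the worst case degenerates to a linear scan, which is why the answer suggests combining it with the y-sorted list.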
Though given your actual test case you might want to come up with something that is simply a very effective heuristic.
Simply matching closest points seems really naive. If we are tracking the same thing from frame to frame, the distance a point travels from F1 to F2 should be roughly equal to the distance it travelled from F0 to F1. If we assume all these points travel in roughly straight lines, we can do a much better job than just taking closest points: we could estimate the curves these points are following. If we guess a point's position in F2 by extrapolating from F0 and F1 and, lo and behold, there really is a point very close to the guess, then we can be quite sure we nailed it.

Equally, one would assume all the points belonging to an object travel in roughly the same direction. If each point moves by +5,+5 from F0 to F1, not only can we guess their positions in F2, we can also determine rather effectively that these points make up the same object.
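A minimal sketch of this constant-velocity guess: extrapolate each tracked point from its F0 and F1 positions, then pair it with the F2 detection closest to the prediction (function names are illustrative):

```python
import math

def predict(p0, p1):
    # Constant-velocity model: assume the point repeats its F0->F1 step.
    return (2 * p1[0] - p0[0], 2 * p1[1] - p0[1])

def match_to_prediction(p0, p1, detections_f2):
    # Pair the track with the F2 detection closest to the prediction.
    guess = predict(p0, p1)
    return min(detections_f2, key=lambda d: math.dist(d, guess))
```

A real tracker would also reject matches whose residual exceeds some threshold, so a point that leaves the frame does not get glued to an unrelated detection.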
I have a doubly-linked list based Polygon2D class that needs to be searched and modified, and is to be used in a game engine for various utilities like collision detection and to define graphical shapes and possibly texture coordinates, among other things. The polygon should be able to be concave or convex, but it cannot intersect itself.
I'm having trouble coming up with a method to insert a point such that it doesn't cause an intersection with the polygon. What I've been doing is searching for the closest edge to the point to insert by having two pointers to nodes, both starting at the head and iterating in separate directions. When the "next" node for either is the other pointer, the search is complete and the point is inserted between the two. Otherwise, the node iterating forward goes until it gets to the closest point so far (stopping if the next node is the other pointer), then the node iterating "backwards" does the same.
Unfortunately, this still results in intersections in cases where the edge just before the forward iterating pointer or the edge just "after" the backwards iterating pointer intersects the new edge created when inserting a new point. After that, more and more intersections can easily slip in.
Here is the insert method's code.
Can I improve this algorithm and still keep it O(n) or is there an entirely different method which may work better?
As a side note, the "findClosest[Edge](vec2 pt)" search uses a slightly modified version of the algorithm, but I feel like there must be a more effective way to do these searches without using more memory or time.
As for calculating the distance from a given point to an edge, the question Distance from a point to a polygon might help.
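One O(n) alternative to the two-pointer search that avoids the intersection problem at the seams: compute the point-to-segment distance for every edge and insert the new point into the single closest edge. A sketch in Python, with the polygon as a list of (x, y) vertex tuples rather than the linked list (the translation to nodes is mechanical):

```python
import math

def point_segment_dist(p, a, b):
    # Distance from point p to the segment a-b.
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.dist(p, a)
    t = ((p[0] - ax) * dx + (p[1] - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp the projection onto the segment
    return math.dist(p, (ax + t * dx, ay + t * dy))

def best_insert_index(polygon, p):
    # Index i such that p should be inserted between vertex i and i+1.
    n = len(polygon)
    return min(range(n),
               key=lambda i: point_segment_dist(p, polygon[i],
                                                polygon[(i + 1) % n]))
```

This still cannot guarantee a non-self-intersecting result for every concave polygon, but because it measures distance to edges rather than to vertices, it avoids the specific failure mode described above where the neighbouring edge of the chosen vertex crosses the new edges.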
I am implementing the Good Features To Track/Shi-Tomasi corner detection algorithm on CUDA and need to find a way to parallelize the following part of the algorithm:
I start with an array of points obtained from an image sorted according to a certain intensity value (an eigenvalue of a previous calculation).
Starting with the first point of the array, I remove any point in the array that is within a certain physical distance of the first point. (This distance is calculated on the image plane, not on the array).
On the resulting array, we repeat step two for the remaining points.
Is this somehow parallelizable, specifically on CUDA? I suspect not, since there are obvious dependencies across the image.
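For reference, the sequential loop described in the steps above can be sketched as follows; it makes the dependency explicit, since whether a point survives depends on every earlier surviving point:

```python
import math

def min_distance_suppression(points, min_dist):
    # points are sorted by decreasing intensity; keep a point only if
    # it is at least min_dist away from every already-kept point.
    kept = []
    for p in points:
        if all(math.dist(p, q) >= min_dist for q in kept):
            kept.append(p)
    return kept
```

The usual way to parallelize this is to relax it: bucket points into grid cells of side min_dist and keep only the strongest point per cell, which is embarrassingly parallel and gives a very similar (though not identical) result.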
I think the article Accelerated Corner-Detector Algorithms describes the way to solve this problem.
I need to find, for each point in the data set, all its nearest neighbors. The data set contains approx. 10 million 2D points. The data are close to a grid, but do not form a precise grid...

This rules out (in my opinion) the use of KD-trees, where the basic assumption is that no two points share the same x or y coordinate.

I need a fast algorithm, O(n) or better (but not too difficult to implement :-)), to solve this problem... Since Boost is not standardized, I do not want to use it...
Thanks for your answers or code samples...
I would do the following:
Create a larger grid on top of the points.
Go through the points linearly, and for each one of them, figure out which large "cell" it belongs to (and add the points to a list associated with that cell).
(This can be done in constant time for each point, just do an integer division of the coordinates of the points.)
Now go through the points linearly again. To find the 10 nearest neighbors you only need to look at the points in the adjacent, larger, cells.
Since your points are fairly evenly scattered, you can do this in time proportional to the number of points in each (large) cell.
Here is an (ugly) pic describing the situation:
The cells must be large enough for (the center) and the adjacent cells to contain the closest 10 points, but small enough to speed up the computation. You could see it as a "hash-function" where you'll find the closest points in the same bucket.
(Note that strictly speaking it's not O(n), but by tweaking the size of the larger cells you should get close enough. :-))
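The bucketing scheme above can be sketched like this, assuming the cell size is chosen so that a point's nearest neighbours always fall within the 3×3 block of cells around it (note the query point itself shows up among the candidates):

```python
import math
from collections import defaultdict

def build_grid(points, cell):
    # Bucket every point into a coarse square cell of side `cell`
    # using integer division of its coordinates.
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // cell), int(p[1] // cell))].append(p)
    return grid

def k_nearest(grid, cell, q, k):
    # Gather candidates from the 3x3 block of cells around q,
    # then sort them by true distance and take the closest k.
    cx, cy = int(q[0] // cell), int(q[1] // cell)
    candidates = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            candidates.extend(grid.get((cx + dx, cy + dy), []))
    return sorted(candidates, key=lambda p: math.dist(p, q))[:k]
```

Because the points are nearly grid-like, each cell holds a roughly constant number of points, so each query touches a small, bounded candidate set.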
I have used a library called ANN (Approximate Nearest Neighbour) with great success. It does use a Kd-tree approach, although there was more than one algorithm to try. I used it for point location on a triangulated surface. You might have some luck with it. It is minimal and was easy to include in my library just by dropping in its source.
Good luck with this interesting task!
I recently wrote an extremely basic edge detection algorithm that works on an array of chars. The program was meant to detect the edges of blobs of a single particular value on the array, and worked by simply looking left, right, up and down from each array element and checking whether one of those values differed from the value it was currently looking at. The goal was not to produce a mathematical line but rather a set of ordered points representing a discretized closed-loop edge.

The algorithm works perfectly fine, except that my data contains a bit of noise, so it would randomly produce edges where there should be none. This in turn wreaked havoc on some of my other programs down the line.

There are two types of noise in the data. The first type is fairly sparse and somewhat random. The second type is a semi-continuous straight line along the x=y axis. I know the source of the first type of noise: it's a feature of the data and there is nothing I can do about it. As for the second type, I know it's my program's fault for causing it... though I haven't a hot clue exactly what is causing it.
My question is:
How should I go about removing the noise completely?
I know that the correct data has points that are always adjacent to each other, very compact and ordered (with no gaps), forming one closed loop or several. The first type of noise is usually sparse and random, so it could easily be taken care of by checking whether any point next to a candidate point is also counted as an edge. If not, the point is most definitely noise and should be removed.

However, the second type of noise, the semi-continuous line about x=y, poses more of a problem. The line is sometimes continuous for random lengths (the longest ran halfway across my entire array unbroken). It can even intersect the actual edge.
Any ideas on how to do this?
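For the first (sparse) type, the neighbour check described above can be sketched like this, treating edge points as integer (x, y) grid coordinates:

```python
def remove_isolated(edge_points):
    # Keep an edge point only if at least one of its 8 neighbours is
    # also an edge point; lone points are treated as sparse noise.
    pts = set(edge_points)
    return [p for p in edge_points
            if any((p[0] + dx, p[1] + dy) in pts
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if (dx, dy) != (0, 0))]
```

This does nothing for the x=y line, whose points are mutually adjacent; that type needs the filtering approaches suggested in the answers.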
Normally in image processing you would use a median filter.

You also often do a dilate (make lines thicker) then an erode (make lines thinner) to close up any gaps in the lines.
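A minimal 3×3 median filter sketch in pure Python (real code would typically call something like OpenCV's medianBlur instead); it replaces each interior pixel with the median of its neighbourhood, which wipes out isolated noise pixels while largely preserving edges:

```python
def median_filter_3x3(img):
    # img: 2D list of numbers; border pixels are copied unchanged.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[yy][xx]
                            for yy in (y - 1, y, y + 1)
                            for xx in (x - 1, x, x + 1))
            out[y][x] = window[4]  # median of the 9 values
    return out
```

Running this before edge detection removes the sparse salt-and-pepper noise; a single stray pixel never survives, since it is outvoted by its eight neighbours.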
Noise tends to concentrate at higher frequencies, so run a low pass filter over the image before you do edge detection. I've seen this principle used to do sub-pixel edge detection.
This is the sort of thing that I'll throw into unit tests. Get some minimal datasets that exhibit this problem (something small enough that it can be directly encoded into the test file), run the tests, and with the small dataset just step through and see what's going on.