I am working on a Traffic Surveillance System, an OpenCV project, and I need to detect moving cars and people. I am using the background subtraction method to detect moving objects and then drawing contours around them.
I have a problem:
When two cars move close together on the road, my system detects them as one car. I have tried everything I could think of, such as Canny edge detection and transformations. Can anyone suggest a particular methodology for solving this type of problem?
Plenty of solutions are possible.
A geometric approach would detect that the one moving blob is too big to be a single passenger car. Still, this may indicate a car with a caravan. That leads us to another question: if you have two blobs moving close together, how do you know it's two cars and not one car towing a caravan? You may need to add some elementary shape detection.
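A rough sketch of that size check (assuming OpenCV in C++, a foreground mask coming from your background subtractor, and thresholds you would have to calibrate for your own camera view):

// Flag blobs whose area or width exceeds what a single passenger car
// typically occupies in this view. MAX_CAR_AREA / MAX_CAR_WIDTH are
// made-up placeholders, not values from the question.
#include <opencv2/opencv.hpp>
#include <vector>

void flagSuspectBlobs(const cv::Mat& fgMask)
{
    const double MAX_CAR_AREA  = 5000.0; // pixels^2, scene-dependent
    const int    MAX_CAR_WIDTH = 120;    // pixels, scene-dependent

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(fgMask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& c : contours)
    {
        double area  = cv::contourArea(c);
        cv::Rect box = cv::boundingRect(c);
        if (area > MAX_CAR_AREA || box.width > MAX_CAR_WIDTH)
        {
            // Probably two vehicles merged into one blob (or a car towing
            // something): hand this blob to a finer shape check / splitter.
        }
    }
}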
Another trivial approach is to observe that cars do not suddenly multiply. If you have 5 video frames, and in 4 of them you spot two cars, then it's very very likely that the 5th frame also has two cars.
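A sketch of that idea: smooth the per-frame count over a short window, so a single frame in which two cars merge into one blob does not flip the reported count (the window size and class name are illustrative):

// Majority vote over the last N per-frame vehicle counts.
#include <cstddef>
#include <deque>
#include <map>

class CountSmoother
{
public:
    explicit CountSmoother(std::size_t window = 5) : window_(window) {}

    int update(int rawCount)
    {
        history_.push_back(rawCount);
        if (history_.size() > window_) history_.pop_front();

        std::map<int, int> votes;
        for (int c : history_) ++votes[c];

        int best = rawCount, bestVotes = 0;
        for (const auto& kv : votes)
            if (kv.second > bestVotes) { best = kv.first; bestVotes = kv.second; }
        return best; // most frequent count in the recent window
    }

private:
    std::size_t window_;
    std::deque<int> history_;
};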
A CV system tracks objects as moving blobs ("clouds" of moving pixels), identifies them, and distinguishes one from another in case of occlusions. When two (or more) blobs intersect, the system merges them into one combined object and marks it with the IDs of all the source objects currently included in the combination. When one of the objects separates from the combination, the CV system recognizes which one has left and reassigns the IDs appropriately.
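A minimal sketch of that ID bookkeeping (the names Blob, mergeBlobs and splitBlob are illustrative; deciding which track actually left the combination still needs appearance or motion cues):

// A combined blob carries the set of IDs of all tracks it absorbed; when a
// track leaves, its ID is removed from the combination again.
#include <set>

struct Blob
{
    std::set<int> ids; // original track IDs contained in this blob
};

Blob mergeBlobs(const Blob& a, const Blob& b)
{
    Blob merged = a;
    merged.ids.insert(b.ids.begin(), b.ids.end());
    return merged;
}

Blob splitBlob(Blob& combined, int leavingId)
{
    combined.ids.erase(leavingId);
    Blob single;
    single.ids.insert(leavingId);
    return single;
}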
I have a wireless mesh network of nodes, each of which is capable of reporting its 'distance' to its neighbors, measured in (simplified) signal strength to them. The nodes are located in 3D space geographically, but because of radio interference the distances between nodes need not be geometrically consistent. That is, given nodes A, B and C, the distance between A and B might be 10, between A and C also 10, yet between B and C 100.
What I want to do is visualize the logical network layout in terms of the connectedness of nodes, i.e. reflect the logical distance between nodes in the visualization.
So far my research has shown that multidimensional scaling (MDS) is designed for exactly this sort of thing. Given that my data can be expressed directly as a 2D distance matrix, it's even a simpler form of the more general MDS problem.
Now, there seem to be many MDS algorithms; see e.g. http://homepage.tudelft.nl/19j49/Matlab_Toolbox_for_Dimensionality_Reduction.html and http://tapkee.lisitsyn.me/ . I need to do this in C++, and I'm hoping I can use a ready-made component, i.e. not have to re-implement an algorithm from a paper. So I thought this: https://sites.google.com/site/simpmatrix/ would be the ticket. And it works, but:
The layout is not stable, i.e. every time the algorithm is re-run, the position of the nodes changes (see differences between image 1 and 2 below - this is from having been run twice, without any further changes). This is due to the initialization matrix (which contains the initial location of each node, which the algorithm then iteratively corrects) that is passed to this algorithm - I pass an empty one and then the implementation derives a random one. In general, the layout does approach the layout I expected from the given input data. Furthermore, between different runs, the direction of nodes (clockwise or counterclockwise) can change. See image 3 below.
The 'solution' I thought was obvious was to pass a stable default initialization matrix. But when I put all nodes initially in the same place, they're not moved at all; when I put them on one axis (node 0 at 0,0; node 1 at 1,0; node 2 at 2,0, etc.), they are moved along that axis only (see image 4 below). The relative distances between them are OK, though.
So it seems like this algorithm only changes distance between nodes, but doesn't change their location.
Thanks for reading this far - my questions are (I'd be happy to get just one or a few of them answered as each of them might give me a clue as to what direction to continue in):
Where can I find more information on the properties of each of the many MDS algorithms?
Is there an algorithm that derives the complete location of each node in a network, without having to pass an initial position for each node?
Is there a solid way to estimate the location of each point so that the algorithm can then correctly scale the distance between them? I have no geographic location of each of these nodes, that is the whole point of this exercise.
Are there any algorithms to keep the 'angle' at which the network is derived constant between runs?
If all else fails, my next option is going to be to use the algorithm I mentioned above, increase the number of iterations to keep the variability between runs at around a few pixels (I'd have to experiment with how many iterations that would take), then 'rotate' each node around node 0 to, for example, align nodes 0 and 1 on a horizontal line from left to right; that way, I would 'correct' the location of the points after their relative distances have been determined by the MDS algorithm. I would have to correct for the order of connected nodes (clockwise or counterclockwise) around each node as well. This might become hairy quite quickly.
Obviously I'd prefer a stable algorithmic solution - increasing iterations to smooth out the randomness is not very reliable.
Thanks.
EDIT: I was referred to cs.stackexchange.com and some comments have been made there; for algorithmic suggestions, please see https://cs.stackexchange.com/questions/18439/stable-multi-dimensional-scaling-algorithm .
Image 1 - with random initialization matrix:
Image 2 - after running with same input data, rotated when compared to 1:
Image 3 - same as previous 2, but nodes 1-3 are in another direction:
Image 4 - with the initial layout of the nodes on one line, their position on the y axis isn't changed:
Most scaling algorithms effectively set "springs" between nodes, where the resting length of the spring is the desired length of the edge. They then attempt to minimize the energy of the system of springs. When you initialize all the nodes on top of each other though, the amount of energy released when any one node is moved is the same in every direction. So the gradient of energy with respect to each node's position is zero, so the algorithm leaves the node where it is. Similarly if you start them all in a straight line, the gradient is always along that line, so the nodes are only ever moved along it.
(That's a flawed explanation in many respects, but it works for an intuition)
Try initializing the nodes to lie on the unit circle, on a grid or in any other fashion such that they aren't all co-linear. Assuming the library algorithm's update scheme is deterministic, that should give you reproducible visualizations and avoid degeneracy conditions.
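For example, something along these lines (a sketch, assuming the library accepts an N×2 matrix of initial coordinates; adapt it to SimpleMatrix's actual initialization interface):

// Deterministic, non-degenerate initial layout: spread the N nodes evenly on
// the unit circle. Runs are reproducible, and for N > 2 no three nodes are
// collinear, so the gradient is non-zero in every direction.
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<std::array<double, 2>> circleInit(std::size_t n)
{
    const double kPi = 3.14159265358979323846;
    std::vector<std::array<double, 2>> pos(n);
    for (std::size_t i = 0; i < n; ++i)
    {
        double angle = 2.0 * kPi * static_cast<double>(i) / static_cast<double>(n);
        pos[i] = { std::cos(angle), std::sin(angle) };
    }
    return pos;
}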
If the library is non-deterministic, either find another library which is deterministic, or open up the source code and replace the randomness generator with a PRNG initialized with a fixed seed. I'd recommend the former option though, as other, more advanced libraries should allow you to set edges you want to "ignore" too.
I have read the code of the "SimpleMatrix" MDS library and found that it uses a random permutation matrix to decide the order of points. After fixing the permutation order (just use srand(12345) instead of srand(time(0))), the result for the same data is unchanged between runs.
Obviously there's no exact solution to this problem in general; with just 4 nodes ABCD and distances AB=BC=AC=AD=BD=1 and CD=10, you cannot draw a suitable 2D diagram (and not even a 3D one).
What those algorithms do is place springs between the nodes and then simulate repulsion/attraction (depending on whether the spring is shorter or longer than the prescribed distance), probably also adding spatial friction to avoid resonance and explosion.
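Roughly, one relaxation step looks like this (a toy sketch, not taken from any particular library, and without the friction term):

// Each pair of nodes is pulled together or pushed apart in proportion to how
// far its current distance deviates from the desired distance. Note the
// degenerate case: coincident nodes have no defined direction to move in.
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

void relaxOnce(std::vector<std::array<double, 2>>& pos,
               const std::vector<std::vector<double>>& desired, // N x N distance matrix
               double stepSize = 0.05)
{
    const std::size_t n = pos.size();
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = i + 1; j < n; ++j)
        {
            double dx = pos[j][0] - pos[i][0];
            double dy = pos[j][1] - pos[i][1];
            double d  = std::sqrt(dx * dx + dy * dy);
            if (d < 1e-9) continue;              // coincident nodes: no direction
            double force = d - desired[i][j];    // > 0 means too far apart
            double fx = stepSize * force * dx / d;
            double fy = stepSize * force * dy / d;
            pos[i][0] += fx; pos[i][1] += fy;
            pos[j][0] -= fx; pos[j][1] -= fy;
        }
}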
To keep a "stable" diagram, just build a solution once and then only update the distances, re-using the current positions from the previous solution as the starting point. Picking two fixed nodes and aligning them seems a good idea to prevent slow drift, but I'd say that spring forces never end up creating rotational momentum, and thus I'd expect that just scaling and centering the solution should be enough anyway.
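A sketch of that post-processing (assuming the result is just an array of 2D positions): translate node 0 to the origin, rotate so node 1 lies on the positive x-axis, and mirror if necessary so node 2 ends up above the axis. This removes the arbitrary translation, rotation and reflection that MDS leaves undetermined:

#include <array>
#include <cmath>
#include <vector>

void normalizeLayout(std::vector<std::array<double, 2>>& pos)
{
    if (pos.size() < 2) return;

    // Translate node 0 to the origin.
    double ox = pos[0][0], oy = pos[0][1];
    for (auto& p : pos) { p[0] -= ox; p[1] -= oy; }

    // Rotate so node 1 lands on the positive x-axis.
    double angle = std::atan2(pos[1][1], pos[1][0]);
    double c = std::cos(-angle), s = std::sin(-angle);
    for (auto& p : pos)
    {
        double x = p[0] * c - p[1] * s;
        double y = p[0] * s + p[1] * c;
        p[0] = x; p[1] = y;
    }

    // Flip vertically if node 2 ended up below the axis, to fix the reflection.
    if (pos.size() > 2 && pos[2][1] < 0.0)
        for (auto& p : pos) p[1] = -p[1];
}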
For a course in my Computer Science studies, I have to come up with a set of constraints and a score-definition to find a tiling for frequent itemset mining. The matrix with the data consists of ones and zeroes.
My task is to come up with a set of constraints for the tiling (having a fixed number of tiles) and a score function that needs to be maximized. Since I started working out a solution that allows overlapping tiles, I tried to find a score function to calculate the total "area" of all tiles. Bear in mind that the score function has to be evaluated for every possible solution, so I can't simply go over the total matrix (which contains about 100k elements) and check for each element whether it is part of a tile.
However, I only took into account the overlap between 2 tiles at a time, and came up with the following:
TotalArea = Sum_{a in Tiles} Area(a) - Sum_{a,b in Tiles, a < b} Overlap(a, b)
Silly me, I didn't consider a possible overlap between 3 tiles. My question is the following:
Is it possible to come up with a generic score-function for n tiles, considering only area per tile and area per overlap between 2 (or more) tiles, and if so, how would I program it?
I could provide some code, but then again it has to be programmed in some obscure language called Comet :(
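For illustration only (in C++ rather than Comet), this is what I mean by the pairwise formula above, under the assumption that a tile is a set of rows times a set of columns of the matrix, so the overlap of two tiles is the product of their row and column intersections. It still ignores overlaps of three or more tiles, which is exactly the gap:

#include <algorithm>
#include <cstddef>
#include <iterator>
#include <set>
#include <vector>

struct Tile
{
    std::set<int> rows;
    std::set<int> cols;
};

static std::size_t intersectionSize(const std::set<int>& a, const std::set<int>& b)
{
    std::vector<int> tmp;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(), std::back_inserter(tmp));
    return tmp.size();
}

// TotalArea = sum of tile areas minus the sum of pairwise overlaps.
long long totalArea(const std::vector<Tile>& tiles)
{
    long long total = 0;
    for (std::size_t i = 0; i < tiles.size(); ++i)
    {
        total += static_cast<long long>(tiles[i].rows.size()) * tiles[i].cols.size();
        for (std::size_t j = i + 1; j < tiles.size(); ++j)
            total -= static_cast<long long>(intersectionSize(tiles[i].rows, tiles[j].rows)) *
                     intersectionSize(tiles[i].cols, tiles[j].cols);
    }
    return total;
}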
I'm currently making a game with DirectX in C++. I'm using pathfinding to guide an army of soldiers to a specific location. The problem is that I use raycasts to check that there is nothing in the way of my path, and this slows the game down. Is there a better way to do pathfinding?
I also have a problem with moving my army. Right now I'm using the average of the soldiers' positions as the start point, which means all the soldiers need to go there first before moving to the end point. Is there a way to make them go to the end point without first going to that start point?
Thanks for the help.
Have you tried something like A*, navigating via nodes or some sort of 2D-array representation of your map? Written well, it could be faster, and it's also easier to split across jobs (multithreaded).
If you have a soldier who is at position A and needs to get to B, just calculate the path from C (the average position, whatever it is) to B, get the direction from A to B, and do some sort of interpolation. (I haven't done or tried this, but it could probably work out pretty well!)
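A rough sketch of that offset idea (untested, as said above; types and names are made up): compute one path from the group's average position to the goal, then let every soldier follow that path shifted by its own offset from the average, so nobody has to walk to the average point first:

#include <vector>

struct Vec2 { float x, y; };

std::vector<Vec2> pathForSoldier(const std::vector<Vec2>& groupPath, // path from average position to goal
                                 const Vec2& soldierPos,
                                 const Vec2& groupAverage)
{
    // Shift every waypoint by the soldier's offset from the group average.
    Vec2 offset { soldierPos.x - groupAverage.x, soldierPos.y - groupAverage.y };

    std::vector<Vec2> path;
    path.reserve(groupPath.size());
    for (const Vec2& p : groupPath)
        path.push_back({ p.x + offset.x, p.y + offset.y });
    return path;
}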
Are you hit-testing every object when you are raycasting?
That can be very expensive when you have many objects and soldiers.
A common solution is to divide your world into square grid cells and put each object in a list of objects for its cell.
Then you draw an imaginary line from the soldier to the destination and, for each cell that line crosses, check which objects you need to hit-test against. This way you evaluate only objects close to the straight path and ignore all the others.
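A sketch of that scheme (names and cell size are made up): bucket object IDs per cell, then hit-test only objects stored in cells that the straight line from the soldier to the destination passes through:

#include <cmath>
#include <unordered_map>
#include <vector>

struct Vec2 { float x, y; };

struct SpatialGrid
{
    float cellSize = 10.0f;
    std::unordered_map<long long, std::vector<int>> cells; // object IDs per cell

    long long key(int cx, int cy) const { return static_cast<long long>(cx) * 100000 + cy; }

    void insert(int objectId, const Vec2& pos)
    {
        cells[key(static_cast<int>(pos.x / cellSize),
                  static_cast<int>(pos.y / cellSize))].push_back(objectId);
    }

    // Collect candidate objects along the segment from a to b by sampling the
    // line at sub-cell steps (may contain duplicates; dedupe in real code).
    std::vector<int> candidatesAlong(const Vec2& a, const Vec2& b) const
    {
        std::vector<int> result;
        float dx = b.x - a.x, dy = b.y - a.y;
        float len = std::sqrt(dx * dx + dy * dy);
        int steps = static_cast<int>(len / (cellSize * 0.5f)) + 1;

        for (int i = 0; i <= steps; ++i)
        {
            float t = static_cast<float>(i) / static_cast<float>(steps);
            int cx = static_cast<int>((a.x + t * dx) / cellSize);
            int cy = static_cast<int>((a.y + t * dy) / cellSize);
            auto it = cells.find(key(cx, cy));
            if (it != cells.end())
                result.insert(result.end(), it->second.begin(), it->second.end());
        }
        return result;
    }
};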
Can anyone suggest how to solve the bypass problem when moving soldiers around? I have a grid-like game (rows = 200, columns = 200) and I can send soldiers around and find paths with the A* algorithm (one soldier occupies approximately one quarter of a cell). My problem is that when I send two soldiers on a trip across the map, they can cross paths in the same cell during the trip; how do I make them bypass each other? At the moment they act like ghosts and pass through one another. Has anyone had the same task in the past?
What you are looking for is a very simple form of collision detection. You want to see whether two units collide, i.e. would intersect or be in the same grid cell in the next time step. There are a bunch of strategies. Start with the link and read your way through the net ;)
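One simple strategy, sketched below with a made-up reservation table (not from any particular engine): before a soldier steps into its next cell, try to claim that cell for the coming time step; if it is already claimed, wait or re-plan instead of walking through the other unit:

#include <set>
#include <utility>

struct ReservationTable
{
    std::set<std::pair<int, int>> occupied; // (row, col) cells claimed for the next step

    // Returns false if another unit has already claimed the cell.
    bool tryReserve(int row, int col)
    {
        return occupied.insert({ row, col }).second;
    }

    void clear() { occupied.clear(); } // call once per simulation step
};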