(Sorry, my English is not good, but I will try to phrase it clearly.)
For example, I've got road data in a form like this:
Latitude Longitude
RoadA (consists of 2 points)
31.263319 121.5555711
31.2619722 121.5564754
RoadB (consists of 3 points)
31.2619722 121.5564754
31.2611567 121.557023
31.2610903 121.557088
As you can see, each road consists of several (2 to x) points. A road may be a curve and need many points to describe it; consecutive points are connected by straight lines.
Once I have read in all the road data, I will read in a set of query points. For each new point, I need to find out whether it lies on any of the roads. If not, I need to drop a perpendicular to the nearest road and find the coordinates of its foot (the nearest point on the road).
The number of queries is huge, so I need this to be as fast as possible. What kind of data structure should I use?
There are several spatial partitioning methods used in game development and computational geometry.
Maybe you should use one of them.
You should partition your locations in binary, quad, oct, ... trees.
I think the best way is to use a map of pairs.
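To make the core computation concrete, here is a minimal Python sketch of the "foot of the perpendicular" calculation against every road segment. The function names are made up for illustration; for real data you would replace the brute-force loop with a quadtree or grid lookup as suggested above, and project lat/lon to a planar coordinate system before measuring distances.

```python
import math

def project_point_to_segment(p, a, b):
    """Return the closest point to p on segment a-b (the foot of the
    perpendicular, clamped to the segment ends) and its distance to p.
    Points are (x, y) tuples in planar coordinates."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:                          # degenerate (zero-length) segment
        t = 0.0
    else:
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    foot = (ax + t * dx, ay + t * dy)
    return foot, math.hypot(px - foot[0], py - foot[1])

def nearest_point_on_roads(p, roads):
    """Brute-force search over every segment of every road; swap the outer
    loop for a spatial index once the data set gets large."""
    best = (None, float("inf"))
    for road in roads:                             # road = list of consecutive points
        for a, b in zip(road, road[1:]):
            foot, d = project_point_to_segment(p, a, b)
            if d < best[1]:
                best = (foot, d)
    return best                                    # (closest point, distance); distance ~0 means p is on a road

# roads = [[(31.263319, 121.5555711), (31.2619722, 121.5564754)], ...]
# foot, dist = nearest_point_on_roads((31.2615, 121.5560), roads)
```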
I am currently dealing with text recognition. Here is a part of a binarized image after edge detection (using Canny):
EDIT: I am posting a link to an image. I don't have 10 rep points so I cannot post an image.
EDIT 2: And here's the same piece after thresholding. Honestly, I don't know which approach would be better.
The questions remain the same:
How should I detect certain letters? I need to determine location of every letter and then every word.
Is it a problem that some letters are "opened"? I mean that they are not closed areas.
If I use cv::matchTemplate, does it mean that I need to have 24 templates for every letter + 10 for every digit? And then loop over my image to determine the best correlation?
If both the letters and the squares they are in are 1 pixel wide, what filters/operations should I use to close the opened letters? I tried various combinations of dilate and erode, with no effect.
The question is kind of "how do I do OCR with Open CV?" and the answer is that it's an involved process and quite difficult.
But some pointers. Firstly, it's hard to detect letters which are outlined. Most of the tools are designed for filled letters. But that image looks as if there will only be one non-letter distractor if you fill all loops using a certain size threshold. You can get rid of the non-letter lines because they are a huge connected object.
Once you've filled the letters, they can be skeletonised.
You can't use morphological operations like open and close very sensibly on images where the details are one pixel wide. You can put the image through the operation, but essentially there is no distinction between detail and noise if all features are one pixel. However once you fill the letters, that problem goes away.
This isn't in any way telling you how to do it, just giving some pointers.
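As a rough sketch of the "remove the big object, fill the loops, skeletonise" idea, assuming OpenCV 4's Python API and scikit-image's `skeletonize` (an extra dependency); the file name and area threshold are placeholders to tune for the actual image:

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize   # only needed for the last step

# binary: letters drawn as white 1-pixel outlines on a black background
binary = cv2.imread("letters.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(binary, 127, 255, cv2.THRESH_BINARY)

# 1. Drop the huge connected non-letter object (the grid of squares) by area.
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
max_letter_area = 2000                        # tune for the actual image size
for i in range(1, n):
    if stats[i, cv2.CC_STAT_AREA] > max_letter_area:
        binary[labels == i] = 0

# 2. Fill every remaining closed loop so outlined letters become solid blobs.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
filled = np.zeros_like(binary)
cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)

# 3. Optionally skeletonise the filled letters for a stroke-based recogniser.
skeleton = skeletonize(filled > 0).astype(np.uint8) * 255
```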
As mentioned in the previous answer by malcolm, OCR will work better on filled letters, so you can do the following (a rough sketch of steps 2-3 follows below):
1. Use your second approach, but take the inverse result, not the one you are showing.
2. Run connected component labeling.
3. For each component, run the OCR algorithm.
In order to discard outliers, I would use the spatial relation between detected letters: they should have another letter horizontally or vertically next to them.
Good luck
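A minimal sketch of steps 2 and 3 using OpenCV's Python bindings; the input file name, the size filter, and the `recognise` call are placeholders.

```python
import cv2

# img: binarised image with letters as white blobs on a black background
# (i.e. the inverse of the thresholded result, as suggested in step 1)
img = cv2.imread("letters_inverted.png", cv2.IMREAD_GRAYSCALE)

n, labels, stats, centroids = cv2.connectedComponentsWithStats(img, connectivity=8)

letter_boxes = []
for i in range(1, n):                               # label 0 is the background
    x, y, w, h, area = stats[i]
    if 20 < area < 2000:                            # size filter, tune per image
        letter_boxes.append((x, y, w, h))

# Sort roughly into reading order (top-to-bottom, then left-to-right) before OCR.
letter_boxes.sort(key=lambda b: (b[1] // 20, b[0]))

for x, y, w, h in letter_boxes:
    patch = img[y:y + h, x:x + w]
    # recognise(patch)  -- template matching, a trained classifier, etc.
```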
I am making an RTS game and the whole terrain is a grid (cells with x and y coordinates). I have a few soldiers in a group (a military unit) and I want to send them from point A to point B (there are obstacles between A and B). I can solve this for one soldier using the A* algorithm, and that is not the problem. How do I make my group of soldiers stay together? (I noticed a couple of corner cases where they split up and take different routes to the same destination. I can choose a leader for the group, but I don't need the soldiers to walk on the same cells as the leader; for example, a couple on the right side and a couple on the left side, if possible.) Has anyone solved a similar problem before? Any ideas for modifying the algorithm?
You want a flocking algorithm, where you have the leader of the pack follow the A* directions and the others follow the leader in formation.
In case you have very large formations you are going to get into issues like "how to fit all those soldiers through this small hole" and that's where you will need to get smart.
An example could be to enforce a single-line formation for tight spots; other approaches would involve breaking the group down into smaller squads.
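As a rough illustration of the leader/follower idea (not a full flocking implementation): the leader walks the A* path and each follower steers toward a fixed formation slot relative to the leader. All names are made up, and obstacle avoidance for the followers is omitted.

```python
import math

class Soldier:
    def __init__(self, x, y):
        self.x, self.y = float(x), float(y)

    def step_towards(self, tx, ty, speed):
        """Move up to `speed` units towards (tx, ty)."""
        dx, dy = tx - self.x, ty - self.y
        dist = math.hypot(dx, dy)
        if dist > 1e-6:
            k = min(speed, dist) / dist
            self.x += dx * k
            self.y += dy * k

def update_group(leader, followers, path, offsets, speed=1.0):
    """leader walks the A* path; each follower heads for a fixed offset
    (its formation slot) relative to the leader's current position."""
    if path:
        tx, ty = path[0]
        leader.step_towards(tx, ty, speed)
        if math.hypot(leader.x - tx, leader.y - ty) < 0.5:
            path.pop(0)                  # waypoint reached, head for the next one
    for follower, (ox, oy) in zip(followers, offsets):
        follower.step_towards(leader.x + ox, leader.y + oy, speed)

# offsets like [(-1, -1), (1, -1), (-2, -2), (2, -2)] arrange soldiers in a wedge
```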
If you don't have too many soldiers, a straightforward modification would be to treat it as a multidimensional problem, with each soldier contributing 2 dimensions. You can add constraints to this multidimensional space to ensure that your soldiers keep close to each other. However, this might become computationally expensive.
Artificial potential fields are usually less expensive and easy to implement, and they can be extended to cooperative strategies. If combined with graph search techniques, you can avoid getting stuck in local minima. Google gives plenty of starting points: http://www.google.com/search?ie=UTF-8&oe=utf-8&q=motion+planning+potential+fields
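A minimal sketch of one potential-field step, with made-up gain constants: the attractive term pulls towards the goal and the repulsive term pushes away from obstacles within a given influence radius.

```python
import math

def potential_step(pos, goal, obstacles, step=0.5,
                   k_att=1.0, k_rep=50.0, influence=5.0):
    """One gradient step on a simple potential field. Note that plain
    potential fields can get stuck in local minima unless combined with
    a graph search, as mentioned above."""
    x, y = pos
    gx, gy = goal
    fx, fy = k_att * (gx - x), k_att * (gy - y)            # attraction towards goal
    for ox, oy in obstacles:
        d = math.hypot(x - ox, y - oy)
        if 1e-6 < d < influence:
            rep = k_rep * (1.0 / d - 1.0 / influence) / (d * d)
            fx += rep * (x - ox) / d                        # push away from obstacle
            fy += rep * (y - oy) / d
    norm = math.hypot(fx, fy) or 1.0
    return (x + step * fx / norm, y + step * fy / norm)
```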
I'm building a store locator based on in-house geocoding data. Effectively I need to query stores near City X or Zip Y within a certain radius. The data sets I'm working with are relatively comprehensive and include things such as population.
One issue is that large cities (Los Angeles, for example) span many miles, so you could be within the city but miles from the coordinate we have loaded.
Is there a rule of thumb, or a free data feed, which would list an approximate radius for a city, or perhaps even the outline points of the city?
Also, assuming I have a shape defining the city what calculation would I use to say "stores within X miles of this area"?
Why don't you use the zip codes and latitude/longitude of the stores, instead of the cities? You know the addresses of the stores, so use their zip codes, look up the coordinates, and calculate the distance from the origin zip code. Then it wouldn't matter how big the city is, because big cities have many zip codes, but each store has its own zip code.
It would only be a problem in states with very large zip code areas, like Texas, but then there is likely no more than one store per zip code anyway, so it's not a big deal.
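For the distance calculation itself, a standard haversine great-circle formula is enough. A small sketch; the `stores` list and its fields are assumptions.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    r = 3958.8                                     # Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# stores_within = [s for s in stores
#                  if haversine_miles(origin_lat, origin_lon, s.lat, s.lon) <= radius_miles]
```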
Ultimately we didn't implement this feature, but before it was cancelled I had a fair amount of success using the below approach:
Finding coordinates for the city itself, as well as all zip codes of the city
"Connecting the dots" of all the above coordinates to create a polygon of the (very rough shape of the city)
Checking if the user's input coordinate was within the given range of the polygon
The above approach worked relatively well and may have ultimately developed into a sound solution with some more enhancements and tuning.
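For the "stores within X miles of this area" part, one possible approach (sketched below, assuming coordinates already projected into a planar system measured in miles) is to treat the distance as zero when the point is inside the polygon, and otherwise take the distance to the nearest polygon edge.

```python
import math

def _point_segment_distance(p, a, b):
    """Distance from point p to segment a-b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    l2 = dx * dx + dy * dy
    t = 0.0 if l2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / l2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def point_in_polygon(p, poly):
    """Ray-casting test; poly is a list of (x, y) vertices."""
    x, y = p
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def miles_from_city(p, poly):
    """0 if p is inside the city polygon, else distance to the nearest edge."""
    if point_in_polygon(p, poly):
        return 0.0
    return min(_point_segment_distance(p, poly[i], poly[(i + 1) % len(poly)])
               for i in range(len(poly)))

# a store is "within X miles of this area" if miles_from_city(store_xy, city_poly) <= X
```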
I have an image which was shown to groups of people with different domain knowledge of its content. I then recorded gaze fixation data while they watched the image.
I now want to compare the results of the two groups, so what I need to know is whether there is a correlation between the positions of the sampling data of the two groups or not.
I have the original image as well as the fixation coordinates. Do you have any good ideas on how to start analyzing the data?
It's more about the idea or the plan so you don't have to be too technical on that one.
Thanks
Simple idea: render all the coordinates on the original image in a heat-map-like way, one image for each group. You can then visually compare the images for correlation, and you have some nice graphics for your paper.
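One possible way to render such a heat map in Python (NumPy/SciPy/matplotlib); the blur sigma is just a starting value to tune.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

def gaze_heatmap(fixations, image, sigma=25):
    """fixations: list of (x, y) pixel coordinates for one group;
    image: the original image as an HxW(x3) array."""
    h, w = image.shape[:2]
    heat = np.zeros((h, w))
    for x, y in fixations:
        if 0 <= int(y) < h and 0 <= int(x) < w:
            heat[int(y), int(x)] += 1
    heat = gaussian_filter(heat, sigma=sigma)      # spread each fixation into a blob
    plt.imshow(image)
    plt.imshow(heat, cmap="jet", alpha=0.5)        # translucent heat overlay
    plt.axis("off")
    plt.show()
    return heat
```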
There is something like the two-dimensional correlation coefficient. With software like R or Matlab you can do the number crunching for the correlation.
Matlab has a function for this:
Two-dimensional correlation coefficient: corr2
r = corr2(A, B) computes the two-dimensional correlation coefficient between two matrices A and B, which must be of the same size.
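The same number can also be computed outside MATLAB; for example, a small NumPy equivalent applied to two same-size matrices (such as the two groups' binned heat maps):

```python
import numpy as np

def corr2(a, b):
    """Two-dimensional correlation coefficient (same formula as MATLAB's corr2).
    Both matrices must have the same shape."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

# r = corr2(heatmap_group1, heatmap_group2)   # compare the two groups' heat maps
```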
In gaze tracking, the most interesting data lies in two areas.
Where people look: for that you can use the heat map Daan suggests. Make a heat map for all people, and separate heat maps for each group.
When people look there: for that I would recommend starting with the same heat maps, but computed over short time intervals starting from the moment the picture was first shown. Again, for all people and for each separate group.
The resulting set of heat-maps, perhaps animated for the ones from the second point, should give you some pointers for further analysis.
For my Operating Systems class I'm going to write a scheduling simulator entitled "Jurassic Park".
The ultimate goal is to have a series of cars following a set path, and passengers waiting in line at a set location for those cars to return so they can be picked up and taken on the tour. This will be a simple 2D, top-down view of the track and the cars moving along it.
While I can code this easily without having to visually display anything, I'm not quite sure what the best way would be to implement a car moving along a fixed track.
To start out, I'm going to simply use OpenGL to draw my cars as rectangles but I'm still a little confused about how to approach updating the car's position and ensuring it is moving along the set path for the simulated theme park.
Should I store vertices of the track in a list and have each call to update() move the cars a step closer to the next vertex?
If you want curved track, you can use splines, which are mathematically defined curves specified by two vector endpoints. You plop down the endpoints, and then solve for a nice curve between them. A search should reveal source code or math that you can derive into source code. The nice thing about this is that you can solve for the heading of your vehicle exactly, as well as get the next location on your path by doing a percentage calculation. The difficult thing is that you have to do a curve length calculation if you don't want the same number of steps between each set of endpoints.
An alternate approach is to use a hidden bitmap with the path drawn on it as a single pixel wide curve. You can find the next location in the path by matching the pixels surrounding your current location to a direction-of-travel vector, and then updating the vector with a delta function at each step. We used this approach for a path traveling prototype where a "vehicle" was being "driven" along various paths using a joystick, and it works okay until you have some intersections that confuse your vector calculations. But if it's a unidirectional closed loop, this would work just fine, and it's dead simple to implement. You can smooth out the heading angle of your vehicle by averaging the last few deltas. Also, each pixel becomes one "step", so your velocity control is easy.
In the former case, you can have specially tagged endpoints for start/stop locations or points of interest. In the latter, just use a different color pixel on the path for special nodes. In either case, what you display will probably not be the underlying path data, but some prettied up representation of your "park".
Just pick whatever is easiest, and write a tick() function that steps to the next path location and updates your vehicle heading whenever the car is in motion. If you're really clever, you can do some radius based collision handling so that cars will automatically stop when a car in front of them on the track has halted.
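As a concrete sketch of the "list of vertices plus a tick() that steps toward the next vertex" approach from the question (class and field names are made up; drawing with OpenGL is left out):

```python
import math

class Car:
    def __init__(self, track, speed=2.0):
        self.track = track            # list of (x, y) vertices, treated as a closed loop
        self.speed = speed            # distance covered per tick
        self.pos = list(track[0])
        self.target = 1               # index of the vertex we are heading for
        self.heading = 0.0

    def tick(self):
        """Advance the car along the track by `speed`, possibly passing a vertex."""
        remaining = self.speed
        while remaining > 0:
            tx, ty = self.track[self.target]
            dx, dy = tx - self.pos[0], ty - self.pos[1]
            dist = math.hypot(dx, dy)
            if dist <= remaining:                     # reach the vertex, keep going
                self.pos = [tx, ty]
                remaining -= dist
                self.target = (self.target + 1) % len(self.track)
            else:
                self.pos[0] += dx / dist * remaining
                self.pos[1] += dy / dist * remaining
                remaining = 0
            if dist > 0:
                self.heading = math.atan2(dy, dx)     # useful for rotating the rectangle

# track = [(0, 0), (100, 0), (100, 50), (0, 50)]
# car = Car(track); each frame: car.tick(), then draw a rectangle at car.pos rotated by car.heading
```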
I would keep it simple:
Run a timer (every 100 ms), and on each tick draw each car at its new location. The location is read from a file containing the 2D coordinates of each car.
If the road is very long (let's say 30 seconds of travel), writing 30*10 points would be... hard. So how about storing in the file the location at every full second? Between two stored points you will have 9 blind spots; just move the car at constant speed (x += dx/9, y += dy/9).
I would like to hear a better approach :)
Well, you could use a path as you describe, either a fixed-point path or a spline, and then move at a fixed 'velocity' along this path. This may look stiff if the car moves at the same speed on the straights as when cornering.
So you could then have a speed for each path section, but you would need many speed set points, or blend the speeds; otherwise you'll get jerky speed changes.
Or you could go for a full car simulation and use A* to build the optimal path. That's overkill, but very cool.
If the car only moves forward and backward, and you know you want to go forward, you could just look at the cells around you, find the ones that are the color of the road, and move so you stay in the center of the road.
If you assume that you won't have abrupt curves, then you can assume the road is directly in front of you and just scan to the left and right to see if the road curves a bit, so you stay in the center; this cuts down on processing.
There are other approaches that could work, but this one is simple, IMO, and allows you to have gentle curves in your road.
Another approach is just to make it tile-based, so you only look at the tile in front of you and have different tiles for changes in road direction, and so you know how to turn the car to stay on the tile.
This wouldn't be as smooth but is also easy to do.