When calculating a distance from a city, how can I factor in the approximate size (physical area) of the city? - geocoding

I'm building a store locator based on in-house geocoding data. Effectively I need to query stores near City X or ZIP Y within a certain radius. The data sets I'm working with are relatively comprehensive and include things such as population.
One issue is that large cities (Los Angeles for example) are many miles in radius so you could be within the city but miles from the coordinate we have loaded.
Is there a rule of thumb, or a free data feed which would list an approximate radius of a city, or perhaps even outlines of the city points?
Also, assuming I have a shape defining the city what calculation would I use to say "stores within X miles of this area"?

Why don't you use the zip codes and latitude/longitude of the stores, instead of the cities? You know the addresses of the stores, so use their zip codes, look up the coordinates, and calculate the distance from the origin zip code. Then it wouldn't matter how big the city is, because big cities have many zip codes, but each store has its own zip code.
It would only be a problem in states with large zip codes, like Texas, but there is likely no more than one store per zip code there anyway, so it's not a big deal.
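As a minimal sketch of that lookup-and-measure step (the zip-to-coordinate table here is hypothetical, and geopy stands in for whatever distance routine you already use):
from geopy.distance import geodesic

# Zip-to-coordinate table built from your in-house geocoding data (illustrative values).
zip_coords = {
    "90001": (33.9731, -118.2479),
    "90210": (34.0901, -118.4065),
}

def miles_between_zips(origin_zip, store_zip):
    return geodesic(zip_coords[origin_zip], zip_coords[store_zip]).miles

print(round(miles_between_zips("90001", "90210"), 1))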

Ultimately we didn't implement this feature, but before it was cancelled I had a fair amount of success using the below approach:
Finding coordinates for the city itself, as well as all zip codes of the city
"Connecting the dots" of all the above coordinates to create a polygon of the (very rough shape of the city)
Checking if the user's input coordinate was within the given range of the polygon
The above approach worked relatively well and may have ultimately developed into a sound solution with some more enhancements and tuning.
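A rough sketch of that polygon idea, assuming shapely and made-up coordinates (shapely works in planar coordinates, so the mile-to-degree conversion is only approximate):
from shapely.geometry import MultiPoint, Point

# City centroid plus the centroids of its zip codes (illustrative coordinates).
city_and_zip_points = [
    (34.0522, -118.2437),  # city centroid
    (34.0195, -118.4912),
    (34.1478, -118.1445),
    (33.9425, -118.4081),
]

# "Connect the dots": the convex hull gives a very rough outline of the city.
city_shape = MultiPoint([(lon, lat) for lat, lon in city_and_zip_points]).convex_hull

def store_within_miles(store_lat, store_lon, miles):
    # Roughly 69 miles per degree of latitude; good enough for a coarse filter.
    buffer_degrees = miles / 69.0
    return city_shape.buffer(buffer_degrees).contains(Point(store_lon, store_lat))

print(store_within_miles(34.05, -118.25, 10))  # point near the centroid: True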


How to maintain city road data? (What data structure should I use)

(Sorry my English is not good, but I will try to phrase it clearly)
For example, I've got road data in a form like this:
Latitude Longitude
RoadA (consists of 2 points)
31.263319 121.5555711
31.2619722 121.5564754
RoadB (consists of 3 points)
31.2619722 121.5564754
31.2611567 121.557023
31.2610903 121.557088
As you can see, each road consists of several (2 or more) points. A road may be a curve and need many points to describe it; consecutive points are connected by straight lines.
Once I have read in all the road data, I will read in a set of query points. For each new point, I need to find out whether it lies on any of the roads. If not, I need to drop a perpendicular to the nearest road and find the coordinates of its foot (the nearest point on the road).
The number of queries is huge, so lookups need to be as fast as possible. What kind of data structure should I use?
There are spatial partitioning methods used in game development and computational geometry that fit this problem.
Maybe you should use one of them: partition your locations into a binary, quad-, or octree.
I think the best way is to use a map of pairs.
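Whatever index is used to narrow down candidate segments, the per-segment test is a perpendicular projection. A minimal sketch (brute force over all segments; a grid or quadtree would prune the candidates first):
import math

def project_onto_segment(p, a, b):
    # Return the point on segment a-b closest to p (planar approximation).
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        return a
    # Parameter of the perpendicular foot, clamped to stay on the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    return (ax + t * dx, ay + t * dy)

def nearest_point_on_roads(p, roads):
    # roads: list of point lists, e.g. [[(lat, lon), (lat, lon)], ...]
    best, best_dist = None, float("inf")
    for road in roads:
        for a, b in zip(road, road[1:]):
            q = project_onto_segment(p, a, b)
            d = math.hypot(q[0] - p[0], q[1] - p[1])
            if d < best_dist:
                best, best_dist = q, d
    return best

roads = [
    [(31.263319, 121.5555711), (31.2619722, 121.5564754)],
    [(31.2619722, 121.5564754), (31.2611567, 121.557023), (31.2610903, 121.557088)],
]
print(nearest_point_on_roads((31.2620, 121.5560), roads))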

Finding objects within x miles of a point

I'm working on getting all events within 10 miles of the user's location. My models look something like this:
class User(models.Model):
    location = models.PointField()
    ...

class Event(models.Model):
    location = models.PointField()
    ...
In my tests, when I check the distance between the user and an event, I get the value 11.5122663513:
from geopy.distance import vincenty
print vincenty(request.user.location, event.location).miles # 11.5122663513
Yet, when I query for all events within 10 miles of the user's location, that event is returned:
Event.objects.filter(location__distance_lte=(request.user.location, D(mi=10))).count() # 1
Only when I drop the radius to less than 4 miles does the filter take effect:
Event.objects.filter(location__distance_lte=(request.user.location, D(mi=3))).count() # 0
I'm following the docs' example almost exactly, so I don't think my query is the problem.
What could be causing this discrepancy?
This very much depends on what type of database you are using.
Because cartesian math is much faster than geospatial math, the query likely treats coordinates as if they are on a plane rather than on a sphere.
The docs explain it this way:
Most people are familiar with using latitude and longitude to
reference a location on the earth’s surface. However, latitude and
longitude are angles, not distances. In other words, while the
shortest path between two points on a flat surface is a straight line,
the shortest path between two points on a curved surface (such as the
earth) is an arc of a great circle. Thus, additional computation
is required to obtain distances in planar units (e.g., kilometers and
miles). Using a geographic coordinate system may introduce
complications for the developer later on. For example, Spatialite does
not have the capability to perform distance calculations between
geometries using geographic coordinate systems, e.g. constructing a
query to find all points within 5 miles of a county boundary stored as
WGS84.
Portions of the earth’s surface may be projected onto a two-dimensional,
or Cartesian, plane. Projected coordinate systems are especially
convenient for region-specific applications, e.g., if you know that
your database will only cover geometries in North Kansas, then you may
consider using a projection system specific to that region. Moreover,
projected coordinate systems are defined in Cartesian units (such as
meters or feet), easing distance calculations.
Furthermore, this may be influenced by your database choice. If you are using Postgres/PostGIS, it has the following note in the docs:
In PostGIS, ST_Distance_Sphere does not limit the geometry types
geographic distance queries are performed with. However, these
queries may take a long time, as great-circle distances must be
calculated on the fly for every row in the query. This is because the
spatial index on traditional geometry fields cannot be used.
For much better performance on WGS84 distance queries, consider using
geography columns in your database instead because they are able to
use their spatial index in distance queries. You can tell GeoDjango to
use a geography column by setting geography=True in your field
definition.
You can check this yourself by printing out the raw SQL:
qs = Event.objects.filter(location__distance_lte=(request.user.location, D(mi=10)))
print qs.query
Depending on your database type and the amount of data you plan to store, you have a few options:
Filter the points a second time in Python
Try setting geography=True (a minimal sketch of these first two options is below)
Set an explicit SRID
Take a point, buffer it out into a circle with the given radius, and then find points within that circle using contains
Use a different database type
If you share the raw query it'll be easier to figure out what is happening.
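As a rough sketch of the first two options (the model and helper below are illustrative, not your code, and geopy's geodesic is used in place of vincenty):
from django.contrib.gis.db import models
from django.contrib.gis.measure import D
from geopy.distance import geodesic

class Event(models.Model):
    # In your app's models.py. geography=True stores the point in a geography
    # column (WGS84), so distance lookups use geodetic math and keep the index.
    location = models.PointField(geography=True)

def events_within(user_location, miles):
    # First pass: let the database do a rough distance filter.
    candidates = Event.objects.filter(
        location__distance_lte=(user_location, D(mi=miles))
    )
    # Second pass: verify each candidate with a geodesic calculation in Python.
    return [
        e for e in candidates
        if geodesic((user_location.y, user_location.x),
                    (e.location.y, e.location.x)).miles <= miles
    ]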

Spatial queries on AWS SimpleDB

I would like to know what people suggest as efficient ways of doing a spatial query in Amazon Web Services SimpleDB.
By spatial query I mean finding objects in a given radius of a latitude and longitude.
SimpleDB doesn't currently offer any built-in spatial search operations, but that doesn't mean it can't be done. There are several methods of implementing geospatial searches in non-geospatially-aware databases such as SimpleDB, and all of them center around the idea of using the database to retrieve a rough first selection based on a geospatial bounding box and then filtering the returned data in your application using more accurate algorithms such as the Haversine formula.
You could store the latitude and longitude as (zero-padded and normalized) numeric attributes and then perform a double range query (lat >= minLat and lat <= maxLat and lon >= minLon and lon <= maxLon), but since neither of these predicates is selective (each predicate matches a lot of items) it's not ideal (see Tuning Queries).
A better way would be using GeoHashes.
Geohashes offer properties like arbitrary precision, similar prefixes
for nearby positions, and the possibility of gradually removing
characters from the end of the code to reduce its size (and gradually
lose precision).
As a practical example, the Geohash 6gkzwgjzn820 decodes to the
coordinates -25.382708 and -49.265506, while the Geohash 6gkzwgjz will
decode to -25.383 and -49.266, and if we take a similar position in
the same region, such as -25.427 and -49.315, we can see it being
encoded as 6gkzmg1w (note the similar prefix).
From http://geohash.org/site/tips.html
With your item positions stored as GeoHashes you could use the like operator to search for a bounding box (where GeoHash like '6gkzmg1w%'), but since the like operator is expensive (see Comparison Operators) a better way would be to denormalize the data by storing each GeoHash prefix level (how many depends on your required search precision) as a separate attribute (GeoHash6, GeoHash8, etc.) and then use a simple equality predicate (where GeoHash8 = '6gkzmg1w').
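A small sketch of that denormalization step, assuming the python-geohash package (any Geohash encoder would work) and illustrative attribute names:
import geohash

def geohash_attributes(lat, lon):
    full = geohash.encode(lat, lon, precision=12)
    return {
        "GeoHash12": full,     # full precision
        "GeoHash8": full[:8],  # coarser cell
        "GeoHash6": full[:6],  # coarser still
    }

# Store these as attributes on each SimpleDB item, then query with an
# equality predicate such as:  where GeoHash8 = '6gkzwgjz'
print(geohash_attributes(-25.382708, -49.265506))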
Now on to the downside of GeoHashes. Since you can't make any assumption of a GeoHash being centered within your search box you have to search all neighboring prefixes as well. The process is excellently described by geohash-js
Geohash also has the property that as the number of digits decreases
(from the right), accuracy degrades. This property can be used to do
bounding box searches, as points near to one another will share
similar Geohash prefixes.
However, because a given point may appear at the edge of a given
Geohash bounding box, it is necessary to generate a list of Geohash
values in order to perform a true proximity search around a point.
Because the Geohash algorithm uses a base-32 numbering system, it is
possible to derive the Geohash values surrounding any other given
Geohash value using a simple lookup table.
So, for example, 1600 Pennsylvania Avenue, Washington DC resolves to:
38.897, -77.036
Using the geohash algorithm, this latitude and longitude is converted
to: dqcjqcp84c6e
A simple bounding box around this point could be described by
truncating this geohash to: dqcjqc
However, 'dqcjqcp84c6e' is not centered inside 'dqcjqc', and searching
within 'dqcjqc' may miss some desired targets.
So instead, we can use the mathematical properties of the Geohash to
quickly calculate the neighbors of 'dqcjqc'; we find that they are:
'dqcjqf','dqcjqb','dqcjr1','dqcjq9','dqcjqd','dqcjr4','dqcjr0','dqcjq8'
This gives us a bounding box around 'dqcjqcp84c6e' roughly 2km x 1.5km
and allows for a database search on just 9 keys: SELECT * FROM table
WHERE LEFT(geohash,6) IN ('dqcjqc',
'dqcjqf','dqcjqb','dqcjr1','dqcjq9','dqcjqd','dqcjr4','dqcjr0','dqcjq8');
Translated to a SimpleDB query that'd be where GeoHash6 in('dqcjqc', 'dqcjqf', 'dqcjqb', 'dqcjr1', 'dqcjq9', 'dqcjqd', 'dqcjr4', 'dqcjr0', 'dqcjq8'), and then you'll do your Haversine filtering on the results in order to get only the items that are within your search radius.
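A minimal sketch of that second-pass filter (the Haversine formula is standard; the item attribute names are illustrative):
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points in miles.
    r = 3958.8  # mean Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def filter_by_radius(items, center_lat, center_lon, radius_miles):
    # items: rows returned by the GeoHash6 in (...) query, each with lat/lon attributes.
    return [
        item for item in items
        if haversine_miles(center_lat, center_lon, item["lat"], item["lon"]) <= radius_miles
    ]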
I'm going to leave this here because it might help you!
14 years ago we tried to do a geo lookup table of locations within a radius. There were obviously no geospatial indexes or anything like that.
There was literally only standard SQL and Oracle... anyway, we ended up converting all lat/lng values into kilometers from a fixed reference point on a flat plane. Essentially what geospatial indexes do these days.
To explain what it does: it flattens the world onto a plane, and with a bit of SQL trickery you can select by radius and even get the distance between the two points you're selecting. Since the values are plain integers, the queries are blazing fast.
Here is a simple example in PHP, with a SQL query that looks complex but is quite easy once you understand it:
https://gist.github.com/tobsn/899413
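For the idea itself (not the linked gist), a rough sketch of the flat-plane conversion, here in Python: project lat/lng into kilometers around a reference latitude, after which plain Euclidean math and ordinary numeric indexes handle radius queries:
import math

EARTH_RADIUS_KM = 6371.0
REF_LAT = 40.0  # pick a latitude near your data to limit distortion

def to_flat_km(lat, lon):
    x = math.radians(lon) * EARTH_RADIUS_KM * math.cos(math.radians(REF_LAT))
    y = math.radians(lat) * EARTH_RADIUS_KM
    return x, y

def within_radius_km(center, point, radius_km):
    cx, cy = to_flat_km(*center)
    px, py = to_flat_km(*point)
    return math.hypot(px - cx, py - cy) <= radius_km

print(within_radius_km((40.7128, -74.0060), (40.7306, -73.9352), 10))  # about 6 km apart: True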

Analyzing gaze tracking data

I have an image which was shown to groups of people with different domain knowledge of its content. I then recorded gaze fixation data of them watching the image.
I now want to compare the results of the two groups, so what I need to know is whether the positions of the fixation samples are correlated between the two groups or not.
I have the original image as well as the fixation coords. Do you have any good idea how to start analyzing the data?
It's more about the idea or the plan so you don't have to be too technical on that one.
Thanks
Simple idea: render all the coordinates on the original image in a 'heat map' like way, one image for each group. You can then visually compare the images for correlation, and you have some nice graphics for your paper.
There is something like the two-dimensional correlation coefficient. With software like R or Matlab you can do the number crunching for the correlation.
Matlab has a function for this, corr2:
Two-Dimensional Correlation Function: corr2
Computes the two-dimensional correlation coefficient between two matrices; the matrices must be of the same size. r = corr2(A, B)
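A small sketch of both ideas in Python with numpy (binned fixation counts stand in for a smoothed heat map; the fixation data here is made up):
import numpy as np

def fixation_heatmap(fixations, image_shape, bins=(40, 40)):
    # fixations: array of (x, y) coordinates; returns a 2D histogram over the image.
    xs, ys = fixations[:, 0], fixations[:, 1]
    heatmap, _, _ = np.histogram2d(
        ys, xs, bins=bins, range=[[0, image_shape[0]], [0, image_shape[1]]]
    )
    return heatmap

def corr2(a, b):
    # Two-dimensional correlation coefficient, analogous to MATLAB's corr2.
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

# Made-up fixation data for two groups on a 600 x 800 image.
rng = np.random.default_rng(0)
group_a = rng.uniform([0, 0], [800, 600], size=(200, 2))
group_b = rng.uniform([0, 0], [800, 600], size=(200, 2))
print(corr2(fixation_heatmap(group_a, (600, 800)),
            fixation_heatmap(group_b, (600, 800))))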
In gaze tracking, the most interesting data lies in two areas.
In where all people look, for that you can use the heat map Daan suggests. Make a heat map for all people, and heat maps for separate groups of people.
In when people look there. For that I would recommend you start by making heat maps as above, but for short time intervals starting from the time the picture was first shown. Again, for all people, and for the separate groups you have.
The resulting set of heat-maps, perhaps animated for the ones from the second point, should give you some pointers for further analysis.

Population-weighted center of a state

I have a list of states, major cities in each state, their populations, and lat/long coordinates for each. Using this, I need to calculate the latitude and longitude that corresponds to the center of a state, weighted by where the population lives.
For example, if a state has two cities, A (population 100) and B (population 200), I want the coordinates of the point that lies 2/3rds of the way between A and B.
I'm using the SAS dataset that comes installed, called maps.uscity. It also has some variables called "Projected Longitude/Latitude from Radians", which I think might allow me just to take a simple average of the numbers, but I'm not sure how to get them back into unprojected coordinates.
More generally, if anyone can suggest of a straightforward approach to calculate this it would be much appreciated.
The Census Bureau has actually done these calculations, and posted the results here: http://www.census.gov/geo/www/cenpop/statecenters.txt
Details on the calculation are in this pdf: http://www.census.gov/geo/www/cenpop/calculate2k.pdf
To answer the question that was asked, it sounds like you might be looking for a weighted mean. Just use PROC MEANS and take a weighted average of each coordinate:
/* data from http://www.world-gazetteer.com/ */
data AL;
input city :$10. pop lat lon;
datalines;
Birmingham 242452 33.53 86.80
Huntsville 159912 34.71 86.63
Mobile 199186 30.68 88.09
Montgomery 201726 32.35 86.28
;
proc means data=AL;
weight pop;
var lat lon;
run;
Itzy's answer is correct. The US Census's lat/lng centroids are based on population. In contrast, the USGS GNIS data's lat/lng averages are based on administrative boundaries.
The files referenced by Itzy are the 2000 US Census data. The Census Bureau is in the process of rolling out the 2010 data. The following link is a gateway to all of this data.
http://www.census.gov/geo/www/tiger/
I can answer a lot of geospatial questions. I am part of a public domain geospatial team at OpenGeoCode.Org
I believe you can do this using the same method used for calculating the center of gravity of an airplane:
Establish a reference point southwest of any part of the state. Actually it doesn't matter where the reference point is, but putting it SW will keep all numbers positive in the usual x-y sense we tend to think of things.
Logically extend N-S and E-W lines from this point.
Also extend such lines from the cities.
For each city get the distance from its lines to the reference lines. These are the moment arms.
Multiply each of the distance values by the population of the city. Effectively you're getting the moment for each city.
Add all of the moments.
Add all of the populations.
Divide the total of the moments by the total of the populations, and you have the center of gravity of the populations involved with respect to the reference point.
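A quick numeric check of those steps in Python (hypothetical city data; summing moments and dividing by total population is the same as taking a population-weighted mean of the coordinates):
cities = [
    # (population, lat, lon)
    (100, 30.0, -90.0),  # city A
    (200, 33.0, -87.0),  # city B
]

total_pop = sum(pop for pop, _, _ in cities)
# Sum of moments divided by total population = population-weighted mean.
center_lat = sum(pop * lat for pop, lat, _ in cities) / total_pop
center_lon = sum(pop * lon for pop, _, lon in cities) / total_pop

print(center_lat, center_lon)  # 32.0 -88.0, i.e. two-thirds of the way from A to B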