Yahoo! PlaceFinder API - geocoding

Is there a way to get all of the POIs (Points of Interest) and AOIs (Areas of Interest), let's say for a specific state?
I would like to be able to auto-complete while someone is typing if the input matches any of that data, but I don't know how to get a list of it all.
I would even be fine with storing it in my database if need be, because I can't see it changing very often.

No, PlaceFinder itself does not provide a way to download its places database; it's only intended for individual lookups.
You might look at the GeoPlanet data downloads, which make the entire WOEID (Where On Earth ID) geo database available under a CC license.
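If you do go the GeoPlanet route, here is a minimal sketch of loading one of the places dumps into SQLite to back an autocomplete lookup. The dump file name and column names follow the GeoPlanet data README and may differ by version, so treat them as assumptions:

```python
import csv
import sqlite3

conn = sqlite3.connect("places.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS places (woeid INTEGER PRIMARY KEY, name TEXT, placetype TEXT)"
)

# hypothetical dump file; columns follow the GeoPlanet places TSV layout
with open("geoplanet_places_7.10.0.tsv", encoding="utf-8") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        conn.execute(
            "INSERT OR REPLACE INTO places VALUES (?, ?, ?)",
            (row["WOE_ID"], row["Name"], row["PlaceType"]),
        )
conn.commit()

# autocomplete-style prefix query against the local copy
prefix = "San Fr"
matches = conn.execute(
    "SELECT name FROM places WHERE name LIKE ? ORDER BY name LIMIT 10",
    (prefix + "%",),
).fetchall()
```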

Bulk geocoding for associating addresses to shapefiles [here-api]

I was wondering if I can use the HERE API to geocode addresses in bulk and store the results. I need to store them in a Python data frame so I can afterwards merge them with a shapefile. However, HERE's ToS say that you cannot store the results from geocoding.
I've been in your situation. Let me list the options that we've considered.
Option 1: fetch address information dynamically using their Geocoding API. The advantage is that you'll always have up-to-date information, and you don't need to query places that you'll never use anyway.
Option 2: I'm assuming that your shapefile is based on HERE data. You could still try to use OpenStreetMap; you'll get some inaccuracies where the map data differs, but it's free and not as restrictive as HERE (see the sketch below).
Option 3: buy the HERE map data to work around the ToS restrictions.
By the way, we went for an option 4: switching to OpenStreetMap entirely. But it sounds like you don't have that luxury, because you have to work with existing shapefiles that you want to "enrich".
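For option 2, a minimal Python sketch of geocoding through OpenStreetMap's Nominatim (via geopy) and joining the results onto a shapefile with geopandas might look like this; the address column, sample address, and shapefile name are hypothetical:

```python
import geopandas as gpd
import pandas as pd
from geopy.extra.rate_limiter import RateLimiter
from geopy.geocoders import Nominatim

# Nominatim's usage policy asks for an identifying user agent and ~1 request/s
geolocator = Nominatim(user_agent="my-shapefile-enrichment-script")
geocode = RateLimiter(geolocator.geocode, min_delay_seconds=1)

df = pd.DataFrame({"address": ["1600 Pennsylvania Ave NW, Washington, DC"]})
df["location"] = df["address"].apply(geocode)
df["lat"] = df["location"].apply(lambda loc: loc.latitude if loc else None)
df["lon"] = df["location"].apply(lambda loc: loc.longitude if loc else None)

# turn the geocoded rows into points and spatially join them to the shapes
points = gpd.GeoDataFrame(
    df.drop(columns="location"),
    geometry=gpd.points_from_xy(df["lon"], df["lat"]),
    crs="EPSG:4326",
)
shapes = gpd.read_file("regions.shp")  # hypothetical shapefile
joined = gpd.sjoin(points, shapes.to_crs("EPSG:4326"), how="left", predicate="within")
```

Note that the hosted Nominatim service has its own usage policy (no heavy bulk use), so for large batches you'd run your own Nominatim instance against an OSM extract.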

Retrieving data extension used to filter e-mail send

I'm new to ExactTarget, and I'm having a hard time doing something that should be simple.
We direct our e-mails to a list of All Subscribers and filter them using data extensions. When I go to the tracking page of a particular e-mail, there it is in the Summary, with all the info and the Data Extensions, in the format:
name (number) sent (Using All Subscribers) (number sent)
The thing is, when I do a tracking extract I don't get this information. I tried to extract everything that was possible and couldn't find it. All I get, for each SendID (for each subscriber, in fact), is the ListID, which will always be the same (the one for All Subscribers). I want to know which segment we used.
I tried to deep-dive into the SOAP API, with no luck; again, all I can retrieve is the List, never the data extension.
This must be retrievable, since it appears on the Tracking Summary. So my question is: how do I retrieve it?
I considered programming a simple robot to scrape this info, but there must be a better way.
Thanks
Guess nobody cares much, but in case someone faces the same problem: I got a final answer from support. The information is not retrievable through the API or the UI (except through the Tracking Summary, which must be accessed send by send).
But a custom report can be requested. Still waiting to see if it's free of charge (it should be, right? As it's something that should have been available in the first place).

Best practice for creating an unalterable report file in C++

I am currently developing a Windows application that tests railroad equipment to find any defects.
Utility A => OK
Utility B => NOK
...
This application will check the given equipment and generate a report.
This report needs to be written once, and no further modifications are allowed, since the file can be used as working proof for the equipment.
My first idea was to use PDF files (the Haru library looks great), but a PDF can also be modified.
I told myself that I could obfuscate the report and implement a homemade reader inside my application, but whichever way I store it, the file could always be accessed and modified, right?
So I'm running out of ideas.
Sorry if my approach and my problem appear naive, but it's an internship.
Thanks for any help.
Edit: I could also compute checksums for the files after generating them, keep a "checksums record file", and implement a checksum-comparison tool for verification? Just thought about this.
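A minimal sketch of that checksum idea, using SHA-256 from Python's standard library (the report file name is hypothetical, and the same logic ports directly to C++ with a crypto library such as OpenSSL). Keep in mind it only detects tampering if the checksum record itself is stored somewhere the user cannot alter:

```python
import hashlib

def sha256_of(path):
    # hash the file in chunks so large reports don't need to fit in memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# at generation time: record the digest in the protected "checksums record file"
recorded = {"report_1234.pdf": sha256_of("report_1234.pdf")}

# at verification time: recompute and compare; a mismatch means the file changed
assert sha256_of("report_1234.pdf") == recorded["report_1234.pdf"]
```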
I believe the answer to your question is to use any format whatsoever and add a digital signature anybody can verify: e.g., create a GnuPG key, get that key signed by the people who need to check your documents, upload it to one of the key servers, and use it to sign the documents. You can publish the documents and have a link to your public key available for verification; for critical cases, someone verifying must trust your signature (i.e., trust somebody who signed your key).
People's lives depend on the state of train inspections. Therefore, I find it hard to believe that someone expects you to solve this problem only using free-as-in-beer components.
Adobe supports a strong digital-signature model. If you buy into their technology base, you can create PDFs that are digitally signed and therefore tamper-evident, since the consumer can check for the signature.
You can, as someone else pointed out, use GnuPG, or for that matter OpenSSL, to implement your own signature scheme, but railroad regulators are somewhat less likely to know how to work with it.
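For the GnuPG route, a minimal sketch with the python-gnupg bindings might look like the following; the key ID and file names are hypothetical, and the equivalent flow is available from C++ through GPGME or OpenSSL:

```python
import gnupg

gpg = gnupg.GPG(gnupghome="/path/to/keyring")  # keyring holding your signing key

# sign the generated report, producing a detached signature file
with open("report_1234.pdf", "rb") as f:
    gpg.sign_file(f, keyid="YOUR_KEY_ID", detach=True, output="report_1234.pdf.sig")

# anyone holding the public key can later check the report was not altered
with open("report_1234.pdf.sig", "rb") as f:
    verified = gpg.verify_file(f, data_filename="report_1234.pdf")

print("signature OK" if verified else "report or signature was tampered with")
```

A detached signature leaves the report itself byte-for-byte unchanged, which is convenient when the consumer just wants to open the PDF.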
I would store reports in an encrypted/protected datastore.
When a user accesses a report (requests a copy; the original is of course always in the database and cannot be modified), it includes the text "Report #XXXXX". If you want to validate a report, retrieve a fresh copy from the system using the Report ID.

Selecting content based on locale

Given that I have descriptions of, say, product X in multiple languages, along with its price and availability dictated by region/locale, how do I go about telling Django to render the most appropriate variant of the content based on the region of the request's origin? Amazon would be a good example of what I am trying to achieve.
Is it best to store each variant in the database and afterwards look at the request header to serve the most appropriate content, or is there a best-practice way to achieve this?
I was struggling with the same problem. The localeurl library seems to handle these cases, so you don't have to write the logic yourself. I still haven't tested the library, but at first glance it seems to be exactly what we need. You can read more about it here
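For the store-each-variant approach from the question, a minimal Django sketch might look like this; the models, field names, and English fallback are hypothetical:

```python
from django.db import models
from django.utils import translation

class Product(models.Model):
    sku = models.CharField(max_length=32)

class ProductVariant(models.Model):
    product = models.ForeignKey(Product, related_name="variants", on_delete=models.CASCADE)
    language = models.CharField(max_length=10)  # e.g. "en-us", "de"
    description = models.TextField()
    price = models.DecimalField(max_digits=10, decimal_places=2)

def variant_for_request(request, product):
    # Django resolves the language from the session, cookie, and the
    # Accept-Language header sent by the browser
    lang = translation.get_language_from_request(request)
    return (product.variants.filter(language=lang).first()
            or product.variants.filter(language="en").first())
```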

Looking for Ideas: How would you start to write a geo-coder?

Because the open source geo-coders cannot begin to compare to Google's or even Yahoo's, I would like to start a project to create a good open source geo-coder. Just to clarify, a geo-coder takes some text (usually with some constraints) and returns one or more lat/lon pairs.
I realize that this is a difficult and gargantuan task, so I am wondering how you might get started. What would you read? What algorithms would you familiarize yourself with? What code would you review?
And also, assuming you were going to develop this very agilely, what would you want the first prototype to be able to do?
EDIT: Let's set aside the data question for now. I am going to use OpenStreetMap data, along with a database of waypoints that I have. I would later plan to include other data sets as well, and I realize the geo-coder would be inherently limited by the quality of the original data.
The first (and probably blocking) problem would be: where do you get your data from? (Unless you are willing to pay thousands of dollars for proprietary data sets.)
You could build a geocoding API on top of OpenStreetMap (they publish their data in dumps on a regular basis), I guess, but that data was still very incomplete last time I checked.
Algorithms are easy. Good mapping data, however, is expensive. Very expensive.
Google drove their cars all over the world, collecting this data among other things.
From a .NET point of view these articles might be interesting for you:
Writing Your Own GPS Applications: Part I
Writing Your Own GPS Applications: Part 2
Writing GIS and Mapping Software for .NET
I've only glanced at the articles but they've been on CodeProject's 'Most Popular' list for a long time.
And maybe this CodePlex project which the author of the articles above made available.
I would start at the absolute beginning by figuring out how you're going to get the data that matches a street address with a geocode. Either Google had people going around with GPS units, or they got the information from some existing source. That existing source may have been... (all guesses)
The Postal Service
Some existing maps(printed)
A bunch of enthusiastic users, early adopters of GPS technology, who were more than willing to enter street addresses and GPS coordinates
Some government entity (or entities)
Their own satellites
etc
I guess what I'm getting at is the information was either imported from somewhere or was input by someone via some interface. As my starting point I would look at how to get that information. In an open source situation, you may be able to get a bunch of enthusiastic people to enter information.
So for my first prototype, boring as it would be, I would create a form for entering information.
Then you need to know the math for figuring out the closest distance (as the crow flies); the haversine sketch below is the standard approach. From there, try to figure out how to include roads. (My guess is you would have to have a data point for each and every curve, where you hold the geocode location of the curve and the angle of the road on a north/south and east/west vector. You'd probably need to take incline into account, too, to get accurate road measurements.)
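For the "as the crow flies" part, the standard tool is the haversine formula, which gives the great-circle distance between two lat/lon pairs; a small self-contained sketch:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance on a sphere with Earth's mean radius (~6371 km)
    r = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# e.g. New York to Los Angeles, roughly 3,940 km
print(haversine_km(40.7128, -74.0060, 34.0522, -118.2437))
```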
That's just where I'd start.
But in all honesty, I wouldn't even start on this. Other programmers have done it already, I'm more interested in what hasn't already been done.
Here is how I would approach it:
get my free raw data from somewhere like http://ipinfodb.com/ip_database.php
load it into a database, denormalizing for fast lookups
design my API
build it out as a RESTful web service
return results in varying formats: JSON, XML, CSV, raw text
The first prototype should accept a ZIP code and return lat/lon as raw text.
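A minimal sketch of that prototype as a RESTful endpoint (using Flask; the lookup table stands in for the denormalized database described above, and the sample rows are made up):

```python
from flask import Flask, abort

app = Flask(__name__)

# stand-in for the real, denormalized lookup table
ZIP_TO_LATLON = {
    "10001": (40.7506, -73.9972),
    "94103": (37.7726, -122.4099),
}

@app.route("/geocode/<zip_code>")
def geocode(zip_code):
    try:
        lat, lon = ZIP_TO_LATLON[zip_code]
    except KeyError:
        abort(404)  # unknown ZIP code
    # raw text response, per the prototype spec above
    return f"{lat},{lon}", 200, {"Content-Type": "text/plain"}

if __name__ == "__main__":
    app.run()
```

The JSON/XML/CSV variants would then just be different serializers in front of the same lookup.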