Ways to list pages by location via Graph API - facebook-graph-api

Since the API has changed with regard to the search endpoint, I can no longer use this:
/search?q=*&type=page&center=x,y&distance=500&access_token=[token]
I need to find a different way to get pages from a specific location. I tried to work with places instead, but there is no apparent way to get pages of that place.
Is there any way to bypass this limitation or perhaps a different approach?
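For reference, this is roughly the place search I experimented with (a sketch only; the API version, fields and token handling are my own guesses, and type=place search may not be available to every app) - it returns places, but not the pages behind them:

import requests

# Sketch: place search instead of the removed page search.
# Version, fields and the centre point are illustrative placeholders.
GRAPH_URL = "https://graph.facebook.com/v2.12/search"
params = {
    "type": "place",
    "center": "40.7304,-73.9921",   # latitude,longitude
    "distance": 500,                # metres
    "fields": "id,name,location,link",
    "access_token": "TOKEN",        # a valid token goes here
}
resp = requests.get(GRAPH_URL, params=params)
for place in resp.json().get("data", []):
    print(place["id"], place.get("name"))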

Related

Should I make requests to this API/change the structure of this API/use a DB instead?

That's what my Django REST Framework API looks like. I want to use its data on my website, where users can search for a flight and see the appropriate results. I can filter the API in many ways, but I don't know what to do when it comes to searching for indirect flights, and I'm not sure which functions to use. I've already used fetch to build an autocomplete, but that's a different situation. Can fetch be used here too, or should I structure the API differently? There is also the option of querying the database directly, which would be easiest for me, but I would prefer to go through the API. Maybe this is a dumb question, but I'm just confused.
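If it helps to make the question concrete, this is roughly the kind of query-parameter filtering I have in mind on the DRF side (a sketch only; the Flight model and its origin/destination fields are made up, and the indirect-flight part is exactly what I don't know how to approach):

from rest_framework import viewsets
from .models import Flight            # hypothetical model
from .serializers import FlightSerializer

class FlightViewSet(viewsets.ReadOnlyModelViewSet):
    serializer_class = FlightSerializer

    def get_queryset(self):
        # e.g. /api/flights/?origin=WAW&destination=JFK from a fetch() call
        qs = Flight.objects.all()
        origin = self.request.query_params.get("origin")
        destination = self.request.query_params.get("destination")
        if origin:
            qs = qs.filter(origin__iexact=origin)
        if destination:
            qs = qs.filter(destination__iexact=destination)
        return qs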

Is there a way to detect from which source an API is being called?

Is there any method to identify from which source an API is called? By source I mean an iOS application, a web application, a page or button click (Ajax calls), etc.
I know I could pass a flag such as ?source=ios or ?source=webapp when calling the API, but I wanted to know whether there is a better way to accomplish this.
I also realize this requirement may seem odd, because an app or a web application is used by any number of users, so it is difficult to monitor that many API calls.
Please share your suggestions.
There is no perfect way to solve this. Designating a special flag won't solve your problem, because the consumer can put in whatever she wants and you cannot be sure if it is legit or not. The same holds true if you issue different API keys for different consumers - you never know if they decide to switch them up.
The only option that comes to my mind is to analyze the HTTP headers and see what you can deduce from them. As you probably know, a typical HTTP request header looks something like this:
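GET /api/orders HTTP/1.1
Host: api.example.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36
Accept: application/json
Referer: https://www.example.com/orders
X-Requested-With: XMLHttpRequest

(the values above are illustrative - an Ajax call made through jQuery typically adds the X-Requested-With: XMLHttpRequest header, while a native iOS app usually sends its own User-Agent, often containing CFNetwork/Darwin, rather than a browser string)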
You can try and see how the requests from all sources differ in your case and decide if you can reliably differentiate between them. If you have the luxury of developing the client (i.e. this is not a public API), you can set your custom User-Agent strings for different sources.
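A rough sketch of that idea (the "MyApp-iOS/1.0" value is just something you would define yourself, and the server side is shown with Django only as an example - any framework exposes the same header):

import requests

# Client side: each of your own clients tags its requests with an agreed-upon
# User-Agent value ("MyApp-iOS/1.0" is an arbitrary example).
requests.get("https://api.example.com/orders",
             headers={"User-Agent": "MyApp-iOS/1.0"})

# Server side (Django): read the header back and branch on it.
def request_source(request):
    ua = request.META.get("HTTP_USER_AGENT", "")
    if ua.startswith("MyApp-iOS/"):
        return "ios"
    if "Mozilla" in ua:
        return "browser"
    return "unknown"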
But keep in mind that the Referer header is not mandatory and thus not very reliable, and the User-Agent can also be spoofed. So it is a solution that is better than nothing, but it's not 100% reliable.
Hope this helps, also here is a similar question. Good luck!

Django : looking for a good LDAP manipulation library

I am looking for a good LDAP library for Django that would allow me to manage my LDAP server:
adding, modifying, deleting entries
for groups, users, and all kind of objects
The library django-ldapdb looked promising: it offers a Model base class that can be used to declare LDAP objects in a Django fashion (which is what we ideally want). However, we've had some bugs with it, and it seems it is no longer maintained.
Does somebody know a good library that could do the trick? Otherwise I guess I'll just try to improve and debug django-ldapdb...
Thanks!
sebpiq, you say you applied "one or two fixes" to django-ldapdb, would you care to share them? So far django-ldapdb meets my needs, but I'd be happy to integrate any fixes you might have.
When using ldapdb to query LDAP for more results than the server allows, instead of getting a partial list (of, say, the first 500 users) I get a SIZELIMIT_EXCEEDED exception. Trying to change the code to catch that exception resulted in empty result objects.
Has anyone else had this problem?
I fixed it by changing the search_s call to use search_ext and reading the results one by one until the exception occurs.
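Roughly what the fix looks like with python-ldap directly (a sketch; connection details, base DN and filter are placeholders):

import ldap

conn = ldap.initialize("ldap://localhost")
conn.simple_bind_s("cn=admin,dc=example,dc=com", "secret")

# Asynchronous search: pull results one message at a time until the server
# reports the size limit, and keep whatever was received before that.
msgid = conn.search_ext("ou=people,dc=example,dc=com",
                        ldap.SCOPE_SUBTREE, "(objectClass=person)")
entries = []
try:
    while True:
        rtype, rdata = conn.result(msgid, all=0)
        if rtype == ldap.RES_SEARCH_RESULT:   # final message: search finished
            break
        entries.extend(rdata)                 # (dn, attrs) tuples
except ldap.SIZELIMIT_EXCEEDED:
    pass  # the partial list collected so far is kept
print(len(entries), "entries retrieved")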
http://www.python-ldap.org/doc/html/index.html
The beauty of Django is that you can use any Python module within your application.
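A minimal sketch of adding, modifying and deleting entries with python-ldap on its own (server URI, bind credentials and the example entry are placeholders):

import ldap
import ldap.modlist

conn = ldap.initialize("ldap://localhost")
conn.simple_bind_s("cn=admin,dc=example,dc=com", "secret")

dn = "uid=jdoe,ou=people,dc=example,dc=com"
attrs = {
    "objectClass": [b"inetOrgPerson"],
    "uid": [b"jdoe"],
    "cn": [b"John Doe"],
    "sn": [b"Doe"],
}
conn.add_s(dn, ldap.modlist.addModlist(attrs))                           # add
conn.modify_s(dn, [(ldap.MOD_REPLACE, "mail", [b"jdoe@example.com"])])   # modify
conn.delete_s(dn)                                                        # delete
conn.unbind_s()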
There is also django-auth-ldap, which claims:
LDAP configuration can be as simple as a single distinguished name template, but there are many rich options for working with User objects, groups, and permissions.
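For the simple case the docs describe, the configuration is a handful of settings (a sketch; the server URI and DN template are placeholders):

# settings.py
AUTH_LDAP_SERVER_URI = "ldap://ldap.example.com"
AUTH_LDAP_USER_DN_TEMPLATE = "uid=%(user)s,ou=people,dc=example,dc=com"

AUTHENTICATION_BACKENDS = [
    "django_auth_ldap.backend.LDAPBackend",
    "django.contrib.auth.backends.ModelBackend",
]

Keep in mind, though, that it is primarily an authentication backend rather than a library for creating and deleting arbitrary LDAP entries.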
Actually, I have found that with one or two fixes, django-ldapdb is a pretty good library. The only downside is that it is not very actively maintained... I will use it anyway, because it is the best solution I have found.

Attributes: Are the current REST-architecture tools limited to a tree-structure?

Historically, operating system directory structures have been trees:
C:
  Windows
    System32
  Program Files
    Common Files
    Internet Explorer
And the REST architecture emulates the same thing:
http://...//Thomas/
http://...//Thomas/Mexico/Year2003/Photos
http://...//Thomas/Mexico/Year2007/Photos
http://...//Thomas/Finland/Year2005/Photos
http://...//Thomas/Finland/Year2010/Photos
http://...//Thomas/Finland/Year2010/Videos
http://...//Thomas/USA/Year2005/Photos
But, looking at the current structure, I need to make searches:
All pictures that are not from Finland?
All pictures taken in 2005?
All pictures in timeline?
It is not efficient to build a REST interface with every tree-hierarchy combination. You need more efficient information management; you need an attribute system rather than a tree structure.
(Oh, why aren't operating systems based on attributes?)
StackOverflow and Google seem to use attributes, with a "+"-separated syntax like:
http://www.stackoverflow.com/Tags/asp.net+iis7
http://www.google.com/search?&q=iis7+asp.net
Today's frameworks like WCF and ASP.NET MVC have good support for RESTful tree structures. But is there support for attribute structures? Wouldn't you still call an attribute structure REST?
I would like to build an attribute web service and consume it with LINQ in a Silverlight client... What is the best way to start? :-)
In order to create an effective REST interface you need to identify the resources that make sense for your client application. If you look at your use cases:
All pictures that are not from Finland?
All pictures taken in 2005?
All pictures in timeline?
The question you need to answer is whether this requires three resources or just one. I am assuming you want more than just these three queries, so the most flexible solution is to define a generic resource that is a "collection of pictures".
/Thomas/pictures
From here, you want to be able to limit the contents of this resource using query parameters.
/Thomas/pictures?country=not-finland
/Thomas/pictures?year=2005
In the case of the third item, it may make sense to create a separate resource:
/Thomas/PictureTimeline
There are other scenarios where it may make sense to create additional resources, such as:
/Thomas/FavouritePictures
The important thing is to identify which key concepts of your application you want to model as resources and then assign each of those resources a URL. Trying to do REST design via the URL space is going to make you bang your head against the wall.
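The question mentions WCF and ASP.NET MVC, but the pattern itself is framework-agnostic; a rough sketch of the filtered "collection of pictures" in Django terms (the Picture model and its country/year fields are made up) looks like this:

from django.http import JsonResponse
from .models import Picture   # hypothetical model

def pictures(request):
    qs = Picture.objects.all()
    country = request.GET.get("country")   # e.g. ?country=not-finland
    year = request.GET.get("year")         # e.g. ?year=2005
    if country:
        if country.startswith("not-"):
            qs = qs.exclude(country__iexact=country[len("not-"):])
        else:
            qs = qs.filter(country__iexact=country)
    if year:
        qs = qs.filter(year=int(year))
    return JsonResponse({"pictures": list(qs.values("id", "title", "year"))})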
What you are looking for are URI matrix parameters:
http://www.w3.org/DesignIssues/MatrixURIs.html
See also: When to use query parameters versus matrix parameters?

Retrieve a list of the most popular GET param variations for a given URL?

I'm working on building intelligence around link propagation, and because I need to deal with many short URL services where a reverse-lookup from an exact URL address is required, I need to be able to resolve multiple approximate versions of the same URL.
An example would be a URL like http://www.example.com?ref=affil&hl=en&ct=0
Of course, changing GET params in certain circumstances can refer to a completely different page, especially if the GET params in question refer to a profile or content ID.
But a quick parse of the pages would show how similar they are to each other. Using a bit of machine learning, it could quickly become clear which GET params don't affect the content of the pages returned for a given site.
I'm assuming a service to send a URL and get a list of very similar URLs could only be offered by the likes of Google or Yahoo (or Twitter), but they don't seem to offer this feature, and I haven't found any other services that do.
If you know of any services that do cluster together groups of almost identical URLs in the aforementioned way, please let me know.
My bounty is a hug.
Every URL is akin to an "address" for a location of data on the internet. The "host" part of the URL (in your example, "www.example.com") is a web server, or a set of web servers, somewhere in the world. If we think of a URL as an "address", then the host could be a "country".
The country itself might keep track of every piece of mail that enters it. Some do, some don't. I'm talking about web-servers! Of course real countries don't make note of every piece of mail you get! :-)
But even if that "country" keeps track of every piece of mail - I really doubt they have any mechanism in place to send that list to you.
As for organizations that might do that harvesting themselves, I think the best bet would be Google, but even there the situation is rather grim. You see, because Google isn't the owner of every web server ("country") in the world, they cannot know of every URL used to access that web server.
But they can do the reverse. Since they can index every page they encounter, they can get a pretty good idea of every URL that appears in public HTML pages on the web. Of course, this won't include URLs people send to each other in chats, SMSs, or e-mails. But still, they can get a pretty good idea of what URLs exist.
I guess what I'm trying to say is that what you're looking for doesn't exist, really. The only way you can get all the URLs used to access a single website is to be the owner of that website.
Sorry, mate.
It sounds like you need to create some sort of discrete similarity rank between pages. This could be done by finding the number of words two pages have in common, normalizing that value to a bounded range, and then mapping portions of the range to different similarity ranks.
You would also need to know, for each pair you compare, which GET parameters they had in common or how close they were. This information becomes the attributes that define each of your instances (stored alongside the rank mentioned above). After you have amassed a few hundred pairs of comparisons you could do some feature subset selection to identify the GET parameters that best indicate how similar two pages are.
Of course, this could end up not finding anything useful at all as this dataset is likely to contain a great deal of noise.
If you are interested in this approach, you should look into information gain (InfoGain) and feature subset selection in general. Here is a link to my professor's lecture notes, which may come in handy: http://stuff.ttoy.net/cs591o/FSS.html
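A rough sketch of the first step of that pipeline (standard library plus requests; the word-overlap similarity is a deliberately crude placeholder for whatever measure you end up using):

import re
import requests
from urllib.parse import urlparse, parse_qs

def page_words(url):
    # crude "parse of the page": the set of lowercase words in the HTML
    html = requests.get(url, timeout=10).text
    return set(re.findall(r"[a-z]{3,}", html.lower()))

def similarity(url_a, url_b):
    a, b = page_words(url_a), page_words(url_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def differing_params(url_a, url_b):
    qa = parse_qs(urlparse(url_a).query)
    qb = parse_qs(urlparse(url_b).query)
    return sorted(k for k in set(qa) | set(qb) if qa.get(k) != qb.get(k))

# Each compared pair yields (differing GET params, similarity score); a few
# hundred of these become the instances for the feature-selection step.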