Graph API Limit Exceeded - facebook-graph-api

We have an application that displays a user's albums, photos, and posts, and the user can further filter this content. Our issue is that, for heavy Facebook users, we sometimes get a limit exceeded error.
As a workaround, we were planning to require the user to log in again using the FB.login() function, in the hope of getting a new access token so we could query the data again.
But this approach is not working for us.
Is there any other way we can get around this problem?
Any help is highly appreciated.

Off the top of my head:
Use FQL multiquery to obtain data in larger batches
Use the ?ids=<CSV LIST OF IDS> syntax from the Graph API documentation to retrieve data for several objects at once (see the sketch after this list)
If those are already in place and you're still hitting limits, just throttle your calls and slowly make the necessary calls in the background over the space of a few minutes
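To make the ?ids= item above concrete, here is a minimal sketch in TypeScript of fetching several objects in one Graph API call. The access-token variable, the album IDs, and the error handling are placeholders for illustration, not something from the original question.

    // Minimal sketch: request several Graph API objects in a single call
    // using the ?ids= syntax. ACCESS_TOKEN and the IDs are placeholders.
    const ACCESS_TOKEN = "<user-access-token>";

    async function fetchObjectsBatch(ids: string[]): Promise<Record<string, unknown>> {
      const url =
        `https://graph.facebook.com/?ids=${ids.join(",")}` +
        `&access_token=${ACCESS_TOKEN}`;
      const response = await fetch(url);
      if (!response.ok) {
        throw new Error(`Graph API request failed: ${response.status}`);
      }
      // The response is keyed by object ID, one entry per requested object.
      return response.json();
    }

    // Usage: one HTTP request instead of three separate ones.
    fetchObjectsBatch(["albumId1", "albumId2", "albumId3"])
      .then((objects) => console.log(objects))
      .catch((err) => console.error(err));

Batching like this cuts the number of calls counted against the limit, which is usually worth trying before resorting to throttling.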

It's hard to give advice without knowing exactly what you're doing, but the general advice is to reduce the number of API calls you're making. So, check that you're not querying the same data more than once. Also, if you're querying data that doesn't change very often you could cache it on your server to avoid calling Facebook each time you need that data.
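As a rough illustration of the server-side caching suggestion, here is a small TypeScript sketch of an in-memory cache with a time-to-live; the TTL value and the fetchFromFacebook callback are placeholders.

    // Minimal sketch of a server-side cache with a time-to-live, so repeated
    // requests for the same data don't each hit Facebook.
    type CacheEntry<T> = { value: T; fetchedAt: number };

    const TTL_MS = 10 * 60 * 1000; // entries count as fresh for 10 minutes (placeholder)
    const cache = new Map<string, CacheEntry<unknown>>();

    async function getCached<T>(key: string, fetchFromFacebook: () => Promise<T>): Promise<T> {
      const entry = cache.get(key) as CacheEntry<T> | undefined;
      if (entry && Date.now() - entry.fetchedAt < TTL_MS) {
        return entry.value; // still fresh: no API call needed
      }
      const value = await fetchFromFacebook(); // stale or missing: call the API once
      cache.set(key, { value, fetchedAt: Date.now() });
      return value;
    }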

Related

What is the best way to retrieve data from a remote server using concurrent calls?

I'm working on retrieving data such as Products and Orders from eCommerce platforms like BigCommerce, Shopify, etc., and saving it in our own databases. To improve the speed of data retrieval from their APIs, we're planning to use the Bluebird library.
Earlier, the data retrieval logic fetched one page at a time. Since we're planning to make concurrent calls, "n" pages will be retrieved concurrently.
For example, BigCommerce allows us to make up to 3 concurrent calls at a time. So we need to make the concurrent calls in such a way that we never retrieve the same page more than once, and if a request fails, the request for that page is resent.
What's the best way to implement this? One idea that comes to mind:
One possible solution: keep an index of ongoing requests in the database and update it on API completion, so we know which ones were unsuccessful.
Is there a better way of doing this? Any suggestions/ideas on this would be highly appreciated.
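Since the question mentions Bluebird, one way to sketch this is with Bluebird's Promise.map and its concurrency option. The fetchPage() helper, the endpoint URL, and the retry count below are assumptions for illustration; each page number is handed out exactly once, so no page is retrieved twice, and a failed page is retried before being reported.

    import * as Bluebird from "bluebird";

    // Hypothetical single-page fetch; the URL stands in for the real
    // platform endpoint (BigCommerce, Shopify, ...).
    async function fetchPage(page: number): Promise<unknown> {
      const response = await fetch(`https://api.example.com/orders?page=${page}`);
      if (!response.ok) throw new Error(`Page ${page} failed: ${response.status}`);
      return response.json();
    }

    // Retry a failed page a couple of times before giving up on it.
    async function fetchPageWithRetry(page: number, retries = 2): Promise<unknown> {
      try {
        return await fetchPage(page);
      } catch (err) {
        if (retries > 0) {
          return fetchPageWithRetry(page, retries - 1); // resend the request for this page
        }
        throw err; // surface the failure so it can be recorded and retried later
      }
    }

    async function fetchAllPages(totalPages: number): Promise<unknown[]> {
      const pages = Array.from({ length: totalPages }, (_, i) => i + 1);
      // concurrency: 3 matches BigCommerce's limit of 3 concurrent calls.
      return Bluebird.map(pages, (page) => fetchPageWithRetry(page), { concurrency: 3 });
    }

Whether you also persist an index of in-flight requests in the database (as in the idea above) then becomes a question of crash recovery rather than of avoiding duplicate pages.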

Rendering slow-loading results

I have a website that uses a pretty slow external API (0.9 seconds for a request).
The results from this API request are rendered to the page.
I do some caching of my own: I store the results in a DB, and subsequent queries for the same resource are answered from the DB rather than by requesting from the API again. If the data in the DB is too old (older than 10 minutes), I update the DB with a new API request.
It's pretty common to check the website only occasionally during the day, so you will almost always hit the 10-minute limit and face a loading time of over a second. This feels very unresponsive.
I then searched for ways to get around the loading time and found this.
I think this could be the right direction, but I am still not confident on how to tackle the task. Can anybody point me in the right direction as how to implement this?
Should I use the low level cache api?
Could I use the default cache? Or should I implement my own version?
Do you think the solution provided in the first link is a good idea at all?

Is it possible to ask for a temporary increase of the Mirror API request quota?

We are going to have an important demo of our Glassware, but we keep hitting the Mirror API limit. For now we work around it by using different project client IDs, but that is not convenient.
Is it possible to ask for a temporary increase on the quota and how?
Thanks
You can ask for a permanent increase using the form at https://developers.google.com/glass/distribute/form
The form is intended to collect information to make your Glassware public, but you can fill in "n/a" for the non-relevant portions and say that you're not trying to make it public yet.

How do I increase Facebook API's session length?

I'm using Facebook's Graph API from Android.
First, what is the default session length (time)? The ONLY resource I've found stating anything helpful and relevant is this Facebook blog post, which suggests the default is 2 hours.
Is there a way to set the session timeout? If that's the default, one would reason that it can be set. It'd be great if I could set it for, say, 24 hours. If it can't be set, do you have a strategy for how you deal with it, without making the user login all the time? For example, if the session time automatically increases on the server side (say, by 2 hours) with each call to the API, then I'd implement a strategy where I'd call SOME API method every, say, 2-ish hours.
Note that I am aware of the offline_access permission. That seems like overkill.
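For what it's worth, if the session really were extended on every call (the post only speculates that it is), the keep-alive strategy described above could be sketched roughly as below. callGraphApi() is a placeholder for whatever lightweight Graph API call you choose, and since the question is about Android this only shows the shape of the idea.

    // Rough sketch of the keep-alive idea: make a cheap API call on a timer
    // shorter than the assumed 2-hour session length. Nothing here is
    // confirmed Facebook behavior.
    const KEEP_ALIVE_INTERVAL_MS = 90 * 60 * 1000; // ~1.5 hours, under the 2-hour default

    async function callGraphApi(path: string): Promise<unknown> {
      // Placeholder: in a real app this would go through the Facebook SDK.
      const response = await fetch(`https://graph.facebook.com${path}?access_token=<token>`);
      return response.json();
    }

    setInterval(() => {
      callGraphApi("/me").catch((err) => {
        // If the session has expired anyway, this is where you would
        // prompt the user to log in again.
        console.warn("keep-alive call failed", err);
      });
    }, KEEP_ALIVE_INTERVAL_MS);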

Developing/Testing Twitter apps without slamming the API

I'm currently working on an app that works with Twitter, but while developing/testing (especially those parts that don't rely heavily on real Twitter data), I'd like to avoid constantly hitting the API or publishing junk tweets.
Is there a general strategy people use for taking it easy on the API (caching aside)? I was thinking of rolling my own library that would essentially intercept outgoing requests and return mock responses, but I wanted to make sure I wasn't missing anything obvious first.
I would probably start by mocking the specific parts of the API you need for your application. In fact, this may actually force you to come up with a cleaner design for your app, because it more or less requires you to think about your application in terms of "what" it should do rather than "how" it should do it.
For example, if you are using the Twitter Search API, your application most likely should not care whether or not you are using the JSON or the Atom format option. The ability to search Twitter using a given query and get results back represents the functionality you want, so you should mock the API at that level of abstraction. The output format is just an implementation detail.
By mocking the API in terms of functionality instead of in terms of low-level implementation details, you can ensure that the application does what you expect it to do, before you actually connect to Twitter for real. At that point, you've already verified that the app works as intended, so the only thing left is to write the code to make the REST requests and parse the responses, which should be fairly straightforward, so you probably won't end up hitting Twitter with a lot of junk data at that point.
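A skeleton of what that functionality-level mock might look like in TypeScript (the interface, the Tweet shape, and the old search.twitter.com endpoint are illustrative assumptions):

    // Mock the Twitter Search API at the level of "search for a query, get
    // tweets back", not at the level of JSON vs. Atom payloads.
    interface Tweet {
      user: string;
      text: string;
    }

    interface TwitterSearch {
      search(query: string): Promise<Tweet[]>;
    }

    // Used while developing and testing: no network calls, no junk tweets.
    class MockTwitterSearch implements TwitterSearch {
      async search(query: string): Promise<Tweet[]> {
        return [{ user: "test_user", text: `canned result for "${query}"` }];
      }
    }

    // Written last, once the app already works against the mock.
    class HttpTwitterSearch implements TwitterSearch {
      async search(query: string): Promise<Tweet[]> {
        const response = await fetch(
          `https://search.twitter.com/search.json?q=${encodeURIComponent(query)}`
        );
        const body = await response.json();
        // Map the raw payload into the shape the rest of the app expects.
        return body.results.map((r: any) => ({ user: r.from_user, text: r.text }));
      }
    }

The rest of the application only ever sees the TwitterSearch interface, so swapping the mock for the real client is a one-line change.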
Caching is probably the best solution. Besides that, I believe the API is limited to 100 requests per hour. So maybe write a function that counts each request and, as it gets close to 100, starts spacing things out, e.g. only pulling data every 10 API requests. It wouldn't be a hard cutoff; more likely a gradient function that tapers off as you near the limit.
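A very rough sketch of that counting-and-tapering idea (the 100-per-hour figure comes from the answer above; the spacing formula is made up for illustration):

    // Count requests in the current hour and widen the gap between calls
    // as the count approaches the limit. Numbers and formula are illustrative.
    const HOURLY_LIMIT = 100;
    let windowStart = Date.now();
    let requestCount = 0;

    function delayBeforeNextCall(): number {
      // Reset the counter at the start of each hour-long window.
      if (Date.now() - windowStart >= 60 * 60 * 1000) {
        windowStart = Date.now();
        requestCount = 0;
      }
      requestCount++;
      const usedFraction = requestCount / HOURLY_LIMIT;
      // No delay while usage is low; progressively longer pauses as it climbs.
      if (usedFraction < 0.5) return 0;
      return Math.round(usedFraction * 60_000); // up to roughly a minute near the limit
    }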
I've used Tweet#; it caches and should do everything you need, since it covers 100% of Twitter's API and then some...
http://dimebrain.com/2009/01/introducing-tweet-the-complete-fluent-c-library-for-twitter.html
Cache stuff in a database... If the cache is too old, request the latest data via the API.
Also think about getting your application account white-listed; it will allow you a limit of 20,000 API requests per hour versus the measly 100 (which is meant for a user, not an application).
http://twitter.com/help/request_whitelisting