I'm wondering how I can get the exact number of Facebook posts/shares/likes during a period, given that I make the request for the week before and the share/like counts could have evolved since then.
I'm using the Facebook Graph API with a request like this:
https://graph.facebook.com/[id page]/posts?access_token=[token]&fields=id,from,story,name,caption,type,status_type,object_id,created_time,updated_time,shares,likes,comments&since=[timestamp_beginning_week_before]&until=[timestamp_end]
Thanks
There's no way to get period-based counts via the Graph API; the numbers it returns are cumulative totals. You'll have to produce the counts yourself by comparing the results from the relevant timeframes.
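A minimal sketch of that comparison approach, assuming you have already fetched the cumulative counts at the start and end of the week (the snapshot structure and field names here are my own, not part of the Graph API):

```python
def count_deltas(snapshot_before, snapshot_after):
    """Return the per-post growth in likes/shares between two snapshots.

    Each snapshot maps a post id to {"likes": int, "shares": int}.
    Posts created after the first snapshot count from zero.
    """
    deltas = {}
    for post_id, after in snapshot_after.items():
        before = snapshot_before.get(post_id, {"likes": 0, "shares": 0})
        deltas[post_id] = {
            "likes": after["likes"] - before["likes"],
            "shares": after["shares"] - before["shares"],
        }
    return deltas
```

Summing the delta values across all posts then gives the page-level change for the period.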
I have a list of football team names, my_team_list = ['Bayern Munich', 'Chelsea FC', 'Manchester United', ...], and I'm trying to search for each team's official Facebook page to get its fan_count using the Python Facebook API. This is my code so far:
club_list = []
for team in my_team_list:
    # team is already the full name string, so search for it directly
    data = graph.request('/pages/search?q=' + team)
    for page in data['data']:
        info = graph.get_object(id=page['id'], fields='id,name,fan_count,is_verified')
        # keep only verified pages, which should be the official club page
        if info['is_verified']:
            club_list.append([team, info['name'], info['fan_count'], info['id']])
However, because my list contains over 3000 clubs, this code runs into the following rate-limit error:
GraphAPIError: (#4) Application request limit reached
How can I reduce the number of calls, e.g. by getting more than one club's page per call (batch requests)?
As the comment on the OP states, batching will not save you, because batched requests still count individually against the limit. You need to be actively watching the rate limiting:
"All responses to calls made to the Graph API include an X-App-Usage HTTP header. This header contains the current percentage of usage for your app. This percentage is equal to the usage shown to you in the rate limiting graphs. Use this number to dynamically balance your call load to avoid being throttled."
https://developers.facebook.com/docs/graph-api/advanced/rate-limiting#application-level-rate-limiting
On your first run through, save all of the valid page ids so that in the future you can query those ids directly instead of doing a search.
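As a hedged sketch of that dynamic balancing (the 90% threshold and the pause length are arbitrary choices of mine, not Facebook's): read the X-App-Usage header from each HTTP response and back off before usage reaches 100%.

```python
import json
import time

def throttle_from_usage(headers, threshold=90, pause=60):
    """Sleep when the X-App-Usage header reports usage above `threshold`.

    Facebook sends the header as a JSON object of percentages, e.g.
    {"call_count": 28, "total_time": 25, "total_cputime": 25}.
    Returns True if we paused, False otherwise.
    """
    usage = json.loads(headers.get("X-App-Usage", "{}"))
    if max(usage.values(), default=0) >= threshold:
        time.sleep(pause)
        return True
    return False
```

You would call this after every response, passing the response's header dict, and it keeps your loop under the limit without a fixed sleep between every request.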
I was trying to run the following code, and I get a variable number of tweets when I rerun it at intervals of more than 15 minutes. Sometimes I get 1400 tweets, other times 1200, 1000, or 1600. Can't I get a fixed number of tweets every time I run the code, even if I change the keyword?
for tweet in tweepy.Cursor(api.search, q="#narendramodi", rpp=100).items(200):
Your search does not specify any id limit.
Because of pagination, the Twitter Search API looks for the latest tweets every time you call it. Since tweets are added continuously, a simple call to the Search API returns the most recent ones, so you'll get a different number of tweets depending on how many were posted while you were querying. See Working with Timelines.
Please also note that Twitter Search API focuses on relevance rather than completeness of the results. See The Search API.
If you want to iterate over tweets, starting from the moment you run your application and continuing to older tweets, I recommend using max_id in your next query's parameters, setting it to the id field of the last result from your previous query, as suggested here.
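A minimal sketch of the max_id technique, written against a generic search callable rather than tweepy itself (the dict-shaped results are an assumption for illustration; real tweepy statuses expose an `.id` attribute instead):

```python
def fetch_all(search, query, page_size=100):
    """Page backwards through search results using max_id.

    `search` is any callable mimicking api.search: it accepts q, count,
    and an optional max_id, and returns a list of items with an "id" key.
    Setting max_id to (oldest id seen - 1) makes each page strictly older
    than the previous one, so the walk never shifts under you as new
    tweets arrive.
    """
    tweets, max_id = [], None
    while True:
        kwargs = {"q": query, "count": page_size}
        if max_id is not None:
            kwargs["max_id"] = max_id
        page = search(**kwargs)
        if not page:
            return tweets
        tweets.extend(page)
        max_id = min(t["id"] for t in page) - 1
```

With tweepy you can get the same effect by passing max_id to api.search on each iteration, or by letting tweepy.Cursor manage it for you.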
We are just getting started with MWS. We'd like to be able to use the lowest offers on each product to help calculate our price. There is an API call, GetLowestOfferListingsForSKU, but it only returns data for a single SKU, and there is a throttle limit, which means it would take several days to get all the data.
Does anybody know a way to get that data for multiple products in a single request?
You can fetch data on up to 20 SKUs using GetLowestOfferListingsForSKU by adding a SellerSKUList.SellerSKU.n parameter for each product (where n is a number from 1 to 20). The request looks something like this:
https://mws.amazonservices.com/Products/2011-10-01
?AWSAccessKeyId=AKIAJGUVGFGHNKE2NVUA
&Action=GetLowestOfferListingsForSKU
&SellerId=A2NK2PX936TF53
&SignatureVersion=2
&Timestamp=2012-02-07T01%3A22%3A39Z
&Version=2011-10-01
&Signature=MhSREjubAxTGSldGGWROxk4qvi3sawX1inVGF%2FepJOI%3D
&SignatureMethod=HmacSHA256
&MarketplaceId=ATVPDKIKX0DER
&SellerSKUList.SellerSKU.1=SKU1
&SellerSKUList.SellerSKU.2=SKU2
&SellerSKUList.SellerSKU.3=SKU3
Here's some relevant documentation which explains this: http://docs.developer.amazonservices.com/en_US/products/Products_ProcessingBulkOperationRequests.html
You might also find the MWS scratchpad helpful for testing:
https://mws.amazonservices.com/scratchpad/index.html
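A small sketch of building those numbered parameters in Python (the helper names are my own; the 20-SKU cap per request is from the MWS docs):

```python
def sku_list_params(skus):
    """Build the SellerSKUList.SellerSKU.n parameters (n = 1..20) for one
    GetLowestOfferListingsForSKU request."""
    if len(skus) > 20:
        raise ValueError("MWS allows at most 20 SKUs per request")
    return {
        "SellerSKUList.SellerSKU.%d" % (i + 1): sku
        for i, sku in enumerate(skus)
    }

def chunked(skus, size=20):
    """Split a long SKU list into request-sized batches."""
    return [skus[i:i + size] for i in range(0, len(skus), size)]
```

Iterating over chunked(all_skus) and merging sku_list_params(batch) into your signed request parameters cuts the number of requests by a factor of 20.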
Currently I am using the following API call to retrieve post likes and post comments for a Facebook page (PageId). Here I make only one API call and retrieve all posts along with their total comment counts.
1). https://graph.facebook.com/PageId/posts?access_token=xyz&method=GET&format=json
But, as per the "July 2013 Breaking Changes", comment counts are no longer available with the above API call. So, as per the roadmap documentation, I am using the following API call to retrieve the comment count ('total_count') for a particular post id.
2). https://graph.facebook.com/post_ID/comments?summary=true&access_token=xyz&method=GET&format=json
So, with the second API call I am able to retrieve the comment count per post. But, as you can see, I need to iterate through each post and retrieve its comment count one at a time, then sum them all to find the total comment count. That requires far too many API calls.
My question is: is it possible to retrieve the total comment count across all of a page's posts in a single API call, considering the 10 July breaking changes?
Is there any alternative to my second API call for retrieving the total comment count for a Facebook page's posts?
Hmm, well, I don't believe there is a way to bundle this all into a single API call. But you can batch requests to get it in what is seemingly the same API call (which will save time), although the batched requests still count against your rate limits separately (my example below would count as 4 calls against the limits).
Example batch call (JSON-encoded), with the post id stored in the PHP variable $postId:
[{"method":"GET","relative_url":"' . $postId . '"},
{"method":"GET","relative_url":"' . $postId . '/likes?limit=1000&summary=true"},
{"method":"GET","relative_url":"' . $postId . '/comments?filter=stream&limit=1000&summary=true"},
{"method":"GET","relative_url":"' . $postId . '/insights"}]
I'm batching 4 queries in this single call: first to get the post info, second to get likes (up to 1000, plus the total count), third to get all the comments plus the summary count, and finally insights (if it's the page's own post).
You can drastically simplify this batch call if you don't want all the details I'm pulling.
In this case you still need to iterate through all the posts. But Facebook allows you to bundle up to 50 requests per batch call, I believe, so you could request multiple post ids in the same batch call to speed things up too.
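A hedged Python sketch of building such a batch for several post ids at once (the helper name is my own, the limits mirror the example above, and the result is just the `batch` parameter you would POST to https://graph.facebook.com along with your access token):

```python
import json

def build_batch(post_ids, limit=1000):
    """Build the `batch` parameter: one likes-summary and one
    comments-summary request per post id. The Graph API caps a batch
    at 50 operations, so at most 25 post ids fit per call here."""
    ops = []
    for pid in post_ids:
        ops.append({"method": "GET",
                    "relative_url": "%s/likes?limit=%d&summary=true" % (pid, limit)})
        ops.append({"method": "GET",
                    "relative_url": "%s/comments?filter=stream&limit=%d&summary=true" % (pid, limit)})
    if len(ops) > 50:
        raise ValueError("Graph API allows at most 50 requests per batch")
    return json.dumps(ops)
```

Summing the summary.total_count values from the batched responses then gives the page-wide comment total in far fewer round trips.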
I have found that if the Graph connection me/feed is used with limit and offset, it returns an empty JSON data array for most limit and offset values.
For example:-
me/feed?limit=10&offset=0 is giving proper data
But me/feed?limit=10&offset=10 is returning empty json data array
Please help me :( thanks in advance....
** This behavior can be reproduced with the Graph API Explorer tool too, and I have granted all possible permissions using the Graph API Explorer tool.
I have been doing similar experimentation on the Graph API, and I think that the offset is within the retrieved data set. This means that when you retrieve only ten items and start from the 10th offset, zero items will be returned. However, if the limit is greater than the offset, then some data would be returned.
See this link as well