Currently I am using the following API call to retrieve Post Likes and Post Comments for a Facebook Page (PageId). Below, I am making only one API call and retrieving ALL posts along with their total comment counts.
1). https://graph.facebook.com/PageId/posts?access_token=xyz&method=GET&format=json
But, as per the "July 2013 Breaking Changes", comment counts are no longer available with the above API call. So, as per the Roadmap documentation, I am using the following API call to retrieve the comments count ('total_count') for a particular post ID.
2). https://graph.facebook.com/post_ID/?summary=true&access_token=xyz&method=GET&format=json
So, with the second API call I am able to retrieve the comments count per post. But, as you can see, I need to iterate through each post, retrieve its comments count one by one per post ID, and then sum them all up to find the total comments count. That requires too many API calls.
My question is: is it possible to retrieve Page -> Posts -> ALL comments total count in a single API call, taking the July 10 breaking changes into account?
Is there any alternative to my second API call for retrieving the total comments count across all of a Facebook page's posts?
Hmm, well, I don't believe there is a way to bundle this all into a single API call. But you can batch requests to get this in what appears to be the same API call (which will save time), although they will count against your rate limits separately (my example below would be 4 calls against the limits).
Example batch call (JSON encoded), where I'm storing the post ID in the PHP variable $postId:
[{"method":"GET","relative_url":"' . $postId . '"},
{"method":"GET","relative_url":"' . $postId . '/likes?limit=1000&summary=true"},
{"method":"GET","relative_url":"' . $postId . /comments?filter=stream&limit=1000&summary=true"},
{"method":"GET","relative_url":"' . $postId . '/insights"}]
I'm batching 4 queries in this single call: first to get the post info, second to get likes (up to 1000, plus the total count), third to get all the comments plus the summary count, and finally, insights (if it's the page's own post).
You can drastically simplify this batch call if you don't want all the details I'm pulling.
In this case you still need to iterate through all of them. But Facebook allows you to bundle up to 50 calls per batch request, I believe, so you could request multiple post IDs in the same batch call to speed things up too.
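For example, here's a minimal sketch of sending a batch like this with Python's requests library (the access token and post IDs are placeholders you'd supply yourself):

import json
import requests

ACCESS_TOKEN = "xyz"  # placeholder
post_ids = ["POST_ID_1", "POST_ID_2"]  # up to 12 posts fit under the 50-call cap at 4 sub-requests each

batch = []
for post_id in post_ids:
    batch += [
        {"method": "GET", "relative_url": post_id},
        {"method": "GET", "relative_url": post_id + "/likes?limit=1000&summary=true"},
        {"method": "GET", "relative_url": post_id + "/comments?filter=stream&limit=1000&summary=true"},
        {"method": "GET", "relative_url": post_id + "/insights"},
    ]

resp = requests.post("https://graph.facebook.com",
                     data={"access_token": ACCESS_TOKEN, "batch": json.dumps(batch)})
for item in resp.json():
    print(item["code"])  # each sub-request returns its own status code and JSON body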
I am trying to do some unit tests using Elasticsearch. I first use the index API about 100 times to add new data to my index. Then I use the search API with aggs. The problem is that if I don't pause for 1 second after adding the data 100 times, I get random results. If I wait 1 second, I always get the same result.
I'd rather not have to wait x amount of time in my tests; that seems like bad practice. Is there a way to know when the data is ready?
I am already waiting until I get a success response from the Elasticsearch index API, but that does not seem to be enough.
First, I'd suggest indexing your documents with a single bulk query: it saves time thanks to less HTTP/TCP overhead.
To answer your question, you should consider using the refresh=true parameter (or refresh=wait_for) while indexing your 100 documents.
As stated in the documentation, it will:
Refresh the relevant primary and replica shards (not the whole index)
immediately after the operation occurs, so that the updated document
appears in search results immediately
More about it here :
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-refresh.html
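For example, a minimal sketch with the elasticsearch-py client (the "teams" index name and documents are placeholders I made up; refresh="wait_for" blocks the bulk call until the documents are visible to search):

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")  # assuming a local test cluster
# One bulk call instead of 100 separate index calls
docs = ({"_index": "teams", "_source": {"name": "doc %d" % i}} for i in range(100))
bulk(es, docs, refresh="wait_for")  # returns only once the docs are searchable
resp = es.search(index="teams", body={"query": {"match_all": {}}})
print(resp["hits"]["total"])  # all 100 documents, no sleep needed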
I have a list of football team names, my_team_list = ['Bayern Munich', 'Chelsea FC', 'Manchester United', ...], and I am trying to search for each team's official Facebook page to get its fan_count using the Python Facebook API. This is my code so far:
club_list = []
for team in my_team_list:
    data = graph.request('/pages/search?q=' + team)
    for i in data['data']:
        page = graph.get_object(id=i['id'], fields='id,name,fan_count,is_verified')
        if page['is_verified'] is True:
            club_list.append([team, page['name'], page['fan_count'], page['id']])
However, because my list contains over 3000 clubs, with this code I will get the following rate limit error:
GraphAPIError: (#4) Application request limit reached
How can I reduce the number of calls, e.g., by getting more than one club's page per call (batch requests)?
As the comment on the OP states, batching will not save you. You need to actively watch the rate limiting:
"All responses to calls made to the Graph API include an X-App-Usage HTTP header. This header contains the current percentage of usage for your app. This percentage is equal to the usage shown to you in the rate limiting graphs. Use this number to dynamically balance your call load to avoid being throttled."
https://developers.facebook.com/docs/graph-api/advanced/rate-limiting#application-level-rate-limiting
On your first run-through, you should save all of the valid page IDs so that in the future you can just query those IDs instead of doing a search.
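For example, a rough sketch of watching that header in Python (the graph_get helper, the 75% threshold, and the 60-second back-off are my own choices, not Facebook's):

import json
import time
import requests

def graph_get(path, params):
    resp = requests.get("https://graph.facebook.com" + path, params=params)
    # The header reports usage percentages: call_count, total_time, total_cputime
    usage = json.loads(resp.headers.get("x-app-usage", "{}"))
    if max(usage.get("call_count", 0), usage.get("total_time", 0),
           usage.get("total_cputime", 0)) >= 75:
        time.sleep(60)  # back off before the app gets throttled
    return resp.json()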
I was trying to run the following code, and I get a variable number of tweets when I re-run it at intervals of more than 15 minutes. Sometimes I get 1400 tweets, and other times 1200, 1000, or 1600. Can't I get a fixed number of tweets every time I run the code, even if I change the keyword?
for tweet in tweepy.Cursor(api.search, q="#narendramodi", rpp=100).items(200):
    print(tweet.text)
Your search does not specify any id limit.
Because of pagination, the Twitter Search API looks for the latest tweets every time you call it. Since tweets are added continuously, a simple call to the Search API returns the most recent ones, and you'll get a different number of tweets depending on how many were posted while you were querying. See Working with Timelines.
Please also note that the Twitter Search API focuses on relevance rather than completeness of results. See The Search API.
If you want to iterate over tweets, starting from the moment you run your application and continuing on to older tweets, I recommend using max_id in your next query's parameters, setting it to the id field of the last result from your previous query, as suggested here.
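For example, a minimal sketch of max_id paging (assuming an authenticated tweepy api object; the query and the 200-tweet target are placeholders):

tweets = []
max_id = None
while len(tweets) < 200:
    kwargs = {"q": "#narendramodi", "count": 100}
    if max_id is not None:
        kwargs["max_id"] = max_id
    page = api.search(**kwargs)
    if not page:
        break  # no older tweets available
    tweets.extend(page)
    max_id = page[-1].id - 1  # continue strictly below the oldest tweet seen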
We are just getting started with MWS. We'd like to be able to use the lowest offers on each product to help calculate our price. There is an API, GetLowestOfferListingsForSKU, but that only returns a single SKU, and there is a throttle limit which means it would take several days to get all the data.
Does anybody know a way to get that data for multiple products in a single request?
You can fetch data on up to 20 SKUs using GetLowestOfferListingsForSKU by adding a SellerSKUList.SellerSKU.n parameter for each product (where n is a number from 1 to 20). The request looks something like this:
https://mws.amazonservices.com/Products/2011-10-01
?AWSAccessKeyId=AKIAJGUVGFGHNKE2NVUA
&Action=GetMatchingProduct
&SellerId=A2NK2PX936TF53
&SignatureVersion=2
&Timestamp=2012-02-07T01%3A22%3A39Z
&Version=2011-10-01
&Signature=MhSREjubAxTGSldGGWROxk4qvi3sawX1inVGF%2FepJOI%3D
&SignatureMethod=HmacSHA256
&MarketplaceId=ATVPDKIKX0DER
&SellerSKUList.SellerSKU.1=SKU1
&SellerSKUList.SellerSKU.2=SKU2
&SellerSKUList.SellerSKU.3=SKU3
Here's some relevant documentation which explains this: http://docs.developer.amazonservices.com/en_US/products/Products_ProcessingBulkOperationRequests.html
You might also find the MWS scratchpad helpful for testing:
https://mws.amazonservices.com/scratchpad/index.html
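If you're assembling the request by hand, here's a rough sketch of building the SellerSKUList parameters and the Signature Version 2 HMAC in Python (the function name and placeholder credentials are mine; double-check the details against the MWS docs):

import base64
import hashlib
import hmac
import urllib.parse
from datetime import datetime, timezone

def build_signed_params(skus, access_key, secret_key, seller_id, marketplace_id):
    params = {
        "AWSAccessKeyId": access_key,
        "Action": "GetLowestOfferListingsForSKU",
        "SellerId": seller_id,
        "SignatureMethod": "HmacSHA256",
        "SignatureVersion": "2",
        "Timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "Version": "2011-10-01",
        "MarketplaceId": marketplace_id,
    }
    # One SellerSKUList.SellerSKU.n parameter per product, n from 1 to 20
    for n, sku in enumerate(skus[:20], start=1):
        params["SellerSKUList.SellerSKU.%d" % n] = sku
    # Signature Version 2: sort the parameters, percent-encode them, and
    # HMAC-SHA256 the canonical request string
    canonical = "&".join(
        "%s=%s" % (urllib.parse.quote(k, safe="-_.~"),
                   urllib.parse.quote(v, safe="-_.~"))
        for k, v in sorted(params.items())
    )
    string_to_sign = ("POST\nmws.amazonservices.com\n/Products/2011-10-01\n"
                      + canonical)
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    params["Signature"] = base64.b64encode(digest).decode("utf-8")
    return params

You would then POST these parameters to https://mws.amazonservices.com/Products/2011-10-01.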
I am looking for a solution that gets the last 50 comments on a page's wall, or all comments made on the page's wall within the last hour, regardless of post date: a post could be from two years ago, but if it gets a comment within the hour I need to get it. I don't want to fetch all posts and look through them one by one.
Thank you for your effort.
The first one is easy. Issue an API call to this endpoint:
/PAGE_NAME_OR_ID/feed?fields=comments.limit(50)
You will be restricted to the normal limits of the feed endpoint, so the comments returned here will only be those made on the last 30 days' worth of posts or the last 50 posts, whichever is fewer.
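For example, a minimal sketch of issuing that call with Python's requests library (PAGE_ID and ACCESS_TOKEN are placeholders):

import requests

resp = requests.get(
    "https://graph.facebook.com/PAGE_ID/feed",
    params={"fields": "comments.limit(50)", "access_token": "ACCESS_TOKEN"},
)
for post in resp.json().get("data", []):
    for comment in post.get("comments", {}).get("data", []):
        print(comment.get("created_time"), comment.get("message"))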
If you want the last 50 comments, you'll need to use FQL.
SELECT time, text, text_tags, post_id FROM comment WHERE post_id IN
(SELECT post_id FROM stream WHERE source_id IN
(SELECT id FROM profile WHERE username="cocacola") LIMIT 100)
ORDER BY time DESC LIMIT 50
Keep in mind that Facebook's filtering algorithms operate after FQL, so you may need to increase the LIMIT values substantially to be guaranteed 50 results.
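For example, a rough sketch of running that FQL through the Graph API's fql endpoint in Python, with the LIMIT values bumped as just suggested (ACCESS_TOKEN is a placeholder):

import requests

fql = ('SELECT time, text, text_tags, post_id FROM comment WHERE post_id IN '
       '(SELECT post_id FROM stream WHERE source_id IN '
       '(SELECT id FROM profile WHERE username="cocacola") LIMIT 200) '
       'ORDER BY time DESC LIMIT 100')
resp = requests.get("https://graph.facebook.com/fql",
                    params={"q": fql, "access_token": "ACCESS_TOKEN"})
comments = resp.json().get("data", [])[:50]  # keep the 50 most recent
print(len(comments))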