Facebook Graph API 2.1 feed attachments

According to the Facebook Platform Changelog:
/v2.1/{post-id} will now return all photos attached to the post. In previous versions of the API only the first photo was returned with a post. This removes the need to use FQL to get all of a post's photos.
However, this statement applies only to separate per-post API calls that look like this:
https://graph.facebook.com/v2.1/{post_id}?fields=attachments
Since I need to retrieve all the data a user posts to the timeline, I use the corresponding feed edge to do so:
https://graph.facebook.com/v2.1/me?fields=feed
So when I make a post with more than one picture attached, the retrieved API response doesn't reflect that (and as I understand it, this is by design). However, I found that the Graph API Explorer allows choosing the attachments edge while constructing a feed query, which in this case looks like this:
https://graph.facebook.com/v2.1/me?fields=feed{attachments}
but executing such a request triggers an "Unsupported get request" exception.
Summing up, the whole issue with making separate API calls for pictures is that it dramatically increases the number of calls, which not only degrades the overall performance of the processing algorithm but also runs into the API call limits, which in my case is unacceptable.
So I'm curious: is there any way to retrieve all post attachments (i.e. pictures) while working with the feed edge, or is there an alternative approach?
Thanks.

This should work.
me/home?fields=attachments,<other stuff>

The issue eventually resolved itself.
I found that the Graph API Explorer allows choosing the attachments edge while constructing a feed query, which in this case looks like this:
https://graph.facebook.com/v2.1/me?fields=feed{attachments}
but executing such a request triggers an "Unsupported get request" exception.
It seems the non-working attachments edge for feed was an unimplemented feature or a bug, because, surprisingly, all attachments are now retrieved successfully as a subattachments collection.
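For reference, a minimal sketch of the now-working call using Python's requests library (the access token and the exact response handling are illustrative assumptions, not something from the original question):

import requests

ACCESS_TOKEN = "YOUR_USER_ACCESS_TOKEN"  # assumed: a valid token allowed to read the feed

# Ask for the feed with the attachments edge expanded via field expansion.
resp = requests.get(
    "https://graph.facebook.com/v2.1/me",
    params={"fields": "feed{attachments}", "access_token": ACCESS_TOKEN},
)
resp.raise_for_status()

for post in resp.json().get("feed", {}).get("data", []):
    for attachment in post.get("attachments", {}).get("data", []):
        # Multi-photo posts expose the individual photos as a
        # subattachments collection on the attachment.
        for sub in attachment.get("subattachments", {}).get("data", []):
            print(sub.get("media", {}))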
Thanks, everybody.


JMeter concurrent users with different UUIDs

So here is my case: I am trying to implement a concurrency test in JMeter with over 100 users, and I have the foundation set up. However, the challenge I am encountering is that I have two APIs in Postman: the first one gets an accident case as a UUID, and the second one registers the accident. Each accident API call requires a different accident case; in other words, all 100 users must have different accident cases. I usually do that manually during manual testing, but how do I automate it in JMeter?
Thank you and best regards
You can use the __UUID function:
"accidentCase": "${__UUID()}",
The UUID function returns a pseudo random type 4 Universally Unique IDentifier (UUID).
If you're getting an accident case in the 1st request and need to use it in the 2nd request, you can extract it from the 1st request's response using a suitable JMeter Post-Processor and save it into a JMeter Variable.
Then, in the 2nd request, replace the 21d2a592-f883-45f7-a1c4-9f55413f01b9 value with the JMeter Variable holding the UUID from the first request.
The process is known as correlation, and there is a lot of information about it on the Internet.
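For example, a JSON Extractor added as a child of the 1st request could look like this (the variable name accidentCase and the JSON path are assumptions about your response shape):

JSON Extractor (Post-Processor, child of the 1st request):
  Names of created variables: accidentCase
  JSON Path expressions:      $.accidentCase
  Match No.:                  1

Body of the 2nd request:
{
  "accidentCase": "${accidentCase}"
}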

S3 consistency for successful read after write

Could not find a definitive answer here.
According to the Amazon S3 docs, the caveat for read-after-write consistency is: if I got a 404 for a GET, then PUT a new object, then GET it again, the GET may still return a 404.
My question is: after I do a successful GET (read), will subsequent reads be successful too?
Example:
GET key 404
PUT key 200
GET key 404 # because of the caveat
GET key 200
From then on, is every subsequent GET for the key guaranteed to succeed?
The caveat AWS describes in the S3 documentation suggests that they use a caching layer on top of the database they use to store details of S3 objects, like their keys and metadata.
If you do a PUT for an object as the first operation and a GET afterwards, there will be a cache miss for the GET operation, so the caching layer will fetch the information about this object from the database.
If you do a GET before the PUT, the caching layer will query the database, receive the information that the object doesn't exist, and cache that, even though the PUT creates the object shortly after. So the GET after the PUT will receive the information that the object doesn't exist from the cache.
That's probably why this caveat exists. Unfortunately, that doesn't answer your question, because we don't know how that caching layer works. If the layer uses shared state, then you should receive a 200 response for all requests once you have received one 200 response. My guess is that they don't use shared state for the caching layer, as that's easier to scale. Without shared state, whether you receive a 200 or a 404 even after the first successful 200 depends on your luck, the time-to-live of items in the cache, and whether they employ some kind of cache invalidation for updated objects.
Because the details of the inner workings of S3 are unknown, I wouldn't rely on subsequent calls succeeding, but my guess is that the probability of receiving a 404 after a successful 200 is rather low. In the end, you have to decide, based on your use case, whether and how it makes sense to account for this situation.
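If you decide to account for it, here is a minimal retry sketch with boto3 (bucket and key names are made up; under the old consistency model this only shrinks the window, it cannot close it):

import time
import boto3

s3 = boto3.client("s3")

def get_with_retry(bucket, key, attempts=5, delay=0.5):
    # Retry the GET a few times to ride out a stale negative cache entry.
    for i in range(attempts):
        try:
            return s3.get_object(Bucket=bucket, Key=key)
        except s3.exceptions.NoSuchKey:
            time.sleep(delay * (2 ** i))  # exponential backoff
    raise LookupError(key + " still not visible after " + str(attempts) + " attempts")

obj = get_with_retry("my-bucket", "my-key")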
Updated answer
Snippets from the official AWS blog
S3 is Now Strongly Consistent
After that overly-long introduction, I am ready to share some good news! Effective immediately, all S3 GET, PUT, and LIST operations, as well as operations that change object tags, ACLs, or metadata, are now strongly consistent. What you write is what you will read, and the results of a LIST will be an accurate reflection of what's in the bucket. This applies to all existing and new S3 objects, works in all regions, and is available to you at no extra charge! There's no impact on performance, you can update an object hundreds of times per second if you'd like, and there are no global dependencies.

Designing RESTful API for Invoking process methods

I would like to know how to design a RESTful web service for process methods. For example, I want to make a REST API for ProcessPayroll for a given employee id. Since ProcessPayroll is a time-consuming job, I don't need any response from the method call; I just want to invoke the ProcessPayroll method asynchronously and return. I can't use ProcessPayroll in the URL since it is not a resource but a verb. So I thought I could go with one of the approaches below:
Request 1
http://www.example.com/payroll/v1.0/payroll_processor POST
body
{
  "employee" : "123"
}
Request 2
http://www.example.com/payroll/v1.0/payroll_processor?employee=123 GET
Which one of the above approaches is the correct one? Are there any RESTful API design guidelines for making a RESTful service for process methods and functions?
Which one of the above approaches is the correct one?
Of the two, POST is closest.
The problem with using GET /mumble is that the specification of the GET method restricts its use to operations that are "safe", which is to say that they don't change the resource in any way. In other words, GET promises that a resource can be pre-fetched by the user agent and the caches along the way, just in case it is needed.
Are there any RESTful API design guidelines for making a RESTful service for process methods and functions?
Jim Webber has a bunch of articles and talks that discuss this sort of thing. Start with How to GET a cup of coffee.
But the rough plot is that your REST API acts as an integration component between the process and the consumer. The protocol is implemented as the manipulation of one or more resources.
So you have some known bookmark that tells you how to submit a payroll request (think web form), and when you submit that request (typically POST, sometimes PUT; the details aren't immediately important), the resource that handles it, as a side effect, (1) starts an instance of ProcessPayroll from the data in your message, (2) maps that instance to a new resource in its namespace, and (3) redirects you to the resource that tracks your payroll instance.
In a simple web API, you just keep refreshing your copy of this new resource to get updates. In a REST API, that resource will return a hypermedia representation that describes what actions are available.
As Webber says, HTTP is a document transport application. Your web API handles document requests and, as a side effect of that handling, interacts with your domain application protocol. In other words, a lot of the resources are just messages...
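A sketch of that protocol at the HTTP level (paths, headers, and the id 42 are all illustrative, not prescribed by the answer above):

POST /payroll/v1.0/payrollRequests HTTP/1.1
Content-Type: application/json

{ "employee": "123" }

HTTP/1.1 303 See Other
Location: /payroll/v1.0/payrollRequests/42

GET /payroll/v1.0/payrollRequests/42 HTTP/1.1

HTTP/1.1 200 OK
Content-Type: application/json

{ "status": "processing", "cancel": "/payroll/v1.0/payrollRequests/42/cancel" }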
We came up with a similar solution in my project, so don't blame me if my opinion is wrong; I just want to share our experience.
As for the resource itself, I'd suggest something like:
http://www.example.com/payroll/v1.0/payrollRequest POST
As the job is supposed to run in the background, the API call should return the Accepted (202) HTTP code. That tells the user that the operation will take a long time. However, you should return a payrollRequestId, a unique identifier (a GUID, for example), to allow users to get the posted resource later on by calling:
http://www.example.com/payroll/v1.0/payrollRequest/{payrollRequestId} GET
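So the round trip would look roughly like this (the GUID is just an example value):

POST /payroll/v1.0/payrollRequest HTTP/1.1
Content-Type: application/json

{ "employee": "123" }

HTTP/1.1 202 Accepted

{ "payrollRequestId": "7f6c1d9e-3b2a-4c8e-9f10-2d5e8a7b4c3f" }

GET /payroll/v1.0/payrollRequest/7f6c1d9e-3b2a-4c8e-9f10-2d5e8a7b4c3f HTTP/1.1

HTTP/1.1 200 OK

{ "payrollRequestId": "7f6c1d9e-3b2a-4c8e-9f10-2d5e8a7b4c3f", "status": "processing" }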
Hope this helps
You decide between POST and GET based on what the API does:
If your REST API creates a new row in the DB (i.e., a new resource), go for POST. In your case, if your payroll process method creates a resource, you should choose POST.
If your REST API both creates and updates resources, i.e., if your payroll method processes the data, updates it, and creates new data, go for PUT.
If your REST API just reads data, go for GET. But as I understand from your question, your payroll method does more than read data, so GET is not the best choice for your case.
As I see it, your payroll method does both things:
it processes the data, meaning it updates it, and
it creates new data, meaning it creates new rows in the DB.
NOTE: One more thing: PUT is idempotent and POST is not. Follow the link PUT vs POST in REST.
So you should go for the PUT method.

How to avoid sending 2 duplicate POST requests to a webservice

I send a POST request to create an object. The object is created successfully on the server, but I don't receive the response (it is dropped somewhere), so I send the POST request again (and again). The result is that there are many duplicated objects on the server side.
What is the official way to handle this issue? I think it is a very common problem, but I don't know its exact name, so I cannot google it. Thanks.
In REST terminology (which is what interfaces where POST is used to create an object, PUT to modify, DELETE to delete, and GET to retrieve are called), the POST operation is considered un-'safe' and non-'idempotent', because repeating it changes the collection of objects, while a repeated request of any other type has no further effect.
I doubt there is an "official" way to deal with this, but there are probably some design patterns to handle it. For example, these two alternatives may solve this problem in certain scenarios:
Objects have uniqueness constraints. For example, a record that stores a unique username cannot be duplicated, since the database will reject it.
Issue a one-time-use token to each client before it makes the POST request, usually when the client loads the page with the input form. The first POST creates an object and marks the token as used. A second POST will see that the token has already been used, and you can answer with a "Yes, yes, ok, ok!" error or success message (see the sketch below).
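A minimal sketch of the token alternative, assuming Flask and an in-memory token store (all route names and the header are hypothetical):

import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
issued_tokens = set()  # hypothetical in-memory store; use a shared store in production

@app.route("/token", methods=["POST"])
def issue_token():
    # Handed out before the client shows the input form.
    token = str(uuid.uuid4())
    issued_tokens.add(token)
    return jsonify({"token": token})

@app.route("/objects", methods=["POST"])
def create_object():
    token = request.headers.get("X-Request-Token")
    if token not in issued_tokens:
        # Unknown or already-used token: the object was probably created
        # by an earlier attempt, so don't create it again.
        return jsonify({"status": "already processed"}), 409
    issued_tokens.discard(token)  # mark the token as used
    # ... create the object here ...
    return jsonify({"status": "created"}), 201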
Useful link where you can read more about REST.
It is unreliable to fix these issues on the client only.
In my experience, RESTful services with lots of traffic are bound to receive duplicate incoming POST requests unintentionally - e.g. sometimes a user will click 'Signup' and two requests will be sent simultaneously; this should be expected and handled by your backend service.
When this happens, two identical users will be created even if you check for uniqueness on the User model. This is because unique checks on the model are handled in-memory using a full-table scan.
Solution: these cases should be handled in the backend using unique checks and SQL Server Unique Indices.
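For example, a unique index makes the database itself reject the duplicate, so the second INSERT fails instead of silently creating a second user (table and column names are made up):

-- SQL Server: reject duplicate signups at the database level.
CREATE UNIQUE INDEX UX_Users_Email ON Users (Email);
-- The second of two concurrent INSERTs with the same Email now fails,
-- and the backend can map that error to an "already exists" response.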

A way to bypass the per-IP limit when retrieving profile pictures?

My app downloads all of the user's friends' pictures.
All the requests are of this kind:
https://graph.facebook.com/<friend id>/picture?type=small
But, after a certain limit is reached, instead of the picture I get:
{"error""message":"(#4) Application request limit reached","type":"OAuthException"}}
Actually, the only way I have found to prevent this is to change the server IP (manually).
Isn't there a better way?
For the record:
The limit is related to the Graph API only, and the graph.facebook.com/<user>/picture URL is a Graph API call that returns a redirect.
So, to avoid the daily limit, simply fetch all the image URLs via FQL, like:
SELECT uid, pic_small, pic_big, pic, pic_square FROM user WHERE uid = me() or IN (SELECT uid2 FROM friend WHERE uid1=me())
Now these are the direct URLs to the images, for example:
http://profile.ak.fbcdn.net/hprofile-ak-snc4/275716_1546085527_622218197_q.jpg
so don't store them, since they change continuously.
If it's needed for an online app, it is better not to download those images but to use the online version. There are a couple of reasons for doing so:
Users change pictures (some frequently); do you need an updated version?
Facebook's servers are probably faster than yours, and friends' pictures are probably already cached in your users' browsers.
Update:
Since the limit you are reaching is the Graph API call limit, not an image-retrieval limit, another solution that comes to mind is using the friends connection of the user in the Graph API and specifying picture in the fields argument, e.g.: https://graph.facebook.com/me/friends?fields=picture. This will return direct URLs for the friends' pictures, so you can make only one call to get all the info needed to download the images for every user...
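A sketch of that single-call approach in Python (the token is a placeholder, and the handling of the picture field is an assumption, since its shape differs between API versions):

import requests

ACCESS_TOKEN = "YOUR_USER_ACCESS_TOKEN"  # placeholder

resp = requests.get(
    "https://graph.facebook.com/me/friends",
    params={"fields": "picture", "access_token": ACCESS_TOKEN},
)
resp.raise_for_status()

for friend in resp.json().get("data", []):
    picture = friend["picture"]
    # Older API versions return the URL as a plain string; newer ones
    # nest it under picture["data"]["url"].
    url = picture if isinstance(picture, str) else picture["data"]["url"]
    image_bytes = requests.get(url).content  # direct CDN fetch, no Graph API call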