When I request a user's tagged places using the Facebook Graph API and receive back 200 places (in groups of 25, more or less), does that count as one API request or 200?
I know that if I ask for 5 specific IDs it'll be counted as 5 separate API calls, but I'm wondering about when you ask for everything and don't know how much data is coming back. If it counts as one API call then great, but if it's 200 then I'll need some mechanism to gather the data over a longer period of time.
This is the API call.
https://developers.facebook.com/docs/graph-api/reference/user/tagged_places
One call to the API is...well, one API call, obviously. If you call the API one time and get 25 items in the result, it will be one API call. You can increase the limit and it will still be one API call.
More information: https://developers.facebook.com/docs/graph-api/advanced/rate-limiting#what-do-we-consider-an-api-call-
The example in the docs shows that one request can also count as several API calls, if you ask for some specific IDs.
If you get 200 results, you either increased the limit or you used paging. Since you mentioned "in groups of 25", I assume you are using paging. That means you would need 8 API calls to get 200 items.
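To make the arithmetic concrete, here is a minimal Go sketch of the call count: it's just a ceiling division of item count by page size (the 200 and 25 are the numbers from the question):

```go
package main

import "fmt"

// callsNeeded returns how many paged API calls are required to fetch
// `total` items when each response returns at most `pageSize` items.
func callsNeeded(total, pageSize int) int {
	return (total + pageSize - 1) / pageSize // ceiling division
}

func main() {
	fmt.Println(callsNeeded(200, 25)) // 8
}
```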
Related
So here is my case: I am trying to implement a concurrency test in JMeter with over 100 users, and I have the foundation set up. The challenge I am encountering is that I have two APIs in Postman: the first returns an accident case as a UUID, and the second registers the accident. Each registration requires a different accident case; in other words, all 100 users will have different accident cases. I usually do that manually when testing by hand, but how do I automate it in JMeter?
Thank you and best regards
You can use the __UUID function:
"accidentCase": "${__UUID()}",
The UUID function returns a pseudo random type 4 Universally Unique IDentifier (UUID).
If you're getting an accident case in the 1st request and need to use it in the 2nd request, you can extract it from the 1st request's response using a suitable JMeter Post-Processor and save it into a JMeter Variable.
Then, in the 2nd request, replace the 21d2a592-f883-45f7-a1c4-9f55413f01b9 value with the JMeter Variable holding the UUID from the first request.
The process is known as correlation, and there is a lot of information about it on the Internet.
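Assuming the first response returns JSON with the accident case in a field named accidentCase (the field name and response shape here are assumptions), the correlation could be sketched like this:

```
JSON Extractor (added as a child of the 1st request):
  Names of created variables: accidentCase
  JSON Path expressions:      $.accidentCase
  Match No.:                  1

Body of the 2nd request then references the variable:
  "accidentCase": "${accidentCase}"
```

Each JMeter thread (virtual user) keeps its own copy of the variable, so all 100 users end up registering against their own accident case.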
I am going through the documentation of ListObjects function in AWS' go SDK.
(the same holds more or less for the actual API endpoint)
So the docs write:
Returns some or all (up to 1,000) of the objects in a bucket.
What does this mean? If my bucket has 200,000 objects, will this API call not work?
This example uses ListObjectsPages (which calls ListObjects under the hood) and claims to list all objects.
What is the actual case here?
I am going through the documentation of ListObjects function in AWS' go SDK.
Use ListObjectsV2. It behaves more or less the same, but it's an updated version of ListObjects. It's not super common for AWS to update APIs, and when they do, it's usually for a good reason. They're great about backwards compatibility which is why ListObjects still exists.
This example uses ListObjectsPages (which calls ListObjects under the hood) and claims to list all objects.
ListObjectsPages is a paginated equivalent of ListObjects, and ditto for the V2 versions which I'll describe below.
Many AWS API responses are paginated. AWS uses cursor pagination; this means responses include a cursor - ContinuationToken in the case of ListObjectsV2. If more objects exist (IsTruncated in the response), a subsequent ListObjectsV2 request can provide the ContinuationToken to continue the listing where the first response left off.
ListObjectsV2Pages handles the iterative ListObjectsV2 requests for you so you don't have to handle the logic of ContinuationToken and IsTruncated. Instead, you provide a function that will be invoked for each "page" in the response.
So it's accurate to say ListObjectsV2Pages will list "all" the objects, but it's because it makes multiple ListObjectsV2 calls in the backend that it will list more than one page of responses.
Thus, ...Pages functions can be considered convenience functions. You should always use them when appropriate - they take away the pain of pagination, and pagination is critical to keep potentially high-volume API responses operable. In AWS, if pagination is supported, assume you need it - in typical cases, the first page of results is not guaranteed to contain any results, even if subsequent pages do.
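To illustrate the token/IsTruncated loop that the ...Pages helpers hide, here is a self-contained Go sketch over an in-memory "bucket"; fakePage and listPage are invented stand-ins for the real SDK types and calls, but the driving loop in listAll is the same shape the SDK runs for you:

```go
package main

import "fmt"

// fakePage mimics a ListObjectsV2-style response: a batch of keys,
// a continuation token, and an IsTruncated flag.
type fakePage struct {
	Contents              []string
	NextContinuationToken string
	IsTruncated           bool
}

// listPage simulates one paginated call over an in-memory slice:
// it returns up to pageSize keys starting at the token's position.
func listPage(all []string, token string, pageSize int) fakePage {
	start := 0
	for i, k := range all {
		if k == token {
			start = i
			break
		}
	}
	end := start + pageSize
	if end >= len(all) {
		return fakePage{Contents: all[start:], IsTruncated: false}
	}
	return fakePage{
		Contents:              all[start:end],
		NextContinuationToken: all[end],
		IsTruncated:           true,
	}
}

// listAll drives the cursor loop: keep requesting pages, passing the
// token forward, until IsTruncated comes back false.
func listAll(all []string, pageSize int) []string {
	var out []string
	token := ""
	for {
		page := listPage(all, token, pageSize)
		out = append(out, page.Contents...)
		if !page.IsTruncated {
			return out
		}
		token = page.NextContinuationToken
	}
}

func main() {
	keys := []string{"a", "b", "c", "d", "e"}
	fmt.Println(len(listAll(keys, 2))) // 5
}
```

With a real bucket of 200,000 objects and the 1,000-item cap, the same loop simply runs 200 times.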
The AWS Go SDK V2 gives us paginator types to help us manage S3's per-query item limits. ListObjectsV2Pages is gone. In its place we get ListObjectsV2Paginator, which deals with the pagination details that @Daniel_Farrell mentioned.
The constructor accepts the same params as the list objects query (type ListObjectsV2Input). The paginator exposes two methods: HasMorePages() bool and NextPage(ctx) (*ListObjectsV2Output, error).
var items []types.Object // types is the s3/types package
for p.HasMorePages() {
    batch, err := p.NextPage(ctx)
    if err != nil {
        return err // handle the error as appropriate
    }
    items = append(items, batch.Contents...)
}
We have a service which inserts certain values into DynamoDB. For the sake of this question, let's say it's a key:value pair, i.e. customer_id:customer_email. The inserts don't happen that frequently, and once an insert is done, that specific key doesn't get updated.
What we have done is create a client library which, provided with customer_id will fetch customer_email from dynamodb.
Given that the customer_id data is static, we were thinking of adding a cache in front of the table, but one thing we are not sure about is what will happen in the following use-case:
client_1 uses our library to fetch customer_email for customer_id = 2.
The customer doesn't exist so API Gateway returns not found
APIGateway will cache this response
For any subsequent calls, this cached response will be sent
Now another system inserts customer_id = 2 with its email. This system doesn't know whether this response has been cached previously; it doesn't even know that any other system has fetched this specific data. How can we invalidate the cache for this specific customer_id when it gets inserted into DynamoDB?
You can send a request to the API endpoint with a Cache-Control: max-age=0 header which will cause it to refresh.
This could open your application up to attack as a bad actor can simply flood an expensive endpoint with lots of traffic and buckle your servers/database. In order to safeguard against that it's best to use a signed request.
In case it's useful to people, here's .NET code to create the signed request:
https://gist.github.com/secretorange/905b4811300d7c96c71fa9c6d115ee24
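For illustration, here is a minimal Go sketch of building such an invalidation request with the standard library. The URL is hypothetical, and the SigV4 signing step is omitted; note that for API Gateway to honor the header for a given caller, that caller's IAM identity also needs the execute-api:InvalidateCache permission:

```go
package main

import (
	"fmt"
	"net/http"
)

// invalidationRequest builds a GET request carrying the header that
// asks API Gateway to bypass and refresh its cache entry for this URL.
// The caller is expected to sign the request (SigV4) before sending it.
func invalidationRequest(url string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Cache-Control", "max-age=0")
	return req, nil
}

func main() {
	// Hypothetical endpoint for the customer_id = 2 lookup.
	req, err := invalidationRequest("https://api.example.com/customers/2")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("Cache-Control")) // max-age=0
}
```

In the scenario above, the system that inserts customer_id = 2 would fire one such signed request right after the DynamoDB write, so the stale "not found" entry is refreshed.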
We've built a Lambda which takes care of re-filling the cache with updated results. It's quite a manual process, with very little reusable code, but it works.
Lambda is triggered by the application itself following application needs. For example, in CRUD operations the Lambda is triggered upon successful execution of POST, PATCH and DELETE on a specific resource, in order to clear the general GET request (i.e. clear GET /books whenever POST /book succeeded).
Unfortunately, if you have a view with a server-side paginated table you are going to face all sorts of issues, because invalidating /books is not enough: you may actually have /books?page=2, /books?page=3 and so on... a nightmare!
I believe APIG should allow more granular control of cache entries, otherwise many use cases aren't covered. It would be enough if they allowed choosing a root cache group for each request, so that we could manage cache entries by group rather than by single request (which, imho, is also less common).
Did you look at this https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html ?
There is a way to invalidate the entire cache or a particular cache entry.
According to Facebook Platform Changelog
/v2.1/{post-id} will now return all photos attached to the post: In previous versions of the API only the first photo was returned with a post. This removes the need to use FQL to get all a post's photos.
Though this statement applies only to separate API calls per post, which look as follows:
https://graph.facebook.com/v2.1/{post_id}?fields=attachments
Since I need to retrieve all possible data that a user posts to the timeline, I use the corresponding feed edge to do so.
https://graph.facebook.com/v2.1/me?fields=feed
So when I make a post with more than one picture attached to it, the retrieved API response doesn't reflect that (and as I understand it, this is by design). However, I found that Graph API Explorer allows choosing the attachments edge while constructing a feed query, which in this case appears like this:
https://graph.facebook.com/v2.1/me?fields=feed{attachments}
but executing such a request triggers an "Unsupported get request" exception.
Summing it up, the whole issue with making separate API calls for pictures is that it dramatically increases the number of calls, which not only decreases the overall performance of the processing algorithm but also runs into the API call limits, which in my case is unacceptable.
So I'm curious, is there any possibility to retrieve all post attachments (i.e. pictures) while working with feed edge or any alternative approach?
Thanks.
This should work.
me/home?fields=attachments,<other stuff>
The issue eventually resolved itself.
I found that Graph API Explorer allows choosing the attachments edge while constructing a feed query, which in this case appears like this
https://graph.facebook.com/v2.1/me?fields=feed{attachments}
but executing such a request triggers an "Unsupported get request" exception.
It seems the non-working attachments edge for feed was an unimplemented feature or a bug, because, surprisingly, now all attachments are retrieved successfully as a subattachments collection.
Thanks, everybody.
I need to automatically generate multiple PDF files and save them as attachments on their corresponding objects' records. I have tried to solve this using a batch class and a Visualforce page rendered as a PDF, but Salesforce has a limit here that does not allow using the getContent() method in a batch class.
Searching in the internet I have found this possible solution:
Why are HTML emails being sent by a APEX Schedulable class being delivered with blank bodies?
It proposes to:
Create a class which implements the Schedulable interface.
Have the execute() method call a @future method.
Create a @future method that calls a web service enabled method in the class that sends the email.
The problem I found is when I try to authenticate against my web services (REST) from inside Salesforce (http://help.salesforce.com/help/doc/en/remoteaccess_oauth_web_server_flow.htm).
In the first step I make a request and get a code through the callback URL, but I cannot figure out how to read this parameter from Salesforce. The response doesn't have a method called 'getParameter()' and the body is empty.
As an example:
Request: https://login.salesforce.com/services/oauth2/authorize?response_type=code&client_id=3MVG9lKcPoNINVBIPJjdw1J9LLM82HnFVVX19KY1uA5mu0QqEWhqKpoW3svG3XHrXDiCQjK1mdgAvhCscA9GE&redirect_uri=https%3A%2F%2Fwww.mysite.com%2Fcode_callback.jsp&state=mystate
Response: https://www.mysite.com/code_callback.jsp?code=aPrxsmIEeqM9&state=mystate
Is there any way to connect with my web services by making the call from inside Salesforce in order to implement this solution? It would be easier if I made the call from an external application, but can it be done inside Salesforce? Can you suggest any possible solution?
(Necromancy of old questions ;))
How about this:
Directly from your context or from Schedulable class...
Call @future up to 10 times.
Each @future can send up to 10 callouts; use them to RESTfully access your VF page with renderAs="pdf" and save the content as your attachment.
If 100 attachments aren't enough - play with daisy-chaining of batches. I believe since Summer or Winter '13 the finish() method in a batch can be used to fire Database.executeBatch() on the next batch ;)
As to how exactly get REST access to the PDF - you can use either my question https://salesforce.stackexchange.com/questions/4692/screen-scrape-salesforce-with-rest-get-call-from-apex or maybe my end solution for saving reports: https://salesforce.stackexchange.com/questions/4303/scheduled-reports-as-attachment (you'll need only the authentication part I imagine + of course a working entry in remote site settings).
This is under assumption that sid cookie will work on VF pages as good as it does on standard ones... Good luck?
Since Winter '16 release, PageReference.getContent can be called from Batch Apex.
Spread the PDF attachment generation across multiple batch execution contexts depending on the size and count you need to generate - getContent is an HTTP request and the servlet can take a long time to respond.
https://releasenotes.docs.salesforce.com/en-us/winter16/release-notes/rn_apex_pagereference_getcontent.htm#rn_apex_pagereference_getcontent
You can now make calls to the getContent() and getContentAsPdf() methods of the PageReference class from within asynchronous Apex such as Batch Apex, Schedulable and Queueable classes, and @future methods. This allows you to design much more flexible and scalable services that, for example, render Visualforce pages as PDF files.
For the purposes of limits and restrictions, calls to getContent() (and getContentAsPdf(), which behaves the same) are treated as callouts. The behavior and restrictions vary depending on the PageReference it’s called on, and in certain cases are relaxed. There are three different possibilities.
PageReference instances that explicitly reference a Visualforce page
For example, Page.existingPageName.
Calls to getContent() aren’t subject to either the maximum concurrent callouts limit or the maximum total callouts per transaction limit.
Additionally, only PageReference instances that reference a Visualforce page can call getContent() from scheduled Apex, that is, Apex classes that implement the Schedulable interface.
PageReference instances that reference a Salesforce URL:
For example, PageReference('/' + recordId).
Calls to getContent() for Salesforce URLs aren’t subject to the maximum concurrent callouts limit, but they are subject to the maximum total callouts per transaction limit.
PageReference instances that reference an external URL:
For example, PageReference('https://www.google.com/').
Calls to getContent() for external URLs are subject to both the maximum concurrent callouts limit and the maximum total callouts per transaction limit.
Even when calls to getContent() and getContentAsPdf() aren’t tracked against callout limits, they’re still subject to the usual Apex limits such as CPU time, etc. These limits are cumulative across all callouts within the originating transaction.
Finally, we’ve relaxed the restrictions on calling getContent() and getContentAsPdf() after performing DML operations (excluding creating Savepoints). If the calls to getContent() and getContentAsPdf() are internal calls, they’re now allowed. External callouts after DML operations are still blocked, and you still can’t make callouts from within getContent() calls themselves, that is, during the rendering of the page.