I've made a request using "me/inbox" to get the threads in my inbox. I can then request some of the thread objects using their ID directly, and access the comments within some of them using "/comments" as the Graph API URL. However, some of the thread objects for some friends are not returned. Instead, I get the following error:
{
"error": {
"message": "Unsupported get request.",
"type": "GraphMethodException",
"code": 100
}
}
I was wondering if anyone had any idea where I might be going wrong in requesting particular threads, or if this is a Facebook issue?
Instead of making a request to "me/inbox", I needed to make a request to "me/threads".
From the "data" value that is returned, you can get a list of Thread objects which contain an "id" value. This "id" value can be used to request a specific thread. Using this method of retrieving Threads seemed to solve the issue.
Would welcome further solutions to retrieving Threads from Facebook!
I had the same problem when fetching
https://graph.facebook.com/246252455409826
but it can be solved by appending your access_token, like this:
https://graph.facebook.com/246252455409826?access_token= *****
I think it only requires page authentication.
Related
New to Postman and Newman. I am iterating through a CSV data file; at times the values are few (20 or less) and at times they are many (50 or more). When iterating through large sets of values (50 or more) I receive a 429 error. Is there a way to write a retry function on a single request when the status is not 200?
I am working with the Amazon SP-API, and reading through the documentation it appears there is nothing I can do about the x-amzn-RateLimit-Limit. My current limit is set at 5.0, which I believe means 5 requests per second.
Any advice would be helpful: a retry function, telling the requests to sleep/wait every X-amount, another method I am not aware of, etc.
This is the error I receive
{
"errors": [
{
"code": "QuotaExceeded",
"message": "You exceeded your quota for the requested resource.",
"details": ""
}
]
}
@Danny Dainton pointed me to the right place. By reading through the documentation I found that using options.delayRequest lets me delay the time between requests. My final code looks like the sample below, and it works now:
newman.run({
delayRequest: 3000,
...
})
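For the retry function the question actually asks about, a plain wrapper around the request call can do it. This is a sketch under stated assumptions: `withRetry` and `requestFn` are illustrative names, not part of Newman's API, and the linear backoff is an arbitrary choice.

```javascript
// Sketch of a retry-on-429 helper (names are illustrative, not Newman API).
// Calls requestFn until it returns a non-429 status or attempts run out,
// waiting a little longer before each retry.
async function withRetry(requestFn, { attempts = 3, delayMs = 1000 } = {}) {
  for (let i = 0; i < attempts; i++) {
    const res = await requestFn();
    if (res.status !== 429) return res; // success, or a non-rate-limit error
    // linear backoff: wait delayMs, then 2*delayMs, then 3*delayMs, ...
    await new Promise(resolve => setTimeout(resolve, delayMs * (i + 1)));
  }
  throw new Error(`Still rate-limited after ${attempts} attempts`);
}
```

Combining this with `delayRequest` keeps you under the quota in the common case, with the retry as a safety net for occasional 429s.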
We are seeing a random error that seems to be caused by two requests' data getting mixed up. We receive a request for quoting shipping costs on an Order, but the request fails because the requested Order is not accessible by the requesting account. I'm looking for anyone who can shed light on what might be happening here; I haven't found anything on Google, the official Flask help channels, or SO that looks like what we're experiencing.
We're deployed on AWS, with apache, mod_wsgi, 1 process, 15 threads, about 10 instances.
Here's the code that sends the email:
msg = f"Order ID {self.shipping.order.id} is not valid for this Account {self.user.account_id}"
body = f"Error:<br/>{msg}<br/>Request Data:<br/>{request.data}<br/>Headers:<br/>{request.headers}"
send_email(msg, body, "devops#*******.com")
request_data = None
The problem is that in that scenario we email ourselves the error and the request data, and the request data we're getting, in many cases, could never have landed in that particular piece of code. It can be, for example, a request from the frontend to get the current user's settings, which makes no reference to any orders, never mind trying to get a shipping quote for one.
Comparing the application logs with Apache's access_log, we see that, in all cases, we got two requests on the same instance: one requesting the quote, and another which is the request that actually gets logged. We don't know whether these two requests are processed by the same thread in rapid succession or by different threads, but they come so close together that I think the latter is much more probable. So far we have no way of unambiguously tying the access_log entries to the application logging, so we don't know which of the two requests is logging the error; but the fact is that we're getting routed to a view that does not correspond to the request's content (i.e., we're not sure whether the quoting request is getting the wrong request object, or the other request is getting routed to the wrong view).
Another fact of interest is that we use GraphQL, so part of the routing is done after Flask/Werkzeug do theirs, but the body we get from flask.request at the moment the error shows up does not correspond to the GraphQL function/mutation that gets executed. This also happens in views mapped directly through Flask. The user is looked up by the flask-login workflow at the very beginning, and it corresponds to the "bad" request (i.e., the one not for quoting).
The actual issue was a bug in one of python-graphql's libraries (promise), not in Flask, Werkzeug, or Apache. It was not the request data that was "moving" to a different thread, but a different thread trying to resolve the promise for a query that was supposed to be handled elsewhere.
I have been trying to get the response from API gateway, but after countless tries and going through several online answers, I still wasn't able to solve my issue.
When I test my POST method for the API, it gives me the proper response in the Lambda test and the API Gateway method test, but when I try it from my React app, it doesn't return the same output.
My lambda snippet:
const response = {
statusCode: 200,
body: JSON.stringify({payload: {"key": "value"}})
};
return response;
But this is the response I am getting using the Fetch API in my React app:
I am new to AWS and would appreciate it if someone could point me in the right direction.
The Fetch API allows you to receive responses as a ReadableStream, which is what the image shows you are receiving. This resource here should be helpful in how to properly handle the response.
There are also many other commonly used libraries, like axios, that are primarily promise/callback driven, so you won't have to worry about streams much unless you want to. You should be able to get fetch working with promises too, but I've never done it myself.
In general, streams are really useful when you have a large amount of data and receiving it all at once in a giant chunk would be really slow, cause timeouts, etc.
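For a small JSON payload like this one, you can let fetch consume the stream for you instead of handling it manually. A minimal sketch, assuming the Lambda proxy integration (so the client receives the `body` your Lambda returned, already unwrapped by API Gateway); `readJson` is an illustrative name:

```javascript
// res.body is a ReadableStream, but res.json() reads the whole stream
// to completion and parses it as JSON in one step.
async function readJson(res) {
  return res.json();
}
```

Typical usage would be `const data = await readJson(await fetch(apiUrl));` after which `data.payload` holds the object your Lambda stringified.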
Pretend I am building a simple image-processing API. This API is completely stateless and only needs three items in the request: an image, the image format, and an authentication token.
Upon receipt of the image, the server merely processes the image and returns a set of results.
Ex: I see five faces in this image.
Would this still work with a REST-based API? Should this be used with a REST-based API?
Most of the examples I have seen comparing REST and SOAP have been purely CRUD-based, so I am slightly confused about how they compare in a scenario such as this.
Any help would be greatly appreciated, and although this question seems quite broad, I have yet to find a good answer explaining this.
REST is not about CRUD. It is about resources. So you should ask yourself:
What are my resources?
One answer could be:
An image processing job is a resource.
Create a new image processing job
To create a new image processing job, make an HTTP POST to a collection of jobs.
POST /jobs/facefinderjobs
Content-Type: image/jpeg
The body of this POST would be the image.
The server would respond:
201 Created
Location: /jobs/facefinderjobs/03125EDA-5044-11E4-98C5-26218ABEA664
Here 03125EDA-5044-11E4-98C5-26218ABEA664 is the ID of the job assigned by the server.
Retrieve the status of the job
The client now wants to get the status of the job:
GET /jobs/facefinderjobs/03125EDA-5044-11E4-98C5-26218ABEA664
If the job is not finished, the server could respond:
200 OK
Content-Type: application/json
{
"id": "03125EDA-5044-11E4-98C5-26218ABEA664",
"status": "processing"
}
Later, the client asks again:
GET /jobs/facefinderjobs/03125EDA-5044-11E4-98C5-26218ABEA664
Now the job is finished and the response from the server is:
200 OK
Content-Type: application/json
{
"id": "03125EDA-5044-11E4-98C5-26218ABEA664",
"status": "finished",
"faces": 5
}
The client would parse the JSON and check the status field. If it is finished, it can get the number of found faces from the faces field.
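From the client's side, the whole flow above could be sketched like this. The `/jobs/facefinderjobs` URL and the `status`/`faces` fields follow the answer; the base URL, the poll interval, and the injectable `fetchFn` parameter are assumptions for illustration.

```javascript
// POST the image, then poll the job resource from the Location header
// until its status is "finished" (a sketch, not a production client).
async function countFaces(baseUrl, imageBlob, fetchFn = fetch, waitMs = 2000) {
  const created = await fetchFn(`${baseUrl}/jobs/facefinderjobs`, {
    method: "POST",
    headers: { "Content-Type": "image/jpeg" },
    body: imageBlob,
  });
  // 201 Created responses carry the new job's URL in the Location header
  const jobUrl = baseUrl + created.headers.get("Location");
  for (;;) {
    const job = await (await fetchFn(jobUrl)).json();
    if (job.status === "finished") return job.faces; // done: report face count
    await new Promise(resolve => setTimeout(resolve, waitMs)); // still processing
  }
}
```

Making `fetchFn` a parameter keeps the polling logic testable without a real server.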
We are developing a web API which processes potentially very large amounts of user-submitted content, which means that calls to our endpoints might not return immediate results. We are therefore looking at implementing an asynchronous/non-blocking API. Currently our plan is to have the user submit their content via:
POST /v1/foo
The JSON response body contains a unique request ID (a UUID), which the user then submits as a parameter in subsequent polling GETs on the same endpoint:
GET /v1/foo?request_id=<some-uuid>
If the job is finished the result is returned as JSON, otherwise a status update is returned (again JSON).
(Unless they fail, both of the above calls simply return a 200 OK response.)
Is this a reasonable way of implementing an asynchronous API? If not, what is the 'right' (and RESTful) way of doing this? The model described here recommends creating a temporary status-update resource and then a final result resource, but that seems unnecessarily complicated to me.
Actually, the way described in the blog post you mentioned is the 'right' RESTful way of processing asynchronous operations. I've implemented an API that handles large file uploads and conversions, and it does it this way. In my opinion this is not overly complicated, and it is definitely better than delaying the response to the client.
An additional note: if a task has failed, I would also return 200 OK together with a representation of the task resource and the information that the resource creation failed.
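That failure convention (a 200 OK poll response, with success or failure carried in the task representation rather than the HTTP status) can be sketched with a tiny in-memory job store. All names here are illustrative, not taken from any framework:

```javascript
// In-memory job store sketch: polling a job always yields a 200-style
// response; the outcome lives in the task's status field.
const jobs = new Map();

function createJob(id) {
  jobs.set(id, { id, status: "processing" });
}

function completeJob(id, ok, result) {
  // Even a failed task keeps a resource representation for the client to poll.
  jobs.set(id, ok
    ? { id, status: "finished", result }
    : { id, status: "failed", error: "resource creation failed" });
}

function pollJob(id) {
  return { httpStatus: 200, body: jobs.get(id) };
}
```

The client then branches on `body.status`, exactly as in the face-finding example above, with "failed" as a third terminal state.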