So I am looking into the Postmates API and I have been able to create a delivery. This was great. I also set up a webhook URL with ngrok to test the responses from Postmates, but I am totally stumped as to how to determine when the pickup was actually completed and the dropoff/delivery was actually completed.
I saved all of the responses in a database, and each time I ran the test delivery, I received exactly 70 calls to the webhook endpoint. Each time, 47 of them had 'kind': 'event.delivery_status'. Here are the stats:
THIS IS ALL IN TEST MODE WITH THE SANDBOX...
11 of those are 'status':'pickup_complete'
14 of those are 'status':'pickup'
11 of those are 'status':'dropoff'
11 of those are 'status':'delivered'
All of the webhook responses for status=delivered have a 'data.courier_imminent': false value.
I went to the webpage at the 'data.tracking_url', and when the page showed that the delivery was complete, I immediately checked the database to see how many records I had saved; I was only at 32 total records. This means the webhook kept sending me updates after the delivery was supposedly complete.
Lastly, these statuses do not arrive in order; they are totally random. In fact, the 6th-to-last record received had a pickup_complete status.
The real question:
How will I know when a pickup is actually complete, when a delivery is actually complete, etc.?
You'll receive a webhook of type event.delivery_status. One of the fields within the body of the payload will be {status: "delivered"}. This has been accurate so far. Postmates doesn't return a delivered_at timestamp, but you could create your own timestamp and store it along with the delivery for reporting.
As for the number of webhooks, Postmates has a delivery robot (called robo) that moves as if it were a real Postmate. You'll receive a lot of webhooks of type event.courier_update with the updated location.
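A minimal sketch of a handler following this approach, assuming a Flask endpoint; the exact field paths (status at the top level, a delivery_id field) are assumptions you should check against your stored payloads:

from datetime import datetime, timezone
from flask import Flask, request, jsonify

app = Flask(__name__)
delivered_at = {}  # stand-in for a real database table

@app.route("/webhooks/postmates", methods=["POST"])
def postmates_webhook():
    payload = request.get_json(force=True)

    # Skip the high-volume courier location pings from "robo".
    if payload.get("kind") != "event.delivery_status":
        return jsonify(status="ignored"), 200

    delivery_id = payload.get("delivery_id")  # assumed field name
    # Record our own timestamp the first time we see "delivered",
    # since Postmates doesn't send a delivered_at value.
    if payload.get("status") == "delivered" and delivery_id not in delivered_at:
        delivered_at[delivery_id] = datetime.now(timezone.utc).isoformat()

    return jsonify(status="ok"), 200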
I'm building an Alexa skill that sends a request to my web server;
the web server does some processing and uploads a file to Amazon S3.
While the web server is processing, I make the skill poll Amazon S3 for the file every 10 seconds until it gets it. The response is based on the file's content.
Unfortunately, the web server's processing takes more than 1 minute. That means the skill must wait more than 1 minute before it can get the file and respond.
For now, I use a progressive response with async/await in my code, and the skill does keep waiting for the file on S3.
But I found that a second request is sent to Lambda automatically after 50 seconds. That means for the same skill, I get two Lambda functions running at the same time.
The execution result: after the first response made by the progressive response, 50 seconds later I hear another response, also made by the progressive response, which belongs to the second request.
And then nothing happens until the end.
I know it is bad to make the skill wait this long, but I still want to figure out a workable approach if the skill really does need to wait this long.
There are some points I want to figure out.
Is there any way to prevent the second request from being sent to Lambda?
Is there another way I can try to accomplish the goal?
Thanks
Eventually, I found that the second invocation of Lambda does not come from Alexa; it comes from AWS Lambda itself. Refer to the following article:
https://cloudonaut.io/your-lambda-function-might-execute-twice-deal-with-it/
So you have to deal with this kind of situation in your Lambda code. One thing you can use is that both invocations carry the same request ID, so you can tell whether this is the first execution by checking your storage for the request ID you stored during the first execution.
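A minimal sketch of that check, assuming a DynamoDB table named processed-requests with request_id as its partition key (the table and its name are assumptions):

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("processed-requests")  # assumed table

def handler(event, context):
    try:
        # Atomically claim this request ID; the conditional write fails
        # if a duplicate invocation already stored the same ID.
        table.put_item(
            Item={"request_id": context.aws_request_id},
            ConditionExpression="attribute_not_exists(request_id)",
        )
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return {"duplicate": True}  # second execution; skip the work
        raise

    # First-time execution: do the real work here.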
Besides, I also found that once the Alexa skill waits for more than 1 minute, it crashes and returns the error by speaking it (tested with an Amazon Echo). And there is nothing different in the AWS Lambda log compared to a normal execution, meaning the log looks fine but the actual execution result is not.
Hope this can help someone who is also struggling with this problem.
I'm troubleshooting an issue with a Node application I've inherited that serves as a webhook callback endpoint.
To debug, I'm posting messages to a page that the Facebook app associated with the endpoint is subscribed to, and following my Node app's log.
After several hours, I still see no update requests from Facebook for my page posts.
Comparing timestamps on the posts to my app's logs for the last update requests it received (several days ago), it appears there was about an 8-hour lag between a post and the corresponding update request.
I've searched the documentation for help but could only find this:
Update notifications are aggregated and sent in a batch of up to 1000 updates.
If any update sent to your server fails, we will retry immediately, then try a few more times with decreasing frequency over the next 24 hours. Your server should handle deduplication in these cases. Updates unaccepted for 24 hours will be dropped.
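To be safe, my endpoint already handles the deduplication mentioned there, roughly like this (a simplified sketch in Python rather than my actual Node code; the (id, time) dedup key is an assumption about what is stable in the payload):

from flask import Flask, request, jsonify

app = Flask(__name__)
seen_updates = set()  # stand-in for a persistent store with a 24-hour TTL

@app.route("/facebook/webhook", methods=["POST"])
def facebook_webhook():
    body = request.get_json(force=True)

    # Updates arrive batched, and any of them may be redelivered on
    # failure, so skip entries we have already processed.
    for entry in body.get("entry", []):
        key = (entry.get("id"), entry.get("time"))
        if key in seen_updates:
            continue
        seen_updates.add(key)
        # ... process the update ...

    return jsonify(success=True), 200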
This gives me the impression that updates are not instantaneous. But are several-hour delays the norm?
Can anybody with more experience with Graph API webhooks provide a ballpark for normal lag?
I am noticing that AWS SES stats are not updated in real time. After sending email, it takes time for the sent count to increase on the SES dashboard. Sometimes it takes a few minutes and sometimes longer.
Has anyone also experienced this? Any thoughts?
On the assumption that the console is simply making a call to a standard API action (rather than using some kind of console-only backend service that is not documented or user-accessible; such things are not unheard of, but they are pretty rare in AWS, so it's a reasonably safe assumption), it looks like this is not really designed to be real-time. The stats are reported in 15-minute windows.
From the SES API reference:
GetSendStatistics
Returns the user's sending statistics. The result is a list of data points, representing the last two weeks of sending activity.
Each data point in the list contains statistics for a 15-minute interval.
— http://docs.aws.amazon.com/ses/latest/APIReference/API_GetSendStatistics.html
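A minimal sketch of pulling those numbers yourself with boto3 (the region is an example):

import boto3

ses = boto3.client("ses", region_name="us-east-1")  # example region

# Each data point covers a 15-minute interval, which is why the
# dashboard count lags behind what you just sent.
stats = ses.get_send_statistics()
for point in sorted(stats["SendDataPoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["DeliveryAttempts"], point["Bounces"])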
AWS SES dashboard stats are a rough performance hint, not something to rely on. If you want real-time notifications of sent emails, you will need to set up SNS notifications. Keep in mind that spam-complaint notifications can take up to a couple of days, as they are based on information provided by the ISPs to Amazon. And complaints within the Gmail evil-system will NEVER get to you.
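Routing those notifications to SNS is configured per verified identity; a minimal sketch, assuming an existing SNS topic (the ARN and identity below are placeholders):

import boto3

ses = boto3.client("ses", region_name="us-east-1")  # example region

# Send delivery notifications for a verified identity to an SNS topic.
# "Bounce" and "Complaint" are the other supported notification types.
ses.set_identity_notification_topic(
    Identity="example.com",  # your verified domain or email address
    NotificationType="Delivery",
    SnsTopic="arn:aws:sns:us-east-1:123456789012:ses-events",  # placeholder
)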
I have a mobile device that is constantly recording information. The information is stored on the device's local database. Every few minutes, the device will upload the data to a server through a REST API - sometimes the uploaded data corresponds to dozens of records from the same table. Right now, the server responds with
{status: "SAVED"}
if the data is saved to the server.
In the interest of being 100% sure that the data was actually uploaded (so the device won't attempt to upload it again), is that simple response enough? Or should I be hashing the incoming data and responding with the hash, or something similar? Perhaps I should send back the local row IDs of the device's uploaded rows?
I think it's fine to have a very simple "SUCCESS" response if the entire request did indeed successfully save.
However, I think that when there is a problem, your response needs to include the IDs (or some other unique identifier) of the records that failed to save so that they can be queued to be resent.
If the same records fail multiple times, you might need to log the error or display it so that further action can be taken.
A successful response could be something as simple as:
<response>
<status>1</status>
</response>
An error response could be something like:
<response>
<status>0</status>
<errorRecords>
<id>441</id>
<id>8462</id>
<id>12</id>
</errorRecords>
</response>
You could get fancy and have different status codes that mean different, more specific messages.
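A minimal sketch of the server side of this scheme, using JSON instead of XML for brevity (save_to_database and the local_id field are placeholders for your own persistence and the device's row IDs):

from flask import Flask, request, jsonify

app = Flask(__name__)

def save_to_database(record):
    # Stub: replace with real persistence; raise on failure.
    if "local_id" not in record:
        raise ValueError("missing local_id")

@app.route("/records", methods=["POST"])
def save_records():
    records = request.get_json(force=True)  # batch of rows from the device
    failed_ids = []

    for record in records:
        try:
            save_to_database(record)
        except Exception:
            # Echo back the device's own row IDs so it knows what to resend.
            failed_ids.append(record.get("local_id"))

    if failed_ids:
        return jsonify(status=0, errorRecords=failed_ids)
    return jsonify(status=1)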
If I have a web API service (Order Notification) that allows a third-party client to call in (they must call in to us; we cannot push to them) periodically (every 10 minutes) and get the new orders it has not yet received, how do I deal with failures?
For example, say there are 10 new orders the client has not received since it last called in. The client calls into our Order Notification service. We retrieve the orders we have not yet sent (10 in this case), mark these 10 orders as sent, and return them in the response.
However, the client did not receive the response (something happened after it left us, e.g. an HTTP timeout or something else).
So now we have a problem where on our side we have marked the orders as sent but the client never received them.
Any thoughts on how to solve this?
Just an idea: can you assign the caller some sort of identifier, and when the caller succeeds, have it reply back saying it has acknowledged the request? The server will never know that something failed on the client side unless the client reports it.
For example, when caller A calls in for its requests, it might do something like this:
call -> http://server/requests
the server replies with some XML that contains the result set for this caller, along with a unique identifier that it tracks to know whether that particular call got a response (you can time out this identifier after a reasonable period)
when the client receives the response, it calls back again
call -> http://server/requestComplete?id=[generatedID]
and the server marks it successful.
Lots of APIs already require some sort of identification token, so this kind of send/ack messaging system would fit in naturally.
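A minimal sketch of the server side of that send/ack flow, with in-memory structures standing in for real storage (the endpoint paths match the ones above; everything else is assumed):

import uuid
from flask import Flask, request, jsonify

app = Flask(__name__)

unsent_orders = [{"order_id": 1}, {"order_id": 2}]  # placeholder data
open_batches = {}  # batch id -> orders handed out but not yet acknowledged

@app.route("/requests")
def get_requests():
    batch_id = str(uuid.uuid4())
    # Hand out the unsent orders but do NOT mark them sent yet; they stay
    # reserved under this batch id until the client acknowledges receipt.
    # (In production, expire stale batch ids back into unsent_orders.)
    open_batches[batch_id] = list(unsent_orders)
    return jsonify(id=batch_id, orders=open_batches[batch_id])

@app.route("/requestComplete")
def request_complete():
    batch = open_batches.pop(request.args.get("id"), None)
    if batch is None:
        return jsonify(ok=False), 404
    # Only now, after the ack, are the orders marked as sent.
    for order in batch:
        if order in unsent_orders:
            unsent_orders.remove(order)
    return jsonify(ok=True)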
If you have access to both sides of the system, you could add a "received" request: once the client picking up the data has received it, it makes a request back to the original host saying that it was received successfully.