I want to schedule a delivery for a specific time rather than use the ETA returned by the Postmates API, because the customer wants the delivery at a particular time.
How can I schedule a delivery time?
I couldn't find anything about this in the Postmates API documentation.
I am using the Create Delivery API endpoint to create the order delivery.
You can pass parameters for this; a request sketch follows the rules below.
Pickup and dropoff windows are specified using pickup_ready_dt, pickup_deadline_dt, dropoff_ready_dt, and dropoff_deadline_dt.
pickup_ready_dt must be less than 30 days in the future.
pickup_deadline_dt must be at least 10 mins later than pickup_ready_dt and at least 20 minutes in the future, thus providing a realistic pickup window.
dropoff_ready_dt must be less than or equal to pickup_deadline_dt. This is to prevent a scenario where a courier has to hold onto an order between the pickup and dropoff windows.
dropoff_deadline_dt must be at least 20 mins later than dropoff_ready_dt, thus providing a realistic dropoff window.
dropoff_deadline_dt must be greater than or equal to pickup_deadline_dt.
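For concreteness, here is a minimal C# sketch of a Create Delivery request with such a scheduling window. This is a sketch under assumptions, not confirmed against the current docs: it assumes Basic auth with the API key as the username and RFC 3339 timestamps; the customer ID, API key, addresses, and manifest are placeholders, and any further fields your account requires must be added.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ScheduledDelivery
{
    static async Task Main()
    {
        // Placeholder credentials; assumes the API key is used as the Basic auth username.
        var apiKey = "YOUR_API_KEY";
        var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes(apiKey + ":")));

        // The customer's requested pickup time, e.g. two hours from now.
        var pickupReady = DateTime.UtcNow.AddHours(2);

        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["manifest"] = "flowers",                                   // placeholder
            ["pickup_name"] = "Store",                                  // placeholder
            ["pickup_address"] = "20 McAllister St, San Francisco, CA", // placeholder
            ["pickup_phone_number"] = "+14155550100",                   // placeholder
            ["dropoff_name"] = "Customer",                              // placeholder
            ["dropoff_address"] = "101 Market St, San Francisco, CA",   // placeholder
            ["dropoff_phone_number"] = "+14155550101",                  // placeholder
            // Scheduling window, satisfying the rules listed above:
            ["pickup_ready_dt"] = pickupReady.ToString("o"),
            ["pickup_deadline_dt"] = pickupReady.AddMinutes(20).ToString("o"),  // >= ready + 10 min
            ["dropoff_ready_dt"] = pickupReady.AddMinutes(20).ToString("o"),    // <= pickup deadline
            ["dropoff_deadline_dt"] = pickupReady.AddMinutes(60).ToString("o"), // >= dropoff ready + 20 min
        });

        var response = await client.PostAsync(
            "https://api.postmates.com/v1/customers/YOUR_CUSTOMER_ID/deliveries", form);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}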
As far as I can see from the OPC UA specifications, on every publishing interval the OPC UA server sends one NotificationMessage containing all Notifications for all the changes it has sampled into the queue (I'm monitoring variable values).
But as far as I can tell from the open62541 documentation, its subscription methods UA_Client_MonitoredItems_createDataChange and UA_Client_MonitoredItems_createDataChanges work on a callback-per-monitored-item basis, item by item.
Is there a way to get all the monitored items' changes of one publishing interval in bulk? For example:
1st publishing interval: changed values of items 1, 2, 3
2nd publishing interval: changed values of items 2, 4, 5
etc.
As far as I know, this depends on the software stack you are using (e.g. Softing, or the OPC Foundation .NET Standard stack). A NotificationMessage contains all changes of the MonitoredItems you added to a Subscription, but how the stack's API hands those changes to you differs slightly.
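For illustration, here is a hedged sketch of a stack that can hand you the whole publish at once. It assumes the OPC Foundation UA-.NETStandard client library (an assumption on my part, not open62541): as far as I know, Subscription.FastDataChangeCallback is invoked once per received publish response, carrying every changed item of that publishing interval.

using System;
using System.Collections.Generic;
using Opc.Ua;
using Opc.Ua.Client;

static class BulkSubscription
{
    // Subscribes to the given variables and logs all changes of each
    // publishing interval in one callback, rather than item by item.
    public static Subscription Create(Session session, IEnumerable<NodeId> nodeIds)
    {
        var subscription = new Subscription(session.DefaultSubscription)
        {
            PublishingInterval = 1000, // at most one NotificationMessage per second
        };

        foreach (var nodeId in nodeIds)
        {
            subscription.AddItem(new MonitoredItem(subscription.DefaultItem)
            {
                StartNodeId = nodeId,
                AttributeId = Attributes.Value,
            });
        }

        // Invoked once per publish response, with all changed items in bulk.
        subscription.FastDataChangeCallback = (sub, notification, stringTable) =>
        {
            Console.WriteLine($"Publish with {notification.MonitoredItems.Count} changed item(s):");
            foreach (MonitoredItemNotification change in notification.MonitoredItems)
            {
                Console.WriteLine($"  clientHandle={change.ClientHandle} value={change.Value}");
            }
        };

        session.AddSubscription(subscription);
        subscription.Create();
        return subscription;
    }
}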
I am working on some Twilio-based functionality and have a total of 4 sprints to complete.
The first three sprints are over. The code where I connect/forward a call to a user is as follows:
response.Say(string.Concat("Please wait, transferring your call to ", strCallServiceUserName));
response.Dial(strUserDialToPhoneNumber, null, null, null, null, null, strCallerId);
return TwiML(response);
Let's just say that I am at my wit's end about completing the last sprint.
Here is what I need to do.
I need to cap the call duration at 10 minutes.
(I think I can handle this by changing the fourth parameter to the equivalent number of seconds.)
I need to know if there is any mechanism by which I can check when 9 minutes have elapsed in the call. At 9 minutes I just want to interrupt the call with a message...
Questions
How do I do this?
Do I need to create a conference room and dial into that, or can this be done without using a conference? Even if I create and use a conference room, the basic question remains: how do I determine that 9 minutes have elapsed in a call with a duration of 10 minutes?
The big-ticket question is: how do I find out when the 9 minutes have elapsed?
I have checked out this resource but could not find an answer here
Modifying live calls
Any help would be greatly appreciated.
Thanks in advance!
Question 1
Do I have to use a conference? I am asking because I want to limit the participants in that call to two. If I understand correctly, anyone else, say a third caller, could also dial into the same conference.
As per this resource on modifying live calls, I can fetch the in-progress call from
/2010-04-01/Accounts/{AccountSid}/Calls/{CallSid}
and then make an HTTP POST request to terminate the call by providing the parameters url, method, and status.
So, why does it have to be a conference call? Can't I use the timer to call into an ordinary call started with the <Dial> verb?
Twilio developer evangelist here.
First up, you can define how long a call lasts with the timeLimit parameter of the <Dial> element. The parameter is set in seconds, so setting it to 600 will get you a 10 minute call. You can also use named parameters, so you don't need positional parameters.
response.Say(string.Concat("Please wait, transferring your call to ", strCallServiceUserName));
var dial = new Dial(callerId: strCallerId, timeLimit: 600);
dial.Number(strUserDialToPhoneNumber);
response.Dial(dial);
return TwiML(response);
As for speaking a message with 1 minute to go: the best approach is actually to dial the callers into a conference, then, when the call connects, set up a timer that causes a script to dial into the conference as well (the conference will need a phone number assigned). When that call joins the conference, it can <Say> a message or <Play> an audio file.
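A hedged sketch of that timer, assuming the Twilio C# helper library (v5+) and assuming the callers were connected via <Dial><Conference>. The credentials and phone numbers are placeholders, and /announce is a hypothetical endpoint of yours that returns TwiML to <Say> the warning and join the same conference room.

using System;
using System.Threading.Tasks;
using Twilio;
using Twilio.Rest.Api.V2010.Account;
using Twilio.Types;

class OneMinuteWarning
{
    // Call this when the two parties are connected to the conference.
    public static async Task ScheduleAsync(string conferenceName)
    {
        TwilioClient.Init("ACXXXXXXXXXXXXXXXX", "your_auth_token"); // placeholders

        // Wait until one minute before the 10 minute timeLimit expires.
        await Task.Delay(TimeSpan.FromMinutes(9));

        // Dial into the conference; the hypothetical /announce endpoint
        // returns TwiML that speaks the warning and joins the named room.
        await CallResource.CreateAsync(
            to: new PhoneNumber("+15005550006"),   // placeholder: the number assigned to the conference
            from: new PhoneNumber("+15005550001"), // placeholder caller ID
            url: new Uri($"https://example.com/announce?room={conferenceName}"));
    }
}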
Let me know if that helps at all.
Update
The reason it needs to be a conference is so that the 1-minute warning can dial in and tell the callers. You cannot have a third call join a normal <Dial> between two numbers; a call with a third participant is, by definition, a conference call.
You could certainly have some logic on the webhook that connects the conference to ensure that only your two participants and your 1-minute warning are allowed to join this specific conference.
We are experiencing double Lambda invocations of Lambdas triggered by S3 ObjectCreated events. Those double invocations happen exactly 10 minutes after the first invocation, not 10 minutes after the first try completes, but 10 minutes after the first invocation happened. The original invocation takes anything in the range of 0.1 to 5 seconds. No invocation results in an error; they all complete successfully.
We are aware that SQS, for example, guarantees at-least-once rather than exactly-once delivery of messages, and we would accept some of the Lambdas getting invoked a second time as a consequence of the distributed system underneath. A delay of 10 minutes, however, sounds very weird.
Of about 10k messages, 100-200 result in double invocations.
The AWS Support basically says "the 10 minute wait time is by design but we cannot tell you why", which is not at all helpful.
Has anyone else experienced this behaviour before?
How did you solve the issue or did you simply ignore it (which we could do)?
One proposed solution is not to use direct S3-lambda-triggers, but let S3 put its event on SNS and subscribe a Lambda to that. Any experience with that approach?
Example log: two invocations, 10 minutes apart, same RequestId:
START RequestId: f9b76436-1489-11e7-8586-33e40817cb02 Version: 13
2017-03-29 14:14:09 INFO ImageProcessingLambda:104 - handle 1 records
and
START RequestId: f9b76436-1489-11e7-8586-33e40817cb02 Version: 13
2017-03-29 14:24:09 INFO ImageProcessingLambda:104 - handle 1 records
After a couple of rounds with AWS support and others, and a few isolated trial runs, it seems like this is simply "by design". It is not clear why, but it simply happens. The problem is neither S3 nor SQS/SNS but the Lambda invocation itself and how the Lambda service dispatches invocations to Lambda instances.
The double invocations happen for somewhere between 1% and 3% of all invocations, 10 minutes after the first invocation. Surprisingly, there are even triple (and probably quadruple) invocations, with rates that are powers of the base probability, so roughly 0.09% for triples; the triple invocations happened 20 minutes after the first one.
If you encounter this, you simply have to work around it using whatever you have access to. We, for example, now store the already-processed entities in Cassandra with a TTL of 1 hour and only respond to messages from the Lambda if the entity has not been processed yet. The double and triple invocations all happen within this one-hour timeframe.
Not wanting to spin up a data store like Dynamo just to handle this, I did two things to solve our use case (a sketch of both guards follows the list):
Write a lock file per function into S3 (which we were already using for this one) and check for its existence on function entry, aborting if present; for this function we only ever want one instance running at a time. The lock file is removed before we call the callback on error or success.
Write a request time in the initial event payload and check the request time on function entry; if the request time is too old then abort. We don't want Lambda retries on error unless they're done quickly, so this handles the case where a duplicate or retry is sent while another invocation of the same function is not already running (which would be stopped by the lock file) and also avoids the minimal overhead of the S3 requests for the lock file handling in this case.
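A hedged C# sketch of both guards, assuming the AWS SDK for .NET (AWSSDK.S3). The bucket name, key convention, and the 60-second freshness window are placeholders; note that the existence check followed by the put is not atomic, which is acceptable for this use case.

using System;
using System.Net;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

class InvocationGuards
{
    static readonly IAmazonS3 S3 = new AmazonS3Client();
    const string Bucket = "my-work-bucket"; // placeholder

    // Guard 1: abort if another invocation of this function holds the lock.
    // Check-then-put is not atomic, but good enough for this workaround.
    public static async Task<bool> TryAcquireLockAsync(string lockKey)
    {
        try
        {
            await S3.GetObjectMetadataAsync(Bucket, lockKey);
            return false; // lock file exists: another invocation is running
        }
        catch (AmazonS3Exception e) when (e.StatusCode == HttpStatusCode.NotFound)
        {
            await S3.PutObjectAsync(new PutObjectRequest
            {
                BucketName = Bucket,
                Key = lockKey,
                ContentBody = DateTime.UtcNow.ToString("o"),
            });
            return true;
        }
    }

    // Remove the lock before signalling success or failure.
    public static Task ReleaseLockAsync(string lockKey) =>
        S3.DeleteObjectAsync(Bucket, lockKey);

    // Guard 2: abort if the request time embedded in the event payload is
    // stale, which catches the delayed duplicate invocations cheaply.
    public static bool IsFresh(DateTime requestTimeUtc) =>
        DateTime.UtcNow - requestTimeUtc < TimeSpan.FromSeconds(60); // placeholder window
}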
I am learning Azure WebJobs and queues. In the company I work for, every night we run a calculation over roughly 7 million records. The calculation itself is quite simple, but the quantity is big (there are around 7 million records in the database).
As the company is moving to Azure, I am wondering whether an Azure queue can be used to handle this. In theory, I can write some code to populate the queue; since there is a 64 KB message size limit, I guess I can put about 10 records in one message, so there would be roughly 700k messages in total. Then I would have a WebJob triggered by the queue to do the calculation.
However, according to here, there is a maximum of 32 messages per retrieval.
Does this mean a maximum of 32 WebJob functions will be triggered concurrently? If that is the case, 700,000 / 32 = 21,875 batches, which means it will take quite a while to finish.
Is there a way to trigger more webjob to run concurrently other than the 32 limit?
You can get much higher concurrency by combining BatchSize and NewBatchThreshold. The best way to look at it is that the concurrency limit is the sum of the two flags. So if you leave BatchSize at 32 and set NewBatchThreshold to 100, the concurrency limit will be 132.
See https://github.com/Azure/azure-webjobs-sdk/issues/628 for more details.
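A hedged sketch of that configuration, assuming the WebJobs SDK 2.x JobHostConfiguration API; the queue name and function body are placeholders, and the host still needs the usual AzureWebJobsStorage connection string.

using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();
        config.Queues.BatchSize = 32;          // messages fetched per batch (hard limit: 32)
        config.Queues.NewBatchThreshold = 100; // fetch a new batch once pending work drops below this

        // Effective concurrency per host instance: 32 + 100 = 132.
        var host = new JobHost(config);
        host.RunAndBlock();
    }
}

public class Functions
{
    // Triggered once per queue message; up to 132 run concurrently per instance.
    public static void ProcessRecordBatch([QueueTrigger("calculation-queue")] string message)
    {
        // parse the ~10 records packed into this message and run the calculation
    }
}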
We are building a REST service that will take about 5 minutes to execute. It will only be called a few times a day by an internal app. Is there an issue with using a REST (i.e. HTTP) request that takes 5 minutes to complete?
Do we have to worry about timeouts? Should we be starting the request in a separate thread on the server and have the client poll for the status?
This is one approach.
Create a new request to perform ProcessXYZ
POST /ProcessXYZRequests
201-Created
Location: /ProcessXYZRequest/987
If you want to see the current status of the request:
GET /ProcessXYZRequest/987
<ProcessXYZRequest Id="987">
<Status>In progress</Status>
<Cancel method="DELETE" href="/ProcessXYZRequest/987"/>
</ProcessXYZRequest>
When the request is finished, you would see something like:
GET /ProcessXYZRequest/987
<ProcessXYZRequest>
<Status>Completed</Status>
<Results href="/ProcessXYZRequest/Results"/>
</ProcessXYZRequest>
Using this approach you can easily imagine what the following requests would give
GET /ProcessXYZRequests/Pending
GET /ProcessXYZRequests/Completed
GET /ProcessXYZRequests/Failed
GET /ProcessXYZRequests/Today
Assuming that you can configure HTTP timeouts in whatever framework you choose, you could issue a GET request and just wait for 5 minutes.
However, it may be more flexible to initiate an execution via a POST, get a receipt (a number/ID, whatever), and then perform a GET using that receipt 5 minutes later (and perhaps retry, given that your procedure won't take exactly 5 minutes every time). If the request is still ongoing, return an appropriate HTTP error code (404 perhaps, though what would you return for a GET with a non-existent receipt?); otherwise, return the results if available.
As Brian Agnew points out, 5 minutes is entirely manageable, if somewhat wasteful of resources, if one can control timeout settings. Otherwise, at least two requests must be made: The first to get the result-producing process rolling, and the second (and third, fourth, etc., if the result takes longer than expected to compile) to poll for the result.
Brian Agnew and Darrel Miller both suggest similar approaches for the two(+)-step approach: POST a request to a factory endpoint, starting a job on the server, and later GET the result from the returned result endpoint.
While the above is a very common solution, and indeed adheres to the letter of the REST constraints, it smells very much of RPC. That is, rather than saying, "provide me a representation of this resource", it says "run this job" (RPC) and then "provide me a representation of the resource that is the result of running the job" (REST). EDIT: I'm speaking very loosely here. To be clear, none of this explicitly defies the REST constraints, but it does very much resemble dressing up a non-RESTful approach in REST's clothing, losing out on its benefits (e.g. caching, idempotency) in the process.
As such, I would rather suggest that when the client first attempts to GET the resource, the server should respond with 202 "Accepted" (http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3), perhaps with "try back in 5 minutes" somewhere in the response entity. Thereafter, the client can poll the same endpoint to GET the result, if available (otherwise return another 202, and try again later).
Some additional benefits of this approach are that single-use resources (such as jobs) are not unnecessarily created, two separate endpoints need not be queried (factory and result), and likewise the second endpoint need not be determined from parsing the response from the first, thus simpler. Moreover, results can be cached, "for free" (code-wise). Set the cache expiration time in the result header according to how long the results are "valid", in some sense, for your problem domain.
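A hedged ASP.NET Core sketch of this single-endpoint pattern; the route, the IReportStore abstraction, and the Retry-After values are illustrative, not a definitive implementation.

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("reports/{id}")]
public class ReportsController : ControllerBase
{
    private readonly IReportStore _store; // hypothetical job/result store

    public ReportsController(IReportStore store) => _store = store;

    [HttpGet]
    public IActionResult Get(string id)
    {
        var report = _store.Find(id);
        if (report == null)
        {
            _store.StartBuilding(id); // the first GET kicks off the work
            Response.Headers["Retry-After"] = "300"; // "try back in 5 minutes"
            return Accepted();
        }
        if (!report.IsComplete)
        {
            Response.Headers["Retry-After"] = "60"; // still running, poll again
            return Accepted();
        }
        return Ok(report.Body); // the finished result; cacheable "for free"
    }
}

public interface IReportStore
{
    Report Find(string id);
    void StartBuilding(string id);
}

public record Report(string Body, bool IsComplete);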
I wish I could call this a textbook example of a "resource-oriented" approach, but, perhaps ironically, Chapter 8 of "RESTful Web Services" suggests the two-endpoint, factory approach. Go figure.
If you control both ends, then you can do whatever you want. E.g. browsers tend to launch HTTP requests with "connection close" headers so you are left with fewer options ;-)
Bear in mind that if you have NAT/firewalls in between, connections might get dropped if they are inactive for some time.
Could I suggest registering a "callback" procedure? The client issues the request with a "callback endpoint" to the server and gets a "ticket". Once the server finishes, it calls back the client; alternatively, the client can check the request's status using the ticket identifier.
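A hedged ASP.NET Core sketch of the ticket-plus-callback idea; the /jobs route, payload shape, and in-process background task are illustrative only (a real service would persist the ticket and use a durable worker).

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("jobs")]
public class JobsController : ControllerBase
{
    private static readonly HttpClient Http = new HttpClient();

    public record JobRequest(string Input, Uri CallbackUrl);

    [HttpPost]
    public IActionResult Start([FromBody] JobRequest request)
    {
        var ticket = Guid.NewGuid().ToString("N");

        // Run the long procedure in the background; on completion, POST
        // the result to the client's callback endpoint.
        _ = Task.Run(async () =>
        {
            var result = await RunLongProcedureAsync(request.Input); // stand-in worker
            await Http.PostAsync(request.CallbackUrl,
                new StringContent(result, Encoding.UTF8, "application/json"));
        });

        // The ticket also lets the client poll for status as a fallback.
        return Accepted(new { ticket });
    }

    private static Task<string> RunLongProcedureAsync(string input) =>
        Task.FromResult("{\"status\":\"done\"}"); // placeholder for the real 5-minute work
}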