I am working on some Twilio-based functionality and have a total of 4 sprints to complete.
The first three sprints are over. The code where I connect/forward a call to a user is as follows:
response.Say(string.Concat("Please wait, transferring your call to ", strCallServiceUserName));
response.Dial(strUserDialToPhoneNumber, null, null, null, null, null, strCallerId);
return TwiML(response);
Let's just say that I am at my wit's end about completing the last sprint.
Here is what I need to do.
I need to limit the call duration to 10 minutes.
(I think I can handle this by changing the 4th param to the equivalent number of seconds.)
I need to know if there is any mechanism by which I can check when 9 minutes have elapsed in the call. At 9 minutes I just want to interrupt the call with a message...
Questions
How do I do this?
Do I need to create a conference room and dial into that? Or, conversely, can this be done without using a conference? Even if I create and use a conference room, the basic question remains: how do I determine that 9 minutes have elapsed in a call with a duration of 10 minutes?
The big-ticket question is: how do I find out when the 9 minutes have elapsed?
I have checked out this resource but could not find an answer there:
Modifying live calls
Any help would be greatly appreciated.
Thanks in advance!
Question 1
Do I have to use a conference? The reason I am asking is that I want to limit the participants in that call to two. If I understand correctly, anyone else (a third caller) can also dial into the same conference.
As per this resource on modifying live calls, I can fetch the in-progress call from
/2010-04-01/Accounts/{AccountSid}/Calls/{CallSid}
and then make an HTTP POST request to terminate the call by providing the parameters url, method & status.
So, why does it have to be a conference call? Can't I use the timer to call into an ordinary call started with the <Dial> verb?
Twilio developer evangelist here.
First up, you can define how long a call lasts with the timeLimit parameter of the <Dial> element. The parameter is set in seconds, so setting it to 600 will get you a 10-minute call. You can also use named parameters, so you don't need positional parameters:
response.Say(string.Concat("Please wait, transferring your call to ", strCallServiceUserName));
var dial = new Dial(callerId: strCallerId, timeLimit: 600);
dial.Number(strUserDialToPhoneNumber);
response.Dial(dial);
return TwiML(response);
As for speaking a message with 1 minute to go: the best approach is to dial the callers into a conference, and when the call connects, set up a timer that causes a script to dial into the conference as well (the conference will need a phone number assigned). When it joins the conference, it can <Say> a message or <Play> an audio file.
Let me know if that helps at all.
Update
The reason it needs to be a conference is so the 1-minute warning can dial in and tell the callers. You cannot have a third call join a normal <Dial> between two numbers; a call with three participants is, by definition, a conference.
You could certainly have some logic on the webhook that connects the conference to ensure that only your two participants and your 1-minute warning are allowed to join this specific conference.
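Twilio conference webhooks are plain HTTP endpoints that return TwiML, so the gatekeeping can be a simple allow-list check on the caller ID. A rough sketch in C++, using the header-only cpp-httplib library purely for illustration (the phone numbers and room name are placeholders):

#include "httplib.h"
#include <set>
#include <string>

int main() {
    // The two expected participants plus the 1-minute-warning caller.
    const std::set<std::string> allowed = {
        "+15550001111", "+15550002222", "+15550009999" };

    httplib::Server svr;
    svr.Post("/conference", [&](const httplib::Request& req, httplib::Response& res) {
        // Twilio posts the caller ID in the "From" parameter.
        const std::string from = req.get_param_value("From");
        const std::string twiml = allowed.count(from)
            ? "<Response><Dial><Conference>my-room</Conference></Dial></Response>"
            : "<Response><Reject/></Response>";
        res.set_content(twiml, "text/xml");
    });
    svr.listen("0.0.0.0", 8080);
}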
I want to schedule a delivery rather than use the ETA the Postmates API provides in its response, because the customer wants the delivery at a specific time.
How do I schedule an ETA?
I didn't find documentation for this in the Postmates API documentation.
I am using the CREATE DELIVERY API endpoint to generate the order delivery.
You can pass params like these (a sketch of computing a valid window follows the list):
Pickup and dropoff windows are specified using pickup_ready_dt, pickup_deadline_dt, dropoff_ready_dt, and dropoff_deadline_dt.
pickup_ready_dt must be less than 30 days in the future.
pickup_deadline_dt must be at least 10 mins later than pickup_ready_dt and at least 20 minutes in the future, thus providing a realistic pickup window.
dropoff_ready_dt must be less than or equal to pickup_deadline_dt. This is to prevent a scenario where a courier has to hold onto an order between the pickup and dropoff windows.
dropoff_deadline_dt must be at least 20 mins later than dropoff_ready_dt, thus providing a realistic dropoff window.
dropoff_deadline_dt must be greater than or equal to pickup_deadline_dt.
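For illustration, here is a minimal sketch (plain C++) of computing a window that satisfies all of those constraints; the field names come from the rules above, while the specific offsets are just an example:

#include <chrono>
#include <cstdio>
#include <ctime>
#include <string>

// Format a time_point as an ISO-8601 UTC timestamp, e.g. 2024-01-01T12:00:00Z.
static std::string iso8601(std::chrono::system_clock::time_point tp) {
    std::time_t t = std::chrono::system_clock::to_time_t(tp);
    char buf[32];
    std::strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", std::gmtime(&t));
    return buf;
}

int main() {
    using namespace std::chrono;
    auto now = system_clock::now();

    // Pickup window opens in 30 min (>= 20 min in the future, < 30 days)
    // and stays open 15 min (>= 10 min).
    auto pickup_ready    = now + minutes(30);
    auto pickup_deadline = pickup_ready + minutes(15);

    // Dropoff window opens exactly when pickup closes
    // (dropoff_ready_dt <= pickup_deadline_dt) and stays open 30 min
    // (>= 20 min, and dropoff_deadline_dt >= pickup_deadline_dt).
    auto dropoff_ready    = pickup_deadline;
    auto dropoff_deadline = dropoff_ready + minutes(30);

    std::printf("pickup_ready_dt=%s\n",     iso8601(pickup_ready).c_str());
    std::printf("pickup_deadline_dt=%s\n",  iso8601(pickup_deadline).c_str());
    std::printf("dropoff_ready_dt=%s\n",    iso8601(dropoff_ready).c_str());
    std::printf("dropoff_deadline_dt=%s\n", iso8601(dropoff_deadline).c_str());
}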
I am building a 1-vs-1 Tetris game using cocos2d-x (C++) and node.js (WebSocket).
Currently, I am working on a PVP feature between two players.
The basic logic looks like this:
1. Client1 completes a horizontal line and destroys it.
2. Client1 then sends an attack packet to the server.
3. The server sends the attack packet to Client2.
4. Client2 gets an additional line.
The problem is that this is a little bit slow, even though I use the update() function at 10 frames per second. The delay must be caused by the long route (Client1 => server => Client2).
I am thinking about sending the packet to the opponent directly.
In this case, the client1 needs to know the IP address of the client2.
My questions are:
1) In a real game, isn't it dangerous for client2 to let client1 know its IP address?
2) In general, how do game developers handle this kind of latency issue?
1) In a real game, isn't it dangerous for client2 to let client1 know its IP address?
It's generally not a good idea to expose a user's IP address, at least without notice. While it may be legal (I'm not a lawyer), if people do bad things with it, the user may get upset and complain to you.
2) In general, how do game developers handle this kind of latency issue?
First, what latency did you measure? Have you tried hosting the server on your local network to see how it performs? To deal with this problem, we usually define the required latency (e.g. 100 ms, 200 ms, 500 ms) and design the game so that information can be propagated in a delayed manner without impacting the gaming experience. In your case of attack logic, the usual trick is a charge timer that initiates the attack, so that both clients agree on the same wall-clock time for the actual attack. So:
1. Client1 completes a horizontal line and destroys it.
2. Client1 sends an attack packet to the server and starts a 1-second charging timer (it may show a charging bar).
3. The server forwards the attack packet to Client2.
4. Client2 starts a timer and shows the bar. Note that you need to adjust the time to account for the round trip.
5. Client2 gets the additional line 1 second after step 2, and Client1 shows the exact same scene at the same time.
Note that Client2 actually gets attacked later than in your original design, but since the player sees a charging bar, the experience still feels smooth.
You may also want to adjust the "1 second" according to your latency requirement.
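For illustration, a rough sketch of that adjustment in plain C++ (the names and the half-round-trip latency estimate are assumptions, not any engine API):

#include <algorithm>
#include <cstdint>

constexpr int64_t kChargeMs = 1000;   // the "1 second"; tune to your latency budget

// Client 1: on clearing a line, send the attack packet to the server and
// start a local kChargeMs charging bar.
//
// Client 2: on receiving the forwarded packet, shorten the charge time by
// the estimated one-way delay so both screens resolve the attack together.
int64_t remainingChargeMs(int64_t measuredRoundTripMs) {
    int64_t oneWay = measuredRoundTripMs / 2;          // rough one-way estimate
    return std::max<int64_t>(0, kChargeMs - oneWay);   // never negative
}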
I've got a service system that gets requests from another system. A request contains information that is stored on the service system's MySQL database. Once a request is received, the server should start a timer that will send a FAIL message to the sender if the time has elapsed.
The problem is, it is a dynamic system that can get multiple requests from the same or various sources. If a request is received from one source with a timeout limit of 5 minutes, and another request comes from the same source after only 2 minutes, it should be able to handle both. Thus, a timer needs to be set for every incoming message. The service is a web service programmed in C++, with the information stored in a MySQL database.
Any ideas how I could do this?
A way I've often seen this done: use a SINGLE timer and keep a priority queue (sorted by target time) of every timeout. That way, you always know how long to wait until the next timeout, and you avoid the overhead of managing hundreds of timers simultaneously.
Say at time 0 you get a request with a timeout of 100.
Queue: [100]
You set your timer to fire in 100 seconds.
Then at time 10 you get a new request with a timeout of 50.
Queue: [60, 100]
You cancel your timer and set it to fire in 50 seconds.
When it fires, it handles the timeout, removes 60 from the queue, sees that the next time is 100, and sets the timer to fire in 40 seconds. Say you get another request with a timeout of 100, at time 80.
Queue: [100, 180]
In this case, since the head of the queue (100) doesn't change, you don't need to reset the timer. Hopefully this explanation makes the algorithm pretty clear.
Of course, each entry in the queue will need some link to the request associated with the timeout, but I imagine that should be simple.
Note however that this all may be unnecessary, depending on the mechanism you use for your timers. For example, if you're on Windows, you can use CreateTimerQueue, which I imagine uses this same (or very similar) logic internally.
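For a portable version of the same idea, here is a minimal sketch using std::priority_queue and a condition variable (the FAIL-sending callback and the lack of shutdown handling are simplifications):

#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using Clock = std::chrono::steady_clock;

struct Timeout {
    Clock::time_point when;
    std::function<void()> onExpire;   // e.g. send FAIL back to the sender
    bool operator>(const Timeout& o) const { return when > o.when; }
};

class TimerQueue {
public:
    TimerQueue() { std::thread([this] { run(); }).detach(); }  // sketch: no shutdown

    void add(Clock::duration timeout, std::function<void()> onExpire) {
        std::lock_guard<std::mutex> g(m_);
        q_.push({Clock::now() + timeout, std::move(onExpire)});
        cv_.notify_one();             // the head of the queue may have changed
    }

private:
    void run() {
        std::unique_lock<std::mutex> lk(m_);
        for (;;) {
            if (q_.empty()) {
                cv_.wait(lk);         // sleep until the first timeout is added
            } else if (cv_.wait_until(lk, q_.top().when) == std::cv_status::timeout) {
                Timeout t = q_.top(); // the head expired: fire its callback
                q_.pop();
                lk.unlock(); t.onExpire(); lk.lock();
            }
            // On notify (or spurious wakeup) the loop re-reads q_.top(),
            // which re-arms the wait if an earlier deadline was inserted.
        }
    }

    std::priority_queue<Timeout, std::vector<Timeout>, std::greater<Timeout>> q_;
    std::mutex m_;
    std::condition_variable cv_;
};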
At the moment I am writing a turn-based game for the iOS platform. The client is written in Objective-C with CocoaTouch, and the server is written in C++ for the Ubuntu Server OS. The server is connected to a MySQL database, in which it stores user & game data.
Right now I wish to implement a time-per-turn restriction, and this has to be done on the server side. When a user takes a turn, the next user will have a maximum of 24 hours to answer; otherwise I want the game to skip that user's turn and move on to the next player. I have some ideas about how to do this, but I am not sure if they are any good. What I've been thinking of is storing the date & time of the last turn taken as an entity related to the Game table in my SQL database. Then I'm thinking of launching a thread on the server which runs until termination and, say every minute or so, compares the current time against every game's last turn. If it's been more than 24 hours since the last turn was taken, this thread allows the next player in the queue to take their turn and skips the lazy player.
Does it sound over-complicated? Is there another, more simple way to do this? I know it's been done in many games before, I just don't know how. Thanks in advance!
I don't think you need any threads or background processes at all in this case.
What I see here is a simple algorithm:
When a user logs in to the game/match, check the last turn's ending time in the database,
If the time elapsed since the last turn ended is greater than 24h, take the current time, subtract the time from the database (obviously you need to convert both times into hours) and divide the difference by 24,
If the division yields an odd number, it's the turn of the other player (player A),
If the division yields an even number, it's the turn of player B,
Set the database time to databaseTime + division * 24.
This algorithm can skip multiple turns: when player A finishes his move and 48h have passed, it's player B's turn.
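A minimal sketch of that computation in C++ (the hours-since-some-epoch representation and the player naming are assumptions):

#include <cstdint>

enum class Player { A, B };

// lastTurnEndHours: the stored end time of the last completed turn, in hours.
// nowHours: the current time, in hours. nextPlayer: whoever was due to move.
Player whoseTurn(int64_t lastTurnEndHours, int64_t nowHours, Player nextPlayer) {
    int64_t skipped = (nowHours - lastTurnEndHours) / 24;  // whole missed turns
    if (skipped % 2 == 0) return nextPlayer;               // even skips: same player
    return nextPlayer == Player::A ? Player::B : Player::A;
}
// After resolving, store lastTurnEndHours + skipped * 24 back to the database
// so later logins don't count the same missed turns twice.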
You probably just want a background process that has a schedule of "next actions" to take, a sort of priority queue you can work through as the events should be triggered.
A single process can handle a lot of independent games if you design the server properly. The architecture would pick up an event, load any associated data, dispatch accordingly, and then go back to waiting for new events.
C++ does have frameworks for this, but you could prototype it in NodeJS or Python's Twisted really quickly.
Please look at the reactor pattern (boost.asio, ACE). These frameworks are asynchronous, use an event-driven model, and require no threads. Below is pseudo code showing how you can solve it:
reactor.addTCPListener(acceptSock(), Handler::AcceptSock) // calls AcceptSock when accepting a new TCP connection
reactor.addTCPListener(clientSock, Handler::ClientData) // calls ClientData when a user sends the server game state (their move, status, etc.)
...
// later on, somewhere in the server's main loop:
...
for (set<Game*>::iterator it = games.begin(); it != games.end(); ++it) {
    (*it)->checkTurn(); // this call can check the timestamps recorded by ClientData
}
Summary:
With the reactor pattern you will be able to have a non-blocking server that can do cleanup tasks when it is not handling IO. That cleanup can be comparing timestamps to switch/pass turns.
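For a concrete flavor, here is a minimal boost.asio sketch: a single repeating timer sweeps the games for expired turns on the same event loop that would serve the sockets (Game and checkTurn are the placeholders from the pseudo code above):

#include <boost/asio.hpp>
#include <chrono>
#include <functional>
#include <vector>

struct Game {
    void checkTurn() { /* compare stored timestamps, pass the turn if overdue */ }
};

int main() {
    boost::asio::io_context io;
    std::vector<Game> games(10);
    boost::asio::steady_timer sweep(io, std::chrono::minutes(1));

    std::function<void(const boost::system::error_code&)> onTick =
        [&](const boost::system::error_code& ec) {
            if (ec) return;                       // timer was cancelled
            for (auto& g : games) g.checkTurn();  // the cleanup work
            sweep.expires_after(std::chrono::minutes(1));
            sweep.async_wait(onTick);             // re-arm for the next sweep
        };
    sweep.async_wait(onTick);

    // addTCPListener-style socket handlers would be registered on io as well.
    io.run();
}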
We are building a REST service that will take about 5 minutes to execute. It will be only called a few times a day by an internal app. Is there an issue using a REST (ie: HTTP) request that takes 5 minutes to complete?
Do we have to worry about timeouts? Should we be starting the request in a separate thread on the server and have the client poll for the status?
This is one approach.
Create a new request to perform ProcessXYZ
POST /ProcessXYZRequests
201 Created
Location: /ProcessXYZRequest/987
If you want to see the current status of the request:
GET /ProcessXYZRequest/987
<ProcessXYZRequest Id="987">
<Status>In progress</Status>
<Cancel method="DELETE" href="/ProcessXYZRequest/987"/>
</ProcessXYZRequest>
When the request is finished, you would see something like:
GET /ProcessXYZRequest/987
<ProcessXYZRequest>
<Status>Completed</Status>
<Results href="/ProcessXYZRequest/Results"/>
</ProcessXYZRequest>
Using this approach you can easily imagine what the following requests would return:
GET /ProcessXYZRequests/Pending
GET /ProcessXYZRequests/Completed
GET /ProcessXYZRequests/Failed
GET /ProcessXYZRequests/Today
Assuming that you can configure HTTP timeouts using whatever framework you choose, then you could request via a GET and just hang for 5 mins.
However it may be more flexible to initiate an execution via a POST, get a receipt (a number/ID, whatever), and then perform a GET using that receipt 5 minutes later (perhaps with retries, given that your procedure won't take exactly 5 minutes every time). If the request is still ongoing, return an appropriate HTTP error code (404 perhaps, but what would you return for a GET with a non-existent receipt?); otherwise return the results if available.
As Brian Agnew points out, 5 minutes is entirely manageable, if somewhat wasteful of resources, if one can control timeout settings. Otherwise, at least two requests must be made: The first to get the result-producing process rolling, and the second (and third, fourth, etc., if the result takes longer than expected to compile) to poll for the result.
Brian Agnew and Darrel Miller both suggest similar approaches for the two(+)-step approach: POST a request to a factory endpoint, starting a job on the server, and later GET the result from the returned result endpoint.
While the above is a very common solution, and indeed adheres to the letter of the REST constraints, it smells very much of RPC. That is, rather than saying, "provide me a representation of this resource", it says "run this job" (RPC) and then "provide me a representation of the resource that is the result of running the job" (REST). EDIT: I'm speaking very loosely here. To be clear, none of this explicitly defies the REST constraints, but it does very much resemble dressing up a non-RESTful approach in REST's clothing, losing out on its benefits (e.g. caching, idempotency) in the process.
As such, I would rather suggest that when the client first attempts to GET the resource, the server should respond with 202 "Accepted" (http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3), perhaps with "try back in 5 minutes" somewhere in the response entity. Thereafter, the client can poll the same endpoint to GET the result, if available (otherwise return another 202, and try again later).
Some additional benefits of this approach are that single-use resources (such as jobs) are not unnecessarily created, two separate endpoints (factory and result) need not be queried, and the second endpoint need not be determined by parsing the response from the first, which is simpler. Moreover, results can be cached "for free" (code-wise): set the cache expiration time in the result header according to how long the results remain "valid", in some sense, for your problem domain.
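A minimal sketch of this 202-then-poll flow, assuming the header-only cpp-httplib library on the server (any HTTP framework works the same way):

#include "httplib.h"
#include <atomic>
#include <mutex>
#include <string>
#include <thread>

std::mutex m;
std::string result;                 // empty until the job finishes
std::atomic<bool> started{false};

int main() {
    httplib::Server svr;

    svr.Get("/report", [](const httplib::Request&, httplib::Response& res) {
        if (!started.exchange(true)) {
            std::thread([] {        // the first GET kicks off the 5-minute job
                std::string r = /* ... run the expensive work ... */ "done";
                std::lock_guard<std::mutex> g(m);
                result = r;
            }).detach();
        }
        std::lock_guard<std::mutex> g(m);
        if (result.empty()) {
            res.status = 202;                        // Accepted: not ready yet
            res.set_header("Retry-After", "300");    // "try back in 5 minutes"
            res.set_content("Still processing", "text/plain");
        } else {
            res.set_content(result, "text/plain");   // the cacheable result
        }
    });

    svr.listen("0.0.0.0", 8080);
}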
I wish I could call this a textbook example of a "resource-oriented" approach, but, perhaps ironically, Chapter 8 of "RESTful Web Services" suggests the two-endpoint, factory approach. Go figure.
If you control both ends, then you can do whatever you want. E.g. browsers tend to launch HTTP requests with "connection close" headers so you are left with fewer options ;-)
Bear in mind that if you've got NAT/firewalls in between, you might get dropped connections if they are inactive for some time.
Could I suggest registering a "callback" procedure? The client issues the request with a "callback endpoint" to the server and gets a "ticket". Once the server finishes, it calls back the client... or the client can check the request's status using the ticket identifier.