NSURLConnection works in Simulator but not from iPad - Django

I have an iOS app that inserts a record into a MySQL database via a Python/Django server, and then immediately queries that information back.
I am using [NSURLConnection sendAsynchronousRequest:queue:completionHandler:] with a processing block that parses the JSON object returned by the server.
The insert statement and the query statement are run from separate sendAsynchronousRequest calls, but the latter is issued from the completionHandler block of the former, which should prevent any race condition.
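For context, the nesting looks roughly like this (insertRequest and queryRequest stand in for the actual NSURLRequest objects, and the parsing is trimmed down):

[NSURLConnection sendAsynchronousRequest:insertRequest
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    // The insert has finished (or failed) by the time this block runs,
    // so the follow-up query is only started afterwards.
    [NSURLConnection sendAsynchronousRequest:queryRequest
                                       queue:[NSOperationQueue mainQueue]
                           completionHandler:^(NSURLResponse *queryResponse, NSData *queryData, NSError *queryError) {
        // Parse the JSON object returned by the server
        id json = [NSJSONSerialization JSONObjectWithData:queryData options:0 error:NULL];
        // ... use the parsed result ...
    }];
}];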
On the simulator, this works perfectly and returns the inserted data 100% of the time.
On a physical iPad running iOS 7, the response never contains the just-inserted data.
If I insert a long pause between the two sendAsynchronousRequest calls using [performSelector:withObject:afterDelay:] it does eventually work -- but I need a delay of 2-3 seconds.
On the simulator, the time between the insert and the correct query response is less than 500 ms, so such a long delay should not be necessary, and the difference does not seem to be explained by the simulator simply executing faster.
I have tried Cache-Control: max-age=0 and Expires headers in my server code, and neither makes a difference, so I don't currently believe this is an NSURLConnection caching issue.
What else am I overlooking here? Why does this work so perfectly on the Simulator but not on a physical iPad?
Thanks!

I finally figured this out.
My server code was filtering results based on a calculated time period that ended "now". The clock on my iPad was a few seconds ahead of the server's, while my Mac's clock was in sync with the server. As a result, the records posted from the iPad were getting filtered out.
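For illustration, here is a sketch of the kind of Django filter that can cause this, and one way to make it tolerant of small clock skew (the model and field names are hypothetical, not my actual schema):

from datetime import timedelta
from django.utils import timezone

# Hypothetical model with a client-supplied timestamp column
from myapp.models import Reading

def recent_readings(window=timedelta(minutes=10),
                    skew=timedelta(seconds=30)):
    now = timezone.now()
    # A filter that ends exactly at "now" drops any record stamped a few
    # seconds ahead of the server clock. Allowing a small skew past "now"
    # keeps those rows visible.
    return Reading.objects.filter(
        created_at__gte=now - window,
        created_at__lte=now + skew,
    )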

Limit a Game Server Thread loop to 30FPS without a game engine/rendering window

I'll try to explain this as easily as I can.
I basically have a game that hosts multiple matches.
All matches are processed by the server itself; no player hosts a match on their own computer via port forwarding, it's all done by the server so they don't have to.
When a player requests to make a match, a match object with a thread is created on the server. This works fine. Each match object has its own list of players. However, the game client runs at 30 FPS and needs to stay in sync with the server, so simply updating all the players in the thread loop won't do, since that loop doesn't run at 30 FPS.
What I'm doing right now is using a game engine, SFML, whose window loop goes through all the players on the server and runs their update code at 30 FPS.
This is fine for now; however, once there is a large number of players, it would be better for the players to be updated from the match threads, so as not to slow everything down by doing it all in one render loop.
What I want to know is: how would I simulate 30 FPS in a match's thread loop? Basically, have each player's update() function called on a 30 FPS schedule, without having to use a rendering engine such as SFML to limit how fast the code runs. The code runs in the background and output is shown on a console; no rendering is needed on the server.
Or, put simply: how do I limit a while loop to run at 30 FPS without a game engine?
This seems a bit like an XY problem: you are trying to sync players via a server "FPS" limit, but the server side doesn't really work like that.
Servers sync with clients in terms of time by passing the server's time to the client on each packet, or via a specific packet (in other words, all clients use only the server's time).
But regarding server-side implementations for gaming:
The problem is a bit broader than what you mentioned. I'll post some guidelines which will hopefully help your research.
First of all, the server doesn't need to render anything, so FPS is not relevant there (30 FPS is only required to give our eyes the sensation of fluidity). The server usually handles logic, for example various events (someone fires a rocket, or a new enemy has spawned). As you can see, events don't require an FPS; they are triggered at arbitrary times.
Something else that is done on the server is the physics (or other player-player or player-environment interactions). Collisions are done using a fixed update step; physics are usually calculated at 20 FPS, for example. Moving objects get capsule colliders in order to properly simulate interaction. In other words, servers, while not rendering anything, are responsible for movement/collisions (meaning that if you don't have a connection to the server you won't move, or you'll go through walls, etc. -- implementation dependent).
Modern games also use prediction in order to reduce perceived lag (after all, any input you give your character needs to reach the server first, be processed, and be received back before it has any effect on the client). This means that when there is input on the client (let's take movement, for example), the client starts performing the action in anticipation, and when the server response comes (for our example, the new position) it is treated as a correction. That's why in a laggy game you sometimes perceive that you are moving in one direction and then all of a sudden you are somewhere completely different.
Regarding your question:
Inside your while loop, measure a deltaT, and if that deltaT is less than 33 milliseconds, sleep for (33 - deltaT) milliseconds.
As you requested, here is a sample of the deltaT code:
// Requires <windows.h> for GetTickCount() and Sleep()
while (gameIsRunning)
{
    // Time one full game update, in milliseconds
    double time = GetTickCount();
    UpdateGame();
    double deltaT = GetTickCount() - time;

    // If the update finished in less than one 30 FPS frame (~33 ms),
    // sleep away the remainder of the frame
    if (deltaT < 33)
    {
        Sleep((DWORD)(33 - deltaT));
    }
}
Where gameIsRunning is a global boolean and UpdateGame is your game update function.
Please note that the code above works on Windows. On Linux you will need other functions instead of GetTickCount and Sleep.
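For reference, here is a minimal sketch of the same frame limiter for Linux, assuming a POSIX environment with clock_gettime and nanosleep (gameIsRunning and UpdateGame are the same placeholders as above):

#include <stdbool.h>
#include <time.h>     /* clock_gettime, nanosleep */

extern bool gameIsRunning;   /* same global flag as above */
void UpdateGame(void);       /* your game update function */

/* Current time in milliseconds from a monotonic clock */
static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1.0e6;
}

void run_match(void)
{
    while (gameIsRunning)
    {
        double start = now_ms();
        UpdateGame();
        double deltaT = now_ms() - start;

        if (deltaT < 33.0)
        {
            /* Sleep away the remainder of the ~33 ms frame */
            struct timespec rest;
            rest.tv_sec  = 0;
            rest.tv_nsec = (long)((33.0 - deltaT) * 1.0e6);
            nanosleep(&rest, NULL);
        }
    }
}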

CFStoredProc Timing Out

I have a very basic app that plugs data into a stored procedure which in turn returns a recordset. I've been experiencing what I thought were 'timeouts'. However, I'm now no longer convinced that this is what is really happening. The reason is that the DBA and I watched SQL Server Spotlight to see when the stored procedure finished processing. As soon as the procedure finished processing and returned a recordset, the ColdFusion page returned a 'timeout' error. I'm finding this to be consistent whenever the procedure takes longer than a minute. To prove this, I created a stored procedure with nothing more than this:
BEGIN
    WAITFOR DELAY '00:00:45';
    SELECT TOP 1000 *
    FROM AnyTableName
END
If I run it for 59 seconds I get a result back in ColdFusion. If I change it to one minute:
WAITFOR DELAY '00:01';
I get a cfstoredproc timeout error. I've tried running this in different instances of ColdFusion on the same server and against different databases/datasources. Now, what is strange is that I have other procedures that run longer than a minute and return a result. I've even tried this locally on my desktop with ColdFusion 10 and get the same result. At this point, I'm out of places to look, so I'm reaching out for other things to try. I've also increased the timeout in the datasource connections and that didn't help. I even tried ColdFusion 10 with the timeout attribute, but no luck there either. What is consistent is that the timeout error is displayed when the query completes.
Also, I tried adding the WAITFOR in a cfquery and the same thing happened: it worked when set to 59 seconds but timed out when changed to a minute. I can change the SQL to SELECT TOP 1 and there is no difference in the result.
Per the comments, it looks like your request timeout is set to sixty seconds.
Use cfsetting to extend your timeout to whatever you need.
<cfsetting requesttimeout = "{numberOfSeconds}">
The default timeout for all pages is 60 seconds; you need to change this in the ColdFusion Administrator if it is not enough, but most pages should not run this long.
Take some time to familiarise yourself with the ColdFusion Administrator and all its settings to avoid this kind of head scratching.
As stated, use the cfsetting tag to override the timeout for specific pages.

Suspend and resume ColdFusion code for server intensive process

I have a loop that runs over a couple thousand records, and for each record it hits it does some image resizing and manipulation on the server. The process runs well in testing over a small record set, but when it moves to the live server I would like to suspend and resume the process after every 50 records so the server is not taxed to the point of slow performance or quitting altogether.
The code looks like this:
<cfloop query="imageRecords">
    <!--- create and save images to server - sometimes 3-7 images for each record --->
</cfloop>
Like I said, I would like to pause after 50 records, then resume where it left off. I looked at cfschedule but was unsure of how to work that into this.
I also looked at the sleep() function but the documentation talks about using this within cfthread tags. Others have posted about using it to simulate long processes.
So, I'm not sure sleep() can be safely used in the fashion I need it to.
Server is CF9 and db is MySQL.
I would create a column called worked in the database that defaults to 0, and once the images for a record have been processed, set the flag to 1. Since the database is MySQL, your query can be something like
SELECT imagename
FROM images
WHERE worked = 0
LIMIT 50
Then set up a CF scheduled task to run every x minutes
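As a rough sketch (the datasource name, primary key column, and the image work itself are placeholders), the page the scheduled task requests could look like:

<cfquery name="pending" datasource="myDSN">
    SELECT id, imagename
    FROM images
    WHERE worked = 0
    LIMIT 50
</cfquery>

<cfloop query="pending">
    <!--- create and save the images for this record here --->
    <cfquery datasource="myDSN">
        UPDATE images
        SET worked = 1
        WHERE id = <cfqueryparam value="#pending.id#" cfsqltype="cf_sql_integer">
    </cfquery>
</cfloop>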
Here's a different approach which you could combine with probably any of the other approaches:
<cfscript>
    // java.lang.Thread class reference, used both for the priority
    // constants and to grab the thread serving this request
    ThreadClass = CreateObject("java", "java.lang.Thread");
    th = ThreadClass.currentThread();
    // Drop this request's priority while the heavy image work runs
    th.setPriority(ThreadClass.MIN_PRIORITY);
    // Your work here
    // Restore normal priority once the work is done
    th.setPriority(ThreadClass.NORM_PRIORITY);
</cfscript>
That ought to give the thread a lower priority than the other threads serving your other requests. In theory, you'll still get your work done in the shortest time, but with less effect on your other users. I've not had the opportunity to test this yet, so your mileage may vary.

How do I detect an aborted connection in Django?

I have a Django view that does some pretty heavy processing and takes around 20-30 seconds to return a result.
Sometimes the user will end up closing the browser window (terminating the connection) before the request completes -- in that case, I'd like to be able to detect this and stop working. The work I do is read-only on the database so there isn't any issue with transactions.
In PHP the connection_aborted function does exactly this. Is this functionality available in Django?
Here's example code I'd like to write:
def myview(request):
    while not connection_aborted():
        # do another bit of work...
        if work_complete:
            return HttpResponse('results go here')
Thanks.
I don't think Django provides it, because it basically can't. More than Django itself, this depends on the way Django interfaces with your web server, and all of that depends on your software stack (which you have not specified). I don't think it's even part of the FastCGI or WSGI protocols!
Edit: I'm also pretty sure that Django does not start sending any data to the client until your view finishes execution, so it can't possibly know if the connection is dead. The underlying socket won't trigger an error unless the server tries to send some data back to the user.
That connection_aborted function in PHP doesn't do what you think it does. It will tell you if the client disconnected, but only once the output buffer has been flushed, i.e. some sort of response has been sent from the server back to the client. The PHP version wouldn't even work as you've written it above; you'd have to add a call to something like flush within your loop so the server attempts to send data.
HTTP is a stateless protocol. It's designed so that neither the client nor the server depends on the other's state. As a result, the state of either end is only known while a connection is active, and that only happens when there is some data to send one way or the other.
Your best bet is to do as #MattH suggested and do this through a bit of AJAX, and if you'd like you can integrate something like Node.js to make client "check-ins" during processing. How to set that up properly is beyond my area of expertise, though.
So you have an AJAX view, requested in the background of a rendered page, that runs a query taking 20-30 seconds to process, and you're concerned about wasted resources when someone cancels the page load.
I see that you've got options in three broad categories:
Live with it. Improve the situation by caching the results in case the user comes back.
Make it faster. Throw more space at a time/space trade-off. Maintain intermediate tables. Precalculate the entire thing, etc.
Do something clever with the browser fast-polling an "is it ready yet?" query and the server cancelling the query if it doesn't receive a nag within interval * 2 or similar. If you're really clever, you could return progress / ETA with the nags. However, this might not behave particularly well when the system is under load or your site is being accessed over limited bandwidth.
I don't think you should go for option 3 because it's increasing complexity and resource usage for not much gain.

C Web Server and Chrome Dev tools question

I recently started diving into HTTP programming in C and have a functioning server that can handle GET and POST. My question concerns my site's load times and how I should send the response headers and response body.
I notice in Chrome's resource tracking tool that there is almost no (a few ms) connecting/sending/proxy/blocking/waiting time in most cases (on the same network as the server), but the receive time can vary wildly. I'm not entirely sure what the receive time includes. I mostly see a long receive time (40 to 140 ms or more) on the PNG files and sometimes the JavaScript files, rarely on other files, but it's not really consistent.
Could anyone shed some light on this for me?
I haven't done much testing yet, but I was wondering if changing the method I use to send the header/message would help. I currently have every file for the site cached in server memory along with its header (all in the same char*). When I send the file that was requested, I just do one send() call with the header/file combo (it does not involve any string operations because it is all done in advance at server start-up).
Would it be better to break it into multiple small send() calls?
Just some stats I get with the Chrome dev tools (again, on the local network through a wireless router connection): the site loads in 120 ms to 570 ms. It's 19 files at a total of 139.85 KB. The computer it's on is an Asus 901 netbook (Atom 1.6 GHz, 2 GB DDR2) running TinyCore Linux. I know there are some optimizations I could be doing with how threads start up and a few other things, but I'm not sure that's affecting it too much at the moment.
If you're sending the entire response in one send(), you should set the TCP_NODELAY socket option.
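For reference, here is a minimal sketch of setting that option in a POSIX C server; call it on each accepted connection socket (client_fd is a placeholder for your descriptor):

#include <netinet/in.h>    /* IPPROTO_TCP */
#include <netinet/tcp.h>   /* TCP_NODELAY */
#include <stdio.h>         /* perror */
#include <sys/socket.h>    /* setsockopt */

/* Disable Nagle's algorithm so the single send() of header + body is
   pushed out immediately instead of waiting to coalesce more data. */
static void disable_nagle(int client_fd)
{
    int flag = 1;
    if (setsockopt(client_fd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) < 0)
        perror("setsockopt(TCP_NODELAY)");
}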
If that doesn't help, you may want to try using a packet capturing tool like Wireshark to see if you can spot where the delay is introduced.