ColdFusion / browser timeout

I have a template doing a boatload of manipulations; I expect it to take 30-45 minutes to complete its processing. I've had some success setting my application and session variables to time out at 2 hours, and I've set my request timeout to 9999 seconds (which should be about 2.77 hours).
However, there seems to be a magic threshold somewhere around the 20-minute mark: my browser goes to a white screen (no output), and it appears as though the CF engine has also stopped working on my task.
Can anyone suggest a reliable way to keep this process going until it's done or my astronomical timeout occurs? In addition, is there any way to push feedback to the browser so it doesn't time out? I've tried cfflush, but that doesn't seem to do it.

You could use cfthread to run the process in a separate thread, and then on the page you are accessing in the browser you could use JavaScript to periodically poll the system to check on its status. For example, inside the long-running process in cfthread, as you work through, you could set application variables indicating that the process is still running and how far along it is, and retrieve and report those in the browser. When it's complete, you could clear the variables, or set a complete flag, etc., and your browser report page will be able to indicate that it is complete.
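Something along these lines, as an untested sketch (the thread name, the application.jobStatus struct, and getRecordsToProcess() are placeholders I've invented):

<!--- kick the heavy work off in its own thread so the request returns right away --->
<cfset application.jobStatus = { running = true, processed = 0, total = 0 } />
<cfthread action="run" name="longJob">
    <cfset records = getRecordsToProcess() /> <!--- placeholder for your own lookup --->
    <cfset application.jobStatus.total = records.recordCount />
    <cfloop query="records">
        <!--- ... do the manipulation for this record ... --->
        <cfset application.jobStatus.processed = application.jobStatus.processed + 1 />
    </cfloop>
    <cfset application.jobStatus.running = false />
</cfthread>

<!--- status.cfm: hit this from the browser every few seconds via JavaScript --->
<cfoutput>
    #application.jobStatus.processed# of #application.jobStatus.total# records done
    <cfif NOT application.jobStatus.running>(complete)</cfif>
</cfoutput>

In a real application you'd want a cflock around the shared application-scope writes, but that's the general shape of it.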

I strongly suggest refactoring the code to use a simple messaging / queue system. It wouldn't take but 30 minutes to implement (or write a simple one from scratch!) and would provide a lot of benefits over and above solving this issue; a rough sketch follows at the end of this answer.
For example, it's not pass/fail for the entire operation. If you hit a snag at, say, the 1.5-hour mark, you won't be redoing the entire process again, only the parts that fail.
Done this way, there is effectively no limit to how much processing you can do, because you'll be adding to and removing from the queue as needed.
If you give a little more background, I'd be happy to help you figure out logical divisions to make it possible.
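To give an idea of what I mean, here's a bare-bones sketch of a database-backed queue. The work_queue table (with id, payload, status, and attempts columns), the application.dsn variable, and the MySQL-style LIMIT are all assumptions on my part, so adjust for your own schema and database; a scheduled task would hit this template every few minutes and chew through a batch:

<cfquery name="batch" datasource="#application.dsn#">
    SELECT id, payload
    FROM work_queue
    WHERE status = 'pending'
    ORDER BY id
    LIMIT 50
</cfquery>

<cfloop query="batch">
    <cftry>
        <!--- ... process this one item ... --->
        <cfquery datasource="#application.dsn#">
            UPDATE work_queue
            SET status = 'done'
            WHERE id = <cfqueryparam value="#batch.id#" cfsqltype="cf_sql_integer">
        </cfquery>
        <cfcatch>
            <cfquery datasource="#application.dsn#">
                UPDATE work_queue
                SET status = 'failed', attempts = attempts + 1
                WHERE id = <cfqueryparam value="#batch.id#" cfsqltype="cf_sql_integer">
            </cfquery>
        </cfcatch>
    </cftry>
</cfloop>

Because each item is marked off individually, a failure at the 1.5-hour mark only costs you the items that actually failed, and the next run picks up where this one left off.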

If you have a process running for that long, then you'll want to run it as a scheduled task.
I imagine your browser is the one dying.
Did you check to see if the request is still running?
<cfsetting requesttimeout="3600" /> will set the page to last for an hour. If you run it as a scheduled task, then the session timeout shouldn't affect anything.
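For completeness, registering the task could look something like this (untested; the task name, URL, and times are placeholders, and you can do the same thing through the ColdFusion Administrator UI instead):

<cfschedule action="update"
    task="longProcessTask"
    operation="HTTPRequest"
    url="http://localhost/tasks/longProcess.cfm"
    startDate="01/01/2024"
    startTime="2:00 AM"
    interval="daily"
    requestTimeOut="9999">

<!--- and at the top of longProcess.cfm itself --->
<cfsetting requesttimeout="3600" />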

Please do not store queries in session like that. Depending on the size of the query and the number of concurrent users in the system, you could easily run out of memory, causing some current and all subsequent requests to fail.
The database should be more than able to handle the heavy lifting. I'd hazard a guess that much of the processing you're doing in the application could be refactored to happen directly on the database and save you a considerable amount of time.
Regardless, you should look into something like CFTHREAD as Sean mentioned, a scheduled task, or a queuing system to handle a long process like this. The user most likely doesn't want to wait for the process to end before seeing the next screen. If they're told up front that the process is lengthy, they'll cope with waiting as long as they can move on to other tasks.

I had the same problem. Generally the browser will time out after about 3 minutes of nothing being sent from the server. For most of these long operations I was able to periodically output a dot to keep the browser alive, but when it came to some extremely long queries importing 20M records from a server-side CSV file, I had to think of another way.
cURL was the answer.
So here's what I did.
<?php
// Fetch the long-running page with cURL, using the progress callback to
// emit a little output periodically so the browser connection stays alive.
function get_page($page)
{
    $ch = curl_init($page);
    curl_setopt($ch, CURLOPT_TIMEOUT, 0);             // no cURL-side timeout
    curl_setopt($ch, CURLOPT_NOPROGRESS, false);      // enable the progress callback
    curl_setopt($ch, CURLOPT_PROGRESSFUNCTION, 'progress');
    curl_setopt($ch, CURLOPT_BUFFERSIZE, 128);        // small buffer so the callback fires often
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // don't echo the remote response body
    curl_exec($ch);
    curl_close($ch);
}

// cURL calls this periodically, even while the remote script sends nothing.
function progress($clientp, $dltotal, $dlnow, $ultotal, $ulnow = 0)
{
    echo '. ';
    flush();
    return 0;   // returning non-zero would abort the transfer
}

get_page('http://www.example.com/my_extremely_long_operation_script.php');
?>
Even with no output from the server, cURL fires the progress callback periodically, so the dots keep the browser alive.
Solved!

Related

C++: How to set a timeout (not reading input, not threaded)?

I've got a large C++ function in Linux that calls a whole lot of other functions, making up an algorithm. At various points, given certain bad inputs, the algorithm can get "stuck" and go on forever. Adding a timeout seems appropriate, as all potential "stuck" points cannot be predicted. But despite scouring the Internet for timeout examples, I've only found how to apply timeouts when either the thing you're timing is a separate thread or it's reading inputs. My code is a single thread and does not modify file descriptors, so I'm not having any luck. Do I basically have no choice but to thread it?
I am not sure about your exact situation; server applications or embedded applications often run for years in the background without stopping. One option is to let your program run in the background and log to a file (or the screen) periodically, and, if you really want to stop the program after a certain time, you can use the timeout command or a script to kill your program after that time, e.g. timeout 15s your-prog.

CFStoredProc Timing Out

I have a very basic app that plugs data into a stored procedure which in turn returns a recordset. I've been experiencing what I thought were 'timeouts'. However, I'm now no longer convinced that this is what is really happening. The reason is that the DBA and I watched SQL Server Spotlight to see when the stored procedure finished processing. As soon as the procedure finished processing and returned a recordset, the ColdFusion page returned a 'timeout' error. I'm finding this to be consistent whenever the procedure takes longer than a minute. To prove this, I created a stored procedure with nothing more than this:
BEGIN
WAITFOR DELAY '00:00:45';
SELECT TOP 1000 *
FROM AnyTableName
END
If I run it for 59 seconds I get a result back in ColdFusion. If I change it to one minute:
WAITFOR DELAY '00:01';
I get a cfstoredproc timeout error. I've tried running this in different instances of ColdFusion on the same server, different databases/datasources. Now, what is strange, is that I have other procedures that run longer than a minute and return a result. I've even tried this locally on my desktop with ColdFusion 10 and get the same result. At this point, I'm out of places to look so I'm reaching out for other things to try. I've also increased the timeout in the datasource connections and that didn't help. I even tried ColdFusion 10 with the timeout attribute but no luck there either. What is consistent is that the timeout error is displayed when the query completes.
Also, I tried adding the WAITFOR in cfquery and the same result happened. It worked when set for 59 seconds, but timed out when changed to a minute. I can change the sql to select top 1 and there is no difference in the result.
Per the comments, it looks like your request timeout is set to sixty seconds.
Use cfsetting to extend your timeout to whatever you need.
<cfsetting requesttimeout = "{numberOfSeconds}">
The default request timeout for all pages is 60 seconds; you need to change this in the ColdFusion Administrator if it is not enough, but most pages should not run this long.
Take some time to familiarise yourself with the Administrator and all its settings to avoid this kind of head-scratching.
As stated, use the cfsetting tag to override it for specific pages.
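In other words, on the page that calls the procedure, something along these lines (the procedure, datasource, and parameter names are just placeholders):

<!--- raise the request timeout before the long-running call --->
<cfsetting requesttimeout="300">

<cfstoredproc procedure="usp_LongRunningReport" datasource="myDSN">
    <cfprocparam type="in" cfsqltype="cf_sql_integer" value="#someID#">
    <cfprocresult name="reportData">
</cfstoredproc>

The key point is that the cfsetting call has to run on the same request, before the cfstoredproc tag.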

Suspend and resume ColdFusion code for server intensive process

I have a loop that runs over a couple thousand records, and for each record it hits it does some image resizing and manipulation on the server. The process runs well in testing over a small record set, but when it moves to the live server I would like to suspend and resume the process after 50 records so the server is not taxed to the point of slow performance or quitting altogether.
The code looks like this:
<cfloop query="imageRecords">
<!--- create and save images to the server - sometimes 3-7 images for each record --->
</cfloop>
Like I said, I would like to pause after 50 records, then resume where it left off. I looked at cfschedule but was unsure of how to work that into this.
I also looked at the sleep() function, but the documentation talks about using it within cfthread tags, and others have posted about using it to simulate long processes.
So I'm not sure sleep() can be safely used in the fashion I need.
Server is CF9 and db is MySQL.
I would create a column called worked in the database that defaults to 0, and once the image has been updated, set the flag to 1. Then your query (MySQL syntax, since that's your database) can be something like
SELECT imagename
FROM images
WHERE worked = 0
LIMIT 50
Then set up a CF scheduled task to run every x minutes; the processing template could look something like the sketch below.
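The template that the scheduled task requests could then be roughly this (untested; the datasource and the requesttimeout value are placeholders, and the LIMIT syntax assumes MySQL as mentioned in the question):

<cfsetting requesttimeout="600">

<cfquery name="imageBatch" datasource="#application.dsn#">
    SELECT imagename
    FROM images
    WHERE worked = 0
    LIMIT 50
</cfquery>

<cfloop query="imageBatch">
    <!--- ... create and save the 3-7 images for this record ... --->
    <cfquery datasource="#application.dsn#">
        UPDATE images
        SET worked = 1
        WHERE imagename = <cfqueryparam value="#imageBatch.imagename#" cfsqltype="cf_sql_varchar">
    </cfquery>
</cfloop>

Each run works through at most 50 unworked records, and because the flag is set per record, a run that dies partway just means the next scheduled run picks up the remainder.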
Here's a different approach which you could combine with probably any of the other approaches:
<cfscript>
    // grab the thread that is serving this request and drop its priority
    requestThread = createObject("java", "java.lang.Thread").currentThread();
    requestThread.setPriority(requestThread.MIN_PRIORITY);

    // ... your work here ...

    // restore normal priority once the heavy lifting is done
    requestThread.setPriority(requestThread.NORM_PRIORITY);
</cfscript>
That ought to set the thread to have a lower priority than the other threads serving your other requests. In theory, you'll get your work done in the shortest time, but with less effect on your other users. I've not had the opportunity to test this yet, so your mileage may vary.

IIS App Pool Monitoring Infinite Loops (or inappropriate load)

I'm just wondering if there is any way I can handle the case where our web service gets stuck in an infinite loop. I know the first answer is not to have an infinite loop, and we have tested the system and no loops should occur. But just as a fallback, is there a way of putting something on the IIS app pool to say: if the CPU has been running at, say, 99% for more than 1 minute, then recycle the app pool?
Thanks in advance
There is no IIS-built-in way of doing something like that (the recycle options allow you to recycle at a set time each day, or after a set number of minutes, based on hitting virtual or private memory limits, or based on hitting a particular number of requests - nothing CPU-ish).
You could build your own monitor that would watch for certain events (like CPU going above 99% for a minute) and causes a recycle to happen (there are various programmatic ways to do this).
In IIS 7.0+ this can be done very easily (although instead of recycling the Application Pool, it will terminate the process and then restart it when resetInterval has been reached). See:
http://www.iis.net/configreference/system.applicationhost/applicationpools/add/cpu

How do I detect an aborted connection in Django?

I have a Django view that does some pretty heavy processing and takes around 20-30 seconds to return a result.
Sometimes the user will end up closing the browser window (terminating the connection) before the request completes -- in that case, I'd like to be able to detect this and stop working. The work I do is read-only on the database so there isn't any issue with transactions.
In PHP the connection_aborted function does exactly this. Is this functionality available in Django?
Here's example code I'd like to write:
def myview(request):
    while not connection_aborted():
        # do another bit of work...
        if work_complete:
            return HttpResponse('results go here')
Thanks.
I don't think Django provides it, because it basically can't. More than on Django itself, this depends on the way Django interfaces with your web server, i.e. on your software stack (which you have not specified). I don't think it's even part of the FastCGI and WSGI protocols!
Edit: I'm also pretty sure that Django does not start sending any data to the client until your view finishes execution, so it can't possibly know if the connection is dead. The underlying socket won't trigger an error unless the server tries to send some data back to the user.
That connection_aborted method in PHP doesn't do what you think it does. It will tell you if the client disconnected, but only if the buffer has been flushed, i.e. some sort of response has been sent from the server back to the client. The PHP version wouldn't even work as you've written it above; you'd have to add a call to something like flush within your loop to have the server attempt to send data.
HTTP is a stateless protocol. It's designed so that neither the client nor the server depends on the other. As a result, the state of either end is only known while a connection exists, and that only occurs when there's some data to send one way or another.
Your best bet is to do as #MattH suggested and do this through a bit of AJAX, and if you'd like you can integrate something like Node.js to make client "check-ins" during processing. How to set that up properly is beyond my area of expertise, though.
So you have an AJAX view that runs a query taking 20-30 seconds to process, requested in the background of a rendered page, and you're concerned about wasted resources when someone cancels the page load.
I see that you've got options in three broad categories:
Live with it. Improve the situation by caching the results in case the user comes back.
Make it faster. Throw more space at a time/space trade-off. Maintain intermediate tables. Precalculate the entire thing, etc.
Do something clever with the browser fast-polling an "is it ready yet?" query and the server cancelling the query if it doesn't receive a nag within interval * 2 or similar. If you're really clever, you could return progress / ETA to the nags. However, this might not have particularly useful behaviour when the system is under load or your site is being accessed over limited bandwidth.
I don't think you should go for option 3 because it's increasing complexity and resource usage for not much gain.