How can I return a 400 status code and close the connection, without aborting script execution? I'm looking to initiate execution of the script using <cfhttp> but not wait for the script to end before returning.
You need to run the portion of the script you want to keep running after the connection is closed on a different thread.
Here's a tutorial on how to launch a new thread in ColdFusion.
If you want to trigger a new request from within another request without waiting, just set the cfhttp timeout to 0 and it will return right away without waiting for a response. For continuing a script after returning, a separate thread may be a better idea, but if you are hitting something outside the server or really need a separate request, then cfhttp timeout=0 should do the trick.
I have a request that takes more than 3 minutes to process. I want the server to accept the request, immediately respond with a 200, and deliver the result after the work finishes.
The workflow you've described is called asynchronous task execution.
The main idea is to remove time- or resource-consuming parts of work from the code that handles HTTP requests and delegate them to some kind of worker. The worker might be a different thread or process, or even a separate service that runs on a different server.
This makes your application more responsive, as the user gets the HTTP response much quicker. Also, with this approach you can display UI-friendly things such as progress bars and status marks for the task, and create retry policies if a task fails.
Example workflow:
user makes HTTP request initiating the task
the server creates the task, adds it to the queue and returns the HTTP response with task_id immediately
the front-end code starts ajax polling to get the results of the task passing task_id
the server handles polling HTTP requests and gets status information for this task_id. It returns the info (either the results or "still waiting") with the HTTP response
the front-end displays spinner if server returns "still waiting" or the results if they are ready
The most popular way to do this in Django is using the Celery distributed task queue.
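A minimal sketch of that workflow with Celery and Django (task and view names here are illustrative, not from the question):

```python
# tasks.py -- the slow work, executed by a Celery worker process
from celery import shared_task

@shared_task
def generate_report(user_id):
    ...  # time-consuming work goes here, off the request path
    return {"user_id": user_id}

# views.py -- the request handlers stay fast
from celery.result import AsyncResult
from django.http import JsonResponse
from .tasks import generate_report

def start_task(request):
    task = generate_report.delay(request.user.id)  # enqueue, don't wait
    return JsonResponse({"task_id": task.id})      # step 2 of the workflow

def poll_task(request, task_id):
    result = AsyncResult(task_id)                  # steps 3-4: polling
    if result.ready():
        return JsonResponse({"status": "done", "result": result.get()})
    return JsonResponse({"status": "still waiting"})
```

The view returns in milliseconds because the only work done on the request thread is placing the message on the broker.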
When a request comes in, you will have to verify it first. Then send the response and use a separate mechanism to complete the work in the background. Be sure the work can actually be completed before you acknowledge it. You can use pipelining, where you put every task into a pipeline; Django-Celery is an option, but don't use it unless required. Prefer the simplest approach that resolves the issue.
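A minimal in-process version of that pipeline idea, assuming tasks lost on process restart are acceptable (anything stronger needs a real broker; validate and complete are hypothetical stand-ins):

```python
import queue
import threading

task_queue = queue.Queue()

def worker():
    # a single background thread drains the pipeline
    while True:
        job = task_queue.get()       # blocks until a task is queued
        try:
            job()
        finally:
            task_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def validate(payload):
    ...  # hypothetical: reject bad requests before acknowledging them

def complete(payload):
    ...  # hypothetical: the slow part of the request

def handle_request(payload):
    validate(payload)                          # verify while the client waits
    task_queue.put(lambda: complete(payload))  # finish in the background
    return {"status": "accepted"}              # respond immediately
```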
I'm trying to execute a long running function (ex: sleep(30)) after a Django view returns a response. I've tried implementing the solutions suggested to similar questions:
How to execute code in Django after response has been sent
Execute code in Django after response has been sent to the client
However, the client's page load only completes after the long-running function finishes when using a WSGI server like gunicorn.
Now that Django supports asynchronous views, is it possible to run a long-running query asynchronously?
Obviously, I am looking for a solution to the same issue: a view that starts a background task and sends a response to the client without waiting until the started task is finished.
As far as I understand, this is not yet one of the objectives of async views in Django. The problem is that all executed code remains tied to the worker that was started to handle the HTTP request. Once the response is sent back to the client, that worker cannot run any further code that the view started asynchronously. Therefore, all async functions require an "await" in front of them; consequently, the view will only send its response to the client once the awaited function has finished.
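A tiny illustration of that constraint, assuming a standard async Django view:

```python
# The awaited call must finish before the return statement runs,
# so the response cannot be sent early.
import asyncio
from django.http import JsonResponse

async def my_view(request):
    await asyncio.sleep(30)              # stands in for the long-running work
    return JsonResponse({"done": True})  # only sent after the 30 s elapse
```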
As I understand it, all background tasks must be pushed into a task queue where another worker can pick up each new task. There are several solutions for this, like Django Channels or Django Q. However, I am not sure which is the most lightweight.
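For example, a minimal sketch with Django Q (endpoint names and the task path are illustrative; async_task and result are django-q's documented API):

```python
from django.http import JsonResponse
from django_q.tasks import async_task, result

def start(request):
    # enqueue the work for a separate qcluster worker and return at once
    task_id = async_task("myapp.services.long_running")
    return JsonResponse({"task_id": task_id})

def check(request, task_id):
    value = result(task_id)  # None until the worker has finished
    if value is None:
        return JsonResponse({"status": "pending"})
    return JsonResponse({"status": "done", "result": value})
```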
I am using django framework and ran into some performance problems.
There is a very heavy function (which takes about 2 seconds) in my views.py. Let's call it heavy().
The client uses ajax to send a request, which is routed to heavy(), and waits for a json response.
The bad thing is that, I think, heavy() is not concurrent. As shown in the image below, if two requests are routed to heavy() at the same time, one must wait for the other. In other words, heavy() is serial: it cannot take another request before returning from the current one. The observation was tested and confirmed on my local machine.
I am trying to make the functions in views.py concurrent and asynchronous. Ideally, when two requests come to heavy(), heavy() should throw the job to some remote worker with a callback, and return. Then heavy() can process another request. When the task is done, the callback can send the results back to the client. The logic is demonstrated below:
However, there is a problem: if heavy() wants to process another request, it must return; but if it returns something, the Django framework will send a (fake) response to the client, and the client may not wait for another response. Moreover, the fake response doesn't contain the correct data. I have searched through Stack Overflow and found few useful tips. I wonder if anyone has tried this and knows a good way to solve the problem.
Thanks,
First make sure that the 'inconcurrency' is actually caused by your heavy task. If you're using only one worker for Django, you will be able to process only one request at a time, no matter what it is. Consider having more workers for some concurrency, because the lack of them also affects short requests.
For returning some information when the task is done, you can do it in at least two ways:
sending AJAX requests periodically to fetch the status of your task
using SSE or a websocket to subscribe to the actual result
Both of them will require you to write some more JavaScript code to handle it. The first is really easy to achieve; for the second you can use uWSGI capabilities, as described here. That way it can be handled asynchronously, independently of your Django workers (Django will just create the connection and start the task in celery; checking the status and sending it to the client will be handled by gevent).
To follow up on GwynBliedD's answer:
celery is commonly used to process tasks; it has very simple Django integration. @GwynBliedD's first suggestion is very commonly implemented using celery and a celery result backend.
https://www.reddit.com/r/django/comments/1wx587/how_do_i_return_the_result_of_a_celery_task_to/
A common workflow using celery is:
client hits heavy()
heavy() queues the heavy task asynchronously
heavy() returns future task ID to client (view returns very quickly because little work was actually performed)
client starts polling a status endpoint using the task ID
when the task completes, the status endpoint returns the result to the client (sketched below)
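Sketched from the caller's side, here as a Python client using requests (a browser would run the same loop via AJAX; the URLs are illustrative):

```python
import time
import requests

resp = requests.post("https://example.com/heavy/")  # kicks off the task
task_id = resp.json()["task_id"]                    # view returns quickly

while True:                                         # poll the status endpoint
    status = requests.get(f"https://example.com/status/{task_id}/").json()
    if status["status"] == "done":                  # result is ready
        print("result:", status["result"])
        break
    time.sleep(1)  # simple fixed interval; real code would back off
```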
I have a web service that is made using Spring, Hibernate and c3p0. I also have a service-wide cache (which has the results of every request ever made to the service) that can be used to return results when the service isn't able to (for whatever reason). The cache might return stale results when the database is down, but that's ok.
I recently faced a database outage and my service came to a crashing halt.
I want the clients of my service to survive database outages if they ever happen again in the future.
For that, I need my service to:
Handle new incoming requests like this: quickly say that the database is down and throw some exception (fast-fail).
Requests already being processed: don't let them last longer than x seconds. How do I make the thread handling the request be interrupted somehow?
Cache the whole database in memory for read-only purposes (is this insane?).
There are some observations that I made:
If there is one or more connection(s) with status ESTABLISHED, then an attempt to check out a new connection is not made. It seems like any one connection with status ESTABLISHED is handed over to the thread receiving the request. Now, this thread just hangs until the database comes back up.
I would want to make this request fast-fail by knowing, before handing a connection over to a thread, whether the db is up or not. If it isn't, the service should throw an exception instead of hanging.
If there's no connection with status ESTABLISHED, then the request fails in 10 secs with the exception "Could not checkout a new connection". This is due to my checkout timeout being set to 10s.
If the service was processing some request and the db goes down, then when the service makes a call to the db, the thread making that call gets stuck forever. It resumes execution only after the db comes back.
I would like to interrupt the thread after, say, x seconds, whether or not it was able to complete the request.
Are there ways to accomplish what I seek?
Thanks in advance.
I'm working on a script that creates a MySQL dump via <cfexecute> and then FTPs the SQL script to another server. I've resorted to checking once per second to see if the filesize has changed, and if it has not changed within the past five seconds I assume it has completed.
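A sketch of that "size unchanged for five seconds" heuristic, shown in Python for illustration (the actual script does the same polling in CFML):

```python
import os
import time

def wait_until_stable(path, quiet_seconds=5, poll_interval=1.0):
    """Return once the file's size has stopped changing for quiet_seconds."""
    last_size = -1
    unchanged_for = 0.0
    while unchanged_for < quiet_seconds:
        size = os.path.getsize(path)
        if size == last_size:
            unchanged_for += poll_interval  # still the same size, keep counting
        else:
            last_size = size                # size changed, restart the clock
            unchanged_for = 0.0
        time.sleep(poll_interval)
```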
This is fine for the current application, but eventually I would like to be able to import the SQL script on the second server and provide some sort of notification that it has completed.
Is there some way to track the status of a running process?
If not, is there a way to accomplish a full DB export and import via ColdFusion alone?
Actually, you may not realize it, but when you call <cfexecute> without passing a timeout attribute, it defaults to a timeout of '0'. And if you read the docs on <cfexecute> you'd see:
If the value is 0: ColdFusion starts a process and returns immediately. ColdFusion may return control to the calling page before any program output displays. To ensure that program output displays, set the value to 2 or higher.
So I would suggest passing a higher value for timeout, which will cause ColdFusion to wait for mysqldump to complete before moving on.
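For comparison, the same distinction expressed with Python's subprocess module, purely as an illustration (the database name and output file are placeholders):

```python
import subprocess

# fire-and-forget: control returns before mysqldump finishes (like timeout=0)
dump_file = open("dump.sql", "w")
subprocess.Popen(["mysqldump", "mydb"], stdout=dump_file)

# blocking: waits for mysqldump to exit (like a nonzero cfexecute timeout)
with open("dump.sql", "w") as out:
    subprocess.run(["mysqldump", "mydb"], stdout=out, check=True)
```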
Reference
Check out Event Gateways[1] for one way to deal with asynchronous operations. There's a Directory Watcher gateway that comes with CF as an example.[2]
Barring that, create some sort of batch processing facility using CF Scheduled Tasks. Add the job to a database table and have a scheduled task periodically pull jobs out of the table and execute them, reporting on the result. A second scheduled task can detect that the first completed and carry out the next step of the process.
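A rough sketch of that batch facility, shown in Python for illustration (in CF this logic would live in the scheduled task itself; the jobs table layout and perform() are hypothetical):

```python
import sqlite3

def run_pending_jobs(conn: sqlite3.Connection):
    # scheduled-task entry point: claim each pending job, run it, record status
    jobs = conn.execute(
        "SELECT id, payload FROM jobs WHERE status = 'pending'"
    ).fetchall()
    for job_id, payload in jobs:
        conn.execute("UPDATE jobs SET status = 'running' WHERE id = ?", (job_id,))
        conn.commit()
        try:
            perform(payload)          # e.g. run the import on the second server
            new_status = "done"
        except Exception:
            new_status = "failed"
        conn.execute("UPDATE jobs SET status = ? WHERE id = ?", (new_status, job_id))
        conn.commit()

def perform(payload):
    ...  # hypothetical: the actual work for one job
```

A second scheduled task can watch the same table for rows marked "done" and carry out the next step, as described above.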
[1] http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec214e3-7fa7.html
[2] http://help.adobe.com/en_US/ColdFusion/9.0/Developing/WSc3ff6d0ea77859461172e0811cbec22c24-77f7.html