I'm doing file uploads using Django's file upload mechanism with a custom handler (a subclass of django.core.files.uploadhandler.FileUploadHandler) that does some additional processing in its receive_data_chunk(self, raw_data, start) method.
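For illustration, a minimal handler of this shape might look like the following (the class name and the print are placeholders for the real processing; actual storage is left to the next handler in settings.FILE_UPLOAD_HANDLERS):

from django.core.files.uploadhandler import FileUploadHandler

class InspectingUploadHandler(FileUploadHandler):
    def receive_data_chunk(self, raw_data, start):
        # Called once per chunk as Django reads the request body.
        print("chunk of %d bytes at offset %d" % (len(raw_data), start))
        return raw_data  # pass the chunk on to the next handler

    def file_complete(self, file_size):
        return None  # produce no file object; defer to the next handler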
I'm curious when the handler is actually called: after the file has been completely uploaded to the server, or as the data arrives on the socket?
From my tests it looks like you have access to the data as it arrives on the socket, but I'd like someone to confirm this. I'm a little puzzled by this, because I thought mod_wsgi was a content generator in Apache and was therefore called after the input filters that pre-process the client's request.
PS: I'm using Apache + mod_wsgi + Django.
In Apache, input filters are only applied to input content when the request handler reads it. So no preprocessing is done up front by input filters; filtering happens inline as the request handler consumes the input content.
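To illustrate at the WSGI level (a plain-WSGI sketch, not Django, and not any specific author's code): the request body is just a stream the application pulls from on demand, so nothing is consumed or filtered until the handler reads it.

def application(environ, start_response):
    # Under mod_wsgi each read() fetches more data from the socket
    # (passing through any input filters at that point).
    stream = environ["wsgi.input"]
    total = 0
    while True:
        chunk = stream.read(8192)
        if not chunk:
            break
        total += len(chunk)  # Django's upload handlers see chunks the same way
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [("read %d bytes\n" % total).encode()]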
Related
I have a project running on Python 2.7. The project is old, but it still needs to update the database when a request is received. The update process takes time and ends in a timeout. Is there any way to return a JsonResponse/HttpResponse before updating the database, so that the timeout doesn't occur? I know it's not logical to do so, but it's a temporary fix.
Also, I can't use async since it's Python 2.
Use multiprocessing or multithreading: this will run your task in another process or thread, so the HTTP response can be sent back to the client quickly.
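A minimal sketch of that idea with the standard threading module (Python 2 compatible; update_database and the view are placeholders for your actual code):

import threading

from django.http import JsonResponse

def update_database(payload):
    # Placeholder for the slow update logic.
    pass

def my_view(request):
    payload = request.POST.dict()
    worker = threading.Thread(target=update_database, args=(payload,))
    worker.daemon = True  # don't block process shutdown
    worker.start()
    # Respond immediately; the update continues in the background.
    return JsonResponse({"status": "accepted"})

Note the thread dies if the worker process is recycled, so this is best-effort, which seems acceptable for a temporary fix.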
I am building a Django REST framework based server, and in one of the requests I get an audio file from the front-end, on which I need to run an ML-based algorithm (I have a script for this) and respond to the user with the result. The problem is that this request might take 5-10 seconds to execute. I am trying to understand the following:
Will Celery help me reduce the workload on the server, given that in any case I need to wait for the result of the ML algorithm and respond to the user?
Should I create a separate server to handle this type of request? Would that be a better approach?
Also, is my flow of doing things correct? First, upload the file to some cloud platform for storage and serialize the instance to get the file's URL. Second, run the script using Celery and wait for the result. Third, respond with the result.
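To make the flow concrete, here is a rough sketch of what I'm considering (AudioJob, AudioJobSerializer and run_ml_algorithm are placeholders for my actual model, serializer and ML script):

# tasks.py
from celery import shared_task

@shared_task
def process_audio(job_id):
    job = AudioJob.objects.get(pk=job_id)
    job.result = run_ml_algorithm(job.file_url)  # the existing ML script
    job.save()

# views.py
from rest_framework.response import Response
from rest_framework.views import APIView

class AudioUploadView(APIView):
    def post(self, request):
        serializer = AudioJobSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        job = serializer.save()  # step 1: store the file, get its URL
        process_audio.delay(job.pk)  # step 2: run the script via Celery
        # Step 3 becomes: respond with a job id now and let the client
        # fetch the result later, instead of holding the request open.
        return Response({"job_id": job.pk}, status=202)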
Thanks for helping.
I use Celery with Django to put my PDF generation in the background while I display a loading page.
But when the task is complete (i.e. my PDF is generated), I want to redirect to the next view, which is responsible for sending mail and displaying a friendly confirmation message to the user.
I know I can catch the task_postrun or task_success signal, but I can't redirect from there.
I searched for hours but didn't find any solution. Any ideas?
Thanks!
There are two ways:
Ask the server: save the task_id in the model where you are storing the PDF, and create an ajax view that checks every X seconds whether the task has completed; the result of this view determines whether to redirect or keep waiting for the PDF (see the sketch after this list).
result = MyTask.AsyncResult(task_id)
result.ready()  # True once the task has finished; result.get() would block
Real-time web: another way is using Pusher with pusher_client_python. When PDF generation is completed (in your PDF creation routine), make an API call to Pusher, which will send a notification to the connected client (the one waiting for the result), which will then redirect. This approach is more convenient because you don't have to poll the server every X seconds. You will need to learn about the sockets paradigm, but it's very easy to implement.
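For option 1, a minimal sketch of the ajax view (PdfDocument, its task_id field and the redirect URL are placeholders for your actual model and routes):

from celery.result import AsyncResult
from django.http import JsonResponse

def pdf_status(request, pk):
    doc = PdfDocument.objects.get(pk=pk)
    result = AsyncResult(doc.task_id)
    if result.ready():
        # Tell the client-side script where to redirect next.
        return JsonResponse({"done": True, "redirect": "/pdf/%s/sent/" % pk})
    return JsonResponse({"done": False})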
Hope this helps.
I've got a Django site that produces a CSV download. The content of the CSV is dictated by user-defined parameters. It's possible that users will set parameters that require significant thinking time on the server. I need a way of sustaining the HTTP connection so the browser doesn't throw an error message. I heard that it's possible to send intermittent HTTP headers to do this. Can anyone point me in the right direction to set this up on a Django site?
(Unfortunately I'm stuck with the possibility of slow reports; improving my SQL won't mitigate this.)
Don't do it online. Trigger an offline task, use a bit of JavaScript to repeatedly call a view that checks if the task has finished, and redirect to the finished file when it's ready.
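A rough sketch of that pattern on the Django side, assuming Celery (generate_csv and build_report_file are placeholders for your report code):

from celery import shared_task
from celery.result import AsyncResult
from django.http import JsonResponse

@shared_task
def generate_csv(params):
    # Placeholder: build the report and return the path/URL of the file.
    return build_report_file(params)

def start_report(request):
    task = generate_csv.delay(request.GET.dict())
    return JsonResponse({"task_id": task.id})

def report_status(request, task_id):
    result = AsyncResult(task_id)
    if result.successful():
        # The JavaScript on the page can now redirect to the file.
        return JsonResponse({"ready": True, "file": result.get()})
    return JsonResponse({"ready": False})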
Instead of blocking the user and their browser for 20 minutes (which is not a good idea), do the time-consuming task in the background. When the task finishes and generates the result, simply notify the user so that they just need to download the ready result.
I am using the TemporaryFileUploadHandler to upload files. If a user is uploading a large file and cancels the upload, the file remains in my temporary directory.
Is there a way to trap a cancelled upload (connection reset before a file was fully uploaded) in order to cleanup these files?
The only alternative I can think of is a cron job that looks at the temp directory and deletes files which have not been updated in some reasonable amount of time.
Not sure if it helps, but you may try connecting to Django's request signals:
request_finished - Sent when Django finishes processing an HTTP request.
got_request_exception - This signal is sent whenever Django encounters an exception while processing an incoming HTTP request.
I think Django should raise an error if the connection is aborted, so using the second one is probably the solution. Please let me know if it helps.
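A sketch of wiring that up; this is an assumption to test, since whether got_request_exception actually fires on a connection reset depends on your server setup, and the '.upload' suffix used in the glob matches Django's temporary upload files but may vary by version. The same sweep would also work as the cron job mentioned in the question.

import glob
import os
import tempfile
import time

from django.conf import settings
from django.core.signals import got_request_exception
from django.dispatch import receiver

@receiver(got_request_exception)
def sweep_stale_uploads(sender, request, **kwargs):
    temp_dir = settings.FILE_UPLOAD_TEMP_DIR or tempfile.gettempdir()
    cutoff = time.time() - 3600  # anything untouched for an hour
    for path in glob.glob(os.path.join(temp_dir, "*.upload*")):
        if os.path.getmtime(path) < cutoff:
            try:
                os.remove(path)
            except OSError:
                pass  # file vanished or is locked; skip it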