Django kills older requests

I am running a Django website over IIS. When a user saves an object, the operation can take some time, so rather than make them wait for it to finish, the page sends the submission to the server via an AJAX request and immediately redirects. However, if the server then receives many more requests, the old save request is killed rather inelegantly: the log files show that it ends mid-execution, with no error messages or any other indication that it failed.
How do I keep older requests alive in Django?
P.S. I have already investigated starting the work in a separate Process, but encountered issues around Django models, and I am looking for something simpler than Celery.

It turns out Django wasn't killing the requests; IIS was. There is a timeout setting for FastCGI, the bridge between Django and IIS, that was set to 30 seconds. When my save request hit that limit, it simply ended with no warning.
You can change this by clicking the server name in IIS Manager, then the "FastCGI Settings" icon, then the FastCGI application being used. Under "Process Model", raise "Activity Timeout" and "Request Timeout"; I used 300 seconds (5 minutes) to be safe.
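If raising the timeouts is not enough, the in-process background thread the question alludes to can be sketched in plain Python. Everything below (`handle_submission`, `slow_save`, the payload) is illustrative, not the asker's actual code; in a real Django view the thread should also call `django.db.close_old_connections()` when it finishes.

```python
import threading

# Results land here; in a real app the thread would write to the database.
results = []

def slow_save(payload):
    """Stand-in for the expensive model save."""
    results.append(payload.upper())

def handle_submission(payload):
    """Start the slow save in a background thread and return immediately,
    so the HTTP response is sent long before any FastCGI timeout."""
    worker = threading.Thread(target=slow_save, args=(payload,))
    worker.start()
    return worker  # returned only so callers/tests can join() it

worker = handle_submission("report-42")
worker.join()   # a real view would not wait; joined here only to show the result
print(results)  # ['REPORT-42']
```

Note that IIS can still recycle the worker process while the thread is running, so for work that must survive restarts a queue such as Celery remains the robust option.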

Related

django 2FA with yubikey creates wsgi error in login: Timeout when reading response headers from daemon process

Standard 2FA first asks for your password and then for the YubiKey token, both when setting it up and once it is already configured. I'm using this code (for simplicity I'm not changing anything in it): https://github.com/jazzband/django-two-factor-auth
When I have to enter the YubiKey token, i.e. press the YubiKey, the information is sent to the database, but on the client side the process never finishes: the page doesn't advance to the next step, and the browser shows the loading icon until a gateway timeout.
Nothing I've tried works. What I can see is that my MySQL process list goes idle (sleep) after the token is entered, so the database connection seems to be fine. The same code works on localhost with the lightweight Django development server, but on my Apache server I have this issue.
How can I track down the problem? The error log only says: wsgi error, Timeout when reading response headers from daemon process.
On localhost this step returns a 302 response (it works; I believe a 302 is fine with the YubiKey). On the Apache server it's a 504 (Gateway Timeout).
Any help is appreciated

Django on IIS: Debugging IIS Error due to FastCGI request timeout on large file upload

I'm trying to host a Django web application on a Windows 10 machine with IIS 10 and FastCGI.
While everything has been running well so far, I'm running into problems with certain POST requests that upload larger files (~120 MB), namely an HTTP 500 error. I'm at a point where I don't know how to debug any further.
I resolved the error "413.1 - Request Entity Too Large" by increasing the request limits. However, now I get an HTTP error stating the following:
C:\Apps\Python3\python.exe - The FastCGI process exceeded configured request timeout
The timeout is set to 90 seconds, and I can tell that after the file upload completes, my browser waits about that long for a response from the server.
There are not that many operations to perform within the Django view to respond to the request. If I run the Django development server on the same machine, the response is sent just seconds after the files are uploaded, whereas IIS takes more than a minute longer to send the HTTP 500 response.
I added some code to the post() method of the Django view to write something to a file whenever the method is called:
def post(self, request, *args, **kwargs):
    with open(os.path.join(settings.REPORT_DIR, "view_output.txt"), "w") as f:
        f.write("tbd.")
    (...)
However, this code is never executed, although the same pattern works in other Django views. I therefore assume there is a problem with IIS processing the request.
I enabled FREB logging, but I'm a little lost interpreting it. The "Errors & Warnings" section just shows the LOG_FILE_MAX_SIZE_TRUNCATE event, probably due to the large request.
Since I'm new to IIS, how can I debug any further?
Thank you very much!
To resolve the issue, you could follow the steps below.
The IIS default upload size limit is 30,000,000 bytes (about 30 MB). Increase this value as follows:
Open IIS Manager and select your site.
Double-click the Request Filtering feature in the middle pane.
In the Actions pane on the right-hand side of the screen, click the "Edit Feature Settings..." link.
In the Request Limits section, enter an appropriate "Maximum allowed content length (Bytes)", e.g. 2147483648 for 2 GB, and then click OK.
Click OK to apply the setting, then go back.
Increase the site connection timeout:
Open Internet Information Services (IIS) Manager.
Expand the local computer node, expand "Web Sites", right-click the appropriate website, point to "Manage Web Site", and click "Advanced Settings".
In the Advanced Settings window, expand "Connection Limits", change the value in the "Connection time-out" field, and then click OK.
Application pool setting:
Open IIS.
On the left side, select "Application Pools".
On the right side, right-click your application pool and select "Advanced Settings".
In the advanced settings, increase "Idle Time-out (minutes)".
CGI timeout:
In IIS, double-click the CGI icon and increase "Time-out".
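For reference, the request-filtering step above maps to this web.config fragment (a sketch; 2147483648 is the 2 GB example value from the steps, and the FastCGI request timeout itself lives in the machine-level FastCGI settings rather than here):

```xml
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- 2147483648 bytes = 2 GB; the IIS default is 30000000 (~30 MB) -->
        <requestLimits maxAllowedContentLength="2147483648" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```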

How do I add custom headers to CloudFoundry error pages?

I have an application that I'm deploying on a private CloudFoundry instance, using the Ruby buildpack. Sometimes an inbound request causes my application to crash and the container to restart. At that point the user is served an error page saying something like "502 - the container was unable to service your request". This error is served by the infrastructure, not by my app, so I have no control over it.
My app is designed to run as part of a dashboard/kiosk that refreshes periodically, so it adds a Refresh header to every successful response. The refresh time is dynamic and not always the same value (anything from 5 minutes down to 0 seconds), which is why I don't use a browser refresh extension.
When I hit the error page, there is no Refresh header, so the page just sits there forever. How can I get CloudFoundry to add a Refresh header to the error page? I'd be content with a static value set in my manifest.yml, but I can't see any option for that.
You can't modify responses that are generated by the Gorouters. If you want to customize those, consider (if you have the authority) putting something in your external load balancer that watches for errors from the infrastructure (I believe all such errors have headers starting with X-Cf-*, but I may be mistaken) and customizes the response when one is received.

504 gateway timeout django site with nginx+fastcgi

We added the ability for admin users to change the server date & time through the portal. Changing the date & time backward works fine, but changing it forward (by more than fastcgi_read_timeout) returns a '504 Gateway Timeout', even though the server time is successfully changed behind the scenes.
Please advise how to handle this.
Thanks.
I had a very similar issue on another project. It may be best to submit the date & time settings (I assume you would be using NTP server IPs to do this) through the portal asynchronously via a JavaScript AJAX request, then let the server do its thing with the date & time.
Meanwhile, have the client-side JavaScript continuously probe the server with interval AJAX requests (perhaps every 5 seconds) to get back a response with the server time. That way each subsequent AJAX request initiates a new Nginx session: if the first fails or times out, try a second time; if that fails, try a third, and so on.
This worked on our system. However, I don't know whether your product has login/authentication credentials. If it does, the user may have to log back in once everything is done, because a change in time may also expire their login session. I don't think this is a big deal, though, because in theory they should only need to change the date/time once in a while, if not just once, so it shouldn't have much impact on the user experience.
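As a complement, the fastcgi_read_timeout mentioned in the question can simply be raised in the nginx configuration. This is a sketch with example values; the upstream address and timeout are placeholders, not the asker's setup:

```nginx
location / {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:8000;   # example FastCGI upstream
    # Allow slow requests (such as the forward date change) to finish
    # before nginx gives up and returns 504; 300s is an example value.
    fastcgi_read_timeout 300s;
}
```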

IE8 loses cookies when opening a new window after a redirect

I'm using Internet Explorer 8 beta 2.
1. Client performs POST on http://alpha/foo
2. Server responds with redirect to http://beta/bar
3. Client performs GET on http://beta/bar
4. Server responds with redirect to http://beta/baz and sets a cookie
5. Client performs GET on http://beta/baz, including the cookie
6. Server provides the response
7. User selects "Open in new window" on a link in the page
8. Client performs GET on http://beta/link, without the cookie!
If in step 7 the user just clicks the link, the cookie is passed correctly. If there is no redirect, the cookie is passed correctly even if the user selects "Open in new window".
Is there a way to convince IE8 to pass the cookie in step 8?
(Edit: I believe this is a bug in IE8 beta 2, so I've raised it on the IE beta newsgroup. Workaround suggestions welcome.)
I believe that IE8 uses a separate process for each window. If you're using session cookies, the new process starts with a fresh session and therefore can't submit the session cookies received by the other process. Can you try using persistent cookies and see what happens?
From http://www.microsoft.com/windows/internet-explorer/beta/readiness/developers-existing.aspx#lcie
Loosely-coupled Internet Explorer (LCIE) is an architectural effort to improve the browser by separating its components and loosening their interdependence: most notably, it is an attempt to isolate the Internet Explorer frame and its tabs into separate processes. In Internet Explorer 8, this isolation will bring about improved performance and scalability, as well as more potent methods to recover from problems like disk or system failure.