I have many Django installations which must all run under a single URL. So I have a structure like
Django Installation 1
Django Installation 2
Django Installation N
under my root directory.
Now, from the URL "www.mysite.com/installation1" I pick out the subpart "installation1", set os.environ['DJANGO_SETTINGS_MODULE'] to that installation's settings module, and let the request be handled. For the request "www.mysite.com/installation2" I must do the same. However, since Django caches the Site object, the AppCache, etc. internally, I must restart the WSGI process before each request so that Django's internal caches get cleared. (I know performance will not be good, but I'm not worried about that.) To implement the above scenario I have implemented the solution below:
In httpd.conf: WSGIDaemonProcess django processes=5 threads=1
In django.core.handlers.wsgi I made the following change in __call__:
if environ['mod_wsgi.process_group'] != '':
    import signal, os
    print 'Sending the signal to kill the wsgi process'
    os.kill(os.getpid(), signal.SIGINT)
return response
My assumption is that the daemon process will be killed at each request, after the response has been sent. I want to confirm this assumption: will my process be killed only after my response has been sent?
Also, is there another way I can solve this problem?
Thanks
EDIT: After the suggestion to set MaxRequestsPerChild to 1, I made the following changes to httpd.conf:
KeepAlive Off
Listen 12021
MaxSpareThreads 1
MinSpareThreads 1
MaxRequestsPerChild 1
ServerLimit 1
SetEnvIf X-Forwarded-SSL on HTTPS=1
ThreadsPerChild 1
WSGIDaemonProcess django processes=5 threads=1
But my process is not getting restarted at each request. Once the process is started it keeps on processing requests. Am I missing something?
This should be possible by setting the Apache directive MaxRequestsPerChild to 1, as described in http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxrequestsperchild
However, it will be global to every daemon process.
Update:
Or you could check the maximum-requests option of WSGIDaemonProcess, as described in http://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines
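For reference, a minimal sketch of that option (maximum-requests is the documented WSGIDaemonProcess option; the process/thread counts are just the ones from the question):

WSGIDaemonProcess django processes=5 threads=1 maximum-requests=1
WSGIProcessGroup django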
Don't do this. There's a reason why WSGI processes last longer than a single request - you say you're not worried about performance, but that's still no reason to do this. I don't know why you think you need to - plenty of people run multiple Django sites on one server, and none of them have ever felt the need to kill the process after each request.
Instead you need to ensure each site runs in its own WSGI process. See the mod_wsgi documentation for how to do this: Application Groups seems like a good place to start.
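As a rough sketch of that setup (the directives are standard mod_wsgi; the site names and script paths are placeholders):

# one daemon process group per site, so each keeps its own settings module and caches
WSGIDaemonProcess installation1 processes=2 threads=15
WSGIDaemonProcess installation2 processes=2 threads=15

WSGIScriptAlias /installation1 /path/to/installation1/django.wsgi
WSGIScriptAlias /installation2 /path/to/installation2/django.wsgi

<Location /installation1>
    WSGIProcessGroup installation1
    WSGIApplicationGroup %{GLOBAL}
</Location>
<Location /installation2>
    WSGIProcessGroup installation2
    WSGIApplicationGroup %{GLOBAL}
</Location>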
I have a Flask site deployed to IIS via wfastcgi configuration.
When I use the Chrome or Firefox developer tools to analyse the loading time of the homepage, I see many seconds (ranging from 6 to 10 on average) of waiting time before the first byte is received.
It was even 30 seconds before, but then I "optimized" my Python code to avoid any database operation at load time. Then I followed the hints in this nspointers blog post, and now in the server's Task Manager I see the w3wp.exe for my app pool identity
w3wp.exe – It is the IIS worker process for the application pool
staying up and running even during idle time. But that is not true for the other
python.exe – The main FastCGI process for the Django or flask
applications.
and I'm not sure whether this is a problem and, if it is, what I am supposed to do about it, aside from step 4 described in the mentioned post:
Now in the “Edit FastCGI Application” dialog under “Process Model”
edit the “Idle Timeout” and set it to 2592000 which is the max value
for that field in seconds
I've also looked at the log written by the Flask app and compared it with the log written by IIS, and this is the main point that makes me believe the issue lies in the wfastcgi part, before my Python code even starts executing: the time-taken in the IIS log matches the TTFB reported by Chrome or Firefox on the client side, and the line written by Python at the start of execution is logged at almost the same moment as the time written by IIS, which corresponds to the time the request finished (as I thought, and as is confirmed by this answer).
So in conclusion, based on what I tried and what I understand, I suspect that IIS is "wasting" many seconds "preparing" the Python wfastcgi command before actually starting to execute my app code to produce a response for the web request. That is really too much in my opinion, since other applications I've developed under IIS without this wfastcgi mechanism (for example in F# WebSharper) load immediately in the browser, and the difference in response time between them and the Python Flask app is quite noticeable. Is there anything else I can do to improve the response time?
OK, now I have the proof I was searching for, and I know where the server is actually spending the time.
I researched wfastcgi a bit and finally opened the script itself under venv\Lib\site-packages.
Skimming over its roughly 900 lines, you can spot the relevant log function:
def log(txt):
    """Logs messages to a log file if WSGI_LOG env var is defined."""
    if APPINSIGHT_CLIENT:
        try:
            APPINSIGHT_CLIENT.track_event(txt)
        except:
            pass
    log_file = os.environ.get('WSGI_LOG')
    if log_file:
        with open(log_file, 'a+', encoding='utf-8') as f:
            txt = txt.replace('\r\n', '\n')
            f.write('%s: %s%s' % (datetime.datetime.now(), txt, '' if txt.endswith('\n') else '\n'))
Knowing how to set the environment variables, I defined a specific WSGI_LOG path, and here we go: I now see those 5 seconds of TTFB reported by Chrome (as well as the same 5 seconds in the IIS log, with time 11:23:26 and time-taken 5312) in the wfastcgi.py log.
2021-02-01 12:23:21.452634: wfastcgi.py 3.0.0 initializing
2021-02-01 12:23:26.624620: wfastcgi.py 3.0.0 started
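For anyone wanting to reproduce this: one way to define WSGI_LOG is through the <appSettings> section of web.config, which wfastcgi copies into the environment (the log path below is just a placeholder):

<configuration>
  <appSettings>
    <!-- placeholder path; when set, wfastcgi.py appends its log lines here -->
    <add key="WSGI_LOG" value="C:\logs\wfastcgi.log" />
  </appSettings>
</configuration>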
So, of course, wfastcgi.py is the script one would try to optimize...
BTW, after digging into it, that time is spent importing the main Flask app:
handler = __import__(module_name, fromlist=[name_list[0][0]])
What remains to be verified is whether the process (and the time-consuming import of the main Flask module) is re-run for each request.
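One way to verify that (a small sketch, independent of wfastcgi itself, with a made-up log path): record the process id and a timestamp at module import time in the Flask app and again per request; if the pid and the import timestamp change on every request, the process and the costly import are indeed being re-run each time.

# sketch: log when the module is imported and which process serves each request
import datetime
import logging
import os

from flask import Flask

logging.basicConfig(filename=r"C:\logs\flask_pid.log", level=logging.INFO)  # placeholder path
logging.info("module imported in pid %s at %s", os.getpid(), datetime.datetime.now())

app = Flask(__name__)

@app.before_request
def log_request_pid():
    # a stable pid across requests means the FastCGI process (and the import) is being reused
    logging.info("request handled by pid %s", os.getpid())

@app.route("/")
def index():
    return "ok"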
In conclusion, I guess it is a BUG, but I have solved it by clearing the "Monitor changes to file" setting in the IIS FastCGI configuration.
The response time is now under a second.
I have a different suggestion: try switching over to the HTTP Platform Handler for your IIS-fronted Flask app.
Config Reference
This is also the recommended option by Microsoft:
Your app's web.config file instructs the IIS (7+) web server running on Windows about how it should handle Python requests through either HttpPlatform (recommended) or FastCGI.
https://learn.microsoft.com/en-us/visualstudio/python/configure-web-apps-for-iis-windows?view=vs-2019
Example config can be:
<configuration>
  <system.webServer>
    <handlers>
      <add name="httpplatformhandler" path="*" verb="*" modules="httpPlatformHandler" resourceType="Unspecified"/>
    </handlers>
    <httpPlatform processPath="c:\inetpub\wwwroot\run.cmd"
                  arguments="%HTTP_PLATFORM_PORT%"
                  stdoutLogEnabled="true"
                  stdoutLogFile="c:\inetpub\Logs\logfiles\python_app.log"
                  processPerApplication="2"
                  startupTimeLimit="60"
                  requestTimeout="00:01:05"
                  forwardWindowsAuthToken="True"
                  >
      <environmentVariables>
        <environmentVariable name="FLASK_RUN_PORT" value="%HTTP_PLATFORM_PORT%" />
      </environmentVariables>
    </httpPlatform>
  </system.webServer>
</configuration>
With run.cmd being something like
cd %~dp0
.venv\scripts\waitress-serve.exe --host=127.0.0.1 --port=%1 myapp:wsgifunc
Note that the HTTP Platform Handler will dynamically pick a port and pass it into the Python process via the HTTP_PLATFORM_PORT variable, forwarded here as the FLASK_RUN_PORT env var, which Flask will automatically take as its port configuration.
Security notes:
Make sure you bind your Flask app to localhost only, so it is not directly reachable from the outside - especially if you are using authentication via IIS.
In the above example forwardWindowsAuthToken is set, which lets you rely on the Windows Integrated Authentication performed by IIS: the token is passed over to Python, and you can get the authenticated user name from it. I have documented that here. I actually use this for single sign-on with Kerberos and AD-group-based authorization, and it works really nicely.
Example of listening only on localhost / the loopback adapter, to avoid external requests hitting the Python app directly, in case you want all requests to go via IIS:
if __name__ == "__main__":
    app.run(host="127.0.0.1")
I have two Django applications running on AWS Lightsail. The first one works great with both www.firstapp.com and firstapp.com, but when I try to visit the second app without www in the URL, it returns 400 Bad Request. In both apps DEBUG is set to False, and I have the necessary hosts in settings.py like this:
ALLOWED_HOSTS = [
    '.secondapp.com'
]
I have tried '*' and also tried writing out all possible hosts in ALLOWED_HOSTS, but it didn't work. I can see the site at www.secondapp.com, but secondapp.com always returns Bad Request (400).
After any update to settings.py I always restart Apache (I tried reload as well); nothing changes, still 400 Bad Request. Any ideas? Maybe I need to set up something on the AWS side - this is my first experience with Django.
For anyone facing this kind of issue: check your VirtualHost configuration. In mine, ServerName was set to www.secondapp.com; when I added ServerAlias secondapp.com, it worked. Now I can reach my app at both www.secondapp.com and secondapp.com.
P.S.: However, I don't have a ServerAlias for the first application and it still works at both www.firstapp.com and firstapp.com; I'm not sure why this causes an issue only for the second one.
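For completeness, a sketch of the relevant part of that VirtualHost (the port and remaining directives are placeholders):

<VirtualHost *:80>
    ServerName www.secondapp.com
    # without this alias, requests for the bare domain do not match this vhost
    # (they likely land in the default vhost, whose Django app rejects the unknown host with a 400)
    ServerAlias secondapp.com
    # ... existing WSGI/Django directives unchanged ...
</VirtualHost>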
I am fairly new to using uWSGI, nginx with ModSecurity and PageSpeed, and Django. When I comment out the lines:
ModSecurityEnabled on;
ModSecurityConfig modsec_includes.conf;
in my mysite_nginx.conf I am able to log into the django admin account as expected, but when I enable them, I get
my.server.ip.address didn’t send any data.
ERR_EMPTY_RESPONSE
in my browser when I try to log in. My nginx error log shows no ModSecurity errors. The only error it shows is:
2017/01/26 12:08:13 [alert] 3521#0: worker process 8640 exited on signal 11
Since everything seems to be working fine when modsecurity is off, presumably the problem is arising here.
1. Compile the ModSecurity branch v3/master (necessary because you need the shared core library, libmodsecurity).
2. Compile the ModSecurity-nginx connector.
3. Compile nginx with the ModSecurity-nginx module added via --add-module.
4. Clone the OWASP rule set and configure it for nginx.
This tutorial walks through all of it:
https://www.howtoforge.com/tutorial/nginx-with-libmodsecurity-and-owasp-modsecurity-core-rule-set-on-ubuntu-1604/
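With the v3 connector the nginx directives also change: instead of the ModSecurityEnabled/ModSecurityConfig pair used above, the setup looks roughly like this (paths are placeholders; see the linked tutorial for the full build):

# nginx.conf, top level: load the connector if it was built as a dynamic module
load_module modules/ngx_http_modsecurity_module.so;

# inside the server or location block to protect
modsecurity on;
modsecurity_rules_file /etc/nginx/modsec/main.conf;   # pulls in modsecurity.conf and the OWASP CRS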
I have a Django app running on a gunicorn server with nginx in front.
I need to diagnose a production failure with an HTTP 500 outcome, but the error log files do not contain the information I would expect. The setup:
- gunicorn has the setting errorlog = "/somepath/gunicorn-errors.log"
- nginx has the setting error_log /somepath/nginx-errors.log;
- My app has an InternalErrorView whose dispatch does an unconditional raise Exception("Just for testing.")
- That view is mapped to the URL /fail_now
- I have not modified handler500
When I run my app with DEBUG=True and have my browser request
/fail_now, I see the usual Django error screen alright, including
the "Just for testing." message. Fine.
When I run my app with DEBUG=False, I get a response that consists
merely of <h1>Server Error (500)</h1>, as expected. Fine.
However, when I look into gunicorn-errors.log, there is no entry
for this HTTP 500 event at all. Why? How can I get it?
I would like to get a traceback.
Likewise in nginx-errors.log: No trace of a 500 or the /fail_now URL.
Why?
Bonus question:
When I compare this to my original production problem, I am getting
a different response there: a 9-line document with
<h1><p>Internal Server Error</p></h1> as the central message.
Why?
Bonus question 2:
When I copy my database contents to my staging server (which is identical
in configuration to the production server) and set
DEBUG=True in Django there, /fail_now works as expected, but my original
problem still shows up as <h1><p>Internal Server Error</p></h1>.
WTF?
OK, it took a while, but I found it all out:
1. The <h1>Server Error (500)</h1> response comes from Django's django.views.defaults.server_error (if no 500.html template exists).
2. The <h1><p>Internal Server Error</p></h1> from the bonus question comes from gunicorn's gunicorn.workers.base.handle_error.
3. nginx logs the 500 error in the access log file, not the error log file; presumably because it was not nginx itself that failed.
4. For /fail_now, gunicorn will also log the problem in the access log, not the error log; again presumably because gunicorn as such has not failed, only the application has.
5. My original problem did actually appear in the gunicorn error log, but I had never searched for it there, because I had introduced that log file only freshly (I had relied on Docker logs output before, which is pretty confusing) and assumed it would be better to use the very explicit InternalErrorView for initial debugging. (This was an idea that was wrong in an interesting way.)
6. My actual programming error involved sending a response with a Content-Disposition header (generated in Django code) like this: attachment; filename="dag-wönnegården.pdf". The special characters are apparently capable of making gunicorn stumble when it processes this response; see the sketch below.
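In case someone runs into the same problem: a common workaround is to keep the filename parameter ASCII-only and put the original name into an RFC 5987 encoded filename* parameter, so the header value itself stays ASCII. A rough sketch (the view and the fallback name are made up):

# sketch of a Django view that sends a non-ASCII download name safely
from urllib.parse import quote

from django.http import HttpResponse

def download(request):
    filename = "dag-wönnegården.pdf"
    response = HttpResponse(b"%PDF-1.4 ...", content_type="application/pdf")
    # ASCII fallback plus RFC 5987 encoded UTF-8 name; the header now contains only ASCII bytes
    response["Content-Disposition"] = (
        'attachment; filename="dag-woennegaarden.pdf"; '
        "filename*=UTF-8''" + quote(filename)
    )
    return response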
Writing the question helped me considerably with diagnosing this situation.
Now if this response helps somebody else,
the StackOverflow magic has worked once again.
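And regarding the wish for a traceback: Django reports unhandled view exceptions to its django.request logger, so a LOGGING entry along these lines in settings.py (the log path is a placeholder) should write the full traceback even with DEBUG=False:

# settings.py (sketch)
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "errors_file": {
            "class": "logging.FileHandler",
            "filename": "/somepath/django-errors.log",  # placeholder
        },
    },
    "loggers": {
        # unhandled view exceptions (HTTP 500) arrive here together with their tracebacks
        "django.request": {
            "handlers": ["errors_file"],
            "level": "ERROR",
            "propagate": True,
        },
    },
}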
Maybe the 500 server response is logged in the access_log, not the errorlog. In the nginx default file:
access_log /var/log/nginx/example.log;
I think the <h1><p>Internal Server Error</p></h1> is generated by nginx in production.
With DEBUG=False, a raised exception is treated as an error / HTTP 500, so unless you changed the view for handler500, the default 500 error page will be displayed.
With DEBUG=True, the raised exception is displayed in Django's fancy debug page.
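If a custom error page is wanted instead of that default, handler500 can be pointed at your own view in the root URLconf; a minimal sketch with made-up module names:

# myproject/views.py (illustrative)
from django.http import HttpResponseServerError

def server_error(request):
    # rendered for any unhandled exception once DEBUG is False
    return HttpResponseServerError("<h1>Something went wrong</h1>")

# myproject/urls.py -- register the custom view
handler500 = "myproject.views.server_error"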
I'm running a site using Django in a shared environment (Dreamhost), though locally I run Django 1.4.
Sometimes I get hit by many, many Apache dummy connections (e.g., [10/Jul/2012:00:49:16 -0700] "OPTIONS * HTTP/1.0" 200 136 "-" "Apache (internal dummy connection)"), which makes the site unresponsive (it either gets killed for resource consumption or hits the connection limit).
This does not happen to the other sites on this account (though none of them run Django). I'm trying to figure out a way to prevent this from happening, but I'm not sure what troubleshooting process to use. Guidance on the process, or on common sources of this issue, would be useful.
Try:
<Limit OPTIONS>
    Order allow,deny
    Deny from all
</Limit>
This would cause Apache to return a 403 Forbidden for those requests, so they would not be handed off to any Django application - assuming the issue is that they are currently getting through to the application.