I have a Python script written with Flask which requires some preparation work (connecting to databases, acquiring other resources, etc.) before it can actually accept requests.
I was using it under Apache HTTPD with wsgi. Apache config:
WSGIDaemonProcess test user=<uid> group=<gid> threads=1 processes=4
WSGIScriptAlias /flask/test <path>/flask/test/data.wsgi process-group=test
And it was working fine: Apache would start 4 completely separate processes, each with its own database connection.
I am now trying to switch to uwsgi + nginx. nginx config:
location /flask/test/ {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/uwsgi.sock;
}
uwsgi:
uwsgi -s /tmp/uwsgi.sock --mount /flask/test=test.py --callable app --manage-script-name --processes=4 --master
The simplified script test.py:
from flask import Flask, Response

app = Flask(__name__)

def do_some_preparation():
    print("Prepared!")

@app.route("/test")
def get_test():
    return Response("test")

do_some_preparation()

if __name__ == "__main__":
    app.run()
What I would expect is to see "Prepared!" 4 times in the output. However, uwsgi does not do that, output:
Python main interpreter initialized at 0x71a7b0
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 363800 bytes (355 KB) for 4 cores
*** Operational MODE: preforking ***
mounting test.py on /flask/test
Prepared! <======================================
WSGI app 0 (mountpoint='/flask/test') ready in 0 seconds ...
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 1212)
spawned uWSGI worker 1 (pid: 1216, cores: 1)
spawned uWSGI worker 2 (pid: 1217, cores: 1)
spawned uWSGI worker 3 (pid: 1218, cores: 1)
spawned uWSGI worker 4 (pid: 1219, cores: 1)
So, in this simplified example, uwsgi spawned 4 workers but executed do_some_preparation() only once. In the real application there are several database connections opened, and apparently those are being reused by these 4 processes, causing issues with concurrent requests.
Is there a way to tell uwsgi to spawn several completely separate processes?
EDIT: I could, of course, get it working with a workaround like:
from flask import Flask, Response

app = Flask(__name__)
all_prepared = False

def do_some_preparation():
    global all_prepared
    all_prepared = True
    print("Prepared!")

@app.route("/test")
def get_test():
    if not all_prepared:
        do_some_preparation()
    return Response("test")

if __name__ == "__main__":
    app.run()
But then I would have to place this "all_prepared" check into every route, which does not seem like a good solution.
By default uWSGI does preforking: your app is loaded one time, and then forked.
If you want to load the app once per worker instead, add --lazy-apps to the uWSGI options.
By the way, in both cases you get true multiprocessing :)
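The difference between the two modes can be sketched with plain multiprocessing (a sketch, not uWSGI itself; init_resource stands in for do_some_preparation):

```python
import multiprocessing as mp

def init_resource(counter):
    # stands in for do_some_preparation(): count how many times it runs
    with counter.get_lock():
        counter.value += 1

def worker_prefork(counter):
    # preforking (the default): the master already ran init_resource(),
    # and workers simply inherit the result of that single run
    pass

def worker_lazy(counter):
    # --lazy-apps: every worker loads the app, so the init runs per worker
    init_resource(counter)

def spawn_workers(target, counter, n=4):
    procs = [mp.Process(target=target, args=(counter,)) for _ in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

if __name__ == "__main__":
    prefork = mp.Value("i", 0)
    init_resource(prefork)                  # master loads the app once...
    spawn_workers(worker_prefork, prefork)  # ...then forks: no re-init
    lazy = mp.Value("i", 0)
    spawn_workers(worker_lazy, lazy)        # each worker initializes itself
    print(prefork.value, lazy.value)        # 1 4
```

With preforking, anything created before the fork (database sockets included) is shared by all workers; with --lazy-apps each worker builds its own.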
It seems I have found the answer myself, and the answer is: my code should have been redesigned as:
@app.before_first_request
def do_some_preparation():
    ...
Then Flask will take care of running do_some_preparation() function for each worker separately, allowing each one to have its own database connection (or other concurrency-intolerant resource).
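The same per-worker setup can also be done without before_first_request (newer Flask versions have removed that hook) by initializing lazily, keyed by process id; a minimal sketch, where connect() is a hypothetical stand-in for the real database connect:

```python
import os

_connections = {}  # pid -> connection, so each forked worker gets its own

def connect():
    # hypothetical stand-in for the real database connect
    return "connection-for-pid-%d" % os.getpid()

def get_connection():
    # lazy per-process initialization: nothing is created in the master
    # before forking, so workers never end up sharing one connection
    pid = os.getpid()
    if pid not in _connections:
        _connections[pid] = connect()
    return _connections[pid]

print(get_connection() is get_connection())  # True: cached within this process
```

Routes then call get_connection() instead of touching a module-level connection, which keeps the check out of the route bodies themselves.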
Related
The following code approximates the real code. On start, the Flask app creates a worker thread; the routing function uses data produced by the worker function.
import os
import platform
import time
from datetime import datetime
from threading import Thread

from flask import Flask

app = Flask(__name__)
timeStr = ""

def loop():
    global timeStr
    while True:
        time.sleep(2)
        timeStr = datetime.now().replace(microsecond=0).isoformat()
        print(timeStr)

ThreadID = Thread(target=loop)
ThreadID.daemon = True
ThreadID.start()

@app.route('/')
def test():
    return os.name + " " + platform.platform() + " " + timeStr

application = app

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8080, debug=True)
The above app works beautifully for days when started like this:
python3 app.py
However, under uwsgi, even though I have enabled threads, the app is not working: it never updates the global timeStr.
sudo /usr/local/bin/uwsgi --wsgi-file /home/pi/pyTest/app.py --http :80 --touch-reload /home/pi/pyTest/app.py --enable-threads --stats 127.0.0.1:9191
What do I need to do for the app to function correctly under uWSGI, so that I can create a systemd service the proper way?
Bad news, good news.
I have an app that starts a worker thread in much the same way. It uses a queue.Queue to let routes pass work to the worker thread. The app has been running happily on my home intranet (on a Raspberry Pi) using the Flask development server. I tried putting my app behind uwsgi and observed the same failure: the worker thread didn't appear to get scheduled. The thread reported _is_alive = True, but I couldn't find a uwsgi switch combination that let it actually run.
Using gunicorn resolved the issue.
virtualenv venv --python=python3
. venv/bin/activate
pip install flask gunicorn
gunicorn -b 0.0.0.0:5000 demo:app
was enough to get my app to work (meaning the worker thread actually ran, and side-effects were noticeable).
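The queue.Queue hand-off described above can be sketched without Flask (a minimal sketch; the doubling job stands in for the real work a route would enqueue):

```python
import queue
import threading

work_q = queue.Queue()
results = []

def worker():
    # background thread: consume jobs that the routes enqueue
    while True:
        job = work_q.get()
        if job is None:          # sentinel to stop the worker
            break
        results.append(job * 2)  # stand-in for the real side effect
        work_q.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

# a route handler would simply do: work_q.put(payload)
work_q.put(21)
work_q.join()                    # wait until the worker calls task_done()
print(results)                   # [42]
```

Under gunicorn's sync workers the thread lives inside each worker process, so each worker has its own queue and its own results; that is fine as long as routes in the same process are the only consumers.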
I'm building a server with Flask/Gunicorn and Nginx. My script (the Flask server) does two things using threading:
connect to an MQTT broker
run the Flask server
But when I try using gunicorn (gunicorn --bind 0.0.0.0:5000 wsgi:app), the first thread doesn't run.
Here is the code (not complete):
import threading

def run_mqtt():
    while True:
        mqtt_client.connect(mqtt_server, port=mqtt_port)

def run_server():
    app.run(host='0.0.0.0', port=5000, debug=False)

if __name__ == '__main__':
    t1 = threading.Thread(target=run_mqtt)
    t2 = threading.Thread(target=run_server)
    t1.daemon = True
    t2.daemon = True
    t1.start()
    t2.start()
Please help me, I have to find the solution very fast! Thanks!!
Gunicorn is based on the pre-fork worker model: when it starts, it has a master process and spawns off worker processes as necessary. Most likely the first thread did run, but you lost track of it among the preforked worker processes.
If you want a background thread that Flask controllers can interact with and share memory with, gunicorn is unlikely to be a good solution for you.
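The underlying reason is that fork() copies only the calling thread into the child, so a thread started before forking simply does not exist in the workers; a small demonstration (POSIX-only, since it uses os.fork):

```python
import os
import threading
import time

def ticker():
    # stand-in for the background worker: just sleeps in a loop
    while True:
        time.sleep(0.1)

t = threading.Thread(target=ticker, daemon=True)
t.start()
threads_before_fork = threading.active_count()  # main thread + ticker

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # child ("worker") process: fork() copies only the calling thread,
    # so the ticker thread does not exist here
    os.write(w, str(threading.active_count()).encode())
    os._exit(0)
os.waitpid(pid, 0)
threads_in_child = int(os.read(r, 16))
print(threads_before_fork, threads_in_child)  # 2 1
```

This is why moving the thread start into per-worker initialization (or using a server that loads the app after forking) is required when the thread must exist in every worker.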
Welcome stackoverflowers. I've been fighting with setting up nginx with uwsgi for a Django app... There has to be a small mistake somewhere, but I can't find it. Below are the files directly related to my issue, and also a console log. I would be very grateful if somebody could take a look and help me out.
artcolor_uwsgi.ini file
[uwsgi]
chdir = /home/seb/pypassion/artcolor/src/
module = artcolor.wsgi
home = /home/seb/pypassion/artcolor/artcolor_venv/
master = true
processes = 10
socket = /home/seb/pypassion/artcolor/src/artcolor.sock
#http-socket = :8001
#vacuum = true
artcolor_nginx.conf file
upstream django {
    server unix:/home/seb/pypassion/artcolor/src/artcolor.sock; # for a file socket
    #server 127.0.0.1:8001;
}

# configuration of the server
server {
    # the port your site will be served on
    listen 8001;
    server_name localhost; # substitute your machine's IP address or FQDN
    charset utf-8;

    access_log /home/seb/pypassion/artcolor/logs/nginx-access.log;
    error_log /home/seb/pypassion/artcolor/logs/nginx-error.log;

    # max upload size
    client_max_body_size 1G; # adjust to taste

    # Django media
    location /media/ {
        alias /home/seb/pypassion/artcolor/src/media/; # your Django project's media files - amend as required
    }

    location /static/ {
        alias /home/seb/pypassion/artcolor/src/static/; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include uwsgi_params; # the uwsgi_params file you installed
    }
}
wsgi.py file
import os
import sys
sys.path.append("/home/seb/pypassion/artcolor/src/")
sys.path.append("/home/seb/pypassion/artcolor/src/artcolor/")
sys.path = sys.path[::-1]
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "artcolor.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
my console
(artcolor_venv)seb#debian:~/pypassion/artcolor/src$ uwsgi --ini artcolor_uwsgi.ini
[uWSGI] getting INI configuration from artcolor_uwsgi.ini
*** Starting uWSGI 2.0.9 (64bit) on [Fri Feb 27 11:48:45 2015] ***
compiled with version: 4.7.2 on 27 February 2015 11:00:34
os: Linux-3.2.0-4-amd64 #1 SMP Debian 3.2.65-1+deb7u2
nodename: debian
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 8
current working directory: /home/seb/pypassion/artcolor/src
detected binary path: /home/seb/pypassion/artcolor/artcolor_venv/bin/uwsgi
chdir() to /home/seb/pypassion/artcolor/src/
your processes number limit is 63796
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /home/seb/pypassion/artcolor/src/artcolor.sock fd 3
Python version: 2.7.3 (default, Mar 13 2014, 11:26:58) [GCC 4.7.2]
Set PythonHome to /home/seb/pypassion/artcolor/artcolor_venv/
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x20e9d30
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 800448 bytes (781 KB) for 10 cores
*** Operational MODE: preforking ***
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x20e9d30 pid: 14848 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 14848)
spawned uWSGI worker 1 (pid: 14849, cores: 1)
spawned uWSGI worker 2 (pid: 14850, cores: 1)
spawned uWSGI worker 3 (pid: 14851, cores: 1)
spawned uWSGI worker 4 (pid: 14852, cores: 1)
spawned uWSGI worker 5 (pid: 14853, cores: 1)
spawned uWSGI worker 6 (pid: 14854, cores: 1)
spawned uWSGI worker 7 (pid: 14855, cores: 1)
spawned uWSGI worker 8 (pid: 14856, cores: 1)
spawned uWSGI worker 9 (pid: 14857, cores: 1)
spawned uWSGI worker 10 (pid: 14858, cores: 1)
SOLVED
I managed to resolve the problem. The cause was a wrongly linked nginx conf file: I accidentally linked it into sites-available when I was supposed to link it into sites-enabled.
Thanks
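For reference, the fix amounts to symlinking the conf into sites-enabled, since nginx does not read sites-available on its own; a sketch (the scratch-directory part only demonstrates the mechanics, and the real paths are assumptions based on a standard Debian nginx layout):

```shell
# demonstrate the mechanics in a scratch directory:
mkdir -p /tmp/demo/sites-available /tmp/demo/sites-enabled
echo "server { listen 8001; }" > /tmp/demo/sites-available/artcolor_nginx.conf
ln -sf /tmp/demo/sites-available/artcolor_nginx.conf /tmp/demo/sites-enabled/
ls /tmp/demo/sites-enabled/

# on the real system the equivalent would be:
#   sudo ln -s /etc/nginx/sites-available/artcolor_nginx.conf /etc/nginx/sites-enabled/
#   sudo nginx -t && sudo service nginx reload
```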
I am trying to deploy a Django app using nginx + uwsgi.
I created a virtual environment (virtualenv) and installed both uwsgi and Django inside it (i.e. local to the virtual environment); I have no global Django or uwsgi. When I run uwsgi --ini project.ini, I get an 'ImportError: No module named django.core.wsgi' exception:
from django.core.wsgi import get_wsgi_application
ImportError: No module named django.core.wsgi
unable to load app 0 (mountpoint='') (callable not found or import error)
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 5987)
spawned uWSGI worker 1 (pid: 5988, cores: 1)
spawned uWSGI worker 2 (pid: 5989, cores: 1)
spawned uWSGI worker 3 (pid: 5990, cores: 1)
spawned uWSGI worker 4 (pid: 5991, cores: 1)
Based on my search, it's recommended to put env and pythonpath variables in the ini if you are using Django 1.5 or lower. Since I am using Django 1.7, I did not include them. Here's my project.ini:
#project.ini file
[uwsgi]
# Django-related settings
# the base directory (full path)
chdir = /root/virtualenv/project
# Django wsgi file
module = project.wsgi:application
# the virtualenv (full path)
home = /root/virtualenv
# process-related settings
# master
master = true
# maximum number of worker processes
processes = 4
# the socket (use the full path to be safe)
socket = /root/virtualenv/project/project.sock
# ... with appropriate permissions - may be needed
chmod-socket = 666
chown-socket = root:root
# clear environment on exit
vacuum = true
# other config options
uid = root
gid = root
processes = 4
daemonize = /var/log/uwsgi/project.log
no-site = True
How can I fix this? I've been stuck on it for a day already.
Any ideas are greatly appreciated.
Thanks in advance!
Your module points to your project; shouldn't it point to your project's main app, so that it can find the wsgi file?
My INI file looks like this.
In my particular case I'm using a virtual environment, Django 1.7 and uwsgi.
vhost = true
plugins = python
socket = /tmp/noobmusic.sock
master = true
enable-threads = true
processes = 2
wsgi-file = /home/myname/webapps/music/music/music/wsgi.py
virtualenv = /home/myname/webapps/music/musicenv/
chdir = /home/myname/webapps/music/music/
This is the only site I've ever set up with uwsgi, as I typically use mod-wsgi, and unfortunately I don't remember all the steps.
I had a similar issue and solved it, after many hours, by making sure that uwsgi is installed with the same Python version (2 or 3) as your virtualenv. Otherwise it will not use your virtualenv and will start throwing 'cannot find module xyz' errors.
To install uwsgi under Python 3 you have to use pip3 (which in turn might need to be installed with something like 'apt-get install python3-pip').
When calling uwsgi on the CLI or via an .ini file, you need to reference your virtualenv by its full path, which ends one folder level above the folder containing /bin/; so for /example/myvenv/bin/activate the full path is /example/myvenv.
I made the uwsgi install global, outside of my virtualenv. I suppose the above applies/would work as well when installing uwsgi within the virtualenv, but I have not tried that (yet).
Keep the system-wide uWSGI on the same Python version as your virtual environment.
In my case, my virtual environment was Python 3.7, but the system default Python was 3.6.
After I uninstalled uWSGI and reinstalled the system-wide uWSGI with Python 3.7, the problem was resolved.
sudo pip uninstall uwsgi
sudo -H python3.7 -m pip install uwsgi
I can't see any problem in your configuration (though I'm not very good at these topics), so I can only suggest some steps to localize the problem.
Test uwsgi without using the virtualenv. Note that the virtual directory is just a directory, so add it to your PYTHONPATH and run uwsgi.
Before that you can try:
python -c 'import django.core.wsgi'
If that works, then the problem is in the uwsgi virtualenv configuration.
Test the virtualenv: activate it and check that the module can be imported.
If that works, then the problem is in uwsgi. Go back to the previous case.
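Those checks can be run from a shell; a sketch, where python3 stands in for whichever interpreter (system or virtualenv, e.g. /root/virtualenv/bin/python from the question) uwsgi should be using:

```shell
# see exactly which interpreter is active
python3 -c 'import sys; print(sys.executable)'

# check whether Django is importable from that interpreter
python3 -c 'import django.core.wsgi' && echo "import ok" || echo "import FAILED"
```

If the import succeeds here but fails under uwsgi, the mismatch is in uwsgi's virtualenv configuration (or uwsgi was built against a different Python).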
I am trying to set up a Django app on Amazon EC2 using Nginx + uWSGI, following these basic tutorials:
https://uwsgi.readthedocs.org/en/latest/tutorials/Django_and_nginx.html
http://www.yaconiello.com/blog/setting-aws-ec2-instance-nginx-django-uwsgi-and-mysql/#sthash.TsdnEDM8.oK2geOwb.dpbs
The Nginx welcome page appears OK, the instance is running, the load balancer is In Service, and Route 53 has an alias to the load balancer. But I can't see my app...
It appears that the app is running; I have tested it locally and it works.
I typed on terminal
uwsgi --ini myproject_uwsgi.ini
And get this
[uWSGI] getting INI configuration from myproject_uwsgi.ini
*** Starting uWSGI 1.9.15 (64bit) on [Wed Sep 11 06:14:04 2013] ***
compiled with version: 4.7.3 on 10 September 2013 09:27:00
os: Linux-3.8.0-19-generic #29-Ubuntu SMP Wed Apr 17 18:16:28 UTC 2013
nodename: ip-10-252-80-160
machine: x86_64
clock source: unix
detected number of CPU cores: 1
current working directory: /home/ubuntu/myproject
writing pidfile to /tmp/myproject-master.pid
detected binary path: /usr/local/bin/uwsgi
your processes number limit is 4569
your memory page size is 4096 bytes
*** WARNING: you have enabled harakiri without post buffering. Slow upload could be rejected on post-unbuffered webservers ***
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
uwsgi socket 0 bound to UNIX address /tmp/myproject.sock fd 3
Python version: 2.7.4 (default, Apr 19 2013, 18:30:41) [GCC 4.7.3]
Set PythonHome to /home/ubuntu/.virtualenvs/myproject
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x1235b30
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 363880 bytes (355 KB) for 4 cores
*** Operational MODE: preforking ***
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x1235b30 pid: 1500 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 1500)
spawned uWSGI worker 1 (pid: 1501, cores: 1)
spawned uWSGI worker 2 (pid: 1502, cores: 1)
spawned uWSGI worker 3 (pid: 1503, cores: 1)
spawned uWSGI worker 4 (pid: 1504, cores: 1)
And I try to see the error.log and I get nothing...
EDIT
myproject_uwsgi.ini
[uwsgi]
# Django-related settings
chdir = /home/ubuntu/myproject
module = myproject.wsgi
home = /home/ubuntu/.virtualenvs/myproject
env = DJANGO_SETTINGS_MODULE=myproject.settings
# process-related settings
master = true
processes = 4
socket = /tmp/myproject.sock
chmod-socket = 664
harakiri = 20
vacuum = true
max-requests = 5000
pidfile = /tmp/myproject-master.pid
daemonize = /home/ubuntu/myproject/log/myproject.log
myproject_nginx.conf
# the upstream component nginx needs to connect to
upstream django {
    server unix:///tmp/myproject.sock;
    # server 127.0.0.1:8001;
}

# configuration of the server
server {
    listen 80;
    server_name myproject.com www.myproject.com;
    charset utf-8;

    root /home/ubuntu/myproject/;
    client_max_body_size 75M;

    location /media {
        alias /home/ubuntu/myproject/myproject/media;
    }

    location /static {
        alias /home/ubuntu/myproject/myproject/static;
    }

    location / {
        uwsgi_pass unix:///tmp/myproject.sock;
        include /home/ubuntu/myproject/uwsgi_params;
    }
}
I have finally made my app work... First I made it work over TCP, testing on port 8001, but the static files were returning error 404. So I wanted to have at least the app working through unix sockets, even with no static files...
I started changing nginx.conf and uwsgi.ini to sockets and started receiving error 502. Much better than yesterday's errors (unable to connect).
Searching and reading through the web and SO, I found this: 502 error with nginx + uwsgi + django
I can't vote or comment yet, but thanks @zzart!!
So I added to my uwsgi.ini:
uid = www-data
gid = www-data
chmod-socket = 777
Yesterday I had added the uid (www-data), gid (www-data) and chmod-socket = 664 or 644, but that did not work for me on Amazon EC2. With 777 it works, and the static files are working too.
Now I will drink a beer, and tomorrow I will change the security groups, load balancer and Route 53.
Hope it helps others.
Simple example using a FastCGI daemon on Amazon EC2:
django.sh -> https://gist.github.com/romuloigor/5707566
nginx.conf -> https://gist.github.com/romuloigor/5707527