I'm building a server with Flask/Gunicorn and Nginx. My script (the Flask server) uses 'threading' to do two things:
connect to MQTT broker
run a flask server
But when I try using gunicorn: gunicorn --bind 0.0.0.0:5000 wsgi:app, the first thread doesn't run.
Here is the code (not complete):
import threading

def run_mqtt():
    while True:
        mqtt_client.connect(mqtt_server, port=mqtt_port)

def run_server():
    app.run(host='0.0.0.0', port=5000, debug=False)

if __name__ == '__main__':
    t1 = threading.Thread(target=run_mqtt)
    t2 = threading.Thread(target=run_server)
    t1.daemon = True
    t2.daemon = True
    t1.start()
    t2.start()
Please help me, I have to find a solution fast! Thanks!!
Gunicorn is based on the pre-fork worker model: when it starts, a master process spawns worker processes as necessary. Note also that gunicorn imports wsgi:app rather than executing your script, so the if __name__ == '__main__': block where you start the threads never runs at all; and even if you start a thread at import time, each prefork worker gets its own copy of it, which is easy to lose track of.
If you want to have a background thread that flask controllers can interact with and share memory with, it's unlikely that gunicorn is a good solution for you.
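That said, if one MQTT thread per worker is acceptable, here is a minimal sketch of moving the thread start to module level so each worker runs its own MQTT loop. The paho-mqtt library, the module name myapp and the broker details are assumptions for illustration, not from the question:
# wsgi.py -- a minimal sketch, not the asker's actual file
import threading
import paho.mqtt.client as mqtt  # assumption: paho-mqtt is the MQTT library in use

from myapp import app  # assumption: the Flask app lives in a module named myapp

mqtt_server = "broker.example.com"  # placeholder broker details
mqtt_port = 1883
mqtt_client = mqtt.Client()

def run_mqtt():
    # connect once; paho's network loop handles keepalives and reconnects
    mqtt_client.connect(mqtt_server, port=mqtt_port)
    mqtt_client.loop_forever()

# started at import time, so every gunicorn worker that imports wsgi:app
# runs its own MQTT thread (one broker connection per worker process)
threading.Thread(target=run_mqtt, daemon=True).start()
If you need exactly one MQTT connection for the whole service rather than one per worker, run the MQTT client as a separate process outside gunicorn instead.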
Related
The following code approximates my real code. On startup the Flask app creates a worker thread; the routing function uses data processed by the worker function.
import os
import platform
import time
from datetime import datetime
from threading import Thread

from flask import Flask

app = Flask(__name__)
timeStr = ""

def loop():
    global timeStr
    while True:
        time.sleep(2)
        timeStr = datetime.now().replace(microsecond=0).isoformat()
        print(timeStr)

ThreadID = Thread(target=loop)
ThreadID.daemon = True
ThreadID.start()

@app.route('/')
def test():
    return os.name + " " + platform.platform() + " " + timeStr

application = app

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8080, debug=True)
The above app works beautifully for days when started like this:
python3 app.py
However, under uwsgi, even though I have enabled threads, the app is not working: it is not updating the global timeStr.
sudo /usr/local/bin/uwsgi --wsgi-file /home/pi/pyTest/app.py --http :80 --touch-reload /home/pi/pyTest/app.py --enable-threads --stats 127.0.0.1:9191
What do I need to do for the app to function correctly under uWSGI, so that I can create the systemd service the proper way?
Bad news, good news.
I have an app that starts a worker thread in much the same way. It uses a queue.Queue to let routes pass work to the worker thread. The app has been running happily on my home intranet (on a Raspberry Pi) using the Flask development server. I tried putting my app behind uwsgi and observed the same failure: the worker thread didn't appear to get scheduled. The thread reported is_alive() as True, but I couldn't find a uwsgi switch combination that let it actually run.
Using gunicorn resolved the issue.
virtualenv venv --python=python3
. venv/bin/activate
pip install flask gunicorn
gunicorn -b 0.0.0.0:5000 demo:app
was enough to get my app to work (meaning the worker thread actually ran, and side-effects were noticeable).
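For reference, a minimal sketch of the worker-thread-plus-queue.Queue pattern described above; the route and names are illustrative, not the author's actual app:
# demo.py -- a minimal sketch of the pattern, served by gunicorn as demo:app
import queue
import threading
from flask import Flask

app = Flask(__name__)
work_queue = queue.Queue()

def worker():
    while True:
        item = work_queue.get()      # blocks until a route hands over some work
        print("processing", item)    # stand-in for the real side effect
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

@app.route("/enqueue/<name>")
def enqueue(name):
    work_queue.put(name)             # the route only enqueues; the thread does the work
    return "queued " + name
Started with gunicorn -b 0.0.0.0:5000 demo:app as above, the thread is created when each worker imports the module, so it runs inside each worker process.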
I have a test Django site running as a mod_wsgi daemon process. I have set up a simple Celery task to email a contact form, and have installed supervisor.
I know that the Django code is correct.
The problem I'm having is that when I submit the form, I am only getting one message - the first one. Subsequent completions of the contact form do not send any message at all.
On my server, I have another test site with a configured supervisor task running which uses the Django server (ie it's not using mod_wsgi). Both of my tasks are running fine if I do
sudo supervisorctl status
Here is my conf file for the task described above, saved in /etc/supervisor/conf.d. The user in this instance is called myuser.
[program:test_project]
command=/home/myuser/virtualenvs/test_project_env/bin/celery -A test_project worker --loglevel=INFO --concurrency=10 -n worker2@%%h
directory=/home/myuser/djangoprojects/test_project
user=myuser
numprocs=1
stdout_logfile=/var/log/celery/test_project.out.log
stderr_logfile=/var/log/celery/test_project.err.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
stopasgroup=true
; Set Celery priority higher than default (999)
; so, if rabbitmq is supervised, it will start first.
priority=1000
My other test site has this set as the command - note worker1@%%h
command=/home/myuser/virtualenvs/another_test_project_env/bin/celery -A another_test_project worker --loglevel=INFO --concurrency=10 -n worker1@%%h
I'm obviously doing something wrong, in that only my first form submission creates a task. If I look at the out.log file referred to above, I only see the first task; nothing is visible for the other form submissions.
Many thanks in advance.
UPDATE
I submitted the first form at 8.32am (GMT), which was received, and then as described above another one shortly thereafter, for which no task was created. Just after finishing this question, I submitted the form again at 9.15, and for this a task was created and the message received! I then submitted the form again, but once more no task was created. Hope this helps!
Use ps auxf | grep celery to see how many workers you have started. If there is another worker you started earlier and did not kill, that older worker will also consume tasks, which is why only one task out of every two or three (or more) submissions is received by this worker.
You also need to stop celery with:
sudo supervisorctl -c /etc/supervisord/supervisord.conf stop all
every time, and set this in supervisord.conf:
stopasgroup=true ; send stop signal to the UNIX process group (default false)
Otherwise it will cause memory leaks and regular task loss.
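To check for and clean up any stray workers left over from earlier runs, something along these lines (pkill -f matching the project name is just one option, not part of the original answer):
ps auxf | grep celery                          # list every running celery process
sudo supervisorctl -c /etc/supervisord/supervisord.conf stop all
pkill -f 'celery -A test_project'              # kill any leftover workers supervisor no longer tracks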
If you have multiple Django sites, here is a setup backed by RabbitMQ.
You need to add a RabbitMQ vhost and give your user permissions on that vhost:
sudo rabbitmqctl add_vhost {vhost_name}
sudo rabbitmqctl set_permissions -p {vhost_name} {username} ".*" ".*" ".*"
Each site uses a different vhost (but they can use the same user).
Add this to your Django settings.py:
BROKER_URL = 'amqp://username:password@localhost:5672/vhost_name'
Some more info here:
Using celeryd as a daemon with multiple django apps?
Running multiple Django Celery websites on same server
Run Multiple Django Apps With Celery On One Server With Rabbitmq VHosts
We have two servers, Server A and Server B. Server A is dedicated to running the Django web app. Due to the large amount of data, we decided to run the Celery tasks on Server B. Servers A and B use a common database. Tasks are initiated after a post_save in models on Server A's web app. How do I implement this idea using RabbitMQ in my Django project?
You have 2 servers, 1 project and 2 settings files (1 per server).
server A (web server + rabbit)
server B (only celery for workers)
Then you set up the broker url in both settings. Something like this:
BROKER_URL = 'amqp://user:password@IP_SERVER_A:5672//', where IP_SERVER_A is the IP of server A, in server B's settings.
This way, any task will be sent to the rabbit instance on server A, on the virtual host /.
On server B, you just need to start a celery worker, something like this:
python manage.py celery worker -Q queue_name -l info
And that's it.
Explanation: Django sends messages to rabbit to queue a task, then the celery workers pull those messages and execute the tasks.
Note: it is not required that RabbitMQ be installed on server A; you can install rabbit on a server C and reference it in the BROKER_URL of both settings (A and B) like this: BROKER_URL = 'amqp://user:password@IP_SERVER_C:5672//'.
Sorry for my English.
Greetings.
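A minimal sketch of what this can look like in the project; the app name myapp, the task, the queue name and the IP are placeholders for illustration, not from the question:
# settings.py -- deployed on both servers; 10.0.0.1 stands in for server A's IP
BROKER_URL = 'amqp://user:password@10.0.0.1:5672//'
CELERY_ROUTES = {'myapp.tasks.process_data': {'queue': 'queue_name'}}

# myapp/tasks.py
from celery import shared_task

@shared_task
def process_data(instance_pk):
    # heavy work; this body only ever runs on server B's worker
    print("processing", instance_pk)

# on server A, the post_save handler just enqueues the task:
# process_data.delay(instance.pk)
Server B then runs the worker with -Q queue_name as shown above, and server A never needs to run a worker process at all.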
I have a Python script written with Flask which requires some preparation work (connecting to databases, acquiring some other resources, etc.) before it can actually accept requests.
I was using it under Apache HTTPD with mod_wsgi. Apache config:
WSGIDaemonProcess test user=<uid> group=<gid> threads=1 processes=4
WSGIScriptAlias /flask/test <path>/flask/test/data.wsgi process-group=test
And it was working fine: Apache would start 4 completely separate processes, each with its own database connection.
I am now trying to switch to uwsgi + nginx. nginx config:
location /flask/test/ {
include uwsgi_params;
uwsgi_pass unix:/tmp/uwsgi.sock;
}
uwsgi:
uwsgi -s /tmp/uwsgi.sock --mount /flask/test=test.py --callable app --manage-script-name --processes=4 --master
The simplified script test.py:
from flask import Flask, Response

app = Flask(__name__)

def do_some_preparation():
    print("Prepared!")

@app.route("/test")
def get_test():
    return Response("test")

do_some_preparation()

if __name__ == "__main__":
    app.run()
What I would expect is to see "Prepared!" four times in the output. However, uwsgi does not do that; here is the output:
Python main interpreter initialized at 0x71a7b0
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 363800 bytes (355 KB) for 4 cores
*** Operational MODE: preforking ***
mounting test.py on /flask/test
Prepared! <======================================
WSGI app 0 (mountpoint='/flask/test') ready in 0 seconds ...
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 1212)
spawned uWSGI worker 1 (pid: 1216, cores: 1)
spawned uWSGI worker 2 (pid: 1217, cores: 1)
spawned uWSGI worker 3 (pid: 1218, cores: 1)
spawned uWSGI worker 4 (pid: 1219, cores: 1)
So, in this simplified example, uwsgi spawned 4 workers but executed do_some_preparation() only once. In the real application, several database connections are opened, and apparently those are being shared by these 4 processes, causing issues with concurrent requests.
Is there a way to tell uwsgi to spawn several completely separate processes?
EDIT: I could, of course, get it working with a workaround like:
from flask import Flask, Response

app = Flask(__name__)
all_prepared = False

def do_some_preparation():
    global all_prepared
    all_prepared = True
    print("Prepared!")

@app.route("/test")
def get_test():
    if not all_prepared:
        do_some_preparation()
    return Response("test")

if __name__ == "__main__":
    app.run()
But then I will have to place this "all_prepared" check into every route, which does not seem like a good solution.
By default uWSGI does preforking. So your app is loaded one time, and then forked.
If you want to load the app once per worker, add --lazy-apps to the uWSGI options.
By the way, in both cases you get true multiprocessing :)
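Applied to the command from the question, that would look something like this (only --lazy-apps is new):
uwsgi -s /tmp/uwsgi.sock --mount /flask/test=test.py --callable app --manage-script-name --processes=4 --master --lazy-apps
With --lazy-apps each worker imports test.py itself after forking, so do_some_preparation() runs (and "Prepared!" is printed) once per worker.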
It seems I have found the answer myself: my code should be redesigned as:
@app.before_first_request
def do_some_preparation():
...
Then Flask will take care of running the do_some_preparation() function for each worker separately, allowing each one to have its own database connection (or other concurrency-intolerant resource).
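A minimal sketch of that pattern, with an in-memory sqlite connection standing in for whatever resource actually needs preparing:
import sqlite3
from flask import Flask, Response

app = Flask(__name__)
db_connection = None  # one connection per worker process

@app.before_first_request
def do_some_preparation():
    # runs once per worker process, on that worker's first request
    global db_connection
    db_connection = sqlite3.connect(":memory:")  # stand-in for the real setup work
    print("Prepared!")

@app.route("/test")
def get_test():
    return Response("test")
Each worker process runs do_some_preparation() on its first request, so every process ends up with its own connection.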
I am using django-celery with RabbitMQ as my broker (the guest rabbit user has full access on the local machine). I have a bunch of projects, each in its own virtualenv, but recently needed celery on 2 of them. I have one instance of RabbitMQ running.
(project1_env)python manage.py celery worker
normal celery stuff....
[Configuration]
broker: amqp://guest@localhost:5672//
app: default:0x101bd2250 (djcelery.loaders.DjangoLoader)
[Queues]
push_queue: exchange:push_queue(direct) binding:push_queue
In my other project
(project2_env)python manage.py celery worker
normal celery stuff....
[Configuration]
broker: amqp://guest@localhost:5672//
app: default:0x101dbf450 (djcelery.loaders.DjangoLoader)
[Queues]
job_queue: exchange:job_queue(direct) binding:job_queue
When I run a task in project1's code, it fires to the project1 celery worker just fine, on push_queue. The problem is that when I am working in project2, any task tries to fire to the project1 celery, even if celery isn't running for project1.
If I fire back up project1_env and start celery I get
Received unregistered task of type 'update-jobs'.
If I run list_queues in rabbit, it shows all the queues
...
push_queue 0
job_queue 0
...
In my environment settings, CELERYD_CHDIR and CELERY_CONFIG_MODULE are both blank.
Some things I have tried:
purging celery
force_reset on rabbitmq
rabbitmq virtual hosts as outlined in this answer: Multi Celery projects with same RabbitMQ broker backend process
moving the django celery settings out and setting CELERY_CONFIG_MODULE to the proper settings module
setting the CELERYD_CHDIR in both projects to the proper directory
None of these things have stopped project2 tasks from being picked up by the project1 celery worker.
I am on Mac, if that makes a difference or helps.
UPDATE
Setting up different virtual hosts made it all work. I just had it configured wrong.
If you're going to be using the same RabbitMQ instance for both Celery instances, you'll want to use virtual hosts. This is what we use and it works. You mention that you've tried it, but your broker URLs are both amqp://guest@localhost:5672//, with no virtual host specified. If both Celery instances are connected to the same host and virtual host, they will produce to and consume from the same set of queues.
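For example, something along these lines (the vhost names here are just an illustration):
sudo rabbitmqctl add_vhost project1_vhost
sudo rabbitmqctl add_vhost project2_vhost
sudo rabbitmqctl set_permissions -p project1_vhost guest ".*" ".*" ".*"
sudo rabbitmqctl set_permissions -p project2_vhost guest ".*" ".*" ".*"

# project1 settings:
BROKER_URL = 'amqp://guest@localhost:5672/project1_vhost'
# project2 settings:
BROKER_URL = 'amqp://guest@localhost:5672/project2_vhost'
Each project then sees only its own queues, so project2 tasks can no longer end up on project1's worker.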