django + celery + uwsgi without supervisor

On the dev server Django works fine with celery and django-celery:
python manage.py runserver
celery -A backup worker -l info -B
celerycam --frequency=10.0
On the production server I tried to run celery with:
[uwsgi]
...
master = True
smart-attach-daemon = ${path}/${name_project}/.env/bin/python ${path}/manage.py celery -A test worker -l info -B
smart-attach-daemon = ${path}/${name_project}/.env/bin/python ${path}/manage.py celerycam --frequency=10.0
But it does not work.
How can I run it all without using supervisor?
Update
The spawned processes do not see Django, even though the site itself works. In the uwsgi logs:
Mon Aug 3 16:10:57 2015 - spawned uWSGI master process (pid: 23462)
Mon Aug 3 16:10:57 2015 - spawned uWSGI worker 1 (pid: 23666, cores: 1)
Mon Aug 3 16:10:57 2015 - spawned uWSGI worker 2 (pid: 23667, cores: 1)
Mon Aug 3 16:10:57 2015 - [uwsgi-daemons] spawning "/home/1/2/3/manage.py celery -A backup worker -l info -B"
Mon Aug 3 16:10:57 2015 - [uwsgi-daemons] spawning "/home/1/2/3/manage.py celerycam --frequency=10.0"
Traceback (most recent call last):
File "/home/1/2/3/manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
Traceback (most recent call last):
File "/home/1/2/3/manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
Mon Aug 3 16:10:58 2015 - subprocess 23668 exited with code 1
Mon Aug 3 16:10:58 2015 - subprocess 23669 exited with code 1
manage.py:
#!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "backup.settings")
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)

You can use either attach-daemon or smart-attach-daemon. If you use smart-attach-daemon, you must also start celery with a pid file and pass its path to uwsgi as the first argument:
smart-attach-daemon = ${path}/${name_project}/var/celery-worker.pid ${path}/${name_project}/.env/bin/python ${path}/manage.py celery -A test worker --pidfile=${path}/${name_project}/var/celery-worker.pid -l info -B

Replace "smart-attach-daemon" on "attach-daemon"

Related

ImportError: libpq.so.5: cannot open shared object file: No such file or directory

(My operating system is Fedora 34.)
I use Django with Haystack and PostgreSQL. For development I run the heroku local command. I use three settings files: base.py, local.py and pro.py. When running heroku local I use the local.py file:
import os

from .base import *
DEBUG = True
SECRET_KEY='secretKey'
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}
if DEBUG:
    INTERNAL_IPS = ('127.0.0.1', 'localhost',)
    DEBUG_TOOLBAR_PANELS = [
        'debug_toolbar.panels.versions.VersionsPanel',
        'debug_toolbar.panels.timer.TimerPanel',
        'debug_toolbar.panels.settings.SettingsPanel',
        'debug_toolbar.panels.headers.HeadersPanel',
        'debug_toolbar.panels.request.RequestPanel',
        'debug_toolbar.panels.sql.SQLPanel',
        'debug_toolbar.panels.staticfiles.StaticFilesPanel',
        'debug_toolbar.panels.templates.TemplatesPanel',
        'debug_toolbar.panels.cache.CachePanel',
        'debug_toolbar.panels.signals.SignalsPanel',
        'debug_toolbar.panels.logging.LoggingPanel',
        'debug_toolbar.panels.redirects.RedirectsPanel',
    ]
    DEBUG_TOOLBAR_CONFIG = {
        'INTERCEPT_REDIRECTS': False,
    }
I point to this settings file by exporting the settings module:
export DJANGO_SETTINGS_MODULE=myshop.settings.local
but heroku local shows this error:
12:54:14 PM web.1 | File "/home/user/env2/lib64/python3.9/site-packages/django/contrib/postgres/apps.py", line 1, in <module>
12:54:14 PM web.1 | from psycopg2.extras import (
12:54:14 PM web.1 | File "/home/user/env2/lib64/python3.9/site-packages/psycopg2/__init__.py", line 51, in <module>
12:54:14 PM web.1 | from psycopg2._psycopg import ( # noqa
12:54:14 PM web.1 | ImportError: libpq.so.5: cannot open shared object file: No such file or directory
12:54:14 PM web.1 | [2021-07-21 09:54:14 +0000] [7689] [INFO] Worker exiting (pid: 7689)
12:54:14 PM web.1 | [2021-07-21 12:54:14 +0300] [7688] [INFO] Shutting down: Master
12:54:14 PM web.1 | [2021-07-21 12:54:14 +0300] [7688] [INFO] Reason: Worker failed to boot.
Postgresql is running:
postgresql.service - PostgreSQL database server
Loaded: loaded (/usr/lib/systemd/system/postgresql.service; disabled; vend>
Active: active (running) since Wed 2021-07-21 10:54:34 EEST; 1h 58min ago
Process: 2474 ExecStartPre=/usr/libexec/postgresql-check-db-dir postgresql >
Main PID: 2476 (postmaster)
Tasks: 8 (limit: 9381)
Memory: 31.2M
CPU: 663ms
CGroup: /system.slice/postgresql.service
├─2476 /usr/bin/postmaster -D /var/lib/pgsql/data
├─2477 postgres: logger
├─2479 postgres: checkpointer
├─2480 postgres: background writer
├─2481 postgres: walwriter
├─2482 postgres: autovacuum launcher
├─2483 postgres: stats collector
└─2484 postgres: logical replication launcher
Jul 21 10:54:34 fedora systemd[1]: Starting PostgreSQL database server...
Jul 21 10:54:34 fedora postmaster[2476]: 2021-07-21 10:54:34.242 EEST [2476] LO>
Jul 21 10:54:34 fedora postmaster[2476]: 2021-07-21 10:54:34.242 EEST [2476] HI>
Jul 21 10:54:34 fedora systemd[1]: Started PostgreSQL database server.
How do I fix this error? Thank you.
I fixed this error by adding the psycopg2-binary dependency.
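For reference, psycopg2-binary ships its own bundled libpq, which is why the missing system library stops mattering. A minimal sketch of the change, assuming a pip/requirements.txt based setup:
pip install psycopg2-binary
# then add psycopg2-binary to requirements.txt so heroku local installs it too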

What's the right procfile / requirements for heroku with django channels?

tl;dr: the django channels app runs locally with manage.py runserver but not on Heroku.
I'm new to django channels and trying to deploy a very basic django app using channels to Heroku. I initially built the project using the standard django polls tutorial and deployed that to Heroku. Then I added a chat app using the django channels tutorial, and managed to get that running fine locally using docker to run a redis server, as they suggested, and "python manage.py runserver".
I'm getting stuck trying to deploy this to heroku or run it locally using heroku local. I've already added the redis addon in heroku and modified settings.py to point to the REDIS_URL environment variable. I also modified my template to use wss if appropriate (I believe that's necessary for heroku):
var ws_scheme = window.location.protocol == "https:" ? "wss" : "ws";
var target = ws_scheme + '://'
    + window.location.host
    + '/ws/chat/'
    + roomName
    + '/';
const chatSocket = new WebSocket(target);
...
So I'm concluding that the problem is with the Procfile, but I'm not sure what it should contain. The initial polls tutorial used:
web: gunicorn gettingstarted.wsgi --log-file -
If I just use that, 'heroku local' works fine and deploying works fine, but when I try to send a chat message nothing happens and a 404 shows up in the console. I know I have to switch to an ASGI server instead of gunicorn. I found this tutorial on deploying an app with channels to Heroku, which used:
web: daphne chat.asgi:channel_layer --port $PORT --bind 0.0.0.0 -v2
worker: python manage.py runworker -v2
I tried that, but that's where I get stuck. Here's what I'm getting when I run heroku local:
krishnas-air:python-getting-started Krishna$ heroku local
[OKAY] Loaded ENV .env File as KEY=VALUE Format
6:46:50 PM worker.1 | Traceback (most recent call last):
6:46:50 PM worker.1 | File "manage.py", line 8, in <module>
6:46:50 PM worker.1 | from django.core.management import execute_from_command_line
6:46:50 PM worker.1 | ImportError: No module named django.core.management
[DONE] Killing all processes with signal SIGINT
6:46:50 PM worker.1 Exited with exit code null
6:46:50 PM web.1 | Traceback (most recent call last):
6:46:50 PM web.1 | File "/usr/local/bin/daphne", line 5, in <module>
6:46:50 PM web.1 | from daphne.cli import CommandLineInterface
6:46:50 PM web.1 | File "/usr/local/lib/python3.7/site-packages/daphne/cli.py", line 1, in <module>
6:46:50 PM web.1 | import argparse
6:46:50 PM web.1 | File "<frozen importlib._bootstrap>", line 983, in _find_and_load
6:46:50 PM web.1 | File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
6:46:50 PM web.1 | File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
6:46:50 PM web.1 | File "<frozen importlib._bootstrap_external>", line 724, in exec_module
6:46:50 PM web.1 | File "<frozen importlib._bootstrap_external>", line 857, in get_code
6:46:50 PM web.1 | File "<frozen importlib._bootstrap_external>", line 525, in _compile_bytecode
6:46:50 PM web.1 | KeyboardInterrupt
6:46:50 PM web.1 Exited with exit code null
The import error message made me think my requirements.txt could be missing something, so I've included that here as well for reference:
django
gunicorn
django-heroku
requests
channels
channels_redis
asgi_redis
asgiref
daphne
redis
gevent
gevent-websocket
greenlet
Thanks for any help!
I just figured out a very similar issue. First of all, although not Heroku specific, these docs are a must read.
I think the problem is that Heroku is routing ws:// requests through WSGI instead of ASGI. So the first step is to create an asgi.py file in the same folder as wsgi.py with something like this:
import os

import django
from channels.routing import get_default_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dropdjango.settings")
django.setup()
application = get_default_application()
Then, in the Procfile, define the web and worker dynos:
web: daphne <my-web-app>.asgi:application --port $PORT --bind 0.0.0.0 -v2
worker: python manage.py runworker channel_layer -v2
If the dynos don't exist yet in Heroku, create them using the Heroku CLI.
In my case, the web dyno already existed so I only created the worker dyno:
heroku ps:scale worker=1:free -a <your-heroku-app-name>
Finally, double-check your settings.py to make sure you have:
ASGI_APPLICATION = "<my-web-app>.routing.application"
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {"hosts": [os.environ.get("REDIS_URL", "redis://localhost:6379")]},
    },
}
Warning: This will run all your requests through daphne. I've read of caveats but I haven't experienced them yet.
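For completeness, a minimal sketch of the routing module that ASGI_APPLICATION points at, in Channels 2.x style; the chat app, the URL pattern and the ChatConsumer name are assumptions taken from the Channels tutorial, so adjust them to your project:
# <my-web-app>/routing.py (sketch)
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import re_path

from chat.consumers import ChatConsumer  # assumed consumer from the Channels tutorial

application = ProtocolTypeRouter({
    # plain HTTP is handled by Django automatically when no "http" key is given
    "websocket": AuthMiddlewareStack(
        URLRouter([
            re_path(r"^ws/chat/(?P<room_name>[^/]+)/$", ChatConsumer),
        ])
    ),
})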
I just want to say this thread is a mess. I will figure out the correct configuration of everything and post my results back here. I'll make a GitHub repo to share all the files.

Unable to make gunicorn run on EC2

I am following this tutorial to set up a Django-gunicorn-nginx server on AWS EC2. After installing all dependencies and changing wsgi.py as follows:
import os, sys
# add the hellodjango project path into the sys.path
sys.path.append('/home/ubuntu/project/ToDo-application/')
# add the virtualenv site-packages path to the sys.path
sys.path.append('/home/ubuntu/.local/lib/python3.6/site-packages')
# pointing to the project settings
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "todo_app.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
I run gunicorn todo_app.wsgi and get the following error:
ubuntu@ip-172-31-61-163:~/project/ToDo-application$ gunicorn todo_app.wsgi
[2018-11-07 11:25:35 +0000] [8211] [INFO] Starting gunicorn 19.7.1
[2018-11-07 11:25:35 +0000] [8211] [INFO] Listening at: http://127.0.0.1:8000 (8211)
[2018-11-07 11:25:35 +0000] [8211] [INFO] Using worker: sync
[2018-11-07 11:25:35 +0000] [8215] [INFO] Booting worker with pid: 8215
[2018-11-07 11:25:35 +0000] [8215] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 578, in spawn_worker
worker.init_process()
File "/usr/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 126, in init_process
self.load_wsgi()
File "/usr/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 135, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/lib/python2.7/dist-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 65, in load
return self.load_wsgiapp()
File "/usr/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/lib/python2.7/dist-packages/gunicorn/util.py", line 377, in import_app
__import__(module)
File "/home/ubuntu/urbanpiper/ToDo-application/todo_app/wsgi.py", line 20, in <module>
from django.core.wsgi import get_wsgi_application
File "/home/ubuntu/.local/lib/python3.6/site-packages/django/__init__.py", line 1, in <module>
from django.utils.version import get_version
File "/home/ubuntu/.local/lib/python3.6/site-packages/django/utils/version.py", line 71, in <module>
@functools.lru_cache()
AttributeError: 'module' object has no attribute 'lru_cache'
Is this because gunicorn has python2 dependencies while Django is on python3? I tried uninstalling gunicorn and installing it again, but that did not work.
# WRONG:
# add the virtualenv site-packages path to the sys.path
sys.path.append('/home/ubuntu/.local/lib/python3.6/site-packages')
You ought to create a virtualenv for each application you wish to host on the server, rather than appending a path like the one above to sys.path. If you followed the linked tutorial word for word, then this is the part which needs more explaining:
Make a virtualenv and install your pip requirements
Essentially:
# install virtualenv3
sudo apt-get install virtualenv3
# create the virtual environment, specifically for the stated python version
virtualenv -p python3.6 TITLE_OF_VENV
# You now have a directory called TITLE_OF_VENV (You may wish to replace this
# with something more subtle).
# Activate the virtualenv for your current shell session
. TITLE_OF_VENV/bin/activate
# The dot above is intentional and is a quick way to write source, which
# imports the environment vars
Your shell prompt should now look like this: (TITLE_OF_VENV) ubuntu@ip-172-31-61-163:~/project/ToDo-application$ indicating that the venv is active. To switch out of the venv, run the command deactivate.
Anything you install with pip here will then live in the directory TITLE_OF_VENV/lib/python3.6/site-packages (while this virtual environment is active). This has the advantage of keeping different projects' requirements separate.
Test the python version (with the venv still active):
(TITLE_OF_VENV)$ python --version
Python 3.6
Now install gunicorn into this virtual environment, along with any other project requirements:
(TITLE_OF_VENV)$ pip install gunicorn
(TITLE_OF_VENV)$ pip install -r requirements.txt
Update your wsgi.py:
import os
# pointing to the project settings
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "todo_app.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
And then launch it from within the virtual environment:
(TITLE_OF_VENV)$ gunicorn todo_app.wsgi:application
You could add the -D flag to the gunicorn command also, which makes it run in the background. Also don't make this server publicly accessible. If it's a production box, you need to run it behind nginx!
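For example, a minimal sketch of running it behind nginx, assuming nginx proxies to a unix socket (the socket path is a placeholder):
(TITLE_OF_VENV)$ gunicorn todo_app.wsgi:application --bind unix:/home/ubuntu/project/ToDo-application/gunicorn.sock -D
Binding to 127.0.0.1:8000 instead of a socket works just as well; the point is that only nginx, not the internet, talks to gunicorn directly.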

Celery timeout exception on Amazon Elastic Beanstalk using RabbitMQ

I'm trying to use Celery on my Beanstalk environment (this is the final piece in order to complete the technology stack of my project :P).
This is what I've done so far:
Since RabbitMQ is the best broker for Celery and Amazon does not provide a dedicated service, I created a custom AMI based on Ubuntu 13 64-bit, and then (see the command sketch after this list):
installed RabbitMQ
removed the default user guest/guest
created a custom user
created a custom virtual host
installed admin plugins
tested my configuration using the http API in order to confirm that my RabbitMQ server is up and running.
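A rough sketch of those RabbitMQ setup steps as shell commands (the user, password and vhost names are placeholders matching the BROKER_URL further down):
rabbitmqctl delete_user guest
rabbitmqctl add_user myuser mypassword
rabbitmqctl add_vhost myvirtualhost
rabbitmqctl set_permissions -p myvirtualhost myuser ".*" ".*" ".*"
rabbitmq-plugins enable rabbitmq_management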
So far so good! Then in my beanstalk .config file I added a couple of commands for celery:
04_celery_periodic_tasks:
  command: "celery worker --app=com.cygora --loglevel=info --beat --autoreload -n period_tasks_worker.%h"
  leader_only: true
05_celery_standard_worker:
  command: "celery worker --app=com.cygora --loglevel=info --autoreload -n worker_1.%h"
Once I deployed my app, I didn't find any error related to celery (so I'm assuming it's all ok, from "the Python/Django side")... but as soon as I use a feature of my site that requires sending a message to Rabbit via Celery I get a timeout exception:
[Thu Feb 20 22:01:24 2014] [error] File "/opt/python/run/venv/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 111, in establish_connection
[Thu Feb 20 22:01:24 2014] [error] conn = self.Connection(**opts)
[Thu Feb 20 22:01:24 2014] [error] File "/opt/python/run/venv/lib/python2.7/site-packages/amqp/connection.py", line 165, in __init__
[Thu Feb 20 22:01:24 2014] [error] self.transport = create_transport(host, connect_timeout, ssl)
[Thu Feb 20 22:01:24 2014] [error] File "/opt/python/run/venv/lib/python2.7/site-packages/amqp/transport.py", line 274, in create_transport
[Thu Feb 20 22:01:24 2014] [error] return TCPTransport(host, connect_timeout)
[Thu Feb 20 22:01:24 2014] [error] File "/opt/python/run/venv/lib/python2.7/site-packages/amqp/transport.py", line 89, in __init__
[Thu Feb 20 22:01:24 2014] [error] raise socket.error(last_err)
[Thu Feb 20 22:01:24 2014] [error] error: timed out
I specified the broker url in settings as:
BROKER_URL = "amqp://myuser:mypassword@myelasticip:5672/myvirtualhost"
What am I missing, or what did I do wrong? Why can't the socket connection be established?
I forgot I had asked this question... anyway, I solved it. It was just a matter of opening the right TCP ports for RabbitMQ:
22 (SSH)
15672 (RabbitMQ management plugin)
5672 (AMQP)
I also changed the way I run celery, by using supervisor + django-supervisor in order to daemonize it properly :)
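If you prefer the CLI to the console for the security-group change, here is a sketch with the AWS CLI (the group id and CIDR are placeholders; in practice restrict access to your application's instances rather than the whole internet):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5672 --cidr 203.0.113.0/24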

setting up uWSGI and Django app

I am trying to follow the steps in this guide: http://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html
Before I even get to the nginx part, I am trying to make sure that uWSGI works correctly.
My folder structure is /srv/www/domain/projectdatabank/, and the projectdatabank folder contains my manage.py file.
My wsgi.py file looks like this:
import os
import sys
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "databank.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
I go into the projectdatabank folder and run the following command:
uwsgi --http :8000 --wsgi-file projectdatabank/databank/wsgi.py
When I go to the web page I get this error:
compiled with version: 4.4.7 20120313 (Red Hat 4.4.7-3) on 06 July 2013 00:16:13
os: Linux-3.8.4-linode50 #1 SMP Mon Mar 25 15:50:29 EDT 2013
nodename:
machine: i686
clock source: unix
pcre jit disabled
detected number of CPU cores: 8
current working directory: /srv/www/databankinfo.com
detected binary path: /usr/bin/uwsgi
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 1024
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
uWSGI http bound on :8000 fd 4
spawned uWSGI http 1 (pid: 10091)
uwsgi socket 0 bound to TCP address 127.0.0.1:47129 (port auto-assigned) fd 3
Python version: 2.6.6 (r266:84292, Feb 21 2013, 23:54:59) [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x8cf8598
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 64000 bytes (62 KB) for 1 cores
*** Operational MODE: single process ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x8cf8598 pid: 10090 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 10090, cores: 1)
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/django/core/handlers/wsgi.py", line 236, in __call__
self.load_middleware()
File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 45, in load_middleware
for middleware_path in settings.MIDDLEWARE_CLASSES:
File "/usr/lib/python2.6/site-packages/django/conf/__init__.py", line 53, in __getattr__
self._setup(name)
File "/usr/lib/python2.6/site-packages/django/conf/__init__.py", line 48, in _setup
self._wrapped = Settings(settings_module)
File "/usr/lib/python2.6/site-packages/django/conf/__init__.py", line 134, in __init__
raise ImportError("Could not import settings '%s' (Is it on sys.path?): %s" % (self.SETTINGS_MODULE, e))
ImportError: Could not import settings 'databank.settings' (Is it on sys.path?): No module named databank.settings
[pid: 10090|app: 0|req: 1/1] 66.56.35.151 () {38 vars in 669 bytes} [Tue Jul 9 17:34:52 2013] GET / => generated 0 bytes in 1 msecs (HTTP/1.1 500) 0 headers in 0 bytes (0 switches on core 0)
However, I know that settings.py exists in the same directory as wsgi.py.
You need to provide an additional argument to your uwsgi call:
--chdir /path/to/your/project/
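Putting it together with the paths from the question (the directory is assumed from the uwsgi log above; --chdir should point at the directory that contains manage.py and the databank package, so that databank.settings becomes importable):
uwsgi --http :8000 --chdir /srv/www/databankinfo.com/projectdatabank --wsgi-file databank/wsgi.py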