I've been trying to run a celeryd (celery 3.1.25) service for my Django project, but when I define
CELERYD_USER="www-data"
CELERYD_GROUP="www-data"
in /etc/default/celeryd, which seems to be the commonly used approach, I get the following:
celery init v10.1.
Using config script: /etc/default/celeryd
This account is currently not available.
which of course has to do with the www-data user. If I change the user to, say, "celery", I get a permission denied error on a log file owned by www-data.
Really frustrated...
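For reference, that message usually means su cannot use the account's login shell: on a stock Debian/Ubuntu system www-data's shell is /usr/sbin/nologin, so the init script's su call is refused. A quick way to check (the nologin path is an assumption about a default install):
getent passwd www-data
# e.g. www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
# su can be forced to use a real shell for a one-off command:
sudo su -s /bin/sh -c "whoami" www-data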
I am hosting a site via Elastic Beanstalk and I have a 01_migrate.sh file in .platform/hooks/postdeploy in order to migrate model changes to a Postgres database on Amazon RDS:
#!/bin/sh
source /var/app/venv/staging-LQM1lest/bin/activate
python /var/app/current/manage.py migrate --noinput
python /var/app/current/manage.py createsu
python /var/app/current/manage.py collectstatic --noinput
This used to work well, but now when I check the hooks log, although it appears to find the file, there is no output to suggest that the migrate command has been run.
i.e. previously I would get the following even if there were no new migrations:
2022/03/29 05:12:56.530728 [INFO] Running command .platform/hooks/postdeploy/01_migrate.sh
2022/03/29 05:13:11.872676 [INFO] Operations to perform:
Apply all migrations: account, admin, auth, blog, contenttypes, home, se_balance, sessions, sites, socialaccount, taggit, users, wagtailadmin, wagtailcore, wagtaildocs, wagtailembeds, wagtailforms, wagtailimages, wagtailredirects, wagtailsearch, wagtailusers
Running migrations:
No migrations to apply.
Found another file with the destination path 'favicon.ico'. It will be ignored since only the first encountered file is collected. If this is not what you want, make sure every static file has a unique path.
Whereas now I just get
2022/05/23 08:47:49.602719 [INFO] Running command .platform/hooks/postdeploy/01_migrate.sh
Found another file with the destination path 'favicon.ico'. It will be ignored since only the first encountered file is collected. If this is not what you want, make sure every static file has a unique path.
I don't know what has occurred to make this change. Of potential relevance is that eb deploy stopped being able to find the 01_migrate.sh file, so I had to move the folder and its contents .platform/hooks/postdeploy/01_migrate.sh up to the parent directory, and then it became able to find it again.
As per the documentation on platform hooks:
All files must have execute permission. Use chmod +x to set execute permission on your hook files. For all Amazon Linux 2 based platforms versions that were released on or after April 29, 2022, Elastic Beanstalk automatically grants execute permissions to all of the platform hook scripts. In this case you don't have to manually grant execute permissions.
The permissions on your script may have changed after moving the file around locally.
Try setting executable permissions again on your script - chmod +x 01_migrate.sh - and redeploying your application.
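For example (assuming the hook still lives under .platform/hooks/postdeploy/ in the project root and you deploy from a git working tree):
chmod +x .platform/hooks/postdeploy/01_migrate.sh
# if eb deploy packages from git, commit the executable bit as well
git update-index --chmod=+x .platform/hooks/postdeploy/01_migrate.sh
eb deploy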
I have set up a simple Django project with MySQL on Ubuntu 16.04 using WSGI.
I have created a superuser as per the docs, and have verified that the user exists in the auth_user table and that its status is superuser. Everything looks great!
However, I can't log in via the admin URL - I just get a 500 server error and don't know of a way to check what could possibly be going wrong.
However, if I run the site with the Django development server (python manage.py runserver 0.0.0.0:8000), I can log in with no problem.
It doesn't make a difference whether DEBUG is set to True or False.
I hope someone who's had the same type of issue can help!
Many thanks.
UPDATE
Screenshot of the 500 error when trying to log in, with DEBUG set to True.
Have you given permission to the www-data user for the Apache server? If not, try the following commands:
sudo adduser $USER www-data
sudo chown www-data:www-data /var/www/venv/project_name
sudo chown www-data:www-data /var/www/venv/project_name/db.sqlite3
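If the 500 persists, Apache's error log usually shows the underlying exception (the path assumes a default Ubuntu Apache install):
sudo tail -n 50 /var/log/apache2/error.log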
I'm currently having some trouble running celery as a daemon. I use Apache to serve my Django application, so I set the uid and gid in the celery settings to "www-data". There are two places I know of so far that need access permission: /var/log/celery/*.log and /var/run/celery/*.pid, and I have already set them to be owned by "www-data". However, celery won't start when I run sudo service celeryd start. If I drop the --uid and --gid options from the command, celery starts fine as user "root".
One other thing I noticed is that when I start celery as "root", it puts some files like celery.bak, celery.dat, and celery.dir in my CELERYD_CHDIR, which is my Django application directory. I also changed the application directory to be owned by "www-data", but celery still wouldn't start. I copied all the setting files from another machine on which celery runs fine, so I don't think it's a problem with my settings. Does anyone have any clue? Thanks.
Su to the celery user and start celery from the command line. Most likely it's an application log, not celery's, that you need permission for.
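For example, something along these lines should surface the real error; the project path and app name are placeholders, and www-data is the worker user from the question:
sudo su -s /bin/bash www-data
cd /path/to/your/django/project
celery worker -A yourproject --loglevel=INFO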
I've been trying to trudge through the docs and examples to get my Django app running through upstart so I can have it running all the time, but I am unable to do so.
Here's my upstart configuration file located at /etc/init/myapp.conf:
start on startup
#expect daemon
#respawn
console output
script
chdir /app/env/bin
exec source activate
exec /app/env/bin/python /app/src/manage.py runserver 0.0.0.0:8000 > /dev/null 2>&1 &
end script
When I type sudo service myapp start, the console says that it has started but it doesn't seem to be running.
Is it possible to see some debugging output to see what's going wrong?
I need to run my Django application as another user — i.e. djangouser. How can I do so?
(I've been commenting out some lines to test where the service is going wrong.) This is not for production use, only for my internal development.
Thanks.
Edit #1:
I have wrapped both my commands into a simple script at /app/run.sh
#!/bin/bash
cd /app/env/bin
source activate
cd /app/src
python manage.py runserver 0.0.0.0:8000 > /dev/null 2>&1 &
...and I've modified my /etc/init/myapp.conf to:
start on startup
expect daemon
exec su - djangouser -c "bash /app/run.sh"
When executing sudo service myapp start, the application starts but the PID is wrong and I can't seem to kill it with sudo service myapp stop.
Any ideas?
Change:
exec source activate
to just:
source activate
This will load the virtual environment. You should probably drop the other "exec". If that doesn't work, please post your upstart logs.
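A minimal sketch of the corrected stanza, keeping the question's paths (untested; upstart runs the stanza with /bin/sh, so the portable "." form is used instead of "source", and the final exec lets upstart track the python process):
script
    cd /app/src
    . /app/env/bin/activate
    exec /app/env/bin/python manage.py runserver 0.0.0.0:8000
end script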
A couple of remarks:
logging the output somewhere other than /dev/null might be useful :)
runserver is not meant to be stable; I see it crashing sometimes, and in that case I guess you'll need to force upstart to reload, or put the runserver call in a while loop
you will not be able to use an interactive debugger like ipdb with this setup
How about using nginx and uWSGI with your virtualenv? This will give you a production-like environment but will also start your Django app at startup. If you are using Ubuntu 10 you should take a look at uwsgi-python, otherwise just install the latest uwsgi. I usually set up my virtualenv in uwsgi like so: sudo nano /etc/uwsgi-python/apps-available/app.xml
<uwsgi>
<socket>127.0.0.1:8889</socket>
<pythonpath>/home/user/code/</pythonpath>
<virtualenv>/home/user/code</virtualenv>
<pythonpath>/home/user/code/app</pythonpath>
<app mountpoint="/">
<script>uwsgiApp</script>
</app>
</uwsgi>
Also set up your nginx files at /etc/nginx/apps-available/default (the file is fairly straightforward). This will keep your Django app available at all times.
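If you go that route, enabling the app and restarting the services usually looks something like this (assuming the Debian-style apps-available/apps-enabled layout and service names, which may differ on your install):
sudo ln -s /etc/uwsgi-python/apps-available/app.xml /etc/uwsgi-python/apps-enabled/app.xml
sudo service uwsgi-python restart
sudo service nginx restart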
su is problematic because it forks the process. You can use sudo -u djangouser instead, or simply add
setuid djangouser
in your conf file.
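For example, the whole job file could look roughly like this sketch (paths taken from the question's edit; setuid and chdir need Upstart 1.4 or newer, and this is untested):
start on runlevel [2345]
respawn
setuid djangouser
chdir /app/src
script
    . /app/env/bin/activate
    exec python manage.py runserver 0.0.0.0:8000
end script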
This should work on Ubuntu 14.04 and possibly other versions as well:
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app start
my_app start/running, process 7799
root@vagrant-ubuntu-trusty-64:/etc/init# cat /var/log/upstart/my_app.log
Performing system checks...
System check identified no issues (0 silenced).
You have unapplied migrations; your app may not work properly until they are applied.
Run 'python manage.py migrate' to apply them.
June 30, 2015 - 06:54:18
Django version 1.8.2, using settings 'my_test.settings'
Starting development server at http://0.0.0.0:8080/
Quit the server with CONTROL-C.
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app status
my_app start/running, process 7799
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app stop
my_app stop/waiting
root@vagrant-ubuntu-trusty-64:/etc/init# service my_app status
my_app stop/waiting
Here is the config to make it work:
root@vagrant-ubuntu-trusty-64:/etc/init# cat my_app.conf
description "my_app upstart script"
start on runlevel [23]
respawn
script
su vagrant -c "source /home/vagrant/dj_app/bin/activate; /home/vagrant/dj_app/bin/python /home/vagrant/my_test/manage.py runserver 0.0.0.0:8080"
end script
Last week I setup RabbitMQ and Celery on my production system after having tested it on my local dev and all worked fine.
I get the feeling that my tasks are not being executed on production since I have about 1200 tasks that are still in the queue.
I run a CentOS 5.4 setup, with celeryd and celerybeat daemons and WSGI
I have added the import to the WSGI module.
When I run /etc/init.d/celeryd start, I get the following response:
[root@myvm myproject]# /etc/init.d/celeryd start
celeryd-multi v2.3.1
> Starting nodes...
> w1.myvm.centos01: OK
When I run /etc/init.d/celerybeat start, I get the following response:
[root@myvm fundedmyprojectbyme]# /etc/init.d/celerybeat start
Starting celerybeat...
So by the output it seems that everything starts successfully, although when looking at the queues the tasks only seem to accumulate rather than get executed.
Now if I perform the same thing but use Django's manage.py instead, i.e. ./manage.py celeryd and ./manage.py celerybeat, the tasks immediately start to get processed.
My /etc/default/celeryd
# Where to chdir at start.
CELERYD_CHDIR="/www/myproject/"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Name of the celery config module.
CELERY_CONFIG_MODULE="celeryconfig"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="settings"
my /etc/default/celerybeat
# Where the Django project is.
CELERYD_CHDIR="/www/myproject/"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="settings"
# Path to celeryd
CELERYD="/www/myproject/manage.py celeryd"
# Path to celerybeat
CELERYBEAT="/www/myproject/manage.py celerybeat"
# Extra arguments to celerybeat
CELERYBEAT_OPTS="--schedule=/var/run/celerybeat-schedule"
My /etc/init.d files for celeryd and celerybeat are based on the generic scripts.
Am I missing a part of the configuration???
I ran into a situation once where I had to add "python" as a prefix to the CELERYD_MULTI variable.
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="python $CELERYD_CHDIR/manage.py celeryd_multi"
For whatever reason, my manage.py script would not execute normally (even though I had run chmod +x on it and configured my shebang properly). You might try this to see if it works.
Try running the following and see what the output tells you:
sh -x /etc/init.d/celeryd start
In my case there were some permission problems on /var/log for the user that celery runs as.
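In that situation, checking and fixing ownership of the log and pid directories for the service user usually resolves it (directories and the celery user are taken from the /etc/default/celeryd above):
sudo ls -ld /var/log/celery /var/run/celery
sudo chown -R celery:celery /var/log/celery /var/run/celery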