Starting Celery workers as daemons - Django

I am trying to set up Celery to run in production. I have been following the instructions here:
https://www.linode.com/docs/development/python/task-queue-celery-rabbitmq/#start-the-workers-as-daemons
I am currently up to step #7, i.e. 'sudo systemctl start celeryd'. When I run this, I am told that celeryd.service has failed. I ran 'journalctl -xe' to get the log details, which I have copied in below.
I'm very new to Celery, so I'm having difficulty interpreting the log to figure out what's going wrong; any help would be much appreciated. If more information is needed, please ask and I'll do my best to provide it.
Apr 05 10:44:47 user-admin systemd[6477]: celeryd.service: Failed to determine user credentials: No such process
Apr 05 10:44:47 user-admin systemd[6477]: celeryd.service: Failed at step USER spawning /bin/sh: No such process
-- Subject: Process /bin/sh could not be executed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The process /bin/sh could not be executed and failed.
--
-- The error number returned by this process is 3.
Apr 05 10:44:47 user-admin systemd[1]: celeryd.service: Control process exited, code=exited status=217
Apr 05 10:44:47 user-admin systemd[1]: celeryd.service: Failed with result 'exit-code'.
Apr 05 10:44:47 user-admin systemd[1]: Failed to start Celery Service.
-- Subject: Unit celeryd.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Unit celeryd.service has failed.
--
-- The result is RESULT.
Apr 05 10:44:47 user-admin sudo[6472]: pam_unix(sudo:session): session closed for user root
Apr 05 10:45:01 user-admin CRON[6481]: pam_unix(cron:session): session opened for user root by (uid=0)
Apr 05 10:45:01 user-admin CRON[6482]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Apr 05 10:45:01 user-admin CRON[6481]: pam_unix(cron:session): session closed for user root
Apr 05 10:45:05 user-admin sudo[6485]: djangoadmin : TTY=pts/1 ; PWD=/var/log/celery ; USER=root ; COMMAND=/bin/journalctl -xe

Remove the /bin/sh -c from ExecStart, ExecStop and ExecRestart (in your celeryd.service).
Assuming you have a virtual environment in /home/celery/venv, with Celery installed in that environment, your ExecStart (and other Exec* lines) should look like:
ExecStart=/home/celery/venv/bin/celery multi start ${CELERYD_NODES} \
    -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
    --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} \
    ${CELERYD_OPTS}
To create the virtual environment, do something like: python3 -m venv /home/celery/venv
If the celery user's home is at a different path, change /home/celery in the lines above to the appropriate "home" of the celery user...
UPDATE: If you used the same config file as on the Linode page, then you may use ExecStart=${CELERY_BIN} multi start...
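For reference, here is a minimal sketch of a complete [Service] section without the /bin/sh -c wrappers. The paths and the /etc/default/celeryd environment file are assumptions based on the Linode guide, so adjust them to your setup. Note also that status=217 in your log usually means the account named in User= does not exist, so make sure the celery user has actually been created:
[Service]
Type=forking
# User/Group must be an existing account; "Failed at step USER" (status=217)
# is systemd saying it could not look this user up.
User=celery
Group=celery
EnvironmentFile=/etc/default/celeryd
WorkingDirectory=/home/celery
ExecStart=/home/celery/venv/bin/celery multi start ${CELERYD_NODES} \
    -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
    --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}
ExecStop=/home/celery/venv/bin/celery multi stopwait ${CELERYD_NODES} \
    --pidfile=${CELERYD_PID_FILE}
ExecReload=/home/celery/venv/bin/celery multi restart ${CELERYD_NODES} \
    -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
    --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}
After editing the unit, run sudo systemctl daemon-reload before trying sudo systemctl start celeryd again, so systemd picks up the change.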

Related

Failed to start Zabbix Agent every 10 seconds

I am using CentOS 7.
How I checked the log:
journalctl -xe
What I got from the log (I saw the same entries every 10 seconds):
Oct 02 10:19:51 lp01.localdomain systemd[1]: zabbix-agent.service holdoff time over, scheduling restart.
Oct 02 10:19:51 lp01.localdomain systemd[1]: Starting Zabbix Agent...
-- Subject: Unit zabbix-agent.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit zabbix-agent.service has begun starting up.
Oct 02 10:19:51 lp01.localdomain zabbix_agentd[8985]: zabbix_agentd [8987]: cannot open "/var/log/zabbix/zabbix_agentd.log": [13] Permission denied
Oct 02 10:19:51 lp01.localdomain systemd[1]: PID file /run/zabbix/zabbix_agentd.pid not readable (yet?) after start.
Oct 02 10:19:51 lp01.localdomain systemd[1]: zabbix-agent.service never wrote its PID file. Failing.
Oct 02 10:19:51 lp01.localdomain systemd[1]: Failed to start Zabbix Agent.
-- Subject: Unit zabbix-agent.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit zabbix-agent.service has failed.
--
-- The result is failed.
Oct 02 10:19:51 lp01.localdomain systemd[1]: Unit zabbix-agent.service entered failed state.
Oct 02 10:19:51 lp01.localdomain systemd[1]: zabbix-agent.service failed.
So I checked the "/var/log/zabbix/zabbix_agentd.log" file first.
ll /var/log/zabbix/zabbix_agentd.log
But it said No such file or directory.
ls: cannot access /var/log/zabbix/zabbix_agentd.log: No such file or directory
Then I checked the "/run/zabbix/zabbix_agentd.pid" file.
ll /run/zabbix/zabbix_agentd.pid
It also said No such file or directory.
ls: cannot access /run/zabbix/zabbix_agentd.pid: No such file or directory
I checked whether SELinux is running:
getenforce
and it said SELinux is Disabled.
My questions are:
How can I start zabbix?
If I can't start zabbix, can I stop it from failing to start every 10 seconds?
Thank you.
Add permissions to the directories /var/log/zabbix/ and /var/log/zabbix-agent/:
chmod 707 /var/log/zabbix/
chmod 707 /var/log/zabbix-agent/
or change the owner of the directories:
chown zabbix:zabbix /var/log/zabbix/
chown zabbix:zabbix /var/log/zabbix-agent/
And then, to stop zabbix from repeatedly failing to start:
systemctl stop zabbix-agent
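If the log directory is missing entirely rather than just unwritable, recreating it with the right owner should also work. A quick sketch, assuming the zabbix user and group created by the package:
sudo mkdir -p /var/log/zabbix
sudo chown zabbix:zabbix /var/log/zabbix
sudo systemctl start zabbix-agent
sudo systemctl status zabbix-agent   # should now report active (running)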

NGINX failed to work while configuring the server for a subdomain

Job for nginx.service failed because the control process exited with error code.
See "systemctl status nginx.service" and "journalctl -xe" for details.
Prior to this, the server was working fine, but when I tried to configure the server for a subdomain it failed with this error.
The detailed error:
[ec2-user@ip--------- conf.d]$ systemctl status nginx.service
● nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2020-07-07 06:03:04 UTC; 4min 17s ago
Process: 71445 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)
Process: 71444 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Main PID: 70298 (code=exited, status=0/SUCCESS)
Jul 07 06:03:04 ip-999.ap-south-1.compute.internal systemd[1]: Starting The nginx HTTP and reverse proxy serv>
Jul 07 06:03:04 ip-999.ap-south-1.compute.internal nginx[71445]: nginx: [emerg] unexpected end of file, expec>
Jul 07 06:03:04 ip-999.ap-south-1.compute.internal nginx[71445]: nginx: configuration file /etc/nginx/nginx.c>
Jul 07 06:03:04 ip-999.ap-south-1.compute.internal systemd[1]: nginx.service: Control process exited, code=ex>
Jul 07 06:03:04 ip-999.ap-south-1.compute.internal systemd[1]: nginx.service: Failed with result 'exit-code'.
Jul 07 06:03:04 ip-999.ap-south-1.compute.internal systemd[1]: Failed to start The nginx HTTP and reverse pro>
lines 1-13/13 (END)
You can always check for configuration errors by using this command:
nginx -T
before launching a catastrophic restart :)
You have the error unexpected end of file; check the syntax of your NGINX configuration, as you likely have a broken file.
The likely candidate is forgetting to close a } character.
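For illustration, a hypothetical subdomain server block (the names and port are placeholders, not taken from the question) showing the brace pairing nginx expects:
server {
    listen 80;
    server_name sub.example.com;
    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}  # forgetting this closing brace produces "unexpected end of file"
Running nginx -t after every edit catches this before a restart takes the site down.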

Failed to start Redis In-Memory Data Store. Ubuntu 18.04

I am trying to install Redis on my AWS server, which runs Ubuntu 18.04. I am following the steps to install Redis from a DigitalOcean article.
When I run the sudo systemctl status redis command I get the error below.
(screenshot)
I tried editing the /etc/systemd/system/redis.service file and added Type=forking under the [Service] section, but I am still getting the same error.
Can anyone suggest how I can get this fixed?
Thanks in advance.
Based on the same DigitalOcean tutorial, it is actually running fine.
Run the command sudo systemctl restart redis.service and we get (showing "failed" on the last line):
● redis.service - Redis In-Memory Data Store
Loaded: loaded (/etc/systemd/system/redis.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2021-06-28 12:03:11 +03; 1min 0s ago
Process: 20428 ExecStart=/usr/local/bin/redis-server /etc/redis/redis.conf (code=exited, status=
Main PID: 20428 (code=exited, status=203/EXEC)
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Service hold-off time over, scheduling restar
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Scheduled restart job, restart counter is at
Jun 28 12:03:11 XYZ systemd[1]: Stopped Redis In-Memory Data Store.
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Start request repeated too quickly.
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Failed with result 'exit-code'.
Jun 28 12:03:11 XYZ systemd[1]: Failed to start Redis In-Memory Data Store.
But if you run sudo service redis-server status, we get (showing "running" on the 3rd line):
● redis-server.service - Advanced key-value store
Loaded: loaded (/lib/systemd/system/redis-server.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2021-06-28 11:50:13 +03; 19min ago
Docs: http://redis.io/documentation,
man:redis-server(1)
Process: 19278 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=0/SUCCESS)
Process: 19371 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=0/SUCC
Main PID: 19382 (redis-server)
Tasks: 4 (limit: 4915)
CGroup: /system.slice/redis-server.service
└─19382 /usr/bin/redis-server 127.0.0.1:6379
Jun 28 11:50:13 XYZ systemd[1]: Starting Advanced key-value store...
Jun 28 11:50:13 XYZ systemd[1]: redis-server.service: Can't open PID file /var/run/redis/red
Jun 28 11:50:13 XYZ systemd[1]: Started Advanced key-value store.
After searching for hours, it seems this is just a difference between systemctl and service and nothing more; the actual Redis server is running fine. Correct me if that's not the case. Here's the link: https://askubuntu.com/questions/903354/difference-between-systemctl-and-service-commands
You can also check that Redis is working by running redis-cli ping, which should print PONG.
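A quick sketch for telling the two units apart on your own machine (unit names depend on how Redis was installed, so treat these as examples):
systemctl list-units --type=service | grep -i redis   # lists redis.service vs redis-server.service
systemctl status redis-server                         # the unit shipped by the Ubuntu package
redis-cli ping                                        # PONG confirms the server is answering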
I also encountered this problem, and checked it again. Eventually I found that when I set permissions on /var/lib/redis, I had entered the wrong command, leaving the redis account with no access to /var/lib/redis. After running
sudo chown redis:redis /var/lib/redis
sudo systemctl restart redis
it succeeded.

docker-machine create with digitalocean driver: ssh command error

I'm using Docker Tools on Windows.
The create command was working perfectly last week, and I managed to create a number of machines on DigitalOcean. Then I tried today with no success. I repeated the same command with different regions and I always get the same result:
λ docker-machine create -d digitalocean --digitalocean-access-token=MYTOKEN --digitalocean-region=ams2 vmname
Running pre-create checks...
Creating machine...
(fernu) Creating SSH key...
(fernu) Creating Digital Ocean droplet...
(fernu) Waiting for IP address to be assigned to the Droplet...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Error creating machine: Error running provisioning: ssh command error:
command : sudo systemctl -f start docker
err : exit status 1
output : Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
If I execute the suggested command:
root@fernu:~# systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/docker.service.d
└─10-machine.conf
Active: inactive (dead) (Result: exit-code) since Fri 2017-06-30 20:56:13 UTC; 8min ago
Docs: https://docs.docker.com
Process: 4943 ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver aufs --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=digitalocean (code=exited, status=1/FAILURE)
Main PID: 4943 (code=exited, status=1/FAILURE)
Jun 30 20:56:13 fernu systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 30 20:56:13 fernu systemd[1]: Failed to start Docker Application Container Engine.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Unit entered failed state.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Failed with result 'exit-code'.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Jun 30 20:56:13 fernu systemd[1]: Stopped Docker Application Container Engine.
Jun 30 20:56:13 fernu systemd[1]: docker.service: Start request repeated too quickly.
Jun 30 20:56:13 fernu systemd[1]: Failed to start Docker Application Container Engine.
Any help would be appreciated.
Update
It's working with Ubuntu 14:
--digitalocean-image=ubuntu-14-04-x64
so it seems like a problem with the default image (ubuntu-16-04-x64)
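For anyone who wants the full workaround invocation, this is the same command as above with only the image flag added (token, region and machine name are placeholders):
docker-machine create -d digitalocean \
  --digitalocean-access-token=MYTOKEN \
  --digitalocean-region=ams2 \
  --digitalocean-image=ubuntu-14-04-x64 \
  vmname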
This seems to be hitting a lot of people. TL;DR: There is a bug in docker-machine v0.12.0 and this issue can be resolved by upgrading.
Logging in to the DigitalOcean instance and running journalctl -xe provides more information:
-- Unit docker.service has begun starting up.
Jul 07 20:03:52 docker-sandbox docker[4930]: `docker daemon` is not supported on Linux. Please run `do
Jul 07 20:03:52 docker-sandbox systemd[1]: docker.service: Main process exited, code=exited, status=1/
Jul 07 20:03:52 docker-sandbox systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
The key here is docker daemon is not supported on Linux. A bug in docker-machine's version comparison code caused an incorrect systemd unit file to be produced (located at /etc/systemd/system/docker.service.d/10-machine.conf) on certain versions of Ubuntu.
A fix has been committed and a new release (v0.12.1) was made.
You can grab the latest release at: https://github.com/docker/machine/releases/tag/v0.12.1
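To check whether you are on the affected version, and to upgrade, something like the following should work (a sketch for Linux/macOS; on Windows, replace the docker-machine binary that Docker Toolbox installed):
docker-machine version
# if it reports 0.12.0, fetch the fixed release:
curl -L https://github.com/docker/machine/releases/download/v0.12.1/docker-machine-$(uname -s)-$(uname -m) \
  -o /usr/local/bin/docker-machine
chmod +x /usr/local/bin/docker-machine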

Can't run uwsgi .ini file with systemd emperor

I am trying to set up uwsgi.service to run on systemd for Django 1.10 on Linode with Fedora 24.
/etc/systemd/system/uwsgi.service
[Unit]
Description=uWSGI Emperor
After=syslog.target
[Service]
ExecStart=/home/ofey/djangoenv/bin/uwsgi --ini /etc/uwsgi/emperor.ini
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
This should then load,
/etc/uwsgi/emperor.ini
[uwsgi]
emperor = /etc/uwsgi/vassals
uid = www-data
gid = www-data
limit-as = 1024
logto = /tmp/uwsgi.log
I then use a symbolic link,
$ sudo ln -s /home/ofey/djangoForum/django.ini /etc/uwsgi/vassals/
to
/home/ofey/djangoForum/django.ini
[uwsgi]
project = djangoForum
base = /home/ofey
chdir = %(base)/%(project)
home = %(base)/djangoenv
module = crudProject.wsgi:application
master = true
processes = 2
socket = 127.0.0.1:3031
chmod-socket = 664
vacuum = true
I have restarted all with,
$ sudo systemctl daemon-reload
$ sudo systemctl restart nginx.service
$ sudo systemctl restart uwsgi.service
The last command gives,
Job for uwsgi.service failed because the control process exited with error code. See "systemctl status uwsgi.service" and "journalctl -xe" for details.
$ sudo systemctl status uwsgi.service
gives,
● uwsgi.service - uWSGI Emperor
Loaded: loaded (/etc/systemd/system/uwsgi.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Dec 07 23:56:28 ofeyspi systemd[1]: Starting uWSGI Emperor...
Dec 07 23:56:28 ofeyspi uwsgi[7834]: [uWSGI] getting INI configuration from /etc/uwsgi/emperor.ini
Dec 07 23:56:28 ofeyspi systemd[1]: uwsgi.service: Main process exited, code=exited, status=1/FAILURE
Dec 07 23:56:28 ofeyspi systemd[1]: Failed to start uWSGI Emperor.
Dec 07 23:56:28 ofeyspi systemd[1]: uwsgi.service: Unit entered failed state.
Dec 07 23:56:28 ofeyspi systemd[1]: uwsgi.service: Failed with result 'exit-code'.
Dec 07 23:56:28 ofeyspi systemd[1]: uwsgi.service: Service hold-off time over, scheduling restart.
Dec 07 23:56:28 ofeyspi systemd[1]: Stopped uWSGI Emperor.
Dec 07 23:56:28 ofeyspi systemd[1]: uwsgi.service: Start request repeated too quickly.
Dec 07 23:56:28 ofeyspi systemd[1]: Failed to start uWSGI Emperor.
I cannot figure out why uwsgi.service will not run.
uwsgi runs when I don't go through systemd and instead use,
$ sudo uwsgi --ini django.ini
The most likely reason for that is that the Emperor is not able to:
create the .pid file to write the process ID to;
create the unix-socket files for the vassals (this seems not to be your case, since you are using port 3031);
write to the log file specified by the --logto option (/tmp/uwsgi.log in your case).
This often happens when any of these files already exist and are owned by another user (most likely root), or are located in a directory to which the user starting the service cannot write.
Systemd's status log is not very informative on this subject, so the quickest way to identify the cause is to run your ExecStart command from the systemd service by hand, not as root, and look at the output:
$ /home/ofey/djangoenv/bin/uwsgi --ini /etc/uwsgi/emperor.ini
If the output shows that the problem is permission, try the following.
Since you are going to run your server on behalf of the www-data user, as has been suggested already, make sure you have:
[Service]
User=www-data
Group=www-data
RuntimeDirectory=uwsgi
in your systemd unit config. Then, make sure the .pid files and unix-socket files (if any) are created under that directory (i.e. under /run/uwsgi) by adding this to your vassals' .ini files:
runtime_dir = /run/uwsgi
pidfile=%(runtime_dir)/%n.pid
# if you prefer using unix-socket instead of a port
socket = %(runtime_dir)/%n.sock
# trying to chmod-socket is useless with a port, by the way
chmod-socket = 664
The %n variable in the given example stands for the vassal's .ini file name without extension (see the full-list here).
Finally, make sure the --logto file specified in the Emperor's and vassals' configs is writable by those users.
Please note, if you run uwsgi --ini /etc/uwsgi/emperor.ini as root and then terminate the process with ctrl+D, it will leave the above-mentioned temp files existing and owned by root, which will prevent other owners (like www-data) from writing to them until you delete the files or chown/chmod them again.
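Putting the pieces together, a minimal sketch of the adjusted unit file under the assumptions above (www-data user, runtime directory under /run/uwsgi); the paths are the ones from the question:
[Unit]
Description=uWSGI Emperor
After=syslog.target

[Service]
User=www-data
Group=www-data
RuntimeDirectory=uwsgi
ExecStart=/home/ofey/djangoenv/bin/uwsgi --ini /etc/uwsgi/emperor.ini
Restart=always
KillSignal=SIGQUIT   # kept from the question; see the later note about commenting this out
Type=notify
StandardError=syslog
NotifyAccess=all

[Install]
WantedBy=multi-user.target
Remember to run sudo systemctl daemon-reload after editing the unit.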
The uwsgi systemd docs advise adding RuntimeDirectory=uwsgi to your service file. Try adding that.
Also check /tmp/uwsgi.log to see if any logging was generated there.
Comment out KillSignal=SIGQUIT
It can cause problems; see http://uwsgi-docs.readthedocs.io/en/latest/Systemd.html
It also causes issues on CentOS 7.