Permission denied when attempting to start Daphne systemd service - Django

I'm deploying a website using Django and Django Channels, with Channels' Daphne ASGI server substituting for the typical Gunicorn WSGI setup. Using this Gunicorn WSGI tutorial as a jumping-off point, I attempted to write a systemd service for my Daphne server, and hit the error below:
CRITICAL Listen failure: [Errno 13] Permission denied: '27646' -> b'/run/daphne.sock.lock'
Unfortunately, I was unable to find any answers as to why permission would be denied on the .sock file (in the context of Daphne), so I was hoping I could get some hints on where to begin debugging this problem. Below are my daphne.socket and daphne.service files.
daphne.service
[Unit]
Description=daphne daemon
Requires=daphne.socket
After=network.target
[Service]
User=brianl
Group=www-data
WorkingDirectory=/home/brianl/autoXMD
ExecStart=/home/brianl/autoXMD/env/bin/daphne -u /run/daphne.sock -b 0.0.0.0 -p 8000 autoXMD.asgi:application
[Install]
WantedBy=multi-user.target
daphne.socket
[Unit]
Description=daphne socket
[Socket]
ListenStream=/run/daphne.sock
[Install]
WantedBy=sockets.target
Based on the linked DigitalOcean tutorial, I start my service with sudo systemctl start daphne.socket.
My guess is that there's some discrepancy between setting up systemd services for Gunicorn and for Daphne that I missed, but I don't know for sure.
(If it helps, I'm planning on using Nginx as the main server, but I haven't reached that point yet)
EDIT:
It would help if I also attached the full output systemd gives:
● daphne.service - daphne daemon
Loaded: loaded (/etc/systemd/system/daphne.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Thu 2019-09-05 22:00:43 UTC; 1min 51s ago
Process: 22041 ExecStart=/home/brianl/autoXMD/env/bin/daphne -u /run/daphne.sock -b 0.0.0.0 -p 8000 autoXMD.asgi:application (code=exited, status=0/SUCCESS)
Main PID: 22041 (code=exited, status=0/SUCCESS)
Sep 05 22:00:43 autoxmd daphne[22041]: warnings.warn('%s. joblib will operate in serial mode' % (e,))
Sep 05 22:00:43 autoxmd daphne[22041]: 2019-09-05 22:00:43,013 INFO Starting server at tcp:port=8000:interface=0.0.0.0, unix:/run/daphne.sock
Sep 05 22:00:43 autoxmd daphne[22041]: 2019-09-05 22:00:43,017 INFO HTTP/2 support not enabled (install the http2 and tls Twisted extras)
Sep 05 22:00:43 autoxmd daphne[22041]: 2019-09-05 22:00:43,020 INFO Configuring endpoint tcp:port=8000:interface=0.0.0.0
Sep 05 22:00:43 autoxmd daphne[22041]: 2019-09-05 22:00:43,022 INFO Listening on TCP address 0.0.0.0:8000
Sep 05 22:00:43 autoxmd daphne[22041]: 2019-09-05 22:00:43,022 INFO Configuring endpoint unix:/run/daphne.sock
Sep 05 22:00:43 autoxmd daphne[22041]: 2019-09-05 22:00:43,022 CRITICAL Listen failure: [Errno 13] Permission denied: '22041' -> b'/run/daphne.sock.lock'
Sep 05 22:00:43 autoxmd systemd[1]: daphne.service: Start request repeated too quickly.
Sep 05 22:00:43 autoxmd systemd[1]: daphne.service: Failed with result 'start-limit-hit'.
Sep 05 22:00:43 autoxmd systemd[1]: Failed to start daphne daemon.

I think this occurred because of a permission issue. By default, the /run directory is owned by root, so the Daphne process failed to create the daphne.sock.lock file in /run.
The solution is to create a folder inside /run and give ownership of it to your user and group.
For example:
sudo mkdir /run/daphne
sudo chown brianl:www-data /run/daphne
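Note that /run is normally a tmpfs, so a directory created by hand there disappears on reboot. A minimal sketch of how to recreate it automatically at boot, assuming the same user and group as above, is a tmpfiles.d entry:
# /etc/tmpfiles.d/daphne.conf
d /run/daphne 0755 brianl www-data -
Apply it immediately with sudo systemd-tmpfiles --create. Alternatively, adding RuntimeDirectory=daphne under [Service] lets systemd create and own the directory each time the service starts.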
Now change the Unix socket path in both the service and socket files.
daphne.service
[Unit]
Description=daphne daemon
Requires=daphne.socket
After=network.target
[Service]
User=brianl
Group=www-data
WorkingDirectory=/home/brianl/autoXMD
ExecStart=/home/brianl/autoXMD/env/bin/daphne -u /run/daphne/daphne.sock -b 0.0.0.0 -p 8000 autoXMD.asgi:application
[Install]
WantedBy=multi-user.target
daphne.socket
[Unit]
Description=daphne socket
[Socket]
ListenStream=/run/daphne/daphne.sock
[Install]
WantedBy=sockets.target
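Then reload systemd and restart both units so the new socket path takes effect:
sudo systemctl daemon-reload
sudo systemctl restart daphne.socket daphne.service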
Hopefully this works for you. For further reading, you can go through a similar issue: MySQL Daemon Lock issue

Related

Unable to start NFS server on GCE instance mounted on GCS

I want to set up an NFS server on GCP, so I used a VM and mounted the GCS bucket on /vol using gcsfuse. I then installed the nfs-kernel-server package on the VM, created a directory nfs_share under /vol, and added the entry in /etc/exports. While restarting the nfs-kernel-server service, I ran into the error below:
sudo systemctl status nfs-kernel-server
● nfs-server.service - NFS server and services
Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2022-06-04 15:28:54 UTC; 10s ago
Process: 2139 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
Process: 2138 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
Process: 2137 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=1/FAILURE)
Jun 04 15:28:54 g-non-prod-nfs-test-vm systemd[1]: Starting NFS server and services...
Jun 04 15:28:54 g-non-prod-nfs-test-vm exportfs[2137]: exportfs: /vol/nfs_share requires fsid= for NFS export
Jun 04 15:28:54 g-non-prod-nfs-test-vm systemd[1]: nfs-server.service: Control process exited, code=exited status=1
Jun 04 15:28:54 g-non-prod-nfs-test-vm systemd[1]: nfs-server.service: Failed with result 'exit-code'.
Jun 04 15:28:54 g-non-prod-nfs-test-vm systemd[1]: Stopped NFS server and services.
Filestore, GCP's managed NFS service, needs to be deployed with 1 TB of storage as a minimum, so I'm looking for alternatives. The above approach looks feasible, but I'm unable to get the NFS service up and running.
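For reference, the exportfs error is asking for an fsid= option on the export; FUSE-backed filesystems such as gcsfuse need it because they lack a stable filesystem UUID. A sketch of such an /etc/exports line (the other options are assumptions to adjust as needed):
/vol/nfs_share *(rw,sync,no_subtree_check,fsid=1)
After editing, re-export with sudo exportfs -ra and try starting the service again.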

NGINX failed to work while configuring server for subdomain

Job for nginx.service failed because the control process exited with error code.
See "systemctl status nginx.service" and "journalctl -xe" for details.
Prior to this, the server was working fine, but when I tried to configure the server for a subdomain it failed with this error.
The detailed error:
[ec2-user@ip--------- conf.d]$ systemctl status nginx.service
● nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2020-07-07 06:03:04 UTC; 4min 17s ago
Process: 71445 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)
Process: 71444 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Main PID: 70298 (code=exited, status=0/SUCCESS)
Jul 07 06:03:04 ip-999.ap-south-1.compute.internal systemd[1]: Starting The nginx HTTP and reverse proxy serv>
Jul 07 06:03:04 ip-999.ap-south-1.compute.internal nginx[71445]: nginx: [emerg] unexpected end of file, expec>
Jul 07 06:03:04 ip-999.ap-south-1.compute.internal nginx[71445]: nginx: configuration file /etc/nginx/nginx.c>
Jul 07 06:03:04 ip-999.ap-south-1.compute.internal systemd[1]: nginx.service: Control process exited, code=ex>
Jul 07 06:03:04 ip-999.ap-south-1.compute.internal systemd[1]: nginx.service: Failed with result 'exit-code'.
Jul 07 06:03:04 ip-999.ap-south-1.compute.internal systemd[1]: Failed to start The nginx HTTP and reverse pro>
You can always check for configuration errors by using this command:
nginx -T
before launching a catastrophic restart :)
The error unexpected end of file means the parser ran off the end of your NGINX configuration, so you most likely have a broken config file.
The likely candidate is forgetting to close a block with a } character.
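For illustration (a made-up server block, not your actual config), dropping a closing brace produces exactly this error:
server {
    listen 80;
    server_name sub.example.com;
    location / {
        proxy_pass http://127.0.0.1:8000;
    }
# <- the } that closes the server block is missing here
nginx -t (or nginx -T) will report the file in which parsing hit the unexpected end of file; add the missing brace and reload.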

Failed to start Redis In-Memory Data Store. Ubuntu 18.04

I am trying to install Redis on my AWS server, which has Ubuntu 18.04 installed. I am following the steps to install Redis from a DigitalOcean article.
When I run the sudo systemctl status redis command, I get the error below:
screenshot
I tried editing the /etc/systemd/system/redis.service file and adding Type=forking under the [Service] section, but I am still getting the same error.
Can anyone suggest how I can get it fixed?
Thanks in advance.
Based on the same DigitalOcean tutorial, it's actually running fine.
If we run sudo systemctl restart redis.service, we get (showing "failed" on the last line):
● redis.service - Redis In-Memory Data Store
Loaded: loaded (/etc/systemd/system/redis.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2021-06-28 12:03:11 +03; 1min 0s ago
Process: 20428 ExecStart=/usr/local/bin/redis-server /etc/redis/redis.conf (code=exited, status=
Main PID: 20428 (code=exited, status=203/EXEC)
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Service hold-off time over, scheduling restar
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Scheduled restart job, restart counter is at
Jun 28 12:03:11 XYZ systemd[1]: Stopped Redis In-Memory Data Store.
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Start request repeated too quickly.
Jun 28 12:03:11 XYZ systemd[1]: redis.service: Failed with result 'exit-code'.
Jun 28 12:03:11 XYZ systemd[1]: Failed to start Redis In-Memory Data Store.
But if you run sudo service redis-server status, you get (showing "running" on the 3rd line):
● redis-server.service - Advanced key-value store
Loaded: loaded (/lib/systemd/system/redis-server.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2021-06-28 11:50:13 +03; 19min ago
Docs: http://redis.io/documentation,
man:redis-server(1)
Process: 19278 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=0/SUCCESS)
Process: 19371 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=0/SUCC
Main PID: 19382 (redis-server)
Tasks: 4 (limit: 4915)
CGroup: /system.slice/redis-server.service
└─19382 /usr/bin/redis-server 127.0.0.1:6379
Jun 28 11:50:13 XYZ systemd[1]: Starting Advanced key-value store...
Jun 28 11:50:13 XYZ systemd[1]: redis-server.service: Can't open PID file /var/run/redis/red
Jun 28 11:50:13 XYZ systemd[1]: Started Advanced key-value store.
After searching for hours, it seems it's just some difference between systemctl and service and nothing more; the actual Redis server is running fine. Correct me if that's not the case. Here's the link: https://askubuntu.com/questions/903354/difference-between-systemctl-and-service-commands
You can even check that Redis is working with redis-cli ping, which should print PONG:
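$ redis-cli ping
PONG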
I also encountered this problem, so I went back and checked my setup again.
I found that when I had set permissions on /var/lib/redis, I entered the wrong command, leaving the redis account with no access to /var/lib/redis. Running:
sudo chown redis:redis /var/lib/redis
sudo systemctl restart redis
succeeded.

Deploying Django Channels: how to keep Daphne running after exiting shell on web server

As practice, I'm trying to deploy Andrew Godwin's multichat example with Django Channels 2.1.1 on DigitalOcean Ubuntu 16.04.4. However, I don't know how to exit the Ubuntu server without Channels' Daphne server stopping.
Right now, almost my entire familiarity with deployment comes from this tutorial, and that's how I've deployed the site. But according to Channels' documentation, I have to run one of these three in production to start Daphne:
daphne myproject.asgi:application
daphne -p 8001 myproject.asgi:application
daphne -b 0.0.0.0 -p 8001 myproject.asgi:application
So, in addition to following the DigitalOcean tutorial, I ran these as well. The third one worked for me and the site ran well. However, if I exit the shell on Ubuntu, Daphne stops too.
The tutorial has gunicorn access a sock file (--bind unix:/home/sammy/myproject/myproject.sock), and in my research so far I've seen on a few sites published before 2018 that somehow somewhere a daphne.sock file is generated. So, I guess Channels deploys similarly? But I haven't seen any details about how this is done.
How do I deploy the multichat example so I can exit the Ubuntu web server without Daphne stopping?
Update 6 May 2018, 8pm CET:
I tried kagronick's solution below and created a systemd file at /etc/systemd/system/daphne_seb.service:
[Unit]
Description=daphne daemon for seb
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=/home/seb/seb
ExecStart=/home/seb/env_seb/bin/python /home/seb/seb/manage.py daphne -b 0.0.0.0 -p 8001 multichat.asgi:application
Restart=on-failure
[Install]
WantedBy=multi-user.target
I did systemctl daemon-reload and systemctl start daphne_seb.service and it ran for a couple of seconds. Then, systemctl status daphne_seb.service said Unknown command: 'daphne'.
I tried restarting it. Then I rebooted the OS. Now status says:
daphne_seb.service - daphne daemon for seb
Loaded: loaded (/etc/systemd/system/daphne_seb.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Sun 2018-05-06 19:33:31 UTC; 1s ago
Process: 2459 ExecStart=/home/seb/env_seb/bin/python /home/seb/seb/manage.py daphne -b 0.0.0.0 -p 8001 multichat.asgi:application (code=exited, status=1
Main PID: 2459 (code=exited, status=1/FAILURE)
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Main process exited, code=exited, status=1/FAILURE
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Unit entered failed state.
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Failed with result 'exit-code'.
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Service hold-off time over, scheduling restart.
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: Stopped daphne daemon for seb.
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Start request repeated too quickly.
May 06 19:33:31 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: Failed to start daphne daemon for seb.
I checked the filepaths. They're correct. I also tried adding Environment=DJANGO_SETTINGS_MODULE=multichat.settings, which I saw here. But that didn't help either.
Update 6 May 2018, 10pm CET:
Then, I read here that only 5 restarts are allowed within a 10-second period. So I removed Restart=on-failure to start the service myself.
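(For reference, these limits can also be tuned in the unit file itself; a sketch, noting that older systemd versions spell these StartLimitInterval=/StartLimitBurst= and put them under [Service] instead of [Unit]:)
[Unit]
StartLimitIntervalSec=10
StartLimitBurst=5
[Service]
Restart=on-failure
RestartSec=2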
This brought me back to Unknown command: 'daphne'. This solution suggested to me that I shouldn't be pointing to Python but to Daphne in my virtualenv, so I changed it: ExecStart=/home/seb/env_seb/bin/daphne. I also removed /home/seb/seb/manage.py based on the same solution. This gave me a new problem:
daphne_seb.service - daphne daemon for seb
Loaded: loaded (/etc/systemd/system/daphne_seb.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2018-05-06 19:55:24 UTC; 5s ago
Process: 2903 ExecStart=/home/seb/env_seb/bin/daphne daphne -b 0.0.0.0 -p 8001 multichat.asgi:application (code=exited, status=2)
Main PID: 2903 (code=exited, status=2)
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: [-v VERBOSITY] [-t HTTP_TIMEOUT] [--access-log ACCESS_LOG]
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: [--ping-interval PING_INTERVAL] [--ping-timeout PING_TIMEOUT]
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: [--application-close-timeout APPLICATION_CLOSE_TIMEOUT]
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: [--ws-protocol [WS_PROTOCOLS [WS_PROTOCOLS ...]]]
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: [--root-path ROOT_PATH] [--proxy-headers]
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: application
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 daphne[2903]: daphne: error: unrecognized arguments: multichat.asgi:application
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Unit entered failed state.
May 06 19:55:24 ubuntu-s-1vcpu-1gb-ams3-01 systemd[1]: daphne_seb.service: Failed with result 'exit-code'.
I checked my multichat.asgi:application and it's in the right place, so I can't figure out why it says error: unrecognized arguments: multichat.asgi:application.
I use systemd for it. You would set a unit file in /etc/systemd/system/daphne.service
with contents like:
[Unit]
Description=My Daphne Service
After=network.target
[Service]
Type=simple
User=wwwrunn
WorkingDirectory=/srv/myapp
ExecStart=/path/to/my/virtualenv/bin/python /path/to/daphne -b 0.0.0.0 -p 8001 myproject.asgi:application
Restart=on-failure
[Install]
WantedBy=multi-user.target
I have all the parts of my application configured this way. It allows them to resume if they ever crash. Systemd can handle all the logging. And they are all started in the right order. Use Type=simple so you don't need to do any forking or writing of PID files. It will just manage the one process. I run a few of these to spread out the load.
After you put this file in place you would run the following commands:
Make the system aware of the service
sudo systemctl daemon-reload
Start the service
sudo systemctl start daphne.service
Make it run on reboot
sudo systemctl enable daphne.service
You will need to edit some of the parameters to fit your needs. If you called the file something other than daphne.service you'll need to change it in the commands. The ExecStart line will need to be changed to your python path and the namespace where your application lives.
Your logs will now show up in journalctl. To see logs for this service you would do journalctl -u daphne.service. If you want to tail the logs, put -f at the end for "follow". journalctl has a bunch of options that you can find with --help or by looking up resources online. The same goes for systemd unit files.
Of course, you should know that Daphne shouldn't be exposed directly to the web. Nginx, Apache, or another web server should be proxying to Daphne and serving your static assets. (You could also use a CDN for that.)
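For illustration, a minimal Nginx proxy block for this kind of setup might look like the following; the domain and paths are assumptions, and the Upgrade/Connection headers are what Channels' WebSockets require:
server {
    listen 80;
    server_name example.com;  # assumed domain

    location /static/ {
        alias /srv/myapp/static/;  # assumed static root
    }

    location / {
        proxy_pass http://127.0.0.1:8001;
        # WebSocket support for Channels
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}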
My question is in fact the same as this one. Questions like that usually get sent to this solution. I did not recognise the question as the same as mine at first because (i.) I wasn't familiar with the vocabulary of deployment with Channels yet and (ii.) a lot of these questions and their solutions mention things that have already been deprecated for Channels 2. I've lost the page but, for instance, Channels 2 doesn't require us to do python manage.py runworker.
There's also the issue of upstart versus systemd. Several solutions use an upstart script, but I had a vague sense that on Ubuntu 16.04.4 I should use systemd. A Google search confirmed that systemd has indeed replaced upstart, according to articles like this one.
@kagronick provides the correct systemd solution, but I had to edit it to make it work for me:
# /etc/systemd/system/daphne_seb.service
[Unit]
Description=daphne daemon for seb
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=/home/seb/seb
ExecStart=/home/seb/env_seb/bin/daphne -b 0.0.0.0 -p 8001 multichat.asgi:application
# I turned this off for testing purposes.
# Still not sure if should use 'on-failure' or 'always'.
# Restart=on-failure
[Install]
WantedBy=multi-user.target
And then in shell:
systemctl daemon-reload
systemctl start daphne_seb.service

Can't run uwsgi .ini file with systemd emperor

I am trying to set up uwsgi.service to run on systemd for Django 1.10 on Linode with Fedora 24.
/etc/systemd/system/uwsgi.service
[Unit]
Description=uWSGI Emperor
After=syslog.target
[Service]
ExecStart=/home/ofey/djangoenv/bin/uwsgi --ini /etc/uwsgi/emperor.ini
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
This should then call,
/etc/uwsgi/emperor.ini
[uwsgi]
emperor = /etc/uwsgi/vassals
uid = www-data
gid = www-data
limit-as = 1024
logto = /tmp/uwsgi.log
I then use a symbolic link,
$ sudo ln -s /home/ofey/djangoForum/django.ini /etc/uwsgi/vassals/
to
/home/ofey/djangoForum/django.ini
[uwsgi]
project = djangoForum
base = /home/ofey
chdir = %(base)/%(project)
home = %(base)/djangoenv
module = crudProject.wsgi:application
master = true
processes = 2
socket = 127.0.0.1:3031
chmod-socket = 664
vacuum = true
I restarted everything with:
$ sudo systemctl daemon-reload
$ sudo systemctl restart nginx.service
$ sudo systemctl restart uwsgi.service
The last command gives,
Job for uwsgi.service failed because the control process exited with error code. See "systemctl status uwsgi.service" and "journalctl -xe" for details.
$ sudo systemctl status uwsgi.service
gives,
● uwsgi.service - uWSGI Emperor
Loaded: loaded (/etc/systemd/system/uwsgi.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Dec 07 23:56:28 ofeyspi systemd[1]: Starting uWSGI Emperor...
Dec 07 23:56:28 ofeyspi uwsgi[7834]: [uWSGI] getting INI configuration from /etc/uwsgi/emperor.ini
Dec 07 23:56:28 ofeyspi systemd[1]: uwsgi.service: Main process exited, code=exited, status=1/FAILURE
Dec 07 23:56:28 ofeyspi systemd[1]: Failed to start uWSGI Emperor.
Dec 07 23:56:28 ofeyspi systemd[1]: uwsgi.service: Unit entered failed state.
Dec 07 23:56:28 ofeyspi systemd[1]: uwsgi.service: Failed with result 'exit-code'.
Dec 07 23:56:28 ofeyspi systemd[1]: uwsgi.service: Service hold-off time over, scheduling restart.
Dec 07 23:56:28 ofeyspi systemd[1]: Stopped uWSGI Emperor.
Dec 07 23:56:28 ofeyspi systemd[1]: uwsgi.service: Start request repeated too quickly.
Dec 07 23:56:28 ofeyspi systemd[1]: Failed to start uWSGI Emperor.
I cannot figure out why uwsgi.service will not run.
uwsgi runs fine when I don't go through systemd and instead use:
$ sudo uwsgi --ini django.ini
The most likely reason is that the Emperor is not able to:
create the .pid file to write the process ID;
create the Unix socket files for the vassals (which seems not to be your case, since you are using port :3031);
write to the log file specified by the --logto option (/tmp/uwsgi.log in your case).
This often happens when any of these files already exist and are owned by another user (most likely root), or are located in a directory to which the user starting the service cannot write.
Systemd's status log is not very informative on this subject, so the quickest way to identify the cause is to run your ExecStart command yourself as a non-root user and look at the output:
$ /home/ofey/djangoenv/bin/uwsgi --ini /etc/uwsgi/emperor.ini
If the output shows that the problem is permission, try the following.
Since you are going to run your server on behalf of www-data user, as it has been suggested already, make sure you have:
[Service]
User=www-data
Group=www-data
RuntimeDirectory=uwsgi
in your systemd unit config. Then, make sure the .pid files and unix-socket files (if any) are created under that directory (i.e. under /run/uwsgi) by adding this to your vassals .ini files:
runtime_dir = /run/uwsgi
pidfile=%(runtime_dir)/%n.pid
# if you prefer using unix-socket instead of a port
socket = %(runtime_dir)/%n.sock
# trying to chmod-socket is useless with a port, by the way
chmod-socket = 664
The %n variable in the given example stands for the vassal's .ini file name without the extension (see the full list here).
Finally, make sure the --logto file specified in the Emperor's and vassals' configs is writable by that user.
Please note: if you run uwsgi --ini /etc/uwsgi/emperor.ini as root and then terminate the process with Ctrl+D, it will leave the above-mentioned temp files in place and owned by root, which will prevent other users (like www-data) from writing to them until you delete the files or chown/chmod them again.
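If leftover root-owned files turn out to be the cause, resetting their ownership (paths assumed from the configs above) is enough:
sudo chown -R www-data:www-data /run/uwsgi
sudo chown www-data:www-data /tmp/uwsgi.log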
The uwsgi systemd docs advise adding RuntimeDirectory=uwsgi to your service file. Try adding that.
Also check /tmp/uwsgi.log to see if any logging was generated there.
Comment out KillSignal=SIGQUIT. It can cause problems; see http://uwsgi-docs.readthedocs.io/en/latest/Systemd.html.
It also causes issues on CentOS 7.