Celery - Permission Problem - Create folder - django

I use Celery (a job queue) in production for a Django website on a CentOS 7 server.
My problem is that, inside a Celery task, my function does not create a folder (see my_function below).
the function
def my_function():
    parent_folder = THE_PARENT_PATH
    if not os.path.exists(parent_folder):
        os.makedirs(parent_folder)
    # The folder THE_PARENT_PATH is created
    celery_task(parent_folder)
the celery task
@app.task(name='a task')
def celery_task(parent_folder):
    import getpass; print("permission : ", getpass.getuser())
    # permission : apache
    path_1 = os.path.join(parent_folder, "toto")
    if not os.path.exists(path_1):
        os.makedirs(path_1)
    # The folder path_1 is NOT created
    # ..... some other instructions ...
    # Singularity image run (needs the path_1 folder)
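To narrow this down, here is a minimal diagnostic sketch (debug_makedirs is a hypothetical helper I'm adding, not part of the original code): it prints which user the worker actually runs as and whether the parent directory is writable, before attempting the creation.
import grp
import os
import pwd
import stat

def debug_makedirs(path):
    """Hypothetical helper: report the worker's effective user and the parent
    directory's owner/mode before calling os.makedirs."""
    print("effective user :", pwd.getpwuid(os.geteuid()).pw_name)
    print("effective group:", grp.getgrgid(os.getegid()).gr_name)
    parent = os.path.dirname(path.rstrip(os.sep))
    st = os.stat(parent)
    print("parent dir     :", parent,
          "owner:", pwd.getpwuid(st.st_uid).pw_name,
          "mode:", oct(stat.S_IMODE(st.st_mode)),
          "writable:", os.access(parent, os.W_OK))
    os.makedirs(path, exist_ok=True)
    print("created/exists :", os.path.isdir(path))
Calling debug_makedirs(path_1) from inside celery_task should show whether the apache worker can actually write to parent_folder.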
I use Supervisord to daemonize Celery.
 celery.init
[program:sitecelery]
command=/etc/supervisord.d/celery.sh
directory=/mnt/site/
user=apache
numprocs=1
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker.log
autostart=true
autorestart=true
priority=999
The folder path_1 is created when user=root, but I want it to be created by the apache user, not root.
celery.sh
#!/bin/bash
cd /mnt/site/
exec ../myenv/bin/python3 -m celery -A site.celery_settings worker -l info --autoscale 20
sudo systemctl status supervisord
● supervisord.service - Process Monitoring and Control Daemon
Loaded: loaded (/usr/lib/systemd/system/supervisord.service; disabled; vendor preset: disabled)
Active: active (running) since lun. 2018-10-15 09:09:05 CEST; 4min 59s ago
Process: 61477 ExecStart=/usr/bin/supervisord -c /etc/supervisord.conf (code=exited, status=0/SUCCESS)
Main PID: 61480 (supervisord)
CGroup: /system.slice/supervisord.service
├─61480 /usr/bin/python /usr/bin/supervisord -c /etc/supervisord.conf
└─61491 ../myenv/bin/python3 -m celery -A Site_CNR.celery_settings worker -l info --autoscale 20
oct. 15 09:09:05 web01 systemd[1]: Starting Process Monitoring and Control Daemon...
oct. 15 09:09:05 web01 systemd[1]: Started Process Monitoring and Control Daemon.
oct. 15 09:09:17 web01 Singularity[61669]: action-suid (U=48,P=61669)> Home directory is not owned by calling user: /usr/share/httpd
oct. 15 09:09:17 web01 Singularity[61669]: action-suid (U=48,P=61669)> Retval = 255
oct. 15 09:09:17 web01 Singularity[61678]: action-suid (U=48,P=61678)> Home directory is not owned by calling user: /usr/share/httpd
oct. 15 09:09:17 web01 Singularity[61678]: action-suid (U=48,P=61678)> Retval = 255
EDIT 1: os.makedirs
In the celery task:
if not os.path.exists(path_1):
    print("test")
    # test
    print(os.makedirs(path_1))
    # None
    os.makedirs(path_1)
os.makedirs returns None :/ (note that os.makedirs always returns None; a failure would raise an exception instead)

I don't know why, but applying the fix from another post about this error, sudo chown -R apache:apache /usr/share/httpd/, resolved the problem oO
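For reference, the Singularity lines in the log complain that the home directory /usr/share/httpd is not owned by the calling user, which is exactly what the chown fixes. A small, purely illustrative check you could run from inside the worker:
import os
import pwd

# Compare the owner of the worker's home directory with the effective user;
# the "action-suid ... Home directory is not owned by calling user" error
# in the log means Singularity refused to run because they differ.
home = os.path.expanduser("~")  # /usr/share/httpd for the apache user
owner = pwd.getpwuid(os.stat(home).st_uid).pw_name
me = pwd.getpwuid(os.geteuid()).pw_name
print(f"HOME={home} owned by {owner}, worker running as {me}")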

Related

Celery/systemd not talking to my django app

I am trying to daemonize my celery/redis workers on Ubuntu 18.04 and I am making progress! Celery is now running, but it does not appear to be communicating with my django app. I found that after removing the Type=forking directive from the celery.service file, celery started working.
# systemctl status celery.service
● celery.service - Celery Service
Loaded: loaded (/etc/systemd/system/celery.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2020-12-17 18:35:19 MST; 1min 52s ago
Main PID: 21509 (code=exited, status=1/FAILURE)
Tasks: 0 (limit: 4915)
CGroup: /system.slice/celery.service
Dec 17 18:35:17 t-rex systemd[1]: Starting Celery Service...
Dec 17 18:35:19 t-rex sh[24331]: celery multi v4.3.0 (rhubarb)
Dec 17 18:35:19 t-rex sh[24331]: > Starting nodes...
Dec 17 18:35:19 t-rex sh[24331]: > w1#t-rex: OK
Dec 17 18:35:19 t-rex sh[24331]: > w2#t-rex: OK
Dec 17 18:35:19 t-rex sh[24331]: > w3#t-rex: OK
Dec 17 18:35:19 t-rex systemd[1]: Started Celery Service.
When I test celery from the python prompt in my app's virtualenv, the test fails. This is the test I use in my app before I call a celery task.
>>> celery_app.control.broadcast('ping', reply=True, limit=1)
[]
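One thing worth checking before digging into the unit file is whether the shell you run that ping from and the daemonized workers actually point at the same broker and app. A rough sketch, assuming the MemorabiliaJSON settings module from the question and the usual django-celery wiring (the CELERY settings namespace is my assumption):
from celery import Celery

# Requires DJANGO_SETTINGS_MODULE to be set, as it is in manage.py shell.
app = Celery("MemorabiliaJSON")
app.config_from_object("django.conf:settings", namespace="CELERY")

print("broker :", app.connection().as_uri())     # should match the workers' broker URL
replies = app.control.inspect(timeout=2).ping()  # empty/None means no worker answered
print("workers:", replies)
If the broker URI printed here differs from what the daemonized workers log on startup, the ping will always come back empty.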
My celery.service file (straight from the celery docs) with a few local changes.
[Unit]
Description=Celery Service
After=network.target redis.service
Requires=redis.service
[Service]
#Type=forking
User=www-data
Group=www-data
EnvironmentFile=/etc/conf.d/celery
WorkingDirectory=/home/mark/python-projects/archive
ExecStart=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi start $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --loglevel="${CELERYD_LOG_LEVEL}"'
ExecReload=/bin/sh -c '${CELERY_BIN} -A $CELERY_APP multi restart $CELERYD_NODES \
--pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
--loglevel="${CELERYD_LOG_LEVEL}" $CELERYD_OPTS'
Restart=always
[Install]
WantedBy=multi-user.target
and my environment file (also from the same celery docs):
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1 w2 w3"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/mark/.virtualenvs/archive/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="MemorabiliaJSON"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# How to call manage.py
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="DEBUG"
The redis server is running, so that should not be the issue. I am not sure if redis is talking to my daemonized celery or not.
I start celery with "celery -A MemorabiliaJSON worker -l debug" when using django runserver, and I am not sure if my daemonized celery needs something else to make it talk to my django app.
Is there any magic needed to get django/apache/wsgi to work with daemonized celery? There is nothing in the celery log files when I try my test above.
Thanks for any assistance you can give me in debugging this problem!
Mark

Adding content_type to render_to_response in Django's views.py causing 'Server Error (500)'

In django 1.11, in views.py I am using the render_to_response function as follows:
return render_to_response(domainObject.template_path, context_dict, context)
This works fine. Now I am trying to specify the content_type for this response as 'txt/html'. So I switch to
content_type = 'txt/html'
return render_to_response(domainObject.template_path, context_dict, context, content_type)
But with this setup the server returns a
Server Error (500)
Following the documentation at https://docs.djangoproject.com/en/1.8/topics/http/shortcuts/#render-to-response I think I am providing the variables in the right order...
Here is the full 'def' block for reference:
def myview(request):
    context = RequestContext(request)
    if request.homepage:
        migrationObject = calltomigration()
    else:
        try:
            integrationObject = Integration.objects.filter(subdomain_slug=request.subdomain).get()
        except ObjectDoesNotExist:
            logger.warning(ObjectDoesNotExist)
            raise Http404
    sectionContent = None
    if not request.homepage:
        sectionContent = getLeafpageSectionContent(referenceObject)
    context_dict = {
        'reference': referenceObject,
        'sectionContent': sectionContent,
        'is_homepage': request.homepage
    }
    # content_type = 'txt/html'
    return render_to_response(domainObject.template_path, context_dict, context)
Here is the NGINX status:
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-01-17 16:34:15 UTC; 40s ago
Docs: man:nginx(8)
Process: 14517 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=2)
Process: 14558 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 14546 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 14562 (nginx)
Tasks: 2 (limit: 1152)
CGroup: /system.slice/nginx.service
├─14562 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─14564 nginx: worker process
Jan 17 16:34:15 ip-172-31-8-232 systemd[1]: nginx.service: Failed with result 'timeout'.
Jan 17 16:34:15 ip-172-31-8-232 systemd[1]: Stopped A high performance web server and a reverse proxy server.
Jan 17 16:34:15 ip-172-31-8-232 systemd[1]: Starting A high performance web server and a reverse proxy server...
Jan 17 16:34:15 ip-172-31-8-232 systemd[1]: nginx.service: Failed to parse PID from file /run/nginx.pid: Invalid argument
Jan 17 16:34:15 ip-172-31-8-232 systemd[1]: Started A high performance web server and a reverse proxy server.
[1]+ Done sudo systemctl restart nginx
Today I fixed the issue. I found out that in render_to_response, the MIME type has to be specified in the third position (at least in the setup I am working on). Most OS/browser combinations figured out the malformed MIME type, with the exception of Edge on PC. Fixed now!
The standard Django shortcuts function 'render' provides the same functionality as 'render_to_response'. Django's 'render_to_response' function was deprecated in 2.2 and was officially removed from Django in 3.0. You can check the release notes here:
https://docs.djangoproject.com/en/3.0/releases/3.0/
Check out the official documentation for render function below,
https://docs.djangoproject.com/en/3.0/topics/http/shortcuts/
Also, the content_type should be 'text/html' instead of 'txt/html'.
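For completeness, here is a minimal sketch of the same kind of view using the render shortcut with an explicit content type (the template name and context here are placeholders, not the question's actual values):
from django.shortcuts import render

def myview(request):
    # 'example.html' and this context are placeholders for the question's template and data.
    context_dict = {"is_homepage": getattr(request, "homepage", False)}
    return render(request, "example.html", context_dict, content_type="text/html")
With render, the request replaces the old RequestContext handling and content_type is passed explicitly as a keyword argument, so there is no positional ambiguity.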

Upstart Gunicorn isn't working

Hi, I am a South Korean student :)
I am studying staging and production testing using nginx and gunicorn.
First I want to run gunicorn using a socket:
gunicorn --bind unix:/tmp/tddtest.com.socket testlists.wsgi:application
and it shows
[2016-06-26 05:33:42 +0000] [27861] [INFO] Starting gunicorn 19.6.0
[2016-06-26 05:33:42 +0000] [27861] [INFO] Listening at: unix:/tmp/tddgoat1.amull.net.socket (27861)
[2016-06-26 05:33:42 +0000] [27861] [INFO] Using worker: sync
[2016-06-26 05:33:42 +0000] [27893] [INFO] Booting worker with pid: 27893
and I run the functional tests in my local repository
python manage.py test func_test
and it works!
Creating test database for alias 'default'...
..
----------------------------------------------------------------------
Ran 2 tests in 9.062s
OK
Destroying test database for alias 'default'...
and I want gunicorn to start automatically when I boot the server,
so I decided to use Upstart (on Ubuntu).
In /etc/init/tddtest.com.conf
description "Gunicorn server for tddtest.com"
start on net-device-up
stop on shutdown
respawn
setuid elspeth
chdir /home/elspeth/sites/tddtest.com/source/TDD_Test/testlists/testlists
exec gunicorn --bind unix:/tmp/tdd.com.socket testlists.wsgi:application
(path of wsgi.py is)
/sites/tddtest.com/source/TDD_Test/testlists/testlists
and I run the command
sudo start tddtest.com
It shows
tddtest.com start/running, process 27905
so I think it is working,
but when I run the functional tests in my local repository
python manage.py test func_test
it shows
======================================================================
FAIL: test_can_start_a_list_and_retrieve_it_later (functional_tests.tests.NewVisitorTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/hanminsoo/Documents/TDD_test/TDD_Test/superlists/functional_tests/tests.py", line 38, in test_can_start_a_list_and_retrieve_it_later
self.assertIn('To-Do', self.browser.title)
AssertionError: 'To-Do' not found in 'Error'
----------------------------------------------------------------------
Ran 2 tests in 4.738s
THE GUNICORN IS NOT WORKING ㅠ_ㅠ
I want to look at the processes
ps aux
but I can't find a gunicorn process
[...]
ubuntu 24387 0.0 0.1 105636 1700 ? S 02:51 0:00 sshd: ubuntu@pts/0
ubuntu 24391 0.0 0.3 21284 3748 pts/0 Ss 02:51 0:00 -bash
root 24411 0.0 0.1 63244 1800 pts/0 S 02:51 0:00 su - elspeth
elspeth 24412 0.0 0.4 21600 4208 pts/0 S 02:51 0:00 -su
root 26860 0.0 0.0 31088 960 ? Ss 04:45 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody 26863 0.0 0.1 31524 1872 ? S 04:45 0:00 nginx: worker process
elspeth 28005 0.0 0.1 17160 1292 pts/0 R+ 05:55 0:00 ps aux
I can't find the problem...
Please, somebody help me. Thank you :)
Please modify your upstart script as follows:
exec /home/elspeth/.pyenv/versions/3.5.1/envs/sites/bin/gunicorn --bind unix:/tmp/tdd.com.socket testlists.wsgi:application
If that does not work, it could very well be because the /home/elspeth/.pyenv/ folder is inaccessible; please check its permissions. If the permissions are correct and you are still having problems, try this:
script
cd /home/elspeth/sites/tddtest.com/source/TDD_Test/testlists/testlists
/home/elspeth/.pyenv/versions/3.5.1/envs/sites/bin/gunicorn --bind unix:/tmp/tdd.com.socket testlists.wsgi:application
end script
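If the job still reports start/running but the tests fail, a quick way to confirm whether gunicorn actually bound the socket (the path below is the --bind target from the upstart script; adjust if yours differs):
import os
import stat

sock = "/tmp/tdd.com.socket"  # --bind target from the upstart script
if os.path.exists(sock) and stat.S_ISSOCK(os.stat(sock).st_mode):
    print("socket exists, gunicorn bound it")
else:
    print("no socket at", sock, "- gunicorn never started or bound elsewhere")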

start: Job failed to start (uWSGI)

I am trying to configure Ubuntu + nginx + uWSGI + Django.
Upstart script /etc/init/uwsgi.conf:
description "uWSGI application server in Emperor mode"
start on runlevel [2345]
stop on runlevel [!2345]
setuid voxa
setgid www-data
exec /usr/local/bin/uwsgi --emperor /etc/uwsgi/sites
uwsgi configuration
[uwsgi]
project = project
base = /home/user
chdir = %(base)/%(project)
home = home/user/Env/project_env
module = %(project).wsgi:application
master = true
processes = 5
socket = %(base)/%(project)/%(project).sock
chmod-socket = 664
vacuum = true
But after running the command sudo service uwsgi start I get an error
start: Job failed to start
What should I check to handle it?
UPD:
with the virtualenv activated, the app runs successfully with the uwsgi command
uwsgi --http :8000 --module project.wsgi
uWSGI doesn't have permission to create the socket file in the specified directory. To solve that, you can run the emperor as root and drop privileges in the vassal after the socket is created. Simply add to your vassal config:
username = voxa
groupname = www-data
And remove setuid and setgid from your upstart config file.
If you're worried that someone could abuse that and use a different user/group, you can use the emperor's tyrant mode by adding --emperor-tyrant to the uwsgi start line in the upstart config. That disallows changing the username and groupname to anything other than the owner of the vassal config file.

supervisor, gunicorn and django only work when logged in

I'm using this guide to set up an intranet server. Everything goes OK, the server works, and I can check that it is working on my network.
But when I log out, I get a 404 error.
The sock file is in the path indicated in gunicorn_start.
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ ls -l run/
total 0
srwxrwxrwx 1 javier javier 0 mar 10 17:31 cmi.sock
Actually, I can see the workers when I list the processes.
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ ps aux | grep cmi
javier 17354 0.0 0.2 14652 8124 ? S 17:27 0:00 gunicorn: master [cmi]
javier 17365 0.0 0.3 18112 10236 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17366 0.0 0.3 18120 10240 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17367 0.0 0.5 36592 17496 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17787 0.0 0.0 4408 828 pts/0 S+ 17:55 0:00 grep --color=auto cmi
And supervisorctl responds that the process is running:
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ sudo supervisorctl status cmi
[sudo] password for javier:
cmi RUNNING pid 17354, uptime 0:29:21
There is an error in the nginx logs:
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ tail logs/nginx-error.log
2014/03/10 17:38:57 [error] 17299#0: *19 connect() to
unix:/home/javier/workspace/cmi/cmi/run/cmi.sock failed (111: Connection refused) while
connecting to upstream, client: 10.69.0.174, server: , request: "GET / HTTP/1.1",
upstream: "http://unix:/home/javier/workspace/cmi/cmi/run/cmi.sock:/", host:
"10.69.0.68:2014"
Again, the error appears only when I log out or close the session, but everything works fine while I run or reload supervisor and stay connected.
By the way, nginx, supervisor and gunicorn all run under my uid.
Thanks in advance.
Edit Supervisor conf
[program:cmi]
command = /home/javier/entornos/cmi2014/bin/cmi_start
user = javier
stdout_logfile = /home/javier/workspace/cmi/cmi/logs/cmi_supervisor.log
redirect_stderr = true
autostart=true
autorestart=true
Gunicorn start script
#!/bin/bash
NAME="cmi" # Name of the application
DJANGODIR=/home/javier/workspace/cmi/cmi # Django project directory
SOCKFILE=/home/javier/workspace/cmi/cmi/run/cmi.sock # we will communicate using this unix socket
USER=javier # the user to run as
GROUP=javier # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=cmi.settings # which settings file should Django use
DJANGO_WSGI_MODULE=cmi.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /home/javier/entornos/cmi2014/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
export CMI_SECRET_KEY='***'
export CMI_DATABASE_HOST='***'
export CMI_DATABASE_NAME='***'
export CMI_DATABASE_USER='***'
export CMI_DATABASE_PASS='***'
export CMI_DATABASE_PORT='3306'
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /home/javier/entornos/cmi2014/bin/gunicorn ${DJANGO_WSGI_MODULE}:application --name $NAME --workers $NUM_WORKERS --user=$USER --group=$GROUP --log-level=debug --bind=unix:$SOCKFILE
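Since the nginx log above shows connect() ... Connection refused on the unix socket, one thing worth checking after logging out (for example from a cron job or a second session) is whether the socket still accepts connections at all. A small sketch, using the SOCKFILE path from gunicorn_start:
import socket

SOCK = "/home/javier/workspace/cmi/cmi/run/cmi.sock"  # SOCKFILE from gunicorn_start

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.connect(SOCK)  # succeeds only if a gunicorn worker is still accepting
    print("socket is accepting connections")
except OSError as exc:
    print("connect failed:", exc)  # ECONNREFUSED matches the nginx log
finally:
    s.close()
If this starts failing the moment the login session ends, the workers themselves are being terminated with the session rather than nginx misbehaving.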