I'm a newbie to Python. Recently I ran into a problem with Python's Popen, and I hope someone can help me. Thanks :D
a.py
#!/usr/bin/env python
import b

b.run()
while True:
    pass
b.py
#!/usr/bin/env python
import subprocess

def run():
    subprocess.Popen(['ping www.google.com > /dev/null &'], shell=True)

run()
When I run b.py and grep the process status:
$ ps aux | grep ping
test 35806 0.0 0.0 2451284 592 s010 Sl 10:11 0:00.00 ping www.google.com
the ping process runs in the background with state Sl.
Now I try running a.py:
$ ps aux | grep ping
test 36088 0.0 0.0 2444116 604 s010 Sl+ 10:15 0:00.00 ping www.google.com
The ping process state changes to Sl+, and if I stop a.py with Ctrl+C, the ping process is also terminated.
Is there any way to make the ping process run in the background so that it is not affected when I stop a.py? And why does the ping process state change from Sl to Sl+?
After some research, I found that adding preexec_fn=os.setsid solves the problem:
subprocess.Popen(['ping www.google.com > /dev/null &'], shell=True, preexec_fn=os.setsid)
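For what it's worth, the + in the ps STAT column marks processes in the terminal's foreground process group; Ctrl+C sends SIGINT to that whole group, which is why ping dies together with a.py. os.setsid puts the child in a new session, outside that group. A minimal sketch of b.py with the fix (start_new_session=True is an equivalent spelling on Python 3.2+):
#!/usr/bin/env python
import os
import subprocess

def run():
    # Start ping in its own session: it leaves the terminal's foreground
    # process group, so Ctrl+C in a.py no longer delivers SIGINT to it.
    subprocess.Popen(
        'ping www.google.com > /dev/null',
        shell=True,
        preexec_fn=os.setsid,  # or start_new_session=True on Python 3.2+
    )

run()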
Related
How to use Django with AWS Elastic Beanstalk that would also run tasks by celery on main node only?
This is how I set up Celery with Django on Elastic Beanstalk, with scalability working fine.
Please keep in mind that the 'leader_only' option for container_commands works only on environment rebuild or deployment of the app. If the service runs long enough, the leader node may be removed by Elastic Beanstalk, so you may have to apply instance protection to your leader node. Check: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html#instance-protection-instance
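If you would rather set that protection from code than from the console, a boto3 sketch like the one below should work; the instance ID and Auto Scaling group name are placeholders, and the caller needs the autoscaling:SetInstanceProtection permission:
import boto3

# Protect the leader instance from scale-in so the Auto Scaling
# group behind Elastic Beanstalk does not remove it.
client = boto3.client('autoscaling', region_name='eu-west-1')
client.set_instance_protection(
    InstanceIds=['i-0123456789abcdef0'],        # placeholder: leader instance ID
    AutoScalingGroupName='awseb-e-xxxx-stack',  # placeholder: your EB ASG name
    ProtectedFromScaleIn=True,
)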
Add a bash script for celery worker and beat configuration.
Add file root_folder/.ebextensions/files/celery_configuration.txt:
#!/usr/bin/env bash
# Get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}
# Create celery configuration script
celeryconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A django_app --loglevel=INFO
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv
[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A django_app --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv"
# Create the celery supervisord conf script
echo "$celeryconf" | tee /opt/python/etc/celery.conf
# Add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
then
echo "[include]" | tee -a /opt/python/etc/supervisord.conf
echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
fi
# Reread the supervisord config
supervisorctl -c /opt/python/etc/supervisord.conf reread
# Update supervisord in cache without restarting all services
supervisorctl -c /opt/python/etc/supervisord.conf update
# Start/Restart celeryd through supervisord
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-beat
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker
Take care of script execution during deployment, but only on the main node (leader_only: true).
Add file root_folder/.ebextensions/02-python.config:
container_commands:
  04_celery_tasks:
    command: "cat .ebextensions/files/celery_configuration.txt > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
    leader_only: true
  05_celery_tasks_run:
    command: "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
    leader_only: true
Beat is configurable without the need for redeployment, via a separate Django application: https://pypi.python.org/pypi/django_celery_beat.
Storing task results is a good idea too: https://pypi.python.org/pypi/django_celery_results.
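As a hedged illustration of that runtime configurability: once django_celery_beat is installed, a periodic task can be created straight from the Django ORM, and beat (run with -S django as above) picks it up from the database without a redeploy. The task path and schedule below are made up:
from django_celery_beat.models import CrontabSchedule, PeriodicTask

# Run a task every 15 minutes, defined in the database rather than in code.
schedule, _ = CrontabSchedule.objects.get_or_create(
    minute='*/15', hour='*', day_of_week='*',
    day_of_month='*', month_of_year='*',
)
PeriodicTask.objects.create(
    crontab=schedule,
    name='refresh-feed',          # made-up name
    task='django_app.tasks.add',  # made-up task path
    args='[2, 3]',                # JSON-encoded positional args
)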
File requirements.txt
celery==4.0.0
django_celery_beat==1.0.1
django_celery_results==1.0.1
pycurl==7.43.0 --global-option="--with-nss"
Configure celery for Amazon SQS broker
(Get your desired endpoint from list: http://docs.aws.amazon.com/general/latest/gr/rande.html)
root_folder/django_app/settings.py:
...
CELERY_RESULT_BACKEND = 'django-db'
CELERY_BROKER_URL = 'sqs://%s:%s@' % (aws_access_key_id, aws_secret_access_key)
# Due to an error in the lib, the N. Virginia region is used temporarily.
# Please set it to Ireland ("eu-west-1") after the fix.
CELERY_BROKER_TRANSPORT_OPTIONS = {
    "region": "eu-west-1",
    'queue_name_prefix': 'django_app-%s-' % os.environ.get('APP_ENV', 'dev'),
    'visibility_timeout': 360,
    'polling_interval': 1
}
...
Celery configuration for the django_app Django app
Add file root_folder/django_app/celery.py:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'django_app.settings')
app = Celery('django_app')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
Modify file root_folder/django_app/__init__.py:
from __future__ import absolute_import, unicode_literals
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from django_app.celery import app as celery_app
__all__ = ['celery_app']
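With that wiring in place, a trivial task is handy for smoke-testing the whole chain; the module and function below are illustrative, not part of the original setup:
# root_folder/django_app/tasks.py (illustrative)
from __future__ import absolute_import, unicode_literals
from celery import shared_task

@shared_task
def add(x, y):
    # If add.delay(2, 3) returns a result via django-db, the broker,
    # worker and result backend are all talking to each other.
    return x + y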
Check also:
How do you run a worker with AWS Elastic Beanstalk? (solution without scalability)
Pip Requirements.txt --global-option causing installation errors with other packages. "option not recognized" (solution for problems coming from an obsolete pip on Elastic Beanstalk that cannot deal with global options for properly resolving the pycurl dependency)
This is how I extended the answer by @smentek to allow for multiple worker instances and a single beat instance; the same thing applies where you have to protect your leader. (I still don't have an automated solution for that.)
Please note that envvar updates to EB via the EB CLI or the web interface are not reflected by celery beat or workers until an app server restart has taken place. This caught me off guard once.
A single celery_configuration.sh file outputs two scripts for supervisord. Note that celery-beat has autostart=false, otherwise you end up with many beats after an instance restart:
# get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}
# create celery beat config script
celerybeatconf="[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A lexvoco --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=false
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 10
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv"
# create celery worker config script
celeryworkerconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A lexvoco --loglevel=INFO
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=999
environment=$celeryenv"
# create files for the scripts
echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf
echo "$celeryworkerconf" | tee /opt/python/etc/celeryworker.conf
# add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
then
echo "[include]" | tee -a /opt/python/etc/supervisord.conf
echo "files: celerybeat.conf celeryworker.conf" | tee -a /opt/python/etc/supervisord.conf
fi
# reread the supervisord config
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread
# update supervisord in cache without restarting all services
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update
Then in container_commands we only restart beat on leader:
container_commands:
  # create the celery configuration file
  01_create_celery_beat_configuration_file:
    command: "cat .ebextensions/files/celery_configuration.sh > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && sed -i 's/\r$//' /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
  # restart celery beat if leader
  02_start_celery_beat:
    command: "/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-beat"
    leader_only: true
  # restart celery worker
  03_start_celery_worker:
    command: "/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker"
If someone is following smentek's answer and getting the error:
05_celery_tasks_run: /usr/bin/env bash does not exist.
know that if you are using Windows, your problem might be that the celery_configuration.txt file has Windows EOL when it should have Unix EOL. If using Notepad++, open the file and click "Edit > EOL Conversion > Unix (LF)". Save, redeploy, and the error is gone.
Also, a couple of warnings for really amateur people like me:
Be sure to include "django_celery_beat" and "django_celery_results" in your "INSTALLED_APPS" in settings.py file.
To check celery errors, connect to your instance with "eb ssh" and then "tail -n 40 /var/log/celery-worker.log" and "tail -n 40 /var/log/celery-beat.log" (where "40" refers to the number of lines you want to read from the file, starting from the end).
Hope this helps someone, it would've saved me some hours!
Hi, I am a South Korean student :)
I am studying staging and production testing using nginx and gunicorn.
First, I want to run gunicorn using a socket:
gunicorn --bind unix:/tmp/tddtest.com.socket testlists.wsgi:application
and it shows:
[2016-06-26 05:33:42 +0000] [27861] [INFO] Starting gunicorn 19.6.0
[2016-06-26 05:33:42 +0000] [27861] [INFO] Listening at: unix:/tmp/tddgoat1.amull.net.socket (27861)
[2016-06-26 05:33:42 +0000] [27861] [INFO] Using worker: sync
[2016-06-26 05:33:42 +0000] [27893] [INFO] Booting worker with pid: 27893
Then I run the functional tests from my local repository:
python manage.py test func_test
and it was working!
Creating test database for alias 'default'...
..
----------------------------------------------------------------------
Ran 2 tests in 9.062s
OK
Destroying test database for alias 'default'...
I want gunicorn to start automatically when the server boots, so I decided to use Upstart (on Ubuntu).
In /etc/init/tddtest.com.conf
description "Gunicorn server for tddtest.com"
start on net-device-up
stop on shutdown
respawn
setuid elspeth
chdir /home/elspeth/sites/tddtest.com/source/TDD_Test/testlists/testlists
exec gunicorn --bind unix:/tmp/tdd.com.socket testlists.wsgi:application
(The path of wsgi.py is:)
/sites/tddtest.com/source/TDD_Test/testlists/testlists
Then I run:
sudo start tddtest.com
It shows
tddtest.com start/running, process 27905
I think it is working, but when I run the functional tests from my local repository:
python manage.py test func_test
it shows:
======================================================================
FAIL: test_can_start_a_list_and_retrieve_it_later (functional_tests.tests.NewVisitorTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/hanminsoo/Documents/TDD_test/TDD_Test/superlists/functional_tests/tests.py", line 38, in test_can_start_a_list_and_retrieve_it_later
self.assertIn('To-Do', self.browser.title)
AssertionError: 'To-Do' not found in 'Error'
----------------------------------------------------------------------
Ran 2 tests in 4.738s
THE GUNICORN IS NOT WORKING ㅠ_ㅠ
I want to look at the processes:
ps aux
but I can't find a gunicorn process:
[...]
ubuntu 24387 0.0 0.1 105636 1700 ? S 02:51 0:00 sshd: ubuntu@pts/0
ubuntu 24391 0.0 0.3 21284 3748 pts/0 Ss 02:51 0:00 -bash
root 24411 0.0 0.1 63244 1800 pts/0 S 02:51 0:00 su - elspeth
elspeth 24412 0.0 0.4 21600 4208 pts/0 S 02:51 0:00 -su
root 26860 0.0 0.0 31088 960 ? Ss 04:45 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody 26863 0.0 0.1 31524 1872 ? S 04:45 0:00 nginx: worker process
elspeth 28005 0.0 0.1 17160 1292 pts/0 R+ 05:55 0:00 ps aux
I can't find the problem...
Please, somebody help me. Thank you :)
Please modify your upstart script as follows:
exec /home/elspeth/.pyenv/versions/3.5.1/envs/sites/bin/gunicorn --bind unix:/tmp/tdd.com.socket testlists.wsgi:application
If that does not work, it could very well be because the /home/elspeth/.pyenv/ folder is inaccessible; please check its permissions. If the permissions are correct and you are still having problems, try this:
script
    cd /home/elspeth/sites/tddtest.com/source/TDD_Test/testlists/testlists
    /home/elspeth/.pyenv/versions/3.5.1/envs/sites/bin/gunicorn --bind unix:/tmp/tdd.com.socket testlists.wsgi:application
end script
I would like to run nload (a network throughput monitor) as a daemon on startup (or just automate it in general). I can successfully run it as a daemon from the command line by typing this:
nload eth0 >& /dev/null &
Just some background: I modified the nload source code (written in C++) slightly to write to a file in addition to outputting to the screen. I would like to read the throughput values from the file that nload writes to. The reason I am outputting to /dev/null is so that I don't need to worry about the stdout output.
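For what it's worth, a reader for that file might look like the sketch below; the path and the one-value-per-line format are pure assumptions, since the modified nload's output format isn't shown:
# Hypothetical reader for the file the modified nload writes to.
def read_latest_throughput(path='/tmp/nload_throughput.txt'):  # assumed path
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    # Assume one throughput value per line; take the most recent one.
    return float(lines[-1]) if lines else None

print(read_latest_throughput())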
The weird thing is that when I run it manually it runs just fine as a daemon, and I am able to read throughput values from the file. But every attempt at automation has failed. I have tried init.d, rc.local, and cron, but no luck. The script I wrote to run this in automation is:
#!/bin/bash
echo "starting nload"
/usr/bin/nload eth0 >& /dev/null &
if [ $? -eq 0 ]; then
    echo started nload
else
    echo failed to start nload
fi
I can confirm that when automated, the script does run, since I tried logging the output. It even logs "started nload", but when I look at the list of running processes, nload is not one of them. I can also confirm that when the script is run manually from the shell, nload starts up just fine as a daemon.
Does anyone know what could be preventing this program from running when run via an automated script?
It looks like nload is crashing when it's not run from a terminal.
viroos@null-linux:~$ cat /etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
strace -o /tmp/nload.trace /usr/bin/nload
exit 0
It looks like the HOME env var is missing:
viroos@null-linux:~$ cat /tmp/nload.trace
brk(0x1f83000) = 0x1f83000
write(2, "Could not retrieve home director"..., 34) = 34
write(2, "\n", 1) = 1
exit_group(1) = ?
+++ exited with 1 +++
Let's fix this:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
export HOME=/tmp
strace -o /tmp/nload.trace /usr/bin/nload
exit 0
We have another problem:
viroos@null-linux:~$ cat /tmp/nload.trace
read(3, "\32\1\36\0\7\0\1\0\202\0\10\0unknown|unknown term"..., 4096) = 320
read(3, "", 4096) = 0
close(3) = 0
munmap(0x7f23e62c9000, 4096) = 0
ioctl(2, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 0x7ffedd149010) = -1 ENOTTY (Inappropriate ioctl for device)
ioctl(2, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 0x7ffedd148fb0) = -1 ENOTTY (Inappropriate ioctl for device)
write(2, "Error opening terminal: unknown."..., 33) = 33
exit_group(1) = ?
+++ exited with 1 +++
I saw you mentioned that you modified the nload code, but my guess is you haven't removed the handling for a missing terminal. You can try editing the nload code further, or use screen in detached mode:
viroos@null-linux:~$ cat /etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
export HOME=/tmp
screen -S nload -dm /usr/bin/nload
exit 0
Relatively new to running cron jobs in CentOS 6, I can't seem to get this Python script to execute properly. I would like this script to execute and then email me the output. I have been receiving emails, but they're empty.
So far, in crontab I've tried entering:
*/10 * * * * cd /home/local/MYCOMPANY/purrone/MyPythonScripts_Dev1 && /usr/bin/python ParserScript_CampusInsiders.py > /var/log/cron`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log 2>&1 ; mailx -s "Feedparser Output" my@email.com
and
*/10 * * * * /home/local/MYCOMPANY/purrone/MyPythonScripts_Dev1/ParserScript_CampusInsiders.py > /var/log/cron`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log 2>&1 ; mailx -s "Feedparser Output" my@email.com
I have run chmod +x on the Python script to make it executable, and the script has #!/usr/bin/env python as its header. What am I doing wrong here?
The other problem might be that I shouldn't be using the log file? All I see at /var/log/cron when I open it with cat is entries like this, for example (no actual output from the script):
Jul 23 13:20:01 ent-mocdvsmg01 CROND[24681]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Jul 23 13:20:01 ent-mocdvsmg01 CROND[24684]: (MYJOB\purrone) CMD (/home/local/MYCOMPANY/purrone/MyPythonScripts_Dev1/ParserScript_CampusInsiders.py > /var/log/cron`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log 2>&1 ; mailx -s "Feedparser Output" my@email.com)
There is nothing going into your mailx input; it expects the message on stdin. Try running it outside of crontab as a test until it sends a valid email. You could test with:
% echo hello | mailx -s test my@email.com
Note that cron can email you the output of its run. You just need to add a line to the top of crontab like:
MAILTO=you@email.com
The solution was to omit the redirect > and instead pipe the output through tee, so it both lands in the log file and reaches mailx on stdin:
*/15 * * * * /home/local/COMPANY/malvin/SilverChalice_CampusInsiders/SilverChalice_Parser.py | tee /home/local/COMPANY/malvin/SilverChalice_CampusInsiders`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log | mailx -s "SilverChalice CampusInsiders" my@email.com
I have a Python script that continuously checks that snmpd and a socket script are running. If either gets killed, it should kill both and start a new session. The problem is that once the socket script is running, it waits a long time for a connection; if anyone kills snmpd in the meantime, it doesn't get restarted (I think the script never loops back). What may be the reason, and what is a possible solution? Is any optimisation possible for the code?
import os

def terminator():
    i = 0
    j = 0
    os.system("ps -eaf|grep snmpd|cut -d \" \" -f7 >snmpd_pid.txt")
    os.system("ps -eaf|grep iperf|cut -d \" \" -f7 >iperf_pid.txt")
    os.system("ps -eaf|grep sock_bg.py|cut -d \" \" -f7 >script_pid.txt")
    snmpd_pids = tuple(line.strip() for line in open('snmpd_pid.txt'))
    iperf_pids = tuple(line.strip() for line in open('iperf_pid.txt'))
    script_pids = tuple(line.strip() for line in open('script_pid.txt'))
    k1 = len(snmpd_pids) - 2
    k2 = len(iperf_pids) - 2
    k3 = len(script_pids) - 2
    if (k1 == 0 or k3 == 0):
        for i in range(k1):
            cmd = 'kill -9 %s' % (snmpd_pids[i])
            os.system(cmd)
        for i in range(k2):
            cmd = 'kill -9 %s' % (iperf_pids[i])
            os.system(cmd)
        for i in range(k3):
            cmd = 'kill -9 %s' % (script_pids[i])
            os.system(cmd)
    os.system("/usr/local/sbin/snmpd -f -L -d -p 9999")
    os.system("python /home/maxuser/utils/python-bg/sock_bg.py")

try:
    terminator()
except:
    print 'an exception occured'
I found the answer: it's the problem of getting the prompt back.
I used the screen -d -m option and am now able to get the intended result:
os.system("screen -d -m /usr/local/sbin/snmpd -f -L -d -p 9999 &")
os.system("screen -d -m python /home/maxuser/utils/python-bg/sock_bg.py &")
Also, those system commands need to be inside the if condition.
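Since the question also asked about optimisation: the ps/cut/temp-file dance can be replaced with pgrep/pkill, and the whole thing turned into a polling loop so a killed snmpd is noticed later, too. The sketch below is an assumption about the intent, not the original code; the match patterns and the five-second interval are arbitrary choices:
#!/usr/bin/env python
import subprocess
import time

def is_running(pattern):
    # pgrep -f matches against the full command line; exit status 0
    # means at least one process matched.
    with open('/dev/null', 'w') as devnull:
        return subprocess.call(['pgrep', '-f', pattern], stdout=devnull) == 0

while True:
    if not (is_running('/usr/local/sbin/snmpd') and is_running('sock_bg.py')):
        # One of the pair died: kill whatever half is still alive...
        subprocess.call(['pkill', '-9', '-f', '/usr/local/sbin/snmpd'])
        subprocess.call(['pkill', '-9', '-f', 'sock_bg.py'])
        # ...and relaunch both detached, as the accepted answer does.
        subprocess.call('screen -d -m /usr/local/sbin/snmpd -f -L -d -p 9999', shell=True)
        subprocess.call('screen -d -m python /home/maxuser/utils/python-bg/sock_bg.py', shell=True)
    time.sleep(5)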