django_cron config when .ebextensions is not in root directory - django

There are a few solutions for configuring .ebextensions container commands for cron jobs, but none of them are working for me.
I am concerned that the reason they aren't working is that .ebextensions is not in the root directory. This messy code was handed over to me, and I've tried to move .ebextensions to where it needs to be, but that seems to break everything else.
This app is a streaming video application currently in production and I can't afford to break it, so I ended up just leaving it where it is.
Can someone tell me whether I am doing this right and I just need to find a way to move .ebextensions, or whether the problem is in my cron job configuration?
app1/.ebextensions/02_python.config
container_commands:
  ...
  cronjob:
    command: "echo .ebextensions/cronjobs.txt > /etc/cron.d/cronjobs && 644 /etc/cron.d/cronjobs"
    leader_only: true
  ...
app1/.ebextensions/cronjobs.txt
***** root source /opt/python/run/venv/bin/activate && python3 manage.py runcrons > /var/log/cronjobs.log
app1/settings.py
INSTALLED_APPS = [
    ...
    'django_cron',
    ...
]

CRON_CLASSES = [
    'app2.crons.MyCronJob',
]
app2/crons.py
from django_cron import CronJobBase, Schedule

class MyCronJob(CronJobBase):
    RUN_EVERY_MINS = 1
    schedule = Schedule(run_every_mins=RUN_EVERY_MINS)

    def do(self):
        # calculate stuff
        # update variables
        pass
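As a side note, the skeleton in the django_cron docs also sets a unique code attribute on the job class, which runcrons uses to log each run; here is a minimal sketch with a placeholder code value:

from django_cron import CronJobBase, Schedule

class MyCronJob(CronJobBase):
    RUN_EVERY_MINS = 1
    schedule = Schedule(run_every_mins=RUN_EVERY_MINS)
    code = 'app2.my_cron_job'  # unique identifier for this job (placeholder value)

    def do(self):
        pass  # calculate stuff and update variables here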
This deploys to AWS Elastic Beanstalk without error, and the logs show that the command ran, but the work doesn't get done and it only runs once, on deploy. The logs show this:
Command cronjob] : Starting activity...
[2018-02-15T12:58:41.648Z] INFO [24604] - [Application update ingest16#207/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_api_backend/Test for Command 05_cronjob] :
Completed activity. Result:
Completed successfully
This does the job but only once on deploy.
container_commands:
  ...
  cronjob:
    command: "source /opt/python/run/venv/bin/activate && python3 manage.py runcrons"
    leader_only: true
  ...
This doesn't work at all.
container_commands:
  ...
  cronjob:
    command: "echo /app1/.ebextensions/cronjobs.txt > /etc/cron.d/cronjobs && 644 /etc/cron.d/cronjobs"
    leader_only: true
  ...

Hi, why use django_cron when you only need cron?
Here is my .ebextensions config:
container_commands:
  ...
  0.0.1.cron.mailing:
    command: "cat .ebextensions/mailing.txt > /etc/cron.d/mailing && chmod 644 /etc/cron.d/mailing"
    leader_only: true
Here is my mailing.txt:
# Every morning at 05:00
# * * * * * user command
# | | | | |
# | | | | +---- Day of the Week (range: 0-7, 0 or 7 standing for Sunday)
# | | | +------ Month of the Year (range: 1-12)
# | | +-------- Day of the Month (range: 1-31)
# | +---------- Hour (range: 0-23)
# +------------ Minute (range: 0-59)
# m h dom mon dow user command
0 5 * * * root source /opt/python/run/venv/bin/activate && source /opt/python/current/env && cd /opt/python/current/app/ && python manage.py my_command >> /home/ec2-user/cron-mailing.log 2>&1
And here is how to create a custom management command: https://docs.djangoproject.com/en/2.0/howto/custom-management-commands/#module-django.core.management
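For illustration only, a minimal sketch of such a command (the app name app1 and the command name my_command are placeholders; the management/ and commands/ folders each need an __init__.py):

# app1/management/commands/my_command.py  (path and names are placeholders)
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Send the morning mailing"

    def handle(self, *args, **options):
        # Put the actual mailing logic here; cron only needs `manage.py my_command` to exist.
        self.stdout.write("Mailing done")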
Hope this helps.

You need spaces between the * fields in your cron file.
Your cron file:
***** root source /opt/python/run/venv/bin/activate && python3 manage.py runcrons > /var/log/cronjobs.log
Fix it like this:
* * * * * root source /opt/python/run/venv/bin/activate && python3 manage.py runcrons > /var/log/cronjobs.log

Related

AWS Elastic Beanstalk ebextensions configuration doesn't work when new instance gets spawned

In my Laravel application, I have created a folder .ebextensions and it has the configuration to install supervisor in the EC2 instance.
When I deploy the application for the first time and the instance gets created, everything works fine. Supervisor gets installed.
But when the instance scales and a new EC2 instance gets spawned, it doesn't pick up the same configuration; I need to install supervisor manually on the newer instance.
Is there a way for the newer instances to take the configuration from .ebextensions and run it the same way it was run the first time?
This is the structure of the .ebextensions folder
.ebextensions
  - supervisor
    - setup.sh
    - supervisor_laravel.conf
    - supervisord.conf
  - supervisor.config
setup.sh
#!/bin/bash
echo "Supervisor - starting setup"
. /opt/elasticbeanstalk/deployment/env
if [ ! -f /usr/bin/supervisord ]; then
    echo "installing supervisor"
    easy_install supervisor
else
    echo "supervisor already installed"
fi

if [ ! -d /etc/supervisor ]; then
    mkdir /etc/supervisor
    echo "create supervisor directory"
fi

if [ ! -d /etc/supervisor/conf.d ]; then
    mkdir /etc/supervisor/conf.d
    echo "create supervisor configs directory"
fi
. /opt/elasticbeanstalk/deployment/env && cat .ebextensions/supervisor/supervisord.conf > /etc/supervisor/supervisord.conf
. /opt/elasticbeanstalk/deployment/env && cat .ebextensions/supervisor/supervisord.conf > /etc/supervisord.conf
. /opt/elasticbeanstalk/deployment/env && cat .ebextensions/supervisor/supervisor_laravel.conf > /etc/supervisor/conf.d/supervisor_laravel.conf
if ps aux | grep "[/]usr/bin/supervisord"; then
    echo "supervisor is running"
else
    echo "starting supervisor"
    /usr/bin/supervisord
fi
/usr/bin/supervisorctl reread
/usr/bin/supervisorctl update
echo "Supervisor Running!"
yum -y install http://cbs.centos.org/kojifiles/packages/beanstalkd/1.9/3.el7/x86_64/beanstalkd-1.9-3.el7.x86_64.rpm
if ps aux | grep "[/]usr/bin/beanstalkd"; then
    echo "beanstalkd is running"
else
    echo "starting beanstalkd"
    /bin/systemctl start beanstalkd.service
fi
echo "Beanstalkd Running..."
supervisor_laravel.conf
[program:worker]
process_name=%(program_name)s_%(process_num)02d
command=/usr/bin/php /var/www/html/artisan queue:work --tries=3 --timeout=0
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
;user=forge
numprocs=3
redirect_stderr=true
stderr_logfile=/var/log/supervisor_laravel.err.log
stdout_logfile=/var/log/supervisor_laravel.out.log
supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
environment=SYMFONY_ENV=prod
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[include]
files = /etc/supervisor/conf.d/*.conf
[inet_http_server]
port = 9000
username = user
password = pw
supervisor.config
container_commands:
  01_install_supervisor:
    command: ".ebextensions/supervisor/setup.sh"
I found the answer, but in a slightly different way.
Initially I had all the files in the .ebextensions folder. I moved some of them into a different folder called "awsconfig" (the name doesn't matter), moved some of the commands from setup.sh into .platform/hooks/postdeploy, and made setup.sh executable with chmod +x.
Important
AWS cleans up the .ebextensions folder after the commands are executed. I read that here.
I did this because supervisor was getting installed on the newer instance (the instance that gets created after scaling), but the configuration files weren't getting copied, because there was no .ebextensions folder on the newer instance.
To copy the configuration files after the instance gets deployed, I used platform hooks postdeploy.
You can read about platform-hooks-postdeploy here
This is the structure I have now
.ebextensions
  - supervisor
    - setup.sh
  - supervisor.config
.platform
  - hooks
    - postdeploy
      - setup.sh
awsconfig
  - supervisor
    - supervisor_laravel.conf
    - supervisord.conf
These are the changes to the setup.sh file, which is now split into two files in different folders:
.ebextensions/supervisor/setup.sh
#!/bin/bash
echo "Supervisor - starting setup"
. /opt/elasticbeanstalk/deployment/env
if [ ! -f /usr/bin/supervisord ]; then
    echo "installing supervisor"
    easy_install supervisor
else
    echo "supervisor already installed"
fi

if [ ! -d /etc/supervisor ]; then
    mkdir /etc/supervisor
    echo "create supervisor directory"
fi

if [ ! -d /etc/supervisor/conf.d ]; then
    mkdir /etc/supervisor/conf.d
    echo "create supervisor configs directory"
fi
.platform/hooks/postdeploy/setup.sh
#!/bin/bash
. /opt/elasticbeanstalk/deployment/env && cat /var/www/html/awsconfig/supervisor/supervisord.conf > /etc/supervisor/supervisord.conf
. /opt/elasticbeanstalk/deployment/env && cat /var/www/html/awsconfig/supervisor/supervisord.conf > /etc/supervisord.conf
. /opt/elasticbeanstalk/deployment/env && cat /var/www/html/awsconfig/supervisor/supervisor_laravel.conf > /etc/supervisor/conf.d/supervisor_laravel.conf
if ps aux | grep "[/]usr/bin/supervisord"; then
    echo "supervisor is running"
else
    echo "starting supervisor"
    /usr/bin/supervisord
fi
/usr/bin/supervisorctl reread
/usr/bin/supervisorctl update
echo "Supervisor Running!"
yum -y install http://cbs.centos.org/kojifiles/packages/beanstalkd/1.9/3.el7/x86_64/beanstalkd-1.9-3.el7.x86_64.rpm
if ps aux | grep "[/]usr/bin/beanstalkd"; then
    echo "beanstalkd is running"
else
    echo "starting beanstalkd"
    /bin/systemctl start beanstalkd.service
fi
echo "Beanstalkd Running..."
The content of all other files remains the same as what I have posted in the question.

AWS Elastic Beanstalk post-deploy script problem

I'm trying to run the celery worker after my application has been deployed to AWS Elastic Beanstalk.
99_celery_start.sh
#!/usr/bin/env bash
#make dirs
sudo mkdir -p /usr/etc/
sudo chmod 755 /usr/etc/
sudo touch /usr/etc/celery.conf
sudo touch /usr/etc/supervisord.conf
# Get django environment variables
celeryenv=`cat /var/app/rootfolder/myprject/.env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}
# Create celery configuration script
celeryconf="[program:celery-worker]
user=root
directory=/var/app/rootfolder
; Set full path to celery program if using virtualenv
command=/var/app/venv/*/bin/celery -A myprject worker -P solo --loglevel=INFO
.
.
.
.
celery.config
container_commands:
  01_celery_configure:
    command: "mkdir -p /.platform/hooks/postdeploy/ && cp .ebextensions/99_celery_start.sh /.platform/hooks/postdeploy/ && chmod 774 /.platform/hooks/postdeploy/99_celery_start.sh"
  02_run_celery:
    command: "sudo /.platform/hooks/postdeploy/99_celery_start.sh"
What I'm trying to do is copy the celery worker script from the .ebextensions folder into the post-deploy hook folder so that the script runs after the application is deployed on the instance. But the 02_run_celery command executes before the application is extracted and deployed on the instance. Since the script requires the application folder /var/app/rootfolder/myprjct/.env, the deployment process gives the error cat: /var/app/rootfolder/myprjct/.env: No such file or directory.
If your assumption is right and this is a race condition (the second command executes before the application is deployed), what about waiting for the application deployment, or waiting for the specific file you're interested in?
...
# Wait for the app deployment
while [ ! -f /var/app/rootfolder/myprject/.env ]
do
    sleep 1
    echo "Waiting for the application deployment"
done
# Get django environment variables
celeryenv=`cat /var/app/rootfolder/myprject/.env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}
...
If anyone is still looking for the answer, this is how I did it.
Folder structure:
|-- .ebextensions/
| |-- celery.config # Option settings
| `-- cloudwatch.config # Other .ebextensions sections, for example files and container commands
`-- .platform/
|-- nginx/ # Proxy configuration
| |-- nginx.conf
| `-- conf.d/
| `-- custom.conf
|-- hooks/ # Application deployment hooks
| `-- postdeploy/
| `-- 99_celery_start.sh
Now add permissions for the 99_celery_start.sh script in celery.config:
01_celery_perm:
  command: "sudo chmod +x .platform/hooks/postdeploy/99_celery_start.sh"
02_dos2unix:
  command: "perl -i -pe's/\r$//;' .platform/hooks/postdeploy/99_celery_start.sh"
IMPORTANT: Make sure the script is saved with LF line endings instead of CRLF.

Remove old crons when redeploying to AWS Elastic Beanstalk

I'm not an expert on AWS and am trying to fool around with cron jobs. For testing, I had a sample script send me emails every minute. Now I want to change it to once every 10 minutes (*/10 * * * *). These are the container commands I tried, and none of them seems to work.
I am using a config file and a txt file to define the crons.
Config file contents (with various ideas I read from online sources)
container_commands:
  00_remove_old_cron_jobs0:
    command: "rm -fr /etc/cron.d/cron_job"
  01_remove_old_cron_jobs1:
    command: "sudo sed -i 's/empty stuff//g' /etc/cron.d/cron_job"
  02_remove_old_cron_jobs2:
    command: "crontab -r || exit 0"
  03_cron_job:
    command: "cat .ebextensions/cron_job.txt > /etc/cron.d/cron_job && chmod 644 /etc/cron.d/cron_job"
    leader_only: true
cron_job.txt file contents.
# The newline at the end of this file is extremely important. Cron won't run without it.
0 * * * * ec2-user /usr/bin/php -q /var/www/html/cron1.php > /dev/null
0 * * * * ec2-user /usr/bin/php -q /var/www/html/html/cron2.php > /dev/null
*/10 * * * * ec2-user /usr/bin/php -q /var/www/html/cronTestEmailer.php > /dev/null
The test emailer script keeps firing every minute instead of every 10 minutes, and I don't know how to make sure cron updates are reflected correctly.
You can achieve the same with the following .ebextensions config file.
files:
  "/etc/cron.d/mycron":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * root /usr/local/bin/myscript.sh

  "/usr/local/bin/myscript.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      date > /tmp/date
      # Your actual script content
      exit 0

commands:
  remove_old_cron:
    command: "rm -f /etc/cron.d/*.bak"
More details about the config file below.
files: creates the cron entry and a script with the name myscript.sh. If a file with the same name already exists, it first moves the old file to a .bak copy and then creates the file with the new contents.
commands: deletes all the .bak files.

No Output for Python Script Executed via Cron Job

I'm relatively new to running cron jobs in CentOS 6, and I can't seem to get this Python script to execute properly. I would like this script to execute and then email me the output. I have been receiving emails, but they're empty.
So far, in Crontab I've tried entering:
*/10 * * * * cd /home/local/MYCOMPANY/purrone/MyPythonScripts_Dev1 && /usr/bin/python ParserScript_CampusInsiders.py > /var/log/cron`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log 2>&1 ; mailx -s "Feedparser Output" my#email.com
and
*/10 * * * * /home/local/MYCOMPANY/purrone/MyPythonScripts_Dev1/ParserScript_CampusInsiders.py > /var/log/cron`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log 2>&1 ; mailx -s "Feedparser Output" my#email.com
I have run chmod +x on the Python script to make it executable, and the script has #!/usr/bin/env python as its first line. What am I doing wrong here?
The other problem might be that I shouldn't be using the log file? All I see at /var/log/cron when I open it with cat is entries like this, for example (no actual output from the script):
Jul 23 13:20:01 ent-mocdvsmg01 CROND[24681]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Jul 23 13:20:01 ent-mocdvsmg01 CROND[24684]: (MYJOB\purrone) CMD (/home/local/MYCOMPANY/purrone/MyPythonScripts_Dev1/ParserScript_CampusInsiders.py > /var/log/cron`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log 2>&1 ; mailx -s "Feedparser Output" my#email.com)
There is nothing going into your mailx input; it expects the message on stdin. Try running it outside of crontab as a test until it sends a valid email. You could test with:
% echo hello |mailx -s test my#email.com
Note that cron can email you the output of its run. You just need to add a line to the top of crontab like:
MAILTO = you#email.com
The solution was to omit the redirect (>) and instead edit the crontab like this:
*/15 * * * * /home/local/COMPANY/malvin/SilverChalice_CampusInsiders/SilverChalice_Parser.py | tee /home/local/COMPANY/malvin/SilverChalice_CampusInsiders`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log | mailx -s "SilverChalice CampusInsiders" my#email.com

cron not firing at regular intervals

I want to set up a cron job that fetches some stories from an API every 5 minutes and sends a mail if any new stories come up (I used a Django management command for this). When I run the management command by hand it sends me the correct info, but when I try to set up a cron job for it, it doesn't fire. Here is my crontab file:
vi /etc/crontab
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# m h dom mon dow user command
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
5 * * * * root /home/anurag/projects/virtualenvs/django6/bin/python /home/anurag/Documents/HNStories/hnstories/manage.py get_stories >> /home/anurag/cron_log.txt
Here are its permissions:
ls -l /etc/crontab
-rw-r--r-- 1 root root 884 Aug 17 20:20 /etc/crontab
Also, I am not able to see any warnings or errors in the syslog file:
cat /var/log/syslog | grep crontab
Aug 17 12:58:01 anurag cron[1257]: (*system*) RELOAD (/etc/crontab)
Aug 17 17:24:01 anurag cron[8534]: (*system*) RELOAD (/etc/crontab)
Aug 17 20:21:01 anurag cron[1139]: (*system*) RELOAD (/etc/crontab)
I also tried restarting cron and rebooting my computer, but I am not able to fix this issue.
The correct syntax for executing every 5 minutes would be
*/5 * * * * root /home/anurag/projects/virtualenvs/django6/bin/python /home/anurag/Documents/HNStories/hnstories/manage.py get_stories >> /home/anurag/cron_log.txt
Another reason why the command isn't executing might be a missing newline at the end of /etc/crontab
EDIT:
You might also want to look into django-extensions, which provides a command extension (runjobs) to run scheduled jobs; a sketch follows.
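For illustration only, a runjobs job might look roughly like this (the app name, file name, and the reuse of the get_stories command are assumptions; runjobs itself is still typically triggered from cron, e.g. python manage.py runjobs hourly):

# myapp/jobs/hourly/get_stories_job.py  (app and file names are assumptions;
# the jobs/ and jobs/hourly/ folders each need an __init__.py)
from django.core import management
from django_extensions.management.jobs import HourlyJob


class Job(HourlyJob):
    help = "Fetch new stories and send the mail"

    def execute(self):
        # Reuse the existing get_stories management command from the crontab above.
        management.call_command("get_stories")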