Can Celery run on Elastic Beanstalk?

I'm looking for a straightforward way to run Celery on an Elastic Beanstalk environment. Does this exist, or do I need to use SQS instead?
I have tried putting a line in the .config file without good results. This is my .config file:
container_commands:
  01_syncdb:
    command: "django-admin.py syncdb --noinput"
    leader_only: true
  02_collectstatic:
    command: "./manage.py collectstatic --noinput"
  03_migrate:
    command: "./manage.py migrate --noinput"
  04_start_celery:
    command: "./manage.py celery worker &"
When I SSH into the EC2 server and run ps -ef | grep celery, it shows that Celery isn't running.
Any help appreciated. Thanks!

Celery doesn't show up because container commands run before the web server is restarted during deployment. Basically, your Celery workers get wiped out when the machine restarts.
I would suggest starting Celery using post-deployment hooks.
See http://junkheap.net/blog/2013/05/20/elastic-beanstalk-post-deployment-scripts/ and How do you run a worker with AWS Elastic Beanstalk?
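For reference, a post-deployment hook along the lines of those links might look like the sketch below. It assumes the classic Amazon Linux 1 layout (app in /opt/python/current/app, virtualenv in /opt/python/run/venv); the hook filename and the worker invocation are illustrative, so adjust them to your project:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_start_celery.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # Runs after the new version is deployed and the web server has
      # restarted, so the worker is not wiped out by the deploy itself.
      cd /opt/python/current/app
      source /opt/python/run/venv/bin/activate
      # Illustrative invocation; --detach keeps the worker running
      # after this hook script exits.
      python manage.py celery worker --detach --loglevel=INFO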

Related

Starting RabbitMQ and Celery for Django when launching Amazon Linux 2 instance on Elastic Beanstalk

I have been trying to set up Celery to run tasks for my Django application on Amazon Linux 2. Everything worked on AL1, but things have changed.
When I SSH into the instance I can get everything running properly; however, the commands run on deployment do not work properly.
I have tried this in my .platform/hooks/postdeploy directory:
How to upgrade Django Celery App from Elastic Beanstalk Amazon Linux 1 to Amazon Linux 2
However, that does not seem to work. I have container commands to install epel, erlang, and rabbitmq as the broker; those seem to work.
Under that answer, @Jota suggests:
"No, ideally it should go in the Procfile file in the root of your repository. Just write celery_worker: celery worker -A my_django_app.settings.celery.app --concurrency=1 --loglevel=INFO -n worker.%%h. Super simple."
However, would I include the entire script in the Procfile or just the line:
celery_worker: celery worker -A my_django_app.settings.celery.app --concurrency=1 --loglevel=INFO -n worker.%%h
This seems to suggest it would be just the command:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html
Then where would the script live, if not in the Procfile with the above line?
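For reference, a minimal Procfile along those lines might look like this; the celery_worker line is taken from the quoted comment, while the web line is an assumption based on a typical Django/gunicorn setup (adjust the WSGI path to your project):

web: gunicorn --bind :8000 my_django_app.wsgi:application
celery_worker: celery worker -A my_django_app.settings.celery.app --concurrency=1 --loglevel=INFO -n worker.%%h

The Procfile only declares long-running processes; any longer setup script (installing the broker, etc.) would stay in container commands or platform hooks.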

How to run the collectstatic script after deployment to amazon elastic beanstalk?

I have a Django app deployed on AWS Elastic Beanstalk. When I deploy, I need to run the migrate and collectstatic scripts.
I have created 01_build.config in the .ebextensions directory, and this is its content:
commands:
  migrate:
    command: "python manage.py migrate"
    ignoreErrors: true
  collectstatic:
    command: "python manage.py collectstatic --no-input"
    ignoreErrors: true
but still, it is not running these scripts.
Sounds like you want to run these scripts after the app has been set up, in which case you need to use the key container_commands rather than commands. From the docs:
The commands run before the application and web server are set up and the application version file is extracted.
and
Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed. Non-container commands and other customization operations are performed prior to the application source code being extracted.
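For example, the config from the question moved under container_commands might look like the following sketch. Two changes beyond the key rename are assumptions worth considering: ignoreErrors is dropped so a failed migration aborts the deploy, and leader_only is added so the migration runs on only one instance:

container_commands:
  01_migrate:
    command: "python manage.py migrate"
    leader_only: true
  02_collectstatic:
    command: "python manage.py collectstatic --no-input"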

Beanstalk not running makemigrations

I am trying to deploy my Django server to Amazon through Beanstalk, and so far it's been OK, except that I've made a few changes to my models and the deployed instance on AWS is not updating accordingly.
I have followed the guide from Amazon and created a file named db-migrate.config with this content:
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: myAppName.settings
but apparently it isn't working. I tried to access my Django instance on AWS with
eb ssh myAppEnv
but once connected I saw nothing and couldn't find the code for my Django server anywhere, so I am unable to debug or manually run makemigrations.
Can anyone help me with this?
The only way I was able to fix this was by specifying the exact applications I wanted to run makemigrations and migrate for.
02_migrateapps:
  command: "source /opt/python/run/venv/bin/activate && python3 manage.py makemigrations organisations shows media exhibitors && python3 manage.py migrate --noinput"
  leader_only: true
It's a real pain, but each time I create a new app I'll need to add it to the makemigrations list.
Hope this helps.

Elastic Beanstalk Django migrate issue

I'm deploying a Django app to AWS EB using the CLI and noticed that EB doesn't see new migration files on the first deploy, so when I have new migrations I need to deploy twice. I looked at the logs, and indeed the migrations were not found the first time but were found the second time.
Here is my code for migrations:
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "python ras-server/manage.py collectstatic --noinput"
Do I need to change the order of the commands? Also, I think the issue could be with Jenkins, as I deploy from Jenkins. Any suggestions?
The issue was with Jenkins: for some reason, when I deployed using an execute-shell step, the migrations were not found the first time.
The solution is to use the Elastic Beanstalk Deployment plugin. Deploying with the plugin also takes less time.
Same error for me. In my case, I forgot to include the app name in the migration. Try including the app name (here, exams):
01_makemigrations:
  command: "python manage.py makemigrations exams --noinput"
  leader_only: true
02_migrate:
  command: "python manage.py migrate exams --noinput"
  leader_only: true

Django migrations with Docker on AWS Elastic Beanstalk

I have a Django app running inside a single Docker container on AWS Elastic Beanstalk. I cannot get it to run migrations properly: it always sees the old Docker image and tries to run migrations from that (but it doesn't have the latest files).
I package an .ebextensions directory with my EB source bundle (a zip containing a Dockerrun.aws.json file and the .ebextensions dir). It has a setup.config file that looks like this:
container_commands:
  01_migrate:
    command: "CONTAINER=`docker ps -a --no-trunc | grep aws_beanstalk | cut -d' ' -f1 | head -1` && docker exec $CONTAINER python3 manage.py migrate"
    leader_only: true
This is partially modeled on the comments on this SO question.
I have verified that it works if I simply re-deploy the app a second time, since by then the previously running image already has the updated migrations file.
Does anyone know how to access the latest docker image or latest running container in an .ebextensions script?
Based on the AWS documentation on Customizing Software on Linux Servers, container_commands are executed before your app is deployed:
You can use the container_commands key to execute commands for your container. The commands in container_commands are processed in alphabetical order by name. They run after the application and web server have been set up and the application version file has been extracted, but before the application version is deployed. They also have access to environment variables such as your AWS security credentials. Additionally, you can use leader_only. One instance is chosen to be the leader in an Auto Scaling group. If the leader_only value is set to true, the command runs only on the instance that is marked as the leader.
Also take a look at my answer here. It runs commands at different app-deployment stages and reports their results.
So, the solution to your problem might be to create a post-app-deployment hook.
.ebextensions/00_post_migrate.config

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/10_post_migrate.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      if [ -f /tmp/leader_only ]
      then
        rm /tmp/leader_only
        docker exec `docker ps --no-trunc -q | head -n 1` python3 manage.py migrate
      fi

container_commands:
  01_migrate:
    command: "touch /tmp/leader_only"
    leader_only: true
I am using another approach: run a container based on the newly built image, pass in the environment variables from Elastic Beanstalk, and run the custom command in that container. When the command is done, the container is removed and the deployment proceeds.
This is the script I have put in .ebextensions/scripts/container_command.sh (make sure you replace everything within <>):
#!/bin/bash
COMMAND=$1
EB_CONFIG_DOCKER_IMAGE_STAGING=$(/opt/elasticbeanstalk/bin/get-config container -k <environment_name>_image)
EB_SUPPORT_FILES=$(/opt/elasticbeanstalk/bin/get-config container -k support_files_dir)

# build --env arguments for docker from env var settings
EB_CONFIG_DOCKER_ENV_ARGS=()
while read -r ENV_VAR; do
  EB_CONFIG_DOCKER_ENV_ARGS+=(--env "${ENV_VAR}")
done < <($EB_SUPPORT_FILES/generate_env)

# start a throwaway container from the newly built image, run the
# command inside it, then clean it up
docker run --name=shopblender_pre_deploy -d \
  "${EB_CONFIG_DOCKER_ENV_ARGS[@]}" \
  "${EB_CONFIG_DOCKER_IMAGE_STAGING}"
docker exec shopblender_pre_deploy ${COMMAND}

# clean up
docker stop shopblender_pre_deploy
docker rm shopblender_pre_deploy
Now you can use this script to execute any custom command in the container that will be deployed later.
Something like this .ebextensions/container_commands.config:
container_commands:
  01-command:
    command: bash .ebextensions/scripts/container_command.sh "php app/console doctrine:schema:update --force --no-interaction" &>> /var/log/database.log
    leader_only: true
  02-command:
    command: bash .ebextensions/scripts/container_command.sh "php app/console fos:elastica:reset --no-interaction" &>> /var/log/database.log
    leader_only: true
  03-command:
    command: bash .ebextensions/scripts/container_command.sh "php app/console doctrine:fixtures:load --no-interaction" &>> /var/log/database.log
    leader_only: true
This way you also do not need to worry about which container was started most recently, which is a problem with the solution described above.