Run Django commands over SSH on Elastic Beanstalk -> missing environment variables

So this has been a long-running problem for me and I'd love to fix it - I also think it will help a lot of others. I'd love to run Django commands after SSHing into my Elastic Beanstalk EC2 instance, e.g.:
python manage.py dumpdata
The reason this is not possible is the missing environment variables. They are present when the server boots up but are unset as soon as the server is running (EB creates a virtual env within the EC2 instance and deletes the variables from there).
I've recently figured out that there is a prebuilt script to retrieve the env variables on the EC2 instances:
/opt/elasticbeanstalk/bin/get-config environment
This will return a stringified object like this:
{"AWS_STATIC_ASSETS_SECRET_ACCESS_KEY":"xxx-xxx-xxx","DJANGO_KEY":"xxx-xxx-xxx","DJANGO_SETTINGS_MODULE":"xx.xx.xx","PYTHONPATH":"/var/app/venv/staging-LQM1lest/bin","RDS_DB_NAME":"xxxxxxx":"xxxxxx","RDS_PASSWORD":"xxxxxx"}
This is where I'm stuck currently. I think I would need a script that takes this object, parses it, and sets the key/value pairs as environment variables, and I would need to be able to run this script from the EC2 instance.
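Something along these lines is what I imagine (untested sketch, assuming jq is available on the instance, which may not be the case):

# Parse the JSON from get-config with jq and export each key/value pair,
# shell-quoting the values via @sh.
eval "$(/opt/elasticbeanstalk/bin/get-config environment \
  | jq -r 'to_entries[] | "export \(.key)=\(.value|@sh)"')"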
Or a command to execute from .ebextensions that would get the variables and set them.
I am absolutely unsure how to proceed at this point. Am I overlooking something obvious here? Has someone already written a script for this? Is this even the right approach?
I'd love your help!

Your env variables are stored in /opt/elasticbeanstalk/deployment/env
Thus to export them, you can do the following (must be root to access the file):
export $(cat /opt/elasticbeanstalk/deployment/env | xargs)
Once you execute the command you can confirm the presence of your env variables using:
env
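For example, to confirm one specific variable from the question above:
env | grep DJANGO_SETTINGS_MODULE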
To use this in your .ebextensions, you can try:
container_commands:
  10_dumpdata:
    command: |
      export $(cat /opt/elasticbeanstalk/deployment/env | xargs)
      source $PYTHONPATH/activate
      python ./manage.py dumpdata

Related

How to login to W&B by using ENTRYPOINT

I want to know how to use ENTRYPOINT in a Dockerfile to run a shell script that logs me in.
There’s no need for a custom entrypoint. Simply set the WANDB_API_KEY environment variable in a Kubernetes spec or via the -e flag passed to docker run.
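For example, a docker run invocation might look like this (image name and key are made up):
docker run -e WANDB_API_KEY=your-api-key my-training-image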

Elastic Beanstalk deleting generated files on config changes

On Elastic Beanstalk, with an AWS Linux 2 based environment, updating the Environment Properties (i.e. environment variables) of an environment causes all generated files to be deleted. It also doesn't run container_commands as part of this update.
So, for example, I have a Django project with collectstatic in the container commands:
05_collectstatic:
  command: |
    source $PYTHONPATH/activate
    python manage.py collectstatic --noinput --ignore *.scss
This collects static files to a folder called staticfiles as part of deploy. But when I do an environment variable update, staticfiles is deleted. This causes all static files on the application to be broken until I re-deploy, which is extremely undesirable.
This behavior did not occur on AWS Linux 1 based environments. The difference appears to be that AWS Linux 2 based environments replace the /var/app/current folder during environment variable changes, where AWS Linux 1 based environments did not do this.
How do I fix this?
Research
I can verify that the container commands are not being run during an environment variable change by monitoring /var/log/cfn-init.log; no new entries are added to this log.
This happens with both rolling update type "disabled" and "immutable".
This happens even if I convert the environment command to be a platform hook, despite the fact that hooks are listed as running when environment properties are updated.
It seems to me like there are two potential solutions, but I don't know of an Elastic Beanstalk setting for either:
Have environment variable changes leave /var/app/current rather than replacing it.
Have environment variable changes run container commands.
The Elastic Beanstalk docs on container commands say "Leader-only container commands are only executed during environment creation and deployments, while other commands and server customization operations are performed every time an instance is provisioned or updated." Is this a bug in Elastic Beanstalk?
Related question: EB: Trigger container commands / deploy scripts on configuration change
The solution is to use a Configuration deployment platform hook for any commands that change the files in the deployment directory. Note that this is different from an Application deployment platform hook.
Using the example of the collectstatic command, the best thing to do is to move it from a container command to a pair of hooks, one for standard deployments and one for configuration changes.
To do this, remove the collectstatic container command. Then, make two identical files:
.platform/confighooks/predeploy/predeploy.sh
.platform/hooks/predeploy/predeploy.sh
Each file should have the following code:
#!/bin/bash
source $PYTHONPATH/activate
python manage.py collectstatic --noinput --ignore *.scss
You need two seemingly redundant files because different hooks have different trigger conditions. Scripts in hooks run when you deploy the app whereas scripts in confighooks run when you change the configuration of the app.
Make sure to make both of these files executable according to git or else you will run into a "permission denied" error when you try to deploy. You can check if they are executable via git ls-files -s .platform; you should see 100755 before any shell files in the output of this command. If you see 100644 before any of your shell files, run git add --chmod=+x -- .platform/*/*/*.sh to make them executable.
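Put together, a sketch of that sequence (commit message made up):
git add --chmod=+x -- .platform/*/*/*.sh
git ls-files -s .platform   # should now show 100755 for each .sh file
git commit -m "Make platform hooks executable"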

Where are my environment variables in Elastic Beanstalk for AL2?

I'm using Elastic Beanstalk to deploy a Django app. I'd like to SSH into the EC2 instance to execute some shell commands, but the environment variables don't seem to be there. I specified them via the AWS GUI (Configuration -> Environment properties) and they seem to work during the boot-up of my app.
I tried activating and deactivating the virtual env via:
source /var/app/venv/*/bin/activate
Is there some environment (or script I can run) to access an environment with all the properties set? Otherwise, I'm hardly able to run any command like python3 manage.py ... since there is no settings module configured (I know how to specify it manually but my app needs around 7 variables to work).
During deployment, the environment properties are readily available to your .platform hook scripts.
After deployment, e.g. when using eb ssh, you need to load the environment properties manually.
One option is to use the EB get-config tool. The environment properties can be accessed either individually (using the -k option), or as a JSON or YAML object with key-value pairs.
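To read a single property, for instance (key taken from the question above):
/opt/elasticbeanstalk/bin/get-config environment -k DJANGO_SETTINGS_MODULE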
For example, one way to export all environment properties would be:
export $(/opt/elasticbeanstalk/bin/get-config --output YAML environment |
sed -r 's/: /=/' | xargs)
Here the get-config part returns all environment properties as YAML, the sed part replaces the ': ' in the YAML output with '=', and the xargs part fixes quoted numbers.
Note this does not require sudo.
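For illustration, a hypothetical property line in the YAML output such as
DJANGO_SETTINGS_MODULE: mysite.settings
becomes
DJANGO_SETTINGS_MODULE=mysite.settings
after the sed substitution, and export then sets it in the current shell.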
Alternatively, you could refer to this AWS knowledge center post:
Important: On Amazon Linux 2, all environment properties are centralized into a single file called /opt/elasticbeanstalk/deployment/env. You must use this file during Elastic Beanstalk's application deployment process only. ...
The post describes how to make a copy of the env file during deployment, using .platform hooks, and how to set permissions so you can access the file later.
You can also perform similar steps manually, using SSH. Once you have the copy set up, with the proper permissions, you can source it.
Beware:
Note: Environment properties with spaces or special characters are interpreted by the Bash shell and can result in a different value.
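With those caveats, a minimal sketch of such a hook along the lines of the knowledge center post (file name and permissions are illustrative, not taken from the post):

#!/bin/bash
# .platform/hooks/postdeploy/copy_env.sh (hypothetical path)
# Copy the env file so it can be read after deployment, and relax its permissions.
cp /opt/elasticbeanstalk/deployment/env /opt/elasticbeanstalk/deployment/custom_env_file
chmod 644 /opt/elasticbeanstalk/deployment/custom_env_file

After deployment, over SSH, you could then run:
export $(cat /opt/elasticbeanstalk/deployment/custom_env_file | xargs)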
Try running the command /opt/elasticbeanstalk/bin/get-config environment after you ssh into the EC2 instance.
If you are trying to access the environment variables in an Elastic Beanstalk script, use the -k flag to fetch a single property:
$(/opt/elasticbeanstalk/bin/get-config environment -k ENVURL)
Here ENVURL is the name of the environment property. (For comparison, inside an .ebextensions template, { "Ref" : "AWSEBEnvironmentName" } resolves to the environment's name.)

Upload .env environment variables to elastic beanstalk

As far as I know, the only ways to set environment variables on Elastic Beanstalk are either:
The AWS Online Console.
The eb setenv command.
I have a .env file in my project that contains all of my environment variables (over 100 variables), and I'm looking for a way to push them all to the EB environment at the same time.
First off, you should note that Elastic Beanstalk has a limit of fifty (50) environment variables per environment.
Now, if you've got a bunch of environment variables set in a .env file, an easy way to set them en masse is to just dump them all into the eb setenv command like this...
eb setenv `cat .env | sed '/^#/ d' | sed '/^$/ d'`
This will set them all in one shot and save you spending all night on the AWS console page. The sed commands remove commented lines and blank lines.
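For illustration, given a hypothetical .env such as:

# this comment line is stripped
DEBUG=false

SECRET_KEY=abc123

the backticks expand the command to eb setenv DEBUG=false SECRET_KEY=abc123.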
From the beginning, I personally had a deploy.sh script which ran npm install, npm build, etc.
So, in a .ebextensions/example.config file, I put the following:
option_settings:
  aws:elasticbeanstalk:environment:process:default:
    HealthCheckPath: /health
  # This must be at the end of the file
  aws:elasticbeanstalk:application:environment:
    # Will be populated by the deployment script
And in my script I have something like this:
sed -r 's/^([_[:alnum:]]+)=/ \1: /g' .env >> dist/.ebextensions/example.config
This sed command replaces ^[_\w]+= with $1: so that all my environment variables are passed into the config file as option settings (see the illustration after the list below). Thanks to this, I don't have to:
Use eb setenv
Wait for the update to process
Use eb deploy
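For illustration, a hypothetical .env line such as

API_URL=https://example.com

is rewritten by the sed command to

  API_URL: https://example.com

which, appended under aws:elasticbeanstalk:application:environment:, becomes a valid option setting.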

cloudformation composer install

So I am using CloudFormation for my AWS setup, and I am trying to run Composer, but for some reason no matter what command I put in my userdata section I always get an error. This is my error:
php /usr/local/bin/composer.phar create-project composer/satis /var/www/satis --stability=dev
[RuntimeException]
The HOME or COMPOSER_HOME environment variable must be set for composer to run correctly
This is my code within the userdata section:
"#composer\n",
"curl -sS https://getcomposer.org/installer | php\n",
"mv composer.phar /usr/local/bin/composer.phar\n",
"#satis\n",
"php /usr/local/bin/composer.phar create-project composer/satis /var/www/satis --stability=dev\n",
Does anyone have any ideas why this might not work and what I should be doing?
Composer is looking for the location of the .composer directory. Export the HOME or COMPOSER_HOME env variable, e.g.: HOME=/root php /usr/local/bin/composer.phar create-project composer/satis /var/www/satis --stability=dev, and it will then work fine.
I had a similar issue with Amazon Linux AMI 2. The log showed "All settings correct for using Composer. The HOME or COMPOSER_HOME environment variable must be set for composer to run correctly", but Composer was not installed at all. Below is the way to fix it; it might save somebody two or three hours!
sudo curl -sS https://getcomposer.org/installer | sudo php
sudo mv composer.phar /usr/bin/composer
sudo chmod +x /usr/bin/composer
export COMPOSER_HOME=/root
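A quick way to verify the install afterwards (exact output will vary):
composer --version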
Agree with Ntwobike's answer.
When launching AWS EC2 instances I was installing Composer by running an Ansible playbook during the user data script run. (The user data script is called by cloud-init during the instance build process.)
For some reason at this point in the build the $HOME environment variable is not set. So I needed to add 'export HOME=/root' - e.g.
# These need to be set to enable the composer installer to run. It is probably due to an issue
# with the $HOME variable not yet being set at this point in the instance creation.
export HOME=/root
ansible-playbook --extra-vars "target=localhost" playbooks/debian-9/drush.yml