Where are my environment variables in Elastic Beanstalk for AL2? - django

I'm using elastic beanstalk to deploy a Django app. I'd like to SSH on the EC2 instance to execute some shell commands but the environment variables don't seem to be there. I specified them via the AWS GUI (configuration -> environment properties) and they seem to work during the boot-up of my app.
I tried activating and deactivating the virtual env via:
source /var/app/venv/*/bin/activate
Is there some environment (or script I can run) to access an environment with all the properties set? Otherwise, I'm hardly able to run any command like python3 manage.py ... since there is no settings module configured (I know how to specify it manually but my app needs around 7 variables to work).

During deployment, the environment properties are readily available to your .platform hook scripts.
After deployment, e.g. when using eb ssh, you need to load the environment properties manually.
One option is to use the EB get-config tool. The environment properties can be accessed either individually (using the -k option), or as a JSON or YAML object with key-value pairs.
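An individual lookup looks like this (the property name here is only an illustration):
/opt/elasticbeanstalk/bin/get-config environment -k DJANGO_SETTINGS_MODULE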
For example, one way to export all environment properties would be:
export $(/opt/elasticbeanstalk/bin/get-config --output YAML environment |
sed -r 's/: /=/' | xargs)
Here the get-config part returns all environment properties as YAML, the sed part replaces the ': ' separator with '=', and the xargs part strips the quotes from quoted values (such as numbers).
Note this does not require sudo.
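Once exported, the properties are available in the current shell. For example (assuming a DJANGO_SETTINGS_MODULE property was set in the EB console, the virtual env is activated, and you are in the app directory):
echo "$DJANGO_SETTINGS_MODULE"   # prints the configured value
python3 manage.py check          # manage.py commands can now find the settings module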
Alternatively, you could refer to this AWS knowledge center post:
Important: On Amazon Linux 2, all environment properties are centralized into a single file called /opt/elasticbeanstalk/deployment/env. You must use this file during Elastic Beanstalk's application deployment process only. ...
The post describes how to make a copy of the env file during deployment, using .platform hooks, and how to set permissions so you can access the file later.
You can also perform similar steps manually, using SSH. Once you have the copy set up, with the proper permissions, you can source it.
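As a rough sketch (file names and permissions here are illustrative; the knowledge center post has the authoritative version), such a hook could look like this, placed under both .platform/hooks/ and .platform/confighooks/:
#!/bin/bash
# copy the env file to a location you can still read after deployment
cp /opt/elasticbeanstalk/deployment/env /opt/elasticbeanstalk/deployment/custom_env_file
chmod 644 /opt/elasticbeanstalk/deployment/custom_env_file
After SSH-ing in, you could then source that copy, e.g. with set -o allexport; source /opt/elasticbeanstalk/deployment/custom_env_file; set +o allexport.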
Beware:
Note: Environment properties with spaces or special characters are interpreted by the Bash shell and can result in a different value.

Try running the command /opt/elasticbeanstalk/bin/get-config environment after you ssh into the EC2 instance.

If you are trying to access the environment variables from within an Elastic Beanstalk script, use this:
$(/opt/elasticbeanstalk/bin/get-config environment -k ENVURL)
Here ENVURL is the name of one of your environment properties; within .ebextensions you can also reference built-in values such as { "Ref" : "AWSEBEnvironmentName" }.

Related

Elastic Beanstalk deleting generated files on config changes

On Elastic Beanstalk, with an AWS Linux 2 based environment, updating the Environment Properties (i.e. environment variables) of an environment causes all generated files to be deleted. It also doesn't run container_commands as part of this update.
So, for example, I have a Django project with collectstatic in the container commands:
05_collectstatic:
  command: |
    source $PYTHONPATH/activate
    python manage.py collectstatic --noinput --ignore *.scss
This collects static files to a folder called staticfiles as part of deploy. But when I do an environment variable update, staticfiles is deleted. This causes all static files on the application to be broken until I re-deploy, which is extremely undesirable.
This behavior did not occur on AWS Linux 1 based environments. The difference appears to be that AWS Linux 2 based environments replace the /var/app/current folder during environment variable changes, where AWS Linux 1 based environments did not do this.
How do I fix this?
Research
I can verify that the container commands are not being run during an environment variable change by monitoring /var/log/cfn-init.log; no new entries are added to this log.
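(A simple way to watch for new entries while triggering the change, just as a sketch:
tail -f /var/log/cfn-init.log
)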
This happens with both rolling update type "disabled" and "immutable".
This happens even if I convert the environment command to be a platform hook, despite the fact that hooks are listed as running when environment properties are updated.
It seems to me like there are two potential solutions, but I don't know of an Elastic Beanstalk setting for either:
Have environment variable changes leave /var/app/current rather than replacing it.
Have environment variable changes run container commands.
The Elastic Beanstalk docs on container commands say "Leader-only container commands are only executed during environment creation and deployments, while other commands and server customization operations are performed every time an instance is provisioned or updated." Is this a bug in Elastic Beanstalk?
Related question: EB: Trigger container commands / deploy scripts on configuration change
The solution is to use a Configuration deployment platform hook for any commands that change the files in the deployment directory. Note that this is different from an Application deployment platform hook.
Using the example of the collectstatic command, the best thing to do is to move it from a container command to a pair of hooks, one for standard deployments and one for configuration changes.
To do this, remove the collectstatic container command. Then, make two identical files:
.platform/confighooks/predeploy/predeploy.sh
.platform/hooks/predeploy/predeploy.sh
Each file should have the following code:
#!/bin/bash
source $PYTHONPATH/activate
python manage.py collectstatic --noinput --ignore *.scss
You need two seemingly redundant files because different hooks have different trigger conditions. Scripts in hooks run when you deploy the app whereas scripts in confighooks run when you change the configuration of the app.
Make sure to make both of these files executable according to git or else you will run into a "permission denied" error when you try to deploy. You can check if they are executable via git ls-files -s .platform; you should see 100755 before any shell files in the output of this command. If you see 100644 before any of your shell files, run git add --chmod=+x -- .platform/*/*/*.sh to make them executable.

How to clone an AWS EB environment across platform branches

Background
Our AWS Elastic Beanstalk environment, running the latest version of the pre-configured "Python 3.7 on 64-bit Amazon Linux 2" platform branch, has a lot of custom configuration and environment properties.
Now we would like to switch this environment to the "Python 3.8 on 64-bit Amazon Linux 2" platform branch.
Basically, the goal is to clone the environment, keeping the current configuration (other than platform branch and version) and environment properties.
Problem
Unfortunately, when cloning, it is not possible to switch between different platform branches (we can only switch between platform versions within the same platform branch).
The documentation suggests that a blue/green deployment is required here. However, a blue/green deployment involves creating a new environment from scratch, so we would still need some other way to copy our configuration settings and environment properties.
Question
What would be the recommended way to copy the configuration settings and/or environment properties from the original environment into a newly created environment?
I suppose we could use eb config to download the original configuration, modify the environment name, platform branch and version, and so on, and then use eb config --update on the new environment. However, that feels like a hack.
Summary
save current config: eb config save <env name>
use a text editor to modify the platform branch in the saved config file
create new environment based on modified config file: eb create --cfg <config name> (add --sample to use the sample application)
if necessary, delete local config files
if necessary, use eb printenv and eb setenv to copy environment properties
EDIT: For some reason the saved config does not include all security group settings, so it may be necessary to check those manually, using the EB console (configuration->instances).
Background
AWS support have confirmed that using eb config is the way to go, and they referred to the online documentation for details.
Unfortunately, the documentation for the eb cli does not provide all the answers.
The following is based on my own adventures using the latest version of the eb cli (3.20.2) with botocore 1.21.50, and documentation at the time of writing (Sep 30, 2021). Note there's a documentation repo on github but it was last updated six months ago and does not match the latest online docs...
eb config
The eb config docs suggest that environment properties are not shown: indeed, if you call eb config my-env or eb config my-env --display, environment properties are not included in the output.
However, this does not hold for eb config save: YAML files created using eb config save actually do include environment properties*.
*Beware, if your environment properties include secrets (e.g. passwords), these also end up in your saved configs, so make sure you don't commit those to version control.
Moreover, it is currently also possible to set environment properties using eb config --update.
This implies we should be able to "copy" both configuration settings and environment properties in one go.
EDIT: After some testing it turns out eb config save does not always get the complete set of environment properties: some properties may be skipped. Not yet sure why... Step 5 below might help in those cases.
Walk-through
Not sure if this is the best way to do it, but here's what seems to work for me:
Suppose we have an existing EB environment called py37-env with lots of custom configuration and properties, running the Python 3.7 platform branch.
The simplest way to "clone" this would be as follows:
Step 1: download the existing configuration
Download the configuration for the existing environment:
eb config save py37-env
By default, the config file will end up in our project directory as .elasticbeanstalk/saved_configs/py37-env-sc.cfg.yml.
The saved config file could look like this (just an example, also see environment manifest):
EnvironmentConfigurationMetadata:
  Description: Configuration created from the EB CLI using "eb config save".
  DateCreated: '1632989892000'
  DateModified: '1632989892000'
Platform:
  PlatformArn: arn:aws:elasticbeanstalk:eu-west-1::platform/Python 3.7 running on 64bit Amazon Linux 2/3.3.5
OptionSettings:
  aws:elasticbeanstalk:application:environment:
    MY_ENVIRONMENT_PROPERTY: myvalue
  aws:elasticbeanstalk:command:
    BatchSize: '30'
    BatchSizeType: Percentage
  aws:elb:policies:
    ConnectionDrainingEnabled: true
  aws:elb:loadbalancer:
    CrossZone: true
  aws:elasticbeanstalk:environment:
    ServiceRole: aws-elasticbeanstalk-service-role
  aws:elasticbeanstalk:healthreporting:system:
    SystemType: enhanced
  aws:autoscaling:launchconfiguration:
    IamInstanceProfile: aws-elasticbeanstalk-ec2-role
    EC2KeyName: my-key
  aws:autoscaling:updatepolicy:rollingupdate:
    RollingUpdateType: Health
    RollingUpdateEnabled: true
EnvironmentTier:
  Type: Standard
  Name: WebServer
AWSConfigurationTemplateVersion: 1.1.0.0
Also see the list of available configuration options in the documentation.
Step 2: modify the saved configuration
We are only interested in the Platform, so it is sufficient here to replace 3.7 by 3.8 in the PlatformArn value.
If necessary, you can use e.g. eb platform list to get an overview of valid platform names.
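For example, after editing, the Platform section might look like this (the exact platform version suffix depends on what is currently available):
Platform:
  PlatformArn: arn:aws:elasticbeanstalk:eu-west-1::platform/Python 3.8 running on 64bit Amazon Linux 2/3.3.5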
Step 3: create a new environment based on the modified config file
eb create --cfg py37-env-sc
This will deploy the most recent application version. Use --version <my version> to deploy a specific version, or use --sample to deploy the sample application, as described in the docs.
This will automatically look for files in the default saved config folder, .elasticbeanstalk/saved_configs/.
If you get a ServiceError or InvalidParameterValueError at this point, make sure only to pass in the name of the file, i.e. without the file extension .cfg.yml and without the folders.
Step 4: clean up local saved configuration file
Just in case you have any secrets stored in the environment properties.
Step 5: alternative method for copying environment properties
If environment properties are not included in the saved config files, or if some of them are missing, here's an alternative way to copy them (using bash).
This might not be the most efficient implementation, but I think it serves to illustrate the approach. Error handling was omitted, for clarity.
source_env="py37-env" # or "$1"
target_env="py38-env" # or "$2"
# get the properties from the source environment
source_env_properties="$(eb printenv "$source_env")"
# format the output so it can be used with `eb setenv`
mapfile -t arg_array < <(echo "$source_env_properties" | grep "=" | sed -e 's/ =/=/g' -e 's/= /=/g' -e 's/^ *//g')
# copy the properties to the target environment
eb setenv -e "$target_env" "${arg_array[@]}"
This has the advantage that it does not store any secrets in local files.
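If you save this snippet as, say, copy-env-props.sh (a hypothetical name) and switch to the "$1"/"$2" variants, it can be run as:
chmod +x copy-env-props.sh
./copy-env-props.sh py37-env py38-env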

Run Django commands on Elastic Beanstalk SSH -> Missing environment variables

So this has been a long-running problem for me and I'd love to fix it - I also think it will help a lot of others. I'd love to run Django commands after ssh'ing into my Elastic Beanstalk EC2 instance, e.g.
python manage.py dumpdata
The reason why this is not possible is the missing environment variables. They are present when the server boots up but are unset once the server is running (EB creates a virtual env within the EC2 instance and removes the variables from it).
I've recently figured out that there is a prebuilt script to retrieve the env variables on the EC2 instances:
/opt/elasticbeanstalk/bin/get-config environment
This will return a stringified object like this:
{"AWS_STATIC_ASSETS_SECRET_ACCESS_KEY":"xxx-xxx-xxx","DJANGO_KEY":"xxx-xxx-xxx","DJANGO_SETTINGS_MODULE":"xx.xx.xx","PYTHONPATH":"/var/app/venv/staging-LQM1lest/bin","RDS_DB_NAME":"xxxxxxx":"xxxxxx","RDS_PASSWORD":"xxxxxx"}
This is where I'm stuck currently. I think I would need a script that takes this object, parses it, and sets the keys/values as environment variables, and I would need to be able to run this script from the EC2 instance. Or a command to execute from .ebextensions that gets the variables and sets them.
I am absolutely unsure how to proceed at this point. Am I overlooking something obvious here? Is there someone who has written a script for this already? Is this even the right approach?
I'd love your help!
Your env variables are stored in /opt/elasticbeanstalk/deployment/env
Thus to export them, you can do the following (must be root to access the file):
export $(cat /opt/elasticbeanstalk/deployment/env | xargs)
Once you execute the command you can confirm the presence of your env variables using:
env
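Putting this together, an interactive session after eb ssh might look like this (assuming the app lives in /var/app/current and uses the standard AL2 virtual env layout):
sudo -s                                              # the env file is only readable by root
cd /var/app/current
export $(cat /opt/elasticbeanstalk/deployment/env | xargs)
source /var/app/venv/*/bin/activate
python3 manage.py dumpdata                           # manage.py commands now see the variables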
To use this in your .ebextensions, you can try:
container_commands:
  10_dumpdata:
    command: |
      export $(cat /opt/elasticbeanstalk/deployment/env | xargs)
      source $PYTHONPATH/activate
      python ./manage.py dumpdata

Environment variables with AWS SSM Run Command

I am using AWS SSM Run Command with the AWS-RunShellScript document to run a script on an AWS Linux 1 instance. Part of the script includes using an environment variable. When I run the script myself, everything is fine. But when I run the script with SSM, it can't see the environment variable.
This variable needs to be passed to a Python script. I had originally been trying os.environ['VARIABLE'] to no effect.
I know that AWS SSM uses root privileges and so I have put a line exporting the variable in the root ~/.bashrc file, yet it still can not see the variable. The root user can see it when I run it myself.
Is it not possible for AWS SSM to use environment variables, or am I not exporting it correctly? If it is not possible, I'll try using AWS KMS instead to store my variable.
~/.bashrc
export VARIABLE="VALUE"
script.sh
"$VARIABLE"
Security is important, hence why I don't want to just store the variable in the script.
SSM does not open an actual SSH session, so passing environment variables won't work. It's essentially a daemon running on the box that takes your requests and processes them. It's a very basic product: it doesn't support any of the standard features that come with SSH, such as SCP, port forwarding, tunneling, passing of env variables, etc. An alternative way of passing a value you need to a script would be to store it in AWS Systems Manager Parameter Store, and have your script pull the variable from the store.
You'll need to update your instance role permissions to include ssm:GetParameters so that the script you run can access the stored value.
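A minimal sketch of that approach, assuming a parameter named /my-app/VARIABLE exists in Parameter Store (stored as a SecureString) and the instance role is allowed to read it:
# fetch the value and export it so a child Python process can read os.environ['VARIABLE']
VARIABLE="$(aws ssm get-parameter --name /my-app/VARIABLE --with-decryption --query 'Parameter.Value' --output text)"
export VARIABLE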
My solution to this problem:
set -o allexport; source /etc/environment; set +o allexport
-o allexport enables all variables in /etc/environment to be exported. +o allexport disables this feature.
For more information see the Set builtin documentation
I have tested this solution by using the AWS CLI command aws ssm send-command:
"commands": [
"set -o allexport; source /etc/environment; set +o allexport",
"echo $TEST_VAR > /home/ec2-user/app.log"
]
I am running bash script in my SSM command document, so I just source the profile/script to have env variables ready to be used by the subsequent commands. For example,
"runCommand": [
"#!/bin/bash",
". /tmp/setEnv.sh",
"echo \"myVar: $myVar, myVar2: $myVar2\""
]
You can refer to Can a shell script set environment variables of the calling shell? for sourcing your env variables. For python, you will have to parse your source profile/script, see Emulating Bash 'source' in Python

Get elastic beanstalk environment variables in docker container

So, I'm trying not to put sensitive information in the Dockerfile. A logical approach is to put the creds in the EB configuration (the GUI) as an ENV variable. However, docker build doesn't seem to be able to access the ENV variable. Any thoughts?
FROM jupyter/scipy-notebook
USER root
ARG AWS_ACCESS_KEY_ID
RUN echo ${AWS_ACCESS_KEY_ID}
I assume that for every deployment you create a new Dockerrun.aws.json file with the correct docker image tag for that deployment. At deployment stage, you can inject environment values, which will then be used in the docker run command by the EB agent. So your docker containers can then access these environment variables.
Putting sensitive information (for a Dockerfile to use) can be either for allowing a specific step of the image to run (build time), or for the resulting image to have that secret still there at runtime.
For runtime, if you can use the latest docker 1.13 in a swarm mode configuration, you can manage secrets that way
But the first case (build time) is typically for passing credentials to an http_proxy, and that can be done with --build-arg:
docker build --build-arg HTTP_PROXY=http://...
This flag allows you to pass the build-time variables that are accessed like regular environment variables in the RUN instruction of the Dockerfile.
Also, these values don’t persist in the intermediate or final images like ENV values do.
In that case, you would not use ENV, but ARG:
ARG <name>[=<default value>]
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command, using the --build-arg <varname>=<value> flag.
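Applied to the question's Dockerfile, the build step could be invoked with something like this (the value is supplied at build time and, per the quote above, is not persisted the way ENV values are; the image tag is only illustrative):
docker build --build-arg AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" -t my-image .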