Get Elastic Beanstalk environment variables in Docker container - amazon-web-services

So, I'm trying not to put sensitive information in the Dockerfile. A logical approach is to put the creds in the Elastic Beanstalk configuration (the GUI) as an environment variable. However, docker build doesn't seem to be able to access that variable. Any thoughts?
FROM jupyter/scipy-notebook
USER root
ARG AWS_ACCESS_KEY_ID
RUN echo ${AWS_ACCESS_KEY_ID}

I assume that for every deployment you create a new Dockerrun.aws.json file with the correct Docker image tag for that deployment. At deployment time, you can inject environment values, which the EB agent then passes to the docker run command, so your Docker containers can access these environment variables.
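For intuition, what the agent ends up doing is roughly equivalent to the following (the image tag is a placeholder, not taken from the question):
# each configured environment property is passed as -e to docker run
docker run -e AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" my-app:deploy-tag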

Passing sensitive information to a Dockerfile serves one of two purposes: allowing a specific step of the image build to run (build time), or making that secret available in the resulting image (runtime).
For runtime, if you can use Docker 1.13 or later in a swarm-mode configuration, you can manage secrets that way.
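As a rough sketch of that approach (the secret and service names are made up; a swarm must already be initialized):
# store the secret in the swarm, then mount it into the service at /run/secrets/aws_secret_key
printf '%s' "$AWS_SECRET_ACCESS_KEY" | docker secret create aws_secret_key -
docker service create --name notebook --secret aws_secret_key jupyter/scipy-notebook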
But the first case (build time) is typically for passing credentials to an http_proxy, and that can be done with --build-arg:
docker build --build-arg HTTP_PROXY=http://...
This flag allows you to pass the build-time variables that are accessed like regular environment variables in the RUN instruction of the Dockerfile.
Also, these values don’t persist in the intermediate or final images like ENV values do.
In that case, you would not use ENV, but ARG:
ARG <name>[=<default value>]
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag
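Applied to the question's Dockerfile, a minimal sketch could look like this (the image tag is an assumption):
# Dockerfile:
#   FROM jupyter/scipy-notebook
#   USER root
#   ARG AWS_ACCESS_KEY_ID
#   RUN echo "${AWS_ACCESS_KEY_ID}"
# build, passing the value in from the build environment:
docker build --build-arg AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" -t my-notebook .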

Related

any "docker" command that i try to run on terminal throw this message "context requires credentials to be passed as environment variables"

I was reading the Docker documentation about deploying Docker containers on AWS ECS (https://docs.docker.com/cloud/ecs-integration/). After I ran the command docker context create ecs myecscontext and selected the option "AWS environment variables", every Docker command that I try to run throws this message in my terminal: context requires credentials to be passed as environment variables. I've tried to set the AWS environment variables with the Windows set command, but it doesn't work.
I've used like this:
set AWS_SECRET_ACCESS_KEY=any-value
set AWS_ACCESS_KEY_ID=any-value
I've searched for how to solve this problem, and the only thing I've found is to set the environment variables like I've already done. What do I have to do?
UPDATE:
I found another way to set environment variables on Windows, described at https://www.tutorialspoint.com/how-to-set-environment-variables-using-powershell
Instead of set, I had to use the $env:VARIABLE_NAME = 'any-value' syntax to actually update the variables (in PowerShell, set is an alias for Set-Variable, which creates a shell variable rather than an environment variable).
Like this:
$env:AWS_ACCESS_KEY_ID = 'my-aws-access-key-id'
$env:AWS_SECRET_ACCESS_KEY = 'my-aws-secret-access-key'

Elastic Beanstalk deleting generated files on config changes

On Elastic Beanstalk, with an AWS Linux 2 based environment, updating the Environment Properties (i.e. environment variables) of an environment causes all generated files to be deleted. It also doesn't run container_commands as part of this update.
So, for example, I have a Django project with collectstatic in the container commands:
05_collectstatic:
  command: |
    source $PYTHONPATH/activate
    python manage.py collectstatic --noinput --ignore *.scss
This collects static files to a folder called staticfiles as part of deploy. But when I do an environment variable update, staticfiles is deleted. This causes all static files on the application to be broken until I re-deploy, which is extremely undesirable.
This behavior did not occur on AWS Linux 1 based environments. The difference appears to be that AWS Linux 2 based environments replace the /var/app/current folder during environment variable changes, where AWS Linux 1 based environments did not do this.
How do I fix this?
Research
I can verify that the container commands are not being run during an environment variable change by monitoring /var/log/cfn-init.log; no new entries are added to this log.
This happens with both rolling update type "disabled" and "immutable".
This happens even if I convert the environment command to be a platform hook, despite the fact that hooks are listed as running when environment properties are updated.
It seems to me like there are two potential solutions, but I don't know of an Elastic Beanstalk setting for either:
Have environment variable changes leave /var/app/current in place rather than replacing it.
Have environment variable changes run container commands.
The Elastic Beanstalk docs on container commands say "Leader-only container commands are only executed during environment creation and deployments, while other commands and server customization operations are performed every time an instance is provisioned or updated." Is this a bug in Elastic Beanstalk?
Related question: EB: Trigger container commands / deploy scripts on configuration change
The solution is to use a Configuration deployment platform hook for any commands that change the files in the deployment directory. Note that this is different from an Application deployment platform hook.
Using the example of the collectstatic command, the best thing to do is to move it from a container command to a pair of hooks, one for standard deployments and one for configuration changes.
To do this, remove the collectstatic container command. Then, make two identical files:
.platform/confighooks/predeploy/predeploy.sh
.platform/hooks/predeploy/predeploy.sh
Each file should have the following code:
#!/bin/bash
source $PYTHONPATH/activate
python manage.py collectstatic --noinput --ignore *.scss
You need two seemingly redundant files because different hooks have different trigger conditions. Scripts in hooks run when you deploy the app whereas scripts in confighooks run when you change the configuration of the app.
Make sure both of these files are executable according to git, or you will run into a "permission denied" error when you try to deploy. You can check whether they are executable via:
git ls-files -s .platform
You should see 100755 before each shell file in the output. If you see 100644 before any of your shell files, run the following to make them executable:
git add --chmod=+x -- .platform/*/*/*.sh

AWS pass large number of ENV variables into codebuild

Currently our singleton application, comprising 5 containers, goes through an AWS pipeline into CodeBuild and then CodeDeploy into ECS services. During CodeBuild, based on an ENV variable set in CodeBuild ($STAGE, which can be dev, prod or staging), it loads a specific config file which contains all the ENV variables each container needs. See below:
build:
  commands:
    # Get commit id
    - "echo STAGE $STAGE"
    - "export STAGE=$STAGE"
    # Assigning AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY needs to be done in two steps, otherwise it ends up in "Partial credentials found in env" error
    - "export ANSIBLE_VARS=\"\
      USE_EXISTING_VPC=true \
      DISABLE_BASIC_AUTH=true\""
    - "export DOCKER_ARGS=\"-e COMMIT_ID=$GIT_COMMIT -e APP_ENV=$STAGE
Problem 1 is that these config files are within the repo and anybody can modify them, so there are lots of human errors, like the production redirect URL pointing to the wrong place, or a new ENV variable not being set.
So I want to move away from loading different config files and let AWS handle the ENV variables, for example by having CodeBuild load them from Parameter Store. Is this the correct way?
Problem 2 is that there are lots of ENV variables. Is the only option to list them one by one in the CloudFormation template? Is there a better way to load all of the ENV variables into DOCKER_ARGS in the build command above?
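For reference, CodeBuild can read values from SSM Parameter Store at build time, either natively via the env/parameter-store section of the buildspec, or with the AWS CLI. A minimal CLI sketch (the parameter path is a made-up example, and the build role needs ssm:GetParameter permission):
export DB_PASSWORD=$(aws ssm get-parameter \
  --name "/myapp/$STAGE/DB_PASSWORD" \
  --with-decryption \
  --query Parameter.Value --output text)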

Where are my environment variables in Elastic Beanstalk for AL2?

I'm using Elastic Beanstalk to deploy a Django app. I'd like to SSH into the EC2 instance to execute some shell commands, but the environment variables don't seem to be there. I specified them via the AWS GUI (Configuration -> Environment properties) and they seem to work during the boot-up of my app.
I tried activating and deactivating the virtual env via:
source /var/app/venv/*/bin/activate
Is there some environment (or script I can run) to access an environment with all the properties set? Otherwise I can hardly run any command like python3 manage.py ..., since there is no settings module configured (I know how to specify it manually, but my app needs around 7 variables to work).
During deployment, the environment properties are readily available to your .platform hook scripts.
After deployment, e.g. when using eb ssh, you need to load the environment properties manually.
One option is to use the EB get-config tool. The environment properties can be accessed either individually (using the -k option), or as a JSON or YAML object with key-value pairs.
For example, one way to export all environment properties would be:
export $(/opt/elasticbeanstalk/bin/get-config --output YAML environment |
sed -r 's/: /=/' | xargs)
Here the get-config part returns all environment properties as YAML, the sed part replaces the ': ' in the YAML output with '=', and the xargs part fixes quoted numbers.
Note this does not require sudo.
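If jq happens to be installed on the instance, a variant that also copes with values containing spaces (a sketch, not part of the original answer):
# get-config emits JSON by default; jq's @sh filter shell-quotes each value
eval "$(/opt/elasticbeanstalk/bin/get-config environment |
  jq -r 'to_entries[] | "export \(.key)=\(.value|@sh)"')"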
Alternatively, you could refer to this AWS knowledge center post:
Important: On Amazon Linux 2, all environment properties are centralized into a single file called /opt/elasticbeanstalk/deployment/env. You must use this file during Elastic Beanstalk's application deployment process only. ...
The post describes how to make a copy of the env file during deployment, using .platform hooks, and how to set permissions so you can access the file later.
You can also perform similar steps manually, using SSH. Once you have the copy set up, with the proper permissions, you can source it.
Beware:
Note: Environment properties with spaces or special characters are interpreted by the Bash shell and can result in a different value.
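A sketch of that knowledge center approach (the hook and copy file names are assumptions; platform hooks run as root):
#!/bin/bash
# .platform/hooks/postdeploy/01_save_env.sh (duplicate under .platform/confighooks/postdeploy/)
cp /opt/elasticbeanstalk/deployment/env /opt/elasticbeanstalk/deployment/custom_env_file
chmod 644 /opt/elasticbeanstalk/deployment/custom_env_file
# later, over SSH: source /opt/elasticbeanstalk/deployment/custom_env_file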
Try running the command /opt/elasticbeanstalk/bin/get-config environment after you ssh into the EC2 instance.
If you are trying to access the environment properties from an Elastic Beanstalk script (for example in .ebextensions), use:
$(/opt/elasticbeanstalk/bin/get-config environment -k ENVURL)
where ENVURL is the name of the property. To reference the environment's own name in an .ebextensions template, use:
{ "Ref" : "AWSEBEnvironmentName" }

Docker on EC2, RUN command in dockerfile not reading environment variable

I have two Elastic Beanstalk environments on AWS: development and production. I'm running a GlassFish server on each instance, and it is requested that the same application package be deployable to both the production and development environments, without requiring two different .EAR files. The two instances differ in size: dev runs on a micro instance while production runs on a medium instance, therefore I need to deploy two different configuration files for GlassFish, one for each environment.
The main problem is that the file has to be in the GlassFish config directory before the server starts, so I thought it would be better to move it while the container was being created.
Of course each environment uses a Docker container to host the GlassFish instance, so my first thought was to configure an environment variable in Elastic Beanstalk. In this case
ypenvironment = dev
for the development environment and
ypenvironment = pro
for the production environment. Then in my Dockerfile I put this statement in the RUN command:
RUN if [ "$ypenvironment" = "pro" ] ; then \
mv --force /var/app/GF_domain.xml /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
elif [ "$ypenvironment" = "dev" ] ; then \
mv --force /var/app/GF_domain.xml.dev /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
fi
Unfortunately, when the startup finishes, both GF_domain files are still in /var/app.
Then I read that the RUN command runs things BEFORE the container is fully loaded, maybe missing the Elastic-Beanstalk-injected variables. So I tried to move the code to the ENTRYPOINT directive. No luck again, the container startup fails. I also tried the
ENTRYPOINT ["command", "param"]
syntax, but it didn't work giving a
System error: exec: "if": executable file not found in $PATH
Thus I'm stuck.
You need:
1/ Not to use ENTRYPOINT (or at least to use a sh -c 'if...' syntax): that is for runtime execution, not build-time image creation.
2/ To use build-time variables (--build-arg):
You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image.
However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.
A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build-time using the --build-arg flag:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
In your case, your Dockerfile should include:
ARG ypenvironment
Then docker build --build-arg ypenvironment=dev ... myDevImage
You will build 2 different images (based on the same Dockerfile)
I need to be able to use the same EAR package for dev and pro environments,
Then you want your ENTRYPOINT, when run, to move a file depending on the value of an environment variable.
Your Dockerfile does not need any special declaration in that case. You run your one image with:
docker run -e ypenvironment=dev ...
Make sure your script (referenced by your entrypoint) includes the if [ "$ypenvironment" = "pro" ] ; then... you mention in your question, plus the actual launch (in foreground) of your app.
Your script must not exit right away, or your container will exit right after starting.
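A sketch of such an entrypoint script, reusing the question's paths (the asadmin location is an assumption):
#!/bin/bash
# choose the GlassFish config at container start, based on a runtime env var
if [ "$ypenvironment" = "pro" ] ; then
  mv --force /var/app/GF_domain.xml /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
elif [ "$ypenvironment" = "dev" ] ; then
  mv --force /var/app/GF_domain.xml.dev /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
fi
# start the server in the foreground so the container stays alive
exec /usr/local/glassfish/bin/asadmin start-domain --verbose domain1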
When working with Docker you must differentiate between build-time actions and run-time actions.
Dockerfiles are used for building Docker images, not for deploying containers. This means that all the commands in the Dockerfile are executed when you build the Docker image, not when you deploy a container from it.
The CMD and ENTRYPOINT instructions are special: they are set at build time, but they tell Docker what command to execute when a container is started from that image.
Now, in your case a better approach would be to check if Glassfish supports environment variables inside domain.xml (or somewhere else). If it does, you can use the same domain.xml file for both environments, and have the same Docker image for both of them. You then differentiate between the environments by injecting run-time environment variables to the containers by using docker run -e "VAR=value" when running locally, and by using the Environment Properties configuration section when deploying on Elastic Beanstalk.
Edit: In case you can't use environment variables inside domain.xml, you can solve the problem by starting the container with a script which reads the runtime environment variables and puts their values in the correct places in domain.xml using sed, then starts your application as usual. You can find an example in this post.
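As a sketch of that technique (the @MAX_HEAP@ placeholder is hypothetical):
#!/bin/bash
# substitute a runtime environment variable into domain.xml before starting the server
sed -i "s|@MAX_HEAP@|${MAX_HEAP:-512m}|g" \
  /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
# ...then launch the application in the foreground as usual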