Passing environment variables to `docker-compose` when using ecs context - amazon-web-services

I am trying to use the new ecs context that the latest Docker CLI exposes. When I create the context using docker context create ecs my_context, I select the "use environment variables" method (the recommended way), but no matter how I try to call docker compose, it always returns the message context requires credentials to be passed as environment variables. Searching for this message on Google returns nothing :/
I have tried passing the variables with the -e and --environment flags, and I have tried a .env file and the --env-file flag.
docker --context my_ecs compose -f docker-compose.yml -f docker-compose.dev.yml -e AWS_ACCESS_KEY_ID=XXXXXXX -e AWS_SECRET_ACCESS_KEY=XXXXXXXX -e AWS_DEFAULT_REGION=XXXXXXXX up
If I don't use the environment variable method, it hits another known (frustrating) bug, whereby it uses the wrong AWS region, even though the context is set up with the correct region. I feel the environment variable option would resolve that, if I could get it to see the environment variables.

I don't think passing the variables on the docker compose up command works. The way it works is that you export the access key ID, secret access key, and region variables in your shell, and then you create the Docker context by pointing to those variables as shown here.
You can do that interactively by launching docker context create ecs myecscontext and then picking AWS environment variables or you can just run docker context create ecs myecscontext --from-env.
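A minimal sketch of that flow (the key values are the AWS documentation placeholders, and the context name and region are just examples):
# export real credentials in the shell that will run docker compose
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=eu-west-1
# create the context from those variables, then bring the stack up
docker context create ecs myecscontext --from-env
docker --context myecscontext compose -f docker-compose.yml -f docker-compose.dev.yml up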
And yes the region bug is annoying, if this is what you are referring to.

A Google search led me to this post (I am not using Windows but the AWS Cloud9 IDE).
I could get rid of the message context requires credentials to be passed as environment variables by passing dummy values in the environment variables. It goes like this:
$ docker context create ecs myecscontext
? Create a Docker context using: AWS environment variables
Successfully created ecs context "myecscontext"
$ docker context use myecscontext
myecscontext
$ docker context ls
context requires credentials to be passed as environment variables
$ AWS_SECRET_ACCESS_KEY=dummy AWS_ACCESS_KEY_ID=dummy AWS_DEFAULT_REGION=dummy docker context ls
NAME             TYPE   DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default          moby   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                         swarm
myecscontext *   ecs    credentials read from environment
$ AWS_SECRET_ACCESS_KEY=dummy AWS_ACCESS_KEY_ID=dummy AWS_DEFAULT_REGION=dummy docker context use default
default
$ docker context ls
NAME             TYPE   DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default *        moby   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                         swarm
myecscontext     ecs    credentials read from environment
$ docker context rm myecscontext
myecscontext
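The same pattern works with real values when you actually deploy: the shell passes leading VAR=value assignments to the command as environment variables, so the compose call from the question can be run like this (a sketch; the credential values are placeholders):
AWS_ACCESS_KEY_ID=XXXXXXX AWS_SECRET_ACCESS_KEY=XXXXXXXX AWS_DEFAULT_REGION=XXXXXXXX \
  docker --context my_ecs compose -f docker-compose.yml -f docker-compose.dev.yml up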

Modify and export environment variables on Linux or macOS as described in "Environment variables to configure the AWS CLI":
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
Later, you can see the options with this command:
docker context create ecs --help
Description: Create a context for Amazon ECS
Usage:
docker context create ecs CONTEXT [flags]
Flags:
      --access-keys string   Use AWS access keys from file
      --description string   Description of the context
      --from-env             Use AWS environment variables for profile, or credentials and region
  -h, --help                 Help for ecs
      --local-simulation     Create context for ECS local simulation endpoints
      --profile string       Use an existing AWS profile
Answer: docker context create ecs CONTEXT --from-env

I had to re-source my terminal with source ~/.bash_profile, because I was using .bashrc as my source and it did not have my environment variables in it.
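For example (a sketch; the key values are the AWS documentation placeholders, and the file is whichever startup file your shell actually reads):
# append the credentials to the file the login shell sources
cat >> ~/.bash_profile <<'EOF'
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
EOF
# re-source so the current shell picks them up
source ~/.bash_profile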

I've handled this situation by setting environment variables. You can use this guide for this.

Related

Dockerfile copy files from amazon s3 or another source that needs credentials

I am trying to build a Docker image and I need to copy some files from S3 to the image.
Inside the Dockerfile I am using:
FROM library/ubuntu:16.04
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
# Copy files from S3 inside docker
RUN aws s3 cp s3://filepath_on_s3 /tmp/
However, aws requires AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
I know I can probably pass them using ARG. But, is it a bad idea to pass them to the image at build time?
How can I achieve this without storing the secret keys in the image?
In my opinion, IAM roles are the best way to delegate S3 permissions to Docker containers.
Create a role from IAM -> Roles -> Create Role -> choose the service that will use this role, select EC2 -> Next -> select the S3 policies, and the role should be created.
Attach the role to a running/stopped instance from Actions -> Instance Settings -> Attach/Replace Role.
This worked successfully in Dockerfile:
RUN aws s3 cp s3://bucketname/favicons /var/www/html/favicons --recursive
I wanted to build upon @Ankita Dhandha's answer.
In the case of Docker you are probably looking to use ECS.
IAM Roles are absolutely the way to go.
When running locally, use a locally tailored Dockerfile and mount your AWS CLI ~/.aws directory to the root user's ~/.aws directory in the container (this allows it to use your, or a custom IAM user's, CLI credentials to mock the ECS behavior for local testing).
# Dockerfile for the local system: install the AWS CLI v2 in the image
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl unzip \
 && curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
 && unzip awscliv2.zip \
 && ./aws/install
# run the container with your CLI credentials mounted
docker run --mount type=bind,source="$HOME/.aws",target=/root/.aws
Role Types
EC2 Instance Roles define the global actions any instance can perform. An example would be having access to S3 to download ecs.config to /etc/ecs/ecs.config during your custom user-data.sh setup.
Use the ECS Task Definition to define a Task Role and a Task Execution Role.
Task Roles are used for a running container. An example would be a live web app that is moving files in and out of S3.
Task Execution Roles are for deploying the task. An example would be downloading the ECR image and deploying it to ECS, downloading an environment file from S3 and exporting it to the Docker container.
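As a rough sketch, both roles are attached when registering the task definition; the family name, role ARNs, and container-definitions file below are placeholders:
aws ecs register-task-definition \
  --family my-web-app \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 256 --memory 512 \
  --task-role-arn arn:aws:iam::123456789012:role/my-task-role \
  --execution-role-arn arn:aws:iam::123456789012:role/my-task-execution-role \
  --container-definitions file://container-definitions.json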
General Role Propagation
The C# SDK, for example, documents an ordered list of locations it will check to obtain credentials. Not every SDK behaves exactly like this, but many do, so you have to research it for your situation.
reference: https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/creds-assign.html
Plain text credentials fed into either the target system or environment variables.
CLI AWS credentials and a profile set in the AWS_PROFILE environment variable.
Task Execution Role used to deploy the docker task.
The running task will use the Task Role.
When the running task has no permissions for the current action it will attempt to elevate into the EC2 instance role.
Blocking EC2 instance role access
Because the EC2 instance role commonly needs access for custom system setup (such as configuring ECS), it is often desirable to block your tasks from accessing this role. This is done by blocking the tasks' access to the EC2 metadata endpoints, which are well-known endpoints in any AWS VPC.
reference: https://aws.amazon.com/premiumsupport/knowledge-center/ecs-container-ec2-metadata/
AWS VPC Network Mode
# ecs.config
ECS_AWSVPC_BLOCK_IMDS=true
Bind Network Mode
# ec2-userdata.sh
# install dependencies
yum install -y aws-cli iptables-services
# setup ECS dependencies
aws s3 cp s3://my-bucket/ecs.config /etc/ecs/ecs.config
# setup IPTABLES
iptables --insert FORWARD 1 -i docker+ --destination 169.254.169.254/32 --jump DROP
iptables --append INPUT -i docker+ --destination 127.0.0.1/32 -p tcp --dport 51679 -j ACCEPT
service iptables save
Many people pass in the details through the args, which I see as being fine and the way I would personally do it. I think you can over-engineer certain processes, and this is one of them.
Example docker run with the values passed in:
docker run -e AWS_ACCESS_KEY_ID=123 -e AWS_SECRET_ACCESS_KEY=1234
That said, I can see why some companies want to hide this away and fetch it from a private API or something. This is why AWS has created IAM roles - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html.
With a role attached, the credentials can be retrieved from the instance metadata endpoint (a private IP address that only the instance itself can reach), meaning you never have to store your credentials in the image itself.
Personally I think it's overkill for what you are trying to do; if someone hacks your image they can read the credentials out and still get access to those details. Passing them in as args is safe as long as you protect yourself as you should anyway.
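For reference, a sketch of how an attached role's temporary credentials are read from the instance metadata endpoint (IMDSv1-style; the first URL returns the role name, the second the credentials):
role=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/${role}"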
You should configure your credentials in the ~/.aws/credentials file:
~$ cat .aws/credentials
[default]
aws_access_key_id = AAAAAAAAAAAAAAAAAAAAAAAAAAAAa
aws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBB
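To keep those keys out of the image itself, the credentials file can then be mounted into the container at run time instead of copied in at build time (a sketch; my-image is a placeholder):
docker run --rm -v "$HOME/.aws:/root/.aws:ro" my-image aws s3 cp s3://filepath_on_s3 /tmp/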

AWS - can we set Environment variable to read inside Docker?

Can we set an environment variable on an EC2 instance from the AWS console?
Or do we need to use any other service to achieve this?
Also, can we read this variable inside Docker?
That appears to be out of scope for the AWS console. EC2, however, is basically AWS-hosted VMs, so anything you could normally do on that OS can be done.
Just connect to the machine and do what you would normally do. (See the AWS CLI guide for further help.)
Using the AWS Console:
Stop the ec2 instance
For that ec2 instance, from top dropdown - select Actions > Instance Settings > View/Change User Data
Create a script that adds the required environment variables to the init scripts, for example:
#!/bin/sh
echo 'export MY_ENV_VARIABLE=value' >> ~/.bashrc
or add a script to /etc/init.d or another similar location.
Start the EC2 instance for the Environment variables to be available.
If you want to pass the EC2 instance environment variable to the Docker container environment variable:
When you start a Docker container through docker run, you can pass the value through the --env (or -e) parameter on the command line.
eg: docker run -e DOCKER_ENV_VARIABLE=$MY_ENV_VARIABLE ...
https://docs.docker.com/engine/reference/commandline/run/#options
You can use user data to load the variable and afterwards pass it to your Docker container using docker run -e CONTAINER_ENV_VAR=$EC2_ENV_VAR, or put it in your Dockerfile.
For keys or any other sensitive data, I would advise you to use Parameter Store; you can put passwords, users, or whatever data you want there and read it with a tool called chamber.
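A rough sketch of that Parameter Store flow with the plain AWS CLI (the parameter name, value, and image are placeholders):
# store the secret once
aws ssm put-parameter --name /myapp/db_password --value 'S3cr3t' --type SecureString
# read it back at launch time and pass it to the container
DB_PASSWORD=$(aws ssm get-parameter --name /myapp/db_password --with-decryption \
  --query Parameter.Value --output text)
docker run -e DB_PASSWORD="$DB_PASSWORD" my-image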

How to provide argument to a task (container) on its launch on Amazon ECS using AWS Fargate

Here is an example, where I can run (locally) multiple instances of a docker image with different arguments to the echo command and to AUTHOR environment variable.
NAME=Peter
docker run alpine echo "Hello $NAME"
docker run --name static-site -e AUTHOR="$NAME" -d -P dockersamples/static-site
Here I can programmatically change the value of $NAME and run the containers.
I want the same flexibility while I try to run those containers on Amazon ECS using AWS Fargate launch type. Alas, I am unable to figure out how can I programmatically provide different values to $NAME variable.
An ECS task acts like the docker run command, so when you are creating a task, at the bottom you can add container details; in there you have environment variables.
So when you are deploying programmatically, you have to create a new task for each container.
You can override the environment variable settings when you run an existing task or create a new version of an existing task. If you click on 'create new revision' in the Task Definitions tab on the ECS Dashboard, you'll see that it lets you edit your container definitions as well. Navigate to Advanced container configuration>ENVIRONMENT and in there you should be able to add new env vars or update existing ones.
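For the programmatic case, the same per-launch flexibility as the local docker run example can be had with a container override when the task is started; a sketch with the AWS CLI (cluster, subnet, container name, and task definition are placeholders):
NAME=Peter
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition static-site:1 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],assignPublicIp=ENABLED}' \
  --overrides '{"containerOverrides":[{"name":"static-site","environment":[{"name":"AUTHOR","value":"'"$NAME"'"}]}]}'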

AWS static ip address

I am using the AWS CodeDeploy agent and deploying my project to the server through the Bitbucket plugin.
The CodeDeploy agent first executes the script files, which contain the command to run my Spring Boot project.
I have two environments, one development and another production, and I want the script to do things differently based on the environment, i.e. the two different instances.
My plan is to fetch the AWS static IP address that is mapped to the instance and from that determine the environment
(production or stage).
How do I fetch the Elastic IP address through sh commands?
A static IP will work.
A more natural CodeDeploy way to solve this is to set up two CodeDeploy deployment groups, one for your development environment and the other for your production environment. Then in your script you can use environment variables that CodeDeploy sets during the deployment to understand which environment you are deploying to.
Here is a blog post about how to use CodeDeploy environment variables: https://aws.amazon.com/blogs/devops/using-codedeploy-environment-variables/
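For example, CodeDeploy exposes DEPLOYMENT_GROUP_NAME to lifecycle hook scripts, so a hook script can branch on it (a sketch assuming deployment groups named staging and production):
#!/bin/bash
# branch on the deployment group CodeDeploy set for this deployment
case "$DEPLOYMENT_GROUP_NAME" in
  production) echo "starting with production settings" ;;
  staging)    echo "starting with staging settings" ;;
  *)          echo "unknown deployment group: $DEPLOYMENT_GROUP_NAME"; exit 1 ;;
esac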
You could do the following:
id=$( curl -s http://169.254.169.254/latest/meta-data/instance-id )
eip=$( aws ec2 describe-addresses --filters Name=instance-id,Values=${id} | jq -r '.Addresses[].PublicIp' )
The above gets the instance-id from metadata, then uses the aws cli to look for elastic IPs filtered by the id from metadata. Using jq this output can then be parsed down to the IP you are looking for.
Query the metadata server
eip=`curl -s 169.254.169.254/latest/meta-data/public-ipv4`
echo $eip
The solution is completely off on a tangent from what I originally asked, but it was enough for my requirement.
I just needed to know the environment I am in to do certain actions. So what I did was to set an environment variable in an independent script file where the environment variable is set and the value is that of the environment.
ex: let's say in a file env-variables.sh
export profile=stage
In the script file where the commands have to be executed based on the environment, I access it this way:
source /test/env-variables.sh
echo current profile is $profile
if [ $profile = stage ]
then
echo stage
elif [ $profile = production ]
then
echo production
else
echo failure
fi
Hope someone finds it useful.

'aws configure' in docker container will not use environment variables or config files

So I have a Docker container running Jenkins and an EC2 Container Registry (ECR) on AWS. I would like to have Jenkins push containers back to the registry.
To do this, I would like to automate the aws configure and get-login steps on container startup. I figured that I would be able to
export AWS_ACCESS_KEY_ID=*
export AWS_SECRET_ACCESS_KEY=*
export AWS_DEFAULT_REGION=us-east-1
export AWS_DEFAULT_OUTPUT=json
I expected this to cause aws configure to complete automatically, but that did not work. I then tried creating the config files as per the AWS docs and repeating the process, which also did not work. I then tried using aws configure set, also with no luck.
I'm going bonkers here, what am I doing wrong?
There is no real need to issue aws configure; instead, as long as you populate the env vars
export AWS_ACCESS_KEY_ID=aaaa
export AWS_SECRET_ACCESS_KEY=bbbb
... also export zone and region
then issue
aws ecr get-login --region ${AWS_REGION}
you will achieve the same desired AWS login status. As far as troubleshooting goes, I suggest you remote-login into your running container instance using
docker exec -ti CONTAINER_ID_HERE bash
then manually issue the above AWS-related commands interactively to confirm they run OK before putting the same into your Dockerfile.
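Put together, a container startup script along those lines might look like this (a sketch for AWS CLI v1, where aws ecr get-login prints a docker login command that can be eval'd; the account ID and repository are placeholders):
#!/bin/bash
# credentials already come from the environment, so no `aws configure` is needed
export AWS_DEFAULT_REGION=us-east-1
eval "$(aws ecr get-login --no-include-email --region "$AWS_DEFAULT_REGION")"
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest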