Can I use docker secrets within AWS ECS infrastructure? - amazon-web-services

I'm struggling with the best way to pass secret config data to my Node.js app.
The problem is that I need it to run on my local machine, within the CI environment for testing and staging, and in production (AWS).
I was thinking to use docker secrets as it's described here:
https://medium.com/better-programming/how-to-handle-docker-secrets-in-node-js-3aa04d5bf46e
The problem is that it only works if you run Docker as a service (via Swarm), which I could do locally, but not on AWS ECS and not on CI. (Or am I missing something there?)
Then I could also use Amazon secrets, but how would I get to them in my CI environment and in the local environment? Or when I don't have internet access?
Isn't there a way to make a separate file or something that I could use for every environment, no matter whether it's my local one running via docker run, the CI one, or the AWS ECS one?

Isn't there a way to make a separate file or something that I could use for every environment, no matter whether it's my local one running via docker run, the CI one, or the AWS ECS one? Or when I don't have internet access?
Targeting N environments with a condition like "no internet" is something you can hardly rely on an AWS service for, such as SSM Parameter Store.
What I can suggest is to use dotenv, which is environment independent; all you need is to handle the different environments from different sources, for example:
Pull from S3 when running on staging and production on AWS.
Bind-mount a local .env when working on the dev machine, to handle the no-internet condition.
Pull from S3 or generate a dynamic .env for CI.
To make sure each environment consumes the proper .env file, you can add the logic to the Docker entrypoint:
#!/bin/sh
if [ "${NODE_ENV}" = "production" ]; then
    # in production we are in AWS and we can pull the .env file from S3
    aws s3 cp s3://mybucket/production/.env /app/.env
elif [ "${NODE_ENV}" = "staging" ]; then
    # in staging we also assume that we are in AWS and can pull the .env file from S3
    aws s3 cp s3://mybucket/staging/.env /app/.env
elif [ "${NODE_ENV}" = "ci" ]; then
    # generate a dynamic .env or pull it from S3
    aws s3 cp s3://mybucket/ci/.env /app/.env
else
    echo 'running against local env, please mount a .env file, e.g. docker run -v $PWD/.env:/app/.env ...'
fi
echo "Starting node application"
exec node "$@"
Enable encryption on S3, and make sure only the production environment is able to pull the production .env file; a stronger policy leads to a more secure mechanism.
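For instance, server-side encryption can be requested at upload time (a sketch, assuming the same hypothetical bucket layout as above):
# upload the production .env and ask S3 to encrypt it with KMS
aws s3 cp .env s3://mybucket/production/.env --sse aws:kms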
For the local setup you can try:
docker run -it -e NODE_ENV="local" --rm -v $PWD/.env:/app/.env myapp

The most common way of passing environment-specific configuration to an application running in a Docker container is to use environment variables, as proposed by the third factor of the Twelve-Factor App methodology.
With this your application should read all configuration, including secrets, from environment variables.
If you are running locally and outside of a Docker container, you can manually set these environment variables, run a script that exports them to your shell, or use a dotenv-style helper for your language that automatically loads environment variables from an environment file and exposes them to your application, so you can fetch them with process.env.FOO, os.environ["FOO"], ENV['HOSTNAME'] or however your application's language accesses environment variables.
When running in a Docker container locally you can avoid packaging your .env file into the image and instead inject the environment variables from the environment file using the --env-file argument to docker run, or inject them individually by hand with --env.
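For example (a sketch, assuming a .env file in the current directory and a hypothetical image called myapp):
# inject every variable from .env without baking the file into the image
docker run --rm --env-file .env myapp
# or inject a single variable by hand
docker run --rm --env DATABASE_PASSWORD="example" myapp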
When these are only being accessed locally, you just need to make sure you don't store any secrets in source control, so add your .env file or equivalent to your .gitignore.
When it comes to running in CI you will need to have your CI system store these secret variables securely and then inject them at runtime. In GitLab CI, for example, you would create the variables in the project's CI/CD settings; these are stored encrypted in the database and are then injected transparently in plain text at runtime to the container as environment variables.
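In the CI job script you can then forward such a variable into a container without ever writing it to disk (a sketch, assuming a CI/CD variable named DATABASE_PASSWORD and a hypothetical image called myapp):
# -e with no value forwards the variable from the runner's environment
docker run --rm -e DATABASE_PASSWORD myapp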
For deployment to ECS you can store non-secret configuration directly as environment variables in the task definition. This leaves the environment variables readable by anyone with read-only access to your AWS account, which is probably not what you want for secrets. Instead you can create these in SSM Parameter Store or Secrets Manager and then refer to them in the secrets parameter of your task definition.
AWS documentation includes this smallish example of a task definition that gets secrets from Secrets Manager:
{
  "requiresCompatibilities": [
    "EC2"
  ],
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "httpd",
      "memory": 128,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "logConfiguration": {
        "logDriver": "splunk",
        "options": {
          "splunk-url": "https://sample.splunk.com:8080"
        },
        "secretOptions": [
          {
            "name": "splunk-token",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:awsExampleAccountID:secret:awsExampleParameter"
          }
        ]
      },
      "secrets": [
        {
          "name": "DATABASE_PASSWORD",
          "valueFrom": "arn:aws:ssm:us-east-1:awsExampleAccountID:parameter/awsExampleParameter"
        }
      ]
    }
  ],
  "executionRoleArn": "arn:aws:iam::awsExampleAccountID:role/awsExampleRoleName"
}
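The referenced values have to exist beforehand, and the task execution role needs permission to read them (ssm:GetParameters and/or secretsmanager:GetSecretValue). As a sketch, using the example names from the task definition above and placeholder values:
# SecureString parameter referenced by the "secrets" entry
aws ssm put-parameter --name "awsExampleParameter" --type SecureString --value "my-database-password"
# secret referenced by the splunk-token secretOption
aws secretsmanager create-secret --name "awsExampleParameter" --secret-string "my-splunk-token"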

Related

How to pull Docker image from a private repository using AWS Batch?

I'm using AWS Batch and my Docker image is hosted on a private Nexus repo. I'm trying to create the job definition but I can't find anywhere how to specify the repo credentials like we did with a task definition in ECS.
I tried to manually specify it in the JSON like this:
{
  "command": ["aws", "s3", "ls"],
  "image": "nexus-docker-repo.xxxxx.xxx/my-image",
  "memory": 1024,
  "vcpus": 1,
  "repositoryCredentials": {
    "credentialsParameter": "ARN_OF_CREDENTIALS"
  },
  "jobRoleArn": "ARN_OF_THE_JOB"
}
But when I apply the changes, the credentialsParameter parameter is removed. I think it's not supported.
So how do I pull an image from a private repo with AWS Batch? Is it possible?
Thank you.
I do not see the option repositoryCredentials either in the Batch job definition.
A secure option could be:
1. Generate the config.json for docker login.
2. Place that file in S3.
3. Generate an IAM role that has access to that file.
4. Create a compute environment with a launch template and user data that downloads the config.json (see the sketch below).
5. Run the jobs with that compute environment.
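A minimal user-data sketch for step 4, assuming a hypothetical bucket path and that the instance profile is allowed to read it (the target path is just an illustration and depends on how your AMI is set up to read registry credentials):
#!/bin/bash
# download the docker login config.json that was uploaded to S3
mkdir -p /root/.docker
aws s3 cp s3://my-bucket/docker/config.json /root/.docker/config.json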
OK, I was able to do it by modifying the file /etc/ecs/ecs.config.
If the file is not there, you have to create it.
Then I had to add these 2 lines to that file:
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"admin","password":"admin","email":"admin@example.com"}}
Then I had to restart the ECS agent:
sudo systemctl restart ecs ## for the Amazon ECS-optimized Amazon Linux 2 AMI
Or
sudo stop ecs && sudo start ecs ## for the Amazon ECS-optimized Amazon Linux AMI
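If you'd rather not edit the file by hand on each instance, the same two lines can be appended via instance user data before the agent starts (a sketch; the credentials are placeholders):
#!/bin/bash
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"admin","password":"admin","email":"admin@example.com"}}
EOF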

SSL certs with AWS SSM Parameter Store

I am trying to pass an SSL certificate to AWS SSM Parameter Store; the SSL certificate is password protected as well.
My question is: how do I retrieve this as a certificate file inside the containers in ECS? I do know how to use SSM Parameter Store to store secret environment variables, BUT how do I use it to create a secret file at a location on the containers? We have a string and a file here; how does SSM manage files?
Thanks
I'm not aware of a way to create a file from SSM directly, but I expect the ENTRYPOINT in your Docker container could handle this logic.
Task Definition Snippet
{
  "containerDefinitions": [{
    "secrets": [{
      "name": "MY_SSM_CERT_FILE",
      "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:MY_SSM_CERT_FILE"
    },
    {
      "name": "MY_SSM_CERT_FILE_LOCATION",
      "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:MY_SSM_CERT_FILE_LOCATION"
    }]
  }]
}
entrypoint.sh
echo "$MY_SSM_CERT_FILE" >> $MY_SSM_CERT_FILE_LOCATION
// Run rest of the logic for application
Dockerfile
FROM ubuntu:16.04
COPY ./entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
Why don't you use AWS Secrets Manager, which can complement AWS SSM? I think Secrets Manager supports secret files:
$ aws secretsmanager create-secret --name TestSecret --secret-string file://secret.txt # The Secrets Manager command takes the --secret-string parameter from the contents of the file
see this link for further information:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/best-practices.html
The link below shows how you can integrate Secrets manager with SSM
https://docs.aws.amazon.com/systems-manager/latest/userguide/integration-ps-secretsmanager.html
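To turn the stored value back into a file inside the container, the entrypoint could fetch it with the CLI at startup (a sketch, assuming the TestSecret name from above and a hypothetical target path):
# fetch the secret string and write it to a file the application can read
aws secretsmanager get-secret-value --secret-id TestSecret --query SecretString --output text > /app/certs/my-cert.pem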
Hope this helps

writing file from docker container to host instance on AWS

So I am using Travis CI to automatically deploy my application to an AWS Elastic Beanstalk environment. The issue is that I need to update the nginx.conf file that is located on the host machine.
I'm running a single-container Docker image on that host machine.
How can I copy or link the nginx.conf file from the Docker container to the host machine's nginx.conf file?
Currently my Dockerrun.aws.json looks like this:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "some:image:url:here",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8001"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/etc/nginx/nginx.conf",
      "ContainerDirectory": "/home/node/app/nginx.conf"
    }
  ]
}
When I tried to use AWSEBDockerrunVersion 2, the build gave me an error that the version is wrong.
How can I link those two files with Single Container Docker application?
The "Volumes" key is used to map full volumes, not individual files. See Dockerrun.aws.json file specifications for an explanation.
I know of 2 ways you can solve this problem: 1) Custom AMI or 2) use a Dockerfile with your Dockerrun.aws.json.
1. Build a Custom AMI
The idea behind building a custom AMI is to launch an instance from one of Amazon's existing AMIs and make the changes you need to it (in your case, change the nginx.conf). Finally you create a new AMI from this instance, and it will be available to you when you create your environment in Elastic Beanstalk. Here are the detailed steps to create your own AMI and how to use it with Elastic Beanstalk.
2. Use a Dockerfile with your Dockerrun.aws.json
If you don't build your own AMI, you can copy your conf file with the help of a Dockerfile. A Dockerfile is a text file that provides commands for Elastic Beanstalk to run to build your custom image. The Dockerfile reference details the commands that can be added to a Dockerfile to build your image. You are going to need to use the COPY command, or if the file is simple, you can use RUN and echo to build it like in the example here.
Once you create your Dockerfile, you will need to put the Dockerfile and your Dockerrun.aws.json into a directory and create a zip file with both. Provide this to Elastic Beanstalk as your source bundle. Follow this guide to build the source bundle correctly.
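A sketch of building that source bundle from the project directory (file names as above):
# zip the Dockerfile and Dockerrun.aws.json into a source bundle for Elastic Beanstalk
zip source-bundle.zip Dockerfile Dockerrun.aws.json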

Elastic Beanstalk MultiContainer Docker Environment Variables

I have some Docker images stored in ECR, and I'm trying to deploy them to Elastic Beanstalk. They're being deployed fine, but they're not picking up any of the environment variables from the host. If I deploy just the default multi-container Docker setup, the containers do pick up the environment variables (set with eb setenv). Even when running locally they do not pick up the environment variables. Has anyone else experienced this and found a solution?
You can define the container environment variables in your Dockerrun.aws.json file.
For example, the following entry defines an environment variable with the name APP and the value PYTHON:
"environment": [
{
"name": "APP",
"value": "PYTHON"
}
],
Ref- http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html

AWS Beanstalk docker image automatic update doesn't work

I have a node.js application packaged in a docker image hosted in a public repository.
I have deployed that image in an AWS Beanstalk docker application successfully.
The problem is that I was expecting the Beanstalk application to be automatically updated when I update the image in the public repository, as the following configuration suggests.
Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "peveuve/dynamio-payment-service",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8000"
    }
  ],
  "Logging": "/var/log/dynamio"
}
The Dockerfile is very simple:
FROM node:4.2.1-onbuild
# Environment variables
ENV NODE_ENV test
ENV PORT 8000
# expose application port outside
EXPOSE $PORT
The Amazon documentation is pretty clear on that:
Optionally include the Update key. The default value is "true" and
instructs Elastic Beanstalk to check the repository, pull any updates
to the image, and overwrite any cached images.
But I have to update the Beanstalk application manually by uploading a new version of the Dockerrun.aws.json descriptor. Did I miss something? Is it supposed to work like that?
You can use the aws command-line tool to trigger the update:
aws elasticbeanstalk update-environment --application-name [your_app_name] --environment-name [your_environment_name] --version-label [your_version_label]
You specify the version that contains the Dockerrun.aws.json file; that way a new version won't be added to the application. In this case the Dockerrun file works as the "source" for the application, but it only tells AWS to pull the Docker image, so it would be redundant to create new versions of the application in Elastic Beanstalk (unless you use specifically tagged Docker images in the Dockerrun file).
Links:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html
http://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_UpdateEnvironment.htm
The documentation should be clearer. What they are saying is that with Update set to "true":
Elastic Beanstalk will do a docker pull before it does a docker run when the application is first started. It will not continually poll Docker Hub.
In contrast, issuing a docker run without first doing a docker pull will always use the locally stored version of the image on that machine, which may not always be the latest.
In order to achieve what you want, you'll need to set up a webhook on Docker Hub that calls an application you control, which then redeploys your Elastic Beanstalk app.