SSL certs with AWS SSM Parameter Store

I am trying to store an SSL certificate in AWS SSM Parameter Store.
The SSL certificate is password-protected as well.
My question is: how do I retrieve this as a certificate file inside the containers in ECS? I know how to use SSM Parameter Store to store secret environment variables, but how do I use it to write a secret file to a location on the containers? We have a string and a file here; how does SSM manage files?
Thanks

I'm not aware of a way to create a file directly from SSM, but I expect the ENTRYPOINT in your Docker container could handle this logic.
Task Definition Snippet
{
  "containerDefinitions": [{
    "secrets": [
      {
        "name": "MY_SSM_CERT_FILE",
        "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:MY_SSM_CERT_FILE"
      },
      {
        "name": "MY_SSM_CERT_FILE_LOCATION",
        "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:MY_SSM_CERT_FILE_LOCATION"
      }
    ]
  }]
}
entrypoint.sh
echo "$MY_SSM_CERT_FILE" >> $MY_SSM_CERT_FILE_LOCATION
// Run rest of the logic for application
Dockerfile
FROM ubuntu:16.04
COPY ./entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
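For completeness, a hedged sketch of storing the certificate in Parameter Store in the first place (the parameter name and file name are hypothetical; note that standard parameters are capped at 4 KB, so a large certificate may need an advanced-tier parameter):
# Store the PEM contents as an encrypted SecureString parameter
aws ssm put-parameter \
    --name "/myapp/ssl/cert" \
    --type SecureString \
    --value file://my-cert.pem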

Why don't you use AWS Secrets Manager, which complements AWS SSM? Secrets Manager can load a secret's value from a file:
$ aws secretsmanager create-secret --name TestSecret --secret-string file://secret.txt # The Secrets Manager command takes the --secret-string parameter from the contents of the file
see this link for further information:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/best-practices.html
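To turn that secret back into a file inside a container, a minimal sketch (assuming the secret name TestSecret from above and the AWS CLI available in the image; the target path is hypothetical):
# Fetch the stored certificate and write it to disk at container start
aws secretsmanager get-secret-value \
    --secret-id TestSecret \
    --query SecretString \
    --output text > /etc/ssl/private/my-cert.pem
chmod 600 /etc/ssl/private/my-cert.pem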
The link below shows how you can integrate Secrets manager with SSM
https://docs.aws.amazon.com/systems-manager/latest/userguide/integration-ps-secretsmanager.html
Hope this helps

Related

Can I use docker secrets within AWS ECS infrastructure?

I'm struggling with the best way to pass secret config data to my Node.js app.
The problem is that I need it to run on my local machine, within the CI environment for testing and staging, and on production (AWS).
I was thinking of using docker secrets as described here:
https://medium.com/better-programming/how-to-handle-docker-secrets-in-node-js-3aa04d5bf46e
The problem is that this only works if you run Docker as a service (via Swarm), which I could do locally, but not on AWS ECS and not in CI. (Or am I missing something there?)
Then I could also use Amazon secrets, but how would I get to them in my CI environment and in the local environment? Or if I don't have internet access?
Isn't there a way to make something like a separate file that I could use for every environment, no matter whether it's my local one running via docker run, the CI one, or the AWS ECS one?
Isn't there a way to make something like a separate file that I could use for every environment, no matter whether it's my local one running via docker run, the CI one, or the AWS ECS one? Or if I don't have internet access?

Targeting N environments with a constraint like "no internet access" means you can hardly rely on an AWS service such as Parameter Store for keeping secrets.
What I can suggest is to use dotenv (.env) files, which are environment-independent; you just need to feed each environment its file from a different source, for example:
Pull from S3 when running on staging and production in AWS
Bind-mount a local .env when working on the dev machine, to handle the no-internet condition
Pull from S3, or generate a dynamic .env, for CI
To make sure each environment consumes the proper .env file, you can add the logic to the Docker entrypoint.
#!/bin/sh
if [ "${NODE_ENV}" = "production" ]; then
    # in production we are in AWS and we can pull the .env file from S3
    aws s3 cp s3://mybucket/production/.env /app/.env
elif [ "${NODE_ENV}" = "staging" ]; then
    # in staging we also assume that we are in AWS and we can pull the .env file from S3
    aws s3 cp s3://mybucket/staging/.env /app/.env
elif [ "${NODE_ENV}" = "ci" ]; then
    # generate a dynamic .env or pull it from S3
    aws s3 cp s3://mybucket/ci/.env /app/.env
else
    echo "running against local env, please bind-mount a .env file, e.g. docker run -v \$PWD/.env:/app/.env ..."
fi
echo "Starting node application"
exec node "$@"
Enable encryption on S3, and make sure only the production environment is able to pull the production .env file; a stronger policy leads to a more secure mechanism.
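For example, a hedged one-liner to turn on default SSE-S3 encryption for the bucket (bucket name as in the script above):
aws s3api put-bucket-encryption \
    --bucket mybucket \
    --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'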
For the local setup you can try
docker run -it -e NODE_ENV="local" --rm -v $PWD/.env:/app/.env myapp
The most common way of passing environment-specific configuration to an application running in a Docker container is to use environment variables, as proposed by the third factor of the Twelve-Factor App methodology.
With this your application should read all configuration, including secrets, from environment variables.
If you are running locally and outside of a Docker container, you can set these environment variables manually, run a script that exports them to your shell, or use a dotenv-style helper for your language that automatically loads environment variables from an environment file and exposes them to your application, so you can fetch them with process.env.FOO, os.environ["FOO"], ENV['HOSTNAME'], or however your application's language accesses environment variables.
When running in a Docker container locally, you can avoid packaging your .env file into the image and instead inject the environment variables from the environment file using the --env-file argument to docker run, or inject them individually by hand with --env.
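For example (the image name is hypothetical):
docker run --rm --env-file ./.env myapp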
When these are only being accessed locally, you just need to make sure you don't store any secrets in source control, so add your .env file or equivalent to your .gitignore.
When it comes to running in CI, you will need to have your CI system store these secret variables securely and inject them at runtime. In GitLab CI, for example, you would create the variables in the project CI/CD settings; these are stored encrypted in the database and are injected transparently, in plain text, into the container as environment variables at runtime.
For deployment to ECS you can store non-secret configuration directly as environment variables in the task definition. This leaves the environment variables readable by anyone with read-only access to your AWS account, which is probably not what you want for secrets. Instead, you can create these in SSM Parameter Store or Secrets Manager and then refer to them in the secrets parameter of your task definition:
The AWS documentation includes this smallish example of a task definition that pulls one secret from Secrets Manager (the Splunk token) and one from SSM Parameter Store (the database password):
{
  "requiresCompatibilities": [
    "EC2"
  ],
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "httpd",
      "memory": 128,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "logConfiguration": {
        "logDriver": "splunk",
        "options": {
          "splunk-url": "https://sample.splunk.com:8080"
        },
        "secretOptions": [
          {
            "name": "splunk-token",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:awsExampleAccountID:secret:awsExampleParameter"
          }
        ]
      },
      "secrets": [
        {
          "name": "DATABASE_PASSWORD",
          "valueFrom": "arn:aws:ssm:us-east-1:awsExampleAccountID:parameter/awsExampleParameter"
        }
      ]
    }
  ],
  "executionRoleArn": "arn:aws:iam::awsExampleAccountID:role/awsExampleRoleName"
}

AWS Elastic Beanstalk and Secret Manager

Does anyone know if it is possible to pass a secret value as an environment variable in Elastic Beanstalk?
The alternative obviously is to use the SDK in our codebase, but I want to explore the environment variable approach first.
Cheers
Damien
Per @Ali's answer, it is not built-in at this point. However, it is relatively easy to do with .ebextensions and the AWS CLI. Here is an example that extracts a secret to a file according to a MY_ENV environment variable. This value could then be set as an environment variable, but keep in mind environment variables are specific to the shell; you'd need to pass them to anything you are launching.
10-extract-htpasswd:
  env:
    MY_ENV:
      "Fn::GetOptionSetting":
        Namespace: "aws:elasticbeanstalk:application:environment"
        OptionName: MY_ENV
  command: |
    aws secretsmanager get-secret-value --secret-id myproj/$MY_ENV/htpasswd --region=us-east-1 --query=SecretString --output text > /etc/nginx/.htpasswd
    chmod o-rwx /etc/nginx/.htpasswd
    chgrp nginx /etc/nginx/.htpasswd
This also requires giving the EB service role IAM permissions to the secrets, i.e. a policy like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "xxxxxxxxxx",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:xxxxxxxxxxxx:secret:myproj*"
    }
  ]
}
As the above answers mention, there is still no built-in solution if you want to do this in Elastic Beanstalk. However, a workaround is to use a "platform hook". Unfortunately it is poorly documented at this point.
To store your secret, the best solution is to create a custom secret in AWS Secrets Manager. In Secrets Manager you can create a new secret by clicking "Store a new secret", then selecting "Other type of secret" and entering your secret key/value. At the next step you need to provide a secret name (say "your_secret_name"), and you can leave everything else at its default settings.
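A hedged CLI equivalent of those console steps (name, key, and value are placeholders):
aws secretsmanager create-secret \
    --name your_secret_name \
    --secret-string '{"your_secret_key":"your_secret_value"}'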
Then, you need to allow Elastic Beanstalk to get this secret. You can do it by creating a new IAM policy, for instance with this content:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Getsecretvalue",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetResourcePolicy",
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "secretsmanager:ListSecretVersionIds"
      ],
      "Resource": "your-secret-arn"
    }
  ]
}
You need to replace "your-secret-arn" with your secret's ARN, which you can get from the AWS Secrets Manager interface. Then, you need to attach the policy you created to the EB roles (either "aws-elasticbeanstalk-ec2-role" or "aws-elasticbeanstalk-service-role").
Finally you need to add a hook file to your application. From the root of your application, the location should be ".platform/hooks/prebuild/your_hook.sh". The content of the file can be something like this:
#!/bin/sh
# Read one key out of the secret's JSON SecretString
export your_secret_key=$(aws secretsmanager get-secret-value --secret-id your-secret-name --region us-east-1 | jq -r '.SecretString' | jq -r '.your_secret_key')
touch .env
{
    printf "SECRET_KEY=%s\n" "$your_secret_key"
    # printf whatever other variable you want to pass
} >> .env
Obviously you need to replace "your_secret_name" and the other variables with your own values, and set the region to the region where your secret is stored (if it is not us-east-1). And don't forget to make the hook executable ("chmod +x your_hook.sh").
This assumes that your application can load its env from a .env file (which works fine with docker / docker-compose for example).
Another option is to store the variable in an ".ebextensions" config file, but unfortunately that doesn't seem to work with the new Amazon Linux 2 platform. What's more, you should not store sensitive information such as credentials directly in your application build: builds of the application can be accessed by anyone with Elastic Beanstalk read access, and they are also stored unencrypted on S3.
With the hook approach, the secret is only stored locally on your Elastic Beanstalk underlying EC2 instances, and you can (should!) restrict direct SSH access to them.
Unfortunately, EB doesn't support secrets at this point; this might be added down the road. You can use them in your environment variables as the documentation suggests, but they will appear in plain text in the console. Another, and IMO better, approach is to use .ebextensions and AWS CLI commands to grab secrets from Secrets Manager, which needs some setup (e.g. having the AWS CLI installed and your secrets stored in Secrets Manager). You can set these as environment variables in the same EB configuration. Hope this helps!
I'm just adding to @kaliatech's answer because, while very helpful, it had a few gaps that left me unable to get this working for a few days. Basically you need to add a config file to the .ebextensions directory of your EB app, which uses a container_commands section to retrieve your secret (in JSON format) and output it as a .env file into the /var/app/current directory of the EC2 instances where your app's code lives:
# .ebextensions/setup-env.config
container_commands:
  01-extract-env:
    env:
      AWS_SECRET_ID:
        "Fn::GetOptionSetting":
          Namespace: "aws:elasticbeanstalk:application:environment"
          OptionName: AWS_SECRET_ID
      AWS_REGION: {"Ref": "AWS::Region"}
      ENVFILE: .env
    command: >
      aws secretsmanager get-secret-value --secret-id $AWS_SECRET_ID --region $AWS_REGION |
      jq -r '.SecretString' |
      jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]' > $ENVFILE
Note: this assumes the AWS_SECRET_ID is configured in the app environment, but it can easily be hardcoded here as well.
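For instance, a hedged way to set it with the EB CLI (the secret id is a placeholder):
eb setenv AWS_SECRET_ID=my-app/production/secrets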
All the utilities needed for this script are already baked into the EC2 Linux image, but you'll need to grant permissions to the IamInstanceProfile role (usually named aws-elasticbeanstalk-ec2-role), which is assumed by EC2, so that it can access Secrets Manager:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SecretManagerAccess",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:ap-southeast-2:xxxxxxxxxxxx:secret:my-secret-name*"
    }
  ]
}
Finally, to debug any issues encountered during EC2 instance bootstrap, download the EB logs and check the EC2 log files at /var/log/cfn-init.log and /var/log/cfn-init-cmd.log.
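A hedged way to grab those logs with the EB CLI (the local path is where the CLI typically unpacks them; the instance id directory varies):
eb logs --all
less .elasticbeanstalk/logs/latest/*/var/log/cfn-init-cmd.log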
This answer only applies if you're using CodePipeline.
I think you can now add a secret in the environment variables section.
If you use AWS CodeBuild, add the following commands to the pre_build phase of your project's buildspec.yml. They retrieve your environment variables from AWS Secrets Manager, use sed to do some substituting/formatting, and append them to the aws:elasticbeanstalk:application:environment namespace in .ebextensions/options.config:
phases:
  pre_build:
    commands:
      - secret=$(aws secretsmanager get-secret-value --secret-id foo-123 --region=bar-xyz --query=SecretString --output text)
      - regex=$(cat ./sed_substitute)
      - echo $secret | sed "${regex}" >> .ebextensions/options.config
Bit of a hack, but the sed_substitute used in the commands above, which produces the indentation/formatting that .ebextensions/options.config demands, was:
s/",/\n /g; s/":/": /g; s/{"/ /g; s/"}//g; s/"//g;

Connect Airflow docker to RDS using AWS SSM Parameter Store Secrets Backend

I have Airflow running on a Docker machine.
It is working fine, but I am interested in using AWS Systems Manager as the provider of the user and password for the RDS database (PostgreSQL).
I read about the AWS SSM Parameter Store Secrets Backend, so I set up airflow.cfg with:
[secrets]
backend = airflow.contrib.secrets.aws_systems_manager.SystemsManagerParameterStoreBackend
backend_kwargs = {"rds_user": "/pro/database/airflow/user", "rds_password": "/pro/database/airflow/password", "profile_name": "myrole"}
The profile_name variable defines the AWS profile (role) where the parameters exist.
I also have an entrypoint.sh script which uses:
if [ -z "$AIRFLOW__CORE__SQL_ALCHEMY_CONN" ]; then
# Default values corresponding to the default compose files
: "${POSTGRES_HOST:="hostname"}"
: "${POSTGRES_PORT:="5432"}"
: "${POSTGRES_USER:=AIRFLOW__SECRETS__BACKEND_KWARGS['rds_user']}"
: "${POSTGRES_PASSWORD:="mypassword"}"
: "${POSTGRES_DB:="postgres"}"
: "${POSTGRES_EXTRAS:-""}"
As you can see, I am using AIRFLOW__SECRETS__BACKEND_KWARGS['rds_user'] to read from the environment variable defined in the cfg file, but it is not working. If I enter the container with Bash and echo the variables, I don't see any variable such as AIRFLOW__XXXXXX.
I am trying to follow the Airflow docs about it, but they don't explain at all how you should work with it.

How to pull Docker image from a private repository using AWS Batch?

I'm using AWS Batch and my Docker image is hosted on a private Nexus repo. I'm trying to create the job definition but I can't find anywhere how to specify the repo credentials like we did with a task definition in ECS.
I tried to specify it manually in the JSON like this:
{
  "command": ["aws", "s3", "ls"],
  "image": "nexus-docker-repo.xxxxx.xxx/my-image",
  "memory": 1024,
  "vcpus": 1,
  "repositoryCredentials": {
    "credentialsParameter": "ARN_OF_CREDENTIALS"
  },
  "jobRoleArn": "ARN_OF_THE_JOB"
}
But when I apply the changes, the credentialsParameter parameter is removed. I think it's not supported.
So how do I pull an image from a private repo with AWS Batch? Is it possible?
Thank you.
I do not see the repositoryCredentials option in the Batch job definition either.
A secure option could be:
Generate the config.json for docker login
Place that file in S3
Create an IAM role that has access to that file
Create a compute environment with a launch template whose user data downloads the config.json (see the sketch after this list)
Run the jobs with that compute environment
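A hedged sketch of such launch-template user data (the bucket name is hypothetical; the ECS agent, which performs image pulls for Batch jobs, reads registry credentials from /etc/ecs/ecs.config):
#!/bin/bash
# Download a pre-staged .dockercfg-format auth file and hand it to the ECS agent
aws s3 cp s3://my-docker-auth-bucket/dockercfg /tmp/dockercfg
{
  echo "ECS_ENGINE_AUTH_TYPE=dockercfg"
  echo "ECS_ENGINE_AUTH_DATA=$(cat /tmp/dockercfg)"
} >> /etc/ecs/ecs.config
rm /tmp/dockercfg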
OK, I was able to do it by modifying the file /etc/ecs/ecs.config.
If the file is not there, you have to create it.
Then I had to add these 2 lines to that file:
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"admin","password":"admin","email":"admin@example.com"}}
Then I had to restart the ECS agent:
sudo systemctl restart ecs ## for the Amazon ECS-optimized Amazon Linux 2 AMI
Or
sudo stop ecs && sudo start ecs ## for the Amazon ECS-optimized Amazon Linux AMI

AWS ElasticBeanstalk pull Docker image from Gitlab registry

I'm having a hard time pulling a Docker image from a private Gitlab registry into an AWS multicontainer Elastic Beanstalk environment.
I have added a .dockercfg file to S3, in the same region as my cluster, and allowed the aws-elasticbeanstalk-ec2-role IAM role to get data from S3.
Elastic Beanstalk always returns the error CannotPullContainerError: API error (500).
My .dockercfg is in this format:
{
  "https://registry.gitlab.com": {
    "auth": "my gitlab deploy token",
    "email": "my gitlab token name"
  }
}
Inside Dockerrun.aws.json I have added the following:
"authentication": {
  "bucket": "name of my bucket",
  "key": ".dockercfg"
},
When I log in via docker login -u gitlabtoken-name -p token, it works perfectly.
The gitlab deploy token is not the auth key.
To generate a proper auth key I usually do the following:
docker run -ti docker:dind sh -c "docker login -u name -p deploy-token registry.gitlab.com && cat /root/.docker/config.json"
and it'll print something like:
{
  "auths": {
    "registry.gitlab.com": {
      "auth": "your-auth-key"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.09.0 (linux)"
  }
}
Then, as per the Elastic Beanstalk docs, "Using Images from a Private Repository", you should take just what's needed.
Hope this'll help you!