Can anyone suggest the best way to retrieve AWS Secrets Manager secrets from a Dockerfile and pass the secret values to the Docker container as environment variables once the container is running?
The reason I am asking: I am trying to remove all the sensitive password information hard-coded in different places of a Git repository and move those passwords to AWS Secrets Manager.
https://github.com/s12v/secure-exec is a similar tool that supports Secrets Manager (including JSON secrets).
As mentioned above, with ECS there's no need for such tools.
Take a look at ssm-env, which populates ENV vars from Parameter Store. There is an example of using it with Docker.
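For context, the typical pattern is to bake the binary into the image and use it as an entrypoint wrapper that resolves any env var whose value looks like ssm://<parameter-name> before exec'ing the real command. A rough Dockerfile sketch (the release URL, version, and flag are assumptions, so check the project's README):

    FROM alpine
    RUN wget -O /usr/local/bin/ssm-env \
          https://github.com/remind101/ssm-env/releases/download/v0.0.4/ssm-env \
        && chmod +x /usr/local/bin/ssm-env
    # At startup the wrapper replaces values like
    # DATABASE_URL=ssm:///prod/app/DATABASE_URL with the decrypted parameter.
    ENTRYPOINT ["/usr/local/bin/ssm-env", "-with-decryption"]
    CMD ["env"]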
If you are using ECS, there is built-in support for this.
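For reference, a minimal sketch of what that support looks like in a container definition (the names and ARN are placeholders): ECS resolves the secret when the task starts and injects it as a plain environment variable, so nothing sensitive lives in the image or the repo.

    {
      "name": "app",
      "image": "my-app:latest",
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/db-password"
        }
      ]
    }

The task execution role needs permission to read the secret (secretsmanager:GetSecretValue).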
Related
I have an ECS cluster that is running task definitions with a single container inside each one. I'm trying to add some observability to my application by introducing OpenTelemetry. Following the AWS docs I found https://github.com/aws-observability/aws-otel-collector, which is the AWS distribution of the OTEL collector. This collector needs a config file (https://github.com/aws-observability/aws-otel-collector/blob/main/config/ecs/ecs-default-config.yaml) that specifies things like receivers, exporters, etc. I need to be able to create my own config file with a third-party exporter (I also need to add my secret API key somewhere inside there - maybe it can go to Secrets Manager and get mounted as an env var?).
I'm wondering if this is doable purely with CloudFormation (which is what I use to deploy my app) and other Amazon services, without having to build my own image with the config baked in somewhere.
The plan is to add this container beside each app container (inside the task definition) [and yes, I know this is overkill, but for now simple > perfect].
Building an additional image would require some fundamental changes to the CI/CD pipeline, so if I can do without those it would be awesome.
You can't mount an S3 bucket in ECS. S3 isn't a file system; it is object storage. You would need to either switch to EFS, which can be mounted by ECS, or add something to the startup script of your Docker image to download the file from S3.
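The second option can be as small as one extra line in the image's entrypoint (a rough sketch; the bucket and paths are placeholders, and the task role needs s3:GetObject on the object):

    # Fetch the collector config from S3 before the main process starts.
    aws s3 cp s3://my-config-bucket/otel-config.yaml /etc/otel/config.yaml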
I would recommend checking the docs for AWS ADOT. You will find that it supports the AOT_CONFIG_CONTENT config variable (doc). So you don't need a config file, only a config environment variable. That plays very well with the AWS ecosystem, because you can use AWS Systems Manager Parameter Store and/or AWS Secrets Manager to store the OTEL collector configuration (doc).
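Since you deploy with CloudFormation, a minimal sketch of the wiring (the family, parameter name, role, and sizes are placeholders): the full collector config YAML is stored in an SSM parameter, and ECS injects it into the collector container as AOT_CONFIG_CONTENT when the task starts, so no custom image is needed.

    OtelTaskDefinition:
      Type: AWS::ECS::TaskDefinition
      Properties:
        Family: my-app-with-otel
        RequiresCompatibilities: [FARGATE]
        NetworkMode: awsvpc
        Cpu: "256"
        Memory: "512"
        ExecutionRoleArn: !GetAtt TaskExecutionRole.Arn  # needs ssm:GetParameters
        ContainerDefinitions:
          - Name: aws-otel-collector
            Image: public.ecr.aws/aws-observability/aws-otel-collector:latest
            Secrets:
              - Name: AOT_CONFIG_CONTENT
                ValueFrom: !Sub arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/otel-collector-config

Your third-party exporter's API key can live in Secrets Manager and be referenced the same way, as a second entry under Secrets.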
I have a Fargate service running Python (Flask) Docker containers. I was unable to read AWS secrets the way I do from Lambda and from my localhost: the call times out after 30 seconds, and I have partly given up on that as a solution. (The problem is that my Python container cannot reach AWS Secrets Manager.) I do know, however, that I can import my AWS secrets into Terraform, and I can see they are there.
How can I get those values from Terraform into my Docker container as environment variables? Is there any way to inject them and then read them with, for instance, os.environ.get("zzz")?
In your aws_ecs_task_definition, when you specify container_definitions, you can provide secrets:
The secrets to pass to the container. For more information, see Specifying Sensitive Data in the Amazon Elastic Container Service Developer Guide.
This allows you to:
inject sensitive data into your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition.
Thus if you modify your container_definitions to use secrets, they will be automatically injected into your containers.
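A minimal Terraform sketch (the resource names, image, and sizes are placeholders): ECS fetches the Secrets Manager value when the container starts and exposes it as DB_PASSWORD, so your Flask code can simply call os.environ.get("DB_PASSWORD").

    resource "aws_ecs_task_definition" "app" {
      family                   = "app"
      requires_compatibilities = ["FARGATE"]
      network_mode             = "awsvpc"
      cpu                      = "256"
      memory                   = "512"
      # The execution role needs secretsmanager:GetSecretValue on the secret.
      execution_role_arn       = aws_iam_role.task_execution.arn

      container_definitions = jsonencode([
        {
          name      = "app"
          image     = "my-app:latest"
          essential = true
          secrets = [
            {
              name      = "DB_PASSWORD"
              valueFrom = aws_secretsmanager_secret.db_password.arn
            }
          ]
        }
      ])
    }

Note this also sidesteps the timeout you saw: the container never has to call Secrets Manager itself, the ECS agent does it for you.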
With secrets such as a MongoDB password and a Firebase admin password in my Node.js server code, I am wondering how I should go about deploying this to EC2 (and, in the future, to multiple EC2 instances with CodeDeploy / Auto Scaling).
Is there a common way to go about this, i.e. keeping your credentials secure? You could argue that the security layer is at the instance level: make sure that there is no unwanted access to your instance(s) and you should be good. But is this really the way to go?
Given a service that has a secret password in its config file, config.json, create a template config file called config-development.json:
password=[PASSWORD]
CodeDeploy has lifecycle hooks that run during the deployment cycle, e.g. BeforeInstall, Install, AfterInstall. During the AfterInstall hook, get the secret from Parameter Store via the CLI, store it in a variable, and then replace the [PASSWORD] placeholder in the JSON file using sed or any other search-and-replace command-line tool, as in the sketch below.
Rename the resulting file to config.json, and restart the service.
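A rough AfterInstall sketch (the parameter name and paths are placeholders; the instance profile needs ssm:GetParameter, plus kms:Decrypt for SecureString parameters):

    #!/bin/bash
    set -euo pipefail

    # Fetch the decrypted secret from SSM Parameter Store.
    PASSWORD=$(aws ssm get-parameter \
      --name "/myapp/production/password" \
      --with-decryption \
      --query 'Parameter.Value' \
      --output text)

    # Substitute the placeholder and promote the file to config.json.
    sed "s/\[PASSWORD\]/${PASSWORD}/" /opt/myapp/config-development.json \
      > /opt/myapp/config.json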
This approach will allow you to keep secrets out of your repo and pull the real values only from the Parameter Store.
See https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#reference-appspec-file-structure-hooks-list
I am developing a microservice-based app with JHipster (but the question applies to Spring Cloud Config in general). For development I was using docker-compose, and now I'm creating a staging environment on Amazon Elastic Container Service.
I'm facing a problem connecting the registry to Bitbucket to download the Spring Cloud Config files. With docker-compose I was mounting a volume which contained the SSH key required to access Bitbucket:
    services:
      jhipster-registry:
        image: jhipster/jhipster-registry:v3.2.3
        volumes:
          - /home/ubuntu/bb-key:/root/.ssh
How can I pass this key to a container running in ECS?
I can't put it directly on EC2, because I don't know on which instance in the cluster the registry will start. Maybe I should put it on S3 and change the registry image to download it from there? But that doesn't sound right somehow.
I know this is a bit late, but you can add user environment variables. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
Much like export commands within Linux, you can use ECS to pass those variables to the Docker containers the same way you would with the -e switch. This allows you to pass secrets. There might be a better way, but since you can restrict access to those variables, this may be an acceptable workaround. You just need to make the scripts inside the Docker image use those environment variables: since the variables can change over time but the image does not, I normally write my scripts to accept/look for environment variables and document them.
In your case, you can export the SSH key from the RSA key file as a single-line string, and have a script write that variable out to a file in the .ssh directory.
Just have a line like echo "$SSH_KEY" > ~/.ssh/some_key in an entry.sh script or something similar and you should be good: whenever the container starts, it will write the key into the .ssh directory.
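A slightly fuller entry-script sketch (the variable name, key path, and host are assumptions; here the key is passed with literal \n sequences in place of newlines, which printf '%b' expands back):

    #!/bin/sh
    set -eu

    mkdir -p ~/.ssh
    printf '%b' "$SSH_KEY" > ~/.ssh/id_rsa
    chmod 600 ~/.ssh/id_rsa                       # ssh rejects world-readable keys
    ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts 2>/dev/null

    exec "$@"                                     # hand off to the main process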
The other way is, as you described, to use an S3 bucket and leave the key-value pairs (or in this case an SSH key) in there, and have ECS load them through the task scripts, or through AWS CLI commands in the Docker container. However, the latter means you need to add the AWS CLI to your image, which may not be an option depending on what you need the image for, and it requires a small script to run at startup, i.e. an entry script.
If this doesn't solve your issue, let me know and I'll rework this answer to better suit the issue you are having. But from what I read, this should get you in the ballpark of what you need.
One more way is to create an API key that allows access to the Bitbucket repo (or another repo, depending on your ever-changing needs), feed that key in the same way you were thinking of doing with the SSH key, and use the variable in the git command to pull the repo over http(s), if that is an option for your setup.
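A hypothetical one-liner for that variant (the token variable, team, and repo name are placeholders; x-token-auth is Bitbucket's convention for access tokens):

    git clone "https://x-token-auth:${BITBUCKET_TOKEN}@bitbucket.org/myteam/config-repo.git"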
In the environment: section of the docker-compose docs for the AWS ECS CLI (cmd-ecs-cli-compose), the following is stated:
Important
We do not recommend using plaintext environment variables for sensitive information, such as credential data.
What is the recommended way of storing sensitive information, like passwords, with docker-compose and ECS task definitions? Why is plain text not recommended?
Plain text is not recommended for environment variables because Docker is not a security container, and environment variables are readable by any process that has access to the top-level Docker namespace. So if someone has access to /proc on your EC2 instance, they can read the secrets by querying the process running inside the container.
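To illustrate (the container name is a placeholder), anyone with root on the host can do:

    # Find the container's main PID, then dump its environment from /proc.
    PID=$(docker inspect --format '{{.State.Pid}}' my-container)
    sudo tr '\0' '\n' < /proc/$PID/environ       # prints every env var, secrets included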
I recommend either encrypting them with KMS, or storing them in Parameter Store or DynamoDB and downloading them on startup.
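For the KMS option, a rough sketch of a startup step (the ciphertext path and variable name are placeholders): the encrypted blob is shipped with the deployment, and the instance or task role is allowed to call kms:Decrypt.

    # Decrypt the secret at boot; the plaintext only ever lives in memory.
    export DB_PASSWORD=$(aws kms decrypt \
      --ciphertext-blob fileb:///etc/secrets/db-password.enc \
      --query Plaintext --output text | base64 -d)

The Parameter Store variant is similar: aws ssm get-parameter --with-decryption in an entrypoint script, exporting the value before exec'ing the main process.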