I am developing a microservice-based app with JHipster (but the question applies to Spring Cloud Config in general). For development I was using docker-compose, and now I'm creating a staging environment on Amazon Elastic Container Service (ECS).
I'm facing a problem connecting the registry to Bitbucket to download the Spring Cloud Config files. With docker-compose I was mounting a volume containing the SSH key that is required to access Bitbucket:
services:
  jhipster-registry:
    image: jhipster/jhipster-registry:v3.2.3
    volumes:
      - /home/ubuntu/bb-key:/root/.ssh
I don't know how I can pass this key to a container running in ECS.
I can't put it directly on EC2, because I don't know which instance in the cluster the registry will start on. Maybe I should put it on S3 and change the registry image to download it from S3? But that somehow doesn't feel right.
I know this is a bit late, but you can add user-defined environment variables. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
Much like export commands in Linux, you can use ECS to pass those variables to the Docker containers the same way you would with the -e switch. This allows you to pass secrets. There might be a better way, but since you can restrict access to those variables, this may be an acceptable workaround. You just need to adapt any scripts within the Docker image to use those environment variables; since the variables can change over time but the image cannot, I normally make my scripts accept/look for environment variables and document them.
In your case, you can take the SSH key string from the RSA key file, export it as an environment variable, and have a script write that value out to a file in the .ssh directory.
echo "$SSH_KEY" > ~/.ssh/some_key
Just have a line like this in an entry.sh script (or something similar) and you should be good: whenever the container starts, it will write the key into the .ssh file.
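A fuller sketch of such an entry script, assuming the variable is called SSH_KEY and that the image's normal start command is passed through as the container command:

#!/bin/sh
# entry.sh - sketch only: write the key from the SSH_KEY environment variable
# to a file, then hand off to the image's normal command.
mkdir -p /root/.ssh
printf '%s\n' "$SSH_KEY" > /root/.ssh/id_rsa   # printf keeps the newlines of a multi-line key
chmod 600 /root/.ssh/id_rsa
exec "$@"                                      # run the original container command

You may also need to pre-populate known_hosts for bitbucket.org (or relax strict host key checking) so the first Git-over-SSH pull doesn't stall on the host-key prompt.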
The other way is, as you described, to use an S3 bucket and keep the key/value pairs (or in this case the SSH key) in there, and have ECS load them through the task scripts or through AWS CLI commands in the Docker container. However, the latter means you need to add the AWS CLI to your image, which may not be an option depending on what you need the image for, and it requires a small script to run at startup, i.e. an entry script.
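A sketch of that S3 variant; the bucket and object names are placeholders, and the task (or instance) role needs s3:GetObject on that object:

#!/bin/sh
# entry.sh - sketch only: pull the key from a private S3 bucket at startup.
aws s3 cp s3://my-config-bucket/registry/id_rsa /root/.ssh/id_rsa
chmod 600 /root/.ssh/id_rsa
exec "$@"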
If this doesn't solve your issue, let me know, and I'll rework this answer to better suit the issue you are having. But from what I read, this should get you in the ball park of what you need.
One more way is to create an API key (or app password) that allows you to access the Bitbucket repo, or another repo depending on ever-changing needs, feed that key in the same way you were thinking of doing the SSH key, and use the variable in the Git URL to pull the repo, using http(s) if that is an option for your setup.
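A hedged sketch of that HTTPS variant: these property names follow Spring Cloud Config Server's standard Git settings via Spring Boot's relaxed environment-variable binding (check which names your registry version honours), and the URL and credentials are placeholders:

SPRING_CLOUD_CONFIG_SERVER_GIT_URI=https://bitbucket.org/your-org/central-config.git
SPRING_CLOUD_CONFIG_SERVER_GIT_USERNAME=ci-user
SPRING_CLOUD_CONFIG_SERVER_GIT_PASSWORD=<app password or API key, supplied via the ECS environment settings>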
I have an ECS cluster running task definitions with a single container inside each one. I'm trying to add some fancy observability to my application by introducing OpenTelemetry. Following the AWS docs I found https://github.com/aws-observability/aws-otel-collector, which is the AWS distribution of the OTEL collector. This collector needs a config file (https://github.com/aws-observability/aws-otel-collector/blob/main/config/ecs/ecs-default-config.yaml) that specifies things like receivers, exporters, etc. I need to be able to create my own config file with a 3rd-party exporter (I also need to add my secret API key somewhere inside it; maybe it can go into Secrets Manager and get mounted as an env var).
I'm wondering if this is doable without having to build my own image with the config baked in somewhere, purely using CloudFormation (which is what I use to deploy my app) and other Amazon services.
The plan is to add this container beside each app container (inside the task definition); yes, I know this is overkill, but for now simple > perfect.
Building an additional image would require some fundamental changes to the CI/CD, so if I can do without those it would be awesome.
You can't mount an S3 bucket in ECS. S3 isn't a file system, it is object storage. You would need to either switch to EFS, which can be mounted by ECS, or add something to the startup script of your docker image to download the file from S3.
I would recommend checking the docs for AWS ADOT. You will find that it supports the config variable AOT_CONFIG_CONTENT (doc). So you don't need a config file, only a config env variable. That plays very well with the AWS ecosystem, because you can use AWS Systems Manager Parameter Store and/or AWS Secrets Manager, where you can store the OTEL collector configuration (doc).
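A hedged sketch of that flow; the parameter name and file name are placeholders. First push your collector YAML into Parameter Store:

aws ssm put-parameter \
  --name /otel/collector-config \
  --type SecureString \
  --value file://my-otel-collector-config.yaml

Then, in the CloudFormation container definition for the collector, add a Secrets entry with Name AOT_CONFIG_CONTENT and ValueFrom set to that parameter's ARN (and give the task execution role permission to read it). The 3rd-party exporter's API key can live inside that parameter or be injected as a separate secret.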
With secrets such as the MongoDB password and the Firebase admin password in my Node.js server code, I am wondering how I should go about deploying this to EC2 (and, in the future, to multiple EC2 instances with CodeDeploy / Auto Scaling).
Is there a common way to go about this - keeping your credentials secure? You could argue that the security layer is at the instance: make sure that there is no unwanted access to your instance(s) and you should be good. But is this really the way to go?
Given a service that has a secret password in its config file, config.json, create a template config file called config-development.json:
password=[PASSWORD]
During CodeDeploy there are scripts, or hooks, that run during the deployment lifecycle, e.g. BeforeInstall, Install, AfterInstall. During the AfterInstall hook, get the secret from the Parameter Store via the CLI, store it in a variable, and then replace the [PASSWORD] value in the JSON file using sed or any other search-and-replace command-line tool.
Rename the resulting file to config.json and restart the service.
This approach allows you to keep secrets out of your repo and use only the value from the Parameter Store.
See https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#reference-appspec-file-structure-hooks-list
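A sketch of what that AfterInstall hook could look like; the parameter name, paths and service name here are placeholders:

#!/bin/bash
# after_install.sh - sketch only: fetch the secret and substitute the
# [PASSWORD] placeholder in the template config.
set -euo pipefail
PASSWORD=$(aws ssm get-parameter \
  --name /myservice/db-password \
  --with-decryption \
  --query 'Parameter.Value' --output text)
# assumes the password contains no characters that sed treats specially
sed "s/\[PASSWORD\]/${PASSWORD}/" /opt/myservice/config-development.json \
  > /opt/myservice/config.json
systemctl restart myservice

The script is wired up under the AfterInstall hook in appspec.yml (or the restart can move to the ApplicationStart hook).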
This is the first time I'm using Fargate and Docker. I know Fargate uses the Docker API to run containers, and I'm trying to launch a container based on an image that needs some files to be available inside the container when it launches, along with some env variables. The problem is that the command and entrypoint settings are confusing me, and whatever I try, the files just won't get created. The command I'm trying to run is:
echo "some text here" >> /path/to/file/file
How can I do this with the command or entrypoint fields in the Fargate console?
Thank you,
Operations such as the creation of mandatory files would generally be included in the Dockerfile that then becomes the Docker image.
You generally don't want to override the command/entrypoint settings in the task definition, as this changes the behaviour of the container and may prevent the expected behaviour from running at all.
If some file needs to exist within the Docker container, there are a couple of suggestions I would go with:
Create an EFS mount and use it within your containers. The files will then always exist for all containers, and the container can launch without any additional steps.
If the file must be available ahead of the container launch, you could create a new Docker image containing your additional changes and deploy it to ECR.
If it does not need to exist ahead of the application, it could be created by the application on launch.
Regarding environment variables, there are two options, both officially supported within ECS. The first is the environment attribute, which should be used for non-sensitive information.
The second option is the secrets attribute, which loads a secret from either Secrets Manager or SSM Parameter Store.
These will be available to the containers on launch.
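A hedged sketch of both attributes in a container definition, registered via the CLI; every name, ARN and value below is a placeholder:

# "environment" carries plain values, "secrets" pulls values from
# SSM Parameter Store / Secrets Manager when the task launches.
cat > container-definitions.json <<'EOF'
[
  {
    "name": "app",
    "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/app:latest",
    "environment": [
      { "name": "APP_MODE", "value": "staging" }
    ],
    "secrets": [
      { "name": "API_KEY",
        "valueFrom": "arn:aws:ssm:eu-west-1:123456789012:parameter/app/api-key" }
    ]
  }
]
EOF
aws ecs register-task-definition \
  --family app \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc --cpu 256 --memory 512 \
  --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
  --container-definitions file://container-definitions.json

The task execution role must be allowed to read the referenced parameter/secret for the secrets attribute to resolve.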
I have a basic django/postgres app running locally, based on the Docker Django docs. It uses docker compose to run the containers locally.
I'd like to run this app on Amazon Web Services (AWS), and to deploy it using the command line, not the AWS console.
My Attempt
When I tried this, I ended up with:
this yml config for ecs-cli
these notes on how I deployed from the command line.
Note: I was trying to fire up the Python dev server in a janky way, hoping that would work before I added nginx. The cluster (RDS+server) would come up, but then the instances would die right away.
Issues I Then Failed to Solve
I realized over the course of this:
the setup needs another container for a web server (nginx) to run on AWS (like this blog post, but the tutorial uses the AWS Console, which I wanted to avoid)
ecs-cli uses a different syntax for its yml/json config than docker-compose, so you need a separate (if similar) config alongside your local docker.yml (and I'm not sure if my file above was correct)
Question
So, what ecs-cli commands and config do I use to deploy the app, or am I going about things all wrong?
Feel free to say I'm doing it all wrong. I could also use Elastic Beanstalk - the tutorials on this don't seem to use docker/docker-compose, but seem easier overall (at least well documented).
I'd like to understand why any given approach is a good way to do this.
One alternative you may wish to consider in lieu of ECS, if you just want to get it up in the amazon cloud, is to make use of docker-machine using the amazonec2 driver.
When executing docker-compose, just ensure the remote Amazon host machine is ACTIVE, which can be checked with docker-machine ls.
One item you will have to revisit in the AWS Management Console is opening the applicable ports, such as port 80 and any other ports exposed in the compose file. Once the security group is in place for the VPC, you should be able to simply refer to the VPC ID on subsequent runs, bypassing any need to use the console to add the ports. You may also wish to bump the instance size up from the default t2.micro to match the t2.medium specified in your notes.
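A hedged sketch of that flow; the machine name, region and port are placeholders, and docker-machine puts its EC2 hosts in a security group named docker-machine by default:

# Create the remote Docker host on EC2 and point the local client at it.
docker-machine create --driver amazonec2 \
  --amazonec2-region eu-west-1 \
  --amazonec2-instance-type t2.medium \
  django-stage
aws ec2 authorize-security-group-ingress \
  --group-name docker-machine --protocol tcp --port 80 --cidr 0.0.0.0/0
eval "$(docker-machine env django-stage)"   # makes django-stage the ACTIVE machine
docker-machine ls                           # django-stage should show an asterisk under ACTIVE
docker-compose up -d                        # compose now runs against the EC2 host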
If ECS orchestration is needed, then a task definition will need to be created containing the container definitions you require, as defined in your docker-compose file. My recommendation would be to take advantage of the Management Console to construct the definition, grab the accompanying JSON definition it makes available, and store it in your source code repository. On future command-line runs it can then be referenced when registering task definitions and running tasks and services within a given cluster.
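The command-line side of that, roughly; the cluster, service and file names are placeholders:

# Register the console-generated task definition and run it as a service.
aws ecs register-task-definition --cli-input-json file://django-taskdef.json
aws ecs create-service \
  --cluster django-stage \
  --service-name web \
  --task-definition django-app \
  --desired-count 1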
I'm using Packer, and I'm new to creating machine images, although I've created and deployed Docker containers before.
One concept I'd like to apply to the machine image building that I've found useful with Docker images is using the same exact image for staging testing that gets deployed to production. The different environments behave differently due to different environment variable values being passed in on startup, which in the case of Docker containers is often handled by a startup script ("entrypoint" in Docker terminology).
This has worked fine for me, but now I need to handle SSL certificates (actual files) being different between staging and production. In the case of Docker containers, you could just mount different volumes to the container. But I can't do that with machine images.
So how do people handle this scenario with machine images? Should I store these important files encrypted externally, and curl them in a startup script?
You could consider using a configuration management tool such as Ansible or Puppet to do any environment/host-specific configuration you need once Packer has built the bulk of the image.
Alternatively, you could do as you mentioned and simply have a startup script curl the appropriate SSL certs (or any other environment-specific files/config) from some location. Considering you've tagged your question with amazon-web-services, you could use separate, private S3 buckets for testing and production and only allow certain instances access to the relevant bucket via IAM roles. That protects the data from being viewed by others or by the wrong environment, and it also reduces the need to encrypt the data and then manage keys as well.
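A startup-script sketch along those lines; the bucket names and paths are placeholders, and each environment's instance profile is only granted s3:GetObject on its own bucket:

#!/bin/sh
# Sketch: pull this environment's certs from its private bucket at boot.
aws s3 cp s3://example-production-certs/server.crt /etc/ssl/certs/server.crt
aws s3 cp s3://example-production-certs/server.key /etc/ssl/private/server.key
chmod 600 /etc/ssl/private/server.key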
When you launch EC2 instances from your AMI, you can specify tags. Inside the instances you can use the AWS CLI to read those tags, so you can craft a script that runs when the system starts and loads whatever external files you want based on the tag values (e.g. from a private S3 bucket, as ydaetskcoR suggested).
This is also useful: Find out the instance id from within an ec2 machine
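Putting the two together, a hedged sketch; the Environment tag key and bucket naming scheme are assumptions:

#!/bin/sh
# Sketch: read this instance's Environment tag at boot and fetch the matching files.
# (IMDSv1 shown for brevity; instances enforcing IMDSv2 need a session token first.)
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/region)
ENVIRONMENT=$(aws ec2 describe-tags --region "$REGION" \
  --filters "Name=resource-id,Values=${INSTANCE_ID}" "Name=key,Values=Environment" \
  --query 'Tags[0].Value' --output text)
aws s3 cp "s3://example-${ENVIRONMENT}-certs/" /etc/ssl/ --recursive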