How to update .env value of container definitions in AWS? - amazon-web-services

I’m new to AWS ECS deployment. This is my first time.
I have updated the .env in my container definition on my AWS account.
But when I run docker exec e718a29eb0e3 env in my container, I still don't see the latest value.
I even tried
node#db39b382163a:/api$ pm2 restart all
but I still don't see it updated.
Do I need to restart something else?

The native CodePipeline -> ECS integration will only update the container definitions' image attribute, so you cannot use it to manage environment variables. You have a couple of other options:
You can use a Lambda function to drive your deployment instead and have it update both the image and environment attributes.
If you're using CloudFormation to manage your task definition and service, you can use these templates to manage those fields instead of the native integration.
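Whichever option you use, the underlying operation is the same: register a new task definition revision with the changed environment value and roll the service onto it. Containers that are already running never pick up the change, which is why docker exec and pm2 restart keep showing the old values. A rough AWS CLI sketch of that operation (my-taskdef, my-cluster, my-service and MY_VAR are placeholders, and jq is assumed to be available):
# Pull the active task definition and change one environment value.
TD=$(aws ecs describe-task-definition --task-definition my-taskdef \
      --query 'taskDefinition' --output json)
NEW_TD=$(echo "$TD" | jq '
  .containerDefinitions[0].environment |=
    map(if .name == "MY_VAR" then .value = "new-value" else . end)
  | del(.taskDefinitionArn, .revision, .status, .requiresAttributes,
        .compatibilities, .registeredAt, .registeredBy)')
# Register the result as a new revision and point the service at it; ECS then
# replaces the running tasks with containers that see the new value.
NEW_ARN=$(aws ecs register-task-definition --cli-input-json "$NEW_TD" \
            --query 'taskDefinition.taskDefinitionArn' --output text)
aws ecs update-service --cluster my-cluster --service my-service \
  --task-definition "$NEW_ARN"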

Related

Create a file at container launch in aws fargate

This is the first time I'm using Fargate and Docker. I know Fargate uses the Docker API to run containers, and I'm trying to launch a container based on an image that needs some files to be available inside the container when it launches, along with some env variables. The problem is that the command and entrypoint settings are confusing me, and whatever I try, it just won't create these files. The command I'm trying to run is:
echo "some text here" >> /path/to/file/file
How can I do this with command or entrypoint in the Fargate console?
Thank you,
To perform any operations such as the creation of mandatory files, you would generally include them within the Dockerfile that then becomes your Docker image.
You generally don't want to override the command/entrypoint settings in the task definition, as that overrides the container's default behaviour and may stop it from doing what it is supposed to do.
If some files need to exist within the Docker container, there are a couple of suggestions I would go with:
Create an EFS mount and use it within your containers. The files will then always exist for all containers, and the container can launch without additional tasks.
If the files must be available ahead of the container launch, you could build a new Docker image containing your additional changes and deploy it to ECR (see the sketch after this list).
If the files do not need to exist before the application starts, the application itself could create them at launch.
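As a hedged sketch of the second suggestion (base image name, account id, region and repository are placeholders), you would bake the file into a new image, push it to ECR, and point the task definition at the new image:
# Write a Dockerfile that creates the file the application expects.
cat <<'EOF' > Dockerfile
FROM my-base-image:latest
RUN mkdir -p /path/to/file && echo "some text here" >> /path/to/file/file
EOF
# Build, log in to ECR, and push the new image.
docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest .
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest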
Regarding environment variables, there are two options, both officially supported within ECS. The first is the environment attribute, which should be used for non-sensitive information.
The second is the secrets attribute, which loads a secret from either Secrets Manager or SSM Parameter Store.
These will be available to the containers on launch.
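For illustration, a fragment of a container definition showing both attributes (container name, image URI and parameter ARN are placeholders); this JSON goes under containerDefinitions in the task definition, and injecting the secret requires the task's execution role to be allowed to read that parameter:
cat <<'EOF' > container-definitions.json
[
  {
    "name": "api",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    "environment": [
      { "name": "NODE_ENV", "value": "production" }
    ],
    "secrets": [
      {
        "name": "DB_PASSWORD",
        "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/my-app/db-password"
      }
    ]
  }
]
EOF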

Container is not able to call S3 in Fargate

I'm not able to synchronize a log-folder to s3 inside a container.
I'm trying to get the following setup:
a Docker container with the awscli installed
logfiles and other files generated inside the container
a cronjob which calls the "aws s3 sync" command through a shell script.
The synchronisation is not working properly and I'm not sure why not.
I tried the following, which worked just fine:
provided access key/secret access key inside the docker container
this worked locally, with plain ECS and with fargate
but it's not recommended to use the access keys
plain ECS without any keys (just the IAM role)
this worked too
I played a little with the configuration and read through the documentation.
The only hints I got are:
Does it have something to do with the network mode "awsvpc" (which Fargate has to use)?
Does it have something to do with the "AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" environment variable?
I found a few hits on the web, but I'm not sure whether it's set or not; I'm not able to look inside the container on Fargate.
The ECS task definition has two parameters related to IAM roles:
executionRoleArn - gives ECS itself the access needed to start the task, such as pulling images from ECR and writing logs to CloudWatch.
taskRoleArn - allows the task to make AWS API calls to interact with AWS resources such as S3, etc.
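A hedged sketch of where the two roles go when registering a Fargate task definition (family, role names, account id and the container definitions file are placeholders):
# executionRoleArn lets ECS itself pull the image and write logs;
# taskRoleArn is what "aws s3 sync" inside the container authenticates with.
aws ecs register-task-definition \
  --family log-sync-task \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 256 --memory 512 \
  --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
  --task-role-arn arn:aws:iam::123456789012:role/logSyncS3Role \
  --container-definitions file://container-definitions.json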
In my case I had a shell script which I called via the entrypoint in the task definition. I had correctly set the task role with access to S3, yet it did not work. So, using the information provided here https://forums.aws.amazon.com/thread.jspa?threadID=273767#898645
I added the following as the first line in my shell script:
export AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
Still it did not work. Then I upgraded the AWS CLI in the Docker container to version 2 and it worked. So for me the real problem was that the Docker image had an old CLI version.
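For reference, the upgrade inside a Linux x86_64 image looks roughly like this (run at image build time; curl and unzip are assumed to be present):
# Official AWS CLI v2 installer for Linux x86_64.
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
unzip awscliv2.zip
./aws/install
aws --version   # should now report aws-cli/2.x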

Continuous Deployment of Docker Compose App to AWS/EC2

I've been trying to find an efficient way to handle continuous deployment with a Docker compose setup and AWS hosting.
So far I've looked into CodeDeploy, S3 buckets, and ECS. My application is relatively small with only 3 Docker services: a Django app, NGINX, and PostgreSQL. I was unable to find any reliable information for using CodeDeploy with Docker Compose, and because of the small scale ECS seems impractical. I've considered an S3 bucket, but that seems no better than just deploying my application with something like git or scp.
What is a standard way of handling deploying a docker compose setup on AWS? If possible I would like to use Bitbucket Pipelines or CircleCI to perform the deployment in a manually triggered step after running tests. But I've been unable to find a solution that would easily let me copy over the code (which is in a git repo on a production branch and is how I get the code onto the production server at the moment).
I would like to add some possibilities to @gasc's answer.
It would be better if you make a CloudFormation template for deploying your EC2 resources with all the required groups, auto scaling and other stuff.
Then create an AMI with Docker Compose installed, plus anything else required for your EC2 environment.
Then you can use a CodeDeploy pipeline; AWS also provides a private container registry (ECR) in case you want to use that.
The rest of the steps are the same: just SCP the compose file onto the EC2 instance and run the
docker-compose up
command, and you are done (see the sketch below).
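A minimal sketch of that last step, assuming Docker and docker-compose are already baked into the AMI (key file, user, host and remote path are placeholders):
# Copy the compose file to the instance and bring the stack up detached.
scp -i my-key.pem docker-compose.yml ec2-user@my-ec2-host:/opt/app/docker-compose.yml
ssh -i my-key.pem ec2-user@my-ec2-host "cd /opt/app && docker-compose up -d"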
Let me know if you want more help; I'm open to discussion.
What I will do in your case is:
1 - If needed, update your docker-compose.yml file (or whatever you called it) to version 3 or higher, to use swarm.
2 - During your pipeline, build all the images needed and push them to a registry.
3 - In your pipeline scp your compose file to a manager node.
4 - Deploy your application using swarm (docker stack deploy -c <your-docker-compose-file> your_app_name). This way you can handle rolling updates and scale easily.
Note that if you want to use multiple nodes, you need to open a few ports on them.
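A hedged sketch of steps 2-4 as they might appear in a pipeline script (registry URL, tag variable, user, host, path and stack name are all placeholders; the host is assumed to already be a swarm manager):
# Step 2: build and push the image(s) to a registry the swarm nodes can reach.
docker build -t registry.example.com/my-app:"$IMAGE_TAG" .
docker push registry.example.com/my-app:"$IMAGE_TAG"
# Steps 3 and 4: ship the compose file to a manager node and deploy the stack.
scp docker-compose.yml deploy@manager-host:/opt/my-app/docker-compose.yml
ssh deploy@manager-host \
  "docker stack deploy -c /opt/my-app/docker-compose.yml my_app"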
I see you mentioned that ECS might seem impractical for such a small scale - in my opinion, not necessarily. It would require you to rewrite your docker-compose.yml into task and service definitions, but since there aren't a lot of services, that shouldn't take you much time.

Deploying jhipster registry on Amazon ECS

I am developing a microservice-based app with jHipster (but the question applies to Spring Cloud Config in general). For development purposes I was using docker-compose, and now I'm creating a staging environment on Amazon Elastic Container Service.
I'm facing a problem with connecting the registry to Bitbucket to download the Spring Cloud Config files. With docker-compose I was mounting a volume which contained the SSH key that is required to access Bitbucket:
services:
  jhipster-registry:
    image: jhipster/jhipster-registry:v3.2.3
    volumes:
      - /home/ubuntu/bb-key:/root/.ssh
How can I pass this key to a container running in ECS?
I can't put it directly on EC2 - I don't know on which instance in the cluster the registry will start. Maybe I should put it on S3 and change the registry image to download it from there? But that doesn't sound quite right.
I know this is a bit late, but you can add user environment variables. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
Much like export commands within Linux, you can use ECS to pass those variables to the Docker instances, much the same way you would with the -e switch. This allows you to pass secrets. There might be a better way, but since you can restrict access to those variables, this may be an acceptable workaround. You just need to rework any scripts within the Docker image to use those environment variables; since the variables can change over time but the image does not, I normally make my scripts accept/look for environment variables and document those.
In your case, you can write a script to export the SSH key: take the string from the RSA key file (it is all one line) and have the script output it into a file in the .ssh directory.
echo $SSH_KEY > ~/.ssh/some_key
Just have this line of code in an entry.sh script or something similar and you should be good. Whenever the container starts, it will output the key into the .ssh file.
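A slightly fuller, hedged sketch of such an entry.sh (the SSH_KEY variable name, key path and Bitbucket host are assumptions to adapt to your setup):
#!/bin/sh
# Materialise the key passed in via the SSH_KEY environment variable.
mkdir -p /root/.ssh
echo "$SSH_KEY" > /root/.ssh/id_rsa
chmod 600 /root/.ssh/id_rsa
# Avoid an interactive host-key prompt on the first connection to Bitbucket.
ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts 2>/dev/null
# Hand off to whatever command the image normally runs.
exec "$@"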
The other way is, as you described, to use an S3 bucket and leave the key-value pairs in there (or in this case an SSH key), and ECS can load those through the task scripts, or through AWS CLI commands in the Docker container. However, the last part means you need to add the AWS CLI to your image, which may not be an option depending on what you need the image for, and it requires a small script to run at startup, i.e. an entry script.
If this doesn't solve your issue, let me know and I'll rework this answer to better suit the issue you are having. But from what I read, this should get you in the ballpark of what you need.
One more way is to make an API key that will allow you to access the Bitbucket repo (or another repo, depending on ever-changing needs), feed that key in the same way you were thinking of doing with the SSH key, and just use the variable in the git command to pull the repo over http(s), if that is an option for your setup.

Are ECS Task Definitions supposed to be committed into version control?

I'm fairly new to AWS ECS and am unsure if you are supposed to commit into version control some things that AWS already hosts for you. Specifically, should task-definitions created on AWS be committed to GitHub? Or do we just use AWS ECS/ECR as the version control FOR ECS Task definitions?
First, ECS/ECR is not used for version control like GitHub.
ECS is a container management service, whereas ECR is simply a Docker registry on AWS.
A task definition is used to run Docker containers in ECS. You can create a task definition during or after creating an ECS cluster, and you can create/modify it in many ways: the console, the AWS CLI, AWS CloudFormation, Terraform, etc. It depends on how you want to do this and how frequently you change the task definition. Yes, you can keep your task definition in GitHub and create an automated job to register it from there every time, but there is no need to store the task definition anywhere once your task is running.
ECR is the Elastic Container Registry, which is used to store container images.
ECS task definition: a task definition is required to run Docker containers in Amazon ECS. Some of the parameters you can specify in a task definition include the Docker images to use with the containers in your task.
You have to provide your ECR image URI when creating the task definition, or it will look in the default Docker Hub registry for the container image.
Also, you can keep your task definition JSON in any version control system if you want to use it later on.
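As a hedged sketch of that approach (file, cluster, service and family names are placeholders), a CI job could register the committed JSON and roll the service onto the new revision:
# task-definition.json lives in the repository alongside the application code.
aws ecs register-task-definition --cli-input-json file://task-definition.json
# Point the service at the newest active revision of the family (here "my-app").
aws ecs update-service --cluster my-cluster --service my-service \
  --task-definition my-app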