Create a file at container launch in AWS Fargate

This is the first time I'm using Fargate and Docker. I know Fargate uses the Docker API under the hood, and I'm trying to launch a container based on an image that needs some files to be available inside the container when it launches, along with some environment variables. The problem is that the command and entrypoint settings are confusing me, and whatever I try, the files just won't get created. The command I'm trying to run is:
echo "some text here" >> /path/to/file/file
How can I do this with the command or entrypoint in the Fargate console?
Thank you,

To perform operations such as creating mandatory files, you would generally include them in the Dockerfile that becomes your Docker image.
You generally don't want to override the command/entrypoint settings in the task definition, as this replaces the container's default behaviour and may stop the expected behaviour from running.
If some files need to exist within the Docker container, there are a couple of approaches I would suggest:
Create an EFS mount and use it within your containers. The files will then always exist for all containers, and the container can launch without additional steps.
If the files must be available ahead of the container launch, you could build a new Docker image containing your changes and push it to ECR (see the sketch after this list).
If they do not need to exist ahead of the application start, they could be created by the application at launch.
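As a rough sketch of the second option, assuming the base image below is a placeholder and using the path from the question, the file could be baked in at image build time:

# Dockerfile sketch: base image is a placeholder for the image you launch today
FROM your-existing-image:latest
# Create the required file when the image is built, not at container launch
RUN mkdir -p /path/to/file && \
    echo "some text here" >> /path/to/file/file

Pushing that image to ECR and pointing the task definition at it avoids touching the command/entrypoint settings at all.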
Regarding environment variables, there are two options, both officially supported within ECS. The first is the environment attribute, which should be used for non-sensitive information.
The second is the secrets attribute, which loads a secret from either Secrets Manager or SSM Parameter Store.
Both are available to the containers on launch.
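A minimal container definition sketch showing both attributes (the name, image, and ARN are placeholders):

"containerDefinitions": [
  {
    "name": "app",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
    "environment": [
      { "name": "APP_MODE", "value": "production" }
    ],
    "secrets": [
      { "name": "DB_PASSWORD", "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/db-password" }
    ]
  }
]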

Related

Container is not able to call S3 in Fargate

I'm not able to synchronize a log folder to S3 from inside a container.
I'm trying to get the following setup working:
a Docker container with the AWS CLI installed
log files and other files generated inside the container
a cron job which calls aws s3 sync through a shell script
The synchronisation is not working properly and I'm not sure why.
I tried the following, which worked just fine:
providing the access key/secret access key inside the Docker container
this worked locally, with plain ECS, and with Fargate
but using access keys is not recommended
plain ECS without any keys (just the IAM role)
this worked too
I played around with the configuration and read through the documentation.
The only hints I have are:
Does it have something to do with the network mode "awsvpc" (which Fargate has to use)?
Does it have something to do with the "AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" environment variable?
I found a few hits on the web, but I'm not sure whether it's set or not, and I'm not able to look inside the container on Fargate.
An ECS task definition has two parameters related to IAM roles:
executionRoleArn - grants permission to start the task and container by performing the needed actions, such as pulling images from ECR and writing logs to CloudWatch.
taskRoleArn - allows the task to make AWS API calls to interact with AWS resources such as S3, etc.
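A minimal task definition sketch showing where each role goes (family, role names, account ID, and image are placeholders):

{
  "family": "log-sync",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "taskRoleArn": "arn:aws:iam::123456789012:role/s3SyncTaskRole",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest"
    }
  ]
}

For the aws s3 sync case, the S3 permissions belong on the role referenced by taskRoleArn, not on the execution role.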
In my case I had a shell script which I called via the entrypoint in the task definition. I had correctly set the task role with access to S3, but it still did not work. Following the information provided here https://forums.aws.amazon.com/thread.jspa?threadID=273767#898645
I added the following as the first line of my shell script:
export AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
It still did not work. I then upgraded the AWS CLI in the Docker container to version 2 and it worked. So for me the real problem was that the Docker image had an old CLI version.
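If you hit the same problem, a sketch of upgrading the CLI inside the image (assuming curl and unzip are available in the base image) looks like this:

# Dockerfile fragment: install AWS CLI v2 from the official installer bundle
RUN curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o /tmp/awscliv2.zip && \
    unzip -q /tmp/awscliv2.zip -d /tmp && \
    /tmp/aws/install && \
    rm -rf /tmp/awscliv2.zip /tmp/aws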

How to update .env value of container definitions in AWS?

I’m new to AWS ECS deployment. This is my first time.
I have updated the .env values in my container definition on my AWS account.
But when I run docker exec e718a29eb0e3 env in my container, I still don't see the updated value.
I even tried
node@db39b382163a:/api$ pm2 restart all
but I'm still not seeing it updated.
Do I need to restart something else?
The native CodePipeline -> ECS integration only updates the container definitions' image attribute, so you cannot use it to manage environment variables. You have a couple of other options:
You can use a Lambda function to drive your deployment instead and have it edit both the image and environment attributes.
If you're using CloudFormation to manage your task definition and service, you can manage those fields in your templates instead of using the native integration (see the sketch below).
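A rough sketch of the CloudFormation route, as a fragment rather than a complete task definition (resource name, image, and variable values are placeholders); the environment variables live on the container definition, so updating the template and deploying the stack produces a new task definition revision:

Resources:
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: api
      ContainerDefinitions:
        - Name: api
          Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest
          Memory: 512
          Environment:
            - Name: SOME_SETTING
              Value: new-value

Either way, the running containers only pick up the new values when the service deploys the new task definition revision; editing values inside an already running container (or restarting pm2) won't change what docker exec ... env shows.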

Deploying jhipster registry on Amazon ECS

I am developing a microservice-based app with JHipster (but the question applies to Spring Cloud Config in general). For development purposes I was using docker-compose, and now I'm creating a stage environment on Amazon Elastic Container Service.
I'm facing a problem connecting the registry to Bitbucket to download the Spring Cloud Config files. With docker-compose I was mounting a volume which contained the SSH key required to access Bitbucket:
services:
  jhipster-registry:
    image: jhipster/jhipster-registry:v3.2.3
    volumes:
      - /home/ubuntu/bb-key:/root/.ssh
How can I pass this key to a container running in ECS?
I can't put it directly on an EC2 instance - I don't know on which instance in the cluster the registry will start. Maybe I should put it on S3 and change the registry image to download it from there? But that doesn't feel right.
I know this is a bit late, but you can add user environment variables: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
Much like export commands within Linux, you can use ECS to pass those variables to the Docker containers in much the same way you would with the -e switch. This allows you to pass secrets. There might be a better way, but since you can restrict access to those variables, this may be an acceptable workaround. You just need to adapt any scripts within the Docker image to use those environment variables; since the variables can change over time but the image does not, I normally make my scripts accept/look for environment variables and document them.
In your case, you can export the SSH key (the single-line contents of the RSA key file) as an environment variable and have a script write it into a file in the .ssh directory:
echo $SSH_KEY > ~/.ssh/some_key
Just have this line of code in an entry.sh script or something similar and you should be good. Whenever the container starts, it will write the key into the .ssh file.
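A minimal entry.sh sketch along those lines; SSH_KEY is assumed to be provided via the task definition, and the final line simply hands off to whatever command the image normally runs:

#!/bin/sh
# Write the key passed in via the SSH_KEY environment variable into the .ssh directory
mkdir -p /root/.ssh
echo "$SSH_KEY" > /root/.ssh/id_rsa
chmod 600 /root/.ssh/id_rsa
# Hand off to the container's normal start command
exec "$@"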
The other way, as you described, is to use an S3 bucket and keep the key/value pairs (or in this case the SSH key) there; ECS can load those through the task scripts, or through AWS CLI commands in the Docker container. However, that means you need to add the AWS CLI to your image, which may not be an option depending on what you need the image for, and it requires a small script to run at startup, i.e. an entry script.
If this doesn't solve your issue, let me know and I'll rework this answer to better suit the issue you're having. But from what I read, this should get you in the ballpark of what you need.
One more way is to create an API key that allows you to access the Bitbucket repo (or another repo, depending on your needs), feed that key in the same way you were thinking of doing with the SSH key, and use the variable in the git command to pull the repository over http(s), if that is an option for your setup.

What commands/config can I use to deploy a django+postgres+nginx using ecs-cli?

I have a basic django/postgres app running locally, based on the Docker Django docs. It uses docker-compose to run the containers.
I'd like to run this app on Amazon Web Services (AWS), and to deploy it using the command line, not the AWS console.
My Attempt
When I tried this, I ended up with:
this yml config for ecs-cli
these notes on how I deployed from the command line.
Note: I was trying to fire up the Python dev server in a janky way, hoping that would work before I added nginx. The cluster (RDS+server) would come up, but then the instances would die right away.
Issues I Then Failed to Solve
I realized over the course of this:
the setup needs another container for a web server (nginx) to run on AWS (like this blog post, but the tutorial uses the AWS Console, which I wanted to avoid)
ecs-cli uses a different syntax for yml/json config than docker-compose, so you need separate (though similar) config from your local docker.yml (and I'm not sure if my file above was correct)
Question
So, what ecs-cli commands and config do I use to deploy the app, or am I going about things all wrong?
Feel free to say I'm doing it all wrong. I could also use Elastic Beanstalk - the tutorials on this don't seem to use docker/docker-compose, but seem easier overall (at least well documented).
I'd like to understand why any given approach is a good way to do this.
One alternative you may wish to consider in lieu of ECS, if you just want to get the app up in the Amazon cloud, is to make use of docker-machine with the amazonec2 driver.
When executing docker-compose, just ensure the remote Amazon host machine is ACTIVE, which you can check with docker-machine ls.
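A rough sketch of that workflow (machine name and region are placeholders, and AWS credentials are assumed to be available in the environment):

# Provision an EC2 host and point the local Docker client at it
docker-machine create --driver amazonec2 --amazonec2-region us-east-1 django-stage
eval $(docker-machine env django-stage)
docker-machine ls        # the new machine should show as ACTIVE
# Bring the stack up on the remote host using the existing compose file
docker-compose up -d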
One item you will have to revisit in the Amazon Mgmt Console is opening the applicable ports, such as port 80 and any other ports exposed in the compose file. Once the security group is in place for the VPC, you should be able to simply refer to the VPC ID on subsequent runs, bypassing any need to use the Mgmt Console to add the ports. You may also wish to bump up the instance size from the default t2.micro to match the t2.medium specified in your notes.
If ECS orchestration is needed, then a task definition will need to be created containing the container definitions you require, as defined in your docker-compose file. My recommendation would be to take advantage of the Mgmt Console to construct the definition, then grab the accompanying JSON definition it provides and store it in your source code repository, so future runs can reference it on the command line when registering task definitions and running tasks and services within a given cluster.
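On the command line, that stored JSON can then be reused along these lines (cluster, service, task family, and file names are placeholders):

# Register a new revision of the task definition from the saved JSON
aws ecs register-task-definition --cli-input-json file://taskdef.json
# Run it as a service on an existing cluster
aws ecs create-service --cluster django-stage --service-name web \
    --task-definition django-web --desired-count 1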

How do I configure "ulimits" for a Docker container running in AWS ECS?

My Java application appears to be hitting the "open files" limit when running in a Docker container in AWS ECS. Upon further investigation, I found that the open files limit defaults to 1024.
Typically, in Linux I would edit /etc/security/limits.conf, but that does not appear to take effect when I modify the file in my Docker container.
I know I can also pass ulimit parameters to docker run on the command line, as documented here. But I do not have direct access to the Docker command line in ECS; there has to be a way to do it via a task definition. How do I accomplish this?
The ECS task definition JSON allows you to set ulimits.
The ulimits in the ECS task definition correspond to the ulimits in docker run.
Please refer to the following page for more information:
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
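A minimal container definition sketch of the nofile setting (name, image, and the limit values are placeholders):

"containerDefinitions": [
  {
    "name": "java-app",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/java-app:latest",
    "ulimits": [
      {
        "name": "nofile",
        "softLimit": 65535,
        "hardLimit": 65535
      }
    ]
  }
]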
Via the console, navigate to the task definition and, under the "Resource Limits" section of the container definition, set the NOFILE soft/hard limits.
For CloudFormation you can set it via the task definition > container definition > Ulimits:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-taskdefinition-containerdefinitions.html#cfn-ecs-taskdefinition-containerdefinition-ulimits
Be mindful that you should keep the hard limit below 1/10th of the memory (in kilobytes) allotted to the task.