How to update the Docker version in AWS ECS

I intend to run some stateful apps on a single container instance in ECS, with no autoscaling configured. My goal in running this container instance in ECS is to find an easy way to update the Docker version with the support of ECS.
But it seems that to update the Docker version in ECS, I have to launch a new instance with the latest Amazon ECS-optimized AMI, move the data from the old instance to the new one, and finally remove the old one, which is rather complicated. So my question is: is there any way to update the Docker version in AWS ECS without downtime?
Thanks.

You have to bring down the containers that are running in order to update Docker.
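If it helps, one common way to keep the interruption short is to register a second container instance built from the latest Amazon ECS-optimized AMI, drain the old instance so ECS stops its tasks cleanly, and only then terminate it. A rough sketch with the AWS CLI (untested here; the cluster name and IDs are placeholders):

    # Mark the old instance as DRAINING so ECS stops/reschedules its tasks.
    aws ecs update-container-instances-state \
      --cluster my-cluster \
      --container-instances <old-container-instance-arn> \
      --status DRAINING

    # Once its running task count reaches zero, terminate the old EC2 instance.
    aws ec2 terminate-instances --instance-ids <old-ec2-instance-id>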

Related

Is it possible to install Apache Superset on an ECS container

I am working on Apache Superset. I was able to install it on a Linux EC2 instance using Docker; is there any possibility of installing it on ECS?
There are a couple of approaches to this.
First, you can take the container image here and build an ECS task definition / ECS service around it, bringing it up standalone. Make sure you enable ECS exec so that you can exec into the container and run those commands. I have not tested this, but I see no reason why it should not work.
I have also spent some time trying to make the docker compose files in the Superset GH repo work with Amazon ECS. You can read more about my findings here.
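For the first approach, a minimal sketch with the AWS CLI might look like the following (untested; the account ID, role ARN, cluster name, and network IDs are placeholders, and ECS exec additionally requires a task role with the ssmmessages permissions):

    # Register a task definition around the public Superset image.
    aws ecs register-task-definition \
      --family superset \
      --requires-compatibilities FARGATE \
      --network-mode awsvpc \
      --cpu 1024 --memory 2048 \
      --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
      --container-definitions '[{"name":"superset","image":"apache/superset:latest","portMappings":[{"containerPort":8088}],"essential":true}]'

    # Bring it up as a standalone service with ECS exec enabled.
    aws ecs create-service \
      --cluster my-cluster \
      --service-name superset \
      --task-definition superset \
      --desired-count 1 \
      --launch-type FARGATE \
      --enable-execute-command \
      --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}'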

How to run commands in a Fargate task

I have a requirement where I have to create a Fargate task that can clone a GitLab repository (source code) and run a Maven build command to build the code.
There would then be another Fargate task that would create a Docker image out of it.
GitLab is on an EC2 instance.
Since we do not have exec access into the containers on Fargate, what would be the best way to do this? (I have multiple repos on GitLab, so the repo that I want to clone and build is not going to be the same every time.)
I have been reading about the Amazon Elastic Container Service (ECS) / Fargate plugin for Jenkins, but I'm not sure whether Jenkins can be used to get into a Fargate container and run commands.
Nowadays you can use ECS exec. Here's how to set it up: https://aws.amazon.com/blogs/containers/new-using-amazon-ecs-exec-access-your-containers-fargate-ec2/
or in short:
https://www.ernestchiang.com/en/posts/2021/using-amazon-ecs-exec/
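In outline (the cluster, service, and container names are placeholders; the task role also needs the ssmmessages permissions described in those posts, and the Session Manager plugin must be installed alongside the AWS CLI):

    # Enable ECS exec on an existing service and roll its tasks.
    aws ecs update-service \
      --cluster my-cluster \
      --service my-service \
      --enable-execute-command \
      --force-new-deployment

    # Open an interactive shell (or run a one-off command) in a running task.
    aws ecs execute-command \
      --cluster my-cluster \
      --task <task-id> \
      --container my-container \
      --interactive \
      --command "/bin/sh"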

ECS - User Data For EC2 instances

I am trying to create a Docker image based on httpd with custom information about the Docker image. For that, I am trying to set ECS_ENABLE_CONTAINER_METADATA=true in /etc/ecs/ecs.config.
I am trying to do this in the user data of the ECS instance. The first thing I noticed is that there is no provision to specify user data while creating the cluster.
I then tried copying the launch configuration and editing the user data per the Stack Overflow question below:
ECS, how to add user-data after creating ecs instance
But when I try to run tasks, I find that no ECS instance is linked with the cluster.
Any suggestions if you have run into a similar issue?
It seems that the ECS instance is not registered with the cluster. You need to ensure that the AMI you use to create the ECS instance has the ECS agent installed and running. The full list of AMIs is available in the ECS developer docs under container instances.
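For what it's worth, user data on a container instance usually just appends to /etc/ecs/ecs.config before the agent starts; if ECS_CLUSTER is not set there, the agent registers with the default cluster instead of yours, which would also explain an empty cluster. A minimal sketch, assuming the ECS-optimized AMI (the cluster name is a placeholder):

    #!/bin/bash
    # Runs at boot; the ECS agent reads /etc/ecs/ecs.config when it starts.
    echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config
    echo "ECS_ENABLE_CONTAINER_METADATA=true" >> /etc/ecs/ecs.config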

Save running ECS container as new image and upload to ECR

I am launching Apache, MySQL, and memcached Docker containers from AWS ECR onto an ECS instance. Engineers are able to browse around and make changes as they see fit. These containers expire after a set period of time, but the engineers want to save their database changes for use in future containers.
I am looking into whether there's a way to automate this process so it runs before the containers terminate, whether with Lambda, the AWS CLI, or some other utility.
I am looking for a solution that would take the MySQL container and create a new image from it. I saw this question, and it's mostly what I want:
How to create a new docker image from a running container on Amazon?
But you have to run docker commit from the ECS instance, as well as perform the login and push from there. There doesn't appear to be a way to push the committed image to ECR without logging in with aws ecr get-login --no-include-email and running its output so that docker gets the token.
The issue I have with that is, if we get to a point where we have multiple ECS instances running, it would be difficult to know which instance the engineer's container is running on, SSH into that server, and run the docker commit, docker tag, aws ecr get-login, and docker push commands. To me, that seems kind of hacky and prone to error.
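For reference, the manual workflow described above, run from the ECS instance hosting the container, looks roughly like this (a sketch only; the account ID, region, repository, and tag are placeholders, and newer AWS CLI versions replace aws ecr get-login with aws ecr get-login-password):

    # Find the engineer's MySQL container and commit it as a new image.
    CONTAINER_ID=$(docker ps --filter "name=mysql" --format '{{.ID}}' | head -n 1)
    docker commit "$CONTAINER_ID" 123456789012.dkr.ecr.us-east-1.amazonaws.com/mysql:engineer-snapshot

    # Authenticate Docker to ECR, then push the committed image.
    aws ecr get-login-password --region us-east-1 \
      | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/mysql:engineer-snapshot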
I have the MySQL containers rebuilt and pushed to ECR every hour so that they have the latest content updates. To launch the containers, I am using a combination of ecs-cli and aws-cli with a docker-compose.yml file to create a task in ECS.
Is there some functionality I can use to commit a running container to ECR with a new name/tag?
The other option I was looking into was starting the MySQL container with persistent storage (EBS/EFS), but I am still trying to see if that's doable, since I would have to somehow tag the persistent storage so that it is only used when the engineer launches it that way. Essentially, I would have a unique docker-compose.yml file that is specific to persistent volumes, and it would either launch a new container with fresh MySQL data or reuse an existing one, given a specific name.

Difference between Docker and AMI

In the context of AWS:
An AMI is used to package software and can be deployed on EC2.
Docker can also be used to package software and can also be deployed to EC2.
What's the difference between both and how do I choose between them?
An AMI is an image of a whole machine that you can start new instances from. A Docker container is more lightweight and portable: a container should be transportable between providers, while an AMI is not (easily).
AMIs are basically VM images.
Docker containers are packaged mini-images that run on some VM in an isolated environment.
Even though this doesn't answer the question directly, it gives some background on how they are used.
One approach is to launch EC2 instances from Amazon AMIs (or any AMI, really) and then run Docker containers (with all their dependencies) on top of them. With this approach, the Docker image gets bloated over time and there is container drift. The time taken for the application to be up and running is also longer, since the EC2 instance has to boot and Docker has to bring up your app server.
Another approach is "immutable EC2 instances". With this approach, you use an Amazon AMI as a base, install all the dependencies (using shell scripts or Ansible), and bake them into the AMI. We use HashiCorp Packer, which is an amazing tool. Here the time taken for the application to be up and running is greatly reduced, as all the dependencies (Java 8, Tomcat, the WAR file, etc.) are already installed in the AMI.
For a production use case, use Packer to create the AMI and Terraform to launch the cloud resources that use it. Tie all this together in a Jenkins pipeline.
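To make the contrast concrete, here is a rough sketch of what "packaging" looks like in each model (the image name and Packer template are hypothetical):

    # Docker: package the app as a container image and push it to a registry;
    # it then runs on any host with a Docker engine (EC2, ECS, another cloud).
    docker build -t myorg/my-app:1.0 .
    docker push myorg/my-app:1.0

    # AMI: bake the OS plus all dependencies into a machine image with Packer,
    # then launch EC2 instances from the resulting AMI ID (e.g. via Terraform).
    packer build my-app-ami.pkr.hcl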
This link has details about the differences between Docker and AMIs:
https://forums.docker.com/t/how-would-you-differentiate-between-docker-vs-ec2-image/1235/2