I am launching Apache, MySQL, and memcached Docker containers from AWS ECR onto an ECS instance. Engineers are able to browse around and make changes as they see fit. These containers expire after a set period of time, but the engineers want to save their database changes for use in future containers.
I am looking for a way to automate this process so that it happens before the containers terminate, whether with Lambda, the aws-cli, or some other utility.
I am looking for a solution that would take the mysql container and create a new image from it. I saw this question and it's mostly what I want:
How to create a new docker image from a running container on Amazon?
But you have to run docker commit from the ECS instance, as well as perform the login and push from there. There doesn't appear to be a way to push the committed image to ECR without logging in with aws ecr get-login --no-include-email and running its output so docker gets the token.
The issue I have with that is that, once we have multiple ECS instances running, it would be difficult to work out which instance the engineer's container is running on, SSH into that server, and run the docker commit, docker tag, aws ecr login, and docker push commands. To me, that seems hacky and prone to error.
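For reference, the manual flow that would need automating looks roughly like this (account ID, region, tag, and container ID are placeholders), run on whichever ECS instance hosts the engineer's container:

# snapshot the running MySQL container as a new local image
docker commit <container-id> mysql-engineer-snapshot
# tag it for the ECR repository
docker tag mysql-engineer-snapshot <account-id>.dkr.ecr.<region>.amazonaws.com/mysql:jsmith-snapshot
# authenticate docker against ECR, then push
$(aws ecr get-login --no-include-email --region <region>)
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/mysql:jsmith-snapshot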
I have the MySQL containers rebuilt and repushed to the ECR every hour so that they have the latest content updates. To launch the containers I am using a combination of ecs-cli and aws-cli to use a docker-compose.yml file to create a task in ECS.
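The launch side currently looks something like this (cluster, region, and project name are placeholders):

# bring up the compose file as an ECS task on the existing cluster
ecs-cli compose --file docker-compose.yml --project-name engineer-sandbox \
  --cluster dev-cluster --region us-east-1 up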
Is there some functionality I can use to commit a running container to ECR with a new name/tag?
The other option I was looking into was starting the MySQL container with persistent storage (EBS/EFS), but I am still trying to see if that's doable, since I would have to somehow tag the persistent storage so that it is only used when the engineer launches the container that way. Essentially, I would have a separate docker-compose.yml file specific to persistent volumes, and it would either launch a new container with fresh MySQL data or reuse an existing volume if one exists under a given name.
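As a rough illustration of the volume idea (all names are made up), with plain docker the behaviour I want is a named volume that is created on first use and reused on later runs; on ECS this would map to a volume definition (Docker volume or EFS) in the task definition:

# the named volume mysql-data-jsmith is created if it does not exist
# and reused if it does, so the data outlives the container
docker run -d \
  --name mysql-jsmith \
  -v mysql-data-jsmith:/var/lib/mysql \
  <account-id>.dkr.ecr.<region>.amazonaws.com/mysql:latest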
Related
I am working on Apache Superset. I was able to install it on a Linux EC2 instance using Docker; is there any possibility of installing it on ECS?
There are a couple of approaches to this.
First, you can take the container image here and build an ECS task definition / ECS service around it, bringing it up standalone. Make sure you enable ECS Exec to be able to exec into the container and launch those commands. I have not tested this, but I see no reason why it should not work.
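If you go the ECS Exec route, the commands look roughly like this (cluster, service, task, and container names are placeholders, and the task role needs the SSM permissions ECS Exec relies on):

# turn on ECS Exec for the service and roll the tasks
aws ecs update-service \
  --cluster superset-cluster \
  --service superset-service \
  --enable-execute-command \
  --force-new-deployment

# open an interactive shell inside the running container
aws ecs execute-command \
  --cluster superset-cluster \
  --task <task-id> \
  --container superset \
  --interactive \
  --command "/bin/bash"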
I have also spent some time trying to make the docker compose files in the Superset GH repo work with Amazon ECS. You can read more about my findings here.
Let's say I have an EKS cluster, an EC2 instance, and my local machine. I can pull images from my private ECR without any issues, but when I pull a generic image like nginx, it comes straight to me from Docker Hub. Would it be possible to redirect this pull so that it goes through my ECR first (so the image gets scanned for vulnerabilities, and perhaps also cached) and then from my ECR to wherever I pulled from?
If this is not possible, what would be a good alternative?
AWS container team person here. Can you clarify one thing? Would you be OK with pointing your manifests to ECR (acting as a hub/cache for external registries), or do you want to keep your manifests pointing to Docker Hub and go through ECR for caching more or less transparently? I am asking because we are working on the former scenario.
You can subscribe here to see the progress and leave comments.
It is not possible to redirect a pull of a generic image through ECR and on to Docker Hub.
I understand your concern about pulling images from Docker Hub directly. What you can do, and what we have done in our projects, is:
Pull the generic image from Docker Hub once.
Using that image, build your own image, with whatever customisations you require (or none at all).
Publish the newly created image to your ECR repo.
Going forward, use only your ECR repo to pull that image.
This way, you have full control over the image. It is also more secure to pull it from your own ECR repo rather than going to Docker Hub again and again, and you can apply any customisation you want.
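A minimal sketch of those four steps, using nginx as the example and placeholder account ID and region (the older aws ecr get-login form mentioned above works just as well):

# 1. pull the generic image from Docker Hub once
docker pull nginx:latest
# 2/3. customise it if needed, then tag and push it to your own ECR repo
docker tag nginx:latest <account-id>.dkr.ecr.<region>.amazonaws.com/nginx:latest
aws ecr get-login-password --region <region> \
  | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/nginx:latest
# 4. from now on, pull the image from ECR instead of Docker Hub
docker pull <account-id>.dkr.ecr.<region>.amazonaws.com/nginx:latest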
Looking for ways to implement the following scenario:
Deploy a Docker image into AWS ECS. This container runs as a REST service and accepts external requests (I already know how to do that).
Upon the request, execute code in the running container that pulls another Docker image from an external repo and deploys it into the same ECS cluster as a run-once container that exits upon completion.
(Bonus) the dynamically launched container needs to be able to reach a private EC2 IP within the same AWS account (the same console login).
The logic in the running container is written in Python, so I wonder if I should use the boto3 library to do what I need?
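In other words, what I need from inside the first container is essentially the equivalent of this CLI call (cluster, task definition, subnet, and security group are placeholders); I assume boto3's ECS client exposes the same operation as run_task:

# launch a one-off task that exits when its process completes
aws ecs run-task \
  --cluster my-cluster \
  --task-definition one-shot-job:1 \
  --launch-type FARGATE \
  --count 1 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc],securityGroups=[sg-0abc],assignPublicIp=DISABLED}'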
I have created Docker images for Druid and Superset, and now I want to push these images to ECR and start an ECS cluster to run these containers. I created the images by running docker-compose up on my YML file, and when I type docker image ls I can see the images listed.
I have created an AWS account and created a repository. AWS provides the push commands, and to start with I pushed the Superset image to ECR (I didn't push any of the dependencies).
I created a cluster in AWS; in one configuration step it asked for a custom port and I provided 8088. I don't know what this port is for or why they ask for it.
Then I created a load balancer with the default configuration.
After some time I could see the container status turn to running.
I navigated to the public IP I mentioned, on port 8088, and could see Superset running.
Now I have two problems:
Superset always shows a login error.
The container stops automatically after some time, then restarts, and this cycle continues.
Should I create different ECR repos and push all the dependencies to ECR before creating a cluster in ECS?
As for the service going up and down: since you mentioned you have a load balancer associated with the service, you may have an issue with the health check configuration.
If the health check fails a number of consecutive times, ECS will kill the task and restart it.
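If the target group is probing a path that redirects to the login page, pointing the health check at an endpoint that returns a plain 200 (Superset exposes /health, as far as I know) and loosening the thresholds usually stops the restart loop; something like this, with a placeholder target group ARN:

aws elbv2 modify-target-group \
  --target-group-arn <target-group-arn> \
  --health-check-path /health \
  --health-check-interval-seconds 30 \
  --healthy-threshold-count 2 \
  --unhealthy-threshold-count 5 \
  --matcher HttpCode=200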
I've been trying to find an efficient way to handle continuous deployment with a Docker compose setup and AWS hosting.
So far I've looked into CodeDeploy, S3 buckets, and ECS. My application is relatively small, with only 3 Docker services: a Django app, NGINX, and PostgreSQL. I was unable to find any reliable information on using CodeDeploy with Docker Compose, and because of the small scale ECS seems impractical. I've considered an S3 bucket, but that seems no better than just deploying my application with something like git or scp.
What is a standard way of handling deployment of a Docker Compose setup on AWS? If possible, I would like to use Bitbucket Pipelines or CircleCI to perform the deployment in a manually triggered step after running tests. But I've been unable to find a solution that would easily let me copy the code over (it lives in a git repo on a production branch, which is how I get it onto the production server at the moment).
I would like to add some possibilities to #gasc's answer.
It would be better if you make a CloudFormation template for deploying your EC2 resources with all the required security groups, auto scaling, and the rest.
Then create an AMI with Docker Compose installed, plus anything else you need for your EC2 environment.
Then you can use a CodeDeploy pipeline. AWS also provides a private container registry (ECR); maybe you want to use that.
The rest of the steps are the same: just SCP the compose file to the EC2 instance, run docker-compose up, and you are done.
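In a Bitbucket Pipelines or CircleCI step, that last part could look roughly like this (key, user, host, and path are placeholders):

# copy the compose file to the instance and restart the stack
scp -i deploy_key docker-compose.yml ec2-user@<ec2-host>:/home/ec2-user/app/
ssh -i deploy_key ec2-user@<ec2-host> \
  "cd /home/ec2-user/app && docker-compose pull && docker-compose up -d"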
Let me know if you want more help I'm open for discussion
What I would do in your case is:
1 - If needed, update your docker-compose.yml file (or whatever you called it) to version 3 or higher, to use swarm.
2 - During your pipeline, build all the images needed and push them to a registry.
3 - In your pipeline, scp your compose file to a manager node.
4 - Deploy your application using swarm (docker stack deploy -c <your-docker-compose-file> your_app_name). This way you can handle rolling updates and scale easily.
Note that if you want to use multiple nodes, you need to open a few ports on them.
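For reference, the standard swarm ports are 2377/tcp (cluster management), 7946/tcp and udp (node communication), and 4789/udp (overlay network traffic). Steps 3 and 4 from the pipeline then reduce to something like this (user, manager address, and stack name are placeholders):

# copy the compose file to a manager node and deploy the stack;
# --with-registry-auth forwards your registry credentials to the nodes
scp docker-compose.yml deploy@<manager-node>:~/
ssh deploy@<manager-node> \
  "docker stack deploy --with-registry-auth -c docker-compose.yml my_app"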
I see you mentioned that ECS might seem impractical for such a small scale; in my opinion, not necessarily. It would require you to rewrite your docker-compose.yml into task and service definitions, but since there aren't many services, that shouldn't take you much time.