AWS ECS MySQL DB instance

Hey, I wanted a design answer regarding using a MySQL database in an AWS ECS container. I'm not using RDS because I'm currently building an MVP. Is it possible to run MySQL as a Docker container, and if so, how do I make sure production data is persisted when this DB container is redeployed?
Please guide me for this scenario.

Yes, entirely possible.
Explaining it from start to finish is way too much for an SO answer. AWS has thorough documentation on ECS, and I would recommend starting there: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html
The section concerning data persistence is here: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html
The thing to remember with volumes and ECS: named volumes are for sharing data between containers; host volumes are for persisting data beyond the lifecycle of any number of containers. So you'll want to mount a volume from the underlying EC2 instance into the container at the path where the MySQL data is stored.
Depending on which MySQL image you choose, the container's data directory might differ. Any image worth its salt will tell you where this directory is located in its README, because that is a very common question with databases + Docker.
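As a rough illustration, the relevant pieces of an ECS task definition with a host volume backing the MySQL data directory could look like this (the host path, names, and image tag are placeholders, and /var/lib/mysql assumes the official image's default data directory):
{
  "family": "mysql-mvp",
  "containerDefinitions": [
    {
      "name": "mysql",
      "image": "mysql/mysql-server:8.0",
      "essential": true,
      "memory": 512,
      "mountPoints": [
        { "sourceVolume": "mysql-data", "containerPath": "/var/lib/mysql" }
      ]
    }
  ],
  "volumes": [
    { "name": "mysql-data", "host": { "sourcePath": "/ecs/mysql-data" } }
  ]
}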

Yes, it is possible. All you have to do is find a MySQL image, such as the official one, and, as instructed in the image's documentation, run:
docker run --name my-container-name -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql/mysql-server:tag
The -v /my/own/datadir:/var/lib/mysql part of the command mounts the /my/own/datadir directory from the underlying host system as /var/lib/mysql inside the container, where MySQL by default will write its data files.
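The same setup expressed as a docker-compose.yml (which the ECS tooling can also consume) would look roughly like this, with the tag, password, and host path being the same placeholders as in the command above:
version: "3"
services:
  mysql:
    image: mysql/mysql-server:tag
    environment:
      MYSQL_ROOT_PASSWORD: my-secret-pw
    volumes:
      - /my/own/datadir:/var/lib/mysql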

Related

How to Deploy a docker container with volume in Cloud Run

I am trying to publish an application I wrote in .NET Core with Docker and a mounted volume. I can't really figure out or see any clear solution to my issue that will be cheap (it's for a university project).
I tried running a docker-compose via a cloudbuild.yml linked in this post with no luck. I also tried to put my DB file in a Firebase project and access it from the program, but it didn't work. I also read in the GCP documentation that I can probably use Filestore, but the pricing is way out of budget for me. I need to publish an SQLite database so my server can work correctly, that's it.
Any help would be really appreciated!
Basically, you can't mount a volume in Cloud Run. It's a stateless environment and you can't persist data on it. You have to use external storage to persist your data. See the runtime contract.
With the second generation execution environment, you can now mount a Cloud Storage bucket with GCSFuse, and a Filestore path with NFS.
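A rough sketch of that newer option is below; the service name, bucket, and mount path are placeholders, and the flag names come from recent gcloud releases, so double-check them against the Cloud Run documentation for your gcloud version:
gcloud run services update my-service \
  --execution-environment gen2 \
  --add-volume name=gcs-data,type=cloud-storage,bucket=my-bucket \
  --add-volume-mount volume=gcs-data,mount-path=/mnt/gcs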

Is there an example of a docker-compose.yml that mounts persistent storage on AWS Fargate?

I'd like to deploy a docker-compose.yml file to AWS Fargate with access to persistent storage. The official documentation includes a Fargate tutorial that defines a task with docker compose, but without persistent storage, while a separate tutorial on how to mount an EFS filesystem defines a task in JSON, but without a corresponding docker compose example. Essentially, I'm looking to do what the second tutorial does with the tools from the first tutorial.
The most relevant question/answers here provide further examples that mount EFS persistent storage in Fargate, but in JSON format. I'm committed to docker-compose in anticipation of Docker's integration with ECS (currently in beta), but am otherwise open to suggestions.
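For the ecs-cli workflow used in the first tutorial, a rough sketch of the compose-plus-ecs-params pairing might look like the following; the efs_volumes keys are taken from the ecs-params.yml reference, the filesystem ID is a placeholder, and EFS on Fargate requires platform version 1.4.0 or later, so verify the details against your ecs-cli version:
docker-compose.yml:
version: "3"
services:
  app:
    image: my-app:latest
    volumes:
      - efs-data:/data
ecs-params.yml:
version: 1
task_definition:
  efs_volumes:
    - name: efs-data
      filesystem_id: fs-12345678
      transit_encryption: ENABLED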

AWS ECS instance missing `ecs.capability.efsAuth` attribute

I have a custom ECS AMI, running Debian 10. I launch the ECS-Agent as a container, as suggested in the docs here. Everything works fine.
Recently, I was asked to integrate EFS into the cluster, so that containers running within specific tasks would have access to shared, persistent storage.
I added the efs-utils package to the AMI build, as documented in the git repo. The instances themselves now automatically mount to EFS on boot, and users on the instances can read/write to the EFS mount.
However, tasks configured to use the efsVolumeConfiguration parameter in the task volume definition fail to get placed; the good old Container instance missing required attribute error.
Because the instances themselves have no problem mounting EFS on boot, I've implemented a workaround using regular Docker volumes, so the containers running in the task reach EFS through a normal Docker volume on the host, but I'd prefer to have the ECS -> EFS integration working properly.
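For context, the volume definition in question is the task-definition block that looks roughly like this (the filesystem ID is a placeholder):
"volumes": [
  {
    "name": "efs-data",
    "efsVolumeConfiguration": {
      "fileSystemId": "fs-12345678",
      "rootDirectory": "/",
      "transitEncryption": "ENABLED"
    }
  }
]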
When I run the ECS-CLI check-attributes command against any of the instances in my cluster I get:
ecs-cli check-attributes --task-def my-task --container-instances my-container-instance-id --cluster my-ecs-cluster
Container Instance Missing Attributes
my-container-instance-id ecs.capability.efsAuth
And indeed, in the console, when I go to Cluster -> ECS Instances -> specific instance -> Actions -> View/Edit Attributes, all of the ecs.capability.xxx attributes have empty values.
When do these values get populated? How should I augment the AMI build so that these values get populated with the proper values?
Please let me know if you need any additional information.
Thanks in advance!
I am not sure if this functionality of using EFS with ECS is supported on Debian-based systems, since the documentation does not provide commands for Debian.
Still, try these steps:
Install efs-utils and enable the amazon-ecs-volume-plugin service.
Add the attribute manually:
Name=ecs.capability.efsAuth
Value=<empty>
Apologies, I thought I marked this as the answer a long time ago.
Answer from #bravinator932421
I think I solved this. From github.com/aws/amazon-ecs-agent/blob/… I saw where to set efsAuth, so placing it in my config file at /etc/ecs/ecs.config: ECS_VOLUME_PLUGIN_CAPABILITIES=["efsAuth"] worked.
This also worked for me .
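For anyone else hitting this with the containerized agent, the fix amounts to adding that capability flag to the agent's config and restarting the agent so it re-registers its attributes. A sketch, assuming the agent container is named ecs-agent as in the standard setup:
echo 'ECS_VOLUME_PLUGIN_CAPABILITIES=["efsAuth"]' >> /etc/ecs/ecs.config
docker restart ecs-agent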
I had the same problem but I got it when trying out Bottlerocket, which apparently does not support encrypted EFS mounts. Removing the transit encryption requirement fixed it.

Save running ECS container as new image and upload to ECR

I am launching Apache, MySQL, and memcached Docker containers from AWS ECR onto an ECS instance. Engineers are able to browse around and make changes as they see fit. These containers expire after a set period of time, but the engineers want to save their database changes for use in future containers.
I am looking for a way to automate this process so it runs before the containers terminate, whether with Lambda, the AWS CLI, or some other utility.
I am looking for a solution that would take the mysql container and create a new image from it. I saw this question and it's mostly what I want:
How to create a new docker image from a running container on Amazon?
But you have to run docker commit from the ECS instance, as well as perform the login and push from there. There doesn't appear to be a way to push the committed image to ECR without logging in with aws ecr get-login --no-include-email and running its output so Docker gets the token.
The issue I have with that is that once we have multiple ECS instances running, it becomes difficult to work out which instance the engineer's container is running on, SSH into that server, and run the docker commit, docker tag, aws ecr get-login, and docker push commands. To me, that seems hacky and error-prone.
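For reference, the manual sequence being described looks roughly like this when run on the instance hosting the container (container name, account ID, region, repository, and tag are placeholders):
docker commit my-mysql-container 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:engineer-snapshot
$(aws ecr get-login --no-include-email --region us-east-1)
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:engineer-snapshot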
I have the MySQL containers rebuilt and repushed to the ECR every hour so that they have the latest content updates. To launch the containers I am using a combination of ecs-cli and aws-cli to use a docker-compose.yml file to create a task in ECS.
Is there some functionality I can use to commit a running container to ECR with a new name/tag?
The other option I was looking into was starting the MySQL container with persistent storage (EBS/EFS), but I am still trying to see if that's doable, since I would have to somehow tag the persistent storage so that it is only used when the engineer launches it that way. Essentially, I would have a separate docker-compose.yml file specific to persistent volumes, and it would either launch a new container with fresh MySQL data or, given a specific name, reuse an existing one if it exists.
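A minimal sketch of what that persistence-specific docker-compose.yml might look like, assuming EFS is already mounted on the ECS instances under /mnt/efs and a per-engineer subdirectory is used (all names and paths are placeholders):
version: "3"
services:
  mysql:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/mysql:latest
    volumes:
      - /mnt/efs/engineer-name/mysql:/var/lib/mysql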

Docker Volume vs AWS s3

Probably I am completely off in my assumptions, but I am pretty new to both Docker and AWS, and we have two applications which are Dockerized containers working on the same docker-compose network bridge.
Now, we have been looking for a way for these two containers to share some files. Since we are on the cloud, one suggestion was an Amazon S3 bucket, which is great. But my question is: since we are in a Docker environment, does it not make more sense to share those files in a Docker volume? I thought that's exactly what a Docker volume is: a mounted virtual place where files can be shared. At least that is my shallow and simplistic understanding after reading about Docker volumes.
So I do have some questions:
1. Is my assumption correct that an AWS S3 bucket and Docker volumes provide similar functionality, i.e. is this comparing apples to apples?
2. If my assumption is correct, would a Docker volume qualify to be called an object store?
3. If it does qualify to be called an object store, would it be wise to use a Docker volume as a replacement for AWS S3?
4. If not, why?
1. Yes. They are different and even complementary. There's a plugin for Docker volumes on AWS here: https://github.com/joeduffy/blocker
2. I wouldn't use the term object store. It's implemented as a filesystem mounted on the container.
3. No...
4. ... for the reason stated in (1).
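If the immediate goal is just letting two containers on the same compose bridge share files, a named volume covers it. A minimal sketch (service names, images, and paths are placeholders):
version: "3"
services:
  app-a:
    image: my-app-a:latest
    volumes:
      - shared-files:/shared
  app-b:
    image: my-app-b:latest
    volumes:
      - shared-files:/shared
volumes:
  shared-files: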