AWS ECS: spawn a Docker container from a running one

I'm looking for ways to implement the following scenario:
Deploy a Docker image into AWS ECS. This container runs as a REST service and accepts external requests (I already know how to do that).
Upon a request, execute code in the running container that pulls another Docker image from an external repo and deploys it into the same ECS cluster as a single-run container that exits upon completion.
(Bonus) The dynamically launched container needs to access some EC2 private IP within the same AWS account.
The logic in the running container is written in Python, so I wonder if I should use the boto3 library to do what I need?
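A minimal boto3 sketch of that launch step, assuming placeholder cluster, task definition, subnet, and security group identifiers (none of these names come from the question):

```python
import boto3

# All identifiers below are placeholders for illustration.
ecs = boto3.client("ecs")

response = ecs.run_task(
    cluster="my-cluster",                # the cluster the REST service already runs in
    taskDefinition="one-off-job:1",      # task definition wrapping the externally pulled image
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",  # keep it inside the VPC so it can reach private EC2 IPs
        }
    },
)
print(response["tasks"][0]["taskArn"])
```

For a call like this to succeed, the REST service's task role would also need ecs:RunTask permission plus iam:PassRole for the roles referenced by the one-off task definition.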

Related

Connect to a running container in AWS ECS Cluster

My team inherited an AWS ECS Cluster with multiple linked containers running on it, but without the source code (yeah, I know...). We need to connect to one of the running containers and execute a few commands in it. Here's what we tried:
connecting to the container instance, but there's no instance associated with the cluster
using ECS Exec with AWS Copilot, but it's not clear how we could connect to the cluster without access to the source code used for deployment
How else could we connect to a container running on AWS ECS?
UPDATE:
I tried accessing the container with the AWS CLI following an example here, only to find out that execute command was not enabled on the task:
An error occurred (InvalidParameterException) when calling the ExecuteCommand operation: The execute command failed because execute command was not enabled when the task was run or the execute command agent isn’t running. Wait and try again or run a new task with execute command enabled and try again.
Is now a good time to give up?
If execute command wasn't enabled on the task when it was run, and it's running on Fargate instead of EC2, then there's no way to connect to it the way you are trying to.
Are the Docker images in ECR? You should be able to examine the ECS task definitions to see where the Docker images live, then pull them down to an EC2 server or your local computer, at which point you should be able to get at their contents.
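A small boto3 sketch of that inspection step, assuming a placeholder cluster name (the real name would come from the console or aws ecs list-clusters):

```python
import boto3

ecs = boto3.client("ecs")

CLUSTER = "inherited-cluster"  # placeholder; use the actual cluster name

# Look up the task definitions behind the running tasks and print each container's
# image URI, which tells you where to pull the images from (e.g. an ECR repository).
task_arns = ecs.list_tasks(cluster=CLUSTER)["taskArns"]
if task_arns:
    tasks = ecs.describe_tasks(cluster=CLUSTER, tasks=task_arns)["tasks"]
    for task in tasks:
        td = ecs.describe_task_definition(taskDefinition=task["taskDefinitionArn"])
        for container in td["taskDefinition"]["containerDefinitions"]:
            print(container["name"], "->", container["image"])
```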

Deploy Django Code with Docker to AWS EC2 instance without ECS

Here is my setup.
My Django code is hosted on GitHub
I have Docker set up on my EC2 instance
My deployment process is manual. I have to run git pull, docker build and docker run on every single code change. I am using a docker service account to perform this step.
How do I automate step 3 with AWS CodeDeploy or something similar?
Every example I am seeing on the internet involves ECS or Fargate, which I am not ready to use yet.
Check out this guide on how to use Docker images from a private registry (e.g. Docker Hub) for your build environment: How to Use Docker Images from a Private Registry for Your Build Environment

AWS ECS & Prisma2 & RDS: Connect local machine to private DB for migration with prisma?

I deployed my full-stack project to AWS ECS (Docker). Everything is working so far. My problem is that I don't know how to connect my local machine to the RDS DB to migrate my DB schema.
I want to run the command prisma migrate deploy --preview-feature, which creates the tables and fields in the DB.
My RDS DB is private (no public accessibility) and is in the same VPC as my frontend and backend. The frontend has a public security group (load balancer) and the backend has a private security group with permissions to the DB (requests are working, I just get the error "The table public.Game does not exist in the current database", which I can solve with the migration). At the moment only my backend can access RDS.
I also tried it with a test DB that was publicly accessible, and I was able to migrate from my local machine.
How do you generally migrate Prisma in production, and how can I give my local machine access to an RDS instance with no public accessibility?
IF you can run that command from one of the containers you deployed with ECS AND you deployed the ECS tasks on EC2 instances, you can SSH into the instances and docker exec into the container (the one that has connectivity to the RDS DB), from which you can, supposedly, run that command. Note that it is possible that your instances themselves are not publicly reachable from your laptop (in which case you'd need some sort of bastion host to do that).
IF you can run that command from one of the containers you deployed with ECS AND you deployed the ECS tasks on Fargate, this is a bit more tricky, as there are no EC2 instances you can SSH into. In this case I guess you'd need to deploy a temporary environment (on EC2 or ECS/EC2) that would allow you to run that command to prepare the DB.
FYI we are releasing a new feature soon that will allow you to exec into the container (running on either ECS/EC2 or ECS/Fargate) without having to do those jumps (when possible). But this feature is not (yet) available. More here.
If you have it running in ECS, it might be simplest to create another Task Definition that uses the same Docker image but overrides the command parameter to run the migrate command instead of the normal command that starts your app. A similar approach would be to use the CLI to run the aws ecs run-task command with a command override and execute the migration that way.
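A rough boto3 version of that second approach, assuming placeholder cluster, task definition, subnet, security group, and container names (the real values would come from the backend's existing service):

```python
import boto3

ecs = boto3.client("ecs")

# All names below are placeholders; reuse the backend's real cluster,
# task definition, container name, and VPC networking values.
response = ecs.run_task(
    cluster="my-cluster",
    taskDefinition="backend-task",       # same image as the backend service
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],     # private subnets with a route to RDS
            "securityGroups": ["sg-0123456789abcdef0"],  # SG allowed by the RDS security group
        }
    },
    overrides={
        "containerOverrides": [
            {
                "name": "backend",  # container name from the task definition
                "command": ["npx", "prisma", "migrate", "deploy", "--preview-feature"],
            }
        ]
    },
)
print(response["tasks"][0]["lastStatus"])
```

Because the one-off task runs inside the same VPC and security groups as the backend, it can reach the private RDS instance without opening it up publicly.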

On Demand Docker Container Run On AWS or Google Cloud Platform?

I am interested in running builds/scripts on demand, inside a Docker container, on AWS or GCP.
I keep reading about the ECS service (https://aws.amazon.com/ecs/), but I am not sure it is what I need. I surely do not need a cluster of managed EC2 instances. I also don't think Google Container Engine is the answer.
I just need to start a Docker container, run a build or any script inside it, and shut it down. The lifetime of the container would be max 1h. So it's not about long-running services or scaling any application. Just start, run, stop a Docker container on demand. Which AWS or GCP service is most suitable for this requirement?
Besides the service, which of its HTTP endpoints do I need to call in order to automate this process?
My application receives some bash script from the user and has to fire up a container, run the script, shut everything down when it's finished or errored, and come back with the script's output. I imagine it would connect to the created/running instance via SSH.
Any help or hint to the proper documentation is appreciated, thanks!

Jenkins: deploy to AWS ECS with docker compose

I have some misunderstanding. I have a Jenkins instance, an ECS cluster and a docker-compose config file to compose my images and link up containers.
After a git push, my Jenkins instance grabs all sources from all repos (webapp, api, lb) and runs a batch of operations like build, copy files, etc.
After that I have all folders with Dockerfiles in a "ready for compose" state.
And at this stage I can't figure out how I should point my ECS cluster on AWS at the images built by Jenkins and compose them with my docker-compose.yml config.
I would be glad of any useful information.
Thanks.
First, you need to push your images from the Jenkins server into ECR. Push every image individually to an independent repo.
Next, you have to create an ECS cluster. In this cluster, you create an ECS service that runs in the cluster. For this Service, you create and configure a Task Definition to run your linked containers. You don't need to use docker-compose for this: in ECS you define the links between containers in the configuration of the Task Definition. You can add several container definitions to the same Task Definition, and ECS will link them together.
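For illustration, registering a Task Definition with two linked containers via boto3 might look roughly like this (family name, container names, and ECR image URIs are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

# Placeholder names and ECR URIs; substitute your own repos and account ID.
ecs.register_task_definition(
    family="webapp-stack",
    networkMode="bridge",  # container links require bridge networking on EC2
    containerDefinitions=[
        {
            "name": "api",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest",
            "memory": 512,
            "essential": True,
        },
        {
            "name": "webapp",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:latest",
            "memory": 512,
            "essential": True,
            "links": ["api"],  # replaces the docker-compose 'links' entry
            "portMappings": [{"containerPort": 80, "hostPort": 80}],
        },
    ],
)
```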
You can automate all of this from your Jenkins server by attaching an Instance Profile to it that allows it to call the ECS API. In order to deploy new images, all you need to do is push them to their ECR repos, register a new Task Definition revision pointing to them, and update the service that runs that task definition. ECS will then roll out the new revision on your behalf.
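The deployment step from Jenkins could then look roughly like this sketch, again with placeholder cluster and service names, and assuming a new revision of the task definition has just been registered as above:

```python
import boto3

ecs = boto3.client("ecs")

# Assumes a new revision of the "webapp-stack" task definition was just
# registered with the freshly pushed image tags.
ecs.update_service(
    cluster="my-cluster",           # placeholder cluster name
    service="webapp-service",       # placeholder service name
    taskDefinition="webapp-stack",  # family name alone resolves to the latest ACTIVE revision
)

# Optionally block until the deployment has settled.
waiter = ecs.get_waiter("services_stable")
waiter.wait(cluster="my-cluster", services=["webapp-service"])
```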