My team inherited an AWS ECS Cluster with multiple linked containers running on it, but without the source code (yeah, I know...). We need to connect to one of the running containers and execute a few commands in it. Here's what we tried:
connecting to the container instance, but there is no EC2 instance associated with the cluster
using ECS Exec with AWS Copilot, but it's not clear how we could connect to the cluster without access to the source code used for deployment
How else could we connect to a container running on AWS ECS?
UPDATE:
I tried accessing the container with the AWS CLI, following an example here, only to find out that execute command was not enabled on the task:
An error occurred (InvalidParameterException) when calling the ExecuteCommand operation: The execute command failed because execute command was not enabled when the task was run or the execute command agent isn’t running. Wait and try again or run a new task with execute command enabled and try again.
Is now a good time to give up?
If execute command wasn't enabled when the task was run, and it's running on Fargate instead of EC2, then there's no way to connect to it the way you are trying to.
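If redeploying the tasks is acceptable, a rough sketch of enabling it (cluster, service, and container names are placeholders; you also need the Session Manager plugin for the AWS CLI, and the task role needs the ssmmessages permissions):

# Turn on ECS Exec for the service and roll out new tasks
aws ecs update-service --cluster my-cluster --service my-service \
    --enable-execute-command --force-new-deployment

# Once a new task is running, open a shell in it
aws ecs execute-command --cluster my-cluster --task <task-id> \
    --container my-container --interactive --command "/bin/sh"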
Are the Docker images in ECR? You should be able to examine the ECS task definitions to see where the images live, pull them down to an EC2 server or your local computer, and inspect their contents from there.
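A rough sketch of that approach (task definition name, region, account ID, and repo are placeholders):

# Find the image URIs used by the task
aws ecs describe-task-definition --task-definition my-task \
    --query 'taskDefinition.containerDefinitions[].image'

# Log in to ECR and pull the image locally
aws ecr get-login-password --region us-east-1 \
    | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest

# Poke around inside the image
docker run -it --rm --entrypoint sh 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest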
Related
I deployed my full-stack project to AWS ECS (Docker). Everything is working so far. My problem is that I don't know how to connect my local machine to the RDS DB to migrate my DB schema.
I want to run the command prisma migrate deploy --preview-feature, which creates the tables and fields in the DB.
My RDS DB is private (no public accessibility) and is in the same VPC as my frontend and backend. The frontend has a public security group (load balancer) and the backend a private security group with permissions to the DB (requests work; I just get the error "The table public.Game does not exist in the current database", which the migration would fix). At the moment only my backend can access RDS.
I also tried it with a test DB that was publicly accessible, and I was able to migrate from my local machine.
How do you generally migrate Prisma in production, and how can I give my local machine access to an RDS instance with no public accessibility?
IF you can run that command from one of the containers you deployed with ECS AND you deployed the ECS tasks on EC2 instances, you can SSH into an instance and docker exec into the container (the one that has connectivity to the RDS DB), from which you can, supposedly, run that command, as sketched below. Note that your instances themselves may not be publicly reachable from your laptop (in which case you'd need some sort of bastion host).
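For the EC2 case, a minimal sketch (host, container ID, and the exact migrate invocation are assumptions):

# SSH to the container instance (possibly through a bastion host)
ssh ec2-user@<instance-ip>

# On the instance: find the container that can reach RDS, then exec into it
docker ps
docker exec -it <container-id> sh

# Inside the container: run the migration
npx prisma migrate deploy --preview-feature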
IF you can run that command from one of the containers you deployed with ECS AND you deployed the ECS tasks on Fargate, this is a bit more tricky, as there are no EC2 instances you can SSH into. In this case I guess you'd need to deploy a temporary environment (on EC2 or ECS/EC2) that would allow you to run that command to prepare the DB.
FYI we are releasing a new feature soon that will allow you to exec into the container (running on either ECS/EC2 or ECS/Fargate) without having to do those jumps (when possible). But this feature is not (yet) available. More here.
If you have it running in ECS, it might be simplest to create another Task Definition that uses the same Docker image but overrides the command parameter to run the migrate command instead of the normal command that starts your app. Another similar approach is to use the CLI's aws ecs run-task command and execute it that way, as sketched below.
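A hedged sketch of the run-task variant (names, subnet, and security group are placeholders; this assumes a Fargate task on an awsvpc network):

# Run a one-off task from the existing task definition,
# overriding the container command to run the migration
aws ecs run-task --cluster my-cluster \
    --task-definition my-app \
    --launch-type FARGATE \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=DISABLED}' \
    --overrides '{"containerOverrides":[{"name":"my-app","command":["npx","prisma","migrate","deploy","--preview-feature"]}]}'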
Looking for ways to implement the following scenario:
Deploy a Docker image into AWS ECS. This container runs as a REST service and accepts external requests (I already know how to do that).
Upon a request, execute code in the running container that pulls another Docker image from an external repo and deploys it into the same ECS cluster as a run-once container that exits upon completion.
(Bonus) the dynamically launched container needs to access some EC2 private IP within the same AWS account.
The logic in the running container is written in Python, so I wonder: should I use the boto3 library to do this?
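For context, a rough CLI equivalent of the call I have in mind, issued from inside the running container (cluster and task definition names here are placeholders; boto3's ecs.run_task takes the same parameters from Python):

# Launch the second image as a one-shot task in the same cluster;
# the task exits when the container's command completes
aws ecs run-task --cluster my-cluster \
    --task-definition one-shot-task \
    --count 1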
I have created Docker images for Druid and Superset, and now I want to push these images to ECR and start an ECS cluster to run the containers. I created the images by running docker-compose up on my YML file; when I type docker image ls I can see the images listed.
I have created an AWS account and created a repository. They provide the push commands, and I pushed the Superset image into ECR to start (I didn't push any dependency).
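The push commands they provide look roughly like this (account ID, region, and repo name here are placeholders):

# Authenticate Docker to ECR, tag the local image, and push it
aws ecr get-login-password --region us-east-1 \
    | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag superset:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/superset:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/superset:latest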
I created a cluster in AWS; in one configuration step it asked for a custom port, and I provided 8088. I don't know what this port is for or why they ask for it.
Then I created a load balancer with the default configuration.
After some time I could see the container status turn to running.
I navigated to the public IP I mentioned, on port 8088, and could see Superset running.
Now I have two problems:
Superset always shows a login error.
The container stops automatically after some time, then restarts, and this cycle continues.
Should I create different ECR repos and push all the dependencies to ECR before creating a cluster in ECS?
For the service going up and down: since you mentioned you have an LB associated with the service, you may have an issue with the health check configuration.
If the health check fails consecutively a number of times, ECS will kill the task and restart it.
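A sketch of the knobs worth checking (ARNs and values are placeholders):

# Loosen the target group health check while you debug
aws elbv2 modify-target-group \
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/abc123 \
    --health-check-path /health \
    --health-check-interval-seconds 30 \
    --unhealthy-threshold-count 5

# Give slow-starting containers (like Superset) time before checks count
aws ecs update-service --cluster my-cluster --service my-service \
    --health-check-grace-period-seconds 300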
I have a dockerized Jenkins build server set up as below, and I want to move it to AWS.
I have some questions on how to do it. Thanks.
Is ECS the right choice to deploy dockerized Jenkins agents?
Does the Fargate launch type of ECS support Windows containers?
I know ECS can dynamically provision EC2 instances; can ECS provision as below?
a. If there is no job to build, there is no EC2 instance running in the cluster.
b. If a build job starts, ECS dynamically launches an EC2 instance to run the dockerized agents that handle it.
c. After the build job finishes, the ECS cluster automatically stops or terminates the running EC2 instance.
==================================================================
Jenkins master:
Runs as a Linux container hosted on an Ubuntu virtual machine.
Jenkins Agents:
Linux Agent:
Runs as a Linux container hosted on the same Ubuntu virtual machine as the master.
Windows Agents:
Runs as a Windows container hosted on a Windows Server 2019 machine.
Well, I have some tips for you:
Yes, ECS can dynamically provision EC2 instances using auto scaling, but only when a CloudWatch metric crosses a threshold, an alarm fires, and auto scaling kicks in. Starting a Jenkins master as an ECS task and then starting one or two agents whenever you execute a job is neither a good tactic nor a practical idea: who is going to wake those tasks up?
If you want to run Jenkins in Docker on an EC2 instance, keep a master node running, and keep your unused agents stopped, starting one only when a job needs it, you can call a Lambda function from your Jenkinsfile to start the agent. Here is an example in a Jenkinsfile:
stage('Start Infrastructure') {
    steps {
        sh '''
            #!/bin/bash
            # Fire-and-forget invocation asking the Lambda to start the agent node.
            # --log-type Tail is omitted: it only applies to synchronous invocations.
            # (With AWS CLI v2, also add --cli-binary-format raw-in-base64-out.)
            aws lambda invoke --function-name Wake_Up_Jenkins_Agent \
                --invocation-type Event \
                --payload '{"node":"NodeJS-Java-Agent","action":"start"}' \
                logsfile.txt
        '''
    }
}
Later, add another stage to stop your agent; a sketch follows below. Your master node needs to stay online, though, because it is the key component that your repository or CI/CD process calls. You also need to implement the Lambda function itself with the logic to start or stop the instance.
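A minimal sketch of that stop stage, mirroring the start example (same assumed Lambda name and payload shape, with the action flipped to "stop"):

stage('Stop Infrastructure') {
    steps {
        sh '''
            #!/bin/bash
            # Ask the same Lambda to stop the agent once the build is done
            aws lambda invoke --function-name Wake_Up_Jenkins_Agent \
                --invocation-type Event \
                --payload '{"node":"NodeJS-Java-Agent","action":"stop"}' \
                logsfile.txt
        '''
    }
}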
In my experience, running Jenkins directly on EC2 is a better choice than running it in ECS or Fargate.
Has anyone been able to configure Selenoid on AWS ECS? I am able to run the selenoid-ui container, but the Selenoid hub image keeps throwing an error about browsers.json. I have not found a way to add the browsers.json file, because the container stops before it executes the CMD command.
There is no point in running Selenoid on AWS ECS, as your setup won't scale (your browser containers will be launched on the same EC2 instance where your Selenoid container is running). With ECS you run your service on a cluster, so either your cluster contains only one EC2 instance, or you waste your compute resources.
If you don't need scaling, I'd suggest running Selenoid on a simple EC2 instance with Docker installed; a rough sketch follows below. If you do want scaling, then I suggest you take a look at the commercial version of Selenoid (called Moon), which you can run on AWS EKS.
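If you go the plain EC2 route, a rough sketch of starting Selenoid manually with a host-mounted browsers.json (paths are assumptions; the aerokube/cm helper can also generate this config for you):

# browsers.json lives on the host and is mounted into the container
mkdir -p /opt/selenoid/config
# ... place your browsers.json in /opt/selenoid/config ...

# Selenoid needs the Docker socket so it can launch browser containers
docker run -d --name selenoid -p 4444:4444 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /opt/selenoid/config:/etc/selenoid:ro \
    aerokube/selenoid:latest-release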