How to run commands in a Fargate task

I have a requirement where I have to create a Fargate task that can clone a GitLab repository (source code) and run a Maven build command to build the code.
There would then be another Fargate task that creates a Docker image out of it.
GitLab is on an EC2 instance.
Since we do not have exec access into containers on Fargate, what would be the best way to do this? (I have multiple repos on GitLab, so the repo that I want to clone and build is not going to be the same every time.)
I have been reading about the Amazon Elastic Container Service (ECS) / Fargate plugin for Jenkins, but I'm not sure whether Jenkins can be used to get into a Fargate container and run commands.

Nowadays you can use ECS Exec. Here's how to set it up: https://aws.amazon.com/blogs/containers/new-using-amazon-ecs-exec-access-your-containers-fargate-ec2/
Or, in short: https://www.ernestchiang.com/en/posts/2021/using-amazon-ecs-exec/
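As a rough sketch, assuming an existing cluster and service (the cluster, service, and container names below are placeholders, and the task role needs the SSM permissions described in the first link), enabling ECS Exec and opening a shell looks like this:

# Enable ECS Exec on an existing service and roll the tasks
aws ecs update-service \
    --cluster my-cluster \
    --service my-service \
    --enable-execute-command \
    --force-new-deployment

# Open an interactive shell in a running task
# (requires the Session Manager plugin for the AWS CLI)
aws ecs execute-command \
    --cluster my-cluster \
    --task <task-id> \
    --container my-container \
    --interactive \
    --command "/bin/sh"

Since the repo to clone changes every run, you could also skip exec entirely and pass the repository URL as a container environment override when calling run-task; exec is more useful for ad-hoc debugging.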

Related

Is it possible to install Apache Superset on an ECS container

I am working on Apache Superset. I was able to install it on a Linux EC2 instance using Docker; is there any possibility to install it on ECS?
There are a couple of approaches to this.
First, you can take the container image here and build an ECS task definition / ECS service around it, bringing it up standalone. Make sure you enable ECS Exec so that you can exec into the container and launch those commands. I have not tested this, but I see no reason why it should not work.
I have also spent some time trying to make the Docker Compose files in the Superset GitHub repo work with Amazon ECS. You can read more about my findings here.
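As a hedged sketch of the standalone route (the apache/superset image name is real, but the cluster, roles, subnet, and sizing below are placeholder assumptions):

# Register a one-container Fargate task definition for Superset
aws ecs register-task-definition \
    --family superset \
    --requires-compatibilities FARGATE \
    --network-mode awsvpc \
    --cpu 1024 \
    --memory 2048 \
    --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
    --task-role-arn arn:aws:iam::123456789012:role/supersetTaskRole \
    --container-definitions '[{"name":"superset","image":"apache/superset:latest","portMappings":[{"containerPort":8088}]}]'

# Run it standalone with ECS Exec enabled
aws ecs run-task \
    --cluster my-cluster \
    --launch-type FARGATE \
    --enable-execute-command \
    --task-definition superset \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],assignPublicIp=ENABLED}'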

Deploy Django Code with Docker to AWS EC2 instance without ECS

Here is my setup:
1. My Django code is hosted on GitHub.
2. I have Docker set up on my EC2 instance.
3. My deployment process is manual: I have to run git pull, docker build, and docker run on every single code change, using a docker service account.
How do I automate step 3 with AWS CodeDeploy or something similar? Every example I am seeing on the internet involves ECS or Fargate, which I am not ready to use yet.
Check out How to Use Docker Images from a Private Registry (e.g. Docker Hub) for Your Build Environment.
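If you want to stay on plain EC2, a minimal sketch of automating step 3 (the path, image name, and port below are placeholder assumptions) is a script like this, triggered by CodeDeploy, a webhook, or cron:

#!/bin/bash
# Re-deploy the app on the EC2 host: pull, rebuild, and restart the container
set -euo pipefail
cd /opt/myapp                          # placeholder checkout path
git pull origin main
docker build -t myapp:latest .
docker rm -f myapp 2>/dev/null || true # remove the old container if present
docker run -d --name myapp -p 8000:8000 myapp:latest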

Dockerized Jenkins server on AWS

I have a dockerized Jenkins build server set up as described below, and I want to move it to AWS.
I have some questions on how to do it. Thanks.
Is ECS the right choice for deploying dockerized Jenkins agents?
Does the Fargate launch type of ECS support Windows containers?
I know ECS can dynamically provision EC2 instances; can ECS provision like below?
a. If there is no job to build, no EC2 instance is running in the cluster.
b. If a build job starts, ECS dynamically launches an EC2 instance to run the dockerized agents that handle it.
c. After the build job is finished, the ECS cluster automatically stops or terminates the running EC2 instance.
==================================================================
Jenkins master:
Runs as a Linux container hosted on an Ubuntu virtual machine.
Jenkins agents:
Linux agent:
Runs as a Linux container hosted on the same Ubuntu virtual machine as the master.
Windows agents:
Run as Windows containers hosted on a Windows Server 2019 machine.
Well, I have some tips for you:
Yes, ECS can dynamically provision EC2 instances using Auto Scaling, but only when a CloudWatch metric crosses its threshold and fires an alarm, at which point auto scaling kicks in. Starting a task in ECS with a Jenkins master server and then starting one or two agents whenever you want to execute a job is neither a good tactic nor a practical idea: who is going to wake those tasks up?
If you want to run Jenkins in Docker on an EC2 instance, keep a master node running, and keep your unused agents stopped, starting them only when a job needs them, then in your Jenkinsfile you can call a Lambda function to start your agent. Here is an example in a Jenkinsfile:
stage('Start Infrastructure') {
    steps {
        sh '''
            #!/bin/bash
            # Fire-and-forget invocation of the Lambda that starts the agent
            aws lambda invoke \
                --function-name Wake_Up_Jenkins_Agent \
                --invocation-type Event \
                --log-type Tail \
                --payload '{"node":"NodeJS-Java-Agent","action":"start"}' \
                logsfile.txt
        '''
    }
}
Later you would add another stage to stop the agent again. Your master node needs to stay online, though, because it is the key component that is called by the repository webhook or by your CI/CD process. You also need to implement the Lambda function with the logic to start or stop the instance.
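What that Lambda boils down to is calling the EC2 start/stop API; as a rough sketch (the instance ID and its mapping to the agent node are placeholder assumptions), the CLI equivalents of its two actions are:

# Start the EC2 instance that hosts the "NodeJS-Java-Agent" node
aws ec2 start-instances --instance-ids i-0123456789abcdef0

# ...and stop it again from the tear-down stage
aws ec2 stop-instances --instance-ids i-0123456789abcdef0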
In my experience, running Jenkins directly on EC2 is a better choice than running it in ECS or Fargate.

Aerokube Selenoid on AWS ECS

Has anyone been able to configure Selenoid on AWS ECS? I am able to run the selenoid-ui container, but the Selenoid hub image keeps throwing an error about browsers.json. I have not been able to find a way to add the browsers.json file, because the container stops before it executes the CMD command.
There is no point in running Selenoid on AWS ECS, as your setup won't scale: your browser containers will be launched on the same EC2 instance where your Selenoid container is running. With ECS you run your service on a cluster, so either your cluster consists of a single EC2 instance, or you waste your compute resources.
If you don't need scaling, I'd suggest you run Selenoid on a simple EC2 instance with Docker installed. If you do want scaling, then I suggest you take a look at the commercial version of Selenoid (called Moon), which you can run on AWS EKS.
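As a minimal sketch of the plain-EC2 route (the config directory is an assumption; browsers.json goes in whatever directory you mount at /etc/selenoid):

# Put browsers.json in /opt/selenoid/config, then start Selenoid;
# the Docker socket is mounted so it can launch browser containers
docker run -d --name selenoid \
    -p 4444:4444 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /opt/selenoid/config:/etc/selenoid:ro \
    aerokube/selenoid:latest-release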

Jenkins: deploy to AWS ECS with docker compose

I have some misunderstanding. I have a Jenkins instance, an ECS cluster, and a docker-compose config file to compose my images and link up the containers.
After a git push, my Jenkins instance grabs all sources from all repos (webapp, api, lb) and runs a batch of operations such as build, copy files, etc.
After that, I have all the folders with Dockerfiles in a "ready for compose" state.
At this stage I can't work out how I should point my ECS cluster on AWS at the images built on Jenkins and compose them according to my docker-compose.yml config.
I would be glad of any useful information. Thanks.
First, you need to push your images from the Jenkins server into ECR, pushing every image individually to its own repo.
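As a sketch for one of the images (the account ID, region, and repo name are placeholders):

# Authenticate Docker to ECR, then tag and push the image
aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag webapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:latest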
Next, you create an ECS cluster and, inside it, an ECS service. For this service, you create and configure a Task Definition that runs your linked containers. You don't need docker-compose for this: in ECS you define the links between containers in the Task Definition configuration. You can add several container definitions to the same Task Definition, and ECS will link them together.
You can automate all of this from your Jenkins server by attaching an Instance Profile to it that allows it to call the ECS API. To deploy new images, all you need to do is push them to their ECR repos, register a new Task Definition revision pointing at them, and update the service that runs these task definitions. ECS will then trigger a rolling deployment on your behalf.
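A sketch of that last step from a Jenkins shell step (the cluster, service, and task definition family names are placeholders, and taskdef.json is a hypothetical file holding the container definitions):

# Register a new task definition revision pointing at the freshly pushed images
aws ecs register-task-definition \
    --cli-input-json file://taskdef.json

# Point the service at the new revision; ECS rolls the tasks over
aws ecs update-service \
    --cluster my-cluster \
    --service my-service \
    --task-definition webapp-stack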