docker build taking too much time in aws ecs - amazon-web-services

I have set up a master/slave configuration of Jenkins on AWS ECS and written a job that builds Docker images and pushes them to ECR. Each time the job runs it takes roughly the same amount of time, about 10 minutes. My Jenkins master runs in a container, and I have used the Amazon EC2 Container Service Plugin to configure the slave. I have mounted the Docker socket file so that the slave node has access to the Docker daemon, but it is not reusing the image layers already on the ECS instance. Each build starts from scratch.
Overview of each build:

Your docker build is probably not using Docker's layer caching mechanism. Please refer to this for building with the cache.
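For example, one common pattern (assuming the image already lives in ECR; the account ID, region and repository name below are placeholders) is to pull the previously pushed image and reuse its layers as a cache source, roughly like this:

# Log in to ECR (account ID and region are placeholders)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Pull the last pushed image so its layers are available locally (ignore failure on the very first build)
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest || true

# Build using the pulled image as a cache source
# (with BuildKit you may also need --build-arg BUILDKIT_INLINE_CACHE=1 when producing the cached image)
docker build \
  --cache-from 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest \
  -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest .

docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest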

Related

Local VMware takes a lot of time to pull the Docker image from AWS ECR

Recently I moved my registry from Docker Hub to AWS ECR.
I'm using Jenkins to pull the image and deploy to a local VMware host, with Docker Swarm as the container orchestration tool. When I was using Docker Hub, Jenkins was able to pull and deploy Docker services successfully, but with AWS ECR the Jenkins job is UNSTABLE.
The Jenkins job is timing out. When I checked on the server, some images were pulled successfully but some were not.
docker pull takes more time when we are using AWS ECR. Any idea?
This could be due to many reasons:
Network/firewall issues.
Storage volume slowness, etc.
I think you first need to narrow down the issue. Did you check the latency between the Jenkins worker and AWS ECR? I would suggest logging directly into the Jenkins worker and trying to pull the image there. If the direct pull works without slowness, you may have to dig into Jenkins to understand what's happening.
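Something along these lines, run directly on the worker, can help isolate whether the slowness is in the pull itself (the account ID, region and image name are placeholders):

# Authenticate the worker against ECR (placeholder account/region)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Time a pull of the same image the Jenkins job uses
time docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service:latest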
I was able to resolve the issue. Jenkins was timing out because of network slowness, so I removed the timeout limit from Jenkins and then it worked fine:
Go to Post-build Actions
Open Advanced
Set Exec timeout (ms) : 0
Save and rebuild the job

Deploy Django Code with Docker to AWS EC2 instance without ECS

Here is my setup.
My Django code is hosted on Github
I have Docker set up on my EC2 instance
My deployment process is manual. I have to run git pull, docker build and docker run on every single code change. I am using a docker service account to perform this step.
How do I automate step 3 with AWS CodeDeploy or something similar?
Every example I see on the internet involves ECS or Fargate, which I am not ready to use yet.
Check this out on how to use Docker images from a private registry (e.g. Docker Hub) for your build environment:
How to Use Docker Images from a Private Registry for Your Build Environment
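As an interim step, you could also wrap the manual commands in a script on the instance and have your CI job (or a CodeDeploy hook) call it; the checkout path, image name and port below are assumptions for illustration:

#!/bin/bash
# deploy.sh - rough sketch of automating the manual deploy steps
set -e

cd /home/ubuntu/my-django-app            # placeholder checkout path
git pull origin main                     # fetch the latest code

docker build -t my-django-app:latest .   # rebuild the image

# Replace the running container with the new image (container name and port are assumptions)
docker rm -f my-django-app 2>/dev/null || true
docker run -d --name my-django-app -p 8000:8000 my-django-app:latest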

How to run commands in a fargate task

I have a requirement where I have to create a Fargate task that can clone a GitLab repository (source code) and run a Maven build command to build the code.
And there would be another Fargate task that would create a Docker image out of it.
GitLab is on an EC2 instance.
Since we do not have exec access into the containers on Fargate, how and what would be the best way to do this? (I have multiple repos on GitLab, so the repo that I want to clone and build is not going to be the same every time.)
I have been reading about the Amazon Elastic Container Service (ECS) / Fargate plugin for Jenkins, but I'm not sure if Jenkins can be used to get into a Fargate container and run commands.
Nowadays you can use ECS Exec. Here's how to set it up: https://aws.amazon.com/blogs/containers/new-using-amazon-ecs-exec-access-your-containers-fargate-ec2/
Or, in short:
https://www.ernestchiang.com/en/posts/2021/using-amazon-ecs-exec/
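Once the task role has the required SSM permissions, you enable exec on the service and open a shell in the running task; the cluster, service, task and container names below are placeholders:

# Enable ECS Exec on an existing service (placeholder names)
aws ecs update-service \
  --cluster my-cluster \
  --service my-build-service \
  --enable-execute-command \
  --force-new-deployment

# Run a command inside a running Fargate task
aws ecs execute-command \
  --cluster my-cluster \
  --task <task-id> \
  --container my-container \
  --interactive \
  --command "/bin/sh"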

Dockerized Jenkins server on AWS

I have a dockerized Jenkins build server set up like below and I want to move it to AWS.
I have some questions on how to do it. Thanks.
Is ECS the right choice to deploy dockerized Jenkins agents?
Does the Fargate launch type of ECS support Windows containers?
I know ECS can dynamically provision EC2 instances; can ECS provision like below?
a. If there is no job to build, there is no EC2 instance running in the cluster.
b. If a build job starts, ECS dynamically launches an EC2 instance to run the dockerized agents to handle it.
c. After the build job is finished, the ECS cluster automatically stops or terminates the running EC2 instance.
==================================================================
Jenkins master:
Runs as a Linux container hosted on an Ubuntu virtual machine.
Jenkins Agents:
Linux Agent:
Runs as a Linux container hosted on the same Ubuntu virtual machine as the master.
Windows Agents:
Run as Windows containers hosted on Windows Server 2019.
Well, I have some tips for you:
Yes, ECS can dynamically provision EC2 instances using autoscaling, but only when the threshold of a CloudWatch metric is reached, an alarm fires, and autoscaling kicks in. Starting a task in ECS with a Jenkins master server and then starting one or two agents whenever you need to execute a job is neither a good tactic nor a practical idea: who is going to wake those tasks up?
If you want to run Jenkins in Docker inside an EC2 instance, you have a master node running, and you want to keep your unused agents stopped and start them only when a job needs them, you could call a Lambda function from your Jenkinsfile to start your agent. Here is an example in a Jenkinsfile:
stage('Start Infrastructure') {
    steps {
        sh '''
            #!/bin/bash
            # Invoke the Lambda function that starts the stopped agent instance
            aws lambda invoke --function-name Wake_Up_Jenkins_Agent --invocation-type Event --log-type Tail --payload '{"node":"NodeJS-Java-Agent","action":"start"}' logsfile.txt
        '''
    }
}
And later another stage to stop your agent (see the sketch below). Note that your master node needs to stay online, because it is the key component that is called either from the repository or from your CI/CD process. You also need to implement the Lambda with the logic to start or stop the instance.
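A matching stop stage could look like this, assuming the same Lambda also accepts a "stop" action for the node:

stage('Stop Infrastructure') {
    steps {
        sh '''
            #!/bin/bash
            # Invoke the same Lambda to stop the agent once the build is done
            aws lambda invoke --function-name Wake_Up_Jenkins_Agent --invocation-type Event --log-type Tail --payload '{"node":"NodeJS-Java-Agent","action":"stop"}' logsfile.txt
        '''
    }
}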
In my experience, running Jenkins directly on EC2 is a better choice than running it in ECS or Fargate.

How to authenticate docker client commands in AWS?

The authentication below can be implemented using certificates (client & server) for any human user whose docker client talks to the docker daemon:
But a Jenkins pipeline also runs docker commands that talk to the docker daemon.
How do I authenticate a Jenkins pipeline to run specific docker commands, where the pipeline is launched as a Jenkins slave container on AWS EC2 for every new commit in Git? Does the ECS cluster approach to launching the pipeline task help with authentication?
You can run docker login from your Jenkins script and store the secrets in the Jenkins config. You could also pre-install credentials on the machine as part of your build process. If you are talking about permissions to talk to the daemon, you have to give the jenkins user the appropriate permissions (usually by adding it to the docker group).
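For ECR specifically, a minimal sketch of the login step inside a Jenkins shell step could look like this, relying on the instance or task IAM role rather than stored passwords (account ID and region are placeholders):

# Authenticate the docker client against ECR using the instance/task IAM role
# (account ID and region are placeholders)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# docker push/pull against that registry is now authenticated (the ECR token is valid for about 12 hours)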