I can't find the right solution here.
I have Jenkins installed on EC2, where I added a pipeline linked to a GitHub repository containing a docker-compose.yml file.
How do I push this Docker image to ECR on AWS?
Documentation on pushing to ECR is available here
The steps are:
Authenticate Docker with ECR.
Build your Docker image.
Tag the Docker image in the required ECR format:
docker tag <image>:<version> <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<image>:<version>
Push the tagged image:
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<image>:<version>
The simplest way to accomplish this is with a shell script (see the sketch below). There are also Jenkins plugins that will do this for you.
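A minimal sketch of such a script, assuming AWS CLI v2 is installed and that the placeholder values below are replaced with your own account ID, region, image name, and version:

#!/bin/bash
set -e

# Placeholders -- replace with your own values
AWS_ACCOUNT_ID=<aws_account_id>
REGION=<region>
IMAGE=<image>
VERSION=<version>
ECR_REPO=$AWS_ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$IMAGE

# 1. Authenticate Docker with ECR (AWS CLI v2 syntax)
aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com

# 2. Build the image
docker build -t $IMAGE:$VERSION .

# 3. Tag it in the ECR format
docker tag $IMAGE:$VERSION $ECR_REPO:$VERSION

# 4. Push it
docker push $ECR_REPO:$VERSION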
Related
I have an EC2 instance for testing. I deployed it using OpsWorks, and now I'm creating a new Jenkins job to deploy automatically. What I want to do is:
when someone pushes to a branch,
the Jenkins server builds a Docker image,
pushes the image to ECR,
and the EC2 instance pulls the ECR image, builds the Docker container, and runs it.
I have made a job that uses ECR and deploys to ECS Fargate, but I have never used ECR to deploy to a pre-existing EC2 instance. I wonder whether this is possible.
Pre-requisite
On your EC2 instance you first have to install Docker.
There are many ways to do it (one common approach is sketched below).
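A minimal sketch for Amazon Linux 2 (the package name and the ec2-user account are Amazon Linux defaults; adjust for other distributions):

# Install and start Docker on Amazon Linux 2
sudo yum install -y docker
sudo systemctl enable --now docker
# Let the default user run docker without sudo (re-login required afterwards)
sudo usermod -aG docker ec2-user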
Once Jenkins builds and pushes the Docker image to ECR, you can add a further step to the Jenkins build: Jenkins SSHes into the EC2 instance, then pulls and runs the Docker image.
Alternatively, Jenkins can trigger a shell script on the EC2 instance; that script holds all the logic to pull the latest image, stop the existing container, and so on (see the sketch below).
From Jenkins you can also do this via an Ansible script.
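A minimal sketch of such a deploy script on the EC2 instance, assuming the image lives in ECR and that the repository and container name used here (my-app and the placeholders) are replaced with your own:

#!/bin/bash
set -e

# Placeholders -- replace with your own values
AWS_ACCOUNT_ID=<aws_account_id>
REGION=<region>
REPO=$AWS_ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/my-app

# Authenticate the instance's Docker daemon with ECR (AWS CLI v2 syntax)
aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com

# Pull the latest image
docker pull $REPO:latest

# Stop and remove the existing container, if any
docker rm -f my-app 2>/dev/null || true

# Run the new container
docker run -d --name my-app -p 80:8080 $REPO:latest

Jenkins can then invoke it over SSH, e.g. ssh ec2-user@<instance-ip> 'bash deploy.sh'.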
Current Situation:
We have CI and CD as below.
We have poll SCM configured in Jenkins. When new commits arrive, Jenkins starts the build through the Jenkinsfile: it looks for the POM and builds the jar file; once the jar is created, it builds a Docker image from it with the help of a Dockerfile, and the image is pushed to Docker Hub (a private Docker Hub repository). For CD, we then use Portainer to deploy the latest image to the AWS Docker Swarm cluster manually.
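For reference, the build steps described above boil down to roughly the following shell commands run from the Jenkinsfile (the image name and Docker Hub credentials here are placeholders, not the actual project's):

# Build the jar from the POM
mvn -B clean package

# Build the Docker image from the Dockerfile, tagged with the Jenkins build number
docker build -t <dockerhub_user>/my-app:$BUILD_NUMBER .

# Push to the private Docker Hub repository (credentials normally come from Jenkins credentials, not plain text)
echo "<dockerhub_password>" | docker login -u <dockerhub_user> --password-stdin
docker push <dockerhub_user>/my-app:$BUILD_NUMBER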
We are trying to achieve CD in the following fashion:
Now I have to deploy the latest image from Docker Hub to AWS (the Docker Swarm cluster) automatically through Jenkins, as a one-click deployment.
How can we achieve this deployment automatically, using Ansible or Portainer, so that build and deploy happen together?
If so, please suggest a reference or the steps to achieve this.
Is there any better approach than Ansible?
I'm new to AWS, so any help would be appreciated.
I'm attempting to run Jenkins through Docker on AWS. I found this article https://docs.aws.amazon.com/aws-technical-content/latest/jenkins-on-aws/containerized-deployment.html
Can anyone share a better step-by-step tutorial to achieve this? The page above seems incomplete.
It talks about "The Dockerfile should also contain the steps to install the Jenkins Amazon ECS plugin" but does not show how to go about installing the plugin using the Dockerfile.
thanks
Please follow the steps below:
Launch an ECS cluster according to your needs.
Install Docker on your local machine. For example, on Ubuntu: sudo apt-get install docker.io
sudo systemctl start docker
Create a new folder for our Jenkins Docker image. Inside it, create a new Dockerfile with the following contents.
FROM jenkins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
Create plugins.txt in the same folder and add the line below:
amazon-ecs:1.3
Log in to ECR using the AWS CLI (configure the AWS CLI first with your credentials, e.g. aws configure).
aws ecr get-login --region <REGION>
Run the output returned from above command to docker login.
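For example (the first form matches the AWS CLI v1 command shown above; the second is the AWS CLI v2 equivalent, since get-login was removed in v2):

# AWS CLI v1: evaluate the docker login command that get-login prints
eval $(aws ecr get-login --region <REGION> --no-include-email)

# AWS CLI v2: use get-login-password instead
aws ecr get-login-password --region <REGION> | docker login --username AWS --password-stdin <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com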
sudo docker build -t jenkins_master .
sudo docker tag jenkins_master:latest <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_master:latest
Create a repository in ECR for this image:
aws ecr create-repository --repository-name jenkins_master
Push the image to AWS ECR.
sudo docker push <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_master:latest
Our Jenkins Docker image is ready, but data stored by this Jenkins server will not be persistent. To store data permanently, we will create another Docker image that creates a volume with a mount point. For that, create a new directory for this new Docker image and inside it create another Dockerfile with the content below.
FROM jenkins
VOLUME ["/var/jenkins_home"]
Again, follow the same commands to push this new image to its own ECR repository.
sudo docker build -t jenkins_dv .
sudo docker tag jenkins_dv:latest <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_dv:latest
aws ecr create-repository --repository-name jenkins_dv
sudo docker push <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_dv:latest
Now our images are ready. We will use these images to run them as a service on our ECS cluster. For that we need to install ecs-cli, using the command below for Linux.
sudo curl -o /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
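The downloaded binary still needs execute permission. Optionally you can also store the cluster and region in the ecs-cli configuration instead of passing --cluster on every call; flag names can vary between ecs-cli versions, so treat this as a sketch:

# Make the downloaded binary executable and verify it
sudo chmod +x /usr/local/bin/ecs-cli
ecs-cli --version

# Optional: remember cluster and region for later commands
ecs-cli configure --cluster <cluster_name> --region <REGION>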
Create a new text file (docker_compose.txt, as referenced in the final step) with the contents below; it holds the Jenkins service configuration in Docker Compose format.
jenkins_master:
  image: jenkins_master
  cpu_shares: 100
  mem_limit: 2000M
  ports:
    - "8080:8080"
    - "50000:50000"
  volumes_from:
    - jenkins_dv
jenkins_dv:
  image: jenkins_dv
  cpu_shares: 100
  mem_limit: 500M
Finally, bring this service up on your newly created cluster using the file above.
ecs-cli compose --file docker_compose.txt service up --cluster <cluster_name>
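To verify that the service came up, you can list the running containers in the cluster (a standard ecs-cli command; output format varies by version):

ecs-cli ps --cluster <cluster_name>

Jenkins should then be reachable on port 8080 of the container instance.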
Hope this helps!
I have an EC2 instance on AWS.
I tried:
SSHing into that box,
installing Docker,
pulling the Docker image from my repository URI:
docker pull bheng-api-revision-test:latest 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest
tagging it:
docker tag bheng-api-revision-test:latest 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest
I'm trying to build it, and I don't know what command I should use.
I've tried
docker build bheng-api-revision-test:latest 616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-revision-test:latest .
I kept getting
How would one go about debugging this further?
Google Container Registry documentation explains that in order to pull and push images to gcr.io, you have to prefix docker push and pull commands with gcloud preview.
gcloud preview docker push gcr.io/<gcr_namespace>/<docker-image>
gcloud preview docker pull gcr.io/<gcr_namespace>/<docker-image>
Is there a way to use Google Container Registry with the docker CLI directly, without gcloud preview prefix?
You can use the following command:
gcloud preview docker -a
to update your local Docker configuration with gcr.io credentials.
And then use the regular docker CLI commands to push and pull images:
docker build -t gcr.io/<gcr_namespace>/<docker-image> .
docker push gcr.io/<gcr_namespace>/<docker-image>
Or for existing images:
docker tag <docker-image> gcr.io/<gcr_namespace>/<docker-image>
docker push gcr.io/<gcr_namespace>/<docker-image>
docker pull gcr.io/<gcr_namespace>/<docker-image>
This configuration is good for interoperability with the native docker CLI, but it is not ideal, as gcloud preview docker -a will need to be run again after the credentials expire.
When building a new image, tag it directly to gcr.io during a docker build:
gcloud preview docker -a
docker build -t gcr.io/<project_id>/<docker-image> <directory>
docker push gcr.io/<project_id>/<docker-image>