How to push Docker image to ECR in Jenkins?

I am working with Jenkins. I am trying to push an image to ECR, and I am using local Docker to build the images.
Below is my Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                bat 'docker build -t sampleapp -f SampleApp/Dockerfile .'
            }
        }
        stage('Push image') {
            steps {
                withDockerRegistry([url: "https://536703334988.dkr.ecr.ap-southeast-2.amazonaws.com/test-repository", credentialsId: "ecr:ap-southeast-2:demo-ecr-credentials"]) {
                    bat 'docker push sampleapp:latest'
                }
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
With the above code I am able to build and create an image. In the second stage I am facing issues; I am getting the error below:
$ docker login -u AWS -p ******** https://536703334988.dkr.ecr.ap-southeast-2.amazonaws.com/test-repository
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
C:\Program Files (x86)\Jenkins\workspace\SampleAppPipeLine>docker push sampleapp:latest
The push refers to repository [docker.io/library/sampleapp]
a160522d6d0e: Preparing
2e2c2606bd45: Preparing
9b0a482c69b1: Preparing
995a0cc6a5f6: Preparing
c1b55dcb46c2: Preparing
cf5b3c6798f7: Preparing
cf5b3c6798f7: Waiting
denied: requested access to the resource is denied
Can someone help me to fix this issue? Any help would be appreciated.
Thanks.

By default, docker push goes to Docker Hub, so a bare image name is resolved to docker.io/library/sampleapp. For an AWS ECR repository you have to tag the image with the full ECR URI and push that tag:
docker build -t test-repository .
docker tag test-repository:latest 536703334988.dkr.ecr.ap-southeast-2.amazonaws.com/test-repository:latest
docker push 536703334988.dkr.ecr.ap-southeast-2.amazonaws.com/test-repository:latest
Make sure the test-repository repository has already been created in ECR.
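Applied to the Jenkinsfile from the question, the 'Push image' stage could then look roughly like this (a sketch reusing the registry URL and credentials ID from the question; adjust to your setup):
stage('Push image') {
    steps {
        withDockerRegistry([url: "https://536703334988.dkr.ecr.ap-southeast-2.amazonaws.com", credentialsId: "ecr:ap-southeast-2:demo-ecr-credentials"]) {
            // Re-tag the locally built image with the full ECR repository URI, then push that tag
            bat 'docker tag sampleapp:latest 536703334988.dkr.ecr.ap-southeast-2.amazonaws.com/test-repository:latest'
            bat 'docker push 536703334988.dkr.ecr.ap-southeast-2.amazonaws.com/test-repository:latest'
        }
    }
}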

Related

Jenkins pipeline. 401 Error when using JIB via authentication on AWS

I'm trying to run a Jenkins pipeline that uses Jib to deploy the Docker image of my project to the AWS ECR registry:
stage('Build and verify') {
    steps {
        withAWS(credentials: 'blockforgery-aws-credential', region: 'eu-south-1') {
            configFileProvider([configFile(fileId: 'blockforgery-mvn-settings', variable: 'SETTINGS_XML')]) {
                echo 'Build and deploy on register (AWS)'
                sh 'mvn -s $SETTINGS_XML -Dspring.profiles.active=${SPRING_PROFILE} -f $PROJECT_DIRECTORY/pom.xml compile jib:build'
            }
        }
    }
}
I get a 401 authentication error:
[ERROR] Failed to execute goal com.google.cloud.tools:jib-maven-plugin:3.2.1:build (default-cli) on project blockforgery.backend: Build image failed, perhaps you should make sure your credentials for '****.dkr.ecr.eu-south-1.amazonaws.com/block-forgery' are set up correctly. See https://github.com/GoogleContainerTools/jib/blob/master/docs/faq.md#what-should-i-do-when-the-registry-responds-with-unauthorized for help: Unauthorized for ****.dkr.ecr.eu-south-1.amazonaws.com/block-forgery: 401 Unauthorized -> [Help 1]
Is there a way to authenticate to AWS using Jib and Jenkins? Thank you.
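One way to get Jib authenticated against ECR from Jenkins is to fetch a short-lived token with the AWS CLI and hand it to Jib explicitly through its jib.to.auth properties. A sketch, assuming the AWS CLI v2 is available on the agent and the withAWS block exposes valid credentials:
stage('Build and verify') {
    steps {
        withAWS(credentials: 'blockforgery-aws-credential', region: 'eu-south-1') {
            configFileProvider([configFile(fileId: 'blockforgery-mvn-settings', variable: 'SETTINGS_XML')]) {
                // aws ecr get-login-password returns a 12-hour token; Jib accepts it as the registry password
                sh '''
                    ECR_TOKEN=$(aws ecr get-login-password --region eu-south-1)
                    mvn -s $SETTINGS_XML -Dspring.profiles.active=${SPRING_PROFILE} \
                        -f $PROJECT_DIRECTORY/pom.xml compile jib:build \
                        -Djib.to.auth.username=AWS \
                        -Djib.to.auth.password=$ECR_TOKEN
                '''
            }
        }
    }
}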

How to remove an image from Artifact Registry automatically

Using gcloud I can list and remove the images I want with these commands:
gcloud artifacts docker images list LOCATION/PROJECT-ID/REPOSITORY-ID/IMAGE \
    --include-tags --filter="tags:IPLA*" --filter="create_time>2022-04-20T00:00:00"
and then
gcloud artifacts docker images delete LOCATION/PROJECT-ID/REPOSITORY-ID/IMAGE:tag
I am trying to automate that so I can filter by tag name and date and run it every day or week.
I've tried to run it inside a Cloud Function, but I don't think that is allowed.
const { spawn } = require("child_process");

const listening = spawn('gcloud', ['artifacts', 'docker', 'images', 'list',
    'LOCATION/PROJECT-ID/REPOSITORY-ID/IMAGE',
    '--include-tags',
    '--filter="tags:IPLA*"',
    '--filter="create_time>2022-04-20T00:00:00"'
]);

listening.stdout.on("data", data => {
    console.log(`stdout: ${data}`);
});

listening.stderr.on("data", data => {
    console.log(`stderr: ${data}`);
});

listening.on('error', (error) => {
    console.log(`error: ${error.message}`);
});
I get this error when running the cloud function:
error: spawn gcloud ENOENT
I'd accept any other solution, such as a Cloud Build trigger or Terraform, as long as it can live on Google Cloud.
You are using Cloud Functions, a serverless product where you deploy your code to run on infrastructure you don't manage.
In your code you assume that gcloud is installed in the runtime. You can't make that assumption: it isn't.
You could use another serverless product where you do manage the runtime environment: Cloud Run. The principle is to build your own container (and therefore install whatever you want in it) and then deploy it. There you can use the gcloud command, because you know it exists in the image.
However, that's not the best option. You have two better alternatives.
First, use something already built by a Google Cloud Developer Advocate (Seth Vargo). It's called GCR Cleaner and it removes images older than a given age.
Or call the Artifact Registry REST API directly to perform exactly the same operation as gcloud, but without gcloud. If you want a shortcut, run the gcloud command with the --log-http flag to display all the API calls performed by the CLI; copy the URLs and parameters, and enjoy!
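For illustration, a hedged sketch of that second route (all identifiers are placeholders as in the question; verify the exact REST resource names against the Artifact Registry API reference):
# See the raw HTTP calls gcloud makes, to discover the REST endpoints and parameters
gcloud artifacts docker images list LOCATION-docker.pkg.dev/PROJECT-ID/REPOSITORY-ID/IMAGE \
    --include-tags --log-http

# Call the Artifact Registry REST API directly with an access token
TOKEN=$(gcloud auth print-access-token)
curl -X DELETE -H "Authorization: Bearer ${TOKEN}" \
    "https://artifactregistry.googleapis.com/v1/projects/PROJECT-ID/locations/LOCATION/repositories/REPOSITORY-ID/packages/IMAGE/versions/sha256:DIGEST"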
Initially I started looking into the solution suggested by Guillaume, but deploying a whole image just to clean the Artifact Registry looked like overkill. I ended up finding a lighter approach.
I created a shell script that cleans the images with the filters I wanted:
#!/usr/bin/env bash

_cleanup() {
    image_path="$location-docker.pkg.dev/$project_id/$repository_id/$image_name"
    echo "Starting to filter: $image_path"
    tags=$(gcloud artifacts docker images list $image_path \
        --include-tags \
        --filter="tags:IPLA* AND UPDATE_TIME.date('%Y-%m-%d', Z)<=$(date --date="-$older_than_days days" +'%Y-%m-%d')" \
        --format='value(TAGS)')
    if [ -z "$tags" ]; then
        echo "No images to clean"
    else
        echo "Images found: $tags"
        for tag in $tags; do
            echo "Deleting image: $image_path:$tag"
            gcloud artifacts docker images delete "$image_path:$tag" --quiet
        done
    fi
}

location=$1
project_id=$2
repository_id=$3
image_name=$4      # In this case I just want to clean old branches of the same image
older_than_days=$5 # e.g. 7 - number of days to keep in the repository

_cleanup

echo
echo "DONE"
Then I created a scheduled trigger on Cloud Build for the following cloudbuild.yaml file:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    id: 'Clean up older versions'
    entrypoint: 'bash'
    args: [ 'cleanup-old-images.sh', '$_LOCATION', '$PROJECT_ID', '$_REPOSITORY_ID', '$_IMAGE_NAME', '$_OLDER_THAN_DAYS' ]
timeout: 1200s
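The same config can also be run manually (outside the scheduled trigger) by submitting the directory that contains the script and passing the substitutions explicitly; a sketch with placeholder values:
gcloud builds submit . --config=cloudbuild.yaml \
    --substitutions=_LOCATION=us-central1,_REPOSITORY_ID=my-repository-id,_IMAGE_NAME=my-image,_OLDER_THAN_DAYS=7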
#!/usr/bin/env bash

_cleanup() {
    image_path="$2-docker.pkg.dev/$project_id/$1"
    echo "Starting to filter: $image_path"
    images=$(gcloud artifacts docker images list $image_path \
        --filter="UPDATE_TIME.date('%Y-%m-%d', Z)<=$(date --date="-1 years" +'%Y-%m-%d')" \
        --format='value(IMAGE)')
    if [ -z "$images" ]; then
        echo "No images to clean"
    else
        echo "Images found: $images"
        for each in $images; do
            echo "Deleting image: $each"
            gcloud artifacts docker images delete "$each" --quiet
        done
    fi
}

project_id=$1

gcloud artifacts repositories list --format="value(REPOSITORY,LOCATION)" --project=$project_id | tee -a repo.txt

while read p; do
    sentence=$p
    stringarray=($sentence)
    _cleanup ${stringarray[0]} ${stringarray[1]}
done < repo.txt

echo
echo "DONE"
rm -rf repo.txt
echo "Deleting repo.txt file"

Image permission in jenkins docker agent

I am using a normal Jenkins installation (NOT THE DOCKER IMAGE) on a normal AWS EC2 instance, with Docker Engine installed alongside Jenkins.
I have a simple Jenkins pipeline like this:
pipeline {
    agent none
    stages {
        stage('Example Build') {
            agent {
                docker {
                    image 'cypress/base:latest'
                    args '--privileged --env CYPRESS_CACHE_FOLDER=~/.cache'
                }
            }
            steps {
                sh 'ls'
                sh 'node --version'
                sh 'yarn install'
                sh 'make e2e-test'
            }
        }
    }
}
This makes the pipeline fail in the yarn install step while installing Cypress, although all of its dependencies are satisfied by the cypress/base image.
ERROR LOG FROM JENKINS
error /var/lib/jenkins/workspace/Devops-Capstone-Project_master/node_modules/cypress: Command failed.
Exit code: 1
Command: node index.js --exec install
Arguments:
Directory: /var/lib/jenkins/workspace/Devops-Capstone-Project_master/node_modules/cypress
Output:
Cypress cannot write to the cache directory due to file permissions
See discussion and possible solutions at
https://github.com/cypress-io/cypress/issues/1281
----------
Failed to access /.cache:
EACCES: permission denied, mkdir '/.cache'
After some investigation I found that it still fails even though I provided the environment variable CYPRESS_CACHE_FOLDER=~/.cache to override the default cache location in the root directory, and also passed --privileged: for some reason Jenkins and Docker force their own args and user mapping from the Jenkins host.
I have also tried providing -u 1000:1000 to override the user mapping, but it didn't work.
What could possibly be wrong? Any recommendations or workarounds for this issue?
Thanks.
I have found a workaround by creating a Dockerfile to build the image and passing the Jenkins user ID and group to it as build arguments, as described in this thread.
But this is not guaranteed to work on multi-node (master -> agents) Jenkins installations, as the Jenkins user ID and group may differ between nodes.
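For reference, a minimal sketch of that workaround (Dockerfile.cypress is a hypothetical filename; the UID/GID values must match the user Jenkins runs as on the host, and may clash with users that already exist in the base image, in which case you would reuse or modify that user instead):
# Dockerfile.cypress
FROM cypress/base:latest
ARG USER_ID
ARG GROUP_ID
# Create a user that matches the Jenkins host user so the workspace and cache are writable
RUN groupadd -g ${GROUP_ID} jenkins && \
    useradd -m -u ${USER_ID} -g ${GROUP_ID} jenkins
USER jenkins
ENV CYPRESS_CACHE_FOLDER=/home/jenkins/.cache

// In the Jenkinsfile, build the agent image from that Dockerfile
agent {
    dockerfile {
        filename 'Dockerfile.cypress'
        // Example values; use the output of `id -u jenkins` / `id -g jenkins` on the host
        additionalBuildArgs '--build-arg USER_ID=1000 --build-arg GROUP_ID=1000'
    }
}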

Pushing docker image through jenkins

I'm pushing a Docker image through a Jenkins pipeline, but I'm getting the following error:
ERROR: Could not find credentials matching
gcr:["google-container-registry"]
I tried with:
gcr:["google-container-registry"]
gcr:[google-container-registry]
gcr:google-container-registry
google-container-registry
but none of them worked.
In the global credentials I have:
NAME: google-container-registry
KIND: Google Service Account from private key
DESCRIPTION: A Google robot account for accessing Google APIs and
services.
The proper syntax is the following (provided your gcr credentials id is 'google-container-registry'):
docker.withRegistry("https://gcr.io", "gcr:google-container-registry") {
sh "docker push [your_image]"
}
Check whether you have the https://plugins.jenkins.io/google-container-registry-auth/ plugin installed.
After the plugin is installed, use the gcr:credential-id syntax.
Example:
stage("docker build"){
Img = docker.build(
"gcpProjectId/imageName:imageTag",
"-f Dockerfile ."
)
}
stage("docker push") {
docker.withRegistry('https://gcr.io', "gcr:credential-id") {
Img.push("imageTag")
}
}
Go to Jenkins → Manage Jenkins → Manage Plugins and install these plugins:
Google Container Registry
Google OAuth Credentials
CloudBees Docker Build and Publish
Then go to Jenkins → Credentials → Global Credentials → Add Credentials, choose the desired ‘Project Name’ and upload the JSON key file.
Jenkinsfile:
stage('Deploy Image') {
steps{
script {
docker.withRegistry( 'https://gcr.io', "gcr:${ID}" ) {
dockerImage.push("$BUILD_NUMBER")
dockerImage.push('latest')
}
}
}
}
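The dockerImage variable above is assumed to be created in an earlier stage with the Docker Pipeline plugin, for example (project ID and image name are placeholders):
stage('Build Image') {
    steps {
        script {
            dockerImage = docker.build("gcr.io/my-gcp-project/my-image:$BUILD_NUMBER")
        }
    }
}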

Ballerina DEPLOYING ON DOCKER Sample

I'm trying to run the "DEPLOYING ON DOCKER" sample in this.
Q1) When I call the service deployed on Docker, it gives me a 500.
The logs in Docker say: error: wso2.twitter:TwitterError, message: bad Authentication data.
It seems the twitter.toml file is not inside the Docker container. That makes sense, because I never mentioned in the commands below that such a file should be included while building the Docker image.
$ ballerina build hello_service.bal
$ docker run -d -p 9090:9090 registry.hub.docker.com/helloworld:v1.0
$ curl -d "Hello Ballerina" -X POST localhost:9090
How can I provide the config file?
Q2) What's the use of registry here?
// Docker configurations
@docker:Config {
    registry: "registry.hub.docker.com",
    name: "helloworld",
    tag: "v1.0"
}
The following annotation should be added to the Ballerina service. It copies the file into the Docker container, and setting isBallerinaConf: true passes the toml file to the ballerina run command as its config file.
@docker:CopyFiles {
    files: [{ source: "./twitter.toml", target: "/opt/twitter.toml", isBallerinaConf: true }]
}
The registry is used to push the image to a remote Docker registry.
Refer to sample3 for usage. The final Docker image would be:
registry.hub.docker.com/helloworld:v1.0
https://github.com/ballerinax/docker/tree/master/samples/sample3
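In other words, once ballerina build has produced the image with that registry prefix in its name, it can be pushed with plain Docker (a sketch based on the annotation above, assuming you are logged in to the registry):
ballerina build hello_service.bal
docker push registry.hub.docker.com/helloworld:v1.0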
For Ballerina 1.0.4 it's:
@docker:CopyFiles {
    files: [{ sourceFile: "./ballerina.conf", target: "/opt/ballerina.conf", isBallerinaConf: true }]
}
according to
https://ballerina.io/learn/api-docs/ballerina/docker/records/FileConfig.html