Pushing a Docker image through Jenkins - google-cloud-platform

I'm pushing a Docker image through a Jenkins pipeline, but I'm getting the following error:
ERROR: Could not find credentials matching
gcr:["google-container-registry"]
I tried with:
gcr:["google-container-registry"]
gcr:[google-container-registry]
gcr:google-container-registry
google-container-registry
but none of them worked.
In the global credentials I have:
NAME: google-container-registry
KIND: Google Service Account from private key
DESCRIPTION: A Google robot account for accessing Google APIs and
services.

The proper syntax is the following (provided your GCR credentials ID is 'google-container-registry'):
docker.withRegistry("https://gcr.io", "gcr:google-container-registry") {
    sh "docker push [your_image]"
}

Check if you have the https://plugins.jenkins.io/google-container-registry-auth/ plugin installed.
After the plugin is installed, use the gcr:credential-id syntax.
Example:
stage("docker build"){
Img = docker.build(
"gcpProjectId/imageName:imageTag",
"-f Dockerfile ."
)
}
stage("docker push") {
docker.withRegistry('https://gcr.io', "gcr:credential-id") {
Img.push("imageTag")
}
}

Go to Jenkins → Manage Jenkins → Manage Plugins and install the following plugins:
Google Container Registry
Google OAuth Credentials
CloudBees Docker Build and Publish
Then go to Jenkins → Credentials → Global Credentials → Add Credentials, choose the desired ‘Project Name’, and upload the service account JSON key file.
Jenkinsfile:
stage('Deploy Image') {
    steps {
        script {
            docker.withRegistry('https://gcr.io', "gcr:${ID}") {
                dockerImage.push("$BUILD_NUMBER")
                dockerImage.push('latest')
            }
        }
    }
}
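For completeness, the dockerImage variable used in this Jenkinsfile would have been created in an earlier stage. A minimal sketch of such a stage (the image name gcpProjectId/my-app is a placeholder, following the same pattern as the build example earlier):
stage('Build Image') {
    steps {
        script {
            // Placeholder image name; withRegistry('https://gcr.io', ...) in the
            // deploy stage supplies the registry when the image is pushed
            dockerImage = docker.build("gcpProjectId/my-app", "-f Dockerfile .")
        }
    }
}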

Related

Jenkins pipeline. 401 Error when using JIB via authentication on AWS

I'm trying to run a Jenkins pipeline that uses Jib to deploy the Docker image of my project to the AWS ECR registry:
stage('Build and verify') {
    steps {
        withAWS(credentials: 'blockforgery-aws-credential', region: 'eu-south-1') {
            configFileProvider([configFile(fileId: 'blockforgery-mvn-settings', variable: 'SETTINGS_XML')]) {
                echo 'Build and deploy on register (AWS)'
                sh 'mvn -s $SETTINGS_XML -Dspring.profiles.active=${SPRING_PROFILE} -f $PROJECT_DIRECTORY/pom.xml compile jib:build'
            }
        }
    }
}
I get a 401 authentication error:
[ERROR] Failed to execute goal com.google.cloud.tools:jib-maven-plugin:3.2.1:build (default-cli) on project blockforgery.backend: Build image failed, perhaps you should make sure your credentials for '****.dkr.ecr.eu-south-1.amazonaws.com/block-forgery' are set up correctly. See https://github.com/GoogleContainerTools/jib/blob/master/docs/faq.md#what-should-i-do-when-the-registry-responds-with-unauthorized for help: Unauthorized for ****.dkr.ecr.eu-south-1.amazonaws.com/block-forgery: 401 Unauthorized -> [Help 1]
Is there a way to authenticate on AWS using Jib and Jenkins? Thank you
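One possible approach (a sketch, not a verified fix): assuming the AWS CLI v2 is available on the Jenkins agent, fetch a short-lived ECR token inside withAWS and hand it to Jib explicitly through its jib.to.auth.* properties:
stage('Build and verify') {
    steps {
        withAWS(credentials: 'blockforgery-aws-credential', region: 'eu-south-1') {
            configFileProvider([configFile(fileId: 'blockforgery-mvn-settings', variable: 'SETTINGS_XML')]) {
                // Fetch a temporary ECR password and pass it to Jib as explicit credentials
                sh '''
                  ECR_PASS=$(aws ecr get-login-password --region eu-south-1)
                  mvn -s $SETTINGS_XML -Dspring.profiles.active=${SPRING_PROFILE} \
                      -f $PROJECT_DIRECTORY/pom.xml compile jib:build \
                      -Djib.to.auth.username=AWS \
                      -Djib.to.auth.password="$ECR_PASS"
                '''
            }
        }
    }
}
Alternatively, installing the amazon-ecr-credential-helper (docker-credential-ecr-login) on the agent should let Jib discover ECR credentials on its own.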

Next JS serverless deployment on AWS ECS/Fargate: environment variable issue

So my goal is to deploy a serverless Dockerized Next.js application on ECS/Fargate.
When I build my project with docker build . -f development.Dockerfile --no-cache -t myapp:latest, everything runs successfully, except that the Docker build doesn't pick up the env file in my project's root directory. Once the build finishes, I push the Docker image to Elastic Container Registry (ECR) and my Elastic Container Service (ECS) references that ECR image.
So naturally, my built image doesn't have an env file (which contains the API keys and DB credentials), and as a result my app is deployed but all of the services relying on those credentials fail, because there isn't an env file in my container and all of the variables are undefined or null.
To fix this issue I looked at this AWS doc and implemented a solution that stores my .env file in AWS S3, with that S3 ARN referenced in the container definition. However, that didn't work out, and I think it's because of the way I'm setting my next.config.js to reference my environment files in my local codebase. I also tried setting my environment variables manually (very insecure) when configuring the container in my task definition, and that didn't work either.
My next.config.js:
const dotEnvConfig = { path: `../../${process.env.NODE_ENV}.env` };
require("dotenv").config(dotEnvConfig);
module.exports = {
  serverRuntimeConfig: {
    // Will only be available on the server side
    xyzKey: process.env.xyzSecretKey || "",
  },
  publicRuntimeConfig: {
    // Will be available on both server and client
    appUrl: process.env.app_url || "",
  },
};
In my local codebase's root directory I have two files, development.env (local API keys) and production.env (live API keys), and my next.config.js is located at /packages/app/next.config.js.
So apparently it was just plain Next.js's way of handling env variables.
In next.config.js
module.exports = {
  env: {
    user: process.env.SQL_USER || "",
    // add all the env var here
  },
};
To access the environment variable user in the app, all you have to do is call process.env.user, and user will reference process.env.SQL_USER from my local .env file, where it is stored as SQL_USER="abc_user".
You should be setting the environment variables in the ECS task definition. In order to prevent storing sensitive values in the task definition you should use AWS Parameter Store, or AWS Secrets Manager, as documented here.

Google Cloud Build error: no project active

I am trying to set up Google Cloud Build with a really simple project hosted on firebase, but every time it reaches the deploy stage it tells me:
Error: No project active, but project aliases are available.
Step #2: Run firebase use <alias> with one of these options:
ERROR: build step 2 "gcr.io/host-test-xxxxx/firebase" failed: step exited with non-zero status: 1
I have set the alias to production and my .firebaserc is:
{
  "projects": {
    "default": "host-test-xxxxx",
    "production": "host-test-xxxxx"
  }
}
I have Firebase Admin and API Keys Admin permissions on my Cloud Build service account, and since I also want to encrypt, I have Cloud KMS CryptoKey Decrypter as well.
I do
firebase login:ci
to generate a token in my terminal and paste it into my .env variable, then create an alias called production and run
firebase use production
My yaml is:
steps:
  # Install
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  # Build
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'build']
  # Deploy
  - name: 'gcr.io/host-test-xxxxx/firebase'
    args: ['deploy']
and install and build work fine. What is happening here?
Rerunning firebase init does not seem to help.
Update:
building locally then doing firebase deploy does not help either.
Ok, the thing that worked was changing the .firebaserc file to:
{
  "projects": {
    "default": "host-test-xxxxx"
  }
}
and
firebase use --add
and adding an alias called default.

Using Gradle plugin to push docker images to ECR

I am using gradle-docker-plugin to build and push Docker images to Amazon's ECR. To do this I am also using a remote Docker daemon running on an EC2 instance. I have configured a custom task EcrLoginTask to fetch the ECR authorization token using the aws-java-sdk-ecr library. The relevant code looks like:
class EcrLoginTask extends DefaultTask {
    String accessKey
    String secretCode
    String region
    String registryId

    @TaskAction
    String getPassword() {
        AmazonECR ecrClient = AmazonECRClient.builder()
                .withRegion(Regions.fromName(region))
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretCode))).build()
        GetAuthorizationTokenResult authorizationToken = ecrClient.getAuthorizationToken(
                new GetAuthorizationTokenRequest().withRegistryIds(registryId))
        String token = authorizationToken.getAuthorizationData().get(0).getAuthorizationToken()
        System.setProperty("DOCKER_PASS", token) // Will this work ?
        return token
    }
}
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.amazonaws:aws-java-sdk-ecr:1.11.244'
        classpath 'com.bmuschko:gradle-docker-plugin:3.2.1'
    }
}

docker {
    url = "tcp://remote-docker-host:2375"
    registryCredentials {
        username = 'AWS'
        password = System.getProperty("DOCKER_PASS") // Need to provide at runtime !!!
        url = 'https://123456789123.dkr.ecr.eu-west-1.amazonaws.com'
    }
}

task getECRPassword(type: EcrLoginTask) {
    accessKey AWS_KEY
    secretCode AWS_SECRET
    region AWS_REGION
    registryId '139539380579'
}

task dbuild(type: DockerBuildImage) {
    dependsOn build
    inputDir = file(".")
    tag "139539380579.dkr.ecr.eu-west-1.amazonaws.com/n6duplicator"
}

task dpush(type: DockerPushImage) {
    dependsOn dbuild, getECRPassword
    imageName "123456789123.dkr.ecr.eu-west-1.amazonaws.com/n6duplicator"
}
The remote Docker connection works fine, the ECR token is fetched successfully, and the dbuild task also executes successfully.
PROBLEM
The dpush task fails - "Could not push image: no basic auth credentials"
I believe this is because the authorization token received by EcrLoginTask was not passed on to the password property in the docker configuration closure.
How do I fix it? I need to provide the credentials on the fly each time the build is executed.
Have a look at the 'gradle-aws-ecr-plugin'. It's able to get a fresh (latest) Amazon ECR Docker registry token during every AWS/Docker command call:
All Docker tasks such as DockerPullImage, DockerPushImage, etc. that
are configured with the ECR registry URL will get a temporary ECR
token. No further configuration is necessary. It is possible to set
the registry URL for individual tasks.
This should work well alongside either the gradle-docker-plugin or Netflix's nebula-docker-plugin, which is also based on, and extends, the 'bmuschko' docker plugin.
The 'gradle-aws-ecr-plugin' BitBucket homepage explains concisely how to configure both the AWS and ECR [URL] credentials.
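As an aside on the original EcrLoginTask: the token returned by GetAuthorizationToken is a base64-encoded AWS:<password> pair, so it cannot be used directly as the Docker password, and System.getProperty("DOCKER_PASS") in the docker block is evaluated at configuration time, before getECRPassword has run. An untested sketch of one way to wire the decoded password in at execution time instead (placed after the task definitions above):
dpush.doFirst {
    // DOCKER_PASS holds base64("AWS:<password>") as returned by ECR;
    // decode it, keep only the password part, and set it just before pushing
    String decoded = new String(System.getProperty("DOCKER_PASS").decodeBase64(), "UTF-8")
    docker.registryCredentials.password = decoded.split(":", 2)[1]
}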

Use Google Cloud credentials inside an ephemeral container?

We use Docker containers for most of our work, including development on our own machines. These are ephemeral (started each time we run a test, for example).
For AWS, the auth is easy - we have our keys in our environment, and those are passed through to the container.
We're starting to use Google Cloud services, and the auth path seems harder than AWS. When doing local development, gcloud auth login works well. But when working in an ephemeral container, the login process would be needed each time, and I haven't found a way of persisting user credentials using either a) environment variables or b) mapping volumes - which are the two ways of passing data to containers.
From what I can read, the only path is to use service accounts. But I think then everyone needs their own service account, and needs to be constantly updating that account's permissions to be aligned with their own.
Is there a better way?
The easiest way to make a local container see your gcloud credentials might be to map the file system location of the application default credentials into the container.
First, do
gcloud auth application-default login
Then, run your container as
docker run -ti -v=$HOME/.config/gcloud:/root/.config/gcloud test
This should work. I tried it with a Dockerfile like
FROM node:4
RUN npm install --save @google-cloud/storage
ADD test.js .
CMD node ./test.js
and the test.js file like
var storage = require('@google-cloud/storage');
var gcs = storage({
  projectId: 'my-project-515',
});
var bucket = gcs.bucket('my-bucket');
bucket.getFiles(function(err, files) {
  if (err) {
    console.log("failed to get files: ", err);
  } else {
    for (var i in files) {
      console.log("file: ", files[i].name);
    }
  }
});
and it worked as expected.
I had the same issue, but I was using docker-compose. This was solved by adding the following under the relevant service in docker-compose.yml:
volumes:
  - $HOME/.config/gcloud:/root/.config/gcloud