Using Gradle plugin to push docker images to ECR

I am using the gradle-docker-plugin to build and push docker images to Amazon's ECR. To do this I am also using a remote docker daemon running on an EC2 instance. I have configured a custom task EcrLoginTask to fetch the ECR authorization token using the aws-java-sdk-ecr library. The relevant code looks like this:
class EcrLoginTask extends DefaultTask {
    String accessKey
    String secretCode
    String region
    String registryId

    @TaskAction
    String getPassword() {
        AmazonECR ecrClient = AmazonECRClient.builder()
                .withRegion(Regions.fromName(region))
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretCode))).build()
        GetAuthorizationTokenResult authorizationToken = ecrClient.getAuthorizationToken(
                new GetAuthorizationTokenRequest().withRegistryIds(registryId))
        String token = authorizationToken.getAuthorizationData().get(0).getAuthorizationToken()
        System.setProperty("DOCKER_PASS", token) // Will this work ?
        return token
    }
}

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.amazonaws:aws-java-sdk-ecr:1.11.244'
        classpath 'com.bmuschko:gradle-docker-plugin:3.2.1'
    }
}

docker {
    url = "tcp://remote-docker-host:2375"

    registryCredentials {
        username = 'AWS'
        password = System.getProperty("DOCKER_PASS") // Need to provide at runtime !!!
        url = 'https://123456789123.dkr.ecr.eu-west-1.amazonaws.com'
    }
}

task getECRPassword(type: EcrLoginTask) {
    accessKey AWS_KEY
    secretCode AWS_SECRET
    region AWS_REGION
    registryId '139539380579'
}

task dbuild(type: DockerBuildImage) {
    dependsOn build
    inputDir = file(".")
    tag "139539380579.dkr.ecr.eu-west-1.amazonaws.com/n6duplicator"
}

task dpush(type: DockerPushImage) {
    dependsOn dbuild, getECRPassword
    imageName "123456789123.dkr.ecr.eu-west-1.amazonaws.com/n6duplicator"
}
The remote Docker connection works fine, the ECR token is also fetched successfully, and the dbuild task executes successfully.
PROBLEM
The dpush task fails with "Could not push image: no basic auth credentials".
I believe this is because the authorization token fetched by the EcrLoginTask is never passed on to the password property in the docker configuration closure.
How do I fix it? I need to provide the credentials on the fly each time the build is executed.

Have a look at the gradle-aws-ecr-plugin. It is able to fetch a fresh (latest) Amazon ECR docker registry token during every AWS/Docker command call:
All Docker tasks such as DockerPullImage, DockerPushImage, etc. that are configured with the ECR registry URL will get a temporary ECR token. No further configuration is necessary. It is possible to set the registry URL for individual tasks.
This should work well alongside either the gradle-docker-plugin or Netflix's nebula-docker-plugin, which is also based on, and extends, the 'bmuschko' docker plugin.
The gradle-aws-ecr-plugin Bitbucket homepage explains concisely how to configure both the AWS credentials and the ECR registry URL.

Related

AWS EB docker-compose deployment from private registry access forbidden

I'm trying to get a docker-compose deployment to AWS Elastic Beanstalk working, in which the docker images are pulled from a private registry hosted by GitLab.
The strange thing is that the initial deployment works perfectly: it pulls the image from the private registry and starts the containers using docker-compose, and the webpage (served by Django) is accessible through the host.
Deploying a new version using the same docker-compose and the same docker image will result in an error while pulling the docker image:
2021/03/16 09:28:34.957094 [ERROR] An error occurred during execution of command [app-deploy] - [Run Docker Container]. Stop running the command. Error: failed to run docker containers: Command /bin/sh -c docker-compose up -d failed with error exit status 1. Stderr:Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Creating network "current_default" with the default driver
Pulling redis (redis:alpine)...
Pulling mysql (mysql:5.7)...
Pulling project.dockertest(registry.gitlab.com/company/spikes/dockertest:latest)...
Get https://registry.gitlab.com/v2/company/spikes/dockertest/manifests/latest: denied: access forbidden
2021/03/16 09:28:34.957104 [INFO] Executing cleanup logic
Setup
AWS Elastic Beanstalk 64bit Amazon Linux 2/3.2
The GitLab registry credentials are stored in an S3 bucket, in a file named .dockercfg with the following content:
{
    "auths": {
        "registry.gitlab.com": {
            "auth": "base64 encoded username:personal_access_token"
        }
    },
    "HttpHeaders": {
        "User-Agent": "Docker-Client/18.03.1-ce (linux)"
    }
}
The repository contains a v3 Dockerrun.aws.json file to refer to the credential file in S3:
{
    "AWSEBDockerrunVersion": "3",
    "Authentication": {
        "bucket": "gitlab-dockercfg",
        "key": ".dockercfg"
    }
}
Reproduce
Set up a docker-compose.yml that uses a service with a private docker image (which can be pulled with the credentials set up in the dockercfg in S3).
Create a new application that uses the Docker platform.
eb init testapplication --platform=docker --region=eu-west-1
Note: region must be the same as the S3 bucket containing the dockercfg.
Initial deployment (this will succeed)
eb create testapplication-test --branch_default --cname testapplication-test --elb-type=application --instance-types=t2.micro --min-instances=1 --max-instances=4
The initial deployment shows that the image is available and can be started:
2021/03/16 08:58:07.533988 [INFO] save docker tag command: docker tag 5812dfe24a4f redis:alpine
2021/03/16 08:58:07.533993 [INFO] save docker tag command: docker tag f8fcde8b9ae2 mysql:5.7
2021/03/16 08:58:07.533998 [INFO] save docker tag command: docker tag 1dd9b65d6a9f registry.gitlab.com/company/spikes/dockertest:latest
2021/03/16 08:58:07.534010 [INFO] Running command /bin/sh -c docker rm `docker ps -aq`
Without changing anything in the local repository or the remote docker image on the private registry, let's do a redeployment, which will trigger the error:
eb deploy testapplication-test
This will fail with the following output:
...
2021-03-16 10:02:28 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2021-03-16 10:02:29 ERROR Unsuccessful command execution on instance id(s) 'i-0dc445d118ac14b80'. Aborting the operation.
2021-03-16 10:02:29 ERROR Failed to deploy application.
ERROR: ServiceError - Failed to deploy application.
And logs of the instance show (/var/log/eb-engine.log):
Pulling redis (redis:alpine)...
Pulling mysql (mysql:5.7)...
Pulling project.dockertest (registry.gitlab.com/company/spikes/dockertest:latest)...
Get https://registry.gitlab.com/v2/company/spikes/dockertest/manifests/latest: denied: access forbidden
2021/03/16 10:02:25.902479 [INFO] Executing cleanup logic
Steps I've tried to debug or solve the issue
Rename dockercfg to .dockercfg on S3 (mentioned somewhere on the internet as a possible solution)
Use the 'old' docker config format instead of the one generated by docker 1.7+. But later on I figured out that Amazon Linux 2 instances are compatible with the new format together with Dockerrun v3
Having an incorrectly formatted dockercfg on S3 will cause a deployment error about the malformed file (so it does actually do something with the dockercfg from S3)
Documentation
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/single-container-docker-configuration.html
I'm out of debug options, and I've no idea where to look any further to debug this problem. Perhaps someone can see what is going wrong here?
First of all, the issue described above is a bug confirmed by Amazon. To get the deployment working on our side, we contacted Amazon support.
They have a fix in place which should be released this month, so keep an eye on the changelog of the Elastic Beanstalk platform: https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/relnotes.html
Although the upcoming release should have the fix, there is a workaround available to get the docker-compose deployment working.
Elastic Beanstalk allows hooks to be executed during the deployment, which can be used to fetch the .dockercfg from an S3 bucket and authenticate against the private registry.
To do so, create the following file and directories from the root of the project:
File location: .platform/hooks/predeploy/docker_login
#!/bin/bash
aws s3 cp s3://{{bucket_name_to_use}}/.dockercfg ~/.docker/config.json
Important: Add execution rights to this file (for example: chmod +x .platform/hooks/predeploy/docker_login)
To support instance configuration changes, please symlink the hooks directory to confighooks:
ln -s .platform/hooks/ .platform/confighooks/
Updating configuration requires the .dockercfg credentials to be fetched too.
This should enable continuous deployments to the same EB instance without the authentication errors, because the hook is executed before the docker images are pulled.
Some background:
The docker daemon reads credentials from ~/.docker/config.json by default on traditional Linux systems. On the initial deploy this file exists on the Elastic Beanstalk instance, but on the next deployment it is removed. Unfortunately, the .dockercfg is not re-fetched on that next deployment, and therefore the docker daemon no longer has the correct credentials to authenticate with.
I was dealing with the same errors while trying to pull images from a privately hosted GitLab instance. I was able to resolve them by including the email address associated with the token used in the auth field of the .dockercfg file.
The following file format worked for me:
"registry.gitlab.com" {
"auth": "base64 encoded username:personal_access_token",
"email": "email for personal access token"
}
In my case I used a Project Access Token, which has an e-mail address associated with it once it is created.
The file format shown in the Elastic Beanstalk documentation for the authentication file (here) indicates that this is the required format, though the Docker versions for which it says this format is required are almost certainly outdated, since we are running Docker ^19.

Next JS serverless deployment on AWS ECS/Fargate: environment variable issue

So my goal is to deploy a serverless Dockerized Next.js application on ECS/Fargate.
When I build my project using the command docker build . -f development.Dockerfile --no-cache -t myapp:latest, everything runs successfully, except that the Docker build doesn't pick up the env file in my project's root directory. Once the build finishes, I push the Docker image to Elastic Container Registry (ECR) and my Elastic Container Service (ECS) references that ECR image.
So naturally, my built image doesn't have an env file (which contains the API keys and DB credentials), and as a result my app is deployed but all of the services relying on those credentials fail, because there isn't an env file in my container and all of the variables are undefined or null.
To fix this issue I looked at this AWS doc and implemented a solution that stores my .env file in AWS S3, with the ARN of that S3 location referenced in the container service. However, that didn't work out, and I think it's because of the way I'm setting up my next.config.js to reference my environment files in my local codebase. I also tried to set my environment variables manually (very insecure, screenshot below) when configuring the container in my task definition, and that didn't work either.
My next.config.js:
const dotEnvConfig = { path: `../../${process.env.NODE_ENV}.env` };
require("dotenv").config(dotEnvConfig);

module.exports = {
    serverRuntimeConfig: {
        // Will only be available on the server side
        xyzKey: process.env.xyzSecretKey || "",
    },
    publicRuntimeConfig: {
        // Will be available on both server and client
        appUrl: process.env.app_url || "",
    },
};
So in my local codebase, in the root directory, I have two files, development.env (local API keys) and production.env (live API keys), and my next.config.js is located at /packages/app/next.config.js.
So apparently it was just plain Next.js's way of handling env variables.
In next.config.js
module.exports = {
    env: {
        user: process.env.SQL_USER || "",
        // add all the env var here
    },
};
And to access the environment variable user in the app, all you have to do is call process.env.user; user will reference process.env.SQL_USER from my local .env file, where it is stored as SQL_USER="abc_user".
You should be setting the environment variables in the ECS task definition. In order to prevent storing sensitive values in the task definition, you should use AWS Parameter Store or AWS Secrets Manager, as documented here.
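For illustration only (this is not from the original answer), here is a minimal sketch of what that looks like when registering a Fargate task definition with the AWS SDK for JavaScript: the secrets field of the container definition injects values from Parameter Store and Secrets Manager as environment variables at container start. All names, ARNs and the image URI below are hypothetical placeholders.

// register-task-def.ts - hedged sketch; family, role, image and ARNs are made-up placeholders
import * as AWS from "aws-sdk";

const ecs = new AWS.ECS({ region: "eu-west-1" });

async function main() {
    await ecs.registerTaskDefinition({
        family: "nextjs-app",
        requiresCompatibilities: ["FARGATE"],
        networkMode: "awsvpc",
        cpu: "256",
        memory: "512",
        // the execution role must be allowed to read the referenced parameters/secrets
        executionRoleArn: "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
        containerDefinitions: [{
            name: "nextjs-app",
            image: "123456789012.dkr.ecr.eu-west-1.amazonaws.com/nextjs-app:latest",
            essential: true,
            portMappings: [{ containerPort: 3000 }],
            // injected as plain environment variables when the container starts,
            // so nothing sensitive is stored in the task definition or baked into the image
            secrets: [
                { name: "SQL_USER", valueFrom: "arn:aws:ssm:eu-west-1:123456789012:parameter/myapp/SQL_USER" },
                { name: "xyzSecretKey", valueFrom: "arn:aws:secretsmanager:eu-west-1:123456789012:secret:myapp-xyzSecretKey" }
            ]
        }]
    }).promise();
}

main().catch(err => { console.error(err); process.exit(1); });

The app can then read process.env.SQL_USER at runtime, without any .env file having to be shipped inside the image.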

AWS CDK: run external build command in CDK sequence?

Is it possible to run an external build command as part of a CDK stack sequence? Intention: 1) create a REST API, 2) write the REST URL to a config file, 3) build and deploy a React app:
import apigateway = require('@aws-cdk/aws-apigateway');
import cdk = require('@aws-cdk/core');
import fs = require('fs');
import s3 = require('@aws-cdk/aws-s3');
import s3deployment = require('@aws-cdk/aws-s3-deployment');

export class MyStack extends cdk.Stack {
    constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
        super(scope, id, props);

        const restApi = new apigateway.RestApi(this, ..);

        fs.writeFileSync('src/app-config.json',
            JSON.stringify({ "api": restApi.deploymentStage.urlForPath('/myResource') }));

        // TODO locally run 'npm run build', create 'build' folder incl rest api config

        const websiteBucket = new s3.Bucket(this, ..);

        new s3deployment.BucketDeployment(this, .. {
            sources: [s3deployment.Source.asset('build')],
            destinationBucket: websiteBucket
        });
    }
}
Unfortunately, it is not possible, as the necessary references are only resolved at deploy time, which is after you try to write the file (at synth time the file would only contain CDK tokens).
I personally have solved this problem by telling cdk to output the API Gateway URLs to a file and then parsing that file after the deploy to upload the configuration to an S3 bucket. To do this you need to:
deploy with the outputs-file option, for example:
cdk deploy -O ./cdk.out/deploy-output.json
In ./cdk.out/deploy-output.json you will find a JSON object with a key for each stack that produced an output (e.g. your stack that contains an API gateway)
manually parse that JSON to get your API Gateway URL
create your configuration file and upload it to S3 (you can do it via the aws-sdk)
Of course, you would have the last steps in a custom script, which means that you have to wrap your cdk deploy. I suggest doing so with a Node.js script, so that you can leverage the aws-sdk to upload your file to S3 easily.
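For illustration, here is a minimal Node.js/TypeScript sketch of the parse-and-upload steps above. It assumes a stack named MyStack that exposes an output named MyRestURL, and a hypothetical bucket name; adjust these to your setup.

// upload-app-config.ts - hedged sketch; stack name, output key and bucket name are assumptions
import { readFileSync, writeFileSync } from "fs";
import * as AWS from "aws-sdk";

async function main() {
    // The outputs file written by `cdk deploy -O ./cdk.out/deploy-output.json`
    // has the shape { "<StackName>": { "<OutputKey>": "<value>" } }
    const outputs = JSON.parse(readFileSync("./cdk.out/deploy-output.json", "utf8"));
    const restUrl = outputs["MyStack"]["MyRestURL"];

    // Write the React app's runtime configuration locally ...
    const appConfig = JSON.stringify({ api: { invokeUrl: restUrl } }, null, 2);
    writeFileSync("src/app-config.json", appConfig);

    // ... and/or upload it next to the deployed site
    const s3 = new AWS.S3();
    await s3.putObject({
        Bucket: "my-website-bucket",   // hypothetical bucket name
        Key: "app-config.json",
        Body: appConfig,
        ContentType: "application/json"
    }).promise();
}

main().catch(err => { console.error(err); process.exit(1); });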
Accepting that cdk doesn't support this, I split the logic into two cdk scripts, accessed the API Gateway URL as a cdk output via the CLI, then wrapped everything in a bash script.
AWS CDK:
// API gateway
const api = new apigateway.RestApi(this, 'my-api', ..)
// output url
const myResourceURL = api.deploymentStage.urlForPath('/myResource');
new cdk.CfnOutput(this, 'MyRestURL', { value: myResourceURL });
Bash:
# deploy api gw
cdk deploy --app (..)
# read url via cli with --query
export rest_url=`aws cloudformation describe-stacks --stack-name (..) --query "Stacks[0].Outputs[?OutputKey=='MyRestURL'].OutputValue" --output text`
# configure React app
echo "{ \"api\" : { \"invokeUrl\" : \"$rest_url\" } }" > src/app-config.json
# build React app with url
npm run build
# run second cdk app to deploy React built output folder
cdk deploy --app (..)
Is there a better way?
I solved a similar issue:
Needed to build and upload react-app as well
Supported dynamic configuration reading from react-app - look here
Released my react-app with a specific version (in a separate flow)
Then, during CDK deployment of my app, it took the specific version of my react-app (the version retrieved from local configuration) and uploaded its zip file to an S3 bucket using CDK BucketDeployment
Then, using AwsCustomResource I generated a configuration file with references to Cognito and API-GW and uploaded this file to S3 as well:
// create s3 bucket for react-app
const uiBucket = new Bucket(this, "ui", {
    bucketName: this.stackName + "-s3-react-app",
    blockPublicAccess: BlockPublicAccess.BLOCK_ALL
});

let confObj = {
    "myjsonobj": {
        "region": `${this.region}`,
        "identity_pool_id": `${props.CognitoIdentityPool.ref}`,
        "myBackend": `${apiGw.deploymentStage.urlForPath("/")}`
    }
};
const dataString = JSON.stringify(confObj, null, 4);

const bucketDeployment = new BucketDeployment(this, this.stackName + "-app", {
    destinationBucket: uiBucket,
    sources: [Source.asset(`reactapp-v1.zip`)]
});
bucketDeployment.node.addDependency(uiBucket)

const s3Upload = new custom.AwsCustomResource(this, 'config-json', {
    policy: custom.AwsCustomResourcePolicy.fromSdkCalls({resources: custom.AwsCustomResourcePolicy.ANY_RESOURCE}),
    onCreate: {
        service: "S3",
        action: "putObject",
        parameters: {
            Body: dataString,
            Bucket: `${uiBucket.bucketName}`,
            Key: "app-config.json",
        },
        physicalResourceId: PhysicalResourceId.of(`${uiBucket.bucketName}`)
    }
});
s3Upload.node.addDependency(bucketDeployment);
As others have mentioned, this isn't supported within CDK. So this is how we solved it in SST: https://github.com/serverless-stack/serverless-stack
On the CDK side, allow defining React environment variables using the outputs of other constructs.
// Create a React.js app
const site = new sst.ReactStaticSite(this, "Site", {
    path: "frontend",
    environment: {
        // Pass in the API endpoint to our app
        REACT_APP_API_URL: api.url,
    },
});
Spit out a config file while starting the local environment for the backend.
Then start React using sst-env -- react-scripts start, where we have a simple CLI that reads from the config file and loads them as build-time environment variables in React.
While deploying, replace these environment variables inside a custom resource based on the outputs.
We wrote about it here: https://serverless-stack.com/chapters/setting-serverless-environments-variables-in-a-react-app.html
And here's the source for the ReactStaticSite and StaticSite constructs for reference.
In my case, I'm using the Python language for CDK. I have a Makefile which I invoke directly from my app.py like this:
os.system("make"). I use make to build up a layer zip file per the AWS docs. Technically you can invoke whatever you'd like. You must import the os package, of course. Hope this helps.

Pushing docker image through Jenkins

I'm pushing a docker image through a Jenkins pipeline, but I'm getting the following error:
ERROR: Could not find credentials matching
gcr:["google-container-registry"]
I tried with:
gcr:["google-container-registry"]
gcr:[google-container-registry]
gcr:google-container-registry
google-container-registry
but none of them worked.
In the global credentials I have:
NAME: google-container-registry
KIND: Google Service Account from private key
DESCRIPTION: A Google robot account for accessing Google APIs and services.
The proper syntax is the following (provided your gcr credentials id is 'google-container-registry'):
docker.withRegistry("https://gcr.io", "gcr:google-container-registry") {
sh "docker push [your_image]"
}
Check if you have the https://plugins.jenkins.io/google-container-registry-auth/ plugin installed.
After the plugin is installed, use the gcr:credential-id syntax.
Example:
stage("docker build"){
Img = docker.build(
"gcpProjectId/imageName:imageTag",
"-f Dockerfile ."
)
}
stage("docker push") {
docker.withRegistry('https://gcr.io', "gcr:credential-id") {
Img.push("imageTag")
}
}
Go to Jenkins → Manage Jenkins → Manage Plugins and install plugins:
Google Container Registry
Google OAuth Credentials
CloudBees Docker Build and Publish
Jenkins → Credentials → Global Credentials → Add Credentials, choose desired ‘Project Name’ and upload JSON file
Jenkinsfile:
stage('Deploy Image') {
    steps {
        script {
            docker.withRegistry('https://gcr.io', "gcr:${ID}") {
                dockerImage.push("$BUILD_NUMBER")
                dockerImage.push('latest')
            }
        }
    }
}

Use Google Cloud credentials inside ephemeral container?

We use Docker containers for most of our work, including development on our own machines. These are ephemeral (started each time we run a test, for example).
For AWS, the auth is easy - we have our keys in our environment, and those are passed through to the container.
We're starting to use Google Cloud services, and the auth path seems harder than AWS. When doing local development, gcloud auth login works well. But when working in an ephemeral container, the login process would be needed each time, and I haven't found a way of persisting user credentials using either a) environment variables or b) mapping volumes - which are the two ways of passing data to containers.
From what I can read, the only path is to use service accounts. But I think then everyone needs their own service account, and needs to constantly update that account's permissions to keep them aligned with their own.
Is there a better way?
The easiest way to make a local container see the gcloud credentials might be to map the file-system location of the application default credentials into the container.
First, do
gcloud auth application-default login
Then, run your container as
docker run -ti -v=$HOME/.config/gcloud:/root/.config/gcloud test
This should work. I tried it with a Dockerfile like
FROM node:4
RUN npm install --save @google-cloud/storage
ADD test.js .
CMD node ./test.js
and the test.js file like
var storage = require('@google-cloud/storage');

var gcs = storage({
    projectId: 'my-project-515',
});

var bucket = gcs.bucket('my-bucket');
bucket.getFiles(function(err, files) {
    if (err) {
        console.log("failed to get files: ", err)
    } else {
        for (var i in files) {
            console.log("file: ", files[i].name)
        }
    }
})
and it worked as expected.
I had the same issue, but I was using docker-compose. It was solved by adding the following to docker-compose.yml:
volumes:
  - $HOME/.config/gcloud:/root/.config/gcloud