How to pass a build artifact from GitLab CI to a Dockerfile?

I need a way to pass a job artifact from GitLab CI to a Dockerfile, so I can copy it into a directory. What is the path where this artifact is located?
Thank you!

Steps:
Use artifacts in the job that produces the files.
Use dependencies to pass those artifacts to the current job.
The files are then available in the build context, so the Dockerfile can access them.
For example, this is the main flow I use for a Vue.js project:
Stage 1: build. Run npm run build:prod in the Vue.js project:
build-dist:
  stage: build-dist
  image: node
  script:
    - npm run build:prod
  artifacts:
    paths:
      - dist/
Stage 2: use dependencies to pull the dist/ artifact into this job:
build-docker:
  stage: build-docker
  image: docker:stable
  script:
    - docker build
  dependencies:
    - build-dist
Stage 3: in the Dockerfile, copy dist into the image:
FROM fholzer/nginx-brotli
COPY ./dist /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/nginx.conf
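For completeness, the build-docker job from stage 2 usually also needs a build context and a push to a registry. Here is a sketch using GitLab's Docker-in-Docker service and its predefined registry variables (the job below is an assumption that goes beyond the bare docker build shown above):
build-docker:
  stage: build-docker
  image: docker:stable
  services:
    - docker:dind
  dependencies:
    - build-dist
  script:
    # dist/ from the build-dist job has already been extracted into the
    # checkout, so the Dockerfile's "COPY ./dist ..." finds it
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"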

You should use dependencies; the docs also state that job artifacts from earlier stages are passed to later jobs by default.
The artifacts from the previous jobs will be downloaded and extracted in the context of the build.

You can use RUN --mount=type=secret (see "Build images with BuildKit" in the Docker docs).
Here is an example showing how to make credentials available to a build step without baking them into the image.
This Dockerfile:
# syntax = docker/dockerfile:experimental
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
    cat /root/.aws/credentials
This is the CI command:
$ docker build -t test --secret id=aws,src=$HOME/.aws/credentials .
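Note that --secret requires BuildKit. Depending on your Docker version you may need to enable it explicitly when running the build, for example:
# enable BuildKit so --secret and the experimental Dockerfile syntax take effect
$ DOCKER_BUILDKIT=1 docker build -t test --secret id=aws,src=$HOME/.aws/credentials .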

Related

How to CI/CD deploy static Dockerized React build files to S3

I currently have a React application with an AWS CodePipeline set up that does the following:
Detect changes in GitHub repository
Build the "build" files (with CodeBuild) using buildspec.yaml file
Push "build" files to S3 bucket
The S3 bucket is configured to serve the static files to my domain.
This setup is great because it's cheap: I don't need an EC2 server always up and running to serve these static files, which would be completely unnecessary.
Recently, however, I've Dockerized this application, which is fantastic when I'm working on it from different machines.
Now that it's Dockerized, it seems like a better idea to have a Docker container build the "build" files and push them to the S3 bucket, to ensure that the files built on my machine are identical to the ones pushed to S3.
Ideally I would like to have this all be automated when I push to the repo like it currently is.
I've seen a lot of tutorials about automating the creation of Docker images, pushing them to AWS ECR, and then using ECS (Fargate) to run the container. To me, though, this is just the same thing as running my app on an EC2 server... why would I want to do all this and then have a container continuously running on a server? Now it would just be an ECS server...
So what I am asking is, how can I create an automated CI/CD pipeline that builds the static files using a docker container, and then pushes them to S3, as I currently have it?
Here is the current CodeBuild buildspec.yaml file for reference:
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 12
    commands:
      # install yarn
      - npm install yarn
      # install dependencies
      - yarn
      # so that build commands work
      - yarn add eslint-config-react-app
  build:
    commands:
      # run build script
      - yarn build
artifacts:
  # include all files required to run application
  # we include only the static build files
  files:
    - '**/*'
  base-directory: 'build'
I figured this out. It is possible to do this without modifying the Source or Deploy sections of the CodePipeline. You do not need EC2, ECR, ECS, or Fargate.
You will modify the CodeBuild section of the pipeline to use a buildspec.yml file like this:
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 19
    commands:
      # log in to docker account to prevent rate limiting
      - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
      # build the Docker image for the application
      - docker build -t my-react-app:latest -f Dockerfile.prod .
  build:
    commands:
      # run container from built image (builds production files)
      - docker run my-react-app:latest
      # set container id to variable
      - CONTAINER=$(docker ps -alq)
      # copy build files from container to host
      - docker cp $CONTAINER:/app/build/ $CODEBUILD_SRC_DIR/build
artifacts:
  # include all files required to run application
  # we include only the static build files
  files:
    - "**/*"
  base-directory: "build"
There are some additional details; I've written a blog post about it here:
https://ncoughlin.com/posts/aws-codepipeline-dockerized-react-s3/
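For reference, here is a minimal sketch of what the Dockerfile.prod referenced in the buildspec might look like (the original post doesn't include it, so the base image, paths, and commands below are assumptions):
# Hypothetical Dockerfile.prod: its only job is to produce the static files
# in /app/build so the buildspec can docker cp them out of the container.
FROM node:12-alpine
WORKDIR /app
# install dependencies first so this layer is cached between builds
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# copy the source and create the production build in /app/build
COPY . .
RUN yarn build
# the container only needs to start long enough for docker cp to read /app/build
CMD ["echo", "build complete"]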

Google Cloud Run inaccessible even on successful build

My Google Cloud Run image was built successfully using Cloud Build from a GitHub repo. I don't see anything concerning in the build logs.
This is my Dockerfile:
# Use the official lightweight Node.js image.
# https://hub.docker.com/_/node
FROM node:17-slim
RUN set -ex; \
    apt-get -y update; \
    apt-get -y install ghostscript; \
    apt-get -y install pngquant; \
    rm -rf /var/lib/apt/lists/*
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./
# Install dependencies.
# If you add a package-lock.json speed your build by switching to 'npm ci'.
RUN npm ci --only=production
# RUN npm install --production
# Copy local code to the container image.
COPY . ./
# Run the web service on container startup.
CMD [ "npm", "start" ]
But when I try to access the cloud through the public URL I see:
Oops, something went wrong…
Continuous deployment has been set up, but your repository has failed to build and deploy.
This revision is a placeholder until your code successfully builds and deploys to the Cloud Run service myapi in asia-east1 of the GCP project myproject.
What's next?
From the Cloud Run service page, click "Build History".
Examine your build logs to understand why it failed.
Fix the issue in your code or Dockerfile (if any).
Commit and push the change to your repository.
It appears that the node app did not run. What am I doing wrong?
Turns out that cloudbuild.yaml is not really optional. Adding the file with the following resolved the issue:
steps:
  # Build the container image
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA", "."]
  # Push the container image to Container Registry
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA"]
  # Deploy container image to Cloud Run
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: gcloud
    args:
      - "run"
      - "deploy"
      - "myapi"
      - "--image"
      - "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA"
      - "--region"
      - "asia-east1"
images:
  - "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA"

Return code: 1 Output: Dockerfile and Dockerrun.aws.json are both missing, abort deployment

I have set up a CI/CD pipeline using Travis CI so that when I push the code, it automatically gets deployed to AWS Elastic Beanstalk.
I am using Docker as the platform in AWS.
When I push the code it passes through Travis, but AWS shows the error "Command failed on instance. Return code: 1 Output: Dockerfile and Dockerrun.aws.json are both missing, abort deployment."
I don't need Dockerrun.aws.json as I am using a local Docker image.
But I am not able to figure out why this error is shown when there is a Dockerfile in the repository.
Travis file
sudo: required
language: node_js
node_js:
  - "10.16.0"
sudo: true
addons:
  chrome: stable
branches:
  only:
    - master
before_script:
  - npm install -g @angular/cli
script:
  - ng test --watch=false --browsers=ChromeHeadless
deploy:
  provider: elasticbeanstalk
  access_key_id:
    secure: "$accesskey"
  secret_access_key:
    secure: "$AWS_SECRET_KEY"
  region: "us-east-2"
  app: "portfolio"
  env: "portfolio-env"
  bucket_name: "elasticbeanstalk-us-east-2-646900675324"
  bucket_path: "portfolio"
Dockerfile
FROM node:12.7.0-alpine as builder
WORKDIR /src/app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
# To copy the files from build folder to directory where nginx could serve up the files
FROM nginx
EXPOSE 80
COPY --from=builder /src/app/dist/portfio /usr/share/nginx/html
Any possible solution for this one?
I had the same issue. Turns out my dockerfile was not capitalized, and AWS is case sensitive. When I changed the file name to "Dockerfile", everything worked as expected.
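One gotcha if you are on a case-insensitive filesystem (macOS or Windows): git may not record a rename that only changes the case, so a two-step rename is one way to make sure the capitalized name actually lands in the repository (a sketch):
# rename via an intermediate name so git records the case-only change
git mv dockerfile Dockerfile.tmp
git mv Dockerfile.tmp Dockerfile
git commit -m "Rename dockerfile to Dockerfile"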

Why isn't Kaniko able to push multi-stage Docker Image?

Building the following Dockerfile on GitLab CI using Kaniko results in the error: error pushing image: failed to push to destination eu.gcr.io/stritzke-enterprises/eliah-speech-server:latest: Get https://eu.gcr.io/...: exit status 1
If I remove the first FROM, RUN, and COPY --from statements from the Dockerfile, the Docker image is built and pushed as expected. If I execute the Kaniko build using Docker on my local machine, everything works as expected. I run other Kaniko builds and pushes on the same GitLab CI runner with the same GCE service account credentials.
What is going wrong with the GitLab CI based Kaniko build?
Dockerfile
FROM alpine:latest as alpine
RUN apk add -U --no-cache ca-certificates
FROM scratch
COPY --from=alpine /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY binaries/speech-server /speech-server
EXPOSE 8080
ENTRYPOINT ["/speech-server"]
CMD ["serve", "-t", "$GOOGLE_ACCESS_TOKEN"]
GitLab CI build stage
buildDockerImage:
  stage: buildImage
  dependencies:
    - build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    GOOGLE_APPLICATION_CREDENTIALS: /secret.json
  script:
    - echo "$GCR_SERVICE_ACCOUNT_KEY" > /secret.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $DOCKER_IMAGE:latest -v debug
  only:
    - branches
  except:
    - master
As tdensmore pointed out, this was most likely an authentication issue.
So for everyone who has come here, the following Dockerfile and Kaniko call work just fine.
FROM ubuntu:latest as ubuntu
RUN echo "Foo" > /foo.txt
FROM ubuntu:latest
COPY --from=ubuntu /foo.txt /
CMD ["/bin/cat", "/foo.txt"]
The Dockerfile can be built by running
docker run -v $(pwd):/workspace gcr.io/kaniko-project/executor:latest --context /workspace --no-push
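If the push itself is what fails, Kaniko can authenticate to GCR through a service account key exposed via GOOGLE_APPLICATION_CREDENTIALS, the same mechanism the CI job above uses. A local sketch along the lines of the command above (the key path, project, and image name are placeholders):
docker run \
  -v $(pwd):/workspace \
  -v /path/to/key.json:/kaniko/gcr-key.json \
  -e GOOGLE_APPLICATION_CREDENTIALS=/kaniko/gcr-key.json \
  gcr.io/kaniko-project/executor:latest \
  --context /workspace \
  --destination eu.gcr.io/<project>/<image>:latest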

AWS CodeBuild - Unable to find DockerFile during build

Started playing with AWS CodeBuild.
The goal is to have a Docker image as the final result, with Node.js, hapi, and a sample app running inside.
Currently I have an issue with:
"unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /tmp/src049302811/src/Dockerfile: no such file or directory"
It appears during the BUILD stage.
Project details:
S3 bucket used as a source
The ZIP file stored in that S3 bucket contains buildspec.yml, package.json, a sample *.js file, and DockerFile.
aws/codebuild/docker:1.12.1 is used as a build environment.
When I build the image using Docker installed on my laptop there are no issues, so I can't understand which directory I need to specify to get rid of this error message.
Buildspec and DockerFile attached below.
Thanks for any comments.
buildspec.yml
version: 0.1
phases:
pre_build:
commands:
- echo Logging in to Amazon ECR...
- $(aws ecr get-login --region eu-west-1)
build:
commands:
- echo Build started on `date`
- echo Building the Docker image...
- docker build -t <CONTAINER_NAME> .
- docker tag <CONTAINER_NAME>:latest <ID>.dkr.ecr.eu-west-1.amazonaws.com/<CONTAINER_NAME>:latest
post_build:
commands:
- echo Build completed on `date`
- echo Pushing the Docker image...
- docker push <id>.eu-west-1.amazonaws.com/<image>:latest
DockerFile
FROM alpine:latest
RUN apk update && apk upgrade
RUN apk add nodejs
RUN rm -rf /var/cache/apk/*
COPY . /src
RUN cd /src; npm install hapi
EXPOSE 80
CMD ["node", "/src/server.js"]
OK, so the solution was simple.
The issue was related to the Dockerfile name.
It was not accepting DockerFile (with a capital F; strangely, it was working locally), but Dockerfile (with a lower-case f) worked perfectly.
Can you validate that the Dockerfile exists in the root of the directory? One way of doing this would be to run ls -altr as part of the pre_build phase in your buildspec (even before the ECR login).
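For example, the pre_build phase of the buildspec above could be extended like this (a sketch):
phases:
  pre_build:
    commands:
      # list the extracted source so you can confirm the Dockerfile is present
      # at the root and spelled with a capital D and lower-case f
      - ls -altr
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)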