How to add GOOGLE_APPLICATION_CREDENTIALS in CircleCI job? - google-cloud-platform

My CircleCI job needs to set the GOOGLE_APPLICATION_CREDENTIALS variable. It keeps failing with the error: google.auth.exceptions.DefaultCredentialsError: File ************* was not found.
I base64-encoded the key before adding it to the environment variable in CircleCI. Checking the output, base64 decodes it correctly, and the gcloud auth activate-service-account --key-file ${HOME}/key.json step yields: Activated service account credentials for: [test-compute@developer.gserviceaccount.com]. How can I fix this?
CircleCI config is below:
test-job:
  docker:
    - image: cimg/python:3.9.9
  steps:
    - checkout
    - run:
        name: copy to a file
        command: |
          echo $GOOGLE_APPLICATION_CREDENTIALS | base64 -d > ${HOME}/keys.json
          cat ${HOME}/keys.json
    - run:
        name: set to the env var
        command: |
          export GOOGLE_APPLICATION_CREDENTIALS="${HOME}/keys.json" >> $BASH_ENV
          gcloud auth activate-service-account --key-file ${HOME}/keys.json
    - run:
        name: install
        command: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
    - run:
        name: pytest
        command: |
          pytest

Try this:
https://circleci.com/docs/set-environment-variable/#set-an-environment-variable-in-a-shell-command
Alternatively, try copying path/to/service_account.json to /root/.config/gcloud/application_default_credentials.json and then running gcloud auth application-default.
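Per the linked CircleCI doc, export in one run step does not persist to later steps; you persist a variable by appending an export line to $BASH_ENV, which CircleCI sources at the start of each following step. The config above only redirects export's (empty) output into $BASH_ENV, so the variable never reaches the pytest step. A minimal sketch of the fixed step, keeping the keys.json path from the question:
    - run:
        name: set to the env var
        command: |
          # write the export line itself into $BASH_ENV so later steps see the variable
          echo 'export GOOGLE_APPLICATION_CREDENTIALS="${HOME}/keys.json"' >> $BASH_ENV
          gcloud auth activate-service-account --key-file ${HOME}/keys.json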

Related

unable to prepare context: unable to evaluate symlinks in Dockerfile path

I'm using AWS CodeBuild to build a Docker image and push it to ECR. This is the CodeBuild configuration.
Here is the buildspec.yml:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - aws ecr get-login-password --region my-region | docker login --username AWS --password-stdin my-image-uri
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t pos-drf .
      - docker tag pos-drf:latest my-image-uri/pos-drf:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push my-image-uri/pos-drf:latest
It works up until the build command docker build -t pos-drf .
The error message I get is the following:
[Container] 2022/12/30 15:12:39 Running command docker build -t pos-drf .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /codebuild/output/src696881611/src/Dockerfile: no such file or directory
[Container] 2022/12/30 15:12:39 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker build -t pos-drf .. Reason: exit status 1
I'm quite sure this is not a permission-related issue.
Please let me know if I need to share something else.
UPDATE:
This is the Dockerfile
# base image
FROM python:3.8
# setup environment variable
ENV DockerHOME=/home/app/webapp
# set work directory
RUN mkdir -p $DockerHOME
# where your code lives
WORKDIR $DockerHOME
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
# copy whole project to your docker home directory.
COPY . $DockerHOME
RUN apt-get dist-upgrade
# RUN apt-get install mysql-client mysql-server
# run this command to install all dependencies
RUN pip install -r requirements.txt
# port where the Django app runs
EXPOSE 8000
# start server
CMD python manage.py runserver
My mistake was that I had the Dockerfile locally but hadn't pushed it.
CodeBuild worked successfully after pushing the Dockerfile.
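If anyone hits the same "lstat ... Dockerfile: no such file or directory" error, a quick sanity check is to list the checked-out source right before the build; and if the Dockerfile is committed under a subdirectory, docker build's -f flag can point at it. A sketch for the buildspec above (the app/ path is only a hypothetical example):
  build:
    commands:
      - ls -la                          # confirm the Dockerfile is actually in the CodeBuild source
      - docker build -t pos-drf .       # Dockerfile committed at the repo root
      # or, if the Dockerfile sits in a subdirectory:
      # - docker build -t pos-drf -f app/Dockerfile app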

.gitlab-ci.yaml throws "Cleaning up file based variables 00:01 ERROR: Job failed: exit code 1" at the end after successfully run the job

When I commit changes to GitLab for continuous integration, I am facing this error even though all my steps pass successfully. GitLab CI shows this:
Cleaning up file based variables 00:01 ERROR: Job failed: exit code 1
I am running one stage, "deploy", at the moment. Here is my script for deploy:
image: python:3.8
stages:
  - deploy
default:
  before_script:
    - wget https://golang.org/dl/go1.16.5.linux-amd64.tar.gz
    - rm -rf /usr/local/go && tar -C /usr/local -xzf go1.16.5.linux-amd64.tar.gz
    - export PATH=$PATH:/usr/local/go/bin
    - source ~/.bashrc
    - pip3 install awscli --upgrade
    - pip3 install aws-sam-cli --upgrade
deploy-development:
  only:
    - feature/backend/ci/cd
  stage: deploy
  script:
    - sam build -p
    - yes | sam deploy
This command probably creates an issue in the docker shell:
yes | sam deploy
Try this command:
sam deploy --no-confirm-changeset --no-fail-on-empty-changeset
From https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-deploy.html:
--confirm-changeset | --no-confirm-changeset Prompt to confirm whether the AWS SAM CLI deploys the computed changeset.
--fail-on-empty-changeset | --no-fail-on-empty-changeset Specify whether to return a non-zero exit code if there are no changes to be made to the stack. The default behavior is to return a non-zero exit code.
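Applied to the job above, only the last script line changes; a sketch of the updated job:
deploy-development:
  only:
    - feature/backend/ci/cd
  stage: deploy
  script:
    - sam build -p
    # non-interactive deploy instead of piping `yes` into a prompt
    - sam deploy --no-confirm-changeset --no-fail-on-empty-changeset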

How to use AWS CodeArtifact *within* a Dockerfile in AWS CodeBuild

I am trying to do a pip install from CodeArtifact from within a docker build in AWS CodeBuild.
This article does not quite solve my problem: https://docs.aws.amazon.com/codeartifact/latest/ug/using-python-packages-in-codebuild.html
The login to AWS CodeArtifact happens in the pre_build phase, outside of the Docker context.
But my pip install is inside my Dockerfile (we pull from a private PyPI registry).
How do I do this, without doing something horrible like setting an env variable to the password read from ~/.config/pip/pip.conf after running the login command in pre_build?
You can use the environment variable PIP_INDEX_URL[1].
Below is an AWS CodeBuild buildspec.yml where we construct the PIP_INDEX_URL for CodeArtifact, using this example from the AWS documentation.
buildspec.yml
pre_build:
  commands:
    - echo Getting CodeArtifact authorization...
    - export CODEARTIFACT_AUTH_TOKEN=$(aws codeartifact get-authorization-token --domain "${CODEARTIFACT_DOMAIN}" --domain-owner "${AWS_ACCOUNT_ID}" --query authorizationToken --output text)
    - export PIP_INDEX_URL="https://aws:${CODEARTIFACT_AUTH_TOKEN}@${CODEARTIFACT_DOMAIN}-${AWS_ACCOUNT_ID}.d.codeartifact.${AWS_DEFAULT_REGION}.amazonaws.com/pypi/${CODEARTIFACT_REPO}/simple/"
In your Dockerfile, add an ARG PIP_INDEX_URL line just above
your RUN pip install -r requirements.txt so it can become an environment
variable during the build process:
Dockerfile
# this needs to be added before your pip install line!
ARG PIP_INDEX_URL
RUN pip install -r requirements.txt
Finally, we build the image with the PIP_INDEX_URL build-arg.
buildspec.yml
build:
  commands:
    - echo Building the Docker image...
    - docker build -t "${IMAGE_REPO_NAME}" --build-arg PIP_INDEX_URL .
As an aside, adding ARG PIP_INDEX_URL to your Dockerfile shouldn't break any
existing CI or workflows. If --build-arg PIP_INDEX_URL is omitted when
building an image, pip will still use the default PyPI index.
Specifying --build-arg PIP_INDEX_URL=${PIP_INDEX_URL} is valid, but
unnecessary. Specifying the argument name with no value will make Docker take
its value from the environment variable of the same
name[2].
Security note: If someone runs docker history ${IMAGE_REPO_NAME}, they can
see the value
of ${PIP_INDEX_URL}[3]
. The token is only good for a maximum of 12 hours though, and you can shorten
it to as little as 15 minutes with the --duration-seconds parameter
of aws codeartifact get-authorization-token[4],
so maybe that's acceptable. If your Dockerfile is a multi-stage build, then it
shouldn't be an issue if you're not using ARG PIP_INDEX_URL in your target
stage. docker build --secret does not seem to be supported in CodeBuild at this time.
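For example, a multi-stage layout along these lines keeps ARG PIP_INDEX_URL (and the token it carries) out of the final stage; this is only a sketch, and the wheel-building step is a hypothetical illustration rather than anything taken from the buildspec above:
# builder stage: the only stage that declares and uses PIP_INDEX_URL
FROM python:3.8 AS builder
ARG PIP_INDEX_URL
COPY requirements.txt .
RUN pip wheel -r requirements.txt -w /wheels

# target stage: no ARG here, so the index URL never appears in this image's history
FROM python:3.8
COPY --from=builder /wheels /wheels
RUN pip install /wheels/*.whl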
So, here is how I solved this for now. Seems kinda hacky, but it works. (EDIT: we have since switched to @phistrom's answer.)
In pre_build, I run the login command and copy ~/.config/pip/pip.conf to the current build directory:
pre_build:
  commands:
    - echo Logging in to Amazon ECR...
    ...
    - echo Fetching pip.conf for PYPI
    - aws codeartifact --region us-east-1 login --tool pip --repository ....
    - cp ~/.config/pip/pip.conf .
build:
  commands:
    - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
    - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
Then in the Dockerfile, I COPY that file in, do the pip install, then rm it
COPY requirements.txt pkg/
COPY --chown=myuser:myuser pip.conf /home/myuser/.config/pip/pip.conf
RUN pip install -r ./pkg/requirements.txt
RUN pip install ./pkg
RUN rm /home/myuser/.config/pip/pip.conf

How to pass config file to docker run command on Google Compute Engine?

I'm deploying this Dockerfile:
FROM zenika/alpine-chrome:with-node
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD 1
ENV PUPPETEER_EXECUTABLE_PATH /usr/bin/chromium-browser
WORKDIR /usr/src/app
COPY --chown=chrome package.json yarn.lock ./
RUN yarn --frozen-lockfile
COPY --chown=chrome src ./src
COPY --chown=chrome tsconfig.json ./
COPY --chown=chrome chrome.json /
RUN yarn run build
ENTRYPOINT ["tini", "--"]
CMD ["node", "./dist/start.js"]
using this bash script:
echo "start deploying"
PROJECT_ID=...
APP_ID=...
LAST_COMMIT_HASH=`git log --pretty=format:'%h' -n 1`
GCR_ADDRESS="gcr.io/$PROJECT_ID/$APP_ID:$LAST_COMMIT_HASH"
echo "authenticate with service account"
gcloud auth activate-service-account --key-file=./google-key.json
gcloud config set project $PROJECT_ID
gcloud config set compute/zone us-central1-a
gcloud auth configure-docker
echo "build docker image"
docker build . -t $GCR_ADDRESS
echo "push docker image to $GCR_ADDRESS"
docker push $GCR_ADDRESS
echo "create VM, if it doesn't exist yet"
gcloud compute instances create-with-container my-vm --container-image=$GCR_ADDRESS --container-arg="-it --rm --security-opt seccomp=/chrome.json" || {
echo "failed to create VM. Probably it already exists. Updating existing VM..."
gcloud compute instances update-container my-vm --container-image=$GCR_ADDRESS --container-arg="-it --rm --security-opt seccomp=/chrome.json"
}
When this container is being started by GCE, it throws the error:
[FATAL tini (6)] exec -it --rm --security-opt seccomp=/chrome.json failed: No such file or directory
How do I pass seccomp file to GCE?
In your args, seccomp=/chrome.json references the seccomp JSON file at the root directory.
Verify that the file is really at / (not recommended). If not, change the path in --security-opt seccomp=/path/to/seccomp/profile.json [1] to, for example, ./chrome.json.
Also take into consideration that each argument to append to a container entrypoint must have a separate flag, and arguments are appended in the order of the flags.
Assuming the default entrypoint of the container (or an entrypoint overridden with the --container-command flag) is a Bourne-shell-compatible executable, to execute the 'ls -l' command in the container:[2]
--container-arg="-c" --container-arg="ls -l"
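As an illustration of that rule (not a fix for the seccomp flag itself, which is a docker run option rather than an argument to the container's entrypoint), the documented pattern maps onto create-with-container like this:
# each token for the container entrypoint gets its own --container-arg flag;
# with the command overridden to /bin/sh, the container runs: /bin/sh -c "ls -l"
gcloud compute instances create-with-container my-vm \
  --container-image=$GCR_ADDRESS \
  --container-command="/bin/sh" \
  --container-arg="-c" \
  --container-arg="ls -l"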

Installing gcloud on Travis CI

I'm following this tutorial on how to use Travis CI with Google Cloud for Continuous Deployments:
https://cloud.google.com/solutions/continuous-delivery-with-travis-ci
When Travis builds, it tells me that the gcloud command is not found. Here's my .travis.yml:
sudo: false
language: python
cache:
  directories:
    - "$HOME/google-cloud-sdk/"
env:
  - GAE_PYTHONPATH=${HOME}/.cache/google_appengine PATH=$PATH:${HOME}/google-cloud-sdk/bin PYTHONPATH=${PYTHONPATH}:${GAE_PYTHONPATH} CLOUDSDK_CORE_DISABLE_PROMPTS=1
before_install:
  - openssl aes-256-cbc -K $encrypted_404aa45a170f_key -iv $encrypted_404aa45a170f_iv -in credentials.tar.gz.enc -out credentials.tar.gz -d
  - if [ ! -d "${GAE_PYTHONPATH}" ]; then python scripts/fetch_gae_sdk.py $(dirname "${GAE_PYTHONPATH}"); fi
  - if [ ! -d ${HOME}/google-cloud-sdk ]; then curl https://sdk.cloud.google.com | bash; fi
  - tar -xzf credentials.tar.gz
  - mkdir -p lib
  - gcloud auth activate-service-account --key-file client-secret.json
install:
  - gcloud config set project continuous-deployment-192112
  - gcloud -q components update gae-python
  - pip install -r requirements.txt -t lib/
script:
  - python test_main.py
  - gcloud -q preview app deploy app.yaml --promote
  - python e2e_test.py
This is the same file provided by the example repository from the tutorial. The line that fails is:
- gcloud auth activate-service-account --key-file client-secret.json
This happens even though the config already checks for the SDK and installs it if it isn't there.
I've already tried adding - source ~/.bash_profile after the install step, but this doesn't work.
Am I missing a command somewhere?
I ran into the same issue and this has worked for me:
- if [ ! -d "$HOME/google-cloud-sdk" ]; then
    export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)";
    echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list;
    curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - ;
    sudo apt-get update && sudo apt-get install google-cloud-sdk;
  fi
The only issue, however, is that since it needs sudo, the job will run on GCE, which is much slower than EC2:
https://docs.travis-ci.com/user/reference/overview/#Virtualisation-Environment-vs-Operating-System
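For reference, a rough sketch of how this slots into the question's .travis.yml (only the changed parts are shown; the key difference is sudo: required instead of sudo: false, since the apt-based install needs sudo):
sudo: required       # apt-based install needs sudo, so the container-based environment won't do
language: python
before_install:
  - if [ ! -d "$HOME/google-cloud-sdk" ]; then
      export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)";
      echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list;
      curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - ;
      sudo apt-get update && sudo apt-get install google-cloud-sdk;
    fi
  - gcloud auth activate-service-account --key-file client-secret.json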
Updated:
This is the best solution -
How to install Google Cloud SDK on Travis?