Google Cloud Platform: secret as build env variable

I have a few Google Cloud Functions with some private NPM packages that I need to install during the build phase.
Credentials for the NPM registries are set via an .npmrc file. The token is expected to be an environment variable, as in //someUrlToRegistry/:_authToken=${NPM_REGISTRY_TOKEN}
I have this token saved in Secret Manager.
How can I pass this secret as a build environment variable?
I am able to do so as a runtime variable, no problem there, but the build does not see this secret and the registry returns an unauthorized response.

As per the official documentation, you can add a secretEnv field specifying the environment variable in a build step.
Add an availableSecrets field to specify the secret version and environment variables to use for your secret. You can include substitution variables in the value of the secretVersion field. You can specify more than one secret in a build.
Example from the doc:
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker login --username=$$USERNAME --password=$$PASSWORD']
  secretEnv: ['PASSWORD']
availableSecrets:
  secretManager:
  - versionName: projects/PROJECT_ID/secrets/DOCKER_PASSWORD_SECRET_NAME/versions/DOCKER_PASSWORD_SECRET_VERSION
    env: 'PASSWORD'
Attaching a similar blog post and Stack Overflow thread for your reference.
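Applied to the NPM case from the question, a minimal sketch could look like the following; the secret name NPM_REGISTRY_TOKEN, the project ID, and the npm builder image are assumptions to adjust to your setup:
steps:
- name: 'gcr.io/cloud-builders/npm'
  args: ['install']
  # secretEnv exposes the secret as the NPM_REGISTRY_TOKEN environment variable,
  # which npm expands when it reads ${NPM_REGISTRY_TOKEN} in .npmrc
  secretEnv: ['NPM_REGISTRY_TOKEN']
availableSecrets:
  secretManager:
  - versionName: projects/PROJECT_ID/secrets/NPM_REGISTRY_TOKEN/versions/latest
    env: 'NPM_REGISTRY_TOKEN'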

Related

How to setup terraform cicd with gcp and github actions in a multidirectory repository

Introduction
I have a repository with all the infrastructure defined using IaC, separated into folders. For instance, all the Terraform configuration is in /terraform/. I want to apply all the Terraform files inside that directory from the CI/CD.
Configuration
The GitHub Action used is shown below:
name: 'Terraform'
on: [push]

permissions:
  contents: read

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    environment: production

    # Use the Bash shell regardless whether the GitHub Actions runner is ubuntu-latest, macos-latest, or windows-latest
    defaults:
      run:
        shell: bash
        #working-directory: terraform

    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      # Install the latest version of Terraform CLI and configure the Terraform CLI configuration file with a Terraform Cloud user API token
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1

      - id: 'auth'
        uses: 'google-github-actions/auth@v1'
        with:
          credentials_json: '${{ secrets.GCP_CREDENTIALS }}'

      - name: 'Set up Cloud SDK'
        uses: 'google-github-actions/setup-gcloud@v1'

      # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
      - name: Terraform Init
        run: terraform init

      # Checks that all Terraform configuration files adhere to a canonical format
      - name: Terraform Format
        run: terraform fmt -check

      # On push to "master", build or change infrastructure according to Terraform configuration files
      # Note: It is recommended to set up a required "strict" status check in your repository for "Terraform Cloud". See the documentation on "strict" required status checks for more information: https://help.github.com/en/github/administering-a-repository/types-of-required-status-checks
      - name: Terraform Apply
        run: terraform apply -auto-approve -input=false
Problem
If I log in and then change directory to apply Terraform, it doesn't find the credentials:
storage.NewClient() failed: dialing: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
On the other hand, if I don't change the directory, then it doesn't find the configuration files, as expected:
Error: No configuration files
I tried moving the Terraform configuration files to the root of the repository and it works. How could I implement it in a multidirectory repository?
Such a feature was requested before. As explained in the issue, the auth file is named as follows: gha-creds-*.json.
Therefore, I added a step just before using Terraform to update the environment variable and move the file itself:
- name: 'Setup google auth in multidirectory repo'
  run: |
    echo "GOOGLE_APPLICATION_CREDENTIALS=$GITHUB_WORKSPACE/terraform/`ls -1 $GOOGLE_APPLICATION_CREDENTIALS | xargs basename`" >> $GITHUB_ENV
    mv $GITHUB_WORKSPACE/gha-creds-*.json $GITHUB_WORKSPACE/terraform/
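With the credentials file relocated, the working-directory default that was commented out in the workflow above can be enabled so the later Terraform steps run inside the subdirectory (a sketch, assuming the /terraform/ layout from the question):
defaults:
  run:
    shell: bash
    # Terraform steps now run in the folder that also holds gha-creds-*.json
    working-directory: terraform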

Amplify Build using Secrets Manager

I am trying to access my Secrets Manager values as environment variables in the build of my Amplify application.
I have followed the AWS documentation and community threads/videos.
I have included them in my build spec file, amplify.yml, as below, per this guide: https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-versions
version: 1
env:
  secrets-manager:
    TOKEN: mySecret:myKey
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - echo "$TOKEN"
        - yarn run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
      - .next/cache/**/*
I have attached Secrets Manager access policies to my Amplify service role, per community threads and this YouTube video: https://youtu.be/jSY7Xerc8-s
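The policy attached is shaped roughly like this (a sketch; the exact resource ARN for mySecret is an assumption):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:*:*:secret:mySecret-*"
    }
  ]
}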
However, echo "$TOKEN" returns blank.
Is there no way to access Secrets Manager key-values in the Amplify build settings (https://docs.aws.amazon.com/amplify/latest/userguide/build-settings.html) the way you can in CodeBuild (see the guide above)?
So far I have only been able to store my sensitive environment variables with Parameter Store (following this guide: https://docs.aws.amazon.com/amplify/latest/userguide/environment-variables.html), but from my understanding that does not seem secure, as the values are displayed when echoed and will be exposed in the logs, whereas values from Secrets Manager would be censored as '***'.

Google Cloud Build with multiple git repositories

I have a git repository with a git submodule, which links to another git repository.
main-repo
-> file1.txt
-> submodule-repo
   -> file2.txt
I created a Google Cloud Build trigger that has permissions to main-repo.
In order to load the submodule-repo repository, I added this command to the build instructions:
steps:
- name: gcr.io/cloud-builders/git
  args: ['submodule', 'update', '--init', '--recursive']
...
And it fails at this stage with a permissions problem:
Submodule 'XXX' (XXX) registered for path '***'
Cloning into '/workspace/XXX'...
ssh: Could not resolve hostname c: Name or service not known
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
The read permission I gave Google is for the main-repo git repository. Since I can give access to only one repository, I can't grant additional permission for the submodule-repo repository.
How can I use Google Cloud Build to build a git repository with a git submodule?
I did the following and it's working for me.
I followed these instructions to access my private repo from Google Cloud:
1. Create an SSH key.
2. Store the private SSH key in Secret Manager.
3. Add the public SSH key to your private repository's deploy keys. (If you need to access more than one repo, create a user or use an existing user who has access to these repos and put the deploy key in this account, not in the repo itself: from the GitHub account, go to Settings > SSH keys.)
4. Grant the Cloud Build service account permission to access Secret Manager (a sketch of this command follows the list).
5. Add the host's public key to known hosts. (I stored it as a variable in Cloud Build, and you can use a GitHub secret to store it.) Use this command to get the host key; don't copy it from the .pub file:
ssh-keyscan -t rsa github.com > known_hosts.github
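For step 4, the grant can be done with a command along these lines (SECRET_NAME and PROJECT_NUMBER are placeholders for your own values):
# Allow the Cloud Build service account to read the SSH key secret
gcloud secrets add-iam-policy-binding SECRET_NAME \
    --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"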
Then use these steps in the Cloud Build file:
steps:
# Fetch the private SSH key from Secret Manager and trust the git host
- name: 'gcr.io/cloud-builders/git'
  secretEnv: ['SSH_KEY']
  entrypoint: 'bash'
  args:
  - -c
  - |
    echo "$$SSH_KEY" >> /root/.ssh/id_rsa
    chmod 400 /root/.ssh/id_rsa
    echo ${_SSH_PUBLIC_KEY} >> /root/.ssh/known_hosts
  volumes:
  - name: 'ssh'
    path: /root/.ssh
# Pull the submodule over SSH using the key set up above
- name: 'gcr.io/cloud-builders/git'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    git submodule init
    git submodule update
  volumes:
  - name: 'ssh'
    path: /root/.ssh
availableSecrets:
  secretManager:
  - versionName: projects/[GCP_Project]/secrets/[SECRET_NAME]/versions/latest
    env: 'SSH_KEY'
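The ${_SSH_PUBLIC_KEY} substitution referenced above has to be supplied to the build, either on the trigger or, for a manual run, on the command line. A sketch, assuming the known_hosts.github file produced by ssh-keyscan earlier:
# Pass the host key captured by ssh-keyscan as a substitution variable
gcloud builds submit --config=cloudbuild.yaml \
    --substitutions=_SSH_PUBLIC_KEY="$(cat known_hosts.github)"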

How to set the environment variable in cloudbuild.yaml file?

I am trying to set GOOGLE_APPLICATION_CREDENTIALS. Is this the correct way to set an environment variable? Below is my YAML file:
steps:
- name: 'node:10.10.0'
  id: installing_npm
  args: ['npm', 'install']
  dir: 'API/system_performance'
- name: 'node:10.10.0'
  #entrypoint: bash
  args: ['bash', 'set GOOGLE_APPLICATION_CREDENTIALS=test/emc-ema-cp-d-267406-a2af305d16e2.json']
  id: run_test_coverage
  args: ['npm', 'run', 'coverage']
  dir: 'API/system_performance'
Please help me solve this.
You can use the env step parameter, as in the sketch below.
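A minimal sketch of the second step from the question rewritten with env (the key path is taken from the question; adjust it to wherever the file actually lives relative to dir):
steps:
- name: 'node:10.10.0'
  id: run_test_coverage
  args: ['npm', 'run', 'coverage']
  dir: 'API/system_performance'
  # env sets environment variables for this step only
  env:
  - 'GOOGLE_APPLICATION_CREDENTIALS=test/emc-ema-cp-d-267406-a2af305d16e2.json'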
However, when you execute Cloud Build, the platform uses its own service account (in the future, it will be possible to specify the service account that you want to use).
Thus, if you grant the Cloud Build service account the correct role, you don't need to use a key file at all (committing a key file to your Git repository is not really a good practice!).
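Granting the role would look something like this (the role itself depends on what the build touches and is only a placeholder here):
# Give the Cloud Build service account the role the build needs
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
    --role="roles/ROLE_NEEDED_BY_THE_BUILD"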

How do I retrieve assets from a Google Storage bucket within a Google Container Registry automated build?

I've created a mirrored GitHub repo in Google's Container Registry and then created a build trigger. The Dockerfile in the repo includes gsutil -m rsync -r gs://asset-bucket/ local-dir/ so that I can move shared private assets into the container.
But I get an error:
ServiceException: 401 Anonymous caller does not have storage.objects.list access to asset-bucket
I have an automatically created service account (@cloudbuild.gserviceaccount.com) for building, and it has the Cloud Container Builder role. I tried adding Storage Object Viewer, but I still get the error.
Shouldn't the container builder automatically have the appropriate permissions?
Are you using the gcr.io/cloud-builders/gsutil build step to do this? That should use default credentials properly, and it should Just Work:
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  args: ["-m", "rsync", "gs://asset-bucket/", "local-dir/"]
Alternatively, you could try the GCS Fetcher.
Just to be specific about the answer from @david-bendory: privileged calls cannot occur inside a Dockerfile. I created a cloudbuild.yaml that looks like this:
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  args: ["-m", "rsync", "-r", "gs://my-assets/", "."]
  dir: "static"
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/project-name', '.']
images: ['gcr.io/$PROJECT_ID/project-name']
and a dockerfile that includes
COPY static/* www/
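Put together, a minimal Dockerfile along those lines might look like this (the nginx base image and target directory are assumptions for illustration):
FROM nginx
# static/ was populated by the gsutil rsync step before docker build ran
COPY static/* www/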