Google Cloud Build with multiple git repositories - google-cloud-platform

I have a git repository with a git submodule that is linked to another git repository.
main-repo
-> file1.txt
-> submodule-repo
   -> file2.txt
I created a Google Cloud Build trigger that has permissions to main-repo.
In order to load the submodule-repo repository, I added this command to the build instructions:
steps:
- name: gcr.io/cloud-builders/git
  args: ['submodule', 'update', '--init', '--recursive']
...
And it fails at this stage because of a permissions problem:
Submodule 'XXX' (XXX) registered for path '***' Cloning into
'/workspace/XXX'... ssh: Could not resolve hostname c: Name or service
not known fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository
exists.
The read permission I gave Google is for the main-repo git repository. Since I can grant access to only one repository, I can't grant additional permission for the submodule-repo repository.
How can I use Google Cloud Build to build a git repository with a git submodule?

I did the following and it's working for me:
I followed these instructions to access my private repo from Google Cloud:
Create an SSH key
Store the private SSH key in Secret Manager
Add the public SSH key to your private repository's deploy keys (if you need access to more than one repo, create a user or use an existing user who has access to these repos and add the key to that account instead of to the repo itself: GitHub account > Settings > SSH keys)
Grant permissions to Cloud Build service account to access Secret Manager
Add GitHub's public host key to known hosts (I stored it as a substitution variable in Cloud Build; you could also store it in a GitHub secret)
*Use this command to get the host key rather than copying it from a .pub file:
ssh-keyscan -t rsa github.com > known_hosts.github
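For reference, steps 1, 2 and 4 above can be scripted roughly like this. This is only a sketch, not part of the original setup; the key file name id_github, the secret name github-ssh-key and PROJECT_NUMBER are placeholders for your own values:
# Generate a dedicated key pair without a passphrase (step 1)
ssh-keygen -t rsa -b 4096 -N '' -f id_github -C cloud-build-submodules
# Store the private key in Secret Manager (step 2)
gcloud secrets create github-ssh-key --replication-policy=automatic --data-file=id_github
# Allow the Cloud Build service account to read the secret (step 4)
gcloud secrets add-iam-policy-binding github-ssh-key \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"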
Then add these steps to the Cloud Build file:
- name: 'gcr.io/cloud-builders/git'
  secretEnv: ['SSH_KEY']
  entrypoint: 'bash'
  args:
    - -c
    - |
      echo "$$SSH_KEY" >> /root/.ssh/id_rsa
      chmod 400 /root/.ssh/id_rsa
      echo ${_SSH_PUBLIC_KEY} >> /root/.ssh/known_hosts
  volumes:
    - name: 'ssh'
      path: /root/.ssh
- name: 'gcr.io/cloud-builders/git'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      git submodule init
      git submodule update
  volumes:
    - name: 'ssh'
      path: /root/.ssh
availableSecrets:
  secretManager:
    - versionName: projects/[GCP_Project]/secrets/[SECRET_NAME]/versions/latest
      env: 'SSH_KEY'
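I defined _SSH_PUBLIC_KEY as a substitution variable on the trigger. If you submit builds manually instead, one alternative (an assumption on my part, not something from the setup above) is to pass the value on the command line; note that gcloud splits substitutions on commas, so this assumes the host key line contains none:
# Pass the host key captured by ssh-keyscan as the _SSH_PUBLIC_KEY substitution
gcloud builds submit . --config=cloudbuild.yaml \
  --substitutions=_SSH_PUBLIC_KEY="$(cat known_hosts.github)"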

Related

How to setup terraform cicd with gcp and github actions in a multidirectory repository

Introduction
I have a repository with all the infrastructure defined using IaC, separated in folders. For instance, all terraform configuration is in /terraform/. I want to apply all terraform files inside that directory from the CI/CD.
Configuration
The GitHub Action used is shown below:
name: 'Terraform'
on: [push]

permissions:
  contents: read

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    environment: production

    # Use the Bash shell regardless whether the GitHub Actions runner is ubuntu-latest, macos-latest, or windows-latest
    defaults:
      run:
        shell: bash
        #working-directory: terraform

    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      # Install the latest version of Terraform CLI and configure the Terraform CLI configuration file with a Terraform Cloud user API token
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1

      - id: 'auth'
        uses: 'google-github-actions/auth@v1'
        with:
          credentials_json: '${{ secrets.GCP_CREDENTIALS }}'

      - name: 'Set up Cloud SDK'
        uses: 'google-github-actions/setup-gcloud@v1'

      # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
      - name: Terraform Init
        run: terraform init

      # Checks that all Terraform configuration files adhere to a canonical format
      - name: Terraform Format
        run: terraform fmt -check

      # On push to "master", build or change infrastructure according to Terraform configuration files
      # Note: It is recommended to set up a required "strict" status check in your repository for "Terraform Cloud". See the documentation on "strict" required status checks for more information: https://help.github.com/en/github/administering-a-repository/types-of-required-status-checks
      - name: Terraform Apply
        run: terraform apply -auto-approve -input=false
Problem
If I log in and then change directory to apply terraform, it doesn't find the credentials:
storage.NewClient() failed: dialing: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
On the other hand, if I don't change the directory then it doesn't find the configuration files as expected.
Error: No configuration files
I tried moving the terraform configuration files to the root of the repository and it works. How could I implement this in a multidirectory repository?
Such a feature was requested before. As explained in the issue, the auth file is named gha-creds-*.json.
Therefore, I added a step just before using terraform to update the environment variable and move the file itself:
- name: 'Setup google auth in multidirectory repo'
  run: |
    echo "GOOGLE_APPLICATION_CREDENTIALS=$GITHUB_WORKSPACE/terraform/`ls -1 $GOOGLE_APPLICATION_CREDENTIALS | xargs basename`" >> $GITHUB_ENV
    mv $GITHUB_WORKSPACE/gha-creds-*.json $GITHUB_WORKSPACE/terraform/
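Putting it together, the relevant tail of the job might look like this. This is a sketch based on the workflow above, with the commented-out working-directory re-enabled so every terraform command runs inside terraform/; the relocation step keeps working because it only uses absolute $GITHUB_WORKSPACE paths:
defaults:
  run:
    shell: bash
    working-directory: terraform
steps:
  - name: Checkout
    uses: actions/checkout@v3
  - name: Setup Terraform
    uses: hashicorp/setup-terraform@v1
  - id: 'auth'
    uses: 'google-github-actions/auth@v1'
    with:
      credentials_json: '${{ secrets.GCP_CREDENTIALS }}'
  - name: 'Set up Cloud SDK'
    uses: 'google-github-actions/setup-gcloud@v1'
  # Point GOOGLE_APPLICATION_CREDENTIALS into terraform/ and move the file there
  - name: 'Setup google auth in multidirectory repo'
    run: |
      echo "GOOGLE_APPLICATION_CREDENTIALS=$GITHUB_WORKSPACE/terraform/`ls -1 $GOOGLE_APPLICATION_CREDENTIALS | xargs basename`" >> $GITHUB_ENV
      mv $GITHUB_WORKSPACE/gha-creds-*.json $GITHUB_WORKSPACE/terraform/
  - name: Terraform Init
    run: terraform init
  - name: Terraform Apply
    run: terraform apply -auto-approve -input=false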

How to pull in additional private repositories in Amplify

I'm trying to use AWS Amplify to deploy a multi-repo Dendron wiki which has a mix of public and private GitHub repositories.
Amplify can be associated with a single repo but there doesn't seem to be a built-in way to pull in additional private repositories.
Create a custom deploy key for the private repo in github
generate the key
ssh-keygen -f deploy_key -N ""
Encode the deploy key as a base64-encoded env variable for Amplify
cat deploy_key | base64 | tr -d \\n
add this as a hosting environment variable (eg. DEPLOY_KEY)
Modify the amplify.yml file to make use of the deploy key
there are 2 key steps:
adding deploy key to ssh-agent
WARNING: this implementation will print the $DEPLOY_KEY to stdout
disabling StrictHostKeyChecking
NOTE: amplify does not have a $HOME/.ssh folder by default so you'll need to create one as part of the deployment process
relevant excerpt below
- ...
- eval "$(ssh-agent -s)"
- ssh-add <(echo "$DEPLOY_KEY" | base64 -d)
- echo "disable strict host key check"
- mkdir ~/.ssh
- touch ~/.ssh/config
- 'echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
- ...
full build file here
Now you should be able to use git to clone the private repo.
For a more detailed writeup as well as alternatives and gotchas, see here
For GitLab repositories:
Create a deploy token in GitLab
Set env variables (eg. DEPLOY_TOKEN_USERNAME, DEPLOY_TOKEN_PASSWORD) in the Amplify panel
Add this line to the Amplify config
- git config --global url."https://$DEPLOY_TOKEN_USERNAME:$DEPLOY_TOKEN_PASSWORD@gitlab.com/.../repo.git".insteadOf "https://gitlab.com/.../repo.git"
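In amplify.yml that line typically goes into the preBuild commands, before anything tries to clone the private repo. A minimal sketch, using a hypothetical group/repo path in place of the elided one above:
version: 1
frontend:
  phases:
    preBuild:
      commands:
        # Rewrite HTTPS URLs for the private repo to include the deploy token
        - git config --global url."https://$DEPLOY_TOKEN_USERNAME:$DEPLOY_TOKEN_PASSWORD@gitlab.com/group/repo.git".insteadOf "https://gitlab.com/group/repo.git"
        # Now the private repo can be cloned over HTTPS (group/repo is a placeholder)
        - git clone https://gitlab.com/group/repo.git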

Gitlab Cloud run deploy successfully but Job failed

I'm having an issue with my CI/CD pipeline:
it successfully deploys to GCP Cloud Run, but on the GitLab dashboard the status is failed.
I tried replacing the image with some other Docker images, but it fails as well.
# File: .gitlab-ci.yml
image: google/cloud-sdk:alpine

deploy_int:
  stage: deploy
  environment: integration
  only:
    - integration # This pipeline stage will run on this branch alone
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # Google Cloud service accounts
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild_int.yaml

# File: cloudbuild_int.yaml
steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '--build-arg', 'APP_ENV=int', '-t', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '.']
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/tpdropd-int-front']
  # deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'tpd-front', '--image', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '--region', 'us-central1', '--platform', 'managed', '--allow-unauthenticated']
GitLab build output:
ERROR: (gcloud.builds.submit)
The build is running, and logs are being written to the default logs bucket.
This tool can only stream logs if you are Viewer/Owner of the project and, if applicable, allowed by your VPC-SC security policy.
The default logs bucket is always outside any VPC-SC security perimeter.
If you want your logs saved inside your VPC-SC perimeter, use your own bucket.
See https://cloud.google.com/build/docs/securing-builds/store-manage-build-logs.
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
I fixed it by using:
options:
  logging: CLOUD_LOGGING_ONLY
in cloudbuild.yaml
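Applied to the cloudbuild_int.yaml from the question, that amounts to adding a top-level options block. A sketch of the combined file:
steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '--build-arg', 'APP_ENV=int', '-t', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '.']
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/tpdropd-int-front']
  # deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'tpd-front', '--image', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '--region', 'us-central1', '--platform', 'managed', '--allow-unauthenticated']
# Write build logs to Cloud Logging only, so gcloud builds submit does not
# need Viewer access to stream from the default logs bucket
options:
  logging: CLOUD_LOGGING_ONLY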
Alternatively, you can use this workaround:
fix it by giving the Viewer role to the service account running the build, but this feels like granting too much permission for such a task.
This worked for me: Use --suppress-logs
gcloud builds submit --suppress-logs --tag=<my-tag>
To fix the issue, you just need to create a bucket in your project (by default, without public access) and add the 'Storage Admin' role to your user or service account via https://console.cloud.google.com/iam-admin/iam
After that, you can point gcloud builds submit at the new bucket via the --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE parameter, like this:
gcloud builds submit --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE ...(other parameters here)
We need a new bucket because the default logs bucket is global (cross-project). That's why it has specific security requirements for access, especially from outside Google Cloud (GitLab, Azure DevOps, etc.) via service accounts.
(Moreover, in this case you don't need to turn off logging via --suppress-logs.)
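A minimal CLI sketch of that setup (the bucket name is taken from above; SA_EMAIL stands for whatever user or service account submits the build, and here the role is granted on the bucket only rather than project-wide):
# Create a private bucket for build logs in your project
gsutil mb -p YOUR_PROJECT_ID gs://YOUR_NEW_BUCKET_NAME_HERE
# Grant the submitting account Storage Admin on that bucket only
gsutil iam ch serviceAccount:SA_EMAIL:roles/storage.admin gs://YOUR_NEW_BUCKET_NAME_HERE
# Submit the build with logs written to the new bucket
gcloud builds submit . --config=cloudbuild_int.yaml --gcs-log-dir=gs://YOUR_NEW_BUCKET_NAME_HERE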
Kevin's answer worked like magic for me; since I am not able to comment, I am writing this new answer.
Initially I was facing the same issue where, in spite of the gcloud builds submit command passing, my GitLab CI was failing.
Below is the cloudbuild.yaml file where I added the logging option as Kevin suggested.
steps:
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: 'bash'
    args: ['run_query.sh', '${_SCRIPT_NAME}']
options:
  logging: CLOUD_LOGGING_ONLY
Check this document for details: https://cloud.google.com/build/docs/build-config-file-schema#options
The options solution mentioned by @Kevin also worked for me. Just add the parameter as mentioned before in the cloudbuild.yml file.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
options:
  logging: CLOUD_LOGGING_ONLY

How to get un-versioned files included while deploying serverless apps via github actions?

I have a serverless project which requires an SSL signed cert/private key for communication with an API. The cert/key aren't in version control but are in my local file system. The files get bundled with the lambdas in the service and are accessible for use when deployed.
package:
  individually: true
  include:
    - signed-cert.pem
    - private-key.pem
Deployment is done via Github Actions.
e.g. npm install serverless ... npx serverless deploy
How could those files be included without adding them to version control? Could they be retrieved from S3? Some other way?
It looks like encrypting the files may work, but is there a better approach? The lambdas could fetch them from S3, but I'd rather avoid additional latency on every startup if possible.
It looks like adding a GitHub secret for the private key and certificate works. Just paste the cert/private key text into a GitHub secret, e.g.
Secret: SIGNED_CERT, Value: -----BEGIN CERTIFICATE-----......-----END CERTIFICATE-----
Then in the GitHub Action Workflow:
- name: create ssl signed certificate
  run: 'echo "$SIGNED_CERT" > signed-cert.pem'
  shell: bash
  env:
    SIGNED_CERT: ${{secrets.SIGNED_CERT}}
  working-directory: serverless/myservice
- name: create ssl private key
  run: 'echo "$PRIVATE_KEY" > private-key.pem'
  shell: bash
  env:
    PRIVATE_KEY: ${{secrets.PRIVATE_KEY}}
  working-directory: serverless/myservice
Set working-directory if the serverless.yml isn't at the root level of the project.
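For completeness, a deploy step that runs after the two steps above might look like this. It's only a sketch, reusing the working directory from the question and the npm/npx commands mentioned earlier, with any extra deploy flags left out:
- name: deploy serverless service
  run: |
    npm install serverless
    npx serverless deploy
  shell: bash
  working-directory: serverless/myservice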

Run node.js database migrations on Google Cloud SQL during Google Cloud Build

I would like to run database migrations written in node.js during the Cloud Build process.
Currently, the database migration command is being executed but it seems that the Cloud Build process does not have access to connect to Cloud SQL via an IP address with username/password.
In the case of Cloud SQL and Node.js, it would look something like this:
steps:
  # Install Node.js dependencies
  - id: yarn-install
    name: gcr.io/cloud-builders/yarn
    waitFor: ["-"]
  # Install Cloud SQL proxy
  - id: proxy-install
    name: gcr.io/cloud-builders/yarn
    entrypoint: sh
    args:
      - "-c"
      - "wget https://storage.googleapis.com/cloudsql-proxy/v1.20.1/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy && chmod +x cloud_sql_proxy"
    waitFor: ["-"]
  # Migrate database schema to the latest version
  # https://knexjs.org/#Migrations-CLI
  - id: migrate
    name: gcr.io/cloud-builders/yarn
    entrypoint: sh
    args:
      - "-c"
      - "(./cloud_sql_proxy -dir=/cloudsql -instances=<CLOUD_SQL_CONNECTION> & sleep 2) && yarn run knex migrate:latest"
    timeout: "1200s"
    waitFor: ["yarn-install", "proxy-install"]
timeout: "1200s"
You would launch yarn install and download the Cloud SQL Proxy in parallel. Once these two steps are complete, you launch the proxy, wait 2 seconds and finally run yarn run knex migrate:latest.
For this to work you would need Cloud SQL Admin API enabled in your GCP project.
Here <CLOUD_SQL_CONNECTION> is your Cloud SQL instance connection name, which can be found on the instance page in the Cloud Console. The same name will be used in your SQL connection settings, e.g. host=/cloudsql/example:us-central1:pg13.
Also, make sure that the Cloud Build service account has "Cloud SQL Client" role in the GCP project, where the db instance is located.
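Both prerequisites can be set up with gcloud. A quick sketch, where PROJECT_ID and PROJECT_NUMBER are placeholders for your own project:
# Enable the Cloud SQL Admin API used by the proxy
gcloud services enable sqladmin.googleapis.com --project=PROJECT_ID
# Grant the Cloud Build service account the Cloud SQL Client role
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/cloudsql.client"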
As of tag 1.16 of gcr.io/cloudsql-docker/gce-proxy, the currently accepted answer no longer works. Here is a different approach that keeps the proxy in the same step as the commands that need it:
- id: cmd-with-proxy
  name: [YOUR-CONTAINER-HERE]
  timeout: 100s
  entrypoint: sh
  args:
    - -c
    - '(/workspace/cloud_sql_proxy -dir=/workspace -instances=[INSTANCE_CONNECTION_NAME] & sleep 2) && [YOUR-COMMAND-HERE]'
The proxy will automatically exit once the main process exits. Additionally, it'll mark the step as "ERROR" if either the proxy or the command given fails.
This does require that the binary be in the /workspace volume, but it can be provided either manually or via a prerequisite step like this:
- id: proxy-install
  name: alpine:3.10
  entrypoint: sh
  args:
    - -c
    - 'wget -O /workspace/cloud_sql_proxy https://storage.googleapis.com/cloudsql-proxy/v1.16/cloud_sql_proxy.linux.386 && chmod +x /workspace/cloud_sql_proxy'
Additionally, this should work with TCP since the proxy will be in the same container as the command.
Use google-appengine/exec-wrapper. It is an image that does exactly this. Usage (see the README in the link):
steps:
- name: "gcr.io/google-appengine/exec-wrapper"
  args: ["-i", "gcr.io/my-project/appengine/some-long-name",
         "-e", "ENV_VARIABLE_1=value1", "-e", "ENV_2=value2",
         "-s", "my-project:us-central1:my_cloudsql_instance",
         "--", "bundle", "exec", "rake", "db:migrate"]
The -s sets the proxy target.
Cloud Build runs using a service account and it looks like you need to grant access to Cloud SQL for this account.
You can find additional info about setting service account permissions here.
Here's how to combine Cloud Build + Cloud SQL Proxy + Docker.
If you're running your database migrations/operations within a Docker container in Cloud Build, it won't be able to directly access your proxy, because Docker containers are isolated from the host machine.
Here's what I managed to get up and running:
- id: build
  # Build your application
  waitFor: ['-']
- id: install-proxy
  name: gcr.io/cloud-builders/wget
  entrypoint: bash
  args:
    - -c
    - wget -O /workspace/cloud_sql_proxy https://storage.googleapis.com/cloudsql-proxy/v1.15/cloud_sql_proxy.linux.386 && chmod +x /workspace/cloud_sql_proxy
  waitFor: ['-']
- id: migrate
  name: gcr.io/cloud-builders/docker
  entrypoint: bash
  args:
    - -c
    - |
      /workspace/cloud_sql_proxy -dir=/workspace -instances=projectid:region:instanceid & sleep 2 && \
      docker run -v /workspace:/root \
        --env DATABASE_HOST=/root/projectid:region:instanceid \
        # Pass other necessary env variables like db username/password, etc.
        $_IMAGE_URL:$COMMIT_SHA
  timeout: '1200s'
  waitFor: [build, install-proxy]
Because our db operations take place within the Docker container, I found the best way to provide access to Cloud SQL is to specify the Unix socket directory (-dir=/workspace) instead of exposing TCP port 5432.
Note: I recommend using the directory /workspace instead of /cloudsql for Cloud Build.
Then we mounted the /workspace directory to the Docker container's /root directory, which is the default directory where your application code resides. When I tried to mount it somewhere other than /root, nothing seemed to happen (perhaps a permission issue with no error output).
Also: I noticed the proxy version 1.15 works well. I had issues with newer versions. Your mileage may vary.