How to access a GCP Cloud Source Repository from another project? - google-cloud-platform

I have project A and project B.
I use a GCP Cloud Source Repository on project A as my 'origin' remote.
I use Cloud Build with a trigger on changes to the 'develop' branch of the repo to trigger builds. As part of the build I deploy some stuff with the gcloud builder to project A.
Now, I want to run the same build on project B. Maybe the same branch, maybe a different branch (e.g. 'release-*'). In the end I want to deploy some stuff with the gcloud builder to project B.
The problem is, when I'm on project B (in Google Cloud Console), I can't even see the repo in project A. It asks me to "connect repository", but I can only select GitHub or Bitbucket repos for mirroring. The option "Cloud Source Repositories" is greyed out, telling me that they "are already connected". Just evidently not one from another project.
I could set up a new repo on project B and push to both repos, but that seems inefficient (and likely not sustainable long term). The curious thing is that such a setup could easily be achieved using an external Bitbucket/GitHub repo as origin, mirrored in both projects.
Is anything like this at all possible in Google Cloud Platform without external dependencies?
I also tried running all my builds in project A with a separate trigger that deploys to project B (I use substitutions to manage that), but it fails with permission issues. Cloud Build always seems to run as a Cloud Build service account, whose roles you can manage, but I can't see how I could give it access to another project. Also, in this case both builds would appear indistinguishably in a single build history, which is not ideal.

I faced a similar problem and I solved it by having multiple Cloud Build files.
One Cloud Build file (triggered when code was pushed to a certain branch) was dedicated to copying all of my source code into the other project's source repo, which in turn has its own Cloud Build file for deployment to that project.
Here is a sample of the Cloud Build file that copies sources to another project:
steps:
- name: gcr.io/cloud-builders/git
  args: ['checkout', '--orphan', 'temp']
- name: gcr.io/cloud-builders/git
  args: ['add', '-A']
- name: gcr.io/cloud-builders/git
  args: ['config', '--global', 'user.name', 'Your Name']
- name: gcr.io/cloud-builders/git
  args: ['config', '--global', 'user.email', 'Your Email']
- name: gcr.io/cloud-builders/git
  args: ['commit', '-am', 'latest production commit']
- name: gcr.io/cloud-builders/git
  args: ['branch', '-D', 'master']
- name: gcr.io/cloud-builders/git
  args: ['branch', '-m', 'master']
- name: gcr.io/cloud-builders/git
  args: ['push', '-f', 'https://source.developers.google.com/p/project-prod/r/project-repo', 'master']
This pushes all of the source code into the repo in the other project.
Note: you need to give your Cloud Build service account permission to push source code into the other project's source repositories.
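A minimal sketch of such a grant from the CLI (the project ID and service-account number below are placeholders; the Source Repository Writer role is usually sufficient for pushes):

# Allow project A's Cloud Build service account to push to repos in project B
gcloud projects add-iam-policy-binding PROJECT_B_ID \
  --member="serviceAccount:PROJECT_A_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/source.writer"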

As you have already said, you can host your repos externally on Bitbucket/GitHub and sync them to each project, but you may have to pay extra for each build.
Otherwise, you could use third-party services to build your repos externally and deploy the result wherever you want; for example, look into CircleCI or a similar service.
You could also give the build permissions to refer to resources from another project, but I would keep the projects separated to minimize complexity.

My solution:
In project A, create a new Cloud Build trigger on branch release-* whose build configuration sets $_PROJECT_ID to project B's ID.
In the Cloud Build trigger definition, add a new substitution variable named _PROJECT_ID whose value is project B's ID.
NOTE: Remember to grant permissions on project B to project A's Cloud Build service account ([PROJECT_NUMBER]@cloudbuild.gserviceaccount.com).
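A hedged sketch of the kinds of grants this typically involves (project IDs, project numbers, and the exact roles below are placeholders/assumptions and depend on what the build deploys):

# Let project A's Cloud Build service account deploy Cloud Run services in project B
gcloud projects add-iam-policy-binding PROJECT_B_ID \
  --member="serviceAccount:PROJECT_A_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/run.admin"

# Pushing images into project B's Container Registry typically also needs storage access
gcloud projects add-iam-policy-binding PROJECT_B_ID \
  --member="serviceAccount:PROJECT_A_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/storage.admin"

# Deploying to Cloud Run requires permission to act as the service's runtime service account
gcloud iam service-accounts add-iam-policy-binding PROJECT_B_NUMBER-compute@developer.gserviceaccount.com \
  --project=PROJECT_B_ID \
  --member="serviceAccount:PROJECT_A_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"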
cloudbuild.yaml
steps:
- name: gcr.io/cloud-builders/docker
  args:
  - build
  - '--no-cache'
  - '-t'
  - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - .
  - '-f'
  - Dockerfile
  id: Build
- name: gcr.io/cloud-builders/docker
  args:
  - push
  - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  id: Push
- name: gcr.io/cloud-builders/gcloud
  entrypoint: gcloud
  args:
  - beta
  - run
  - deploy
  - $_SERVICE_NAME
  - '--platform=managed'
  - '--image=$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - >-
    --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
  - '--region=$_DEPLOY_REGION'
  - '--quiet'
  - '--project=$_PROJECT_ID'
  id: Deploy
images:
- '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
timeout: '20m'
tags:
- gcp-cloud-build-deploy-cloud-run
- gcp-cloud-build-deploy-cloud-run-managed
- driveit-hp-agreement-mngt-api

Unfortunately Google doesn't seem to provide that functionality within Source Repositories (it would rock if they did).
An alternative option you could consider (though it involves external dependencies) is to mirror your Source Repositories first to GitHub or Bitbucket, then mirror back again into Source Repositories. That way, any changes made to any mirror of the repository will sync (i.e. a change pushed in Project B will sync with Bitbucket, and likewise in Project A).
EDIT
To illustrate the alternative solution, the flow is roughly: Project A Source Repo <-> external GitHub/Bitbucket repo <-> Project B Source Repo.

Related

Path and options for saving a Packer VM image to GCP Artifact Registry

(Warning, newbie here.) I'm learning Packer by building a VM. I followed links to the cloud-builders-community/packer example. Unfortunately this seems to be out of date: it pushes the output to gcr.io … which I'm discovering is being deprecated in favour of Artifact Registry. It's also using YAML instead of HCL2.
Is this old code and is there an up to date equivalent somewhere else?
Assuming I can or should continue using this sample code…
I'm confused about a couple of things. Artifact Registry's Create Repository has options for Docker, Maven, etc., but does not have an option for VM images. Do I just choose Docker?
Then in cloud-builders-community/packer/cloudbuild.yaml what path do I use to replace gcr.io? gcr.io appears multiple times.
From: https://github.com/GoogleCloudPlatform/cloud-builders-community/packer/cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/wget'
  args: ["https://releases.hashicorp.com/packer/${_PACKER_VERSION}/packer_${_PACKER_VERSION}_linux_amd64.zip"]
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/packer:${_PACKER_VERSION}',
         '-t', 'gcr.io/$PROJECT_ID/packer',
         '--build-arg', 'PACKER_VERSION=${_PACKER_VERSION}',
         '--build-arg', 'PACKER_VERSION_SHA256SUM=${_PACKER_VERSION_SHA256SUM}',
         '.']
substitutions:
  _PACKER_VERSION: 1.7.8
  _PACKER_VERSION_SHA256SUM: 8a94b84542d21b8785847f4cccc8a6da4c7be5e16d4b1a2d0a5f7ec5532faec0
images:
- 'gcr.io/$PROJECT_ID/packer:latest'
- 'gcr.io/$PROJECT_ID/packer:${_PACKER_VERSION}'
tags: ['cloud-builders-community']
BTW, the overall arc of my learning project is:
Packer => VM Image => GCP Artifact Repository => Terraform => GCP VM
I don't know specifics about packer, but in general, for using AR with docs that specify gcr.io:
Any time you want to use AR as a replacement for GCR, you should choose the Docker format.
You should replace gcr.io/$PROJECT_ID with $REGION-docker.pkg.dev/$PROJECT_ID/$REPOSITORY_ID anywhere it refers to your own project, and leave the gcr.io URL as-is for other people's projects (like cloud-builders).
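If you don't have a repository yet, a Docker-format Artifact Registry repository can be created roughly like this (the repository name and region below are placeholders):

# Create a Docker-format repository in Artifact Registry
gcloud artifacts repositories create my-docker-repo \
  --repository-format=docker \
  --location=us-central1

# An image previously tagged gcr.io/$PROJECT_ID/packer would then become:
#   us-central1-docker.pkg.dev/$PROJECT_ID/my-docker-repo/packer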

Google Cloud Run correctly running continuous deployment to github, but not updating when deployed

I've set up a Google Cloud Run service with continuous deployment from a GitHub repo, and it redeploys every time there's a push to main (what I want), but when I go to check the site, it hasn't updated the HTML I've been testing with. I've tested it on my local machine, and it updates the code when I run the Django server, so I'm guessing it's something with my cloudbuild.yml? There was another post I tried to mimic, but it didn't take.
Any advice would be very helpful! Thank you!
cloudbuild.yml:
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/${PROJECT_ID}/exeplore', './ExePlore']
# Push the image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/${PROJECT_ID}/exeplore']
# Deploy image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - 'run'
  - 'deploy'
  - 'exeplore'
  - '--image'
  - 'gcr.io/${PROJECT_ID}/exeplore'
  - '--region'
  - 'europe-west2'
  - '--platform'
  - 'managed'
images:
- gcr.io/${PROJECT_ID}/exeplore
Here are the variables for GCR
Edit 1: I've now updated my cloudbuild, so the SHORT_SHA is all gone, but now Cloud Run is saying it can't find my manage.py at /Exeplore/manage.py. I might have to trial-and-error it, as running the container locally is fine, and the same goes for running the server locally. I have yet to try what Ezekias suggested, as I've tried rolling back to when it was correctly running the server and it doesn't like that.
Edit 2: I've checked the services, it is at 100% Latest
Check your Cloud Run service, either on the Cloud Console or by running gcloud run services describe. It may be set to serve traffic to a specific revision instead of having 100% of traffic serving LATEST.
If that's the case, it won't automatically move traffic to the new revision when you deploy. If you want it to automatically switch to the new update, you can run gcloud run services update-traffic --to-latest or use the "Manage Traffic" button on the revisions tab of the Cloud Console to set 100% of traffic to the latest healthy revision.
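For example, roughly (the service name and region below are taken from the question; adjust as needed):

# See which revision is currently receiving traffic
gcloud run services describe exeplore --region europe-west2 --platform managed

# Route 100% of traffic to the latest healthy revision
gcloud run services update-traffic exeplore --to-latest --region europe-west2 --platform managed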
It looks like you're building gcr.io/${PROJECT_ID}/exeplore:$SHORT_SHA, but pushing and deploying gcr.io/${PROJECT_ID}/exeplore. These are essentially different images.
Update any image variables to include the SHORT_SHA to ensure all references are the same.
To avoid duplication, you may also want to use dynamic substitution variables.
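As an illustration only, here is a minimal sketch of the question's steps with a consistent ${SHORT_SHA} tag (names and region are taken from the question; ${SHORT_SHA} is only populated for trigger-invoked builds):

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}', './ExePlore']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'exeplore',
         '--image', 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}',
         '--region', 'europe-west2', '--platform', 'managed']
images:
- 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}'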

Gitlab Cloud run deploy successfully but Job failed

I'm having an issue with my CI/CD pipeline:
it successfully deploys to GCP Cloud Run, but on the GitLab dashboard the status is failed.
I tried replacing the images with some other Docker images, but it fails as well.
# File: .gitlab-ci.yml
image: google/cloud-sdk:alpine

deploy_int:
  stage: deploy
  environment: integration
  only:
    - integration # This pipeline stage will run on this branch alone
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # Google Cloud service accounts
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild_int.yaml
# File: cloudbuild_int.yaml
steps:
# build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--build-arg', 'APP_ENV=int', '-t', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '.']
# push the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/tpdropd-int-front']
# deploy to Cloud Run
- name: "gcr.io/cloud-builders/gcloud"
  args: ['run', 'deploy', 'tpd-front', '--image', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '--region', 'us-central1', '--platform', 'managed', '--allow-unauthenticated']
GitLab build output:
ERROR: (gcloud.builds.submit)
The build is running, and logs are being written to the default logs bucket.
This tool can only stream logs if you are Viewer/Owner of the project and, if applicable, allowed by your VPC-SC security policy.
The default logs bucket is always outside any VPC-SC security perimeter.
If you want your logs saved inside your VPC-SC perimeter, use your own bucket.
See https://cloud.google.com/build/docs/securing-builds/store-manage-build-logs.
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
I fixed it by adding:
options:
  logging: CLOUD_LOGGING_ONLY
to cloudbuild.yaml.
There is also this workaround:
fix it by giving the Viewer role to the service account running the build, but this feels like granting too much permission for such a task.
This worked for me: Use --suppress-logs
gcloud builds submit --suppress-logs --tag=<my-tag>
To fix the issue, you just need to create a bucket in your project (by default, without public access) and grant the 'Storage Admin' role to your user or service account via https://console.cloud.google.com/iam-admin/iam
After that, you can point gcloud builds submit at the new bucket via the parameter --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE, like this:
gcloud builds submit --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE ...(other parameters here)
We need a new bucket because the default logs bucket is global (cross-project). That's why it has specific security requirements for access, especially from outside Google Cloud (e.g. GitLab, Azure DevOps, etc.) via service accounts.
(Moreover, in this case you don't need to turn off logging via --suppress-logs.)
Kevin's answer worked like magic for me; since I am not able to comment, I am writing this new answer.
Initially I was facing the same issue where, despite the gcloud builds submit command passing, my GitLab CI was failing.
Below is the cloudbuild.yaml file where I added the logging option as Kevin suggested.
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args: ['run_query.sh', '${_SCRIPT_NAME}']
options:
  logging: CLOUD_LOGGING_ONLY
Check this document for details: https://cloud.google.com/build/docs/build-config-file-schema#options
The options solution, as mentioned by Kevin, worked for me too. Just add the parameter as mentioned above to the cloudbuild.yml file.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
options:
  logging: CLOUD_LOGGING_ONLY

Google cloud build - custom machine type

I'm using the Google Cloud Build service to create images of my application. I created a build trigger that looks for a git tag in a specific format. Each time that Cloud Build detects a new tag, a new build is performed.
Since the build time is pretty long, I am trying to make it faster.
I found that it's possible to ask Google to build the application on a faster machine (Source).
gcloud builds submit --config=cloudbuild.yaml --machine-type=n1-highcpu-8 .
This code works if you choose the manual build option. Since I created the build trigger from the GCP user interface, I can't find any place to define the machine-type argument.
How can I choose the machine-type on automatic build triggers?
UPDATE:
In the Trigger window, I chose Build Configuration = Dockerfile, and this is my Dockerfile preview:
docker build \
-t gcr.io/PROJ_NAME/APP_NAME/$TAG_NAME:$COMMIT_SHA \
-f deployments/docker/APPNAME.docker \
.
What should my buildconfig.yaml file look like?
You need to change to Build Configuration=Cloud Build configuration file, and commit the cloudbuild.yaml to git.
Then use the machineType field in the options property of your cloudbuild.yaml file.
E.g
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/PROJ_NAME/APP_NAME/$TAG_NAME:$COMMIT_SHA', '-f', 'deployments/docker/APPNAME.docker', '.']
options:
  machineType: 'N1_HIGHCPU_8'

GCP cloudbuild.yaml: kmsKeyName requires hardcoded value. How can we adapt for separate environments?

We have two separate GCP projects (one for dev and one for prod). We are using Cloud Build to deploy our project by utilizing repo mirroring and a Cloud Build trigger that fires whenever the dev or prod branch is updated. The cloudbuild.yaml file looks like this:
steps:
# Firestore security rules deploy
- name: "gcr.io/$PROJECT_ID/firebase"
  args: ["deploy", "--only", "firestore:rules"]
  secretEnv: ['FIREBASE_TOKEN']
# Firestore indexes deploy
- name: "gcr.io/$PROJECT_ID/firebase"
  args: ["deploy", "--only", "firestore:indexes"]
  secretEnv: ['FIREBASE_TOKEN']
secrets:
- kmsKeyName: 'projects/my-dev-project/locations/global/keyRings/ci-ring/cryptoKeys/deployment'
  secretEnv:
    FIREBASE_TOKEN: 'myreallylongtokenstring'
timeout: "1600s"
The problem we have is that the kmsKeyName apparently needs to be hardcoded in order for GCP to read it, meaning we can't do something like this:
secrets:
- kmsKeyName: 'projects/$PROJECT_ID/locations/global/keyRings/ci-ring/cryptoKeys/deployment'
  secretEnv:
    FIREBASE_TOKEN: 'myreallylongtokenstring'
This does not lend itself well to a continuous-deployment process like the one we are using since we'd like that kmsKeyName string to be dynamically set with the relevant project-id value depending on the dev or prod environment we are deploying to.
Is there a way around this that would allow us to dynamically specify the kmsKeyName?
Update:
We have found a quick/dirty solution which was to create individual cloudbuild.yaml files: one for dev (cloudbuild-dev.yaml) and one for prod (cloudbuild-prod.yaml). Each cloudbuild file is identical except for the last part where we specify our hardcoded "secrets" info.
Explanation: GCP Cloud Build relies on individual triggers for each environment's build, and each trigger can be configured to point at a specific cloudbuild YAML file, which is what we have done. The dev build trigger points at cloudbuild-dev.yaml, and the production trigger points at cloudbuild-prod.yaml.
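For reference, triggers pointing at different config files can also be created from the CLI; a rough sketch (the repo name and branch pattern below are placeholders):

# Create a dev trigger that uses the dev-specific build config
gcloud beta builds triggers create cloud-source-repositories \
  --repo=my-repo \
  --branch-pattern='^dev$' \
  --build-config=cloudbuild-dev.yaml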
Indeed, I tried different configurations: with single quotes, double quotes, without quotes, with substitution variables, ...
The boring solution is to use the manual decoding as described here. But with it you can use variables and substitution variables as you want.
The boring part is that you have to inject the secret into each step which requires it, like this (for example as an environment variable):
- name: "gcr.io/$PROJECT_ID/firebase"
entrypoint: "bach"
args:
- "-c"
- "export FIREBASE_TOKEN=$(cat secrets.json) && firebase deploy --only firestore:rules"
I don't know other workaround
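To make that concrete, here is a minimal sketch of the manual-decode approach, assuming the encrypted token is committed as secrets.json.enc (that file name is an assumption) and reusing the key ring and key from the question; $PROJECT_ID keeps the key reference dynamic per environment:

steps:
# Decrypt the Firebase token with the current project's own KMS key
- name: "gcr.io/cloud-builders/gcloud"
  args:
    - kms
    - decrypt
    - --ciphertext-file=secrets.json.enc
    - --plaintext-file=secrets.json
    - --location=global
    - --keyring=ci-ring
    - --key=deployment
    - --project=$PROJECT_ID
# Use the decrypted token in the deploy step
- name: "gcr.io/$PROJECT_ID/firebase"
  entrypoint: "bash"
  args:
    - "-c"
    - "export FIREBASE_TOKEN=$(cat secrets.json) && firebase deploy --only firestore:rules"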