Overwrite Cloud Build repository URL when triggering a push - google-cloud-platform

In GCP Cloud Build, I have a Bitbucket Data Center repository connected that uses a proxy: https://bitbucket-sv3.mydomain.com. However, the repo actually lives at https://bitbucket.mydomain.com.
When I push to the repo, my trigger, which uses the connected repo, starts a build that tries to clone from the second domain, which the worker pool can't reach, and I get a connection error. I assume the webhook created by gcloud simply sends the server's address and the Cloud Build machine uses it from the payload. Is there some way I can force the build to use the sv3 address when it clones the source repo during the Fetch Source step?
Here's an example of my cloudbuild.yaml showing some of the things I've tried:
steps:
- name: gcr.io/cloud-builders/git
  env:
  - "_URL='https://bitbucket-sv3.mydomain.com/reponame.git'"
  - "_HEAD_REPO_URL='https://bitbucket-sv3.mydomain.com/reponame.git'"
  args: ['clone', 'https://bitbucket-sv3.mydomain.com/reponame.git']
An alternative would be to stop the FETCHSOURCE step, but I haven't found a way to do that.
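One workaround I'm considering, sketched below, is a generic webhook trigger with an inline build config: those builds start without a Fetch Source step, so the first step could clone the proxy address directly. This is an untested sketch; authentication to Bitbucket is omitted and would need to be handled separately (e.g. via a secret):
steps:
# Clone manually from the reachable sv3 proxy instead of relying on Fetch Source
- name: gcr.io/cloud-builders/git
  args: ['clone', 'https://bitbucket-sv3.mydomain.com/reponame.git', '.']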

Related

--source in cloudbuild.yaml

I have my current Cloud Build working: I connect my GitHub repo to trigger the Cloud Build when I push to the main branch, which then creates my Cloud Function, but I am confused about the --source flag. I have read the Google Cloud Functions docs. They state that the minimal source repository URL is: https://source.developers.google.com/projects/${PROJECT}/repos/${REPO}. If I were to input this into my cloudbuild.yaml file, does this mean that I am mimicking the complete path of my GitHub URL? I am currently just using . which I believe is just the entire root directory.
my cloudbuild.yaml file:
steps:
- name: "gcr.io/cloud-builders/gcloud"
  id: "deploypokedex"
  args:
  - functions
  - deploy
  - my_pokedex_function
  - --source=.
  - --entry-point=get_pokemon
  - --trigger-topic=pokedex
  - --timeout=540s
  - --runtime=python39
  - --region=us-central1
Yes, you are effectively mimicking the complete path of the GitHub URL. --source=. means you are deploying the source code in your current working directory, which in a triggered build is the checked-out copy of your repo. You can check the Cloud Functions documentation on how to configure the Cloud Build deployment.
Also based on the documentation you provided,
If you do not specify the --source flag:
The current directory will be used for new function deployments.
If the function was previously deployed using a local filesystem path, then the function's source code will be updated using the current directory.
If the function was previously deployed using a Google Cloud Storage location or a source repository, then the function's source code will not be updated.
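If you did want to deploy from a repository URL rather than the local checkout, you could pass that URL to --source instead. A sketch, using a hypothetical Cloud Source Repository named pokedex-repo and the URL form from the docs you quoted:
steps:
- name: "gcr.io/cloud-builders/gcloud"
  id: "deploypokedex"
  args:
  - functions
  - deploy
  - my_pokedex_function
  # Deploy from a Cloud Source Repository instead of the current directory;
  # pokedex-repo is a placeholder name.
  - --source=https://source.developers.google.com/projects/${PROJECT_ID}/repos/pokedex-repo
  - --entry-point=get_pokemon
  - --trigger-topic=pokedex
  - --runtime=python39
  - --region=us-central1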
Let me know if you have questions or clarifications.

How can I configure Amplify to create a new backend environment for each branch?

I'm trying to configure Amplify to deploy every branch or PR on my GitHub repo to a new environment.
Using Previews:
Pull Request Previews is enabled
The configuration for the backend is "Create new backend environment for every Pull Request"
But every build skips the backend step with the message "No backend environment association found, continuing...", and because of that the frontend build fails, since it requires the aws-exports file that should be generated in the backend stage.
The same occurs with Branch autodetection (with the option "Create new backend environment for every connected branch" selected).
I'm opening this question here because I couldn't get any answer from AWS on their repo.
We were recently facing the same issue and discovered that, in order to automatically create new backend environments for Previews or Branch autodetection, you need to add a backend build step to the Build settings in Amplify (amplify.yml).
It can be found in the Amplify documentation, and in its simplest form it should look like this:
backend:
  phases:
    build:
      commands:
        - amplifyPush --simple
It is not well documented or obvious; we stumbled across the solution by chance.
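For context, a fuller amplify.yml of the same shape might look like the sketch below (the npm commands and baseDirectory are placeholders for whatever your app actually uses; amplifyPush is what generates the aws-exports file the frontend build needs):
version: 1
backend:
  phases:
    build:
      commands:
        # Provisions the backend environment and generates aws-exports
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'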

Google Cloud Run correctly running continuous deployment to github, but not updating when deployed

I've set up a Google Cloud Run service with continuous deployment to a GitHub repo, and it redeploys every time there's a push to main (what I want), but when I go to check the site, it hasn't updated the HTML I've been testing with. I've tested it on my local machine, and it's updating the code when I run the Django server, so I'm guessing it's something with my cloudbuild.yml? There was another post I tried to mimic, but it didn't take.
Any advice would be very helpful! Thank you!
cloudbuild.yml:
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/${PROJECT_ID}/exeplore', './ExePlore']
# Push the image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/${PROJECT_ID}/exeplore']
# Deploy image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - 'run'
  - 'deploy'
  - 'exeplore'
  - '--image'
  - 'gcr.io/${PROJECT_ID}/exeplore'
  - '--region'
  - 'europe-west2'
  - '--platform'
  - 'managed'
images:
- gcr.io/${PROJECT_ID}/exeplore
Here are the variables for GCR (screenshot omitted).
Edit 1: I've now updated my cloudbuild, so the SHORT_SHA is all gone, but now Cloud Run is saying it can't find my manage.py at /Exeplore/manage.py. I might have to trial-and-error it, as running the container locally is fine, and the same goes for running the server locally. I have yet to try what Ezekias suggested, as I've tried rolling back to when it was correctly running the server, and it doesn't like that.
Edit 2: I've checked the services, it is at 100% Latest
Check your Cloud Run service, either in the Cloud Console or by running gcloud run services describe SERVICE. It may be set to serve traffic to a specific revision instead of serving 100% of traffic from LATEST.
If that's the case, it won't automatically move traffic to the new revision when you deploy. If you want it to switch to the new revision automatically, you can run gcloud run services update-traffic SERVICE --to-latest or use the "Manage Traffic" button on the Revisions tab of the Cloud Console to send 100% of traffic to the latest healthy revision.
It looks like you're building gcr.io/${PROJECT_ID}/exeplore:$SHORT_SHA but pushing and deploying gcr.io/${PROJECT_ID}/exeplore. Those are two different image references: the second one implicitly resolves to the :latest tag, which your build never updates.
Update every image reference to include the SHORT_SHA so they all point at the same tag.
To avoid duplication you may also want to use dynamic substitution variables.
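For illustration, a version of the cloudbuild.yml from the question with every reference tagged consistently might look like this (a sketch; $SHORT_SHA is only populated when the build is started by a trigger):
steps:
# Build the container image, tagged with the commit SHA
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}', './ExePlore']
# Push that same tag to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}']
# Deploy that same tag to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - 'run'
  - 'deploy'
  - 'exeplore'
  - '--image'
  - 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}'
  - '--region'
  - 'europe-west2'
  - '--platform'
  - 'managed'
images:
- 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}'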

How to solve insufficient authentication scopes when use Pubsub on GCP

I'm trying to build 2 microservices (in Java Spring Boot) to communicate with each other using GCP Pub/Sub.
First, I tested the programs (in Eclipse) working as expected on my local laptop (http://localhost), i.e. one microservice published the message and the other received it successfully, using the Topic/Subscriber created in GCP (as well as the credential private key: mypubsub.json).
Then, I deployed the same programs to run on GCP and got the following errors:
- 2020-03-21 15:53:16.831 WARN 1 --- [bsub-publisher2] o.s.c.g.p.c.p.PubSubPublisherTemplate : Publishing to json-payload-sample-topic topic failed
- com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes. at com.google.api.gax.rpc.ApiExceptionFactory
What I did to deploy the programs (in containers) to run on GCP/Kubernetes Engine:
Log in to the Cloud Shell after switching to my project for the Pub/Sub testing
Git clone my programs which had been tested in Eclipse
Move the mypubsub.json file under /home/my_user_id
export GOOGLE_APPLICATION_CREDENTIALS="/home/my_user_id/mp6key.json"
Run 'mvn clean package' to build the microservice programs
Run 'docker build' to create the image files
Run 'docker push' to push the image files to the gcr.io repo
Run 'kubectl create' to create the deployments and expose the services
Once the 2 microservices were deployed and exposed, when I tried to access them in a browser, the one publishing a message worked fine retrieving data from the database and processing it, then failed with the above errors when trying to access the GCP Pub/Sub API to publish the message.
Could anyone provide a hint for what to check to solve the issue?
The issue has been resolved by following this guide:
https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
Briefly, the solution is to add the following lines to the deployment.yaml to mount the credential key from a Kubernetes secret (created beforehand, e.g. with kubectl create secret generic pubsub-key --from-file=key.json=PATH-TO-KEY-FILE.json):
volumes:
- name: google-cloud-key
  secret:
    secretName: pubsub-key
containers:
- name: my_container
  image: gcr.io/my_image_file
  volumeMounts:
  - name: google-cloud-key
    mountPath: /var/secrets/google
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/secrets/google/key.json
Try explicitly providing a CredentialsProvider to your Publisher; I faced the same authentication issue, and this approach worked for me:
import com.google.api.gax.core.CredentialsProvider;
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.pubsub.v1.Publisher;

// Load the service account key bundled on the classpath and hand it to the Publisher
CredentialsProvider credentialsProvider = FixedCredentialsProvider.create(
    ServiceAccountCredentials.fromStream(
        PubSubUtil.class.getClassLoader().getResourceAsStream("key.json")));
Publisher publisher = Publisher.newBuilder(topicName)
    .setCredentialsProvider(credentialsProvider)
    .build();

How to access a GCP Cloud Source Repository from another project?

I have project A and project B.
I use a GCP Cloud Source Repository on project A as my 'origin' remote.
I use Cloud Build with a trigger on changes to the 'develop' branch of the repo to trigger builds. As part of the build I deploy some stuff with the gcloud builder to project A.
Now, I want to run the same build on project B, maybe on the same branch, maybe on a different one (i.e. 'release-*'). In the end I want to deploy some stuff with the gcloud builder to project B.
The problem is, when I'm on project B (in Google Cloud Console), I can't even see the repo in project A. It asks me to "connect repository", but I can only select GitHub or Bitbucket repos for mirroring. The option "Cloud Source Repositories" is greyed out, telling me that they "are already connected". Just evidently not one from another project.
I could set up a new repo on project B and push to both repos, but that seems inefficient (and likely not sustainable long term). The curious thing is that such a setup could easily be achieved using an external Bitbucket/GitHub repo as origin, mirrored in both projects.
Is anything like this at all possible in Google Cloud Platform without external dependencies?
I also tried running all my builds in project A with a separate trigger that deploys to project B (I use substitutions to manage that), but it fails with permission issues. Cloud Build always runs as a Cloud Build service account, whose roles you can manage, but I can't see how I could give it access to another project. In this case both builds would also appear indistinguishably in a single build history, which is not ideal.
I faced a similar problem and solved it by having multiple Cloud Build files.
One Cloud Build file (triggered when code was pushed to a certain branch) was dedicated to copying all of my source code into the other project's source repo, which in turn has its own Cloud Build file for deployment to that project.
Here is a sample of the Cloud Build file that copies sources to another project:
steps:
- name: gcr.io/cloud-builders/git
  args: ['checkout', '--orphan', 'temp']
- name: gcr.io/cloud-builders/git
  args: ['add', '-A']
- name: gcr.io/cloud-builders/git
  args: ['config', '--global', 'user.name', 'Your Name']
- name: gcr.io/cloud-builders/git
  args: ['config', '--global', 'user.email', 'Your Email']
- name: gcr.io/cloud-builders/git
  args: ['commit', '-am', 'latest production commit']
- name: gcr.io/cloud-builders/git
  args: ['branch', '-D', 'master']
- name: gcr.io/cloud-builders/git
  args: ['branch', '-m', 'master']
- name: gcr.io/cloud-builders/git
  args: ['push', '-f', 'https://source.developers.google.com/p/project-prod/r/project-repo', 'master']
This pushes all of the source code into the other project's repo.
Note: you need to give your Cloud Build service account permission to push to the other project's source repositories (e.g. the Source Repository Writer role, roles/source.writer, granted in that project).
As you have already said, you can host your repos externally on Bitbucket/GitHub and mirror them into each project, but you may pay extra for each build.
Otherwise, you could use a third-party service to build your repos externally and deploy the result wherever you want; for example, look into CircleCI or a similar service.
You could also give the build permissions to reference resources from another project, but I would keep the projects separated to minimize complexity.
My solution:
From project A, create a new Cloud Build trigger on branch release-* whose build configuration sets $_PROJECT_ID to project B's ID.
In the GCP Cloud Build trigger definition, add a new substitution variable _PROJECT_ID whose value is project B's ID.
NOTE: Remember to grant permissions on project B to the Cloud Build service account of project A (<project-number>@cloudbuild.gserviceaccount.com).
cloudbuild.yaml
steps:
- name: gcr.io/cloud-builders/docker
  args:
  - build
  - '--no-cache'
  - '-t'
  - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - .
  - '-f'
  - Dockerfile
  id: Build
- name: gcr.io/cloud-builders/docker
  args:
  - push
  - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  id: Push
- name: gcr.io/cloud-builders/gcloud
  args:
  - beta
  - run
  - deploy
  - $_SERVICE_NAME
  - '--platform=managed'
  - '--image=$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - >-
    --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
  - '--region=$_DEPLOY_REGION'
  - '--quiet'
  - '--project=$_PROJECT_ID'
  id: Deploy
  entrypoint: gcloud
images:
- '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
timeout: '20m'
tags:
- gcp-cloud-build-deploy-cloud-run
- gcp-cloud-build-deploy-cloud-run-managed
- driveit-hp-agreement-mngt-api
Unfortunately Google doesn't seem to provide that functionality within Source Repositories (would rock if you could).
An alternative option you could consider (though involves external dependencies) is to mirror your Source Repositories first to GitHub or Bitbucket, then mirror back again into Source Repositories. That way, any changes made to any mirror of the repository will sync. (i.e. a change pushed in Project B will sync with Bitbucket, and likewise in Project A)
EDIT
To illustrate my alternative solution, here is a simple diagram (omitted here): both projects' Source Repositories mirror the same external GitHub/Bitbucket repository, so a push to either project syncs to the other.