Migrating Google Cloud Functions from one project to another in GCP? - google-cloud-platform

I have some Cloud Functions. I deploy these functions from a source repository using a cloudbuild.yaml and Cloud Build triggers.
The cloudbuild.yaml is:
steps:
- name: 'python:3.7'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    pip3 install -r requirements.txt
    pytest -rP
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - functions
  - deploy
  - Function_Name
  - --runtime=python37
  - --source=https://source.developers.google.com/projects/{}/repos/{}/moveable-aliases/master/paths/{}
  - --entry-point=main
  - --trigger-topic=TOPIC_NAME
  - --region=REGION
Now I would like to move this Cloud Function from this project to another (Project A to Project B).
Since I am not defining my project_id here, where does the deployment get the project ID from? From the service account?
How can I efficiently move this Cloud Function from Repository A to Repository B, and deploy it to Project B?

When you run your Cloud Build, some variables are set automatically, such as the current project; this current project is used by default as the deployment target for the function.
To keep the current behavior while adding the capability to target another project, you can do this:
Define a substitution variable, for example _TARGET_PROJECT_ID
Assign it the current project by default
...
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - functions
  - deploy
  - Function_Name
  - --runtime=python37
  - --source=https://source.developers.google.com/projects/{}/repos/{}/moveable-aliases/master/paths/{}
  - --entry-point=main
  - --trigger-topic=TOPIC_NAME
  - --region=REGION
  - --project=$_TARGET_PROJECT_ID
substitutions:
  _TARGET_PROJECT_ID: $PROJECT_ID
Now, in your trigger (a new one?) or when you run your Cloud Build manually, specify the new project ID if you want. If not, the current behavior (deployment to the current project) continues.
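For example, when running the build manually, you can override the substitution to target Project B (a sketch; project-b-id stands in for the real project ID):
gcloud builds submit --config=cloudbuild.yaml \
  --substitutions=_TARGET_PROJECT_ID=project-b-id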

Related

Access Environment Variables through vite-react and GCP Cloud Build

I have a React application that is Dockerized and built and deployed with Google Cloud Build. I have set environment variables in Cloud Build, but I am unable to access them within my React application. What am I doing wrong, and how can I access these environment variables in my React application? This is the yaml file:
steps:
- name: gcr.io/cloud-builders/docker
  env:
  - "VITE_PUBLIC_KEY=${_VITE_PUBLIC_KEY}"
  - "VITE_SERVICE_ID=${_VITE_SERVICE_ID}"
  - "VITE_TEMPLATE_ID=${_VITE_TEMPLATE_ID}"
  args:
  - build
  - '--no-cache'
  - '-t'
  - 'image_name'
  - .
  - '-f'
  - Dockerfile.prod
- name: gcr.io/cloud-builders/docker
  args:
  - push
  - 'image_name'
- name: gcr.io/cloud-builders/gcloud
  args:
  - run
  - deploy
  - bob
  - '--image'
  - 'image_name'
  - '--region'
  - $_DEPLOY_REGION
  - '--allow-unauthenticated'
  - '--platform'
  - $_PLATFORM
timeout: 600s
I don't have a backend; I just want to be able to access three environment variables within my application on the client side, without declaring a .env file.
I tried declaring the environment variables in Cloud Run as well as in the cloudbuild.yaml file. It works on AWS, but a different problem arises there.
One solution could be to hardcode the environment variables directly into your React code. This is not recommended as it could lead to security vulnerabilities and make it difficult to change the values in the future without re-deploying the entire application. Additionally, it would not be a true solution to accessing the environment variables as they would not be dynamically set at runtime.
There are different options; the best one (in my opinion) is to store your env variables in Secret Manager and then access them through your code (check this GitHub repo).
. . .
The other option is to access, while your pipeline is running, the same secrets that you created before; the downside is that you always have to redeploy your pipeline to update the env variables.
This is an example:
steps:
# STEP 1 - BUILD CONTAINER 1
- id: Build-container-image-container-one
  name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    docker build -t gcr.io/$PROJECT_ID/container_one -f 'build/container_one.Dockerfile' .
# STEP 2 - PUSH CONTAINER 1
- id: Push-to-Container-Registry-container-one
  name: 'gcr.io/cloud-builders/docker'
  args:
  - push
  - gcr.io/$PROJECT_ID/container_one
  waitFor: ["Build-container-image-container-one"]
# STEP 3 - DEPLOY CONTAINER 1
- id: Deploy-Cloud-Run-container-one
  name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
  - run
  - deploy
  - container_one
  - --image=gcr.io/$PROJECT_ID/container_one
  - --region={YOUR REGION}
  - --port={YOUR PORT}
  - --memory=3Gi
  - --cpu=1
  waitFor: ["Build-container-image-container-one", "Push-to-Container-Registry-container-one"]
# STEP 4 - ENV VARIABLES
- id: Accessing-secrets-for-env-variables
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    gcloud secrets versions access latest --secret=ENV_VARIABLE_ONE > key1.txt
    gcloud secrets versions access latest --secret=ENV_VARIABLE_TWO > key2.txt
    gcloud secrets versions access latest --secret=ENV_VARIABLE_THREE > key3.txt
  waitFor: ["Push-to-Container-Registry-container-one", "Build-container-image-container-one"]
# STEP 5 - SETTING KEYS
- id: Setting-keys
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'gcloud run services update container_one --region={YOUR REGION} --set-env-vars="ENV_VARIABLE_ONE=$(cat key1.txt), ENV_VARIABLE_TWO=$(cat key2.txt), ENV_VARIABLE_THREE=$(cat key3.txt)"']
images:
- gcr.io/$PROJECT_ID/container_one
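Note that the secrets referenced in step 4 must already exist; a minimal sketch of creating one with the gcloud CLI (the name and value here are placeholders):
echo -n "first-env-value" | gcloud secrets create ENV_VARIABLE_ONE --data-file=-
The Cloud Build service account also needs the Secret Manager Secret Accessor role on those secrets to be able to read them.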

Local packages not loading to GCP python functions with github actions

I am trying to deploy a GCP function. My code uses a package that's in a private repository. I create a local copy of that package in the function's folder, and then run gcloud functions deploy from that folder to deploy the function.
This works well: I can see a deployed function that includes the local packages.
The problem is with using GitHub Actions to deploy the function.
The function is part of a repository that has multiple functions, so when I deploy, I run GitHub Actions from outside the function's folder. While the function gets deployed, the dependencies do not get picked up.
For example, this is my folder structure:
my_repo
- .github/
  - workflows/
    - function_deploy.yaml
- function_1_folder
  - main.py
  - requirements.txt
  - .gcloudignore
  - localpackages/ --> These are the packages I need uploaded to GCP
My function_deploy.yaml looks like:
name: Build and Deploy to GCP functions
on:
  push:
    paths:
    - 'function_1_folder/**.py'
env:
  PROJECT_ID: <project_id>
jobs:
  job_id:
    runs-on: ubuntu-latest
    permissions:
      contents: 'read'
      id-token: 'write'
    steps:
    - uses: 'actions/checkout@v3'
    - id: 'auth'
      uses: 'google-github-actions/auth@v0'
      with:
        credentials_json: <credentials>
    - id: 'deploy'
      uses: 'google-github-actions/deploy-cloud-functions@v0'
      with:
        name: <function_name>
        runtime: 'python38'
        region: <region>
        event_trigger_resource: <trigger_resource>
        entry_point: 'main'
        event_trigger_type: <pubsub>
        memory_mb: <size>
        source_dir: function_1_folder/
The Google Cloud function does get deployed, but it fails with:
google-github-actions/deploy-cloud-functions failed with: operation failed: Function failed on loading user code. This is likely due to a bug in the user code. Error message: please examine your function logs to see the error cause...
When I look at the Google Cloud function, I see that the localpackages folder hasn't been uploaded to GCP.
When I deploy from my local machine, however, it does upload the localpackages.
Any suggestions on what I may be doing incorrectly, and how to upload the localpackages?
I looked at this question:
Github action deploy-cloud-functions not building in dependencies?
But I didn't quite understand what was done there.

Is there a way to inspect the process.env variables on a cloud run service?

After deployment, is there a way to inspect the process.env variables on a running Cloud Run service?
I thought they would be available on the following page:
https://console.cloud.google.com/run/detail
Is there a way to make them available there? Or to inspect them in some other way?
PS: This is a Docker container.
I have the following ENV directives in my Dockerfile, and I know they are present, because everything is working as it should. But I cannot see them in the service details:
Dockerfile
# build args passed in with --build-arg; the ARG declarations are needed
# for the ENV lines below to pick them up
ARG PROJECT_ID
ARG SERVER_ENV
ENV NODE_ENV=production
ENV PROJECT_ID=$PROJECT_ID
ENV SERVER_ENV=$SERVER_ENV
I'm using a cloudbuild.yaml file. The ENV directives are present in my Dockerfile, and they are being passed to my container. Maybe I should add env to my cloudbuild.yaml file? I'm using --substitutions on my gcloud builds submit call, and they are passed as --build-arg to my Docker build step, but I'm not declaring them as env in my cloudbuild.yaml.
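For reference, a build step of that shape might look like the following (a sketch; the substitution and image names are assumptions based on the description above):
- name: "gcr.io/cloud-builders/docker"
  args:
  - build
  - "--build-arg=PROJECT_ID=$PROJECT_ID"
  - "--build-arg=SERVER_ENV=$_SERVER_ENV"
  - "-t"
  - "gcr.io/$PROJECT_ID/SERVICE_NAME"
  - "."
Build args set this way are only visible while the image is built; they are not runtime environment variables of the Cloud Run revision unless the Dockerfile copies them into ENV, which is why they don't show up in the service details.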
I followed the official documentation and set the environment variables on a Cloud Run service using the console. Then I was able to list them in the Google Cloud Console.
You can set environment variables using the Cloud Console, the gcloud command line, or a YAML file when you create a new service or deploy a new revision:
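For example, with the gcloud CLI (service name and region are placeholders):
gcloud run services update SERVICE_NAME \
  --region=us-central1 \
  --set-env-vars=NODE_ENV=production
The variables currently set on a service can be inspected from the command line too, which addresses the original question (a sketch; the projection path assumes the managed platform's Knative-style service YAML):
gcloud run services describe SERVICE_NAME \
  --region=us-central1 \
  --format="yaml(spec.template.spec.containers[0].env)"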
With the help of @marian.vladoi's answer, this is what I ended up doing.
In my deploy step in the cloudbuild.yaml file, I added the --set-env-vars parameter:
steps:
# DEPLOY CONTAINER WITH GCLOUD
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
  entrypoint: gcloud
  args:
  - "beta"
  - "run"
  - "deploy"
  - "SERVICE_NAME"
  - "--image=gcr.io/$PROJECT_ID/SERVICE_NAME:$_TAG_NAME"
  - "--platform=managed"
  - "--region=us-central1"
  - "--min-instances=$_MIN_INSTANCES"
  - "--max-instances=3"
  - "--set-env-vars=PROJECT_ID=$PROJECT_ID,SERVER_ENV=$_SERVER_ENV,NODE_ENV=production"
  - "--port=8080"
  - "--allow-unauthenticated"
  timeout: 180s

env step parameter in cloudbuild.yaml file not settings environment variable

My cloudbuild.yaml file looks like:
steps:
# build the container image
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/$PROJECT_ID/backend:$COMMIT_SHA", "."]
  env:
  - "APP_ENV=production"
# push the container image to Container Registry
- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/$PROJECT_ID/backend:$COMMIT_SHA"]
# Deploy container image to Cloud Run
- name: "gcr.io/cloud-builders/gcloud"
  args:
  - "run"
  - "deploy"
  - "backend"
  - "--image"
  - "gcr.io/$PROJECT_ID/backend:$COMMIT_SHA"
  - "--region"
  - "us-central1"
  - "--platform"
  - "managed"
images:
- "gcr.io/$PROJECT_ID/backend:$COMMIT_SHA"
and it builds and deploys a new container to Cloud Run; however, it doesn't set the APP_ENV environment variable to "production". Why is that, and how do I get it to?
I am following this guide.
The
steps:
- env: [...]
approach sets environment variables for the Cloud Build container that runs the docker build -t command, so in this case only the docker build it executes gets the APP_ENV variable (and probably doesn't do anything with it).
You should not expect this to set environment variables for Cloud Run. For that to work, you need to pass --set-env-vars or --update-env-vars to Cloud Run in the gcloud run deploy step, by adding args like:
- name: "gcr.io/cloud-builders/gcloud"
args:
- "run"
- "deploy"
...
- "--set-env-vars=KEY1=VALUE1"
- "--set-env-vars=KEY2=VALUE2"
...
See https://cloud.google.com/run/docs/configuring/environment-variables#command-line to learn more or read this article about alternative ways of specifying environment variables for Cloud Run applications.
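If the variable is instead needed while the image is being built, one option (a sketch, assuming the Dockerfile declares a matching ARG APP_ENV) is to pass it as a Docker build argument rather than a step env:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "--build-arg", "APP_ENV=production", "-t", "gcr.io/$PROJECT_ID/backend:$COMMIT_SHA", "."]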

How to access a GCP Cloud Source Repository from another project?

I have project A and project B.
I use a GCP Cloud Source Repository on project A as my 'origin' remote.
I use Cloud Build with a trigger on changes to the 'develop' branch of the repo. As part of the build, I deploy some stuff with the gcloud builder to project A.
Now I want to run the same build on project B. Maybe the same branch, maybe a different branch (e.g. 'release-*'). In the end I want to deploy some stuff with the gcloud builder to project B.
The problem is, when I'm on project B (in the Google Cloud Console), I can't even see the repo in project A. It asks me to "connect repository", but I can only select GitHub or Bitbucket repos for mirroring. The option "Cloud Source Repositories" is greyed out, telling me that they "are already connected". Just evidently not one from another project.
I could set up a new repo on project B and push to both repos, but that seems inefficient (and likely not sustainable long term). The curious thing is that such a setup could easily be achieved using an external Bitbucket/GitHub repo as origin, mirrored in both projects.
Is anything like this at all possible in Google Cloud Platform without external dependencies?
I also tried running all my builds in project A with a separate trigger that deploys to project B (I use substitutions to manage that), but it fails with permission issues. Cloud Build always seems to run as a Cloud Build service account, whose roles you can manage, but I can't see how I could give it access to another project. Also, in this case both builds would appear indistinguishable in a single build history, which is not ideal.
I faced a similar problem and solved it by having multiple Cloud Build files.
One Cloud Build file (triggered when code was pushed to a certain branch) was dedicated to copying all of my source code into the other project's source repo, which in turn has its own Cloud Build file for deployment to that project.
Here is a sample of the Cloud Build file that copies sources to another project:
steps:
- name: gcr.io/cloud-builders/git
  args: ['checkout', '--orphan', 'temp']
- name: gcr.io/cloud-builders/git
  args: ['add', '-A']
- name: gcr.io/cloud-builders/git
  args: ['config', '--global', 'user.name', 'Your Name']
- name: gcr.io/cloud-builders/git
  args: ['config', '--global', 'user.email', 'Your Email']
- name: gcr.io/cloud-builders/git
  args: ['commit', '-am', 'latest production commit']
- name: gcr.io/cloud-builders/git
  args: ['branch', '-D', 'master']
- name: gcr.io/cloud-builders/git
  args: ['branch', '-m', 'master']
- name: gcr.io/cloud-builders/git
  args: ['push', '-f', 'https://source.developers.google.com/p/project-prod/r/project-repo', 'master']
This pushes all of the source code into the new project.
Note: you need to give your Cloud Build service account permission to push source code into the other project's source repositories.
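A minimal sketch of such a grant with the gcloud CLI (project-b-id and PROJECT_A_NUMBER are placeholders; roles/source.writer allows pushing to repositories):
gcloud projects add-iam-policy-binding project-b-id \
  --member="serviceAccount:PROJECT_A_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/source.writer"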
As you have already said, you can host your repos externally on Bitbucket/GitHub and sync them to each project, but you may need to pay extra for each build.
Otherwise, you could use third-party services to build your repos externally and deploy the result wherever you want; for example, look into CircleCI or a similar service.
You could give the build permissions to refer to resources from another project, but I would keep the projects separated to minimize complexity.
My solution:
In project A, create a new Cloud Build trigger on branch release-* whose build configuration deploys to $_PROJECT_ID, set to project B's ID.
In the GCP Cloud Build trigger definition, add a new substitution variable named _PROJECT_ID whose value is project B's ID.
NOTE: Remember to grant permissions on project B to project A's Cloud Build service account (...@cloudbuild.gserviceaccount.com).
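The exact roles depend on what the build deploys; for a Cloud Run deployment like the one below, a sketch of the kind of grants involved (project IDs and numbers are placeholders):
# Let project A's Cloud Build service account deploy Cloud Run services in project B
gcloud projects add-iam-policy-binding project-b-id \
  --member="serviceAccount:PROJECT_A_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/run.admin"
# Let it act as project B's runtime service account
gcloud iam service-accounts add-iam-policy-binding \
  PROJECT_B_NUMBER-compute@developer.gserviceaccount.com \
  --member="serviceAccount:PROJECT_A_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"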
cloudbuild.yaml
steps:
- name: gcr.io/cloud-builders/docker
  args:
  - build
  - '--no-cache'
  - '-t'
  - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - .
  - '-f'
  - Dockerfile
  id: Build
- name: gcr.io/cloud-builders/docker
  args:
  - push
  - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  id: Push
- name: gcr.io/cloud-builders/gcloud
  entrypoint: gcloud
  args:
  - beta
  - run
  - deploy
  - $_SERVICE_NAME
  - '--platform=managed'
  - '--image=$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
  - >-
    --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
  - '--region=$_DEPLOY_REGION'
  - '--quiet'
  - '--project=$_PROJECT_ID'
  id: Deploy
images:
- '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
timeout: '20m'
tags:
- gcp-cloud-build-deploy-cloud-run
- gcp-cloud-build-deploy-cloud-run-managed
- driveit-hp-agreement-mngt-api
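The trigger itself can also be created from the command line; a sketch under assumed names (repo, branch pattern, and project ID are placeholders):
gcloud builds triggers create cloud-source-repositories \
  --repo=project-repo \
  --branch-pattern="^release-.*$" \
  --build-config=cloudbuild.yaml \
  --substitutions=_PROJECT_ID=project-b-id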
Unfortunately, Google doesn't seem to provide that functionality within Source Repositories (it would rock if you could).
An alternative option you could consider (though it involves external dependencies) is to mirror your Source Repositories first to GitHub or Bitbucket, then mirror back again into Source Repositories. That way, any changes made to any mirror of the repository will sync (i.e. a change pushed in Project B will sync with Bitbucket, and likewise in Project A).
EDIT
To illustrate my alternative solution, here is a simple diagram:
[diagram: the Source Repositories in Project A and Project B each mirror the same external GitHub/Bitbucket repository, keeping both in sync]