Google Cloud Build with Compute Engine - google-cloud-platform

I want to use Cloud Build with a trigger on commit to automatically fetch updated repo and run sudo supervisorctl restart on a Compute Engine instance.
On the Cloud Build settings page, there is an option to connect Compute Engine, but so far I have only found examples covering Kubernetes Engine and App Engine here.
Is it possible to accomplish this? Is it the right way to make updates? Or should I instead restart the instance(s) with a startup-script?

There's a repo on GitHub from the cloud-builders-community that may be what you are looking for.
As specified in the aforementioned link, it connects Cloud Build to Compute Engine with the following steps:
A temporary SSH key will be created in your Container Builder workspace
An instance will be launched with your configured flags
The workspace will be copied to the remote instance
Your command will be run inside that instance's workspace
The workspace will be copied back to your Container Builder workspace
You will need to grant the Cloud Build service account an appropriate IAM role with create and destroy Compute Engine permissions (note that add-iam-policy-binding accepts a single --role per invocation, so the roles are granted one at a time):
export PROJECT=$(gcloud info --format='value(config.project)')
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT --format 'value(projectNumber)')
export CB_SA_EMAIL=$PROJECT_NUMBER@cloudbuild.gserviceaccount.com
gcloud services enable cloudbuild.googleapis.com
gcloud services enable compute.googleapis.com
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:$CB_SA_EMAIL --role='roles/iam.serviceAccountUser'
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:$CB_SA_EMAIL --role='roles/compute.instanceAdmin.v1'
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:$CB_SA_EMAIL --role='roles/iam.serviceAccountActor'
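Optionally, you can verify that the bindings landed on the Cloud Build service account; a quick sketch using gcloud's policy filtering with the variables defined above:
# List the roles currently granted to the Cloud Build service account
gcloud projects get-iam-policy $PROJECT \
  --flatten="bindings[].members" \
  --filter="bindings.members:$CB_SA_EMAIL" \
  --format="table(bindings.role)"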
And then you can configure your build step with something similar to this:
steps:
- name: gcr.io/$PROJECT_ID/remote-builder
  env:
  - COMMAND=sudo supervisorctl restart
You can also find more information in the examples section of the GitHub repo.

Related

How can I run a gcloud SDK command to invalidate a Cloud CDN cache remotely from my backend server?

I need to run a specific gcloud SDK command, and I need to do it remotely from my Express server. Is this possible?
The command is related to the Cloud CDN service, which doesn't seem to have an npm package to access its API in an easy way. I've noticed in a cloudbuild.yaml that you can actually run a gcloud command during a build process, like:
cloudbuild.yaml
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
entrypoint: gcloud
args:
- "run"
- "deploy"
- "server"
And then I thought: if it's possible to run a gcloud command through Cloud Build, isn't there some way to create basically a "Cloud Script" where I could access and trigger a gcloud command like that?
This is my environment and what I'd like to run:
Express server hosted on Cloud Run
I would like to run a command to clear the Cloud CDN cache, like this:
gcloud compute url-maps invalidate-cdn-cache URL_MAP_NAME \
--path "/images/*"
There doesn't seem to be a Node.js client API to access the Cloud CDN service.
There is a REST POST endpoint for this: https://cloud.google.com/compute/docs/reference/rest/v1/urlMaps/invalidateCache
You can create a Cloud Function, or call the endpoint from anywhere else, to invalidate your cache.
With the gcloud command, you would have to create a VM on Compute Engine and expose some endpoint that executes the gcloud command; there is no other way. So I suggest you use the REST endpoint, as you can call it from whatever environment you use.
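As a sketch, the raw REST call looks like the following. PROJECT_ID and URL_MAP_NAME are placeholders; the access token here comes from gcloud for illustration, while on Cloud Run you would obtain one from the metadata server or an auth client library:
# POST to the urlMaps.invalidateCache REST method with the path to invalidate
ACCESS_TOKEN=$(gcloud auth print-access-token)
curl -X POST \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"path": "/images/*"}' \
  "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/urlMaps/URL_MAP_NAME/invalidateCache"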

How to use 'gcloud deployment-manager deployments' within 'gcloud builds' to have a full CI/CD pipeline on GCP?

I am trying to set up a CI/CD pipeline on GCP. I would like the following:
a new modification in GitHub is used as a trigger
use gcloud builds submit --config=cloud_build.yaml to build a new Docker image that contains the modifications from git (mainly new Python packages and Python code) and push the image to Container Registry
use gcloud deployment-manager deployments create xxx --template pipeline.jinja --properties xxx to deploy and run my container (it is a Jupyter notebook)
I have the last 2 steps set up and working (gcloud builds and gcloud deployment-manager).
My question is: how can I do that with one script? I would like to have the pipeline fully automated. Some of the tests I would like to implement, such as checking that Python packages are installed properly, will be done on the container after the deployment.
What are the best practices on GCP? I was thinking that I could use gcloud deployment-manager inside gcloud builds, but so far I haven't really found documentation on how to do that. For the deployment, I have a lot of variables to pass to set up the network, machine type and other parameters, and I can only do that using a Jinja script.
Cloud Build provides and maintains pre-built images of builders that you can reference in your build steps to execute your tasks.
You can trigger the deployment manager using the gcr.io/cloud-builders/gcloud (doc) builder:
# Build images
[...]
# Load/Generate your Jinja templates
[...]
# Deploy
- name: 'gcr.io/cloud-builders/gcloud'
  id: Deploy your application
  args: ['deployment-manager', 'deployments', 'create', 'your-template']
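Since you mention passing many variables to the deployment, the same step can spell out the template and properties flags; a sketch, where the deployment name, template file and property values are placeholders:
# Deploy via a Jinja template, passing properties as key:value pairs
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - 'deployment-manager'
  - 'deployments'
  - 'create'
  - 'my-deployment'
  - '--template=pipeline.jinja'
  - '--properties=zone:us-central1-b,machineType:n1-standard-1'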
However, there are more conventional ways to deploy containerized application within a GKE cluster:
via gcr.io/cloud-builders/kubectl to directly deploy applications via well-defined Kubernetes manifests (see the sketch after this list);
via Helm tool builder to package and deploy Kubernetes applications starting from custom templates.
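For instance, a kubectl-based deploy step might look like this sketch, where the zone, cluster name and manifest path are placeholders:
# Apply Kubernetes manifests against an existing GKE cluster
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'k8s/deployment.yaml']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-b'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'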
Disclaimer: Comments and opinions are my own and not the views of my employer.

gcloud - ERROR: (gcloud.app.deploy) Permissions error fetching application

I am trying to deploy node js app on google cloud but getting following error -
Step #1: ERROR: (gcloud.app.deploy) Permissions error fetching application [apps/mytest-240512]. Please make sure you are using the correct project ID and that you have permission to view applications on the project.
I am running following command -
gcloud builds submit . --config cloudbuild.yaml
My cloudbuild.yaml file looks like -
steps:
# install
- name: 'gcr.io/cloud-builders/npm'
  args: ['install']
# deploy
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
The default Cloud Build service account does not have permission to deploy to App Engine. You need to enable the Cloud Build service account to perform actions such as deploy.
The Cloud Build service account is formatted like this:
[PROJECT_NUMBER]@cloudbuild.gserviceaccount.com
Go to the Google Cloud Console -> IAM & admin -> IAM.
Locate the service account and click the pencil icon.
Add the role "App Engine Deployer" to the service account.
Wait a couple of minutes for the service account to update globally and then try again.
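Equivalently, you can grant the role from the command line; a sketch, where the project ID placeholder must be replaced with your own:
# Grant the App Engine Deployer role to the Cloud Build service account
export PROJECT_ID=[your-project-id]
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
  --role=roles/appengine.deployer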
I had this same error today, and the way I resolved it was by running $ gcloud auth login in the console.
This will open a new browser tab for you to login with the credentials that has access to the project you're trying to deploy.
I was able to deploy to gcloud after that.
PS: I'm not sure this is the best approach, but I'm leaving it as a possible solution, as this is how I usually get around this problem. Worst case, I'll stand corrected and learn something new.
The most common way to deploy an app to App Engine is to use gcloud app deploy ....
When you use gcloud app deploy against App Engine Flex, the service uses Cloud Build.
It's entirely possible (and reasonable) to use Cloud Build to do your deployments too; it's just more involved.
I've not tried this but I think that, if you wish to use Cloud Build to perform the deployment, you will need to ensure that the Cloud Build service account has permissions to deploy to App Engine.
Here's an example of what you would need to do, specifically granting Cloud Build's service account the correct role.
Two commands can handle the perms needed (run them in your terminal if you have the gcloud SDK installed and authenticated, or run them in Cloud Shell for your project):
export PROJECT_ID=[[put your project id here]]
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")
gcloud iam service-accounts add-iam-policy-binding ${PROJECT_ID}@appspot.gserviceaccount.com \
  --member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
  --role=roles/iam.serviceAccountUser \
  --project=${PROJECT_ID}
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
  --role=roles/appengine.appAdmin

Unable to connect to Google Container Engine

I've updated gcloud to the latest version (159.0.0)
I created a Google Container Engine cluster, and then followed the instructions in the prompt.
gcloud container clusters get-credentials prod --zone us-west1-b --project myproject
Fetching cluster endpoint and auth data.
kubeconfig entry generated for prod
kubectl proxy
Unable to connect to the server: error executing access token command
"/Users/me/Code/google-cloud-sdk/bin/gcloud ": exit status
Any idea why is it not able to connect?
You can run the following to see if the config was generated correctly:
kubectl config view
I had a similar issue when trying to run kubectl commands on a new Kubernetes cluster just created on Google Cloud Platform.
The solution for my case was to activate Google Application Default Credentials.
You can find a link below on how to activate it.
Basically, you need to set an environment variable to the path of the .json with the credentials from GCP:
GOOGLE_APPLICATION_CREDENTIALS -> c:\...\..\..Credentials.json exported from Google Cloud
https://developers.google.com/identity/protocols/application-default-credentials
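On Windows, setting it might look like this (the path is hypothetical; point it at wherever you saved the exported key):
set GOOGLE_APPLICATION_CREDENTIALS=C:\Users\me\Credentials.json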
I found this solution on a Kubernetes GitHub issue: https://github.com/kubernetes/kubernetes/issues/30617
PS: make sure you have also set the environment variables for:
%HOME% to %USERPROFILE%
%KUBECONFIG% to %USERPROFILE%
It looks like the default auth plugin for GKE might be buggy on Windows. kubectl is trying to run gcloud to get a token to authenticate to your cluster. If you run kubectl config view you can see the command it tried to run, and run it yourself to see if/why it fails.
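For reference, the relevant user entry in the kubectl config view output looks roughly like this sketch (cluster name and gcloud path will differ):
users:
- name: gke_myproject_us-west1-b_prod
  user:
    auth-provider:
      name: gcp
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /Users/me/Code/google-cloud-sdk/bin/gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'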
As Alexandru said, a workaround is to use Google Application Default Credentials. Actually, gcloud container has built in support for doing this, which you can toggle by setting a property:
gcloud config set container/use_application_default_credentials true
or set the environment variable %CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS% to true.
Using GKE, updating the credentials from the "Kubernetes Engine/Clusters" management page worked for me.
The cluster row provides a "Connect" button that copies the credentials command into the console. This refreshes the token used, and then kubectl works again.
Why did my token expire? Well, I suppose GCP tokens are not eternal.
So, the button automatically runs the same command as:
gcloud container clusters get-credentials your-cluster ...
Bruno

`gcloud source repos clone` with service account is not working

Deploying my personal project to GCE (Google Compute Engine), I tried to clone a git repo in Google Cloud Platform, but it did not work. I guess the git repo in GCP uses code.google.com internally, and that is not compatible with service accounts. It prints
Cloning into '/home/...'...
fatal: remote error: Invalid username/password.
You may need to use your generated googlecode.com password; see https://code.google.com/hosting/settings
ERROR: (gcloud.source.repos.clone) Command '['git', 'clone', 'https://source.developers.google.com/p/.../r/...', '/home/...', '--config', 'credential.helper="gcloud.sh"']' returned non-zero exit status 128
I am currently logged in as ...-compute@developer.gserviceaccount.com, and the image is Debian (the GCE default).
How was your GCE instance created?
You need to make sure it has https://www.googleapis.com/auth/source.full_control scope. To list scopes you can run
gcloud compute instances describe INSTANCE_NAME \
  --zone=INSTANCE_ZONE --format="yaml(serviceAccounts)"
If you used the gcloud command to create the instance, you can use the --scopes flag to include the source scope, as in the sketch below.
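A sketch with a placeholder instance name and zone; note that --scopes replaces the default scope set, so list any other scopes your instance needs alongside it:
gcloud compute instances create INSTANCE_NAME \
  --zone=us-central1-a \
  --scopes=https://www.googleapis.com/auth/source.full_control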
Alternatively, if you use the Developer Console to create the GCE instance, make sure to select "Allow full access to all Cloud APIs" in the "Service account" section.