How to replace GCP Load Balancer Backend Service Health Check via gcloud - google-cloud-platform

I have some "Classic" HTTP load balancers in my GCP project. Now I need to switch the health check used in one of the backend services.
Here is what I want to do in the GUI; however, I got a warning not to change anything via the GUI.
So, how do I do this via the gcloud command-line tool? I could not find any hint in the docs, unfortunately.
Or can I ignore this warning and just go ahead?

This warning can be ignored. If the health check does not get updated that way, follow these steps to update it through gcloud commands.
If you know the name of the backend service whose health check you want to update, you can use the following command:
gcloud compute backend-services update BACKEND_SERVICE_NAME \
--region=REGION \
--health-checks=HEALTH_CHECK_NAME \
--health-checks-region=REGION
Refer to the documentation for the backend-services update command for more information.
If you don't know the names of the backend services attached to the load balancer, follow the documentation on listing a load balancer's backend services to find them. Once you have the backend service name, you can update the health check through the gcloud command-line tool, as in the sketch below.
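Since a classic HTTP(S) load balancer uses a global backend service rather than a regional one, the regional flags shown above may not apply to it; a minimal sketch of the global variant (the service and health check names are placeholders) would be:
# Find the backend service attached to the load balancer
gcloud compute backend-services list
# Classic load balancers use global backend services, so update with --global
gcloud compute backend-services update BACKEND_SERVICE_NAME \
    --global \
    --health-checks=HEALTH_CHECK_NAME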

Related

Bringing Google Cloud costs to zero (compute engine)

In the billing section for one of my projects, the cost for 12 hours of Compute Engine E2 Instance Core usage is listed every day. But there are no instances in the Compute Engine section. The project actually only contains special Google Maps API keys that cannot be transferred.
I have also tried to disable the Compute Engine API. Unfortunately this fails with the following error: Hook call/poll resulted in failed op for service 'compute.googleapis.com': Could not turn off service, as it still has resources in use] with failed services [compute.googleapis.com]
Any idea?
Based on the error message: ‘Could not turn off service, as it still has resources in use] with failed services [compute.googleapis.com]’
That means there are resources under the Compute Engine API. You can either run a gcloud command to list the current instances or run a gcloud command to view the Asset Inventory. I suggest opening your GCP project in a Chrome incognito window and using Cloud Shell.
List instances
gcloud compute instances list
List Asset Inventory
gcloud asset search-all-resources
NOTE: The Asset Inventory API is not enabled by default, so after you run the command you'll receive this message:
user@cloudshell:~ (project-id)$ gcloud asset search-all-resources
API [cloudasset.googleapis.com] not enabled on project [project-id].
Would you like to enable and retry (this will take a few minutes)?
(y/N)?
Please type y to enable the API and see the output of the command.
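If you prefer to enable the API explicitly instead of answering the prompt, you can run:
gcloud services enable cloudasset.googleapis.com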
Having said that, when you see the results on the screen you'll be able to identify the resources under the Compute Engine API and all their components, e.g.:
---
additionalAttributes:
  networkInterfaces:
  - network: https://www.googleapis.com/compute/v1/projects/project-id/global/networks/default
    networkIP: 1.18.0.5
assetType: compute.googleapis.com/Instance
displayName: linux-instance
location: us-central1-a
name: //compute.googleapis.com/projects/project-id/zones/us-central1-a/instances/linux-instance
project: projects/12345678910
---
additionalAttributes: {}
assetType: compute.googleapis.com/Disk
displayName: linux-instance
location: us-central1-a
name: //compute.googleapis.com/projects/project-id/zones/us-central1-a/disks/linux-instance
project: projects/12345678910
---
As you can see, the two entries above describe the instance 'linux-instance' and its components (disk and IP address); all of them are under the compute.googleapis.com API.
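If the full inventory is noisy, the search can also be narrowed to Compute Engine asset types (assuming the --asset-types flag is available in your gcloud version):
# Only show Compute Engine instances and disks
gcloud asset search-all-resources \
    --asset-types='compute.googleapis.com/Instance,compute.googleapis.com/Disk'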
If you need further assistance, please dump the output of the command to a TXT file, remove sensitive information such as the project ID, external IPs and internal IPs, and share the output with me so I can take a look at it.
Alternatively, you can sanitize the output just like I did above, by replacing the instance name, project ID, project number and IP address with fake data.
Please keep in mind that since this is a billing concern, the GCP billing team is also available to help you.
Curious.
There are some services that require Compute Engine resources, e.g. Kubernetes Engine, but I thought that, when used, those resources are always visible.
One way to surface the consumer of these resources is to enumerate the project's enabled services and eyeball the result for one that may be consuming VMs:
gcloud services list --enabled --project=[[YOUR-PROJECT]]
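If Kubernetes Engine (container.googleapis.com) shows up as enabled, an extra check worth trying is to list any clusters that might own those VMs:
# Any cluster listed here is backed by Compute Engine VM instances
gcloud container clusters list --project=[[YOUR-PROJECT]]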

Receiving an error with Google Cloud Endpoints with Cloudrun for ENDPOINTS_SERVICE_NAME environment variable

I am receiving the following error message after setting up a backend service with the ESPv2 Beta sidecar container.
Serverless ESPv2 expects ENDPOINTS_SERVICE_NAME in environment variables.
Did you forget to build the Endpoints service configuration
into the ESPv2 image? Please refer to the official serverless
quickstart tutorials (below) for more information.
https://cloud.google.com/endpoints/docs/openapi/get-started-cloud-run#configure_esp
https://cloud.google.com/endpoints/docs/openapi/get-started-cloud-functions#configure_esp
If you are following along with these tutorials but have not
reached the step above yet, this error is expected. Feel free
to temporarily disregard this error message.
If you wish to skip this step, please specify the name of the
service in the ENDPOINTS_SERVICE_NAME environment variable.
Note this deployment mode is **not** officially supported.
It is recommended that you follow the tutorials linked above.
It looks like I was able to set up the Cloud Run service correctly; I am able to get responses directly from the API.
Reviewing the gcloud_build_image script, it doesn't seem to set the variable:
https://github.com/GoogleCloudPlatform/esp-v2/blob/9a5a03d439867b0d5563081ac574e94d51922c32/docker/serverless/gcloud_build_image#L53
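For reference, the quickstart linked in the error message bakes the Endpoints service configuration into the ESPv2 image with that script, roughly like this (the hostname, config ID and project are placeholders):
# Build an ESPv2 image with the Endpoints service config included
./gcloud_build_image -s CLOUD_RUN_HOSTNAME \
    -c CONFIG_ID -p ESP_PROJECT_ID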
Update the environment variable on the Cloud Run service where Cloud Endpoints is deployed, like this:
gcloud beta run services update <SERVICE NAME> \
--set-env-vars ENDPOINTS_SERVICE_NAME=<SERVICE NAME>-<hash>-<REGION>.a.run.app \
--region <REGION> --platform managed
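To confirm that the variable actually landed on the new revision, you can describe the service (flag spelling assumes the managed platform):
gcloud run services describe <SERVICE NAME> \
    --region <REGION> --platform managed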
More details in my article

Is VPC-native GKE cluster production ready?

This happens while trying to create a VPC-native GKE cluster. Per the documentation here, the command to do this is:
gcloud container clusters create [CLUSTER_NAME] --enable-ip-alias
However, this command gives the error below.
ERROR: (gcloud.container.clusters.create) Only alpha clusters (--enable_kubernetes_alpha) can use --enable-ip-alias
The command does work when the --enable_kubernetes_alpha option is added, but it prints another message:
This will create a cluster with all Kubernetes Alpha features enabled.
- This cluster will not be covered by the Container Engine SLA and
should not be used for production workloads.
- You will not be able to upgrade the master or nodes.
- The cluster will be deleted after 30 days.
Edit: The test was done in zone asia-south1-c
My questions are:
Is VPC-Native cluster production ready?
If yes, what is the correct way to create a production ready cluster?
If VPC-Native cluster is not production ready, what is the way to connect privately from a GKE cluster to another GCP service (like Cloud SQL)?
Your command seems correct. It looks like something is going wrong during the creation of the cluster in your project. Are you using any flags other than the ones in the command you posted?
When I set my Google Cloud Shell to the region europe-west1, the cluster deploys error-free and uses 1.11.6-gke.2 (the default).
You could try to manually create the cluster using the GUI instead of the gcloud command. While creating the cluster, check the "Enable VPC-native (using alias IP)" feature. Try using the newest non-alpha version of GKE if one is showing up for you.
The public documentation you posted on GKE IP-aliasing and the GKE projects.locations.clusters API shows this to be in GA. All signs point to this being production ready. For what it's worth, the feature was announced last May on the Google Cloud blog.
What you can try is updating your version of the Google Cloud SDK. This will bring everything up to the latest release and remove alpha messages for features that are now in GA.
$ gcloud components update
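After updating, the same command from the question should work without the alpha flag on a current GKE version; for example (the cluster name is a placeholder, and the zone matches the one in the edit above):
gcloud container clusters create my-vpc-native-cluster \
    --zone asia-south1-c \
    --enable-ip-alias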

Service Account does not exists on GCP

While trying Google Kubernetes Engine for the first time, following the tutorial, I am trying to create a new cluster.
But after pressing Create I receive
The request contains invalid arguments: "EXTERNAL: service account
"****#developer.gserviceaccount.com" does not exist.". Error code: "7"
in a red circle near the Kubernetes cluster name.
After some investigation, it looks like this is the default service account which Google generated for my project.
I've looked over the create cluster options, but there isn't any option to change the service account.
Do I need to recreate the Google Compute Engine default service account? How can I do that?
How I can overcome this issue?
Thank you
The default Compute Engine service account is essential for functions related to Compute Engine and is generated automatically. Kubernetes Engine uses Compute Engine VM instances as the cluster's nodes, and GKE uses the Compute Engine service account to authorize the creation of these nodes.
In order to regenerate the default service account there are two options:
Disable and re-enable the Google Compute Engine API in the "APIs & Services" dashboard. If performing this option results in errors when disabling the API, try option 2.
Run the command gcloud services enable compute.googleapis.com in the Cloud SDK or in Cloud Shell, which is in the header of the page.
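A rough CLI equivalent of those two options (PROJECT_ID is a placeholder) would be something like:
# Option 1: disable and re-enable the Compute Engine API
gcloud services disable compute.googleapis.com --project PROJECT_ID
gcloud services enable compute.googleapis.com --project PROJECT_ID
# Check that the default service account is back; its email has the form
# PROJECT_NUMBER-compute@developer.gserviceaccount.com
gcloud iam service-accounts list --project PROJECT_ID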
It looks like you either do not have a default service account or have more than one.
Simply go to the "Service Accounts" section under "IAM & Admin", select the App Engine default service account, and provide it as an argument while creating the cluster from gcloud or Cloud Shell, as below:
gcloud container clusters create my-cluster --zone=us-west1-b --machine-type=n1-standard-1 --disk-size=100 --service-account=abc@appspot.gserviceaccount.com
To initialize GKE, go to the GCP Console and wait for the "Kubernetes Engine is getting ready. This may take a minute or more" message to disappear.
Please open the page and wait for a while.

Heapster not pushing metrics to Stackdriver on Google container engine

A newly created Kubernetes cluster on GKE is not pushing its metrics to Stackdriver. Output of kubectl cluster-info is:
Kubernetes master is running at https://XXX.XXX.XXX.XXX
KubeDNS is running at https://XXX.XXX.XXX.XXX/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://XXX.XXX.XXX.XXX/api/v1/proxy/namespaces/kube-system/services/kube-ui
Heapster is running at https://XXX.XXX.XXX.XXX/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster
When I try to create a dashboard on Stackdriver with 'Custom Metrics', it says 'No Match Found'. Metrics are supposed to be present at that location with the 'kubernetes.io' prefix according to the Heapster documentation.
I have also enabled the Cloud Monitoring API with Read Write permission while creating the cluster. Is that required for pushing cluster metrics?
What Heapster does with the metrics depends on its configuration. When running as part of GKE, the metrics aren't exported as "custom" metrics, but rather as official GKE service metrics. The feature is still in an experimental, soft-launch state, but you should be able to access them at app.google.stackdriver.com/gke
In the documentation it says you must enable monitoring by running:
gcloud alpha container clusters update --monitoring-service=monitoring.googleapis.com <cluster-name>
This is supposed to be on by default but it wasn't for me.
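To check what monitoring service a cluster is currently configured with (the cluster name and zone are placeholders; this assumes the field is still called monitoringService in the describe output):
gcloud container clusters describe <cluster-name> --zone <zone> \
    --format="value(monitoringService)"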