Command to create google cloud backend service fails - what am I doing wrong? - google-cloud-platform

I am currently working through the Google Cloud "load balancing" code lab:
https://codelabs.developers.google.com/codelabs/cpo200-load-balancing
On page 4 of the lab, it asks me to run the following command in Cloud Shell to create a backend service (for load balancing across a group of web server, i.e. HTTP, instances):
gcloud compute backend-services create \
guestbook-backend-service \
--http-health-checks guestbook-health-check
However, running this command results in the following error:
ERROR: (gcloud.compute.backend-services.create) Some requests did not succeed:
- Invalid value for field 'resource.loadBalancingScheme': 'EXTERNAL'.
Backend Service based Network Load Balancing is not yet supported.
Assuming that all the preceding steps in the code lab are correct (and I have no reason to doubt that), this appears to be a bug in the code lab.
I have submitted a bug report for this; however, since I am not expecting a response any time soon and I do want to continue with the lab: what command should I be running instead?
I presume there has been some sort of API change that the code lab has not caught up with, but the documentation does not appear to mention any relevant changes.
I realize I could probably work out how to do this with the Cloud Console, but I would really like to learn the command line actions.
Does anyone have any ideas?
Thanks in advance!

And, as is the nature of these things, shortly after I post this I discover the answer for myself...
The command should be:
gcloud compute backend-services create \
guestbook-backend-service \
--http-health-checks guestbook-health-check \
--global
It appears that what the error message is actually complaining about is that regional backend-services are not supported; they must be global.
Leaving aside the fact that the lab directions are inadequate, it would be nice if this was detailed in the documentation, but I guess we can't have everything...
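If you want to sanity-check the result from the same Cloud Shell, something along these lines should confirm the backend service ended up as a global resource with the health check attached (a sketch; describe and list are standard gcloud subcommands and the resource name is the one from the lab):
# Show the backend service, including its load balancing scheme and health checks
gcloud compute backend-services describe guestbook-backend-service --global
# Or list all backend services to see their scope
gcloud compute backend-services list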

Related

Dataproc custom image: Cannot complete creation

For a project, I have to create a Dataproc cluster with one of the outdated image versions (for example, 1.3.94-debian10) that contain the Apache Log4j 2 vulnerabilities. The goal is to trigger the related alert (DATAPROC_IMAGE_OUTDATED) in order to check how SCC works (it is just for a test environment).
I tried to run the command
gcloud dataproc clusters create dataproc-cluster --region=us-east1 --image-version=1.3.94-debian10
but got the following message:
ERROR: (gcloud.dataproc.clusters.create) INVALID_ARGUMENT: Selected software image version 1.3.94-debian10 is vulnerable to remote code execution due to a log4j vulnerability (CVE-2021-44228) and cannot be used to create new clusters. Please upgrade to image versions >=1.3.95, >=1.4.77, >=1.5.53, or >=2.0.27. For more information, see https://cloud.google.com/dataproc/docs/guides/recreate-cluster
This makes sense, in order to protect the cluster.
I did some research and discovered that I will have to create a custom image with that version and generate the cluster from it. The thing is, I have tried reading the documentation and looking for a tutorial, but I still can't understand how to get started or how to run the generate_custom_image.py file, since I am not comfortable with Cloud Shell (I prefer the console).
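From what I could piece together, the invocation is supposed to look roughly like this (just my understanding of the custom-images tooling; the flag names come from its README as far as I can tell, and the zone, bucket and customization script below are placeholders I made up):
# Build a custom Dataproc image from the old version (placeholders throughout)
python generate_custom_image.py \
  --image-name my-log4j-test-image \
  --dataproc-version 1.3.94-debian10 \
  --customization-script my-noop-init.sh \
  --zone us-east1-b \
  --gcs-bucket gs://my-logs-bucket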
Can someone help? Thank you

GCP Compute Engine won't show memory metrics

I want my Compute Engine VM to show memory usage metrics in the console. I went to this page, installed the Ops Agent, restarted the service, and went to the VM's Observability section, but I still saw a message that the agent is not installed (on the memory usage metric).
I thought that maybe memory usage is not enabled by default (it's not mentioned anywhere, just a guess) and that I need to modify the config. I went to these docs and added this to /etc/google-cloud-ops-agent/config.yaml:
metrics:
  receivers:
    agent.googleapis.com/memory/bytes_used:
      type: hostmetrics
      collection_interval: 1m
According to the docs, this config will be merged with the built-in configuration when the agent restarts.
I restarted the agent service, went back to the dashboard but still it shows the message "Requires Ops Agent".
I don't know what I'm doing wrong. The documentation is really poor on this topic IMO; I couldn't find any example of how to turn on memory usage metrics.
EDIT
Running sudo systemctl status google-cloud-ops-agent"*"
I can see this error message:
otelopscol[2763]: 2022-05-02T14:07:02.780Z  error  collector#v0.26.1-0.20220307211504-dc45061a44f9/metrics.go:235
could not export time series to GCM {"error": "rpc error: code = InvalidArgument desc = Name must begin with '{resource_container_type}/{resource_container_id}', got: projects/"}
EDIT2
If I click INSTALL via the console, I see these installation instructions:
:> agents_to_install.csv && \
echo '"projects/<project>/zones/europe-west1-b/instances/<instance>","[{""type"":""ops-agent""}]"' >> agents_to_install.csv && \
curl -sSO https://dl.google.com/cloudagents/mass-provision-google-cloud-ops-agents.py && \
python3 mass-provision-google-cloud-ops-agents.py --file agents_to_install.csv
It's different from the one here: https://cloud.google.com/monitoring/agent/monitoring/installation#joint-install
curl -sSO https://dl.google.com/cloudagents/add-monitoring-agent-repo.sh
sudo bash add-monitoring-agent-repo.sh --also-install
Not sure which one installed what; I tried both.
Regarding your questions “I couldn't find any example on how to turn on memory usage metrics” and “Is it installed but the configurations need to be modified for the memory usage metrics?”, the answer is yes: you need to customize which group or groups of metrics to enable, as specified here. The metric type strings must be prefixed with agent.googleapis.com/agent/. For memory, the examples are:
agent.googleapis.com/agent/memory_usage
agent.googleapis.com/agent/memory_utilization
(That prefix is omitted from the entries in the table I am referring to here.)
Now, select the metrics based on the target VM that you need to get metrics from; for example, Linux only:
agent.googleapis.com/memory/usage
You can also play with other options, changing the final part of the metric name, for example:
agent.googleapis.com/memory/bytes_used
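If the problem turns out to be the override file itself, note that the documented config structure keys a receiver name (with a type field) into a pipeline, rather than using the metric type string as the receiver key. A minimal override that simply mirrors the built-in hostmetrics default would look something like this (a sketch based on the configuration docs; the heredoc and restart are just one way to apply it):
# Write a minimal metrics override that mirrors the built-in hostmetrics default
sudo tee /etc/google-cloud-ops-agent/config.yaml >/dev/null <<'EOF'
metrics:
  receivers:
    hostmetrics:
      type: hostmetrics
      collection_interval: 60s
  service:
    pipelines:
      default_pipeline:
        receivers: [hostmetrics]
EOF
# Restart the agent so the override gets merged with the built-in configuration
sudo systemctl restart google-cloud-ops-agent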
Ensure that you didn't miss anything regarding the agent's installation; follow these instructions to install it from the CLI (see the sketch after the steps below). Then go to:
Resources -> Instances: you should see your VM instance.
Click on your instance -> click on Agent -> scroll down and you should see your memory and swap usage.
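For reference, the single-VM Ops Agent install from the installation guide goes along these lines (a sketch; note this is the Ops Agent script, not the legacy monitoring-agent one quoted in the question):
# Download the repo script and install the Ops Agent on the VM
curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
sudo bash add-google-cloud-ops-agent-repo.sh --also-install
# Verify that the agent services are up
sudo systemctl status google-cloud-ops-agent"*"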
Finally, you can follow this troubleshooting guide for Ops Agent issues, and these threads for more empirical cases and solutions: Memory Usage Monitoring in GCP Compute Engine and No metric found.

What is the --trace-token option for gcloud used for?

The help description is not clear to me:
Token used to route traces of service requests for investigation of
issues.
Could you provide a simple example of how to use it?
I tried:
gcloud compute instances create vm3 --trace-token xyz123
I can find the "vm3" string in the logs, but not my token xyz123.
The only use of it seems to be in grep:
history| grep xyz123
The --trace-token flag is intended to be used by support agents when there is an error that is difficult to track down from the logs. The Google Cloud Platform support agent provides a time-bound token, which expires after a specified time, and asks the user to run the command for the specific product in which the user is facing the issue. It then becomes easier for the support agent to trace the error using that --trace-token.
For example :
A user faces an error while creating a Compute Engine instance and contacts the Google Cloud Platform support team. The support agent inspects the logs and other resources but cannot find the root cause of the issue. The support agent then provides a --trace-token and asks the user to run the command below with it.
--trace-token = abcdefgh
Command: gcloud compute instances create my-vm --trace-token abcdefgh
After the user runs the above command, the support agent can analyse the request in depth with the help of the --trace-token and find the error.
Please note that when the --trace-token flag is used, the content of the trace may include sensitive information such as auth tokens and the contents of any accessed files. Hence it should only be used for manual testing and not in production environments.

Want to create a VM instance in Google Cloud Platform by using the CLI

I am trying to create a VM instance in Google Cloud Platform, but I am getting an error in the process and trying to resolve it.
Error: Could not fetch a resource:
Invalid value for field 'resource.networkInterfaces[0].subnetwork': 'https://compute.googleapis.com/compute/v1/projects/xxx/regions/us/subnetworks/10.128.0.0/20'. The URL is malformed.
Can anyone please guide me? My intention is to automate VM creation and keep it simple by putting it all together in a Bash script.
The error indicates that the URL is malformed, most likely because that subnetwork does not exist as written: the value passed ends in an IP range (10.128.0.0/20) where a subnet name is expected, and the region segment is 'us' rather than a real region such as us-central1.
One way to fix it is to have a look at the documentation for the right way to write the command, and to make sure the subnet actually exists in your GCP project.
https://cloud.google.com/vpc/docs/create-use-multiple-interfaces
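A quick way to see which subnets actually exist in the project (and therefore what value to pass) is something like this (a sketch; the region in the filter is only an example):
# List all subnets with their regions and IP ranges
gcloud compute networks subnets list
# Optionally narrow the output to a single region
gcloud compute networks subnets list --filter="region:us-central1"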
The easy way to avoid typos is to create the VM in the console the first time (you don't really have to create it, just start the form). At the bottom of the page you will see a line that says "Equivalent REST or command line"; click on "command line" to see exactly the CLI command equivalent to the VM you are configuring. Use that command in your shell or script.
Clicking on "command line" will return something like:
gcloud compute instances create VM_NAME \
--network=NETWORK_NAME \
--subnet=SUBNET_NAME \
--zone=ZONE
with all the parameters already filled in for you.
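Since the goal is an automated Bash script, the filled-in version typically ends up looking something like this (a sketch; every value below is a placeholder to replace with your own names):
# Example of a fully specified instance creation for a script (placeholder values)
gcloud compute instances create my-vm \
  --zone=us-central1-a \
  --machine-type=e2-medium \
  --network=default \
  --subnet=default \
  --image-family=debian-12 \
  --image-project=debian-cloud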

Google CloudSQL instance non-responsive, how to get support?

When it comes to databases, we want to leave managing them to the pros, which is why we went for a managed solution in the form of a Cloud SQL 2nd gen instance. Today the instance stopped responding. I clicked restart; it has been restarting for hours and is still not responding. I have also tried cloning the instance, which is not responding either.
I don't know what else to do, our db is crippled and the service that uses it is down. These things happen, fine.
The thing that shocked me is that I am unable to contact anybody to resolve this problem. I understand that I can pay for a support subscription, at $150/month and up. This confuses me, though: the Cloud Console UI is not responding, so am I wrong to assume that I should not have to pay for support just for the core product to work?
This leads me to my main question, if I want to continue using Google Cloud products in production, do I NEED a support subscription?
The same happened to us yesterday. The Cloud SQL instance did not respond for an hour and a half (from 18:00 to 19:30 GMT+1).
We couldn't do anything; we tried to back up the instance to a bucket, but the command kept returning an error saying that another operation was in progress.
We are a small startup and can't pay for a support plan, but when we signed up for the Cloud SQL service we thought this kind of situation wouldn't happen.
Honestly, after this I believe that Cloud SQL is not a good option unless you also contract a Gold or Platinum support plan. It is frustrating that something fails and you cannot do anything, or even report the error.
Try the gcloud command-line tool in your shell instead of the console UI. Try exporting the data from your SQL instance to a Google Cloud Storage bucket by using this command:
gcloud sql instances \
export <sql-instance-name> \
gs://<bucket-name>/backup.sql
The SQL instance's service account needs read and write access to the Cloud Storage bucket; grant it on the bucket if the export fails with a permissions error.
Create a new SQL instance using this command:
gcloud sql instances \
create <new-sql-instance-name>
Now, import the data into the new SQL instance using this command:
gcloud sql instances \
import <new-sql-instance-name> \
gs://<bucket-name>/backup.sql
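Note that in more recent gcloud releases the export and import subcommands live under gcloud sql export / gcloud sql import rather than gcloud sql instances; if the commands above are rejected as deprecated or unknown, the equivalent is roughly this (a sketch, with the same placeholder names):
# Export the data to a Cloud Storage bucket as a SQL dump
gcloud sql export sql <sql-instance-name> gs://<bucket-name>/backup.sql
# Import the dump into the new instance
gcloud sql import sql <new-sql-instance-name> gs://<bucket-name>/backup.sql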
You can get free or premium support here. You do not need a subscription to get help; it all depends on your needs and the level of urgency you expect for possible future problems.
If you have a recent backup of your database, you may consider re-creating it in another instance from that backup.
You may also consider posting your issue in the Google Cloud SQL Product Issue Tracker. This way, it will get much better visibility from developers and Google support, without incurring any extra cost.