I have logging enabled on my 2nd Generation Cloud SQL instances in GCP; however, when I attempt to read these logs using the CLI, I'm drawing a blank.
If I run $ gcloud logging logs list I can see the logs I want to read, for example:
projects/<project name>/logs/cloudsql.googleapis.com%2Fmysql-slow.log
projects/<project name>/logs/cloudsql.googleapis.com%2Fmysql.err
The docs are confusing, but it looks like I should be able to read them if I run:
gcloud logging read "logName=projects/<project name>/logs/cloudsql.googleapis.com%2Fmysql.err" --limit 10 --format json
However, this only returns an empty array: []
I just want to read out the logs.
What am I doing wrong?
You have to execute gcloud logging read projects/<project name>/logs/cloudsql.googleapis.com%2Fmysql-slow.log and gcloud logging read projects/<project name>/logs/cloudsql.googleapis.com%2Fmysql.err, as specified in the "Quickstart using the Cloud SDK" documentation. Do not use the " characters in the command.
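If the quoting is indeed the issue, the equivalent calls keeping the flags from your original command would look like this (a sketch; <project name> is still the placeholder for your project):
gcloud logging read projects/<project name>/logs/cloudsql.googleapis.com%2Fmysql.err --limit 10 --format json
gcloud logging read projects/<project name>/logs/cloudsql.googleapis.com%2Fmysql-slow.log --limit 10 --format json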
I need to run a specific gcloud SDK command, and I need to do it remotely from my Express server. Is this possible?
The command is related to the Cloud CDN service, which doesn't seem to have an npm package that gives easy access to its API. I've noticed in a cloudbuild.yaml that you can actually run a gcloud command during a build process, like:
cloudbuild.yaml
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
  entrypoint: gcloud
  args:
    - "run"
    - "deploy"
    - "server"
And then I thought: if it's possible to run a gcloud command through Cloud Build, isn't there some way to create basically a "Cloud Script" where I could trigger a gcloud command like that?
This is my environment and what I'd like to run:
Express server hosted on Cloud Run
I would like to run a command to clear the Cloud CDN cache, like this:
gcloud compute url-maps invalidate-cdn-cache URL_MAP_NAME \
--path "/images/*"
There doesn't seem to be a Node.js client API to access the Cloud CDN service.
There is a REST POST endpoint for this: https://cloud.google.com/compute/docs/reference/rest/v1/urlMaps/invalidateCache
You can create a Cloud Function, or call the endpoint from anywhere else, to invalidate your cache.
With the gcloud command you would probably have to create a VM on Compute Engine and build some endpoint that executes the gcloud command; there is no other way. I suggest you use the REST endpoint instead, as you can call it from whatever environment you use.
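As an illustration (a hedged sketch, not a drop-in implementation: PROJECT_ID and URL_MAP_NAME are placeholders, and the caller needs credentials with the Compute scope), the REST call can be made with curl like this:
ACCESS_TOKEN=$(gcloud auth print-access-token)
curl -X POST \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"path": "/images/*"}' \
  "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/urlMaps/URL_MAP_NAME/invalidateCache"
From Cloud Run you would obtain the access token from the metadata server or a Google auth client library rather than the gcloud CLI.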
While trying to get the pod or node states from Google Cloud Platform Cloud Shell, I'm facing this error. Can someone please help me? I can see the output of "kubectl config view".
Posting this answer as a community wiki for better visibility, and because the possible solution was posted in the comments:
Does this answer your question? Unable to connect to the server: dial tcp i/o time out
Adding to that:
Below command:
$ kubectl config view
is used to show the configuration stored in your ~/.kube/config file. The fact that you can see the output of this command doesn't mean you have the correct cluster configured for use with kubectl.
From the perspective of Google Cloud Platform and Cloud Shell
There is an official documentation regarding troubleshooting issues with GKE:
Cloud.google.com: Kubernetes Engine: Docs: Troubleshooting
There could be several reasons why you are getting the following error:
You are referencing wrong cluster in your ~/.kube/config file.
$ gcloud container clusters get-credentials CLUSTER_NAME --zone=ZONE - you will need to run this command to fetch the correct configuration
You can also get the above command from the Kubernetes Engine page (Connect button)
You are referencing a cluster in your ~/.kube/config file that was deleted
You created a Private GKE cluster
For more information you can look in the Cloud Console -> Kubernetes Engine -> CLUSTER_NAME
You can also run:
$ gcloud container clusters list - this command will show the clusters and the state (status) they are in
$ gcloud container clusters describe CLUSTER_NAME --zone=ZONE - this command will show you the configuration of the cluster
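Putting the above together, a typical recovery sequence looks like this (a sketch; CLUSTER_NAME and ZONE are placeholders for your own values):
$ gcloud container clusters list
$ gcloud container clusters get-credentials CLUSTER_NAME --zone=ZONE
$ kubectl get nodes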
I'm trying to use a secret in Cloud Run for Anthos via the gcloud command line.
Is there any example of how to use such a secret in any of the documents?
I've already looked for it in https://cloud.google.com/sdk/gcloud/reference/run
but nowhere in the docs are secrets mentioned.
Secret in Cloud Run
Secret mounts in "Cloud Run for Anthos" are regular Kubernetes Secrets: https://kubernetes.io/docs/concepts/configuration/secret/ You can use the kubectl create secret command to create one.
You can browse the list of ConfigMaps and Secrets in the Cloud Console at https://console.cloud.google.com/kubernetes/config, but you can't create or edit them there. Currently, kubectl is the only option.
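For example, a secret could be created like this (a minimal sketch; the secret name and key/value are made up for illustration):
kubectl create secret generic my-app-secret --from-literal=API_KEY=abc123
The secret is then referenced from the service's Kubernetes/Knative spec as a volume or environment variable, the same way as on any other GKE workload.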
I'm using Google Cloud for the first time and I'm trying to upload a test file to my root folder on my instance. However, I'm getting this error:
ERROR: (gcloud.compute.scp) Could not fetch resource:
- Invalid value '[ua2r-website]'. Values must match the following regular expression: '[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?'
I'm in the directory containing that file. Here's my command:
gcloud compute scp [testtext.txt] [ua2r-website]:~/
I've double-checked the spelling and the punctuation of the VM instance, and I can't find a difference.
You need to use gcloud auth login
gcloud auth login - authorize gcloud to access the Cloud Platform with Google user credentials
Write the command:
gcloud auth login
Then you will get a link from GCP to click on. You will get a code from the link; copy it back to the VM. Then you will be authorized to do the operation.
Here are more details:
https://cloud.google.com/sdk/gcloud/reference/auth/login
Remove the ['s and ]'s
gcloud compute scp testtext.txt ua2r-website:~/
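If your gcloud config has no default compute zone set, you may also need to pass the instance's zone explicitly (the zone below is just an example value):
gcloud compute scp testtext.txt ua2r-website:~/ --zone us-central1-a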
You can also drag and drop the files from your local computer's filesystem into the open Unix shell where your project is.
I've updated gcloud to the latest version (159.0.0)
I created a Google Container Engine node, and then followed the instructions in the prompt.
gcloud container clusters get-credentials prod --zone us-west1-b --project myproject
Fetching cluster endpoint and auth data.
kubeconfig entry generated for prod
kubectl proxy
Unable to connect to the server: error executing access token command
"/Users/me/Code/google-cloud-sdk/bin/gcloud ": exit status
Any idea why is it not able to connect?
You can try to run the following to see if the config was generated correctly:
kubectl config view
I had a similar issue when trying to run kubectl commands on a new Kubernetes cluster just created on Google Cloud Platform.
The solution for my case was to activate Google Application Default Credentials.
You can find a link below on how to activate it.
Basically, you need to set an environment variable to the path of the .json file with the credentials from GCP
GOOGLE_APPLICATION_CREDENTIALS -> c:\...\..\..Credentials.json exported from Google Cloud
https://developers.google.com/identity/protocols/application-default-credentials
I found this solution on a Kubernetes GitHub issue: https://github.com/kubernetes/kubernetes/issues/30617
PS: make sure you have also set the following environment variables:
%HOME% to %USERPROFILE%
%KUBECONFIG% to %USERPROFILE%
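On Windows (cmd), setting the variables above for the current session could look like this (a sketch mirroring the values in this answer; the JSON path is a placeholder for wherever you saved the exported key):
set GOOGLE_APPLICATION_CREDENTIALS=C:\path\to\Credentials.json
set HOME=%USERPROFILE%
set KUBECONFIG=%USERPROFILE%
Use setx (or the System Properties dialog) instead of set if you want the variables to persist across sessions.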
It looks like the default auth plugin for GKE might be buggy on Windows. kubectl is trying to run gcloud to get a token to authenticate to your cluster. If you run kubectl config view you can see the command it tried to run, and you can run it yourself to see if/why it fails.
As Alexandru said, a workaround is to use Google Application Default Credentials. Actually, gcloud container has built-in support for doing this, which you can toggle by setting a property:
gcloud config set container/use_application_default_credentials true
or set the environment variable
%CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS% to true.
Using GKE, updating the credentials from the "Kubernetes Engine / Clusters" management page worked for me.
The cluster row provides a "Connect" button that copies the credentials command into the console, and this refreshes the token in use. Then kubectl works again.
Why did my token expire? Well, I suppose GCP tokens are not eternal.
So the button automatically runs the same command as:
gcloud container clusters get-credentials your-cluster ...
Bruno