unknown option "-L3000:localhost:3000" while deploying Grafana - google-cloud-platform

I was following the single node file server tutorial on Google Cloud Platform, where it told me to create an SSH tunnel using the following gcloud command:
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=PROJECT --zone=ZONE INSTANCE_NAME
I enter the command in the Cloud SDK shell, and then I get this message:
unknown option "-L3000:localhost:3000"
What did I do wrong? Any help will be appreciated. Thank you.

Try --ssh-flag="-L 3000:localhost:3000"; PuTTY expects a space between the -L flag and the port definition.
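With that change, the full command should look something like this (a sketch, reusing the PROJECT, ZONE and INSTANCE_NAME placeholders from the question):
gcloud compute ssh --ssh-flag="-L 3000:localhost:3000" --project=PROJECT --zone=ZONE INSTANCE_NAME
Once the tunnel is established, Grafana should be reachable in your browser at http://localhost:3000.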

Related

gcloud SSH in same terminal window

I use gcloud cloud-shell ssh to connect to Google Cloud Shell. However, this spawns a new (PuTTY) window, which bothers me. Is there a way (some -- flag, etc.) to use it from the same console window?
Thanks in advance!
The CLI is just a wrapper that launches PuTTY (on Windows) with the correct command-line options; the SSH functionality is not built into gcloud.
Using the --command flag, you can run a command on the virtual machine without opening a new console window.
gcloud compute ssh your-instance --zone=your-zone --command="ps aux"
It runs the command on the target instance and then exits.
Run gcloud compute ssh --help to learn more.
Instead of directly opening the SSH session with gcloud beta compute ssh, you can set up a tunnel from a local port to the GCP instance and then use your preferred terminal to ssh over the tunnel.
You set up the tunnel with:
gcloud compute start-iap-tunnel --zone <zone> <name of instance> 22 --project <name of project> --local-host-port=localhost:4226
You open an SSH connection to localhost:4226 as you would do to the instance with:
ssh <user>@localhost -p 4226 -i <identity file>
(EDIT: make sure to include the identity file for GCP access)
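If you also need port forwarding over that tunnel (for example the Grafana case from the first question), you can add the usual -L option to the same ssh command; a sketch, assuming the IAP tunnel on local port 4226 is still running:
ssh <user>@localhost -p 4226 -i <identity file> -L 3000:localhost:3000
This forwards local port 3000 through the tunnel to port 3000 on the instance.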

Unable to connect cloudbuild to compute engine

I want to execute a script that is on my Compute Engine VM using Cloud Build, but somehow Cloud Build is not able to SSH into my VM. On the VM, "OS Login" is enabled and it only has an internal IP.
Here is my cloudbuild.yaml file:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    id: Update staging server
    entrypoint: /bin/sh
    args:
      - '-c'
      - |
        set -x &&
        gcloud compute ssh vm_name --zone=us-central1-c --command='/bin/sh /pullscripts/pull.sh'
I am attaching screenshots of the errors:
cloudbuild error page 1
cloudbuild error page 2
Also, my question is: is it possible to connect to a VM using the Cloud SDK if OS Login is enabled?
You'll probably have to add the roles/iap.tunnelResourceAccessor role to the Cloud Build service account. Please read this Google documentation, which shows you what to do for a certain error code.
Error code 4033
Either you don't have permission to access the instance, the instance doesn't exist, or the instance is stopped.
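Granting that role would look roughly like the following; a sketch, assuming the default Cloud Build service account PROJECT_NUMBER@cloudbuild.gserviceaccount.com (check yours on the IAM page):
gcloud projects add-iam-policy-binding PROJECT_ID --member=serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com --role=roles/iap.tunnelResourceAccessor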
In fact, you can use Cloud Build to connect to any VM; you just need a Docker configuration and to upload the files (private key, scripts, etc.). I have this repo that solves the problem: https://github.com/jmbl1685/gcloudbuild-vm-ssh-connect
I hope the above helps you.
Try adding --internal-ip, which looks as follows:
gcloud compute ssh vm_name --zone=us-central1-c --internal-ip

whoami produces different names on gcp terminal and local terminal

I've somehow ended up as two different users depending on where I'm connecting from. I think it's the result of my org creating users for different projects. If I execute whoami from my local terminal I'm foo but if I execute the command from the ssh.cloud terminal I'm foo_foobar.
I have a folder projects on the VM and I can see it from both terminals, but all the subfolders that belong to foo are not visible to foo_foobar. OK, I get it.
The biggest issue is that from my local terminal, as foo I can't pull from or push to a cloud repo.
So my ask is: does there exist a CLI command that will let me connect as foo_foobar from my local terminal? I've looked at my config with gcloud config list and the email and project ID are correct. Thanks
The solution was to use this ssh command: gcloud compute ssh foo_foobar@foo-vm --zone us-west1-b
The confusion was caused by the command provided in the dashboard's SSH dropdown (View gcloud command), which is gcloud beta compute ssh with no "foo_foobar@foo-vm". In other words, the provided ssh command does not indicate that username@instance_name should be part of the command.

kubectl get componentstatus Unable to connect to the server: dial tcp xx.xxx.xx.x:xxx: i/o timeout

While I'm trying to get the pod or node states from Google Cloud Platform Cloud Shell, I'm facing this error. Can someone please help me? I can see the output of kubectl config view.
Posting this answer as a community wiki for better visibility and because the possible solution was posted in the comments:
Does this answer your question? Unable to connect to the server: dial tcp i/o time out
Adding to that:
Below command:
$ kubectl config view
is used to show the configuration stored in your ~/.kube/config file. The fact that you can see the output of this command doesn't mean you have the correct cluster configured for use with kubectl.
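A quick way to check which cluster kubectl is actually pointing at (standard kubectl commands, shown here as a sketch):
$ kubectl config current-context - prints the active context
$ kubectl config get-contexts - lists all contexts stored in your ~/.kube/config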
From the perspective of Google Cloud Platform and Cloud Shell
There is an official documentation regarding troubleshooting issues with GKE:
Cloud.google.com: Kubernetes Engine: Docs: Troubleshooting
There could be several reasons why you are getting the following error:
You are referencing wrong cluster in your ~/.kube/config file.
$ gcloud container clusters get-credentials CLUSTER_NAME --zone=ZONE - you will need to run this command to fetch the correct configuration
You can also get the above command from the Kubernetes Engine page (Connect button)
You are referencing a cluster in your ~/.kube/config file that was deleted
You created a private GKE cluster
For more information you can look in the Cloud Console -> Kubernetes Engine -> CLUSTER_NAME
You can also run:
$ gcloud container clusters list - this command will show clusters and their state (status) they are in
$ gcloud container clusters describe CLUSTER_NAME --zone=ZONE - this command will show you the configuration of the cluster

Can't connect to GCP cluster VM

I'm following this tutorial and can't connect to a GCP VM cluster using SSH port forwarding.
I run this command line:
$ gcloud compute ssh cluster-5b2b-m --zone=asia-northeast2-b \
--project=*** -- -L 8787:localhost:8787
but when I try to open http://localhost:8787 in the browser I get an error saying "This site can't be reached".
Any suggestions please?
For example, the full command should be as follows; then gcloud will open a tunnel to your cluster. I think you forgot to type [CLUSTER_NAME]-m:
gcloud compute ssh \
--zone=[CLUSTER_ZONE] \
--project=[PROJECT_ID] \
[CLUSTER_NAME]-m -- \
-L 8787:localhost:8787
Yeah, the issue is that you are trying to access "localhost" in the browser, but your cluster is in Google Cloud.
You can try to access RStudio at http://[CLUSTER_NAME]-m:8787 as the tutorial suggests, or http://[CLUSTER_NAME]-m:8088 from the browser; if the configuration is correct, it should work.
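Note that reaching http://[CLUSTER_NAME]-m:8787 by hostname only works if your browser can resolve and route to the master node; tutorials for cluster web interfaces usually do this through a SOCKS proxy rather than a single -L forward. A sketch, assuming standard ssh dynamic port forwarding:
gcloud compute ssh [CLUSTER_NAME]-m --zone=[CLUSTER_ZONE] --project=[PROJECT_ID] -- -D 1080 -N
Then start a browser configured to use the proxy (for example Chrome with --proxy-server="socks5://localhost:1080"), and requests to [CLUSTER_NAME]-m will go through the tunnel.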