I am using gcloud ssh to connect to a GCE instance.
> gcloud compute --project "first-medium-2****8" ssh --zone "us-east1-b" "instance-2"
I entered the above command in PowerShell, but it replies
>Using username "hogehoge".
>Authenticating with public key "DESKTOP-****hogehoge"
and then stops. Nothing happens after that.
Yesterday I did the same thing and there was no problem.
But today it fails. I tried gcloud init and reinstalled the Cloud SDK.
But nothing changed. What should I do to solve this problem?
Additional information:
OS Windows10
Google Cloud SDK 237.0.0
PowerShell 5.1.17134.590
PuTTY 0.70 (only one installation)
Note 1: I found I could use Cloud Shell without a problem. But Cloud Shell has a timeout, so I prefer gcloud to Cloud Shell.
Note 2: When I use Cloud Shell, it connects as "tomotomo",
not "hogehoge", which is the username when I use gcloud.
When I run "gcloud compute ssh VM_NAME --verbosity=debug --log-http"
it replies
>DEBUG: SSH Known Hosts File [C:\Users\hogehoge\.ssh\google_compute_known_hosts] could not be opened: Unable to read file [C:\Users\hogehoge\.ssh\google_compute_known_hosts]: [Errno 2] No such file or directory: u'C:\\Users\\hogehoge\\.ssh\\google_compute_known_hosts'
DEBUG: Current SSH keys in project: [u'tomotomo:ssh-rsa AAAAB***
DEBUG: Running command [C:\Users\hogehoge\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\sdk\putty.exe -t -i C:\Users\hogehoge\.ssh\google_compute_engine.ppk hogehoge@3*****].
DEBUG: Executing command: [u'C:\\Users\\hogehoge\\AppData\\Local\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\sdk\\putty.exe', u'-t', u'-i', u'C:\\Users\\hogehoge\\.ssh\\google_compute_engine.ppk', u'hogehoge@3*****']
The output was very long, so I extracted only the parts I think are important.
Running
putty -cleanup
solves this problem.
PuTTY saves some information in the registry (IP addresses, host public keys, and so on).
This command removes those registry entries and the random seed file.
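If you want to see what PuTTY has stored before wiping it, the entries live under HKEY_CURRENT_USER\Software\SimonTatham\PuTTY; a quick way to inspect them (a sketch, assuming PuTTY's standard registry layout):
reg query HKCU\Software\SimonTatham\PuTTY\SshHostKeys
reg query HKCU\Software\SimonTatham\PuTTY\Sessions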
Running "putty -cleanup" as per #redpawn fixed the issue.
Related
I am trying to authenticate to the gcloud SDK using gcloud init.
I get a URL I'm supposed to visit in order to copy a token and return it to the CLI... but instead of a token, I get this error:
Authorization error
Error 400: invalid_request
Missing required parameter: redirect_uri
Is this a bug?
gcloud version info:
Google Cloud SDK 377.0.0
alpha 2022.03.10
beta 2022.03.10
bq 2.0.74
bundled-python3-unix 3.8.11
core 2022.03.10
gsutil 5.8
I am running gcloud init on WSL2 (Ubuntu 18.04). This error occurs right after installing gcloud with sudo apt install google-cloud-sdk.
I had the same problem; gcloud has slightly changed the way its auth flow works.
Run gcloud auth login and then copy the whole output (not just the URL) to a terminal on a computer that has both a web browser and gcloud CLI installed. The command you should copy looks like
gcloud auth login --remote-bootstrap="https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=****.apps.googleusercontent.com&scope=openid+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Faccounts.reauth&state=****&access_type=offline&code_challenge=****&code_challenge_method=S256&token_usage=remote"
When you run that on your computer that has a web browser, it will open a browser window and prompt you to log in. Once you authorize your app in the web browser you get a new URL in your terminal that looks like
https://localhost:8085/?state=****&code=****&scope=email%20openid%20https://www.googleapis.com/auth/userinfo.email%20https://www.googleapis.com/auth/cloud-platform%20https://www.googleapis.com/auth/appengine.admin%20https://www.googleapis.com/auth/compute%20https://www.googleapis.com/auth/accounts.reauth&authuser=0&hd=****&prompt=consent
Paste this new URL back into the prompt on your headless machine after "Enter the output of the above command:" (in your case, this would be in your WSL2 terminal). Press enter and you get the output
You are now logged in as [****].
Your current project is [None]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
[8]+ Done code_challenge_method=S256
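To recap the flow as a sketch (the prompts and port may differ slightly between SDK versions; everything here comes from the steps above):
# On the headless WSL2 machine: start the login and copy the full
# "gcloud auth login --remote-bootstrap=..." command it prints.
gcloud auth login
# On a machine with a browser and the gcloud CLI: paste and run that
# command, complete the login, then copy the https://localhost:8085/?state=...
# URL it prints.
# Back on the headless machine: paste that URL at the
# "Enter the output of the above command:" prompt.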
Try
gcloud init --console-only
Then you will get a URL that works.
You must log in to continue. Would you like to log in (Y/n)? y
WARNING: The --[no-]launch-browser flags are deprecated and will be removed on June 7th 2022 (Release 389.0.0). Use --no-browser to replace --no-launch-browser.
Go to the following link in your browser:
https://accounts.google.com/o/o....
Update 2022-06-20: the --console-only option was removed in version 389.0.0.
So instead use
gcloud init --no-browser
There are some workarounds, and they depend on your particular Windows environment.
In this post and in this one you can check the issues most closely related to gcloud running in WSL.
Here you can find some related Google Groups threads that might be helpful.
Finally, you could check some Windows troubleshooting guides that can help with WSL2 issues in your own environment.
EDIT:
It seems this answer and the one from @K.I. give other commands that don't rely on implementation details. I've tested these 3 commands:
gcloud init --console-only
gcloud auth login --no-launch-browser
gcloud init --no-launch-browser
Original answer, another workaround (17/07/2022):
DISPLAY=":0" gcloud auth login
is a workaround mentioned in this issue. Instead of requiring you to install the gcloud CLI outside WSL2, it pretends there is a browser.
A link is printed; click it, log in with your browser, and you're authenticated with the CLI.
Then run gcloud init again.
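Putting the workaround together, the sequence is (a sketch; the DISPLAY value just needs to be non-empty so gcloud believes a browser is available):
# Pretend an X display (and hence a browser) exists
DISPLAY=":0" gcloud auth login
# After authenticating via the printed link, re-run initialization
gcloud init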
You can avoid the error by using another method of gcloud installation:
curl https://sdk.cloud.google.com | bash
exec -l $SHELL #restart shell
gcloud init
I am trying to sign in to the Cloud SDK with the command gcloud auth login, and I select my Google account in the browser. After I click Allow, the terminal says:
ERROR: gcloud crashed (ServerNotFoundError): Unable to find the server at www.googleapis.com
If you would like to report this issue, please run the following command:
gcloud feedback
To check gcloud for common problems, please run the following command:
gcloud info --run-diagnostics
And when I run the command gcloud info --run-diagnostics it also stops with the error:
ERROR: Reachability Check failed.
Cannot reach https://www.googleapis.com/auth/cloud-platform (ServerNotFoundError)
Network connection problems may be due to proxy or firewall settings.
My config is the default one without any modifications.
I could sign in to the Cloud SDK with no issues for a long time.
I am on Windows 10.
I tried signing in both with the Cloud SDK Shell and the Windows terminal, both as administrator and not as administrator.
How do I fix this error?
UPDATE:
I ran tracert -4 www.googleapis.com (and also with -6) and this is the result:
Unable to resolve target system name www.googleapis.com.
I am working from home, and I don't know what a network proxy is; I might be accidentally using one.
You may have enabled a proxy with gcloud; use gcloud config list to see the proxy settings.
To unset the proxy use gcloud config unset proxy/[param], where param is address, port, etc.
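For example, to inspect and then clear the common proxy properties (only unset the ones gcloud config list actually shows as set):
gcloud config list
gcloud config unset proxy/type
gcloud config unset proxy/address
gcloud config unset proxy/port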
You need to log in to the gcloud SDK first using this command:
gcloud auth login
It will open a Google sign-in page in the browser. Select your account and then you will get a confirmation on the command line that you have been authenticated. Then try what you wanted to do.
I faced the same issue when connected to a VPN. I disconnected from the VPN, ran the command below, and it worked.
gcloud auth login
I am trying to copy files from my instance to my local directory using the following command:
gcloud compute scp <instance-name>:~/<file-name> ~/Documents/
However, it shows the error mentioned below:
$USER/Documents/: Is a directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Copying from a local directory to GCE works fine.
I have checked Stanford's tutorial and Google's documentation as well.
I have another instance where there is no issue like this.
I suspect it might be an issue with SSH keys.
What might have gone wrong?
Your command is correct as long as your source and destination paths are correct.
The command as you've posted in your question works for me when copying a file from the Google Compute Engine VM to my local machine.
$ gcloud compute scp vm1:~/.bashrc ~/Documents/
.bashrc 100% 3515 3.4KB/s 00:00
I also tried doing the copy from other side (i.e. from my local machine to GCE VM) and it works:
$ gcloud compute scp ~/Documents/.bashrc vm1:~/temp/
.bashrc 100% 3515 3.4KB/s 00:00
$ gcloud compute scp ~/Documents/.bashrc vm1:~/.bashrc-new
.bashrc 100% 3515 3.4KB/s 00:00
gcloud relies on the scp executable in your PATH. The arguments you provide to the gcloud compute scp command are passed through to the scp binary. Assuming your source and destination paths are correct, it should work.
Recursive copying using scp
Based on your particular error message, though, I've only seen that variation appear when the source path you're trying to copy from is a directory instead of a file. For that case, you can pass the --recurse argument (similar to the -r argument supported by regular scp), which recursively copies all files and directories under the specified directory.
gcloud compute scp --recurse SRC_PATH DEST_PATH
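For instance, pulling a whole directory from the VM to the local machine might look like this (vm1 and both paths are placeholders):
gcloud compute scp --recurse vm1:~/mydir ~/Documents/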
To copy files from the VM to your desktop you can simply SSH into the VM from the browser; in the top right corner there is a settings button with a download file option, where you just enter the path of the file.
If it is a folder, first zip the folder and then download it, as in the example below.
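For example, on the VM you could bundle the folder into a single archive first (the folder name is a placeholder, and zip may need to be installed):
zip -r myfolder.zip myfolder
Then enter myfolder.zip as the path in the download dialog.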
Everything was fine, except that I was trying to run the command in a terminal connected to the GCE instance instead of my local terminal.
oyashi@oyashi-torch-instance:~$ gcloud compute scp oyashi-torch-instance:~/spring1617_assignment1.zip ~/Documents/
/home/oyashi/Documents/: Is a directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
But when I tried it from my local terminal, this happened:
oyashi@oyashi:~/Documents$ gcloud compute scp oyashi-torch-instance:~/spring1617_assignment1.zip ~/Documents/
spring1617_assignment1.zip 100% 42KB 42.0KB/s 00:00
Thank you everyone for your comments and help. I know it's a silly mistake on my end, but I posted this answer so that others might learn from my silliness.
If you need to pass the zone and project name, the following worked for me
(the instance name is the name you chose in the GCP instances page):
gcloud beta compute scp --project "project_name" --zone "zone_name" instance_name:~jupyter/file_name /home/Downloads
I met the same problem. The point is that you should run the scp command from a local terminal, not from the cloud terminal.
For copying a file from an Ubuntu VM to your local machine:
For example, you have an instance named bhk.
Run a basic nginx server, copy the files into /var/www/html (nginx's serving directory), and then from your local machine simply run wget <vm's IP>/<your file path>.
For example, if my VM's IP is 1.2.3.4 and I want to copy /home/me/myFolder/myFile, I simply copy this file into /var/www/html
and then run wget 1.2.3.4/myFile
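As a sketch, the steps would be roughly as follows (assumes a stock nginx install on Debian/Ubuntu; note this exposes the file to anyone who can reach the VM's IP, so remove it afterwards):
# on the VM
sudo apt-get install -y nginx
sudo cp /home/me/myFolder/myFile /var/www/html/
# on the local machine
wget http://1.2.3.4/myFile
# clean up on the VM when done
sudo rm /var/www/html/myFile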
This works for me:
gcloud compute scp --project "my-project" ./my-file.zip user#instance-1:~
--project - Google Cloud project name
my-file.zip - local file to send to the VM
user - VM Linux username
instance-1 - instance name (VM name)
~ - destination path on the instance
I use the command below to upload a directory from my local machine to a remote directory:
gcloud compute scp --recurse myweb-app/www/* user#instant-name:/var/www/html/sub-sites/myweb-app/www/
I'm using EMR and wanted to use Jupyter (IPython), so I added this bootstrap action to the cluster:
s3://elasticmapreduce.bootstrapactions/ipython-notebook/install-ipython-notebook
I performed the port tunneling to access Jupyter from my local host and it works fine, but it asks for a login password. I tried empty, I tried hadoop, but no luck. Does anybody know what the Jupyter password is?
I ran into this problem as well when I used the same bootstrap action. I tried adding Args=[--password, jupyter], which I also could not get working. That came from this AWS forum:
Name='Install Jupyter notebook',Path="s3://aws-bigdata-blog/artifacts/aws-blog-emr-jupyter/install-jupyter-emr5.sh",Args=[--r,--julia,--toree,--torch,--ruby,--ds-packages,--ml-packages,--python-packages,'ggplot nilearn',--port,8880,--password,jupyter,--jupyterhub,--jupyterhub-port,8001,--cached-install,--notebook-dir,s3://<your-s3-bucket>/notebooks/,--copy-samples]
What I did instead was follow these instructions for installing Anaconda directly on the EMR instance using the CLI. If you follow the first part you should be able to get it up and running. To summarize:
ssh into your master EMR instance using the .pem file you saved
once there, you'll want to install Anaconda with superuser privileges: sudo wget http://repo.continuum.io/archive/Anaconda3-4.1.1-Linux-x86_64.sh, then bash Anaconda3-4.1.1-Linux-x86_64.sh
make sure you're using the Anaconda version of Python: which python
if you're not, reload your shell configuration: source .bashrc
now make a Jupyter config file: jupyter notebook --generate-config
cd into the Jupyter folder: cd ~/.jupyter/
update the config file: vi jupyter_notebook_config.py
In the config file add the following lines:
c = get_config()
c.NotebookApp.ip = '*'
c.NotebookApp.open_browser = False
c.NotebookApp.port = 6789  # pick whichever port you want
exit the config editor and run Jupyter via: jupyter notebook
this should run a notebook with no active kernels (for now), but it will give you the token you're looking for: http://localhost:6789/?token=xxxxxx
Leave this running and open a new terminal window. Now you'll want to tunnel to the EMR instance per this AWS blog post, making the tunneled port the same as the one you specified in the config file: ssh -o ServerAliveInterval=10 -i <<credentials.pem>> -N -L 6789:<<master-public-dns-name>>:6789 hadoop@<<master-public-dns-name>>
Opening localhost:6789 in the browser should prompt you with the Jupyter page to enter your password or token. Enter the token that was generated in the step above and you should be good to go.
Hope this helps! There might be a less convoluted way, but this is what ended up working for me.
I ssh into GCE and submit a job using the following command:
gcutil ssh vmName sh /bin/someScript.sh
This works fine. Now I'd like to run the job as a different user on the GCE instance, so I tried:
gcutil ssh -ssh-key-file MY_SSH_KEY_FILE anotherUser#vmName sh /bin/someScript.sh
but it didn't work, throwing the error "FATAL Flags parsing error: option -s not recognized".
Can anybody tell me what's wrong with the command, or whether more needs to be done?
gcutil is deprecated; you should be using gcloud compute ssh instead.
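A rough equivalent of the original gcutil invocation (the zone is a placeholder; gcloud compute ssh accepts a user@instance target and a --command flag):
gcloud compute ssh anotherUser@vmName --zone ZONE --command "sh /bin/someScript.sh"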
To make things easier you can also try something like this with plain ssh:
sudo ssh -i /home/testuser/.ssh/google_compute_engine testuser#IP_address sh /script/script.sh
You need to make sure that the key for user "testuser" is added to the server in Developers Console > Compute Engine > your GCE instance > SSH Keys.
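If the key isn't registered yet, one way to create and register it is to let gcloud do the work; on first use it generates ~/.ssh/google_compute_engine and uploads the public key for the requested user (names here are placeholders):
gcloud compute ssh testuser@vmName --zone ZONE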