Error: The VM guest environment is outdated and only supports the deprecated 'sshKeys' metadata item. Please follow the steps here to update

Every time I stop a VM instance from the VM instance details page, or run
sudo reboot
from inside the VM, and then try to connect to the VM using SSH, I get this message:
The VM guest environment is outdated and only supports the deprecated 'sshKeys' metadata item. Please follow the steps here to update.
How can I fix this error and connect to the VM?
PS:
The mentioned steps are at this page,
and I can't follow them because I can't connect to the VM.

On the page you linked to, with instructions, there are TWO primary ways to add the "Guest Environment", which should stop the error you're encountering. I ran into the same error by trying to SSH in as "root". You said that you tried the first method, but first you must gain access with a username that is loaded into the virtual machine via your metadata:
Have you tried using a different username to gain SSH access to the server? Click on the gear at the top right of the browser-based SSH terminal, then select "Change Linux Username" and put in a different name besides "root" (your Google account name will usually work). If this works, then you CAN follow the instructions here; see the sketch below.
If you have tried changing the username but get the same error, have you tried following the instructions that let you run scripts before the virtual machine starts? Those instructions are here.
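Once you can log in with a working, non-root username, the install step from those instructions boils down to something like this (a minimal sketch, assuming a Debian/Ubuntu image; the package name comes from Google's Compute Engine repos and may differ for your distro or image age, so check the linked docs):
# install the guest environment packages, then reboot so they take effect
sudo apt-get update
sudo apt-get install -y google-compute-engine
sudo reboot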


AWS Glue Development Endpoint Not Working properly

I am trying to use a development endpoint to interactively run and edit ETL scripts, but there seem to be some issues with the development endpoint just after creating it: I am getting errors in the Scala/Python REPL and am also unable to open an SSH tunnel to the remote interpreter.
Let me explain what I did exactly: I created a development endpoint in the AWS console with all the default configurations. While creating the development endpoint I only provided three things: the 'Development endpoint name', the 'IAM Role' and my public SSH key. This is how it looks after creation.
Then, right after creating the endpoint, I connect to the Spark/Python REPL. I am able to connect successfully, but within a couple of minutes of connecting the REPL starts throwing errors before I have written a single line of code. This happens in every REPL present in the development endpoint.
Also, when I try to open an SSH tunnel to the remote interpreter to connect my local Zeppelin notebook, it throws "bind: Cannot assign requested address".
A couple of things are working, though:
I am able to SSH to the endpoint.
I created a SageMaker notebook in AWS Glue that is attached to this development endpoint, and this notebook seems to be working fine, although it surely adds extra cost and I don't want to continue using it.
Can anyone please help with what I am doing wrong? Am I missing any important steps that need to be done on the machine right after creating the development endpoint?
Thanks in advance!
I am not very sure about this error, but if you are working with smaller datasets then you would probably like to use the Docker implementation, as it will not add any extra cost and you can go on with your development.
You can refer to this blog on how to set it up:
https://towardsdatascience.com/develop-glue-jobs-locally-using-docker-containers-bffc9d95bd1
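In short, the setup from that blog looks roughly like this (a hedged sketch; the image tag, mount paths and start script are the ones the blog uses for the Glue 1.0 image and may differ for newer Glue versions):
# pull the Glue libs image and start a Jupyter container from it
docker pull amazon/aws-glue-libs:glue_libs_1.0.0_image_01
# expose Jupyter (8888) and the Spark UI (4040), mount your AWS credentials read-only
docker run -itd -p 8888:8888 -p 4040:4040 -v ~/.aws:/root/.aws:ro --name glue_jupyter amazon/aws-glue-libs:glue_libs_1.0.0_image_01 /home/jupyter/jupyter_start.sh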

Browser drops connection during model training

I am currently trying to run a fairly long hyperparameter grid search (4-5 hours) and I keep having issues with JupyterLab (or haven't figured something out yet) on a GCP notebook instance. The browser connection to the notebook keeps dropping, whereas the training process continues just fine. When the training process finishes, there is nowhere to write the output, as the browser connection to the notebook has already dropped.
How can I keep that connection alive, or make sure the output gets written into the notebook even if my laptop gets closed or turned off?
There are multiple problems that may be affecting your notebook. It could be a GCP issue, a network issue... Therefore, you need to provide more information in order to diagnose what is happening. I would recommend you open a ticket with GCP or Jupyter support to conduct a more thorough investigation, as it can be something difficult to diagnose and they will have more tools to do it. Also, what @Joaquim suggested seems like a good workaround for the moment. Anyhow, I have gathered several troubleshooting steps that you can follow to find out whether one of these recurrent issues is the one affecting you:
According to this Jupyter Notebook document, there is a ‘shutdown_no_activity_timeout’ option. The default value is ‘0’, which disables this automatic shutdown. The option might be overridden in the ‘jupyter_notebook_config.py’ file. You may follow these steps to confirm it:
Click on the name of the instance in which your notebook is running on the AI Platform Notebooks page.
Remote access it by clicking "SSH".
Run this in the shell to confirm the existence of the overriding:
ls /home/*/.jupyter/jupyter_notebook_config.py
Run this command to confirm whether the shutdown_no_activity_timeout option is doing the overriding:
cat /home/*/.jupyter/jupyter_notebook_config.py | grep shutdown_no_activity_timeout
Switch the option to ‘0’ if it is set to a different value, and reset the notebook instance on this page to apply the change.
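The fix itself is a one-line change (a hedged sketch; jupyter_notebook_config.py is plain Python, and sed -i edits it in place, assuming the glob matches a single home directory):
# set the idle auto-shutdown back to 0 (disabled), then reset the instance
sudo sed -i 's/^c.NotebookApp.shutdown_no_activity_timeout.*/c.NotebookApp.shutdown_no_activity_timeout = 0/' /home/*/.jupyter/jupyter_notebook_config.py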
According to this other document, it might fail to connect when behind a proxy. You can try disabling your browser’s proxy settings.
You can also try changing the Jupyter port. In this Jupyter issue, the customer reports that his disconnection problem was gone after changing it. If you are using the Chrome browser, could you please open the Inspect panel (Ctrl+Shift+I) and compare your connection symptoms with this image? If you get similar errors, you may try changing the port (c.NotebookApp.port).
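Changing the port is likewise a one-line config edit (a hedged sketch; 8081 is an arbitrary choice, and if the option already exists in the file, edit that line instead of appending):
# move Jupyter to another port, then restart Jupyter for it to take effect
echo "c.NotebookApp.port = 8081" >> ~/.jupyter/jupyter_notebook_config.py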

How to deactivate UFW from outside the VM on Google Cloud Compute Instance

I accidentally enabled UFW on my Google Cloud Compute Debian instance, and unfortunately port 22 is blocked now. I've tried every way to get inside the VM but I can't...
I'm trying to access it through the serial port, but it's asking me for a user and password that were never set.
Does anyone have any idea what I can do?
If I could 'edit' the files on disk, it would be possible to change the firewall rules and disable it. I already thought of mounting the VM disk on another instance, but Google doesn't allow you to "hot detach" it.
I also tried creating another VM from a snapshot of the VM disk, but of course the new instance came with the same problem.
There are lots of important files inside and I can't get in...
This is the classic example of locking yourself out of the house with the key inside.
There are several ways to get back inside a virtual machine when SSH is not currently working in Google Cloud Platform; from my point of view the easiest is to make use of a startup script.
You can use one to run a script as root when your machine starts; in this way you can change the configuration without accessing the virtual machine.
Therefore you can:
simply launch some commands to deactivate UFW and then access the machine again;
if that is not enough and you really need access to fix the configuration, you can set a password for the root user by means of the startup script (see the sketch below) and then log in through the serial console, i.e. without SSH (it is basically as if you had your keyboard directly connected to the hardware). Note that as soon as you access the instance you should remove, or at least change, that password, since it was visible to anyone with access to the project. A safer way is to write the password in a private file in a bucket and download it onto the instance with the startup script.
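A minimal sketch of such a startup script (the password is a throwaway placeholder; change or remove it as soon as you are back in):
#! /bin/bash
# set a root password so the serial console login prompt can be answered
echo 'root:MyTempPass123' | chpasswd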
Note that you can redirect the output of commands to a file and then upload the file to a bucket if you need to debug the script, read the content of a file, understand what is going on, etc.
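For example (a hedged sketch; "my-debug-bucket" stands in for a bucket the instance's service account can write to):
#! /bin/bash
# capture the firewall state and ship it to a bucket so it can be read without SSH
ufw status > /tmp/startup-debug.log 2>&1
gsutil cp /tmp/startup-debug.log gs://my-debug-bucket/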
The easiest way is to create a startup script that disables UFW, which gets executed whenever the instance is booted:
Go into your Google Cloud Console, go to your VM instance and click the Edit button.
Scroll down to "Custom metadata", add "startup-script" as the key and the following script as the value:
#! /bin/bash
/usr/sbin/ufw disable
Click save and reboot your instance.
Delete that startup script and click Save, so that it won't get executed on future boots.
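The same steps can be done from the gcloud CLI without touching the console (VM_NAME and ZONE are placeholders for your instance name and zone):
# attach the startup script as metadata, then reset the instance so it runs
gcloud compute instances add-metadata VM_NAME --zone ZONE --metadata startup-script='#! /bin/bash
/usr/sbin/ufw disable'
gcloud compute instances reset VM_NAME --zone ZONE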
You can try the Google serial console. From there you can enable SSH:
https://cloud.google.com/compute/docs/instances/interacting-with-serial-console
ufw allow ssh

username of google cloud instance - where to find out [closed]

My Gmail account I used to create my Google Cloud instance is johnsmith@gmail.com.
When I tried to connect to the instance with WinSCP using "johnsmith" it failed, but it worked with "JOHN.SMITH"? Where can I find out my exact username for the Google Cloud instance?
UPDATE: I tried the web-based SSH and typed whoami; it returned "johnsmith". "john.smith" is my local Windows username. It is all very confusing.
ANOTHER UPDATE: My Google Cloud instance is Ubuntu 16.04 and I am connecting from a Windows 7 local machine.
P.S. I made up the name, but preserved the format for this question.
P.P.S. SO does not let me post because I don't meet their standard. Here is some bogus code to meet the standard:
print("Hello, World!")
Just to clarify, I presume you are trying to connect to a Compute Engine Windows Instance in your Google Cloud Platform project, is that correct?
In order to confirm the username of this instance I would recommend following these instructions:
1) Login to Google Cloud Platform Console at this link https://console.cloud.google.com/
2) In the Console main menu (the three horizontal stripes in the top left corner) navigate to 'Compute Engine' and then click on 'VM instances'.
3) You should now be able to see the name of your VM instance; click on it.
4) You will now be in the "VM instance details" screen.
Windows
5) If you now click on "Set Windows Password", a new pane will open that contains your username for logging in to the instance.
6) If you now press "Set" in the same window, you will receive your Windows password (copy or make a note of this).
By using the username and password you retrieved in steps 5 and 6 you will be able to access the instance.
You now also have the option of pressing "RDP" in the "VM instance details" page to gain access to your instance via RDP and change your username/password to something more memorable once you have access to the operating system.
Linux
Alternatively, if you are trying to discover the username of a Linux VM instance, you can confirm it by accessing the machine via SSH from the Console. You will then be able to set the password of the machine.
To SSH into the machine, follow the same first 4 steps in the above instructions, then:
Click on "SSH". A new terminal window will open and you will gain access the machine.
You will be able to see your username in the shell, or alternatively you can type whoami in the shell and after pressing enter it will print out your username.
To set a password for the machine, type sudo passwd then press return. You will then be prompted to enter a new password.
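Put together, the two commands from those steps are:
whoami        # prints the username you are currently logged in as
sudo passwd   # prompts you to set a new password for that user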
In a Cloud Shell session, run:
gcloud compute os-login describe-profile
You will see your username.
In GCP, I created an Ubuntu 18.04 disk/instance (and created a Discourse forum in it).
I found that when we create a new instance, there is a bar/panel on the right side of the screen which shows our credentials to log in to that instance/website. But I found no way to invoke that details screen again (I remember that it listed all the software components that went into my disk/instance installation).
Anyway, I remembered the username was 'user', but didn't keep a record of the auto-generated long alphanumeric password. In the end I found the same password by clicking on the instance name: where the custom metadata details start, the first line showed the password, labelled Bitnami-base-password.
Using the Google Cloud SDK shell, type the following command:
gcloud compute ssh --project PROJECT_ID --zone ZONE VM_NAME
Replace the following:
PROJECT_ID: the ID of the project that contains the instance
ZONE: the name of the zone in which the instance is located
VM_NAME: the name of the instance
https://cloud.google.com/compute/docs/instances/connecting-to-instance

Pulling file from the Google Cloud server to local machine

Linux n00b here, having trouble pulling a file from the server to my local Windows 7 Professional 64-bit machine. I am using Wowza to stream live video, and I am recording these live videos to my Google Cloud instance, located here:
/usr/local/WowzaStreamingEngine/content/myStream.mp4
When I ssh:
gcutil --project="myprojectname" pull "my instance" "/usr/local/WowzaStreamingEngine/content/myStream.mp4" "/folder1"
I receive a permission denied error. When I try saving another folder deep on my local machine, i.e. "/folder1/folder2", the error returned is file or directory not found. I've checked that I have write permissions set on my local Windows 7 machine, so I do not think it is a permissions error. Again, apologies for the n00b question, I've just been stuck here for hours.
Thx,
~Greg
Comment added 7/18:
I entered the following through ssh:
gcutil --project="Myproject" pull "instance-1" "/usr/local/WowzaStreamingEngine/content/myStream.mp4" "/content"
By entering this I'm expecting the file myStream.mp4 to be copied to my C:/content folder. The following is returned:
Warning: Permanently added '107.178.218.8' (ECDSA) to the list of known hosts. Enter passphrase for key '/home/Greg/.ssh/google_compute_engine':
Here I enter the passphrase, and the following error is returned:
/content: Permission denied
I have write permissions set up on this folder. Thanks! – Greg
-=-=-==->
To answer the question about using Cygwin, I'm not familiar with Cygwin and I do not believe it was used in this instance. I ran these commands through the Google Cloud SDK shell which I installed per the directions found here: https://developers.google.com/compute/docs/gcutil/.
What I am doing:
After setting up my Google Cloud instance I open the Google Cloud SDK shell and enter the following:
gcutil --service_version="v1" --project="myproject" ssh --zone="us-central1-a" "instance-1"
I then am prompted for a passphrase which I create and then run the following:
curl http://metadata/computeMetadata/v1/instance/id -H "X-Google-Metadata-Request:True"
This provides the password I use to login to the Wowza live video streaming engine. All of this works beautifully, I can stream video and record the video to the following location: /usr/local/WowzaStreamingEngine/content/myStream.mp4
Next I attempt to save the .mp4 file to my local drive and that is where I'm having issues. I attempt to run:
gcutil --project="myproject" pull "instance-1" "/usr/local/WowzaStreamingEngine/content/myStream.mp4" "C:/content"
I also tried C:/content, C:\content and C:\\content.
These attempts threw the following error:
Could not resolve hostname C: Name or service not known
Thanks again for your time, I know it is valuable, I really appreciate you helping out a novice.
Update: I believe I am close, thanks to your help. I switched to the local C drive and entered the command as you displayed in your answer update. It now returns a new, not-before-seen error:
Error: API rate limit exceeded
I did some research on S.O., and some suggestions were that billing is not enabled or the relevant API is not enabled, and that I could solve it by turning on Google Compute Engine. Billing has been enabled for a few weeks now on my project. In terms of Google Compute Engine, below are what I believe to be the relevant items turned on:
User Info: Enabled
Compute: Read Write
Storage: Full
Task Queue: Enabled
BigQuery: Enabled
Cloud SQL: Enabled
Cloud Database: Enabled
The test video I recorded was short and small in size. I also have not done anything else with this instance, so I am at a loss as to why I am getting the API rate exceeded error.
I also went to the Google APIs console. I see very limited usage reported so, again, I am not sure why I am exceeding the API limit. Perhaps I do not have something set appropriately in the APIs console?
I'm guessing you're using Cygwin here (please correct me if I'm wrong).
The root directory for your Cygwin installation is most likely C:\cygwin (see FAQ) and not C: so when you say /content on the command line, you're referring to C:\cygwin\content and not C:\content.
Secondly, since you're likely running as a regular user (and not root) you cannot write to /content so that's why you're getting the permission denied error.
Solution: specify the target directory as C:/content (or C:\\content) rather than /content.
Update: from the update to the question, you're using the Google Cloud SDK shell, not Cygwin, so the above answer does not apply. The reason you're seeing the error:
Could not resolve hostname C: Name or service not known
is because gcutil (like ssh) parses destinations which include : as having the pattern [hostname]:[path]. Thus, you should avoid : in the destination, which means we need to drop the drive spec.
In this case, the following should suffice, assuming that you're currently at a prompt that looks like C:\...>:
gcutil --project=myproject pull instance-1 /usr/local/WowzaStreamingEngine/content/myStream.mp4 \content
If not, first switch to the C: drive by issuing the command:
C:
and then run the above command.
Note: I removed the quotes from the command line because you don't need them when parameters don't contain spaces.
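As an aside, gcutil has since been deprecated in favor of gcloud; the equivalent transfer today would look roughly like this (a hedged sketch; running it from the local directory you want the file in keeps the destination free of a drive-letter colon):
# copy the recording from the instance into the current directory
gcloud compute scp --project myproject --zone us-central1-a instance-1:/usr/local/WowzaStreamingEngine/content/myStream.mp4 ./myStream.mp4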