My Google Cloud instance has a problem that is preventing me from accessing it over SSH. I would like to access the boot disk image from Cloud Shell so I can download my files. How can I do that?
Thanks in advance
If you need to recover data from the boot disk of the problematic VM instance, you can detach the boot disk and then attach it as a secondary disk on a new instance so that you can access the data.
Detach the boot disk from the existing VM instance (stop the VM first if it is still running) by running the following command:
gcloud compute instances detach-disk [INSTANCE_NAME] --disk=[BOOT_DISK_NAME]
Create a new VM and attach the old VM's boot disk as a secondary disk by running the following command (when no boot disk is specified, gcloud creates a fresh one from the default image):
gcloud compute instances create [NEW_VM_NAME] --disk=name=[BOOT_DISK_NAME],boot=no,auto-delete=no
Connect to your new VM using SSH:
gcloud compute ssh [NEW_VM_NAME]
Refer to the documentation on troubleshooting SSH, which describes common errors you may run into when connecting to virtual machine (VM) instances using SSH, as well as ways to diagnose and resolve failed SSH connections.
Create a new VM with a brand-new disk. Add the problematic boot disk as an additional disk. Start your new VM, log into it, and browse the additional disk to get your files.
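A minimal sketch of how to find and mount the attached disk once you are inside the new VM; the device name /dev/sdb1 and the mount point are assumptions, so check the output of lsblk first:
lsblk                                  # identify the old boot disk, e.g. /dev/sdb with root partition /dev/sdb1
sudo mkdir -p /mnt/recovery
sudo mount /dev/sdb1 /mnt/recovery     # add -o ro to mount read-only if you prefer
ls /mnt/recovery                       # your files should be here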
Can anyone please help me with this? I'm not able to access a Debian 8 GCP VM via SSH or the serial console. The VM had an additional disk, and after a restart I could no longer SSH into it. When I tried to connect via the serial port, it showed me the message below.
You are in emergency mode.
Cannot open access to console, the root account is locked.
See sulogin(8) man page for more details
You cannot recover the virtual machine - just the data.
Create a new instance. Detach the disk(s) and attach them to the new instance. Copy your data to the new instance. Optionally, delete the old instance and disks.
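A rough sketch of those steps with gcloud; the names are placeholders, and the mounting and copying inside the new instance works as described in the previous answer:
gcloud compute instances stop [OLD_INSTANCE]                              # a boot disk can only be detached from a stopped VM
gcloud compute instances detach-disk [OLD_INSTANCE] --disk=[DISK_NAME]
gcloud compute instances attach-disk [NEW_INSTANCE] --disk=[DISK_NAME]    # attaches it as a secondary, non-boot disk
Then SSH into [NEW_INSTANCE], mount the disk, and copy your data off.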
I've been trying to connect to a VM instance for the past couple of days now. Here's what I've tried:
Trying to SSH into it returns username@ipaddress: Permission denied (publickey).
Using the Google Cloud SDK returns this:
No zone specified. Using zone [us-central1-a] for instance: [instancename].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
FATAL ERROR: No supported authentication methods available (server sent: publickey)
ERROR: (gcloud.compute.ssh) Could not SSH into the instance. It is possible that your SSH key has not propagated to the instance yet. Try running this command again. If you still cannot connect, verify that the firewall and instance are set to accept ssh traffic.
Using the browser SSH just gets stuck on "Transferring SSH keys to the VM."
Using PuTTY also results in No supported authentication methods available (server sent: publickey).
I checked the serial console and found this:
systemd-hostnamed.service: Failed to run 'start' task: No space left on device
I recently resized the disk and restarted the VM, but this error still occurs.
Access to port 22 is allowed in the firewall rules. What can I do to fix this?
After increasing the disk size, you need to restart the instance so the filesystem can be resized; this is needed in your specific case because the disk has already run out of space.
If you have not already done so, create a snapshot of the VM's boot disk.
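For example, a snapshot can be taken from Cloud Shell like this (the snapshot name and zone are placeholders):
gcloud compute disks snapshot BOOT_DISK_NAME --snapshot-names=pre-recovery-snapshot --zone=ZONE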
Try to restart the VM.
If you still can't access the VM, do the following:
Stop the VM:
gcloud compute instances stop VM_NAME
Replace VM_NAME with the name of your VM.
Increase the size of the boot disk:
gcloud compute disks resize BOOT_DISK_NAME --size DISK_SIZE
Replace the following:
BOOT_DISK_NAME: the name of your VM's boot disk
DISK_SIZE: the new larger size, in gigabytes, for the boot disk
Start the VM:
gcloud compute instances start VM_NAME
Reattempt to SSH to the VM.
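Once you are back in, a quick way to confirm the root filesystem actually picked up the new size; the device names and the ext4 assumption are mine, so adjust to what lsblk shows:
df -h /                      # free space on the root filesystem
lsblk                        # compare the partition size with the new disk size
# If the filesystem did not grow automatically (e.g. no guest environment), something like:
sudo growpart /dev/sda 1     # from cloud-guest-utils; assumes the root partition is /dev/sda1
sudo resize2fs /dev/sda1     # for ext4; use xfs_growfs / on XFS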
I cannot access the SSH console at all. Is there a way for me to download the disk from inside the cloud platform?
You should be able to get onto the VM through a serial console:
https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-using-serial-console
Alternatively, mount the disk on another VM, revert the change, and then put it back on the original VM.
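Detaching and mounting the disk on another VM works as in the first answer above; to put it back afterwards, a sketch like the following should work (names are placeholders, and I'm assuming the --boot flag of gcloud compute instances attach-disk to reattach it as the boot disk):
gcloud compute instances detach-disk RESCUE_VM --disk=BOOT_DISK_NAME
gcloud compute instances attach-disk ORIGINAL_VM --disk=BOOT_DISK_NAME --boot
gcloud compute instances start ORIGINAL_VM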
I was able to successfully SSH into the Google Cloud VM I had set up yesterday, but today for some reason I can't, and I didn't mess with any of the settings, especially not the Firewall settings. It keeps giving me these errors now:
Connection via Cloud Identity-Aware Proxy Failed
Code: 4003
Reason: failed to connect to backend
You may be able to connect without using the Cloud Identity-Aware Proxy.
Then when I click on "Connect without Identity-Aware Proxy" I get the following error:
Connection Failed
We are unable to connect to the VM on port 22. Learn more about possible causes of this issue.
I don't know what happened. Yesterday it was working fine and now it's not.
First, try to disable Cloud Identity-Aware Proxy and connect to the VM instance via the web console.
After that, check logs:
Go to Compute Engine -> VM instances -> click on NAME_OF_YOUR_VM -> on the VM instance details page, find the Logs section and click on Serial port 1 (console)
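The same serial port 1 output can also be read from Cloud Shell; the zone is a placeholder:
gcloud compute instances get-serial-port-output NAME_OF_YOUR_VM --zone=ZONE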
Reboot your VM instance.
Check full boot log for any errors or/and warnings.
If your VM instance doesn't start up verify that your disk has a valid file system and a valid master boot record (MBR) by following the documentation General troubleshooting.
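A minimal sketch of such a check, assuming the broken boot disk is attached to a second, working VM and shows up there as /dev/sdb (verify with lsblk):
sudo fdisk -l /dev/sdb      # prints the partition table, so you can see whether a valid MBR/GPT is present
sudo fsck -n /dev/sdb1      # read-only file system check of the root partition; reports errors without fixing them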
If you found errors and/or warnings related to disk space, you can try to resize it according to the documentation Resizing a zonal persistent disk, and also according to the article Recovering an inaccessible instance or a full boot disk:
If an instance is completely out of disk space or if it is not running
a Linux guest environment, then automatically resizing your root
filesystem isn't possible, even after you've increased the size of the
persistent disk that backs it. If you can't connect to your instance,
or your boot disk is full and you can't resize it, you must create a
new instance and recreate the boot disk from a snapshot to resize it.
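A sketch of that last path with gcloud; every name and the size are placeholders:
gcloud compute disks snapshot FULL_BOOT_DISK --snapshot-names=full-boot-disk-snap
gcloud compute disks create new-boot-disk --source-snapshot=full-boot-disk-snap --size=50GB
gcloud compute instances create new-vm --disk=name=new-boot-disk,boot=yes,auto-delete=no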
Otherwise, try to get access to your VM instance via the serial console:
Enable serial console connection with gcloud command:
gcloud compute instances add-metadata NAME_OF_YOUR_VM_INSTANCE \
--metadata serial-port-enable=TRUE
or go to Compute Engine -> VM instances -> click on NAME_OF_YOUR_VM_INSTANCE -> click on EDIT -> go to section Remote access and check Enable connecting to serial ports
Create a temporary user and password to log in: shut down your VM and set a startup script by adding, in the Custom metadata section, the key startup-script with the following value:
#! /bin/bash
# create a temporary user in the google_sudoers group and set its password
useradd --groups google_sudoers tempuser
echo "tempuser:password" | chpasswd
and then start your VM.
Connect to your VM via serial port with gcloud command:
gcloud compute connect-to-serial-port NAME_OF_YOUR_VM_INSTANCE
or go to Compute Engine -> VM instances -> click on NAME_OF_YOUR_VM_INSTANCE -> and click on Connect to serial console
Check what went wrong.
Disable access via serial port with gcloud command:
gcloud compute instances add-metadata NAME_OF_YOUR_VM_INSTANCE \
--metadata serial-port-enable=FALSE
or go to Compute Engine -> VM instances -> click on NAME_OF_YOUR_VM_INSTANCE -> click on EDIT -> go to section Remote access and uncheck Enable connecting to serial ports. Keep in mind that, according to the documentation Interacting with the serial console:
Caution: The interactive serial console does not support IP-based access
restrictions such as IP whitelists. If you enable the interactive
serial console on an instance, clients can attempt to connect to that
instance from any IP address. Anybody can connect to that instance if
they know the correct SSH key, username, project ID, zone, and
instance name. Use firewall rules to control access to your network
and specific ports.
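If you want to follow that advice, a hypothetical rule restricting SSH to a trusted range could look like this (the rule name, network, and source range are placeholders):
gcloud compute firewall-rules create allow-ssh-from-trusted --network=default --direction=INGRESS --action=ALLOW --rules=tcp:22 --source-ranges=203.0.113.0/24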
If you weren't able to connect via the serial console, try following the documentation Troubleshooting SSH, section Inspect the VM instance without shutting it down, and inspect the disk of your VM on another VM. In the same way you can transfer your data to another working VM instance.
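For the data transfer itself, two hedged options, assuming the disk is mounted at /mnt/recovery on a working VM; the instance, bucket, and path names are placeholders:
gcloud compute scp --recurse WORKING_VM:/mnt/recovery/home ./recovered-files --zone=ZONE
gsutil -m cp -r /mnt/recovery/home gs://YOUR_BUCKET/recovered/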
I had the same issue while running composer update.
In my case, rebooting the VM instance solved it.
Based on these error messages, I guess that your project has Identity-Aware Proxy (IAP) enabled, which sometimes may affect the ability to SSH into an instance, depending on the configuration.
In order to rule this out, you may try the following (see the sketch after the list):
Create the firewall rules for allowing IAP to connect to your instances
Grant the necessary permissions to use IAP
Tunnel the SSH connection through IAP
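A sketch of the first and third points; the firewall rule name is a placeholder, and 35.235.240.0/20 is the IP range Google documents for IAP TCP forwarding:
gcloud compute firewall-rules create allow-ingress-from-iap --direction=INGRESS --action=allow --rules=tcp:22 --source-ranges=35.235.240.0/20
gcloud compute ssh VM_NAME --tunnel-through-iap --zone=ZONE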
I would like to migrate from my Proxmox server to the Google Compute Engine service. I tried to create an image from one LXC container and then import it into Google Cloud.
I did these steps:
created a backup of the LXC container (with the entire filesystem)
created a raw image from the backup using the dd Linux command
imported the raw image into a Google Cloud Storage bucket
created a Google VM instance from the imported image
After this I get the new instance, but I'm not able to ping it or connect via SSH. I also notice that after a few minutes the instance stops automatically.
If, instead, I create another instance from a default template (for example, Debian 9), I have no problem with ping or connection.
Any suggestions? Thanks
You should reconfigure the networking inside that instance after you have started it in GCE. Because it was imported from an LXC container on Proxmox, it still has the LXC settings, so its network interface is probably not configured properly, or not with the correct IP address, for it to function in GCE.
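A minimal way to check and fix this from the serial console, assuming you can log in there; the interface name ens4 is an assumption, so check ip link first, and note that the primary interface on GCE is expected to get its address via DHCP:
ip link show            # find the name of the network interface
sudo dhclient -v ens4   # request an address from the GCE DHCP server on that interface
On a Debian-based container image you may also need to update /etc/network/interfaces so the interface uses DHCP on boot.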