Restore Google VM file permission - google-cloud-platform

I accidentally ran "sudo make chmod -R 777 /" on my GCP, now I'm not able to access the SSH anymore (Neither by terminal or browser):
Permissions 0777 for '/Users/username/.ssh/id_rsa' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
How can I access my VM and restore it?

As suggested by @John Hanley, you must create a new instance to avoid serious problems in the future with this broken VM.
To solve the permissions issue with ~/.ssh/id_rsa.pub, you can follow the Running startup scripts documentation and/or the article suggested by @John Hanley to execute the command sudo chmod 644 ~/.ssh/id_rsa.pub, or follow the instructions from this article to connect to your instance via the serial console and then run sudo chmod 644 ~/.ssh/id_rsa.pub to set proper permissions.
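For reference, the serial-console route can also be scripted with gcloud (a minimal sketch; INSTANCE_NAME and ZONE are placeholders for your own values):
$ gcloud compute instances add-metadata INSTANCE_NAME --zone=ZONE --metadata serial-port-enable=TRUE
$ gcloud compute connect-to-serial-port INSTANCE_NAME --zone=ZONE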
Keep in mind that restoring SSH access WON'T SOLVE all the other possible issues with your VM caused by sudo make chmod -R 777 /. So you can skip that step and follow the instructions below instead:
To move your data from the broken VM to a new VM, you can follow these steps:
create snapshot of the boot disk of broken instance
$ gcloud compute disks snapshot BROKEN_INSTANCE_BOOT_DISK_NAME --snapshot-names=TEMPORARY_SNAPSHOT_NAME
create temporary disk from the snapshot
$ gcloud compute disks create TEMPORARY_DISK_NAME --source-snapshot=TEMPORARY_SNAPSHOT_NAME
create new instance
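(no command was shown for this step; a minimal sketch assuming defaults are acceptable — pick the image and machine type you need)
$ gcloud compute instances create NEW_INSTANCE_NAME --zone=ZONE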
attach temporary disk to new instance
$ gcloud compute instances attach-disk NEW_INSTANCE_NAME --disk=TEMPORARY_DISK_NAME
mount temporary disk
$ sudo su -
$ mkdir /mnt/TEMPORARY_DISK
$ mount /dev/disk/by-id/scsi-0Google_PersistentDisk_TEMPORARY_DISK_NAME /mnt/TEMPORARY_DISK
copy data from temporary disk to new instance
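(the exact paths depend on what you need to recover; for example, assuming your data lives under /home on the old disk)
$ cp -a /mnt/TEMPORARY_DISK/home/. /home/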
unmount temporary disk:
$ sudo umount /dev/disk/by-id/scsi-0Google_PersistentDisk_TEMPORARY_DISK_NAME
detach temporary disk
$ gcloud compute instances detach-disk NEW_INSTANCE_NAME --disk=TEMPORARY_DISK_NAME
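Once the data is safely copied, you can optionally delete the temporary resources (same placeholder names as above):
$ gcloud compute disks delete TEMPORARY_DISK_NAME
$ gcloud compute snapshots delete TEMPORARY_SNAPSHOT_NAME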

Related

How to detach a disk in a Google Cloud TPU VM instance?

I created a TPU-VM instance (not a normal compute instance) and attached an external disk to it using this command:
gcloud alpha compute tpus tpu-vm create TPU-VM-NAME \
--zone=europe-west4-a \
--accelerator-type=v3-8 \
--version=v2-alpha \
--data-disk source=[PATH/TO/DISK]
Now I want to detach that disk from the TPU-VM, but I cannot find the instance in the VM instances tab in the Google Cloud console (it is treated as a TPU instance, so it's not listed there). I can only find it in the TPUs tab, but there I cannot edit the disk out of the instance.
I tried using this command too, but it doesn't work:
gcloud compute instances detach-disk INSTANCE-NAME --disk=DISK-NAME
It says that the resource (projects/project-name/zone/instances/tpu-vm-name) was not found.
Detaching a disk from the TPU VM architecture is not supported right now.
Actually, it is supported according to this tutorial! You need to follow this configuration when you are on the TPU VM. Don't forget to create the disk before detaching it, and be sure you are using the same billing account for both the TPU VM and the disk; otherwise, the system will throw an INTERNAL ERROR.
sudo lsblk
#Find the disk here, and its device name. Most likely it will be "sdb". If that is correct, format the disk with this command:
sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
#Mount the disk and give everyone write access
sudo mkdir -p /mnt/disks/flaxdisk
sudo mount -o discard,defaults /dev/sdb /mnt/disks/flaxdisk
sudo chmod a+w /mnt/disks/flaxdisk
#Configure automatic mount on restarts
sudo cp /etc/fstab /etc/fstab.backup
#Find the UUID of the disk - you need this value in the next step
sudo blkid /dev/sdb
#Add this line to /etc/fstab, with the correct UUID
UUID=52af08e4-f249-4efa-9aa3-7c7a9fd560b0 /mnt/disks/flaxdisk ext4 discard,defaults,nofail 0 2
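To check the new fstab entry without rebooting, you can remount everything and confirm the disk comes back (a quick sanity check, assuming the mount point used above):
sudo umount /mnt/disks/flaxdisk
sudo mount -a
df -h /mnt/disks/flaxdisk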

Mounting AWS EFS with NFS on macOS

I'm trying to mount an EFS volume with NFS on macOS, but am having permissions trouble. I am running the following command to mount the volume:
sudo mount -t nfs -o vers=4 -o tcp -w <IP Address>:/ efs/
and am able to successfully mount the volume, but it mounts with root privileges, and I need to be able to grant access to the volume to the local user. I need the local user to be able to both read and write to the volume.
Trying to chown -R $(whoami) ./efs results in an Unknown error: 10039.
I can successfully chmod 666 the files inside of the mount (sometimes with odd behaviors), but I ultimately need to just grant the local user write access to the volume.
Am I missing an option in the mount command, or does anyone know how to mount the EFS volume and give the local user permissions to it?

Amazon EC2 ssh operation times out

I have an EC2 host from January this year which was working fine. But recently I saw that my Java app there was not responding, so I tried to ssh to my EC2 host with the .pem key. Although ssh port 22 is allowed for all in the security group, it times out.
I cannot lose the data in the MySQL server that lives there. So I tried to add a rule to open port 3306 and access that server externally. But that times out too. I double-checked the security groups; they seem to be okay.
Then I took an image of the instance and created another instance from that image. Guess what: I cannot ssh to that host either.
Then, just to reassure myself, I created another instance, but this time a fresh one, i.e., not from the image. I can easily ssh into that host.
What am I missing here? What's the problem with my previous host? I've already searched for the answer on SO, and nothing could help me solve my problem.
You could try creating a fresh instance, then stopping your old instance, detaching the volume, attaching it to your new instance, and mounting it there. Then at least you would have access to the drive and could save your MySQL data.
@dmohr's answer was not an exact solution but helped me a lot. However, my AWS guru @leapoffaith managed to recover the data for me using the following steps. He later provided me the steps he followed, and I thought of posting them here as well, as I found them useful for other developers who might face the same issue.
Get a new ec2 instance.
Stop the corrupted EC2 instance, detach its EBS volume, and attach the volume to the new instance.
Then mount the volume on your newly created EC2 instance. Use the following commands to mount:
Make a new mount point: mkdir mount_point
To mount: sudo mount /dev/xvdf1 mount_point/ (note that the device name can be different).
Get permission of the previous mysql data:
sudo chown -R $USER:$(id -gn $USER) mount_point/var/lib/mysql
Install mysql on the new EC2 host:
sudo apt-get update
sudo apt-get install mysql-server - Remember to use the same root password that you used to connect to your mysql database from your server application.
sudo mysql_secure_installation
Stop mysql service: sudo service mysql stop
Copy your database folder from the mount_point's mysql folder to the /var/lib/mysql folder. For example, sudo cp -r yourdb /var/lib/mysql/
If you use the InnoDB engine, also copy:
sudo cp ibdata1 /var/lib/mysql/
sudo cp ib_logfile0 /var/lib/mysql/
sudo cp ib_logfile1 /var/lib/mysql/
Give the 'mysql' user permission:
sudo chown -R mysql:mysql yourdb/
sudo chmod -R 777 mysql/yourdb
Start mysql service: sudo service mysql start
Unmount the EBS: sudo umount -d /dev/xvdf1
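As a quick sanity check after starting the service, you can confirm the recovered databases are visible (this assumes the same root password as on the old host):
sudo service mysql status
mysql -u root -p -e "SHOW DATABASES;"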

AWS EC2 Permission denied

When I try to log in to my EC2 instance through ssh, I get the error below: Permission denied (publickey)
I checked the host name and username; everything is fine. The last time I was logged in, I wrongly ran chmod -R 777 . from the EC2 instance's root directory, and after that I could not connect to the instance. I need some files from the instance. Is there any way to log in to it? I also tried with a new instance; that works.
Is there any possibility?
I haven't tried this myself, but I don't see why it wouldn't work.
Try snapshotting your instance (the Create Image button in the EC2 console). Once complete, find your snapshot in the EC2 console. It should be backed by an EBS volume with an id of the pattern "vol-xxxxxxxx".
Spin up a new instance and attach "vol-xxxxxxxx" as secondary storage. SSH to the new instance, mount the device that "vol-xxxxxxxx" correlates to (e.g. /dev/xvdf) on a temp directory, and find the files you're looking for.
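The same flow can be scripted with the AWS CLI (a sketch with hypothetical IDs and device names; the device a volume appears as varies by instance type):
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-yyyyyyyy --device /dev/sdf
# then, on the new instance:
sudo mkdir -p /mnt/rescue
sudo mount /dev/xvdf /mnt/rescue   # or /dev/xvdf1 if the volume is partitioned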
Detach your root volume and attach it to another instance.
Login to the ec2 instance
mkdir tempfolder
sudo mount /dev/xvdf1 tempfolder (normally /dev/xvdf1; you can list your volumes to make sure)
cd tempfolder/home
sudo chmod -R 700 ec2-user
sudo umount tempfolder
Detach the volume and attach it back to the old instance; remember it's the root volume, so attach it with the device name "/dev/xvda".
I faced a similar problem.
You will not be able to recover the old instance; just create a new instance and set the permissions with
chmod 777 (don't use the -R option), and your problem will be resolved.
One reason can be that your key file is publicly viewable, which SSH will not accept. Use this command if needed:
chmod 400 mykey.pem
Also keep in mind the correct user id for EC2(ec2-user) instance and the command:
ssh -l ec2-user -i .ssh/yourkey.pem public-ec2-host
Use WinSCP to revert the permission change.
Recently I accidentally changed the "/home/ec2-user" directory permissions to 777 using PuTTY. I was immediately logged out. I was also logged into the server using WinSCP, and it didn't get disconnected after changing the permissions.
The solution was to change the permissions on "/home/ec2-user" back to 700 using WinSCP, and I was able to log back in. It worked for me. WinSCP saved me a lot of trouble.

Add Keypair to existing EC2 instance

I was given AWS Console access to an account with 2 instances running that I cannot shut down (in production). I would, however, like to gain SSH access to these instances, is it possible to create a new Keypair and apply it to the instances so I can SSH in? Obtaining the existing pem file for the keypair the instances were created under is currently not an option.
If this isn't possible is there some other way I can get into the instances?
You can't apply a keypair to a running instance. You can only use the new keypair to launch a new instance.
For recovery, if it's an EBS-boot AMI, you can stop it and make a snapshot of the volume, then create a new volume based on it. You can then attach that volume back to start the old instance, create a new image, or recover data.
Data on ephemeral storage, though, will be lost.
Due to the popularity of this question and answer, I wanted to capture the information in the link that Rodney posted in his comment.
Credit goes to Eric Hammond for this information.
Fixing Files on the Root EBS Volume of an EC2 Instance
You can examine and edit files on the root EBS volume on an EC2 instance even if you are in what you considered a disastrous situation like:
You lost your ssh key or forgot your password
You made a mistake editing the /etc/sudoers file and can no longer gain root access with sudo to fix it
Your long-running instance is hung for some reason, cannot be contacted, and fails to boot properly
You need to recover files off of the instance but cannot get to it
On a physical computer sitting at your desk, you could simply boot the system with a CD or USB stick, mount the hard drive, check out and fix the files, then reboot the computer to be back in business.
A remote EC2 instance, however, seems distant and inaccessible when you are in one of these situations. Fortunately, AWS provides us with the power and flexibility to be able to recover a system like this, provided that we are running EBS boot instances and not instance-store.
The approach on EC2 is somewhat similar to the physical solution, but we’re going to move and mount the faulty “hard drive” (root EBS volume) to a different instance, fix it, then move it back.
In some situations, it might simply be easier to start a new EC2 instance and throw away the bad one, but if you really want to fix your files, here is the approach that has worked for many:
Setup
Identify the original instance (A) and volume that contains the broken root EBS volume with the files you want to view and edit.
instance_a=i-XXXXXXXX
volume=$(ec2-describe-instances $instance_a |
egrep '^BLOCKDEVICE./dev/sda1' | cut -f3)
Identify the second EC2 instance (B) that you will use to fix the files on the original EBS volume. This instance must be running in the same availability zone as instance A so that it can have the EBS volume attached to it. If you don’t have an instance already running, start a temporary one.
instance_b=i-YYYYYYYY
Stop the broken instance A (waiting for it to come to a complete stop), detach the root EBS volume from the instance (waiting for it to be detached), then attach the volume to instance B on an unused device.
ec2-stop-instances $instance_a
ec2-detach-volume $volume
ec2-attach-volume --instance $instance_b --device /dev/sdj $volume
ssh to instance B and mount the volume so that you can access its file system.
ssh ...instance b...
sudo mkdir -m 000 /vol-a
sudo mount /dev/sdj /vol-a
Fix It
At this point your entire root file system from instance A is available for viewing and editing under /vol-a on instance B. For example, you may want to:
Put the correct ssh keys in /vol-a/home/ubuntu/.ssh/authorized_keys
Edit and fix /vol-a/etc/sudoers
Look for error messages in /vol-a/var/log/syslog
Copy important files out of /vol-a/…
Note: The uids on the two instances may not be identical, so take care if you are creating, editing, or copying files that belong to non-root users. For example, your mysql user on instance A may have the same UID as your postfix user on instance B which could cause problems if you chown files with one name and then move the volume back to A.
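For example, the first fix above (restoring ssh keys) might look like the sketch below, assuming the default "ubuntu" user and that your public key is already on instance B:
sudo mkdir -p /vol-a/home/ubuntu/.ssh
cat ~/.ssh/id_rsa.pub | sudo tee -a /vol-a/home/ubuntu/.ssh/authorized_keys
sudo chmod 700 /vol-a/home/ubuntu/.ssh
sudo chmod 600 /vol-a/home/ubuntu/.ssh/authorized_keys
# Per the uid note above, chown with the numeric uid/gid that "ubuntu" has on
# instance A (check /vol-a/etc/passwd; it is typically 1000 on Ubuntu images):
sudo chown -R 1000:1000 /vol-a/home/ubuntu/.ssh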
Wrap Up
After you are done and you are happy with the files under /vol-a, unmount the file system (still on instance-B):
sudo umount /vol-a
sudo rmdir /vol-a
Now, back on your system with the ec2-api-tools, continue moving the EBS volume back to its home on the original instance A and start the instance again:
ec2-detach-volume $volume
ec2-attach-volume --instance $instance_a --device /dev/sda1 $volume
ec2-start-instances $instance_a
Hopefully, you fixed the problem, instance A comes up just fine, and you can accomplish what you originally set out to do. If not, you may need to continue repeating these steps until you have it working.
Note: If you had an Elastic IP address assigned to instance A when you stopped it, you’ll need to reassociate it after starting it up again.
Remember! If your instance B was temporarily started just for this process, don’t forget to terminate it now.
Though you can't add a key pair to a running EC2 instance directly, you can create a Linux user and a new key pair for that user, then use it like you would the original user's key pair.
In your case, you can ask the instance owner (who created it) to do the following. That way, the instance owner doesn't have to share his own keys with you, but you would still be able to ssh into these instances. These steps were originally posted by Utkarsh Sengar (aka @zengr) at http://utkarshsengar.com/2011/01/manage-multiple-accounts-on-1-amazon-ec2-instance/. I've made only a few small changes.
Step 1: log in as the default "ubuntu" user:
$ ssh -i my_orig_key.pem ubuntu@111.111.11.111
Step 2: create a new user, we will call our new user “john”:
[ubuntu@ip-11-111-111-111 ~]$ sudo adduser john
Set password for “john” by:
[ubuntu@ip-11-111-111-111 ~]$ sudo su -
[root@ip-11-111-111-111 ubuntu]# passwd john
Add “john” to sudoer’s list by:
[root@ip-11-111-111-111 ubuntu]# visudo
.. and add the following to the end of the file:
john ALL = (ALL) ALL
Alright! We have our new user created; now you need to generate the key file that will be needed to log in, like my_orig_key.pem in Step 1.
Now, exit and go back to ubuntu, out of root.
[root@ip-11-111-111-111 ubuntu]# exit
[ubuntu@ip-11-111-111-111 ~]$
Step 3: creating the public and private keys:
[ubuntu@ip-11-111-111-111 ~]$ su john
Enter the password you created for “john” in Step 2. Then create a key pair. Remember that the passphrase for key pair should be at least 4 characters.
[john@ip-11-111-111-111 ubuntu]$ cd /home/john/
[john@ip-11-111-111-111 ~]$ ssh-keygen -b 1024 -f john -t dsa
[john@ip-11-111-111-111 ~]$ mkdir .ssh
[john@ip-11-111-111-111 ~]$ chmod 700 .ssh
[john@ip-11-111-111-111 ~]$ cat john.pub > .ssh/authorized_keys
[john@ip-11-111-111-111 ~]$ chmod 600 .ssh/authorized_keys
[john@ip-11-111-111-111 ~]$ sudo chown john:ubuntu .ssh
In the above step, john is the user we created and ubuntu is the default user group.
[john@ip-11-111-111-111 ~]$ sudo chown john:ubuntu .ssh/authorized_keys
Step 4: now you just need to download the key called “john”. I use scp to download/upload files from EC2, here is how you can do it.
You will still need to copy the file using ubuntu user, since you only have the key for that user name. So, you will need to move the key to ubuntu folder and chmod it to 777.
[john@ip-11-111-111-111 ~]$ sudo cp john /home/ubuntu/
[john@ip-11-111-111-111 ~]$ sudo chmod 777 /home/ubuntu/john
Now come to local machine’s terminal, where you have my_orig_key.pem file and do this:
$ cd ~/.ssh
$ scp -i my_orig_key.pem ubuntu@111.111.11.111:/home/ubuntu/john john
The above command will copy the key “john” to the present working directory on your local machine. Once you have copied the key to your local machine, you should delete “/home/ubuntu/john”, since it’s a private key.
Now, on your local machine, chmod john to 600.
$ chmod 600 john
Step 5: time to test your key:
$ ssh -i john john@111.111.11.111
So, in this manner, you can set up multiple users on one EC2 instance!
For Elastic Beanstalk environments, you can apply a key pair to a running instance like this:
Create a key pair from EC2 -> Key Pairs (under NETWORK & SECURITY)
Go to Elastic Beanstalk and click on your application
Go to Configuration -> Security and click Edit
Choose your EC2 key pair and click Apply
Click confirm to confirm the update. It will terminate the environment and apply the key pair to your environment.
On your local machine, run command:
ssh-keygen -t rsa -C "SomeAlias"
After that command runs, a file ending in *.pub will be generated. Copy the contents of that file.
On the Amazon machine, edit ~/.ssh/authorized_keys and paste the contents of the *.pub file (and remove any existing contents first).
You can then SSH using the other file that was generated from the ssh-keygen command (the private key).
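End to end, that flow might look like this (hypothetical file name "somealias"; substitute your own host and user — this sketch appends with >> rather than replacing, which keeps any existing key usable):
# on your local machine:
ssh-keygen -t rsa -C "SomeAlias" -f ~/.ssh/somealias
cat ~/.ssh/somealias.pub    # copy this output
# on the EC2 instance, add the copied line:
echo "ssh-rsa AAAA... SomeAlias" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# back on your local machine:
ssh -i ~/.ssh/somealias ec2-user@your-ec2-host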
This happened to me earlier (didn't have access to an EC2 instance someone else created but had access to AWS web console) and I blogged the answer: http://readystate4.com/2013/04/09/aws-gaining-ssh-access-to-an-ec2-instance-you-lost-access-to/
Basically, you can detach the EBS drive and attach it to an EC2 instance that you do have access to. Add your SSH pub key to ~ec2-user/.ssh/authorized_keys on this attached drive, then put it back on the old EC2 instance. Step-by-step in the link, using the Amazon AMI.
No need to make snapshots or create a new cloned instance.
I didn't find an easy way to add a new key pair via the console, but you can do it manually.
Just ssh into your EC2 box with the existing key pair. Then edit ~/.ssh/authorized_keys and add the new key on a new line. Exit and ssh in with the new key. Success!
In my case, I used this documentation to associate a key pair with my Elastic Beanstalk instance:
Important
You must create an Amazon EC2 key pair and configure your Elastic Beanstalk–provisioned Amazon EC2 instances to use the Amazon EC2 key pair before you can access your Elastic Beanstalk–provisioned Amazon EC2 instances. You can set up your Amazon EC2 key pairs using the AWS Management Console. For instructions on creating a key pair for Amazon EC2, see the Amazon Elastic Compute Cloud Getting Started Guide.
Configuring Amazon EC2 Server Instances with Elastic Beanstalk
You can just add a new key to the instance by the following command:
ssh-copy-id -i ~/.ssh/id_rsa.pub domain_alias
You can configure domain_alias in ~/.ssh/config:
host domain_alias
User ubuntu
Hostname domain.com
IdentityFile ~/.ssh/ec2.pem
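With that config in place, the alias also works for an ordinary login once the key is copied (assuming the example config above):
ssh domain_alias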
Once an instance has been started, there is no way to change the keypair associated with the instance at a metadata level, but you can change what ssh key you use to connect to the instance.
stackoverflow.com/questions/7881469/change-key-pair-for-ec2-instance
You can actually add a key pair through the Elastic Beanstalk config page. It then restarts your instance for you, and everything works.