Google Cloud - accessing Linux VM via private key - google-cloud-platform

I have created a Linux VM in Google Cloud, and right now I am trying to access the VM through SSH.
I am able to SSH to the server if I am logged in to the console via the web interface. However, I want to generate a portable private key file (.pem) which I can use to connect to the server from anywhere.
I can achieve this easily on AWS or Azure during VM creation, but this doesn't seem to be the case on GCP.

This is not how gcloud works.
Google Cloud Platform actually takes the public key beforehand, when you create the VM instance in Compute Engine. You can generate the key on your machine using ssh-keygen and add it to your instance by one of the following methods.
You have two options: either add the SSH key to a single instance (screenshot 1) by editing the instance settings, or add the SSH key project-wide in the Metadata section of Compute Engine (screenshot 2). A gcloud equivalent is sketched below the screenshots.
Screenshot 1
Screenshot 2
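If you prefer the command line, here is a rough gcloud equivalent (a sketch only; the instance name, zone, and key file are placeholders, and note that add-metadata overwrites the existing ssh-keys value, so the file should contain every key you want to keep, one per line in the form USERNAME:ssh-rsa AAAA... USERNAME):
gcloud compute instances add-metadata my-instance --zone=us-central1-a --metadata-from-file ssh-keys=ssh-keys.txt
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=ssh-keys.txt
The first command adds the key to a single instance, the second project-wide.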

I understand what you mean, but Google does it a bit more automatically.
On any local computer, first get a service-account JSON key file with the right access.
Authorize gcloud with:
gcloud auth activate-service-account --key-file=KEY_FILE.json
Then:
gcloud compute config-ssh [--ssh-config-file=SSH_CONFIG_FILE] [--ssh-key-file=SSH_KEY_FILE]
You may already have an SSH key file, but it's fine to simply let gcloud generate one.
Finally, you can SSH into any Compute Engine instance from this computer with:
gcloud compute ssh [USER@]INSTANCE
Next time on the same computer, you just need to run gcloud compute ssh again to access it.
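As a rough sketch of what config-ssh writes (host names follow the INSTANCE.ZONE.PROJECT pattern; the exact fields can vary by SDK version, and the names and IP below are placeholders), ~/.ssh/config ends up with entries like:
Host my-instance.us-central1-a.my-project
    HostName 203.0.113.10
    IdentityFile ~/.ssh/google_compute_engine
so a plain ssh my-instance.us-central1-a.my-project also works from that machine.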

Open a terminal on your workstation and use the ssh-keygen command to generate a new key. Specify the -C flag to add a comment with your username.
ssh-keygen -t rsa -f ~/.ssh/[KEY_FILENAME] -C [USERNAME]
where:
[KEY_FILENAME] is the name that you want to use for your SSH key files. For example, a filename of my-ssh-key generates a private key file named my-ssh-key and a public key file named my-ssh-key.pub.
[USERNAME] is the user for whom you will apply this SSH key.
Restrict access to your private key so that only you can read it and nobody can write to it.
chmod 400 ~/.ssh/[KEY_FILENAME]
where [KEY_FILENAME] is the name that you used for your SSH key files.
Repeat this process for every user who needs a new key.
If you created a key on a Linux workstation by using the ssh-keygen tool, the keys are saved under the following locations:
Public key file: ~/.ssh/[KEY_FILENAME].pub
Private key file: ~/.ssh/[KEY_FILENAME]
where [KEY_FILENAME] is the filename of the SSH key, which was set when the key was created.
To add or remove project-wide public SSH keys from the GCP Console:
In the Google Cloud Platform Console, go to the metadata page for your project. It can be found under the GCE menu.
Under SSH Keys, click Edit.
Modify the project-wide public SSH keys: To add a public SSH key, click Add item at the bottom of the page. This will produce a text box. Copy the contents of your public SSH key file and paste them into the text box. Repeat this process for each public SSH key that you want to add.
When you are done, click Save at the bottom of the page.
To connect to an instance using ssh
In a terminal, use the ssh command and your private SSH key file to connect to your instance. Specify your username and the external IP address of the instance that you want to connect to.
ssh -i [PATH_TO_PRIVATE_KEY] [USERNAME]@[EXTERNAL_IP_ADDRESS]
where:
[PATH_TO_PRIVATE_KEY] is the path to your private SSH key file.
[USERNAME] is the name of the user connecting to the instance. The username for your public SSH key was specified when the SSH key was created. You can connect to the instance as that user if the instance has a valid public SSH key for that user and if you have the matching private SSH key.
[EXTERNAL_IP_ADDRESS] is the external IP address for your instance.
If the connection is successful, you can use the terminal to run commands on your instance. When you are done, use the exit command to disconnect from the instance.

I found this answer and just wanted to add what works for me.
With the gcloud client installed on your machine (whichever machine you wish to connect to the VM from):
Authenticate your service account using your project JSON key
gcloud auth activate-service-account --key-file=[keyfile_for_project].json
Create ssh key pairs on the local machine
$(which ssh-keygen) -t rsa -C "your@email.com"
Add the public key you just created (id_rsa.pub) to your VM's metadata (great screenshots of this are included in Mohit Kumar's answer)
cat $PWD/id_rsa.pub (paste this output into the SSH key metadata)
SSH to the VM instance using the private key you just created (id_rsa)
ssh -v -i id_rsa [user]@[external_ip]
If you want to make this portable, simply carry that private key (id_rsa) and public key (id_rsa.pub) pair around with you.

For SSH access, you wouldn't use a pem key. On your client machine (if on a Unix/Linux system) you should run ssh-keygen, which will walk you through creating your SSH key (the default is RSA). You then need to add the public key (~/.ssh/id_rsa.pub, or the file specified during creation) to ~/.ssh/authorized_keys on the server.
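A minimal sketch of that flow, assuming you can already reach the server some other way (for example the browser-based SSH) and using example file names:
ssh-keygen -t rsa -f ~/.ssh/my-portable-key    # on the client; creates my-portable-key and my-portable-key.pub
cat ~/.ssh/my-portable-key.pub                 # copy this output
echo 'ssh-rsa AAAA... you@client' >> ~/.ssh/authorized_keys   # on the server, pasting the copied line
chmod 600 ~/.ssh/authorized_keys               # on the server
ssh -i ~/.ssh/my-portable-key YOUR_USER@EXTERNAL_IP           # back on the client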

Related

GCP Notebook AI -- SSH with write permissions?

I'm trying to set up remote access (with VSCode) to the GCP VM that's set up with Notebooks AI. However, when I SSH into the VM I don't have write permissions for /home/jupyter, so I cannot edit any of the notebook files.
I have tried both gcloud compute ssh and setting up local aliases with gcloud compute config-ssh.
My best guess is that the users are different. It looks like the terminal on JupyterLab is logged in as jupyter@[instance...], while when I SSH in it's myname@[instance...]. Checking the permissions of /home/jupyter/, it's owned by user jupyter of group jupyter. I also tried adding users to the jupyter group with sudo usermod -a -G, but that didn't do the trick. When I try to SSH in as jupyter@[instance...] from anywhere else I get Permission denied (publickey).
I can edit files once logged in if I use sudo vim ..., but that won't help for VS code.
EDIT: a partial solution is to open up permissions using sudo chmod 777 /home/jupyter/*. However, that's probably a hackish, unsafe way to do it. Moreover, it only works on existing files -- new files will still only be writable by whichever user created them.
To SSH into the notebook instance as the “jupyter” user, an SSH key should be generated for that user and added to the notebook VM instance. Also, please make sure that the notebook instance VM has the appropriate firewall rule to allow the SSH connection. The following steps create an SSH connection as the “jupyter” user, which has the write permissions.
Run the following commands on the local machine to generate the required SSH key:
ssh-keygen -t rsa -f ~/.ssh/jupyter-ssh-key -C jupyter
“jupyter-ssh-key” → Name of the pair of public and private keys (Public key: jupyter-ssh-key.pub, Private key: jupyter-ssh-key)
“jupyter” → User in the VM that we are trying to connect to
chmod 400 ~/.ssh/jupyter-ssh-key
In the Compute Engine console, edit the VM settings to add the contents of the generated SSH public key. Detailed instructions can be found here.
Initiate the SSH connection from the local machine to the notebook VM:
ssh -i ~/.ssh/jupyter-ssh-key jupyter@<external-ip-of-notebook-vm-instance>
If the SSH connection succeeds, the same can be followed in VSCode.
In VSCode, select the “Remote-SSH: Connect to Host” option from the command palette. Enter the above ssh -i command to add the notebook VM instance as a recognized host. A new VSCode window will appear where we have been logged in as the “jupyter” user.
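If it helps, a sketch of an ~/.ssh/config entry on the local machine (the host alias and IP are placeholders) that VSCode's Remote-SSH host list can then pick up directly:
Host gcp-notebook
    HostName EXTERNAL_IP_OF_NOTEBOOK_VM
    User jupyter
    IdentityFile ~/.ssh/jupyter-ssh-key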

Can't connect to GCP VM Permission denied (publickey) error

I'm creating a new VM instance. I've cleaned all the metadata. Then I'm running the following command in Cloud Shell:
gcloud beta compute ssh --zone "europe-west2-c" "vmname" --project "myprojectname"
Then I'm asked to enter a passphrase (which I don't know). I press Enter until I get the error Permission denied (publickey).
I've deleted and recreated my instance multiple times, but I always get the same error. What should I do?
Troubleshooting Steps:
Log on using the UI SSH. This creates an ephemeral SSH key; the Google agent also executes the code path to refresh .ssh/authorized_keys and fix any invalid dir/file permissions for both .ssh/ and .ssh/authorized_keys. This approach addresses common gcloud compute ssh issues related to corrupted keys, missing dirs/files, or invalid dir/file permissions. Try gcloud again after performing the UI SSH.
Make sure the account has authenticated to gcloud as an IAM user with the Compute Instance Admin role; for example, run gcloud auth revoke --all, then gcloud auth login [IAM-USER], then try gcloud compute ssh again.
Verify that persistent SSH Keys metadata for gcloud is set for either the project or instance. Look in Compute Engine > Metadata, then click SSH Keys. Persistent keys do not have the expireOn attribute.
It's possible the account has lost the private key, mismatched a keypair, etc. You can force gcloud to generate a new SSH keypair by doing the following:
Move ~/.ssh/google_compute_engine and ~/.ssh/google_compute_engine.pub if present.
For example:
mv ~/.ssh/google_compute_engine.pub ~/.ssh/google_compute_engine.pub.old
mv ~/.ssh/google_compute_engine ~/.ssh/google_compute_engine.old
Try gcloud compute ssh [INSTANCE-NAME] again. A new keypair will be created and the public key will be added to the SSH keys metadata.
Verify that the Linux Google Agent scripts are installed, up-to-date, and running. See Determining Google Agent Status. If the Linux Google Agent is not installed, re-install it. See guest-environment.
Verify that the account's home directory ownership and permissions are correct. Make sure the home directory has the correct ownership and is not globally writable. If not using OS Login (metadata-based keys are the default), your .ssh folder must have mode 0700 and the .ssh/authorized_keys file must have mode 0600. Review /var/log/auth.log for any errors.
Commands:
sudo chmod 700 /home/[user-id]/.ssh
sudo chmod 600 /home/[user-id]/.ssh/authorized_keys
If OS Login is enabled and the virtual machine instance is using a service account (the default), add the following roles to the account (gcloud commands are sketched after the list).
roles/compute.osLogin
roles/iam.serviceAccountUser
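A rough sketch of granting those roles from gcloud (the project ID and member are placeholders; roles/iam.serviceAccountUser can alternatively be granted on the specific service account rather than the whole project):
gcloud projects add-iam-policy-binding MY_PROJECT --member=user:someone@example.com --role=roles/compute.osLogin
gcloud projects add-iam-policy-binding MY_PROJECT --member=user:someone@example.com --role=roles/iam.serviceAccountUser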
For more information, see the GCP documentation on troubleshooting SSH.
The possible causes for a Permission denied (publickey) error are:
Your key expired and Compute Engine deleted your ~/.ssh/authorized_keys file.
You used an SSH key stored in metadata to connect to a VM that has OS Login enabled.
You used an SSH key stored in an OS Login profile to connect to a VM that doesn't have OS Login enabled.
You connected using a third-party tool and your SSH command is misconfigured.
The sshd daemon isn't running or isn't configured properly.
You can find more information on how to troubleshoot SSH key errors in this link.
I have the same issue sometimes. The cause and solution, according to the GCP troubleshooting docs, are:
Your key expired and Compute Engine deleted your ~/.ssh/authorized_keys file. If you manually added SSH keys to your VM and then connected to your VM using the Google Cloud Console, Compute Engine created a new key pair for your connection. After the new key pair expired, Compute Engine deleted your ~/.ssh/authorized_keys file in the VM, which included your manually added SSH key.
To resolve this issue, try one of the following:
Connect to your VM using the Google Cloud Console or the gcloud command-line tool. Re-add your SSH key to metadata. For more information, see Add SSH keys to VMs that use metadata-based SSH keys.
I use Terraform, so in this case I instructed the workflow to destroy the VM and rebuild it.
To fix this issue when you cannot SSH in at all:
Edit the VM and enable the serial port (a gcloud sketch follows after this list)
Start the serial console
Edit ~/.ssh/authorized_keys
On your desktop/client, open /Users/[yourdesktopuser]/.ssh/id_rsa.pub and copy its contents to the clipboard
Paste this content at the end of the authorized_keys file in the VM serial console
Save and close
The VM will then recognize the public key from your desktop.
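For the first two steps, a rough gcloud sketch (the instance name and zone are placeholders; note that logging in on the serial console requires a local user with a password set):
gcloud compute instances add-metadata my-instance --zone=us-central1-a --metadata serial-port-enable=TRUE
gcloud compute connect-to-serial-port my-instance --zone=us-central1-a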

Not to create a new user when I SSH into GCP compute engine

I am using a GCP Compute Engine instance (Ubuntu 18.04) for my Flask app. I had no issues setting up the Flask and Python environment.
My issue is that when I SSH into the instance, a new user is created with the username of the computer I am using. When I SSH from a different system, or one of my colleagues tries to log in, a new user gets created with the username of that computer. I don't want this behavior; I want to log in as a single user all the time.
Have you considered using the gcloud compute ssh CLI? With it you can override the user you log in as by providing user@; see the user argument:
[USER@]INSTANCE
Specifies the instance to SSH into.
USER specifies the username with which to SSH. If omitted, the user login name is used.
INSTANCE specifies the name of the virtual machine instance to SSH into.
see: https://cloud.google.com/sdk/gcloud/reference/compute/ssh
Also see this thread which seems to explain how you can achieve this with standard ssh means:
https://unix.stackexchange.com/questions/404116/how-to-login-with-ssh-as-a-specific-user
You can create an SSH key locally (with any username) and then add its public key to a GCP project or instance(s) via the console. Just place the SSH key files on each computer that you plan to use to access the VM, and it will use only the username you specified during SSH key creation.
For instructions on how to do this for Windows and/or the gcloud CLI, or for adding the public SSH key to a project or instance, follow the Google Cloud documentation guide Managing SSH keys in metadata
On Linux or MacOS workstations, you can generate a key by using the ssh-keygen tool.
Open a terminal on your workstation and use the ssh-keygen command to generate a new key. Specify the -C flag to add a comment with your username.
ssh-keygen -t rsa -f ~/.ssh/[KEY_FILENAME] -C [USERNAME]
where:
[KEY_FILENAME] is the name that you want to use for your SSH key files. For example, a filename of my-ssh-key generates a private key file named my-ssh-key and a public key file named my-ssh-key.pub.
[USERNAME] is the username for the user connecting to the instance.
This command generates a private SSH key file and a matching public SSH key with the following structure:
ssh-rsa [KEY_VALUE] [USERNAME]
where:
[KEY_VALUE] is the key value that you generated.
[USERNAME] is the user that this key applies to.
Restrict access to your private key so that only you can read it and nobody can write to it.
chmod 400 ~/.ssh/[KEY_FILENAME]
where:
[KEY_FILENAME] is the name that you used for your SSH key files.
Afterwards, locate the public SSH keys that you made and/or any existing public SSH keys that you want to add to a project or instance. Add those keys to the GCP project or instance(s) by editing the respective metadata as described in the Managing SSH keys in metadata guide.
You can now SSH into those GCP resources from a machine with the private SSH keys you created.
ssh [USERNAME]@[IP_ADDRESS]
Where:
[USERNAME] is the username you specified during the SSH key creation in step 1.
[IP_ADDRESS] is the IP address of the GCP instance you intend to SSH into.

GCP VMs + ssh/config file

GCP offers multiple ways of SSH-ing in: gcloud, Cloud Shell, and the Cloud SDK on a local machine.
While all these options are great and I have been using them, I normally prefer using .ssh/config to shorten the process of logging in to machines.
For example, for EC2, you just add:
Host $name
    HostName $hostname
    User $username
    IdentityFile $pathtoFile
Is there any way to replicate this for GCP VMs?
Thanks
According to this doc:
If you have already connected to an instance through the gcloud tool, your keys are already generated and applied to your project or instance. The key files are available in the following locations:
Linux and macOS
Public key: $HOME/.ssh/google_compute_engine.pub
Private key: $HOME/.ssh/google_compute_engine
Windows
Public key: C:\Users\[USERNAME]\.ssh\google_compute_engine.pub
Private key: C:\Users\[USERNAME]\.ssh\google_compute_engine
You can use the key with the typical -i flag or in your .ssh/config file (a sketch follows below).
Or simply do
ssh-add ~/.ssh/google_compute_engine
to add the identity to your ssh agent.
PS> I've seen people create an alias for the ssh command, something like
alias gce='gcloud compute ssh'
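For the .ssh/config route mentioned above, a minimal sketch (the host alias, IP, and username are placeholders):
Host my-gce-vm
    HostName EXTERNAL_IP
    User my-username
    IdentityFile ~/.ssh/google_compute_engine
after which a plain ssh my-gce-vm works, just like the EC2 example in the question.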
If you want to SSH to different instances of a google cloud project (from a mac or Linux), do the following:
Step 1. Install SSH keys without a password
Use the following command to generate the keys on your Mac:
ssh-keygen -t rsa -f ~/.ssh/<private-key-name> -C <your gcloud username>
For example, private-key-name can be bpa-ssh-key. It will create two files with the following names in the ~/.ssh directory:
bpa-ssh-key
bpa-ssh-key.pub
Step 2. Update the public key on your GCP project
Go to the Google Cloud Console, choose your project, then
VM Instances -> Metadata -> SSH Keys -> Edit -> Add Item
Copy and paste the contents of bpa-ssh-key.pub (from your Mac) here and then save.
Reset the VM instance if it is running.
Step 3. Edit the config file under ~/.ssh on your Mac
Edit ~/.ssh/config to add the following lines if they are not present already:
Host *
    PubKeyAuthentication yes
    IdentityFile ~/.ssh/bpa-ssh-key
Step 4. SSHing to GCP Instance
ssh username@gcloud-externalip
It should create an SSH shell on the gcloud instance without asking for a password (since you created the RSA/SSH keys without one).
Since metadata is common across all instances under the same project, you can seamlessly SSH into any of the instances by choosing the respective external IP of the gcloud instance.

Change key pair for ec2 instance

How do I change the key pair for my EC2 instance in the AWS Management Console? I can stop the instance and I can create a new key pair, but I don't see any link to modify the instance's key pair.
This answer is useful in the case you no longer have SSH access to the existing server (i.e. you lost your private key).
If you still have SSH access, please use one of the answers below.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#replacing-lost-key-pair
Here is what I did, thanks to Eric Hammond's blog post:
Stop the running EC2 instance
Detach its /dev/xvda1 volume (let's call it volume A) - see here
Start new t1.micro EC2 instance, using my new key pair. Make sure you create it in the same subnet, otherwise you will have to terminate the instance and create it again. - see here
Attach volume A to the new micro instance, as /dev/xvdf (or /dev/sdf)
SSH to the new micro instance and mount volume A to /mnt/tmp
$ sudo mkdir /mnt/tmp; sudo mount /dev/xvdf1 /mnt/tmp
Copy ~/.ssh/authorized_keys to /mnt/tmp/home/ubuntu/.ssh/authorized_keys
Logout
Terminate micro instance
Detach volume A from it
Attach volume A back to the main instance as /dev/xvda
Start the main instance
Login as before, using your new .pem file
That's it.
Once an instance has been started, there is no way to change the key pair associated with the instance at a metadata level, but you can change what SSH key you use to connect to the instance.
There is a startup process on most AMIs that downloads the public ssh key and installs it in a .ssh/authorized_keys file so that you can ssh in as that user using the corresponding private ssh key.
If you want to change what ssh key you use to access an instance, you will want to edit the authorized_keys file on the instance itself and convert to your new ssh public key.
The authorized_keys file is under the .ssh subdirectory under the home directory of the user you are logging in as. Depending on the AMI you are running, it might be in one of:
/home/ec2-user/.ssh/authorized_keys
/home/ubuntu/.ssh/authorized_keys
/root/.ssh/authorized_keys
After editing an authorized_keys file, always use a different terminal to confirm that you are able to ssh in to the instance before you disconnect from the session you are using to edit the file. You don't want to make a mistake and lock yourself out of the instance entirely.
While you're thinking about ssh keypairs on EC2, I recommend uploading your own personal ssh public key to EC2 instead of having Amazon generate the keypair for you.
Here's an article I wrote about this:
Uploading Personal ssh Keys to Amazon EC2
http://alestic.com/2010/10/ec2-ssh-keys
This would only apply to new instances you run.
Run this command after you download your AWS pem.
ssh-keygen -f YOURKEY.pem -y
Then dump the output into authorized_keys.
Or copy the pem file to your AWS instance and execute the following commands:
chmod 600 YOURKEY.pem
and then
ssh-keygen -f YOURKEY.pem -y >> ~/.ssh/authorized_keys
Instructions from AWS EC2 support:
Change pem login
Go to your EC2 console
Under NETWORK & SECURITY, click on Key Pairs, then click Create Key Pair
Give your new key pair a name and save the .pem file. The name of the key pair will be used to connect to your instance
Create an SSH connection to your instance and keep it open
In PuTTYgen, click "Load" to load your .pem file
Keep the SSH-2 RSA radio button checked. Click on "Save private key"
You'll get a pop-up warning window; click "Yes"
Click on "Save public key" as well, to generate the public key. This is the public key that we're going to copy across to your current instance
Save the public key with the new key pair name and with the extension .pub
Open the public key content in a text editor
Copy the content below "Comment: "imported-openssh-key"" and before "---- END SSH2 PUBLIC KEY ----". Note - you need to copy the content as one line - delete all new lines
On your connected instance, open your authorized_keys file using the vi tool. Run the following command: vi .ssh/authorized_keys
You should also see the original public key in the file
Move your cursor to the end of your first public key's content, then type "i" for insert
On the new line, type "ssh-rsa" and add a space, then paste the content of the public key, a space, and the name of the .pem file (without the .pem)
Note - you should get a line with the same format as the previous line
Press the Esc key, and then type :wq!
This will save the updated authorized_keys file
Now try opening a new SSH session to your instance using your new key pair
When you've confirmed you're able to SSH into the instance using the new key pair, you can vi .ssh/authorized_keys and delete the old key.
In answer to Shaggie's remark:
If you are unable to connect to the instance (e.g. the key is corrupted), then use the AWS console to detach the volume (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html) and attach it to a working instance, then change the key on the volume and reattach it to the previous instance.
I noticed that when managed by Elastic Beanstalk, you can change your active EC2 key pair. Under Elastic Beanstalk > Configuration > Security, choose the new key from the EC2 key pair drop-down. You'll see this message asking if you're sure:
EC2KeyName: Changes to option EC2KeyName settings will not take effect immediately. Each of your existing EC2 instances will be replaced and your new settings will take effect then.
My instance was already terminated when I did this. It then started, terminated, and started again. Apparently "replacing" means terminating and creating a new instance. If you've modified your boot volume, create an AMI first, then specify that AMI in the same Elastic Beanstalk > Configuration > Instances form as the Custom AMI ID. This also warns about replacing the EC2 instances.
After you've modified your EC2 key pair and Custom AMI ID, and after seeing warnings about both, click Save to continue.
Remember that the IP address changes when the instance is re-created so you'll need to retrieve a new IP address from the EC2 console to use when connecting via SSH.
I went through this approach, and after some time was able to make it work. The lack of actual commands made it tough, but I figured it out. HOWEVER, a much easier approach was found and tested shortly after:
Save your instance as an AMI (reboot or not, I suggest reboot). This will only work if the instance is EBS-backed.
Then simply start an instance from this AMI and assign your new key file.
Move over your elastic IP (if applicable) to your new instance, and you are done.
There are two scenarios asked about in this question:
1) You don't have access to the .pem file, which is why you want to create a new one.
2) You have access to the .pem file, but you just want to change it or create a new .pem file for security purposes.
So if you lost your keys, you can scroll up and see the other answers. But if you simply want to change your .pem file for security purposes, follow these steps:
1) Go to the AWS console, log in, and create a new .pem file from the key-pair section there. The .pem file will automatically be downloaded to your PC
2) Change its permissions to 400. If you are using Linux/Ubuntu, run the command below
chmod 400 yournewfile.pem
3) Generate the RSA public key of the newly downloaded file on your local machine
ssh-keygen -f yournewfile.pem -y
4) Copy the RSA public key from the output
5) Now SSH to your instance via the previous .pem file
ssh -i oldpemfileName.pem username@ipaddress
sudo vim ~/.ssh/authorized_keys
6) Leave one or two lines of space, paste the copied RSA public key of the new file there, and then save the file
7) Now your new .pem file is linked to the running instance
8) If you want to disable access via the previous .pem file, just edit
sudo vim ~/.ssh/authorized_keys
again and remove or change the previous RSA public key from there.
Note: remove it carefully so that the newly added RSA key does not get changed.
In this way, you can change/connect the new .pem file to your running instance.
You can revoke access to the previously generated .pem file for security purposes.
Hope it helps!
Steps:
Create new key e.g. using AWS console, the PuTTY Key Generator, or ssh-keygen
Stop instance
Set instance user data to push public key to server
Start instance
#cloud-config
cloud_final_modules:
- [once]
bootcmd:
- echo 'ssh-rsa AAAAB3Nz...' > /home/USERNAME/.ssh/authorized_keys
where USERNAME is the expected username for the machine. A list of default usernames is available from AWS.
Step-by-step instructions from AWS
I believe the simplest approach is to:
Create an AMI image of the existing instance.
Launch a new EC2 instance using the AMI image (created in step 1) with a new key pair.
Log in to the new EC2 instance with the new key.
If the steps below are followed, it will save a lot of time and there will be no need to stop the running instance (a one-line sketch follows after the steps).
Start a new t1.micro EC2 instance using the new key pair. Make sure you create it in the same subnet; otherwise you will have to terminate the instance and create it again.
SSH to the new micro instance and copy the content of ~/.ssh/authorized_keys somewhere on your computer.
Log in to the main instance with the old SSH key.
Copy and replace the file content from step 2 into ~/.ssh/authorized_keys.
Now you can log in only with the new key. The old key will not work anymore.
That is it. Enjoy :)
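As promised above, a rough one-liner covering steps 2-4 (the instance IPs, key file names, and the ubuntu user are placeholders):
ssh -i new-key.pem ubuntu@NEW_MICRO_INSTANCE_IP 'cat ~/.ssh/authorized_keys' | ssh -i old-key.pem ubuntu@MAIN_INSTANCE_IP 'cat > ~/.ssh/authorized_keys'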
In case you are using the Elastic Beanstalk platform, you can change the keys by going to:
Elastic Beanstalk panel
Configuration
Instances (cog top-right)
EC2 key pair
This will terminate the current instance and create a new one with the chosen keys/settings.
The simplest solution is to copy the contents of
~/.ssh/id_rsa.pub
into your AWS instance's authorized_keys at
~/.ssh/authorized_keys
This will allow you to ssh into the EC2 instance without specifying a pem file for the ssh command. You can remove all other keys once you've tested connecting to it.
If you need to create a new key to share it with someone else, you can do that with:
ssh-keygen -t rsa
which will create the private key file, and you can get the public key of it with:
ssh-keygen -f private_key.pem -y > public_key.pub
Anyone who has private_key.pem will be able to connect with
ssh user@host.com -i private_key.pem
You don't need to rotate the root device and change the SSH public key in authorized_keys. Instead, you can utilize user data to add SSH keys to any instance. For that, first create a new key pair using the AWS console or ssh-keygen.
ssh-keygen -f YOURKEY.pem -y
This will generate the public key for your new SSH key pair; copy this public key and use it in the script below.
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0
--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"
#cloud-config
cloud_final_modules:
- [scripts-user, always]
--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"
#!/bin/bash
/bin/echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC6xigPPA/BAjDPJFflqNuJt5QY5IBeBwkVoow/uBJ8Rorke/GT4KMHJ3Ap2HjsvjYrkQaKANFDfqrizCmb5PfAovUjojvU1M8jYcjkwPG6hIcAXrD5yXdNcZkE7hGK4qf2BRY57E3s25Ay3zKjvdMaTplbJ4yfM0UAccmhKw/SmH0osFhkvQp/wVDzo0PyLErnuLQ5UoMAIYI6TUpOjmTOX9OI/k/zUHOKjHNJ1cFBdpnLTLdsUbvIJbmJ6oxjSrOSTuc5mk7M8HHOJQ9JITGb5LvJgJ9Bcd8gayTXo58BukbkwAX7WsqCmac4OXMNoMOpZ1Cj6BVOOjhluOgYZbLr" >> /home/hardeep/.ssh/authorized_keys
--//
After the restart, the machine will have the specified SSH public key.
Remove the user data after the first restart. Read more about user data on startup.
I have tried the steps below and they worked without stopping the instance. My requirement was that, as I had changed my client machine, the old .pem file was not allowing me to log in to the EC2 instance.
Log in to the EC2 instance using your old .pem file from the old machine. Open ~/.ssh/authorized_keys
You will see your old keys in that file.
ssh-keygen -f YOUR_PEM_FILE.pem -y
It will generate a key. Append the key to the ~/.ssh/authorized_keys file opened in step 1. There is no need to delete the old key.
From the AWS console, create a new key pair and store it on your new machine. Rename it to the old pem file name, because the old pem file is still associated with the EC2 instance in AWS.
All done.
I am now able to log in to the AWS EC2 instance from my new client machine.
You have several options to replace the key of your EC2 instance.
You can replace the key manually in the .ssh/authorized_keys file. However, this requires you to actually have access to the instance, or to the volume if it is unencrypted.
You can use AWS Systems Manager. This requires the SSM agent to be installed.
Since the first option can be found easily in the answers or with the search engine of your choice, I want to focus on Systems Manager.
Open the Systems Manager service
Click on Automation on the left side.
Click on Execute Automation
Select AWSSupport-TroubleshootSSH (usually it is on the last page)
You can find more information on the Official AWS Documentation
Thanks for the tips guys. Will definitely keep them in mind when I need to reset the key pairs.
However, in the interest of efficiency and laziness I've come up with something else:
Create your new key pair and download the credentials
Right-click your instance > Create AMI. Once it is done, terminate your instance (or just stop it until you are sure you can create another one from your new shiny AMI).
Start a new EC2 instance from the AMI you just created and specify your new key pair created in step (1) above.
Hope this can be of use to you and save you some time as well as minimize the amount of white hair you get from stuff like this :)
What you can do...
Create a new Instance Profile / Role that has the AmazonEC2RoleForSSM policy attached.
Attach this Instance Profile to the instance.
Use SSM Session Manager to login to the instance.
Use ssh-keygen on your local machine to create a key pair.
Push the public part of that key onto the instance using your SSM session (a sketch follows below).
Profit.
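A rough sketch of the last two steps (the key name, instance ID, and ec2-user account are placeholders; it assumes the AWS CLI and Session Manager plugin are installed locally):
ssh-keygen -t rsa -f ~/.ssh/replacement-key        # on your local machine
aws ssm start-session --target i-0123456789abcdef0
# inside the SSM session, append the contents of replacement-key.pub (sudo because SSM logs you in as ssm-user):
sudo sh -c 'echo "ssh-rsa AAAA... you@laptop" >> /home/ec2-user/.ssh/authorized_keys'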
This is for those who have two different pem files and, for security purposes, want to discard one of the two. Let's say we want to discard 1.pem.
Connect to server 2 and copy the SSH key from its ~/.ssh/authorized_keys.
Connect to server 1 in another terminal and paste that key into ~/.ssh/authorized_keys. You will now have two public SSH keys there.
Now, just to be confident, try to connect to server 1 with 2.pem. You will be able to connect to server 1 with both 1.pem and 2.pem.
Now, comment out the 1.pem entry and connect using ssh -i 2.pem user@server1.
Yegor256's answer worked for me, but I thought I would just add some comments to help out those who are not so good at mounting drives (like me!):
Amazon gives you a choice of what you want to name the volume when you attach it. You have to use a name in the range /dev/sda - /dev/sdp.
Newer versions of Ubuntu will then rename what you put in there to /dev/xvd(x) or something to that effect.
So for me, I chose /dev/sdp as the device name in AWS, then I logged into the server and discovered that Ubuntu had renamed my volume to /dev/xvdp1. I then had to mount the drive; for me it worked like this:
mount -t ext4 /dev/xvdp1 /mnt/tmp
After jumping through all those hoops I could access my files at /mnt/tmp
This will work only if you have access to the instance you want to change/add the key on.
You can create a new key pair. Or, if you already have a key pair, you can paste the public key of the new pair into the authorized_keys file on your instance.
vim .ssh/authorized_keys
Now you can use the private key for that pair and log in.
Hope this helps.
My issue was that I tried with the IP rather than the public DNS. Then I tried with the public DNS and it was resolved.
If you are unable to log in to the VM and have deleted your SSH keys, you can also change the key pair of your EC2 instance using the steps below.
Go step by step:
1) Stop your EC2 instance.
2) Take a snapshot of the VM and its storage.
3) Create a new VM; while creating it, select your snapshot and create the VM from your snapshot.
4) During creation of the VM, download your new key pair.
5) Once your VM is up, you can SSH in with the new key pair and your data will also be back.
Alternative solution, if you only have access to the server: in that case, don't remove the pem file from the AWS console. Just remove the pem key's entry with sudo nano ~/.ssh/authorized_keys and add your system's public SSH key. Now you have access with ssh user@i.p
If anybody is here because they can't access an EC2 instance since they don't have the key pair, but they do have IAM access, you can run the following command to allow temporary access (60 seconds) to your EC2 instance using a key you already have, as long as you know the username (which is usually 'ubuntu' for Ubuntu instances or 'ec2-user' for Amazon Linux instances):
aws ec2-instance-connect send-ssh-public-key --region ${your-aws-region} --instance-id ${your-instance-id} --availability-zone ${your-instance-az} --instance-os-user ${username} --ssh-public-key file://path/to/public/key
(If you have multiple credential profiles in your ~/.aws/credentials file, you can specify one by also adding the flag '--profile your-profile' to this command.)
The output will look something like this if successful:
{
    "RequestId": "3537268d-c161-41bb-a4ac-977b79b2bdc0",
    "Success": true
}
Then you have 60 seconds to ssh in using that key.
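Within that 60-second window you can then connect with the matching private key (the username, key path, and address are placeholders):
ssh -i /path/to/private/key ec2-user@INSTANCE_PUBLIC_IP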