Add Keypair to existing EC2 instance - amazon-web-services

I was given AWS Console access to an account with 2 instances running that I cannot shut down (they are in production). I would, however, like to gain SSH access to these instances. Is it possible to create a new keypair and apply it to the instances so I can SSH in? Obtaining the existing pem file for the keypair the instances were created under is currently not an option.
If this isn't possible is there some other way I can get into the instances?

You can't apply a keypair to a running instance. You can only use the new keypair to launch a new instance.
For recovery, if it's an EBS-boot AMI, you can stop the instance, make a snapshot of its root volume, and create a new volume based on that snapshot. You can then attach that volume back to start the old instance, create a new image from it, or recover the data.
Data on ephemeral storage, however, will be lost.
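If you only have the current AWS CLI rather than the old ec2-* tools, the same recovery steps would look roughly like this (the IDs and availability zone are placeholders, not values from the question):
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "backup of root volume"
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a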
Due to the popularity of this question and answer, I wanted to capture the information in the link that Rodney posted on his comment.
Credit goes to Eric Hammond for this information.
Fixing Files on the Root EBS Volume of an EC2 Instance
You can examine and edit files on the root EBS volume of an EC2 instance even if you are in what you would consider a disastrous situation, like:
You lost your ssh key or forgot your password
You made a mistake editing the /etc/sudoers file and can no longer gain root access with sudo to fix it
Your long-running instance is hung for some reason, cannot be contacted, and fails to boot properly
You need to recover files off of the instance but cannot get to it
On a physical computer sitting at your desk, you could simply boot the system with a CD or USB stick, mount the hard drive, check out and fix the files, then reboot the computer to be back in business.
A remote EC2 instance, however, seems distant and inaccessible when you are in one of these situations. Fortunately, AWS provides us with the power and flexibility to be able to recover a system like this, provided that we are running EBS boot instances and not instance-store.
The approach on EC2 is somewhat similar to the physical solution, but we’re going to move and mount the faulty “hard drive” (root EBS volume) to a different instance, fix it, then move it back.
In some situations, it might simply be easier to start a new EC2 instance and throw away the bad one, but if you really want to fix your files, here is the approach that has worked for many:
Setup
Identify the original instance (A) and volume that contains the broken root EBS volume with the files you want to view and edit.
instance_a=i-XXXXXXXX
volume=$(ec2-describe-instances $instance_a |
egrep '^BLOCKDEVICE./dev/sda1' | cut -f3)
Identify the second EC2 instance (B) that you will use to fix the files on the original EBS volume. This instance must be running in the same availability zone as instance A so that it can have the EBS volume attached to it. If you don’t have an instance already running, start a temporary one.
instance_b=i-YYYYYYYY
Stop the broken instance A (waiting for it to come to a complete stop), detach the root EBS volume from the instance (waiting for it to be detached), then attach the volume to instance B on an unused device.
ec2-stop-instances $instance_a
ec2-detach-volume $volume
ec2-attach-volume --instance $instance_b --device /dev/sdj $volume
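If you are using the current AWS CLI instead of the legacy ec2-api-tools shown above, the equivalent calls would look something like this (same shell variables as above):
aws ec2 stop-instances --instance-ids $instance_a
aws ec2 detach-volume --volume-id $volume
aws ec2 attach-volume --volume-id $volume --instance-id $instance_b --device /dev/sdj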
ssh to instance B and mount the volume so that you can access its file system.
ssh ...instance b...
sudo mkdir -m 000 /vol-a
sudo mount /dev/sdj /vol-a
Fix It
At this point your entire root file system from instance A is available for viewing and editing under /vol-a on instance B. For example, you may want to:
Put the correct ssh keys in /vol-a/home/ubuntu/.ssh/authorized_keys
Edit and fix /vol-a/etc/sudoers
Look for error messages in /vol-a/var/log/syslog
Copy important files out of /vol-a/…
Note: The uids on the two instances may not be identical, so take care if you are creating, editing, or copying files that belong to non-root users. For example, your mysql user on instance A may have the same UID as your postfix user on instance B which could cause problems if you chown files with one name and then move the volume back to A.
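For the most common case (a lost ssh key), a minimal sketch of the fix on instance B might be the following, assuming an Ubuntu image and that your new public key sits in ~/new-key.pub; because of the UID caveat above, it only appends and fixes permissions rather than chown-ing by name:
# append the new public key to the mounted root volume's authorized_keys
cat ~/new-key.pub | sudo tee -a /vol-a/home/ubuntu/.ssh/authorized_keys
# make sure the file still has the permissions sshd expects
sudo chmod 600 /vol-a/home/ubuntu/.ssh/authorized_keys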
Wrap Up
After you are done and you are happy with the files under /vol-a, unmount the file system (still on instance-B):
sudo umount /vol-a
sudo rmdir /vol-a
Now, back on your system with the ec2-api-tools, continue moving the EBS volume back to its home on the original instance A and start the instance again:
ec2-detach-volume $volume
ec2-attach-volume --instance $instance_a --device /dev/sda1 $volume
ec2-start-instances $instance_a
Hopefully, you fixed the problem, instance A comes up just fine, and you can accomplish what you originally set out to do. If not, you may need to continue repeating these steps until you have it working.
Note: If you had an Elastic IP address assigned to instance A when you stopped it, you’ll need to reassociate it after starting it up again.
Remember! If your instance B was temporarily started just for this process, don’t forget to terminate it now.

Though you can't add a key pair to a running EC2 instance directly, you can create a Linux user on it and create a new key pair for that user, then use it like you would the original user's key pair.
In your case, you can ask the instance owner (who created it) to do the following. That way, the instance owner doesn't have to share his own keys with you, but you would still be able to ssh into these instances. These steps were originally posted by Utkarsh Sengar (aka @zengr) at http://utkarshsengar.com/2011/01/manage-multiple-accounts-on-1-amazon-ec2-instance/. I've made only a few small changes.
Step 1: log in as the default “ubuntu” user:
$ ssh -i my_orig_key.pem ubuntu@111.111.11.111
Step 2: create a new user, we will call our new user “john”:
[ubuntu@ip-11-111-111-111 ~]$ sudo adduser john
Set password for “john” by:
[ubuntu@ip-11-111-111-111 ~]$ sudo su -
[root@ip-11-111-111-111 ubuntu]# passwd john
Add “john” to the sudoers list by:
[root@ip-11-111-111-111 ubuntu]# visudo
.. and add the following to the end of the file:
john ALL = (ALL) ALL
Alright! We have our new user created; now you need to generate the key file which will be needed to log in, like we have my_orig_key.pem in Step 1.
Now, exit and go back to ubuntu, out of root.
[root@ip-11-111-111-111 ubuntu]# exit
[ubuntu@ip-11-111-111-111 ~]$
Step 3: creating the public and private keys:
[ubuntu@ip-11-111-111-111 ~]$ su john
Enter the password you created for “john” in Step 2. Then create a key pair. Remember that the passphrase for key pair should be at least 4 characters.
[john@ip-11-111-111-111 ubuntu]$ cd /home/john/
[john@ip-11-111-111-111 ~]$ ssh-keygen -b 1024 -f john -t dsa
[john@ip-11-111-111-111 ~]$ mkdir .ssh
[john@ip-11-111-111-111 ~]$ chmod 700 .ssh
[john@ip-11-111-111-111 ~]$ cat john.pub > .ssh/authorized_keys
[john@ip-11-111-111-111 ~]$ chmod 600 .ssh/authorized_keys
[john@ip-11-111-111-111 ~]$ sudo chown john:ubuntu .ssh
In the above step, john is the user we created and ubuntu is the default user group.
[john@ip-11-111-111-111 ~]$ sudo chown john:ubuntu .ssh/authorized_keys
Step 4: now you just need to download the key called “john”. I use scp to download/upload files from EC2; here is how you can do it.
You will still need to copy the file using the ubuntu user, since you only have the key for that user name. So, you will need to move the key to the ubuntu folder and chmod it to 777.
[john@ip-11-111-111-111 ~]$ sudo cp john /home/ubuntu/
[john@ip-11-111-111-111 ~]$ sudo chmod 777 /home/ubuntu/john
Now go to your local machine’s terminal, where you have the my_orig_key.pem file, and do this:
$ cd ~/.ssh
$ scp -i my_orig_key.pem ubuntu@111.111.11.111:/home/ubuntu/john john
The above command will copy the key “john” to the present working directory on your local machine. Once you have copied the key to your local machine, you should delete “/home/ubuntu/john”, since it’s a private key.
Now, on your local machine, chmod john to 600.
$ chmod 600 john
Step 5: time to test your key:
$ ssh -i john john@111.111.11.111
So, in this manner, you can setup multiple users to use one EC2 instance!!

For Elastic Beanstalk environments, you can apply a key pair to a running instance like this:
Create a key pair from EC2 -> Key Pairs (under the NETWORK & SECURITY tab)
Go to Elastic Beanstalk and click on your application
Go to Configuration -> Security and click Edit
Choose your EC2 key pair and click Apply
Click confirm to confirm the update. It will terminate and replace the environment's instances, applying the key pair to them.

On your local machine, run command:
ssh-keygen -t rsa -C "SomeAlias"
After that command runs, a file ending in *.pub will be generated. Copy the contents of that file.
On the Amazon machine, edit ~/.ssh/authorized_keys and paste the contents of the *.pub file (and remove any existing contents first).
You can then SSH using the other file that was generated from the ssh-keygen command (the private key).
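Putting those steps together, a minimal sketch (the key file name, user and host are placeholders):
# on your local machine: generate a key pair
ssh-keygen -t rsa -C "SomeAlias" -f ~/.ssh/my_new_key
# print the public half and paste it onto its own line of ~/.ssh/authorized_keys on the instance
cat ~/.ssh/my_new_key.pub
# afterwards, connect with the matching private key
ssh -i ~/.ssh/my_new_key ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com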

This happened to me earlier (didn't have access to an EC2 instance someone else created but had access to AWS web console) and I blogged the answer: http://readystate4.com/2013/04/09/aws-gaining-ssh-access-to-an-ec2-instance-you-lost-access-to/
Basically, you can detach the EBS drive, attach it to an EC2 instance that you do have access to, add your SSH public key to ~ec2-user/.ssh/authorized_keys on the attached drive, then put it back on the old EC2 instance. A step-by-step walkthrough is in the link, using the Amazon AMI.
No need to make snapshots or create a new cloned instance.

I didn't find an easy way to add a new key pair via the console, but you can do it manually.
Just ssh into your EC2 box with the existing key pair. Then edit ~/.ssh/authorized_keys and add the new public key on a new line. Exit, then ssh in again using the new key. Success!
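In practice that edit is just appending one line, for example (the key material is a placeholder):
echo 'ssh-rsa AAAAB3Nz...your-new-public-key... some-comment' >> ~/.ssh/authorized_keys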

In my case I used this documentation to associate a key pair with my instance of Elastic Beanstalk
Important
You must create an Amazon EC2 key pair and configure your Elastic Beanstalk–provisioned Amazon EC2 instances to use the Amazon EC2 key pair before you can access your Elastic Beanstalk–provisioned Amazon EC2 instances. You can set up your Amazon EC2 key pairs using the AWS Management Console. For instructions on creating a key pair for Amazon EC2, see the Amazon Elastic Compute Cloud Getting Started Guide.
Configuring Amazon EC2 Server Instances with Elastic Beanstalk

You can just add a new key to the instance by the following command:
ssh-copy-id -i ~/.ssh/id_rsa.pub domain_alias
You can configure domain_alias in ~/.ssh/config:
host domain_alias
    User ubuntu
    Hostname domain.com
    IdentityFile ~/.ssh/ec2.pem
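Once ssh-copy-id has appended your public key to the instance's authorized_keys, you should be able to log in with your own key instead of the .pem file; a quick check, using the user and host from the config above:
ssh -i ~/.ssh/id_rsa ubuntu@domain.com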

Once an instance has been started, there is no way to change the keypair associated with the instance at a metadata level, but you can change what ssh key you use to connect to the instance.
stackoverflow.com/questions/7881469/change-key-pair-for-ec2-instance

You can actually add a key pair through the Elastic Beanstalk config page. It then restarts your instance for you and everything works.

Related

I have a problem connecting to AWS EC2 via SSH with .pem

I used the following commands in the directory where the .pem file exists, in order to use the Chrome extension Secure Shell.
$ sudo chmod 400 myKeyPair.pem
$ ssh-keygen -y -f myKeyPair.pem > myKeyPair.pub
$ touch myKeyPair
$ sudo cat myKeyPair.pem > myKeyPair
And it worked perfectly within the Secure Shell Extension.
Then I deleted all the files and created a new key pair (with the same name), and tried to ssh through the macOS terminal. However, this results in "Permission denied (publickey)." The .pem is a new file, but the previous commands still seem to have an effect.
How can I take an existing .pem, do the public key conversion, and make the SSH connection with just that single .pem?
Ah! And I have another, completely different question. For example, after creating an EC2 instance via the WordPress AMI in the AWS Marketplace and writing a post, is that post stored in EBS?
Thanks in advance to everyone who answers.
When a Keypair is generated, it contains a random key. Therefore, every time a keypair is generated, it is different. The actual name of a keypair is irrelevant.
So if you do the following:
Create a keypair
Launch an EC2 instance providing that keypair
Delete the keypair
then you will never be able to login to the instance because you no longer have the keypair used when the instance was launched.
What actually happens is that when an instance is launched, some code on the instance copies the public half of the keypair into the /home/ec2-user/.ssh/authorized_keys file. Then, when somebody tries to log in with the private half of the keypair, the SSH daemon checks that the two halves match. If they do, the user is allowed to log in.
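If you are unsure whether a .pem file you hold matches what is installed on the instance, you can print its public half locally and compare it with the line in authorized_keys; a small sketch, with the file name as a placeholder:
# derive the public key from the private .pem you have locally
ssh-keygen -y -f mykeypair.pem
# on the instance, compare it with the installed key
cat ~/.ssh/authorized_keys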

Adding to authorized_keys via AWS EC2 Instance User Data

Recently I made the silly mistake of clearing the contents of my user's ~/.ssh/authorized_keys file on my AWS instance. As such, I can no longer ssh onto the instance.
I realised I could add these keys back via AWS EC2 instance user data. However as of yet I have had no luck with this. I stopped my instance, added the following to the user data and started it again:
#!/bin/bash
> /home/myUser/.ssh/authorized_keys
echo "ssh-rsa aaa/bbb/ccc/ddd/etc== mykeypair" >> /home/myUser/.ssh/authorized_keys
chown myUser:myUser /home/myUser/.ssh/authorized_keys
chmod 600 /home/myUser/.ssh/authorized_keys
This should empty the file, add the public key and ensure the correct permissions are present on the file.
However my private key is still being rejected.
I know the keys are correct so it must be something to do with my instance user data. I have also tried prepending 'sudo' to all commands.
Try using cloud-init directives instead of a shell script:
#cloud-config
cloud_final_modules:
- [users-groups, always]
users:
  - name: example_user
    groups: [ wheel ]
    sudo: [ "ALL=(ALL) NOPASSWD:ALL" ]
    shell: /bin/bash
    ssh-authorized-keys:
    - ssh-rsa AAAAB3Nz<your public key>...
The default behavior is to execute once per instance. However, these instructions add the key on every reboot or restart of the instance. If the user data is removed, the default functionality is restored. These instructions are for use on all OS distributions that support cloud-init directives.
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-user-account-cloud-init-user-data/
From the official docs:
By default, user data and cloud-init directives only run during the first boot cycle when you launch an instance. However, AWS Marketplace vendors and owners of third-party AMIs may have made their own customizations for how and when scripts run.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
So modifying user data after you shut down your instance will not be useful in most cases.
Solution: you can detach your EBS volume, attach it to an EC2 instance you can connect to, mount the volume, fix authorized_keys, then attach the volume back to the affected instance.

Multiple users connecting to AWS EC2 via SSH

I was trying to organize how a developer could connect via SSH to an AWS instance that I had launched as an administrator. I notice in the documentation that it says:
you'll need the fully-qualified path of the .pem file for the key pair that you specified when you launched the instance.
Does this mean one can only SSH into an instance that one had launched? I'd like to just leave the instance running and have others able to SSH in to install software and configure it.
Here's how to add new users/developers to an AMAZON EC2 linux instance and give them unique SSH Key access:
Say you are creating "user": Create a key on your own machine by entering the following:
ssh-keygen -b 1024 -f user -t dsa
Don't use a passphrase -- just hit Enter.
You should now have two files created: user and user.pub
chmod 600 user.pub
Now transfer the public key file (user.pub) from your computer to the server. For example let us use the /tmp/ directory.
Now SSH into your server using an account with root access. You will need to create the user and also create the necessary files and set ownership so you can use the key you just created:
# sudo su (if needed)
# useradd -c "firstname lastname" user
# cd /home/user
# mkdir .ssh
# chmod 700 .ssh
# chown user:user .ssh
# cat /tmp/user.pub >> .ssh/authorized_keys
# chmod 600 .ssh/authorized_keys
# chown user:user .ssh/authorized_keys
Once you've done this, exit out back to your own machine, then try to SSH using the new credential and user account you've created:
ssh -i user.pem user@ec2-your-instance-name.compute-1.amazonaws.com
When you create an instance, you can specify a key at launch time. What ends up happening is that AWS takes the public key associated with the key pair you created, and puts it into authorized_keys in /home/ec2-user/.ssh/.
So, if this is a one-time thing, you could provide the private key (the .pem file you downloaded when you created the key) to the user that needs access.
If this is an on-going issue - i.e. you will be creating lots of instances and having lots of different people who need to access them - then you need a better solution, but you'll have to figure out what your requirements are.
Some potential solutions would be to get public keys from your users, add them to an instance, and then create an AMI from that instance. Use that AMI to launch future instances. You could also put users public keys into S3, and have a script that pulled them down when the instance was created & either added them to authorized_keys or created individual users. You could pull users keys from IAM if all your users have IAM accounts. Or you could use a directory & configure your instance to use that for authentication.
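As a rough illustration of the S3 idea (the bucket name, key file and user are made up for the example, and the instance would need an IAM role allowing s3:GetObject), a user-data script could pull the keys at first boot:
#!/bin/bash
# hypothetical example: fetch the team's public keys from S3 and append them for ec2-user
aws s3 cp s3://my-team-keys/authorized_keys /tmp/team_keys
cat /tmp/team_keys >> /home/ec2-user/.ssh/authorized_keys
chown ec2-user:ec2-user /home/ec2-user/.ssh/authorized_keys
chmod 600 /home/ec2-user/.ssh/authorized_keys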
One way to set up user management is:
Ask all the users to create their own public-private key pair.
Everyone shares/copies/checks in their public key to some location where we can run an rsync server to expose all the public keys.
On the server, fetch all these keys from the rsync server and run a user-creation command.
Another way is to have a base image in AWS with the users already created and their public keys in authorized_keys, and to use this image to create new instances.

AWS EC2 Permission denied

When I try to log in to my EC2 instance through ssh, I get the error Permission denied (publickey).
I checked the host name and username; everything is fine. The last time I was logged in, I wrongly ran chmod -R 777 . from the EC2 instance's root directory, and after that I could not connect to the instance. I need some files from the instance. Is there any way to log in to it? I also tried with a new instance, and that works.
Is there any possibility?
I haven't tried this myself, but I don't see why it wouldn't work.
Try snapshotting your instance (create image button from ec2 console). Once complete, find your snapshot in the Ec2 console. It should be backed by an EBS volume with an id of the pattern "vol-xxxxxxxx".
Spin up a new instance and attach "vol-xxxxxxxx" as secondary storage. SSH to the new instance and mount the device "vol-xxxxxxxx" correlates to (e.g. /dev/xvdf) to a temp directory and find the files you're looking for.
Detach your root volume and attach to another instance.
Login to the ec2 instance
mkdir tempfolder
sudo mount /dev/xvdf1 tempfolder (the device is normally /dev/xvdf1; you can list your volumes to make sure)
cd tempfolder/home
chmod 700 -R ec2-user
sudo umount tempfolder
Detach the volume and attach it to the old instance; remember it's the root volume, so you attach it with the name "/dev/xvda".
I faced a similar problem.
You will not be able to recover the old instance; just create a new instance and set the permissions with
chmod 777 (without the -R option), and your problem will be resolved.
One reason can be that your key file must not be publicly viewable for SSH to work. Use this command if needed:
chmod 400 mykey.pem
Also keep in mind the correct user id for EC2(ec2-user) instance and the command:
ssh -l ec2-user -i .ssh/yourkey.pem public-ec2-host
Use Winscp to revert the permission change.
Recently I had accidentally changed the "/home/ec2-user" directory permissions to 777 using PuTTY. I was immediately logged out. I was also logged into the server using Winscp, and that session didn't get disconnected after changing the permissions.
The solution was to change the permissions on "/home/ec2-user" back to 700 using Winscp, and I was able to log back in. It worked for me. Winscp saved me a lot of trouble.

Change key pair for ec2 instance

How do I change the key pair for my ec2 instance in AWS management console? I can stop the instance, I can create new key pair, but I don't see any link to modify the instance's key pair.
This answer is useful in the case you no longer have SSH access to the existing server (i.e. you lost your private key).
If you still have SSH access, please use one of the answers below.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#replacing-lost-key-pair
Here is what I did, thanks to Eric Hammond's blog post:
Stop the running EC2 instance
Detach its /dev/xvda1 volume (let's call it volume A) - see here
Start new t1.micro EC2 instance, using my new key pair. Make sure you create it in the same subnet, otherwise you will have to terminate the instance and create it again. - see here
Attach volume A to the new micro instance, as /dev/xvdf (or /dev/sdf)
SSH to the new micro instance and mount volume A to /mnt/tmp
$ sudo mkdir /mnt/tmp; sudo mount /dev/xvdf1 /mnt/tmp
Copy ~/.ssh/authorized_keys to /mnt/tmp/home/ubuntu/.ssh/authorized_keys
Logout
Terminate micro instance
Detach volume A from it
Attach volume A back to the main instance as /dev/xvda
Start the main instance
Login as before, using your new .pem file
That's it.
Once an instance has been started, there is no way to change the keypair associated with the instance at a meta data level, but you can change what ssh key you use to connect to the instance.
There is a startup process on most AMIs that downloads the public ssh key and installs it in a .ssh/authorized_keys file so that you can ssh in as that user using the corresponding private ssh key.
If you want to change what ssh key you use to access an instance, you will want to edit the authorized_keys file on the instance itself and switch it over to your new ssh public key.
The authorized_keys file is under the .ssh subdirectory under the home directory of the user you are logging in as. Depending on the AMI you are running, it might be in one of:
/home/ec2-user/.ssh/authorized_keys
/home/ubuntu/.ssh/authorized_keys
/root/.ssh/authorized_keys
After editing an authorized_keys file, always use a different terminal to confirm that you are able to ssh in to the instance before you disconnect from the session you are using to edit the file. You don't want to make a mistake and lock yourself out of the instance entirely.
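A simple way to do that check from a second terminal, assuming your new private key is new_key.pem and you normally log in as ec2-user:
ssh -i new_key.pem -o IdentitiesOnly=yes ec2-user@your-instance-hostname 'echo new key works'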
While you're thinking about ssh keypairs on EC2, I recommend uploading your own personal ssh public key to EC2 instead of having Amazon generate the keypair for you.
Here's an article I wrote about this:
Uploading Personal ssh Keys to Amazon EC2
http://alestic.com/2010/10/ec2-ssh-keys
This would only apply to new instances you run.
Run this command after you download your AWS pem.
ssh-keygen -f YOURKEY.pem -y
Then dump the output into authorized_keys.
Or copy the pem file to your AWS instance and execute the following commands:
chmod 600 YOURKEY.pem
and then
ssh-keygen -f YOURKEY.pem -y >> ~/.ssh/authorized_keys
Instructions from AWS EC2 support:
Change pem login
Go to your EC2 console
Under NETWORK & SECURITY, click on Key Pairs, then click on Create Key Pair
Give your new key pair a name and save the .pem file. The name of the key pair will be used to connect to your instance
Create an SSH connection to your instance and keep it open
In PuTTYgen, click "Load" to load your .pem file
Keep the SSH-2 RSA radio button checked. Click on "Save private key"
You'll get a pop-up warning window; click "Yes"
Click on "Save public key" as well, to generate the public key. This is the public key that we're going to copy across to your current instance
Save the public key with the new key pair name and with the extension .pub
Open the public key content in a notepad
Copy the content below "Comment: "imported-openssh-key"" and before "---- END SSH2 PUBLIC KEY ----"
Note - you need to copy the content as one line - delete all new lines
On your connected instance, open your authorized_keys file using the tool vi. Run the following command: vi .ssh/authorized_keys
You should see the original public key in the file as well
Move your cursor to the end of your first public key's content, then type "i" for insert
On a new line, type "ssh-rsa" and add a space, then paste the content of the public key, a space, and the name of the .pem file (without the .pem)
Note - you should get a line with the same format as the previous line
Press the Esc key, and then type :wq!
This will save the updated authorized_keys file
Now try opening a new SSH session to your instance using your new key pair
When you've confirmed you're able to SSH into the instance using the new key pair, you can vi .ssh/authorized_keys and delete the old key.
Answer to Shaggie's remark:
If you are unable to connect to the instance (e.g. the key is corrupted), then use the AWS console to detach the volume (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html) and attach it to a working instance, then change the key on the volume and attach it back to the previous instance.
I noticed that when managed by Elastic Beanstalk, you can change your active EC2 key pair. Under Elastic Beanstalk > Configuration > Security, choose the new key from the EC2 key pair drop-down. You'll see this message asking if you're sure:
EC2KeyName: Changes to option EC2KeyName settings will not take effect immediately. Each of your existing EC2 instances will be replaced and your new settings will take effect then.
My instance was already terminated when I did this. It then started, terminated, and started again. Apparently "replacing" means terminating and creating a new instance. If you've modified your boot volume, create an AMI first, then specify that AMI in the same Elastic Beanstalk > Configuration > Instances form as the Custom AMI ID. This also warns about replacing the EC2 instances.
After you've modified your EC2 key pair and Custom AMI ID, and after seeing warnings about both, click Save to continue.
Remember that the IP address changes when the instance is re-created so you'll need to retrieve a new IP address from the EC2 console to use when connecting via SSH.
I went through this approach, and after some time, was able to make it work. The lack of actual commands made it tough, but I figured it out. HOWEVER - a much easier approach was found and tested shortly after:
Save your instance as an AMI (reboot or not, I suggest reboot). This will only work if it is EBS backed.
Then, simply start an instance from this AMI and assign your new key file.
Move over your elastic IP (if applicable) to your new instance, and you are done.
There are two scenarios asked about in this question:
1) You don't have access to the .pem file, which is why you want to create a new one.
2) You do have access to the .pem file, but you just want to change it or create a new .pem file for vulnerability or security purposes.
So if you lost your keys, you can scroll up and see other answers. But if you simply want to change your .pem file for security purposes, follow these steps:
1) Go to the AWS console, log in, and create a new .pem file from the key-pair section there. It will automatically download the .pem file to your PC
2) Change its permissions to 400. If you are using Linux/Ubuntu, run the command below
chmod 400 yournewfile.pem
3) Generate the RSA public key of the newly downloaded file on your local machine
ssh-keygen -f yournewfile.pem -y
4) Copy the RSA public key from here
5) Now SSH to your instance via the previous .pem file
ssh -i oldpemfileName.pem username@ipaddress
sudo vim ~/.ssh/authorized_keys
6) Leave one or two lines of space, paste the copied RSA public key of the new file here, and then save the file
7) Now your new .pem file is linked with the running instance
8) If you want to disable the previous .pem file's access, just edit the
sudo vim ~/.ssh/authorized_keys
file and remove or change the previous RSA key there.
Note: Remove it carefully so that the newly added RSA key does not get changed.
In this way, you can link the new .pem file with your running instance.
You can revoke access to the previously generated .pem file for security purposes.
Hope this helps!
Steps:
Create new key e.g. using AWS console, the PuTTY Key Generator, or ssh-keygen
Stop instance
Set instance user data to push public key to server
Start instance
#cloud-config
cloud_final_modules:
- [once]
bootcmd:
- echo 'ssh-rsa AAAAB3Nz...' > /home/USERNAME/.ssh/authorized_keys
Where USERNAME is the expected username for the machine. A list of default usernames is available from AWS.
Step-by-step instructions from AWS
I believe the simplest approach is to:
Create an AMI image of the existing instance.
Launch a new EC2 instance using the AMI image (created in step 1) with a new key pair.
Login to new EC2 instance with new key.
If the steps below are followed, it will save a lot of time and there will be no need to stop the running instance.
Start a new t1.micro EC2 instance, using the new key pair. Make sure you create it in the same subnet, otherwise you will have to terminate the instance and create it again.
SSH to the new micro instance and copy the content of ~/.ssh/authorized_keys somewhere on your computer.
Login to the main instance with the old ssh key.
Copy & replace the file content from point 2 into ~/.ssh/authorized_keys
Now you can login again only with the new key. The old key will not work anymore.
That is it. Enjoy :)
In case you are using ElasticBeanstalk platform, you can change the keys by going:
Elastic Beanstalk panel
Configuration
Instances (cog top-right)
EC2 key pair
This will terminate current instance and creates new one with chosen keys/settings.
The simplest solution is to copy the contents of
~/.ssh/id_rsa.pub
into your AWS instance's authorized_keys at
~/.ssh/authorized_keys
This will allow you to ssh into the EC2 instance without specifying a pem file for the ssh command. You can remove all other keys once you've tested connecting to it.
If you need to create a new key to share it with someone else, you can do that with:
ssh-keygen -t rsa
which will create the private key file (private_key.pem in this example), and you can get the public key of that with:
ssh-keygen -f private_key.pem -y > public_key.pub
Anyone who has private_key.pem will be able to connect with
ssh user@host.com -i private_key.pem
You don't need to rotate the root device and change the SSH public key in authorized_keys. For that you can utilize user data to add your ssh keys to any instance. First you need to create a new key pair using the AWS console or ssh-keygen.
ssh-keygen -f YOURKEY.pem -y
This will generate the public key for your new SSH key pair; copy this public key and use it in the script below.
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0
--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"
#cloud-config
cloud_final_modules:
- [scripts-user, always]
--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"
#!/bin/bash
/bin/echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC6xigPPA/BAjDPJFflqNuJt5QY5IBeBwkVoow/uBJ8Rorke/GT4KMHJ3Ap2HjsvjYrkQaKANFDfqrizCmb5PfAovUjojvU1M8jYcjkwPG6hIcAXrD5yXdNcZkE7hGK4qf2BRY57E3s25Ay3zKjvdMaTplbJ4yfM0UAccmhKw/SmH0osFhkvQp/wVDzo0PyLErnuLQ5UoMAIYI6TUpOjmTOX9OI/k/zUHOKjHNJ1cFBdpnLTLdsUbvIJbmJ6oxjSrOSTuc5mk7M8HHOJQ9JITGb5LvJgJ9Bcd8gayTXo58BukbkwAX7WsqCmac4OXMNoMOpZ1Cj6BVOOjhluOgYZbLr" >> /home/hardeep/.ssh/authorized_keys
--//
After the restart, the machine will have the specified SSH public key.
Remove the user data after the first restart. Read more about user data on startup.
I have tried the steps below and they worked without stopping the instance. My requirement was that, as I had changed my client machine, the old .pem file would not allow me to log in to the ec2 instance.
Log in to the ec2 instance using your old .pem file from the old machine. Open ~/.ssh/authorized_keys
You will see your old keys in that file.
ssh-keygen -f YOUR_PEM_FILE.pem -y
It will generate a key. Append the key to the ~/.ssh/authorized_keys opened in step 1. No need to delete the old key.
From the AWS console, create a new key pair. Store it on your new machine. Rename it to the old pem file name - the reason is that the old pem file is still associated with the ec2 instance in AWS.
All done.
I am able to log in to the AWS ec2 from my new client machine.
You have several options to replace the key of your EC2 instance.
You can replace the key manually in the .ssh/authorized_keys file. However this requires you to actually have access to the instance, or to the volume if it is unencrypted.
You can use AWS Systems Manager. This requires the SSM agent to be installed.
Since the first option can be found easily in the answers or at the search engine of your choice, I want to focus on the Systems Manager.
Open the Service Systems Manager
Click on Automation on the left side.
Click on Execute Automation
Select AWSSupport-TroubleshootSSH (usually it is on the last page)
You can find more information on the Official AWS Documentation
Thanks for the tips guys. Will definitely keep them in mind when I need to reset the key pairs.
However, in the interest of efficiency and laziness I've come up with something else:
Create your new key pair and download the credentials
Right-click your instance > Create AMI
Once it is done, terminate your instance (or just stop it until you are sure you can create another one from your new shiny AMI)
Start a new EC2 instance from the AMI you just created and specify your new key pair created in step (1) above.
Hope this can be of use to you and save you some time as well as minimize the amount of white hair you get from stuff like this :)
What you can do...
Create a new Instance Profile / Role that has the AmazonEC2RoleForSSM policy attached.
Attach this Instance Profile to the instance.
Use SSM Session Manager to login to the instance.
Use ssh-keygen on your local machine to create a key pair.
Push the public part of that key onto the instance using your SSM session.
Profit.
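A rough sketch of that route (the instance ID and login user are placeholders; it assumes the SSM agent, the instance role and the local Session Manager plugin are all in place):
# open a shell on the instance without ssh
aws ssm start-session --target i-0123456789abcdef0
# inside the session, append your new public key for the user you normally ssh in as
echo 'ssh-rsa AAAAB3Nz... you@laptop' | sudo tee -a /home/ec2-user/.ssh/authorized_keys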
This is for those who have two different pem files and, for security purposes, want to discard one of the two. Let's say we want to discard 1.pem.
Connect to server 2 and copy the ssh key from ~/.ssh/authorized_keys
Connect to server 1 in another terminal and paste the key into ~/.ssh/authorized_keys. You will now have two public ssh keys here
Now, just for your confidence, try to connect to server 1 with 2.pem. You will be able to connect to server 1 with both 1.pem and 2.pem
Now, comment out the 1.pem ssh key and connect using ssh -i 2.pem user@server1
Yegor256's answer worked for me, but I thought I would just add some comments to help out those who are not so good at mounting drives(like me!):
Amazon gives you a choice of what you want to name the volume when you attach it. You have to use a name in the range /dev/sda - /dev/sdp.
The newer versions of Ubuntu will then rename what you put in there to /dev/xvd(x) or something to that effect.
So for me, I chose /dev/sdp as the device name in AWS, then I logged into the server and discovered that Ubuntu had renamed my volume to /dev/xvdp1. I then had to mount the drive - for me I had to do it like this:
mount -t ext4 /dev/xvdp1 /mnt/tmp
After jumping through all those hoops I could access my files at /mnt/tmp
This will work only if you have access to the instance you want to change/add the key in.
You can create a new key pair. Or if you already have the key pair, then you can paste the public key of the new pair in the authorized_keys file on your instance.
vim .ssh/authorized_keys
Now you can use the private key for that pair and log in.
Hope this helps.
My issue was that I tried with the IP rather than the public DNS. Then I tried with the public DNS and it was resolved.
If you are unable to log in to the VM and have deleted your ssh keys, you can also change the key pair of your ec2 instance using the steps below.
Go step by step:
1) Stop your ec2 instance.
2) Take a snapshot of the VM and its storage.
3) Create a new VM; while creating it, select your snapshot and create the VM from your snapshot.
4) While creating the VM, download your new key pair.
5) Once your VM is up, you can ssh with the new key pair and your data will also be back.
Alternate solution: if you still have access to the server, don't remove the pem file from the AWS console. Just remove the old pem access key from sudo nano ~/.ssh/authorized_keys and add your own system's public ssh key. Now you have access with ssh user@<ip>.
If anybody is here because they can't access an EC2 instance because they don't have the keypair, but they do have IAM access, you can run the following command to allow temporary access (60 seconds) to your EC2 instance using a key you already have, as long as you know the username (which is usually 'ubuntu' for ubuntu instances or 'ec2-user' for amazon linux instances):
aws ec2-instance-connect send-ssh-public-key --region ${your-aws-region} --instance-id ${your-instance-id} --availability-zone ${your-instance-az} --instance-os-user ${username} --ssh-public-key file://path/to/public/key
(If you have multiple credential profiles in your ~/.aws/credentials file, you can specify one by also adding the flag '--profile your-profile' to this command.)
The output will look something like this if successful:
{
    "RequestId": "3537268d-c161-41bb-a4ac-977b79b2bdc0",
    "Success": true
}
Then you have 60 seconds to ssh in using that key.
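Then, within that 60-second window, connect with the matching private key, for example:
ssh -i /path/to/private/key ${username}@${instance-public-dns-or-ip}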