I'm trying to connect to the master node of my Kubernetes cluster with ssh -i ~/.ssh/id_rsa ubuntu@api.demo.k8s.testcheck.tk
It throws this error:
ubuntu@api.demo.k8s.testcheck.tk: Permission denied (publickey).
I am using kops as the deployment utility. Can someone help here, please?
Can you try changing the username from ubuntu to ec2-user? I think the default user on EKS nodes is ec2-user.
If you are using a recent version of kOps, it does not add an SSH key by default. You need to supply one with the create command, like this: kops create cluster --ssh-public-key ~/.ssh/id_rsa.pub .... After the cluster has been created, you can add an SSH key with kops create secret --name <cluster name> sshpublickey admin -i ~/.ssh/id_rsa.pub.
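For an already-running cluster, a rough sketch of the full flow might look like this (the cluster name here is just inferred from the API hostname above, and the SSH user depends on your image; existing nodes only pick up the new key after a rolling update):
# register the public key, roll it out, then connect
$ kops create secret --name demo.k8s.testcheck.tk sshpublickey admin -i ~/.ssh/id_rsa.pub
$ kops update cluster --name demo.k8s.testcheck.tk --yes
$ kops rolling-update cluster --name demo.k8s.testcheck.tk --yes
$ ssh -i ~/.ssh/id_rsa ubuntu@api.demo.k8s.testcheck.tk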
Instead of using static SSH keys, I recommend using mssh from ec2-instance-connect.
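For example, assuming the EC2 Instance Connect CLI is installed and the node's AMI ships the Instance Connect agent (the instance ID is a placeholder):
# install the CLI, then push a one-time key and open the session in one step
$ pip install ec2instanceconnectcli
$ mssh ubuntu@i-0123456789abcdef0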
To connect to an EC2 instance via SSH with public-key authentication, you need the private key that corresponds to a public key listed in the authorized_keys file on the instance's root volume. When kops provisioned the EC2 instance, did it tell you which key to use?
If not, and if the instance is using a newer AMI, you may alternatively be able to connect via EC2 Instance Connect.
Otherwise, you need to manually modify the authorized_keys file.
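If you can still get a shell on the instance some other way, a minimal sketch of that manual fix is (the key material is a placeholder for the public key you want to authorize):
# on the instance, as the login user: append the public key and tighten permissions
$ echo 'ssh-rsa AAAA...your-public-key... you@laptop' >> ~/.ssh/authorized_keys
$ chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys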
I am setting up a new EC2 instance from the Amazon Linux 2 AMI and am trying out EC2 Instance Connect, as it's preinstalled on the new instance.
From what I've understood the docs to mean, I should be able to create an IAM user, add a public key to that user and then SSH into the box using the IAM user's (public) key without having to create the .ssh folder on the EC2 instance.
What I've done is:
Created a user on the EC2 instance which my IAM user should map to (let's call him bob)
Uploaded my public OpenSSH key to the IAM user
Created a permission policy which allows the action ec2-instance-connect:SendSSHPublicKey (as per the docs)
Once these are all done, if I try to SSH into the box, it doesn't work and in my /var/log/secure I see a preauth failure.
If I create the .ssh/authorized_keys file and set the permissions correctly, everything works fine.
However, my understanding of the EC2 Instance Connect approach is that it gives me a central way to manage public-key based access to my instances.
Am I correct?
Am I missing something in how I'm setting this up?
I'm finding the documentation a little unclear, so some insight would be helpful.
Thanks!
EC2 Instance Connect works as follows:
You issue a command that pushes a temporary public key to the instance, such as:
$ aws ec2-instance-connect send-ssh-public-key \
    --instance-id i-001234a4bf70dec41EXAMPLE \
    --availability-zone us-west-2b \
    --instance-os-user ec2-user \
    --ssh-public-key file://my_rsa_key.pub
You then establish an SSH connection to the instance using the private half of the keypair.
Within the instance, the EC2 Instance Connect software hooks into the sshd process and checks whether the key offered over SSH matches the public key that was pushed with send-ssh-public-key (and that the connection is made within 60 seconds of receiving that key).
If they match, the SSH session is permitted
See: Connect Using EC2 Instance Connect - Amazon Elastic Compute Cloud
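Putting that together, the second step is just an ordinary SSH connection made within the 60-second window, using the private half of the key pushed above (the hostname is a placeholder):
$ ssh -i my_rsa_key ec2-user@ec2-203-0-113-25.us-west-2.compute.amazonaws.com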
EC2 Instance Connect also provides a web-based interface that can both initiate the above process (using a temporary random keypair) and provide an SSH interface. When doing so, the SSH connection appears to come from within AWS, not your own IP address. This is because the web interface uses HTTPS to AWS, then AWS establishes the SSH connection to the instance. This has an impact on security group configuration.
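If you want your security group to allow only the browser-based client, the AWS-owned source ranges it connects from are published per region in the ip-ranges.json feed under the EC2_INSTANCE_CONNECT service; for example (the region is a placeholder, and jq is assumed to be installed):
# list the CIDR range used by the browser-based EC2 Instance Connect client in us-west-2
$ curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.service=="EC2_INSTANCE_CONNECT" and .region=="us-west-2") | .ip_prefix'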
I have a question about AWS EC2. I previously created an EC2 instance with a .ppk file and associated that instance with the .ppk file in PuTTY.
Thereafter, I created a separate EC2 instance with additional storage and tried to associate my .ppk file with this instance too.
However, when I SSH in with PuTTY, it gives me this error:
Using username "ec2-storage".
Server refused our key
Is it because once an EC2 instance is using a key pair, I can't use that key pair for another EC2 instance? But if that were the case, why would the AWS console give us the option to choose an existing key pair?
Any advice?
Adrian
If you are launching an Amazon EC2 instance based on the Amazon Linux AMI, the username is ec2-user.
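In PuTTY that means entering ec2-user at the login prompt (or setting it under Connection > Data > Auto-login username); the OpenSSH equivalent, with the host as a placeholder, would be:
$ ssh -i my-key.pem ec2-user@<instance-public-dns>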
I got an AMI shared with me that is basically a copy of an EC2 instance from a different account. It is a server with an EBS volume attached. I created an EC2 instance from that AMI. So far so good.
However, I can only access it using the SSH .pem file from the other account, and obviously I want to access it with the SSH key pair from my current account.
How do I do that? I would have expected to be able to access the instance with the SSH key from my new account.
The SSH keys already on the instance are independent of AWS, and there's no reason for them to be updated automatically. You have to edit the authorized_keys file located in the .ssh directory (~/.ssh/authorized_keys) and add the public key of your desired key pair to that file.
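As a sketch, assuming the old account's .pem still works and the default user on the AMI is ec2-user (adjust both to your case), you can push your own public key like this:
# append your new public key to authorized_keys over the existing working connection
$ cat ~/.ssh/my_new_key.pub | ssh -i old-account.pem ec2-user@<instance-public-dns> 'cat >> ~/.ssh/authorized_keys'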
I'm using kOps to launch a Kubernetes cluster in the AWS environment.
Is there a way to set a predefined SSH key when calling create cluster?
If kOps autogenerates the SSH key when running create cluster, is there a way to download this key to access the cluster nodes?
Please read the Kops SSH docs:
When using the default images, the SSH username will be admin, and the SSH private key will be the private key corresponding to the public key in kops get secrets --type sshpublickey admin. When creating a new cluster, the SSH public key can be specified with the --ssh-public-key option, and it defaults to ~/.ssh/id_rsa.pub.
So to answer your questions:
Yes, you can set the key using --ssh-public-key
When --ssh-public-key is not specified, kOps does not autogenerate a key, but rather uses the key found in ~/.ssh/id_rsa.pub.
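As a sketch (the cluster name, state store, and zone are hypothetical placeholders):
# create the cluster with a specific public key, apply it, then SSH as "admin"
$ kops create cluster --name demo.example.com --state s3://my-kops-state --zones us-east-1a --ssh-public-key ~/.ssh/kops_key.pub
$ kops update cluster --name demo.example.com --state s3://my-kops-state --yes
$ ssh -i ~/.ssh/kops_key admin@<node-public-ip>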
I am following instructions on:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/coreos/coreos_multinode_cluster.md
I am trying to launch a master with the master.yaml file as the user data. I am able to successfully launch the instance in EC2, but I can't seem to SSH into it with my AWS SSH key.
backend-service viralcarpenter$ ssh -i ~/Downloads/viral-kubernetes-acad-key.pem core@54.153.63.240
core@54.153.63.240's password:
Is there something that i am missing?
You need to have a Key Pair configured in your EC2 region and specify it when creating the instance in order to be able to SSH into it.
--key-name <keypair>
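If you are launching with the AWS CLI as in that guide, it would look roughly like this (the AMI ID is a placeholder, and the key pair name must already exist in that region):
# launch with an existing key pair so its public key is installed for the "core" user
$ aws ec2 run-instances --image-id <coreos-ami-id> --instance-type t2.micro --key-name viral-kubernetes-acad-key --user-data file://master.yaml
$ ssh -i ~/Downloads/viral-kubernetes-acad-key.pem core@54.153.63.240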