I'm using kOps to launch a Kubernetes cluster on AWS.
Is there a way to set a predefined SSH key when calling create cluster?
If kOps autogenerates the SSH key when running create cluster, is there a way to download this key to access the cluster nodes?
Please read the Kops SSH docs:
When using the default images, the SSH username will be admin, and the SSH private key is the private key corresponding to the public key in kops get secrets --type sshpublickey admin. When creating a new cluster, the SSH public key can be specified with the --ssh-public-key option, and it defaults to ~/.ssh/id_rsa.pub.
So to answer your questions:
Yes, you can set the key using --ssh-public-key
When --ssh-public-key is not specified, kOps does not autogenerate a key; it uses the key found in ~/.ssh/id_rsa.pub
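For example, a minimal sketch; the cluster name, state store, and zone below are placeholders:

# Create the cluster pointing at an existing public key (values are examples)
kops create cluster \
  --name demo.k8s.example.com \
  --state s3://example-kops-state-store \
  --zones us-east-1a \
  --ssh-public-key ~/.ssh/id_rsa.pub

# Later, list the public key kOps stored for the cluster
# (assumes KOPS_STATE_STORE or the --state/--name flags are set)
kops get secrets --type sshpublickey admin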
I would like to SSH into my Amazon Web Services (AWS) Cloud9 Elastic Compute Cloud (EC2) environment, but there is no key pair assigned to the Cloud9 EC2 environment. How can I assign a key pair to that environment, so that I can SSH into it?
I created the AWS Cloud9 EC2 environment through the Cloud9 interface, rather than creating the EC2 environment and then accessing it through Cloud9. When I create EC2 environments normally, I am given the opportunity to assign an existing key pair, or create a new key pair. This option was not presented to me when I created the environment through Cloud9.
You can SSH into a Cloud9 environment created through Cloud9. The steps are similar to sharing a running app over the internet in the docs, but instead of sharing the app, you share the SSH server.
In AWS Console, find the corresponding EC2 instance.
In the bottom panel, under the Description tab, in the Security groups row, click the link to go to the associated security group.
You should now be in the Security Groups section. In the bottom panel, under the Inbound tab, click Edit and add:
Type: SSH
Source: Anywhere
and click Save.
In the Cloud9 terminal, add your public key to ~/.ssh/authorized_keys. Don't replace the existing keys, otherwise the Cloud9 IDE won't be able to connect to the instance.
You can now SSH into the Cloud9-managed instance using ssh ec2-user@<ip>, or ssh ubuntu@<ip> if using an Ubuntu AMI; for other AMIs, see the default user name for the AMI.
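A quick sketch of those last two steps, assuming an Amazon Linux AMI; the key text and IP address are placeholders:

# Inside the Cloud9 terminal: append your local public key, keeping the existing entries
echo 'ssh-rsa AAAAB3Nza... you@laptop' >> ~/.ssh/authorized_keys

# From your local machine (use ubuntu@ instead of ec2-user@ on an Ubuntu AMI)
ssh -i ~/.ssh/id_rsa ec2-user@203.0.113.10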
Cloud9 is managing the underlying EC2 for you so you won't get any extra charges.
A terminal is already provided by AWS but you could follow this procedure if you still want to get SSH access to a Cloud9 environment.
I'm trying to connect to the master node of my Kubernetes cluster with ssh -i ~/.ssh/id_rsa ubuntu@api.demo.k8s.testcheck.tk
It throws an error:
ubuntu@api.demo.k8s.testcheck.tk: Permission denied (publickey).
I am using kops as a deployment utility. Can someone help here, please?
Can you change the username from ubuntu to ec2-user? I think the default user of EKS nodes is ec2-user
If you used a later version of kOps, it does not add an SSH key by default. You need to supply one with the create command, like this: kops create cluster --ssh-public-key ~/.ssh/id_rsa.pub .... After the cluster has been created, you can add an SSH key with kops create secret --name <cluster name> sshpublickey admin -i ~/.ssh/id_rsa.pub.
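For example, a sketch of adding a key to an existing cluster; the cluster name and key path are placeholders:

# Store the public key as the cluster's sshpublickey secret
kops create secret --name demo.k8s.example.com sshpublickey admin -i ~/.ssh/id_rsa.pub

# Apply the change and roll the nodes so the key reaches the instances
kops update cluster --name demo.k8s.example.com --yes
kops rolling-update cluster --name demo.k8s.example.com --yes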
Instead of using static SSH keys, I recommend using mssh from ec2-instance-connect.
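A quick sketch of that approach, assuming the ec2instanceconnectcli package (which provides mssh) and an example instance ID:

# Install the EC2 Instance Connect CLI
pip install ec2instanceconnectcli

# Connect by instance ID; a temporary key is generated and pushed for you
# (requires the EC2 Instance Connect agent on the instance and SendSSHPublicKey permission)
mssh ec2-user@i-001234a4bf70dec41EXAMPLE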
To connect to EC2 via SSH with a key pair, the public key needs to be present in the authorized_keys file on the instance, and you connect with the corresponding private key. When kops provisioned the EC2 instance, did it tell you which key to use?
If not, and the instance is running a newer AMI, you should be able to connect via EC2 Instance Connect instead.
Otherwise, you need to manually modify the authorized_keys file.
I am setting up a new EC2 Amazon Linux 2 AMI and am having a try at setting up EC2 Instance Connect as it's preinstalled on my new instance.
From what I've understood the docs to mean, I should be able to create an IAM user, add a public key to that user and then SSH into the box using the IAM user's (public) key without having to create the .ssh folder on the EC2 instance.
What I've done is:
Create a user on the EC2 instance which my IAM user should map to (let's call him bob)
Uploaded my public OpenSSH key to the IAM user
Created a permission policy which allows the action ec2-instance-connect:SendSSHPublicKey (as per the docs)
Once these are all done, if I try to SSH into the box, it doesn't work and in my /var/log/secure I see a preauth failure.
If I create the .ssh/authorized_keys file and set the permissions correctly, everything works fine.
However, my understanding of the EC2 Instance Connect approach is that it gives me a central way to manage public-key based access to my instances.
Am I correct?
Am I missing something in how I'm setting this up?
I'm finding the documentation a little unclear, so some insight would be helpful.
Thanks!
EC2 Instance Connect works as follows:
You issue a command that pushes a temporary public key to the instance, such as:
$ aws ec2-instance-connect send-ssh-public-key --instance-id i-001234a4bf70dec41EXAMPLE --availability-zone us-west-2b --instance-os-user ec2-user --ssh-public-key file://my_rsa_key.pub
You then establish an SSH connection to the instance using the private half of the keypair
Within the instance, the EC2 Instance Connect software interfaces with the ssh process and checks whether the SSH key provided matches the public key that was pushed with send-ssh-public-key (and that the connection is made within 60 seconds of that key being received)
If they match, the SSH session is permitted
See: Connect Using EC2 Instance Connect - Amazon Elastic Compute Cloud
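Putting the pieces together, a minimal end-to-end sketch; the instance ID, availability zone, key path, and IP address are placeholders:

# Generate a throwaway key pair
ssh-keygen -t rsa -f /tmp/temp_key -N ''

# Push the public half to the instance; it is only honored for about 60 seconds
aws ec2-instance-connect send-ssh-public-key \
    --instance-id i-001234a4bf70dec41EXAMPLE \
    --availability-zone us-west-2b \
    --instance-os-user ec2-user \
    --ssh-public-key file:///tmp/temp_key.pub

# Connect with the private half before the pushed key expires
ssh -i /tmp/temp_key ec2-user@203.0.113.10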
EC2 Instance Connect also provides a web-based interface that can both initiate the above process (using a temporary random keypair) and provide an SSH interface. When doing so, the SSH connection appears to come from within AWS, not your own IP address. This is because the web interface uses HTTPS to AWS, then AWS establishes the SSH connection to the instance. This has an impact on security group configuration.
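If you want to keep port 22 locked down while still allowing the web-based client, you can look up the EC2 Instance Connect service range for your region and allow only that. A sketch, assuming us-west-2 and jq installed; the security group ID and CIDR shown are placeholders:

# Find the EC2 Instance Connect CIDR for the region
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
  | jq -r '.prefixes[] | select(.service=="EC2_INSTANCE_CONNECT" and .region=="us-west-2") | .ip_prefix'

# Allow SSH only from that range
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 18.237.140.160/29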
I am following instructions on:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/coreos/coreos_multinode_cluster.md
I am trying to launch a master with the master.yaml file as user data. I am able to successfully launch the instance in EC2, but I can't seem to SSH to it with my AWS SSH key.
backend-service viralcarpenter$ ssh -i ~/Downloads/viral-kubernetes-acad-key.pem core@54.153.63.240
core@54.153.63.240's password:
Is there something that i am missing?
You need to have a key pair configured in your EC2 region and specify it when creating the instance in order to be able to SSH into it:
--key-name <keypair>
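For example, a minimal sketch; the key pair name and AMI ID are placeholders, and the IP is the one from the question:

# Create a key pair and save the private half locally
aws ec2 create-key-pair --key-name coreos-demo \
    --query 'KeyMaterial' --output text > ~/Downloads/coreos-demo.pem
chmod 400 ~/Downloads/coreos-demo.pem

# Launch the instance with that key pair and the master.yaml user data
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.medium \
    --key-name coreos-demo \
    --user-data file://master.yaml

# Then SSH in as the CoreOS default user
ssh -i ~/Downloads/coreos-demo.pem core@54.153.63.240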