We are just beginning to use Avi in AWS and I am setting up the controller instance.
As I add users to the controller instance, I would like users who log in to the instance via shell to do so with public/private keypair authentication.
I created a user and added their public key to ~/.ssh/authorized_keys, and I also added a NOPASSWD entry, but the shell still prompts for a password. Can I log in to the GUI with a password but restrict shell access to keypair only?
Avi Controller SSH expects the key for each user to be in a separate file at /etc/ssh/authorized_keys_<username>.
The SSH config at /etc/ssh/sshd_config sets the path to the authorized keys:
AuthorizedKeysFile /etc/ssh/authorized_keys_%u
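For example, a hedged sketch of installing a key for a user named testuser (the source filename and permissions here are assumptions, not documented Avi behavior):

# copy the user's public key to the path sshd is configured to read
sudo cp testuser_key.pub /etc/ssh/authorized_keys_testuser
# sshd reads the file as root; root-owned and world-readable is typical
sudo chown root:root /etc/ssh/authorized_keys_testuser
sudo chmod 644 /etc/ssh/authorized_keys_testuser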
You can restrict the user to key-based shell access by changing the /etc/ssh/sshd_config file, but these changes will be overwritten every time you upgrade to a new version.
There is a Match block that can disable password-based shell access for specific users in /etc/ssh/sshd_config:
Match User sysadmin,root,aviseuser,avictlruser,testuser
PasswordAuthentication no
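After editing, it's worth validating the config and reloading the daemon so the Match block takes effect (a minimal sketch; the service may be named ssh or sshd depending on the underlying distribution):

# check the configuration for syntax errors before applying it
sudo /usr/sbin/sshd -t
# reload the daemon; existing sessions are not dropped
sudo systemctl reload sshd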
When I look inside ~/.ssh/authorized_keys as the root user on my AMI, I see something like:
no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="echo 'Please login as the user \"ubuntu\" rather than the user \"root\".';echo;sleep 10;exit 142" ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC5Cfdsafdafdas_some_public_key packer_610ad8fb-0ed3-eddc-c48f-0f8553d421da
no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="echo 'Please login as the user \"ubuntu\" rather than the user \"root\".';echo;sleep 10;exit 142" ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC5Cfdsafdafdas_some_public_key my-key
The second key is the one I used to set up my instance, so I understand that one. Is the packer key just a temporary key used to upload the instance somewhere, and can I safely delete it?
Packer is used to create AMIs, and during that process it needs to SSH into the image. Normally the people who create the AMI remove the temporary key during finalization and cleanup; it seems they forgot to do that here. It's safe to delete it.
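If you want to remove it in place, a minimal sketch (assuming the leftover entry's comment still starts with packer_, as in the example above):

# back up the file first, then drop the line containing the Packer key
sudo cp /root/.ssh/authorized_keys /root/.ssh/authorized_keys.bak
sudo sed -i '/packer_/d' /root/.ssh/authorized_keys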
I created an Ubuntu VM on GCP Compute Engine.
Some details:
Image: ubuntu-minimal-2204-jammy-v20220810
Machine type: e2-micro
CPU platform: Intel Broadwell
Architecture: x86/64
I added one user using SSH keys. This user can properly access the VM; no problem here.
But he can also become root like this:
# he resets the root password
sudo passwd
# then he can become root using the freshly created password
su
How can I prevent this?
I tried to remove this user from the sudoers group, but without success:
root@vm_test:/home/user# sudo deluser user_test sudo
/usr/sbin/deluser: The user `user_test' is not a member of group `sudo'.
EDIT:
My sudoers config file looks like this. I could modify it to restrict access, but I don't understand how.
# User privilege specification
root ALL=(ALL:ALL) ALL
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
In IAM, give them roles/compute.osLogin, not roles/compute.osAdminLogin or roles/compute.instanceAdmin.
The SSH access method that you are using (managing SSH keys in metadata) handles access outside of Google's identity management; if you want to control the access level to your instance(s) using Google's identity service, you need to use the OS Login method instead.
Here is an example granting normal user access to an instance named 'ubuntu-test' to the user 'test-user@gmail.com':
gcloud compute instances add-iam-policy-binding ubuntu-test \
--member='user:test-user@gmail.com' \
--role='roles/compute.osLogin' \
--zone=<instance_zone>
Note: unlike the managed SSH key method, with OS Login the user must exist in the GCP database for the permissions to be assigned properly.
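As a hedged sketch of the surrounding steps (same placeholder instance name and zone as above): OS Login must also be enabled on the instance, after which the user connects with their Google identity:

# enable OS Login on this instance (can also be set project-wide)
gcloud compute instances add-metadata ubuntu-test --zone=<instance_zone> \
    --metadata enable-oslogin=TRUE
# the user then connects with their Google credentials
gcloud compute ssh ubuntu-test --zone=<instance_zone>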
I just started using gcloud, and I noticed that when I create a VM or go into the cloud console, my full name shows up in the console.
Is there a way to create another user with a more generic name? I don't like having my full name in all my VMs and consoles.
Do I just create another user as 'owner', or is there a best practice around this?
When you use gcloud compute ssh [INSTANCE_NAME], gcloud uses your current credentials to create an SSH keypair. The project ssh metadata is then updated with this username and SSH keypair. This is what you are seeing once you connect.
You can create a new SSH keypair with any username that you want, add the public key to the instance metadata, and then log in using that username. This also creates a new home directory on the instance.
For these examples, let's say that you want to create a new user 'development'.
STEP 1: Create a new SSH keypair
ssh-keygen -t rsa -f keypair -C development
This will create two files:
keypair - this is your RSA private key. You need this file to log in to your instance via SSH using the new username.
keypair.pub - this is your SSH-RSA public key. Its contents are imported to your instance. Display the contents of this file and notice the username at the end.
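For example, keypair.pub would look something like this (hypothetical, truncated key material):

ssh-rsa AAAAB3NzaC1yc2E...truncated... development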
STEP 2 (Google Cloud Console Method):
Login to the Google Cloud Console.
Go to "Compute Engine" -> "VM instances".
Click on the instance that you want to modify.
Click the "EDIT" button to modify the instance.
Scroll down to "SSH Keys". Click "Show and edit" under "You have 0 SSH keys".
Copy and paste the contents of "keypair.pub" into the box where "Enter entire key data" is displayed.
Scroll down to the bottom and click "Save".
STEP 3 - Connect to the instance using SSH:
Replace IP_ADDRESS with the Compute Engine instance's external IP address in the following command.
ssh -i keypair development@IP_ADDRESS
This is the correct method to support multiple users connecting to the same Compute Engine instance. Each user has their own keypair and their own username and home directory.
This is also the correct method to provide login access to an instance for users who do not have Google Cloud IAM permissions on the cloud account.
For advanced users, you can use the gcloud compute instances add-metadata command to add the SSH public key to the instance.
You can also add this SSH public key to the Project Metadata which will make this keypair available on all instances within a project.
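A hedged sketch of that approach (my-instance and the zone are placeholders; note that writing the ssh-keys metadata key replaces its current value, so merge in any existing keys first):

# build the metadata value in the required 'username:public-key' format
echo "development:$(cat keypair.pub)" > ssh-keys.txt
# per-instance: attach the key to a single VM
gcloud compute instances add-metadata my-instance --zone=us-central1-a \
    --metadata-from-file ssh-keys=ssh-keys.txt
# project-wide: make the key available on every VM in the project
gcloud compute project-info add-metadata \
    --metadata-from-file ssh-keys=ssh-keys.txt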
I have created an instance and its pem file named demo.pem, but for security reasons I have to replace my old demo.pem file with demos.pem for the same instance.
I do not want to create a new instance just to change the pem file. Is this possible?
It's worth understanding how keypairs work...
When logging into Linux using keypairs, you specify a username and a keypair, eg:
ssh -i demo.pem ec2-user@54.11.22.33
Linux then looks in the .ssh/authorized_keys file belonging to that user, eg:
/home/ec2-user/.ssh/authorized_keys
It looks in that file for the public key that matches the private key used for login. It then does its keypair magic and determines whether to allow the person to log in.
Therefore, to enable login on an instance using a new keypair:
Add the public half of the keypair to the ~/.ssh/authorized_keys file in the appropriate user's home directory
If desired, remove an old key from that file to remove access permissions
You can have multiple keys in that file, which permit login via any of the authorized keypairs.
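So, a minimal sketch for the question's demo.pem to demos.pem swap (demos.pub is the extracted public half; the extraction command is shown in the next answer):

# on your local machine: extract the public half of the new key
ssh-keygen -y -f demos.pem > demos.pub
# on the instance: append it so both keys work during the changeover
cat demos.pub >> ~/.ssh/authorized_keys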
Answer from A to Z:
create a PEM key pair in the AWS console, for example at:
https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#KeyPairs:
then go to your downloads folder and change the file's access mode:
chmod 400 yourNewPemName.pem
then generate the public key:
ssh-keygen -y -f yourNewPemName.pem > yourNewPemName.pub
connect to the EC2 instance and go to the SSH directory:
cd ~/.ssh
then replace the contents of the authorized_keys file with the contents of the public key you generated in step 3
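For example, a hedged one-liner on the instance (assuming yourNewPemName.pub has been copied over; overwriting removes access for the old key, so keep a session open until you have verified the new one):

# overwrite the old public key with the new one
cat yourNewPemName.pub > ~/.ssh/authorized_keys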
Working on EC2 is a snap: you just download the .pem file, give it the right permissions, and you are ready to go. Yet if you have the .pem file, you have full access to the EC2 instance!
What should I do to limit people's access to the instance in a controllable way, e.g. people pass me their public key and I add it to the instance, à la GitHub?
Follow the steps in this document: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/managing-users.html
There are 3 steps:
You have to add a user account (adduser) for each user
Make sure the user-home/.ssh directory has 700 permissions (chmod)
Add the user's public key to user-home/.ssh/authorized_keys and make sure that file has 600 permissions (chmod)
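Putting those together, a minimal sketch (newuser and the key string are placeholders; adduser may prompt interactively depending on the distro):

# 1. create the user account
sudo adduser newuser
# 2. create the SSH directory with the right permissions
sudo mkdir -p /home/newuser/.ssh
sudo chmod 700 /home/newuser/.ssh
# 3. install the public key the user handed you
echo "ssh-rsa AAAA...their-key... newuser@example" | sudo tee /home/newuser/.ssh/authorized_keys > /dev/null
sudo chmod 600 /home/newuser/.ssh/authorized_keys
sudo chown -R newuser:newuser /home/newuser/.ssh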