I created an Ubuntu VM on GCP Compute Engine.
Some details:
- Image: ubuntu-minimal-2204-jammy-v20220810
- Machine type: e2-micro
- CPU platform: Intel Broadwell
- Architecture: x86/64
I added one user using SSH keys. This user can access the VM properly, no problem here.
But he can also become root like this:
# he resets the root password
sudo passwd
# then he can become root using the freshly created password
su
How can I prevent this?
I tried to remove this user from the sudo group, but without success:
root@vm_test:/home/user# sudo deluser user_test sudo
/usr/sbin/deluser: The user `user_test' is not a member of group `sudo'.
EDIT:
My sudoers config file looks like this. I might modify it to restrict access, but I don't understand how.
# User privilege specification
root ALL=(ALL:ALL) ALL
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
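This may explain why deluser found nothing: on GCP images, a user created from metadata SSH keys is typically granted sudo through the google-sudoers group (via a drop-in file under /etc/sudoers.d/), not the sudo group. A sketch of how to check and revoke that, assuming the user is named user_test:

```shell
# Show the user's groups; on GCP look for "google-sudoers"
# rather than "sudo" ("user_test" is a placeholder name)
groups user_test

# Inspect drop-in sudoers rules the guest agent may have installed
sudo cat /etc/sudoers.d/*

# If the user is in google-sudoers, remove them from that group
sudo gpasswd -d user_test google-sudoers
```

Note that the guest environment may re-add the user on the next key sync, which is why the durable fix is to manage access through IAM and OS Login as described below.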
In IAM, give them roles/compute.osLogin, not roles/compute.osAdminLogin or roles/compute.instanceAdmin.
The SSH access method that you are using (managing SSH keys in metadata) handles access outside of Google's identity service; if you want to control the access level to your instance(s) using Google's identity service, you need to use the OS Login method instead.
Here is an example granting normal user access to an instance named 'ubuntu-test' to the user 'test-user@gmail.com':
gcloud compute instances add-iam-policy-binding ubuntu-test \
    --member='user:test-user@gmail.com' \
    --role='roles/compute.osLogin' \
    --zone=<instance_zone>
Note: Unlike the metadata SSH key method, with OS Login the user must exist in the GCP database in order for the permissions to be assigned properly.
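For completeness, a sketch of enabling OS Login on the instance itself (the zone here is a placeholder):

```shell
# Turn on OS Login for this instance; after this, metadata SSH keys
# are ignored and access is governed by IAM roles such as
# roles/compute.osLogin
gcloud compute instances add-metadata ubuntu-test \
    --metadata enable-oslogin=TRUE \
    --zone=us-central1-a
```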
Related
I am logged into a Compute Engine instance via SSH, using my personal SSH keys, i.e. I ran gcloud compute ssh --project someproj --zone somezone someserver to log in. Then I tried to run gcloud compute instances list to view the external IP. It says I have insufficient privileges. My understanding is that although I logged in via SSH as myself, I am actually using the service account. So I edited the service account to have the role Compute Viewer, but I still get an error. What am I doing wrong?
Please be advised I know I can view the external IP from the console or via the CLI on my PC. I'm more interested in why the compute instance cannot see it given the IAM settings.
Here is the actual error:
$ gcloud compute instances list
ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
- Insufficient Permission: Request had insufficient authentication scopes.
Here is my gcloud sdk config:
$ gcloud config list
[core]
account = some-service-account@developer.gserviceaccount.com
disable_usage_reporting = True
project = some-project
Your active configuration is: [default]
When you create a Compute Engine instance, you have the opportunity to specify "scopes". Scopes are an older mechanism that constrains which APIs the instance's requests are allowed to call. The default is "Allow default access", which allows some GCP services and not others. The other two options are "Allow full access" and "Set access for each API". If you specify "Allow full access", then access to GCP services is controlled exclusively by IAM. If it is either the default or per-API access, then you are governed by BOTH scopes and IAM permissions.
It is likely that you are using default access, which prevents the gcloud command you want to run. Either set "Allow full access" or change the specific scope to allow the Compute scopes.
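A sketch of widening the scopes from the CLI, reusing the placeholder names from the question; note that an instance's scopes can only be changed while it is stopped:

```shell
# Scopes can only be changed while the instance is stopped
gcloud compute instances stop someserver --zone=somezone

# "cloud-platform" is the full-access scope, leaving IAM in sole control
gcloud compute instances set-service-account someserver \
    --zone=somezone \
    --scopes=cloud-platform

gcloud compute instances start someserver --zone=somezone
```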
I am trying to use OS Login from my account, which has the Owner role and the Compute OS Admin Login role, to connect to an instance with enable-oslogin TRUE. This used to work well (maybe a week or more ago), but lately it has been giving me this error.
This is within the same organization, same project. So not sure where the error is coming from.
Did GCP change their OS-Login feature? I am unable to find anything in the release notes.
gcloud compute ssh instance
ERROR: (gcloud.compute.ssh) User [user] does not have permission to access user [user:importSshPublicKey] (or it may not exist): Insufficient IAM permissions. The instance belongs to an external organization. You must be granted the roles/compute.osLoginExternalUser IAM role on the external organization to configure POSIX account information.
The GCP message is saying that you are running the gcloud tool with credentials that don't belong to the same organization as the project.
Run gcloud auth list to check which account is authenticated.
If the wrong user is selected, run gcloud config set account to switch the active account, or gcloud auth login to log in with the proper credentials.
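For example (the account address is a placeholder):

```shell
# List credentialed accounts; the active one is marked with '*'
gcloud auth list

# Switch the active account if the wrong one is selected
gcloud config set account user@your-org.example.com
```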
I hope this helps,
Eduardo Ruela
I'd like to get automated deployments going for a VM that I have running in Google Cloud and as part of that, I'm trying to use a service account to SCP my files up to a VM in GCP, but unfortunately, I can't seem to figure out what the correct permissions should be.
After scouring the documentation, I have a service account with these permissions:
compute.instances.get
compute.instances.setMetadata
compute.projects.get
compute.projects.setCommonInstanceMetadata
but when I run the below commands, I get the below output:
+ ./google-cloud-sdk/bin/gcloud auth activate-service-account --key-file=./service-account.json
Activated service account credentials for: [scp-test@my-project.iam.gserviceaccount.com]
+ ./google-cloud-sdk/bin/gcloud beta compute scp hello.txt scp-test:c:/hello.txt --quiet --project=my-project --ssh-key-file=./.ssh/key --zone=us-east4-c
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
Generating public/private rsa key pair.
Your identification has been saved in /Users/mac-user/Downloads/scp-test/.ssh/key.
Your public key has been saved in /Users/mac-user/Downloads/scp-test/.ssh/key.pub.
The key fingerprint is:
{OMITTED}
The key's randomart image is:
{OMITTED}
External IP address was not found; defaulting to using IAP tunneling.
Updating project ssh metadata...failed.
Updating instance ssh metadata...failed.
ERROR: (gcloud.beta.compute.scp) Could not add SSH key to instance metadata:
- The user does not have access to service account '{OMITTED}-compute@developer.gserviceaccount.com'. User: 'scp-test@my-project.iam.gserviceaccount.com'. Ask a project owner to grant you the iam.serviceAccountUser role on the service account
Granting my scp-test user the iam.serviceAccountUser role works, but this seems to be bad practice, since it makes my scp-test user able to impersonate the default service account ('{OMITTED}-compute@developer.gserviceaccount.com'), which then seems to give it full access to everything.
How do I grant it only the permissions that it needs for SCP?
In order to use SSH/SCP you need instance admin rights to Compute Engine.
Service account means the service account IAM member that gcloud is configured to use: scp-test@my-project.iam.gserviceaccount.com
You need to give the service account this role:
roles/compute.instanceAdmin.v1
Since your compute instance is also configured to use a service account, you also need this role for your service account:
roles/iam.serviceAccountUser
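A sketch of granting both bindings (project and account names are placeholders, PROJECT_NUMBER stands for the numeric project ID). To limit the impersonation concern raised above, iam.serviceAccountUser can be bound on the instance's attached service account itself instead of project-wide:

```shell
# Allow the deploy account to manage instances (needed so gcloud
# can push the SSH key into instance metadata)
gcloud projects add-iam-policy-binding my-project \
    --member='serviceAccount:scp-test@my-project.iam.gserviceaccount.com' \
    --role='roles/compute.instanceAdmin.v1'

# Grant serviceAccountUser only on the instance's attached service
# account, not on the whole project
gcloud iam service-accounts add-iam-policy-binding \
    PROJECT_NUMBER-compute@developer.gserviceaccount.com \
    --member='serviceAccount:scp-test@my-project.iam.gserviceaccount.com' \
    --role='roles/iam.serviceAccountUser'
```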
I used Ansible to create a GCE cluster following the guide at: https://docs.ansible.com/ansible/latest/scenario_guides/guide_gce.html
At the end of the GCE creation, I used the add_host Ansible module to register all instances in their corresponding groups, e.g. gce_master_ip.
But when I try to run the following tasks after the creation task, they do not work:
- name: Create redis on the master
  hosts: gce_master_ip
  connection: ssh
  become: True
  gather_facts: True
  vars_files:
    - gcp_vars/secrets/auth.yml
    - gcp_vars/machines.yml
  roles:
    - { role: redis, tags: ["redis"] }
Within the auth.yml file I already provided the service account email, path to the json credential file and the project id. But apparently that's not enough. I got errors like below:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey).\r\n", "unreachable": true}
This is a typical case of the SSH username and credentials either not being provided or not being permitted. In this case I would say I did not set up the username and private key that Ansible should use for the SSH connection.
Is there anything I should do to make sure the correct credentials are provided to establish the connection?
During my search I found one question that briefly mentioned you could use the gcloud compute ssh... command. But is there a way to tell Ansible not to use classic ssh and to use the gcloud one instead?
To have Ansible SSH into a GCE instance, you'll have to supply an SSH username and private key which correspond to the SSH configuration available on the instance.
So the question is: If you've just used the gcp_compute_instance Ansible module to create a fresh GCE instance, is there a convenient way to configure SSH on the instance without having to manually connect to the instance and do it yourself?
For this purpose, GCP provides a couple of ways to automate and manage key distribution for GCE instances.
For example, you could use the OS Login feature. To use OS Login with Ansible:
When creating the instance using Ansible, enable OS Login on the target instance by setting the "enable-oslogin" metadata field to "TRUE" via the metadata parameter.
Make sure the service account attached to the instance that runs Ansible has both the roles/iam.serviceAccountUser and roles/compute.osAdminLogin roles.
Either generate a new or choose an existing SSH keypair that will be deployed to the target instance.
Upload the public key for use with OS Login: This can be done via gcloud compute os-login ssh-keys add --key-file [KEY_FILE_PATH] --ttl [EXPIRE_TIME] (where --ttl specifies how long you want this public key to be usable - for example, --ttl 1d will make it expire after 1 day)
Configure Ansible to use the Service Account's user name and the private key which corresponds to the public key uploaded via the gcloud command. For example by overriding the ansible_user and ansible_ssh_private_key_file inventory parameters, or by passing --private-key and --user parameters to ansible-playbook.
The service account username is the username value returned by the gcloud command above.
Also, if you want to automatically set the enable-oslogin metadata field to "TRUE" across all instances in your GCP project, you can simply add a project-wide metadata entry. This can be done in the Cloud Console under "Compute Engine > Metadata".
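The steps above can be sketched as follows (the key path, playbook name, and TTL are placeholders, and the username query assumes the first POSIX account in the OS Login profile is the relevant one):

```shell
# Upload the public key to OS Login, valid for one day
gcloud compute os-login ssh-keys add \
    --key-file=$HOME/.ssh/ansible_key.pub --ttl=1d

# Look up the POSIX username OS Login assigned to the active account
OSLOGIN_USER=$(gcloud compute os-login describe-profile \
    --format='value(posixAccounts[0].username)')

# Project-wide equivalent of the Cloud Console metadata entry
gcloud compute project-info add-metadata \
    --metadata enable-oslogin=TRUE

# Run the playbook with that user and the matching private key
ansible-playbook site.yml \
    --user "$OSLOGIN_USER" \
    --private-key $HOME/.ssh/ansible_key
```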
We are just beginning to use Avi in AWS and I am setting up the controller instance.
As I am adding users to the controller instance, I would like users that log in to the instance via shell to do so with private/public keypair authentication.
I created a user and added their public key to ~/.ssh/authorized_keys, and I also added a NOPASSWD entry, but it seems to still be prompting for a password. Can I log in to the GUI with a password but restrict shell access to keypair only?
Avi Controller ssh expects the key for each user to be in a separate file at /etc/ssh/authorized_keys_username
The SSH config at /etc/ssh/sshd_config sets the path to the authorized keys:
AuthorizedKeysFile /etc/ssh/authorized_keys_%u
You can restrict the user to keys for shell access by changing the /etc/ssh/sshd_config file, but these changes will get overwritten every time you upgrade to a new version.
There is a Match option in /etc/ssh/sshd_config that can disable password-based shell access for specific users:
Match User sysadmin,root,aviseuser,avictlruser,testuser
    PasswordAuthentication no