If I create a service account in my project and give it to a third party, can that third party abuse it to create VM instances etc? Or is it only allowed to do things that I give it explicit permission to do?
In the "permissions" section of the Google developers console I can set the service account to "Can edit" or "Can view", but what do those mean?
If you give "edit" or "owner" permissions, the user can create, modify, or delete GCE VM instances (among other resources). If you only give "view" permissions, then they can't create, modify, or delete GCE VM instances.
However, you cannot give fine-grained permissions such as "user can only edit this VM instance, but not this other one".
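For example, giving "Can edit" to a service account is, under the hood, just a project-level binding of the primitive editor role. A minimal sketch, assuming placeholder values for PROJECT_ID and the service account email:

    # Grant project-wide edit access to a service account (placeholder IDs).
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="serviceAccount:my-sa@PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/editor"

    # Or grant read-only access instead:
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="serviceAccount:my-sa@PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/viewer"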
Per Google Compute Engine docs:
Can View
Provides READ access:
Can see the state of your instances.
Can list and get any resource type.
Can Edit
Provides "Can View" access, plus:
Can modify instances.
On standard images released after March 22, 2012, can ssh into the project's instances.
Is Owner
Provides "Can Edit" access, plus:
Can change membership of the project.
Per Google Cloud Storage docs:
Project team members are given the following permissions based on their roles:
All Project Team Members
All project team members can list buckets within a project.
Project Editors
All project editors can list, create, and delete buckets.
Project Owners
All project owners can list, create, and delete buckets, and can also perform administrative tasks like adding and removing team members and changing billing. The project owners group is the owner of all buckets within a project, regardless of who may be the original bucket creator.
When you create a bucket without specifying an ACL, the project-private ACL is applied to the bucket automatically. This ACL provides additional permissions to team members, as described in default bucket ACLs.
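If you want to see what that project-private default looks like on one of your buckets, you can inspect the default object ACL and the bucket ACL. A quick check, assuming a bucket named my-bucket (these commands apply to buckets using legacy ACLs, not uniform bucket-level access):

    # Show the default object ACL applied to new objects in the bucket.
    gsutil defacl get gs://my-bucket

    # Show the bucket ACL itself.
    gsutil acl get gs://my-bucket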
Per Google Cloud SQL docs:
Team members may be authorized to have one of three levels of access:
“can View” (called Viewer in App Engine Console) allows read-only access.
“can Edit” (called Developer in App Engine Console) allows modify and delete access. This allows a developer to deploy the application and modify or configure its resources.
“is Owner” (called Owner in App Engine Console) allows full administrative access. This includes the ability to add members and set the authorization level of team members.
I have a Google Cloud organisation which grants certain access based on the organisation/folder. For example, a CST staff member might get "Storage Object Viewer" on the customer folder to be able to read the Cloud Storage files for debugging. Each customer is one project in the customer folder.
Now I'm trying to set up a bucket (inside the same project under the customer folder) that only a very select handful of people should have access to. Is this possible?
Running gsutil iam get shows only 1 service account and 1 group with access, but looking at the UI, all the inherited permissions also grant read access.
Is there a way (I'm using uniform bucket-level permissions) to disable this so there is no inheritance?
You can use a preview feature named Deny Policies, which blocks specified permissions even when an allow role is inherited from higher in the resource hierarchy.
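A rough sketch of what that could look like for keeping bucket reads away from everyone except a chosen user. Everything below (policy name, principal formats, the chosen permission, PROJECT_ID) is an illustrative assumption, so check the current deny-policies documentation before relying on it; also note that scoping the denial to a single bucket typically needs an additional denial condition based on resource tags.

    # deny.json: deny object reads to all principals except one user (illustrative).
    cat > deny.json <<'EOF'
    {
      "displayName": "Deny bucket reads except for selected users",
      "rules": [
        {
          "denyRule": {
            "deniedPrincipals": ["principalSet://goog/public:all"],
            "exceptionPrincipals": ["principal://goog/subject/alice@example.com"],
            "deniedPermissions": ["storage.googleapis.com/objects.get"]
          }
        }
      ]
    }
    EOF

    # Attach the deny policy at the project level (attachment point is URL-encoded).
    # Depending on your gcloud version this may require the beta component.
    gcloud iam policies create deny-bucket-reads \
        --attachment-point="cloudresourcemanager.googleapis.com%2Fprojects%2FPROJECT_ID" \
        --kind=denypolicies \
        --policy-file=deny.json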
In my GCP project, people have Storage Admin access. I want to restrict that and give only a few members write access to a GCS bucket. When I try to revoke the access, it says the access cannot be changed because it is inherited.
Is there any way to create custom access for a particular storage bucket in GCP? I need this for the Airflow DAG bucket.
Custom roles cannot be recognized upwards on the resource hierarchy. For example, a role created at the project level cannot be used at the folder or organization level.
Similarly, custom roles cannot be recognized laterally. For example, a custom role created at the project level cannot be used in bindings in another project even if they are in the same folder or organization.
To use a custom role in different projects or different folders, customers have to create/define the roles at the parent organization level. Note that currently, a custom role cannot be created at the folder level.
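For example, defining the custom role once at the organization level, so that every project under it can use it, might look roughly like this. The organization ID, role ID, permission list, and identities are placeholders to adapt, and the exact role path format that gsutil accepts may vary (see gsutil iam ch help):

    # Create a narrow, org-level custom role for working with objects in a bucket.
    gcloud iam roles create airflowDagWriter \
        --organization=ORG_ID \
        --title="Airflow DAG bucket writer" \
        --permissions=storage.objects.create,storage.objects.get,storage.objects.list

    # Bind it to a user on the specific bucket only.
    gsutil iam ch user:alice@example.com:organizations/ORG_ID/roles/airflowDagWriter gs://airflow-dag-bucket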
For more information on custom roles you can check the public documentation.
In my organization, we work with GCP and have multiple projects there. I'm now trying to organize the IAM roles between all the projects and I'm not sure about some of the IAM settings. Do projects act as completely separate entities with completely different IAM roles/permissions, or is there any overlap between them that could mean a change in one project affects another project?
Changing roles in one project will not directly change roles set on another project. But there are some things you'll want to consider.
While projects can have their own access control rules, it is possible to manage access at more than the project level. Here are the four resource points where you can manage access:
Organization level. The organization resource represents your company. IAM roles granted at this level are inherited by all resources under the organization.
Folder level. Folders can contain projects, other folders, or a combination of both. Roles granted at the highest folder level will be inherited by projects or other folders that are contained in that parent folder.
Project level. Projects represent a trust boundary within your company. Services within the same project have a default level of trust. For example, App Engine instances can access Cloud Storage buckets within the same project. IAM roles granted at the project level are inherited by resources within that project.
Resource level. In addition to the existing Cloud Storage and BigQuery ACL systems, additional resources such as Genomics Datasets, Pub/Sub topics, and Compute Engine instances support lower-level roles so that you can grant certain users permission to a single resource within a project.
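For instance, granting the same group a role at the folder level versus at a single project could look like the sketch below; the folder ID, project ID, group, and role are placeholders. The folder-level grant flows down to every project in that folder, while the project-level grant applies only inside that one project:

    # Folder-level grant: inherited by every project under the folder.
    gcloud resource-manager folders add-iam-policy-binding FOLDER_ID \
        --member="group:data-team@example.com" \
        --role="roles/storage.objectViewer"

    # Project-level grant: applies only inside this one project.
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="group:data-team@example.com" \
        --role="roles/storage.objectViewer"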
Access can be granted at the individual level, through a service account, or through organization-wide or Google Group membership. That means that when you add or remove someone from the organization or a Google group, you may inadvertently add or remove them from various roles in different projects.
Also, if a member (individual or group) is assigned a role that gives the capability to change IAM roles, then anyone in that member group can modify permissions. They may change rules in ways you don't want.
When in doubt, use testIamPermissions to verify that roles are working as expected.
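A quick way to do that is to call the Resource Manager testIamPermissions method directly; a sketch, assuming PROJECT_ID is a placeholder and that you are authenticated with gcloud:

    # Check which of a set of permissions the caller actually has on a project.
    curl -s -X POST \
        -H "Authorization: Bearer $(gcloud auth print-access-token)" \
        -H "Content-Type: application/json" \
        -d '{"permissions": ["compute.instances.create", "storage.buckets.delete"]}' \
        "https://cloudresourcemanager.googleapis.com/v1/projects/PROJECT_ID:testIamPermissions"

The response lists only the permissions the caller actually holds, so an empty list means none of the tested permissions are granted.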
The IAM roles you set in a project won't affect other projects.
Google Cloud resources are organized hierarchically, where the organization node is the root node in the hierarchy, the projects are the children of the organization, and the other resources are descendants of projects. You can set Identity and Access Management (IAM) policies at different levels of the resource hierarchy. Resources inherit the policies of the parent resource. The effective policy for a resource is the union of the policy set at that resource and the policy inherited from its parent.
Please check the following documentation, where you will find a good explanation of the resource hierarchy for access control.
The resource hierarchy diagram in that documentation can also help you understand better how IAM works.
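To see that union of policies in practice, you can dump the bindings at each level and compare them; a rough sketch with placeholder IDs:

    # Bindings set directly on the project.
    gcloud projects get-iam-policy PROJECT_ID --format=json

    # Bindings inherited from the organization above it.
    gcloud organizations get-iam-policy ORG_ID --format=json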
I have a Google Cloud project shared with a few colleagues and a few service accounts. I have Owner permission on the GCP project.
I want to create a Google Cloud Storage bucket that only I will have access to, so that the other users and service accounts in the project can't see it.
I created a new Google Cloud bucket (permissions: Uniform) and went into the "Permissions" section. The list was already filled with inherited permissions. Since I want only myself to have access to this bucket:
I added myself as Storage Admin.
I removed the Owner, Editor, and Viewer permissions for this bucket.
Now I have a list with all the service accounts in the project. Unfortunately, Google is not allowing me to remove the access of those service accounts:
How can I revoke the access of those service accounts to this bucket?
These are inherited permissions, which cannot be removed at the bucket level.
Roles are always inherited, and there is no way to explicitly remove a permission for a lower-level resource that is granted at a higher level in the resource hierarchy.
Following the principle of least privilege, grant minimum scope to the service accounts.
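In practice that usually means replacing the broad project-level grant with a narrow one on the bucket itself, rather than trying to block inheritance. A sketch with placeholder names; only do the removal if you control that binding and nothing else relies on it:

    # Remove the broad, inherited project-level grant.
    gcloud projects remove-iam-policy-binding PROJECT_ID \
        --member="serviceAccount:my-sa@PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/storage.admin"

    # Re-grant only what is needed, directly on the one bucket.
    gsutil iam ch serviceAccount:my-sa@PROJECT_ID.iam.gserviceaccount.com:roles/storage.objectViewer gs://my-private-bucket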
I want to create firewall rules for a particular bucket in the Google Cloud Platform Storage browser. I see that we have an option to create firewall rules, but how can I apply those rules to a specific bucket and not to all the other buckets?
You do not have to create firewall rules for buckets. What you need is to set the permissions on the buckets using Cloud IAM.
Open the Cloud Storage browser in the Google Cloud Platform Console.
Click the drop-down menu associated with the bucket to which you want to grant a member a role. The drop-down menu appears as three vertical dots to the far right of the bucket's row.
Choose Edit bucket permissions.
In the Add members field, enter one or more identities that need access to your bucket.
Select a role (or roles) from the Select a role drop-down menu. The roles you select appear in the pane with a short description of the permissions they grant.
Click Add.
You can add as members individual users, groups, domains, or even the public as a whole. Members are assigned roles, which grant members the ability to perform actions in Cloud Storage as well as GCP more generally.
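The same grant can be made from the command line with gsutil; a sketch, assuming a bucket called my-bucket and a placeholder identity:

    # Give one user read access to the objects in a single bucket.
    gsutil iam ch user:alice@example.com:roles/storage.objectViewer gs://my-bucket

    # Review who currently has access on that bucket.
    gsutil iam get gs://my-bucket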
You can make a Cloud Storage bucket accessible only to a certain service account.
A service account is a special type of Google account intended to represent a non-human user that needs to authenticate and be authorized to access data in Google APIs.
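A minimal sketch of that setup, assuming placeholder names for the project, service account, and bucket:

    # Create a dedicated service account for bucket access.
    gcloud iam service-accounts create bucket-reader \
        --display-name="Bucket reader"

    # Grant it read access on just this bucket.
    gsutil iam ch serviceAccount:bucket-reader@PROJECT_ID.iam.gserviceaccount.com:roles/storage.objectViewer gs://my-bucket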
You cannot apply firewall rules to single buckets. Firewall rules are defined at the network level, and only apply to the network where they are created.
Your inquiry is a known Feature Request that has not been implemented yet for Cloud Storage. The request, which is still open, asks to allow IP whitelisting in a bucket policy, just like AWS does with S3 buckets. You can "star" the FR so that it gets more visibility, and also add your email to the "CC" list so that you get updates.
As a workaround, you may request access to use VPC Service Controls. According to official documentation, with VPC Service Controls, administrators can define a security perimeter around resources of Google-managed services to control communication to and between those services.
Cloud Storage is included in the Supported products of these Google-managed services and here you can find its limitations.
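Roughly, setting up a perimeter that protects Cloud Storage for one project could look like the following; the perimeter name, project number, and Access Context Manager policy number are placeholders, and the exact flags supported can depend on your gcloud version:

    # Create a service perimeter that restricts Cloud Storage for one project.
    gcloud access-context-manager perimeters create storage_perimeter \
        --title="Storage perimeter" \
        --resources=projects/PROJECT_NUMBER \
        --restricted-services=storage.googleapis.com \
        --policy=POLICY_NUMBER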
You can use access levels to grant controlled access to protected Google Cloud Platform (GCP) resources in service perimeters from outside a perimeter.
Access levels define various attributes that are used to filter requests made to certain resources. Access levels can consider various criteria, such as IP address and user identity. Additionally, they are created and managed using Access Context Manager.
This example describes how to create an access level condition that allows access only from a specified range of IP addresses.
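A sketch of such an access level, assuming a placeholder Access Context Manager policy number and an example IP range (adjust both):

    # conditions.yaml: allow requests only from this IP range.
    cat > conditions.yaml <<'EOF'
    - ipSubnetworks:
      - 203.0.113.0/24
    EOF

    gcloud access-context-manager levels create corp_ips \
        --title="Corporate IP range" \
        --basic-level-spec=conditions.yaml \
        --policy=POLICY_NUMBER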
However, keep in mind that VPC Service Controls create a "border" around the project, a "virtual area" where Access Context Manager rules can be applied. An ACM rule specifying an IP address will allow that IP address to access all Cloud Storage objects and all other protected resources owned by that project, which may not be the expected result. As stated here, you cannot apply an IP address rule to a single object, only to all objects in a project.
Furthermore, here you can find a useful link for the Best Practices concerning Security and Access Control on Cloud Storage buckets. Here, you can find tips on “sharing your files” while hosting a static website.
In conclusion, another option is Firebase Hosting instead of Cloud Storage, as stated here. Firebase Hosting is a Google hosting service which provides static web content to the user in a secure, fast, free and easy way.