I have a GCP project and, just for testing purposes, I want to grant a permission to 'allUsers'. But when I try to add it, I get the error "Members of type allUsers and allAuthenticatedUsers cannot be added to this resource." Can somebody help me understand what I am doing wrong or missing here? Thanks
Check the docs.
Project does not support allUsers and allAuthenticatedUsers as members in a Binding of a Policy.
Sometimes it might be that you are trying to use a deprecated feature that is no longer available in the web UI and control panels but is, in fact, still silently supported for those who are unable to upgrade.
Granting a role to allUsers is one of these cases, and you can find an example of such a case in this answer from the SmartThings community.
Regarding the:
Can somebody help me to understand what I am doing wrong or missing here?
You are trying to use insecure permissions that are strongly discouraged. That is what is wrong, and it is unavailable in some web user interfaces for a reason. But if Google dropped such support entirely, IoT devices that still depend on it yet are out of reach of the developers who could upgrade them would become inoperable. So new users are unlikely to see this possibility, but those who used it in the past will stay operable.
But if you were unfortunate enough to delete such a permission and are now left with a lot of IoT devices that can no longer publish to a Pub/Sub topic (and upgrading the devices is not a feasible option), then, following the mentioned answer from the SmartThings community, if you want to allow publishing to the topic bar of the project foo you can use the setIamPolicy API to apply the role roles/pubsub.publisher to allUsers.
The resource will be: projects/foo/topics/bar
And the policy object will be:
{
  "policy": {
    "bindings": [
      {
        "role": "roles/pubsub.publisher",
        "members": [
          "allUsers"
        ]
      }
    ]
  }
}
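Equivalently, if you have the gcloud CLI available, the same binding can be applied with one command. This is just a sketch reusing the example project foo and topic bar from above:
$ gcloud pubsub topics add-iam-policy-binding bar \
    --project=foo \
    --member=allUsers \
    --role=roles/pubsub.publisher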
The member types used to grant allUsers or allAuthenticatedUsers access to certain Google Cloud resources, such as buckets, cannot be applied as project-level role bindings.
Google Cloud projects use levels of access control that are different from those used with buckets, as explained in the Access Control for Projects using IAM documentation.
There is a three-level policy hierarchy in Google Cloud that treats projects and resources as separate entities. In this hierarchy, policies are inherited, but the levels have different access control models, which are not interchangeable.
You can grant the following roles on a Google Cloud project:
roles/owner - Full access to all resources.
roles/editor - Edit access to all resources.
roles/viewer - Read access to all resources.
roles/browser - Access to browse resources in the project.
The above project-level roles can be fine-tuned at the resource level using member types, as explained above.
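For example, a project-level grant and a bucket-level allUsers grant look like this (the project ID, user email, and bucket name below are placeholders, not values from the question):
$ gcloud projects add-iam-policy-binding my-project \
    --member=user:dev@example.com \
    --role=roles/viewer
$ gsutil iam ch allUsers:objectViewer gs://my-bucket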
Try removing public access prevention in the permissions of your GCP Cloud Storage bucket. This allows fine-grained control over individual objects, so that one or more objects in the bucket can be made public.
Public access prevention prevents data in your organization or project from being accidentally exposed to the public. When you enforce public access prevention on a new or existing Cloud Storage resource, no one in your organization can make data public through IAM policies or ACLs.
For more see the docs here
Go to your bucket and revoke public access prevention as in the image below. Then go to your resource and add a permission for allUsers.
Disable public access prevention:
Go to Permissions (to the right of Configuration).
Disable public access prevention.
Then try to make the image public again.
It'll work.
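As a sketch, the same two steps can also be done from the CLI (the bucket name here is a placeholder):
$ gsutil pap set inherited gs://my-bucket
$ gsutil iam ch allUsers:objectViewer gs://my-bucket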
Click on "Edit Access" in your bucket, then remove public access prevention, then try again; you will be able to set allUsers in permissions.
I want to list IAM policies or access levels for various resources. I followed the docs and I'm able to list them for projects. There are various resources and I'm somewhat confused by them.
Do all other resources come inside the project? (Basically I'm confused about the chain.)
If someone has access to a project (read/write/anything else), then can they have access to the resources inside the project?
If other resources are independent, then how do I list their IAM policies? (For each individual resource.)
I'm using the GCP OAuth2 API and would highly appreciate it if anyone could at least answer the above questions.
Do all other resources come inside the project?
Yes,
For a specific project, you can use search-all-resources to search all the resources across services (or APIs) and projects.
For example, to search every resource in the project with number 123:
$ gcloud asset search-all-resources --scope=projects/123
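Similarly, for listing IAM policies: assuming the Cloud Asset API is enabled, the IAM policies attached to resources in that scope can be searched with:
$ gcloud asset search-all-iam-policies --scope=projects/123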
If someone has access to a project (read/write/anything else), then can they have access to resources inside the project?
Results from the above command are the resources in that project. If you have a user who has an owner role in the project then the user can manage roles and permissions for a project and all resources within the project. If a user has a viewer role then the user has permissions for read-only actions that do not affect state, such as viewing (but not modifying) existing resources or data.
Some resources also have their own permissions: a user can be granted permissions at the resource level rather than the project level. Using these, you can restrict the user from accessing the project while still allowing access to a specific resource.
Here you can find Access control for projects with IAM.
If other resources are independent then how to list their IAM policies? (For each individual resource)
Google has predefined roles for every resource in the project. You can filter for a specific resource by searching in this doc; those predefined roles can be assigned to a user on the specific resource.
You can find more information in this doc.
The ORG, Folder, and Project are resources. They have an API to access IAM Policy Bindings. Cloud Storage, KMS, Compute Engine, Cloud Run, Functions, etc are also resources. They have an API to access IAM Policy bindings. Look up the API for each resource type.
In Google Cloud, many resources support IAM Policy Bindings but not all.
Do all other resources come inside the project? (Basically I'm confused about the chain.)
Google Cloud resources belong to projects in almost all cases. Billing Accounts and Payment Accounts are examples that are separate.
If someone has access to a project (read/write/anything else), then can they have access to resources inside the project?
If as you say "read/write/anything else", then yes. If they have the correct IAM roles at the project level, they can access the resource. Since some resources also support their own IAM Policy Bindings, a user can be granted access to a resource at the resource level without having permission at the project level.
If other resources are independent, then how do I list their IAM policies? (For each individual resource.)
You must access the resource's IAM Policy Bindings. Each resource that supports IAM Policy Bindings has a corresponding API to read/modify.
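For example, each resource type exposes its own get-iam-policy call (the IDs and names below are placeholders):
$ gcloud organizations get-iam-policy 123456789012
$ gcloud projects get-iam-policy my-project
$ gsutil iam get gs://my-bucket
$ gcloud pubsub topics get-iam-policy my-topic
$ gcloud iam service-accounts get-iam-policy my-sa@my-project.iam.gserviceaccount.com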
Note: resources are not independent. They are owned by a project in almost all cases as I mentioned previously.
My developer has created an EC2 instance on AWS and I want to be able to access it via my own dashboard.
What I did is:
As a root user, I created an IAM account for me and him and assigned us both to a group named PowerUsers
I created an Organizational Unit and added his account to it
When he goes to his EC2 dashboard, he sees his created instances. But when I go to my EC2 dashboard, I see nothing. We both selected the correct region.
I hope someone can help us out here, I can't seem to get any wiser from the AWS documentation.
tl;dr there is a difference between visual access and technical access. Technical access is possible, via IAM roles and permissions, etc. Visual access is not possible in the AWS console from a different account.
Generally you do not see resources from other accounts that you have access to. That is simply not how AWS / IAM or basically any complex permission system works.
Same thing for S3 buckets: you cannot see S3 buckets you have access to in your S3 console, neither those that are public to everyone nor those that you have explicitly been granted permission to. You only ever see the buckets that you / your account actually own(s).
The reason for that, from a technical perspective, is really simple: AWS simply does not know which buckets / EC2 instances you can access. It knows your permissions, and if you want to access a specific resource AWS can check whether the permissions let you access it, but not the other way around.
IAM has policies that can grant permissions based on IP, time of day, VPC, etc. That makes it impossible, and not really meaningful, to display what you can access right now, because in 10 seconds or from a different network you might not be able to access it at all.
Let me tell you from personal experience, currently building one myself: if you build a permission system, it is built to answer "can I do X"; listing all X is a VERY different story. IAM cannot answer it, and I have not come across a permission system that can answer it while also having a complex permission structure AND being efficient. It seems you cannot have efficiency, complexity, and reverse lookup / listing at the same time.
Note that you still have access to the resource. E.g. when manipulating the browser URL to directly access the resource, you can view it even though you are not logged into the owning account; but at that point you are asking "can I do X" (X = "view resource"), and that can be easily answered. You just cannot list the resources.
Second note: some of the listed resources you see and that your account owns you still cannot access because there might be an explicit IAM Deny policy for your current role in place that only takes effect when interacting with the resource.
Following are some options:
The better way is to use Cross-Account Access using switch roles; also refer to this.
A bit trickier way is using a Python sign-in script.
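For reference, switching into a role in the other account from the CLI looks roughly like this (the account ID and role name below are placeholders, not values from the question):
$ aws sts assume-role \
    --role-arn arn:aws:iam::111122223333:role/DeveloperAccess \
    --role-session-name cross-account-test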
I find myself using roles/storage.legacyBucketWriter a lot, which has the following permissions:
storage.buckets.get
storage.objects.create
storage.objects.delete
storage.objects.list
Maybe it's okay, but it feels odd to me to be using a role with legacy in its name ...
I don't want to create a custom role either because it seems overkill since there is this role fitting the need.
And yes there is a role roles/storage.objectAdmin but it lacks the storage.buckets.get permission.
What do you think?
Remember that legacy roles are related to primitive roles on GCP. A role is 'legacy' because it exactly matches the pre-IAM permissions that were granted via ACLs on a bucket or object. It all depends on your use case; the recommended best practice is to follow the principle of least privilege.
Keep in mind, as is mentioned in the official documentation:
Legacy Bucket IAM roles work in tandem with bucket ACLs: when you add or remove a Legacy Bucket role, the ACLs associated with the bucket reflect your changes.
Also, consider the scope of the read/write legacy roles as described in this table.
Finally, take a look at the Security, ACLs, and access control section to follow the best practices recommended for the Cloud Storage service.
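If you want to compare what each role actually contains before deciding, you can inspect the role definitions directly, for example:
$ gcloud iam roles describe roles/storage.legacyBucketWriter
$ gcloud iam roles describe roles/storage.objectAdmin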
Consider using the Storage Admin Role (roles/storage.admin) instead of a legacy role. This role grants both storage.buckets.* and storage.objects.* permissions, making it a suitable match for your requirements.
According to the Storage Admin Role description, this role
grants full control over buckets and objects. When applied to an individual bucket, control is limited to that specific bucket and its associated objects.
This role is particularly useful when utilizing the gsutil rsync command targeting a bucket.
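A minimal sketch of granting that role on a single bucket rather than project-wide (the bucket name and user email are placeholders):
$ gcloud storage buckets add-iam-policy-binding gs://my-bucket \
    --member=user:dev@example.com \
    --role=roles/storage.admin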
I am picking up Terraform for GCP and I came across these three resources:
google_service_account_iam_member
google_project_iam_member
google_organization_iam_member
They sound very similar to each other but certainly with some key differences.
I went through their docs but their differences were not absolutely clear to me. Is there any easy way to illustrate the difference between these?
Thanks
Within GCP, there is a hierarchy: Organization, Project, Resource
The IAM resources you mentioned behave the same; however, they work at different levels of the hierarchy.
For example, the google_project_iam_member will update the IAM policy to grant a role to a new member on the project level.
The google_organization_iam_member will do the same thing, but on the Organization level (which is a level higher than the project).
Update:
The google_service_account_iam_member grants a member a role on an individual service account: you can either have the service account act as an identity for them or let them run a certain resource as it. The service account itself, as a member, can be added on all three levels.
As described before, google_project_iam_member and google_organization_iam_member are used to manage IAM permissions at the project or organization level. You can also manage permissions at the folder level.
When IAM is granted at the org level, all folders and projects inherit that permission. When granted on a folder, all projects and sub-folders under that folder inherit that permission.
Permissions can also be managed at the resource level: google_service_account_iam_member allows you to grant permissions to manage and use a service account at the service account level. That is helpful when you want to grant more restricted permissions and give access to a single service account instead of all service accounts in the project.
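As an illustration, the equivalent gcloud commands make the different levels explicit (the org ID, folder ID, project ID, service account, and member below are all placeholders):
$ gcloud organizations add-iam-policy-binding 123456789012 \
    --member=user:alice@example.com --role=roles/viewer
$ gcloud resource-manager folders add-iam-policy-binding 456 \
    --member=user:alice@example.com --role=roles/viewer
$ gcloud projects add-iam-policy-binding my-project \
    --member=user:alice@example.com --role=roles/viewer
$ gcloud iam service-accounts add-iam-policy-binding my-sa@my-project.iam.gserviceaccount.com \
    --member=user:alice@example.com --role=roles/iam.serviceAccountUser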
Thanks,
Eduardo Ruela
I want to be able to programmatically add a user to a project that exists in Google Cloud. I can do this via the console by going to IAM & admin, selecting a project, then searching for a user, selecting a role and adding them. Furthermore, the docs seem to say this should be possible:
Project owners can grant access to team members to access project's resources and APIs by granting IAM roles to team members. You can grant a role to a team member using the Cloud Platform Console, the cloud command-line tool, or the setIamPolicy() method.
But the API seems to be missing this method.
I can grant users access to particular resources, but I can't give them the same kind of all-resource access that I can from the console.
What API call can I use to, say, grant a given user read-only access to all the resources in a given project?
It's right where you linked it :)
What you want to do is:
1. Get current policy.
That will give you a JSON response showing you what the structure should be like.
2. Make your changes. If there is already an entry with roles/viewer, append to the members list, otherwise create the entry:
...
{
  "role": "roles/viewer",
  "members": [
    "user:your.friend@gmail.com"
  ]
},
...
3. Set the new policy.
For a list of possible roles look here.
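If you prefer the CLI, the same read-modify-write cycle (and a one-step shortcut) would look roughly like this, with a placeholder project ID and the example user from above:
$ gcloud projects get-iam-policy my-project --format=json > policy.json
# edit policy.json and append the member to the roles/viewer binding, then:
$ gcloud projects set-iam-policy my-project policy.json
# or, as a shortcut for adding a single binding:
$ gcloud projects add-iam-policy-binding my-project \
    --member=user:your.friend@gmail.com --role=roles/viewer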