In Google Cloud Platform, across all services, is it possible to grant admin-level access but without delete access to any resources? That is, a user or service account could perform read, create, and update operations, but delete alone would be restricted.
The quick answer is no.
For some resources, a create or update is effectively a delete. You must consider both the resource and the data it contains. For example, updating a Cloud Storage object with zero-length content effectively deletes the object's content.
For most resources, you can create a custom role with specific permissions. However, not all permissions can be assigned to custom roles, which means you must use a predefined role.
Some resources support deletion protection (for example, Compute Engine instances and Cloud Storage buckets with retention policies), but not all do.
Some resources cannot be deleted at all (for example, KMS key rings, keys, and key versions).
You will need to analyze your requirements resource by resource.
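As a concrete sketch of the resource-by-resource approach, here is an illustrative custom-role definition (the kind of structure you would put in a YAML/JSON file for `gcloud iam roles create`) that grants read, create, and update on Compute Engine instances while deliberately omitting every delete permission. The permission list is an illustrative, not exhaustive, assumption:

```python
# Sketch: a custom role granting read/create/update on Compute Engine
# instances, with compute.instances.delete intentionally absent.
role = {
    "title": "ComputeAdminNoDelete",
    "stage": "GA",
    "includedPermissions": [
        "compute.instances.list",
        "compute.instances.get",
        "compute.instances.create",
        "compute.instances.start",
        "compute.instances.stop",
        "compute.instances.setMetadata",
        # note: compute.instances.delete is deliberately not listed
    ],
}

# Sanity check: no delete permission slipped into the role.
assert not any(p.endswith(".delete") for p in role["includedPermissions"])
print("permissions granted:", len(role["includedPermissions"]))
```

Remember the caveat above: even with no `.delete` permission, update permissions such as `setMetadata` can still be destructive for the data a resource holds.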
Related
I want to restrict GCP resources via a custom policy:
1- specific VM types,
2- storage size restrictions (only a 10 GB size selection allowed).
Is it possible in GCP to restrict users to creating only specific types of resources?
I have created a custom role that only allows create, update, delete, and list operations on GCP resources, but I cannot restrict users to creating only a specific VM instance type.
I want to create a role for a Lambda function that allows it to create/update/delete any resource, as long as that resource was also created by it. For example, it should be able to create an SQS queue and do anything with it, but it should not have access to any other SQS queues from that AWS account.
Can this be achieved using IAM policies?
I've tried to use resourceTag and requestTag conditions for this, allowing the role to create or modify a resource only if it is tagged with a specific value. Unfortunately, a lot of AWS services do not support authorization based on tags.
Are there other options for achieving this?
You could create IAM policies that only allow a user to create, update, or delete resources that follow a particular naming scheme. For example, you could set the policy's resource ARN to have "/username*". The user would only be able to create resources whose names start with their username and affect those resources; they wouldn't be able to affect resources whose names start with another user's name, and vice versa.
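A minimal sketch of such a policy, using SQS as the example service (the account ID is a placeholder and the action list is an assumption for illustration). The `${aws:username}` policy variable in the resource ARN is what enforces the naming scheme:

```python
import json

# Sketch: restrict queue management to queues whose names begin with the
# caller's own IAM username, via the ${aws:username} policy variable.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sqs:CreateQueue",
                "sqs:SetQueueAttributes",
                "sqs:DeleteQueue",
            ],
            # Placeholder account ID; the trailing wildcard matches any
            # queue name that starts with the caller's username.
            "Resource": "arn:aws:sqs:*:123456789012:${aws:username}*",
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Note that this only works for services whose ARNs embed a name you control at creation time; auto-generated ARNs (like the ALB case below) defeat it.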
It is very hard to do in practice. You would have to combine the tags that you already mentioned, along with permission boundaries.
I think the best way to achieve this is to give your application its own dedicated AWS account, so that you can scope its permissions to that account and it cannot impact other applications.
We want to have deployment users to use in our pipelines, purely for programmatic access. These users will be created per project, rather than using one deployment user for all stacks.
I'm trying to lock down the resources that these deployment users have permission to change, but I'm struggling because the ARN is not known until the stack is created, which makes it difficult to write an IAM policy that restricts them to only certain resources.
For example, say I want to create an application load balancer (with listeners, rules etc) - I want the deployment user to have permission to create an ALB (easy enough) but I want the deployment user to only have permission to delete or modify the newly created ALB, not any other ALBs.
Any tips / smart ways to do this? The ARNs are generated and "random" as I dislike naming my resources and having to modify the names if I change a setting that requires replacement.
You can use IAM policy conditions to restrict access to resources based on tags.
For example, you can add two policy statements with a condition element to allow specific actions on a resource:
User1 can create a resource only if the request contains owner=user1 tag.
User1 can update or delete a resource only if owner=${aws:username} tag is attached to the resource.
You can find a policy example in this guide:
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html
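The two statements described above might look like the following sketch (the service, tag key, and actions are assumptions for illustration; `${aws:username}` generalizes the `user1` example to any caller):

```python
import json

# Sketch: creation requires the request to carry an owner tag matching the
# caller; update/delete require that tag to already exist on the resource.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CreateOnlyWithOwnerTag",
            "Effect": "Allow",
            "Action": "sqs:CreateQueue",
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:RequestTag/owner": "${aws:username}"}
            },
        },
        {
            "Sid": "ModifyOnlyOwnedResources",
            "Effect": "Allow",
            "Action": ["sqs:SetQueueAttributes", "sqs:DeleteQueue"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:ResourceTag/owner": "${aws:username}"}
            },
        },
    ],
}
print(json.dumps(policy, indent=2))
```

As noted earlier in this thread, this only helps for services that actually support tag-based authorization, so check each service's documentation.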
We are trying to create a different bucket for each source system and give each system access only to dump data into its particular bucket. They should not have read access, i.e., they shouldn't be able to see what's inside the bucket. Is this doable, and if so, how?
You are probably looking for the roles/storage.objectCreator role (take a look at IAM roles for Storage):
Allows users to create objects. Does not give permission to view, delete, or overwrite objects.
You can create a custom role for your project, which has only write access. Find storage permissions here. Then you can assign the created custom role to a person or service account with IAM.
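If you go the custom-role route, a minimal "drop box" role mirroring what roles/storage.objectCreator grants could be sketched like this (the role title is an assumption; you would put this in a YAML file and create it with `gcloud iam roles create`):

```python
# Sketch: a write-only custom role for a "drop box" bucket.
role = {
    "title": "BucketDropboxWriter",
    "stage": "GA",
    "includedPermissions": [
        "storage.objects.create",
        # deliberately no storage.objects.get / .list / .delete:
        # the caller can upload objects but cannot read, list, or remove them
    ],
}
assert "storage.objects.list" not in role["includedPermissions"]
print("role grants:", role["includedPermissions"])
```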
I want my service account to be able to create files and folders in my bucket but disallow any read/list/download for objects in that bucket. I am not able to figure out what permissions to set for my bucket/service-account. Any ideas on this?
You can have a look at the general Identity and Access Management (IAM) page for Google Cloud Storage. From that, you can either use one of the predefined Cloud Storage roles, or create a custom role with the specific IAM permissions that you need. Let's follow both approaches:
Standard Cloud Storage IAM Roles: in this page you can find the complete list of available IAM Roles. Given the use case you present, you should consider using roles/storage.objectCreator role, as it only grants storage.objects.create permissions, and you cannot view or list objects.
Custom IAM Roles: you can follow this guide to create a custom IAM Role and then define the specific permissions that you want to grant on your bucket. On this other page you can see a list of all the available permissions. You should use storage.objects.create, but you may also be interested in adding a permission such as storage.objects.delete so that the Service Account can overwrite content (which cannot be done with the roles/storage.objectCreator role, as it lacks delete permissions).
So, applying this to your specific use case, you can use the roles/storage.objectCreator standard role. However, take into account that with it you will not be able to overwrite content, since that also requires the storage.objects.delete permission. In that case, create a custom role instead.
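To make the overwrite caveat concrete, here is a hypothetical custom-role definition (the title is an assumption) that adds the delete permission, needed only so that uploads can replace existing objects, while still withholding read and list:

```python
# Sketch: create + delete lets the account overwrite existing objects
# (an overwrite is effectively delete-then-create), but it still cannot
# read or list the bucket's contents.
role = {
    "title": "WriteAndOverwriteNoRead",
    "stage": "GA",
    "includedPermissions": [
        "storage.objects.create",
        "storage.objects.delete",  # needed only so uploads can replace objects
    ],
}
assert "storage.objects.get" not in role["includedPermissions"]
print("role grants:", role["includedPermissions"])
```

Be aware this trades away some safety: with storage.objects.delete, the account can remove objects outright, not just overwrite them.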