Use case:
I have a GCP setup with:
multiple Google Kubernetes Engine clusters
multiple CloudSQL instances
multiple GCS buckets
Basically, I'd like to give permissions to users with finer granularity than project-wide (i.e. user A can access only bucket B, but can access CloudSQL instances C and D).
Possible solutions:
For GCS, this is easy and documented: IAM policies can be applied at the bucket level (see the sketch below).
However, I couldn't find anything similar for the other two. The CloudSQL documentation even seems to say it is not possible: "All permissions are applied to the project. You cannot apply different permissions based on the instance or other lower-level object."
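For reference, here is a minimal sketch of the bucket-level approach using the google-cloud-storage Python library; the bucket name and user email are made up:

```python
# Grant one user read access to a single bucket (bucket-level IAM).
# Assumes application default credentials; names are hypothetical.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("bucket-b")

# Fetch the bucket's current IAM policy and append a new binding.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectViewer",
    "members": {"user:user-a@example.com"},
})
bucket.set_iam_policy(policy)
```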
Workarounds explored:
I tried creating custom IAM roles. I was hoping there would be some way to filter which objects the role applies to, in a fashion similar to what AWS IAM allows with its Resource filter.
That's apparently not possible here.
For GKE, I can give every user the Kubernetes Engine Cluster Viewer role (which basically just allows them to list clusters and see basic information about them, as well as log in with the gcloud CLI tool), and then use the Kubernetes RBAC engine to give very fine-grained permissions. This works fine, but it doesn't let the user use the Google web interface, which is extremely handy, especially for a beginner on k8s.
Similarly, for CloudSQL, I can give the Cloud SQL Client role and manage my users directly through the Postgres access control system. This works fine, but my users are able to connect to the other instances as well (they still need an account on those instances, of course). Moreover, operations such as restoring a backup cannot be allowed only on specific instances.
So, have I missed something obvious, or has anybody found a way to work around these limitations?
For GKE, it seems that the only option is to use RBAC to give users fine-grained permissions, via a RoleBinding within a namespace or a ClusterRoleBinding for cluster-wide permissions.
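For example, a minimal RoleBinding granting the built-in edit ClusterRole within one namespace could be created like this with the official Kubernetes Python client (namespace, user, and binding name are placeholders):

```python
# Bind the built-in "edit" ClusterRole to one user, in one namespace only.
# load_kube_config() reuses the credentials gcloud set up locally.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "user-a-edit", "namespace": "team-a"},
    "subjects": [{
        "kind": "User",
        "name": "user-a@example.com",
        "apiGroup": "rbac.authorization.k8s.io",
    }],
    "roleRef": {
        "kind": "ClusterRole",
        "name": "edit",
        "apiGroup": "rbac.authorization.k8s.io",
    },
}
rbac.create_namespaced_role_binding(namespace="team-a", body=binding)
```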
Regarding CloudSQL, it currently does not support instance-based permissions, but you can track updates to this feature request at this link.
Related
I have some experience with AWS and an AWS Developer Associate certification. I have been told that I am being moved to a project where I will be using GCP. How easy/hard would it be to learn GCP with AWS experience? Alternatively, how can I facilitate my entry into GCP with an AWS background?
The fundamentals are similar in AWS, GC and Azure, although the terminology is different. There are differences of course (for example, subnets in GC are regional, whereas in AWS they're per-AZ), but they're not too difficult to understand once you get into it.
There's a course by Google on Coursera, which is designed for people familiar with AWS - https://www.coursera.org/learn/gcp-fundamentals-aws
The GC learning resources should also help - https://cloud.google.com/training?hl=en
I think the main difference between AWS and GCP is how projects are managed. I'm referring to Identity and Access Management (IAM) and Resource Manager. In GCP you manage projects in a hierarchical way, using an approach called Resource Hierarchy.
In GCP you always have an Organization, a Project and resources. You might also have Folders. In GCP, basically everything is a resource (like in a REST API). All GCP resources belong to a project, and an individual GCP account can manage multiple projects.
You can manage each GCP project individually, or you can group related projects into folders and manage them from there, or even manage everything from the top-level GCP Organization.
By managing, I mean applying policies: what this resource can do, which accounts can use it.
GCP accounts are sometimes called IAM principals. An IAM principal can be a user account, a Google group (i.e. a bunch of user accounts), or a service account (i.e. an account assigned to a program).
The relationship between one resource (e.g. a GCP project) and N IAM principals (e.g. two user accounts and one service account) that share a set of privileges is called an IAM binding. An IAM policy is a set of IAM bindings.
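To make that concrete, a policy is essentially just a list of bindings, each tying one role to N principals (the emails below are made up):

```python
# The shape of an IAM policy: a set of bindings, each pairing one role
# with the principals that hold it on the resource the policy is attached to.
policy = {
    "bindings": [
        {
            "role": "roles/storage.objectViewer",
            "members": [
                "user:alice@example.com",
                "user:bob@example.com",
                "serviceAccount:app@my-project.iam.gserviceaccount.com",
            ],
        },
        {
            "role": "roles/editor",
            "members": ["group:devs@example.com"],
        },
    ]
}
```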
As for the services AWS, Azure and GCP offer, there is this nice comparison chart.
So to recap, focus on learning IAM and resource hierarchy first. You will need it whatever GCP service you will end up using.
I am looking into restricting the permissions of some of our more junior team members who are not using most of their granted permissions on GCP.
These users are exclusively creating and using VM instances, as well as using GCS. They currently have the Editor role. Looking at the existing predefined roles, it looks to me as though the Compute Instance Admin (v1) and Storage Admin roles would fit their use better.
However, looking at the permission diff, a number of permissions ending in setIamPolicy jump out at me as potentially dangerous. The diff also contains a number of createTagBinding and deleteTagBinding permissions that seem less alarming. What would be the consequence of granting these?
I'm surprised that I was not able to find a more granular Editor-level role for Compute Instance and Storage. These seem to me like very common roles other companies might want to use. As far as I can tell, the User, Viewer, Creator, and similar roles specific to Compute Instance and Storage all seem to lack some core permission we currently need, such as listing buckets, creating VMs, or logging onto VMs with sudo rights. Have I overlooked some existing roles? Is there a way to create an "intersection role", granting only permissions that both parent roles have?
Basic roles like Editor, Owner, and Viewer should be avoided whenever possible.
Predefined roles like the 'Compute Instance Admin' role you suggested are preferred.
In terms of the permissions ending with setIamPolicy: for the Compute Instance Admin role, they apply only to Compute resources like instances, snapshots, etc.
They are required so that somebody with the Admin role can grant permissions on the resources they create. They do not allow creating or granting permissions or roles outside of Compute resources.
Have a look at the following summary. It shows a similar situation: https://cloud.google.com/iam/docs/resource-hierarchy-access-control
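As for the "intersection role" idea: there is no built-in operation for it, but you could compute one yourself as a custom role. A hedged sketch using the iam v1 API via google-api-python-client (project ID, role ID, and title are placeholders; note that some permissions are not supported in custom roles, so the create call may reject a few):

```python
# Build a custom role whose permissions are the intersection of two
# existing roles' permission lists.
from googleapiclient import discovery

iam = discovery.build("iam", "v1")

def permissions(role_name):
    """Return the set of permissions included in a predefined role."""
    role = iam.roles().get(name=role_name).execute()
    return set(role.get("includedPermissions", []))

common = permissions("roles/compute.instanceAdmin.v1") & permissions("roles/editor")

iam.projects().roles().create(
    parent="projects/my-project",
    body={
        "roleId": "computeEditorIntersection",
        "role": {
            "title": "Compute Instance Admin x Editor",
            "includedPermissions": sorted(common),
            "stage": "GA",
        },
    },
).execute()
```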
I work as a contractor for a large enterprise company and I was assigned to a new project recently for which we need to request resources on AWS. For our project we will need access to EC2 and RDS.
I am not very familiar with AWS, so my question is: will it be possible to get access to AWS Web Console for our team with limited services (access only to EC2 and RDS in our case)? How much work is needed to provide such access (to set up IAM etc)?
I am a bit concerned that I will not get access to the AWS Web Console, because I was asked if I needed a sudo user for a VM. It was frustrating to hear such a question, because I will need several VMs rather than one.
By default, IAM Users have no access to services. In that situation, they can log in to the AWS Management Console, but there will be many error messages about not having access to information, nor the ability to perform actions.
Once an IAM User is granted the necessary permissions, the console will start working better for them. However, it can be difficult to determine exactly which permissions they require to fully use the console. For example, to use the EC2 console, the user would require ec2:DescribeInstances, which allows them to view details about all EC2 instances. This might not be desirable in your situation, since the account administrators might not want these users to see such a list.
Then comes the ability to perform actions on services, such as launching an EC2 instance. This requires the ec2:RunInstances permission, but also other related permissions to gain access to security groups, roles and networking configuration.
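As a rough sketch, a narrow managed policy along those lines could be created with boto3 (the policy name and exact action list are illustrative, not a complete console-friendly set):

```python
# Create a policy that lets users view the EC2/RDS consoles and launch
# instances; real deployments usually need more related permissions.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ConsoleRead",
            "Effect": "Allow",
            "Action": ["ec2:Describe*", "rds:Describe*"],
            "Resource": "*",  # Describe* calls are generally not resource-scopable
        },
        {
            "Sid": "Launch",
            "Effect": "Allow",
            "Action": ["ec2:RunInstances"],
            "Resource": "*",
        },
    ],
}

iam.create_policy(
    PolicyName="Ec2RdsConsoleAccess",
    PolicyDocument=json.dumps(policy_document),
)
```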
Bottom line: Yes, you will be able to access the AWS management console. However, your ability to view or do things will be limited by the permissions you are provided.
Can AWS IAM be used to control access for custom applications? I heavily rely on IAM for controlling access to AWS resources. I have a custom Python app that I would like to extend to work with IAM, but I can't find any references to this being done by anyone.
I've considered the same thing, and I think it's theoretically possible. The main issue is that there's no call available in IAM that determines if a particular call is allowed (SimulateCustomPolicy may work, but that doesn't seem to be its purpose so I'm not sure it would have the throughput to handle high volumes).
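For what it's worth, SimulateCustomPolicy is callable from boto3 like this (the bucket ARN is a placeholder); it evaluates a candidate policy document against a list of actions and resources:

```python
# Ask IAM whether a given policy document would allow specific actions.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::bucket-b/*",
    }],
}

response = iam.simulate_custom_policy(
    PolicyInputList=[json.dumps(policy)],
    ActionNames=["s3:GetObject", "s3:PutObject"],
    ResourceArns=["arn:aws:s3:::bucket-b/report.csv"],
)
for result in response["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalDecision"])  # allowed / implicitDeny
```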
As a result, you'd have to write your own IAM policy evaluator for those custom calls. I don't think that's inherently a bad thing, since it's also something you'd have to build for any other policy-based system. And the IAM policy format seems reasonable enough to be used.
I guess the short answer is, yes, it's possible, with some work. And if you do it, please open source the code so the rest of us can use it.
The only way you can manage users, create roles and groups is if you have admin access. Power users can do everything but that.
You can create a group with all the privileges you want to grant, then create a user and add it to that group so the group's policies apply to it. Create the user strictly with programmatic access only, so the app can connect with an access key ID and secret access key via the AWS CLI.
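A sketch of those steps with boto3 (group name, user name, and the example managed policy are placeholders):

```python
# Create a group, attach a policy to it, and add a programmatic-only user.
import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="app-clients")
iam.attach_group_policy(
    GroupName="app-clients",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",  # example policy
)

iam.create_user(UserName="app-user")
iam.add_user_to_group(GroupName="app-clients", UserName="app-user")

# Programmatic access only: issue an access key, never a console password.
key = iam.create_access_key(UserName="app-user")["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])  # feed these to the AWS CLI/SDK
```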
Normally, IAM is used to create and manage AWS users and groups, and to set permissions that allow or deny their access to AWS resources.
If your Python app consumes or interfaces with any AWS resource such as S3, then you might want to look into this:
connect-on-premise-python-application-with-aws
The Python application can be uploaded to an S3 bucket. The application runs on a server inside a company's on-premises data center. The focus of that tutorial is the connection made to AWS.
Consider placing API Gateway in front of your Python app's routes.
Then you could control access using IAM.
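With IAM authorization enabled on the API Gateway methods, callers sign their requests with SigV4. A hedged sketch using botocore (the URL and region are made up):

```python
# Call an IAM-protected API Gateway endpoint by SigV4-signing the request.
import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

credentials = boto3.Session().get_credentials()

url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/widgets"
request = AWSRequest(method="GET", url=url)
SigV4Auth(credentials, "execute-api", "us-east-1").add_auth(request)

response = requests.get(url, headers=dict(request.headers))
print(response.status_code, response.text)
```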
I am in the early stages of writing an AWS app for our users that will run our research algorithms using their AWS resources. For example, our code will need to spin up EC2 instances running our 'worker' app, access RDS databases, and create and access SQS queues. The AWS Java SDK examples (we are writing this in Java) use an AwsCredentials.properties file to store the Access Key ID and Secret Access Key, which is fine for examples, but obviously not acceptable for our users, who would in essence be giving us access to all their resources. What is a clean way to go about running our system on their behalf? I discovered AWS Identity and Access Management (IAM), which seems to be for this purpose (I haven't got my head around it yet), esp. cross-account access between AWS accounts. This post makes it sound straightforward:
Use the amazon IAM service to create a set of keys that only has
permission to perform the tasks that you require for your script.
http://aws.amazon.com/iam/
However, other posts (e.g., Within IAM, can I restrict a group of users to access/launch/terminate only certain EC2 AMIs or instances?) suggest there are limitations to using IAM with EC2 in particular.
Any advice would be really helpful!
The key limitation with regards to RDS and EC2 is that, while you can restrict access to certain API actions, there are no resource-level constraints. For example, with an IAM S3 policy you can restrict a user to only being able to perform certain actions on certain buckets. You can write a policy for EC2 that says a user is allowed to stop instances, but not one that says the user may stop only certain instances.
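To illustrate the contrast described in this (older) answer, compare the two policy statements below; the bucket name is made up:

```python
# S3 statements can be scoped to a specific resource ARN...
s3_statement = {
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": "arn:aws:s3:::bucket-b/*",  # only this bucket
}

# ...while the EC2 statement here has to apply to all instances.
ec2_statement = {
    "Effect": "Allow",
    "Action": ["ec2:StopInstances"],
    "Resource": "*",  # no way (at the time) to name specific instances
}
```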
Another option is for them to provide you with temporary credentials via the Security Token Service. A variant on that is to use the new IAM roles service: with this, an instance has a set of policies associated with it, and you don't need to provide an AwsCredentials.properties file because the SDK can fetch credentials from the metadata service.
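The temporary-credentials flow is straightforward from code: the customer creates a role in their account that trusts yours, and you assume it via STS. A Python sketch (the same flow exists in the Java SDK; the account ID and role name are hypothetical):

```python
# Assume a cross-account role and use its short-lived credentials
# instead of a static AwsCredentials.properties file.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ResearchWorker",
    RoleSessionName="research-run",
)["Credentials"]

ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(len(ec2.describe_instances()["Reservations"]), "reservations visible")
```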
Finally, one last option might be consolidated billing. If the reason you are using their AWS resources is just the billing, then set up a new account which is billed from their account. The accounts are isolated from each other, so you can't, for example, delete their instances by accident. Equally, you can't access their RDS snapshots and the like (access to an RDS instance via mysql, as opposed to the AWS API, would depend on the instance's security group). You can of course combine this with the previous options - they could provide you with credentials that only allow you to perform certain actions within that isolated account.