I work with a team of developers that has a shared database hosted in AWS. This team is "virtual" (made up of remote workers--there is no office).
There is an AWS security group that has rules that allow each of the developers to access the database (by IP address). The senior developers have logins and admin permissions to AWS allowing them to change the security group rules--for example when someone's IP address changes.
The problem is that some of the junior developers have "jumpy" IP addresses which change frequently. Each time an IP address changes, a senior developer needs to stop work, log in to AWS, and correct the security group rule for the junior developer. This is not sustainable.
Is there a way we can set up AWS so the junior developers can have logins to AWS, but their permissions only allow them to access a single, particular security group? That way the juniors can log in to AWS and self-serve on the IP address update, and management doesn't need to worry that they have access to other, restricted areas in AWS?
To directly answer your question: there are multiple ways to achieve what you want, and IAM and SCPs are the things to take a look at.
With IAM you can either use IAM permissions boundaries to limit the privileges that a certain user has, or rely on the ABAC approach, where you assign a certain tag to the resource you want to grant access to. In your case you can have a "junior" tag set on the SG in question and a corresponding IAM policy that grants permissions based on it.
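For illustration, here is a minimal sketch of such a tag-based policy. The tag key/value (team=junior) and the policy name are placeholders, not anything from your setup:

    # Juniors may list security groups, but may only edit ingress rules
    # on groups carrying the (hypothetical) tag team=junior.
    aws iam create-policy --policy-name JuniorSgSelfService --policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "LetJuniorsFindTheGroup",
          "Effect": "Allow",
          "Action": "ec2:DescribeSecurityGroups",
          "Resource": "*"
        },
        {
          "Sid": "EditOnlyTheTaggedGroup",
          "Effect": "Allow",
          "Action": [
            "ec2:AuthorizeSecurityGroupIngress",
            "ec2:RevokeSecurityGroupIngress"
          ],
          "Resource": "arn:aws:ec2:*:*:security-group/*",
          "Condition": {
            "StringEquals": { "aws:ResourceTag/team": "junior" }
          }
        }
      ]
    }'

Attach the resulting policy to an IAM group the juniors belong to; they can then update ingress rules on that one group and nothing else.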
Another option is to use a Service Control Policy (in case you have AWS Organizations enabled). With SCPs you can limit certain actions at the account level (e.g. deny actions on EC2 unless certain criteria are met).
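A sketch of that deny-unless pattern, assuming developers sign in with principals tagged team=junior or team=senior (again, the tags and names are placeholders):

    # Deny SG ingress changes account-wide unless the caller carries
    # one of the expected (hypothetical) principal tags.
    aws organizations create-policy --name SgChangeGuardrail \
        --type SERVICE_CONTROL_POLICY \
        --description "Only tagged developers may edit security group rules" \
        --content '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Deny",
          "Action": [
            "ec2:AuthorizeSecurityGroupIngress",
            "ec2:RevokeSecurityGroupIngress"
          ],
          "Resource": "*",
          "Condition": {
            "StringNotEquals": {
              "aws:PrincipalTag/team": ["junior", "senior"]
            }
          }
        }
      ]
    }'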
All of the above operate at the identity and access level.
Networking-wise, you can alter your design a bit by setting up an AWS Client VPN in front of the RDS instance, so developers connect from a stable, known address space instead of their home IPs.
So, we have a "Compute Engine default service account", and everything is clear with it:
it's a legacy account with excessive permissions
it used to be limited by the "scopes" assigned to each GCE instance or instance group
it's recommended to delete this account and use a custom service account for each service, following the least-privilege principle.
The second "default service account" mentioned in the docs is the "App Engine default service account". Presumably it's assigned to the App Engine instances and it's also a legacy thing that needs to be treated similarly to the Compute Engine default service account. Right?
And what about "Google APIs Service Agent"? It has the "Editor" role. As far as I understand, this account is used internally by GCP and is not accessed by any custom resources I create as a user. Does it mean that there is no reason to reduce its permissions for the sake of complying with the best security practices?
You don't have to delete your default service account; however, at some point it's best to create accounts that have the minimum permissions required for the job, and to refine those permissions to suit your needs instead of using the default ones.
You have full control over this account, so you can change its permissions at any moment or even delete it:
Google creates the Compute Engine default service account and adds it to your project automatically but you have full control over the account.
The Compute Engine default service account is created with the IAM basic Editor role, but you can modify your service account's roles to control the service account's access to Google APIs.
You can disable or delete this service account from your project, but doing so might cause any applications that depend on the service account's credentials to fail
If something stops working, you can recover the account for up to 90 days after deletion.
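As a minimal sketch of the "change its permissions" route (the project ID and the replacement role are placeholders; grant whatever your workloads actually need):

    PROJECT_ID=my-project   # placeholder
    PROJECT_NUMBER=$(gcloud projects describe "$PROJECT_ID" \
        --format='value(projectNumber)')
    SA="${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"

    # Drop the broad Editor grant from the Compute Engine default SA...
    gcloud projects remove-iam-policy-binding "$PROJECT_ID" \
        --member="serviceAccount:${SA}" --role="roles/editor"

    # ...and grant a narrower role in its place (example role).
    gcloud projects add-iam-policy-binding "$PROJECT_ID" \
        --member="serviceAccount:${SA}" --role="roles/logging.logWriter"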
It's also advisable not to use service accounts during development at all, since this may pose a security risk later on.
Then there is the Google APIs Service Agent, which the docs describe as follows:
This service account is designed specifically to run internal Google processes on your behalf. The account is owned by Google and is not listed in the Service Accounts section of Cloud Console
Additionally:
Certain resources rely on this service account and the default editor permissions granted to the service account. For example, managed instance groups and autoscaling uses the credentials of this account to create, delete, and manage instances. If you revoke permissions to the service account, or modify the permissions in such a way that it does not grant permissions to create instances, this will cause managed instance groups and autoscaling to stop working.
For these reasons, you should not modify this service account's roles unless a role recommendation explicitly suggests that you modify them.
Having said that, we can conclude that removing either the default service accounts or the Google APIs Service Agent is risky and requires a lot of preparation (especially the latter).
Have a look at the best practices documentation describing what is and isn't recommended when managing service accounts.
Also have a look at securing them against exploitation and at changing the service account and access scopes for an instance.
When you talk about security, you especially talk about risk. So, what are the risks with the default service accounts?
If you use them on GCE or Cloud Run (the Compute Engine default service account), you have excessive permissions. If your environment is secured, the risk is low (especially on Cloud Run). On GCE the risk is higher, because you have to keep the VM up to date and control the firewall rules that allow access to it.
Note: by default, Google Cloud creates a VPC with firewall rules open to 0.0.0.0/0 on port 22 (SSH), RDP and ICMP. That's also a security issue to fix by default.
The App Engine default service account is used by App Engine and Cloud Functions by default. As with Cloud Run, the risk can be considered low.
Another important aspect is the capacity to generate service account key files for those default service accounts. Service account key files are simple JSON files with a private key inside. This time the risk is very high, because few developers take REAL care of the security of that file.
Note: in a previous company, the only security issues we had came from those files, especially with service accounts holding the Editor role.
Most of the time, a developer doesn't need a service account key file to develop (I wrote a bunch of articles on that on Medium).
There are two ways to mitigate those risks:
Perform IaC (infrastructure as code, with a product like Terraform) to create and deploy your projects and to enforce all the security best practices that you have defined in your company (VPC without default firewall rules, no Editor role on service accounts, ...).
Use organisation policies, especially "Disable service account key creation" to prevent service account key creation, and "Disable Automatic IAM Grants for Default Service Accounts" to prevent the Editor role being granted to the default service accounts, as sketched below.
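A minimal sketch of enforcing those two constraints (the organisation ID is a placeholder; the constraint IDs are the ones behind the two policy names above):

    ORG_ID=123456789   # placeholder organisation ID

    # Block service account key creation organisation-wide...
    gcloud resource-manager org-policies enable-enforce \
        constraints/iam.disableServiceAccountKeyCreation \
        --organization="$ORG_ID"

    # ...and stop the automatic Editor grant on default service accounts.
    gcloud resource-manager org-policies enable-enforce \
        constraints/iam.automaticIamGrantsForDefaultServiceAccounts \
        --organization="$ORG_ID"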
Deletion isn't the solution; a good understanding of the risks, a good security culture in the team, and some organisation policies are the key.
Recently a GCP IAM key was exposed, and a miner was installed.
In AWS, 2FA can be required when authenticating with an access key, or access can be restricted to specific IPs.
With such a setting in place, the incident would not have happened right away even though the key was exposed.
I searched for ACL and 2FA settings in GCP, but found no per-key settings; only instance-level access settings seem to exist.
Is it possible to set up 2FA and an IP ACL for GCP web console access and for the use of IAM keys?
In addition, we need an IP-based ACL for BigQuery, but according to the other teams we contacted there is no ACL for BigQuery access; it is controlled only by IAM.
If an IAM key is exposed through user error, is there any way GCP can prevent the damage?
You can enforce 2FA and IP restrictions on IAM (with IAM Conditions and context-aware access).
Google also helps you as much as it can:
Support contacts you in case of abnormal activity, such as a miner installed on your VM causing suspicious network activity.
Public repositories, such as GitHub, are periodically scanned by Google, and you are notified if a service account key file is found.
The platform offers solutions to mitigate the risk:
Context-aware access (see the sketch after this list)
IAM Conditions
An organisation policy to disable the capacity to generate service account key files. Only a small group of users is able to generate them, after validation of the request; the goal is to limit the number of keys and to generate them only when the use case requires them.
SCC (Security Command Center) findings can flag basic roles on service accounts: too many permissions, use predefined roles instead.
IAM Recommender, which proposes reducing the permission scope based on the last 90 days of activity.
So, a set of tools to be both proactive and reactive to events.
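For the IP part, a minimal sketch of an Access Context Manager access level keyed to a corporate IP range (the policy ID, level name and CIDR are placeholders); the level can then be referenced from IAM Conditions or a VPC Service Controls perimeter:

    # Hypothetical corporate IP range, written as a basic level spec.
    printf -- '- ipSubnetworks:\n    - 203.0.113.0/24\n' > corp-ips.yaml
    gcloud access-context-manager levels create corp_network \
        --title="Corp network only" \
        --basic-level-spec=corp-ips.yaml \
        --policy=POLICY_ID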
You can set up 2-Step Verification (2SV), which is triggered when trying to access GCP VM instances.
Follow this guide to set up 2SV for your instances.
Also, VPC Service Controls can add an additional security layer for managed services like BigQuery.
You can block specific IP addresses via this service.
This article will help you a lot in using VPC Service Controls.
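As a rough sketch, a service perimeter that puts BigQuery behind VPC Service Controls might look like this (the project number and access policy ID are placeholders):

    # Restrict BigQuery for the given project to callers inside the perimeter.
    gcloud access-context-manager perimeters create bq_perimeter \
        --title="BigQuery perimeter" \
        --resources=projects/123456789012 \
        --restricted-services=bigquery.googleapis.com \
        --policy=POLICY_ID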
I work as a contractor for a large enterprise company and I was assigned to a new project recently for which we need to request resources on AWS. For our project we will need access to EC2 and RDS.
I am not very familiar with AWS, so my question is: will it be possible to get access to AWS Web Console for our team with limited services (access only to EC2 and RDS in our case)? How much work is needed to provide such access (to set up IAM etc)?
I am a bit concerned that I will not get access to the AWS web console, because I was asked whether I needed a sudo user for a VM. It was frustrating to hear such a question, because I will need several VMs rather than one.
By default, IAM users have no access to services. In that situation they can access the AWS management console, but there will be many error messages about not having access to information, or the ability to perform actions.
Once an IAM user is granted the necessary permissions, the console will start working better for them. However, it can be difficult to determine exactly which permissions they require to fully use the console. For example, to use the EC2 console, the user would require ec2:DescribeInstances, which allows them to view details about all EC2 instances. This might not be desirable in your situation, since you might not want these users to see such a list.
Then comes the ability to perform actions on services, such as launching an EC2 instance. This requires the ec2:RunInstances permission, but also other related permissions to gain access to security groups, roles and networking configuration.
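To make that concrete, here is a minimal sketch of a read-mostly policy for browsing the EC2 and RDS consoles; the group and policy names are placeholders, and the exact action list you need may differ:

    # Read-only browsing of the EC2 and RDS consoles; attach to the
    # team's IAM group (group/policy names are hypothetical).
    aws iam put-group-policy --group-name ProjectTeam \
        --policy-name Ec2RdsConsoleRead \
        --policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "ec2:Describe*",
            "rds:Describe*",
            "rds:ListTagsForResource"
          ],
          "Resource": "*"
        }
      ]
    }'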
Bottom line: Yes, you will be able to access the AWS management console. However, your ability to view or do things will be limited by the permissions you are provided.
I'm trying to set up a site-to-site connection between our on-premises server and our cloud infrastructure. On our premises we have a SonicWall firewall installed, and since SonicOS 6.5.1.0 it's easy to enter an AWS access key and secret key and let the software configure everything via the SDK.
The problem is that the tutorial on how to configure the firewall (p. 8) says:
The security policy used, either for a group to which the user belongs or attached to the user directly, must include the following permissions:
• AmazonEC2FullAccess – For AWS Objects and AWS VPN
• CloudWatchLogsFullAccess – For AWS Logs
Since it's not ideal to give anything full access to Amazon EC2, do you know which permissions SonicWall actually needs, so I can disable everything else and follow the principle of least privilege?
Without looking into the code for SonicWall itself, it is not going to be easy to know exactly which API calls it's going to make to EC2. If you are prepared to at least temporarily grant full EC2 access, you could use AWS CloudTrail to monitor exactly which API calls are being made by the IAM user associated with your on-premises server, and then update your specific policy to match those calls.
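For example, a rough sketch of that CloudTrail approach (the IAM user name is a placeholder for whatever you assigned to the firewall):

    # List the most recent API calls made by the firewall's IAM user,
    # then build a least-privilege policy from the observed actions.
    aws cloudtrail lookup-events \
        --lookup-attributes AttributeKey=Username,AttributeValue=sonicwall-s2s \
        --max-results 50 \
        --query 'Events[].{Time:EventTime,Action:EventName}'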
Alternatively, start with the full access IAM policy template and go through and deny any calls you think are completely unrelated to SonicWall's functionality.
If you trust SonicWall, then probably the easiest thing to do is to allow the full EC2 access it claims is required (or start there and gradually remove permissions until something breaks!).
Background: on Azure (we're in the process of moving from Azure to AWS), we have everything organized into resource groups. By default no one can do much in the prod subscription (account), but based on the team asking for a provisioned resource, a team member gets stamped as the "owner" of the resource group, which gives him/her full access to that resource group, including the ability to add and remove other members as they see fit. This allows us to set up a very fine-grained set of access controls where each team ultimately decides what's allowed and what isn't--not based on groups, but based on users getting access to resource groups (in which instances and other resources exist).
Now that we're moving to AWS, I had hoped to use the SAML integration to provide access (we're running Auth0 in front of AzureAD, but this should be the same for any SAML/federated AWS setup, I think).
My problem is that with SAML, AWS doesn't really "see" each individual user--they're not auto-created in IAM at first logon or anything--so the only "security boundary" I have to work with is the set of groups I send into AWS, which I can map to IAM roles.
This is a problem because 1) the user has to select the desired role at login (if a member of more than one), and 2) each role setup is a manual task which requires me to configure AzureAD, SAML claims in Auth0, and finally IAM roles in AWS. The latter is obviously something I can automate, but still.
Here's the core of my problem:
Say that I have two EC2 instances in AWS: DB and Web. I have three users: AdminPete, DBDave and WebWilson. I'd like to give Pete full access to both instances, while Dave and Wilson each get access to "their own" EC2 instance. As far as I can see, I would have to configure two IAM roles (DB and Web) and require Pete (who has access to both) to choose his role at login. This is a super-simple example, but it doesn't scale well at all.
I'm curious to hear how you are all doing access control in AWS--my goal is to be able to create a very fine-tuned setup using tags or some other mechanism. The official AWS documentation only deals with getting SAML configured (which is easy enough), but says very little about real-life permissions management.
The core of the problem (IMHO) is that, unlike "regular" IAM users, I can't attach an IAM policy or a group to a single user when that user is federated--I can only attach the policy to the federated role as a whole.
Any pointers appreciated! At this point I'm considering not using SAML at all for our AWS stuff, so that we can use fine-grained IAM policies to manage permissions in a more flexible manner.