Recently, one of our GCP IAM keys was exposed and a cryptocurrency miner was installed.
In AWS, you can require MFA when authenticating with an access key, or allow access only from specific IP addresses.
With such settings in place, the incident would not have happened immediately even if the key had been exposed.
I searched for ACL and 2FA settings in GCP, but I could not find any key-level settings; I only found instance access settings.
Is it possible to configure 2FA and an IP ACL for GCP Web Console access and for access with IAM keys?
In addition, we need an IP-based ACL for BigQuery access by other teams, but BigQuery does not seem to offer one; access is controlled only through IAM.
If an IAM key is exposed through user error, does GCP provide any way to prevent misuse?
You can enforce 2FA and IP controls on IAM-based access (with IAM Conditions and Context-Aware Access).
Google helps you as much as it can:
Support contacts you in case of abnormal activity, for example a miner installed on your VM that generates suspicious network traffic.
Public repositories such as GitHub are periodically scanned by Google, and you are notified if a service account key file is found there.
The platform also offers ways to mitigate the risk:
Context-Aware Access
IAM Conditions
An organisation policy to disable the ability to generate service account key files. Only a small group of users can generate them, after the request has been validated. The goal is to limit the number of keys and to generate them only when the use case really requires them.
SCC (Security Command Center) findings can flag basic roles on service accounts: too many permissions, use predefined roles instead.
IAM Recommender, which proposes reducing the permission scope based on the last 90 days of activity.
So there is a set of tools for being both proactive and reactive to such events; a sketch of a conditional role binding is shown below.
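For illustration, here is a minimal sketch of adding such a conditional role binding with the google-cloud-resourcemanager Java client. The project, group and access level (an Access Context Manager level that only matches the corporate IP range) are hypothetical placeholders, and whether the request.auth.access_levels attribute is evaluated depends on the service:

```java
import com.google.cloud.resourcemanager.v3.ProjectsClient;
import com.google.iam.v1.Binding;
import com.google.iam.v1.GetIamPolicyRequest;
import com.google.iam.v1.GetPolicyOptions;
import com.google.iam.v1.Policy;
import com.google.iam.v1.SetIamPolicyRequest;
import com.google.type.Expr;

public class ConditionalBindingSketch {
  public static void main(String[] args) throws Exception {
    String project = "projects/my-project";  // hypothetical project
    // Hypothetical access level defined in Access Context Manager that only
    // matches requests coming from the corporate IP range.
    String accessLevel = "accessPolicies/123456789012/accessLevels/corp_ips";

    try (ProjectsClient client = ProjectsClient.create()) {
      Policy policy = client.getIamPolicy(GetIamPolicyRequest.newBuilder()
          .setResource(project)
          .setOptions(GetPolicyOptions.newBuilder().setRequestedPolicyVersion(3))
          .build());

      // Grant BigQuery read access only when the caller satisfies the access level.
      Binding conditional = Binding.newBuilder()
          .setRole("roles/bigquery.dataViewer")
          .addMembers("group:analysts@example.com")  // hypothetical group
          .setCondition(Expr.newBuilder()
              .setTitle("corp-ips-only")
              .setExpression("\"" + accessLevel + "\" in request.auth.access_levels"))
          .build();

      Policy updated = policy.toBuilder()
          .addBindings(conditional)
          .setVersion(3)  // version 3 is required for conditional bindings
          .build();

      client.setIamPolicy(SetIamPolicyRequest.newBuilder()
          .setResource(project)
          .setPolicy(updated)
          .build());
    }
  }
}
```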
You can set up 2-Step Verification (2SV), which is triggered when someone tries to access GCP VM instances.
Follow this guide to set up 2SV for your instances.
Also, VPC Service Controls can add an additional security layer for managed services like BigQuery.
You can block specific IP addresses via this service.
This article will help you a lot in using VPC Service Controls.
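As a rough sketch of the instance-level part, assuming the google-cloud-compute Java client, you could set the enable-oslogin and enable-oslogin-2fa metadata keys on an existing VM (project, zone and instance names are placeholders; users also need 2-Step Verification enrolled on their Google accounts, and the keys are assumed not to be present already):

```java
import com.google.cloud.compute.v1.Instance;
import com.google.cloud.compute.v1.InstancesClient;
import com.google.cloud.compute.v1.Items;
import com.google.cloud.compute.v1.Metadata;

public class EnableOsLogin2svSketch {
  public static void main(String[] args) throws Exception {
    String project = "my-project";     // placeholder
    String zone = "us-central1-a";     // placeholder
    String instanceName = "my-vm";     // placeholder

    try (InstancesClient instances = InstancesClient.create()) {
      Instance instance = instances.get(project, zone, instanceName);

      // Reuse the current metadata (keeps the fingerprint and existing items),
      // then add the OS Login flags.
      Metadata metadata = instance.getMetadata().toBuilder()
          .addItems(Items.newBuilder().setKey("enable-oslogin").setValue("TRUE"))
          .addItems(Items.newBuilder().setKey("enable-oslogin-2fa").setValue("TRUE"))
          .build();

      // Wait for the metadata update to complete.
      instances.setMetadataAsync(project, zone, instanceName, metadata).get();
    }
  }
}
```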
So, we have a "Compute Engine default service account", and everything is clear with it:
it's a legacy account with excessive permissions
it used to be limited by the "scopes" assigned to each GCE instance or instance group
it's recommended to delete this account and use a custom service account for each service, following the least-privilege principle.
The second "default service account" mentioned in the docs is the "App Engine default service account". Presumably it's assigned to the App Engine instances and it's also a legacy thing that needs to be treated similarly to the Compute Engine default service account. Right?
And what about "Google APIs Service Agent"? It has the "Editor" role. As far as I understand, this account is used internally by GCP and is not accessed by any custom resources I create as a user. Does it mean that there is no reason to reduce its permissions for the sake of complying with the best security practices?
You don't have to delete your default service account; however, at some point it's best to create accounts that have the minimum permissions required for the job and refine those permissions to suit your needs instead of using the default ones.
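As an illustration, a minimal sketch of creating such a dedicated service account with the IAM Admin client for Java (project ID, account ID and display name are placeholders); roles would then be granted to it separately, following least privilege:

```java
import com.google.cloud.iam.admin.v1.IAMClient;
import com.google.iam.admin.v1.CreateServiceAccountRequest;
import com.google.iam.admin.v1.ServiceAccount;

public class CreateLeastPrivilegeSaSketch {
  public static void main(String[] args) throws Exception {
    String projectId = "my-project";  // placeholder
    String accountId = "my-app-sa";   // placeholder: becomes my-app-sa@my-project.iam.gserviceaccount.com

    try (IAMClient iam = IAMClient.create()) {
      // The new account starts with no roles at all; grant only what the workload needs.
      ServiceAccount created = iam.createServiceAccount(
          CreateServiceAccountRequest.newBuilder()
              .setName("projects/" + projectId)
              .setAccountId(accountId)
              .setServiceAccount(ServiceAccount.newBuilder()
                  .setDisplayName("Least-privilege account for my-app"))
              .build());
      System.out.println("Created " + created.getEmail());
    }
  }
}
```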
You have full control over this account, so you can change its permissions at any moment or even delete it:
Google creates the Compute Engine default service account and adds it to your project automatically but you have full control over the account.
The Compute Engine default service account is created with the IAM basic Editor role, but you can modify your service account's roles to control the service account's access to Google APIs.
You can disable or delete this service account from your project, but doing so might cause any applications that depend on the service account's credentials to fail.
If something stops working, you can recover the account for up to 90 days.
It's also advisable not to use service accounts during development at all, since this may pose a security risk in the future.
As for the Google APIs Service Agent, the documentation says:
This service account is designed specifically to run internal Google processes on your behalf. The account is owned by Google and is not listed in the Service Accounts section of Cloud Console.
Additionally:
Certain resources rely on this service account and the default editor permissions granted to the service account. For example, managed instance groups and autoscaling uses the credentials of this account to create, delete, and manage instances. If you revoke permissions to the service account, or modify the permissions in such a way that it does not grant permissions to create instances, this will cause managed instance groups and autoscaling to stop working.
For these reasons, you should not modify this service account's roles unless a role recommendation explicitly suggests that you modify them.
Having said that, we can conclude that removing either the default service account or the Google APIs Service Agent is risky and requires a lot of preparation (especially the latter).
Have a look at the best practices documentation describing what is and isn't recommended when managing service accounts.
You can also have a look at securing them against exploitation and at changing the service account and access scopes for an instance.
When you talk about security, you are really talking about risk. So, what are the risks with the default service accounts?
If you use them on GCE or Cloud Run (the Compute Engine default service account), you have excessive permissions. If your environment is secured, the risk is low (especially on Cloud Run). On GCE the risk is higher, because you have to keep the VM up to date and control the firewall rules that allow access to it.
Note: by default, Google Cloud creates a VPC with firewall rules open to 0.0.0.0/0 for SSH (port 22), RDP and ICMP. That is another security issue to fix from the start.
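A minimal sketch of cleaning up those default rules with the google-cloud-compute Java client (the project ID is a placeholder, and the rule names assume the default VPC):

```java
import com.google.cloud.compute.v1.FirewallsClient;

public class RemoveDefaultRulesSketch {
  public static void main(String[] args) throws Exception {
    String project = "my-project";  // placeholder

    // Names of the permissive rules created along with the default VPC.
    String[] defaultRules = {"default-allow-ssh", "default-allow-rdp", "default-allow-icmp"};

    try (FirewallsClient firewalls = FirewallsClient.create()) {
      for (String rule : defaultRules) {
        // Wait for each deletion to finish before moving on.
        firewalls.deleteAsync(project, rule).get();
        System.out.println("Deleted " + rule);
      }
    }
  }
}
```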
The App Engine default service account is used by App Engine and Cloud Functions by default. As with Cloud Run, the risk can be considered low.
Another important aspect is the ability to generate service account key files for those default service accounts. A service account key file is a plain JSON file with a private key in it. Here the risk is very high, because few developers take real care of the security of that file.
Note: in a previous company, the only security issues we had came from those files, especially with service accounts that had the Editor role.
Most of the time, a user doesn't need a service account key file for development (I wrote a bunch of articles about that on Medium).
There are two ways to mitigate those risks:
Perform IaC (infrastructure as code, with a product like Terraform) to create and deploy your projects and to enforce all the security best practices you have defined in your company (VPCs without default firewall rules, no Editor role on service accounts, ...).
Use organisation policies, especially "Disable service account key creation" to prevent service account key creation, and "Disable Automatic IAM Grants for Default Service Accounts" to keep the Editor role off the default service accounts (see the sketch after this list).
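A minimal sketch of enforcing the first of those constraints on a single project, assuming the google-cloud-org-policy Java client (the project is a placeholder; in practice you would usually set this at the organisation or folder level):

```java
import com.google.cloud.orgpolicy.v2.OrgPolicyClient;
import com.google.cloud.orgpolicy.v2.Policy;
import com.google.cloud.orgpolicy.v2.PolicySpec;

public class DisableSaKeyCreationSketch {
  public static void main(String[] args) throws Exception {
    String parent = "projects/my-project";  // placeholder; could also be a folder or organization

    // Boolean constraint: simply enforce it.
    Policy policy = Policy.newBuilder()
        .setName(parent + "/policies/iam.disableServiceAccountKeyCreation")
        .setSpec(PolicySpec.newBuilder()
            .addRules(PolicySpec.PolicyRule.newBuilder().setEnforce(true)))
        .build();

    try (OrgPolicyClient client = OrgPolicyClient.create()) {
      client.createPolicy(parent, policy);  // use updatePolicy if a policy already exists
    }
  }
}
```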
Deletion isn't the solution; a good understanding of the risks, a good security culture in the team, and some organisation policies are the key.
I work with a team of developers that has a shared database hosted in AWS. The team is "virtual" (composed of remote workers -- there is no office).
There is an AWS security group that has rules that allow each of the developers to access the database (by IP address). The senior developers have logins and admin permissions to AWS allowing them to change the security group rules--for example when someone's IP address changes.
The problem is that some of the junior developers have "jumpy" IP addresses which change frequently. Each time the IP address changes, a senior developer needs to stop work, login to AWS, and correct the security group rule for the junior developer. This is not sustainable.
Is there a way we can set up AWS so the junior developers can have logins to AWS, but their permissions only allow them to access a single, particular security group? That way the juniors can login to AWS and self-serve on the IP address update, and management doesn't need to worry that they have access to other, restricted areas in AWS?
To directly answer your question, there are multiple ways to achieve what you want; IAM and SCPs are the things to look at.
With IAM you can either use permission boundaries to limit the privileges that a certain user has, or rely on the ABAC approach, where you assign a tag to the resource to which you want to grant access. In your case you can have a "junior" tag set on the SG in question and a corresponding IAM policy that grants permissions based on it.
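As a rough sketch of the ABAC route with the AWS SDK for Java v2, the managed policy below allows listing security groups but only allows rule changes on groups tagged team=junior (the tag key/value, policy name and actions are illustrative choices, not the only possible ones):

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.iam.IamClient;
import software.amazon.awssdk.services.iam.model.CreatePolicyRequest;

public class JuniorSgPolicySketch {
  public static void main(String[] args) {
    // Listing SGs has no resource-level restrictions; rule changes are gated on the tag.
    String policyDocument = """
        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": "ec2:DescribeSecurityGroups",
              "Resource": "*"
            },
            {
              "Effect": "Allow",
              "Action": [
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:RevokeSecurityGroupIngress"
              ],
              "Resource": "arn:aws:ec2:*:*:security-group/*",
              "Condition": {
                "StringEquals": { "aws:ResourceTag/team": "junior" }
              }
            }
          ]
        }
        """;

    try (IamClient iam = IamClient.builder().region(Region.AWS_GLOBAL).build()) {
      iam.createPolicy(CreatePolicyRequest.builder()
          .policyName("junior-dev-sg-self-service")  // placeholder name
          .policyDocument(policyDocument)
          .build());
    }
  }
}
```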
Another option is to use a Service Control Policy (in case you have AWS Organizations enabled). With SCPs you can limit certain actions at the account level (e.g. deny EC2 actions unless certain criteria are met).
All of the above are on identity access level.
Networking-wise you can alter your design a bit by setting up an AWS Client VPN in front of the RDS.
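If you stay with the security-group approach, the self-service step for a junior developer could be as small as this sketch with the AWS SDK for Java v2 (the group ID, port and IP addresses are placeholders):

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.AuthorizeSecurityGroupIngressRequest;
import software.amazon.awssdk.services.ec2.model.IpPermission;
import software.amazon.awssdk.services.ec2.model.IpRange;
import software.amazon.awssdk.services.ec2.model.RevokeSecurityGroupIngressRequest;

public class UpdateMyIpSketch {
  public static void main(String[] args) {
    String groupId = "sg-0123456789abcdef0";  // placeholder security group ID
    String oldIp = "198.51.100.10/32";        // placeholder: previous address
    String newIp = "203.0.113.25/32";         // placeholder: current address

    try (Ec2Client ec2 = Ec2Client.create()) {
      // Remove the stale rule, then add the new one.
      ec2.revokeSecurityGroupIngress(RevokeSecurityGroupIngressRequest.builder()
          .groupId(groupId).ipPermissions(permission(oldIp)).build());
      ec2.authorizeSecurityGroupIngress(AuthorizeSecurityGroupIngressRequest.builder()
          .groupId(groupId).ipPermissions(permission(newIp)).build());
    }
  }

  // MySQL port on the shared RDS instance; adjust for your engine.
  private static IpPermission permission(String cidr) {
    return IpPermission.builder()
        .ipProtocol("tcp").fromPort(3306).toPort(3306)
        .ipRanges(IpRange.builder().cidrIp(cidr).build())
        .build();
  }
}
```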
Are there any code samples for AWS security compliance checks?
Suppose I want to check the following things:
a) Security groups:
open ports, SG rules, keys, protocols
b) ELB security:
exposed ports
c) EC2:
SGs, exposed ports, instances must be configured in a VPC
d) IAM:
full admin privileges in IAM policies, user-level MFA status, password policy status
e) Lambda:
admin role, unauthorized cross-account access
How can I collect this information through code? Is there an AWS Java SDK available to check these things?
I found one tool, Chef InSpec, where I can write rules and get a report, but I can't pass it a list of instances; it checks instance by instance.
Is there any other tool or Java SDK approach to get all of this?
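There is no single call that returns all of this, but a minimal sketch of how a couple of these checks might look with the AWS SDK for Java v2 (assuming default credentials and region configuration; the same pattern extends to the ELB and Lambda clients):

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.iam.IamClient;
import software.amazon.awssdk.services.iam.model.ListAttachedUserPoliciesRequest;

public class ComplianceChecksSketch {
  public static void main(String[] args) {
    // a) + c) Flag security group rules that are open to the whole internet.
    try (Ec2Client ec2 = Ec2Client.create()) {
      ec2.describeSecurityGroups().securityGroups().forEach(sg ->
          sg.ipPermissions().forEach(perm ->
              perm.ipRanges().stream()
                  .filter(range -> "0.0.0.0/0".equals(range.cidrIp()))
                  .forEach(range -> System.out.printf("OPEN: %s ports %s-%s%n",
                      sg.groupId(), perm.fromPort(), perm.toPort()))));
    }

    // d) Flag users with the AdministratorAccess managed policy attached,
    //    and dump the account password policy (throws if none is configured).
    try (IamClient iam = IamClient.builder().region(Region.AWS_GLOBAL).build()) {
      iam.listUsers().users().forEach(user ->
          iam.listAttachedUserPolicies(ListAttachedUserPoliciesRequest.builder()
                  .userName(user.userName()).build())
              .attachedPolicies().stream()
              .filter(p -> p.policyArn().endsWith("/AdministratorAccess"))
              .forEach(p -> System.out.println("ADMIN USER: " + user.userName())));
      System.out.println(iam.getAccountPasswordPolicy().passwordPolicy());
    }
  }
}
```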
I have an Amazon Web Services account which will be used to host the backend of an app. The backend uses PHP/MySQL and will most likely use an EC2 instance and RDS. I have my own account which has access to everything. I need to create an account for a developer to put the backend on AWS, but I don't want them to have access to anything except what they need. I know how to create IAM users and groups, but I don't know which permissions to grant the developer. Under Select Policy Template there is a Power User template; is that good for a developer? Has anyone done this before?
The Power User Access template in AWS Identity and Access Management (IAM) grants permission to do ANYTHING except using IAM. A user with this permission can view, create or remove any resources in your AWS account, but they could not create new users or modify any user permissions.
It is recommended that you only give people the least amount of privilege required to use AWS, so that they do not intentionally or accidentally do something unwanted. However, if you do not have enough knowledge of AWS to know what functionality is required, you will most likely need to trust the developer to configure the system for your needs.
A few tips:
Only give them access via an IAM User -- never give them your root credentials
If you don't know what permissions are required, then "Power User" is at least safer than "Administrator" since they cannot edit IAM settings (a sketch of setting this up with the SDK follows after this list)
When they have completed their work, revoke their access so they cannot create any more AWS resources
Determine whether you also wish to revoke access to the EC2 instances (you'll have to do this on the instances themselves)
You may need to define some roles that will be used with Amazon EC2 -- these are defined in IAM, so the developer will not have permission to create the roles himself
Ask the developer for documentation of what he has deployed
Turn on Detailed Billing to identify what AWS charges you are receiving and check them against the documentation
Turn on CloudTrail to activate auditing of your account (it is activated per-region)
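Putting the first two tips into practice, a minimal sketch of creating the developer's IAM user and attaching the AWS-managed PowerUserAccess policy with the AWS SDK for Java v2 (the user name and temporary password are placeholders):

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.iam.IamClient;
import software.amazon.awssdk.services.iam.model.AttachUserPolicyRequest;
import software.amazon.awssdk.services.iam.model.CreateLoginProfileRequest;
import software.amazon.awssdk.services.iam.model.CreateUserRequest;

public class DeveloperUserSketch {
  public static void main(String[] args) {
    String userName = "backend-developer";  // placeholder

    try (IamClient iam = IamClient.builder().region(Region.AWS_GLOBAL).build()) {
      // Create the IAM user (no root credentials involved).
      iam.createUser(CreateUserRequest.builder().userName(userName).build());

      // Give them a console password; force a change on first sign-in.
      iam.createLoginProfile(CreateLoginProfileRequest.builder()
          .userName(userName)
          .password("TemporaryPassw0rd!")  // placeholder, rotate immediately
          .passwordResetRequired(true)
          .build());

      // Attach the AWS-managed PowerUserAccess policy (everything except IAM).
      iam.attachUserPolicy(AttachUserPolicyRequest.builder()
          .userName(userName)
          .policyArn("arn:aws:iam::aws:policy/PowerUserAccess")
          .build());
    }
  }
}
```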
Alternatively, you could do all the AWS configuration (launching an EC2 instance, creating the database) and only let the developer login to the EC2 instance itself. That way, they would not need access to your AWS account.
I am in the early stages of writing an AWS app for our users that will run our research algorithms using their AWS resources. For example, our code will need to spin up EC2 instances running our 'worker' app, access RDS databases, and create and access SQS queues. The AWS Java SDK examples (we are writing this in Java) use an AwsCredentials.properties file to store the Access Key ID and Secret Access Key, which is fine for examples, but obviously not acceptable for our users, who would in essence be giving us access to all their resources. What is a clean way to go about running our system on their behalf? I discovered AWS Identity and Access Management (IAM), which seems to be for this purpose (I haven't got my head around it yet), esp. Cross-account access between AWS accounts. This post makes it sound straightforward:
Use the amazon IAM service to create a set of keys that only has
permission to perform the tasks that you require for your script.
http://aws.amazon.com/iam/
However, other posts (e.g., Within IAM, can I restrict a group of users to access/launch/terminate only certain EC2 AMIs or instances?) suggest there are limitations to using IAM with EC2 in particular.
Any advice would be really helpful!
The key limitation with regard to RDS and EC2 is that while you can restrict access to certain API actions, there are no resource-level constraints. For example, with an IAM S3 policy you can restrict a user to only being able to perform certain actions on certain buckets. You can write a policy for EC2 that says a user is allowed to stop instances, but not one that says they can only stop certain instances.
Another option is for them to provide you with temporary credentials via the Security Token Service. A variant on that is to use the new IAM roles service. With this, an instance has a set of policies associated with it. You don't need to provide an AwsCredentials.properties file because the SDK can fetch credentials from the metadata service.
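As a minimal sketch of that temporary-credentials option with the current AWS SDK for Java v2: the customer creates a cross-account role in their account that trusts yours, and your code assumes it with STS (the role ARN, external ID and session name are placeholders agreed with the customer):

```java
import software.amazon.awssdk.auth.credentials.AwsSessionCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.sts.StsClient;
import software.amazon.awssdk.services.sts.model.AssumeRoleRequest;
import software.amazon.awssdk.services.sts.model.Credentials;

public class CrossAccountSketch {
  public static void main(String[] args) {
    // Placeholder: a role the customer created in *their* account that trusts
    // your account and carries only the permissions your workers need.
    String roleArn = "arn:aws:iam::111122223333:role/research-worker";

    try (StsClient sts = StsClient.create()) {
      Credentials creds = sts.assumeRole(AssumeRoleRequest.builder()
          .roleArn(roleArn)
          .roleSessionName("research-run")
          .externalId("customer-agreed-external-id")  // guards against the confused-deputy problem
          .build()).credentials();

      // Use the temporary credentials instead of a long-lived AwsCredentials.properties file.
      AwsSessionCredentials session = AwsSessionCredentials.create(
          creds.accessKeyId(), creds.secretAccessKey(), creds.sessionToken());

      try (Ec2Client ec2 = Ec2Client.builder()
          .credentialsProvider(StaticCredentialsProvider.create(session))
          .build()) {
        ec2.describeInstances().reservations().forEach(r ->
            r.instances().forEach(i -> System.out.println(i.instanceId())));
      }
    }
  }
}
```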
Finally, one last option might be consolidated billing. If the reason you are using their AWS resources is just the billing, then set up a new account which is billed from their account. The accounts are isolated from each other, so you can't, for example, delete their instances by accident. Equally, you can't access their RDS snapshots and things like that (access to an RDS instance via mysql, as opposed to the AWS API, would depend on the instance's security group). You can of course combine this with the previous options -- they could provide you with credentials that only allow you to perform certain actions within that isolated account.