AWS Control Tower Guardrail - Prevents S3 Bucket from Being Created with Encryption

We have applied the guardrails mentioned in the AWS posting on preventive S3 guardrails, but unfortunately we are not getting the anticipated outcome. Specifically, we applied the "Disallow Changes to Encryption Configuration for Amazon S3 Buckets" guardrail.
The SCP has a DENY for s3:PutEncryptionConfiguration, with a condition excepting the arn:aws:iam::*:role/AWSControlTowerExecution role.
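For reference, the SCP looks roughly like the following, reconstructed from the description above (the exact statement ID, condition operator, and condition key in the managed guardrail may differ); it is shown as a boto3 call purely for illustration, and the policy name is a placeholder:

import json
import boto3

# Sketch of the guardrail SCP described above: deny changes to S3 encryption
# configuration for everyone except the Control Tower execution role.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyS3EncryptionConfigChanges",
            "Effect": "Deny",
            "Action": ["s3:PutEncryptionConfiguration"],
            "Resource": ["*"],
            "Condition": {
                "ArnNotLike": {
                    "aws:PrincipalARN": "arn:aws:iam::*:role/AWSControlTowerExecution"
                }
            },
        }
    ],
}

organizations = boto3.client("organizations")
organizations.create_policy(
    Name="deny-s3-encryption-config-changes",  # placeholder name
    Description="Disallow changes to S3 bucket encryption configuration",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)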
The issue is that anyone can create an S3 bucket, which is acceptable. However, when creating a bucket in the console or via CloudFormation and specifying encryption (either SSE-S3 or SSE-KMS), an error is generated and the bucket is created without encryption.
Ideally, anyone should be able to create an S3 bucket and enable encryption on it. What we were hoping this SCP would do is prevent encryption from being removed once it has been applied to the bucket.
We are anticipating similar issues with the other guardrails mentioned in the article:
Disallow Changes to Encryption Configuration for all Amazon S3 Buckets [Previously: Enable Encryption at Rest for Log Archive]
Disallow Changes to Logging Configuration for all Amazon S3 Buckets [Previously: Enable Access Logging for Log Archive]
Disallow Changes to Bucket Policy for all Amazon S3 Buckets [Previously: Disallow Policy Changes to Log Archive]
Disallow Changes to Lifecycle Configuration for all Amazon S3 Buckets [Previously: Set a Retention Policy for Log Archive]
Has anyone encountered this issue? What would be the best way to allow buckets to be created with the needed encryption, logging, bucket policy, and lifecycle configuration, and then disallow removal of or changes to those settings once the bucket has been created?

I'm afraid SCPs don't offer the flexibility you need, simply because the condition keys you need are not present on the API calls. There is no policy that says "allow CreateBucket on the condition that encryption is enabled".
I've worked in various platform teams at large corporations implementing these types of controls and have run into these limitations many times. Basically, there are three strategies:
Detective compliance
Corrective compliance
Preventive compliance
First, make sure you have visibility over how things are configured. You can use AWS Config rules for this; there are managed rules that check S3 buckets for encryption settings. Make sure to centralize the results of these rules using an AWS Config aggregator in your security account. After detection, you can manually follow up on misconfigurations (or automate this when running at scale).
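As a minimal sketch of the detective approach, assuming boto3, an AWS Config recorder already running, and the managed rule identifier S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED (the rule name below is a placeholder):

import boto3

config = boto3.client("config")

# Deploy the managed rule that flags S3 buckets without default
# server-side encryption.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-sse-enabled",  # placeholder name
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)

# Later, list the buckets the rule has flagged as non-compliant.
results = config.get_compliance_details_by_config_rule(
    ConfigRuleName="s3-bucket-sse-enabled",
    ComplianceTypes=["NON_COMPLIANT"],
)
for evaluation in results["EvaluationResults"]:
    qualifier = evaluation["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
    print(qualifier["ResourceId"])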
If you also want to correct mistakes automatically, you can use AWS Config auto-remediation actions. Various open source tools are available to help with this as well; an often-used one is Cloud Custodian with the c7n-org plugin. Many commercial offerings exist too, but they are quite expensive.
With SCPs or IAM policies you can prevent someone from doing something in the first place, which is lower risk than correcting misconfigurations after they happen. However, it's also very inflexible: policies can get complex very quickly, and they don't tell the user why they can't do something. Often, SCPs are only used for very simple tasks (e.g. no IAM users may be created) or for blocking actions outside of certain regions.
I'd opt for making sure you detect stuff properly first, then see if you can either correct or prevent it.
Edit:
If you have mature teams that only use CI/CD and infrastructure as code, you can also enforce your security controls using tools like cfn-guard in a pipeline build stage. Simply fail the build if the templates are not up to standard.
Edit 2: to get back to your question: for some actions it's possible to prevent changes with SCPs if there is a separate API for disabling something, such as a 'DisableEncryption'-style action. However, for most settings there is only a Put-style action (like PutEncryptionConfiguration), and from the API call alone you can't tell whether the setting is being enabled or disabled.

Related

SCP Policies for Amazon RDS

I would like to create a Service Control Policy (SCP) at the Organization level that can block 3 things:
Don't allow creating a database that is publicly accessible
Don't allow creating a database without encryption enabled
Don't allow creating a database without backups enabled
Does anyone know if this is possible?
I don't think that Service Control Policies can act at this level.
They basically say which API calls are permitted (e.g. CreateDBInstance, RebootDBInstance) and don't get down to the level of parameters.
In fact, I don't think it would be possible to create normal IAM policies that have that level of detail, let alone SCPs.
Such rules would likely need to be monitored via AWS Config rules (see Evaluating Resources with AWS Config Rules in the AWS Config documentation) rather than enforced through permissions.
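For example, the three requirements map fairly directly onto AWS Config managed rules. A minimal sketch with boto3, assuming an AWS Config recorder is running and using the managed rule identifiers RDS_INSTANCE_PUBLIC_ACCESS_CHECK, RDS_STORAGE_ENCRYPTED, and DB_INSTANCE_BACKUP_ENABLED (rule names are placeholders):

import boto3

config = boto3.client("config")

# Detect (rather than prevent) RDS instances that are public, unencrypted,
# or missing backups.
managed_rules = [
    ("rds-no-public-access", "RDS_INSTANCE_PUBLIC_ACCESS_CHECK"),
    ("rds-storage-encrypted", "RDS_STORAGE_ENCRYPTED"),
    ("rds-backup-enabled", "DB_INSTANCE_BACKUP_ENABLED"),
]

for name, identifier in managed_rules:
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": name,
            "Source": {"Owner": "AWS", "SourceIdentifier": identifier},
            "Scope": {"ComplianceResourceTypes": ["AWS::RDS::DBInstance"]},
        }
    )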

Trigger alarm based on a rate-limit on S3 GetObject and DeleteObject requests

Recently, one of my AWS accounts got compromised; fortunately, we were able to change all sensitive information in time. To avoid a recurrence, the first thing to do is to have a process in place for secrets management.
That said, I would also like to trigger a CloudWatch alarm when a large number of downloads or deletes is taking place from inside my AWS account.
I have come across solutions like
AWS WAF
Have a CDN in place
Trigger a lambda function on an event in S3
Solutions #1 and #2 do not meet my requirement, as they only throttle requests coming from outside AWS. If implemented at the S3 level, it would automatically cover both inside and outside requests.
With solution #3, I could not get hold of the multiple objects requested by a single IP in my Lambda function, i.e. detect when a threshold number of files is requested within a threshold time window.
Is raising an alarm based on a rate limit at the S3 level a possibility?
There is no rate limit provided by AWS on S3 directly, but you can implement alarms (delivered via SNS topics) based on CloudTrail.
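A rough sketch of that approach, assuming a CloudTrail trail with S3 data events already delivering to a CloudWatch Logs group (the log group name, thresholds, and SNS topic ARN below are placeholders):

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "CloudTrail/DataEvents"  # placeholder: your trail's log group

# Turn GetObject/DeleteObject data events into a CloudWatch metric.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="s3-object-read-delete",
    filterPattern='{ ($.eventName = "GetObject") || ($.eventName = "DeleteObject") }',
    metricTransformations=[{
        "metricName": "S3ObjectReadDeleteCount",
        "metricNamespace": "Custom/S3Activity",
        "metricValue": "1",
    }],
)

# Alarm if more than 1000 such calls happen within 5 minutes (tune to taste).
cloudwatch.put_metric_alarm(
    AlarmName="s3-mass-download-or-delete",
    Namespace="Custom/S3Activity",
    MetricName="S3ObjectReadDeleteCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],  # placeholder topic
)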
Unless you explicitly require someone on your team to remove objects from your S3 bucket, you shouldn't grant anyone that access. The following are some ideas you can follow:
Implement least-privilege access
You can block the ability to remove objects at the IAM user level, so no one will be able to remove any items. Alternatively, you can modify the bucket policy to grant DeleteObject access only to specific users/roles via conditions (a sketch of such a bucket policy appears at the end of this answer).
Enable multi-factor authentication (MFA) Delete
MFA Delete can help prevent accidental bucket deletions. If MFA Delete is not enabled, any user with the password of a sufficiently privileged root or IAM user could permanently delete an Amazon S3 object.
MFA Delete requires additional authentication for either of the following operations:
Changing the versioning state of your bucket
Permanently deleting an object version
S3 Object Lock
S3 Object Lock enables you to store objects using a "Write Once Read Many" (WORM) model. S3 Object Lock can help prevent accidental or inappropriate deletion of data. For example, you could use S3 Object Lock to help protect your AWS CloudTrail logs.
Amazon Macie with Amazon S3
Macie uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property. It provides you with dashboards and alerts that give visibility into how this data is being accessed or moved.
You can learn more about security best practices for S3 here:
https://aws.amazon.com/premiumsupport/knowledge-center/secure-s3-resources/
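For the bucket-policy idea above, here is a minimal sketch that denies DeleteObject to everyone except one named role; the bucket name and role ARN are placeholders, and a real policy would need to account for your other principals as well:

import json
import boto3

s3 = boto3.client("s3")

BUCKET = "my-protected-bucket"                                  # placeholder
ALLOWED_ROLE_ARN = "arn:aws:iam::111122223333:role/DataAdmin"   # placeholder

# Deny object deletion to everyone except the allowed role.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDeleteExceptDataAdmin",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:DeleteObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {"ArnNotEquals": {"aws:PrincipalArn": ALLOWED_ROLE_ARN}},
    }],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))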

How do I find out what AWS actions I need to grant for a CLI command?

Consider I want to run some AWS CLI command, e.g. aws s3 sync dist/ "s3://${DEPLOY_BUCKET_NAME}" --delete.
How do I know what specific permissions (actions) I need to grant in order for this command to work correctly? I want to adhere to the principle of least privilege.
Just to clarify my question: I know where to find the list of all actions for S3 or any other service, and I know how to write a policy. The question is how do I know which specific actions a given CLI command needs?
Each command will use different actions, and the arguments of the command also play a role here.
Almost every command in the AWS CLI maps one-to-one to an IAM action.
However, the aws s3 commands such as sync are higher-level functions that call multiple API actions.
For sync, I would imagine you would need:
ListBucket
CopyObject
GetObjectACL
PutObjectACL
If that still doesn't help, then you can use AWS CloudTrail to look at the underlying API calls that the AWS CLI made to your account. The CloudTrail records will show each API call and whether it succeeded or failed.
There's no definitive mapping to API actions from high-level awscli commands (like aws s3 sync) or from AWS console actions that I'm aware of.
One thing that you might consider is to enable CloudTrail, then temporarily enable all actions on all resources in an IAM policy, then run a test of aws s3 sync, and then review CloudTrail for what API actions were invoked on which resources. Not ideal, but it might give you something to start with.
You can use Athena to query CloudTrail Logs. It might seem daunting to set up at first, but it's actually quite easy. Then you can issue simple SQL queries such as:
SELECT eventtime, eventname, resources FROM trail20191021 ORDER BY eventtime DESC;
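The same query can also be run programmatically; a sketch with boto3, where the database name, the table name from the query above, and the results bucket are all placeholders for your own setup:

import time
import boto3

athena = boto3.client("athena")

# Run the CloudTrail query above and print the results.
execution = athena.start_query_execution(
    QueryString=(
        "SELECT eventtime, eventname, resources "
        "FROM trail20191021 ORDER BY eventtime DESC"
    ),
    QueryExecutionContext={"Database": "default"},                       # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},   # placeholder bucket
)

query_id = execution["QueryExecutionId"]
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])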
If you want to know for S3 specifically, that is documented in the S3 Developer Guide:
Specifying Permissions in a Policy
Specifying Conditions in a Policy
Specifying Resources in a Policy
In general, you can get what you need for any AWS resource from Actions, Resources, and Condition Keys for AWS Services
And you may find the AWS Policy Generator useful

Creating custom AWS IAM actions

Can AWS IAM be used to control access for custom applications? I heavily rely on IAM for controlling access to AWS resources. I have a custom Python app that I would like to extend to work with IAM, but I can't find any references to this being done by anyone.
I've considered the same thing, and I think it's theoretically possible. The main issue is that there's no call available in IAM that determines if a particular call is allowed (SimulateCustomPolicy may work, but that doesn't seem to be its purpose so I'm not sure it would have the throughput to handle high volumes).
As a result, you'd have to write your own IAM policy evaluator for those custom calls. I don't think that's inherently a bad thing, since it's also something you'd have to build for any other policy-based system. And the IAM policy format seems reasonable enough to be used.
I guess the short answer is, yes, it's possible, with some work. And if you do it, please open source the code so the rest of us can use it.
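To make the "write your own evaluator" idea concrete, here is a deliberately minimal sketch in Python: explicit deny wins, otherwise an allow is required, and * wildcards are supported in actions and resources. It ignores conditions, principals, NotAction, and everything else a real evaluator would need; the "myapp" service prefix is made up for illustration.

import fnmatch

def is_allowed(policy, action, resource):
    """Minimal IAM-style evaluation: explicit deny > allow > default deny."""
    allowed = False
    for statement in policy.get("Statement", []):
        actions = statement.get("Action", [])
        resources = statement.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        action_match = any(fnmatch.fnmatchcase(action, p) for p in actions)
        resource_match = any(fnmatch.fnmatchcase(resource, p) for p in resources)
        if action_match and resource_match:
            if statement.get("Effect") == "Deny":
                return False  # an explicit deny always wins
            if statement.get("Effect") == "Allow":
                allowed = True
    return allowed

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "myapp:Read*", "Resource": "arn:myapp:::reports/*"},
        {"Effect": "Deny", "Action": "myapp:*", "Resource": "arn:myapp:::reports/secret"},
    ],
}

print(is_allowed(policy, "myapp:ReadReport", "arn:myapp:::reports/2024"))    # True
print(is_allowed(policy, "myapp:ReadReport", "arn:myapp:::reports/secret"))  # False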
The only way you can manage users, create roles and groups is if you have admin access. Power users can do everything but that.
You can create a group with all the privileges you want to grant, and create a user that inherits the policies attached to that group. Create the user with programmatic access only, so the app can connect with an access key ID and secret access key via the AWS CLI.
Normally, IAM can be used to create and manage AWS users and groups, and permissions to allow and deny their access to AWS resources.
If your Python app is consuming or interfacing with any AWS resource such as S3, then you might want to look into this:
connect-on-premise-python-application-with-aws
The Python application can be uploaded to an S3 bucket. The application runs on a server inside a company's on-premises data center. The focus of that tutorial is on the connection made to AWS.
Consider placing API Gateway in front of your Python app's routes.
Then you could control access using IAM.

How can I query my IAM capabilities?

My code is running on an EC2 machine. I use some AWS services inside the code, so I'd like to fail on start-up if those services are unavailable.
For example, I need to be able to write a file to an S3 bucket. This happens after my code's been running for several minutes, so it's painful to discover that the IAM role wasn't configured correctly only after a 5 minute delay.
Is there a way to figure out if I have PutObject permission on a specific S3 bucket+prefix? I don't want to write dummy data to figure it out.
You can programmatically test permissions with the SimulatePrincipalPolicy API:
Simulate how a set of IAM policies attached to an IAM entity works with a list of API actions and AWS resources to determine the policies' effective permissions.
Check out the blog post below that introduces the API. From that post:
AWS Identity and Access Management (IAM) has added two new APIs that enable you to automate validation and auditing of permissions for your IAM users, groups, and roles. Using these two APIs, you can call the IAM policy simulator using the AWS CLI or any of the AWS SDKs. Use the new iam:SimulatePrincipalPolicy API to programmatically test your existing IAM policies, which allows you to verify that your policies have the intended effect and to identify which specific statement in a policy grants or denies access to a particular resource or action.
Source:
Introducing New APIs to Help Test Your Access Control Policies
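A sketch of that call with boto3, run from the instance itself. Note that SimulatePrincipalPolicy wants the role ARN, not the assumed-role session ARN that STS returns, and the calling role needs permission to invoke the simulator; the ARN parsing assumes a role without a path, and the bucket/prefix are placeholders:

import boto3

sts = boto3.client("sts")
iam = boto3.client("iam")

# The instance credentials belong to an assumed-role session, e.g.
# arn:aws:sts::111122223333:assumed-role/MyInstanceRole/i-0abc...;
# convert that to the underlying role ARN.
identity = sts.get_caller_identity()
account_id = identity["Account"]
role_name = identity["Arn"].split("/")[1]  # assumes an assumed-role ARN without a path
role_arn = f"arn:aws:iam::{account_id}:role/{role_name}"

# Check whether the role may put objects under a specific bucket/prefix.
response = iam.simulate_principal_policy(
    PolicySourceArn=role_arn,
    ActionNames=["s3:PutObject"],
    ResourceArns=["arn:aws:s3:::my-bucket/my-prefix/*"],  # placeholder bucket/prefix
)
for result in response["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalDecision"])  # "allowed", "implicitDeny", or "explicitDeny"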
Have you tried the AWS IAM Policy Simulator? You can use it interactively, but it also has some API capabilities that you may be able to use to accomplish what you want.
http://docs.aws.amazon.com/IAM/latest/APIReference/API_SimulateCustomPolicy.html
Option 1: Upload an actual file when your app starts to see if it succeeds.
Option 2: Use dry runs.
Many AWS commands allow for "dry runs". This would let you execute your command at the start without actually doing anything.
The AWS CLI for S3 appears to support dry runs using the --dryrun option:
http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
The Amazon EC2 documentation for "Dry Run" says the following:
Checks whether you have the required permissions for the action, without actually making the request. If you have the required permissions, the request returns DryRunOperation; otherwise, it returns UnauthorizedOperation.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/CommonParameters.html
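The DryRun flag only exists on EC2-style APIs (S3 itself does not accept it), but as a sketch of the pattern with boto3 (the instance ID is a placeholder):

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

# Ask EC2 whether we *could* stop this instance, without actually doing it.
try:
    ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], DryRun=True)  # placeholder instance ID
except ClientError as error:
    code = error.response["Error"]["Code"]
    if code == "DryRunOperation":
        print("Permission granted (dry run succeeded)")
    elif code == "UnauthorizedOperation":
        print("Permission denied")
    else:
        raise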