I have some app configuration (API keys) stored in a file in an S3 bucket. The bucket is configured to only allow access via a specific VPC endpoint, which ties the keys to specific environments and prevents, for example, production keys being accidentally used in a staging or test environment.
However, occasionally I need to amend these keys, and it's a pain. Currently the bucket policy prevents console access, so I have to remove the bucket policy, update the file, then replace the policy.
How can I allow access from the console and a specific VPC endpoint, and nowhere else?
Here is my current policy, where I've tried and failed already:
{
  "Version": "2012-10-17",
  "Id": "Policy12345",
  "Statement": [
    {
      "Sid": "Principal-Access",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::account-id:root"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-keys-staging",
        "arn:aws:s3:::my-keys-staging/*"
      ]
    },
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-keys-staging",
        "arn:aws:s3:::my-keys-staging/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-vpceid"
        }
      }
    }
  ]
}
As mentioned in the comments, an explicit Deny cannot be overridden. Because your Deny is tied to a particular VPC endpoint, you cannot add any other Allow statement to counteract it.
Option 1
One option is to change your "deny if not from VPC abc" statement to "allow if from VPC abc". You could then add further Allow statements to the policy to grant access to the bucket from elsewhere.
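A sketch of what that rewritten statement might look like, reusing the endpoint ID and bucket names from your policy:

{
  "Sid": "Access-from-specific-VPCE-only",
  "Effect": "Allow",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::my-keys-staging",
    "arn:aws:s3:::my-keys-staging/*"
  ],
  "Condition": {
    "StringEquals": {
      "aws:sourceVpce": "vpce-vpceid"
    }
  }
}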
However, there are two very important caveats that go along with doing that:
Any user with "generic" S3 access via IAM policies would have access to the bucket, and
Any role/user from said VPC would be allowed into your bucket.
So by changing Deny to Allow, you will no longer have a VPC-restriction at the bucket level.
This may or may not be within your organization's security requirements.
Option 2
Instead, you can amend your existing Deny to add another condition key; multiple keys within a condition operator are evaluated with AND logic:
"Condition": {
"StringNotEquals": {
"aws:sourceVpce": "vpce-vpceid",
"aws:username": "your-username"
}
}
This type of condition will deny the request if:
The request is not coming from your magic VPC, AND
The request is not coming from YOUR username
So you should be able to maintain the restriction of limiting requests to your VPC with the exception that your user sign-in would be allowed access to the bucket from anywhere.
Note the security hole you are opening up by doing this. You should ensure you restrict the username to one that (a) does not have any access keys assigned, and (b) has MFA enabled.
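Putting it together, the amended Deny statement might look like this (the username value is a placeholder):

{
  "Sid": "Access-to-specific-VPCE-only",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::my-keys-staging",
    "arn:aws:s3:::my-keys-staging/*"
  ],
  "Condition": {
    "StringNotEquals": {
      "aws:sourceVpce": "vpce-vpceid",
      "aws:username": "your-username"
    }
  }
}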
Related
I am trying to set up things on S3 to prevent hotlinking.
I've taken advice from here: How do I prevent hotlinking on Amazon S3 without using signed URLs?
And also this one: https://kinsta.com/blog/hotlinking/
However, I can't get it to work.
First, I prevent all public access to the bucket, so on the Permissions tab all of the "Block public access" settings are turned on.
I have set the policy like this:
{
  "Version": "2008-10-17",
  "Id": "HTTP referer policy example",
  "Statement": [
    {
      "Sid": "prevent hotlinking",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://example.co.uk/*",
            "http://www.example.co.uk/*",
            "https://example.co.uk/*",
            "https://www.example.co.uk/*",
            "http://localhost/01-example/*"
          ]
        }
      }
    }
  ]
}
However, when I try to access content from the bucket from the referring site, I cannot see the S3 content.
What am I doing wrong?
I prevent all public access to the bucket so the settings on the Permissions tab are like this.
That's why it does not work. Your policy allows public/anonymous access ("Principal": {"AWS": "*"}), but at the same time you explicitly block all public access. You have to enable public access. From the docs:
Before you use a bucket policy to grant read-only permission to an anonymous user, you must disable block public access settings for your bucket.
The Block Public Access options override any other configuration you're using; for this reason your bucket policy will not take effect.
To allow your policy to work you will need to disable this; you might choose to keep several of the options enabled to prevent further public access being granted in other ways.
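For example, a public access block configuration along these lines (a sketch; applied on the Permissions tab or via the PutPublicAccessBlock API) would let a public bucket policy take effect while still blocking public ACLs:

{
  "BlockPublicAcls": true,
  "IgnorePublicAcls": true,
  "BlockPublicPolicy": false,
  "RestrictPublicBuckets": false
}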
On a related note to your policy, the Referer header can be faked to still access these assets, so it should not be treated as a silver bullet.
Another solution would be to use S3 pre-signed URLs, or to put a CloudFront distribution in front of your S3 bucket and make use of signed cookies.
I have bucket policy which allows access only from a VPC:
{
  "Version": "2012-10-17",
  "Id": "aksdhjfaksdhf",
  "Statement": [
    {
      "Sid": "Access-only-from-a-specific-VPC",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::zzzz",
        "arn:aws:s3:::zzzz/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpc": "vpc-xxxx"
        }
      }
    }
  ]
}
I'd like to allow traffic coming from AWS Textract to this bucket as well. I've tried various methods but because of the absolute precedence of 'explicit deny' (which I require), I cannot make it work.
Is there a different policy formulation or a different method altogether to restrict the access to this S3 Bucket to traffic from the VPC AND from Textract service exclusively?
This will not be possible.
In general, it's a good idea to avoid Deny policies since they override any Allow policy. They can be notoriously hard to configure correctly.
One option would be to remove the Deny and be very careful in who is granted Allow access to the bucket.
However, if this is too hard (e.g. Admins are given access to all buckets by default), then a common practice is to move sensitive data to an S3 bucket in a different AWS account and only grant cross-account access to specific users.
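A sketch of the cross-account grant on such a separate bucket (the account ID, user name, and bucket name are hypothetical):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CrossAccountAccessForSpecificUser",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/sensitive-data-user"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::sensitive-data-bucket/*"
    }
  ]
}

Note that cross-account access also requires a matching Allow in the user's own identity-based policy in the other account.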
I am looking for a bucket policy that allows only the root account user and the bucket creator to delete the bucket, something like the policy below. How can I restrict bucket deletion to only the bucket creator and root?
{
  "Version": "2012-10-17",
  "Id": "PutObjBucketPolicy",
  "Statement": [
    {
      "Sid": "AllowRootBucketDelete",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxxxxxx:root"
      },
      "Action": "s3:DeleteBucket",
      "Resource": "arn:aws:s3:::test-bucket-s3"
    },
    {
      "Sid": "PreventBucketDelete",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:DeleteBucket",
      "Resource": "arn:aws:s3:::test-bucket-s3"
    }
  ]
}
A Deny always beats an Allow. Therefore, with this policy, nobody would be allowed to delete the bucket. (I assume, however, that the root user would be able to do so, since it exists outside of IAM.)
There is no need to assign permissions to the root user, since it can always do anything.
Also, there is no concept of a "bucket creator". The bucket belongs to the account, not to a user.
Therefore:
Remove the Allow section (it does nothing)
Test whether the policy prevents non-root users from deleting it
Test whether the policy still permits the root user to delete it
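The trimmed-down policy would then contain only the Deny (a sketch, with the Principal properly quoted):

{
  "Version": "2012-10-17",
  "Id": "PutObjBucketPolicy",
  "Statement": [
    {
      "Sid": "PreventBucketDelete",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:DeleteBucket",
      "Resource": "arn:aws:s3:::test-bucket-s3"
    }
  ]
}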
There are two different types of permissions in S3:
Resource-based policies
User policies
Bucket policies and access control lists (ACLs) are resource-based permissions, attached to the bucket.
If all users are in the same AWS account, you can consider a user policy, which is attached to a user or role.
If you are dealing with multiple AWS accounts, bucket policies or ACLs are the better fit.
One difference is that bucket policies let you grant or deny access and apply to all objects in the bucket, whereas ACLs grant basic read or write permissions and cannot include condition checks.
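For illustration, a minimal user policy might look like the sketch below (bucket name hypothetical). Note there is no Principal element, because the policy is attached to the user or role it applies to:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}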
In our environment, all IAM user accounts are assigned a customer-managed policy that grants read-only access to a lot of AWS services. Here's what I want to do:
Migrate a sql server 2012 express database from on-prem to a RDS instance
Limit access to the S3 bucket containing the database files
Here are the requirements according to AWS:
An S3 bucket to store the .bak database file
A role with access to the bucket
SQLSERVER_BACKUP_RESTORE option attached to RDS instance
So far, I've done the following:
Created a bucket under the name "test-bucket" (and uploaded the .bak file here)
Created a role under the name "rds-s3-role"
Created a policy under the name "rds-s3-policy" with these settings:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::test-bucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObjectMetaData",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::test-bucket/*"
    }
  ]
}
Assigned the policy to the role
Gave the AssumeRole permissions to the RDS service to assume the role created above
Created a new option group in RDS with the SQLSERVER_BACKUP_RESTORE option and linked it to my RDS instance
With no restrictions on my S3 bucket, I can perform the restore just fine; however, I can't find a solid way of restricting access to the bucket without hindering the RDS service from doing the restore.
In terms of my attempts to restrict access to the S3 bucket, I found a few posts online recommending using an explicit Deny statement to deny access to all types of principals and grant access based on some conditional statements.
Here's the contents of my bucket policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1486769843194",
  "Statement": [
    {
      "Sid": "Stmt1486769841856",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::test-bucket",
        "arn:aws:s3:::test-bucket/*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:userid": [
            "<root_id>",
            "<user1_userid>",
            "<user2_userid>",
            "<user3_userid>",
            "<role_roleid>:*"
          ]
        }
      }
    }
  ]
}
I can confirm the bucket policy does restrict access to only the IAM users that I specified, but I am not sure how it treats IAM roles. I used the ":*" syntax above per a document I found on the AWS forums, where the author stated that ":*" is a catch-all for every principal that assumes the specified role.
The only thing I'm having a problem with is, with this bucket policy in place, when I attempt to do the database restore, I get an access denied error. Has anyone ever done something like this? I've been going at it all day and haven't been able to find a working solution.
The following, admittedly, is guesswork... but reading between the lines of the somewhat difficult-to-navigate IAM documentation and elsewhere, and taking into account the way I originally interpreted it (incorrectly), I suspect that you are using the role's name rather than the role's ID in the policy.
Role IDs look similar to AWSAccessKeyIds except that they begin with AROA....
For the given role, find RoleId in the output from this:
$ aws iam get-role --role-name ROLE-NAME
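The output looks roughly like this (abridged; the IDs here are placeholders):

{
  "Role": {
    "Path": "/",
    "RoleName": "rds-s3-role",
    "RoleId": "AROAEXAMPLEROLEID",
    "Arn": "arn:aws:iam::123456789012:role/rds-s3-role"
  }
}

The corresponding aws:userid entry in your bucket policy would then be "AROAEXAMPLEROLEID:*".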
https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
Use caution when creating a broad Deny policy. You can end up denying s3:PutBucketPolicy to yourself, which leaves you in a situation where your policy prevents you from changing the policy... in which case, your recourse is to sign in as the account's root user (which can always delete the bucket policy) or persuade AWS support to remove it. A safer configuration would be to use this to deny only the object-level permissions.
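For example, a Deny scoped only to object-level actions might look like this sketch (same condition block as in your policy):

{
  "Sid": "DenyObjectAccessToUnlistedPrincipals",
  "Effect": "Deny",
  "Principal": "*",
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:DeleteObject"
  ],
  "Resource": "arn:aws:s3:::test-bucket/*",
  "Condition": {
    "StringNotLike": {
      "aws:userid": [
        "<root_id>",
        "<user1_userid>",
        "<role_roleid>:*"
      ]
    }
  }
}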
We need to create an IAM user that is allowed to access buckets in our clients' S3 accounts (provided that they have allowed us access to those buckets as well).
We have created an IAM user in our account with the following inline policy:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:PutObjectAcl",
        "s3:ListMultipartUploadParts",
        "s3:PutObject",
        "s3:ListBucketMultipartUploads",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
In addition to this, we will request that our clients use the following policy and apply it to their relevant bucket:
{
  "Version": "2008-10-17",
  "Id": "Policy1416999097026",
  "Statement": [
    {
      "Sid": "Stmt1416998971331",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::229569340673:user/our-iam-user"
      },
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:PutObjectAcl",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::client-bucket-name/*"
    },
    {
      "Sid": "Stmt1416999025675",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::229569340673:user/our-iam-user"
      },
      "Action": [
        "s3:ListBucketMultipartUploads",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::client-bucket-name"
    }
  ]
}
Whilst this all seems to work fine, the one major issue we have discovered is that our own internal inline policy seems to give our-iam-user full access to all of our own internal buckets.
Have we mis-configured something, or are we missing something else obvious here?
According to AWS support, this is not the right way to approach the problem:
https://forums.aws.amazon.com/message.jspa?messageID=618606
I am copying the answer from them here.
AWS:
The policy you're using with your IAM user grants access to any Amazon S3 bucket. In this case this will include any S3 bucket in your account and any bucket in any other account, where the account owner has granted your user access. You'll want to be more specific with the policy of your IAM user. For example, the following policy will limit your IAM user access to a single bucket.
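The example policy wasn't copied into the thread; a single-bucket version of the inline policy might look like this sketch (bucket name hypothetical):

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:PutObjectAcl",
        "s3:ListMultipartUploadParts",
        "s3:PutObject",
        "s3:ListBucketMultipartUploads",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::client-bucket-name",
        "arn:aws:s3:::client-bucket-name/*"
      ]
    }
  ]
}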
You can also grant access to an array of buckets, if the user requires access to more than one.
Me
Unfortunately, we don't know all of our clients' bucket names beforehand when we create the inline policy. As we add more and more clients to our service, it would be impractical to keep adding new client bucket names to the inline policy.
I guess another option is to create a new AWS account used solely for the above purpose - i.e. this account will not itself own anything, and will only ever be used for uploading to client buckets.
Is this acceptable, or are there any other alternatives options open to us?
AWS
Having a separate AWS account would provide clear security boundaries. Keep in mind that if you ever create a bucket in that other account, the user would inherit access to any bucket if you grant access to "arn:aws:s3:::*".
Another approach would be to use blacklisting (note that whitelisting, as suggested above, is the better practice).
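That policy also wasn't copied into the thread; a sketch of the blacklisting shape described (your own bucket names are hypothetical here):

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::our-internal-bucket",
        "arn:aws:s3:::our-internal-bucket/*"
      ]
    }
  ]
}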
As you can see, the 2nd statement explicitly denies access to an array of buckets. This will override the allow in the first statement. The disadvantage here is that by default the user will inherit access to any new bucket. Therefore, you'd need to be diligent about adding new buckets to the blacklist. Either approach will require you to maintain changes to the policy. Therefore, I recommend my previous policy (aka whitelisting), where you only grant access to the S3 buckets that the user requires.
Conclusion
For our purposes, the whitelisting/blacklisting approach is not acceptable because we don't know beforehand all the buckets that will be supplied by our clients. In the end, we went the route of creating a new AWS account with a single user, and that account does not have any S3 buckets of its own.
The policy you grant to your internal user gives this user access to every S3 bucket for the APIs listed (the first policy in your question). This is unnecessary, as your clients' bucket policies will grant your user the privileges required to access their buckets.
To solve your problem, remove the user policy, or explicitly list your clients' buckets under Resource instead of using "*".