I am trying to set up a policy using StringNotLike to prevent access to a particular bucket (named secbucket) while allowing all other buckets. However, whatever I put in the condition, it still allows access to all buckets.
{
  "Effect": "Allow",
  "Action": [
    "s3:Get*",
    "s3:List*"
  ],
  "Resource": "arn:aws:s3:::*",
  "Condition": {
    "StringNotLike": {
      "s3:prefix": [
        "secbucket",
        "secbucket/home",
        "secbucket/home/*"
      ]
    }
  }
}
To clarify my understanding of StringNotLike with the Resource condition: if the s3:prefix does not match the listed values, access is allowed.
I am not sure what is wrong with this policy. Please let me know. Many thanks.
The Bucket name is not part of the S3 prefix.
If you wish to create a policy that references the bucket name, you will need to do it as part of the ARN, eg:
"Resource": "arn:aws:s3:::secbucket"
It appears your desire is to grant access to ALL buckets except secbucket. This can be done in two steps:
Grant access to all buckets by using an Allow statement
Deny access to secbucket by using a Deny statement
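A minimal sketch of that two-statement approach (assuming the bucket is really named secbucket; adjust the allowed actions to your needs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAllBuckets",
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Sid": "DenySecbucket",
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::secbucket",
        "arn:aws:s3:::secbucket/*"
      ]
    }
  ]
}
```

Because an explicit Deny always overrides an Allow in IAM policy evaluation, the second statement blocks secbucket even though the first statement grants access to every bucket.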
If you wish to secure secbucket from most of your users, an alternative method is to put secbucket in a different AWS Account and then use a Bucket Policy to only grant access to specific IAM entities. This is sometimes easier than denying access to lots of entities.
Related
We've been using a template for bucket policies that worked well until we tried to enable replication. The first thing in the policy is a deny statement with exceptions for a specific VPC endpoint and three IP network ranges, followed by some allow statements. This worked well. When we tried to configure replication, we got a "replication failed" status for any object added or updated. So we added the IAM role created for this replication to the deny exceptions, and also to the allow statements as a principal. This still caused replication failures. We know the issue is the policy, because removing the policy results in replication completing normally. Here's the format of the deny statement...
"Statement": [
  {
    "Sid": "Stmt1587152999999",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
      "<Bucket ARN>",
      "<Bucket ARN>/*"
    ],
    "Condition": {
      "NotIpAddress": {
        "aws:SourceIp": [
          "x.x.x.x/xx",
          "x.x.x.x/xx",
          "x.x.x.x/xx"
        ]
      },
      "StringNotEquals": {
        "aws:SourceVpce": "<VPCE ID>"
      },
      "ArnNotEquals": {
        "aws:SourceArn": "<IAM role created for replication>"
      }
    }
  },
Is the source arn of the IAM role used for replication the correct way to exclude it from the deny statement? Is there another approach to limit access while still allowing replication?
Deny statements are always difficult. They often end up denying more than expected.
I think the above statement is saying:
Access to this S3 bucket is denied if:
You aren't coming from one of these IP addresses, AND
You aren't coming through that VPC Endpoint, AND
You aren't using that IAM Role
This should effectively be saying "Don't deny if any of the above are True" (that is, they are using one of the IPs, OR the VPC Endpoint OR the IAM Role).
See: Creating a condition with multiple keys or values - AWS Identity and Access Management
This means that your statement should be correct, but you report having problems. I can't see an immediate problem with what you are doing, but try starting by only having the IAM Role condition, test whether it is working correctly, then add the other conditions one-at-a-time to identify the cause of the conflict.
The issue with my policy was in the Role ARN.
I used "aws:SourceArn" but should have used "aws:PrincipalArn".
I'm pretty sure I got SourceArn from the policy generator. I ended up opening a case, and after a few iterations with support I got "aws:PrincipalArn". That worked!
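For reference, the corrected condition block would then look something like this (placeholders as in the original statement):

```json
"Condition": {
  "NotIpAddress": {
    "aws:SourceIp": ["x.x.x.x/xx", "x.x.x.x/xx", "x.x.x.x/xx"]
  },
  "StringNotEquals": {
    "aws:SourceVpce": "<VPCE ID>"
  },
  "ArnNotEquals": {
    "aws:PrincipalArn": "<IAM role created for replication>"
  }
}
```

The difference matters because aws:PrincipalArn identifies the IAM entity making the request, while aws:SourceArn identifies a resource (such as a bucket or topic) that triggered a service-to-service call, so it never matches the replication role itself.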
I am facing a very strange issue which is either my lack of knowledge or a bug in AWS S3:
I created an S3 bucket that is not publicly accessible to anyone, and then put an image in it. When I try to access that image, it is definitely not visible to everyone, which is good. (So both my bucket and image have no public access.)
Then I added the following bucket policy to it:
{
  "Version": "2012-10-17",
  "Id": "Policy1506624486110",
  "Statement": [
    {
      "Sid": "Stmt1506624421375",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucketname/*"
    }
  ]
}
At this point, based on my understanding, this image should be accessible to AWS resources but not to anyone in the public. Strangely, I see that any stranger in the public can access this image. Can anyone explain what that bucket policy magically does that makes it available to the public?
You're explicitly making your bucket public.
To grant permission to everyone, also referred as anonymous access, you set the wildcard, "*", as the Principal value. For example, if you configure your bucket as a website, you want all the objects in the bucket to be publicly accessible. The following are equivalent:
"Principal":"*"
"Principal":{"AWS":"*"}
http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-bucket-user-policy-specifying-principal-intro.html
The option of using either the "AWS" object key in a JSON object or the bare scalar string "*" is presumably for historical reasons, one being an older or newer form than the other, but that doesn't appear to be documented. The object key refers to an authority type; other documented values include "CanonicalUser", "Federated", and "Service".
There are very few valid use cases for using "*" in a policy, unless additional condition tests in the policy are used to narrow the policy's scope.
Note also that the * is not a true wildcard, here. It's only a placeholder for "everyone." You can't use it in a principal to match a portion of an ARN. For example, "AWS": [ "arn:aws:iam:account-id:user/*" ] does not mean all IAM users in the specified account.
The best practice recommendation is not to use bucket policies when the desired action can be accomplished with user or role policies.
You should be specific with the principal. You can give multiple ARNs instead of "*". Use the bucket policy generator to generate the policy and specify which ARNs you want in the principal. It would be worth reading the link below:
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
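As a sketch, a principal restricted to specific users might look like this (the account ID and user names are hypothetical placeholders):

```json
"Principal": {
  "AWS": [
    "arn:aws:iam::111122223333:user/alice",
    "arn:aws:iam::111122223333:user/bob"
  ]
}
```

With a list of ARNs like this, only those IAM users can invoke s3:GetObject, rather than everyone on the internet.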
I'm not entirely sure if this is possible, but I would like to create a setup similar to what is described in:
https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
by creating an S3 bucket with a subdirectory for each AWS user accessible only to that user.
My question is: is it possible to go a step further and specifically block users that might otherwise have full S3 permissions from being able to read from subdirectories that don't belong to them?
This solution would be ideal for me, except that several users have */* on S3 which I believe will override this policy for them, allowing them to see other users' data. Ideally this would be a bucket policy rather than an IAM group/role so that any user in the account automatically has these permissions applied without needing to be added to a group.
When an IAM user/role accesses an S3 bucket, all of the following policies are applied:
The user's or role's IAM policies,
If the user is in any groups, all of those groups' policies, and
If the bucket being accessed has a bucket policy, that policy.
All of those policies work as follows:
All commands are denied, unless
There is an explicit allow in any policy, unless
There is an explicit deny in any policy.
Basically, what this means is that by default, access is denied, unless you add an "Allow" statement to a policy (IAM user/role, group, or bucket). But if you explicitly add a "Deny" statement (in any affecting policy), that "Deny" statement will overrule any other "Allow" statement.
Knowing this, you can apply a bucket policy to your S3 bucket with the correct "Deny" statements. These policy statements would overrule any other policy statements, applying to anyone accessing the bucket (even the super-est of super users).
So, you can try something like this:
{
  "Version": "2012-10-17",
  "Id": "blah",
  "Statement": [
    {
      "Sid": "DenyListingOfUserFolder",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::block-test",
      "Condition": {
        "StringNotLike": {
          "s3:prefix": [
            "",
            "home/",
            "home/${aws:username}/*"
          ]
        }
      }
    }
  ]
}
This policy will deny anyone from listing the contents from any folder aside from the root folder, "home" folder, and "home/their user name" folder.
Be careful when working with "Deny" statements. The wrong policy could lock you out of your own bucket, and you'll need AWS Support to remove the policy for you.
For security reasons, we have a pre-prod and a prod AWS account. We're now beginning to use IAM roles for S3 access to JS/CSS files through django-storage / boto.
While this is working correctly on a per-account basis, a need has now arisen where the QA instance needs to access one S3 bucket on the prod account.
Is there a way to have one IAM role that can grant access to both the pre-prod and prod S3 buckets? As I'm writing this it seems impossible, but it never hurts to ask!
Here's the AWS doc on this: http://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
Essentially, you have to delegate permissions to one account from the other account using the Principal block of your Bucket's IAM policy, and then set up your IAM user in the second account as normal.
Example bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Account-ID>:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>"
      ]
    }
  ]
}
This works well for read-only access, but there can be issues with write access. Primarily, the account writing the object will still be the owner of that object. When dealing with Write permissions, you'll usually want to make sure the account owning the bucket still has the ability to access objects written by the other account, which requires the object to be written with a particular header: x-amz-grant-full-control
You can set up your bucket policy so that the bucket will not accept cross-account objects that do not supply this header. There's an example of that at the bottom of this page: http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html (under "Granting Cross-Account Permissions to Upload Objects While Ensuring the Bucket Owner Has Full Control")
This makes use of a conditional Deny clause in the bucket policy, like so:
{
  "Sid": "112",
  "Effect": "Deny",
  "Principal": {"AWS": "1111111111"},
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::examplebucket/*",
  "Condition": {
    "StringNotEquals": {"s3:x-amz-grant-full-control": ["emailAddress=xyz@amazon.com"]}
  }
}
I generally avoid cross-account object writes, myself...they are quite fiddly to set up.
We need to create an IAM user that is allowed to access buckets in our client's S3 accounts (provided that they have allowed us access to those buckets as well).
We have created an IAM user in our account with the following inline policy:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:PutObjectAcl",
        "s3:ListMultipartUploadParts",
        "s3:PutObject",
        "s3:ListBucketMultipartUploads",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
In addition to this, we will request that our clients use the following policy and apply it to their relevant bucket:
{
  "Version": "2008-10-17",
  "Id": "Policy1416999097026",
  "Statement": [
    {
      "Sid": "Stmt1416998971331",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::229569340673:user/our-iam-user"
      },
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:PutObjectAcl",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::client-bucket-name/*"
    },
    {
      "Sid": "Stmt1416999025675",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::229569340673:user/our-iam-user"
      },
      "Action": [
        "s3:ListBucketMultipartUploads",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::client-bucket-name"
    }
  ]
}
Whilst this all seems to work fine, the one major issue we have discovered is that our own internal inline policy seems to give our-iam-user full access to all of our own internal buckets as well.
Have we mis-configured something, or are we missing something else obvious here?
According to AWS support, this is not the right way to approach the problem:
https://forums.aws.amazon.com/message.jspa?messageID=618606
I am copying the answer from them here.
AWS:
The policy you're using with your IAM user grants access to any Amazon S3 bucket. In this case this will include any S3 bucket in your account and any bucket in any other account, where the account owner has granted your user access. You'll want to be more specific with the policy of your IAM user. For example, the following policy will limit your IAM user access to a single bucket.
You can also grant access to an array of buckets, if the user requires access to more than one.
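The single-bucket policy AWS referred to was not included above, but based on their description it would be a whitelisting policy along these lines (the bucket name is a placeholder; the actions match the ones the user actually needs):

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:PutObjectAcl",
        "s3:ListMultipartUploadParts",
        "s3:PutObject",
        "s3:ListBucketMultipartUploads",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::client-bucket-name",
        "arn:aws:s3:::client-bucket-name/*"
      ]
    }
  ]
}
```

To allow more than one bucket, the "Resource" element simply becomes a longer array of bucket and bucket/* ARNs.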
Me
Unfortunately, we don't know beforehand all of our client's bucket names when we create the inline policy. As we get more and more clients to our service, it would be impractical to keep adding new client bucket names to the inline policy.
I guess another option is to create a new AWS account used solely for the above purpose - i.e. this account will not itself own anything, and will only ever be used for uploading to client buckets.
Is this acceptable, or are there any other alternatives options open to us?
AWS
Having a separate AWS account would provide clear security boundaries. Keep in mind that if you ever create a bucket in that other account, the user would inherit access to any bucket if you grant access to "arn:aws:s3:::*".
Another approach would be to use blacklisting (note that whitelisting, as suggested above, is the better practice).
As you can see, the 2nd statement explicitly denies access to an array of buckets. This will override the allow in the first statement. The disadvantage here is that by default the user will inherit access to any new bucket. Therefore, you'd need to be diligent about adding new buckets to the blacklist. Either approach will require you to maintain changes to the policy. Therefore, I recommend my previous policy (aka whitelisting), where you only grant access to the S3 buckets that the user requires.
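The blacklist policy AWS referred to was not included above; it would have roughly this shape (the internal bucket names are hypothetical):

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-internal-bucket-1",
        "arn:aws:s3:::my-internal-bucket-1/*",
        "arn:aws:s3:::my-internal-bucket-2",
        "arn:aws:s3:::my-internal-bucket-2/*"
      ]
    }
  ]
}
```

The explicit Deny on the listed buckets overrides the broad Allow, but any bucket not listed remains accessible, which is why the blacklist must be kept up to date.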
Conclusion
For our purposes, the whitelisting/blacklisting approach is not acceptable, because we don't know beforehand all the buckets that will be supplied by our clients. In the end, we went the route of creating a new AWS account with a single user, and that user does not have any S3 buckets of its own.
The policy you grant to your internal user gives this user access to all S3 buckets for the APIs listed (the first policy in your question). This is unnecessary, as your clients' bucket policies will grant your user the required privileges to access their buckets.
To solve your problem, remove the user policy - or - explicitly list your clients' buckets in the Resource element instead of using "*".