AWS S3 StringLike Condition preventing requests to bucket - amazon-web-services

I have the following S3 IAM policy. It is intended to allow me to copy files from, or put files into, the location temp/prod/tests within the bucket below.
In the policy, I have added the StringLike condition, which I had hoped would allow copying and puts when the object prefix contains temp/prod/tests.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:ReplicateObject",
        "s3:PutObject",
        "s3:ListBucket",
        "s3:GetObjectAcl",
        "s3:GetObject",
        "s3:GetBucketLocation",
        "s3:GetBucketAcl",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::MYBUCKET/temp/prod/tests/*",
        "arn:aws:s3:::MYBUCKET"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "temp/prod/tests/*",
            "temp/prod/tests/"
          ]
        }
      }
    }
  ]
}
My problem is that the condition prevents me from copying anything under temp/prod/tests/, or putting any new object in this bucket beneath this location.
$ aws s3 cp --recursive s3://MYBUCKET/temp/prod/tests/ /tmp
download failed: s3://MYBUCKET/temp/prod/tests/testfiles/testfile to ../../../tmp/testfiles/testfile An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
And
$ aws s3 cp /tmp/test s3://MYBUCKET/temp/prod/tests/
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
If I remove the Condition, I am able to copy the files as expected.
I don't understand why the condition is not working, because as far as I can see, the requests I am making match the prefix of the condition.
Does anyone know why this is not working as I expect?

First of all, I think it is good practice to split the rules according to resources. Some of the S3 actions require a bucket, some of them require an object. This is in the documentation for every service: https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html
Furthermore, using conditions instead of proper resources makes the policy even more confusing.
In theory, for uploading an object you need just PutObject; you don't even need any List action. But for various command-line tools, I am curious how far you would get with something like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "rule1",
      "Effect": "Allow",
      "Action": [
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload",
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::MYBUCKET/temp/prod/tests/*"
      ]
    },
    {
      "Sid": "rule2",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": [
        "arn:aws:s3:::MYBUCKET"
      ]
    }
  ]
}
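Assuming MYBUCKET stands in for your real bucket, a quick way to probe how far this policy gets you from the CLI:
# Upload a single object (PutObject; large files also exercise the multipart actions)
aws s3 cp /tmp/test s3://MYBUCKET/temp/prod/tests/test
# Recursive download (ListBucket on the bucket plus GetObject on the objects)
aws s3 cp --recursive s3://MYBUCKET/temp/prod/tests/ /tmp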

Most of the policy is derived from the blog post Writing IAM Policies: Grant Access to User-Specific Folders in an Amazon S3 Bucket.
The following policy does what you mentioned in the question:
It is intended to allow me to copy files from or put files into a bucket below from location temp/prod/tests within the bucket
plus all the actions within the folder temp/prod/tests/*. Those can be restricted further, like the few permissions you have assigned.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGroupToSeeBucketListInTheConsole",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowListingOfUserFolder",
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::MYBUCKET"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "temp/prod/tests/*"
          ]
        }
      }
    },
    {
      "Sid": "AllowAllS3ActionsInUserFolder",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": "arn:aws:s3:::MYBUCKET/temp/prod/tests/*"
    }
  ]
}

I think part of the confusion here is your expectation that s3:prefix will be present and testable during a CopyObject operation. It's present during a ListBucket operation, and I think that may be the only operation in which it's present. Condition keys for S3 are documented, but the documentation does not appear to include a matrix of which keys are present during which API operations.
Specifically, I believe that s3:prefix will be absent during an actual CopyObject operation and that means that IAM will treat this as values do not match, hence the conditional test fails and the CopyObject operation is denied.
AWS policy evaluation logic is reasonably straightforward and well-defined, but the context in which AWS global condition context keys are present is not well-defined, or at least not well-documented. It's also quite difficult to determine exactly why a given API operation was denied after the fact (i.e. which part of the aggregated policies caused the failure), which makes it difficult to write and test complex policies.
Ideally, you'd know which keys are present on which operations, but that doesn't seem to be documented. One way to deal with this is to test (see what works and what does not). Another way is to use the ...IfExists condition check, but this is really designed for use with policy keys that are optional, rather than keys that are not even relevant. When you use StringLikeIfExists, for example:
If the policy key is present in the context of the request, process the key as specified in the policy [i.e. perform a StringLike test]. If the key is not present, evaluate the condition element as true.
In the case of your policy, I'd suggest (a sketch follows this list):
use bucket resources with bucket actions and object resources with object actions (right now, you are mixing them together)
limit your prefix conditions to the ListBucket operation
no need to make GetObject or PutObject conditional; simply indicate the resource ARN for which these operations will be allowed (e.g. arn:aws:s3:::MYBUCKET/temp/prod/tests/*)
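Putting those three suggestions together, a minimal sketch (assuming the same MYBUCKET and prefix as in your question):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListOnlyUnderThePrefix",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::MYBUCKET",
      "Condition": {
        "StringLike": {
          "s3:prefix": ["temp/prod/tests/*"]
        }
      }
    },
    {
      "Sid": "ObjectActionsUnderThePrefix",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::MYBUCKET/temp/prod/tests/*"
    }
  ]
}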

I found a solution that works with minimal change to the policy.
I added ForAllValues to the condition, and now I can copy any objects beneath temp/prod/tests/ or any of its subdirectories. (This works because a ForAllValues set operator evaluates to true when the key is absent from the request context, so GetObject and PutObject requests, which carry no s3:prefix, are no longer blocked by the condition.)
"Condition": {
"ForAllValues:StringLike": {
"s3:prefix": [
"temp/prod/tests/*",
"temp/prod/tests/"
]
}
}

Related

"Implicitly denied" when I've explicitly allowed S3 IAM user actions in AWS Policy Simulator

I am trying to simulate an IAM policy I want to attach to a user so I can restrict their access to two buckets, one for file upload and one for file download.
The policy simulator tells me that the following policy does not work, and I cannot figure out why, but it seems to have to do with the wildcards.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GetObject",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket-*-report-output/*.csv"
      ]
    },
    {
      "Sid": "PutObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::mybucket-*-report-input/*.csv"
      ]
    }
  ]
}
The policy simulator says the following policy does work however:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GetObject",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket-*-report-output"
      ]
    },
    {
      "Sid": "PutObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::mybucket-*-report-input"
      ]
    }
  ]
}
There must be something I am missing about how to structure the policy. I want to restrict access to the buckets in the policy, for the operations mentioned, but I also want to ensure that the user can only add and retrieve files with a .csv extension.
Below is a screenshot of the simulator (not reproduced here).
Your policy is 100% correct - the IAM Policy Simulator is showing wrong results for some absurd reason.
I can also reproduce your problem using the above policy, and the results are all over the place - sometimes both allowed, both denied, only one allowed, etc.
It seems to be having an issue with the double wildcard, and sometimes it is coming back with the wrong resource ARN being evaluated in the HTTP response being returned (I'm sometimes seeing both ARNs set to output instead of only 1 set to output in the network tab for the HTTP response - caching?).
It's not limited to PutObject either, and it's giving me loads of conflicting results with the double wildcard, even for other actions like s3:RestoreObject.
Regardless, I'm not sure what the issue is but your policy is correct - ignore IAM Policy Simulator in this case.
If you have access to AWS Support, I would create a support ticket there or post this same question as a potential bug on the AWS forums.
Evidence of a conflicting result, even though I have exactly recreated your scenario (screenshot not reproduced here).

How to solve conditions do not apply to combination of actions and resources in statement?

I am trying to copy an S3 bucket from one account to another. In order to do so, I am following the steps described by AWS. In step 4, the following policy is suggested:
{
  "Statement": [
    {
      "Sid": "ExampleStmt",
      "Action": [
        "s3:ListBucket",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET",
        "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET/*"
      ],
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      },
      "Principal": {
        "AWS": [
          "arn:aws:iam::222222222222:user/Jane"
        ]
      }
    }
  ]
}
However, when I try to do this (after replacing the example buckets and arn), I get the following error: Conditions do not apply to combination of actions and resources in statement.
How can I solve this error and make sure I can copy the s3 from one account to another?
The condition key you are using is not applicable to all of the actions you have specified:
PutObject
PutObjectAcl
ListBucket
The s3:x-amz-acl key applies to the two Put actions, but not to ListBucket, which is why the statement is rejected. You can check out Condition keys for Amazon S3 for the various condition keys supported by S3.
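In other words, split the statement so that the condition only covers the actions it applies to. A sketch along the lines of the article's example:
{
  "Statement": [
    {
      "Sid": "ListWithoutCondition",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET",
      "Principal": {"AWS": "arn:aws:iam::222222222222:user/Jane"}
    },
    {
      "Sid": "PutWithAclCondition",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET/*",
      "Condition": {
        "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
      },
      "Principal": {"AWS": "arn:aws:iam::222222222222:user/Jane"}
    }
  ]
}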

AccessDenied for ListObjects for S3 bucket when permissions are s3:*

I am getting:
An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
when I try to get a folder from my S3 bucket, using this command:
aws s3 cp s3://bucket-name/data/all-data/ . --recursive
The IAM permissions for the bucket look like this:
{
  "Version": "version_id",
  "Statement": [
    {
      "Sid": "some_id",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/*"
      ]
    }
  ]
}
What do I need to change to be able to copy and ls successfully?
You have given permission to perform commands on objects inside the S3 bucket, but you have not given permission to perform any actions on the bucket itself.
Slightly modifying your policy would look like this:
{
  "Version": "version_id",
  "Statement": [
    {
      "Sid": "some_id",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname",
        "arn:aws:s3:::bucketname/*"
      ]
    }
  ]
}
However, that probably gives more permission than is needed. Following the AWS IAM best practice of Granting Least Privilege would look something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/*"
      ]
    }
  ]
}
If you wanted to copy all s3 bucket objects using the command "aws s3 cp s3://bucket-name/data/all-data/ . --recursive" as you mentioned, here is a safe and minimal policy to do that:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-name"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": "data/all-data/*"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-name/data/all-data/*"
      ]
    }
  ]
}
The first statement in this policy allows listing objects inside a specific bucket's subdirectory. The resource needs to be the ARN of the S3 bucket, and to limit listing to only a subdirectory in that bucket you can edit the "s3:prefix" value.
The second statement in this policy allows getting objects inside the bucket at a specific subdirectory. This means you will be able to copy anything inside the "s3://bucket-name/data/all-data/" path. Be aware that this doesn't allow you to copy from parent paths such as "s3://bucket-name/data/".
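For example, with this policy attached you would expect the following behaviour (bucket-name being a stand-in):
# Allowed: listing and copying within the permitted prefix
aws s3 cp s3://bucket-name/data/all-data/ . --recursive
# Denied: the prefix condition and object ARN do not cover the parent path
aws s3 cp s3://bucket-name/data/ . --recursive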
This solution is specific to limiting use for AWS CLI commands; if you need to limit S3 access through the AWS console or API, then more policies will be needed. I suggest taking a look here: https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/.
A similar issue to this can be found here, which led me to the solution I am giving:
https://github.com/aws/aws-cli/issues/2408
Hope this helps!
I got the same error when using the policy below, although I have "s3:ListBucket" for the s3:ListObjects operation.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:GetObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>/*",
        "arn:aws:s3:::*-bucket/*"
      ],
      "Effect": "Allow"
    }
  ]
}
Then I fixed it by adding one line:
"arn:aws:s3:::<bucketname>"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:GetObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>",
        "arn:aws:s3:::<bucketname>/*",
        "arn:aws:s3:::*-bucket/*"
      ],
      "Effect": "Allow"
    }
  ]
}
I tried the following:
aws s3 ls s3.console.aws.amazon.com/s3/buckets/{bucket name}
This gave me the error:
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
Using this form worked:
aws s3 ls {bucket name}
I was unable to access S3 because:
first I configured key access on the instance (it was impossible to attach a role after the launch then)
forgot about it for a few months
attached a role to the instance
tried to access.
The configured keys had higher priority than the role, and access was denied because the user wasn't granted the necessary S3 permissions.
Solution: rm -rf .aws/credentials, then aws uses the role.
I faced the same issue. I just added the credentials config:
aws_access_key_id = your_aws_access_key_id
aws_secret_access_key = your_aws_secret_access_key
into "~/.aws/credentials" and restarted the terminal for the default profile.
In the case of multiple profiles, the --profile arg needs to be added:
aws s3 sync ./localDir s3://bucketName --profile=${PROFILE_NAME}
where PROFILE_NAME:
.bash_profile (or .bashrc) -> export PROFILE_NAME="yourProfileName"
More info about how to configure credentials and multiple profiles can be found here
For Amazon users who have enabled MFA, please use this:
aws s3 ls s3://bucket-name --profile mfa
And prepare the profile mfa first by running
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/user-name --token-code 928371 --duration-seconds 129600
(replace 123456789012, user-name and 928371).
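The call returns temporary credentials (AccessKeyId, SecretAccessKey, SessionToken). One way to wire them into the mfa profile, assuming you manage it by hand, is to paste them into ~/.aws/credentials:
[mfa]
# values copied from the get-session-token response
aws_access_key_id = <AccessKeyId>
aws_secret_access_key = <SecretAccessKey>
aws_session_token = <SessionToken>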
You have to specify a Resource for the bucket via "arn:aws:s3:::bucketname" or "arn:aws:s3:::bucketname*". The latter is preferred since it allows manipulations on the bucket's objects too. Notice there is no slash!
Listing objects is an operation on Bucket. Therefore, action "s3:ListBucket" is required.
Adding an object to the Bucket is an operation on Object. Therefore, action "s3:PutObject" is needed.
Certainly, you may want to add other actions as you require.
{
  "Version": "version_id",
  "Statement": [
    {
      "Sid": "some_id",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname*"
      ]
    }
  ]
}
Okay, for those who have done all of the above and are still getting this issue, try this:
The bucket policy should look like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBucketSync",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:PutObjectAcl",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::BUCKET_NAME",
        "arn:aws:s3:::BUCKET_NAME/*"
      ]
    }
  ]
}
Then save, and ensure your instance or Lightsail is connected to the right profile via aws configure.
First:
Try adding --recursive at the end. Any luck? No? Okay, try the one below.
Second:
Okay, now try this instead: --no-sign-request
So it should look like this:
sudo aws s3 sync s3://BUCKET_NAME /yourpath/path/folder --no-sign-request
You're welcome 😂
I was thinking the error was due to the "s3:ListObjects" action, but I had to add the action "s3:ListBucket" to solve the "AccessDenied for ListObjects for S3 bucket" issue.
I'm adding an answer in the same direction as the accepted answer, but with small (important) differences and more details.
Consider the configuration below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<Bucket-Name>"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::<Bucket-Name>/*"]
    }
  ]
}
The policy grants programmatic write-delete access and is separated into two parts:
The ListBucket action provides permissions on the bucket level, while the PutObject/DeleteObject actions require permissions on the objects inside the bucket.
The first Resource element specifies arn:aws:s3:::<Bucket-Name> for the ListBucket action so that applications can list all objects in the bucket.
The second Resource element specifies arn:aws:s3:::<Bucket-Name>/* for the PutObject and DeleteObject actions so that applications can write or delete any objects in the bucket.
The separation into two different ARNs is important for security reasons, in order to specify bucket-level and object-level fine-grained permissions.
Notice that if I had specified just GetObject in the second block, then under programmatic access I would receive an error like:
Upload failed: <file-name> to <bucket-name>:<path-in-bucket> An error occurred (AccessDenied) when calling the PutObject operation: Access Denied.
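Concretely, with this policy attached you would expect behaviour like the following (report.csv is just a made-up example object):
# Allowed: PutObject and DeleteObject are granted on the object ARN
aws s3 cp ./report.csv s3://<Bucket-Name>/report.csv
aws s3 rm s3://<Bucket-Name>/report.csv
# Denied: GetObject is deliberately not part of this policy
aws s3 cp s3://<Bucket-Name>/report.csv .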
To allow permissions on an S3 bucket, go to the Permissions tab of the bucket and, in the bucket policy, change the action to the following, which will allow all actions to be performed:
"Action": "*"
Here's the policy that worked for me.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-name"
      ]
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-name/*"
      ]
    }
  ]
}
I had a similar problem while trying to sync an entire S3 bucket locally. For me, MFA (multi-factor authentication) was enforced on my account, and it is required when issuing commands via the AWS CLI.
So the solution for me was to provide MFA credentials using a profile (see the MFA documentation) when using any AWS CLI commands.
Ran into a similar issue; for me the problem was that I had different AWS keys set in my bash_profile.
I answered a similar question here: https://stackoverflow.com/a/57317494/11871462
If you have conflicting AWS keys in your bash_profile, AWS CLI defaults to these instead.
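A quick way to check which credentials the CLI is actually picking up, and from where, is:
# The Type/Location columns show whether each value came from env vars,
# the shared credentials file, or an instance profile/role.
aws configure list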
I had this issue. My requirement was to allow a user to write to a specific path:
{
  "Sid": "raspiiotallowspecificBucket",
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::<bucketname>/scripts",
    "arn:aws:s3:::<bucketname>/scripts/*"
  ]
},
and the problem was solved with this change:
{
  "Sid": "raspiiotallowspecificBucket",
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::<bucketname>",
    "arn:aws:s3:::<bucketname>/*"
  ]
},
I like this better than any of the previous answers. It shows how to use the YAML format and lets you use a variable to specify the bucket.
- PolicyName: "AllowIncomingBucket"
  PolicyDocument:
    Version: "2012-10-17"
    Statement:
      - Effect: "Allow"
        Action: "s3:*"
        Resource:
          - !Ref S3BucketArn
          - !Join ["/", [!Ref S3BucketArn, '*']]
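For completeness, a sketch of how S3BucketArn might be supplied, assuming it is a plain template parameter:
Parameters:
  S3BucketArn:
    Type: String
    Description: ARN of the incoming bucket, e.g. arn:aws:s3:::my-incoming-bucket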
My issue was having set
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
again, under the aws-sync GitHub Action, as environment variables. They were coming from my GitHub settings. Though in my case I had assumed a role in the previous step, which would set new keys into those same-named environment variables. So I was overwriting the good assumed keys with the bad GitHub basic keys.
Please take care of this if you're assuming roles.
I had the same issue. I had to provide the right resource and action: the resource is your bucket's ARN, and the action is your desired permission. Also, please ensure you have the right user ARN. Below is my solution.
{
  "Version": "2012-10-17",
  "Id": "Policy1546414123454",
  "Statement": [
    {
      "Sid": "Stmt1546414471931",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789101:root"
      },
      "Action": ["s3:ListBucket", "s3:ListBucketVersions"],
      "Resource": "arn:aws:s3:::bucket-name"
    }
  ]
}
If you are suddenly getting this error on a new version of MinIO for buckets that used to work, the reason is that the bucket access policy defaults were changed from version 2021 to 2022. In version 2022, by default, all buckets (both newly created and existing ones) have their Access Policy set to Private; it is not sufficient to provide server credentials to access them, and you will still get errors such as these (here: returned to the Python MinIO client):
S3Error: S3 operation failed; code: AccessDenied, message: Access Denied., resource: /dicts, request_id: 16FCBE6EC0E70439, host_id: 61486e5a-20be-42fc-bd5b-7f2093494367, bucket_name: dicts
To roll back to the previous security settings in version 2022, the quickest method is to change the bucket's Access Policy back to Public in the MinIO console (or via the mc client).
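If you prefer the command line, the mc client can change the bucket policy too; the subcommand was renamed between releases, so treat these as version-dependent sketches:
mc policy set public myminio/dicts      # older mc releases
mc anonymous set public myminio/dicts   # newer mc releases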
This is not best practice, but it will unblock you.
Make sure the user executing the command has the following policies attached under its permissions:
A. PowerUserAccess
B. AmazonS3FullAccess
I faced the same error, "An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied".
Note:
A bucket policy is not a good solution here; creating a new custom policy in the IAM service and attaching it to the respective user is safer.
Solved by the procedure below:
IAM Service > Policies > Create Policy > select JSON >
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:ListBucketVersions"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket name>"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucketMultipartUploads",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload",
        "s3:DeleteObjectVersion",
        "s3:GetObjectVersion",
        "s3:PutObjectAcl",
        "s3:ListBucketVersions"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>/*"
      ]
    }
  ]
}
Select Next: Tags > Next: Review, enter a name, and create the policy.
Select the newly created policy.
Select the 'Policy usage' tab in the edit window of the newly created policy.
Select "Attach", select the user from the list, and save.
Now try it in the console with the bucket name to list the objects; without the bucket name it throws the same error:
$ aws s3 ls
A little late, but it might be helpful for someone. First things first: I am managing all access to S3 buckets using bucket policies.
My bucket policy to allow access to folder1 for IAM user user1:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/user1"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::s3-bucket/folder1",
        "arn:aws:s3:::s3-bucket/folder1/*"
      ]
    }
  ]
}
Now when user1 tries to perform a list operation, they get an error. It may look weird, as the user has S3 full access from the bucket policy.
aws s3 ls s3://s3-bucket/folder1
aws s3 ls s3://s3-bucket/folder1/
aws s3 ls s3://s3-bucket/folder1/*
An error occurred (AccessDenied) when calling the ListObjectsV2
operation: Access Denied
Now let's take a look at the AWS documentation for ListBucket
Grants permission to list some or all of the objects in an Amazon S3
bucket (up to 1000)
To test that, try to create a bucket policy that only provides the ListBucket permission for folder1, as above. Observe that you will get an error.
Conclusion
The ListBucket operation is only permitted on buckets, not on prefixes; hence, if we want to provide the list operation, it must be granted at the bucket level. Of course, this will allow the user to list objects inside all the other folders present in the bucket.
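If you do want user1 to be able to list only folder1, the usual workaround (as in the earlier answers here) is to grant ListBucket on the bucket ARN but constrain it with the s3:prefix condition key. A sketch to add to the bucket policy:
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::123456789012:user/user1"
  },
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::s3-bucket",
  "Condition": {
    "StringLike": {
      "s3:prefix": ["folder1/", "folder1/*"]
    }
  }
}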

Some "Condition"s disallowed in AWS S3 Bucket Policies?

I'd like to define an S3 bucket-level policy that restricts access to specific users (e.g. using Cognito ids). Why can't a Condition block like the following be used in a Bucket policy?
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": [
            "us-east-1:12345678-abcd-abcd-abcd-123456790ab",
            "us-east-1:98765432-dcba-dcba-dcba-123456790ab"
          ]
        }
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket-name"
    }
  ]
}
When I try, I get the errror:
Policy has an invalid condition key - cognito-identity.amazonaws.com:aud
But this block works fine (minus the Principal) in a user-level policy. I'm trying to understand what the rules are, so I don't have to blindly attempt to make changes and "see what works".
To be clear: I can refer to ${cognito-identity.amazonaws.com:sub} from a bucket policy (e.g. inside of a resource URL), but I can't use it as a condition key (as in the example above).
So: are the rules for bucket policies different from other policies? Is this documented somewhere? I'd especially love a pointer to an authoritative source here, because I suspect I may be missing some important documentation.
It seems like you can't add a Cognito-id-based condition in a bucket-level policy; however, this can be achieved by adding a policy to your identity pool's auth role.
Assume that you want every user in an identity pool to be able to read the contents of a bucket, but only specific users to write. This can be achieved with the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>/*"
      ],
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:sub": [
            "<cognito id1>",
            "<cognito id2>"
          ]
        }
      }
    }
  ]
}

S3 bucket policy: In a Public Bucket, make a sub-folder private

I have a bucket filled with contents that need to be mostly public. However, there is one folder (aka "prefix") that should only be accessible by an authenticated IAM user.
{
  "Statement": [
    {
      "Sid": "AllowIAMUser",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket/prefix1/prefix2/private/*",
      "Principal": {
        "AWS": [
          "arn:aws:iam::123456789012:user/bobbydroptables"
        ]
      }
    },
    {
      "Sid": "AllowAccessToAllExceptPrivate",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket/*",
      "Condition": {
        "StringNotLike": {
          "s3:prefix": "prefix1/prefix2/private/"
        }
      },
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}
When I try to save this policy I get the following error messages from AWS:
Conditions do not apply to combination of actions and resources in statement -
Condition "s3:prefix"
and action "s3:GetObject"
in statement "AllowAccessToAllExceptPrivate"
Obviously this error applies specifically to the second statement. Is it not possible to use the "s3:prefix" condition with the "s3:GetObject" action?
Is it possible to take one portion of a public bucket and make it accessible only to authenticated users?
In case it matters, this bucket will only be accessed read-only via api.
This question is similar to Amazon S3 bucket policy for public restrictions only, except I am trying to solve the problem by taking a different approach.
After much digging through AWS documentation, as well as many trial and error permutations in the policy editor, I think I have found an adequate solution.
Apparently, AWS provides an option called NotResource (not found in the Policy Generator currently).
The NotResource element lets you grant or deny access to all but a few
of your resources, by allowing you to specify only those resources to
which your policy should not be applied.
With this, I do not even need to play around with conditions. This means that the following statement will work in a bucket policy:
{
  "Sid": "AllowAccessToAllExceptPrivate",
  "Action": [
    "s3:GetObject",
    "s3:GetObjectVersion"
  ],
  "Effect": "Allow",
  "NotResource": [
    "arn:aws:s3:::bucket/prefix1/prefix2/private/*",
    "arn:aws:s3:::bucket/prefix1/prefix2/private"
  ],
  "Principal": {
    "AWS": [
      "*"
    ]
  }
}