S3 bucket policy: In a Public Bucket, make a sub-folder private

I have a bucket filled with contents that need to be mostly public. However, there is one folder (aka "prefix") that should only be accessible by an authenticated IAM user.
{
  "Statement": [
    {
      "Sid": "AllowIAMUser",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket/prefix1/prefix2/private/*",
      "Principal": {
        "AWS": [
          "arn:aws:iam::123456789012:user/bobbydroptables"
        ]
      }
    },
    {
      "Sid": "AllowAccessToAllExceptPrivate",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket/*",
      "Condition": {
        "StringNotLike": {
          "s3:prefix": "prefix1/prefix2/private/"
        }
      },
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}
When I try to save this policy I get the following error message from AWS:
Conditions do not apply to combination of actions and resources in statement - Condition "s3:prefix" and action "s3:GetObject" in statement "AllowAccessToAllExceptPrivate"
Obviously this error applies specifically to the second statement. Is it not possible to use the "s3:prefix" condition with the "s3:GetObject" action?
Is it possible to take one portion of a public bucket and make it accessible only to authenticated users?
In case it matters, this bucket will only be accessed read-only via the API.
This question is similar to Amazon S3 bucket policy for public restrictions only, except I am trying to solve the problem by taking a different approach.

After much digging through AWS documentation, as well as many trial and error permutations in the policy editor, I think I have found an adequate solution. (As for the error itself: the s3:prefix condition key only applies to list requests such as s3:ListBucket, so it cannot be combined with s3:GetObject.)
Apparently, AWS provides an option called NotResource (not found in the Policy Generator currently).
The NotResource element lets you grant or deny access to all but a few
of your resources, by allowing you to specify only those resources to
which your policy should not be applied.
With this, I do not even need to play around with conditions. This means that the following statement will work in a bucket policy:
{
  "Sid": "AllowAccessToAllExceptPrivate",
  "Action": [
    "s3:GetObject",
    "s3:GetObjectVersion"
  ],
  "Effect": "Allow",
  "NotResource": [
    "arn:aws:s3:::bucket/prefix1/prefix2/private/*",
    "arn:aws:s3:::bucket/prefix1/prefix2/private"
  ],
  "Principal": {
    "AWS": [
      "*"
    ]
  }
}
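For completeness, here is a sketch of the full bucket policy with both pieces together: the question's IAM-user statement for the private prefix plus the NotResource statement above (same example bucket, prefix, and user ARN; adjust them to your own):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowIAMUser",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/bobbydroptables"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket/prefix1/prefix2/private/*"
    },
    {
      "Sid": "AllowAccessToAllExceptPrivate",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "NotResource": [
        "arn:aws:s3:::bucket/prefix1/prefix2/private/*",
        "arn:aws:s3:::bucket/prefix1/prefix2/private"
      ]
    }
  ]
}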

Related

How can I allow everyone in my org to access an object uploaded by someone else?

I maintain an S3 bucket for my org that is not publicly accessible but is readable by everyone in the org. There's also a folder, sandbox, that everyone in the org can write to. I set up my S3 permissions as:
{
  "Version": "2012-10-17",
  "Id": "...",
  "Statement": [
    {
      "Sid": "...",
      "Effect": "Allow",
      "Principal": {
        "AWS": ["arn:aws:iam::1234:root"]
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectTagging"
      ],
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Sid": "...",
      "Effect": "Allow",
      "Principal": {
        "AWS": ["arn:aws:iam::1234:root"]
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectTagging",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::my-bucket/sandbox/*"
    }
  ]
}
Here, 1234 is a user in my org; I have enumerated all my users here. The first Statement allows read-only access while the second gives write to only the sandbox directory. These both work, but I've found that when people in my org write to it, no one has access to read those files except the individual who wrote it.
I instructed users to copy files there using --acl bucket-owner-full-control; for example:
aws s3 cp --acl bucket-owner-full-control my_file.tsv s3://my-bucket/sandbox/
But this doesn't fix the permissions. What's the right way to make it so I effectively own all uploaded files, or at least so that everyone can read files that anyone else uploads?
This is probably unrelated, but I also tried including a condition for bucket owner:
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
I put this Condition as a sibling value to Action, Resource, etc., but when I try to save the permissions, I get the error:
Conditions do not apply to combination of actions and resources in statement
I assume you asked this on the premise that users from different AWS accounts are uploading the objects.
Reading the description of the bucket-owner-full-control canned ACL on the Controlling ownership of uploaded objects using S3 Object Ownership page, you can see that it applies at the time objects are uploaded.
So, create another Statement containing only s3:PutObject and attach the condition to that statement.
The policy would be as follows:
{
  "Version": "2012-10-17",
  "Id": "...",
  "Statement": [
    {
      "Sid": "...",
      "Effect": "Allow",
      "Principal": {
        "AWS": ["arn:aws:iam::1234:root"]
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectTagging"
      ],
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Sid": "...",
      "Effect": "Allow",
      "Principal": {
        "AWS": ["arn:aws:iam::1234:root"]
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectTagging",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::my-bucket/sandbox/*"
    },
    {
      "Sid": "...",
      "Effect": "Allow",
      "Principal": {
        "AWS": ["arn:aws:iam::1234:root"]
      },
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::my-bucket/sandbox/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    }
  ]
}
Take a look at this documentation as well.
For instance, the request syntax of GetObject does not accept x-amz-acl, but PutObject does.
By the way, the answer above only addresses the condition error; it does not by itself grant access to all users from a different account. To grant permission to another AWS account, see:
How to provide cross-account access to objects that are in S3 buckets?
Bucket owner granting cross-account bucket permissions

AWS S3 bucket policy public read, restricted write

I'm trying to set up a policy on my brand new, nice bucket called files.mybucket.com that states:
Everyone can read my objects
Only some IAM users can do everything else.
Here's what I've tried so far:
"Statement": [
{
"Sid": "getAll",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::files.mybucket.com",
"arn:aws:s3:::files.mybucket.com/*"
]
},
{
"Sid": "writeSome",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::111111111111:user/John",
"arn:aws:iam::111111111111:user/Dave"
]
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::files.mybucket.com",
"arn:aws:s3:::files.mybucket.com/*"
]
}
]
The above seems to have no effect: even if I remove the "John" principal from the statement, I can still upload things as that user through the console and Cloudberry Explorer.
"Statement": [
{
"Sid": "getAll",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::files.mybucket.com",
"arn:aws:s3:::files.mybucket.com/*"
]
},
{
"Sid": "writeSome",
"Effect": "Deny",
"NotPrincipal": {
"AWS": [
"arn:aws:iam::111111111111:user/John",
"arn:aws:iam::111111111111:user/Dave"
]
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::files.mybucket.com",
"arn:aws:s3:::files.mybucket.com/*"
]
}
]
This one looks promising as it keeps users from writing (if I remove John, I can't write with it anymore) BUT it also blocks get requests from unauthenticated people (and I want them to be able to see the content).
So, the question: how do I allow people to get my files AND keep everybody except John and Dave from writing to the bucket?
It's driving me nuts. I appreciate the help.
As a general rule:
Rules that apply to everybody should go in the Bucket Policy
Rules that apply only to specific users should be applied to the IAM Users, or to an IAM Group of users
Therefore:
Create a bucket policy to grant Read access to everyone (the first part of your policy, above)
For every user who should be allowed to access the bucket, add a policy to their IAM User (see the sketch below)
This avoids the need for Deny policies, which always cause problems.
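As an illustration only (this policy is not from the original answer; the bucket name matches the question's), a per-user IAM policy granting write access could look something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowFullAccessToMyBucket",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::files.mybucket.com",
        "arn:aws:s3:::files.mybucket.com/*"
      ]
    }
  ]
}
Note that an IAM user policy has no Principal element; it applies to whichever user or group it is attached to.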

Error connecting to AWS Transfer (SFTP service) via Filezilla [duplicate]

I am having trouble connecting to AWS Transfer for SFTP. I successfully set up a server and tried to connect using WinSCP.
I set up an IAM role with a trust relationship as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "transfer.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
I paired this with a scope-down policy as described in the documentation, using a home bucket of homebucket and a home directory of homedir:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListHomeDir",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketAcl"
      ],
      "Resource": "arn:aws:s3:::${transfer:HomeBucket}"
    },
    {
      "Sid": "AWSTransferRequirements",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:DeleteObjectVersion",
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:GetObjectAcl",
        "s3:GetObject",
        "s3:GetObjectVersionAcl",
        "s3:GetObjectTagging",
        "s3:PutObjectTagging",
        "s3:PutObjectAcl",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
    }
  ]
}
I was able to authenticate using an ssh key, but when it came to actually reading/writing files I just kept getting opaque errors like "Error looking up homedir" and failed "readdir". This all smells very much like problems with my IAM policy but I haven't been able to figure it out.
We had similar issues getting the scope down policy to work with our users on AWS Transfer. The solution that worked for us, was creating two different kinds of policies.
A policy to attach to the role, which has general rights on the whole bucket.
A scope-down policy to apply to the user, which makes use of the transfer service variables like ${transfer:UserName}.
We concluded that maybe only the extra attached policy is able to resolve the transfer service variables. We are not sure if this is correct or the best solution, because it opens up a risk: if you forget to attach the scope-down policy, you effectively create a kind of "admin" user. So I'd be glad to get input on how to lock this down a little bit further.
Here are our two policies we use:
General policy to attach to IAM role
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::my-s3-bucket"
      ]
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObjectVersion",
        "s3:DeleteObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::my-s3-bucket/*"
    }
  ]
}
Scope down policy to apply to transfer user
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::${transfer:HomeBucket}"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "${transfer:UserName}/*",
            "${transfer:UserName}"
          ]
        }
      }
    },
    {
      "Sid": "AWSTransferRequirements",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObjectVersion",
        "s3:DeleteObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
    }
  ]
}
I had a similar problem but with a different error behavior. I managed to log in successfully, but then the connection was almost immediately closed.
I did the following things:
Make sure that the IAM role that allows bucket access also contains KMS access if your bucket is encrypted (a sketch of such a statement follows this list).
Make sure that the trust relationship is also part of that role.
Make sure that the server itself has a CloudWatch role, also with a trust relationship to transfer.amazonaws.com! This was the solution for me. I don't get why this is needed, but without the trust relationship in the CloudWatch role, my connection gets closed.
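As a rough sketch (not from the original answer; the key ARN is a placeholder), the extra KMS statement in the role's policy could look something like this when the bucket uses SSE-KMS with a customer managed key:
{
  "Sid": "AllowUseOfBucketKmsKey",
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt",
    "kms:GenerateDataKey"
  ],
  "Resource": "arn:aws:kms:us-east-1:123456789012:key/your-key-id"
}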
I hope that helps.
The S3 policy for the IAM user role can look like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<your bucket>"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::<your bucket>/*"
      ]
    }
  ]
}
Finally, also add a Trust Relationship as shown above for the user IAM role.
If you can connect to your SFTP server but then get a readdir error when trying to list contents, e.g. with the command "ls", that's a sign that you have no bucket permission. If your connection gets closed right away, it is likely a trust relationship issue or a KMS issue.
According to the somewhat cryptic documentation #limfinity was correct. To scope down access you need a general Role/Policy combination granting access to see the bucket. This role gets applied to the SFTP user you create. In addition you need a custom policy which grants CRUD rights only to the user's bucket. The custom policy is also applied to the SFTP user.
From page 24 of this doc (AWS Transfer for SFTP User Guide, "Creating a Scope-Down Policy")... https://docs.aws.amazon.com/transfer/latest/userguide/sftp.ug.pdf#page=28&zoom=100,0,776
To create a scope-down policy, use the following policy variables in your IAM policy:
• ${transfer:HomeBucket}
• ${transfer:HomeDirectory}
• ${transfer:HomeFolder}
• ${transfer:UserName}
Note
You can't use the variables listed preceding as policy variables in an IAM role definition. You create these variables in an IAM policy and supply them directly when setting up your user. Also, you can't use the ${aws:Username} variable in this scope-down policy. This variable refers to an IAM user name and not the user name required by AWS SFTP.
Can't comment, sorry if I'm posting incorrectly.
Careful with AWS's default policy!
This solution did work for me in that I was able to use scope-down policies for SFTP users as expected. However, there's a catch:
{
  "Sid": "AWSTransferRequirements",
  "Effect": "Allow",
  "Action": [
    "s3:ListAllMyBuckets",
    "s3:GetBucketLocation"
  ],
  "Resource": "*"
},
This section of the policy will enable SFTP users using this policy to change directory to root and list all of your account's buckets. They won't have access to read or write, but they can discover stuff which is probably unnecessary. I can confirm that changing the above to:
{
  "Sid": "AWSTransferRequirements",
  "Effect": "Allow",
  "Action": [
    "s3:ListAllMyBuckets",
    "s3:GetBucketLocation"
  ],
  "Resource": "${transfer:HomeBucket}"
},
... appears to prevent SFTP users from listing buckets. However, they can still cd to directories if they happen to know buckets that exist; again, they don't have read/write access, but this is still unnecessary access. I'm probably missing something to prevent this in my policy.
Proper jailing appears to be a backlog topic: https://forums.aws.amazon.com/thread.jspa?threadID=297509&tstart=0
We were using the updated version of SFTP with username and password and had to spend quite some time figuring out all the details. For the new version, the scope-down policy needs to be specified as a 'Policy' key within Secrets Manager. This is very important for the whole flow to work.
We have documented the full setup on our site here - https://coderise.io/sftp-on-aws-with-username-and-password/
Hope that helps!
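As a rough illustration of that point (the key names other than Policy are assumptions based on the custom identity provider setup, so verify them against the linked guide), the per-user secret could look something like this, with the scope-down policy stored as a JSON string under the Policy key:
{
  "Password": "example-password",
  "Role": "arn:aws:iam::123456789012:role/sftp-access-role",
  "HomeDirectory": "/my-s3-bucket/example-user",
  "Policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"AllowListingOfUserFolder\",\"Effect\":\"Allow\",\"Action\":\"s3:ListBucket\",\"Resource\":\"arn:aws:s3:::${transfer:HomeBucket}\"}]}"
}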

AWS S3 Bucket policy public. How to make object private?

I have a bucket with GetObject available to everyone on the full bucket (*). I want to make a few objects private (through object-level ACL operations), i.e. only the bucket owner should have read access to the object. I've gone through all available documentation but couldn't find any possible way. Can anyone confirm whether this is possible or not?
You cannot use S3 Object ACLs because ACLs do not have a DENY.
You can modify your S3 policy to specify objects and deny access to individual items.
Example S3 Policy (notice that this policy forbids access to everyone for GetObject for two files):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*"
    },
    {
      "Sid": "DenyPublicReadGetObject",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::mybucket/block_this_file",
        "arn:aws:s3:::mybucket/block_this_file_too"
      ]
    }
  ]
}
If you want to add a condition so that certain users can still access the objects, add a condition after the Resource section like this. This condition will allow IAM users john.wayne and bob.hope to still call GetObject.
"Resource": [
"arn:aws:s3:::mybucket/block_this_file",
"arn:aws:s3:::mybucket/block_this_file_too"
],
"Condition": {
"StringNotEquals": {
"aws:username": [
"john.wayne",
"bob.hope"
]
}
}
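Put together, the Deny statement with that condition would look like this (same example files and user names as above):
{
  "Sid": "DenyPublicReadGetObject",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": [
    "arn:aws:s3:::mybucket/block_this_file",
    "arn:aws:s3:::mybucket/block_this_file_too"
  ],
  "Condition": {
    "StringNotEquals": {
      "aws:username": [
        "john.wayne",
        "bob.hope"
      ]
    }
  }
}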

Renaming an object in the AWS S3 console, with an IAM user

I created an IAM user with S3 full access (s3:*) on a specific ARN (only one bucket). Upload and delete work, but I am not able to rename or copy/paste.
Here is my IAM policy.
{
  "Sid": "Stmt1490288788",
  "Effect": "Allow",
  "Action": [
    "s3:*"
  ],
  "Resource": [
    "arn:aws:s3:::bucket-name/*"
  ]
}
I don't know if this is the correct solution, but giving ListAllMyBuckets permission worked for me.
I just added another statement along with the previous one.
{
  "Sid": "Stmt1490288788",
  "Effect": "Allow",
  "Action": [
    "s3:*"
  ],
  "Resource": [
    "arn:aws:s3:::bucket-name/*"
  ]
},
{
  "Sid": "Stmt1490289746001",
  "Effect": "Allow",
  "Action": [
    "s3:ListAllMyBuckets"
  ],
  "Resource": [
    "arn:aws:s3:::*"
  ]
}
So this policy lists all the buckets, but only allows put/delete/get access to the specific bucket. I'm still wondering what the relation is between rename/copy and the list-all-buckets permission.
While there isn't actually rename functionality in S3 itself, some interfaces may try and implement it by using S3 PUT object copy and DELETE actions behind the scenes. Their implementation may require other bucket-level permissions to complete, which is why it may be failing with your policy. Try this:
{
  "Sid": "Stmt1490288788",
  "Effect": "Allow",
  "Action": [
    "s3:*"
  ],
  "Resource": [
    "arn:aws:s3:::bucket-name",
    "arn:aws:s3:::bucket-name/*"
  ]
}
The difference is that this grants permissions for actions performed on the bucket itself (the first resource declared), as well as on the objects in the bucket (the second resource declared).