AWS: Restricting IAM User to Specific Folder in S3 Bucket - amazon-web-services

So I've been trying to define a policy to restrict a group of IAM users to a particular folder in an S3 bucket with no success. I've riffed off the policy outlined in this blog post. http://blogs.aws.amazon.com/security/post/Tx1P2T3LFXXCNB5/Writing-IAM-policies-Grant-access-to-user-specific-folders-in-an-Amazon-S3-bucke
Specifically I'm using the following:
{
"Version":"2012-10-17",
"Statement": [
{
"Sid": "AllowUserToSeeBucketListInTheConsole",
"Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::*"]
},
{
"Sid": "AllowRootAndHomeListingOfCompanyBucket",
"Action": ["s3:ListBucket"],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::mybucket"],
"Condition":{"StringEquals":{"s3:delimiter":["/"]}}
},
{
"Sid": "AllowListingOfUserFolder",
"Action": ["s3:ListBucket"],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::mybucket"],
"Condition":{"StringLike":{"s3:prefix":["myfolder"]}}
},
{
"Sid": "AllowAllS3ActionsInUserFolder",
"Effect": "Allow",
"Action": ["s3:*"],
"Resource": ["arn:aws:s3:::mybucket/myfolder/*"]
}
]
}
Unfortunately, this policy for some reason allows users to navigate not only into the specified folder but also into other folders present in the same bucket. How do I restrict users so that they can only navigate into the specified folder?

I hope this documentation will help you out; the steps are broken down and quite simple to follow:
http://docs.aws.amazon.com/AmazonS3/latest/dev/walkthrough1.html
You can also use policy variables.
They let you specify placeholders in a policy. When the policy is evaluated, the policy variables are replaced with values that come from the request itself.
For example, ${aws:username}:
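A minimal sketch of a statement using that variable, assuming the bucket and per-user folder layout from the question (mybucket containing one folder named after each IAM user), might look like:
{
  "Sid": "AllowAllS3ActionsInUserFolder",
  "Effect": "Allow",
  "Action": ["s3:*"],
  "Resource": ["arn:aws:s3:::mybucket/${aws:username}/*"]
}
With that in place, each user only gets object-level access under the folder that matches their own user name.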
Furthermore, you can also check out this Stack Overflow question (if it seems relevant):
Preventing a user from even knowing about other users (folders) on AWS S3

I've answered this before, but I'll answer again from here. It's best to create a user, then add them to a group, then assign the group read/write access to the bucket. This is a typical example of how to write the policy:
{
"Statement": [
{
"Sid": "sidgoeshere",
"Action": [
"s3:DeleteObject",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:ListBucket",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::s3bucket",
"arn:aws:s3:::s3bucket/*"
]
}
]
}
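If the group should only be able to reach a single folder rather than the whole bucket (which is what the original question asks for), a hedged variant along the lines of the linked blog post, assuming the bucket mybucket and folder myfolder from the question, might look like:
{
  "Statement": [
    {
      "Sid": "ListOnlyTheFolder",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::mybucket"],
      "Condition": {"StringLike": {"s3:prefix": ["myfolder/*"]}}
    },
    {
      "Sid": "ReadWriteInsideTheFolder",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::mybucket/myfolder/*"]
    }
  ]
}
Here the s3:prefix condition limits listings to the folder, and everything outside it is denied simply because it is never allowed.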

Related

AWS S3 bucket policy public read, restricted write

I'm trying to set up a policy for my brand new, nice bucket called files.mybucket.com that states:
Everyone can read my objects
Only some IAM users can do everything else.
Here's what I've tried so far:
"Statement": [
{
"Sid": "getAll",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::files.mybucket.com",
"arn:aws:s3:::files.mybucket.com/*"
]
},
{
"Sid": "writeSome",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::111111111111:user/John",
"arn:aws:iam::111111111111:user/Dave"
]
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::files.mybucket.com",
"arn:aws:s3:::files.mybucket.com/*"
]
}
]
The above seems to have no effect: even if I remove the "John" principal from the statement, I can still upload things with that user through the console and CloudBerry Explorer.
"Statement": [
{
"Sid": "getAll",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::files.mybucket.com",
"arn:aws:s3:::files.mybucket.com/*"
]
},
{
"Sid": "writeSome",
"Effect": "Deny",
"NotPrincipal": {
"AWS": [
"arn:aws:iam::111111111111:user/John",
"arn:aws:iam::111111111111:user/Dave"
]
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::files.mybucket.com",
"arn:aws:s3:::files.mybucket.com/*"
]
}
]
This one looks promising as it keeps users from writing (if I remove John, I can't write with it anymore), BUT it also blocks GET requests from unauthenticated people (and I want them to be able to see the content).
So, the question: how do I allow people to get my files AND keep everybody except John and Dave from writing to the bucket?
It's driving me nuts. I appreciate the help.
As a general rule:
Rules that apply to everybody should go in the Bucket Policy
Rules that apply only to specific users should be applied to the IAM Users, or to an IAM Group of users
Therefore:
Create a bucket policy to grant Read access to everyone (the first part of your policy, above)
For every user who should be allowed to access the bucket, add a policy to their IAM User
This avoids the need for Deny policies, which always cause problems.
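As a hedged sketch of the second point, an identity-based policy attached to John and Dave (or to a group containing them) needs no Principal element and, reusing the bucket name from the question, could look roughly like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "WriteToFilesBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::files.mybucket.com",
        "arn:aws:s3:::files.mybucket.com/*"
      ]
    }
  ]
}
Combined with a bucket policy that only grants public s3:GetObject, everyone can read while only these users can write.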

How to restrict user to view only specific buckets

I have a set of users: user1 and user2. Ideally they should have access to read and write in their own buckets.
I want to give them console access so they can log in and upload data to S3 through drag and drop.
So I want to restrict the ability of one user to view the buckets of other users.
I am using the following IAM policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation",
"s3:ListBucketMultipartUploads"
],
"Resource": "arn:aws:s3:::user1_bucket",
"Condition": {}
},
{
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutObjectVersionAcl"
],
"Resource": "arn:aws:s3:::user1_bucket/*",
"Condition": {}
}
]
}
But it does not show any bucket for the user. All the user can see is Access Denied.
I tried to add principal in the policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {"AWS": "arn:aws:iam::9xxxxxxxxxx:user/user1"},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::user1_bucket"
]
}
]
}
This gives an error.
This policy contains the following error: Has prohibited field Principal. For more information about the IAM policy grammar, see AWS IAM Policies.
What can I do?
There are two ways you can do this.
Bucket policies: you control who can access the bucket by attaching a policy directly to the bucket. An example for your case:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "bucketAccess",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::AWS-account-ID:user/user-name"
},
"Action": [
"s3:GetObject",
"s3:AbortMultipartUpload",
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:GetObjectAcl",
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutObjectVersionAcl"
],
"Resource": [
"arn:aws:s3:::examplebucket/*"
]
}
]
}
Source: Bucket Policy Examples - Amazon Simple Storage Service
Or you can give access through user policies (IAM policies attached to the user), which I think is better. You almost had it, but you messed up at the end. Your policy should look something like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets"
],
"Resource": "arn:aws:s3:::"
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::examplebucket"
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::examplebucket/"
}
]
}
Source: User Policy Examples - Amazon Simple Storage Service
I hope this helps.
It appears that your requirement is:
Users should be able to use the Amazon S3 management console to access (view, upload, download) their own S3 bucket
They should not be able to view the names of other buckets, nor access those buckets
With listing buckets
The first requirement can be met with a policy like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AccessThisBucket",
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
},
{
"Sid": "ListAllBucketForS3Console",
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "arn:aws:s3:::*"
}
]
}
This allows them to access their specific bucket, but it also allows them to list all bucket names. This is a requirement of the Amazon S3 management console, since the first thing it does is list all of the buckets.
Without listing buckets
However, since you do not want to give these users the ability to list the names of all buckets, you could use a policy like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
}
]
}
This gives them full access to their own bucket, but they cannot list the names of other buckets.
To use this in the management console, they will need to jump directly to their bucket using a URL like this:
https://console.aws.amazon.com/s3/buckets/my-bucket
This will then allow them to access and use their bucket.
They will also be able to use AWS Command-Line Interface (CLI) commands like:
aws s3 ls s3://my-bucket
aws s3 cp foo.txt s3://my-bucket/foo.txt
Bottom line: To use the management console without permission to list all buckets, they will need to use a URL that jumps straight to their bucket.

Error connecting to AWS Transfer (SFTP service) via Filezilla [duplicate]

I am having trouble connecting to AWS Transfer for SFTP. I successfully set up a server and tried to connect using WinSCP.
I set up an IAM role with trust relationships like follows:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "transfer.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
I paired this with a scope-down policy as described in the documentation, using a home bucket of homebucket and a home directory of homedir:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListHomeDir",
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketAcl"
],
"Resource": "arn:aws:s3:::${transfer:HomeBucket}"
},
{
"Sid": "AWSTransferRequirements",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"s3:GetBucketLocation"
],
"Resource": "*"
},
{
"Sid": "HomeDirObjectAccess",
"Effect": "Allow",
"Action": [
"s3:DeleteObjectVersion",
"s3:DeleteObject",
"s3:PutObject",
"s3:GetObjectAcl",
"s3:GetObject",
"s3:GetObjectVersionAcl",
"s3:GetObjectTagging",
"s3:PutObjectTagging",
"s3:PutObjectAcl",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
}
]
}
I was able to authenticate using an ssh key, but when it came to actually reading/writing files I just kept getting opaque errors like "Error looking up homedir" and failed "readdir". This all smells very much like problems with my IAM policy but I haven't been able to figure it out.
We had similar issues getting the scope-down policy to work with our users on AWS Transfer. The solution that worked for us was creating two different kinds of policies:
A policy to attach to the role, which has general rights on the whole bucket.
A scope-down policy to apply to the user, which makes use of the transfer service variables like ${transfer:UserName}.
We concluded that maybe only the extra attached policy is able to resolve the transfer service variables. We are not sure whether this is correct or the best solution, because it opens up a risk: if you forget to attach the scope-down policy, you end up creating a kind of "admin" user. So I'd be glad to get input on how to lock this down further.
Here are our two policies we use:
General policy to attach to IAM role
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowListingOfUserFolder",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::my-s3-bucket"
]
},
{
"Sid": "HomeDirObjectAccess",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObjectVersion",
"s3:DeleteObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3::: my-s3-bucket/*"
}
]
}
Scope down policy to apply to transfer user
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowListingOfUserFolder",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::${transfer:HomeBucket}"
],
"Condition": {
"StringLike": {
"s3:prefix": [
"${transfer:UserName}/*",
"${transfer:UserName}"
]
}
}
},
{
"Sid": "AWSTransferRequirements",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"s3:GetBucketLocation"
],
"Resource": "*"
},
{
"Sid": "HomeDirObjectAccess",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObjectVersion",
"s3:DeleteObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
}
]
}
I had a similar problem but with a different error behavior. I managed to log in successfully, but then the connection was almost immediately closed.
I did the following things:
Make sure that the IAM role that allows bucket access also contains KMS access if your bucket is encrypted (a sketch of such a statement is shown after this list).
Make sure that the trust relationship is also part of that role.
Make sure that the server itself has a CloudWatch role, also with a trust relationship to transfer.amazonaws.com! This was the solution for me. I don't get why this is needed, but without the trust relationship in the CloudWatch role, my connection gets closed.
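For the first point, a minimal sketch of the extra KMS statement, assuming the bucket is encrypted with a customer-managed key (the key ARN below is a placeholder), could be added to the role's policy:
{
  "Sid": "AllowUseOfBucketKey",
  "Effect": "Allow",
  "Action": [
    "kms:Decrypt",
    "kms:GenerateDataKey"
  ],
  "Resource": "arn:aws:kms:us-east-1:111111111111:key/REPLACE-WITH-KEY-ID"
}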
I hope that helps.
The bucket policy for the IAM user role can look like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::<your bucket>"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::<your bucket>/*"
]
}
]
}
Finally, also add a Trust Relationship as shown above for the user IAM role.
If you can connect to your SFTP server but then get a readdir error when trying to list contents, e.g. with the command "ls", that's a sign that you have no bucket permission. If your connection gets closed right away, it seems to be a trust relationship issue or a KMS issue.
According to the somewhat cryptic documentation, #limfinity was correct. To scope down access, you need a general role/policy combination granting access to see the bucket. This role gets applied to the SFTP user you create. In addition, you need a custom policy which grants CRUD rights only to the user's bucket. The custom policy is also applied to the SFTP user.
From page 24 of this doc... https://docs.aws.amazon.com/transfer/latest/userguide/sftp.ug.pdf#page=28&zoom=100,0,776
To create a scope-down policy, use the following policy variables in your IAM policy:
• ${transfer:HomeBucket}
• ${transfer:HomeDirectory}
• ${transfer:HomeFolder}
• ${transfer:UserName}
Note: You can't use the variables listed preceding as policy variables in an IAM role definition. You create these variables in an IAM policy and supply them directly when setting up your user. Also, you can't use the ${aws:Username} variable in this scope-down policy. This variable refers to an IAM user name and not the user name required by AWS SFTP.
Can't comment, sorry if I'm posting incorrectly.
Careful with AWS's default policy!
This solution did work for me in that I was able to use scope-down policies for SFTP users as expected. However, there's a catch:
{
"Sid": "AWSTransferRequirements",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"s3:GetBucketLocation"
],
"Resource": "*"
},
This section of the policy will enable SFTP users using this policy to change directory to root and list all of your account's buckets. They won't have access to read or write, but they can discover bucket names, which is probably unnecessary. I can confirm that changing the above to:
{
"Sid": "AWSTransferRequirements",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"s3:GetBucketLocation"
],
"Resource": "${transfer:HomeBucket}"
},
... appears to prevent SFTP users from listing buckets. However, they can still cd into buckets if they happen to know which ones exist -- again, they don't have read/write, but this is still unnecessary access. I'm probably missing something to prevent this in my policy.
Proper jailing appears to be a backlog topic: https://forums.aws.amazon.com/thread.jspa?threadID=297509&tstart=0
We were using the updated version of SFTP with username and password authentication and had to spend quite some time figuring out all the details. For the new version, the scope-down policy needs to be specified as a 'Policy' key within Secrets Manager. This is very important for the whole flow to work.
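As a rough sketch of what that secret can look like, assuming the key names used by the AWS sample custom identity provider (Password, Role, HomeDirectory, Policy); every value below is a placeholder, and the Policy value is the scope-down policy JSON serialized into a single string:
{
  "Password": "replace-with-the-user-password",
  "Role": "arn:aws:iam::111111111111:role/sftp-access-role",
  "HomeDirectory": "/my-s3-bucket/username",
  "Policy": "{ ...the scope-down policy JSON from above, escaped as one string... }"
}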
We have documented the full setup on our site here - https://coderise.io/sftp-on-aws-with-username-and-password/
Hope that helps!

S3 policy - public write but only authenticated read

What I am trying to do is to let (anonymous) users share files to a specified bucket. However, it should not be possible for them to READ the files which are already there (and, for all I care, not even the ones they submitted themselves). The only account which should be able to list/get objects from the bucket should be the bucket owner.
Here is what I got so far:
{
"Version": "2012-10-17",
"Id": "PutOnlyPolicy",
"Statement": [
{
"Sid": "Allow_PublicPut",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": "arn:aws:s3:::myputbucket/*"
},
{
"Sid": "Deny_Read",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::myputbucket/*"
},
{
"Sid": "Allow_BucketOwnerRead",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::myAWSAccountID:root"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::myputbucket/*"
}
]
}
The above policy enables me to write files to the bucket (e.g. via the Android app S3Anywhere), but I can't GET the objects, not even with my authenticated account.
Do you have any hints on how I could accomplish this? Thanks!
Anonymous users are not able to read a bucket's contents by default, so you should have only these lines in your policy:
{
"Version": "2012-10-17",
"Id": "PutOnlyPolicy",
"Statement": [
{
"Sid": "Allow_PublicPut",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": "arn:aws:s3:::myputbucket/*"
}
]
}
The deny statement in your policy takes precedence over everything else. The default is to deny everything that isn't specifically allowed, so you should be able to just remove the deny statement and all will work the way you want.
The policy looks good. I guess the problem is with the Principal; you can look at how it is used in the documentation: http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-bucket-user-policy-specifying-principal-intro.html. You should probably use the account number without hyphens.
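For reference, a hedged sketch of the two equivalent principal forms that page describes, with a placeholder account ID:
"Principal": { "AWS": "arn:aws:iam::111111111111:root" }
or the short form using just the account number without hyphens:
"Principal": { "AWS": "111111111111" }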

How to match S3 bucket name suffix against usename prefix

I want to set up a policy in the AWS IAM service to allow users whose names follow a specific pattern to connect to S3 buckets whose names follow a specific pattern.
My users look like archiver_clientname, my buckets look like clientname_archive. So far I have read through this and this
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "arn:aws:s3:::*"
"Condition": {"StringLike": {"${s3:prefix}": ["${aws:username:suffix}/*"]}}
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::EXAMPLE-BUCKET-NAME"
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::EXAMPLE-BUCKET-NAME/*"
}
]
}
${s3:prefix} and ${aws:username:suffix} are variables which I made up; well, ${s3:prefix} actually exists, but I'm not sure it does what I expect it to do. It would be great to be able to match my bucket names against my user names without renaming them, because otherwise the names will not be meaningful and will just be client names, and I have other buckets with different purposes. It would be OK to swap the user prefix and suffix or to use a different separator, though. And it looks like the policy tool has enough flexibility to solve my task; I just can't find the right documentation somehow.
I also do not want to setup a new policy for each user.
I will also be happy with an answer that my approach is wrong, with a good explanation of why it is wrong and what I can do instead.
First off, when a user gets the list of buckets, it's not possible to limit the list that is returned to the user. So, don't put a condition on the "s3:ListAllMyBuckets" action. Either they see all the buckets, or they see none.
Next, you cannot "split apart" the username. The IAM policy language isn't that sophisticated. If your username is "archiver_username", then you should be able to match it to a bucket "archiver_username_archive" by using the ${aws:username} variable in the resource:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::${aws:username}_archive"
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::${aws:username}_archive/*"
}
]
}
But the language does not permit you to match a bucket called simply "username_archive" (that is, with the "archiver_" prefix stripped).