Amazon S3 Folder Level Permissions

I am using Amazon S3 to archive my clients' documents within a single bucket, with a series of folders to distinguish each client:
MyBucket/0000001/..
MyBucket/0000002/..
MyBucket/0000003/..
My clients now want a way to independently back up their files to their local machines. I'd like to create a set of permissions at a given folder level so that each client can view/download only the files within their own folder.
I'm looking to do this outside the scope of my application. By this I mean I'd like to create a set of permissions in the S3 browser and point my clients to some third-party app to access their area. Does anybody know if this is possible? I'm opposed to writing a module to automate this, as at present there simply isn't a big enough demand.

You can use IAM policies in conjunction with bucket policies to manage such access.
Each individual client would need their own IAM user, and you would set up policies to limit object access to only those accounts.
Here is the AWS documentation:
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingIAMPolicies.html
I would particularly point out Example 1 in that document, which does exactly what you want.
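For illustration, here is a minimal sketch in the style of that example, scoped to the client-folder layout from the question (the 0000001 prefix stands in for one client; each client's IAM user would get the same policy with their own folder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfClientFolder",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::MyBucket",
      "Condition": {
        "StringLike": {
          "s3:prefix": "0000001/*"
        }
      }
    },
    {
      "Sid": "AllowDownloadOfClientObjects",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::MyBucket/0000001/*"
    }
  ]
}
With credentials for such a user, any third-party S3 client can connect, but it will only be able to list and download objects under MyBucket/0000001/.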

Refer to the following policy to restrict a user to listing and uploading objects only in specific folders. The policy allows listing only the objects of folder1 and folder2, allows putting objects into folder1, and denies uploads to any other folder of the bucket. The explicit Deny statements ensure those limits hold even if another attached policy grants broader access.
The policy does the following:
1. Lists all the folders of the bucket
2. Lists objects and folders of the allowed folders
3. Uploads files only to the allowed folder (folder1)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToSeeBucketListInTheConsole",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Sid": "AllowListingOfFolder1And2",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bucketname",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "",
            "folder1/*",
            "folder2/*"
          ]
        }
      }
    },
    {
      "Sid": "DenyListingOfOtherFolders",
      "Effect": "Deny",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bucketname",
      "Condition": {
        "StringNotLike": {
          "s3:prefix": [
            "",
            "folder1/*",
            "folder2/*"
          ]
        }
      }
    },
    {
      "Sid": "AllowPutObjectToFolder1Only",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucketname/folder1/*"
    },
    {
      "Sid": "DenyPutObjectOutsideFolder1",
      "Effect": "Deny",
      "Action": "s3:PutObject",
      "NotResource": "arn:aws:s3:::bucketname/folder1/*"
    }
  ]
}

Related

Amazon Cognito and S3: Read/write permissions for specific folder only

I'm trying to allow my AWS Cognito users to be able to upload their files in their own folder inside my S3 bucket. They should also be able to read them back. But no one should be able to upload files to any other folder, nor should they be able to read anything from any other folder.
Therefore, I'm creating each user's folder using their Cognito username and putting their files therein. But I just found that usernames are unique only within the User Pool in which they are created, so I want to include both the pool id and username in the Resource path.
I have found the variable for username (${aws:username}), but haven't been able to locate anything for pool id (${USER_POOL_ID_VARIABLE} placeholder below). Can someone help me with this and also check if the policy I have created below is okay for my purpose?
(Alternately, I'm okay if we could find some variable that is globally unique and could be used instead of creating two levels hierarchy):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::my.bucket/${USER_POOL_ID_VARIABLE}/${aws:username}/*"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::my.bucket/${USER_POOL_ID_VARIABLE}/${aws:username}/*"
      ]
    }
  ]
}
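Not a complete answer, but one note that may help: ${aws:username} is only populated for IAM users, not for federated Cognito identities. If your users obtain their AWS credentials through a Cognito identity pool, the policy variable ${cognito-identity.amazonaws.com:sub} resolves to the federated identity ID, which is unique across your account and could serve as the single globally unique path component you are after. A sketch under that assumption:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PerIdentityFolder",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my.bucket/${cognito-identity.amazonaws.com:sub}/*"
    }
  ]
}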

AWS S3 policy restrict folder delete

I have an S3 bucket named "uploads" with this structure:
uploads
 |_ products
 |_ users
 |_ categories
 |_ ...
I want to restrict users from deleting folders (products, users, ...), but they should still be able to delete objects inside those folders. My policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:PutObject",
        "s3:PutObjectTagging",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:GetObjectTagging"
      ],
      "Resource": [
        "arn:aws:s3:::uploads",
        "arn:aws:s3:::uploads/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::uploads/banners/*",
        "arn:aws:s3:::uploads/brands/*",
        "arn:aws:s3:::uploads/categories/*",
        "arn:aws:s3:::uploads/products/*",
        "arn:aws:s3:::uploads/users/*"
      ]
    }
  ]
}
But when I tested it, the user was still able to delete a folder. Where did I go wrong?
Folders do not exist in Amazon S3.
If an object is created (e.g. banners/sale.jpg), the banners directory magically appears. Then, if that object is deleted, the directory magically disappears, because directories never actually existed in the first place.
So, you need not worry about people deleting a directory, because it will automatically reappear when an object is created in that path.
If the Create Folder button is used in the S3 management console, a zero-length object is created whose key is the directory path (e.g. banners/). This forces the directory to 'appear' even though it doesn't really exist.
From your description, it sounds like the user is deleting these zero-length objects, which match the Resource paths in your Allow statements, so nothing in your current policy prevents it.
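That said, if protecting the zero-length markers matters to you, an explicit Deny on their exact keys may be worth testing, since an explicit Deny overrides any Allow. A sketch of an additional statement for the policy above (untested; it assumes the markers were created by the console and therefore have trailing-slash keys such as banners/):
{
  "Effect": "Deny",
  "Action": "s3:DeleteObject",
  "Resource": [
    "arn:aws:s3:::uploads/banners/",
    "arn:aws:s3:::uploads/brands/",
    "arn:aws:s3:::uploads/categories/",
    "arn:aws:s3:::uploads/products/",
    "arn:aws:s3:::uploads/users/"
  ]
}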

Amazon S3 - How to automatically make the new content of a folder public

I'm using Amazon S3 to store images in a bucket, and CloudFront to get and post those pictures. My problem is that every time I upload a new image, it is automatically private (trying to get it results in a 403 Forbidden). To be able to fetch it and show it on my website, I have to make my folder public again (after having already done so before). Do you have any idea why this happens?
My bucket is public and here are my IAM permissions:
// First strategy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:HeadBucket",
        "s3:GetBucketAcl",
        "s3:HeadObject",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:GetObjectVersion",
        "s3:GetObjectVersionAcl",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:PutObjectVersionAcl",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}
// Second strategy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Action": [
        "acm:ListCertificates",
        "cloudfront:*",
        "iam:ListServerCertificates",
        "waf:ListWebACLs",
        "waf:GetWebACL",
        "wafv2:ListWebACLs",
        "wafv2:GetWebACL"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
"my-bucket-name" is obviously replaced by the actual name of the bucket.
Thank you.
Objects in Amazon S3 are private by default.
If you wish to make objects publicly accessible (to people without IAM credentials), you have two options:
Option 1: Bucket Policy
You can create a Bucket Policy that makes content publicly accessible. For example, this policy grants GetObject access to anyone:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket/*"
      ]
    }
  ]
}
See: Bucket Policy Examples - Amazon Simple Storage Service
You will also need to turn off Block Public Access on the S3 bucket to permit a Bucket policy.
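For reference, Block Public Access is a four-flag setting; this is the JSON shape accepted by the s3api put-public-access-block command, shown here as a sketch with all four flags disabled (strictly, only the two policy-related flags need to be off for a public bucket policy to take effect):
{
  "BlockPublicAcls": false,
  "IgnorePublicAcls": false,
  "BlockPublicPolicy": false,
  "RestrictPublicBuckets": false
}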
Option 2: Make the object public
You could alternatively make specific objects in Amazon S3 public.
When uploading an object, specify an Access Control List (ACL) of public-read. You will also need to turn off Block Public Access to permit the object-level permissions.
When you say "I have to make my folder public again", I suspect that you are going into the Amazon S3 console, selecting a folder and then using the Make public option. This is the equivalent of setting the ACL. You can avoid this extra step by specifying the ACL while uploading the object or using the Bucket Policy.
Make a Lambda trigger filtered on the prefix of your subfolder. Then, on each object-created event, update the ACL of the new object to make it public.
See: https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
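As a sketch, the notification configuration for such a trigger could look like the following, in the JSON shape accepted by s3api put-bucket-notification-configuration (the function ARN and the images/ prefix are hypothetical). The Lambda function itself would then call PutObjectAcl with a public-read ACL on the newly created key:
{
  "LambdaFunctionConfigurations": [
    {
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:make-object-public",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [
            {
              "Name": "prefix",
              "Value": "images/"
            }
          ]
        }
      }
    }
  ]
}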

AWS SFTP must allow users to download files with specific tags in S3

I am trying to do a simple task wherein I must be able to download a file from S3 via SFTP only if it has a specific tag. I am trying to achieve this by adding a condition to the SFTP IAM policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::bucket_name"
      ]
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::bucket_name/*",
      "Condition": {
        "StringEquals": {
          "s3:ExistingObjectTag/key": "value"
        }
      }
    }
  ]
}
But when I use this policy with the SFTP role, WinSCP throws a permission-denied error when I try to log in to the server. I am able to log in only if I remove the Condition part of the policy. If anyone knows how to do this, please guide me. Thanks in advance.
It is not possible to restrict GetObject based on object tags.
IAM checks whether the user is entitled to make the request before it looks at the objects in Amazon S3 themselves. The tags are not available during this process.
Putting the objects in folders and restricting access by folder should meet your requirements.
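For AWS Transfer Family, the usual way to do that is a folder per user combined with a session (scope-down) policy built on the transfer request variables, so one policy serves every user. A sketch, assuming each user's home directory is set to /bucket_name/username:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::${transfer:HomeBucket}",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "${transfer:HomeFolder}/*",
            "${transfer:HomeFolder}"
          ]
        }
      }
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
    }
  ]
}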

Deny permission to specific user of specific folder inside S3 bucket

In AWS S3, I have one bucket named "Environments", under which I have 4 folders named "sandbox", "staging", "prod1" and "prod2" respectively, and the permission of the whole bucket is "public".
Now I want to prevent one AWS user named "developer" from writing anything into the "prod1" and "prod2" folders, while still allowing it to view them.
Kindly help me out with this.
Create the policy below and attach it to the developer user:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::Environments",
        "arn:aws:s3:::Environments/sandbox/*",
        "arn:aws:s3:::Environments/staging/*"
      ]
    }
  ]
}
This policy gives the developer user full permissions on the sandbox and staging folders, but grants nothing on the other folders, so the developer cannot write to prod1 or prod2.
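If the developer should also be able to read prod1 and prod2 through IAM, rather than relying on the bucket being public, you could append statements like the following sketch: an Allow for reads plus an explicit Deny for writes, so writes stay blocked even if a broader Allow is attached later:
{
  "Effect": "Allow",
  "Action": "s3:GetObject",
  "Resource": [
    "arn:aws:s3:::Environments/prod1/*",
    "arn:aws:s3:::Environments/prod2/*"
  ]
},
{
  "Effect": "Deny",
  "Action": [
    "s3:PutObject",
    "s3:DeleteObject"
  ],
  "Resource": [
    "arn:aws:s3:::Environments/prod1/*",
    "arn:aws:s3:::Environments/prod2/*"
  ]
}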