AWS S3 policy restrict folder delete - amazon-web-services

I have a S3 bucket named "uploads" with this structure:
uploads/
 |_ products
 |_ users
 |_ categories
 |_ ...
I want to restrict users from deleting the folders themselves (products, users, ...), but they should still be able to delete objects inside those folders. My policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:PutObject",
"s3:PutObjectTagging",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:GetObjectTagging"
],
"Resource": [
"arn:aws:s3:::uploads",
"arn:aws:s3:::uploads/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::uploads/banners/*",
"arn:aws:s3:::uploads/brands/*",
"arn:aws:s3:::uploads/categories/*",
"arn:aws:s3:::uploads/products/*",
"arn:aws:s3:::uploads/users/*"
]
}
]
}
But when I tested it, the user was still able to delete a folder. Where did I go wrong?

Folders do not exist in Amazon S3.
If an object is created (e.g. banners/sale.jpg), the banners directory will magically appear. If that object is then deleted, the directory will magically disappear, because directories do not actually exist in Amazon S3.
So, you need not worry about people deleting a directory because it will automatically reappear when an object is created in that path.
If the Create Folder button is used in the S3 management console, a zero-length object is created with the same name as the directory. This forces the directory to 'appear' (even though it doesn't exist).
From your description, it sounds like the user has the ability to delete that zero-length object, since its key matches the Resource patterns you have specified. If so, then there is no way to prevent this from happening purely from a policy.
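To illustrate, here is a small boto3 (Python) sketch, assuming the uploads bucket and the banners/ prefix from your policy; the "folder" is only a key prefix, and the zero-length marker can simply be recreated if somebody deletes it:
import boto3

s3 = boto3.client("s3")
bucket = "uploads"  # bucket name taken from the question

# A "folder" is just a key prefix; it appears as soon as an object exists under it.
s3.put_object(Bucket=bucket, Key="banners/sale.jpg", Body=b"example")

# Delimiter-based listing is what the console uses to show "folders".
resp = s3.list_objects_v2(Bucket=bucket, Delimiter="/")
print([p["Prefix"] for p in resp.get("CommonPrefixes", [])])  # includes 'banners/'

# Deleting the last object under the prefix makes the "folder" vanish again.
s3.delete_object(Bucket=bucket, Key="banners/sale.jpg")

# If the console's zero-length folder marker was deleted, it can be recreated:
s3.put_object(Bucket=bucket, Key="banners/")  # same effect as the Create Folder button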

Related

Amazon Cognito and S3: Read/write permissions for specific folder only

I'm trying to allow my AWS Cognito users to be able to upload their files in their own folder inside my S3 bucket. They should also be able to read them back. But no one should be able to upload files to any other folder, nor should they be able to read anything from any other folder.
Therefore, I'm creating each user's folder using their Cognito username and putting their files therein. But I just found that usernames are unique only within the User Pool in which they are created, so I want to include both the pool id and username in the Resource path.
I have found the variable for username (${aws:username}), but haven't been able to locate anything for pool id (${USER_POOL_ID_VARIABLE} placeholder below). Can someone help me with this and also check if the policy I have created below is okay for my purpose?
(Alternatively, I'm okay with using some variable that is globally unique instead of creating a two-level hierarchy):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::my.bucket/${USER_POOL_ID_VARIABLE}/${aws:username}/*"
]
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::my.bucket/${USER_POOL_ID_VARIABLE}/${aws:username}/*"
]
}
]
}

Amazon S3 - How to automatically make the new content of a folder public

I'm using Amazon S3 to store images in a bucket, and CloudFront to get and post those pictures. My problem is that every time I upload a new image, it's automatically private (trying to get it results in a 403 Forbidden). To be able to get it and show it on my website, I have to make my folder public again (even though I've already done so). Do you have any idea why this behaviour occurs?
My bucket is public and here are my IAM permissions:
// First strategy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:HeadBucket",
"s3:GetBucketAcl",
"s3:HeadObject",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutObjectVersionAcl",
"s3:DeleteObject",
"s3:DeleteObjectVersion"
],
"Resource": "arn:aws:s3:::my-bucket-name/*"
}
]
}
// Second strategy
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:ListAllMyBuckets"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::*"
},
{
"Action": [
"acm:ListCertificates",
"cloudfront:*",
"iam:ListServerCertificates",
"waf:ListWebACLs",
"waf:GetWebACL",
"wafv2:ListWebACLs",
"wafv2:GetWebACL"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
"my-bucket-name" is obviously replaced by the actual name of the bucket.
Thank you.
Objects in Amazon S3 are private by default.
If you wish to make objects publicly accessible (to people without IAM credentials), you have two options:
Option 1: Bucket Policy
You can create a Bucket Policy that makes content publicly accessible. For example, this policy grants GetObject access to anyone:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::examplebucket/*"
]
}
]
}
See: Bucket Policy Examples - Amazon Simple Storage Service
You will also need to turn off Block Public Access on the S3 bucket to permit a Bucket policy.
Option 2: Make the object public
You could alternatively make specific objects in Amazon S3 public.
When uploading an object, specify an Access Control List (ACL) of public-read. You will also need to turn off Block Public Access to permit the object-level permissions.
When you say "I have to make my folder public again", I suspect that you are going into the Amazon S3 console, selecting a folder and then using the Make public option. This is the equivalent of setting the ACL. You can avoid this extra step by specifying the ACL while uploading the object or using the Bucket Policy.
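If the upload happens through the SDK, the ACL can be set in the same call. A minimal boto3 (Python) sketch, assuming the bucket is my-bucket-name, Block Public Access is already turned off, and the file/key names are just placeholders:
import boto3

s3 = boto3.client("s3")

# Upload and mark the object public in one step (requires s3:PutObjectAcl
# permission and Block Public Access to be disabled on the bucket).
s3.upload_file(
    Filename="image.jpg",            # hypothetical local file
    Bucket="my-bucket-name",
    Key="images/image.jpg",          # hypothetical key under the public folder
    ExtraArgs={"ACL": "public-read"},
)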
Alternatively, create a Lambda trigger based on the prefix of your subfolder. Then, during the object-created event, update the permissions of the object to make it public.
See: https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
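A rough sketch of such a function in Python, assuming the S3 event notification is configured for s3:ObjectCreated:* with the subfolder prefix, and that the function's role is allowed s3:PutObjectAcl on the bucket (lambda_handler is the usual Python Lambda default; everything else comes from the event):
import urllib.parse

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Invoked by an s3:ObjectCreated:* notification filtered on the subfolder prefix.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Make the newly created object publicly readable.
        s3.put_object_acl(Bucket=bucket, Key=key, ACL="public-read")
    return {"status": "ok"}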

AWS S3 Policy Deny ListObjectVersions

I am using Wasabi. I have an s3 bucket with versioning enabled. I want to be able to list all the objects that are not deleted. My bucket contains the following objects:
a.txt
b.txt
c.txt
c.txt has been deleted.
I am accessing my S3 bucket with an IAM user that has the following policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::MyBucket"
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:PutObjectAcl"
],
"Resource": "arn:aws:s3:::MyBucket/*"
}
]
}
If I list the objects (aws s3 ls s3://my-bucket), it returns all of the objects, including the deleted one (its latest version).
To prevent deleted objects from being listed, I tried to deny s3:ListBucketVersions on the root bucket and all objects, but it did not work.
How can I write the policy so that it prevents the user from listing previous versions / deleted objects?
Edit: I am using Wasabi. I just noticed that this behaviour is inconsistent with AWS S3 behaviour, so I guess it's on their side now.
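For reference, the behaviour I would expect (and what AWS S3 itself does) is that a plain listing skips objects whose latest version is a delete marker, while a version listing returns everything. A quick boto3 (Python) check, using the bucket from the CLI example above; the output comments describe AWS S3, not Wasabi:
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # bucket from the aws s3 ls example above

# Current objects only: on AWS S3, c.txt is skipped because its latest
# version is a delete marker.
current = s3.list_objects_v2(Bucket=bucket)
print([obj["Key"] for obj in current.get("Contents", [])])      # ['a.txt', 'b.txt']

# Version listing: every version plus the delete marker for c.txt.
versions = s3.list_object_versions(Bucket=bucket)
print([v["Key"] for v in versions.get("Versions", [])])
print([d["Key"] for d in versions.get("DeleteMarkers", [])])    # ['c.txt']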

Need particular folder level access in S3

I have created a user named test in AWS IAM, and a bucket named AWS-test. Under this bucket there is a folder called 'newfol'. I want to give the test user permission on the newfol folder only: the test user should only be able to upload files into the newfol folder, and should not be able to see any other bucket, or any other folder under AWS-test.
I have written the JSON below for that. But with it, I am able to enter the AWS-test bucket, see all of the folders there, and upload files to every folder under AWS-test.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::AWS-test"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::AWS-test/newfol/*"
}
]
}
The first statement in your JSON grants permission to list all objects in the AWS-test bucket. If you just want the user to be able to upload to the newfol folder, delete that first statement and it should work.
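In other words, only the second statement is needed. For concreteness, here is a boto3 (Python) sketch that attaches just that statement as an inline policy; the bucket and folder names come from the question, while the policy name is hypothetical:
import json

import boto3

iam = boto3.client("iam")

# Object-level access to the newfol/ prefix only; with no ListBucket statement,
# the user cannot browse the rest of the AWS-test bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::AWS-test/newfol/*",
        }
    ],
}

iam.put_user_policy(
    UserName="test",              # IAM user from the question
    PolicyName="newfol-only",     # hypothetical policy name
    PolicyDocument=json.dumps(policy),
)
With only these object-level permissions the user can still upload and download by exact key, but cannot list anything in the bucket.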

Amazon S3 Folder Level Permissions

I am using Amazon S3 to archive my clients' documents within a single bucket, using one folder per client to distinguish them, like so:
MyBucket/0000001/..
MyBucket/0000002/..
MyBucket/0000003/..
My clients are now looking for a way to independently backup their files to their local machine. I'd like to create a set of permissions at a given folder level to view/download those files only within a specific folder.
I'm looking to do this outside the scope of my application; by this I mean I'd like to create a set of permissions in the S3 browser and tell my clients to use some 3rd-party app to link to their area. Does anybody know if this is possible? I'm opposed to writing a module to automate this, as at present there simply isn't a big enough demand.
You can use IAM policies in conjunction with bucket policies to manage such access.
Each individual client would need their own IAM profile, and you would set up policies to limit object access to only those accounts.
Here is the AWS documentation:
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingIAMPolicies.html
I would particularly point out Example 1 in that document, which does exactly what you want.
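A sketch of what one per-client policy could look like, in the spirit of Example 1 on that page, written here as a boto3 (Python) snippet; the bucket and folder naming follow the question, while the IAM user and policy names are hypothetical:
import json

import boto3

iam = boto3.client("iam")
client_folder = "0000001"  # one folder per client, as in the question

# Let this client list and download only what is under their own folder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::MyBucket",
            "Condition": {"StringLike": {"s3:prefix": [client_folder + "/*"]}},
        },
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::MyBucket/" + client_folder + "/*",
        },
    ],
}

iam.put_user_policy(
    UserName="client-" + client_folder,   # hypothetical per-client IAM user
    PolicyName="client-folder-readonly",  # hypothetical policy name
    PolicyDocument=json.dumps(policy),
)
Each client would then get access keys for their own IAM user and point a 3rd-party S3 client at their folder only.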
Please refer to the following policy to restrict a user to uploading or listing objects only in specific folders. I have created a policy that allows listing only the objects of folder1 and folder2, allows putting objects into folder1, and denies uploads to the other folders of the bucket.
The policy does the following:
1. Allows the user to see the list of buckets in the console
2. Allows listing of objects and folders only under the allowed folders
3. Allows uploading files only to folder1
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowUserToSeeBucketListInTheConsole",
"Action": [
"s3:ListAllMyBuckets",
"s3:GetBucketLocation"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::*"
]
},
{
"Sid": "AllowListingOfFolder1And2",
"Action": [
"s3:*"
],
"Effect": "Deny",
"Resource": [
"arn:aws:s3:::bucketname"
],
"Condition": {
"StringNotLike": {
"s3:prefix": [
"folder1/*",
"folder2/*"
]
},
"StringLike": {
"s3:prefix": "*"
}
}
},
{
"Sid": "Allowputobjecttofolder1only",
"Effect": "Deny",
"Action": "s3:PutObject",
"NotResource": "arn:aws:s3:::bucketname/folder1/*"
}
]
}