S3 Bucket Policy for anonymous uploads - amazon-web-services

I've set up an S3 bucket to allow anonymous uploads. The goal is to allow uploading but not downloading. The problem I've found is that not only can I not block downloads of these files, but I don't own them and cannot delete, copy, or manipulate them in any way. The only way I can get rid of them is to delete the bucket.
Here is the policy:
{
    "Version": "2008-10-17",
    "Id": "policy",
    "Statement": [
        {
            "Sid": "allow-public-put",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::legendstemp/*"
        }
    ]
}
This works, but I no longer have access to these files using either the Amazon Console or programmatically.
The bucket policy also does not apply to these files because the file owner is not the same as the bucket owner. I cannot take ownership of them either.
How can I set up a bucket policy to allow anonymous uploads but not downloads?

I know it's been a while since this was asked, but I came across this while trying to get it to work myself, and wrote about it at some length in:
https://gist.github.com/jareware/d7a817a08e9eae51a7ea
The gist of it (heh!) is that you can allow anonymous upload and disallow the other actions, but you then won't be able to carry out those actions using authenticated requests either, at least as far as I can tell.
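For illustration, here's a minimal sketch of that idea (my own sketch, not the exact policy from the gist), using the legendstemp bucket from the question: allow anonymous PutObject only when the upload grants the bucket owner full control via the canned-ACL condition key, which also avoids the "I don't own the uploaded objects" problem described above.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "allow-public-put-with-owner-control",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::legendstemp/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
With this condition in place, uploads are rejected unless the client sends the x-amz-acl: bucket-owner-full-control header, and every accepted object carries a full-control grant for the bucket owner.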
Hope this helps.

Related

AWS S3 ACL Permissions

My bucket was, and still is, functioning correctly; I'm able to upload images through the API with no issues. However, I was messing around with the user policy, and a change I made to the Resource in my user policy caused some settings to change.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1420751757000",
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": CHANGE MADE HERE
        }
    ]
}
When I try to upload an image through my AWS account (not using the API), the object's ACL public access is now private by default. I tried changing my policy back to what I had before, but no change. I am pretty inexperienced with S3, so if I'm missing crucial info regarding this issue, I can provide it.
If you want all objects to be public, then you should use a Bucket Policy.
This should typically be limited to only allowing people to download (Get) an object if they know the name of the object. You can use this Bucket Policy (which goes on the bucket itself):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET-NAME/*"
            ]
        }
    ]
}
This policy is saying: "Allow anyone to get an object from this bucket, without knowing who they are"
It does not allow listing the bucket, uploading to the bucket, or deleting from the bucket. If you wish to do any of those operations, you need to use your own credentials via an API call or the AWS CLI.
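For example, with the AWS CLI configured with your own credentials (the bucket name is a placeholder):
# List the bucket contents
aws s3 ls s3://YOUR-BUCKET-NAME/
# Upload a file
aws s3 cp local-file.txt s3://YOUR-BUCKET-NAME/some-key.txt
# Delete an object
aws s3 rm s3://YOUR-BUCKET-NAME/some-key.txt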
For examples of bucket policies, see: Bucket policy examples - Amazon Simple Storage Service
Your IAM User should probably have a policy like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET-NAME",
                "arn:aws:s3:::YOUR-BUCKET-NAME/*"
            ]
        }
    ]
}
This is saying: "Allow this IAM User to do anything in Amazon S3 to this bucket and the contents of this bucket"
That will grant you permission to do anything with the bucket (including uploading, downloading and deleting objects, and deleting the bucket).
For examples of IAM Policies, see: User policy examples - Amazon Simple Storage Service

How can I find what external S3 buckets (AWS-owned) are being accessed?

I'm using WorkSpaces Web (not WorkSpaces!) with an S3 VPC endpoint. I would like to be able to restrict S3 access via the S3 endpoint policy to only the buckets required by WorkSpaces Web. I cannot find any documentation with the answers, and AWS support does not seem to know what these buckets are. How can I find out what buckets the service is talking to? I see the requests in VPC flow logs, but that obviously doesn't show what URL or bucket it is trying to talk to. I have tried the same policy used for WorkSpaces (below), but it was not correct (or possibly not enough). I have confirmed that s3:GetObject is the only action needed.
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "Access-to-specific-bucket-only",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::aws-windows-downloads-us-east-1/*",
                "arn:aws:s3:::amazon-ssm-us-east-1/*",
                "arn:aws:s3:::amazon-ssm-packages-us-east-1/*",
                "arn:aws:s3:::us-east-1-birdwatcher-prod/*",
                "arn:aws:s3:::aws-ssm-distributor-file-us-east-1/*",
                "arn:aws:s3:::aws-ssm-document-attachments-us-east-1/*",
                "arn:aws:s3:::patch-baseline-snapshot-us-east-1/*",
                "arn:aws:s3:::amazonlinux.*.amazonaws.com/*",
                "arn:aws:s3:::repo.*.amazonaws.com/*",
                "arn:aws:s3:::packages.*.amazonaws.com/*"
            ]
        }
    ]
}

Able to get some Amazon S3 objects but not all, getting 403 access denied error

Problem Statement:
Account A is uploading files into an Amazon S3 bucket in Account B. I am in Account C and trying to access objects in Account B's S3 bucket. I am able to access some of the files, but not all.
Account A uploads files like this:
// Upload the object, then grant the bucket owner (Account B) full control of it
this.s3Client.putObject(bucketName, key, new FileInputStream(content), metadata);
this.s3Client.setObjectAcl(bucketName, key, CannedAccessControlList.BucketOwnerFullControl);
I am only getting Access Denied for some of the files, not all.
I have checked the bucket policy and the Lambda policy, and they seem correct to me, since I am able to access other objects that were not uploaded by Account A. I suspect the issue is related to object permissions, where the uploader has exclusive access to the object. But as the code above shows, the uploader is setting the object ACL to BucketOwnerFullControl.
All the files are already set to public, and I have also granted access under the object ACLs to Account C's AWS account canonical ID.
Lambda policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::ACCOUNT_B_BUCKET/*"
        }
    ]
}
Bucket Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Example permissions",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::AWS_ACCOUNT_C:root"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::ACCOUNT_B_BUCKET/*"
}
]
}
I have spent a lot of time on this and it is getting frustrating. Please also let me know how I can debug these types of issues faster.
Can you please check the ACLs, or go to the S3 console, select all the files, and make them public? This is just a debugging step.
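If you'd rather not make everything public just to test, a quicker check is to compare the ACLs of a working object and a failing object from the AWS CLI (bucket and key are placeholders; run this with credentials that are allowed to read the ACL):
aws s3api get-object-acl --bucket ACCOUNT_B_BUCKET --key path/to/object
If the Owner shown for the 403 objects differs from the bucket owner and the expected grants are missing, that confirms the cross-account object-ownership problem described in the question.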

Change Permission for Folders in S3 bucket

I want to set "Read object" to "Yes" for the "Everyone" group under Public Access for all the contents of a folder in my S3 bucket. I am able to do this file by file, but I want to do it as a bulk update without affecting other folders.
Is there a console-like way to run a command that does this? If so, how, and what is the command?
Editing the ACL of the bucket may affect all the contents of the bucket; I want to do it only for specific folders.
Can anyone help me with this?
That's a good question. From what I found, I came to the conclusion below.
Create a custom bucket policy if Amazon's predefined policies don't cover your case.
The policy can be as follows:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::Bucket_Name/Folder_Name/contents_1/contents_2/contents_3/*"
            ]
        }
    ]
}
For the Resource part, I would recommend going through the Amazon docs.
Try this out and let me know if it works for you. If you have any doubts, post them in this thread and I'll clear them up. All the best.
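If you do want a bulk object-ACL update rather than a bucket policy, something along these lines should work with the AWS CLI (bucket and folder names are placeholders matching the policy above; treat this as an untested sketch):
# List every key under the folder prefix, then make each object public-read
aws s3api list-objects-v2 --bucket Bucket_Name --prefix Folder_Name/ \
    --query 'Contents[].Key' --output text | tr '\t' '\n' | \
while read -r key; do
    aws s3api put-object-acl --bucket Bucket_Name --key "$key" --acl public-read
done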

Bucket policy denying S3:DeleteBucket and S3:DeleteObject still deletes objects

I've applied the following bucket policy to a my-bucket.myapp.com S3 bucket:
{
    "Version": "2008-10-17",
    "Id": "PreventAccidentalDeletePolicy",
    "Statement": [
        {
            "Sid": "PreventAccidentalDelete",
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:DeleteBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket.myapp.com",
                "arn:aws:s3:::my-bucket.myapp.com/*"
            ]
        }
    ]
}
Then in the console, when I attempt to delete the bucket (right-click, Delete) I get the error I'm expecting: Access Denied.
BUT, and here's the rub: it still deletes all the objects that are in the bucket.
Why does this happen?
And it even happens with a versioned bucket. It just wipes all the versions and the objects are GONE.
The recommended best practice is not to use the root account for anything beyond creating your initial IAM user, so that you can then add restrictions to prevent this kind of incident. Because someone may have a use case that legitimately needs this behavior programmatically, AWS doesn't build limits into the system as "safeguards". It's up to the user to follow best practice and implement the safeguards applicable to their situation.
The exact process for how Amazon authorizes actions on S3 objects is documented here: http://docs.aws.amazon.com/AmazonS3/latest/dev/how-s3-evaluates-access-control.html
Section 2.A of that document describes the behavior applied to the root account in the user context: "If the request is made using root credentials of an AWS account, Amazon S3 skips this step."
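One more thing worth noting for the versioned-bucket case: s3:DeleteObject and s3:DeleteObjectVersion are separate permissions, and permanently removing a specific object version requires the latter, so a deny policy intended to protect a versioned bucket should normally cover both. A sketch, adapting the policy from the question (whether it restrains root credentials is still subject to the evaluation rules quoted above):
{
    "Version": "2012-10-17",
    "Id": "PreventAccidentalDeletePolicy",
    "Statement": [
        {
            "Sid": "PreventAccidentalDelete",
            "Effect": "Deny",
            "Principal": "*",
            "Action": [
                "s3:DeleteBucket",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket.myapp.com",
                "arn:aws:s3:::my-bucket.myapp.com/*"
            ]
        }
    ]
}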