s3 - use CLI to make directory public

Is it possible to use the S3 CLI to change the ACL of existing files, without using sync? I have about 1 TB of data in my bucket, and I'd like to change its ACL without syncing it to my computer.

There are two ways to make Amazon S3 content 'public':
Change the Access Control List (ACL) on an individual object
Create a Bucket Policy on a bucket or path within a bucket
It sounds like you want to make all objects within a given directory public, so you should use an Amazon S3 Bucket Policy, such as this one from Bucket Policy Examples - Amazon Simple Storage Service:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::my-bucket/directory/*"]
    }
  ]
}
You can add this policy via the AWS CLI, but it's much easier to do it in the Amazon S3 management console (Permissions tab).
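If you prefer to stay on the CLI, here is a minimal sketch (assuming the policy above is saved locally as policy.json and the bucket is named my-bucket; both names are placeholders):

aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json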

Related

Amazon S3 - is it possible to restrict users from scanning/listing public folders?

I have an Amazon S3 bucket with several files with randomized file names. The file names are very difficult to guess (for example: id84hBDs4g0a73nb0Ms9.png), and I only want users who know a file name to be able to access it. This means the Amazon S3 bucket is technically public, but I have to prevent users from "scanning" / "listing" the folder. Users should only be able to access files whose names they know, and no others. Is this a possible setting in Amazon S3?
You want to implement Security through obscurity - Wikipedia. Note that this is not complete security, since anyone who knows the names of the objects can access them.
You can simply add a Bucket Policy that makes all objects in the bucket (or a path within the bucket) 'public'. This policy allows access to the objects, but not listing of the bucket.
From Bucket policy examples - Amazon Simple Storage Service:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR-BUCKET/*"
    }
  ]
}
You will first need to disable Block Public Access on the bucket to be able to store this bucket policy.
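As a sketch, this can also be done from the CLI (YOUR-BUCKET is a placeholder; only the two policy-related settings need to be off for a public bucket policy to take effect):

aws s3api put-public-access-block --bucket YOUR-BUCKET --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=false,RestrictPublicBuckets=false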

Need help to deny S3 bucket creation without specific Tags

I want to create an IAM policy that only allows the "Test" user to create an S3 bucket when "Name" and "Bucket" tags are supplied at creation time, but I have not been able to get it working.
I have tried this, but even with the specified condition, the user is not able to create a bucket at all.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Deny",
      "Action": "s3:CreateBucket",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestTag/Name": "Bucket"
        }
      }
    }
  ]
}
Thanks in advance.
The Actions, resources, and condition keys for Amazon S3 - Service Authorization Reference documentation page lists the conditions that can be applied to the CreateBucket command.
Tags are not included in this list. Therefore, it is not possible to restrict the CreateBucket command based on tags being specified with the command.
I believe you can't do this with an IAM policy or an SCP, because by design bucket tagging is a separate API call made after CreateBucket. Your IAM policy would therefore prevent creation of the bucket itself even if the tag were added afterwards. This is by design for S3, in contrast to other AWS services.
The only option, in my opinion, would be a post-deployment action, i.e. an event-driven model where you use S3 events to take actions (delete the bucket, add a public access block, attach a bucket policy, etc.) based on how a bucket was created.
As John Rotenstein pointed out, it is not possible (yet, at least) to explicitly deny this, but there are a few options people use, since this type of tagging policy is a common requirement in many organizations.
Compliance Reports
You can use the AWS Config service to detect S3 bucket resources that are out-of-compliance. You can define your tagging policy for S3 Buckets with a Config rule.
This will not prevent users from creating buckets but it will provide a way to audit your accounts and also be proactively notified.
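As a sketch, the AWS Config managed rule REQUIRED_TAGS can be scoped to S3 buckets (the rule name and tag keys below are placeholders matching the question):

aws configservice put-config-rule --config-rule file://rule.json

where rule.json contains:

{
  "ConfigRuleName": "s3-required-tags",
  "Scope": { "ComplianceResourceTypes": ["AWS::S3::Bucket"] },
  "Source": { "Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS" },
  "InputParameters": "{\"tag1Key\": \"Name\", \"tag2Key\": \"Bucket\"}"
}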
Auto-remediation
If you want a bucket to be auto-deleted or flagged, you can create a lambda function that is triggered by the CloudTrail API for when buckets are created.
The Lambda could be implemented to check the tags and, if the bucket is non-compliant, try and delete the bucket or mark it for deletion via some other process you define.
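For example, an EventBridge rule with a pattern like the following could match the CloudTrail CreateBucket event and invoke that Lambda (a sketch; it assumes CloudTrail management events are being recorded in the region):

{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["CreateBucket"]
  }
}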

s3 images not loading after copying them from one bucket to another in the same account

I have copied images from one bucket to another, and now I am unable to access the images; the error shows an AccessDenied message.
I have verified that the file exists in the bucket, but I am unable to access it. If I remove the same image and upload it again, then it works.
Please suggest what I might have missed while copying the data from one bucket to another.
This link worked with the old bucket but is not accessible with the new bucket:
https://abc.s3.amazonaws.com/uploads/profile/image/16923/client_thumb_CINDERELLA_12
A copy (or any other put action) in S3 is private by default, because that is the default ACL behavior; see here. This is why you get AccessDenied even though the source file has public access enabled. You therefore need to set the ACL to public-read during the copy, in your case (CLI sample):
aws s3 cp s3://mySourceBucket/test.txt s3://myDestinationBucket/test.txt --acl public-read
Note that, according to AWS, if you use --acl, your IAM policy must include the s3:PutObjectAcl action.
--acl (string) Sets the ACL for the object when the command is performed. If you use this parameter you must have the "s3:PutObjectAcl" permission included in the list of actions for your IAM policy. Only accepts values of private, public-read, public-read-write, authenticated-read, aws-exec-read, bucket-owner-read, bucket-owner-full-control and log-delivery-write. See Canned ACL for details
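If the objects have already been copied, the ACL can also be fixed in place, without copying again (the bucket and key here match the sample above):

aws s3api put-object-acl --bucket myDestinationBucket --key test.txt --acl public-read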
You are trying to access the public URL of the S3 object, and that only works if the bucket is publicly accessible. You have to make your new bucket publicly accessible and ALSO set a bucket policy such as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/*"]
    }
  ]
}
where examplebucket is the name of your bucket.

How to create public policies for specific nested folders in an S3 bucket?

I have an S3 bucket, and inside it I have some folders. The objects inside the folders are going to be created dynamically, so in simple terms the layout looks like this:
main_users/someIDNumber/uploaded
someIDNumber is dynamic and it is different every time a user is created.
Now I want to give GetObject permission on all objects inside the "uploaded" folders to all users, but only for a specific referer, which is my website.
I have tried this in my bucket policies but it doesn't work:
arn:aws:s3:::mybucketname/main_users/*/uploaded/*
also this:
arn:aws:s3:::mybucketname/main_users/*/uploaded
But I get access denied on my website side.
How can I do it?
It worked for me. I did the following:
Uploaded a file to: s3://my-bucket/main_users/42/uploaded/foo.txt
Created a stack IAM user with the policy shown below
Ran aws s3 cp s3://my-bucket/main_users/42/uploaded/foo.txt . --profile stack
The file copied successfully
The policy was:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-bucket/main_users/*/uploaded/*"
    }
  ]
}
It failed when I tried to copy a file with:
aws s3 cp s3://my-bucket/main_users/24/something/foo.txt . --profile stack
Please note that if you are trying to list (ls) a folder, you will need a different policy. The above was a test of GetObject, not of listing a bucket.
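As a sketch, a listing statement could look like this (my-bucket is a placeholder; s3:prefix is the condition key that scopes which keys can be listed):

{
  "Effect": "Allow",
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::my-bucket",
  "Condition": {
    "StringLike": { "s3:prefix": "main_users/*/uploaded/*" }
  }
}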
While I put these policies on a specific IAM user, it should work the same in a Bucket Policy. Just make sure that you have edited S3 Block Public Access to enable the content of the bucket to be publicly accessible.
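A minimal Bucket Policy sketch of the same permission (assuming the bucket is named my-bucket and the objects are meant to be public; unlike the IAM policy above, a bucket policy also needs a Principal):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/main_users/*/uploaded/*"
    }
  ]
}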

How to set up ACLs on my S3 bucket for simple, basic operations

I'm using S3 to store my backups. I'm doing that via aws s3 and cron.
At the moment my bucket is public.
I want to set up the ACL in such a way that only I may do CRUD + ListAll operations on it, and that's it. I've read their documentation, but it's too complicated, whereas I need something simple. How can I do this?
my bash script on my VPS server should have access to the S3 bucket via the API; there should probably also be a restriction by IP
I should have access to my bucket via the web console/S3 website from any place and any IP
the bucket shouldn't be accessible to anyone else
You can create an S3 bucket policy so that only your PUBLIC IP address can access your bucket.
Here is an example policy. Change the bucket name to your bucket name. Change the IP address to your public IP address:
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "statement1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::examplebucket",
        "arn:aws:s3:::examplebucket/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "203.0.113.25/32"
        }
      }
    }
  ]
}
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
Select the bucket to which you want to attach the above policy.
Choose Permissions.
Choose Edit Bucket Policy.
Copy the above policy into the Bucket Policy Editor window.
Substitute the values (bucket name, IP address) in the bucket policy.
Choose Save and then Close.
You're using the AWS command line tool (awscli) to sync files to S3. You are presumably supplying credentials to awscli (very few people use the awscli unauthenticated).
So, assuming that you are authenticated, why have you made your S3 bucket public? The correct thing to do here is to ensure that the credentials you are using are associated with an IAM policy that allows you to access the S3 bucket. Then remove the S3 bucket policy.
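A minimal IAM policy sketch for that (examplebucket is a placeholder; it gives the cron script's credentials read/write/delete access to objects, plus listing of the bucket):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::examplebucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::examplebucket"
    }
  ]
}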
If, for some reason, you really do want to be able to access the S3 bucket without authentication (not a good idea, generally) then you can make things somewhat safer by applying a bucket policy allowing that level of access from only your IP address.
Hope this helps.