I'm trying to upload files to an S3 bucket as part of a build. The bucket is configured to serve files as a static site, and the content is protected using a Lambda and CloudFront. When I manually create files in the bucket they are all visible and everything is happy, but the files created by the upload are not available, resulting in an access denied response.
The user that's pushing to the bucket does not belong to the same AWS account, but it has been set up with an ACL that allows it to push to the bucket, and the bucket has a policy that allows that user to push to it.
The command that I'm using is:
aws s3 sync --no-progress --delete docs/_build/html "s3://my-bucket" --acl bucket-owner-full-control
Is there something else that I can try that basically uses the bucket permissions for anything that's created?
According to OP's feedback in the comment section, setting Object Ownership to Bucket owner preferred fixed the issue.
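For reference, the same Object Ownership setting can also be applied programmatically. A minimal boto3 sketch, where the bucket name is a placeholder:

import boto3

s3 = boto3.client('s3')

# Set Object Ownership to "Bucket owner preferred" so that new objects uploaded
# with the bucket-owner-full-control ACL become owned by the bucket owner.
s3.put_bucket_ownership_controls(
    Bucket='my-bucket',  # placeholder bucket name
    OwnershipControls={
        'Rules': [{'ObjectOwnership': 'BucketOwnerPreferred'}]
    }
)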
I had configured my bucket with public access to all my objects, but older files are still not public. If I access my old objects, I get access denied.
I have to manually change them to public; there is no other option for me. Currently I have 5000 objects inside my bucket, and manually changing them all is not feasible.
Is there anything else I should change in my bucket configuration from the default one?
You can use the AWS CLI to achieve that: use aws s3api put-object-acl.
Description:
uses the acl subresource to set the access control list (ACL)
permissions for an object that already exists in a bucket.
Be aware that the put-object-acl command is for a single object only. In case you want to run it recursively, take a look at this thread.
More details here: How can I grant public read access to some objects in my Amazon S3 bucket?
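If you would rather script it than call the CLI once per object, here is a rough boto3 sketch that pages through the bucket and applies a public-read ACL to every key; the bucket name is a placeholder, and you may want to add a Prefix filter:

import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

# Walk every object in the bucket and make it publicly readable.
for page in paginator.paginate(Bucket='my-bucket'):
    for obj in page.get('Contents', []):
        s3.put_object_acl(Bucket='my-bucket', Key=obj['Key'], ACL='public-read')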
How can I change the ACL from public to private on a single folder in AWS S3?
Right now I am using this command:
aws s3 ls --recursive s3://<bucket-name> | cut -d' ' -f5- | awk '{print $NF}' | while read line; do
echo "$line"
aws s3api put-object-acl --acl private --bucket <bucket> --key "$line"
done
But this will change permissions on all the objects in the bucket, not just the single folder.
There are three ways to grant public access to objects in Amazon S3:
Access Control Lists (ACLs) on an individual object — good for providing one-off access to specific objects
A Bucket Policy on the S3 bucket — good for providing access to a whole bucket or a portion of a bucket
A Policy on an IAM User or IAM Role — good for providing access to specific users
Please note that there are no permissions on folders. In fact, folders do not actually exist in Amazon S3 buckets (even though it might look like they do, they don't!).
All data in Amazon S3 is private by default. So, to answer your question of "How do I change from public to private access", the answer is you should reverse whatever you did to make it public. So, if you granted access via an ACL, then you should remove the public access via the ACL. (By the way, it is not recommended to use ACLs to grant public access. Consider using a Bucket Policy or IAM Policy instead.)
If you wish to change ACLs on existing objects, you can copy the objects on top of themselves, but specify a different ACL. You might also need to specify another field, such as metadata, to allow the change to take effect.
For an example, see: Amazon S3 changing object permissions retroactively
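If you want to script the copy-in-place approach for just one folder, a rough boto3 sketch (bucket name and prefix are placeholders) might look like the following; the metadata replacement is only there to force S3 to accept the self-copy:

import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

# Copy each object under the prefix onto itself with a private ACL.
for page in paginator.paginate(Bucket='my-bucket', Prefix='my-folder/'):
    for obj in page.get('Contents', []):
        s3.copy_object(
            Bucket='my-bucket',
            Key=obj['Key'],
            CopySource={'Bucket': 'my-bucket', 'Key': obj['Key']},
            ACL='private',
            MetadataDirective='REPLACE',     # required for an in-place copy
            Metadata={'acl-reset': 'true'},  # arbitrary; the content is unimportant
        )

Note that copy_object only handles objects up to 5 GB in a single call; larger objects need a multipart copy.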
I have an AWS S3 bucket and a directory to which I have allowed other users to put files/objects. The directory is public. Using a 3rd-party tool (namely Alteryx), I am trying to get the objects. From this tool, I connect to AWS using my access key and secret. I can list the files in the directory but am only able to read files that I have created, not the files that others have put into the directory. I am guessing the problem is with ownership of the objects (the objects I own can be read; the files others own cannot be read using my access key). Any suggestions on how I can programmatically change ownership of the files?
My current bucket policy for the directory in question.
You are right. Since you are the bucket owner but not the object owner, you cannot access the objects owned by other users. However, your S3 bucket policy requires full object access to be granted to the bucket owner on the PutObject action, which should have done the trick:
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
My best guess is that it's because of multipart uploads of files in your bucket (uploading large objects in parts) or because objects were moved from one bucket to another.
You can always ask the object owner to update the ACL of the object using the following command:
aws s3api put-object-acl --bucket bucketname --key keyname --acl bucket-owner-full-control
I've linked my resource here.
I am guessing the problem is with ownership of the objects
Yes, your guess is correct.
Any suggestions on how I can programmatically change ownership of the files?
You can do that by listing all the objects in the S3 bucket and setting their ACL to bucket-owner-full-control.
Please specify which programming language you are using. I'm assuming you can use Python; if so, please refer here:
import boto3

s3 = boto3.resource('s3')
obj = s3.Bucket('YOUR_BUCKET').Object('YOUR_OBJECT')
# you can improve the code to list all the objects and iterate over them
obj.Acl().put(ACL='bucket-owner-full-control')
But please note that you can only do this using the credentials of the other user that put the object into your S3 bucket. You cannot change the object ACL using your own credentials.
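Building on the comment in the snippet above, here is a rough sketch of the iterate-and-fix version, run with the other user's credentials; the bucket name, prefix, and keys are placeholders, and it assumes that user is also allowed to list the bucket:

import boto3

# Credentials of the account that originally uploaded the objects (placeholders).
session = boto3.Session(
    aws_access_key_id='OTHER_USER_ACCESS_KEY',
    aws_secret_access_key='OTHER_USER_SECRET_KEY',
)
s3 = session.client('s3')

paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='YOUR_BUCKET', Prefix='shared-directory/'):
    for obj in page.get('Contents', []):
        s3.put_object_acl(
            Bucket='YOUR_BUCKET',
            Key=obj['Key'],
            ACL='bucket-owner-full-control',
        )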
I have a set of video files that were copied from an AWS bucket in another account into my own bucket in my account.
I'm now running into a problem with all of the files: I am receiving Access Denied errors when I try to make them public.
Specifically, I log in to my AWS account, go into S3, and drill down through the folder structure to locate one of the video files.
When I look at this specific file, the Permissions tab does not show any permissions assigned to anyone. No users, groups, or system permissions have been assigned.
At the bottom of the Permissions tab, I see a small box that says "Error: Access Denied". I can't change anything about the file. I can't add metadata. I can't add a user to the file. I cannot make the file public.
Is there a way I can gain control of these files so that I can make them public? There are over 15,000 files (around 60 GB). I'd like to avoid downloading and re-uploading all of them.
With some assistance and suggestions from the folks here I have tried the following. I made a new folder in my bucket called "media".
I tried this command:
aws s3 cp s3://mybucket/2014/09/17/thumb.jpg s3://mybucket/media --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=emailaddress=my_aws_account_email_address
I receive a fatal error: "403 when calling the HeadObject operation: Forbidden".
A very interesting conundrum! Fortunately, there is a solution.
First, a recap:
Bucket A in Account A
Bucket B in Account B
User in Account A copies objects to Bucket B (having been granted appropriate permissions to do so)
Objects in Bucket B still belong to Account A and cannot be accessed by Account B
I managed to reproduce this and can confirm that users in Account B cannot access the file -- not even the root user in Account B!
Fortunately, things can be fixed. The aws s3 cp command in the AWS Command-Line Interface (CLI) can update permissions on a file when copied to the same name. However, to trigger this, you also have to update something else otherwise you get this error:
This copy request is illegal because it is trying to copy an object to itself without changing the object's metadata, storage class, website redirect location or encryption attributes.
Therefore, the permissions can be updated with this command:
aws s3 cp s3://my-bucket/ s3://my-bucket/ --recursive --acl bucket-owner-full-control --metadata "One=Two"
This must be run by an Account A user that has access permissions to the objects (e.g. the user who originally copied the objects to Bucket B)
The metadata content is unimportant, but needed to force the update
--acl bucket-owner-full-control will grant permission to Account B so you'll be able to use the objects as normal
End result: A bucket you can use!
aws s3 cp s3://account1/ s3://accountb/ --recursive --acl bucket-owner-full-control
To correctly set the appropriate permissions for newly added files, add this bucket policy:
[...]
{
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/their-user"
    },
    "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
    ],
    "Resource": "arn:aws:s3:::my-bucket/*"
}
And set the ACL for newly created files in code. Python example:
import boto3

client = boto3.client('s3')

local_file_path = '/home/me/data.csv'
bucket_name = 'my-bucket'
bucket_file_path = 'exports/data.csv'

client.upload_file(
    local_file_path,
    bucket_name,
    bucket_file_path,
    ExtraArgs={'ACL': 'bucket-owner-full-control'}
)
source: https://medium.com/artificial-industry/how-to-download-files-that-others-put-in-your-aws-s3-bucket-2269e20ed041 (disclaimer: written by me)
In case anyone is trying to do the same thing but using a Hadoop/Spark job instead of the AWS CLI:
Step 1: Grant the user in Account A the appropriate permissions to copy objects to Bucket B (as mentioned in the answer above).
Step 2: Set the fs.s3a.acl.default configuration option using the Hadoop configuration. This can be set in a conf file or in the program:
Conf File:
<property>
  <name>fs.s3a.acl.default</name>
  <description>Set a canned ACL for newly created and copied objects. Value may be Private,
    PublicRead, PublicReadWrite, AuthenticatedRead, LogDeliveryWrite, BucketOwnerRead,
    or BucketOwnerFullControl.</description>
  <value>$chooseOneFromDescription</value>
</property>
Programmatically:
spark.sparkContext.hadoopConfiguration.set("fs.s3a.acl.default", "BucketOwnerFullControl")
Adding
--acl bucket-owner-full-control
made it work for me.
I'm afraid you won't be able to transfer ownership as you wish. Here's what you did:
Old account copies objects into new account.
The "right" way of doing it (assuming you wanted to assume ownership on the new account) would be:
New account copies objects from old account.
See the small but important difference? S3 docs kind of explain it.
I think you might get away with it without needing to download the whole thing by just copying all of the files within the same bucket, and then deleting the old files. Make sure you can change the permissions after doing the copy. This should save you some money too, as you won't have to pay for the data transfer costs of downloading everything.
boto3 "copy_object" solution :
Providing Grant control to the destination bucket owner
client.copy_object(CopySource=copy_source, Bucket=target_bucket, Key=key, GrantFullControl='id=<bucket owner Canonical ID>')
To get the canonical ID from the console: select the bucket, go to the Permissions tab, then the "Access Control List" section.
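If you prefer to look the canonical ID up programmatically instead of in the console, a small boto3 sketch (the bucket name is a placeholder, and it assumes you own the destination bucket):

import boto3

s3 = boto3.client('s3')

# The Owner of your own bucket's ACL is your account's canonical user ID.
canonical_id = s3.get_bucket_acl(Bucket='my-bucket')['Owner']['ID']
print(canonical_id)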