Amazon S3 File Permissions, Access Denied when copied from another account - amazon-web-services

I have a set of video files that were copied from an AWS bucket in another account into my own bucket in my account.
I'm now running into a problem with all of these files: I receive Access Denied errors when I try to make them public.
Specifically, I log in to my AWS account, go into S3, and drill down through the folder structure to locate one of the video files.
When I look at this specific file, the Permissions tab does not show any permissions assigned to anyone. No users, groups, or system permissions have been assigned.
At the bottom of the Permissions tab, I see a small box that says "Error: Access Denied". I can't change anything about the file. I can't add metadata. I can't add a user to the file. I cannot make the file Public.
Is there a way I can gain control of these files so that I can make them public? There are over 15,000 files, around 60 GB in total, and I'd like to avoid downloading and re-uploading all of them.
With some assistance and suggestions from the folks here I have tried the following. I made a new folder in my bucket called "media".
I tried this command:
aws s3 cp s3://mybucket/2014/09/17/thumb.jpg s3://mybucket/media --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=emailaddress=my_aws_account_email_address
I receive a fatal error (403 Forbidden) when calling the HeadObject operation.

A very interesting conundrum! Fortunately, there is a solution.
First, a recap:
Bucket A in Account A
Bucket B in Account B
User in Account A copies objects to Bucket B (having been granted appropriate permissions to do so)
Objects in Bucket B still belong to Account A and cannot be accessed by Account B
I managed to reproduce this and can confirm that users in Account B cannot access the file -- not even the root user in Account B!
Fortunately, things can be fixed. The aws s3 cp command in the AWS Command-Line Interface (CLI) can update permissions on a file when it is copied to the same name. However, to trigger this you also have to change something else, otherwise you get this error:
This copy request is illegal because it is trying to copy an object to itself without changing the object's metadata, storage class, website redirect location or encryption attributes.
Therefore, the permissions can be updated with this command:
aws s3 cp s3://my-bucket/ s3://my-bucket/ --recursive --acl bucket-owner-full-control --metadata "One=Two"
Must be run by an Account A user that has access permissions to the objects (eg the user who originally copied the objects to Bucket B)
The metadata content is unimportant, but needed to force the update
--acl bucket-owner-full-control will grant permission to Account B so you'll be able to use the objects as normal
End result: A bucket you can use!
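If you prefer to do the same fix from code rather than the CLI, here is a rough boto3 sketch of the same idea (untested; the bucket name is a placeholder, and it assumes the Account A user is also allowed to list the bucket). It copies each object onto itself with replaced metadata and the bucket-owner-full-control ACL:

import boto3

# Run with Account A credentials (the account that owns the objects).
s3 = boto3.client('s3')
bucket = 'my-bucket'  # placeholder: the bucket in Account B

paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get('Contents', []):
        # Copy the object onto itself; replacing the metadata forces S3 to
        # accept the copy-to-itself, and the ACL hands control to Account B.
        # Note: copy_object only handles objects up to 5 GB.
        s3.copy_object(
            Bucket=bucket,
            Key=obj['Key'],
            CopySource={'Bucket': bucket, 'Key': obj['Key']},
            MetadataDirective='REPLACE',
            Metadata={'One': 'Two'},
            ACL='bucket-owner-full-control',
        )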

aws s3 cp s3://account1/ s3://accountb/ --recursive --acl bucket-owner-full-control

To correctly set the appropriate permissions for newly added files, add this bucket policy:
[...]
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::123456789012:user/their-user"
  },
  "Action": [
    "s3:PutObject",
    "s3:PutObjectAcl"
  ],
  "Resource": "arn:aws:s3:::my-bucket/*"
}
And set the ACL for newly created files in code. Python example:
import boto3
client = boto3.client('s3')
local_file_path = '/home/me/data.csv'
bucket_name = 'my-bucket'
bucket_file_path = 'exports/data.csv'
client.upload_file(
    local_file_path,
    bucket_name,
    bucket_file_path,
    ExtraArgs={'ACL': 'bucket-owner-full-control'}
)
source: https://medium.com/artificial-industry/how-to-download-files-that-others-put-in-your-aws-s3-bucket-2269e20ed041 (disclaimer: written by me)

In case anyone is trying to do the same thing but using a Hadoop/Spark job instead of the AWS CLI:
Step 1: Grant the user in Account A appropriate permissions to copy objects to Bucket B (mentioned in the answer above).
Step 2: Set the fs.s3a.acl.default configuration option in the Hadoop Configuration. This can be set in a conf file or programmatically:
Conf File:
<property>
  <name>fs.s3a.acl.default</name>
  <description>Set a canned ACL for newly created and copied objects. Value may be Private,
    PublicRead, PublicReadWrite, AuthenticatedRead, LogDeliveryWrite, BucketOwnerRead,
    or BucketOwnerFullControl.</description>
  <value>$chooseOneFromDescription</value>
</property>
Programmatically:
spark.sparkContext.hadoopConfiguration.set("fs.s3a.acl.default", "BucketOwnerFullControl")

Adding
--acl bucket-owner-full-control
made it work for me.

I'm afraid you won't be able to transfer ownership as you wish. Here's what you did:
Old account copies objects into new account.
The "right" way of doing it (assuming you wanted to assume ownership on the new account) would be:
New account copies objects from old account.
See the small but important difference? S3 docs kind of explain it.
I think you might get away with it without needing to download the whole thing by just copying all of the files within the same bucket, and then deleting the old files. Make sure you can change the permissions after doing the copy. This should save you some money too, as you won't have to pay for the data transfer costs of downloading everything.

boto3 "copy_object" solution :
Providing Grant control to the destination bucket owner
client.copy_object(CopySource=copy_source, Bucket=target_bucket, Key=key, GrantFullControl='id=<bucket owner Canonical ID>')
To get the canonical ID from the console: select the bucket, open the Permissions tab, then the "Access Control List" section.
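If you'd rather look up the canonical ID programmatically than through the console, a minimal sketch, run with the destination bucket owner's credentials:

import boto3

# list_buckets returns the canonical user ID of the caller,
# i.e. the destination bucket owner's ID when run with their credentials.
owner_id = boto3.client('s3').list_buckets()['Owner']['ID']
print(owner_id)  # use as GrantFullControl='id=' + owner_id in copy_object above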

Related

AWS S3 sync creates objects with different permissions than bucket

I'm trying to use an S3 bucket to upload files to as part of a build. It is configured to serve files as a static site, and the content is protected using a Lambda and CloudFront. When I manually create files in the bucket they are all visible and everything is happy, but the files created by the upload are not available, resulting in an access denied response.
The user that's pushing to the bucket does not belong to the same AWS environment, but it has been set up with an ACL that allows it to push to the bucket, and the bucket has a policy that allows that user to push to it.
The command that I'm using is:
aws s3 sync --no-progress --delete docs/_build/html "s3://my-bucket" --acl bucket-owner-full-control
Is there something else that I can try that basically uses the bucket permissions for anything that's created?
According to OP's feedback in the comment section, setting Object Ownership to Bucket owner preferred fixed the issue.
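For reference, the same Object Ownership setting can also be applied programmatically; a hedged boto3 sketch (the bucket name is a placeholder):

import boto3

s3 = boto3.client('s3')
# "Bucket owner preferred": the bucket owner becomes the owner of new objects
# uploaded with the bucket-owner-full-control ACL.
s3.put_bucket_ownership_controls(
    Bucket='my-bucket',  # placeholder
    OwnershipControls={'Rules': [{'ObjectOwnership': 'BucketOwnerPreferred'}]},
)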

AWS s3 Access Denied because of Ownership?

I have an AWS S3 bucket and a directory to which I have allowed other users to put files/objects. The directory is public. Using a third-party tool (namely Alteryx), I am trying to get the objects. From this tool, I connect to AWS using my Access Key and secret. I can list the files in the directory but am only able to read files that I have created (not the files that others have put into the directory). I am guessing the problem is with ownership of the objects (the objects I own can be read, the files others own cannot be read using my Access Key). Any suggestions on how I can programmatically change ownership of the files?
My current bucket policy for the directory in question.
You are right. Since you are the bucket owner but not the object owner, you cannot access the objects owned by other users. However, in your S3 bucket policy you have required that the PutObject action grant full object access to the bucket owner, which should have done the trick:
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
My best guess is that it's because of multipart uploads of files in your bucket (uploading large objects in parts), or because objects were moved from one bucket to another.
You can always ask the object owner to update the ACL of the object using the following command.
aws s3api put-object-acl --bucket bucketname --key keyname --acl bucket-owner-full-control
I've linked my resource here.
I am guessing the problem is with ownership of the objects
Yes, your guess is correct.
Any suggestions on how I can programmatically change ownership of the files?
You can do that by getting all the objects from the S3 bucket and setting their ACL to bucket-owner-full-control.
Please specify which programming language you are using. I'm assuming you can use Python; if so, please see the example below.
import boto3
s3 = boto3.resource('s3')
obj = s3.Bucket('YOUR_BUCKET').Object('YOUR_OBJECT')
# you can improve the code to list all the objects and iterate over them
obj.Acl().put(ACL='bucket-owner-full-control')
But please note that you can only do this using the credentials of the other user who put the object into your S3 bucket. You cannot change the object ACL using your own credentials.
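Building on the comment in the snippet above, a sketch that lists every object and applies the ACL (again, it must run with the credentials of the account that uploaded the objects; the bucket name is a placeholder):

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('YOUR_BUCKET')  # placeholder

# Iterate over every object in the bucket and hand full control to the bucket owner.
for obj in bucket.objects.all():
    s3.Object(bucket.name, obj.key).Acl().put(ACL='bucket-owner-full-control')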

S3 console Copy/Paste forbidden after s3cmd PUT from another account

Let's say I have 2 AWS accounts: Account1 and AccountZ.
I installed and configured s3cmd to have access to Account1.
I created a bucket in AccountZ and made it publicly read/write
I performed an s3cmd put of a file test.txt from Account1 to s3://AccountZ/test.txt
Then, after it uploaded, I tried to copy/paste AccountZ/test.txt to a different bucket, and it says that there was an error ("The following objects were not copied due to errors from: <AccountZ folder>"). So, I tried to change the permissions on the file, and it says I don't have permissions to do that.
If I "upload" a file using the S3 console into the AccountZ target directory, the resulting file IS copy/paste-able. So there seems to be an issue with the uploaded file due to the PUT.
If I change the s3cmd configuration to use the key/secret of AccountZ, then the uploaded file's permissions work just fine and the copy/paste is successful.
How do I upload/PUT a file to S3 so that I can then copy/paste the resulting file in the S3 console?
When an object is uploaded to S3, the owner of the object is the account that created it. In this case, the owner of the object is Account1, even though the bucket exists in AccountZ. The default permissions on the object only allow it to be modified by the owner of the object (Account1). The only thing that AccountZ will be able to do with the object is delete it.
When you create a bucket policy, that policy will automatically apply to any objects in the bucket that are 'owned' by the same account that owns the bucket. Since AccountZ owns the bucket and Account1 owns the object, the bucket policy of public read/write isn't going to apply here.
Try specifying an ACL (eg 'public-read-write') when the object is uploaded. If you need to modify an object that has already been uploaded, try the PutObjectAcl call from the S3 API using Account1's credentials. (http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUTacl.html)
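A hedged boto3 sketch of both suggestions, run with Account1's credentials (bucket and key names are placeholders):

import boto3

s3 = boto3.client('s3')  # Account1's credentials

# Option 1: set the ACL at upload time.
with open('test.txt', 'rb') as f:
    s3.put_object(Bucket='accountz-bucket', Key='test.txt', Body=f,
                  ACL='public-read-write')

# Option 2: fix the ACL on an object that has already been uploaded.
s3.put_object_acl(Bucket='accountz-bucket', Key='test.txt',
                  ACL='public-read-write')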
Using a strategy similar to what @ScottWolf proposed, I had to do the following to solve my problem:
The solution was that I had to add a bucket policy on the source data bucket (Account1) that gave permissions to the target. Then I had to re-configure my S3 API client to use AccountZ's credentials and just do a copy from Account1 to AccountZ.

S3: User cannot access object in his own s3 bucket if created by another user

An external user has access to our s3 bucket, using these actions in our bucket policy:
"Action": [
"s3:GetObjectAcl",
"s3:GetObject",
"s3:PutObjectAcl",
"s3:ListMultipartUploadParts",
"s3:PutObject"
]
That user generated temporary credentials, which were then used to upload a file into our bucket.
Now, I cannot access the file. In the S3 UI, if I attempt to download the file, I get a 403. If I attempt to change the permissions on that object, I see the message: "Sorry! You do not have permissions to view this bucket." If the external user sets the appropriate header (x-amz-acl: bucket-owner-full-control) when uploading the file with the temporary credentials, I can access the file normally. It seems strange to me that even though I own the bucket, it is possible for the external user to put files into it that I am unable to access.
Is it possible that there is some policy I can set so I can access the file, or so that I am able to access any file that is added to my bucket, regardless of how it is added? Thanks!
I believe you have to get the object owner to update the ACL or re-write the object specifying bucket owner full control. The simplest way to experiment with this is using the CLI:
aws s3api put-object-acl --acl bucket-owner-full-control --bucket some-bucket --key path/to/unreadable.txt
Yeah, I think you have to do that once for each object; I don't think there is a recursive option.
AWS publishes an example bucket policy to prevent adding objects to the bucket without giving the bucket owner full control. But that will not address ownership of the objects already in your bucket.
I don't know of any policy that will automagically transfer ownership to the bucket owner.
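For reference, that example policy is roughly a deny-unless-correct-ACL statement; here is a hedged boto3 sketch that applies it (the bucket name is a placeholder, and the exact statement wording in the AWS docs may differ):

import json
import boto3

# Deny any PutObject request that does not grant the bucket owner full control.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireBucketOwnerFullControl",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::some-bucket/*",  # placeholder
        "Condition": {
            "StringNotEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
        }
    }]
}

boto3.client('s3').put_bucket_policy(Bucket='some-bucket', Policy=json.dumps(policy))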
You can actually use a recursive copy to copy all objects back onto the bucket and set the bucket-owner-full-control ACL, using the following syntax:
aws s3 cp s3://myBucket s3://myBucket --recursive --acl bucket-owner-full-control --storage-class STANDARD
AWS has solved this in the general case by now allowing bucket owners to configure their buckets to take control of all objects placed there, regardless of writer. This is great news as you no longer need to ask the writer to place additional flags during write.
To change your bucket to this setting (which is also now the recommended default) you can use this command:
aws s3api put-bucket-ownership-controls --bucket <bucketname> --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerEnforced}]'
Another piece of good news is that this applies retroactively: the bucket owner takes control of objects previously written, regardless of the ACLs they were written with.
For more information see https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html

AccessDeniedException: 403 Forbidden on GCS using owner account

I have tried to access files in a bucket and I keep getting access denied on the files. I can see them in the GCS console, but I cannot access them through the console, and I cannot access them through gsutil either when running the command below.
gsutil cp gs://my-bucket/folder-a/folder-b/mypdf.pdf files/
But all this returns is AccessDeniedException: 403 Forbidden
I can list all the files but not actually access them. I've tried adding my user to the ACL, but that had no effect. All the files were uploaded from a VM through a FUSE mount, which worked perfectly, and then I just lost all access.
I've checked these posts, but none seem to have a solution that's helped me:
Can't access resource as OWNER despite the fact I'm the owner
gsutil copy returning "AccessDeniedException: 403 Insufficient Permission" from GCE
gsutil cors set command returns 403 AccessDeniedException
Although this is quite an old question, I had a similar issue recently. After trying many of the options suggested here without success, I carefully re-examined my script and discovered I was getting the error as a result of a mistake in my bucket address, gs://my-bucket. I fixed it and it worked perfectly!
This is quite possible. Owning a bucket grants FULL_CONTROL permission to that bucket, which includes the ability to list objects within that bucket. However, bucket permissions do not automatically imply any sort of object permissions, which means that if some other account is uploading objects and sets ACLs to be something like "private," the owner of the bucket won't have access to it (although the bucket owner can delete the object, even if they can't read it, as deleting objects is a bucket permission).
I'm not familiar with the default FUSE settings, but if I had to guess, you're using your project's system account to upload the objects, and they're set to private. That's fine. The easiest way to test that would be to run gsutil from a GCE host, where the default credentials will be the system account. If that works, you could use gsutil to switch the ACLs to something more permissive, like "project-private."
The command to do that would be:
gsutil acl set -R project-private gs://myBucketName/
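If you'd rather do this from Python than gsutil, a hedged sketch using the google-cloud-storage client (the bucket name is a placeholder, the predefined ACL name uses the JSON API spelling, and it assumes the caller is allowed to change object ACLs):

from google.cloud import storage

client = storage.Client()
bucket = client.bucket('myBucketName')  # placeholder

# Apply the project-private canned ACL to every object in the bucket.
for blob in bucket.list_blobs():
    blob.acl.save_predefined('projectPrivate')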
tl;dr The Owner (basic) role has only a subset of the GCS permissions present in the Storage Admin (predefined) role—notably, Owners cannot access bucket metadata, list/read objects, etc. You would need to grant the Storage Admin (or another, less privileged) role to provide the needed permissions.
NOTE: This explanation applies to GCS buckets using uniform bucket-level access.
In my case, I had enabled uniform bucket-level access on an existing bucket, and found I could no longer list objects, despite being an Owner of its GCP project.
This seemed to contradict how GCP IAM permissions are inherited— organization → folder → project → resource / GCS bucket—since I expected to have Owner access at the bucket level as well.
But as it turns out, the Owner permissions were being inherited as expected; they were simply insufficient for listing GCS objects.
The Storage Admin role has the following permissions which are not present in the Owner role: [1]
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.setIamPolicy
storage.buckets.update
storage.multipartUploads.abort
storage.multipartUploads.create
storage.multipartUploads.list
storage.multipartUploads.listParts
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.getIamPolicy
storage.objects.list
storage.objects.setIamPolicy
storage.objects.update
This explained the seemingly strange behavior. And indeed, after granting the Storage Admin role (whereby my user was both Owner and Storage Admin), I was able to access the GCS bucket.
Footnotes
[1] Though the documentation page Understanding roles omits the list of permissions for Owner (and other basic roles), it's possible to see this information in the GCP console:
Go to "IAM & Admin"
Go to "Roles"
Filter for "Owner"
Go to "Owner"
(See list of permissions)