We are trying to copy/move S3 objects that were originally transferred into our bucket from another AWS account.
However, when we try to move these files with the aws s3 cp command we get: fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
I think the problem is that someone copied this data over from another account without using --acl bucket-owner-full-control. Is there a way for us to list object owners via the CLI or boto3? Maybe a recursive call over the bucket showing the owner of each object, or a way to find anything that isn't owned by our account?
Current permissions of our bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::account-user-id:root"
                ]
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::customers"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::account-user-id:root"
                ]
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::customers/*"
        }
    ]
}
A recent feature in Amazon S3, Object Ownership, allows you to override the ownership settings: if you configure the bucket with ACLs disabled, then you should immediately regain access to all of the objects.
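As for the listing part of the question, here is a minimal boto3 sketch that flags objects not owned by your account. The bucket name is taken from the policy above; comparing against your account's canonical ID is an assumption about how you want to detect foreign-owned objects:

import boto3

s3 = boto3.client("s3")

# Canonical user ID of our own account, to compare against object owners
our_id = s3.list_buckets()["Owner"]["ID"]

paginator = s3.get_paginator("list_objects_v2")
# FetchOwner=True asks S3 to return the Owner field for every object
for page in paginator.paginate(Bucket="customers", FetchOwner=True):
    for obj in page.get("Contents", []):
        owner = obj.get("Owner", {})
        if owner.get("ID") != our_id:
            print(obj["Key"], owner.get("DisplayName", owner.get("ID")))

Any key this prints was uploaded by another account and will need its ACL fixed (or ACLs disabled as described above).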
The line that I am trying to run is:
aws s3 sync s3://sourcebucket.publicfiles s3://mybucket
I have been looking through multiple questions like this and I have tried about everything.
I have changed my IAM policy to give full access:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:ListStorageLensConfigurations",
                "s3:ListAccessPointsForObjectLambda",
                "s3:GetAccessPoint",
                "s3:PutAccountPublicAccessBlock",
                "s3:GetAccountPublicAccessBlock",
                "s3:ListAllMyBuckets",
                "s3:ListAccessPoints",
                "s3:ListJobs",
                "s3:PutStorageLensConfiguration",
                "s3:ListMultiRegionAccessPoints",
                "s3:CreateJob"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3::ID:accesspoint/*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::mybucket",
                "arn:aws:s3:*:ID:accesspoint/*",
                "arn:aws:s3:us-west-2:ID:async-request/mrap/*/*",
                "arn:aws:s3:::*/*",
                "arn:aws:s3:*:938745241482:storage-lens/*",
                "arn:aws:s3:*:938745241482:job/*",
                "arn:aws:s3-object-lambda:*:ID:accesspoint/*"
            ]
        }
    ]
}
As well as the bucket policy:
{
    "Version": "2012-10-17",
    "Id": "Policy",
    "Statement": [
        {
            "Sid": "statement",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::ID:root"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::mybucket/*",
                "arn:aws:s3:::mybucket"
            ]
        }
    ]
}
At this point I have tried making my bucket public as well as
aws s3 cp s3://sourcebucket.publicfiles/file s3://mybucket/file --acl bucket-owner-full-control
for the specific files that are not working, but it gives me the same error:
An error occurred (AccessDenied) when calling the GetObjectTagging operation: Access Denied
Since the source is a public bucket, I do not have access to its policies.
I am not sure what else to try, so I would really appreciate any insight.
PS: This is my first post here, so apologies if there is a better way to format the question or any more info I should give.
The error is saying that you do not have permission to call GetObjectTagging. This indicates that the Copy operation is attempting to retrieve Tags from the object so that it can then apply the same tags to the copied object, but you do not have permission to request the tags on the source object.
The AWS article Troubleshoot issues copying an object between S3 buckets says:
You must have s3:GetObjectTagging permission for the source object and s3:PutObjectTagging permission for objects in the destination bucket.
Therefore, if the source bucket is not granting you GetObjectTagging permission, then you cannot use aws s3 sync or aws s3 cp. Instead, you will need to copy each object individually using aws s3api copy-object. For example:
aws s3api copy-object --copy-source bucket-1/test.txt --key test.txt --bucket bucket-2
(If I need to copy multiple objects individually, I make a list of objects in an Excel spreadsheet and then make a formula to create the above copy-object command. I use 'Copy Down' to create commands for all files, then paste all the commands into the command line.)
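If you would rather script the loop than build it in a spreadsheet, here is a boto3 sketch of the same per-object copy, using the placeholder bucket names from the example above:

import boto3

s3 = boto3.client("s3")
src, dst = "bucket-1", "bucket-2"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=src):
    for obj in page.get("Contents", []):
        # CopyObject copies server-side, so the client itself never calls
        # GetObjectTagging the way the high-level aws s3 cp/sync commands do
        s3.copy_object(
            CopySource={"Bucket": src, "Key": obj["Key"]},
            Bucket=dst,
            Key=obj["Key"],
        )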
In CodeBuild, I have two projects: one for a staging site and one for a production site. When I compile my site and run it through the staging project, it works fine; it syncs successfully to the S3 bucket for the staging site. However, when I compile it and run it through the production project, the sync command returns an error:
fatal error: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
[Container] 2018/09/11 08:40:33 Command did not exit successfully aws s3 sync public/ s3://$S3_BUCKET exit status 1
I did some digging around, and I think the problem is with my bucket policy. I am using CloudFront as a CDN on top of my S3 bucket. I don't want to modify the bucket policy of the production bucket until I'm absolutely sure that I must, because I'm worried it might have some effect on the live site.
Here is my bucket policy for the production bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::[bucket_name]/*"
        },
        {
            "Sid": "2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity [access_code]"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::[bucket_name]/*"
        }
    ]
}
As per the error description, the list permission is missing.
Add the permissions below to your bucket policy:
"Action": [
    "s3:Get*",
    "s3:List*"
]
This should solve your issue. Also check the IAM service role that CodeBuild uses to access S3; in this kind of setup, both the S3 bucket policy and the IAM role control access to the bucket.
Your service role should have list permission for S3:
{
    "Sid": "S3ObjectPolicy",
    "Effect": "Allow",
    "Action": [
        "s3:GetObject",
        "s3:List*"
    ],
    "Resource": [
        "arn:aws:s3:::my_bucket",
        "arn:aws:s3:::my_bucket/*"
    ]
}
I've tried pretty much every possible bucket policy. I also tried adding a policy to the user, but I get Access Denied every time I try to download an object from the S3 bucket using the AWS Console.
Bucket Policy:
{
    "Version": "2012-10-17",
    "Id": "MyPolicy",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::12345678901011:user/my-username"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "XX.XXX.XXX.XXX/24",
                        "XXX.XXX.XXX.XXX/24"
                    ]
                }
            }
        }
    ]
}
That doesn't work, so I tried attaching a policy to my-username:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "StmtXXXXXXXXXX",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*"
            ]
        }
    ]
}
As strange as it sounds, it is possible to upload an object to Amazon S3 that the account owning the bucket cannot access.
When an object is uploaded to Amazon S3 (PutObject), it is possible to specify an Access Control List (ACL). Possible values are:
private
public-read
public-read-write
authenticated-read
aws-exec-read
bucket-owner-read
bucket-owner-full-control
You should normally upload objects with the bucket-owner-full-control ACL. This gives the owner of the bucket access to the object and permission to control it (e.g. delete it).
If this ACL is not supplied, the bucket owner can neither access nor modify the object.
I know that it contradicts the way you'd think buckets should work, but it's true!
How to fix it:
Re-upload the objects with the bucket-owner-full-control ACL, or
Have the original uploader loop through the objects and do an in-place CopyObject with a new ACL; this changes the permissions without having to re-upload.
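For the second option, a boto3 sketch the original uploader could run (the bucket name is hypothetical; copying an object onto itself requires MetadataDirective="REPLACE", so the original metadata is read first and passed back unchanged):

import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # hypothetical bucket name

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        head = s3.head_object(Bucket=bucket, Key=obj["Key"])
        # S3 rejects an in-place copy that changes nothing, so use
        # MetadataDirective="REPLACE" and re-supply the original
        # metadata and content type unchanged.
        s3.copy_object(
            CopySource={"Bucket": bucket, "Key": obj["Key"]},
            Bucket=bucket,
            Key=obj["Key"],
            ACL="bucket-owner-full-control",
            MetadataDirective="REPLACE",
            Metadata=head.get("Metadata", {}),
            ContentType=head.get("ContentType", "binary/octet-stream"),
        )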
UPDATE: In November 2021, a new feature was released: Amazon S3 Object Ownership can now disable access control lists to simplify access management for data in S3. This avoids the need to set an ACL on every upload and fixes most problems with object ownership.
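If you own the bucket, a boto3 sketch of turning that setting on (the bucket name is hypothetical; BucketOwnerEnforced disables ACLs and makes the bucket owner own every object):

import boto3

s3 = boto3.client("s3")

# BucketOwnerEnforced disables ACLs entirely; the bucket owner then
# owns every object in the bucket, old and new.
s3.put_bucket_ownership_controls(
    Bucket="my-bucket",  # hypothetical bucket name
    OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]},
)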
You can solve it by using put-object-acl: http://docs.aws.amazon.com/cli/latest/reference/s3api/put-object-acl.html
This has to be done by the original uploader, but it is definitely faster than copying the data again. I had TBs of data to deal with.
aws s3api put-object-acl --bucket $foldername --key $i --grant-full-control uri=http://acs.amazonaws.com/groups/global/AllUsers
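To run the same call over many keys, a boto3 sketch for the original uploader; note it grants the bucket-owner-full-control canned ACL discussed above rather than the AllUsers grant in the one-liner (adjust the grantee if you really want public access):

import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # hypothetical bucket name

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        # Must be run by the account that uploaded the objects;
        # only the object owner may change an object's ACL.
        s3.put_object_acl(
            Bucket=bucket,
            Key=obj["Key"],
            ACL="bucket-owner-full-control",
        )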
I want to migrate an S3 bucket from one account to another. Here is my bucket policy:
{
    "Version": "2008-10-17",
    "Id": "Policy1335892530063",
    "Statement": [
        {
            "Sid": "DelegateS3Access",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::xxxxxxxx:root"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::test123",
                "arn:aws:s3:::test123/*"
            ]
        },
        {
            "Sid": "Stmt1335892150622",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::xxxxxxx:root"
            },
            "Action": [
                "s3:GetBucketAcl",
                "s3:GetBucketPolicy"
            ],
            "Resource": "arn:aws:s3:::test123"
        },
        {
            "Sid": "Stmt1335892526596",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::xxxxxxxxx:root"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::test123/*"
        }
    ]
}
Here is my IAM user policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::*"]
        }
    ]
}
When I run the command
aws s3 sync s3://test123 s3://abc-test123
I get the error:
A client error (AccessDenied) occurred when calling the CopyObject operation: Access Denied
Your bucket policy seems to be correct.
Please verify that you are using the root account, as specified in your bucket policy.
Also check whether there are any Deny statements in the bucket policy on your destination bucket.
If nothing helps, you can temporarily enable public access to your bucket as a workaround. It's not secure, but it should probably work in all cases.
Make sure you are providing adequate permissions on both the source bucket (to read) and the destination bucket (to write).
If you are using Root credentials (not generally recommended) for an Account that owns the bucket, you probably don't even need the bucket policy -- the root account should, by default, have the necessary access.
If you are assigning permissions to an IAM user, then instead of creating a Bucket Policy, assign permissions on the IAM user themselves. No need to supply a Principal in this situation.
Start by checking that you have permissions to list both buckets:
aws s3 ls s3://test123
aws s3 ls s3://abc-test123
Then check that you have permissions to copy a file from the source and to the destination:
aws s3 cp s3://test123/foo.txt .
aws s3 cp foo.txt s3://abc-test123/foo.txt
If they work, then the sync command should work, too.
Using s3cmd configured with my root credentials (access key and secret key), whenever I try to download something from a bucket using sync or get, I receive this strange permission error for my root account:
WARNING: Remote file S3Error: 403 (Forbidden):
The owner is another user I created using the IAM console, but am I correct to expect that the root user should always have full and unrestricted access?
Also, using the AWS CLI I get an unknown error:
A client error (Unknown) occurred when calling the GetObject operation: Unknown
I also thought I had to add a bucket policy to allow root access (as strange as it sounds); as a first step I added anonymous access with this policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::myBucket/*"
            ]
        }
    ]
}
But the errors are still the same as above. The owner of the bucket is also the root user (the one trying to access is the same as the owner). What am I misunderstanding here? How can I restore the root user's access to my own bucket, which was created by one of my own IAM users?
For any of the S3 read permissions to work, you need to allow not just the objects but also ListBucket on the bucket(s), plus ListAllMyBuckets and GetBucketLocation. My consolidated version:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::myBucket/*",
                "arn:aws:s3:::myBucket"
            ]
        }
    ]
}
See more examples in the AWS IAM documentation.
It is always a good idea to recheck the storage status and whether the bucket has a lifecycle rule, because in that case objects may have been transitioned to Glacier. Here, I tried to access a Glacier object with s3cmd, and I received uninformative and irrelevant permission errors. Better warning/error messages for this case would be a good enhancement for future versions of s3cmd.
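A quick boto3 sketch for that check, using the bucket name from the question above; the list call reports each object's storage class:

import boto3

s3 = boto3.client("s3")

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="myBucket"):
    for obj in page.get("Contents", []):
        # Anything not in STANDARD (e.g. GLACIER, DEEP_ARCHIVE) must be
        # restored before it can be downloaded with get/sync.
        if obj.get("StorageClass", "STANDARD") != "STANDARD":
            print(obj["Key"], obj["StorageClass"])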