Can't copy from an S3 bucket in another account - amazon-web-services

Added an update (an EDIT) at the bottom
Info
I have two AWS accounts. One with an S3 bucket and a second one that needs access to it.
On the account with the S3 bucket, the bucket policy looks like this:
{
    "Sid": "DelegateS3ToSecAcc",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::Second-AWS-ACC-ID:root"
    },
    "Action": [
        "s3:List*",
        "s3:Get*"
    ],
    "Resource": [
        "arn:aws:s3:::BUCKET-NAME/*",
        "arn:aws:s3:::BUCKET-NAME"
    ]
},
In the second account, which tries to get the file from S3, I've attached the following IAM policy (there are other policies too, but this one should grant access):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*",
                "s3-object-lambda:Get*",
                "s3-object-lambda:List*"
            ],
            "Resource": "*"
        }
    ]
}
Problem
Despite everything, when I run the following command:
aws s3 cp s3://BUCKET-NAME/path/to/file/copied/from/URI.txt .
I get:
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Did I do something wrong? What did I miss? All the web results I found suggested making sure the bucket policy has /* and that the IAM policy allows S3 access, but that's already in place.
EDIT: aws s3 ls works on the file! That means it relates to permissions somehow. It also works from another AWS account that may have uploaded the file. I just need to figure out how to open it up.

The aws s3 cp command does lots of extra things under the hood, including (it seems) calling HeadObject.
Try calling the pure S3 API instead:
aws s3api get-object --bucket BUCKET-NAME --key path/to/file/copied/from/URI.txt URI.txt
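If it still fails, you can call HeadObject directly to confirm which operation is being denied (a minimal sketch, reusing the bucket and key placeholders from the question):
aws s3api head-object --bucket BUCKET-NAME --key path/to/file/copied/from/URI.txt
A 403 here, while ListObjects succeeds, usually points at object-level permissions (for example, object ownership or encryption) rather than the bucket policy.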

Related

AWS-CLI S3: Can list but cannot copy

Please help. I have gone through many SO and AWS posts and no solutions seem to be working for me.
I am trying to run the command aws s3 cp s3://buckets/<bucket-name>/<grandparent-dir>/<parent-dir>/<child-dir> <local-dir> --recursive in order to copy all the contents of the child-dir folder to a local-dir folder on my machine. I keep getting the error fatal error: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied.
Running aws s3 ls <bucket-name>/<grandparent-dir>/<parent-dir>/<child-dir> successfully prints all the items in child-dir, so I must have ListObjects permissions.
I am the owner of this bucket. The ID printed when running aws s3api list-buckets --query Owner.ID matches the ID shown when running aws s3api list-objects --bucket <bucket-name> --prefix "<grandparent-dir>/<parent-dir>/<child-dir>".
I am logged in as an IAM User within the user group groupA
groupA has the following IAM policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetAccountPublicAccessBlock",
                "s3:ListAllMyBuckets",
                "s3:ListAccessPoints"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<bucket-name>/*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketPublicAccessBlock",
                "s3:GetBucketPolicyStatus",
                "s3:ListBucket",
                "s3:GetBucketAcl"
            ],
            "Resource": "arn:aws:s3:::<bucket-name>"
        }
    ]
}
The bucket itself has the following bucket policy:
{
    "Version": "2012-10-17",
    "Id": "Policy1546414473940",
    "Statement": [
        {
            "Sid": "Stmt1546414471931",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<user-id>:user/<user-name>"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::<bucket-name>"
        }
    ]
}
I have run aws configure and put in my valid access_key, secret_key, and region. I have confirmed this with aws configure list as well as by opening the ~/.aws/credentials file. The region selected is the same as the region of the bucket.
I have logged in as the root user and turned all 4 options off for Block Public Access both in the permissions tab of the bucket itself and the account options on the left side menu.
Still, after all this, I am getting the error fatal error: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied when trying to run the copy command. However, the list command is working.
What am I doing wrong? Please save me!
If I have left any important information out, please let me know.
After reading the comment by @JohnRotenstein, I realized that when entering the S3 URI for the bucket, the buckets term should not be present. By modifying my endpoint
from:
aws s3 cp s3://buckets/<bucket-name>/<grandparent-dir>/<parent-dir>/<child-dir> <local-dir> --recursive
to:
aws s3 cp s3://<bucket-name>/<grandparent-dir>/<parent-dir>/<child-dir> <local-dir> --recursive
the download started working.
Huge thank you to @JohnRotenstein!

AWS S3: Able to list buckets and download items via GUI but not via AWS CLI

The title sums up the problem. When entering the GUI I observe the following identity in the upper right corner:
my_name @ 1234
When calling aws sts get-caller-identity --profile my_role in the CLI I get:
"UserId": "my_user_id",
"Account": "1234",
"Arn": "arn:aws:iam::1234:user/my_name"
From that I conclude that I am logged in with the same role in the GUI and the CLI. When opening the S3 bucket "s3_bucket_signature-1" via the GUI I can see all the files in the bucket and I am able to download them. However, when calling
aws s3 cp --recursive s3://s3_bucket_signature-1/* my_dir --profile my_role
I get:
fatal error: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
My role is within a user group. Every role in this user group has the following permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*",
                "s3-object-lambda:Get*",
                "s3-object-lambda:List*"
            ],
            "Resource": [
                "arn:aws:s3:::s3_bucket_signature-*",
                "arn:aws:s3:::s3_bucket_signature-*/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        }
    ]
}
Any idea what is going on here?
It was an issue with MFA. When MFA is enabled and you want to access resources via the CLI, perform the steps described in:
How to use MFA with AWS CLI?
and if you want to use boto3 api see:
https://charlesvictus.medium.com/using-mfa-with-aws-using-python-and-boto3-f4f3e532f177
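In short, the flow looks something like this (a sketch; the account ID, user name, and token code are placeholders):
# Request temporary credentials with your MFA device
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/my_name --token-code 123456
# Export the returned values so that subsequent CLI calls use them
export AWS_ACCESS_KEY_ID=<AccessKeyId from the response>
export AWS_SECRET_ACCESS_KEY=<SecretAccessKey from the response>
export AWS_SESSION_TOKEN=<SessionToken from the response>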

AWS S3 "Access Denied" on GetObject operation (using AES-256 Server Side Encryption)

I have two AWS accounts and I'm trying to access S3 objects in Account A from Account B. The objects in question were uploaded as a result of ElastiCache's copy-snapshot operation, meaning that the root user of Account A is not the true owner. I added the following policies:
The Bucket policy on Account A:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::account_b_id:user/user_x"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::bucket-a-name",
                "arn:aws:s3:::bucket-a-name/*"
            ]
        }
    ]
}
The IAM policy applied to user_x on Account B:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-a-name",
                "arn:aws:s3:::bucket-a-name/*"
            ]
        }
    ]
}
Here's where some strange things started happening. Making a call similar to this:
aws s3api get-object --bucket bucket-a-name --key backup.rdb localbackup.rdb
I noticed the operation succeeds ONLY if there is no server-side encryption enabled in the console. By default, every file backed up from ElastiCache is encrypted under the S3 AES-256 type, not KMS. Until I disable the encryption, I will always get the error:
An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
However as demonstrated I have given (what I believe to be) sufficient permissions to access these objects. What is going on? How can I access these objects?
I should also note that when I make that very same call as a user with the AdministratorAccess policy on Account A, the operation succeeds with no errors.
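For anyone debugging a similar setup, it can help to inspect the object's encryption and owner from Account A before testing cross-account access (a sketch, reusing the bucket and key from the question):
# Prints the object's metadata, including the ServerSideEncryption field (AES256 or aws:kms)
aws s3api head-object --bucket bucket-a-name --key backup.rdb
# Prints the object's owner, which matters for cross-account access to objects written by a service
aws s3api get-object-acl --bucket bucket-a-name --key backup.rdb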

Is it possible to copy between AWS accounts using AWS CLI?

Is it possible using AWS CLI to copy the contents of S3 buckets between AWS accounts? I know it's possible to copy/sync between buckets in the same account, but I need to get the contents of an old AWS account into a new one. I have AWS CLI configured with two profiles, but I don't see how I can use both profiles in a single copy/sync command.
Very simple. Let's say:
Old AWS account = old@aws.com
New AWS account = new@aws.com
Log in to the AWS console as old@aws.com.
Go to the bucket of your choice and apply the bucket policy below:
{
    "Statement": [
        {
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::bucket_name",
            "Principal": {
                "AWS": [
                    "account-id-of-new@aws.com-account"
                ]
            }
        },
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::bucket_name/*",
            "Principal": {
                "AWS": [
                    "account-id-of-new@aws.com-account"
                ]
            }
        }
    ]
}
I would guess that bucket_name and account-id-of-new@aws.com-account are evident to you in the above policy.
Now, make sure you are running the AWS CLI with the credentials of new@aws.com.
Run the command below and the copy will happen like a charm:
aws s3 cp s3://bucket_name/some_folder/some_file.txt s3://bucket_in_new@aws.com_account/fromold_account.txt
Of course, do make sure that new@aws.com has write privileges to its own bucket bucket_in_new@aws.com_account, which is used in the above command to save the stuff copied from the old@aws.com bucket.
Hope this helps.
OK, I have this working now! Thanks for your answers. In the end I used a combination of @slayedbylucifer's and @Sony Kadavan's answers. What worked for me was a new bucket policy and a new user policy.
I added the following bucket policy (Account A):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::myfoldername",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111111111111:user/myusername"
                ]
            }
        },
        {
            "Action": [
                "s3:*"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::myfoldername",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111111111111:user/myusername"
                ]
            }
        }
    ]
}
And the following user policy (Account B):
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::myfoldername/*"
    }
}
And used the following aws cli command (the region option was required because the accounts were in different regions):
aws --region us-east-1 s3 sync s3://myfoldername s3://myfoldername-accountb
Yes, you can.
You need to first create an IAM user in the second account and delegate permissions to it: read/write/list on the specific S3 bucket. Once you do this, provide this IAM user's credentials to your CLI and it will work.
How to delegate permissions:
Delegating Cross-Account Permissions to IAM Users - AWS Identity and Access Management : http://docs.aws.amazon.com/IAM/latest/UserGuide/DelegatingAccess.html#example-delegate-xaccount-roles
Sample S3 policy for delegation:
{
    "Version": "2012-10-17",
    "Statement": {
        "Sid": "AccountBAccess1",
        "Effect": "Allow",
        "Principal": {
            "AWS": "111122223333"
        },
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::mybucket/*"
    }
}
When you do this on production setups, be more restrictive with the permissions. If your need is only to copy from one bucket to another, then on the source side you need to grant only List and Get (not Put).
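For example, a read-only variant of the delegation could be applied like this (a sketch; mybucket and the account ID are the placeholders used above):
aws s3api put-bucket-policy --bucket mybucket --policy '{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AccountBReadOnly",
        "Effect": "Allow",
        "Principal": {"AWS": "111122223333"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
    }]
}'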
In my case the command mentioned below worked; hopefully it works for you as well. I have two AWS accounts in different regions, and I wanted to copy my old bucket's contents into the new bucket. I have the AWS CLI configured with two profiles.
Used the following aws cli command:
aws s3 cp --profile <profile1> s3://source_bucket_path/ --profile <profile2> s3://destination_bucket_path/ --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers --recursive

AWS S3 Bucket Permissions - Access Denied

I am trying to give myself permission to download existing files in an S3 bucket. I've modified the Bucket Policy, as follows:
{
    "Sid": "someSID",
    "Action": "s3:*",
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
    "Principal": {
        "AWS": [
            "arn:aws:iam::123123123123:user/myuid"
        ]
    }
}
My understanding is that addition to the policy should give me full rights to "bucketname" for my account "myuid", including all files that are already in that bucket. However, I'm still getting Access Denied errors when I try to download any of those files via the link that comes up in the console.
Any thoughts?
Step 1
Click on your bucket name, and under the permissions tab, make sure that Block new public bucket policies is unchecked
Step 2
Then you can apply your bucket policy
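If you prefer the CLI over the console, an equivalent call would be something like this (a sketch; mybucket is a placeholder, and note that this loosens the bucket's public-policy protection):
# Unblock public bucket policies for this one bucket
aws s3api put-public-access-block --bucket mybucket --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false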
Hope that helps
David, you are right, but I found that, in addition to what bennie said below, you also have to grant view access (or whatever access you want) to 'Authenticated Users'.
But a better solution might be to edit the user's policy to just grant access to the bucket:
{
    "Statement": [
        {
            "Sid": "Stmt1350703615347",
            "Action": [
                "s3:*"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::mybucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": ["arn:aws:s3:::mybucket"],
            "Condition": {}
        }
    ]
}
The first block grants all S3 permissions to all elements within the bucket. The second block grants list permission on the bucket itself.
Change the resource arn:aws:s3:::bucketname/AWSLogs/123123123123/* to arn:aws:s3:::bucketname/* to have full rights to bucketname.
To serve a static website from S3, use this bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::example-bucket/*"]
        }
    ]
}
Use the method below to upload any file in publicly readable form using TransferUtility on Android.
transferUtility.upload(String bucketName, String key, File file, CannedAccessControlList cannedAcl)
Example
transferUtility.upload("MY_BUCKET_NAME", "FileName", your_file, CannedAccessControlList.PublicRead);
To clarify: it is really not documented well, but you need two access statements.
In addition to your statement allowing actions on resource "arn:aws:s3:::bucketname/AWSLogs/123123123123/*", you also need a second statement that allows ListBucket on "arn:aws:s3:::bucketname", because internally the AWS client will try to list the bucket to confirm it exists before performing its action.
With the second statement, it should look like:
"Statement": [
{
"Sid": "someSID",
"Action": "ActionThatYouMeantToAllow",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
"Principal": {
"AWS": [
"arn:aws:iam::123123123123:user/myuid"
]
},
{
"Sid": "someOtherSID",
"Action": "ListBucket",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucketname",
"Principal": {
"AWS": [
"arn:aws:iam::123123123123:user/myuid"
]
}
]
Note: If you're using IAM, skip the "Principal" part.
If you have an encrypted bucket, you will also need KMS permissions (e.g. kms:Decrypt) allowed.
Possible reason: if the files were put/copied by a user from another AWS account, you cannot access them because you are still not the object owner. The user from the other AWS account who placed the files in your bucket has to grant access during the put or copy operation.
For a put operation, the object owner can run this command:
aws s3api put-object --bucket destination_awsexamplebucket --key dir-1/my_images.tar.bz2 --body my_images.tar.bz2 --acl bucket-owner-full-control
For a copy operation of a single object, the object owner can run this command:
aws s3api copy-object --bucket destination_awsexamplebucket --copy-source source_awsexamplebucket/myobject --key myobject --acl bucket-owner-full-control
Ref: AWS documentation.
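Equivalently, with the higher-level CLI, the uploader can pass the same canned ACL (a sketch reusing the placeholder bucket name from above):
aws s3 cp my_images.tar.bz2 s3://destination_awsexamplebucket/dir-1/my_images.tar.bz2 --acl bucket-owner-full-control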
Giving public access to the bucket just to be able to add a policy is NOT the right way.
It exposes your bucket to the public, even if only for a short amount of time.
You will face this error even if you have admin access (the root user will not face it).
According to the AWS documentation, you have to grant "s3:PutBucketPolicy" to your IAM user.
So simply attach an S3 policy to your IAM user, as in the sketch below; mention your bucket ARN to make it safer, and you won't have to make your bucket public again.
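A minimal sketch of such a policy, attached inline to the IAM user via the CLI (the user name, policy name, and bucket ARN are hypothetical placeholders):
aws iam put-user-policy --user-name my-user --policy-name AllowBucketPolicyEdit --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutBucketPolicy", "s3:GetBucketPolicy"],
        "Resource": "arn:aws:s3:::my-bucket"
    }]
}'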
No one mentioned MFA. For Amazon users who have enabled MFA, please use this:
aws s3 ls s3://bucket-name --profile mfa
And prepare the mfa profile first by running
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/user-name --token-code 928371 --duration-seconds 129600 (replace 123456789012, user-name and 928371).
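One way to prepare the mfa profile is to store the returned temporary credentials with aws configure set (a sketch; the angle-bracket values come from the get-session-token response):
aws configure set aws_access_key_id <AccessKeyId> --profile mfa
aws configure set aws_secret_access_key <SecretAccessKey> --profile mfa
aws configure set aws_session_token <SessionToken> --profile mfa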
This can also happen if the encryption algorithm is missing from the S3 request parameters. If the bucket's default encryption is enabled, e.g. with Amazon S3-managed keys (SSE-S3), you need to pass ServerSideEncryption: "AES256" | "aws:kms" in your request params.
const params = {
    Bucket: BUCKET_NAME,
    Body: content,
    Key: fileKey,
    ContentType: "audio/m4a",
    ServerSideEncryption: "AES256" // Here ..
}
await S3.putObject(params).promise()
Go to the AWS Policy Generator and generate a policy.
In the Principal field give *
In Actions, select GetObject.
Give the ARN as arn:aws:s3:::<bucket_name>/*
Then add the statement and generate the policy; you will get a JSON document. Copy it and paste it into the bucket policy.