Cross-account sync not working - amazon-web-services

I am stumped.
I have a credential for account A: arn:aws:iam::<ACCOUNT>:user/<USER>
I have a bucket in account B: arn:aws:s3:::bucket-name
Policy on the bucket in account B is set to:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::<ACCOUNT>:user/<USER>"},
      "Resource": [
        "arn:aws:s3:::bucket-name/*",
        "arn:aws:s3:::bucket-name"
      ]
    }
  ]
}
aws --profile <PROFILE> s3 ls s3://bucket-name
fails with:
An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
I have tried 1001 variations on the policy. What am I missing?

Open the IAM tab in your AWS console, create a group, and once the group is created, attach a policy with the following information:
{
  "Statement": {
    "Effect": "Allow",
    "Action": "s3:Get*",
    "Resource": "arn:aws:s3:::your-bucket-name/*"
  }
}
Make your IAM user a member of the group you created, and they will then have access to your S3 bucket in the separate AWS account.
Edit: As pointed out in a comment, I would like to add that for this to work you need permission from both the bucket owner's account and the user's own account before cross-account access is allowed.
So what you might be missing is the permission from the user's own account.
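For example, here is a minimal sketch in Python (boto3) of attaching such a policy to the user in their own account; the user name, policy name, and bucket name are placeholders, not values from the question. Note that aws s3 ls also needs s3:ListBucket on the bucket ARN itself; s3:Get* on the objects alone will not allow listing:

import json
import boto3

# Sketch: attach an inline policy to the IAM user in the user's OWN account
# (account A), granting both listing and object reads on the bucket in
# account B. All names below are placeholders.
iam = boto3.client('iam')
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::bucket-name",    # needed for listing (aws s3 ls)
                "arn:aws:s3:::bucket-name/*"   # needed for object reads
            ]
        }
    ]
}
iam.put_user_policy(
    UserName='<USER>',
    PolicyName='cross-account-s3-read',
    PolicyDocument=json.dumps(policy),
)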

aws --profile <PROFILE> s3 ls s3://bucket-name
Double-check your PROFILE here; maybe you are using the wrong profile.
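If in doubt, here is a quick sketch to print the identity a profile actually resolves to, so you can compare it against the Principal in the bucket policy (the profile name is a placeholder):

import boto3

# Sketch: show which ARN a named profile authenticates as.
session = boto3.Session(profile_name='<PROFILE>')
print(session.client('sts').get_caller_identity()['Arn'])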
This is a good article to read: Reference

Related

AWS S3: Able to list buckets and download items via GUI but not via AWS CLI

The title sums up the problem. When entering the GUI I observe the following role in the upper right corner:
my_name @ 1234
When calling aws sts get-caller-identity --profile my_role in the CLI I get:
{
  "UserId": "my_user_id",
  "Account": "1234",
  "Arn": "arn:aws:iam::1234:user/my_name"
}
From that I conclude that I am logged in with the same role in the GUI and the CLI. When opening the S3 bucket "s3_bucket_signature-1" via the GUI I can see all the files in the bucket and I am able to download them. However, when calling
aws s3 cp --recursive s3://s3_bucket_signature-1/* my_dir --profile my_role
I get:
fatal error: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied?
My role is within a user group. Every role in this user group has the following permissions
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3-object-lambda:Get*",
        "s3-object-lambda:List*"
      ],
      "Resource": [
        "arn:aws:s3:::s3_bucket_signature-*",
        "arn:aws:s3:::s3_bucket_signature-*/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    }
  ]
}
Any idea what is going on here?
It was an issue with MFA. When MFA is enabled and you want to access resources via the CLI, perform the steps described in:
How to use MFA with AWS CLI?
and if you want to use the boto3 API, see:
https://charlesvictus.medium.com/using-mfa-with-aws-using-python-and-boto3-f4f3e532f177
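For completeness, a minimal boto3 sketch of the same idea; the MFA device ARN, token code, and bucket name below are placeholders:

import boto3

# Sketch: exchange an MFA code for temporary credentials, then use them
# for S3 calls. The serial number and 6-digit token code are placeholders.
sts = boto3.client('sts')
creds = sts.get_session_token(
    SerialNumber='arn:aws:iam::1234:mfa/my_name',
    TokenCode='123456',
)['Credentials']
mfa_sess = boto3.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
for obj in mfa_sess.resource('s3').Bucket('s3_bucket_signature-1').objects.all():
    print(obj.key)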

How to add multiple users to Access Control List for many files on S3

I figured out how to give other AWS accounts access to an S3 bucket. If I understand correctly, the permissions given to the bucket are not the same as the permissions given to each object in the bucket. I want all the objects in the bucket to have the same permissions.
To give users list access to the bucket:
aws2 s3api put-bucket-acl --bucket BucketName --grant-read-acp emailaddress=email1@example.com,emailaddress=email2@example.com,… --grant-read emailaddress=email1@example.com,emailaddress=email2@example.com,…
To give users list access to one object:
aws2 s3api put-object-acl --bucket BucketName --key myObject.txt --grant-read-acp emailaddress=email1@example.com,emailaddress=email2@example.com --grant-read emailaddress=email1@example.com,emailaddress=email2@example.com
However, I have hundreds of thousands of objects on S3. How do I grant the same access to all of them using the Amazon Web Service Command Line Interface (AWS CLI)?
What you are looking for is put-bucket-acl. Here is the AWS documentation.
The example provided is:
aws s3api put-bucket-acl --bucket MyBucket --grant-full-control emailaddress=user1@example.com,emailaddress=user2@example.com --grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers
In your example, the flag --grant-read-acp does not grant access to the objects in the bucket. Per the documentation, --grant-read-acp "Allows grantee to read the bucket ACL". Not very useful in your case.
Whereas --grant-full-control grants read, write, read ACP, and write ACP on the bucket. If you look at the documentation I linked, you can see all the allowed flags.
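If you still need to touch every existing object (bucket ACLs do not cascade to objects), here is a sketch of looping over them in Python (boto3); the bucket name and grantee address are placeholders, and with hundreds of thousands of objects this is one API call per object, so expect it to take a while:

import boto3

# Sketch: apply the same read grant to every existing object in a bucket.
s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='BucketName'):
    for obj in page.get('Contents', []):
        s3.put_object_acl(
            Bucket='BucketName',
            Key=obj['Key'],
            GrantRead='emailaddress="email1@example.com"',
        )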
This answer is based upon the requirements of:
Grant Read & List access to whole bucket
To a list of AWS Accounts
You can attach a Bucket Policy to the Amazon S3 bucket with a list of AWS Account IDs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::ACCOUNT-ID-1:root",
          "arn:aws:iam::ACCOUNT-ID-2:root",
          "arn:aws:iam::ACCOUNT-ID-3:root"
        ]
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::BUCKET-NAME",
        "arn:aws:s3:::BUCKET-NAME/*"
      ]
    }
  ]
}
This will give access if they use their root login (where they log in via an email address), and I think it will also work for an IAM User in their account, as long as they have been granted sufficient IAM permissions for Amazon S3 within their own account (e.g. s3:* or, more safely, s3:GetObject and s3:ListBucket for the desired bucket).
Since you "want all the objects in the bucket to have the same permissions", and you wish to apply the permissions to a set of users, I would recommend:
Create an IAM Group
Assign the desired IAM Users to the IAM Group
Add a policy to the IAM Group that grants access to the bucket
Here is an example from User Policy Examples - Amazon Simple Storage Service:
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Effect":"Allow",
      "Action":[
        "s3:ListAllMyBuckets"
      ],
      "Resource":"arn:aws:s3:::*"
    },
    {
      "Effect":"Allow",
      "Action":[
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource":"arn:aws:s3:::examplebucket"
    },
    {
      "Effect":"Allow",
      "Action":[
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource":"arn:aws:s3:::examplebucket/*"
    }
  ]
}
You can modify the policy as desired. The above policy grants permission to:
See a list of all buckets
List the contents of a specific bucket
Get/Put/Delete the contents of a specific bucket

How to PUT S3 objects from another AWS account into your own account S3 bucket using assume role?

I have a pretty typical use case where I have been granted a role in AWS account 1234567890 (not under my control) to read data from their S3 bucket ('remote_bucket'). I can read the data from the remote bucket just fine, but I can no longer dump it into my own bucket, because assuming a role in another AWS account "hides" the grants to resources in my own account. The last line fails with the error below. How do I solve this?
import boto3
# Create IAM client and local session
sts = boto3.client('sts')
local_sess = boto3.Session()
s3_local = local_sess.resource('s3')
role_to_assume_arn='arn:aws:iam::1234567890:role/s3_role'
role_session_name='test'
# Assume role in another account to access their S3 bucket
response = sts.assume_role(
    RoleArn=role_to_assume_arn,
    RoleSessionName='test',
    ExternalId='ABCDEFG12345'
)
creds = response['Credentials']
# Open a session in the other account:
assumed_sess = boto3.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
remote_bucket = 'remote_bucket'
s3_assumed = assumed_sess.resource('s3')
bk_assumed = s3_assumed.Bucket(remote_bucket)
for o in bk_assumed.objects.filter(Prefix="prefix/"):
    print(o.key)
    in_object = s3_assumed.Object(remote_bucket, o.key)
    content = in_object.get()['Body'].read()
    s3_local.Object('my_account_bucket', o.key).put(Body=content)
Error:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
(TL;DR See the very end for a way to configure permissions that is easier than using Roles!)
The easiest way to move data between buckets is to use the copy_object() command. This command is sent to the destination bucket and "pulls" information from the source bucket.
This is made slightly more complicated when multiple AWS accounts are involved because the same set of credentials requires GetObject permission on the source bucket AND PutObject permission on the destination bucket.
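As a rough sketch of that pattern in boto3 (assuming, as described below, the role has also been granted PutObject on the destination bucket; the names are taken from the question):

import boto3

# Sketch: one set of assumed-role credentials both reads the source and
# writes the destination, so copy_object needs no local round trip.
creds = boto3.client('sts').assume_role(
    RoleArn='arn:aws:iam::1234567890:role/s3_role',
    RoleSessionName='test',
    ExternalId='ABCDEFG12345',
)['Credentials']
s3 = boto3.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
).client('s3')
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='remote_bucket', Prefix='prefix/'):
    for obj in page.get('Contents', []):
        s3.copy_object(
            Bucket='my_account_bucket',
            Key=obj['Key'],
            CopySource={'Bucket': 'remote_bucket', 'Key': obj['Key']},
        )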
Your situation appears to be:
GetObject permission has been granted on the source bucket via a Role that you can assume
However, that role does not have PutObject permission on the destination bucket
It is important that the role is also assigned permission to write to the destination bucket.
To test this situation, I did the following:
Created Bucket-A in Account-A (the source bucket) and uploaded some test files
Created Bucket-B in Account-B (the destination bucket)
Created Role-A in Account-A, specifying that it can be used by Account-B
Assigned this IAM policy to Role-A:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:List*",
        "s3:Get*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-a",
        "arn:aws:s3:::bucket-a/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:Put*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-b/*"
      ]
    }
  ]
}
Note that the role has also been given permission to write to Bucket-B. This might not be present in your particular situation, but it is necessary otherwise Account-A will not permit the role to call Bucket-B!
To clarify: When using cross-account permissions, both accounts need to grant permission. In this case, Account-A is granting Role-A permission to write to Bucket-B, but Account-B also has to permit the write (see Bucket Policy below).
Created User-B in Account-B and gave permissions to call AssumeRole on Role-A in Account-A:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AssumeRoleA",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::111111111111:role/role-a"
    }
  ]
}
Created a Bucket Policy on Bucket-B allowing Role-A to put files into the bucket. This is important because Account-A does not have any access to resources in Account-B, but this bucket policy will allow Role-A to use the bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket-b/*",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111111111111:role/role-a"
        ]
      }
    }
  ]
}
Updated my credentials file to use Role-A:
[role-a]
role_arn = arn:aws:iam::111111111111:role/role-a
source_profile = user-b
Confirmed that I could access Bucket-A:
aws s3 ls s3://bucket-a --profile role-a
Confirmed that I could copy objects from Bucket-A to Bucket-B:
aws s3 cp s3://bucket-a/foo.txt s3://bucket-b/ --profile role-a
This worked successfully.
Summary
The above process might seem rather complex but it can be easily divided between the source and destination:
Source: Provided Role-A that can read from Bucket-A, also with permissions to write to Bucket-B
Destination: Bucket policy on Bucket-B allowing Role-A to write to it
If Account-A is not willing to provide role permissions that can both read from Bucket-A and write to Bucket-B, then there is another option:
Ask Account-A to add a bucket policy to Bucket-A permitting User-B to GetObject
User-B can then use normal (User-B) permissions to read the content from Bucket-A without any need for AssumeRole
This is, obviously, a lot easier than assuming a role.

Granting write access for the Authenticated Users to S3 bucket

I want to give read access to all AWS authenticated users to a bucket. Note that I don't want my bucket to be publicly available. The old Amazon console seemed to provide for that, which I no longer see:
Old S3 bucket ACL -
New bucket ACL -
How can I achieve the old behavior? Can I do it using bucket policies?
Again, I don't want:
{
  "Id": "Policy1510826508027",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1510826503866",
      "Action": [
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::athakur",
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}
That support has been removed from the new S3 console and has to be set via ACLs.
You can use the put-bucket-acl API to set Any Authenticated AWS User as the grantee.
The grantee for this is:
<Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group"><URI>http://acs.amazonaws.com/groups/global/AuthenticatedUsers</URI></Grantee>
Refer to http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTacl.html for more info.
We can give the entire ACL string in the AWS CLI command as ExploringApple explained, or just do:
aws s3api put-bucket-acl --bucket bucketname --grant-full-control uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers
Docs - http://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-acl.html
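The boto3 equivalent is a short sketch; note that the AuthenticatedUsers group means any AWS account in existence, not just users of your own account, so this sketch deliberately grants only read rather than full control:

import boto3

# Sketch: grant read on the bucket to the "any authenticated AWS user" group.
boto3.client('s3').put_bucket_acl(
    Bucket='bucketname',
    GrantRead='uri="http://acs.amazonaws.com/groups/global/AuthenticatedUsers"',
)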

AWS S3 Bucket Permissions - Access Denied

I am trying to give myself permission to download existing files in an S3 bucket. I've modified the Bucket Policy, as follows:
{
  "Sid": "someSID",
  "Action": "s3:*",
  "Effect": "Allow",
  "Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
  "Principal": {
    "AWS": [
      "arn:aws:iam::123123123123:user/myuid"
    ]
  }
}
My understanding is that this addition to the policy should give me full rights to "bucketname" for my user "myuid", including all files that are already in that bucket. However, I'm still getting Access Denied errors when I try to download any of those files via the link that comes up in the console.
Any thoughts?
Step 1
Click on your bucket name, and under the permissions tab, make sure that Block new public bucket policies is unchecked
Step 2
Then you can apply your bucket policy
Hope that helps
David, you are right, but I found that, in addition to what bennie said below, you also have to grant view (or whatever access you want) to 'Authenticated Users'.
But a better solution might be to edit the user's policy to just grant access to the bucket:
{
  "Statement": [
    {
      "Sid": "Stmt1350703615347",
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::mybucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": ["arn:aws:s3:::mybucket"],
      "Condition": {}
    }
  ]
}
The first block grants all S3 permissions to all elements within the bucket. The second block grants list permission on the bucket itself.
Change the resource arn:aws:s3:::bucketname/AWSLogs/123123123123/* to arn:aws:s3:::bucketname/* to have full rights to bucketname.
For serving a static website from S3:
This is the bucket policy:
{
  "Version":"2012-10-17",
  "Statement":[
    {
      "Sid":"PublicReadGetObject",
      "Effect":"Allow",
      "Principal": "*",
      "Action":["s3:GetObject"],
      "Resource":["arn:aws:s3:::example-bucket/*"]
    }
  ]
}
Use the method below to upload any file in publicly readable form using TransferUtility on Android.
transferUtility.upload(String bucketName, String key, File file, CannedAccessControlList cannedAcl)
Example
transferUtility.upload("MY_BUCKET_NAME", "FileName", your_file, CannedAccessControlList.PublicRead);
To clarify: it is really not documented well, but you need two access statements.
In addition to your statement that allows actions on resource "arn:aws:s3:::bucketname/AWSLogs/123123123123/*", you also need a second statement that allows s3:ListBucket on "arn:aws:s3:::bucketname", because internally the AWS client will try to list the bucket to determine that it exists before performing its action.
With the second statement, it should look like:
"Statement": [
{
"Sid": "someSID",
"Action": "ActionThatYouMeantToAllow",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
"Principal": {
"AWS": [
"arn:aws:iam::123123123123:user/myuid"
]
},
{
"Sid": "someOtherSID",
"Action": "ListBucket",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucketname",
"Principal": {
"AWS": [
"arn:aws:iam::123123123123:user/myuid"
]
}
]
Note: if you're attaching this as an IAM policy, skip the "Principal" part.
If you have an encrypted bucket, you will also need to allow the relevant KMS actions (for SSE-KMS, reads require kms:Decrypt).
Possible reason: if the files were put/copied by a user from another AWS account, then you cannot access them, since the file owner is still not you. The AWS account user who placed the files in your bucket has to grant access during the put or copy operation.
For a put operation, the object owner can run this command:
aws s3api put-object --bucket destination_awsexamplebucket --key dir-1/my_images.tar.bz2 --body my_images.tar.bz2 --acl bucket-owner-full-control
For a copy operation of a single object, the object owner can run this command:
aws s3api copy-object --bucket destination_awsexamplebucket --copy-source source_awsexamplebucket/myobject --key myobject --acl bucket-owner-full-control
Ref: AWS Link
Giving public access to the bucket in order to add a policy is NOT the right way.
It exposes your bucket to the public, even if only for a short amount of time.
You will face this error even if you have admin access (the root user will not face it).
According to the AWS documentation, you have to grant "s3:PutBucketPolicy" to your IAM user.
So simply add an S3 policy to your IAM user, and mention your bucket ARN to make it safer; then you don't have to make your bucket public again.
No one mentioned MFA. For Amazon users who have MFA enabled, please use this:
aws s3 ls s3://bucket-name --profile mfa
And prepare the mfa profile first by running
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/user-name --token-code 928371 --duration-seconds 129600 (replace 123456789012, user-name, and 928371, and store the returned temporary credentials in the mfa profile).
This can also happen if the encryption algorithm is missing from the S3 parameters. If the bucket's default encryption is enabled, e.g. with Amazon S3-managed keys (SSE-S3), you need to pass ServerSideEncryption: "AES256"|"aws:kms"|string in your request parameters:
const params = {
  Bucket: BUCKET_NAME,
  Body: content,
  Key: fileKey,
  ContentType: "audio/m4a",
  ServerSideEncryption: "AES256" // Here ..
}
await S3.putObject(params).promise()
Go to this link and generate a policy.
In the Principal field give *.
In Actions, select GetObject.
Give the ARN as arn:aws:s3:::<bucket_name>/*.
Then add the statement and generate the policy; you will get a JSON document. Just copy it and paste it into the Bucket Policy.
For more details, go here.