S3 bucket policy for an instance to read from two different accounts - amazon-web-services

I have an instance which needs to read data from S3 in two different accounts.
Bucket in DataAccount with bucket name "dataaccountlogs"
Bucket in UserAccount with bucket name "userlogs"
I have console access to both accounts, so now I need to configure bucket policies to allow the instance to read S3 data from the buckets dataaccountlogs and userlogs; my instance is running in UserAccount.
I need to access these two buckets both from the command line and from a Spark job.

You will need a role in UserAccount which will be used to access the mentioned buckets, say RoleA. The role should have permissions for the required S3 operations.
Then you will be able to configure a bucket policy for each bucket:
For DataAccount:
{
  "Version": "2012-10-17",
  "Id": "Policy1",
  "Statement": [
    {
      "Sid": "test1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::UserAccount:role/RoleA"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::dataaccountlogs",
        "arn:aws:s3:::dataaccountlogs/*"
      ]
    }
  ]
}
For UserAccount:
{
  "Version": "2012-10-17",
  "Id": "Policy1",
  "Statement": [
    {
      "Sid": "test1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::UserAccount:role/RoleA"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::userlogs",
        "arn:aws:s3:::userlogs/*"
      ]
    }
  ]
}
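If you manage the buckets from the CLI rather than the console, the policies can be applied with put-bucket-policy. A minimal sketch, assuming each policy above was saved to the JSON file named in the command:
# Apply each bucket policy from a file (the file names are placeholders).
aws s3api put-bucket-policy --bucket dataaccountlogs --policy file://dataaccountlogs-policy.json
aws s3api put-bucket-policy --bucket userlogs --policy file://userlogs-policy.json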
For accessing them from the command line:
You will need to set up the AWS CLI tool first:
https://docs.aws.amazon.com/polly/latest/dg/setup-aws-cli.html
Then you will need to configure a profile for using your role.
First, make a profile for your user to log in with:
aws configure --profile YourProfileAlias
And follow instructions for setting up credentials.
Then you will need to edit the config file and add a profile for the role:
~/.aws/config
Add a block at the end:
[profile YourRoleProfileName]
role_arn = arn:aws:iam::UserAccount:role/RoleA
source_profile = YourProfileAlias
After that you will be able to use aws s3api ... --profile YourRoleProfileName to access both buckets on behalf of the created role.
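For example, a quick sanity check with the role profile (bucket names as above):
# List both buckets on behalf of the assumed role.
aws s3 ls s3://dataaccountlogs --profile YourRoleProfileName
aws s3 ls s3://userlogs --profile YourRoleProfileName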
To access from Spark:
If you run your cluster on EMR, you should use a SecurityConfiguration and fill in the section for S3 role configuration. A different role can be specified for each specific bucket. You should use the "Prefix" constraint and list all destination prefixes after it, like "s3://dataaccountlogs/,s3://userlogs".
Note: you should strictly use the s3 protocol for this, not s3a. Also there are a number of limitations, which you can find here:
https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-s3-optimized-committer.html
Another way with Spark is to configure Hadoop to assume your role. Put
spark.hadoop.fs.s3a.aws.credentials.provider = "org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider"
and configure the role to be used:
spark.hadoop.fs.s3a.assumed.role.arn = arn:aws:iam::UserAccount:role/RoleA
This way is more general now, since the EMR committer has various limitations. You can find more information for configuring this in the Hadoop docs:
https://hadoop.apache.org/docs/r3.1.1/hadoop-aws/tools/hadoop-aws/assumed_roles.html
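As a rough sketch of the second approach, both settings can also be passed at submit time rather than in cluster config; the ARN follows the placeholder convention used above, and the jar name is hypothetical:
# Pass the s3a assumed-role settings to a single job.
spark-submit \
  --conf spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider \
  --conf spark.hadoop.fs.s3a.assumed.role.arn=arn:aws:iam::UserAccount:role/RoleA \
  your-job.jar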

Related

IAM Role policy for cross account access to S3 bucket in a specific AWS account

Allow access from an IAM Role in AccountA to given S3 buckets only if they are present in AWS AccountB (using the account number).
Here is my Role policy in AccountA, which currently has the following permissions. How can I update it to only access S3 buckets in AccountB?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3:Put*"
      ],
      "Resource": [
        "arn:aws:s3:::kul-my-bucket/my-dir/*"
      ]
    },
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::kul-my-bucket"
      ]
    }
  ]
}
Wondering if it is possible or should I try it differently?
Is anything similar to this possible for my case by providing the condition in the policy:
"Condition": {
"StringLike": {
"aws:accountId": [
"111111111111"
]
}
}
I need this because the S3 bucket in AccountB allows root access to AccountA. Hence I want to put a restriction on the Role policy in AccountA.
I do not think it is possible to grant/deny access to an Amazon S3 bucket based on an AWS Account number. This is because Amazon S3 ARNs exclude the Account ID and instead use the unique bucket name.
You would need to grant Allow access specifically to each bucket by name.
I have seen this situation before, where the requirement was to grant S3 permission to access buckets only in other accounts, but not in the account owning the IAM User itself. We could not determine a way to do this without also granting permission to access the "same account" S3 buckets.
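For illustration, a sketch of such a role policy with the AccountB buckets named explicitly; the bucket names, role name, policy name, and file name are all hypothetical:
# Grant access bucket-by-bucket; there is no account-wide S3 wildcard.
cat > accountb-buckets-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*", "s3:Put*"],
      "Resource": [
        "arn:aws:s3:::accountb-bucket-1",
        "arn:aws:s3:::accountb-bucket-1/*",
        "arn:aws:s3:::accountb-bucket-2",
        "arn:aws:s3:::accountb-bucket-2/*"
      ]
    }
  ]
}
EOF
aws iam put-role-policy --role-name MyRoleInAccountA \
    --policy-name AccountBBucketsOnly \
    --policy-document file://accountb-buckets-policy.json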

How to sync multiple S3 buckets using multiple AWS accounts?

I am having trouble syncing two S3 buckets that are attached to two separate AWS accounts.
There are two AWS accounts - Account A which is managed by a third party and Account B, which I manage. I am looking to pull files from an S3 bucket in Account A to an S3 bucket in Account B.
Account A provided me the following instructions:
In Account B, create a new IAM user called LogsUser. Attach the following policy to the user:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::ACCOUNTID:role/12345-LogAccess-role"
    }
  ]
}
Configure the AWS CLI to update the config and credentials files. Specifically, the ~/.aws/config file to look like:
[profile LogsUser]
role_arn = arn:aws:iam::ACCOUNTID:role/12345-LogAccess-role
source_profile = LogsUser
And the ~/.aws/credentials file to look like:
[LogsUser]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
From here, I am successfully able to query the log files in Account A's bucket using $ aws s3 ls --profile LogsUser s3://bucket-a.
I have set up bucket-b in Account B, however, I am unable to query any files in bucket-b. For example, $ aws s3 ls --profile LogsUser s3://bucket-b returns An error occurred (AccessDenied) when calling the AssumeRole operation: Access denied.
Is there something additional I can add to the config file or my IAM policy to allow access to bucket-b using the --profile LogsUser option? I can access bucket-b using other --profile settings, but I am not looking to sync to the local file system and then to another bucket.
The desired result is to run a command like aws s3 sync s3://bucket-a s3://bucket-b --profile LogsUser.
For example, if you want to copy “Account A” S3 bucket objects to an “Account B” S3 bucket, follow the steps below.
Create a policy for the S3 bucket in “Account A” like the policy below. For that, you need the “Account B” account number; to find it, go to Support → Support Center and copy the account number from there.
Set up the “Account A” bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DelegateS3Access",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_B_NUMBER:root"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::ACCOUNT_A_BUCKET_NAME/*",
        "arn:aws:s3:::ACCOUNT_A_BUCKET_NAME"
      ]
    }
  ]
}
Log into “Account B” and create a new IAM user, or attach the below policy to an existing user.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::ACCOUNT_A_BUCKET_NAME",
        "arn:aws:s3:::ACCOUNT_A_BUCKET_NAME/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::ACCOUNT_B_BUCKET_NAME",
        "arn:aws:s3:::ACCOUNT_B_BUCKET_NAME/*"
      ]
    }
  ]
}
Configure the AWS CLI with the “Account B” IAM user (the one you created with the above user policy):
aws s3 sync s3://ACCOUNT_A_BUCKET_NAME s3://ACCOUNT_B_BUCKET_NAME --source-region ACCOUNT_A_REGION-NAME --region ACCOUNT_B_REGION-NAME
This way we can copy S3 bucket objects across different AWS accounts.
If you have multiple awscli profiles, append --profile <profile name> to the end of the command.
Your situation is:
You wish to copy from Bucket-A in Account-A
The files need to be copied to Bucket-B in Account-B
Account-A has provided you with the ability to assume LogAccess-role in Account-A, which has access to Bucket-A
When copying files between buckets using the CopyObject() command (which is used by the AWS CLI sync command), it requires:
Read Access on the source bucket (Bucket-A)
Write Access on the destination bucket (Bucket-B)
When you assume LogAccess-role, you receive credentials that have Read Access on Bucket-A. That is great! However, those credentials do not have permission to write to Bucket-B because it is in a separate account.
To overcome this, you should create a Bucket Policy on Bucket-B that grants Write Access to LogAccess-role from Account-A. The Bucket Policy would look something like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT-A:role/12345-LogAccess-role"
      },
      "Action": [
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-b",
        "arn:aws:s3:::bucket-b/*"
      ]
    }
  ]
}
(You might need other permissions. Check any error messages for hints.)
That way, LogAccess-role will be able to read from Bucket-A and write to Bucket-B.
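With that bucket policy in place, the sync from the question should work under the assumed-role profile, e.g.:
aws s3 sync s3://bucket-a s3://bucket-b --profile LogsUser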
I would suggest you consider using AWS S3 bucket replication:
https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
If you just want to list objects in bucket-b, do this.
First make sure the LogsUser IAM user has proper permission to access the bucket-b S3 bucket in Account B. If not, you can add this policy to the user:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-b",
        "arn:aws:s3:::bucket-b/*"
      ]
    }
  ]
}
If these permissions are attached to the user, and if the access key and secret key stored in ~/.aws/credentials under [default] belong to the LogsUser IAM user, you can simply list objects inside bucket-b with the following command.
aws s3 ls s3://bucket-b
If you want to run the command aws s3 sync s3://bucket-a s3://bucket-b --profile LogsUser, do this.
Remember, we will be using temporary credentials created by STS after assuming the role with the permanent credentials of LogsUser. That means the role in Account A should have proper access to both buckets to perform the action, and the bucket (bucket-b) in the other account (Account B) should have a proper bucket policy to allow the role to perform S3 operations.
To provide permissions for the role to access bucket-b, attach the following bucket policy to bucket-b.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTID:role/12345-LogAccess-role"
      },
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-b",
        "arn:aws:s3:::bucket-b/*"
      ]
    }
  ]
}
Also in Account A, attach a policy like the one below to the role, to allow access to the S3 buckets in both accounts.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket-b",
        "arn:aws:s3:::bucket-b/*",
        "arn:aws:s3:::bucket-a",
        "arn:aws:s3:::bucket-a/*"
      ]
    }
  ]
}
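A minimal end-to-end check once both policies are attached, using the profile from the question:
# Confirm read on the source, list on the destination, then sync.
aws s3 ls s3://bucket-a --profile LogsUser
aws s3 ls s3://bucket-b --profile LogsUser
aws s3 sync s3://bucket-a s3://bucket-b --profile LogsUser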

Granting write access for the Authenticated Users to S3 bucket

I want to give read access to all AWS authenticated users for a bucket. Note that I don't want my bucket to be publicly available. The old Amazon console seemed to give that provision, which I no longer see:
Old S3 bucket ACL: [screenshot]
New bucket ACL: [screenshot]
How can I achieve the old behavior? Can I do it using bucket policies?
Again, I don't want:
{
  "Id": "Policy1510826508027",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1510826503866",
      "Action": [
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::athakur",
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}
That support is removed in the new S3 console and has to be set via ACLs.
You can use the put-bucket-acl API to set Any Authenticated AWS User as the grantee.
The grantee for this is:
<Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group"><URI>http://acs.amazonaws.com/groups/global/AuthenticatedUsers</URI></Grantee>
Refer to http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTacl.html for more info.
We can give the entire ACL string in the AWS CLI command as ExploringApple explained, or just do -
aws s3api put-bucket-acl --bucket bucketname --grant-full-control uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers
Docs - http://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-acl.html
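Since the question asks for read access rather than full control, the read-only variant of the same command may be closer to what you want (a sketch):
aws s3api put-bucket-acl --bucket bucketname --grant-read uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers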

Is it possible to copy between AWS accounts using AWS CLI?

Is it possible using AWS CLI to copy the contents of S3 buckets between AWS accounts? I know it's possible to copy/sync between buckets in the same account, but I need to get the contents of an old AWS account into a new one. I have AWS CLI configured with two profiles, but I don't see how I can use both profiles in a single copy/sync command.
Very simple. Let's say:
Old AWS Account = old#aws.com
New AWS Account = new#aws.com
Log into the AWS console as old#aws.com.
Go to the bucket of your choice and apply the below bucket policy:
{
  "Statement": [
    {
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket_name",
      "Principal": {
        "AWS": [
          "account-id-of-new#aws.com-account"
        ]
      }
    },
    {
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucket_name/*",
      "Principal": {
        "AWS": [
          "account-id-of-new#aws.com-account"
        ]
      }
    }
  ]
}
I would guess that bucket_name and account-id-of-new#aws.com-account are evident to you in the above policy.
Now, make sure you are running the AWS CLI with the credentials of new#aws.com.
Run the below command and the copy will happen like a charm:
aws s3 cp s3://bucket_name/some_folder/some_file.txt s3://bucket_in_new#aws.com_acount/fromold_account.txt
Of course, do make sure that new#aws.com has write privileges to its own bucket bucket_in_new#aws.com_acount, which is used in the above command to save the stuff copied from the old#aws.com bucket.
Hope this helps.
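To move the whole bucket rather than a single file, sync should work under the same bucket policy (a sketch using the same placeholder names):
aws s3 sync s3://bucket_name s3://bucket_in_new#aws.com_acount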
Ok, I have this working now! Thanks for your answers. In the end I used a combination of #slayedbylucifer's and #Sony Kadavan's answers. What worked for me was a new bucket policy and a new user policy.
I added the following bucket policy (Account A):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::myfoldername",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111111111111:user/myusername"
        ]
      }
    },
    {
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::myfoldername/*",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111111111111:user/myusername"
        ]
      }
    }
  ]
}
And the following user policy (Account B):
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::myfoldername/*"
  }
}
And I used the following AWS CLI command (the region option was required because the accounts were in different regions):
aws --region us-east-1 s3 sync s3://myfoldername s3://myfoldername-accountb
Yes, you can.
You need to first create an IAM user in the second account and delegate permissions to it - read/write/list on a specific S3 bucket. Once you do this, provide this IAM user's credentials to your CLI and it will work.
How to delegate permissions:
Delegating Cross-Account Permissions to IAM Users - AWS Identity and Access Management : http://docs.aws.amazon.com/IAM/latest/UserGuide/DelegatingAccess.html#example-delegate-xaccount-roles
Sample S3 policy for delegation:
{
"Version": "2012-10-17",
"Statement" : {
"Effect":"Allow",
"Sid":"AccountBAccess1",
"Principal" : {
"AWS":"111122223333"
},
"Action":"s3:*",
"Resource":"arn:aws:s3:::mybucket/*"
}
}
When you do this on production setups, be more restrictive with the permissions. If your need is to copy from one bucket to another, then on the source side you need to give only List and Get (not Put).
In my case the below mentioned command worked; hopefully it works for you as well. I have two different AWS accounts in different regions, and I wanted to copy my old bucket's contents into a new bucket. I have the AWS CLI configured with two profiles.
I used the following AWS CLI command:
aws s3 cp --profile <profile1> s3://source_bucket_path/ --profile <profile2> s3://destination_bucket_path/ --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers --recursive

AWS S3 Bucket Permissions - Access Denied

I am trying to give myself permission to download existing files in an S3 bucket. I've modified the Bucket Policy, as follows:
{
  "Sid": "someSID",
  "Action": "s3:*",
  "Effect": "Allow",
  "Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
  "Principal": {
    "AWS": [
      "arn:aws:iam::123123123123:user/myuid"
    ]
  }
}
My understanding is that this addition to the policy should give me full rights to "bucketname" for my account "myuid", including for all files that are already in that bucket. However, I'm still getting Access Denied errors when I try to download any of those files via the link that comes up in the console.
Any thoughts?
Step 1
Click on your bucket name, and under the Permissions tab, make sure that Block new public bucket policies is unchecked.
Step 2
Then you can apply your bucket policy.
Hope that helps
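If you prefer the CLI, the same checkbox can be flipped with put-public-access-block. A sketch (the bucket name is a placeholder; note the call replaces the bucket's entire public-access-block configuration, so set all four flags deliberately):
aws s3api put-public-access-block --bucket bucketname \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=false,RestrictPublicBuckets=false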
David, you are right, but I found that, in addition to what bennie said below, you also have to grant view access (or whatever access you want) to 'Authenticated Users'.
But a better solution might be to edit the user's policy to just grant access to the bucket:
{
  "Statement": [
    {
      "Sid": "Stmt1350703615347",
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::mybucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket"
      ],
      "Condition": {}
    }
  ]
}
The first block grants all S3 permissions to all elements within the bucket. The second block grants list permission on the bucket itself.
Change resource arn:aws:s3:::bucketname/AWSLogs/123123123123/* to arn:aws:s3:::bucketname/* to have full rights to bucketname
To serve a static website from S3:
This is the bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
Use the below method to upload any file in publicly readable form, using TransferUtility in Android.
transferUtility.upload(String bucketName, String key, File file, CannedAccessControlList cannedAcl)
Example
transferUtility.upload("MY_BUCKET_NAME", "FileName", your_file, CannedAccessControlList.PublicRead);
To clarify: it is really not documented well, but you need two access statements.
In addition to your statement that allows actions on resource "arn:aws:s3:::bucketname/AWSLogs/123123123123/*", you also need a second statement that allows ListBucket on "arn:aws:s3:::bucketname", because internally the AWS client will try to list the bucket to determine it exists before doing its action.
With the second statement, it should look like:
"Statement": [
{
"Sid": "someSID",
"Action": "ActionThatYouMeantToAllow",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
"Principal": {
"AWS": [
"arn:aws:iam::123123123123:user/myuid"
]
},
{
"Sid": "someOtherSID",
"Action": "ListBucket",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucketname",
"Principal": {
"AWS": [
"arn:aws:iam::123123123123:user/myuid"
]
}
]
Note: If you're using IAM, skip the "Principal" part.
If you have an encrypted bucket, you will also need KMS permissions allowed.
Possible reason: if the files were put/copied by a user from another AWS account, then you cannot access them, since the file owner is still not you. The AWS account user who placed the files in your bucket has to grant access during the put or copy operation.
For a put operation, the object owner can run this command:
aws s3api put-object --bucket destination_awsexamplebucket --key dir-1/my_images.tar.bz2 --body my_images.tar.bz2 --acl bucket-owner-full-control
For a copy operation of a single object, the object owner can run this command:
aws s3api copy-object --bucket destination_awsexamplebucket --copy-source source_awsexamplebucket/myobject --key myobject --acl bucket-owner-full-control
Ref: AWS Link
Giving public access to the bucket in order to add a policy is NOT the right way.
This exposes your bucket to the public, even if only for a short amount of time.
You will face this error even if you have admin access (the root user will not face it).
According to the AWS documentation, you have to add "PutBucketPolicy" to your IAM user.
So simply attach an S3 policy to your IAM user that allows s3:PutBucketPolicy, as sketched below; mention your bucket ARN to make it safer, and you won't have to make your bucket public again.
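A sketch of what that could look like from the CLI; the user name, policy name, file name, and bucket name are placeholders:
# Allow this one user to edit the policy of this one bucket.
cat > allow-put-bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutBucketPolicy",
      "Resource": "arn:aws:s3:::your-bucket-name"
    }
  ]
}
EOF
aws iam put-user-policy --user-name your-iam-user \
    --policy-name AllowPutBucketPolicy \
    --policy-document file://allow-put-bucket-policy.json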
No one mentioned MFA. For Amazon users who have enabled MFA, please use this:
aws s3 ls s3://bucket-name --profile mfa
And prepare the mfa profile first by running
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/user-name --token-code 928371 --duration-seconds 129600 (replace 123456789012, user-name and 928371).
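get-session-token only prints temporary credentials; they still have to be stored under the mfa profile. One way to do that, as a sketch (assumes jq is installed; the names follow the placeholders above):
# Fetch temporary credentials and write them into the "mfa" profile.
creds=$(aws sts get-session-token \
    --serial-number arn:aws:iam::123456789012:mfa/user-name \
    --token-code 928371 --duration-seconds 129600 \
    --query Credentials --output json)
aws configure set aws_access_key_id "$(echo "$creds" | jq -r .AccessKeyId)" --profile mfa
aws configure set aws_secret_access_key "$(echo "$creds" | jq -r .SecretAccessKey)" --profile mfa
aws configure set aws_session_token "$(echo "$creds" | jq -r .SessionToken)" --profile mfa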
This can also happen if the encryption algorithm in the S3 parameters is missing. If the bucket's default encryption is enabled, e.g. Amazon S3-managed keys (SSE-S3), you need to pass ServerSideEncryption: "AES256"|"aws:kms"|string in your bucket params.
// Assumes the AWS SDK for JavaScript v2; BUCKET_NAME, content and fileKey
// are defined elsewhere in your code.
const AWS = require("aws-sdk")
const S3 = new AWS.S3()
const params = {
  Bucket: BUCKET_NAME,
  Body: content,
  Key: fileKey,
  ContentType: "audio/m4a",
  ServerSideEncryption: "AES256" // Here ..
}
await S3.putObject(params).promise()
Go to this link and generate a policy.
In the Principal field give *.
In the Actions, select Get Object.
Give the ARN as arn:aws:s3:::<bucket_name>/*
Then add the statement and generate the policy; you will get a JSON document. Just copy it and paste it into the Bucket Policy.
For more details go here.