Cross-account S3 access through CloudFormation CLI - amazon-web-services

I am trying to create a CloudFormation Stack using the AWS CLI by running the following command:
aws cloudformation create-stack --debug --stack-name ${stackName} --template-url ${s3TemplatePath} --parameters '${parameters}' --region eu-west-1
The template resides in an S3 bucket in another account; let's call this account 456. The bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::123:root"
        ]
      },
      "Action": [
        "s3:*"
      ],
      "Resource": "arn:aws:s3:::cloudformation.template.eberry.digital/*"
    }
  ]
}
("Action": "s3:*" is for debugging.)
Now for a twist. I am logged into account 456 and I run
aws sts assume-role --role-arn arn:aws:iam::123:role/delegate-access-to-infrastructure-account-role --role-session-name jenkins
and then set the correct environment variables to access 123. The policy attached to the role that I assume allows the user Administrator access while I debug, which still doesn't work.
aws s3api list-buckets
then displays the buckets in account 123.
To summarize:
Specifying a template in an S3 bucket owned by account 456, in the CloudFormation console, while logged into account 123, works.
Specifying a template in an S3 bucket owned by account 123, using the CLI, works.
Specifying a template in an S3 bucket owned by account 456, using the CLI, doesn't work.
The error:
An error occurred (ValidationError) when calling the CreateStack operation: S3 error: Access Denied
For more information check http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
I don't understand what I am doing wrong and would be thankful for any ideas. In the meantime I will upload the template to all accounts that will use it.

Amazon S3 provides cross-account access through the use of bucket policies. These are IAM resource policies (which are applied to resources—in this case an S3 bucket—rather than IAM principals: users, groups, or roles). You can read more about how Amazon S3 authorises access in the Amazon S3 Developer Guide.
I was a little confused about which account is which, so instead I'll just say that you need this bucket policy when you want to deploy a template in a bucket owned by one AWS account as a stack in a different AWS account. For example, the template is in a bucket owned by AWS account 111111111111 and you want to use that template to deploy a stack in AWS account 222222222222. In this case, you'll need to be logged in to account 222222222222 and specify that account as the principal in the bucket policy.
The following is an example bucket policy that provides access to another AWS account; I use this on my own CloudFormation templates bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AWS_ACCOUNT_ID_WITHOUT_HYPHENS:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:GetObjectTagging",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::S3_BUCKET_NAME",
        "arn:aws:s3:::S3_BUCKET_NAME/*"
      ]
    }
  ]
}
You'll need to use the 12-digit account identifier for the AWS account you want to provide access to, and the name of the S3 bucket (you can probably use "Resource": "*", but I haven't tested this).
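If you maintain several template buckets, a policy like the one above can also be generated programmatically. Below is a minimal Python sketch; the function name, account ID, and bucket name are illustrative, not from the answer above:

```python
import json

def cross_account_template_policy(account_id: str, bucket: str) -> str:
    """Build a bucket policy granting another AWS account read access
    to objects (e.g. CloudFormation templates) in the bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
                "Action": [
                    "s3:GetBucketLocation",
                    "s3:GetObject",
                    "s3:GetObjectTagging",
                    "s3:ListBucket",
                ],
                # Both ARNs are needed: the bucket ARN for ListBucket
                # and GetBucketLocation, the object ARN for GetObject.
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(cross_account_template_policy("222222222222", "my-template-bucket"))
```

You could then feed the output to `aws s3api put-bucket-policy --bucket my-template-bucket --policy file://policy.json`.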

Related

AWS STS Temporary Credentials S3 Access Denied PutObject

I am following the How to Use AWS IAM with STS for access to AWS resources - 2nd Watch blog post. My understanding is that the S3 bucket policy's Principal should identify the user requesting the temporary credentials; one approach would be to hard-code the ID of this user, but the post attempts a more elegant approach of evaluating the STS delegated user (when it works).
arn:aws:iam::409812999999999999:policy/fts-assume-role
IAM user policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "*"
    }
  ]
}
arn:aws:s3:::finding-taylor-swift
s3 bucket policy
{
  "Version": "2012-10-17",
  "Id": "Policy1581282599999999999",
  "Statement": [
    {
      "Sid": "Stmt158128999999999999",
      "Effect": "Allow",
      "Principal": {
        "Service": "sts.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::finding-taylor-swift/*"
    }
  ]
}
conor@xyz:~$ aws configure --profile finding-taylor-swift
AWS Access Key ID [****************6QNY]:
AWS Secret Access Key [****************+8kF]:
Default region name [eu-west-2]:
Default output format [text]: json
conor@xyz:~$ aws sts get-session-token --profile finding-taylor-swift
{
    "Credentials": {
        "SecretAccessKey": "<some text>",
        "SessionToken": "<some text>",
        "Expiration": "2020-02-11T03:31:50Z",
        "AccessKeyId": "<some text>"
    }
}
conor@xyz:~$ export AWS_SECRET_ACCESS_KEY=<some text>
conor@xyz:~$ export AWS_SESSION_TOKEN=<some text>
conor@xyz:~$ export AWS_ACCESS_KEY_ID=<some text>
conor@xyz:~$ aws s3 cp dreamstime_xxl_concert_profile_w500_q8.jpg s3://finding-taylor-swift
upload failed: ./dreamstime_xxl_concert_profile_w500_q8.jpg to s3://finding-taylor-swift/dreamstime_xxl_concert_profile_w500_q8.jpg An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
conor@xyz:~$
The AWS CLI has been set up as described in Using Temporary Security Credentials with the AWS CLI:
"When you run AWS CLI commands, the AWS CLI looks for credentials in a specific order—first in environment variables and then in the configuration file. Therefore, after you've put the temporary credentials into environment variables, the AWS CLI uses those credentials by default. (If you specify a profile parameter in the command, the AWS CLI skips the environment variables. Instead, the AWS CLI looks in the configuration file, which lets you override the credentials in the environment variables if you need to.)
The following example shows how you might set the environment variables for temporary security credentials and then call an AWS CLI command. Because no profile parameter is included in the AWS CLI command, the AWS CLI looks for credentials first in environment variables and therefore uses the temporary credentials."
There is no need to use a Bucket Policy for your scenario. A bucket policy is applied to an Amazon S3 bucket and is typically used to grant access that is specific to the bucket (eg public access).
Using an IAM Role
If you wish to provide bucket access to a specific IAM User, IAM Group or IAM Role, then the permissions should be attached to the IAM entity rather than the bucket.
(For get-session-token, see the end of my answer.)
Setup
Let's start by creating an IAM Role similar to what you had. I choose Create role, then for trusted entity I select Another AWS account (since it will be assumed by an IAM User rather than service).
I then create an Inline policy on the IAM Role to permit access to the bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
(It's normally not a good idea to assign s3:* permissions, since this lets the user delete content and even delete the bucket. Try to restrict it to the minimum permissions that are actually required.)
The Trust Relationship on the IAM Role determines who is allowed to assume the role. It could be one person, or anyone in the account (as long as they have been granted permission to call AssumeRole). In my case, I'll assign it to the whole account:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::MY-ACCOUNT-ID:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Now there are a few different ways to assume the role...
Simple method: IAM Role in credentials file
The AWS CLI has the ability to specify an IAM Role in the credentials file, and it will automatically assume the role for you.
See: Using an IAM Role in the AWS CLI
To use this, I add a section to my .aws/config file:
[profile foo]
role_arn = arn:aws:iam::MY-ACCOUNT-ID:role/MY-ROLE-NAME
source_profile = default
I could then simply use it with:
aws s3 ls s3://my-bucket --profile foo
This successfully lets me access that specific bucket.
Complex method: Assume the role myself
Rather than letting the AWS CLI do all the work, I can also assume the role myself:
aws sts assume-role --role-arn arn:aws:iam::MY-ACCOUNT-ID:role/MY-ROLE-NAME --role-session-name foo
{
    "Credentials": {
        "AccessKeyId": "ASIA...",
        "SecretAccessKey": "...",
        "SessionToken": "...",
        "Expiration": "2020-02-11T00:43:30+00:00"
    },
    "AssumedRoleUser": {
        "AssumedRoleId": "AROA...:foo",
        "Arn": "arn:aws:sts::MY-ACCOUNT-ID:assumed-role/MY-ROLE-NAME/foo"
    }
}
I then appended this information to the .aws/credentials file:
[foo2]
aws_access_key_id = ASIA...
aws_secret_access_key = ...
aws_session_token = ...
Yes, you could add this to the credentials file by using aws configure --profile foo2, but it does not prompt for the session token. Therefore, you need to edit the credentials file to add that information anyway.
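If you do this often, the profile section can be generated from the assume-role output instead of hand-edited. A minimal Python sketch; the profile name and sample values are placeholders:

```python
import json

def credentials_profile(name: str, assume_role_output: str) -> str:
    """Turn the JSON output of `aws sts assume-role` into a profile
    section suitable for appending to ~/.aws/credentials."""
    creds = json.loads(assume_role_output)["Credentials"]
    return (
        f"[{name}]\n"
        f"aws_access_key_id = {creds['AccessKeyId']}\n"
        f"aws_secret_access_key = {creds['SecretAccessKey']}\n"
        f"aws_session_token = {creds['SessionToken']}\n"
    )

# Illustrative sample of what assume-role returns:
example = json.dumps({
    "Credentials": {
        "AccessKeyId": "ASIAEXAMPLE",
        "SecretAccessKey": "secret",
        "SessionToken": "token",
        "Expiration": "2020-02-11T00:43:30+00:00",
    }
})
print(credentials_profile("foo2", example))
```

In practice you would pipe the real output in, e.g. `aws sts assume-role ... > creds.json`, then append the generated section to the credentials file. Note the credentials expire, so the section has to be refreshed each session.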
I then used the profile:
aws s3 ls s3://my-bucket --profile foo2
It allowed me to successfully access and use the bucket.
Using GetSessionToken
The above examples use an IAM Role. This is typically used to grant cross-account access or to temporarily assume more-powerful credentials (eg an Admin performing sensitive operations).
Your Question references get-session-token. This provides temporary credentials based on a user's existing credentials and permissions. Thus, they cannot gain additional permissions as part of this API call.
This call is typically used either to supply an MFA token or to provide time-limited credentials for testing purposes. For example, I could give you credentials that effectively let you use my IAM User, but only for a limited time.

Access s3 bucket from different aws account

I am trying to restore a database as part of our testing. The backups exist in S3 in the prod account. My database is running as an EC2 instance in the dev account.
Can anyone tell me how I can access the prod S3 bucket from the dev account?
Steps:
- I created a role in the prod account with a trust relationship with the dev account
- I added a policy to the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::prod"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::prod/*"
    }
  ]
}
on dev account i created a role and with assume policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::xxxxxxxxx:role/prod-role"
    }
  ]
}
But I am unable to access the S3 bucket; can someone point out where I am wrong?
Also, I added the above policy to an existing role, so does that mean it's not working because of my instance profile (the error is inconsistent)?
Please help and correct me if I am wrong anywhere. I am looking for a solution in terms of a role and not a user.
Thanks in advance!
So let's recap: you want to access your prod bucket from the dev account.
There are two ways to do this. Method 1 is your approach; however, I would suggest Method 2:
Method 1: Use roles. This is what you described above. It works, but you cannot sync bucket to bucket if they're in different accounts, as different access keys will need to be exported each time. You'll most likely have to sync the files from the prod bucket to the local fs, then from the local fs to the dev bucket.
How to do this:
1. Using roles, create a role in the production account that has access to the bucket. The trust relationship of this role must trust the role in the dev account that's assigned to the EC2 instance.
2. Attach the policy granting access to the prod bucket to that role.
3. Update the EC2 instance role in dev to allow sts:AssumeRole on the role you've defined in production.
4. On the EC2 instance in dev, run aws sts assume-role --role-arn <the role on prod> --role-session-name <a name to identify the session>. This gives you back three values: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN.
5. Export them on the instance: export AWS_ACCESS_KEY_ID=${access_key_id}; export AWS_SECRET_ACCESS_KEY=${secret_access_key}; export AWS_SESSION_TOKEN=${session_token}.
6. Run aws sts get-caller-identity; it should show that you are now using the role provisioned in production.
7. Sync the files to the local system. Once that's done, unset the AWS keys we set as environment variables, then copy the files from the EC2 instance to the bucket in dev.
Notice how there are two steps to copy them? That can get quite annoying; look into Method 2 on how to avoid this.
Method 2: Update the prod bucket policy to trust the dev account - this will mean you can access the prod bucket from dev and do a bucket to bucket sync/cp.
I would highly recommend you take this approach as it will mean you can copy directly between buckets without having to sync to the local fs.
To do this, you will need to update the bucket policy on the bucket in production to have a principals block that trusts the AWS account id of dev. An example of this is, update your prod bucket policy to look something like this:
NOTE: granting s3:* is bad practice, and granting full access to the whole account probably isn't recommended either, since anyone in the account with the right S3 permissions can now access this bucket, but for simplicity I'm going to leave this here:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::DEV_ACC_ID:root"
      },
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::PROD_BUCKET_NAME",
        "arn:aws:s3:::PROD_BUCKET_NAME/*"
      ]
    }
  ]
}
Once you've done this, on the dev account, attach the policy in your main post to the dev ec2 instance role (the one that grants s3 access). Now when you connect to the dev instance, you do not have to export any environment variables, you can simply run aws s3 ls s3://prodbucket and it should list the files.
You can sync the files between the two buckets using aws s3 sync s3://prodbucket s3://devbucket --acl bucket-owner-full-control and that should copy all the files from prod to dev, and on top of that should update the ACLs of each file so that dev owns them (meaning you have full access to the files in dev).
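Before applying a policy like the one above, it can help to sanity-check the document itself. The rough Python sketch below only handles the simple Allow/Principal shape used in these answers, not the full policy language (no conditions, NotPrincipal, or wildcard matching); all names are illustrative:

```python
import json

def account_allowed(policy_json: str, account_id: str) -> bool:
    """Very rough check: does any Allow statement name the given
    account's root ARN as an AWS principal? Ignores conditions,
    NotPrincipal, wildcards, and the rest of the policy language."""
    root_arn = f"arn:aws:iam::{account_id}:root"
    for stmt in json.loads(policy_json).get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal", {})
        aws = principal.get("AWS", []) if isinstance(principal, dict) else []
        if isinstance(aws, str):
            aws = [aws]  # a single principal may be given as a bare string
        if root_arn in aws:
            return True
    return False

policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": ["s3:*"],
        "Resource": ["arn:aws:s3:::prod", "arn:aws:s3:::prod/*"],
    }],
})
print(account_allowed(policy, "111122223333"))  # → True
print(account_allowed(policy, "444455556666"))  # → False
```

For real validation, AWS's own tooling (e.g. the IAM policy simulator) is the authoritative check; this sketch only catches obvious omissions such as a missing principal.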
You need to assume the role in the production account from the dev account. Call sts:AssumeRole and then use the credentials returned to access the bucket.
You can alternatively add a bucket policy that allows the dev account to read from the prod account. You wouldn't need the cross account role in the prod account in this case.

Why AWS Bucket Policy NotPrincipal with specific user doesn't work with aws client when no profile is specified?

I have this AWS S3 Bucket Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OnlyS3AdminCanPerformOperationPolicy",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::<account-id>:user/s3-admin"
      },
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket-name",
        "arn:aws:s3:::my-bucket-name/*"
      ]
    }
  ]
}
Side note: IAM s3-admin user has AdministratorAccess policy attached.
At first I thought the bucket policy didn't work. It was probably because of the way I tested the operation.
aws s3 rm s3://my-bucket-name/file.csv
Caused:
delete failed: s3://test-cb-delete/buckets.csv An error occurred (AccessDenied)
but if I used --profile default, as in
aws s3 --profile default rm s3://my-bucket-name/file.csv
It worked.
I verified and only have one set of credentials configured for the aws client. Also, I am able to list the content of the bucket even when I don't use the --profile default argument.
Why is the aws client behaving that way?
Take a look at the credential precedence provider chain and use that to determine what is different about the two credentials you're authenticating as.
STS has a handy API that tells you who you are. It's similar to the UNIX-like command whoami, except for AWS principals. To see which credential is which, run:
aws sts get-caller-identity
aws sts --profile default get-caller-identity
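The quoted precedence from earlier explains the behaviour: with no --profile, environment variables win; with an explicit --profile, the configuration file is consulted instead. The toy Python model below illustrates that ordering only; it is my own sketch, not the SDK's actual resolver (which has more sources, e.g. instance metadata):

```python
def resolve_credentials(env: dict, profiles: dict, profile_arg=None):
    """Toy model of AWS CLI credential precedence:
    an explicit --profile skips the environment variables;
    otherwise the environment is consulted before the
    configuration file's default profile."""
    if profile_arg is not None:
        return ("profile", profiles[profile_arg])
    if "AWS_ACCESS_KEY_ID" in env:
        return ("environment", env["AWS_ACCESS_KEY_ID"])
    return ("profile", profiles["default"])

env = {"AWS_ACCESS_KEY_ID": "ASIA_TEMP"}   # temporary creds exported earlier
profiles = {"default": "AKIA_LONG_TERM"}   # long-term creds in the file

print(resolve_credentials(env, profiles))             # environment wins
print(resolve_credentials(env, profiles, "default"))  # --profile default wins
```

So if stale temporary credentials are still exported in your shell, the bare command and the --profile command can authenticate as two different principals, which is exactly the symptom in the question.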

Copying between different accounts' S3 buckets [duplicate]

I created two profiles (one for source and one for target bucket) and using below command to copy:
aws s3 cp --profile source_profile s3://source_bucket/file.txt --profile target_profile s3://target_profile/
But it throws below error.
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Looks like we can't use multiple profiles with aws commands.
The simplest method is to grant permissions via a bucket policy.
Say you have:
Account-A with IAM User-A
Account-B with Bucket-B
Add a bucket policy on Bucket-B:
{
  "Id": "CopyBuckets",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GrantAccessToUser-A",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::bucket-b",
        "arn:aws:s3:::bucket-b/*"
      ],
      "Principal": {
        "AWS": [
          "arn:aws:iam::<account-a-id>:user/user-a"
        ]
      }
    }
  ]
}
Then just copy the files as User-A.
See also: aws sync between S3 buckets on different AWS accounts
No, you can't use multiple profiles in one AWS CLI command. Possible solutions:
1) Download files to local disk, then upload them to the target bucket with a separate command.
2) Allow first account access to the target bucket. For this, you will have to create a cross-account role in the source account and assign it the appropriate permissions in the target account. That way you will be using one role/one profile, but this role will be granted permissions in the second account. See https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html

Can I grant access to objects owned by another account in AWS S3 using bucket policies?

I want to control access to an object which is created by another AWS account. Can I do that with bucket policies?
In other words, do bucket policies apply to objects that are owned by another account?
I do not have two AWS accounts, so I cannot test this case in action.
No.
The ability to grant access to objects can only be done from the Account that owns the bucket/object.
If you think about it, this makes sense -- you would not want me granting access to your objects. Only the account that owns the bucket/object can do this.
Yes, you can, by creating a policy. Please find a sample policy below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AccountB-ID:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket"
      ]
    }
  ]
}
You can do it through the AWS console, CLI, or API. Please find a CLI sample below:
aws s3 ls s3://examplebucket --profile AccountBadmin
aws s3api get-bucket-location --bucket examplebucket --profile AccountBadmin
Please read the documentation for more details.