Why does an AWS S3 bucket policy using NotPrincipal with a specific user not work with the AWS CLI when no profile is specified?

I have this AWS S3 Bucket Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OnlyS3AdminCanPerformOperationPolicy",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::<account-id>:user/s3-admin"
      },
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket-name",
        "arn:aws:s3:::my-bucket-name/*"
      ]
    }
  ]
}
Side note: the IAM user s3-admin has the AdministratorAccess policy attached.
At first I thought the bucket policy didn't work. It was probably because of the way I tested the operation.
aws s3 rm s3://my-bucket-name/file.csv
Caused:
delete failed: s3://test-cb-delete/buckets.csv An error occurred (AccessDenied)
but if I used --profile default as in
aws s3 --profile default rm s3://my-bucket-name/file.csv
It worked.
I verified that I only have one set of credentials configured for the AWS CLI. Also, I am able to list the contents of the bucket even when I don't use the --profile default argument.
Why is the AWS CLI behaving that way?

Take a look at the credential provider precedence chain and use it to work out what is different about the two sets of credentials you're authenticating as.
STS has a handy API that tells you who you are. It's similar to the UNIX command whoami, except for AWS principals. To see which credential is which, do this:
aws sts get-caller-identity
aws sts --profile default get-caller-identity
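If the two calls return different identities, aws configure list can show where each credential value is being picked up from (environment variable, shared credentials file, and so on); the commands below are a generic sketch rather than anything specific to this setup:
aws configure list
aws configure list --profile default
# Environment variables take precedence over the shared credentials file,
# so a stray AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN
# is a common reason the identity changes when --profile is omitted.
env | grep '^AWS_'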

Related

Grant S3 access to EC2 instance (simplest case)

I tried the simplest case, following the AWS documentation. I created a role, assigned it to the instance, and rebooted the instance. To test access interactively, I logged on to the Windows instance and ran aws s3api list-objects --bucket testbucket. I got the error An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied.
The next test was to create a .aws/credentials file and add a profile to assume the role. I modified the role (assigned to the instance) and added permission for any user in the account to assume it. When I ran the same command as aws s3api list-objects --bucket testbucket --profile assume_role, the objects in the bucket were listed.
Here is my test role's trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com",
          "ssm.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Sid": "UserCanAssumeRole",
      "Effect": "Allow",
      "Principal": {
        "AWS": "111111111111"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The role has only one permission policy attached: AmazonS3FullAccess.
When I switch roles in the AWS console, I can see the contents of the S3 bucket (and no other action is allowed in the console).
My assumption is that the EC2 instance is not assuming the role.
How do I pinpoint where the problem is?
The problem was the Windows proxy.
I checked the proxy environment variables; none were set. When I checked Control Panel -> Internet Options, I saw that the proxy text box showed a proxy value, but the "Use proxy" checkbox was not checked. Next to it was the text "Some of your settings are managed by your organization." The proxy bypass list had 169.254.169.254 listed.
I ran the command in debug mode and saw that the CLI connects to the proxy, which cannot reach 169.254.169.254, so no credentials are obtained. When I explicitly set the environment variable with set NO_PROXY=169.254.169.254, everything started to work.
Why the AWS CLI uses the proxy from the Windows system settings I do not understand. Worst of all, it uses the proxy but does not honor the proxy bypass list. Lesson learned: run the command in debug mode and verify the output.
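For anyone hitting a similar symptom, a rough way to pinpoint it is to query the instance metadata endpoint directly (this sketch assumes IMDSv1 is enabled; the first command should print the attached role name), then bypass the system proxy for that endpoint and rerun the call in debug mode, watching the output for a proxy host:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
set NO_PROXY=169.254.169.254
aws s3api list-objects --bucket testbucket --debug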

AWS STS Temporary Credentials S3 Access Denied PutObject

I am following the blog post How to Use AWS IAM with STS for access to AWS resources - 2nd Watch. My understanding is that the S3 bucket policy's Principal should identify the user requesting the temporary credentials; one approach would be to hard-code the ID of this user, but the post attempts a more elegant approach of evaluating the STS delegated user (when it works).
arn:aws:iam::409812999999999999:policy/fts-assume-role
IAM user policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "*"
    }
  ]
}
arn:aws:s3:::finding-taylor-swift
s3 bucket policy
{
  "Version": "2012-10-17",
  "Id": "Policy1581282599999999999",
  "Statement": [
    {
      "Sid": "Stmt158128999999999999",
      "Effect": "Allow",
      "Principal": {
        "Service": "sts.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::finding-taylor-swift/*"
    }
  ]
}
conor@xyz:~$ aws configure --profile finding-taylor-swift
AWS Access Key ID [****************6QNY]:
AWS Secret Access Key [****************+8kF]:
Default region name [eu-west-2]:
Default output format [text]: json
conor@xyz:~$ aws sts get-session-token --profile finding-taylor-swift
{
  "Credentials": {
    "SecretAccessKey": "<some text>",
    "SessionToken": "<some text>",
    "Expiration": "2020-02-11T03:31:50Z",
    "AccessKeyId": "<some text>"
  }
}
conor@xyz:~$ export AWS_SECRET_ACCESS_KEY=<some text>
conor@xyz:~$ export AWS_SESSION_TOKEN=<some text>
conor@xyz:~$ export AWS_ACCESS_KEY_ID=<some text>
conor@xyz:~$ aws s3 cp dreamstime_xxl_concert_profile_w500_q8.jpg s3://finding-taylor-swift
upload failed: ./dreamstime_xxl_concert_profile_w500_q8.jpg to s3://finding-taylor-swift/dreamstime_xxl_concert_profile_w500_q8.jpg An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
conor@xyz:~$
The AWS CLI has been set up as described in Using Temporary Security Credentials with the AWS CLI:
"When you run AWS CLI commands, the AWS CLI looks for credentials in a specific order—first in environment variables and then in the configuration file. Therefore, after you've put the temporary credentials into environment variables, the AWS CLI uses those credentials by default. (If you specify a profile parameter in the command, the AWS CLI skips the environment variables. Instead, the AWS CLI looks in the configuration file, which lets you override the credentials in the environment variables if you need to.)
The following example shows how you might set the environment variables for temporary security credentials and then call an AWS CLI command. Because no profile parameter is included in the AWS CLI command, the AWS CLI looks for credentials first in environment variables and therefore uses the temporary credentials."
There is no need to use a Bucket Policy for your scenario. A bucket policy is applied to an Amazon S3 bucket and is typically used to grant access that is specific to the bucket (eg public access).
Using an IAM Role
If you wish to provide bucket access to a specific IAM User, IAM Group or IAM Role, then the permissions should be attached to the IAM entity rather than the bucket.
(For get-session-token, see the end of my answer.)
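For example, granting a single IAM User read access to S3 is just a policy attachment on the user; the user name below is a placeholder, and the AWS managed policy is only one option:
aws iam attach-user-policy --user-name my-user --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess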
Setup
Let's start by creating an IAM Role similar to what you had. I choose Create role, then for the trusted entity I select Another AWS account (since it will be assumed by an IAM User rather than a service).
I then create an Inline policy on the IAM Role to permit access to the bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
(It's normally not a good idea to assign s3:* permissions, since this lets the user delete content and even delete the bucket. Try to restrict it to the minimum permissions that are actually required.)
The Trust Relationship on the IAM Role determines who is allowed to assume the role. It could be one person, or anyone in the account (as long as they have been granted permission to call AssumeRole). In my case, I'll assign it to the whole account:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::MY-ACCOUNT-ID:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
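If you prefer the CLI to the console, roughly the same setup can be scripted; here trust.json and permissions.json are assumed to hold the two policy documents above, and the role and policy names are just examples:
aws iam create-role --role-name MY-ROLE-NAME --assume-role-policy-document file://trust.json
aws iam put-role-policy --role-name MY-ROLE-NAME --policy-name bucket-access --policy-document file://permissions.json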
Now there are a few different ways to assume the role...
Simple method: IAM Role in credentials file
The AWS CLI has the ability to specify an IAM Role in the credentials file, and it will automatically assume the role for you.
See: Using an IAM Role in the AWS CLI
To use this, I add a section to my .aws/config file:
[profile foo]
role_arn = arn:aws:iam::MY-ACCOUNT-ID:role/MY-ROLE-NAME
source_profile = default
I could then simply use it with:
aws s3 ls s3://my-bucket --profile foo
This successfully lets me access that specific bucket.
Complex method: Assume the role myself
Rather than letting the AWS CLI do all the work, I can also assume the role myself:
aws sts assume-role --role-arn arn:aws:iam::MY-ACCOUNT-ID:role/MY-ROLE-NAME --role-session-name foo
{
  "Credentials": {
    "AccessKeyId": "ASIA...",
    "SecretAccessKey": "...",
    "SessionToken": "...",
    "Expiration": "2020-02-11T00:43:30+00:00"
  },
  "AssumedRoleUser": {
    "AssumedRoleId": "AROA...:foo",
    "Arn": "arn:aws:sts::MY-ACCOUNT-ID:assumed-role/MY-ROLE-NAME/foo"
  }
}
I then appended this information to the .aws/credentials file:
[foo2]
aws_access_key_id = ASIA...
aws_secret_access_key = ...
aws_session_token = ...
Yes, you could add this to the credentials file by using aws configure --profile foo2, but it does not prompt for the session token. Therefore, you need to edit the credentials file to add that information anyway.
I then used the profile:
aws s3 ls s3://my-bucket --profile foo2
It allowed me to successfully access and use the bucket.
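As an alternative to editing the credentials file, the assume-role output can be exported as environment variables; this is just one way to do it (the --query expression and the session name foo are illustrative):
creds=$(aws sts assume-role --role-arn arn:aws:iam::MY-ACCOUNT-ID:role/MY-ROLE-NAME --role-session-name foo --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text)
export AWS_ACCESS_KEY_ID=$(echo "$creds" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$creds" | cut -f3)
aws s3 ls s3://my-bucket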
Using GetSessionToken
The above examples use an IAM Role. This is typically used to grant cross-account access or to temporarily assume more-powerful credentials (eg an Admin performing sensitive operations).
Your Question references get-session-token. This provides temporary credentials based on a user's existing credentials and permissions. Thus, they cannot gain additional permissions as part of this API call.
This call is typically used either to supply an MFA token or to provide time-limited credentials for testing purposes. For example, I could give you credentials that effectively let you use my IAM User, but only for a limited time.
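For reference, the MFA variant of that call looks roughly like this (the MFA device ARN and token code are placeholders):
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/my-user --token-code 123456 --duration-seconds 3600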

Copying between different Accounts S3 Buckets [duplicate]

I created two profiles (one for the source and one for the target bucket) and I am using the command below to copy:
aws s3 cp --profile source_profile s3://source_bucket/file.txt --profile target_profile s3://target_profile/
But it throws the error below.
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Looks like we can't use multiple profiles with aws commands.
The simplest method is to grant permissions via a bucket policy.
Say you have:
Account-A with IAM User-A
Account-B with Bucket-B
Add a bucket policy on Bucket-B:
{
  "Id": "CopyBuckets",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GrantAccessToUser-A",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::bucket-b",
        "arn:aws:s3:::bucket-b/*"
      ],
      "Principal": {
        "AWS": [
          "arn:aws:iam::<account-a-id>:user/user-a"
        ]
      }
    }
  ]
}
Then just copy the files as User-A.
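Assuming User-A already has read access to the source bucket in Account-A and a CLI profile configured with User-A's keys (the names below are placeholders), the copy is then a single command:
aws s3 cp s3://source-bucket/file.txt s3://bucket-b/file.txt --profile user-a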
See also: aws sync between S3 buckets on different AWS accounts
No, you can't use multiple profiles in one AWS CLI command. Possible solutions:
1) Download the files to local disk, then upload them to the target bucket with a separate command (see the sketch after this list).
2) Allow the first account access to the target bucket. For this, you will have to create a cross-account role in the source account and assign it the appropriate permissions in the target account. That way you will be using one role/one profile, but this role will be granted permissions in the second account. See https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
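For option 1, a minimal sketch using the two profiles from the question (the local path and the target bucket name are placeholders):
aws s3 cp s3://source_bucket/file.txt /tmp/file.txt --profile source_profile
aws s3 cp /tmp/file.txt s3://target_bucket/file.txt --profile target_profile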

Cross-account S3 access through the CloudFormation CLI

I am trying to create a CloudFormation Stack using the AWS CLI by running the following command:
aws cloudformation create-stack --debug --stack-name ${stackName} --template-url ${s3TemplatePath} --parameters '${parameters}' --region eu-west-1
The template resides in an S3 bucket in another account; let's call this account 456. The bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::123:root"
        ]
      },
      "Action": [
        "s3:*"
      ],
      "Resource": "arn:aws:s3:::cloudformation.template.eberry.digital/*"
    }
  ]
}
("Action: * " is for debugging).
Now for a twist. I am logged into account 456 and I run
aws sts assume-role --role-arn arn:aws:iam::123:role/delegate-access-to-infrastructure-account-role --role-session-name jenkins
and then set the correct environment variables to access 123. The policy attached to the role that I assume allows the user Administrator access while I debug, which still doesn't work.
aws s3api list-buckets
then displays the buckets in account 123.
To summarize:
Specifying a template in an S3 bucket owned by account 456, into CloudFormation in the console, while logged into account 123 works.
Specifying a template in an S3 bucket owned by account 123, using the CLI, works.
Specifying a template in an S3 bucket owned by account 456, using the CLI, doesn't work.
The error:
An error occurred (ValidationError) when calling the CreateStack operation: S3 error: Access Denied
For more information check http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
I don't understand what I am doing wrong and would be thankful for any ideas. In the meantime I will upload the template to all accounts that will use it.
Amazon S3 provides cross-account access through the use of bucket policies. These are IAM resource policies (which are applied to resources—in this case an S3 bucket—rather than IAM principals: users, groups, or roles). You can read more about how Amazon S3 authorises access in the Amazon S3 Developer Guide.
I was a little confused about which account is which, so instead I'll just say that you need this bucket policy when you want to deploy a template in a bucket owned by one AWS account as a stack in a different AWS account. For example, the template is in a bucket owned by AWS account 111111111111 and you want to use that template to deploy a stack in AWS account 222222222222. In this case, you'll need to be logged in to account 222222222222 and specify that account as the principal in the bucket policy.
The following is an example bucket policy that provides access to another AWS account; I use this on my own CloudFormation templates bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AWS_ACCOUNT_ID_WITHOUT_HYPHENS:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:GetObjectTagging",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::S3_BUCKET_NAME",
        "arn:aws:s3:::S3_BUCKET_NAME/*"
      ]
    }
  ]
}
You'll need to use the 12-digit account identifier for the AWS account you want to provide access to, and the name of the S3 bucket (you can probably use "Resource": "*", but I haven't tested this).
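Assuming the policy above is saved as cross-account-read.json (a placeholder name), you would apply it from the account that owns the template bucket and then reference the template from the other account as before; the template key below is also a placeholder:
aws s3api put-bucket-policy --bucket cloudformation.template.eberry.digital --policy file://cross-account-read.json
aws cloudformation create-stack --stack-name my-stack --template-url https://s3.eu-west-1.amazonaws.com/cloudformation.template.eberry.digital/template.yaml --region eu-west-1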

S3 to S3 transfer using different accounts?

I've been reading multiple posts like this one about how to transfer data with the AWS CLI from one S3 bucket to another using different accounts, but I am still unable to do so. I'm sure it's because I haven't fully grasped the concepts of accounts and permission settings in AWS yet (e.g. IAM account vs access key).
I have a vendor that gave me a user called "Foo" in account number "123456789012", with 2 access keys to access their S3 bucket "SourceBucket" in eu-central-1. I created a profile on my machine, called "sourceProfile", with the access keys provided by the vendor. I have my own S3 bucket called "DestinationBucket" in us-east-1, and I set its bucket policy to the following.
{
  "Version": "2012-10-17",
  "Id": "Policy12345678901234",
  "Statement": [
    {
      "Sid": "Stmt1487222222222",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/Foo"
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::DestinationBucket/",
        "arn:aws:s3:::DestinationBucket/*"
      ]
    }
  ]
}
Here comes the weird part. I am able to list the files and even download files from the "DestinationBucket" using the following command lines.
aws s3 ls s3://DestinationBucket --profile sourceProfile
aws s3 cp s3://DestinationBucket/test ./ --profile sourceProfile
But when I try to copy anything to the "DestinationBucket" using the profile, I get an Access Denied error.
aws s3 cp test s3://DestinationBucket --profile sourceProfile --region us-east-1
upload failed: ./test to s3://DestinationBucket/test An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Did I set up the bucket policy, especially the list of actions, correctly? How can ls and cp from the destination bucket to local work, but cp from local to the destination bucket not?
This is because AWS requires the parent account holder (the account that owns the IAM user) to do the delegation as well: the user's own identity policies must allow the action on your bucket, not just your bucket policy.
Alternatively, besides delegating access to that particular access-key user, you can choose to set up replication on the bucket, as stated here.
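To make the delegation option concrete, here is a rough sketch of what the vendor (the owner of user Foo in account 123456789012) would run, with destination-bucket-write.json as a placeholder file containing an identity policy that allows s3:PutObject on arn:aws:s3:::DestinationBucket/*:
aws iam put-user-policy --user-name Foo --policy-name write-to-destination-bucket --policy-document file://destination-bucket-write.json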