AWS STS Temporary Credentials S3 Access Denied PutObject - amazon-web-services

I am following the How to Use AWS IAM with STS for access to AWS resources - 2nd Watch blog post. My understanding is that the S3 Bucket Policy Principal should identify the user requesting the temporary credentials; one approach would be to hard-code the Id of that user, but the post attempts a more elegant approach of evaluating the STS delegated user (when it works).
arn:aws:iam::409812999999999999:policy/fts-assume-role
IAM user policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "*"
}
]
}
arn:aws:s3:::finding-taylor-swift
s3 bucket policy
{
"Version": "2012-10-17",
"Id": "Policy1581282599999999999",
"Statement": [
{
"Sid": "Stmt158128999999999999",
"Effect": "Allow",
"Principal": {
"Service": "sts.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::finding-taylor-swift/*"
}
]
}
conor#xyz:~$ aws configure --profile finding-taylor-swift
AWS Access Key ID [****************6QNY]:
AWS Secret Access Key [****************+8kF]:
Default region name [eu-west-2]:
Default output format [text]: json
conor#xyz:~$ aws sts get-session-token --profile finding-taylor-swift
{
"Credentials": {
"SecretAccessKey": "<some text>",
"SessionToken": "<some text>",
"Expiration": "2020-02-11T03:31:50Z",
"AccessKeyId": "<some text>"
}
}
conor#xyz:~$ export AWS_SECRET_ACCESS_KEY=<some text>
conor#xyz:~$ export AWS_SESSION_TOKEN=<some text>
conor#xyz:~$ export AWS_ACCESS_KEY_ID=<some text>
conor#xyz:~$ aws s3 cp dreamstime_xxl_concert_profile_w500_q8.jpg s3://finding-taylor-swift
upload failed: ./dreamstime_xxl_concert_profile_w500_q8.jpg to s3://finding-taylor-swift/dreamstime_xxl_concert_profile_w500_q8.jpg An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
conor#xyz:~$
The AWS CLI has been set up as described in Using Temporary Security Credentials with the AWS CLI:
"When you run AWS CLI commands, the AWS CLI looks for credentials in a specific order—first in environment variables and then in the configuration file. Therefore, after you've put the temporary credentials into environment variables, the AWS CLI uses those credentials by default. (If you specify a profile parameter in the command, the AWS CLI skips the environment variables. Instead, the AWS CLI looks in the configuration file, which lets you override the credentials in the environment variables if you need to.)
The following example shows how you might set the environment variables for temporary security credentials and then call an AWS CLI command. Because no profile parameter is included in the AWS CLI command, the AWS CLI looks for credentials first in environment variables and therefore uses the temporary credentials."

There is no need to use a Bucket Policy for your scenario. A bucket policy is applied to an Amazon S3 bucket and is typically used to grant access that is specific to the bucket (eg public access).
Using an IAM Role
If you wish to provide bucket access to a specific IAM User, IAM Group or IAM Role, then the permissions should be attached to the IAM entity rather than the bucket.
(For get-session-token, see the end of my answer.)
Setup
Let's start by creating an IAM Role similar to what you had. I choose Create role, then for the trusted entity I select Another AWS account (since it will be assumed by an IAM User rather than a service).
I then create an Inline policy on the IAM Role to permit access to the bucket:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my-bucket"
}
]
}
(It's normally not a good idea to assign s3:* permissions, since this lets the user delete content and even delete the bucket. Try to restrict it to the minimum permissions that are actually required.)
The Trust Relationship on the IAM Role determines who is allowed to assume the role. It could be one person, or anyone in the account (as long as they have been granted permission to call AssumeRole). In my case, I'll assign it to the whole account:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::MY-ACCOUNT-ID:root"
},
"Action": "sts:AssumeRole"
}
]
}
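For reference, the same role could be created from the CLI instead of the console (a sketch; the role name, policy name, and the two policy file names are placeholders I've chosen, assuming the JSON documents above are saved to those files):
# create the role with the trust relationship shown above
aws iam create-role --role-name MY-ROLE-NAME --assume-role-policy-document file://trust-policy.json
# attach the inline bucket-access policy shown above
aws iam put-role-policy --role-name MY-ROLE-NAME --policy-name bucket-access --policy-document file://bucket-access.json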
Now there are a few different ways to assume the role...
Simple method: IAM Role in credentials file
The AWS CLI has the ability to specify an IAM Role in the credentials file, and it will automatically assume the role for you.
See: Using an IAM Role in the AWS CLI
To use this, I add a section to my .aws/config file:
[profile foo]
role_arn = arn:aws:iam::MY-ACCOUNT-ID:role/MY-ROLE-NAME
source_profile = default
I could then simply use it with:
aws s3 ls s3://my-bucket --profile foo
This successfully lets me access that specific bucket.
Complex method: Assume the role myself
Rather than letting the AWS CLI do all the work, I can also assume the role myself:
aws sts assume-role --role-arn arn:aws:iam::MY-ACCOUNT-ID:role/MY-ROLE-NAME --role-session-name foo
{
"Credentials": {
"AccessKeyId": "ASIA...",
"SecretAccessKey": "...",
"SessionToken": "...",
"Expiration": "2020-02-11T00:43:30+00:00"
},
"AssumedRoleUser": {
"AssumedRoleId": "AROA...:foo",
"Arn": "arn:aws:sts::MY-ACCOUNT-ID:assumed-role/MY-ROLE-NAME/foo"
}
}
I then appended this information to the .aws/credentials file:
[foo2]
aws_access_key_id = ASIA...
aws_secret_access_key = ...
aws_session_token = ...
Yes, you could add this to the credentials file by using aws configure --profile foo2, but it does not prompt for the session token. Therefore, you need to edit the credentials file to add that information anyway.
I then used the profile:
aws s3 ls s3://my-bucket --profile foo2
It allowed me to successfully access and use the bucket.
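As an alternative to editing the credentials file, the values returned by assume-role can be exported as environment variables. A minimal sketch, assuming the jq tool is available:
# capture the assume-role output and export its credentials
CREDS=$(aws sts assume-role --role-arn arn:aws:iam::MY-ACCOUNT-ID:role/MY-ROLE-NAME --role-session-name foo)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .Credentials.SessionToken)
aws s3 ls s3://my-bucket   # now runs with the assumed-role credentials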
Using GetSessionToken
The above examples use an IAM Role. This is typically used to grant cross-account access or to temporarily assume more-powerful credentials (eg an Admin performing sensitive operations).
Your Question references get-session-token. This provides temporary credentials based on a user's existing credentials and permissions. Thus, they cannot gain additional permissions as part of this API call.
This call is typically used either to supply an MFA token or to provide time-limited credentials for testing purposes. For example, I could give you credentials that effectively let you use my IAM User, but only for a limited time.
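For example, a session token tied to an MFA device can be requested like this (a sketch; the MFA device ARN, token code, and duration are placeholders):
aws sts get-session-token --serial-number arn:aws:iam::MY-ACCOUNT-ID:mfa/MY-USER --token-code 123456 --duration-seconds 3600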

Related

Access denied when trying to do AWS s3 ls using AWS cli

I launched an EC2 instance and created a role with a full S3 access policy for the instance. I installed awscli on it and configured my user's access key. My user has admin access and the full S3 access policy too. I can see the buckets in the AWS console, but when I run aws s3 ls on the instance it returns An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied.
What else do I need to do to add permissions to the role or my user properly, so that I can list and sync objects between S3 and the instance?
I ran into this issue as well.
I ran aws sts get-caller-identity and noticed that the ARN did not match what I was expecting. It turns out if you have AWS configurations set in your bash_profile or bashrc, the awscli will default to using these instead.
I changed the environment variables in bash_profile and bashrc to the proper keys and everything started working.
Turns out I forgot I had to do MFA to get an access token to be able to operate in S3. Thank you everyone for your responses.
There appears to be confusion about when to use IAM Users and IAM Roles.
When using an Amazon EC2 instance, the best method to grant permissions is:
Create an IAM Role and attach policies to grant the desired permissions
Associate the IAM Role with the Amazon EC2 instance. This can be done at launch time, or afterwards (Actions/Instance Settings/Attach IAM Role).
Any application running on the EC2 instance (including the AWS CLI) will now automatically receive credentials. Do not run aws configure.
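For example, an existing IAM Role (wrapped in an instance profile) can be attached to a running instance from the CLI; a sketch with a placeholder instance ID and profile name:
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=MyS3AccessRole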
If you want to grant permissions to your own (non-EC2) computer, then:
Create an IAM User (or use your existing one) and attach policies to grant the desired permissions
On the computer, run aws configure and enter the Access Key and Secret Key associated with the IAM User. This will store the credentials in ~/.aws/credentials.
Any application running on this computer will then use credentials from the local credentials file
Create an IAM user with the following permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::bucketName/*"
}
]
}
Save Access key ID & Secret access key.
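If you prefer the CLI over the console for this step, the user, inline policy, and access key could be created roughly like this (a sketch; the user name, policy name, and policy file are placeholders I've chosen, assuming the JSON above is saved to policy.json):
aws iam create-user --user-name s3-upload-user
aws iam put-user-policy --user-name s3-upload-user --policy-name s3-access --policy-document file://policy.json
aws iam create-access-key --user-name s3-upload-user   # note the AccessKeyId and SecretAccessKey in the output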
sudo apt install awscli
aws configure
AWS Access Key ID [None]: AKIAxxxxxxxxxxxZI4
AWS Secret Access Key [None]: 8Bxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx8
Default region name [None]: region (ex. us-east-2)
Default output format [None]: json
aws s3 ls s3://s3testingankit1/
This problem can occur not only with the CLI but also when calling the S3 API, for example.
The reason for this error can be a wrong configuration of the access permissions on the bucket.
For example, with the setup below you're granting full privileges to perform actions on the bucket's objects, BUT not specifying any action on the bucket itself:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::<name-of-bucket>/*"
]
}
]
}
This will lead to the mentioned
... (AccessDenied) when calling the ListBuckets ...
error.
In order to fix this you should allow the application to access the bucket (1st statement) and to edit the objects inside the bucket (2nd statement):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::<name-of-bucket>"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::<name-of-bucket>/*"
]
}
]
}
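With both statements in place, a quick check of each permission might look like this (the local file name is a placeholder):
aws s3 ls s3://<name-of-bucket>               # allowed by s3:ListBucket on the bucket ARN
aws s3 cp ./test.txt s3://<name-of-bucket>/   # allowed by s3:PutObject on the /* resource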
There are shorter configurations that might solve the problem, but the one specified above also tries to keep the security permissions fine-grained.
I ran into this yesterday running a script I ran successfully in September 2021.
TL;DR: add --profile your.profile.name to the end of the command
I have multiple profiles on the login I was using. I think something in the aws environment changed, or perhaps I had done something that was able to bypass this before. Back in September I set the profile with
aws configure set region us-west-2 --profile my.profile.name
But yesterday, after the failure, I saw that aws sts get-caller-identity was returning a different identity. After some documentation search I found the additional method for specifying the profile, and operations like:
aws s3 cp myfile s3://my-s3-bucket --profile my.profile.name
all worked
I have a Windows machine with CyberDuck from which I was able to access a destination bucket, but when trying to access the bucket from a Linux machine with the aws command, I got "An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied".
I then executed the same command "aws s3 ls" from a command line interface on the Windows machine and it worked just fine. It looks like there is some security restriction on the AWS side for that machine/IP.

Access s3 bucket from different aws account

I am trying to restore a database as part of our testing. The backups exist in the prod S3 account. My database is running as an EC2 instance in the dev account.
Can anyone tell me how I can access the prod S3 bucket from the dev account?
Steps:
- I created a role on the prod account with a trust relationship with the dev account
- I added a policy to the role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::prod"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::prod/*"
}
]
}
On the dev account I created a role with this assume policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::xxxxxxxxx:role/prod-role"
}
]
}
But I am unable to access the S3 bucket. Can someone point out where I am wrong?
Also, I added the above policy to an existing role, so does that mean it's not working because of my instance profile? (inconsistent error)
Please help and correct me if I am wrong anywhere. I am looking for a solution in terms of a role and not a user.
Thanks in advance!
So let's recap: you want to access your prod bucket from the dev account.
There are two ways to do this. Method 1 is your approach; however, I would suggest Method 2:
Method 1: Use roles. This is what you described above and it's great; however, you cannot sync bucket to bucket if they're on different accounts, as different access keys will need to be exported each time. You'll most likely have to sync the files from the prod bucket to the local fs, then from the local fs to the dev bucket.
How to do this:
- Using roles, create a role on the production account that has access to the bucket. The trust relationship of this role must trust the role on the dev account that's assigned to the ec2 instance. Attach the policy granting access to the prod bucket to that role.
- Once that's all configured, the ec2 instance role in dev must be updated to allow sts:AssumeRole of the role you've defined in production.
- On the ec2 instance in dev, run aws sts assume-role --role-arn <the role on prod> --role-session-name <a name to identify the session>. This will give you back three values: AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID, and AWS_SESSION_TOKEN.
- On your ec2 instance, run set -a; AWS_SECRET_ACCESS_KEY=${secret_access_key}; AWS_ACCESS_KEY_ID=${access_key_id}; AWS_SESSION_TOKEN=${session_token}. Once those variables have been exported, you can run aws sts get-caller-identity and it should show that you're on the role you've provisioned in production.
- You should now be able to sync the files to the local system. Once that's done, unset the aws keys we set as env variables, then copy the files from the ec2 instance to the bucket in dev.
Notice how there are two steps here to copy them? That can get quite annoying - look into Method 2 on how to avoid this:
Method 2: Update the prod bucket policy to trust the dev account - this will mean you can access the prod bucket from dev and do a bucket to bucket sync/cp.
I would highly recommend you take this approach as it will mean you can copy directly between buckets without having to sync to the local fs.
To do this, you will need to update the bucket policy on the bucket in production to have a principals block that trusts the AWS account id of dev. For example, update your prod bucket policy to look something like this:
NOTE: granting s3:* is bad, and granting full access to the account probably isn't suggested, as anyone on the account with the right S3 permissions can now access this bucket, but for simplicity I'm going to leave this here:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Example permissions",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::DEV_ACC_ID:root"
},
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::PROD_BUCKET_NAME",
"arn:aws:s3:::PROD_BUCKET_NAME/*"
]
}
]
}
Once you've done this, on the dev account, attach the policy in your main post to the dev ec2 instance role (the one that grants s3 access). Now when you connect to the dev instance, you do not have to export any environment variables, you can simply run aws s3 ls s3://prodbucket and it should list the files.
You can sync the files between the two buckets using aws s3 sync s3://prodbucket s3://devbucket --acl bucket-owner-full-control and that should copy all the files from prod to dev, and on top of that should update the ACLs of each file so that dev owns them (meaning you have full access to the files in dev).
You need to assume the role in the production account from the dev account. Call sts:AssumeRole and then use the credentials returned to access the bucket.
You can alternatively add a bucket policy that allows the dev account to read from the prod account. You wouldn't need the cross account role in the prod account in this case.

Why AWS Bucket Policy NotPrincipal with specific user doesn't work with aws client when no profile is specified?

I have this AWS S3 Bucket Policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "OnlyS3AdminCanPerformOperationPolicy",
"Effect": "Deny",
"NotPrincipal": {
"AWS": "arn:aws:iam::<account-id>:user/s3-admin"
},
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::my-bucket-name",
"arn:aws:s3:::my-bucket-name/*"
]
}
]
}
Side note: IAM s3-admin user has AdministratorAccess policy attached.
At first I thought the bucket policy didn't work. It was probably because of the way I tested the operation.
aws s3 rm s3://my-bucket-name/file.csv
Caused:
delete failed: s3://test-cb-delete/buckets.csv An error occurred (AccessDenied)
but if I used --profile default as per
aws s3 --profile default rm s3://my-bucket-name/file.csv
It worked.
I verified and only have one set of credentials configured for the aws client. Also, I am able to list the content of the bucket even when I don't use the --profile default argument.
Why is the aws client behaving that way?
Take a look at the credential precedence provider chain and use that to determine what is different about the two credentials you're authenticating as.
STS has a handy API that tells you who you are. It's similar to the UNIX whoami command, except for AWS Principals. To see which credential is which, do this:
aws sts get-caller-identity
aws sts --profile default get-caller-identity

Copying between different Accounts' S3 Buckets [duplicate]

I created two profiles (one for the source and one for the target bucket) and am using the below command to copy:
aws s3 cp --profile source_profile s3://source_bucket/file.txt --profile target_profile s3://target_profile/
But it throws below error.
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Looks like we can't use multiple profiles with aws commands.
The simplest method is to grant permissions via a bucket policy.
Say you have:
Account-A with IAM User-A
Account-B with Bucket-B
Add a bucket policy on Bucket-B:
{
"Id": "CopyBuckets",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "GrantAccessToUser-A",
"Action": "s3:*",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::bucket-b",
"arn:aws:s3:::bucket-b/*"
],
"Principal": {
"AWS": [
"arn:aws:iam::<account-a-id>:user/user-a"
]
}
}
]
}
Then just copy the files as User-A.
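For example, once the bucket policy is in place, User-A could copy or sync directly between the buckets (a sketch; the profile and source bucket names are placeholders):
aws s3 sync s3://source-bucket-in-account-a s3://bucket-b --profile user-a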
See also: aws sync between S3 buckets on different AWS accounts
No, you can't use multiple profiles in one AWS CLI command. Possible solutions:
1) Download files to local disk, then upload them to the target bucket with a separate command (see the sketch after this list).
2) Allow first account access to the target bucket. For this, you will have to create a cross-account role in the source account and assign it the appropriate permissions in the target account. That way you will be using one role/one profile, but this role will be granted permissions in the second account. See https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
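A minimal sketch of option 1, assuming both profiles are already configured:
aws s3 cp s3://source_bucket/file.txt ./file.txt --profile source_profile
aws s3 cp ./file.txt s3://target_bucket/ --profile target_profile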

Cross account S3 access through CloudFormation CLI

I am trying to create a CloudFormation Stack using the AWS CLI by running the following command:
aws cloudformation create-stack --debug --stack-name ${stackName} --template-url ${s3TemplatePath} --parameters '${parameters}' --region eu-west-1
The template resides in an S3 bucket in another account; let's call this account 456. The bucket policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Example permissions",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::123:root"
]
},
"Action": [
"s3:*"
],
"Resource": "arn:aws:s3:::cloudformation.template.eberry.digital/*"
}
]
}
("Action: * " is for debugging).
Now for a twist. I am logged into account 456 and I run
aws sts assume-role --role-arn arn:aws:iam::123:role/delegate-access-to-infrastructure-account-role --role-session-name jenkins
and then set the correct environment variables to access 123. The policy attached to the role that I assume allows the user Administrator access while I debug - which still doesn't work.
aws s3api list-buckets
then displays the buckets in account 123.
To summarize:
Specifying a template in an S3 bucket owned by account 456, into CloudFormation in the console, while logged into account 123 works.
Specifying a template in an S3 bucket owned by account 123, using the CLI, works.
Specifying a template in an S3 bucket owned by account 456, using the CLI, doesn't work.
The error:
An error occurred (ValidationError) when calling the CreateStack operation: S3 error: Access Denied
For more information check http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
I don't understand what I am doing wrong and would be thankful for any ideas. In the meantime I will upload the template to all accounts that will use it.
Amazon S3 provides cross-account access through the use of bucket policies. These are IAM resource policies (which are applied to resources—in this case an S3 bucket—rather than IAM principals: users, groups, or roles). You can read more about how Amazon S3 authorises access in the Amazon S3 Developer Guide.
I was a little confused about which account is which, so instead I'll just say that you need this bucket policy when you want to deploy a template in a bucket owned by one AWS account as a stack in a different AWS account. For example, the template is in a bucket owned by AWS account 111111111111 and you want to use that template to deploy a stack in AWS account 222222222222. In this case, you'll need to be logged in to account 222222222222 and specify that account as the principal in the bucket policy.
The following is an example bucket policy that provides access to another AWS account; I use this on my own CloudFormation templates bucket.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::AWS_ACCOUNT_ID_WITHOUT_HYPHENS:root"
},
"Action": [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:GetObjectTagging",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::S3_BUCKET_NAME",
"arn:aws:s3:::S3_BUCKET_NAME/*"
]
}
]
}
You'll need to use the 12-digit account identifier for the AWS account you want to provide access to, and the name of the S3 bucket (you can probably use "Resource": "*", but I haven't tested this).
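With such a bucket policy applied, the stack can then be created from the other account, for example (a sketch; the stack name, template key, and region are placeholders):
aws cloudformation create-stack --stack-name my-stack --template-url https://S3_BUCKET_NAME.s3.amazonaws.com/template.yaml --region eu-west-1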