I launched an EC2 instance and created a role with a full S3 access policy for the instance. I installed the AWS CLI on it and configured my user's access key. My user has admin access and the full S3 access policy too. I can see the buckets in the AWS console, but when I run aws s3 ls on the instance it returns An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied.
What else do I need to do to grant the role or my user the permissions to list and sync objects between S3 and the instance?
I ran into this issue as well.
I ran aws sts get-caller-identity and noticed that the ARN did not match what I was expecting. It turns out that if you have AWS credentials set in your .bash_profile or .bashrc, the AWS CLI will default to using those instead.
I changed the environment variables in .bash_profile and .bashrc to the proper keys and everything started working.
It turns out I had forgotten that I need to complete MFA to get an access token before I can operate on S3. Thank you everyone for your responses.
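For anyone hitting the same MFA requirement: aws sts get-session-token returns temporary credentials as JSON, and those values need to end up in the environment variables the CLI reads. A minimal sketch of pulling them out of such a response (the payload below is a made-up sample, though the Credentials field names match what STS returns):

```python
import json

# Hypothetical response body, shaped like the output of
# `aws sts get-session-token --serial-number <mfa-arn> --token-code <code>`.
response = json.loads("""
{
  "Credentials": {
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "secretExample",
    "SessionToken": "tokenExample",
    "Expiration": "2024-01-01T00:00:00Z"
  }
}
""")

creds = response["Credentials"]
# These are the environment variables the AWS CLI/SDKs look for.
exports = {
    "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
    "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
    "AWS_SESSION_TOKEN": creds["SessionToken"],
}
for name, value in exports.items():
    print(f"export {name}={value}")
```

Pasting the printed export lines into the shell lets subsequent aws s3 commands use the MFA-backed session.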
There appears to be confusion about when to use IAM Users and IAM Roles.
When using an Amazon EC2 instance, the best method to grant permissions is:
Create an IAM Role and attach policies to grant the desired permissions
Associate the IAM Role with the Amazon EC2 instance. This can be done at launch time, or afterwards (Actions/Instance Settings/Attach IAM Role).
Any application running on the EC2 instance (including the AWS CLI) will now automatically receive credentials. Do not run aws configure.
If you want to grant permissions to your own (non-EC2) computer, then:
Create an IAM User (or use your existing one) and attach policies to grant the desired permissions
On the computer, run aws configure and enter the Access Key and Secret Key associated with the IAM User. This will store the credentials in ~/.aws/credentials.
Any application running on this computer will then use credentials from the local credentials file
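The local credentials file mentioned above is plain INI, so it is easy to inspect when debugging which keys are in play. A small sketch that writes and reads a file with the same layout aws configure produces (the path and key values here are placeholders, not real credentials):

```python
import configparser
import os
import tempfile

# Build a file with the same layout `aws configure` writes to
# ~/.aws/credentials.
content = (
    "[default]\n"
    "aws_access_key_id = AKIAEXAMPLE\n"
    "aws_secret_access_key = exampleSecret\n"
)

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "credentials")
    with open(path, "w") as f:
        f.write(content)

    parser = configparser.ConfigParser()
    parser.read(path)
    # The [default] profile is what the CLI uses when no --profile is given.
    access_key = parser["default"]["aws_access_key_id"]
    print(access_key)
```

If the access key printed here is not the one you expect, you have found why the CLI is acting as a different identity.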
Create an IAM user with the following permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucketName/*"
        }
    ]
}
Save Access key ID & Secret access key.
sudo apt install awscli
aws configure
AWS Access Key ID [None]: AKIAxxxxxxxxxxxZI4
AWS Secret Access Key [None]: 8Bxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx8
Default region name [None]: region (ex. us-east-2)
Default output format [None]: json
aws s3 ls s3://s3testingankit1/
This problem can occur not only from the CLI but also when calling the S3 API, for example.
The reason for this error can be a wrong configuration of the access permissions on the bucket.
For example, with the setup below you grant full privileges to perform actions on the bucket's internal objects, BUT you do not specify any action on the bucket itself:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
This will lead to the mentioned
... (AccessDenied) when calling the ListBuckets ...
error.
To fix this, you should allow the application to access the bucket itself (first statement) and to edit all objects inside the bucket (second statement):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
There are shorter configurations that might solve the problem, but the one above also keeps the security permissions fine-grained.
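To make the bucket-ARN vs. object-ARN split above concrete, here is a small sketch that builds the same two-statement policy programmatically (the bucket name is a placeholder):

```python
import json

bucket = "name-of-bucket"  # placeholder bucket name

# Bucket-level actions (e.g. s3:ListBucket) need the bucket ARN itself;
# object-level actions (e.g. s3:PutObject) need the /* object ARN.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{bucket}"],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/*"],
        },
    ],
}

print(json.dumps(policy, indent=2))
```

Generating the policy this way makes it harder to accidentally attach a bucket-level action to an object-level resource.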
I ran into this yesterday running a script I ran successfully in September 2021.
TL;DR: add --profile your.profile.name to the end of the command
I have multiple profiles on the login I was using. I think something in the AWS environment changed, or perhaps I had previously done something that let me bypass this. Back in September I set the profile with
aws configure set region us-west-2 --profile my.profile.name
But yesterday, after the failure, I saw that aws sts get-caller-identity was returning a different identity. After some searching of the documentation I found the additional way to specify the profile, and operations like:
aws s3 cp myfile s3://my-s3-bucket --profile my.profile.name
all worked.
I have a Windows machine with CyberDuck, from which I was able to access a destination bucket, but when trying to access the bucket from a Linux machine with the aws command, I got "An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied".
I then executed the same command, aws s3 ls, from a command-line interface on the Windows machine and it worked just fine. It looks like there is some security restriction on the AWS side for the machine/IP.
Related
We are asked to upload a file to a client's S3 bucket; however, we do not have an AWS account (nor do we plan on getting one). What is the easiest way for the client to grant us access to their S3 bucket?
My recommendation would be for your client to create an IAM user for you that is used only for the upload. Then, you will need to install the AWS CLI. On your client's side there will be a user whose only permission is to write to their bucket. This can be done pretty simply and will look something like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::the-bucket-name/*",
                "arn:aws:s3:::the-bucket-name"
            ]
        }
    ]
}
I have not thoroughly tested the above permissions!
Then, on your side, after you install the AWS cli you need to have two files. They both live in the home directory of the user that runs your script. The first is $HOME/.aws/config. This has something like:
[default]
output=json
region=us-west-2
You will need to ask them what AWS region the bucket is in. Next is $HOME/.aws/credentials. This will contain something like:
[default]
aws_access_key_id=the-access-key
aws_secret_access_key=the-secret-key-they-give-you
They must give you the region, the access key, the secret key, and the bucket name. With all of this you can now run something like:
aws s3 cp local-file-name.ext s3://the-client-bucket/destination-file-name.ext
This will transfer the local file local-file-name.ext to the bucket the-client-bucket with the file name there of destination-file-name.ext. They may have a different path in the bucket.
To recap:
Client creates an IAM user that has very limited permission. Only API permission is needed, not console.
You install the AWS CLI
Client gives you the access key and secret key.
You configure the machine that does the transfers with the credentials
You can now push files to the bucket.
You do not need an AWS account to do this.
I tried the simplest case following the AWS documentation. I created a role, assigned it to the instance, and rebooted the instance. To test access interactively, I logged on to the Windows instance and ran aws s3api list-objects --bucket testbucket. I get the error An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied.
The next test was to create a .aws/credentials file and add a profile to assume the role. I modified the role (assigned to the instance) and added permission for any user in the account to assume it. When I run the same command with --profile assume_role (aws s3api list-objects --bucket testbucket --profile assume_role), the objects in the bucket are listed.
Here is my test role's trust relationship:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "ec2.amazonaws.com",
                    "ssm.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        },
        {
            "Sid": "UserCanAssumeRole",
            "Effect": "Allow",
            "Principal": {
                "AWS": "111111111111"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
The role has only one permission policy, "AmazonS3FullAccess".
When I switch roles in the AWS console, I can see the content of the S3 bucket (and no other action is allowed in the console).
My assumption is that the EC2 instance does not assume the role.
How can I pinpoint where the problem is?
The problem was the Windows proxy.
I checked the proxy environment variables; none was set. When I checked Control Panel -> Internet Options, I saw that the Proxy text box showed a proxy value, but the "Use proxy" checkbox was not checked. Next to it was the text "Some of your settings are managed by your organization." The proxy-bypass list had 169.254.169.254 in it.
I ran the command in debug mode and saw that the CLI connects to the proxy, which cannot access 169.254.169.254, so no credentials are set. When I explicitly set the environment variable with set NO_PROXY=169.254.169.254, everything started to work.
Why the AWS CLI uses the proxy from the Windows system, I do not understand. Worst of all, it uses the proxy but does not check the proxy-bypass list. Lesson learned: run the command in debug mode and verify the output.
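The NO_PROXY behaviour described above can be sanity-checked with Python's standard library, which follows the same environment-variable convention as most HTTP clients (this illustrates the convention itself, not the AWS CLI's exact internals):

```python
import os
import urllib.request

# With no bypass configured, the metadata IP is not exempt from the proxy.
os.environ.pop("no_proxy", None)
os.environ.pop("NO_PROXY", None)
before = urllib.request.proxy_bypass_environment("169.254.169.254")

# After setting NO_PROXY, requests to the instance metadata service
# are flagged to skip the proxy.
os.environ["NO_PROXY"] = "169.254.169.254"
after = urllib.request.proxy_bypass_environment("169.254.169.254")

print(before, after)
```

The same check is useful when debugging any tool that silently routes metadata-service traffic through a corporate proxy.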
First of all, I'm aware of these questions:
Grant EC2 instance access to S3 Bucket
Can't access s3 bucket using IAM-role from an ec2-instance
Getting Access Denied when calling the PutObject operation with bucket-level permission
but the solutions are not working for me.
I created a role "sample_role", attached the AmazonS3FullAccess policy to it, and assigned the role to the EC2 instance.
My bucket-policy is as follows:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::My-Account-ID:role/sample_role"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::my_bucket/*"
        }
    ]
}
On my EC2 instance, listing my buckets works fine, both from the command line (aws s3 ls) and from a Python script.
But when I try to upload a file test.txt to my bucket, I get AccessDenied:
import boto3

s3_client = boto3.client('s3')
s3_resource = boto3.resource('s3')
bucket = s3_resource.Bucket('my_bucket')
with open('test.txt', "rb") as f:
    s3_client.upload_fileobj(f, bucket.name, 'text.txt')
Error message:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
The same happens when I just try to list the objects in my bucket, whether on the command line (aws s3api list-objects --bucket my_bucket) or from a Python script:
import boto3

s3_resource = boto3.resource('s3')
bucket = s3_resource.Bucket('my_bucket')
for my_bucket_object in bucket.objects.all():
    print(my_bucket_object)
Error message:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
When I turn off "Block all public access" in my bucket settings and enable public access in my access control list, it obviously works. But I need to restrict access to the specified role.
What am I missing?
Thanks for your help!
It appears that your requirement is:
You have an Amazon S3 bucket (my_bucket)
You have an Amazon EC2 instance with an IAM Role attached
You want to allow applications running on that EC2 instance to access my_bucket
You do not want the bucket to be publicly accessible
I will also assume that you are not trying to deny other users access to the bucket if they have already been granted it. You purely want to Allow access to the EC2 instance, without needing to Deny access to other users/roles that might also have access.
You can do this by adding a policy to the IAM Role that is attached to the EC2 instance:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my_bucket",
                "arn:aws:s3:::my_bucket/*"
            ]
        }
    ]
}
This grants ALL Amazon S3 permissions to the IAM Role for my_bucket. Note that some commands require permission on the bucket itself, while other commands require permission on the contents (/*) of the bucket.
I should also mention that granting s3:* is probably too generous, because it would allow the applications running on the instance to delete content and even delete the bucket, which is probably not what you wish to grant. If possible, limit the actions to only those that are necessary.
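As a rough illustration of the bucket-vs-contents distinction mentioned above, here is a sketch that maps a requested action to the resource form its policy statement needs (the action lists are a small, non-exhaustive sample chosen for illustration):

```python
# Small, non-exhaustive sample of S3 actions, grouped by the resource
# level their ARN must target.
BUCKET_LEVEL = {"s3:ListBucket", "s3:GetBucketLocation"}
OBJECT_LEVEL = {"s3:GetObject", "s3:PutObject", "s3:DeleteObject"}

def required_resource(action, bucket):
    """Return the ARN form a policy statement needs for this action."""
    if action in BUCKET_LEVEL:
        return f"arn:aws:s3:::{bucket}"
    if action in OBJECT_LEVEL:
        return f"arn:aws:s3:::{bucket}/*"
    raise ValueError(f"unclassified action: {action}")

print(required_resource("s3:ListBucket", "my_bucket"))
print(required_resource("s3:PutObject", "my_bucket"))
```

Checking each action against the right ARN form before attaching a policy avoids the classic "full access on /* but still AccessDenied on ListBucket" trap.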
When I turn off "Block all public access" in my bucket settings and enable public access in my access control list, it obviously works.
Remove "enable public access" from this sentence and this will be your solution :-)
"Block all public access" blocks all public access and it doesn't matter what bucket policy you use. So uncheck this option and your bucket policy will start working as you planned.
So I found the problem.
The credentials on my EC2 instance were configured with the access key of a dev-user account to which the role was not assigned.
I found out by running aws sts get-caller-identity, which returns the identity (e.g. IAM role) actually being used.
So it seems that the assigned role can be overridden by the user identity, which makes sense.
To solve the problem, I simply undid the configuration by deleting the configuration file ~/.aws/credentials. After that, the identity changed to the assigned role.
I'm trying to download a file from a private S3 bucket using the PHP SDK (on an EC2 instance).
I created an IAM role and attached AmazonS3FullAccess to it.
I created the S3 bucket, and this is the bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::206193043625:role/MyRoleName"
            },
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::config-files/*"
        }
    ]
}
Then on the PHP side I make a curl request to http://169.254.169.254/latest/meta-data/iam/security-credentials/MyRoleName, get JSON back, instantiate the S3Client, and try to download the file, but I get this error message:
Error executing "GetObject" on "https://files.s3.us-west-2.amazonaws.com/us-west-2__config.php"; AWS HTTP error: Client error: GET https://files.s3.us-west-2.amazonaws.com/us-west-2__config.php resulted in a 403 Forbidden response:
AccessDenied: Access Denied
C84D80DE6B2D35FD6sDWIYK98nSH+Oa8lBH7lD91rfHospDeo0jZKFDdo0CaeY8aX6Wb/s2ja5qeYxCBuLwDJ2AqSl0= (truncated XML error body)
Can anyone point me in the right direction?
There is no need to access 169.254.169.254 directly. The AWS SDK for PHP will automatically retrieve credentials.
Simply create the S3 client without specifying any credentials.
Since you've already attached the AmazonS3FullAccess role to your EC2 instance, you need not do anything else (i.e. accessing the metadata API). Directly create your S3 client, and it shall work as expected from your compute instance.
For accessing S3 Bucket from EC2 Instance follow the below steps:
* Create an IAM Role with S3 Full Access.
* Launch an EC2 instance with the role attached to it.
* SSH to your EC2 instance with root permissions.
* Type the command: aws s3 ls. It will display all the buckets which are there in S3.
Since the role is attached to the EC2 instance, there is no need to mention the security credentials.
Thanks
I got a text file to access an S3 bucket, like the following:
arn:aws:iam::############:user/aaaaaaaa-aaaaaaaaa-aaa
User
aaaaaaaa-aaaaaaaaa-aaa
Access key ID
AAAAAAAAAAAAAAAAAAAA
Secret access key
AAAAAAAAAAA/AAAAAAAAAAAAAAAAAAAAAAAAAAAA
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::bbbbbbbb-bbbbbbbbb-bbbbbbb/*"
        }
    ]
}
I have an AWS account and can create my own buckets, but I see no UI for acquiring such files.
UPDATE
I issued
>aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
then I did
>aws configure
AWS Access Key ID [None]: AAAAAAAAAAAAAAAAAAAA
AWS Secret Access Key [None]: AAAAAAAAAAA/AAAAAAAAAAAAAAAAAAAAAAAAAAAA
Default region name [None]:
Default output format [None]:
and now
>aws s3 ls
An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied
Why? Why was neither the User nor the Resource value from my text file used, and how am I supposed to use this data?
UPDATE 2
I tried
>aws s3 ls
>aws s3 ls s3://bbbbbbbb-bbbbbbbbb-bbbbbbb
>aws s3 ls bbbbbbbb-bbbbbbbbb-bbbbbbb
>aws s3 ls bbbbbbbb-bbbbbbbbb-bbbbbbb/*
>aws s3 ls s3:/bbbbbbbb-bbbbbbbbb-bbbbbbb
And got Access denied in all cases.
It appears that your system administrators have created some configurations in AWS and wanted to let you know what they have done. The file is a dump of information from various places; it is for your reference and is not meant to be 'used' somewhere.
The first line is the Amazon Resource Name (ARN) that uniquely identifies you as a user. It can be used in security policies to grant you access to resources:
arn:aws:iam::############:user/aaaaaaaa-aaaaaaaaa-aaa
They are also telling you your Username:
User
aaaaaaaa-aaaaaaaaa-aaa
The Access Key and Secret Key can be used to identify yourself, as you have done with the AWS Command-Line Interface (CLI):
Access key ID
AAAAAAAAAAAAAAAAAAAA
Secret access key
AAAAAAAAAAA/AAAAAAAAAAAAAAAAAAAAAAAAAAAA
The next part is an IAM Policy:
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::bbbbbbbb-bbbbbbbbb-bbbbbbb/*"
        }
    ]
}
This policy states that you can perform the listed actions against the specified Amazon S3 bucket.
It's not a great policy, however, because the last three actions actually apply to a bucket (or to no bucket at all), so they should not be used with a Resource statement that specifies bucket/*.
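One way the policy above could be repaired, splitting each action onto the resource level it actually targets, is sketched below (same placeholder bucket name; this is an illustration of the critique above, not a policy your administrators have approved):

```json
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::bbbbbbbb-bbbbbbbbb-bbbbbbb/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::bbbbbbbb-bbbbbbbbb-bbbbbbb"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        }
    ]
}
```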
If you are trying to access information in Amazon S3 but receive Access Denied, then contact your System Administrator to update the policy to grant you access.
You have configured your credentials properly, based on your update in the question.
But you haven't specified a default region in the configuration.
Check with your admins what the region for this S3 bucket is. It could be something like us-east-1 or us-west-2.
Once you have your bucket's region, you can issue a command as below:
aws s3 ls <name of your bucket> --region us-east-1
The reason you are receiving Access Denied is that you do not have access to all buckets, only to one of the buckets on S3. This is suggested by this line:
"Resource":"arn:aws:s3:::bbbbbbbb-bbbbbbbbb-bbbbbbb/*"
where bbbbbbbb-bbbbbbbbb-bbbbbbb is the name of your bucket.
You need to go to IAM to create a policy for your bucket. Then you need to add this policy to your user account, and then you can access the bucket using your user's AccessKey and SecretAccessKey.