Can't access S3 bucket using IAM Role from an EC2 instance - amazon-web-services

I'm trying to download a file from a private S3 bucket using the PHP SDK (on an EC2 instance).
I created an IAM role and attached the AmazonS3FullAccess policy to it.
I created the S3 bucket and this is the bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::206193043625:role/MyRoleName"
            },
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::config-files/*"
        }
    ]
}
Then on the PHP side I make a curl request to http://169.254.169.254/latest/meta-data/iam/security-credentials/MyRoleName, get the credentials JSON back, instantiate the S3Client, and try to download the file, but I get this error message:
Error executing "GetObject" on "https://files.s3.us-west-2.amazonaws.com/us-west-2__config.php"; AWS HTTP error: Client error: GET https://files.s3.us-west-2.amazonaws.com/us-west-2__config.php resulted in a 403 Forbidden response:
AccessDenied (client): Access Denied (error response body with request ID truncated)
Can anyone point me in the right direction?

There is no need to access 169.254.169.254 directly. The AWS SDK for PHP will automatically retrieve credentials.
Simply create the S3 client without specifying any credentials.

Since you've already attached a role with AmazonS3FullAccess to your EC2 instance, you don't need to do anything else (i.e. no need to call the metadata API). Just use the S3 client directly and it will work as expected from your compute instance.

To access an S3 bucket from an EC2 instance, follow these steps:
* Create an IAM Role with S3 Full Access.
* Launch an EC2 instance with the role attached to it.
* SSH to your EC2 instance with root permissions.
* Type the command: aws s3 ls. It will display all the buckets in S3.
Since the role is attached to the EC2 instance, there is no need to mention the security credentials.
Thanks

How can I enable an ec2 instance to have private access to an S3 bucket?

First of all, I'm aware of these questions:
Grant EC2 instance access to S3 Bucket
Can't access s3 bucket using IAM-role from an ec2-instance
Getting Access Denied when calling the PutObject operation with bucket-level permission
but the solutions are not working for me.
I created a role "sample_role", attached the AmazonS3FullAccess-policy to it and assigned the role to the ec2-instance.
My bucket-policy is as follows:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::My-Account-ID:role/sample_role"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::my_bucket/*"
        }
    ]
}
On my EC2 instance, listing my buckets works fine, both from the command line (aws s3 ls) and from a Python script.
But when I try to upload a file test.txt to my bucket, I get AccessDenied:
import boto3
s3_client = boto3.client('s3')
s3_resource = boto3.resource('s3')
bucket = s3_resource.Bucket('my_bucket')
with open('test.txt', "rb") as f:
    s3_client.upload_fileobj(f, bucket.name, 'text.txt')
Error message:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
The same happens when I just try to list the objects in my bucket, either from the command line (aws s3api list-objects --bucket my_bucket) or from a Python script:
import boto3
s3_resource = boto3.resource('s3')
bucket = s3_resource.Bucket('my_bucket')
for my_bucket_object in bucket.objects.all():
    print(my_bucket_object)
Error message:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
When I turn off "Block all public access" in my bucket settings and enable public access in my access control list, it obviously works. But I need to restrict access to the specified role.
What am I missing?
Thanks for your help!
It appears that your requirement is:
* You have an Amazon S3 bucket (my_bucket)
* You have an Amazon EC2 instance with an IAM Role attached
* You want to allow applications running on that EC2 instance to access my_bucket
* You do not want the bucket to be publicly accessible
I will also assume that you are not trying to deny other users access to the bucket if they have already been granted that access. You are purely wanting to Allow access to the EC2 instance, without needing to Deny access to other users/roles that might also have access.
You can do this by adding a policy to the IAM Role that is attached to the EC2 instance:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my_bucket",
                "arn:aws:s3:::my_bucket/*"
            ]
        }
    ]
}
This grants ALL Amazon S3 permissions to the IAM Role for my_bucket. Note that some commands require permission on the bucket itself, while other commands require permission on the contents (/*) of the bucket.
I should also mention that granting s3:* is probably too generous, because it would allow the applications running on the instance to delete content and even delete the bucket, which is probably not what you wish to grant. If possible, limit the actions to only those that are necessary.
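To illustrate that split, here is a sketch (plain Python, with a placeholder bucket name) that builds a narrower role policy: the list action on the bucket ARN and the object actions on the /* ARN, instead of s3:* on everything:

```python
import json

def s3_role_policy(bucket):
    """Build a least-privilege role policy: list on the bucket ARN,
    read/write on the objects (bucket ARN + "/*")."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            },
        ],
    }

# Print the policy document ready to paste into the IAM console.
print(json.dumps(s3_role_policy("my_bucket"), indent=4))
```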
When I turn off "Block all public access" in my bucket settings and enable public access in my access control list, it obviously works.
Remove "enable public access" from this sentence and this will be your solution :-)
"Block all public access" blocks all public access and it doesn't matter what bucket policy you use. So uncheck this option and your bucket policy will start working as you planned.
So I found the problem.
The credentials on my EC2 instance were configured with the access key of a dev-user account to which the role was not assigned.
I found out by running aws sts get-caller-identity which returns the identity (e.g. IAM role) actually being used.
So it seems that the assigned role can be overridden by configured user credentials, which makes sense.
To solve the problem, I simply undid the configuration by deleting the configuration file ~/.aws/credentials. After that the identity changed to the assigned role.
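A quick way to see whether something local is shadowing the instance role is to check the credential sources that the SDK/CLI consults before falling back to instance metadata. This is a sketch that only checks the two most common sources (environment variables and the default credentials file path):

```python
import os

def shadowing_credential_sources(env=None, home=None):
    """Return the credential sources that take precedence over the
    EC2 instance role in the SDK/CLI credential lookup chain."""
    env = os.environ if env is None else env
    home = os.path.expanduser("~") if home is None else home
    sources = []
    # Environment variables are checked before the credentials file.
    if "AWS_ACCESS_KEY_ID" in env:
        sources.append("environment variables")
    # A ~/.aws/credentials file is checked before the instance role.
    if os.path.exists(os.path.join(home, ".aws", "credentials")):
        sources.append("~/.aws/credentials")
    return sources

print(shadowing_credential_sources())
```

If this prints anything, those credentials, not the attached role, are what `aws sts get-caller-identity` will report.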

Access S3 bucket from all instances having a particular IAM role [duplicate]

I need to fire up an S3 bucket so my EC2 instances have access to store image files to it. The EC2 instances need read/write permissions. I do not want to make the S3 bucket publicly available, I only want the EC2 instances to have access to it.
The other gotcha is my EC2 instances are being managed by OpsWorks and I can have many different instances being fired up depending on load/usage. If I were to restrict it by IP, I may not always know the IP the EC2 instances have. Can I restrict by VPC?
Do I have to make my S3 bucket enabled for static website hosting?
Do I need to make all files in the bucket public as well for this to work?
You do not need to make the bucket public readable, nor the files public readable. The bucket and its contents can be kept private.
Don't restrict access to the bucket based on IP address, instead restrict it based on the IAM role the EC2 instance is using.
Create an IAM EC2 Instance role for your EC2 instances.
Run your EC2 instances using that role.
Give this IAM role a policy to access the S3 bucket.
For example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my_bucket",
                "arn:aws:s3:::my_bucket/*"
            ]
        }
    ]
}
If you want to restrict access to the bucket itself, try an S3 bucket policy.
For example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": ["arn:aws:iam::111122223333:role/my-ec2-role"]
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my_bucket",
                "arn:aws:s3:::my_bucket/*"
            ]
        }
    ]
}
Additional information: http://blogs.aws.amazon.com/security/post/TxPOJBY6FE360K/IAM-policies-and-Bucket-Policies-and-ACLs-Oh-My-Controlling-Access-to-S3-Resourc
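The key difference between the two policy forms in this answer: a role policy has no Principal (the role it is attached to is implied), while a bucket policy must name the role ARN explicitly. A sketch that builds the bucket-policy form (the account ID and role name are placeholders):

```python
import json

def bucket_policy_for_role(bucket, account_id, role_name):
    """Bucket policy granting one IAM role full access to the bucket
    and its objects; the Principal names the role ARN explicitly."""
    role_arn = f"arn:aws:iam::{account_id}:role/{role_name}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": [role_arn]},
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }

print(json.dumps(bucket_policy_for_role("my_bucket", "111122223333", "my-ec2-role"), indent=4))
```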
This can be done very simply.
Follow the following steps:
Open the AWS EC2 console.
Select the instance and navigate to Actions.
Select Instance Settings and choose Attach/Replace IAM Role.
Create a new role and attach S3FullAccess policy to that role.
When this is done, connect to the AWS instance and the rest will be done via the following CLI commands:
aws s3 cp filelocation/filename s3://bucketname
Please note: the file location refers to the local path, and bucketname is the name of your bucket.
Also note: This is possible if your instance and S3 bucket are in the same account.
Cheers.
IAM role is the solution for you.
You need to create a role with S3 access permission. If the EC2 instance was started without any role, you can attach one afterwards (or relaunch the instance with that role assigned).
Refer: Allowing AWS OpsWorks to Act on Your Behalf

Access denied when trying to do AWS s3 ls using AWS cli

I launched an EC2 instance and created a role with a full S3 access policy for the instance. I installed the AWS CLI on it and configured my user's access key. My user has admin access and the full S3 access policy too. I can see the buckets in the AWS console, but when I try to run aws s3 ls on the instance it returns An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied.
What else do I need to do to give the role or my user the proper permissions to list and sync objects between S3 and the instance?
I ran into this issue as well.
I ran aws sts get-caller-identity and noticed that the ARN did not match what I was expecting. It turns out if you have AWS configurations set in your bash_profile or bashrc, the awscli will default to using these instead.
I changed the environment variables in bash_profile and bashrc to the proper keys and everything started working.
Turns out I forgot I had to complete MFA to get an access token before operating on S3. Thank you, everyone, for your responses.
There appears to be confusion about when to use IAM Users and IAM Roles.
When using an Amazon EC2 instance, the best method to grant permissions is:
Create an IAM Role and attach policies to grant the desired permissions
Associate the IAM Role with the Amazon EC2 instance. This can be done at launch time, or afterwards (Actions/Instance Settings/Attach IAM Role).
Any application running on the EC2 instance (including the AWS CLI) will now automatically receive credentials. Do not run aws configure.
If you are wanting to grant permissions to your own (non-EC2) computer, then:
Create an IAM User (or use your existing one) and attach policies to grant the desired permissions
On the computer, run aws configure and enter the Access Key and Secret Key associated with the IAM User. This will store the credentials in ~/.aws/credentials.
Any application running on this computer will then use credentials from the local credentials file
Create an IAM user with this permission:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucketName/*"
        }
    ]
}
Save Access key ID & Secret access key.
sudo apt install awscli
aws configure
AWS Access Key ID [None]: AKIAxxxxxxxxxxxZI4
AWS Secret Access Key [None]: 8Bxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx8
Default region name [None]: region (ex. us-east-2)
Default output format [None]: json
aws s3 ls s3://s3testingankit1/
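For reference, the values entered in aws configure end up in ~/.aws/credentials in INI form, which both the CLI and the SDKs read. A small stdlib sketch of that layout (the key values here are obvious placeholders, not real keys):

```python
import configparser

# The format aws configure writes to ~/.aws/credentials
# (values below are placeholders, not real keys).
sample = """\
[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = EXAMPLESECRETKEY
"""

config = configparser.ConfigParser()
config.read_string(sample)
print(config["default"]["aws_access_key_id"])
```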
This problem can occur not only with the CLI but also when calling the S3 API, for example.
The reason for this error can be a misconfiguration of the access permissions on the bucket.
For example, with the setup below you grant full privileges on the bucket's objects, BUT you don't grant any action on the bucket itself:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
This will lead to the mentioned
... (AccessDenied) when calling the ListBuckets ...
error.
To fix this, allow the application to list the bucket (first statement) and to write objects inside the bucket (second statement):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
There are shorter configurations that might solve the problem, but the one above also keeps the security permissions fine-grained.
I ran into this yesterday running a script I ran successfully in September 2021.
TL;DR: add --profile your.profile.name to the end of the command
I have multiple profiles on the login I was using. I think something in the aws environment changed, or perhaps I had done something that was able to bypass this before. Back in September I set the profile with
aws configure set region us-west-2 --profile my.profile.name
But yesterday, after the failure, I saw that aws sts get-caller-identity was returning a different identity. After some documentation search I found the additional method for specifying the profile, and operations like:
aws s3 cp myfile s3://my-s3-bucket --profile my.profile.name
all worked.
I have a Windows machine with CyberDuck from which I was able to access a destination bucket, but when trying to access the bucket from a Linux machine with the aws command, I got "An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied".
I then executed same command "aws s3 ls" from a command line interface on the Windows machine and it worked just fine. It looks like there is some security restriction on the AWS server for the machine/IP.

S3 PutObject operation gives Access Denied with IAM Role containing Policy granting access to S3

I have an IAM role with a custom policy attached to it allowing access to an S3 bucket we'll call foo-bar. I've tried granting access to that specific resource with PutObject and a couple of other actions. That IAM role is attached to an EC2 instance, yet that EC2 instance does not have access to upload files when I use aws s3 sync . s3://foo-bar.
To test whether it was an issue with the policy, I granted s3:* on * resources, and it still won't upload.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "cloudformation:ListExports",
                "s3:*"
            ],
            "Resource": "*"
        }
    ]
}
The error I get at the CLI is:
upload failed: infrastructure\vpc.template to s3://foo-bar/infrastructure/vpc.template An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Is there something else I need to do in order to give it access? Why isn't the Policy attached to the IAM Role working?
I tried running it with --debug to see what's going on.
This helped me discover that I have a local .aws/credentials file which overrode the IAM role attached to the machine.
If you need the credentials file - you can have a different profile [some name] and use --profile to choose it.
HTH.
