Grant EC2 instance access to S3 Bucket - amazon-web-services

I want to grant my EC2 instance access to an S3 bucket.
A container running my application is launched on this EC2 instance, but it does not get permission on the S3 bucket.
This is my bucket policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1462808223348",
  "Statement": [
    {
      "Sid": "Stmt1462808220978",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::714656454815:role/ecsInstanceRole"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "private-ip/32"
        }
      }
    }
  ]
}
But it doesn't work unless I give everyone permission to access the bucket.
I also tried to curl the file in the S3 bucket from inside the EC2 instance, but that doesn't work either.

As of 2019, there is a much easier and cleaner way to do this (the credentials never have to be stored on the instance; it can fetch them automatically):
create an IAM role for your instance and assign it
create a policy that grants access to your S3 bucket
assign the policy to the instance's IAM role
upload/download objects, e.g. via the AWS CLI: aws s3 cp <S3Uri> <LocalPath> (see the CLI sketch after the policy example below)
For step #2, an example of a JSON policy that allows read and write access to objects in an S3 bucket is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListObjectsInBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::bucket-name"]
    },
    {
      "Sid": "AllObjectActions",
      "Effect": "Allow",
      "Action": "s3:*Object",
      "Resource": ["arn:aws:s3:::bucket-name/*"]
    }
  ]
}
You have to adjust the allowed actions and replace "bucket-name" with your own bucket.
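For illustration, a rough AWS CLI sketch of those steps; the role, profile, policy, file, and instance names are placeholders, and the trust/permission policy documents are assumed to be saved locally:
# 1. Create the role with a trust policy that lets EC2 assume it
aws iam create-role --role-name my-ec2-s3-role \
    --assume-role-policy-document file://ec2-trust-policy.json
# 2. Attach an S3 access policy (e.g. the JSON above, saved locally) to the role
aws iam put-role-policy --role-name my-ec2-s3-role \
    --policy-name my-s3-access --policy-document file://s3-access-policy.json
# 3. Wrap the role in an instance profile and associate it with the instance
aws iam create-instance-profile --instance-profile-name my-ec2-s3-profile
aws iam add-role-to-instance-profile --instance-profile-name my-ec2-s3-profile --role-name my-ec2-s3-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=my-ec2-s3-profile
# 4. From the instance, copy an object with no stored credentials
aws s3 cp s3://bucket-name/some-key ./some-key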

There is no direct way of granting the "EC2" instance itself access to the AWS service, but you can try the following.
Create a new user in AWS IAM and download the credentials file.
This user will represent your EC2 server.
Give the user permissions on your S3 bucket.
Next, place the credentials file in the following location:
EC2 - Windows Instance:
a. Place the credentials file anywhere you wish. (e.g. C:/credentials)
b. Create an environment variable AWS_CREDENTIAL_PROFILES_FILE and set its value to the path where you put your credentials file (e.g. C:/credentials)
EC2 - Linux Instance
a. Follow steps from windows instance
b. Create a folder .aws inside your app-server's root folder (e.g. /usr/share/tomcat6).
c. Create a symlink in your .aws folder pointing to the credentials file referenced by your environment variable:
sudo ln -s $AWS_CREDENTIAL_PROFILES_FILE /usr/share/tomcat6/.aws/credentials
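For completeness, the credentials file itself uses the standard AWS profile format; a minimal sketch (with placeholder keys) looks like this:
cat > /path/to/credentials <<'EOF'
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
EOF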
Now that your credentials file is placed, you can use Java code to access the bucket.
NOTE: AWS-SDK libraries are required for this
// Imports assume the AWS SDK for Java v1 is on the classpath
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;

AWSCredentials credentials = null;
try {
    // Loads credentials from the profile file located via AWS_CREDENTIAL_PROFILES_FILE
    credentials = new ProfileCredentialsProvider().getCredentials();
} catch (Exception e) {
    LOG.error("Unable to load credentials " + e);
    failureMsg = "Cannot connect to file server.";
    throw new AmazonClientException(
        "Cannot load the credentials from the credential profiles file. " +
        "Please make sure that your credentials file is at the correct " +
        "location (environment variable : AWS_CREDENTIAL_PROFILES_FILE), and is in valid format.",
        e);
}

AmazonS3 s3 = new AmazonS3Client(credentials);
Region usWest2 = Region.getRegion(Regions.US_WEST_2);
s3.setRegion(usWest2);
ObjectListing objectListing = s3.listObjects(new ListObjectsRequest().withBucketName(bucketName).withPrefix(prefix));
Where bucketName = [Your Bucket Name]
and prefix = [your folder structure inside your bucket, where your file(s) are contained]
Hope that helps.
Also, if you are not using Java, you can check out AWS-SDKs in other programming languages too.

I found it out.
It only works with the public IP of the EC2 instance. (The request reaches S3 over its public endpoint, so the aws:SourceIp condition sees the instance's public IP rather than its private one, unless the traffic goes through a VPC endpoint.)
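For reference, the instance's public IP (the value to use in the aws:SourceIp condition) can be read from the instance metadata service, e.g.:
curl http://169.254.169.254/latest/meta-data/public-ipv4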

Try this:
{
  "Version": "2012-10-17",
  "Id": "Policy1462808223348",
  "Statement": [
    {
      "Sid": "Stmt1462808220978",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket-name/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "yourIp/24"
        }
      }
    }
  ]
}

I faced the same problem. I finally resolved it by creating an access point for the bucket in question using the AWS CLI (see https://docs.aws.amazon.com/AmazonS3/latest/dev/creating-access-points.html), and I then created a bucket policy like the following:
{
  "Version": "2012-10-17",
  "Id": "Policy1583357393961",
  "Statement": [
    {
      "Sid": "Stmt1583357315674",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account-id>:role/ecsInstanceRole"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<your-bucket>"
    },
    {
      "Sid": "Stmt1583357391961",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account-id>:role/ecsInstanceRole"
      },
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::<your-bucket>/*"
    }
  ]
}
Please make sure you are using a newer version of the AWS CLI (1.11.xxx didn't work for me); I finally installed version 2 of the CLI to get this to work.
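For reference, creating the access point from the CLI looks roughly like this (the account ID and access point name are placeholders):
aws s3control create-access-point --account-id 123456789012 --name my-access-point --bucket <your-bucket>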

Related

Uploading to AWS S3 bucket from a profile in a different environment

I have access to one of two AWS environments, and in it I've created a protected S3 bucket that should receive uploads from an account in the environment I do not have access to. That environment and account are what a project's CI uses.
environment I have access to: env1
environment I do not have access to: env2
account I do not have access to: user/ci
bucket name: content
S3 bucket policy:
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      ...
    },
    {
      "Sid": "Allow access to bucket from profile in env1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/ci"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket*"
      ],
      "Resource": "arn:aws:s3:::content"
    },
    {
      "Sid": "Allow access to bucket items from profile in env1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/ci"
      },
      "Action": [
        "s3:Get*",
        "s3:PutObject",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::content",
        "arn:aws:s3:::content/*"
      ]
    }
  ]
}
From inside a container that's configured for env1 and user/ci I'm testing with the command
aws s3 sync content/ s3://content/
and I get the error:
fatal error: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
I have two questions:
Am I even using the correct aws command to upload the data to the bucket?
Am I missing something from my bucket policy?
For the latter, I've basically followed what a load of examples and answers online have suggested.
To test your policy, I did the following:
Created an IAM User with no policies
Created an Amazon S3 bucket
Attached your Bucket Policy to the bucket, and updated the ARN and bucket name
Tested access to the bucket with:
aws s3 ls s3://bucketname
aws s3 sync folder/ s3://bucketname/folder/
It worked fine.
Therefore, the policy you display appears to grant all the necessary permissions. It is possible that you have something else that is denying access to the bucket.
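A couple of checks that can help find such a Deny (the bucket name is taken from the question; these assume you are allowed to read the bucket's configuration):
# Look for explicit Deny statements in the bucket policy
aws s3api get-bucket-policy --bucket content
# Check whether S3 Block Public Access settings are interfering
aws s3api get-public-access-block --bucket content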
The solution was to grant the following policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::content",
        "arn:aws:s3:::content/*"
      ]
    }
  ]
}
to user/ci in env1.

AWS - Unable to access S3 bucket object from EC2

I am not sure if I am missing a step here or not.
I have an S3 bucket that I need to access from an AWS SDK for PHP script I wrote, running on my EC2 instance. I created an IAM role to allow access.
IAM Allow_S3_Access_to_EC2
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::myinbox"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::myinbox/*"
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor3",
      "Effect": "Allow",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::myinbox/*"
    }
  ]
}
And my Trust Relationship for the IAM role is
Trust Relationship
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
I then attached that IAM role to my EC2 instance. From what I have read this is all I have to do, but I think I need to do more.
In my bucket policy I have the following to allow SES to create the email object.
S3 Bucket Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSESPuts",
      "Effect": "Allow",
      "Principal": {
        "Service": "ses.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::myinbox/*",
      "Condition": {
        "StringEquals": {
          "aws:Referer": "************"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::************:role/Allow_S3_Access_to_EC2"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::myinbox",
        "arn:aws:s3:::myinbox/*"
      ]
    }
  ]
}
My Bucket Policy has nothing in there for my EC2 or even my IAM role I have attached. Do I need to add something to my Bucket Policy as well? That is where I am confused.
What I am experiencing is that when a new object is created and I try to access that object from my AWS SDK for PHP script, I get a 403 Forbidden. If I make that object public in the S3 console I can then access it just fine. So even though I have set permissions for my EC2 to access my S3 bucket, I can't access the object unless I make it public.
I even tried using wget on the object from the terminal on the actual server, and I still get the 403 unless I make the object public.
When I run the IAM Policy Simulator on my role I get
Here is my PHP
PHP Script
<?php
require '../aws-ses/aws-autoloader.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$bucketName = 'myinbox';

try {
    // Instantiate the client.
    $s3 = new S3Client([
        'version' => 'latest',
        'region' => 'us-west-2',
        'credentials' => array(
            'key' => '*********************',
            'secret' => '*******************************************'
        )
    ]);
} catch (Exception $e) {
    // We use a die, so if this fails it stops here. Typically this is a REST call, so this would
    // return a json object.
    die("Error: " . $e->getMessage());
}

// Use the high-level iterators (returns ALL of your objects).
$objects = $s3->getIterator('ListObjects', array('Bucket' => $bucketName));
First, did you set up the trust relationship so that the EC2 service can assume that role?
Next, you don't associate IAM roles directly with EC2 instances; instead you need to use an Instance Profile. Did you set up an Instance Profile associated with that Role?
This document is a good start: https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-iam-instance-profile.html
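If the role was created through the console for EC2, an instance profile with the same name usually exists already; either way, a couple of CLI checks can confirm the association (the profile name and instance ID below are placeholders):
# Does an instance profile with this name exist, and does it contain the role?
aws iam get-instance-profile --instance-profile-name Allow_S3_Access_to_EC2
# Which instance profile (if any) is associated with the instance?
aws ec2 describe-iam-instance-profile-associations --filters Name=instance-id,Values=i-0123456789abcdef0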
1) I would make sure that your EC2 instance is actually using the role when it calls S3; use the command below to identify the caller (see the example output after this list):
aws sts get-caller-identity
2) I would revoke existing sessions to make sure new sessions pick up the refreshed role:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_revoke-sessions.html
3) Use the S3 Access Analyzer to work out how access is being resolved:
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/access-analyzer.html
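For illustration, when the instance really is using the attached role, the caller identity should look roughly like this (account ID, role name, and instance ID are placeholders):
aws sts get-caller-identity
{
    "UserId": "AROAXXXXXXXXXXXXXXXXX:i-0123456789abcdef0",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/Allow_S3_Access_to_EC2/i-0123456789abcdef0"
}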

AWS S3: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied

I have an AWS account with read/write permissions as shown below:
I'd like to make it so that an IAM user can download files from an S3 bucket, but I'm getting access denied when executing aws s3 sync s3://<bucket_name> . I have tried various things, but to no avail. Some steps that I did:
Created a user called s3-full-access
Executed aws configure in my CLI and entered the generated access key id and secret access key for the above user
Created a bucket policy (shown below) that I'd hoped would grant access to the user created in the first step.
My bucket has a folder named AffectivaLogs in which files were being added anonymously by various users. It seems that although the bucket is public, the folder inside it is not, and I am not even able to make it public, which leads to the following error.
Following are the public access settings:
Update: I updated the bucket policy as follows, but it doesn't work.
To test the situation, I did the following:
Created an IAM User with no attached policies
Created an Amazon S3 bucket
Turned off S3 block public access settings:
Block new public bucket policies
Block public and cross-account access if bucket has public policies
Added a Bucket Policy granting s3:* access to the contents of the bucket for the IAM User
I then ran aws s3 sync and got Access Denied.
I then modified the policy to also permit access to the bucket itself:
{
  "Id": "Policy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "statement",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::my-bucket/*",
        "arn:aws:s3:::my-bucket"
      ],
      "Principal": {
        "AWS": [
          "arn:aws:iam::123456789012:user/stack-user"
        ]
      }
    }
  ]
}
This worked.
Bottom line: Also add permissions to access the bucket, in addition to the contents of the bucket. (I suspect it is because aws s3 sync requires listing of bucket contents, in addition to accessing the objects themselves.)
If you have KMS encryption enabled on the bucket, you should also add a policy that allows you to decrypt data using the KMS key.
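As a sketch, a statement like the following could be attached to the reading user or role (the role name, policy name, and key ARN are placeholders):
aws iam put-role-policy --role-name my-reader-role --policy-name allow-kms-decrypt \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
        "Resource": "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"
      }]
    }'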
You can configure the S3 bucket policy with the required principal:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::accountId:user/*"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bucket"
    },
    {
      "Sid": "GetObjects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::accountId:user/*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}
Or you can create an IAM policy and attach it to the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::bucket"
    },
    {
      "Sid": "GetObject",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}
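For completeness, a rough sketch of creating that IAM policy and attaching it to a role with the CLI (policy, role, and file names are placeholders):
aws iam create-policy --policy-name my-s3-read-policy --policy-document file://s3-read-policy.json
aws iam attach-role-policy --role-name my-instance-role \
    --policy-arn arn:aws:iam::123456789012:policy/my-s3-read-policy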

Error While storing the document Permission denied AWS

I have an EC2 instance in an Elastic Beanstalk environment (dev) which works as expected. I have also deployed the same app to a new Elastic Beanstalk environment (test). The application comes up and all the functionality works, but the upload-to-S3 functionality doesn't work in this test environment. I get an "Error While storing the document Permission denied" exception.
I have given all the permissions in the S3 bucket policy. My bucket policy details are as follows:
{
  "Version": "2012-10-17",
  "Id": "Policy150025",
  "Statement": [
    {
      "Sid": "Stmt1500252113871",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::dev/devkey"
    }
  ]
}
I am not sure why the same app works in one environment and not the other. I appreciate any suggestions.
* Updated *
Trust Relationship
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The bucket policy grants the user access to the objects, but the user who is uploading the files to the bucket should also have permission to put objects into the bucket.
For the EC2 instance, can you confirm the AWS credentials inside the machine environment, or whether any role attached to the instance allows putting objects into the bucket?
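A few checks from inside the instance can show which credentials are actually in play (the metadata endpoint is the standard one; the file paths are the usual defaults):
# Which role (if any) is attached via the instance metadata service
# (IMDSv1 shown; instances enforcing IMDSv2 need a session token first)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Look for static credentials that may be overriding the role
env | grep -i AWS_
cat ~/.aws/credentials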

Migrate S3 bucket error

I want to migrate an S3 bucket from one account to another account. Here is my bucket policy:
{
  "Version": "2008-10-17",
  "Id": "Policy1335892530063",
  "Statement": [
    {
      "Sid": "DelegateS3Access",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxxxxxxx:root"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::test123",
        "arn:aws:s3:::test123/*"
      ]
    },
    {
      "Sid": "Stmt1335892150622",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxxxxxx:root"
      },
      "Action": [
        "s3:GetBucketAcl",
        "s3:GetBucketPolicy"
      ],
      "Resource": "arn:aws:s3:::test123"
    },
    {
      "Sid": "Stmt1335892526596",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxxxxxxxx:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::test123/*"
    }
  ]
}
Here is my IAM user policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": ["arn:aws:s3:::*"]
    }
  ]
}
When I run the command
aws s3 sync s3://test123 s3://abc-test123
I get the error:
A client error (AccessDenied) occurred when calling the CopyObject operation: Access Denied
Your bucket policy seems to be correct.
Please verify that you are using the root account, just as specified in your bucket policy.
Also, you may need to check whether there are any Deny statements in the bucket policy on your destination bucket.
If nothing helps, you can enable temporary public access to your bucket as a workaround. Yes, it's not secure, but it should probably work in all cases.
Make sure you are providing adequate permissions on both the source bucket (to read) and the destination bucket (to write).
If you are using Root credentials (not generally recommended) for an Account that owns the bucket, you probably don't even need the bucket policy -- the root account should, by default, have the necessary access.
If you are assigning permissions to an IAM user, then instead of creating a bucket policy, assign permissions to the IAM user themselves. There is no need to supply a Principal in this situation.
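For illustration, such an inline user policy could be attached with the CLI roughly like this (the user name and exact action list are assumptions; the bucket names come from the question):
aws iam put-user-policy --user-name my-sync-user --policy-name s3-migrate \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
        "Resource": [
          "arn:aws:s3:::test123",
          "arn:aws:s3:::test123/*",
          "arn:aws:s3:::abc-test123",
          "arn:aws:s3:::abc-test123/*"
        ]
      }]
    }'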
Start by checking that you have permissions to list both buckets:
aws s3 ls s3://test123
aws s3 ls s3://abc-test123
Then check that you have permissions to copy a file from the source and to the destination:
aws s3 cp s3://test123/foo.txt .
aws s3 cp foo.txt s3://abc-test123/foo.txt
If they work, then the sync command should work, too.