I have an EC2 instance in an Elastic Beanstalk environment (Dev) which works as expected. I have also deployed the same app to a new Elastic Beanstalk environment (Test). The application comes up and all the functionality works, but the upload to S3 does not work in this Test environment. I get an "Error While storing the document Permission denied" exception.
I have given all the permissions in S3 via the bucket policy. My bucket policy details are as follows:
{
"Version": "2012-10-17",
"Id": "Policy150025",
"Statement": [
{
"Sid": "Stmt1500252113871",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::dev/devkey"
}
]
}
I am not sure why the same app works in one environment and not the other. I'd appreciate any suggestions.
* Updated *
Trust Relationship
{
"Version": "2008-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
The bucket policy grants access to the objects, but the identity that is uploading the files also needs permission to put objects into the bucket.
For the EC2 instance, can you confirm the AWS credentials inside the machine environment, or check whether a role attached to the instance allows putting objects into the bucket?
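For example, from inside the Test environment's instance you can check which identity the CLI/SDK is actually using and try the upload directly (a minimal sketch; the bucket and key are taken from the policy above):

# Which identity are API calls made as in this environment?
aws sts get-caller-identity

# Which role, if any, is exposed by the instance profile? (IMDSv1; with IMDSv2 a session token is required first)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Can that identity actually write the object the app writes?
echo test > /tmp/test.txt
aws s3 cp /tmp/test.txt s3://dev/devkey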
Related
I have access to one of two AWS environments, and in it I've created a protected S3 bucket that needs to be uploaded to from an account in the environment I do not have access to. That environment and account are what a project's CI uses.
environment I have access to: env1
environment I do not have access to: env2
account I do not have access to: user/ci
bucket name: content
S3 bucket policy:
{
"Version": "2008-10-17",
"Id": "PolicyForCloudFrontPrivateContent",
"Statement": [
{
...
},
{
"Sid": "Allow access to bucket from profile in env1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::111122223333:user/ci"
},
"Action": [
"s3:GetBucketLocation",
"s3:ListBucket*"
],
"Resource": "arn:aws:s3:::content"
},
{
"Sid": "Allow access to bucket items from profile in env1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::111122223333:user/ci"
},
"Action": [
"s3:Get*",
"s3:PutObject",
"s3:ListMultipartUploadParts"
],
"Resource": [
"arn:aws:s3:::content",
"arn:aws:s3:::content/*"
]
}
]
}
From inside a container that's configured for env1 and user/ci I'm testing with the command
aws s3 sync content/ s3://content/
and I get the error:
fatal error: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
I have two questions:
Am I even using the correct aws command to upload the data to the bucket?
Am I missing something from my bucket policy?
For the latter, I've basically followed what a load of examples and answers online have suggested.
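To narrow down which API call is being denied, the sync can also be broken into explicit s3api calls (a sketch; bucket and key names follow the example above):

# aws s3 sync lists the destination first; this is the ListObjectsV2 call reported in the error:
aws s3api list-objects-v2 --bucket content
# a single upload exercises only s3:PutObject on the object ARN:
aws s3api put-object --bucket content --key test.txt --body test.txt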
To test your policy, I did the following:
Created an IAM User with no policies
Created an Amazon S3 bucket
Attached your Bucket Policy to the bucket, and updated the ARN and bucket name
Tested access to the bucket with:
aws s3 ls s3://bucketname
aws s3 sync folder/ s3://bucketname/folder/
It worked fine.
Therefore, the policy you display appears to be granting all the necessary permissions. It is possible that something else is denying access on the bucket.
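One way to look for a conflicting control is to check the bucket's public access block settings and any other policies attached to the bucket or the IAM user (bucket and user names here are placeholders):

aws s3api get-public-access-block --bucket bucketname
aws s3api get-bucket-policy --bucket bucketname
aws iam list-attached-user-policies --user-name test-user
aws iam list-user-policies --user-name test-user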
The solution was to grant the following IAM policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::content",
"arn:aws:s3:::content/*"
]
}
]
}
to user/ci in env1.
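For reference, an inline policy like this can be attached to the CI user from the CLI roughly as follows (the user name, policy name, and file name are assumptions):

aws iam put-user-policy \
  --user-name ci \
  --policy-name content-bucket-write \
  --policy-document file://content-write-policy.json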
I am stuck provisioning end-user access to a cross-account shared bucket, and need help figuring out whether there are specific policy requirements for using GUI clients to access the bucket, versus the straight CLI.
IAM User Accounts are managed in our "Core" AWS Account.
S3 Bucket is provisioned in our "Dev" AWS Account.
S3 Bucket in Dev account is encrypted with KMS key in Dev Account.
We have configured our Bucket Policy to permit the user access.
We have configured user policies to permit access to the S3 bucket.
We have configured user policies to permit use of the KMS key.
When using the CLI, our user account can successfully access and use the S3 bucket. When attempting to connect with a GUI client (WinSCP, Cyberduck, Mac ForkLift), we receive permission denied errors.
BUCKET POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListObjectsInBucket",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::[DEVACCOUNT#]:role/EC2-ROLE-FOR-APP-ACCESS",
"arn:aws:iam::[COREACCOUNT#]:user/end.user"
]
},
"Action": "s3:List*",
"Resource": [
"arn:aws:s3:::dev-mybucket",
"arn:aws:s3:::dev-mybucket/*"
]
},
{
"Sid": "AllObjectActions",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::[DEVACCOUNT#]:role/EC2-ROLE-FOR-APP-ACCESS",
"arn:aws:iam::[COREACCOUNT#]:user/end.user"
]
},
"Action": [
"s3:GetObject",
"s3:Put*"
],
"Resource": "arn:aws:s3:::dev-mybucket/*"
}
]
}
User Policy - access KMS
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowUseOfDevAPPSKey",
"Effect": "Allow",
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:Describe*"
],
"Resource": [
"arn:aws:kms:ca-central-1:[DEVACCOUNT#]:key/[redacted-key-number]"
]
},
{
"Sid": "AllowAttachmentOfPersistentResources",
"Effect": "Allow",
"Action": [
"kms:CreateGrant",
"kms:List*",
"kms:RevokeGrant"
],
"Resource": [
"arn:aws:kms:ca-central-1:[DEVACCOUNT#]:key/[redacted-key-number]"
],
"Condition": {
"Bool": {
"kms:GrantIsForAWSResource": true
}
}
}
]
}
User policy - Access S3 Bucket
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAccessToMyBucket",
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::dev-mybucket/",
"arn:aws:s3:::dev-mybucket/*"
]
}
]
}
From aws s3 commands we can 'ls' content and 'cp' content from local to remote and from remote to local.
When configuring access with the GUI Clients we always receive somewhat generic 'permission denied' or 'access denied' type errors.
The GUI client is probably making a call that is not List*, Put* or GetObject.
For example, it might be calling GetObjectVersion, GetObjectAcl or GetBucketAcl.
Try adding Get* permissions in addition to List*.
You might also be able to look at the events in your AWS CloudTrail trail to see what specific API calls were denied.
For details, see: Specifying Permissions in a Policy - Amazon Simple Storage Service
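If a CloudTrail trail is enabled, recent S3 calls and their outcomes can also be pulled from the CLI (a sketch; note that lookup-events only returns management events, so object-level calls such as GetObjectAcl only appear if you inspect the trail's data events):

aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventSource,AttributeValue=s3.amazonaws.com \
  --max-results 20 \
  --query 'Events[].{time:EventTime,name:EventName,user:Username}'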
Access to an S3 bucket via a GUI, such as the AWS web console or SFTP clients with S3 functionality (FileZilla, Cyberduck, ForkLift, etc.), requires the s3:ListAllMyBuckets action in a policy attached to that IAM user. This is unfortunate, as the user will now be able to see ALL your bucket names in that account, even if they only have read, write, and/or list access to a single bucket in that account.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operations.html
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html
One other option is to go to the bucket URL directly. The user/role will require access via that bucket's Bucket policy.
https://s3.console.aws.amazon.com/s3/buckets/dev-mybucket
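The difference is easy to see from the CLI: listing all buckets calls ListBuckets (the API behind s3:ListAllMyBuckets), while listing a single bucket only needs s3:ListBucket on that bucket:

aws s3 ls                      # ListBuckets: needs s3:ListAllMyBuckets
aws s3 ls s3://dev-mybucket/   # ListObjectsV2: needs s3:ListBucket on dev-mybucket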
I have an AWS account with read/write permissions as shown below:
I'd like to make it so that an IAM user can download files from an S3 bucket, but I'm getting access denied when executing aws s3 sync s3://<bucket_name> . I have tried various things, but to no avail. Some of the steps I took:
Created a user called s3-full-access
Executed aws configure in my CLI and entered the generated access key id and secret access key for the above user
Created a bucket policy (shown below) that I hoped would grant access to the user created in the first step.
My bucket has a folder named AffectivaLogs in which files were being added anonymously by various users. It seems that although the bucket is public, the folder inside it is not, and I am not even able to make it public; this leads to the following error.
Following are the public access settings:
Update: I updated the bucket policy as follows, but it doesn't work.
To test the situation, I did the following:
Created an IAM User with no attached policies
Created an Amazon S3 bucket
Turned off S3 block public access settings:
Block new public bucket policies
Block public and cross-account access if bucket has public policies
Added a Bucket Policy granting s3:* access to the contents of the bucket for the IAM User
I then ran aws s3 sync and got Access Denied.
I then modified the policy to also permit access to the bucket itself:
{
"Id": "Policy",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "statement",
"Action": "s3:*",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::my-bucket/*",
"arn:aws:s3:::my-bucket"
],
"Principal": {
"AWS": [
"arn:aws:iam::123456789012:user/stack-user"
]
}
}
]
}
This worked.
Bottom line: Also add permissions to access the bucket, in addition to the contents of the bucket. (I suspect it is because aws s3 sync requires listing of bucket contents, in addition to accessing the objects themselves.)
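A quick way to see the difference (bucket and key names are placeholders): object-level actions succeed with only the /* resource, while listing needs the bucket ARN itself, and listing is the first thing aws s3 sync does:

aws s3api get-object --bucket my-bucket --key some-file.txt some-file.txt   # object ARN is enough
aws s3 ls s3://my-bucket/                                                   # also needs arn:aws:s3:::my-bucket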
If you have KMS encryption enabled on the bucket, you should also add a policy that allows you to decrypt data using the KMS key.
You can configure the S3 bucket policy with the required principal:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListBucket",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::accountId:user/*
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::bucket"
},
{
"Sid": "GetObjects",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::accountId:user/*
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucket/*"
}
]
}
Or you can create an IAM policy and attach it to the role:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListBucket",
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": "arn:aws:s3:::bucket"
},
{
"Sid": "GetObject",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::bucket/*"
}
]
}
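Such an identity-based policy can be attached to the role as an inline policy from the CLI, for example (role, policy, and file names are placeholders):

aws iam put-role-policy \
  --role-name my-ec2-role \
  --policy-name bucket-read-access \
  --policy-document file://bucket-read-policy.json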
Suppose I have an S3 bucket whose objects have "Everyone: Read" permission. The bucket itself is not public, but anyone can access the objects by typing their URL in the browser. Now I want to remove this browser URL access. One option is to go to each image and remove "Read" from the "Everyone" section, but since there is a huge number of images, this is not feasible.
So can I add a bucket policy that allows access only from one IAM user and not from the browser? I tried adding a bucket policy that allows access to all resources for only a specific user, but the images are still accessible by browsing to their URLs. Any thoughts?
Edit: Adding policy that I tried
{
"Id": "Policy1",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1",
"Action": "s3:*",
"Effect": "Allow",
"Resource": "arn:aws:s3:::test-bucket-public-issue",
"Principal": {
"AWS": [
"arn:aws:iam::AccounId:user/Username"
]
}
}
]
}
OK @Himanshu Mohan, I will explain what I have done. I created an S3 bucket and then added the bucket policy below:
{
"Version": "2012-10-17",
"Id": "Policy1534419239074",
"Statement": [
{
"Sid": "Stmt1534419237657",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::xxx-xxx-test/*"
}
]
}
When this policy is added, the bucket automatically becomes public.
Then I uploaded an image, as you described, and I was able to access that image via the browser.
Then I changed the policy back to the one you posted.
Now I was not able to access the image; it returns the access denied XML response. The only difference I see is that I added /* after the bucket name: "Resource": "arn:aws:s3:::xxx-xxx-test/*".
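Since the original concern was the large number of images that already carry an "Everyone: Read" ACL, those object ACLs can also be reset in bulk from the CLI instead of one by one (a sketch; the bucket name is taken from the question):

aws s3api list-objects-v2 --bucket test-bucket-public-issue --query 'Contents[].Key' --output text \
  | tr '\t' '\n' \
  | while read -r key; do
      aws s3api put-object-acl --bucket test-bucket-public-issue --key "$key" --acl private
    done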
I want to grant my EC2 instance access to an S3 bucket.
On this EC2 instance, a container with my application is launched. Right now I don't get permission on the S3 bucket.
This is my bucket policy:
{
"Version": "2012-10-17",
"Id": "Policy1462808223348",
"Statement": [
{
"Sid": "Stmt1462808220978",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::714656454815:role/ecsInstanceRole"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucket-name/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "private-ip/32"
}
}
}
]
}
But it doesn't work unless I give the bucket permission for everyone to access it.
I tried to curl the file in the S3 bucket from inside the EC2 instance, but this doesn't work either.
As of now (2019), there is a much easier and cleaner way to do it (the credentials never have to be stored on the instance; it can query them automatically):
create an IAM role for your instance and assign it to the instance
create a policy that grants access to your S3 bucket
attach the policy to the instance's IAM role
upload/download objects, e.g. via the AWS CLI for S3: aws s3 cp <S3Uri> <LocalPath>
For step 2, an example of a JSON policy to allow read and write access to objects in an S3 bucket is:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListObjectsInBucket",
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": ["arn:aws:s3:::bucket-name"]
},
{
"Sid": "AllObjectActions",
"Effect": "Allow",
"Action": "s3:*Object",
"Resource": ["arn:aws:s3:::bucket-name/*"]
}
]
}
You have to adjust the allowed actions and replace "bucket-name".
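For the other steps (creating the role and attaching it to the instance), the wiring can be done with the CLI roughly as follows (a sketch with placeholder names; the trust policy file must allow ec2.amazonaws.com to assume the role):

aws iam create-role --role-name my-app-s3-role \
  --assume-role-policy-document file://ec2-trust-policy.json
aws iam put-role-policy --role-name my-app-s3-role \
  --policy-name s3-bucket-access --policy-document file://s3-policy.json
aws iam create-instance-profile --instance-profile-name my-app-s3-profile
aws iam add-role-to-instance-profile --instance-profile-name my-app-s3-profile \
  --role-name my-app-s3-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=my-app-s3-profile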
There is no direct way of granting the EC2 instance itself access to the AWS services, but you can try the following.
Create a new user in AWS IAM, and download the credentials file.
This user will represent your EC2 server.
Provide the user with permissions to your S3 Bucket.
Next, place the credentials file in the following location:
EC2 - Windows Instance:
a. Place the credentials file anywhere you wish. (e.g. C:/credentials)
b. Create an environment variable AWS_CREDENTIAL_PROFILES_FILE and put the value as the path where you put your credentials file (e.g. C:/credentials)
EC2 - Linux Instance
a. Follow steps from windows instance
b. Create a folder .aws inside your app-server's root folder (e.g. /usr/share/tomcat6).
c. Create a symlink between the file referenced by your environment variable and your .aws folder:
sudo ln -s $AWS_CREDENTIAL_PROFILES_FILE /usr/share/tomcat6/.aws/credentials
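The credentials file itself (the one AWS_CREDENTIAL_PROFILES_FILE points at) uses the standard AWS credentials format; a minimal sketch using the documentation's placeholder keys:

cat > "$AWS_CREDENTIAL_PROFILES_FILE" <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF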
Now that your credentials file is placed, you can use Java code to access the bucket.
NOTE: AWS-SDK libraries are required for this
AWSCredentials credentials = null;
try {
    credentials = new ProfileCredentialsProvider().getCredentials();
} catch (Exception e) {
    LOG.error("Unable to load credentials " + e);
    failureMsg = "Cannot connect to file server.";
    throw new AmazonClientException(
            "Cannot load the credentials from the credential profiles file. " +
            "Please make sure that your credentials file is at the correct " +
            "location (environment variable : AWS_CREDENTIAL_PROFILES_FILE), and is in valid format.",
            e);
}

AmazonS3 s3 = new AmazonS3Client(credentials);
Region usWest2 = Region.getRegion(Regions.US_WEST_2);
s3.setRegion(usWest2);

ObjectListing objectListing = s3.listObjects(
        new ListObjectsRequest().withBucketName(bucketName).withPrefix(prefix));
Where bucketName = [Your Bucket Name]
and prefix = [your folder structure inside your bucket, where your file(s) are contained]
Hope that helps.
Also, if you are not using Java, you can check out AWS-SDKs in other programming languages too.
I found it out...
It only works with the public IP of the EC2 instance.
Try this:
{
"Version": "2012-10-17",
"Id": "Policy1462808223348",
"Statement": [
{
"Sid": "Stmt1462808220978",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::bucket-name/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "yourIp/24"
}
}
}
]
}
I faced the same problem. I finally resolved it by creating an access point for the bucket in question using the AWS CLI (see https://docs.aws.amazon.com/AmazonS3/latest/dev/creating-access-points.html), and I then created a bucket policy like the following:
{
"Version": "2012-10-17",
"Id": "Policy1583357393961",
"Statement": [
{
"Sid": "Stmt1583357315674",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<account-id>:role/ecsInstanceRole"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::<your-bucket>"
},
{
"Sid": "Stmt1583357391961",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<account-id>:role/ecsInstanceRole"
},
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::<your-bucket>/*"
}
]
}
Please make sure you are using a newer version of the AWS CLI (1.11.xxx didn't work for me). I finally installed version 2 of the CLI to get this to work.
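For reference, the access point itself can be created with a command along these lines (the account ID and access point name are placeholders):

aws s3control create-access-point \
  --account-id 123456789012 \
  --name my-bucket-ap \
  --bucket <your-bucket>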