For testing purposes I have uploaded a number of files to a folder in an S3 bucket with the "any AWS user" ACL. Now I want to change the ACL to "private" for all the files in that folder. I find that this can be done easily with a third-party tool called s3cmd, but I have no permission to use third-party tools. So is there any way to do it from the AWS console (other than programmatically iterating over each file and setting the ACL)? I am using the PHP APIs. Or is there any way to set the ACL recursively through the AWS CLI?
You can refer to the AWS CLI documentation for put-bucket-acl, adding --acl private. Note that this sets the ACL on the bucket itself; objects keep their own ACLs.
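To reset the ACL on every object already under the folder, one common approach is an in-place copy that applies a new canned ACL as it rewrites each object. A minimal sketch, with the bucket and folder names from the question as placeholders:
aws s3 cp s3://bucket_name/foldername/ s3://bucket_name/foldername/ --recursive --acl private
If S3 rejects the copy-onto-itself as a no-op, adding --metadata-directive REPLACE usually gets it through.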
Alternatively, you can use a bucket policy to enforce private access to the bucket:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PrivateAclPolicy", "Effect": "Deny",
"Principal": { "AWS": "*"},
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::bucket_name/foldername/*"
],
"Condition": {
"StringNotEquals": {
"s3:x-amz-acl": [
"private"
]
}
}
}
]
}
Replace bucket_name and foldername with the name of your bucket and folder.
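If you prefer to stay on the command line, the policy can be attached with put-bucket-policy. A sketch, assuming the JSON above is saved as policy.json (a hypothetical filename):
aws s3api put-bucket-policy --bucket bucket_name --policy file://policy.json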
I have access to one of two AWS environments, and I've created a protected S3 bucket in it that files are uploaded to from an account in the environment I do not have access to. That environment and account are what a project's CI uses.
environment I have access to: env1
environment I do not have access to: env2
account I do not have access to: user/ci
bucket name: content
S3 bucket policy:
{
"Version": "2008-10-17",
"Id": "PolicyForCloudFrontPrivateContent",
"Statement": [
{
...
},
{
"Sid": "Allow access to bucket from profile in env1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::111122223333:user/ci"
},
"Action": [
"s3:GetBucketLocation",
"s3:ListBucket*"
],
"Resource": "arn:aws:s3:::content"
},
{
"Sid": "Allow access to bucket items from profile in env1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::111122223333:user/ci"
},
"Action": [
"s3:Get*",
"s3:PutObject",
"s3:ListMultipartUploadParts"
],
"Resource": [
"arn:aws:s3:::content",
"arn:aws:s3:::content/*"
]
}
]
}
From inside a container that's configured for env1 and user/ci, I'm testing with the command
aws s3 sync content/ s3://content/
and I get the error:
fatal error: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
I have two questions:
Am I even using the correct aws command to upload the data to the bucket?
Am I missing something from my bucket policy?
For the latter, I've basically followed what a load of examples and answers online have suggested.
To test your policy, I did the following:
Created an IAM User with no policies
Created an Amazon S3 bucket
Attached your Bucket Policy to the bucket, and updated the ARN and bucket name
Tested access to the bucket with:
aws s3 ls s3://bucketname
aws s3 sync folder/ s3://bucketname/folder/
It worked fine.
Therefore, the policy you display appears to be giving all necessary permissions. It is possible that you have something else that is Denying access on the bucket.
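If you suspect a competing Deny, one way to hunt for it from the CLI is the IAM policy simulator. A sketch using the account and user from the question (the ARNs are placeholders):
aws iam simulate-principal-policy --policy-source-arn arn:aws:iam::111122223333:user/ci --action-names s3:ListBucket --resource-arns arn:aws:s3:::content
An explicitDeny in the evaluation results points at whichever policy is blocking the call.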
The solution was to grant the following IAM policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::content",
"arn:aws:s3:::content/*"
]
}
]
}
to user/ci in env1.
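For reference, attaching such an inline policy from the CLI would look something like this (the policy name and file name are hypothetical):
aws iam put-user-policy --user-name ci --policy-name content-bucket-access --policy-document file://policy.json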
I'm using Amazon S3 to store images in a bucket, and CloudFront to get and post those pictures. My problem is that every time I upload a new image, it's automatically private (trying to get it results in a 403 Forbidden). To be able to get it and show it on my website, I have to make my folder public again (after having already done so). Do you have any idea why this behaviour occurs?
My bucket is public and here are my IAM permissions:
// First strategy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:HeadBucket",
"s3:GetBucketAcl",
"s3:HeadObject",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutObjectVersionAcl",
"s3:DeleteObject",
"s3:DeleteObjectVersion"
],
"Resource": "arn:aws:s3:::my-bucket-name/*"
}
]
}
// Second strategy
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:ListAllMyBuckets"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::*"
},
{
"Action": [
"acm:ListCertificates",
"cloudfront:*",
"iam:ListServerCertificates",
"waf:ListWebACLs",
"waf:GetWebACL",
"wafv2:ListWebACLs",
"wafv2:GetWebACL"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
"my-bucket-name" is obviously replaced by the actual name of the bucket.
Thank you.
Objects in Amazon S3 are private by default.
If you wish to make objects publicly accessible (to people without IAM credentials), you have two options:
Option 1: Bucket Policy
You can create a Bucket Policy that makes content publicly accessible. For example, this policy grants GetObject access to anyone:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::examplebucket/*"
]
}
]
}
See: Bucket Policy Examples - Amazon Simple Storage Service
You will also need to turn off Block Public Access on the S3 bucket to permit a public bucket policy.
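From the CLI, relaxing just the policy-related settings could look like the following sketch (examplebucket is a placeholder; the ACL-related blocks stay on):
aws s3api put-public-access-block --bucket examplebucket --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=false,RestrictPublicBuckets=false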
Option 2: Make the object public
You could alternatively make specific objects in Amazon S3 public.
When uploading an object, specify an Access Control List (ACL) of public-read. You will also need to turn off Block Public Access to permit the object-level permissions.
When you say "I have to make my folder public again", I suspect that you are going into the Amazon S3 console, selecting a folder and then using the Make public option. This is the equivalent of setting the ACL. You can avoid this extra step by specifying the ACL while uploading the object or using the Bucket Policy.
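For illustration, setting the ACL at upload time from the CLI looks like this (the file and bucket names are placeholders):
aws s3 cp image.jpg s3://examplebucket/image.jpg --acl public-read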
Create a Lambda trigger based on the prefix of your subfolder. Then, during the object-created event, update the permissions of the object to make it public.
See: https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
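The permission update the function performs boils down to setting the object's ACL; the CLI equivalent would be (the bucket and key stand in for the values taken from the event):
aws s3api put-object-acl --bucket examplebucket --key subfolder/image.jpg --acl public-read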
I have an S3 bucket to upload files in the following path syntax:
arn:aws:s3:::XXX/{generated string}/logs/{file}
I require the bucket's root listing to be private, and the directory listing of each of the logs folders to be public.
The folder names are generated dynamically, so it would not be practical to make each folder public manually.
I have attempted to use the S3 policy generator to produce the following policy:
{
"Id": "PolicyID",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ExampleStatement",
"Action": [
"s3:ListObjects"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::XXX",
"Condition": {
"StringLike": {
"s3:prefix": "*/logs"
}
},
"Principal": "*"
}
]
}
However, upon attempting to save this policy AWS throws a "policy has invalid action" error.
How can I create a policy to fit these requirements?
I am using an AWS S3 bucket to hold configuration files for Java AWS Lambdas. How do I configure the bucket to only allow access to any lambda function and nothing else?
You need to add an S3 bucket policy for account 123456789012:
{
"Id": "Policy1498253351771",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1498253327847",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::<bucket_name>/<prefix>",
"Principal": {
"AWS": [
"arn:aws:lambda:us-east-1:123456789012:function:*"
]
}
}
]
}
Above is a general policy covering all Lambda functions that use that execution role; note that a Lambda function ARN itself is not a valid principal in a bucket policy, so the grant goes to the function's IAM execution role.
If you need to generate a more granular policy for your use case, you can try the AWS Policy Generator.
I want to grant my EC2 instance access to an S3 bucket.
On this EC2 instance, a container with my application is launched. At the moment I don't get permission on the S3 bucket.
This is my bucket policy
{
"Version": "2012-10-17",
"Id": "Policy1462808223348",
"Statement": [
{
"Sid": "Stmt1462808220978",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::714656454815:role/ecsInstanceRole"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucket-name/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "private-ip/32"
}
}
}
]
}
But it doesn't work unless I give everyone permission to access the bucket.
I've tried to curl the file in the S3 bucket from inside the EC2 instance, but that doesn't work either.
At least as of now (2019), there is a much easier and cleaner way to do it (the credentials never have to be stored on the instance; instead, it can query them automatically):
create an IAM Role for your instance and assign it
create a policy to grant access to your s3 bucket
assign the policy to the instance's IAM role
upload/download objects, e.g. via the AWS CLI for S3: aws s3 cp <S3Uri> <LocalPath>
For step #2, an example of a JSON policy to allow read and write access to objects in an S3 bucket is shown below (a CLI sketch for steps #1 and #3 follows after it):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListObjectsInBucket",
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": ["arn:aws:s3:::bucket-name"]
},
{
"Sid": "AllObjectActions",
"Effect": "Allow",
"Action": "s3:*Object",
"Resource": ["arn:aws:s3:::bucket-name/*"]
}
]
}
You have to adjust the allowed actions, and replace "bucket-name" with your own bucket's name.
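For steps #1 and #3, the role and instance profile can be created and attached from the CLI as well. A rough sketch where the role, profile, and file names are placeholders, and trust-policy.json is a standard EC2 trust policy:
# create the role with an EC2 trust policy and attach the S3 access policy above
aws iam create-role --role-name s3-access-role --assume-role-policy-document file://trust-policy.json
aws iam put-role-policy --role-name s3-access-role --policy-name s3-access --policy-document file://s3-policy.json
# wrap the role in an instance profile and attach it to the running instance
aws iam create-instance-profile --instance-profile-name s3-access-profile
aws iam add-role-to-instance-profile --instance-profile-name s3-access-profile --role-name s3-access-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=s3-access-profile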
There is no direct way of granting an EC2 instance access to S3, but you can try the following.
Create a new user in AWS IAM, and download the credentials file.
This user will represent your EC2 server.
Provide the user with permissions to your S3 Bucket.
Next, place the credentials file in the following location:
EC2 - Windows Instance:
a. Place the credentials file anywhere you wish. (e.g. C:/credentials)
b. Create an environment variable AWS_CREDENTIAL_PROFILES_FILE and set its value to the path where you put your credentials file (e.g. C:/credentials)
EC2 - Linux Instance
a. Follow steps from windows instance
b. Create a folder .aws inside your app-server's root folder (e.g. /usr/share/tomcat6).
c. Create a symlink from the file referenced by your environment variable into your .aws folder:
sudo ln -s $AWS_CREDENTIAL_PROFILES_FILE /usr/share/tomcat6/.aws/credentials
Now that your credentials file is placed, you can use Java code to access the bucket.
NOTE: AWS-SDK libraries are required for this
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;

// Load credentials from the profile file pointed to by AWS_CREDENTIAL_PROFILES_FILE
AWSCredentials credentials = null;
try {
    credentials = new ProfileCredentialsProvider().getCredentials();
} catch (Exception e) {
    LOG.error("Unable to load credentials " + e);
    failureMsg = "Cannot connect to file server.";
    throw new AmazonClientException(
        "Cannot load the credentials from the credential profiles file. " +
        "Please make sure that your credentials file is at the correct " +
        "location (environment variable : AWS_CREDENTIAL_PROFILES_FILE), and is in valid format.",
        e);
}

// Create the S3 client and point it at the bucket's region
AmazonS3 s3 = new AmazonS3Client(credentials);
Region usWest2 = Region.getRegion(Regions.US_WEST_2);
s3.setRegion(usWest2);

// List the objects under the given prefix
ObjectListing objectListing = s3.listObjects(new ListObjectsRequest().withBucketName(bucketName).withPrefix(prefix));
Where bucketName = [Your Bucket Name]
and prefix = [your folder structure inside your bucket, where your file(s) are contained]
Hope that helps.
Also, if you are not using Java, you can check out AWS-SDKs in other programming languages too.
I figured it out: it only works with the public IP of the EC2 instance.
Try this:
{
"Version": "2012-10-17",
"Id": "Policy1462808223348",
"Statement": [
{
"Sid": "Stmt1462808220978",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::bucket-name/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "yourIp/24"
}
}
}
]
}
I faced the same problem. I finally resolved it by creating an access point for the bucket in question using the AWS CLI (see https://docs.aws.amazon.com/AmazonS3/latest/dev/creating-access-points.html), and then creating a bucket policy like the following:
{
"Version": "2012-10-17",
"Id": "Policy1583357393961",
"Statement": [
{
"Sid": "Stmt1583357315674",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<account-id>:role/ecsInstanceRole"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::<your-bucket>"
},
{
"Sid": "Stmt1583357391961",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<account-id>:role/ecsInstanceRole"
},
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::<your-bucket>/*"
}
]
}
Please make sure you are using a newer version of the AWS CLI (1.11.xxx didn't work for me); I finally installed version 2 of the CLI to get this to work.
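For reference, the access point itself can be created with a call along these lines (the account ID and access point name are placeholders):
aws s3control create-access-point --account-id 123456789012 --name my-access-point --bucket <your-bucket>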