When using the following bucket policy, I see that it restricts PUT access as expected - however, GET is allowed on the created object, even though nothing in the policy should allow that operation.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPut",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<BUCKET>/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "<IP ADDRESS>"
                    ]
                }
            }
        }
    ]
}
I am able to PUT files to <BUCKET> from <IP ADDRESS> using curl as follows:
curl https://<BUCKET>.s3-<REGION>.amazonaws.com/ --upload-file test.txt
The file uploads successfully, and appears in the S3 console. I am now for some reason able to GET the file from anywhere on the internet.
curl https://<BUCKET>.s3-<REGION>.amazonaws.com/test.txt -XGET
This only applies for files uploaded using the above method. When uploading a file in the S3 web console, I am not able to use curl to GET it (access denied). So I assume that it is an object level permission issue. Though I don't understand why the bucket policy would not implicitly deny this access.
When looking at the object level permissions in the console, the only differences between a file uploaded through the console (method 1), and one uploaded from the allowed <IP ADDRESS> (method 2) are that the file in method 2 does not have an 'Owner', Permissions, or Metadata - while the method 1 file has all of these.
Furthermore - when attempting to GET the objects using a Lambda script (boto3 download_file()) that assumes a role with full access to the bucket, it fails for objects uploaded with method 2, though it succeeds for objects uploaded with method 1.
Issue Summary
To summarise the issue:
you have a policy that permits anonymous upload of objects from a given source IP address
those objects are then not readable by your authenticated users (specifically an IAM role assumed by your Lambda function)
those objects ARE readable from ANY IP by unauthenticated users
Other observations
unauthenticated user is unable to delete the object
The desired outcome is:
objects can be uploaded by an unauthenticated user from a known IP address
objects are not then downloadable by unauthenticated users from any IP address
objects are retrievable by an authenticated IAM user
Root Cause
Here is what's happening:
The anonymous user uploads the object
The anonymous user becomes the object owner
This is verifiable by retrieving the object ACL (do a GET request for the object with the query string ?acl) - you will receive:
<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Owner>
        <ID>65a011a29cdf8ec533ec3d1ccaae921c</ID>
    </Owner>
    <AccessControlList>
        <Grant>
            <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
                <ID>65a011a29cdf8ec533ec3d1ccaae921c</ID>
            </Grantee>
            <Permission>FULL_CONTROL</Permission>
        </Grant>
    </AccessControlList>
</AccessControlPolicy>
The owner ID shown is the well-known canonical ID of the anonymous user - I have seen the same ID referenced in several AWS forum discussions.
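For reference, the same ACL check can be done programmatically. A minimal boto3 sketch using unsigned (anonymous) requests - the bucket and key names are placeholders:

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Unsigned client: the request is made as the anonymous user, who in this
# scenario owns the object and therefore holds FULL_CONTROL (including READ_ACP)
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Equivalent of a GET request for the object with the ?acl query string
acl = s3.get_object_acl(Bucket="<BUCKET>", Key="test.txt")
print(acl["Owner"])   # canonical ID of the anonymous user
print(acl["Grants"])  # the FULL_CONTROL grant shown in the XML above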
Being the object owner has the following impact:
Anonymous user has FULL_CONTROL (see ACL above)
Anonymous user is unable to Delete - this appears to be an AWS blanket rule that cannot be changed - the anonymous user is never allowed to delete anything, even if they have FULL_CONTROL
Anonymous user is, however, able to PUT an empty object over the top of the existing object, as a result of FULL_CONTROL
When a bucket contains an object owned by a user who is not part of the bucket's account:
Bucket owner has no permission on the object (not referenced in the ACL)
Bucket owner is not able to read the object
Bucket owner is able to see the object in a bucket list operation, due to the bucket ACL
Bucket owner is able to delete the object - this is a blanket rule that cannot be changed - as the person paying the bill, you always reserve the right to delete the object - even if you can't read it
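To see this in practice, here is a small boto3 sketch run with the bucket owner's credentials (bucket and key are placeholders) - the GET is expected to fail while the DELETE succeeds:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")  # bucket owner's credentials

try:
    s3.get_object(Bucket="<BUCKET>", Key="test.txt")
except ClientError as e:
    # Expected: AccessDenied - the bucket owner has no grant in the object ACL
    print("GET failed:", e.response["Error"]["Code"])

# The bucket owner can still delete the object it cannot read
s3.delete_object(Bucket="<BUCKET>", Key="test.txt")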
Resolution
There is a way to achieve your desired outcome - unfortunately you have to reference the ARN of the specific IAM entity (user, role, group) that you want to be able to read the objects in the bucket policy.
The key elements of the solution are:
Require the anonymous user to grant the bucket owner full access
This ensures the bucket owner and the owner account's IAM users aren't denied access by the object ACL
Explicitly deny all non-PUT access to all users who aren't your nominated user/role
This ensures anonymous users can't read the object
Sample policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "allow-anonymous-put",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<BUCKETNAME>/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "<IPADDRESS>"
                },
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        },
        {
            "Sid": "deny-not-my-user-everything-else",
            "Effect": "Deny",
            "NotPrincipal": {
                "AWS": "arn:aws:iam::<ACCOUNTNUMBER>:role/<ROLENAME>"
            },
            "NotAction": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::<BUCKETNAME>/*"
        }
    ]
}
The key to the second statement is the use of NotPrincipal and NotAction.
I've tested this locally, but only with a regular IAM user granted access, not with a Lambda function assuming a role - but the principle should hold. Good luck!
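With that policy in place, the anonymous upload must include the x-amz-acl: bucket-owner-full-control header (with curl that means adding -H "x-amz-acl: bucket-owner-full-control") or the PUT is rejected. A minimal boto3 sketch of such an anonymous upload, with placeholder names:

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Unsigned client - the request is made anonymously, as in the original curl upload
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# ACL='bucket-owner-full-control' sets the x-amz-acl header that the
# policy's StringEquals condition requires
s3.put_object(
    Bucket="<BUCKETNAME>",
    Key="test.txt",
    Body=b"hello",
    ACL="bucket-owner-full-control",
)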
The following articles were helpful in understanding what was going on - each presents a scenario similar to, but not quite the same as, yours, and the methods they used to tackle their scenarios led the way:
http://jayendrapatil.com/aws-s3-permisions/
http://prettyplease.me/anonymous-s3-upload-with-full-owner-control/
https://gist.github.com/jareware/d7a817a08e9eae51a7ea
Related
I would like to add an image upload possibility for my users.
So far I've followed a simple YouTube tutorial and created a new bucket with the following Bucket policy:
{
    "Version": "2012-10-17",
    "Id": "Policy1578265217545",
    "Statement": [
        {
            "Sid": "statement-1",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/images/*"
        }
    ]
}
And the following CORS policy:
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "PUT",
            "POST",
            "DELETE",
            "HEAD"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
I've also created an IAM user, and attached the following policy to it:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "statement1",
            "Effect": "Allow",
            "Action": [
                "s3:Put*",
                "s3:Get*",
                "s3:Delete*"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket/*"
            ]
        }
    ]
}
I got my access and secret keys, which I successfully used to upload and delete files – success.
I have a strong feeling the above policies are not really secure at the moment (e.g. I'm planning to make the CORS policy stricter by only allowing the bucket to be accessed from a certain domain).
My main question now is – How can I make sure that if user A uploads his image, no other user (until allowed) can access it?
I think this would be possible if each user of the application had an IAM user account in AWS. Then you could restrict the images to the corresponding AWS IAM user. But I believe this is probably not the case.
Something better would be, instead of accessing the images directly on AWS, to access the images via your application. You could have a table storing each image's path in the bucket, the corresponding owner(s), and a flag indicating whether the image can be accessed publicly or not.
Then, when you need a specific image, you would make a request to your application, which would check whether the user making the request is the owner of the image; if so, the application would download the image from AWS using the AWS S3 SDK and send it over to the user.
This approach decouples AWS from your end users, and your app is responsible for managing who can access what. Given that every request to AWS passes through your app, there is less risk of compromising the AWS infrastructure in place.
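A rough sketch of that flow in Python with boto3 - the ownership table, bucket name, and keys are all hypothetical, just to illustrate the idea:

import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"  # placeholder

# Hypothetical ownership table; in practice this would be a database lookup
image_owners = {"images/cat.jpg": "user-123"}

def get_image(requesting_user_id, image_key):
    # The application, not the end user, decides who may see what
    if image_owners.get(image_key) != requesting_user_id:
        raise PermissionError("not the owner of this image")
    # The application holds the AWS credentials and fetches the object itself
    obj = s3.get_object(Bucket=BUCKET, Key=image_key)
    return obj["Body"].read()  # bytes to send back in the HTTP response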
Object tagging and attribute-based access control could be used for conditional access to different objects.
Use case: Application not supporting individual IAM users:
Objects are assigned an ownerID tag with an ID value,
Users are assigned a UUID, or their profile has a tag with some kind of ID value, and
The API function used to fetch objects compares the object tag and the user ID/tag and retrieves only objects with matching values (see the sketch below)
Use case: Application supporting AWS IAM users / SSO users:
Objects are assigned a tag with an appropriate value (ID, department, etc.),
AWS users are assigned a tag with an appropriate value (ID, department, etc.),
An IAM role and an access control policy are created to allow conditional access depending on tag values
https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging-and-policies.html
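A sketch of the first use case with boto3 - the tag name, IDs, bucket, and keys are placeholders rather than anything prescribed:

import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"  # placeholder

# Tag the object with its owner's ID at upload time
s3.put_object(Bucket=BUCKET, Key="images/photo1.jpg", Body=b"...",
              Tagging="ownerID=user-123")

def fetch_if_owner(user_id, key):
    # Compare the object's ownerID tag with the requesting user's ID
    tags = s3.get_object_tagging(Bucket=BUCKET, Key=key)["TagSet"]
    owner = next((t["Value"] for t in tags if t["Key"] == "ownerID"), None)
    if owner != user_id:
        return None
    return s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()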
I am trying to understand the need for the condition "home/${aws:userid}/*". This condition actually feels like it is already satisfied by the "arn:aws:s3:::bucket-name/home/${aws:userid}/*" resource.
When we have already allowed all S3 operations for that user in the third statement, why do we need to allow s3:ListBucket specifically for that user?
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::bucket-name",
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "",
                        "home/",
                        "home/${aws:userid}/*"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::bucket-name/home/${aws:userid}",
                "arn:aws:s3:::bucket-name/home/${aws:userid}/*"
            ]
        }
    ]
}
ref https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_federated-home-directory-console.html
s3:ListBucket is a bucket-level permission, while arn:aws:s3:::bucket-name/home/${aws:userid}/* is a wildcard object ARN, not a bucket ARN.
An attempt to grant s3:ListBucket will not match any Resource that isn't a bucket ARN, so the s3:* grant -- which only includes object ARNs -- does not actually allow object listings.
So this example policy does not contain any redundancy.
If this implementation still seems a bit counter-intuitive or perhaps convoluted, it does become somewhat clearer if you consider how the S3 API works on the wire. The ListObjects API action (as well as the newer ListObjectsV2) is submitted against the bucket -- there's no path in the request... or, more precisely, the path in the HTTP request is always¹ /... and the query string contains prefix= and the object key prefix where the requested listing is to be anchored.
While there's no compulsory correlation between the way the underlying API works and the way IAM policies work, it does make sense that the s3:prefix condition key is what's used to control use of the prefix parameter to ListObjects, instead of an object-level ARN, and that the bucket -- not an object key or wildcard pattern -- is the resource being accessed.
¹ always / except when it's /${bucket} as required by the old deprecated path-style URLs that are finally being phased out after a false start or two, at least for new buckets. The resource as expressed in the path component of the request URI is always the bucket itself, not the bucket plus a key prefix.
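For illustration, this is what the listing looks like with boto3 - the call targets the bucket, and the key prefix is just a parameter (bucket name and user ID are placeholders):

import boto3

s3 = boto3.client("s3")

# ListObjectsV2 is a bucket-level call; the prefix travels as a query parameter,
# which is why the policy uses the s3:prefix condition key against the bucket ARN
resp = s3.list_objects_v2(Bucket="bucket-name", Prefix="home/AIDAEXAMPLEUSERID/")
for obj in resp.get("Contents", []):
    print(obj["Key"])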
The resources are different. In the third statement, the user can only access bucket-name/home/${aws:userid}. This means that when the user goes into the S3 console and clicks the bucket bucket-name, they will get access denied. So the user won't be able to list the bucket's content, and will not see that there is a home folder there. They will also not see that in bucket-name/home there is a folder with their username.
Thus, to overcome this issue, there is the second statement, which allows listing the content of `bucket-name` and then `bucket-name/home`. This way users can navigate easily in the S3 console to get to their actual home folder.
Without the second statement, users would have to type the URL of their home folder into the browser to go directly to it, which is not very user friendly.
I'm trying to use the react-native-s3-upload package to upload files to an S3 bucket in my React Native App. This only works if I set "Block public access" to 'off' in S3. Otherwise I get <Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>. The access key and secret key provided with put requests are for an IAM user that belongs to a group with AmazonS3FullAccess. I also have this policy attached to the bucket:
{
    "Version": "2012-10-17",
    "Id": "Policyxxxxxxx",
    "Statement": [
        {
            "Sid": "Stmtxxxxxxx",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::xxxxxxx:user/<user name>"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::<bucket name>"
        }
    ]
}
I've tried all sorts of solutions but nothing seems to work. If I replace the secret key and access keys with dummy text then it returns <Error><Code>InvalidAccessKeyId</Code><Message> so it's definitely signing me in with the keys but seems to be ignoring the permissions.
If your IAM user has the AmazonS3FullAccess policy, it should be able to connect to the bucket just fine.
I think the problem is that the package's default object ACL is public:
https://www.npmjs.com/package/react-native-s3-upload
acl - The Access Control List of this object. Defaults to public-read
You need to set it to private.
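If some objects were already uploaded with the public-read ACL, here is a boto3 sketch for resetting one to private (bucket and key are placeholders):

import boto3

s3 = boto3.client("s3")

# Replace the object's ACL so that only the owner account can read it
s3.put_object_acl(Bucket="<bucket name>", Key="path/to/file", ACL="private")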
I have a cloudformation template up in an S3 bucket (the url follows the pattern but is not exactly equal to: https://s3.amazonaws.com/bucket-name/cloudform.yaml). I need to be able to access it from CLI for a bash script. I'd prefer that everybody in an organization (all in this single account) has access to this template but other people outside of the organization don't have access to the template. A bucket policy I've tried looks like:
{
    "Version": "2012-10-17",
    "Id": "Policy11111111",
    "Statement": [
        {
            "Sid": "Stmt111111111",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::7777777777:root"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-name/*"
        }
    ]
}
With this policy, I and a couple other people in my office are unable to access the url. Even when I'm logged in with the root account I'm getting Access Denied.
Also, this change (only setting Principal to *) makes the bucket accessible to anybody:
{
    "Version": "2012-10-17",
    "Id": "Policy11111111",
    "Statement": [
        {
            "Sid": "Stmt111111111",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-name/*"
        }
    ]
}
Obviously the signs point to my Principal field being misconfigured. 777777777 is the replacement for the Account ID I see under the My Account page.
So, do I need to worry about this on the IAM front? Considering that I am logged in as the root user, I'd guess I should have access to this as long as I put in a bucket policy. Any help would be much appreciated.
Short and sweet:
The bucket policy doesn't allow you to do what you want because of a wildcard limitation of the Principal element. Your best bet is to create an IAM group and put all IAM users into that group if they need access.
Long version:
Just to make it clear, any request to https://s3.amazonaws.com/bucket-name/cloudform.yaml MUST be signed and have the necessary authentication parameters or the request will be rejected with Access Denied. The only exception is if the bucket policy or the bucket ACL allows public access, but it doesn't sound like this is what you want.
When you say "everybody in an organization (all in this single account)" I assume you mean IAM users under the account who are accessing the file from the AWS console, or IAM users who are using some other code or tool (e.g. AWS CLI) to access the file.
So it sounds like what you want is the ability to specify the Principal as
"Principal": {
"AWS": "arn:aws:iam::777777777777:user/*"
}
since that is what the pattern would be for any IAM user under the 777777777777 account id. Unfortunately this is not allowed because no wildcard is allowed in the Principal unless you use the catch-all wildcard "*". In other words "*" is allowed, but either "prefix*" or "*suffix" is not. (I wish AWS documented this better.)
You could specify every IAM user you have in the bucket policy like so:
"Principal": {
"AWS": [
"arn:aws:iam::777777777777:user/alice",
"arn:aws:iam::777777777777:user/bob",
"arn:aws:iam::777777777777:user/carl",
...
"arn:aws:iam::777777777777:user/zed",
}
But you probably don't want to update the policy for every new user.
It would be easiest to create an IAM group that grants access to that file. Then you would add all IAM users to that group. If you add new users then you'll have to manually add them to that group, so it is not as convenient as what you originally wanted.
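For example, a boto3 sketch of the group approach - the group name, policy name, bucket, and user name are placeholders:

import json
import boto3

iam = boto3.client("iam")

# Create a group and attach an inline policy granting read access to the template
iam.create_group(GroupName="template-readers")
iam.put_group_policy(
    GroupName="template-readers",
    PolicyName="read-cloudform-template",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-name/cloudform.yaml"
        }]
    }),
)

# Each IAM user who needs access is added to the group (new users as well)
iam.add_user_to_group(GroupName="template-readers", UserName="alice")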
I am trying to give myself permission to download existing files in an S3 bucket. I've modified the Bucket Policy, as follows:
{
    "Sid": "someSID",
    "Action": "s3:*",
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
    "Principal": {
        "AWS": [
            "arn:aws:iam::123123123123:user/myuid"
        ]
    }
}
My understanding is that addition to the policy should give me full rights to "bucketname" for my account "myuid", including all files that are already in that bucket. However, I'm still getting Access Denied errors when I try to download any of those files via the link that comes up in the console.
Any thoughts?
Step 1
Click on your bucket name and, under the Permissions tab, make sure that 'Block new public bucket policies' is unchecked
Step 2
Then you can apply your bucket policy
Hope that helps
David, you are right, but I found that, in addition to what bennie said below, you also have to grant view access (or whatever access you want) to 'Authenticated Users'.
But a better solution might be to edit the user's policy to just grant access to the bucket:
{
    "Statement": [
        {
            "Sid": "Stmt1350703615347",
            "Action": [
                "s3:*"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::mybucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": ["arn:aws:s3:::mybucket"],
            "Condition": {}
        }
    ]
}
The first block grants all S3 permissions to all elements within the bucket. The second block grants list permission on the bucket itself.
Change resource arn:aws:s3:::bucketname/AWSLogs/123123123123/* to arn:aws:s3:::bucketname/* to have full rights to bucketname
For showing a static website in S3:
This is the bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::example-bucket/*"]
        }
    ]
}
Use the method below to upload any file in publicly readable form using TransferUtility in Android.
transferUtility.upload(String bucketName, String key, File file, CannedAccessControlList cannedAcl)
Example
transferUtility.upload("MY_BUCKET_NAME", "FileName", your_file, CannedAccessControlList.PublicRead);
To clarify: It is really not documented well, but you need two access statements.
In addition to your statement that allows actions on resource "arn:aws:s3:::bucketname/AWSLogs/123123123123/*", you also need a second statement that allows ListBucket on "arn:aws:s3:::bucketname", because internally the AWS client will try to list the bucket to determine that it exists before performing its action.
With the second statement, it should look like:
"Statement": [
{
"Sid": "someSID",
"Action": "ActionThatYouMeantToAllow",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
"Principal": {
"AWS": [
"arn:aws:iam::123123123123:user/myuid"
]
},
{
"Sid": "someOtherSID",
"Action": "ListBucket",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucketname",
"Principal": {
"AWS": [
"arn:aws:iam::123123123123:user/myuid"
]
}
]
Note: If you're attaching this as an IAM policy rather than a bucket policy, skip the "Principal" part.
If you have an encrypted bucket, you will also need the relevant KMS permissions allowed.
Possible reason: if the files were put or copied by a user from another AWS account, you cannot access them, because you are still not the object owner. The user from the other account who placed the files in your bucket has to grant access during the put or copy operation.
For a put operation, the object owner can run this command:
aws s3api put-object --bucket destination_awsexamplebucket --key dir-1/my_images.tar.bz2 --body my_images.tar.bz2 --acl bucket-owner-full-control
For a copy operation of a single object, the object owner can run this command:
aws s3api copy-object --copy-source source_awsexamplebucket/myobject --bucket destination_awsexamplebucket --key myobject --acl bucket-owner-full-control
ref: AWS documentation
Giving public access to the bucket just to add a policy is NOT the right way.
It exposes your bucket to the public, even if only for a short amount of time.
You will face this error even if you have admin access (the root user will not face it).
According to the AWS documentation, you have to grant "s3:PutBucketPolicy" to your IAM user.
So simply add an S3 policy to your IAM user that allows s3:PutBucketPolicy, mentioning your bucket ARN to make it safer, and you won't have to make your bucket public again.
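As a sketch of what that grant might look like, attached with boto3 - the user name, policy name, and bucket are placeholders, and including s3:GetBucketPolicy alongside it is an extra assumption so the console can display the existing policy:

import json
import boto3

iam = boto3.client("iam")

# Inline policy letting this IAM user edit the policy of one specific bucket
iam.put_user_policy(
    UserName="my-iam-user",
    PolicyName="allow-put-bucket-policy",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutBucketPolicy", "s3:GetBucketPolicy"],
            "Resource": "arn:aws:s3:::<bucket name>"
        }]
    }),
)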
No one mentioned MFA. For Amazon users who have enabled MFA, please use this:
aws s3 ls s3://bucket-name --profile mfa
And prepare the mfa profile first by running
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/user-name --token-code 928371 --duration-seconds 129600
(replace 123456789012, user-name and 928371), then put the returned temporary credentials into the mfa profile.
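The same flow can be scripted with boto3 if you prefer - the MFA serial, token code, and bucket name are placeholders; the temporary credentials returned by STS are what go into the mfa profile:

import boto3

sts = boto3.client("sts")
resp = sts.get_session_token(
    SerialNumber="arn:aws:iam::123456789012:mfa/user-name",
    TokenCode="928371",
    DurationSeconds=129600,
)
creds = resp["Credentials"]

# Use the temporary, MFA-backed credentials for subsequent S3 calls
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_objects_v2(Bucket="bucket-name").get("KeyCount"))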
This can also happen if the encryption algorithm is missing from the S3 request parameters. If the bucket's default encryption is enabled, e.g. with Amazon S3-managed keys (SSE-S3), you need to pass ServerSideEncryption: "AES256"|"aws:kms"|string in your request params.
const params = {
    Bucket: BUCKET_NAME,
    Body: content,
    Key: fileKey,
    ContentType: "audio/m4a",
    ServerSideEncryption: "AES256" // Here ..
}
await S3.putObject(params).promise()
Go to the AWS Policy Generator and generate a policy:
In the Principal field, enter *
For Actions, select GetObject
Give the ARN as arn:aws:s3:::<bucket_name>/*
Then add the statement and generate the policy; you will get a JSON document - copy it and paste it into the Bucket Policy.