Grant access to file in S3 bucket by a unique hash (Go)?

So I have an S3 bucket. I want to grant access to a single file within that bucket to a unique person. Is it possible to grant access based on a secure hash or something like that?
So for instance: a file is uploaded to the bucket, and an email is sent to the user with a link:
https://s3-us-west-2.amazonaws.com/mycoolbucket/test.txt?key=asdqwerwerhsdhsdfh23562346
Access to that file is granted if the key (or whatever) is present and correct. If the key isn't correct, access is denied, and access is only ever granted for that single file in the bucket. I'm trying to avoid changing bucket policies and the like.
Thanks in advance!

Take a look at pre-signed URLs, for example in Java: http://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURLJavaSDK.html
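Since the question is about Go, here is a minimal sketch of the same idea with the AWS SDK for Go (v1). The bucket, key and region match the example URL in the question; the 15-minute expiry is an arbitrary choice.

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Credentials come from the environment or shared config file.
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-west-2"),
	}))
	svc := s3.New(sess)

	// Build a GetObject request for the one object we want to share.
	req, _ := svc.GetObjectRequest(&s3.GetObjectInput{
		Bucket: aws.String("mycoolbucket"),
		Key:    aws.String("test.txt"),
	})

	// Presign it: the returned URL carries a signature and expiry, so the
	// holder of the link can fetch only this object, only until it expires.
	url, err := req.Presign(15 * time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(url) // email this link to the user
}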

So after looking over all the existing Go packages I decided it was best to just build my own package specifically for creating a secure URL to a specific item in an S3 bucket. It works great, but documentation is a work in progress. Hopefully it helps someone:
https://github.com/markhayden/s3querybuilder

Related

Using S3 bucket as a file server for the public

Use-case
We basically want to collect files from external customers into a file server.
We were thinking of using the S3 bucket as the file server that customers can interact with directly.
Question
Is it possible to accomplish this where we create a bucket for each customer, and he can be given a link to the S3 bucket that also serves as the UI for him to drag and drop his files into directly?
He shouldn't have to log in to AWS or create an AWS account.
He should interact directly with only his S3 bucket (drag and drop, add, delete files); there shouldn't be a way for him to see other buckets. We will probably create many S3 buckets for our customers in the same AWS account. His entry point into the S3 bucket UI would be a link (the S3 bucket URL perhaps).
If such a thing is possible - would love some general pointers as to what more I should do (see my approach below)
My work so far
I've been able to create an S3 bucket and grant public access to it.
I've set policies for Get, List and PutObject on the S3 bucket.
I've been able to give public access to objects inside the bucket using their links, but never to the bucket itself.
Is there something more I can build on or am I hitting a dead-end and this is not possible to accomplish?
P.S.: This may not be a coding question, but if it's at all possible your answer could include code to accomplish it, or general pointers otherwise.
An S3 presigned URL can help in such cases, but you will have to write your own custom front-end application for the drag-and-drop features.
Link: https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html
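For the upload direction, the back end can hand each customer a presigned PUT URL scoped to his own bucket or prefix; the custom drag-and-drop front end then simply HTTP PUTs the file to that URL. A hedged sketch in Go (SDK v1); the bucket and key names below are placeholders:

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// uploadURL returns a URL the customer can HTTP PUT a file to without
// having an AWS account. Only this exact key can be written, and only
// until the URL expires.
func uploadURL(svc *s3.S3, bucket, key string) (string, error) {
	req, _ := svc.PutObjectRequest(&s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	return req.Presign(15 * time.Minute)
}

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	url, err := uploadURL(s3.New(sess), "customer-a-bucket", "uploads/report.pdf")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(url) // give this to the customer's upload UI
}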

Allowing S3 bucket to access itself

I just want my S3 bucket to be able to access itself. For example, in my index.html there is a reference to a favicon, which resides in my S3 bucket (the same bucket, actually). When I call index.html, I get a 403 Access Denied error.
If I turn Block Public Access off and add a policy, it works, but I do not want the bucket to be public.
How am I able to invoke my website with my AWS user, for example, without making the site public (that is, with all public internet access blocked)?
"I just want my S3 bucket to be able to access itself."
No, that is not how it works: the request always comes from the client, never from the bucket itself.
"How am I able to invoke my website with my AWS user?"
For site-level access control there is CloudFront with signed cookies. You will still need some logic (API Gateway + Lambda? Lambda@Edge? another server?) to authenticate the user and sign the cookies.
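If the signing logic happens to live in a Go back end, the SDK ships a cookie signer for this. A rough sketch, assuming a CloudFront key pair; the key-pair ID, key file path and distribution URL are placeholders:

package web

import (
	"log"
	"net/http"
	"time"

	"github.com/aws/aws-sdk-go/service/cloudfront/sign"
)

// setSignedCookies is called after you have authenticated the user by
// whatever means (API Gateway + Lambda, Lambda@Edge, your own server...).
func setSignedCookies(w http.ResponseWriter) {
	privKey, err := sign.LoadPEMPrivKeyFile("cloudfront-private-key.pem") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	signer := sign.NewCookieSigner("KEYPAIRID1234", privKey) // placeholder key-pair ID

	// Canned policy for a single URL; broader URL patterns need a custom
	// policy via signer.SignWithPolicy.
	cookies, err := signer.Sign("https://dxxxxxxxxxxxx.cloudfront.net/index.html", time.Now().Add(1*time.Hour))
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range cookies {
		http.SetCookie(w, c) // CloudFront validates these on every request
	}
}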
You mention that "the websites in the bucket should only be visible to a few dedicated users, which I will create with IAM."
However, accessing Amazon S3 content with IAM credentials is not compatible with accessing objects via URLs in a web browser. IAM credentials can be used when making AWS API calls, but a different authentication method is required when accessing content via URLs. Authentication normally requires a back-end to perform the authentication steps, or you could use Amazon Cognito.
Without knowing how your bucket is set up and what permissions / access controls you have already deployed it is hard to give a definite answer.
Having said that, it sounds like you simply need to walk through the proper steps for building an appropriate permission model. You have already explored part of this with Block Public Access and a policy, but there are also ACLs and permission specifics based on object ownership that need to be considered.
Ultimately AWS's documentation is going to do a better job than most to illustrate what to do and where to start:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteAccessPermissionsReqd.html
NOTE: if you share more information about how the bucket is configured and how your client side is accessing the website, I can edit the answer to give a more prescriptive solution (assuming the AWS docs don't get you all the way there)
UPDATE: After re-reading your question and comment on my answer, I think gusto2 and John's answers are pointing you in the right direction. What you are wanting to do is to authenticate users before they access the contents of the S3 bucket (which if I understand you right, is a s3 hosted static website). This means you need an authentication layer between the client and the bucket, which can be accomplished in a number of ways (lambda + cloudfront, or using an IdP like Cognito are certainly viable options). It would be a moot point for me to regurgitate exactly how to pull off something like this when there are a ton of accessible blog posts on the topic (search "Authenticate s3 static website").
HOWEVER I also want to point out that what you are wanting to accomplish is not possible in the way you are hoping to accomplish it (using IAM permission modeling to authenticate users against an s3 hosted static website). You can either authenticate users to your s3 website OR you can use IAM + S3 Permissions and ACL to set up AWS User and Role specific access to the contents of a bucket, but you can't use IAM users / roles as a method for authenticating client access to an S3 static website (not in any way I would imagine is simple or recommended at least...)

How to change user 'role' per request in Amazon AWS S3 bucket?

I'm not sure if this is the appropriate use case, so please tell me what to look for if I'm incorrect in my assumption of how to do this.
What I'm trying to do:
I have an s3 bucket with different 'packs' that users can download. Upon their purchase, they are given a user role in Wordpress. I have an S3 browser set up via php that makes requests to the bucket for info.
Based on their 'role', it will only show files that match prefix (whole pack users see all, single product people only see single product prefix).
In that way, the server will be sending the files on behalf of the user, and changing IAM roles based on the user's permission level. Do I have to have it set up that way? Can I just analyze the WP role and specify an endpoint or query that notes the prefixes allowed?
Pack users see /
Individual users see /--prefix/
If that makes sense
Thanks in advance! I've never used AWS, so this is all new to me. :)
This sounds too complex. It's possible to do with AWS STS but it would be extremely fragile.
I presume you're hiding the actual S3 bucket from end users and are streaming through your PHP application? If so, it makes more sense to do any role-based filtering in the PHP application, as you have far more logic available to you there. IAM is granular, but restricting resources in S3 is going to be funky, and there's always a chance you'll get something wrong and expose the incorrect downloads.
Rather do this inside your app:
establish the role you've granted
issue the S3 list command filtered by the role, i.e. if the role permits only --prefix, issue the listing so that it only returns files matching --prefix (see the sketch after this list)
don't expose files in the bucket globally - only your app should have access to the S3 bucket - that way people also can't share links once they've downloaded a pack.
this has the added benefit of not encoding your S3 bucket structure in IAM, and keeps your decision logic isolated to code.
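A sketch of those steps (the question's stack is PHP, but to match the rest of this page the sketch is in Go with SDK v1; role names, bucket and prefixes are placeholders):

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// prefixForRole maps the WordPress role to the only prefix that role may see.
func prefixForRole(role string) string {
	if role == "whole-pack" {
		return "" // whole-pack users may list everything
	}
	return "single-product/" // single-product users only see this prefix
}

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	svc := s3.New(sess)

	prefix := prefixForRole("single-product")

	// Only list objects under the prefix this role is allowed to see.
	err := svc.ListObjectsV2Pages(&s3.ListObjectsV2Input{
		Bucket: aws.String("my-packs-bucket"),
		Prefix: aws.String(prefix),
	}, func(page *s3.ListObjectsV2Output, lastPage bool) bool {
		for _, obj := range page.Contents {
			fmt.Println(aws.StringValue(obj.Key))
		}
		return true // keep paging
	})
	if err != nil {
		log.Fatal(err)
	}
}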
There are basically three ways you can grant access to private content in Amazon S3.
Option 1: IAM credentials
You can add a policy to an IAM User so that they can access private content. However, such credentials should only be used by staff in your own organization. They should not be used to grant access to application users.
Option 2: Temporary credentials via STS
Your application can generate temporary credentials via the AWS Security Token Service. These credentials can be given specific permissions and are valid for a limited time period. This is ideal for granting mobile apps access to Amazon S3 because they can communicate directly with S3 without having to go via the back-end app. The credentials would only be granted access to resources they are permitted to use.
These types of credentials can also be used by web applications, where the web apps make calls directly to AWS services (eg from Node/JavaScript in the browser). However, this doesn't seem suitable for your WordPress situation.
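For completeness, a hedged sketch of Option 2 in Go (SDK v1): the back end assumes a role through STS and attaches an inline session policy to narrow the temporary credentials further. The role ARN, bucket and prefix are placeholders.

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := sts.New(sess)

	// The session policy restricts the assumed role to a single prefix.
	policy := `{
	  "Version": "2012-10-17",
	  "Statement": [{
	    "Effect": "Allow",
	    "Action": "s3:GetObject",
	    "Resource": "arn:aws:s3:::my-packs-bucket/single-product/*"
	  }]
	}`

	out, err := svc.AssumeRole(&sts.AssumeRoleInput{
		RoleArn:         aws.String("arn:aws:iam::123456789012:role/app-download-role"),
		RoleSessionName: aws.String("user-42"),
		DurationSeconds: aws.Int64(900), // 15 minutes
		Policy:          aws.String(policy),
	})
	if err != nil {
		log.Fatal(err)
	}

	// These temporary keys can be handed to a client that then talks to S3 directly.
	fmt.Println(aws.StringValue(out.Credentials.AccessKeyId))
	fmt.Println(aws.StringValue(out.Credentials.SessionToken))
}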
Option 3: Pre-Signed URLs
Imagine a photo-sharing application where users can access their private photos, and users can also share photos with other users. When a user requests access to a particular photo (or when the back-end app is creating an HTML page that uses a photo), the app can generate a pre-signed URL that grants temporary access to an Amazon S3 object.
Each pre-signed URL gives access only to a single S3 object and only for a selected time period (eg 5 minutes). This means that all the permission logic for whether a user is entitled to access a file can be performed in the back-end application. When the back-end application provides a pre-signed URL to the user's browser, the user can access the content directly from Amazon S3 without going via the back-end.
See: Amazon S3 pre-signed URLs
Your situation sounds suitable for Option #3. Once you have determined that a user is permitted to access a particular file in S3, your application can generate the pre-signed URL and include it as a link (or even in <img src=...> tags). The user can then download the file. There is no need to use IAM Roles in this process.
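A hedged sketch of that flow in Go (SDK v1): check the user's entitlement in your own code, then presign just that one object with a short expiry. The entitlement check, bucket and names are placeholders.

package packs

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// userMayDownload is a placeholder for your own entitlement check
// (e.g. looking up the purchased pack for this user in WordPress).
func userMayDownload(userID, key string) bool { return true }

// downloadLink returns a time-limited link to a single object, or an
// error if the user is not entitled to it.
func downloadLink(svc *s3.S3, userID, key string) (string, error) {
	if !userMayDownload(userID, key) {
		return "", fmt.Errorf("user %s may not download %s", userID, key)
	}
	req, _ := svc.GetObjectRequest(&s3.GetObjectInput{
		Bucket: aws.String("my-packs-bucket"), // placeholder bucket
		Key:    aws.String(key),
	})
	return req.Presign(5 * time.Minute) // the link is only useful for a few minutes
}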

can someone hack into my s3 with "AWS-cognito-identity-poolID" that is hard-coded?

At first I hard-coded my AWS "accessKey" and "securityKey" in a client-side JS file, but that was very insecure, so I read about AWS Cognito and implemented the new JS in the following manner:
Still, I am confused about one thing: can someone hack into my S3 with the "AWS-cognito-identity-poolID" that is hard-coded? Or are there any other security steps I should take?
Thanks,
Jaikey
Definition of Hack
I am not sure what hacking means in the context of your question.
I assume that you actually mean "that anyone can do something different than uploading a file", which would include deleting or accessing objects inside your bucket.
Your solution
As Ninad already mentioned above, you can use your current approach by enabling "Enable access to unauthenticated identities" [1]. You will then need to create two roles of which one is for "unauthenticated users". You could grant that role PutObject permissions to the S3 bucket. This would allow everyone who visits your page to upload objects to the S3 bucket. I think that is what you intend and it is fine from a security point of view since the IdentityPoolId is a public value (i.e. not confidential).
Another solution
I guess you do not need to use Amazon Cognito to achieve what you want. It is probably sufficient to add a bucket policy to S3 which grants PutObject permission to everyone.
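A sketch of that bucket policy, applied here through the Go SDK's PutBucketPolicy call; the bucket name is a placeholder, and note the caveats in the next section before actually granting public write access:

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	// Allows anyone on the internet to upload objects to the bucket.
	policy := `{
	  "Version": "2012-10-17",
	  "Statement": [{
	    "Effect": "Allow",
	    "Principal": "*",
	    "Action": "s3:PutObject",
	    "Resource": "arn:aws:s3:::my-upload-bucket/*"
	  }]
	}`

	_, err := svc.PutBucketPolicy(&s3.PutBucketPolicyInput{
		Bucket: aws.String("my-upload-bucket"),
		Policy: aws.String(policy),
	})
	if err != nil {
		log.Fatal(err)
	}
}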
Is this secure?
However, I would not recommend enabling direct public write access to your S3 bucket.
If someone abused your website by spamming your upload form, you would incur S3 charges for PUT operations and data storage.
It would be a better approach to send the data through Amazon CloudFront and apply a WAF with rate-based rules [2] or implement a custom rate limiting service in front of your S3 upload. This would ensure that you can react appropriately upon malicious activity.
References
[1] https://docs.aws.amazon.com/cognito/latest/developerguide/identity-pools.html
[2] https://aws.amazon.com/about-aws/whats-new/2019/08/lower-threshold-for-aws-waf-rate-based-rules/
Yes, your S3 bucket is secure if you access it through an "AWS-Cognito-Identity-Pool" on the client side. Also enable CORS, which allows actions only from a specific domain; that way, if someone tries a direct upload or tries to list the bucket, they will get "access denied".
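A hedged sketch of setting such a CORS rule on the bucket with the Go SDK (bucket name and domain are placeholders). Keep in mind that CORS is enforced by browsers; the IAM role attached to the identity pool is what actually limits which API actions are possible.

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	// Allow browser uploads only from the site's own origin.
	_, err := svc.PutBucketCors(&s3.PutBucketCorsInput{
		Bucket: aws.String("my-upload-bucket"),
		CORSConfiguration: &s3.CORSConfiguration{
			CORSRules: []*s3.CORSRule{{
				AllowedOrigins: aws.StringSlice([]string{"https://www.example.com"}),
				AllowedMethods: aws.StringSlice([]string{"PUT", "POST"}),
				AllowedHeaders: aws.StringSlice([]string{"*"}),
			}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}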
Also make sure that you have set the read/write permissions on the file containing the hard-coded ID so that it can only be read by the local node and nobody else. By the way, the answer is always yes; it is only a matter of how much effort someone is willing to put into "hacking". Follow what people have said here and you are safe.

Restrict Access to S3 bucket on AWS

I am storing files in a S3 bucket. I want the access to the files be restricted.
Currently, anyone with the URL to the file is able to access the file.
I want a behavior where file is accessed only when it is accessed through my application. The application is hosted on EC2.
Following are 2 possible ways I could find.
Use "referer" key in bucket policy.
Change "allowed origin" in CORS configuration
Which of the above two should be used, given that 'referer' can be spoofed in the request header?
Also, can CloudFront play a role here?
I would recommend using a Pre-Signed URL that permits access to private objects stored on Amazon S3. It is a means of keeping objects secure, yet grant temporary access to a specific object.
It is created via a hash calculation based on the object path, expiry time and a shared Secret Access Key belonging to an account that has permission to access the Amazon S3 object. The result is a time-limited URL that grants access to the object. Once the expiry time passes, the URL does not return the object.
Start by removing existing permissions that grant access to these objects. Then generate Pre-Signed URLs to grant access to private content on a per-object basis, calculated every time you reference an S3 object. (Don't worry, it's fast to do!)
See documentation: Sample code in Java
When dealing with a private S3 bucket, you'll want to use an AWS SDK appropriate for your use case.
Here are SDKs for many different languages: http://aws.amazon.com/tools/
Within each SDK, you can find sample calls to S3.
If you are trying to make private calls via browser-side JavaScript, you can use CORS.