Custom authentication with JWT for download from S3

I have a setup on AWS where I'm running a backend on ElasticBeanstalk/EC2, which stores some files in S3. What I'd like is that when a user who has been authenticated by my backend wants to download a file, they can do so directly from S3, instead of going through the backend itself.
To that end, I'd like S3 to check a signed JWT before allowing a file download. For now, let's assume that any correctly signed JWT allows any file download, regardless of the JWT claims. To make things more difficult, I want the download links to be usable in an HTML <img src='link/to/s3/'> tag, so sending the JWT in a header as you normally would isn't feasible.
Is this even possible with S3? How would I go about setting this up?

I'm not very familiar with S3, but doesn't it already have a concept of "signed URLs"?
Instead of issuing a JWT, your web app uses the S3-proprietary method to generate a URL which encodes the expiry time and other parameters, but the overall result is the same.
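As a minimal sketch of that idea in Python with boto3 (the bucket and key names are placeholders, and the 5-minute expiry is just an example): your backend validates the user itself, then hands back a time-limited S3 URL instead of asking S3 to check a JWT.

```python
# Minimal sketch with boto3; bucket and key names are placeholders.
# The backend authenticates the user itself (e.g. by validating the JWT),
# then hands back a time-limited S3 URL instead of asking S3 to check the JWT.
import boto3

s3 = boto3.client("s3")

def make_download_url(bucket: str, key: str, expires_in: int = 300) -> str:
    """Return a pre-signed GET URL that is valid for `expires_in` seconds."""
    return s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires_in,
    )

url = make_download_url("my-bucket", "path/to/image.png")
# The signature and expiry travel as query parameters on the URL itself,
# so it can be dropped straight into an <img src="..."> tag; no header is needed.
```

Because the signature lives in the query string rather than a header, this also covers the `<img src=...>` requirement from the question.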

Related

How to use the TranscribeStreamingClient in the browser with credentials

I want to be able to offer real-time transcription in my browser app using AWS Transcribe. I see there is the TranscribeStreamingClient, which can take optional AWS credentials - I assume these credentials are required for access to the S3 bucket?
What I want to know is how to set up auth in my app so that I can make sure my users don't go overboard with the number of minutes they transcribe.
For example:
I would expect to generate a pre-signed URL on my backend that expires in X seconds/minutes, which I can pass to the web client, which then uses it to handle the rest of the communication (similar to S3).
However, I don't see such an option, and the only solution that I keep circling back to is that I would need to feed the audio packets to my backend, which then handles all the auth and just forwards them to the service via the streaming client there. This would be okay, but the documentation says that the TranscribeStreamingClient should be compatible with browser integrations. What am I missing?
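There is no answer in the thread above, so the following is purely a hedged sketch of one alternative: keep the long-lived keys on your backend and hand the browser short-lived STS credentials instead, which the browser-side client can be configured with. The role ARN, the 15-minute duration, and the per-user metering hook are all assumptions, not anything stated in the question.

```python
# Hedged sketch: a backend endpoint vends short-lived AWS credentials that the
# browser-side TranscribeStreamingClient can be configured with. The role ARN,
# the 15-minute duration and the per-user metering hook are all assumptions.
import boto3

sts = boto3.client("sts")

def get_transcribe_credentials(user_id: str) -> dict:
    """Assume a narrowly scoped role and return temporary credentials."""
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/transcribe-streaming-only",  # hypothetical role
        RoleSessionName=f"transcribe-{user_id}",
        DurationSeconds=900,  # 15 minutes, the minimum STS allows
    )
    creds = resp["Credentials"]
    # Record the session against the user here to cap their transcribed minutes.
    return {
        "accessKeyId": creds["AccessKeyId"],
        "secretAccessKey": creds["SecretAccessKey"],
        "sessionToken": creds["SessionToken"],
        "expiration": creds["Expiration"].isoformat(),
    }
```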

Security concern in direct browser uploads to S3

The main security concern with direct JS browser uploads to S3 is that the S3 credentials end up stored on the client side.
To mitigate this risk, the S3 documentation recommends using short-lived keys generated by an intermediate server:
A file is selected for upload by the user in their web browser.
The user’s browser makes a request to your server, which produces a temporary signature with which to sign the upload request.
The temporary signed request is returned to the browser in JSON format.
The browser then uploads the file directly to Amazon S3 using the signed request supplied by your server.
The problem with this flow is that I don't see how it helps in the case of public uploads.
Suppose my upload page is publicly available. That means the server API endpoint that generates the short-lived key needs to be public as well. A malicious user could then just find the address of the API endpoint and hit it every time they want to upload something. The server has no way of knowing if the request came from a real user on the upload page or from any other place.
Yeah, I could check the domain on the request coming in to the API and validate it, but the domain can easily be spoofed (when the request is not coming from a browser client).
Is this whole thing even a concern? The main risk is someone abusing my S3 account and uploading stuff to it. Are there other concerns that I need to know about? Can this be mitigated somehow?
If that concerns you, you would require your users to log in to your website somehow, and serve the API endpoint behind the same server-side authentication service that handles your login process. Then only authenticated users would be able to upload files.
You might also want to look into S3 pre-signed URLs.
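For concreteness, here is a rough sketch of what putting the signing endpoint behind your login might look like, using Flask and boto3. The session check, bucket name, and key scheme are assumptions for illustration, not anything prescribed by S3.

```python
# Rough sketch (Flask + boto3): only logged-in users get a signed upload request.
# The session check, bucket name and key scheme are assumptions for illustration.
import uuid

import boto3
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "replace-me"  # placeholder; needed for Flask sessions
s3 = boto3.client("s3")

@app.route("/sign-upload", methods=["POST"])
def sign_upload():
    user_id = session.get("user_id")  # set by your login flow
    if user_id is None:
        abort(401)  # anonymous callers never receive a signed request

    key = f"uploads/{user_id}/{uuid.uuid4()}"
    url = s3.generate_presigned_url(
        ClientMethod="put_object",
        Params={"Bucket": "my-upload-bucket", "Key": key},
        ExpiresIn=60,  # short-lived, so a leaked URL is only briefly useful
    )
    return jsonify({"url": url, "key": key})
```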

Is it a bad practice to make a request to a 3rd party service to get a token?

Suppose this scenario:
I'm building a service which works with user uploads to my AWS S3 account.
Each site using my service must have an upload form which uploads directly to S3. In order to do that, each site has to sign its upload form with AWS Signature Version 4.
The problem is that signing requires the AWSAccessKeyId and AWSSecretAccessKey, which I would have to share with my service users, and that's not acceptable.
I thought I could generate all the needed signing data on my side and then just reply with it when a user (site) asks for it.
So the question is: is it a bad idea that, in order to sign its upload form, a site (which is going to upload files to my S3) has to make a request to my server for the signing data (via XHR or server side)?
I'm not entirely sure what you're asking, but if you're asking whether it's a bad idea to sign the upload yourself on behalf of the individual sites, then the answer is no... with a caveat.
Signing the upload (really, you should just sign the upload URL) is far less of a security risk than handing other domains your access keys. That's what signing is there for: to allow anonymous uploads. Signing the request merely gives the site/user permission to upload, but it does not take into account who is uploading or what they are uploading.
This is where your own security checks need to come in. If the form is hosted on multiple domains (all uploading to your S3 bucket), you should first check the domain the form originated from, so as to avoid someone putting the form on their own webserver and trying to upload stuff. Depending on how your various sites are configured, there are a couple of ways to do this, though I'm no expert on them, unfortunately.
Secondly, you will want to validate the data being uploaded. Is it pure text, binary, etc.? You will want to validate all of that prior to initiating the upload.
I hope this helps.
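To make the validation point a bit more concrete: S3's pre-signed POST lets the signing server attach policy conditions that the upload itself must satisfy. A sketch with boto3 follows; the bucket name, size cap, and content-type prefix are placeholder choices, not requirements.

```python
# Sketch: a pre-signed POST whose policy constrains what can be uploaded.
# S3 itself rejects any upload that violates these conditions.
# Bucket name, size cap and content-type prefix are placeholder choices.
import boto3

s3 = boto3.client("s3")

post = s3.generate_presigned_post(
    Bucket="my-upload-bucket",
    Key="uploads/${filename}",
    Fields={"Content-Type": "image/png"},
    Conditions=[
        ["content-length-range", 0, 10 * 1024 * 1024],  # at most 10 MB
        ["starts-with", "$Content-Type", "image/"],      # images only
    ],
    ExpiresIn=300,
)

# post["url"] is the form action and post["fields"] are the hidden inputs the
# uploading site embeds in its form; the access keys never leave your server.
```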

Only make S3 files accessible through ajax

I want to use S3 to store user uploaded excel files - obviously I only want that S3 file to be accessible by that user.
Right now my application accomplishes this by checking if the user is correct, then hitting the URL https://s3.amazonaws.com/datasets.mysite.com/1243 via AJAX. I can use CORS to allow this AJAX only from https://www.mysite.com.
However if you just type https://s3.amazonaws.com/datasets.mysite.com/1243 into the browser, you can get any file :P
How do I stop S3 from serving files directly, and only allow them to be served via AJAX (where I already control access with CORS)?
It is not about AJAX or not; it is about permissions and authorization.
First, your bucket should be private, unlike its current state, which is world-readable.
Then, in order for your users to connect, you create a temporary download link, which in the AWS world is called an S3 pre-signed request.
You generate them in your back-end; here is a Java sample.
Enjoy,
R

Privacy on Amazon S3

I have an app that lets users post and share files, and currently it's my server that serves these files, but as the data grows, I'm investigating using Amazon S3. However, I use dynamic rules for what is public and what is private between certain users etc., so the server is the only possible arbiter, i.e. permissions cannot be decided on the app/client end.
Simplistically, I guess I could let my server GET the data from S3 and then send it back to the app. But obviously then I'm paying for bandwidth twice, not to mention making my server do unnecessary work.
This seems like a fairly common problem, so I wonder how do people typically solve this problem? (Like I read that Dropbox stores its data on S3.)
We have an application with pretty much the same requirements, and there's a really good solution available. S3 supports signed, expiring URLs for accessing objects. If you have a private S3 object that you, but not others, can access, you can create such a URL. If you give that URL to someone else, they can use it to fetch an object they normally have no access to.
So the solution for your use case is:
User does a GET to the URL on your web site
Your code verifies that the user should be able to see the object (via your application's custom, dynamic rules)
The web site returns a redirect response to a signed S3 URL that expires soon, say in 5 minutes
The user's web browser does a GET to that signed S3 URL. Since it's properly signed and hasn't yet expired, S3 returns the contents of the object directly to the user's browser.
The data goes from S3 to the user without ever traveling back out through your web site. Only users your application has authorized can get the data. And if a user bookmarks or shares the URL it won't work once the expiration time has passed.
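A minimal sketch of that flow with Flask and boto3; the permission check, session lookup, bucket name, and expiry are stand-ins for your application's own rules.

```python
# Sketch of the redirect flow (Flask + boto3). The permission check and the
# session lookup stand in for your application's own dynamic rules.
import boto3
from flask import Flask, abort, redirect, session

app = Flask(__name__)
app.secret_key = "replace-me"  # placeholder; needed for Flask sessions
s3 = boto3.client("s3")

def user_can_access(user_id, file_id) -> bool:
    # Placeholder: your custom, dynamic permission rules go here.
    return user_id is not None

@app.route("/files/<file_id>")
def download(file_id):
    user_id = session.get("user_id")  # however your auth layer identifies the user
    if not user_can_access(user_id, file_id):
        abort(403)

    url = s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": "my-app-files", "Key": file_id},
        ExpiresIn=300,  # expires in 5 minutes, as described above
    )
    # Redirect the browser straight to S3; the bytes never pass through your site.
    return redirect(url, code=302)
```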