API Gateway for playing media URLs? - amazon-web-services

I have a media file linked to a URL (like https://songexample.mp3, not the real URL). I want my app to play that audio, using something like
const audio = new Audio('https://songexample.mp3');
audio.play();
But I would like to make it so that users cannot view the underlying URL itself, and also so that only logged-in users can play it.
I am wondering if I can use API Gateway for this purpose. Through tutorials like this one, it seems clear that you can link API Gateway to Cognito, so that only logged-in Cognito users can access the API Gateway endpoint.
But this example and the others I have seen assume you are using a REST endpoint, with a specific parameter (like an id at the end), to get back information. In my case, the URL will not return text-based information, but will actually play media.
API Gateway seems to expect that the end of the URL will provide data for a {proxy} parameter, but my URL must end in .mp3. Not surprisingly, my attempts so far have just returned 404 responses.
Is it possible to use API Gateway to limit access to this kind of URL?
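For context, here is a rough sketch of the sort of thing I have in mind (the bucket, key, and route names are made up): API Gateway doesn't serve the MP3 itself, but a Cognito-protected Lambda redirects to a short-lived pre-signed S3 URL.
// Hypothetical Lambda behind a Cognito-protected route like GET /play/{id}.
// Instead of streaming the MP3 through API Gateway, it redirects the player
// to a pre-signed S3 URL that stops working after a minute.
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

const s3 = new S3Client({});

exports.handler = async (event) => {
  const id = event.pathParameters.id; // e.g. "songexample"
  const url = await getSignedUrl(
    s3,
    new GetObjectCommand({ Bucket: "my-private-media", Key: `${id}.mp3` }), // made-up names
    { expiresIn: 60 }
  );
  return { statusCode: 302, headers: { Location: url } };
};
The pre-signed URL would still be visible in dev tools, but it expires quickly, which might be good enough for my case?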

Related

How to use the TranscribeStreamingClient in the browser with credentials

I want to be able to offer real-time transcription in my browser app using AWS Transcribe. I see there is the TranscribeStreamingClient, which can take optional AWS credentials - I assume these credentials are required for access to the S3 bucket?
What I want to know is how to set up auth in my app so that I can make sure my users don't go overboard with the number of minutes they transcribe.
For example:
I would expect to generate, on my backend, a pre-signed URL that expires in X seconds/minutes, which I can pass to the web client, which then uses it to handle the rest of the communication (similar to S3).
However I don't see such an option, and the only solution that I keep circling back to is that I would need to feed the audio packets to my backend, which then handles all the auth and just forwards them to the service via the streaming client there. This would be okay, but the documentation says that the TranscribeStreamingClient should be compatible with browser integrations. What am I missing?
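To make the browser-only idea concrete, this is roughly the shape I keep arriving at, assuming my backend exposes a hypothetical /transcribe-credentials endpoint that returns short-lived STS credentials scoped to transcribe:StartStreamTranscription:
// Browser-side sketch: fetch short-lived credentials from our own backend
// (the /transcribe-credentials endpoint is made up), then stream directly to Transcribe.
import {
  TranscribeStreamingClient,
  StartStreamTranscriptionCommand,
} from "@aws-sdk/client-transcribe-streaming";

async function startTranscription(audioChunks) {
  // audioChunks: an async iterable of PCM Uint8Array chunks captured from the microphone
  const creds = await (await fetch("/transcribe-credentials")).json();

  const client = new TranscribeStreamingClient({
    region: "us-east-1",
    credentials: {
      accessKeyId: creds.AccessKeyId,
      secretAccessKey: creds.SecretAccessKey,
      sessionToken: creds.SessionToken, // temporary credentials expire on their own
    },
  });

  const response = await client.send(
    new StartStreamTranscriptionCommand({
      LanguageCode: "en-US",
      MediaEncoding: "pcm",
      MediaSampleRateHertz: 16000,
      AudioStream: (async function* () {
        for await (const chunk of audioChunks) {
          yield { AudioEvent: { AudioChunk: chunk } };
        }
      })(),
    })
  );

  for await (const event of response.TranscriptResultStream) {
    console.log(JSON.stringify(event.TranscriptEvent?.Transcript));
  }
}
Enforcing a per-user minute budget would then live on the backend: it decides whether to mint credentials at all and how short their session is, even though the audio itself never passes through it.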

Rest API authorization of signed and unencrypted JWT best practices

I just want to make sure I've got the overall idea down and don't create an implementation that violates basic security best practices. Can somebody please check my understanding?
As I understand it, a user can log in to my application and the authentication server REST API can return a JWT that is signed, but NOT encrypted. The token's payload can carry claims that my client application can access, such as the features the user can use in the application. That way my client-side website can change functionality based on the user's privileges and roles. The JWT claims in the payload are NOT sensitive. They will be strings representing categories for images or documents, things like that.
When the user wants to get additional content (like a document, image, or video) from other REST API endpoints, they submit the JWT along with the GET request. My API can then verify the signature of the JWT and grant API access if appropriate.
This last part is what I'm most unsure about. My intent is to use another authorization server API endpoint which takes the JWT in a POST request and returns a simple "valid/invalid" response. My thought is that my Content Delivery Network (CDN) can use this API to verify that the JWT in possession is validly signed. I believe (and maybe here is where I'm goofing up) that the authorization server API can be publicly accessible to ease use by my other microservices. This seems fine because I'm just giving a boolean pass/fail on the validity of the token, so I don't see any need to hide or obfuscate the API. I question this because I know AWS has backend machinery to validate and authorize API calls, but for my first implementation I like the simplicity of just using REST APIs for everything.
So in summary:
1.) Signed, unencrypted JWT with non-sensitive user roles/privileges.
2.) Unencrypted so client side webpage can selectively render content based on user.
3.) Public authorization API that anybody could technically use so that my CDN (and other microservices) can validate JWTs.
Any major issue with this approach? Have I committed any technical sins?
Thank you so much in advance for your time on this matter.
Okay, I think I've sorted this out myself after finding a great video tutorial on this stuff. Below is the video I watched:
https://www.youtube.com/watch?v=_XbXkVdoG_0
I had some misconceptions and this video sorted them out. It appears that what I described in my question is precisely how JWT should be used.
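For reference, a minimal sketch of the "valid/invalid" check described in the question, assuming a Node.js Express endpoint and the jsonwebtoken package; the key handling is simplified and the route name is made up:
// Hypothetical POST /verify endpoint: accepts a JWT and returns a pass/fail verdict.
const express = require("express");
const jwt = require("jsonwebtoken");

const app = express();
app.use(express.json());

// With RS256, verification only needs the public key, so exposing this endpoint
// does not leak anything an attacker could use to forge tokens.
const PUBLIC_KEY = process.env.JWT_PUBLIC_KEY;

app.post("/verify", (req, res) => {
  try {
    jwt.verify(req.body.token, PUBLIC_KEY, { algorithms: ["RS256"] });
    res.json({ valid: true });
  } catch (err) {
    res.json({ valid: false });
  }
});

app.listen(3000);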

AWS API Gateway, Cognito Identity Pool and REST: can I restrict to specific paths and methods?

I want to implement a blog API - for fun and learning - which allows a user to manage and write/view their own blog posts. So far I have an API with paths like
/ - GET all posts,
/blog/{id} - GET a specific post or PUT to update a post
/blog/ - POST for a new blog
Using a Cognito user pool, a user can sign up and log in, and the API Gateway uses an authorizer to allow or deny access (I'm mucking about with Blazor at the same time - there isn't really an interface yet, just a bit of cobbled-together C# that uses the identity provider API).
However, any user can see all posts. I really want something like this:
/{user}/ - GET all posts by user
/{user}/blog/{id} - GET or PUT specific blog post
and so on.
Behind the API Gateway are four really simple Lambda functions. So far, with the user pool authorizer I can see the Authorization header but nothing else (the request context and the Lambda context have no Identity elements that are not null).
I was wondering whether I could use an identity pool to do the specific user permissions using IAM roles, but I can't think of what the roles might look like, or whether this is even possible. I know there are parameters you can embed in roles - you do that for S3 roles - so why not API paths?
Does this sound plausible, or would I need to go down the route of a Lambda function to do authorization? Anyone have any examples? I googled and looked through Stack Overflow, but couldn't see anything specific around this.
Another problem, I guess, would be getting a nice ID substitution for user here - I collect email and nickname so far - I need a nice username rather than a Cognito user ID, which looks like it wouldn't play well in a URL.
Thanks.
The answer to my query appears to be in this YouTube video, put up by the AWS team late last night (UK time, anyway). So far, using C#, I can authenticate myself against the user pool and get AWS credentials, but when I attempt to access my API I get "message": "unauthorized", and that's it!
Anyway, onwards and upwards.
YouTube video about fine-grained access control using Cognito identity pools.
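For anyone landing here later: the mechanism the video describes appears to be IAM policy variables on the identity pool's authenticated role, analogous to the ${aws:username}-style S3 policies. A rough, unverified sketch (region, account, API ID and stage are placeholders, and it assumes the {user} path segment is the Cognito identity ID rather than a friendly nickname, which is the downside I mentioned above):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "execute-api:Invoke",
      "Resource": [
        "arn:aws:execute-api:eu-west-2:111122223333:a1b2c3d4e5/prod/GET/${cognito-identity.amazonaws.com:sub}",
        "arn:aws:execute-api:eu-west-2:111122223333:a1b2c3d4e5/prod/GET/${cognito-identity.amazonaws.com:sub}/blog/*",
        "arn:aws:execute-api:eu-west-2:111122223333:a1b2c3d4e5/prod/PUT/${cognito-identity.amazonaws.com:sub}/blog/*",
        "arn:aws:execute-api:eu-west-2:111122223333:a1b2c3d4e5/prod/POST/${cognito-identity.amazonaws.com:sub}/blog"
      ]
    }
  ]
}
Note this only applies when the methods use AWS_IAM authorization and the request is SigV4-signed with the identity pool credentials, rather than the user pool authorizer.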

How to check for Cognito permissions in API Gateway

Trying to understand how to use Cognito and API Gateway to secure an API.
Here is what I understand so far from AWS documentation and the Cognito user interface:
Clients
www-public - public facing website
www-admin - administrators website
Resource Servers
Prices - for this simple example the API will provide secured access to this resource.
Scopes
prices.read
prices.write
Again, very simple permissions on the API. Public www users can read prices, administrators can write them.
API Gateway
GET /prices - accessible to authenticated users that can read prices.
POST /prices - only accessible to administrators
Users
Administrators - can update prices via the POST method.
Non-administrators - cannot update prices.
Based on this...
Each client will request the scopes it is interested in. So for the public www site it will request prices.read and for the administration site both prices.read and prices.write.
The API Gateway will use two Cognito authorizers, one for each HTTP verb. So the GET method must check that the user can read prices and the POST method that they can write prices.
The bit I don't see is how to put all of this together. I can make the clients request scopes but how do they now connect to user permissions?
When the token is generated, where is the functionality that says "Ok, you requested these scopes, now I'm going to check if this user has this permission and give you the right token?"
I understand that scopes ultimately relate to the claims that will be returned in the token. For example, requesting the profile scope means that the token will contain certain claims, e.g. email, surname etc.
I think, based on this, that my permissions will ultimately end up being claims that are returned when specific scopes are asked for. The fact that the two clients differ in what they request means that the prices.write claim can never be returned to the public www client. Cognito would never issue a token to that client if the prices.write scope was requested.
What I can't see is where this fits in Cognito. There is the option to put users into groups but that is pretty much it. Likewise, there is nothing (that I could see) to relate scopes to claims.
I'm coming from a .NET and Identity Server background. Certainly in the last version of Identity Server I looked at, there was a handler method where you would work out which claims to put into a token. I guess this would map to one of the custom Lambda trigger functions in Cognito (the pre token generation trigger, perhaps)? From there it would need to query Cognito and work out what claims to issue?
The final piece of the puzzle is how the API Gateway checks the claims. Can this be done in API Gateway or does the token need to be inspected in the Lambda function I will write to handle the API Gateway request?
Certainly using Identity Server and .NET there was a client library you would use in the API to inspect the claims and restrict permissions accordingly. Guessing there is something similar in a Node.js Lambda function?
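To make that last part concrete, here is the sort of Lambda-side check I imagine - very much a sketch, assuming a REST API whose Cognito user pool authorizer passes the verified claims through on event.requestContext.authorizer.claims, and using the group and scope names from above:
// Rough sketch of a Node.js Lambda behind a REST API with a Cognito user pool
// authorizer. The authorizer has already verified the token's signature; the
// verified claims are passed through on the event for the handler to inspect.
exports.handler = async (event) => {
  const claims = (event.requestContext.authorizer || {}).claims || {};

  // In an access token the scopes arrive space-separated, prefixed with the
  // resource server identifier (assumed to be "prices" here).
  const scopes = (claims.scope || "").split(" ");
  // Group membership surfaces as the "cognito:groups" claim; the exact string
  // format it arrives in here can vary, so this parsing is deliberately naive.
  const groups = String(claims["cognito:groups"] || "").split(",");

  if (event.httpMethod === "POST" && !groups.includes("Administrators")) {
    return { statusCode: 403, body: JSON.stringify({ message: "Administrators only" }) };
  }
  if (event.httpMethod === "GET" && !scopes.includes("prices/prices.read")) {
    return { statusCode: 403, body: JSON.stringify({ message: "prices.read required" }) };
  }

  return { statusCode: 200, body: JSON.stringify({ prices: [] }) }; // real price handling here
};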
A few assumptions there as I'm basically in the dark. I think the basics are there but not sure how to connect everything together.
Hoping someone has figured this out.

Security concern in direct browser uploads to S3

The main security concern in direct js browser uploads to S3 is that users will store their S3 credentials on the client side.
To mitigate this risk, the S3 documentation recommends using short-lived keys generated by an intermediate server:
A file is selected for upload by the user in their web browser.
The user’s browser makes a request to your server, which produces a temporary signature with which to sign the upload request.
The temporary signed request is returned to the browser in JSON format.
The browser then uploads the file directly to Amazon S3 using the signed request supplied by your server.
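For concreteness, the browser side of that flow looks something like this (the /sign-upload endpoint name and response shape are mine, not from the docs):
// Browser side: ask our own server for a signed URL, then PUT the file straight to S3.
async function uploadFile(file) {
  const res = await fetch(`/sign-upload?name=${encodeURIComponent(file.name)}`);
  const { url } = await res.json(); // a short-lived pre-signed S3 PUT URL

  const put = await fetch(url, {
    method: "PUT",
    headers: { "Content-Type": file.type },
    body: file,
  });
  if (!put.ok) throw new Error(`Upload failed: ${put.status}`);
}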
The problem with this flow is that I don't see how it helps in the case of public uploads.
Suppose my upload page is publicly available. That means the server API endpoint that generates the short-lived key needs to be public as well. A malicious user could then just find the address of the API endpoint and hit it every time they want to upload something. The server has no way of knowing whether the request came from a real user on the upload page or from somewhere else.
Yeah, I could check the domain on the request coming in to the API and validate it, but the domain can be easily spoofed (when the request is not coming from a browser client).
Is this whole thing even a concern? The main risk is someone abusing my S3 account and uploading stuff to it. Are there other concerns that I need to know about? Can this be mitigated somehow?
Suppose my upload page is publicly available. That means the server API endpoint that generates the short-lived key needs to be public as well. A malicious user could then just find the address of the API endpoint and hit it every time they want to upload something. The server has no way of knowing whether the request came from a real user on the upload page or from somewhere else.
If that concerns you, you would require your users to log in to your website somehow, and serve the API endpoint behind the same server-side authentication that handles your login process. Then only authenticated users would be able to upload files.
You might also want to look into S3 pre-signed URLs.
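For example, a rough sketch of the server side with the AWS SDK for JavaScript v3 and Express - the bucket, route, and requireLogin middleware are placeholders you'd swap for your own:
// Only logged-in users reach this route; it returns a short-lived pre-signed
// PUT URL so the browser can upload straight to S3 without holding AWS keys.
const express = require("express");
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

const app = express();
const s3 = new S3Client({ region: "us-east-1" });

// Stand-in for your real session/authentication middleware.
function requireLogin(req, res, next) {
  if (!req.user) return res.status(401).json({ message: "login required" });
  next();
}

app.get("/sign-upload", requireLogin, async (req, res) => {
  const key = `uploads/${req.user.id}/${req.query.name}`; // scope keys per user
  const url = await getSignedUrl(
    s3,
    new PutObjectCommand({ Bucket: "my-upload-bucket", Key: key }),
    { expiresIn: 60 } // the URL stops working after 60 seconds
  );
  res.json({ url });
});

app.listen(3000);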