We have some videos in an S3 bucket. They've been transcoded with AWS Elastic Transcoder into HLS format (.m3u8 playlists / .ts segments).
We want the users to be able to stream these videos on both a web app and a mobile app.
Now, we want to secure this streaming, so our videos won't get pirated.
So, our proposed solution is as follows:
Prevent public access to the S3 bucket
Create a CloudFront distribution with the bucket as the origin
Only allow access to this CDN via pre-signed URLs/cookies
For the web app: use pre-signed cookies (set by an endpoint on our backend that requires authentication), which works well with HLS, since the player fetches a new segment every few seconds (a rough sketch of such an endpoint follows below)
But now we don't know what to do with our mobile app: we can't use pre-signed cookies since there's no browser, and we can't use pre-signed URLs, since we'd need a new signed URL for every segment we fetch. Any suggestions and solutions are welcome.
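For reference, here is a minimal sketch of the core of the cookie-issuing endpoint idea for the web app (the key-pair ID, key file path, and distribution URL are placeholders, not our real values). CloudFront signed cookies are an RSA-SHA1 signature over a JSON policy, base64-encoded with CloudFront's character substitutions:

import base64
import json
import time

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "APKAEXAMPLE"                        # placeholder key-pair ID
RESOURCE = "https://d111.cloudfront.net/videos/*"  # placeholder distribution path

def cf_b64(data):
    # CloudFront's base64 variant: '+' -> '-', '=' -> '_', '/' -> '~'
    return (base64.b64encode(data).decode()
            .replace("+", "-").replace("=", "_").replace("/", "~"))

def make_signed_cookies(ttl_seconds=3600):
    # Custom policy: allow the whole videos/ path until the expiry time
    policy = json.dumps({"Statement": [{
        "Resource": RESOURCE,
        "Condition": {"DateLessThan": {"AWS:EpochTime": int(time.time()) + ttl_seconds}},
    }]}, separators=(",", ":"))
    with open("cloudfront_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    signature = key.sign(policy.encode(), padding.PKCS1v15(), hashes.SHA1())
    return {
        "CloudFront-Policy": cf_b64(policy.encode()),
        "CloudFront-Signature": cf_b64(signature),
        "CloudFront-Key-Pair-Id": KEY_PAIR_ID,
    }

The authenticated endpoint would set these three values as cookies scoped to the CloudFront domain, and the HLS player then sends them automatically with every playlist and segment request.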
For our similar use-case:
We used CloudFront signed URLs rather than S3 signed URLs, because an S3 signed URL is valid only at the object level, not the folder level.
For paid videos, security and access were managed by Lambda@Edge on viewer requests.
Although we used OAuth and a database lookup inside that Lambda, we surprisingly didn't face any bottlenecks on Lambda@Edge. For the future, we considered using Redis for faster access validation inside Lambda@Edge.
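A rough sketch of what such a viewer-request handler could look like (a reconstruction, not the actual code; verify_token is a hypothetical stand-in for the OAuth/database check):

def handler(event, context):
    # Lambda@Edge viewer-request event: the request CloudFront received
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})
    auth = headers.get("authorization", [{"value": ""}])[0]["value"]
    token = auth[len("Bearer "):] if auth.startswith("Bearer ") else None
    if token and verify_token(token):  # hypothetical OAuth/DB validation
        return request                 # let the request through to the origin
    return {                           # otherwise short-circuit with a 403
        "status": "403",
        "statusDescription": "Forbidden",
        "body": "Access denied",
    }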
Related
I have a user-provided input that will be used as the endpoint URL for bucket operations on an S3 bucket.
Is there a way to differentiate whether the URL is a REST API endpoint or a website endpoint?
I did read: https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteEndpoints.html
which mentions "Supports only GET and HEAD requests on objects" for a website endpoint.
However, I have come across cases where the other operations worked even with a website endpoint.
I am using Python boto3 for these APIs.
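One heuristic sketch, based only on the hostname patterns AWS documents for the two endpoint types (this is plain string matching, not an authoritative check):

from urllib.parse import urlparse

def looks_like_website_endpoint(url):
    # Website endpoints look like bucket.s3-website-us-west-1.amazonaws.com
    # or bucket.s3-website.eu-west-1.amazonaws.com; REST endpoints don't
    # contain the "s3-website" marker.
    host = urlparse(url).hostname or ""
    return ".s3-website" in host

print(looks_like_website_endpoint("http://b.s3-website-us-west-1.amazonaws.com"))  # True
print(looks_like_website_endpoint("https://s3.amazonaws.com/b/key"))               # False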
With only S3 you can't process uploaded data. Static website hosting has no logic or ability to process data; you're just storing files that the browser renders.
If you need logic, I'd suggest adding some Lambda functions behind a REST endpoint (see the sketch below). For your needs, you'll probably stay within the free tier.
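As a rough illustration of that suggestion (bucket and key names are placeholders), a Lambda behind an API Gateway proxy integration could accept the upload and write it to S3:

import base64
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # API Gateway proxy integrations base64-encode binary bodies
    body = (base64.b64decode(event["body"])
            if event.get("isBase64Encoded") else event["body"].encode())
    s3.put_object(Bucket="my-upload-bucket", Key="uploads/example", Body=body)
    return {"statusCode": 201}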
I need to secure my S3 bucket objects. In my web application I'm using the aws-sdk to upload media to an S3 bucket and get back an HTTP link to access that object. This link is public by default, and I want to secure it so that only authorized users can access the media. S3 allows making the object private, but then it won't let anyone with the link access the object.
This link will be accessed from a mobile app, where I don't want to use the aws-sdk. Instead, I want to execute some logic on the AWS side whenever someone tries to access the object's HTTP link.
What I would like to happen is: before the user gets access to the S3 object, some authorizer code runs (like a JWT token authorizer), and depending on the result the user is granted or denied access.
I'm currently looking into Amazon API Gateway. I believe it can be accessed via an HTTP link, and AWS Lambda could be used to secure it (that's where I would run my JWT authorizer). These APIs would then access S3 internally (a rough sketch of this idea follows below).
Could someone point me in the right direction, if this is at all possible?
If I could use the same JWT token issued by my web application and send it along with the request to Amazon API Gateway, that would be great.
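For illustration, a minimal sketch of the Lambda token authorizer idea (all names and the secret are placeholders; PyJWT is assumed to be bundled with the function):

import jwt  # PyJWT

SECRET = "replace-with-your-signing-secret"  # placeholder

def handler(event, context):
    # API Gateway TOKEN authorizers pass the Authorization header here
    token = event["authorizationToken"].removeprefix("Bearer ")
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        effect = "Allow"
    except jwt.InvalidTokenError:
        claims, effect = {}, "Deny"
    return {
        "principalId": claims.get("sub", "anonymous"),
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }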
I would make the bucket private and place a CloudFront distribution in front of it, using an origin access identity so that only CloudFront can access the S3 bucket directly.
Then, to provide security, I would use either CloudFront signed cookies or Lambda@Edge with custom JWT token validation.
The easiest solution to expose private objects in an S3 bucket is to create a pre-signed URL. Pre-signed URLs use the permissions of the entity that signs them to determine access, and are valid only for a limited duration. They can also be used to upload an object directly to S3 instead of proxying the upload through a Lambda function.
For download functionality and a smooth user experience, you can, for example, have a Lambda function that generates a pre-signed URL and returns it in an HTTP 302 response, which instructs the browser to automatically download the file from the new URL (sketch below).
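A minimal sketch of that 302 approach as a Lambda handler behind API Gateway (bucket name and route are placeholders):

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    key = event["pathParameters"]["key"]  # e.g. a /download/{key} route
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-bucket", "Key": key},
        ExpiresIn=300,  # the URL stays valid for 5 minutes
    )
    # 302 redirect: the client follows Location and downloads from S3
    return {"statusCode": 302, "headers": {"Location": url}}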
(Edit)
Following on from what I've stated in the comments on this answer: if you're proxying the upload/download of objects through services such as API Gateway or Lambda, you will be severely limited in the size of files you can move to and from S3. The payload size limit on API Gateway is 10 MB, and requests to Lambda are capped at 6 MB for synchronous invocations. If you want to upload something larger than 10 MB, you will need direct upload to S3, for which pre-signed URLs are the safest solution.
I know I am a bit late here, but I wanted to give my opinion in case someone has the same problem.
Your mobile app should communicate with a server app (backend app) for authentication and authorization. Let's say you deploy your server app in an AWS VPC. Then it's simple to manage file access by creating a bucket policy that allows just your server app (by IP, or by VPC) to access the bucket; the authorization part is handled in your application (sketch below).
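A simplified sketch of that bucket policy, applied with boto3 (the bucket name and IP are placeholders; a real policy would also need to exempt your own admin access):

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptBackend",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-bucket/*",
        # Deny reads unless they come from the backend's IP
        "Condition": {"NotIpAddress": {"aws:SourceIp": "203.0.113.10/32"}},
    }],
}
boto3.client("s3").put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))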
We need to save product images in AWS. There are two ways: upload from the frontend (website or mobile application) or from the backend.
On the frontend side we would need to store AWS credentials, which can be an issue. So we want to upload to AWS from the backend. The flow will be: the user selects an image and uploads it to the backend, and the backend uploads it to AWS.
Is this OK? What issues might arise?
There is nothing wrong with uploading to your backend, presumably an EC2 instance, and then having the EC2 instance upload the file to S3. That's a secure way of doing it and a method I often use.
However, you do not need to expose your AWS credentials to the browser if you would prefer to upload directly from the browser to S3; you would just need to add AWS Cognito to the equation.
Using Cognito you can get temporary credentials that allow you to do the upload without compromising security.
https://aws.amazon.com/sdk-for-browser/
You can also use a pre-signed S3 URL (see: Uploading Objects Using Pre-Signed URLs), which you generate on the backend and pass to the frontend app. The flow would then be something like this:
Request a pre-signed URL from your backend service
The frontend app PUTs the file to the signed URL
Signing the URL on the backend would look something like this (Ruby):
# generate a URL pre-signed for a PUT of the given object
require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new(region: 'us-east-1')
url = s3.bucket('my-bucket').object('name-of-file').presigned_url(:put)
And on the frontend you could simply do something like this using fetch:
fetch(signedUrl, { method: 'PUT', body: file })
I'm currently using an S3 bucket's REST API to access images for my company's application. I've written Lambda functions to help lighten the load on my application, but in order to use these functions I have to change my S3 bucket's settings to act as a static website host. When I do this, I can no longer access the original REST API. I would like to use both the REST API and the static website URL to make the transition from one to the other smooth, but I can't find any details about doing this. Is it even possible?
REST API request example:
https://s3.amazonaws.com/bucketname.mycompany.com/spacebackground.jpg
Static website example:
http://bucketname.mycompany.com.s3-website-us-west-1.amazonaws.com/spacebackground(1)Full.jpg
Once I change a bucket to static website hosting, I get a "permanent redirect" HTTP response.
Is it possible to utilize both?
You are missing the cloud service named CloudFront.
https://aws.amazon.com/cloudfront/
You can map both Lambda and your static website to a single domain. For example:
https://www.example.com/ --- will be the static site
https://www.example.com/api --- will belong to the API
Create a CloudFront distribution, configure your origins, and map each to a URL pattern (cache behavior).
You can also specify which URLs need to be signed and which to keep public.
It will be a breeze.
Hope it helps.
I had plans to use AWS API Gateway for three purposes. All of these endpoints are configured with custom domain names and AWS-issued SSL certificates, and I have CNAME records configured to match the CloudFront URLs.
api.my-domain.com (REST API calls that return JSON data) - Working as expected.
images.my-domain.com (Proxy pass through of binary image data from S3) - Working as expected.
videos.my-domain.com (DOH!... )
Unfortunately, dealing with videos I've run into a few issues. Smaller videos start to play but then generate an error. But that's not the main issue.
There is a 10 MB max payload size on the response data from an API integration endpoint, so I must come up with another solution for the videos.
I don't want to host the images or videos via CloudFront, and I want to use the same AWS-issued wildcard certificate *.my-domain.com on all the endpoints. I wanted to use API Gateway for the image requests because the images are small, won't exceed the limit, and can be cached at the API level.
A CNAME pointed at my video S3 bucket works, but it can't use the same SSL certificate, and I wanted all traffic to originate via the API Gateway rather than having requests go directly to the bucket endpoint.
So.. what are my options?
It seems like my best option will be to transcode the MP4 videos to HLS and host the S3 bucket via CloudFront. I hadn't really wanted to incur the charges of using CloudFront, but I don't see any better option for the design I want.
The most recent videos will be viewed occasionally, not in high demand; older videos will be viewed rarely, so hosting them in CloudFront seems like a waste.
The typical setup for video streaming in AWS is to stream the video stored in S3 through an AWS CloudFront RTMP distribution.
Going forward with CloudFront-hosted content from my S3 bucket to see how it works.
The reason I picked CloudFront overall was its tight integration with other AWS services and the ability to have complete control over the path/name of the assets in the S3 bucket, whereas with Vimeo you don't have much control over asset names.