Protect webfonts served via Amazon CloudFront from being downloaded

Is it possible to protect data embedded on my website via Amazon CloudFront from hotlinking or other downloads? I am mainly interested in protecting webfonts from being downloaded. My current idea:
Amazon CloudFront is connected to an S3 bucket.
The S3 bucket policy controls which domains are allowed to fetch files via CloudFront.
Do you think that could work?
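For reference, the domain restriction described above is usually done with an aws:Referer condition in the bucket policy. A minimal sketch (bucket name and domain are placeholders; note that the Referer header is trivially spoofed, and CloudFront must be configured to forward it to S3 for the check to apply):

```python
import json

# Hypothetical hotlink-protection policy: only requests whose Referer
# header matches the allowed site may fetch objects from the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowRequestsFromMySite",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-font-bucket/*",
        "Condition": {"StringLike": {"aws:Referer": ["https://example.com/*"]}},
    }],
}
print(json.dumps(policy, indent=2))
```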

Since you have CloudFront set up in front of your S3 bucket, you can use CloudFront signed URLs to prevent downloads by the general public.
You can put your fonts in a folder called fonts, for example, set up a separate cache behavior in CloudFront for any path that matches /fonts/*, and activate Restrict Viewer Access for that behavior.
On your website, you will need some way to generate the signed URL only when your webpage is loaded, and you can set a short expiry time for the URL.
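A minimal sketch of generating such a signed URL server-side with botocore's CloudFrontSigner (the key pair ID, key file path, distribution domain, and font path are placeholders):

```python
import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign with the private key matching the public key registered in CloudFront.
    with open("cloudfront_private_key.pem", "rb") as f:
        private_key = serialization.load_pem_private_key(f.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("K2EXAMPLEKEYID", rsa_signer)  # hypothetical key ID

# Short-lived URL for a font behind the /fonts/* restricted behavior.
signed_url = signer.generate_presigned_url(
    "https://d1234example.cloudfront.net/fonts/myfont.woff2",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(minutes=5),
)
print(signed_url)
```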

Related

AWS S3 sending download.txt file

I'm setting up an S3 bucket behind CloudFront that is meant to serve static assets. My problem is that requesting any directory path ending in / with no file name makes the browser download a 0-byte download.txt. I have my S3 bucket set up for Static Website Hosting and it is public, so I'm able to access my assets.
1. https://s3-bucket.domain.com/path/to/file.jpg -> gets the asset, working
2. https://s3-bucket.domain.com/path/to/file-bad-name -> error status 403, working; renders error.html from S3
3. https://s3-bucket.domain.com/path/to/ -> sends download.txt, not working
How do I configure #3 to not send a download.txt and render an error page instead?
There are a few things happening there.
You need to map the path to a new origin if you want to point it to a specific S3 object.
Your cache behavior pattern may not have priority in CloudFront.
If you fix one of the above, or both, it should work as expected.
I have my S3 bucket set up for Static Website Hosting and it is public
...but you selected the bucket from the dropdown list when defining the origin... yes?
You need to configure the origin domain name to use the web site hosting endpoint for the bucket.
When you configure your CloudFront distribution, for the origin, enter the Amazon S3 static website hosting endpoint for your bucket. This value appears in the Amazon S3 console, on the Properties page under Static Website Hosting. For example: http://bucket-name.s3-website-us-west-2.amazonaws.com
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html#concept_S3Origin_website
If you don't do this, and you created folders in the bucket using the S3 console, then what you are currently observing is the expected behavior, a side effect of the way the console creates those imaginary folders.
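A minimal sketch of the distinction (bucket name and region are placeholders): the website hosting endpoint must be configured as a custom origin, because the REST endpoint the dropdown gives you does not serve index documents or custom error pages.

```python
# REST endpoint (what the console dropdown selects; wrong for this use case):
rest_origin = "my-bucket.s3.amazonaws.com"

# Static website hosting endpoint (what CloudFront needs here), defined as a
# custom origin in the distribution's Origins list:
origin = {
    "Id": "s3-website-origin",
    "DomainName": "my-bucket.s3-website-us-west-2.amazonaws.com",
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        # Website endpoints only speak HTTP, so CloudFront must connect over it.
        "OriginProtocolPolicy": "http-only",
    },
}
```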

How to create a session token for my file on AWS S3 if it is connected to another CDN provider?

I have some video files stored on AWS S3 and I want to secure these files by adding a session token with a specific expiry time to the video URLs, to prevent unauthorized access.
I know the S3 SDK can generate temporary credentials with CloudFront to achieve this.
However, if I connect S3 to another CDN provider such as Cloudflare, will these temporary credentials still work?
For example, my video file is stored on S3 -> http://files.video.com.s3.amazonaws.com/video.mp4
The Cloudflare CDN URL is -> http://files.video.com/video.mp4
If I generate temporary credentials for the file and access the URL -> http://files.video.com/video.mp4?token=4180da90a6973bc8bd801bfe49f04a&expiry=1526231040535
Will it work?
It sounds like you're referring to S3 presigned URLs. No, if your S3 bucket is private, Cloudflare will not be able to access your files, because it cannot generate presigned URLs for them. AWS CloudFront solves this with an Origin Access Identity; with third-party CDNs this is not possible.
There are two ways you could achieve better security:
1. Make your bucket public but restrict the allowed IPs for your S3 bucket to only Cloudflare IPs (see the sketch below).
2. Make your bucket private and use Cloudflare Workers to authorize its GET requests.
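A minimal sketch of option 1 (the bucket name is a placeholder, and only two of Cloudflare's published IP ranges are shown for illustration; the full list is at https://www.cloudflare.com/ips/):

```python
import json

# Partial list, for illustration only; keep this in sync with the CDN's
# published ranges.
CLOUDFLARE_RANGES = ["173.245.48.0/20", "103.21.244.0/22"]

# Allow GetObject only when the request originates from the CDN's IPs, so
# clients cannot bypass the CDN and hit S3 directly.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCdnIpsOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::files.video.com/*",
        "Condition": {"IpAddress": {"aws:SourceIp": CLOUDFLARE_RANGES}},
    }],
}
print(json.dumps(policy, indent=2))
```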

Hiding web content in S3

This is more of a theoretical question for AWS S3 website hosting.
Say I have a website hosted in S3. Obviously I want the content to be public, but I don't want people to be able to download the backend scripts, images, and CSS simply by changing the URL. I want to hide those folders, but if I deny GetObject access for them in the bucket policy, the application "breaks" because it can't reach those folders.
How can I best secure my backend content when it sits in an S3 bucket?
You need to access the website via CloudFront with restricted access to the origin, better known as an Origin Access Identity (OAI). This grants the CloudFront distribution, and nothing else, access to the S3 bucket.
More details can be found in the AWS docs: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-creating-oai
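A minimal sketch of the bucket policy this produces (the OAI ID and bucket name are placeholders): direct S3 URLs then return 403 while CloudFront can still fetch objects.

```python
import json

# Grant read access only to the CloudFront Origin Access Identity.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLE"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-site-bucket/*",
    }],
}
print(json.dumps(policy, indent=2))
```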

Moving files without breaking links in AWS s3

We have a website that uses an S3 bucket for storing different files (say, a policy repo). We want to reorganize the files into proper folders under the S3 bucket without breaking the links to these objects on our website. Is there a way to achieve this in S3?
You could put a CloudFront distribution in front of your S3 bucket and use Lambda@Edge to rewrite the request URLs, mapping them to the new folder paths without breaking the old URLs.
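A minimal Lambda@Edge sketch in Python (origin-request trigger; the /policies prefix and the .pdf filter are hypothetical), rewriting legacy paths before CloudFront forwards the request to S3:

```python
MOVED_PREFIX = "/policies"  # hypothetical new folder for the policy repo

def handler(event, context):
    # CloudFront hands Lambda@Edge the request in this event shape.
    request = event["Records"][0]["cf"]["request"]
    uri = request["uri"]
    # Rewrite legacy paths like /policy-2021.pdf -> /policies/policy-2021.pdf
    # so old links keep resolving after the objects move.
    if uri.endswith(".pdf") and not uri.startswith(MOVED_PREFIX):
        request["uri"] = MOVED_PREFIX + uri
    return request
```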
From Configuring a Webpage Redirect - Amazon Simple Storage Service:
If your Amazon S3 bucket is configured for website hosting, you can redirect requests for an object to another object in the same bucket or to an external URL.
You set the redirect by adding the x-amz-website-redirect-location property to the object metadata. The website then interprets the object as a 301 redirect. To redirect a request to another object, you set the redirect location to the key of the target object.
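A minimal sketch of setting that redirect on an existing object with boto3 (bucket and keys are placeholders); copying the object onto itself with MetadataDirective="REPLACE" is how you change its metadata in place:

```python
import boto3

s3 = boto3.client("s3")

# The old key clients still link to now answers with a 301 to the new key
# when served through the bucket's website endpoint.
s3.copy_object(
    Bucket="my-site-bucket",
    Key="old/path/policy.pdf",
    CopySource={"Bucket": "my-site-bucket", "Key": "old/path/policy.pdf"},
    WebsiteRedirectLocation="/new/path/policy.pdf",
    MetadataDirective="REPLACE",
)
```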

Does CloudFront support question marks in URLs to fetch a new resource

We are porting a website to an AWS CloudFront-backed infrastructure which serves static resources from an S3 bucket.
We access a resource like this:
http://example.com/static/foo.css?hash=1
Our CMS generates a new hash when a file changes (cache busting).
So we upload the new foo.css to the S3 bucket and access
http://example.com/static/foo.css?hash=2
But the old contents are still displayed. Is it cached in CloudFront?
Can this be avoided?
Or does the CMS need to be modified?
You have to configure your CloudFront distribution to forward query parameters and use them as part of the cache key. This is documented at https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html
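A minimal sketch of the relevant cache-behavior fragment (using the legacy ForwardedValues settings; newer distributions express the same thing through a cache policy), whitelisting the hash parameter from the example above:

```python
# Forward the "hash" query string to the origin and include it in the cache
# key, so ?hash=1 and ?hash=2 are cached as distinct objects.
cache_behavior_fragment = {
    "ForwardedValues": {
        "QueryString": True,
        "QueryStringCacheKeys": {"Quantity": 1, "Items": ["hash"]},
        "Cookies": {"Forward": "none"},
        "Headers": {"Quantity": 0},
    }
}
```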