Handle an S3 'miss' with a Lambda using the Serverless Framework - amazon-web-services

Noting the architecture design (taken from this deprecated AWS documentation page:
https://aws.amazon.com/blogs/compute/resize-images-on-the-fly-with-amazon-s3-aws-lambda-and-amazon-api-gateway/),
with each step described:
1. A user requests a resized asset from an S3 bucket through its static website hosting endpoint. The bucket has a routing rule configured to redirect to the resize API any request for an object that cannot be found.
2. Because the resized asset does not exist in the bucket, the request is temporarily redirected to the resize API method.
3. The user's browser follows the redirect and requests the resize operation via API Gateway.
4. The API Gateway method is configured to trigger a Lambda function to serve the request.
5. The Lambda function downloads the original image from the S3 bucket, resizes it, and uploads the resized image back into the bucket as the originally requested key.
6. When the Lambda function completes, API Gateway permanently redirects the user to the file stored in S3.
7. The user's browser requests the now-available resized image from the S3 bucket. Subsequent requests from this and other users will be served directly from S3 and bypass the resize operation. If the resized image is deleted in the future, the above process repeats and the resized image is re-created and placed back into the S3 bucket.
Steps 3-7 feel somewhat straightforward... but how do you configure a routing rule on an S3 bucket that redirects when an object is 'missing'?
Specifically, this needs to be done with the Serverless Framework.
In theory, an updated version of this concept is laid out in the CloudFormation template here: https://docs.aws.amazon.com/solutions/latest/serverless-image-handler/template.html but I'm not seeing any code in that template that configures an S3 bucket. Following it deeper into their GitHub repo, it seems they are deploying with the aws-sdk? https://github.com/aws-solutions/serverless-image-handler/blob/main/source/custom-resource/index.ts

It seems that you can configure the redirect on S3 itself. To add redirection rules to a bucket that already has static website hosting enabled, follow these steps:
1. Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Buckets list, choose the name of a bucket that you have configured as a static website.
3. Choose Properties.
4. Under Static website hosting, choose Edit.
5. In the Redirection rules box, enter your redirection rules in JSON.
In the S3 console you describe the rules using JSON. For JSON examples, see Redirection rules examples. Amazon S3 has a limit of 50 routing rules per website configuration.
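For example, here is a sketch of a rule matching the blog post's pattern, redirecting any 404 to a resize API (the API Gateway host name and key prefix below are placeholders, not values from the post):

[
    {
        "Condition": {
            "HttpErrorCodeReturnedEquals": "404"
        },
        "Redirect": {
            "Protocol": "https",
            "HostName": "abc123.execute-api.us-east-1.amazonaws.com",
            "ReplaceKeyPrefixWith": "prod/resize?key=",
            "HttpRedirectCode": "307"
        }
    }
]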
I'd recommend layering your automation here: separate infrastructure provisioning from application deployment, using Terraform and/or CloudFormation for the former.
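That said, to answer the original question for the Serverless Framework specifically: serverless.yml accepts raw CloudFormation under its resources block, so one way (a sketch; the bucket name and API host are placeholders) is to declare the website configuration and routing rule there:

# serverless.yml (sketch; bucket name and API host are placeholders)
resources:
  Resources:
    ImageBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-resize-bucket
        WebsiteConfiguration:
          IndexDocument: index.html
          RoutingRules:
            # Any request S3 would answer with a 404 is instead temporarily
            # redirected to the resize API (steps 1-2 of the architecture).
            - RoutingRuleCondition:
                HttpErrorCodeReturnedEquals: "404"
              RedirectRule:
                Protocol: https
                HostName: abc123.execute-api.us-east-1.amazonaws.com
                ReplaceKeyPrefixWith: prod/resize?key=
                HttpRedirectCode: "307"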

Related

AWS: Does my S3 bucket need a policy for files to be readable from my application?

I have a Laravel application that is hosted on AWS. I am using an S3 bucket to store files. I know that I have successfully connected to this bucket because when I upload files, they appear as I would expect inside the bucket's directories.
However, when I try to use the URL attached to the uploaded file to display it, I receive a 403 Forbidden error.
I have an IAM user set up named laravel which has the permission AmazonS3FullAccess applied to it, and I am using that key/secret.
I have the Object URL like so:
https://<BUCKET NAME>.s3.eu-west-1.amazonaws.com/<DIR>/<FILENAME>.webm
But if I try to access that either in my app (fed into an audio player) or just via the link directly, I get a 403. None of the tutorials I've followed to get this working involve Bucket Policies, but when I've googled the problems I'm having, Bucket Policy seems to come up.
Is there a single source of truth on how I am to do this? My AWS knowledge is very limited, but I am trying to get better!
When you request a URL of the form https://bucket.s3.amazonaws.com/dog/snoopy.png, that request is unauthenticated. Your S3 bucket policy does not allow unauthenticated access to the contents of the bucket, so that request is denied with a 403.
If you want your files to be downloadable by an unauthenticated/anonymous client, then create an S3 bucket policy to allow that.
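For example, a minimal public-read bucket policy looks like this (BUCKET-NAME is a placeholder; note that the bucket's Block Public Access settings must also permit public bucket policies for it to take effect):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::BUCKET-NAME/*"
        }
    ]
}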
Alternatively, your server can create signed URLs and share those with the client.
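As a sketch of the signed-URL route, here it is with the AWS SDK for JavaScript v3 rather than Laravel (the bucket and key are illustrative); Laravel's S3 filesystem driver exposes the same idea via Storage::temporaryUrl():

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "eu-west-1" });

// Returns a time-limited URL the browser can fetch without AWS credentials.
async function presignDownload(bucket: string, key: string): Promise<string> {
  const command = new GetObjectCommand({ Bucket: bucket, Key: key });
  return getSignedUrl(s3, command, { expiresIn: 900 }); // valid for 15 minutes
}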
Otherwise, your client's requests need to be authenticated, which means having correctly-permissioned credentials and using an AWS SDK.
Typically, back-end applications that you write that need access to data in S3 (or other AWS resources) would be given AWS credentials allowing the necessary access. If your back-end application runs in AWS then you would do that by launching the compute with an IAM role.
Typically, front-end applications would not have AWS credentials. Instead, they would authenticate to a back-end that then works with AWS resources on their behalf. There are other options, however, such as AWS Amplify apps.

AWS S3 AccessDenied on subfolder

I have created an S3 bucket and done the steps to enable static web hosting on it.
I have verified it works by going to the URL, which looks something like the following: https://my-bucket.s3.aws.com
I now want to put my web assets in a subfolder, so I put them in a folder I called foobar.
To access them I now have to explicitly enter the URL as follows:
https://my-bucket.s3.aws.com/foobar/index.html
So my question is: do I need to use some other service, such as CloudFront, so that I can reach the bucket with the URL https://my-bucket.s3.aws.com/foobar instead? That is, I don't want to have to explicitly say index.html at the end.
You can't do this with a CloudFront default root object for a subdirectory. The documentation says:
However, if you define a default root object, an end-user request for a subdirectory of your distribution does not return the default root object. For example, suppose index.html is your default root object and that CloudFront receives an end-user request for the install directory under your CloudFront distribution: http://d111111abcdef8.cloudfront.net/install/
CloudFront does not return the default root object even if a copy of index.html appears in the install directory.
But that same page also says
The behavior of CloudFront default root objects is different from the behavior of Amazon S3 index documents. When you configure an Amazon S3 bucket as a website and specify the index document, Amazon S3 returns the index document even if a user requests a subdirectory in the bucket. (A copy of the index document must appear in every subdirectory.) For more information about configuring Amazon S3 buckets as websites and about index documents, see the Hosting Websites on Amazon S3 chapter in the Amazon Simple Storage Service Developer Guide.
So check out that referenced guide, and the section on Configuring an Index Document in particular.
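For reference, enabling website hosting with an index document can also be done from the CLI in one line (the bucket name is a placeholder):

aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html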

Protect webfonts via Amazon CloudFront from download

Is it possible to protect data embedded on my website via Amazon CloudFront from hotlinking or other downloads? I am mainly interested in protecting webfonts from being downloaded.
Amazon CloudFront is connected to an S3 bucket
An S3 bucket policy controls the allowed domains for files served via CloudFront
Do you think that could work?
Since you have CloudFront set up in front of your S3 bucket, you can use CloudFront signed URLs to prevent downloads by the general public.
You can put your fonts in a folder called fonts, for example, and set up a separate cache behaviour in CloudFront for any path that contains /fonts/, and in there activate Restrict Viewer Access.
On your website, you will need some way to generate the signed URL only when your web page is loaded, and you can set a short expiry time on that URL.
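Here is a sketch of generating such a URL server-side with the AWS SDK for JavaScript v3; the distribution domain, key pair ID, and private-key source are all placeholders:

import { getSignedUrl } from "@aws-sdk/cloudfront-signer";

// Sign a CloudFront URL that expires a few minutes after the page loads.
const signedFontUrl = getSignedUrl({
  url: "https://d111111abcdef8.cloudfront.net/fonts/myfont.woff2",
  keyPairId: "K2ABC123EXAMPLE",                    // public key ID registered with CloudFront
  privateKey: process.env.CLOUDFRONT_PRIVATE_KEY!, // matching private key from your secret store
  dateLessThan: new Date(Date.now() + 5 * 60 * 1000).toISOString(), // 5-minute expiry
});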

AWS S3 sending download.txt file

I'm setting up an S3 bucket behind CloudFront that is meant to serve static assets. My problem is that requesting any directory path ending in / with no file name makes the browser download a download.txt of 0 bytes. I have my S3 bucket set up for Static Website Hosting and it is public, so I'm able to access my assets.
https://s3-bucket.domain.com/path/to/file.jpg -> get asset, working
https://s3-bucket.domain.com/path/to/file-bad-name -> Error status 403, working. Renders error.html from S3.
https://s3-bucket.domain.com/path/to/ -> sends download.txt, not working
How do I configure #3 to not send a download.txt and render an error page instead?
There are a few things happening there.
You need to map the path to a new origin if you want to point it to an S3 object.
Your cache behavior pattern does not have priority in CloudFront.
If you fix one or both of the above, it should work as expected.
I have my S3 bucket set up for Static Website Hosting and it is public
...but you selected the bucket from the dropdown list when defining the origin... yes?
You need to configure the origin domain name to use the web site hosting endpoint for the bucket.
When you configure your CloudFront distribution, for the origin, enter the Amazon S3 static website hosting endpoint for your bucket. This value appears in the Amazon S3 console, on the Properties page under Static Website Hosting. For example: http://bucket-name.s3-website-us-west-2.amazonaws.com
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html#concept_S3Origin_website
If you don't do this, and you created folders in the bucket using the S3 console, then what you are currently observing is the expected behavior, a side effect of the way the console creates those imaginary folders.
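In CloudFormation terms, the fix is an origin fragment like this sketch (bucket name and region are placeholders), pointing at the website endpoint as a custom origin rather than at the bucket's REST endpoint:

Origins:
  - Id: s3-website-origin
    # The website hosting endpoint, NOT bucket-name.s3.amazonaws.com
    DomainName: s3-bucket.s3-website-us-west-2.amazonaws.com
    CustomOriginConfig:
      OriginProtocolPolicy: http-only   # S3 website endpoints speak HTTP only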

Using Amazon EBS like S3

Is it possible to use EBS like S3? By that I mean can you allow users to download files from a link like you can in S3?
The reason for this is that my videos NEED to be on the same domain/server to work correctly. I am creating a virtual-reality video website; however, iOS does not support cross-origin resource sharing through WebGL (which is used to create VR).
Because of this, my S3 bucket file system will not work, as it will be classed as cross-origin. But looking into EBS briefly, it seems that it attaches to your instances as local storage, which would get past the cross-origin problem I am facing.
Would it simply be like a folder on my web server that could be reached at 'www.domain.com/ebs-file-system/videos/video.mp4'?
Thanks in advance for any comments.
Amazon S3 CORS
You can configure your Amazon S3 bucket to support Cross-Origin Resource Sharing (CORS):
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support in Amazon S3, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
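A minimal CORS configuration on the bucket might look like this (the allowed origin is a placeholder for your site's domain):

[
    {
        "AllowedOrigins": ["https://www.domain.com"],
        "AllowedMethods": ["GET"],
        "AllowedHeaders": ["*"],
        "MaxAgeSeconds": 3000
    }
]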
CloudFront Behaviours
Another option is to use Amazon CloudFront, which can present multiple systems as a single URL. For example, example.com/video could point to an S3 bucket, while example.com/stream could point to a web server. This should circumvent CORS problems.
See:
Format of URLs for CloudFront Objects
Values that You Specify When You Create or Update a Web Distribution
Worst Case
Worst case, you could serve everything via your EC2 instance. You could copy your S3 content to the instance (e.g. using the AWS Command-Line Interface (CLI) aws s3 sync command) and serve it to your users. However, this negates all the benefits that Amazon S3 provides.
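For example (the bucket and web-root paths are illustrative):

aws s3 sync s3://my-bucket/videos /var/www/html/videos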