I am trying to implement a Vue.js server(-less)-side rendered webshop-like site via Nuxt.js on AWS Lambda, backed with Cloudflare.
I prefer Cloudflare over CloudFront because of HTTP/3, image optimization, protection against attacks, Brotli, and some more features that Cloudflare provides out of the box.
Unfortunately, I couldn't find any resources on whether anyone has done this before and what to take care of to make it work properly.
Right now my setup is like:
User -> Route53 -> AWS API Gateway -> AWS Lambda
-> S3 (for static files)
-> another AWS Lambda for dynamic data from Elasticsearch indexes
I am not sure where to properly integrate Cloudflare.
I found blog posts and threads about:
Using Cloudflare Workers instead of AWS API Gateway
https://news.ycombinator.com/item?id=16747420
Creating a CNAME for the Lambda provided by CloudFront, but I am not sure if this triggers another round trip to CloudFront and additional cost? https://forums.aws.amazon.com/thread.jspa?threadID=261297
Connecting a subdomain to API Gateway
https://medium.com/@bobthomas295/combining-aws-serverless-with-cloudflare-sub-domains-338a1b7b2bd
Another solution could be to build the Nuxt.js app directly in a Cloudflare Worker, but I am not sure about the downsides of that solution, since CPU time is very limited in the Pro plan.
Furthermore, I've read an article about the need to secure the API Gateway against attackers by only allowing Cloudflare IPs.
Has any of you already set up Vue + Nuxt with Cloudflare? I am open to any other suggestions or ideas.
Thanks a lot!
Philipp
I am not sure where to properly integrate Cloudflare.
Assuming this is the crux of the question here, this is what it might look like using the notation you provided.
User -> Route53 -> Cloudflare -> AWS API Gateway -> AWS Lambda -> S3 -> Another lambda
The basic idea is that you'll want Cloudflare to be the first thing your DNS (Route53) resolves to, so it can serve cached content before requests ever reach your application, which in this case starts at API Gateway.
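To go with this, the IP lockdown the question mentions can be expressed as an API Gateway resource policy that only admits Cloudflare's published ranges. A sketch in Node.js; the two CIDR ranges are examples taken from Cloudflare's published list (always pull the current list from cloudflare.com/ips), and the ARN is a placeholder:

```javascript
// Sketch: build an API Gateway resource policy that only admits Cloudflare IPs.
// The ranges below are illustrative; fetch the live list from cloudflare.com/ips.
const cloudflareRanges = ["173.245.48.0/20", "103.21.244.0/22"];

function buildCloudflareOnlyPolicy(apiArn, ranges) {
  return {
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Principal: "*",
        Action: "execute-api:Invoke",
        Resource: apiArn,
        // Requests from any other source IP are implicitly denied.
        Condition: { IpAddress: { "aws:SourceIp": ranges } },
      },
    ],
  };
}

const policy = buildCloudflareOnlyPolicy(
  "arn:aws:execute-api:eu-west-1:123456789012:abcdef/*", // hypothetical API ARN
  cloudflareRanges
);
```

You would attach the resulting JSON to the API via the API Gateway console or an infrastructure-as-code tool.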
Related
I'm using API Gateway v2 and CloudFront on my system, and the Serverless Framework (with Compose) to manage everything.
How can I expose both services on the same domain (to avoid preflight requests and a few other internal requirements), with each accessible under a custom path?
Example:
https://foo.bar/app -> points to the CloudFront application
https://foo.bar/api -> points to the API Gateway
https://foo.bar -> redirects to CloudFront at /app initially, but later it will have its own SPA landing page.
Anything I can do? The only way we were able to configure this was by creating a Lambda@Edge function to handle requests and decide whether CloudFront or the API would be used, but this solution seems to waste resources unnecessarily...
Thanks.
You could use the behaviors and origins feature of CloudFront.
Have multiple origins, for example an S3 bucket and an API Gateway.
Then, based on the behavior, you can route to a specific origin:
The Default (*) behavior will point to S3.
The /api/* behavior will point to API Gateway.
Code: https://kuchbhilearning.blogspot.com/2022/10/add-cloudfront-behavior-and-origin.html
A more detailed explanation: https://kuchbhilearning.blogspot.com/2022/10/api-gateway-and-cloud-front-in-same.html
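The matching these behaviors perform can be sketched as plain logic: CloudFront evaluates path patterns in precedence order and falls back to the default behavior. A minimal illustration; the origin names are invented:

```javascript
// Sketch of CloudFront behavior matching: patterns are checked in precedence
// order and the first match wins, with Default (*) as the fallback.
// Origin names here are hypothetical.
const behaviors = [
  { pathPattern: "/api/*", origin: "api-gateway-origin" },
  { pathPattern: "*", origin: "s3-origin" }, // the Default (*) behavior
];

// Escape regex metacharacters in the literal parts of a glob pattern.
function escapeGlobPart(s) {
  return s.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
}

function resolveOrigin(path, behaviorList) {
  for (const b of behaviorList) {
    // Translate the glob-style pattern ("*" matches anything) to a regex.
    const re = new RegExp(
      "^" + b.pathPattern.split("*").map(escapeGlobPart).join(".*") + "$"
    );
    if (re.test(path)) return b.origin;
  }
  return null;
}
```

With these two behaviors, "/api/users" resolves to the API Gateway origin and "/index.html" falls through to S3.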
I'm new to AWS and just exploring possible architectures using tools like AWS Cognito, AWS CloudFront, and/or AWS API Gateway.
Currently, my app is deployed in an EC2 instance and here is the outline:
Frontend: a React app running on port 80. When a user goes to https://myapp.com, the request is directed to my-ec2-instance:80.
Backend: Node.js + Express running on port 3000. After the user loads the frontend in the browser, his interactions with the website send HTTP requests to https://myapp.com/api/*, which are routed to my-ec2-instance:3000.
I use nginx/OpenResty as a single entry point to my web app; it does authorization with AWS Cognito and then reverse-proxies the requests based on path.
Now, instead of managing an EC2 instance with the nginx/openresty service in it, I want to go serverless.
I plan to point my domain myapp.com at AWS CloudFront, and then CloudFront acts as the single entry point, replacing the functionality of nginx/OpenResty. It should do the following:
Authorization with AWS Cognito:
When a user first visits myapp.com, he is directed from AWS CloudFront to AWS Cognito to complete the sign-in step.
Path-based reverse proxy: I know this can be done; I can configure it from the CloudFront configuration page.
But for 1: can CloudFront do authorization with AWS Cognito? Is this the right way of using AWS CloudFront?
After reading the AWS docs and experimenting with CloudFront configurations, I started to think that CloudFront is not built for such a use case at all.
Any suggestions?
You mentioned "serverless", but you are using EC2, which is a server. You can use AWS Lambda (Node.js) for the backend and S3 for the frontend. AWS API Gateway has a built-in authorization feature where you can use AWS Cognito. CloudFront is for content delivery: content is cached in edge locations so it is delivered faster from the edge location nearest to the user.
You can follow the below steps to implement serverless concept in AWS.
Create the frontend and upload it to S3.
Configure AWS Cognito and grab the following:
UserPoolId: 'xxxx',
ClientId: 'xxxx',
IdentityPoolId: 'xxxx',
Region: 'xxxx'
Use aws-cognito-sdk.min.js to authenticate the user and get the JWT token; sample code can be found here. This JWT token needs to be passed in the header of each and every API call. If using AJAX, sample code is:
var xhr = new XMLHttpRequest();
xhr.open("POST", apiUrl); // open() must be called before setRequestHeader(); apiUrl is your API endpoint
xhr.setRequestHeader("Authorization", idToken);
xhr.send();
Configure AWS API Gateway and CloudFront - follow the documentation.
In the API Gateway configuration, select Cognito as the authorizer for those APIs that need authorized access.
Create AWS Lambda functions for the backend and link them to API Gateway.
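The token-passing in step 3 can be factored into a small wrapper so every API call carries the ID token automatically. A sketch; `withAuth` and its arguments are names invented for illustration:

```javascript
// Sketch: wrap a fetch-style client so each request carries the Cognito ID
// token in the Authorization header. "withAuth" is a hypothetical helper.
function withAuth(fetchImpl, idToken) {
  return function (url, options) {
    options = options || {};
    // Merge the Authorization header into any caller-supplied headers.
    const headers = Object.assign({}, options.headers, { Authorization: idToken });
    return fetchImpl(url, Object.assign({}, options, { headers }));
  };
}

// Usage (idToken would come from the Cognito sign-in step):
// const apiFetch = withAuth(fetch, idToken);
// apiFetch("/api/orders");
```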
PROBLEMS
It feels like your current problems are:
Requests to CloudFront will require a cookie, and CloudFront has only very limited capabilities for running code to verify them (via Lambda@Edge extensions).
It does not make sense to put a reverse proxy in front of CloudFront, since CloudFront itself already deploys your web resources to 20 or so global locations for you.
SOLUTION APPROACH
If you can separate web and API concerns you can solve your problem:
Make your Express web back end (used during local development) serve only static content
Use the reverse proxy and cookies only for API and OAuth requests
TOKEN HANDLER PATTERN
At Curity we have put together some related resources illustrating this pattern:
It is a tricky flow from a deployment viewpoint, though the idea is to just plug in the token handler components, so that your SPA and APIs require only simple code, while also using the best security.
AWS CODE EXAMPLE
Out of interest, a React sample of mine uses this pattern with Cognito and is deployed to CloudFront.
A few ideas.
Frontend:
Use S3 + CloudFront distribution.
About authentication: you can try using a Lambda function "linked" to the CloudFront distribution (Lambda@Edge), redirecting to Cognito.
Backend:
Deploy on Fargate, EC2, or whatever you prefer.
Put an Application Load Balancer (ALB) in front of the endpoint, so you can define rules with redirects, forwards, denies, etc.
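The ALB rules described here map onto the ELBv2 CreateRule API; a sketch of the parameters for a rule forwarding /api/* to a backend target group, with placeholder ARNs:

```javascript
// Sketch: an ALB listener rule forwarding /api/* to a backend target group.
// Both ARNs below are placeholders, not real resources.
const createRuleParams = {
  ListenerArn:
    "arn:aws:elasticloadbalancing:eu-west-1:123456789012:listener/app/my-alb/50dc6c495c0c9188/f2f7dc8efc522ab2",
  Priority: 10, // lower numbers are evaluated first
  Conditions: [{ Field: "path-pattern", Values: ["/api/*"] }],
  Actions: [
    {
      Type: "forward",
      TargetGroupArn:
        "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/backend/9e9b6c9d9c9d9e9f",
    },
  ],
};
```

You would pass this object to the ELBv2 client's createRule call (AWS SDK) or express the same rule in your infrastructure-as-code tool.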
I would like to put an AWS WAF in front of a web site served by CloudFront. I will need to update this WAF via automated calls though its API.
Where is this API documented?
I quickly found the Making HTTPS Requests to AWS WAF or Shield Advanced page, which states that
Many AWS WAF and Shield Advanced API actions require you to include
JSON-formatted data in the body of the request.
This is followed by a random example of how to insert an IP match condition rule.
I cannot believe that this is the only "documentation" available (making the REST interface hardly usable).
Here is the API documentation for WAF: http://docs.aws.amazon.com/waf/latest/APIReference/API_Operations_AWS_WAF.html
And this if you are using Python (boto3): https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
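As a concrete example of driving that API, every WAF (classic) write call must carry a ChangeToken obtained from a prior GetChangeToken call; here is a sketch of the UpdateIPSet parameters for inserting an IP match value (the IP set ID, token, and CIDR are placeholders):

```javascript
// Sketch: parameters for the WAF classic UpdateIPSet action. Every write call
// must carry a fresh ChangeToken obtained from GetChangeToken first.
// The ID, token, and CIDR values here are placeholders.
function buildUpdateIpSetParams(ipSetId, changeToken, cidr) {
  return {
    IPSetId: ipSetId,
    ChangeToken: changeToken,
    Updates: [
      {
        Action: "INSERT", // or "DELETE" to remove an existing descriptor
        IPSetDescriptor: { Type: "IPV4", Value: cidr },
      },
    ],
  };
}

const params = buildUpdateIpSetParams(
  "example-ipset-id",
  "example-change-token",
  "192.0.2.44/32"
);
```

The same structure applies whether you call the API from the JavaScript SDK or from boto3 (update_ip_set).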
If I have a static site on AWS S3 (and maybe using CloudFront) that's pretty cool, because it scales easily, and has zero-downtime deployments, because you're just updating static assets, and gets distributed to edge locations, woohoo!
But, if I want to have a contact form, or process a stripe payment. I need to run some backend code. So, how do I tell AWS that for GETs to certain routes, use S3 (or CloudFront), but if there's a form submit, direct that to this little Lambda function over here?
Could I use Route53 and direct everything at example.com/forms/... over to Lambda?
Route53 is just DNS, it doesn't do any routing based on the path. Since you are using CloudFront I believe you can use the CloudFront Behaviors feature to perform the routing you are talking about, like what is described in this blog post. Alternatively, just use a different subdomain for the dynamic parts of your web application like api.example.com for your API Gateway routes.
Let's say I need an API Gateway that is going to run Lambdas, and I want to build the best-performing globally distributed infrastructure. I will also use Cognito for authentication, and DynamoDB and S3 for user data and frontend statics.
My app is located at myapp.com
First, the user gets the static front end from the nearest location:
user ===> edge location at CloudFront <--- S3 at any region (with static front end)
After that, we need to communicate with API Gateway.
user ===> API Gateway ---> Lambda ---> S3 || Cognito || Dynamodb
API Gateway can be located in several regions, and even though it is distributed via CloudFront, each endpoint points to a Lambda located in a given region. Let's say I deploy an API at eu-west-1: if a request is sent from the USA, even though my API is on CloudFront, the Lambda it runs is located at eu-west-1, so latency will be high anyway.
To avoid that, I would need to deploy another API at us-east-1, along with all my Lambdas, and point that API to those Lambdas.
If I deploy one API for every single region, I would need one endpoint for each of them, and the frontend would have to decide which one to request. But how could we know which location is nearest?
The ideal scenario is a single global endpoint at api.myapp.com that goes to the nearest API Gateway, which runs the Lambdas located in that region too. Can I configure that using Route 53 latency-based routing with multiple A records pointing to each API Gateway?
If this is not right way to do this, can you point me in the right direction?
AWS recently announced support for regional API endpoints, which you can use to achieve this.
Below is an AWS blog post that explains how:
Building a Multi-region Serverless Application with Amazon API Gateway and AWS Lambda
Excerpt from the blog:
The default API endpoint type in API Gateway is the edge-optimized API
endpoint, which enables clients to access an API through an Amazon
CloudFront distribution. This typically improves connection time for
geographically diverse clients. By default, a custom domain name is
globally unique and the edge-optimized API endpoint would invoke a
Lambda function in a single region in the case of Lambda integration.
You can’t use this type of endpoint with a Route 53 active-active
setup and fail-over.
The new regional API endpoint in API Gateway moves the API endpoint
into the region and the custom domain name is unique per region. This
makes it possible to run a full copy of an API in each region and then
use Route 53 to use an active-active setup and failover.
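The Route 53 side of the active-active setup in the excerpt comes down to two latency-based records sharing one name, each pointing at a regional API endpoint. A sketch; the domain and execute-api hostnames are invented:

```javascript
// Sketch: two Route 53 latency-based records for the same name, one per
// regional API endpoint. Domain and target hostnames are illustrative.
function latencyRecord(region, setId, target) {
  return {
    Name: "api.myapp.com",
    Type: "CNAME",
    TTL: 60,
    SetIdentifier: setId, // distinguishes records that share one name
    Region: region,       // enables latency-based routing to this record
    ResourceRecords: [{ Value: target }],
  };
}

const records = [
  latencyRecord("eu-west-1", "api-eu", "abc123.execute-api.eu-west-1.amazonaws.com"),
  latencyRecord("us-east-1", "api-us", "def456.execute-api.us-east-1.amazonaws.com"),
];
```

Route 53 then answers each query with the record whose region has the lowest measured latency to the client, which gives you the single global endpoint the question asks for.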
Unfortunately, this is not currently possible. The primary blocker here is CloudFront.
MikeD@AWS provides the info on the AWS forums:
When you create a custom domain name it creates an associated CloudFront distribution for the domain name and CloudFront enforces global uniqueness on the domain name.
If a CloudFront distribution with the domain name already exists, then the CreateCloudFrontDistribution will fail and API Gateway will return an error without saving the domain name or allowing you to define its associated API(s).
Thus, there is currently (Jun 29, 2016) no way to get API Gateway in multiple regions to handle the same domain name.
AWS has provided no update on this since confirming the existence of an open feature request on July 4, 2016. See the AWS forum thread for updates.
Check out Lambda@Edge.
Q: What is Lambda@Edge? Lambda@Edge allows you to run code across AWS locations globally without provisioning or managing servers, responding to end users at the lowest network latency. You just upload your Node.js code to AWS Lambda and configure your function to be triggered in response to Amazon CloudFront requests (i.e., when a viewer request lands, when a request is forwarded to or received back from the origin, and right before responding back to the end user). The code is then ready to execute across AWS locations globally when a request for content is received, and scales with the volume of CloudFront requests globally. Learn more in our documentation.
Use case: minimizing latency for globally distributed users.
Q: When should I use Lambda@Edge? Lambda@Edge is optimized for latency-sensitive use cases where your end viewers are distributed globally. Ideally, all the information you need to make a decision is available at the CloudFront edge, within the function and the request. This means that use cases where you are looking to make decisions on how to serve content based on user characteristics (e.g., location, client device, etc.) can now be executed and served right from the edge in Node.js 6.10 without having to be routed back to a centralized server.
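For a feel of the programming model, here is a minimal viewer-request handler showing the event shape CloudFront hands your Node.js code; the country-based rewrite is only an illustrative decision:

```javascript
// Sketch: a Lambda@Edge viewer-request handler. CloudFront delivers the
// request at event.Records[0].cf.request; passing it to the callback lets
// the request continue toward the origin.
function handler(event, context, callback) {
  const request = event.Records[0].cf.request;
  // The viewer-country header is only present when CloudFront is configured
  // to forward it; the rewrite below is just an example decision.
  const country = request.headers["cloudfront-viewer-country"];
  if (country && country[0].value === "DE") {
    request.uri = "/de" + request.uri; // e.g. serve a localized path
  }
  callback(null, request);
}
// In a real deployment this would be exported as exports.handler.
```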