AWS API Gateway caching SSL & DNS lookup to improve response times - amazon-web-services

I've been working on improving performance of my APIs hosted on various AWS services this week, and I've found that a significant portion of the time on some of the non-Lambda calls (namely ECS) is taken by DNS Lookup, TCP Handshake, and SSL Handshake. Below are the response times broken down by Postman.
The endpoint pointing to an ECS service does not cache the handshakes
Headers returned by ECS endpoint
The endpoint pointing to a Lambda does
Headers returned by Lambda endpoint
More details:
Both services are hosted in the same zone, and the API Gateway for both is also hosted in the same zone. The only difference is the route: for the Lambda it goes Route53 -> API Gateway -> Lambda integration, while for ECS it goes Route53 -> API Gateway -> Application Load Balancer (private VPC link, HTTP) -> ECS service, the service being an NGINX reverse proxy listening for HTTP and routing those requests to a Django container hosted in the same service.
Question:
I'm looking to enable or set up caching for the ECS service in order to cut response times down to below or close to 100 ms. I haven't found any documentation on this caching behavior or, by extension, how to set it up. How can I go about setting it up, and where can I read more about this behavior? Thanks

Does the Lambda request come second? The client will reuse the same connection (and handshake) for each sub/domain within a session, so the first request should always carry the penalty. DNS caching generally happens at the machine level, but the TLS/SSL handshake has to be performed on every new connection.
In both situations, it is API Gateway that terminates your public SSL; API Gateway then makes a separate internal request to Lambda or ECS. Depending on how you have set up your routing, you could also remove API Gateway from the ECS route and have NGINX terminate SSL and serve the certificate, removing one of the (slower) hops?
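To see where the time actually goes on each route, curl can break a request into the same phases Postman shows. A diagnostic sketch, where api.example.com is a placeholder for your API Gateway custom domain:

```shell
# Print per-phase timings for a single HTTPS request.
curl -s -o /dev/null https://api.example.com/health \
  -w 'DNS lookup:    %{time_namelookup}s
TCP connect:   %{time_connect}s
TLS handshake: %{time_appconnect}s
Total:         %{time_total}s
'
```

Passing the same URL twice in one curl invocation makes curl reuse the connection, so the second request should show near-zero DNS/TCP/TLS time; that quickly confirms whether the penalty is handshake cost rather than backend latency.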

Related

Request flow when AWS WAF, ALB associated and ALB 4xx

We have a server configured under a ALB associated with a WAF
As the underlying service receives requests for your web sites, it forwards those requests to AWS WAF for inspection against your rules. Once a request meets a condition defined in your rules, AWS WAF instructs the underlying service to either block or allow the request based on the action you define.
The above is from the AWS FAQ, and based on it my understanding is that requests first come to the ALB and are then forwarded to WAF.
My questions: in my environment I see some bad requests come in and return 400 on the ALB, but they are not counted in WAF. Does that mean bad requests are processed in the ALB and not forwarded to WAF? If I want to forward all ALB requests to WAF, is that possible?

How to upgrade AWS API Gateway endpoints without API interruptions

I have been tasked with upgrading the servers / endpoints that our AWS APIGateway uses to respond to requests. We are using a VPC link for the integration request. These servers are hosted on AWS Elastic Beanstalk.
We only use two resources/methods in our API, /middleware and middleware-dev-4, that go through this VPC link.
As our customers rely heavily on our API I can not easily swap out the servers. I can create new servers and point the APIs to those but I do not want any downtime in our API service. Can you recommend a way to do this API change without impacting our customers ?
I've seen several examples using canary releases, but they seem to pertain to Lambda functions rather than VPC links with EC2 servers as endpoints.
Edit --
AWS responded and they agreed that I should add the new servers to the target group in the network load balancer and deregister the old ones.
Do you need to update the servers or the endpoints?
If endpoints: API Gateway has stages, and your customers use endpoints published on some stage. Make your changes and publish a new stage; API Gateway will publish the new endpoints within seconds.
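Publishing to a new stage can be sketched with the AWS CLI; abc123 is a placeholder REST API id:

```shell
# Deploy the current API configuration to a new stage named "v2".
aws apigateway create-deployment \
  --rest-api-id abc123 \
  --stage-name v2 \
  --description "Upgraded backend endpoints"
```

The old stage keeps serving its existing endpoints, so customers can be migrated to the new stage URL at their own pace.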
If servers: API Gateway has little to do with it; it's up to you how you run the servers. Check AWS Elastic Beanstalk blue/green deployments:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
Because AWS Elastic Beanstalk performs an in-place update when you
update your application versions, your application might become
unavailable to users for a short period of time. To avoid this,
perform a blue/green deployment. To do this, deploy the new version to
a separate environment, and then swap the CNAMEs of the two
environments to redirect traffic to the new version instantly.
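The CNAME swap from the quoted docs is a single CLI call; the environment names below are placeholders:

```shell
# Swap the CNAMEs of the blue and green Elastic Beanstalk environments,
# redirecting traffic to the new version instantly.
aws elasticbeanstalk swap-environment-cnames \
  --source-environment-name my-app-blue \
  --destination-environment-name my-app-green
```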
By the way, check your DNS cache settings; DNS caching can lead to real issues and downtime. If you change a CNAME value in DNS, make sure your clients are not caching that value for a long time. First check the current TTL on that CNAME (you will need that time period later). Lower the TTL to a minimum, such as 1 or 5 minutes, then wait out the original TTL so every resolver picks up the new, shorter value. Then do your blue/green deployment, update the CNAME, and wait for the DNS cache time to expire.
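The target-group swap that AWS support suggested in the question's edit might look like this with the AWS CLI (the target group ARN and instance IDs are placeholders):

```shell
TG_ARN=arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/my-tg/abc123

# Register the new servers with the NLB's target group...
aws elbv2 register-targets --target-group-arn "$TG_ARN" \
  --targets Id=i-0newserver1 Id=i-0newserver2

# ...block until they pass health checks and take traffic...
aws elbv2 wait target-in-service --target-group-arn "$TG_ARN" \
  --targets Id=i-0newserver1 Id=i-0newserver2

# ...then deregister the old ones (they drain in-flight requests first).
aws elbv2 deregister-targets --target-group-arn "$TG_ARN" \
  --targets Id=i-0oldserver1 Id=i-0oldserver2
```

Because both old and new servers are in service during the overlap, no request is dropped.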

Allow request from API Gateway to private ALB

I have a public API gateway set up, I want to forward the requests from API Gateway to a private ALB in the VPC. On AWS Console, for API Gateway VPC link setup I could only select an NLB in the VPC.
Is there a reason why we can only route to NLB and not to ALB?
Is there a way I can route to private ALB from the API Gateway?
Currently AWS only supports connecting to NLB for VPC link integrations. They have a feature request in place to enable support for ALB as well. For now, you can do -
Public API --> VPC Link --> NLB --> ALB
In the target groups of the NLB, add the private IPs of the ALB. This way you can reap benefits of the NLB (TCP layer) and ALB (HTTPS).
Using static IP addresses for Application Load Balancers
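Wiring the NLB to the ALB by IP can be sketched as follows; the VPC id, ARN, and private IPs are placeholders, and note that an ALB's private IPs can change, so they need to be kept in sync (the linked article above covers that):

```shell
# Create an IP-type target group on the NLB pointing at the ALB's private IPs.
aws elbv2 create-target-group \
  --name alb-as-target \
  --protocol TCP \
  --port 443 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type ip

# Register the ALB's current private IPs (one per subnet/AZ) as targets.
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/alb-as-target/abc123 \
  --targets Id=10.0.1.15 Id=10.0.2.27
```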
The selected answer is outdated. It is possible to have API Gateway integrate, through HTTP, with an internal-facing ALB by using a VPC Link and private resource integration.
For step by step details, see my answer on another question: https://stackoverflow.com/a/67413951/2948212
edit: I see I was confusing this post with another one... I believe my answer still adds value though, so I am leaving it (I thought this specified REST API Gateways and not HTTP API Gateways, but it does not).
Answer
While #diegosasw's answer is valid and useful, it is for AWS HTTP API Gateways, not AWS REST API Gateways.
With that being said, they are correct in saying it is possible! Please see the following AWS documentation regarding how to accomplish this: https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-application-load-balancers/
Please note one particular downside of AWS's documented approach: it requires a public ALB. Of course this is not ideal, though one can still harden their ALB so that it only accepts traffic originating from the REST API Gateway. If this is not acceptable for the existing use case, then #Suraj Bhatia's answer above must be followed (for REST API integrations, at least). If HTTP Gateways are acceptable, then #diegosasw's answer is the better approach to take due to it being simpler to manage and still allowing for a private ALB 🙂
For posterity, AWS's documentation states the following:
Note: The following procedure assumes two things: you have access to a public Application Load Balancer and its DNS name, and you have an API Gateway REST API resource with an HTTP method.
In the API Gateway console, choose the API you want to integrate with the Application Load Balancer.
In the Resources pane, for Methods, choose the HTTP method that your API uses.
Choose Integration Request.
In the Integration Request pane, for Integration Type, choose HTTP.
Note: To pass the entire API request and its parameters to the backend Application Load Balancer, create one of the following instead: an HTTP proxy integration or an HTTP custom integration. For more information, see Set up HTTP integrations in API Gateway.
In the Endpoint URL field, enter either the Application Load Balancer's default DNS name or its custom DNS name, then add the configured protocol of its listener. For example, an Application Load Balancer that's configured with an HTTPS listener on port 8080 requires the following endpoint URL format: https://domain-name:8080/
Important: Make sure that you create an HTTP or HTTPS listener for the Application Load Balancer using the port and listener rules of your choice. For more information, see Listeners for your Application Load Balancers. For an Application Load Balancer configured with an HTTPS listener, the associated certificate must be issued by an API Gateway-supported certificate authority. If you have to use a certificate that's self-signed or issued by a private certificate authority, then set insecureSkipVerification to true in the integration's tlsConfig.
Choose Save.
Deploy the API.
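The console steps above map to a single put-integration call; a sketch assuming placeholder API and resource IDs and an HTTP proxy integration:

```shell
# Point the method at the ALB's HTTPS listener as an HTTP proxy integration.
aws apigateway put-integration \
  --rest-api-id abc123 \
  --resource-id def456 \
  --http-method ANY \
  --type HTTP_PROXY \
  --integration-http-method ANY \
  --uri https://my-alb-dns-name:8080/

# Deploy so the change takes effect on the stage.
aws apigateway create-deployment --rest-api-id abc123 --stage-name prod
```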

Is it possible to open a web page via AWS Lambda functions?

I'm curious whether it is possible to load a web page via AWS Lambda functions.
I mean, I would like to open a webpage like www.something.com/home, which makes a request to an AWS Lambda function that opens/gets resources from www.i-would-like-to-hide-this-url.com/home, while the URL remains www.something.com/home.
So can I use AWS as a proxy for the case above?
Yes, you can do it with CloudFront using a custom origin. It will work as a reverse proxy for your customers.
A custom origin is an HTTP server, for example, a web server. The HTTP server can be an Amazon Elastic Compute Cloud (Amazon EC2) instance or an HTTP server that you manage privately. An Amazon S3 origin configured as a website endpoint is also considered a custom origin.
When you use a custom origin that is your own HTTP server, you specify the DNS name of the server, along with the HTTP and HTTPS ports and the protocol that you want CloudFront to use when fetching objects from your origin.
Using Amazon EC2 or Other Custom Origins
Or you can do it with ELB and a reverse proxy on EC2. But in this case you will be responsible for this reverse proxy.
Maybe it is even possible to do it with Lambda if you code the "reverse proxy" yourself, but I guess it is not exactly recommended.
Typically you host the static assets (HTML/JS/CSS/images) in S3, you front Lambda with API Gateway, and your web page makes HTTP/REST requests to API Gateway, which forwards them to your Lambda. Lambda itself does not typically serve the static assets. If you need SSL, you add CloudFront. Example here.

Disable http redirection to https

I'm developing a little service using Lambda functions which returns a "Fact of the day" in your CLI using curl.
First, I developed the business logic, then deployed and created the Lambda using Serverless.
Second, I bought a domain using AWS Route 53, provisioned a certificate, and routed the domain using a Custom Domain Name on API Gateway.
At the moment, if you visit https://domain.io the service works as intended, but if you call curl domain.io it outputs:
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>CloudFront</center>
</body>
</html>
My goal is to get the service running without SSL (or the redirect) by calling curl domain.io.
Is it possible to avoid the redirection? Or can you create an API custom domain name without a certificate?
Currently, if I call curl -L domain.io it follows the redirect, but that's not the solution I'm looking for.
Thank you!
Remove the custom domain configuration from API Gateway.
Wait a few minutes for API Gateway to release the custom domain in the AWS Edge Network (there isn't a way to determine when this is complete, but you'll get an error on one of the subsequent steps until it is; 20 minutes should be sufficient).
Create a CloudFront distribution, using the generic ...execute-api...amazonaws.com domain name assigned to your API stage.
For Origin Protocol Policy, select HTTPS Only.
Set the Origin Path to your stage prefix (e.g. /prod or /v1) -- whatever you set up as the stage prefix.
Set the Viewer Protocol Policy to HTTP and HTTPS.
Set the Minimum TTL and Default TTL to 0.
Set the Alternate Domain Name for the distribution to your custom domain.
If you want SSL to optionally work on your custom domain, associate an ACM certificate with the CloudFront distribution.
Change your DNS entry to point to the *.cloudfront.net hostname assigned to your distribution.
Wait for the CloudFront distribution state to change from In Progress to Deployed.
Test.
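The distribution described in the steps above can be sketched as a config fragment; the API id, region, stage, and domain are placeholders, and the JSON is abridged (a real DistributionConfig requires more fields):

```shell
# Key fields of the CloudFront distribution config; pass the completed JSON to
# `aws cloudfront create-distribution --distribution-config file://dist.json`.
cat > dist.json <<'EOF'
{
  "Origins": [{
    "Id": "api",
    "DomainName": "abc123.execute-api.us-east-1.amazonaws.com",
    "OriginPath": "/prod",
    "CustomOriginConfig": { "OriginProtocolPolicy": "https-only" }
  }],
  "DefaultCacheBehavior": {
    "TargetOriginId": "api",
    "ViewerProtocolPolicy": "allow-all",
    "MinTTL": 0,
    "DefaultTTL": 0
  },
  "Aliases": ["domain.io"]
}
EOF
```

"allow-all" is the Viewer Protocol Policy that accepts both HTTP and HTTPS, and "https-only" keeps the CloudFront-to-API-Gateway hop encrypted as API Gateway requires.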
This seems like a lot of effort to enable HTTP against API Gateway, but it is necessary, because API Gateway was specifically designed not to support HTTP -- it only works with HTTPS, because that's a best-practice for APIs, generally.
Q: Can I create HTTPS endpoints?
Yes, all of the APIs created with Amazon API Gateway expose HTTPS endpoints only. Amazon API Gateway does not support unencrypted (HTTP) endpoints.
https://aws.amazon.com/api-gateway/faqs/
CloudFront is commonly known as a CDN, but it is in fact something of a Swiss Army knife of custom HTTP request manipulation, and this is a case of that.
Once you verify your behavior, you can optionally increase the Default TTL in CloudFront, which will cause it to cache responses for up to that value in seconds, reducing your costs by sending fewer actual requests to API Gateway and replaying cached responses to the callers.
This setup differs from what you have, now, because you are in control of the CloudFront distribution, instead of API Gateway... so you can customize it in ways that API Gateway doesn't allow when it is in control.