Disable HTTP redirection to HTTPS - amazon-web-services

I'm developing a little service using Lambda functions that returns a "Fact of the day" in your CLI via curl.
First, I developed the business logic, deployed it, and created the Lambda using Serverless.
Second, I bought a domain using AWS Route 53, provisioned a certificate, and routed the domain using a Custom Domain Name on API Gateway.
At the moment, if you visit https://domain.io the service works as intended, but if you call curl domain.io it outputs:
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>CloudFront</center>
</body>
</html>
My goal is to get the service running without SSL (or a redirect) by calling curl domain.io.
Is it possible to avoid the redirection? Or can you create an API custom domain name without a certificate?
Currently, if I call curl -L domain.io it will follow the redirect, but that's not the solution I'm looking for.
Thank you!

Remove the custom domain configuration from API Gateway.
Wait a few minutes for API Gateway to release the custom domain in the AWS Edge Network (there isn't a way to determine when this is complete, but you'll get an error on one of the subsequent steps until it is. 20 minutes should be sufficient).
Create a CloudFront distribution, using the generic ...execute-api...amazonaws.com domain name assigned to your API stage.
For Origin Protocol Policy, select HTTPS Only.
Set the Origin Path to your stage prefix (e.g. /prod or /v1) -- whatever you set up as the stage prefix.
Set the Viewer Protocol Policy to HTTP and HTTPS.
Set the Minimum TTL and Default TTL to 0.
Set the Alternate Domain Name for the distribution to your custom domain.
If you want SSL to optionally work on your custom domain, associate an ACM certificate with the CloudFront distribution.
Change your DNS entry to point to the *.cloudfront.net hostname assigned to your distribution.
Wait for the CloudFront distribution state to change from In Progress to Deployed.
Test.
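For reference, the distribution settings described above look roughly like this when expressed with boto3. This is a minimal sketch rather than a drop-in script: the alias, the execute-api hostname, the stage path, and the caller reference are placeholders, and the optional ACM certificate association is omitted.
import time
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "HTTP-capable front end for API Gateway",
        "Enabled": True,
        "Aliases": {"Quantity": 1, "Items": ["domain.io"]},  # your custom domain
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "api-gateway",
                # the generic hostname assigned to your API stage
                "DomainName": "xxxxxxxxxx.execute-api.us-east-1.amazonaws.com",
                "OriginPath": "/prod",  # your stage prefix
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",  # Origin Protocol Policy: HTTPS Only
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "api-gateway",
            "ViewerProtocolPolicy": "allow-all",  # viewers may use HTTP or HTTPS
            "MinTTL": 0,
            "DefaultTTL": 0,
            "ForwardedValues": {"QueryString": True, "Cookies": {"Forward": "none"}},
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
        },
    }
)
# The *.cloudfront.net hostname to point your DNS entry at:
print(response["Distribution"]["DomainName"])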
This seems like a lot of effort to enable HTTP against API Gateway, but it is necessary, because API Gateway was specifically designed not to support HTTP -- it only works with HTTPS, because that's a best-practice for APIs, generally.
Q: Can I create HTTPS endpoints?
Yes, all of the APIs created with Amazon API Gateway expose HTTPS endpoints only. Amazon API Gateway does not support unencrypted (HTTP) endpoints.
https://aws.amazon.com/api-gateway/faqs/
CloudFront is commonly known as a CDN, but it is in fact something of a Swiss Army knife of custom HTTP request manipulation, and this is a case of that.
Once you verify your behavior, you can optionally increase the Default TTL in CloudFront, which will cause it to cache responses for up to that value in seconds, reducing your costs by sending fewer actual requests to API Gateway and replaying cached responses to the callers.
This setup differs from what you have now, because you are in control of the CloudFront distribution instead of API Gateway... so you can customize it in ways that API Gateway doesn't allow when it is in control.

Related

Can I get an example of how to connect a Lambda function to a domain name?

I've wasted about 12 hours going in circles on what seems like it should be simple:
I am just trying to make a simple static landing page in Lambda and hook the root of a domain to it.
The landing page works; API Gateway didn't at first because AWS doesn't seem to set permissions properly by default ("internal server error" with API Gateway and Lambda on AWS), but now the gateway link works.
So the next steps were the following:
add a custom domain name in API Gateway
add the API mapping in the custom domain name
in Route 53, create a wildcard certificate with *.domain.com and domain.com
create an A record that points to the API Gateway with domain.com
create a CNAME record that points to the A record
and I get a 403 error with absolutely nothing in the log. I log both 'default' and '$default' stages in API Gateway.
I read https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-403-error-lambda-authorizer/ which is all about looking at what's in the logs...
and I find the doc is both everywhere and nowhere because it's built as chunks of 'do this' and 'do that' without ever painting a whole picture of how each piece is connected to the other, or any graph with the hierarchy of services, etc. Reminds me of code that works only when you follow the example documented and breaks otherwise.
I'm sure I'm doing something wrong, but given the lack of logs and lack of cohesive documentation, I have no idea about the problem.
Not to mention that HTTP doesn't even connect, just HTTPS.
Can anyone outline the steps needed to achieve this? essentially: [http|https]://(www).domain.com -> one lambda function
You cannot use API Gateway for an HTTP request; it only supports HTTPS.
From the Amazon API Gateway FAQs (emphasis mine):
Q: Can I create HTTPS endpoints?
Yes, all of the APIs created with Amazon API Gateway expose HTTPS endpoints only. Amazon API Gateway does not support unencrypted (HTTP) endpoints. By default, Amazon API Gateway assigns an internal domain to the API that automatically uses the Amazon API Gateway certificate. When configuring your APIs to run under a custom domain name, you can provide your own certificate for the domain.
You can use CloudFront to automatically redirect HTTP to HTTPS. The article "How do I set up API Gateway with my own CloudFront distribution?" provides a pretty simple walkthrough of connecting an API Gateway to CloudFront (you can skip the API Gateway portion and use the one you created). The important thing you'll need to do that is not in that document is to set the Viewer Protocol Policy to Redirect HTTP to HTTPS.
If you truly need HTTP traffic, you're probably going to need to go with an Application Load Balancer (ALB).
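If plain HTTP really is a requirement, the ALB route looks roughly like the boto3 sketch below. The function ARN, subnets, and security group are placeholders, and the Lambda itself would need to return the ALB-style response (statusCode, headers, body) rather than relying on API Gateway.
import boto3

elbv2 = boto3.client("elbv2")
lambda_client = boto3.client("lambda")

function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-landing-page"

# Target group of type 'lambda' -- no port or VPC is needed for Lambda targets.
tg = elbv2.create_target_group(Name="landing-page", TargetType="lambda")
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Allow the load balancer to invoke the function.
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="alb-invoke",
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
    SourceArn=tg_arn,
)
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": function_arn}])

# Internet-facing ALB with a plain HTTP listener forwarding to the Lambda target group.
lb = elbv2.create_load_balancer(
    Name="landing-page",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholders
    SecurityGroups=["sg-0123456789abcdef0"],          # placeholder
)
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)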

Is it possible to use the Amazon Web Application Firewall with an application that is not hosted on AWS instances?

I'm new to AWS WAF and got stuck setting it up for an application hosted on a dedicated server. I couldn't find any information on how to set it up without migrating to AWS servers, but I did find that WAF integrates with CloudFront. Even so, I found only a little information explaining how to integrate this CDN with my web application. So, the main question is:
Is it possible to use AWS WAF with an application hosted on a dedicated server? And if it is possible, can you provide some guides and/or docs for setting it up?
Yes, you can use WAF with a server outside AWS.
WAF works with CloudFront, and CloudFront does not require the origin server to be in the AWS ecosystem.
When you create a distribution, you specify where CloudFront sends requests for the files. CloudFront supports using several AWS resources as origins. For example, you can specify an Amazon S3 bucket or a MediaStore container, a MediaPackage channel, or a custom origin, such as an Amazon EC2 instance or your own HTTP web server. (emphasis added)
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html
Configuring CloudFront to work with your external server is no different than configuring it to work with a server in EC2. Your DNS entry (e.g. www.example.com) changes to point to CloudFront, and CloudFront connects to your server using a new name that you create (e.g. origin.example.com). CloudFront proxies requests through to your server, unless the edge location handling a given request happens to have access to a copy of the same resource that it cached while handling a previous request for the same page -- that's how CloudFront gets your content, by caching it as it handles requests that are passing through. (You don't pre-load any content into CloudFront.) If CloudFront has a cached copy, your server sees nothing, and CloudFront returns the object to the browser from its cache. But CloudFront isn't strictly a CDN, even though they market it that way. It is a global network of reverse proxies and high-reliability/low-latency transport.
You'll want to take steps to ensure that the web server rejects requests that didn't come through CloudFront. See Using Custom Headers to Restrict Access to Your Content on a Custom Origin, as well as the list of CloudFront IP addresses, which you could use in your web server's firewall.
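As an illustration, if the origin happened to be a Python/Flask application, the custom-header check could be as small as the sketch below; the header name X-Origin-Secret and its value are arbitrary examples that you would also configure as an Origin Custom Header on the CloudFront distribution.
from flask import Flask, abort, request

app = Flask(__name__)

@app.before_request
def require_cloudfront():
    # Reject anything that did not come through CloudFront (and therefore WAF).
    if request.headers.get("X-Origin-Secret") != "long-random-value":
        abort(403)

@app.route("/")
def index():
    return "hello from the origin"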
Once you have your site working through CloudFront, all you do is activate WAF on the distribution. CloudFront is very tightly integrated with WAF so that is a very simple change, once you have your WAF rules set up.

CNAME to AWS Service - Browser Not Accepting Certificate

I am trying to access an AWS service directly from the browser -- specifically the SNS service. I want to be able to post a message directly to an SNS topic, but using a CNAME record so I can control which region the browser ultimately goes to (sns.mydomain.com -> sns.us-east-1.amazonaws.com | sns.eu-west-1.amazonaws.com, depending on the requester's region).
My issue is that if I make an HTTPS request to my aliased endpoint, the returned certificate will not match my hostname and the browser will refuse to work with it. And while I can get around this by making only HTTP requests, the browser will refuse to make an HTTP request from a secure origin (a site served over HTTPS).
Is it possible to have a CNAME point to an AWS service in the way that I'm trying to do it?
Ultimately, I'm trying to avoid locking the client application in the browser into an AWS region.
Is it possible to have a CNAME point to an AWS service in the way that I'm trying to do it?
No. You're hitting up against a central feature of HTTPS verification: the Common Name of the certificate, or one of its SANs (Subject Alternative Names), must match the hostname being requested. If it weren't so, HTTPS would not be validating that the server is who it claims to be.
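You can reproduce the failure outside the browser with a few lines of Python: connect to the real regional endpoint but verify it against the CNAME you wanted to use (the hostnames are the ones from the question), and certificate verification rejects the mismatch just as the browser does.
import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("sns.us-east-1.amazonaws.com", 443)) as sock:
    try:
        # The certificate presented is for AWS's own name, not sns.mydomain.com.
        ctx.wrap_socket(sock, server_hostname="sns.mydomain.com")
    except ssl.SSLError as err:
        print("rejected:", err)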
Ultimately, I'm trying to avoid locking the client application in the browser into an AWS region.
That's a fine goal. Instead of doing so at the DNS layer, why not create an endpoint or configuration setting that supplies the region or regions to use? A smart client could even iterate through regions in the case of failures that appear to be regional outages, which is somewhat better than a CNAME that you still have to fix when a region goes down.
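A sketch of that pattern, using boto3 for brevity (a browser client would do the same thing with the AWS JavaScript SDK); the region list, account ID, and topic name are placeholders:
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

REGIONS = ["us-east-1", "eu-west-1"]  # delivered via configuration, not hard-coded DNS

def publish_message(message):
    last_error = None
    for region in REGIONS:
        sns = boto3.client("sns", region_name=region)
        try:
            return sns.publish(
                TopicArn=f"arn:aws:sns:{region}:123456789012:my-topic",  # per-region topic
                Message=message,
            )
        except (ClientError, EndpointConnectionError) as err:
            last_error = err  # looks regional; fall through and try the next region
    raise last_error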

Regional/Edge-optimized API Gateway VS Regional/Edge-optimized custom domain name

This does not make sense to me at all. When you create a new API Gateway you can specify whether it should be regional or edge-optimized. But then again, when you are creating a custom domain name for API Gateway, you can choose between the two.
Worst of all, you can mix and match them!!! You can have a regional custom domain name for an edge-optimized API Gateway and it's absolutely meaningless to me!
Why these two can be regional/edge-optimized separately? And when do I want each of them to be regional/edge-optimized?
Why these two can be regional/edge-optimized separately?
Regional and Edge-Optimized are deployment options. Neither option changes anything fundamental about how the API is processed by the AWS infrastructure once the request arrives at the core of the API Gateway service or how the services behind API Gateway ultimately are accessed -- what changes is how the requests initially arrive at AWS and are delivered to the API Gateway core for execution. More about this, below.
When you use a custom domain name, your selected API stage is deployed a second time, on a second endpoint, which is why a second deployment-type selection has to be made.
Each endpoint has the characteristics of its deployment type, whether regional or edge-optimized. The original deployment type of the API itself does not impact the behavior of the API if deployed with a custom domain name, and subsequently accessed using that custom domain name -- they're independent.
Typically, if you deploy your API with a custom domain name, you wouldn't continue to use the deployment endpoint created for the main API (e.g. xxxx.execute-api.{region}.amazonaws.com), so the initial selection should not matter.
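To make that second selection concrete, here is a rough boto3 sketch of creating a custom domain name with each endpoint type (you would pick one or the other; the domain and certificate ARNs are placeholders). An edge-optimized domain requires its ACM certificate in us-east-1 because it is fronted by the CloudFront edge network, while a regional domain uses a certificate from the API's own region.
import boto3

apigw = boto3.client("apigateway")

# Regional custom domain -- certificate in the same region as the API.
apigw.create_domain_name(
    domainName="api.example.com",
    regionalCertificateArn="arn:aws:acm:eu-west-1:123456789012:certificate/abc",
    endpointConfiguration={"types": ["REGIONAL"]},
)

# ...or an edge-optimized custom domain -- certificate must be in us-east-1.
apigw.create_domain_name(
    domainName="api.example.com",
    certificateArn="arn:aws:acm:us-east-1:123456789012:certificate/def",
    endpointConfiguration={"types": ["EDGE"]},
)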
And when do I want each of them to be regional/edge-optimized?
If you're using a custom domain name, then, as noted above, your original deployment selection for the API as a whole has no further impact when you use the custom domain.
Edge-optimized endpoints were originally the only option available. If you don't have anything on which to base your selection, this choice is usually a reasonable one.
This option routes incoming requests through the AWS "Edge Network," which is the CloudFront network, with its 100+ global edge locations. This does not change where the API Gateway core ultimately handles your requests -- they are still ultimately handled within the same region -- but the requests are routed from all over the world into the nearest AWS edge, and they travel from there on networks operated by AWS to arrive at the region where you deployed your API.
If the clients of your API Gateway stage are globally dispersed, and you are only deploying your API in a single region, you probably want an edge-optimized deployment.
The edge-optimized configuration tends to give you better global responsiveness, since it tends to reduce the impact of network round trips, and the quality of the transport is not subject to as many of the vagaries of the public Internet because the request covers the least amount of distance possible before jumping off the Internet and onto the AWS network. The TCP handshake and TLS are negotiated with the connecting browser/client across a short distance (from the client to the edge) and the edge network maintains keep-alive connections that can be reused, all of which usually works in your favor... but this optimization becomes a relative impairment when your clients are always (or usually) calling the API from within the AWS infrastructure, within the same region, since the requests need to hop over to the edge network and then back into the core regional network.
If the clients of your API Gateway stage are inside AWS and within the same region where you deployed the API (such as when the API is being called by other systems in EC2 within the region), then you will most likely want a regional endpoint. Regional endpoints route requests through less of the AWS infrastructure, ensuring minimal latency and reduced jitter when requests are coming from EC2 within the same region.
As a side-effect of routing through the edge network, edge-optimized endpoints also provide some additional request headers that you may find useful, such as CloudFront-Viewer-Country: XX which attempts to identify the two-digit country code of the geographic location of the client making the API request. Regional endpoints don't have these headers.
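For example, a Lambda proxy integration behind an edge-optimized endpoint could read that header roughly as in the sketch below; header casing in the proxy event isn't guaranteed, so it is looked up case-insensitively, and the header is simply absent on a regional endpoint.
def handler(event, context):
    # Lambda proxy integration: normalize header names before looking one up.
    headers = {k.lower(): v for k, v in (event.get("headers") or {}).items()}
    country = headers.get("cloudfront-viewer-country", "unknown")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/plain"},
        "body": f"Hello, viewer from {country}\n",
    }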
As a general rule, go with edge-optimized unless you find a reason not to.
What would be some reasons not to? As mentioned above, if you or others are calling the API from within the same AWS region, you probably want a regional endpoint. Edge-optimized endpoints can introduce some edge-case side-effects in more advanced or complicated configurations, because of the way they integrate into the rest of the infrastructure. There are some things you can't do with an edge-optimized deployment, or that are not optimal if you do:
if you are using CloudFront for other sites, unrelated to API Gateway, and CloudFront is configured for a wildcard alternate domain name, like *.example.com, then you can't use a subdomain from that wildcard domain, such as api.example.com, on an edge-optimized endpoint with a custom domain name, because API Gateway submits a request to the edge network on your behalf to claim all requests for that subdomain when they arrive via CloudFront, and CloudFront rejects this request since it represents an unsupported configuration when used with API Gateway, even though CloudFront supports it in some other circumstances.
if you want to provide redundant APIs that respond to the same custom domain name in multiple regions, and use Route 53 Latency-Based Routing to deliver requests to the region nearest to the requester, you can't do this with an edge-optimized custom domain, because the second API Gateway region will not be able to claim the traffic for that subdomain on the edge network, since the edge network requires exactly 1 target for any given domain name (subdomain). This configuration can be achieved using regional endpoints and Route 53 LBR (see the sketch after this list), or can be achieved while leveraging the edge network by using your own CloudFront distribution, Lambda@Edge to select the target endpoint based on the caller's location, and API Gateway regional deployments. Note that this can't be achieved by any means if you need to support IAM authentication of the caller, because the caller needs to know the target region before signing and submitting the request.
if you want to use your API as part of a larger site that integrates multiple resources, is deployed behind CloudFront, and uses the path to route to different services -- for example, /images/* might route to an S3 bucket, /api/* might route to your API Gateway stage, and * (everything else) might route to an elastic load balancer in EC2 -- then you don't want to use an edge-optimized API, because this causes your requests to loop through the edge network twice (increasing latency) and causes some header values to be lost. This configuration doesn't break, but it isn't optimal. For this, a regional endpoint is desirable.
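For the latency-based routing scenario above, the Route 53 side might look roughly like this boto3 sketch; the hosted zone ID, record name, and regional target hostnames (the regionalDomainName values returned when the custom domain is created in each region) are placeholders.
import boto3

route53 = boto3.client("route53")

def latency_record(region, target_hostname):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "CNAME",
            "TTL": 60,
            "SetIdentifier": region,  # unique per record in the latency set
            "Region": region,         # the latency-based routing key
            "ResourceRecords": [{"Value": target_hostname}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000000000",
    ChangeBatch={"Changes": [
        latency_record("us-east-1", "d-aaaa1111.execute-api.us-east-1.amazonaws.com"),
        latency_record("eu-west-1", "d-bbbb2222.execute-api.eu-west-1.amazonaws.com"),
    ]},
)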

What is the difference between a Custom Domain Name at the API Gateway and a CloudFront Instance with a Custom Domain?

I'm currently extensively using the API Gateway as a source for CloudFront. My CloudFront serves other things as well, such as plain files from S3.
I've recently been looking into improving the current setup, and noticed the "Custom Domain Names" option in API Gateway.
From what I've understood, using it creates an unconfigurable CloudFront instance. I've not been able to find much information beyond that.
Are there any advantages to using API Gateway's Custom Domain Names over using a self-managed CloudFront instance?
When you use AWS CloudFront, you can configure different origins for the distribution, such as S3, API Gateway, etc., which allows you to serve different services through the same domain. For example, mydomain.com can point to index.html in S3 while mydomain.com/api/* points to API Gateway. This allows the frontend JavaScript to access the API without needing Cross-Origin Resource Sharing (CORS) support at API Gateway, which avoids the OPTIONS preflight request the browser would otherwise send (if you have headers like Cookie, Authorization, etc.).
On the other hand, you can configure Custom Domain Names on API Gateway. This lets you define a custom domain as well as a custom SSL certificate using AWS Certificate Manager. The main difference is that, if you have a frontend application, you need to define two domains (or different subdomains): one for the frontend served from S3 and one for the API. When the API is accessed from a different domain, it requires CORS to be configured at API Gateway, which can affect performance depending on the latency.
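As a rough boto3 sketch of the Custom Domain Names option (the domain, certificate ARN, API ID, and stage are placeholders): the domain is created once, and the API stage is then attached to it with a base path mapping.
import boto3

apigw = boto3.client("apigateway")

apigw.create_domain_name(
    domainName="api.mydomain.com",
    regionalCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/abc",
    endpointConfiguration={"types": ["REGIONAL"]},
)

# Map https://api.mydomain.com/v1/* to the 'prod' stage of the API.
apigw.create_base_path_mapping(
    domainName="api.mydomain.com",
    basePath="v1",
    restApiId="a1b2c3d4e5",
    stage="prod",
)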