CNAME to AWS Service - Browser Not Accepting Certificate

I am trying to access an AWS service directly from the browser, specifically the SNS service. I want to be able to post a message directly to an SNS topic, but using a CNAME record so I can control which region the browser ultimately goes to (sns.mydomain.com -> sns.us-east-1.amazonaws.com | sns.eu-west-1.amazonaws.com, depending on the requester's region).
My issue is that if I make an HTTPS request to my aliased endpoint, the returned certificate will not be issued for my hostname, and the browser will refuse to work with it. And while I could get around this by making plain HTTP requests, the browser will refuse to make an HTTP request from a secure origin (a site served over HTTPS).
Is it possible to have a CNAME point to an AWS service in the way that I'm trying to do it?
Ultimately, I'm trying to avoid locking the browser-based client application into a single AWS region.

Is it possible to have a CNAME point to an AWS service in the way that I'm trying to do it?
No. You're running up against a central feature of HTTPS verification: the certificate's Common Name (CN) or one of its Subject Alternative Names (SANs) must match the hostname the browser requested. If that weren't enforced, HTTPS would not be validating that the server is who it claims to be.
Ultimately, I'm trying to avoid locking the browser-based client application into a single AWS region.
That's a fine goal. Instead of doing it at the DNS layer, why not create an endpoint or configuration setting that supplies the region or regions to use? A smart client could even iterate through regions when a failure looks like a regional outage, which is somewhat better than a CNAME that you still have to update when a region goes down.
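If it helps, here is a minimal sketch of that idea in Python with boto3 (rather than the browser SDK the question implies); the region list, account ID, and topic name are placeholders, not values from the question:

```python
# Sketch of a region-aware publisher with failover; REGIONS, ACCOUNT_ID and
# TOPIC_NAME are illustrative placeholders.
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

REGIONS = ["us-east-1", "eu-west-1"]   # preferred order for this client
ACCOUNT_ID = "123456789012"
TOPIC_NAME = "my-topic"

def publish_with_failover(message: str) -> str:
    last_error = None
    for region in REGIONS:
        topic_arn = f"arn:aws:sns:{region}:{ACCOUNT_ID}:{TOPIC_NAME}"
        try:
            sns = boto3.client("sns", region_name=region)
            # publish() returns a dict containing the MessageId on success
            return sns.publish(TopicArn=topic_arn, Message=message)["MessageId"]
        except (ClientError, EndpointConnectionError) as error:
            last_error = error             # looks regional; try the next region
    raise RuntimeError("publish failed in every configured region") from last_error

if __name__ == "__main__":
    print(publish_with_failover("hello from a region-agnostic client"))
```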

Related

Can I get an example of how to connect a lambda function to a domain name?

I've wasted about 12 hours going in circles on what seems like it should be simple:
I am trying to make a simple static landing page in Lambda and hook the root of a domain to it.
The landing page works. API Gateway didn't at first, because AWS doesn't seem to set permissions properly by default ("internal server error" with API Gateway and Lambda on AWS), but now the gateway link works.
So the next steps were the following:
add a custom domain name in API Gateway
add the API mapping in the custom domain name
in Route 53, create a wildcard certificate with *.domain.com and domain.com
create an A record for domain.com that points to the API Gateway
create a CNAME record that points to the A record
and I get a 403 error with absolutely nothing in the logs. I have logging enabled for both the 'default' and '$default' stages in API Gateway.
I read https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-403-error-lambda-authorizer/ which is all about looking at what's in the logs...
and I find the documentation is both everywhere and nowhere, because it's built as chunks of 'do this' and 'do that' without ever painting a whole picture of how the pieces connect, or providing any diagram of the hierarchy of services. It reminds me of code that only works when you follow the documented example and breaks otherwise.
I'm sure I'm doing something wrong, but given the lack of logs and lack of cohesive documentation, I have no idea about the problem.
Not to mention that HTTP doesn't even connect, only HTTPS.
Can anyone outline the steps needed to achieve this? Essentially: [http|https]://(www).domain.com -> one Lambda function
You cannot use API Gateway for an HTTP request; it only supports HTTPS.
From the Amazon API Gateway FAQs (emphasis mine):
Q: Can I create HTTPS endpoints?
Yes, all of the APIs created with Amazon API Gateway expose HTTPS endpoints only. Amazon API Gateway does not support unencrypted (HTTP) endpoints. By default, Amazon API Gateway assigns an internal domain to the API that automatically uses the Amazon API Gateway certificate. When configuring your APIs to run under a custom domain name, you can provide your own certificate for the domain.
You can use CloudFront to automatically redirect HTTP to HTTPS. How do I set up API Gateway with my own CloudFront distribution? provides a pretty simple walkthrough of connecting an API Gateway to CloudFront (you can skip the API Gateway portion and use the one you created). The one important setting not covered in that document is selecting Redirect HTTP to HTTPS for the Viewer Protocol Policy.
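For reference, a rough sketch of what that setting looks like if you build the distribution with boto3 instead of the console; the origin ID is a placeholder:

```python
# Fragment of a CloudFront DistributionConfig showing only the setting the
# walkthrough calls "Redirect HTTP to HTTPS"; "api-gw-origin" is a placeholder.
default_cache_behavior = {
    "TargetOriginId": "api-gw-origin",
    "ViewerProtocolPolicy": "redirect-to-https",  # HTTP callers get redirected to HTTPS
    "ForwardedValues": {"QueryString": True, "Cookies": {"Forward": "none"}},
    "MinTTL": 0,
}
```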
If you truly need HTTP traffic you're probably going to need to go with an ALB.

Is it possible to use the AWS Web Application Firewall with an application that is not hosted on AWS instances?

I'm new to AWS WAF and am stuck setting it up for an application hosted on a dedicated server. I couldn't find any information on how to set it up without migrating to AWS servers, but I did find that WAF integrates with CloudFront. Even so, I found very little information explaining how to integrate this CDN with my web application. So, the main question is:
Is it possible to use AWS WAF with an application hosted on a dedicated server? And if it is possible, can you provide some guides and/or docs for setting it up?
Yes, you can use WAF with a server outside AWS.
WAF works with CloudFront, and CloudFront does not require the origin server to be in the AWS ecosystem.
When you create a distribution, you specify where CloudFront sends requests for the files. CloudFront supports using several AWS resources as origins. For example, you can specify an Amazon S3 bucket or a MediaStore container, a MediaPackage channel, or a custom origin, such as an Amazon EC2 instance or your own HTTP web server. (emphasis added)
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html
Configuring CloudFront to work with your external server is no different than configuring it to work with a server in EC2. Your DNS entry (e.g. www.example.com) changes to point to CloudFront, and CloudFront connects to your server using a new name that you create (e.g. origin.example.com). CloudFront proxies requests through to your server, unless the edge location handling a given request happens to have a copy of the same resource that it cached while handling a previous request for the same page -- that's how CloudFront gets your content, by caching it as it handles requests that pass through. (You don't pre-load any content into CloudFront.) If CloudFront has a cached copy, your server sees nothing, and CloudFront returns the object to the browser from its cache. But CloudFront isn't strictly a CDN, even though they market it that way. It is a global network of reverse proxies and high-reliability/low-latency transport.
You'll want to take steps to ensure that the web server rejects requests that didn't come through CloudFront. See Using Custom Headers to Restrict Access to Your Content on a Custom Origin, as well as the list of CloudFront IP Addresses, which you could use in your web server's firewall.
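As a minimal sketch of the custom-header check on the origin side (the header name and secret below are invented, and Flask is just one convenient way to express it):

```python
# Reject any request that did not come through our CloudFront distribution.
# X-Origin-Secret and its value are hypothetical; use your own random pair.
from flask import Flask, abort, request

app = Flask(__name__)

SECRET_HEADER = "X-Origin-Secret"
SECRET_VALUE = "replace-with-a-long-random-string"

@app.before_request
def require_cloudfront_header():
    if request.headers.get(SECRET_HEADER) != SECRET_VALUE:
        abort(403)  # not proxied by our distribution

@app.route("/")
def index():
    return "hello from the origin"
```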
Once you have your site working through CloudFront, all you do is activate WAF on the distribution. CloudFront is very tightly integrated with WAF so that is a very simple change, once you have your WAF rules set up.

Disable HTTP redirection to HTTPS

I'm developing a little service using Lambda functions which returns a "Fact of the day" in your CLI using curl.
First, I developed the business logic, then deployed and created the Lambda using Serverless.
Second, I bought a domain using AWS Route 53, provisioned a certificate, and routed the domain using a Custom Domain Name on API Gateway.
At the moment, if you visit https://domain.io the service works as intended, but if you call curl domain.io it outputs:
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>CloudFront</center>
</body>
</html>
My goal is to get the service running without SSL (or the redirect), by calling curl domain.io.
Is it possible to avoid the redirect? Or can you create an API Gateway custom domain name without a certificate?
Currently, if I call curl -L domain.io it will follow the redirect, but that's not the solution I'm looking for.
Thank you!
Remove the custom domain configuration from API Gateway.
Wait a few minutes for API Gateway to release the custom domain in the AWS Edge Network (there isn't a way to determine when this is complete, but you'll get an error on one of the subsequent steps until it is. 20 minutes should be sufficient).
Create a CloudFront distribution, using the generic ...execute-api...amazonaws.com domain name assigned to your API stage.
For Origin Protocol Policy, select HTTPS Only.
Set the Origin Path to your stage prefix (e.g. /prod or /v1) -- whatever you set up as the stage prefix.
Set the Viewer Protocol Policy to HTTP and HTTPS.
Set the Minimum TTL and Default TTL to 0.
Set the Alternate Domain Name for the distribution to your custom domain.
If you want SSL to optionally work on your custom domain, associate an ACM certificate with the CloudFront distribution.
Change your DNS entry to point to the *.cloudfront.net hostname assigned to your distribution.
Wait for the CloudFront distribution state to change from In Progress to Deployed.
Test.
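For reference, here is a rough boto3 sketch of the distribution those steps describe; the execute-api domain, stage path, alias, and certificate ARN are all placeholders, and the ViewerCertificate block is only needed if you want HTTPS to also work on the custom domain:

```python
# Sketch only: creates a CloudFront distribution in front of an API Gateway
# stage, allowing both HTTP and HTTPS from viewers. All identifiers below
# (execute-api domain, stage, alias, certificate ARN) are placeholders.
import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),          # must be unique per create call
    "Comment": "HTTP/HTTPS front door for API Gateway",
    "Enabled": True,
    "Aliases": {"Quantity": 1, "Items": ["domain.io"]},
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "api-gateway",
        "DomainName": "abc123.execute-api.us-east-1.amazonaws.com",
        "OriginPath": "/prod",                    # your stage prefix
        "CustomOriginConfig": {
            "HTTPPort": 80,
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "https-only", # API Gateway only speaks HTTPS
        },
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "api-gateway",
        "ViewerProtocolPolicy": "allow-all",      # accept both HTTP and HTTPS viewers
        "ForwardedValues": {"QueryString": True, "Cookies": {"Forward": "none"}},
        "MinTTL": 0,
        "DefaultTTL": 0,
        "MaxTTL": 0,
        "TrustedSigners": {"Enabled": False, "Quantity": 0},
    },
    # Only needed if you keep the alias and want HTTPS on the custom domain.
    "ViewerCertificate": {
        "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE",
        "SSLSupportMethod": "sni-only",
        "MinimumProtocolVersion": "TLSv1.2_2021",
    },
})
```

Once the distribution reports Deployed, point the DNS entry at the assigned *.cloudfront.net name as described in the steps above.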
This seems like a lot of effort to enable HTTP against API Gateway, but it is necessary, because API Gateway was specifically designed not to support HTTP -- it only works with HTTPS, because that's a best-practice for APIs, generally.
Q: Can I create HTTPS endpoints?
Yes, all of the APIs created with Amazon API Gateway expose HTTPS endpoints only. Amazon API Gateway does not support unencrypted (HTTP) endpoints.
https://aws.amazon.com/api-gateway/faqs/
CloudFront is commonly known as a CDN, but it is in fact something of a Swiss Army knife of custom HTTP request manipulation, and this is one of those cases.
Once you verify the behavior, you can optionally increase the Default TTL in CloudFront, which will cause it to cache responses for up to that many seconds, reducing your costs by sending fewer actual requests to API Gateway and replaying cached responses to the callers.
This setup differs from what you have now because you are in control of the CloudFront distribution instead of API Gateway... so you can customize it in ways that API Gateway doesn't allow when it is in control.

HTTPS on S3 WITHOUT CloudFront possible?

We currently want to start hosting all our assets through AWS S3, and we also want to serve everything over HTTPS. I understand I can use AWS Certificate Manager (ACM) with CloudFront to serve assets over HTTPS. The problem is that we are in the medical industry and we are legally prohibited from hosting anything outside the EU. With S3 I can choose a location (Frankfurt for us), but with CloudFront the only option I'm offered is a choice of price class.
So I thought that I could maybe use Let's Encrypt to generate my own certs. But I think I would then still need to use ACM, which only works with CloudFront, which means I still can't use it.
Does anybody know if I can somehow set up S3 with HTTPS but without CloudFront?
Unfortunately you can't use an SSL certificate for your custom domain with S3 alone. You can use the S3 REST endpoint with Amazon's own certificate, e.g. https://my-example-bucket.s3.us-east-1.amazonaws.com (the s3-website-... endpoints don't support HTTPS at all).
If you want to use a custom domain with SSL, and you can't use CloudFront, then you will need to look into placing some other proxy in front of S3 like your own Nginx server or something.
In AWS API Gateway, you can create a proxy resource /{proxy+} that maps to s3-website.
Be sure to map to the s3-website endpoint, not plain s3, so that PATH/TO/DIR returns PATH/TO/DIR/index.html, and possibly other things work as desired.
API Gateway is served over HTTPS, optionally under your own domain.
This is not a very good option though, because you have to manually add all the allowed HTTP return codes, and there is a 10 MB payload limit per request, as this service is aimed at REST APIs.
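A rough boto3 sketch of that proxy setup, assuming a hypothetical bucket that already has static website hosting enabled; the bucket endpoint and region are placeholders:

```python
# Sketch: REST API with a greedy {proxy+} resource that HTTP-proxies to an
# S3 static website endpoint. The bucket and region are placeholders.
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

api = apigw.create_rest_api(name="s3-website-proxy")
root_id = next(
    item["id"]
    for item in apigw.get_resources(restApiId=api["id"])["items"]
    if item["path"] == "/"
)
proxy = apigw.create_resource(
    restApiId=api["id"], parentId=root_id, pathPart="{proxy+}"
)
apigw.put_method(
    restApiId=api["id"],
    resourceId=proxy["id"],
    httpMethod="ANY",
    authorizationType="NONE",
    requestParameters={"method.request.path.proxy": True},
)
apigw.put_integration(
    restApiId=api["id"],
    resourceId=proxy["id"],
    httpMethod="ANY",
    type="HTTP_PROXY",
    integrationHttpMethod="ANY",
    # Point at the s3-website endpoint, passing the captured path through.
    uri="http://my-example-bucket.s3-website-us-east-1.amazonaws.com/{proxy}",
    requestParameters={"integration.request.path.proxy": "method.request.path.proxy"},
)
apigw.create_deployment(restApiId=api["id"], stageName="prod")
```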
Below are some useful references. Both S3 and CloudFront are available in the EU. You can certainly present S3 via CloudFront.
I understand the requirement to host within a territorial boundary. You will meet that requirement with S3 in an EU region. CloudFront is not a hosting service; it is a CDN (Content Delivery Network) using high-performance leased lines and manageable endpoint caching. The option you are looking at is a price class, not the hosting location. If you want to serve content in the EU you would want 'Price Class 100' or 'Price Class All'.
When using CloudFront you can control both which IP ranges can access your material and the encryption of both front-end and back-end traffic. Check out some of the design patterns.
There are some excellent white papers and design patterns for setting up secure CloudFront. I think you will find that you can do what you want and stay well within the legal requirements.
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
https://aws.amazon.com/compliance/eu-data-protection/
Also check out AWS doco 'using-https-cloudfront-to-s3-origin' & 'custom-ssl-domains'
P.S. Ensure that you set the bucket permissions to only be available via the CloudFront channel.
RL
CloudFront has a feature for whitelisting/blacklisting countries. I would try using any of the three options you listed along with a whitelist of EU countries. I'm not sure of the easiest way to verify that other countries (e.g. the US) are denied, though.
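For what it's worth, such a whitelist is just the GeoRestriction block of the CloudFront distribution config; a sketch with an illustrative (and deliberately incomplete) list of EU country codes:

```python
# GeoRestriction fragment of a CloudFront DistributionConfig; the country
# list here is illustrative, not a complete list of EU countries.
eu_whitelist = ["DE", "FR", "NL", "IE"]

restrictions = {
    "GeoRestriction": {
        "RestrictionType": "whitelist",
        "Quantity": len(eu_whitelist),
        "Items": eu_whitelist,
    }
}
# Merge as "Restrictions": restrictions into the DistributionConfig passed to
# boto3.client("cloudfront").create_distribution(...) or update_distribution(...).
```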

How to use AWS WAF with Application ELB

I need to use AWS WAF for my web application hosted on AWS to provide additional rule-based security. I couldn't find any way to use WAF directly with an ELB; WAF needs CloudFront to attach a Web ACL that blocks actions based on rules.
So, I added my Application ELB CNAME to CloudFront (just the domain name), and configured the distribution with a Web ACL containing an IP block rule and the HTTPS protocol. Everything else was left at the defaults. Once WAF and CloudFront with the ELB CNAME were in place, I tried to access the ELB CNAME from one of the IP addresses in the WAF block rule. I am still able to access my web application from that IP address. Also, I checked the CloudWatch metrics for the Web ACL I created, and I see it's not even being hit.
First, is there a good way to achieve what I am doing, and second, is there a specific way to add an ELB CNAME to CloudFront?
Thanks and Regards,
Jay
Service update: The original, extended answer below was correct at the time it was written, but is now primarily applicable to Classic ELB, because -- as of 2016-12-07 -- Application Load Balancers (elbv2) can now be directly integrated with AWS WAF (Web Application Firewall).
Starting [2016-12-07] AWS WAF (Web Application Firewall) is available on the Application Load Balancer (ALB). You can now use AWS WAF directly on Application Load Balancers (both internal and external) in a VPC, to protect your websites and web services. With this launch customers can now use AWS WAF on both Amazon CloudFront and Application Load Balancer.
https://aws.amazon.com/about-aws/whats-new/2016/12/AWS-WAF-now-available-on-Application-Load-Balancer/
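With the current WAFv2 API (the 2016 launch used the older regional WAF API), attaching an existing web ACL to an ALB is a single call; both ARNs below are placeholders:

```python
# Sketch: associate an existing WAFv2 web ACL (REGIONAL scope) with an ALB.
# Both ARNs are placeholders.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-web-acl/11111111-2222-3333-4444-555555555555",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/0123456789abcdef",
)
```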
It seems like you do need some clarification on how these pieces fit together.
So let's say your actual site that you want to secure is app.example.com.
It sounds as if you have a CNAME elb.example.com pointing to the assigned hostname of the ELB, which is something like example-123456789.us-west-2.elb.amazonaws.com. If you access either of these hostnames, you're connecting directly to the ELB -- regardless of what's configured in CloudFront or WAF. These machines are still accessible over the Internet.
The trick here is to route the traffic to CloudFront, where it can be firewalled by WAF, which means a couple of additional things have to happen: first, this means an additional hostname is needed, so you configure app.example.com in DNS as a CNAME (or Alias, if you're using Route 53) pointing to the dxxxexample.cloudfront.net hostname assigned to your distribution.
You can also access your site using the assigned CloudFront hostname directly, for testing. Accessing that endpoint from the blocked IP address should indeed result in the request being denied, now.
So, the CloudFront endpoint is where you need to send your traffic -- not directly to the ELB.
Doesn't that leave your ELB still exposed?
Yes, it does... so the next step is to plug that hole.
If you're using a custom origin, you can use custom headers to prevent users from bypassing CloudFront and requesting content directly from your origin.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/forward-custom-headers.html
The idea here is that you will establish a secret value known only to your servers and CloudFront. CloudFront will send this in the headers along with every request, and your servers will require that value to be present or else they will play dumb and throw an error -- such as 503 Service Unavailable or 403 Forbidden or even 404 Not Found.
So, you make up a header name, like X-My-CloudFront-Secret-String and a random string, like o+mJeNieamgKKS0Uu0A1Fqk7sOqa6Mlc3 and configure this as a Custom Origin Header in CloudFront. The values shown here are arbitrary examples -- this can be anything.
Then configure your application web server to deny any request where this header and the matching value are not present -- because this is how you know the request came from your specific CloudFront distribution. Anything else (other than ELB health checks, for which you need to make an exception) is not from your CloudFront distribution, and is therefore unauthorized by definition, so your server needs to deny it with an error, but without explaining too much in the error message.
This header and its expected value remains a secret because it will not be sent back to the browser by CloudFront -- it's only sent in the forward direction, in the requests that CloudFront sends to your ELB.
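On the CloudFront side, this is the CustomHeaders block of the origin configuration; a sketch reusing the example header name and value from above:

```python
# CustomHeaders fragment for the ELB origin in the DistributionConfig;
# CloudFront adds this header to every request it forwards to the origin.
origin_custom_headers = {
    "Quantity": 1,
    "Items": [{
        "HeaderName": "X-My-CloudFront-Secret-String",
        "HeaderValue": "o+mJeNieamgKKS0Uu0A1Fqk7sOqa6Mlc3",
    }],
}
# Set as "CustomHeaders": origin_custom_headers on the origin entry inside
# the DistributionConfig passed to create_distribution / update_distribution.
```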
Note that you should get an SSL cert for your ELB (for the elb.example.com hostname) and configure CloudFront to forward all requests to your ELB using HTTPS. The likelihood of interception of traffic between CloudFront and ELB is low, but this is a protection you should consider implementing.
You can optionally also reduce (but not eliminate) most unauthorized access by allowing only the CloudFront IP address ranges in the ELB security group. The CloudFront address ranges are documented -- search the published JSON for blocks designated CLOUDFRONT, and allow only those in the ELB security group. Note that if you do this, you still need to set up the custom origin header configuration discussed above: if you only block at the IP level, you're still technically allowing anybody's CloudFront distribution to access your ELB, because your distribution shares a pool of IP addresses with other CloudFront distributions, so the fact that a request arrives from CloudFront is not a sufficient guarantee that it came from your distribution. Note also that you should sign up for change notifications, so that if new address ranges are added to CloudFront, you'll know to add them to your security group.
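The published ranges live at https://ip-ranges.amazonaws.com/ip-ranges.json; a small sketch that extracts just the CLOUDFRONT blocks:

```python
# Fetch the published AWS IP ranges and keep only the CloudFront prefixes,
# e.g. to feed into the ELB security group or a firewall rule set.
import json
import urllib.request

IP_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(IP_RANGES_URL) as response:
    ranges = json.load(response)

cloudfront_cidrs = sorted(
    prefix["ip_prefix"]
    for prefix in ranges["prefixes"]
    if prefix["service"] == "CLOUDFRONT"
)

for cidr in cloudfront_cidrs:
    print(cidr)
```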