AWS ELB/ALB Multiple SSL Certificates - amazon-web-services

I'm looking for a way to use multiple SSL certificates (over 100) on a single AWS ELB/ALB - how can I implement that?

You can't do this. Not directly.
ELB Classic supports 1 cert per listener and of course only one listener on port 443.
ALB supports 26 certificates (25 plus the default, which is used whenever the incoming SNI is unmatched or absent). This limit cannot be increased.
But certificates can support multiple domains, so that's one way of getting support for more than 25 (+1) domains -- combine the domains onto a smaller number of certs.
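If it helps, here's a minimal sketch in Python/boto3 of attaching additional ACM certificates to an existing HTTPS listener (both ARNs are placeholders); ALB then picks the right one via SNI:

import boto3

elbv2 = boto3.client('elbv2')

# Attach an extra certificate to the HTTPS listener (up to the 25-cert limit).
# ALB selects among the attached certs based on the SNI the client presents.
elbv2.add_listener_certificates(
    ListenerArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:'
                'listener/app/my-alb/0123456789abcdef/0123456789abcdef',
    Certificates=[{
        'CertificateArn': 'arn:aws:acm:us-east-1:123456789012:'
                          'certificate/11111111-2222-3333-4444-555555555555',
    }],
)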
Or, create one CloudFront distribution per certificate, pointing them to the ALB as origin server. This allows you to support as many certs as you want with the services deployed behind one balancer, up to the limits for distributions on your account. The default limit is 200 CloudFront distributions in each account, but this can be increased by request. This can also be used to potentially reduce the load on the instances behind the balancer, since CloudFront can be configured to cache responses.
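And a hedged sketch of the CloudFront-per-certificate approach, again in Python/boto3. All names and ARNs are placeholders; note that certificates used with CloudFront must live in us-east-1:

import time
import boto3

cf = boto3.client('cloudfront')

def distribution_for_cert(domain, cert_arn, alb_dns_name):
    # One distribution per certificate/domain, all sharing the same ALB origin.
    return cf.create_distribution(DistributionConfig={
        'CallerReference': f'{domain}-{time.time()}',  # must be unique per request
        'Comment': f'TLS termination for {domain}',
        'Enabled': True,
        'Aliases': {'Quantity': 1, 'Items': [domain]},
        'ViewerCertificate': {
            'ACMCertificateArn': cert_arn,  # must be a us-east-1 ACM cert
            'SSLSupportMethod': 'sni-only',
            'MinimumProtocolVersion': 'TLSv1.2_2019',
        },
        'Origins': {'Quantity': 1, 'Items': [{
            'Id': 'alb-origin',
            'DomainName': alb_dns_name,
            'CustomOriginConfig': {
                'HTTPPort': 80,
                'HTTPSPort': 443,
                'OriginProtocolPolicy': 'https-only',
            },
        }]},
        'DefaultCacheBehavior': {
            'TargetOriginId': 'alb-origin',
            'ViewerProtocolPolicy': 'redirect-to-https',
            'TrustedSigners': {'Enabled': False, 'Quantity': 0},
            'MinTTL': 0,
            # Forward everything so CloudFront acts as a plain reverse proxy;
            # tighten this if you want CloudFront to actually cache responses.
            'ForwardedValues': {
                'QueryString': True,
                'Cookies': {'Forward': 'all'},
                'Headers': {'Quantity': 1, 'Items': ['*']},
            },
        },
    })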

Related

AWS-issued (managed) TLS/SSL certificate for ELB/ALB

When I create an ELB (an Application Load Balancer, in this case), Amazon gives it a DNS name, e.g.:
myalb-1472119708.eu-central-1.elb.amazonaws.com
Now, I would like to terminate TLS/SSL on my ALB; however, I don't want to attach my own certificate (e.g. from the Certificate Manager). I am OK with accessing my application via the default DNS name (of the ALB) through HTTPS:
https://myalb-1472119708.eu-central-1.elb.amazonaws.com
However, with the default configuration I can access my app via HTTP only:
http://myalb-1472119708.eu-central-1.elb.amazonaws.com
Does AWS support this (rhetorical question)? Any plans to add this feature in the near future? Thanks.
UPDATE:
After all, it's not a hard feature to implement. Moreover, SSL is the de facto standard for running (secure) web apps today. I believe AWS could issue wildcard certificates for the ELBs in every region, e.g.:
*.eu-central-1.elb.amazonaws.com
And then attach the corresponding certificate to every ALB by default, or publish a list of certificate ARNs for every region. This would free developers from extra effort (buying a domain, registering a certificate in ACM) for their non-production projects.
At the time of this writing, the only way to resolve this is by running your ALB/ELB behind CloudFront, which (unlike ALB) gives you a TLS certificate by default:
User -> CloudFront edge location (HTTPS) -> ALB (HTTP) -> Backend (HTTP)
Although CloudFront incurs extra costs, apart from the ability to cache static content it also gives you faster TLS termination, which happens at its edge locations, thus reducing the latency of the initial TLS handshake round trips (2 in theory, but in practice 3 for low-bandwidth clients).
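A minimal fragment, reusing the create_distribution sketch from the first answer above: skip Aliases entirely and use the default *.cloudfront.net certificate instead of an ACM ARN.

# Use CloudFront's own certificate; the distribution is then reachable over
# HTTPS via its assigned dxxxexample.cloudfront.net hostname.
viewer_certificate = {'CloudFrontDefaultCertificate': True}

# The origin stays the ALB's DNS name; 'http-only' matches the
# HTTPS-at-the-edge / HTTP-to-origin flow shown above.
origin_protocol_policy = 'http-only'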

AWS ALB per ECS Service vs. multiple services per ALB for a microservices architecture

Initially I thought that multiple services per ALB listener, with different path patterns to distribute API calls appropriately, was the obvious choice. In terms of health checks, though (if one of those services goes down), I don't know of a smart way to divert traffic for just that service to a different region.
If I have an active-active setup with weighted Route 53 records that will fail over on a health check, I don't see any other solution than to either cut off that entire ALB's traffic and divert it to another region, or ignore the one down service and continue to send traffic to the partially failing ALB.
Having a one-to-one mapping of ALBs to services fixes this problem, but it adds additional overhead in terms of cost and complexity.
What is the recommended pattern to follow for an active active microservices architecture?
If all of the services are accessed under a single hostname then the DNS of course must point to exactly one place, so rerouting is fundamentally an all-or-nothing prospect.
However, there's an effective workaround.
Configure a "secret" hostname for each service. ("Secret" in the sense that the client does not need to be aware of it.) We'll call these "service endpoints." The purpose of these hostnames is for routing requests to each service... svc1.api.example.com, svc2.api.example.com, etc.
Configure each of these DNS records to point to the primary or failover load balancer, with Route 53 entries and a Route 53 health check that specifically checks that one service for health at each balancer.
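A sketch of one such record pair in Python/boto3 (the zone ID, hostnames, and health check IDs are placeholders):

import boto3

r53 = boto3.client('route53')

def failover_record(zone_id, name, target_dns, role, health_check_id):
    # role is 'PRIMARY' or 'SECONDARY'; each record carries a health check
    # that probes this one service behind the corresponding balancer.
    r53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': name,
                'Type': 'CNAME',
                'TTL': 60,  # keep low so resolvers (and CloudFront) re-check often
                'SetIdentifier': f'{name}-{role.lower()}',
                'Failover': role,
                'HealthCheckId': health_check_id,
                'ResourceRecords': [{'Value': target_dns}],
            },
        }]},
    )

failover_record('Z123EXAMPLE', 'svc1.api.example.com',
                'primary-alb-123.us-east-1.elb.amazonaws.com',
                'PRIMARY', '11111111-aaaa-bbbb-cccc-222222222222')
failover_record('Z123EXAMPLE', 'svc1.api.example.com',
                'failover-alb-456.eu-west-1.elb.amazonaws.com',
                'SECONDARY', '33333333-dddd-eeee-ffff-444444444444')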
What you have at this point is a hostname for each service that will have a DNS answer that correctly points to the preferred, healthy endpoint.
What you don't yet have is a way to ensure that client requests go to the right place.
For this, create a CloudFront distribution, with your public API hostname as an Alternate Domain Name. Define one CloudFront Origin for each of these service endpoints (leave "Origin Path" blank), then create a Cache Behavior for each service with the appropriate path pattern e.g. /api/svc1* and select the matching origin. Whitelist any HTTP headers that your API needs to see.
Finally, point DNS for your main hostname to CloudFront.
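Scripted, the origin and behavior wiring might look roughly like this -- a fragment that plugs into the create_distribution sketch from the first answer; hostnames, IDs, and the header whitelist are illustrative:

origins = {'Quantity': 2, 'Items': [
    {'Id': 'svc1', 'DomainName': 'svc1.api.example.com',
     'CustomOriginConfig': {'HTTPPort': 80, 'HTTPSPort': 443,
                            'OriginProtocolPolicy': 'https-only'}},
    {'Id': 'svc2', 'DomainName': 'svc2.api.example.com',
     'CustomOriginConfig': {'HTTPPort': 80, 'HTTPSPort': 443,
                            'OriginProtocolPolicy': 'https-only'}},
]}

def behavior(path_pattern, origin_id):
    # One cache behavior per service; leave Origin Path blank as noted above.
    return {
        'PathPattern': path_pattern,
        'TargetOriginId': origin_id,
        'ViewerProtocolPolicy': 'https-only',
        'TrustedSigners': {'Enabled': False, 'Quantity': 0},
        'MinTTL': 0,
        'ForwardedValues': {
            'QueryString': True,
            'Cookies': {'Forward': 'all'},
            # Whitelist whatever headers your API needs to see:
            'Headers': {'Quantity': 1, 'Items': ['Authorization']},
        },
    }

cache_behaviors = {'Quantity': 2, 'Items': [
    behavior('/api/svc1*', 'svc1'),
    behavior('/api/svc2*', 'svc2'),
]}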
The clients will automatically connect to their nearest CloudFront edge location, and CloudFront -- after matching the path pattern to discover where to send the request -- will check the DNS for that service-specific endpoint and forward the request to the appropriate balancer.
CloudFront, in this application, is not a "CDN" per se, but rather a globally-distributed reverse proxy -- logically, a single destination for all your traffic, so no failover configuration is required on the main hostname for the API... so no more all-or-nothing routing. On the back side of CloudFront, those service endpoint hostnames ensure that requests are routed to a healthy destination based on the Route 53 health checks. CloudFront respects the TTL of these DNS records and will not cache DNS responses that it shouldn't.

Email-based DCV Issue (multiple domains) - Amazon Certificate Manager (ACM)

Is there a way to validate domain control without using the email process? As I need to be able to add additional domains to the certificate for new clients...
The problem I'm facing is that I can't add to the existing AWS certificate and have to create a new one with all the domains. When I do that, everyone for every domain gets emailed and asked to confirm at:
administrator@domain.com
hostmaster@domain.com
admin@domain.com
postmaster@domain.com
webmaster@domain.com
So I have had to register a separate certificate and upload it to ACM instead, which is not ideal, mainly as it's limited to 99 domains and I was hoping to automate the whole process.
Is this possible with AWS?
Thank you.
Q: Are any other methods for validating a domain or approving a certificate supported?
Not at this time.
https://aws.amazon.com/certificate-manager/faqs/#provisioning
Having so many domains on one certificate isn't really a good practice, for other reasons as well.
You're making your certificate physically longer and longer, wasting some amount of bandwidth, because the cert is sent to every connecting client on every new connection.
Renewals will also be messy if any of the domains on the cert are no longer pointing to your site, because auto-renewal requires that the issued cert be reachable on the Internet for each hostname.
ACM tries to automatically renew your Amazon-issued SSL/TLS certificates before they expire so that no action is required from you. To renew your certificate automatically, the following must be true:
ACM must be able to establish an HTTPS connection with each domain in the certificate.
For each connection, the certificate that is returned must match the one that ACM is renewing.
http://docs.aws.amazon.com/acm/latest/userguide/configure-domain-for-automatic-validation.html
One cleaner solution (the one I am using) is to provision each domain's cert individually and attach each one to its own CloudFront distribution, pointing that to your origin server (which I assume in this context is an ELB) and whitelisting all headers for forwarding to the origin, which bypasses caching and causes CloudFront to function as a simple but distributed reverse proxy. Setting "compress objects automatically" in CloudFront may also save some bandwidth charges, and even with caching disabled, CloudFront should improve the responsiveness of your sites by keeping traffic on the AWS network for more of the path between origin and viewer.
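To give a rough idea of the per-domain provisioning in Python/boto3 (the domains are placeholders, and email validation was the default behavior at the time this was written):

import boto3

# Certificates used with CloudFront must be requested in us-east-1.
acm = boto3.client('acm', region_name='us-east-1')

def request_cert(domain):
    # Each client domain gets its own small cert instead of one giant SAN cert.
    resp = acm.request_certificate(
        DomainName=domain,
        SubjectAlternativeNames=[f'www.{domain}'],
    )
    return resp['CertificateArn']

cert_arns = {d: request_cert(d) for d in ['client1.example', 'client2.example']}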

AWS Route 53 - Domain name route to different ports of an Application load balancer

We are implementing a microservices architecture in AWS. We have several EC2 instances which have the microservices deployed on different ports. We also have an internet-facing Application Load Balancer, which routes to different services based on the port.
eg:
xxxx-xx.xx.elb.amazonaws.com:8080/ go to microservice 1
xxxx-xx.xx.elb.amazonaws.com:8090/ go to microservice 2
We need to use a domain name instead of the ELB hostname, and the port should not be exposed through the domain name either. Almost all the resources I found regarding Route 53 use an alias, which does the following:
xx.xxxx.co.id -> xxxx-xx.xx.elb.amazonaws.com or
xx.xxxx.co.id -> 111.111.111.11 (static ip)
1) Do we need separate domains for each micro service?
2) How to use alias to point domains to a specific port of the ELB?
3) Is it possible to use this setup if the domains are from a provider other than AWS?
Important Update
Since this answer was originally written, Application Load Balancer introduced the capability for ALB to route requests to a specific target group based on the Host header of the incoming request.
The incoming host header can now be used to route requests to specific instances and ports.
Additionally, ALB introduced SNI support, allowing you to associate multiple TLS (SSL) certificates with a single balancer, and the correct certificate will be automatically selected based on the SNI presented by the client when TLS is negotiated. Multi-domain and wildcard certs from Amazon Certificate Manager also work with ALB.
Based on these factors, no separate ports or different listeners are needed -- simply assign hostnames and/or path prefixes for each service, and map those patterns to the appropriate target group of instances.
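For illustration, a host-header rule in Python/boto3 (the ARNs are placeholders):

import boto3

elbv2 = boto3.client('elbv2')

# Route requests for one service hostname to that service's target group.
elbv2.create_rule(
    ListenerArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:'
                'listener/app/my-alb/0123456789abcdef/0123456789abcdef',
    Priority=10,
    Conditions=[{'Field': 'host-header', 'Values': ['svc1.api.example.com']}],
    Actions=[{'Type': 'forward',
              'TargetGroupArn': 'arn:aws:elasticloadbalancing:us-east-1:'
                                '123456789012:targetgroup/svc1/0123456789abcdef'}],
)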
The original answer is no longer accurate, but is included below.
1.) Do we need separate domains for each micro service?
No, this won't help you. ALB does not interpret the hostname attached to the incoming request.
Separate hostnames in the same domain won't directly accomplish your objective, either.
2.) How to use alias to point domains to a specific port of the ELB?
Domains do not point to ports. Hostnames do not point to ports. DNS is only used for address resolution. This is true everywhere on the Internet.
3.) Is it possible to use this setup if the domains are from another provider other than AWS.
This is not a limitation of AWS. DNS simply does not work this way.
A service endpoint is unaware of the DNS records that point to it. The DNS entry itself is strictly used for discovering an IP address that can be used to access the endpoint. After that, the endpoint does not actually know anything about the DNS, and there is no way to tell the browser, via DNS, to use a different port.
For HTTP, the implicit port is 80. For HTTPS, it is 443. Unless a port is provided in the URL, these are the only usable ports.
However, in HTTP and HTTPS, each request is accompanied by a Host: header, sent by the web browser with each request. This is the hostname in the address bar.
To differentiate between requests for different hostnames arriving at a device (such as ELB/ALB), the device at the endpoint must interpret the incoming host header and route the request to a back-end system providing that service.
ALB does not currently support this capability.
ALB does, however, support choosing endpoints based on a path prefix. So microservices.example.com/api/foo could route to one set of services, while microservices.example.com/api/bar could route to another.
But ALB does not directly support routing by host header.
In my infrastructure, we use a combination of ELB or ALB, but the instances behind the load balancer are not the applications. Instead, they are instances that run HAProxy load balancer software, and route the requests to the backend.
A brief example of the important configuration elements looks like this:
frontend main
    bind *:80  # the port the ELB forwards traffic to
    use_backend svc1 if { hdr(Host) -i foo.example.com }
    use_backend svc2 if { hdr(Host) -i bar.example.com }

backend svc1
    server foo-a 192.168.2.24:8080
    server foo-b 192.168.12.18:8080

backend svc2
    ....
The ELB terminates the SSL and selects a proxy at random; the proxy checks the Host: header and selects a backend (a group of 1 or more instances) to which the request will be routed. It is a thin layer between the ELB and the application, which handles the request routing by examining the host header or any other characteristic of the request.
This is one solution, but is a somewhat advanced configuration, depending on your expertise.
If you are looking for an out-of-the-box, serverless, AWS-centric solution, then the answer is actually found in CloudFront. Yes, it's a CDN, but it has several other applications, including as a reverse proxy.
For each service, choose a hostname from your domain to assign to that service, foo.api.example.com or bar.api.example.com.
For each service, create a CloudFront distribution.
Configure the Alternate Domain Name of each distribution to use that service's assigned hostname.
Set the Origin Domain Name to the ELB hostname.
Set the Origin HTTP Port to the service's specific port on the ALB, e.g. 8090.
Configure the default Cache Behavior to forward any headers you need. If you don't need the caching capability of CloudFront, choose Forward All Headers. Also enable forwarding of Query Strings and Cookies if needed.
In Route 53, create foo.api.example.com as an Alias to that specific CloudFront distribution's hostname, e.g. dxxxexample.cloudfront.net.
Your problem is solved.
You see what I did there?
For each hostname you configure, a dedicated CloudFront distribution receives the request on the standard ports (80/443) and -- based on which distribution the host header matches -- CloudFront routes the requests to the same ELB/ALB hostname but a custom port number.
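If you script these distributions, the only part that differs per service is the origin port. A fragment, assuming the create_distribution sketch from the first answer (the IDs are illustrative):

origin_for_svc2 = {
    'Id': 'svc2-origin',
    'DomainName': 'xxxx-xx.xx.elb.amazonaws.com',  # same ELB/ALB for every service
    'CustomOriginConfig': {
        'HTTPPort': 8090,  # the service-specific port on the balancer
        'HTTPSPort': 443,
        'OriginProtocolPolicy': 'http-only',  # CloudFront -> ELB over the custom port
    },
}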
I think there is a possibility that you can build what you're describing. I was in the same boat for a while; here are some options for you to consider:
In R53 create a hosted zone, and point your domain at it.
Optional step: create ALIAS records. You can do this for each subdomain or app. Leave the ALIAS field blank if using the root domain.
Create a record set using the SRV option, which is a service lookup for port redirection. Try to point this to your LB port 80, and alias the sub-domains.
Change your load balancer's listeners to listen on port 80, then redirect app traffic based on your apps' port settings.
I haven't used SRV records myself, but this would definitely point you in that direction.

How to use AWS WAF with Application ELB

I need to use AWS WAF for my web application hosted on AWS to provide additional rule-based security. I couldn't find any way to use WAF directly with an ELB; WAF needs CloudFront to add a Web ACL that blocks actions based on rules.
So, I added my Application ELB CNAME to CloudFront (only the domain name), and updated CloudFront with a WebACL containing an IP block rule and the HTTPS protocol. Everything else has been left at the defaults. Once both WAF and CloudFront with the ELB CNAME were in place, I tried to access the ELB CNAME from one of the IP addresses that is in the block IP rule in WAF. I am still able to access my web application from that IP address. Also, I tried to check the CloudWatch metrics for the Web ACL created, and I see it's not even being hit.
First, is there any good way to achieve what I am doing, and second, is there a specific way to add the ELB CNAME on CloudFront?
Thanks and Regards,
Jay
Service update: The original, extended answer below was correct at the time it was written, but is now primarily applicable to Classic ELB, because -- as of 2016-12-07 -- Application Load Balancers (elbv2) can now be directly integrated with the Web Application Firewall (AWS WAF).
Starting [2016-12-07] AWS WAF (Web Application Firewall) is available on the Application Load Balancer (ALB). You can now use AWS WAF directly on Application Load Balancers (both internal and external) in a VPC, to protect your websites and web services. With this launch customers can now use AWS WAF on both Amazon CloudFront and Application Load Balancer.
https://aws.amazon.com/about-aws/whats-new/2016/12/AWS-WAF-now-available-on-Application-Load-Balancer/
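With that update in mind, associating a web ACL directly with an ALB is now a one-call operation; a sketch using the current wafv2 API in Python/boto3 (the ARNs are placeholders):

import boto3

wafv2 = boto3.client('wafv2')

# Attach an existing REGIONAL-scope web ACL to the ALB.
wafv2.associate_web_acl(
    WebACLArn='arn:aws:wafv2:us-east-1:123456789012:'
              'regional/webacl/my-web-acl/11111111-2222-3333-4444-555555555555',
    ResourceArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:'
                'loadbalancer/app/my-alb/0123456789abcdef',
)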
It seems like you do need some clarification on how these pieces fit together.
So let's say your actual site that you want to secure is app.example.com.
It sounds as if you have a CNAME elb.example.com pointing to the assigned hostname of the ELB, which is something like example-123456789.us-west-2.elb.amazonaws.com. If you access either of these hostnames, you're connecting directly to the ELB -- regardless of what's configured in CloudFront or WAF. These machines are still accessible over the Internet.
The trick here is to route the traffic to CloudFront, where it can be firewalled by WAF, which means a couple of additional things have to happen: first, this means an additional hostname is needed, so you configure app.example.com in DNS as a CNAME (or Alias, if you're using Route 53) pointing to the dxxxexample.cloudfront.net hostname assigned to your distribution.
You can also access your site using the assigned CloudFront hostname directly, for testing. Accessing this endpoint from the blocked IP address should now indeed result in the request being denied.
So, the CloudFront endpoint is where you need to send your traffic -- not directly to the ELB.
Doesn't that leave your ELB still exposed?
Yes, it does... so the next step is to plug that hole.
If you're using a custom origin, you can use custom headers to prevent users from bypassing CloudFront and requesting content directly from your origin.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/forward-custom-headers.html
The idea here is that you will establish a secret value known only to your servers and CloudFront. CloudFront will send this in the headers along with every request, and your servers will require that value to be present or else they will play dumb and throw an error -- such as 503 Service Unavailable or 403 Forbidden or even 404 Not Found.
So, you make up a header name, like X-My-CloudFront-Secret-String and a random string, like o+mJeNieamgKKS0Uu0A1Fqk7sOqa6Mlc3 and configure this as a Custom Origin Header in CloudFront. The values shown here are arbitrary examples -- this can be anything.
Then configure your application web server to deny any request where this header and the matching value are not present -- because this is how you know the request came from your specific CloudFront distribution. Anything else (other than ELB health checks, for which you need to make an exception) is not from your CloudFront distribution, and is therefore unauthorized by definition, so your server needs to deny it with an error, but without explaining too much in the error message.
This header and its expected value remains a secret because it will not be sent back to the browser by CloudFront -- it's only sent in the forward direction, in the requests that CloudFront sends to your ELB.
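On the application side, the check itself is trivial. A minimal sketch, using Flask purely for illustration; the header name, secret value, and health check path are arbitrary examples, as above:

from flask import Flask, abort, request

app = Flask(__name__)

SECRET_HEADER = 'X-My-CloudFront-Secret-String'
SECRET_VALUE = 'o+mJeNieamgKKS0Uu0A1Fqk7sOqa6Mlc3'

@app.before_request
def require_cloudfront_secret():
    # Let the ELB health check through; it won't carry the custom header.
    if request.path == '/health':
        return None
    # Anything without the secret didn't come through our distribution.
    if request.headers.get(SECRET_HEADER) != SECRET_VALUE:
        abort(403)  # deny, without explaining too much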
Note that you should get an SSL cert for your ELB (for the elb.example.com hostname) and configure CloudFront to forward all requests to your ELB using HTTPS. The likelihood of interception of traffic between CloudFront and ELB is low, but this is a protection you should consider implementing.
You can optionally also reduce (but not eliminate) most unauthorized access by allowing only the CloudFront IP address ranges in the ELB security group. The CloudFront address ranges are documented: search the JSON for blocks designated as CLOUDFRONT, and allow only these in the ELB security group. Note that if you do this, you still need to set up the custom origin header configuration discussed above, because if you only block at the IP level, you're still technically allowing anybody's CloudFront distribution to access your ELB. Your CloudFront distribution shares a pool of IP addresses with other CloudFront distributions, so the fact that a request arrives from CloudFront is not a sufficient guarantee that it came from your CloudFront distribution. Note also that you need to sign up for change notifications so that if new address ranges are added to CloudFront, you'll know to add them to your security group.
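The published ranges are machine-readable, so keeping the security group current can be scripted; a sketch that extracts just the CLOUDFRONT blocks (the URL is the documented one):

import json
import urllib.request

IP_RANGES_URL = 'https://ip-ranges.amazonaws.com/ip-ranges.json'

with urllib.request.urlopen(IP_RANGES_URL) as response:
    ranges = json.load(response)

# Keep only the blocks designated CLOUDFRONT, as described above.
cloudfront_cidrs = sorted({p['ip_prefix'] for p in ranges['prefixes']
                           if p['service'] == 'CLOUDFRONT'})

print(f'{len(cloudfront_cidrs)} CIDR blocks to allow in the ELB security group')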