Is it possible with Istio to do mTLS origination for egress traffic to wildcard arbitrary hosts, with the following restrictions:
The application pods have to make simple HTTP requests, not HTTPS.
The mTLS origination should happen at the egress gateway.
Custom client and CA certificates have to be used.
Basically, this scenario differs from the example in the official documentation, Egress using wildcard arbitrary hosts with SNI proxy, in the following ways:
The application pod uses HTTP instead of HTTPS: curl http://host1.example.com. So the mTLS origination should happen at the egress gateway, not at the application pod.
Custom client and CA certificates are used.
I tried the desired scenario, but there is a problem: when the egress gateway routes the traffic to the SNI nginx proxy (sni-proxy), the proxy can't extract the hostname from the TLS SNI header. The error is: *18 no host in upstream ":443". SNI is not set by the application pod (sleep), because it uses plain HTTP instead of HTTPS as in the official example. When SNI is not set, the SNI proxy can't forward the traffic to the specific host.
Is it possible in this scenario to configure the Istio egress gateway to originate mTLS to a specific host using only a wildcard hostname in resources like ServiceEntry, DestinationRule, VirtualService, etc.? For example, could the application pod pass an HTTP request header, such as Host, which the egress gateway would use to originate mTLS and set the SNI header, which the SNI proxy would then use to forward the traffic to the specific host? In that case, the egress gateway would originate mTLS traffic to a specific host dynamically, based on the HTTP request header.
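For reference, the mTLS origination piece of my attempt looks roughly like this (resource names and certificate paths are illustrative, not from the official example). As far as I can tell, the sni field only accepts a static value, which is exactly the limitation I'm running into:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: originate-mtls-for-wildcard
spec:
  host: "*.example.com"            # the wildcard host from the ServiceEntry
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: MUTUAL               # originate mTLS with custom certs
        clientCertificate: /etc/certs/client-cert.pem
        privateKey: /etc/certs/client-key.pem
        caCertificates: /etc/certs/ca-cert.pem
        sni: host1.example.com     # static; I'd like this derived from the Host header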
Related
I have an Application Load Balancer with an HTTPS certificate and a few listener rules. In front of it, I deployed a CloudFront distribution that communicates with the load balancer and serves the content on the web. When the origin protocol in CloudFront is HTTP, the communication between the origin and CloudFront works, but when the origin protocol is configured to HTTPS in CloudFront, I get a 502 Bad Gateway error.
To use HTTPS for the connection from CloudFront to the ALB, while still using the ALB's DNS name as the origin, set a custom cache policy in the CloudFront distribution's behavior settings.
In the custom cache policy's settings, specify the Host header to be included in the cache key (in the console, this is under "Cache key settings").
This way, the ALB will know to use the correct SSL certificate by referring to the hostname in the Host header, not the one in the ALB's DNS name. (This assumes the SSL certificate on the ALB's listener is valid and matches the domain name being used to access CloudFront.)
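If you prefer the CLI, such a cache policy can be created roughly as follows (the policy name is made up; the essential part is whitelisting Host in HeadersConfig):

aws cloudfront create-cache-policy --cache-policy-config '{
    "Name": "IncludeHostHeader",
    "MinTTL": 0,
    "ParametersInCacheKeyAndForwardedToOrigin": {
        "EnableAcceptEncodingGzip": false,
        "HeadersConfig": {
            "HeaderBehavior": "whitelist",
            "Headers": { "Quantity": 1, "Items": ["Host"] }
        },
        "CookiesConfig": { "CookieBehavior": "none" },
        "QueryStringsConfig": { "QueryStringBehavior": "none" }
    }
}'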
Quoted from:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/http-502-bad-gateway.html
One of the domain names in the certificate must match one or both of the following values:
- The value that you specified for Origin Domain Name for the applicable origin in your distribution.
- The value of the Host header if you configured CloudFront to forward the Host header to your origin.
The DNS name of the ALB is: openn-dev-alb4-1497166043.us-east-1.elb.amazonaws.com
You can't use that domain with HTTPS. Your SSL certificate must be set up for your own domain, not the domain provided by AWS. The reason is that you can only get a valid public SSL certificate for a domain that you (or your company) fully control, not for the default AWS ALB domain.
I have the following setup at AWS ECS:
- A container with a Caddy web server on port 80 that serves static files and proxies /api/* requests to the backend
- A container with the backend on port 8000
- An EC2 instance in ECS
- An ALB at the subdomain http://some-subdomain-12345.us-east-2.elb.amazonaws.com/ (the subdomain was provided automatically by AWS) with an HTTP listener
I want to set up an SSL certificate and an HTTPS listener for the ALB at this subdomain that was provided by AWS - how can I do it?
P.S. I have seen an option for an ALB with an HTTPS listener when attaching a custom domain, e.g. example.com, in which case AWS will provide an SSL certificate for it. But this is a pet-project environment and I don't care about a real domain.
You can put your ALB behind CloudFront, which, unlike ALB, gives you a TLS certificate by default. So you can address your application at, e.g.:
https://d3n6jitgitr0i4.cloudfront.net
Apart from the TLS certificate, it gives you the ability to cache your static resources at CloudFront's edge locations, and it reduces the latency of the TLS handshake round trips.
I want to set up an SSL certificate and an HTTPS listener for the ALB at this subdomain that was provided by AWS - how can I do it?
You can't do this. That is not your domain (AWS owns it), so you can't associate any SSL certificate with it. You have to have your own domain that you control. Once you obtain the domain, you can get a free SSL certificate from AWS ACM.
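For example, once you have your own domain, requesting a free certificate is a one-liner (the domain is a placeholder):

aws acm request-certificate \
    --domain-name example.com \
    --subject-alternative-names "*.example.com" \
    --validation-method DNS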
This could be a solution without using subdomains, using path redirection instead:
https://caddy.community/t/caddy-2-reverse-proxy-to-path/9193
AWS Network Load Balancers support TLS termination. This means a certificate can be created in AWS Certificate Manager and installed on an NLB; TCP connections using TLS will then be decrypted at the NLB and either re-encrypted or passed through to a non-encrypted listener. Details are here: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html.
The benefits of using AWS Certificate Manager are that the certificate will be managed and rotated automatically by AWS. No need to put public-facing certificates on private instances.
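For reference, creating such a TLS listener looks roughly like this (all ARNs are placeholders):

aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/0123456789abcdef \
    --protocol TLS --port 443 \
    --certificates CertificateArn=arn:aws:acm:us-east-1:123456789012:certificate/11112222-3333-4444-5555-666677778888 \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/0123456789abcdef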
I'd like to route TCP connections to the NLB based on the SNI, i.e. connections to the same port and IP can be routed to different targets based on the server name that was requested by the client. Whilst I can see that multiple TLS certificates for a given listener are supported using SNI to determine which certificate to serve up, I don't see how to configure listeners based on SNI.
I have therefore put HAProxy behind an NLB and want to route to different backends using SNI. I terminate TLS with the client at the NLB, re-encrypt the traffic between the NLB and HAProxy using a self-signed certificate on HAProxy, then route to the backends using unencrypted TCP.
(client) --TLS/TCP--> (NLB on port 443) --TLS/TCP--> (AWS target group on port 5000, running HAProxy) --TCP--> backends on different IPs/ports
Does AWS NLB pass through the SNI details to the target groups?
If I connect directly to HAProxy (not via NLB) then I can route to the backend of choice by using SNI, but I can't get the SNI routing to work if I connect via the NLB.
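For reference, the sort of SNI routing I mean looks roughly like this in HAProxy (a sketch with illustrative names; this variant inspects the raw ClientHello, so it assumes TLS is passed through to HAProxy rather than terminated in front of it):

frontend tls_in
    mode tcp
    bind *:5000
    # Wait for the TLS ClientHello so the SNI extension can be inspected
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_sni -m found }
    use_backend be_foo if { req_ssl_sni -i foo.example.com }
    use_backend be_bar if { req_ssl_sni -i bar.example.com }

backend be_foo
    mode tcp
    server foo-1 10.0.1.10:8080

backend be_bar
    mode tcp
    server bar-1 10.0.2.10:8080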
According to this SO answer and to the Istio docs, if you terminate TLS on the load balancer it won't carry the SNI to the target group. I had the exact same issue, and I ended up solving it by setting the host to '*' on the ingress Gateway and then specifying the hosts on the different VirtualServices (as recommended here).
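A minimal sketch of that workaround (resource names are mine; it assumes the NLB terminates TLS and forwards plain HTTP to the ingress gateway, so routing happens on the Host header rather than the SNI):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: wildcard-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                          # accept any host; the SNI is gone after the NLB
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: foo-routes
spec:
  hosts:
  - foo.example.com                # match on the HTTP Host header instead
  gateways:
  - wildcard-gateway
  http:
  - route:
    - destination:
        host: foo-svc              # hypothetical in-mesh service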
Alternatively, I think TLS passthrough could also work, but I haven't tried it: you would set the certificate in an Istio Gateway secret and do TLS passthrough on the NLB, but then you can't use the AWS ACM SSL certificates, as pointed out in the previous link.
We use an AWS Application Load Balancer to route incoming requests to our servers; we created an SSL certificate and set it on the load balancer. It listens for both HTTP (port 80) and HTTPS (port 443) traffic. In both cases, traffic is forwarded to the target group instances' HTTP port 80.
On these instances there are nginx servers, configured to listen on HTTP port 80 of the instance they are in. (These instances are Elastic Container Service instances.)
When I update the nginx.conf file to redirect incoming HTTP requests to HTTPS, we get a redirect loop. Even if the original request is an HTTPS request, behind the load balancer the EC2 instance is listening on the HTTP port. So it doesn't matter whether the original request is HTTP or HTTPS; nginx redirects infinitely.
I saw that CloudFront is an option, but I'm not interested in using another AWS service and paying them extra money just to overcome this issue.
Another solution might be changing the HTTP listener to HTTPS inside the instances registered to the ELB's target group. Since we are using ECS, we would have to find a way to secure our SSL certificate keys while building the Docker image. I don't want to put our SSL certificate inside the code repository for Jenkins to use. There would be extra work if I chose this solution.
Do you have any simpler ideas to fix this issue?
Nginx will always get requests over HTTP, so obviously you can't tell it to redirect all HTTP requests. The ELB sets a special HTTP header, X-Forwarded-Proto, on the requests it sends to your back-end servers. You need to configure nginx to use that header to check whether the connection between the browser and the ELB is over HTTP or HTTPS, and redirect only if it is HTTP. I would check this answer on Server Fault.
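In nginx terms, the check looks something like this sketch (the server name is a placeholder; the $http_x_forwarded_proto test is the essential part):

server {
    listen 80;
    server_name example.com;

    # The ELB always speaks plain HTTP to us; X-Forwarded-Proto carries
    # the scheme of the original browser-to-ELB connection.
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }

    location / {
        # ... normal static file / proxy configuration ...
    }
}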
As Mark B mentioned, checking the custom header value for the incoming protocol is the simplest way to handle this and eliminate the redirect loop.
However, if you want to ensure end-to-end encryption, you can deploy self-signed certificates in your containers. The load balancer does NOT require a valid, public certificate in order to connect to an HTTPS origin.
That way you can forward port 80 on the ALB to port 80 on the target group (and you could even have a separate target group just for redirecting) and force the redirect as you're doing now, and forward port 443 on the ALB to port 443 on the target group.
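Generating such a throwaway certificate at image build time can be as simple as this (paths and CN are arbitrary):

# Self-signed key and cert; the ALB will not validate it, so the CN can be anything
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=localhost" \
    -keyout /etc/nginx/ssl/server.key \
    -out /etc/nginx/ssl/server.crt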
We are implementing a microservices architecture in AWS. We have several EC2 instances with microservices deployed on different ports. We also have an internet-facing Application Load Balancer that routes to different services based on the port.
e.g.:
xxxx-xx.xx.elb.amazonaws.com:8080/ goes to microservice 1
xxxx-xx.xx.elb.amazonaws.com:8090/ goes to microservice 2
We need to have a domain name instead of the ELB hostname, and the port should not be exposed through the domain name either. Almost all the resources I found regarding Route 53 use an alias, which does the following:
xx.xxxx.co.id -> xxxx-xx.xx.elb.amazonaws.com or
xx.xxxx.co.id -> 111.111.111.11 (static ip)
1) Do we need separate domains for each microservice?
2) How can we use an alias to point domains to a specific port of the ELB?
3) Is it possible to use this setup if the domains are from a provider other than AWS?
Important Update
Since this answer was originally written, Application Load Balancer has introduced the capability to route requests to a specific target group based on the Host header of the incoming request.
The incoming host header can now be used to route requests to specific instances and ports.
Additionally, ALB introduced SNI support, allowing you to associate multiple TLS (SSL) certificates with a single balancer, and the correct certificate will be automatically selected based on the SNI presented by the client when TLS is negotiated. Multi-domain and wildcard certs from Amazon Certificate Manager also work with ALB.
Based on these factors, no separate ports or different listeners are needed -- simply assign hostnames and/or path prefixes for each service, and map those patterns to the appropriate target group of instances.
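As an illustration, adding a host-header rule to an ALB listener looks roughly like this (ARNs, priority, and hostname are placeholders):

aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/0123456789abcdef/0123456789abcdef \
    --priority 10 \
    --conditions '[{"Field":"host-header","HostHeaderConfig":{"Values":["foo.api.example.com"]}}]' \
    --actions '[{"Type":"forward","TargetGroupArn":"arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/foo-svc/0123456789abcdef"}]'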
The original answer is no longer accurate, but is included below.
1) Do we need separate domains for each microservice?
No, this won't help you. ALB does not interpret the hostname attached to the incoming request.
Separate hostnames in the same domain won't directly accomplish your objective, either.
2) How can we use an alias to point domains to a specific port of the ELB?
Domains do not point to ports. Hostnames do not point to ports. DNS is only used for address resolution. This is true everywhere on the Internet.
3) Is it possible to use this setup if the domains are from a provider other than AWS?
This is not a limitation of AWS. DNS simply does not work this way.
A service endpoint is unaware of the DNS records that point to it. The DNS entry itself is strictly used for discovering an IP address that can be used to access the endpoint. After that, the endpoint does not actually know anything about the DNS, and there is no way to tell the browser, via DNS, to use a different port.
For HTTP, the implicit port is 80. For HTTPS, it is 443. Unless a port is provided in the URL, these are the only usable ports.
However, in HTTP and HTTPS, each request is accompanied by a Host: header, sent by the web browser with each request. This is the hostname in the address bar.
To differentiate between requests for different hostnames arriving at a device (such as ELB/ALB), the device at the endpoint must interpret the incoming Host header and route the request to a back-end system providing that service.
ALB does not currently support this capability.
ALB does, however, support choosing endpoints based on a path prefix. So microservices.example.com/api/foo could route to one set of services, while microservices.example.com/api/bar could route to another.
But ALB does not directly support routing by host header.
In my infrastructure, we use a combination of ELB and ALB, but the instances behind the load balancer are not the applications. Instead, they are instances that run HAProxy load balancer software and route the requests to the back-end.
A brief example of the important configuration elements looks like this:
frontend main
    bind *:80                    # listening port is illustrative
    mode http                    # hdr() matching requires HTTP mode
    # Select a backend by matching the incoming Host: header (case-insensitive)
    use_backend svc1 if { hdr(Host) -i foo.example.com }
    use_backend svc2 if { hdr(Host) -i bar.example.com }

backend svc1
    server foo-a 192.168.2.24:8080
    server foo-b 192.168.12.18:8080

backend svc2
    ....
The ELB terminates the SSL and selects a proxy at random; the proxy checks the Host: header and selects a backend (a group of one or more instances) to which the request will be routed. It is a thin layer between the ELB and the application that handles request routing by examining the Host header or any other characteristic of the request.
This is one solution, but is a somewhat advanced configuration, depending on your expertise.
If you are looking for an out-of-the-box, serverless, AWS-centric solution, then the answer is actually found in CloudFront. Yes, it's a CDN, but it has several other applications, including as a reverse proxy.
- For each service, choose a hostname from your domain to assign to that service: foo.api.example.com or bar.api.example.com.
- For each service, create a CloudFront distribution.
- Configure the Alternate Domain Name of each distribution to use that service's assigned hostname.
- Set the Origin Domain Name to the ELB hostname.
- Set the Origin HTTP Port to the service's specific port on the ALB, e.g. 8090.
- Configure the default Cache Behavior to forward any headers you need. If you don't need the caching capability of CloudFront, choose Forward All Headers. Also enable forwarding of Query Strings and Cookies if needed.
- In Route 53, create foo.api.example.com as an Alias to that specific CloudFront distribution's hostname, e.g. dxxxexample.cloudfront.net.
Your problem is solved.
You see what I did there?
For each hostname you configure, a dedicated CloudFront distribution receives the request on the standard ports (80/443) and, based on which distribution the Host header matches, routes the request to the same ELB/ALB hostname but on a custom port number.
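The Route 53 step above might look like this via the CLI (the hosted zone ID and names are placeholders; Z2FDTNDATAQYW2 is the fixed hosted-zone ID used for all CloudFront alias targets):

aws route53 change-resource-record-sets \
    --hosted-zone-id ZEXAMPLE12345 \
    --change-batch '{
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "foo.api.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": "dxxxexample.cloudfront.net",
                    "EvaluateTargetHealth": false
                }
            }
        }]
    }'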
I think there is a possibility that he can build what he's describing. I was in the same boat for a while; here are some options for you to consider:
- In R53, create a hosted zone and point your domain at it.
- Optional step: create ALIAS records. You can do this for each subdomain or app. Leave the ALIAS field blank if using the root domain.
- Create a record set using the SRV type, which is a service-lookup record for port redirection. Try to point this at your LB's port 80, and alias the subdomains.
- Change your load balancer's listeners to listen on port 80, then redirect app traffic based on your apps' port settings.

I haven't used SRV records myself, but this should definitely point you in that direction.
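For completeness, an SRV record has this shape in zone-file terms (values are placeholders):

; _service._proto.name   TTL  class type priority weight port target
_http._tcp.example.com.  300  IN    SRV  10 5 8080 xxxx-xx.xx.elb.amazonaws.com.

One caveat: web browsers do not consult SRV records for HTTP(S), so this helps service-to-service discovery more than end users hitting a URL.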