I am trying to understand the difference between the VirtualService and the Gateway in Istio. As far as I can understand, a VirtualService is also used for routing traffic, the same as a Gateway.
According to istio documentation:
A VirtualService defines a set of traffic routing rules to apply when a host is addressed. Each routing rule defines matching criteria for traffic of a specific protocol. If the traffic is matched, then it is sent to a named destination service (or subset/version of it) defined in the registry.
Gateway describes a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections. The specification describes a set of ports that should be exposed, the type of protocol to use, SNI configuration for the load balancer, etc.
A Gateway is generally used to expose a VirtualService to the outside world. With this object we can control how, and which, traffic from outside will reach one of our VirtualServices. It is also possible to specify how the Gateway treats the traffic, e.g. TLS termination or SNI passthrough.
There are some configurations that are possible only when both Gateway and VirtualService work together.
In short, the Gateway is for external traffic entering or leaving the mesh, while the VirtualService routes traffic that is already inside the Istio mesh.
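As a rough illustration, here is a minimal Gateway/VirtualService pair that exposes an in-mesh service to the outside world (a sketch only; the hostname myapp.example.com, the resource names, and the destination service are placeholders):

    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: my-gateway
    spec:
      selector:
        istio: ingressgateway          # run on Istio's default ingress gateway pods
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - "myapp.example.com"
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: my-virtualservice
    spec:
      hosts:
      - "myapp.example.com"
      gateways:
      - my-gateway                     # bind the routing rules to the Gateway above
      http:
      - route:
        - destination:
            host: myapp.default.svc.cluster.local
            port:
              number: 8080

Without the gateways: field, the same VirtualService would only apply to traffic that is already inside the mesh.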
Related
Is there a way to create forwarding rules that redirect to a different host?
For example, I want to set up a load balancer with a rule that if the host = xyz.com, then forward to host = abc.com. Is this type of setup possible?
Let me help you with this.
Forwarding rules
A forwarding rule and its corresponding IP address represent the frontend configuration of a Google Cloud load balancer.
Note: Forwarding rules are also used for protocol forwarding, Classic VPN gateways, and Traffic Director to provide forwarding information in the control plane.
Each forwarding rule references an IP address and one or more ports on which the load balancer accepts traffic. Some Google Cloud load balancers limit you to a predefined set of ports, and others let you specify arbitrary ports.
The forwarding rule also specifies an IP protocol. For Google Cloud load balancers, the IP protocol is always either TCP or UDP.
Depending on the load balancer type, the following is true:
A forwarding rule specifies a backend service, target proxy, or target pool.
A forwarding rule and its IP address are internal or external.
Also, depending on the load balancer and its tier, a forwarding rule is either global or regional.
As mentioned, the forwarding rule specifies a backend service, which can help you reach your deployment.
Additionally, I want to share with you the following information about URL maps, which can help you too.
URL maps
Google Cloud HTTP(S) load balancers and Traffic Director use a Google Cloud configuration resource called a URL map to route requests to backend services or backend buckets.
For example, with an external HTTP(S) load balancer, you can use a single URL map to route requests to different destinations based on the rules configured in the URL map:
Requests for https://example.com/video go to one backend service.
Requests for https://example.com/audio go to a different backend service.
Requests for https://example.com/images go to a Cloud Storage backend bucket.
Requests for any other host and path combination go to a default backend service.
URL maps are used with the following Google Cloud products:
External HTTP(S) Load Balancing (global and regional)
Internal HTTP(S) Load Balancing
Traffic Director
There are two types of URL map resources available: global and regional. The type of resource that you use depends on the product's load balancing scheme.
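For the host-based case you describe (host = xyz.com going somewhere else), a URL map can be written in YAML and imported with gcloud compute url-maps import. The sketch below redirects requests for xyz.com to abc.com; the project and backend service names are placeholders, and the exact fields can vary by load balancer type:

    name: example-url-map
    defaultService: projects/my-project/global/backendServices/default-backend
    hostRules:
    - hosts:
      - xyz.com
      pathMatcher: xyz-redirect
    pathMatchers:
    - name: xyz-redirect
      defaultUrlRedirect:            # answer requests for xyz.com with a redirect to abc.com
        hostRedirect: abc.com
        stripQuery: false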
There is another solution named "HTTP-to-HTTPS redirect" to redirect all requests from port 80 (HTTP) to port 443 (HTTPS).
HTTPS uses TLS (SSL) to encrypt HTTP requests and responses, making it safer and more secure. A website that uses HTTPS has https:// in the beginning of its URL instead of http://.
But I am not sure if the HTTP-to-HTTPS redirect fits your description.
I hope this information helps you choose the best option for your deployment.
Is it possible with Istio to do mTLS origination for egress traffic to wildcard arbitrary hosts, with the following restrictions:
The application pods have to make simple HTTP requests, not HTTPS.
The mTLS origination should happen at the egress gateway.
Custom client and CA certificates have to be used.
Basically, this scenario differs from the example in the official documentation, Egress using wildcard arbitrary hosts with SNI proxy, in the following ways:
The application pod is using HTTP instead of HTTPS: curl http://host1.example.com. So the mTLS origination should happen at the egress gateway, not at the application pod.
Custom client and CA certificates are used.
I tried the desired scenario, but there is a problem: when the egress gateway routes the traffic to the SNI nginx proxy (sni-proxy), it can't extract the hostname from the SNI TLS header. The error is: *18 no host in upstream ":443". SNI is not set by the application pod (sleep), because it's using plain HTTP instead of HTTPS as in the official example. When SNI is not set, the SNI proxy can't forward the traffic to a specific host.
Is it possible in this scenario to configure the Istio egress gateway to originate mTLS to a specific host using only a wildcard hostname in resources like ServiceEntry, DestinationRule, VirtualService, etc.? For example, could the application pod pass an HTTP request header, like Host, which the egress gateway then uses to originate mTLS and set the SNI header, which the SNI proxy in turn uses to forward the traffic to the specific host? In that case the egress gateway would originate mTLS traffic to a specific host dynamically, based on the HTTP request header.
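For context, the mTLS origination with custom certificates at the egress gateway is configured roughly like the DestinationRule sketch below (the wildcard host and certificate paths are placeholders, not my actual values). As far as I can tell, the tls.sni field only accepts a static value, which is why I am asking whether the origination can be driven by the request's Host header instead:

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: originate-mtls
      namespace: istio-system
    spec:
      host: "*.example.com"            # wildcard external host
      trafficPolicy:
        tls:
          mode: MUTUAL                 # originate mTLS with custom client/CA certs
          clientCertificate: /etc/certs/client-cert.pem
          privateKey: /etc/certs/client-key.pem
          caCertificates: /etc/certs/ca-cert.pem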
I have a k8s service defined as type: LoadBalancer, which sets up an external LB. Can I identify at the application level that an incoming request was routed from the LoadBalancer?
Are there any guaranteed HTTP headers? Can I define custom headers for that service that would be added to all incoming requests?
If your internal ingress is using nginx as an ingress controller, you can add a custom header that will indicate that.
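For example (a sketch, assuming the ingress-nginx controller; the header name, value, hostname, and service below are arbitrary placeholders), a configuration-snippet annotation can add a request header before the request is proxied to your pods:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
      annotations:
        # add a request header that the application can check
        nginx.ingress.kubernetes.io/configuration-snippet: |
          proxy_set_header X-From-Ingress "true";
    spec:
      rules:
      - host: app.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80

Your application can then treat any request carrying X-From-Ingress as having come through the ingress path.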
ELB guide says that:
Application Load Balancers and Classic Load Balancers add X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Port headers to the request.
Have you already tried using these?
AWS Network Load Balancers support TLS termination. This means a certificate can be created in AWS Certificate Manager and installed onto a NLB and then TCP connections using TLS encryption will be decrypted at the NLB and then either re-encrypted or passed through to a non-encrypted listener. Details are here: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html.
The benefits of using AWS Certificate Manager are that the certificate will be managed and rotated automatically by AWS. No need to put public-facing certificates on private instances.
I'd like to route TCP connections to the NLB based on the SNI, i.e. connections to the same port and IP can be routed to different targets based on the server name that was requested by the client. Whilst I can see that multiple TLS certificates for a given listener are supported using SNI to determine which certificate to serve up, I don't see how to configure listeners based on SNI.
I have therefore put HAProxy behind a NLB and want to route to different backends using SNI. I terminate TLS with the client at the NLB, re-encrypt the traffic between the NLB and HAProxy using a self-signed certificate on HAProxy, then route to the backends using unencrypted TCP.
(client) --TLS/TCP--> (NLB on port 443) --TLS/TCP--> (AWS target group on port 5000, running HAProxy) --TCP--> backends on different IPs/ports
Does AWS NLB pass through the SNI details to the target groups?
If I connect directly to HAProxy (not via NLB) then I can route to the backend of choice by using SNI, but I can't get the SNI routing to work if I connect via the NLB.
According to this SO answer and to the Istio docs, if you terminate TLS on the load balancer, it won't carry SNI to the target group. I had the exact same issue, and I ended up solving it by setting the host as '*' on the ingress Gateway and then specifying the hosts on the different VirtualServices (as recommended here).
I think that this solution could also work, but I haven't tried it. You would have to set the certificate in the Istio Gateway secret and do TLS passthrough on the NLB, but then you can't use the AWS ACM SSL certificates, as pointed out in the previous link.
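Roughly what the first approach looked like for me (all names, hosts, and ports below are placeholders): the Gateway accepts any host, since the SNI is not carried through the NLB, and each VirtualService then routes by the HTTP Host header.

    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: ingress-gateway
    spec:
      selector:
        istio: ingressgateway
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - "*"                          # accept any host; SNI is lost behind the NLB
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: foo-routes
    spec:
      hosts:
      - foo.example.com                # route by the HTTP Host header instead
      gateways:
      - ingress-gateway
      http:
      - route:
        - destination:
            host: foo.default.svc.cluster.local
            port:
              number: 8080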
We are implementing a micro-services architecture in AWS. We have several EC2 instances which has the micro-services deployed on different ports. We also have an internet facing Application Load Balancer, which routes to different services based on the port.
eg:
xxxx-xx.xx.elb.amazonaws.com:8080/ goes to microservice 1
xxxx-xx.xx.elb.amazonaws.com:8090/ goes to microservice 2
We need to have a domain name instead of the ELB hostname, and the port should not be exposed through the domain name either. Almost all the resources I found regarding Route 53 use an alias, which does the following:
xx.xxxx.co.id -> xxxx-xx.xx.elb.amazonaws.com or
xx.xxxx.co.id -> 111.111.111.11 (static ip)
1) Do we need separate domains for each micro service?
2) How to use alias to point domains to a specific port of the ELB?
3) Is it possible to use this setup if the domains are from a provider other than AWS?
Important Update
Since this answer was originally written, Application Load Balancer introduced the capability for ALB to route requests to a specific target group based on the Host header of the incoming request.
The incoming host header can now be used to route requests to specific instances and ports.
Additionally, ALB introduced SNI support, allowing you to associate multiple TLS (SSL) certificates with a single balancer, and the correct certificate will be automatically selected based on the SNI presented by the client when TLS is negotiated. Multi-domain and wildcard certs from Amazon Certificate Manager also work with ALB.
Based on these factors, no separate ports or different listeners are needed -- simply assign hostnames and/or path prefixes for each service, and map those patterns to the appropriate target group of instances.
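For example, if you manage the load balancer with CloudFormation, a host-header rule on an ALB listener can be sketched like this (the listener and target group references are assumptions, defined elsewhere in your stack, and the hostname is a placeholder):

    Resources:
      Microservice1HostRule:
        Type: AWS::ElasticLoadBalancingV2::ListenerRule
        Properties:
          ListenerArn: !Ref HttpsListener          # assumed listener on port 443
          Priority: 10
          Conditions:
            - Field: host-header
              HostHeaderConfig:
                Values:
                  - svc1.xxxx.co.id                # requests for this hostname...
          Actions:
            - Type: forward
              TargetGroupArn: !Ref Microservice1TargetGroup   # ...forward to this target group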
The original answer is no longer accurate, but is included below.
1.) Do we need separate domains for each micro service?
No, this won't help you. ALB does not interpret the hostname attached to the incoming request.
Separate hostnames in the same domain won't directly accomplish your objective, either.
2.) How to use alias to point domains to a specific port of the ELB?
Domains do not point to ports. Hostnames do not point to ports. DNS is only used for address resolution. This is true everywhere on the Internet.
3.) Is it possible to use this setup if the domains are from another provider other than AWS.
This is not a limitation of AWS. DNS simply does not work this way.
A service endpoint is unaware of the DNS records that point to it. The DNS entry itself is strictly used for discovering an IP address that can be used to access the endpoint. After that, the endpoint does not actually know anything about the DNS, and there is no way to tell the browser, via DNS, to use a different port.
For HTTP, the implicit port is 80. For HTTPS, it is 443. Unless a port is provided in the URL, these are the only usable ports.
However, in HTTP and HTTPS, each request is accompanied by a Host: header, sent by the web browser with each request. This is the hostname in the address bar.
To differentiate between requests for different hostnames arriving at a device (such as ELB/ALB), the device at the endpoint must interpret the incoming Host header and route the request to a back-end system providing that service.
ALB does not currently support this capability.
ALB does, however, support choosing endpoints based on a path prefix. So microservices.example.com/api/foo could route to one set of services, while microservices.example.com/api/bar could route to another.
But ALB does not directly support routing by host header.
In my infrastructure, we use ELB or ALB, but the instances behind the load balancer are not the applications. Instead, they are instances that run the HAProxy load balancer software and route the requests to the back-end services.
A brief example of the important configuration elements looks like this:
    frontend main
        bind *:80                                            # traffic arriving from the ELB/ALB
        use_backend svc1 if { hdr(Host) -i foo.example.com } # pick a backend by Host header
        use_backend svc2 if { hdr(Host) -i bar.example.com }

    backend svc1
        server foo-a 192.168.2.24:8080
        server foo-b 192.168.12.18:8080

    backend svc2
        ....
The ELB terminates the SSL and selects a proxy at random and the proxy checks the Host: header and selects a backend (a group of 1 or more instances) to which the request will be routed. It is a thin layer between the ELB and the application, which handles the request routing by examining the host header or any other characteristic of the request.
This is one solution, but is a somewhat advanced configuration, depending on your expertise.
If you are looking for an out-of-the-box, serverless, AWS-centric solution, then the answer is actually found in CloudFront. Yes, it's a CDN, but it has several other applications, including as a reverse proxy.
For each service, choose a hostname from your domain to assign to that service, foo.api.example.com or bar.api.example.com.
For each service, create a CloudFront distribution.
Configure the Alternate Domain Name of each distribution to use that service's assigned hostname.
Set the Origin Domain Name to the ELB hostname.
Set the Origin HTTP Port to the service's specific port on the ALB, e.g. 8090.
Configure the default Cache Behavior to forward any headers you need. If you don't need the caching capability of CloudFront, choose Forward All Headers. Also enable forwarding of Query Strings and Cookies if needed.
In Route 53, create foo.api.example.com as an Alias to that specific CloudFront distribution's hostname, e.g. dxxxexample.cloudfront.net.
Your problem is solved.
You see what I did there?
For each hostname you configure, a dedicated CloudFront distribution receives the request on the standard ports (80/443) and -- based on which distribution the host header matches -- CloudFront routes the requests to the same ELB/ALB hostname but a custom port number.
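A sketch of one such distribution, if you define it with CloudFormation (the alias, ELB hostname, port, and certificate ARN are placeholders for your own values):

    Resources:
      FooApiDistribution:
        Type: AWS::CloudFront::Distribution
        Properties:
          DistributionConfig:
            Enabled: true
            Aliases:
              - foo.api.example.com              # hostname assigned to this service
            ViewerCertificate:
              AcmCertificateArn: arn:aws:acm:us-east-1:111122223333:certificate/example   # placeholder ACM cert
              SslSupportMethod: sni-only
            Origins:
              - Id: elb-origin
                DomainName: xxxx-xx.xx.elb.amazonaws.com
                CustomOriginConfig:
                  HTTPPort: 8090                 # this service's port on the ELB/ALB
                  OriginProtocolPolicy: http-only
            DefaultCacheBehavior:
              TargetOriginId: elb-origin
              ViewerProtocolPolicy: allow-all
              ForwardedValues:                   # forward headers, query strings, and cookies
                QueryString: true
                Headers:
                  - "*"
                Cookies:
                  Forward: all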
I think there is a possibility that you can build what you're describing. I was in the same boat for a while; here are some options for you to consider:
In R53 create a hosted zone - and point your domain at it.
Optional step: create ALIAS records. You can do this for each subdomain or app. Leave the ALIAS field blank if using the root domain.
Create a record set using the SRV record type, which is a service lookup for port redirection. Try to point this to your LB on port 80, and alias the sub-domains.
Change your load balancer's listeners to listen on port 80, then redirect app traffic based on your apps' port settings.
I haven't used SRV records myself, but this should definitely point you in that direction.