Shared Istio egress gateway in a multi-cluster/multi-primary mesh - istio

We are trying to set up an egress gateway in a multi-cluster/multi-primary mesh configuration where the egress gateway is located in only one cluster but used from both.
[diagram of the desired setup]
The use case is that the clusters are in different network zones and we want to be able to route traffic to one zone transparently to the clients in the other zone.
We followed this guide in one cluster and it worked fine. However, we have trouble setting up the VirtualService in the second cluster to use the egress gateway in the first cluster.
When deploying the following VirtualService to the second cluster, we get a 503 with cluster_not_found.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: direct-cnn-through-egress-gateway
spec:
  hosts:
  - edition.cnn.com
  gateways:
  - istio-egressgateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: edition.cnn.com
        port:
          number: 80
      weight: 100
The endpoint proxy config on a pod in the second cluster is missing the istio-egressgateway.istio-gateways.svc.cluster.local endpoints (all other services are discovered and directed to the east-west gateway of the other cluster).
We believe this is the reason the VirtualService doesn't work in the second cluster.
As a workaround we could redirect the egress traffic to the ingress gateway of the first cluster, but this has the disadvantage that the traffic leaves and re-enters the mesh, which probably has an impact on tracing and monitoring.
Is it currently possible to set up a single egress gateway that can be used by all clusters in the mesh, or do we have to go with the workaround?

According to the comments, the solution works as below:
To create a multi-cluster deployment you can use this tutorial. In this setup, cross-cluster traffic for normal services works fine. However, there is a problem with getting the traffic to the egress gateway routed via the east-west gateway. This can be solved with this example.
You should also change kind: VirtualService to kind: ServiceEntry in both clusters.
Like Tobias Henkel mentioned:
I got it to work fine with the service entry if I target the ingress gateway on ports 80/443 which then dispatches further to the mesh external services.
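As a rough sketch of that suggestion, a minimal ServiceEntry for the host from the question, modeled on the Istio egress gateway task, could look like the following (apply it in both clusters; the ports and DNS resolution here are assumptions to adjust for your mesh):

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: cnn
spec:
  hosts:
  - edition.cnn.com
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  - number: 443
    name: tls
    protocol: TLS
  # resolve the external host via DNS so sidecars get real endpoints
  resolution: DNS

This registers the external host in each cluster's service registry, so routing rules that reference it can resolve on both sides.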
You can also use Admiral to automate traffic routing.
See also:
multi cluster mesh automation using Admiral
multi cluster service mesh on GKE
tutorial on GKE to create a similar setup

Related

How to expose multiple services with TCP using nginx-ingress controller?

I have multiple deployments of an RDP application running, and they are all exposed with ClusterIP services. I have the nginx-ingress controller in my k8s cluster, and to allow TCP I have added the --tcp-services-configmap flag to the nginx-ingress controller deployment and also created a ConfigMap for it, shown below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  3389: "demo/rdp-service1:3389"
This will expose the "rdp-service1" service. I have 10 more such services that need to be exposed on the same port number, but if I add more services to the same ConfigMap, like this:
...
data:
  3389: "demo/rdp-service1:3389"
  3389: "demo/rdp-service2:3389"
then it will overwrite the previous service's entry. And since I have also deployed external-dns in k8s, all the records created by the ingress using host: ... will start pointing to the deployment attached to the newly added service in the ConfigMap.
Now my final requirement is: as soon as I append the rule for a newly created deployment (RDP application) to the ingress, it should start allowing TCP connections for it. Is there any way to achieve this? Or is there any other ingress controller available that can solve this type of use case and can also be easily integrated with external-dns?
Note: I am using an AWS EKS cluster and Route53 with external-dns.
Posting this answer as a community wiki to explain some of the topics in the question as well as hopefully point to the solution.
Feel free to expand/edit it.
The main responsibility of NGINX Ingress is to forward HTTP/HTTPS traffic. With the addition of tcp-services/udp-services it can also forward TCP/UDP traffic to the respective endpoints:
Kubernetes.github.io: Ingress nginx: User guide: Exposing tcp udp services
The main issue is that Host-based routing for the Ingress resource in Kubernetes targets HTTP/HTTPS traffic specifically, not TCP (RDP).
You could achieve the following scenario:
Ingress controller:
3389 - RDP Deployment #1
3390 - RDP Deployment #2
3391 - RDP Deployment #3
Here there would be no Host-based routing; it would be more like port-forwarding (see the sketch after the side note below).
A side note!
This setup would also depend on the ability of the LoadBalancer to allocate ports (which could be limited by the cloud provider's specifications).
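A minimal sketch of that port-per-service scheme in the tcp-services ConfigMap (rdp-service2 and rdp-service3 are hypothetical names standing in for the other services from the question):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -> namespace/service:port
  3389: "demo/rdp-service1:3389"
  3390: "demo/rdp-service2:3389"
  3391: "demo/rdp-service3:3389"

Each of these ports also has to be exposed on the ingress controller's LoadBalancer Service so the traffic can reach nginx in the first place.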
As for possible solutions, which may not be so straightforward, I would take a look at the following resources:
Stackoverflow.com: Questions: Nginx TCP forwarding based on hostname
Doc.traefik.io: Traefik: Routing: Routers: Configuring TCP routers
Github.com: Bolkedebruin: Rdpgw
I'd also check following links:
Aws.amazon.com: Quickstart: Architecture: Rd gateway - AWS specific
Docs.konghq.com: Kubernetes ingress controller: 1.2.X: Guides: Using tcpingress
Haproxy:
Haproxy.com: Documentation: Aloha: 12-0: Deployment guides: Remote desktop: RDP gateway
Haproxy.com: Documentation: Aloha: 10-5: Deployment guides: Remote desktop
Haproxy.com: Blog: Microsoft remote desktop services rds load balancing and protection
Actually, I really don't know why you are using that ConfigMap.
To my knowledge, the nginx-ingress-controller routes traffic coming in on the same port based on host. So if you want to expose your applications on the same port, try using this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Chart.Name }}-ingress
  namespace: your-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: your-hostname
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          serviceName: {{ .Chart.Name }}-service
          servicePort: {{ .Values.service.nodeport.port }}
Looking at your requirement, I feel that you need a LoadBalancer rather than an Ingress; a sketch is below.
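A minimal sketch of that idea, one LoadBalancer Service per RDP deployment (the name and the app: rdp-app1 selector are hypothetical placeholders for your deployment's labels):

apiVersion: v1
kind: Service
metadata:
  name: rdp-service1-lb        # hypothetical name
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: rdp-app1              # hypothetical label on the RDP pods
  ports:
  - port: 3389                 # standard RDP port, exposed per service
    targetPort: 3389
    protocol: TCP

Each Service then gets its own external endpoint, and external-dns can publish a DNS record for it via the external-dns.alpha.kubernetes.io/hostname annotation.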

How do I map external traffic to the internal service mesh on GCP Traffic Director?

I've set up a simple GKE cluster hooked up to GCP Traffic Director using the "Traffic Director setup with automatic Envoy injection" tutorial.
The next step is: how do I map external traffic into the Traffic Director backend service, which is internal only?
Basically, my goal is to have an external load balancer with an IP address that takes outside traffic and routes it to the Traffic Director service mesh to split traffic between different Network Endpoint Groups.
I tried the following:
Create an external load balancer manually in Network Services -> Load Balancing --> However, the list of backends does not include the Traffic Director backend service, so I can't create one to have an external IP and redirect it to the internal service mesh.
Install the NGINX ingress controller chart and install an ingress controller via .yaml that maps to the k8s cluster service --> This creates an external load balancer, but it simply goes directly to the service instead of through Traffic Director.
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/send-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-body-size: 1M
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: my-host-name.hostname.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: service-test
          servicePort: 80
Service:
apiVersion: v1
kind: Service
metadata:
  name: service-test
  annotations:
    cloud.google.com/neg: '{"exposed_ports":{"80":{"name": "service-test-neg"}}}'
spec:
  ports:
  - port: 80
    name: service-test
    protocol: TCP
    targetPort: 8000
  selector:
    run: app1
  type: ClusterIP
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: app1
  name: app1
spec:
  replicas: 1
  selector:
    matchLabels:
      run: app1
  template:
    metadata:
      labels:
        run: app1
    spec:
      containers:
      - image: gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.1
        name: app1
        command:
        - /bin/sh
        - -c
        - /serve_hostname -http=true -udp=false -port=8000
        ports:
        - protocol: TCP
          containerPort: 8000
The deployment and service above are taken directly from the tutorial.
There seems to be a concept in the official documentation for "Handling ingress traffic using a second-level gateway at the edge of your mesh", but it's only conceptual and does not explain how to actually do it.
How do I map external traffic using an external load balancer into a GCP Traffic Director-managed service mesh for advanced traffic configuration into GKE?
Traffic Director is not an endpoint to point to for routing. It is the "control plane" of your service mesh.
So you would configure your routing rules from GCP, and Traffic Director would configure your sidecars as expected. But eventually your Load Balancer should point to an Instance Group or Network Endpoint Group, not to Traffic Director.
EDIT
Traffic Director is not the one getting configured, but the one configuring. It configures the Envoy sidecars. These are L7 proxies, so the URL mapping happens on the proxies.
The Network Endpoint Group will be a group of pod IP addresses. Since the pod ranges of the cluster have been added to the subnetwork as IP aliases, the VPC is capable of pulling any IP address from this range, grouping them, and making them a backend for an HTTP load balancer on GCP.
Basically, Traffic Director is Istio, but with the control plane decoupled to GCP.

What is the purpose of a VirtualService when defining a wildcard ServiceEntry in Istio?

The Istio documentation gives an example of configuring egress using a wildcard ServiceEntry here.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: wikipedia
spec:
  hosts:
  - "*.wikipedia.org"
  ports:
  - number: 443
    name: tls
    protocol: TLS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: wikipedia
spec:
  hosts:
  - "*.wikipedia.org"
  tls:
  - match:
    - port: 443
      sniHosts:
      - "*.wikipedia.org"
    route:
    - destination:
        host: "*.wikipedia.org"
        port:
          number: 443
What benefit/difference does the VirtualService provide? If I remove the VirtualService, nothing seems to be affected. I am using Istio 1.6.0.
The VirtualService is not really doing anything here, but take a look at this or this from the Istio docs:
creating a VirtualService with a default route for every service, right from the start, is generally considered a best practice in Istio.
Virtual services play a key role in making Istio’s traffic management flexible and powerful. They do this by strongly decoupling where clients send their requests from the destination workloads that actually implement them. Virtual services also provide a rich way of specifying different traffic routing rules for sending traffic to those workloads.
A ServiceEntry adds those wikipedia sites as entries to Istio's internal service registry, so services in the mesh can route to these manually specified external services.
Usually that's used to enable monitoring and other Istio features for external services from the start, while the VirtualService enables the proper routing of requests.
Take a look at this Istio documentation.
Service Entry makes sure your mesh knows about the service and can monitor it.
Using Istio ServiceEntry configurations, you can access any publicly accessible service from within your Istio cluster.
A VirtualService manages traffic to the external service and controls the traffic that goes to it, which in this case is all of it.
I would say the benefit is that you can use Istio routing rules, which can also be set for external services that are accessed using ServiceEntry configurations. In this example, you set a timeout rule on calls to the httpbin.org service.
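For illustration, that timeout rule from the Istio docs looks like this (the 3s value is just the example's choice):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-ext
spec:
  hosts:
  - httpbin.org
  http:
  - timeout: 3s                # fail calls that take longer than 3 seconds
    route:
    - destination:
        host: httpbin.org
      weight: 100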

My kubernetes AWS NLB integration is not working

I am trying to deploy a service in Kubernetes that is available through a network load balancer. I am aware this is an alpha feature at the moment, but I am running some tests. I have a deployment definition that is working fine as-is. My service definition without the NLB annotation looks something like this and is working fine:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
  - port: 80
    protocol: TCP
However, when I switch to NLB, even when the load balancer is created and configured "correctly", the target in the AWS target group always appears unhealthy and I cannot access the service via HTTP. This is the service definition:
kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
  - port: 80
    protocol: TCP
  externalTrafficPolicy: Local
It seems a rule was missing in the k8s nodes' security group, since the NLB forwards the client IP.
I don't think NLB is the problem.
externalTrafficPolicy: Local
is not supported by kops on AWS, and there are issues with some other K8s distros that run on AWS, due to some AWS limitation.
Try changing it to
externalTrafficPolicy: Cluster
There's an issue with the source IP being that of the load balancer instead of the true external client; it can be worked around by using the proxy protocol annotation on the service plus adding some configuration to the ingress controller (see the sketch below).
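A hedged sketch of that workaround (the Service and ConfigMap names are hypothetical; check your controller's chart for the actual ones):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx                 # hypothetical controller Service
  annotations:
    # ask the AWS cloud provider to enable PROXY protocol on the LB backends
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx                # hypothetical controller labels
  ports:
  - port: 80
    protocol: TCP
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller      # hypothetical controller ConfigMap
data:
  use-proxy-protocol: "true"          # make nginx parse the PROXY protocol header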
However, there is a second issue: while you can technically hack your way around it and force it to work, it's usually not worth bothering.
externalTrafficPolicy: Local
creates a NodePort /healthz endpoint so the LB sends traffic to the subset of nodes that have service endpoints, instead of to all worker nodes. It's broken on initial provisioning, and the reconciliation loop is broken as well.
https://github.com/kubernetes/kubernetes/issues/80579
^describes the problem in more depth.
https://github.com/kubernetes/kubernetes/issues/61486
^describes a workaround to force it to work using a kops hook
but honestly, you should just stick to
externalTrafficPolicy: Cluster
as it's always more stable. The question's manifest with that change is sketched below.
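Putting that together, this is the question's manifest with only the suggested change applied:

kind: Service
apiVersion: v1
metadata:
  name: service1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: some-app
  ports:
  - port: 80
    protocol: TCP
  externalTrafficPolicy: Cluster      # traffic may hop via any node; source IP is SNATed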
There was a bug in the NLB security groups implementation. It's fixed in 1.11.7, 1.12.5, and probably the next 1.13 patch.
https://github.com/kubernetes/kubernetes/pull/68422

How to define external ip for kubernetes ingress

I have a question about Kubernetes ingress.
I want to use ingress with my Amazon account and/or a private cloud and want to assign an external IP.
It is possible to assign an external IP for services:
Services documentation - chapter external IP
but I cannot find a way to do that for Ingress: Ingress documentation.
My question is directed especially to the Kubernetes team.
A similar question was asked by Simon in this topic: How to force SSL for Kubernetes Ingress on GKE
but he asked about GKE, while I am interested in a private cloud and AWS.
Thank you in advance.
[UPDATE]
Guys found that my question may have already been answered in this topic.
Actually, the answer that @anigosa put there is specific to GCloud.
His solution won't work in a private cloud nor in the AWS cloud. In my opinion the reason is that he uses type: LoadBalancer (which cannot be used in a private cloud) and the loadBalancerIP property, which works only on GCloud (on AWS it causes the error: "Failed to create load balancer for service default/nginx-ingress-svc: LoadBalancerIP cannot be specified for AWS ELB").
Looking at this issue, it seems you can define an annotation on your service and map it to an existing Elastic IP.
Something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: <>
spec:
  type: LoadBalancer
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
Please note this will create an ELB for this service, not an ingress.
As an ingress is simply one service (= one ELB) handling requests for many other services, it should be possible to do something similar for an ingress, but I couldn't find any docs for it.
There are two main ways you can do this. One is using a static IP annotation, as shown in Omer's answer (which is cloud specific, and normally relies on the external IP being set up beforehand); the other is using an ingress controller (which is generally cloud agnostic).
The ingress controller will obtain an external IP on its service and then pass that to your ingress which will then use that IP as its own.
Traffic will then come into the cluster via the controller's service and the controller will route to your ingress.
Here's an example of the ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: my-ingress-class
spec:
  tls:
  - hosts:
    - ssl.somehost.com
  rules:
  - host: ssl.somehost.com
    http:
      paths:
      - backend:
          serviceName: backend-service
          servicePort: 8080
The line
kubernetes.io/ingress.class: my-ingress-class
tells the cluster we want only an ingress controller that handles this "class" of ingress traffic. You can have multiple ingress controllers in the cluster, each declaring it handles a different class of ingress traffic, so when you install an ingress controller you also need to declare which ingress class you want it to handle.
Caveat: if you do not declare the ingress class on an ingress resource, ALL the ingress controllers in the cluster will attempt to route traffic to the ingress.
Now, if you want an external IP that is private, you can do that via the controller. For AWS and GCP you have annotations that tell the cloud provider you want an IP that is internal only, by adding a specific annotation to the load balancer of the ingress controller:
For AWS:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
For GCP:
networking.gke.io/load-balancer-type: "Internal"
or (< Kubernetes 1.17)
cloud.google.com/load-balancer-type: "Internal"
Your ingress will inherit the IP obtained by the ingress controller's load balancer; a sketch of such a controller Service follows.
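As a hedged sketch, the annotation sits on the ingress controller's LoadBalancer Service, roughly like this (the name and selector are hypothetical; pick the annotation for your cloud from the list above):

apiVersion: v1
kind: Service
metadata:
  name: my-ingress-controller         # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"   # AWS: internal-only LB
spec:
  type: LoadBalancer
  selector:
    app: my-ingress-controller        # hypothetical controller pod labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443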