mTLS between services running inside and outside a mesh using Istio's trust chain

I understand that I can configure Istio so that its Citadel component uses a root X.509 certificate + private key that I provide. Can I extend this system so that I also use the same root to issue certificates to legacy workloads running in the same k8s cluster, and then configure a DestinationRule to access these workloads from inside the mesh? Something like:
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-mtls
spec:
  host: mymtls-app.legacy.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8443
      tls:
        mode: ISTIO_MUTUAL
        sni: mymtls-app.legacy.svc.cluster.local
Can the above work? Do I need any additional configuration besides the above? I may not be in a position to run spiffe / spire to manage the certificates for workloads outside the mesh - which puts a spiffe-federation solution like this somewhat out of reach for me. But this also doesn't seem like a fully supported mechanism in any case.
I have been able to configure mTLS using a separate certificate hierarchy which I have to inject via secrets and mount into the pods / sidecars in question (illustrated here).
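For reference, that workaround looks roughly like this on the DestinationRule side, with the client certs coming from a Secret mounted into the sidecar (the resource name and file paths here are illustrative, not my actual setup):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: legacy-mtls            # hypothetical name
spec:
  host: mymtls-app.legacy.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8443
      tls:
        mode: MUTUAL           # explicit certs instead of ISTIO_MUTUAL
        clientCertificate: /etc/legacy-certs/client.pem   # mounted from a Secret
        privateKey: /etc/legacy-certs/key.pem
        caCertificates: /etc/legacy-certs/ca.pem
        sni: mymtls-app.legacy.svc.cluster.local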

Related

How to expose multiple services with TCP using nginx-ingress controller?

I have multiple deployments of an RDP application running, and they are all exposed with ClusterIP services. I have the nginx-ingress controller in my k8s cluster, and to allow TCP I have added the --tcp-services-configmap flag to the nginx-ingress controller deployment and also created a ConfigMap for it, shown below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  3389: "demo/rdp-service1:3389"
This exposes the rdp-service1 service. I have 10 more such services that need to be exposed on the same port number, but if I add more services to the same ConfigMap like this:
...
data:
  3389: "demo/rdp-service1:3389"
  3389: "demo/rdp-service2:3389"
then the new entry removes the previous service data. And since I have also deployed external-dns in k8s, all the records created by the Ingress using host: ... start pointing to the deployment attached to the newly added service in the ConfigMap.
Now my final requirement is: as soon as I append the rule for a newly created deployment (RDP application) to the Ingress, it should start allowing TCP connections for it. Is there any way to achieve this? Or is there any other ingress controller available that can solve this type of use case and can also easily be integrated with external-dns?
Note: I am using an AWS EKS cluster and Route 53 with external-dns.
Posting this answer as a community wiki to explain some of the topics in the question as well as hopefully point to the solution.
Feel free to expand/edit it.
The NGINX Ingress controller's main responsibility is forwarding HTTP/HTTPS traffic. With the addition of tcp-services/udp-services it can also forward TCP/UDP traffic to the respective endpoints:
Kubernetes.github.io: Ingress nginx: User guide: Exposing tcp udp services
The main issue is that Host-based routing for the Ingress resource in Kubernetes specifically targets HTTP/HTTPS traffic, not TCP (RDP).
You could achieve the following scenario:
Ingress controller:
3389 - RDP Deployment #1
3390 - RDP Deployment #2
3391 - RDP Deployment #3
There would be no Host-based routing; it would be more like port-forwarding (see the ConfigMap sketch after the side note).
A side note!
This setup would also depend on the ability of the LoadBalancer to allocate ports (which could be limited by the cloud provider's specifications).
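A minimal sketch of what that tcp-services ConfigMap could look like (the extra service names are assumptions; the key is the port the controller listens on, the value is namespace/service:port):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  3389: "demo/rdp-service1:3389"
  3390: "demo/rdp-service2:3389"
  3391: "demo/rdp-service3:3389"

The ingress controller's own Service would also need to expose ports 3390 and 3391 for the traffic to reach it.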
As for possible solutions, which may not be so straightforward, I would take a look at the following resources:
Stackoverflow.com: Questions: Nginx TCP forwarding based on hostname
Doc.traefik.io: Traefik: Routing: Routers: Configuring TCP routers
Github.com: Bolkedebruin: Rdpgw
I'd also check following links:
Aws.amazon.com: Quickstart: Architecture: Rd gateway - AWS specific
Docs.konghq.com: Kubernetes ingress controller: 1.2.X: Guides: Using tcpingress
Haproxy:
Haproxy.com: Documentation: Aloha: 12-0: Deployment guides: Remote desktop: RDP gateway
Haproxy.com: Documentation: Aloha: 10-5: Deployment guides: Remote desktop
Haproxy.com: Blog: Microsoft remote desktop services rds load balancing and protection
Actually, I really don't know why you are using that ConfigMap.
To my knowledge, the nginx-ingress-controller routes traffic coming in on the same port based on the host. So if you want to expose your applications on the same port, try using this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Chart.Name }}-ingress
  namespace: your-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: your-hostname
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          serviceName: {{ .Chart.Name }}-service
          servicePort: {{ .Values.service.nodeport.port }}
Looking at your requirement, I feel that you need a LoadBalancer rather than an Ingress.
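For completeness, a minimal sketch of that alternative, with one LoadBalancer Service per RDP deployment (the name and selector are assumptions; on EKS each such Service provisions its own cloud load balancer):

apiVersion: v1
kind: Service
metadata:
  name: rdp-service1-lb        # hypothetical name
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: rdp-app-1             # assumption: label of the RDP deployment's pods
  ports:
  - name: rdp
    port: 3389
    protocol: TCP
    targetPort: 3389

With external-dns deployed, each such Service could also get its own DNS record via the external-dns.alpha.kubernetes.io/hostname annotation.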

Configure mutual TLS origination for egress traffic for more than 200 target destinations with different cert requirements

I've got a use case of an application that needs to authenticate with mTLS to hundreds of different destination servers outside of the mesh, each with a different client certificate. I thought of offloading this procedure from the app to Istio. Is that even possible?
Yes it is possible.
You can follow this guide to configure mutual TLS origination for egress traffic.
Then, for each destination, modify the DestinationRule to use a different certificate, as in this example:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: db-mtls
spec:
  host: mydbserver.prod.svc.cluster.local
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/myclientcert.pem
      privateKey: /etc/certs/client_private_key.pem
      caCertificates: /etc/certs/rootcacerts.pem
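For the sidecar to read those paths, the certificate files have to be mounted into the istio-proxy container; a sketch using Istio's user-volume annotations on the client workload (the Secret name and the workload itself are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-client              # hypothetical client workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db-client
  template:
    metadata:
      labels:
        app: db-client
      annotations:
        # Mount the Secret holding the client cert/key into the sidecar at /etc/certs.
        sidecar.istio.io/userVolume: '[{"name":"client-certs","secret":{"secretName":"db-client-certs"}}]'
        sidecar.istio.io/userVolumeMount: '[{"name":"client-certs","mountPath":"/etc/certs","readonly":true}]'
    spec:
      containers:
      - name: app
        image: curlimages/curl
        command: ["sleep", "infinity"]

With hundreds of destinations this means one DestinationRule (and one mounted Secret) per target, so a consistent naming scheme per destination is worth planning up front.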
Hope it helps.

What is the purpose of a VirtualService when defining a wildcard ServiceEntry in Istio?

The Istio documentation gives an example of configuring egress using a wildcard ServiceEntry here.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: wikipedia
spec:
  hosts:
  - "*.wikipedia.org"
  ports:
  - number: 443
    name: tls
    protocol: TLS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: wikipedia
spec:
  hosts:
  - "*.wikipedia.org"
  tls:
  - match:
    - port: 443
      sniHosts:
      - "*.wikipedia.org"
    route:
    - destination:
        host: "*.wikipedia.org"
        port:
          number: 443
What benefit/difference does the VirtualService give? If I remove the VirtualService, nothing seems to be affected. I am using Istio 1.6.0.
The VirtualService is not really doing anything here, but take a look at this or this from the Istio docs:
creating a VirtualService with a default route for every service, right from the start, is generally considered a best practice in Istio.
Virtual services play a key role in making Istio’s traffic management flexible and powerful. They do this by strongly decoupling where clients send their requests from the destination workloads that actually implement them. Virtual services also provide a rich way of specifying different traffic routing rules for sending traffic to those workloads.
The ServiceEntry adds those wikipedia sites as entries to Istio's internal service registry, so auto-discovered services in the mesh can route to these manually specified services.
Usually that's used to allow monitoring and other Istio features of external services from the start, while the VirtualService allows the proper routing of requests.
Take a look at this Istio documentation.
A ServiceEntry makes sure your mesh knows about the service and can monitor it.
Using Istio ServiceEntry configurations, you can access any publicly accessible service from within your Istio cluster.
A VirtualService manages traffic to external services and controls the traffic that goes to the service, which in this case is all of it.
I would say the benefit is that you can use Istio routing rules, which can also be set for external services that are accessed using ServiceEntry configurations. In this example, you set a timeout rule on calls to the httpbin.org service, roughly as sketched below.
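A minimal sketch of such a timeout rule, assuming a matching ServiceEntry for httpbin.org already exists (the 3s value is illustrative):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-ext            # hypothetical name
spec:
  hosts:
  - httpbin.org
  http:
  - timeout: 3s                # calls taking longer than 3s fail at the proxy
    route:
    - destination:
        host: httpbin.org
      weight: 100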

What's the purpose of the `VirtualService` in this example?

I am looking at this example of Istio, where they are creating a ServiceEntry and a VirtualService to access the external service, but I don't understand why they are creating a VirtualService as well.
So, this is the ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: edition-cnn-com
spec:
  hosts:
  - edition.cnn.com
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  - number: 443
    name: https-port
    protocol: HTTPS
  resolution: DNS
With just this object, if I try to curl edition.cnn.com, I get 200:
/ # curl edition.cnn.com -IL 2>/dev/null | grep HTTP
HTTP/1.1 301 Moved Permanently
HTTP/1.1 200 OK
While I can't access other services:
/ # curl google.com -IL
HTTP/1.1 502 Bad Gateway
location: http://google.com/
date: Fri, 10 Jan 2020 10:12:45 GMT
server: envoy
transfer-encoding: chunked
But in the example they create this VirtualService as well.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: edition-cnn-com
spec:
  hosts:
  - edition.cnn.com
  tls:
  - match:
    - port: 443
      sni_hosts:
      - edition.cnn.com
    route:
    - destination:
        host: edition.cnn.com
        port:
          number: 443
      weight: 100
What's the purpose of the VirtualService in this scenario?
The VirtualService object is basically an abstract Pilot resource that modifies the Envoy configuration.
So creating a VirtualService is a way of modifying Envoy, and its main purpose is to answer the question: "for a name, how do I route to backends?"
A VirtualService can also be bound to a Gateway.
In your case, the lack of a VirtualService means Envoy is not modified from the default/global configuration. That means the default configuration was enough for this case to work correctly.
So the gateway that was used was most likely the default one, with the same protocol and port that you requested with curl, all of which matched your ServiceEntry's requirements for connectivity.
This is also mentioned in the Istio documentation:
Virtual services, along with destination rules, are the key building blocks of Istio’s traffic routing functionality. A virtual service lets you configure how requests are routed to a service within an Istio service mesh, building on the basic connectivity and discovery provided by Istio and your platform. Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific real destination within the mesh. Your mesh can require multiple virtual services or none depending on your use case.
You can use a VirtualService to add things like a timeout to the connection, as in this example.
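For illustration, a hypothetical sketch attaching a timeout and retries to the plain-HTTP (port 80) route for edition.cnn.com; the name and all values are assumptions, and note that timeouts apply to http routes, not to TLS passthrough:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: edition-cnn-com-http   # hypothetical name
spec:
  hosts:
  - edition.cnn.com
  http:
  - timeout: 5s                # overall deadline for a request
    retries:
      attempts: 3
      perTryTimeout: 2s
    route:
    - destination:
        host: edition.cnn.com
        port:
          number: 80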
You can check the routes for your service with the following command from the Istio documentation: istioctl proxy-config routes <pod-name[.namespace]>
For the bookinfo productpage demo app it is:
istioctl pc routes $(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}') --name 9080 -o json
This way you can check what the routes look like without the VirtualService object.
Hope this helps you in understanding Istio.
The VirtualService is not really doing anything, but as the docs say:
creating a VirtualService with a default route for every service, right from the start, is generally considered a best practice in Istio
The ServiceEntry adds the CNN site as an entry to Istio’s internal service registry, so auto-discovered services in the mesh can route to these manually specified services.
Usually that's used to allow monitoring and other Istio features of external services from the start, whereas the VirtualService would allow the proper routing of requests (basically traffic management).
This page in the docs gives a bit more background info on using ServiceEntries and VirtualServices, but basically the ServiceEntry makes sure your mesh knows about the service and can monitor it, and the VirtualService controls what traffic is going to the service, which in this case is all of it.

Private paths in a public API with Kubernetes

We have a microservice architecture based on Kubernetes in Amazon EKS with Ambassador as API Gateway.
We have 2 Ambassadors: 1 public and 1 private. So we have services that are only accessible by services in the cluster or VPN, and we have some services that are public.
We have the need for making private some URL paths in the public services. For example, we have a public API that is accessible in api.company.com, and we want to leave all paths public like api.company.com/createuser, api.company.com/login, etc... but for other paths we want to make them private, for example: api.company.com/swagger.html.
We know that we could enable authentication for those paths in the API, but we are looking for a solution without auth.
An example of how we configure a K8s Service with Ambassador for public services:
apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: backends_mapping
      prefix: /
      ambassador_id: ambassador-public
      service: backends.svc:8080
      host: api.mycompany.com
  labels:
    app: backends
  name: backends
  namespace: svc
spec:
  ports:
  - name: http-backends
    port: 8080
    protocol: TCP
    targetPort: http-api
  selector:
    app: backends
  type: ClusterIP
Not sure what you mean by "without auth". You will need some sort of check to serve internal content.
One approach to achieve this could be the following (note that this is a high-level overview):
You can make the service private; do not expose it directly.
Prefix all your internal routes with, say, an /internal/ or /private/ prefix.
So api.company.com/swagger.html becomes api.company.com/internal/swagger.html.
You can create a load balancer that points to this middleware.
The middleware (public service) will intercept all requests; I think NGINX can be used here. If the request has an /internal/ path, check whether it satisfies the condition (origin, internal network, etc.), as in the sketch after these steps.
If the check passes, redirect to the private service.
If the check fails, return 403 Forbidden or whatever response code fits.
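A hypothetical sketch of that middleware as an NGINX config shipped in a ConfigMap (the name, CIDR, and upstream are all assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: api-gate-nginx-conf   # hypothetical name
  namespace: svc
data:
  default.conf: |
    server {
      listen 8080;
      # Private paths: only reachable from the internal network / VPN range.
      location /internal/ {
        allow 10.0.0.0/8;     # assumption: your internal CIDR
        deny all;             # everyone else gets 403
        proxy_pass http://backends.svc:8080;
      }
      # All other paths stay public.
      location / {
        proxy_pass http://backends.svc:8080;
      }
    }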
Cilium can do just what you want: http://docs.cilium.io/en/stable/policy/language/#http
Basically you can specify L7 network policies which will only allow access to some of your API paths from certain pods. For example:
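A hypothetical sketch that only exposes the two public paths of the backends service at L7 (the labels, paths, and empty from-selector are assumptions, following the pattern in the Cilium policy docs):

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backends-public-paths  # hypothetical name
  namespace: svc
spec:
  endpointSelector:
    matchLabels:
      app: backends
  ingress:
  - fromEndpoints:
    - {}                       # any Cilium-managed endpoint
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - path: "/createuser"
        - path: "/login"

Requests to any other path, such as /swagger.html, would then be rejected at the proxy.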
Cilium project page: https://cilium.io/
Layer 7 policies example: http://docs.cilium.io/en/stable/policy/language/#http
EKS install guide: http://docs.cilium.io/en/v1.4/gettingstarted/k8s-install-eks/?highlight=eks
Disclaimer: I am part of the team that develops Cilium.