How to configure services that use the root directory to convert to secondary paths - istio

How does my nginx configuration translate to Istio? I need to be able to access pgAdmin through a secondary path rather than through the root path, because the root path will be used by other important services.

You would need to create Istio Gateway and Istio VirtualService objects. Please refer to the Istio documentation on traffic management. Below is a sample of URI-based routing; you can add further routes in the same way, based on your requirements.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: app-route
spec:
  hosts:
  - app.prod.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: /pgadmin
    route:
    - destination:
        host: <db service name>
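The answer only shows the VirtualService; to expose it from outside the cluster you would also bind it to a Gateway. A minimal sketch, assuming the default istio-ingressgateway and plain HTTP (the name app-gateway is hypothetical):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: app-gateway # hypothetical name
spec:
  selector:
    istio: ingressgateway # the default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

The VirtualService above would then reference it via a gateways: list containing app-gateway. If pgAdmin itself expects to be served from the root path, you may additionally need a rewrite: (uri: /) in the HTTP route so the /pgadmin prefix is stripped before the request reaches the service.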

Related

Istio virtual service spec host and destination rule host

I'm trying to understand the Istio configuration model, but the more I read the more I get confused, especially around the hosts and host fields. The examples all use the same short name, and I'm not sure whether they mean the virtual service name, the Kubernetes service hostname, or the DNS service address.
Assuming I have the following configuration:
My Kubernetes project namespace is called poc-my-ns.
Inside poc-my-ns I have my pods (both version 1 and 2), a Kubernetes route, and a Kubernetes service.
The service hostname is poc-my-ns.svc.cluster.local and the route is https://poc-my-ns.orgdevcloudapps911.myorg.org.
Everything is up and running, and the service selector gets all pods from all versions, as it should. (The Istio virtual service is supposed to do the final selection by version.)
The intended Istio configuration looks like that:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: poc-my-dr
spec:
  host: poc-my-ns.svc.cluster.local # ???
  subsets:
  - name: v1
    labels:
      version: "1.0"
  - name: v2
    labels:
      version: "2.0"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: poc-my-vs
spec:
  hosts:
  - poc-my-ns.svc.cluster.local # ???
  http:
  - route:
    - destination:
        host: poc-my-dr # ???
        subset: v1
      weight: 70
    - destination:
        host: poc-my-dr # ???
        subset: v2
      weight: 30
My questions are:
Does the destination rule spec/host refer to the Kubernetes service hostname?
Does the virtual service spec/hosts refer to the Kubernetes service hostname, is it the route https://poc-my-ns.orgdevcloudapps911.myorg.org, or something else?
Does the virtual service spec/http/route/destination/host refer to the destination rule name, is it supposed to point to the Kubernetes service hostname, or should it be the virtual service metadata/name?
I would really appreciate clarification.
The VirtualService and DestinationRule basically configure the Envoy proxies of the Istio mesh. The VirtualService defines where to route the traffic to, and the DestinationRule defines what to additionally do with the traffic.
For the VS, the spec.hosts list can contain both Kubernetes-internal and external hosts.
Say you want to define how traffic for api.example.com, coming from outside the Kubernetes cluster through the istio-ingressgateway my-gateway, is routed into the mesh. It should be routed to the rating app in the store namespace, so the VS would look like this:
spec:
  hosts:
  - api.example.com # external host
  gateways:
  - my-gateway # the ingress gateway
  http:
  - [...]
    route:
    - destination:
        host: rating.store.svc.cluster.local # kubernetes service
If you want to define how cluster/mesh-internal traffic is routed, you set rating.store.svc.cluster.local in the spec.hosts list, define the mesh gateway (or leave it out like you did, because mesh is the default), and route it to the rating.store.svc.cluster.local service. You also add a DR where you define subsets and route all mesh-internal traffic to subset v1.
# VS
[...]
spec:
  hosts:
  - rating.store.svc.cluster.local # cluster-internal host
  gateways:
  - mesh # mesh-internal gateway (default when omitted)
  http:
  - [...]
    route:
    - destination:
        host: rating.store.svc.cluster.local # cluster-internal host
        subset: v1 # defined in the DestinationRule below
---
# DR
[...]
spec:
  host: rating.store.svc.cluster.local # cluster-internal host
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
But it could also be that you want to route traffic to a cluster-external destination. In that case destination.host would be an external FQDN, as in this example from the docs:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-wikipedia
spec:
  hosts:
  - wikipedia.org
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: example-http
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-wiki-rule
spec:
  hosts:
  - wikipedia.org
  http:
  - timeout: 5s
    route:
    - destination:
        host: wikipedia.org
Think about it as "I want to route traffic from HOST_FROM to HOST_TO", where
HOST_FROM is spec.host and spec.hosts,
HOST_TO is destination.host,
and both can be inside or outside the Kubernetes cluster.
So, to answer all your questions:
It depends: if you want to route cluster-internal traffic, you use the Kubernetes service FQDN. For cluster-external traffic, you use the external target's FQDN.
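Applied to the configuration from the question, that means both the VS spec.hosts and destination.host point at the Kubernetes service FQDN; the DestinationRule is matched by its own spec.host, not referenced by name. A corrected sketch under that reading:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: poc-my-vs
spec:
  hosts:
  - poc-my-ns.svc.cluster.local # the Kubernetes service hostname
  http:
  - route:
    - destination:
        host: poc-my-ns.svc.cluster.local # service FQDN, not the DestinationRule name
        subset: v1 # subset defined in poc-my-dr
      weight: 70
    - destination:
        host: poc-my-ns.svc.cluster.local
        subset: v2
      weight: 30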
I highly recommend reading through the docs of VirtualService and DestinationRule where you can see several examples with explanations.

Redirect URLs using Google Cloud

I have a domain (example.com) already configured in Cloud DNS. With this domain I can access microservices that are in a GKE cluster. I use the istio-ingressgateway IP in Cloud DNS to make the association with the cluster.
Now I have another domain (newexample.com) with a custom certificate for HTTPS connections. Is there a way to redirect all requests to newexample.com to example.com? I do not want to change anything in the GKE/Istio configuration, if possible.
Any method will require some reconfiguration on the GKE/Istio side.
One of the solutions is to have a CNAME record in Cloud DNS and an SSL certificate with Subject Alternative Names.
With the above solution you will be able to send requests to your GKE/Istio cluster with both domain names, assuming a correct Istio configuration.
What is a CNAME?
A CNAME, or Canonical Name record (alias record), is a type of resource record in the Domain Name System (DNS) that specifies that one domain name is an alias of another, canonical domain name.
Example of a CNAME record:

DNS name     Type   TTL  Data
old.domain.  A      60   1.2.3.4
new.domain.  CNAME  60   old.domain.
Alternative Names:
A SAN, or Subject Alternative Name, is a structured way to indicate all of the domain names and IP addresses that are secured by the certificate.
Entrustdatacard.com: What is a SAN and how is it used
You can create an SSL certificate that supports both:
old.domain
new.domain
There are plenty of options to do that, for example Let's Encrypt or cert-manager.
Example
I've created an example to show you how to do it:
Configure DNS zone in Cloud DNS
Create a basic app with a service
Create a certificate for example app
Create Istio resources to allow connections to example app
Test
Configure DNS zone in Cloud DNS
You will need to have 2 records:
An A record with the IP of your ingress gateway and the name old.domain
A CNAME record pointing to old.domain, with the name new.domain
Please take a look at the official documentation: Cloud.google.com: DNS: Records
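For reference, a sketch of creating the CNAME record with the gcloud CLI (the zone name my-zone is hypothetical, and the A record is assumed to exist already):

$ gcloud dns record-sets create new.domain. \
    --zone=my-zone --type=CNAME --ttl=60 \
    --rrdatas=old.domain.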
Create a basic app with a service
Below is an example app with a service, which will respond with a basic hello:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-dp
spec:
  selector:
    matchLabels:
      app: hello-dp
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-dp
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50001"
---
apiVersion: v1
kind: Service
metadata:
  name: hello-sv
spec:
  selector:
    app: hello-dp
  ports:
  - name: hello-port
    protocol: TCP
    port: 50001
    targetPort: 50001
  type: ClusterIP
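To apply and sanity-check the example (the filename hello-app.yaml is hypothetical):

$ kubectl apply -f hello-app.yaml
$ kubectl get pods -l app=hello-dp
$ kubectl get svc hello-sv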
Create a certificate for example app
As said previously, a certificate with Alternative Names can be created with Let's Encrypt. I created it with:
a GCE VM with Ubuntu 16.04
port 80 open
the domain name old.domain pointing to the public IP address of the VM
Guide: Linode.com: Docs: Install Let's Encrypt to create an SSL certificate
Command to create the certificate:
$ ./letsencrypt-auto certonly --standalone -d old.domain -d new.domain
The certificate was created in /etc/letsencrypt/archive/ and used to create the TLS secret for GKE with the command:
$ kubectl create secret tls ssl-certificate --cert cert1.pem --key privkey1.pem
Please keep in mind that this certificate was created only for testing purposes, and I strongly advise using a dedicated solution like cert-manager.
PS: If you used this method, please revert the changes in Cloud DNS so that old.domain points at the Istio gateway again.
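For comparison, a sketch of the cert-manager equivalent (assuming cert-manager is installed and a ClusterIssuer named letsencrypt-prod exists; both names are assumptions):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ssl-certificate
  namespace: istio-system # the secret must live where the ingress gateway can read it
spec:
  secretName: ssl-certificate # matches the Gateway's credentialName below
  dnsNames: # SANs covering both domains
  - old.domain
  - new.domain
  issuerRef:
    name: letsencrypt-prod # assumed ClusterIssuer
    kind: ClusterIssuer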
Create Istio resources to allow connections to example app
Below are example Istio resources allowing connections to the example app, with support for HTTPS:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-gw
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ssl-certificate
    hosts:
    - "old.domain"
    - "new.domain"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-vs
spec:
  hosts:
  - "old.domain"
  - "new.domain"
  gateways:
  - hello-gw
  http:
  - route:
    - destination:
        host: hello-sv
        port:
          number: 50001
Please take a specific look at:
    tls:
      mode: SIMPLE
      credentialName: ssl-certificate
This part ensures that connections to the cluster use HTTPS.
Additionally:
    hosts:
    - "old.domain"
    - "new.domain"
This definition in both resources will allow connections only for the specified domains.
Test
When all of the above resources are applied, you should be able to enter the following in your browser:
https://old.domain
https://new.domain
and be greeted with the below message and a valid SSL certificate:
Hello, world!
Version: 2.0.0
Hostname: hello-dp-5dd8b85b56-bk7zr
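To confirm that the single certificate really covers both names, you can also inspect its SAN list (a quick check, assuming OpenSSL 1.1.1+):

$ openssl s_client -connect old.domain:443 -servername old.domain </dev/null 2>/dev/null \
    | openssl x509 -noout -ext subjectAltName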

How to hide Django Admin from the public on Azure Kubernetes Service while keeping access via backdoor

I'm running a Django app on Azure Kubernetes Service and, for security purposes, would like to do the following:
Completely block off the admin portal from the public (e.g. average Joe cannot reach mysite.com/admin)
Allow access through some backdoor (e.g. a private network, jump host, etc.)
One scenario would be to run two completely separate services: 1) the main API part of the app, which is just the primary codebase with the admin disabled; this is served publicly. And 2) a private site behind some firewall which has the admin enabled. Each could be on a different cluster with a different FQDN, but all connect to the same datastore. This is definitely overkill - there must be a way to keep everything within the cluster.
I think there might be a way to configure the Azure networking layer to block/allow traffic from specific IP ranges, and to do it on a per-endpoint basis (e.g. mysite.com/admin versus mysite.com/api/1/test). Alternatively, maybe this is doable at a per-subdomain level (e.g. api.mysite.com/anything versus admin.mysite.com/anything).
This might also be doable at the Kubernetes ingress layer but I can't figure out how.
What is the easiest way to satisfy the 2 requirements?
You can manage restrictions at the ingress level:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/whitelist-source-range: "192.168.0.XXX, 192.175.2.XXX"
  name: staging-ingress
  namespace: default
spec:
  rules:
  - host: test.example.io
    http:
      paths:
      - backend:
          serviceName: service-name
          servicePort: 80
  tls:
  - hosts:
    - test.example.io
    secretName: tls-cert
You can whitelist the IP addresses that are allowed to reach a specific path, which resolves your backdoor requirement. For the rest, you can create another ingress rule without the whitelist annotation, for public access.
For a particular path:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/whitelist-source-range: "192.168.0.XXX, 192.175.2.XXX"
  name: staging-ingress
  namespace: default
spec:
  rules:
  - host: test.example.io
    http:
      paths:
      - path: /admin
        backend:
          serviceName: service-name
          servicePort: 80
  tls:
  - hosts:
    - test.example.io
    secretName: tls-cert
test.example.io/admin will then only be accessible from the whitelisted source range.
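The public counterpart mentioned above is not shown in the answer; a sketch of it could look like this (the name public-ingress and the /api path are assumptions): same host, no whitelist annotation, and only the public paths exposed.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: public-ingress # hypothetical name
  namespace: default
spec:
  rules:
  - host: test.example.io
    http:
      paths:
      - path: /api # public paths only; no /admin rule here
        backend:
          serviceName: service-name
          servicePort: 80
  tls:
  - hosts:
    - test.example.io
    secretName: tls-cert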

Configure Istio Ingress Gateway to require header token using Authorization Policy

I configured the Istio Ingress Gateway to accept my URLs (using HTTPS), like microservices.myexample.com, grafana.myexample.com, and so on.
Everything is working, but all the URLs are public.
Because of that, I was asked to configure the ingress gateway to protect the URLs inside microservices.myexample.com (Grafana has a login page). The idea is to allow access only if the request contains a token in the header.
But when I applied this YAML file, all the URLs were blocked and they require the header, including grafana.myexample.com:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ingress
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  rules:
  - from: []
    to:
    - operation:
        #paths: ["/customers*"] # I also tried with paths. Every microservice has a path after microservices.myexample.com
        hosts: ["microservices.myexample.com"]
    when:
    - key: request.headers[token]
      values: ["test123"]
We did it.
Just in case someone is stuck on the same problem: the following policy applies to all services in mynamespace. All the URLs will require the token, except the ones ending with /actuator/health:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: token-authorization
  namespace: mynamespace
spec:
  rules:
  - to:
    - operation:
        paths: ["*/actuator/health"]
  - to:
    - operation:
        paths: ["/*"]
    when:
    - key: request.headers[token]
      values: ["test123"]
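With that policy in place, the behavior can be spot-checked with curl (the /customers path is taken from the comment in the question; the status codes are the expected outcomes, not captured output):

$ curl -sI https://microservices.myexample.com/customers | head -1   # no token: expect 403
$ curl -sI -H "token: test123" https://microservices.myexample.com/customers | head -1   # expect 200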
The original gateway-level policy from the question will not work. This is because the hosts field under operation: in the AuthorizationPolicy does not support the HTTPS protocol.
According to the Istio documentation:
Optional. A list of hosts, which matches to the "request.host" attribute.
If not set, any host is allowed. Must be used only with HTTP.
This is because the host header in HTTPS traffic is encrypted. More info about this is here.
The same goes for the request header token.

What's the purpose of the `VirtualService` in this example?

I am looking at this example from Istio, where they create a ServiceEntry and a VirtualService to access an external service, but I don't understand why they are creating a VirtualService as well.
So, this is the ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: edition-cnn-com
spec:
  hosts:
  - edition.cnn.com
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  - number: 443
    name: https-port
    protocol: HTTPS
  resolution: DNS
With just this object, if I try to curl edition.cnn.com, I get a 200:
/ # curl edition.cnn.com -IL 2>/dev/null | grep HTTP
HTTP/1.1 301 Moved Permanently
HTTP/1.1 200 OK
While I can't access other services:
/ # curl google.com -IL
HTTP/1.1 502 Bad Gateway
location: http://google.com/
date: Fri, 10 Jan 2020 10:12:45 GMT
server: envoy
transfer-encoding: chunked
But in the example they create this VirtualService as well.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: edition-cnn-com
spec:
  hosts:
  - edition.cnn.com
  tls:
  - match:
    - port: 443
      sni_hosts:
      - edition.cnn.com
    route:
    - destination:
        host: edition.cnn.com
        port:
          number: 443
      weight: 100
What's the purpose of the VirtualService in this scenario?
The VirtualService object is basically an abstract Pilot resource that modifies the Envoy configuration.
So creating a VirtualService is a way of modifying Envoy, and its main purpose is answering the question: "for a name, how do I route to backends?"
A VirtualService can also be bound to a Gateway.
In your case, the lack of a VirtualService results in no modification of Envoy away from the default/global configuration. That means the default configuration was enough for this case to work correctly.
So the Gateway which was used was most likely the default one, with the same protocol and port that you requested with curl, all of which matched your ServiceEntry's requirements for connectivity.
This is also mentioned in the Istio documentation:
Virtual services, along with destination rules, are the key building blocks of Istio's traffic routing functionality. A virtual service lets you configure how requests are routed to a service within an Istio service mesh, building on the basic connectivity and discovery provided by Istio and your platform. Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific real destination within the mesh. Your mesh can require multiple virtual services or none depending on your use case.
You can use a VirtualService to add things like a timeout to the connection, like in this example.
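For illustration, a minimal sketch of such a timeout on the edition.cnn.com route (the 5s value is arbitrary, mirroring the wikipedia example earlier on this page):

spec:
  hosts:
  - edition.cnn.com
  http:
  - timeout: 5s
    route:
    - destination:
        host: edition.cnn.com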
You can check the routes for your service with the following command from the Istio documentation: istioctl proxy-config routes <pod-name[.namespace]>
For the bookinfo productpage demo app it is:
istioctl pc routes $(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}') --name 9080 -o json
This way you can check how the routes look without a VirtualService object.
Hope this helps you in understanding Istio.
The VirtualService is not really doing anything, but as the docs say:
creating a VirtualService with a default route for every service, right from the start, is generally considered a best practice in Istio
The ServiceEntry adds the CNN site as an entry to Istio’s internal service registry, so auto-discovered services in the mesh can route to these manually specified services.
Usually that's used to allow monitoring and other Istio features of external services from the start, whereas the VirtualService allows the proper routing of requests (basically traffic management).
This page in the docs gives a bit more background info on using ServiceEntries and VirtualServices, but basically the ServiceEntry makes sure your mesh knows about the service and can monitor it, and the VirtualService controls what traffic is going to the service, which in this case is all of it.