Here is my setup.
I have 2 AWS accounts.
Applications account
Monitoring account
Applications account has EKS + Istio + application-related microservices + Promtail agents.
Monitoring account has a centralized logging system within EKS + Istio (Grafana, Prometheus & Loki pods running).
From the Applications account, I want to push logs to Loki in the Monitoring account. I tried exposing the Loki service outside the Monitoring account, but I am facing issues setting the Loki URL as https://<DNS_URL>/loki. I tried this change using the suggestions here and here, but it is not working for me. I have installed the loki-stack from this URL.
The question is: how can I access the Loki URL from the Applications account so that it can be configured in Promtail in the Applications account?
Please note both accounts are running pods within EKS, not standalone Loki or Promtail.
Thanks and regards.
apiVersion: v1
kind: Service
metadata:
annotations:
meta.helm.sh/release-name: loki
meta.helm.sh/release-namespace: monitoring
creationTimestamp: "2021-10-25T14:59:20Z"
labels:
app: loki
app.kubernetes.io/managed-by: Helm
chart: loki-2.5.0
heritage: Helm
release: loki
name: loki
namespace: monitoring
resourceVersion: "18279654"
uid: 7eba14cb-41c9-445d-bedb-4b88647f1ebc
spec:
clusterIP: 172.20.217.122
clusterIPs:
- 172.20.217.122
ports:
- name: metrics
port: 80
protocol: TCP
targetPort: 3100
selector:
app: loki
release: loki
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
generation: 14
name: grafana-vs
namespace: monitoring
resourceVersion: "18256422"
uid: e8969da7-062c-49d6-9152-af8362c08016
spec:
gateways:
- my-gateway
hosts:
- '*'
http:
- match:
- uri:
prefix: /grafana/
name: grafana-ui
rewrite:
uri: /
route:
- destination:
host: prometheus-operator-grafana.monitoring.svc.cluster.local
port:
number: 80
- match:
- uri:
prefix: /loki
name: loki-ui
rewrite:
uri: /loki
route:
- destination:
host: loki.monitoring.svc.cluster.local
port:
number: 80
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"my-gateway","namespace":"monitoring"},"spec":{"selector":{"istio":"ingressgateway"},"servers":[{"hosts":["*"],"port":{"name":"http","number":80,"protocol":"HTTP"}}]}}
creationTimestamp: "2021-10-18T12:28:05Z"
generation: 1
name: my-gateway
namespace: monitoring
resourceVersion: "16618724"
uid: 9b254a22-958c-4cc4-b426-4e7447c03b87
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
---
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: ip
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"alb.ingress.kubernetes.io/scheme":"internal","alb.ingress.kubernetes.io/target-type":"ip","kubernetes.io/ingress.class":"alb"},"name":"ingress-alb","namespace":"istio-system"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"istio-ingressgateway","servicePort":80},"path":"/*"}]}}]}}
kubernetes.io/ingress.class: alb
finalizers:
- ingress.k8s.aws/resources
generation: 1
name: ingress-alb
namespace: istio-system
resourceVersion: "4447931"
uid: 74b31fba-0f03-41c6-a63f-6a10dee8780c
spec:
rules:
- http:
paths:
- backend:
service:
name: istio-ingressgateway
port:
number: 80
path: /*
pathType: ImplementationSpecific
status:
loadBalancer:
ingress:
- hostname: internal-k8s-istiosys-ingressa-25a256ef4d-1368971909.us-east-1.elb.amazonaws.com
kind: List
metadata:
resourceVersion: ""
selfLink: ""
The Ingress is associated with an AWS ALB.
I want to access Loki via the ALB URL, like http(s)://my-alb-url/loki.
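For what it's worth, the Promtail config I am aiming for would look roughly like this (a minimal sketch; the hostname is the placeholder ALB URL above, and Loki's push API lives under /loki/api/v1/push):
clients:
  - url: https://my-alb-url/loki/api/v1/push
    # tenant_id: applications  # hypothetical tenant name; only needed if Loki multi-tenancy is enabled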
I hope I have provided the required details now.
Let me know. Thanks.
...how can I access loki URL from applications account so that it can be configured in promtail in applications a/c?
You didn't describe what issue you hit when you used the external LB above, which should work. In any case, since that method goes through the Internet, the security risk is higher and there is egress cost, considering the volume of logging. You can use PrivateLink in this case; see page 16, Shared Services. Your Promtail will then use the endpoint ENI's DNS name as the Loki target.
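A rough sketch of the PrivateLink wiring described above; all IDs, ARNs and names are placeholders, and Loki would first need to be fronted by an internal NLB in the Monitoring account:
# Monitoring account: publish the NLB as a VPC endpoint service
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/loki-nlb/abcdef0123456789 \
  --no-acceptance-required
# Applications account: create an interface endpoint to that service
aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Interface \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 \
  --subnet-ids subnet-aaaa subnet-bbbb \
  --security-group-ids sg-0123456789abcdef0
Promtail then points its client URL at the endpoint's DNS name instead of a public address.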
Related
Goal
I'm trying to set up a
Cloud LB -> GKE [istio-gateway -> my-service]
This was working before; however, I had to recreate the cluster 2 days ago and ran into this problem. Maybe some version change?
This is my ingress manifest file
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "my-dev-ingress"
namespace: "istio-system"
annotations:
kubernetes.io/ingress.global-static-ip-name: "my-dev-gclb-ip"
ingress.gcp.kubernetes.io/pre-shared-cert: "my-dev-cluster-cert-05"
kubernetes.io/ingress.allow-http: "false"
spec:
backend:
serviceName: "istio-ingressgateway"
servicePort: 80
Problem
The health check issued by the Cloud LB fails. The backend service created by the Ingress creates a default /:80 health check.
What I have tried
1) I tried to set the health check generated by the GKE ingress to point to the istio-gateway status port 15020 in the backend config console. The health check then passed for a bit, until the backend config reverted itself to use the original /:80 health check that it created. I even tried deleting the health check it created, and it just creates another one.
2) I also tried using the Istio virtual service to route the health check to port 15020 as shown here, without much success.
3) I also tried just routing everything in the virtual service to the health check port:
hosts:
- "*"
gateways:
- my-web-gateway
http:
- match:
- method:
exact: GET
uri:
exact: /
route:
- destination:
host: istio-ingress.gke-system.svc.cluster.local
port:
number: 15020
4) Most of the search results I found say that setting a readinessProbe in the deployment should tell the ingress to set the proper health check. However, all of my services are behind the istio-gateway, so I can't really do the same.
I'm very lost right now and would really appreciate it if anyone could point me in the right direction. Thanks.
I got it working with GKE 1.20.4-gke.2200 and Istio 1.9.2. The documentation for this is non-existent, or at least I have not found anything. You have to add an annotation to the istio-ingressgateway service to use a backend-config when using the "istioctl install -f values.yaml" command:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
components:
ingressGateways:
- name: istio-ingressgateway
enabled: true
k8s:
serviceAnnotations:
cloud.google.com/backend-config: '{"default": "istio-ingressgateway-config"}'
Then you have to create the backend-config with the correct health check port:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: istio-ingressgateway-config
namespace: istio-system
spec:
healthCheck:
checkIntervalSec: 30
port: 15021
type: HTTP
requestPath: /healthz/ready
With this, the ingress should automatically change the load balancer's health check configuration, while traffic is still routed to Istio on port 80:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: web
namespace: istio-system
annotations:
kubernetes.io/ingress.global-static-ip-name: web
networking.gke.io/managed-certificates: "web"
spec:
rules:
- host: test.example.com
http:
paths:
- path: "/*"
pathType: Prefix
backend:
service:
name: istio-ingressgateway
port:
number: 80
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: direct-web
namespace: istio-system
spec:
hosts:
- test.example.com
gateways:
- web
http:
- match:
- uri:
prefix: "/"
route:
- destination:
port:
number: 8080 #internal service port
host: "internal-service.service-namespace.svc.cluster.local"
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: web
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- test.example.com
You could also set hosts to "*" in the VirtualService and Gateway.
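To check that the annotation actually landed on the generated service (assuming the default istio-ingressgateway name in istio-system), something like this should print the backend-config reference:
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.metadata.annotations.cloud\.google\.com/backend-config}'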
Updated
So, I followed the AWS docs on how to set up an EKS cluster with Fargate using the eksctl tool. That all went smoothly, but when I get to the part where I deploy my actual app, I get no endpoints and the ingress has no address associated with it. As seen here:
NAME HOSTS ADDRESS PORTS AGE
testapp-ingress * 80 129m
So, I can't hit it externally. But the test app (the 2048 game) had an address from the ELB associated with its ingress. I thought it might be the subnet tags, as suggested here; my subnets weren't tagged the right way, so I tagged them the way suggested in that article. Still no luck.
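For reference, these are the subnet tags the ALB ingress controller discovers subnets by; the subnet ID and cluster name below are placeholders:
# Public subnets (internet-facing ALBs)
aws ec2 create-tags --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared Key=kubernetes.io/role/elb,Value=1
# Private subnets (internal ALBs) use Key=kubernetes.io/role/internal-elb,Value=1 instead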
This is the initial article I followed to get set up. I've performed all the steps and only hit a wall with the alb: https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-next-steps
This is the alb article I've followed: https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
I followed the steps to deploy the sample app 2048 and that works just fine. I've made my configs very similar and it should work. I've followed all of the steps. Here are my old configs, new config below:
deployment yaml>>>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: "testapp-deployment"
namespace: "testapp-qa"
spec:
selector:
matchLabels:
app: "testapp"
replicas: 5
template:
metadata:
labels:
app: "testapp"
spec:
containers:
- image: xxxxxxxxxxxxxxxxxxxxxxxxtestapp:latest
imagePullPolicy: Always
name: "testapp"
ports:
- containerPort: 80
---
service yaml>>>
apiVersion: v1
kind: Service
metadata:
name: "testapp-service"
namespace: "testapp-qa"
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
type: NodePort
selector:
app: "testapp"
---
ingress yaml >>>
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "testapp-ingress"
namespace: "testapp-qa"
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
labels:
app: testapp-ingress
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: "testapp-service"
servicePort: 80
---
namespace yaml>>>
apiVersion: v1
kind: Namespace
metadata:
name: "testapp-qa"
Here are some of the logs from the ingress controller>>
E0316 22:32:39.776535 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxxx" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp-ingress"}
E0316 22:36:28.222391 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxxx" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp-ingress"}
Per the suggestion in the comments from @Michael Hausenblas, I've added an annotation to my service for the ALB ingress.
Now that my ingress controller is using the correct ELB, I checked the logs because I still can't hit my app's /healthcheck. The logs:
E0317 16:00:45.643937 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxx-3a7d-4794-95fb-a18835abe0d3" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp"}
I0317 16:00:47.868939 1 rules.go:82] testapp-qa/testapp-ingress: modifying rule 1 on arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxx:listener/app/xxxxxxxxxxx-testappqa-testappin-b879/xxxxxxxxxxx/6b41c0d3ce97ae6b
I0317 16:00:47.890674 1 rules.go:98] testapp-qa/testapp-ingress: rule 1 modified with conditions [{ Field: "path-pattern", Values: ["/*"] }]
Update
I've updated my config. I don't have any more errors, but I am still unable to hit my endpoints to test whether my app is accepting traffic. It might be something to do with Fargate, or on the AWS side, that I'm not seeing. Here's my updated config:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: "testapp"
namespace: "testapp-qa"
spec:
selector:
matchLabels:
app: "testapp"
replicas: 5
template:
metadata:
labels:
app: "testapp"
spec:
containers:
- image: 673312057223.dkr.ecr.us-east-1.amazonaws.com/wood-testapp:latest
imagePullPolicy: Always
name: "testapp"
ports:
- containerPort: 9898
---
apiVersion: v1
kind: Service
metadata:
name: "testapp"
namespace: "testapp-qa"
annotations:
alb.ingress.kubernetes.io/target-type: ip
spec:
ports:
- port: 80
targetPort: 9898
protocol: TCP
name: http
type: NodePort
selector:
app: "testapp"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "testapp-ingress"
namespace: "testapp-qa"
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/healthcheck-path: /healthcheck
labels:
app: testapp
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: "testapp"
servicePort: 80
---
apiVersion: v1
kind: Namespace
metadata:
name: "testapp-qa"
In your service, try adding the following annotation:
annotations:
alb.ingress.kubernetes.io/target-type: ip
You'd also need to explicitly tell the Ingress resource, via the alb.ingress.kubernetes.io/healthcheck-path annotation, where/how to perform the health checks for the target group. See the ALB Ingress controller docs for the annotation semantics.
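Put together, the relevant pieces would look roughly like this (a sketch reusing the names from the configs above):
apiVersion: v1
kind: Service
metadata:
  name: "testapp"
  namespace: "testapp-qa"
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "testapp-ingress"
  namespace: "testapp-qa"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-path: /healthcheck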
We want Istio to allow incoming traffic to a service only from a particular namespace. How can we do this with Istio? We are running Istio version 1.1.3.
Update:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: test-app-ingress
namespace: test-ns
spec:
podSelector:
matchLabels:
app: testapp
ingress:
- ports:
- protocol: TCP
port: 80
from:
- podSelector:
matchLabels:
istio: ingress
This did not work; I am able to access the service from other namespaces as well. Next I tried:
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
name: external-api-caller
namespace: test-ns
spec:
rules:
- services: ["testapp"]
methods: ["*"]
constraints:
- key: "destination.labels[version]"
values: ["v1", "v2"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
name: external-api-caller
namespace: test-ns
spec:
subjects:
- properties:
source.namespace: "default"
roleRef:
kind: ServiceRole
name: "external-api-caller"
I am able to access the service from all the namespaces, whereas I expected it to be allowed only from the "default" namespace.
I'm not sure if this is possible for a particular namespace, but it will work with labels.
You can create a network policy in Istio, this is nicely explained on Traffic Routing in Kubernetes via Istio and Envoy Proxy.
...
ingress:
- from:
- podSelector:
matchLabels:
zone: trusted
...
In the example, only pods with the label zone: trusted will be allowed to make incoming connections to the pod.
You can read about Using Network Policy with Istio.
I would also recommend reading Security Concepts in Istio as well as Denials and White/Black Listing.
Hope this helps you.
Using a k8s NetworkPolicy: Yes, it is possible. The example posted in the question does not allow traffic from a different namespace. In the ingress rule you have to use a namespace selector, which specifies the namespace from which you want to allow the traffic. In the example below, the namespace with label 'ns-group: prod-ns' will be allowed to access the pod with label 'app: testapp' on port 80 over TCP:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: test-app-ingress
namespace: test-ns
spec:
podSelector:
matchLabels:
app: testapp
ingress:
- ports:
- protocol: TCP
port: 80
from:
- namespaceSelector:
matchLabels:
ns-group: prod-ns
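Note that the namespaceSelector matches labels on the namespace object itself, so the allowed namespace has to carry that label; assuming it is called prod:
kubectl label namespace prod ns-group=prod-ns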
Using an Istio whitelisting policy: you can go through the whitelisting policy examples and the attribute vocabulary.
Below is an example:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
name: whitelist-namespace
spec:
compiledAdapter: listchecker
params:
overrides: ["prod-ns"]
blacklist: false
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
name: source-ns-instance
spec:
compiledTemplate: listentry
params:
value: source.namespace
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
name: rule-policy-1
spec:
match: destination.labels["app"] == "testapp" && destination.namespace == "test-ns"
actions:
- handler: whitelist-namespace
instances: [ source-ns-instance ]
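A quick way to sanity-check the whitelist is to call the service from a pod in a namespace that is not in the overrides list; the test image and the exact service URL here are assumptions for illustration:
kubectl run curl-test -n default --rm -it --restart=Never --image=curlimages/curl \
  -- curl -s -o /dev/null -w "%{http_code}\n" http://testapp.test-ns.svc.cluster.local/
# A namespace outside the whitelist should be rejected by the policy; prod-ns should get through.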
I have the following application, which I'm able to run in K8s successfully using a Service of type LoadBalancer. It's a very simple app with two routes:
/ - you should see 'hello application'
/api/books - should provide a list of books in JSON format
This is the service
apiVersion: v1
kind: Service
metadata:
name: go-ms
labels:
app: go-ms
tier: service
spec:
type: LoadBalancer
ports:
- port: 8080
selector:
app: go-ms
This is the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: go-ms
labels:
app: go-ms
spec:
replicas: 2
template:
metadata:
labels:
app: go-ms
tier: service
spec:
containers:
- name: go-ms
image: rayndockder/http:0.0.2
ports:
- containerPort: 8080
env:
- name: PORT
value: "8080"
resources:
requests:
memory: "64Mi"
cpu: "125m"
limits:
memory: "128Mi"
cpu: "250m"
After applying both YAMLs and calling the URL:
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I was able to see the data in the browser as expected, and also the root app using just the external IP.
Now I want to use Istio, so I followed the guide and installed it successfully via Helm
using https://istio.io/docs/setup/kubernetes/install/helm/ and verified that all 53 CRDs are there, and also that the istio-system
components (such as istio-ingressgateway, istio-pilot, etc.; all 8 deployments) are up and running.
I've changed the service above from LoadBalancer to NodePort
and created the following Istio config according to the Istio docs:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: http-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 8080
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: virtualservice
spec:
hosts:
- "*"
gateways:
- http-gateway
http:
- match:
- uri:
prefix: "/"
- uri:
exact: "/api/books"
route:
- destination:
port:
number: 8080
host: go-ms
In addition, I've labeled the namespace where the application is deployed:
kubectl label namespace books istio-injection=enabled
Now, to get the external IP, I've used the command
kubectl get svc -n istio-system -l istio=ingressgateway
and get this in the external-ip
b0751-1302075110.eu-central-1.elb.amazonaws.com
When trying to access the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I got the error:
This site can’t be reached
ERR_CONNECTION_TIMED_OUT
If I run the Docker image rayndockder/http:0.0.2 via
docker run -it -p 8080:8080 httpv2
the paths work correctly!
Any idea/hint what the issue could be?
Is there a way to trace the Istio configs to see whether something is missing, or whether we have some collision with a port or network policy, maybe?
BTW, the deployment and service can run on any cluster for testing, if someone could help...
If I change everything to port 80 (in all YAML files, the application, and the Docker image), I am able to get the data for the root path, but not for "/api/books".
I tried your config, with the gateway port modified from 8080 to 80, in my local minikube setup of Kubernetes and Istio. This is the command I used:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: go-ms
labels:
app: go-ms
tier: service
spec:
ports:
- port: 8080
selector:
app: go-ms
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: go-ms
labels:
app: go-ms
spec:
replicas: 2
template:
metadata:
labels:
app: go-ms
tier: service
spec:
containers:
- name: go-ms
image: rayndockder/http:0.0.2
ports:
- containerPort: 8080
env:
- name: PORT
value: "8080"
resources:
requests:
memory: "64Mi"
cpu: "125m"
limits:
memory: "128Mi"
cpu: "250m"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: http-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: go-ms-virtualservice
spec:
hosts:
- "*"
gateways:
- http-gateway
http:
- match:
- uri:
prefix: /
- uri:
exact: /api/books
route:
- destination:
port:
number: 8080
host: go-ms
EOF
The reason I changed the gateway port to 80 is that the Istio ingress gateway by default opens up a few ports such as 80, 443 and a few others. As minikube doesn't have an external load balancer, I used node ports, which in my case is 31380.
I was able to access the app with the URL http://$(minikube ip):31380.
There is no point in changing the ports of the services and deployments, since these are application-specific.
Maybe this question specifies the ports opened by the Istio ingress gateway.
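If the node port differs in your installation, it can be looked up directly; the port name "http2" is the usual name for the gateway's HTTP port, but it may vary by Istio version:
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'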
I am trying to expose Kiali on my default gateway. I have other services working for apps in the default namespace, but I have not been able to route traffic to anything in the istio-system namespace.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- '*'
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- '*'
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: kiali
namespace: default
spec:
hosts:
- kiali.dev.example.com
gateways:
- gateway
http:
- route:
- destination:
host: kiali.istio-system.svc.cluster.local
port:
number: 20001
The problem was that I had mTLS enabled, and Kiali does not have a sidecar, so it cannot be validated by mTLS. The solution was to add a DestinationRule disabling mTLS for it:
apiVersion: 'networking.istio.io/v1alpha3'
kind: DestinationRule
metadata:
name: kiali
namespace: istio-system
spec:
host: kiali.istio-system.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
You should define an ingress gateway and make sure that the hosts in the gateway match the hosts in the virtual service. Also specify the port of the destination. See the Control Ingress Traffic task.
For me this worked!
I ran
istioctl proxy-config routes istio-ingressgateway-866d7949c6-68tt4 -n istio-system -o json > ./routes.json
to get a dump of all the routes. The Kiali route got corrupted for some reason. I deleted the VirtualService and created it again; that fixed it.
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: kiali
namespace: istio-system
spec:
gateways:
- istio-system/my-gateway
hosts:
- 'mydomain.com'
http:
- match:
- uri:
prefix: /kiali/
route:
- destination:
host: kiali.istio-system.svc.cluster.local
port:
number: 20001
weight: 100
---
apiVersion: 'networking.istio.io/v1alpha3'
kind: DestinationRule
metadata:
name: kiali
namespace: istio-system
spec:
host: kiali.istio-system.svc.cluster.local
trafficPolicy:
tls:
mode: SIMPLE
---
Note: hosts needed to be set; '*' didn't work for some reason.
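As a quick check after recreating the VirtualService, something like the following should return the Kiali UI through the gateway (the IP and host are placeholders):
curl -I -H "Host: mydomain.com" http://<ingress-gateway-ip>/kiali/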