Goal
I'm trying to set up a
Cloud LB -> GKE [istio-gateway -> my-service]
This was working before; however, I had to recreate the cluster 2 days ago and ran into this problem. Maybe some version changed?
This is my Ingress manifest file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "my-dev-ingress"
namespace: "istio-system"
annotations:
kubernetes.io/ingress.global-static-ip-name: "my-dev-gclb-ip"
ingress.gcp.kubernetes.io/pre-shared-cert: "my-dev-cluster-cert-05"
kubernetes.io/ingress.allow-http: "false"
spec:
backend:
serviceName: "istio-ingressgateway"
servicePort: 80
Problem
The health check issued by the Cloud LB fails. The backend service created by the Ingress creates a default /:80 health check.
What I have tried
1) I tried to set the health check generated by the GKE Ingress to point to the istio-gateway status port 15020 in the backend config console. The health check then passed for a bit, until the backend config reverted itself to the original /:80 health check it had created. I even tried deleting the health check it created, and it just creates another one.
2) I also tried using an Istio VirtualService to route the health check to port 15020, as shown here, without much success.
3) I also tried just routing everything in the VirtualService to the health check port:
hosts:
- "*"
gateways:
- my-web-gateway
http:
- match:
- method:
exact: GET
uri:
exact: /
route:
- destination:
host: istio-ingress.gke-system.svc.cluster.local
port:
number: 15020
4) Most of the search results I found say that setting a readinessProbe in the Deployment should tell the Ingress to set the proper health check (a rough sketch of such a probe is shown below). However, all of my services sit behind the istio-gateway, so I can't really do the same.
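For reference, this is roughly what such a readinessProbe looks like on the gateway container (a sketch only; /healthz/ready on the status port is the gateway's own readiness endpoint, 15020 here to match the port mentioned above, 15021 on newer Istio releases):

readinessProbe:
  httpGet:
    path: /healthz/ready   # Istio gateway readiness endpoint on the status port
    port: 15020
  initialDelaySeconds: 1
  periodSeconds: 2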
I'm very lost right now and would really appreciate it if anyone could point me in the right direction. Thanks.
I got it working with GKE 1.20.4-gke.2200 and Istio 1.9.2. The documentation for this is nonexistent, or at least I have not found any. You have to add an annotation to the istio-ingressgateway Service so that it uses a BackendConfig, when installing with the "istioctl install -f values.yaml" command:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
components:
ingressGateways:
- name: istio-ingressgateway
enabled: true
k8s:
serviceAnnotations:
cloud.google.com/backend-config: '{"default": "istio-ingressgateway-config"}'
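If your gateway was installed some other way, what ultimately matters is that the annotation ends up on the istio-ingressgateway Service itself. A sketch of just the relevant part (the rest of the Service stays as the installer created it):

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    cloud.google.com/backend-config: '{"default": "istio-ingressgateway-config"}'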
Then you have to create the BackendConfig with the correct health check port:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: istio-ingressgateway-config
namespace: istio-system
spec:
healthCheck:
checkIntervalSec: 30
port: 15021
type: HTTP
requestPath: /healthz/ready
With this in place, the Ingress should automatically pick up the correct load balancer health check configuration, while its backend still points to Istio on port 80:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: web
namespace: istio-system
annotations:
kubernetes.io/ingress.global-static-ip-name: web
networking.gke.io/managed-certificates: "web"
spec:
rules:
- host: test.example.com
http:
paths:
- path: "/*"
pathType: Prefix
backend:
service:
name: istio-ingressgateway
port:
number: 80
A VirtualService then routes requests arriving at the gateway to the internal service:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: direct-web
namespace: istio-system
spec:
hosts:
- test.example.com
gateways:
- web
http:
- match:
- uri:
prefix: "/"
route:
- destination:
port:
number: 8080 #internal service port
host: "internal-service.service-namespace.svc.cluster.local"
Finally, the Istio Gateway that exposes the host on port 80 of the ingress gateway:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: web
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- test.example.com
You could also set hosts to "*" in the VirtualService and the Gateway.
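For example, a wildcard variant of the Gateway above would be (a sketch; only the hosts list changes, and the VirtualService hosts field changes the same way):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"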
Related
Here is my setup.
I have 2 AWS accounts.
Applications account
Monitoring account
The Applications account has EKS + Istio + application-related microservices + Promtail agents.
The Monitoring account has a centralized logging system within EKS + Istio (Grafana, Prometheus & Loki pods running).
From the Applications account, I want to push logs to Loki in the Monitoring account. I tried exposing the Loki service outside the Monitoring account, but I am facing issues setting the Loki URL as https://<DNS_URL>/loki. I tried this using the suggestions here and here, but that is not working for me. I have installed loki-stack from this url.
The question is: how can I access the Loki URL from the Applications account so that it can be configured in Promtail in the Applications account?
Please note both accounts run these components as pods within EKS, not as standalone Loki or Promtail.
Thanks and regards.
apiVersion: v1
kind: Service
metadata:
annotations:
meta.helm.sh/release-name: loki
meta.helm.sh/release-namespace: monitoring
creationTimestamp: "2021-10-25T14:59:20Z"
labels:
app: loki
app.kubernetes.io/managed-by: Helm
chart: loki-2.5.0
heritage: Helm
release: loki
name: loki
namespace: monitoring
resourceVersion: "18279654"
uid: 7eba14cb-41c9-445d-bedb-4b88647f1ebc
spec:
clusterIP: 172.20.217.122
clusterIPs:
- 172.20.217.122
ports:
- name: metrics
port: 80
protocol: TCP
targetPort: 3100
selector:
app: loki
release: loki
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
generation: 14
name: grafana-vs
namespace: monitoring
resourceVersion: "18256422"
uid: e8969da7-062c-49d6-9152-af8362c08016
spec:
gateways:
- my-gateway
hosts:
- '*'
http:
- match:
- uri:
prefix: /grafana/
name: grafana-ui
rewrite:
uri: /
route:
- destination:
host: prometheus-operator-grafana.monitoring.svc.cluster.local
port:
number: 80
- match:
- uri:
prefix: /loki
name: loki-ui
rewrite:
uri: /loki
route:
- destination:
host: loki.monitoring.svc.cluster.local
port:
number: 80
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"my-gateway","namespace":"monitoring"},"spec":{"selector":{"istio":"ingressgateway"},"servers":[{"hosts":["*"],"port":{"name":"http","number":80,"protocol":"HTTP"}}]}}
creationTimestamp: "2021-10-18T12:28:05Z"
generation: 1
name: my-gateway
namespace: monitoring
resourceVersion: "16618724"
uid: 9b254a22-958c-4cc4-b426-4e7447c03b87
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
---
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: ip
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"alb.ingress.kubernetes.io/scheme":"internal","alb.ingress.kubernetes.io/target-type":"ip","kubernetes.io/ingress.class":"alb"},"name":"ingress-alb","namespace":"istio-system"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"istio-ingressgateway","servicePort":80},"path":"/*"}]}}]}}
kubernetes.io/ingress.class: alb
finalizers:
- ingress.k8s.aws/resources
generation: 1
name: ingress-alb
namespace: istio-system
resourceVersion: "4447931"
uid: 74b31fba-0f03-41c6-a63f-6a10dee8780c
spec:
rules:
- http:
paths:
- backend:
service:
name: istio-ingressgateway
port:
number: 80
path: /*
pathType: ImplementationSpecific
status:
loadBalancer:
ingress:
- hostname: internal-k8s-istiosys-ingressa-25a256ef4d-1368971909.us-east-1.elb.amazonaws.com
kind: List
metadata:
resourceVersion: ""
selfLink: ""
The Ingress is associated with an AWS ALB.
I want to access Loki via the ALB URL, like http(s)://my-alb-url/loki.
I hope I have provided the required details now.
Let me know. Thanks.
...how can I access the Loki URL from the Applications account so that it can be configured in Promtail in the Applications account?
You didn't describe what issue you ran into when you used the external LB above, which should work. In any case, since that method goes over the Internet, the security risk is higher and there is egress cost, considering the volume of logging. You can use PrivateLink in this case; see page 16, Shared Services. Your Promtail will use the endpoint's ENI DNS name as the Loki target.
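For illustration, a minimal sketch of the Promtail client section that would point at such an endpoint (this is the part that ends up in Promtail's config file; how it is passed through the Helm chart values depends on the chart version, and the endpoint DNS name below is purely hypothetical):

clients:
  - url: http://vpce-0abc123-example.vpce-svc-0def456.us-east-1.vpce.amazonaws.com/loki/api/v1/push   # hypothetical PrivateLink endpoint DNS name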
I have deployed an application in my minikube cluster. The application is accessible via the ingress gateway service at http://172.18.97.73:31566/.
minikube ip
172.18.97.73
kubectl get svc -n istio-system
istio-ingressgateway NodePort 10.99.4.153 <none> 15021:31123/TCP,80:31566/TCP,443:32094/TCP,15012:30004/TCP,15443:30369/TCP 3h48m
but the Kiali graph shows the origin of the traffic as unknown.
kiali graph
I can't understand what is wrong. Could you help me?
The manifests are here:
apiVersion: v1
kind: Service
metadata:
name: application
spec:
ports:
- port: 8082
selector:
app: app
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: http-gateway
spec:
selector:
istio: ingressgateway # use Istio default gateway implementation
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: application-vs
spec:
hosts:
- "*"
gateways:
- "default/http-gateway"
http:
- route:
- destination:
host: application.default.svc.cluster.local
port:
number: 8082
I am trying to expose Kiali on my default gateway. I have other services working for apps in the default namespace, but I have not been able to route traffic to anything in the istio-system namespace:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- '*'
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- '*'
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: kiali
namespace: default
spec:
hosts:
- kiali.dev.example.com
gateways:
- gateway
http:
- route:
- destination:
host: kiali.istio-system.svc.cluster.local
port:
number: 20001
The problem was that I had mTLS enabled, and Kiali does not have a sidecar, so it cannot be validated by mTLS. The solution was to add a DestinationRule disabling mTLS for it.
apiVersion: 'networking.istio.io/v1alpha3'
kind: DestinationRule
metadata:
name: kiali
namespace: istio-system
spec:
host: kiali.istio-system.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
You should define an ingress gateway and make sure that the hosts in the gateway match the hosts in the virtual service. Also specify the port of the destination. See the Control Ingress Traffic task.
For me this worked!
I ran
istioctl proxy-config routes istio-ingressgateway-866d7949c6-68tt4 -n istio-system -o json > ./routes.json
to get a dump of all the routes. The Kiali route had become corrupted for some reason. I deleted the VirtualService and created it again; that fixed it.
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: kiali
namespace: istio-system
spec:
gateways:
- istio-system/my-gateway
hosts:
- 'mydomain.com'
http:
- match:
- uri:
prefix: /kiali/
route:
- destination:
host: kiali.istio-system.svc.cluster.local
port:
number: 20001
weight: 100
---
apiVersion: 'networking.istio.io/v1alpha3'
kind: DestinationRule
metadata:
name: kiali
namespace: istio-system
spec:
host: kiali.istio-system.svc.cluster.local
trafficPolicy:
tls:
mode: SIMPLE
---
Note: hosts needed to be set; '*' didn't work for some reason.
Scenario:
I have 2 clusters, A and B, both with Istio installed. I want to expose service-1 in cluster A as service-1.suffix, and let service-2 in cluster B access service-1 via service-1.suffix. The following picture illustrates my idea.
In cluster A, I define a VirtualService and a Gateway to route the requests to service-1.
Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: service-1
spec:
selector:
istio: ingressgateway # use istio default ingress gateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "service-1.suffix"
VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: service-1
spec:
hosts:
- service-1.default.svc.cluster.local
- "service-1.suffix"
gateways:
- service-1
- mesh
http:
- route:
- destination:
host: service-1.default.svc.cluster.local
port:
number: 8080
This is working fine, as I can use curl to access it successfully:
curl -I -HHost:service-1.suffix http://cluster_A_proxy:31380
The next step is creating a ServiceEntry and a VirtualService in cluster B. Here are my definition files:
ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: service-1
spec:
hosts:
- "service-1.suffix" #the global suffix mcm.com could be defined in mcm.
#addresses:
#- xxx/32
ports:
- number: 80
name: http
protocol: HTTP
resolution: STATIC
location: MESH_EXTERNAL
endpoints:
- address: 1.1.1.1 #The cluster A proxy ip
ports:
http: 31380
VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: service-1
spec:
hosts:
- "service-1.suffix"
http:
- route:
- destination:
host: "service-1.suffix"
port:
number: 80
In cluster B, when I try to use curl to resolve service-1.suffix, I get a DNS error saying it cannot be resolved:
curl: (6) Could not resolve host: service-1.suffix
How can I fix this?
#The command I am using in an istio app in Cluster B:
kubectl exec -it pod_name -c container_name bash
curl -I -HHost:service-1.suffix http://service-1.suffix
Edit:
When I use another resolvable hostname like www.google.com in the ServiceEntry, it gets through: the requests to www.google.com are redirected to service-1 in cluster A. Likewise, if I use nip.io as my suffix, it works well. However, the made-up name service-1.suffix cannot be resolved.
Define a Kubernetes ExternalName service with a random IP:
kind: Service
apiVersion: v1
metadata:
name: service1
spec:
type: ExternalName
externalName: 1.1.1.1
I'm having issues configuring SSL termination in my Kubernetes cluster and am attempting to figure out the best place for this to happen.
I have been able to get it working by following the instructions listed here and then updating the ingress controller Service to include the SSL certificate details using the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation:
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app: ingress-nginx
annotations:
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:...
spec:
type: LoadBalancer
selector:
app: ingress-nginx
ports:
- name: https
port: 443
targetPort: 80
I then have Ingress rules and Services set up similar to:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: app1
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: app1.foo.bar
http:
paths:
- backend:
serviceName: app1
servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
name: app1
spec:
type: LoadBalancer
ports:
- name: http
port: 80
targetPort: 80
selector:
app: app1
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: app1
spec:
template:
metadata:
labels:
app: app1
spec:
containers:
- image: nginx
name: nginx
ports:
- containerPort: 80
When going to app1.foo.bar I can see that:
HTTP requests are redirected to HTTPS
the SSL certificate is correctly applied
Originally I was trying to apply the certificate to my individual app Services. I could see the certificate being applied to the ELB in AWS, but it wasn't being passed through. So my question is:
Is this the correct configuration or is there a better solution?
Thank you :)
I would suggest terminating SSL on the NGINX Ingress Controller exposed with the ELB, and using kube-lego for automated SSL certificate management:
https://github.com/kubernetes/ingress-nginx &
https://github.com/jetstack/kube-lego
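For reference, a rough sketch of what an Ingress managed by kube-lego could look like (the hostname and secret name are illustrative; kube-lego watches for the kubernetes.io/tls-acme annotation and fills the referenced Secret with the issued certificate):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"   # picked up by kube-lego
spec:
  tls:
  - hosts:
    - app1.foo.bar
    secretName: app1-foo-bar-tls     # kube-lego stores the issued certificate here
  rules:
  - host: app1.foo.bar
    http:
      paths:
      - backend:
          serviceName: app1
          servicePort: 80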