Istio-ingressgateway customization during installation - istio

I need to change the 'hosts' of the Istio ingressgateway (Gateway object) from default value '*' to 'whatever' during Istio installation. We are using the IstioOperator to make customizations for our installation.
I think it should be done with k8s overlays:
...
k8s:
  overlays:
  - kind: Gateway
    name: istio-ingressgateway
    patches:
    - path: spec.servers.??????
      value: whatever
...
What should be the expression for the path attribute?
I found some info in https://github.com/istio/istio/blob/master/operator/pkg/patch/patch.go, but the case there is not exactly the same.
So, the istio-ingressgateway Gateway object in namespace istio-system should change from
spec:
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
to
spec:
  servers:
  - hosts:
    - whatever
    port:
      name: http
      number: 80
      protocol: HTTP
We are using Istio 1.5.6.
Thanks!
UPDATE with a working example
Thanks to @Jakub for pointing me in the right direction.
overlays:
- kind: Gateway
  name: istio-ingressgateway
  patches:
  - path: spec.servers[0]
    value:
      hosts:
      - whatever.dummy
      port:
        name: http
        number: 80
        protocol: HTTP
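For context, the overlays block above sits under an ingress gateway entry in the IstioOperator spec. A minimal sketch of the full resource, assuming the default component name (the file layout is illustrative, not from the original post):

```yaml
# Sketch: where the overlay lives inside an IstioOperator resource.
# Applied with `istioctl install -f <file>`; the patch replaces the
# first server entry of the generated istio-ingressgateway Gateway.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        overlays:
        - apiVersion: networking.istio.io/v1alpha3
          kind: Gateway
          name: istio-ingressgateway
          patches:
          - path: spec.servers[0]
            value:
              hosts:
              - whatever.dummy
              port:
                name: http
                number: 80
                protocol: HTTP
```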

I am posting this as a community wiki answer for better visibility.
There is a similar question about that, with an answer and code example provided by @Jens Wurm:
This is part of an overlay that will add another server entry with some example specs. Just tweak it to be the way you want it to be. You can also override your first server entry with a path of spec.servers[0] and then set the value to whatever you want it to be.
ingressGateways:
- enabled: true
  k8s:
    overlays:
    - apiVersion: networking.istio.io/v1alpha3
      kind: Gateway
      name: ingressgateway
      patches:
      - path: spec.servers[1]
        value:
          hosts:
          - '*.example.com'
          port:
            name: https
            number: 443
            protocol: HTTPS
          tls:
            credentialName: example-cert
            mode: SIMPLE
            privateKey: sds
            serverCertificate: sds
And there is a working example provided by @Peter Claes:
overlays:
- kind: Gateway
  name: istio-ingressgateway
  patches:
  - path: spec.servers[0]
    value:
      hosts:
      - whatever.dummy
      port:
        name: http
        number: 80
        protocol: HTTP

Related

Rewrite in virtualsvc not working when authpolicy implemented - Istio

I am following some of the instructions in https://github.com/istio/istio/issues/40579 to set up Istio with a custom OAuth2 provider with Keycloak.
I have a main ingress which sends all the traffic on one host to istio-ingressgateway:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: istio-ingress-main
  namespace: istio-system
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - mlp.prod
    secretName: mlp-tls
  rules:
  - host: mlp.prod # A FQDN that describes the host where that rule should be applied
    http:
      paths: # A list of paths and handlers for that host
      - path: /
        pathType: Prefix
        backend: # How the ingress will handle the requests
          service:
            name: istio-ingressgateway # Which service the request will be forwarded to
            port:
              number: 80 # Which port in that service
My ingress gateway is defined as below
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: prod-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - 'mlp.prod'
One of my services is MLflow, which is installed in the mlflow namespace; its virtual service is defined as below:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: gateway-vs-mlflow
  namespace: mlflow
spec:
  hosts:
  - '*'
  gateways:
  - istio-system/prod-gateway
  http:
  - match:
    - uri:
        prefix: "/mlflow"
    rewrite:
      uri: " "
    route:
    - destination:
        host: mlflow-service.mlflow.svc.cluster.local
        port:
          number: 5000
Now when I try to access the host mlp.prod/mlflow/, I am able to access MLflow without any issues and the UI comes up correctly.
However, if I try to add an OAuth provider in an authorization policy on the /mlflow route, I get a 404 "page not available" after the OAuth authentication is done.
The authorization policy is as below:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: oauth-policy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: CUSTOM
  provider:
    name: "oauth2-proxy"
  rules:
  - to:
    - operation:
        paths: ["/mlflow"]
Please assist with this issue. Is the rewrite in the virtual service supposed to work only without an authorization policy using the oauth2-proxy provider?
Kindly help.
Thanks,
Sujith.
Version
istioctl version
client version: 1.15.2
control plane version: 1.15.2
data plane version: 1.15.2 (8 proxies)
kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.2
Kustomize Version: v4.5.4
Server Version: v1.22.9
WARNING: version difference between client (1.24) and server (1.22) exceeds the supported minor version skew of +/-1
I was able to resolve this by setting the oauth2-proxy config with the new value below:
upstreams="static://200"
Once this was done, oauth2-proxy started returning 200 for authenticated users and everything was fine.
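As a hedged sketch of where that setting lives, assuming oauth2-proxy is driven by its config file (the surrounding file layout is illustrative, not from the original answer):

```yaml
# oauth2-proxy config file fragment (other settings omitted).
# "static://200" makes oauth2-proxy answer authenticated requests with
# a static 200 instead of proxying to a real upstream, so the CUSTOM
# authorizer simply signals "allowed" back to the ingress gateway.
upstreams = [ "static://200" ]
```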

Turn off or remove x-envoy-peer-metadata

Is there a way to remove x-envoy-peer-metadata or restrict the data that goes into this header? Looks like this is default behavior at egress level and it has sensitive information related to k8s and other backend components.
(Updated, needed two files)
This is how it is possible to do it via Istio:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  labels:
    app.kubernetes.io/part-of: my-namespace
  name: my-namespace-google-remove-header
  namespace: my-namespace
spec:
  hosts:
  - www.google.com
  http:
  - route:
    - destination:
        host: www.google.com
    headers:
      request:
        remove:
        - x-forwarded-proto
        - x-envoy-decorator-operation
        - x-envoy-peer-metadata-id
        - x-envoy-peer-metadata
Then a ServiceEntry for the mesh is also needed.
Below you can change the port to 443 and the protocol to HTTPS if you need that instead:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: mesh-external-www-google-com
spec:
  hosts:
  - www.google.com
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: NONE
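The answer mentions switching the port to 443 and the protocol to HTTPS; a sketch of that variant (same resource, only the ports block changed):

```yaml
# HTTPS variant of the ServiceEntry above; only the ports differ.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: mesh-external-www-google-com
spec:
  hosts:
  - www.google.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: NONE
```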

nginx.ingress.kubernetes.io/rewrite-target doesn't work, Failed to load resource: the server responded with a status of 500

I have backend and frontend applications. I have tried to create one ingress for the frontend where both paths will be matched (host.com/api/v1/reference/list/test1 and host.com/api/v1/reference/test2). The second one works fine, but the first gives me the error: Failed to load resource: the server responded with a status of 500 (). Here is my ingress configuration:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: backend-app
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2$3
spec:
  tls:
  - hosts:
    - host.com
    secretName: tls-secret
  rules:
  - host: host.com
    http:
      paths:
      - backend:
          serviceName: service-backend
          servicePort: 80
        path: /api(/|$)(.*)
and the service:
apiVersion: v1
kind: Service
metadata:
  name: service-backend
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
Does anyone know why my URLs are not getting rewritten and the requests are not delivered to the backend service for host.com/api/v1/reference/test2?
Thanks in advance!
Resolved: it was a bug inside the application.

Health check problem with setting up GKE with istio-gateway

Goal
I'm trying to set up:
Cloud LB -> GKE [istio-gateway -> my-service]
This was working before; however, I had to recreate the cluster 2 days ago and ran into this problem. Maybe some version change?
This is my ingress manifest file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "my-dev-ingress"
  namespace: "istio-system"
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-dev-gclb-ip"
    ingress.gcp.kubernetes.io/pre-shared-cert: "my-dev-cluster-cert-05"
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: "istio-ingressgateway"
    servicePort: 80
Problem
The health check performed by the Cloud LB fails. The backend service created by the Ingress creates a default /:80 health check.
What I have tried
1) I tried to point the health check generated by the GKE ingress to the istio-gateway status port 15020 in the BackendConfig console. The health check then passed for a bit, until the backend config reverted itself to the original /:80 health check that it created. I even tried to delete the health check it created, and it just created another one.
2) I also tried using an Istio virtual service to route the health check to port 15020, as shown here, without much success.
3) I also tried to just route everything in the virtual service to the health check port:
hosts:
- "*"
gateways:
- my-web-gateway
http:
- match:
  - method:
      exact: GET
    uri:
      exact: /
  route:
  - destination:
      host: istio-ingress.gke-system.svc.cluster.local
      port:
        number: 15020
4) Most of the search results I found say that setting a readinessProbe in the deployment should tell the ingress to set the proper health check. However, all of my services are behind the istio-gateway, so I can't really do the same.
I'm very lost right now and will really appreciate it if anyone could point me in the right direction. Thanks.
I got it working with GKE 1.20.4-gke.2200 and Istio 1.9.2. The documentation for this is nonexistent, or I have not found anything. You have to add an annotation to the istio-ingressgateway service to use a backend config when using the "istioctl install -f values.yaml" command:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          cloud.google.com/backend-config: '{"default": "istio-ingressgateway-config"}'
Then you have to create the backend config with the correct health check port:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: istio-ingressgateway-config
  namespace: istio-system
spec:
  healthCheck:
    checkIntervalSec: 30
    port: 15021
    type: HTTP
    requestPath: /healthz/ready
With this, the ingress should automatically change the load balancer health check configuration, while the ingress itself keeps pointing to Istio port 80:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.global-static-ip-name: web
    networking.gke.io/managed-certificates: "web"
spec:
  rules:
  - host: test.example.com
    http:
      paths:
      - path: "/*"
        pathType: Prefix
        backend:
          service:
            name: istio-ingressgateway
            port:
              number: 80
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: direct-web
  namespace: istio-system
spec:
  hosts:
  - test.example.com
  gateways:
  - web
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        port:
          number: 8080 # internal service port
        host: "internal-service.service-namespace.svc.cluster.local"
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - test.example.com
You could also set hosts to "*" in the VirtualService and Gateway.

create VirtualService for kiali, tracing, grafana

I am trying to expose Kiali on my default gateway. I have other services working for apps in the default namespace, but have not been able to route traffic to anything in the Istio namespace:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - '*'
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - '*'
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kiali
  namespace: default
spec:
  hosts:
  - kiali.dev.example.com
  gateways:
  - gateway
  http:
  - route:
    - destination:
        host: kiali.istio-system.svc.cluster.local
        port:
          number: 20001
The problem was that I had mTLS enabled, and Kiali does not have a sidecar, so it cannot be validated by mTLS. The solution was to add a destination rule disabling mTLS for it:
apiVersion: 'networking.istio.io/v1alpha3'
kind: DestinationRule
metadata:
  name: kiali
  namespace: istio-system
spec:
  host: kiali.istio-system.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
You should define an ingress gateway and make sure that the hosts in the gateway match the hosts in the virtual service. Also specify the port of the destination. See the Control Ingress Traffic task.
For me this worked!
I ran
istioctl proxy-config routes istio-ingressgateway-866d7949c6-68tt4 -n istio-system -o json > ./routes.json
to get a dump of all the routes. The Kiali route had become corrupted for some reason. I deleted the virtual service and created it again; that fixed it:
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kiali
  namespace: istio-system
spec:
  gateways:
  - istio-system/my-gateway
  hosts:
  - 'mydomain.com'
  http:
  - match:
    - uri:
        prefix: /kiali/
    route:
    - destination:
        host: kiali.istio-system.svc.cluster.local
        port:
          number: 20001
      weight: 100
---
apiVersion: 'networking.istio.io/v1alpha3'
kind: DestinationRule
metadata:
  name: kiali
  namespace: istio-system
spec:
  host: kiali.istio-system.svc.cluster.local
  trafficPolicy:
    tls:
      mode: SIMPLE
---
Note: hosts needed to be set; '*' didn't work for some reason.