Wrong base URL when deploying Keycloak on AWS EKS

I have a problem deploying the Keycloak server on AWS EKS.
Here is my configuration:
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-keycloak
  template:
    metadata:
      labels:
        app: my-keycloak
    spec:
      containers:
        - name: my-keycloak
          image: jboss/keycloak
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
              name: http
            - containerPort: 8443
              name: https
          env:
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-keycloak
spec:
  selector:
    app: my-keycloak
  ports:
    - port: 8080
      targetPort: 8080
      name: http
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: my-keycloak
  name: my-keycloak-ingress
spec:
  rules:
    - host: mykeycloak.com
      http:
        paths:
          - backend:
              serviceName: my-keycloak
              servicePort: 8080
But the base URL always gets set to the wrong value and does not work. What I want is for the base URL to be https://mykeycloak.com/* (with https and without the port number).
[screenshot: current-deployment]
Many people said that the solution is to set PROXY_ADDRESS_FORWARDING to true, but it does not work for me. Is there something I am missing?
Thanks for your help.

You need to add the path /*:
- host: mykeycloak.com
  http:
    paths:
      - path: /*
        backend:
          serviceName: my-keycloak
          servicePort: 8080
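For the https:// part of the question: with the AWS ALB ingress controller, TLS is usually terminated at the load balancer and plain HTTP is redirected to HTTPS via annotations; PROXY_ADDRESS_FORWARDING only makes Keycloak trust the forwarded proto and host headers. A minimal sketch, assuming the ALB ingress controller and an ACM certificate (the certificate ARN is a placeholder; the same annotations appear in the related question below):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-keycloak-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Terminate TLS on the ALB, listen on both ports, redirect 80 -> 443
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: <your-acm-certificate-arn>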

Related

Service can't be registered in Target groups

I'm new to Kubernetes and AWS, so there are a lot of concepts I may not understand. I hope you can help me with this problem I am having.
I have three services, frontend, backend and auth, each with its corresponding NodePort service, and an Ingress that maps one host to each service. Everything runs on EKS, and for the Ingress deployment I am using the AWS ALB ingress controller. Once everything is deployed, the frontend and auth targets register correctly in the target groups, but backend stays in an unhealthy state. I thought it could be a port problem, but if you look at auth and backend they have almost the same deployment definition, and both are APIs built with .NET Core. One thing to note is that I can run kubectl port-forward <backend-pod> 80:80 and it works without problems. When I run kubectl describe ingress I get this:
Name:             ingress
Labels:           app.kubernetes.io/managed-by=Helm
Namespace:        default
Address:          xxxxxxxxxxxxxxxxxxxxxxxxxxx.xxxxx.elb.amazonaws.com
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host             Path  Backends
  ----             ----  --------
  domain.com
                   /     front-service:default-port (10.0.1.183:80,10.0.2.98:80)
  back.domain.com
                   /     backend-service:default-port (<none>)
  auth.domain.com
                   /     auth-service:default-port (10.0.1.30:80,10.0.1.33:80)
Annotations:
  alb.ingress.kubernetes.io/listen-ports: [{"HTTPS":443}, {"HTTP":80}]
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/ssl-redirect: 443
  kubernetes.io/ingress.class: alb
Events:
  Type    Reason                  Age                   From     Message
  ----    ------                  ----                  ----     -------
  Normal  SuccessfullyReconciled  8m20s (x15 over 41h)  ingress  Successfully reconciled
Frontend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front
  labels:
    name: front
spec:
  replicas: 2
  selector:
    matchLabels:
      name: front
  template:
    metadata:
      labels:
        name: front
    spec:
      containers:
        - name: frontend
          image: {{ .Values.image }}
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: wrfront-{{ .Values.namespace }}-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      name: default-port
      protocol: TCP
  selector:
    name: front
---
Auth
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-wrauth-keys
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
---
apiVersion: "v1"
kind: "ConfigMap"
metadata:
  name: "auth-config-ocpm"
  labels:
    app: "auth"
data:
  ASPNETCORE_URL: "http://+:80"
  ASPNETCORE_ENVIRONMENT: "Development"
  ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS: "true"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "auth"
  labels:
    app: "auth"
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: "auth"
  template:
    metadata:
      labels:
        app: "auth"
    spec:
      volumes:
        - name: auth-keys-storage
          persistentVolumeClaim:
            claimName: pvc-wrauth-keys
      containers:
        - name: "api-auth"
          image: {{ .Values.image }}
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          volumeMounts:
            - name: auth-keys-storage
              mountPath: "/app/auth-keys"
          env:
            - name: "ASPNETCORE_URL"
              valueFrom:
                configMapKeyRef:
                  key: "ASPNETCORE_URL"
                  name: "auth-config-ocpm"
            - name: "ASPNETCORE_ENVIRONMENT"
              valueFrom:
                configMapKeyRef:
                  key: "ASPNETCORE_ENVIRONMENT"
                  name: "auth-config-ocpm"
            - name: "ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS"
              valueFrom:
                configMapKeyRef:
                  key: "ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS"
                  name: "auth-config-ocpm"
---
apiVersion: v1
kind: Service
metadata:
  name: auth-service
spec:
  type: NodePort
  selector:
    app: auth
  ports:
    - name: default-port
      protocol: TCP
      port: 80
      targetPort: 80
Backend (Service with problem)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: {{ .Values.image }}
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: NodePort
  selector:
    name: backend
  ports:
    - name: default-port
      protocol: TCP
      port: 80
      targetPort: 80
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
    # SSL Settings
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: {{ .Values.certificate }}
spec:
  rules:
    - host: {{ .Values.host }}
      http:
        paths:
          - path: /
            backend:
              service:
                name: front-service
                port:
                  name: default-port
            pathType: Prefix
    - host: back.{{ .Values.host }}
      http:
        paths:
          - path: /
            backend:
              service:
                name: backend-service
                port:
                  name: default-port
            pathType: Prefix
    - host: auth.{{ .Values.host }}
      http:
        paths:
          - path: /
            backend:
              service:
                name: auth-service
                port:
                  name: default-port
            pathType: Prefix
I've tried deploying other services and they work correctly. I've also tried running only backend, or only another service, but the same thing always happens, and always with the backend.
What could be happening? Is it a configuration problem? Some error in the Ingress or Deployment? Or is it just the backend service?
I would be very grateful for any help.
domain.com
                 /     front-service:default-port (10.0.1.183:80,10.0.2.98:80)
back.domain.com
                 /     backend-service:default-port (<none>)
auth.domain.com
                 /     auth-service:default-port (10.0.1.30:80,10.0.1.33:80)
This output is saying that your backend service is not registered to the Ingress.
One thing to remember is that the Ingress registers Services by their pods' IPs, like the "10.0.1.30:80" in your Ingress output, not by NodePort. And according to the docs, I don't know why you can have multiple NodePort services with the same port. But when you do port-forward, you actually open that port on all your instances (I assume you have 2), and then the ALB health check hits that port and returns healthy.
But I think your issue is that your Ingress cannot locate any endpoints for your backend service.
My suggestions are:
Try with only backend-service, with its port changed, and maybe without the auth and frontend services. The default NodePort range is 30000-32767.
Go inside that pod, or create a new pod, and make a request to the service using its URL to check what it returns. By default, the ALB only accepts status 200 from the homepage.
A quick way to test the missing-endpoints theory is sketched below.
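As a generic Kubernetes check (not part of the original answer): if a Service's selector does not match the pod labels, its endpoints list stays empty and the ALB has nothing to register.

# An empty ENDPOINTS column here means the selector matches no pods
kubectl get endpoints backend-service

# Compare the selector the Service uses with the labels the pods actually carry
kubectl get svc backend-service -o jsonpath='{.spec.selector}'
kubectl get pods --show-labels | grep backend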

Istio traffic routing rules have no effect

I am trying to configure request routing using Istio and ingress-nginx, but I'm not able to route the requests properly. Basically I have two deployments, each as a different subset, and I implemented a weighted VirtualService.
The Kiali dashboard shows the requests being routed from the ingress controller to PassthroughCluster, even though I can see the correct route mapping using the istioctl proxy-config routes command.
Here is the complete configuration:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dummy-app
  namespace: my-namespace
---
apiVersion: v1
kind: Service
metadata:
  name: dummy-app
  namespace: my-namespace
  labels:
    app: dummy-app
    service: dummy-app
spec:
  ports:
    - port: 8080
      targetPort: 8080
      name: http-web
  selector:
    app: dummy-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy-app-1
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dummy-app
      version: v1
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: dummy-app
        version: v1
    spec:
      serviceAccountName: dummy-app
      containers:
        - image: my-img
          imagePullPolicy: IfNotPresent
          name: dummy-app
          env:
            - name: X_HTTP_ENV
              value: dummy-app-1
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy-app-2
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dummy-app
      version: v2
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: dummy-app
        version: v2
    spec:
      serviceAccountName: dummy-app
      containers:
        - image: my-img
          imagePullPolicy: IfNotPresent
          name: dummy-app
          env:
            - name: X_HTTP_ENV
              value: dummy-app-2
          ports:
            - containerPort: 8080
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: dummy-app
  namespace: my-namespace
spec:
  host: dummy-app
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dummy-app
  namespace: my-namespace
spec:
  hosts:
    - dummy-app.my-namespace.svc.cluster.local
  http:
    - match:
        - uri:
            prefix: "/my-route"
      route:
        - destination:
            host: dummy-app.my-namespace.svc.cluster.local
            subset: v1
          weight: 0
        - destination:
            host: dummy-app.my-namespace.svc.cluster.local
            subset: v2
          weight: 100
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "my-ingress-class"
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/upstream-vhost: dummy-app.my-namespace.svc.cluster.local
  name: dummy-ingress
  namespace: my-namespace
spec:
  rules:
    - host: myapp.com
      http:
        paths:
          - backend:
              service:
                name: dummy-app
                port:
                  number: 8080
            path: /my-route(.*)
            pathType: ImplementationSpecific
The weird thing is that I have other applications working in the same namespace and using the same ingress controller, so I don't suspect a problem with the ingress-nginx setup itself.
Istio version:
client version: 1.11.4
control plane version: 1.11.4
data plane version: 1.11.4 (13 proxies), 1.12-dev (15 proxies)
Any ideas on what the configuration problem is, or how I can better debug this kind of issue in Istio?
The main issue seems to be with the ingress-nginx resource. Based on the above Ingress definition, you are bypassing the Istio ingress gateway (where all the proxying rules, like Gateway, VirtualService and DestinationRule, are implemented) and pushing the traffic directly from the Ingress to the application service. For the Istio proxy rules to take effect, you should let the traffic pass through istio-ingressgateway (a service in the istio-system namespace). So the following change should be made to your Ingress resource:
rules:
  - host: myapp.com
    http:
      paths:
        - backend:
            service:
              name: istio-ingressgateway.istio-system
              port:
                number: 80
          path: /my-route(.*)
          pathType: ImplementationSpecific
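After making that change, you can verify that requests actually enter the mesh through the gateway rather than hitting the pods directly. These are generic checks, not part of the original answer, and the pod lookup assumes the default istio=ingressgateway label:

# Follow the gateway's access logs while sending a test request
kubectl logs -n istio-system -l istio=ingressgateway -f

# Dump the routes the gateway proxy knows about
istioctl proxy-config routes -n istio-system \
  $(kubectl get pod -n istio-system -l istio=ingressgateway -o jsonpath='{.items[0].metadata.name}')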

How to configure NGINX in AWS

I have set up a k8s cluster on AWS EC2 instances with 1 master and 2 worker nodes using kops.
I deployed 2 services, and they are reachable in the browser when run with the LoadBalancer service type.
Now I installed NGINX, but through the LB IP I am not able to hit my service; it gives a 504 Gateway Timeout error. I googled it but had no success. Where am I going wrong? Here is my sample code... [AWS free account]
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: $APP_NAME
  name: $APP_NAME
  namespace: $NAMESPACE
spec:
  replicas: 1
  selector:
    matchLabels:
      app: $APP_NAME
  template:
    metadata:
      labels:
        app: $APP_NAME
    spec:
      imagePullSecrets:
        - name: $IMG_PULL_SECRET
      containers:
        - image: $IMAGE_REG/$APP_NAME:$IMAGE_TAG
          name: $APP_NAME
          imagePullPolicy: Always
          ports:
            - containerPort: ${CONTAINER_PORT}
              protocol: TCP
          env:
            - name: spring.cloud.config.uri
              value: 'http://config-server-service'
service.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: $APP_NAME
  name: $APP_NAME
  namespace: $NAMESPACE
spec:
  type: $SERVICE_TYPE
  #type: $SERVICE_TYPE
  ports:
    - port: 80
      targetPort: ${CONTAINER_PORT}
      protocol: TCP
  selector:
    app: $APP_NAME
ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ${APP_NAME}
  namespace: $NAMESPACE
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-name: JSESSIONID
    nginx.ingress.kubernetes.io/ssl-passthrough: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.allow-http: "true"
    # kubernetes.io/ingress.global-static-ip-name: "my-gateway"
spec:
  rules:
    - http:
        paths:
          - path: /${APP_NAME}
            backend:
              serviceName: ${APP_NAME}
              servicePort: 80

istio route rule error: "json: cannot unmarshal string into Go value"

I'm new to Istio and I'm going through some use cases with my simple app.
I deployed 2 simple services on minikube, running on Windows 10 Pro with VirtualBox 5.2.6 and Istio 0.6.0:
version v1 of service1, and v1 & v2 of service2.
service1 responds to /hello and service2 responds to /world. Everything is working fine so far: both services are responding, and for service2 the round-robin is working.
Now I want to apply 2 route rules: one that routes to v2 of service2 based on a header, and one that routes the rest to v1 of service2. But when I try to do that I get an error:
Error: cannot parse proto message: YAML decoding error: destination:
  name: service2
match:
  request:
    headers:
      Foo: bar
precedence: 2
route:
- labels:
    version: v2
json: cannot unmarshal string into Go value of type map[string]json.RawMessage
Please find below my app and route rule configs.
What's wrong with this config?
Note that when I omit the "match" part it is OK, but of course that is not what I want.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: service2-route
spec:
  destination:
    name: service2
  precedence: 2
  match:
    request:
      headers:
        Foo: bar
  route:
    - labels:
        version: v2
---
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: service2-default
spec:
  destination:
    name: service2
  precedence: 1
  route:
    - labels:
        version: v1
      weight: 100
and my services' deployment yaml:
###########################################################################
# Service 1
###########################################################################
apiVersion: v1
kind: Service
metadata:
  name: service1
  labels:
    app: service1
spec:
  ports:
    - port: 8080
      name: http
  selector:
    app: service1
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: service1-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: service1
        version: v1
    spec:
      containers:
        - name: service1
          image: myrepo/sampleapp-service1:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
---
###########################################################################
# Service 2
###########################################################################
apiVersion: v1
kind: Service
metadata:
  name: service2
  labels:
    app: service2
spec:
  ports:
    - port: 8081
      name: http
  selector:
    app: service2
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: service2-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: service2
        version: v1
    spec:
      containers:
        - name: service2
          image: myrepo/sampleapp-service2:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8081
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: service2-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: service2
        version: v2
    spec:
      containers:
        - name: service2
          image: myrepo/sampleapp-service2:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8081
---
###########################################################################
# Ingress resource (gateway)
###########################################################################
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: service1
              servicePort: 8080
          - path: /world
            backend:
              serviceName: service2
              servicePort: 8081
---
The problem here is pretty simple: you have to specify how you want to match your header. In your example I assume you want an exact match, so the syntax is the following:
match:
  request:
    headers:
      Foo:
        exact: bar
Here you can find more available options.
Also, I would recommend using quotes if your header value contains any special characters.
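For reference, the same StringMatch field used above also supports prefix and regex matching; a small sketch in the same RouteRule syntax (adapt to your Istio version):

match:
  request:
    headers:
      Foo:
        prefix: ba       # any value starting with "ba"

match:
  request:
    headers:
      Foo:
        regex: "ba.*"    # value matched against a regular expression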

istio: ingress with grpc and http

I have a service listening on two ports: one is HTTP, the other is gRPC.
I would like to set up an ingress that can route to both of these ports, with the same host.
The load balancer would redirect to the HTTP port if http/1.1 is used, and to the gRPC port if h2 is used.
Is there a way to do that with Istio?
I made a hello world demonstrating what I am trying to achieve:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        alpha.istio.io/sidecar: injected
        pod.beta.kubernetes.io/init-containers: '[{"args":["-p","15001","-u","1337","-i","172.20.0.0/16"],"image":"docker.io/istio/init:0.1","imagePullPolicy":"Always","name":"init","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}}]'
      labels:
        app: hello-world
    spec:
      containers:
        - name: grpc-server
          image: aguilbau/hello-world-grpc:latest
          ports:
            - name: grpc
              containerPort: 50051
        - name: http-server
          image: nginx:1.7.9
          ports:
            - name: http
              containerPort: 80
        - name: istio-proxy
          args:
            - proxy
            - sidecar
            - -v
            - "2"
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          image: docker.io/istio/proxy:0.1
          imagePullPolicy: Always
          resources: {}
          securityContext:
            runAsUser: 1337
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: dev
spec:
  ports:
    - name: grpc
      port: 50051
    - name: http
      port: 80
  selector:
    app: hello-world
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-http
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
    - host: hello-world
      http:
        paths:
          - backend:
              serviceName: hello-world
              servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-grpc
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
    - host: hello-world
      http:
        paths:
          - backend:
              serviceName: hello-world
              servicePort: 50051
---
I'm a bit late to the party, but for those of you stumbling on this post, I think you can do this with very little difficulty. I'm going to assume you have istio installed on a kubernetes cluster and are happy using the default istio-ingressgateway:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        alpha.istio.io/sidecar: injected
        pod.beta.kubernetes.io/init-containers: '[{"args":["-p","15001","-u","1337","-i","172.20.0.0/16"],"image":"docker.io/istio/init:0.1","imagePullPolicy":"Always","name":"init","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}}]'
      labels:
        app: hello-world
    spec:
      containers:
        - name: grpc-server
          image: aguilbau/hello-world-grpc:latest
          ports:
            - name: grpc
              containerPort: 50051
        - name: http-server
          image: nginx:1.7.9
          ports:
            - name: http
              containerPort: 80
        - name: istio-proxy
          args:
            - proxy
            - sidecar
            - -v
            - "2"
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          image: docker.io/istio/proxy:0.1
          imagePullPolicy: Always
          resources: {}
          securityContext:
            runAsUser: 1337
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: dev
spec:
  ports:
    - name: grpc
      port: 50051
    - name: http
      port: 80
  selector:
    app: hello-world
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-world-istio-gate
  namespace: dev
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
    - port:
        number: 50051
        name: grpc
        protocol: GRPC
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-world-istio-vsvc
  namespace: dev
spec:
  hosts:
    - "*"
  gateways:
    - hello-world-istio-gate
  http:
    - match:
        - port: 80
      route:
        - destination:
            host: hello-world
            port:
              number: 80
  tcp:
    - match:
        - port: 50051
      route:
        - destination:
            host: hello-world
            port:
              number: 50051
The above configuration omits your two Ingresses, and instead includes:
Your deployment
Your service
An istio gateway
An istio virtualservice
There is an important extra piece not shown, and I alluded to it earlier when talking about using the default ingressgateway. The following line, found in "hello-world-istio-gate", gives a clue:
istio: ingressgateway
This refers to a pod in the 'istio-system' namespace that is usually installed by default called 'istio-ingressgateway' - and this pod is exposed by a service also called 'istio-ingressgateway.' You will need to open up ports on the 'istio-ingressgateway' service.
As an example, I edited my (default) ingressgateway and added a port opening for HTTP and GRPC. The result is the following (edited for length) yaml code:
dampersand@kubetest1:~/k8s$ kubectl get service istio-ingressgateway -n istio-system -o yaml
apiVersion: v1
kind: Service
metadata:
  <omitted for length>
  labels:
    app: istio-ingressgateway
    chart: gateways-1.0.3
    heritage: Tiller
    istio: ingressgateway
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
spec:
  <omitted for length>
  ports:
  - name: http2
    nodePort: 31380
    port: 80
    protocol: TCP
    targetPort: 80
  <omitted for length>
  - name: grpc
    nodePort: 30000
    port: 50051
    protocol: TCP
    targetPort: 50051
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  type: NodePort
The easiest way to make the above change (for testing purposes) is to use:
kubectl edit svc -n istio-system istio-ingressgateway
For production purposes, it's probably better to edit your helm chart or your istio.yaml file or whatever you initially used to set up the ingressgateway.
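If you'd rather not hand-edit the service interactively, the same port can be appended with a one-line JSON patch; a sketch using the same names and ports as the example above:

kubectl patch service istio-ingressgateway -n istio-system --type=json \
  -p='[{"op": "add", "path": "/spec/ports/-", "value": {"name": "grpc", "port": 50051, "targetPort": 50051, "nodePort": 30000, "protocol": "TCP"}}]'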
As a quick aside, note that my test cluster has istio-ingressgateway set up as a NodePort, so what the above yaml file says is that my cluster is port forwarding 31380 -> 80 and 30000 -> 50051. You may (probably) have istio-ingressgateway set up as a LoadBalancer, which will be different... so plan accordingly.
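With the NodePort mapping above, a quick smoke test could look like this (<node-ip> is a placeholder for any of your nodes' addresses, and grpcurl is an optional third-party tool):

# HTTP side, through the gateway's node port
curl -H "Host: hello-world" http://<node-ip>:31380/

# gRPC side, if you have grpcurl installed and the server supports reflection
grpcurl -plaintext <node-ip>:30000 list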
Finally, the following blog post is some REALLY excellent background reading for the tools I've outlined in this post! https://blog.jayway.com/2018/10/22/understanding-istio-ingress-gateway-in-kubernetes/
You may be able to do something like that if you move the grpc-server and http-server containers into different pods with unique labels (i.e., different versions of the service, so to speak) and then, using Istio route rules behind the Ingress, split the traffic. A rule with a match for the header Upgrade: h2 could send traffic to the grpc version, and a default rule would send the rest of the traffic to the http one. A sketch of that idea follows.
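A minimal sketch in the older RouteRule syntax used elsewhere on this page, assuming the two pods are labeled version: grpc and version: http (both labels and rule names are hypothetical):

apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: hello-world-grpc
spec:
  destination:
    name: hello-world
  precedence: 2
  match:
    request:
      headers:
        Upgrade:
          exact: h2
  route:
    - labels:
        version: grpc
---
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: hello-world-default
spec:
  destination:
    name: hello-world
  precedence: 1
  route:
    - labels:
        version: http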