Istio virtual service not working for ports other than 80 and 443

I am trying to set up an Istio virtual service along with a gateway, but somehow it's not working.
Here is my gateway configuration:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: nginx-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      name: admin
      number: 9999
      protocol: HTTP
    hosts:
    - nginx.example.com
Virtual service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx-virtualservice
spec:
  gateways:
  - nginx-gateway
  hosts:
  - nginx.example.com
  http:
  - match:
    - port: 9999
    route:
    - destination:
        host: nginx
        port:
          number: 80
However, it only works for ports 80 and 443, not for other ports.

Most likely, when you created the istio-ingressgateway, you used the default values, so only ports 80 and 443 are opened.
Look at the ingressgateway Service/Deployment and open port 9999; you can add it via the operator.
Right now the Gateway is listening on port 9999, but the traffic probably can't go through it.
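For example, with the operator that could look like this (a minimal sketch, assuming a default istioctl/IstioOperator installation; note that overriding the port list replaces the defaults, so ports 80 and 443 must be listed again, and the targetPort values can differ between Istio versions):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          ports:
          # re-list the defaults, since this overrides the whole port list
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
          # the extra port used by the Gateway above
          - name: admin
            port: 9999
            targetPort: 9999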

Related

AWS/Kubernetes - Stickiness options not available for TCP protocols

Why are the default load balancer ports 80 and 443 considered TCP ports? I want to test stickiness as shown in the AWS docs, either through a YAML file or through the AWS console.
I was using the nginx ingress and moved to the default load balancer to test stickiness, but I see the error Stickiness options not available for TCP protocols.
I even tried specifying protocol https, but it isn't accepted; only "SCTP", "TCP" and "UDP" are allowed.
apiVersion: v1
kind: Service
metadata:
  name: httpd
  labels:
    app: httpd-service
  namespace: test-web-dev
spec:
  #type: LoadBalancer
  selector:
    app: httpd
  ports:
  - name: port-80
    port: 80
    targetPort: 80
  - name: port-443
    port: 443
    targetPort: 443
  - name: port-1234
    port: 1234
    protocol: TCP
    targetPort: 1234
When I try the ingress, I disable the LoadBalancer service type above.
nginx-ingress-lb-service.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "1234": "test-web-dev/httpd:1234"
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  - name: port-1234
    port: 1234
    protocol: TCP
    targetPort: 1234
---
Stickiness requires a listener operating at layer 7 of the OSI model, which in the case of a CLB is provided by the HTTP and HTTPS listeners.
Since you are using a TCP listener, which operates at layer 4, stickiness is not supported. Thus, if you want to use sticky sessions, you must change to HTTP or HTTPS listeners.
UDP and SCTP are invalid listeners for a CLB. It only supports TCP, HTTP, HTTPS and SSL.
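For example (a minimal sketch, assuming the in-tree AWS cloud provider and a Classic Load Balancer), the backend-protocol annotation makes the cloud provider create HTTP listeners instead of TCP, after which stickiness becomes available:
apiVersion: v1
kind: Service
metadata:
  name: httpd
  namespace: test-web-dev
  annotations:
    # create HTTP (layer 7) listeners on the CLB instead of TCP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: httpd
  ports:
  - name: port-80
    port: 80
    targetPort: 80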

istio create external ip for specific service

I've successfully deployed an app to K8s with Istio.
We have a gateway which we use, and a virtual service like the following:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bher-virtualservice
  namespace: ba-trail
spec:
  gateways:
  - bher-gateway
  hosts:
  - trialio.cloud.str
  http:
  - match:
    - uri:
        prefix: "/"
    - uri:
        prefix: "/login"
    - uri:
        prefix: "/static"
    - uri:
        regex: '^.*\.(ico|png|jpg)$'
    route:
    - destination:
        host: bsa.ba-trail.svc.cluster.local # service.namespace.svc.cluster.local
        port:
          number: 5000
I also defined a Service and a Deployment.
I want to expose the service externally so that I can access it like:
https://myapp.host:5000
When I run:
kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP                                          PORT(S)                                       AGE
istio-ingressgateway   LoadBalancer   100.61.114.202   a7151b2063cb-200880.eu-central-1.elb.amazonaws.com   150210:31161/TCP,80:31280/TCP,443:31190/TCP   41d
How can it be done?
I was able to run the app with port forwarding, but I want a direct external link.
So in your case, you have an ELB serving your istio ingress gateway that goes to a VirtualService that directs traffic to port 5000 in the container.
I assume that you have it working with 🤔💭:
a7151b2063cb-200880.eu-central-1.elb.amazonaws.com:80 and
a7151b2063cb-200880.eu-central-1.elb.amazonaws.com:443 ❓
and you want something like:
a7151b2063cb-200880.eu-central-1.elb.amazonaws.com:5000 ❓
but with a specific name that maps to
myapp.host ❓
First, you have to create a DNS CNAME record that maps myapp.host to a7151b2063cb-200880.eu-central-1.elb.amazonaws.com.
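If you happen to run ExternalDNS in the cluster (an assumption; otherwise create the CNAME directly in your DNS provider), the record can be managed from the Service itself, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    # ExternalDNS creates a DNS record for myapp.host pointing at the ELB
    external-dns.alpha.kubernetes.io/hostname: myapp.host
spec:
  type: LoadBalancer
  ...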
Then on the Kubernetes service istio-ingressgateway you probably have something like this:
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  labels:
    name: istio-ingress-service
  annotations:
    ... (❓)
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: ❓
    protocol: TCP
  - port: 443
    targetPort: ❓
    protocol: TCP
  selector:
    name: something-that-matches-your-istio-ingress
You could just add the extra port to the service so that it listens on that port on the outside.
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  labels:
    name: istio-ingress-service
  annotations:
    ... (❓)
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: ❓
    protocol: TCP
  - port: 443
    targetPort: ❓
    protocol: TCP
  - port: 5000
    targetPort: ❓
  selector:
    name: something-that-matches-your-istio-ingress
Finally, the virtual service needs to match your hostname myapp.host:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bher-virtualservice
  namespace: ba-trail
spec:
  gateways:
  - bher-gateway
  hosts:
  - myapp.host
  ...
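One thing to note: the Gateway bound here (bher-gateway) also needs a server entry for port 5000, otherwise the ingress gateway has no listener for that port. A minimal sketch (the port name is a hypothetical choice):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bher-gateway
  namespace: ba-trail
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 5000
      name: http-myapp # hypothetical port name
      protocol: HTTP
    hosts:
    - myapp.host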
✌️

Health check problem with setting up GKE with istio-gateway

Goal
I'm trying to set up:
Cloud LB -> GKE [istio-gateway -> my-service]
This was working before; however, I had to recreate the cluster 2 days ago and ran into this problem. Maybe some version change?
This is my Ingress manifest file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "my-dev-ingress"
  namespace: "istio-system"
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-dev-gclb-ip"
    ingress.gcp.kubernetes.io/pre-shared-cert: "my-dev-cluster-cert-05"
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: "istio-ingressgateway"
    servicePort: 80
Problem
The health check issued by the Cloud LB fails. The backend service created by the Ingress gets a default health check on /:80.
What I have tried
1) I tried to set the health check generated by the GKE Ingress to point to the istio-gateway status port 15020 in the backend config console. The health check then passed for a bit, until the backend config reverted itself to the original /:80 health check it had created. I even tried to delete the health check it created, and it just created another one.
2) I also tried using an Istio virtual service to route the health check to port 15020, as shown here, without much success.
3) I also tried just routing everything in the virtual service to the health check port:
hosts:
- "*"
gateways:
- my-web-gateway
http:
- match:
  - method:
      exact: GET
    uri:
      exact: /
  route:
  - destination:
      host: istio-ingress.gke-system.svc.cluster.local
      port:
        number: 15020
4) Most of the search results I found say that setting a readinessProbe in the deployment should tell the Ingress to set the proper health check. However, all of my services are behind the istio-gateway, so I can't really do the same.
I'm very lost right now and would really appreciate it if anyone could point me in the right direction. Thanks.
I got it working with GKE 1.20.4-gke.2200 and Istio 1.9.2. The documentation for this is non-existent, or I have not found it. You have to add an annotation to the istio-ingressgateway service so that it uses a backend-config, when installing with the "istioctl install -f values.yaml" command:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          cloud.google.com/backend-config: '{"default": "istio-ingressgateway-config"}'
Then you have to create the BackendConfig with the correct health-check port:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: istio-ingressgateway-config
  namespace: istio-system
spec:
  healthCheck:
    checkIntervalSec: 30
    port: 15021
    type: HTTP
    requestPath: /healthz/ready
With this, the Ingress should automatically update the load balancer's health-check configuration, while the Ingress itself keeps pointing at Istio's port 80:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.global-static-ip-name: web
    networking.gke.io/managed-certificates: "web"
spec:
  rules:
  - host: test.example.com
    http:
      paths:
      - path: "/*"
        pathType: Prefix
        backend:
          service:
            name: istio-ingressgateway
            port:
              number: 80
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: direct-web
  namespace: istio-system
spec:
  hosts:
  - test.example.com
  gateways:
  - web
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        port:
          number: 8080 # internal service port
        host: "internal-service.service-namespace.svc.cluster.local"
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - test.example.com
You could also set hosts to "*" in the VirtualService and Gateway.

create VirtualService for kiali, tracing, grafana

I am trying to expose Kiali on my default gateway. I have other services working for apps in the default namespace, but I have not been able to route traffic to anything in the istio-system namespace:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - '*'
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - '*'
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kiali
  namespace: default
spec:
  hosts:
  - kiali.dev.example.com
  gateways:
  - gateway
  http:
  - route:
    - destination:
        host: kiali.istio-system.svc.cluster.local
        port:
          number: 20001
The problem was that I had mTLS enabled, and Kiali does not have a sidecar, so it cannot be validated by mTLS. The solution was to add a DestinationRule disabling mTLS for it:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: kiali
  namespace: istio-system
spec:
  host: kiali.istio-system.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
You should define an ingress gateway and make sure that the hosts in the gateway match the hosts in the virtual service. Also specify the port of the destination. See the Control Ingress Traffic task.
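For example, a minimal sketch (hostnames are placeholders) where the hosts line up between the two resources and the destination port is explicit:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kiali-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - kiali.dev.example.com # must match the VirtualService hosts below
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kiali
spec:
  hosts:
  - kiali.dev.example.com # same host as in the Gateway
  gateways:
  - kiali-gateway
  http:
  - route:
    - destination:
        host: kiali.istio-system.svc.cluster.local
        port:
          number: 20001 # destination port specified explicitly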
For me this worked!
I ran
istioctl proxy-config routes istio-ingressgateway-866d7949c6-68tt4 -n istio-system -o json > ./routes.json
to get a dump of all the routes. The kiali route had got corrupted for some reason. I deleted the virtual service and created it again; that fixed it.
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kiali
  namespace: istio-system
spec:
  gateways:
  - istio-system/my-gateway
  hosts:
  - 'mydomain.com'
  http:
  - match:
    - uri:
        prefix: /kiali/
    route:
    - destination:
        host: kiali.istio-system.svc.cluster.local
        port:
          number: 20001
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: kiali
  namespace: istio-system
spec:
  host: kiali.istio-system.svc.cluster.local
  trafficPolicy:
    tls:
      mode: SIMPLE
---
Note: hosts needed to be set; '*' didn't work for some reason.

How to let istio resolve self defined hosts

Scenario:
I have 2 clusters, A and B, both with Istio installed. I want to expose service-1 in cluster A as service-1.suffix, and let service-2 in cluster B access service-1 via service-1.suffix. The following picture illustrates the idea.
In cluster A, I define a VirtualService and a Gateway to route the requests to service-1.
Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: service-1
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "service-1.suffix"
VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-1
spec:
  hosts:
  - service-1.default.svc.cluster.local
  - "service-1.suffix"
  gateways:
  - service-1
  - mesh
  http:
  - route:
    - destination:
        host: service-1.default.svc.cluster.local
        port:
          number: 8080
This is working fine, as I can use curl to access it successfully:
curl -I -HHost:service-1.suffix http://cluster_A_proxy:31380
The next step is creating a ServiceEntry and a VirtualService in cluster B. Here are my definition files:
ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: service-1
spec:
  hosts:
  - "service-1.suffix" # the global suffix mcm.com could be defined in mcm.
  #addresses:
  #- xxx/32
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  location: MESH_EXTERNAL
  endpoints:
  - address: 1.1.1.1 # the cluster A proxy IP
    ports:
      http: 31380
VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-1
spec:
  hosts:
  - "service-1.suffix"
  http:
  - route:
    - destination:
        host: "service-1.suffix"
        port:
          number: 80
In cluster B, when I try to use curl to reach service-1.suffix, I get a DNS error saying the host cannot be resolved:
curl: (6) Could not resolve host: service-1.suffix
How can I fix this?
# The commands I am using in an Istio app in cluster B:
kubectl exec -it pod_name -c container_name bash
curl -I -HHost:service-1.suffix http://service-1.suffix
Edit:
When I use another resolvable hostname, like www.google.com, in the ServiceEntry, I can get through: the requests to www.google.com are redirected to service-1 in cluster A. Likewise, if I use nip.io as my suffix, it works well. However, the made-up name service-1.suffix cannot be resolved.
Define a Kubernetes ExternalName service with a random IP:
kind: Service
apiVersion: v1
metadata:
  name: service1
spec:
  type: ExternalName
  externalName: 1.1.1.1
The address itself does not matter; it only has to make DNS resolution inside the pod succeed. The sidecar intercepts the outbound HTTP request and routes it according to the ServiceEntry and the Host header, not the resolved IP.