Istio basic routing rules: can't get them working - istio

I’m trying to set up routing rules and I can’t get them working.
istioctl version: 1.0.2
kubectl version: client 1.10.3 / server 1.9.6
I have the following deployments (configuration files below):
1. Two simple flask pods
2. One NodePort Service
3. One DestinationRule
4. One VirtualService
After deploying all of the above I still get replies from both pods instead of only v1, as defined in the VirtualService.
Am I missing anything?
Pod 1:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flask-v1
spec:
  selector:
    matchLabels:
      app: flask
  replicas: 1
  template:
    metadata:
      labels:
        app: flask
        version: v1
    spec:
      containers:
      - name: flask
        image: simple-flask-example:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000
Pod 2:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flask-v2
spec:
  selector:
    matchLabels:
      app: flask
  replicas: 1
  template:
    metadata:
      labels:
        app: flask
        version: v2
    spec:
      containers:
      - name: flask
        image: simple-flask-example:2.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000
Service nodeport:
apiVersion: v1
kind: Service
metadata:
  name: flask
  labels:
    app: flask
spec:
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 5000
  selector:
    app: flask
DestinationRule:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: flask
spec:
  host: flask
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flask
spec:
  hosts:
  - flask
  http:
  - route:
    - destination:
        host: flask
        subset: v1
Requests test:
>>> for x in range(10) : requests.request('GET','http://10.200.167.223').text
...
'{\n "hello": "world v2"\n}\n'
'{\n "hello": "world v2"\n}\n'
'{\n "hello": "world v2"\n}\n'
'{\n "hello": "world v1"\n}\n'
'{\n "hello": "world v1"\n}\n'
'{\n "hello": "world v2"\n}\n'
'{\n "hello": "world v2"\n}\n'
'{\n "hello": "world v1"\n}\n'
'{\n "hello": "world v2"\n}\n'
'{\n "hello": "world v1"\n}\n'

Istio routing rules (VirtualService rules) are executed in the client-side proxy, not in the target service, so if you call the service directly via a NodePort none of the Istio routing is applied. You need to call it either from another Istio-enabled service or through an Istio Gateway.
A simple way to test service routing is to use the sleep sample as a client.
Alternatively, you can set up an ingress Gateway for your service, like the example described here.
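For example, a rough sketch of testing from inside the mesh with the sleep sample (assuming a standard Istio release layout and automatic sidecar injection in the namespace; otherwise pass the manifest through istioctl kube-inject first):
kubectl apply -f samples/sleep/sleep.yaml
# Call the flask service from inside the mesh; with the VirtualService above
# in place, every reply should now come from v1
SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}')
for i in $(seq 1 10); do
  kubectl exec "$SLEEP_POD" -c sleep -- curl -s http://flask/
done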

Related

Kubernetes Istio Gateway & VirtualService Config Flask Example

I'm trying to get some hands-on experience with K8s & Istio. I am using minikube and I'm trying to deploy a dummy Flask web app. However, for some reason I can't get the Istio routing to work.
E.g.
curl -v -H 'Host: hello.com' 'http://127.0.0.1/' --> 503 Service Unavailable
Do you see any issue in my specs?
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: flask-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "hello.com"
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flaskvirtualservice
spec:
  hosts:
  - "hello.com"
  gateways:
  - flask-gateway
  http:
  - route:
    - destination:
        host: flask.default.svc.cluster.local
        port:
          number: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask
  labels:
    app: flask
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask
  template:
    metadata:
      labels:
        app: flask
    spec:
      containers:
      - name: flask
        image: digitalocean/flask-helloworld
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: flask-service
spec:
  selector:
    app.kubernetes.io/name: flask
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 5000
Thanks for your support here!
Cheers
EDIT:
here is the updated service definition:
apiVersion: v1
kind: Service
metadata:
  name: flask
  labels:
    app: flask
    service: flask
spec:
  ports:
  - port: 5000
    name: http
  selector:
    app: flask
I would suggest you look at this sample from the istio repo:
https://github.com/istio/istio/tree/master/samples/helloworld
This helloworld app is a Flask app, and you can find the Python source code in src.
Syntax
In your YAML you do not have --- between your Gateway and VirtualService.
DNS
You also don't mention DNS, i.e. you need to make sure that the box you are running curl on can resolve your domain hello.com to the Istio ingress gateway IP. Since you are using minikube you could add an entry to your OS hosts file.
Routability
Make sure the machine you run curl on can actually send requests to the cluster, i.e. if you are outside the cluster you need an external IP or you need to do something with kubectl port-forward ...
I hope this helps you sort things out!
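As a rough sketch of the last two points (assuming the default istio-ingressgateway Service in istio-system and a standard minikube setup):
# NodePort of the ingress gateway's plain HTTP port
INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
INGRESS_HOST=$(minikube ip)
# Either teach the local machine about hello.com ...
echo "$INGRESS_HOST hello.com" | sudo tee -a /etc/hosts
# ... or keep overriding the Host header on each request
curl -v -H 'Host: hello.com' "http://$INGRESS_HOST:$INGRESS_PORT/"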

Ingress Controller produces 502 Bad Gateway on every other request

I have a Kubernetes ingress controller terminating my SSL, with an ingress resource handling two routes: one for my frontend SPA app and the second for my backend API. Currently, when I hit each frontend and backend service directly they perform flawlessly, but when I call them through the ingress controller both frontend and backend services alternate between producing the correct result and a 502 Bad Gateway.
To me it smells like my ingress resource is having some sort of path conflict that I'm not sure how to debug.
Reddit suggested that it could be a label and selector mismatch between my services and deployments, which I believe I checked thoroughly. They also mentioned an "api layer deployment and a worker layer deployment [that] both share a common app label and your PDB selects that app label with a 50% availability for example", which I haven't run down because I don't quite understand it.
I also realize SSL could play a role in gateway issues; however, my certificates appear to be working when I hit the https:// port of the ingress controller.
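For reference, this is roughly how I checked that the selectors line up (plain kubectl against the manifests below):
# Each Service should list a pod IP under ENDPOINTS; an empty value would
# mean its selector doesn't match the Deployment's pod labels
kubectl -n ingress-nginx get endpoints frontend backend
kubectl -n ingress-nginx get pods --show-labels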
frontend-deploy:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: my-performance-front
      tier: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: my-performance-front
        tier: frontend
    spec:
      containers:
      - name: my-performance-frontend
        image: "<my current image and location>"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]
      imagePullSecrets:
      - name: regcred
frontend-svc
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: ingress-nginx
spec:
  selector:
    app: my-performance-front
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
backend-deploy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: my-performance-back
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: my-performance-back
        tier: backend
    spec:
      containers:
      - name: my-performance-backend
        image: "<my current image and location>"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]
      imagePullSecrets:
      - name: regcred
backend-svc
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: ingress-nginx
spec:
  selector:
    app: my-performance-back
    tier: backend
  ports:
  - protocol: TCP
    name: "http"
    port: 80
    targetPort: 8080
  type: LoadBalancer
ingress-rules
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-rules
  namespace: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
    # nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
  - http:
      paths:
      - path: /(api/v0(?:/|$).*)
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
Any ideas, critiques, or experiences are welcomed and appreciated!!!

AWS NLB sticky session on EKS

I'm trying to apply NLB sticky sessions in an EKS environment.
There are 2 worker nodes (EC2) connected to the NLB target group, and each node has 2 nginx pods.
I want to connect to the same pod from my local system for testing.
But it looks like I get connected to a different pod on every attempt when using the 'curl' command.
This is my test YAML file and test command.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: udptest
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: container
        image: nginx
        ports:
        - containerPort: 80
      nodeSelector:
        zone: a
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: udptest2
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: container
        image: nginx
        ports:
        - containerPort: 80
      nodeSelector:
        zone: c
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-nlb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
#!/bin/bash
number=0
while :
do
  if [ $number -gt 2 ]; then
    break
  fi
  curl -L -k -s -o /dev/null -w "%{http_code}\n" <nlb dns name>
  number=$((number+1))  # increment so the loop actually terminates
done
How can I connect to a specific pod via the NLB's sticky session on every attempt?
As far as I understand, the ClientIP value for sessionAffinity is not supported when the service type is LoadBalancer.
You can use the Nginx ingress controller and implement the affinity over there.
https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "test-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service
          servicePort: port
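A quick way to confirm the affinity cookie is actually being issued is to look for a Set-Cookie: test-cookie=... header on the first response (hypothetical ingress address):
# The first response should set the affinity cookie; requests that send it
# back should keep landing on the same pod
curl -s -D - -o /dev/null -H 'Host: example.com' http://<ingress-ip>/ | grep -i set-cookie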
Good article: https://zhimin-wen.medium.com/sticky-sessions-in-kubernetes-56eb0e8f257d
You need to enable it:
annotations:
  service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
  service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip
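Applied to the nginx-nlb Service from the question, that would look roughly like this (a sketch; the target-group-attributes annotation is handled by the AWS Load Balancer Controller, so this assumes that controller is the one provisioning the NLB):
apiVersion: v1
kind: Service
metadata:
  name: nginx-nlb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer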

upstream connect error or disconnect/reset before headers. reset reason: connection failure

I'm facing this issue: upstream connect error or disconnect/reset before headers. reset reason: connection failure. Here are my deployment and service files:
apiVersion: v1
kind: Service
metadata:
  name: project
  labels:
    app: project
    service: project
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: project
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: project-svc
  labels:
    account: project
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: project-v1
  labels:
    app: project
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: project
      version: v1
  template:
    metadata:
      labels:
        app: project
        version: v1
    spec:
      serviceAccountName: project-svc
      containers:
      - name: project
        image: segullshairbutt/website:admin_project_a_01_cl1_wn1_pod1_c4
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
and here are the Gateway and VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: project-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: projectinfo
spec:
  hosts:
  - "*"
  gateways:
  - project-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    route:
    - destination:
        host: project
        port:
          number: 9080
When I visit via the minikube IP and the Istio ingress gateway port I get this error, but when I just change the image from mine to the Bookinfo productpage image there is no error. I don't know why this happens or where it comes from.
Please help me, I'll be very thankful to you!

istio route rule error: "json: cannot unmarshal string into Go value"

I'm new to Istio and I'm going through some use cases with my simple app.
I deployed 2 simple services on minikube running on Windows 10 Pro with VirtualBox 5.2.6, with istio 0.6.0
Version v1 of service1 and v1 & v2 of service2.
service1 responds to /hello and service2 responds to /world. Everything is working fine so far: both services are responding, and for service2 the round robin is working.
Now I want to apply 2 route rules, one to route to v2 of service2 based on a header and the rest to v1 of service2, but when I try to do that I get an error:
Error: cannot parse proto message: YAML decoding error: destination:
  name: service2
match:
  request:
    headers:
      Foo: bar
precedence: 2
route:
- labels:
    version: v2
json: cannot unmarshal string into Go value of type map[string]json.RawMessage
Please find below my app and route rules config.
What's wrong with this config?
Please notice that when I omit the "match" part it's OK, but of course this is not what I want.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: service2-route
spec:
  destination:
    name: service2
  precedence: 2
  match:
    request:
      headers:
        Foo: bar
  route:
  - labels:
      version: v2
---
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: service2-default
spec:
  destination:
    name: service2
  precedence: 1
  route:
  - labels:
      version: v1
    weight: 100
and my services deployment YAML:
###########################################################################
# Service 1
##########################################################################
apiVersion: v1
kind: Service
metadata:
  name: service1
  labels:
    app: service1
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: service1
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: service1-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: service1
        version: v1
    spec:
      containers:
      - name: service1
        image: myrepo/sampleapp-service1:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
---
###########################################################################
# Service 2
##########################################################################
apiVersion: v1
kind: Service
metadata:
  name: service2
  labels:
    app: service2
spec:
  ports:
  - port: 8081
    name: http
  selector:
    app: service2
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: service2-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: service2
        version: v1
    spec:
      containers:
      - name: service2
        image: myrepo/sampleapp-service2:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8081
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: service2-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: service2
        version: v2
    spec:
      containers:
      - name: service2
        image: myrepo/sampleapp-service2:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8081
---
###########################################################################
# Ingress resource (gateway)
##########################################################################
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: service1
          servicePort: 8080
      - path: /world
        backend:
          serviceName: service2
          servicePort: 8081
---
The problem here is pretty simple: you have to say how you want to match your header. In your example I assume that you want an exact match, so use the following syntax:
match:
  request:
    headers:
      Foo:
        exact: bar
Here you can find more available options.
Also, I would recommend using quotes if your header value contains any special characters.
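Putting that into the first rule from the question, the corrected RouteRule would look roughly like this (a sketch; only the match block changes):
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: service2-route
spec:
  destination:
    name: service2
  precedence: 2
  match:
    request:
      headers:
        Foo:
          exact: "bar"
  route:
  - labels:
      version: v2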