I'm trying to configure an Istio VirtualService / DestinationRule so that a gRPC call to the service from a pod labeled datacenter=chi5 is routed to a gRPC server on a pod labeled datacenter=chi5.
I have Istio 1.4 installed on a cluster running Kubernetes 1.15.
A route is not getting created in the istio-sidecar Envoy config for the chi5 subset, and traffic is being routed round-robin between all service endpoints regardless of pod label.
Kiali is reporting an error in the DestinationRule config: "this subset's labels are not found in any matching host".
Do I misunderstand the functionality of these Istio traffic management objects or is there an error in my configuration?
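(For reference, the sidecar's route configuration can be inspected with istioctl, e.g. for the client pod shown below; the pod name here is just an example and needs to be substituted:)
# dump the routes Pilot pushed to the client pod's Envoy sidecar
istioctl proxy-config routes ticketclient-586c69f77d-wkj5d.istio-demo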
I believe my pods are correctly labeled:
$ (dev) kubectl get pods -n istio-demo --show-labels
NAME READY STATUS RESTARTS AGE LABELS
ticketclient-586c69f77d-wkj5d 2/2 Running 0 158m app=ticketclient,datacenter=chi6,pod-template-hash=586c69f77d,run=client-service,security.istio.io/tlsMode=istio
ticketserver-7654cb5f88-bqnqb 2/2 Running 0 158m app=ticketserver,datacenter=chi5,pod-template-hash=7654cb5f88,run=ticket-service,security.istio.io/tlsMode=istio
ticketserver-7654cb5f88-pms25 2/2 Running 0 158m app=ticketserver,datacenter=chi6,pod-template-hash=7654cb5f88,run=ticket-service,security.istio.io/tlsMode=istio
The port-name on my k8s Service object is correctly prefixed with the grpc protocol:
$ (dev) kubectl describe service -n istio-demo ticket-service
Name: ticket-service
Namespace: istio-demo
Labels: app=ticketserver
Annotations: <none>
Selector: run=ticket-service
Type: ClusterIP
IP: 10.234.14.53
Port: grpc-ticket 10000/TCP
TargetPort: 6001/TCP
Endpoints: 10.37.128.37:6001,10.44.0.0:6001
Session Affinity: None
Events: <none>
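For reference, the equivalent Service manifest is roughly the following (reconstructed from the describe output above):
apiVersion: v1
kind: Service
metadata:
  name: ticket-service
  namespace: istio-demo
  labels:
    app: ticketserver
spec:
  type: ClusterIP
  selector:
    run: ticket-service
  ports:
  - name: grpc-ticket   # grpc- prefix so Istio treats the port as gRPC
    port: 10000
    targetPort: 6001
    protocol: TCP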
I've deployed the following Istio objects to Kubernetes:
Name: ticket-destinationrule
Namespace: istio-demo
Labels: app=ticketserver
Annotations: <none>
API Version: networking.istio.io/v1alpha3
Kind: DestinationRule
Spec:
Host: ticket-service.istio-demo.svc.cluster.local
Subsets:
Labels:
Datacenter: chi5
Name: chi5
Labels:
Datacenter: chi6
Name: chi6
Events: <none>
---
Name: ticket-virtualservice
Namespace: istio-demo
Labels: app=ticketserver
Annotations: <none>
API Version: networking.istio.io/v1alpha3
Kind: VirtualService
Spec:
Hosts:
ticket-service.istio-demo.svc.cluster.local
Http:
Match:
Name: ticket-chi5
Port: 10000
Source Labels:
Datacenter: chi5
Route:
Destination:
Host: ticket-service.istio-demo.svc.cluster.local
Subset: chi5
Events: <none>
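For readability, here is roughly the same pair of objects as apply-able YAML (reconstructed from the describe output above; label keys are assumed to be lowercase to match the pod labels):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ticket-destinationrule
  namespace: istio-demo
  labels:
    app: ticketserver
spec:
  host: ticket-service.istio-demo.svc.cluster.local
  subsets:
  - name: chi5
    labels:
      datacenter: chi5
  - name: chi6
    labels:
      datacenter: chi6
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ticket-virtualservice
  namespace: istio-demo
  labels:
    app: ticketserver
spec:
  hosts:
  - ticket-service.istio-demo.svc.cluster.local
  http:
  - match:
    - name: ticket-chi5
      port: 10000
      sourceLabels:
        datacenter: chi5
    route:
    - destination:
        host: ticket-service.istio-demo.svc.cluster.local
        subset: chi5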
I have reproduced your issue with 2 nginx pods.
What you want can be achieved with sourceLabels; check the example below, I think it explains everything.
To start, I made 2 ubuntu pods, one with the label app: ubuntu and one without any labels.
apiVersion: v1
kind: Pod
metadata:
name: ubu2
labels:
app: ubuntu
spec:
containers:
- name: ubu2
image: ubuntu
command: ["/bin/sh"]
args: ["-c", "apt-get update && apt-get install curl -y && sleep 3000"]
apiVersion: v1
kind: Pod
metadata:
name: ubu1
spec:
containers:
- name: ubu1
image: ubuntu
command: ["/bin/sh"]
args: ["-c", "apt-get update && apt-get install curl -y && sleep 3000"]
Then 2 deployments and a service.
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx1
spec:
selector:
matchLabels:
run: nginx1
replicas: 1
template:
metadata:
labels:
run: nginx1
app: frontend
spec:
containers:
- name: nginx1
image: nginx
ports:
- containerPort: 80
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx2
spec:
selector:
matchLabels:
run: nginx2
replicas: 1
template:
metadata:
labels:
run: nginx2
app: frontend
spec:
containers:
- name: nginx2
image: nginx
ports:
- containerPort: 80
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello nginx2 > /usr/share/nginx/html/index.html"]
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: frontend
spec:
ports:
- port: 80
protocol: TCP
selector:
app: frontend
Next is the virtual service with the mesh gateway, so it applies only inside the mesh. It has 2 matches: one with sourceLabels, which routes traffic from pods with the app: ubuntu label to the v1 subset, and a default match which routes everything else to the v2 subset.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: nginxvirt
spec:
gateways:
- mesh
hosts:
- nginx.default.svc.cluster.local
http:
- name: match-myuid
match:
- sourceLabels:
app: ubuntu
route:
- destination:
host: nginx.default.svc.cluster.local
port:
number: 80
subset: v1
- name: default
route:
- destination:
host: nginx.default.svc.cluster.local
port:
number: 80
subset: v2
And the last thing is the DestinationRule, which takes the subsets from the virtual service and sends traffic to the proper nginx pod, labeled either run: nginx1 or run: nginx2.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: nginxdest
spec:
host: nginx.default.svc.cluster.local
subsets:
- name: v1
labels:
run: nginx1
- name: v2
labels:
run: nginx2
kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx1-5c5b84567c-tvtzm 2/2 Running 0 23m app=frontend,run=nginx1,security.istio.io/tlsMode=istio
nginx2-5d95c8b96-6m9zb 2/2 Running 0 23m app=frontend,run=nginx2,security.istio.io/tlsMode=istio
ubu1 2/2 Running 4 3h19m security.istio.io/tlsMode=istio
ubu2 2/2 Running 2 10m app=ubuntu,security.istio.io/tlsMode=istio
Results from the ubuntu pods:
Ubuntu with the label:
curl nginx/
Hello nginx1
Ubuntu without the label:
curl nginx/
Hello nginx2
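For reference, the same check can be run from outside the pods with kubectl exec (the pod and container names are the ones from the listing above):
# curl the nginx service from inside each ubuntu pod's application container
kubectl exec ubu2 -c ubu2 -- curl -s nginx/
kubectl exec ubu1 -c ubu1 -- curl -s nginx/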
Let me know if that helps.
Related
I am trying to make an Istio gateway (with certificates from cert-manager) for public access to a deployed application. Here are the configurations:
Cert manager installed in cluster via helm:
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
Certificate issuer:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
namespace: kube-system
spec:
acme:
email: xxx@gmail.com
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
# Secret resource that will be used to store the account's private key.
name: letsencrypt-staging
# Add a single challenge solver, HTTP01 using istio
solvers:
- http01:
ingress:
class: istio
Certificate file:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: url-certs
namespace: istio-system
annotations:
cert-manager.io/issue-temporary-certificate: "true"
spec:
secretName: url-certs
issuerRef:
name: letsencrypt-staging
kind: ClusterIssuer
commonName: bot.demo.live
dnsNames:
- bot.demo.live
- "*.demo.live"
Gateway file:
# gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: public-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: true
- port:
number: 443
name: https-url-1
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
credentialName: "url-certs" # This should match the Certificate secretName
Application Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: microbot
name: microbot
namespace: bot-demo
spec:
replicas: 1
selector:
matchLabels:
app: microbot
template:
metadata:
labels:
app: microbot
spec:
containers:
- name: microbot
image: dontrebootme/microbot:v1
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
Virtual service and application service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: microbot-virtual-svc
namespace: bot-demo
spec:
hosts:
- bot.demo.live
gateways:
- istio-system/public-gateway
http:
- match:
- uri:
prefix: "/"
route:
- destination:
host: microbot-service
port:
number: 9100
---
apiVersion: v1
kind: Service
metadata:
name: microbot-service
namespace: bot-demo
spec:
selector:
app: microbot
ports:
- port: 9100
targetPort: 80
Whenever I try to curl https://bot.demo.live, I get a certificate error. The certificate issuer is working. I just can't figure out how to expose the deployed application via the istio gateway for external access. bot.demo.live is already in my /etc/hosts file and I can ping it just fine.
What am I doing wrong?
I'm trying to apply NLB sticky sessions in an EKS environment.
There are 2 worker nodes (EC2) connected to the NLB target group, and each node has 2 nginx pods.
I want to connect to the same pod from my local system for testing.
But it looks like I'm connected to a different pod on every attempt when using the 'curl' command.
This is my test YAML file and test command.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: udptest
labels:
app: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: container
image: nginx
ports:
- containerPort: 80
nodeSelector:
zone: a
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: udptest2
labels:
app: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: container
image: nginx
ports:
- containerPort: 80
nodeSelector:
zone: c
---
apiVersion: v1
kind: Service
metadata:
name: nginx-nlb
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
type: LoadBalancer
#!/bin/bash
number=0
while :
do
  if [ $number -gt 2 ]; then
    break
  fi
  curl -L -k -s -o /dev/null -w "%{http_code}\n" <nlb dns name>
  # count this request so the loop stops after a few attempts
  number=$((number+1))
done
How can I connect to a specific pod via the NLB's sticky session on every attempt?
As far as I understand, the ClientIP value for sessionAffinity is not supported when the service type is LoadBalancer.
You can use the Nginx ingress controller and implement the affinity over there.
https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "test-cookie"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/affinity-mode: persistent
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
rules:
- host: example.com
http:
paths:
- path: /
backend:
serviceName: service
servicePort: port
Good article: https://zhimin-wen.medium.com/sticky-sessions-in-kubernetes-56eb0e8f257d
You need to enable stickiness on the NLB target group; it can be set with Service annotations:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip
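Applied to the nginx-nlb Service from the question, that would look roughly like this (a sketch; only the two annotations are added to the original manifest):
apiVersion: v1
kind: Service
metadata:
  name: nginx-nlb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # enable source-IP stickiness on the NLB target group
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer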
I have followed the steps in the Udacity Full-Stack Nanodegree course to get a Kubernetes cluster running on AWS EKS.
The Service is running. Running the command kubectl get services simple-jwt-api -o wide returns:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
simple-jwt-api LoadBalancer 10.100.217.57 a32d4ab0969b149bd9fb47d2065aee80-335944770.us-west-2.elb.amazonaws.com 80:31644/TCP 51m app=simple-jwt-api
Nodes appear to be running:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-192-168-3-213.us-west-2.compute.internal Ready <none> 80m v1.15.10-eks-bac369 192.168.3.213 54.70.213.28 Amazon Linux 2 4.14.173-137.229.amzn2.x86_64 docker://18.9.9
ip-192-168-46-0.us-west-2.compute.internal Ready <none> 80m v1.15.10-eks-bac369 192.168.46.0 34.220.32.208 Amazon Linux 2 4.14.173-137.229.amzn2.x86_64 docker://18.9.9
Pods appear to be running
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
simple-jwt-api-5dd5b9cf98-46ngm 1/1 Running 0 37m 192.168.22.121 ip-192-168-3-213.us-west-2.compute.internal <none> <none>
simple-jwt-api-5dd5b9cf98-kfgxf 1/1 Running 0 37m 192.168.20.148 ip-192-168-3-213.us-west-2.compute.internal <none> <none>
simple-jwt-api-5dd5b9cf98-xs6rp 1/1 Running 0 37m 192.168.60.136 ip-192-168-46-0.us-west-2.compute.internal <none> <none>
Docker file is:
FROM python:stretch
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
EXPOSE 8080
ENTRYPOINT ["gunicorn", "-b", ":8080", "main:APP"]
Deployment file is:
apiVersion: v1
kind: Service
metadata:
name: simple-jwt-api
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 80
selector:
app: simple-jwt-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: simple-jwt-api
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 2
maxSurge: 2
selector:
matchLabels:
app: simple-jwt-api
template:
metadata:
labels:
app: simple-jwt-api
spec:
containers:
- name: simple-jwt-api
image: CONTAINER_IMAGE
securityContext:
privileged: false
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
ports:
- containerPort: 8080
Why can't I access the app at a32d4ab0969b149bd9fb47d2065aee80-335944770.us-west-2.elb.amazonaws.com?
It looks like the targetPort in the Service (targetPort: 80) does not match the container port of the pod (containerPort: 8080). Please change the targetPort in the Service to 8080 and try again.
apiVersion: v1
kind: Service
metadata:
name: simple-jwt-api
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 8080
selector:
app: simple-jwt-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: simple-jwt-api
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 2
maxSurge: 2
selector:
matchLabels:
app: simple-jwt-api
template:
metadata:
labels:
app: simple-jwt-api
spec:
containers:
- name: simple-jwt-api
image: CONTAINER_IMAGE
securityContext:
privileged: false
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
ports:
- containerPort: 8080
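As a quick sanity check that the app itself answers on 8080, you can also port-forward directly to one of the pods and curl it locally (a sketch; the pod name is one from the listing above, and the response for / depends on the app's routes):
kubectl port-forward pod/simple-jwt-api-5dd5b9cf98-46ngm 8080:8080
curl http://localhost:8080/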
I have the following application, which I'm able to run in K8S successfully using a service of type LoadBalancer. It is a very simple app with two routes:
/ - you should see 'hello application'
/api/books - should provide a list of books in JSON format
This is the service
apiVersion: v1
kind: Service
metadata:
name: go-ms
labels:
app: go-ms
tier: service
spec:
type: LoadBalancer
ports:
- port: 8080
selector:
app: go-ms
This is the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: go-ms
labels:
app: go-ms
spec:
replicas: 2
template:
metadata:
labels:
app: go-ms
tier: service
spec:
containers:
- name: go-ms
image: rayndockder/http:0.0.2
ports:
- containerPort: 8080
env:
- name: PORT
value: "8080"
resources:
requests:
memory: "64Mi"
cpu: "125m"
limits:
memory: "128Mi"
cpu: "250m"
After applying both YAMLs and calling the URL:
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I was able to see the data in the browser as expected, and likewise for the root route using just the external IP.
Now I want to use Istio, so I followed the guide and installed it successfully via helm
using https://istio.io/docs/setup/kubernetes/install/helm/, and verified that all 53 CRDs are there and that the istio-system
components (such as istio-ingressgateway, istio-pilot etc., all 8 deployments) are up and running.
I've changed the service above from LoadBalancer to NodePort
and created the following Istio config according to the Istio docs:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: http-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 8080
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: virtualservice
spec:
hosts:
- "*"
gateways:
- http-gateway
http:
- match:
- uri:
prefix: "/"
- uri:
exact: "/api/books"
route:
- destination:
port:
number: 8080
host: go-ms
In addition, I've run the following on the namespace where the application is deployed:
kubectl label namespace books istio-injection=enabled
Now, to get the external IP I've used the command
kubectl get svc -n istio-system -l istio=ingressgateway
and got this as the external-ip:
b0751-1302075110.eu-central-1.elb.amazonaws.com
When trying to access the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I got this error:
This site can’t be reached
ERR_CONNECTION_TIMED_OUT
If I run the docker image rayndockder/http:0.0.2 via
docker run -it -p 8080:8080 httpv2
the paths work correctly!
Any idea/hint what the issue could be?
Is there a way to trace the Istio configs to see whether something is missing, or whether we have some collision with a port or network policy maybe?
By the way, the deployment and service can run on any cluster for testing, if someone could help...
If I change everything to port 80 (in all the YAML files, the application, and the docker image) I am able to get the data for the root path, but not for "/api/books".
I tried your config, with the gateway port modified from 8080 to 80, in my local minikube setup of Kubernetes and Istio. This is the command I used:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: go-ms
labels:
app: go-ms
tier: service
spec:
ports:
- port: 8080
selector:
app: go-ms
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: go-ms
labels:
app: go-ms
spec:
replicas: 2
template:
metadata:
labels:
app: go-ms
tier: service
spec:
containers:
- name: go-ms
image: rayndockder/http:0.0.2
ports:
- containerPort: 8080
env:
- name: PORT
value: "8080"
resources:
requests:
memory: "64Mi"
cpu: "125m"
limits:
memory: "128Mi"
cpu: "250m"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: http-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: go-ms-virtualservice
spec:
hosts:
- "*"
gateways:
- http-gateway
http:
- match:
- uri:
prefix: /
- uri:
exact: /api/books
route:
- destination:
port:
number: 8080
host: go-ms
EOF
The reason I changed the gateway port to 80 is that the Istio ingress gateway by default opens up a few ports such as 80, 443 and a few others. As minikube doesn't have an external load balancer, I used the node port, which is 31380 in my case.
I was able to access the app with url of http://$(minikube ip):31380.
There is no point in changing the ports of the services and deployments, since these are application specific.
Maybe this question specifies the ports opened by the Istio ingress gateway.
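On the question of tracing the Istio config: one way to see what was actually pushed to the ingress gateway's Envoy is istioctl proxy-config (a sketch; substitute the real gateway pod name):
# list the listeners and routes programmed into the ingress gateway's Envoy
istioctl proxy-config listeners <istio-ingressgateway-pod>.istio-system
istioctl proxy-config routes <istio-ingressgateway-pod>.istio-system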
Minikube version: v0.25.2
Operating System: Windows 10 Enterprise
Kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-04-10T12:46:31Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"linux/amd64"}
Kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
istioctl version
Version: 1.0.4
GitRevision: d5cb99f479ad9da88eebb8bb3637b17c323bc50b
User: root@8c2feba0b568
Hub: docker.io/istio
GolangVersion: go1.10.4
BuildStatus: Clean
I tried to run a simple hello-world application through Istio in the above environment.
kubectl get services
springbootapplication NodePort 10.103.103.141 <none> 80:30456/TCP 3d
kubectl get pods
springbootapplication-v1-6b5bdff8cd-2qhnn 2/2 Running 5 3d
After that I created the helloworld.yaml file below and ran the command kubectl apply -f helloworld.yaml. It ran successfully.
apiVersion: v1
kind: Service
metadata:
name: springbootapplication
labels:
app: springbootapplication
spec:
type: NodePort
ports:
- port: 80
name: http
selector:
app: springbootapplication
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: springbootapplication-v1
spec:
replicas: 1
template:
metadata:
labels:
app: springbootapplication
version: v1
spec:
containers:
- name: springbootapplication
image: springbootapplication:v1
imagePullPolicy: Never
ports:
- containerPort: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: springbootapplication-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: springbootapplication
spec:
hosts:
- "*"
gateways:
- springbootapplication-gateway
http:
- match:
- uri:
exact: /home
route:
- destination:
host: springbootapplication
port:
number: 80
weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: springbootapplication
spec:
host: springbootapplication
subsets:
- name: v1
labels:
version: v1
Problem: I don't know how to access this Spring Boot application now. How do I get the gateway IP and ingress port?
You have exposed this as an HTTP service, so in your Kubernetes cluster, check for the 'istio-ingressgateway' service (it should be a LoadBalancer) and check the endpoint which is exposed on port 80.
Or, through the command line, try these.
kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}' --> should give you the ingress port
minikube ip --> should give you the IP.
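Putting the two together, a request to the application would look roughly like this (a sketch, assuming a bash shell and the exact /home match from the VirtualService above):
# look up the ingress node port and minikube IP, then call the app through the gateway
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export INGRESS_HOST=$(minikube ip)
curl http://$INGRESS_HOST:$INGRESS_PORT/home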