I am trying to get some hands-on experience with K8s & istio. I am using minikube and I am trying to deploy a dummy Flask web app. However, for some reason I cannot get the istio routing working.
E.g.
curl -v -H 'Host: hello.com' 'http://127.0.0.1/' --> 503 Service Unavailable
Do you see any issue in my specs?
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: flask-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "hello.com"
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: flaskvirtualservice
spec:
  hosts:
  - "hello.com"
  gateways:
  - flask-gateway
  http:
  - route:
    - destination:
        host: flask.default.svc.cluster.local
        port:
          number: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask
  labels:
    app: flask
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask
  template:
    metadata:
      labels:
        app: flask
    spec:
      containers:
      - name: flask
        image: digitalocean/flask-helloworld
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: flask-service
spec:
  selector:
    app.kubernetes.io/name: flask
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 5000
Thanks for your support here!
Cheers
EDIT:
here is the updated service definition:
apiVersion: v1
kind: Service
metadata:
  name: flask
  labels:
    app: flask
    service: flask
spec:
  ports:
  - port: 5000
    name: http
  selector:
    app: flask
I would suggest you look at this sample from the istio repo:
https://github.com/istio/istio/tree/master/samples/helloworld
This helloworld app is a Flask app, and you can find the Python source code in src.
Syntax
In your YAML you do not have --- between your Gateway and VirtualService, so they are not parsed as two separate resources.
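For example, the end of the Gateway and the start of the VirtualService should be separated like this (a sketch, abbreviated to the surrounding lines of your manifest):

    hosts:
    - "hello.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService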
DNS
You also don't make mention of DNS, i.e. you need to make sure that the box you are running curl on is able to resolve your domain hello.com to the istio ingress gateway service IP. Since you are using minikube, you could add an entry to your OS hosts file.
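For example, assuming the ingress gateway is reachable on the minikube IP (a sketch; adjust for your OS):

# append a hosts entry mapping hello.com to the minikube IP
echo "$(minikube ip) hello.com" | sudo tee -a /etc/hosts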
Routability
The client also needs to be able to send requests to the gateway, i.e. if you are outside the cluster you need an external IP, or do something with kubectl port-forward ...
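A minimal sketch of the port-forward approach (service name and namespace are the istio defaults; local port 8080 is arbitrary):

# forward local port 8080 to port 80 of the istio ingress gateway
kubectl -n istio-system port-forward svc/istio-ingressgateway 8080:80
# then test with the Host header against localhost
curl -v -H 'Host: hello.com' http://127.0.0.1:8080/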
I hope this helps you sort things out!
Related
I have a Kubernetes ingress controller terminating my SSL, with an ingress resource handling two routes: first my frontend SPA app, and second the backend API. Currently, when I hit each frontend and backend service directly they perform flawlessly, but when I call the ingress controller, both frontend and backend services alternate between producing the correct result and a 502 Bad Gateway.
To me it smells like my ingress resource has some sort of path conflict that I'm not sure how to debug.
Reddit suggested that it could be a label and selector mismatch between my services and deployments, which I believe I checked thoroughly. They also mentioned: "api layer deployment and a worker layer deployment [that] both share a common app label and your PDB selects that app label with a 50% availability for example", which I haven't run down because I don't quite understand it.
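For reference, this is roughly how I checked the label/selector mapping (standard kubectl; the names match the manifests below):

# pods matched by each service's selector (should list the right pods)
kubectl -n ingress-nginx get pods -l app=my-performance-front,tier=frontend
kubectl -n ingress-nginx get pods -l app=my-performance-back,tier=backend
# each service should have endpoints; empty endpoints mean 502s from the controller
kubectl -n ingress-nginx get endpoints frontend backend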
I also realize SSL could play a role in gateway issues; however, my certificates appear to be working when I hit the https:// port of the ingress controller.
frontend-deploy:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: my-performance-front
      tier: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: my-performance-front
        tier: frontend
    spec:
      containers:
      - name: my-performance-frontend
        image: "<my current image and location>"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]
      imagePullSecrets:
      - name: regcred
frontend-svc:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: ingress-nginx
spec:
  selector:
    app: my-performance-front
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
backend-deploy:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: my-performance-back
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: my-performance-back
        tier: backend
    spec:
      containers:
      - name: my-performance-backend
        image: "<my current image and location>"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]
      imagePullSecrets:
      - name: regcred
backend-svc:
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: ingress-nginx
spec:
  selector:
    app: my-performance-back
    tier: backend
  ports:
  - protocol: TCP
    name: "http"
    port: 80
    targetPort: 8080
  type: LoadBalancer
ingress-rules:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-rules
  namespace: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
    # nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
  - http:
      paths:
      - path: /(api/v0(?:/|$).*)
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
Any ideas, critiques, or experiences are welcome and appreciated!
We are using istio as a service mesh to secure our cluster. We have several web applications exposed through the ingress gateway as follows: ingress-gateway-id:80/app1/, ingress-gateway-id:80/app2/ and ingress-gateway-id:80/app3/.
We have a Gateway that handles the ingress gateway's traffic on port 80.
For each application, we create a virtual service that routes traffic from (for example) ingress-gateway-id:80/app1/app1-api-uri/ to app1-service/app1-api-uri/.
The main issue we are currently facing is that some applications serve only from / (for example app2-service/), which forces us to route / through the virtual service and therefore restrict the ingress gateway to a single application (without specifying hosts in headers, since all our applications are web apps and are accessed through a browser in our use case).
My question is: how can multiple applications be reached at / through the ingress gateway (on the same port 80, for example) without having to set host headers from the client (in our case the browser)?
If you don't want to use your domains as virtual service hosts, I would say the only options here are to:
- use rewrite in your virtual service, or
- use custom headers.
There is an example about rewrite in the istio documentation.
HTTPRewrite
HTTPRewrite can be used to rewrite specific parts of an HTTP request before forwarding the request to the destination. The rewrite primitive can be used only with HTTPRouteDestination. The following example demonstrates how to rewrite the URL prefix for an API call (/ratings) to the ratings service before making the actual API call.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings-route
spec:
  hosts:
  - ratings.prod.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: /ratings
    rewrite:
      uri: /v1/bookRatings
    route:
    - destination:
        host: ratings.prod.svc.cluster.local
        subset: v1
Here is an example with two nginx deployments, both serving on /.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
spec:
  selector:
    matchLabels:
      run: nginx1
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx1
        app: frontend
    spec:
      containers:
      - name: nginx1
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
spec:
  selector:
    matchLabels:
      run: nginx2
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx2
        app: frontend
    spec:
      containers:
      - name: nginx2
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx2 > /usr/share/nginx/html/index.html"]
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: frontend
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: frontend
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: comp-ingress-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt
spec:
  gateways:
  - comp-ingress-gateway
  hosts:
  - '*'
  http:
  - name: match
    match:
    - uri:
        prefix: /a
    rewrite:
      uri: /
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
        port:
          number: 80
  - name: default
    match:
    - uri:
        prefix: /b
    rewrite:
      uri: /
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v2
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginxdest
spec:
  host: nginx.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      run: nginx1
  - name: v2
    labels:
      run: nginx2
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
And a test with curl:
curl -v xx.xxx.xxx.x/a
HTTP/1.1 200 OK
Hello nginx1
curl -v xx.xxx.xxx.x/b
HTTP/1.1 200 OK
Hello nginx2
There is an example about custom headers in istio documentation.
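As a sketch (the header name x-app and its values are made up for illustration), routing on a custom header with the resources above could look like this:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt-header
spec:
  gateways:
  - comp-ingress-gateway
  hosts:
  - '*'
  http:
  - match:
    - headers:
        x-app:            # hypothetical header the client would send
          exact: nginx1
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1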
I have the following application, which I'm able to run in K8S successfully using a service of type LoadBalancer. It is a very simple app with two routes:
/ - you should see 'hello application'
/api/books - should provide a list of books in JSON format
This is the service:
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: go-ms
This is the deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
      - name: go-ms
        image: rayndockder/http:0.0.2
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
After applying both YAMLs and calling the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I was able to see the data in the browser as expected, and the root app also worked using just the external IP.
Now I want to use istio, so I followed the guide and installed it successfully via helm
using https://istio.io/docs/setup/kubernetes/install/helm/, and verified that all 53 CRDs are there and that the istio-system
components (such as istio-ingressgateway and istio-pilot; all 8 deployments) are up and running.
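For reference, this is roughly how I verified it (the CRD count check is the one from the istio helm install guide):

# count the istio CRDs (the helm guide expects 53 for this release)
kubectl get crds | grep 'istio.io' | wc -l
# check that all istio-system deployments are available
kubectl get deployments -n istio-system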
I've changed the service above from LoadBalancer to NodePort
and created the following istio config according to the istio docs:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 8080
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: "/"
    - uri:
        exact: "/api/books"
    route:
    - destination:
        port:
          number: 8080
        host: go-ms
In addition, I've labeled the namespace where the application is deployed with:
kubectl label namespace books istio-injection=enabled
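A quick way to confirm the label is in place (plain kubectl, nothing istio-specific):

# show the istio-injection label for every namespace
kubectl get namespace -L istio-injection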
Now, to get the external IP, I've used the command
kubectl get svc -n istio-system -l istio=ingressgateway
and got this as the external-ip:
b0751-1302075110.eu-central-1.elb.amazonaws.com
When trying to access the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I got the error:
This site can't be reached
ERR_CONNECTION_TIMED_OUT
If I run the docker image rayndockder/http:0.0.2 via
docker run -it -p 8080:8080 httpv2
the paths work correctly!
Any idea/hint what the issue could be?
Is there a way to trace the istio configs to see whether something is missing, or whether we have some collision with a port or a network policy maybe?
btw, the deployment and service can run on any cluster for testing, if someone could help...
If I change everything to port 80 (in all YAML files, the application, and the docker image), I am able to get the data for the root path, but not for "api/books".
I tried your config with the modification of the gateway port to 80 from 8080 in my local minikube setup of kubernetes and istio. This is the command I used:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  ports:
  - port: 8080
  selector:
    app: go-ms
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
      - name: go-ms
        image: rayndockder/http:0.0.2
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: go-ms-virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: /
    - uri:
        exact: /api/books
    route:
    - destination:
        port:
          number: 8080
        host: go-ms
EOF
The reason I changed the gateway port to 80 is that the istio ingress gateway by default opens up a few ports such as 80, 443 and a few others. In my case, as minikube doesn't have an external load balancer, I used the node port, which is 31380 in my case.
I was able to access the app at the URL http://$(minikube ip):31380.
There is no point in changing the port of the services and deployments, since those are application specific.
Maybe this question specifies the ports opened by the istio ingress gateway.
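For reference, a way to look up the node port that maps to the gateway's port 80 (a sketch; the port name http2 is the istio default but may differ by version):

# print the node port bound to the ingress gateway's http2 (port 80) entry
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'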
After I created a traefik daemonset, I created a service of type LoadBalancer on port 80, which is the Traefik proxy port, and the node got automatically registered to it. If I hit the ELB I get the proxy 404, which is OK because no service is registered yet.
I then created a NodePort service for the web UI, targeting port 8080 inside the pod and port 80 on the cluster IP. I can curl the traefik UI from inside the cluster and it returns the Traefik UI.
I then created an ingress so that when I hit elb/ui it gets me to the backend web-ui service of traefik, and it fails. I also have the right annotations in my ingress, but the ELB does not route the path to the traefik UI in the backend, which is running properly.
What am I doing wrong here? I can post all my yml files if required.
UPD
My yaml files:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik
  labels:
    app: traefik
spec:
  template:
    metadata:
      labels:
        app: traefik
    spec:
      containers:
      - image: traefik
        name: traefik
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --web
        ports:
        - containerPort: 8080
          name: traefikweb
        - containerPort: 80
          name: traefikproxy
apiVersion: v1
kind: Service
metadata:
  name: traefik-proxy
spec:
  selector:
    app: traefik
  ports:
  - port: 80
    targetPort: traefikproxy
  type: LoadBalancer
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
spec:
  selector:
    app: traefik
  ports:
  - name: http
    targetPort: 8080
    nodePort: 30001
    port: 80
  type: NodePort
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: traefik-ing
  annotations:
    kubernetes.io/ingress.class: traefik
    #traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip:/ui
spec:
  rules:
  - http:
      paths:
      - path: /ui
        backend:
          serviceName: traefik-web-ui
          servicePort: 80
If you are on private subnets, use:
apiVersion: v1
kind: Service
metadata:
  name: traefik-proxy
  annotations:
    "service.beta.kubernetes.io/aws-load-balancer-internal": "0.0.0.0/0"
spec:
  selector:
    app: traefik
  ports:
  - port: 80
    targetPort: traefikproxy
  type: LoadBalancer
"I then created an ingress so that when I hit elb/ui it gets me to the backend web-ui service of traefik and it fails."
How did it fail? Did you get error 404, error 500, or something else?
Also, for the traefik-web-ui service, you don't need to set type: NodePort; it should be type: ClusterIP.
When you configure backends for your Ingress, the only requirement is availability from inside the cluster, so the ClusterIP type will be more than enough for that.
Your service should look like this:
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
spec:
  selector:
    app: traefik
  ports:
  - name: http
    targetPort: 8080
    port: 80
The PathPrefixStrip option can be useful because without it the request will be forwarded to the UI with the /ui prefix, which you definitely don't want.
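As a sketch, enabling it mirrors the commented-out annotation already in your manifest (Traefik 1.x annotation syntax):

metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
    # strip the matched /ui prefix before forwarding to the backend
    traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip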
All other configs look good.
Having issues configuring SSL termination in my Kubernetes cluster; I am attempting to figure out the best place for this to happen.
I have been able to get it working by following the instructions listed here and then updating the ingress controller service to include the SSL certificate details using the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:...
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: https
    port: 443
    targetPort: 80
I then have ingress rules and services set up similar to:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app1.foo.bar
    http:
      paths:
      - backend:
          serviceName: app1
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app1
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: app1
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app1
spec:
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
When going to app1.foo.bar I can see that:
http requests are redirected to https
the SSL certificate is correctly applied
Originally I was trying to apply the certificate to my individual app services. I could see the certificate being applied to the ELB in AWS, but it wasn't being passed through. So my question is:
Is this the correct configuration, or is there a better solution?
Thank you :)
I would suggest terminating SSL on the Nginx Ingress Controller exposed with an ELB, and using kube-lego for automated SSL certificate management.
https://github.com/kubernetes/ingress-nginx &
https://github.com/jetstack/kube-lego
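As a sketch of how kube-lego hooks in (the kubernetes.io/tls-acme annotation is what kube-lego watches; the host and secret names here are illustrative):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1
  annotations:
    kubernetes.io/ingress.class: nginx
    # kube-lego picks up ingresses with this annotation and provisions certs
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - app1.foo.bar
    secretName: app1-foo-bar-tls   # secret kube-lego will create and maintain
  rules:
  - host: app1.foo.bar
    http:
      paths:
      - backend:
          serviceName: app1
          servicePort: 80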