Istio Gateway not being applied to istio-ingressgateway - istio

I'm trying to make istio work with my mssql service. The istio-ingressgateway LoadBalancer doesn't seem to be updated with the correct port value.
I'm running on GKE, Kubernetes 1.10+.
apiVersion: v1
kind: Service
metadata:
  name: mssql
  labels:
    app: mssql
    service: mssql
spec:
  selector:
    app: mssql
  ports:
  - protocol: TCP
    port: 1433
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 1433
      protocol: TCP
      name: tcp-1433
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vservice-mssql
spec:
  hosts:
  - "*"
  gateways:
  - public-gateway
  tcp:
  - match:
    - port: 1433
    route:
    - destination:
        host: mssql
        port:
          number: 1433
After applying the configuration, I expected the port to be open on the istio-ingressgateway, but this is the result:
istio-ingressgateway LoadBalancer 10.8.1.100 **REDACTED** 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:30160/TCP,8060:32736/TCP,853:30641/TCP,15030:31124/TCP,15031:30849/TCP 90d
The port I opened on the gateway is not listed.

Use the following resource definitions:
apiVersion: v1
kind: Service
metadata:
  name: mssql
  labels:
    app: mssql
    service: mssql
spec:
  selector:
    app: mssql
  ports:
  - protocol: TCP
    port: 1433
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 15011
      protocol: TCP
      name: tcp-15011
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vservice-mssql
spec:
  hosts:
  - "*"
  gateways:
  - public-gateway
  tcp:
  - match:
    - port: 15011
    route:
    - destination:
        host: mssql
        port:
          number: 1433
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: mssql
  namespace: istio-system
spec:
  host: mssql
  trafficPolicy:
    tls:
      mode: DISABLE
Use NodePort 30160 to access mssql — note the 15011:30160/TCP mapping:
istio-ingressgateway LoadBalancer 10.8.1.100 **REDACTED** 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:30160/TCP,8060:32736/TCP,853:30641/TCP,15030:31124/TCP,15031:30849/TCP 90d
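For example, to reach SQL Server through that NodePort (a sketch; the firewall rule name, node IP, and credentials are assumptions about your environment):
# GKE nodes need a firewall rule allowing TCP 30160 from your client
gcloud compute firewall-rules create allow-mssql-nodeport --allow tcp:30160
# Find a node's external IP, then connect with the SQL Server CLI client
kubectl get nodes -o wide
sqlcmd -S <NODE_EXTERNAL_IP>,30160 -U sa -P '<your-password>'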

If you installed Istio using the operator, you have to manually add the ports to the operator spec:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-control-plane
spec:
  components:
    ingressGateways:
    - enabled: true
      name: istio-ingressgateway
      k8s:
        service:
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
          - name: tcp
            port: 31400
            targetPort: 31400
          - name: tls
            port: 15443
            targetPort: 15443
          - # ADD PORTS HERE
The ports listed here are the defaults. I'm not sure whether the istio-operator will "merge" the default ports with the ones you add if you remove them, but you can try that out (let us know!)
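For the mssql case above, the extra entry would look something like this (a sketch; the tcp- name prefix follows Istio's port-naming convention):
          - name: tcp-mssql
            port: 1433
            targetPort: 1433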

Related

Istio gateway ingress is not working in a local microk8s cluster

I am trying to make an Istio gateway (with certificates from cert-manager) for public access to a deployed application. Here are the configurations:
Cert manager installed in cluster via helm:
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
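You can verify that the cert-manager pods came up before moving on:
kubectl get pods -n cert-manager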
Certificate issuer:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: kube-system
spec:
  acme:
    email: xxx@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging
    # Add a single challenge solver, HTTP01 using istio
    solvers:
    - http01:
        ingress:
          class: istio
Certificate file:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: url-certs
  namespace: istio-system
  annotations:
    cert-manager.io/issue-temporary-certificate: "true"
spec:
  secretName: url-certs
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  commonName: bot.demo.live
  dnsNames:
  - bot.demo.live
  - "*.demo.live"
Gateway file:
# gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https-url-1
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: SIMPLE
      credentialName: "url-certs" # This should match the Certificate secretName
Application Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: microbot
  name: microbot
  namespace: bot-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: microbot
  template:
    metadata:
      labels:
        app: microbot
    spec:
      containers:
      - name: microbot
        image: dontrebootme/microbot:v1
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
Virtual service and application service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: microbot-virtual-svc
  namespace: bot-demo
spec:
  hosts:
  - bot.demo.live
  gateways:
  - istio-system/public-gateway
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        host: microbot-service
        port:
          number: 9100
---
apiVersion: v1
kind: Service
metadata:
  name: microbot-service
  namespace: bot-demo
spec:
  selector:
    app: microbot
  ports:
  - port: 9100
    targetPort: 80
Whenever I try to curl https://bot.demo.live, I get a certificate error. The certificate issuer is working; I just can't figure out how to expose the deployed application via the Istio gateway for external access. bot.demo.live is already in my /etc/hosts file and I can ping it just fine.
What am I doing wrong?
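One way to narrow this down (a diagnostic sketch, not a confirmed fix) is to check whether cert-manager actually issued the certificate into istio-system, and which certificate the gateway is serving:
# READY: True means the url-certs secret holds a signed certificate
kubectl get certificate url-certs -n istio-system
kubectl describe certificate url-certs -n istio-system
# Inspect the certificate the gateway actually presents
openssl s_client -connect bot.demo.live:443 -servername bot.demo.live </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject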

AWS Route53 external-dns points only one A record to the (E)LB of the (nginx) ingress controller

I have created two Ingresses, one for Grafana and one for my app. When external-dns writes them into the Route53 hosted zone as A records, only one of them (the my-app DNS name) gets the (E)LB alias; the second A record gets the internal IP as its address.
The big questions:
Is there a way to point them all at the same alias, or at least at the same ELB?
Why doesn't it do so by default?
Using (installed via Terraform helm_release): nginx-ingress-controller (Bitnami), external-dns (Bitnami), and the prometheus-community chart.
grafana-ingress:
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
      meta.helm.sh/release-name: kube-prometheus-stack
      meta.helm.sh/release-namespace: prometheus
    generation: 1
    labels:
      app.kubernetes.io/instance: kube-prometheus-stack
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: grafana
      app.kubernetes.io/version: 8.3.3
      helm.sh/chart: grafana-6.20.5
    name: kube-prometheus-stack-grafana
    namespace: prometheus
    resourceVersion: "2419"
  spec:
    rules:
    - host: grafana.dns.io
      http:
        paths:
        - backend:
            service:
              name: kube-prometheus-stack-grafana
              port:
                number: 80
          path: /
          pathType: Prefix
  status:
    loadBalancer:
      ingress:
      - ip: 10.0.1.19
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
my-app-ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    ingress.kubernetes.io/ssl-redirect: /
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: "some.website.io"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: "{{ .Release.Name }}-app"
            port:
              number: 80
status:
  loadBalancer:
    ingress:
    - ip: 10.0.1.19
nginx-controller-service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: website.my-app.io, some.grafana.io
    meta.helm.sh/release-name: nginx-ingress-controller
    meta.helm.sh/release-namespace: default
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: nginx-ingress-controller
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nginx-ingress-controller
    helm.sh/chart: nginx-ingress-controller-9.1.4
  name: nginx-ingress-controller
  namespace: default
  resourceVersion: "997"
spec:
  clusterIP: 172.30.0.140
  clusterIPs:
  - 172.30.0.140
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    nodePort: 30337
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 31512
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: nginx-ingress-controller
    app.kubernetes.io/name: nginx-ingress-controller
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: someElbDns.eu-west-1.elb.amazonaws.com
Since your ingress controller service is of type LoadBalancer, creating this service will provision an NLB for you, with your nginx controller pods as its target groups.
A request passes from the NLB to the ingress pods, and then on to Grafana or your app.
I would try to use the same host (website.io) and the same nginx ingress class, so both ingresses get assigned the same NLB address:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: grafana-ingress
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - grafana.website.io
    secretName: tls-secret-website-io
  rules:
  - host: website.io
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana-service
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: awebsite-ingress
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - subdomain.website.io
    secretName: tls-secret-website-io
  rules:
  - host: website.io
    http:
      paths:
      - path: /(/|$)(.*)
        backend:
          serviceName: website-service
          servicePort: 80
kind: Service
apiVersion: v1
metadata:
  name: nginx-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: nginx-controller
    app.kubernetes.io/part-of: nginx-controller
  annotations:
    # by default the type is elb (classic load balancer).
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  # this setting is to make sure the source IP address is preserved.
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: nginx-controller
    app.kubernetes.io/part-of: nginx-controller
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
The last step is to create a CNAME entry in Route53 from your domain to the load balancer DNS name.
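For example, with the AWS CLI (the hosted zone ID and record name are placeholders):
aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "grafana.website.io",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "someElbDns.eu-west-1.elb.amazonaws.com"}]
    }
  }]
}'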

Istio: create external IP for a specific service

I've successfully deployed an app to K8s with Istio.
We have a gateway, and a virtual service like the following:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bher-virtualservice
  namespace: ba-trail
spec:
  gateways:
  - bher-gateway
  hosts:
  - trialio.cloud.str
  http:
  - match:
    - uri:
        prefix: "/"
    - uri:
        prefix: "/login"
    - uri:
        prefix: "/static"
    - uri:
        regex: '^.*\.(ico|png|jpg)$'
    route:
    - destination:
        host: bsa.ba-trail.svc.cluster.local # service.namespace.svc.cluster.local
        port:
          number: 5000
I also defined a service and a deployment.
I want to expose the service externally, so that I can access it at:
https://myapp.host:5000
When I run:
kubectl get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 100.61.114.202 a7151b2063cb-200880.eu-central-1.elb.amazonaws.com 150210:31161/TCP,80:31280/TCP,443:31190/TCP 41d
How can it be done?
I was able to run the app with port forwarding, but I want a direct external link.
So in your case, you have an ELB serving your istio ingress gateway that goes to a VirtualService that directs traffic to port 5000 in the container.
I assume that you have it working with 🤔💭:
a7151b2063cb-200880.eu-central-1.elb.amazonaws.com:80 and
a7151b2063cb-200880.eu-central-1.elb.amazonaws.com:443 ❓
and you want something like:
a7151b2063cb-200880.eu-central-1.elb.amazonaws.com:5000 ❓
but with a specific name that maps to
myapp.host ❓
First, you have to create a DNS CNAME record that maps myapp.host to a7151b2063cb-200880.eu-central-1.elb.amazonaws.com.
Then on the Kubernetes service istio-ingressgateway you probably have something like this:
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  labels:
    name: istio-ingress-service
  annotations:
    ... (❓)
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: ❓
    protocol: TCP
  - port: 443
    targetPort: ❓
    protocol: TCP
  selector:
    name: something-that-matches-your-istio-ingress
You could just add the extra port to the service so that it listens on that port on the outside.
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  labels:
    name: istio-ingress-service
  annotations:
    ... (❓)
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: ❓
    protocol: TCP
  - port: 443
    targetPort: ❓
    protocol: TCP
  - port: 5000
    targetPort: ❓
  selector:
    name: something-that-matches-your-istio-ingress
Finally, the virtual service needs to match your hostname myapp.host
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bher-virtualservice
  namespace: ba-trail
spec:
  gateways:
  - bher-gateway
  hosts:
  - myapp.host
  ...
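Note that the gateway bher-gateway also needs a server entry on port 5000, or the ingress won't actually listen there even once the Service exposes it. A sketch, assuming plain HTTP on that port:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bher-gateway
  namespace: ba-trail
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 5000
      name: http-5000
      protocol: HTTP
    hosts:
    - "myapp.host"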
✌️

How to assign an external IP to the Istio Ingress Gateway with IstioOperator? [GKE]

I want to assign an external IP to the Ingress Gateway of Istio.
I want to use the Istio Operator Spec.
So far I got this:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istiocontrolplane
spec:
  profile: demo
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      loadBalancerIP: 1.2.3.4
  addonComponents:
    grafana:
      enabled: false
    prometheus:
      enabled: true
It's automatically assigning an IP to the Service:
kubectl get svc -n istio-system
is not showing 1.2.3.4 for the EXTERNAL-IP.
Is it only possible if I actually own this IP in GCP?
First you have to create a static IP resource in GCP, and then you can reference that IP in the YAML below.
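For example, reserving and printing the address (the name and region are placeholders; the region must match your cluster's):
gcloud compute addresses create istio-ingress-ip --region europe-west1
gcloud compute addresses describe istio-ingress-ip --region europe-west1 --format='value(address)'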
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - enabled: true
      k8s:
        overlays:
        - api_version: autoscaling/v1
          kind: HorizontalPodAutoscaler
          name: istio-ingressgateway
          patches:
          - path: spec.minReplicas
            value: 3
          - path: spec.maxReplicas
            value: 5
          - path: spec.metrics[0].resource.targetAverageUtilization
            value: 60
        service:
          loadBalancerIP: XXX.XXX.XXX.XXX
          loadBalancerSourceRanges: []
          ports:
          - name: status-port
            port: 15020
            targetPort: 15020
          - name: http2
            port: 80
            targetPort: 80
          - name: https
            port: 443
          - name: tcp
            port: 31400
            targetPort: 31400
          - name: tls
            port: 15443
            targetPort: 15443
      label:
        app: istio-ingressgateway
        istio: ingressgateway
      name: istio-ingressgateway
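Apply the spec with istioctl (the filename is a placeholder; on older Istio releases the subcommand was istioctl manifest apply):
istioctl install -f istio-operator.yaml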

istio: ingress with grpc and http

I have a service listening on two ports; one is http, the other is grpc.
I would like to set up an ingress that can route to both these ports, with the same host.
The load balancer would redirect to the http port if http/1.1 is used, and to the grpc port if h2 is used.
Is there a way to do that with istio?
I made a hello world demonstrating what I am trying to achieve:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        alpha.istio.io/sidecar: injected
        pod.beta.kubernetes.io/init-containers: '[{"args":["-p","15001","-u","1337","-i","172.20.0.0/16"],"image":"docker.io/istio/init:0.1","imagePullPolicy":"Always","name":"init","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}}]'
      labels:
        app: hello-world
    spec:
      containers:
      - name: grpc-server
        image: aguilbau/hello-world-grpc:latest
        ports:
        - name: grpc
          containerPort: 50051
      - name: http-server
        image: nginx:1.7.9
        ports:
        - name: http
          containerPort: 80
      - name: istio-proxy
        args:
        - proxy
        - sidecar
        - -v
        - "2"
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        image: docker.io/istio/proxy:0.1
        imagePullPolicy: Always
        resources: {}
        securityContext:
          runAsUser: 1337
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: dev
spec:
  ports:
  - name: grpc
    port: 50051
  - name: http
    port: 80
  selector:
    app: hello-world
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-http
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - host: hello-world
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-grpc
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - host: hello-world
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 50051
---
I'm a bit late to the party, but for those of you stumbling on this post, I think you can do this with very little difficulty. I'm going to assume you have istio installed on a kubernetes cluster and are happy using the default istio-ingressgateway:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        alpha.istio.io/sidecar: injected
        pod.beta.kubernetes.io/init-containers: '[{"args":["-p","15001","-u","1337","-i","172.20.0.0/16"],"image":"docker.io/istio/init:0.1","imagePullPolicy":"Always","name":"init","securityContext":{"capabilities":{"add":["NET_ADMIN"]}}}]'
      labels:
        app: hello-world
    spec:
      containers:
      - name: grpc-server
        image: aguilbau/hello-world-grpc:latest
        ports:
        - name: grpc
          containerPort: 50051
      - name: http-server
        image: nginx:1.7.9
        ports:
        - name: http
          containerPort: 80
      - name: istio-proxy
        args:
        - proxy
        - sidecar
        - -v
        - "2"
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        image: docker.io/istio/proxy:0.1
        imagePullPolicy: Always
        resources: {}
        securityContext:
          runAsUser: 1337
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: dev
spec:
  ports:
  - name: grpc
    port: 50051
  - name: http
    port: 80
  selector:
    app: hello-world
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-world-istio-gate
  namespace: dev
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
  - port:
      number: 50051
      name: grpc
      protocol: GRPC
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-world-istio-vsvc
  namespace: dev
spec:
  hosts:
  - "*"
  gateways:
  - hello-world-istio-gate
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: hello-world
        port:
          number: 80
  tcp:
  - match:
    - port: 50051
    route:
    - destination:
        host: hello-world
        port:
          number: 50051
The above configuration omits your two Ingresses, and instead includes:
Your deployment
Your service
An istio gateway
An istio virtualservice
There is an important extra piece not shown, and I alluded to it earlier when talking about using the default ingressgateway. The following line found in "hello-world-istio-gate" gives a clue:
istio: ingressgateway
This refers to a pod in the 'istio-system' namespace that is usually installed by default called 'istio-ingressgateway' - and this pod is exposed by a service also called 'istio-ingressgateway.' You will need to open up ports on the 'istio-ingressgateway' service.
As an example, I edited my (default) ingressgateway and added a port opening for HTTP and GRPC. The result is the following (edited for length) yaml code:
dampersand@kubetest1:~/k8s$ kubectl get service istio-ingressgateway -n istio-system -o yaml
apiVersion: v1
kind: Service
metadata:
  <omitted for length>
  labels:
    app: istio-ingressgateway
    chart: gateways-1.0.3
    heritage: Tiller
    istio: ingressgateway
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
<omitted for length>
spec:
  ports:
  - name: http2
    nodePort: 31380
    port: 80
    protocol: TCP
    targetPort: 80
  <omitted for length>
  - name: grpc
    nodePort: 30000
    port: 50051
    protocol: TCP
    targetPort: 50051
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  type: NodePort
The easiest way to make the above change (for testing purposes) is to use:
kubectl edit svc -n istio-system istio-ingressgateway
For production purposes, it's probably better to edit your helm chart or your istio.yaml file or whatever you initially used to set up the ingressgateway.
As a quick aside, note that my test cluster has istio-ingressgateway set up as a NodePort, so what the above yaml file says is that my cluster is port forwarding 31380 -> 80 and 30000 -> 50051. You may (probably) have istio-ingressgateway set up as a LoadBalancer, which will be different... so plan accordingly.
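So on my NodePort setup, the HTTP side would be reachable with something like this (the node IP is a placeholder for one of my cluster nodes):
curl -H "Host: hello-world" http://<NODE_IP>:31380/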
Finally, the following blog post is some REALLY excellent background reading for the tools I've outlined in this post! https://blog.jayway.com/2018/10/22/understanding-istio-ingress-gateway-in-kubernetes/
You may be able to do something like that if you move the grpc-server and http-server containers into different pods with unique labels (i.e., different versions of the service, so to speak) and then, using Istio route rules behind the Ingress, split the traffic. A rule matching the header Upgrade: h2 could send traffic to the grpc version, and a default rule would send the rest of the traffic to the http one.
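For reference, a sketch of that split in a single Gateway-attached VirtualService, reusing the earlier answer's resource names. The match keys off content-type: application/grpc, which gRPC clients always send, since the Upgrade header is not reliably present on h2 connections:
# Hypothetical split: gRPC traffic (identified by its content-type)
# goes to port 50051, everything else to port 80.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-world-split
  namespace: dev
spec:
  hosts:
  - hello-world
  gateways:
  - hello-world-istio-gate
  http:
  - match:
    - headers:
        content-type:
          prefix: application/grpc
    route:
    - destination:
        host: hello-world
        port:
          number: 50051
  - route:
    - destination:
        host: hello-world
        port:
          number: 80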