ISTIO service mesh for internal kube services

I am new to istio and was trying to set it up.
I have a question though: Is istio only meant for traffic coming into the kube cluster via ingress, or can it be used to communicate with services running inside the same kube cluster?
Sorry if it is a noob question, but I am unable to find an answer anywhere else. Any pointer would be greatly appreciated.
Here is what I have:
1. Two different versions of a service deployed on the Istio mesh:
kubectl get pods -n turbo -l component=rhea
NAME                                READY   STATUS    RESTARTS   AGE
rhea-api-istio-1-58b957dd4b-cdn54   2/2     Running   0          46h
rhea-api-istio-2-5787d4ffd4-bfwwk   2/2     Running   0          46h
2. Another service deployed on the Istio mesh:
kubectl get pods -n saudagar | grep readonly
saudagar-readonly-7d75c5c7d6-zvhz9 2/2 Running 0 5d
I have a kube service defined like:
apiVersion: v1
kind: Service
metadata:
  name: rhea
  labels:
    component: rhea
  namespace: turbo
spec:
  selector:
    component: rhea
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
Destination rules:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rhea
spec:
  host: rhea
  subsets:
  - name: v1
    labels:
      app: rhea-api-istio-1
  - name: v2
    labels:
      app: rhea-api-istio-2
A virtual service like:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rhea
  namespace: turbo
spec:
  hosts:
  - rhea
  http:
  - route:
    - destination:
        host: rhea
        subset: v1
What I am trying to test is circuit breaking between rhea and saudagar, and traffic routing over the two versions of the service.
I want to test this from inside the same kube cluster, but I am not able to achieve it. If I want to access the rhea service from the saudagar service, what endpoint should I use so that I can see the traffic routing policy applied?

Istio can be used for controlling ingress traffic (from outside into the cluster), for controlling in-cluster traffic (between services inside the cluster) and for controlling egress traffic (from the services inside the cluster to services outside the cluster).
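For the in-cluster case in the question, the routing rules apply as soon as one sidecar-injected pod calls another through the regular Kubernetes service name, so no special endpoint is needed. A quick check under that assumption (the app container name and the presence of curl in it are guesses) could be:
# From the saudagar pod, call rhea through its cluster-internal DNS name;
# the sidecar applies the VirtualService/DestinationRule on the way out.
kubectl exec -n saudagar saudagar-readonly-7d75c5c7d6-zvhz9 -c saudagar-readonly -- \
  curl -s http://rhea.turbo.svc.cluster.local/
From another namespace, the shorter form http://rhea.turbo resolves to the same service.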

Related

How to avoid recreating ingress when recreate service on GKE?

When I delete a service and recreate it, I've noticed that the status of the ingress indicates Some backend services are in UNKNOWN state.
After some trial and error, it seems to be related to the name of the network endpoint group (NEG). The NEG tied to the new service has a different name, but the ingress keeps the old NEG as its backend service.
Then I found that things work again after I recreate the Ingress.
I'd like to avoid the downtime of recreating the ingress as much as possible.
Is there a way to avoid recreating the ingress when recreating services?
My Service
apiVersion: v1
kind: Service
metadata:
  name: client-service
  labels:
    app: client
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: client
My Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: static-ip-name
    networking.gke.io/managed-certificates: managed-certificate
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: client-service
          servicePort: 80
If you want to re-use the ingress when the service disappears, you can edit its configuration instead of deleting and recreating it.
To reconfigure the Ingress you will have to update it by editing the configuration, as specified in the official Kubernetes documentation.
To do this, you can perform the following steps:
Issue the command kubectl edit ingress <ingress-name> (here, kubectl edit ingress ingress)
Perform the necessary changes, such as updating the service configuration
Save the changes
kubectl will update the resource and trigger an update on the load balancer.
Verify the changes by executing kubectl describe ingress <ingress-name>
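If you prefer not to edit interactively, the same kind of change can be applied with kubectl patch; this is only a sketch against the Ingress shown in the question, and client-service-v2 is a hypothetical new backend name:
# Re-point the first path's backend, then check the result
kubectl patch ingress ingress --type=json -p='[
  {"op": "replace",
   "path": "/spec/rules/0/http/paths/0/backend/serviceName",
   "value": "client-service-v2"}
]'
kubectl describe ingress ingress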

AWS EKS, How To Hit Pod directly from browser?

I'm very new to kubernetes. I have spent the last week learning about Nodes, Pods, Clusters, Services, and Deployments.
With that, I'm trying to get more of an understanding of how Kubernetes networking works. I just want to expose a simple nginx webpage and hit it from my browser.
Our VPC is set up with Direct Connect, so I'm able to hit EC2 instances on their private IP addresses. I also set up the EKS cluster as private using the AWS UI for now. For testing purposes I have added my CIDR range as allowed on all TCP as an additional security group in the EKS cluster UI.
Here are my basic service and deployment definitions:
apiVersion: v1
kind: Service
metadata:
  name: testing-nodeport
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  type: NodePort
  selector:
    app: testing-app
  ports:
  - port: 80
    targetPort: testing-port
    protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-deployment
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  replicas: 1
  selector:
    matchLabels:
      infrastructure: fargate
      app: testing-app
  template:
    metadata:
      labels:
        infrastructure: fargate
        app: testing-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - name: testing-port
          containerPort: 80
I can see that everything is running correctly when I run:
kubectl get all -n default
However, when I try to hit the NodePort IP address on port 80 I can't load it from the browser.
I can hit the pod if I first set up a kubectl proxy at the following URL (as the proxy is started on port 8001):
http://localhost:8001/api/v1/namespaces/default/services/testing-nodeport:80/proxy/
I'm pretty much lost at this point. I don't know what I'm doing wrong or why I can't hit the basic nginx container outside of the kubectl proxy.
What if you use port forwarding? Something like this:
kubectl port-forward -n default service/testing-nodeport 3000:80
Forwarding from 127.0.0.1:3000 -> 80
Forwarding from [::1]:3000 -> 80
After this, you can access your K8S service from localhost:3000. More info here
Imagine that the Kubernetes cluster is like your AWS VPC: it has its own internal network with private IPs and connects all the Pods. Kubernetes only exposes things which you explicitly ask it to expose.
The service port 80 is available within the cluster, so one pod can talk to this service using service name:service port. But if you need access from outside, you need an ingress controller or a LoadBalancer. You can also use a NodePort for testing purposes. The node port will be something above 30000 (within the range 30000-32767).
You should be able to access nginx using node IP:nodeport. Here I assume you have a security group opening the node port.
Use this YAML. I updated the node port to 31000, so you can access nginx on node IP:31000 (see the quick check after the manifest). As mentioned, you cannot use 80 from outside, as that is only the in-cluster service port; if you need to use 80, then you need an ingress controller.
apiVersion: v1
kind: Service
metadata:
  name: testing-nodeport
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  type: NodePort
  selector:
    app: testing-app
  ports:
  - port: 80
    targetPort: testing-port
    protocol: TCP
    nodePort: 31000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-deployment
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  replicas: 1
  selector:
    matchLabels:
      infrastructure: fargate
      app: testing-app
  template:
    metadata:
      labels:
        infrastructure: fargate
        app: testing-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - name: testing-port
          containerPort: 80
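A quick way to verify the node port, assuming your security groups allow it and using whatever node IP kubectl reports:
kubectl get nodes -o wide          # note a node's INTERNAL-IP (or EXTERNAL-IP)
curl http://<node-ip>:31000        # the nginx welcome page should come back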
Okay, after 16+ hours of debugging this I finally figured out what's going on. On Fargate you can't set security groups per node like you can with managed node groups. I was setting the security group rules in the "Additional security groups" settings; however, Fargate apparently ignores those settings completely and ONLY uses the security group from the "Cluster security group" setting. So in the EKS UI I set the correct rules in the "Cluster security group", and I can now hit my pod directly on a Fargate instance.
Big takeaway from this: only the "Cluster security group" applies to Fargate nodes.
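For reference, the same change can be made from the CLI; the cluster name, CIDR, and port below are placeholders (use whatever port you are actually hitting on the pod):
# Look up the cluster security group that Fargate pods use
aws eks describe-cluster --name <cluster-name> \
  --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' --output text
# Allow your CIDR in on that port
aws ec2 authorize-security-group-ingress \
  --group-id <cluster-security-group-id> \
  --protocol tcp --port 80 --cidr <your-cidr>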

Sonar cannot be accessed via istio virtual service but can be accessed locally after port forwarding

I am trying to implement SonarQube in a Kubernetes cluster. The deployment is running properly and is also exposed via a VirtualService. I am able to open the UI via localhost:port/sonar, but I am not able to access it through my external IP. I understand that Sonar binds to localhost and does not allow access from outside the remote server. I am running this on GKE with a MySQL database. Here is my YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sonarqube
  namespace: sonar
  labels:
    service: sonarqube
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      name: sonarqube
      labels:
        name: sonarqube
    spec:
      terminationGracePeriodSeconds: 15
      initContainers:
      - name: volume-permission
        image: busybox
        command:
        - sh
        - -c
        - sysctl -w vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: sonarqube
        image: sonarqube:6.7
        resources:
          limits:
            memory: 4Gi
            cpu: 2
          requests:
            memory: 2Gi
            cpu: 1
        args:
        - -Dsonar.web.context=/sonar
        - -Dsonar.web.host=0.0.0.0
        env:
        - name: SONARQUBE_JDBC_USERNAME
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: SONARQUBE_JDBC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        - name: SONARQUBE_JDBC_URL
          value: jdbc:mysql://***.***.**.*:3306/sonar?useUnicode=true&characterEncoding=utf8
        ports:
        - containerPort: 9000
          name: sonarqube-port
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: sonarqube
    version: v1
  name: sonarqube
  namespace: sonar
spec:
  selector:
    name: sonarqube
  ports:
  - name: http
    port: 80
    targetPort: sonarqube-port
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube-internal
  namespace: sonar
spec:
  hosts:
  - sonarqube.staging.jeet11.internal
  - sonarqube
  gateways:
  - default/ilb-gateway
  - mesh
  http:
  - route:
    - destination:
        host: sonarqube
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube-external
  namespace: sonar
spec:
  hosts:
  - sonarqube.staging.jeet11.com
  gateways:
  - default/elb-gateway
  http:
  - route:
    - destination:
        host: sonarqube
---
The deployment completes successfully. My exposed service gives a public IP that has been mapped to the host URL, but I am unable to access the service at the host URL.
I need to change the mapping so that Sonar binds to the server IP, but I am unable to understand how to do that. I cannot bind it to my cluster IP, nor to my internal or external service IP.
What should I do? Please help!
I had the same issue recently and I managed to get this resolved today.
I hope the following solution will work for anyone facing the same issue!
Environment
Cloud Provider: Azure - AKS
This should work regardless of which provider you use.
Istio Version: 1.7.3
K8s Version: 1.16.10
Tools - Debugging
kubectl logs -n istio-system -l app=istiod
Shows logs from istiod and the events happening in the control plane.
istioctl analyze -n <namespace>
This generally gives you any warnings and errors for a given namespace.
Lets you know if things are misconfigured.
Kiali - istioctl dashboard kiali
See if you are getting inbound traffic.
Also shows you any misconfigurations.
Prometheus - istioctl dashboard prometheus
Query the metric istio_requests_total. This shows you the traffic going into the service.
If there's any misconfiguration, you will see the destination_app as unknown.
Issue
Unable to access sonarqube UI via external IP, but accessible via localhost (port-forward).
Unable to route traffic via Istio Ingressgateway.
Solution
Sonarqube Service Manifest
apiVersion: v1
kind: Service
metadata:
  name: sonarqube
  namespace: sonarqube
  labels:
    name: sonarqube
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 9000
    targetPort: 9000
  selector:
    app: sonarqube
status:
  loadBalancer: {}
Your targetPort is the container port. To avoid any confusion, just set the service port number to the same value as the service targetPort.
The port name is very important here. “Istio requires the service ports to follow the naming form of ‘protocol[-suffix]’, where the ‘-suffix’ part is optional” - KIA0601 - Port name must follow <protocol>[-suffix] form
Istio Gateway and VirtualService manifest for sonarqube
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sonarqube-gateway
  namespace: sonarqube
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 9000
      name: http
      protocol: HTTP
    hosts:
    - "XXXX.XXXX.com.au"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube
  namespace: sonarqube
spec:
  hosts:
  - "XXXX.XXXX.com.au"
  gateways:
  - sonarqube-gateway
  http:
  - route:
    - destination:
        host: sonarqube
        port:
          number: 9000
Gateway protocol must be set to HTTP.
The Gateway server port and the VirtualService destination port are the same here. If you have a different app Service port, then your VirtualService destination port number should match the app Service port. The Gateway server port should match the app Service targetPort.
Now comes the fun bit: the hosts. If you want to access the service from outside the cluster, then you need your hostname (whatever hostname you want to map to the SonarQube server) as a DNS A record pointing to the external public IP address of the istio-ingressgateway.
To get the EXTERNAL-IP address of the ingressgateway, run kubectl -n istio-system get service istio-ingressgateway.
If you do a simple nslookup (run nslookup <hostname>), the IP address you get must match the IP address that is assigned to the istio-ingressgateway service.
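One way to compare the two, assuming the load balancer exposes an IP (on some clouds the address is under .hostname instead of .ip):
# External address assigned to the ingressgateway
kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# Should resolve to the same address
nslookup <hostname>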
Expose a new port in the ingressgateway
Note that your sonarqube gateway port is a new port that you are introducing to Kubernetes and you’re telling the cluster to listen on that port. But your load balancer doesn’t know about this port. Therefore, you need to open the specified gateway port on your kubernetes external load balancer. Ref - Info
You don’t need to manually change your load balancer service. You just need to update the ingress gateway to include the new port, which will update the load balancer automatically.
You can identify whether the port is causing issues by running istioctl analyze -n sonarqube. You should get the following warning:
Warn [IST0104] (Gateway sonarqube-gateway.sonarqube) The gateway refers to a port that is not exposed on the workload (pod selector istio=ingressgateway; port 9000)
Error: Analyzers found issues when analyzing namespace: sonarqube. See https://istio.io/docs/reference/config/analysis for more information about causes and resolutions.
You should also see the corresponding error in the control plane; run kubectl logs -n istio-system -l app=istiod.
At this point you need to update the Istio ingressgateway service to expose the new port. Run kubectl edit svc istio-ingressgateway -n istio-system and add a new entry to the ports list (a sketch follows).
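The snippet from the original answer is not reproduced above; an entry of roughly this shape is what is meant (the port name is illustrative, and it assumes the Gateway listens on 9000 so targetPort matches):
# added under spec.ports of the istio-ingressgateway Service
- name: http-sonarqube   # illustrative name; keep the protocol prefix
  port: 9000
  targetPort: 9000
  protocol: TCP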
Bypass creating a new port
In the previous section you saw how to expose a new port. This is optional, depending on your use case.
In this section you will see how to use a port that is already exposed.
If you look at the service of the istio-ingressgateway, you can see that there are default ports exposed. Here we are going to use port 80.
Your setup will then go through port 80 of the ingressgateway instead of a new port; a sketch follows below.
To avoid specifying the port along with your hostname, just add a match on a URI prefix, as shown in the VirtualService part of the sketch.
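The referenced manifest is not included in the original; a sketch of what the port-80 variant could look like (the hostname and the URI prefix are illustrative) is:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sonarqube-gateway
  namespace: sonarqube
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80          # already exposed on the default ingressgateway
      name: http
      protocol: HTTP
    hosts:
    - "XXXX.XXXX.com.au"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube
  namespace: sonarqube
spec:
  hosts:
  - "XXXX.XXXX.com.au"
  gateways:
  - sonarqube-gateway
  http:
  - match:
    - uri:
        prefix: /         # use /sonar here if you kept sonar.web.context=/sonar
    route:
    - destination:
        host: sonarqube
        port:
          number: 9000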
Time for testing
If everything works up to this point as expected, then you are good to go.
During testing I made one mistake by not specifying the port. If you get a 404 status, that is still a good thing: in this way you can verify which server is responding. If you set things up correctly, it should be the istio-envoy server, not nginx.
Without specifying the port: this will only work if you added the match on the URI prefix.
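A quick check from outside the cluster (the hostname is a placeholder; append /sonar if you kept sonar.web.context=/sonar) could be:
curl -I http://XXXX.XXXX.com.au/
# Inspect the "server:" response header: it should say istio-envoy, not nginx.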
Do not pass the arguments; just try running without them once. That is working for me.
This is my deployment file, I hope it is helpful:
apiVersion: v1
kind: Service
metadata:
  name: sonarqube-service
spec:
  selector:
    app: sonarqube
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: sonarqube
  name: sonarqube
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
      - name: sonarqube
        image: sonarqube:7.1
        resources:
          requests:
            memory: "1200Mi"
            cpu: .10
          limits:
            memory: "2500Mi"
            cpu: .50
        volumeMounts:
        - mountPath: "/opt/sonarqube/data/"
          name: sonar-data
        - mountPath: "/opt/sonarqube/extensions/"
          name: sonar-extensions
        env:
        - name: "SONARQUBE_JDBC_USERNAME"
          value: "root" # Put your db username
        - name: "SONARQUBE_JDBC_URL"
          value: "jdbc:mysql://192.168.112.4:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true" # DB URL
        - name: "SONARQUBE_JDBC_PASSWORD"
          value: password
        ports:
        - containerPort: 9000
          protocol: TCP
      volumes:
      - name: sonar-data
        persistentVolumeClaim:
          claimName: sonar-data
      - name: sonar-extensions
        persistentVolumeClaim:
          claimName: sonar-extensions

Istio-injected DB apps: after making their service type NodePort, the NodePort cannot be accessed

I am using Istio version 1.0.2 with istio-demo-auth.yaml. I have one mssql and one activemq deployed in the same namespace as other applications, and both were injected by istioctl. The applications can connect to those two services inside the cluster. I then changed those two services' type to NodePort, which succeeded, but I cannot access those node ports (52433, 51618, or 58161).
kubectl get svc -n $namespace
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                          AGE
amq-master-01   NodePort   10.254.176.151   <none>        61618:51618/TCP,8161:58161/TCP   4h
mssql-master    NodePort   10.254.209.36    <none>        2433:52433/TCP                   33m
kubectl get deployment -n $namespace
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
activemq       1         1         1            1           4h
mssql-master   1         1         1            1           44m
Then I tried to use a Gateway and VirtualService with the ingressgateway TCP port 31400. It works, as below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tcp-gateway
  namespace: multitenancy
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mssql-tcp
  namespace: multitenancy
spec:
  gateways:
  - tcp-gateway
  hosts:
  - "*"
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: mssql-master
        port:
          number: 2433
My questions are:
1. How do I configure another connection, for 61618 or other TCP connections? Currently I can only use 31400, for one service (mssql, port 2433).
2. Why is the NodePort not working after I inject those applications into Istio, and how could it be made to work?
Thanks.
Referring to the documentation:
Type NodePort
If you set the type field to NodePort, the Kubernetes master will allocate a port from a range specified by --service-node-port-range flag (default: 30000-32767), and each Node will proxy that port (the same port number on every Node) into your Service. That port will be reported in your Service’s .spec.ports[*].nodePort field.
Just update the configuration of all masters and you will be able to allocate any port.
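On a kubeadm-style control plane, the relevant kube-apiserver flag is --service-node-port-range; the file path and the range below are only an example (this range would cover ports like 52433 or 58161):
# /etc/kubernetes/manifests/kube-apiserver.yaml, under the kube-apiserver command:
- --service-node-port-range=30000-60000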
Regarding the second question:
I suggest you create an issue on GitHub, because it looks like a bug; there are no restrictions on using a nodePort in the documentation.

How to integrate Kubernetes with existing AWS ALB?

I want to use an existing AWS ALB for my Kubernetes setup, i.e. I don't want the alb-ingress-controller to create or update any existing AWS resources (target groups, roles, etc.).
How can I make the ALB communicate with the Kubernetes cluster, passing requests on to existing services and getting the responses back to the ALB to display in the front end?
I tried this, but it creates a new ALB for a new ingress resource. I want to use the existing one.
You basically have to open a node port on the instances where the Kubernetes Pods are running. Then you need to let the ALB point to those instances. There are two ways of configuring this: either via Pods or via Services.
To configure it via a Service you need to specify .spec.ports[].nodePort. In the default setup the port needs to be between 30000 and 32767. This port gets opened on every node and will be redirected to the specified Pods (which might be on any other node). This has the downside that there is another hop, which can also cost money in a multi-AZ setup. An example Service could look like this (see the CLI sketch after it for pointing the existing ALB at the node port):
---
apiVersion: v1
kind: Service
metadata:
  name: my-frontend
  labels:
    app: my-frontend
spec:
  type: NodePort
  selector:
    app: my-frontend
  ports:
  - port: 8080
    nodePort: 30082
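To make an existing ALB send traffic to that node port, the worker instances (or their Auto Scaling group) also have to be registered in the ALB's target group on that port; a sketch with placeholder ARNs and instance IDs:
aws elbv2 register-targets \
  --target-group-arn <existing-target-group-arn> \
  --targets Id=<instance-id-1>,Port=30082 Id=<instance-id-2>,Port=30082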
To configure it via a Pod you need to specify .spec.containers[].ports[].hostPort. This can be any port number, but it has to be free on the node where the Pod gets scheduled. This means that there can only be one such Pod per node and it might conflict with ports from other applications. This has the downside that not all instances will be healthy from an ALB point of view, since only nodes with that Pod accept traffic. You could add a sidecar container which registers the current node on the ALB, but this would mean additional complexity. An example could look like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-frontend
  labels:
    app: my-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-frontend
  template:
    metadata:
      name: my-frontend
      labels:
        app: my-frontend
    spec:
      containers:
      - name: nginx
        image: "nginx"
        ports:
        - containerPort: 80
          hostPort: 8080