I created a NATS cluster without Istio sidecar injection:
apiVersion: nats.io/v1alpha2
kind: NatsCluster
metadata:
  name: nats
spec:
  size: 2
  pod:
    annotations:
      sidecar.istio.io/inject: "false"
  version: "2.0.0"
Now I have an application with an Istio sidecar that connects to the NATS cluster above, but it seems Istio severs the connection. The NATS client in my application gets closed, and the NATS server logs: "Client parser ERROR, state=0 ..."
Is the reason that there is no mTLS between the NATS cluster and the sidecar? How can I fix this issue?
This is for Istio 1.8.
The NATS and NATS Streaming YAML can be found at:
https://github.com/nats-io/nats-operator
https://github.com/nats-io/nats-streaming-operator
If you don't connect via NodePort from outside the Kubernetes cluster, you can just use the default Istio settings and inject the sidecar into the NATS pods. It works.
But if you want to connect to NATS via NodePort from outside, you need to disable mTLS.
My setup uses default mTLS, with sidecars injected into the NATS and NATS Streaming pods.
Note that NATS only accepts and sends plain-text traffic.
Add the following PeerAuthentication resources:
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
name: "nats"
spec:
selector:
matchLabels:
app: nats
mtls:
mode: DISABLE
---
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
name: "nats-streaming"
spec:
selector:
matchLabels:
app: nats-streaming
mtls:
mode: DISABLE
Add the following DestinationRules:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nats
spec:
  host: "nats-server.acm.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nats-server-nodeport
spec:
  host: "nats-server-nodeport.acm.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nats-server-mgmt
spec:
  host: "nats-server-mgmt.acm.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
Related
I have an nginx container that handles html content & traffic routing via a VirtualService.
I have a separate maintenance nginx container I want to display (when I'm doing maintenance), and on this occasion I want all traffic to be routed to this maintenance container rather than the normal one stated in the first paragraph. I don't really want to have to tweak/patch the original traffic routes, so I'm looking for a way to have some form of override traffic-routing rule.
From what I have read, the order of rules is based on the creation date so that didn't really help me.
So if anyone has any ideas how I can force all traffic to be routed to a specific "maintenance" service I would really appreciate your thoughts.
I would recommend setting a version label and working with that.
First, create a DestinationRule to define your different versions and how they are identified (by labels).
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: nginx-versions
spec:
  host: nginx.default.svc.cluster.local
  subsets:
  - name: maintenance
    labels:
      version: maintenance
  - name: v1
    labels:
      version: v1
Next, set up your route in the VirtualService to point to v1.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route
spec:
  hosts:
  - example.com
  gateways:
  - mygateway
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
Now you need one Service and the two Deployments.
The selector in the Service needs to match both Deployments. In a plain Kubernetes setup this would mean that traffic is routed across all workloads of both Deployments, but because of Istio and the version setup, traffic is only sent to the currently configured version.
The Deployment with the maintenance version needs to be labeled with version: maintenance, and the actual version needs to be labeled with version: v1.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:                # a port is required for the Service; 80 assumed for nginx
  - port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-maintenance
spec:
  replicas: 2
  selector:             # required for apps/v1 Deployments
    matchLabels:
      app: nginx
      version: maintenance
  template:
    metadata:
      labels:
        app: nginx
        version: maintenance
    spec:
      containers:
      - image: nginx-maintenance
        [...]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1
spec:
  replicas: 5
  selector:             # required for apps/v1 Deployments
    matchLabels:
      app: nginx
      version: v1
  template:
    metadata:
      labels:
        app: nginx
        version: v1
    spec:
      containers:
      - image: nginx-v1
        [...]
If you want the traffic to be routed to the maintenance version, just change the subset statement in the VirtualService and reapply it.
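For illustration, only the subset in the route changes; a sketch of the updated VirtualService (same host and gateway as above):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route
spec:
  hosts:
  - example.com
  gateways:
  - mygateway
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: maintenance   # switched from v1 to maintenance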
If you want in-cluster traffic to always be sent to your v1 version for some reason, you need another VirtualService that uses the mesh gateway. Otherwise cluster-internal traffic will be divided between all workloads (v1 and maintenance).
Alternatively you could add the mesh gateway and the host to the VirtualService from above, but then cluster-internal traffic will always behave like external traffic.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route-in-cluster
spec:
  hosts:
  - nginx.default.svc.cluster.local
  gateways:
  - mesh
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
Furthermore, you could even use more versions and test updates by sending only a portion of your traffic to the new version.
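As a rough sketch of that idea (it assumes you have added a v2 subset to the DestinationRule above; the 90/10 split is just an example), a weighted route looks like this:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-route
spec:
  hosts:
  - example.com
  gateways:
  - mygateway
  http:
  - name: nginx-route
    match:
    - uri:
        prefix: "/nginx"
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
      weight: 90              # keep most traffic on the current version
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v2            # hypothetical new-version subset
      weight: 10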
To get a better understanding and some more ideas about versioning with Istio, please refer to this article (it's actually quite old, but the concept is still relevant).
I am new to Kubernetes, so apologies in advance for any silly questions and mistakes. I am trying to set up external access through an Ingress for ArgoCD. My setup is an AWS EKS cluster. I have set up the ALB following the guide here. I have also set up the external-dns service as described here. I also followed the verification steps in that guide and was able to confirm that the DNS record got created, and I was able to access the foo service.
For ArgoCD I installed the manifests via:
kubectl create namespace argocd
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml -n argocd
The ArgoCD docs mention adding a Service to split up HTTP and gRPC, plus an Ingress setup, here. I followed that and installed those as well:
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
    external-dns.alpha.kubernetes.io/hostname: argocd.<mydomain.com>
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
  - name: "443"
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1 # Use extensions/v1beta1 for Kubernetes 1.18 and older
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
  name: argocd
  namespace: argocd
spec:
  rules:
  - host: argocd.<mydomain.com>
    http:
      paths:
      - backend:
          service:
            name: argogrpc
            port:
              number: 443
        pathType: ImplementationSpecific
      - backend:
          service:
            name: argocd-server
            port:
              number: 443
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - argocd.<mydomain.com>
The definitions are applied successfully, but I don't see the DNS record created, nor any external IP listed. Am I missing any steps, or is there any misconfiguration here? Thanks in advance!
The Service type needs to be NodePort.
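As far as I know, the AWS load balancer controller's default target type (instance) forwards traffic to node ports, so a ClusterIP Service cannot back the ALB unless you switch to alb.ingress.kubernetes.io/target-type: ip. A minimal sketch of the argogrpc Service from the question with only the type changed:
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
    external-dns.alpha.kubernetes.io/hostname: argocd.<mydomain.com>
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
  - name: "443"
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort   # changed from ClusterIP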
I have a Kubernetes app with the Istio sidecar set up. Is it possible to configure Istio mutual TLS for a subset of APIs and simple TLS for the others?
As I mentioned in the comments, you should be able to do that with DestinationRules, as you can use the TLS settings mode to change the mTLS behavior for specific hosts.
Take a look at the examples below from the documentation.
For example, the following rule configures a client to use mutual TLS for connections to an upstream database cluster.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: db-mtls
spec:
  host: mydbserver.prod.svc.cluster.local
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/myclientcert.pem
      privateKey: /etc/certs/client_private_key.pem
      caCertificates: /etc/certs/rootcacerts.pem
The following rule configures a client to use TLS when talking to a foreign service whose domain matches *.foo.com.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tls-foo
spec:
  host: "*.foo.com"
  trafficPolicy:
    tls:
      mode: SIMPLE
The following rule configures a client to use Istio mutual TLS when talking to rating services.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings-istio-mtls
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
I am new to Istio and was trying to set it up.
I have a question though: is Istio only meant for traffic coming into the Kubernetes cluster via an ingress, or can it be used for communication between services running inside the same cluster?
Sorry if it is a noob question, but I am unable to find this anywhere else. Any pointer would be greatly appreciated.
Here is what I have:
1. 2 different versions of a service deployed on the istio mesh:
kubectl get pods -n turbo -l component=rhea
NAME READY STATUS RESTARTS AGE
rhea-api-istio-1-58b957dd4b-cdn54 2/2 Running 0 46h
rhea-api-istio-2-5787d4ffd4-bfwwk 2/2 Running 0 46h
2. Another service deployed on the istio mesh:
kubectl get pods -n saudagar | grep readonly
saudagar-readonly-7d75c5c7d6-zvhz9 2/2 Running 0 5d
I have a kube service defined like:
apiVersion: v1
kind: Service
metadata:
  name: rhea
  labels:
    component: rhea
  namespace: turbo
spec:
  selector:
    component: rhea
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
Destination rules:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rhea
spec:
  host: rhea
  subsets:
  - name: v1
    labels:
      app: rhea-api-istio-1
  - name: v2
    labels:
      app: rhea-api-istio-2
A virtual service like:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rhea
  namespace: turbo
spec:
  hosts:
  - rhea
  http:
  - route:
    - destination:
        host: rhea
        subset: v1
What I am trying to test is circuit breaking between rhea and saudagar, and traffic routing across the two versions of the service.
I want to test this from inside the same Kubernetes cluster, but I am not able to achieve it. If I want to access the rhea service from the saudagar service, what endpoint should I use so that I can see the traffic routing policy applied?
Istio can be used for controlling ingress traffic (from outside into the cluster), for controlling in-cluster traffic (between services inside the cluster) and for controlling egress traffic (from the services inside the cluster to services outside the cluster).
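For the concrete question about the endpoint: from inside a saudagar pod you call rhea through its regular Kubernetes Service DNS name, and the sidecar then applies the VirtualService and DestinationRule shown above. A rough sketch using the pod from your output (it assumes curl is available in the image):
kubectl exec -n saudagar saudagar-readonly-7d75c5c7d6-zvhz9 -- \
  curl -s http://rhea.turbo.svc.cluster.local/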
I'm a newbie in Kubernetes. I created a Kubernetes cluster on Amazon EKS.
I'm trying to set up multiple Kubernetes services to run multiple ASP.NET applications in one cluster, but I'm facing a weird problem.
Everything runs fine when there is only one service. But whenever I create a second service for a second application, a conflict appears: sometimes the service 1 URL loads the service 2 application and sometimes it loads the service 1 application, and the same happens with the service 2 URL on a simple page reload.
I've tried both an Amazon Classic ELB (with the LoadBalancer service type) and the NGINX ingress controller (with the ClusterIP service type). The error occurs with both approaches.
Both services and deployments are running on port 80. I even tried different ports for the services and deployments to avoid a port conflict, but the problem remains.
I've checked the deployment and service status, and the pod logs; everything looks fine, with no errors or warnings at all.
Please guide me on how I can fix this error.
Here are the YAML files of both services for the NGINX ingress:
# Service 1 for deployment 1 (container port: 1120)
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-12-05T14:54:21Z
  labels:
    run: load-balancer-example
  name: app1-svc
  namespace: default
  resourceVersion: "463919"
  selfLink: /api/v1/namespaces/default/services/app1-svc
  uid: a*****-****-****-****-**********c
spec:
  clusterIP: 10.100.102.224
  ports:
  - port: 1120
    protocol: TCP
    targetPort: 1120
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
2nd Service
# Service 2 for deployment 2 (container port: 80)
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-12-05T10:13:33Z
  labels:
    run: load-balancer-example
  name: app2-svc
  namespace: default
  resourceVersion: "437188"
  selfLink: /api/v1/namespaces/default/services/app2-svc
  uid: 6******-****-****-****-************0
spec:
  clusterIP: 10.100.65.46
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Thanks
The problem is with the selector in the Services. They both have the same selector, and that's why you are facing this problem: they both point to the same set of Pods.
The set of Pods targeted by a Service is (usually) determined by a Label Selector.
Since deployment 1 and deployment 2 are different (I think), you should use different selectors for them, then expose the deployments. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nightfury1204/hello_server
        args:
        - serve
        ports:
        - containerPort: 8080
The two Deployments above, nginx-deployment and hello-deployment, have different selectors, so exposing them via Services will not collide with each other.
When you use kubectl expose deployment app1-deployment --type=ClusterIP --name=app1-svc to expose a deployment, the Service will have the same selector as the Deployment.
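For example, a sketch of the two Services matching the example Deployments above (the Service names are illustrative; the ports follow the containerPorts shown):
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: ClusterIP
  selector:
    app: nginx          # only matches pods of nginx-deployment
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  type: ClusterIP
  selector:
    app: hello          # only matches pods of hello-deployment
  ports:
  - port: 80
    targetPort: 8080    # hello_server listens on 8080 in the example above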