VPN between two worker nodes - Istio

I have three nodes in my cluster: the master and two workers. I want to know if it's possible with Istio to redirect all the traffic coming from one worker node directly to the other worker node (but not the Kubernetes traffic).
Thanks for the help
Warok
Edit
Apparently, it's possible to route the traffic of one specific user to a specific version: https://istio.io/docs/tasks/traffic-management/request-routing/#route-based-on-user-identity. But the question is still open.
Edit 2
Assuming my nodes are named node1 and node2, is the following YAML file right?
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: node1
  ...
spec:
  hosts:
  - node1
  tcp:
  - match:
    - port: 27017  # for now, I will just specify this port
    route:
    - destination:
        host: node2

I want to know if it's possible with Istio to redirect all the traffic coming from one worker node directly to the other worker node (but not the Kubernetes traffic).
Quick answer: no.
Istio works as a sidecar container that is injected into each pod. You can read more in What is Istio?:
Istio lets you connect, secure, control, and observe services.
...
It is also a platform, including APIs that let it integrate into any logging platform, or telemetry or policy system. Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.
...
You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices
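For context, that sidecar proxy is injected per pod, typically by labelling the namespace. A minimal sketch of enabling injection (the namespace name here is an assumption, not from your cluster):

apiVersion: v1
kind: Namespace
metadata:
  name: my-app                   # hypothetical namespace
  labels:
    istio-injection: enabled     # tells Istio to inject the Envoy sidecar into new pods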
I also recommend reading What is Istio? The Kubernetes service mesh explained.
It's also important to know why you would want to redirect traffic from one node to the other.
Without knowing that, I cannot advise any solution.

Related

How to communicate securely to a k8s service via istio?

I can communicate to another service in the same namespace via:
curl http://myservice1:8080/actuator/info
inside the pod.
The application is not configured with TLS. I am curious if I can reach that pod via a virtual service so that I can utilize this Istio feature:
curl https://myservice1:8080/actuator/info
We have an Istio VirtualService and Gateway in place. External access to the pod is managed by them and is working properly. We just want to reach another pod via HTTPS, if possible, without having to reconfigure the application.
How to communicate securely to a k8s service via istio?
Answering the question in the title - there are many possibilities, but you should start with Understanding TLS Configuration:
One of Istio's most important features is the ability to lock down and secure network traffic to, from, and within the mesh. However, configuring TLS settings can be confusing and a common source of misconfiguration. This document attempts to explain the various connections involved when sending requests in Istio and how their associated TLS settings are configured. Refer to TLS configuration mistakes for a summary of some of the most common TLS configuration problems.
There are many different ways to secure your connection. It all depends on what exactly you need and what you set up.
We have an Istio VirtualService and Gateway in place; external access to the pod is managed by them and working properly. We just want to reach another pod via HTTPS if possible without having to reconfigure the application.
As for the VirtualService and Gateway, you will find an example configuration in this article. You can find guides for a single host and for multiple hosts.
We just want to reach another pod via HTTPS if possible without having to reconfigure the application.
Here you will most likely be able to apply the outbound configuration:
While the inbound side configures what type of traffic to expect and how to process it, the outbound configuration controls what type of traffic the gateway will send. This is configured by the TLS settings in a DestinationRule, just like external outbound traffic from sidecars, or auto mTLS by default.
The only difference is that you should be careful to consider the Gateway settings when configuring this. For example, if the Gateway is configured with TLS PASSTHROUGH while the DestinationRule configures TLS origination, you will end up with double encryption. This works, but is often not the desired behavior.
A VirtualService bound to the gateway needs care as well to ensure it is consistent with the Gateway definition.
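As a rough illustration of the DestinationRule-based TLS settings mentioned above, Istio's automatic mutual TLS between sidecars can be enforced like this (the service name, namespace, and port are assumptions, not taken from your setup):

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myservice1-mtls        # hypothetical name
  namespace: default           # assumed namespace
spec:
  host: myservice1.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL       # sidecars originate mutual TLS; the app keeps speaking plain HTTP

With something like this in place, the application still calls http://myservice1:8080 while the sidecars handle the encryption between pods.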

Istio configuration on GKE

I have some basic questions about Istio. I installed Istio for my Tyk API gateway. Then I found that simply installing Istio will cause all traffic between the Tyk pods to be blocked. Is this the default behaviour for Istio? The Tyk gateway cannot communicate with the Tyk dashboard.
When I rebuild my deployment without Istio, everything works fine.
I have also read that Istio can be configured with virtual services to perform traffic routing. Is this what I need to do for every default installation of Istio? Meaning, if I don't create any virtual services, will Istio block all traffic by default?
Secondly, I understand a virtual service is created as a YAML file applied as a CRD. The host name defined in the virtual service rules - in a default Kubernetes cluster implementation on Google Cloud, how do I find out the host name of my application?
Lastly, if I install Tyk first and later install Istio, and I have created the necessary label in Tyk's namespace for the proxy to be injected, can I just perform a rolling upgrade of my Tyk pods to have Istio start the injection?
For example, I have these labels in my Tyk dashboard service. Do I use the value called "app" in my virtual service YAML?
labels:
  app: dashboard-svc-tyk-pro
  app.kubernetes.io/managed-by: Helm
  chart: tyk-pro-0.8.1
  heritage: Helm
  release: tyk-pro
Sorry for all the basic questions!
For the question about the Tyk gateway not being able to communicate with the Tyk dashboard:
I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, and thus the connection can't be established.
Istio runs an init container that configures the pod's route table so all traffic is routed through the sidecar. So if the sidecar isn't running yet and the pod tries to connect to the database, no connection can be established. (Example case: Application running in Kubernetes cron job does not connect to database in same Kubernetes cluster.)
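One way to mitigate this ordering problem (assuming Istio 1.7 or newer) is to make the application container wait until the sidecar is ready via a pod annotation. The relevant fragment of the Deployment's pod template would look roughly like this:

spec:
  template:
    metadata:
      annotations:
        # delay the app container start until the Envoy sidecar is ready
        proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'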
For the question on Virtual Services:
Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific real destination within the mesh.
By default, Istio configures the Envoy proxies to passthrough requests to unknown services. However, you can’t use Istio features to control the traffic to destinations that aren’t registered in the mesh.
For the question on the hostname, refer to this documentation:
The hosts field lists the virtual service’s hosts - in other words, the user-addressable destination or destinations that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.
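In practice this means the hosts field should reference the Kubernetes Service name (or its cluster DNS name), not the pod's app label. A minimal sketch, assuming the dashboard Service is named dashboard-svc-tyk-pro in a tyk namespace and listens on port 3000 (all of these are assumptions about your setup):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tyk-dashboard        # hypothetical name
  namespace: tyk             # assumed namespace
spec:
  hosts:
  - dashboard-svc-tyk-pro.tyk.svc.cluster.local   # the Service DNS name, not the pod label
  http:
  - route:
    - destination:
        host: dashboard-svc-tyk-pro.tyk.svc.cluster.local
        port:
          number: 3000       # assumed Service port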
For adding Istio on GKE to an existing cluster, please refer to this documentation:
If you want to update a cluster with the add-on, you may need to first resize your cluster to ensure that you have enough resources for Istio. As when creating a new cluster, we suggest at least a 4 node cluster with the 2 vCPU machine type. If you have an existing application on the cluster, you can find out how to migrate it so it's managed by Istio as mentioned in the Istio documentation.
You can uninstall the add-on following this document, which includes shifting traffic away from the Istio ingress gateway. Please take a look at this doc for more details on installing and uninstalling Istio on GKE.
Also adding this document for installing Istio on GKE, which also covers installing it on an existing cluster to quickly evaluate Istio.

Application HA in k8s

I'm trying to make my app HA, so I created the following:
3 replicas
PDB
liveness and readiness probes
pod anti-affinity
Is there anything else that I'm missing?
This is the anti-affinity config:
...
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app.kubernetes.io/name: ten
      topologyKey: kubernetes.io/hostname
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: tan
        topologyKey: topology.kubernetes.io/zone
Is there anything I am missing?
Highly available... I think these are the key points for an application to be HA:
Never launch your app directly from a Pod - it won't survive a crash of a node. Even for single-pod applications use ReplicaSet or Deployment objects, as they manage pods across the whole cluster and maintain a specified number of instances (even if it's only one).
Use affinity configuration with custom rules to spread your pods based on your environment's architecture. A workload running in multiple instances spread across multiple nodes provides a second level of resilience to the app.
Define a livenessProbe for each container. Use a proper method: avoid ExecAction when your container can process HTTP requests. Remember to set a proper initialDelaySeconds parameter to give your app some time to initialize (especially for ones based on the JVM, like Spring Boot - they are slow to start their HTTP endpoints).
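A minimal sketch of such probes for an HTTP service (the paths, port, and delays are assumptions you would tune for your app):

livenessProbe:
  httpGet:
    path: /actuator/health/liveness    # assumed health endpoint
    port: 8080                         # assumed container port
  initialDelaySeconds: 60              # give a JVM-based app time to start
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health/readiness   # assumed readiness endpoint
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 5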
You seem to be following all these points, so you should be good.
However, if feasible, I would recommend trying to deploy the app on multiple clusters, or deploying across multiple data centres and running in active-active mode. It can help add more nines to your availability.
Resource limits
You also need to add resource requests and limits to your workloads; otherwise CronJobs or other less important workloads may impact your business-critical workloads.
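For example, a sketch of per-container requests and limits (the values are placeholders, not recommendations):

resources:
  requests:
    cpu: 250m        # guaranteed CPU share
    memory: 256Mi    # guaranteed memory
  limits:
    cpu: 500m        # hard CPU cap
    memory: 512Mi    # hard memory cap; exceeding it gets the container OOM-killed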
HPA - pod autoscaling
There is also a chance that all three pods get killed by the readiness & liveness probes while the workload is under heavy traffic and the application cannot respond to the probes in time. In that case I would suggest you also implement an HPA.
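A minimal HPA sketch, assuming the workload is a Deployment named ten and you scale on CPU utilization (both are assumptions):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ten                      # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ten                    # assumed Deployment name
  minReplicas: 3                 # keep at least the 3 replicas you already run
  maxReplicas: 6
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above 70% average CPU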
HA can be achieved by using multiple replicas; Kubernetes provides this feature precisely for HA. Further, the Service object in Kubernetes load-balances the traffic to one of the available replicas based on the liveness and readiness probes, which are responsible for identifying the pod as healthy and ready to receive requests, respectively.
Please refer to https://kubernetes.io/docs/concepts/services-networking/service/ and https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/.
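For reference, a minimal Service sketch that load-balances across the ready replicas (the name, selector, and ports are assumptions based on the labels in your anti-affinity config):

apiVersion: v1
kind: Service
metadata:
  name: ten                       # hypothetical name
spec:
  selector:
    app.kubernetes.io/name: ten   # matches the pods' label
  ports:
  - port: 80                      # assumed Service port
    targetPort: 8080              # assumed container port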

How to expose tcp service in Kubernetes only for certain ip addresses?

Nginx ingress provides a way to expose a TCP or UDP service: all you need is a public NLB.
However, this way the TCP service will be exposed publicly: the NLB does not support security groups or ACLs, and nginx-ingress does not have any way to filter traffic while proxying TCP or UDP.
The only solution that comes to my mind is an internal load balancer and a separate non-k8s instance with HAProxy or iptables, where I'll actually have restrictions based on source IP, and then forward/proxy requests to the internal NLB.
Maybe there are other ways to solve this?
Do not use nginx-ingress for this. To get the real IP inside nginx-ingress you have to set controller.service.externalTrafficPolicy: Local, which in turn changes the way the nginx-ingress service is exposed - making it local to the nodes. See https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip. This in turn causes your nginx-ingress LoadBalancer to have unhealthy hosts, which will create noise in your monitoring (as opposed to NodePort, where every node exposes the same port and is healthy). Unless you run nginx-ingress as a DaemonSet or use other hacks, e.g. limit which nodes are added as backends (mind scheduling, scaling), or move nginx-ingress to a separate set of nodes/subnet - IMO each of these is a lot of headache for such a simple problem. More on this problem: https://elsesiy.com/blog/kubernetes-client-source-ip-dilemma
Use a plain Service of type: LoadBalancer (classic ELB), which supports:
Source ranges: https://aws.amazon.com/premiumsupport/knowledge-center/eks-cidr-ip-address-loadbalancer/
service.beta.kubernetes.io/aws-load-balancer-extra-security-groups annotation in case you want to manage the source ranges from the outside.
In this case your traffic flows like World -> ELB -> NodePort -> Service -> Pod, without Ingress.
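A rough sketch of such a Service, restricting access by source CIDR (the name, port, selector, CIDR, and security group ID are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service                     # hypothetical name
  annotations:
    # optional: manage allowed sources via an extra security group instead
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: sg-0123456789abcdef0
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 203.0.113.0/24                         # allowed client CIDR (placeholder)
  selector:
    app: my-tcp-app                        # assumed pod label
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP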
You can use the whitelist-source-range annotation for that. We've been using it successfully for a few use cases and it does the job well.
EDIT: I spoke too soon. Rereading your question and understanding your exact use case brought me to this issue, which clearly states these services cannot be whitelisted, and suggests solving this at the firewall level.

Set static response from Istio Ingress Gateway

How do you set a static 200 response in Istio's Ingress Gateway?
We have a situation where we need an endpoint to return a small bit of static content (a bootstrap URL). We could even put it in a header. Can Istio host something like that or do we need to run a pod for no other reason than to return a single word?
Specifically I am looking for a solution that returns 200 via Istio configuration, not a pod that Istio routes to (which is quite a common example and available elsewhere).
You have to do it manually by creating a VirtualService pointing to a specific Service connected to a pod.
Of course, first you have to create the pod and then attach a Service to it, even if your application will only return a single word.
Istio Gateways are responsible for opening ports on the relevant Istio gateway pods and receiving traffic for hosts. That's it.
The VirtualService: Istio VirtualServices are what get "attached" to Gateways and are responsible for defining the routes the gateway should implement. You can have multiple VirtualServices attached to Gateways, but not for the same domain.
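A minimal sketch of that setup, assuming a Service named bootstrap-svc on port 8080 serves the static content and the default istio-ingressgateway is used (the host, names, and port are assumptions):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bootstrap-gateway            # hypothetical name
spec:
  selector:
    istio: ingressgateway            # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "bootstrap.example.com"        # assumed host
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bootstrap                    # hypothetical name
spec:
  hosts:
  - "bootstrap.example.com"
  gateways:
  - bootstrap-gateway
  http:
  - route:
    - destination:
        host: bootstrap-svc          # the Service in front of the pod returning the static content
        port:
          number: 8080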