How to expose a TCP service in Kubernetes only for certain IP addresses?

Nginx ingress provides a way to expose a TCP or UDP service: all you need is a public NLB.
However, this exposes the TCP service publicly: the NLB supports neither security groups nor ACLs, and nginx-ingress has no way to filter traffic while proxying TCP or UDP.
The only solution that comes to my mind is an internal load balancer plus a separate non-k8s instance running haproxy or iptables, where I would actually enforce restrictions based on source IP, and then forward/proxy requests to the internal NLB.
Are there other ways to solve this?

Do not use nginx-ingress for this. To get the real client IP inside nginx-ingress you have to set controller.service.externalTrafficPolicy: Local, which in turn changes how the nginx-ingress Service is exposed, making it local to the nodes. See https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip. This in turn causes your nginx-ingress LoadBalancer to report unhealthy hosts, which creates noise in your monitoring (as opposed to the default NodePort behavior, where every node exposes the same port and stays healthy). Unless you run nginx-ingress as a DaemonSet or resort to other hacks, e.g. limiting which nodes are added as backends (mind scheduling and scaling) or moving nginx-ingress to a separate set of nodes/subnet, IMO each of these is a lot of headache for such a simple problem. More on this problem: https://elsesiy.com/blog/kubernetes-client-source-ip-dilemma
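For reference, a minimal sketch of what that Helm value maps to at the Service level (the name and selector here are assumptions, not the chart's actual rendered output):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # hypothetical name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # preserves the client source IP; only nodes running an ingress pod pass the LB health check
  selector:
    app: nginx-ingress             # hypothetical selector
  ports:
  - name: https
    port: 443
    targetPort: 443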
Use a plain Service of type: LoadBalancer (classic ELB), which supports:
Source ranges: https://aws.amazon.com/premiumsupport/knowledge-center/eks-cidr-ip-address-loadbalancer/
The service.beta.kubernetes.io/aws-load-balancer-extra-security-groups annotation, in case you want to manage the source ranges from the outside.
In this case your traffic flows World -> ELB -> NodePort -> Service -> Pod, without an Ingress.
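A minimal sketch of such a Service (the name, selector, security group ID, and CIDR are all placeholders), restricting access with loadBalancerSourceRanges:

apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service
  annotations:
    # optional: attach an externally managed security group for the source ranges
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: sg-0123456789abcdef0
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:        # only these CIDRs may reach the load balancer
  - 203.0.113.0/24
  selector:
    app: my-tcp-app
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP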

You can use the whitelist-source-range annotation for that. We've been using it successfully for a few use cases and it does the job well.
EDIT: I spoke too soon. Rereading your question and understanding your exact use case brought me to this issue, which clearly states that these services cannot be whitelisted and suggests solving this at the firewall level.

Related

How to communicate securely to a k8s service via istio?

I can communicate with another service in the same namespace via:
curl http://myservice1:8080/actuator/info
inside the pod.
The application is not configured with TLS; I am curious if I can reach that pod via a VirtualService so that I can utilize this Istio feature:
curl https://myservice1:8080/actuator/info
We have an Istio VirtualService and Gateway in place. External access to the pod is managed by them and works properly. We just want to reach another pod via https, if possible, without having to reconfigure the application.
How to communicate securely to a k8s service via istio?
Answering the question under the title: there are many possibilities, but you should start with Understanding TLS Configuration:
One of Istio’s most important features is the ability to lock down and secure network traffic to, from, and within the mesh. However, configuring TLS settings can be confusing and a common source of misconfiguration. This document attempts to explain the various connections involved when sending requests in Istio and how their associated TLS settings are configured. Refer to TLS configuration mistakes for a summary of some of the most common TLS configuration problems.
There are many different ways to secure your connection. It all depends on what exactly you need and what you set up.
We have an Istio VirtualService and Gateway in place; external access to the pod is managed by them and works properly. We just want to reach another pod via https if possible without having to reconfigure the application
As for virtualservice and gateway, you will find an example configuration in this article. You can find guides for single host and for multiple hosts.
We just wanted to reach another pod via https if possible without having to reconfigure the application.
Here you will most likely be able to apply the outbound configuration:
While the inbound side configures what type of traffic to expect and how to process it, the outbound configuration controls what type of traffic the gateway will send. This is configured by the TLS settings in a DestinationRule, just like external outbound traffic from sidecars, or auto mTLS by default.
The only difference is that you should be careful to consider the Gateway settings when configuring this. For example, if the Gateway is configured with TLS PASSTHROUGH while the DestinationRule configures TLS origination, you will end up with double encryption. This works, but is often not the desired behavior.
A VirtualService bound to the gateway needs care as well to ensure it is consistent with the Gateway definition.
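As a concrete illustration, a minimal sketch of a DestinationRule that has the sidecar originate Istio mutual TLS toward myservice1, so the application can keep calling plain http:// while traffic is encrypted in transit (the resource name and namespace are assumptions; the service name comes from the question):

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myservice1-mtls            # hypothetical name
spec:
  host: myservice1.default.svc.cluster.local   # assumes the default namespace
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL           # sidecars handle certificates and encryption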

ALB ingress mixed private and internet facing paths

I have a set of containerized microservices behind an ALB serving as endpoints for my API. The ALB ingress is internet-facing and I have set up my path routing accordingly. Suddenly the need appeared for some additional (new) containerized microservices to be private (aka not accessible through the internet) but still be reachable from, and able to communicate with, the ones that are public (internally).
Is there a way to configure path-based routing, or modify the ingress with some annotation, to keep certain paths private?
If not, would a second ingress (an internal one this time) under the same ALB do the trick for what I want?
Thanks,
George
Turns out that (at least in my case) the solution is to ignore the internet-facing Ingress and let it do its thing. Internal-facing REST API paths that should not be otherwise accessible can be reached through their pods' Service specification.
Implementing a Service per microservice allows internal access at its service-name:port without the need to modify anything in the initial Ingress, which continues to handle the internet-facing API(s).
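A minimal sketch of such a per-microservice Service (all names are hypothetical); with type ClusterIP it is reachable only from inside the cluster, so it never appears behind the internet-facing ALB:

apiVersion: v1
kind: Service
metadata:
  name: private-microservice
spec:
  type: ClusterIP                  # cluster-internal only
  selector:
    app: private-microservice
  ports:
  - port: 8080
    targetPort: 8080

The public microservices can then call it internally at http://private-microservice:8080.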

Set static response from Istio Ingress Gateway

How do you set a static 200 response in Istio's Ingress Gateway?
We have a situation where we need an endpoint to return a small bit of static content (a bootstrap URL). We could even put it in a header. Can Istio host something like that or do we need to run a pod for no other reason than to return a single word?
Specifically I am looking for a solution that returns 200 via Istio configuration, not a pod that Istio routes to (which is quite a common example and available elsewhere).
You have to do it manually, by creating a VirtualService that routes to a Service connected to a pod. You first have to create the pod and then attach a Service to it, even if your application only returns a single word.
Istio Gateways are responsible for opening ports on the relevant Istio gateway pods and receiving traffic for hosts. That’s it.
The VirtualService: Istio VirtualServices are what get “attached” to Gateways and are responsible for defining the routes the gateway should implement. You can have multiple VirtualServices attached to Gateways, but not for the same domain.
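A minimal sketch of the setup described above (all names and the host are hypothetical), assuming a tiny pod exposed by a Service called static-response on port 8080:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: static-gateway
spec:
  selector:
    istio: ingressgateway          # use Istio's default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - bootstrap.example.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: static-response
spec:
  hosts:
  - bootstrap.example.com
  gateways:
  - static-gateway
  http:
  - route:
    - destination:
        host: static-response      # the Service in front of the minimal pod
        port:
          number: 8080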

VPN between two worker nodes

I have three nodes, the master and two workers, inside my cluster. I want to know if it's possible with Istio to redirect all the traffic coming from one worker node directly to the other worker node (but not the Kubernetes traffic).
Thanks for the help
Warok
Edit
Apparently, it's possible to route the traffic of one specific user to a specific version: https://istio.io/docs/tasks/traffic-management/request-routing/#route-based-on-user-identity. But the question is still open
Edit 2
Assume that my nodes' names are node1 and node2; is the following yaml file right?
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: node1
  ...
spec:
  hosts:
  - node1
  tcp:
  - match:
    - port: 27017   # for now, I will just specify this port
    route:
    - destination:
        host: node2
I want to know if it's possible with Istio to redirect all the traffic coming from one worker node directly to the other worker node (but not the Kubernetes traffic).
Quick answer, No.
Istio works through a sidecar container that is injected into each pod. You can read more at What is Istio?
Istio lets you connect, secure, control, and observe services.
...
It is also a platform, including APIs that let it integrate into any logging platform, or telemetry or policy system. Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.
...
You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices
I also recommend reading What is Istio? The Kubernetes service mesh explained.
It's also important to know why you would want to redirect traffic from one node to the other; without knowing that, I cannot advise any solution.

How to set up an external Kubernetes service in AWS using https

I would like to set up a public Kubernetes service in AWS that listens on https.
I know that Kubernetes services currently only support TCP and UDP, but is there a way to make this work with the current version of Kubernetes and AWS ELBs?
I found this. http://blog.kubernetes.io/2015/07/strong-simple-ssl-for-kubernetes.html
Is that the best way at the moment?
HTTPS usually runs over TCP, so you can simply run your service with type NodePort/LoadBalancer and manage the certs in the service. This example might help [1]: nginx is listening on :443 through a NodePort for ingress traffic. See [2] for a better explanation of the example.
[1] https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/https-nginx/nginx-app.yaml#L8
[2] http://kubernetes.io/v1.0/docs/user-guide/connecting-applications.html
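A minimal sketch of that pattern (names are hypothetical): a NodePort Service in front of an nginx pod that terminates TLS itself on :443:

apiVersion: v1
kind: Service
metadata:
  name: https-nginx
spec:
  type: NodePort
  selector:
    app: https-nginx
  ports:
  - name: https
    port: 443
    targetPort: 443                # nginx holds the certs and terminates TLS in the pod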
Since 1.3, you can use annotations along with a type=LoadBalancer service:
https://github.com/kubernetes/kubernetes/issues/24978
service.beta.kubernetes.io/aws-load-balancer-ssl-cert=arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
service.beta.kubernetes.io/aws-load-balancer-ssl-ports=* (or e.g. https)
The first annotation is the only one you need if all you want is to support HTTPS, on any number of ports. If you also want to support HTTP on one or more additional ports, you need to use the second annotation to specify explicitly which ports will use encryption (the others will use plain HTTP).
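Putting both annotations on a Service, a minimal sketch (the certificate ARN is the placeholder from above; the names and ports are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: https-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"   # omit to terminate TLS on every port
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: https
    port: 443
    targetPort: 8080               # ELB terminates TLS; plain HTTP to the pod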
In my case I set up an ELB in AWS and put the SSL cert on that, choosing https and http for the connection types in the ELB, and that worked great. I set up the ELB with kubectl expose.