Egress Blocking Based on IP Address - istio

We would like to use Istio to block egress access from applications and to have an allow-list/block-list of IP Addresses and CIDR blocks. Are there any solutions possible using Istio?
-Renjith

We would like to use Istio to block egress access from applications
I think you could use REGISTRY_ONLY outboundTrafficPolicy.mode for that.
Istio has an installation option, meshConfig.outboundTrafficPolicy.mode, that configures the sidecar handling of external services, that is, those services that are not defined in Istio’s internal service registry. If this option is set to ALLOW_ANY, the Istio proxy lets calls to unknown services pass through. If the option is set to REGISTRY_ONLY, then the Istio proxy blocks any host without an HTTP service or service entry defined within the mesh. ALLOW_ANY is the default value, allowing you to start evaluating Istio quickly, without controlling access to external services. You can then decide to configure access to external services later.
More about that here and here.
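For illustration, here is a minimal sketch of that setting as an IstioOperator overlay (applied with istioctl install -f), together with a ServiceEntry that allow-lists a single external host once REGISTRY_ONLY is in effect. The ServiceEntry is applied with kubectl, and the host name is just a placeholder for this example:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    # Block egress to anything not in Istio's service registry.
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: allow-example-com   # placeholder: one allow-listed external host
spec:
  hosts:
  - example.com
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 443
    name: tls
    protocol: TLS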
and to have an allow-list/block-list of IP Addresses and CIDR blocks.
AFAIK the only way to create an allow/block list in Istio is with an AuthorizationPolicy or an EnvoyFilter.
I have found a few examples where they used an AuthorizationPolicy with an egress gateway, for example here.
They just changed the AuthorizationPolicy label from app: istio-ingressgateway to app: istio-egressgateway.
spec:
  selector:
    matchLabels:
      app: istio-egressgateway
I was looking for an example with an IP/CIDR, but I couldn't find anything, so I'm not sure whether that's going to work with the egress gateway.
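If it does work, I'd expect the policy to look roughly like this. This is only a sketch based on the ingress example from the resources below; the namespace and CIDR are placeholders, and note that ipBlocks matches the source IP of the caller, not the external destination:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: egress-ip-allowlist
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-egressgateway   # applied to the egress gateway instead of the ingress gateway
  action: ALLOW
  rules:
  - from:
    - source:
        # Matches the source IP of the workload calling through the gateway.
        ipBlocks:
        - 10.244.0.0/16   # placeholder CIDR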
Additional resources:
https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/#ip-based-allow-list-and-deny-list
Istio authorization policy not applying on child gateway
https://istio.io/latest/docs/reference/config/security/authorization-policy/#Source
https://github.com/salrashid123/istio_helloworld#egress-rules

Related

GKE Gateway with a wildcard Certificate Manager certificate

I'm trying to set up GKE Gateway with an HTTPS listener using a wildcard certificate managed via Certificate Manager.
The problem I'm facing is not in the provisioning of the certificate, which was done successfully following the DNS Authorization tutorial and this answer. I've successfully provisioned a wildcard certificate, which gcloud certificate-manager certificates describe <cert-name> shows as ACTIVE and AUTHORIZED on both the domain and its wildcard subdomain. I've also provisioned the associated Certificate Map and Map Entry (all via Terraform) and created a global static IP address and a wildcard A record for it in Cloud DNS.
However, when I try to use this cert and address in the GKE Gateway resource, the resource gets "stuck" (never reaches SYNC phase), and there's no HTTPS GCLB provisioned as seen via gcloud or Cloud Console.
Here's the config I was trying to use for it:
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: external-https
  annotations:
    networking.gke.io/certmap: gke-gateway
spec:
  gatewayClassName: gke-l7-gxlb
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
  - name: https
    protocol: HTTPS
    port: 443
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
  addresses:
  - type: NamedAddress
    value: gke-gateway
I've tried multiple different combinations of this config, including with an explicit IPAddress, or without allowedRoutes. But no matter what I try, it doesn't seem to work. I can only see the initial ADD and UPDATE events in the output of kubectl describe gateway external-https, and there are no logs for it to be found anywhere, as far as I know (since the GKE Gateway Controller is part of the GKE control plane, with no logging exposed to customers, from what I understand).
The only time I was able to make either an internal or an external Gateway work was when using the HTTP protocol, i.e. without certificates. Hence, I think this has to do with HTTPS, and probably more specifically with linking to the managed wildcard certificate.
Additionally, I should mention that my attempts at deploying the Gateway fail most of the time (i.e. the resource gets "stuck" in the same way), even when reusing a previously-working HTTP config. I'm not sure what the source of this flakiness is (apart from maybe some internal quota), but I imagine this is fully expected, as the service is still in Beta.
Has anyone been able to actually provision a Gateway with HTTPS listener and wildcard certs, and how?

Using existing load balancer for K8S service

I have a simple app that I need to deploy in K8S (running on AWS EKS) and expose it to the outside world.
I know that I can add a service with type LoadBalancer and voilà, K8S will create an AWS load balancer for me.
spec:
  type: LoadBalancer
However, the issue is that it will create a new LB.
The main reason why this is an issue for me is that I am trying to separate out infrastructure creation/upgrades (vs. software deployment/upgrade). All of my infrastructures will be managed by Terraform and all of my software will be defined via K8S YAML files (may be Helm in the future).
And the creation of a load balancer (infrastructure) breaks this model.
Two questions:
Do I understand correctly that you can't change this behavior (create vs. use existing)?
I read multiple articles about K8S and all of them lead me into the direction of Ingress + Ingress Controller. Is this the way to solve this problem?
I am hesitant to go in this direction. There are tons of steps to get it working, and it will take time for me to figure out how to retrofit it into Terraform and k8s YAML files.
Short answer: you can only change it to NodePort and couple the existing LB manually, by registering the EKS nodes with the right exposed port, like:
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
    nodePort: 30080
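For context, a complete Service along those lines might look like this (the name, selector, and nodePort value are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app            # placeholder
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  selector:
    app: my-app           # placeholder pod label
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http      # assumes the container port is named "http"
    nodePort: 30080       # fixed node port your existing LB can target
You would then point the existing load balancer's target group at port 30080 on the EKS worker nodes yourself.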
But attaching to an existing load balancer natively is not supported by the AWS Kubernetes controller yet, and supporting such behavior may not be a priority, because of configuration: controllers get their configuration from k8s ConfigMaps or special CustomResourceDefinitions (CRDs), which would conflict with any manual config on the already existing LB and may lead to wiping existing configs that are not tracked in the config source.
Q: Direct expose or overlay ingress?
Note: use an ingress (NGINX or AWS ALB) if you have more than one service to expose or you need to add controls on the exposed APIs.

Istio traffic management with nginx-ingress working but only for port 80

I've seen something strange: I've been able to have an nginx-ingress with an injected sidecar (i.e. part of the mesh) successfully route traffic that it receives into a cluster based on a k8s Ingress definition, and then have Istio traffic routing applied internally as desired, but this only works when the traffic is sent to the k8s services via port 80, and only when that is a port that is NOT served by the associated k8s service. This tells me my success is likely some kind of hack.
I'm asking if anyone can point out where I'm going wrong and/or why this is working. (I need to use the nginx ingress here, I can't switch to using istio-ingressgateway for this.)
My configuration / ability to reproduce this is documented in full on this github project: https://github.com/bob-walters/nginx-istio which I've created to provide a way to repeat this setup.
My setup:
a standard Istio installation in a k8s cluster (docker desktop) with the namespaces configured to do automatic sidecar injection.
an nginx-ingress deployment (file) with injected istio sidecar.
configured the nginx-ingress with these values in order to ensure that the sidecar would not try to handle inbound traffic, while still letting outbound traffic go through the sidecar:
podAnnotations:
  traffic.sidecar.istio.io/includeInboundPorts: ""
  traffic.sidecar.istio.io/excludeInboundPorts: "80,443"
A set of (demo) services based on podinfo representing the different services that I want to route between via Istio virtual services. Each serving on port 9898 with type: ClusterIP (i.e. only accessible via ingress)
A k8s Ingress definition (file) for the nginx-ingress which carries out the routing for some fictitious hostnames to the different podinfo deployments. The ingress includes the following specific annotations:
The annotation nginx.ingress.kubernetes.io/service-upstream: "true" is set in order to ensure that the nginx-ingress uses the cluster IP address, rather than individual pod IP addresses, when forwarding traffic.
The annotation nginx.ingress.kubernetes.io/upstream-vhost: nginx-cache-v2.whitelabel-dev.svc.cluster.local is NOT set. Many articles will indicate that you should typically set this in combination with the above, but setting this has the effect of altering the Host header to the value specified, and Istio routes based on the Host header, so setting this would require that all Istio routing rules be specified in terms of those hostnames and not the original hostnames. More details on this can be found at: https://github.com/kubernetes/ingress-nginx/issues/3171
Finally: a Virtual Service (file) for one of the hosts (same hostname given in the ingress definition) which is meant to apply once the traffic reaches the Istio cluster, and carries out routing based on a cookie header. (It's doing weighted service shifting with a cookie to pin user sessions.)
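For reference, a cookie-pinning VirtualService of that kind might look roughly like this. It is only a sketch: the hostname, service names, and cookie value are placeholders rather than the exact file from the repo:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: podinfo
spec:
  hosts:
  - podinfo.example.com              # placeholder: the same host used in the Ingress
  http:
  - match:
    - headers:
        cookie:
          regex: ".*version=v2.*"    # placeholder: sessions already pinned to v2
    route:
    - destination:
        host: podinfo-v2.default.svc.cluster.local
        port:
          number: 9898
  - route:                           # everyone else gets a weighted split
    - destination:
        host: podinfo-v1.default.svc.cluster.local
        port:
          number: 9898
      weight: 90
    - destination:
        host: podinfo-v2.default.svc.cluster.local
        port:
          number: 9898
      weight: 10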
Here's the oddity:
The Istio traffic management seems to apply correctly if the target port of the ingress is 80. If it's 9898 (as you would expect because that is the service's available port), the Istio traffic management doesn't seem to apply at all.
This is what I'm seeing as I try varying the port numbers:
| Target Port of Ingress Rule | K8s Service Port | Virtual Service Port | Result |
| --- | --- | --- | --- |
| 80 | 9898 | not set | virtual service works as desired |
| 9898 | 9898 | not set | routes to K8s Service; virtual service has no effect |
| 8080 | 9898 | not set | fails: timeout/502 while attempting to invoke service |
| 9898 | 9898 | 9898 | routes to K8s Service; virtual service has no effect |
| 443 | 9898 | not set | fails: timeout/502 while attempting to invoke service |
I'm really confused as to why this is not working with port 9898 but is working with port 80, especially given that K8s reports my ingress definition as invalid. My understanding of the routing is that the inbound traffic goes to the 'controller' container in the nginx-ingress service, bypassing the Istio proxy as long as it comes in on port 80 or 443. The outbound traffic should all go through the proxy, destined for the ClusterIP addresses of the k8s services, but with the 'Host' header still containing the originally requested host. Thus Istio should be able to handle its routing responsibilities based on Host + port, and it does so, but only if I route into the mesh with port 80.
Any help greatly appreciated!
I struggled with this some more and eventually got it working.
There are some specific (non-intuitive) things that have to be lined up correctly for virtual services to work with traffic handled by an nginx-ingress. The details are in the README.md at https://github.com/bob-walters/nginx-istio

Istio ingress gateway : domain name and port forwarding

I have set up an Istio service mesh. It works fine so far, but from outside I can only access it with the port number, like http://www.mytest.com:41333. What do I have to do to forward 80 to 41333 so that I can access it via http://www.mytest.com?
Here is my Gateway :
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mytest-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "www.mytest.com"
Not sure what to do...
I assume your istio ingress gateway service type is NodePort; if your istio ingress gateway is NodePort, then you have to use http://www.mytest.com:41333.
If you want to use http://www.mytest.com then you would have to change it to LoadBalancer.
You can check if your istio ingress gateway is NodePort with
kubectl get svc -n istio-system
And check istio ingress gateway type.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting NodeIP:NodePort.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
As mentioned in the Istio documentation:
If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service's node port.
If you use a cloud like AWS, you can configure Istio with an AWS load balancer using the appropriate annotations.
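For example, here is a minimal sketch of an IstioOperator overlay that asks the AWS cloud controller for an NLB on the ingress gateway (assuming the default gateway name and an istioctl-based install):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          type: LoadBalancer
        serviceAnnotations:
          # Request an AWS NLB instead of a Classic ELB.
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"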
On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer is published in the Service's .status.loadBalancer
If it's on premises, like minikube, then you could take a look at MetalLB.
MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols.
Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both of these options have significant downsides for production use, which makes bare metal clusters second class citizens in the Kubernetes ecosystem.
MetalLB aims to redress this imbalance by offering a Network LB implementation that integrates with standard network equipment, so that external services on bare metal clusters also “just work” as much as possible.
You can read more about it at the link below:
https://medium.com/@emirmujic/istio-and-metallb-on-minikube-242281b1134b
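For reference, a minimal MetalLB layer 2 configuration might look like this; the address range is a placeholder for a free range on your network, and newer MetalLB releases use CRDs instead of this ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder range on your LAN
Once MetalLB hands out an address from that pool, the istio ingress gateway Service of type LoadBalancer gets an EXTERNAL-IP and port 80 works without the node port.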

Exposing a K8s TCP Service Endpoint to the Public Internet Without a Load Balancer

So I'm working on a project that involves managing many postgres instances inside of a k8s cluster. Each instance is managed using a Stateful Set with a Service for network communication. I need to expose each Service to the public internet via DNS on port 5432.
The most natural approach here is to use the k8s Load Balancer resource and something like external dns to dynamically map a DNS name to a load balancer endpoint. This is great for many types of services, but for databases there is one massive limitation: the idle connection timeout. AWS ELBs have a maximum idle timeout limit of 4000 seconds. There are many long running analytical queries/transactions that easily exceed that amount of time, not to mention potentially long-running operations like pg_restore.
So I need some kind of solution that allows me to work around the limitations of Load Balancers. Node IPs are out of the question since I will need port 5432 exposed for every single postgres instance in the cluster. Ingress also seems less than ideal since it's a layer 7 proxy that only supports HTTP/HTTPS. I've seen workarounds with nginx-ingress involving some configmap chicanery, but I'm a little worried about committing to hacks like that for a large project. ExternalName is intriguing but even if I can find better documentation on it I think it may end up having similar limitations as NodeIP.
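For reference, the nginx-ingress workaround mentioned above is usually a TCP services ConfigMap along these lines (names are placeholders; the controller also has to be started with --tcp-services-configmap and have port 5432 exposed on its own Service):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "5432": "postgres-one/postgres:5432"   # placeholder namespace/service:port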
Any suggestions would be greatly appreciated.
The Kubernetes ingress controller implementation Contour from Heptio can proxy TCP streams when they are encapsulated in TLS. This is required so that the SNI handshake message can be used to direct the connection to the correct backend service.
Contour can handle Ingresses, but additionally introduces a new ingress API, IngressRoute, which is implemented via a CRD. The TLS connection can be terminated at your backend service. An IngressRoute might look like this:
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: postgres
  namespace: postgres-one
spec:
  virtualhost:
    fqdn: postgres-one.example.com
    tls:
      passthrough: true
  tcpproxy:
    services:
    - name: postgres
      port: 5432
  routes:
  - match: /
    services:
    - name: dummy
      port: 80
HAProxy supports TCP load balancing. You can look at HAProxy as a proxy and load balancer for a Postgres database; it supports both TLS and non-TLS connections.