How to disable unused ports in the Istio operator - istio

I want to follow the best practices and disable the unused ports, but I am not able to figure out where exactly to disable them. I am using the Istio operator to deploy istiod.
I want to set the flag grpcAddr="" on the control plane and also remove/disable the unused ports 15090, 15021, 15020 and 15000 on the data plane.
kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: minimal
EOF
Control Plane
Istiod exposes a few unauthenticated plaintext ports for convenience by default. If desired, these can be closed:
Port 8080 exposes the debug interface, which offers read access to a variety of details about the cluster's state. This can be disabled by setting the environment variable ENABLE_DEBUG_ON_HTTP=false on Istiod. Warning: many istioctl commands depend on this interface and will not function if it is disabled.
Port 15010 exposes the XDS service over plaintext. This can be disabled by adding the --grpcAddr="" flag to the Istiod Deployment. Note: highly sensitive services, such as the certificate signing and distribution services, are never served over plaintext.
Data Plane
The proxy exposes a variety of ports. Exposed externally are port 15090 (telemetry) and port 15021 (health check). Ports 15020 and 15000 provide debugging endpoints. These are exposed over localhost only. As a result, the applications running in the same pod as the proxy have access; there is no trust boundary between the sidecar and application.
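Putting the control-plane part of this into the IstioOperator CR could look roughly like the sketch below. The env entry uses the components.pilot.k8s.env field to set ENABLE_DEBUG_ON_HTTP; closing port 15010 additionally requires getting the --grpcAddr="" flag onto the istiod Deployment's args (for example via components.pilot.k8s.overlays), which is not shown here because the exact patch depends on your Istio version.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: minimal
  components:
    pilot:
      k8s:
        env:
        # Closes the plaintext debug interface on port 8080.
        # Warning: many istioctl commands depend on this interface.
        - name: ENABLE_DEBUG_ON_HTTP
          value: "false"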

Related

mTLS between services running inside and outside a mesh using Istio's trust chain

I understand that I can configure Istio for its Citadel component to use a root x509 certificate + private key that I provide. Can I extend this system in a way that I also use the same root to issue certificates to legacy workloads running in the same k8s cluster, and then configure a destination rule to access these workloads from inside the mesh? Something like:
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-mtls
spec:
  host: mymtls-app.legacy.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8443
      tls:
        mode: ISTIO_MUTUAL
        sni: mymtls-app.legacy.svc.cluster.local
Can the above work? Do I need any additional configuration besides the above? I may not be in a position to run spiffe / spire to manage the certificates for workloads outside the mesh - which puts a spiffe-federation solution like this somewhat out of reach for me. But this also doesn't seem like a fully supported mechanism in any case.
I have been able to configure mTLS using a separate certificate hierarchy which I have to inject via secrets and mount into the pods / sidecars in question (illustrated here).
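For comparison, that separate-hierarchy approach roughly corresponds to a DestinationRule like the sketch below; the /etc/certs mount path and file names are assumptions and must match wherever the secret is actually mounted into the sidecar:
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-mtls-manual
spec:
  host: mymtls-app.legacy.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8443
      tls:
        # MUTUAL (rather than ISTIO_MUTUAL) uses certificates that you
        # mount into the sidecar yourself instead of Istio-issued ones.
        mode: MUTUAL
        clientCertificate: /etc/certs/cert-chain.pem
        privateKey: /etc/certs/key.pem
        caCertificates: /etc/certs/root-cert.pem
        sni: mymtls-app.legacy.svc.cluster.local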

Istio traffic management with nginx-ingress working but only for port 80

I've seen something strange: I've been able to have an nginx-ingress with an injected sidecar (i.e. part of the mesh) successfully route traffic it receives into the cluster based on a k8s ingress definition, and then have Istio traffic routing applied internally as desired. However, this only works when the traffic is sent to the k8s services via port 80, and only when that is a port which is NOT served by the associated k8s service. This tells me my success is likely some kind of hack.
I'm asking if anyone can point out where I'm going wrong and/or why this is working. (I need to use the nginx ingress here, I can't switch to using istio-ingressgateway for this.)
My configuration / ability to reproduce this is documented in full on this github project: https://github.com/bob-walters/nginx-istio which I've created to provide a way to repeat this setup.
My setup:
a standard Istio installation in a k8s cluster (docker desktop) with the namespaces configured to do automatic sidecar injection.
an nginx-ingress deployment (file) with injected istio sidecar.
configured the nginx-ingress with these values in order to ensure that the sidecar would not try to handle inbound traffic, but would still route outbound traffic through the sidecar:
podAnnotations:
  traffic.sidecar.istio.io/includeInboundPorts: ""
  traffic.sidecar.istio.io/excludeInboundPorts: "80,443"
A set of (demo) services based on podinfo representing the different services that I want to route between via Istio virtual services. Each serving on port 9898 with type: ClusterIP (i.e. only accessible via ingress)
A k8s Ingress definition (file) for the nginx-ingress which carries out the routing for some fictitious hostnames to the different podinfo deployments. The ingress includes the following specific annotations:
The annotation nginx.ingress.kubernetes.io/service-upstream: "true" is set in order to ensure that the nginx-ingress uses the cluster IP address, rather than individual pod IP addresses, when forwarding traffic.
The annotation nginx.ingress.kubernetes.io/upstream-vhost: nginx-cache-v2.whitelabel-dev.svc.cluster.local is NOT set. Many articles will indicate that you should typically set this in combination with the above, but setting this has the effect of altering the Host header to the value specified, and Istio routes based on the Host header, so setting this would require that all Istio routing rules be specified in terms of those hostnames and not the original hostnames. More details on this can be found at: https://github.com/kubernetes/ingress-nginx/issues/3171
Finally: a Virtual Service (file) for one of the hosts (same hostname given in the ingress definition) which is meant to apply once the traffic reaches the Istio cluster, and carries out routing based on a cookie header. (It's doing weighted service shifting with a cookie to pin user sessions.)
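For reference, the cookie-pinned, weighted routing described above can be sketched with a VirtualService like the one below; the hostname, cookie value, service names, and weights are placeholders rather than the exact values from the repository:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: podinfo-routes
spec:
  hosts:
  - podinfo.example.com           # placeholder for a hostname from the Ingress
  http:
  - match:
    - headers:
        cookie:
          regex: ".*canary=v2.*"  # placeholder session-pinning cookie
    route:
    - destination:
        host: podinfo-v2.default.svc.cluster.local
        port:
          number: 9898
  - route:                        # default: weighted traffic shifting
    - destination:
        host: podinfo-v1.default.svc.cluster.local
        port:
          number: 9898
      weight: 90
    - destination:
        host: podinfo-v2.default.svc.cluster.local
        port:
          number: 9898
      weight: 10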
Here's the oddity:
The Istio traffic management seems to apply correctly if the target port of the ingress is 80. If it's 9898 (as you would expect because that is the service's available port), the Istio traffic management doesn't seem to apply at all.
This is what I'm seeing as I try varying the port numbers:
Target Port of Ingress Rule | K8s Service Port | Virtual Service Port | Result
80   | 9898 | not set | virtual service works as desired
9898 | 9898 | not set | routes to K8s Service; virtual service has no effect
8080 | 9898 | not set | fails: timeout/502 while attempting to invoke service
9898 | 9898 | 9898    | routes to K8s Service; virtual service has no effect
443  | 9898 | not set | fails: timeout/502 while attempting to invoke service
I'm really confused as to why this is not working with port 9898, but is working for port 80, especially given that K8s reports my ingress definition as invalid. My understanding of the routing is that the inbound traffic would go to the 'controller' container in the nginx-ingress service, bypassing the istio proxy as long as it comes in on ports 80 or 443. The outbound traffic should all be going through the proxy destined for the ClusterIP addresses of the k8s services, but with the 'Host' header still containing the original requested host. Thus Istio should be able to handle its routing responsibilities based on Host + Port, and does so, but only if I am routing into the mesh with port 80.
Any help greatly appreciated!
I struggled with this some more and eventually got it working.
There are some specific (non-intuitive) things that have to be correctly lined up for virtual services to work with traffic handled by an nginx-ingress. The details are at the README.md at https://github.com/bob-walters/nginx-istio

How to expose multiple services with TCP using nginx-ingress controller?

I have multiple deployments of an RDP application running, and they are all exposed with ClusterIP services. I have the nginx-ingress controller in my k8s cluster, and to allow TCP I have added the --tcp-services-configmap flag to the nginx-ingress controller deployment and also created the ConfigMap for it shown below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  3389: "demo/rdp-service1:3389"
This will expose the "rdp-service1" service. I have 10 more such services that need to be exposed on the same port number, but if I add another service to the same ConfigMap like this:
...
data:
  3389: "demo/rdp-service1:3389"
  3389: "demo/rdp-service2:3389"
then it will overwrite the previous service entry, and since I have also deployed external-dns in k8s, all the records created by the ingress using host: ... will start pointing to the deployment attached to the newly added service in the ConfigMap.
My final requirement is that as soon as I append a rule for a newly created deployment (RDP application) to the ingress, it starts allowing TCP connections for it. Is there any way to achieve this? Or is there any other ingress controller available that can handle this type of use case and can also easily be integrated with external-dns?
Note: I am using an AWS EKS cluster and Route53 with external-dns.
Posting this answer as a community wiki to explain some of the topics in the question as well as hopefully point to the solution.
Feel free to expand/edit it.
The NGINX Ingress controller's main responsibility is to forward HTTP/HTTPS traffic. With the addition of tcp-services/udp-services it can also forward TCP/UDP traffic to the respective endpoints:
Kubernetes.github.io: Ingress nginx: User guide: Exposing tcp udp services
The main issue is that Host-based routing for the Ingress resource in Kubernetes specifically targets HTTP/HTTPS traffic, not TCP (RDP).
You could achieve the following scenario:
Ingress controller:
3389 - RDP Deployment #1
3390 - RDP Deployment #2
3391 - RDP Deployment #3
There would be no Host-based routing here; it would be more like port-forwarding.
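A sketch of that port-per-service mapping in the tcp-services ConfigMap (the service names beyond rdp-service1 are assumptions); because every external port now maps to a different backend, the keys no longer collide. The same ports also have to be opened on the ingress controller's Service/LoadBalancer for traffic to reach them:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # <external port>: "<namespace>/<service>:<service port>"
  "3389": "demo/rdp-service1:3389"
  "3390": "demo/rdp-service2:3389"
  "3391": "demo/rdp-service3:3389"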
A side note!
This setup would also depend on the ability of the LoadBalancer to allocate ports (which could be limited by the cloud provider's specifications).
As for a possible solution, which may not be so straightforward, I would take a look at the following resources:
Stackoverflow.com: Questions: Nginx TCP forwarding based on hostname
Doc.traefik.io: Traefik: Routing: Routers: Configuring TCP routers
Github.com: Bolkedebruin: Rdpgw
I'd also check the following links:
Aws.amazon.com: Quickstart: Architecture: Rd gateway - AWS specific
Docs.konghq.com: Kubernetes ingress controller: 1.2.X: Guides: Using tcpingress
Haproxy:
Haproxy.com: Documentation: Aloha: 12-0: Deployment guides: Remote desktop: RDP gateway
Haproxy.com: Documentation: Aloha: 10-5: Deployment guides: Remote desktop
Haproxy.com: Blog: Microsoft remote desktop services rds load balancing and protection
Actually, I really don't know why you are using that ConfigMap.
To my knowledge, the nginx-ingress controller routes traffic arriving on the same port based on the host. So if you want to expose your applications on the same port, try using this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Chart.Name }}-ingress
  namespace: your-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: your-hostname
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          serviceName: {{ .Chart.Name }}-service
          servicePort: {{ .Values.service.nodeport.port }}
Looking at your requirement, I feel that you need a LoadBalancer rather than an Ingress.
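If per-application load balancers are acceptable, a minimal sketch of exposing one RDP deployment directly is shown below (the app: rdp-app1 selector is an assumed pod label; on EKS each such Service provisions its own load balancer):
apiVersion: v1
kind: Service
metadata:
  name: rdp-service1-lb
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: rdp-app1        # assumed label on the RDP pods
  ports:
  - name: rdp
    protocol: TCP
    port: 3389
    targetPort: 3389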

Egress Blocking Based on IP Address

We would like to use Istio to block egress access from applications and to maintain an allow-list/block-list of IP addresses and CIDR blocks. Are there any solutions possible using Istio?
-Renjith
We would like to use Istio to block egress access from applications
I think you could use REGISTRY_ONLY outboundTrafficPolicy.mode for that.
Istio has an installation option, meshConfig.outboundTrafficPolicy.mode, that configures the sidecar handling of external services, that is, those services that are not defined in Istio’s internal service registry. If this option is set to ALLOW_ANY, the Istio proxy lets calls to unknown services pass through. If the option is set to REGISTRY_ONLY, then the Istio proxy blocks any host without an HTTP service or service entry defined within the mesh. ALLOW_ANY is the default value, allowing you to start evaluating Istio quickly, without controlling access to external services. You can then decide to configure access to external services later.
More about that here and here.
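A minimal sketch of setting that option through an IstioOperator CR; once the mode is REGISTRY_ONLY, only destinations registered in the mesh (for example via a ServiceEntry) remain reachable:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  meshConfig:
    outboundTrafficPolicy:
      # Block calls to hosts that are not in Istio's service registry.
      mode: REGISTRY_ONLY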
and to maintain an allow-list/block-list of IP addresses and CIDR blocks.
AFAIK the only way to create an allow/block list in istio is with AuthorizationPolicy or EnvoyFilter.
I have found a few examples where AuthorizationPolicy was used with an egress gateway, for example here.
They just changed the AuthorizationPolicy label from app: istio-ingressgateway to app: istio-egressgateway.
spec:
  selector:
    matchLabels:
      app: istio-egressgateway
I was looking for any example with ip/cidr, but I couldn't find anything, so I'm not sure if that's gonna work with the egress gateway.
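For illustration, the ingress-style IP allow list from the docs, re-targeted at the egress gateway as the linked example suggests, would look roughly like the sketch below; the CIDR values are placeholders, and as noted above it is untested whether source-IP matching behaves the same way at the egress gateway:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: egress-ip-allowlist
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-egressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        # Placeholder source IPs/CIDRs allowed to use the egress gateway.
        ipBlocks: ["10.0.0.0/8", "192.168.1.10"]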
Additional resources:
https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/#ip-based-allow-list-and-deny-list
Istio authorization policy not applying on child gateway
https://istio.io/latest/docs/reference/config/security/authorization-policy/#Source
https://github.com/salrashid123/istio_helloworld#egress-rules

GKE: Pubsub messages between pods with push subscribers

I am using a GKE deployment with multiple pods and I need to send and receive messages between pods. I want to use Pub/Sub push subscribers.
I found that for push I need to configure HTTPS access for the subscriber pods.
In order to receive push messages, you need a publicly accessible HTTPS server to handle POST requests. The server must present a valid SSL certificate signed by a certificate authority and routable by DNS. You also need to validate that you own the domain (or have equivalent access to the endpoint).
Is this really required, or is there some workaround? Does it mean I should expose each subscriber pod with an Ingress, even for internal communication?
If you only need the pods to be exposed on a certain port (for pod-to-pod communication), then you would just need to expose each pod via a Service that targets that port (in your case, port 443).
For example, the following YAML creates a Service which targets a port on one or more pods:
apiVersion: v1
kind: Service
metadata:
  name: my-pod
  labels:
    run: my-pod
spec:
  ports:
  - port: 443
    targetPort: 443
    protocol: TCP
  selector:
    run: my-pod
The above would create a Service which targets TCP port 443 on any Pod with the run: my-pod label. In the file, targetPort is the port the container (within the pod) accepts traffic on, and port is the abstracted Service port, which is the port other pods use to access the Service.
EDIT:
However, if you need the pods to be able to communicate with the Pub/Sub API, then the ability to communicate externally is required, so yes, an Ingress would be recommended.
In response to your question in the comment "I wonder why Google needs to access Kubernetes with public HTTPS instead of some internal request": the reason is that it isn't an internal request. The Pub/Sub API sits outside of your project/network, so data travels across other networks. For it to be secure, it needs to be encrypted; this is the reason HTTPS is used.