I want to block traffic between nodes in Kubernetes.
This is because I don't want to have any effect on the traffic coming from the specific pod.
How can I do this?
You can use network policies for this. The following example NetworkPolicy blocks all in-cluster traffic to a set of web server pods, except traffic from the pods allowed by the policy.
To achieve this setup, create a NetworkPolicy with the following manifest:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: foo
Once you apply this configuration, only pods with the label app: foo can talk to the pods with the label app: nginx.
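To sanity-check the policy, here is a rough sketch (assuming your CNI plugin enforces NetworkPolicies, that everything is in the same namespace, and that a Service named nginx fronts the app: nginx pods on port 80; the pod names and the busybox image are just placeholders):

# Should succeed: the pod carries the allowed label app=foo
kubectl run allowed --rm -it --restart=Never --image=busybox --labels=app=foo -- \
  wget -qO- -T 5 http://nginx

# Should time out: the pod carries a different label
kubectl run blocked --rm -it --restart=Never --image=busybox --labels=app=other -- \
  wget -qO- -T 5 http://nginx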
Related
Problem: I am currently using ingress-nginx in my EKS cluster to route traffic to services that need public access.
My use case: I have services I want to deploy in the same cluster but don't want them to have public access. I only want the pods to communicate with all other services within the cluster. Those pods are meant to be private because they're backend services and only need pod-to-pod communication. How do I modify my ingress resource for this purpose?
Cluster Architecture: All services are in the private subnets of the cluster while the load-balancer is in the public subnets
Additional note: I am using external-dns to dynamically create the subdomains for the hosted zones. The hosted zone is public
Thanks
Below are my service.yml and ingress.yml for public services. I want to modify these files for private services
service.yml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: myapp
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.dev.com
spec:
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: myapp
ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "nginx"
  labels:
    app: myapp
spec:
  tls:
  - hosts:
    - myapp.dev.com
    secretName: myapp-staging
  rules:
  - host: myapp.dev.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: 'myapp'
            port:
              number: 80
From what you have, the Ingress should already work and your Services are effectively private (given this setup in your public cloud cluster), except for the Ingress itself. You can update the ConfigMap to use the PROXY protocol so that proxy information is passed to the Ingress Controller:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-protocol: "True"
  real-ip-header: "proxy_protocol"
  set-real-ip-from: "0.0.0.0/0"
And then: kubectl apply -f common/nginx-config.yaml
Now you can deploy any app that you want to keep private under the name specified (for example, the myapp Service in the YAML you provided).
If you are new to Kubernetes networking, then this article or the official Kubernetes documentation would be useful for you.
Here you can find other ELB annotations that may be useful for you
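As an additional sketch (assuming the backend only ever needs in-cluster, pod-to-pod access), a private variant of your Service can simply omit the external-dns annotation, keep the default ClusterIP type, and have no Ingress rule pointing at it. The name myapp-private below is a placeholder:

apiVersion: v1
kind: Service
metadata:
  name: myapp-private    # hypothetical in-cluster-only backend service
  namespace: myapp
spec:
  type: ClusterIP        # default type; not reachable from outside the cluster
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: myapp-private   # assumes the backend pods carry this label

Other pods can then reach it at myapp-private.myapp.svc.cluster.local (with the default cluster DNS domain), while nothing is exposed through the public load balancer.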
When I delete a Service and recreate it, I've noticed that the status of the Ingress indicates "Some backend services are in UNKNOWN state".
After some trial and error, it seems to be related to the name of the network endpoint group (NEG). The NEG tied to the new Service has a different name, but the Ingress keeps the old NEG as its backend service.
I found that things work again after I recreate the Ingress.
I'd like to avoid the downtime of recreating the Ingress as much as possible.
Is there a way to avoid recreating ingress when recreating services?
My Service
apiVersion: v1
kind: Service
metadata:
  name: client-service
  labels:
    app: client
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: client
My Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: static-ip-name
    networking.gke.io/managed-certificates: managed-certificate
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: client-service
          servicePort: 80
If you want to re-use the ingress when the service disappears, you can edit its configuration instead of deleting and recreating it.
To reconfigure the Ingress you will have to update it by editing the configuration, as specified in the official Kubernetes documentation.
To do this, you can perform the following steps:
Issue the command kubectl edit ingress test
Perform the necessary changes, like updating the service configuration
Save the changes
kubectl will update the resource, and trigger an update on the load balancer.
Verify the changes by executing the command kubectl describe ingress test
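If you prefer a non-interactive route, the same change can be made with kubectl patch; here is a sketch using the Ingress name ingress from the manifest above, with client-service-v2 standing in for a hypothetically renamed Service:

kubectl patch ingress ingress --type=json \
  -p='[{"op": "replace", "path": "/spec/rules/0/http/paths/0/backend/serviceName", "value": "client-service-v2"}]'

Then run kubectl describe ingress ingress to check that the backend and its NEG look healthy again.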
I'm very new to kubernetes. I have spent the last week learning about Nodes, Pods, Clusters, Services, and Deployments.
With that I'm trying to just get some more understanding of how the networking for kubernetes even works. I just want to expose a simple nginx docker webpage and hit it from my browser.
Our VPC is set up with a Direct Connect, so I'm able to hit EC2 instances on their private IP addresses. I also set up the EKS cluster as private using the AWS console for now. For testing purposes I have added my CIDR range to be allowed on all TCP as an additional security group in the EKS cluster UI.
Here are my basic Service and Deployment definitions:
apiVersion: v1
kind: Service
metadata:
  name: testing-nodeport
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  type: NodePort
  selector:
    app: testing-app
  ports:
  - port: 80
    targetPort: testing-port
    protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-deployment
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  replicas: 1
  selector:
    matchLabels:
      infrastructure: fargate
      app: testing-app
  template:
    metadata:
      labels:
        infrastructure: fargate
        app: testing-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - name: testing-port
          containerPort: 80
I can see that everything is running correctly when I run:
kubectl get all -n default
However, when I try to hit the NodePort IP address on port 80 I can't load it from the browser.
I can hit the pod if I first set up a kubectl proxy at the following URL (as the proxy is started on port 8001):
http://localhost:8001/api/v1/namespaces/default/services/testing-nodeport:80/proxy/
I'm pretty much lost at this point. I don't know what I'm doing wrong and why I can't hit the basic nginx docker outside of the kubectl proxy command.
What if you use port forwarding? Something like this:
kubectl port-forward -n default service/testing-nodeport 3000:80
Forwarding from 127.0.0.1:3000 -> 80
Forwarding from [::1]:3000 -> 80
After this, you can access your K8S service from localhost:3000. More info here
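For example, with the port-forward running in another terminal, a quick check could be:

curl -I http://localhost:3000

which should return the nginx response headers if the Service and pod are wired up correctly.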
Imagine that the Kubernetes cluster is like your AWS VPC. It has its own internal network with private IPs and connects all the pods. Kubernetes only exposes things which you explicitly ask to expose.
Service port 80 is available within the cluster, so one pod can talk to this service using service-name:service-port. But if you need access from outside, you need an ingress controller or a LoadBalancer. You can also use a NodePort for testing purposes. The node port will be in the range 30000-32767.
You should be able to access nginx using node-IP:nodePort. This assumes you have a security group opening the node port.
Use this YAML; I updated the node port to 31000, so you can access nginx on node-IP:31000. As mentioned, you cannot use 80 from outside, as the service port is only for in-cluster access. If you need port 80 externally, you need an ingress controller.
apiVersion: v1
kind: Service
metadata:
  name: testing-nodeport
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  type: NodePort
  selector:
    app: testing-app
  ports:
  - port: 80
    targetPort: testing-port
    protocol: TCP
    nodePort: 31000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-deployment
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  replicas: 1
  selector:
    matchLabels:
      infrastructure: fargate
      app: testing-app
  template:
    metadata:
      labels:
        infrastructure: fargate
        app: testing-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - name: testing-port
          containerPort: 80
Okay, after 16+ hours of debugging I finally figured out what's going on. On Fargate you can't set security groups per node like you can with managed node groups. I was setting the security group rules in the "Additional security groups" setting. However, Fargate apparently ignores those settings completely and ONLY uses the security group from your "Cluster security group" setting. So in the EKS UI I set the correct rules in the "Cluster security group", and I can now hit my pod directly on a Fargate instance.
Big take away from this. Only use "Cluster security group" for fargate nodes.
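For reference, a sketch of that rule from the CLI (the security group ID and CIDR below are placeholders; the cluster security group ID is shown in the EKS console or by aws eks describe-cluster):

# sg-0123456789abcdef0: placeholder for the EKS "Cluster security group"
# 10.0.0.0/8: placeholder for the CIDR you reach the cluster from over Direct Connect
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31000 \
  --cidr 10.0.0.0/8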
I am trying to write a network policy on Kubernetes that works under AWS EKS. What I want to achieve is to allow traffic to pod/pods from the same Namespace and allow external traffic that is forwarded from AWS ALB Ingress.
The AWS ALB Ingress is created in the same namespace, so I was thinking that only a DENY-all-traffic-from-other-namespaces policy would suffice, but with that policy, traffic from the ALB Ingress load balancer (whose internal IP addresses are in the same namespace as the pod/pods) is not allowed. If I then add ALLOW traffic from external clients, it allows the Ingress but ALSO allows other namespaces too.
So my example is like: (this does not work as expected)
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: os
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-external
  namespace: os
spec:
  podSelector:
    matchLabels:
      app: nginx
      tier: prod
      customer: os
  ingress:
  - ports:
    - port: 80
    from: []
When using the first policy, the ALB Ingress is blocked; adding the second one allows the other namespaces as well, which I don't want. I could allow only the internal IP addresses of the AWS ALB Ingress, but they can change over time since the load balancer is created dynamically.
The semantics of the built-in Kubernetes NetworkPolicies are kind of fiddly. There are no deny rules, only allow rules.
The way they work is if no network policies apply to a pod, then all traffic is allowed. Once there is a network policy that applies to a pod, then all traffic not allowed by that policy is blocked.
In other words, you can't say something like "deny this traffic, allow all the rest". You have to explicitly allow everything you do want, and the rest is blocked.
The documentation for the AWS ALB Ingress controller states that traffic can either be sent to a NodePort for your service, or directly to pods. This means that the traffic originates from an AWS IP address outside the cluster.
For traffic that has a source that isn't well-defined, such as traffic from AWS ALB, this can be difficult - you don't know what the source IP address will be.
If you are trying to allow traffic from the Internet using the ALB, then it means anyone that can reach the ALB will be able to reach your pods. In that case, there's effectively no meaning to blocking traffic within the cluster, as the pods will be able to connect to the ALB, even if they can't connect directly.
My suggestion then is to just create a network policy that allows all traffic to the pods the Ingress covers, but have that policy as specific as possible - for example, if the Ingress accesses a specific port, then have the network policy only allow that port. This way you can minimize the attack surface within the cluster only to that which is Internet-accessible.
Any other traffic to these pods will need to be explicitly allowed.
For example:
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-external
spec:
podSelector:
matchLabels:
app: <your-app> # app-label
ingress:
- from: []
ports:
- port: 1234 # the port which should be Internet-accessible
This is actually a problem we faced when implementing the Network Policy plugin for the Otterize Intents operator - the operator lets you declare which pods you want to connect to within the cluster and block all the rest by automatically creating network policies and labeling pods, but we had to do that without inadvertently blocking external traffic once the first network policy had been created.
We settled on automatically detecting whether a Service resource of type LoadBalancer or NodePort exists, or an Ingress resource, and creating a network policy that allows all traffic to those ports, as in the example above. A potential improvement for that is to support specific Ingress controllers that have in-cluster pods (so, not AWS ALB, but could be nginx ingress controller, for example), and only allowing traffic from the specific ingress pods.
Have a look here: https://github.com/otterize/intents-operator
And the documentation page explaining this: https://docs.otterize.com/components/intents-operator/#network-policies
If you want to use this and add support for the specific Ingress controller you're using, hop onto the Slack or open an issue and we can work on it together.
By design of the Kubernetes NetworkPolicy API, if an endpoint is accessible externally, it does not make much sense to block it for other namespaces. (After all, it can be reached from other namespaces via the public LB too, so an internal firewall for an endpoint that's already publicly accessible achieves little.) Back in the day when this API was being designed, this is what I was told.
However, you might find that certain CNI plugins (Calico, Cilium, etc.) provide non-standard CRD APIs that have explicit "deny" operations that supersede "allow"s. They can solve your problem.
And finally, the answer depends on the CNI plugin implementation, how AWS does ALBs in terms of Kubernetes networking, and how that CNI plugin deals with that. There's no easy answer short of asking the CNI provider (or reading their docs).
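For illustration only, here is a rough sketch of what such a deny rule can look like with Calico's projectcalico.org/v3 NetworkPolicy CRD (field names from memory; double-check them against the Calico docs for your version before relying on this):

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-deny-rest
  namespace: os
spec:
  order: 100                 # lower order is evaluated first
  selector: app == 'nginx'   # the pods to protect
  types:
  - Ingress
  ingress:
  - action: Allow            # allow traffic from pods in the same namespace
    source:
      namespaceSelector: projectcalico.org/name == 'os'
  - action: Deny             # explicitly deny everything else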
Example:
FrontEnd application in namespace spacemyapp and pods with labels app: fe-site and tier: frontend
BackEnd application in namespace spacemyapp and pods with labels app: be-site and tier: backend
Frontend service is exposed as NodePort
apiVersion: v1
kind: Service
metadata:
  namespace: spacemyapp
  name: service-fe-site
  labels:
    app: fe-site
spec:
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: fe-site
    tier: frontend
Ingress controller in namespace spacemyapp with the following annotations:
annotations:
  alb.ingress.kubernetes.io/group.name: sgalbfe
  alb.ingress.kubernetes.io/target-type: instance
  alb.ingress.kubernetes.io/certificate-arn: arn:aws:xxxxxx/yyyyyy
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
  alb.ingress.kubernetes.io/ssl-redirect: '443'
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/inbound-cidrs: "89.186.39.0/24"
NetworkPolicy:
Default deny for namespace spacemyapp
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: spacemyapp
  name: default-deny
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  - Egress
  ingress: []
  egress: []
Backend policy to permit access only from frontend
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: spacemyapp
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      app: be-site
      tier: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: fe-site
          tier: frontend
    ports:
    - protocol: TCP
      port: 8090
  egress:
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
The frontend policy's ingress rule allows pods in namespace spacemyapp with labels app: fe-site and tier: frontend to receive traffic from all namespaces, pods and IP addresses on port 8080 (the port Apache listens on inside the frontend pods, not the port of the related service-fe-site!). Its egress rules allow those same pods to connect to pods labelled k8s-app: kube-dns in all namespaces on UDP port 53, and to connect to pods labelled app: be-site and tier: backend in namespaces labelled name: spacemyapp on TCP port 8090.
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: spacemyapp
  name: frontend-policy
spec:
  podSelector:
    matchLabels:
      app: fe-site
      tier: frontend
  ingress:
  - from: []
    ports:
    - port: 8080
  egress:
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
  - to:
    - namespaceSelector:
        matchLabels:
          name: spacemyapp
      podSelector:
        matchLabels:
          app: be-site
          tier: backend
    ports:
    - port: 8090
I have tried this configuration and it works; the health checks on the ALB target group do not fail.
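To spot-check the isolation, something like the following could be used (test-client, the default namespace, and <backend-pod-ip> are placeholders; the pod IP can be read from kubectl get pods -n spacemyapp -l app=be-site -o wide):

# Should time out if the policies are enforced: the backend only accepts
# traffic from the frontend pods in spacemyapp.
kubectl run test-client -n default --rm -it --restart=Never --image=busybox -- \
  wget -qO- -T 5 http://<backend-pod-ip>:8090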
I want to use an existing AWS ALB for my Kubernetes setup, i.e. I don't want the alb-ingress-controller to create or update any existing AWS resources (target groups, roles, etc.).
How can I make the ALB communicate with the Kubernetes cluster, passing requests on to the existing services and returning the responses to the ALB for the front end?
I tried this, but it creates a new ALB for each new ingress resource. I want to use the existing one.
You basically have to open a node port on the instances where the Kubernetes Pods are running. Then you need to let the ALB point to those instances. There are two ways of configuring this: either via Pods or via Services.
To configure it via a Service you need to specify .spec.ports[].nodePort. In the default setup the port needs to be between 30000 and 32767. This port gets opened on every node and will be redirected to the specified Pods (which might be on any other node). This has the downside that there is another hop, which can also cost money in a multi-AZ setup. An example Service could look like this:
---
apiVersion: v1
kind: Service
metadata:
  name: my-frontend
  labels:
    app: my-frontend
spec:
  type: NodePort
  selector:
    app: my-frontend
  ports:
  - port: 8080
    nodePort: 30082
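The existing ALB then needs a target group that forwards to this node port on the worker nodes; here is a sketch with the AWS CLI, where the target group ARN and instance IDs are placeholders:

aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/my-frontend/0123456789abcdef \
  --targets Id=i-0123456789abcdef0,Port=30082 Id=i-0fedcba9876543210,Port=30082

The target group's health check should also point at the node port, and the nodes' security groups must allow traffic from the ALB on that port.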
To configure it via a Pod you need to specify .spec.containers[].ports[].hostPort. This can be any port number, but it has to be free on the node where the Pod gets scheduled. This means that there can only be one Pod per node and it might conflict with ports from other applications. This has the downside that not all instances will be healthy from an ALB point-of-view, since only nodes with that Pod accept traffic. You could add a sidecar container which registers the current node on the ALB, but this would mean additional complexity. An example could look like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-frontend
  labels:
    app: my-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-frontend
  template:
    metadata:
      name: my-frontend
      labels:
        app: my-frontend
    spec:
      containers:
      - name: nginx
        image: "nginx"
        ports:
        - containerPort: 80
          hostPort: 8080