How to integrate Kubernetes with existing AWS ALB?

I want to use an existing AWS ALB for my Kubernetes setup, i.e. I don't want the alb-ingress-controller to create or update any existing AWS resources (target groups, roles, etc.).
How can I make the ALB communicate with the Kubernetes cluster, passing requests through to the existing services and returning their responses to the ALB for the front end?
I tried this, but it creates a new ALB for each new Ingress resource. I want to use the existing one.

You basically have to open a node port on the instances where the Kubernetes Pods are running. Then you need to let the ALB point to those instances. There are two ways of configuring this. Either via Pods or via Services.
To configure it via a Service you need to specify .spec.ports[].nodePort. In the default setup the port needs to be in the range 30000-32767. This port gets opened on every node and is redirected to the selected Pods (which might be running on any other node). This has the downside of an extra hop, which can also cost money in a multi-AZ setup. An example Service could look like this:
---
apiVersion: v1
kind: Service
metadata:
  name: my-frontend
  labels:
    app: my-frontend
spec:
  type: NodePort
  selector:
    app: my-frontend
  ports:
    - port: 8080
      nodePort: 30082
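If you go this route with an existing ALB, you then attach the node port to the ALB yourself by registering the instances with the ALB's target group. A minimal sketch using the AWS CLI, assuming a target group with target type "instance" already exists (the target group ARN and instance ID are placeholders):
aws elbv2 register-targets \
  --target-group-arn <existing-target-group-arn> \
  --targets Id=<node-instance-id>,Port=30082
aws elbv2 describe-target-health \
  --target-group-arn <existing-target-group-arn>
The second command lets you confirm that the registered nodes pass the target group's health checks on the node port; you can repeat the Id=...,Port=... pair for every worker node.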
To configure it via a Pod you need to specify .spec.containers[].ports[].hostPort. This can be any port number, but it has to be free on the node where the Pod gets scheduled. This means that there can only be one Pod per node and it might conflict with ports from other applications. This has the downside that not all instances will be healthy from an ALB point-of-view, since only nodes with that Pod accept traffic. You could add a sidecar container which registers the current node on the ALB, but this would mean additional complexity. An example could look like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-frontend
  labels:
    app: my-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-frontend
  template:
    metadata:
      name: my-frontend
      labels:
        app: my-frontend
    spec:
      containers:
        - name: nginx
          image: "nginx"
          ports:
            - containerPort: 80
              hostPort: 8080
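With the hostPort approach you can quickly check which nodes actually expose the port and probe them directly; a small sanity check, assuming you can reach the node IPs from where you run it:
kubectl get pods -l app=my-frontend -o wide   # shows which node each Pod landed on
curl -s http://<node-ip>:8080/                # only answers on nodes that run a Pod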

Related

How can I restrict traffic between nodes in Kubernetes?

I want to block traffic between nodes in Kubernetes. This is because I don't want it to have any effect on the traffic coming from a specific pod.
How can I do this?
You can use network policies for this (note that your CNI plugin must support NetworkPolicy enforcement for them to have any effect). The following example of a network policy blocks all in-cluster traffic to a set of web server pods, except traffic from the pods allowed by the policy configuration.
To achieve this setup, create a NetworkPolicy with the following manifest:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: foo
Once you apply this configuration, only pods with the label app: foo can talk to the pods with the label app: nginx.
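You can verify the policy with two throwaway pods, assuming the nginx pods are reachable through a Service named nginx in the same namespace:
kubectl run test-allowed --rm -it --restart=Never --labels="app=foo" --image=busybox \
  -- wget -q --spider --timeout=2 http://nginx   # should succeed
kubectl run test-blocked --rm -it --restart=Never --labels="app=bar" --image=busybox \
  -- wget -q --spider --timeout=2 http://nginx   # should time out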

AWS EKS, How To Hit Pod directly from browser?

I'm very new to Kubernetes. I have spent the last week learning about Nodes, Pods, Clusters, Services, and Deployments.
With that, I'm trying to get a better understanding of how Kubernetes networking works. I just want to expose a simple nginx web page and hit it from my browser.
Our VPC is set up with a Direct Connect, so I'm able to hit EC2 instances on their private IP addresses. I also set up the EKS cluster as private using the AWS console for now. For testing purposes I have added an additional security group in the EKS cluster UI that allows my CIDR range on all TCP ports.
Here is my basic service and deployment definitions:
apiVersion: v1
kind: Service
metadata:
  name: testing-nodeport
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  type: NodePort
  selector:
    app: testing-app
  ports:
    - port: 80
      targetPort: testing-port
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-deployment
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  replicas: 1
  selector:
    matchLabels:
      infrastructure: fargate
      app: testing-app
  template:
    metadata:
      labels:
        infrastructure: fargate
        app: testing-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - name: testing-port
              containerPort: 80
I can see that everything is running correctly when I run:
kubectl get all -n default
However, when I try to hit the NodePort service on port 80 via a node's IP address, I can't load it from the browser.
I can hit the pod if I first set up a kubectl proxy and use the following URL (the proxy listens on port 8001):
http://localhost:8001/api/v1/namespaces/default/services/testing-nodeport:80/proxy/
I'm pretty much lost at this point. I don't know what I'm doing wrong and why I can't reach the basic nginx container outside of the kubectl proxy.
What if you use port forwarding? Something like this:
kubectl port-forward -n default service/testing-nodeport 3000:80
Forwarding from 127.0.0.1:3000 -> 80
Forwarding from [::1]:3000 -> 80
After this, you can access your Kubernetes Service at localhost:3000.
Imagine that the Kubernetes cluster is like your AWS VPC. It has its own internal network with private IPs that connects all the Pods. Kubernetes only exposes what you explicitly ask it to expose.
The Service port 80 is available within the cluster, so one pod can talk to this service using service-name:service-port. But if you need access from outside, you need an ingress controller or a LoadBalancer. You can also use a NodePort for testing purposes; the node port will be in the 30000-32767 range.
You should be able to access nginx at node-IP:nodePort, assuming a security group opens the node port.
Use this YAML. I set the node port explicitly to 31000, so you can access nginx at node-IP:31000. As mentioned, you cannot use 80 here because that is the in-cluster Service port; if you need port 80 externally, you need an ingress controller.
apiVersion: v1
kind: Service
metadata:
  name: testing-nodeport
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  type: NodePort
  selector:
    app: testing-app
  ports:
    - port: 80
      targetPort: testing-port
      protocol: TCP
      nodePort: 31000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-deployment
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  replicas: 1
  selector:
    matchLabels:
      infrastructure: fargate
      app: testing-app
  template:
    metadata:
      labels:
        infrastructure: fargate
        app: testing-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - name: testing-port
              containerPort: 80
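Once the Service pins the node port, you can look up a node IP and test it from your network; a quick check, assuming the security groups allow your source CIDR:
kubectl get nodes -o wide                     # note a node's INTERNAL-IP
kubectl get svc testing-nodeport -n default   # should now show 80:31000/TCP
curl -s http://<node-internal-ip>:31000/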
Okay, after 16+ hours of debugging this I finally figured out what's going on. On Fargate you can't set security groups per node like you can with managed node groups. I was setting the security group rules in the "Additional security groups" settings, but Fargate apparently ignores those entirely and ONLY uses the security group from the "Cluster security group" setting. So in the EKS UI I set the correct rules on the "Cluster security group", and I can now hit my pod directly on a Fargate instance.
Big takeaway: only the "Cluster security group" matters for Fargate nodes.
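If you prefer to script that check rather than click through the console, something along these lines should work with the AWS CLI (the cluster name, security group ID and CIDR are placeholders):
aws eks describe-cluster --name my-cluster \
  --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' --output text   # the SG Fargate pods actually use
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 10.0.0.0/16  # open the pod port for your CIDR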

Kubernetes NetworkPolicy and only allow traffic from same Namespace and from ALB Ingress

I am trying to write a NetworkPolicy on Kubernetes running under AWS EKS. What I want to achieve is to allow traffic to a pod (or pods) from the same namespace, and to allow external traffic that is forwarded from the AWS ALB Ingress.
The AWS ALB Ingress is created in the same namespace, so I thought that a "deny all traffic from other namespaces" policy alone would suffice, but with that policy traffic from the ALB (whose internal IP addresses are in the same namespace as the pods) is not allowed. If I then add an "allow traffic from external clients" policy, the Ingress traffic works, but other namespaces are allowed too.
My example looks like this (it does not work as expected):
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: os
spec:
  podSelector:
    matchLabels:
  ingress:
    - from:
        - podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-external
  namespace: os
spec:
  podSelector:
    matchLabels:
      app: nginx
      tier: prod
      customer: os
  ingress:
    - ports:
        - port: 80
      from: []
With the first policy alone, ALB Ingress traffic is blocked; after adding the second one, other namespaces are allowed as well, which I don't want. I could allow only the internal IP addresses of the AWS ALB, but those are created dynamically and can change over time.
The semantics of the built-in Kubernetes NetworkPolicies are kind of fiddly. There are no deny rules, only allow rules.
The way they work is if no network policies apply to a pod, then all traffic is allowed. Once there is a network policy that applies to a pod, then all traffic not allowed by that policy is blocked.
In other words, you can't say something like "deny this traffic, allow all the rest". You can only enumerate what is allowed; everything else is blocked once a policy selects the pod.
The documentation for the AWS ALB Ingress controller states that traffic can either be sent to a NodePort for your service, or directly to pods. This means that the traffic originates from an AWS IP address outside the cluster.
For traffic that has a source that isn't well-defined, such as traffic from AWS ALB, this can be difficult - you don't know what the source IP address will be.
If you are trying to allow traffic from the Internet via the ALB, it means anyone who can reach the ALB will be able to reach your pods. In that case there's little point in blocking that same traffic within the cluster, since other pods could reach your pods through the ALB anyway, even if they can't connect directly.
My suggestion then is to just create a network policy that allows all traffic to the pods the Ingress covers, but have that policy as specific as possible - for example, if the Ingress accesses a specific port, then have the network policy only allow that port. This way you can minimize the attack surface within the cluster only to that which is Internet-accessible.
Any other traffic to these pods will need to be explicitly allowed.
For example:
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-external
spec:
  podSelector:
    matchLabels:
      app: <your-app> # app-label
  ingress:
    - from: []
      ports:
        - port: 1234 # the port which should be Internet-accessible
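For example, assuming the manifest is saved as allow-external.yaml, you can apply it and confirm which pods and ports it covers:
kubectl apply -f allow-external.yaml
kubectl describe networkpolicy allow-external   # shows the pod selector and the allowed ingress ports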
This is actually a problem we faced when implementing the Network Policy plugin for the Otterize Intents operator - the operator lets you declare which pods you want to connect to within the cluster and block all the rest by automatically creating network policies and labeling pods, but we had to do that without inadvertently blocking external traffic once the first network policy had been created.
We settled on automatically detecting whether a Service resource of type LoadBalancer or NodePort exists, or an Ingress resource, and creating a network policy that allows all traffic to those ports, as in the example above. A potential improvement for that is to support specific Ingress controllers that have in-cluster pods (so, not AWS ALB, but could be nginx ingress controller, for example), and only allowing traffic from the specific ingress pods.
Have a look here: https://github.com/otterize/intents-operator
And the documentation page explaining this: https://docs.otterize.com/components/intents-operator/#network-policies
If you want to use this and add support for the specific Ingress controller you're using, hop onto the Slack or open an issue and we can work on it together.
By design (of the Kubernetes NetworkPolicy API), if an endpoint is accessible externally, it does not make sense to block it for other namespaces. (After all, it can be reached via the public LB from other namespaces too, so an internal firewall for an endpoint that's already publicly accessible adds little.) Back in the day when this API was being designed, this is what I was told.
However you might find that certain CNI plugins (Calico, Cilium etc) provide non-standard CRD APIs that have explicit “deny” operations that supersede “allow”s. They can solve your problem.
And finally, the answer depends on the CNI plugin implementation, how AWS does ALBs in terms of Kubernetes networking and how that CNI plugin deals with that. There’s no easy answer short of asking the CNI provider (or their docs).
Example:
Frontend application in namespace spacemyapp, pods with labels app: fe-site and tier: frontend
Backend application in namespace spacemyapp, pods with labels app: be-site and tier: backend
The frontend service is exposed as a NodePort:
apiVersion: v1
kind: Service
metadata:
  namespace: spacemyapp
  name: service-fe-site
  labels:
    app: fe-site
spec:
  type: NodePort
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    app: fe-site
    tier: frontend
An Ingress in namespace spacemyapp with the following annotations:
annotations:
  alb.ingress.kubernetes.io/group.name: sgalbfe
  alb.ingress.kubernetes.io/target-type: instance
  alb.ingress.kubernetes.io/certificate-arn: arn:aws:xxxxxx/yyyyyy
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
  alb.ingress.kubernetes.io/ssl-redirect: '443'
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/inbound-cidrs: "89.186.39.0/24"
NetworkPolicies:
A default deny for namespace spacemyapp:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: spacemyapp
  name: default-deny
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
    - Ingress
    - Egress
  ingress: []
  egress: []
A backend policy that permits access only from the frontend:
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: spacemyapp
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      app: be-site
      tier: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: fe-site
              tier: frontend
      ports:
        - protocol: TCP
          port: 8090
  egress:
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - port: 53
          protocol: UDP
The frontend policy's ingress rule allows pods in namespace spacemyapp with labels app: fe-site and tier: frontend to receive traffic from all namespaces, pods and IP addresses on port 8080 (that is the port Apache listens on inside the frontend pods, not the port of the service-fe-site Service in front of them). Its egress rules allow those same pods to connect to pods with the label k8s-app: kube-dns in all namespaces on UDP port 53, and to pods with labels app: be-site and tier: backend in namespaces labelled name: spacemyapp on TCP port 8090.
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: spacemyapp
  name: frontend-policy
spec:
  podSelector:
    matchLabels:
      app: fe-site
      tier: frontend
  ingress:
    - from: []
      ports:
        - port: 8080
  egress:
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - port: 53
          protocol: UDP
    - to:
        - namespaceSelector:
            matchLabels:
              name: spacemyapp
          podSelector:
            matchLabels:
              app: be-site
              tier: backend
      ports:
        - port: 8090
I have tried this configuration and it works, and the health checks on the ALB target group do not fail.
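One detail worth double-checking with this setup: the namespaceSelector in frontend-policy matches on a name: spacemyapp label, which namespaces do not carry by default (recent Kubernetes versions only set kubernetes.io/metadata.name automatically). If you rely on the custom label, add it yourself:
kubectl label namespace spacemyapp name=spacemyapp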

Access AWS cluster endpoint running Kubernetes

I am new to Kubernetes and I am currently deploying a cluster in AWS using kubeadm. The containers are deployed just fine, but I can't seem to access them with my browser. When I used to do this via Docker Swarm, I could simply use the IP address of the AWS node to access and log in to my application from the browser, but this does not seem to work with my current Kubernetes setup.
Therefore my question is how can I access my running application under these new settings?
You should read about how to use Services in Kubernetes:
A Kubernetes Service is an abstraction which defines a logical set of
Pods and a policy by which to access them - sometimes called a
micro-service.
Basically, a Service allows a Deployment (or Pod) to be reached from inside or outside the cluster.
In your case, if you want to expose a single service in AWS, it is as simple as:
apiVersion: v1
kind: Service
metadata:
  name: my-app           # Service names must be lowercase DNS labels, so "myApp" would be rejected
  labels:
    app: myApp
spec:
  ports:
    - port: 80           # port that the service exposes
      targetPort: 8080   # port of a container in "myApp"
  selector:
    app: myApp           # your deployment must have the label "app: myApp"
  type: LoadBalancer
You can check whether the Service was created successfully in the AWS EC2 console under "Elastic Load Balancers", or with kubectl describe service my-app.
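For example, to print just the DNS name of the load balancer that AWS provisions for the Service (assuming the name my-app from the manifest above):
kubectl get service my-app \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'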
Both answers were helpful in my pursuit of a solution to my problem, but I ended up getting lost in the details. Here is an example that may help others in a similar situation:
1) Consider the following application yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-web-app
  labels:
    app: my-web-app
spec:
  serviceName: my-web-app
  replicas: 1
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: my-web-app
          image: myregistry:443/mydomain/my-web-app
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              name: cp
2) I decided to adopt NodePort (thank you @Leandro for pointing it out) to expose my service, hence I added the following to my application YAML:
---
apiVersion: v1
kind: Service
metadata:
  name: my-web-app
  labels:
    name: my-web-app
spec:
  type: NodePort
  ports:
    - name: http1
      port: 80
      nodePort: 30036
      targetPort: 8080
      protocol: TCP
  selector:
    app: my-web-app   # must match the Pod labels set in the StatefulSet template
One thing that I was missing is that the Service's selector must match the Pod labels defined in the StatefulSet template in order to link my-web-app:StatefulSet (1) to my-web-app:Service (2). Then, my-web-app:StatefulSet:containerPort must be the same as my-web-app:Service:targetPort (8080). Finally, my-web-app:Service:nodePort is the port that we expose publicly, and it must be a value between 30000 and 32767.
3) The last step is to ensure that the security group in AWS allows inbound traffic on the chosen my-web-app:Service:nodePort, in this case 30036; if not, add the rule.
After following these steps I was able to access my application via aws-node-ip:30036/my-web-app.
Kubernetes is constructed differently. Your containers are hidden from the outside world unless you create a Service to expose them, such as a LoadBalancer or NodePort. If you create a Service of type ClusterIP, it will be available only from inside the cluster. For simplicity, use port forwarding to test your containers; if everything is working, then create a Service to expose them (NodePort or LoadBalancer). The better but more involved approach is to create an Ingress to handle inbound traffic and route it to the Services.
Port forwarding example:
kubectl port-forward redis-master-765d459796-258hz 6379:6379
Replace the pod name and ports with your own pod's name and the appropriate container port.

Why is Kubernetes (AWS EKS) registering all workers to the Load Balancer?

I want to know whether this is a default behaviour or something wrong with my setup.
I have 150 workers running on kubernetes.
I made a set of 10 Kubernetes workers run only a specific deployment using a nodeSelector, and I created a Service (type=LoadBalancer) for it. When the load balancer was created, all 150 Kubernetes workers were registered with it, while I was expecting to see only the 10 workers backing this deployment/service.
It behaved the same with the alb-ingress-controller and an AWS NLB.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 8080
  type: LoadBalancer
and the deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 10
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: master-api
          image: private/my-app:prod
          resources:
            requests:
              memory: 8000Mi
          ports:
            - containerPort: 8080
      nodeSelector:
        role: api
I have already labeled the 10 worker nodes with role=api.
Those 10 nodes run only pods of this deployment, and no other worker runs this service.
I also don't have another service or container using port 8080.
The ALB controller doesn't actually check your node labels; it looks purely at the tags on your subnets. So if your worker nodes are running inside a subnet tagged with kubernetes.io/role/alb-ingress (or something like that), all the worker nodes in it will be added to the load balancer.
I believe it is part of the auto-discovery mechanism; see the docs.
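Registering every node is the usual behaviour for instance targets. If the real concern is traffic being routed through nodes that don't run the pods, one option worth looking at (not mentioned in the answer above, so treat this as a sketch) is externalTrafficPolicy: Local: all nodes may still be registered, but only the nodes running a pod for the Service pass the load balancer health checks and receive traffic:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 8080
  type: LoadBalancer
  externalTrafficPolicy: Local   # health checks only pass on nodes that run a my-app pod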