AWS ALB service in Kubernetes - amazon-web-services

I want to deploy an Application Load Balancer for Traefik running in Kubernetes. So I tried the following:
kind: Service
apiVersion: v1
metadata:
  name: traefik-application-elb
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-type: "elb"
    service.beta.kubernetes.io/aws-load-balancer-name: "eks-traefik-application-elb"
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "App=traefik,Env=dev"
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      name: web
      port: 80
    - protocol: TCP
      name: websecure
      port: 443
  selector:
    app: traefik
But an internet-facing Classic Load Balancer was created.
I also tried service.beta.kubernetes.io/aws-load-balancer-type: "alb", but nothing changed.
What is wrong here?

From the docs: https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html
When you create a Kubernetes Service of type LoadBalancer, the AWS cloud provider load balancer controller creates AWS Classic Load Balancers by default, but can also create AWS Network Load Balancers.
ALB is not supported as a k8s load balancer. You can specify an NLB if desired.
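As a rough sketch (assuming the in-tree AWS cloud provider that handles Services of type LoadBalancer on EKS; the Service name here is only illustrative), requesting an NLB for the Traefik Service comes down to changing the type annotation:

kind: Service
apiVersion: v1
metadata:
  name: traefik-application-nlb   # illustrative name
  namespace: kube-system
  annotations:
    # "nlb" asks the cloud provider for a Network Load Balancer
    # instead of the default Classic Load Balancer
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      name: web
      port: 80
    - protocol: TCP
      name: websecure
      port: 443
  selector:
    app: traefik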
However, you can use an ALB as an ingress via the AWS Load Balancer Controller, with an Ingress resource like this - https://aws.amazon.com/blogs/containers/introducing-aws-load-balancer-controller/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: SearchApp
  annotations:
    # share a single ALB with all ingress rules with search-app-ingress
    alb.ingress.kubernetes.io/group.name: search-app-ingress
spec:
  defaultBackend:
    service:
      name: search-svc # route traffic to the service search-svc
      port:
        number: 80

Related

Updating the ingress nginx controller with a new ACM SSL certificate causes the HTTPS connection to stop working

Our ingress controller configuration for nginx and aws load balancer
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxx
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Local
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - appProtocol: http
      name: http
      port: 80
      protocol: TCP
      targetPort: http
    - appProtocol: https
      name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
example Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.class: nginx
  namespace: dev
spec:
  rules:
    - host: test.domain.com
      http:
        paths:
          - path: /main
            pathType: Prefix
            backend:
              service:
                name: main
                port:
                  number: 8000
We are experiencing an issue when changing the ACM SSL certificate: after updating the LoadBalancer configuration by updating the Kubernetes deployments, we observe that the new certificate is associated with the load balancer, but the HTTPS connection does not work. The only way to make it work was to remove all ingress-related resources (controller, jobs, load balancer) and create them again; after that, everything works.
I tried restarting the nginx controller. I am wondering whether the ACM cert is saved somewhere in Kubernetes when the ingress-nginx controller initializes, so that after the SSL cert value changes the load balancer is updated, but the nginx ingress controller still keeps a reference to the old SSL certificate?
How are you updating SSL ACM certs for an nginx ingress controller that uses an AWS Network Load Balancer? What is the correct way of updating the nginx ingress controller after updating the SSL cert in ACM?
Thanks,
Igor

Kubernetes NetworkPolicy: only allow traffic from the same Namespace and from the ALB Ingress

I am trying to write a network policy on Kubernetes that works under AWS EKS. What I want to achieve is to allow traffic to a pod/pods from the same namespace, and to allow external traffic that is forwarded from the AWS ALB Ingress.
The AWS ALB Ingress is created in the same namespace, so I was thinking that only a DENY-all-traffic-from-other-namespaces policy would suffice, but when I use that, traffic from the ALB Ingress load balancer (whose internal IP addresses are in the same namespace as the pod/pods) is not allowed. Then if I add ALLOW traffic from external clients, it allows the Ingress but ALSO allows other namespaces too.
So my example is as follows (this does not work as expected):
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: os
spec:
  podSelector:
    matchLabels:
  ingress:
    - from:
        - podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-external
  namespace: os
spec:
  podSelector:
    matchLabels:
      app: nginx
      tier: prod
      customer: os
  ingress:
    - ports:
        - port: 80
      from: []
When using only the first policy, the ALB Ingress is blocked; after adding the second one, other namespaces are also allowed, which I don't want. I could allow only the internal IP addresses of the AWS ALB Ingress, but they can change over time and are assigned dynamically.
The semantics of the built-in Kubernetes NetworkPolicies are kind of fiddly. There are no deny rules, only allow rules.
The way they work is if no network policies apply to a pod, then all traffic is allowed. Once there is a network policy that applies to a pod, then all traffic not allowed by that policy is blocked.
In other words, you can't say something like "deny this traffic, allow all the rest". You have to effectively say, "allow all the rest".
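For example, a minimal "default deny" policy (the namespace name below is only illustrative) selects every pod but allows nothing, so all ingress traffic to those pods is blocked:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace   # illustrative namespace
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
    - Ingress                # no ingress rules are listed, so nothing is allowed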
The documentation for the AWS ALB Ingress controller states that traffic can either be sent to a NodePort for your service, or directly to pods. This means that the traffic originates from an AWS IP address outside the cluster.
For traffic that has a source that isn't well-defined, such as traffic from AWS ALB, this can be difficult - you don't know what the source IP address will be.
If you are trying to allow traffic from the Internet using the ALB, then it means anyone that can reach the ALB will be able to reach your pods. In that case, there's effectively no meaning to blocking traffic within the cluster, as the pods will be able to connect to the ALB, even if they can't connect directly.
My suggestion then is to just create a network policy that allows all traffic to the pods the Ingress covers, but have that policy as specific as possible - for example, if the Ingress accesses a specific port, then have the network policy only allow that port. This way you can minimize the attack surface within the cluster only to that which is Internet-accessible.
Any other traffic to these pods will need to be explicitly allowed.
For example:
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-external
spec:
  podSelector:
    matchLabels:
      app: <your-app> # app-label
  ingress:
    - from: []
      ports:
        - port: 1234 # the port which should be Internet-accessible
This is actually a problem we faced when implementing the Network Policy plugin for the Otterize Intents operator - the operator lets you declare which pods you want to connect to within the cluster and block all the rest by automatically creating network policies and labeling pods, but we had to do that without inadvertently blocking external traffic once the first network policy had been created.
We settled on automatically detecting whether a Service resource of type LoadBalancer or NodePort exists, or an Ingress resource, and creating a network policy that allows all traffic to those ports, as in the example above. A potential improvement for that is to support specific Ingress controllers that have in-cluster pods (so, not AWS ALB, but could be nginx ingress controller, for example), and only allowing traffic from the specific ingress pods.
Have a look here: https://github.com/otterize/intents-operator
And the documentation page explaining this: https://docs.otterize.com/components/intents-operator/#network-policies
If you wanna use this and add support for a specific Ingress controller you're using, hop onto the Slack or open an issue and we can work on it together.
By design (of the Kubernetes NetworkPolicy API), if an endpoint is accessible externally, it does not make sense to block it for other namespaces. (After all, it can be accessed via the public LB from other namespaces too, so it doesn't make sense to have an internal firewall for an endpoint that's already publicly accessible.) Back in the day when this API was being designed, this is what I was told.
However you might find that certain CNI plugins (Calico, Cilium etc) provide non-standard CRD APIs that have explicit “deny” operations that supersede “allow”s. They can solve your problem.
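For instance, here is a rough sketch of an explicit deny using Calico's own NetworkPolicy CRD (this assumes Calico is your CNI and its CRDs are installed; the selectors are placeholders, and rules are evaluated in order, first match wins):

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: os
spec:
  selector: app == 'nginx'   # placeholder pod selector
  types:
    - Ingress
  ingress:
    # explicitly deny traffic coming from pods in other namespaces
    - action: Deny
      source:
        namespaceSelector: projectcalico.org/name != 'os'
    # allow everything else (e.g. traffic arriving from the ALB)
    - action: Allow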
And finally, the answer depends on the CNI plugin implementation, how AWS does ALBs in terms of Kubernetes networking and how that CNI plugin deals with that. There’s no easy answer short of asking the CNI provider (or their docs).
Example:
FrontEnd application in namespace spacemyapp and pods with labels app: fe-site and tier: frontend
BackEnd application in namespace spacemyapp and pods with labels app: be-site and tier: backend
Frontend service is exposed as NodePort
apiVersion: v1
kind: Service
metadata:
  namespace: spacemyapp
  name: service-fe-site
  labels:
    app: fe-site
spec:
  type: NodePort
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    app: fe-site
    tier: frontend
Ingress controller in namespace spacemyapp with the following annotations:
annotations:
  alb.ingress.kubernetes.io/group.name: sgalbfe
  alb.ingress.kubernetes.io/target-type: instance
  alb.ingress.kubernetes.io/certificate-arn: arn:aws:xxxxxx/yyyyyy
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
  alb.ingress.kubernetes.io/ssl-redirect: '443'
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/inbound-cidrs: "89.186.39.0/24"
NetworkPolicy:
Default deny for namespace spacemyapp
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: spacemyapp
  name: default-deny
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
    - Ingress
    - Egress
  ingress: []
  egress: []
Backend policy to permit access only from frontend
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: spacemyapp
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      app: be-site
      tier: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: fe-site
              tier: frontend
      ports:
        - protocol: TCP
          port: 8090
  egress:
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - port: 53
          protocol: UDP
The frontend policy's ingress rule allows pods in namespace spacemyapp with labels app: fe-site and tier: frontend to receive traffic from all namespaces, pods and IP addresses on port 8080 (that is the port Apache listens on in the frontend pods, not the port of the service-fe-site that points to them!). Its egress rules allow those same pods to connect to pods with label k8s-app: kube-dns in all namespaces on port UDP 53, and to connect to pods with labels app: be-site and tier: backend in namespaces with label name: spacemyapp on port TCP 8090.
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: spacemyapp
  name: frontend-policy
spec:
  podSelector:
    matchLabels:
      app: fe-site
      tier: frontend
  ingress:
    - from: []
      ports:
        - port: 8080
  egress:
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - port: 53
          protocol: UDP
    - to:
        - namespaceSelector:
            matchLabels:
              name: spacemyapp
          podSelector:
            matchLabels:
              app: be-site
              tier: backend
      ports:
        - port: 8090
I have tried this configuration and it works, and the health check on the ALB target group does not fail.

Kubernetes is not creating internet-facing AWS Classic Load Balancer

As far as I understand the ingress controller documentation, simply creating a Service and an Ingress without special annotations should create internet-facing load balancers; weirdly, it is creating internal load balancers. So I added the annotation service.beta.kubernetes.io/aws-load-balancer-internal: "false", which is not working either. By the way, I am using NGINX as the ingress controller, currently at version 0.8.21 in the test cluster. Probably I should update it some time.
Here's my simple spec-file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: nginx
    service.beta.kubernetes.io/aws-load-balancer-internal: "false"
  labels:
    external: "true"
    comp: ingress-nginx
    env: develop
  name: develop-api-external-ing
  namespace: develop
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - backend:
              serviceName: api-external
              servicePort: 3000
            path: /
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: api
    env: develop
  name: api-external
  namespace: develop
spec:
  ports:
    - name: http
      port: 3000
      protocol: TCP
      targetPort: 3000
  selector:
    app: api
    env: develop
  sessionAffinity: None
  type: ClusterIP
You are not wrong, a service and an ingress should create a load balancer... but you should look at the documentation a bit more...
An ingress needs a NodePort service; yours is ClusterIP. So even if it created something, it wouldn't work.
In your ingress you are using kubernetes.io/ingress.class: nginx, meaning you want to override the default ingress handling and force it to register with ingress-nginx.
So to make it work, change the type of your service and remove the ingress-class annotation, as sketched below.
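A minimal sketch of the Service change against the manifests above (whether this alone fixes the load balancer also depends on how your nginx controller itself is exposed); the other change is simply deleting the kubernetes.io/ingress.class: nginx line from the Ingress annotations:

apiVersion: v1
kind: Service
metadata:
  name: api-external
  namespace: develop
spec:
  type: NodePort            # was ClusterIP
  ports:
    - name: http
      port: 3000
      protocol: TCP
      targetPort: 3000
  selector:
    app: api
    env: develop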
You can set up an NLB (Network Load Balancer) and provide its URL in the ingress rule host values. You don't need to expose the underlying backend service either as a NodePort or as another load balancer.
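As a sketch of what that suggestion looks like (the NLB DNS name below is a placeholder for whatever AWS assigns to your load balancer):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: develop-api-external-ing
  namespace: develop
spec:
  rules:
    # placeholder: the DNS name assigned to the NLB
    - host: my-nlb-1234567890.eu-west-1.elb.amazonaws.com
      http:
        paths:
          - backend:
              serviceName: api-external
              servicePort: 3000
            path: /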

How to forward traffic from domain in route53 to a pod using nginx ingress?

I deployed Grafana using Helm and now it is running in a pod. I can access it if I proxy port 3000 to my laptop.
I'm trying to point a domain, grafana.something.com, to that pod so I can access it externally.
I have a domain in Route53 that I can attach to a load balancer (Application Load Balancer, Network Load Balancer, Classic Load Balancer). That load balancer can forward traffic from port 80 to port 80 to a group of nodes (let's leave port 443 for later).
I'm really struggling with setting this up. I'm sure there is something missing, but I don't know what.
Basic diagram would look like this I imagine.
Internet
↓↓
Domain in route53 (grafana.something.com)
↓↓
Loadbalancer 80 to 80 (Application Load Balancer, Network Load Balancer, Classic Load Balancer)
I guess that LB would forward traffic to port 80 to the below Ingress Controllers (Created when Grafana was deployed using Helm)
↓↓
Group of EKS worker nodes
↓↓
Ingress resource ?????
↓↓
Ingress Controllers - Created when Grafana was deployed using Helm in namespace test.
kubectl get svc grafana -n test
grafana Type:ClusterIP ClusterIP:10.x.x.x Port:80/TCP
apiVersion: v1
kind: Service
metadata:
  creationTimestamp:
  labels:
    app: grafana
    chart: grafana-
    heritage: Tiller
    release: grafana-release
  name: grafana
  namespace: test
  resourceVersion: "xxxx"
  selfLink:
  uid:
spec:
  clusterIP: 10.x.x.x
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 3000
  selector:
    app: grafana
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
↓↓
Pod Grafana is listening on port 3000. I can access it successfully after proxying to my laptop port 3000.
Given that it seems you don't have an Ingress Controller installed, if you have the aws cloud-provider configured in your K8S cluster you can follow this guide to install the Nginx Ingress controller using Helm.
By the end of the guide you should have a load balancer created for your ingress controller, point your Route53 record to it and create an Ingress that uses your grafana service. Example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/app-root: /
    nginx.ingress.kubernetes.io/enable-access-log: "true"
  name: grafana-ingress
  namespace: test
spec:
  rules:
    - host: grafana.something.com
      http:
        paths:
          - backend:
              serviceName: grafana
              servicePort: 80
            path: /
The final traffic path would be:
Route53 -> ELB -> Ingress -> Service -> Pods
Adding 2 important suggestions here.
1) Following improvements to the Ingress API in Kubernetes 1.18,
a new ingressClassName field has been added to the Ingress spec that is used to reference the IngressClass that should be used to implement this Ingress.
Please consider switching to the ingressClassName field instead of the kubernetes.io/ingress.class annotation:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: grafana-ingress
  namespace: test
spec:
  ingressClassName: nginx # <-- Here
  rules:
    - host: grafana.something.com
      http:
        paths:
          - path: /
            backend:
              serviceName: grafana
              servicePort: 80
2) Consider using External-DNS for the integration between external DNS servers (check this example for AWS Route53) and the Kubernetes Ingresses / Services.
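For example, a rough sketch of the annotation External-DNS looks for on the ingress controller's Service (this assumes External-DNS is already deployed with permissions for the Route53 hosted zone; for Ingress resources it can also pick the hostname up from spec.rules directly):

metadata:
  annotations:
    # External-DNS keeps the Route53 record for this hostname pointed
    # at the Service's load balancer
    external-dns.alpha.kubernetes.io/hostname: grafana.something.com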

Traefik Ingress Controller for Kubernetes (AWS EKS)

I'm running my workloads on the AWS EKS service in the cloud. I can see that there is no default Ingress Controller available (as there is for GKE); we have to pick a third-party one.
I decided to go with Traefik. After following the documentation and other resources (like this), I get the impression that using Traefik as the Ingress Controller does not create a LoadBalancer in the cloud automatically; we have to set everything up manually.
How can I make Traefik work as the Kubernetes Ingress the same way other Ingress Controllers (e.g. Nginx) work, creating a LoadBalancer, registering services, etc.? Any working example would be appreciated.
Have you tried with annotations like in this example?
apiVersion: v1
kind: Service
metadata:
  name: traefik-proxy
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REGION:ACCOUNTID:certificate/CERT-ID"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: traefik-proxy
    tier: proxy
  ports:
    - port: 443
      targetPort: 80