I'm running my workloads on the AWS EKS service. I can see that there is no default Ingress Controller available (as there is for GKE); we have to pick a third-party one.
I decided to go with Traefik. After following the documentation and other resources (like this), my impression is that using Traefik as the Ingress Controller does not create a LoadBalancer in the cloud automatically; we have to set everything up manually.
How can I use Traefik as the Kubernetes Ingress Controller the same way other Ingress Controllers (e.g. Nginx) work, i.e. creating a LoadBalancer, registering services, etc.? Any working example would be appreciated.
Have you tried annotations, as in this example?
apiVersion: v1
kind: Service
metadata:
  name: traefik-proxy
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REGION:ACCOUNTID:certificate/CERT-ID"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: traefik-proxy
    tier: proxy
  ports:
    - port: 443
      targetPort: 80
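For completeness, a minimal sketch of a Traefik deployment that the Service above would select (the image version, args and the traefik-ingress-controller ServiceAccount are assumptions; the ELB terminates TLS on 443 and forwards to Traefik's HTTP entrypoint on port 80):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik-proxy
      tier: proxy
  template:
    metadata:
      labels:
        app: traefik-proxy
        tier: proxy
    spec:
      serviceAccountName: traefik-ingress-controller   # assumed to exist, with RBAC access to Ingress resources
      containers:
        - name: traefik
          image: traefik:v1.7
          args:
            - --kubernetes       # watch Kubernetes Ingress resources
            - --logLevel=INFO
          ports:
            - name: http
              containerPort: 80
With the Service of type LoadBalancer from the example, the AWS cloud provider should provision the ELB for you; you don't have to create it by hand.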
When I delete a Service and recreate it, I've noticed that the status of the Ingress indicates Some backend services are in UNKNOWN state.
After some trial and error, it seems to be related to the name of the network endpoint group (NEG). The NEG tied to the new Service has a different name, but the Ingress keeps the old NEG as its backend service.
I found that everything works again after I recreate the Ingress.
I'd like to avoid the downtime of recreating the Ingress as much as possible.
Is there a way to avoid recreating the Ingress when recreating Services?
My Service
apiVersion: v1
kind: Service
metadata:
  name: client-service
  labels:
    app: client
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: client
My Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: static-ip-name
    networking.gke.io/managed-certificates: managed-certificate
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: client-service
              servicePort: 80
If you want to re-use the ingress when the service disappears, you can edit its configuration instead of deleting and recreating it.
To reconfigure the Ingress you will have to update it by editing the configuration, as specified in the official Kubernetes documentation.
To do this, you can perform the following steps:
Issue the command kubectl edit ingress test
Perform the necessary changes, like updating the service configuration
Save the changes
kubectl will update the resource, and trigger an update on the load balancer.
Verify the changes by executing the command kubectl describe ingress test
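As a minimal sketch (assuming the Ingress and Service names from the question, and the v1beta1 backend fields), you can also patch just the backend reference in place instead of opening an editor:
kubectl patch ingress ingress --type=json \
  -p='[{"op": "replace", "path": "/spec/rules/0/http/paths/0/backend/serviceName", "value": "client-service"}]'
kubectl describe ingress ingress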
I'm very new to Kubernetes. I have spent the last week learning about Nodes, Pods, Clusters, Services, and Deployments.
With that, I'm trying to get a better understanding of how Kubernetes networking works. I just want to expose a simple nginx web page and hit it from my browser.
Our VPC is set up with Direct Connect, so I'm able to hit EC2 instances on their private IP addresses. I also set up the EKS cluster as private using the AWS console for now. For testing purposes, I have added my CIDR range, allowed on all TCP, as an additional security group in the EKS cluster UI.
Here is my basic service and deployment definitions:
apiVersion: v1
kind: Service
metadata:
  name: testing-nodeport
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  type: NodePort
  selector:
    app: testing-app
  ports:
    - port: 80
      targetPort: testing-port
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-deployment
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  replicas: 1
  selector:
    matchLabels:
      infrastructure: fargate
      app: testing-app
  template:
    metadata:
      labels:
        infrastructure: fargate
        app: testing-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - name: testing-port
              containerPort: 80
I can see that everything is running correctly when I run:
kubectl get all -n default
However, when I try to hit the NodePort service's IP address on port 80, I can't load it from the browser.
I can hit the pod if I first set up kubectl proxy and use the following URL (the proxy starts on port 8001):
http://localhost:8001/api/v1/namespaces/default/services/testing-nodeport:80/proxy/
I'm pretty much lost at this point. I don't know what I'm doing wrong and why I can't reach the basic nginx container outside of the kubectl proxy command.
What if you use port forwarding? Something like this:
kubectl port-forward -n default service/testing-nodeport 3000:80
Forwarding from 127.0.0.1:3000 -> 80
Forwarding from [::1]:3000 -> 80
After this, you can access your K8S service from localhost:3000. More info here
Imagine that the Kubernetes cluster is like your AWS VPC. It has its own internal network with private IPs and connects all the Pods. Kubernetes only exposes what you explicitly ask it to expose.
The Service port 80 is only available within the cluster, so one Pod can talk to this Service using service name:service port. If you need access from outside, you need an Ingress Controller or a LoadBalancer. You can also use a NodePort for testing purposes; the node port is allocated from the 30000-32767 range.
You should be able to access nginx using node IP:nodePort. Here I assume you have a security group rule opening the node port.
Use this YAML. I set the node port to 31000, so you can access nginx on node IP:31000. As mentioned, you cannot use 80 from outside, since that port only exists within the cluster; if you need to use 80, you need an Ingress Controller.
apiVersion: v1
kind: Service
metadata:
  name: testing-nodeport
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  type: NodePort
  selector:
    app: testing-app
  ports:
    - port: 80
      targetPort: testing-port
      protocol: TCP
      nodePort: 31000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-deployment
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  replicas: 1
  selector:
    matchLabels:
      infrastructure: fargate
      app: testing-app
  template:
    metadata:
      labels:
        infrastructure: fargate
        app: testing-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - name: testing-port
              containerPort: 80
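A quick way to test both paths, as a rough sketch (the node IP is a placeholder, and this assumes your security groups allow the traffic):
# from inside the cluster: service name + service port (80)
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://testing-nodeport.default.svc.cluster.local:80
# from outside the cluster (e.g. over the Direct Connect): node IP + node port (31000)
kubectl get nodes -o wide        # find a node's INTERNAL-IP
curl http://<node-internal-ip>:31000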
Okay, after 16+ hours of debugging, I finally figured out what's going on. On Fargate you can't set security groups per node like you can with managed node groups. I was setting the security group rules under "Additional security groups". However, Fargate apparently ignores those settings completely and ONLY uses the security group from your "Cluster security group" setting. So in the EKS UI I set the correct rules in the "Cluster security group", and I can now hit my pod directly on a Fargate instance.
The big takeaway: only the "Cluster security group" applies to Fargate nodes.
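If you prefer the CLI over the console, a hedged sketch of the same fix (the cluster name, security group ID and CIDR below are placeholders):
# find the cluster security group that Fargate pods actually use
aws eks describe-cluster --name my-cluster \
  --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' --output text
# open the port you need to your on-premises / VPC CIDR
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 10.0.0.0/16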
I deployed Grafana using Helm and now it is running in a pod. I can access it if I forward port 3000 to my laptop.
I'm trying to point the domain grafana.something.com at that pod so I can access it externally.
I have a domain in Route53 that I can attach to a load balancer (Application Load Balancer, Network Load Balancer, Classic Load Balancer). That load balancer can forward traffic from port 80 to port 80 on a group of nodes (let's leave port 443 for later).
I'm really struggling with setting this up. I'm sure something is missing, but I don't know what.
A basic diagram would look like this, I imagine.
Internet
↓↓
Domain in route53 (grafana.something.com)
↓↓
Loadbalancer 80 to 80 (Application Load Balancer, Network Load Balancer, Classic Load Balancer)
I guess the LB would forward traffic on port 80 to the Ingress Controllers below (created when Grafana was deployed using Helm)
↓↓
Group of EKS worker nodes
↓↓
Ingress resource ?????
↓↓
Ingress Controllers - Created when Grafana was deployed using Helm in namespace test.
kubectl get svc grafana -n test
grafana Type:ClusterIP ClusterIP:10.x.x.x Port:80/TCP
apiVersion: v1
kind: Service
metadata:
  creationTimestamp:
  labels:
    app: grafana
    chart: grafana-
    heritage: Tiller
    release: grafana-release
  name: grafana
  namespace: test
  resourceVersion: "xxxx"
  selfLink:
  uid:
spec:
  clusterIP: 10.x.x.x
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 3000
  selector:
    app: grafana
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
↓↓
The Grafana pod is listening on port 3000. I can access it successfully after forwarding it to port 3000 on my laptop.
Given that it seems you don't have an Ingress Controller installed, if you have the AWS cloud provider configured in your K8s cluster you can follow this guide to install the Nginx Ingress Controller using Helm.
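A rough sketch of that install, assuming the Helm 2 / stable-chart era this setup appears to be on (the release name, namespace and values are illustrative, and the chart has since moved to ingress-nginx/ingress-nginx):
helm install stable/nginx-ingress --name nginx-ingress \
  --namespace ingress-nginx \
  --set rbac.create=true \
  --set controller.publishService.enabled=true   # so the Ingress status reports the ELB hostname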
By the end of the guide you should have a load balancer created for your ingress controller. Point your Route53 record at it and create an Ingress that uses your Grafana service. Example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/app-root: /
    nginx.ingress.kubernetes.io/enable-access-log: "true"
  name: grafana-ingress
  namespace: test
spec:
  rules:
    - host: grafana.something.com
      http:
        paths:
          - backend:
              serviceName: grafana
              servicePort: 80
            path: /
The final traffic path would be:
Route53 -> ELB -> Ingress -> Service -> Pods
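To wire up the Route53 step, a small sketch (the Service name assumes the default created by the stable/nginx-ingress chart with the release name above):
# get the ELB hostname that the cloud provider created for the ingress controller
kubectl get svc nginx-ingress-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# then create a CNAME (or alias) record for grafana.something.com pointing at that hostname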
Adding two important suggestions here.
1) Following the improvements to the Ingress API in Kubernetes 1.18:
A new ingressClassName field has been added to the Ingress spec; it is used to reference the IngressClass that should be used to implement this Ingress.
Please consider switching to the ingressClassName field instead of the kubernetes.io/ingress.class annotation:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: grafana-ingress
  namespace: test
spec:
  ingressClassName: nginx # <-- Here
  rules:
    - host: grafana.something.com
      http:
        paths:
          - path: /
            backend:
              serviceName: grafana
              servicePort: 80
2) Consider using External-DNS to integrate external DNS servers (check this example for AWS Route53) with your Kubernetes Ingresses/Services.
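A minimal sketch of an External-DNS deployment for this setup (the image tag, owner ID and the IAM/RBAC wiring are assumptions; it reads grafana.something.com straight from the Ingress rule host and creates the Route53 record pointing at the ingress controller's load balancer):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns          # needs RBAC plus IAM permissions for Route53
      containers:
        - name: external-dns
          image: k8s.gcr.io/external-dns/external-dns:v0.7.6
          args:
            - --source=ingress                   # watch Ingress resources for hosts
            - --provider=aws
            - --domain-filter=something.com      # only manage records in this zone
            - --policy=upsert-only               # never delete records it did not create
            - --txt-owner-id=my-eks-cluster      # placeholder identifier for this cluster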
I am setting up NGINX ingress controller on AWS EKS.
I went through the K8s Ingress resource docs, and they are very helpful for understanding how we map LB ports to K8s service ports with an example file definition. I installed the nginx controller up to the prerequisite step. Then the tutorial directs me to create an Ingress resource.
https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/#create-an-ingress-resource
But below it tells me to apply a service config, and I am confused by this provider-specific step, which is different in terms of kind, version, and spec definition (Service vs Ingress).
https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml
Am I missing something here?
This concept is at first a little tricky to wrap your head around. The Nginx ingress controller is exposed as a service of type LoadBalancer; what it does is act as the public-facing endpoint for your services. The IP address assigned to this service can route traffic to multiple services, so you can define your services as ClusterIP and have them exposed through the Nginx ingress controller.
Here's a diagram to portray the concept a little better:
On that note, if you have acquired a static IP for your service, you need to assign it to your Nginx ingress controller. So what is an Ingress? An Ingress is basically a way to tell your Nginx ingress controller how to direct the traffic coming in on your LB's public IP. So, as should be clear now, you have one LoadBalancer service and multiple Ingress resources. Each Ingress typically corresponds to a single service (this can change based on how you define your services), but you get the idea.
Let's get into some YAML. As mentioned, you will need the ingress controller service regardless of how many Ingress resources you have, so go ahead and apply that code on your EKS cluster.
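As a hedged sketch of that step (the first manifest installs the controller itself; paths on the master branch have changed over time, so check the current install docs before copying these URLs):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
# then the AWS L7 LoadBalancer Service from the URL in the question
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml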
Now let's see how you would expose your pod to the world through Nginx-ingress. Say you have a wordpress deployment. You can define a simple ClusterIP service for this app:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ${WORDPRESS_APP}
  namespace: ${NAMESPACE}
  name: ${WORDPRESS_APP}
spec:
  type: ClusterIP
  ports:
    - port: 9000
      targetPort: 9000
      name: ${WORDPRESS_APP}
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    app: ${WORDPRESS_APP}
This creates a service for your wordpress app which is not accessible outside of the cluster. Now you can create an ingress resource to expose this service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: ${NAMESPACE}
  name: ${INGRESS_NAME}
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  tls:
    - hosts:
        - ${URL}
      secretName: ${TLS_SECRET}
  rules:
    - host: ${URL}
      http:
        paths:
          - path: /
            backend:
              serviceName: ${WORDPRESS_APP}
              servicePort: 80
Now if you run kubectl get svc you can see the following:
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                   AGE
wordpress                  ClusterIP      10.23.XXX.XX   <none>          9000/TCP,80/TCP,443/TCP   1m
nginx-ingress-controller   LoadBalancer   10.23.XXX.XX   XX.XX.XXX.XXX   80:X/TCP,443:X/TCP        1m
Now you can access your wordpress service through the URL defined, which maps to the public IP of your ingress controller LB service.
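To check it end to end, a small sketch (the EXTERNAL-IP placeholder is the ELB address from the output above, and the host is whatever you set as ${URL}):
# before DNS is in place, override the Host header so the Ingress rule matches
curl -v -H "Host: ${URL}" http://XX.XX.XXX.XXX/
# once DNS points at the load balancer, simply:
curl -v https://${URL}/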
The NGINX ingress controller is the actual process that routes your traffic to your services, basically like an nginx or load balancer installation on a traditional VM.
The Ingress resource (kind: Ingress) is more like the nginx config on your old VM, where you would define host mappings, paths, and proxies.
I want to have my own Kubernetes playground within AWS, which currently involves 2 EC2 instances and an Elastic Load Balancer.
I use Traefik as my ingress controller, which has easily allowed me to set up automatic subdomains and TLS for some of the deployments (deployment.k8s.mydomain.com).
I love this, but as a student the load balancer is just too expensive. I'm having to kill the cluster when I'm not using it, but ideally I want this up full time.
Is there a way to keep my setup (the cool domain/TLS stuff) but drop the need for an ELB?
If you want to drop the use of a LoadBalancer, you still have another option: expose the Ingress Controller via a Service with externalIPs (shown below) or via a NodePort Service (a variant is sketched further down).
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
  externalIPs:
    - 80.11.12.10
You can then create a CNAME (deployment.k8s.mydomain.com) pointing to the external IP of your cluster node. Additionally, you should ensure that the firewall rules on your node allow access to the open ports.
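For completeness, a rough NodePort variant of the same Service (the node port values are arbitrary picks from the 30000-32767 range):
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  type: NodePort
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
      nodePort: 30080   # reachable on every node's IP at this port
    - name: https
      port: 443
      targetPort: https
      nodePort: 30443
With this, your CNAME still points at a node, but clients have to include the node port in the URL unless you put something in front that forwards 80/443 to the node ports.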
Route53 DNS load balancing? I'm sure there must be a way: https://www.virtualtothecore.com/load-balancing-services-with-aws-route53-dns-health-checks/