How to avoid recreating ingress when recreate service on GKE? - google-cloud-platform

When I delete a service and recreate it, I've noticed that the status of the ingress indicates Some backend services are in UNKNOWN state.
After some trial and error, it seems to be related to the name of the network endpoint group (NEG). The NEG tied to the new service has a different name, but the ingress keeps the old NEG as its backend service.
I then found that things work again after I recreate the Ingress.
I'd like to avoid the downtime of recreating an ingress as much as possible.
Is there a way to avoid recreating the ingress when recreating services?
My Service
apiVersion: v1
kind: Service
metadata:
  name: client-service
  labels:
    app: client
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: client
My Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: static-ip-name
    networking.gke.io/managed-certificates: managed-certificate
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: client-service
              servicePort: 80

If you want to re-use the Ingress when the Service disappears, you can edit its configuration instead of deleting and recreating it.
To reconfigure the Ingress, you have to update it by editing the configuration, as described in the official Kubernetes documentation.
To do this, you can perform the following steps:
Issue the command kubectl edit ingress test
Perform the necessary changes, like updating the service configuration
Save the changes
kubectl will update the resource, and trigger an update on the load balancer.
Verify the changes by executing the command kubectl describe ingress test
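Applied to the manifests in the question (the Ingress is named ingress and the recreated Service client-service), the flow would be roughly the following sketch; everything beyond those two names is an assumption:
kubectl edit ingress ingress        # make the needed change (e.g. re-save the backend entry) so the controller re-syncs
kubectl describe ingress ingress    # confirm the backends leave the UNKNOWN state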

Related

AWS EKS, How To Hit Pod directly from browser?

I'm very new to Kubernetes. I have spent the last week learning about Nodes, Pods, Clusters, Services, and Deployments.
With that, I'm just trying to get a better understanding of how Kubernetes networking even works. I just want to expose a simple nginx Docker webpage and hit it from my browser.
Our VPC is set up with a Direct Connect, so I'm able to hit EC2 instances on their private IP addresses. I also set up the EKS cluster as private using the AWS console for now. For testing purposes I have added my CIDR range to be allowed on all TCP as an additional security group in the EKS cluster UI.
Here is my basic service and deployment definitions:
apiVersion: v1
kind: Service
metadata:
  name: testing-nodeport
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  type: NodePort
  selector:
    app: testing-app
  ports:
    - port: 80
      targetPort: testing-port
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-deployment
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  replicas: 1
  selector:
    matchLabels:
      infrastructure: fargate
      app: testing-app
  template:
    metadata:
      labels:
        infrastructure: fargate
        app: testing-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - name: testing-port
              containerPort: 80
I can see that everything is running correctly when I run:
kubectl get all -n default
However, when I try to hit the NodePort IP address on port 80 I can't load it from the browser.
I can hit the pod if I first setup a kubectl proxy at the following url (as the proxy is started on port 8001):
http://localhost:8001/api/v1/namespaces/default/services/testing-nodeport:80/proxy/
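For reference, a proxy like that is typically started with:
kubectl proxy --port=8001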
I'm pretty much lost at this point. I don't know what I'm doing wrong and why I can't hit the basic nginx docker outside of the kubectl proxy command.
What if you use port forwarding? Something like this:
kubectl port-forward -n default service/testing-nodeport 3000:80
Forwarding from 127.0.0.1:3000 -> 80
Forwarding from [::1]:3000 -> 80
After this, you can access your K8S service from localhost:3000. More info here
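For instance, a quick check against the forwarded port (assuming the nginx pod behind the service serves its default page at /):
curl http://localhost:3000/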
Imagine that the Kubernetes cluster is like your AWS VPC. It has its own internal network with private IPs and connects all the Pods. Kubernetes only exposes what you explicitly ask it to expose.
Service port 80 is available within the cluster, so one Pod can talk to this Service using service name:service port. But if you need to access it from outside, you need an ingress controller or a LoadBalancer. You can also use a NodePort for testing purposes. The node port will be something above 30000 (within the range 30000-32767).
You should be able to access nginx using node IP:nodePort. This assumes your security group opens the node port.
Use this YAML. I updated the node port to 31000, so you can access nginx on node IP:31000. As mentioned, you cannot use 80 because the Service port is only reachable within the cluster; if you need to expose port 80, you need an ingress controller.
apiVersion: v1
kind: Service
metadata:
  name: testing-nodeport
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  type: NodePort
  selector:
    app: testing-app
  ports:
    - port: 80
      targetPort: testing-port
      protocol: TCP
      nodePort: 31000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-deployment
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  replicas: 1
  selector:
    matchLabels:
      infrastructure: fargate
      app: testing-app
  template:
    metadata:
      labels:
        infrastructure: fargate
        app: testing-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - name: testing-port
              containerPort: 80
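After applying this, and following the answer's node IP:31000 suggestion, a quick check could look like the sketch below; the manifest file name and node IP are placeholders:
kubectl apply -f testing.yaml
kubectl get nodes -o wide            # note a node's INTERNAL-IP
curl http://<node-internal-ip>:31000/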
Okay, after 16+ hours of debugging this I finally figured out what's going on. On Fargate you can't set security groups per node like you can with managed node groups. I was setting the security group rules in the "Additional security groups" settings. However, Fargate apparently ignores those settings completely and ONLY uses the security group from the "Cluster security group" setting. So in the EKS UI I set the correct rules in the "Cluster security group", and I can now hit my pod directly on a Fargate instance.
Big takeaway from this: only use the "Cluster security group" for Fargate nodes.
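If you prefer checking this from the CLI, the cluster security group can be looked up like this (the cluster name my-cluster is a placeholder):
aws eks describe-cluster --name my-cluster \
  --query "cluster.resourcesVpcConfig.clusterSecurityGroupId" --output text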

Ingress controller does not show external IP

I have been trying to create a Kubernetes cluster on Google Kubernetes Engine. My pods are successfully running, but the problem is with the ingress controller: it is not showing the external IP to access the application.
The YAML file for the nginx ingress looks like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: http-ingress
  labels:
    app: ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: nodeapp1svc
              servicePort: 80
          - path: /app1
            backend:
              serviceName: nodeapp2svc
              servicePort: 80
          - path: /app2
            backend:
              serviceName: nodeapp2svc
              servicePort: 80
What can I do next?
It looks like the problem is related to your annotations, specifically this one:
kubernetes.io/ingress.class: addon-http-application-routing
The ingress.class you're trying to use is specific to Azure AKS, so you definitely cannot use it on your GKE cluster.
Note that you can omit the kubernetes.io/ingress.class annotation entirely if you want the default GKE Ingress controller, ingress-gce, to be used.
I tested it on my GKE cluster, and without the above-mentioned annotation it works just fine.
As to your specific setup, I noticed one more problem: your nodeapp[1-3]svc Services are of type ClusterIP, and they need to be either NodePort or LoadBalancer.
If you run:
kubectl describe ingress http-ingress
and take a look at the events section, you may encounter the error message like the one below:
loadbalancer-controller error while evaluating the ingress spec: service "default/nodeapp1svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"
Summary:
use the correct ingress.class, i.e. omit this annotation entirely so that the default ingress controller is used.
make sure your backends are exposed via NodePort rather than ClusterIP (see the sketch below).
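For example, a minimal NodePort version of one of those backends could look like this; only the Service name comes from the question, the selector and ports are assumptions:
apiVersion: v1
kind: Service
metadata:
  name: nodeapp1svc
spec:
  type: NodePort            # required by the GKE ingress-gce controller
  selector:
    app: nodeapp1           # assumption: use whatever label your pods actually carry
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP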

Ingress resource vs NGINX ingress controller on Kubernetes

I am setting up NGINX ingress controller on AWS EKS.
I went through the k8s Ingress resource docs, and they are very helpful for understanding how LB ports are mapped to k8s service ports with an example file definition. I installed the nginx controller up to the prerequisite step. Then the tutorial directs me to create an ingress resource.
https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/#create-an-ingress-resource
But below that it tells me to apply a service config. I am confused by this provider-specific step, which is different in terms of kind, version, and spec definition (Service vs Ingress).
https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml
Am I missing something here?
This is a concept that is at first a little tricky to wrap your head around. The Nginx ingress controller is nothing but a service of type LoadBalancer. What it does is act as the public-facing endpoint for your services. The IP address assigned to this service can route traffic to multiple services. So you can go ahead and define your services as ClusterIP and have them exposed through the Nginx ingress controller.
Here's a diagram to portray the concept a little better:
image source
On that note, if you have acquired a static IP for your service, you need to assign it to your Nginx ingress-controller. So what is an ingress? Ingress is basically a way for you to tell your Nginx ingress-controller how to direct the traffic that arrives at your LB's public IP. So, as should be clear now, you have one LoadBalancer service and multiple ingress resources. Each ingress roughly corresponds to a single service (that can change based on how you define your services, but you get the idea).
Let's get into some yaml code. As mentioned, you will need the ingress controller service regardless of how many ingress resources you have. So go ahead and apply this code on your EKS cluster.
Now let's see how you would expose your pod to the world through Nginx-ingress. Say you have a wordpress deployment. You can define a simple ClusterIP service for this app:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ${WORDPRESS_APP}
  namespace: ${NAMESPACE}
  name: ${WORDPRESS_APP}
spec:
  type: ClusterIP
  ports:
    - port: 9000
      targetPort: 9000
      name: ${WORDPRESS_APP}
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    app: ${WORDPRESS_APP}
This creates a service for your wordpress app which is not accessible outside of the cluster. Now you can create an ingress resource to expose this service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: ${NAMESPACE}
  name: ${INGRESS_NAME}
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  tls:
    - hosts:
        - ${URL}
      secretName: ${TLS_SECRET}
  rules:
    - host: ${URL}
      http:
        paths:
          - path: /
            backend:
              serviceName: ${WORDPRESS_APP}
              servicePort: 80
Now if you run kubectl get svc you can see the following:
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                   AGE
wordpress                  ClusterIP      10.23.XXX.XX   <none>          9000/TCP,80/TCP,443/TCP   1m
nginx-ingress-controller   LoadBalancer   10.23.XXX.XX   XX.XX.XXX.XXX   80:X/TCP,443:X/TCP        1m
Now you can access your wordpress service through the URL defined, which maps to the public IP of your ingress controller LB service.
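As a quick sanity check, you can also hit the controller directly while forcing name resolution for your host; example.com and 203.0.113.10 below are placeholders for ${URL} and the controller's EXTERNAL-IP:
curl -k --resolve example.com:443:203.0.113.10 https://example.com/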
The NGINX ingress controller is the actual process that shapes your traffic to your services, basically like an nginx or load balancer installation on a traditional VM.
The ingress resource (kind: Ingress) is more like the nginx config on your old VM, where you would define host mappings, paths and proxies.

Traefik Ingress Controller for Kubernetes (AWS EKS)

I'm running my workloads on the AWS EKS service in the cloud. I can see that there is no default Ingress Controller available (as there is for GKE); we have to pick a 3rd-party one.
I decided to go with Traefik. After following the documentation and other resources (like this), I feel that using Traefik as the Ingress Controller does not create a LoadBalancer in the cloud automatically. We have to set everything up manually.
How can I use Traefik as the Kubernetes Ingress the same way other Ingress Controllers (i.e. Nginx etc.) work, i.e. creating a LoadBalancer, registering services, etc.? Any working example would be appreciated.
Have you tried with annotations like in this example?
apiVersion: v1
kind: Service
metadata:
  name: traefik-proxy
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REGION:ACCOUNTID:certificate/CERT-ID"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: traefik-proxy
    tier: proxy
  ports:
    - port: 443
      targetPort: 80
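If the annotations are picked up, applying a Service like this should make EKS provision a load balancer for Traefik; the assigned DNS name then shows up under EXTERNAL-IP once provisioning finishes:
kubectl get service traefik-proxy -o wide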

How to integrate Kubernetes with existing AWS ALB?

I want to use an existing AWS ALB for my Kubernetes setup, i.e. I don't want the alb-ingress-controller to create or update any existing AWS resources (target groups, roles, etc.).
How can I make the ALB communicate with the Kubernetes cluster, passing requests through to existing services and getting the responses back to the ALB to display on the front end?
I tried this, but it creates a new ALB for a new ingress resource. I want to use the existing one.
You basically have to open a node port on the instances where the Kubernetes Pods are running. Then you need to point the ALB at those instances. There are two ways of configuring this: either via Pods or via Services.
To configure it via a Service, you need to specify .spec.ports[].nodePort. In the default setup the port needs to be in the range 30000-32767. This port gets opened on every node and will be redirected to the specified Pods (which might be on any other node). The downside is that there is an extra hop, which can also cost money in a multi-AZ setup. An example Service could look like this:
---
apiVersion: v1
kind: Service
metadata:
  name: my-frontend
  labels:
    app: my-frontend
spec:
  type: NodePort
  selector:
    app: my-frontend
  ports:
    - port: 8080
      nodePort: 30082
To configure it via a Pod, you need to specify .spec.containers[].ports[].hostPort. This can be any port number, but it has to be free on the node where the Pod gets scheduled. This means there can only be one such Pod per node, and it might conflict with ports from other applications. The downside is that not all instances will be healthy from the ALB's point of view, since only nodes running that Pod accept traffic. You could add a sidecar container which registers the current node with the ALB, but this would mean additional complexity. An example could look like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-frontend
  labels:
    app: my-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-frontend
  template:
    metadata:
      name: my-frontend
      labels:
        app: my-frontend
    spec:
      containers:
        - name: nginx
          image: "nginx"
          ports:
            - containerPort: 80
              hostPort: 8080
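With either variant you still have to register the instances in your existing ALB's target group yourself (instance targets on port 30082 for the NodePort example, or 8080 for the hostPort example). A rough sketch, where the node IP, target group ARN and instance ID are all placeholders:
# check that the port answers on the node itself first
curl http://10.0.12.34:8080/
# then register the instance with the existing target group
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT_ID:targetgroup/my-tg/0123456789abcdef \
  --targets Id=i-0123456789abcdef0,Port=8080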