Ingress controller does not show external IP - google-cloud-platform

I have been trying to create a Kubernetes cluster on Google Kubernetes Engine. My pods are running successfully, but the problem is with the ingress controller: it is not showing the external IP to access the application.
The YAML file for the ingress resource looks like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: http-ingress
  labels:
    app: ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nodeapp1svc
          servicePort: 80
      - path: /app1
        backend:
          serviceName: nodeapp2svc
          servicePort: 80
      - path: /app2
        backend:
          serviceName: nodeapp2svc
          servicePort: 80
What can I do next?

It looks like the problem is related to your annotations, specifically this one:
kubernetes.io/ingress.class: addon-http-application-routing
The ingress.class you're trying to use is specific to Azure AKS, so you definitely cannot use it on your GKE cluster.
Note that you can omit the kubernetes.io/ingress.class annotation entirely if you want the default GKE Ingress controller, ingress-gce, to be used.
I tested it on my GKE cluster, and without the annotation mentioned above it works just fine.
As to your specific setup, I noticed one more problem: your nodeapp[1-3]svc Services are of type ClusterIP, and they need to be either NodePort or LoadBalancer.
If you run:
kubectl describe ingress http-ingress
and take a look at the events section, you may encounter an error message like the one below:
loadbalancer-controller error while evaluating the ingress spec: service "default/nodeapp1svc" is type "ClusterIP", expected "NodePort" or "LoadBalancer"
Summary:
use the correct ingress.class, i.e. omit the annotation entirely so that the default ingress controller is used.
make sure your backends are exposed via NodePort rather than ClusterIP, e.g. as sketched below.
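A minimal sketch of one of the backend Services changed to NodePort (the pod label and container port are assumptions, since they are not shown in the question):
apiVersion: v1
kind: Service
metadata:
  name: nodeapp1svc
spec:
  type: NodePort        # was ClusterIP; required by the ingress-gce controller
  selector:
    app: nodeapp1       # assumed pod label
  ports:
  - port: 80            # matches servicePort: 80 in the Ingress above
    targetPort: 8080    # assumed container port
    protocol: TCP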

Related

Linkerd route-based metrics do not work with GKE default ingress controller

I have a microservice running in GKE. I am trying to get the default GKE GCE ingress to work with Linkerd so that I can observe route-based metrics with Linkerd. The documentation says that the GCE ingress should be meshed with ingress mode enabled, i.e. with the linkerd.io/inject: ingress annotation rather than the default enabled. I tried the following Ingress resource, but route-based metrics are not coming through. I checked with the linkerd tap command, and rt_route is not getting set.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: emojivoto
  annotations:
    ingress.kubernetes.io/custom-request-headers: "l5d-dst-override: web-svc.emojivoto.svc.cluster.local:80"
    ingress.gcp.kubernetes.io/pre-shared-cert: "managed-cert-name"
    kubernetes.io/ingress.global-static-ip-name: "static-ip-name"
    linkerd.io/inject: ingress
spec:
  ingressClassName: gce
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
I suspect that the linkerd.io/inject: ingress annotation should be added to the ingress controller; however, since it is managed by GKE, I do not know how I can add it.
The linkerd.io/inject: ingress annotation should be added to your deployment(s), or to one or more namespaces for automatic injection.
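For instance, a sketch of meshing everything in the emojivoto namespace used above, by annotating the namespace and restarting its deployments so the proxy injector picks the annotation up:
# Annotate the namespace so newly created pods get injected in ingress mode,
# then restart the deployments to recreate their pods.
kubectl annotate namespace emojivoto linkerd.io/inject=ingress
kubectl rollout restart deployment -n emojivoto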

Creating a Kubernetes Ingress resource for GCP/GKE by example

I'm trying to make sense of an example Kubernetes YAML config file that I need to customize:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-web-server
  namespace: myapp
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/security-groups: my-sec-group
    app.kubernetes.io/name: my-alb-ingress-web-server
    app.kubernetes.io/component: my-alb-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-web-server
          servicePort: 8080
The documentation for this example claims it's for creating an "Ingress", a K8s object that manages inbound traffic to a service or pod.
This particular Ingress resource appears to use an AWS ALB (Application Load Balancer), and I need to adapt it to create an Ingress resource in GCP/GKE.
I've been Googling the Kubernetes documentation high and low, and although I found the kubernetes.io/ingress.class docs, I don't see where "alb" is defined as a valid value for this property. I'm asking because I obviously need to find the correct kubernetes.io/ingress.class value for GCP/GKE, and I assume that if I can find the K8s/AWS Ingress documentation, I should be able to find the K8s/GCP Ingress documentation.
I'm assuming K8s has AWS, GCP, Azure, etc. clients built into kubectl for connecting to these clouds/providers?
So I ask: how does the above configuration tell K8s that we are creating an AWS Ingress (as opposed to an Azure Ingress, GCP Ingress, etc.) and where is the documentation for this?
The documentation you're looking for is:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl
An example of an ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-front-api
  namespace: example
  annotations:
    networking.gke.io/managed-certificates: "front.example.com, api.example.com"
    kubernetes.io/ingress.global-static-ip-name: "prod-ingress-static-ip"
spec:
  rules:
  - host: front.example.com
    http:
      paths:
      - backend:
          service:
            name: front
            port:
              number: 80
        path: /*
        pathType: ImplementationSpecific
  - host: api.example.com
    http:
      paths:
      - backend:
          service:
            name: api
            port:
              number: 80
        path: /*
        pathType: ImplementationSpecific
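Note that the networking.gke.io/managed-certificates annotation references ManagedCertificate resources by name, so those need to exist as well. A minimal sketch of one, assuming the names listed in the annotation are the resource names:
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: front.example.com      # must match a name listed in the annotation
  namespace: example
spec:
  domains:
  - front.example.com          # domain covered by the Google-managed certificate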

How to avoid recreating ingress when recreate service on GKE?

When I delete a service and recreate it, I've noticed that the status of the ingress indicates Some backend services are in UNKNOWN state.
After some trial and error, it seems to be related to the name of the network endpoint group (NEG). The NEG tied to a new service has a different name, but the ingress keeps the old NEG as its backend service.
Then I found that it works again after I recreate the Ingress.
I'd like to avoid the downtime of recreating an ingress as much as possible.
Is there a way to avoid recreating the ingress when recreating services?
My Service
apiVersion: v1
kind: Service
metadata:
  name: client-service
  labels:
    app: client
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: client
My Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: static-ip-name
    networking.gke.io/managed-certificates: managed-certificate
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: client-service
          servicePort: 80
If you want to reuse the ingress when the service disappears, you can edit its configuration instead of deleting and recreating it.
To reconfigure the Ingress you have to update it by editing the configuration, as described in the official Kubernetes documentation.
To do this, you can perform the following steps:
Issue the command kubectl edit ingress ingress
Perform the necessary changes, like updating the service configuration
Save the changes
kubectl will update the resource and trigger an update on the load balancer.
Verify the changes by executing the command kubectl describe ingress ingress
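If you prefer a non-interactive update, the same change can be sketched with kubectl patch (the JSON pointer below targets the first rule's first backend, matching the Ingress above):
# Point the first path's backend at the recreated Service.
kubectl patch ingress ingress --type=json -p='[
  {"op": "replace",
   "path": "/spec/rules/0/http/paths/0/backend/serviceName",
   "value": "client-service"}
]'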

Kubernetes Ingress Controller GCP GKE can't reach the site

Hi, this is the first time I am trying to deploy an application with Kubernetes. The problem I am facing is that I want to be able to link subdomains to my services, but when I try to navigate to the links I get
This site can’t be reached
I will explain the steps I took; probably something is wrong or missing.
I installed the ingress controller on Google Cloud Platform
In GCP -> Networking Services -> Cloud DNS
a. I pointed testcompany.com at Google's DNS
b. I created an A record pointing to the public IP of the "ingress-nginx-controller" from the previous step
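For reference, the public IP used for that A record can be read from the controller's LoadBalancer Service (assuming the default ingress-nginx installation namespace):
kubectl get service ingress-nginx-controller -n ingress-nginx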
my svc manifest
apiVersion: v1
kind: Service
metadata:
  namespace: staging
  name: testcompany-svc
  labels:
    app: testcompany-svc
spec:
  type: NodePort
  ports:
  - name: test-http
    port: 80
    protocol: TCP
    targetPort: 3001
  selector:
    app: testcompany
my ingress manifest
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: staging
  name: ingress
spec:
  rules:
  - host: api.testcompany.com
    http:
      paths:
      - backend:
          serviceName: testcompany-svc
          servicePort: test-http
Everything is green and it seems to be working, but when I try to reach the URL I get the This site can’t be reached error.
Update 1
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: staging
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: front.stagingtestcompany.com
    http:
      paths:
      - backend:
          serviceName: testcompanyfront-svc
          servicePort: testcompanyfront-http
  - host: api.stagingtestcompanysrl.com
    http:
      paths:
      - backend:
          serviceName: testcompanynodeapi-svc
          servicePort: testcompanyapi-http
You should check the following, in order:
your Service, Pod, and Ingress are in the same namespace: kubectl get all -n staging
your Pod is listening on port 3001: run it locally if you can, or use kubectl port-forward pods/[pod-name] -n staging 3001:3001 and try it locally with http://localhost:3001/...
your Service is reaching your Pod correctly: use kubectl port-forward service/testcompany-svc -n staging 3001:80 (the Service listens on port 80, not 3001) and try it locally with http://localhost:3001/...
check any other Ingress rules that match before the one you posted
check the firewall rules in your VPC network; they should allow traffic from Google load balancers, e.g. as sketched below
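For the last point, a sketch of a firewall rule admitting Google's load balancer health checks to the node port (the network name and port are assumptions for this example):
# Allow Google load balancer health checks / proxies to reach the nodes.
# 130.211.0.0/22 and 35.191.0.0/16 are Google's documented check source ranges;
# "default" and tcp:3001 are assumptions for this example.
gcloud compute firewall-rules create allow-google-lb-health-checks \
  --network=default \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --allow=tcp:3001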

Kubernetes is not creating internet-facing AWS Classic Load Balancer

As far as I understand the ingress controller documentation, simply creating a Service and an Ingress without special annotations should create internet-facing load balancers; weirdly, it is creating internal load balancers instead. So I added the annotation service.beta.kubernetes.io/aws-load-balancer-internal: "false", which is not working either. By the way, I am using NGINX as the ingress controller, currently version 0.8.21 in the test cluster. I should probably update it at some point.
Here's my simple spec-file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: nginx
    service.beta.kubernetes.io/aws-load-balancer-internal: "false"
  labels:
    external: "true"
    comp: ingress-nginx
    env: develop
  name: develop-api-external-ing
  namespace: develop
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - backend:
          serviceName: api-external
          servicePort: 3000
        path: /
  tls:
  - hosts:
    - api.example.com
    secretName: api-tls
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: api
    env: develop
  name: api-external
  namespace: develop
spec:
  ports:
  - name: http
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: api
    env: develop
  sessionAffinity: None
  type: ClusterIP
You are not wrong, a Service and an Ingress should create a load balancer, but you should look at the documentation a bit more closely.
An Ingress needs a NodePort Service; yours is ClusterIP. So even if it created something, it wouldn't work.
In your Ingress you are using kubernetes.io/ingress.class: nginx, meaning you want to override the default ingress controller and force the Ingress to register with ingress-nginx.
So to make it work, change the type of your Service and remove the ingress-class annotation.
You can also set up an NLB (Network Load Balancer) and point the hostnames in your Ingress rules at it. You don't need to expose the underlying backend service either as NodePort or as another load balancer; a sketch follows.
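A minimal sketch of that approach, assuming the controller's pods carry an app: ingress-nginx label (the service.beta.kubernetes.io/aws-load-balancer-type: "nlb" annotation is the documented way to request an NLB from the in-tree AWS provider):
# LoadBalancer Service fronting the NGINX ingress controller with an NLB.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-nlb
  namespace: develop               # assumed; use the controller's namespace
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx             # assumed controller pod label
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443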