Can't connect static ip to Ingress on GKE - google-cloud-platform

I am trying to connect my Ingress to a static IP. I seem to be following all the tutorials, but still I cannot attach my static IP to the Ingress. My ingress file is as follows (referring to the static IP "test-ip"):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-web
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "test-ip"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
  - http:
      paths:
      - path: /api/
        backend:
          serviceName: api-cluster-ip-service
          servicePort: 5005
      - path: /
        backend:
          serviceName: web-cluster-ip-service
          servicePort: 80
However, when I run
kubectl get ingress ingress-web
it returns
NAME          HOSTS   ADDRESS   PORTS   AGE
ingress-web   *                 80      4m
without giving the address. In the VPC Network > External IP addresses page the static IP is there, it is global, but it keeps saying: In use by None.
gcloud compute addresses describe test-ip --global
gives
address: 34.240.xx.xxx
creationTimestamp: '2019-03-26T00:34:26.086-07:00'
description: ''
id: '536303927960423409'
kind: compute#address
name: test-ip
networkTier: PREMIUM
selfLink: https://www.googleapis.com/compute/v1/projects/my-project-adbc8/global/addresses/test-ip
status: RESERVED
What am I missing here?

I ran into this issue. I believe it has been fixed by this pull request.
Changing
kubernetes.io/ingress.global-static-ip-name
to
kubernetes.io/ingress.regional-static-ip-name
worked for me.
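For reference, a minimal, hedged sketch of where that annotation sits (assuming the reserved address is regional and named test-ip):
metadata:
  annotations:
    kubernetes.io/ingress.regional-static-ip-name: "test-ip"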

I've spent hours trying to figure the issue out.
It simply seems like a bug with GKE.
What solved it was:
Starting the ingress with no static IP
Going to the cloud console on the web under VPC Network > External IP addresses
Waiting for the Ingress IP to show up
Setting it as static and giving it a name (a gcloud sketch of this promotion step follows below)
Adding kubernetes.io/ingress.global-static-ip-name: <ip name> to the Ingress yaml and applying it.
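That promotion step can also be done with gcloud instead of the console; a minimal, hedged sketch (the address name and IP are placeholders):
# reserve the IP the Ingress is already using as a named global static address
gcloud compute addresses create my-ingress-ip --global --addresses <ingress-ip>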

You have to make sure the IP you created in GCP is Global and not Regional in order to use the following annotation in your ingress:
kubernetes.io/ingress.global-static-ip-name
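A quick, hedged way to check which kind you reserved:
gcloud compute addresses list
# a global address shows an empty REGION column; a regional one lists its region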

I had the same problem, but after some research and testing I managed to solve this issue. These are the steps I took:
First you need to create a Global static IP address on GCP.
I happened to use Terraform to do this, e.g. see the example below,
resource "google_compute_global_address" "static" {
name = "global-test-ip"
project = var.gcp_project_id
address_type = "EXTERNAL"
}
based on this documentation: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_global_address
You could however use the GCP console to do this.
Note: I created this Global Static IP in the same GCP project as my GKE cluster.
Once I had completed the creation of the Global Static IP, I then added the following annotation to the Kubernetes ingress yaml file and applied it (i.e. kubectl apply -f ingress.yaml):
annotations:
  kubernetes.io/ingress.global-static-ip-name: "global-test-ip"
Note: it took a few minutes for the Ingress and Google Load balancer to update after I applied this ingress change.

The first thing you should check is the status of the IP, e.g.
gcloud compute addresses describe traefik --global
You should see something along the lines of:
address: 34.111.200.XXX
addressType: EXTERNAL
creationTimestamp: '2022-07-25T14:06:48.827-07:00'
description: ''
id: '5625073968713218XXX'
ipVersion: IPV4
kind: compute#address
name: traefik
networkTier: PREMIUM
selfLink: https://www.googleapis.com/compute/v1/projects/contrawork/global/addresses/traefik
status: RESERVED
Your Ingress should look something like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: 'gce'
    kubernetes.io/ingress.global-static-ip-name: 'traefik'
  name: secondary-ingress
spec:
  defaultBackend:
    service:
      name: 'traefik'
      port:
        number: 80
After this is deployed, within 5 minutes you should see status change to IN USE.
If not, I would attempt to delete and re-create the Ingress resource.
If it still does not happen, then I would check the documentation to confirm the cluster is properly configured, e.g. ensure that the GKE cluster has "HTTP Load Balancing" enabled.
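To check that from the command line, a hedged sketch (cluster name and zone are placeholders):
gcloud container clusters describe my-cluster --zone europe-west1-b --format="yaml(addonsConfig.httpLoadBalancing)"
# if this prints "disabled: true", re-enable the add-on:
gcloud container clusters update my-cluster --zone europe-west1-b --update-addons=HttpLoadBalancing=ENABLED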

Related

Linkerd route-based metrics do not work with GKE default ingress controller

I have a microservice running in GKE. I am trying to befriend the default GKE GCE ingress with Linkerd so that I can observe route-based metrics with Linkerd. The documentation says that the GCE ingress should be meshed with ingress mode enabled, i.e. with the linkerd.io/inject: ingress annotation rather than the default enabled. I tried the following Ingress resource, however route-based metrics are not coming through. I checked with the linkerd tap command and rt_route is not getting set.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: emojivoto
  annotations:
    ingress.kubernetes.io/custom-request-headers: "l5d-dst-override: web-svc.emojivoto.svc.cluster.local:80"
    ingress.gcp.kubernetes.io/pre-shared-cert: "managed-cert-name"
    kubernetes.io/ingress.global-static-ip-name: "static-ip-name"
    linkerd.io/inject: ingress
spec:
  ingressClassName: gce
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
I suspect that the linkerd.io/inject: ingress annotation should be added to the ingress controller, however since it is managed by GKE, I do not know how I can add it.
The linkerd.io/inject: ingress annotation should be added to your deployment(s) or to one or more namespaces for automatic injection.
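For example, a hedged sketch of opting a whole namespace into automatic injection and rolling the existing workloads so the proxy gets injected (namespace name taken from the question; the ingress value is meant only for an ingress controller's own pods):
kubectl annotate namespace emojivoto linkerd.io/inject=enabled
# re-create the pods so the proxy is injected into them
kubectl -n emojivoto rollout restart deploy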

Exposing a service on EKS using NGINX ingress and issues with load balancer

I am trying to set up a service and expose it externally on EKS. I have already done it on GKE pretty easily but now AWS is giving me a hard time.
My NGINX yaml looks something like that:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - app.mydomain.com
    secretName: myapp-tls
  rules:
  - host: app.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp-service
          servicePort: 80
And then I have my domain app.mydomain.com on Google Domains pointing at the ingress external address. There is also a cert-manager service running in order to support HTTPS.
However, while basically the same setup worked completely out of the box on GKE, EKS gives me a hard time.
From what I understand it has something to do with the EKS default LoadBalancer being layer 4, in comparison to Google's layer 7 (which explains HTTPS not working), but there are also issues with redirection of the domain, as it just resolves to the ingress address instead of my desired address and thus my app doesn't show up.
The domain is registered on Google Domains and I'm creating Synthetic Records (for my subdomain) that point to my ingress external address on EKS. The same scheme works perfectly fine on GKE, but here it resolves to the ingress address instead of my domain, which results in a 404 on the ingress side.
I was wondering if someone could please point me to how to properly set it up? Should I give up on nginx ingress on EKS and move on to ALB? And how do I properly associate the domain?
Thank you very much in advance!
Edit:
output of kubectl describe ingress myapp-ingress:
Name: myapp-ingress
Namespace: default
Address: ********************************-****************.elb.eu-west-1.amazonaws.com
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
myapp-tls terminates app.mydomain.com
Rules:
Host              Path  Backends
----              ----  --------
app.mydomain.com
                  /     myapp-service:80 (172.31.2.238:8000)
Annotations: cert-manager.io/cluster-issuer: myapp-letsencrypt-prod
kubernetes.io/ingress.class: nginx
Events: <none>
Should I give up on nginx ingress on EKS and move onto ALB
No. NGINX ingress controllers work perfectly well on EKS. It is possible to configure them as either layer 4 or layer 7; we use it in layer 7 mode.
Can you update your question with the output of
kubectl get ingress myapp-ingress
I think your ingress path is also incorrect. Unless I'm mistaken, that's just routing the root of your app, not all URIs. We use the scheme:
spec:
  rules:
  - host: service.d.tld
    http:
      paths:
      - path: /?(.*) # <---
        backend:
          serviceName: my-service
          servicePort: http
Are you seeing errors in the nginx ingress controller's logs? That + kubectl events are both useful for debugging purposes.
I'd disable TLS everywhere and get your service working on http, then work stepwise on getting TLS enabled on the ingress controller.
Edit: Based on your response above,
curl -H "Host: app.mydomain.com" http://<elb-address>:80
SHOULD call through to your service behind the ingress.
How is app.mydomain.com defined? Is it a CNAME to the DNS entry?
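One quick way to check how the record resolves (a hedged sketch; the hostname is taken from the question):
dig +short app.mydomain.com        # should ultimately resolve to the ELB's IP addresses
dig +short app.mydomain.com CNAME  # shows whether it is a CNAME pointing at the ELB hostname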

AWS EKS Ingress - No Address

I have a Kubernetes application using AWS EKS, with the below details:
Cluster:
+ Kubernetes version: 1.15
+ Platform version: eks.1
Node Groups:
+ Instance Type: t3.medium
+ 2(Minimum) - 2(Maximum) - 2(Desired) configuration
[Pods]
+ 2 active pods
[Service]
+ Configured Type: ClusterIP
+ metadata.name: k8s-eks-api-service
[rbac-role.yaml]
https://pastebin.com/Ksapy7vK
[alb-ingress-controller.yaml]
https://pastebin.com/95CwMtg0
[ingress.yaml]
https://pastebin.com/S3gbEzez
When I tried to pull the ingress details, below are the values (NO ADDRESS):
Host: *
ADDRESS:
My goal is to know why the address has no value. I expect to have a private or public address to be used by other services in my application.
The solution that fitted my case was adding ingressClassName in ingress.yaml or configuring a default ingressClass.
add ingressClassName in ingress.yaml
#ingress.yaml
metadata:
  name: ingress-nginx
  ...
spec:
  ingressClassName: nginx   # <-- add this
  rules:
  ...
or
edit ingressClass yaml
$ kubectl edit ingressclass <ingressClass Name> -n <ingressClass namespace>
#ingressClass.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"   # <-- add this
  ...
In order for your Kubernetes cluster to be able to get an address, you will need to be able to manage Route53 from within the cluster; for this task I would recommend using ExternalDNS.
In a broader sense, ExternalDNS allows you to control DNS records dynamically via Kubernetes resources in a DNS provider-agnostic way.
source: ExternalDNS
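For illustration only, a hedged sketch of the annotation ExternalDNS typically reads once it is deployed (the hostname here is a placeholder, not from the question):
annotations:
  # ExternalDNS will create and maintain this Route53 record for the resource's address
  external-dns.alpha.kubernetes.io/hostname: myapp.example.com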
This happened with me too: after all the setup, I was not able to see the ingress address. The best way to debug this issue is to check the logs for the ingress controller. You can do this by:
Get the Ingress controller pod name using: kubectl get po -n kube-system
Check the logs for that pod using: kubectl logs <pod_name> -n kube-system
This will point you to the exact issue as to why you are not seeing the address.

GKE: ingress loadbalancer does not use configured static IP

I've created a global static IP Address using terraform in GCP. However, when I try to assign it to an ingress controller inside a GKE cluster, it gets ignored:
Here's my kubernetes configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: homefully-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "homefully-ingress-root"
  labels:
    app: homefully-ingress
spec:
  # ...
the IP address referenced here looks like this:
NAME                     REGION         ADDRESS         STATUS
homefully-ingress-root   europe-west3   35.234.83.106   RESERVED
However, ingress does not use that IP Address but another random one. That's quite a problem, as I am not using google's DNS services, so I need to rely on a static IP.
Instead, this is what I am getting:
Name: homefully-ingress
Namespace: default
Address: 35.227.252.112
Default backend: auth-proxy-staging:4180 (10.4.0.7:4180)
Rules:
Host Path Backends
---- ---- --------
adminpanel.homefully.tech
homefully-management-frontend-website-staging:80 (10.4.2.7:80)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"gce","kubernetes.io/ingress.global-static-ip-name":"homefully-ingress-root"},"labels":{"app":"homefully-ingress"},"name":"homefully-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"auth-proxy-staging","servicePort":4180},"rules":[{"host":"adminpanel.homefully.tech","http":{"paths":[{"backend":{"serviceName":"homefully-management-frontend-website-staging","servicePort":80}}]}}]}}
kubernetes.io/ingress.class: gce
kubernetes.io/ingress.global-static-ip-name: homefully-ingress-root
ingress.kubernetes.io/backends: {"k8s-be-31611--026ed6556721059b":"Unknown","k8s-be-32450--026ed6556721059b":"Unknown"}
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-homefully-ingress--026ed6556721059b
ingress.kubernetes.io/target-proxy: k8s-tp-default-homefully-ingress--026ed6556721059b
ingress.kubernetes.io/url-map: k8s-um-default-homefully-ingress--026ed6556721059b
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 8m loadbalancer-controller default/homefully-ingress
Normal CREATE 7m loadbalancer-controller ip: 35.227.252.112
I can't find an error message or any hint on what's wrong here. Would be very thankful for some suggestions
Stupid me - it was not a global IP address, in which case just a random new one gets assigned.
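For anyone hitting the same thing, a hedged sketch of re-reserving the name as a global address with gcloud (name and region taken from the question; note the new reservation will get a different IP):
# release the regional reservation, then reserve the same name globally
gcloud compute addresses delete homefully-ingress-root --region europe-west3
gcloud compute addresses create homefully-ingress-root --global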

Create specific A record entry to Kubernetes local DNS

I have a Kubernetes v1.4 cluster running in AWS with auto-scaling nodes.
I also have a Mongo Replica Set cluster with SSL-only connections (FQDN common-name) and public DNS entries:
node1.mongo.example.com -> 1.1.1.1
node2.mongo.example.com -> 1.1.1.2
node3.mongo.example.com -> 1.1.1.3
The Kubernetes nodes are part of a security group that allows access to the mongo cluster, but only via their private IPs.
Is there a way of creating A records in the Kubernetes DNS with the private IPs when the public FQDN is queried?
The first thing I tried was a script & ConfigMap combination to update /etc/hosts on startup (ref. Is it a way to add arbitrary record to kube-dns?), but that is problematic as other Kubernetes services may also update the hosts file at different times.
I also tried a Services & Endpoints configuration:
---
apiVersion: v1
kind: Service
metadata:
  name: node1.mongo.example.com
spec:
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
---
apiVersion: v1
kind: Endpoints
metadata:
  name: node1.mongo.example.com
subsets:
- addresses:
  - ip: 192.168.0.1
  ports:
  - port: 27017
But this fails as the Service name cannot be a FQDN...
While not so obvious at first, the solution is quite simple. The kube-dns image in recent versions includes dnsmasq as one of its components. If you look into its man page, you will see some useful options. Following that reading, you can choose a path similar to this:
Create a ConfigMap to store your DNS mappings:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  myhosts: |
    10.0.0.1 foo.bar.baz
Having applied that ConfigMap in your cluster, you can now make some changes to the kube-dns-vXX deployment you use in your cluster.
Define a volume that will expose your ConfigMap to dnsmasq:
volumes:
- name: hosts
  configMap:
    name: kube-dns
and mount it in the dnsmasq container of the kube-dns deployment/rc template:
volumeMounts:
- name: hosts
  mountPath: /etc/hosts.d
and finally, add a small config flag to your dnsmasq arguments:
args:
- --hostsdir=/etc/hosts.d
Now, as you apply these changes to the kube-dns-vXX deployment in your cluster, it will mount the ConfigMap and use the files mounted in /etc/hosts.d/ (in typical hosts-file format) as a source of knowledge for dnsmasq. Hence, if you now query for foo.bar.baz from your pods, they will resolve to the respective IP. These entries take precedence over public DNS, so it should fit your case perfectly.
Mind that dnsmasq does not watch for changes in the ConfigMap, so it has to be restarted manually if the ConfigMap changes.
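A hedged sketch of forcing that restart (assumes the usual k8s-app=kube-dns label on the kube-dns pods; the deployment/rc then recreates them):
kubectl delete pod -n kube-system -l k8s-app=kube-dns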
Tested and validated this on a live cluster just a few minutes ago.