ingress always showing default backend - 404 - google-cloud-platform

I am trying to install the Cyclos Mobile app on GCP, following the demo given on YouTube at this URL: https://www.youtube.com/watch?v=LmfdhOXK_ik
Everything is set up exactly as in the video, but when I try to access the deployment in a browser it always shows default backend - 404. I have tried everything suggested in several previous Stack Overflow questions, but I still get the same error.
Any idea how to fix this?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencypt-staging
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"cert-manager.io/cluster-issuer":"letsencypt-staging","kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/proxy-connect-timeout":"3600"},"name":"cyclos-ingress-nginx-https","namespace":"cyclos-name-space"},"spec":{"backend":{"serviceName":"default-http-backend","servicePort":80},"rules":[{"host":"ip-address.xip.io","http":{"paths":[{"backend":{"serviceName":"cyclos-app-stateful","servicePort":80},"path":"/*"}]}}],"tls":[{"hosts":["ip-address.xip.io"],"secretName":"ip-address.xip.io-tls-secret"}]}}
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2020-09-29T07:00:01Z"
  generation: 11
  name: cyclos-ingress-nginx-https
  namespace: cyclos-name-space
  resourceVersion: "643221534"
  selfLink: /apis/extensions/v1beta1/namespaces/cyclos-name-space/ingresses/cyclos-ingress-nginx-https
  uid: uid
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: ip-address.xip.io
    http:
      paths:
      - backend:
          serviceName: cyclos-app-stateful
          servicePort: 80
        path: /*
  tls:
  - hosts:
    - ip-address.xip.io
    secretName: ip-address.xip.io-tls-secret
status:
  loadBalancer:
    ingress:
    - ip: IP
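
One thing worth checking, offered as an educated guess rather than a confirmed fix: with kubernetes.io/ingress.class: nginx, path values are matched as prefixes/regexes rather than globs, so the GCE-style path: /* never matches anything and every request falls through to the default backend. A minimal sketch of the rules block with an nginx-style path, reusing the service from the question:

spec:
  rules:
  - host: ip-address.xip.io
    http:
      paths:
      - path: /   # nginx-style prefix; was the GCE-style /*
        backend:
          serviceName: cyclos-app-stateful
          servicePort: 80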

Related

AWS Ingress Controller seems to be ignoring host name rules

I am trying to deploy a frontend application to Amazon EKS. The concept is that there will be two deployments as well as two services (frontend-service and stg-frontend-service), one for production and one for staging.
On top of that, there will be an ingress ALB which routes traffic based on the hostname, i.e. if the hostname is www.project.io, traffic is routed to frontend-service, and if the hostname is stg.project.io, traffic is routed to stg-frontend-service.
Here are my deployment and ingress configs
stg-frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stg-frontend-deployment
  namespace: project
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stg-frontend
  template:
    metadata:
      labels:
        app: stg-frontend
    spec:
      containers:
      - name: stg-frontend
        image: STAGING_IMAGE
        imagePullPolicy: Always
        ports:
        - name: web
          containerPort: 3000
      imagePullSecrets:
      - name: project-ecr
---
apiVersion: v1
kind: Service
metadata:
  name: stg-frontend-service
  namespace: project
spec:
  selector:
    app: stg-frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
stg-prod-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  namespace: project
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: PRODUCTION_IMAGE
        imagePullPolicy: Always
        ports:
        - name: web
          containerPort: 3000
      imagePullSecrets:
      - name: project-ecr
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  namespace: project
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: project-ingress
  namespace: project
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: www.project.io
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
  - host: stg.project.io
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: stg-frontend-service
            port:
              number: 80
Later, I used Route 53 to route traffic from both domains to the ALB.
+----------------+------+---------+-----------------------------------------------------+
| Record Name | Type | Routing | Value/Route traffic to |
+----------------+------+---------+-----------------------------------------------------+
| www.project.io | A | Simple | dualstack.k8s-********.us-west-1.elb.amazonaws.com. |
| stg.project.io | A | Simple | dualstack.k8s-********.us-west-1.elb.amazonaws.com. |
+----------------+------+---------+-----------------------------------------------------+
The problem is that the ALB ingress always routes traffic according to the first rule in the spec. In the config above, the first rule is for host www.project.io, which refers to frontend-service. Whenever I try to access www.project.io or stg.project.io, I get a response from frontend-service.
Later, I switched the rules and put the staging rule first, and then it showed the staging service on both domains.
I even created a dummy record like junk.project.io and pointed it to the load balancer; it still worked and showed me the same response, even though junk.project.io is not included in my ingress config.
It seems to me that the ingress config is completely ignoring the host name and always returning the response from the first rule.
Your host and http values are defined as separate items in the list, so each http block belongs to a rule with no host, and a host-less rule matches every hostname; that is why the first service answered for all domains, including junk.project.io. Try removing the - (hyphen) in front of the http nodes:
- host: www.project.io
  http: # I removed the hyphen here
    paths:
    - path: /
      pathType: Prefix
      backend:
        service:
          name: frontend-service
          port:
            number: 80
- host: stg.project.io
  http: # I removed the hyphen here
    paths:
    - path: /
      pathType: Prefix
      backend:
        service:
          name: stg-frontend-service
          port:
            number: 80
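
To see why the hyphen matters, here is how YAML parses the two variants (a sketch; the paths lists are elided as { ... } for brevity):

# With the stray hyphens: four rules instead of two.
rules:
- host: www.project.io    # rule 1: host only, no paths
- http: { ... }           # rule 2: no host, so it matches EVERY hostname
- host: stg.project.io    # rule 3: host only, no paths
- http: { ... }           # rule 4: no host, shadowed by rule 2

# Without the hyphens: two complete, host-scoped rules.
rules:
- host: www.project.io
  http: { ... }
- host: stg.project.io
  http: { ... }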

nginx.ingress.kubernetes.io/rewrite-target doesn't work, Failed to load resource: the server responded with a status of 500

I have backend and frontend applications. I tried to create one ingress for the frontend where both paths are matched (host.com/api/v1/reference/list/test1 and host.com/api/v1/reference/test2). The second one works fine, but the first gives me an error: Failed to load resource: the server responded with a status of 500. Here is my ingress configuration:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: backend-app
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2$3
spec:
  tls:
  - hosts:
    - host.com
    secretName: tls-secret
  rules:
  - host: host.com
    http:
      paths:
      - backend:
          serviceName: service-backend
          servicePort: 80
        path: /api(/|$)(.*)
And here is the service:
apiVersion: v1
kind: Service
metadata:
  name: service-backend
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
Does anyone know why my URLs are not getting rewritten and the requests are not delivered to the backend service for host.com/api/v1/reference/test2?
Thanks in advance!
Resolved: the problem turned out to be a bug inside the application.
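
For reference, since the 500 turned out to come from the application rather than the ingress: the nginx controller substitutes rewrite-target placeholders from the capture groups of the path regex, and the path above defines only two groups, so the $3 in /$2$3 is always empty. A minimal sketch of the canonical two-group form, assuming the same prefix-stripping intent:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: backend-app
  annotations:
    kubernetes.io/ingress.class: nginx
    # $1 captures "(/|$)" and $2 captures "(.*)"; only $2 is needed.
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: host.com
    http:
      paths:
      - path: /api(/|$)(.*)
        backend:
          serviceName: service-backend
          servicePort: 80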

Problems configuring Ingress with cookie affinity

I was looking into how to use cookie affinity in GKE, using an Ingress for that.
I found the following link explaining how to do it: https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service
I created a YAML with the following:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-bsc-deployment
spec:
  selector:
    matchLabels:
      purpose: bsc-config-demo
  replicas: 3
  template:
    metadata:
      labels:
        purpose: bsc-config-demo
    spec:
      containers:
      - name: hello-app-container
        image: gcr.io/google-samples/hello-app:1.0
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-bsc-backendconfig
spec:
  timeoutSec: 40
  connectionDraining:
    drainingTimeoutSec: 60
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 50
---
apiVersion: v1
kind: Service
metadata:
  name: my-bsc-service
  labels:
    purpose: bsc-config-demo
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"my-bsc-backendconfig"}}'
spec:
  type: NodePort
  selector:
    purpose: bsc-config-demo
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-bsc-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-bsc-service
          servicePort: 80
---
Everything seems to go well. When I inspect the created Ingress, I see two backend services. One of them has the cookie configured, but the other doesn't.
If I create the Deployment and then create the Service and Ingress from GCP's console, only one backend service appears.
Does anybody know why I get two backend services when using a YAML, but only one when doing it from the console?
Thanks in advance
Oscar
Your definition is good.
The reason you have two backends is that your Ingress does not define a default backend. GCE load balancers require a default backend, so during LB creation a second backend is added to the LB to act as the default (this backend does nothing but serve 404 responses). The default backend does not use the BackendConfig.
This shouldn't be a problem, but if you want to ensure only your backend is used, define a default backend in your Ingress definition by adding spec.backend:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-bsc-ingress
spec:
  backend:
    serviceName: my-bsc-service
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-bsc-service
          servicePort: 80
But, like I said, you don't NEED to define this: the additional backend won't really come into play, and no session affinity is required for it (there is only a single pod anyway). If you are curious, the default backend pod in question is called l7-default-backend-[replicaSet_hash]-[pod_hash] and lives in the kube-system namespace.
You can enable cookie affinity on the ingress like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-sticky
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: ingress.example.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /
You can create the service like this (note that sessionAffinity: ClientIP is kube-proxy affinity keyed on the client's source IP, not a cookie):
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: ClientIP
If you are using the Traefik ingress instead of nginx or the default GKE ingress, you can write the service like this:
apiVersion: v1
kind: Service
metadata:
  name: session-affinity
  labels:
    app: session-affinity
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
    traefik.ingress.kubernetes.io/session-cookie-name: "sticky"
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: session-affinity-demo

How to fix the route configuration for the GCE ingress?

The ingress calls random services on requests, and it also calls the root (/). I need the ingress to call the service specified in the configuration and pass the full path to the service (I use the MVC pattern, so I need to provide the application with the full path in order to resolve the correct controller).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whatever
  annotations:
    ingress.kubernetes.io/add-base-url: "true"
    ingress.kubernetes.io/rewrite-target: "/"
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip-name"
spec:
  tls:
  - secretName: my-tls-secret
    hosts:
    - whatever.my-awesome-project.com
  rules:
  - host: whatever.my-awesome-project.com
    http:
      paths:
      - path: /api/whatever
        backend:
          serviceName: whatever-service
          servicePort: 80
      - path: /api/not-whatever
        backend:
          serviceName: not-whatever-service
          servicePort: 80
      - path: /images/*
        backend:
          serviceName: not-whatever-service
          servicePort: 80
From what I see in your question:
1) You are trying to use rewrite-target with the GCE ingress --> this won't work: rewrite-target is an nginx ingress controller feature. By the way, starting from version 0.22.0 you should use nginx.ingress.kubernetes.io/rewrite-target: "/" instead of ingress.kubernetes.io/rewrite-target: "/" (see the sketch after this list).
2) The add-base-url annotation was removed in 0.22.0, and it was also an nginx annotation, not GCE. More info here and here.
3) Also, I believe you don't need the rewrite-target annotation at all if you want the backend to receive the correct path.
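
To make points 1 and 3 concrete, a sketch of the relevant annotations on an nginx-class ingress; note that either rewrite-target key would rewrite every matched path to "/" before the backend sees it, which is exactly what breaks MVC-style routing:

metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Pre-0.22.0 key (deprecated): ingress.kubernetes.io/rewrite-target: "/"
    # 0.22.0 and later:            nginx.ingress.kubernetes.io/rewrite-target: "/"
    # Omit the annotation entirely so the service receives the full path.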
From my point of view, it should be something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whatever
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip-name"
spec:
  tls:
  - secretName: my-tls-secret
    hosts:
    - whatever.my-awesome-project.com
  rules:
  - host: whatever.my-awesome-project.com
    http:
      paths:
      - path: /api/whatever
        backend:
          serviceName: whatever-service
          servicePort: 80
      - path: /api/not-whatever
        backend:
          serviceName: not-whatever-service
          servicePort: 80
      - path: /images/*
        backend:
          serviceName: not-whatever-service
          servicePort: 80
A lot depends on which ingress controller and version you use. Do you have the option of moving to nginx?

Kubernetes - ingress route issue - route across namespaces

I have a Kubernetes cluster on AWS with one master and two worker nodes. I have two environments (qc and prod) in the cluster, and I created two namespaces for them.
The same service runs in the qc and prod namespaces.
I have created an ingress for each namespace.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: prod
spec:
  rules:
  - host: "*.qc-k8s.example.com"
    http:
      paths:
      - path: /app
        backend:
          serviceName: client-svc
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: qc
spec:
  rules:
  - host: "*.qc-k8s.example.com"
    http:
      paths:
      - path: /app-qc
        backend:
          serviceName: client-svc
          servicePort: 80
I have client-svc in both the qc and prod namespaces, with node port 80 open.
Then I created an ELB service and a DaemonSet as below.
kind: Service
apiVersion: v1
metadata:
  name: ingress-svc
  namespace: deafult
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:ca-central-1:492276880714:certificate/xxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: http
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: ingress-nginx
  namespace: deafult
spec:
  template:
    metadata:
      labels:
        app: my-app
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.6
        name: ingress-nginx
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
When I try curl -iv https://gayan.qc-k8s.example.com/app/, I get an error:
2017/06/27 15:43:31 [error] 158#158: *981 connect() failed (111: Connection refused) while connecting to upstream, client: 209.128.50.138, server: gayan.qc-k8s.example.com, request: "GET /app/ HTTP/1.1", upstream: "http://100.82.2.47:80/app/", host: "gayan.qc-k8s.example.com"
209.128.50.138 - [209.128.50.138, 209.128.50.138] - - [27/Jun/2017:15:43:31 +0000] "GET /app/ HTTP/1.1" 500 193 "-" "curl/7.51.0" 198 0.014 100.82.2.47:80, 100.96.2.48:80 0, 193 0.001, 0.013 502, 500
If I curl -iv https://gayan.qc-k8s.example.com/app-qc, I get the same issue.
Has anyone experienced this error before? Any clue how to resolve it?
Thank you
I solved this with the help of https://github.com/kubernetes/kubernetes/issues/17088
An example, from a real document we use:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: dev-1
spec:
  rules:
  - host: api-gateway-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-gateway
          servicePort: 80
        path: /
  - host: api-shop-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-shop
          servicePort: 80
        path: /
  - host: api-search-dev-1.faceit.com
    http:
      paths:
      - backend:
          serviceName: api-search
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - api-gateway-dev-1.faceit.com
    - api-search-dev-1.faceit.com
    - api-shop-dev-1.faceit.com
    secretName: faceitssl
We make one of these for each of our namespaces, for each track.
Then we have a single namespace with an ingress controller, which runs automatically configured NGINX pods. Another AWS load balancer points to these pods, which run on a NodePort via a DaemonSet that keeps exactly one on every node in our cluster.
The traffic is then routed:
Internet -> AWS ELB -> NGINX (on node) -> Pod
We keep the isolation between namespaces while using Ingresses as they were intended. It is not correct, or even sensible, to use one ingress to hit multiple namespaces; that is not how they are designed. The solution is one ingress per namespace, with a cluster-scoped ingress controller that actually does the routing.
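
For illustration, a minimal sketch of the controller-facing NodePort Service this architecture implies; the names and namespace below are assumptions for the sketch, not taken from the author's setup:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx        # hypothetical name
  namespace: ingress-nginx   # the single controller namespace
spec:
  type: NodePort             # the AWS load balancer targets the allocated node ports
  selector:
    app: ingress-nginx       # matches the controller pods run by the DaemonSet
  ports:
  - name: http
    port: 80
    targetPort: 80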