Kubernetes ALB Ingress doesn't route traffic to any rules except /*

I deployed a "monolithic" app into kubernetes on AWS. This app works fine through the ALB.
Next I want to deploy a small service at the same cluster and map traffic to it through the same ALB ingress.
Here is how the Ingress manifest looks like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: scala-backend-ingress
  namespace: prod
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: akka-backend
spec:
  rules:
    - http:
        paths:
          - path: /proxy/service/*
            backend:
              serviceName: proxy-service-np
              servicePort: 80
          - path: /*
            backend:
              serviceName: akka-main-np
              servicePort: 80
Unfortunately, when I call:
GET www.aliace.example.com/proxy/service/traffic/data
I receive a 502 Bad Gateway response with the header Server: awselb/2.0.
All traffic to /* is handled properly.

The problem was not in Kubernetes.
The application in the container was bound to localhost instead of 0.0.0.0, so traffic forwarded to the pod IP was never accepted.
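One way this kind of misconfiguration surfaces early is a readiness probe on the container port, because the kubelet sends httpGet probes to the pod IP rather than to localhost. A minimal sketch, with the pod name, image, and health path assumed rather than taken from the question:
# Sketch only: the image and health path are placeholders.
# A process bound to 127.0.0.1 never answers probes sent to the pod IP,
# so the problem shows up as a NotReady pod rather than as 502s from the ALB.
apiVersion: v1
kind: Pod
metadata:
  name: proxy-service
  namespace: prod
spec:
  containers:
    - name: proxy-service
      image: example/proxy-service:latest
      ports:
        - containerPort: 80
      readinessProbe:
        httpGet:
          path: /proxy/service/health   # assumed health endpoint
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10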

Can you try the path pattern below?
- path: /proxy/service/*/*
  backend:
    serviceName: proxy-service-np
    servicePort: 80

Related

Error in exposing multiple ports with ALB Ingress on EKS

I have a Triton server on EKS listening on 3 ports: 8000 for HTTP requests, 8001 for gRPC, and 8002 for Prometheus metrics. I created a Triton deployment on EKS, exposed through a NodePort service, and I am also using an ALB Ingress, which creates an Application Load Balancer to balance the load of the Triton servers on these ports.
However, the traffic is not flowing correctly: all 3 ports show the same output, while each should be different. So, do I now have to create 3 Application Load Balancers for the 3 ports, or is it possible to manage all ports with a single Application Load Balancer?
The YAML file for the ALB Ingress looks like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: triton
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":8000}, {"HTTP":8001}, {"HTTP":8002}]'
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: triton
                port:
                  number: 8000
    - http:
        paths:
          - path: /v2
            pathType: Prefix
            backend:
              service:
                name: triton
                port:
                  number: 8001
    - http:
        paths:
          - path: /metrics
            pathType: Prefix
            backend:
              service:
                name: triton
                port:
                  number: 8002
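As far as I can tell, the listen-ports annotation only declares which listeners the ALB opens; it does not tie individual rules to particular ports, which would explain all three ports returning the same output. A single ALB can still be kept by splitting the port-specific rules into separate Ingress resources that share the load balancer through the alb.ingress.kubernetes.io/group.name annotation, each declaring only its own listener. A sketch for the 8001 backend only, with the group name made up and the exact behaviour depending on the AWS Load Balancer Controller version:
# Hypothetical companion Ingress handling only listener port 8001.
# group.name lets it share one ALB with sibling Ingresses for 8000 and 8002.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: triton-grpc
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/group.name: triton      # assumed shared group name
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":8001}]'
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: triton
                port:
                  number: 8001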

Creating a Kubernetes Ingress resource for GCP/GKE by example

I'm trying to make sense of an example Kubernetes YAML config file that I want to customize:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-web-server
  namespace: myapp
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/security-groups: my-sec-group
    app.kubernetes.io/name: my-alb-ingress-web-server
    app.kubernetes.io/component: my-alb-ingress
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: my-web-server
              servicePort: 8080
The documentation for this example claims it's for creating an "Ingress", i.e. a K8s object that manages inbound traffic to a service or pod.
This particular Ingress resource appears to use an AWS ALB (Application Load Balancer), and I need to adapt it to create an Ingress resource in GCP/GKE.
I've been Googling the Kubernetes documentation high and low, and although I found the kubernetes.io/ingress.class docs, I don't see where they define "alb" as a valid value for this property. I'm asking because I need to find the correct kubernetes.io/ingress.class value for GCP/GKE, and I assume that if I can find the K8s/AWS Ingress documentation, I should be able to find the K8s/GCP Ingress documentation as well.
I'm assuming K8s has built-in clients for AWS, GCP, Azure, etc. in kubectl for connecting to these clouds/providers?
So I ask: how does the above configuration tell K8s that we are creating an AWS Ingress (as opposed to an Azure Ingress, GCP Ingress, etc.), and where is the documentation for this?
The documentation you're looking for is:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl
An example of an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-front-api
  namespace: example
  annotations:
    networking.gke.io/managed-certificates: "front.example.com, api.example.com"
    kubernetes.io/ingress.global-static-ip-name: "prod-ingress-static-ip"
spec:
  rules:
    - host: front.example.com
      http:
        paths:
          - backend:
              service:
                name: front
                port:
                  number: 80
            path: /*
            pathType: ImplementationSpecific
    - host: api.example.com
      http:
        paths:
          - backend:
              service:
                name: api
                port:
                  number: 80
            path: /*
            pathType: ImplementationSpecific
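To answer the specific question about the class value: the GKE analogue of alb is the GKE-provided Ingress controller, selected with kubernetes.io/ingress.class: "gce" for the external HTTP(S) load balancer or "gce-internal" for the internal one; when the annotation is omitted, as in the example above, GKE defaults to the external controller. A minimal sketch reusing the names from the question:
# Sketch: same Service as in the question, but routed through the GKE Ingress controller.
# "gce" = external HTTP(S) load balancer, "gce-internal" = internal load balancer.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-web-server
  namespace: myapp
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: my-web-server
      port:
        number: 8080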

Kubernetes Ingress Controller on GCP GKE can't reach the site

Hi, this is the first time I am trying to deploy an application with Kubernetes. The problem I am facing: I want to be able to link subdomains to my Services, but when I try to navigate to the links I get
This site can't be reached
I will explain the steps I took; probably something is wrong or missing.
1. I installed the ingress controller on Google Cloud Platform
2. In GCP -> Network Services -> Cloud DNS:
a. I pointed testcompany.com at Google DNS
b. I created an A record pointing to the public IP of the "ingress-nginx-controller" from the previous step
My Service manifest:
apiVersion: v1
kind: Service
metadata:
  namespace: staging
  name: testcompany-svc
  labels:
    app: testcompany-svc
spec:
  type: NodePort
  ports:
    - name: test-http
      port: 80
      protocol: TCP
      targetPort: 3001
  selector:
    app: testcompany
My Ingress manifest:
apiVersion: networking.k8s.io/v1beta1
    - host: api.testcompany.com
      http:
        paths:
          - backend:
              serviceName: testcompany-svc
              servicePort: test-http
Everything is green and it seems to be working, but when I try to reach the URL I get "This site can't be reached".
Update 1
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: staging
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: front.stagingtestcompany.com
      http:
        paths:
          - backend:
              serviceName: testcompanyfront-svc
              servicePort: testcompanyfront-http
    - host: api.stagingtestcompanysrl.com
      http:
        paths:
          - backend:
              serviceName: testcompanynodeapi-svc
              servicePort: testcompanyapi-http
You should check this, in order:
your Service, Pod, and Ingress are in the same namespace: kubectl get all -n staging
your Pod is listening on port 3001: run it locally if you can, or use kubectl port-forward pods/[pod-name] -n staging 3001:3001 and try it locally with http://localhost:3001/...
your Service is reaching your Pod correctly: use kubectl port-forward service/testcompany-svc -n staging 3001:80 (the Service port is 80, targeting 3001 in the Pod) and try it locally with http://localhost:3001/... (the port chain is sketched after this list)
check any other Ingress spec rules that come before the one you posted
check for firewall rules in your VPC network; they should allow traffic from Google LBs
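For reference, the chain that the second and third checks exercise has to line up end to end: the Ingress servicePort test-http matches the Service port name, whose targetPort 3001 must match the container port the app actually listens on. The Deployment is not shown in the question, so this is an assumed sketch using the same names:
# Assumed Deployment matching the Service above (not shown in the question).
# Ingress servicePort: test-http -> Service port "test-http" (80)
#   -> targetPort 3001 -> containerPort 3001, where the app must listen.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testcompany
  namespace: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testcompany              # must match the Service selector
  template:
    metadata:
      labels:
        app: testcompany
    spec:
      containers:
        - name: testcompany
          image: example/testcompany:latest   # placeholder image
          ports:
            - containerPort: 3001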

eks http https redirect using ingress

This is my Ingress file. What I need is how to add HTTPS redirection settings here in the Ingress file. I did it using the Service file and it works, but to reduce costs I decided to use a SINGLE Ingress file which manages multiple services with a SINGLE AWS Classic load balancer.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  generation: 4
  name: brain-xx
  namespace: xx
spec:
  rules:
    - host: app.xx.com
      http:
        paths:
          - backend:
              serviceName: xx-frontend-service
              servicePort: 443
            path: /
status:
  loadBalancer:
    ingress:
      - ip: xx.xx.xx.xx
I have managed to create HTTP to HTTPS redirection on GKE. Let me know if this solution works for your case on AWS:
Steps to reproduce
Apply Ingress definitions
Configure basic HTTP ingress resource
Create SSL certificate
Replace the old Ingress resource with an HTTPS-enabled one.
Apply Ingress definitions
Follow this Ingress link to check whether there are any prerequisites before installing the NGINX Ingress controller on your AWS infrastructure, and then install it.
Configure basic HTTP ingress resource and test it
The example below is an Ingress configuration with HTTP traffic only.
It will act as a starting point:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-http
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: xx.yy.zz
      http:
        paths:
          - path: /
            backend:
              serviceName: hello-service
              servicePort: hello-port
          - path: /v2/
            backend:
              serviceName: goodbye-service
              servicePort: goodbye-port
Please change this file to reflect the configuration appropriate to your case.
Create SSL certificate
For this to work without browser security warnings you will need a valid SSL certificate and a domain name.
To create this certificate you can use, for example: Linode - create Let's Encrypt SSL certificates.
Let's Encrypt will create files which will be used later.
Configure HTTPS ingress resource and test it
By default, NGINX Ingress will create a self-signed certificate if it is not provided one. To provide one, you will need to add it as a secret to your Kubernetes cluster.
As I said earlier, the files (cert.pem, privkey.pem) that Let's Encrypt created will be added to Kubernetes to configure HTTPS.
The command below will use these files to create a secret for the Ingress:
$ kubectl create secret tls ssl-certificate --cert cert.pem --key privkey.pem
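Equivalently, if you prefer to keep the secret in version control next to the Ingress, the same object can be written declaratively (a sketch; the placeholders stand in for the base64-encoded contents of cert.pem and privkey.pem):
# Declarative equivalent of the kubectl create secret tls command above.
apiVersion: v1
kind: Secret
metadata:
  name: ssl-certificate
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded cert.pem>
  tls.key: <base64-encoded privkey.pem>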
This Ingress configuration supports HTTPS and redirects all traffic to it:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-https
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - secretName: ssl-certificate
  rules:
    - host: xx.yy.zz
      http:
        paths:
          - path: /
            backend:
              serviceName: hello-service
              servicePort: hello-port
          - path: /v2/
            backend:
              serviceName: goodbye-service
              servicePort: goodbye-port
Please change this file to reflect the configuration appropriate to your case.
Take a look at this fragment which will enable HTTPS and redirect all the traffic to it:
  nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - secretName: ssl-certificate
Apply this configuration and check if it worked for you.
Below is part of the curl output, which shows that connecting to http://xx.yy.zz is redirected to https://xx.yy.zz:
< HTTP/1.1 308 Permanent Redirect
< Server: openresty/1.15.8.2
< Date: Fri, 20 Dec 2019 15:06:57 GMT
< Content-Type: text/html
< Content-Length: 177
< Connection: keep-alive
< Location: https://xx.yy.zz/

Re-deploying AWS Ingress keeps binning my AWS ALB

We're using an AWS ALB Ingress controller to manage entry into our K8S cluster.
Every time we add a new ingress rule it seems to bin our ALB and re-provision it, which in turn takes everything down. Are we doing something wrong?
Thanks,
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "dv1-ingress"
  namespace: "dv1"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: dv1-ingress
spec:
  rules:
    - http:
        paths:
          - path: /derivative-cost-new/*
            backend:
              serviceName: "derivative-cost-new-published-uk-service"
              servicePort: 80