Access k8s Load Balancer running inside EC2 from outside

I am running a simple web application inside pods and have mapped them to a LoadBalancer service. I was able to curl it from the EC2 machine but couldn't access it from outside. Am I missing something in the configuration? Here are my deployment and service YAML files.
Service
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-service
spec:
  type: LoadBalancer
  selector:
    tag: frontend
  ports:
    - name: port-lb-k8s
      protocol: TCP
      port: 8080
      targetPort: 80
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  selector:
    matchLabels:
      tag: frontend
  replicas: 3 # tells the deployment to run 3 pods matching the template
  template:
    metadata:
      labels:
        tag: frontend
    spec:
      containers:
        - name: frontend-container
          image: coitlearning/coit-frontend
EC2 machine (screenshot omitted)

In order to create a service with an internet-facing Network Load Balancer that load balances to IP targets, you can use the following:
apiVersion: v1
kind: Service
metadata:
  name: nlb-load-balancer-service
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
    - port: 8080
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    tag: frontend
Note that these annotations are handled by the AWS Load Balancer Controller, which must be installed in the cluster. You can find more details in the official docs.
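Once the Service is applied and the NLB has been provisioned, Kubernetes records the load balancer's DNS name in the Service status; that hostname (on port 8080 in this example) is what you reach from outside. A minimal sketch of what the populated status looks like, with a hypothetical hostname:
apiVersion: v1
kind: Service
metadata:
  name: nlb-load-balancer-service
status:
  loadBalancer:
    ingress:
      # hypothetical DNS name; the real one appears once provisioning finishes
      - hostname: k8s-default-nlbloadb-0123456789.elb.us-east-1.amazonaws.com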

Related

How to set an ingress with ClusterIP in AWS-EKS

I am new to AWS EKS and I want to know how I can set up an ingress and enable TLS (with a free service such as Let's Encrypt).
I have deployed an EKS cluster and I have the following sample nginx manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-loadbalancer
spec:
  type: LoadBalancer # <------ can't I use a ClusterIP and still have an LB provisioned?
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
# 05-ALB-Ingress-Basic.yml
# Annotations Reference: https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-usermgmt-restapp-service
  labels:
    app: usermgmt-restapp
  annotations:
    # Ingress Core Settings
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
spec:
  rules:
    - http:
        paths:
          - path: /*
            pathType: Prefix
            backend:
              service:
                name: nginx-service-loadbalancer
                port:
                  number: 80
When it creates the LoadBalancer-type service, it goes ahead and creates a classic load balancer.
My questions are:
1. How can I provision (automatically) a Layer 7 application load balancer and not the classic load balancer?
2. Instead of using a LoadBalancer-type service, can I use a ClusterIP service, point my ingress to that, and still have a load balancer created automatically?
Thank you!
How can I provision (automatically) a Layer 7 application load balancer and not the classic load balancer?
By using an ingress resource and specifying kubernetes.io/ingress.class: "alb".
Instead of using a LoadBalancer-type service, can I use a ClusterIP service and use my ingress to point to that and still create an automatic load balancer?
Yes. When you use an ALB ingress resource with the annotation alb.ingress.kubernetes.io/target-type: ip, you can use a ClusterIP service. In that case, don't create both a LoadBalancer-type service and an ingress resource at the same time; the ingress alone provisions the ALB, as sketched below.
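A minimal sketch of that combination, reusing the nginx deployment from the question (the service and ingress names here are made up for illustration; the annotations follow the ones already shown above):
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-clusterip
spec:
  type: ClusterIP # no load balancer is provisioned for the service itself
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: internet-facing
    # route the ALB straight to the pod IPs, so a ClusterIP service is enough
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service-clusterip
                port:
                  number: 80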

ALB for Argo CD on kubernetes using AWS Load Balancer Controller (HTTP2)

I have an EKS Kubernetes cluster with the AWS Load Balancer Controller and Argo CD installed. I'm creating an Application Load Balancer based on the Argo CD documentation here.
Basically, I'm creating a NodePort service that receives traffic from the load balancer, and an ingress that will create the load balancer (using AWS Load Balancer Controller).
The ingress code is this one:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
    # Use this annotation (which must match a service name) to route traffic to HTTP2 backends.
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    # ALB annotations
    kubernetes.io/ingress.class: 'alb'
    alb.ingress.kubernetes.io/scheme: 'internet-facing'
    alb.ingress.kubernetes.io/target-type: 'instance'
    alb.ingress.kubernetes.io/load-balancer-name: 'test-argocd'
    alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:us-east-1:1234567:certificate/longcertcode'
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
    # Health Check
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
  name: argocd
  namespace: argocd
spec:
  rules:
    - host: argocd.argoproj.io
      http:
        paths:
          - path: /
            backend:
              service:
                name: argogrpc
                port:
                  number: 443
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - argocd.argoproj.io
  defaultBackend:
    service:
      name: argogrpc
      port:
        number: 443
And that creates a Load Balancer as expected.
I'm creating the service with this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
    - name: "443"
      port: 443
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort
The issue here is that the health check fails on the Target Group (screenshot omitted).
If I change the backend protocol version to GRPC:
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: GRPC
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
    - name: "443"
      port: 443
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort
Then the health check passes, but I get a 464 error in Chrome (screenshot omitted).
This is what the AWS documentation says about this error, but it doesn't help to clarify why I'm getting it (screenshot of the documentation omitted).
So the question is: how do I create an Application Load Balancer for my Argo CD using the AWS Load Balancer Controller that actually works? According to the documentation, it should work in both cases.

Ingress Controller produces 502 Bad Gateway on every other request

I have a Kubernetes ingress controller terminating my SSL, with an ingress resource handling two routes: the first for my frontend SPA app, and the second for my backend API. Currently, when I hit each frontend and backend service directly, they perform flawlessly, but when I call them through the ingress controller, both the frontend and backend services alternate between producing the correct result and a 502 Bad Gateway.
To me it smells like my ingress resource is having some sort of path conflict that I'm not sure how to debug.
Reddit suggested that it could be a label/selector mismatch between my services and deployments, which I believe I checked thoroughly. They also mentioned an "api layer deployment and a worker layer deployment [that] both share a common app label and your PDB selects that app label with a 50% availability for example", which I haven't run down because I don't quite understand it.
I also realize SSL could play a role in gateway issues; however, my certificates appear to be working when I hit the https:// port of the ingress controller.
frontend-deploy:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: my-performance-front
      tier: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: my-performance-front
        tier: frontend
    spec:
      containers:
        - name: my-performance-frontend
          image: "<my current image and location>"
          lifecycle:
            preStop:
              exec:
                command: ["/usr/sbin/nginx","-s","quit"]
      imagePullSecrets:
        - name: regcred
frontend-svc:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: ingress-nginx
spec:
  selector:
    app: my-performance-front
    tier: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
backend-deploy:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: my-performance-back
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: my-performance-back
        tier: backend
    spec:
      containers:
        - name: my-performance-backend
          image: "<my current image and location>"
          lifecycle:
            preStop:
              exec:
                command: ["/usr/sbin/nginx","-s","quit"]
      imagePullSecrets:
        - name: regcred
backend-svc:
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: ingress-nginx
spec:
  selector:
    app: my-performance-back
    tier: backend
  ports:
    - protocol: TCP
      name: "http"
      port: 80
      targetPort: 8080
  type: LoadBalancer
ingress-rules:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-rules
  namespace: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
    # nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
    - http:
        paths:
          - path: /(api/v0(?:/|$).*)
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
Any ideas, critiques, or experiences are welcome and appreciated!

Kubectl create service type Loadbalancer (on GCP but add flag Global?)

I've created a load balancer for my microservices with the template below. Everything works, but I wanted to somehow add the global flag (when you create an LB through the GCP console you have the option to add it) to meet the app's functionality expectations. Does anyone know what other flag I might need to add?
apiVersion: v1
kind: Service
metadata:
  name: my-app-jmprlb
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: my-app
    env: dev
spec:
  type: LoadBalancer
  selector:
    app: my-app
    env: dev
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  loadBalancerIP: 10.10.10.10
  externalTrafficPolicy: Local
EDIT:
I found some nice annotations in the Google docs that seem to do the trick: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balance-ingress
# web-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hostname
  namespace: default
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  ports:
    - name: host1
      port: 80
      protocol: TCP
      targetPort: 9376
  selector:
    app: hostname
  type: NodePort
and
# internal-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ilb-demo-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
spec:
  backend:
    serviceName: hostname
    servicePort: 80
If you want to make it a global LoadBalancer that is accessible from outside your cluster with a public IP, you can use:
apiVersion: v1
kind: Service
metadata:
  name: my-app-jmprlb
  labels:
    app: my-app
    env: dev
spec:
  type: LoadBalancer
  selector:
    app: my-app
    env: dev
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
Note that the annotation cloud.google.com/load-balancer-type: "Internal" means that your service is only accessible within subnets that are peered with the subnet where your cluster resides.

Traefik ingress is not working behind aws load balancer

After I created a Traefik DaemonSet, I created a service of type LoadBalancer on port 80, which is the Traefik proxy port, and the node got automatically registered to it. If I hit the ELB I get the proxy 404, which is OK because no service is registered yet.
I then created a NodePort service for the web UI, targeting port 8080 inside the pod and 80 on the cluster IP. I can curl the Traefik UI from inside the cluster and it returns the Traefik UI.
I then created an ingress so that when I hit elb/ui it takes me to the backend web-ui service of Traefik, and it fails. I also have the right annotations in my ingress, but the ELB does not route the path to the Traefik UI in the backend, which is running properly.
What am I doing wrong here? I can post all my YAML files if required.
UPD
My YAML files:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik
  labels:
    app: traefik
spec:
  template:
    metadata:
      labels:
        app: traefik
    spec:
      containers:
        - image: traefik
          name: traefik
          args:
            - --api
            - --kubernetes
            - --logLevel=INFO
            - --web
          ports:
            - containerPort: 8080
              name: traefikweb
            - containerPort: 80
              name: traefikproxy
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-proxy
spec:
  selector:
    app: traefik
  ports:
    - port: 80
      targetPort: traefikproxy
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
spec:
  selector:
    app: traefik
  ports:
    - name: http
      targetPort: 8080
      nodePort: 30001
      port: 80
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: traefik-ing
  annotations:
    kubernetes.io/ingress.class: traefik
    #traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip:/ui
spec:
  rules:
    - http:
        paths:
          - path: /ui
            backend:
              serviceName: traefik-web-ui
              servicePort: 80
If you are on private subnets, use:
apiVersion: v1
kind: Service
metadata:
  name: traefik-proxy
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  selector:
    app: traefik
  ports:
    - port: 80
      targetPort: traefikproxy
  type: LoadBalancer
"I then created an ingress so that when I hit elb/ui it gets me to the backend web-ui service of Traefik and it fails."
How did it fail? Did you get error 404, error 500, or something else?
Also, for the traefik-web-ui service, you don't need to set type: NodePort; it should be type: ClusterIP.
When you configure backends for your Ingress, the only requirement is that they are reachable from inside the cluster, so the ClusterIP type will be more than enough for that.
Your service should look like this:
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
spec:
  selector:
    app: traefik
  ports:
    - name: http
      targetPort: 8080
      port: 80
The PathPrefixStrip option can be useful: without it, requests will be forwarded to the UI with the /ui prefix, which you definitely don't want.
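That option maps to the rule-type annotation you already have commented out in your ingress. A minimal sketch of the same ingress with it enabled, assuming the Traefik 1.x annotation support shown in your DaemonSet:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: traefik-ing
  annotations:
    kubernetes.io/ingress.class: traefik
    # strip the /ui prefix before forwarding to the backend
    traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip
spec:
  rules:
    - http:
        paths:
          - path: /ui
            backend:
              serviceName: traefik-web-ui
              servicePort: 80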
All other configs look good.