Re-deploying AWS Ingress keeps binning my AWS ALB

We're using the AWS ALB Ingress controller to manage entry to our K8s cluster.
Every time we add a new ingress rule it seems to bin our ALB and re-provision it, which in turn takes everything down. Are we doing something wrong?
Thanks,
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "dv1-ingress"
  namespace: "dv1"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: dv1-ingress
spec:
  rules:
    - http:
        paths:
          - path: /derivative-cost-new/*
            backend:
              serviceName: "derivative-cost-new-published-uk-service"
              servicePort: 80
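To illustrate, the kind of change that triggers it is just one more entry under the existing paths list (this second service name is made up):

# Added under spec.rules[0].http.paths, alongside the existing entry:
- path: /derivative-cost-legacy/*
  backend:
    serviceName: "derivative-cost-legacy-published-uk-service"
    servicePort: 80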

Related

How to use Ingress Nginx Controller to route traffic to private pods Internally

Problem: I am currently using ingress-nginx in my EKS cluster to route traffic to services that need public access.
My use case: I have services I want to deploy in the same cluster but don't want them to have public access. I only want the pods to communicate with all other services within the cluster. Those pods are meant to be private because they're backend services and only need pod-to-pod communication. How do I modify my ingress resource for this purpose?
Cluster Architecture: All services are in the private subnets of the cluster while the load-balancer is in the public subnets
Additional note: I am using external-dns to dynamically create the subdomains for the hosted zones. The hosted zone is public
Thanks
Below are my service.yml and ingress.yml for public services. I want to modify these files for private services
service.yml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: myapp
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.dev.com
spec:
  ports:
    - port: 80
      targetPort: 3000
  selector:
    app: myapp
ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "nginx"
  labels:
    app: myapp
spec:
  tls:
    - hosts:
        - myapp.dev.com
      secretName: myapp-staging
  rules:
    - host: myapp.dev.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: 'myapp'
                port:
                  number: 80
From what you have, the Ingress should already work, and your Services are already private: a Service with no type defaults to ClusterIP, which is reachable only from inside the cluster, so only the Ingress itself is exposed. You can update the ConfigMap to use the PROXY protocol so that proxy information is passed to the Ingress Controller:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-protocol: "True"
  real-ip-header: "proxy_protocol"
  set-real-ip-from: "0.0.0.0/0"
And then: kubectl apply -f common/nginx-config.yaml
Now you can deploy any app that you want to keep private under the name specified (for example, the myapp Service in the YAML you provided).
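To verify pod-to-pod access without any Ingress, you can curl the Service's cluster-internal DNS name from a throwaway pod. A minimal sketch; the test pod name and image are arbitrary:

# A ClusterIP Service is reachable from any pod in the cluster via
# <service>.<namespace>.svc.cluster.local:
kubectl run curl-test -n myapp --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl -s http://myapp.myapp.svc.cluster.local:80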
If you are new to Kubernetes networking, this article or the official Kubernetes documentation may be useful to you.
Here you can find other ELB annotations that may be useful for you
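Note that PROXY protocol must also be enabled on the load balancer itself; with the AWS cloud provider that is done with an annotation on the ingress controller's own LoadBalancer Service. A sketch; the Service name, namespace, and selector depend on how your controller was installed:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  annotations:
    # Ask AWS to speak PROXY protocol to the backends on all ports:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443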

Creating a Kubernetes Ingress resource for GCP/GKE by example

I'm trying to make sense of an example Kubernetes YAML config file that I am trying to customize:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-web-server
  namespace: myapp
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/security-groups: my-sec-group
    app.kubernetes.io/name: my-alb-ingress-web-server
    app.kubernetes.io/component: my-alb-ingress
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: my-web-server
              servicePort: 8080
The documentation for this example claims it's for creating an "Ingress", i.e. a K8s object that manages inbound traffic to a service or pod.
This particular Ingress resource appears to use an AWS ALB (Application Load Balancer), and I need to adapt it to create an Ingress resource in GCP/GKE.
I'm Googling the Kubernetes documentation high and low, and although I found the kubernetes.io/ingress.class docs, I don't see where "alb" is defined as a valid value for this property. I'm asking because I obviously need to find the correct kubernetes.io/ingress.class value for GCP/GKE, and I assume that if I can find the K8s/AWS Ingress documentation, I should be able to find the K8s/GCP Ingress documentation.
I'm assuming K8s has built-in clients for AWS, GCP, Azure, etc. wired into kubectl for connecting to these clouds/providers?
So I ask: how does the above configuration tell K8s that we are creating an AWS Ingress (as opposed to an Azure Ingress, GCP Ingress, etc.) and where is the documentation for this?
The documentation you're looking for is:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl
An example of an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-front-api
  namespace: example
  annotations:
    networking.gke.io/managed-certificates: "front.example.com, api.example.com"
    kubernetes.io/ingress.global-static-ip-name: "prod-ingress-static-ip"
spec:
  rules:
    - host: front.example.com
      http:
        paths:
          - backend:
              service:
                name: front
                port:
                  number: 80
            path: /*
            pathType: ImplementationSpecific
    - host: api.example.com
      http:
        paths:
          - backend:
              service:
                name: api
                port:
                  number: 80
            path: /*
            pathType: ImplementationSpecific
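The networking.gke.io/managed-certificates annotation above refers to ManagedCertificate resources by name. A minimal sketch of one of them, assuming the resource is simply named after its hostname, as in the annotation:

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  # Resource name; this is what the Ingress annotation points at:
  name: front.example.com
  namespace: example
spec:
  domains:
    - front.example.com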

argoCD - external access with ingress not working

I am new to Kubernetes, so apologies in advance for any silly questions and mistakes. I am trying to set up external access through ingress for ArgoCD. My setup is an AWS EKS cluster. I have set up the ALB following the guide here. I have also set up the external-dns service as described here. I followed the verification steps in that guide as well and was able to confirm that the DNS record got created and that I could access the foo service.
For ArgoCD I installed the manifests via
kubectl create namespace argocd
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml -n argocd
The ArgoCD docs mention adding a service to split up HTTP and gRPC, and an ingress setup, here. I followed that and installed those as well:
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
    external-dns.alpha.kubernetes.io/hostname: argocd.<mydomain.com>
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
    - name: "443"
      port: 443
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: ClusterIP

apiVersion: networking.k8s.io/v1 # Use extensions/v1beta1 for Kubernetes 1.18 and older
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
  name: argocd
  namespace: argocd
spec:
  rules:
    - host: argocd.<mydomain.com>
      http:
        paths:
          - backend:
              service:
                name: argogrpc
                port:
                  number: 443
            pathType: ImplementationSpecific
          - backend:
              service:
                name: argocd-server
                port:
                  number: 443
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - argocd.<mydomain.com>
The definitions apply successfully, but I don't see the DNS record created, nor any external IP listed. Am I missing any steps, or is there a misconfiguration here? Thanks in advance!
Service type needs to be NodePort.
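That is, with the ALB controller's default target-type (instance), every Service referenced by the Ingress must be a NodePort Service. The only change to the argogrpc Service above would be the following sketch; with alb.ingress.kubernetes.io/target-type: ip, ClusterIP would also work:

spec:
  ports:
    - name: "443"
      port: 443
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort   # changed from ClusterIP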

Kubernetes is not creating internet-facing AWS Classic Load Balancer

As far as I understand the ingress controller documentation, simply creating a Service and an Ingress without special annotations should create internet-facing load balancers; weirdly, it is creating internal load balancers. So I added the annotation service.beta.kubernetes.io/aws-load-balancer-internal: "false", which is not working either. By the way, I am using NGINX as the ingress controller, currently at version 0.8.21 in the test cluster. Probably I should update it some time.
Here's my simple spec-file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: nginx
    service.beta.kubernetes.io/aws-load-balancer-internal: "false"
  labels:
    external: "true"
    comp: ingress-nginx
    env: develop
  name: develop-api-external-ing
  namespace: develop
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - backend:
              serviceName: api-external
              servicePort: 3000
            path: /
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: api
    env: develop
  name: api-external
  namespace: develop
spec:
  ports:
    - name: http
      port: 3000
      protocol: TCP
      targetPort: 3000
  selector:
    app: api
    env: develop
  sessionAffinity: None
  type: ClusterIP
You are not wrong, a Service and an Ingress should create a load balancer... but you should look at the documentation a bit more...
An Ingress needs a NodePort Service; yours is ClusterIP. So even if it created something, it wouldn't work.
In your Ingress you are using kubernetes.io/ingress.class: nginx, meaning you want to override the default handling of the Ingress and force it to register with ingress-nginx.
So to make it work, change the type of your Service and remove the ingress-class annotation.
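A sketch of the suggested Service change. Note also that service.beta.kubernetes.io/aws-load-balancer-internal is a Service-level annotation meant for the load balancer's own Service, so placing it on an Ingress has no effect:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: api
    env: develop
  name: api-external
  namespace: develop
spec:
  ports:
    - name: http
      port: 3000
      protocol: TCP
      targetPort: 3000
  selector:
    app: api
    env: develop
  sessionAffinity: None
  type: NodePort   # changed from ClusterIP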
You can set up an NLB (Network Load Balancer) and provide its URL in the host values of the ingress rules. You don't need to expose the underlying backend service either as NodePort or as another load balancer.

Kubernetes ALB Ingress doesn't route traffic to any rules except /*

I deployed a "monolithic" app into Kubernetes on AWS. This app works fine through the ALB.
Next I want to deploy a small service into the same cluster and map traffic to it through the same ALB ingress.
Here is what the Ingress manifest looks like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: scala-backend-ingress
  namespace: prod
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: akka-backend
spec:
  rules:
    - http:
        paths:
          - path: /proxy/service/*
            backend:
              serviceName: proxy-service-np
              servicePort: 80
          - path: /*
            backend:
              serviceName: akka-main-np
              servicePort: 80
Unfortunately when I call:
GET www.aliace.example.com/proxy/service/traffic/data
I receive back a 502 Bad Gateway response with the header Server: awselb/2.0.
All traffic to /* is handled properly.
The problem was not in Kubernetes.
The application in the container was bound to localhost instead of 0.0.0.0.
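A quick way to spot this from inside the cluster, sketched below; it assumes netstat is available in the image, and the pod name placeholder is hypothetical:

# A listener bound to 127.0.0.1 instead of 0.0.0.0 is unreachable via the
# pod's network interface, which the ALB targets and health checks use:
kubectl exec -n prod <proxy-service-pod> -- netstat -tln
# Look for 127.0.0.1:80 (bad) vs 0.0.0.0:80 (good) in the LISTEN lines.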
Can you try as below:
- path: /proxy/service/*/*
  backend:
    serviceName: proxy-service-np
    servicePort: 80