Error in exposing multiple ports with ALB Ingress on EKS - amazon-web-services

I have a Triton server on EKS listening on 3 ports: 8000 for HTTP requests, 8001 for gRPC and 8002 for Prometheus metrics. I have created a Triton deployment on EKS which is exposed through a NodePort service, and I am also using an ALB Ingress, which creates an Application Load Balancer to balance the load of the Triton servers on these ports.
But the traffic is not flowing correctly: all three ports return the same output, when it should be different. So do I now have to create 3 Application Load Balancers for the 3 ports, or is it possible to manage all ports with a single Application Load Balancer?
The YAML file for the ALB Ingress looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: triton
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":8000}, {"HTTP":8001}, {"HTTP":8002}]'
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: triton
            port:
              number: 8000
  - http:
      paths:
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: triton
            port:
              number: 8001
  - http:
      paths:
      - path: /metrics
        pathType: Prefix
        backend:
          service:
            name: triton
            port:
              number: 8002
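For context, the NodePort Service behind this Ingress might look roughly like the sketch below; the selector label and port names are assumptions, only the three port numbers come from the question. With target-type: instance, the ALB registers the node ports that Kubernetes assigns for these entries.
apiVersion: v1
kind: Service
metadata:
  name: triton
spec:
  type: NodePort
  selector:
    app: triton            # assumed pod label
  ports:
  - name: http             # Triton HTTP endpoint
    port: 8000
    targetPort: 8000
  - name: grpc             # Triton gRPC endpoint
    port: 8001
    targetPort: 8001
  - name: metrics          # Prometheus metrics endpoint
    port: 8002
    targetPort: 8002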

Related

Migrate Classic Load Balancer to Application Load Balancer in EKS

I am trying to migrate my CLB to an ALB. I know there is a direct option in the AWS load balancer console to do the migration, but I don't want to use that. I have a service file that deploys a classic load balancer on EKS using kubectl.
apiVersion: v1
kind: Service
metadata:
  annotations: {service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600',
    service.beta.kubernetes.io/aws-load-balancer-type: classic}
  name: helloworld
spec:
  ports:
  - {name: https, port: 8443, protocol: TCP, targetPort: 8080}
  - {name: http, port: 8080, protocol: TCP, targetPort: 8080}
  selector: {app: helloworld}
  type: LoadBalancer
I want to convert it to an ALB. I tried the following approach, but it did not work.
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  ports:
  - {name: https, port: 8443, protocol: TCP, targetPort: 8080}
  - {name: http, port: 8080, protocol: TCP, targetPort: 8080}
  selector: {app: helloworld}
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloworld
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/tags: Environment=dev,Team=app
spec:
  rules:
  - host: "*.amazonaws.com"
    http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: helloworld
            port:
              number: 8080
It has not created any load balancer. When I run kubectl get ingress, it shows the Ingress, but it has no address. What am I doing wrong here?
Your Ingress file seems to be correct.
For an ALB to be created automatically from an Ingress, you need to install the AWS Load Balancer Controller, which manages AWS Elastic Load Balancers for a Kubernetes cluster.
You can follow the installation guide and then verify that it is installed correctly:
kubectl get deployment -n kube-system aws-load-balancer-controller
Then apply your manifests:
kubectl apply -f service-ingress.yaml
and watch the controller logs to verify that your ALB, target groups, etc. are created:
kubectl logs deploy/aws-load-balancer-controller -n kube-system --follow
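If the controller is not installed yet, a minimal Helm install sketch looks roughly like this; the cluster name is a placeholder, and it assumes the IAM policy plus the aws-load-balancer-controller service account were already created (e.g. via IRSA) as described in the AWS docs:
helm repo add eks https://aws.github.io/eks-charts
helm repo update
# <cluster-name> is a placeholder; the service account is assumed to exist already
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<cluster-name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller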

How to access ArgoCD server pod running on EKS?

I am creating an EKS cluster using Terraform and then deploying ArgoCD pods on it via Helm charts. Now I want to access the ArgoCD server UI in my browser, but I am unable to. My EKS cluster is in a private subnet and I access it using a VPN.
If anyone knows the process to access ArgoCD in the browser, please reply.
Thanks
You could create an Ingress resource whose backend is your argocd-server service; it will create a load balancer and you can access the UI using its hostname. You need the AWS Load Balancer (ALB ingress) controller installed to reconcile this Ingress and provision an ALB. Check these out:
aws-alb-controller
argocd-k8s-ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: <certificate arn>
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
  name: ingress-name
  namespace: argocd
spec:
  defaultBackend:
    service:
      name: argocd-server
      port:
        number: 80
  rules:
  - host: hostname
    http:
      paths:
      - backend:
          service:
            name: argocd-server
            port:
              number: 80
        path: /*
        pathType: ImplementationSpecific
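Once the controller has reconciled the Ingress, the provisioned ALB's DNS name shows up in the ADDRESS column and can be used directly, or pointed to from your own DNS record for the hostname above (which is a placeholder):
# The ADDRESS column contains the ALB DNS name once it is ready
kubectl get ingress ingress-name -n argocd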

Creating a Kubernetes Ingress resource for GCP/GKE by example

I'm trying to make sense of an example Kubernetes YAML config file that I am trying to customize:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-web-server
  namespace: myapp
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/security-groups: my-sec-group
    app.kubernetes.io/name: my-alb-ingress-web-server
    app.kubernetes.io/component: my-alb-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-web-server
          servicePort: 8080
The documentation for this example claims it's for creating an "Ingress", i.e. a K8s object that manages inbound traffic to a service or pod.
This particular Ingress resource appears to use an AWS ALB (Application Load Balancer), and I need to adapt it to create an Ingress resource in GCP/GKE.
I've been Googling the Kubernetes documentation high and low, and although I found the kubernetes.io/ingress.class docs I don't see where they define "alb" as a valid value for this property. I'm asking because I obviously need to find the correct kubernetes.io/ingress.class value for GCP/GKE, and I assume that if I can find the K8s/AWS Ingress documentation I should be able to find the K8s/GCP Ingress documentation.
I'm assuming K8s has AWS, GCP, Azure, etc. clients built into kubectl for connecting to these clouds/providers?
So I ask: how does the above configuration tell K8s that we are creating an AWS Ingress (as opposed to an Azure Ingress, GCP Ingress, etc.), and where is the documentation for this?
The documentation you're looking for is:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl
An example of an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-front-api
  namespace: example
  annotations:
    networking.gke.io/managed-certificates: "front.example.com, api.example.com"
    kubernetes.io/ingress.global-static-ip-name: "prod-ingress-static-ip"
spec:
  rules:
  - host: front.example.com
    http:
      paths:
      - backend:
          service:
            name: front
            port:
              number: 80
        path: /*
        pathType: ImplementationSpecific
  - host: api.example.com
    http:
      paths:
      - backend:
          service:
            name: api
            port:
              number: 80
        path: /*
        pathType: ImplementationSpecific
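The networking.gke.io/managed-certificates annotation refers to ManagedCertificate resources by name; assuming the resources are simply named after their domains (as the annotation above suggests), a minimal sketch of one would be:
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: front.example.com     # must match one of the names in the annotation
  namespace: example
spec:
  domains:
  - front.example.com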

Kubernetes ALB Ingress doesn't route traffic to any rules except /*

I deployed a "monolithic" app into Kubernetes on AWS. This app works fine through the ALB.
Next I want to deploy a small service to the same cluster and map traffic to it through the same ALB Ingress.
Here is what the Ingress manifest looks like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: scala-backend-ingress
  namespace: prod
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: akka-backend
spec:
  rules:
  - http:
      paths:
      - path: /proxy/service/*
        backend:
          serviceName: proxy-service-np
          servicePort: 80
      - path: /*
        backend:
          serviceName: akka-main-np
          servicePort: 80
Unfortunately, when I call
GET www.aliace.example.com/proxy/service/traffic/data
I receive a 502 Bad Gateway response with the header Server → awselb/2.0.
All traffic to /* is handled properly.
The problem was not in Kubernetes.
The application in the container was bound to localhost instead of 0.0.0.0.
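A quick way to confirm this kind of binding issue (the pod name and container port below are placeholders) is to compare a request to 127.0.0.1 with one to the pod IP from inside the container; only the former succeeds when the process listens on localhost:
# Find the pod IP
kubectl get pod <proxy-service-pod> -n prod -o wide
# Works even if the app only listens on localhost
kubectl exec -n prod <proxy-service-pod> -- curl -s http://127.0.0.1:<container-port>/
# Fails unless the app binds to 0.0.0.0 (the ALB target sees the same failure)
kubectl exec -n prod <proxy-service-pod> -- curl -s http://<pod-ip>:<container-port>/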
Can you try the following?
- path: /proxy/service/*/*
  backend:
    serviceName: proxy-service-np
    servicePort: 80

Exposing the same service with same URL but two different ports with traefik?

Recently I have been trying to set up a CI/CD flow with Kubernetes v1.7.3 and Jenkins v2.73.2 on AWS in China (the GFW blocks Docker Hub).
Right now I can expose services with Traefik, but it seems I cannot expose the same service at the same URL on two different ports.
Ideally I would want to expose http://jenkins.mydomain.com as the Jenkins UI on port 80, as well as the Jenkins slave (jenkins-discovery) endpoint on port 50000.
For example, I'd want this to work:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
  namespace: default
spec:
  rules:
  - host: jenkins.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins-svc
          servicePort: 80
  - host: jenkins.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins-svc
          servicePort: 50000
and my jenkins-svc is defined as
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
  labels:
    run: jenkins
spec:
  selector:
    run: jenkins
  ports:
  - port: 80
    targetPort: 8080
    name: http
  - port: 50000
    targetPort: 50000
    name: slave
In reality the latter rule overwrites the former rule.
Furthermore, there are two plugins I have tried: kubernetes-cloud and kubernetes.
With the former I cannot configure the Jenkins tunnel URL, so the slave fails to connect to the master; with the latter I cannot pull from a private Docker registry such as AWS ECR (there is no place to provide a credential), and therefore I am not able to create the slave (imagePullError).
Lastly, I am really just trying to get Jenkins to work (create slaves with my custom image, build with the slaves and delete them after jobs finish); any other solution is welcome.
If you want your Jenkins to be reachable from outside your cluster then you need to change your Service configuration.
The default Service type is ClusterIP:
Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
You want the type to be NodePort:
Exposes the service on each Node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You'll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So your service should look like:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
  labels:
    run: jenkins
spec:
  selector:
    run: jenkins
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    name: http
  - port: 50000
    targetPort: 50000
    name: slave
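After applying this, the node ports Kubernetes allocated for the two entries (chosen from the 30000-32767 range unless set explicitly) can be read back from the Service:
# The PORT(S) column shows each entry as <port>:<nodePort>/TCP
kubectl get svc jenkins-svc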