Paths not working in ALB ingress in EKS cluster

I am using AWS as my cloud provider. I want the ALB to serve a single host, a domain that I own, and open different services based on the paths and service names I provide in the Ingress file, but it does not open the services at the expected pages.
The services should open at http://mydomain/attacker1 and http://mydomain/attacker2; attacker1 and attacker2 are two different services, each with its own Service and Deployment file.
Instead, I get a "page cannot be found" error when I hit those pages.
I have deployed and configured the ALB ingress controller using the official docs: https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
I have created hosted zones and records in Route 53 to propagate my domain.
I am adding my Service file, Deployment file, and Ingress file for reference.
Please help me with any insights or anything I have missed in the code.
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: attacker-ingress
  namespace: development
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
spec:
  # ingressClassName: alb
  rules:
    - host: www.mydomain.com
      http:
        paths:
          - path: /attacker1
            pathType: Prefix
            backend:
              service:
                name: attacker1
                port:
                  number: 30002
          - path: /attacker2
            pathType: Prefix
            backend:
              service:
                name: attacker2
                port:
                  number: 30003
Deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "attacker"
  namespace: "development"
spec:
  selector:
    matchLabels:
      app: "attacker1"
  replicas: 1
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: "attacker"
    spec:
      containers:
        - name: "attacker"
          image: "qabcr/abc"
          imagePullPolicy: "Always"
          env:
            - name: "NODE_ENV"
              value: "development"
          ports:
            - containerPort: 30002
      imagePullSecrets:
        - name: "secrets-development"
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "attack1"
  namespace: "development"
  labels:
    app: "attacker"
spec:
  type: NodePort
  ports:
    - nodePort: 31400
      port: 30002
      targetPort: 30002
  selector:
    app: "attacker"

Related

Service can't be registered in Target groups

I'm new to using Kubernetes and AWS, so there are a lot of concepts I may not understand. I hope you can help me with this problem I am having.
I have three services, frontend, backend, and auth, each with its corresponding NodePort, and an Ingress that maps the one host to each service. Everything is running on EKS, and for the Ingress deployment I am using the AWS ingress controller. Once everything is deployed, the node group is registered in the target groups; the frontend and auth services work correctly, but backend stays in an unhealthy state. I thought it could be a port problem, but if you look at auth and backend they have almost the same Deployment defined, and both are APIs created with .NET Core. One thing to note is that I can do kubectl port-forward <backend-pod> 80:80 and it runs without problems. And when I run the kubectl describe ingress command I get this:
Name:             ingress
Labels:           app.kubernetes.io/managed-by=Helm
Namespace:        default
Address:          xxxxxxxxxxxxxxxxxxxxxxxxxxx.xxxxx.elb.amazonaws.com
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host             Path  Backends
  ----             ----  --------
  domain.com
                   /   front-service:default-port (10.0.1.183:80,10.0.2.98:80)
  back.domain.com
                   /   backend-service:default-port (<none>)
  auth.domain.com
                   /   auth-service:default-port (10.0.1.30:80,10.0.1.33:80)
Annotations:       alb.ingress.kubernetes.io/listen-ports: [{"HTTPS":443}, {"HTTP":80}]
                   alb.ingress.kubernetes.io/scheme: internet-facing
                   alb.ingress.kubernetes.io/ssl-redirect: 443
                   kubernetes.io/ingress.class: alb
Events:
  Type    Reason                  Age                   From     Message
  ----    ------                  ---                   ----     -------
  Normal  SuccessfullyReconciled  8m20s (x15 over 41h)  ingress  Successfully reconciled
Frontend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front
  labels:
    name: front
spec:
  replicas: 2
  selector:
    matchLabels:
      name: front
  template:
    metadata:
      labels:
        name: front
    spec:
      containers:
        - name: frontend
          image: {{ .Values.image }}
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: wrfront-{{ .Values.namespace }}-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      name: default-port
      protocol: TCP
  selector:
    name: front
---
Auth
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-wrauth-keys
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
---
apiVersion: "v1"
kind: "ConfigMap"
metadata:
  name: "auth-config-ocpm"
  labels:
    app: "auth"
data:
  ASPNETCORE_URL: "http://+:80"
  ASPNETCORE_ENVIRONMENT: "Development"
  ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS: "true"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "auth"
  labels:
    app: "auth"
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: "auth"
  template:
    metadata:
      labels:
        app: "auth"
    spec:
      volumes:
        - name: auth-keys-storage
          persistentVolumeClaim:
            claimName: pvc-wrauth-keys
      containers:
        - name: "api-auth"
          image: {{ .Values.image }}
          imagePullPolicy: Always
          ports:
            - containerPort: 80
          volumeMounts:
            - name: auth-keys-storage
              mountPath: "/app/auth-keys"
          env:
            - name: "ASPNETCORE_URL"
              valueFrom:
                configMapKeyRef:
                  key: "ASPNETCORE_URL"
                  name: "auth-config-ocpm"
            - name: "ASPNETCORE_ENVIRONMENT"
              valueFrom:
                configMapKeyRef:
                  key: "ASPNETCORE_ENVIRONMENT"
                  name: "auth-config-ocpm"
            - name: "ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS"
              valueFrom:
                configMapKeyRef:
                  key: "ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS"
                  name: "auth-config-ocpm"
---
apiVersion: v1
kind: Service
metadata:
  name: auth-service
spec:
  type: NodePort
  selector:
    app: auth
  ports:
    - name: default-port
      protocol: TCP
      port: 80
      targetPort: 80
Backend (Service with problem)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: {{ .Values.image }}
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: NodePort
  selector:
    name: backend
  ports:
    - name: default-port
      protocol: TCP
      port: 80
      targetPort: 80
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
    # SSL Settings
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: {{ .Values.certificate }}
spec:
  rules:
    - host: {{ .Values.host }}
      http:
        paths:
          - path: /
            backend:
              service:
                name: front-service
                port:
                  name: default-port
            pathType: Prefix
    - host: back.{{ .Values.host }}
      http:
        paths:
          - path: /
            backend:
              service:
                name: backend-service
                port:
                  name: default-port
            pathType: Prefix
    - host: auth.{{ .Values.host }}
      http:
        paths:
          - path: /
            backend:
              service:
                name: auth-service
                port:
                  name: default-port
            pathType: Prefix
I've tried deploying other services and they work correctly. I've also tried running only the backend, or only another service, but the same thing always happens, and always with the backend.
What could be happening? Is it a configuration problem, an error in the Ingress or the Deployment, or just the backend service itself?
I would be very grateful for any help.
domain.com
                 /   front-service:default-port (10.0.1.183:80,10.0.2.98:80)
back.domain.com
                 /   backend-service:default-port (<none>)
auth.domain.com
                 /   auth-service:default-port (10.0.1.30:80,10.0.1.33:80)
This is saying that your backend service is not registered to the Ingress: it has no endpoints.
One thing to remember is that the Ingress registers Services by the pods' cluster IPs, like the "10.0.1.30:80" in your Ingress output, not by NodePort. And according to the docs, I don't know why you can have multiple NodePort Services with the same port. But when you do a port-forward, you actually open that port on your instances (I assume you have two), and then the ALB health check hits that port and returns healthy.
But I think your issue is that your Ingress cannot locate your backend service.
My suggestions are:
Try with only the backend-service, with its port changed, and maybe without the auth and frontend Services. The default NodePort range is 30000-32767.
Go inside that pod, or create a new pod, and make a request to that service using its URL to check what it returns (see the sketch below). By default, the ALB only accepts status 200 from the homepage (its default health-check path).
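A quick way to check both points (a minimal sketch; it assumes the backend-service name and the default namespace shown in the describe output, so adjust as needed):
# List the endpoints behind the Service; an empty list means its selector
# matches no pods, which is what the "(<none>)" in the Ingress output suggests.
kubectl get endpoints backend-service

# Start a throwaway curl pod and hit the Service by its cluster DNS name
# to see which status code it actually returns.
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sv http://backend-service.default.svc.cluster.local/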

Ingress Controller produces 502 Bad Gateway on every other request

I have a Kubernetes ingress controller terminating my SSL, with an Ingress resource handling two routes: the first for my frontend SPA app, and the second for the backend API. Currently, when I hit the frontend and backend services directly, they perform flawlessly, but when I call the ingress controller, both the frontend and backend services alternate between producing the correct result and a 502 Bad Gateway.
To me it smells like my Ingress resource has some sort of path conflict that I'm not sure how to debug.
Reddit suggested that it could be a label and selector mismatch between my Services and Deployments, which I believe I checked thoroughly. They also mentioned "an api layer deployment and a worker layer deployment [that] both share a common app label and your PDB selects that app label with a 50% availability, for example", which I haven't run down because I don't quite understand it.
I also realize SSL could play a role in gateway issues; however, my certificates appear to be working when I hit the https:// port of the ingress controller.
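One quick way to double-check the label/selector angle is to list each Service's endpoints (a minimal sketch, assuming the Service names and the ingress-nginx namespace used in the manifests below):
# A Service whose selector matches no pods shows empty ENDPOINTS,
# which an ingress controller typically surfaces as 502s.
kubectl -n ingress-nginx get endpoints frontend backend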
frontend-deploy:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: my-performance-front
      tier: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: my-performance-front
        tier: frontend
    spec:
      containers:
        - name: my-performance-frontend
          image: "<my current image and location>"
          lifecycle:
            preStop:
              exec:
                command: ["/usr/sbin/nginx","-s","quit"]
      imagePullSecrets:
        - name: regcred
frontend-svc
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: ingress-nginx
spec:
  selector:
    app: my-performance-front
    tier: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
backend-deploy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: my-performance-back
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: my-performance-back
        tier: backend
    spec:
      containers:
        - name: my-performance-backend
          image: "<my current image and location>"
          lifecycle:
            preStop:
              exec:
                command: ["/usr/sbin/nginx","-s","quit"]
      imagePullSecrets:
        - name: regcred
backend-svc
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: ingress-nginx
spec:
  selector:
    app: my-performance-back
    tier: backend
  ports:
    - protocol: TCP
      name: "http"
      port: 80
      targetPort: 8080
  type: LoadBalancer
ingress-rules
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-rules
  namespace: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
    # nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
    - http:
        paths:
          - path: /(api/v0(?:/|$).*)
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
Any ideas, critiques, or experiences are welcomed and appreciated!!!

AWS EKS Fargate Ingress Has No Address

Updated
So, I followed the AWS docs on how to set up an EKS cluster with Fargate using the eksctl tool. That all went smoothly, but when I get to the part where I deploy my actual app, I get no endpoints and the Ingress has no address associated with it. As seen here:
NAME              HOSTS   ADDRESS   PORTS   AGE
testapp-ingress   *                 80      129m
So I can't hit it externally. But the test app (the 2048 game) had an address from the ELB associated with its Ingress. I thought it might be the subnet tags, as suggested here; my subnets weren't tagged the right way, so I tagged them the way suggested in that article. Still no luck.
This is the initial article I followed to get set up. I've performed all the steps and only hit a wall with the ALB: https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-next-steps
This is the ALB article I've followed: https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
I followed the steps to deploy the sample 2048 app and that works just fine. I've made my configs very similar, so it should work. I've followed all of the steps. Here are my old configs; the new config is below:
deployment yaml>>>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "testapp-deployment"
  namespace: "testapp-qa"
spec:
  selector:
    matchLabels:
      app: "testapp"
  replicas: 5
  template:
    metadata:
      labels:
        app: "testapp"
    spec:
      containers:
        - image: xxxxxxxxxxxxxxxxxxxxxxxxtestapp:latest
          imagePullPolicy: Always
          name: "testapp"
          ports:
            - containerPort: 80
---
service yaml>>>
apiVersion: v1
kind: Service
metadata:
  name: "testapp-service"
  namespace: "testapp-qa"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  type: NodePort
  selector:
    app: "testapp"
---
ingress yaml >>>
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "testapp-ingress"
  namespace: "testapp-qa"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: testapp-ingress
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: "testapp-service"
              servicePort: 80
---
namespace yaml>>>
apiVersion: v1
kind: Namespace
metadata:
  name: "testapp-qa"
Here are some of the logs from the ingress controller>>
E0316 22:32:39.776535 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxxx" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp-ingress"}
E0316 22:36:28.222391 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxxx" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp-ingress"}
Per the suggestion in the comments from @Michael Hausenblas, I've added an annotation to my Service for the ALB ingress.
Now that my ingress controller is using the correct ELB, I checked the logs because I still can't hit my app's /healthcheck. The logs:
E0317 16:00:45.643937 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile targetGroups due to failed to reconcile targetGroup targets due to Unable to DescribeInstanceStatus on fargate-ip-xxxxxxxxxxx.ec2.internal: InvalidInstanceID.Malformed: Invalid id: \"fargate-ip-xxxxxxxxxxx.ec2.internal\"\n\tstatus code: 400, request id: xxxxxxxxxxx-3a7d-4794-95fb-a18835abe0d3" "controller"="alb-ingress-controller" "request"={"Namespace":"testapp-qa","Name":"testapp"}
I0317 16:00:47.868939 1 rules.go:82] testapp-qa/testapp-ingress: modifying rule 1 on arn:aws:elasticloadbalancing:us-east-1:xxxxxxxxxxx:listener/app/xxxxxxxxxxx-testappqa-testappin-b879/xxxxxxxxxxx/6b41c0d3ce97ae6b
I0317 16:00:47.890674 1 rules.go:98] testapp-qa/testapp-ingress: rule 1 modified with conditions [{ Field: "path-pattern", Values: ["/*"] }]
Update
I've updated my config. I don't get any more errors, but I am still unable to hit my endpoints to test whether my app is accepting traffic. It might have something to do with Fargate, or with something on the AWS side I'm not seeing. Here's my updated config:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "testapp"
  namespace: "testapp-qa"
spec:
  selector:
    matchLabels:
      app: "testapp"
  replicas: 5
  template:
    metadata:
      labels:
        app: "testapp"
    spec:
      containers:
        - image: 673312057223.dkr.ecr.us-east-1.amazonaws.com/wood-testapp:latest
          imagePullPolicy: Always
          name: "testapp"
          ports:
            - containerPort: 9898
---
apiVersion: v1
kind: Service
metadata:
  name: "testapp"
  namespace: "testapp-qa"
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ports:
    - port: 80
      targetPort: 9898
      protocol: TCP
      name: http
  type: NodePort
  selector:
    app: "testapp"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "testapp-ingress"
  namespace: "testapp-qa"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-path: /healthcheck
  labels:
    app: testapp
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: "testapp"
              servicePort: 80
---
apiVersion: v1
kind: Namespace
metadata:
  name: "testapp-qa"
In your Service, try adding the following annotation:
annotations:
  alb.ingress.kubernetes.io/target-type: ip
You also need to explicitly tell the Ingress resource, via the alb.ingress.kubernetes.io/healthcheck-path annotation, where/how to perform the health checks for the target group. See the ALB ingress controller docs for the annotation semantics.
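If you prefer to apply the two annotations without editing the manifests, the equivalent imperative commands look roughly like this (a sketch using the resource names from the question; adjust to your setup):
# Register pod IPs instead of instance IDs in the target group
# (needed on Fargate, where there are no EC2 instances to target).
kubectl -n testapp-qa annotate service testapp \
  alb.ingress.kubernetes.io/target-type=ip --overwrite

# Tell the ALB which path to use for the target-group health checks.
kubectl -n testapp-qa annotate ingress testapp-ingress \
  alb.ingress.kubernetes.io/healthcheck-path=/healthcheck --overwrite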

AWS ingress controller setup

I am trying to expose my microservice to the internet with AWS EC2, using the Deployment and Service YAML files below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy: {}
  template:
    metadata:
      labels:
        app: my-app
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      containers:
        - name: my-app
          image: XXX
          ports:
            - name: my-app
              containerPort: 3000
          resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - name: my-app
      nodePort: 32000
      port: 3000
      targetPort: 3000
  type: NodePort
I also created an Ingress resource.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: example.myApp.com
      http:
        paths:
          - path: /my-app
            backend:
              serviceName: my-app
              servicePort: 80
As a last step I opened port 80 in the AWS dashboard. How should I choose the ingress controller to realize my intent?
servicePort should be 3000, the same as port in your Service object.
Note, however, that setting up a cluster with kubeadm on AWS is not the best way to go: EKS provides optimized, well-configured clusters with external load balancers and ingress controllers.
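For reference, this is roughly what the corrected Ingress would look like, applied inline (a sketch that keeps the apiVersion, names, and host from the question and only changes servicePort to 3000 so it matches the Service's port):
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: example.myApp.com
      http:
        paths:
          - path: /my-app
            backend:
              serviceName: my-app
              servicePort: 3000
EOF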

Istio - using a VirtualService and Gateway instead of a LoadBalancer is not working

I have the following application, which I am able to run in K8s successfully using a Service of type LoadBalancer. It is a very simple app with two routes:
/ - you should see 'hello application'
/api/books - should provide the list of books in JSON format
This is the service
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  type: LoadBalancer
  ports:
    - port: 8080
  selector:
    app: go-ms
This is the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
        - name: go-ms
          image: rayndockder/http:0.0.2
          ports:
            - containerPort: 8080
          env:
            - name: PORT
              value: "8080"
          resources:
            requests:
              memory: "64Mi"
              cpu: "125m"
            limits:
              memory: "128Mi"
              cpu: "250m"
After applying both YAMLs and calling the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I was able to see the data in the browser as expected, and also the root app using just the external IP.
Now I want to use Istio, so I followed the guide and installed it successfully via Helm (https://istio.io/docs/setup/kubernetes/install/helm/), and verified that all 53 CRDs are there and that the istio-system components (istio-ingressgateway, istio-pilot, etc., all 8 deployments) are up and running.
I changed the Service above from LoadBalancer to NodePort and created the following Istio config according to the Istio docs:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 8080
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice
spec:
  hosts:
    - "*"
  gateways:
    - http-gateway
  http:
    - match:
        - uri:
            prefix: "/"
        - uri:
            exact: "/api/books"
      route:
        - destination:
            port:
              number: 8080
            host: go-ms
In addition, I ran
kubectl label namespace books istio-injection=enabled
on the namespace where the application is deployed.
Now, to get the external IP, I used the command
kubectl get svc -n istio-system -l istio=ingressgateway
and got this as the external IP:
b0751-1302075110.eu-central-1.elb.amazonaws.com
When trying to access the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I got the error:
This site can’t be reached
ERR_CONNECTION_TIMED_OUT
If I run the Docker image rayndockder/http:0.0.2 via
docker run -it -p 8080:8080 httpv2
the paths work correctly.
Any idea/hint what the issue could be?
Is there a way to trace the Istio configs to see whether something is missing, or whether there is some collision with a port or a network policy, maybe?
BTW, the Deployment and Service can run on any cluster for testing, if someone could help...
If I change everything to port 80 (in all the YAML files, the application, and the Docker image), I am able to get the data for the root path, but not for /api/books.
I tried your config, with the gateway port changed from 8080 to 80, in my local Minikube setup of Kubernetes and Istio. This is the command I used:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  ports:
    - port: 8080
  selector:
    app: go-ms
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
        - name: go-ms
          image: rayndockder/http:0.0.2
          ports:
            - containerPort: 8080
          env:
            - name: PORT
              value: "8080"
          resources:
            requests:
              memory: "64Mi"
              cpu: "125m"
            limits:
              memory: "128Mi"
              cpu: "250m"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: go-ms-virtualservice
spec:
  hosts:
    - "*"
  gateways:
    - http-gateway
  http:
    - match:
        - uri:
            prefix: /
        - uri:
            exact: /api/books
      route:
        - destination:
            port:
              number: 8080
            host: go-ms
EOF
The reason I changed the gateway port to 80 is that the Istio ingress gateway by default opens up a few ports such as 80, 443, and a few others. In my case, as Minikube doesn't have an external load balancer, I used the node port, which is 31380 in my setup.
I was able to access the app with the URL http://$(minikube ip):31380.
There is no point in changing the port of the Services and Deployments, since those are application specific.
Maybe this question specifies the ports opened by the Istio ingress gateway.
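If you need to look up that node port on your own cluster, something like this should work (a sketch; in the default Helm install the HTTP listener of the gateway Service is usually named http2, so adjust the name if yours differs):
# Print the nodePort that istio-ingressgateway maps to its HTTP (port 80) listener.
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'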