Kubernetes - services without selector - amazon-web-services

I'm struggling with a Kubernetes service without a selector. The cluster is installed on AWS with kops. I have a deployment with 3 nginx pods exposing port 80:
apiVersion: apps/v1
kind: Deployment
metadata:
name: ngix-dpl # Name of the deployment object
labels:
app: nginx
spec:
replicas: 3 # Number of instances in the deployment
selector: # Selector identifies pods to be
matchLabels: # part of the deployment
app: nginx # by matching of the label "app"
template: # Templates describes pods of the deployment
metadata:
labels: # Defines key-value map
app: nginx # Label to be recognized by other objects
spec: # as deployment or service
containers: # Lists all containers in the pod
- name: nginx-pod # container name
image: nginx:1.17.4 # container docker image
ports:
- containerPort: 80 # port exposed by container
After creation of the deployment, I noted the IP addresses:
$ kubectl get pods -o wide | awk {'print $1" " $3" " $6'} | column -t
NAME STATUS IP
curl Running 100.96.6.40
ngix-dpl-7d6b8c8944-8zsgk Running 100.96.8.53
ngix-dpl-7d6b8c8944-l4gwk Running 100.96.6.43
ngix-dpl-7d6b8c8944-pffsg Running 100.96.8.54
and created a service that should serve the IP addresses:
apiVersion: v1
kind: Service
metadata:
name: dummy-svc
labels:
app: nginx
spec:
ports:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
name: dummy-svc
subsets:
- addresses:
- ip: 100.96.8.53
- ip: 100.96.6.43
- ip: 100.96.8.54
ports:
- port: 80
name: http
The service is successfully created:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dummy-svc ClusterIP 100.64.222.220 <none> 80/TCP 32m
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 5d14h
Unfortunately, my attempt to connect to the nginx through the service from another pod of the same namespace fails:
$ curl 100.64.222.220
curl: (7) Failed to connect to 100.64.222.220 port 80: Connection refused
I can successfully connect to the nginx pods directly:
$ curl 100.96.8.53
<!DOCTYPE html>
<html>
<head>
....
I noticed that my service does not have any endpoints. But I'm not sure whether the manual endpoints should be shown there:
$ kubectl get svc/dummy-svc -o yaml
apiVersion: v1
kind: Service
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"dummy-svc","namespace":"default"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":80}]}}
creationTimestamp: "2019-11-22T08:41:29Z"
labels:
app: nginx
name: dummy-svc
namespace: default
resourceVersion: "4406151"
selfLink: /api/v1/namespaces/default/services/dummy-svc
uid: e0aa9d01-0d03-11ea-a19c-0a7942f17bf8
spec:
clusterIP: 100.64.222.220
ports:
- port: 80
protocol: TCP
targetPort: 80
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
I understand that this is not a proper use case for services, and that using a pod selector would make it work. But I want to understand why this configuration does not work. I don't know where to look for the solution. Any hint will be appreciated.

It works if you remove the "name" field from the Endpoints configuration. It should look like this:
apiVersion: v1
kind: Endpoints
metadata:
name: dummy-svc
subsets:
- addresses:
- ip: 172.17.0.4
- ip: 172.17.0.5
- ip: 172.17.0.6
ports:
- port: 80
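Once the Endpoints object is applied without the port name, the Service should pick up the addresses. A quick way to verify this (assuming the default namespace) is to list the endpoints that share the Service's name:
$ kubectl get endpoints dummy-svc
The three pod IPs from the question should now appear in the ENDPOINTS column.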

As @iliefa mentioned in his comment above, the part of the definition below is treated as a label in cases like this:
ports:
- port: 80
name: http
In your scenario, you need to either remove 'name: http' (as mentioned by @iliefa), or add 'name: http' under 'ports:' in the Service definition, as shown below.
apiVersion: v1
kind: Service
metadata:
name: dummy-svc
labels:
app: nginx
spec:
ports:
- protocol: TCP
port: 80
targetPort: 80
name: http
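Either way, the point is that the port name links the Service port to the manually created Endpoints, so it has to match on both sides (or be absent on both). Once they line up, a quick sanity check might look like this, using the ClusterIP from the question:
$ kubectl describe service dummy-svc | grep -i endpoints
$ curl 100.64.222.220
The first command should list the pod IPs and the second should return the nginx welcome page.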

Related

Why ALB is not launching after successfully creating ingress

I am trying to launch an Application Load Balancer (ALB) on AWS EKS. I have already installed the AWS Load Balancer Controller in my cluster successfully. The tutorial I am following says that after creating the ingress and applying it, I should see an ALB created in AWS, which I don't. What could be the reason? Am I missing something?
I have already created and started apple-service and banana-service and their pods.
Here's the ingress YAML. I can apply this ingress successfully, but the ALB doesn't launch.
I am using EKS, Kubernetes version 1.22.
kubectl -n kube-system get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
aws-load-balancer-controller 2/2 2 2 19m
coredns 2/2 2 2 38m
kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
my-awesome-app-ingress <none> testingkarlo.ml 80 14m
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-awesome-app-ingress
labels:
app: my-awesome-app
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
spec:
rules:
- host: testingkarlo.ml
http:
paths:
- path: /apple
pathType: Prefix
backend:
service:
name: apple-service
port:
number: 5678
- path: /banana
pathType: Prefix
backend:
service:
name: banana-service
port:
number: 5678
apple.yaml
kind: Pod
apiVersion: v1
metadata:
name: apple-app
labels:
app: apple
spec:
containers:
- name: apple-app
image: hashicorp/http-echo
args:
- "-text=apple"
---
kind: Service
apiVersion: v1
metadata:
name: apple-service
spec:
selector:
app: apple
ports:
- port: 5678 # Default port for image
targetPort: 5678
type: LoadBalancer
banana.yaml is similar to the above.
After applying apple.yaml and banana.yaml, two Classic Load Balancers are launched in AWS.
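A common first step when the ALB does not appear, assuming the controller was installed under its default name as shown in the kubectl get deployment output above, is to check the AWS Load Balancer Controller logs for reconcile errors on the Ingress:
kubectl logs -n kube-system deployment/aws-load-balancer-controller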

Sticky sessions on Kubernetes cluster

Currently, I'm trying to create a Kubernetes cluster on Google Cloud with two load balancers: one for the backend (in Spring Boot) and another for the frontend (in Angular), where each service (load balancer) communicates with 2 replicas (pods). To achieve that, I created the following ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: sample-ingress
spec:
rules:
- http:
paths:
- path: /rest/v1/*
backend:
serviceName: sample-backend
servicePort: 8082
- path: /*
backend:
serviceName: sample-frontend
servicePort: 80
The ingress mentioned above lets the frontend app communicate with the REST API made available by the backend app. However, I have to create sticky sessions, so that every user communicates with the same pod, because of the authentication mechanism provided by the backend. To clarify, if one user authenticates on pod #1, the cookie will not be recognized by pod #2.
To overcome this issue, I read that nginx-ingress manages to deal with this situation, and I installed it through the steps available here: https://kubernetes.github.io/ingress-nginx/deploy/ using Helm.
You can find below the diagram for the architecture I'm trying to build:
With the following services (I will just paste one of the services, the other one is similar):
apiVersion: v1
kind: Service
metadata:
name: sample-backend
spec:
selector:
app: sample
tier: backend
ports:
- protocol: TCP
port: 8082
targetPort: 8082
type: LoadBalancer
And I declared the following ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: sample-nginx-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/affinity-mode: persistent
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
nginx.ingress.kubernetes.io/session-cookie-name: sample-cookie
spec:
rules:
- http:
paths:
- path: /rest/v1/*
backend:
serviceName: sample-backend
servicePort: 8082
- path: /*
backend:
serviceName: sample-frontend
servicePort: 80
After that, I ran kubectl apply -f sample-nginx-ingress.yaml to apply the ingress; it is created and its status is OK. However, when I access the URL that appears in the "Endpoints" column, the browser can't connect to it.
Am I doing anything wrong?
Edit 1
Updated service and ingress configurations
After some help, I've managed to access the services through the NGINX Ingress. Below are the configurations:
Nginx Ingress
The paths shouldn't contain the "*", unlike the default Kubernetes ingress, where the "*" is mandatory to route the paths I want.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: sample-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "sample-cookie"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
rules:
- http:
paths:
- path: /rest/v1/
backend:
serviceName: sample-backend
servicePort: 8082
- path: /
backend:
serviceName: sample-frontend
servicePort: 80
Services
Also, the services shouldn't be of type "LoadBalancer" but "ClusterIP" as below:
apiVersion: v1
kind: Service
metadata:
name: sample-backend
spec:
selector:
app: sample
tier: backend
ports:
- protocol: TCP
port: 8082
targetPort: 8082
type: ClusterIP
However, I still can't achieve sticky sessions in my Kubernetes cluster, since I'm still getting 403 and even the cookie name is not replaced, so I guess the annotations are not working as expected.
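One quick way to see whether the affinity cookie is being issued at all (the ingress IP below is a placeholder) is to dump the response headers and look for a Set-Cookie: sample-cookie=... entry:
curl -s -D - -o /dev/null http://<INGRESS_IP>/rest/v1/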
I looked into this matter and found a solution to your issue.
To achieve sticky sessions for both paths you will need two Ingress definitions.
I created an example configuration to show you the whole process:
Steps to reproduce:
Install the Ingress controller
Create deployments
Create services
Create Ingresses
Test
I assume that the cluster is provisioned and is working correctly.
Install the Ingress controller
Follow this Ingress link to check whether there are any prerequisites needed before installing the Ingress controller on your infrastructure.
Apply the command below to provide all the mandatory prerequisites:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
Run the command below to apply the generic configuration that creates a service:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
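Before moving on, it is worth confirming that the controller actually came up; the mandatory manifest above creates the ingress-nginx namespace, so something like this should show a running controller pod and its service:
$ kubectl get pods -n ingress-nginx
$ kubectl get svc -n ingress-nginx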
Create deployments
Below are 2 example deployments that will respond to the Ingress traffic through specific services:
hello.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello
spec:
selector:
matchLabels:
app: hello
version: 1.0.0
replicas: 5
template:
metadata:
labels:
app: hello
version: 1.0.0
spec:
containers:
- name: hello
image: "gcr.io/google-samples/hello-app:1.0"
env:
- name: "PORT"
value: "50001"
Apply this first deployment configuration by invoking the command:
$ kubectl apply -f hello.yaml
goodbye.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: goodbye
spec:
selector:
matchLabels:
app: goodbye
version: 2.0.0
replicas: 5
template:
metadata:
labels:
app: goodbye
version: 2.0.0
spec:
containers:
- name: goodbye
image: "gcr.io/google-samples/hello-app:2.0"
env:
- name: "PORT"
value: "50001"
Apply this second deployment configuration by invoking the command:
$ kubectl apply -f goodbye.yaml
Check whether the deployments created their pods correctly:
$ kubectl get deployments
It should show something like this:
NAME READY UP-TO-DATE AVAILABLE AGE
goodbye 5/5 5 5 2m19s
hello 5/5 5 5 4m57s
Create services
To connect to the pods created earlier, you will need to create services. Each service will be assigned to one deployment. Below are the 2 services to accomplish that:
hello-service.yaml:
apiVersion: v1
kind: Service
metadata:
name: hello-service
spec:
type: NodePort
selector:
app: hello
version: 1.0.0
ports:
- name: hello-port
protocol: TCP
port: 50001
targetPort: 50001
Apply the first service configuration by invoking the command:
$ kubectl apply -f hello-service.yaml
goodbye-service.yaml:
apiVersion: v1
kind: Service
metadata:
name: goodbye-service
spec:
type: NodePort
selector:
app: goodbye
version: 2.0.0
ports:
- name: goodbye-port
protocol: TCP
port: 50001
targetPort: 50001
Apply the second service configuration by invoking the command:
$ kubectl apply -f goodbye-service.yaml
Keep in mind that both configurations use type: NodePort.
Check whether the services were created successfully:
$ kubectl get services
The output should look like this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
goodbye-service NodePort 10.0.5.131 <none> 50001:32210/TCP 3s
hello-service NodePort 10.0.8.13 <none> 50001:32118/TCP 8s
Create Ingresses
To achieve sticky sessions you will need to create 2 ingress definitions.
Definitions are provided below:
hello-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: hello-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "hello-cookie"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/affinity-mode: persistent
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
rules:
- host: DOMAIN.NAME
http:
paths:
- path: /
backend:
serviceName: hello-service
servicePort: hello-port
goodbye-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: goodbye-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "goodbye-cookie"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/affinity-mode: persistent
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
rules:
- host: DOMAIN.NAME
http:
paths:
- path: /v2/
backend:
serviceName: goodbye-service
servicePort: goodbye-port
Please change DOMAIN.NAME in both Ingresses to the domain appropriate to your case.
I would advise looking at this Ingress sticky session link.
Both Ingresses are configured for HTTP-only traffic.
Apply both of them by invoking the commands:
$ kubectl apply -f hello-ingress.yaml
$ kubectl apply -f goodbye-ingress.yaml
Check if both configurations were applied:
$ kubectl get ingress
Output should be something like this:
NAME HOSTS ADDRESS PORTS AGE
goodbye-ingress DOMAIN.NAME IP_ADDRESS 80 26m
hello-ingress DOMAIN.NAME IP_ADDRESS 80 26m
Test
Open your browser and go to http://DOMAIN.NAME
Output should be like this:
Hello, world!
Version: 1.0.0
Hostname: hello-549db57dfd-4h8fb
Hostname: hello-549db57dfd-4h8fb is the name of the pod. Refresh the page a couple of times.
It should stay the same.
To check whether the other route is working, go to http://DOMAIN.NAME/v2/
Output should be like this:
Hello, world!
Version: 2.0.0
Hostname: goodbye-7b5798f754-pbkbg
Hostname: goodbye-7b5798f754-pbkbg is the name of the pod. Refresh the page a couple of times.
It should stay the same.
To ensure that the cookies are not changing, open the developer tools (usually F12) and navigate to the cookie storage. You can reload the page to check that they do not change.
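The same check can be done from the command line with curl, assuming a throwaway cookie-jar file; as long as the saved cookie is replayed, the Hostname in the response should not change:
# first request: store the affinity cookie issued by the ingress
$ curl -s -c /tmp/hello.cookies http://DOMAIN.NAME/ | grep Hostname
# follow-up request: send the stored cookie back; the same pod should answer
$ curl -s -b /tmp/hello.cookies http://DOMAIN.NAME/ | grep Hostname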
I think your Service configuration is wrong. Just remove type: LoadBalancer and the type will be ClusterIP by default.
LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created. See more here: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer.
apiVersion: v1
kind: Service
metadata:
name: sample-backend
spec:
selector:
app: sample
tier: backend
ports:
- protocol: TCP
port: 8082
targetPort: 8082

istio - using vs service and gw instead loadbalancer not working

I have the following application, which I'm able to run in K8s successfully using a Service of type LoadBalancer. It is a very simple app with two routes:
/ - you should see 'hello application'
/api/books - should provide a list of books in JSON format
This is the service
apiVersion: v1
kind: Service
metadata:
name: go-ms
labels:
app: go-ms
tier: service
spec:
type: LoadBalancer
ports:
- port: 8080
selector:
app: go-ms
This is the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: go-ms
labels:
app: go-ms
spec:
replicas: 2
template:
metadata:
labels:
app: go-ms
tier: service
spec:
containers:
- name: go-ms
image: rayndockder/http:0.0.2
ports:
- containerPort: 8080
env:
- name: PORT
value: "8080"
resources:
requests:
memory: "64Mi"
cpu: "125m"
limits:
memory: "128Mi"
cpu: "250m"
After applying both YAMLs and calling the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I was able to see the data in the browser as expected, and the same for the root path using just the external IP.
Now I want to use Istio, so I followed the guide and installed it successfully via Helm
using https://istio.io/docs/setup/kubernetes/install/helm/, and verified that all 53 CRDs are there and that the istio-system
components (such as istio-ingressgateway,
istio-pilot, etc.; all 8 deployments) are up and running.
I've changed the service above from LoadBalancer to NodePort
and created the following Istio config according to the Istio docs:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: http-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 8080
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: virtualservice
spec:
hosts:
- "*"
gateways:
- http-gateway
http:
- match:
- uri:
prefix: "/"
- uri:
exact: "/api/books"
route:
- destination:
port:
number: 8080
host: go-ms
In addition, I've added the following label to the namespace where the application is deployed:
kubectl label namespace books istio-injection=enabled
Now, to get the external IP, I've used the command
kubectl get svc -n istio-system -l istio=ingressgateway
and got this as the external IP:
b0751-1302075110.eu-central-1.elb.amazonaws.com
When trying to access the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I got the error:
This site can’t be reached
ERR_CONNECTION_TIMED_OUT
If I run the Docker image rayndockder/http:0.0.2 via
docker run -it -p 8080:8080 httpv2
the paths work correctly!
Any idea/hint what the issue could be?
Is there a way to trace the Istio configs to see whether something is missing, or whether we have some collision with a port or a network policy maybe?
By the way, the deployment and service can run on any cluster for testing, if someone could help...
If I change everything to port 80 (in all the YAML files, the application and the Docker image), I am able to get the data for the root path, but not for "/api/books".
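As a hint on tracing the configs: assuming istioctl is installed, its proxy-status subcommand shows whether the Gateway/VirtualService configuration has actually been pushed to the istio-ingressgateway pods, which helps separate a routing problem from a plain connectivity one:
istioctl proxy-status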
I tried your config with the gateway port modified from 8080 to 80 in my local minikube setup of Kubernetes and Istio. This is the command I used:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
name: go-ms
labels:
app: go-ms
tier: service
spec:
ports:
- port: 8080
selector:
app: go-ms
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: go-ms
labels:
app: go-ms
spec:
replicas: 2
template:
metadata:
labels:
app: go-ms
tier: service
spec:
containers:
- name: go-ms
image: rayndockder/http:0.0.2
ports:
- containerPort: 8080
env:
- name: PORT
value: "8080"
resources:
requests:
memory: "64Mi"
cpu: "125m"
limits:
memory: "128Mi"
cpu: "250m"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: http-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: go-ms-virtualservice
spec:
hosts:
- "*"
gateways:
- http-gateway
http:
- match:
- uri:
prefix: /
- uri:
exact: /api/books
route:
- destination:
port:
number: 8080
host: go-ms
EOF
The reason I changed the gateway port to 80 is that the Istio ingress gateway by default opens up only a few ports, such as 80, 443 and a few others. In my case, as minikube doesn't have an external load balancer, I used the node port, which is 31380 in my case.
I was able to access the app with the URL http://$(minikube ip):31380.
There is no point in changing the ports of the services and deployments, since these are application-specific.
Maybe this question specifies the ports opened by the Istio ingress gateway.
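To find out which node port the ingress gateway mapped to port 80, something like the following can be used (the port name http2 is what the Istio Helm chart of that era uses; it may differ between versions):
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
The app should then be reachable at http://<node-ip>:<that-port>/api/books, which is exactly the http://$(minikube ip):31380 form used above.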

Kubernetes ingress in AWS

Please help me deal with the accessibility of my simple application.
I created a YAML with the application:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: myapp-test
spec:
replicas: 2
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: gcr.io/kubernetes-e2e-test-images/echoserver:2.1
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: myapp-service
spec:
selector:
app: myapp
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8080
type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
spec:
rules:
- host: myapp.com
http:
paths:
- path: /
backend:
serviceName: myapp-service
servicePort: 80
- path: /hello
backend:
serviceName: myapp-service
servicePort: 80
Then I created the k8s cluster via kops like this (all the k8s services came up and I can get onto the master):
kops create cluster \
--node-count=2 \
--node-size=t2.micro \
--master-size=t2.micro \
--master-count=1 \
--zones=us-east-1a \
--name=${KOPS_CLUSTER_NAME}
In the end, I can't get to the application on port 80; it says that the connection is refused! Can someone tell me what the problem is? The YAML above fully works, but only in the minikube environment.
Indeed you have created an Ingress resource, but I presume you have not first deployed the NGINX Ingress Controller for your self-managed cluster on AWS. How to do this in general is explained here.
In the case of a Kubernetes cluster bootstrapped with kops, things are more complex, and it requires you to modify the existing cluster to use a dedicated kops add-on, kube-ingress-aws-controller, as explained on their GitHub project page here.
In its current form, your app can be reached only via the Node/AWS instance external IP on a port assigned from the default NodePort range (30000-32767). You can check the currently assigned port via kubectl get svc myapp-service, but reaching it requires opening that port on the firewall first (the default inbound rules deny all traffic apart from SSH). Based on your deploy/service manifest files:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myapp-service NodePort 100.64.187.80 <none> 80:32076/TCP 37m
With port 32076 open in the inbound rules of the Security Group assigned to my instance, I can now reach the app on the NodePort:
curl <node_external_ip>:32076
Hostname: myapp-test-f87bcbd44-8nxpn
Pod Information:
-no pod information available-
Server values:
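For reference, the inbound rule itself can be added with the AWS CLI; the security group ID below is a placeholder for the group attached to the node, and 32076 is the NodePort reported by kubectl get svc:
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 32076 \
  --cidr 0.0.0.0/0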

Traefik ingress is not working behind aws load balancer

After I created a Traefik DaemonSet, I created a service of type LoadBalancer on port 80, which is the Traefik proxy port, and the node got automatically registered to it. If I hit the ELB I get the proxy 404, which is OK because no service is registered yet.
I then created a NodePort service for the web UI, targeting port 8080 inside the pod and 80 on the ClusterIP. I can curl the Traefik UI from inside the cluster and it returns the Traefik UI.
I then created an ingress so that when I hit elb/ui it gets me to the backend web-ui service of Traefik, and it fails. I also have the right annotations in my ingress, but the ELB does not route the path to the Traefik UI in the backend, which is running properly.
What am I doing wrong here? I can post all my YAML files if required.
UPD
My yaml files:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: traefik
labels:
app: traefik
spec:
template:
metadata:
labels:
app: traefik
spec:
containers:
- image: traefik
name: traefik
args:
- --api
- --kubernetes
- --logLevel=INFO
- --web
ports:
- containerPort: 8080
name: traefikweb
- containerPort: 80
name: traefikproxy
apiVersion: v1
kind: Service
metadata:
name: traefik-proxy
spec:
selector:
app: traefik
ports:
- port: 80
targetPort: traefikproxy
type: LoadBalancer
apiVersion: v1
kind: Service
metadata:
name: traefik-web-ui
spec:
selector:
app: traefik
ports:
- name: http
targetPort: 8080
nodePort: 30001
port: 80
type: NodePort
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: default
name: traefik-ing
annotations:
kubernetes.io/ingress.class: traefik
#traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip:/ui
spec:
rules:
- http:
paths:
- path: /ui
backend:
serviceName: traefik-web-ui
servicePort: 80
If you are on private subnets, use:
apiVersion: v1
kind: Service
metadata:
  name: traefik-proxy
  annotations:
    "service.beta.kubernetes.io/aws-load-balancer-internal": "0.0.0.0/0"
spec:
  selector:
    app: traefik
  ports:
  - port: 80
    targetPort: traefikproxy
  type: LoadBalancer
"I then created an ingress so that when I hit elb/ui it gets me to the backend web-ui service of Traefik and it fails."
How did it fail? Did you get error 404, error 500, or something else?
Also, for the traefik-web-ui service, you don't need to set type: NodePort; it should be type: ClusterIP.
When you configure backends for your Ingress, the only requirement is availability from inside the cluster, so the ClusterIP type will be more than enough for that.
Your service should look like this:
apiVersion: v1
kind: Service
metadata:
name: traefik-web-ui
spec:
selector:
app: traefik
ports:
- name: http
targetPort: 8080
port: 80
The PathPrefixStrip option can be useful because, without it, the request will be forwarded to the UI with the /ui prefix, which you definitely don't want.
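This is roughly the annotation that is commented out in the question's Ingress; with Traefik 1.x the value is just PathPrefixStrip (the path to strip comes from the rule itself), so a sketch could look like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: traefik-ing
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip
spec:
  rules:
  - http:
      paths:
      - path: /ui
        backend:
          serviceName: traefik-web-ui
          servicePort: 80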
All other configs look good.