Traefik ingress is not working behind aws load balancer - amazon-web-services

After I created a Traefik DaemonSet, I created a service of type LoadBalancer on port 80, which is the Traefik proxy port, and the node got automatically registered to it. If I hit the ELB I get the proxy's 404, which is OK because no service is registered yet.
I then created a NodePort service for the web UI, targeting port 8080 inside the pod and port 80 on the cluster IP. I can curl the Traefik UI from inside the cluster and it returns the Traefik UI.
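For reference, one way to reproduce that in-cluster check is to curl the service from a throwaway pod; this sketch assumes the curlimages/curl image is available and uses the service name/namespace from the manifests below:

kubectl run -it --rm curl --image=curlimages/curl --restart=Never -- \
  curl -s http://traefik-web-ui.default.svc.cluster.local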
I then created an ingress so that when I hit elb/ui it routes me to the backend traefik-web-ui service, but it fails. I also have the right annotations in my ingress, but the ELB does not route the path to the Traefik UI in the backend, which is running properly.
What am I doing wrong here? I can post all my YAML files if required.
UPDATE:
My yaml files:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik
  labels:
    app: traefik
spec:
  template:
    metadata:
      labels:
        app: traefik
    spec:
      containers:
      - image: traefik
        name: traefik
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --web
        ports:
        - containerPort: 8080
          name: traefikweb
        - containerPort: 80
          name: traefikproxy
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-proxy
spec:
  selector:
    app: traefik
  ports:
  - port: 80
    targetPort: traefikproxy
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
spec:
  selector:
    app: traefik
  ports:
  - name: http
    targetPort: 8080
    nodePort: 30001
    port: 80
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: traefik-ing
  annotations:
    kubernetes.io/ingress.class: traefik
    #traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip:/ui
spec:
  rules:
  - http:
      paths:
      - path: /ui
        backend:
          serviceName: traefik-web-ui
          servicePort: 80

If you are on private subnets, use:
apiVersion: v1
kind: Service
metadata:
  name: traefik-proxy
  annotations:
    "service.beta.kubernetes.io/aws-load-balancer-internal": "0.0.0.0/0"
spec:
  selector:
    app: traefik
  ports:
  - port: 80
    targetPort: traefikproxy
  type: LoadBalancer

"I then created an ingress so that when I hit elb/ui it routes me to the backend traefik-web-ui service, but it fails."
How did it fail? Did you get error 404, error 500, or something else?
Also, for the traefik-web-ui service, you don't need to set type: NodePort; it should be type: ClusterIP.
When you configure backends for your Ingress, the only requirement is availability from inside the cluster, so the ClusterIP type will be more than enough for that.
Your service should be like that:
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
spec:
  selector:
    app: traefik
  ports:
  - name: http
    targetPort: 8080
    port: 80
The PathPrefixStrip option can be useful because without it the request is forwarded to the UI with the /ui prefix, which you definitely don't want.
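For reference, a minimal sketch of the ingress with path stripping enabled, assuming Traefik 1.x (note the annotation value is just PathPrefixStrip, not PathPrefixStrip:/ui as in the commented-out line above; the prefix to strip is taken from the rule's path):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: traefik-ing
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip
spec:
  rules:
  - http:
      paths:
      - path: /ui
        backend:
          serviceName: traefik-web-ui
          servicePort: 80

With this in place, a request to elb/ui is forwarded to the backend as /, which is what the UI expects.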
All other configs look good.

Related

Kubectl create service type Loadbalancer (on GCP but add flag Global?)

I've created a load balancer for my microservices with this template. All is good and it works, but I wanted to somehow add the global flag (when you create an LB through the GCP console you have the option to add it) to meet the expectations of the app functionality. Does anyone know what other flag I might need to add?
apiVersion: v1
kind: Service
metadata:
  name: my-app-jmprlb
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: my-app
    env: dev
spec:
  type: LoadBalancer
  selector:
    app: my-app
    env: dev
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  loadBalancerIP: 10.10.10.10
  externalTrafficPolicy: Local
EDIT:
I found some nice annotations in the Google docs that seem to do the trick: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balance-ingress
# web-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hostname
  namespace: default
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  ports:
  - name: host1
    port: 80
    protocol: TCP
    targetPort: 9376
  selector:
    app: hostname
  type: NodePort
and
# internal-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ilb-demo-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
spec:
  backend:
    serviceName: hostname
    servicePort: 80
If you want to make it a global LoadBalancer that is accessible from outside your cluster with a public IP, you can use:
apiVersion: v1
kind: Service
metadata:
  name: my-app-jmprlb
  labels:
    app: my-app
    env: dev
spec:
  type: LoadBalancer
  selector:
    app: my-app
    env: dev
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
Note that the annotation cloud.google.com/load-balancer-type: "Internal" means that your service is only accessible within subnets that are peered with the subnet where your cluster resides.
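If you want to confirm which flavor of load balancer you ended up with, checking the service's external IP is a quick sanity test (my-app-jmprlb is the service from the examples above):

kubectl get svc my-app-jmprlb
# An internal LB reports a private EXTERNAL-IP (e.g. 10.x.x.x);
# a regular LoadBalancer reports a public one.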

GKE Ingress to services on same container with 2 exposed ports

I have a GKE cluster, a static IP, and a container/pod which exposes 2 ports: 8081 (UI HTTPS) and 8082 (WSS HTTPS). I must connect to the UI HTTPS and the WSS HTTPS on the same IP. The WSS HTTPS service does not have a health check endpoint.
Do I need to use Istio, Consul, NGINX ingress, or some service mesh to allow these connections on the same IP with different ports?
Is this even possible?
Things I have tried:
- A GCP global LB with 2 independent ingress services. The YAML for the second service never works correctly, but I can add another backend service via the UI. The ingress always reverts to the default health check for the WSS HTTPS service, and it is always unhealthy.
- Changed the Service type from NodePort to LoadBalancer with a static IP. This does not allow me to change the readiness check and always reverts back.
- A GCP GLB with 1 ingress and 2 backends gives me the same health check failure as above.
- TCP Proxy - does not allow me to set the same instance group.
Below are my ingress, service, and deployment.
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  namespace: myappnamespace
  annotations:
    kubernetes.io/ingress.global-static-ip-name: global-static-ip-name
  labels:
    app: appname
spec:
  backend:
    serviceName: ui-service
    servicePort: 8081
  tls:
  - hosts:
    - my-host-name.com
    secretName: my-secret
  rules:
  - host: my-host-name.com
    http:
      paths:
      - backend:
          serviceName: ui-service
          servicePort: 8081
  - host: my-host-name.com
    http:
      paths:
      - backend:
          serviceName: app-service
          servicePort: 8082
Services
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: ui-service
  name: ui-service
  namespace: myappnamespace
  annotations:
    cloud.google.com/app-protocols: '{"ui-https":"HTTPS"}'
    beta.cloud.google.com/backend-config: '{"ports":{"8081":"cloud-armor"}}'
spec:
  ports:
  - name: ui-https
    port: 8081
    targetPort: "ui"
    protocol: "TCP"
  selector:
    name: appname
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: app-service
  name: app-service
  namespace: myappnamespace
  annotations:
    cloud.google.com/app-protocols: '{"serviceport-https":"HTTPS"}'
    beta.cloud.google.com/backend-config: '{"ports":{"8082":"cloud-armor"}}'
spec:
  ports:
  - name: serviceport-https
    port: 8082
    targetPort: "service-port"
    protocol: "TCP"
  selector:
    name: appname
  type: NodePort
---
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appname
  namespace: myappnamespace
  labels:
    name: appname
spec:
  replicas: 1
  selector:
    matchLabels:
      name: appname
  strategy:
    type: Recreate
  template:
    metadata:
      name: appname
      namespace: appnamespace
      labels:
        name: appname
    spec:
      restartPolicy: Always
      serviceAccountName: myserviceaccount
      containers:
      - name: my-container
        image: image
        ports:
        - name: service-port
          containerPort: 8082
        - name: ui
          containerPort: 8081
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /api/health
            port: 8081
            scheme: HTTPS
        livenessProbe:
          exec:
            command:
            - cat
            - /version.txt
[......]
A Service exposed through an Ingress must respond to health checks from the load balancer.
The external HTTP(S) load balancer that GKE Ingress creates supports only port 443 for HTTPS traffic.
In that case you may want to:
- Use two separate Ingress resources to route traffic for two different host names on the same IP address and port (see the sketch after this list):
  - ui-https.my-host-name.com
  - wss-https.my-host-name.com
- Opt to use Ambassador or Istio Virtual Service.
- Try Multi-Port Services.
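A minimal sketch of the two-Ingress approach, reusing the service names from the question (the host names are the ones suggested above; TLS and static-IP annotations are omitted for brevity):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ui-ingress
  namespace: myappnamespace
spec:
  rules:
  - host: ui-https.my-host-name.com
    http:
      paths:
      - backend:
          serviceName: ui-service
          servicePort: 8081
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wss-ingress
  namespace: myappnamespace
spec:
  rules:
  - host: wss-https.my-host-name.com
    http:
      paths:
      - backend:
          serviceName: app-service
          servicePort: 8082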
Please let me know if that helped.

Problems configuring Ingress with cookie affinity

I was looking for how to use cookie affinity in GKE, using Ingress for that.
I found the following link describing how to do it: https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service
I've created a YAML with the following:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-bsc-deployment
spec:
  selector:
    matchLabels:
      purpose: bsc-config-demo
  replicas: 3
  template:
    metadata:
      labels:
        purpose: bsc-config-demo
    spec:
      containers:
      - name: hello-app-container
        image: gcr.io/google-samples/hello-app:1.0
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-bsc-backendconfig
spec:
  timeoutSec: 40
  connectionDraining:
    drainingTimeoutSec: 60
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 50
---
apiVersion: v1
kind: Service
metadata:
  name: my-bsc-service
  labels:
    purpose: bsc-config-demo
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"my-bsc-backendconfig"}}'
spec:
  type: NodePort
  selector:
    purpose: bsc-config-demo
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-bsc-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-bsc-service
          servicePort: 80
---
Everything seems to go well. When I inspect the created Ingress, I see 2 backend services. One of them has the cookie configured, but the other doesn't.
If I create the deployment and then create the Service and Ingress from GCP's console, only one backend service appears.
Does somebody know why I get 2 using a YAML, but only one doing it from the console?
Thanks in advance,
Oscar
Your definition is good.
The reason you have two backends is that your Ingress does not define a default backend. GCE LBs require a default backend, so during LB creation a second backend is added to act as the default (this backend does nothing but serve 404 responses). The default backend does not use the BackendConfig.
This shouldn't be a problem, but if you want to ensure only your backend is used, define a default backend value in your Ingress definition by adding spec.backend:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-bsc-ingress
spec:
  backend:
    serviceName: my-bsc-service
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-bsc-service
          servicePort: 80
But, like I said, you don't NEED to define this; the additional backend won't really come into play, and no session affinity is required (there is only a single pod anyway). If you are curious, the default backend pod in question is called l7-default-backend-[replicaSet_hash]-[pod_hash] in the kube-system namespace.
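If you want to see it, a plain grep is enough (the exact hashes differ per cluster):

kubectl get pods -n kube-system | grep l7-default-backend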
You can enable cookie affinity on the ingress like this (these annotations are for the NGINX ingress controller):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-sticky
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: ingress.example.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /
You can create the service like this:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: ClientIP
If you are using the Traefik ingress instead of NGINX or the default GKE ingress, you can write the service like this:
apiVersion: v1
kind: Service
metadata:
  name: session-affinity
  labels:
    app: session-affinity
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
    traefik.ingress.kubernetes.io/session-cookie-name: "sticky"
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: session-affinity-demo

istio - using vs service and gw instead loadbalancer not working

I have the following application, which I'm able to run in K8s successfully using a service of type LoadBalancer. It's a very simple app with two routes:
/ - you should see 'hello application'
/api/books - should provide a list of books in JSON format
This is the service:
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: go-ms
This is the deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
      - name: go-ms
        image: rayndockder/http:0.0.2
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
After applying both YAMLs and calling the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I was able to see the data in the browser as expected, and also the root app using just the external IP.
Now I want to use Istio, so I followed the guide and installed it successfully via Helm
using https://istio.io/docs/setup/kubernetes/install/helm/, and verified that all 53 CRDs are there and also that the istio-system
components (such as istio-ingressgateway,
istio-pilot, etc., all 8 deployments) are up and running.
I've changed the service above from LoadBalancer to NodePort
and created the following Istio config according to the Istio docs:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 8080
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: "/"
    - uri:
        exact: "/api/books"
    route:
    - destination:
        port:
          number: 8080
        host: go-ms
In addition, I've labeled the namespace where the application is deployed:
kubectl label namespace books istio-injection=enabled
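One thing worth checking at this point: injection only applies to pods created after the namespace is labeled, so the deployment may need a restart. With the sidecar injected, pods in the books namespace should show 2/2 ready containers:

kubectl get pods -n books
# NAME          READY   STATUS
# go-ms-...     2/2     Running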
Now, to get the external IP, I've used the command
kubectl get svc -n istio-system -l istio=ingressgateway
and got this as the external IP:
b0751-1302075110.eu-central-1.elb.amazonaws.com
When trying to access the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I got the error:
This site can't be reached
ERR_CONNECTION_TIMED_OUT
If I run the Docker image rayndockder/http:0.0.2 via
docker run -it -p 8080:8080 httpv2
the paths work correctly!
Any idea/hint what could be the issue?
Is there a way to trace the Istio configs to see whether something is missing, or whether we have some collision with a port or network policy maybe?
By the way, the deployment and service can run on each cluster for testing, if someone could help...
If I change everything to port 80 (in all YAML files, the application, and the Docker image), I am able to get the data for the root path, but not for /api/books.
I tried your config with the modification of the gateway port to 80 from 8080 in my local Minikube setup of Kubernetes and Istio. This is the command I used:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  ports:
  - port: 8080
  selector:
    app: go-ms
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
      - name: go-ms
        image: rayndockder/http:0.0.2
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: go-ms-virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: /
    - uri:
        exact: /api/books
    route:
    - destination:
        port:
          number: 8080
        host: go-ms
EOF
The reason I changed the gateway port to 80 is that the Istio ingress gateway by default opens up a few ports such as 80, 443, and a few others. In my case, as Minikube doesn't have an external load balancer, I used the node port, which is 31380 in my case.
I was able to access the app with the URL http://$(minikube ip):31380.
There is no point in changing the port of the services and deployments, since these are application specific.
Maybe this question specifies the ports opened by the Istio ingress gateway.
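If you want to look up that node port yourself rather than hard-coding 31380, a jsonpath query against the gateway service works (this assumes the default service name istio-ingressgateway and that the gateway listens on port 80, as configured above):

kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'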

Kubernetes on AWS with NGINX ingress controller and SSL termination

I'm having issues configuring SSL termination in my Kubernetes cluster and am trying to figure out the best place for this to happen.
I have been able to get it working by following the instructions listed here and then updating the ingress controller service to include the SSL certificate details using the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:...
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: https
    port: 443
    targetPort: 80
I then have ingress rules and services set up similar to:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app1.foo.bar
    http:
      paths:
      - backend:
          serviceName: app1
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app1
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: app1
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app1
spec:
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
When going to app1.foo.bar I can see that:
- HTTP requests are redirected to HTTPS
- the SSL certificate is correctly applied
Originally I was trying to apply the certificate to my individual app services. I could see the certificate being applied to the ELB in AWS, but it wasn't being passed through. So my question is:
Is this the correct configuration, or is there a better solution?
Thank you :)
I would suggest terminating SSL on the NGINX ingress controller exposed with an ELB, and using kube-lego for automated SSL certificate management.
https://github.com/kubernetes/ingress-nginx &
https://github.com/jetstack/kube-lego
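As a sketch of how kube-lego hooks in once it is deployed: you opt an Ingress into automated certificates with the kubernetes.io/tls-acme annotation plus a tls section (the secret name here is illustrative; kube-lego creates and renews it):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - app1.foo.bar
    secretName: app1-tls   # created and renewed by kube-lego
  rules:
  - host: app1.foo.bar
    http:
      paths:
      - backend:
          serviceName: app1
          servicePort: 80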