I have an AWS EKS cluster with Traefik 2 deployed via Helm with the following config. The ACM cert in use is a wildcard cert for *.example.com.
service:
  enabled: true
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:xxxxxxxxx:certificate/"
With the following IngressRoute set for the dashboard.
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
  namespace: traefik
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`traefik.example.com`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
The issue is that after pointing the domain in R53 to the Traefik CLB, I can only hit the dashboard over HTTP and not HTTPS. When trying to access it via HTTPS I receive a "404 page not found" error. The goal is to eventually have HTTP redirect to HTTPS, but I am unable to reach HTTPS in the first place.
Could there be something in the configuration that I am missing?
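For the eventual HTTP to HTTPS redirect, a minimal sketch of the Traefik Helm values could look like the snippet below. This assumes the official traefik chart's ports schema (in newer chart versions the redirect key moved under redirectTo.port), and it also disables TLS on the websecure entrypoint because the CLB already terminates TLS and forwards plain HTTP to the backend:

ports:
  web:
    # chart-level HTTP -> HTTPS redirect (newer charts: redirectTo: { port: websecure })
    redirectTo: websecure
  websecure:
    tls:
      # TLS is already terminated at the ELB, so Traefik receives plain HTTP here
      enabled: false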
Related
We have Jenkins deployed as a pod using an nginx ingress behind an AWS classic load balancer. When it is open to the internet, we are able to hit the Jenkins URL. But when we add a specific IP to the inbound rules of the load balancer, traffic does not reach the ingress. Please find the ingress definition below. Please help to solve the issue.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx-external-ingress
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  name: external-auth-oauth2
  namespace: cicd
spec:
  rules:
    - host: jenkins.xyz.com
      http:
        paths:
          - backend:
              serviceName: jenkins
              servicePort: 8080
            path: /generic-webhook-trigger/invoke
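As a hedged alternative to hand-editing the load balancer's inbound rules, client IPs can be restricted on the ingress controller's LoadBalancer Service itself with loadBalancerSourceRanges, which the AWS cloud provider keeps in sync with the ELB security group. A minimal sketch, where the Service name, namespace, selector and CIDR are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: nginx-external-ingress-controller   # hypothetical name of the controller Service
  namespace: ingress
spec:
  type: LoadBalancer
  # only these client CIDRs may reach the ELB; synced by the cloud provider
  loadBalancerSourceRanges:
    - 203.0.113.0/24
  selector:
    app: nginx-external-ingress
  ports:
    - name: https
      port: 443
      targetPort: 443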
There is an AWS EKS 1.22 cluster with Traefik 2.x and ExternalDNS configured to manage the DNS/Ingress records in Route53.
I've been using a Classic Load Balancer with the following traefik service config:
service:
  enabled: true
  type: LoadBalancer
  annotations:
    external-dns.alpha.kubernetes.io/hostname: traefik.mydomain.com
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: app=xxx
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-3:XXXX:certificate/XXXX
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
When I deploy Keycloak and try to access the Administration Console, it redirects to HTTPS by default, so it works since I only accept HTTPS on my CLB.
However, using an NLB with the following traefik service config, it redirects to HTTP and it doesn't work:
service:
  enabled: true
  type: LoadBalancer
  annotations:
    external-dns.alpha.kubernetes.io/hostname: traefik.mydomain.com
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: app=xxx
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-3:XXXX:certificate/XXXX
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
How can I solve this problem? But above all, why does this happen when changing the LB type?
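This likely happens because the CLB's HTTPS listener speaks HTTP to the backend and adds an X-Forwarded-Proto: https header, while the NLB's TLS listener forwards raw TCP, so Traefik sees a plain HTTP connection and Keycloak builds its redirects with http://. One hedged workaround, assuming the Keycloak route is only ever reached through the NLB's TLS listener, is a Traefik Middleware that forces the header and is then referenced from the IngressRoute's routes[].middlewares list:

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: force-https-proto
  namespace: traefik
spec:
  headers:
    customRequestHeaders:
      # tell the backend (Keycloak) that the original request was HTTPS
      X-Forwarded-Proto: https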
Is it possible to get HTTPS to work on the automatically assigned DNS name you get from the AWS load balancer when you deploy a service like this:
kind: Service
apiVersion: v1
metadata:
  name: gateway-svc
spec:
  selector:
    app: gateway
  type: LoadBalancer
  ports:
    - name: gateway-svc
      port: 80
      targetPort: 4000
I know you can use annotations and something like this:
metadata:
  name: gateway-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:region:<NUMBER>:certificate/c556ca29-ddbe-4983-b01b-ff7e9f2708ba
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
But the auto-assigned DNS name, which is something like http://<NUMBERS>-<NUMBERS>.<REGION>.elb.amazonaws.com/, is too long for ACM.
How can I get this working?
But registering the auto assigned dns that is something like http://<NUMBERS>-<NUMBERS>.<REGION>.elb.amazonaws.com/ is too long for the ACM.
Why are you trying to get an ACM cert for that domain?
ACM provides wildcard certificates, and you can use one for your own domain.
For adding the entry into a DNS service like Route53, you should check out ExternalDNS:
https://github.com/kubernetes-sigs/external-dns#running-locally
If your domain is xyz.com, you can get a *.xyz.com certificate and use it everywhere, as it's a wildcard, and attach it to any LoadBalancer service.
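Putting that together with the original Service, a hedged sketch (the hostname, region, account id and certificate id are placeholders) could look like:

kind: Service
apiVersion: v1
metadata:
  name: gateway-svc
  annotations:
    # external-dns creates this record in Route53 instead of relying on the ELB's generated name
    external-dns.alpha.kubernetes.io/hostname: gateway.xyz.com
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:<REGION>:<ACCOUNT>:certificate/<ID>
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  selector:
    app: gateway
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 4000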
I deployed Grafana using Helm and now it is running in a pod. I can access it if I proxy port 3000 to my laptop.
I'm trying to point a domain grafana.something.com to that pod so I can access it externally.
I have a domain in Route53 that I can attach to a load balancer (Application Load Balancer, Network Load Balancer, Classic Load Balancer). That load balancer can forward traffic from port 80 to port 80 on a group of nodes (let's leave port 443 for later).
I'm really struggling with setting this up. I'm sure there is something missing, but I don't know what.
A basic diagram would look like this, I imagine.
Internet
↓↓
Domain in route53 (grafana.something.com)
↓↓
Loadbalancer 80 to 80 (Application Load Balancer, Network Load Balancer, Classic Load Balancer)
I guess that LB would forward traffic on port 80 to the Ingress Controllers below (created when Grafana was deployed using Helm)
↓↓
Group of EKS worker nodes
↓↓
Ingress resource ?????
↓↓
Ingress Controllers - Created when Grafana was deployed using Helm in namespace test.
kubectl get svc grafana -n test
grafana Type:ClusterIP ClusterIP:10.x.x.x Port:80/TCP
apiVersion: v1
kind: Service
metadata:
  creationTimestamp:
  labels:
    app: grafana
    chart: grafana-
    heritage: Tiller
    release: grafana-release
  name: grafana
  namespace: test
  resourceVersion: "xxxx"
  selfLink:
  uid:
spec:
  clusterIP: 10.x.x.x
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 3000
  selector:
    app: grafana
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
↓↓
The Grafana pod is listening on port 3000. I can access it successfully after proxying port 3000 to my laptop.
Given that it seems you don't have an Ingress Controller installed, if you have the aws cloud-provider configured in your K8S cluster, you can follow this guide to install the Nginx Ingress Controller using Helm.
By the end of the guide you should have a load balancer created for your ingress controller, point your Route53 record to it and create an Ingress that uses your grafana service. Example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/app-root: /
    nginx.ingress.kubernetes.io/enable-access-log: "true"
  name: grafana-ingress
  namespace: test
spec:
  rules:
    - host: grafana.something.com
      http:
        paths:
          - backend:
              serviceName: grafana
              servicePort: 80
            path: /
The final traffic path would be:
Route53 -> ELB -> Ingress -> Service -> Pods
Adding 2 important suggestions here.
1) Following improvements to the Ingress API in Kubernetes 1.18,
a new ingressClassName field has been added to the Ingress spec that is used to reference the IngressClass that should be used to implement this Ingress.
Please consider switching to the ingressClassName field instead of the kubernetes.io/ingress.class annotation:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: grafana-ingress
  namespace: test
spec:
  ingressClassName: nginx # <-- Here
  rules:
    - host: grafana.something.com
      http:
        paths:
          - path: /
            backend:
              serviceName: grafana
              servicePort: 80
2) Consider using External-DNS for the integration between external DNS servers (check this example for AWS Route53) and Kubernetes Ingresses / Services.
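As an illustration of suggestion 2, these are roughly the container args an external-dns Deployment might use so that the grafana.something.com host from the Ingress is synced into Route53; the domain filter and owner id below are placeholders, and IAM access to the hosted zone is assumed:

args:
  - --source=ingress                # watch Ingress hosts (grafana.something.com)
  - --source=service                # also watch annotated Services
  - --provider=aws
  - --domain-filter=something.com   # only manage records in this zone
  - --policy=upsert-only            # create/update only, never delete records
  - --registry=txt
  - --txt-owner-id=my-eks-cluster   # hypothetical owner id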
I am new to k8s and exploring more on production-grade deployment.
We have a Python Django app which runs behind a NodePort service (say on port 9000). When I tried to expose it using a Kubernetes LoadBalancer service with an ELB,
- it works by running 80 and 443 separately, whereas 80 to 443 redirection is not supported in the AWS classic ELB (see the redirect sketch after the manifest below).
Then I switched to the aws alb ingress controller; the problem I faced was that
- the ALB does not work with the node port and only with the http and https ports.
Any thoughts would be much appreciated!!
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ABC
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/subnets: 'subnet-1, subnet-2'
    alb.ingress.kubernetes.io/security-group: sg-ABC
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/success-codes: "200"
  labels:
    name: ABC
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: ABC
              servicePort: 80
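For the 80 to 443 redirect that the classic ELB could not do, a hedged sketch using the older aws-alb-ingress-controller's annotation-based redirect action is shown below; the certificate ARN is a placeholder, and the ssl-redirect backend refers to the action annotation rather than a real Service:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ABC
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<REGION>:<ACCOUNT>:certificate/<ID>
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
    - http:
        paths:
          # the redirect action must come first so plain HTTP requests are redirected
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: ABC
              servicePort: 80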
Thank you @sulabh and @Fahri, it works perfectly now. I went through the docs again and corrected my mistake.
The issue was with the route path in the ALB.
The setup is:
a python-django-uwsgi app in pods, exposed as a NodePort service, with the aws ingress controller providing the ALB.
Cheers!