Problem: after restarting the ingress, the SSL certificates are removed and you constantly have to reinstall them. After some searching, I found out that you can configure SSL certificates somewhere in the YAML configuration file, but playing with the configs I did not manage to achieve the desired result.
YAML with the ingress Service config:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...
    nginx.ingress.kubernetes.io/aws-load-balancer-backend-protocol: TCP
    nginx.ingress.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    name: nginx-ingress
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 31823
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      nodePort: 31822
      protocol: TCP
These are the working settings as they look in AWS. How can I achieve this result through the configuration file?
The solution was to remove this line:
nginx.ingress.kubernetes.io/aws-load-balancer-backend-protocol: TCP
and change the https targetPort value from 443 to 80:
- name: https
  port: 443
  targetPort: 80
  nodePort: 31822
  protocol: TCP
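Putting both changes together (the backend-protocol annotation removed, the https targetPort changed to 80), the full working Service config looks like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...
    nginx.ingress.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    name: nginx-ingress
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 31823
      protocol: TCP
    - name: https
      port: 443
      targetPort: 80   # TLS terminates at the load balancer; plain HTTP goes to the backend
      nodePort: 31822
      protocol: TCP
```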
Related
I deployed Istio in my RKE2 cluster on AWS EC2, and everything works fine with the istio-ingress service set as a NodePort; we can communicate with the application without any issue.
When I change the service from NodePort to LoadBalancer, the external IP address permanently stays in <pending>.
The RKE2 cluster is set up to work with Istio, but the cloud provider was never configured because of internal policies.
This is my ingressgateway service
apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/publicEndpoints: "null"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: '"true"'
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: '"false"'
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-security-groups: '"sg-mysg"'
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  labels:
    app: istio-ingressgateway
    install.operator.istio.io/owning-resource: unknown
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio: ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.14.1
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
spec:
  clusterIP: XX.XX.XX.XX
  clusterIPs:
    - XX.XX.XX.XX
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: status-port
      nodePort: 30405
      port: 15021
      protocol: TCP
      targetPort: 15021
    - name: http2
      nodePort: 31380
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: https
      nodePort: 31390
      port: 443
      protocol: TCP
      targetPort: 8443
    - name: tcp
      nodePort: 31400
      port: 31400
      protocol: TCP
      targetPort: 31400
    - name: tls
      nodePort: 32065
      port: 15443
      protocol: TCP
      targetPort: 15443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
and this is the current configuration of my NLB
For now the load balancer I use has only two ports set:
80 mapped to the target group pointing to TCP port 31380
443 mapped to the target group pointing to the TCP port 31390
I also tried target groups pointing to TCP port 8080 for port 80 and TCP port 8443 for port 443, without success.
The security groups have all the ports used by Istio open for the CIDR and the VPC.
Any help is appreciated
Is it possible to load balance multiple services using a single AWS load balancer? If that's not possible, I guess I could just use a Node.js proxy to forward from the httpd pod to the tomcat pod and hope it doesn't lag...
Either way, which load balancer is recommended for multi-port services? CLB doesn't support multiple ports, and ALB doesn't support multiple ports for a single / path. So I guess NLB is the right thing to implement?
I'm trying to cut costs and move to k8s, but I need to know if I'm choosing the right service. Tomcat and httpd are both part of a single prod website but can't do path-based routing.
Httpd pod service:
apiVersion: v1
kind: Service
metadata:
  name: httpd-service
  labels:
    app: httpd-service
  namespace: test1-web-dev
spec:
  selector:
    app: httpd
  ports:
    - name: port_80
      protocol: TCP
      port: 80
      targetPort: 80
    - name: port_443
      protocol: TCP
      port: 443
      targetPort: 443
    - name: port_1860
      protocol: TCP
      port: 1860
      targetPort: 1860
Tomcat pod service:
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service
  namespace: test1-web-dev
spec:
  selector:
    app: tomcat
  ports:
    - name: port_8080
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: port_1234
      protocol: TCP
      port: 1234
      targetPort: 1234
    - name: port_8222
      protocol: TCP
      port: 8222
      targetPort: 8222
It's done like this: install an Ingress controller (e.g. ingress-nginx) into your cluster; it becomes your load balancer facing the outside world.
Then configure Ingress resource(s) to drive traffic to your services (as many as you want). That way you have a single Ingress controller (which means a single load balancer) per cluster.
https://kubernetes.io/docs/concepts/services-networking/ingress/
You can do this using an Ingress controller backed by a load balancer; to keep a single / path, you can make the Ingress tell the backing load balancer to route requests based on the Host header.
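As a sketch of that approach (the hostnames www.example.com and app.example.com are placeholders, and the annotation assumes the ingress-nginx controller suggested above), a single Ingress routing by Host header to the two services from the question could look like:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: multi-service-ingress
  namespace: test1-web-dev
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    # Requests with Host: www.example.com go to httpd
    - host: www.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: httpd-service
              servicePort: 80
    # Requests with Host: app.example.com go to tomcat
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: tomcat-service
              servicePort: 8080
```

Both hosts are served by the one load balancer in front of the Ingress controller; only the Host header decides which backend Service receives the request.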
I am using Istio version 1.7.3 in my env and have installed the ingress and egress gateways using the demo profile with the command below:
istioctl install --set profile=demo
I have customized my ingress config so that a CLB is created and attached to the AWS cert once the ingress is created, and that is working fine; please review the config below:
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
  labels:
    app: istio-ingressgateway
    release: istio
    istio: ingressgateway
spec:
  type: LoadBalancer
  selector:
    app: istio-ingressgateway
  ports:
    - name: status-port
      port: 15021
      targetPort: 15021
    - name: http2
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
    - name: tcp
      port: 31400
      targetPort: 31400
    - name: tls
      port: 15443
      targetPort: 15443
Now I want to create an ALB instead of the CLB, and set an access-log S3 bucket name to collect the ALB's logs.
Can we tweak the above configuration so that it overrides the default and creates an ALB with S3 bucket access logs enabled? Or what should I do when using the istioctl command with the demo profile to change it to an ALB with S3 access logs enabled instead of a CLB?
Do we have any sample configs or examples somewhere that can help, or that you can point me to?
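I'm not aware of a way to make a Service of type LoadBalancer produce an ALB by itself; ALBs are normally created by the AWS Load Balancer Controller from an Ingress resource. As a hedged sketch (the bucket name my-istio-lb-logs is a placeholder, and option B assumes the AWS Load Balancer Controller is installed in the cluster), access logs can be enabled either on the existing CLB via Service annotations, or on an ALB via Ingress annotations:

```yaml
# Option A: keep the CLB and enable its access logs
# (legacy in-tree cloud provider annotations, added to the existing Service):
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "5"
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "my-istio-lb-logs"
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "istio"
---
# Option B: front the gateway with an ALB created by the AWS Load Balancer Controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: istio-ingressgateway-alb
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-istio-lb-logs,access_logs.s3.prefix=istio
spec:
  defaultBackend:
    service:
      name: istio-ingressgateway
      port:
        number: 80
```

In option B the istio-ingressgateway Service would no longer need to be of type LoadBalancer; the ALB (with access logging as a load balancer attribute) sits in front of it instead.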
I am working on a Kubernetes application in which we are running an nginx server. There are two issues we are facing currently.
One is related to health checks. I would like to add health checks which check the container on port 443 over TCP, but Kubernetes is somehow doing that over SSL, causing AWS to mark the containers out of service.
Secondly, SSL traffic is getting terminated on the ELB, while the ELB is still talking to the container on port 443. I have already added a self-signed certificate inside nginx on the container. We redirect from HTTP to HTTPS internally, so anything on port 80 is of no use to us. What am I doing wrong?
service.yaml :
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: service-name
    app.kubernetes.io/instance: service-name-instance
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/component: backend
    app.kubernetes.io/managed-by: kubectl
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: OUR ARN
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "TCP"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
  name: service-name
spec:
  selector:
    app: service-name
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
I have a k8s app (a web API) which I first exposed via NodePort (I used port forwarding to run it, and it works as expected),
e.g. localhost:8080/api/v1/users.
Then I created a Service of type LoadBalancer to expose it externally, which also works as expected,
e.g. http://myhost:8080/api/v1/users.
apiVersion: v1
kind: Service
metadata:
  name: fzr
  labels:
    app: fzr
    tier: service
spec:
  type: LoadBalancer
  ports:
    - port: 8080
  selector:
    app: fzr
Now we need to make it secure, and after reading about this topic we have decided to use an Ingress.
This is what I did:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ctr-ingress
  selector:
    app: fzr
spec:
  ports:
    - name: https
      port: 443
      targetPort: https
Now I want to access it like
https://myhost:443/api/v1/users
This is not working; I'm not able to reach the application over port 443 as HTTPS. Please advise.
It looks to me like you are using a YAML template for a Service to deploy your ingress, but not correctly. targetPort should be a numeric port, and in any case I don't think "https" is a correct value (I might be wrong though).
Something like this:
apiVersion: v1
kind: Service
metadata:
  name: fzr-ingress
spec:
  type: NodePort
  selector:
    app: fzr
  ports:
    - protocol: TCP
      port: 443
      targetPort: 8080
Now you have a NodePort service listening on 443 and forwarding the traffic to your fzr pods listening on port 8080.
However, the fact that you are listening on port 443 does nothing by itself to secure your app. To encrypt the traffic you need a TLS certificate, which you have to make available to the Ingress as a secret.
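As a sketch (the base64 payloads are placeholders for your real certificate and key), the secret holding that certificate, named myhosts-tls to match the Ingress below, is a kubernetes.io/tls secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myhosts-tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi...  # base64-encoded PEM certificate (placeholder)
  tls.key: LS0tLS1CRUdJTi...  # base64-encoded PEM private key (placeholder)
```

The same secret can also be created from PEM files with kubectl create secret tls, which base64-encodes them for you.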
If this seems somewhat complicated (because it is), you could look into deploying an nginx ingress controller from a Helm chart.
In any case your ingress yaml would look something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: gcs-ingress
  namespace: default
spec:
  rules:
    - host: myhost
      http:
        paths:
          - backend:
              serviceName: fzr
              servicePort: 443
            path: /api/v1/users
  tls:
    - hosts:
        - myhost
      secretName: myhosts-tls
More info on how to configure this here