I deployed Istio in my RKE2 cluster on AWS EC2, and everything works fine with the istio-ingressgateway service set as a NodePort; we can reach the application without any issue.
When I change the service type from NodePort to LoadBalancer, the external IP address permanently stays in <pending>.
The RKE2 cluster is set up to work with Istio, but the cloud provider integration was never configured because of internal policies.
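This is what kubectl reports for the service (output illustrative and abbreviated; IPs redacted as in the manifest below):

$ kubectl -n istio-system get svc istio-ingressgateway
NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
istio-ingressgateway   LoadBalancer   XX.XX.XX.XX   <pending>     80:31380/TCP,443:31390/TCP   ...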
This is my ingressgateway service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/publicEndpoints: "null"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-mysg"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  labels:
    app: istio-ingressgateway
    install.operator.istio.io/owning-resource: unknown
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio: ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.14.1
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
spec:
  clusterIP: XX.XX.XX.XX
  clusterIPs:
  - XX.XX.XX.XX
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: status-port
    nodePort: 30405
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 31380
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 31390
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: tcp
    nodePort: 31400
    port: 31400
    protocol: TCP
    targetPort: 31400
  - name: tls
    nodePort: 32065
    port: 15443
    protocol: TCP
    targetPort: 15443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
and this is the current configuration of my NLB.
For now the load balancer I use has only two listeners set:
80 mapped to the target group pointing to TCP port 31380
443 mapped to the target group pointing to TCP port 31390
I also tried target groups pointing to TCP port 8080 for port 80 and TCP port 8443 for port 443, without success.
The security groups have all the ports used by Istio open for the CIDR and the VPC.
Any help is appreciated.
Related
I am trying to migrate my CLB to an ALB. I know there is a direct option in the AWS load balancer UI console to do a migration, but I don't want to use that. I have a service file which deploys a classic load balancer on EKS using kubectl.
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-type: classic
  name: helloworld
spec:
  ports:
  - name: https
    port: 8443
    protocol: TCP
    targetPort: 8080
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: helloworld
  type: LoadBalancer
I want to convert it into an ALB. I tried the following approach, but it did not work.
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  ports:
  - name: https
    port: 8443
    protocol: TCP
    targetPort: 8080
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: helloworld
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloworld
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/tags: Environment=dev,Team=app
spec:
  rules:
  - host: "*.amazonaws.com"
    http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: helloworld
            port:
              number: 8080
It has not created any load balancer. When I run kubectl get ingress, it shows the Ingress but it has no address. What am I doing wrong here?
Your Ingress file seems to be correct.
For an ALB to be created automatically from an Ingress, you should install the AWS Load Balancer Controller, which manages AWS Elastic Load Balancers for a Kubernetes cluster.
You can follow its installation guide and then verify that it is installed correctly:
kubectl get deployment -n kube-system aws-load-balancer-controller
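If it is not installed yet, a minimal install sketch with Helm; the cluster name and the pre-created IAM service account are placeholders to adapt to your environment:

helm repo add eks https://aws.github.io/eks-charts
helm repo update
# clusterName and the service account name are placeholders; the IAM/IRSA
# setup for the controller must already exist
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller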
Then apply your manifests:
kubectl apply -f service-ingress.yaml
and verify that your ALB, target groups, etc. are created by watching the controller logs:
kubectl logs deploy/aws-load-balancer-controller -n kube-system --follow
Problem: after restarting the ingress, the SSL certificates are removed and you constantly have to add them back. After some searching, I found out that you can configure SSL certificates in the YAML configuration file, but playing with the configs I did not manage to achieve the desired result.
YAML with the ingress service config:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...
    nginx.ingress.kubernetes.io/aws-load-balancer-backend-protocol: TCP
    nginx.ingress.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    name: nginx-ingress
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 31823
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    nodePort: 31822
    protocol: TCP
This is what the working settings in AWS look like. How can I achieve this result through the configuration file?
The solution was to remove this line:
nginx.ingress.kubernetes.io/aws-load-balancer-backend-protocol: TCP
and change the targetPort value of https from 443 to 80:
  - name: https
    port: 443
    targetPort: 80
    nodePort: 31822
    protocol: TCP
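As a side note, the AWS cloud provider reads these settings from the service.beta.kubernetes.io/ annotation prefix rather than nginx.ingress.kubernetes.io/, so the equivalent spelling of the TLS annotations would be (a sketch; the certificate ARN stays your own, truncated here as in the original):

  annotations:
    # standard AWS cloud provider annotation keys
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"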
Is it possible to load balance multiple services using a single AWS load balancer? If that's not possible, I guess I could just use a Node.js proxy to forward from the httpd pod to the tomcat pod and hope it doesn't lag...
Either way, which load balancer is recommended for multi-port services? CLB doesn't support multiple ports, and ALB doesn't support multiple ports for a single / path. So I guess NLB is the right thing to implement?
I'm trying to cut cost and move to k8s, but I need to know if I'm choosing the right service. Tomcat and httpd are both part of a single prod website but can't do path-based routing.
Httpd pod service:
apiVersion: v1
kind: Service
metadata:
  name: httpd-service
  labels:
    app: httpd-service
  namespace: test1-web-dev
spec:
  selector:
    app: httpd
  ports:
  - name: port_80
    protocol: TCP
    port: 80
    targetPort: 80
  - name: port_443
    protocol: TCP
    port: 443
    targetPort: 443
  - name: port_1860
    protocol: TCP
    port: 1860
    targetPort: 1860
Tomcat pod service:
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service
  namespace: test1-web-dev
spec:
  selector:
    app: tomcat
  ports:
  - name: port_8080
    protocol: TCP
    port: 8080
    targetPort: 8080
  - name: port_1234
    protocol: TCP
    port: 1234
    targetPort: 1234
  - name: port_8222
    protocol: TCP
    port: 8222
    targetPort: 8222
It's done like this: install an Ingress controller (e.g. ingress-nginx) in your cluster; it's going to be your load balancer facing the outside world.
Then configure Ingress resource(s) to drive traffic to your services (as many as you want). That way you have a single Ingress controller (which means a single load balancer) per cluster.
https://kubernetes.io/docs/concepts/services-networking/ingress/
You can do this using an Ingress controller backed by a load balancer. With a single / path, you can have the Ingress tell the backing load balancer to route requests based on the Host header.
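For example, a sketch of host-based routing to the two services above; the hostnames are placeholders, and ingress-nginx is assumed to be installed:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-routing
  namespace: test1-web-dev
spec:
  ingressClassName: nginx   # assumes ingress-nginx is the installed controller
  rules:
  - host: www.example.com    # placeholder hostname for the httpd site
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpd-service
            port:
              number: 80
  - host: app.example.com    # placeholder hostname for the tomcat app
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-service
            port:
              number: 8080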
I am using Istio version 1.7.3 in my environment and have installed the ingress and egress gateways with the demo profile, using the command below:
istioctl install --set profile=demo
I have customized my ingress config so that a CLB is created and attached to the AWS certificate once the ingress is up, and that is working fine; please review the config below:
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
  labels:
    app: istio-ingressgateway
    release: istio
    istio: ingressgateway
spec:
  type: LoadBalancer
  selector:
    app: istio-ingressgateway
  ports:
  - name: status-port
    port: 15021
    targetPort: 15021
  - name: http2
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
  - name: tcp
    port: 31400
    targetPort: 31400
  - name: tls
    port: 15443
    targetPort: 15443
Now I want to create an ALB instead of the CLB, and set an access-log S3 bucket to collect the ALB's logs.
Can I tweak the above configuration so that it creates an ALB with S3 access logs enabled, or what should I do when using the istioctl command with the demo profile to change it to an ALB with S3 access logs enabled instead of a CLB?
Is there any sample config or example somewhere that can help, or that you can point me to?
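For reference (this is not done through istioctl itself): the AWS Load Balancer Controller can create an ALB from an Ingress and enable S3 access logs through load-balancer attributes. A sketch, assuming the controller is installed; the bucket name and prefix are placeholders, and the gateway Service would then typically stay NodePort behind the ALB:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: istio-ingressgateway-alb
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # bucket name and prefix are placeholders; the bucket policy must allow the ALB to write
    alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=access-log-s3-bucket-name,access_logs.s3.prefix=istio
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: istio-ingressgateway
            port:
              number: 80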
I just deployed nginx on a K8s node in a cluster; the master and worker communicate using internal IP addresses.
I can curl http://worker_ip:8080 (nginx) from the internal network, but how can I make it accessible from an external/internet network?
Or should I use a public IP as my node host?
Update the service type to NodePort and grab the nodePort that is assigned to the service.
You should then be able to access nginx using host:nodePort.
See below for reference:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
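Once the Service is applied, a quick way to find the assigned nodePort and test from outside; the node IP and port shown are placeholders:

kubectl get svc my-nginx
# the PORT(S) column shows mappings like 8080:31234/TCP; 31234 is the nodePort

# then, from a machine that can reach the node:
curl http://<node-ip>:31234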