I installed Istio using istioctl on an AWS EKS cluster.
**Version**
EKS: 1.21
Istio: 1.11.1
**Command**
```sh
istioctl install -y -f istio-manifest.yaml
```
**istio-manifest.yaml**
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    accessLogFile: /dev/stdout
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          hpaSpec:
            minReplicas: 1
            maxReplicas: 3
          serviceAnnotations:
            service.beta.kubernetes.io/aws-load-balancer-name: 'test-istio'
            service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-xxxxx,subnet-xxxxx"
            service.beta.kubernetes.io/aws-load-balancer-type: "external"
            service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
            service.beta.kubernetes.io/aws-load-balancer-internal: "true"
            service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
            service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
            service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-1:xxxxx:certificate/xxxxx"
            service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
            service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: "preserve_client_ip.enabled=false"
          service:
            ports:
              - name: status
                protocol: TCP
                port: 15021
                targetPort: 15021
              - name: https
                protocol: TCP
                port: 443
                targetPort: 8443
              - name: grpc
                protocol: TCP
                port: 9090
                targetPort: 9090
```
An NLB was created successfully. After Istio was installed, I changed `preserve_client_ip.enabled` from `false` to `true` and ran the installation command again. However, the attribute on the NLB did not change; it is still `false`. The AWS Load Balancer Controller is, of course, installed correctly.
How can I change the attribute after Istio has been installed?
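A quick way to narrow this down is to check whether the second `istioctl install` actually updated the annotation on the generated Service, since the AWS Load Balancer Controller reconciles the target-group attribute from that annotation. A sketch, assuming the default Service name and namespace:

```sh
# Sketch: confirm the annotation on the generated Service reflects the new
# value (default istio-ingressgateway name and istio-system namespace assumed).
kubectl -n istio-system get svc istio-ingressgateway -o yaml \
  | grep aws-load-balancer-target-group-attributes
```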
I have an EKS Kubernetes cluster with the AWS Load Balancer Controller and Argo CD installed. I'm creating an Application Load Balancer based on the Argo CD documentation here.
Basically, I'm creating a NodePort service that receives traffic from the load balancer, and an Ingress that will create the load balancer (using the AWS Load Balancer Controller).
The ingress code is this one:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
    # Use this annotation (which must match a service name) to route traffic to HTTP2 backends.
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    # ALB annotations
    kubernetes.io/ingress.class: 'alb'
    alb.ingress.kubernetes.io/scheme: 'internet-facing'
    alb.ingress.kubernetes.io/target-type: 'instance'
    alb.ingress.kubernetes.io/load-balancer-name: 'test-argocd'
    alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:us-east-1:1234567:certificate/longcertcode'
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
    # Health Check
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
  name: argocd
  namespace: argocd
spec:
  rules:
    - host: argocd.argoproj.io
      http:
        paths:
          - path: /
            backend:
              service:
                name: argogrpc
                port:
                  number: 443
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - argocd.argoproj.io
  defaultBackend:
    service:
      name: argogrpc
      port:
        number: 443
```
And that creates a Load Balancer as expected.
I'm creating the service with this:
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
    - name: "443"
      port: 443
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort
```
The issue here is that the health check is failing on the target group.
If I change the backend protocol version to GRPC:
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: GRPC
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
    - name: "443"
      port: 443
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort
```
Then the health check passes, but I get a 464 error in Chrome.
This is what the AWS documentation says about this error, but it doesn't help clarify why I'm getting it.
So the question is: how do I create an Application Load Balancer for my Argo CD using the AWS Load Balancer Controller that actually works? According to the documentation, it should work in both cases.
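Since the two failure modes differ (failing health checks with HTTP2, a 464 in the browser with GRPC), it may help to look at what the target group itself reports. A sketch; the target group ARN is a placeholder that would be copied from the ALB the controller created:

```sh
# Sketch: show the health state and failure reason of each registered target.
# The ARN below is a placeholder.
aws elbv2 describe-target-health \
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:1234567:targetgroup/EXAMPLE/0123456789abcdef
```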
The Ingress is not routing traffic to the backend service port, and the connection times out.
Any idea how to troubleshoot? There are no errors in the ingress controller logs.
```sh
curl -v -L https://www.example.com
* Trying 172.20.xxx.xx:443...
```
I've checked the load balancer and its health is fine. I'm not sure why it's not routing traffic to the backend port, or what else I could check to figure out what the issue is.
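One thing that can be checked (a sketch, using the backend Service name and port from the manifests below) is whether the backend answers at all when the NLB and the ingress are bypassed:

```sh
# Sketch: port-forward straight to the backend Service to confirm the pod
# answers on 8080, independent of the NLB and the ingress controller.
kubectl -n default port-forward svc/httpd 8080:8080
# In a second terminal:
curl -v http://localhost:8080/
```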
Here is my AWS ingress controller.yaml config:
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:xxx"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - appProtocol: http
      name: http
      port: 80
      protocol: TCP
      targetPort: http
    - appProtocol: https
      name: https
      port: 443
      protocol: TCP
      targetPort: http
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
```
And here is the service config, service.yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2022-08-06T12:16:17Z"
  labels:
    app: manageiq
  name: httpd
  namespace: default
spec:
  clusterIP: 100.69.xxx.xx
  clusterIPs:
    - 100.69.xxx.xx
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    name: httpd
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
```
Ingress.yaml:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2022-08-06T12:47:32Z"
  generation: 2
  labels:
    app: manageiq
  name: httpd
  namespace: default
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - backend:
              service:
                name: httpd
                port:
                  number: 8080
            path: /
            pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
      - hostname: adc1d30f0db264d7ea54aed4dcdc12ec-atest.elb.ap-south-1.amazonaws.com
```
Figured it out. The load balancer was set to internal because I was following some documentation. The annotation needed to be:

```yaml
service.beta.kubernetes.io/aws-load-balancer-internal: "false"
```

I had to delete the service and recreate it so that a new load balancer would get created.
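For reference, the recreate step was roughly this (a sketch; the manifest filename is assumed):

```sh
# Sketch: deleting the Service and re-applying the manifest makes the cloud
# provider provision a fresh, internet-facing load balancer.
kubectl -n ingress-nginx delete svc ingress-nginx-controller
kubectl apply -f ingress-controller.yaml   # filename assumed
```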
Why are the default load balancer ports 80 and 443 treated as TCP ports? I want to test stickiness as shown in the AWS docs, either through a YAML file or through the AWS console.
I was using the NGINX ingress and moved to the default load balancer to test stickiness, but I see the error "Stickiness options not available for TCP protocols".
I even tried specifying the protocol https, but it isn't accepted; only "SCTP", "TCP", and "UDP" are allowed.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd
  labels:
    app: httpd-service
  namespace: test-web-dev
spec:
  #type: LoadBalancer
  selector:
    app: httpd
  ports:
    - name: port-80
      port: 80
      targetPort: 80
    - name: port-443
      port: 443
      targetPort: 443
    - name: port-1234
      port: 1234
      protocol: TCP
      targetPort: 1234
```
When I try the Ingress, I disable the Service type LoadBalancer above.
nginx-ingress-lb-service.yml:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  1234: "test-web-dev/httpd:1234"
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
    - name: port-1234
      port: 1234
      protocol: TCP
      targetPort: 1234
---
```
Stickiness requires a listener that operates at layer 7 of the OSI model, which in the case of a CLB is provided by the http and https listeners.
Since you are using a TCP listener, which operates at layer 4, stickiness is not supported. Thus, if you want to use sticky sessions, you must change to http or https listeners.
UDP and SCTP are not valid listeners for a CLB; it only supports TCP, HTTP, HTTPS, and SSL.
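If it helps, here is a minimal sketch of that change for the in-tree AWS cloud provider (the ACM certificate ARN is a placeholder; the Service name and ports are taken from the manifest above). With the backend protocol set to http, the CLB creates layer-7 HTTP/HTTPS listeners, and stickiness can then be enabled as described in the AWS docs or via the console:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd
  namespace: test-web-dev
  annotations:
    # "http" makes the CLB create HTTP (layer 7) listeners instead of raw TCP ones.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    # Placeholder ARN; with it, port 443 becomes an HTTPS listener that
    # terminates TLS at the load balancer.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REGION:ACCOUNT:certificate/ID"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: httpd
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      # TLS is terminated at the CLB, so traffic is forwarded to the plain HTTP port.
      targetPort: 80
```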
I want to assign an external IP to the Ingress Gateway of Istio.
I want to use the Istio Operator Spec.
So far, I have this:
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istiocontrolplane
spec:
  profile: demo
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        loadBalancerIP: 1.2.3.4
  addonComponents:
    grafana:
      enabled: false
    prometheus:
      enabled: true
```
It is automatically assigning an IP to the Service:

```sh
kubectl get svc -n istio-system
```

It is not showing 1.2.3.4 as the EXTERNAL-IP.
Is it only possible if I actually own this IP in GCP?
First you have to create a static IP address resource in GCP, and then you can reference that IP in the YAML below.
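A minimal sketch of that first step; the address name and region are placeholders:

```sh
# Sketch: reserve a regional static IP and read back its value.
gcloud compute addresses create istio-ingress-ip --region us-central1
gcloud compute addresses describe istio-ingress-ip --region us-central1 \
    --format='value(address)'
```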
```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
      - enabled: true
        k8s:
          overlays:
            - api_version: autoscaling/v1
              kind: HorizontalPodAutoscaler
              name: istio-ingressgateway
              patches:
                - path: spec.minReplicas
                  value: 3
                - path: spec.maxReplicas
                  value: 5
                - path: spec.metrics[0].resource.targetAverageUtilization
                  value: 60
          service:
            loadBalancerIP: XXX.XXX.XXX.XXX
            loadBalancerSourceRanges: []
            ports:
              - name: status-port
                port: 15020
                targetPort: 15020
              - name: http2
                port: 80
                targetPort: 80
              - name: https
                port: 443
              - name: tcp
                port: 31400
                targetPort: 31400
              - name: tls
                port: 15443
                targetPort: 15443
        label:
          app: istio-ingressgateway
          istio: ingressgateway
        name: istio-ingressgateway
```
I am working on Kubernetes, and our pods are running on AWS. I am creating a Service with pre-defined LoadBalancer specifications, but Kubernetes is still adding an extra security group to the load balancer. How can I tell it not to do that? Thank you.
service.yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: akeneo-service
    app.kubernetes.io/instance: akeneo-service-instance
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/component: backend
    app.kubernetes.io/managed-by: kubectl
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "ssl"
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: OUR_ARN
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-0d3a65fc39e47e3cf"
  name: akeneo-service
spec:
  selector:
    app: akeneo-service
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 443
    - name: http
      port: 80
      targetPort: 80
```
Any help would be nice. :-)
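Not a definitive answer, but one thing worth checking: with the legacy in-tree AWS cloud provider, `service.beta.kubernetes.io/aws-load-balancer-extra-security-groups` adds security groups on top of the one Kubernetes creates, whereas `service.beta.kubernetes.io/aws-load-balancer-security-groups` replaces it. A sketch of the latter, reusing the security group ID from the manifest above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: akeneo-service
  annotations:
    # Sketch: with the legacy in-tree AWS cloud provider, this annotation
    # replaces the auto-created security group instead of adding to it.
    service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-0d3a65fc39e47e3cf"
spec:
  type: LoadBalancer
  selector:
    app: akeneo-service
  ports:
    - name: https
      port: 443
      targetPort: 443
    - name: http
      port: 80
      targetPort: 80
```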