AWS/Kubernetes - Stickiness options not available for TCP protocols - amazon-web-services

Why are the default load balancer ports 80 and 443 considered TCP ports? I want to test stickiness as shown in the AWS docs, either through a YAML file or through the AWS console.
I was using the nginx ingress and moved to the default load balancer to test stickiness, but I see the error "Stickiness options not available for TCP protocols".
I even tried specifying the protocol https, but it isn't accepted; only "SCTP", "TCP" and "UDP" are allowed.
apiVersion: v1
kind: Service
metadata:
  name: httpd
  labels:
    app: httpd-service
  namespace: test-web-dev
spec:
  #type: LoadBalancer
  selector:
    app: httpd
  ports:
    - name: port-80
      port: 80
      targetPort: 80
    - name: port-443
      port: 443
      targetPort: 443
    - name: port-1234
      port: 1234
      protocol: TCP
      targetPort: 1234
When I try the ingress, I disable the LoadBalancer service type above.
nginx-ingress-lb-service.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  1234: "test-web-dev/httpd:1234"
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
    - name: port-1234
      port: 1234
      protocol: TCP
      targetPort: 1234
---

Stickiness requires a listener that operates at layer 7 of the OSI model, which in the case of a CLB is provided by the HTTP and HTTPS listeners.
Since you are using a TCP listener, which operates at layer 4, stickiness is not supported. Thus, if you want to use sticky sessions, you must change to HTTP or HTTPS listeners.
UDP and SCTP are not valid listeners for a CLB; it only supports TCP, SSL, HTTP and HTTPS.
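For reference, a minimal sketch of the suggested change, assuming the in-tree AWS cloud provider and a Classic Load Balancer: setting the backend-protocol annotation to http makes the provisioned listeners HTTP (layer 7) instead of TCP, after which the stickiness options become available on those listeners (for example from the AWS console). The names are taken from the question above; only the annotation is new.
apiVersion: v1
kind: Service
metadata:
  name: httpd
  namespace: test-web-dev
  annotations:
    # Ask the cloud provider for HTTP (layer 7) listeners on the CLB instead of
    # TCP ones; this is what makes the stickiness options available.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: httpd
  ports:
    - name: port-80
      port: 80
      targetPort: 80
    - name: port-443
      port: 443
      targetPort: 443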

Related

ALB for Argo CD on kubernetes using AWS Load Balancer Controller (HTTP2)

I have an EKS Kubernetes cluster with the AWS Load Balancer Controller and Argo CD installed. I'm creating an Application Load Balancer based on the Argo CD documentation here.
Basically, I'm creating a NodePort service that receives traffic from the load balancer, and an ingress that will create the load balancer (using AWS Load Balancer Controller).
The ingress code is this one:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
    # Use this annotation (which must match a service name) to route traffic to HTTP2 backends.
    alb.ingress.kubernetes.io/conditions.argogrpc: |
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    # ALB annotations
    kubernetes.io/ingress.class: 'alb'
    alb.ingress.kubernetes.io/scheme: 'internet-facing'
    alb.ingress.kubernetes.io/target-type: 'instance'
    alb.ingress.kubernetes.io/load-balancer-name: 'test-argocd'
    alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:us-east-1:1234567:certificate/longcertcode'
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
    # Health Check
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
  name: argocd
  namespace: argocd
spec:
  rules:
    - host: argocd.argoproj.io
      http:
        paths:
          - path: /
            backend:
              service:
                name: argogrpc
                port:
                  number: 443
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - argocd.argoproj.io
  defaultBackend:
    service:
      name: argogrpc
      port:
        number: 443
And that creates a Load Balancer as expected.
I'm creating the service with this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
    - name: "443"
      port: 443
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort
The issue here is that the health check is failing on the Target Group.
If I change the backend protocol version to GRPC:
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: GRPC
  labels:
    app: argogrpc
  name: argogrpc
  namespace: argocd
spec:
  ports:
    - name: "443"
      port: 443
      protocol: TCP
      targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: NodePort
Then the health check passes, but I get a 464 error in Chrome.
This is what the AWS documentation says about this error, but it doesn't help clarify why I'm getting it.
So the question is: how do I create an Application Load Balancer for my Argo CD, using the AWS Load Balancer Controller, that works? According to the documentation, it should work in both cases.

Kubernetes Ingress not routing traffic to backend port

The Ingress is not routing traffic to the backend service port; the connection times out.
Any idea how to troubleshoot this? There are no errors in the ingress controller logs.
curl -v -L https://www.example.com
* Trying 172.20.xxx.xx:443...
I've checked the load balancer and its health is fine. I'm not sure why it's not routing traffic to the backend port, or what else I could check to figure out what the issue is.
Here is my AWS ingress controller.yaml config:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:xxx"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - appProtocol: http
      name: http
      port: 80
      protocol: TCP
      targetPort: http
    - appProtocol: https
      name: https
      port: 443
      protocol: TCP
      targetPort: http
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
And here is the service config, service.yaml:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2022-08-06T12:16:17Z"
  labels:
    app: manageiq
  name: httpd
  namespace: default
spec:
  clusterIP: 100.69.xxx.xx
  clusterIPs:
    - 100.69.xxx.xx
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    name: httpd
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  creationTimestamp: "2022-08-06T12:47:32Z"
  generation: 2
  labels:
    app: manageiq
  name: httpd
  namespace: default
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - backend:
              service:
                name: httpd
                port:
                  number: 8080
            path: /
            pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
      - hostname: adc1d30f0db264d7ea54aed4dcdc12ec-atest.elb.ap-south-1.amazonaws.com
Figured it out. The load balancer was set to internal; I was following some documentation.
service.beta.kubernetes.io/aws-load-balancer-internal: "false"
I had to delete the services and recreate them so that a new load balancer would be created.
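For completeness, a sketch of the relevant part of the controller Service after that fix; only the internal annotation changes from the controller manifest above, and, as the answer notes, the Service has to be deleted and re-applied so that a new, internet-facing load balancer is provisioned.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # Changed from "true": provision an internet-facing load balancer.
    service.beta.kubernetes.io/aws-load-balancer-internal: "false"
spec:
  type: LoadBalancer
  # ports and selector unchanged from the controller manifest above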

Kubernetes, AWS : Service creating extra Security-Group per service

I am working with Kubernetes, where our pods run on AWS. I am creating a service with pre-defined LoadBalancer specifications, but Kubernetes is still adding an extra security group (SG) to the load balancer. How can I tell it not to do that? Thank you.
service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: akeneo-service
    app.kubernetes.io/instance: akeneo-service-instance
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/component: backend
    app.kubernetes.io/managed-by: kubectl
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "ssl"
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: OUR_ARN
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-0d3a65fc39e47e3cf"
  name: akeneo-service
spec:
  selector:
    app: akeneo-service
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 443
    - name: http
      port: 80
      targetPort: 80
Any help would be nice. :-)
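No answer is recorded here, but one option worth sketching (my assumption, not something confirmed in this thread): the legacy AWS cloud provider also documents service.beta.kubernetes.io/aws-load-balancer-security-groups, which attaches only the listed security groups and skips the auto-generated one, whereas the extra-security-groups annotation used above appends groups alongside it. A hedged sketch:
apiVersion: v1
kind: Service
metadata:
  name: akeneo-service
  annotations:
    # Assumption: this attaches only the listed security group and suppresses the
    # auto-generated one, instead of appending as extra-security-groups does.
    service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-0d3a65fc39e47e3cf"
spec:
  type: LoadBalancer
  selector:
    app: akeneo-service
  ports:
    - name: https
      port: 443
      targetPort: 443
    - name: http
      port: 80
      targetPort: 80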

How to redirect HTTP to HTTPS with Nginx Ingress Controller, AWS NLB and TLS certificate managed by AWS Certificate Manager?

I've tried the following to get HTTP to redirect to HTTPS. I'm not sure where I'm going wrong.
ingress-nginx object:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: http
my-ingress object:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  tls:
    - hosts:
        - app.example.com
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service
              servicePort: 80
I get a 308 Permanent Redirect on HTTP and HTTPS. I guess this makes sense as the NLB is performing the SSL termination and therefore forwarding HTTP to the Nginx service? I guess I would need to move the SSL termination from the NLB to the Nginx service?
Thanks
I believe you do need to move the SSL termination to the ingress controller, because I am having the same issue and appear to be stuck in a permanent-redirect situation. The traffic comes into the NLB on 443, is terminated, and is sent to the backend instances over port 80. The ingress sees the traffic on port 80, redirects to https://, and thus begins the infinite loop.
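A minimal sketch of one way to break that loop, which is essentially what the next question below ends up doing (the 8080 value is an assumption; it just has to match a redirect-only listener in the controller): keep TLS termination on the load balancer's 443 listener, but point the plain port 80 listener at a dedicated port that does nothing except return a redirect, so terminated HTTPS traffic and plain HTTP traffic arrive on different ports.
# Controller Service ports (sketch): 443 is terminated on the load balancer and
# forwarded to the normal http port; 80 is forwarded to a redirect-only port.
ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: http
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
The 8080 listener itself would come from an http-snippet in the nginx ConfigMap, as shown in the question that follows.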

Kubernetes HTTP to HTTPS Redirect on AWS with ELB terminating SSL

I'm trying to set up a simple HTTP to HTTPS redirect for traffic going to a Kubernetes cluster. The SSL termination is happening on the ELB. When I try to use nginx.ingress.kubernetes.io/ssl-redirect = true, it results in an infinite redirect, which led me to set up a ConfigMap to handle this (nginx-ingress: Too many redirects when force-ssl is enabled).
Now there seems to be no redirection happening at all.
My ingress service is defined as:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...:certificate/...
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: http
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
  selector:
    app: ingress-nginx
  type: LoadBalancer
My config map is defined as:
apiVersion: v1
kind: ConfigMap
data:
  client-body-buffer-size: 32M
  hsts: "true"
  proxy-body-size: 1G
  proxy-buffering: "off"
  proxy-read-timeout: "600"
  proxy-send-timeout: "600"
  server-tokens: "false"
  ssl-redirect: "false"
  upstream-keepalive-connections: "50"
  use-proxy-protocol: "true"
  http-snippet: |
    server {
      listen 8080 proxy_protocol;
      server_tokens off;
      return 307 https://$host$request_uri;
    }
metadata:
  labels:
    app: ingress-nginx
  name: nginx-configuration
  namespace: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
And the ingress is defined as:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-headers: Authorization, origin, accept
    nginx.ingress.kubernetes.io/cors-allow-methods: GET, OPTIONS
    nginx.ingress.kubernetes.io/cors-allow-origin: gateway.example.com.com/monitor
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: gateway.example.com
      http:
        paths:
          - backend:
              serviceName: gateway
              servicePort: 8080
            path: /
  tls:
    - hosts:
        - gateway.example.com
The issue was that the target port I was using on the load balancer did not match the port the redirection server was listening on:
ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
This was just sending everything to port 80. It should have been this:
ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
That way it matches up with the ConfigMap's:
data:
  ...
  http-snippet: |
    server {
      listen 8080 proxy_protocol;
      server_tokens off;
      return 307 https://$host$request_uri;
    }