I have an nginx ingress controller on GKE, sitting behind a TCP load balancer from GCP.
Some of our requests take longer than 30s to process, and the TCP LB seems to be killing the connection at around that timeout (30~35s).
How can I change the LB timeout?
Any other workaround that keeps the LB from closing the connection would be helpful too (maybe having nginx send some kind of keep-alive packet?).
Note: I know the HTTP load balancer has this setting, but I need it for the TCP one. In fact, this ingress controller was installed according to these docs on GCP:
https://cloud.google.com/community/tutorials/nginx-ingress-gke
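This does not change the LB itself, but since the question asks about nginx-side workarounds: the ingress-nginx controller lets you raise its own proxy timeouts per Ingress via annotations. A minimal sketch, assuming a hypothetical Ingress name, host, and backend service (these annotations only govern nginx's upstream timeouts, not the GCP load balancer):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: slow-endpoints                            # hypothetical name
  annotations:
    # nginx-side timeouts only; the GCP LB is unaffected
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
spec:
  rules:
  - host: app.example.com                         # assumed host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app                          # assumed backend service
            port:
              number: 80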
I remember doing it before but now it seems I have forgotten the process.
I want to create an HTTPS listener for an ALB. However, nothing in my EC2 instance is running on port 443.
Should I configure a reverse proxy that points 443 to the port the app runs on, or add my HTTPS listener with a port 80 HTTP target group?
Could someone help me with this?
You need a single Target Group pointing to your EC2 instance on port 80.
Then you can create a port 443 listener on the ALB that uses that target group. You will have to attach an SSL certificate to the listener when you create it. The ALB will terminate the SSL connection and send the request to the backend server over port 80.
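As a rough sketch, the listener/target-group pairing described above could be expressed in CloudFormation like this (the ARNs, VPC ID, and health check path are placeholders, not values from the question):

Resources:
  AppTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Port: 80                              # backend receives plain HTTP
      Protocol: HTTP
      TargetType: instance
      VpcId: vpc-0123456789abcdef0          # placeholder VPC
      HealthCheckPath: /                    # point at a real health endpoint
  HttpsListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: arn:aws:elasticloadbalancing:...   # placeholder ALB ARN
      Port: 443
      Protocol: HTTPS                       # TLS terminates at the ALB
      Certificates:
        - CertificateArn: arn:aws:acm:...   # placeholder certificate ARN
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref AppTargetGroup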
So I'm working on a project that involves managing many Postgres instances inside a Kubernetes cluster. Each instance is managed using a StatefulSet, with a Service for network communication. I need to expose each Service to the public internet via DNS on port 5432.
The most natural approach here is to use the Kubernetes LoadBalancer Service type together with something like ExternalDNS to dynamically map a DNS name to a load balancer endpoint. This is great for many types of services, but for databases there is one massive limitation: the idle connection timeout. AWS ELBs have a maximum idle timeout limit of 4000 seconds. Many long-running analytical queries/transactions easily exceed that, not to mention potentially long-running operations like pg_restore.
So I need some kind of solution that lets me work around the limitations of load balancers. Node IPs are out of the question, since I would need port 5432 exposed for every single Postgres instance in the cluster. Ingress also seems less than ideal, since it is a layer 7 proxy that only supports HTTP/HTTPS. I've seen workarounds with nginx-ingress involving some ConfigMap chicanery, but I'm a little worried about committing to hacks like that for a large project. ExternalName is intriguing, but even if I can find better documentation on it, I think it may end up having similar limitations to the node-IP approach.
Any suggestions would be greatly appreciated.
The Kubernetes ingress controller implementation Contour from Heptio can proxy TCP streams when they are encapsulated in TLS. This is required to use the SNI handshake message to direct the connection to the correct backend service.
Contour handles standard Ingress resources, but additionally introduces a new ingress API, IngressRoute, implemented as a CRD. The TLS connection can be terminated at your backend service. An IngressRoute might look like this:
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: postgres
  namespace: postgres-one
spec:
  virtualhost:
    fqdn: postgres-one.example.com
    tls:
      passthrough: true
  tcpproxy:
    services:
      - name: postgres
        port: 5432
  routes:
    - match: /
      services:
        - name: dummy
          port: 80
HAProxy supports TCP load balancing. You can use HAProxy as a proxy and load balancer for a Postgres database; it supports both TLS and non-TLS connections.
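Here is a rough sketch of what that could look like inside the cluster: a minimal haproxy.cfg in TCP mode, wrapped in a ConfigMap. The ConfigMap name, backend service DNS name, and timeout values are assumptions for illustration, not tested settings:

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-postgres                    # hypothetical name
data:
  haproxy.cfg: |
    defaults
      mode tcp                              # layer 4 proxying, no HTTP parsing
      timeout connect 5s
      timeout client  1h                    # generous idle timeouts for long queries
      timeout server  1h
    frontend postgres_in
      bind *:5432
      default_backend postgres_out
    backend postgres_out
      server pg1 postgres.postgres-one.svc.cluster.local:5432 check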
We are running a Rails application with Unicorn and WebSockets.
We are using an AWS ELB as the ingress entry point.
SSL terminates on the ELB, which forwards traffic to the application.
The nginx ingress routes traffic to the web app running Unicorn/Puma on port 8080.
The app works, but our WebSocket handshake responds with 200 instead of 101. We have enabled CORS and set the required annotations on the ingress.
These are the annotations used for the ingress controller Service:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:iam::xxx:server-certificate/staging
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
When we set the AWS load balancer backend protocol to tcp and the SSL ports to 443, it fails with an infinite redirect loop.
Following are the annotations used in the ingress:
nginx.ingress.kubernetes.io/service-upstream: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type"
nginx.ingress.kubernetes.io/cors-allow-origin: "*"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
ingress.kubernetes.io/force-ssl-redirect: "true"
The sample nginx configuration we used earlier, without the ingress, is here.
How do we get WebSockets working with the nginx ingress controller behind an AWS ELB?
Is it possible to try without CORS?
Part of the handshake is that the client must send at least these headers:
Sec-WebSocket-Key
Sec-WebSocket-Version
And maybe something else. Look at https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_servers#The_WebSocket_Handshake
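For reference, a minimal client upgrade request looks roughly like this (the path and key below are the example values from RFC 6455, not from this app); if the server replies with anything other than 101 Switching Protocols, the upgrade never happened:

GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

And the expected server reply:

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=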
I am setting up an application load balancer.
The ALB has one listener: HTTP on port 80, forwarding to the target group.
The target group uses port 3000.
I also have an auto scaling group that points to the target group and is set up to create 2 instances.
The ECS cluster is set up with a service that runs 4 tasks.
I set up the service to use the ALB and the HTTP port 80 listener. The task definition uses a dynamic host port and container port 3000.
I have checked my security groups: inbound accepts ports 3000 and 80, and outbound allows all traffic.
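For reference, the dynamic host port mapping described above would look roughly like this in a CloudFormation task definition (the family, container name, and image are placeholders, not values from the actual setup):

AppTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: web-app                 # placeholder family name
    ContainerDefinitions:
      - Name: web                   # placeholder container name
        Image: my-app:latest        # placeholder image
        Memory: 512
        PortMappings:
          - ContainerPort: 3000     # the app listens on 3000 inside the container
            HostPort: 0             # 0 = dynamic host port assigned by ECS
            Protocol: tcp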
All the instances in the target-group are unhealthy
I can ssh into the ec2 instances and docker ps -a returns two docker containers.
I logged out and ran curl -i ec2-user@ec2-22-236-243-39.compute-4.amazonaws.com:3000/health-check-target-page and I get
Failed to connect to ec2-user@ec2-22-236-243-39.compute-4.amazonaws.com port 3000: Connection refused
I tried same command with port 80 and I get
curl: (56) Recv failure: Connection reset by peer
I'm still learning AWS, so I hope this info helps. Let me know what I am missing here.
Thanks!
I am using a TCP load balancer in Google Cloud Platform. How do I forward the frontend configurations
<static-ip>:8000 and <static-ip>:80
to port 8000 of a backend instance group?
The temporary solution I have used is to log into each machine in the instance group and use iptables to forward incoming traffic on port 80 to port 8000. But this is not a feasible solution when there are many instances.
Port forwarding cannot be configured on Google Cloud's TCP load balancer; it is only available in the HTTP and HTTPS load balancers. For the TCP load balancer, the forwarding has to be done with iptables on the machines themselves.
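For completeness, a minimal sketch of the iptables workaround mentioned above (run on each instance; it assumes the application is listening on port 8000):

# Redirect incoming TCP traffic on port 80 to local port 8000
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8000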