AWS Ingress for socket application - amazon-web-services

I have multiple microservices deployed on AWS EKS. One of the microservices has external HTTP access configured with my Ingress file as below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cc-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: microservice-name
            port:
              number: 3001
I want to deploy another microservice that uses the WebSocket protocol. In my local environment I call the service with Postman like below:
ws://localhost:3001/some-route
So I need to deploy this microservice on AWS and provide external access to it. I would appreciate it if anyone could help me.
Thanks for any comments or guides.

WebSocket works on top of the HTTP/HTTPS protocol: the client upgrades the connection by sending Upgrade and Connection headers with the request. So you don't have to rewrite the Ingress specifically for WebSockets. Just make sure the relevant ports are open and the app is listening on them.
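As a minimal sketch, assuming the WebSocket service listens on port 3001 and should be reachable under /some-route (the service name ws-microservice is a placeholder), you could simply add a second path to the existing ALB Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cc-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /api                # existing HTTP microservice
        pathType: Prefix
        backend:
          service:
            name: microservice-name
            port:
              number: 3001
      - path: /some-route         # WebSocket microservice; name and port are assumptions
        pathType: Prefix
        backend:
          service:
            name: ws-microservice
            port:
              number: 3001
Through the ALB you would then connect with ws:// (or wss:// if the listener is HTTPS) against the load balancer's DNS name instead of localhost.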

Related

How to correctly expose internal gRPC microservice to other services in EKS?

We are trying to bring up a gRPC microservice on AWS EKS. We've gotten to the point where we have an ALB up; however, it's giving us this error: A certificate must be specified for HTTPS listeners
Here is our Ingress YAML:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: dev
  name: some-service-name
  annotations:
    alb.ingress.kubernetes.io/backend-protocol-version: GRPC
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 80}, {"HTTPS": 50051}]'
    alb.ingress.kubernetes.io/target-type: ip
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: some-service-name
          servicePort: 50051
        path: /*
We don't want to expose this service externally, and only want internal services to hit it. I feel like we don't even need HTTPS for this and can use HTTP; however, it looks like gRPC requires HTTPS.
What's the correct way to get this working? The examples I've seen seem to be for external-facing services mostly. Do we need to create a private certificate authority, create a certificate from it, and then attribute it to the HTTPS listener in the load balancer settings?
Thanks!
The ALB controller expects a certificate ARN when the listen ports are specified as HTTPS.
There are two options to get it working:
- make the listen ports HTTP, or
- add the associated certificate.
Since you want to use gRPC: I do not think there is a hard rule that gRPC has to use HTTPS.
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/#backend-protocol
That said, it is still recommended to use HTTPS even for internal communication.
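As a minimal sketch of what the two options look like in the Ingress annotations (the certificate ARN below is a placeholder, not a real resource):
# Option 1: switch the listeners to plain HTTP, so no certificate is required
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTP": 50051}]'
# Option 2: keep HTTPS listeners and attach a certificate from ACM
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}, {"HTTPS": 50051}]'
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:111122223333:certificate/REPLACE-ME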

How to expose multiple services with TCP using nginx-ingress controller?

I have multiple deployments of an RDP application running, and they are all exposed with a ClusterIP service. I have the nginx-ingress controller in my k8s cluster, and to allow TCP I have added the --tcp-services-configmap flag to the nginx-ingress controller deployment and also created a ConfigMap for it, shown below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  3389: "demo/rdp-service1:3389"
This will expose the "rdp-service1" service. I have 10 more such services that need to be exposed on the same port number, but if I add more services to the same ConfigMap like this:
...
data:
  3389: "demo/rdp-service1:3389"
  3389: "demo/rdp-service2:3389"
then it will overwrite the previous service entry, and since I have also deployed external-dns in k8s, all the records created by the Ingress using host: ... will start pointing to the deployment attached to the newly added service in the ConfigMap.
My final requirement is that as soon as I append the rule for a newly created deployment (RDP application) to the Ingress, it starts allowing TCP connections for it. Is there any way to achieve this? Or is there any other Ingress controller available that can handle this type of use case and can also be easily integrated with external-dns?
Note: I am using an AWS EKS cluster and Route53 with external-dns.
Posting this answer as a community wiki to explain some of the topics in the question as well as hopefully point to the solution.
Feel free to expand/edit it.
The NGINX Ingress controller's main responsibility is to forward HTTP/HTTPS traffic. With the addition of tcp-services/udp-services it can also forward TCP/UDP traffic to the respective endpoints:
Kubernetes.github.io: Ingress nginx: User guide: Exposing tcp udp services
The main issue is that host-based routing for the Ingress resource in Kubernetes specifically targets HTTP/HTTPS traffic, not TCP (RDP).
You could achieve the following scenario (a ConfigMap sketch follows the side note below):
Ingress controller:
3389 - RDP Deployment #1
3390 - RDP Deployment #2
3391 - RDP Deployment #3
There would be no host-based routing; it would be more like port-forwarding.
A side note!
This setup would also depend on the ability of the LoadBalancer to allocate ports (which could be limited by the cloud provider's specification).
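A minimal sketch of that per-port mapping in the tcp-services ConfigMap (the namespaces and service names are assumptions):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  3389: "demo/rdp-service1:3389"   # RDP Deployment #1
  3390: "demo/rdp-service2:3389"   # RDP Deployment #2
  3391: "demo/rdp-service3:3389"   # RDP Deployment #3
Each of those external ports would also have to be exposed on the ingress controller's Service (and on the cloud load balancer in front of it).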
As for a possible solution, which may not be so straightforward, I would take a look at the following resources:
Stackoverflow.com: Questions: Nginx TCP forwarding based on hostname
Doc.traefik.io: Traefik: Routing: Routers: Configuring TCP routers
Github.com: Bolkedebruin: Rdpgw
I'd also check the following links:
Aws.amazon.com: Quickstart: Architecture: Rd gateway - AWS specific
Docs.konghq.com: Kubernetes ingress controller: 1.2.X: Guides: Using tcpingress
Haproxy:
Haproxy.com: Documentation: Aloha: 12-0: Deployment guides: Remote desktop: RDP gateway
Haproxy.com: Documentation: Aloha: 10-5: Deployment guides: Remote desktop
Haproxy.com: Blog: Microsoft remote desktop services rds load balancing and protection
Actually, I really don't know why you are using that ConfigMap.
To my knowledge, the nginx-ingress-controller routes traffic that comes in on the same port based on the host. So if you want to expose your applications on the same port, try using this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Chart.Name }}-ingress
  namespace: your-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: your-hostname
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          serviceName: {{ .Chart.Name }}-service
          servicePort: {{ .Values.service.nodeport.port }}
Looking at your requirement, I feel that you need a LoadBalancer rather than an Ingress.
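For example, a minimal sketch of a LoadBalancer Service for one of the RDP deployments (the name, namespace, and selector are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: rdp-service1-lb
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: rdp-app1          # placeholder selector for the RDP deployment's pods
  ports:
  - port: 3389
    targetPort: 3389
    protocol: TCP
The trade-off is one cloud load balancer per exposed service, rather than one shared entry point.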

Exposing a service on EKS using NGINX ingress and issues with load balancer

I am trying to set up a service and expose it externally on EKS. I have already done it on GKE pretty easily but now AWS is giving me a hard time.
My NGINX YAML looks something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - app.mydomain.com
    secretName: myapp-tls
  rules:
  - host: app.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp-service
          servicePort: 80
And then I have my domain app.mydomain.com on Google Domains pointing at the ingress external address. There is also a cert-manager service running in order to support HTTPS.
However, while basically the same setup worked completely out of the box on GKE, EKS gives me a hard time.
From what I understand, it has something to do with EKS's default LoadBalancer being layer 4, in comparison to Google's layer 7 (which explains HTTPS not working), but there are also issues with redirection of the domain: it just resolves to the ingress address instead of my desired address, and thus my app doesn't show up.
The domain is registered with Google Domains, and I'm creating Synthetic Records (for my subdomain) that point to my ingress external address on EKS. The same scheme works perfectly fine on GKE, but here it resolves to the ingress address instead of my domain, which results in a 404 on the ingress side.
I was wondering if someone could please point me to how to set this up properly? Should I give up on nginx ingress on EKS and move on to ALB? And how do I properly associate the domain?
Thank you very much in advance!
Edit:
output of kubectl describe ingress myapp-ingress:
Name:             myapp-ingress
Namespace:        default
Address:          ********************************-****************.elb.eu-west-1.amazonaws.com
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  myapp-tls terminates app.mydomain.com
Rules:
  Host              Path  Backends
  ----              ----  --------
  app.mydomain.com
                    /     myapp-service:80 (172.31.2.238:8000)
Annotations:        cert-manager.io/cluster-issuer: myapp-letsencrypt-prod
                    kubernetes.io/ingress.class: nginx
Events:             <none>
Should I give up on nginx ingress on EKS and move onto ALB
No. NGINX ingress controllers work perfectly well on EKS. It is possible to configure them as either layer 4 or layer 7; we use ours in layer 7 mode.
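The layer 4 vs layer 7 choice is made on the ingress controller's own Service, not on your Ingress resources. A minimal sketch of the relevant annotation, assuming you front the controller with an AWS NLB (selector and port names depend on how the controller was installed):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # Layer 4: put an NLB in front and let nginx handle HTTP/TLS itself.
    # For a classic ELB in layer 7 mode, the backend-protocol annotation
    # (service.beta.kubernetes.io/aws-load-balancer-backend-protocol) is used instead.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # placeholder selector for the controller pods
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https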
Can you update your question with the output of
kubectl get ingress myapp-ingress
I think your ingress path is also incorrect. Unless I'm mistaken, that's just routing the root of your app, not all URIs. We use the scheme:
spec:
  rules:
  - host: service.d.tld
    http:
      paths:
      - path: /?(.*) # <---
        backend:
          serviceName: my-service
          servicePort: http
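If you adopt a regex path like that, note that the ingress-nginx controller typically pairs it with rewrite/regex annotations; a minimal sketch (the values shown are common defaults, not necessarily what the answerer uses):
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1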
Are you seeing errors in the nginx ingress controller's logs? That + kubectl events are both useful for debugging purposes.
I'd disable TLS everywhere and get your service working on http, then work stepwise on getting TLS enabled on the ingress controller.
Edit: Based on your response above,
curl -H "Host: app.mydomain.com" http://<elb-address>:80
SHOULD call through to your service behind the ingress.
How is app.mydomain.com defined? Is it a CNAME to the dns entry?

Ingress resource vs NGINX ingress controller on Kubernetes

I am setting up NGINX ingress controller on AWS EKS.
I went through the k8s Ingress resource docs, and they are very helpful for understanding how we map LB ports to k8s service ports with an example file definition. I installed the nginx controller up to the prerequisite step. Then the tutorial directs me to create an Ingress resource:
https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/#create-an-ingress-resource
But below that, it is telling me to apply a service config. I am confused by this provider-specific step, which is different in terms of kind, version, and spec definition (Service vs Ingress):
https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml
Am I missing something here?
This is a concept that is at first a little tricky to wrap your head around. The Nginx ingress controller is nothing but a service of type LoadBalancer. What it does is act as the public-facing endpoint for your services. The IP address assigned to this service can route traffic to multiple services. So you can go ahead and define your services as ClusterIP and have them exposed through the Nginx ingress controller.
Here's a diagram (see the linked image source) that portrays the concept a little better.
On that note, if you have acquired a static IP for your service, you need to assign it to your Nginx ingress controller. So what is an Ingress? An Ingress is basically a way for you to tell your Nginx ingress controller how to direct the traffic coming in to your LB's public IP. So, as should be clear now, you have one LoadBalancer service and multiple Ingress resources. Each Ingress corresponds to a single service (this can change based on how you define your services), but you get the idea.
Let's get into some yaml code. As mentioned, you will need the ingress controller service regardless of how many ingress resources you have. So go ahead and apply this code on your EKS cluster.
Now let's see how you would expose your pod to the world through Nginx-ingress. Say you have a wordpress deployment. You can define a simple ClusterIP service for this app:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ${WORDPRESS_APP}
  namespace: ${NAMESPACE}
  name: ${WORDPRESS_APP}
spec:
  type: ClusterIP
  ports:
  - port: 9000
    targetPort: 9000
    name: ${WORDPRESS_APP}
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: ${WORDPRESS_APP}
This creates a service for your wordpress app which is not accessible outside of the cluster. Now you can create an ingress resource to expose this service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: ${NAMESPACE}
  name: ${INGRESS_NAME}
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - ${URL}
    secretName: ${TLS_SECRET}
  rules:
  - host: ${URL}
    http:
      paths:
      - path: /
        backend:
          serviceName: ${WORDPRESS_APP}
          servicePort: 80
Now if you run kubectl get svc you can see the following:
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                   AGE
wordpress                  ClusterIP      10.23.XXX.XX   <none>          9000/TCP,80/TCP,443/TCP   1m
nginx-ingress-controller   LoadBalancer   10.23.XXX.XX   XX.XX.XXX.XXX   80:X/TCP,443:X/TCP        1m
Now you can access your wordpress service through the URL defined, which maps to the public IP of your ingress controller LB service.
The NGINX ingress controller is the actual process that shapes your traffic to your services, basically like the nginx or load balancer installation on a traditional VM.
The Ingress resource (kind: Ingress) is more like the nginx config on your old VM, where you would define host mappings, paths, and proxies.

Exposing service in kubernetes pods with aws alb ingress controller

I am new to k8s and exploring more on production-grade deployment.
We have a Python Django app which is running on a NodePort (say 9000). When I try to expose it using a k8s Service with an ELB:
- it works by running 80 and 443 separately, whereas 80-to-443 redirection is not supported in the AWS classic ELB.
Then I switched to the aws-alb-ingress-controller; the problem I faced was:
- the ALB does not work with the NodePort, only with the HTTP and HTTPS ports.
Any thoughts would be much appreciated!!
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ABC
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: instance
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/subnets: 'subnet-1, subnet-2'
    alb.ingress.kubernetes.io/security-group: sg-ABC
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/success-codes: "200"
  labels:
    name: ABC
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: ABC
          servicePort: 80
Thank you @sulabh and @Fahri, it works perfectly now. I went through the doc again and corrected my mistake.
The issue was with the route path in the ALB.
The setup is like this:
a python-django-uwsgi app in pods, exposed as a NodePort Service, with the aws-alb-ingress-controller creating the ALB.
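For reference, a minimal sketch of that kind of NodePort Service behind the ALB Ingress (the name, selector, and container port are placeholders, not the actual config):
apiVersion: v1
kind: Service
metadata:
  name: ABC
spec:
  type: NodePort
  selector:
    app: django-uwsgi        # placeholder selector for the Django/uWSGI pods
  ports:
  - port: 80                 # matches servicePort: 80 in the Ingress above
    targetPort: 9000         # uWSGI container port (assumed)
    protocol: TCP
With alb.ingress.kubernetes.io/target-type: instance, the controller registers the node port that Kubernetes allocates for this Service as the ALB target, so the Ingress only needs to reference the Service port.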
Cheers!