How to connect to rabbitmq service using load balancer hostname - amazon-web-services

The kubectl describe service the-load-balancer command returns:
Name: the-load-balancer
Namespace: default
Labels: app=the-app
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"the-app"},"name":"the-load-balancer","namespac...
Selector: app=the-app
Type: LoadBalancer
IP: 10.100.129.251
LoadBalancer Ingress: 1234567-1234567890.us-west-2.elb.amazonaws.com
Port: the-load-balancer 15672/TCP
TargetPort: 15672/TCP
NodePort: the-load-balancer 30080/TCP
Endpoints: 172.31.77.44:15672
Session Affinity: None
External Traffic Policy: Cluster
The RabbitMQ server, which runs in another container behind the load balancer, is reachable from another container via the load balancer's endpoint 172.31.77.44:15672.
But connecting via the the-load-balancer hostname or its cluster IP 10.100.129.251 fails.
What needs to be done to make the RabbitMQ service reachable via the load balancer's the-load-balancer hostname?
Edited later:
Running a simple Python test from another container:
import socket
print(socket.gethostbyname('the-load-balancer'))
returns a load balancer local IP 10.100.129.251.
Connecting to RabbitMQ using '172.31.18.32' works well:
import pika
credentials = pika.PlainCredentials('guest', 'guest')
parameters = pika.ConnectionParameters(host='172.31.18.32', port=5672, credentials=credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
print('...channel: %s' % channel)
But after replacing host='172.31.18.32' with host='the-load-balancer' or host='10.100.129.251', the client fails to connect.

When serving RabbitMQ from behind the load balancer you will need to open both ports, 5672 (AMQP) and 15672 (the management UI). When configured properly, the kubectl describe service the-load-balancer command should return both ports mapped to a local IP address:
Name: the-load-balancer
Namespace: default
Labels: app=the-app
Selector: app=the-app
Type: LoadBalancer
IP: 10.100.129.251
LoadBalancer Ingress: 123456789-987654321.us-west-2.elb.amazonaws.com
Port: the-load-balancer-port-15672 15672/TCP
TargetPort: 15672/TCP
NodePort: the-load-balancer-port-15672 30080/TCP
Endpoints: 172.31.18.32:15672
Port: the-load-balancer-port-5672 5672/TCP
TargetPort: 5672/TCP
NodePort: the-load-balancer-port-5672 30081/TCP
Endpoints: 172.31.18.32:5672
Below is the the-load-balancer.yaml file used to create the RabbitMQ service; a client connection sketch follows the manifest:
apiVersion: v1
kind: Service
metadata:
  name: the-load-balancer
  labels:
    app: the-app
spec:
  type: LoadBalancer
  ports:
  - port: 15672
    nodePort: 30080
    protocol: TCP
    name: the-load-balancer-port-15672
  - port: 5672
    nodePort: 30081
    protocol: TCP
    name: the-load-balancer-port-5672
  selector:
    app: the-app
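With both ports exposed, the client from the question can point at the service name and the AMQP port 5672 instead of the pod IP. A minimal sketch, reusing the credentials from the question:

import pika

# Connect through the Kubernetes service name; 5672 is the AMQP port
# exposed by the manifest above (15672 is only the management UI).
credentials = pika.PlainCredentials('guest', 'guest')
parameters = pika.ConnectionParameters(host='the-load-balancer', port=5672, credentials=credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
print('...channel: %s' % channel)
connection.close()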

I've noticed that in your code you are using port 5672 to talk to the endpoint directly, while it is 15672 in the service definition, which is the port for the web console?

Be sure that the load-balancer service and RabbitMQ are in the same namespace as your application.
If not, you have to use the full DNS record, service-x.namespace-b.svc.cluster.local, according to the DNS for Services and Pods documentation; a sketch follows below.
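For example, if the RabbitMQ service lived in a hypothetical namespace called messaging while the client runs elsewhere, using the fully qualified name would look like this (the namespace name is an assumption for illustration):

import socket
import pika

# <service>.<namespace>.svc.cluster.local, per the DNS for Services and Pods docs;
# 'messaging' is a hypothetical namespace name.
host = 'the-load-balancer.messaging.svc.cluster.local'
print(socket.gethostbyname(host))  # should print the service's cluster IP

credentials = pika.PlainCredentials('guest', 'guest')
parameters = pika.ConnectionParameters(host=host, port=5672, credentials=credentials)
connection = pika.BlockingConnection(parameters)
print('connected:', connection.is_open)
connection.close()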

Related

Kubernetes - load balance multiple services using a single load balancer

Is it possible to load balance multiple services using a single AWS load balancer? If that's not possible I guess I could just use a Node.js proxy to forward from the httpd pod to the tomcat pod and hope it doesn't lag...
Either way, which load balancer is recommended for multi-port services? CLB doesn't support multiple ports and ALB doesn't support multiple ports for a single / path, so I guess NLB is the right thing to implement?
I'm trying to cut costs and move to k8s, but I need to know if I'm choosing the right service. Tomcat and httpd are both part of a single prod website but can't do path-based routing.
Httpd pod service:
apiVersion: v1
kind: Service
metadata:
  name: httpd-service
  labels:
    app: httpd-service
  namespace: test1-web-dev
spec:
  selector:
    app: httpd
  ports:
  - name: port_80
    protocol: TCP
    port: 80
    targetPort: 80
  - name: port_443
    protocol: TCP
    port: 443
    targetPort: 443
  - name: port_1860
    protocol: TCP
    port: 1860
    targetPort: 1860
Tomcat pod service:
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service
  namespace: test1-web-dev
spec:
  selector:
    app: tomcat
  ports:
  - name: port_8080
    protocol: TCP
    port: 8080
    targetPort: 8080
  - name: port_1234
    protocol: TCP
    port: 1234
    targetPort: 1234
  - name: port_8222
    protocol: TCP
    port: 8222
    targetPort: 8222
It's done like this: install an Ingress controller (e.g. ingress-nginx) in your cluster; it's going to be your load balancer facing the outside world.
Then configure Ingress resource(s) to drive traffic to services (as many as you want). That way you have a single Ingress controller (which means a single load balancer) per cluster.
https://kubernetes.io/docs/concepts/services-networking/ingress/
You can do this using an Ingress controller backed by a load balancer and a single / path: you can make the Ingress tell the backing load balancer to route requests based on the Host header, as sketched below.
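As a rough illustration of the Host-header approach, the snippet below sends two requests to a single load balancer address and lets the Ingress pick the backend; the hostnames and the ELB address are placeholders, and the actual values depend on your Ingress rules and DNS:

import http.client

INGRESS_ADDRESS = 'a1b2c3d4e5-123456789.us-west-2.elb.amazonaws.com'  # placeholder ELB hostname

for host in ('httpd.example.com', 'tomcat.example.com'):  # hypothetical hosts routed by the Ingress
    conn = http.client.HTTPConnection(INGRESS_ADDRESS, 80, timeout=5)
    # The Host header is what the Ingress uses to choose between the
    # httpd-service and tomcat-service backends.
    conn.request('GET', '/', headers={'Host': host})
    resp = conn.getresponse()
    print(host, resp.status, resp.reason)
    conn.close()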

How to deploy a Kubernetes service using NodePort on Amazon AWS?

I have created a cluster on AWS EC2 using kops, consisting of a master node and two worker nodes, all with public IPv4 addresses assigned.
Now, I want to create a deployment with a service using NodePort to expose the application to the public.
After having created the service, I retrieve the following information, showing that it correctly identified my three pods:
nlykkei:~/projects/k8s-examples$ kubectl describe svc hello-svc
Name: hello-svc
Namespace: default
Labels: app=hello
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"hello"},"name":"hello-svc","namespace":"default"},"spec"...
Selector: app=hello-world
Type: NodePort
IP: 100.69.62.27
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30001/TCP
Endpoints: 100.96.1.5:8080,100.96.2.3:8080,100.96.2.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
However, when I try to visit any of my public IPv4 addresses on port 30001, I get no response from the server. I have already created a security group allowing all ingress traffic to port 30001 for all of the instances.
Everything works with Docker Desktop for Mac, and here I notice the following service field not present in the output above:
LoadBalancer Ingress: localhost
I've already studied https://kubernetes.io/docs/concepts/services-networking/service/, and think that NodePort should serve my needs?
Any help is appreciated!
So you want a service that can be accessed from the public internet. To achieve this I would recommend creating a ClusterIP service and then an Ingress for that service. Assuming you have the deployment hello-world with pods listening on port 8080, you will then have the following two objects:
Service:
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  ports:
  - name: service
    port: 8081        # or whatever you want
    protocol: TCP
    targetPort: 8080  # the port opened in your pods
  selector:
    app: hello-world
  type: ClusterIP
Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  rules:
  - host: hello-world.mycutedomainname.com
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 8081  # or whatever you have set for the service port
        path: /
Note: the name tag in the service's port is optional.
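Before debugging the Ingress itself, it can help to confirm that the ClusterIP service works from inside the cluster. A small sketch, run from any pod in the same namespace, using the service name and port from the manifest above:

import http.client

# 'hello-world' resolves via cluster DNS to the ClusterIP; 8081 is the service port.
conn = http.client.HTTPConnection('hello-world', 8081, timeout=5)
conn.request('GET', '/')
resp = conn.getresponse()
print(resp.status, resp.reason)
conn.close()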

GKE - how to attach static ip to internal load balancer

I want to connect a service from one GKE cluster to another. I created the service as an internal load balancer and I would like to attach a static IP to it. Here is my service.yml:
apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    kubernetes.io/ingress.global-static-ip-name: es-test
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
However, after kubectl apply -f, when I check the service, the load balancer ingress looks like this:
status:
  loadBalancer:
    ingress:
    - ip: 10.156.0.60
And I cannot connect using the static IP. How do I solve this?
EDIT:
After a suggestion, I changed the YAML file to:
apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  loadBalancerIP: "xx.xxx.xxx.xxx"  # here my static IP
The service now looks like this:
spec:
  clusterIP: 11.11.1.111
  externalTrafficPolicy: Cluster
  loadBalancerIP: xx.xxx.xxx.xxx
  ports:
  - nodePort: 31894
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
And I still cannot connect
November 2021 Update
It is possible to create a static internal IP and assign it to a LoadBalancer k8s service type.
Go to VPC networks -> select your VPC -> Static Internal IP Addresses.
Click Reserve Static Address, then choose a name for your IP and click Reserve. You can choose the IP address manually here as well.
In your Service YAML add the following annotation. Also make sure type is LoadBalancer and then assign the IP address.
...
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
...
  type: LoadBalancer
  loadBalancerIP: <your_static_internal_IP>
This will spin up an internal LB and assign your static IP to it. You can also check on the Static Internal IP Addresses screen that the new IP is now in use by the freshly created load balancer. You can assign a Cloud DNS record to it, if needed.
Also, you can mark the IP address as "shared" during the reservation process so it can be used by up to 50 internal load balancers.
Assigning Static IP to Internal LB
Enabling Shared IP
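To verify the result from inside the VPC, a minimal reachability check against the reserved address could look like the sketch below; the IP is a placeholder for your static internal IP, and 80 is the service port from the manifest above:

import socket

STATIC_INTERNAL_IP = '10.156.0.100'  # placeholder for your reserved static internal IP
PORT = 80                            # service port from the manifest

sock = socket.create_connection((STATIC_INTERNAL_IP, PORT), timeout=5)
print('connected to', sock.getpeername())
sock.close()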

AWS EKS: Service(LoadBalancer) running but not responding to requests

I am using AWS EKS.
I have launched my Django app with the help of gunicorn in the Kubernetes cluster.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
    type: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
        type: web
    spec:
      containers:
      - name: vogofleet
        image: xx.xx.com/api:image2
        imagePullPolicy: Always
        env:
        - name: DATABASE_HOST
          value: "test-db-2.xx.xx.xx.xx.com"
        - name: DATABASE_PASSWORD
          value: "xxxyyyxxx"
        - name: DATABASE_USER
          value: "admin"
        - name: DATABASE_PORT
          value: "5432"
        - name: DATABASE_NAME
          value: "test"
        ports:
        - containerPort: 9000
I have applied these changes and I can see my pod running in kubectl get pods.
Now I am trying to expose it via a Service object. Here is my service object:
# service
---
apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    app: api
spec:
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: api
    type: web
  type: LoadBalancer
The service is also up and running. It has given me an external IP to access the service, which is the address of the load balancer. I can see that it has launched a new load balancer in the AWS console. But I am not able to access it from the browser; it says the address didn't return any data. The ELB shows the health check on the instances as OutOfService.
There are other pods also running in the cluster. When I run printenv in those pods, here is the result:
root@consumer-9444cf7cd-4dr5z:/consumer# printenv | grep API
API_PORT_9000_TCP_ADDR=172.20.140.213
API_SERVICE_HOST=172.20.140.213
API_PORT_9000_TCP_PORT=9000
API_PORT=tcp://172.20.140.213:9000
API_PORT_9000_TCP=tcp://172.20.140.213:9000
API_PORT_9000_TCP_PROTO=tcp
API_SERVICE_PORT=9000
And I tried to check the connection to my api pod:
root@consumer-9444cf7cd-4dr5z:/consumer# telnet $API_PORT_9000_TCP_ADDR $API_PORT_9000_TCP_PORT
Trying 172.20.140.213...
telnet: Unable to connect to remote host: Connection refused
But when I port-forward to my localhost, I can access it there:
$ kubectl port-forward api-6d94dcb65d-br6px 9000
and check the connection,
$ nc -vz localhost 9000
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif lo0
src ::1 port 53299
dst ::1 port 9000
rank info not available
TCP aux info available
Connection to localhost port 9000 [tcp/cslistener] succeeded!
Why am I not able to access it from other containers and from the public internet? The security groups are correct.
I have the same problem. Here's the output of the kubectl describe service command:
kubectl describe services nginx-elb
Name: nginx-elb
Namespace: default
Labels: deploy=slido
Annotations: service.beta.kubernetes.io/aws-load-balancer-internal: true
Selector: deploy=slido
Type: LoadBalancer
IP: 10.100.29.66
LoadBalancer Ingress: internal-a2d259057e6f94965bfc1f08cf86d4ce-884461987.us-west-2.elb.amazonaws.com
Port: http 80/TCP
TargetPort: 3000/TCP
NodePort: http 32582/TCP
Endpoints: 192.168.60.119:3000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 119s service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 117s service-controller Ensured load balancer

Expose internal IP so it can be accessed from internet

I just deployed nginx on a K8s node in a cluster; the master and worker communicate using internal IP addresses.
I can curl http://worker_ip:8080 (nginx) from the internal network, but how can I make it accessible from the external/internet network?
Or should I use a public IP as my node host?
Update the service type to NodePort and grab the nodePort that is assigned to the service.
You should be able to access nginx using host:nodePort.
See below for reference; a quick client-side check follows the manifest.
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
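A rough client-side check of the NodePort, assuming a placeholder node IP; since the manifest above does not pin a nodePort, look up the assigned value with kubectl get service my-nginx first:

from urllib.request import urlopen

NODE_IP = '203.0.113.20'  # placeholder for a reachable worker node IP
NODE_PORT = 30080         # placeholder; use the nodePort shown by kubectl get service

with urlopen(f'http://{NODE_IP}:{NODE_PORT}/', timeout=5) as resp:
    print(resp.status)
    print(resp.read(200))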