I just deployed nginx on a Kubernetes node in a cluster where the master and worker communicate over internal IP addresses. I can curl http://worker_ip:8080 (nginx) from the internal network, but how can I make it accessible from the external/internet network?
Or should I use a public IP as my node host?
Update the service type to NodePort and grab the nodePort that is assigned to the service. You should then be able to access nginx at host:nodeport.
See below for reference:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
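If you need a stable port rather than one randomly assigned from the default 30000-32767 range, you can also pin nodePort explicitly. A minimal sketch of the ports section (the 30080 value is an example of my choosing, not from the question):

spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
    nodePort: 30080  # example value; must fall within the cluster's NodePort range (default 30000-32767)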
Is it possible to load balance multiple services using a single AWS load balancer? If that's not possible, I guess I could just use a Node.js proxy to forward from the httpd pod to the tomcat pod and hope it doesn't lag...
Either way, which load balancer is recommended for multi-port services? CLB doesn't support multiple ports, and ALB doesn't support multiple ports for a single / path. So I guess NLB is the right thing to implement?
I'm trying to cut costs and move to k8s, but I need to know if I'm choosing the right service. Tomcat and httpd are both part of a single prod website, but path-based routing isn't possible.
Httpd pod service:
apiVersion: v1
kind: Service
metadata:
  name: httpd-service
  labels:
    app: httpd-service
  namespace: test1-web-dev
spec:
  selector:
    app: httpd
  ports:
  - name: port_80
    protocol: TCP
    port: 80
    targetPort: 80
  - name: port_443
    protocol: TCP
    port: 443
    targetPort: 443
  - name: port_1860
    protocol: TCP
    port: 1860
    targetPort: 1860
Tomcat pod service:
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  labels:
    app: tomcat-service
  namespace: test1-web-dev
spec:
  selector:
    app: tomcat
  ports:
  - name: port_8080
    protocol: TCP
    port: 8080
    targetPort: 8080
  - name: port_1234
    protocol: TCP
    port: 1234
    targetPort: 1234
  - name: port_8222
    protocol: TCP
    port: 8222
    targetPort: 8222
It's done like this: install an Ingress controller (e.g. ingress-nginx) in your cluster; it will be your load balancer facing the outside world.
Then configure Ingress resource(s) to drive traffic to your services (as many as you want). That way you have a single Ingress controller, and thus a single load balancer, per cluster.
https://kubernetes.io/docs/concepts/services-networking/ingress/
You can do this with an Ingress controller backed by a load balancer while keeping a single / path: make the Ingress tell the backing load balancer to route requests based on the Host header instead.
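For example, a minimal sketch of such an Ingress using Host-header routing (the hostnames are hypothetical placeholders; the service names and ports are taken from the manifests above):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: test1-web-dev
spec:
  rules:
  - host: www.example.com   # hypothetical host served by httpd
    http:
      paths:
      - path: /
        backend:
          serviceName: httpd-service
          servicePort: 80
  - host: app.example.com   # hypothetical host served by tomcat
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-service
          servicePort: 8080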
I have created a cluster on AWS EC2 using kops, consisting of a master node and two worker nodes, all with public IPv4 addresses assigned.
Now I want to create a deployment with a service using NodePort to expose the application to the public.
After having created the service, I retrieve the following information, showing that it correctly identified my three pods:
nlykkei:~/projects/k8s-examples$ kubectl describe svc hello-svc
Name: hello-svc
Namespace: default
Labels: app=hello
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"hello"},"name":"hello-svc","namespace":"default"},"spec"...
Selector: app=hello-world
Type: NodePort
IP: 100.69.62.27
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30001/TCP
Endpoints: 100.96.1.5:8080,100.96.2.3:8080,100.96.2.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
However, when I try to visit any of the public IPv4 addresses on port 30001, I get no response from the server. I have already created a security group allowing all ingress traffic to port 30001 for all of the instances.
Everything works with Docker Desktop for Mac, where I notice the following service field that is not present in the output above:
LoadBalancer Ingress: localhost
I've already studied https://kubernetes.io/docs/concepts/services-networking/service/ and think that NodePort should serve my needs.
Any help is appreciated!
So you want a service that can be accessed from the public internet. To achieve this, I would recommend creating a ClusterIP service and then an Ingress for that service. Assuming you have a deployment hello-world whose pods listen on port 8080, you will then have the following two objects:
Service:
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  ports:
  - name: service
    port: 8081        # or whatever you want
    protocol: TCP
    targetPort: 8080  # the port opened in your pods
  selector:
    app: hello-world
  type: ClusterIP
Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  rules:
  - host: hello-world.mycutedomainname.com
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 8081  # or whatever you have set as the service port
        path: /
Note: the name field in the service's port is optional.
I want to connect a service in one GKE cluster to another one. I created the service as an internal load balancer and would like to attach a static IP to it. My service.yml:
apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    kubernetes.io/ingress.global-static-ip-name: es-test
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
However, after kubectl apply -f, when I check the service, the load balancer ingress looks like this:
status:
  loadBalancer:
    ingress:
    - ip: 10.156.0.60
And I cannot connect using the static IP. How can I solve this?
EDIT:
Following a suggestion, I changed the YAML file to:
apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  loadBalancerIP: "xx.xxx.xxx.xxx"  # my static IP here
The service now looks like this:
spec:
  clusterIP: 11.11.1.111
  externalTrafficPolicy: Cluster
  loadBalancerIP: xx.xxx.xxx.xxx
  ports:
  - nodePort: 31894
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
And I still cannot connect.
November 2021 Update
It is possible to create a static internal IP and assign it to a LoadBalancer k8s service type.
Go to VPC networks -> select your VPC -> Static Internal IP Addresses.
Click Reserve Static Address, then pick a name for your IP and click Reserve. You can also choose the IP address manually here.
In your service YAML, add the following annotation. Also make sure type is LoadBalancer, and then assign the IP address:
...
annotations:
  networking.gke.io/load-balancer-type: "Internal"
...
type: LoadBalancer
loadBalancerIP: <your_static_internal_IP>
This will spin up an internal LB and assign your static IP to it. You can also check on the Static Internal IP Addresses screen that the new IP is now in use by the freshly created load balancer. You can assign a Cloud DNS record to it if needed.
Also, you can choose the "shared" option for the IP address during the reservation process, so it can be used by up to 50 internal load balancers.
(Screenshots: Assigning Static IP to Internal LB; Enabling Shared IP)
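Putting the pieces together, a minimal sketch of the full service (the names and ports come from the question above; the loadBalancerIP value is a placeholder for your reserved static internal address):

apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
  labels:
    app: hello
spec:
  type: LoadBalancer
  loadBalancerIP: 10.128.0.10  # placeholder: your reserved static internal IP
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP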
The kubectl describe service the-load-balancer command returns:
Name: the-load-balancer
Namespace: default
Labels: app=the-app
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"the-app"},"name":"the-load-balancer","namespac...
Selector: app=the-app
Type: LoadBalancer
IP: 10.100.129.251
LoadBalancer Ingress: 1234567-1234567890.us-west-2.elb.amazonaws.com
Port: the-load-balancer 15672/TCP
TargetPort: 15672/TCP
NodePort: the-load-balancer 30080/TCP
Endpoints: 172.31.77.44:15672
Session Affinity: None
External Traffic Policy: Cluster
The RabbitMQ server, which runs in another container behind the load balancer, is reachable from another container via the load balancer's endpoint 172.31.77.44:15672.
But connecting using the the-load-balancer hostname or its local 10.100.129.251 IP address fails.
What needs to be done in order to make the RabbitMQ service reachable via the load balancer's the-load-balancer hostname?
Edited later:
Running a simple Python test from another container:
import socket
print(socket.gethostbyname('the-load-balancer'))
returns the load balancer's local IP, 10.100.129.251.
Connecting to RabbitMQ using '172.31.18.32' works well:
import pika
credentials = pika.PlainCredentials('guest', 'guest')
parameters = pika.ConnectionParameters(host='172.31.18.32', port=5672, credentials=credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
print('...channel: %s' % channel)
But after replacing host='172.31.18.32' with host='the-load-balancer' or host='10.100.129.251', the client fails to connect.
When serving RabbitMQ from behind the load balancer, you will need to open both ports 5672 and 15672. When configured properly, the kubectl describe service the-load-balancer command should return both ports mapped to a local IP address:
Name: the-load-balancer
Namespace: default
Labels: app=the-app
Selector: app=the-app
Type: LoadBalancer
IP: 10.100.129.251
LoadBalancer Ingress: 123456789-987654321.us-west-2.elb.amazonaws.com
Port: the-load-balancer-port-15672 15672/TCP
TargetPort: 15672/TCP
NodePort: the-load-balancer-port-15672 30080/TCP
Endpoints: 172.31.18.32:15672
Port: the-load-balancer-port-5672 5672/TCP
TargetPort: 5672/TCP
NodePort: the-load-balancer-port-5672 30081/TCP
Endpoints: 172.31.18.32:5672
Below is the the-load-balancer.yaml file used to create the RabbitMQ service:
apiVersion: v1
kind: Service
metadata:
  name: the-load-balancer
  labels:
    app: the-app
spec:
  type: LoadBalancer
  ports:
  - port: 15672
    nodePort: 30080
    protocol: TCP
    name: the-load-balancer-port-15672
  - port: 5672
    nodePort: 30081
    protocol: TCP
    name: the-load-balancer-port-5672
  selector:
    app: the-app
I've noticed that in your code you are using port 5672 to talk to the endpoint directly, while it is 15672 in the service definition, which is the port for the web console?
Make sure the load balancer service and RabbitMQ are in the same namespace as your application.
If not, you have to use the full DNS record, service-x.namespace-b.svc.cluster.local, according to the DNS for Services and Pods documentation.
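For illustration, a sketch of a hypothetical service-x living in namespace-b, which clients in other namespaces must address by its full DNS name (all names here are placeholders matching the pattern above):

apiVersion: v1
kind: Service
metadata:
  name: service-x
  namespace: namespace-b
  # From inside namespace-b: reachable as service-x
  # From any other namespace: service-x.namespace-b.svc.cluster.local
spec:
  selector:
    app: rabbitmq  # hypothetical selector
  ports:
  - port: 5672
    targetPort: 5672
    protocol: TCP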
I'm not sure if this is the preferred/correct way of setting up Kubernetes, but I have two websites, x.com and y.com, each with its own separate IP. Currently they run off separate EC2 instances, but I'm in the process of moving our architecture to Docker/Kubernetes on AWS. What I'd like to do is have a single nginx container that hands off requests to the appropriate backend services. However, I'm currently stuck trying to figure out how to point two IPs at the same container.
My k8s setup is like so:
Replication controller/pod for x.com
Replication controller/pod for y.com
Service for x.com
Service for y.com
Replication controller for nginx, specifying a single replica
Service for nginx specifying ports 80 and 443.
Is there a way for me to specify that I want two IPs pointing to the single nginx container, or is there a preferred k8s way of solving this problem?
You could use Ingress routing for this:
http://kubernetes.io/docs/user-guide/ingress/
http://blog.kubernetes.io/2016/03/Kubernetes-1.2-and-simplifying-advanced-networking-with-Ingress.html
https://github.com/nginxinc/kubernetes-ingress
Alternatively, without Ingress, you could set up services of type LoadBalancer and then create CNAMEs pointing to the ELBs matching the domains you want to route.
I ended up using two separate services/load balancers that point to the same nginx pod but use different target ports.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service1
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 880
    protocol: TCP
  - name: https
    port: 443
    targetPort: 8443
    protocol: TCP
  selector:
    app: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service2
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 980
    protocol: TCP
  - name: https
    port: 443
    targetPort: 9443
    protocol: TCP
  selector:
    app: nginx