AWS EKS: Service (LoadBalancer) running but not responding to requests

I am using AWS EKS.
I have deployed my Django app with Gunicorn in a Kubernetes cluster.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
    type: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
        type: web
    spec:
      containers:
      - name: vogofleet
        image: xx.xx.com/api:image2
        imagePullPolicy: Always
        env:
        - name: DATABASE_HOST
          value: "test-db-2.xx.xx.xx.xx.com"
        - name: DATABASE_PASSWORD
          value: "xxxyyyxxx"
        - name: DATABASE_USER
          value: "admin"
        - name: DATABASE_PORT
          value: "5432"
        - name: DATABASE_NAME
          value: "test"
        ports:
        - containerPort: 9000
I have applied these changes, and I can see my pod running in kubectl get pods.
Now I am trying to expose it via a Service object. Here is my Service manifest:
# service
---
apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    app: api
spec:
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: api
    type: web
  type: LoadBalancer
The service is also up and running. It has given me an external IP to access the service, which is the address of the load balancer, and I can see that it has launched a new load balancer in the AWS console. But I am not able to access it from the browser; it says the address didn't return any data. The ELB health check shows the instances as OutOfService.
There are other pods also running in the cluster. When I run printenv in those pods, here is the result,
root@consumer-9444cf7cd-4dr5z:/consumer# printenv | grep API
API_PORT_9000_TCP_ADDR=172.20.140.213
API_SERVICE_HOST=172.20.140.213
API_PORT_9000_TCP_PORT=9000
API_PORT=tcp://172.20.140.213:9000
API_PORT_9000_TCP=tcp://172.20.140.213:9000
API_PORT_9000_TCP_PROTO=tcp
API_SERVICE_PORT=9000
And I tried to check the connection to my api pod:
root@consumer-9444cf7cd-4dr5z:/consumer# telnet $API_PORT_9000_TCP_ADDR $API_PORT_9000_TCP_PORT
Trying 172.20.140.213...
telnet: Unable to connect to remote host: Connection refused
But when I do a port-forward to my localhost, I can access it there:
$ kubectl port-forward api-6d94dcb65d-br6px 9000
and check the connection,
$ nc -vz localhost 9000
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif lo0
src ::1 port 53299
dst ::1 port 9000
rank info not available
TCP aux info available
Connection to localhost port 9000 [tcp/cslistener] succeeded!
Why am I not able to access it from other containers or from the public internet? The security groups are correct.
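Not an answer from the original thread, but one hedged way to narrow this down: add a readinessProbe on the container port in the Deployment above, so the kubelet itself checks whether anything is listening on port 9000 on the pod IP (a minimal sketch; the probe values are arbitrary examples):
# Sketch only: goes under the vogofleet container in the Deployment above.
# The kubelet dials the pod IP, so a process bound only to 127.0.0.1 fails this probe.
readinessProbe:
  tcpSocket:
    port: 9000
  initialDelaySeconds: 5   # example values
  periodSeconds: 10
If the probe fails while Gunicorn is clearly running, the process is likely bound to 127.0.0.1 only, which would also be consistent with the connection refused from other pods and the OutOfService ELB health checks.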

I have the same problem. Here's the output of the kubectl describe service command.
kubectl describe services nginx-elb
Name: nginx-elb
Namespace: default
Labels: deploy=slido
Annotations: service.beta.kubernetes.io/aws-load-balancer-internal: true
Selector: deploy=slido
Type: LoadBalancer
IP: 10.100.29.66
LoadBalancer Ingress: internal-a2d259057e6f94965bfc1f08cf86d4ce-884461987.us-west-2.elb.amazonaws.com
Port: http 80/TCP
TargetPort: 3000/TCP
NodePort: http 32582/TCP
Endpoints: 192.168.60.119:3000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 119s service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 117s service-controller Ensured load balancer
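Worth noting: the aws-load-balancer-internal annotation in that output means an internal (VPC-only) ELB was provisioned, which is not reachable from the public internet. On the Service manifest it would look roughly like this (a sketch reconstructed from the describe output above; the annotation value must be a quoted string in YAML):
apiVersion: v1
kind: Service
metadata:
  name: nginx-elb
  labels:
    deploy: slido
  annotations:
    # Tells the AWS cloud provider to create an internal ELB (reachable only inside the VPC)
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    deploy: slido
  ports:
  - name: http
    port: 80
    targetPort: 3000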

Related

Connect to redis instance from Istio enabled pod in a EKS cluster

I am running an EKS cluster with Istio enabled. I have launched an EC2 instance where a Redis server is running. The EKS cluster and Redis are both in the same VPC, and all inbound and outbound rules are allowed for both of them. But when I try to access the Redis instance from inside a pod, it throws "Connection reset by peer", while it works fine from a non-Istio pod. What could be the reason?
Istio version:
image: docker.io/istio/pilot:1.4.3
imagePullPolicy: IfNotPresent
image: docker.io/istio/proxyv2:1.4.3
imagePullPolicy: IfNotPresent
I have also created a ServiceEntry in that namespace.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: svc-redis
  namespace: mynamespace
spec:
  hosts:
  - "redis-X.xxx.xxxx"
  location: MESH_EXTERNAL
  ports:
  - number: 6379
    name: http
    protocol: REDIS
  resolution: NONE
As you are using a domain name as the host, you need to set the resolution to DNS. When the resolution is set to NONE, Istio will try to connect to an IP address instead of resolving the domain name.
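Applied to the ServiceEntry above, that would look roughly like this sketch (it also switches the port to plain TCP with a redis port name, in line with the working example that follows):
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: svc-redis
  namespace: mynamespace
spec:
  hosts:
  - "redis-X.xxx.xxxx"
  location: MESH_EXTERNAL
  ports:
  - number: 6379
    name: redis
    protocol: TCP
  resolution: DNS   # resolve the hostname instead of expecting a static IP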
Here is my service entry for external Redis access.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: redis-svc
spec:
  hosts:
  - redis01.example.com
  ports:
  - number: 6379
    name: redis
    protocol: TCP
  resolution: DNS
  location: MESH_EXTERNAL

How to deploy a Kubernetes service using NodePort on Amazon AWS?

I have created a cluster on AWS EC2 using kops consisting of a master node and two worker nodes, all with public IPv4 assigned.
Now, I want to create a deployment with a service using NodePort to expose the application to the public.
After having created the service, I retrieve the following information, showing that it correctly identified my three pods:
nlykkei:~/projects/k8s-examples$ kubectl describe svc hello-svc
Name: hello-svc
Namespace: default
Labels: app=hello
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"hello"},"name":"hello-svc","namespace":"default"},"spec"...
Selector: app=hello-world
Type: NodePort
IP: 100.69.62.27
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30001/TCP
Endpoints: 100.96.1.5:8080,100.96.2.3:8080,100.96.2.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
However, when I try to visit any of my public IPv4 addresses on port 30001, I get no response from the server. I have already created a Security Group allowing all ingress traffic to port 30001 for all of the instances.
Everything works with Docker Desktop for Mac, and here I notice the following service field not present in the output above:
LoadBalancer Ingress: localhost
I've already studied https://kubernetes.io/docs/concepts/services-networking/service/, and think that NodePort should serve my needs?
Any help is appreciated!
So you want to have a service that can be accessed from the public. In order to achieve this I would recommend creating a ClusterIP service and then an Ingress for that service. So, assuming that you have the deployment hello-world serving at 8081, you will then have the following two objects:
Service:
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  ports:
  - name: service
    port: 8081        # or whatever you want
    protocol: TCP
    targetPort: 8080  # here goes the opened port in your pods
  selector:
    app: hello-world
  type: ClusterIP
Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  rules:
  - host: hello-world.mycutedomainname.com
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 8081  # or whatever you have set for the service port
        path: /
Note: the name tag in the service's port is optional.
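If you do keep the port name, the Ingress backend can also reference the Service port by that name instead of by number, for example (a sketch, not part of the original answer):
# Sketch: the backend inside spec.rules[].http.paths, referencing the Service port by name
- backend:
    serviceName: hello-world
    servicePort: service  # the named port "service" from the Service above, instead of 8081
  path: /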

Sonar cannot be accessed via Istio virtual service but can be locally accessed after port forwarding

I am trying to implement SonarQube in a Kubernetes cluster. The deployment is running properly and is also exposed via a VirtualService. I am able to open the UI via localhost:port/sonar, but I am not able to access it through my external IP. I understand that Sonar binds to localhost and does not allow access from outside the remote server. I am running this on GKE with a MySQL database. Here is my YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sonarqube
  namespace: sonar
  labels:
    service: sonarqube
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      name: sonarqube
      labels:
        name: sonarqube
    spec:
      terminationGracePeriodSeconds: 15
      initContainers:
      - name: volume-permission
        image: busybox
        command:
        - sh
        - -c
        - sysctl -w vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: sonarqube
        image: sonarqube:6.7
        resources:
          limits:
            memory: 4Gi
            cpu: 2
          requests:
            memory: 2Gi
            cpu: 1
        args:
        - -Dsonar.web.context=/sonar
        - -Dsonar.web.host=0.0.0.0
        env:
        - name: SONARQUBE_JDBC_USERNAME
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: SONARQUBE_JDBC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        - name: SONARQUBE_JDBC_URL
          value: jdbc:mysql://***.***.**.*:3306/sonar?useUnicode=true&characterEncoding=utf8
        ports:
        - containerPort: 9000
          name: sonarqube-port
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: sonarqube
    version: v1
  name: sonarqube
  namespace: sonar
spec:
  selector:
    name: sonarqube
  ports:
  - name: http
    port: 80
    targetPort: sonarqube-port
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube-internal
  namespace: sonar
spec:
  hosts:
  - sonarqube.staging.jeet11.internal
  - sonarqube
  gateways:
  - default/ilb-gateway
  - mesh
  http:
  - route:
    - destination:
        host: sonarqube
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube-external
  namespace: sonar
spec:
  hosts:
  - sonarqube.staging.jeet11.com
  gateways:
  - default/elb-gateway
  http:
  - route:
    - destination:
        host: sonarqube
---
The deployment completes successfully. My exposed service gives a public IP that has been mapped to the host URL, but I am unable to access the service at the host URL.
I need to change the mapping so that Sonar binds to the server IP, but I don't understand how to do that. I cannot bind it to my cluster IP, nor to my internal or external service IP.
What should I do? Please help!
I had the same issue recently and I managed to get this resolved today.
I hope the following solution will work for anyone facing the same issue!
Environment
Cloud provider: Azure (AKS). This should work regardless of which provider you use.
Istio version: 1.7.3
Kubernetes version: 1.16.10
Tools - Debugging
kubectl logs -n istio-system -l app=istiod
Shows the logs from istiod and the events happening in the control plane.
istioctl analyze -n <namespace>
This generally gives you any warnings and errors for a given namespace and lets you know if things are misconfigured.
Kiali - istioctl dashboard kiali
See if you are getting inbound traffic; it also shows you any misconfigurations.
Prometheus - istioctl dashboard prometheus
Query the metric istio_requests_total. This shows you the traffic going into the service. If there is any misconfiguration you will see the destination_app as unknown.
Issue
Unable to access sonarqube UI via external IP, but accessible via localhost (port-forward).
Unable to route traffic via Istio Ingressgateway.
Solution
Sonarqube Service Manifest
apiVersion: v1
kind: Service
metadata:
  name: sonarqube
  namespace: sonarqube
  labels:
    name: sonarqube
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 9000
    targetPort: 9000
  selector:
    app: sonarqube
status:
  loadBalancer: {}
Your targetPort is the container port. To avoid any confusion, just make the Service port number the same as the Service targetPort.
The port name is very important here. "Istio requires the service ports to follow the naming form of 'protocol[-suffix]', where the '-suffix' part is optional" - KIA0601 - Port name must follow <protocol>[-suffix] form
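For example, a port name like the following satisfies that rule, while a name such as web-ui would not, because web is not a recognised protocol (a sketch, not from the original answer):
ports:
- name: http-sonarqube   # "<protocol>[-suffix]": recognised protocol "http" plus an optional suffix
  port: 9000
  targetPort: 9000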
Istio Gateway and VirtualService manifest for sonarqube
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sonarqube-gateway
  namespace: sonarqube
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 9000
      name: http
      protocol: HTTP
    hosts:
    - "XXXX.XXXX.com.au"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube
  namespace: sonarqube
spec:
  hosts:
  - "XXXX.XXXX.com.au"
  gateways:
  - sonarqube-gateway
  http:
  - route:
    - destination:
        host: sonarqube
        port:
          number: 9000
Gateway protocol must be set to HTTP.
The Gateway server port and the VirtualService destination port are the same here. If your app's Service port is different, then your VirtualService destination port number should match the app's Service port. The Gateway server port should match the app's Service targetPort.
Now comes the fun bit: the hosts. If you want to access the service from outside the cluster, you need to have your host name (whatever host name you want to map to the SonarQube server) set up as a DNS A record pointing to the external public IP address of the istio-ingressgateway.
To get the EXTERNAL-IP address of the ingressgateway, run kubectl -n istio-system get service istio-ingressgateway.
If you do a simple nslookup (run nslookup <hostname>), the IP address you get must match the IP address assigned to the istio-ingressgateway service.
Expose a new port in the ingressgateway
Note that your sonarqube gateway port is a new port that you are introducing to Kubernetes, and you are telling the cluster to listen on that port. But your load balancer doesn't know about this port, so you need to open the specified gateway port on your Kubernetes external load balancer.
You don't need to change your load balancer service manually. You just need to update the ingress gateway to include the new port, which will update the load balancer automatically.
You can identify whether the port is causing issues by running istioctl analyze -n sonarqube. You should get the following warning:
Warn [IST0104] (Gateway sonarqube-gateway.sonarqube) The gateway refers to a port that is not exposed on the workload (pod selector istio=ingressgateway; port 9000)
Error: Analyzers found issues when analyzing namespace: sonarqube. See https://istio.io/docs/reference/config/analysis for more information about causes and resolutions.
You should get the corresponding error in the control plane. Run kubectl logs -n istio-system -l app=istiod.
At this point you need to update the Istio ingressgateway service to expose the new port. Run kubectl edit svc istio-ingressgateway -n istio-system and add the following section to the ports.
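A minimal sketch of the kind of ports entry meant here (the port name is an assumption; only the port number comes from the Gateway above):
# Sketch only: an additional entry under spec.ports of the istio-ingressgateway Service
- name: http-sonarqube
  port: 9000        # the new Gateway server port
  targetPort: 9000  # assumption: the gateway proxy listens on the same number for this custom port
  protocol: TCP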
Bypass creating a new port
In the previous section you saw how to expose a new port. This is optional, depending on your use case.
In this section you will see how to use a port that is already exposed.
If you look at the istio-ingressgateway service, you can see that there are default ports exposed. Here we are going to use port 80.
Your setup will look like the following:
To avoid specifying the port with your host name, just add a match uri prefix, as shown in the VirtualService manifest sketched below.
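A minimal sketch of that setup, reusing the ingressgateway's default port 80 and adding a uri prefix match (the host is the earlier placeholder; the prefix is an assumption):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sonarqube-gateway
  namespace: sonarqube
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80        # already exposed by the default istio-ingressgateway Service
      name: http
      protocol: HTTP
    hosts:
    - "XXXX.XXXX.com.au"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube
  namespace: sonarqube
spec:
  hosts:
  - "XXXX.XXXX.com.au"
  gateways:
  - sonarqube-gateway
  http:
  - match:
    - uri:
        prefix: /        # route everything for this host to sonarqube
    route:
    - destination:
        host: sonarqube
        port:
          number: 9000   # the sonarqube Service port from the manifest above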
Time for testing
If everything works up to this point as expected, then you are good to go.
During testing I made one mistake by not specifying the port. If you get a 404 status, that is still a good thing: this way you can verify which server it is using. If you set things up correctly, it should be the istio-envoy server, not nginx.
Without specifying the port, this will only work if you add the match uri prefix.
Do not pass the arguments; just try running without them once. It is working for me.
This is my deployment file; I hope it is helpful:
apiVersion: v1
kind: Service
metadata:
  name: sonarqube-service
spec:
  selector:
    app: sonarqube
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: sonarqube
  name: sonarqube
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
      - name: sonarqube
        image: sonarqube:7.1
        resources:
          requests:
            memory: "1200Mi"
            cpu: .10
          limits:
            memory: "2500Mi"
            cpu: .50
        volumeMounts:
        - mountPath: "/opt/sonarqube/data/"
          name: sonar-data
        - mountPath: "/opt/sonarqube/extensions/"
          name: sonar-extensions
        env:
        - name: "SONARQUBE_JDBC_USERNAME"
          value: "root" # Put your db username
        - name: "SONARQUBE_JDBC_URL"
          value: "jdbc:mysql://192.168.112.4:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true" # DB URL
        - name: "SONARQUBE_JDBC_PASSWORD"
          value: password
        ports:
        - containerPort: 9000
          protocol: TCP
      volumes:
      - name: sonar-data
        persistentVolumeClaim:
          claimName: sonar-data
      - name: sonar-extensions
        persistentVolumeClaim:
          claimName: sonar-extensions

How to connect to rabbitmq service using load balancer hostname

The kubectl describe service the-load-balancer command returns:
Name: the-load-balancer
Namespace: default
Labels: app=the-app
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"the-app"},"name":"the-load-balancer","namespac...
Selector: app=the-app
Type: LoadBalancer
IP: 10.100.129.251
LoadBalancer Ingress: 1234567-1234567890.us-west-2.elb.amazonaws.com
Port: the-load-balancer 15672/TCP
TargetPort: 15672/TCP
NodePort: the-load-balancer 30080/TCP
Endpoints: 172.31.77.44:15672
Session Affinity: None
External Traffic Policy: Cluster
The RabbitMQ server, which runs in another container behind the load balancer, is reachable from another container via the load balancer's endpoint 172.31.77.44:15672.
But it fails to connect using the the-load-balancer hostname or via its local 10.100.129.251 IP address.
What needs to be done in order to make the RabbitMQ service reachable via the load balancer's the-load-balancer hostname?
Edited later:
Running a simple Python test from another container:
import socket
print(socket.gethostbyname('the-load-balancer'))
returns a load balancer local IP 10.100.129.251.
Connecting to RabbitMQ using '172.31.18.32' works well:
import pika
credentials = pika.PlainCredentials('guest', 'guest')
parameters = pika.ConnectionParameters(host='172.31.18.32', port=5672, credentials=credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
print('...channel: %s' % channel)
But after replacing host='172.31.18.32' with host='the-load-balancer' or host='10.100.129.251', the client fails to connect.
When serving RabbitMQ from behind the Load Balancer you will need to open the ports 5672 and 15672. When configured properly the kubectl describe service the-load-balancer command should return both ports mapped to a local IP address:
Name: the-load-balancer
Namespace: default
Labels: app=the-app
Selector: app=the-app
Type: LoadBalancer
IP: 10.100.129.251
LoadBalancer Ingress: 123456789-987654321.us-west-2.elb.amazonaws.com
Port: the-load-balancer-port-15672 15672/TCP
TargetPort: 15672/TCP
NodePort: the-load-balancer-port-15672 30080/TCP
Endpoints: 172.31.18.32:15672
Port: the-load-balancer-port-5672 5672/TCP
TargetPort: 5672/TCP
NodePort: the-load-balancer-port-5672 30081/TCP
Endpoints: 172.31.18.32:5672
Below is the the-load-balancer.yaml file used to create the RabbitMQ service:
apiVersion: v1
kind: Service
metadata:
  name: the-load-balancer
  labels:
    app: the-app
spec:
  type: LoadBalancer
  ports:
  - port: 15672
    nodePort: 30080
    protocol: TCP
    name: the-load-balancer-port-15672
  - port: 5672
    nodePort: 30081
    protocol: TCP
    name: the-load-balancer-port-5672
  selector:
    app: the-app
I've noticed that in your code you are using port 5672 to talk to the endpoint directly, while it is 15672 in the service definition, which is the port for the web console.
Be sure that the load balancer service and RabbitMQ are in the same namespace as your application.
If not, you have to use the full DNS record, service-x.namespace-b.svc.cluster.local, according to the DNS for Services and Pods documentation.

Exposing the same service with the same URL but two different ports with traefik?

Recently I have been trying to set up a CI/CD flow with Kubernetes v1.7.3 and Jenkins v2.73.2 on AWS in China (the GFW blocks Docker Hub).
Right now I can expose services with traefik, but it seems I cannot expose the same service with the same URL on two different ports.
Ideally I would want to expose http://jenkins.mydomain.com as the Jenkins UI on port 80, as well as the jenkins-slave (jenkins-discovery) endpoint on port 50000.
For example, I'd want this to work:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
  namespace: default
spec:
  rules:
  - host: jenkins.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins-svc
          servicePort: 80
  - host: jenkins.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins-svc
          servicePort: 50000
and my jenkins-svc is defined as
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
  labels:
    run: jenkins
spec:
  selector:
    run: jenkins
  ports:
  - port: 80
    targetPort: 8080
    name: http
  - port: 50000
    targetPort: 50000
    name: slave
In reality the latter rule overwrites the former rule.
Furthermore, there are two plugins I have tried: kubernetes-cloud and kubernetes.
With the former option I cannot configure the jenkins-tunnel URL, so the slave fails to connect with the master; with the latter option I cannot pull from a private Docker registry such as AWS ECR (there is no place to provide credentials), so it is not able to create the slave (imagePullError).
Lastly, I am really just trying to get Jenkins to work (create slaves with my custom image, build with the slaves, and delete the slaves after the jobs are finished); any other solution is welcome.
If you want your Jenkins to be reachable from outside of your cluster, then you need to change your service configuration.
The default Service type is ClusterIP:
Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
You want its type to be NodePort:
Exposes the service on each Node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You'll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So your service should look like:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
  labels:
    run: jenkins
spec:
  selector:
    run: jenkins
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    name: http
  - port: 50000
    targetPort: 50000
    name: slave
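Not part of the original answer: if you need stable node ports (for example, to open them in a security group or point an external proxy at them), they can be pinned explicitly, as the RabbitMQ example earlier does. A sketch of the same ports section with example values:
# Sketch: the same spec.ports with explicit nodePort values (example numbers only)
ports:
- port: 80
  targetPort: 8080
  nodePort: 30080   # example; must fall in the cluster's NodePort range (30000-32767 by default)
  name: http
- port: 50000
  targetPort: 50000
  nodePort: 30050   # example
  name: slave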