Can't access my Kubernetes service even after exposing it with a LoadBalancer - amazon-web-services

I have created a replication controller in Kubernetes with the following configuration:
{
  "kind":"ReplicationController",
  "apiVersion":"v1",
  "metadata":{
    "name":"guestbook",
    "labels":{
      "app":"guestbook"
    }
  },
  "spec":{
    "replicas":1,
    "selector":{
      "app":"guestbook"
    },
    "template":{
      "metadata":{
        "labels":{
          "app":"guestbook"
        }
      },
      "spec":{
        "containers":[
          {
            "name":"guestbook",
            "image":"username/fsharp-microservice:v1",
            "ports":[
              {
                "name":"http-server",
                "containerPort":3000
              }
            ],
            "command": ["fsharpi", "/home/SuaveServer.fsx"]
          }
        ]
      }
    }
  }
}
The code of the service running on port 3000 is basically this:
#r "Suave.dll"
#r "Mono.Posix.dll"
open Suave
open Suave.Http
open Suave.Successful
open System
open System.Net
open System.Threading
open System.Diagnostics
open Mono.Unix
open Mono.Unix.Native
let app = OK "PONG"
let port = 3000us
let config =
    { defaultConfig with
        bindings = [ HttpBinding.mk HTTP IPAddress.Loopback port ]
        bufferSize = 8192
        maxOps = 10000 }
open System.Text.RegularExpressions
let cts = new CancellationTokenSource()
let listening, server = startWebServerAsync config app
Async.Start(server, cts.Token)
Console.WriteLine("Server should be started at this point")
Console.ReadLine()
After I create the replication controller I can see the pod:
$kubectl create -f guestbook.json
replicationcontroller "guestbook" created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
guestbook-0b9py 1/1 Running 0 32m
I want to access my web service, so I create a Service with type=LoadBalancer that will expose port 3000, using the following configuration:
{
  "kind":"Service",
  "apiVersion":"v1",
  "metadata":{
    "name":"guestbook",
    "labels":{
      "app":"guestbook"
    }
  },
  "spec":{
    "ports": [
      {
        "port":3000,
        "targetPort":"http-server"
      }
    ],
    "selector":{
      "app":"guestbook"
    },
    "type": "LoadBalancer"
  }
}
Here is the result:
$ kubectl create -f guestbook-service.json
service "guestbook" created
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
guestbook 10.0.82.40 3000/TCP 7s
kubernetes 10.0.0.1 <none> 443/TCP 3h
$ kubectl describe services
Name: guestbook
Namespace: default
Labels: app=guestbook
Selector: app=guestbook
Type: LoadBalancer
IP: 10.0.82.40
LoadBalancer Ingress: a43eee4a008cf11e68f210a4fa30c03e-1918213320.us-west-2.elb.amazonaws.com
Port: <unset> 3000/TCP
NodePort: <unset> 30877/TCP
Endpoints: 10.244.1.6:3000
Session Affinity: None
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
18s 18s 1 {service-controller } Normal CreatingLoadBalancer Creating load balancer
17s 17s 1 {service-controller } Normal CreatedLoadBalancer Created load balancer
Name: kubernetes
Namespace: default
Labels: component=apiserver,provider=kubernetes
Selector: <none>
Type: ClusterIP
IP: 10.0.0.1
Port: https 443/TCP
Endpoints: 172.20.0.9:443
Session Affinity: None
No events.
The "External IP" column is empty
I have tried to access the service using "LoadBalancer Ingress" but DNS name can't be resolved.
If I check in the AWS console - load balancer is created (but in the details panel there is a message "0 of 2 instances in service" because of health-checks).
I have also tried to expose my RC using kubectl expose --type=Load-Balancer, but result is the same.
What is the problem?

I fixed the error.
The problem was in the actual service: it needs to listen on 0.0.0.0 instead of 127.0.0.1 or localhost, so that it listens on every available network interface. More details on the difference between 0.0.0.0 and 127.0.0.1: https://serverfault.com/questions/78048/whats-the-difference-between-ip-address-0-0-0-0-and-127-0-0-1
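For reference, a minimal sketch of the corrected Suave binding under that fix (same HttpBinding.mk helper as in the script above; IPAddress.Any is 0.0.0.0):
// Bind to all interfaces so traffic forwarded by kube-proxy reaches the server,
// instead of IPAddress.Loopback, which only accepts connections from inside the pod.
let config =
    { defaultConfig with
        bindings = [ HttpBinding.mk HTTP IPAddress.Any port ]
        bufferSize = 8192
        maxOps = 10000 }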

Related

Redis deployed in AWS - Connection timeout from localhost Spring Boot app

Small question regarding Redis deployed in AWS (not AWS ElastiCache) and an issue connecting to it.
Here is the setup of the Redis deployed in AWS (pasting only the Kubernetes StatefulSet and Service):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      initContainers:
        - name: config
          image: redis:7.0.5-alpine
          command: [ "sh", "-c" ]
          args:
            - |
              cp /tmp/redis/redis.conf /etc/redis/redis.conf
              echo "finding master..."
              MASTER_FDQN=`hostname -f | sed -e 's/redis-[0-9]\./redis-0./'`
              if [ "$(redis-cli -h sentinel -p 5000 ping)" != "PONG" ]; then
                echo "master not found, defaulting to redis-0"
                if [ "$(hostname)" = "redis-0" ]; then
                  echo "this is redis-0, not updating config..."
                else
                  echo "updating redis.conf..."
                  echo "slaveof $MASTER_FDQN 6379" >> /etc/redis/redis.conf
                fi
              else
                echo "sentinel found, finding master"
                MASTER="$(redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name mymaster | grep -E '(^redis-\d{1,})|([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})')"
                echo "master found : $MASTER, updating redis.conf"
                echo "slaveof $MASTER 6379" >> /etc/redis/redis.conf
              fi
          volumeMounts:
            - name: redis-config
              mountPath: /etc/redis/
            - name: config
              mountPath: /tmp/redis/
      containers:
        - name: redis
          image: redis:7.0.5-alpine
          command: ["redis-server"]
          args: ["/etc/redis/redis.conf"]
          ports:
            - containerPort: 6379
              name: redis
          volumeMounts:
            - name: data
              mountPath: /data
            - name: redis-config
              mountPath: /etc/redis/
      volumes:
        - name: redis-config
          emptyDir: {}
        - name: config
          configMap:
            name: redis-config
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: nfs-1
        resources:
          requests:
            storage: 50Mi
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
    - port: 6379
      targetPort: 6379
      name: redis
  selector:
    app: redis
  type: LoadBalancer
The pods are healthy; I can exec into them and perform operations fine. Here is the output of kubectl get all:
NAME READY STATUS RESTARTS AGE
pod/redis-0 1/1 Running 0 22h
pod/redis-1 1/1 Running 0 22h
pod/redis-2 1/1 Running 0 22h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/redis LoadBalancer 192.168.45.55 10.51.5.2 6379:30315/TCP 26h
NAME READY AGE
statefulset.apps/redis 3/3 22h
Here is the describe of the service:
Name: redis
Namespace: Namespace
Labels: <none>
Annotations: <none>
Selector: app=redis
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 192.168.22.33
IPs: 192.168.22.33
LoadBalancer Ingress: 10.51.5.2
Port: redis 6379/TCP
TargetPort: 6379/TCP
NodePort: redis 30315/TCP
Endpoints: 192.xxx:6379,192.xxx:6379,192.xxx:6379
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal IPAllocated 68s metallb-controller Assigned IP ["10.51.5.2"]
Normal nodeAssigned 58s (x5 over 66s) metallb-speaker announcing from node "someaddress.com" with protocol "bgp"
Normal nodeAssigned 58s (x5 over 66s) metallb-speaker announcing from node "someaddress.com" with protocol "bgp"
I then try to connect to it, i.e. insert some data, with a very straightforward Spring Boot application. The application has no business logic; it is just trying to insert data.
Here are the relevant parts:
@Configuration
public class RedisConfiguration {

    @Bean
    public ReactiveRedisConnectionFactory reactiveRedisConnectionFactory() {
        return new LettuceConnectionFactory("10.51.5.2", 30315);
    }
}

@Repository
public class RedisRepository {

    private final ReactiveRedisOperations<String, String> reactiveRedisOperations;

    public RedisRepository(ReactiveRedisOperations<String, String> reactiveRedisOperations) {
        this.reactiveRedisOperations = reactiveRedisOperations;
    }

    public Mono<RedisPojo> save(RedisPojo redisPojo) {
        return reactiveRedisOperations.opsForValue()
            .set(redisPojo.getInput(), redisPojo.getOutput())
            .map(__ -> redisPojo);
    }
}
Each time I try to write data, I get this exception:
2022-12-02T20:20:08.015+08:00 ERROR 1184 --- [ctor-http-nio-3] a.w.r.e.AbstractErrorWebExceptionHandler : [8f16a752-1] 500 Server Error for HTTP POST "/save"
org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$ExceptionTranslatingConnectionProvider.translateException(LettuceConnectionFactory.java:1602) ~[spring-data-redis-3.0.0.jar:3.0.0]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
*__checkpoint ⇢ Handler com.redis.controller.RedisController#test(RedisRequest) [DispatcherHandler]
*__checkpoint ⇢ HTTP POST "/save" [ExceptionHandlingWebHandler]
Original Stack Trace:
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$ExceptionTranslatingConnectionProvider.translateException(LettuceConnectionFactory.java:1602) ~[spring-data-redis-3.0.0.jar:3.0.0]
Caused by: io.lettuce.core.RedisConnectionException: Unable to connect to 10.51.5.2/<unresolved>:30315
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:78) ~[lettuce-core-6.2.1.RELEASE.jar:6.2.1.RELEASE]
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:56) ~[lettuce-core-6.2.1.RELEASE.jar:6.2.1.RELEASE]
at io.lettuce.core.AbstractRedisClient.getConnection(AbstractRedisClient.java:350) ~[lettuce-core-6.2.1.RELEASE.jar:6.2.1.RELEASE]
at io.lettuce.core.RedisClient.connect(RedisClient.java:216) ~[lettuce-core-6.2.1.RELEASE.jar:6.2.1.RELEASE]
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.51.5.2:30315
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:261) ~[netty-transport-4.1.85.Final.jar:4.1.85.Final]
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98) ~[netty-common-4.1.85.Final.jar:4.1.85.Final]
This is particularly puzzling, because I am quite sure the code of the Spring Boot app is working. When I change the target of return new LettuceConnectionFactory("10.51.5.2", 30315); to
a regular Redis on my laptop ("localhost", 6379),
a dockerized Redis on my laptop, or
a dockerized Redis on prem,
everything works fine.
Therefore, I am quite puzzled about what I did wrong with the setup of this Redis in AWS.
What should I do in order to connect to it properly?
May I get some help please?
Thank you
By default, Redis binds itself to the IP addresses 127.0.0.1 and ::1 and does not accept connections on non-local interfaces. Chances are high that this is your main issue, and you may want to review your redis.conf file to bind Redis to the interface you need, or to the generic * -::*, as explained in the comments of the config file itself.
That being said, Redis also does not accept connections on non-local interfaces if the default user has no password - a security layer named protected mode. Thus you should either give your default user a password or disable protected mode in your redis.conf file.
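A minimal illustration of those two settings in redis.conf (the password here is a placeholder; adjust to your environment):
# Listen on all interfaces instead of only 127.0.0.1 / ::1
bind * -::*

# Either keep protected mode and give the default user a password...
requirepass change-me-to-a-strong-password

# ...or, only if the network is fully trusted, disable protected mode instead
# protected-mode no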
Not sure if this applies to your case but, as a side note, I would suggest always avoiding exposing Redis to the Internet.
You are mixing two things.
To reach this service from pods in different namespaces you do not need an external load balancer; you can just use the redis.namespace-name:6379 DNS name and it will just work. Such a DNS name exists for every service you create (but it only works inside Kubernetes).
Kubernetes will make sure that your traffic is routed to the proper pods (assuming there is more than one).
If you want to expose Redis from outside of Kubernetes, then you need to make sure there is connectivity from the outside, and then you need a network load balancer that forwards traffic to your Kubernetes service (in your case the NodePort, so you need an NLB with the EKS worker nodes on port 30315 as targets).
If your worker nodes have public IPs and their security groups allow connecting to them directly, you could try connecting to a worker node's IP directly just to test things out (without an LB), as in the check below.
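A quick connectivity test along those lines might look like this (the worker node IP is a placeholder; 30315 is the NodePort from the service above):
# Should answer PONG if the NodePort is reachable and Redis accepts the connection
redis-cli -h <worker-node-ip> -p 30315 ping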
And regardless of your setup, you can always create a proxy via kubectl:
kubectl port-forward -n redisNS svc/redis 6379:6379
and connect from the Spring Boot app to localhost:6379.
How do you want to connect from the app to Redis in your final setup?
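For example, while that port-forward is running, the connection factory from the question could be pointed at localhost instead of the LoadBalancer IP (a sketch; redisNS stands for whatever namespace the service lives in):
@Configuration
public class RedisConfiguration {

    @Bean
    public ReactiveRedisConnectionFactory reactiveRedisConnectionFactory() {
        // Via kubectl port-forward from the developer machine:
        return new LettuceConnectionFactory("localhost", 6379);
        // Or, if the app itself runs inside the cluster, use the service DNS name:
        // return new LettuceConnectionFactory("redis.redisNS", 6379);
    }
}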

How to connect to an EKS service from outside the cluster using a LoadBalancer in a private VPC

I am trying to expose an EKS deployment of Kafka outside the cluster, within the same VPC.
In Terraform I added an ingress rule to the Kafka security group:
ingress {
  from_port = 9092
  protocol  = "tcp"
  to_port   = 9092
  cidr_blocks = [
    "10.0.0.0/16",
  ]
}
This is the service YAML:
apiVersion: v1
kind: Service
metadata:
  name: bootstrap-external
  namespace: kafka
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "10.0.0.0/16"
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-0....d,sg-0db....ae"
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 9092
      targetPort: 9092
  selector:
    app: kafka
When trying to connect from another instance belonging to one of the security groups in the YAML, I seem to be able to establish a connection through the load balancer but not actually reach Kafka:
[ec2-user@ip-10-0-4-47 kafkacat]$ nc -zvw10 internal-a08....628f-1654182718.us-west-2.elb.amazonaws.com 9092
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 10.0.3.151:9092.
Ncat: 0 bytes sent, 0 bytes received in 0.05 seconds.
[ec2-user@ip-10-0-4-47 kafkacat]$ nmap -Pn internal-a0837....a0e628f-1654182718.us-east-2.elb.amazonaws.com -p 9092
Starting Nmap 6.40 ( http://nmap.org ) at 2021-02-28 07:19 UTC
Nmap scan report for internal-a083747ab.....8f-1654182718.us-east-2.elb.amazonaws.com (10.0.2.41)
Host is up (0.00088s latency).
Other addresses for internal-a083747ab....36f0a0e628f-1654182718.us-east-2.elb.amazonaws.com (not scanned): 10.0.3.151 10.0.1.85
rDNS record for 10.0.2.41: ip-10-0-2-41.us-east-2.compute.internal
PORT STATE SERVICE
9092/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 0.03 seconds
[ec2-user@ip-10-0-4-47 kafkacat]$ kafkacat -b internal-a083747abf4....-1654182718.us-east-2.elb.amazonaws.com:9092 -t models
% Auto-selecting Consumer mode (use -P or -C to override)
% ERROR: Local: Host resolution failure: kafka-2.broker.kafka.svc.cluster.local:9092/2: Failed to resolve 'kafka-2.broker.kafka.svc.cluster.local:9092': Name or service not known
% ERROR: Local: Host resolution failure: kafka-1.broker.kafka.svc.cluster.local:9092/1: Failed to resolve 'kafka-1.broker.kafka.svc.cluster.local:9092': Name or service not known
% ERROR: Local: Host resolution failure: kafka-0.broker.kafka.svc.cluster.local:9092/0: Failed to resolve 'kafka-0.broker.kafka.svc.cluster.local:9092': Name or service not known
^C[ec2-user@ip-10-0-4-47 kafkacat]$
We solved the Kafka connection by:
Adding an ingress rule to the Kafka worker security group (we use Terraform):
ingress {
  from_port = 9094
  protocol  = "tcp"
  to_port   = 9094
  cidr_blocks = [
    "10.0.0.0/16",
  ]
}
Provisioning a load balancer service for each broker in Kubernetes YAML (note that the last digit of the nodePort corresponds to the broker's StatefulSet ordinal).
apiVersion: v1
kind: Service
metadata:
  name: bootstrap-external-0
  namespace: kafka
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "10.0.0.0/16"
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: sg-....d,sg-0db14....e,sg-001ce.....e,sg-0fe....15d13c
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      targetPort: 9094
      port: 32400
      nodePort: 32400
  selector:
    app: kafka
    kafka-broker-id: "0"
Retrieving the load balancer name by parsing kubectl -n kafka get svc bootstrap-external-0.
Adding a DNS name by convention using Route 53.
We plan to automate this by terraforming the Route 53 record after the load balancer is created, along the lines of the sketch below.
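For instance, the load balancer hostname can be read with a jsonpath query instead of parsing the full output, and then published as a record. A rough Terraform sketch under assumed names (var.private_zone_id, var.bootstrap_external_0_elb_hostname and the record name are placeholders, not part of the original setup):
kubectl -n kafka get svc bootstrap-external-0 -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# Hypothetical sketch: publish a friendly CNAME for the broker's ELB hostname.
resource "aws_route53_record" "kafka_broker_0" {
  zone_id = var.private_zone_id
  name    = "kafka-0.example.internal"
  type    = "CNAME"
  ttl     = 300
  records = [var.bootstrap_external_0_elb_hostname]
}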

GCP ingress fails to be created - Error during sync: Error running backend syncing routine: googleapi: got HTTP response code 404 with body: Not Found

I'm trying a simple ingress in GKE,
following the example from https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress.
The pods are up and running and the services are active. When I create the ingress I'm getting:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 48m loadbalancer-controller default/my-ingress
Warning Sync 2m32s (x25 over 48m) loadbalancer-controller Error during sync: Error running backend syncing routine: googleapi: got HTTP response code 404 with body: Not Found
I can't find the source of the problem. Any suggestions on where to look?
I have checked the cluster add-ons and permissions:
httpLoadBalancing enabled
- https://www.googleapis.com/auth/compute
- https://www.googleapis.com/auth/devstorage.read_only
- https://www.googleapis.com/auth/logging.write
- https://www.googleapis.com/auth/monitoring
- https://www.googleapis.com/auth/servicecontrol
- https://www.googleapis.com/auth/service.management.readonly
- https://www.googleapis.com/auth/trace.append
NAME READY STATUS RESTARTS AGE
hello-kubernetes-deployment-f6cb6cf4f-kszd9 1/1 Running 0 1h
hello-kubernetes-deployment-f6cb6cf4f-lw49t 1/1 Running 0 1h
hello-kubernetes-deployment-f6cb6cf4f-qqgxs 1/1 Running 0 1h
hello-world-deployment-5cfbc486f-4c2bm 1/1 Running 0 1h
hello-world-deployment-5cfbc486f-dmcqf 1/1 Running 0 1h
hello-world-deployment-5cfbc486f-rnpcc 1/1 Running 0 1h
Name: hello-world
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"hello-world","namespace":"default"},"spec":{"ports":[{"port":6000...
Selector: department=world,greeting=hello
Type: NodePort
IP: 10.59.254.88
Port: <unset> 60000/TCP
TargetPort: 50000/TCP
NodePort: <unset> 30418/TCP
Endpoints: 10.56.2.7:50000,10.56.3.6:50000,10.56.6.4:50000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: hello-kubernetes
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"hello-kubernetes","namespace":"default"},"spec":{"ports":[{"port"...
Selector: department=kubernetes,greeting=hello
Type: NodePort
IP: 10.59.251.189
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 32464/TCP
Endpoints: 10.56.2.6:8080,10.56.6.3:8080,10.56.8.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: my-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (10.56.0.9:8080)
Rules:
Host Path Backends
---- ---- --------
*
/* hello-world:60000 (<none>)
/kube hello-kubernetes:80 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"gce"},"name":"my-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"hello-world","servicePort":60000},"path":"/*"},{"backend":{"serviceName":"hello-kubernetes","servicePort":80},"path":"/kube"}]}}]}}
kubernetes.io/ingress.class: gce
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 107s loadbalancer-controller default/my-ingress
Warning Sync 66s (x15 over 107s) loadbalancer-controller Error during sync: Error running backend syncing routine: googleapi: got HTTP response code 404 with body: Not Found
Pulumi Cluster Config
{
  "name": "test-cluster",
  "region": "europe-west4",
  "addonsConfig": {
    "httpLoadBalancing": {
      "disabled": false
    },
    "kubernetesDashboard": {
      "disabled": false
    }
  },
  "ipAllocationPolicy": {},
  "pools": [
    {
      "name": "default-pool",
      "initialNodeCount": 1,
      "nodeConfig": {
        "oauthScopes": [
          "https://www.googleapis.com/auth/compute",
          "https://www.googleapis.com/auth/devstorage.read_only",
          "https://www.googleapis.com/auth/service.management",
          "https://www.googleapis.com/auth/servicecontrol",
          "https://www.googleapis.com/auth/logging.write",
          "https://www.googleapis.com/auth/monitoring",
          "https://www.googleapis.com/auth/trace.append",
          "https://www.googleapis.com/auth/cloud-platform"
        ],
        "machineType": "n1-standard-1",
        "labels": {
          "pool": "api-zero"
        }
      },
      "management": {
        "autoUpgrade": false,
        "autoRepair": true
      },
      "autoscaling": {
        "minNodeCount": 1,
        "maxNodeCount": 20
      }
    },
    {
      "name": "outbound",
      "initialNodeCount": 2,
      "nodeConfig": {
        "machineType": "custom-1-1024",
        "oauthScopes": [
          "https://www.googleapis.com/auth/compute",
          "https://www.googleapis.com/auth/devstorage.read_only",
          "https://www.googleapis.com/auth/service.management",
          "https://www.googleapis.com/auth/servicecontrol",
          "https://www.googleapis.com/auth/logging.write",
          "https://www.googleapis.com/auth/monitoring",
          "https://www.googleapis.com/auth/trace.append",
          "https://www.googleapis.com/auth/cloud-platform"
        ],
        "labels": {
          "pool": "outbound"
        }
      },
      "management": {
        "autoUpgrade": false,
        "autoRepair": true
      }
    }
  ]
}
The author of this post eventually figured out that the issue persists only when the cluster is bootstrapped with Pulumi.
It looks like you are missing a default backend (the L7 HTTP load balancer backend) for your default ingress controller. From what I observed, it's not deployed when you have the Istio add-on enabled in your GKE cluster (Istio has its own default ingress/egress gateways).
Please verify whether it's up and running in your cluster:
kubectl get pod -n kube-system | grep l7-default-backend
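If that returns nothing, it can also be worth checking for the corresponding default backend deployment and service referenced by the ingress (default-http-backend above). An illustrative check, not an exhaustive diagnosis:
kubectl get deployment,service -n kube-system | grep -E 'l7-default-backend|default-http-backend'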

AWS ALB not resolving

So I have an EKS cluster and have set up the AWS ALB Ingress Controller:
https://github.com/kubernetes-sigs/aws-alb-ingress-controller
I'm trying to set up Grafana here, and the Ingress is created but it doesn't seem to resolve at all.
I have the following Ingress:
$ kubectl describe ingress grafana
Name: grafana
Namespace: orbix-mvp
Address: 4ae1e4ba-orbixmvp-grafana-fd7d-993303634.eu-central-1.elb.amazonaws.com
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
grafana-orbix.orbixpay.com
/ grafana:80 (<none>)
Annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-2016-08
alb.ingress.kubernetes.io/subnets: subnet-08431d96168e36c30,subnet-0e2a7e2766852bf8a
alb.ingress.kubernetes.io/success-codes: 302
kubernetes.io/ingress.class: alb
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 45m alb-ingress-controller LoadBalancer 4ae1e4ba-orbixmvp-grafana-fd7d created, ARN: arn:aws:elasticloadbalancing:eu-central-1:109153834985:loadbalancer/app/4ae1e4ba-orbixmvp-grafana-fd7d/4b98cb7027b71697
Normal CREATE 45m alb-ingress-controller rule 1 created with conditions [{ Field: "host-header", Values: ["grafana-orbix.orbixpay.com"] },{ Field: "path-pattern", Values: ["/"] }]
The backend for it is the following service:
$ kubectl describe service grafana
Name: grafana
Namespace: orbix-mvp
Labels: app=grafana
chart=grafana-1.25.1
heritage=Tiller
release=grafana
Annotations: <none>
Selector: app=grafana,release=grafana
Type: NodePort
IP: 172.20.11.232
Port: service 80/TCP
TargetPort: 3000/TCP
NodePort: service 30772/TCP
Endpoints: 10.0.0.180:3000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
It does have a proper endpoint:
$ kubectl get endpoints | grep grafana
grafana 10.0.0.180:3000 46m
The pod itself is properly labeled and has the correct IP, matching the endpoint above:
$ kubectl describe pod grafana-bdc977fd4-ptzhg
Name: grafana-bdc977fd4-ptzhg
Namespace: orbix-mvp
Priority: 0
PriorityClassName: <none>
Node: ip-10-0-0-230.eu-central-1.compute.internal/10.0.0.230
Start Time: Mon, 11 Feb 2019 13:24:43 +0200
Labels: app=grafana
pod-template-hash=687533980
release=grafana
Annotations: <none>
Status: Running
IP: 10.0.0.180
My AWS account has the load balancer listed as active, the subnets are in the same VPC as the cluster, and the security groups are generated by the Ingress Controller.
Everything seems to be set up properly; however, when I access the load balancer address, it just times out.
$ kubectl get ingresses
NAME HOSTS ADDRESS PORTS AGE
grafana grafana-orbix.orbixpay.com 4ae1e4ba-orbixmvp-grafana-fd7d-993303634.eu-central-1.elb.amazonaws.com 80 49m
I actually figured it out - the Ingress configuration was only allowing traffic for the domain. That excludes traffic addressed to the load balancer's own DNS name (which I had assumed is allowed by default).
Basically, a host of * needs to be allowed in order for the load balancer URL to work too. Also, if the app redirects to /login like in my case, all paths need to be allowed as well, since that redirect doesn't work if the only path specified is /.
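For illustration, a rule without a host restriction and with a wildcard path might look roughly like this (a sketch against the extensions/v1beta1 Ingress API this controller generation uses; the service name and port follow the question):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: orbix-mvp
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - http:           # no "host" field, so the rule also matches the raw ALB hostname
        paths:
          - path: /*  # wildcard path so redirects such as /login still match
            backend:
              serviceName: grafana
              servicePort: 80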

Not getting a public DNS name for a service (stuck in pending) - OpenShift on AWS

I followed the installation guide to set up the cluster: https://s3.amazonaws.com/quickstart-reference/redhat/openshift/latest/doc/red-hat-openshift-on-the-aws-cloud.pdf
I'm able to get the public DNS name for a service in Kubernetes but not in OpenShift. It is a very basic thing; I don't know why it is not working. I'm attaching the manifest files that are used to create the app and server. It is not working in OpenShift.
prometheus-configmap.yml
prometheus-rbac.yml
prometheus-deployment.yml
In K8s
kubectl apply -f prometheus-configmap.yml
kubectl apply -f prometheus-rbac.yml
kubectl apply -f prometheus-deployment.yml
veeru@ultron:~/prometheus-k8s-monitoring$ kubectl describe svc prometheus-test
Name: prometheus-test
Namespace: default
Labels: name=prometheus-test
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"prometheus.io/scrape":"true"},"labels":{"name":"prometheus-test"},"name":"prometheus-te...
prometheus.io/scrape=true
Selector: app=prometheus-test
Type: LoadBalancer
IP: 100.xx.xx.xx
LoadBalancer Ingress: xxxxx-1679955855.us-east-2.elb.amazonaws.com
Port: prometheus-test 9090/TCP
TargetPort: 9090/TCP
NodePort: prometheus-test 31558/TCP
Endpoints: 100.xx.xx.xx:9090
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 9m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 9m service-controller Ensured load balancer
Above you can see that I got a LoadBalancer Ingress with a public DNS name.
In OpenShift
kubectl apply -f prometheus-configmap.yml
kubectl apply -f prometheus-rbac.yml
kubectl apply -f prometheus-deployment.yml
root@ultron:/home/veeru/prometheus-k8s-monitoring# oc describe svc prometheus-test
Name: prometheus-test
Namespace: spinnaker
Labels: name=prometheus-test
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"prometheus.io/scrape":"true"},"labels":{"name":"prometheus-test"},"name":"prometheus-te...
prometheus.io/scrape=true
Selector: app=prometheus-test
Type: LoadBalancer
IP: 172.30.134.153
Port: prometheus-test 9090/TCP
NodePort: prometheus-test 31667/TCP
Endpoints: <none>
Session Affinity: None
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
10m 36s 8 service-controller Normal CreatingLoadBalancer Creating load balancer
10m 36s 8 service-controller Warning CreatingLoadBalancerFailed Error creating load balancer (will retry): Failed to create load balancer for service spinnaker/prometheus-test: could not find any suitable subnets for creating the ELB
You can see the status: it failed to create the load balancer for the service.
If I specify an annotation like service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0,
then I'm able to get the "internal" DNS name for the service:
root@ultron:/home/veeru/prometheus-k8s-monitoring# oc describe svc test4-dev
Name: test4-dev
Namespace: default
Labels: <none>
Annotations: service.beta.kubernetes.io/aws-load-balancer-internal=0.0.0.0/0
Selector: load-balancer-test4-dev=true
Type: LoadBalancer
IP: 172.30.177.217
LoadBalancer Ingress: internal-xxxxx-298335522.us-east-2.elb.amazonaws.com
Port: http 8080/TCP
TargetPort: 8080/TCP
NodePort: http 31595/TCP
Endpoints: 10.131.0.75:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CreatingLoadBalancer 1m (x208 over 16h) service-controller Creating load balancer
Is OpenShift not using an AWS ELB to create the public DNS name?
OK, instead of relying on an AWS load balancer to provide the public DNS name, I configured a routing subdomain in /etc/openshift/master/master-config.yaml.
Create an A record (wildcard DNS): *.cluster.example.com -> your master IP
Specify the subdomain in /etc/openshift/master/master-config.yaml:
routingConfig:
  subdomain: cluster.example.com
serviceAccountConfig:
Restart the daemons:
systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
After this you should be able to create an OpenShift Route, for example:
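A minimal sketch using the service from this question (the hostname is a placeholder under the configured subdomain):
oc expose svc/prometheus-test --hostname=prometheus.cluster.example.com
oc get route prometheus-test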
Resources:
https://docs.openshift.com/container-platform/3.7/install_config/router/default_haproxy_router.html#customizing-the-default-routing-subdomain
https://docs.openshift.com/container-platform/3.7/install_config/install/prerequisites.html#wildcard-dns-prereq
https://access.redhat.com/solutions/2081043