kubectl expose - did I miss something? - kubectl

Hi, I'm trying to get access to the Spring Boot tutorial app through fabric8.
after:
C:\Users\gregor>kubectl expose deployment springboottut --type=LoadBalancer --name=my-service
service "my-service" exposed
C:\Users\gregor>kubectl get services my-service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service 10.0.0.200 <pending> 8080:30852/TCP,9779:32327/TCP,8778:31587/TCP 19s
C:\Users\gregor>kubectl describe services my-service
Name: my-service
Namespace: default
Labels: group=net.sklorz
project=springboottut
provider=fabric8
version=0.0.1-SNAPSHOT
Annotations: <none>
Selector: group=net.sklorz,project=springboottut,provider=fabric8,version=0.0.1-SNAPSHOT
Type: LoadBalancer
IP: 10.0.0.200
Port: port-1 8080/TCP
NodePort: port-1 30852/TCP
Endpoints: 172.17.0.9:8080
Port: port-2 9779/TCP
NodePort: port-2 32327/TCP
Endpoints: 172.17.0.9:9779
Port: port-3 8778/TCP
NodePort: port-3 31587/TCP
Endpoints: 172.17.0.9:8778
Session Affinity: None
Events: <none>
C:\Users\gregor> kubectl get pods --output=wide
NAME READY STATUS RESTARTS AGE IP NODE
configmapcontroller-4273343753-hfg5q 1/1 Running 17 6d 172.17.0.7 minikube
exposecontroller-1770961830-hbkgg 1/1 Running 17 6d 172.17.0.6 minikube
fabric8-3873669821-rhvw5 2/2 Running 33 6d 172.17.0.2 minikube
fabric8-docker-registry-125311296-ghrl8 1/1 Running 17 6d 172.17.0.11 minikube
fabric8-forge-1088523184-k0q82 1/1 Running 17 6d 172.17.0.4 minikube
gogs-2069416242-nc1j6 1/1 Running 15 6d 172.17.0.8 minikube
jenkins-56914896-5zcl2 1/1 Running 27 6d 172.17.0.5 minikube
nexus-2230784709-1k9kr 1/1 Running 17 6d 172.17.0.12 minikube
springboottut-1863166851-0778n 1/1 Running 0 16m 172.17.0.9 minikube
Then, pointing the browser at
http://172.17.0.9:8080
or
http://10.0.0.200:8080
the connection times out.
I obviously missed something, and the docs don't give me any more hints. Any ideas what's wrong, please?
Thanks for any help.

Both addresses (172.* and 10.*) you are trying to access are private IPs that won’t be directly addressable over the public internet.
Your service listed the external IP as pending.
EXTERNAL-IP
<pending>
Once that value is filled in by the cloud provider, that value is the public endpoint you should use as the address in your browser. Don't forget to add the port (8080) if it's needed.
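If this is running on minikube (as the NODE column in the pod listing suggests), the LoadBalancer external IP will typically stay <pending>, because there is no cloud provider to provision one. In that case you can still reach the app through the NodePort shown above (30852 for port 8080); a quick sketch, assuming minikube runs on the same machine:
minikube ip                           # prints the node IP
curl http://$(minikube ip):30852      # reach the service via its NodePort
minikube service my-service --url     # or let minikube print the reachable URL(s)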

Related

accessing kubernetes service from local host

I created a single-node cluster. There is a NodePort service:
kubectl get all --namespace default
service/backend-org-1-substra-backend-server NodePort 10.43.81.5 <none> 8000:30068/TCP 4d23h
The node ip is
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3d-k3s-default-server-0 Ready control-plane,master 5d v1.24.4+k3s1 172.18.0.2 <none> K3s dev 5.15.0-1028-aws containerd://1.6.6-k3s1
From the same host, but not inside the cluster, I can ping the 172.18.0.2 ip. Since the backend-org-1-substra-backend-server is a nodeport, shouldn't I be able to access it by
curl 172.18.0.2:30068? I get
curl: (7) Failed to connect to 172.18.0.2 port 30068 after 0 ms: Connection refused
additional information:
$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
$ kubectl get nodes -o yaml
...
addresses:
- address: 172.24.0.2
type: InternalIP
- address: k3d-k3s-default-server-0
type: Hostname
allocatable:
$ kubectl describe svc backend-org-1-substra-backend-server
Name: backend-org-1-substra-backend-server
Namespace: org-1
Labels: app.kubernetes.io/instance=backend-org-1
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=substra-backend-server
app.kubernetes.io/part-of=substra-backend
app.kubernetes.io/version=0.34.1
helm.sh/chart=substra-backend-22.3.1
skaffold.dev/run-id=394a8d19-bbc8-4a3b-b04e-08e0fff40681
Annotations: meta.helm.sh/release-name: backend-org-1
meta.helm.sh/release-namespace: org-1
Selector: app.kubernetes.io/instance=backend-org-1,app.kubernetes.io/name=substra-backend-server
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.68.217
IPs: 10.43.68.217
Port: http 8000/TCP
TargetPort: http/TCP
NodePort: http 31960/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Here, I noticed the Endpoints field shows <none>, which worries me.
I followed the doc at https://docs.substra.org/en/stable/contributing/getting-started.html
It's a lot to ask someone to replicate the whole thing.
My point is that, AFAIK, a NodePort service allows callers from outside the cluster to reach pods inside the cluster. But neither the cluster IP nor the node IP lets me curl that service.
I found that it was due to a faulty installation. Now a wget to the load balancer IP and port does get a connection.
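For anyone hitting the same symptom: an Endpoints value of <none> means the service selector matches no ready pods, so the NodePort has nothing to forward to. A quick way to check, using the namespace and selector from the describe output above (a sketch, not specific to this install):
kubectl get pods -n org-1 -l app.kubernetes.io/instance=backend-org-1,app.kubernetes.io/name=substra-backend-server
kubectl get endpoints -n org-1 backend-org-1-substra-backend-server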

Golang REST API Deployment on AWS EKS Fails with CrashLoopBackOff

I'm trying to deploy a simple REST API written in Golang to AWS EKS.
I created an EKS cluster on AWS using Terraform and applied the AWS load balancer controller Helm chart to it.
All resources in the cluster look like:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/aws-load-balancer-controller-5947f7c854-fgwk2 1/1 Running 0 75m
kube-system pod/aws-load-balancer-controller-5947f7c854-gkttb 1/1 Running 0 75m
kube-system pod/aws-node-dfc7r 1/1 Running 0 120m
kube-system pod/aws-node-hpn4z 1/1 Running 0 120m
kube-system pod/aws-node-s6mng 1/1 Running 0 120m
kube-system pod/coredns-66cb55d4f4-5l7vm 1/1 Running 0 127m
kube-system pod/coredns-66cb55d4f4-frk6p 1/1 Running 0 127m
kube-system pod/kube-proxy-6ndf5 1/1 Running 0 120m
kube-system pod/kube-proxy-s95qk 1/1 Running 0 120m
kube-system pod/kube-proxy-vdrdd 1/1 Running 0 120m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 127m
kube-system service/aws-load-balancer-webhook-service ClusterIP 10.100.202.90 <none> 443/TCP 75m
kube-system service/kube-dns ClusterIP 10.100.0.10 <none> 53/UDP,53/TCP 127m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/aws-node 3 3 3 3 3 <none> 127m
kube-system daemonset.apps/kube-proxy 3 3 3 3 3 <none> 127m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/aws-load-balancer-controller 2/2 2 2 75m
kube-system deployment.apps/coredns 2/2 2 2 127m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/aws-load-balancer-controller-5947f7c854 2 2 2 75m
kube-system replicaset.apps/coredns-66cb55d4f4 2 2 2 127m
I can run the application locally with Go and with Docker. But releasing this on AWS EKS always throws CrashLoopBackOff.
Running kubectl describe pod PODNAME shows:
Name: go-api-55d74b9546-dkk9g
Namespace: default
Priority: 0
Node: ip-172-16-1-191.ec2.internal/172.16.1.191
Start Time: Tue, 15 Mar 2022 07:04:08 -0700
Labels: app=go-api
pod-template-hash=55d74b9546
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 172.16.1.195
IPs:
IP: 172.16.1.195
Controlled By: ReplicaSet/go-api-55d74b9546
Containers:
go-api:
Container ID: docker://a4bc07b60c85fd308157d967d2d0d688d8eeccfe4c829102eb929ca82fb25595
Image: saurabhmish/golang-hello:latest
Image ID: docker-pullable://saurabhmish/golang-hello@sha256:f79a495ad17710b569136f611ae3c8191173400e2cbb9cfe416e75e2af6f7874
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 15 Mar 2022 07:09:50 -0700
Finished: Tue, 15 Mar 2022 07:09:50 -0700
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jt4gp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-jt4gp:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m31s default-scheduler Successfully assigned default/go-api-55d74b9546-dkk9g to ip-172-16-1-191.ec2.internal
Normal Pulled 7m17s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 12.77458991s
Normal Pulled 7m16s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 110.127771ms
Normal Pulled 7m3s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 109.617419ms
Normal Created 6m37s (x4 over 7m17s) kubelet Created container go-api
Normal Started 6m37s (x4 over 7m17s) kubelet Started container go-api
Normal Pulled 6m37s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 218.952336ms
Normal Pulling 5m56s (x5 over 7m30s) kubelet Pulling image "saurabhmish/golang-hello:latest"
Normal Pulled 5m56s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 108.105083ms
Warning BackOff 2m28s (x24 over 7m15s) kubelet Back-off restarting failed container
Running kubectl logs PODNAME and kubectl logs PODNAME -c go-api shows standard_init_linux.go:228: exec user process caused: exec format error
Manifests:
go-deploy.yaml ( This is the Docker Hub Image with documentation )
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-api
  labels:
    app: go-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: go-api
  strategy: {}
  template:
    metadata:
      labels:
        app: go-api
    spec:
      containers:
        - name: go-api
          image: saurabhmish/golang-hello:latest
          ports:
            - containerPort: 3000
          resources: {}
go-service.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: go-api
spec:
  selector:
    app: go-api
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
How can I fix this error?
Posting this as Community wiki for better visibility.
Feel free to expand it.
Thanks to @David Maze, who pointed to the solution. There is an article, 'Build Intel64-compatible Docker images from Mac M1 (ARM)' (by Beppe Catanese), here.
This article describes the underlying problem well.
You are developing/building on the ARM architecture (Mac M1), but you deploy the Docker image to an x86-64-based Kubernetes cluster.
Solution:
Option A: use buildx
Buildx is a Docker plugin that, among other features, lets you build images for various target platforms.
$ docker buildx build --platform linux/amd64 -t myapp .
Option B: set DOCKER_DEFAULT_PLATFORM
The DOCKER_DEFAULT_PLATFORM environment variable lets you set the default platform for the commands that take the --platform flag.
export DOCKER_DEFAULT_PLATFORM=linux/amd64
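A quick way to confirm the architecture mismatch before redeploying (a sketch; it assumes Docker is available locally and uses the image name from the manifest above):
docker pull saurabhmish/golang-hello:latest
docker inspect --format '{{.Os}}/{{.Architecture}}' saurabhmish/golang-hello:latest
# an x86-64 EKS node needs linux/amd64; seeing linux/arm64 here would explain the exec format error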
A CrashLoopBackOff means that you have a pod starting, crashing, starting again, and then crashing again.
The error may also come from the application itself, for example when it cannot connect to a database, Redis, etc.
You may find something useful here:
My kubernetes pods keep crashing with "CrashLoopBackOff" but I can't find any log

Unable to view kiali dashboard

I installed Istio version 1.6.9 with the steps below.
Install Istio Version 1.6.9
wget https://github.com/istio/istio/releases/download/1.6.9/istio-1.6.9-linux-amd64.tar.gz
tar -xzvf istio-1.6.9-linux-amd64.tar.gz
cd istio-1.6.9
cd bin/
sudo mv istioctl /usr/local/bin/
istioctl --version
istioctl install --set profile=demo
I want to access the Kiali dashboard but I am unable to figure out how to access it!
I can see kiali is running in pod:
kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-5dc4b4676c-wcb59 1/1 Running 0 32h
istio-egressgateway-5889bb8976-stlqd 1/1 Running 0 32h
istio-ingressgateway-699d97bdbf-w6x46 1/1 Running 0 32h
istio-tracing-8584b4d7f9-p66wh 1/1 Running 0 32h
istiod-86d4497c9-xv2km 1/1 Running 0 32h
kiali-6f457f5964-6sssn 1/1 Running 0 32h
prometheus-5d64cf8b79-2kdww 2/2 Running 0 32h
I am able to see the kiali as services as well:
kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.100.101.71 <none> 3000/TCP 32h
istio-egressgateway ClusterIP 10.100.34.75 <none> 80/TCP,443/TCP,15443/TCP 32h
istio-ingressgateway LoadBalancer 10.100.84.203 a736b038af6b5478087f0682ddb4dbbb-1317589033.ap-southeast-2.elb.amazonaws.com 15021:31918/TCP,80:32736/TCP,443:30611/TCP,31400:30637/TCP,15443:31579/TCP 32h
istiod ClusterIP 10.100.111.159 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP,853/TCP 32h
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 32h
jaeger-collector ClusterIP 10.100.84.202 <none> 14267/TCP,14268/TCP,14250/TCP 32h
jaeger-collector-headless ClusterIP None <none> 14250/TCP 32h
jaeger-query ClusterIP 10.100.165.216 <none> 16686/TCP 32h
kiali ClusterIP 10.100.159.127 <none> 20001/TCP 32h
prometheus ClusterIP 10.100.113.255 <none> 9090/TCP 32h
tracing ClusterIP 10.100.77.39 <none> 80/TCP 32h
zipkin ClusterIP 10.100.247.201 <none> 9411/TCP
I can also see that the secrets are deployed, as below:
kubectl get secrets
NAME TYPE DATA AGE
default-token-ghz6r kubernetes.io/service-account-token 3 8d
sh.helm.release.v1.aws-efs-csi-driver.v1 helm.sh/release.v1 1 6d
[centos@ip-10-0-0-61 ~]$ kubectl get secrets -n istio-system
NAME TYPE DATA AGE
default-token-z6t2v kubernetes.io/service-account-token 3 32h
istio-ca-secret istio.io/ca-root 5 32h
istio-egressgateway-service-account-token-c8hfp kubernetes.io/service-account-token 3 32h
istio-ingressgateway-service-account-token-fx65w kubernetes.io/service-account-token 3 32h
istio-reader-service-account-token-hxsll kubernetes.io/service-account-token 3 32h
istiod-service-account-token-zmtsv kubernetes.io/service-account-token 3 32h
kiali Opaque 2 32h
kiali-service-account-token-82gk7 kubernetes.io/service-account-token 3 32h
prometheus-token-vs4f6 kubernetes.io/service-account-token 3 32h
I ran all of the above commands on my Linux bastion host. I am hoping that if I open port 20001 on my Linux bastion as well as in the SG, I should be able to access it with admin/admin credentials, like below:
http://10.100.159.127:20001/
My second question: is Istio, as software, running on my Linux bastion server or on my EKS cluster?
My feeling is that it is running on the local bastion server, but since we used the commands below
kubectl label ns default istio-injection=enabled
kubectl get ns
kubectl label ns jenkins istio-injection=enabled
kubectl label ns spinnaker istio-injection=enabled
Any pods running in these namespaces will have the Envoy proxy sidecar injected automatically, correct?
P.S: I did the below:
nohup istioctl dashboard kiali &
Opened port at the SG level and at the OS level too... still not able to access the Kiali dashboard
http://3.25.217.61:40235/kiali
[centos@ip-10-0-0-61 ~]$ wget http://3.25.217.61:40235/kiali
--2020-09-11 15:56:18-- http://3.25.217.61:40235/kiali
Connecting to 3.25.217.61:40235... failed: Connection refused.
curl ifconfig.co
3.25.217.61
sudo netstat -nap|grep 40235
tcp 0 0 127.0.0.1:40235 0.0.0.0:* LISTEN 29654/istioctl
tcp6 0 0 ::1:40235 :::* LISTEN 29654/istioctl
Truly, unable to understand what is going wrong...
Just run istioctl dashboard kiali.
istioctl will create a local proxy; as your netstat output shows, it listens on 127.0.0.1 only, which is why the wget against the public IP was refused. Open the printed URL from the bastion itself (or through a tunnel), then log in with admin/admin credentials.
To answer the second question:
Istio is running on your cluster and is configured with istioctl, which is installed on your bastion.
By labeling a namespace with istio-injection=enabled the sidecar will be injected automatically. If necessary, you can disable the injection for a pod by annotating it like this:
spec:
  selector:
    matchLabels:
      ...
  template:
    metadata:
      labels:
        ...
      annotations:
        sidecar.istio.io/inject: "false"
Update
To access kiali without istioctl/kubectl proxy, you have three options. As you found correctly, it depends on the kiali service type:
ClusterIP (default)
To use the default, set up a route from the ingress gateway to the kiali service. This is done using a VirtualService and a DestinationRule (a sketch follows below this list). You can then access kiali at e.g. <ingress-gateway-loadbalancer-id>.amazonaws.com/kiali
NodePort
You can change the type to NodePort by setting the corresponding value on istio installation and access kiali at <ingress-gateway-loadbalancer-id>.amazonaws.com:20001/kiali
LoadBalancer
You can change the type to LoadBalancer by setting the corresponding value on istio installation. A second Elastic Load Balancer will be created on AWS and the kiali service will have an external IP, like the ingressgateway service does. You can then access it at <kiali-loadbalancer-id>.amazonaws.com/kiali
I would recommend option 1. It's best practice for production and you don't have to dig too deep into the istio installation config, which can be overwhelming in the beginning.
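A minimal sketch of what the routing for option 1 might look like, assuming the default istio-ingressgateway from the service listing above; the Gateway name kiali-gateway and the "*" hosts are placeholders to adapt, and a DestinationRule is only needed if you want TLS or other traffic-policy settings towards kiali:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kiali-gateway          # hypothetical name
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway      # binds to the default istio-ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kiali
  namespace: istio-system
spec:
  hosts:
    - "*"
  gateways:
    - kiali-gateway
  http:
    - match:
        - uri:
            prefix: /kiali
      route:
        - destination:
            host: kiali.istio-system.svc.cluster.local
            port:
              number: 20001    # kiali service port from the svc listing above
After applying it, http://<ingress-gateway-loadbalancer-id>.amazonaws.com/kiali should open the dashboard.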
Check the port and its type for the kiali service with the following command.
kubectl get svc -n istio-system
If the type is NodePort then you can check localhost:(port of the kiali service); otherwise, if the type is ClusterIP, you have to expose it by port-forwarding it.
Expose Kiali either via Kubernetes port forwarding or via a gateway. The following forwarding command exposes Kiali on localhost, port 20001:
kubectl -n istio-system port-forward svc/kiali 20001:20001 &
Then check localhost:20001 for kiali dashboard.
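Note that the port-forward above binds to localhost on the machine where it runs (the bastion in this case), so a browser on your workstation cannot reach it directly. One way around that, assuming SSH access to the bastion (the user and host below are placeholders based on the prompts in the question), is a local SSH tunnel:
ssh -L 20001:localhost:20001 centos@<bastion-public-ip>
Then browse http://localhost:20001/kiali on your workstation.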
using Kubernetes: https://{domain or ingress ip}/kiali
kubectl get ingress kiali -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Or (for any kind of platform)
oc port-forward svc/kiali 20001:20001 -n istio-system
kubectl port-forward svc/kiali 20001:20001 -n istio-system
kubectl port-forward $(kubectl get pod -n istio-system -l app=kiali -o jsonpath='{.items[0].metadata.name}') -n istio-system 20001

Not able to communicate with pods created locally in an AWS EC2 instance with Kubernetes

I have created a simple nginx deployment on an Ubuntu EC2 instance and exposed it on a port through a service in a Kubernetes cluster, but I am unable to ping the pods even in the local environment. My pods are running fine and the services were also created successfully. I am sharing the output of some commands below.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-39-226 Ready <none> 2d19h v1.16.1
master-node Ready master 2d20h v1.16.1
kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-54f57cf6bf-dqt5v 1/1 Running 0 101m 192.168.39.17 ip-172-31-39-226 <none> <none>
nginx-deployment-54f57cf6bf-gh4fz 1/1 Running 0 101m 192.168.39.16 ip-172-31-39-226 <none> <none>
sample-nginx-857ffdb4f4-2rcvt 1/1 Running 0 20m 192.168.39.18 ip-172-31-39-226 <none> <none>
sample-nginx-857ffdb4f4-tjh82 1/1 Running 0 20m 192.168.39.19 ip-172-31-39-226 <none> <none>
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d20h
nginx-deployment NodePort 10.101.133.21 <none> 80:31165/TCP 50m
sample-nginx LoadBalancer 10.100.77.31 <pending> 80:31854/TCP 19m
kubectl describe deployment nginx-deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Mon, 14 Oct 2019 06:28:13 +0000
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},"spec":{"replica...
Selector: app=nginx
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.7.9
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-54f57cf6bf (2/2 replicas created)
Events: <none>
Now I am unable to ping 192.168.39.17/16/18/19 from the master, and I am also not able to curl 172.31.39.226:31165/31854 from the master. Any help will be highly appreciated.
From the information you have provided, and from the discussion we had, the worker node has the Nginx pods running, and you have attached a NodePort service and a LoadBalancer service to them.
The only thing missing here is the server from which you are trying to access all this.
So, I tried to reach the URL 52.201.242.84:31165. I think all you need to do is whitelist this port for public access, or for your source IP. This can be done via the security group of the worker node EC2 instance.
The URL above is constructed from the public IP of the worker node plus(+) the NodePort of the attached service. Thus here is a simple formula you can use to get the exact address for reaching the pods:
Pod Access URL = Public IP of Worker Node + The NodePort
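As an illustration only: the security group ID below is a placeholder, and 31165 is the NodePort taken from the kubectl get svc output above; opening the port with the AWS CLI could look like this:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 31165 --cidr 0.0.0.0/0
(Use your own IP/32 instead of 0.0.0.0/0 to restrict access.) After that, http://<worker-node-public-ip>:31165 should serve the nginx welcome page.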

kubernetes guestbook example on aws

I'm trying to run through the Kubernetes guestbook example on AWS. I created the master and 4 nodes with the kube-up.sh script and am trying to get the frontend exposed via a load balancer.
Here are the pods
root@ip-172-20-0-9:~/kubernetes# kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-2q0at 1/1 Running 0 5m
frontend-5hmxq 1/1 Running 0 5m
frontend-s7i0r 1/1 Running 0 5m
redis-master-y6160 1/1 Running 0 53m
redis-slave-49gya 1/1 Running 0 24m
redis-slave-85u1r 1/1 Running 0 24m
Here are the services
root@ip-172-20-0-9:~/kubernetes# kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.0.0.1 <none> 443/TCP <none> 1h
redis-master 10.0.90.210 <none> 6379/TCP name=redis-master 37m
redis-slave 10.0.205.92 <none> 6379/TCP name=redis-slave 24m
I edited the yml for the frontend service to try to add a load balancer, but it's not showing up.
root@ip-172-20-0-9:~/kubernetes# cat examples/guestbook/frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
    # the port that this service should serve on
    - port: 80
  selector:
    name: frontend
Here are the commands I ran:
root@ip-172-20-0-9:~/kubernetes# kubectl create -f examples/guestbook/frontend-controller.yaml
replicationcontroller "frontend" created
root@ip-172-20-0-9:~/kubernetes# kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.0.0.1 <none> 443/TCP <none> 1h
redis-master 10.0.90.210 <none> 6379/TCP name=redis-master 39m
redis-slave 10.0.205.92 <none> 6379/TCP name=redis-slave 26m
If I remove the loadbalancer it loads up but with no external IP
Looks like the external IP might only be populated on Google's platform. In AWS it creates an ELB and doesn't show the external IP of the ELB.
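Even when kubectl get services shows no external IP, the ELB endpoint should still be recorded on the service itself; a way to check (describe reports it as "LoadBalancer Ingress", and the jsonpath form works on newer kubectl versions):
kubectl describe services frontend
kubectl get service frontend -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'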