I have a service, and I would like to get an IP from its spec. Using -o go-template I can do it this way:
kubectl get service webapp1-loadbalancer-svc -o go-template='{{(index .status.loadBalancer.ingress 0).ip}}'
This returns the IP of the first ingress in the load balancer, which is what I want.
However, instead of using -o go-template, I would like to use -o template. I have tried multiple commands but have been unable to do so. The closest thing I have working is:
kubectl get service webapp1-loadbalancer-svc -o template={{.status.loadBalancer.ingress}}
But this returns the map [map[ip:172.17.0.28]], not just the IP. Everything I have tried to get just the IP in the same command returns errors while executing the template.
Is there a way to obtain the IP from the map using one kubectl command using -o template instead of -o go-template?
kubectl supports JSONPath templates.
Using JSONPath you can retrieve the service cluster IP or other details as below.
ubuntu@k8s-master:~$ kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx 1/1 Running 0 129m 192.168.85.226 k8s-node01 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d2h <none>
service/nginx ClusterIP 10.96.184.196 <none> 80/TCP 11m run=nginx
ubuntu@k8s-master:~$ kubectl get service nginx -o jsonpath='{.spec.clusterIP}'
10.96.184.196
JSONPath works well with the LoadBalancer type as well; hope this is what you want to work with.
$ kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx 1/1 Running 0 133m 192.168.85.226 k8s-node01 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d2h <none>
service/nginx LoadBalancer 10.100.165.17 <pending> 80:30852/TCP 5s run=nginx
ubuntu@k8s-master:~$ kubectl get service nginx -o jsonpath='{.spec.type}'
LoadBalancer
ubuntu@k8s-master:~$ kubectl get service nginx -o jsonpath='{.spec.clusterIP}'
10.100.165.17
When you extract the PORTs, it also returns a map, as below:
$ kubectl get service nginx -o jsonpath='{.spec.ports}'
[map[nodePort:30852 port:80 protocol:TCP targetPort:80]]
You can extract the nodePort, port, and targetPort individually as below:
$ kubectl get service nginx -o jsonpath='{.spec.ports[].targetPort}'
80
ubuntu@k8s-master:~$ kubectl get service nginx -o jsonpath='{.spec.ports[].nodePort}'
30852
ubuntu@k8s-master:~$ kubectl get service nginx -o jsonpath='{.spec.ports[].port}'
80
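The same field extraction can be rehearsed without a cluster by running the equivalent path through jq on a sample payload. This is a sketch: it assumes jq is installed, and the sample JSON simply mirrors the ports map printed above.

```shell
# Sample mirroring the map printed by kubectl above (not fetched from a cluster).
PORTS='{"spec":{"ports":[{"nodePort":30852,"port":80,"protocol":"TCP","targetPort":80}]}}'

# Same fields the jsonpath expressions pull out, via jq.
NODE_PORT=$(printf '%s' "$PORTS" | jq -r '.spec.ports[0].nodePort')
TARGET_PORT=$(printf '%s' "$PORTS" | jq -r '.spec.ports[0].targetPort')
echo "$NODE_PORT $TARGET_PORT"
```

This is handy for checking a path expression offline before pointing it at a live service.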
Hope the above examples help you resolve your query.
I think you should be able to do it with this command, using jsonpath:
kubectl get service webapp1-loadbalancer-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
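The jsonpath extraction can be sanity-checked without a cluster by feeding a sample Service status through jq. A sketch, assuming jq is installed; the sample payload mirrors the map shown in the question:

```shell
# Sample Service status mirroring the question's output (not from a cluster).
SAMPLE='{"status":{"loadBalancer":{"ingress":[{"ip":"172.17.0.28"}]}}}'

# Same path as the go-template/jsonpath expressions: first ingress entry's ip.
IP=$(printf '%s' "$SAMPLE" | jq -r '.status.loadBalancer.ingress[0].ip')
echo "$IP"
```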
Related
I have an aws eks cluster, with 10.10.0.0/16 Service IPv4 range.
When I look at the nameservers inside the pods, I see a strange nameserver configuration:
/ # cat /etc/resolv.conf
nameserver 172.20.0.10
search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
options ndots:5
So instead of 10.10.0.10 I'm getting, as you can see, 172.20.0.10. What should be changed to get the proper nameserver? In the end, all newly created pods should use 10.10.0.10 as their nameserver, like:
nameserver 10.10.0.10
search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
options ndots:5
Here is additional info:
Output of: kubectl get service kube-dns -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.10.0.10 <none> 53/UDP,53/TCP 3h14m
Output of: kubectl -n kube-system get endpoints kube-dns
NAME ENDPOINTS AGE
kube-dns 10.0.80.105:53,10.0.96.46:53,10.0.80.105:53 + 1 more... 3h15m
I installed Istio version 1.6.9 with the steps below:
Install Istio Version 1.6.9
wget https://github.com/istio/istio/releases/download/1.6.9/istio-1.6.9-linux-amd64.tar.gz
tar -xzvf istio-1.6.9-linux-amd64.tar.gz
cd istio-1.6.9
cd bin/
sudo mv istioctl /usr/local/bin/
istioctl --version
istioctl install --set profile=demo
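The download step above can be parameterized so only the version changes. A small sketch; the URL pattern is taken directly from the wget command shown, nothing else is assumed:

```shell
# Build the release URL for a given Istio version (pattern from the steps above).
ISTIO_VERSION=1.6.9
URL="https://github.com/istio/istio/releases/download/${ISTIO_VERSION}/istio-${ISTIO_VERSION}-linux-amd64.tar.gz"
echo "$URL"
# Then: wget "$URL" && tar -xzvf "istio-${ISTIO_VERSION}-linux-amd64.tar.gz"
```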
I want to access the Kiali dashboard but I am unable to figure out how.
I can see kiali running as a pod:
kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-5dc4b4676c-wcb59 1/1 Running 0 32h
istio-egressgateway-5889bb8976-stlqd 1/1 Running 0 32h
istio-ingressgateway-699d97bdbf-w6x46 1/1 Running 0 32h
istio-tracing-8584b4d7f9-p66wh 1/1 Running 0 32h
istiod-86d4497c9-xv2km 1/1 Running 0 32h
kiali-6f457f5964-6sssn 1/1 Running 0 32h
prometheus-5d64cf8b79-2kdww 2/2 Running 0 32h
I am able to see kiali as a service as well:
kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.100.101.71 <none> 3000/TCP 32h
istio-egressgateway ClusterIP 10.100.34.75 <none> 80/TCP,443/TCP,15443/TCP 32h
istio-ingressgateway LoadBalancer 10.100.84.203 a736b038af6b5478087f0682ddb4dbbb-1317589033.ap-southeast-2.elb.amazonaws.com 15021:31918/TCP,80:32736/TCP,443:30611/TCP,31400:30637/TCP,15443:31579/TCP 32h
istiod ClusterIP 10.100.111.159 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP,853/TCP 32h
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 32h
jaeger-collector ClusterIP 10.100.84.202 <none> 14267/TCP,14268/TCP,14250/TCP 32h
jaeger-collector-headless ClusterIP None <none> 14250/TCP 32h
jaeger-query ClusterIP 10.100.165.216 <none> 16686/TCP 32h
kiali ClusterIP 10.100.159.127 <none> 20001/TCP 32h
prometheus ClusterIP 10.100.113.255 <none> 9090/TCP 32h
tracing ClusterIP 10.100.77.39 <none> 80/TCP 32h
zipkin ClusterIP 10.100.247.201 <none> 9411/TCP
I can also see the secrets deployed:
kubectl get secrets
NAME TYPE DATA AGE
default-token-ghz6r kubernetes.io/service-account-token 3 8d
sh.helm.release.v1.aws-efs-csi-driver.v1 helm.sh/release.v1 1 6d
[centos#ip-10-0-0-61 ~]$ kubectl get secrets -n istio-system
NAME TYPE DATA AGE
default-token-z6t2v kubernetes.io/service-account-token 3 32h
istio-ca-secret istio.io/ca-root 5 32h
istio-egressgateway-service-account-token-c8hfp kubernetes.io/service-account-token 3 32h
istio-ingressgateway-service-account-token-fx65w kubernetes.io/service-account-token 3 32h
istio-reader-service-account-token-hxsll kubernetes.io/service-account-token 3 32h
istiod-service-account-token-zmtsv kubernetes.io/service-account-token 3 32h
kiali Opaque 2 32h
kiali-service-account-token-82gk7 kubernetes.io/service-account-token 3 32h
prometheus-token-vs4f6 kubernetes.io/service-account-token 3 32h
I ran all of the above commands on my Linux bastion host. I am hoping that if I open port 20001 on the bastion as well as in the SG, I should be able to access it with admin/admin credentials, like below:
http://10.100.159.127:20001/
My second question is: is Istio, as software, running on my Linux bastion server or on my EKS cluster?
My feeling is that it is running on the local bastion server, but since we used the commands below:
kubectl label ns default istio-injection=enabled
kubectl get ns
kubectl label ns jenkins istio-injection=enabled
kubectl label ns spinnaker istio-injection=enabled
Any pods running in these namespaces will have the Envoy proxy sidecar injected automatically, correct?
P.S: I did the below:
nohup istioctl dashboard kiali &
Opened the port at the SG level and at the OS level too... still not able to access the Kiali dashboard:
http://3.25.217.61:40235/kiali
[centos#ip-10-0-0-61 ~]$ wget http://3.25.217.61:40235/kiali
--2020-09-11 15:56:18-- http://3.25.217.61:40235/kiali
Connecting to 3.25.217.61:40235... failed: Connection refused.
curl ifconfig.co
3.25.217.61
sudo netstat -nap|grep 40235
tcp 0 0 127.0.0.1:40235 0.0.0.0:* LISTEN 29654/istioctl
tcp6 0 0 ::1:40235 :::* LISTEN 29654/istioctl
I truly cannot understand what is going wrong...
Just run istioctl dashboard kiali.
istioctl will create a proxy. You can then log in with admin/admin credentials.
To answer the second question:
Istio is running on your cluster and is configured with istioctl, which is installed on your bastion.
By labeling a namespace with istio-injection=enabled, the sidecar will be injected automatically. If necessary, you can disable injection for a pod by annotating it like this:
spec:
  selector:
    matchLabels:
      ...
  template:
    metadata:
      labels:
        ...
      annotations:
        sidecar.istio.io/inject: "false"
Update
To access kiali without istioctl/kubectl proxy, you have three options. As you found correctly, it depends on the kiali service type:
ClusterIP (default)
To use the default, set up a route from the gateway to the kiali service. This is done using a VirtualService and DestinationRule. You can then access kiali at e.g. <ingress-gateway-loadbalancer-id>.amazonaws.com/kiali
NodePort
You can change the type to NodePort by setting the corresponding value on istio installation and access kiali at <ingress-gateway-loadbalancer-id>.amazonaws.com:20001/kiali
LoadBalancer
You can change the type to LoadBalancer by setting the corresponding value on istio installation. A second elastic load balancer will be created on AWS and the kiali service will have an external IP, like the ingressgateway service does. You can then access it at <kiali-loadbalancer-id>.amazonaws.com/kiali
I would recommend option 1. It's best practice for production, and you don't have to dig too deep into the istio installation config, which can be overwhelming in the beginning.
Check the port and type of the kiali service with the following command:
kubectl get svc -n istio-system
If the type is NodePort, you can check localhost:(port of kiali service); otherwise, if the type is ClusterIP, you have to expose it by port-forwarding.
Expose Kiali either via Kubernetes port forwarding or via a gateway. The following forwarding command exposes Kiali on localhost, port 20001:
kubectl -n istio-system port-forward svc/kiali 20001:20001 &
Then check localhost:20001 for kiali dashboard.
using Kubernetes: https://{domain or ingress ip}/kiali
kubectl get ingress kiali -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Or (for any kind of platform)
oc port-forward svc/kiali 20001:20001 -n istio-system
kubectl port-forward svc/kiali 20001:20001 -n istio-system
kubectl port-forward $(kubectl get pod -n istio-system -l app=kiali -o jsonpath='{.items[0].metadata.name}') -n istio-system 20001
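Note that kubectl port-forward and istioctl dashboard both bind to 127.0.0.1 by default (as the netstat output in the question shows), so they are not reachable from outside the bastion. A common workaround, sketched here with the host and ports from the transcript as placeholders, is an SSH local port forward from your workstation:

```shell
# Build the tunnel command. BASTION and the ports are placeholders taken from
# the transcript above; substitute your own values.
BASTION="centos@3.25.217.61"
LOCAL_PORT=20001        # port to open on the workstation
REMOTE_PORT=40235       # port istioctl bound on the bastion's loopback
CMD="ssh -N -L ${LOCAL_PORT}:127.0.0.1:${REMOTE_PORT} ${BASTION}"
echo "$CMD"
# After running it, browse http://localhost:20001/kiali on the workstation.
```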
Background:
$ kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx LoadBalancer 10.108.245.210 <pending> 80:30742/TCP,443:31028/TCP 41m
$ kubectl cluster-info dump | grep LoadBalancer
14:35:47.072444 1 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
The k8s cluster is up and running fine.
$ ls /etc/kubernetes/manifests
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
~$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h
ingress-nginx default-http-backend ClusterIP 10.100.2.163 <none> 80/TCP 21h
ingress-nginx ingress-nginx LoadBalancer 10.108.221.18 <pending> 80:32010/TCP,443:31271/TCP 18h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 21h
How do I link the cloud provider to kubernetes cluster in the existing setup?
I would expect grep -r cloud-provider= /etc/kubernetes/manifests to either show you where the flag is being explicitly set to --cloud-provider= (that is, the empty value), or let you know that there is no such flag, in which case you'll need(?) to add it in three places:
kube-apiserver.yaml
kube-controller-manager.yaml
in kubelet.service or however you are currently running kubelet
I said "need(?)" because I thought I read once upon a time that the kubernetes components were good enough at auto-detecting their cloud environment, and thus those flags were only required if you needed to improve or alter the default behavior. However, I just checked the v1.13 page and there doesn't seem to be anything "optional" about it. They've even gone so far as to now make --cloud-config= seemingly mandatory, too.
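The suggested grep can be rehearsed without access to a control-plane node. This sketch writes one sample manifest into a temporary directory (standing in for /etc/kubernetes/manifests; the manifest content is hypothetical) and checks whether a cloud-provider flag is set anywhere:

```shell
# Temp dir stands in for /etc/kubernetes/manifests; content is illustrative.
MANIFESTS=$(mktemp -d)
cat > "$MANIFESTS/kube-apiserver.yaml" <<'EOF'
spec:
  containers:
  - command:
    - kube-apiserver
    - --cloud-provider=aws
EOF

# Same check as in the answer: does any manifest set the flag?
MATCHES=$(grep -r "cloud-provider=" "$MANIFESTS" | wc -l)
echo "$MATCHES"
rm -rf "$MANIFESTS"
```

On a real node you would run the grep directly against /etc/kubernetes/manifests instead.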
Following this document step by step:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html?shortFooter=true
I created the EKS cluster using the aws cli instead of the UI, and got the following output:
proxy-kube$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 18h
But when I follow this getting-started guide and associate worker nodes with the cluster, I get:
proxy-kube$ kubectl get nodes
No resources found.
I can see 3 EC2 instances created and running in the AWS console (UI), but I am unable to deploy and run even the Guestbook application.
When I deploy the application, I get the following:
~$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
guestbook LoadBalancer 10.100.46.244 a08e89122c10311e88fdd0e3fbea8df8-1146802048.us-east-1.elb.amazonaws.com 3000:32758/TCP 17s app=guestbook
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 21h <none>
redis-master ClusterIP 10.100.208.141 <none> 6379/TCP 1m app=redis,role=master
redis-slave ClusterIP 10.100.226.147 <none>
But if I try to access the EXTERNAL-IP, the browser shows "server is not reachable".
I also tried to get the Kubernetes dashboard, but it failed to show anything on 127.0.0.1:8001.
Does anyone know what might be going wrong?
Any help on this is appreciated.
Thanks
It looks like your kubelet (your node) is not registering with the master. If you don't have any nodes, you basically can't run anything.
You can ssh into one of the nodes and check the logs in the kubelet with something like this:
journalctl -xeu kubelet
Also, it would help to post the output of kubectl describe deployment <deployment-name> and kubectl get pods
I am following the example here: http://kubernetes.io/v1.0/docs/user-guide/connecting-applications.html#environment-variables. Although DNS seems to be enabled:
skwok-wpc-3:1.0 skwok$ kubectl get services kube-dns --namespace=kube-system
NAME LABELS SELECTOR IP(S) PORT(S)
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP
53/TCP
and the service is up
$ kubectl get svc
NAME LABELS SELECTOR IP(S) PORT(S)
kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.1 443/TCP
nginxsvc app=nginx app=nginx 10.0.128.194 80/TCP
Following the example, I cannot use the curlpod to look up the service:
$ kubectl exec curlpod -- nslookup nginxsvc
Server: 10.0.0.10
Address 1: 10.0.0.10 ip-10-0-0-10.us-west-2.compute.internal
nslookup: can't resolve 'nginxsvc'
Did I miss anything? I am using aws and I use export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash to start my cluster. Thank you.
Please see: http://kubernetes.io/v1.0/docs/user-guide/debugging-services.html, and make sure nginx is running and serving within your pod. I would also suggest something like:
$ kubectl get ep nginxsvc
$ kubectl exec -it curlpod -- /bin/sh
pod$ curl ip-from-kubectl-get-ep
pod$ traceroute ip-from-kubectl-get-ep
If that doesn't work, please reply or jump on the Kubernetes Slack channel.