I have an AWS EKS cluster with a service IPv4 range of 10.10.0.0/16.
When I look at the nameservers inside a pod, I get a strange configuration:
/ # cat /etc/resolv.conf
nameserver 172.20.0.10
search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
options ndots:5
So instead of 10.10.0.10 I'm getting, as you can see, 172.20.0.10. What should be changed to get the proper nameserver? In the end, every newly created pod should use 10.10.0.10 as its nameserver, like:
nameserver 10.10.0.10
search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal
options ndots:5
Here is additional info:
Output of: kubectl get service kube-dns -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.10.0.10 <none> 53/UDP,53/TCP 3h14m
Output of: kubectl -n kube-system get endpoints kube-dns
NAME ENDPOINTS AGE
kube-dns 10.0.80.105:53,10.0.96.46:53,10.0.80.105:53 + 1 more... 3h15m
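For context: the 172.20.0.10 address comes from the kubelet's --cluster-dns setting, not from the kube-dns Service itself. The EKS-optimized AMI's bootstrap script guesses either 10.100.0.10 or 172.20.0.10 from the node's address range and knows nothing about a custom service CIDR. A likely fix is a sketch like the following, assuming self-managed nodes on the standard EKS-optimized AMI (the cluster name is a placeholder):

```shell
#!/bin/bash
# EC2 user data for worker nodes (assumption: EKS-optimized AMI, which ships
# /etc/eks/bootstrap.sh). --dns-cluster-ip overrides the guessed kubelet
# --cluster-dns value so that new pods get the real kube-dns ClusterIP.
/etc/eks/bootstrap.sh my-cluster --dns-cluster-ip 10.10.0.10
```

Note that already-running pods keep their old /etc/resolv.conf until they are recreated, and existing nodes need their kubelet configuration updated (or the nodes replaced) for the change to take effect.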
I am following the instructions provided here using minikube. The main difference is I am doing everything in the 'test' namespace. Everything looks correct; I get an IP and a port...
~/Code/lib/tmp >echo "http://$GATEWAY_URL/productpage"
http://192.168.49.2:31914/productpage
But when I try to access that in the browser, it times out. I did notice in the gateway spec that it is pointing at 8080...
~/Code/lib/tmp >kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports}'
[{"name":"status-port","nodePort":31745,"port":15021,"protocol":"TCP","targetPort":15021},{"name":"http2","nodePort":31914,"port":80,"protocol":"TCP","targetPort":8080},{"name":"https","nodePort":31244,"port":443,"protocol":"TCP","targetPort":8443},{"name":"tcp","nodePort":30472,"port":31400,"protocol":"TCP","targetPort":31400},{"name":"tls","nodePort":32306,"port":15443,"protocol":"TCP","targetPort":15443}]
Whereas the gateway on the namespace side is using 9080...
route:
- destination:
    host: productpage
    port:
      number: 9080
But I also could be off base. The response I get is...
The server at 192.168.49.2 is taking too long to respond.
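For reference, there are two separate hops here: the ingressgateway Service maps its port 80 to targetPort 8080 on the gateway pod (that mapping is internal to Istio and expected in recent releases), and the VirtualService then routes from the gateway to productpage on 9080. A sketch of what the bookinfo-gateway.yml is expected to contain, assuming the stock upstream sample:

```yaml
# Sketch, assuming the standard samples/bookinfo/networking/bookinfo-gateway.yaml.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # use the default istio-ingressgateway deployment
  servers:
  - port:
      number: 80            # port the gateway listens on, not the app port
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    route:
    - destination:
        host: productpage
        port:
          number: 9080      # the productpage Service port inside the mesh
```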
Gateway
~/Code/lib/tmp >kubectl describe Gateway bookinfo-gateway -n test
Name:         bookinfo-gateway
Namespace:    test
Labels:       <none>
Annotations:  <none>
API Version:  networking.istio.io/v1beta1
Kind:         Gateway
Metadata:
  ...
Spec:
  Selector:
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http
      Number:    80
      Protocol:  HTTP
Events:  <none>
~/Code/lib/tmp >kubectl get Gateway -n test
NAME AGE
bookinfo-gateway 33m
Update
I tried using a simple load balancer like
kubectl expose deployment productpage-v1 --type=LoadBalancer --port=9080 -n test
This works fine going to localhost:9080
~/Code/lib/tmp >kubectl get services -n test
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details ClusterIP 10.103.184.105 <none> 9080/TCP 6h43m
productpage ClusterIP 10.109.150.132 <none> 9080/TCP 6h43m
productpage-v1 LoadBalancer 10.98.252.50 127.0.0.1 9080:32172/TCP 14s
ratings ClusterIP 10.98.189.253 <none> 9080/TCP 6h43m
reviews ClusterIP 10.106.74.190 <none> 9080/TCP 6h43m
So it must be something from the Istio side.
Update 2
The Gateway and Virtual Service are both in the test namespace...
~/Code/lib/tmp >kubectl get GATEWAY -A
NAMESPACE NAME AGE
test bookinfo-gateway 4m52s
~/Code/lib/tmp >kubectl get virtualservice -A
NAMESPACE NAME GATEWAYS HOSTS AGE
test bookinfo ["bookinfo-gateway"] ["*"] 5m3s
Here is the list of steps I followed...
minikube delete
minikube start --memory=16384 --cpus=4 --kubernetes-version=v1.20.2
istioctl install --set profile=demo -y
kubectl create namespace test
kubectl label namespace test istio-injection=enabled
kubectl apply -f bookinfo.yml -n test
kubectl apply -f bookinfo-gateway.yml -n test
istioctl analyze -n test
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
export INGRESS_HOST=$(minikube ip)
minikube tunnel (separate window)
Here is what happens with the tunnel command...
~ >minikube tunnel
❗ The service/ingress istio-ingressgateway requires privileged ports to be exposed: [80 443]
🔑 sudo permission will be asked for it.
🏃 Starting tunnel for service istio-ingressgateway.
Password:
After I enter my password there is no more text. It does seem to be running, because when I press Ctrl+C it stops the command.
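The silent tunnel is normal. One way to confirm it is working is a sketch like the following; it assumes the tunnel stays up in the other window and that the service receives an external IP rather than a hostname:

```shell
# With minikube tunnel running, istio-ingressgateway should get an external IP,
# so you can target the plain service port 80 instead of the NodePort.
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')

# Should return the productpage HTML if the gateway route is working.
curl -s "http://$INGRESS_HOST:$INGRESS_PORT/productpage" | head
```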
I installed Istio version 1.6.9 with the steps below
Install Istio Version 1.6.9
wget https://github.com/istio/istio/releases/download/1.6.9/istio-1.6.9-linux-amd64.tar.gz
tar -xzvf istio-1.6.9-linux-amd64.tar.gz
cd istio-1.6.9
cd bin/
sudo mv istioctl /usr/local/bin/
istioctl version
istioctl install --set profile=demo
I want to access the kiali dashboard but I am unable to figure out how to access it!
I can see kiali is running in a pod:
kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-5dc4b4676c-wcb59 1/1 Running 0 32h
istio-egressgateway-5889bb8976-stlqd 1/1 Running 0 32h
istio-ingressgateway-699d97bdbf-w6x46 1/1 Running 0 32h
istio-tracing-8584b4d7f9-p66wh 1/1 Running 0 32h
istiod-86d4497c9-xv2km 1/1 Running 0 32h
kiali-6f457f5964-6sssn 1/1 Running 0 32h
prometheus-5d64cf8b79-2kdww 2/2 Running 0 32h
I am able to see the kiali service as well:
kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.100.101.71 <none> 3000/TCP 32h
istio-egressgateway ClusterIP 10.100.34.75 <none> 80/TCP,443/TCP,15443/TCP 32h
istio-ingressgateway LoadBalancer 10.100.84.203 a736b038af6b5478087f0682ddb4dbbb-1317589033.ap-southeast-2.elb.amazonaws.com 15021:31918/TCP,80:32736/TCP,443:30611/TCP,31400:30637/TCP,15443:31579/TCP 32h
istiod ClusterIP 10.100.111.159 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP,853/TCP 32h
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 32h
jaeger-collector ClusterIP 10.100.84.202 <none> 14267/TCP,14268/TCP,14250/TCP 32h
jaeger-collector-headless ClusterIP None <none> 14250/TCP 32h
jaeger-query ClusterIP 10.100.165.216 <none> 16686/TCP 32h
kiali ClusterIP 10.100.159.127 <none> 20001/TCP 32h
prometheus ClusterIP 10.100.113.255 <none> 9090/TCP 32h
tracing ClusterIP 10.100.77.39 <none> 80/TCP 32h
zipkin ClusterIP 10.100.247.201 <none> 9411/TCP
I can also see the secrets deployed, as below:
kubectl get secrets
NAME TYPE DATA AGE
default-token-ghz6r kubernetes.io/service-account-token 3 8d
sh.helm.release.v1.aws-efs-csi-driver.v1 helm.sh/release.v1 1 6d
[centos@ip-10-0-0-61 ~]$ kubectl get secrets -n istio-system
NAME TYPE DATA AGE
default-token-z6t2v kubernetes.io/service-account-token 3 32h
istio-ca-secret istio.io/ca-root 5 32h
istio-egressgateway-service-account-token-c8hfp kubernetes.io/service-account-token 3 32h
istio-ingressgateway-service-account-token-fx65w kubernetes.io/service-account-token 3 32h
istio-reader-service-account-token-hxsll kubernetes.io/service-account-token 3 32h
istiod-service-account-token-zmtsv kubernetes.io/service-account-token 3 32h
kiali Opaque 2 32h
kiali-service-account-token-82gk7 kubernetes.io/service-account-token 3 32h
prometheus-token-vs4f6 kubernetes.io/service-account-token 3 32h
I ran all of the above commands on my Linux bastion host. I am hoping that if I open port 20001 on my Linux bastion as well as in the SG, I should be able to access it with admin/admin credentials, like below:
http://10.100.159.127:20001/
My second question is: is Istio, as software, running on my Linux bastion server or on my EKS cluster?
My feeling is that it is running on the local bastion server, but since we used the below commands
kubectl label ns default istio-injection=enabled
kubectl get ns
kubectl label ns jenkins istio-injection=enabled
kubectl label ns spinnaker istio-injection=enabled
Any pods running in these namespaces will have Envoy proxy pod injected automatically, correct?
P.S: I did the below:
nohup istioctl dashboard kiali &
Opened port at the SG level and at the OS level too... still not able to access the Kiali dashboard
http://3.25.217.61:40235/kiali
[centos@ip-10-0-0-61 ~]$ wget http://3.25.217.61:40235/kiali
--2020-09-11 15:56:18-- http://3.25.217.61:40235/kiali
Connecting to 3.25.217.61:40235... failed: Connection refused.
curl ifconfig.co
3.25.217.61
sudo netstat -nap|grep 40235
tcp 0 0 127.0.0.1:40235 0.0.0.0:* LISTEN 29654/istioctl
tcp6 0 0 ::1:40235 :::* LISTEN 29654/istioctl
Truly, unable to understand what is going wrong...
Just run istioctl dashboard kiali.
Istioctl will create a proxy. Now log in with admin/admin credentials.
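Note that istioctl's proxy binds only to 127.0.0.1 on the machine where it runs (the netstat output above shows exactly that), so opening the port in the security group does not help when the browser is elsewhere. One workaround sketch, with the user and hostname as placeholders, is to tunnel over SSH from your workstation:

```shell
# On the bastion: forward the kiali service to a fixed local port.
kubectl -n istio-system port-forward svc/kiali 20001:20001 &

# On your local machine: forward local port 20001 to the bastion's loopback,
# where the port-forward above is listening.
ssh -L 20001:localhost:20001 centos@<bastion-public-ip>
# then open http://localhost:20001/kiali in a local browser
```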
To answer the second question:
Istio is running on your cluster and is configured with istioctl, which is installed on your bastion.
By labeling a namespace with istio-injection=enabled the sidecar will be injected automatically. If necessary, you can disable the injection for a pod by annotating it like this:
spec:
  selector:
    matchLabels:
      ...
  template:
    metadata:
      labels:
        ...
      annotations:
        sidecar.istio.io/inject: "false"
Update
To access kiali without istioctl/kubectl proxy, you have three options. As you found correctly, it depends on the kiali service type:
ClusterIP (default)
To use the default, set up a route from the gateway to the kiali service. This is done using a VirtualService and DestinationRule. You can then access kiali at e.g. <ingress-gateway-loadbalancer-id>.amazonaws.com/kiali
NodePort
You can change the type to NodePort by setting the corresponding value on istio installation and access kiali via a node's address and the assigned port, e.g. <node-ip>:<node-port>/kiali
LoadBalancer
You can change the type to LoadBalancer by setting the corresponding value on istio installation. A second elastic load balancer will be created on AWS and the kiali service will have an external IP, like the ingressgateway service does. You can then access it at <kiali-loadbalancer-id>.amazonaws.com/kiali
I would recommend option 1. It's best practice for production and you don't have to dig too deep into the istio installation config, which can be overwhelming in the beginning.
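A sketch of what option 1 can look like; the resource names and the extra Gateway are illustrative, and it assumes kiali listens on 20001 in istio-system as the service listing above shows:

```yaml
# Illustrative sketch: route /kiali through the default ingress gateway.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: kiali-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kiali
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - kiali-gateway
  http:
  - match:
    - uri:
        prefix: /kiali
    route:
    - destination:
        host: kiali
        port:
          number: 20001
```

After applying this, /kiali on the ingress gateway's ELB hostname should reach the dashboard.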
Check the port and type of the kiali service with the following command.
kubectl get svc -n istio-system
If the type is NodePort, you can check localhost:(port of the kiali service); otherwise, if the type is ClusterIP, you have to expose it by port-forwarding.
Expose Kiali either via Kubernetes port forwarding or via a gateway. The following forwarding command exposes Kiali on localhost, port 20001:
kubectl -n istio-system port-forward svc/kiali 20001:20001 &
Then check localhost:20001 for kiali dashboard.
using Kubernetes: https://{domain or ingress ip}/kiali
kubectl get ingress kiali -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Or (for any kind of platform)
oc port-forward svc/kiali 20001:20001 -n istio-system
kubectl port-forward svc/kiali 20001:20001 -n istio-system
kubectl port-forward $(kubectl get pod -n istio-system -l app=kiali -o jsonpath='{.items[0].metadata.name}') -n istio-system 20001
Background:
$ kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx LoadBalancer 10.108.245.210 <pending> 80:30742/TCP,443:31028/TCP 41m
$ kubectl cluster-info dump | grep LoadBalancer
14:35:47.072444 1 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
k8s cluster is up and running fine.
$ ls /etc/kubernetes/manifests
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
~$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h
ingress-nginx default-http-backend ClusterIP 10.100.2.163 <none> 80/TCP 21h
ingress-nginx ingress-nginx LoadBalancer 10.108.221.18 <pending> 80:32010/TCP,443:31271/TCP 18h
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 21h
How do I link the cloud provider to kubernetes cluster in the existing setup?
I would expect grep -r cloud-provider= /etc/kubernetes/manifests to either show you where the flag is being explicitly set to --cloud-provider= (that is, the empty value), or let you know that there is no such flag, in which case you'll need(?) to add it in three places:
kube-apiserver.yaml
kube-controller-manager.yaml
in kubelet.service or however you are currently running kubelet
I said "need(?)" because I thought I read once upon a time that the kubernetes components were good enough at auto-detecting their cloud environment, and thus those flags were only required if you needed to improve or alter the default behavior. However, I just checked the v1.13 page and there doesn't seem to be anything "optional" about it. They've even gone so far as to now make --cloud-config= seemingly mandatory, too
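As a sketch of what adding the flag looks like (the aws value is an assumption based on the AWS context of the question; other clouds use other values), each static pod manifest gains one line in its command list:

```yaml
# Fragment of /etc/kubernetes/manifests/kube-apiserver.yaml; the same flag goes
# into kube-controller-manager.yaml and the kubelet's startup arguments.
spec:
  containers:
  - command:
    - kube-apiserver
    - --cloud-provider=aws
    # ... keep the existing flags unchanged ...
```

The kubelet picks up manifest changes automatically and restarts the static pods; the kubelet itself has to be restarted after editing its own flags.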
Following this document step by step:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html?shortFooter=true
I created the EKS cluster using the AWS CLI instead of the UI, and got the following output
proxy-kube$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 18h
But when I follow this getting-started guide and associate worker nodes with the cluster, I get
proxy-kube$ kubectl get nodes
No resources found.
I can see 3 EC2 instances created and running in AWS console (UI).
But I am unable to deploy and run even the Guestbook application.
When I deploy application, I get following:
~$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
guestbook LoadBalancer 10.100.46.244 a08e89122c10311e88fdd0e3fbea8df8-1146802048.us-east-1.elb.amazonaws.com 3000:32758/TCP 17s app=guestbook
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 21h <none>
redis-master ClusterIP 10.100.208.141 <none> 6379/TCP 1m app=redis,role=master
redis-slave ClusterIP 10.100.226.147 <none>
But if I try to access the EXTERNAL-IP, it shows
server is not reachable
in the browser.
I also tried to get the Kubernetes dashboard, but it failed to show anything at 127.0.0.1:8001.
Does anyone know what might be going wrong?
Any help on this is appreciated.
Thanks
Looks like your kubelet (your node) is not registering with the master. If you don't have any nodes, you basically can't run anything.
You can ssh into one of the nodes and check the logs in the kubelet with something like this:
journalctl -xeu kubelet
Also, it would help to post the output of kubectl describe deployment <deployment-name> and kubectl get pods
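A common cause of this exact symptom on EKS (an assumption here, since the logs aren't shown) is a missing or wrong aws-auth ConfigMap: without it the worker nodes are never authorized to join, so kubectl get nodes stays empty even though the EC2 instances are running. The getting-started guide's ConfigMap looks like this, with the role ARN as a placeholder:

```yaml
# aws-auth-cm.yaml from the EKS getting-started guide. The rolearn must be the
# instance role attached to the worker nodes, not the cluster role.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of the worker node instance role>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

Apply it with kubectl apply -f aws-auth-cm.yaml and then watch kubectl get nodes --watch; the nodes should move to Ready within a minute or two.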
I'm trying to run through the kubernetes example in AWS. I created the master and 4 nodes with the kube-up.sh script and am trying to get the frontend exposed via a load balancer.
Here are the pods
root@ip-172-20-0-9:~/kubernetes# kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-2q0at 1/1 Running 0 5m
frontend-5hmxq 1/1 Running 0 5m
frontend-s7i0r 1/1 Running 0 5m
redis-master-y6160 1/1 Running 0 53m
redis-slave-49gya 1/1 Running 0 24m
redis-slave-85u1r 1/1 Running 0 24m
Here are the services
root@ip-172-20-0-9:~/kubernetes# kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.0.0.1 <none> 443/TCP <none> 1h
redis-master 10.0.90.210 <none> 6379/TCP name=redis-master 37m
redis-slave 10.0.205.92 <none> 6379/TCP name=redis-slave 24m
I edited the yml for the frontend service to try to add a load balancer, but it's not showing up
root@ip-172-20-0-9:~/kubernetes# cat examples/guestbook/frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 80
  selector:
    name: frontend
Here are the commands I ran
root@ip-172-20-0-9:~/kubernetes# kubectl create -f examples/guestbook/frontend-controller.yaml
replicationcontroller "frontend" created
root@ip-172-20-0-9:~/kubernetes# kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.0.0.1 <none> 443/TCP <none> 1h
redis-master 10.0.90.210 <none> 6379/TCP name=redis-master 39m
redis-slave 10.0.205.92 <none> 6379/TCP name=redis-slave 26m
If I remove the LoadBalancer type, it loads up, but with no external IP.
Looks like the external IP might only appear on Google's platform. On AWS it creates an ELB and doesn't show the ELB's external IP; the ELB is addressed by its DNS hostname instead.
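One way to pull the ELB address out on AWS is sketched below; it assumes the service is named frontend, and the jsonpath variant additionally assumes a kubectl new enough to support it, otherwise describe works:

```shell
# On AWS the load balancer is exposed as a DNS hostname, not an IP, so the
# EXTERNAL_IP column can stay empty. The hostname is still in the service status:
kubectl describe service frontend | grep -i "LoadBalancer Ingress"

# Newer kubectl can read it directly from the status field:
kubectl get service frontend -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```

Keep in mind the ELB takes a minute or two to provision and pass health checks before the hostname starts serving traffic.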