I am following the example here: http://kubernetes.io/v1.0/docs/user-guide/connecting-applications.html#environment-variables. Although DNS seems to be enabled:
skwok-wpc-3:1.0 skwok$ kubectl get services kube-dns --namespace=kube-system
NAME       LABELS                                                                            SELECTOR           IP(S)       PORT(S)
kube-dns   k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS    k8s-app=kube-dns   10.0.0.10   53/UDP
                                                                                                                            53/TCP
and the service is up:
$ kubectl get svc
NAME         LABELS                                    SELECTOR    IP(S)          PORT(S)
kubernetes   component=apiserver,provider=kubernetes   <none>      10.0.0.1       443/TCP
nginxsvc     app=nginx                                 app=nginx   10.0.128.194   80/TCP
Following the example, I cannot use the curlpod to look up the service:
$ kubectl exec curlpod -- nslookup nginxsvc
Server: 10.0.0.10
Address 1: 10.0.0.10 ip-10-0-0-10.us-west-2.compute.internal
nslookup: can't resolve 'nginxsvc'
Did I miss anything? I am using AWS, and I started my cluster with export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash. Thank you.
Please see http://kubernetes.io/v1.0/docs/user-guide/debugging-services.html and make sure nginx is running and serving within your pod. I would also suggest something like:
$ kubectl get ep nginxsvc
$ kubectl exec -it curlpod -- /bin/sh
pod$ curl ip-from-kubectl-get-ep
pod$ traceroute ip-from-kubectl-get-ep
If that doesn't work, please reply or jump on the Kubernetes Slack channel.
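As a first check, it may help to confirm that cluster DNS works at all before testing nginxsvc. This is a sketch of the steps from the debugging-services guide; the k8s-app=kube-dns label and pod/service names are the defaults, so adjust if your cluster differs:

```shell
# Check that the DNS pods themselves are up (default addon label).
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns

# Verify that nginxsvc actually has endpoints behind it.
kubectl get ep nginxsvc

# From inside the pod, resolve a service that always exists;
# if this also fails, the problem is cluster DNS, not nginxsvc.
kubectl exec curlpod -- nslookup kubernetes.default
```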
I'm applying the aws-efs-csi driver like this on a Kubernetes cluster:
kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.0"
I need to edit the configuration file to add credentials for pulling docker images.
I couldn't find a way to edit it via kubectl edit.
This is the pod in the kube-system namespace:
# kubectl get pods --all-namespaces
NAMESPACE     NAME                  READY   STATUS    RESTARTS   AGE
...
kube-system   efs-csi-node-xxssqr   3/3     Running   0          69d
...
It's a DaemonSet, so you can edit it with:
kubectl -n kube-system edit ds/efs-csi-node
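If the goal is just to add registry credentials, one option is to patch the DaemonSet's pod template instead of editing it interactively. This is a sketch: the secret name regcred and the registry values are placeholders I've chosen, and the secret must exist in kube-system before the patch:

```shell
# Create the registry secret first (all values here are placeholders).
kubectl -n kube-system create secret docker-registry regcred \
  --docker-server=<your-registry> \
  --docker-username=<user> \
  --docker-password=<password>

# Add the secret to the DaemonSet's pod spec; the pods are then
# recreated with imagePullSecrets set.
kubectl -n kube-system patch ds efs-csi-node \
  -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"regcred"}]}}}}'
```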
After bringing up a local Kubernetes cluster with minikube, I ran a series of kubectl commands:
$ kubectl apply -f https://app.getambassador.io/initializer/yaml/0a6624ff-5b39-418f-b61d-7ba83dc3ab7b/crds && \
kubectl wait --for condition=established --timeout=90s crd -lproduct=aes
$ kubectl apply -f https://app.getambassador.io/initializer/yaml/0a6624ff-5b39-418f-b61d-7ba83dc3ab7b/install && \
kubectl wait -n ambassador deploy -lproduct=aes --for condition=available --timeout=90s
$ kubectl apply -f https://app.getambassador.io/initializer/yaml/0a6624ff-5b39-418f-b61d-7ba83dc3ab7b/configure
To see the Ambassador Edge Stack (Ingress and API gateway) services:
$ kubectl get svc --namespace ambassador
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ambassador         LoadBalancer   10.111.43.125   <pending>     80:30130/TCP,443:30217/TCP   8m37s
ambassador-admin   ClusterIP      10.111.152.68   <none>        8877/TCP                     8m37s
ambassador-redis   ClusterIP      10.98.170.102   <none>        6379/TCP                     8m38s
The external IP should be localhost, but it isn't. Those commands don't specify the external IP, at least not directly. Is some sort of setting missing in this case? This question should possibly be addressed to the Ambassador Labs (Datawire) people.
Minikube has an Ambassador addon, so you don't have to deploy it like that. All you have to do is enable it:
minikube addons enable ambassador
You can then access it in three ways, as described in the docs:
With minikube tunnel running in a separate terminal:
With this you will be able to access it via the external IP. To get it, run kubectl get service ambassador -n ambassador:
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
ambassador   LoadBalancer   10.106.55.39   10.106.55.39   80:31993/TCP,443:31124/TCP   24h
Note that Minikube does not natively support load balancers. Instead of minikube tunnel you might want to check also minikube service list. The output should look like this:
➜ ~ minikube service list
|-------------|-----------------------------|--------------|-------------------------|
|  NAMESPACE  |            NAME             | TARGET PORT  |           URL           |
|-------------|-----------------------------|--------------|-------------------------|
| ambassador  | ambassador                  | http/80      | http://172.17.0.3:31993 |
|             |                             | https/443    | http://172.17.0.3:31124 |
| ambassador  | ambassador-admin            | No node port |                         |
| ambassador  | ambassador-operator-metrics | No node port |                         |
|-------------|-----------------------------|--------------|-------------------------|
Configuring it with an Ingress resource, which is also described here.
Configuring it with an Ambassador Mapping resource.
Normally, when you create a Service of type LoadBalancer, Kubernetes automatically tries to provision an external load balancer in the cloud environment. Minikube, however, has no built-in load balancer it can provision for the ambassador service. There are two ways around this.
Use minikube tunnel to simulate a load balancer. Depending on your minikube version, the resulting external IP might be either localhost or the cluster IP of the service.
Deploy Ambassador as a NodePort service and use kubectl port-forward so that localhost resolves to the Ambassador service.
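The port-forward option can be sketched like this; the service name and namespace match the listing above, and the local port 8080 is an arbitrary choice:

```shell
# Forward local port 8080 to the ambassador service's port 80.
# While this runs, http://localhost:8080 reaches Ambassador.
kubectl port-forward -n ambassador service/ambassador 8080:80
```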
I have a service and I would like to get an IP from its spec, using -o go-template I can do it that way:
kubectl get service webapp1-loadbalancer-svc -o go-template='{{(index .status.loadBalancer.ingress 0).ip}}'
This returns the IP of the first ingress in the load balancer which is what I want.
However, instead of using -o go-template, I would like to use -o template. I have tried multiple commands but I am unable to do so. The closest thing I have working is:
kubectl get service webapp1-loadbalancer-svc -o template={{.status.loadBalancer.ingress}}
But this returns the map [map[ip:172.17.0.28]], not just the IP. Everything I have tried to get the IP in the same command is returning errors while executing the template.
Is there a way to obtain the IP from the map using one kubectl command using -o template instead of -o go-template?
kubectl supports JSONPath templates.
Using JSONPath, you can retrieve the service's cluster IP and other details, as below.
ubuntu@k8s-master:~$ kubectl get all -o wide
NAME        READY   STATUS    RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
pod/nginx   1/1     Running   0          129m   192.168.85.226   k8s-node01   <none>           <none>

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE    SELECTOR
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   2d2h   <none>
service/nginx        ClusterIP   10.96.184.196   <none>        80/TCP    11m    run=nginx
ubuntu@k8s-master:~$ kubectl get service nginx -o jsonpath='{.spec.clusterIP}'
10.96.184.196
JSONPath works well with the LoadBalancer type as well; hope this is what you want to work with.
$ kubectl get all -o wide
NAME        READY   STATUS    RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
pod/nginx   1/1     Running   0          133m   192.168.85.226   k8s-node01   <none>           <none>

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE    SELECTOR
service/kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        2d2h   <none>
service/nginx        LoadBalancer   10.100.165.17   <pending>     80:30852/TCP   5s     run=nginx
ubuntu@k8s-master:~$ kubectl get service nginx -o jsonpath='{.spec.type}'
LoadBalancer
ubuntu@k8s-master:~$ kubectl get service nginx -o jsonpath='{.spec.clusterIP}'
10.100.165.17
So when you extract PORT(S), it also returns a map, as below:
$ kubectl get service nginx -o jsonpath='{.spec.ports}'
[map[nodePort:30852 port:80 protocol:TCP targetPort:80]]
You can extract the nodePort, port, and targetPort individually, as below:
$ kubectl get service nginx -o jsonpath='{.spec.ports[].targetPort}'
80
ubuntu@k8s-master:~$ kubectl get service nginx -o jsonpath='{.spec.ports[].nodePort}'
30852
ubuntu@k8s-master:~$ kubectl get service nginx -o jsonpath='{.spec.ports[].port}'
80
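When a service exposes more than one port, the bare [] index is ambiguous, so a JSONPath filter on the port name can pick out a specific entry. The port name http here is an assumption (port names are not shown in the output above):

```shell
# Select the nodePort of the port named "http" (hypothetical name).
kubectl get service nginx -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'
```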
Hope the above examples help you resolve your query.
You should be able to do it with this command when using JSONPath:
kubectl get service webapp1-loadbalancer-svc -o jsonpath='{.status.loadBalancer.ingress[].ip}'
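If -o template really is required, note that kubectl treats it as an alias for the Go template printer, so the same index expression from the question should work there too. A sketch, reusing the service name from the question:

```shell
# Quote the template so the shell does not mangle the braces.
kubectl get service webapp1-loadbalancer-svc \
  -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}'
```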
As you know, installing Istio creates a Kubernetes load balancer with a public IP and uses that public IP as the external IP of the istio-ingressgateway LoadBalancer service. As the IP is not static, I created a static public IP in Azure in the same resource group as the AKS nodes, having found the resource group name as below:
$ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip
I download the installation file through following command:
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.4.2 sh -
I tried to re-install istio by following command:
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set grafana.enabled=true --set prometheus.enabled=true --set tracing.enabled=true --set kiali.enabled=true --set gateways.istio-ingressgateway.loadBalancerIP= my-static-public-ip | kubectl apply -f -
However, it didn't work; I still got a dynamic IP. So I tried to set my static public IP in the files istio-demo.yaml and istio-demo-auth.yaml by adding the load balancer IP under istio-ingressgateway:
spec:
  type: LoadBalancer
  loadBalancerIP: my-staticPublicIP
And also in the file values-istio-gteways.yaml:
loadBalancerIP: "mystaticPublicIP"
externalIPs: ["mystaticPublicIP"]
Then I re-installed Istio using the helm command mentioned above. This time it added mystaticPublicIP as one of the external IPs of the istio-ingressgateway LoadBalancer service, so now it has both the dynamic IP and mystaticPublicIP.
That doesn't seem like the right way to do it.
I went through the relevant questions on this site and also googled, but none of them helped.
Does anyone know how to make this work?
I can successfully assign the static public IP to Istio gateway service with the following command,
helm template install/kubernetes/helm --name istio --namespace istio-system --set gateways.istio-ingressgateway.loadBalancerIP=my-static-public-ip | kubectl apply -f -
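To confirm the assignment took effect, you can check the gateway service afterwards (a sketch; the namespace and service name are the Istio defaults):

```shell
# The EXTERNAL-IP column should now show the static address,
# with no second dynamic IP alongside it.
kubectl get service istio-ingressgateway -n istio-system
```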
I have a simple container in Google Container Registry that does a few things and executes a binary, a Go-based server. Here are the contents of the Dockerfile:
FROM debian:stable
WORKDIR /workspace/
COPY key.json .
COPY bin/user-creds .
EXPOSE 1108
ENV GOOGLE_APPLICATION_CREDENTIALS /workspace/key.json
RUN apt-get update \
&& apt-get install -y ca-certificates \
&& chmod +x user-creds
CMD ["./user-creds"]
This container has been tested locally and works perfectly. So, using the Google Cloud Shell, I ran it:
kubectl run user-creds --image=eu.gcr.io/GCLOUD_PROJECT/user-creds:COMMIT_SHA --port=1108
Then, as the docs say, I exposed it on a NodePort:
kubectl expose deployment user-creds --target-port=1108 --type=NodePort
Then I created an Ingress with a path to the service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: INGRESS_NAME
  annotations:
    kubernetes.io/ingress.global-static-ip-name: IP_NAME
spec:
  rules:
  - http:
      paths:
      - path: /user/creds/*
        backend:
          serviceName: user-creds
          servicePort: 1108
Then I created the Ingress:
kubectl create -f INGRESS_NAME.yaml
The Ingress was created, and after waiting some time, here are its details:
NAME           HOSTS   ADDRESS      PORTS   AGE
INGRESS_NAME   *       IP_ADDRESS   80      38m
But when I go to the actual URL with the path, I get a 502 error. On any other path I get the default backend's 404 error, but on the specific /user/creds/ path I get the 502. To check whether something was wrong with the cluster or with my specific container, port, or something else, I tried exposing the container as a LoadBalancer, and it works perfectly. The command:
kubectl expose deployment user-creds --target-port=1108 --port=80 --type=LoadBalancer
service details:
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      INT_IP_ADDRESS   <none>        443/TCP        1h
user-creds   LoadBalancer   INT_IP_ADDRESS   IP_ADDRESS    80:31618/TCP   1m
Result: 200 with the correct response body.
I've been stuck on this for some time now. I also tried the Ingress with no paths, just user-creds as the backend, but it still gives the same error.
Any help or suggestions would be appreciated, thanks :)
Finally figured it out: it was the health check. The health check visits / and expects a 200; if it doesn't get one, it marks the backend as unhealthy and returns 502 for every request sent to it. My problem was that my / endpoint normally returns a 400 when called with no request parameters.
It was really a human error on my side; the docs even say it specifically: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#remarks
Another thing to consider is that the Ingress passes the full request path through to the backend, so in my case the server literally needs to listen on /user/creds/.
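A quick way to reproduce what the load balancer's health checker sees is to request / from inside the pod and look only at the status code. This is a sketch: the pod name is a placeholder, and it assumes the container serves on port 1108 as in the Dockerfile above:

```shell
# Print only the HTTP status code for "/" — the GKE health check
# expects a 200 here unless a readinessProbe defines another path.
kubectl exec <pod-name> -- curl -s -o /dev/null -w '%{http_code}' http://localhost:1108/
```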