How is EXTERNAL-IP set in this case? - kubectl

After bringing up a local Kubernetes cluster with minikube, I ran a series of kubectl commands:
$ kubectl apply -f https://app.getambassador.io/initializer/yaml/0a6624ff-5b39-418f-b61d-7ba83dc3ab7b/crds && \
kubectl wait --for condition=established --timeout=90s crd -lproduct=aes
$ kubectl apply -f https://app.getambassador.io/initializer/yaml/0a6624ff-5b39-418f-b61d-7ba83dc3ab7b/install && \
kubectl wait -n ambassador deploy -lproduct=aes --for condition=available --timeout=90s
$ kubectl apply -f https://app.getambassador.io/initializer/yaml/0a6624ff-5b39-418f-b61d-7ba83dc3ab7b/configure
To see the Ambassador Edge Stack (Ingress and API gateway) services:
$ kubectl get svc --namespace ambassador
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ambassador LoadBalancer 10.111.43.125 <pending> 80:30130/TCP,443:30217/TCP 8m37s
ambassador-admin ClusterIP 10.111.152.68 <none> 8877/TCP 8m37s
ambassador-redis ClusterIP 10.98.170.102 <none> 6379/TCP 8m38s
The external IP should be localhost, but it isn't. Those commands don't specify the external IP, at least not directly. Is some sort of setting missing in this case?
This question should perhaps be addressed to the Ambassador Labs (Datawire) people.

Minikube has an ambassador addon, so you don't have to deploy it like that. All you have to do is enable it:
minikube addons enable ambassador
You can then access it in 3 ways, as described in the docs:
With minikube tunnel running in a separate terminal:
With this you will be able to access it via the external IP. To get it, run kubectl get service ambassador -n ambassador:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ambassador LoadBalancer 10.106.55.39 10.106.55.39 80:31993/TCP,443:31124/TCP 24h
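For reference, the tunnel command itself, left running in its own terminal, is just:
minikube tunnel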
Note that Minikube does not natively support load balancers. Instead of minikube tunnel you might also want to check minikube service list. The output should look like this:
➜ ~ minikube service list
|-------------|-----------------------------|--------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|-----------------------------|--------------|-------------------------|
| ambassador | ambassador | http/80 | http://172.17.0.3:31993 |
| | | https/443 | http://172.17.0.3:31124 |
| ambassador | ambassador-admin | No node port | |
| ambassador | ambassador-operator-metrics | No node port | |
|-------------|-----------------------------|--------------|-------------------------|
Configuring it with an Ingress resource, which is also described here.
Configuring it with an Ambassador Mapping resource (a minimal sketch is shown below).
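For the Mapping approach, a sketch could look like the following; the route prefix and backend service name here are placeholders for illustration, not taken from your setup:
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: example-backend
  namespace: ambassador
spec:
  # Route requests under /backend/ to the backend Service
  prefix: /backend/
  service: example-service:80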

Normally, when you create a Service of type LoadBalancer, Kubernetes automatically tries to provision an external load balancer in the cloud environment. Minikube, however, has no built-in load balancer that it can provision for the ambassador service, which is why EXTERNAL-IP stays <pending>. There are 2 ways around this:
Use minikube tunnel to simulate a load balancer. Depending on your minikube version, this might be either localhost or the cluster IP of the service.
Deploy Ambassador as a NodePort Service and do a kubectl port-forward so that localhost resolves to the Ambassador service (a sketch follows below).
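A minimal sketch of the second option, assuming the ambassador service in the ambassador namespace created above:
# Forward local port 8080 to port 80 of the ambassador service
kubectl port-forward -n ambassador service/ambassador 8080:80
# Ambassador is then reachable at http://localhost:8080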

Related

How can I troubleshoot a Rancher HA deployment cert-manager issue on AWS?

I am new to both Rancher and K8s.
I walked through the Rancher HA documentation and deployed a 3-node cluster on AWS with a Layer 4 load balancer configured.
Everything indicates that the deployment was successful, but I am having issues with certificates. When I go to the site after install (https://rancher.domain.net), I am prompted with an un-trusted site warning. I accept the risk, then the page just hangs. I can see the Rancher favicon, but the page never loads.
I opted for the self-signed certs to get it up and running. My AWS NLB just forwards 443 and 80 to the target groups and is not using an ACM-provided cert.
I checked these two settings per the documentation:
$ kubectl -n cattle-system describe certificate
No resources found in cattle-system namespace.
$ kubectl -n cattle-system describe issuer
No resources found in cattle-system namespace.
Describe issuer originally showed what looked like appropriate output, but that is no longer showing anything.
I ran this command:
$ kubectl get pods --namespace cert-manager
NAME READY STATUS RESTARTS AGE
cert-manager-**********-***** 1/1 Running 0 34m
cert-manager-cainjector-**********-***** 1/1 Running 0 34m
cert-manager-webhook-**********-***** 1/1 Running 0 34m
At this point, I am beyond my experience and would appreciate some pointers on how to troubleshoot this.
List the services. What is the status of the rancher service?
kubectl -n <namespace> get services
Can you describe the rancher service object?
kubectl -n <namespace> describe service <rancher service>
or
kubectl -n <namespace> get service <rancher service> -o json
Is it of type LoadBalancer, i.e. did you let the Kubernetes AWS cloud provider create the NLB, or did you create it outside of K8s? If you can, it is better to let Kubernetes create the LB.
Reference for tweaking the cloud providers via annotations.
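As an illustration only, a Service that asks the AWS cloud provider for an NLB via an annotation might look like this; the name, selector and ports are assumptions, not taken from your deployment:
apiVersion: v1
kind: Service
metadata:
  name: rancher-lb
  namespace: cattle-system
  annotations:
    # Ask the in-tree AWS cloud provider for an NLB instead of a classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: rancher
  ports:
    - name: https
      port: 443
      targetPort: 443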

JupyterHub proxy-public svc has no external IP (stuck in <pending>)

I am using Helm to deploy JupyterHub (version 0.8.2) to Kubernetes (AWS managed Kubernetes, "EKS"). I have a Helm config describing the proxy-public service, with an AWS Elastic Load Balancer:
proxy:
  secretToken: ""
  https:
    enabled: true
    type: offload
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ...
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '1801'
Problem: When I deploy JupyterHub to EKS via helm:
helm upgrade --install jhub jupyterhub/jupyterhub --namespace jhub --version=0.8.2 --values config.yaml
The proxy-public svc never gets an external IP. It is stuck in the pending state:
> kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hub ClusterIP 172.20.241.23 <none> 8081/TCP 15m
proxy-api ClusterIP 172.20.170.189 <none> 8001/TCP 15m
proxy-public LoadBalancer 172.20.72.196 <pending> 80:31958/TCP,443:30470/TCP 15m
I did kubectl describe svc proxy-public and kubectl get events and there does not appear to be anything out of the ordinary. No errors.
The problem turned out to be that I had mistakenly put the Kubernetes cluster (and control plane) in private subnets only, which made it impossible for the ELB to get an external IP.
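If you hit the same symptom, it is worth checking that the VPC has public subnets tagged for external load balancers. A sketch using the documented EKS subnet tags; the subnet IDs are placeholders:
# Public subnets: allow internet-facing load balancers
aws ec2 create-tags --resources subnet-aaaa1111 \
  --tags Key=kubernetes.io/role/elb,Value=1
# Private subnets: used only for internal load balancers
aws ec2 create-tags --resources subnet-bbbb2222 \
  --tags Key=kubernetes.io/role/internal-elb,Value=1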
You will need another annotation like this in order to use an AWS classic load balancer:
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
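In the Helm config from the question, that annotation would sit alongside the existing ones; a sketch:
proxy:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"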
Deploying JupyterHub on Kubernetes can sometimes be overkill if all you want is a JupyterHub that is accessible over the internet for you or your team. Instead of the complicated Kubernetes setup, you can set up a VM in AWS or any other cloud and have JupyterHub installed and running as a service.
In fact, there is already a VM setup available on AWS, GCP and Azure which can be used to spin up your JupyterHub VM that will be accessible on a public IP and support single- or multi-user sessions in just a few clicks. Details are below if you want to try it out:
Setup on GCP
Setup on AWS
Setup on Azure

Assign a Static Public IP to Istio ingress-gateway Loadbalancer service

As you know, installing Istio creates a Kubernetes load balancer with a public IP and uses that public IP as the External IP of the istio-ingressgateway LoadBalancer service. As the IP is not static, I created a static public IP in Azure in the same resource group as AKS; I found the resource group name as below:
$ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip
I downloaded the installation file with the following command:
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.4.2 sh -
I tried to re-install Istio with the following command:
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set grafana.enabled=true --set prometheus.enabled=true --set tracing.enabled=true --set kiali.enabled=true --set gateways.istio-ingressgateway.loadBalancerIP= my-static-public-ip | kubectl apply -f -
However, it didn't work; I still got the dynamic IP. So I tried to set my static public IP in the files
istio-demo.yaml and istio-demo-auth.yaml by adding the load balancer IP under istio-ingressgateway:
spec:
  type: LoadBalancer
  loadBalancerIP: my-staticPublicIP
And also in the file values-istio-gateways.yaml:
loadBalancerIP: "mystaticPublicIP"
externalIPs: ["mystaticPublicIP"]
I then re-installed Istio using the helm command mentioned above. This time it added mystaticPublicIP as one of the External IPs of the istio-ingressgateway LoadBalancer service, so now it has both the dynamic IP and mystaticPublicIP.
That doesn't seem like the right way to do it.
I went through the relevant questions on this site and also googled, but none of them helped.
I'm wondering if anyone knows how to make this work?
I can successfully assign the static public IP to the Istio gateway service with the following command:
helm template install/kubernetes/helm --name istio --namespace istio-system --set gateways.istio-ingressgateway.loadBalancerIP=my-static-public-ip | kubectl apply -f -
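To confirm the IP was picked up, you can then check the gateway service; this assumes the default service name istio-ingressgateway in the istio-system namespace:
kubectl get svc istio-ingressgateway -n istio-system
# EXTERNAL-IP should now show the static public IP instead of a dynamic one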

kubernetes LoadBalancer service

Trying to teach myself how to use Kubernetes, and having some issues.
I was able to set up a cluster, deploy the nginx image and then access nginx using a service of type NodePort (once I added the port to the security group inbound rules of the node).
My next step was to try to use a service of type LoadBalancer to try to access nginx.
I set up a new cluster and deployed the nginx image.
kubectl \
  create deployment my-nginx-deployment \
  --image=nginx
I then set up the service for the LoadBalancer
kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=8080 --name=nginxpubic
Once it was done setting up, I tried to access nginx using the LoadBalancer Ingress (which I found by describing the LoadBalancer service). I received a "This page isn't working" error.
Not really sure where I went wrong.
Results of kubectl get svc:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 7h
nginxpubic LoadBalancer 100.71.37.139 a5396ba70d45d11e88f290658e70719d-1485253166.us-west-2.elb.amazonaws.com 80:31402/TCP 7h
From the nginx Docker Hub page, I see that the container is using port 80.
https://hub.docker.com/_/nginx/
It should be like this:
kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=80 --name=nginxpubic
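After recreating the service with the corrected target port, something like this should confirm it; the ELB hostname comes from the EXTERNAL-IP column and can take a minute or two to start resolving:
kubectl get svc nginxpubic
# Hit the ELB hostname on port 80
curl http://<elb-hostname-from-external-ip>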
Also, make sure the LoadBalancer service type is available in your environment.
Known Issues for minikube installation
Features that require a Cloud Provider will not work in Minikube. These include:
LoadBalancers
Features that require multiple nodes. These include:
Advanced scheduling policies

skydns not able to find nginxsvc

I am following the example here: http://kubernetes.io/v1.0/docs/user-guide/connecting-applications.html#environment-variables. Although the DNS seems to be enabled:
skwok-wpc-3:1.0 skwok$ kubectl get services kube-dns --namespace=kube-system
NAME LABELS SELECTOR IP(S) PORT(S)
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP
53/TCP
and the service is up
$ kubectl get svc
NAME LABELS SELECTOR IP(S) PORT(S)
kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.1 443/TCP
nginxsvc app=nginx app=nginx 10.0.128.194 80/TCP
Following the example, I cannot use the curlpod to look up the service:
$ kubectl exec curlpod -- nslookup nginxsvc
Server: 10.0.0.10
Address 1: 10.0.0.10 ip-10-0-0-10.us-west-2.compute.internal
nslookup: can't resolve 'nginxsvc'
Did I miss anything? I am using AWS, and I used export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash to start my cluster. Thank you.
Please see http://kubernetes.io/v1.0/docs/user-guide/debugging-services.html and make sure nginx is running and serving within your pod. I would also suggest something like:
$ kubectl get ep nginxsvc
$ kubectl exec -it curlpod /bin/sh
pod$ curl ip-from-kubectl-get-ep
pod$ traceroute ip-from-kubectl-get-ep
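If the endpoint responds but name resolution still fails, it can also help to query the DNS server directly with the fully-qualified service name; a sketch assuming the default namespace and cluster domain:
pod$ nslookup nginxsvc.default.svc.cluster.local 10.0.0.10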
If that doesn't work, please reply or jump on the Kubernetes Slack channel.