JupyterHub proxy-public svc has no external IP (stuck in <pending>) - amazon-web-services

I am using helm to deploy JupyterHub (version 0.8.2) to kubernetes (AWS managed kubernetes "EKS"). I have a helm config to describe the proxy-public service, with an AWS elastic load balancer:
proxy:
  secretToken: ""
  https:
    enabled: true
    type: offload
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ...
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '1801'
Problem: When I deploy JupyterHub to EKS via helm:
helm upgrade --install jhub jupyterhub/jupyterhub --namespace jhub --version=0.8.2 --values config.yaml
The proxy-public svc never gets an external IP. It is stuck in the pending state:
> kubectl get svc
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
hub            ClusterIP      172.20.241.23    <none>        8081/TCP                     15m
proxy-api      ClusterIP      172.20.170.189   <none>        8001/TCP                     15m
proxy-public   LoadBalancer   172.20.72.196    <pending>     80:31958/TCP,443:30470/TCP   15m
I did kubectl describe svc proxy-public and kubectl get events and there does not appear to be anything out of the ordinary. No errors.

The problem turned out to be the fact that I had mistakenly put the kubernetes cluster (and control plane) in private subnets only, thus making it impossible for the ELB to get an external IP.
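For context: an internet-facing ELB can only be created when the cluster's VPC has public subnets that the AWS cloud provider can discover. A rough sketch of the tags it looks for on those subnets (the subnet ID and cluster name below are placeholders):
# Hypothetical subnet ID and EKS cluster name; tag each public subnet so the
# AWS cloud provider will consider it when placing internet-facing ELBs.
aws ec2 create-tags --resources subnet-0abc123def4567890 \
  --tags Key=kubernetes.io/role/elb,Value=1 \
         Key=kubernetes.io/cluster/my-eks-cluster,Value=shared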

If you want to keep the cluster in private subnets, you will need an additional annotation like this one, which provisions an internal AWS classic load balancer instead:
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
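For reference, a sketch of where that annotation would sit in the Helm config.yaml shown above (relevant fragment only):
proxy:
  service:
    annotations:
      # Provisions an internal classic ELB, usable when the nodes sit in private subnets.
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0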

Deploying JupyterHub on Kubernetes can sometimes be overkill if all you want is a JupyterHub that is accessible over the internet for you or your team. Instead of the complicated Kubernetes setup, you can set up a VM in AWS or any other cloud and have JupyterHub installed and running as a service.
In fact, there is already a VM image available on AWS, GCP and Azure that can be used to spin up a JupyterHub VM, accessible on a public IP and supporting single- or multi-user sessions, in just a few clicks. Details are below if you want to try it out:
Setup on GCP
Setup on AWS
Setup on Azure

Related

Kubernetes Load Balancer on EC2 (Not EKS) [duplicate]

I've created a Kubernetes cluster with AWS EC2 instances using kubeadm, but when I try to create a service with type LoadBalancer I get an EXTERNAL-IP pending status:
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP          123m
nginx        LoadBalancer   10.107.199.170   <pending>     8080:31579/TCP   45m52s
My create command is
kubectl expose deployment nginx --port 8080 --target-port 80 --type=LoadBalancer
I'm not sure what I'm doing wrong.
What I expect to see is an EXTERNAL-IP address given for the load balancer.
Has anyone had this and successfully solved it, please?
Thanks.
You need to set up the interface between Kubernetes and AWS, which is the AWS cloud provider integration.
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
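A hedged addition based on the guides linked below (not shown in the original answer): with kubeadm, the control-plane components also need the cloud-provider flag, not just the kubelet. A sketch for the same kubeadm config file:
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws   # lets the API server use AWS node metadata
controllerManager:
  extraArgs:
    cloud-provider: aws   # lets the controller manager create ELBs and EBS volumes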
More details can be found:
https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/
https://blog.heptio.com/setting-up-the-kubernetes-aws-cloud-provider-6f0349b512bd
https://blog.scottlowe.org/2019/02/18/kubernetes-kubeadm-and-the-aws-cloud-provider/
https://itnext.io/kubernetes-part-2-a-cluster-set-up-on-aws-with-aws-cloud-provider-and-aws-loadbalancer-f02c3509f2c2
Once you finish this setup, Kubernetes will be able to create an AWS load balancer for each service of type LoadBalancer, and you will also be able to control many of the load balancer's settings using annotations.
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: kube-system
  labels:
    run: example
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xx-xxxx-x:xxxxxxxxx:xxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx # replace this value
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 5556
      protocol: TCP
  selector:
    app: example
Different settings can be applied to a load balancer service in AWS using annotations.
To create a Kubernetes cluster on AWS using plain EC2 instances, you need to take care of some extra configuration to make it work as expected.
That's why your service is not getting an external IP.
As a workaround, you can get the public IP of the EC2 instance on which your cluster deployed the nginx pod, and then edit the nginx service to add it as an external IP:
kubectl edit service nginx
which opens the service manifest in an editor, where you can add the external IP:
type: LoadBalancer
externalIPs:
  - 1.2.3.4
where 1.2.3.4 is the public IP of the EC2 instance.
Then make sure your security group allows inbound traffic on the node port (31579).
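A hedged sketch of that security group rule with the AWS CLI (the group ID is a placeholder):
# Allow inbound traffic to the NodePort from anywhere; replace sg-... with
# the security group attached to the EC2 instance running the pod.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 31579 --cidr 0.0.0.0/0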
Now you can reach the service from any browser at 1.2.3.4:31579.

How can I set up an Ingress to connect to a ClusterIP service?

Objective
I have deployed Apache Airflow on AWS' Elastic Kubernetes Service using Airflow's Stable Helm chart. My goal is to create an Ingress to allow others to access the airflow webserver UI via their browser. It's worth mentioning that I am deploying on EKS using AWS Fargate. My experience with Kubernetes is somewhat limited, and I have not set up an Ingress myself before.
What I have tried to do
I am currently able to connect to the airflow webserver pod via port-forwarding (like kubectl port-forward airflow-web-pod 8080:8080). I have tried setting up the Ingress through the Helm chart (documented here). After doing so:
Running kubectl get ingress -n dp-airflow I got:
NAME             CLASS    HOSTS         ADDRESS   PORTS   AGE
airflow-flower   <none>   foo.bar.com             80      3m46s
airflow-web      <none>   foo.bar.com             80      3m46s
Then running kubectl describe ingress airflow-web -n dp-airflow I get:
Name:             airflow-web
Namespace:        dp-airflow
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host          Path       Backends
  ----          ----       --------
  foo.bar.com
                /airflow   airflow-web:web (<redacted_ip>:8080)
Annotations:      meta.helm.sh/release-name: airflow
                  meta.helm.sh/release-namespace: dp-airflow
I am not sure what I need to put into the browser, so I have tried using http://foo.bar.com/airflow as well as the cluster endpoint/IP, without success.
This is what the airflow webserver service looks like:
Running kubectl get services -n dp-airflow, I get:
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
airflow-web   ClusterIP   <redacted_ip>   <none>        8080/TCP   28m
Other things I have tried
I have tried creating an Ingress without the Helm chart (I am using Terraform), like:
resource "kubernetes_ingress" "airflow_ingress" {
metadata {
name = "ingress"
}
spec {
backend {
service_name = "airflow-web"
service_port = 8080
}
rule {
http {
path {
backend {
service_name = "airflow-web"
service_port = 8080
}
path = "/airflow"
}
}
}
}
}
However I was still not able to connect to the web UI. What are the steps that I need to take to set up an Ingress? Which address do I need to use in my browser to connect to the web UI?
I am happy to provide further details if needed.
It sounds like you have created Ingress resources. That is a good step. But for those Ingress resources to have any effect, you also need an Ingress controller that can realize your Ingress as an actual load balancer.
In an AWS environment, you should look at the AWS Load Balancer Controller, which creates an AWS Application Load Balancer configured according to your Ingress resources.
Ingress to connect to a ClusterIP service?
First, the default load balancer type is the classic load balancer, but you probably want the newer Application Load Balancer for your Ingress resources, so add this annotation to them:
annotations:
  kubernetes.io/ingress.class: alb
By default, your services should be of type NodePort, but as you requested, it is possible to use ClusterIP services as well, if you also add this annotation (the traffic mode) to your Ingress resource:
alb.ingress.kubernetes.io/target-type: ip
See the ALB Ingress documentation for more on this.
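Putting those annotations together, a minimal sketch of what the Ingress for the airflow-web service might look like (host, path and port are taken from the question; the scheme annotation is an added assumption for a publicly reachable ALB):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: airflow-web
  namespace: dp-airflow
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing   # expose the ALB publicly
    alb.ingress.kubernetes.io/target-type: ip           # route straight to pod IPs (needed for ClusterIP services / Fargate)
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /airflow
            backend:
              serviceName: airflow-web
              servicePort: 8080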

kubernetes LoadBalancer service

Trying to teach myself how to use Kubernetes, and having some issues.
I was able to set up a cluster, deploy the nginx image and then access nginx using a service of type NodePort (once I added the port to the security group inbound rules of the node).
My next step was to try to use a service of type LoadBalancer to try to access nginx.
I set up a new cluster and deployed the nginx image.
kubectl \
create deployment my-nginx-deployment \
--image=nginx
I then set up the service for the LoadBalancer
kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=8080 --name=nginxpubic
Once it was done setting up, I tried to access nginx using the LoadBalancer Ingress (which I found by describing the LoadBalancer service). I received a "This page isn't working" error.
Not really sure where I went wrong.
results of kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 7h
nginxpubic LoadBalancer 100.71.37.139 a5396ba70d45d11e88f290658e70719d-1485253166.us-west-2.elb.amazonaws.com 80:31402/TCP 7h
From the nginx dockerhub page, I see that the container is using port 80.
https://hub.docker.com/_/nginx/
It should be like this:
kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=80 --name=nginxpubic
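Equivalently, a minimal Service manifest with the corrected target port might look like this (a sketch; the selector assumes the app label that kubectl create deployment applied to the pods):
apiVersion: v1
kind: Service
metadata:
  name: nginxpubic
spec:
  type: LoadBalancer
  ports:
    - port: 80         # port the ELB listens on
      targetPort: 80   # port the nginx container actually serves
  selector:
    app: my-nginx-deployment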
Also, make sure the LoadBalancer service type is available in your environment.
Known Issues for minikube installation
Features that require a Cloud Provider will not work in Minikube. These include:
LoadBalancers
Features that require multiple nodes. These include:
Advanced scheduling policies

How to configure an AWS Elastic IP to point to an OpenShift Origin running pod?

We have set up OpenShift Origin on AWS using this handy guide. Our eventual
hope is to have some pods running REST or similar services that we can access
for development purposes. Thus, we don't need DNS or anything like that at this
point, just a public IP with open ports that points to one of our running pods.
Our first proof of concept is trying to get a jenkins (or even just httpd!) pod
that's running inside OpenShift to be exposed via an allocated Elastic IP.
I'm not a network engineer by any stretch, but I was able to successfully get
an Elastic IP connected to one of my OpenShift "worker" instances, which I
tested by sshing to the public IP allocated to the Elastic IP. At this point
we're struggling to figure out how to make a pod visible via that allocated Elastic IP,
however. We've tried a kubernetes LoadBalancer service, a kubernetes Ingress,
and configuring an AWS Network Load Balancer, all without being able to
successfully connect to 18.2XX.YYY.ZZZ:8080 (my public IP).
The most promising attempt was using oc port-forward, which seemed to get at least part of the
way there, but frustratingly hangs without returning:
$ oc port-forward --loglevel=7 jenkins-2-c1hq2 8080 -n my-project
I0222 19:20:47.708145 73184 loader.go:354] Config loaded from file /home/username/.kube/config
I0222 19:20:47.708979 73184 round_trippers.go:383] GET https://ec2-18-2AA-BBB-CCC.us-east-2.compute.amazonaws.com:8443/api/v1/namespaces/my-project/pods/jenkins-2-c1hq2
....
I0222 19:20:47.758306 73184 round_trippers.go:390] Request Headers:
I0222 19:20:47.758311 73184 round_trippers.go:393] X-Stream-Protocol-Version: portforward.k8s.io
I0222 19:20:47.758316 73184 round_trippers.go:393] User-Agent: oc/v1.6.1+5115d708d7 (linux/amd64) kubernetes/fff65cf
I0222 19:20:47.758321 73184 round_trippers.go:393] Authorization: Bearer Pqg7xP_sawaeqB2ub17MyuWyFnwdFZC5Ny1f122iKh8
I0222 19:20:47.800941 73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
I0222 19:20:47.800963 73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
( oc port-forward hangs at this point and never returns)
We've found a lot of information about how to get this working under GKE, but
nothing that's really helpful for getting this working for OpenShift Origin on
AWS. Any ideas?
Update:
We realized that sysdig.com's blog post on deploying OpenShift Origin on AWS was missing some key AWS setup information, so, based on OpenShift Origin's Configuring AWS page, we set the following env variables and re-ran the ansible playbook:
$ export AWS_ACCESS_KEY_ID='AKIASTUFF'
$ export AWS_SECRET_ACCESS_KEY='STUFF'
$ export ec2_vpc_subnet='my_vpc_subnet'
$ ansible-playbook -c paramiko -i hosts openshift-ansible/playbooks/byo/config.yml --key-file ~/.ssh/my-aws-stack
I think this gets us closer, but creating a load-balancer service now gives us an always-pending IP:
$ oc get services
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
jenkins-lb   172.30.XX.YYY   <pending>     8080:31338/TCP   12h
The section on AWS Applying Configuration Changes seems to imply I need to use AWS Instance IDs rather than hostnames to identify my nodes, but I tried this and OpenShift Origin fails to start if I use that method. Still at a loss.
It may not satisfy the "Elastic IP" part, but how about using the AWS cloud provider's ELB integration to expose the pod's IP/port via a Service of type LoadBalancer?
Make sure to configure the AWS cloud provider for the cluster (References)
Create a Service for the pod(s) with type LoadBalancer.
For instance, to expose the Kubernetes Dashboard via an AWS ELB:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: LoadBalancer   # <-----
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Then the Service will be exposed as an ELB and the pod can be accessed via the ELB's public DNS name, e.g. a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com.
$ kubectl (oc) get svc kubernetes-dashboard -n kube-system -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes-dashboard LoadBalancer 10.100.96.203 a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com 443:31636/TCP 16m k8s-app=kubernetes-dashboard
References
K8S AWS Cloud Provider Notes
Reference Architecture OpenShift Container Platform on Amazon Web Services
DEPLOYING OPENSHIFT CONTAINER PLATFORM 3.5 ON AMAZON WEB SERVICES
Configuring for AWS
Check this guide out: https://github.com/dwmkerr/terraform-aws-openshift
It's got some significant advantages vs. the one you're referring to in your post. Additionally, it has a clear terraform spec that you can modify and re-set to use an Elastic IP (I haven't tried it myself, but it should work).
Another way to "lock" your access to the installation is to re-code the assignment of the Public URL to the master instance in the terraform script, e.g., to a domain that you own (the default script sets it to an external IP-based value with "xip.io" added - works great for testing), then set up a basic ALB that forwards https 443 and 8443 to the master instance that the install creates (you can do it manually after the install is completed, also need a second dummy Subnet; dummy-up the healthcheck as well) and link the ALB to your domain via Route53. You can even use free Route53 wildcard certs with this approach.

Kubernetes Cluster on AWS with Kops - NodePort Service Unavailable

I am having difficulties accessing a NodePort service on my Kubernetes cluster.
Goal
set up the ALB Ingress controller so that I can use websockets and http/2
set up a NodePort service as required by that controller
Steps taken
Previously, a Kops (version 1.6.2) cluster was created on AWS eu-west-1. The kops addon for nginx ingress was added, as well as Kube-lego. ELB ingress is working fine.
Setup the ALB Ingress Controller with custom AWS keys using IAM profile specified by that project.
Changed service type from LoadBalancer to NodePort using kubectl replace --force
> kubectl describe svc my-nodeport-service
Name: my-node-port-service
Namespace: default
Labels: <none>
Selector: service=my-selector
Type: NodePort
IP: 100.71.211.249
Port: <unset> 80/TCP
NodePort: <unset> 30176/TCP
Endpoints: 100.96.2.11:3000
Session Affinity: None
Events: <none>
> kubectl describe pods my-nodeport-pod
Name:    my-nodeport-pod
Node:    <ip>.eu-west-1.compute.internal/<ip>
Labels:  service=my-selector
Status:  Running
IP:      100.96.2.11
Containers:
  update-center:
    Port:           3000/TCP
    Ready:          True
    Restart Count:  0
(ssh into node)
$ sudo netstat -nap | grep 30176
tcp6 0 0 :::30176 :::* LISTEN 2093/kube-proxy
Results
Curl from ALB hangs
Curl from <public ip address of all nodes>:<node port for service> hangs
Expected
Curl from both ALB and directly to the node:node-port should return 200 "Ok" (the service's http response to the root)
Update:
Issues created on github referencing above with some further details in some cases:
https://github.com/kubernetes/kubernetes/issues/50261
https://github.com/coreos/alb-ingress-controller/issues/169
https://github.com/kubernetes/kops/issues/3146
By default, Kops does not configure the EC2 instances to allow NodePort traffic from outside.
In order for traffic outside of the cluster to reach the NodePort you must edit the configuration for your EC2 instances that are your Kubernetes nodes in the EC2 Console on AWS.
Once in the EC2 console, click "Security Groups." Kops should have named the security groups it created for your cluster nodes.<your cluster name> and master.<your cluster name>.
We need to modify these Security Groups to forward traffic from the default port range for NodePorts to the instances.
Click on the security group, click on rules and add the following rule.
Port range to open on the nodes and master: 30000-32767
This will allow anyone on the internet to access a NodePort on your cluster, so make sure you want these exposed.
Alternatively, instead of allowing it from any origin, you can allow it only from the security group created for the ALB by the alb-ingress-controller, as in the sketch below. However, since that security group can be re-created, it will likely be necessary to modify the rule whenever the Kubernetes service changes. I suggest specifying the NodePort explicitly so it is a predetermined, known NodePort rather than a randomly assigned one.
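A hedged sketch of that rule with the AWS CLI (both security group IDs are placeholders):
# Open the NodePort range on the worker nodes' security group, but only for
# traffic coming from the ALB's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaaaaaaaaaaaaaaa \
  --protocol tcp --port 30000-32767 \
  --source-group sg-0bbbbbbbbbbbbbbbb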
The master's SG does not need the NodePort range opened for this to work.
So only the workers' SG needs to open the port range.