I'm following the CoreOS + Kubernetes - Step By Step guide in VirtualBox VMs. Everything worked until I needed to expose services as external IPs to my host. When I create my guestbook-service.json, an external IP should be assigned, like the image below.
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR
guestbook 10.0.207.218 146.148.81.8 3000/TCP app=guestbook
But instead I'm getting the following:
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR
guestbook 10.0.207.218 3000/TCP app=guestbook
My VMs have a bridged network and a NAT network configured, and CoreOS uses static IPs.
Thanks.
The "external IP" feature requires cloud provider support. In your case you might want to explore the NodePort option.
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#type-nodeport
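A NodePort service for the guestbook might look like this (a sketch; the selector and service port are taken from the output above, and the nodePort value is just an example within the default 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: guestbook
spec:
  type: NodePort
  selector:
    app: guestbook
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 30300   # example value; must fall in the cluster's NodePort range
```

The service is then reachable from the host at `<node-ip>:30300`, provided the VirtualBox network configuration allows traffic to that port on the VM.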
Related
I have the following setup:
Ingress-Nginx-Controller (serviceType "NodePort")
AWS-Load-Balancer-Controller
External-DNS
I am exposing the Ingress-Nginx-Controller via an Ingress, backed by the AWS Load Balancer Controller, both public and private. I chose this route since it was pretty easy to limit the inbound CIDRs, and the nginx ingress cannot create an ALB, only a Classic LB or an NLB.
kubectl -n ingress-nginx get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
alb-ingress-connect-nginx alb * xxxx.region.elb.amazonaws.com 80 2d8h
This ingress forwards all traffic to my nginx controller.
The service looks like
kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort a.b.c.d
I am trying to automatically setup dns records for my deployment via External-DNS. Therefore, I am creating an ingress for my deployment with ingress-class nginx and specified hostname.
Creating the records works; however, it uses the IP of my ingress-nginx-controller service (a.b.c.d) instead of the load balancer's address.
Now my question: is it possible for external-dns to look up the address of the nginx ingress, or does this only work if nginx is exposed as a service of type "LoadBalancer"?
Thanks for any help
I was able to figure this out by using --publish-status-address in the nginx controller to point to the ALB.
If you are using 2 ALBs (public and private), you need to create 2 nginx controllers, with --publish-status-address pointing to each ALB. Also remember to disable the --publish-service parameter, and use a different electionID for each controller if you installed the nginx controllers using Helm.
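For the public controller, this could be sketched as Helm values like the following (the values keys are assumed from the community ingress-nginx chart; the ALB DNS name and election ID are placeholders):

```yaml
# values-public.yaml (hypothetical) for the controller behind the public ALB
controller:
  publishService:
    enabled: false                      # disables --publish-service
  extraArgs:
    publish-status-address: "public-alb-1234.eu-central-1.elb.amazonaws.com"
  electionID: ingress-controller-leader-public   # must be unique per controller
```

A second Helm release for the private controller would use its own publish-status-address and its own electionID.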
I am using helm to deploy JupyterHub (version 0.8.2) to kubernetes (AWS managed kubernetes "EKS"). I have a helm config to describe the proxy-public service, with an AWS elastic load balancer:
proxy:
secretToken: ""
https:
enabled: true
type: offload
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ...
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '1801'
Problem: When I deploy JupyterHub to EKS via helm:
helm upgrade --install jhub jupyterhub/jupyterhub --namespace jhub --version=0.8.2 --values config.yaml
The proxy-public svc never gets an external IP. It is stuck in the pending state:
> kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hub ClusterIP 172.20.241.23 <none> 8081/TCP 15m
proxy-api ClusterIP 172.20.170.189 <none> 8001/TCP 15m
proxy-public LoadBalancer 172.20.72.196 <pending> 80:31958/TCP,443:30470/TCP 15m
I did kubectl describe svc proxy-public and kubectl get events and there does not appear to be anything out of the ordinary. No errors.
The problem turned out to be the fact that I had mistakenly put the kubernetes cluster (and control plane) in private subnets only, thus making it impossible for the ELB to get an external IP.
You will need another annotation like this one in order to use an AWS classic load balancer:
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
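In the config.yaml shown above, that annotation would sit alongside the existing ones under proxy.service.annotations (a sketch; the cert ARN is elided as in the original config):

```yaml
proxy:
  service:
    annotations:
      # ...existing ssl-cert / backend-protocol / ssl-ports annotations...
      # This one makes the classic ELB internal, for clusters whose nodes
      # sit in private subnets only.
      service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
```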
Deploying JupyterHub on Kubernetes can sometimes be overkill if all you want is a JupyterHub that is accessible over the internet for you or your team. Instead of the complicated Kubernetes setup, you can set up a VM in AWS or any other cloud and run JupyterHub as a service.
In fact, there is already a VM setup available on AWS, GCP and Azure which can be used to spin up your JupyterHub VM that will be accessible on a public IP and supports single- or multi-user sessions in just a few clicks. Details are below if you want to try it out:
Setup on GCP
Setup on AWS
Setup on Azure
Trying to teach myself on how to use Kubernetes, and having some issues.
I was able to set up a cluster, deploy the nginx image and then access nginx using a service of type NodePort (once I added the port to the security group inbound rules of the node).
My next step was to try to use a service of type LoadBalancer to try to access nginx.
I set up a new cluster and deployed the nginx image.
kubectl \
create deployment my-nginx-deployment \
--image=nginx
I then set up the service for the LoadBalancer
kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=8080 --name=nginxpubic
Once it was done setting up, I tried to access nginx using the LoadBalancer Ingress (which I found by describing the LoadBalancer service). I received a "This page isn't working" error.
Not really sure where I went wrong.
results of kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 7h
nginxpubic LoadBalancer 100.71.37.139 a5396ba70d45d11e88f290658e70719d-1485253166.us-west-2.elb.amazonaws.com 80:31402/TCP 7h
From the nginx Docker Hub page, I see that the container is using port 80.
https://hub.docker.com/_/nginx/
It should be like this:
kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=80 --name=nginxpubic
Also, make sure the service type LoadBalancer is available in your environment.
Known Issues for minikube installation
Features that require a Cloud Provider will not work in Minikube. These include:
LoadBalancers
Features that require multiple nodes. These include:
Advanced scheduling policies
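One way to sanity-check the port mapping is to inspect the service's endpoints (commands use the service name from the question and require a running cluster):

```shell
# With --target-port=8080 the endpoints list stays empty, because the nginx
# container only listens on 80; after recreating the service with
# --target-port=80, the pod IP shows up here.
kubectl get endpoints nginxpubic
kubectl describe svc nginxpubic | grep -i endpoints
```

An empty Endpoints field is a common sign that the service's targetPort doesn't match any container port.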
We have a requirement to connect from a pod in GKE to a service running on a VM on its internal IP address.
The K8s cluster and the VM are on different networks, so we set up VPC peering between them.
To point to an external IP, we applied a service without a selector, as discussed here:
https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
The POD should connect to the internal IP of the VM through this service, the service and endpoint description is:
kubectl describe svc vm-proxy
Name: vm-proxy
Namespace: test-environment
Labels: <none>
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.59.251.146
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.164.0.10:8080
Session Affinity: None
Events: <none>
Here the endpoint is the internal IP of the VM, and the service IP is allocated by Kubernetes.
The pod simply opens an HTTP connection to the IP of the service, but the connection is refused (eventually timing out).
The use case is pretty straightforward and is documented in the k8s documentation, which gives the example of connecting to a DB running on a VM. However, it doesn't work in our case, and we are not sure if our setup is wrong or if this is simply not possible using the internal IP of a VM.
I reproduced your issue and it worked fine for me. This is what I did:
Created 2 networks (one of them (demo) on 172.16.0.0/16; the other is my default network, on 10.132.0.0/20).
Set up VPC peering.
Created a VM in the demo network. It got assigned 172.16.0.2.
Created the service as you described (with the endpoint pointing to 172.16.0.2).
curl'd from the pod to the service IP and got a 200!
If the steps are right, but your configuration is not working, I'd like to know your network IP ranges. Both of them.
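For reference, the selector-less service plus manual Endpoints described above can be sketched as manifests like these (names and values taken from the kubectl describe output in the question):

```yaml
# Selector-less Service: Kubernetes will not manage its Endpoints,
# so a matching Endpoints object must be created by hand.
apiVersion: v1
kind: Service
metadata:
  name: vm-proxy
  namespace: test-environment
spec:
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: vm-proxy            # must match the Service name exactly
  namespace: test-environment
subsets:
- addresses:
  - ip: 10.164.0.10         # internal IP of the VM across the peering
  ports:
  - port: 8080
```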
We have set up OpenShift Origin on AWS using this handy guide. Our eventual
hope is to have some pods running REST or similar services that we can access
for development purposes. Thus, we don't need DNS or anything like that at this
point, just a public IP with open ports that points to one of our running pods.
Our first proof of concept is trying to get a jenkins (or even just httpd!) pod
that's running inside OpenShift to be exposed via an allocated Elastic IP.
I'm not a network engineer by any stretch, but I was able to successfully get
an Elastic IP connected to one of my OpenShift "worker" instances, which I
tested by sshing to the public IP allocated to the Elastic IP. At this point
we're struggling to figure out how to make a pod visible at that allocated Elastic IP,
however. We've tried a kubernetes LoadBalancer service, a kubernetes Ingress,
and configuring an AWS Network Load Balancer, all without being able to
successfully connect to 18.2XX.YYY.ZZZ:8080 (my public IP).
The most promising attempt was using oc port-forward, which seemed to get at least
part way through but frustratingly hangs without returning:
$ oc port-forward --loglevel=7 jenkins-2-c1hq2 8080 -n my-project
I0222 19:20:47.708145 73184 loader.go:354] Config loaded from file /home/username/.kube/config
I0222 19:20:47.708979 73184 round_trippers.go:383] GET https://ec2-18-2AA-BBB-CCC.us-east-2.compute.amazonaws.com:8443/api/v1/namespaces/my-project/pods/jenkins-2-c1hq2
....
I0222 19:20:47.758306 73184 round_trippers.go:390] Request Headers:
I0222 19:20:47.758311 73184 round_trippers.go:393] X-Stream-Protocol-Version: portforward.k8s.io
I0222 19:20:47.758316 73184 round_trippers.go:393] User-Agent: oc/v1.6.1+5115d708d7 (linux/amd64) kubernetes/fff65cf
I0222 19:20:47.758321 73184 round_trippers.go:393] Authorization: Bearer Pqg7xP_sawaeqB2ub17MyuWyFnwdFZC5Ny1f122iKh8
I0222 19:20:47.800941 73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
I0222 19:20:47.800963 73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
( oc port-forward hangs at this point and never returns)
We've found a lot of information about how to get this working under GKE, but
nothing that's really helpful for getting this working for OpenShift Origin on
AWS. Any ideas?
Update:
So we realized that sysdig.com's blog post on deploying OpenShift Origin on AWS was missing some key AWS setup information, so based on OpenShift Origin's Configuring AWS page, we set the following env variables and re-ran the ansible playbook:
$ export AWS_ACCESS_KEY_ID='AKIASTUFF'
$ export AWS_SECRET_ACCESS_KEY='STUFF'
$ export ec2_vpc_subnet='my_vpc_subnet'
$ ansible-playbook -c paramiko -i hosts openshift-ansible/playbooks/byo/config.yml --key-file ~/.ssh/my-aws-stack
I think this gets us closer, but creating a load-balancer service now gives us an always-pending IP:
$ oc get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-lb 172.30.XX.YYY <pending> 8080:31338/TCP 12h
The section on AWS Applying Configuration Changes seems to imply I need to use AWS Instance IDs rather than hostnames to identify my nodes, but I tried this and OpenShift Origin fails to start if I use that method. Still at a loss.
It may not satisfy the "Elastic IP" part, but how about using the AWS cloud provider's ELB to expose the pod's IP/port via a service of type LoadBalancer?
Make sure to configure the AWS cloud provider for the cluster (References)
Create a svc to the pod(s) with type LoadBalancer.
For instance to expose a Dashboard via AWS ELB.
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
type: LoadBalancer <-----
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
Then the svc will be exposed as an ELB and the pod can be accessed via the ELB public DNS name a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com.
$ kubectl (oc) get svc kubernetes-dashboard -n kube-system -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes-dashboard LoadBalancer 10.100.96.203 a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com 443:31636/TCP 16m k8s-app=kubernetes-dashboard
References
K8S AWS Cloud Provider Notes
Reference Architecture OpenShift Container Platform on Amazon Web Services
DEPLOYING OPENSHIFT CONTAINER PLATFORM 3.5 ON AMAZON WEB SERVICES
Configuring for AWS
Check this guide out: https://github.com/dwmkerr/terraform-aws-openshift
It's got some significant advantages vs. the one you're referring to in your post. Additionally, it has a clear terraform spec that you can modify and reset to using an Elastic IP (I haven't tried it myself, but it should work).
Another way to "lock" your access to the installation is to re-code the assignment of the public URL to the master instance in the terraform script, e.g., to a domain that you own (the default script sets it to an external IP-based value with "xip.io" appended, which works great for testing). Then set up a basic ALB that forwards HTTPS 443 and 8443 to the master instance that the install creates (you can do this manually after the install is completed; you'll also need a second dummy subnet, and dummy-up the health check as well), and link the ALB to your domain via Route53. You can even use free Route53 wildcard certs with this approach.