kubernetes LoadBalancer service - amazon-web-services

Trying to teach myself how to use Kubernetes, and having some issues.
I was able to set up a cluster, deploy the nginx image and then access nginx using a service of type NodePort (once I added the port to the security group inbound rules of the node).
My next step was to try to use a service of type LoadBalancer to try to access nginx.
I set up a new cluster and deployed the nginx image.
kubectl \
create deployment my-nginx-deployment \
--image=nginx
I then set up the service for the LoadBalancer
kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=8080 --name=nginxpubic
Once it was done setting up, I tried to access nginx using the LoadBalancer Ingress (which I found by describing the LoadBalancer service). I received a "This page isn't working" error.
Not really sure where I went wrong.
results of kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 7h
nginxpubic LoadBalancer 100.71.37.139 a5396ba70d45d11e88f290658e70719d-1485253166.us-west-2.elb.amazonaws.com 80:31402/TCP 7h

From the nginx Docker Hub page, I see that the container is using port 80.
https://hub.docker.com/_/nginx/
It should be like this:
kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=80 --name=nginxpubic
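For reference, here is a roughly equivalent Service manifest you could apply instead of kubectl expose (a minimal sketch; the selector assumes the app=my-nginx-deployment label that kubectl create deployment sets):

apiVersion: v1
kind: Service
metadata:
  name: nginxpubic
spec:
  type: LoadBalancer
  selector:
    app: my-nginx-deployment    # label created by kubectl create deployment
  ports:
    - port: 80          # port the ELB listens on
      targetPort: 80    # port the nginx container actually serves

The key point is that targetPort must match the port the container listens on, which is 80 for the stock nginx image.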
Also, make sure the service type LoadBalancer is available in your environment.
Known Issues for minikube installation
Features that require a Cloud Provider will not work in Minikube. These include:
LoadBalancers
Features that require multiple nodes. These include:
Advanced scheduling policies

Related

External DNS + Ingress Nginx + AWS ALB

I got the following setup:
Ingress-Nginx-Controller (serviceType "NodePort")
AWS-Load-Balancer-Controller
External-DNS
I am exposing the Ingress-Nginx-Controller via an Ingress backed by the AWS Load Balancer Controller, both public and private. I chose this route since it was pretty easy to limit the inbound CIDRs, and the nginx ingress controller cannot create an ALB, only a Classic LB or NLB.
kubectl -n ingress-nginx get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
alb-ingress-connect-nginx alb * xxxx.region.elb.amazonaws.com 80 2d8h
This ingress forwards all traffic to my nginx controller.
The service looks like
kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort a.b.c.d
I am trying to automatically set up DNS records for my deployment via External-DNS. Therefore, I am creating an ingress for my deployment with ingress-class nginx and a specified hostname.
Creating the records works; however, it uses the IP of my ingress-nginx-controller service (a.b.c.d) instead of the load balancer's address.
Now my question: Is it possible for external-dns to look up the address of the nginx ingress, or does this only work if Nginx is exposed as a service of type "LoadBalancer"?
Thanks for any help
I was able to figure this out by using --publish-status-address on the nginx controller to point to the ALB.
If you are using 2 ALBs (public and private), you need to create 2 nginx controllers, each with --publish-status-address pointing to its ALB. Also, remember to disable the --publish-service parameter, and use a different electionID for each controller if you installed the nginx controllers using Helm.
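A hedged sketch of what that could look like as ingress-nginx Helm chart values (the ALB DNS name, election ID, and ingress class name below are placeholders, not from the original setup; the private controller release would mirror this with its own values):

# values-public.yaml for the "public" controller release
controller:
  publishService:
    enabled: false                 # turn off --publish-service
  extraArgs:
    publish-status-address: "public-alb-xxxx.region.elb.amazonaws.com"   # placeholder ALB DNS name
  electionID: ingress-controller-leader-public    # must differ between the two controllers
  ingressClassResource:
    name: nginx-public             # separate class so each Ingress targets the right controller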

JupyterHub proxy-public svc has no external IP (stuck in <pending>)

I am using helm to deploy JupyterHub (version 0.8.2) to kubernetes (AWS managed kubernetes "EKS"). I have a helm config to describe the proxy-public service, with an AWS elastic load balancer:
proxy:
  secretToken: ""
  https:
    enabled: true
    type: offload
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ...
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '1801'
Problem: When I deploy JupyterHub to EKS via helm:
helm upgrade --install jhub jupyterhub/jupyterhub --namespace jhub --version=0.8.2 --values config.yaml
The proxy-public svc never gets an external IP. It is stuck in the pending state:
> kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hub ClusterIP 172.20.241.23 <none> 8081/TCP 15m
proxy-api ClusterIP 172.20.170.189 <none> 8001/TCP 15m
proxy-public LoadBalancer 172.20.72.196 <pending> 80:31958/TCP,443:30470/TCP 15m
I did kubectl describe svc proxy-public and kubectl get events and there does not appear to be anything out of the ordinary. No errors.
The problem turned out to be the fact that I had mistakenly put the kubernetes cluster (and control plane) in private subnets only, thus making it impossible for the ELB to get an external IP.
If you instead want to keep the cluster in private subnets, you will need another annotation like this so the AWS classic load balancer is created as an internal one:
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
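Related to the subnet issue above: for the cloud provider to place an ELB at all, the subnets generally need the standard Kubernetes tags. As a hedged reminder (replace <cluster-name> with your EKS cluster name):

# public subnets (internet-facing load balancers)
kubernetes.io/cluster/<cluster-name> = shared   # or "owned"
kubernetes.io/role/elb = 1

# private subnets (internal load balancers)
kubernetes.io/cluster/<cluster-name> = shared
kubernetes.io/role/internal-elb = 1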
Deploying JupyterHub on kubernetes can be overkill sometimes if all you want is a JupyterHub that is accessible over the internet for you or your team. Instead of doing the complicated kubernetes setup, you can set up a VM in AWS or any other cloud and have JupyterHub installed and running as a service.
In fact, there is already a VM setup available on AWS, GCP and Azure which can be used to spin up your JupyterHub VM that will be accessible on a public IP and support single or multi-user sessions in just a few clicks. Details are below if you want to try it out:
Setup on GCP
Setup on AWS
Setup on Azure

How to call spring api from frontend in kubernetes

I am trying to create a Kubernetes application in which I have created one pod and service for the backend (a Spring Boot microservice), plus a frontend pod and a LoadBalancer service.
I wanted to know how I would call the backend API from the frontend pod in Kubernetes.
Here are the running services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
angular LoadBalancer 10.100.15.215 a17f17fd2e25011e886100a0e002191e-1613530232.us-east-1.elb.amazonaws.com 4200:30126/TCP 12s app=angular
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 35m <none>
login ClusterIP 10.100.99.52 <none> 5555/TCP 13m app=login,tier=backend
I am calling the following API from the frontend and it is showing a "name not resolved" error:
http://login/login
I have also tried to call the API with the cluster IP, but that failed.
Looks like your backend service is running on port 5555, so you would have to call your backend service like this:
http://login:5555/login
This assumes the pods for your frontend are in the same Kubernetes namespace. If they are in a different namespace, you would call something like this:
http://login.<namespace>.svc.cluster.local:5555/login
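A quick way to verify the service DNS name and port from inside the cluster is to run a throwaway pod (a sketch; the pod name and image are arbitrary choices, not part of the original setup):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -v http://login:5555/login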
Also as described here.
Note that this will work only within the cluster, if you are hitting your Angular frontend from a web browser outside of the cluster, this will not work, because the web browser would have no idea of where your backend is in the cluster. So either you will have to expose your backend using another LoadBalancer type of service or you may consider using a Kubernetes Ingress with an ingress controller.
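If you go the Ingress route, a minimal sketch might look like the following (the paths, ingress name, and ingress class are assumptions; the service names and ports come from the kubectl get svc output above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress                # hypothetical name
spec:
  ingressClassName: nginx          # assumes an nginx ingress controller is installed
  rules:
    - http:
        paths:
          - path: /login
            pathType: Prefix
            backend:
              service:
                name: login
                port:
                  number: 5555
          - path: /
            pathType: Prefix
            backend:
              service:
                name: angular
                port:
                  number: 4200

With this, the browser only ever talks to the ingress controller's address, and the controller proxies /login to the backend inside the cluster.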
Does your angular application directly access the login service? If that's the case, it's normal that you will not be able to get through, because the login service is using ClusterIP, which means the IP is reachable from within the cluster only. You can use the LoadBalancer type like you did for your "angular" application.

NodePort not working with AWS EKS server endpoint

My EKS server endpoint is xxxxxxxxxxx.xxx.eks.amazonaws.com, and I've created a yml file with a deployment and a service object.
[ec2-user@ip-]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
fakeserver NodePort 10.100.235.246 <none> 6311:30002/TCP 1h
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 1d
When I browse xxxxxxxxxxx.xxx.eks.amazonaws.com:30002, the page takes too long to respond. The security groups allow all traffic in their inbound rules.
You should be using your worker node's IP (one of the nodes, if you have more than one), not the EKS server endpoint. The EKS server endpoint is the control plane, meant to process API requests such as creating/deleting pods, etc.
You also need to make sure that the security group of your nodes allows the traffic.
With this in place you should be able to make the request to your NodePort service.
For Example:
http://your-workernodeIp:NodePortNumber
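To find a worker node address to use in place of your-workernodeIp, you can list the nodes along with their IPs:

kubectl get nodes -o wide    # shows the INTERNAL-IP and EXTERNAL-IP columns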
For a temporary solution
You can run kubectl port-forward to redirect the port to your local machine and access it at https://localhost:30002.
Remember: the kubectl port-forward command binds to the address 127.0.0.1 only, which means you can't reach the forwarded port from outside the machine, so you have to run it locally.
$ kubectl port-forward $(kubectl get pod -l "app=fakeserver" -o jsonpath={.items[0].metadata.name}) 30002:6311   # local 30002 -> container port 6311
Access via loadbalancer
If you need to access it permanently, you need to change the service type to LoadBalancer, then access the service via its load balancer URL, or you can define another Route53 DNS record that points to the load balancer.
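Switching the existing service over can be done with a one-line patch (a sketch; this assumes the service is named fakeserver as in the output above):

kubectl patch svc fakeserver -p '{"spec": {"type": "LoadBalancer"}}'
kubectl get svc fakeserver    # EXTERNAL-IP will show the ELB DNS name once provisioned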
The service that you have created is of type NodePort. Did you try with <node-ip>:30002?
If it also returns the same error, then it's an issue with your deployment.
Check the port exposed on the container and the target port. They should be the same.
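One quick way to compare the two (assuming the deployment is also named fakeserver; adjust the names to match your yml file):

kubectl get svc fakeserver -o jsonpath='{.spec.ports[0].targetPort}'
kubectl get deployment fakeserver -o jsonpath='{.spec.template.spec.containers[0].ports[0].containerPort}'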

How to configure an AWS Elastic IP to point to an OpenShift Origin running pod?

We have set up OpenShift Origin on AWS using this handy guide. Our eventual
hope is to have some pods running REST or similar services that we can access
for development purposes. Thus, we don't need DNS or anything like that at this
point, just a public IP with open ports that points to one of our running pods.
Our first proof of concept is trying to get a jenkins (or even just httpd!) pod
that's running inside OpenShift to be exposed via an allocated Elastic IP.
I'm not a network engineer by any stretch, but I was able to successfully get
an Elastic IP connected to one of my OpenShift "worker" instances, which I
tested by sshing to the public IP allocated to the Elastic IP. At this point
we're struggling to figure out how to make a pod visible via that allocated Elastic IP,
however. We've tried a kubernetes LoadBalancer service, a kubernetes Ingress,
and configuring an AWS Network Load Balancer, all without being able to
successfully connect to 18.2XX.YYY.ZZZ:8080 (my public IP).
The most promising attempt was using oc port-forward, which seemed to get at least
part way there but frustratingly hangs without returning:
$ oc port-forward --loglevel=7 jenkins-2-c1hq2 8080 -n my-project
I0222 19:20:47.708145 73184 loader.go:354] Config loaded from file /home/username/.kube/config
I0222 19:20:47.708979 73184 round_trippers.go:383] GET https://ec2-18-2AA-BBB-CCC.us-east-2.compute.amazonaws.com:8443/api/v1/namespaces/my-project/pods/jenkins-2-c1hq2
....
I0222 19:20:47.758306 73184 round_trippers.go:390] Request Headers:
I0222 19:20:47.758311 73184 round_trippers.go:393] X-Stream-Protocol-Version: portforward.k8s.io
I0222 19:20:47.758316 73184 round_trippers.go:393] User-Agent: oc/v1.6.1+5115d708d7 (linux/amd64) kubernetes/fff65cf
I0222 19:20:47.758321 73184 round_trippers.go:393] Authorization: Bearer Pqg7xP_sawaeqB2ub17MyuWyFnwdFZC5Ny1f122iKh8
I0222 19:20:47.800941 73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
I0222 19:20:47.800963 73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
( oc port-forward hangs at this point and never returns)
We've found a lot of information about how to get this working under GKE, but
nothing that's really helpful for getting this working for OpenShift Origin on
AWS. Any ideas?
Update:
So we realized that sysdig.com's blog post on deploying OpenShift Origin on AWS was missing some key AWS setup information, so based on OpenShift Origin's Configuring AWS page, we set the following env variables and re-ran the ansible playbook:
$ export AWS_ACCESS_KEY_ID='AKIASTUFF'
$ export AWS_SECRET_ACCESS_KEY='STUFF'
$ export ec2_vpc_subnet='my_vpc_subnet'
$ ansible-playbook -c paramiko -i hosts openshift-ansible/playbooks/byo/config.yml --key-file ~/.ssh/my-aws-stack
I think this gets us closer, but creating a load-balancer service now gives us an always-pending IP:
$ oc get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-lb 172.30.XX.YYY <pending> 8080:31338/TCP 12h
The section on AWS Applying Configuration Changes seems to imply I need to use AWS Instance IDs rather than hostnames to identify my nodes, but I tried this and OpenShift Origin fails to start if I use that method. Still at a loss.
It may not satisfy the "Elastic IP" part, but how about using an AWS cloud provider ELB to expose the IP/port of the pod via a Service with the LoadBalancer option?
Make sure to configure the AWS cloud provider for the cluster (see the References below and the inventory sketch after this list).
Create a svc to the pod(s) with type LoadBalancer.
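For the first step, a hedged sketch of the relevant openshift-ansible inventory settings (variable names as documented in the "Configuring for AWS" reference below; the credentials are placeholders):

[OSEv3:vars]
openshift_cloudprovider_kind=aws
openshift_cloudprovider_aws_access_key=AKIA...    # placeholder
openshift_cloudprovider_aws_secret_key=...        # placeholder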
For instance, to expose a Dashboard via an AWS ELB:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: LoadBalancer    # <-----
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Then the svc will be exposed as an ELB and the pod can be accessed via the ELB public DNS name a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com.
$ kubectl (oc) get svc kubernetes-dashboard -n kube-system -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes-dashboard LoadBalancer 10.100.96.203 a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com 443:31636/TCP 16m k8s-app=kubernetes-dashboard
References
K8S AWS Cloud Provider Notes
Reference Architecture OpenShift Container Platform on Amazon Web Services
DEPLOYING OPENSHIFT CONTAINER PLATFORM 3.5 ON AMAZON WEB SERVICES
Configuring for AWS
Check this guide out: https://github.com/dwmkerr/terraform-aws-openshift
It's got some significant advantages vs. the one you are referring to in your post. Additionally, it has a clear terraform spec that you can modify and reset to using an Elastic IP (haven't tried it myself, but it should work).
Another way to "lock" your access to the installation is to re-code the assignment of the public URL to the master instance in the terraform script, e.g., to a domain that you own (the default script sets it to an external IP-based value with "xip.io" appended, which works great for testing). Then set up a basic ALB that forwards HTTPS 443 and 8443 to the master instance that the install creates (you can do this manually after the install is completed; you will also need a second dummy subnet, and dummy up the health check as well), and link the ALB to your domain via Route53. You can even use free Route53 wildcard certs with this approach.