I have set up a Kubernetes cluster and dashboard on two EC2 instances but I'm not able to access the Kubernetes dashboard UI in a browser - amazon-web-services

I have set up a Kubernetes (1.9) cluster on two EC2 servers (Ubuntu 16.04) and have installed the dashboard. The cluster is working fine and I get output when I do curl localhost:8001 on the master machine, but I'm not able to access the UI for the Kubernetes dashboard in my laptop's browser at masternode_public_ip:8001 (screenshot: master-machine-output).
This is what my security group looks like (screenshot: security group); it contains my machine's IP.
Both the master and slave nodes are in the Ready state.
I know there are a lot of other ways to deploy an application on a Kubernetes cluster, however I want to explore this particular option for POC purposes.
I need to access the Kubernetes dashboard UI and the nginx application which is deployed on this cluster.
So, my question: is there something else I need to add to my security group,
or is it because I need to do something more on my master machine?
Also, it would be great if someone could shed some light on private and public IPs, which one should be used to access the application, and how they are related.
Here is the screenshot of the deployment details (describe deployment).

This is an extensive topic, ranging from Kubernetes Services (NodePort or LoadBalancer in this case) to Ingress Controllers and such. But there is a simple, quick and clean way to access your dashboard without all that.
Use either kubectl proxy or kubectl port-forward to access the dashboard via the embedded kube-apiserver proxy, or by forwarding directly from localhost to the Pod itself.
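For example, a minimal sketch; the namespace, service name, and container port below are the common defaults for the dashboard and may differ in your install:

# Option 1: expose the apiserver locally and go through its proxy
kubectl proxy
# then browse to:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

# Option 2: forward a local port directly to the dashboard Pod
kubectl -n kube-system get pods | grep kubernetes-dashboard   # find the Pod name
kubectl -n kube-system port-forward <dashboard-pod-name> 8443:8443
# then browse to https://localhost:8443 (older dashboards listen on 9090/HTTP instead)

Note that kubectl proxy binds only to localhost, which is why masternode_public_ip:8001 does not work from the laptop; either run kubectl proxy on the laptop itself (with a kubeconfig pointing at the cluster) or use port-forward as above.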

Found out the answer.
Sorry for the delayed reply.
I was trying to access the web application through its container's port, but in Kubernetes there is the concept of a NodePort: if your container is running on port 8080, the Service will expose it on a node port in the 30000-32767 range.
All you need to do is add a Service definition alongside your deployment file
and expose the service:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world   # assumed to match the Pod labels of your Deployment
  ports:
  - port: 8080
    nodePort: 30001
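Then, assuming the manifest above is saved as hello-svc.yaml, apply it and open the node port in the security group:

kubectl apply -f hello-svc.yaml    # create the NodePort Service
kubectl get svc hello-svc          # should show 8080:30001/TCP
# browse to http://<node_public_ip>:30001
# (TCP 30001 must be allowed in the EC2 security group)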

Related

Using existing load balancer for K8S service

I have a simple app that I need to deploy in K8S (running on AWS EKS) and expose it to the outside world.
I know that I can add a service with the type LoadBalancer and voilà, K8S will create an AWS load balancer for me.
spec:
  type: LoadBalancer
However, the issue is that it will create a new LB.
The main reason why this is an issue for me is that I am trying to separate infrastructure creation/upgrades from software deployment/upgrades. All of my infrastructure will be managed by Terraform and all of my software will be defined via K8S YAML files (maybe Helm in the future).
And the creation of a load balancer (infrastructure) breaks this model.
Two questions:
Do I understand correctly that you can't change this behavior (create vs. use existing)?
I read multiple articles about K8S and all of them lead me into the direction of Ingress + Ingress Controller. Is this the way to solve this problem?
I am hesitant to go in this direction. There are tons of steps to get it working and it will take time for me to figure out how to retrofit it into Terraform and K8S YAML files.
Short answer: you can only change it to NodePort and couple the existing LB manually, by registering the EKS nodes with the right exposed port, like:
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
    nodePort: 30080
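For example, a rough sketch of coupling the nodes to an existing target group with the AWS CLI (the target group ARN and instance IDs are placeholders):

aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/existing-tg/abc123 \
  --targets Id=i-0123456789abcdef0,Port=30080 Id=i-0fedcba9876543210,Port=30080
# the nodes' security group must allow the LB to reach TCP 30080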
But attaching it natively is not supported by the AWS Kubernetes controller yet, and may not become a priority to support, because of:
Configuration: controllers get their configuration from Kubernetes ConfigMaps or special CustomResourceDefinitions (CRDs), which would conflict with any manual configuration on the already existing LB and may lead to wiping existing configs, since they are not tracked in the config source.
Q: Direct expose or overlay ingress?
Note: use an ingress (Nginx or AWS ALB ingress controller) if you have more than one service to expose, or if you need to add controls on the exposed APIs.
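For reference, a minimal Ingress sketch routing two hypothetical services behind one controller (hosts and service names are placeholders; the networking.k8s.io/v1 schema assumes a reasonably recent cluster):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-svc      # placeholder backend Service
            port:
              number: 80
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc      # placeholder backend Service
            port:
              number: 80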

Split Horizon DNS in local Minikube cluster and AWS

Initially, I deployed my frontend web application and all the backend APIs in AWS ECS; each of the backend APIs has a Route53 record, and the frontend is connected to these APIs via the .env file. Now, I would like to migrate from ECS to EKS, and I am trying to deploy all these applications in a local Minikube cluster. I would like to keep the .env in my frontend application unchanged (using the same URLs for all the environment variables); the application should first look for the backend API inside the local cluster through service discovery, and if the backend API doesn't exist in the cluster, it should connect to the external service, which is the API deployed in ECS. In short: first local (Minikube cluster), then external (AWS). How do I implement this in Kubernetes?
http://backendapi.learning.com --> backend API deployed in the pod --> if not present --> backend API deployed in ECS
.env
BACKEND_API_URL = http://backendapi.learning.com
One example from the code in which the frontend calls the backend API:
export const ping = async _ => {
  const res = await fetch(`${process.env.BACKEND_API_URL}/ping`);
  const json = await res.json();
  return json;
}
Assuming that your setup is:
based on a microservices architecture;
the applications deployed in the Kubernetes cluster (frontend and backend) are Dockerized;
the applications are capable of running on top of Kubernetes;
etc.
You can configure your Kubernetes cluster (minikube instance) to relay your request to different locations by using Services.
Service
In Kubernetes terminology "Service" is an abstract way to expose an application running on a set of Pods as a network service.
Some of the Service types are the following:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
You can use a headless Service with selectors and dnsConfig (in the Deployment manifest) to achieve the setup referenced in your question.
Let me explain more:
Example
Let's assume that you have a backend:
nginx-one - located inside and outside
Your frontend manifest, in its most basic form, should look like the following:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command:
        - sleep
        - "infinity"
      dnsConfig: # <--- IMPORTANT
        searches:
        - DOMAIN.NAME
Taking a closer look at:
dnsConfig: # <--- IMPORTANT
  searches:
  - DOMAIN.NAME
Dissecting the above:
dnsConfig - the dnsConfig field is optional and it can work with any dnsPolicy settings. However, when a Pod's dnsPolicy is set to "None", the dnsConfig field has to be specified.
searches: a list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided list will be merged into the base search domain names generated from the chosen DNS policy. Duplicate domain names are removed. Kubernetes allows for at most 6 search domains.
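In practice the extra domain just gets appended to the Pod's /etc/resolv.conf search list. A sketch of what that could look like, assuming the default cluster domain and a cluster DNS Service at 10.96.0.10:

# /etc/resolv.conf inside the frontend Pod (illustrative)
search default.svc.cluster.local svc.cluster.local cluster.local DOMAIN.NAME
nameserver 10.96.0.10
options ndots:5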
As for the Services for your backends:
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-one
spec:
  clusterIP: None # <-- IMPORTANT
  selector:
    app: nginx-one
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
The above Service will tell your frontend that one of your backends (nginx) is available through a headless Service (why it is headless will come in handy later!). By default you can communicate with it via:
service-name (nginx-one)
service-name.namespace.svc.cluster.local (nginx-one.default.svc.cluster.local) - only locally
Connecting to your backend
Assuming that you are sending the request using curl (for simplicity) from the frontend to the backend, the DNS resolution will happen in a specific order:
check the DNS record inside the cluster
check the DNS record specified in dnsConfig
The specifics of connecting to your backend are as follows:
If the Pod with your backend is available in the cluster, the DNS resolution will point to the Pod's IP (not the ClusterIP).
If the backend Pod is not available in the cluster for whatever reason, the DNS resolution will first check the internal records and then fall back to DOMAIN.NAME from the dnsConfig (outside of minikube).
If there is no Service associated with the specific backend (nginx-one), the DNS resolution will use DOMAIN.NAME from the dnsConfig, searching for it outside of the cluster.
A side note!
The headless Service with a selector comes into play here, as its intention is to point directly to the Pod's IP and not a ClusterIP (which exists as long as the Service exists). With a "normal" Service you would always try to communicate with the ClusterIP, even if there are no Pods matching the selector. With a headless one, if there is no Pod, the DNS resolution looks further down the line (to external sources).
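To check which way a name resolves, you could exec into the frontend Pod and query it directly (getent is available in the ubuntu image used above; the Pod name is a placeholder):

kubectl exec -it <frontend-pod-name> -- getent hosts nginx-one
# resolves to the backend Pod IP while the headless Service has ready Pods;
# once they are gone, resolution falls through the search list to DOMAIN.NAME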
Additional resources:
Minikube.sigs.k8s.io: Docs: Start
Aws.amazon.com: Blogs: Compute: Enabling dns resolution for amazon eks cluster endpoints
EDIT:
You could also take a look at alternative options:
Alternative option 1:
Use the rewrite plugin in CoreDNS to rewrite DNS queries for backendapi.learning.com to backendapi.default.svc.cluster.local (see the sketch below).
Alternative option 2:
Add hostAliases to the frontend Pod.
You can also use ConfigMaps to re-use .env files.
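A rough sketch of alternative option 1, as an edit to the Corefile in the coredns ConfigMap (your existing Corefile will differ; the rewrite line is the only addition):

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        rewrite name backendapi.learning.com backendapi.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
    }

And a sketch of alternative option 2, hostAliases in the frontend Pod spec (the IP is a placeholder for the external ECS endpoint):

spec:
  hostAliases:
  - ip: "203.0.113.10"            # placeholder: resolved IP of the ECS-hosted API
    hostnames:
    - "backendapi.learning.com"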

How to expose a Kubernetes service on AWS using `service.spec.externalIPs` and not `--type=LoadBalancer`?

I've deployed a Kubernetes cluster on AWS using kops and I'm able to expose my pods using a service with --type=LoadBalancer:
kubectl run sample-nginx --image=nginx --replicas=2 --port=80
kubectl expose deployment sample-nginx --port=80 --type=LoadBalancer
However, I cannot get it to work by specifying service.spec.externalIPs with the public IP of my master node.
I've allowed ingress traffic on the specified port and used https://kubernetes.io/docs/concepts/services-networking/service/#external-ips as documentation.
Can anyone clarify how to expose a service on AWS without using the cloud provider's native load balancer?
If you want to avoid using a LoadBalancer then you can use the NodePort type of service.
NodePort exposes the service on each Node's IP at a static port (the NodePort).
A ClusterIP Service, to which the NodePort Service routes, is created automatically. You will be able to reach the NodePort Service from outside the cluster by requesting:
<NodeIP>:<NodePort>
That means that if you access any node on that port you will be able to reach your service. It is worth remembering that NodePorts are high-numbered ports (30000-32767).
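Using the sample-nginx deployment from the question, a quick sketch would be:

kubectl expose deployment sample-nginx --port=80 --type=NodePort
kubectl get svc sample-nginx     # note the assigned node port, e.g. 80:3XXXX/TCP
curl http://<NodeIP>:<assigned-node-port>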
Coming back specifically to AWS, here is their official document on how to expose services, with NodePort explained.
Do note the very important information there about enabling the ports:
Note: Before you access NodeIP:NodePort from outside the cluster, you must update the security group of the nodes to allow
incoming traffic through your service port.
Let me know if this helps.

How to configure an AWS Elastic IP to point to an OpenShift Origin running pod?

We have set up OpenShift Origin on AWS using this handy guide. Our eventual
hope is to have some pods running REST or similar services that we can access
for development purposes. Thus, we don't need DNS or anything like that at this
point, just a public IP with open ports that points to one of our running pods.
Our first proof of concept is trying to get a jenkins (or even just httpd!) pod
that's running inside OpenShift to be exposed via an allocated Elastic IP.
I'm not a network engineer by any stretch, but I was able to successfully get
an Elastic IP connected to one of my OpenShift "worker" instances, which I
tested by sshing to the public IP allocated to the Elastic IP. At this point
we're struggling to figure out how to make a pod visible via that allocated Elastic IP,
however. We've tried a kubernetes LoadBalancer service, a kubernetes Ingress,
and configuring an AWS Network Load Balancer, all without being able to
successfully connect to 18.2XX.YYY.ZZZ:8080 (my public IP).
The most promising attempt was using oc port-forward, which seemed to get at least part of the
way there, but frustratingly hangs without returning:
$ oc port-forward --loglevel=7 jenkins-2-c1hq2 8080 -n my-project
I0222 19:20:47.708145 73184 loader.go:354] Config loaded from file /home/username/.kube/config
I0222 19:20:47.708979 73184 round_trippers.go:383] GET https://ec2-18-2AA-BBB-CCC.us-east-2.compute.amazonaws.com:8443/api/v1/namespaces/my-project/pods/jenkins-2-c1hq2
....
I0222 19:20:47.758306 73184 round_trippers.go:390] Request Headers:
I0222 19:20:47.758311 73184 round_trippers.go:393] X-Stream-Protocol-Version: portforward.k8s.io
I0222 19:20:47.758316 73184 round_trippers.go:393] User-Agent: oc/v1.6.1+5115d708d7 (linux/amd64) kubernetes/fff65cf
I0222 19:20:47.758321 73184 round_trippers.go:393] Authorization: Bearer Pqg7xP_sawaeqB2ub17MyuWyFnwdFZC5Ny1f122iKh8
I0222 19:20:47.800941 73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
I0222 19:20:47.800963 73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
( oc port-forward hangs at this point and never returns)
We've found a lot of information about how to get this working under GKE, but
nothing that's really helpful for getting this working for OpenShift Origin on
AWS. Any ideas?
Update:
So we realized that sysdig.com's blog post on deploying OpenShift Origin on AWS was missing some key AWS setup information, so based on OpenShift Origin's Configuring AWS page, we set the following env variables and re-ran the ansible playbook:
$ export AWS_ACCESS_KEY_ID='AKIASTUFF'
$ export AWS_SECRET_ACCESS_KEY='STUFF'
$ export ec2_vpc_subnet='my_vpc_subnet'
$ ansible-playbook -c paramiko -i hosts openshift-ansible/playbooks/byo/config.yml --key-file ~/.ssh/my-aws-stack
I think this gets us closer, but creating a load-balancer service now gives us an always-pending IP:
$ oc get services
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
jenkins-lb   172.30.XX.YYY   <pending>     8080:31338/TCP   12h
The section on AWS Applying Configuration Changes seems to imply I need to use AWS Instance IDs rather than hostnames to identify my nodes, but I tried this and OpenShift Origin fails to start if I use that method. Still at a loss.
It may not satisfy the "Elastic IP" part, but how about using the AWS cloud provider ELB to expose the pod's IP/port via a Service with the LoadBalancer option?
Make sure to configure the AWS cloud provider for the cluster (see References).
Create a Service for the pod(s) with type LoadBalancer.
For instance, to expose a Dashboard via an AWS ELB:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: LoadBalancer # <-----
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Then the svc will be exposed as an ELB and the pod can be accessed via the ELB public DNS name, e.g. a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com.
$ kubectl (oc) get svc kubernetes-dashboard -n kube-system -o wide
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)         AGE   SELECTOR
kubernetes-dashboard   LoadBalancer   10.100.96.203   a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com   443:31636/TCP   16m   k8s-app=kubernetes-dashboard
References
K8S AWS Cloud Provider Notes
Reference Architecture OpenShift Container Platform on Amazon Web Services
DEPLOYING OPENSHIFT CONTAINER PLATFORM 3.5 ON AMAZON WEB SERVICES
Configuring for AWS
Check this guide out: https://github.com/dwmkerr/terraform-aws-openshift
It's got some significant advantages vs. the one you're referring to in your post. Additionally, it has a clear Terraform spec that you can modify and re-point to an Elastic IP (haven't tried it myself, but it should work).
Another way to "lock" your access to the installation is to re-code the assignment of the public URL to the master instance in the Terraform script, e.g. to a domain that you own (the default script sets it to an external IP-based value with "xip.io" appended, which works great for testing). Then set up a basic ALB that forwards HTTPS 443 and 8443 to the master instance that the install creates (you can do this manually after the install is completed; you also need a second dummy subnet, and you'll have to dummy-up the health check as well), and link the ALB to your domain via Route53. You can even use free Route53 wildcard certs with this approach.
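As a rough sketch, the manual ALB wiring could be done with the AWS CLI roughly like this (ARNs, IDs and the certificate are placeholders; repeat the target group/listener pair for port 443):

aws elbv2 create-target-group \
  --name openshift-master-8443 --protocol HTTPS --port 8443 \
  --vpc-id <vpc-id>
aws elbv2 register-targets \
  --target-group-arn <target-group-arn> --targets Id=<master-instance-id>
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> --protocol HTTPS --port 8443 \
  --certificates CertificateArn=<acm-certificate-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>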

Kubernetes Multinode CoreOS guide doesn't create ELBs in AWS

The CoreOS Multinode Cluster guide appears to have a problem. When I create a cluster and configure connectivity, everything appears fine; however, I'm unable to create an ELB by exposing a service:
$ kubectl expose rc my-nginx --port 80 --type=LoadBalancer
service "my-nginx" exposed
$ kubectl describe services
Name: my-nginx
Namespace: temp
Labels: run=my-nginx
Selector: run=my-nginx
Type: LoadBalancer
IP: 10.100.6.247
Port: <unnamed> 80/TCP
NodePort: <unnamed> 32224/TCP
Endpoints: 10.244.37.2:80,10.244.73.2:80
Session Affinity: None
No events.
The IP line that says 10.100.6.247 looks promising, but no ELB is actually created in my account. I can otherwise interact with the cluster just fine, so it seems bizarre. A "kubectl get services" listing is similar -- it shows the private IP (same as above) but the EXTERNAL_IP column is empty.
Ultimately, my goal is a solution that allows me to easily configure my VPC (ie. private subnets with NAT instances) and if I can get this working, it'd be easy enough to drop into CloudFormation since it's based on user-data. The official method of kube-up doesn't leave room for VPC-level customization in a repeatable way.
Unfortunately, that getting-started guide isn't nearly as up to date as the kube-up implementation. For instance, I don't see a --cloud-provider=aws flag anywhere, and the kube-controller-manager needs that in order to know to call the AWS APIs.
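To sketch what that entails (unit names, paths, and the other flags are assumptions that vary by install method; --cloud-provider=aws is the relevant part):

# /etc/systemd/system/kube-controller-manager.service (excerpt)
ExecStart=/usr/bin/kube-controller-manager \
  --master=http://127.0.0.1:8080 \
  --cloud-provider=aws
# /etc/systemd/system/kubelet.service (excerpt, on every node)
ExecStart=/usr/bin/kubelet \
  --cloud-provider=aws \
  ...
# the nodes also need an IAM role permitting EC2/ELB calls and the
# KubernetesCluster tag that the AWS cloud provider uses to find resources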
You may want to check out the official CoreOS on AWS guide:
https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html
If you hit a deadend or find a problem, I recommend asking in the AWS Special Interest Group forum:
https://groups.google.com/forum/#!forum/kubernetes-sig-aws