How can I connect to kubernetes-dashboard via "https"? - amazon-web-services

I have a running private Kubernetes cluster (v1.20) with Fargate instances for all pods in the cluster. Access is restricted to the NodePort range and port 443.
I use ExternalDNS to create Route53 entries to route from my internal network to the Kubernetes cluster. Everything works fine with e.g. UIs listening on port 443.
Now I want to use the kubernetes-dashboard not via a proxy to my localhost but via DNS resolution with Route53. For this, I made two changes in kubernetes-dashboard.yml:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
is now:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    external-dns.alpha.kubernetes.io/hostname: kubernetes-dashboard.hostname.local
spec:
  ports:
    - port: 443
      targetPort: 8443
  externalTrafficPolicy: Local
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
and the container specs now contain the self-signed certificates:
spec:
  containers:
    - name: kubernetes-dashboard
      image: kubernetesui/dashboard:v2.0.0
      imagePullPolicy: Always
      ports:
        - containerPort: 8443
          protocol: TCP
      args:
        - --auto-generate-certificates
        - --namespace=kubernetes-dashboard
        - --tls-cert-file=/tls.crt
        - --tls-key-file=/tls.key
The certificates (the same ones used for the UIs) are mapped in via a Kubernetes secret.
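Roughly like this; kubernetes-dashboard-certs is just the secret name from the stock manifest and /certs is the dashboard's default certificate directory, so treat it as a sketch rather than my exact manifest:

kubectl create secret tls kubernetes-dashboard-certs --cert=tls.crt --key=tls.key -n kubernetes-dashboard

      volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
  volumes:
    - name: kubernetes-dashboard-certs
      secret:
        secretName: kubernetes-dashboard-certs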
The subnets for the Fargate instances are the same as for my other applications. Yet I receive a "Connection refused" when I try to call my dashboard.
Checking the dashboard with the local proxy, the configuration seems fine. The DNS entry is also resolved to the correct IP address of the Fargate instance.
My problem here is that I already use this setup elsewhere and it works, but I see no difference to the dashboard here. Am I missing something?
Can anybody help me out here?
Greetings,
Eric

Related

Migrate Classic Loadbalancer to Application Loadbalancer in EKS

I am trying to migrate my CLB to an ALB. I know there is a direct option in the AWS load balancer console to do a migration, but I don't want to use that. I have a service file which deploys a Classic Load Balancer on EKS using kubectl.
apiVersion: v1
kind: Service
metadata:
  annotations: {service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600',
    service.beta.kubernetes.io/aws-load-balancer-type: classic}
  name: helloworld
spec:
  ports:
  - {name: https, port: 8443, protocol: TCP, targetPort: 8080}
  - {name: http, port: 8080, protocol: TCP, targetPort: 8080}
  selector: {app: helloworld}
  type: LoadBalancer
I want to convert it into an ALB. I tried the following approach but it did not work.
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  ports:
  - {name: https, port: 8443, protocol: TCP, targetPort: 8080}
  - {name: http, port: 8080, protocol: TCP, targetPort: 8080}
  selector: {app: helloworld}
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloworld
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/tags: Environment=dev,Team=app
spec:
  rules:
  - host: "*.amazonaws.com"
    http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: helloworld
            port:
              number: 8080
It has not created any load balancer. When I did kubectl get ingress, it showed me the Ingress, but it had no address. What am I doing wrong here?
Your Ingress file seems to be correct.
To have an ALB created automatically from an Ingress, you need to install the AWS Load Balancer Controller, which manages AWS Elastic Load Balancers for a Kubernetes cluster.
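One common way to install it is via its Helm chart; this is only a sketch and assumes you have already set up the IAM role and the aws-load-balancer-controller service account via IRSA (<your-cluster-name> is a placeholder):

helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<your-cluster-name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller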
Then verify that the controller is installed correctly:
kubectl get deployment -n kube-system aws-load-balancer-controller
apply:
kubectl apply -f service-ingress.yaml
and verify that your ALB, target groups, etc. are created:
kubectl logs deploy/aws-load-balancer-controller -n kube-system --follow
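Once the controller has reconciled the Ingress, kubectl get ingress should show the ALB's DNS name:

kubectl get ingress helloworld
# the ADDRESS column should now show the DNS name of the provisioned ALB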

kind Kubernetes: NodePort (front-end) service is not able to access ClusterIP (back-end) service from browser

I have used kind to create the cluster.
I have created 3 services for 3 pods (EmberJS, Flask, Postgres). The pods are created using Deployments.
I have exposed my front-end service on port 84 (NodePort service).
My app is now accessible on localhost:84 in my machine's browser.
But the app is not able to connect to the Flask API, which is exposed as flask-dataapp-service:6003:
net::ERR_NAME_NOT_RESOLVED
My Flask service is available as flask-dataapp-service:6003. When I do a
curl flask-dataapp-service:6003
inside the bash of the Ember pod container, it is resolved without any issues.
But from the browser, flask-dataapp-service is not being resolved.
Find the config files below.
kind-custom.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 84
    listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
    protocol: tcp
Emberapp.yaml
apiVersion: v1
kind: Service
metadata:
  name: ember-dataapp-service
spec:
  selector:
    app: ember-dataapp
  ports:
  - protocol: "TCP"
    port: 4200
    nodePort: 30000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ember-dataapp
spec:
  selector:
    matchLabels:
      app: ember-dataapp
  replicas: 1
  template:
    metadata:
      labels:
        app: ember-dataapp
    spec:
      containers:
      - name: emberdataapp
        image: emberdataapp
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 4200
flaskapp.yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-dataapp-service
spec:
  selector:
    app: flask-dataapp
  ports:
  - protocol: "TCP"
    port: 6003
    targetPort: 1234
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-dataapp
spec:
  selector:
    matchLabels:
      app: flask-dataapp
  replicas: 1
  template:
    metadata:
      labels:
        app: flask-dataapp
    spec:
      containers:
      - name: dataapp
        image: dataapp
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 1234
My flask service is available as flask-dataapp-service:6003. When I do a
curl flask-dataapp-service:6003
inside the bash of the ember pod container, it is resolved without any issues.
Kubernetes has an in-cluster DNS which allows names such as this to be resolved directly within the cluster (i.e. DNS requests do not leave the cluster). This is also why the name does not resolve outside the cluster, and hence why you cannot see it in your browser.
(Unrelated side note: this is actually a gotcha in the Kubernetes CKA certification)
Since you have already used a NodePort service for the front end, you could in theory expose the Flask service the same way and then reach it from the browser at something like "http://localhost:6003"; a sketch follows.
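Note that with kind a NodePort is only reachable from the host if it is also mapped in the cluster config; the nodePort value 30001 and host port 6003 below are arbitrary illustrative choices, not something taken from your setup:

# additional mapping under extraPortMappings in kind-custom.yaml
  - containerPort: 30001
    hostPort: 6003
    protocol: tcp

# flask-dataapp-service changed from ClusterIP to NodePort
apiVersion: v1
kind: Service
metadata:
  name: flask-dataapp-service
spec:
  type: NodePort
  selector:
    app: flask-dataapp
  ports:
  - protocol: "TCP"
    port: 6003
    targetPort: 1234
    nodePort: 30001

The front end would then have to call the API as http://localhost:6003 instead of the in-cluster service name.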
Alternatively, you can port-forward:
kubectl port-forward svc/flask-dataapp-service 6003:6003
then use the same link
While the port-forward option is not of much use when running a local Kubernetes cluster (in fact, kubectl might fail with "port in use"), it's a good idea to get used to that method, since it's the easiest way to access a service in a remote Kubernetes cluster that is using ClusterIP or NodePort without having direct access to the nodes.

How to deploy a Kubernetes service using NodePort on Amazon AWS?

I have created a cluster on AWS EC2 using kops, consisting of a master node and two worker nodes, all with public IPv4 addresses assigned.
Now, I want to create a deployment with a service using NodePort to expose the application to the public.
After creating the service, I retrieve the following information, showing that it correctly identified my three pods:
nlykkei:~/projects/k8s-examples$ kubectl describe svc hello-svc
Name: hello-svc
Namespace: default
Labels: app=hello
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"hello"},"name":"hello-svc","namespace":"default"},"spec"...
Selector: app=hello-world
Type: NodePort
IP: 100.69.62.27
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30001/TCP
Endpoints: 100.96.1.5:8080,100.96.2.3:8080,100.96.2.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
However, when I try to visit any of my public IPv4 addresses on port 30001, I get no response from the server. I have already created a Security Group allowing all ingress traffic to port 30001 for all of the instances.
Everything works with Docker Desktop for Mac, and there I notice the following service field, which is not present in the output above:
LoadBalancer Ingress: localhost
I've already studied https://kubernetes.io/docs/concepts/services-networking/service/ and think that NodePort should serve my needs.
Any help is appreciated!
So you want to have a service that can be accessed from the public internet. In order to achieve this, I would recommend creating a ClusterIP service and then an Ingress for that service. So, assuming you have the deployment hello-world with pods listening on port 8080, you will then have the following two objects:
Service:
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  ports:
  - name: service
    port: 8081        # or whatever you want
    protocol: TCP
    targetPort: 8080  # here goes the port opened in your pods
  selector:
    app: hello-world
  type: ClusterIP
Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  rules:
  - host: hello-world.mycutedomainname.com
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 8081  # or whatever you have set as the service port
        path: /
Note: the name field in the service's port is optional.
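Assuming an ingress controller is already running in the cluster (the Ingress object alone does not provision a load balancer), a rough way to check and test the rule above; <ingress-address> stands for whatever external address the controller exposes:

kubectl get ingress hello-world
curl -H "Host: hello-world.mycutedomainname.com" http://<ingress-address>/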

k8s ingress to secure the application with https

I have a k8s app (a web API) which was first exposed via NodePort (I've used port forwarding to run it and it works as expected),
e.g. localhost:8080/api/v1/users.
Then I created a service of type LoadBalancer to expose it externally, which also works as expected,
e.g. http://myhost:8080/api/v1/users
apiVersion: v1
kind: Service
metadata:
  name: fzr
  labels:
    app: fzr
    tier: service
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: fzr
Now we need to make it secure, and after reading about this topic we have decided to use an Ingress for it.
This is what I did:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ctr-ingress
  selector:
    app: fzr
spec:
  ports:
  - name: https
    port: 443
    targetPort: https
Now I want to call it like
https://myhost:443/api/v1/users
This is not working; I'm not able to reach the application on port 443 via https. Please advise.
It looks to me like you are using a YAML template for a Service to deploy your Ingress, but not correctly. targetPort should be a numeric port, and anyway, I don't think "https" is a correct value (I might be wrong though).
Something like this:
apiVersion: v1
kind: Service
metadata:
  name: fzr-ingress
spec:
  type: NodePort
  selector:
    app: fzr
  ports:
  - protocol: TCP
    port: 443
    targetPort: 8080
Now you have a NodePort service listening on 443 and forwarding the traffic to your fzr pods listening on port 8080.
However, the fact that you are listening on port 443 does nothing to secure your app by itself. To encrypt the traffic you need a TLS certificate, which you have to make available to the ingress as a secret.
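For example, with an existing certificate and key on disk (the file names here are illustrative), the secret referenced below as myhosts-tls could be created with:

kubectl create secret tls myhosts-tls --cert=myhost.crt --key=myhost.key --namespace default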
If this seems somewhat complicated (because it is), you could look into deploying an Nginx ingress controller from a Helm chart.
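A minimal sketch of that route, assuming the community ingress-nginx chart:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace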
In any case your ingress yaml would look something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: gcs-ingress
  namespace: default
spec:
  rules:
  - host: myhost
    http:
      paths:
      - backend:
          serviceName: fzr
          servicePort: 443
        path: /api/v1/users
  tls:
  - hosts:
    - myhost
    secretName: myhosts-tls
More info on how to configure this here

Service (LoadBalancer) port not working on aws

I have a LoadBalancer service on a k8s deployment on AWS (made via kops).
Service definition is as follows:
apiVersion: v1
kind: Service
metadata:
  name: ui
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <certificate_id>
spec:
  ports:
  - name: http
    port: 80
    targetPort: ui-port
    protocol: TCP
  - name: https
    port: 443
    targetPort: ui-port
    protocol: TCP
  selector:
    els-pod: ui
  type: LoadBalancer
Here is the respective deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ui-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        els-pod: ui
    spec:
      containers:
      - image: <my_ecr_registry>/<my_image>:latest
        name: ui
        ports:
        - name: ui-port
          containerPort: 80
      restartPolicy: Always
I know that <my_image> exposes port 80.
I have also assigned an alias to the ELB that gets deployed, say my-k8s.mydomain.org.
Here is the issue:
https://my-k8s.mydomain.org works just fine
http://my-k8s.mydomain.org returns an empty page (when accessing it behind a squid proxy, I get a zero-sized reply error message)
Why am I unable to access the service via port 80?
Edit: what I have just found is that the service annotation regarding the certificate also assigns it to port 80 on the ELB.
Could that be the issue?
Is there a way around this?
Just needed to add the following annotation in the service definition:
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
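So the metadata of the service above ends up looking roughly like this, keeping the existing certificate annotation:

metadata:
  name: ui
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <certificate_id>
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"

With ssl-ports set to "443", the certificate is attached only to the ELB's 443 listener, and port 80 is served as plain HTTP instead of inheriting the certificate.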