I have a LoadBalancer service on a k8s deployment on AWS (created via kops).
The service definition is as follows:
apiVersion: v1
kind: Service
metadata:
  name: ui
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <certificate_id>
spec:
  ports:
  - name: http
    port: 80
    targetPort: ui-port
    protocol: TCP
  - name: https
    port: 443
    targetPort: ui-port
    protocol: TCP
  selector:
    els-pod: ui
  type: LoadBalancer
Here is the respective deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ui-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        els-pod: ui
    spec:
      containers:
      - image: <my_ecr_registry>/<my_image>:latest
        name: ui
        ports:
        - name: ui-port
          containerPort: 80
      restartPolicy: Always
I know that <my_image> exposes port 80.
I have also assigned an alias to the ELB that gets deployed, say my-k8s.mydomain.org.
Here is the issue:
https://my-k8s.mydomain.org works just fine
http://my-k8s.mydomain.org returns an empty page (when accessing it behind a Squid proxy, I get a "zero-sized reply" error)
Why am I unable to access the service via port 80?
Edit: I have just found that the certificate annotation on the service also assigns the certificate to port 80 on the ELB.
Could that be the issue?
Is there a way around this?
I just needed to add the following annotation to the service definition:
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
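For reference, a minimal sketch of how the metadata section could look with both annotations combined (the backend-protocol annotation is an extra assumption on my part, on the basis that the pods speak plain HTTP on ui-port):
apiVersion: v1
kind: Service
metadata:
  name: ui
  annotations:
    # certificate used by the ELB to terminate TLS
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <certificate_id>
    # only terminate TLS on 443, leaving port 80 as plain HTTP
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # assumption: the backend behind ui-port speaks plain HTTP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http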
I am trying to migrate my CLB to an ALB. I know there is a direct option in the AWS load balancer console to do the migration, but I don't want to use that. I have a service file which deploys a classic load balancer on EKS using kubectl.
apiVersion: v1
kind: Service
metadata:
  annotations: {service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600',
    service.beta.kubernetes.io/aws-load-balancer-type: classic}
  name: helloworld
spec:
  ports:
  - {name: https, port: 8443, protocol: TCP, targetPort: 8080}
  - {name: http, port: 8080, protocol: TCP, targetPort: 8080}
  selector: {app: helloworld}
  type: LoadBalancer
I want to convert it into an ALB. I tried the following approach, but it did not work.
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  ports:
  - {name: https, port: 8443, protocol: TCP, targetPort: 8080}
  - {name: http, port: 8080, protocol: TCP, targetPort: 8080}
  selector: {app: helloworld}
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloworld
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/tags: Environment=dev,Team=app
spec:
  rules:
  - host: "*.amazonaws.com"
    http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: helloworld
            port:
              number: 8080
It has not created any load balancer. When I run kubectl get ingress, it shows the Ingress, but it has no address. What am I doing wrong here?
Your Ingress file seems to be correct.
To have an ALB created automatically from an Ingress, you need to install the AWS Load Balancer Controller, which manages AWS Elastic Load Balancers for a Kubernetes cluster.
You can follow this and then verify that it is installed correctly:
kubectl get deployment -n kube-system aws-load-balancer-controller
apply:
kubectl apply -f service-ingress.yaml
and verify that your ALB, target groups, etc. are created by watching the controller logs:
kubectl logs deploy/aws-load-balancer-controller -n kube-system --follow
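If the controller is not installed yet, one common way to install it is via its Helm chart, roughly like this (a sketch: the cluster name is a placeholder, and it assumes an aws-load-balancer-controller service account with the required IAM role has already been created, e.g. via eksctl):
# add the EKS charts repo and install the controller into kube-system
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<your-cluster-name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller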
I have used kind to create a Kubernetes cluster.
I have created 3 services for 3 pods (EmberJS, Flask, Postgres). The pods are created using Deployments.
I have exposed my front-end service on port 84 (NodePort service).
My app is now accessible at localhost:84 in my machine's browser.
But the app is not able to connect to the Flask API, which is exposed as flask-dataapp-service:6003:
net:: ERR_NAME_NOT_RESOLVED
My Flask service is available as flask-dataapp-service:6003. When I run
curl flask-dataapp-service:6003
inside a bash shell in the ember pod container, it resolves without any issues.
But from the browser, flask-dataapp-service is not resolved.
Find the config files below.
kind-custom.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 84
    listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
    protocol: tcp
Emberapp.yaml
apiVersion: v1
kind: Service
metadata:
  name: ember-dataapp-service
spec:
  selector:
    app: ember-dataapp
  ports:
  - protocol: "TCP"
    port: 4200
    nodePort: 30000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ember-dataapp
spec:
  selector:
    matchLabels:
      app: ember-dataapp
  replicas: 1
  template:
    metadata:
      labels:
        app: ember-dataapp
    spec:
      containers:
      - name: emberdataapp
        image: emberdataapp
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 4200
flaskapp.yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-dataapp-service
spec:
  selector:
    app: flask-dataapp
  ports:
  - protocol: "TCP"
    port: 6003
    targetPort: 1234
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-dataapp
spec:
  selector:
    matchLabels:
      app: flask-dataapp
  replicas: 1
  template:
    metadata:
      labels:
        app: flask-dataapp
    spec:
      containers:
      - name: dataapp
        image: dataapp
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 1234
My Flask service is available as flask-dataapp-service:6003. When I run
curl flask-dataapp-service:6003
inside a bash shell in the ember pod container, it resolves without any issues.
Kubernetes has an in-cluster DNS which allows names such as this to be resolved directly within the cluster (i.e. DNS requests do not leave the cluster). This is also why the name does not resolve outside the cluster, and hence why you cannot see it in your browser.
(Unrelated side note: this is actually a gotcha in the Kubernetes CKA certification)
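To make that concrete, here is a quick sketch of what resolves from inside a pod, assuming both services live in the default namespace and the cluster uses the default cluster.local domain; none of these names resolve from the browser on your machine:
# run inside the ember pod, e.g. via kubectl exec -it <ember-pod> -- sh
curl http://flask-dataapp-service:6003
curl http://flask-dataapp-service.default:6003
curl http://flask-dataapp-service.default.svc.cluster.local:6003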
Since you have used a NodePort service, you should in theory be able to use the node port you described (6003) and access the app at http://localhost:6003.
Alternatively, you can port-forward:
kubectl port-forward svc/flask-dataapp-service 6003:6003
and then use the same link.
While the port-forward option is not of much use when running a local Kubernetes cluster (in fact, kubectl might fail with "port in use"), it's a good idea to get used to that method, since it's the easiest way to access a service in a remote Kubernetes cluster that uses ClusterIP or NodePort without having direct access to the nodes.
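If you do want the browser to reach the Flask API directly on this kind cluster, one option is to publish it the same way as the frontend. A sketch, where the node port 30001 and host port 6003 are arbitrary choices of mine, and changing extraPortMappings means recreating the kind cluster:
# kind-custom.yaml: add a second port mapping under the control-plane node
  - containerPort: 30001
    hostPort: 6003
    protocol: tcp

# flaskapp.yaml: switch the service to NodePort and pin the node port
apiVersion: v1
kind: Service
metadata:
  name: flask-dataapp-service
spec:
  type: NodePort
  selector:
    app: flask-dataapp
  ports:
  - protocol: "TCP"
    port: 6003
    targetPort: 1234
    nodePort: 30001
The Ember app would then have to call the API as http://localhost:6003 instead of flask-dataapp-service:6003, since the service name only means something inside the cluster.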
I have a running private Kubernetes cluster (v1.20) with Fargate instances for the pods across the whole cluster. Access is restricted to the NodePort range and 443.
I use ExternalDNS to create Route 53 entries to route from my internal network to the Kubernetes cluster. Everything works fine with, for example, UIs listening on port 443.
Now I want to reach the kubernetes-dashboard not via a proxy to my localhost but via DNS resolution with Route 53. For this, I made two changes in kubernetes-dashboard.yml:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
is now:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    external-dns.alpha.kubernetes.io/hostname: kubernetes-dashboard.hostname.local
spec:
  ports:
    - port: 443
      targetPort: 8443
  externalTrafficPolicy: Local
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
and the container specs now contain the self-signed certificates:
spec:
  containers:
    - name: kubernetes-dashboard
      image: kubernetesui/dashboard:v2.0.0
      imagePullPolicy: Always
      ports:
        - containerPort: 8443
          protocol: TCP
      args:
        - --auto-generate-certificates
        - --namespace=kubernetes-dashboard
        - --tls-cert-file=/tls.crt
        - --tls-key-file=/tls.key
The certificates (the same ones used for the UIs) are mapped in via a Kubernetes secret.
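(For context, such a mapping typically looks roughly like the sketch below; the secret name and the use of subPath mounts are my assumptions, not taken from the actual manifest:)
      volumeMounts:
        - name: dashboard-tls
          mountPath: /tls.crt
          subPath: tls.crt
          readOnly: true
        - name: dashboard-tls
          mountPath: /tls.key
          subPath: tls.key
          readOnly: true
  volumes:
    - name: dashboard-tls
      secret:
        secretName: kubernetes-dashboard-certs  # assumed secret name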
The subnets for the Fargate instances are the same as for my other applications. Yet I receive a "Connection refused" when I try to call my dashboard.
Checking the dashboard via the local proxy, the configuration seems fine. The DNS entry also resolves to the correct IP address of the Fargate instance.
My problem is that I already use this setup elsewhere and it works, but I see no difference in the dashboard setup here. Am I missing something?
Can anybody help me out here?
Greetings,
Eric
I have a k8s app (a web API) which was first exposed via NodePort (I used port forwarding to run it, and it works as expected),
e.g. localhost:8080/api/v1/users
Then I created a service of type LoadBalancer to expose it externally, which works as expected,
e.g. http://myhost:8080/api/v1/users
apiVersion: v1
kind: Service
metadata:
  name: fzr
  labels:
    app: fzr
    tier: service
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: fzr
Now we need to make it secure, and after reading about this topic we have decided to use an Ingress for it.
This is what I did:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ctr-ingress
selector:
  app: fzr
spec:
  ports:
  - name: https
    port: 443
    targetPort: https
Now I want to access it like
https://myhost:443/api/v1/users
This is not working; I'm not able to reach the application on port 443 over HTTPS. Please advise.
It looks to me like you are using a YAML template for a Service to deploy your Ingress, but not correctly. targetPort should be a numeric port, and in any case I don't think "https" is a correct value (I might be wrong though).
Something like this:
apiVersion: v1
kind: Service
metadata:
  name: fzr-ingress
spec:
  type: NodePort
  selector:
    app: fzr
  ports:
  - protocol: TCP
    port: 443
    targetPort: 8080
Now you have a NodePort service listening on 443 and forwarding the traffic to your fzr pods listening on port 8080.
However, the fact you are listening on port 443 does nothing to secure your app by itself. To encrypt the traffic you need a TLS certificate that you have to make available to the ingress as a secret.
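For example, assuming the certificate and key already exist as local PEM files (the paths below are placeholders), the secret referenced as myhosts-tls in the ingress further down could be created with:
kubectl create secret tls myhosts-tls --cert=path/to/tls.crt --key=path/to/tls.key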
If this seems somewhat complicated (because it is), you could look into deploying an NGINX ingress controller from a Helm chart.
In any case your ingress yaml would look something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: gcs-ingress
  namespace: default
spec:
  rules:
  - host: myhost
    http:
      paths:
      - backend:
          serviceName: fzr
          servicePort: 443
        path: /api/v1/users
  tls:
  - hosts:
    - myhost
    secretName: myhosts-tls
More info on how to configure this here
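Note that the extensions/v1beta1 Ingress API has since been removed (in Kubernetes 1.22); on newer clusters the same ingress would be written against networking.k8s.io/v1, roughly like this (same names as above, only the API shape changes):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gcs-ingress
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: myhost
    http:
      paths:
      - path: /api/v1/users
        pathType: Prefix
        backend:
          service:
            name: fzr
            port:
              number: 443
  tls:
  - hosts:
    - myhost
    secretName: myhosts-tls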
We have a cluster with several nodes, so I can't just use a NodePort and go to my node IP (which is what I've done for testing Prometheus).
I did a helm install stable/prometheus and stable/grafana in the "monitoring" namespace.
Everything looks okay so far.
Then I'm trying to create an LB service to access Grafana. It gets created, and I can see the CNAME pointing to the A record for the ELB at AWS, but when accessing the Grafana URL, nothing happens: no HTTP error, no problem page, nothing.
Here's the service-elb.yaml:
apiVersion: v1
kind: Service
metadata:
  name: grafana-lb
  namespace: monitoring
  labels:
    app: grafana
  annotations:
    dns.alpha.kubernetes.io/external: grafana-testing.country.ourdomain
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xxxxxx
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
spec:
  selector:
    app: grafana
    tier: frontend
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 80
    - name: http
      port: 80
      targetPort: 3000
  loadBalancerSourceRanges:
    - somerange
    - someotherrange
    - etc etc
BTW, I got a permissions error regarding the service account if I didn't install the chart with --set rbac.create=false.
I recently used an nginx proxy_pass for Kibana and also used an LB service similar to this one with no issue. But I'm missing something here and can't figure out what it is yet.
Any help will be much appreciated. I'll update this if I make it work.
Solved: I had to remove the "tier" selector and just use a spec like this:
spec:
  selector:
    app: grafana
  type: LoadBalancer
  ports:
    - name: http
      port: 3000
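If you still want the ELB to terminate TLS on 443 as in the original manifest, a fuller sketch of the fixed service could look like this (assuming Grafana listens on its default port 3000; the key changes from the original are dropping the tier selector and pointing the target port at 3000):
apiVersion: v1
kind: Service
metadata:
  name: grafana-lb
  namespace: monitoring
  labels:
    app: grafana
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xxxxxx
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
spec:
  selector:
    app: grafana
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 3000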