I am new to Kops and fairly new to Kubernetes as well. I managed to create a cluster with Kops and run a deployment and a service on it. Everything went well: an ELB was created for me and I could access the application via the ELB endpoint.
My question is: how can I map my subdomain (e.g. my-sub.example.com) to the generated ELB endpoint? I believe this should somehow be done automatically by Kubernetes, and I should not hardcode the ELB endpoint inside my code. I tried something involving a domainName annotation, but it did not work (see the Kubernetes YAML file below).
apiVersion: v1
kind: Service
metadata:
  name: django-app-service
  labels:
    role: web
    dns: route53
  annotations:
    domainName: "my.personal-site.de"
spec:
  type: LoadBalancer
  selector:
    app: django-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: django-app-deployment
spec:
  replicas: 2
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: django-app
    spec:
      containers:
        - image: fettah/djano_kubernetes_medium:latest
          name: django-app
          imagePullPolicy: Always
          ports:
            - containerPort: 8000
When you have ELBs in place, you can use the external-dns plugin (https://github.com/kubernetes-incubator/external-dns), which can attach DNS records to those ELBs via its AWS Route53 integration. You need to grant Kubernetes the proper rights so it can create DNS records in Route53: add an additional policy in the additionalPolicies section of the kops cluster configuration (following the guide in the external-dns plugin). Then use an annotation like:
external-dns.alpha.kubernetes.io/hostname: myservice.mydomain.com.
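For reference, a minimal sketch of the two pieces involved, assuming the hosted zone lives in Route53 and that external-dns runs with the node instance role; the IAM statements mirror the policy suggested in the external-dns AWS tutorial, and the names used are only illustrative. First the additionalPolicies block in the kops cluster spec (kops edit cluster), then a Service carrying the annotation external-dns watches for:

# kops cluster spec excerpt (kops edit cluster)
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["route53:ChangeResourceRecordSets"],
          "Resource": ["arn:aws:route53:::hostedzone/*"]
        },
        {
          "Effect": "Allow",
          "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
          "Resource": ["*"]
        }
      ]
---
# Service annotated for external-dns (illustrative names)
apiVersion: v1
kind: Service
metadata:
  name: django-app-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-sub.example.com.
spec:
  type: LoadBalancer
  selector:
    app: django-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000

After kops update cluster --yes and deploying external-dns itself, it should create a record in the zone pointing at the ELB that the Service provisions.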
When I try to apply a new Service deployment yaml in AWS EKS, it does not delete the old load balancer from the previous build/deploy
apiVersion: v1
kind: Service
metadata:
  # The name must be equal to KubernetesConnectionConfiguration.serviceName
  name: ignite-service
  # The name must be equal to KubernetesConnectionConfiguration.namespace
  namespace: ignite
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  labels:
    app: ignite
spec:
  type: LoadBalancer
  ports:
    - name: rest
      port: 8080
      targetPort: 8080
    - name: thinclients
      port: 10800
      targetPort: 10800
  # Optional - remove 'sessionAffinity' property if the cluster
  # and applications are deployed within Kubernetes
  # sessionAffinity: ClientIP
  selector:
    # Must be equal to the label set for pods.
    app: ignite
status:
  loadBalancer: {}
I had a situation where I had previously deployed an ELB and this time deployed an NLB, but the previous ELB was not destroyed.
Is there a way, when applying the k8s manifest, to have the old load balancer on AWS deleted?
Not sure what you are using to create the resources (Helm, raw manifests, or kubectl); however, considering kubectl for now, you can use:
kubectl replace --force -f <service-filename>.yaml
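For example, run against the Ignite Service manifest above (the filename here is assumed), this deletes and recreates the Service object, which makes the cloud provider tear down the load balancer tied to the old object and provision a new one; you can watch the swap happen:

kubectl replace --force -f ignite-service.yaml
# EXTERNAL-IP clears, then shows the hostname of the newly provisioned NLB
kubectl get svc ignite-service -n ignite -w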
Problem: I am currently using ingress-nginx in my EKS cluster to route traffic to services that need public access.
My use case: I have services I want to deploy in the same cluster but don't want them to have public access. I only want the pods to communicate with all other services within the cluster. Those pods are meant to be private because they're backend services and only need pod-to-pod communication. How do I modify my ingress resource for this purpose?
Cluster Architecture: All services are in the private subnets of the cluster while the load-balancer is in the public subnets
Additional note: I am using external-dns to dynamically create the subdomains for the hosted zones. The hosted zone is public
Thanks
Below are my service.yml and ingress.yml for public services. I want to modify these files for private services
service.yml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: myapp
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.dev.com
spec:
  ports:
    - port: 80
      targetPort: 3000
  selector:
    app: myapp
ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "nginx"
  labels:
    app: myapp
spec:
  tls:
    - hosts:
        - myapp.dev.com
      secretName: myapp-staging
  rules:
    - host: myapp.dev.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: 'myapp'
                port:
                  number: 80
With what you already have, the Ingress should work and your services are effectively private (given a setup like this in your public cloud cluster); only the Ingress itself is exposed. You can update the ConfigMap to use the PROXY protocol so that proxy information is passed to the Ingress Controller:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-protocol: "True"
  real-ip-header: "proxy_protocol"
  set-real-ip-from: "0.0.0.0/0"
And then: kubectl apply -f common/nginx-config.yaml
Now you can deploy any app that you want to keep private under the name specified (for example, the myapp Service in the YAML you provided).
If you are new to Kubernetes networking, this article, as well as the official Kubernetes documentation, would be useful to you.
Here you can find other ELB annotations that may be useful for you.
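As an illustration of a backend-only setup (the names are reused from your manifests, but the separate backend Service is an assumption of mine): give the backend its own Service with no Ingress, no external-dns annotation, and no LoadBalancer/NodePort type, so it only gets a cluster-internal virtual IP:

apiVersion: v1
kind: Service
metadata:
  name: myapp-backend        # hypothetical backend-only service
  namespace: myapp
spec:
  type: ClusterIP            # the default; reachable only from inside the cluster
  ports:
    - port: 80
      targetPort: 3000
  selector:
    app: myapp-backend

Other pods can then reach it at myapp-backend.myapp.svc.cluster.local (or just myapp-backend from the same namespace), and nothing is published in the public hosted zone.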
I'm very new to kubernetes. I have spent the last week learning about Nodes, Pods, Clusters, Services, and Deployments.
With that I'm trying to just get some more understanding of how the networking for kubernetes even works. I just want to expose a simple nginx docker webpage and hit it from my browser.
Our VPC is set up with a Direct Connect, so I'm able to hit EC2 instances on their private IP addresses. I also set up the EKS cluster as private, using the AWS console for now. For testing purposes I have allowed my CIDR range on all TCP in an additional security group in the EKS cluster UI.
Here is my basic service and deployment definitions:
apiVersion: v1
kind: Service
metadata:
  name: testing-nodeport
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  type: NodePort
  selector:
    app: testing-app
  ports:
    - port: 80
      targetPort: testing-port
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-deployment
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  replicas: 1
  selector:
    matchLabels:
      infrastructure: fargate
      app: testing-app
  template:
    metadata:
      labels:
        infrastructure: fargate
        app: testing-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - name: testing-port
              containerPort: 80
I can see that everything is running correctly when I run:
kubectl get all -n default
However, when I try to hit the NodePort IP address on port 80 I can't load it from the browser.
I can hit the pod if I first set up a kubectl proxy at the following URL (the proxy is started on port 8001):
http://localhost:8001/api/v1/namespaces/default/services/testing-nodeport:80/proxy/
I'm pretty much lost at this point. I don't know what I'm doing wrong and why I can't hit the basic nginx docker outside of the kubectl proxy command.
What if you use port forwarding? Something like this:
kubectl port-forward -n default service/testing-nodeport 3000:80
Forwarding from 127.0.0.1:3000 -> 80
Forwarding from [::1]:3000 -> 80
After this, you can access your K8S service from localhost:3000. More info here
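As a quick sanity check (assuming the port-forward above is left running in another terminal), something like this should return the default nginx page from the testing-app pod:

# in a second terminal, while the port-forward is running
curl -I http://localhost:3000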
Imagine that the Kubernetes cluster is like your AWS VPC. It has its own internal network with private IPs that connects all the Pods. Kubernetes only exposes things which you explicitly ask to expose.
Service port 80 is available within the cluster, so one pod can talk to this service using service name:service port. But if you need access from outside, you need an ingress controller / LoadBalancer. You can also use a NodePort for testing purposes. The node port will be in the 30000-32767 range.
You should be able to access nginx using node IP:nodeport. Here I assume you have a security group opening the node port.
Use this yaml. I updated the node port to be 31000, so you can access nginx on nodeport 31000. As mentioned, you cannot use 80 as the node port, since that port is only for the Service within the cluster; if you need to expose it on port 80, you need an ingress controller.
apiVersion: v1
kind: Service
metadata:
  name: testing-nodeport
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  type: NodePort
  selector:
    app: testing-app
  ports:
    - port: 80
      targetPort: testing-port
      protocol: TCP
      nodePort: 31000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-deployment
  namespace: default
  labels:
    infrastructure: fargate
    app: testing-app
spec:
  replicas: 1
  selector:
    matchLabels:
      infrastructure: fargate
      app: testing-app
  template:
    metadata:
      labels:
        infrastructure: fargate
        app: testing-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - name: testing-port
              containerPort: 80
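To try it from your workstation over the Direct Connect, something along these lines should work, provided the security groups allow port 31000 from your CIDR (the node IP below is made up):

# list the nodes' private IPs
kubectl get nodes -o wide
# hit the NodePort on any node's private IP
curl http://10.0.12.34:31000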
Okay, after 16+ hours of debugging this I finally figured out what's going on. On Fargate you can't set the security groups per node like you can with managed node groups. I was setting the security group rules in the "Additional security groups" settings. However, Fargate apparently completely ignores those settings and ONLY uses the security group from your "Cluster security group" setting. So in the EKS UI I set the correct rules in the "Cluster security group" and I can now hit my pod directly on a Fargate instance.
Big takeaway from this: only use the "Cluster security group" for Fargate nodes.
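If you prefer the CLI over the console, a rough sketch of the same fix looks like this (cluster name, security group ID, and CIDR are placeholders):

# find the cluster security group that Fargate pods actually use
aws eks describe-cluster --name my-cluster \
  --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' --output text
# allow your on-premises CIDR in on the port you are testing
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 31000 --cidr 10.20.0.0/16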
I want to know whether this is a default behaviour or something wrong with my setup.
I have 150 workers running on kubernetes.
Using a nodeSelector, I made a set of 10 Kubernetes workers run only a specific deployment, and I created a service (type=LoadBalancer) for it. When the Load Balancer was created, all 150 Kubernetes workers were registered with it, while I was expecting to see only the 10 workers of this deployment/service.
It behaved the same with alb-ingress-controller and an AWS NLB.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 8080
  type: LoadBalancer
and the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 10
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: master-api
          image: private/my-app:prod
          resources:
            requests:
              memory: 8000Mi
          ports:
            - containerPort: 8080
      nodeSelector:
        role: api
I already labeled the 10 worker nodes with role=api.
Those 10 run only pods of this deployment, and no other worker is running this service.
I also don't have another service or container using port 8080.
The ALB controller actually doesn't check your labels etc.; it purely looks at the tags on your subnets. So if your worker nodes are running inside a subnet tagged with kubernetes.io/role/alb-ingress or something like that, all your worker nodes in it will be added to the load balancer.
I believe it is part of the auto-discovery mechanism (see the docs).
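For reference, a sketch of the subnet tagging that drives this auto-discovery (subnet IDs and cluster name are placeholders; the tag keys shown are the ones commonly documented for public subnets, with kubernetes.io/role/internal-elb used for internal ones):

aws ec2 create-tags --resources subnet-0abc1234 subnet-0def5678 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared \
         Key=kubernetes.io/role/elb,Value=1

The point being: registration follows the subnets the controller discovers, not which nodes actually run your pods.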
I want to use an existing AWS ALB for my Kubernetes setup, i.e. I don't want alb-ingress-controller to create or update any existing AWS resources (target groups, roles, etc.).
How can I make the ALB communicate with the Kubernetes cluster, passing requests on to the existing services and returning the responses to the ALB for display in the front end?
I tried this, but it creates a new ALB for each new ingress resource. I want to use the existing one.
You basically have to open a node port on the instances where the Kubernetes Pods are running. Then you need to let the ALB point to those instances. There are two ways of configuring this. Either via Pods or via Services.
To configure it via a Service you need to specify .spec.ports[].nodePort. In the default setup the port needs to be between 30000 and 32767. This port gets opened on every node and will be redirected to the specified Pods (which might be on any other node). This has the downside that there is another hop, which can also cost money in a multi-AZ setup. An example Service could look like this:
---
apiVersion: v1
kind: Service
metadata:
  name: my-frontend
  labels:
    app: my-frontend
spec:
  type: NodePort
  selector:
    app: my-frontend
  ports:
    - port: 8080
      nodePort: 30082
To configure it via a Pod you need to specify .spec.containers[].ports[].hostPort. This can be any port number, but it has to be free on the node where the Pod gets scheduled. This means that there can only be one Pod per node and it might conflict with ports from other applications. This has the downside that not all instances will be healthy from an ALB point-of-view, since only nodes with that Pod accept traffic. You could add a sidecar container which registers the current node on the ALB, but this would mean additional complexity. An example could look like this:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-frontend
  labels:
    app: my-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-frontend
  template:
    metadata:
      name: my-frontend
      labels:
        app: my-frontend
    spec:
      containers:
        - name: nginx
          image: "nginx"
          ports:
            - containerPort: 80
              hostPort: 8080