Kubernetes can't port-forward externalName service - amazon-web-services

I created a service with type ExternalName:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: dev
spec:
  externalName: google.com
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
  type: ExternalName
Following the K8s docs, I added a new Endpoints object:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
  namespace: dev
subsets:
- addresses:
  - ip: 172.217.20.206
  ports:
  - port: 80
    protocol: TCP
Then I tried to forward it to my localhost:
kubectl port-forward -n dev svc/my-service 8080:80
and got the error:
error: cannot attach to *v1.Service: invalid service 'my-service':
Service is defined without a selector
AFAIU, I followed all the steps from the docs. What did I miss? Or does K8s not provide the ability to port-forward an ExternalName service in general?

kubectl port-forward only actually forwards a local connection to a single specific pod. While it looks like you can port-forward to other things, these are just means of picking a pod. If you run kubectl port-forward service/foo 12345:80, it actually looks at the pods selected by that Service, remaps the service's port 80 to the corresponding pod port, and forwards to that specific pod.
In your case, this means you can't port-forward to an ExternalName service, because there isn't a pod behind it, and kubectl port-forward only actually forwards to pods.
There are a couple of other implications (or demonstrations) of this. Start a normal Deployment running some service with 3 replicas, with a normal Service in front of it. Port-forward to either the Deployment or the Service, and run a load test; you will see only one pod receive all the traffic. Delete that specific pod, and the port-forward will shut down.
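To see this for yourself, here is a rough sketch (the deployment name web and the nginx image are illustrative, not taken from the question):
kubectl create deployment web --image=nginx --replicas=3
kubectl expose deployment web --port=80
# port-forward picks exactly one backing pod behind the service
kubectl port-forward service/web 8080:80
# in another terminal, send some traffic...
for i in $(seq 1 20); do curl -s http://localhost:8080/ > /dev/null; done
# ...then check the access logs: only one pod will have served the requests
kubectl logs -l app=web --prefix --tail=5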
If you want to connect to an ExternalName service, or otherwise do any of the more interesting things services do, you need to make the connection originate from inside the cluster. You could kubectl run a temporary pod as an example:
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  http://my-service.dev.svc.cluster.local

Related

Exposing a web application to the global network using Kubernetes

I'm new to K8s, trying some exercises for the first time.
I'm trying to expose a simple web app (nginx) to the outside network. I'm working on an EC2 instance with an Elastic IP (for a static IP address).
My deployment.yml file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - image: "nginx:latest"
        name: nginx
        ports:
        - containerPort: 80
After running the commands:
kubectl apply -f deployment.yml
kubectl expose deployment nginx-deployment --name my-service --port 8080 --target-port=80 --type=NodePort
I would expect to be able to reach this simple app via the Elastic IP and port (in my case, 8080), but I can't connect.
I've tried to see details about my app via the command:
kubectl get services my-service
and got this:
NAME         TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
my-service   NodePort   10.99.98.56   <none>        8080:32725/TCP   26m
I've also tried opening ALL OF THE PORTS on my instance, to check if there's any connection. What I did manage to do is retrieve the node's internal IP address with:
kubectl get nodes -o wide
and then, by adding the port number (the 32725) to that address with the curl command, I managed to get the nginx base HTML page.
My question is this: why couldn't I get the nginx base page via the Elastic IP?
And how can I access my simple app?
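(For what it's worth, a hedged sketch of what usually has to line up here: a NodePort service is published on the node at the node port, 32725 in the output above, not at the service port 8080. Assuming the Elastic IP is attached to that same instance and the security group allows inbound TCP on 32725, something like the following should work from outside; <ELASTIC_IP> is a placeholder.)
curl http://<ELASTIC_IP>:32725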

Upgrade classic loadbalancer to network loadbalancer

I am having trouble upgrading our CLB to an NLB. I did a manual upgrade via the wizard in the console, but connectivity wouldn't work. This upgrade is needed so we can use static IPs on the load balancer. I think it needs to be upgraded through Kubernetes, but my attempts failed.
What I (think I) understand about this setup is that this load balancer was set up using Helm. What I also understand is that the ingress controller is responsible for redirecting HTTP requests to HTTPS, and that this LB is working at layer 4.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.30.0
    component: controller
    heritage: Tiller
    release: nginx-ingress-external
  name: nginx-ingress-external-controller
  namespace: kube-system
  selfLink: /api/v1/namespaces/kube-system/services/nginx-ingress-external-controller
spec:
  clusterIP: 172.20.41.16
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 30854
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30621
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress-external
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: xxx.region.elb.amazonaws.com
How would I be able to perform the upgrade by modifying this configuration file?
As @Jonas pointed out in the comments section, creating a new LoadBalancer Service with the same selector as the existing one is probably the fastest and easiest method. As a result we will have two LoadBalancer Services using the same ingress-controller.
You can see in the following snippet that I have two Services (ingress-nginx-1-controller and ingress-nginx-2-controller) with exactly the same endpoint:
$ kubectl get pod -o wide ingress-nginx-1-controller-5856bddb98-hb865
NAME                                          READY   STATUS    RESTARTS   AGE   IP
ingress-nginx-1-controller-5856bddb98-hb865   1/1     Running   0          55m   10.36.2.8

$ kubectl get svc ingress-nginx-1-controller ingress-nginx-2-controller
NAME                         TYPE           CLUSTER-IP     EXTERNAL-IP
ingress-nginx-1-controller   LoadBalancer   10.40.15.230   <PUBLIC_IP>
ingress-nginx-2-controller   LoadBalancer   10.40.11.221   <PUBLIC_IP>

$ kubectl get endpoints ingress-nginx-1-controller ingress-nginx-2-controller
NAME                         ENDPOINTS                    AGE
ingress-nginx-1-controller   10.36.2.8:443,10.36.2.8:80   39m
ingress-nginx-2-controller   10.36.2.8:443,10.36.2.8:80   11m
Additionally, to avoid downtime, we can first change the DNS records to point at the new LoadBalancer, and after the propagation time we can safely delete the old LoadBalancer Service.
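For reference, a minimal sketch of what such a second Service could look like on AWS with an NLB (assuming the in-tree AWS cloud provider; the name is illustrative, and the selector and ports are copied from the existing Service above):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-external-nlb
  namespace: kube-system
  annotations:
    # ask the AWS cloud provider for a Network Load Balancer instead of a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  # same selector as nginx-ingress-external-controller, so both LBs hit the same pods
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress-external
Once DNS points at the new NLB and has propagated, the old Service (and with it the Classic ELB) can be deleted, as described above.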

K8s expose LoadBalancer service giving external-ip pending

I've created a Kubernetes cluster with AWS EC2 instances using kubeadm, but when I try to create a service of type LoadBalancer I get an EXTERNAL-IP pending status:
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP          123m
nginx        LoadBalancer   10.107.199.170   <pending>     8080:31579/TCP   45m52s
My create command is
kubectl expose deployment nginx --port 8080 --target-port 80 --type=LoadBalancer
I'm not sure what I'm doing wrong.
What I expect to see is an EXTERNAL-IP address given for the load balancer.
Has anyone had this and successfully solved it, please?
Thanks.
You need to set up the interface between k8s and AWS, which is the aws-cloud-provider-controller.
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
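As a hedged aside on usage: such a file is typically passed to kubeadm at init time (the file name kubeadm-config.yaml is just an example; the nodes also need the matching IAM permissions and resource tags, which the links below cover):
kubeadm init --config kubeadm-config.yaml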
More details can be found:
https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/
https://blog.heptio.com/setting-up-the-kubernetes-aws-cloud-provider-6f0349b512bd
https://blog.scottlowe.org/2019/02/18/kubernetes-kubeadm-and-the-aws-cloud-provider/
https://itnext.io/kubernetes-part-2-a-cluster-set-up-on-aws-with-aws-cloud-provider-and-aws-loadbalancer-f02c3509f2c2
Once you finish this setup, you will not only have an AWS LB created for each k8s Service of type LoadBalancer, but you will also be able to control many things using annotations.
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: kube-system
  labels:
    run: example
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xx-xxxx-x:xxxxxxxxx:xxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx # replace this value
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 5556
    protocol: TCP
  selector:
    app: example
Different settings can be applied to a load balancer service in AWS using annotations.
To create a K8s cluster on AWS using EC2, you need to consider some configuration to make it work as expected.
That's why your service is not exposed correctly with an external IP.
You need to get the public IP of the EC2 instance your cluster used to deploy the nginx pod, and then edit the nginx service to add that external IP:
kubectl edit service nginx
and that will open the service in your editor so you can add the external IP:
type: LoadBalancer
externalIPs:
- 1.2.3.4
where 1.2.3.4 is the public IP of the EC2 instance.
Then make sure your security group allows inbound traffic on your port (31579).
Now you are ready to use the k8s service: from any browser, open 1.2.3.4:31579
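A quick hedged check of the result (1.2.3.4 again standing in for the instance's public IP):
# the external IP should now be listed on the service
kubectl get service nginx
# and the app should answer on the node port from outside
curl -v http://1.2.3.4:31579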

Kubernetes Cluster-IP service not working as expected

OK, so currently I've got a Kubernetes master up and running on an AWS EC2 instance, and a single worker running on my laptop:
$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   34d   v1.9.2
worker   Ready    <none>   20d   v1.9.2
I have created a Deployment using the following configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostnames
  labels:
    app: hostnames-deployment
spec:
  selector:
    matchLabels:
      app: hostnames
  replicas: 1
  template:
    metadata:
      labels:
        app: hostnames
    spec:
      containers:
      - name: hostnames
        image: k8s.gcr.io/serve_hostname
        ports:
        - containerPort: 9376
          protocol: TCP
The deployment is running:
$ kubectl get deployment
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hostnames   1         1         1            1           1m
A single pod has been created on the worker node:
$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
hostnames-86b6bcdfbc-v8s8l   1/1     Running   0          2m
From the worker node, I can curl the pod and get the information:
$ curl 10.244.8.5:9376
hostnames-86b6bcdfbc-v8s8l
I have created a service using the following configuration:
kind: Service
apiVersion: v1
metadata:
  name: hostnames-service
spec:
  selector:
    app: hostnames
  ports:
  - port: 80
    targetPort: 9376
The service is up and running:
$ kubectl get svc
NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
hostnames-service   ClusterIP   10.97.21.18   <none>        80/TCP    1m
kubernetes          ClusterIP   10.96.0.1     <none>        443/TCP   34d
As I understand it, the service should expose the pod cluster-wide and I should be able to use the service IP to get the information the pod is serving from any node in the cluster.
If I curl the service from the worker node it works just as expected:
$ curl 10.97.21.18:80
hostnames-86b6bcdfbc-v8s8l
But if I try to curl the service from the master node located on the AWS EC2 instance, the request hangs and eventually times out:
$ curl -v 10.97.21.18:80
* Rebuilt URL to: 10.97.21.18:80/
* Trying 10.97.21.18...
* connect to 10.97.21.18 port 80 failed: Connection timed out
* Failed to connect to 10.97.21.18 port 80: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to 10.97.21.18 port 80: Connection timed out
Why can't the request from the master node reach the pod on the worker node by using the Cluster-IP service?
I have read quite a few articles on Kubernetes networking and the official Kubernetes services documentation, and couldn't find a solution.
Depending on which mode you are using, it works a bit differently in the details, but conceptually it is the same.
You are trying to connect to two different types of addresses: the pod IP address, which is accessible from the node, and the virtual IP address, which is accessible from pods in the Kubernetes cluster.
The IP address of the service is not an IP address of some pod or any other interface; it is a virtual address which is mapped to pod IP addresses based on the rules you define in the service, and it is managed by the kube-proxy daemon, which is part of Kubernetes.
That address is specifically intended for communication inside the cluster, so that you can reach the pods behind a service without caring how many replicas there are or where they are actually running, because the service IP is static, unlike a pod's IP.
So, the service IP address is meant to be reachable from other pods, not from nodes.
You can read in the official documentation about how Service virtual IPs work.
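A small sketch of checking it the intended way, from inside the cluster rather than from the master node (the pod name and the curlimages/curl image are illustrative; 10.97.21.18 is the service's cluster IP from the output above):
kubectl run svc-test --rm -it --restart=Never --image=curlimages/curl -- \
  http://10.97.21.18:80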
kube-proxy is responsible for setting up the IPTables rules (by default) that route cluster IPs. The Service's cluster IP should be routable from anywhere running kube-proxy. My first guess would be that kube-proxy is not running on the master.
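A hedged way to check that, assuming a kubeadm-style cluster where kube-proxy runs as a DaemonSet in kube-system:
# there should be one kube-proxy pod per node, including the master
kubectl get pods -n kube-system -o wide -l k8s-app=kube-proxy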

Kubernetes not creating ELB

I'm trying to set up my Kubernetes services as external by using type: LoadBalancer on AWS. After I created my service using kubectl I can see the change, but no ELB is created, not even asynchronously. Any hints on what could cause this? The pod I'm trying to expose is running a Docker image which exposes a web server on port 8001.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    name: my-service
spec:
  type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 8001
  selector:
    name: my-service
This was answered by Jan Garaj in Google Container Engine: Kubernetes is not exposing external IP after creating container, regarding a GCE deployment, and the answer for AWS is the same: you need to wait a few minutes for the reconciler to kick in; it will notice that an ELB should be created, talk to the AWS APIs, and create it for you.
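If nothing shows up after several minutes, two hedged checks (my-service is the Service from the question):
# watch for the ELB hostname to appear under EXTERNAL-IP
kubectl get service my-service --watch
# the Events section will show errors if the cloud provider fails to create the ELB
kubectl describe service my-service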