Why does port-forwarding to a headless service always route to the same pod? - kubectl

My question is about the Port Forwarding feature applied to a headless service in Kubernetes.
For example this command:
kubectl port-forward service/mongodb-headless 27017:27017 -n mongodb
port-forwards to a headless service. Although I've read that a headless service neither load-balances nor proxies requests to its backing pods, I've observed that the port-forward always connects to the same backing pod (which pod gets picked seems random).
After some investigation I've found that DNS lookups for a headless service's FQDN return the IPs of all backing pods in round-robin order. Regardless, re-running the port-forward against my headless service always connects to the same pod, and re-deploying the headless service does not change the selected pod either.
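For example, resolving the FQDN from another pod in the cluster returns one A record per backing pod (output abbreviated, IPs illustrative):
nslookup mongodb-headless.mongodb.svc.cluster.local
Name:    mongodb-headless.mongodb.svc.cluster.local
Address: 10.244.1.12
Name:    mongodb-headless.mongodb.svc.cluster.local
Address: 10.244.2.7
Name:    mongodb-headless.mongodb.svc.cluster.local
Address: 10.244.3.5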
Can anyone explain the behavior of port-forwarding to a headless service?
Why does port-forwarding to a headless service bind the pod's port to my localhost port at all?
Which backing pod (let's say 1 out of 3) is chosen for this binding?
Why does the headless service always stick to the same pod?
Thanks

Yes, kubectl port-forward always routes you to the same pod, as of November 2022.
This is simply due to how it's implemented under the hood.
It seems to just take the first available endpoint of the service.
Why does port-forwarding to a headless service bind the pod's port to my localhost port at all?
Why not? The original purpose of kubectl port-forward is to target a pod.
Then it was improved to also allow users to target a service - but first, it "translates" that service into a suitable pod.
This happens to always be the same pod at the moment.
Which backing pod (let's say 1 out of 3) is chosen for this binding?
I suspect it's the first one returned by kubectl get endpoints for that service.
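You can check the endpoint order yourself. Something like the following (names taken from the question, IPs illustrative) lists the endpoint IPs, and the first one should correspond to the pod the port-forward attaches to; running kubectl port-forward with higher verbosity (-v=6) should also reveal which pod the API request targets:
kubectl get endpoints mongodb-headless -n mongodb -o jsonpath='{.subsets[*].addresses[*].ip}'
10.244.1.12 10.244.2.7 10.244.3.5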
Why does the headless service always stick to the same pod?
Normal (non-headless) services behave exactly the same way, so this has nothing to do with the service being headless.
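If you need to reach one specific pod, the usual workaround is to port-forward to that pod directly instead of going through the service, e.g. (pod name illustrative):
kubectl port-forward pod/mongodb-1 27017:27017 -n mongodb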
Sources:
https://github.com/kubernetes/kubectl/issues/881
https://github.com/kubernetes/kubernetes/issues/15180#issuecomment-410798267

Related

Enable HTTPS for Akka-Discovery Endpoints while forming akka-cluster in kubernetes environment

I need to set up an Akka cluster (using Akka Classic) in Kubernetes using a DNS resolver. I've created a headless service which is able to resolve the addresses of the various pods of my Akka application.
After DNS resolution I'm able to get the addresses of the various pods. My Akka Management runs over HTTPS,
so when one pod tries connecting to the management endpoints of the other pods, it needs to use HTTPS instead of HTTP, but Akka uses HTTP by default. Is there a way to modify this behavior in Java?
Yes, there is: to enable HTTPS, you have to instantiate your server by providing an HttpsConnectionContext object to it.
You should probably do something like:
Http.get(system).newServerAt("localhost", 8080)
    .enableHttps(createHttpsContext(system))
    .bind(app.createRoute());
The previous example is taken from the official documentation, which also shows how the createHttpsContext(system) method works.
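As a rough sketch (not taken from the question's setup), createHttpsContext could look something like the following inside your server class, assuming a PKCS12 keystore named server.p12 on the classpath and a placeholder password; note that ConnectionContext.httpsServer requires Akka HTTP 10.2+:
import akka.actor.ActorSystem;
import akka.http.javadsl.ConnectionContext;
import akka.http.javadsl.HttpsConnectionContext;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import java.io.InputStream;
import java.security.KeyStore;
import java.security.SecureRandom;

private static HttpsConnectionContext createHttpsContext(ActorSystem system) {
    try {
        char[] password = "changeit".toCharArray(); // placeholder password
        KeyStore ks = KeyStore.getInstance("PKCS12");
        // "server.p12" is a placeholder keystore bundled on the classpath
        InputStream keystore = Thread.currentThread().getContextClassLoader().getResourceAsStream("server.p12");
        ks.load(keystore, password);

        KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance("SunX509");
        keyManagerFactory.init(ks, password);
        TrustManagerFactory tmf = TrustManagerFactory.getInstance("SunX509");
        tmf.init(ks);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(keyManagerFactory.getKeyManagers(), tmf.getTrustManagers(), new SecureRandom());
        return ConnectionContext.httpsServer(sslContext);
    } catch (Exception e) {
        throw new RuntimeException("Failed to build HTTPS context", e);
    }
}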

Can GCP's Cloud Run be used for non-HTTP services?

I'm new to GCP and trying to make heads or tails of it. So far, I've experimented with GKE and Cloud Run.
In GKE, I can create a Workload (deployment) for a service of any kind under any port I like and allocate resources to it. Then I can create a load balancer and open the ports from the pods to the Internet. The load balancer has an IP that I can use to access the underlying pods.
On the other hand, when I create a Cloud Run service, I give it a Docker image and a port, and once the service is up and running it exposes an HTTPS URL! The port that I specify in Cloud Run is the container's internal port, and if I want to access the URL I have to do that through port 80.
Does this mean that Cloud Run is designed only for HTTP services under port 80? Or maybe I'm missing something?
Technically "no", Cloud Run cannot be used for non-HTTP services. See Cloud Run's container runtime contract.
But also "sort of":
The URL of a Cloud Run service can be kept "private" (and it is by default): this means that nobody but specific identities is allowed to invoke the Cloud Run service (see this page to learn more).
The container must listen for requests on a certain port, and it is not allocated CPU outside of request processing. However, it is very easy to wrap your binary in a lightweight HTTP server. See for example the Shell sample, which uses a very small Go HTTP server to invoke an arbitrary shell script.
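The linked sample is in Go, but the wrapping idea is only a few lines in most languages. Here is a hedged Java sketch (the handler body is a placeholder for whatever non-HTTP work your binary does):
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class Wrapper {
    public static void main(String[] args) throws Exception {
        // Cloud Run injects the port to listen on via the PORT environment variable
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            // Do the real (non-HTTP) work here, e.g. shell out to a binary or call a library
            byte[] body = "done\n".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }
}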

standalone network endpoint group (NEG) on GKE not working

I am running a minimal stateful database service on GKE, on a single-node cluster. For now I've set up the database as a StatefulSet on a single pod. The database exposes a management console on a particular port, along with the mandatory database port. I am attempting to do two things:
Expose the management port over a global HTTP(S) load balancer.
Expose the database port outside of GKE so it can be consumed by the likes of Cloud Functions or App Engine applications.
My StatefulSet is running fine, and I can see from the container logs that the database has booted properly and is listening on the required ports.
I am attempting to set up a standalone NEG (ref: https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg) using a simple ClusterIP service.
The ClusterIP service comes up fine and I can see it using
kubectl get service service-name
but I don't see the NEG being created; the following command returns nothing:
$ gcloud compute network-endpoint-groups list
Listed 0 items.
My pod exposes port 8080, my service maps 51000 to 8080, and I have provided the NEG annotation:
cloud.google.com/neg: '{"exposed_ports": {"51000":{}}'
I don't see any errors as such, but neither do I see a NEG created or listed.
Any suggestions on how I would go about debugging this?
As a follow-up question:
When exposing the NEG over a global load balancer, how do I enforce authentication? I'm OK with either service account roles or OAuth/OpenID.
Would I be able to expose multiple ports using a single NEG? E.g. if I wanted to expose one port to my global load balancer and another to local services, is this possible with a single NEG, or should I expose each port using a dedicated ClusterIP service?
Where can I find documentation/specification for the Google Kubernetes annotations? I tried to expose two ports on the NEG using the following annotation syntax. Is that even supported/meaningful?
cloud.google.com/neg: '{"exposed_ports": {"51000":{},"51010":{}}'
Thanks in advance!
In order to create a service backed by a network endpoint group, you need to be working on a GKE cluster that is VPC-native:
https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg#before_you_begin
When you create a new cluster, this option is disabled by default and you must enable it upon creation. You can confirm whether your cluster is VPC-native by going to your cluster details in GKE. It should appear like this:
VPC-native (alias IP) Enabled
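If you prefer the command line, something along these lines should confirm it (cluster name and zone are placeholders):
gcloud container clusters describe my-cluster --zone us-central1-a --format="value(ipAllocationPolicy.useIpAliases)"
New clusters can be created as VPC-native with the --enable-ip-alias flag.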
If the cluster is not VPC-native, you won't be able to use this feature, as described in the restrictions:
https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#restrictions
If you do have VPC-native enabled, also make sure that the pods have the labels used by the Service's selector (e.g. "purpose:" and "topic:"), so that they are actually members of the service:
kubectl get pods --show-labels
You can also create multi-port services, as described in the Kubernetes documentation:
https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
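As an illustrative sketch (service name, ports and labels are placeholders), a multi-port ClusterIP service with the NEG annotation could look like this; note that the exposed_ports value must be well-formed JSON, with one entry per service port:
apiVersion: v1
kind: Service
metadata:
  name: service-name
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"51000": {}, "51010": {}}}'
spec:
  type: ClusterIP
  selector:
    app: my-database   # must match the pod labels
  ports:
  - name: mgmt
    port: 51000
    targetPort: 8080
  - name: db
    port: 51010
    targetPort: 8081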

Container instance network

I am having trouble connecting one ECS container (www, Python) to another container (redis).
I am getting a "connecting to 0.0.0.0:6379. Connection refused" error from the www container.
Both instances are running on the same host and were created using two task definitions each containing one docker image.
Both use Bridge networking mode. Each task is executed by means of a service.
I also did setup service discovery for both services.
Things I did and tried:
Ensured that Redis is bound to 0.0.0.0 and not 127.0.0.1.
Added port mappings for the www (80) and redis (6379) containers.
SSH'ed into the EC2 instance to confirm the port mappings are OK; I can telnet to both port 80 and 6379.
Connected to the www container and tested from the Python console whether 0.0.0.0:6379 was reachable. It wasn't. I also tried the Docker (redis) IP address 172.17.0.3 without luck, as well as the .local service discovery name of the redis container; the service discovery name did not resolve.
Resolved the service discovery name from the EC2 instance (using dig): that did work, but returned a 10.0.* address.
I am a bit out of options as to why this is the case. Obviously things do work on a local development machine.
Update 10/5: I changed the container networking to type "host", which appears to be working. I still don't understand why "bridge" won't work.

Pre-deploy development communication with an Internal Kubernetes service

I'm investigating a move to Kubernetes (coming from AWS ECS). But I haven't solved the local development issue when depending on internal services.
Let me elaborate:
When developing and testing microservices, before they are deployed as a Kubernetes Service, I want to be able to talk to other, internal Kubernetes Services. As there are more than 20 microservices, I have a Kubernetes cluster running the latest development versions. I can't run Minikube.
example:
I'm developing a user-service which needs access to the email service. The email service is already on Kubernetes and is an internal service.
So before the user-service is deployed, I want to be able to talk to the internal email service for dev/testing. I can't make use of Kubernetes' nice service-discovery environment variables.
As we already have a VPN up to restrict the DEV environment to testers/developers only, could I use this VPN to provide access to the Kubernetes Service IP addresses? The Kubernetes DEV environment is in the same VPC as the VPN.
If you deploy your internal services as type NodePort, then you can access them over your VPN via that nodePort. NodePorts can be dynamically allocated or you can customize them to be 'static' where they are known by you up front.
When developing an app on your local machine, you can access the dependent service by that NodePort.
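A minimal sketch of a 'static' NodePort (service name, labels, and ports are illustrative; the nodePort has to fall within the cluster's node-port range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  name: email-service
spec:
  type: NodePort
  selector:
    app: email-service
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080   # fixed node port, reachable over the VPN as <node-ip>:30080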
As an alternative, you can use port-forwarding with kubectl (https://kubernetes.io/docs/user-guide/connecting-to-applications-port-forward/) to forward a pod's port to your local machine. (Note: this handles traffic to a single pod only, not to a service.)
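For example (pod name is a placeholder):
kubectl port-forward <email-service-pod> 8080:8080
after which the service under development can reach it at localhost:8080.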
Telepresence (http://telepresence.io) is designed for this scenario, though it presumes developers have kubectl access to the staging/dev cluster.