RedHat OpenShift on AWS

I have installed OpenShift on AWS using the installer-provisioned infrastructure (IPI) tool. My containerized application is a VOIP server and it needs 4 public IP addresses, so that VOIP devices in the outside world can connect to those 4 public IP addresses using SIP/RTP protocol messages. How can I do that? I tried setting up my own VPC and then installing OpenShift, but OpenShift always installs the compute nodes on a private subnet. If I don't pass a private subnet in the install script, OpenShift won't start the installation process.
Can Multus CNI give me 4 public IP addresses for my container?
Thanks,
Prince

Set up your cluster with the standard IPI (installer-provisioned infrastructure) installation method using the openshift-installer.
Then, create a Service of type LoadBalancer for each of the IPs that you want: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#create-a-service-from-a-manifest
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  type: LoadBalancer
In your case you'll have to create 4 of these Services; each Service of type LoadBalancer gets its own load balancer and therefore its own external IP. I do not think you need Multus or similar to achieve this.
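For reference, a minimal sketch of what one of the four Services could look like, assuming SIP signalling on the standard UDP port 5060 and an app label of voip-server (both the port and the label are assumptions, not taken from the question); on AWS, UDP requires the NLB annotation discussed further below:
apiVersion: v1
kind: Service
metadata:
  name: sip-service-1 # repeat as sip-service-2..4 for the other IPs
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb # classic ELB has no UDP support
spec:
  selector:
    app: voip-server # assumed label on your VOIP pods
  ports:
    - protocol: UDP # SIP commonly runs over UDP
      port: 5060
      targetPort: 5060
  type: LoadBalancer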

Related

How does GCP internal load balancer + GKE service work? (It works, but I do not know why)

E.g. an Istio service:
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)
istio-ingressgateway   LoadBalancer   10.103.19.83   10.160.32.41   15021:30943/TCP,80:32609/TCP,443:30341/TCP,3306:30682/TCP,15443:30302/TCP
Which resulted in a TCP internal load balancer. The front end is ports 15021, 80, 443, 3306, and 15443.
The backend is basically the instance group of the cluster.
How does the load balancer know that port 443 at the front end will forward to port 30341 at the backend? As far as I know, a TCP load balancer just does port forwarding, so how and where does the magic happen?
The LoadBalancer Service type is an extension of the NodePort type, which is an extension of the ClusterIP type. A nodePort just opens up a port in the range 30000-32767 on each worker node and uses a label selector to identify which Pods to send the traffic to.
This means that internal clients call the Service by using the internal IP address of a node along with the TCP port specified by nodePort. The request is forwarded to one of the member Pods on the TCP port specified by the targetPort field.
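For instance, with the Service from the question, a client inside the VPC could bypass the load balancer entirely and reach the pods through any node directly; a sketch, where <node-internal-ip> is a hypothetical placeholder and 30341 is the nodePort that port 443 maps to above:
curl -k https://<node-internal-ip>:30341/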
Here’s an example
When a Service is created in Kubernetes, a corresponding Endpoints object is created along with it. This also applies to the LoadBalancer Service type.
If you create a simple nginx deployment e.g. by running:
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
and then expose it as a LoadBalancer service:
kubectl expose deployment nginx-deployment --name=lb-nginx --type=LoadBalancer --port=80
apart from the service itself, you will also see the lb-nginx Endpoints object. You can inspect its details:
kubectl get ep lb-nginx -o yaml
As you can see, it keeps track of all exposed pods (which are part of a Deployment in this case) so that the corresponding iptables rules, which are responsible for forwarding the traffic to a particular pod, can be kept up to date all the time, even if the number of pods or their IPs change.
You can e.g. scale your deployment to 5 replicas:
kubectl scale deployment nginx-deployment --replicas=5
and inspect the Endpoints object again:
kubectl get ep lb-nginx -o yaml
and you will see that right after your 5 pods are up and running it immediately gets updated as well.
As you can see in the subsets section of the yaml:
subsets:
- addresses:
  - ip: 10.12.0.3
    nodeName: gke-gke-default-pool-75259266-oauz
    targetRef:
      kind: Pod
      name: nginx-deployment-66b6c48dd5-dw9mt
      namespace: default
      resourceVersion: "22394113"
      uid: 8d7e1d3e-64e2-4891-b567-61ee48f61ed1
Apart from the IP address of the Pod, it maintains information about the node on which the Pod is running.
Let's go back for a moment to the Service:
kubectl get svc lb-nginx -o yaml
As you can see, a LoadBalancer Service, apart from its external IP address, has its ClusterIP like every other Service (well, almost every other, as headless Services don't have a ClusterIP):
spec:
  clusterIP: 10.16.6.236
  clusterIPs:
  - 10.16.6.236
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31935
    port: 80
    protocol: TCP
    targetPort: 80
So as you can imagine, this external IP is somehow mapped to the ClusterIP so that the traffic is routed further to the respective endpoints in the cluster. How exactly this mapping is done doesn't really matter, as it is handled by the cloud provider and such implementation details are not part of publicly shared knowledge. The only thing you need to know is that when your cloud provider provisions an external load balancer to satisfy your request defined in a Service of type LoadBalancer, apart from creating the load balancer itself it takes care of the mapping between the external IP (and the port assigned to it) and the Kubernetes Service, which has all the information needed to route the traffic further to the respective pods. If you wonder how exactly this binding between the external (or internal) load balancer and the Kubernetes LoadBalancer Service is done on the GCP side, I'm afraid such implementation details are not publicly revealed.
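If you want to see the cluster-side half of this machinery, kube-proxy programs it as iptables NAT rules on every node. A sketch (run on a node; the ClusterIP and nodePort values come from the example above, and chain contents vary with the kube-proxy mode):
# rules matching traffic destined for the Service's ClusterIP
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.16.6.236
# rules matching traffic arriving on the nodePort
sudo iptables -t nat -L KUBE-NODEPORTS -n | grep 31935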

Expose TCP/UDP port externally with ingress/egress through the same IP

I have a workload in a GKE cluster and I need to expose one port with both TCP and UDP protocols externally. The complication is that egress and ingress must go through the same external IP in order for the P2P protocol to work.
Currently my cluster is public and I use a trick with hostNetwork: true described here: https://stackoverflow.com/a/47887571/803403, but I am considering moving to a private cluster and using Cloud NAT. However, I did not find a way to expose that port in this case. I tried to expose it via ClusterIP, but in the firewall rules I could not map the external port to that ClusterIP port, since the latter does not have network tags. I am also not sure whether firewall rules can be applied to the Cloud Router that is bound to Cloud NAT.
Any ideas?
You are at a dead end! Today you expose your service through a public IP of one of your nodes. If you go private, you will no longer have a public IP, only private IPs. Thus, you need something that bridges the private world and the public internet: a load balancer.
However, multiprotocol on the same IP (here TCP and UDP) isn't natively supported by Google load balancers, so you can't use a load balancer here.
No luck...
Note: I know there are updates in progress on the Google Cloud internal network side, but that's all. I don't know exactly what they are, or whether a new type of load balancer will be released or not. Maybe... stay tuned, but it won't be in the next weeks.
You can:
- create a gcloud compute address
- create a LoadBalancer Service that listens on your TCP port(s)
- create a second LoadBalancer Service that listens on your UDP port(s)
- assign the gcloud compute IP address to both LoadBalancer Services using spec.loadBalancerIP
Make sure the IP and the GKE Services are in the same gcloud project and region.
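Step one might look like this (a sketch; my-shared-ip and the region are assumptions, use your own and match your cluster's region):
gcloud compute addresses create my-shared-ip --region us-central1
# note the reserved IP to put into spec.loadBalancerIP
gcloud compute addresses describe my-shared-ip --region us-central1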
apiVersion: v1
kind: Service
metadata:
  name: service-tcp
  labels:
    app: nginx
spec:
  ports:
  - protocol: TCP
    port: 80
  selector:
    app: nginx
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4
---
apiVersion: v1
kind: Service
metadata:
  name: service-udp
  labels:
    app: nginx
spec:
  ports:
  - protocol: UDP
    port: 80
  selector:
    app: nginx
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4
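Once both Services are provisioned, you can verify that they share the reserved IP (a sketch; the output values are illustrative):
kubectl get svc service-tcp service-udp
# NAME          TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
# service-tcp   LoadBalancer   10.0.12.34   1.2.3.4       80:31234/TCP
# service-udp   LoadBalancer   10.0.56.78   1.2.3.4       80:31235/UDP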

Dynamically create public IPs or subdomains for EKS pods

Complex AWS EKS / ENI / Route53 issue has us stumped. Need an expert.
Context:
We are working on dynamic game servers for a social platform (https://myxr.social) that transport game and video data using WebRTC / UDP SCTP/SRTP via https://MediaSoup.org
Each game server will have about 50 clients
Each client requires 2-4 UDP ports
Our working devops strategy:
https://github.com/xr3ngine/xr3ngine/tree/dev/packages/ops
We are provisioning these game servers using Kubernetes and https://agones.dev
Mediasoup requires each server connection to a client be assigned individual ports. Each client will need two ports, one for sending data and one for receiving data; with a target maximum of about 50 users per server, this requires 100 ports per server be publicly accessible.
We need some way to route this UDP traffic to the corresponding gameserver. Ingresses appear to primarily handle HTTP(S) traffic, and configuring our NGINX ingress controller to handle UDP traffic assumes that we know our gameserver Services ahead of time, which we do not since the gameservers are spun up and down as they are needed.
Questions:
We see two possible ways to solve this problem.
Path 1
Assign each game server in the node group public IPs and then allocate ports for each client. Either IPv4 or IPv6. This would require SSL termination for IP ports in AWS. Can we use ENI and EKS to dynamically create and provision IP ports for each gameserver w/ SSL? Essentially, expose these pods to the internet via a public subnet with each of them having its own IP address or subdomain. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html We have been referencing this documentation trying to figure out if this is possible.
Path 2
Create a subdomain (eg gameserver01.gs.xrengine.io, etc) dynamically for each gameserver w/ dynamic port allocation for each client (eg client 1 [30000-30004], etc). This seems to be limited by the ports accessible in the EKS fleet.
Are either of these approaches possible? Is one better? Can you give us some detail about how we should go about implementation?
The native way to receive UDP traffic on Amazon EKS is a Kubernetes Service of type LoadBalancer with an extra annotation to get an NLB (Network Load Balancer), since the classic ELB does not support UDP.
Example
apiVersion: v1
kind: Service
metadata:
  name: my-game-app-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  selector:
    app: my-game-app
  ports:
  - name: outgoing-port # choose your name
    protocol: UDP
    port: 9000 # choose your port
  - name: incoming-port # choose your name
    protocol: UDP
    port: 9001 # choose your port
  type: LoadBalancer
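Once it is provisioned, the NLB's DNS name, which your clients or a Route53 record would point at, can be read from the Service status; a sketch:
kubectl get svc my-game-app-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'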

How do pods communicate with other pods in a cluster created using Kubernetes

I have created 1 master node and 1 worker node with 2 pods using kops in AWS.
In one pod I have my Oracle database running and in the other pod I have deployed my Java web application, which needs to talk to the database pod.
To communicate between the 2 pods in the cluster, I have configured the database pod's IP address in my Java application. I am able to access the application using the cloud provider's public URL, and everything is good in the dev environment. But in a production environment, I cannot keep configuring the database pod's IP address in my Java application pod.
How do people solve this issue? Do you use pod IP addresses to communicate with other pods in Kubernetes, or is there another way for communication between pods?
Here is how my Pods look in the cloud:
NAME READY STATUS RESTARTS AGE IP NODE
csapp-8cd5d44556-7725f 1/1 Running 2 1d 100.96.1.54 ip-172-56-35-213.us-west-2.compute.internal
csdb-739d459467-92cmh 1/1 Running 0 1h 100.96.1.57 ip-172-27-86-213.us-west-2.compute.internal
Any help or directions on this issue would be helpful.
To make two pods communicate, you should use a Service resource of type ClusterIP (the default), since they are in the same cluster.
According to the output of kubectl get pods, you have two tiers:
App Tier: csapp-8cd5d44556-7725f
Data Tier: csdb-739d459467-92cmh
Below is an example of service resource for data tier, then how it is used inside App tier.
apiVersion: v1
kind: Service
metadata:
  name: example-data-tier
spec:
  selector:
    app: csdb # ⚠️ Make sure of this, it should select the csdb-... Pod
  ports:
  - name: redis
    protocol: TCP
    port: 6379
  # type defaults to ClusterIP, which is for internal traffic
And in the Pod of the app tier, you should feed the environment variable with the values Kubernetes injects for the Service above:
apiVersion: v1
kind: Pod
metadata:
  name: csapp # name added for completeness; use your own
spec:
  containers:
  - name: xx
    image: "xxx:v1"
    ports:
    - containerPort: 8080
      protocol: TCP
    env:
    - name: "REDIS_URL"
      value: "redis://$(EXAMPLE_DATA_TIER_SERVICE_HOST):$(EXAMPLE_DATA_TIER_SERVICE_PORT_REDIS)"
If your DB is something other than Redis, you need to consider that when applying this solution.
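Alternatively, assuming your cluster runs a DNS add-on such as kube-dns or CoreDNS (kops sets this up by default), you can skip the injected variables and address the Service by its DNS name, which also works regardless of creation order; a sketch:
env:
- name: "REDIS_URL"
  value: "redis://example-data-tier:6379" # Service name resolves via cluster DNS within the namespace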

I have set up a Kubernetes cluster on two EC2 instances with the dashboard, but I'm not able to access the Kubernetes dashboard UI in a browser

I have set up a Kubernetes (1.9) cluster on two EC2 servers (Ubuntu 16.04) and have installed the dashboard. The cluster is working fine and I get output when I do curl localhost:8001 on the master machine, but I'm not able to access the UI for the Kubernetes dashboard in my laptop's browser via masternode_public_ip:8001.
My security group contains my machine's IP.
Both the master and slave nodes are in Ready state.
I know there are a lot of other ways to deploy an application on a Kubernetes cluster, however I want to explore this particular option for POC purposes.
I need to access the dashboard of the Kubernetes UI and the nginx application which is deployed on this cluster.
So, my question: is there something else I need to add to my security group, or do I need to do some more things on my master machine?
Also, it would be great if someone could throw some light on private and public IPs, which one can be used to access the application, and how these are related.
This is an extensive topic, ranging from Kubernetes Services (NodePort or LoadBalancer for this case) to Ingress controllers and such. But there is a simple, quick and clean way to access your dashboard without all that.
Use either kubectl proxy or kubectl port-forward to access the dashboard via the embedded kube-apiserver proxy or by forwarding directly from localhost to the Pod itself.
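For example (a sketch; the kube-system namespace and object names assume the standard dashboard deployment of the Kubernetes 1.9 era, so adjust to whatever kubectl get pods -n kube-system shows on your cluster):
# option 1: proxy through the apiserver, then open the dashboard on localhost
kubectl proxy
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

# option 2: forward a local port straight to the dashboard pod
kubectl port-forward -n kube-system kubernetes-dashboard-<pod-suffix> 8443:8443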
Found out the answer.
Sorry for the delayed reply.
I was trying to access the web application through its container's port, but in Kubernetes there is a concept of NodePort: if your container is running at port 8080, the Service will expose it on a node port in the range 30000-32767.
All you need to do is add the details to your deployment file and expose the service:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world # added so the Service targets the pods; must match your pod labels
  ports:
  - port: 8080
    nodePort: 30001
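With that in place, and provided your security group allows inbound TCP on port 30001, the application becomes reachable on the node's public IP (a sketch; fill in the placeholder yourself):
curl http://<masternode_public_ip>:30001/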