How to create an Application Load Balancer to expose a Kubernetes cluster - amazon-web-services

Going through a few resources on the internet, I managed to create a Classic Load Balancer by setting the cloud-provider flags in my kube-apiserver, kubelet service, and kube-controller-manager, then created a cluster and deployed a sample nginx manifest. The application is exposed, but I see that a Classic Load Balancer was created, and what I wanted was an Application Load Balancer. Am I supposed to make any more changes? Also, when I deploy a Kibana Helm chart I do get a load balancer external IP, but when I access it I don't see any page.
NAME READY STATUS RESTARTS AGE
pod/elasticsearch-client-5df74c974d-dp6xw 1/1 Running 0 5h52m
pod/elasticsearch-data-0 1/1 Running 0 5h52m
pod/elasticsearch-master-0 1/1 Running 0 5h52m
pod/fluent-bit-h9kgm 1/1 Running 0 5h52m
pod/kibana-b9d8dc6d5-cbj8j 1/1 Running 0 7s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elasticsearch-client ClusterIP 10.100.13.46 <none> 9200/TCP 5h52m
service/elasticsearch-discovery ClusterIP None <none> 9300/TCP 5h52m
service/kibana LoadBalancer 10.100.14.245 adaec083b81644ecbb87d4d2ba0dc070-693460825.us-east-1.elb.amazonaws.com 443:32734/TCP 7s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/fluent-bit 1 1 1 1 1 <none> 5h52m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/elasticsearch-client 1/1 1 1 5h52m
deployment.apps/kibana 1/1 1 1 7s
NAME DESIRED CURRENT READY AGE
replicaset.apps/elasticsearch-client-5df74c974d 1 1 1 5h52m
replicaset.apps/kibana-b9d8dc6d5 1 1 1 7s
NAME READY AGE
statefulset.apps/elasticsearch-data 1/1 5h52m
statefulset.apps/elasticsearch-master 1/1 5h52m
As you can see above, I'm able to get a LoadBalancer external IP, but I don't see anything when I open that link.
Also, my requirement was to deploy an Application Load Balancer; after that I would deploy an ingress controller Helm chart, and in the ingress resources I would specify the paths and ports.

From the docs, as of now only ELB and NLB are supported load balancer types for AWS.
Edit:
Using a LoadBalancer type service you can have a single NLB/ELB for the nginx ingress controller and use it for as many ingress resources as you want, routing traffic to backend ClusterIP type services.
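For illustration, a minimal Ingress resource that routes a path to a backend ClusterIP service might look like the sketch below (the host, service name, and port are assumptions for the example, not taken from the cluster above; older clusters may need apiVersion: networking.k8s.io/v1beta1 and its backend syntax):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: kibana.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana          # assumed backend ClusterIP service
            port:
              number: 5601        # assumed service port
The nginx ingress controller behind the single NLB/ELB then forwards matching requests to that ClusterIP service.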
But if you want to use an ALB, you have to create it manually (following the AWS docs) and configure it to forward traffic to the NodePort on your Kubernetes nodes where the nginx ingress controller is running. Creating a LoadBalancer type service will not work in this case; you will have to create a NodePort service for the nginx ingress controller, as sketched below.
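A minimal sketch of such a NodePort service for the nginx ingress controller (the namespace, labels, and nodePort values are assumptions; most ingress-nginx Helm charts can create an equivalent service for you):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080   # assumed; any free port in the 30000-32767 range
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443   # assumed
You would then point the manually created ALB's target groups at these node ports on your worker nodes.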

Related

Unable to produce/consume to Kafka on Kubernetes when using loadBalancer Service

Background
I am running Kafka on Kubernetes using the Confluent open-source Helm charts. I already have an EKS cluster running with managed node groups.
When I expose the brokers using NodePort it works fine. However, I want to enable a load balancer; I am able to enable it, and a service is created per broker pod (an internal Network Load Balancer is enabled). All our producers are in AWS.
$ kubectl get svc -n kafka
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kafka-0-external LoadBalancer 10.100.185.40 ac5b7fccb69bc4738b2e498995e65de2-9d6b81206f5d1d7d.elb.us-east-2.amazonaws.com 31090:30366/TCP 10m
kafka-1-external LoadBalancer 10.100.192.249 ae035d93de7874c49bc2402d5c174403-65cdb5cda161fa89.elb.us-east-2.amazonaws.com 31090:31063/TCP 10m
kafka-2-external LoadBalancer 10.100.80.80 a36dc44c757f4429b81163ab651a7012-e94e40584210b988.elb.us-east-2.amazonaws.com 31090:32700/TCP 10m
kafka-cp-kafka ClusterIP 10.100.163.158 <none> 9092/TCP 10m
kafka-cp-kafka-connect ClusterIP 10.100.139.66 <none> 8083/TCP 10m
kafka-cp-kafka-headless ClusterIP None <none> 9092/TCP 10m
kafka-cp-kafka-rest ClusterIP 10.100.146.106 <none> 8082/TCP 10m
kafka-cp-schema-registry ClusterIP 10.100.103.114 <none> 8081/TCP 10m
kafka-cp-zookeeper NodePort 10.100.22.195 <none> 2181:32724/TCP 10m
kafka-cp-zookeeper-headless ClusterIP None <none> 2888/TCP,3888/TCP 10m
Now I want to test by producing and consuming. I started a new EC2 instance in the same VPC. I can get metadata, but I cannot produce or consume.
ubuntu@ip-192-168-87-196:~/kafka_2.11-2.3.1/bin$ kafkacat -b ae035d93de7874c49bc2402d5c174403-65cdb5cda161fa89.elb.us-east-2.amazonaws.com:31090 -L
Metadata for all topics (from broker -1: ae035d93de7874c49bc2402d5c174403-65cdb5cda161fa89.elb.us-east-2.amazonaws.com:31090/bootstrap):
3 brokers:
broker 0 at kafka-cp-kafka-0.kafka-cp-kafka-headless.kafka.svc.cluster.local:31090
broker 2 at kafka-cp-kafka-2.kafka-cp-kafka-headless.kafka.svc.cluster.local:31090
broker 1 at kafka-cp-kafka-1.kafka-cp-kafka-headless.kafka.svc.cluster.local:31090
8 topics:
topic "test" with 25 partitions:
partition 0, leader 1, replicas: 1,2,0, isrs: 1,0,2
partition 5, leader 0, replicas: 0,2,1, isrs: 1,0,2
partition 10, leader 2, replicas: 2,1,0, isrs: 1,0,2
When I try to produce, I get this error:
ubuntu@ip-192-168-87-196:~/kafka_2.11-2.3.1/bin$ kafkacat -b ae035d93de7874c49bc2402d5c174403-65cdb5cda161fa89.elb.us-east-2.amazonaws.com:31090 -C -t test
% ERROR: Local: Host resolution failure: kafka-cp-kafka-0.kafka-cp-kafka-headless.kafka.svc.cluster.local:31090/0: Failed to resolve 'kafka-cp-kafka-0.kafka-cp-kafka-headless.kafka.svc.cluster.local:31090': Temporary failure in name resolution
% ERROR: Local: Host resolution failure: kafka-cp-kafka-2.kafka-cp-kafka-headless.kafka.svc.cluster.local:31090/2: Failed to resolve 'kafka-cp-kafka-2.kafka-cp-kafka-headless.kafka.svc.cluster.local:31090': Temporary failure in name resolution
% ERROR: Local: Host resolution failure: kafka-cp-kafka-1.kafka-cp-kafka-headless.kafka.svc.cluster.local:31090/1: Failed to resolve 'kafka-cp-kafka-1.kafka-cp-kafka-headless.kafka.svc.cluster.local:31090': Temporary failure in name resolution
These are my listeners
$ kubectl logs kafka-cp-kafka-2 -n kafka -c cp-kafka-broker | grep -i listeners
+ export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-cp-kafka-2.kafka-cp-kafka-headless.kafka:9092,EXTERNAL://:31090
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-cp-kafka-2.kafka-cp-kafka-headless.kafka:9092,EXTERNAL://:31090
advertised.listeners = PLAINTEXT://kafka-cp-kafka-2.kafka-cp-kafka-headless.kafka:9092,EXTERNAL://:31090
listeners = PLAINTEXT://0.0.0.0:9092,EXTERNAL://0.0.0.0:31090
advertised.listeners = PLAINTEXT://kafka-cp-kafka-2.kafka-cp-kafka-headless.kafka:9092,EXTERNAL://:31090
listeners = PLAINTEXT://0.0.0.0:9092,EXTERNAL://0.0.0.0:31090
advertised.listeners = PLAINTEXT://kafka-cp-kafka-2.kafka-cp-kafka-headless.kafka:9092,EXTERNAL://:31090
listeners = PLAINTEXT://0.0.0.0:9092,EXTERNAL://0.0.0.0:31090
I have been trying for a few days now and would like some guidance. Let me know if anyone has anything to share; what am I missing?
If you are running a Kafka client outside the k8s cluster, you have to use an external IP or hostname that is visible outside the cluster in KAFKA_ADVERTISED_LISTENERS:
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-cp-kafka-0.kafka-cp-kafka-headless.kafka:9092,EXTERNAL://ac5b7fccb69bc4738b2e498995e65de2-9d6b81206f5d1d7d.elb.us-east-2.amazonaws.com:30366
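Each broker has to advertise its own externally resolvable address on the EXTERNAL listener. Following the same pattern for the other two brokers, with the NLB hostnames and node ports taken from the service list above (a sketch; the exact port depends on how your NLB listeners are configured):
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-cp-kafka-1.kafka-cp-kafka-headless.kafka:9092,EXTERNAL://ae035d93de7874c49bc2402d5c174403-65cdb5cda161fa89.elb.us-east-2.amazonaws.com:31063
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-cp-kafka-2.kafka-cp-kafka-headless.kafka:9092,EXTERNAL://a36dc44c757f4429b81163ab651a7012-e94e40584210b988.elb.us-east-2.amazonaws.com:32700
With that in place, kafkacat is handed broker addresses it can resolve from outside the cluster instead of the *.svc.cluster.local names.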

How do I connect AWS RDS to a MySQL Kubernetes Pod

I have launched WordPress by following the documentation at https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/, but I see that MySQL is running as a pod. My requirement is to connect the running MySQL pod to AWS RDS so that I can dump my existing data into it. Please guide me.
pod/wordpress-5f444c8849-2rsfd 1/1 Running 0 27m
pod/wordpress-mysql-ccc857f6c-7hj9m 1/1 Running 0 27m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 29m
service/wordpress LoadBalancer 10.100.148.152 a4a868cfc752f41fdb4397e3133c7001-1148081355.us-east-1.elb.amazonaws.com 80:32116/TCP 27m
service/wordpress-mysql ClusterIP None <none> 3306/TCP 27m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/wordpress 1/1 1 1 27m
deployment.apps/wordpress-mysql 1/1 1 1 27m
NAME DESIRED CURRENT READY AGE
replicaset.apps/wordpress-5f444c8849 1 1 1 27m
replicaset.apps/wordpress-mysql-ccc857f6c 1 1 1 27m
Once you have MySQL running on K8s as a ClusterIP service, it is accessible only inside the cluster via its own service name:
wordpress-mysql:3306
To double-check your database, you can recreate the service as a NodePort; then you would be able to connect from outside with a SQL administration tool like MySQL Workbench and administer it.
here is an example: https://www.youtube.com/watch?v=s0uIvplOqJM
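A minimal sketch of such a NodePort service for the MySQL pod (the nodePort value is an assumption; the selector mirrors the labels used in the WordPress tutorial's MySQL deployment):
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql-nodeport
spec:
  type: NodePort
  selector:
    app: wordpress
    tier: mysql
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30306   # assumed; any free port in the 30000-32767 range
With that, Workbench (or the mysql client) can reach the database at <node-public-ip>:30306, provided the node's security group allows it, and you can dump the data and import it into RDS.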

How to work with AWS cloud controller manager

I am trying to expose the applications running in my Kubernetes cluster through an AWS load balancer.
I followed the document https://cloudyuga.guru/blog/cloud-controller-manager and got to the point where I added --cloud-provider=external in the kubeadm.conf file.
But this document is based on DigitalOcean, and I'm working on AWS. I'm confused whether I have to run any deployment.yaml file to get the pods that are in Pending status running; if so, please provide me the link. I'm stuck at this point.
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-dlx76 0/1 Pending 0 3m32s
coredns-66bff467f8-svb6z 0/1 Pending 0 3m32s
etcd-ip-172-31-74-144.ec2.internal 1/1 Running 0 3m38s
kube-apiserver-ip-172-31-74-144.ec2.internal 1/1 Running 0 3m38s
kube-controller-manager-ip-172-31-74-144.ec2.internal 1/1 Running 0 3m37s
kube-proxy-rh8g4 1/1 Running 0 3m32s
kube-proxy-vsvlt 1/1 Running 0 3m28s
kube-scheduler-ip-172-31-74-144.ec2.internal 1/1 Running 0 3m37s
The coredns pods are Pending because you have not installed a Pod Network add-on yet. From the docs here you can choose any supported Pod Network add-on. For example, to use Calico:
kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
After the Pod Network add-on is installed the coredns pods should come up.
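You can watch the kube-system namespace until the coredns pods report Running (plain kubectl usage, shown only for verification):
kubectl get pods -n kube-system -w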

Kiali showing unknown traffic when sending through Ambassador

I have installed a service mesh (Istio) and am working with Ambassador to route traffic to our application. Whenever I send traffic through the Istio ingress it works fine and also works with Ambassador, but when sending through Ambassador it is shown as unknown, as you can see in the attached image. It could be related to the fact that Ambassador does not use an Istio sidecar.
Code used to deploy the Ambassador service:
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - name: ambassador-http
    port: 80
    targetPort: 8080
  selector:
    service: ambassador
---
Is there anything I can add here to make it possible?
Thanks
Yes, it is possible, and here is a detailed guide for this from the Ambassador documentation:
Getting Ambassador Working With Istio
Getting Ambassador working with Istio is straightforward. In this example, we'll use the bookinfo sample application from Istio.
Install Istio on Kubernetes, following the default instructions (without using mutual TLS auth between sidecars)
Next, install the Bookinfo sample application, following the instructions.
Verify that the sample application is working as expected.
By default, the Bookinfo application uses the Istio ingress. To use Ambassador, we need to:
Install Ambassador.
First you will need to deploy the Ambassador ambassador-admin service to your cluster:
It's simplest to use the YAML files we have online for this (though of course you can download them and use them locally if you prefer!).
First, you need to check if Kubernetes has RBAC enabled:
kubectl cluster-info dump --namespace kube-system | grep authorization-mode
If you see something like --authorization-mode=Node,RBAC in the output, then RBAC is enabled.
If RBAC is enabled, you'll need to use:
kubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml
Without RBAC, you can use:
kubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-no-rbac.yaml
(Note that if you are planning to use mutual TLS for communication between Ambassador and Istio/services in the future, then the order in which you deploy the ambassador-admin service and the ambassador LoadBalancer service below may need to be swapped)
Next you will deploy an ambassador service that acts as a point of ingress into the cluster via the LoadBalancer type. Create the following YAML and put it in a file called ambassador-service.yaml.
---
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: httpbin
spec:
  prefix: /httpbin/
  service: httpbin.org
  host_rewrite: httpbin.org
Then, apply it to Kubernetes with kubectl:
kubectl apply -f ambassador-service.yaml
The YAML above does several things:
It creates a Kubernetes service for Ambassador, of type LoadBalancer. Note that if you're not deploying in an environment where LoadBalancer is a supported type (i.e. MiniKube), you'll need to change this to a different type of service, e.g., NodePort.
It creates a test route that will route traffic from /httpbin/ to the public httpbin.org HTTP Request and Response service (which provides useful endpoints that can be used for diagnostic purposes). In Ambassador, Mapping resources (as shown above) are used for configuration. More commonly, you'll want to configure routes as part of your service deployment process, as shown in this more advanced example.
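Note that the YAML above only defines the Mapping; the LoadBalancer Service it routes through is the ambassador Service (the one from the question, or an equivalent). As a sketch, assuming the same labels and ports as in the question, ambassador-service.yaml could also include:
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer
  ports:
  - name: ambassador-http
    port: 80
    targetPort: 8080
  selector:
    service: ambassador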
You can see if the two Ambassador services are running correctly (and also obtain the LoadBalancer IP address when this is assigned after a few minutes) by executing the following commands:
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ambassador LoadBalancer 10.63.247.1 35.224.41.XX 8080:32171/TCP 11m
ambassador-admin NodePort 10.63.250.17 <none> 8877:32107/TCP 12m
details ClusterIP 10.63.241.224 <none> 9080/TCP 16m
kubernetes ClusterIP 10.63.240.1 <none> 443/TCP 24m
productpage ClusterIP 10.63.248.184 <none> 9080/TCP 16m
ratings ClusterIP 10.63.255.72 <none> 9080/TCP 16m
reviews ClusterIP 10.63.252.192 <none> 9080/TCP 16m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ambassador-2680035017-092rk 2/2 Running 0 13m
ambassador-2680035017-9mr97 2/2 Running 0 13m
ambassador-2680035017-thcpr 2/2 Running 0 13m
details-v1-3842766915-3bjwx 2/2 Running 0 17m
productpage-v1-449428215-dwf44 2/2 Running 0 16m
ratings-v1-555398331-80zts 2/2 Running 0 17m
reviews-v1-217127373-s3d91 2/2 Running 0 17m
reviews-v2-2104781143-2nxqf 2/2 Running 0 16m
reviews-v3-3240307257-xl1l6 2/2 Running 0 16m
Above we see that external IP assigned to our LoadBalancer is 35.224.41.XX (XX is used to mask the actual value), and that all ambassador pods are running (Ambassador relies on Kubernetes to provide high availability, and so there should be two small pods running on each node within the cluster).
You can test if Ambassador has been installed correctly by using the test route to httpbin.org to get the external cluster Origin IP from which the request was made:
$ curl 35.224.41.XX/httpbin/ip
{
"origin": "35.192.109.XX"
}
If you're seeing a similar response, then everything is working great!
(Bonus: if you want to use a little bit of awk magic to export the LoadBalancer IP to a variable AMBASSADOR_IP, you can type export AMBASSADOR_IP=$(kubectl get services ambassador | tail -1 | awk '{ print $4 }') and then use curl $AMBASSADOR_IP/httpbin/ip.)
Now you are going to modify the bookinfo demo bookinfo.yaml manifest to include the necessary Ambassador annotations. See below.
---
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: productpage
spec:
  prefix: /productpage/
  rewrite: /productpage
  service: productpage:9080
---
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
The Mapping above implements an Ambassador mapping from the '/productpage/' URI to the Kubernetes productpage service running on port 9080 ('productpage:9080'). The 'prefix' mapping URI is taken relative to the root of your Ambassador service that is acting as the ingress point (exposed externally via port 80 because it is a LoadBalancer), e.g. '35.224.41.XX/productpage/'.
You can now apply this manifest from the root of the Istio GitHub repo on your local file system (taking care to wrap the apply with istioctl kube-inject):
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml)
Optionally, delete the Ingress controller from the bookinfo.yaml manifest by typing kubectl delete ingress gateway.
Test Ambassador by going to the IP of the Ambassador LoadBalancer you configured above e.g. 35.192.109.XX/productpage/. You can see the actual IP address again for Ambassador by typing kubectl get services ambassador.
Also, according to the documentation, there is no need for the Ambassador pods to be injected.
Yes, I have already configured all these things; that's why I mentioned it in the attached image, which I took from the Kiali dashboard. The output I shared is of the bookinfo application. I have deployed my own application as well and it is also working fine.
But I want to sort out this unknown thing.
I am using the AWS EKS cluster.
A note about Ambassador:
Ambassador should not have the Istio sidecar for two reasons. First, it cannot since running the two separate Envoy instances will result in a conflict over their shared memory segment. The second is Ambassador should not be in your mesh anyway. The mesh is great for handling traffic routing from service to service, but since Ambassador is your ingress point, it should be solely in charge of deciding which service to route to and how to do it. Having both Ambassador and Istio try to set routing rules would be a headache and wouldn't make much sense.
All the traffic coming from a source that is not part of the service mesh is going to be shown as unknown.
See what kiali says about the unknowns:
https://kiali.io/faq/graph/#many-unknown

Kubernetes guestbook example on AWS

I'm trying to run through the Kubernetes guestbook example on AWS. I created the master and 4 nodes with the kube-up.sh script and am trying to get the frontend exposed via a load balancer.
Here are the pods
root@ip-172-20-0-9:~/kubernetes# kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-2q0at 1/1 Running 0 5m
frontend-5hmxq 1/1 Running 0 5m
frontend-s7i0r 1/1 Running 0 5m
redis-master-y6160 1/1 Running 0 53m
redis-slave-49gya 1/1 Running 0 24m
redis-slave-85u1r 1/1 Running 0 24m
Here are the services
root@ip-172-20-0-9:~/kubernetes# kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.0.0.1 <none> 443/TCP <none> 1h
redis-master 10.0.90.210 <none> 6379/TCP name=redis-master 37m
redis-slave 10.0.205.92 <none> 6379/TCP name=redis-slave 24m
I edited the YAML for the frontend service to try to add a load balancer, but it's not showing up.
root@ip-172-20-0-9:~/kubernetes# cat examples/guestbook/frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 80
  selector:
    name: frontend
Here are the commands I ran:
root@ip-172-20-0-9:~/kubernetes# kubectl create -f examples/guestbook/frontend-controller.yaml
replicationcontroller "frontend" created
root@ip-172-20-0-9:~/kubernetes# kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.0.0.1 <none> 443/TCP <none> 1h
redis-master 10.0.90.210 <none> 6379/TCP name=redis-master 39m
redis-slave 10.0.205.92 <none> 6379/TCP name=redis-slave 26m
If I remove the LoadBalancer type it loads up, but with no external IP.
It looks like the external IP might only be shown on Google's platform. On AWS it creates an ELB and doesn't show the ELB's external IP.
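On AWS the ELB's DNS name is attached to the service status rather than shown as an IP; one way to see it (assuming the frontend service exists) is:
kubectl describe service frontend
and look at the LoadBalancer Ingress field, which should contain the ELB's DNS name.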