How do I connect a MySQL Kubernetes Pod to AWS RDS?

So I have launched WordPress by following the documentation at https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/, but I see that MySQL is running as a pod; my requirement is to connect it to AWS RDS so that I can dump my existing data into it. Please guide me.
NAME                                  READY   STATUS    RESTARTS   AGE
pod/wordpress-5f444c8849-2rsfd        1/1     Running   0          27m
pod/wordpress-mysql-ccc857f6c-7hj9m   1/1     Running   0          27m

NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)        AGE
service/kubernetes        ClusterIP      10.100.0.1       <none>                                                                     443/TCP        29m
service/wordpress         LoadBalancer   10.100.148.152   a4a868cfc752f41fdb4397e3133c7001-1148081355.us-east-1.elb.amazonaws.com    80:32116/TCP   27m
service/wordpress-mysql   ClusterIP      None             <none>                                                                     3306/TCP       27m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/wordpress         1/1     1            1           27m
deployment.apps/wordpress-mysql   1/1     1            1           27m

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/wordpress-5f444c8849        1         1         1       27m
replicaset.apps/wordpress-mysql-ccc857f6c   1         1         1       27m

Once you have MySQL running on K8s behind a ClusterIP service, it is accessible only inside the cluster via its own DNS name:
wordpress-mysql:3306
You can double-check your database by recreating the service as a NodePort; then you would be able to connect with a SQL administration tool like MySQL Workbench and administer it.
here is an example: https://www.youtube.com/watch?v=s0uIvplOqJM
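As for the actual RDS requirement, here is a minimal sketch of the dump-and-repoint approach, assuming an RDS MySQL instance already exists; the endpoint mydb.abc123.us-east-1.rds.amazonaws.com, the admin user, and the wordpress database name are placeholders for your own values, and MYSQL_ROOT_PASSWORD is the variable set by the tutorial's manifests:

# 1. Dump the existing data out of the in-cluster MySQL pod
kubectl exec wordpress-mysql-ccc857f6c-7hj9m -- \
  sh -c 'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" wordpress' > wordpress.sql

# 2. Load the dump into RDS (port 3306 must be reachable from where you run this)
mysql -h mydb.abc123.us-east-1.rds.amazonaws.com -u admin -p wordpress < wordpress.sql

# 3. Repoint the in-cluster DNS name at RDS with an ExternalName service,
#    so WordPress keeps using wordpress-mysql as its DB host.
#    (Delete the old headless service first: kubectl delete svc wordpress-mysql)
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
spec:
  type: ExternalName
  externalName: mydb.abc123.us-east-1.rds.amazonaws.com   # placeholder endpoint

Remember to allow inbound 3306 from the worker nodes' security group on the RDS instance, or the pods will time out connecting.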

Related

In AWS EKS, how to install and access etcd, kube-apiserver, and other things?

I am learning AWS EKS now and I want to know how to access etcd, kube-apiserver, and other control-plane components.
For example, when we run the command below in minikube, we can find etcd-minikube and kube-apiserver-minikube:
[vagrant@localhost ~]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-lrt6z           1/1     Running   0          176d
kube-system   coredns-6955765f44-xbtc2           1/1     Running   1          176d
kube-system   etcd-minikube                      1/1     Running   1          176d
kube-system   kube-addon-manager-minikube        1/1     Running   1          176d
kube-system   kube-apiserver-minikube            1/1     Running   1          176d
kube-system   kube-controller-manager-minikube   1/1     Running   1          176d
kube-system   kube-proxy-69mqp                   1/1     Running   1          176d
kube-system   kube-scheduler-minikube            1/1     Running   1          176d
kube-system   storage-provisioner                1/1     Running   2          176d
And then we can access them with the command below:
[vagrant@localhost ~]$ kubectl exec -it -n kube-system kube-apiserver-minikube -- /bin/sh
# kube-apiserver
W0715 13:56:17.176154 21 services.go:37] No CIDR for service cluster IPs specified.
...
My question: I want to do something like the above example in AWS EKS, but I cannot find kube-apiserver:
xiaojie@ubuntu:~/environment/calico_resources$ kubectl get pods --all-namespaces
NAMESPACE     NAME                      READY   STATUS    RESTARTS   AGE
kube-system   aws-node-flv95            1/1     Running   0          23h
kube-system   aws-node-kpkv9            1/1     Running   0          23h
kube-system   aws-node-rxztq            1/1     Running   0          23h
kube-system   coredns-cdd78ff87-bjnmg   1/1     Running   0          23h
kube-system   coredns-cdd78ff87-f7rl4   1/1     Running   0          23h
kube-system   kube-proxy-5wv5m          1/1     Running   0          23h
kube-system   kube-proxy-6846w          1/1     Running   0          23h
kube-system   kube-proxy-9rbk4          1/1     Running   0          23h
AWS EKS is a managed Kubernetes offering. Kubernetes control-plane components such as the API server and etcd are installed, managed, and upgraded by AWS. Hence you can neither see these components nor exec into them.
In AWS EKS you can only play with the worker nodes.
You manage what is on the left (the worker nodes); AWS manages what is on the right (the control plane).
EKS is not a managed service for the whole Kubernetes cluster.
EKS is a managed service only for the Kubernetes master nodes.
That's why it's worth operating EKS with tools (e.g., Terraform) that help provision the whole cluster in no time, as explained here.
As Arghya Sadhu and Abdennour TOUMI said, EKS encapsulates most control-plane components; only kube-proxy remains visible. See here.
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to stand up or maintain your own Kubernetes control plane.
So, I tried to find a way to configure these components instead of exec'ing into their containers and entering commands, but I finally gave up. See this GitHub issue.
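Although you cannot exec into these components, EKS can ship their logs to CloudWatch, which is the closest substitute for inspecting them directly. A sketch with the AWS CLI, where the region and cluster name are placeholders for your own:

aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'

The api, audit, authenticator, controllerManager, and scheduler log streams then appear under the CloudWatch log group /aws/eks/my-cluster/cluster.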

How to create application load balancer to expose kubernetes cluster

Going through a few resources on the internet, I managed to create a classic load balancer by setting the flags in my kube-apiserver, kubelet.service, and kube-controller-manager; I created a cluster and deployed a sample nginx file, and it exposed the application. But I see that it created a Classic Load Balancer, and what I wanted was an Application Load Balancer. Am I supposed to make any more changes? Also, when I deploy a Kibana Helm chart I do get a load balancer external IP, but when I access it I don't see any page.
NAME                                        READY   STATUS    RESTARTS   AGE
pod/elasticsearch-client-5df74c974d-dp6xw   1/1     Running   0          5h52m
pod/elasticsearch-data-0                    1/1     Running   0          5h52m
pod/elasticsearch-master-0                  1/1     Running   0          5h52m
pod/fluent-bit-h9kgm                        1/1     Running   0          5h52m
pod/kibana-b9d8dc6d5-cbj8j                  1/1     Running   0          7s

NAME                              TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)         AGE
service/elasticsearch-client      ClusterIP      10.100.13.46    <none>                                                                    9200/TCP        5h52m
service/elasticsearch-discovery   ClusterIP      None            <none>                                                                    9300/TCP        5h52m
service/kibana                    LoadBalancer   10.100.14.245   adaec083b81644ecbb87d4d2ba0dc070-693460825.us-east-1.elb.amazonaws.com   443:32734/TCP   7s

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/fluent-bit    1         1         1       1            1           <none>          5h52m

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/elasticsearch-client   1/1     1            1           5h52m
deployment.apps/kibana                 1/1     1            1           7s

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/elasticsearch-client-5df74c974d   1         1         1       5h52m
replicaset.apps/kibana-b9d8dc6d5                  1         1         1       7s

NAME                                    READY   AGE
statefulset.apps/elasticsearch-data     1/1     5h52m
statefulset.apps/elasticsearch-master   1/1     5h52m
As you can see above, I'm able to get a LoadBalancer EXTERNAL-IP, but I don't see anything when I open that link.
Also, my requirement was to deploy an Application Load Balancer; after that I would deploy an Ingress Helm chart, and in the Ingress resources I would specify the paths and ports.
From the docs, as of now only ELB (Classic) and NLB are supported LoadBalancer types for AWS.
Edit:
Using a LoadBalancer type service, you can have a single NLB/ELB in front of the nginx ingress controller and use it for as many Ingress resources as you want, routing traffic to backend ClusterIP type services.
But if you want to use an ALB, you have to create it manually (following the AWS docs) and configure it to forward traffic to the NodePort on your Kubernetes nodes where the nginx ingress controller is running. Creating a LoadBalancer type service will not work in this case; you will have to create a NodePort service for the nginx ingress controller, as sketched below.
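A minimal sketch of that NodePort service, assuming the ingress controller pods carry the label app.kubernetes.io/name: ingress-nginx in the ingress-nginx namespace (adjust names and labels to whatever your chart actually sets):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-nodeport
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080   # register this port in the ALB target group
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443

The manually created ALB then gets a target group containing the worker nodes on ports 30080/30443, and your Ingress resources route traffic from there.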

Not able to communicate with Pods locally created in AWS EC2 instance with kubernetes

I have created a simple nginx deployment on an Ubuntu EC2 instance and exposed it on a port through a service in the Kubernetes cluster, but I am unable to ping the pods even in the local environment. My pods are running fine and the service was also created successfully. I am sharing the output of some commands below.
kubectl get nodes
NAME               STATUS   ROLES    AGE     VERSION
ip-172-31-39-226   Ready    <none>   2d19h   v1.16.1
master-node        Ready    master   2d20h   v1.16.1

kubectl get po -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP              NODE               NOMINATED NODE   READINESS GATES
nginx-deployment-54f57cf6bf-dqt5v   1/1     Running   0          101m   192.168.39.17   ip-172-31-39-226   <none>           <none>
nginx-deployment-54f57cf6bf-gh4fz   1/1     Running   0          101m   192.168.39.16   ip-172-31-39-226   <none>           <none>
sample-nginx-857ffdb4f4-2rcvt       1/1     Running   0          20m    192.168.39.18   ip-172-31-39-226   <none>           <none>
sample-nginx-857ffdb4f4-tjh82       1/1     Running   0          20m    192.168.39.19   ip-172-31-39-226   <none>           <none>

kubectl get svc
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP      10.96.0.1       <none>        443/TCP        2d20h
nginx-deployment   NodePort       10.101.133.21   <none>        80:31165/TCP   50m
sample-nginx       LoadBalancer   10.100.77.31    <pending>     80:31854/TCP   19m
kubectl describe deployment nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Mon, 14 Oct 2019 06:28:13 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},"spec":{"replica...
Selector:               app=nginx
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.7.9
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-54f57cf6bf (2/2 replicas created)
Events:          <none>
Now I am unable to ping 192.168.39.17/16/18/19 from the master, and I am also not able to curl 172.31.39.226:31165/31854 from the master. Any help will be highly appreciated.
From the information you have provided, and from the discussion we had: the worker node is running the nginx pods, and you have attached a NodePort service and a LoadBalancer service to them.
The only thing missing here is the machine from which you are trying to access this.
So, I tried to reach the URL 52.201.242.84:31165. I think all you need to do is whitelist this port (or the IP) for public access. This can be done via the security group of the worker node's EC2 instance.
The URL above is constructed from the public IP of the worker node plus (+) the NodePort of the attached service. Thus, here is a simple formula you can use to get the exact address of the running pod:
Pod Access URL = Public IP of Worker Node + The NodePort
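A sketch of that whitelisting with the AWS CLI, where sg-0123456789abcdef0 stands in for the worker node's actual security group ID:

# open the NodePort on the worker node's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31165 \
  --cidr 0.0.0.0/0

# then, from any machine outside the cluster:
curl http://52.201.242.84:31165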

Display Prometheus interface

I have a problem.
When I execute the command
kubectl port-forward -n monitoring prometheus-prometheus-operator-prometheus-0 9090
on my Kubernetes cluster (AKS), it gives me this result:
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
and when I open the URL http://127.0.0.1:9090/targets in my browser, I get no result.
$ kubectl get pods -n monitoring
NAME                                                      READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-operator-alertmanager-0           2/2     Running   0          11h
prometheus-operator-grafana-8549568bc7-zntgz              2/2     Running   0          11h
prometheus-operator-kube-state-metrics-5fc876d8f6-v5zh7   1/1     Running   0          11h
prometheus-operator-operator-75d88ccc6-t4lt5              1/1     Running   0          11h
prometheus-operator-prometheus-node-export-zqk7t          1/1     Running   0          11h
prometheus-prometheus-operator-prometheus-0               3/3     Running   1          11h
What should be done? Or how do I forward Prometheus from my public cloud to my local machine?
Thank you
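One thing worth checking (a sketch, not a confirmed diagnosis): if kubectl runs on a remote host rather than on the machine whose browser you are using, the forward binds only to that host's loopback, so your local browser cannot reach it. Forwarding the service instead of the pod and binding all interfaces would look like this, assuming the chart created a service named prometheus-operator-prometheus:

kubectl port-forward -n monitoring svc/prometheus-operator-prometheus 9090:9090 --address 0.0.0.0

Then browse to http://<that-host>:9090/targets.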

Kubernetes guestbook example on AWS

I'm trying to run through the Kubernetes guestbook example on AWS. I created the master and 4 nodes with the kube-up.sh script and am trying to get the frontend exposed via a load balancer.
Here are the pods
root@ip-172-20-0-9:~/kubernetes# kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
frontend-2q0at       1/1     Running   0          5m
frontend-5hmxq       1/1     Running   0          5m
frontend-s7i0r       1/1     Running   0          5m
redis-master-y6160   1/1     Running   0          53m
redis-slave-49gya    1/1     Running   0          24m
redis-slave-85u1r    1/1     Running   0          24m
Here are the services
root@ip-172-20-0-9:~/kubernetes# kubectl get services
NAME           CLUSTER_IP    EXTERNAL_IP   PORT(S)    SELECTOR            AGE
kubernetes     10.0.0.1      <none>        443/TCP    <none>              1h
redis-master   10.0.90.210   <none>        6379/TCP   name=redis-master   37m
redis-slave    10.0.205.92   <none>        6379/TCP   name=redis-slave    24m
I edited the YAML for the frontend service to try to add a load balancer, but it's not showing up.
root@ip-172-20-0-9:~/kubernetes# cat examples/guestbook/frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 80
  selector:
    name: frontend
Here are the commands I ran:
root@ip-172-20-0-9:~/kubernetes# kubectl create -f examples/guestbook/frontend-controller.yaml
replicationcontroller "frontend" created
root@ip-172-20-0-9:~/kubernetes# kubectl get services
NAME           CLUSTER_IP    EXTERNAL_IP   PORT(S)    SELECTOR            AGE
kubernetes     10.0.0.1      <none>        443/TCP    <none>              1h
redis-master   10.0.90.210   <none>        6379/TCP   name=redis-master   39m
redis-slave    10.0.205.92   <none>        6379/TCP   name=redis-slave    26m
If I remove the LoadBalancer type it loads up, but with no external IP.
It looks like the EXTERNAL_IP column might only be populated on Google's platform; on AWS it creates an ELB and doesn't show the ELB's address in that column.
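One thing visible in the commands above: only frontend-controller.yaml was created, never frontend-service.yaml, so the frontend service does not exist yet, which is why it is missing from kubectl get services. After creating it, the ELB hostname should show up under LoadBalancer Ingress in the describe output (the DNS name below is a placeholder):

root@ip-172-20-0-9:~/kubernetes# kubectl create -f examples/guestbook/frontend-service.yaml
root@ip-172-20-0-9:~/kubernetes# kubectl describe services frontend
...
LoadBalancer Ingress:   <elb-name>.us-east-1.elb.amazonaws.com
...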