I'm trying to set up Kubernetes on AWS. For this I created an EKS cluster with 3 nodes (t2.small) according to the official AWS tutorial. Then I want to run a pod with an app that communicates with Postgres (RDS in a different VPC).
But unfortunately the app doesn't connect to the database.
What I have:
EKS cluster with its own VPC (CIDR: 192.168.0.0/16)
RDS (Postgres) with its own VPC (CIDR: 172.30.0.0/16)
Peering connection initiated from the RDS VPC to the EKS VPC
The route table for the 3 public subnets of the EKS cluster is updated: a route with destination 172.30.0.0/16 and the peering connection from step 3 as target is added.
The route table for the RDS is updated: a route with destination 192.168.0.0/16 and the peering connection from step 3 as target is added.
The RDS security group is updated: a new inbound rule is added, allowing all traffic from 192.168.0.0/16 (the equivalent AWS CLI calls are sketched right after this list).
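For reference, the equivalent AWS CLI calls might look roughly like this (all IDs are placeholders; the security-group rule here is scoped to the Postgres port rather than all traffic):

# Route in the EKS VPC route table(s) towards the RDS VPC CIDR via the peering connection
aws ec2 create-route \
    --route-table-id rtb-EKSEXAMPLE \
    --destination-cidr-block 172.30.0.0/16 \
    --vpc-peering-connection-id pcx-EXAMPLE

# Route in the RDS VPC route table towards the EKS VPC CIDR
aws ec2 create-route \
    --route-table-id rtb-RDSEXAMPLE \
    --destination-cidr-block 192.168.0.0/16 \
    --vpc-peering-connection-id pcx-EXAMPLE

# Inbound rule on the RDS security group for traffic from the EKS VPC CIDR
aws ec2 authorize-security-group-ingress \
    --group-id sg-RDSEXAMPLE \
    --protocol tcp \
    --port 5432 \
    --cidr 192.168.0.0/16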
After all these steps I execute a kubectl command:
kubectl exec -it my-pod-app-6vkgm -- nslookup rds-vpc.unique_id.us-east-1.rds.amazonaws.com
nslookup: can't resolve '(null)': Name does not resolve
Name: rds-vpc.unique_id.us-east-1.rds.amazonaws.com
Address 1: 52.0.109.113 ec2-52-0-109-113.compute-1.amazonaws.com
Then I connect to one of the 3 nodes and execute a command:
getent hosts rds-vpc.unique_id.us-east-1.rds.amazonaws.com
52.0.109.113 ec2-52-0-109-113.compute-1.amazonaws.com rds-vpc.unique_id.us-east-1.rds.amazonaws.com
What did I miss in the EKS setup in order to have access from pods to RDS?
UPDATE:
I tried to fix the problem with a Service:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: ExternalName
  externalName: rds-vpc.unique_id.us-east-1.rds.amazonaws.com
So I created this Service in EKS and then tried to refer to postgres-service as the DB URL instead of the direct RDS host address.
This fix does not work :(
Have you tried enabling DNS resolution on the peering connection (the AWS setting is "Allow DNS resolution from the peer VPC")? It looks like you are not getting the internally routable DNS name. You can enable it by going into the settings for the peering connection and checking the DNS resolution box. I generally do this with all of the peering connections that I control.
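If you prefer the CLI, enabling DNS resolution on both sides of the peering connection might look roughly like this (the peering connection ID is a placeholder; run each half from the account that owns that side):

aws ec2 modify-vpc-peering-connection-options \
    --vpc-peering-connection-id pcx-EXAMPLE \
    --requester-peering-connection-options AllowDnsResolutionFromRemoteVpc=true \
    --accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true

With that enabled (and DNS hostnames/resolution enabled on the VPCs), the RDS endpoint should resolve to its private IP from the EKS VPC instead of the public 52.x address shown above.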
The answer I provided here may actually apply to your case, too.
It is about using Services without selectors. Look also into ExternalName Services.
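A minimal sketch of the Service-without-selectors variant, assuming you point a manually managed Endpoints object at the database's private IP (the IP below is a placeholder, and since RDS IPs can change, ExternalName is usually the more robust option):

apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres-service      # must match the Service name
subsets:
  - addresses:
      - ip: 172.30.0.50       # placeholder: the RDS instance's private IP
    ports:
      - port: 5432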
Related
I want to deploy Jenkins on an EKS cluster so that anyone can access the Jenkins URL.
I tried changing type: NodePort in service.yaml to LoadBalancer,
but DNS didn't work.
With NodePort your worker nodes would have to have a public IP, which is a big security risk.
It is better to create a Kubernetes Service of type LoadBalancer, which in your case will expose the Jenkins service through an AWS load balancer.
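A minimal sketch of such a Service, assuming the Jenkins pods are labeled app: jenkins and listen on port 8080 (adjust the selector and ports to your deployment):

apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: LoadBalancer          # EKS provisions an AWS load balancer for this Service
  selector:
    app: jenkins              # assumption: label used by the Jenkins pods
  ports:
    - port: 80                # port exposed by the load balancer
      targetPort: 8080        # port the Jenkins container listens on

Once kubectl get svc jenkins shows an external hostname, point your DNS record at it.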
I was trying to demonstrate VPC peering in GCP. I followed the steps below:
Setup 1:
I logged into the GCP admin user account and created a VPC in custom mode with a subnet in the us-central1 region under one project. Then I set a firewall rule to allow SSH and TCP. Then I created a VM instance in the same us-central1 region, selecting this custom VPC and subnet in the networking options. Then I tried to SSH into that VM and to ping it from Cloud Shell. Both are working fine.
Setup 2:
I logged into a GCP user account which the admin (the previously used admin account) had added as a service account user. In that account I created a VPC in custom mode with a subnet in the asia-east1 region under another project. Then I created a VM instance in the same asia-east1 region, selecting this custom VPC and subnet in the networking options. Then I set a firewall rule to allow SSH and TCP. Then I tried to SSH into that VM and to ping it from Cloud Shell. Both are working fine.
Both VPCs have Dynamic routing mode set to Regional.
Then I tried to ping the us-central1 machine from the asia-east1 machine and also the asia-east1 machine from the us-central1 machine.
My expectation was that this would not work, since the two machines are in two different VPCs with subnets in two different regions, and that I could then implement VPC peering to make it possible. But unfortunately it is already working, even though I only wanted to demonstrate the VPC peering concept.
Can anyone suggest what I missed?
===============================================================
UPDATE
gcloud compute networks describe vpc1
autoCreateSubnetworks: false
creationTimestamp: '2021-09-03T04:08:24.491-07:00'
description: ''
id: '8530504402595724487'
kind: compute#network
mtu: 1460
name: my-vpc
routingConfig:
  routingMode: REGIONAL
selfLink: https://www.googleapis.com/compute/v1/projects/project-name/global/networks/my-vpc
subnetworks:
- https://www.googleapis.com/compute/v1/projects/project-name/regions/us-central1/subnetworks/my-subnet
x_gcloud_bgp_routing_mode: REGIONAL
x_gcloud_subnet_mode: CUSTOM
gcloud compute networks describe vpc2
autoCreateSubnetworks: false
creationTimestamp: '2021-09-03T04:56:02.154-07:00'
description: ''
id: '8965341541776208829'
kind: compute#network
mtu: 1460
name: my-project2-vpc
routingConfig:
  routingMode: REGIONAL
selfLink: https://www.googleapis.com/compute/v1/projects/project-name/global/networks/my-project2-vpc
subnetworks:
- https://www.googleapis.com/compute/v1/projects/project-name/regions/asia-east1/subnetworks/asia-subnet
x_gcloud_bgp_routing_mode: REGIONAL
x_gcloud_subnet_mode: CUSTOM
I finally got my head around it - I couldn't reproduce your issue because I was missing firewall rules (I assumed GCP would create them, but when you create custom networks no rules are created by default). I had to allow SSH traffic (TCP port 22 and the ICMP protocol) manually, and then everything started working as you described.
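For reference, creating such a rule in a custom-mode VPC could look roughly like this (network and rule names are placeholders):

gcloud compute firewall-rules create allow-ssh-icmp \
    --network=my-vpc \
    --allow=tcp:22,icmp \
    --source-ranges=0.0.0.0/0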
Communication between the networks is possible due to the fact that the VMs (by default) get public IPs and are reachable from any machine connected to the internet. You didn't provide that information, so I assumed that you didn't change any networking settings while creating the test VMs - thus you created VMs with public IPs.
But if you create the VMs with only internal IPs, they won't be able to communicate with VMs in different VPCs without VPC peering. Any communication between the networks (regardless of the regions they are in) is impossible in such cases.
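A rough sketch of that setup, with placeholder names: create the VMs without external IPs and then peer the networks (the mirror peering command has to be run from the other project as well):

# VM with an internal IP only (no external address)
gcloud compute instances create test-vm \
    --zone=us-central1-a \
    --network=my-vpc \
    --subnet=my-subnet \
    --no-address

# Peering from this project's VPC towards the other project's VPC
gcloud compute networks peerings create peer-to-project2 \
    --network=my-vpc \
    --peer-project=project2-id \
    --peer-network=my-project2-vpc

Remember that the firewall rules also have to allow traffic from the peered network's CIDR, since peering does not exchange firewall rules.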
I have a non-EKS AWS Kubernetes cluster with 1 master and 3 worker nodes.
I am trying to install the NGINX ingress controller in order to use the cluster with a domain name, but unfortunately it does not seem to work: the nginx ingress controller Service is not automatically assigned an IP, and even if I set an external IP manually, that IP does not answer on port 80.
If you are looking for a public domain, expose the nginx-ingress deployment's Service as a LoadBalancer, which will create an ELB in AWS.
You can then route the domain name to the load balancer's alias record in Route 53.
The reason the External IP remains pending is that there is no load balancer in front of your cluster to provide it with an external IP, as there would be on EKS. You can achieve it by bootstrapping your cluster with the --cloud-provider option using kubeadm (a minimal configuration sketch follows after the tutorial links below).
You can follow these tutorials on how to successfully achieve it:
Kubernetes, Kubeadm, and the AWS Cloud Provider
Setting up the Kubernetes AWS Cloud Provider
Kubernetes: part 2 — a cluster set up on AWS with AWS cloud-provider and AWS LoadBalancer
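As an illustration, with kubeadm the (legacy in-tree) AWS cloud provider is typically enabled through the kubeadm configuration, roughly like this (API versions depend on your Kubernetes release; the tutorials above cover the required IAM roles and resource tagging):

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws       # let the kubelet use AWS instance metadata
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws       # the controller manager is what provisions the ELB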
There are a couple of different solutions to that; my favorite is:
Use an Ingress Controller like ingress-nginx (there are multiple different Ingress Controllers available for Kubernetes; a very good comparison is provided here)
Configure the Ingress Controller Service to use NodePort with a port like 30080
Create your own AWS ALB, with Terraform for example, and add NodePort 30080 to the target group
Create an Ingress resource to configure the Ingress Controller (a rough sketch of this follows below)
The whole traffic flow could then look like this: client → AWS ALB → NodePort 30080 on the worker nodes → Ingress Controller → application Service → pods.
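A rough sketch of steps 2 and 4, assuming ingress-nginx is installed with its default labels and your application Service is called my-app on port 80 (host name, namespace and labels are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx        # assumption: default ingress-nginx labels
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080                            # the port registered in the ALB target group
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com                      # placeholder domain pointed at the ALB
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80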
If you still have some questions, just ask them here :)
Yes, you will have to expose the deployment as a Service:
kubectl expose deployment {deploymentname} -n {namespace} --type=LoadBalancer --name={name}
I am experiencing an issue on my EKS cluster (kubernetes version 1.14).
I am trying to install JupyterHub (version 0.8.2) onto the EKS cluster, via helm+tiller.
The install appears to succeed, but the proxy-public Service fails to create an ELB and the deployment ends up with an error event. See the output of kubectl describe svc below:
> kubectl describe svc
...
Name:       proxy-public
Namespace:  jhub
Labels:     app=jupyterhub
            chart=jupyterhub-0.8.2
            component=proxy-public
            heritage=Tiller
            release=jhub
...
Warning CreatingLoadBalancerFailed 1m (x6 over 3m) service-controller Error creating load balancer (will retry): failed to ensure load balancer for service jhub/proxy-public: could not find any suitable subnets for creating the ELB
My EKS cluster is associated with 3 subnets, 2 private and 1 public. I would think that the ELB could be placed in the public subnet?
EKS requires subnets to be tagged in order for them to be used for load balancer creation. To make a subnet eligible, tag it with kubernetes.io/cluster/<cluster-name> = shared (or owned), plus kubernetes.io/role/elb = 1 on public subnets (or kubernetes.io/role/internal-elb = 1 on private subnets for internal load balancers). For more info, see the knowledge portal article on subnet tagging for EKS.
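With the AWS CLI, tagging could look roughly like this (subnet IDs and cluster name are placeholders):

# Public subnets: eligible for internet-facing load balancers
aws ec2 create-tags \
    --resources subnet-PUBLICEXAMPLE \
    --tags Key=kubernetes.io/role/elb,Value=1 Key=kubernetes.io/cluster/my-cluster,Value=shared

# Private subnets: eligible for internal load balancers
aws ec2 create-tags \
    --resources subnet-PRIVATEEXAMPLE \
    --tags Key=kubernetes.io/role/internal-elb,Value=1 Key=kubernetes.io/cluster/my-cluster,Value=shared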
I am trying to set up EKS with RDS MySQL. I used eksctl to set up the EKS cluster and I did not change any of the default network configuration. EKS and RDS are using the same VPC.
This is the result in a debugging pod
telnet xx.rds.amazonaws.com 3306
Connected to xx.us-west-2.rds.amazonaws.com
J
8.0.16\#t'Ti1??]Gp^;&Aomysql_native_passwordConnection closed by foreign host
/ # nslookup xxx.us-west-2.rds.amazonaws.com
Server: 10.100.0.10
Address: 10.100.0.10:53
Non-authoritative answer:
xxx.us-west-2.rds.amazonaws.com canonical name = ec2-xx.us-west-2.compute.amazonaws.com
Name: ec2-xx.us-west-2.compute.amazonaws.com
Address: 192.168.98.108
nc -vz 192.168.98.108 3306
192.168.98.108 (192.168.98.108:3306) open
I use the Istio service mesh. When I create a MySQL client pod in a namespace where sidecar injection is not enabled, I get an error message like the following:
MySQL client pod:
ERROR 2002 (HY000): Can't connect to MySQL server on xxxxx.us-west-2.rds.amazonaws.com
I am new to VPC. RDS and EKS are using the same VPC, so are they connected within the private network?
My gRPC server log says connection refused; the gRPC server in EKS tries to connect to 192.168.98.108, which is the private IP of the RDS instance. Do I need any other configuration in the VPC? Any ideas? Cheers
I had the same scenario (RDS in the same VPC as the EKS cluster). What I did is the following:
I created a CloudFormation template with which I created my custom VPC, 8 subnets (3 public and 3 private for the EKS cluster, plus 2 private subnets for the RDS database), an internet gateway, a NAT gateway, route tables and routes.
Using eksctl with a cluster configuration YAML (a sketch of such a configuration follows below), I created the cluster and the node group. The node group joined my cluster.
Using the AWS CLI, I created the DB subnet group (containing the 2 private DB subnets) and also started an RDS instance. Then I set up a security group to allow traffic to the DB only from the 3 private subnets.
As a reference for creating my custom CloudFormation template, I used the template generated by eksctl when running the create command with the flag --node-private-networking.
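As an illustration of step 2, the eksctl cluster configuration can reference the VPC and subnets created by the CloudFormation template; a rough sketch, with all IDs, AZs and names as placeholders:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-west-2
vpc:
  id: vpc-EXAMPLE                       # the custom VPC from the CloudFormation stack
  subnets:
    public:
      us-west-2a: { id: subnet-PUB-A }
      us-west-2b: { id: subnet-PUB-B }
      us-west-2c: { id: subnet-PUB-C }
    private:
      us-west-2a: { id: subnet-PRIV-A }
      us-west-2b: { id: subnet-PRIV-B }
      us-west-2c: { id: subnet-PRIV-C }
nodeGroups:
  - name: workers
    instanceType: t3.medium
    desiredCapacity: 3
    privateNetworking: true             # worker nodes land in the private subnets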