Kubernetes: Have no access from EKS pod to RDS MySQL - amazon-web-services

I am trying to set up EKS with RDS MySQL. I used eksctl to set up the EKS cluster and did not change any of the default network configuration. EKS and RDS are in the same VPC.
This is the result from a debugging pod:
telnet xx.rds.amazonaws.com 3306
Connected to xx.us-west-2.rds.amazonaws.com
J
8.0.16\#t'Ti1??]Gp^;&Aomysql_native_password
Connection closed by foreign host
/ # nslookup xxx.us-west-2.rds.amazonaws.com
Server: 10.100.0.10
Address: 10.100.0.10:53
Non-authoritative answer:
xxx.us-west-2.rds.amazonaws.com canonical name = ec2-xx.us-west-2.compute.amazonaws.com
Name: ec2-xx.us-west-2.compute.amazonaws.com
Address: 192.168.98.108
nc -vz 192.168.98.108 3306
192.168.98.108 (192.168.98.108:3306) open
I use Istio as a service mesh. When I created a MySQL client pod in a namespace where sidecar injection is not enabled, I got an error message like the following.
MySQL client pod:
ERROR 2002 (HY000): Can't connect to MySQL server on xxxxx.us-west-2.rds.amazonaws.com
I am new to VPCs. RDS and EKS are using the same VPC, so they should be connected within the private network, right?
My gRPC server log says connection refused; the gRPC server in EKS tries to connect to 192.168.98.108, which is the private IP of the RDS instance. Do I need any other configuration in the VPC? Any ideas? Cheers

I had the same scenario (RDS in the same VPC as the EKS cluster). What I did is the following:
I created a CloudFormation template with which I created my custom VPC, 8 subnets (3 public and 3 private for the EKS cluster, plus 2 private subnets for the RDS database), an internet gateway, a NAT gateway, route tables and routes.
Using eksctl with a cluster configuration YAML I created the cluster and the node group. The node group joined my cluster.
Using the AWS CLI, I created the DB subnet group (containing the 2 private DB subnets) and started an RDS instance. Then I set up a security group to allow traffic to the DB only from the 3 private subnets (a sketch of these CLI calls is shown below).
As a reference for my custom CloudFormation template I used the template created by eksctl when running the create command with the --node-private-networking flag.
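For illustration only, the DB subnet group and the security group rule might be created roughly like this with the AWS CLI; every ID and CIDR below is a placeholder, not a value from the original setup:
# Create the DB subnet group from the two private DB subnets
aws rds create-db-subnet-group \
  --db-subnet-group-name eks-rds-subnets \
  --db-subnet-group-description "Private subnets for RDS" \
  --subnet-ids <private-db-subnet-1> <private-db-subnet-2>
# Allow MySQL traffic to the DB security group from one private node subnet
aws ec2 authorize-security-group-ingress \
  --group-id <db-security-group-id> \
  --protocol tcp --port 3306 \
  --cidr <private-node-subnet-cidr>
The ingress rule would be repeated for each of the three private node subnets (or you could reference the node group's security group instead of CIDRs).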

Related

How is an EKS cluster accessible when deployed in a private subnet?

When deploying an EKS cluster, the best practice is to deploy the managed control plane in private subnets. In terms of accessibility, the default option is a public cluster endpoint, meaning that I can access it locally with the kubectl tool and an updated kubeconfig.
How am I able to access the cluster if it is deployed in private subnets with no inbound traffic? As per the documentation, AWS creates a managed endpoint that can access the cluster from within the AWS network.
What is the architecture behind it, and how does it work internally? Is there some kind of proxy (agent) being deployed? (I found aws-node.)
What I have tried so far:
deployed my own EKS cluster
read the documentation
tried to scrape for additional info
The type of EKS networking you're setting up is configured to restrict access to the API server with a private endpoint that's only accessible from within the VPC. So any Kubernetes API requests (kubectl commands) have to originate from within the VPC (public or private subnets). If you are doing this as a personal project, then you can do the following:
Create a bastion host in the public subnet of your VPC with a key pair. Launch this host with user data that installs kubectl and any other CLI tools you need.
Access the bastion host via SSH from your workstation to ensure it works as expected.
Check that the security group attached to your EKS control plane can receive 443 traffic from the public subnet. You can create a rule for this if one doesn't exist. This will enable communication between the bastion host in the public subnet and the cluster in the private subnets.
Access the bastion host and then use it to communicate with the cluster just as you would with your personal machine. For example, run aws eks --region <region> update-kubeconfig --name <name-of-your-cluster> to update your kubeconfig and then proceed to run kubectl commands.
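As a rough sketch of the first and third steps, assuming a Linux bastion; the security group ID and CIDR below are placeholders:
#!/bin/bash
# Hypothetical bastion user data: install kubectl (user data runs as root)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
install -m 0755 kubectl /usr/local/bin/kubectl
And, from a machine with the AWS CLI configured, the 443 rule on the cluster security group could be added like this:
aws ec2 authorize-security-group-ingress \
  --group-id <eks-cluster-security-group-id> \
  --protocol tcp --port 443 \
  --cidr <public-subnet-cidr>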
Sidenote:
If this is for an enterprise project, you can also look into using AWS VPN or Direct Connect to access the VPC.
Other helpful resources:
https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access

Aurora Serverless connection timed out

I'm trying to connect to my Aurora Serverless cluster, but every time I try I receive this error:
2021/03/18 17:10:00 error verifying database connection is alive: dial tcp 10.247.15.113:3306: connect: operation timed out exit status 1
I created a VPC, subnets and security groups.
VPC -> 10.247.0.0/20
4 Subnets -> 10.247.0.0/22, 10.247.8.0/22, 10.247.4.0/22 and 10.247.12.0/22
Security group -> lives inside my VPC; inbound allows SSH 22 from 0.0.0.0/0 and MYSQL/Aurora 3306 from my EC2 instance's IP address; outbound allows all traffic
Connecting with SSH in a database client works, but from my code I receive the error mentioned above. I also tried telnet and got another operation timed out.
I know this is probably networking-related, but I'm not sure why, since I can connect via SSH through an EC2 instance. What can it be?
Your guide is for RDS. It does not apply to Aurora Serverless (AS). Specifically, AS can't be accessed from the internet, so you can't connect to it directly from home:
You can't give an Aurora Serverless v1 DB cluster a public IP address. You can access an Aurora Serverless v1 DB cluster only from within a VPC.
You have to connect to it from within the VPC, e.g. from an EC2 instance, an ECS container or a Lambda function.
The only ways to connect to it from home are to use the RDS Data API, or to set up an SSH tunnel or a VPN between your home network and your VPC.
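For example, a tunnel through an EC2 instance in the same VPC might look roughly like this (the key file, endpoint, user and IP are placeholders):
# Forward local port 3306 to the Aurora Serverless endpoint via an EC2 instance inside the VPC
ssh -i <key.pem> -N -L 3306:<aurora-cluster-endpoint>:3306 ec2-user@<ec2-public-ip>
# In another terminal, connect through the tunnel
mysql -h 127.0.0.1 -P 3306 -u <db-user> -p
The Data API route avoids the network path entirely: statements go over HTTPS (aws rds-data execute-statement), provided the Data API is enabled on the cluster.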

Problems connecting to AWS DocumentDB

I created a cluster and an instance of DocumentDB in Amazon. When I try to connect using my local SSH (macOS) it displays an error message.
When I try with MongoDB Compass Community:
mongodb://Mobify:<My-Password>@docdb-2019-04-07-23-28-45.cluster-cmffegva7sne.us-east-2.docdb.amazonaws.com:27017/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0
It loads for many minutes and in the end it fails with an error.
After solving this problem, I would like to know whether it is possible to connect a DocumentDB cluster to an instance in another region: I have my DocumentDB in Ohio and an EC2 instance in São Paulo. Is it possible?
Amazon DocumentDB clusters are deployed in a VPC to provide strong network isolation from the Internet. To connect to your cluster from outside of the VPC, please see the following: https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html
AWS DocumentDB is hosted in a VPC (virtual private cloud) which has its own specific subnets and security groups; basically, anything that resides in a VPC is not publicly accessible.
DocumentDB is deployed in a VPC. In order to access it, you need to create an EC2 instance or use AWS Cloud9.
Let's access it from an EC2 instance using SSH tunneling.
Create an EC2 instance (preferably Ubuntu) of any configuration and select the same VPC in which your DocumentDB cluster is hosted.
After the EC2 instance is fully initialized, start an SSH tunnel binding local port 27017 to the DocumentDB cluster endpoint on port 27017.
ssh -i "<ec2-private-key>" -L 27017:docdb-2019-04-07-23-28-45.cluster-cmffegva7sne.us-east-2.docdb.amazonaws.com:27017 ubuntu#<ec2-host> -N
Now your localhost is tunneled to the cluster on port 27017. Connect with mongosh or mongo, enter your cluster password, and you can execute any queries.
mongosh --sslAllowInvalidHostnames --ssl --sslCAFile rds-combined-ca-bundle.pem --username Mobify --password
Note: the ssl options are deprecated in favor of tls; just replace ssl with tls in the command above.
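For reference, the TLS form of the command above would presumably be:
mongosh --tls --tlsCAFile rds-combined-ca-bundle.pem --tlsAllowInvalidHostnames --username Mobify --password
--tlsAllowInvalidHostnames is needed because the tunnel terminates at localhost, which does not match the hostname in the cluster's certificate.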

Kubernetes: Have no access from EKS pod to RDS Postgres

I'm trying to set up Kubernetes on AWS. For this I created an EKS cluster with 3 nodes (t2.small) according to the official AWS tutorial. Then I want to run a pod with an app that communicates with Postgres (RDS in a different VPC).
But unfortunately the app doesn't connect to the database.
What I have:
EKS cluster with its own VPC (CIDR: 192.168.0.0/16)
RDS (Postgres) with its own VPC (CIDR: 172.30.0.0/16)
Peering connection initiated from the RDS VPC to the EKS VPC
The route table for the 3 public subnets of the EKS cluster is updated: a route with destination 172.30.0.0/16 and the peering connection from step 3 as target has been added.
The route table for the RDS is updated: a route with destination 192.168.0.0/16 and the peering connection from step 3 as target has been added (a CLI sketch of these changes follows this list).
The RDS security group is updated, new inbound rule is added: all traffic from 192.168.0.0/16 is allowed
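For illustration, the route and security group changes in the list above might look like this with the AWS CLI; all IDs are placeholders, and the last rule is narrowed to the Postgres port rather than all traffic:
# Route from the EKS public subnets' route table to the RDS VPC via the peering connection
aws ec2 create-route \
  --route-table-id <eks-route-table-id> \
  --destination-cidr-block 172.30.0.0/16 \
  --vpc-peering-connection-id <peering-connection-id>
# Route back from the RDS VPC's route table to the EKS VPC
aws ec2 create-route \
  --route-table-id <rds-route-table-id> \
  --destination-cidr-block 192.168.0.0/16 \
  --vpc-peering-connection-id <peering-connection-id>
# Inbound rule on the RDS security group for traffic from the EKS VPC
aws ec2 authorize-security-group-ingress \
  --group-id <rds-security-group-id> \
  --protocol tcp --port 5432 \
  --cidr 192.168.0.0/16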
After all these steps I execute kubectl command:
kubectl exec -it my-pod-app-6vkgm nslookup rds-vpc.unique_id.us-east-1.rds.amazonaws.com
nslookup: can't resolve '(null)': Name does not resolve
Name: rds-vpc.unique_id.us-east-1.rds.amazonaws.com
Address 1: 52.0.109.113 ec2-52-0-109-113.compute-1.amazonaws.com
Then I connect to one of the 3 nodes and execute a command:
getent hosts rds-vpc.unique_id.us-east-1.rds.amazonaws.com
52.0.109.113 ec2-52-0-109-113.compute-1.amazonaws.com rds-vpc.unique_id.us-east-1.rds.amazonaws.com
What did I miss in the EKS setup in order to have access from pods to RDS?
UPDATE:
I tried to fix the problem with a Service:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: ExternalName
  externalName: rds-vpc.unique_id.us-east-1.rds.amazonaws.com
So I created this Service in EKS, and then tried to refer to postgres-service as the DB URL instead of the direct RDS host address.
This fix does not work :(
Have you tried to enable DNS resolution ("DNS propagation") in the peering connection? It looks like you are not getting the internally routable DNS name. You can enable it by going into the settings for the peering connection and checking the DNS resolution boxes. I generally do this with all of the peering connections that I control.
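If you prefer the CLI over the console, the equivalent setting might be toggled roughly like this (the peering connection ID is a placeholder; each side's option can only be changed from the account that owns that VPC):
# Let each VPC resolve the other's public DNS names to private IPs over the peering connection
aws ec2 modify-vpc-peering-connection-options \
  --vpc-peering-connection-id <peering-connection-id> \
  --requester-peering-connection-options AllowDnsResolutionFromRemoteVpc=true \
  --accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true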
The answer I provided here may actually apply to your case, too.
It is about using Services without selectors. Look also into ExternalName Services.
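A minimal sketch of a Service without selectors, assuming you are willing to pin the RDS instance's private IP (the IP below is hypothetical; for Postgres the port would be 5432):
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres-service   # must match the Service name
subsets:
  - addresses:
      - ip: 172.30.0.10    # hypothetical private IP of the RDS instance
    ports:
      - port: 5432
Since RDS private IPs can change after failover or maintenance, the ExternalName Service plus working peering DNS is usually the more robust option.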

Can't connect redis-cli to Amazon ElastiCache

I have created a Redis endpoint on Amazon ElastiCache and also set up a VPC & NAT gateway. I need to connect to the created Redis endpoint with redis-cli. I am using a command like this:
redis-cli -h dev-redis.434dffsdsf.0094.ustyue1.cache.amazonaws.com
But I get an error message like this:
Could not connect to Redis at dev-redis.a35gy4.0001.use1.cache.amazonaws.com:6379: Connection timed out
I tried several ways:
Option 1: connect from my local Ubuntu machine.
Option 2: connect from an EC2 instance.
My source code is running on AWS Lambda, and from Lambda we can successfully connect to the same Redis endpoint.
What is the actual issue with my redis client?
Please check the security group (SG) of the ElastiCache cluster. The Redis instance should be accessible from the server where you're running redis-cli.
Option 1 will not work, as ElastiCache instances are not accessible outside of their VPC. From the FAQs: "Amazon ElastiCache Nodes, deployed within a VPC, can never be accessed from the Internet or from EC2 Instances outside the VPC."
Option 2 should work, if the EC2 instance is within the same VPC as the ElastiCache instance.
Adding more details, as none of the answers here gave me full clarity:
What is a security group? A security group in AWS is like a firewall.
What should I check in the security group? Check the inbound rules of the security group attached to the Redis/ElastiCache cluster and verify that port 6379 is open to IPs within the CIDR (e.g. 192.168.32.0/20) of the EC2 instance from which you are trying to access it (a CLI sketch of such a rule follows).
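For illustration, adding such an inbound rule with the AWS CLI might look like this (the security group ID is a placeholder and the CIDR is the example above):
# Open the Redis port to the CIDR of the subnet(s) where the client EC2 instance lives
aws ec2 authorize-security-group-ingress \
  --group-id <elasticache-security-group-id> \
  --protocol tcp --port 6379 \
  --cidr 192.168.32.0/20
Referencing the client instance's security group (--source-group) instead of a CIDR also works and avoids tracking subnet ranges.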