Why is RDS in 3 subnets in AWS?

I haven't changed my VPC/subnet settings since making an AWS account, and I've recently found that my RDS instance is apparently in 3 subnets (the subnet is listed as "default" with 3 subnet names underneath), one of which also contains my application server. Is it necessary to have my RDS instance in all 3 subnets? I want to move it to a separate subnet away from the application server and make it private. If that's the case, is there anything in particular I will need to do?

Typically, an Amazon RDS instance runs on one server in one subnet.
However, when launching the database, you are asked to provide a Subnet Group, which identifies which subnets the database could launch in. These are typically private subnets within the VPC.
If you are using a Multi-AZ database, then it will use two subnets -- one for the Master (running) database and one for the secondary (standby) database.
It is also possible to create Read Replicas that could be in a different subnet to the Master database.
Bottom line: You are probably viewing the list of subnets in the Subnet Group that it can use, but it is likely to only be in one subnet at the moment.
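If you want to check this outside the console, a small boto3 sketch (the instance identifier my-db-instance is a placeholder) will show the single Availability Zone the instance is actually running in, alongside the candidate subnets in its subnet group:

# Print the AZ the instance is running in and the subnets in its subnet group.
import boto3

rds = boto3.client("rds")

resp = rds.describe_db_instances(DBInstanceIdentifier="my-db-instance")
instance = resp["DBInstances"][0]

print("Running in AZ:", instance["AvailabilityZone"])
print("Subnet group:", instance["DBSubnetGroup"]["DBSubnetGroupName"])
for subnet in instance["DBSubnetGroup"]["Subnets"]:
    print("  candidate subnet:", subnet["SubnetIdentifier"],
          subnet["SubnetAvailabilityZone"]["Name"])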

Related

Move RDS Aurora Instance from private to public subnet

I currently have the typical setup of an RDS cluster with 1 instance running in a private subnet. I am migrating our application out of AWS and into Heroku (while leaving the DB as is), but I need to be able to connect to the DB from the Heroku dynos.
What I can't figure out is how to move the DB out of the private subnet and into a public one.
The AWS docs have instructions for moving from public to private, and I thought I could just follow them in the opposite direction. However, the process involves standing up a new secondary in the desired subnet using a Multi-AZ configuration and then failing over, and when I go to Modify my instance, there is no option for configuring Multi-AZ:
It seems like Aurora instances in particular do not support Multi-AZ? "Multi-AZ DB clusters are in preview for RDS for MySQL and RDS for PostgreSQL." That leaves me somewhat stuck.
Edit: I just saw this message elsewhere: "You have no Aurora Replicas in your DB cluster," which might be why Multi-AZ is not available. But I'm not seeing any option to spin up a replica anywhere.
Again, my goal is to get my Aurora DB into a public subnet (or otherwise make it accessible from the internet, but not through an SSH tunnel).
The Availability Zone options are in the "Availability & durability" section above "Connectivity" FYI
I just had the same issue, but with a Postgres DB. I do have the option to change its "Subnet group" in the Connectivity section (which you apparently don't), but it only appears for me if the DB is NOT currently Multi-AZ. AWS will prevent you from moving the DB between subnet groups* in the same VPC, but you can move the DB to a subnet group in a different VPC and then move it back to the subnet group you actually want it in (configured with the appropriate public subnets).
*You can create subnet groups in the RDS service, via the left-side menu.
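For what it's worth, the same hop can be scripted for a standalone (non-Aurora) instance like the Postgres one above; for an Aurora cluster the subnet group is a cluster-level property, so this exact call may not apply. A rough boto3 sketch, with placeholder names (my-postgres-db, temp-other-vpc-group, public-subnet-group):

import boto3

rds = boto3.client("rds")

# Step 1: move the instance to a temporary subnet group in a different VPC.
rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-db",
    DBSubnetGroupName="temp-other-vpc-group",
    ApplyImmediately=True,
)
# Wait for this modification to finish (instance back to "available")
# before issuing the next one.

# Step 2: move it back into the original VPC, into the subnet group that
# contains the public subnets you actually want, and make it reachable
# from the internet.
rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-db",
    DBSubnetGroupName="public-subnet-group",
    PubliclyAccessible=True,
    ApplyImmediately=True,
)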

Subnet not appearing when creating AuroraDB

I've created a VPC. By default the VPC created one public and one private subnet. I've created an additional private subnet in a different availability zone.
I then (in the ElastiCache console) created a new subnet group that contains these two private subnets from the VPC. This subnet group is also, of course, associated with the VPC.
Then, on creating an Aurora RDS within this VPC, it asks for a subnet group. There's one there, a 'default' group, but my new subnet group doesn't appear.
How do I create a subnet group that is acceptable when creating a database?
Well, as it turns out, ElastiCache subnet groups are not visible when creating a database. You have to use the RDS console to create a 'DB subnet group'. Once you do that, you're set.
ElastiCache and RDS are different product groups. Their subnet groups are separate entities and don't overlap with each other. You need to create subnet groups in RDS to use them with the RDS database engines. They are not tied to any particular engine, so you can reuse them within RDS.
One additional thing to note: some other database products, like Amazon Neptune and Amazon DocumentDB, can use RDS subnet groups as well. Just an FYI.
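If you are scripting the setup, note that the call lives in the RDS API (not the ElastiCache one, which has its own create_cache_subnet_group). A minimal boto3 sketch with placeholder subnet IDs:

import boto3

rds = boto3.client("rds")

# Create an RDS DB subnet group from the two private subnets.
# The subnet IDs are placeholders and must span at least two
# Availability Zones.
rds.create_db_subnet_group(
    DBSubnetGroupName="my-aurora-subnet-group",
    DBSubnetGroupDescription="Private subnets for the Aurora cluster",
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
)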

How can I get Kubernetes resources to successfully connect to an RDS instance in another VPC

Background: I have a kubernetes cluster set up in one AWS account that needs to access data in an RDS MySQL instance in a different account and I can't seem to get the settings correct to allow traffic to flow.
What I've tried so far (sketched in code after the list):
Set up a peering connection between the two VPCs. They are in the same region, us-east-1.
Created route table entries in each account to point traffic on the corresponding subnet to the peering connection.
Created a security group in the RDS VPC to allow traffic from the Kubernetes subnets to access MySQL.
Made sure DNS resolution is enabled on both VPCs.
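A rough boto3 sketch of that wiring (the route table IDs, peering connection ID, security group ID and CIDR blocks are all placeholders for your own values):

import boto3

ec2 = boto3.client("ec2")

# Route in the Kubernetes VPC: send traffic for the RDS VPC CIDR to the
# peering connection.
ec2.create_route(
    RouteTableId="rtb-0k8s1111",
    DestinationCidrBlock="172.31.0.0/16",     # RDS VPC CIDR (placeholder)
    VpcPeeringConnectionId="pcx-0abc2222",
)

# Route in the RDS VPC: send traffic for the Kubernetes VPC CIDR back
# through the same peering connection.
ec2.create_route(
    RouteTableId="rtb-0rds3333",
    DestinationCidrBlock="10.0.0.0/16",       # Kubernetes VPC CIDR (placeholder)
    VpcPeeringConnectionId="pcx-0abc2222",
)

# Security group on the RDS side: allow MySQL (3306) from the Kubernetes
# VPC CIDR.
ec2.authorize_security_group_ingress(
    GroupId="sg-0rds4444",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "IpRanges": [{"CidrIp": "10.0.0.0/16",
                      "Description": "Kubernetes cluster subnets"}],
    }],
)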
Kubernetes VPC details (Requester)
This contains 3 EC2 instances (it looks like each has its own subnet) that house my Kubernetes cluster. I used EKS to set this up.
The route table rules I set up have the 3 subnets associated, and point the RDS VPC CIDR block at the peering connection.
RDS VPC details (Accepter)
This VPC contains the MySQL RDS instance, as well as some other resources. The RDS instance has quite a few VPC security groups assigned to it for access from our office IPs, etc. It has Public Accessibility set to true.
I repeated the route table setup (in reverse) and pointed back to the K8s VPC subnet / peering connection.
Testing
To test the connection, I've tried 2 different ways. The application that needs to access mysql is written in node, so I just wrote a test connector and example query and it times out.
I also tried netcat from a terminal in the pod running in the kubernetes cluster.
nc -v {{myclustername}}.us-east-1.rds.amazonaws.com 3306
Which also times out. It seems to be trying to hit the correct mysql instance IP though so I'm not sure if that means my routing rules are working right from the k8s vpc side.
DNS fwd/rev mismatch: ec2-XXX.compute-1.amazonaws.com != ip-{{IP OF MY MYSQL}}.ec2.internal
I'm not sure what steps to take next. Any direction would be greatly appreciated.
Side note: I've read through this: Kubernetes container connection to RDS instance in separate VPC
I think I understand what's going on there. My CIDR blocks do not conflict with the default K8s IPs (10.0...), so my problem seems to be different.
I know this was asked a long time ago, but I just ran into this problem as well.
It turns out I was editing the wrong AWS route table! When I ran kops to create my cluster, it created a new VPC with its own route table, but also another route table! I needed to add the peering connection route to the cluster's route table instead of the VPC's main route table.
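In case it helps anyone scripting this, here is a sketch of how you might locate the route table actually associated with a cluster subnet (falling back to the VPC's main table) and add the peering route there; every ID and CIDR below is a placeholder:

import boto3

ec2 = boto3.client("ec2")

# Look up the route table explicitly associated with one of the cluster's
# subnets; if there is none, the subnet uses the VPC's main route table.
resp = ec2.describe_route_tables(
    Filters=[{"Name": "association.subnet-id", "Values": ["subnet-0cluster111"]}]
)

if resp["RouteTables"]:
    rtb_id = resp["RouteTables"][0]["RouteTableId"]
else:
    # Fall back to the main route table of the cluster VPC.
    main = ec2.describe_route_tables(
        Filters=[{"Name": "vpc-id", "Values": ["vpc-0cluster000"]},
                 {"Name": "association.main", "Values": ["true"]}]
    )
    rtb_id = main["RouteTables"][0]["RouteTableId"]

# Point the RDS VPC CIDR at the peering connection on that table.
ec2.create_route(
    RouteTableId=rtb_id,
    DestinationCidrBlock="172.31.0.0/16",      # RDS VPC CIDR (placeholder)
    VpcPeeringConnectionId="pcx-0abc2222",
)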

When is it possible to change the subnet group within AWS RDS?

I have one Oracle SE instance that is not Multi-AZ and does not have encryption enabled, and an Oracle EE instance that is Multi-AZ and has encryption enabled. The former has the option to change the subnet group through the console (Modify instance > Network and security), whereas the latter does not. Both instances are in a subnet group within the default VPC, and I have a custom VPC in the same account with another subnet group in it.
What conditions determine whether or not it is possible to change the subnet group of an RDS instance? I have not been able to find any documentation on this so far.
It is the Multi-AZ deployment that is the determining factor. To test this, modify your DB instance and turn off the Multi-AZ deployment. Once that is done, modify it again and you'll notice you now have the option to change the subnet group.
I haven't found any indication as to the reason for this behavior in the AWS documentation.
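If you'd rather do it through the API, here is a minimal sketch of the same test (the instance identifier is a placeholder); turning Multi-AZ off removes the standby, after which the subnet group option becomes available on a subsequent modification:

import boto3

rds = boto3.client("rds")

# Convert the deployment to single-AZ (removes the standby replica).
rds.modify_db_instance(
    DBInstanceIdentifier="my-oracle-ee-db",
    MultiAZ=False,
    ApplyImmediately=True,
)
# Once the instance is back to "available", the DB subnet group can be
# changed (to a group in a different VPC) in a second modify_db_instance call.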
As @hackakhan mentioned, you need to have Multi-AZ deployment turned off to modify the DB subnet group of an RDS instance. Unfortunately, the RDS instance will only be migrated to one of the subnets in the new DB subnet group if that new DB subnet group resides in a different VPC. You can create a temporary VPC to migrate the RDS instance away, only to migrate it back to your existing VPC and the right DB subnet group within that VPC.
The AWS Premium Support Knowledge Center has a detailed explanation of the steps involved: https://aws.amazon.com/premiumsupport/knowledge-center/change-vpc-rds-db-instance/
My understanding
RDS instances can't be migrated from one DB subnet group to another if either of the following is true:
The destination DB subnet group is in the same VPC as the current group
The instance has the Multi-AZ setting enabled
What worked for me
1. Creating the subnet group within my own VPC that would be the eventual home of my RDS instance
2. Creating a temporary DB subnet group in the "default" VPC (my RDS instance had previously been in a subnet group in a VPC that I had provisioned, not the default one), consisting of the three subnets that belong to the "default" VPC. This can be done in the RDS section of the AWS Console; no need to go to the VPC section.
3. Modifying the instance's subnet group to the newly created group (from step 2)
4. Modifying the instance's subnet group to its eventual home within my original VPC (from step 1)
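A small helper sketch for sequencing those moves from a script: after each modification, wait until the instance is available again and confirm which subnet group (and VPC) it ended up in. The instance identifier is a placeholder:

import boto3

rds = boto3.client("rds")

def wait_and_report(instance_id):
    # Block until the pending modification finishes.
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=instance_id)
    inst = rds.describe_db_instances(
        DBInstanceIdentifier=instance_id)["DBInstances"][0]
    print("Instance", instance_id, "is now in subnet group",
          inst["DBSubnetGroup"]["DBSubnetGroupName"],
          "in VPC", inst["DBSubnetGroup"]["VpcId"])

# Call this after each modify_db_instance step before starting the next one.
wait_and_report("my-rds-instance")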

EC2 Classic to VPC

Been on EC2 Classic for years and we're getting squeezed off. I'm having trouble planning the migration for the following reasons:
EC2-Classic security groups don't see VPC security groups
routing only seems possible through the public internet
I need to migrate a master/slave DB and a Redis cluster into the VPC, but I can't see a clear path given the two bullet points above. Short of taking the site offline and importing all the data via dumps, I'm unsure how to proceed.
Any advice would be appreciated.
You cannot migrate anything "live" from EC2-Classic to a VPC. You need to take snapshots, create AMIs, etc., and then re-launch the whole thing from scratch inside the VPC. There is no other way around it.
As for security groups (SGs), VPC SGs and EC2-Classic SGs do not mingle. You will have to create separate SGs inside the VPC.
You need to figure out which things you want to host in the public subnet of the VPC and which in the private subnet. Only things inside the public subnet can be accessed from the internet.
E.g., you can have your web server in the public subnet while you keep the back-end application server in the private subnet. This is just an example.
To make a long story short, you are eventually going to launch everything anew inside the VPC (take advantage of EC2 AMIs, snapshots, etc., so that the things you launch in the VPC will have your data).
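As an illustration of the "image it, then relaunch inside the VPC" path for one of the EC2-Classic servers, here is a rough boto3 sketch (the instance ID, AMI name, subnet ID, security group ID and instance type are all placeholders):

import boto3

ec2 = boto3.client("ec2")

# Create an AMI from the existing EC2-Classic instance. This captures its
# EBS volumes, i.e. your data at that point in time. By default the instance
# is briefly rebooted while the image is taken (pass NoReboot=True to skip,
# at the cost of filesystem consistency).
image = ec2.create_image(
    InstanceId="i-0classic1111",
    Name="db-master-migration-image",
)

ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# Launch a replacement instance from that AMI inside a VPC subnet, using a
# security group created in the VPC (Classic SGs cannot be reused).
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="m5.large",
    SubnetId="subnet-0private1111",
    SecurityGroupIds=["sg-0vpc2222"],
    MinCount=1,
    MaxCount=1,
)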