Redirect old RDS traffic to the new RDS in AWS

Old database endpoint : old.cy336nc8sq5l.us-east-1.rds.amazonaws.com
New database endpoint : new.cy336nc8sq5l.us-east-1.rds.amazonaws.com
The endpoints above are automatically created by AWS when each RDS instance is created.
I have tried setting up a CNAME for old.cy336nc8sq5l.us-east-1.rds.amazonaws.com with the value new.cy336nc8sq5l.us-east-1.rds.amazonaws.com, but it did not work. For this I had to create a new hosted zone in Route 53 named cy336nc8sq5l.us-east-1.rds.amazonaws.com.
However, if I set up a CNAME in another hosted zone for a URL like abc.example.com with the value new.cy336nc8sq5l.us-east-1.rds.amazonaws.com, it works like a charm. The old RDS URL is used in multiple applications, so I cannot risk abandoning it completely; the best approach is some kind of redirection.
In addition, no CNAME under the cy336nc8sq5l.us-east-1.rds.amazonaws.com hosted zone works.
How can I fix this? Please also suggest the best practice for redirecting RDS traffic. I know that for the new DB endpoint I will create a custom CNAME and use that going forward rather than just the default one. All suggestions are welcome :)

You can't add any records for the domain cy336nc8sq5l.us-east-1.rds.amazonaws.com because you don't control it. In general you can create a hosted zone for any domain, such as google.com, but it won't take effect unless you change the NS and SOA records at the original DNS provider to point to yours, and you can't do that with the AWS RDS domains. You can confirm this by running:
dig +short -t ns cy336nc8sq5l.us-east-1.rds.amazonaws.com
If the result returns your NS records, then you control that domain.
To have this kind of flexibility in the future, I would suggest creating a private hosted zone such as mydb.com with a CNAME record like master.mydb.com whose value is old.cy336nc8sq5l.us-east-1.rds.amazonaws.com. When you want to switch to another endpoint, just change the record in Route 53; after the TTL expires, connections will start going to the new endpoint. Since you are making a change anyway, it's better to start using this approach now.
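As a minimal sketch (assuming a private hosted zone for mydb.com already exists, with the hypothetical ID Z1EXAMPLE), the record could be created or updated with the AWS CLI like this:
# UPSERT a CNAME in the private zone (zone ID is hypothetical); keep the TTL low so a switch takes effect quickly
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "master.mydb.com",
        "Type": "CNAME",
        "TTL": 60,
        "ResourceRecords": [{"Value": "old.cy336nc8sq5l.us-east-1.rds.amazonaws.com"}]
      }
    }]
  }'
Switching to the new endpoint is then just a matter of re-running the same command with new.cy336nc8sq5l.us-east-1.rds.amazonaws.com as the value and waiting for the TTL to expire.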
Also, for your case, after you switch to the new endpoint you can check the connection count on the old DB to see whether it is still referenced somewhere; by running show processlist; you will be able to see which IPs are still using it.
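For example (assuming a MySQL-compatible engine, a hypothetical admin user, and the PROCESS privilege), you could group the current connections by client host:
# count current connections per client host on the old endpoint
mysql -h old.cy336nc8sq5l.us-east-1.rds.amazonaws.com -u admin -p \
  -e "SELECT host, COUNT(*) AS connections FROM information_schema.processlist GROUP BY host;"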

The bottom line is that you are going to have to update all 30 applications to use the new DB endpoint. If you are going to be deleting databases & recreating them like this regularly, then configure your databases to use a name in a zone you control, and create a CNAME to whatever database endpoint is current.
You may be able to create a temporary solution by adding an entry to /etc/hosts (assuming your clients are running Linux - I believe this is also possible on Windows, but it has been a long time) that maps the current IP of the new database to the old hostname. But this is probably just as much work as updating the applications to use the new database. It will also fail if you are running a Multi-AZ database and have a failover event.
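A quick hedged sketch of that workaround: look up the current IP of the new endpoint, then map the old hostname to it on each client (10.0.1.23 below is a placeholder, not a real address):
# resolve the current IP of the new endpoint
dig +short new.cy336nc8sq5l.us-east-1.rds.amazonaws.com
# map the old hostname to that IP on the client (example IP only)
echo "10.0.1.23 old.cy336nc8sq5l.us-east-1.rds.amazonaws.com" | sudo tee -a /etc/hosts
As noted above, this breaks as soon as the underlying IP changes, for example after a Multi-AZ failover.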

Changing your DB identifier can help in some way.
Select your cluster -> Modify -> change DB cluster identifier
You will keep your old database under a different endpoint, then rename the new DB so that it takes over the old endpoint.
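A hedged CLI sketch of that rename, assuming the cluster identifiers are literally old and new (as suggested by the endpoints in the question); note that each rename changes the endpoint DNS and briefly interrupts connections:
# free up the old identifier first (old-retired is just an example name)
aws rds modify-db-cluster --db-cluster-identifier old --new-db-cluster-identifier old-retired --apply-immediately
# then let the new cluster take over the old identifier, and with it the old endpoint name
aws rds modify-db-cluster --db-cluster-identifier new --new-db-cluster-identifier old --apply-immediately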
But I like the /etc/hosts solution, as it is simple and safe.

Related

Switching between on-prem and cloud server IPs without load balancing

I own the something.com domain and want to switch from an old on-premises server to a new Google Cloud VM. I can do that by changing the A record under DNS settings. If the new server fails, I need to be able to switch back to the old server.
The problem with using A records is that DNS doesn't propagate fast even if you use Cloudflare. Google Chrome in particular sticks to its DNS table like crazy and if it first learned that something.com resolves to X.X.X.X it will not let go of it.
I need to be able to direct all traffic going to the Google Cloud static IP back to the old server's IP. I'm looking for a proxy/routing rule menu that I can apply - not a full-blown load-balancing setup that will cost extra per month.
The solution is to get rid of the old server and build a more robust solution on GCP. There are multiple ways to do this, but one obvious way is to use a Managed Instance Group (https://cloud.google.com/compute/docs/instance-groups). MIGs can be configured to be autohealing (https://cloud.google.com/compute/docs/tutorials/high-availability-autohealing) and autoscaling (if needed).
In this case you should be particularly looking at stateful MIGs I guess (https://cloud.google.com/compute/docs/instance-groups/stateful-migs).
You have two solutions for switching your DNS from one IP to another dynamically:
Either you use a DNS failover service, which is not offered on GCP today. Use a low TTL in your DNS definition, or you will wait a long time before the automatic switch happens.
Or you implement it yourself with a proxy server that you have to manage.

AWS - Can I launch nodes under a DNS domain (Auto Scale Group)?

Use Case
I'm working on an application that uses Presto, and for Presto, I have to set up HTTPS traffic internally (for security compliance reasons).
For this, I preferably need the nodes' FQDN to be in the same domain. E.g. myhost1.mydomain.com, myhost2.mydomain.com.
My Question
AWS automatically gives an FQDN like ip-10-20-30-40.ec2.internal. So, my question is:
Is there a way I can have a new node automatically created with an FQDN like myhost1.mydomain.com? I know I can create internal "hosted zones" and DNS records for my hosts pretty easily, but I can't figure out how to make that the default domain for a new host.
Also, just FYI, I'm doing this for an auto-scale group; but I suspect that's irrelevant.
When an Amazon EC2 instance starts, it can run a script passed in via User Data.
You could code this script to create a record in Amazon Route 53 that points to the instance (for example, an A record with the instance's IP address, or a CNAME pointing to its private DNS name).
I'm not sure how you'd necessarily determine the number within the name, so you could just create a random name. Also, it might be tricky to remove the record when the instance is terminated. One way to both assign and remove the record would be to use Amazon EC2 Auto Scaling Lifecycle Hooks, which allow code to be triggered outside of the instance itself. It's more complex but would be fully effective.
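A rough User Data sketch of that idea, assuming the AWS CLI is available on the AMI, the instance profile allows route53:ChangeResourceRecordSets, IMDSv1 is reachable (IMDSv2-only instances need an extra token step), and a hypothetical hosted zone ID Z2EXAMPLE for mydomain.com; the record name built from the instance ID is purely illustrative:
#!/bin/bash
# look up this instance's private IP and instance ID from the metadata service
IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
NAME="myhost-$(curl -s http://169.254.169.254/latest/meta-data/instance-id).mydomain.com"
# UPSERT an A record for this instance in the (hypothetical) mydomain.com zone
aws route53 change-resource-record-sets \
  --hosted-zone-id Z2EXAMPLE \
  --change-batch "{\"Changes\":[{\"Action\":\"UPSERT\",\"ResourceRecordSet\":{\"Name\":\"$NAME\",\"Type\":\"A\",\"TTL\":60,\"ResourceRecords\":[{\"Value\":\"$IP\"}]}}]}"
Cleaning the record up on termination is the harder part, which is where the Lifecycle Hooks mentioned above come in.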
I'm not familiar with Presto, but here are a couple of ideas.
First, if you are using an AWS managed load balancer, you can enable HTTPS between it and the instance using self-signed cert: the load balancer will NOT validate the cert, so your connection will be secure.
If that's not what you need, take a look at DHCP option sets for your VPC - I believe you can set your own domain name, rather than use the default ec2.internal.
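If that route fits, a hedged sketch with the AWS CLI (the dopt-/vpc- IDs below are placeholders):
# create a DHCP options set that hands out mydomain.com as the domain name
aws ec2 create-dhcp-options \
  --dhcp-configurations "Key=domain-name,Values=mydomain.com" "Key=domain-name-servers,Values=AmazonProvidedDNS"
# associate the returned options set with the VPC
aws ec2 associate-dhcp-options --dhcp-options-id dopt-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0
Note this only changes the domain suffix the instances pick up via DHCP; for names like myhost1.mydomain.com to actually resolve, you still need matching records, for example in a private hosted zone.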

I want to gradually move from Heroku to AWS. How do I set up a "Weighted routing policy" in Route 53?

This problem has been hurting my brain for almost the whole weekend. I hope someone will come and release me :-)
I want to move a web application from Heroku to AWS in a gradual way, i.e. we start routing 10% of the requests to AWS and increase that number over time, once our canary tests pass and everything runs smoothly. FYI, the database has already been moved to AWS and can also be accessed by Heroku via a Network Load Balancer.
The setup should also be able to serve a maintenance page (running from an S3 bucket with CloudFront) when, in some hopefully rare case, the health checks for both are failing. I've added an extra alias record for that with a weight of 0, because Route 53 will always try to return a result when all checks are failing, even if the weight is set to zero.
The Application Load Balancer we need for routing all the traffic to the correct ECS containers also handles some redirects (apex to www, and HTTP to HTTPS) for us.
With all these requirements, I came up with the diagram shown below.
During implementation, I ran into a problem that I cannot get solved.
I can't create the specific A record (the one with weight 100), because it tries to refer, as an alias, to a record set of another type (CNAME), and that's not allowed within Route 53.
The problem is that it has to be an A record, because when you want to leverage the weighted routing policy, all DNS records should be of the same type.
The records with weight 90 and 10 would also have to be CNAMEs (they need to be of the same type as well), because I can't use an A record for my Heroku endpoint.
Does anyone have an idea how to solve this? Or maybe know of a better way to do it?
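For reference, a minimal sketch of how two weighted CNAME records can be created with the AWS CLI (hypothetical zone ID and target hostnames; this only illustrates the weighted routing mechanism itself, it does not resolve the A-record/alias conflict described above):
# two weighted records with the same name: 90% to Heroku, 10% to the ALB (targets are placeholders)
aws route53 change-resource-record-sets --hosted-zone-id Z3EXAMPLE --change-batch '{
  "Changes": [
    {"Action": "UPSERT", "ResourceRecordSet": {"Name": "www.example.com", "Type": "CNAME",
      "SetIdentifier": "heroku", "Weight": 90, "TTL": 60,
      "ResourceRecords": [{"Value": "myapp.herokuapp.com"}]}},
    {"Action": "UPSERT", "ResourceRecordSet": {"Name": "www.example.com", "Type": "CNAME",
      "SetIdentifier": "aws-alb", "Weight": 10, "TTL": 60,
      "ResourceRecords": [{"Value": "my-alb-123456789.us-east-1.elb.amazonaws.com"}]}}
  ]
}'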

Updates to Type A record set not reflecting in Route 53

I had 2 EC2 instances - one connected to mydomain.com and another connected to dev.mydomain.com
When the mydomain.com instance went down for some reason, I changed the record set of mydomain.com to the public IP of the second EC2 instance. The change was reflected immediately and mydomain.com started working fine.
A few hours later, after fixing the issues with the first EC2 server, I reverted the IP address in the record set of mydomain.com. But this did not work; mydomain.com still points to the 2nd EC2 machine.
Can anybody suggest possible solutions?
DNS changes take time to propagate. Also, computers cache DNS responses, so checking changes can be difficult. The best advice is to wait, or to check it via a different computer.
You might want to use a service like https://cachecheck.opendns.com/ to check the resolution, or clear your cache before checking (in Windows, use ipconfig /flushdns).
DNS records have a TTL, or Time To Live. This means records are not refreshed from the authoritative server until that TTL has expired.
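For example (mydomain.com as a placeholder), you can see the TTL your resolver currently has cached for the record in the second column of:
# the number before IN A is the TTL, in seconds
dig +noall +answer mydomain.com A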
You should look at using failover records in R53 :)

AWS RDS IP static or dynamic?

I have an RDS instance with a URL that was provided by Amazon. (This URL has an IP address associated with it, of course.)
To make connecting to the DB easier I made a redirect from my domain like this: "db.myDomain.com" to the IP of the DB Instance.
For a week it all worked fine, but then, suddenly, it stopped working. After searching for a few hours, I realized that the IP I was redirecting to was not the same as the IP of the instance.
This made me think that maybe the IPs on RDS are dynamic and the only way to access the DB is with the URL provided by Amazon.
Is this correct? If so, is there a way to redirect from one URL to another?
Yes, your observation about the dynamic nature of RDS IPs is correct, and it is the expected behaviour of the service. Always use the URL provided for the RDS instance to access it.
For most use cases you don't need a redirect at all, since the DNS name goes in a config file / connection string. If you still need a friendly name, you can use Route 53 to create an alias (a CNAME) for the endpoint. Here is the AWS documentation for doing that: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-rds-db.html - it is easier and more convenient.
For an RDS instance, the DNS name does not change, but the IP address can change in some cases, especially when you enable Multi-AZ (multiple Availability Zones): the RDS instance is switched to another Availability Zone, with a different IP address, when AWS detects a failure.
So in your application you can't pin the IP address for database access; always use the DNS name (domain name) to reach your database.