Achieving read and write query availability in AWS Multi-AZ RDS

I have configured a Multi-AZ RDS MySQL instance with no read replicas in a development environment, and I am testing Multi-AZ RDS failover by rebooting the DB instance.
Below is my observation: during RDS failover, the client application does not lose its connection, but it cannot access the database either; once the failover completes, the client can access the database again.
Update 1: The above observation is wrong. What I observed just now is that after the failover completes I get the error below, and it results in connection termination.
ERROR 2003 (HY000): Can't connect to MySQL server on 'rds-test.czswqpewzqas.---------.amazonaws.com' (110)
So, in short, my queries are failing during the reboot of the Multi-AZ MySQL instance.
Does anyone have any idea what I am missing here?
Update - Achieving read availability: I have now created a Read Replica for the Multi-AZ MySQL instance, and on getting the above-mentioned error, I redirect "select queries" to the Read Replica instance.
So, using the Read Replica, I am able to achieve read availability. Is this the right way? I would like to know if there is any other way to do it.
Also, how can I achieve write availability in Multi-AZ RDS?

Your observations are correct. During the failover, TCP connections are lost for the time it takes to fail over to the secondary database and to switch the IP address in DNS.
It is up to the application to
a/ try to reconnect using exponential backoff. Reconnection will be possible within minutes.
b/ decide how to behave during the failover.
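A minimal sketch of (a), assuming the PyMySQL driver; the endpoint, credentials and retry budget are placeholders:

import time
import pymysql  # assumption: any MySQL driver works the same way

def connect_with_backoff(host, max_retries=8):
    """Retry the connection with exponential backoff while the DNS
    record fails over to the standby (typically a couple of minutes)."""
    delay = 1
    for _ in range(max_retries):
        try:
            return pymysql.connect(host=host, user="app", password="...",
                                   database="mydb", connect_timeout=5)
        except pymysql.err.OperationalError:
            time.sleep(delay)
            delay = min(delay * 2, 60)  # cap the wait between attempts
    raise RuntimeError("database still unreachable after retries")

# Placeholder endpoint, modeled on the one in the question.
conn = connect_with_backoff("rds-test.xxxx.us-east-1.rds.amazonaws.com")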
Read transactions (SELECT) can be handed off to a read replica. Modern JDBC and ODBC drivers are able to handle read replicas by themselves: just give them the list of IP addresses / DNS names of your replicas, and the driver will apply load balancing automatically. No code change is required.
Write transactions are more complex to handle, and there is no single answer for all applications. The correct answer will depend on your application and business requirements.
Some customers decide to block all write operations and return an error message to end users, asking them to try again a few minutes later.
Some customers queue write transactions in an SQS queue and develop a queue reader application to flush pending transactions when the master database is available again (depending on the workload, S3 or DynamoDB can be used for this as well). Of course, your data will not be consistent during the failover and for a short period right after it, the time required to flush all pending writes.
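A hedged sketch of that queueing pattern with boto3; the queue URL and message shape are hypothetical, and the flush function assumes a DB-API connection to the recovered master:

import json
import boto3  # assumption: AWS SDK for Python

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/pending-writes"  # hypothetical

def submit_write(statement, params):
    """During the failover window, buffer the write instead of failing it."""
    sqs.send_message(QueueUrl=QUEUE_URL,
                     MessageBody=json.dumps({"sql": statement, "params": params}))

def flush_pending_writes(db_conn):
    """Queue reader: drain buffered writes once the master is back."""
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                   MaxNumberOfMessages=10, WaitTimeSeconds=2)
        messages = resp.get("Messages", [])
        if not messages:
            return
        for msg in messages:
            body = json.loads(msg["Body"])
            with db_conn.cursor() as cur:
                cur.execute(body["sql"], body["params"])
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=msg["ReceiptHandle"])
        db_conn.commit()

Note that a standard SQS queue does not guarantee ordering, so this pattern only suits writes that are safe to apply out of order (or you would need a FIFO queue).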
Please feel free to comment about other strategies used in real world scenarios.

AWS RDS Read Replica act as Failover Standby

I am currently assessing whether to use RDS MySQL Multi-AZ or Single AZ with Read Replica.
The considerations are budget and performance: since Multi-AZ costs twice as much as Single-AZ and has no ability to offload read operations, Single-AZ with a Read Replica seems to be the logical choice.
However, I saw a way to manually 'promote' the Read Replica to master in the event of the master's failure; is there a way to automate this?
Note: There was a similar question but it did not address my question:
Read replicas in RDS AWS
I think the problem is that you are a bit confused by these features. Let me help - you can launch AWS RDS in Multi-AZ deployment mode. In this case, AWS will do the following:
It will allocate a DNS record for you. This DNS record represents a single entry point to your master database, which is, let's assume, currently active and able to serve connections.
In the case of a master failure for any reason, AWS will simply repoint the address hidden behind the DNS record (quite fast, within 1-2 minutes) to your standby, which is located in another AZ.
When the master becomes available again, your standby, which has been serving writes, needs to synchronize everything back with the master. You do not need to take care of this - AWS will manage it for you.
In case of read replica:
AWS will allocate you two different DNS records - one for the master, another for the read replica. The read replica can be in the same AZ as the master, or even in another Region.
You can, and must, choose in your application which DNS name to use in each scenario. Most probably you will have two different connection pools - one for the master, another for the read replica (a sketch follows this list). Replication itself is asynchronous.
AWS solves the problem of replication on its own - you do not need to worry about it. But AWS does not, by nature, solve the reverse synchronization problem between read replica and master: the replica is meant to be read-only and should not accept any write traffic.
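A minimal sketch of the two-pool idea, assuming PyMySQL and placeholder endpoints; the routing rule is deliberately naive:

import pymysql  # assumption: any MySQL driver works the same way

# Two separate entry points: writes use the master's DNS name,
# reads use the read replica's DNS name (placeholder endpoints).
master = pymysql.connect(host="mydb.xxxx.us-east-1.rds.amazonaws.com",
                         user="app", password="...", database="mydb")
replica = pymysql.connect(host="mydb-replica.xxxx.us-east-1.rds.amazonaws.com",
                          user="app", password="...", database="mydb")

def run(sql, params=()):
    """Route SELECTs to the replica and everything else to the master."""
    conn = replica if sql.lstrip().upper().startswith("SELECT") else master
    with conn.cursor() as cur:
        cur.execute(sql, params)
        return cur.fetchall()

Because the replication is asynchronous, reads served through the replica pool can be slightly stale.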
Addressing your question directly:
Technically, you can try to make your read replica serve as a failover, but in this case you will have to implement a custom solution for synchronization with the master, because during the time the master was down, your read replica will certainly have received some number of writes. AWS does not solve this synchronization problem for you.
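If you do go this route, the promotion step itself can be scripted with the RDS API (detecting the failure and re-synchronizing afterwards remain your problem). A sketch with boto3 and a hypothetical instance identifier:

import boto3  # assumption: automating the promotion with the RDS API

rds = boto3.client("rds")

def promote_replica(replica_id):
    """Promote a read replica to a standalone, writable instance.
    Promotion breaks replication permanently; the instance cannot
    simply be re-attached as a replica afterwards."""
    rds.promote_read_replica(DBInstanceIdentifier=replica_id)
    rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=replica_id)

promote_replica("mydb-replica")  # hypothetical instance identifier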
In regards to Multi-AZ - you cannot use your Multi-AZ standby as a read replica, since that is not supported in AWS. I highly recommend checking out this documentation. I think it will help you sort things out. Have a nice day!

Rebooting an AWS RDS Aurora master/writer also reboots the readers?

I'm trying to evaluate AWS RDS Aurora as a future replacement for our local MySQL databases, but I'm noticing some strange behavior.
I have a basic cluster with a DB master (writer) and a replica (reader). My idea was to use the reader as an always-available data source, even when the writer is unavailable. But when I reboot the master, it takes down the reader as well, making the setup quite worthless.
Looking at the reader replica's log, this is what happens when it notices that the writer is down:
Does anyone know how to have an Aurora read entry point that never goes down, even if the writer is offline or busy for a brief time?
Or does the writer/reader "out of sync" state always take down the reader entry points, no matter the size of the cluster?
The only way to have a replica that remains available during a reboot of the master would be to have an asynchronous replica using conventional MySQL replication -- which Aurora does support.
Aurora replication is very different from MySQL (or Galera) replication. A loss of the master necessarily triggers a reorganization of the cluster, because the individual instances don't have their own copies of the data; they share a 6-way replicated storage volume -- that's how replication can remain in the 10-20 ms time range. What's actually replicated from the master is the transaction log LSN. Replacing a master requires one replica to be promoted and to verify that the on-disk data structures are clean after taking over; then all of the other replicas start following it.
If the DB cluster has one or more Aurora Replicas, then an Aurora Replica is promoted to the primary instance during a failure event. A failure event results in a brief interruption, during which read and write operations fail with an exception.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Managing.html#Aurora.Managing.FaultTolerance
When an Aurora replica stops seeing updates from the master, it doesn't matter where the actual fault lies -- whether with the actual master or elsewhere in the infrastructure -- the replica stops serving queries because, best case, it no longer has access to authoritative data.
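One application-side check (an assumption on my part, not something from this answer): Aurora MySQL replicas run with innodb_read_only enabled, so after reconnecting you can tell whether the instance you reached is the writer or a reader. A sketch with PyMySQL and a placeholder endpoint:

import pymysql  # assumption: any MySQL driver works the same way

# Placeholder cluster endpoint; credentials are illustrative only.
conn = pymysql.connect(host="mycluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
                       user="app", password="...", database="mydb")
with conn.cursor() as cur:
    # Aurora MySQL readers report 1 here; the writer reports 0.
    cur.execute("SELECT @@innodb_read_only")
    is_reader = cur.fetchone()[0] == 1
print("connected to a reader" if is_reader else "connected to the writer")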
Where possible, zero-downtime patching appears to avoid a master restart during upgrades. Other than upgrades, there should not be a need to restart the master.

Multi-AZ RDS test failover and connection monitoring

My question has two parts:
What is the best way to initiate an RDS failover for testing purposes?
How can I monitor the connection during failover in order to observe the time that it takes for AWS to reconnect the user to the standby instance?
With respect to part (1): If I understand correctly, all instance modifications are made on the standby and then AWS fails over by flipping the CNAME over to the standby as the primary is updated, so if I were to make any kind of instance modification and select "apply immediately," it should cause a failover, correct?
With respect to part (2): I am looking specifically for a way of monitoring the failover of an Oracle RDS instance, whether through a Lambda function, a bash script, or some other means. As far as I can tell, it is not possible to use ping with RDS, even when I allow all ICMP traffic via the security group. I can connect without trouble using telnet or an SQL client. What I would like is some way of periodically "pinging" the database during a failover to see when the IP associated with the connection string switches over and how long it takes. Any suggestions?
Correct, RDS will make your modifications on the failover instance and then fail over to it. Per their documentation:
The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups. In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only the time required for automatic failover to complete.
To simulate a failover, simply select "Reboot with failover" when rebooting, instead of a plain reboot. From the linked documentation:
Reboot with failover is beneficial when you want to simulate a failure of a DB instance for testing, or restore operations to the original AZ after a failover occurs.
Write a script that, on a regular interval, connects with a SQL Client and performs a quick select on a table of your preference. You can use this to measure true downtime during the failover; we have a tool very similar to this that we use when getting estimates of modifications on a test RDS before we apply it to our production RDS. Our tool simply writes to console with a timestamp and whether it failed/succeeded every few seconds. The tool will write success before the reboot, failure during, and success again after the cutover completes.
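A minimal version of such a probe in Python, checking the TCP port once per second and writing a timestamped success/failure line (endpoint and port are placeholders; swapping in a real SELECT through a SQL driver would be a stricter check):

import socket
import time
from datetime import datetime

HOST = "DBNAME.REGION.rds.amazonaws.com"  # placeholder, as in the script below
PORT = 1521  # Oracle's default listener port; adjust to your instance

while True:
    stamp = datetime.now().isoformat(timespec="seconds")
    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            print(stamp, "SUCCESS")
    except OSError:
        print(stamp, "FAILURE")
    time.sleep(1)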
Additional Resources:
Modifying an Amazon RDS DB Instance and Using the Apply Immediately Parameter
Modifying a DB Instance Running the Oracle Database Engine
Update on this:
I ended up using a simple bash script:
while true; do date; nc -vz DBNAME.REGION.rds.amazonaws.com PORT; sleep 1; done
Note: the above is for netcat-openbsd. If using netcat-traditional, you'll need to modify this.
This polls the database each second to see if it's still possible to connect. Typically, when I ran this and then initiated a reboot with failover, the connection would simply hang during the failover and then display a timeout error once the failover was complete and connectivity resumed, presumably because the failover usually takes longer than the reboot. If the reboot happens to take longer than the failover, though, there may be a period during which the connection is refused as the reboot completes. In any case, using this method I was able to get a consistent failover time of 2:08.
It seems, however, that contrary to what I originally thought, most instance modifications do not involve a failover at all. I have tested resizing the instance as well as changing the option groups and parameter groups and did not experience any downtime.
Changing the database engine does result in a failover.

Migrating Redis to AWS Elasticache with minimal downtime

Let's start by listing some facts:
ElastiCache can't be a slave of my existing Redis setup. Real shame, that would be so much more efficient.
I have only one Redis server to migrate, with roughly 3gb of data.
Downtime must be less than 10 mins. I assume the usual "stop the site, stop redis, provision cluster with snapshot" will take longer than this.
Similar to this question: How do I set an elasticache redis cluster as a slave?
One idea on how this might work:
Set Redis to use an AOF and trigger BGSAVE at the same time.
When BGSAVE finishes, provision the Elasticache cluster with RDB seed.
Stop the site and shut down my local Redis instance.
Use an aof-replay tool to replay the AOF into Elasticache.
Start the site again, pointed at the Elasticache cluster.
My questions:
How can I guarantee that my AOF file begins at exactly the point the RDB file ends, and that no data will be written in between?
Is there an AOF tool supported by the maintainers of Redis, or are they all third-party solutions, and therefore (potentially) of questionable reliability?*
* No offence intended to any authors of such tools, I'm sure they're great, I just feel much more confident using a tool written by the same team as the product to avoid potential compatibility bugs.
I have only one Redis server to migrate, with roughly 3gb of data
I would halt, save the Redis dump to S3, and then load it into a new cluster.
I'm guessing 10 mins to save the file and get it into S3.
10 minutes to launch an ElastiCache cluster from that data.
That leaves you ten extra minutes to configure and test.
But there is a simple way of knowing EXACTLY how long.
Do a test migration of it.
DON'T stop your live system
Run BGSAVE and get a dump of your Redis (leave everything running as normal)
Move the dump to S3
Launch an ElastiCache cluster from it
Take DETAILED notes, TIME each step, copy the commands to a notepad window.
Put it all in a Word/Excel document so you have a migration document. That way you know how long it takes and there are no surprises. Let us know how it goes.
ElastiCache has online migration support. You can use the start-migration API to start migration from a self-managed cluster to an ElastiCache cluster.
aws elasticache start-migration --replication-group-id <ElastiCache Replication Group Id> --customer-node-endpoint-list "Address='<IP Address>',Port=<Port>"
The input to the API is your ElastiCache replication group id and the IP and port of the master of your self-managed cluster. You need to ensure that the IP address is accessible from the ElastiCache node. (An example IP address would be the private IP address of the master of your self-managed cluster.) This API will make the master node of the ElastiCache cluster call 'SLAVEOF' against the master of your self-managed cluster. This establishes a replication stream and starts migrating data from the self-managed cluster to the ElastiCache cluster. During migration, the master of the ElastiCache cluster will stop accepting writes sent to it directly. You can start using the ElastiCache cluster from your application for reads.
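Before cutting over, you can confirm the replication link is established and healthy; a sketch using redis-py against the ElastiCache primary endpoint (the hostname is a placeholder, and the replication fields shown apply while the node is still acting as a replica):

import time
import redis  # assumption: redis-py client

# Placeholder primary endpoint of the ElastiCache replication group.
r = redis.Redis(host="my-group.xxxx.0001.use1.cache.amazonaws.com", port=6379)

# During migration the ElastiCache primary acts as a replica of the
# self-managed master; wait until it reports a healthy link.
while r.info("replication").get("master_link_status") != "up":
    time.sleep(5)
print("replication link is up; safe to consider complete-migration")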
Once you have all your data in the ElastiCache cluster, you can use the complete-migration API to stop the migration. This API will stop the replication from the self-managed cluster to the ElastiCache cluster.
aws elasticache complete-migration --replication-group-id <ElastiCache Replication Group Id>
After this, the master of the ElastiCache cluster will start accepting writes. You can start using the ElastiCache cluster from your application for both reads and writes.
The following limitations apply to this migration method:
An existing or newly created ElastiCache deployment should meet the following requirements for migration:
It has cluster mode disabled and uses Redis engine version 5.0.5 or higher.
It doesn't have either encryption in-transit or encryption at-rest enabled.
It has Multi-AZ with Auto-Failover enabled.
It has sufficient memory available to fit the data from your Redis on EC2 instance. To configure the right reserved memory settings, see Managing Reserved Memory.
There are a few ways to migrate the data without downtime. They are harder to achieve though.
You could have your app write to two Redis instances simultaneously - one of which would be on EC (a sketch follows this list). Once the caches are both 'warm', you could just restart your app and read from the EC cache.
You could initially migrate to EC2 instead of EC - not really what you were hoping to hear, I imagine. This is easy to do because you can set the EC2 instance as a slave of your Redis instance. Also, migrating from EC2 to EC is somewhat easier (the data is already on AWS), so there's a benefit for users with huge data sets.
You could, in theory, intercept the commands from the client and send them to EC, thus effectively "replicating". But this requires some programming (I don't believe a tool like this exists ATM) and would be hard with multiple, ephemeral clients.
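A bare-bones sketch of the first option with redis-py (both endpoints are placeholders); reads stay on the old instance until the ElastiCache copy is warm:

import redis  # assumption: redis-py client

old = redis.Redis(host="redis.internal.example.com", port=6379)      # existing server (placeholder)
new = redis.Redis(host="my-ec.xxxx.cache.amazonaws.com", port=6379)  # ElastiCache (placeholder)

def set_key(key, value, ttl=None):
    """Write to both caches so ElastiCache warms up alongside the old one."""
    for client in (old, new):
        client.set(key, value, ex=ttl)

def get_key(key):
    # Keep reading from the old instance until the EC cache is warm.
    return old.get(key)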

Read replicas in RDS AWS

I am a newbie to Amazon RDS. I have set up a DB instance in RDS. I want to try the RDS read replicas feature.
I have a few queries:
For what kind of applications are read replicas suitable?
Is the data replicated to the read replicas synchronously or asynchronously?
Is it a substitute for Multi-AZ deployments?
How is it better than master-slave or master-master replication in MySQL?
If we have replicas on EC2, will they work the same way as RDS read replicas?
Thanks in advance.
For what kind of applications are read replicas suitable?
It is best suited if your application:
Is read-intensive and is used by several read clients
Can tolerate (live with) a minor lag between the data written to the DB and the data replicated to the read replicas
Is the data replicated to the read replicas synchronously or asynchronously?
The replication is asynchronous, so expect a small replication lag.
Is it a substitute for Multi-AZ deployments?
The Multi-AZ setup and the Read Replica complement each other; they aren't replacements or substitutes for one another. The Multi-AZ setup is for high availability (out of the box, set up by AWS), whereas the Read Replica is purely there to reduce/distribute the load on the database instances, to improve read performance and to avoid read and write bottlenecks on the database. You can / need to write your application logic to divert your reads to the Read Replica and your writes to the main instance, to make the best use of the setup.
Generally people mix and match both Multi-AZ and Read Replica(s), depending on the application and load.
How is it better than master-slave or master-master replication in MySQL?
The comparison of master-master vs. master-slave depends on several factors like the data, the data volume, the mix of write and read operations, the load, etc.; you need to test to see exactly how the system performs with either setup.
The biggest advantage of going with Multi-AZ / Read Replica is that you can offload the DB management activities and the overhead of supervising the replication setup and its health to AWS, instead of managing those yourself.
If we have replicas on EC2, will they work the same way as RDS read replicas?
This is again more like a corollary to Q4. When you install a database on your own EC2 instance, you need to take care of (monitor and manage) EC2 instance patches, database patches, replication setup, replication lag, and availability.
Whereas when you leave that to AWS by using a Read Replica, they manage all of the above for you. It is your call to choose whichever is best for you, depending on what the application requires, weighing factors like cost, availability, compliance, etc.