Will AWS RDS Multi-AZ failover cause unexpected data transfer costs? - amazon-web-services

According to the Amazon docs, I should set up my RDS database with multiple availability zones (Multi-AZ) for high availability and automatic failover. However, data transfer between EC2 and RDS within the same availability zone is free, whereas data transfer between zones is not (see pricing). So if I set up my web server on an EC2 instance in the same AZ as my database server, to get the zero data transfer costs, and the database server then fails and automatically fails over to a different AZ, will Amazon suddenly start charging me data transfer costs?
Am I missing something here? Is there a way to minimise this data transfer cost, or is it just luck whether you end up running in the same AZ or not?

I don't really like Multi-AZ. You are worried about the data transfer costs, but these tend to be minimal; I normally pay cents. With Multi-AZ, the real cost is that you pay about 75% more to have the failover server on standby. Even so, the failover is still slow, taking several minutes, so you will get an outage. You would think that if you're paying so much, the failover procedure would take seconds, but it doesn't.
The failover server will not be in the same AZ -- that would defeat the whole point. And, in theory, an AZ outage should last only a few hours, so the inter-AZ data transfers will be short-lived.
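If you do go Multi-AZ and the cross-AZ charges worry you, you can at least detect when a failover has left your web server and the new primary in different AZs. A rough boto3 sketch, assuming the instance metadata service (v1) is reachable and using a made-up DB identifier:

    # Compare this EC2 instance's AZ with the RDS primary's current AZ.
    # "mydb" is a placeholder identifier; IMDSv1 is assumed for brevity.
    import urllib.request
    import boto3

    ec2_az = urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/placement/availability-zone"
    ).read().decode()

    rds = boto3.client("rds")
    db = rds.describe_db_instances(DBInstanceIdentifier="mydb")["DBInstances"][0]
    rds_az = db["AvailabilityZone"]  # AZ of the current primary

    if ec2_az != rds_az:
        print(f"Paying for cross-AZ traffic: EC2 in {ec2_az}, RDS in {rds_az}")

You could run something like this from a cron job and alert (or move the web tier) when the AZs diverge.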
Another point is that if you have, say, your web servers and RDS in AZ us-east-1b, and this AZ goes down, then it's pretty useless having an RDS failover because your web servers are down!
Given all this, I go with "assume robust". This means I expect AWS to be 99.9% available, and if something happens (which has never happened to me), it will be short-lived and something to live with. OR you can almost double your costs and MAYBE it will work if there is an outage.
This does not apply, of course, to mega-sites with servers in many zones, proper load balancing, cluster databases etc. But I'm pretty sure they don't use RDS Multi-AZ!

Related

AWS: How to set up disaster recovery for ec2 instances in 2 VPCs?

What I have:
One VPC with two EC2 Ubuntu instances in it: one with phpMyAdmin, the other with a MySQL database. I am able to connect from one instance to the other.
What I need to achieve:
Set up disaster recovery for those instances, so that in case of networking issues, or if the first VPC is unavailable for any reason, all requests sent to the first VPC are redirected to the second one. If I understand correctly, this can be achieved with VPC endpoints, but I cannot find any guide on how to proceed. (I have 2 VPCs with 2 EC2 instances in each of them.)
Edit:
Currently I have 2 VPCs with 2 EC2 instances in each of them.
Yes, ideally I need to have two databases running and sync the data between them. Right now it is just two separate DB instances with no sync.
The first EC2 instance in each VPC has the web app running, so external requests to the web app should be sent to the first VPC if it is available, and to the second VPC if something is wrong with the first one. Same with the DBs: if the DB instance in the first VPC is available, web app requests should update data in that DB; if not, requests should access the data from the second DB instance.
Traditionally, Disaster Recovery (DR) involves having a secondary copy of 'everything' (eg servers in a different data center). Then, if something goes wrong, failover would involve pointing to the secondary copy.
However, the modern cloud emphasises High Availability rather than Disaster Recovery. An HA architecture actually has multiple systems continually running in separate Availability Zones (AZs) (which are effectively Data Centers). When something goes wrong, the remaining systems continue to service requests without needing to 'failover' to alternate infrastructure. Then, additional infrastructure is brought online to make up for the failed portion.
High Availability can also operate at multiple levels. For example:
High Availability for the database would involve running the database under Amazon RDS's "Multi-AZ" configuration. There is one 'primary' database that is servicing requests, but the data is being continually copied to a 'secondary' database in a different AZ. If the database or its AZ should fail, then the secondary database takes over as the primary database. No data is lost. (A minimal provisioning sketch follows this list.)
High Availability for web apps running on Amazon EC2 instances involves using a Load Balancer to distribute requests to Amazon EC2 instances running in multiple AZs. If an instance or AZ should fail, then the Load Balancer will continue serving traffic to the remaining instances. Auto Scaling would automatically launch new instances to make up for the lost capacity.
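As referenced above, provisioning Multi-AZ is a single flag at creation time. A minimal boto3 sketch; the identifier, instance class, and credentials here are illustrative only:

    # Provision a MySQL RDS instance with a synchronous standby in another AZ.
    # All identifiers, sizes, and credentials are illustrative placeholders.
    import boto3

    rds = boto3.client("rds")
    rds.create_db_instance(
        DBInstanceIdentifier="mydb",
        Engine="mysql",
        DBInstanceClass="db.t3.medium",
        AllocatedStorage=20,
        MasterUsername="admin",
        MasterUserPassword="change-me",  # use Secrets Manager in practice
        MultiAZ=True,  # RDS places the standby in a different AZ automatically
    )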
To compare:
Disaster Recovery is about having a second set of infrastructure that isn't being used. When something fails, the second set of infrastructure is 'switched on' and traffic is redirected there.
High Availability is all about continually handling loads across multiple Data Centers (AZs). When something fails, it keeps going and new infrastructure is launched. There should be no 'outage period'.
You might think that running multiple EC2 instances simultaneously to provide High Availability is more expensive. However, each instance would only need to handle a portion of the load. A single 'Large' instance costs the same as two 'Medium' instances, so splitting the workload between multiple instances does not need to cost more.
Also, please note that VPCs are logical network configurations. A VPC can have multiple Subnets, and each Subnet can be in a different AZ. Therefore, there is no need for two VPCs -- one is perfectly sufficient.
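To make the one-VPC point concrete, here is a hedged sketch of a single VPC with subnets in two AZs; the CIDR blocks and AZ names are arbitrary examples:

    # One VPC with two subnets in different AZs -- no second VPC needed.
    # CIDRs and AZ names are arbitrary examples.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

    ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                      AvailabilityZone="us-east-1a")
    ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24",
                      AvailabilityZone="us-east-1b")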
VPC Endpoints are not relevant for DR or HA. They are a means of connecting from a VPC to AWS Services, and operate across multiple AZs already.
See also:
High availability is not disaster recovery - Disaster Recovery of Workloads on AWS: Recovery in the Cloud
High Availability Application Architectures in Amazon VPC (ARC202) | AWS re:Invent 2013 - YouTube
In addition to the previous answers, you might want to look into migrating your DBs to RDS or Aurora.
It would provide HA for your DB tier via multi-AZ configuration, and you would not have to figure out how to sync the data between the databases.
That being said, you also have to decide what level of availability is acceptable for you:
Multi-AZ - data & services span multiple data centers in one region -> if the whole region goes down, your application goes down.
Multi-Region - data & services span multiple data centers in multiple regions -> a single region failure won't put you out of business, but it requires some more bucks & effort to configure.

How can I specify the subnet of an RDS or DynamoDB?

I would like to specify the subnet for a DynamoDB or RDS database to decrease the latency when I access the database from my EC2 server. Is this possible? Right now the latency for reads from DynamoDB by the EC2 server in the same region is about 0.1 s, which seems much too slow.
How do you measure the latency? I'm extremely surprised to hear that you're getting 100 ms latency with DynamoDB from an EC2 host. In my experience DynamoDB gives pretty consistent latency in the low tens of milliseconds: 10-20 ms at the 99th percentile and 10 ms at the 90th percentile is pretty typical. The median latency (p50) is even lower, with the majority of requests completing in 4-5 ms.
Of course, how long requests take also depends on how much data you're shuttling around. Writing large items for instance may take longer than simply updating a smaller item.
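If you want to pin down where the 100 ms is coming from, it's easy to measure client-side percentiles yourself. A rough sketch; the table name and key are made up, and it should be run from the EC2 host in question:

    # Measure client-side GetItem latency percentiles from this host.
    # Table and key names are made up.
    import time
    import boto3

    ddb = boto3.client("dynamodb", region_name="us-east-1")
    samples = []
    for _ in range(200):
        start = time.perf_counter()
        ddb.get_item(TableName="my-table", Key={"pk": {"S": "some-id"}})
        samples.append((time.perf_counter() - start) * 1000)  # ms

    samples.sort()
    for p in (50, 90, 99):
        print(f"p{p}: {samples[int(len(samples) * p / 100) - 1]:.1f} ms")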
If you do want to configure a VPC endpoint for DynamoDB though, that is totally possible, and you can accomplish it using the aws ec2 create-vpc-endpoint command, as shown in the following guide:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
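For reference, the boto3 equivalent of that CLI command looks roughly like this; the VPC and route table IDs are placeholders:

    # Create a gateway VPC endpoint for DynamoDB (boto3 equivalent of
    # `aws ec2 create-vpc-endpoint`). IDs below are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.create_vpc_endpoint(
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        RouteTableIds=["rtb-0123456789abcdef0"],  # DynamoDB traffic stays private
    )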
Note: In your question you're asking about RDS as well. While RDS does not offer VPC endpoints as a service, an RDS instance already lives inside your VPC, so it doesn't really make sense to talk about a VPC endpoint for RDS databases. You can simply create the instance and not even give it public internet access. Requests to the RDS database will simply flow through the VPC router, straight to the RDS instance endpoint, all within the VPC.

Running aws RDS MySQL in 3 AZs

I'm planning to run MySQL on RDS.
My question is: is it possible to run MySQL in 3 availability zones, or is it limited to 2 AZs? And if it's running in 3 AZs, does that mean I get better redundancy compared with running in two AZs?
Using the RDS Multi-AZ High Availability feature, you can only have one standby replica:
In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption.
This is only a failover solution -- you can't use the standby for load balancing.
You can create additional Read Replicas that cover other availability zones and can be used to horizontally scale read traffic (a provisioning sketch follows the caveats below). But there are two caveats:
Unlike the standby, RDS cannot automatically fail over to a read replica when the primary DB goes down. You would need to implement this yourself using other tools, such as Route 53.
Read replicas use asynchronous replication, so they may lag behind the master. You need to determine if this is acceptable in your failover scenario.
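Creating the replica itself is straightforward. A minimal boto3 sketch, as mentioned above, with placeholder identifiers and an arbitrary third AZ:

    # Add a read replica in another AZ. Note: failover to it is NOT automatic.
    # Identifiers and the AZ are placeholders.
    import boto3

    rds = boto3.client("rds")
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="mydb-replica-1",
        SourceDBInstanceIdentifier="mydb",
        AvailabilityZone="us-east-1c",  # beyond the primary's and standby's AZs
    )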

Cloud Services Risks

Please tell me: what are the risk mitigation features to consider when choosing a cloud service provider for an organization? Reliability seems to be an issue, considering Nirvanix's shutdown and Amazon's outage in August 2013. Thank you.
Quite the open-ended question; I'm sure just about anyone could spend weeks on risk mitigation. There are many procedures put in place, and using Amazon as a provider I'll go through a few.
Amazon has a plethora of tools for disaster recovery, redundancy, and general good practice in the cloud environment, but it is totally up to you whether you choose to use them. Let's take Availability Zones as an example.
In each AWS Region (a location where their datacentres are held) they have what they call Availability Zones, which are completely separate datacentres, in order to improve redundancy. An entire AZ could go offline without affecting the other AZs. A well-executed cloud migration strategy would utilise several of the following:
Spreading all necessary VMs, appliances, databases, etc. evenly over one or more geographic regions
Utilising auto-scaling groups to allow rapid expansion of infrastructure in a single AZ in case of a massive outage in another AZ (also good for flash traffic or periods of high server load)
Utilising Route 53 DNS records to automatically re-route traffic to nearby Elastic Load Balancers, giving your site near-zero downtime through an AZ failure as traffic switches over to a new Region or AZ in milliseconds, done at the Amazon level so there is no waiting for TTL DNS changes (a sketch of such records follows this list)
Elastic Load Balancers in general, to near-automatically place newly spun-up VMs straight into serving traffic
The managed Relational Database Service, which can place a warm backup in another AZ within a single Region and instantly spin up multiple Read Replicas and second-level Read Replicas
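As a taste of the Route 53 piece mentioned above, a hedged sketch of DNS failover records pointing at two load balancers; every ID and DNS name here is a placeholder, and the primary record would normally carry a health check:

    # Route 53 failover routing between two ELBs. All IDs/names are placeholders.
    import boto3

    r53 = boto3.client("route53")
    r53.change_resource_record_sets(
        HostedZoneId="Z0000000000000000000",
        ChangeBatch={"Changes": [
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A",
                "SetIdentifier": "primary", "Failover": "PRIMARY",
                "AliasTarget": {"HostedZoneId": "Z35SXDOTRQ7X7K",
                                "DNSName": "primary-elb.us-east-1.elb.amazonaws.com",
                                "EvaluateTargetHealth": True}}},
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A",
                "SetIdentifier": "secondary", "Failover": "SECONDARY",
                "AliasTarget": {"HostedZoneId": "Z35SXDOTRQ7X7K",
                                "DNSName": "secondary-elb.us-east-1.elb.amazonaws.com",
                                "EvaluateTargetHealth": True}}},
        ]},
    )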
I could go on for days, but for a service like AWS, with a properly implemented cloud strategy, there is a plethora of services and techniques on offer (their white papers at http://aws.amazon.com/whitepapers/ let you get your feet wet in security and deployment).

Amazon availability zones

I'm fairly new to Amazon services and am wondering what some of the best practices are for clustering/load balancing.
I have a load balancer in my colo (NJ) which may potentially be upgraded to Netscaler.
The application we're hosting on Amazon is nothing crazy, and we don't expect too much traffic. We're looking at 2 Linux instances that would run a Node.js application with a MongoDB replica set. From what I understand, Amazon will evenly divide the traffic amongst the zones. The end user's location has no effect on where they'll be directed (i.e. if I have a server on the west coast and one on the east coast, a user on the east coast could be directed to either east or west).
If I wanted to direct users' traffic based on location, would a global DNS solution make more sense?
One server would be the master DB and the other would be a slave, with data replicating between them.
Anybody have any experience with this and how is the network performance?
A question about EC2/S3
EC2 Instances and S3 buckets can only communicate if they are in the same region, correct?
The load balancer only works within one region. If you want to balance traffic between different regions, you will need to look at latency-based routing in Route 53. Keep in mind that availability zone and region have different meanings within EC2.
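For what that looks like in practice, a minimal latency-based routing sketch; the hosted zone ID, record name, and ELB DNS names are all placeholders:

    # Latency-based routing: Route 53 answers with the record for the region
    # closest (by measured latency) to the user. All IDs/names are placeholders.
    import boto3

    r53 = boto3.client("route53")
    for region, elb_dns in [("us-east-1", "east-elb.us-east-1.elb.amazonaws.com"),
                            ("us-west-2", "west-elb.us-west-2.elb.amazonaws.com")]:
        r53.change_resource_record_sets(
            HostedZoneId="Z0000000000000000000",
            ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "app.example.com", "Type": "CNAME",
                "SetIdentifier": region, "Region": region,
                "TTL": 60, "ResourceRecords": [{"Value": elb_dns}]}}]},
        )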
A MongoDB replica set is a flexible master/slave configuration. If the primary instance fails, a secondary can, based on configured priority, automatically become primary. The network within a region is fast; you will have some latency if you use multiple regions.
An EC2 instance can access an S3 bucket in any region; you won't pay for outgoing bandwidth if both are in the same region.