How can I specify the subnet of an RDS or DynamoDB?

I would like to specify the subnet for a DynamoDB or RDS database to decrease the latency when I access the database from my EC2 server. Is this possible? Right now, the latency is about 0.1 s (100 ms) for reads from DynamoDB by an EC2 server in the same region, which seems much too slow.

How do you measure the latency? I'm extremely surprised to hear that you're getting 100 ms latency with DynamoDB from an EC2 host. In my experience DynamoDB gives pretty consistent latency in the low tens of milliseconds: 10-20 ms at the 99th percentile and around 10 ms at the 90th percentile is pretty typical. The median latency (p50) is even lower, with the majority of requests completing in 4-5 ms.
Of course, how long requests take also depends on how much data you're shuttling around. Writing large items, for instance, may take longer than updating a smaller one.
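If you want to sanity-check the numbers you're seeing, time the calls from the EC2 host itself rather than relying on application-level metrics. A minimal sketch with Python and boto3 (the table name, key schema, and region are placeholders for your own):

```python
import time
import boto3

# Placeholder table/key/region -- substitute your own values.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Warm-up call: the first request pays for connection + TLS setup.
dynamodb.get_item(TableName="my-table", Key={"pk": {"S": "some-key"}})

samples = []
for _ in range(200):
    start = time.perf_counter()
    dynamodb.get_item(TableName="my-table", Key={"pk": {"S": "some-key"}})
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

samples.sort()
print(f"p50: {samples[len(samples) // 2]:.1f} ms")
print(f"p90: {samples[int(len(samples) * 0.90)]:.1f} ms")
print(f"p99: {samples[int(len(samples) * 0.99)]:.1f} ms")
```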
If you do want to configure a VPC endpoint for DynamoDB, though, that is totally possible; you can accomplish it with the aws ec2 create-vpc-endpoint command, as shown in the following guide:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
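For reference, here is roughly what the boto3 equivalent of that command looks like; the VPC ID, route table ID, and region below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A gateway endpoint for DynamoDB: traffic from the given route tables
# to DynamoDB stays on the AWS network instead of going out to the internet.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # placeholder
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```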
Note: in your question you're asking about RDS as well. RDS does not offer VPC endpoints as a service, but an RDS instance already lives inside your VPC, so it doesn't really make sense to talk about a VPC endpoint for RDS databases. You can simply create the instance without giving it public internet access. Requests to the RDS database will flow through the VPC router, straight to the RDS instance endpoint, all within the VPC.
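To illustrate that last point, a minimal sketch of launching a private RDS instance with boto3 (all identifiers and the password are placeholders; this is not a production-ready configuration):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# The instance gets only a private endpoint inside the VPC's subnet group;
# EC2 instances reach it through the VPC's internal routing.
rds.create_db_instance(
    DBInstanceIdentifier="my-private-db",    # placeholder
    DBInstanceClass="db.t3.micro",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me",          # placeholder; prefer Secrets Manager
    AllocatedStorage=20,
    DBSubnetGroupName="my-db-subnet-group",  # placeholder
    PubliclyAccessible=False,
)
```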

Related

AWS: How to set up disaster recovery for ec2 instances in 2 VPCs?

What I have:
One VPC with 2 EC2 Ubuntu instances in it: one running phpMyAdmin, the other running a MySQL database. I am able to connect from one instance to the other.
What I need to achieve:
Set up disaster recovery for those instances. In case of networking issues, or if the first VPC is unavailable for any reason, all requests sent to the first VPC should be redirected to the second one. If I got it right, this can be achieved with VPC endpoints, but I cannot find any guide on how to proceed. (I have 2 VPCs with 2 EC2 instances in each of them.)
Edit:
Currently I have 2 VPCs with 2 EC2 instances in each of them.
Yes, ideally I need to have 2 databases running and sync the data between them. Right now it is just 2 separate DB instances with no sync.
The first EC2 instance in each VPC runs the web app, so external requests to the web app should be sent to the first VPC if it is available and to the second VPC if something is wrong with the first one. Same with the DBs: if the DB instance in the first VPC is available, web app requests should update data in that DB; if not, requests should read the data from the second DB instance.
Traditionally, Disaster Recovery (DR) involves having a secondary copy of 'everything' (e.g. servers in a different data center). Then, if something goes wrong, failover would involve pointing to the secondary copy.
However, the modern cloud emphasises High Availability rather than Disaster Recovery. An HA architecture actually has multiple systems continually running in separate Availability Zones (AZs) (which are effectively Data Centers). When something goes wrong, the remaining systems continue to service requests without needing to 'failover' to alternate infrastructure. Then, additional infrastructure is brought online to make up for the failed portion.
High Availability can also operate at multiple levels. For example:
High Availability for the database would involve running the database under the Amazon RDS "Multi-AZ" configuration. There is one 'primary' database servicing requests, but the data is continually copied to a 'secondary' database in a different AZ. If the database or AZ should fail, then the secondary database takes over as the primary database. No data is lost. (A sketch of enabling this follows after these two examples.)
High Availability for web apps running on Amazon EC2 instances involves using a Load Balancer to distribute requests to Amazon EC2 instances running in multiple AZs. If an instance or AZ should fail, then the Load Balancer will continue serving traffic to the remaining instances. Auto Scaling would automatically launch new instances to make up for the lost capacity.
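To make the database part concrete: Multi-AZ is just a flag on the RDS instance. A minimal boto3 sketch for turning it on for an existing instance (the instance identifier is a placeholder):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Converts an existing single-AZ instance to Multi-AZ; RDS provisions
# the standby replica in another AZ and keeps it in sync automatically.
rds.modify_db_instance(
    DBInstanceIdentifier="my-database",  # placeholder
    MultiAZ=True,
    ApplyImmediately=True,
)
```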
To compare:
Disaster Recovery is about having a second set of infrastructure that isn't being used. When something fails, the second set of infrastructure is 'switched on' and traffic is redirected there.
High Availability is all about continually handling loads across multiple Data Centers (AZs). When something fails, it keeps going and new infrastructure is launched. There should be no 'outage period'.
You might think that running multiple EC2 instances simultaneously to provide High Availability is more expensive. However, each instance would only need to handle a portion of the load. A single 'Large' instance costs the same as two 'Medium' instances, so splitting the workload between multiple instances does not need to cost more.
Also, please note that VPCs are logical network configurations. A VPC can have multiple Subnets, and each Subnet can be in a different AZ. Therefore, there is no need for two VPCs -- one is perfectly sufficient.
VPC Endpoints are not relevant for DR or HA. They are a means of connecting from a VPC to AWS Services, and operate across multiple AZs already.
See also:
High availability is not disaster recovery - Disaster Recovery of Workloads on AWS: Recovery in the Cloud
High Availability Application Architectures in Amazon VPC (ARC202) | AWS re:Invent 2013 - YouTube
In addition to the previous answers, you might want to take a look at migrating your DBs to RDS or Aurora.
It would provide HA for your DB tier via a Multi-AZ configuration, and you would not have to figure out how to sync the data between the databases.
That being said, you also have to decide what level of availability is acceptable for you:
multi-AZ - data & services span multiple data centers in one region -> if the whole region goes down, your application goes down.
multi-region - data & services span multiple data centers in multiple regions -> a single region failure won't put you out of business, but it requires some more bucks & effort to configure.

Performance test in AWS: How to guarantee bandwidth?

I need to run a performance test against an application based on Elastic Beanstalk located in AWS, fronted by an ELB.
I expect traffic to be around 25 Gbit/s
As per AWS requirements, I am using another account (dedicated to tests) from my AWS organisation.
The application is a production application in another account of my AWS organisation.
My performance test will use the DNS entry of the production website; it will be executed by EC2 instances in a subnet of a VPC that has an internet gateway.
I have a doubt regarding bandwidth: from the AWS documentation I have read, I can't tell whether there will be a bandwidth limitation or not.
From this answer it seems I may face such issues:
https://stackoverflow.com/a/62344703/9565222
In this case, how can I run a performance test that reflects what happens in production, i.e. one that passes through the DNS entry pointing to the ELB?
Let's say I create a peering connection between the test-account VPC and the production VPC: what is the max bandwidth?
My test shows that with 3 c5d.9xlarge instances using a VPC peering connection, I only get around 10 Gbit/s, so that would seem to be the max regardless of the number of instances.
Another test shows that with 3 c5d.9xlarge instances using an Internet Gateway, I get varying bandwidth capped around 12 Gbit/s, but I cannot tell what the real limit is.
So what are my options?
- VPC Peering, it seems, is not one
- An Internet Gateway from multiple machines may be, but I would like some kind of guarantee
- Are there better options (Transit Gateway?)
I need to run a performance test against an application based on Elastic Beanstalk located in AWS, fronted by an ELB. I expect traffic to be around 25 Gbit/s.
That sounds totally fine; ELB can easily handle 25 Gbps.
Make sure that your test reflects what your production load is going to be like. If your production load is all coming from a very small number of sources, replicate that. If it's coming from a very large number of sources (e.g., lots of users of a client app, each generating a bit of traffic, resulting in a ton of total aggregated traffic), make sure you replicate that. There are differences that may seem nuanced if you're not experienced in this kind of testing, and reproducing the real environment as closely as possible is the easiest way to avoid any of those issues.
For testing with a very large number of relatively low-bandwidth sources, take a look at projects like these:
Bees with Machine Guns
Tsung
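For a very rough sense of what these tools do, here is a minimal Python sketch that generates load from many concurrent connections. It is nowhere near a substitute for the tools above, and the target URL and counts are placeholders:

```python
import concurrent.futures
import time
import urllib.request

TARGET = "https://example.com/"  # placeholder: your ELB's DNS name
WORKERS = 100                    # number of concurrent "users"
REQUESTS_PER_WORKER = 50

def worker() -> int:
    # Each worker issues sequential requests and returns total bytes read.
    total = 0
    for _ in range(REQUESTS_PER_WORKER):
        with urllib.request.urlopen(TARGET, timeout=10) as resp:
            total += len(resp.read())
    return total

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
    totals = list(pool.map(lambda _: worker(), range(WORKERS)))
elapsed = time.perf_counter() - start

print(f"{sum(totals) / elapsed / 1e6:.1f} MB/s aggregate")
```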
I have a doubt regarding bandwidth: from the AWS documentation I have read, I can't tell whether there will be a bandwidth limitation or not.
Some components in AWS have bandwidth limitations, some don't.
Specifically, EC2 instances each have a maximum bandwidth they support, depending on the instance type. Also, even if a given EC2 instance type supports a certain bandwidth, you need to be sure that the OS running on that instance supports it. This usually means ensuring that the correct network drivers are being used. In my experience, as long as you use the most recent version of Amazon Linux available, everything should "just work".
Also, as I mention in more detail later, VPC Peering Connections and Internet Gateways do not limit bandwidth.
Let's say I create a peering connection between the test-account VPC and the production VPC: what is the max bandwidth?
VPC Peering Connections are not a bandwidth bottleneck. That is, they don't limit the amount of bandwidth you have across the peering connection.
From the Amazon VPC FAQ:
Q. Are there any bandwidth limitations for peering connections?
Bandwidth between instances in peered VPCs is no different than bandwidth between instances in the same VPC.
[nb: there's a note about placement groups in the FAQ, but you didn't mention those, so I removed it; if you are using that feature, please clarify, as it's something you most likely shouldn't be using anyway based on what you originally described in the question]
My test shows that with 3 c5d.9xlarge instances using a VPC peering connection, I only get around 10 Gbit/s
The c5d.9xlarge instance type is limited to 10 Gbps. So if you use that for your test, you won't ever see one instance with more than 10 Gbps.
More info here: Amazon EC2 C5 Instances.
Also, make sure you check out the EC2 C6g instances. I haven't personally used them, but they are supposed to be significantly faster and lower cost: they were released just 2 days ago.
Another test shows that with 3 c5d.9xlarge instances using an Internet Gateway, I get varying bandwidth capped around 12 Gbit/s [...]
The Internet Gateway isn't a bandwidth bottleneck. In other words, there's no bandwidth limit imposed by the Internet Gateway.
In fact, there's no "single device" that is an Internet Gateway. Think of it more as a "flag" that tells the VPC networking system that your VPC has a path to and from the Internet.
From the Amazon VPC FAQ:
Q. Are there any bandwidth limitations for Internet gateways? Do I need to be concerned about its availability? Can it be a single point of failure?
No. An Internet gateway is horizontally-scaled, redundant, and highly available. It imposes no bandwidth constraints.
So what are my options? - VPC Peering, it seems, is not one - An Internet Gateway from multiple machines may be, but I would like some kind of guarantee - Are there better options (Transit Gateway?)
VPC Peering is probably the best choice here. As I mentioned, it is not what's limiting your bandwidth. Check the other things I mentioned before: the instance type, the OS, the drivers, etc.
Using an Internet Gateway for this implies that, from a routing perspective, your traffic is "leaving AWS" and going "out to the Internet" (even though, physically, it probably won't ever truly leave AWS's physical devices). This means that, from a billing perspective, you'll be charged "Data Transfer Out to the Internet" rates. They are significantly higher than what you'd pay for VPC Peering.
I see no need for a Transit Gateway here, as the scenario you describe is really simple and can be solved with a VPC Peering Connection.

Will AWS RDS Multi-AZ failover cause unexpected data transfer costs?

According to the Amazon docs, I should set up my RDS database with multiple availability zones (Multi-AZ) for high availability and automatic failover. However, data transfer between EC2 and RDS within the same availability zone is free, whereas data transfer between zones is not (see pricing). So if I set up my webserver on an EC2 instance in the same AZ as my database server, to get the zero data transfer costs, and the database server then fails and automatically fails over to a different AZ, will Amazon suddenly start charging me data transfer costs?
Am I missing something here? Is there a way to minimise this data transfer cost, or is it just luck if you end up running in the same AZ or not?
I don't really like Multi-AZ. You are worried about the data transfer costs, but these tend to be minimal; I normally pay cents. With Multi-AZ the real cost is that you pay about 75% more to have the failover server on standby... however, the failover is still slow, taking several minutes, so you will get an outage. You would have thought that if you're paying so much, the failover procedure would take seconds, but it doesn't.
The failover server will not be in the same AZ - this defeats the whole point. And, theoretically, the outage should last only a few hours, so the inter-AZ data transfers will be short-lived.
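To put a rough number on "minimal", assuming the commonly cited cross-AZ rate of about $0.01/GB in each direction (check current pricing for your region):

```python
# Back-of-the-envelope: cross-AZ traffic during a failover window.
gb_per_day = 50          # placeholder: your EC2 <-> RDS traffic volume
rate_per_gb = 0.01 * 2   # assumed $0.01/GB charged in each direction
outage_days = 0.5        # a failover lasting half a day

print(f"${gb_per_day * rate_per_gb * outage_days:.2f}")  # -> $0.50
```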
Another point is that if you have, say, your web servers and RDS in AZ us-east-1b, and this AZ goes down, then it's pretty useless having an RDS failover because your web servers are down!
Given all this, I go with "assume robust". This means I expect AWS to be 99.9% available, and if something happens (which never has for me), it will be short-lived and something to live with. OR you can almost double your costs and MAYBE it will work if there is an outage.
This does not apply, of course, to mega-sites with servers in many zones, proper load balancing, cluster databases etc. But I'm pretty sure they don't use RDS Multi-AZ!

Amazon availability zones

I'm fairly new to Amazon services and wondering what some of the best practices are for clustering/load balancing?
I have a load balancer in my colo (NJ) which may potentially be upgraded to Netscaler.
The application we're hosting on Amazon is nothing crazy and we don't expect too much traffic. We're looking at 2 Linux instances that would run a Node.js application with a MongoDB replica set. From what I understand, Amazon will evenly divide the traffic amongst the zones. The end user's location has no effect on where they'll be directed (i.e. if I have a server on the west coast and one on the east coast, a user on the east coast could be directed to either).
If I wanted to direct users' traffic based on location, would a global DNS solution make more sense?
One server would be the master DB and the other would be a slave, with data replicating between them.
Anybody have any experience with this and how is the network performance?
A question about EC2/S3
EC2 Instances and S3 buckets can only communicate if they are in the same region, correct?
The load balancer only works within one region. If you want to balance traffic between different regions you will need to look at latency-based routing in Route 53. Keep in mind that availability zone and region have different meanings within EC2.
A MongoDB replica set is a flexible master/slave configuration. If the primary instance fails, a secondary can automatically become primary, based on its configured priority. The network within a region is fast; you will have some latency if you use multiple regions.
An EC2 instance can access an S3 bucket in any region; you won't pay for outgoing bandwidth if both are in the same region.

EC2 instance region via IP Address

I'm trying to get my EC2 instances to communicate better with APIs of a 3rd party service. Latency is extremely important as voice communication is heavily involved & lag is intolerable.
I know a few of the providers use EC2, but the thing is, Amazon's IP system makes it difficult to find which region an instance is in. With non-Elastic-IP services I could do a whois and find out whether it was in Australia or somewhere in Europe, so I could put a server close by.
With these Elastic IPs, how can I find which zone they're in? I can use ping times, but that's a bit of a guess, and I'd have to create instances in all the different regions to find the shortest ping time.
Amazon EC2 regularly publishes its Amazon EC2 Public IP Ranges, which clusters them by Region.
It does not cluster them by Availability Zone (AZ) (if you actually meant that literally), but this shouldn't matter much, insofar as cross-AZ latency should typically be in the single-digit millisecond range.
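The published ranges are also available as machine-readable JSON at https://ip-ranges.amazonaws.com/ip-ranges.json, so the lookup can be automated. A minimal Python sketch:

```python
import ipaddress
import json
import urllib.request

IP_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

def region_for_ip(ip: str) -> str | None:
    """Return the AWS region whose published EC2 range contains `ip`."""
    with urllib.request.urlopen(IP_RANGES_URL) as resp:
        data = json.load(resp)
    addr = ipaddress.ip_address(ip)
    for prefix in data["prefixes"]:
        if prefix["service"] == "EC2" and addr in ipaddress.ip_network(prefix["ip_prefix"]):
            return prefix["region"]
    return None

print(region_for_ip("54.240.196.1"))  # example address; prints its region, or None
```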
Other than that you might also be interested in my answer to How could I determine which AWS location is best for serving customers from a particular region?, which outlines two other options for handling this based on external data/algorithms or via the Multi-Region Latency Based Routing now Available for AWS (which would likely only be useful when fully embracing Amazon Route 53 as well).
Put your server behind a Route 53 DNS and let latency-based routing do the rest for you - it can automatically pick the lowest-latency server for each user.
http://aws.typepad.com/aws/2012/03/latency-based-multi-region-routing-now-available-for-aws.html
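To sketch what that looks like with boto3 (the hosted zone ID, domain name, and addresses are placeholders): creating one record per region with a Region attribute and a SetIdentifier is what turns on latency-based routing for the name.

```python
import boto3

route53 = boto3.client("route53")

def latency_record(region: str, ip: str) -> dict:
    # One record per region; SetIdentifier distinguishes the records in the set.
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",  # placeholder
            "Type": "A",
            "SetIdentifier": region,
            "Region": region,           # enables latency-based routing
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",  # placeholder
    ChangeBatch={
        "Changes": [
            latency_record("us-east-1", "203.0.113.10"),
            latency_record("eu-west-1", "203.0.113.20"),
        ]
    },
)
```

Route 53 then answers each DNS query with the record whose region has the lowest measured latency to the client.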