How to direct traffic to different servers per geographic location? - amazon-web-services

If I have servers placed across the globe through AWS, Rackspace, some other cloud, or even bare metal, how do I direct traffic from, say Singapore, to a server instance living in the Asia region?
Is it some kind of load balancing, or DNS type things I would have to configure?
Thanks!

Use Route 53 Latency Based Routing: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingLatencyRRSets.html
UPDATE: Route 53 now supports geolocation resource record sets:
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values-geo.html
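As a sketch of what a geolocation setup looks like in practice, the change batch below (the domain, IPs, and record names are all hypothetical) sends Asian visitors to a Singapore server and everyone else to a default record. Route 53 requires a default record (CountryCode "*") to catch locations that match no geolocation rule:

```python
import json

# Hypothetical change batch for Route 53's ChangeResourceRecordSets API:
# visitors in Asia resolve to the Singapore server, everyone else falls
# through to the default record.
def geolocation_change_batch(domain, asia_ip, default_ip):
    def record(set_id, location, ip):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "SetIdentifier": set_id,
                "GeoLocation": location,
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        }
    return {
        "Changes": [
            # ContinentCode "AS" covers Asia; CountryCode "*" is the
            # required default record for unmatched locations.
            record("asia", {"ContinentCode": "AS"}, asia_ip),
            record("default", {"CountryCode": "*"}, default_ip),
        ]
    }

batch = geolocation_change_batch("example.com.", "203.0.113.10", "198.51.100.20")
print(json.dumps(batch, indent=2))
```

With boto3 you would pass this dict as the ChangeBatch argument to route53.change_resource_record_sets, along with your hosted zone ID.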

As Julio pointed out, Latency Based Routing on Route 53 is a good option for this. One downside: if you're hoping to say "users in China go to this datacenter," Route 53 won't do that for you. Instead, it constantly measures the latencies to each of the AWS datacenters and simply sends users to the best AWS option (of the ones you have configured). To be honest, this is probably good enough for you.
Lots of other DNS providers have similar offerings. I believe, however, that they mainly focus on letting you decide where each region should go. I'm not a huge fan of this approach, but it does give you a bit more flexibility at a cost of effort and potentially performance (if you're worried about that).


HAProxy vs ALB or any other load balancer: which one to use?

We are looking to separate our blog platform onto a separate EC2 server (in Nginx) for better performance and scalability.
Scenario is:
Web request (www.example.com) -> Load Balancer/Route -> Current EC2 Server
Blog request (www.example.com/blog) -> Load Balancer/Route -> New Separate EC2 Server for blog
Please help in this case what is the best option to use:
Haproxy
ALB - AWS
Any other solution?
Also, is it possible to have the load balancer or routing mechanism in a different AWS region? We are currently hosted in AWS.
Haproxy
You would have to set this up on an EC2 server and manage everything yourself. You would be responsible for scaling this correctly to handle all the traffic it gets. You would be responsible for deploying it to multiple availability zones to provide high availability. You would be responsible for installing all security updates on the operating system.
ALB - AWS
Amazon will automatically scale this out to handle any amount of traffic you get. Amazon will handle all security patches of the underlying system. Amazon provides free SSL certificates for ALBs. Amazon will deploy this automatically across multiple availability zones to provide high availability.
Any other solution?
I think AWS Global Accelerator would work here as well, but you would have to weigh the differences between Global Accelerator and ALB to decide which fits your use case and budget the best.
You could also look at placing a CDN in front of everything, like CloudFront or Cloudflare.
Also, is it possible to have the load balancer or routing mechanism in a different AWS region?
AWS Global Accelerator would be the thing to look at if load balancing in different regions is a concern for you. Given the details you have provided I'm not sure why you would want this however.
Probably what you really need is a CDN in front of your websites, with or without the ALB.
Scenario is:
Web request (www.example.com) -> Load Balancer/Route -> Current EC2 Server
Blog request (www.example.com/blog) -> Load Balancer/Route -> New Separate EC2 Server for blog
In my view you can use an ALB deployed across multiple AZs for high availability, for the following reasons:
AWS ALB allows you to route traffic based on various attributes, and the path in the URL is one of them:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#rule-condition-types
With an ALB you can have two target groups, each with instances handling traffic: one for the first path (www.example.com) and a second target group for the other path (www.example.com/blog).
ALB also supports SNI (which lets it serve multiple certificates for multiple domains behind a single ALB), so all you need to do is set up a single HTTPS listener and upload your certificates: https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/
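As a rough sketch of the path-based rule described above, the payload you would hand to the ELBv2 CreateRule API could look like the following (the ARNs are hypothetical placeholders; the listener's default action, not shown here, would forward everything else to the main site's target group):

```python
# Hypothetical ELBv2 create_rule payload: requests whose path starts with
# /blog are forwarded to the blog target group.
def blog_rule(listener_arn, blog_target_group_arn, priority=10):
    return {
        "ListenerArn": listener_arn,
        "Priority": priority,
        "Conditions": [
            # Match both the bare /blog path and everything under it.
            {"Field": "path-pattern",
             "PathPatternConfig": {"Values": ["/blog", "/blog/*"]}}
        ],
        "Actions": [
            {"Type": "forward", "TargetGroupArn": blog_target_group_arn}
        ],
    }

rule = blog_rule(
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/demo/abc/def",
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/blog/123",
)
```

With boto3 this dict would be unpacked into elbv2.create_rule(**rule); the equivalent can also be expressed in the console or CloudFormation.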
I have answered [something similar]; it might help you as well.
This is my opinion; take it as that. I am sure a lot of people won't agree.
If your project is small or personal, you can go with HAProxy (cheap: USD 4 or less if you get a t3a as a spot instance, or free if you place it inside another EC2 instance of yours, perhaps using Docker).
If your project is not personal or not small, go with ALB (expensive, but simpler and better integrated with other AWS services).
HAProxy can handle tons of connections, but you have to do more things by yourself. ALB can also handle tons of connections, and AWS will do most of the work.
I think HAProxy is more suitable for personal/small projects because if your project doesn't grow, you don't have to touch HAProxy. It is set-and-forget, the same as ALB, but costs less.
You usually won't worry about availability zones or disaster tolerance in a personal project, so HAProxy should be easy to configure.
Another consideration: AWS offers a free tier on ALB, so if your project will run for less than a year, ALB is the way to go.
If you are learning, then ALB should be considered, because real clients usually love to stick to AWS in all aspects; HAProxy is your call and also your risk (taken just to reduce costs for a company that usually pays far more than that for your salary, so it's not worth the risk).

Does traffic going from AWS US-East to AWS Singapore travel over a priority route? Or is it subject to normal routing rules?

Does traffic traveling from AWS US-East to AWS Singapore travel over a priority route? Or is it subject to normal internet routing?
To paint an example:
Server A is in AWS-US-East (us-east)
Server B is in AWS-Singapore (southeast-asia)
Server C is in Azure-Singapore (southeast-asia)
Assuming the same network request is made to two different availability zones:
A <-- Http Request 1 --> B
A <-- Http Request 2 --> C
Would there be a noticeable speed increase when making network requests between Http Request 1 and Http Request 2 given that Request 1 travels between two AWS availability zones?
See Behind the Scenes: Exploring the AWS Global Network (NET305) from AWS re:Invent 2018, starting around 25:00. AWS manages the transport infrastructure connecting all the regions together, except for China... so inter-region traffic is routed across high-bandwidth, uncongested links under AWS's control.
This should typically offer the best case performance across distances like these. Connections on the public Internet are not likely to be any better. They might be comparable or they might be worse, but it is impossible to definitively say what the anticipated differences between your specific scenarios might be -- or how performance on the Internet might vary over time. Over the AWS network, performance should be quite consistent over time.
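If you want to compare the two paths empirically, one simple approach (a sketch; the endpoints you test against are up to you) is to time a handful of small requests from the same vantage point and compare medians, which damps one-off outliers better than a single measurement:

```python
import time
import urllib.request

def median(values):
    # Middle value of a sorted copy (upper median for even counts).
    return sorted(values)[len(values) // 2]

def median_latency(url, samples=5):
    # Time several small requests to the same URL and take the median.
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url, timeout=10).read(1)
        timings.append(time.perf_counter() - start)
    return median(timings)
```

Running median_latency from an instance in us-east-1 against each Singapore endpoint would give a rough picture of Request 1 vs. Request 2; repeating it at different times of day shows the consistency difference described above.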
If you find a significant difference, you can deploy a proxy server in EC2 in ap-southeast-1 (Singapore) and use it from us-east-1 to contact the external service, instead of contacting it directly via the Internet. This will cause you to pay cross-region as well as out-to-Internet pricing for bandwidth, but that may be worthwhile if the route is either superior in latency or more consistent in performance and stability over time.

AWS autoscaling and DDoS Attacks

I have created an ELB with an autoscaling group on AWS, and I was wondering: how does autoscaling work when a DDoS attack takes place? Is it going to scale until the limit I have set? How can I protect my AWS infrastructure against this? Thanks a lot in advance.
There are a lot of varieties of DDoS, and an autoscaling group won't implicitly distinguish a DDoS attack from normal traffic.
Assuming your scaling policies are set up correctly, your autoscaling group MAY grow (in number of instances) in the event of a DDoS attack, because the instances are receiving a high volume of traffic and overloading. (I say "may" because all applications respond slightly differently to high volumes and varieties of traffic. I have worked with applications that don't play nicely with scaling policies without extra engineering. Also, if your maximum number of instances has already been reached, the group will not continue to grow.)
The problem is that there is nothing to distinguish real traffic from non-real traffic, so your services will still be flooded with the "fake" stuff. The general goal is to filter DDoS traffic before it hits your application instances.
That being said, AWS has some services to help against DDOS attacks:
https://aws.amazon.com/answers/networking/aws-ddos-attack-mitigation/
Specifically, AWS Shield and AWS WAF allow you to use tools like pattern matching or geolocation blocking to reject the unwanted traffic before it attacks your infrastructure. Different services use different mitigation techniques. If you implement some of these services, they will help you respond effectively and keep your costs down, but there is no one-size-fits-all methodology that I'm aware of.
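As one concrete sketch of the kind of rule AWS WAF supports, a WAFv2 rate-based rule blocks any single source IP that exceeds a request threshold within a rolling five-minute window (the rule name, limit, and priority below are hypothetical choices, not recommendations):

```python
# Hypothetical WAFv2 rate-based rule: block any single IP exceeding
# `limit` requests in WAF's rolling five-minute window.
def rate_limit_rule(name="rate-limit", limit=2000, priority=1):
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {
            # Aggregate request counts per source IP.
            "RateBasedStatement": {"Limit": limit, "AggregateKeyType": "IP"}
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }
```

A rule like this would go into the Rules list of a WAFv2 web ACL (e.g., via wafv2.create_web_acl in boto3) attached to your ALB or CloudFront distribution, so flood traffic is dropped before it ever reaches the autoscaling group.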
Depending on budget, there are other organizations and applications that you can work with to prepare yourself. If this is your first application or your organization is just starting out, I wouldn't worry too much about DDoS mitigation. Becoming comfortable with web application firewalls/shields is a good starting point for a host of other benefits that are likely more relevant in the early days (good security hygiene, familiarity with an application's traffic, etc.).

Why is Cloudfront serving me from datacenter far from me?

I'm located 1000 miles from Singapore. I use S3 in Singapore region with CloudFront to serve data.
When I download content, CloudFront appears to serve me from a US (Washington) server (checking IP addresses).
Why doesn't it serve from Singapore instead?
GeoIP lookups for IP addresses associated with CDNs are notoriously inaccurate.
Services that provide a GeoIP lookup gather information about the geographical assignment of each IP address from a wide range of sources and do their best to provide an accurate geographic assignment. In my experience, cheaper services are 80%-85% accurate, while the most expensive services are not more than around 90% accurate.
AWS does not publish the assignment of IP addresses to specific regions. Instead, they designate the IP addresses merely as GLOBAL. As a result, the specific geography of each IP is likely unknown to the GeoIP service you are using, and they make the best guess they can.
Additionally, a CDN will attempt to use the node with the least latency to you. Latency generally increases with geographic distance, but at times the longer route may offer lower latency due to a faster or less congested connection.
In your case, I suspect that you are receiving data from Singapore and your GeoIP provider is just getting the location of the IP wrong.

A better usage of Weighted Round Robin Routing in Amazon Route 53

The question might not be as fundamental as you thought. First of all, thanks for reading it. I am a computer science student. I am just beginning to learn about AWS, especially Route 53, so please forgive me if there is anything that hurts your eyes :)
We all know that Amazon Route 53 provides customers with the ability to route users to EC2 instances, S3 buckets, and Elastic Load Balancers across multiple availability zones and regions, and that there are different forms of DNS load balancing, including:
LBR/Latency Based Routing, to route to the region with the lowest latency
WRR/Weighted Round Robin, to assign weights to different targets
Also, user-specified configurations that combine both are possible (LBR+WRR).
Route 53's flexibility allows users to save costs; however, manual configuration can become increasingly complex for final users. Looking for the best non-probabilistic policy (such as the WRR weights) is NP-complete.
What are the possible cases in which we need to give server IP addresses different weights, given that there can be EC2 servers across multiple availability zones, and that instances can contain both front end and back end, or contain either application tiers or databases only? Are there any ideas for finding a better usage of Route 53 in combination with other AWS services, in order to improve the performance of interactive multi-tier cloud applications?
Sorry for the lengthy question. I am looking for thoughts and ideas about the best way/starting point to experiment with better usage of Route 53 in combination with other AWS services for a multi-tier cloud application. Not necessarily a 100% correct answer. Any ideas or suggestions are welcome. Many thanks in advance!
UPDATE:
I should probably rephrase the question: what is the purpose of having weighted record sets in Route 53, i.e., in a DNS service? Obviously, WRR in DNS can control portions of traffic, but if we rely solely on DNS load balancing (or load distribution), we are going to put a heavy workload on many other DNS servers. One case I could think of is that websites like Google or Facebook potentially get tons of domain name queries; WRR DNS load balancing can be useful there, and there has to be some sort of session stickiness, since sharing sessions across servers seems to be a bad idea.
Is there any other way of using, or purpose for, weighted records in Route 53?
Thank you very much for considering my question!
Another use case to consider is A/B testing of frontend or backend services. Let me illustrate: Let's say we've just CI-tested version 1.0.1 of our web application (which runs in a Docker container), and we've deployed the container but we're not yet routing traffic to it. We don't want to flip a switch and immediately dump our one million daily active users (woohoo!) onto v1.0.1 until we can give it a little real-world testing. So we decide to use the Weighted Round Robin load balancing available in Route 53 to send 0.25% of our users to the v1.0.1 container(s), allowing us to feel out the new version with real-world users before flipping the switch. We can do the same thing with virtually any service that uses hostname lookup to find resources.
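A sketch of what that canary split looks like as Route 53 weighted record sets follows (the domain and IPs are hypothetical; note that Route 53 weights are integers from 0 to 255, so an exact 0.25% split isn't representable: 1 out of 255 is about 0.4%):

```python
# Hypothetical weighted record pair for ChangeResourceRecordSets: the
# canary record receives canary_weight / total of DNS responses.
def weighted_records(domain, stable_ip, canary_ip, canary_weight, total=255):
    def record(set_id, ip, weight):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "SetIdentifier": set_id,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        }
    return {"Changes": [
        record("stable", stable_ip, total - canary_weight),
        record("canary", canary_ip, canary_weight),
    ]}

# Roughly 1 in 255 lookups resolve to the v1.0.1 canary:
batch = weighted_records("app.example.com.", "198.51.100.1", "203.0.113.1",
                         canary_weight=1)
```

Ramping the rollout is then just raising canary_weight (and lowering the stable weight) in subsequent change batches until the canary takes all traffic.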
One use case is to load balance internal services that can't sit behind an Elastic Load Balancer, like RDS or ElastiCache read replicas. Instead of creating an EC2 instance running, for example, HAProxy to load balance those services, you can create a Route 53-level balancer based on weights or latency.
My guess is that internally they use a custom load balancer at the DNS server that balances requests based on domain aliases and the selected balancing policy.