AWS autoscaling and DDoS Attacks - amazon-web-services

I have created an ELB with an Auto Scaling group on AWS, and I was wondering: how does autoscaling behave when a DDoS attack takes place? Is it going to scale until the limit I have set? How can I protect my AWS infrastructure against this? Thanks a lot in advance.

There are many varieties of DDoS attack, and an Auto Scaling group won't implicitly distinguish a DDoS attack from normal traffic.
Assuming your scaling policies are set up correctly, your Auto Scaling group MAY grow (in number of instances) during a DDoS attack, because the instances are receiving a high volume of traffic and overloading. (I say may because applications respond differently to high volumes and varieties of traffic; I have worked with applications that don't play nicely with scaling policies without extra engineering. Also, if your maximum number of instances has already been reached, the group will not continue to grow.)
The problem is that nothing distinguishes real traffic from non-real traffic, so your services will still be flooded with the 'fake' requests. The general goal is to filter DDoS traffic before it hits your application instances.
That being said, AWS has some services to help against DDOS attacks:
https://aws.amazon.com/answers/networking/aws-ddos-attack-mitigation/
Specifically, AWS Shield and AWS WAF let you use tools like pattern matching or geolocation blocking to reject the unwanted traffic before it reaches your infrastructure. Different services use different mitigation techniques. If you implement some of these services, they will help you respond effectively and keep your costs down, but there is no 'one size fits all' methodology that I'm aware of.
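For instance, a WAF rate-based rule can automatically block IPs that exceed a request threshold. Here is a hedged sketch using boto3; the ACL name, metric names, and the 2,000-request limit are illustrative assumptions, not values from the question:

```python
# Sketch: a WAFv2 rate-based rule that blocks any single source IP exceeding
# ~2000 requests per 5-minute window. All names and limits are illustrative.
rate_limit_rule = {
    "Name": "rate-limit-per-ip",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,           # requests per 5 minutes, per source IP
            "AggregateKeyType": "IP",
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "RateLimitPerIp",
    },
}

def create_web_acl():
    """Create a regional web ACL (attachable to an ALB) holding the rule."""
    import boto3  # AWS SDK for Python; imported lazily so the sketch loads without it
    client = boto3.client("wafv2")
    return client.create_web_acl(
        Name="ddos-mitigation-acl",  # hypothetical name
        Scope="REGIONAL",            # REGIONAL scope is what ALBs use
        DefaultAction={"Allow": {}},
        Rules=[rate_limit_rule],
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "DdosMitigationAcl",
        },
    )
```

Once the web ACL exists, it can be associated with the load balancer so filtering happens before traffic reaches the instances behind the Auto Scaling group.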
Depending on your budget, there are other organizations and products you can work with to prepare yourself. If this is your first application or your organization is just starting out, I wouldn't worry too much about DDoS mitigation. Getting comfortable with web application firewalls/shields is a good starting point, with a host of other benefits that are likely more relevant in the early days (good security hygiene, familiarity with an application's traffic, etc.).

Related

HAProxy vs ALB or any other load balancer: which one to use?

We are looking to separate our blog platform onto a separate EC2 server (running Nginx) for better performance and scalability.
Scenario is:
Web request (www.example.com) -> Load Balancer/Route -> Current EC2 Server
Blog request (www.example.com/blog) -> Load Balancer/Route -> New Separate EC2 Server for blog
Please help: in this case, what is the best option to use?
HAProxy
ALB - AWS
Any other solution?
Also, is it possible to have the load balancer or routing mechanism in a different AWS region? We are currently hosted in AWS.
HAProxy
You would have to set this up on an EC2 server and manage everything yourself. You would be responsible for scaling this correctly to handle all the traffic it gets. You would be responsible for deploying it to multiple availability zones to provide high availability. You would be responsible for installing all security updates on the operating system.
ALB - AWS
Amazon will automatically scale this out to handle any amount of traffic you get. Amazon will handle all security patches of the underlying system. Amazon provides free SSL certificates for ALBs. Amazon will deploy this automatically across multiple availability zones to provide high availability.
Any other solution?
I think AWS Global Accelerator would work here as well, but you would have to weigh the differences between Global Accelerator and ALB to decide which fits your use case and budget the best.
You could also look at placing a CDN in front of everything, like CloudFront or Cloudflare.
Also, is it possible to have the load balancer or routing mechanism in a different AWS region?
AWS Global Accelerator would be the thing to look at if load balancing in different regions is a concern for you. Given the details you have provided I'm not sure why you would want this however.
Probably what you really need is a CDN in front of your websites, with or without the ALB.
Scenario is:
Web request (www.example.com) -> Load Balancer/Route -> Current EC2 Server
Blog request (www.example.com/blog) -> Load Balancer/Route -> New Separate EC2 Server for blog
In my view you can use an ALB deployed across multiple AZs for high availability, for the following reasons:
AWS ALB allows you to route traffic based on various attributes, and the path in the URL is one of them.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#rule-condition-types
With an AWS ALB you can have two target groups, one with instances handling traffic for the first path (www.example.com) and a second target group for the other path (www.example.com/blog).
ALB supports SNI (which allows handling multiple certificates behind a single ALB for multiple domains), so all you need to do is set up a single HTTPS listener and upload your certificates: https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/
I have answered [something similar]; it might help you also.
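As a hedged sketch of the path-based routing described above, using boto3 (the ARNs and the rule priority are placeholders, not values from the question):

```python
# Hypothetical ARNs; substitute your own HTTPS listener and target groups.
LISTENER_ARN = "arn:aws:elasticloadbalancing:region:account:listener/app/example/abc/def"
BLOG_TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:region:account:targetgroup/blog/123"

# Requests whose path starts with /blog go to the blog target group;
# everything else falls through to the listener's default action
# (the target group for the main site).
blog_rule = {
    "Conditions": [{"Field": "path-pattern", "Values": ["/blog", "/blog/*"]}],
    "Actions": [{"Type": "forward", "TargetGroupArn": BLOG_TARGET_GROUP_ARN}],
    "Priority": 10,
}

def create_blog_rule():
    """Attach the path-based rule to the listener."""
    import boto3  # AWS SDK for Python; imported lazily so the sketch loads without it
    elbv2 = boto3.client("elbv2")
    return elbv2.create_rule(ListenerArn=LISTENER_ARN, **blog_rule)
```

With this in place the main-site target group handles www.example.com and the blog target group handles www.example.com/blog, all behind one ALB.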
This is my opinion; take it as that. I am sure a lot of people won't agree.
If your project is small or personal, you can go with HAProxy (cheap: USD 4 or less if you get a t3a as a spot instance, or free if you place it inside another EC2 instance of yours, maybe using Docker).
If your project is not personal or small, go with ALB (expensive, but simpler and better integrated with other AWS services).
HAProxy can handle tons of connections, but you have to do more things by yourself. ALB can also handle tons of connections and AWS will do most of the work.
I think HAProxy is more suitable for personal/small projects because if your project doesn't grow, you don't have to touch HAProxy. It is set-and-forget, the same as ALB, but costs less.
You usually won't care about availability zones or disaster tolerance in a personal project, so HAProxy should be easy to configure.
Another consideration: AWS offers a free tier on ALB, so if your project will run for less than a year, ALB is the way to go.
If you are learning, then ALB should be considered, because real clients usually love to stick to AWS in all respects, and HAProxy is your call and also your risk (just to reduce costs for a company that usually pays a lot more for your salary, so it's not worth the risk).

AWS ALB - single for all services?

We have many internet-facing services. What are the considerations for using one ALB per service versus a single ALB for all of them, with listener rules pointing to target groups?
Each service has its own cluster/target group with different functionality and a different URL.
Can one service spike impact other services?
Is it going to be a single point of failure?
Cost perspective ?
Observability, monitoring, logs ?
Ease of management ?
Personally I would normally use a single ALB and use different listeners for different services.
For example, I have service1.domain.com and service2.domain.com. I would have two host-based rules in the same ALB which route to the different services.
In my experience the ALB is highly available and scales very nicely without any issues. I've never had a service become unreachable due to scaling issues. ALBs scale based on Load Balancer Capacity Units (LCUs). As your load balancer requires more capacity, AWS automatically allocates more LCUs, which allows it to handle more traffic.
Source: Own experience working on an international system consisting of monoliths and microservices which have a large degree of scaling between timezones.
You won't see an impact on service B if service A has a spike, but identifying which service is having a bad time can be a little painful.
From a monitoring perspective it is a bit harder, because it is not that easy to quickly identify which service/target is suffering.
For management, as soon as different teams need to create/manage their own targets, conflicts can arise.
I wouldn't encourage you to use that monolithic architecture.
From a cost perspective you can use one load balancer with multiple forward rules, but using a single central load balancer for an entire application ecosystem essentially duplicates the standard monolith architecture while enormously increasing the number of instances served by one load balancer. In addition to being a single point of failure for the entire system should it go down, this single load balancer can quickly become a major bottleneck, since all traffic to every microservice has to pass through it.
Using a separate load balancer per microservice type adds some overhead, but it confines the single point of failure to one microservice: in this model, incoming traffic for each type of microservice is sent to a different load balancer.

Automatically block DoS attacks in AWS

I would like to know the best and easiest solution to protect an HTTP server deployed on the AWS cloud against DoS attacks. I know that AWS Shield Advanced can be turned on for that purpose; however, it is too expensive ($3,000 per month):
https://aws.amazon.com/shield/pricing/
System architecture
HTTP request -> Application Load Balancer -> EC2
An Nginx server is installed on this machine.
Nginx is configured with rate limiting.
Nginx responds with a 429 code when too many requests are sent from one IP.
Nginx generates log files (access.log, error.log).
The Amazon CloudWatch agent is installed on this machine.
The agent watches the log files.
The agent sends changes from the log files to specific CloudWatch log groups.
Logs from all EC2 machines are centralized in one place (CloudWatch log groups).
I can configure CloudWatch Logs metric filters to alert me when too many 429 responses go to one IP address.
That way I can manually block the offending IP in a network ACL, cutting off all requests from the bad IP at a lower network layer and protecting my AWS resources from being drained.
I would like to do this automatically.
What is the easiest and cleanest way to do it?
Note that, per the AWS Shield pricing documentation:
AWS Shield Standard provides protection for all AWS customers from
common, most frequently occurring network and transport layer DDoS
attacks that target your web site or application at no additional
charge.
For a more comprehensive discussion on DDoS mitigation, see:
Denial of Service Attack Mitigation on AWS
AWS Best Practices for DDoS Resiliency
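To automate the manual network ACL block the question describes, one common pattern is a Lambda function triggered by the CloudWatch alarm that adds a deny entry to the NACL. Here is a hedged sketch with boto3; the NACL ID, the rule numbering, and how the offending IP reaches the function are all assumptions:

```python
# Hypothetical network ACL ID; NACL rule numbers range 1-32766 and the
# lowest-numbered matching rule wins, so deny rules must sit below allows.
NACL_ID = "acl-0123456789abcdef0"

def deny_entry_params(ip, rule_number):
    """Build the parameters for an inbound DENY-all rule for a single IP."""
    return {
        "NetworkAclId": NACL_ID,
        "RuleNumber": rule_number,  # must be lower than the ALLOW rules it overrides
        "Protocol": "-1",           # all protocols
        "RuleAction": "deny",
        "Egress": False,            # inbound rule
        "CidrBlock": f"{ip}/32",
    }

def block_ip(ip, rule_number=100):
    """Called from the Lambda handler once the alarm identifies the bad IP."""
    import boto3  # AWS SDK for Python; imported lazily so the sketch loads without it
    ec2 = boto3.client("ec2")
    ec2.create_network_acl_entry(**deny_entry_params(ip, rule_number))
```

Note that NACLs have a small default quota on the number of rules, so this suits blocking a handful of noisy IPs rather than a large botnet; for the latter, a WAF rate-based rule scales better.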
There is no single straightforward way to block DDoS attacks against your infrastructure. However, there are a few techniques and best practices you can follow to at least protect it; DDoS attacks have to be analyzed and mitigated as they happen.
You may consider using the external services listed below to block DDoS attacks to some extent:
Cloudflare: https://www.cloudflare.com/en-in/ddos/
Imperva Incapsula:
https://www.imperva.com/products/ddos-protection-services/
I have tried both in production systems and they are pretty decent. Cloudflare reportedly handles around 10% of total internet traffic, so they know good traffic from bad.
They are much less expensive than Shield Advanced. You can integrate them with your infrastructure as code in order to automate this for all of your services.
Disclaimer: I am not associated in any way with any of the services I recommended above.

AWS ELB vs. ALB Review: Performance, Resiliency

It has been almost a year since AWS deployed the ALB.
There seems to be little to no technical comparison between the Classic Load Balancer and the ALB.
The only thing my research yielded is this AWS forum question.
While the above implies that the ALB is not reliable, it is only one source.
I am interested in the experience of people who made the switch from ELB to ALB, mainly with regard to latency, resilience, and HA.
It seems to me that layer-4 balancing is more robust than layer 7, and that therefore the general performance would be better.
I would like to share my experience here without going into much detail. Basically, my company has over 200k devices in the field actively connecting to our backend via WebSocket. We were using an ELB in front of our servers and it was working quite well; all the connections were quite stable. We tried switching from the ELB to an ALB, but the result wasn't good and we switched back.
The main problem we found was that connections were dropped suddenly (with a pattern of roughly every 15 hours). Please check the graph of connection counts I captured from the monitoring system:
As you can see, there are dips in the graph showing that connections have been dropped. I kept the same server setup; the only factor I changed was ELB to ALB, and I got this result.

Load balancer for php application

Questions about load balancers if you have time.
So I've been using AWS for some time now. Super basic instances, using them to do some tasks whenever I needed something done.
I have a task that needs to be load balanced now. It's not a public service though. It's pretty much a giant cron job that I don't want running on the same servers as my website.
I set up an AWS load balancer, but it doesn't do what I expected it to do.
It gets stuck on one server and doesn't load balance at all. I've read why it does this, and that's all fine and well, but I need it to be a serious round-robin load balancer.
edit:
I've set up the instances in different availability zones, but no matter how many instances I add to the ELB, it just uses one. If I take that instance down, it switches to a different one, so I know it's working. But I really would like it to always use a different one under every circumstance.
I know there are alternatives. Here's my question(s):
Would a custom PHP load balancer be an OK option for now?
I.e., have a list of servers and have PHP randomly select an EC2 instance. It wouldn't be scalable at all, but at least I could set it up in two minutes and it would work for now.
or
Should I take the time to learn how HAProxy works, and set that up in place of the AWS ELB?
or
Am I doing it wrong, and AWS's ELB does do round robin, and I just have something configured wrong?
edit:
Structure:
1) Web server finds a task to do.
2) If it's too large it sends it off to AWS (to load balancer).
3) Do the job on EC2
4) Report back via curl to an API
5) Rinse and repeat
Everything works great. But because the connection always comes from my server (one IP), it gets pinned to a single EC2 machine.
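The client-side selection the question proposes can be sketched in a few lines (shown here in Python rather than PHP; the server list is hypothetical). A cycling iterator gives true round robin and sidesteps the single-source-IP stickiness described above:

```python
import itertools
import random

SERVERS = ["10.0.1.10", "10.0.2.11", "10.0.3.12"]  # hypothetical worker IPs

# Strict round robin: each dispatch goes to the next server in the ring.
_ring = itertools.cycle(SERVERS)

def next_server():
    return next(_ring)

# Alternatively, random selection converges on an even spread over many jobs.
def random_server():
    return random.choice(SERVERS)
```

As the question itself notes, this isn't scalable: the list is static and there are no health checks, so a dead worker keeps receiving jobs until the list is edited by hand.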
An ELB works well for sites whose load increases gradually. If you are expecting an uncommon, sudden increase in load, you can ask AWS to pre-warm it for you.
I can tell you I have used ELBs in different scenarios and they always worked well for me. Since you didn't provide much information about your architecture, I would bet that the ELB works for you, and given that all connections are hitting only one server, I would ask you:
1) Did you check the ELB to see how many instances are behind it?
2) The instances that you have behind the ELB, are all alive?
3) Are you accessing your application through the ELB DNS?
Anyway, here is an excerpt from an excellent article that makes a very good comparison between ELB and HAProxy: http://harish11g.blogspot.com.br/2012/11/amazon-elb-vs-haproxy-ec2-analysis.html
ELB provides Round Robin and Session Sticky algorithms based on EC2
instance health status. HAProxy provides variety of algorithms like
Round Robin, Static-RR, Least connection, source, uri, url_param etc.
Hope this helps.
This point comes as a surprise to many users of Amazon ELB. Amazon ELB behaves a little strangely when incoming traffic originates from a single or specific IP range: it does not round-robin efficiently and sticks the requests. Under such conditions, Amazon ELB starts favoring a single EC2 instance, or EC2 instances in a single availability zone, even in Multi-AZ deployments. For example: if you have Application A (a customer company) and Application B, and Application B is deployed inside AWS infrastructure with an ELB front end, then all the traffic generated from Application A (a single host) is sent to Application B in AWS. In this case the ELB of Application B will not efficiently round-robin the traffic to the web/app EC2 instances deployed under it. This is because all the incoming traffic from Application A comes from a single firewall/NAT or a specific range of server IPs, and the ELB will start unevenly sticking the requests to a single EC2 instance or to EC2 instances in a single AZ. Note: users usually encounter this during load tests, so it is ideal to load test AWS infrastructure from multiple distributed agents.
More info at point 9 in the following article: http://harish11g.blogspot.in/2012/07/aws-elastic-load-balancing-elb-amazon.html
HAProxy is not hard to learn and is tremendously lightweight yet flexible. I actually use HAProxy behind ELB for the best of both worlds: the hardened, managed, hands-off reliability of ELB facing the Internet and unwrapping SSL, and the flexible configuration of HAProxy to let me fine-tune how requests hit my servers. I've never lost an HAProxy instance yet, but if I do, ELB will just take that one out of rotation... as I have seen happen when the back-end servers all became inaccessible, which (because of the way it's configured) makes ELB think the HAProxy instance is unhealthy; that's by design in my setup.
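A minimal haproxy.cfg along the lines of the setup described above might look like this (server names and addresses are placeholders, not details from the answer):

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:80
    default_backend workers

backend workers
    balance roundrobin          # rotate strictly, regardless of source IP
    server worker1 10.0.1.10:80 check
    server worker2 10.0.2.11:80 check
```

Because roundrobin ignores the client's source IP, a stream of requests from a single upstream host still spreads evenly across the backend, which addresses the stickiness complaint in the original question.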