AWS - How to limit calls to one endpoint in a domain? - amazon-web-services

We have an application hosted in AWS. We are now planning to offer a public API for this application. It is expensive to service requests to this API. Is it possible to throttle requests to this API using AWS (without implementing the logic in our application), such that if more than a certain number of requests are made in a specified time window, they will be rejected?
Any advice is appreciated.
Thank you.

If you want to block IPs that spam certain endpoints, you can use AWS WAF to create rate-based rules for your API:
https://aws.amazon.com/blogs/aws/protect-web-sites-services-using-rate-based-rules-for-aws-waf/
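The linked post describes the classic WAF rate-based rules; in the current WAFv2 API the equivalent rule is expressed as a JSON statement. A minimal sketch (the rule name, limit, and metric name below are made up, not from a real deployment) of the rule object you would include in a boto3 `wafv2` `create_web_acl` call:

```python
# Sketch of a WAFv2 rate-based rule; the name, limit, and metric name
# are illustrative placeholders, not taken from a real deployment.
rate_rule = {
    "Name": "throttle-api-per-ip",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,              # max requests per source IP per 5-minute window
            "AggregateKeyType": "IP",   # count requests per source IP
        }
    },
    "Action": {"Block": {}},            # reject requests over the limit
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "throttle-api-per-ip",
    },
}
# This dict goes into the Rules=[...] argument of
# boto3.client("wafv2").create_web_acl(...) or update_web_acl(...).
```

WAF evaluates the rate over a trailing 5-minute window and automatically unblocks an IP once it drops back under the limit.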

I think there are at least two ways to do this:
API Gateway request throttling: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
If you are using EC2 to host Linux instances, you could use iptables to rate-limit by source IP address.
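For the API Gateway option, the throttling limits from the linked guide can be attached to a usage plan. A sketch, assuming the REST API flavor and with a made-up API id, stage, and limits, of the parameters for boto3's `apigateway.create_usage_plan`:

```python
# Sketch only: the API id, stage name, and limits are placeholders.
usage_plan_params = {
    "name": "public-api-throttle",
    "apiStages": [{"apiId": "abc123", "stage": "prod"}],
    "throttle": {
        "rateLimit": 100.0,   # steady-state requests per second
        "burstLimit": 200,    # short-term burst capacity
    },
    "quota": {"limit": 1000000, "period": "MONTH"},  # optional hard monthly cap
}
# plan = boto3.client("apigateway").create_usage_plan(**usage_plan_params)
```

Requests beyond these limits are rejected by API Gateway with 429 Too Many Requests, before they ever reach your application.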

Related

Is this possible in API Gateway?

I've been asked to look into an AWS setup for my organisation but this isn't my area of experience so it's a bit of a challenge. After doing some research, I'm hoping that API Gateway will work for us and I'd really appreciate it if someone could tell me if I'm along the right lines.
The plan is:
We create a VPC with several private subnets. The EC2 instances in the subnets will be hosting browser based applications like Apache Guacamole, Splunk etc.
We attach to the VPC an API Gateway with a REST API, which will allow users access to only the applications on 'their' subnet.
Users follow a link to the API Gateway from an external API which will provide Oauth2 credentials.
The API Gateway REST API verifies their credentials and serves them with a page with links to the private IP addresses for the services in 'their' subnet only. They can then click on the links and open the Splunk, Guacamole browser pages etc.
I've also looked at Client VPN as a possible solution but my organisation wants users to be able to connect directly to the individual subnets from an existing API without having to download any other tools (this is due to differing levels of expertise of users and the need to work remotely). If there is a better solution which would provide the same workflow then I'd be happy to implement that instead.
Thanks for any help
This sounds like it could work in theory. My main concern would be whether Apache Guacamole, or any of the other services you are trying to expose, requires long-lived HTTP connections. API Gateway has a hard requirement that all requests complete within 29 seconds.
I would suggest also looking into exposing these services via a public Application Load Balancer, instead of API Gateway, which has OIDC authentication support. You'll need to look at the requirements of the specific services you are trying to expose to evaluate if API Gateway or ALB would be a better fit.
I would personally go about this by configuring each of these environments using an Infrastructure as Code tool, in such a way that you can create a new client environment simply by running your IaC tool with a few parameters, like the client ID and the domain name or subdomain you want to use. I would actually spin each one up in its own VPC, since it sounds like you want each client's environment to be isolated from the others.

Automatically block DOS attacks in AWS

I would like to know the best and easiest solution
to protect an HTTP server deployed in the AWS cloud against DoS attacks.
I know that there is AWS Shield Advanced,
which can be turned on for that purpose,
however it is too expensive ($3,000 per month):
https://aws.amazon.com/shield/pricing/
System architecture
HTTP request -> Application Load Balancer -> EC2
Nginx server is installed on this machine
Nginx server is configured with rate limiting
Nginx server responds with a 429 code when too many requests are sent from one IP
Nginx server is generating log files (access.log, error.log)
AmazonCloudWatchAgent is installed on this machine
AmazonCloudWatchAgent listen on log files
AmazonCloudWatchAgent send changes from log files to specific CloudWatch Log groups
Logs from all EC2 machines are centralized in one place (CloudWatch Log groups)
I can configure CloudWatch Logs Metric Filters
to send me alarms when too many 429 responses are triggered by one IP address
In that way I can manually block the offending IP in a Network ACL,
cut off all requests from that IP address at a lower network layer,
and protect my AWS resources from being drained
I would like to do it somehow automatically
What is the easiest and the cleanest way to do it?
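The per-IP 429 counting that the metric filter performs can be sketched in plain Python, assuming nginx's default "combined" log format (the sample log lines and threshold below are made up for illustration):

```python
import re
from collections import Counter

# Matches the client IP and status code in nginx's default "combined" format.
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) ')

def count_429s(log_lines, threshold=3):
    """Return the source IPs that received more than `threshold` 429 responses."""
    hits = Counter()
    for line in log_lines:
        m = LINE_RE.match(line)
        if m and m.group(2) == "429":
            hits[m.group(1)] += 1
    return {ip for ip, n in hits.items() if n > threshold}

# Made-up sample lines in the combined format:
sample = [
    '203.0.113.9 - - [01/Jan/2024:00:00:01 +0000] "GET /api HTTP/1.1" 429 0',
] * 5 + [
    '198.51.100.7 - - [01/Jan/2024:00:00:02 +0000] "GET /api HTTP/1.1" 200 512',
]
offenders = count_429s(sample)  # only the IP that exceeded the threshold
```

From there, a Lambda function subscribed to the CloudWatch alarm could call EC2's `create_network_acl_entry` to add the deny rule without manual intervention.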
Note that, per the AWS Shield pricing documentation:
AWS Shield Standard provides protection for all AWS customers from
common, most frequently occurring network and transport layer DDoS
attacks that target your web site or application at no additional
charge.
For a more comprehensive discussion on DDoS mitigation, see:
Denial of Service Attack Mitigation on AWS
AWS Best Practices for DDoS Resiliency
There is no single straightforward way to block DDoS attacks against your infrastructure. However, there are a few techniques and best practices you can follow to at least protect it. A DDoS attack can only be stopped by analyzing it and responding while it is in progress.
You may consider using the external services listed below to block DDoS to some extent:
Cloudflare: https://www.cloudflare.com/en-in/ddos/
Imperva Incapsula:
https://www.imperva.com/products/ddos-protection-services/
I have tried both in production systems and they are pretty decent. Cloudflare currently handles about 10% of total internet traffic; they know the difference between good and bad traffic.
They are not very expensive compared to Shield. You may integrate them with your infrastructure as code in order to automate this for all of your services.
Disclaimer: I am not associated in any way with any of the services I recommended above.

Do I need a WAF (Web Application Firewall) to protect my app?

I have created a micro-service app relying on simple functions as a service. Since this app is API based, I distribute tokens in exchange for some personal login info (Oauth or login/password).
Just to be clear, developers will then access my app using something like: https://example.com/api/get_ressource?token=personal-token-should-go-here
However, my server and application logic still get hit even if the token is not provided, meaning anonymous attackers could flood my services without logging in, taking my service down.
I came across WAF recently and they promise to act as a middle-man, filtering abusive attacks. My understanding is that a WAF is just reverse-proxying my API and applies some known attacks patterns filters before delegating a request to my actual backend.
What I don't really get is: what if an attacker has direct access to my backend's IP? Wouldn't they be able to bypass the WAF and DDoS my backend directly? Does WAF protection rely only on my origin IP not being leaked?
Finally, I have read that a WAF only makes sense if it can mitigate DDoS through a CDN, in order to spread a layer 7 DDoS attack across multiple servers and extra bandwidth if needed. Is that true, or can I just implement a WAF myself?
Go with the cloud: you can deploy your app to AWS, and there are two plus points to this.
1. Your prod server will be behind a private IP, not a public IP.
2. AWS WAF is an affordable service, and good for blocking DoS, scanner, and flood attacks.
You can also use a CAPTCHA on failed attempts before blocking an IP.

Amazon-Guard-Duty for my spring boot application running on AWS

I have a Spring Boot application running in an EC2 instance in AWS. It basically exposes REST endpoints and APIs for other applications. Now I want to improve the security measures for my app, such as preventing DDoS attacks and requests from malicious hosts, and using our own certificates for communications. I came across Amazon GuardDuty, but I don't understand how it will help in securing my app, and what are the alternatives? Any suggestions and guidelines are welcome.
Amazon GuardDuty is simply a security monitoring tool, akin to an Intrusion Detection System you might run in a traditional data center. It analyzes logs generated by AWS (CloudTrail, VPC Flow Logs, etc.), compares them with threat feeds, and uses machine learning to discover anomalies. It will alert you to traffic from known malicious hosts, but will not block it. To do that you would need to use AWS Web Application Firewall or a 3rd-party network appliance.
You get some DDoS protection just by using AWS. All workloads running in AWS are protected against network and transport layer attacks by AWS Shield. If you are using CloudFront and Route 53, you also get additional layer 3 and 4 protections.
You should be able to use your own certificates in AWS in a similar manner to how you would use them anywhere else.

Amazon API Gateway in front of ELB and ECS Cluster

I'm trying to put an Amazon API Gateway in front of an Application Load Balancer, which balances traffic to my ECS Cluster, where all my microservices are deployed. The motivation to use the API Gateway is to use a custom authorizer through a lambda function.
System diagram
In Amazon's words (https://aws.amazon.com/api-gateway/faqs/): "Proxy requests to backend operations also need to be publicly accessible on the Internet". This forces me to make the ELB public (internet-facing) instead of internal. I then need a way to ensure that only the API Gateway is able to access the ELB from outside the VPC.
My first idea was to use a Client Certificate in the API Gateway, but the ELB doesn't seem to support it.
Any ideas would be highly appreciated!
This seems to be a huge missing piece for the API gateway technology, given the way it's pushed. Not being able to call into an internal-facing server in the VPC severely restricts its usefulness as an authentication front-door for internet access.
FWIW, in Azure, API Management supports this out of the box - it can accept requests from the internet and call directly into your virtual network which is otherwise firewalled off.
The only way this seems to be possible under AWS is using Lambdas, which adds a significant layer of complexity, especially if you need to support various binary protocols.
Looks like this support has now been added. Haven't tested, YMMV:
https://aws.amazon.com/about-aws/whats-new/2017/11/amazon-api-gateway-supports-endpoint-integrations-with-private-vpcs/
We decided to use a header to make sure all traffic is coming through API Gateway. We save a secret in our app's environment variables and tell API Gateway to inject it when we create the API. Then we check for that key in our app.
Here is what we are doing for this:
In our base controller we check for the key (we just have an REST API behind the gateway):
string apiGatewayPassthroughHeader = context.HttpContext.Request.Headers["ApiGatewayPassthroughHeader"];
if (apiGatewayPassthroughHeader != Environment.GetEnvironmentVariable("ApiGatewayPassthroughHeader"))
{
    // Reject any request that did not come through API Gateway.
    throw new UnauthorizedAccessException();
}
In our swagger file (we are using swagger.json as the source of our APIs)
"x-amazon-apigateway-integration": {
  "type": "http_proxy",
  "uri": "https://${stageVariables.url}/path/to/resource",
  "httpMethod": "post",
  "requestParameters": {
    "integration.request.header.ApiGatewayPassthroughHeader": "${ApiGatewayPassthroughHeader}"
  }
},
In our docker compose file (we are using docker, but the same could be used in any settings file)
services:
  example:
    environment:
      - ApiGatewayPassthroughHeader=9708cc2d-2d42-example-8526-4586b1bcc74d
At build time we take the secret from our settings file and replace it in the swagger.json file. This way we can rotate the key in our settings file and API gateway will update to use the key the app is looking for.
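That build-time step can be as small as a string replacement over the swagger file. A sketch with an inline template (in practice you would read swagger.json and the secret from disk; the values here are placeholders):

```python
import json

# Minimal stand-in for swagger.json; in practice this is read from disk.
swagger_template = """
{
  "x-amazon-apigateway-integration": {
    "requestParameters": {
      "integration.request.header.ApiGatewayPassthroughHeader": "${ApiGatewayPassthroughHeader}"
    }
  }
}
"""

# In practice the secret comes from the settings/compose file.
secret = "9708cc2d-2d42-example-8526-4586b1bcc74d"

# Substitute the placeholder, then confirm the result is still valid JSON.
rendered = swagger_template.replace("${ApiGatewayPassthroughHeader}", secret)
doc = json.loads(rendered)
```

Rotating the key then only requires changing the settings file and re-running the build, and API Gateway picks up the new header value on the next deployment.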
I know this is an old issue, but I think they may have just recently added support.
"Amazon API Gateway announced the general availability of HTTP APIs, enabling customers to easily build high performance RESTful APIs that offer up to 71% cost savings and 60% latency reduction compared to REST APIs available from API Gateway. As part of this launch, customers will be able to take advantage of several new features including the ability to route requests to private AWS Elastic Load Balancers (ELB), including new support for AWS ALB, and IP-based services registered in AWS CloudMap."
https://aws.amazon.com/about-aws/whats-new/2020/03/api-gateway-private-integrations-aws-elb-cloudmap-http-apis-release/
It is possible if you use VPC Link and Network Load Balancer.
Please have a look at this post:
https://adrianhesketh.com/2017/12/15/aws-api-gateway-to-ecs-via-vpc-link/
TL;DR
Create an internal Network Load Balancer connected to your target group (instances in a VPC)
In the API Gateway console, create a VPC Link and link it to the above NLB
Create an API Gateway endpoint, choose "VPC Link integration", and specify your NLB's internal URL as the "Endpoint URL"
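The VPC Link step can also be scripted. A sketch (the link name and NLB ARN below are placeholders) of the parameters for boto3's `apigateway.create_vpc_link`:

```python
# Sketch only: the link name and target NLB ARN are placeholders.
vpc_link_params = {
    "name": "ecs-vpc-link",
    "description": "Link from API Gateway to the internal NLB",
    "targetArns": [
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/net/my-nlb/1234567890abcdef"
    ],
}
# link = boto3.client("apigateway").create_vpc_link(**vpc_link_params)
# The returned link id is then used as the connectionId of the
# VPC_LINK integration on the API's methods.
```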
Hope that helps!
It is now possible to add an authorizer directly to Application Load Balancer (ALB) in front of ECS.
This can be configured directly in the rules of a listener. See this blog post for details:
https://aws.amazon.com/de/blogs/aws/built-in-authentication-in-alb/
Currently there is no way to put API Gateway in front of a private ELB, so you're right that it has to be internet-facing. The best workaround for your case I can think of would be to put the ELB into TCP pass-through mode and terminate the client certificate on your end hosts behind the ELB.
The ALB should be internal in order to have the requests routed to it through the private link. This works perfectly fine in my setup without the need to put an NLB in front of it.
Routes should be as follows:
$default
/
GET (or POST, or whichever you want to use)
The integration should be attached to all paths: $default, GET/POST/ANY, etc.