EC2 instance outgoing http request rate limits - amazon-web-services

I'm building a Node.js application on EC2 that queries various external APIs several times per second via HTTP requests.
I cannot work this out from the EC2 documentation - are there any EC2 rate limits for querying external APIs?
E.g. if I'm continuously making 2 or 3 HTTP requests per second from an EC2 instance, will I start getting rate limit errors from EC2?
Thanks

EC2 provides you the virtual machine and (if you configure it) the external connection.
What you query with it is then entirely up to you.
AWS provides you the layer 3 network, which is charged by traffic volume, not by number of requests.
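Since EC2 imposes no outgoing request cap, the only limits that matter are the external APIs' own. If you want to stay politely under those, a small client-side pacer is enough; here is a minimal sketch (the 3-requests-per-second budget and the endpoint are hypothetical, not anything EC2 requires):

```javascript
// Minimal client-side pacer: spaces calls so at most maxPerSecond
// outbound requests start per second, however fast jobs arrive.
// This protects you from the external APIs' limits - EC2 itself
// imposes no per-request cap.
function makePacer(maxPerSecond) {
  const intervalMs = 1000 / maxPerSecond;
  let nextSlot = 0; // earliest time the next call may start
  return function schedule(fn) {
    const now = Date.now();
    const startAt = Math.max(now, nextSlot);
    nextSlot = startAt + intervalMs;
    return new Promise((resolve) =>
      setTimeout(() => resolve(fn()), startAt - now)
    );
  };
}

// Hypothetical usage: at most 3 calls per second to an external API.
// const pace = makePacer(3);
// pace(() => fetch('https://api.example.com/data'));
```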

Related

Dynamic Rate Limiting for REST API which uses Auto Scaling & Load Balancer

I have a REST API running on an AWS EC2 instance, with an Nginx server inside the instance handling rate limiting (QPS) for clients. My plan is to put this API behind an Auto Scaling group with a Load Balancer.
I've configured the Load Balancer and the Auto Scaling in AWS, and the Auto Scaling works based on my dynamic scaling policy.
Problem: Auto Scaling breaks the rate limiting done by Nginx. Since Nginx resides inside each EC2 server and I now have N EC2 instances serving the API, the QPS granted to clients is also multiplied by N.
For example, if I set 25 QPS in Nginx for a client named X, and 4 servers are running under Auto Scaling, the client's requests are spread across the servers (we cannot assume all requests go to the same server in a given second), so the client's effective limit rises to 25*4 = 100 QPS.
What is the solution to this kind of problem?
1. Do I need a separate server for Nginx that does the rate limiting and then forwards requests to the Auto Scaling group?
2. Is there a dynamic way to update the Nginx config on every EC2 instance based on the number of instances spun up by the Auto Scaling Group?
3. Is there another AWS service that deals with this kind of problem? I've heard of setups like CloudFront Distribution -> AWS WAF -> ALB, but I don't see how that architecture would be used here.
Any other solution is also welcome!
The AWS solution for implementing rate limiting on an API is to use AWS API Gateway in front of your load balancer, and then enable usage plans with rate limits on the API Gateway.
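If you do keep rate limiting inside Nginx on each instance, the usual workaround is to divide the global per-client quota by the current instance count (refreshed periodically from the Auto Scaling API) and rewrite each instance's Nginx limit. A sketch of the arithmetic only (the function name is mine; this is an approximation, since traffic is rarely spread perfectly evenly across instances):

```javascript
// Per-instance quota so that N instances together still enforce the
// global per-client QPS. Rounding down errs on the safe side; a truly
// global limiter (e.g. API Gateway usage plans) avoids the
// approximation entirely.
function perInstanceQps(globalQps, instanceCount) {
  if (instanceCount < 1) throw new RangeError('need at least one instance');
  return Math.max(1, Math.floor(globalQps / instanceCount));
}

// e.g. a 25 QPS client quota spread over 4 instances:
// perInstanceQps(25, 4) === 6   (4 instances x 6 QPS = 24 <= 25)
```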

AWS Nat Gateway, wrong requests limits - high load - timeout

I did a load test of a NAT Gateway in AWS and hit a much lower request limit than described in the docs. According to the docs, the NAT Gateway is supposed to support ~900 requests per second, but with my configuration I saw that ~0.04% of requests went unhandled when running ~300 requests per second.
I run a Node.js app on an ECS cluster and have the ability to configure the requests per second. The NAT Gateway works fine for around a minute, then my app starts getting timeouts on a few requests.
AWS does not allow access to such machines, and the CloudWatch metrics look fine.
In general, I am looking for a static IP solution that will withstand high loads. Has anyone here experienced something similar?

AWS - Abnormal Data Transfer OUT

I'm consistently being charged for a surprisingly high amount of data transfer out (from Amazon to the Internet).
I looked into the Usage Reports of the past few months and found out that the Data Transfer Out was coming out of an Application Load Balancer (ALB) between the Internet and multiple nodes of my application (internal IPs).
Also noticed that DataTransfer-Out-Bytes is very close to the DataTransfer-In-Bytes in the same load balancer, which is weird (coincidence?). I was expecting the response to each request to be way smaller than the request itself.
So, I enabled flow logs in the ALB for a few minutes and found out the following:
Requests coming from the Internet (public IPs) into the ALB = ~0.47 GB;
Requests from the ALB to application servers in the same availability zone = ~0.47 GB. The ALB simply passes requests through to the application servers, so about the same amount of traffic, as expected;
Responses from the application servers back into the same ALB = ~0.04 GB. As expected, responses generate far less traffic back into the ALB; usually a 1K request gets a simple “HTTP 200 OK” response;
Responses from the ALB back to the external IP addresses = ~0.43 GB. This was mind-blowing; I was expecting ~0.04 GB, the same amount received from the application servers.
Unfortunately, the ALB does not let me use packet sniffers (e.g. tcpdump) to see what is actually coming in and out. Is there anything I'm missing? Any help will be much appreciated. Thanks in advance!
Ricardo.
I believe the next step in your investigation would be to enable ALB access logs and see whether you can correlate the "sent_bytes" in the ALB access log to either your Flow log or your bill.
For information on ALB access logs see: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html
There is more than one way to analyze the ALB access logs, but I've always been happy using Athena; please see: https://aws.amazon.com/premiumsupport/knowledge-center/athena-analyze-access-logs/
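For a quick check without Athena, you can also sum the byte columns directly from downloaded log files. A rough sketch, assuming the documented ALB access log field order (received_bytes and sent_bytes are the 11th and 12th space-separated fields, both of which come before the quoted request string, so a plain split is safe for them):

```javascript
// Sum received_bytes / sent_bytes across ALB access log lines.
// Fields up to and including sent_bytes contain no embedded spaces,
// so splitting on spaces is safe for these two columns.
function sumAlbBytes(logText) {
  let received = 0, sent = 0;
  for (const line of logText.split('\n')) {
    const f = line.trim().split(' ');
    if (f.length < 12) continue; // skip blank or malformed lines
    received += Number(f[10]) || 0; // received_bytes
    sent += Number(f[11]) || 0;     // sent_bytes
  }
  return { received, sent };
}
```

Comparing these totals against the flow log numbers (and the bill) should show quickly whether the ~0.43 GB outbound is really response payload or something else.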

AWS - How to limit calls to one endpoint in a domain?

We have an application hosted in AWS and are now planning a public API for it. Servicing requests to this API is expensive. Is it possible to throttle requests to this API using AWS (without implementing the logic in our application), such that requests beyond a certain number in a specified time window are rejected?
Any advice is appreciated.
Thank you.
If you want to blacklist IPs that spam certain endpoints, you can use AWS WAF to create rate limiting rules for your API:
https://aws.amazon.com/blogs/aws/protect-web-sites-services-using-rate-based-rules-for-aws-waf/
I think there are at least two other ways to do this:
API Gateway request throttling: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
If you are using EC2 to host Linux instances, you could use iptables to rate limit by source IP address.
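For the iptables route, a rate-based rule can be expressed with the hashlimit match. A configuration sketch only; the port and limits are illustrative, and note this counts packets per source IP, which only approximates HTTP requests:

```shell
# Drop TCP traffic to port 443 from any source IP exceeding
# ~25 packets/second (with a small burst allowance).
# Illustrative values; requires root and the hashlimit module.
iptables -A INPUT -p tcp --dport 443 \
  -m hashlimit --hashlimit-above 25/second --hashlimit-burst 50 \
  --hashlimit-mode srcip --hashlimit-name api-limit \
  -j DROP
```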

AWS - EC2 and RDS in different regions is very slow

I'm currently in Sydney and have the following scenario:
1 RDS instance in N. Virginia.
1 EC2 instance in Sydney.
1 EC2 instance in N. Virginia.
I need this for redundancy, and this is the simplified scenario.
When my app on the Sydney EC2 instance connects to RDS in N. Virginia, it takes almost 2.5 seconds to return the result. We might think: OK, that's just the latency.
BUT, when I send the request to the EC2 instance in N. Virginia, I get the result in less than 500 ms.
Why is the connection so slow when accessing RDS from outside its region?
I mean: I can reproduce this slow connection when running the application on my own computer, too. But when the application runs in the same region as RDS, it works more quickly than on my own computer.
Most likely your request to RDS requires multiple roundtrips to complete: first your EC2 instance requests something from RDS, then something else based on the first response, and so on. Without seeing your database code, it's hard to say exactly what the cause is.
You say that when you talk to the remote EC2 instance instead, you get the response in less than 500 ms. That suggests setting up a TCP connection and sending a single request with its reply takes about 500 ms. Based on that, my guess is that your database conversation involves at least 5x that much back-and-forth traffic.
There is no additional penalty with RDS in terms of using it out of region, but most database protocols are not optimized for high latency conditions. You might be much better off setting up a read replica in Sydney.
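The multiplier effect of cross-region roundtrips can be sketched with simple arithmetic. The ~250 ms Sydney to N. Virginia round-trip time below is an assumption for illustration, back-calculated from the ~500 ms single-request figure in the question:

```javascript
// Total query time grows linearly with protocol roundtrips:
// total ~= roundtrips * RTT (server processing time ignored here).
function estimateLatencyMs(roundtrips, rttMs) {
  return roundtrips * rttMs;
}

// With an assumed ~250 ms cross-region RTT:
//  2 roundtrips (TCP connect + one query) -> ~500 ms, the observed
//    single-request case;
// 10 roundtrips (a chatty database conversation) -> ~2500 ms, close
//    to the observed ~2.5 s.
```

This is why a read replica in the same region helps so much: it removes the cross-region RTT from every one of those roundtrips rather than just one.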
If you are connecting to RDS over the public-facing network, that could be slow. AWS has launched cross-region VPC peering; peer the VPCs of the two regions (making sure there is no IP range conflict) and try connecting over the private network.