Weighted traffic flow document in AWS is not working as expected?

I have the following traffic policy document in AWS
Weighted Resource Record Set
----------------------------
Name: www.example.com
Type: A
Value: 192.0.2.11
Weight: 1

Weighted Resource Record Set
----------------------------
Name: www.example.com
Type: A
Value: 192.0.2.12
Weight: 3
Based on the above document, 25% of the requests should hit 192.0.2.11 and 75% should hit 192.0.2.12.
For example, if I send 4 concurrent requests to www.example.com, 3 should hit 192.0.2.12 and 1 should hit 192.0.2.11, but this is not happening.
What I observed is that the first few requests hit only 192.0.2.11, and after some time they hit only 192.0.2.12.
Is this the default behaviour?

Weighted RRs don't exhibit the behavior you are expecting on a small scale like this. It is a statistical behavior, not an active load balancing mechanism.
If you were to have 1000 people make 1000 requests at 1000 randomly selected times, you would expect to see approximately 250 requests go to one endpoint and 750 requests go to the other.
The nature of DNS and of browser DNS caching behavior precludes you from seeing such a split on small numbers of requests, particularly concurrent requests from a single client. The more typical outcome is that you will see roughly a 25%/75% split in which server each viewer connects to, and each viewer will often tend to stick with that server for some period of time.
If you repeat your test 1000 times, you should again see numbers closer to the expected split. Longer TTLs on your DNS records will also tend to make your test results less consistent with the weights if the times between your tests are short. Shorter DNS TTLs are not ideal for overall performance, but you might try temporarily setting the TTL to 0 and testing again to see what results you get.
Remember, though, that a TTL change doesn't take effect until the time since the TTL change exceeds the old TTL value. If, for example, the old TTL was 300 seconds, you are not assured of the new TTL having an effect until at least 300 seconds have passed since the time you changed the TTL (plus about 30 seconds for internal Route 53 propagation of the change).
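If you want to sanity-check the weights statistically rather than with a handful of concurrent requests, one option is to resolve the name many times and tally which address comes back. A rough sketch using the dnspython package (assuming a low or zero TTL so the resolver's cache doesn't keep handing back the same answer):

# Rough sketch: tally which A record the resolver returns over many queries.
# Assumes the dnspython package and a low/zero TTL; with longer TTLs the same
# cached answer will repeat until it expires, skewing the counts.
from collections import Counter
import dns.resolver

counts = Counter()
for _ in range(1000):
    answer = dns.resolver.resolve("www.example.com", "A")
    counts[answer[0].address] += 1        # weighted routing returns one record set

total = sum(counts.values())
for ip, n in counts.most_common():
    print(f"{ip}: {n} ({n / total:.0%})")

Over enough queries the tallies should approach the 25%/75% split; over a handful they usually won't.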

Related

Network data out - nmon/nload vs AWS Cloudwatch disparity

We are running a video conferencing server in an EC2 instance.
Since this is a data out (egress) heavy app, we want to monitor the network data out closely (since we are charged heavily for that).
As seen in the screenshot above, in our test, nmon (top right) and nload (left) on our EC2 server show the network out as 138 Mbit/s (nload) and 17,263 KB/s (nmon), which are very close (138/8 = 17.25).
But, when we check the network out (bytes) in AWS Cloudwatch (bottom right), the number shown is very high (~ 1 GB/s) (which makes more sense for the test we are running), and this is the number for which we are finally charged.
Why is there such a big difference between nmon/nload and AWS Cloudwatch?
Are we missing some understanding here? Are we not looking at the AWS Cloudwatch metrics correctly?
Thank you for your help!
Edit:
Adding the screenshot of a longer test which shows the average network out metric in AWS Cloudwatch to be flat around 1 GB for the test duration while nmon shows average network out of 15816 KB/s.
Just figured out the answer to this.
The following link talks about the periods of data capture in AWS:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html
Periods
A period is the length of time associated with a specific Amazon CloudWatch statistic. Each statistic represents an aggregation of the metrics data collected for a specified period of time. Periods are defined in numbers of seconds, and valid values for period are 1, 5, 10, 30, or any multiple of 60. For example, to specify a period of six minutes, use 360 as the period value. You can adjust how the data is aggregated by varying the length of the period. A period can be as short as one second or as long as one day (86,400 seconds). The default value is 60 seconds.
Only custom metrics that you define with a storage resolution of 1 second support sub-minute periods. Even though the option to set a period below 60 is always available in the console, you should select a period that aligns to how the metric is stored. For more information about metrics that support sub-minute periods, see High-resolution metrics.
As seen in the link above, unless we publish a custom metric with 1-second storage resolution, AWS does not capture sub-minute data, so the finest resolution available is one data point per minute.
So, in our case, the network out data within each 60-second period is aggregated and captured as a single data point.
Even if I change the statistic to Average and the period to 1 second, it still shows one data point per minute.
Now, if I divide 1.01 GB (shown by AWS) by 60, I get the per-second rate, which is roughly 16.8 MB/s, very close to the figures shown by nmon and nload.
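For reference, a rough boto3 sketch of that conversion, pulling the per-period NetworkOut sums and dividing each by the period length (the instance ID and region are placeholders; with basic monitoring the period would be 300 seconds instead of 60):

# Rough sketch: fetch NetworkOut sums and convert each data point to an
# average rate. Instance ID and region are placeholders; Period=60 assumes
# detailed (1-minute) monitoring.
import boto3
from datetime import datetime, timedelta, timezone

cw = boto3.client("cloudwatch", region_name="us-east-1")
period = 60
resp = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="NetworkOut",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=period,
    Statistics=["Sum"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    mb_per_s = point["Sum"] / period / 1_000_000      # bytes per period -> MB/s
    print(point["Timestamp"], f"{mb_per_s:.1f} MB/s")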
From the AWS docs:
NetworkOut: The number of bytes sent out by the instance on all network interfaces. This metric identifies the volume of outgoing network traffic from a single instance.
The number reported is the number of bytes sent during the period. If you are using basic (five-minute) monitoring, you can divide this number by 300 to find Bytes/second. If you have detailed (one-minute) monitoring, divide it by 60.
The NetworkOut graph in your case does not represent the current speed; it represents the number of bytes sent out by all network interfaces during the last 5-minute period. If my calculations are correct, we should get the following values:
1.01 GB ~= 1027 MB (reading from your graph)
To get the average speed for the last 5 minutes:
1027 MB / 300 = 3.42333 MB/s ~= 27.38 Mbits/s
It is still more than what you are expecting, although this is just an average for the last 5 minutes.

AWS Elasticsearch publishing wrong total request metric

We have an AWS Elasticsearch cluster setup. However, our Error rate alarm goes off at regular intervals. The way we are trying to calculate our error rate is:
((sum(4xx) + sum(5xx))/sum(ElasticsearchRequests)) * 100
However, if you look at the screenshot below, at 7:15 the 4xx count was 4, yet the ElasticsearchRequests value is only 2. Based on the metrics info on the AWS Elasticsearch documentation page, ElasticsearchRequests should be the total number of requests, so it should clearly be greater than or equal to the 4xx count.
Can someone please help me understand what I am doing wrong here?
AWS definitions of these metrics are:
OpenSearchRequests (previously ElasticsearchRequests): The number of requests made to the OpenSearch cluster. Relevant statistics: Sum
2xx, 3xx, 4xx, 5xx: The number of requests to the domain that resulted in the given HTTP response code (2xx, 3xx, 4xx, 5xx). Relevant statistics: Sum
Please note the different terms used for the subjects of the metrics: cluster vs domain
To my understanding, OpenSearchRequests only considers requests that actually reach the underlying OpenSearch/Elasticsearch cluster, so some of the 4xx requests might not (e.g. 403 errors), hence the difference in metrics.
Also, AWS only recommends comparing 5xx to OpenSearchRequests:
5xx alarms >= 10% of OpenSearchRequests: One or more data nodes might be overloaded, or requests are failing to complete within the idle timeout period. Consider switching to larger instance types or adding more nodes to the cluster. Confirm that you're following best practices for shard and cluster architecture.
I know this was posted a while back but I've additionally struggled with this issue and maybe I can add a few pointers.
First off, make sure your metrics are properly configured. For instance, some responses (4xx, for example) can take up to 5 minutes to register, while OpenSearchRequests is refreshed every minute. This makes for a very wonky graph that will definitely throw off your error rate.
In the picture above, I send a request that returns 400 every 5 seconds and a request that returns 200 every 0.5 seconds. The period in this case is 1 minute, so on average the error rate should be around 10%. As you can see by the green line, the requests sent are summed up every minute, whereas the 4xx are summed up every 5 minutes and are 0 in the minutes in between, which makes for an error rate spike every 5 minutes (since the OpenSearch requests are not multiplied by 5).
In the next image, the period is set to 5 minutes. Notice how this time the error rate is around 10 percent.
When I look at your graph, I see metrics that look like they are based on different periods.
The second pointer I would add is to make sure to account for times when no data is coming in. The behavior of the alarm may vary based on how you define the "treat missing data" parameter. In some cases, if no data comes in, your expression might keep the alarm in the ALARM state when in fact there is simply no new data coming in. Some metrics return no value when no requests are made, while others return 0. In the former case, you can use the FILL(metric, value) function to specify what to return when no value is reported. Also experiment with what happens to your error rate if you divide by zero.
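For illustration, a sketch of such an expression in CloudWatch metric math (the metric IDs m1-m3 are placeholders, and FILL substitutes a value for periods with no data so the division stays defined):
m1 = 4xx (statistic: Sum)
m2 = 5xx (statistic: Sum)
m3 = OpenSearchRequests (statistic: Sum)
e1 = 100 * (FILL(m1, 0) + FILL(m2, 0)) / FILL(m3, 1)
With this, a period in which no requests are reported evaluates to a 0% error rate rather than a missing or undefined value; whether that is what you want still depends on the alarm's "treat missing data" setting.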
Hope this message helps clarify a bit.

AWS WAF How to rate limit path by IP below the minimum of 2000 requests/minute

I have a path (mysite.com/myapiendpoint for sake of example) that is both resource intensive to service, and very prone to bot abuse. I need to rate limit access to that specific path to something like 10 requests per minute per client IP address. How can this be done?
I'm hosting off an EC2 instance with CloudFront and AWS WAF in front. I have the standard "Rate Based Rule" enabled, but its 2,000 requests per minute per IP address minimum is absolutely unusable for my application.
I was considering using API Gateway for this, and have used it in the past, but its rate limiting as I understand it is not based on IP address, so bots would simply use up the limit and legitimate users would constantly be denied usage of the endpoint.
My site does not use sessions of any sort, so I don't think I could do any sort of rate limiting in the server itself. Also please bear in mind my site is a one-man-operation and I'm somewhat new to AWS :)
How can I limit the usage per IP to something like 10 requests per minute, preferably in WAF?
[Edit]
After more research I'm wondering if I could enable header forwarding to the origin (running node/express) and use a rate-limiter package. Is this a viable solution?
I don't know if this is still useful to you, but I just got a tip from AWS support. If you add the rate limit rule multiple times, it effectively lowers the threshold each time. Basically, each time you add the rule, it counts an extra request for each IP. So say an IP makes a single request. If you have 2 rate limit rules applied, the request is counted twice. So instead of 2,000 requests, the IP only has to make 1,000 before it gets blocked. If you add 3 rules, each request is counted 3 times, so the IP will be blocked at 667 requests.
The other thing they clarified is that the "window" is 5 minutes, but if the total is breached anywhere in that window, the IP will be blocked. I thought the WAF would only evaluate the requests after a 5-minute period. So, for example, say you have a single rule for 2,000 requests in 5 minutes, and an IP makes 2,000 requests in the 1st minute, then only 10 requests in each of the next 4 minutes. I initially understood that the IP would only be blocked after minute 5 (because WAF evaluates a 5-minute window). But apparently, if the IP exceeds the limit anywhere in that window, it will be blocked immediately. So if that IP makes 2,000 requests in minute 1, it will actually be blocked during minutes 2, 3, 4 and 5, but will be allowed again from minute 6 onward.
This clarified a lot for me. Having said that, I haven't tested this yet. I assume the AWS support techie knows what he's talking about - but definitely worth testing first.
AWS have now finally released an update which allows the rate limit to go as low as 100 requests every 5 minutes.
Announcement post: https://aws.amazon.com/about-aws/whats-new/2019/08/lower-threshold-for-aws-waf-rate-based-rules/
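With the newer WAFv2 API, a rate-based rule can also be scoped down to a single path, so the limit only applies to /myapiendpoint. A rough boto3 sketch, where the rule name, the 100-request limit, and the path matching are illustrative rather than a drop-in configuration:

# Sketch only: a WAFv2 rate-based rule scoped down to one path. The limit,
# rule name, and path are illustrative; for a CloudFront distribution the
# web ACL must use Scope="CLOUDFRONT" and the us-east-1 endpoint.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

rate_limit_rule = {
    "Name": "rate-limit-myapiendpoint",
    "Priority": 0,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 100,                    # per 5-minute window, per source IP
            "AggregateKeyType": "IP",
            "ScopeDownStatement": {          # only count requests to this path
                "ByteMatchStatement": {
                    "SearchString": b"/myapiendpoint",
                    "FieldToMatch": {"UriPath": {}},
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                    "PositionalConstraint": "STARTS_WITH",
                }
            },
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "RateLimitMyApiEndpoint",
    },
}
# The rule dict above would go into the Rules list passed to
# wafv2.create_web_acl(...) or wafv2.update_web_acl(...).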
Adding the rule twice will not work, because WAF rate-based rules count requests on a CloudWatch-logs basis; both rules will count the 2,000 requests separately, so it would not work for you.
You can use the AWS WAF Security Automations CloudFront template and choose the Lambda/Athena log parser. That way, the request count is based on the S3 access logs, and you will also be able to block SQL injection, XSS, and bad-bot requests.

Converting a high performance web service from Nginx on AWS EC2 to AWS Lambda

On a project I'm working on, there are a number of web services implemented on AWS. The services that are relatively simple (DynamoDB inserts or lookups) and will be used relatively infrequently have been implemented as Lambdas, which were perfect for the task. There is also a more complex web service that does a lot of string processing and regex matching and needs to be highly performant; it has been implemented in C++ (roughly 5K LOC) as an Nginx module and can handle in the region of 20K requests/s running on an EC2 instance. The service takes in a small JSON payload, does a lot of string processing and regex matching against reference data that sits in static data files on S3, and returns a JSON response under 1 KB in size.
There is a push from management to unify our use of AWS services and have all the web services implemented as Lambdas.
My question is: can a high-performance web service such as the compiled C/C++ Nginx module running on EC2, which is expected to run continuously and handle 20K to 100K req/s, actually be converted to AWS Lambda (in Python) and be expected to have the same performance, or is this better left as-is on EC2? What are the performance considerations to be aware of when converting to Lambda?
Can Lambda do it? Yes. Should Lambda do it? No.
Why? Cost.
First, let's say you do handle 20k Requests / Second, every second for an entire day. That will then equate to 1.728 Billion requests in that day. In the free tier, you do get 1 Million requests free, so that drops the billable requests down to 1.727 Billion. Lambda charges $0.20 / Million Requests, so:
1.727 Billion requests * $0.20 / Million requests = $345.40
I'm pretty sure your cost for EC2 is lower than that per day. Taking the m4.16xlarge instance, with on-demand pricing, we get:
$3.20 / Hour * 24 Hours = $76.80
See the difference? But, Lambda also charges for compute time!
Let's say you include the C++ executable in your Lambda function (called from Python or Node), so we won't take into account the performance hit of going from C++ to an interpreted language. Since Lambda charges in 100 Millisecond blocks, rounded up, for this estimate we will assume that all the requests finish within 100 Milliseconds.
Say you use the smallest memory size, 128 MB. That will give you 3.2 Million Seconds within the free tier, or 32 Million Requests given that they are all under 100 Milliseconds, free. But that still leaves you with 1.696 Billion Requests billable. The cost for the 128 MB size is $0.000000208 / 100 Milliseconds. Given that each request finishes under 100 Milliseconds, the cost for the execution time will be:
$0.000000208 / 100 Milliseconds * 1.696 Billion 100 Millisecond Units = $352.77
Adding that cost to the cost of the requests, you get:
$345.40 + $352.77 = $698.17
EC2: $76.80
Lambda: $698.17
Note, this is just using the 20k Requests / Second number that you gave and is for a single day. If the actual number of requests differs, the requests take longer than 100 Milliseconds, or you need more memory than 128 MB, the cost estimate will go up or down accordingly.
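For a rough sanity check, the same arithmetic can be parameterized using the per-request and per-100 ms prices quoted above (this sketch ignores the free compute tier, so it comes out slightly higher than the hand calculation):

# Back-of-the-envelope daily cost comparison using the prices quoted above
# (circa the original answer): $0.20 per million requests, $0.000000208 per
# 100 ms at 128 MB, and an m4.16xlarge at $3.20/hour. Adjust for current pricing.

def lambda_daily_cost(req_per_sec, ms_per_req=100,
                      price_per_million_req=0.20, price_per_100ms=0.000000208):
    requests = req_per_sec * 86_400                     # requests in one day
    billable = max(requests - 1_000_000, 0)             # 1M requests free
    request_cost = billable / 1_000_000 * price_per_million_req
    blocks = requests * -(-ms_per_req // 100)           # 100 ms blocks, rounded up
    compute_cost = blocks * price_per_100ms             # ignores free GB-seconds
    return request_cost + compute_cost

def ec2_daily_cost(hourly_rate=3.20):
    return hourly_rate * 24

print(f"Lambda: ${lambda_daily_cost(20_000):,.2f} per day")
print(f"EC2:    ${ec2_daily_cost():,.2f} per day")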
Lambda has its place, but EC2 does also. Just because you can put it on Lambda doesn't mean you should.

Counting number of requests per second generated by JMeter client

This is how the application setup goes:
2 c4.8xlarge instances
10 m4.4xlarge jmeter clients generating load. Each client used 70 threads
While conducting a load test on a simple GET request (a 685-byte page), I came across an issue of reduced throughput after some time of the test run. A throughput of about 18,000 requests/sec is reached with 700 threads, remains at this level for 40 minutes, and then drops. The thread count remains 700 throughout the test. I have executed tests with different load patterns, but the results have been the same.
The application response time remains considerably low throughout the test -
According to the ELB monitor, there is a reduction in the number of requests (and, I suppose, hence the lower throughput) -
There are no errors encountered during the test run. I also set a connect timeout on the HTTP request, but still saw no errors.
I discussed this issue with AWS support at length and, according to them, I am not blocked by any network limit during test execution.
Given that the number of threads remains constant during the test run, what are these threads doing? Is there a metric I can check to find out the number of requests generated (not Hits/sec) by a JMeter client instance?
Testplan - http://justpaste.it/qyb0
Try adding the following Test Elements:
HTTP Cache Manager
and especially the DNS Cache Manager, as it might be the case that all your threads are hitting only one c4.8xlarge instance while the other one sits idle. See The DNS Cache Manager: The Right Way To Test Load Balanced Apps article for explanation and details.
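As for the second part of the question, counting the requests each JMeter client actually generated (rather than hits/sec seen by the server), one rough option is to aggregate the client's own JTL results file by second. A sketch, assuming CSV output with the default timeStamp column in milliseconds:

# Rough sketch: count how many samples a JMeter client actually generated per
# second by aggregating its JTL results file (CSV output with a header row and
# a 'timeStamp' column in milliseconds). The file name is just an example.
import csv
from collections import Counter

per_second = Counter()
with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        per_second[int(row["timeStamp"]) // 1000] += 1   # bucket by epoch second

for second in sorted(per_second):
    print(second, per_second[second])

Comparing this per-client count against the ELB request metric should show whether the clients stopped generating load or the load simply stopped being spread across both instances.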