Camel-Jetty benchmark testing for requests per second

I am building a high-load HTTP service that will consume thousands of messages per second and pass them to a messaging system like ActiveMQ.
I currently have a REST service (non-Camel, non-Jetty) that accepts POSTs from HTTP clients and returns a plain successful response, and I can load test it using Apache ab.
We are also looking at camel-jetty as the input endpoint, since Camel has integration components for ActiveMQ and could be part of an ESB if required. Before I start building a camel-jetty to ActiveMQ route, I want to test the load that camel-jetty alone can support. What should my Jetty-only route look like?
I am thinking of this route:
from("jetty:http://0.0.0.0:8085/test").transform(constant("a"));
and of using Apache ab to test it.
I am concerned that this route may not reflect real camel-jetty capacity, since the transform could add overhead. Or would it not?
Based on these tests I am planning to build the HTTP-to-MQ bridge with or without Camel.

The transform API will not add significant overhead...I just ran a test against your basic route...
ab -n 2000 -c 50 http://localhost:8085/test
and got the following...
Concurrency Level: 50
Time taken for tests: 0.459 seconds
Complete requests: 2000
Failed requests: 0
Write errors: 0
Non-2xx responses: 2010
Total transferred: 2916510 bytes
HTML transferred: 2566770 bytes
Requests per second: 4353.85 [#/sec] (mean)
Time per request: 11.484 [ms] (mean)
Time per request: 0.230 [ms] (mean, across all concurrent requests)
Transfer rate: 6200.21 [Kbytes/sec] received
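For completeness, here is a minimal standalone harness for that route, roughly as I ran it (a sketch assuming camel-core and camel-jetty 2.x on the classpath; the class name is illustrative):

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

// Minimal standalone harness for the route under test.
public class JettyBenchmark {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // the route from the question: reply with a constant body
                from("jetty:http://0.0.0.0:8085/test").transform(constant("a"));
            }
        });
        context.start();
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive while ab runs
    }
}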

Related

I have tested my AWS server (8 GB RAM), on which my Moodle site is hosted, with 1000 users using JMeter and I am getting 0% errors. What could be the issue?

My Moodle site is hosted on an AWS server with 8 GB RAM. I carried out various tests on the server using JMeter (NFT); I have tested from 15 up to almost 1000 users, yet I am still not getting any errors (less than 0.3%). I am using the scripts provided by Moodle itself. What could be the issue? Is there an issue with the script? I have attached a screenshot showing the report of the 1000-user test for reference.
If you're happy with the number of errors and the response times (the maximum response time is more than 1 hour, which is kind of too much for me), you can stop here and report the results.
However, I doubt that a real user will be happy to wait 1 hour to see the login page, so I would rather define some realistic pass/fail criteria; for example, expect response times of no more than 5 seconds. In that case you will have > 60% failures, if that is what you're trying to achieve.
You can consider using the following test elements:
Set reasonable response timeouts using HTTP Request Defaults, so that any request lasting longer than 5 seconds is terminated and marked as failed.
Or use a Duration Assertion; in this case JMeter will wait for the response and mark it as failed if the response time exceeds the defined duration.
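For what it's worth, the same 5-second rule can also be constructed through JMeter's Java API; this is only a sketch of the idea (assuming the ApacheJMeter_core jar on the classpath), the GUI configuration above is all you actually need:

import org.apache.jmeter.assertions.DurationAssertion;

// Sketch: the same pass/fail rule as the GUI Duration Assertion.
public class BuildDurationAssertion {
    public static void main(String[] args) {
        DurationAssertion assertion = new DurationAssertion();
        assertion.setName("Max 5s response time");
        assertion.setAllowedDuration(5000); // fail any sample slower than 5000 ms
        System.out.println(assertion.getName() + " created");
    }
}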

JMeter Load testing of API in AWS API Gateway

I am new to JMeter, so I am getting confused about how to conduct this test. My test scenario:
1) Hit a REST URL in API Gateway
2) The request rate should be 100 requests per second
3) Conduct the test for 2 hours
4) Evaluate the error/success percentage
What parameters should I set to achieve this combination? Any help will be appreciated.
Thanks in advance
Add Concurrency Thread Group to your Test Plan and configure it like:
Put ${__tstFeedback(jp@gc - Throughput Shaping Timer,500,1000,10)} into "Target Concurrency" input.
Put 120 into "Hold Target Rate Time (min)" input
Add HTTP Request Sampler to your Test Plan and configure it to send request to the REST URL
You might also need to add HTTP Header Manager to send Content-Type header with the value of application/json
Add Throughput Shaping Timer as a child of your HTTP Request sampler and configure it like:
Start RPS: 100
End RPS: 100
Duration: 7200
Run your test in command-line non-GUI mode like:
jmeter -n -t test.jmx -l result.csv
Open the JMeter GUI, add e.g. an Aggregate Report listener to your test plan and check the metrics. You can also generate an HTML Reporting Dashboard to see extended results and charts.
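If you want to verify the rate you actually achieved, the timestamps in result.csv can be bucketed per second. A rough sketch (assuming the default CSV JTL format: a header line, then the epoch-millisecond timeStamp as the first column):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.TreeMap;

// Rough sketch: count samples per second from a JMeter CSV result file.
public class RpsFromJtl {
    public static void main(String[] args) throws IOException {
        TreeMap<Long, Integer> perSecond = new TreeMap<>();
        try (BufferedReader reader = new BufferedReader(new FileReader("result.csv"))) {
            reader.readLine(); // skip the CSV header line
            String line;
            while ((line = reader.readLine()) != null) {
                long second = Long.parseLong(line.split(",", 2)[0]) / 1000;
                perSecond.merge(second, 1, Integer::sum);
            }
        }
        perSecond.forEach((sec, count) -> System.out.println(sec + ": " + count + " requests"));
    }
}

With a steady 100 RPS held for 2 hours you would expect roughly 720,000 samples, with each one-second bucket hovering around 100.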

Throttling between API Gateway and Lambda

From my understanding, API Gateway by default has a 1000 RPS limit--when this is crossed, it will begin throttling calls and returning 429 error codes. Past the Gateway, Lambda has a 100 concurrent invocation limit, and when this is crossed, it will begin throttling calls and returning 500 (or 502) error codes.
Given that, when viewing my graph on Cloudwatch, I would expect my number of throttled calls to be closer to the number of 4XX errors, or at least above the number of 5XX errors, because the calls must pass through API Gateway first in order to get to Lambda at all. However, it looks like the number of throttled calls is closer to the number of 5XX errors.
Is there something I might be missing from the way I'm reading the graph?
Depending on how long your Lambda function takes to execute and how spread out your requests are, you can hit Lambda limits well before or well after the API Gateway throttling limits. I'd say the two metrics you are comparing are independent of each other.
According to the API Gateway Request documentation:
API Gateway limits the steady-state request rate to 10,000 requests per second (rps)
This means that per 100 milliseconds the API can process 1,000 requests.
The comments above are correct in stating that CloudWatch is not giving you the full picture. The actual performance of your system depends on both the runtime of your Lambda and the number of concurrent requests.
To better understand what is going on, I suggest using a load-testing tool such as the Lambda Load Tester used below, or building your own.
Testing
The lambda used has the following properties:
Upon invocation, it sleeps for 1 second and then exits.
It has a Reserved Concurrency limit of 25, meaning only 25 instances of the Lambda will execute concurrently; any surplus request will be rejected with a 500 error.
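For reference, the test function itself can be as trivial as the following sketch (assuming the AWS Lambda Java runtime with the aws-lambda-java-core library; the class name is illustrative):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Hypothetical test function: sleep one second, then return.
public class SleepyHandler implements RequestHandler<Object, String> {
    @Override
    public String handleRequest(Object input, Context context) {
        try {
            Thread.sleep(1000); // simulate one second of work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "done";
    }
}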
Requests: 1000 Concurrent: 25
In the first test, we'll send 1000 requests in 40 batches of 25 requests each.
Command:
bash run.sh -n 1000 -c 25
Output:
Status code distribution:
[200] 1000 responses
Summary:
In this case, the number of requests stayed below both the Lambda and API Gateway limits. All executions were successful.
Requests: 1000 Concurrent: 50
In the second test, we'll send 1000 requests in 20 batches of 50 requests each.
Command:
bash run.sh -n 1000 -c 50
Output:
Status code distribution:
[200] 252 responses
[500] 748 responses
Summary:
In this case, the number of requests was still below the API Gateway limit, so every request was passed to the Lambda. However, 50 concurrent requests exceeded the limit of 25 we placed on the Lambda, so about 75% of the requests returned a 500 error.
Requests: 800 Concurrent: 800
In this test, we'll send 800 requests in a single batch of 800 requests.
Command:
bash run.sh -n 800 -c 800
Output:
Status code distribution:
[200] 34 responses
[500] 765 responses
Error distribution:
[1] Get https://XXXXXXX.execute-api.us-east-1.amazonaws.com/dev/dummy: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Summary:
In this case, the number of requests was starting to push the limits of the API Gateway, and you can see that one of the requests timed out. The 800 concurrent requests far exceeded the reserved concurrency limit of 25 we placed on the Lambda, and in this case about 95% of the requests returned a 500 error.
Requests: 3000 Concurrent: 1500
In this test, we'll send 3000 requests in 2 batches of 1500 requests each.
Command:
bash run.sh -n 3000 -c 1500
Output:
Status code distribution:
[200] 69 responses
[500] 1938 responses
Error distribution:
[985] Get https://drlhus6zf3.execute-api.us-east-1.amazonaws.com/dev/dummy: dial tcp 52.84.175.209:443: connect: connection refused
[8] Get https://drlhus6zf3.execute-api.us-east-1.amazonaws.com/dev/dummy: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Summary:
In this case, the number of requests exceeded the limits of the API Gateway and several of the connection attempts were refused. Those that did pass through the Gateway still ran into the reserved concurrency limit we placed on the Lambda and returned a 500 error.

Counting number of requests per second generated by JMeter client

This is how the application setup goes:
2 c4.8xlarge instances
10 m4.4xlarge JMeter clients generating load, each client using 70 threads
While conducting a load test on a simple GET request (a 685-byte page), I came across an issue of reduced throughput after some time into the test run. A throughput of about 18000 requests/sec is reached with 700 threads, remains at this level for 40 minutes and then drops. The thread count remains 700 throughout the test. I have executed tests with different load patterns, but the results have been the same.
The application response time is considerably low throughout the test.
According to the ELB monitor, there is a reduction in the number of requests (and, I suppose, hence the lower throughput).
There are no errors encountered during the test run. I also set a connect timeout on the HTTP request, but still no errors.
I discussed this issue with AWS support at length, and according to them I am not hitting any network limit during test execution.
Given that the number of threads remains constant during the test run, what are these threads doing? Is there a metric I can check to find out the number of requests generated (not Hits/sec) by a JMeter client instance?
Testplan - http://justpaste.it/qyb0
Try adding the following test elements:
HTTP Cache Manager
and especially DNS Cache Manager, as it might be the situation where all your threads are hitting only one c4.8xlarge instance while the other one sits idle. See The DNS Cache Manager: The Right Way To Test Load Balanced Apps article for explanation and details.
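To see why DNS caching matters here: an ELB hostname typically resolves to several A records, one per instance, and a client that resolves the name once and caches the answer forever will pin all traffic to a single instance. A sketch to illustrate (the hostname is hypothetical):

import java.net.InetAddress;

// Print every address an ELB hostname resolves to. A load generator that
// caches the first DNS answer forever will only ever hit one of these.
public class ResolveDemo {
    public static void main(String[] args) throws Exception {
        String host = "my-elb-123456.us-east-1.elb.amazonaws.com"; // hypothetical
        for (InetAddress address : InetAddress.getAllByName(host)) {
            System.out.println(address.getHostAddress());
        }
    }
}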

High load on jetty

I'm running load tests on my MBP. The load is injected using Gatling.
My web server is Jetty 9.2.6.
Under heavy load, the number of threads remains constant at 300, but the number of open sockets grows from 0 to 4000+, which produces "too many open files" errors at the OS level.
What does this mean?
Any ideas on how to improve the situation?
Here is the output of the Jetty statistics:
Statistics:
Statistics gathering started 643791ms ago
Requests:
Total requests: 56084
Active requests: 1
Max active requests: 195
Total requests time: 36775697
Mean request time: 655.7369791202325
Max request time: 12638
Request time standard deviation: 1028.5144674112403
Dispatches:
Total dispatched: 56084
Active dispatched: 1
Max active dispatched: 195
Total dispatched time: 36775697
Mean dispatched time: 655.7369791202325
Max dispatched time: 12638
Dispatched time standard deviation: 1028.5144648655212
Total requests suspended: 0
Total requests expired: 0
Total requests resumed: 0
Responses:
1xx responses: 0
2xx responses: 55644
3xx responses: 0
4xx responses: 0
5xx responses: 439
Bytes sent total: 281222714
Connections:
org.eclipse.jetty.server.ServerConnector@243883582
Protocols:http/1.1
Statistics gathering started 643784ms ago
Total connections: 8788
Current connections open: 1
Max concurrent connections open: 4847
Mean connection duration: 77316.87629452601
Max connection duration: 152694
Connection duration standard deviation: 36153.705226514794
Total messages in: 56083
Total messages out: 56083
Memory:
Heap memory usage: 1317618808 bytes
Non-heap memory usage: 127525912 bytes
Some advice:
Don't have the Client Load and the Server Load on the same machine (don't cheat and attempt to put the load on 2 different VMs on a single physical machine)
Use multiple client machines, not just 1 (when the Jetty developers test load characteristics, we use at least 10:1 ratio of client machines to server machines)
Don't test with loopback, virtual network interfaces, localhost, etc. Use a real network interface.
Understand how your load client manages its HTTP version and connections (such as keep-alive or HTTP/1.1 close), and make sure you read the response body content, close the response content/streams, and finally disconnect the connection (see the sketch after this list).
Don't test with unrealistic load scenarios. Real-world usage of your server will be a majority of HTTP/1.1 pipelined connections with multiple requests per physical connection: some on fast networks, some on slow networks, some even on unreliable networks (think mobile).
Raw speed, serving the same content, all on unique connections, is ultimately a fascinating number and can produce impressive results, but it is also completely pointless: it proves nothing about how your application's performance on Jetty will behave in real-world scenarios.
Finally, be sure you are testing load in realistic ways.
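On the point about consuming and closing responses, one well-behaved load-client iteration looks roughly like this sketch (plain java.net; the URL is illustrative):

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of one well-behaved client iteration: read the whole response
// body, close the stream, then disconnect explicitly.
public class WellBehavedClient {
    public static void main(String[] args) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://localhost:8085/test").openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        try (InputStream in = conn.getInputStream()) {
            byte[] buffer = new byte[8192];
            while (in.read(buffer) != -1) {
                // drain the body so the socket can be reused or closed cleanly
            }
        } finally {
            conn.disconnect(); // release the underlying connection
        }
    }
}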