I am new to JMeter, so I am getting confused about how to set up a test. My test scenario:
1) Hit a REST URL in API Gateway
2) The request rate should be 100 requests per second
3) Conduct the test for 2 hrs
4) Evaluate the error / success percentage
What parameters should I set to achieve this combination? Any help will be appreciated.
Thanks in advance
Add a Concurrency Thread Group to your Test Plan and configure it as follows:
Put ${__tstFeedback(jp@gc - Throughput Shaping Timer,500,1000,10)} into the "Target Concurrency" input.
Put 120 into the "Hold Target Rate Time (min)" input.
Add an HTTP Request sampler to your Test Plan and configure it to send requests to the REST URL.
You might also need to add an HTTP Header Manager to send the Content-Type header with the value of application/json.
Add a Throughput Shaping Timer as a child of your HTTP Request sampler and configure it as follows:
Start RPS: 100
End RPS: 100
Duration: 7200
Run your test in command-line non-GUI mode like:
jmeter -n -t test.jmx -l result.csv
Open the JMeter GUI, add e.g. an Aggregate Report listener to your test plan and look at the metrics. You can also generate an HTML Reporting Dashboard to see extended results and charts.
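For example, assuming the result.csv produced by the run above, the dashboard can be generated from the command line like this (the dashboard output folder name is just an example and must not exist yet):

jmeter -g result.csv -o dashboard

or have it generated automatically at the end of the run:

jmeter -n -t test.jmx -l result.csv -e -o dashboard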
My Moodle site is hosted on an AWS server with 8 GB RAM. I carried out various tests on the server using JMeter (NFT); I have tested from 15 to almost 1000 users, however I am still not getting any errors (less than 0.3%). I am using the scripts provided by Moodle itself. What could be the issue? Is there any issue with the script? I have attached a screenshot which shows the report of the 1000-user test for reference.
If you're happy with the number of errors and the response times (the maximum response time is more than 1 hour, which is kind of too much for me) you can stop here and report the results.
However, I doubt that a real user will be happy to wait 1 hour to see the login page, so I would rather define some realistic pass/fail criteria; for example, I would expect the response time to be no more than 5 seconds. In that case you will have > 60% failures, if this is what you're trying to achieve.
You can consider using the following test elements:
Set reasonable response timeouts using HTTP Request Defaults, so that any request lasting longer than 5 seconds is terminated and marked as failed.
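To enforce the 5-second limit mentioned above, the relevant fields (under the Advanced tab of HTTP Request Defaults, Timeouts in milliseconds) would look something like this:

Connect: 5000
Response: 5000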
Or use a Duration Assertion; in this case JMeter will wait for the response and mark it as failed if the response time exceeds the defined duration.
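A minimal sketch of the assertion, matching the 5-second criterion above:

Duration Assertion -> Duration in milliseconds: 5000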
I started using Kamon instrumentation recently and I am facing issues with the refresh rate of the Kamon/Prometheus HTTP endpoint.
Preface:
using "io.kamon" %% "kamon-bundle" % "2.1.4" && "io.kamon" %% "kamon-prometheus" % "2.1.4"
exposing metrics as http endpoint so that prometheus scrapes them and evaluates every 1 sec
created custom Counter, Gauge and Histogram metrics and they are updated 2-3K times per sec inside the Akka actor processing incoming messages
The reason to use Kamon instead of standard prometheus client is to get thread safety
There is configuration kamon.metric.tick-interval 1 second & kamon.prometheus.refresh-interval 1 second related to the rate of refresh
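For context, the custom metrics are created roughly like this (a minimal sketch using the Kamon 2.x API; the metric names and the currentQueueSize/elapsedTime values are hypothetical):

import kamon.Kamon

// instruments are created once, outside the hot path
val processedCounter = Kamon.counter("app.messages.processed").withoutTags()
val queueSizeGauge   = Kamon.gauge("app.queue.size").withoutTags()
val latencyHistogram = Kamon.histogram("app.message.latency").withoutTags()

// inside the actor, per incoming message (2-3K times per second):
processedCounter.increment()
queueSizeGauge.update(currentQueueSize) // hypothetical Double value
latencyHistogram.record(elapsedTime)    // hypothetical Long value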
Problem:
Custom metrics that are exposed at the endpoint (localhost:9095) are not refreshed every second; they are refreshed approximately every 60 seconds.
It's not a Prometheus configuration problem: I'm checking the values on the HTTP endpoint exposed by Kamon by manually refreshing the page.
This was a misconfiguration issue. If you are getting the same problem, please make sure that the kamon configuration is at the top level of application.conf, not inside the akka {..} block as I had it.
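In other words, in application.conf (a sketch; the nested form is silently ignored by Kamon):

# wrong - Kamon never sees these settings:
akka {
  kamon.metric.tick-interval = 1 second
  kamon.prometheus.refresh-interval = 1 second
}

# right - kamon is a top-level block:
kamon.metric.tick-interval = 1 second
kamon.prometheus.refresh-interval = 1 second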
I am trying to load test Nginx installed on an EC2 instance via JMeter. Every time I try to load test, only 50% of the requests are successful.
For example:
If I try with 10 users, only 5 responses are OK
If I try with 100 users, only 50 responses are OK
If I try with 500 users, only 250 responses are OK
Any idea regarding this strange behavior?
This sounds weird. I would recommend the following troubleshooting techniques:
First of all, always check the jmeter.log file; it should contain enough information to get to the bottom of your test failure(s).
If the JMeter log file doesn't contain any suspicious entries, the next step would be checking the response messages using e.g. the View Results In Table and/or View Results Tree listeners. This should give you some high-level information and trends, i.e. you will be able to see whether some particular sampler(s) are always failing.
If the above steps don't give you enough clues to resolve the issue, you can temporarily enable saving of request and response data to see what is wrong with the failing sampler(s). Add the following lines to the user.properties file (located in JMeter's "bin" folder):
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.responseHeaders=true
jmeter.save.saveservice.url=true
and the next time you run your JMeter test the .jtl results file will contain all the relevant data, which can be analyzed using the aforementioned View Results Tree listener. Don't forget to revert the change once you have fixed the script, as JMeter listeners are very resource intensive per se and the above settings greatly increase disk IO, which may ruin your test.
If none of the above helps, check the logs on the application under test side; most probably you will get something from them.
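For Nginx that would typically be something like this (the log paths are assumptions and depend on your installation):

tail -n 100 /var/log/nginx/error.log

and check the access log for the status codes of the failing 50% of requests.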
I am using Postman to run a Runner on some specific requests. Is it possible to create a schedule to execute it (meaning every day at a specific hour)?
You can set up a Postman Monitor on your collection and schedule it to execute the requests on a minute/hour/week basis.
This article can get you started on creating your monitor. Postman allows 1000 free monitoring requests per month.
PS: Postman gives you details about the responses, such as the number of successful requests, response codes, response size etc. I wanted the actual response for my test, so I just printed the response body as shown below. Hope it helps someone out there :)
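Something like this in the request's Tests tab (a minimal sketch using the Postman sandbox API; the output goes to the Postman Console):

// log the raw response body
console.log(pm.response.text());
// or, for JSON APIs, the parsed body
console.log(pm.response.json());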
Well, if there is no other possibility, you can actually try doing this:
- launch the Postman runner
- configure the highest possible number of iterations
- configure the delay (in milliseconds) to fit your scheduling requirement
It is absolutely awful, but if the delay variable can be set high enough, it might work.
It implies that Postman is continuously running.
You may do this using a scheduling tool that can launch command lines and use Newman ...
I don't think Postman can do it on its own
Alexandre
EDIT: check out this Postman feature: https://www.getpostman.com/docs/postman/monitors/intro_monitors
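If you go the Newman route, a minimal sketch (assuming Newman is installed and on cron's PATH, and the collection is exported to a file; the path and schedule are just examples):

# crontab entry: run the collection every day at 9:00
0 9 * * * newman run /path/to/collection.json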
From Postman v10.2.1 onwards you can schedule your collections to run directly (without using monitors) at the specified times.
Check it out here - https://learning.postman.com/docs/running-collections/scheduling-collection-runs/
I am doing it like this:
Inside the OSB pipeline's message flow, at the beginning of the request, I assign the current time to a variable. Then in the response, I subtract that variable from the current time of the response to calculate the response time. Then I have a reporting action to report this number.
I know OSB has a built-in monitoring tool that can display the response time for the proxy service, pipeline and business service. As you can see, my solution only includes the time from the beginning of the pipeline plus the business service, but not the time the request and response messages spend going through the proxy service. Besides that, calculating it this way also feels like a non-standard approach.
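For reference, a sketch of the expressions involved (the variable names are hypothetical; in XQuery, subtracting two xs:dateTime values yields a duration):

Request pipeline: Assign fn:current-dateTime() to $startTime
Response pipeline: Assign fn:current-dateTime() - $startTime to $elapsed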
OSB provides a JMX API which can retrieve this built-in monitoring data, but that would make our project more complicated.
If we want to use the OSB reporting action to report the response time, what is the best way to do it?
Just switch WebLogic to use the extended log format and tell it to add time-taken to the list of tokens it logs on each response.
http://middlewaretechnologies.blogspot.com.au/2012/03/configure-extended-logging-in-http.html
or if you want to read the official docs:
http://docs.oracle.com/cd/E14571_01/web.1111/e13701/web_server.htm#CNFGD207
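For reference, with extended logging enabled the access log header might look something like this (a sketch; the exact field list is whatever you configure in the WebLogic console):

#Version: 1.0
#Fields: date time time-taken cs-method cs-uri sc-status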