I have a volume test to measure the response time of sending PUT requests to different locations concurrently, up to 100-200 locations. I'm using JMeter and I'm wondering if there is a way to achieve this in JMeter?
Test - HTTP PUT the same file to different locations concurrently (up to 100-200 locations).
Example - send the 5 requests below (up to 200) at the same time to different locations.
1. Put /location1/object1 File 1
2. Put /location2/object2 File 1
3. Put /location3/object3 File 1
4. Put /location4/object4 File 1
5. Put /location5/object5 File 1
I've tried the Loop and While Controllers with a CSV Data Set Config, but it seems like they send the requests one after another from the CSV instead of concurrently. The only solution I can think of is to create up to 100-200 thread groups to run the test plan. If I do create 100-200 thread groups, I'm not certain what impact that will have on my PC.
Below is my current Test Plan.
Test Plan
  HTTP Request Defaults
  HTTP Header Manager
  Thread Group
    + Get Service
        Get URL
    + While Controller
        Put Method
            Put {PATH from CSV} File 1
        CSV Data Set Config
            {5 paths in CSV}
Specify the number of threads as 400 in the Thread Group and use the __threadNum function.
To run all the requests from the CSV file concurrently, you can use the __CSVRead() function.
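For example, assuming the paths live in a file called paths.csv with one path per line (the file name here is just for illustration), the Path field of the HTTP Request sampler could be set to:

${__CSVRead(paths.csv,0)}${__CSVRead(paths.csv,next)}

The first call returns column 0 of the thread's current row and the second advances to the next row, so each thread picks up a different path and all the threads send their PUT requests at the same time.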
I am new to JMeter, so I'm getting confused about how to conduct a test. My test scenario:
1) Hit a REST URL in API Gateway
2) Request should be 100 requests per seconds
3) Conduct the test for 2 hrs
4) Evaluate the error / success percentage
What parameters should I set to achieve this combination? Any help will be appreciated.
Thanks in advance
Add Concurrency Thread Group to your Test Plan and configure it like:
Put ${__tstFeedback(jp#gc - Throughput Shaping Timer,500,1000,10)} into "Target Concurrency" input.
Put 120 into "Hold Target Rate Time (min)" input.
Add HTTP Request Sampler to your Test Plan and configure it to send request to the REST URL
You might also need to add HTTP Header Manager to send Content-Type header with the value of application/json
Add Throughput Shaping Timer as a child of your HTTP Request sampler and configure it like:
Start RPS: 100
End RPS: 100
Duration: 7200
Run your test in command-line non-GUI mode like:
jmeter -n -t test.jmx -l result.csv
Open the JMeter GUI, add e.g. an Aggregate Report listener to your test plan and look at the metrics. You can also generate an HTML Reporting Dashboard to see extended results and charts.
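For example, the dashboard can be generated at the end of a non-GUI run by adding the -e and -o options (the output folder name is arbitrary, and the folder must be empty or not exist yet):

jmeter -n -t test.jmx -l result.csv -e -o dashboard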
I am trying to load test Nginx installed on an EC2 instance via JMeter. Every time I try to load test, only 50% of the requests are successful.
For example:
If I try with 10 users, only 5 responses are OK
If I try with 100 users, only 50 responses are OK
If I try with 500 users, only 250 responses are OK
Any idea regarding this strange behavior?
This sounds weird. I would recommend the following troubleshooting techniques:
First of all, always check the jmeter.log file; it should contain enough information to get to the bottom of your test failure(s).
If the JMeter log file doesn't contain any suspicious entries, the next step would be checking response messages using e.g. the View Results in Table and/or View Results Tree listeners. This should give you some high-level information and trends, e.g. you will be able to see whether particular samplers are always failing.
If the above steps don't give you enough clues to resolve the issue, you can temporarily enable saving of request and response data to see what is wrong with the failing sampler(s). Add the following lines to the user.properties file (located in JMeter's "bin" folder):
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.responseHeaders=true
jmeter.save.saveservice.url=true
The next time you run your JMeter test, the .jtl results file will contain all the relevant data, which can be analyzed using the aforementioned View Results Tree listener. Don't forget to revert the change once you have fixed the script: JMeter listeners are very resource-intensive per se, and the above settings greatly increase disk IO, which may ruin your test.
If none of the above helps, check the logs on the application under test's side; most probably you will get something from them.
I have an architecture where a customer uploads a file or set of files to S3 for processing. The files are then moved (and untarred/unzipped/etc.) to a more appropriate S3 bucket, and a message is placed in SQS by the Lambda to be picked up by a compute engine. In most cases, only one message per customer request is generated. However, there might be a case where the customer loads, say, 200 images into the same request (all 200 images are slices from a single 3D image), one at a time. This will generate 200 Lambda calls and 200 messages. My compute engine can process the same request multiple times without a problem, but I would like to avoid processing the same request 200+ times (each such run takes > 5 minutes on a large EC2 instance).
Is there a way, working within the Amazon tools, to either coalesce messages in a queue that have the same message body into a single message, or to peek into a queue for a message with a specific message body?
The only thing I can think of is to have a "special" file in my destination S3 bucket that records the last time a Lambda put this message in the queue. The issue with that: say the first image slice comes in and I put "Do this guy" in the queue; 50 more images come in, and I notice that the "special" file is there; the message is picked up and processing starts; the rest of the images come in; then processing finishes and fails due to having only 50 out of the 60 needed images, and there are no pending messages in the queue because I blocked them all...
Or, I just suck it up and let the compute engine run 200 times, fail quickly ~199 times, then succeed 1 (or more) times...
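For illustration, here is a minimal sketch of the marker-file idea described above, using the AWS SDK for Java (the bucket name, marker key, and queue URL are all hypothetical). Note that the existence check and the write are two separate calls, so two Lambdas can still race between them, which is exactly the weakness outlined above:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class MarkerGate {
    private static final String BUCKET = "destination-bucket"; // hypothetical
    private static final String QUEUE_URL =
            "https://sqs.us-east-1.amazonaws.com/123456789012/compute-queue"; // hypothetical

    public static void maybeEnqueue(String requestId) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String markerKey = "markers/" + requestId;

        // Non-atomic check-then-act: another Lambda can slip in between
        // these two calls, which is the race described above.
        if (!s3.doesObjectExist(BUCKET, markerKey)) {
            s3.putObject(BUCKET, markerKey, "enqueued at " + System.currentTimeMillis());
            sqs.sendMessage(QUEUE_URL, "process request " + requestId);
        }
    }
}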
I have a Java web app hosted on Google App Engine (GAE). The user clicks a button and gets a data table with 100 rows. At the bottom of the page there is a "Make Web service calls" button. Clicking on that, the application takes one row at a time and makes a third-party web-service call using the URLConnection class. That part is working fine.
However, since there is a 60-second limit on the HttpRequest/Response cycle, not all 100 transactions go through; the timeout happens around row 50 or so.
How do I create a loop and send the web service calls without the user having to click "Make Web service calls" more than once?
Is there a way to stop the loop before 60 seconds and then start again without committing the HttpResponse? (I don't want to use an asynchronous Google backend.)
Also, does GAE support file upload (to get the 100 rows from a file instead of a database)?
Thank you.
Adding some code as per the comments:
URL url = new URL(urlString);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("POST");
connection.setConnectTimeout(35000);
connection.setRequestProperty("Accept-Language", "en-US,en;q=0.5");
connection.setRequestProperty("Authorization", encodedCredentials);
// Send the POST request body
DataOutputStream wr = new DataOutputStream(connection.getOutputStream());
wr.writeBytes(submitRequest);
It all depends on what happens with the results of these calls.
If the results are not returned to a UI, there is no need to block it. You can use the Task Queue API to create 100 tasks and return a response to the user, which will take a few seconds at most. The additional benefit is that you can make up to 10 calls in parallel by using tasks, as sketched below.
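A minimal sketch of that approach using the App Engine Task Queue API (the /tasks/callWebService worker URL and the rowId parameter are made up for illustration):

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

// Enqueue one task per row; App Engine delivers each task to the worker
// servlet outside the 60-second request that enqueued it.
Queue queue = QueueFactory.getDefaultQueue();
for (String rowId : rowIds) {
    queue.add(TaskOptions.Builder
            .withUrl("/tasks/callWebService") // hypothetical worker servlet
            .param("rowId", rowId));
}

The worker servlet mapped to that URL then makes the single third-party call for its row.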
If the results have to be returned to a user, you can still use up to 10 threads to process as many requests in parallel as possible. Hopefully this will bring your time under 1 minute, but you cannot guarantee it, since you depend on responses from third-party resources which may be unavailable at the moment. You will have to implement your own retry mechanism.
Also note that users are not accustomed to waiting several minutes for a website to respond. You may want to consider a different approach, where the user is notified after the last request has been processed, without blocking your client code.
And yes, you can load data from files on App Engine.
Try using asynchronous urlfetch calls:
import java.net.URL;
import java.util.LinkedList;
import java.util.concurrent.Future;
import com.google.appengine.api.urlfetch.*;

URLFetchService urlFetchService = URLFetchServiceFactory.getURLFetchService();
LinkedList<Future<HTTPResponse>> futures = new LinkedList<>();

// Start all the requests without blocking
for (URL url : urls) {
    HTTPRequest request = new HTTPRequest(url, HTTPMethod.POST);
    request.setPayload(...); // payload elided in the original
    futures.add(urlFetchService.fetchAsync(request));
}

// Collect all the results
for (Future<HTTPResponse> future : futures) {
    HTTPResponse response = future.get(); // blocks until this fetch completes
    // Do something with the response
}
In a web service that I am working on, a user's data needs to be updated in the background - for example pulling down and storing their tweets. As there may be multiple servers performing these updates, I want to ensure that only one can update any single user's data at one time. Therefore, (I believe) I need a method of doing an atomic read (is the user already being updated) and write (no? Then I am going to start updating). What I need to avoid is this:
Server 1 sends request to see if user is being updated.
Server 2 sends request to see if user is being updated.
Server 1 receives response back saying the user is not being updated.
Server 2 receives response back saying the user is not being updated.
Server 1 starts downloading tweets.
Server 2 starts downloading the same set of tweets.
Madness!!!
Steps 1 and 3 need to be combined into an atomic read+write operation so that Step 2 would have to wait until Step 3 had completed before a response was given. Is there a simple mechanism for effectively providing a "lock" around access to something across multiple servers, similar to the synchronized keyword in Java (but obviously distributed across all servers)?
Take a look at Dekker's algorithm; it might give you an idea:
http://en.wikipedia.org/wiki/Dekker%27s_algorithm
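For reference, a minimal two-thread sketch of Dekker's algorithm in Java (using atomics so the reads and writes are visible across threads). In a multi-server setup, the flags and the turn variable would have to live in shared storage with atomic access, which is why a database or a distributed lock service is usually used instead:

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicIntegerArray;

// Dekker's algorithm for two competing processes (ids 0 and 1).
class DekkerLock {
    private final AtomicIntegerArray wants = new AtomicIntegerArray(2); // 1 = wants to enter
    private final AtomicInteger turn = new AtomicInteger(0);

    void lock(int self) {
        int other = 1 - self;
        wants.set(self, 1);
        while (wants.get(other) == 1) {
            if (turn.get() != self) {
                wants.set(self, 0); // back off and wait for our turn
                while (turn.get() != self) { /* busy-wait */ }
                wants.set(self, 1);
            }
        }
    }

    void unlock(int self) {
        turn.set(1 - self); // hand priority to the other process
        wants.set(self, 0);
    }
}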