JMeter repeat Regular Expression extractor for all requests - regex

I have a JMeter script that goes through a bunch of requests, each being different: GETs, POSTs and so on...
Each request returns a custom header from the server that has some numeric values in it. This header carries the actual processing time the request took on the server side (without latency/HTTP overhead).
I was able to add a Regular Expression Extractor to get that value from the header without any problems; however, I would like this to be repeated for all the requests.
By using a Debug Sampler I can see that the extractor only runs once, and it seems to keep only the last instance.
How can I have an extractor that runs on all requests and collects all the values from the header?
Bonus question. Finally I would love to be able to aggregate these values and get one average value.
Disclaimer: This other question is similar to mine but it doesn't explain how to actually do it in terms of the locations of the extractor and the debug sampler.
Track results of a regular expression extractor in JMeter
Thank you.

Just put the Regular Expression Extractor at the same level as your HTTP Request samplers and it will be applied to all of them.
See the Scoping Rules User Manual entry for a more detailed explanation.
With regards to value collection, the best option is using the Sample Variables property. Given you store your header value in a variable called ${foo}, you can get it appended to the .jtl results file by adding the next line to the user.properties file:
sample_variables=foo
A JMeter restart will be required to pick the property up. The other way (which doesn't require a restart) is passing the property via the -J command-line argument:
jmeter -Jsample_variables=foo -n -t test.jmx -l result.jtl
As a result you will get an extra column called foo in the .jtl results file, and it will hold the ${foo} variable value for each Sampler. Upon test completion you will be able to open the .jtl results file with MS Excel or an equivalent and use the AVERAGE function to get the value you're looking for.
See the Apache JMeter Properties Customization Guide for more information on setting and amending various JMeter properties to configure JMeter according to your needs.
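If you'd rather not open the results in a spreadsheet at all, here is a minimal Node.js sketch of the same averaging, assuming the default CSV .jtl format with a header row and the foo column added by sample_variables (note the naive comma split will break if other fields contain commas):

var fs = require('fs');

// Read the results file and split it into lines (header + one line per sample)
var lines = fs.readFileSync('result.jtl', 'utf8').trim().split('\n');
var header = lines[0].split(',');
var idx = header.indexOf('foo'); // column added by sample_variables=foo

// Parse the foo column, skipping samples where the header was absent
var values = lines.slice(1)
    .map(function (line) { return parseFloat(line.split(',')[idx]); })
    .filter(function (v) { return !isNaN(v); });

var sum = values.reduce(function (a, b) { return a + b; }, 0);
console.log('average foo: ' + (sum / values.length));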

While Dmitri's answer is one way of doing it, I wanted something that didn't require exporting to a file and post-processing it each time...
I ended up doing this "manually".
By manually I mean I added a BSF Assertion with language = JavaScript and then wrote some JavaScript to:
Pull the value out of the header (if found)
Keep a record of the total/count using variables
Update a variable that always holds the running aggregate
I also added a Debug Sampler to get easy access to the values after the test.
The following is the code that I used in the BSF Assertion:
var responseHeaders = prev.getResponseHeaders();
var xNodetasticRt = /x-nodetastic-rt: (\d+\.?\d*)/.exec(responseHeaders);
if (xNodetasticRt) {
    // Extract this sample's server-side processing time from the header
    var value = parseFloat(xNodetasticRt[1]);
    vars.put("xNodetasticRt", String(value)); // JMeter variables are stored as strings

    // Accumulate the running total (parseFloat(null) yields NaN on the first sample)
    var total = parseFloat(vars.get("xNodetasticRt-Total"));
    if (!total) {
        total = 0.0;
    }
    total += value;
    vars.put("xNodetasticRt-Total", String(total));

    // Accumulate the sample count
    var count = parseFloat(vars.get("xNodetasticRt-Count"));
    if (!count) {
        count = 0;
    }
    count++;
    vars.put("xNodetasticRt-Count", String(count));

    // Keep the running average up to date
    vars.put("xNodetasticRt-Average", String(total / count));
}

Related

How to load a saved search with huge data in an MR script? NetSuite

We have a transactional saved search with lines in the millions. The saved search fails to load in the UI; is there any way to load such saved searches in a map/reduce script?
I tried using pagination but it still shows an error (ABORT_SEARCH_EXCEEDED_MAX_TIME).
NetSuite may still time out depending on the complexity of the search, but you do not have to run the search in order to send the results to the map stage:
/**
 * @NApiVersion 2.x
 * @NScriptType MapReduceScript
 */
define(['N/search', 'N/record'], function (search, record) {
    function getInputData(ctx) {
        // Return the search object itself; the framework feeds its
        // result rows to the map stage without loading them all here.
        return search.load({ id: 'mysearchid' });
    }
    function map(ctx) {
        var ref = JSON.parse(ctx.value);
        var tranRec = record.load({ type: ref.recordType, id: ref.id });
        log.debug({
            title: 'map stage with ' + ref.values.tranid, // presumes Document Number was a result column
            details: ctx.value // have a look at the serialized form
        });
    }
    return { getInputData: getInputData, map: map };
});
Instead of getting all rows, it may be an even better option to get only the first N rows (100K or even fewer) per MR execution, save the last internal id from the processed rows, and start from the next internal id in the next MR script execution.
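For instance, a hypothetical getInputData along those lines might look like this; the custscript_last_id script parameter and the checkpoint mechanics are assumptions for illustration, and N/runtime would need to be added to the define() list above:

// Hypothetical sketch: resume from the last processed internal id.
function getInputData() {
    var lastId = runtime.getCurrentScript().getParameter({ name: 'custscript_last_id' }) || 0;
    var s = search.load({ id: 'mysearchid' });
    // Only pick up rows after the checkpoint saved by the previous run
    s.filters.push(search.createFilter({
        name: 'internalidnumber',
        operator: search.Operator.GREATERTHAN,
        values: lastId
    }));
    return s;
}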

Parse Days in Status field from Jira Cloud for Google Sheets

I am using the Jira Cloud for Sheets add-on in order to get the Days in Status field from Jira. It seems to have the following syntax, from this post:
<STATUS_ID>_*:*_<NUMBER_OF_TIMES_ISSUE_WAS_IN_THIS_STATUS>_*:*_<SECONDS>_*|
Here is an example:
10060_*:*_1_*:*_1121033406_*|*_3_*:*_1_*:*_7409_*|*_10000_*:*_1_*:*_270003163_*|*_10088_*:*_1_*:*_2595005_*|*_10087_*:*_1_*:*_1126144_*|*_10001_*:*_1_*:*_0
I am trying to extract, for example, how many times the issue was in the In QA status and the duration it spent in a given status, so I am dealing with parsing this pattern to obtain that information and return it using an ARRAYFORMULA. Days in Status information is provided only when the issue has been completed (is in Done status); otherwise, no information is provided. Also, if the issue is in Done status but never transitioned through a given status, that status will not appear in the Days in Status string.
I am trying to use the REGEXEXTRACT function to match a pattern, for example:
=REGEXEXTRACT(C2, "(10060)_\*:\*_\d+_\*:\*_\d+_\*|")
and it returns an empty value, where I expect 10060. It caught my attention that when I use the REGEXMATCH function it returns TRUE:
=REGEXMATCH(C2, "(10060)_\*:\*_\d+_\*:\*_\d+_\*|")
so the syntax is not clear to me. Google refers, as a reference for regular expressions, to the following documentation. It seems to be an issue with the vertical bar |: per this documentation it is a special character, which I tried representing as \v, but this doesn't work either; REGEXMATCH then returns FALSE. I tried to find an online regex tester that implements the Google Sheets syntax (RE2) and found ReGo, but I don't know if it is a valid one.
I was trying to use the SPLIT function like this:
=query(SPLIT(C2, "_*:*_"), "SELECT Col1")
but it seems a more complicated approach for getting all the values I need from the Days in Status string, even though it separates all the values of the pattern well. In this case, I am getting the first Status ID. The number of columns returned by SPLIT will vary because it depends on the number of statuses the issue transitioned through in order to get to the Done status.
It seems to be a complex task given all the issues I have encountered, but maybe some of you have dealt with this before and can advise on some ideas. It requires properly parsing the information and then extracting it into specific columns using the ARRAYFORMULA function when it applies to a given status from the Status column.
Here is a Google spreadsheet sample with the input information. I would like to populate the Times In QA (column C) and Duration in QA (column D; the information is provided in seconds and I would need days, but that is a minor task) columns for the In QA status; the same would then apply for the rest of the statuses. I added the tab Settings for mapping the Status ID to my Status names; I would need to use a lookup function to match the Status column in the Jira Issues tab. I would like a solution without helper columns; maybe it will require some script.
https://docs.google.com/spreadsheets/d/1ys6oiel1aJkQR9nfxWJsmEyd7XiNkVB-omcNL0ohckY/edit?usp=sharing
try:
=INDEX(IFERROR(1/(1/QUERY(1*IFNA(REGEXEXTRACT(C2:C, "10087.{5}(\d+).{5}(\d+)")),
"select Col1,Col2/86400 label Col2/86400''"))))
...so after we do REGEXEXTRACT, some rows (those which cannot be extracted from) will output an #N/A error, so we wrap it in IFNA to remove those errors. Then we multiply by 1 to convert everything into numeric values (regex always works with and outputs plain text). Then we use QUERY to convert the 2nd column from seconds into days in one go. At this point every row has some value, so to get rid of the zeros for rows we don't need (like rows 2, 3, 5, 8, 9, etc.) while keeping the output numeric, we use the IFERROR(1/(1/ wrapping. And finally, we use INDEX or ARRAYFORMULA to process our array.
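Since the asker mentioned a script might be acceptable, a hypothetical Google Apps Script custom function could also parse the string directly instead of the formula above (DAYS_IN_STATUS is an invented name; the status id is passed in as text):

// Returns [[count, days]] for a given status id from a Days in Status string.
// Usage in a cell: =DAYS_IN_STATUS(C2, "10087")
function DAYS_IN_STATUS(value, statusId) {
  if (!value) return [['', '']];
  var entries = String(value).split('_*|*_'); // each entry: id_*:*_count_*:*_seconds
  for (var i = 0; i < entries.length; i++) {
    var parts = entries[i].split('_*:*_');
    if (parts[0] === String(statusId)) {
      return [[Number(parts[1]), Number(parts[2]) / 86400]]; // seconds -> days
    }
  }
  return [['', '']]; // status never visited
}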

How to increase Variable value based on the iteration being run in Postman

I have an API request that I need to run in the Postman Collection Runner through multiple iterations. The API request uses a variable.
How can I make this variable automatically increase with each iteration (or maybe set the iteration value as another variable)?
If I understand your question correctly, you would like to assign different values to a variable in the request in different iterations, which is achievable in 2 ways.
a) Using data files
https://learning.getpostman.com/docs/postman/collection_runs/working_with_data_files/
The data files could be in JSON or CSV format. Unfortunately, there is no way in Postman to tie the variable values to another variable unless you want to do it in a hacky way!
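For example, a minimal JSON data file for the runner might look like this (the variable name var matches the scripts below; the values are arbitrary):

[
  { "var": 1 },
  { "var": 2 },
  { "var": 3 }
]

Each object supplies the variable values for one iteration of the run.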
b) Pre-request & Tests scripts
1- Initialise the environment variable in the Pre-request Scripts like this:
var value = pm.environment.get("var");
if (!value) {
    pm.environment.set("var", 1);
}
2- Increment the variable value in Tests:
var value = pm.environment.get("var");
pm.environment.set("var", Number(value) + 1); // coerce first: the stored value may come back as a string
This creates an environment variable and increments it after each iteration. Depending on how you structure your collection, you might need to consider flushing/resetting the environment variable to be ready for the next run.
It is worth mentioning that Pre-request Scripts and Tests run before and after the request respectively, so you can put any script that you would like to run after the request in the Tests. It doesn't necessarily have to be a test!
1. Using Global pm.* functions and Variables in Pre-Request Scripts/Tests
Pre-Request script - runs before executing the request
Tests - runs after executing the request
a.
pm.variables.set("id", pm.info.iteration);
Ex: example.com/{{id}}/update gives
example.com/0/update
example.com/1/update etc...
The number of iterations is set in the Collection Runner. The pm.info.iteration key holds the current iteration number, starting at 0.
b.
var id = +pm.globals.get("id"); // unary + coerces the stored string to a number
pm.globals.set("id", ++id);
The variables can be in any scope - globals/collection/environment/local/data.
In the Collection Runner, check the Keep Variable Values checkbox to persist the final value of the variable (here id) across the session.
Note: If the variable is accessed via an individual scope (pm.globals.*, pm.environment.* or pm.collectionVariables.*), the mentioned checkbox should be toggled as required. If it is accessed via the local scope (pm.variables.*), the value will not be persisted irrespective of the checkbox.
Ex: Same as above
More on variables and scoping
2. Using Dynamic variables
These variables can be used in case random values are needed or no specific order is necessary.
a. $randomInt - gives a random integer between 0 and 1000.
Ex: example.com/{{$randomInt}}/update gives
example.com/789/update,
example.com/265/update etc...
b. $timestamp - gives current UNIX timestamp in seconds.
Ex: example.com/{{$timestamp}}/update gives
example.com/1587489427/update
example.com/1587489434/update etc...
More on Dynamic variables
This was written using Postman 7.22.1; new methods may come in the future.

PDI - Check data types of field

I'm trying to create a transformation that reads CSV files and checks the data types of each field in the CSV.
For example: field A should be a string(1) character and field B should be an integer/number.
What I want is to check/validate: if A is not string(1) then set Status = Not Valid, and likewise if B is not an integer/number. Then every file with status Not Valid will be moved to an error folder.
I know I can use the Data Validator to do it, but how do I move the file based on that status? I can't find any step to do it.
You can read the files in a loop and add steps as below:
After the data validation, filter the rows with a negative result (not matched) -> add an Add Constants step with error = 1 -> add a Set Variables step for the error field (with a default value of 0).
After the transformation finishes, add a Simple Evaluation step in the parent job to check the value of the ERROR variable: if it has value 1, move the file; otherwise continue.
I hope this can help.
You can do the same as in this question. Once the file is read, use a Group By to get one flag per file. However, this time you cannot do it in one transformation; you have to use a job.
Your use case is in the samples that were shipped with your PDI distribution. The samples are in the folder your-PDI/samples/jobs/run_all. Open Run all sample transformations.kjb and replace the Filter 2 of Get Files - Get all transformations.ktr with your own logic, which includes a Group By so that you have one status per file rather than one status per row.
In case you wonder why you need such complex logic for such a task, remember that PDI starts all the steps of a transformation at the same time. That's its great power, but it means you cannot know, inside the transformation, whether every row has been processed before you move the file.
Alternatively, you have the quick and dirty solution of your similar question: change the Filter Rows to a type check, and the final Synchronize after merge to a Process Files/Move.
And a final piece of advice: instead of checking the type with a Data Validator, which is a good solution in itself, you may use a JavaScript step like the one there. It is more flexible if you need maintenance in the long run.
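As a sketch of that JavaScript route, a Modified Java Script Value step could flag each row; A and B are the field names from the question, and Status would be declared as a new output field in the step's Fields grid (treat this as illustrative, not tested against your data):

// A must be exactly one character, B must parse as a number
var aValid = (A != null && String(A).length == 1);
var bValid = (B != null && !isNaN(parseFloat(B)));
var Status = (aValid && bValid) ? 'Valid' : 'Not Valid';

The Group By on Status per file, as described above, then decides whether the whole file goes to the error folder.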

Track results of a regular expression extractor in JMeter

Our server returns a custom 'X-Execution-Time' HTTP response header that contains, in milliseconds, the time between the server receiving a request and our code returning a page, i.e. how long our code takes to run. I'm using JMeter to do some testing and I'd like to be able to report on this number over time. I've set up this regular expression extractor: X-Execution-Time:\s(\d+) but I don't know how to get JMeter to report on this number per request so I can get a trend over time.
This isn't elegant by any means, but it certainly works:
Add a Debug Sampler into your test plan and give it the same name as your regex reference. This will write out the time value into the results file.
Example if you have different pages:
regex reference = X-Execution-Time
Debug Sampler Name = PageName - Execution: ${X-Execution-Time}