Dynamic pre-request script in Postman

I have this pre-request script, and I'm using the Collection Runner to send bulk requests every second:
const moment = require('moment');
postman.setEnvironmentVariable("requestid", moment().format("0223YYYYMMDDHHmmss000000"));
I need the “requestid” to be unique every time.
first request: "022320221115102036000001"
second request: "022320221115102037000002"
third request: "022320221115102038000003"
.
.
.
and so on until let’s say 1000 requests.
Basically, I need to make the last 6 digits dynamic.

Your answer can be found on this postman request I've created for you. There are many ways to achieve this; given the little information provided, I've defaulted to:
Set a baseline prefix (everything before the last 6 digits)
Give a start number for the last 6 digits
If there is NOT a previously stored variable, initialize it with the values above
If there IS a previously stored variable, just increment it by one
The variable date is your final result; current is just the increment
You can see sequential requests here:
And here is the code, but I would test this directly on the request I've provided above:
// The initial 6-digit number to start with
// A number starting with 9xxxxx will be easier for String/Number conversions
const BASELINE = '900000'
const PREFIX = '022320221115102036'
// The previously used value, if any
let current = pm.collectionVariables.get('current')
// If there's a previous number, increment it; otherwise use the baseline
if (isNaN(current)) {
    current = BASELINE
} else {
    current = Number(current) + 1
}
// Final number you want to use (current already carries the prefix exactly once)
const date = PREFIX + current
pm.collectionVariables.set('current', current)
pm.collectionVariables.set('date', date)
console.log(current)
console.log(date)
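If you want the literal zero-padded suffix from the question (000001, 000002, ...) instead of the 9xxxxx baseline trick, the same idea works with String.prototype.padStart. This is a sketch: nextRequestId is a hypothetical helper, and the counter is kept in a plain variable here because outside Postman there is no pm.collectionVariables; in a real pre-request script you would read and write the counter with pm.collectionVariables instead.

```javascript
// Sketch: build request ids "<prefix><6-digit zero-padded counter>".
function nextRequestId(prefix, counter) {
  // Pad the counter to 6 digits: 1 -> "000001", 42 -> "000042"
  const suffix = String(counter).padStart(6, '0');
  return prefix + suffix;
}

let counter = 0; // in Postman: pm.collectionVariables.get/set('counter')
const ids = [];
for (let i = 0; i < 3; i++) {
  counter += 1;
  ids.push(nextRequestId('022320221115102036', counter));
}
console.log(ids);
// [ '022320221115102036000001',
//   '022320221115102036000002',
//   '022320221115102036000003' ]
```

The upside of padding over the 9xxxxx baseline is that the output matches the requested format digit for digit; the downside is you must remember the counter is a string with leading zeros when you read it back, hence the Number/String round-trip.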

Related

SuiteScript 2.0: Are there any search result limitations when executing a saved search via "getInputData" stage of map/reduce script?

I am currently building a map/reduce script in NetSuite which passes the results of a saved search from the getInputData stage to the map stage. This is done by first running a while loop in the getInputData stage to obtain the internal ids of each entry, inserting them into an array, and then passing that over to the map stage. Like so:
// run saved search - unlimited rows from saved search.
var results = [];
var pageSize = 1000;
var start = 0;
var count = 0;
do {
    var subresults = invoiceSearch.run().getRange({ start: start, end: start + pageSize });
    results = results.concat(subresults);
    count = subresults.length;
    start += pageSize; // getRange's end index is exclusive, so don't add 1 here
} while (count == pageSize);
var invSearchArray = [];
if (invoiceSearch) {
    // NOTE: .run().each has a limit of 4,000 results, hence the do-while loop above.
    for (var i = 0; i < results.length; i++) {
        var invObj = new Object();
        invObj['invID'] = results[i].getValue({ name: 'internalid' });
        invSearchArray.push(invObj);
    }
}
return invSearchArray;
I implemented it this way because I feared there would be result restrictions, just as with the .run().each function (limited to 4,000 results).
I made the assumption that passing the search object directly from getInputData to map would have its results restricted to 4,000 as well. Can someone offer clarity on whether there are such restrictions? Am I right to fear the script halting prematurely because search results cannot be processed beyond 4,000 in the getInputData stage of a map/reduce script?
Any example to aid me in understanding how a search object is processed in a map/reduce script would be most appreciated.
Thanks
If you simply return the Search instance, all results will be passed along to map, beyond the 1000 or 4000 limits of the getRange and each methods.
If the Search has 8500 results, all 8500 will get passed to map.
function getInputData() {
return search.load(...); // alternatively search.create(...)
}
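As for the getRange paging workaround in the question, its loop logic can be sanity-checked outside NetSuite with a stub standing in for ResultSet.getRange (a sketch; the stub and the 25-row dataset are made up, the point is that getRange's end index is exclusive, so the next page must start at start + pageSize, not start + pageSize + 1):

```javascript
// Sketch: end-exclusive paging, as in the do/while loop from the question.
var data = [];
for (var n = 0; n < 25; n++) data.push(n); // pretend: 25 search results

function getRange(opts) {
  // end is exclusive, like NetSuite's ResultSet.getRange
  return data.slice(opts.start, opts.end);
}

var results = [];
var pageSize = 10;
var start = 0;
var count = 0;
do {
  var subresults = getRange({ start: start, end: start + pageSize });
  results = results.concat(subresults);
  count = subresults.length;
  start += pageSize; // with pageSize + 1 you'd silently skip one row per page
} while (count === pageSize);
console.log(results.length); // 25
```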

Keeping a constant number in a set before other numbers are appended

I have a set of numbers which is generated in real time while the code runs. In the snippet below, in the function constant_set, the numbers generated in real time are stored in the variable set_accepted_list_frozen. What I want to ask is: if I want to keep one constant number in set_accepted_list_frozen at all times, before the other numbers are added to it, how should I proceed?
set_accepted_list_frozen = set()

def constant_set(set_accepted_list, set_forbidden_list):
    global set_accepted_list_frozen
    if num_accepted >= len(set_accepted_list_frozen):
        for var in set_accepted_list:
            if len(set_accepted_list_frozen) == num_accepted:
                break
            set_accepted_list_frozen.add(var)

CouchDB View - filter keys before grouping

I have a CouchDB database which has documents with the following format:
{ createdBy: 'userId', at: 123456, type: 'action_type' }
I want to write a view that will give me how many actions of each type were created by which user. I was able to do that creating a view that does this:
emit([doc.createdBy, doc.type, doc.at], 1);
With the reduce function "sum" and consuming the view in this way:
/_design/userActionsDoc/_view/userActions?group_level=2
this returns a result with rows just in the way I want:
"rows":[ {"key":["userId","ACTION_1"],"value":20}, ...
the problem is that now I want to filter the results for a given time period. So I want to have the exact same information but only considering actions which happened within a given time period.
I can filter the documents by "at" if I emit the fields in a different order:
emit([doc.at, doc.type, doc.createdBy], 1);
?group_level=3&startkey=[149328316160]&endkey=[1493283161647,{},{}]
but then I won't get the results grouped by userId and actionType. Is there a way to have both? Maybe writing my own reduce function?
I feel your pain. I have done two different things in the past to attempt to solve similar issues.
The first pattern is a pain and may work great or may not work at all. I've experienced both. Your map function looks something like this:
function(doc) {
  var obj = {};
  obj[doc.createdBy] = {};
  obj[doc.createdBy][doc.type] = 1;
  emit(doc.at, obj);
  // Ignore this for now
  // emit(doc.at, JSON.stringify(obj));
}
Then your reduce function looks like this:
function(key, values, rereduce) {
  var output = {};
  values.forEach(function(v) {
    // Ignore this for now
    // v = JSON.parse(v);
    for (var user in v) {
      output[user] = output[user] || {}; // initialize before summing into it
      for (var action in v[user]) {
        output[user][action] = (output[user][action] || 0) + v[user][action];
      }
    }
  });
  return output;
  // Ignore this for now
  // return JSON.stringify(output);
}
With large datasets, this usually results in a couch error stating that your reduce function is not shrinking fast enough. In that case, you may be able to stringify/parse the objects as shown in the "ignore" comments in the code.
The reasoning behind this is that couchdb ultimately wants you to output a simple object like a string or integer in a reduce function. In my experience, it doesn't seem to matter that the string gets longer, as long as it remains a string. If you output an object, at some point the function errors because you have added too many props to that object.
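You can sanity-check the reduce logic outside CouchDB by feeding it hand-built values (a sketch; reduceUserActions is a hypothetical standalone wrapper, the sample values are made up, and note the output[user] initialization, which the summing step needs before it can add to output[user][action]):

```javascript
// Sketch: the reduce logic, run outside CouchDB on hand-built values.
// Each value is an object shaped like { user: { action: count } }.
function reduceUserActions(values) {
  var output = {};
  values.forEach(function (v) {
    for (var user in v) {
      output[user] = output[user] || {}; // initialize before summing
      for (var action in v[user]) {
        output[user][action] = (output[user][action] || 0) + v[user][action];
      }
    }
  });
  return output;
}

var combined = reduceUserActions([
  { alice: { ACTION_1: 1 } },
  { alice: { ACTION_1: 1, ACTION_2: 1 } },
  { bob:   { ACTION_1: 1 } },
]);
console.log(combined);
// { alice: { ACTION_1: 2, ACTION_2: 1 }, bob: { ACTION_1: 1 } }
```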
The second pattern is potentially better, but requires that your time periods are "defined" ahead of time. If your time period requirements can be locked down to a specific year, month, day, quarter, etc., you just emit multiple times in your map function. Below I assume the at property is epoch milliseconds, or at least something the Date constructor can accurately parse.
function(doc) {
  var time_key;
  var my_date = new Date(doc.at);
  //// Used for filtering results in a given year
  //// e.g. startkey=["2017"]&endkey=["2017",{}]
  time_key = my_date.toISOString().substr(0,4);
  emit([time_key, doc.createdBy, doc.type], 1);
  //// Used for filtering results in a given month
  //// e.g. startkey=["2017-01"]&endkey=["2017-01",{}]
  time_key = my_date.toISOString().substr(0,7);
  emit([time_key, doc.createdBy, doc.type], 1);
  //// Used for filtering results in a given quarter
  //// e.g. startkey=["2017Q1"]&endkey=["2017Q1",{}]
  //// getUTCMonth() is 0-based, so add 1 after dividing to get Q1..Q4
  time_key = my_date.toISOString().substr(0,4) + 'Q' + (Math.floor(my_date.getUTCMonth()/3) + 1).toString();
  emit([time_key, doc.createdBy, doc.type], 1);
}
Then, your reduce function is the same as in your original. Essentially you're just trying to define a constant value for the first item in your key that corresponds to a defined time period. Works well for business reporting, but not so much for allowing for flexible time periods.
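The time-key derivation can be checked standalone (a sketch; timeKeys is a hypothetical helper, and the quarter is computed as floor(month/3) + 1 from getUTCMonth so that January lands in Q1, matching the example startkeys, and so the local-time quarter can't disagree with the UTC-based toISOString prefixes):

```javascript
// Sketch: derive the three time keys for one document's "at" timestamp.
function timeKeys(at) {
  var d = new Date(at);
  var iso = d.toISOString(); // e.g. "2017-01-15T00:00:00.000Z"
  return {
    year: iso.substr(0, 4),     // "2017"
    month: iso.substr(0, 7),    // "2017-01"
    quarter: iso.substr(0, 4) + 'Q' + (Math.floor(d.getUTCMonth() / 3) + 1),
  };
}

var keys = timeKeys(Date.UTC(2017, 0, 15)); // 15 Jan 2017, UTC
console.log(keys);
// { year: '2017', month: '2017-01', quarter: '2017Q1' }
```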

Jmeter Regular Expression Extractor. How to save all returned values to a single variable?

I'm quite new to JMeter and have already spent numerous hours trying to figure this out.
What I'm trying to achieve:
Using the Regular Expression Extractor post-processor, I wrote a regex that returns several values (already tested at www.regex101.com, where it works as expected). However, in JMeter I need to provide a Match No., which returns only one particular value. I figured out that a negative number in this field is supposed to return all values found. When I use the Debug Sampler to find out how many values are returned and which variables they are assigned to, I see a lot of unfamiliar stuff. Please see the examples below:
Text where regex to be parsed:
some data here...
"PlanDescription":"DF4-LIB 4224-NNJ"
"PlanDescription":"45U-LIP 2423-NNJ"
"PlanDescription":"PMH-LIB 131-NNJ"
some data here...
As I said earlier, at www.regex101.com I tested this with regex:
\"PlanDescription\":\"([^\"]*)\"
And all needed for me information are correct (with the group 1).
DF4-LIB 4224-NNJ
45U-LIP 2423-NNJ
PMH-LIB 131-NNJ
With a negative number (I tried -1, -2, -3 - same result) in the Match No. field of the JMeter Regex Extractor (whose Reference Name is Plans), the Debug Sampler shows the following:
Plans=
Plans_1=DF4-LIB 4224-NNJ
Plans_1_g=1
Plans_1_g0="PlanDescription":"DF4-LIB 4224-NNJ"
Plans_1_g1=DF4-LIB 4224-NNJ
Plans_2=45U-LIP 2423-NNJ
Plans_2_g=1
Plans_2_g0="PlanDescription":"45U-LIP 2423-NNJ"
Plans_2_g1=45U-LIP 2423-NNJ
Plans_3=PMH-LIB 131-NNJ
Plans_3_g=1
Plans_3_g0="PlanDescription":"PMH-LIB 131-NNJ"
Plans_3_g1=PMH-LIB 131-NNJ
In this particular case, I only need the JMeter regex to return 3 values containing:
DF4-LIB 4224-NNJ
45U-LIP 2423-NNJ
PMH-LIB 131-NNJ
And nothing else. If anybody has faced this problem before, any help will be appreciated.
Based on the output of the Debug Sampler, there's no problem; this is just how the RegEx extractor returns the response:
Plans_1, Plans_2, Plans_3 is the actual set of variables you wanted.
There should also be Plans_matchNr, which contains the number of matches (3 in your example). It's important if you loop through them (you will loop from 1 to the value of this variable).
The _g sets of variables refer to the matching groups of each match (3 matches in your case). Ignore them if you don't care about them. They are always published, but there's no harm in that.
Once variables are published you can do a number of things:
Use them as ${Plans_1}, ${Plans_2}, ${Plans_3} etc. (as comment above noticed).
Use Plans_... variables in loop: refer to the next variable in the loop as ${__V(Plans_${i})}, where i is a counter with values between 1 and Plans_matchNr
You can also concatenate them into 1 variable using the following simple BeanShell Post-Processor or BeanShell Sampler script:
int count = 0;
String allPlans = "";

// Get number of variables
try {
    count = Integer.parseInt(vars.get("Plans_matchNr"));
} catch (NumberFormatException e) {}

// Concatenate them (using a space). This could be optimized using StringBuffer, of course
for (int i = 1; i <= count; i++) {
    allPlans += vars.get("Plans_" + i) + " ";
}

// Save concatenated string into a new variable
vars.put("AllPlans", allPlans);
As a result you will have all old variables, plus:
AllPlans=DF4-LIB 4224-NNJ 45U-LIP 2423-NNJ PMH-LIB 131-NNJ
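The same concatenation loop can be sketched outside JMeter, with a plain object standing in for vars (a sketch; the Plans_* names mirror the Debug Sampler output above, and unlike the BeanShell version the result is trimmed here, so there is no trailing space):

```javascript
// Sketch: concatenate Plans_1..Plans_N the way the BeanShell script does,
// with a plain object standing in for JMeter's vars.
var vars = {
  Plans_matchNr: '3',
  Plans_1: 'DF4-LIB 4224-NNJ',
  Plans_2: '45U-LIP 2423-NNJ',
  Plans_3: 'PMH-LIB 131-NNJ',
};

var count = parseInt(vars.Plans_matchNr, 10) || 0;
var allPlans = '';
for (var i = 1; i <= count; i++) {
  allPlans += vars['Plans_' + i] + ' ';
}
vars.AllPlans = allPlans.trim();
console.log(vars.AllPlans);
// DF4-LIB 4224-NNJ 45U-LIP 2423-NNJ PMH-LIB 131-NNJ
```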

How do you do multiple transactions in one I-Descriptor?

Currently I have an I-Descriptor that pulls Sales from another FILE for Periods 1, 2, 3. I want to be able to pull Costs for Periods 1, 2, 3 and subtract the totals to get a profit.
The Current I-Descriptor Statement is:
TRANS(SAS1,ITEM,4,'X');#1<1,1,1>+#1<1,1,2>+#1<1,1,3>
4 = Sales
3 = Cost
#1<1,1,1> = Period 1
#1<1,1,2> = Period 2
#1<1,1,3> = Period 3
#1<1,1,4> = Period 4
You are looking for EXTRACT
So, try the following in the loc attribute:
TRANS(SAS1,ITEM,4,'X');EXTRACT(#1,1,1,1)+EXTRACT(#1,1,1,2)+EXTRACT(#1,1,1,3)
The next bit of the question isn't entirely clear to me, so let me know if I've made an incorrect assumption.
Costs come from the current file (the one this dictionary file is) from attribute (field) 3. It has the same format as the data for Sales (<1,1,1 to 3>). In this case you would need to use #RECORD.
TRANS(SAS1,ITEM,4,'X');EXTRACT(#1,1,1,1)+EXTRACT(#1,1,1,2)+EXTRACT(#1,1,1,3);EXTRACT(#RECORD,1,1,1)+EXTRACT(#RECORD,1,1,2)+EXTRACT(#RECORD,1,1,3);#2-#3
So, let's break it down:
Read attribute 4 from record ITEM in file SAS1. Return an empty string if the item doesn't exist. Hold this in position 1 (#1):
TRANS(SAS1,ITEM,4,'X');
Extract multi-subvalues 1 to 3 from the value in position 1, then add them together. Hold this in position 2:
EXTRACT(#1,1,1,1)+EXTRACT(#1,1,1,2)+EXTRACT(#1,1,1,3);
Extract multi-subvalues 1 to 3 from the current record and add them together. Hold this in position 3:
EXTRACT(#RECORD,1,1,1)+EXTRACT(#RECORD,1,1,2)+EXTRACT(#RECORD,1,1,3);
Finally, subtract the value in position 3 (total costs) from the value in position 2 (total sales). As this is the last position, return the result:
#2-#3
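The multivalue arithmetic in the breakdown above can be sketched with nested arrays standing in for UniVerse's attribute/value/subvalue structure (a sketch; extract is a hypothetical stand-in for EXTRACT, the indices mirror EXTRACT(#1, attribute, value, subvalue) but 0-based internally, and the sales/cost figures are made up):

```javascript
// Sketch: EXTRACT over a multivalued field, with nested arrays standing in
// for UniVerse's value/subvalue marks: rec[attribute][value][subvalue].
function extract(rec, attribute, value, subvalue) {
  return rec[attribute - 1][value - 1][subvalue - 1] || 0;
}

var sales = [[[100, 200, 300, 400]]]; // one attribute, one value, 4 period subvalues
var costs = [[[40, 80, 120, 160]]];

// Mirrors EXTRACT(#1,1,1,1)+EXTRACT(#1,1,1,2)+EXTRACT(#1,1,1,3), etc.
var totalSales = extract(sales, 1, 1, 1) + extract(sales, 1, 1, 2) + extract(sales, 1, 1, 3);
var totalCosts = extract(costs, 1, 1, 1) + extract(costs, 1, 1, 2) + extract(costs, 1, 1, 3);
console.log(totalSales - totalCosts); // 360, i.e. the #2-#3 profit step
```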
The only thing missing from Dan's answer is that, if Costs also live in file SAS1, you need another TRANS to get the Cost field - TRANS(SAS1,ITEM,3,'X') - and then run the EXTRACTs on that result.