Nested JSON path using variables in Postman Flow

I have a scenario where I have some JSON (the "lldp" data shown below), and I need to find a particular key and pull all of the values within it. The key I need to pull is dynamic and is supplied in the 'thisPort' variable.
The lldp data basically looks like this. Note that the ports are not within a list. Any given instance of the lldp data may contain anywhere between 1 and 48 ports.
lldp = {
"port1": {"stuff":"things"},
"port2": {"stuff":"things"},
"port40": {"stuff":"things"}
}
I assumed I could do something like "lldp.thisPort" to access the keys and values within, however this produces unhelpful errors and doesn't work. In this case I passed it three different 'thisPort' variables from a list, so presumably it's the same problem three times, and not three different problems.
'thisPort' does correctly come across to the Evaluate function as a string that should lead to a valid JSON path. E.g., 'lldp.thisPort' does seem to translate to a valid path like 'lldp.port1', but Evaluate doesn't agree and I get an error.
Using variables (or any other 'dynamic' way of working), how can you access the keys/values within some JSON as part of a Postman Flow, when the path to the thing you're trying to pull is dynamic?

You can use $lookup(lldp, thisPort) inside the Evaluate block to get the values inside the thisPort object.
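$lookup resolves a key whose name is only known at runtime, which is exactly what dot notation can't do. For comparison, here is the same dynamic lookup as a plain-JavaScript sketch using the example data above (this is not FQL itself, just an illustration of why the dot path fails):

// Plain-JavaScript illustration of the dynamic lookup (not Flows/FQL itself)
const lldp = {
  "port1": { "stuff": "things" },
  "port2": { "stuff": "things" },
  "port40": { "stuff": "things" }
};

const thisPort = "port1"; // the dynamic key arriving as a variable

// Dot notation looks for a literal key named "thisPort" and finds nothing;
// bracket notation resolves the key from the variable's value instead.
console.log(lldp.thisPort);  // undefined
console.log(lldp[thisPort]); // { stuff: "things" }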

Related

How can I save value of dynamic variable in Postman?

Postman can generate random dummy data using pre-defined variables; for example, this one would be replaced by a random company name:
{{$randomCompanyName}}
Using a pre-defined variable multiple times returns different values per request.
The question is how to save a generated value to a variable for further use, for example in tests, with something like this (which doesn't work):
pm.variables.set("company", {{$randomCompanyName}});
Thanks.
You can use the .replaceIn() function with that {{...}} syntax in the sandbox.
pm.globals.set("company", pm.variables.replaceIn('{{$randomCompanyName}}'));
I've used a global variable to store the value since you'd want to use it again. You could also use either the environment or collectionVariables scope to do the same thing.
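As a quick (hypothetical) illustration, the stored value can then be read back wherever the sandbox scripts run, e.g. in a later request's test script:

// e.g. in the Tests tab of a later request
const company = pm.globals.get("company");
pm.test("generated company name is stored and reused", function () {
    pm.expect(company).to.be.a("string");
});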

BizTalk Mapping: Source record does not exist but need to map and pass a default value

I have a source schema in which a particular record is optional, and in the source message instance the record does not exist. I need to map this record to the destination record. The scenario is: if the source record doesn't exist, I need to map a default value of 0 to the destination nodes, and if it does exist, I need to pass the source node values through as they are (followed by a few arithmetic operations).
I have tried various combinations of functoids like Logical Existence followed by Value Mapping, Record Count, String Existence, etc. I also tried C# within a Scripting functoid, and XSLT; nothing works. It's very tough to deal with mapping non-existent records. I have several records above this one which are mapped just fine, and they do exist; I'm having trouble only with this one. No matter how many combinations of C# and XSLT code I write, it feels like the Scripting functoid will never accept a non-existent record or node link. Mind you, this record, if it exists, can repeat multiple times.
Using BizTalk 2013 R2.
If the record doesn't exist (the record is not coming at all, not even as <record/>) you can use this simple combination of functoids.
Link the record to a Logical Existence functoid: if it exists, the value is sent through the top Value Mapping; if it doesn't exist, the second condition is true and the zero is sent from the Value Mapping at the bottom.

Set Mapping variable in Expression and use it in Source Filter

I have two tables in different databases. Table A contains the data; the other table, B, contains information for the incremental load of the data from the first table. I want to read from table B the date of the last successful load of table A and store it in a mapping variable $$LOAD_DATE. To achieve this, I read a date from table B and use the SETVARIABLE() function in an Expression transformation to set the $$LOAD_DATE variable. The port in which I do this is marked as output and writes into a dummy flat file. I only read one row from this source!
Then I use this $$LOAD_DATE variable in the Source Filter of the Source Qualifier of table A to load only new records which are younger than the date stored in the $$LOAD_DATE variable.
My problem is that I am not able to set the $$LOAD_DATE variable correctly. It is always the date 1753-1-1-00.00.00, which is the default value for mapping variables of the type date/time.
How do I solve this? How can I store a date in that variable and use it later in a Source Qualifier's source filter? Is it even possible?
EDIT: Table A has too many records to read them all and filter them later. This would be too expensive, so they have to be filtered at the source filter level.
Yes, it's possible.
In the first mapping you have to initialize the variable.
In the first session's configuration you have to define the Post-session on success variable assignment.
The second mapping (with your table A) will get the variable once its session is configured with a Pre-session variable assignment.
It will work.
It is not possible to set a mapping variable and use its value somewhere else in the same run, because the variable is actually set when the session completes.
If you really want to implement it using mapping variables you have to create two mappings: one for setting the mapping variable and another for the actual incremental load. You can pass a mapping variable value from one session to another in a workflow using a workflow variable. https://stackoverflow.com/a/26849639/2626813
Other solutions could be to use a lookup on B and a filter after that.
You can also write a script to query table B and update the parameter file with the latest $$LOAD_DATE value prior to executing the mapping.
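For example, a rough Node.js sketch of that approach (the folder/workflow/session names, file name, and date are placeholders, and the parameter file header format should be checked against your environment):

// Hypothetical sketch: regenerate the parameter file from table B before the run.
const fs = require("fs");

// Placeholder for a real query against table B via your DB driver of choice.
function getLastLoadDate() {
  return "2024-01-15 00:00:00";
}

const paramFile = [
  "[MyFolder.WF:wf_incremental_load.ST:s_m_load_table_a]", // assumed folder/workflow/session names
  `$$LOAD_DATE=${getLastLoadDate()}`,
  ""
].join("\n");

fs.writeFileSync("incremental_load.par", paramFile);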
Since there are two different DBs, use two sessions: get the value in the first one and pass the parameter to the second one.

Regex for Google analytics advanced segment based on custom variable value

I'm trying to create an advanced segment (include) using a regex (or any other filter mechanism; "contains" with just the substring isn't working either) that matches the custom variable value.
It ought to be straightforward, but it's driving me insane. I currently have this regex:
.*CLAS_LIBRARIES.*
which rightly matches a custom variable value of:
HOME/CLASMAIN/CLAS_LIBRARIES/
but when I apply the segment and then browse the custom variable values in the report, it contains values like:
HOME/
/museumcollections/
HOME/MAPS/
Tried wrapping it like this:
.*(CLAS_LIBRARIES).*
(.*)(CLAS_LIBRARIES)(.*)
to no avail.
What the hell is going on, and am I an idiot?
What's the scope of your custom variable? Can multiple sessions have different values?
Advanced segments return any data that matches your query (e.g., if you create a segment for a specific page, GA will return data for all user activity whose navigation included that specific page).
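For what it's worth, the pattern itself is fine. A quick JavaScript check (illustrative only; GA's regex engine is not JavaScript's, but the behaviour here is the same) shows it matches the intended value, so the unexpected rows come from the segment's session-level scope rather than from the regex:

const pattern = /.*CLAS_LIBRARIES.*/;
console.log(pattern.test("HOME/CLASMAIN/CLAS_LIBRARIES/")); // true
console.log(pattern.test("HOME/MAPS/"));                    // false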

What is a good design pattern to implement a dynamic data importer tool?

We are planning to build a dynamic data import tool, basically taking information in a specified format (Access, Excel, CSV) on one end and uploading it into a web service.
The situation is that we do not know the export field names, so the application will need to be able to read the WSDL definition and map to the valid entries on the other end.
In the import section we can define most of the fields, but usually there are a few that are custom, which I see no problem with.
I just wonder if there is a design pattern that will fit this type of application or help with the development of it.
I am not sure where the complexity is in your application, so I will just give an example of how I have used patterns for importing data of different formats. I created a factory which takes the file format as an argument and returns a parser for that format. Then I use the Builder pattern: the parser is provided with a builder which it calls as it parses the file, to construct the desired data objects in the application.
// In this example the file format describes a house (a complex data object)
AbstractReader reader = factory.createReader("name of file format");
AbstractBuilder builder = new HouseBuilder(list_of_houses);
reader.import(text_stream, builder);
// now the list_of_houses should contain an extra house
// as defined in the text_stream
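Here is a minimal, self-contained JavaScript sketch of the same factory + builder idea (the format, class, and field names are purely illustrative, not from the original code):

// Builder: collects the data objects the parser produces.
class HouseBuilder {
  constructor(listOfHouses) { this.houses = listOfHouses; }
  addHouse(fields) { this.houses.push(fields); }
}

// One concrete reader: parses a trivial "key=value;key=value" line format.
class CsvReader {
  import(text, builder) {
    for (const line of text.split("\n").filter(Boolean)) {
      const fields = Object.fromEntries(
        line.split(";").map(pair => pair.split("=").map(s => s.trim()))
      );
      builder.addHouse(fields);
    }
  }
}

// Factory: maps a format name to a concrete reader.
function createReader(formatName) {
  const readers = { csv: () => new CsvReader() };
  if (!readers[formatName]) throw new Error(`Unknown format: ${formatName}`);
  return readers[formatName]();
}

// Usage: the caller only knows the format name and the builder.
const listOfHouses = [];
const reader = createReader("csv");
reader.import("rooms=3;color=red\nrooms=5;color=blue", new HouseBuilder(listOfHouses));
console.log(listOfHouses); // [{ rooms: '3', color: 'red' }, { rooms: '5', color: 'blue' }]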
I would say the Adapter pattern, as you are "adapting" the data from a file to an object, like the SqlDataAdapter does from a SQL table to a DataTable.
Have a different adapter for each file type/format, for example SqlDataAdapter and MySqlDataAdapter: they handle the same commands but different data sources, to achieve the same output DataTable.
Adapter pattern
Probably Bridge could fit, since you have to deal with different file formats.
And Façade to simplify the usage. Handle my reply with care, I'm just learning design patterns :)
You will probably also need Abstract Factory and Command patterns.
If the data doesn't match the input format you will probably need to transform it somehow.
That's where the Command pattern comes in. Because the formats are dynamic, you will need to base the commands you generate off of the input. That's where Abstract Factory is useful.
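A rough JavaScript sketch of that combination (all class and field names are illustrative, not a definitive implementation): a factory keyed by the input format produces Command objects, which are then applied to each row.

// Commands: each one knows how to transform a single row.
class UppercaseNamesCommand {
  execute(row) { return { ...row, name: String(row.name).toUpperCase() }; }
}
class AddSourceTagCommand {
  constructor(tag) { this.tag = tag; }
  execute(row) { return { ...row, source: this.tag }; }
}

// Factory: decides which commands apply, based on the (dynamic) input format.
function createCommands(formatName) {
  switch (formatName) {
    case "csv":   return [new AddSourceTagCommand("csv")];
    case "excel": return [new UppercaseNamesCommand(), new AddSourceTagCommand("excel")];
    default:      return [];
  }
}

function transform(formatName, rows) {
  const commands = createCommands(formatName);
  // Apply every command to every row, in order.
  return rows.map(row => commands.reduce((acc, cmd) => cmd.execute(acc), row));
}

console.log(transform("excel", [{ name: "Widget" }]));
// [{ name: 'WIDGET', source: 'excel' }]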
Our situation is that we need to import parametric shapes from competitors' files. The layout of their screens and data fields are similar, but different enough that a conversion process is needed. In addition we have over a half dozen competitors, and maintenance would be a nightmare if done through code only. Since most of them use tables to store the parameters for their shapes, we wrote a general-purpose collection of objects to convert X into Y.
In my CAD/CAM application the file import is a Command. However, the conversion magic is done by a RuleSet via the following steps:
1. Import the data into a table. The field names are pulled in as well, depending on the format.
2. Pass the table to a RuleSet (I will explain the structure of the RuleSet in a minute).
3. The RuleSet transforms the data into a new set of objects (or tables), which we retrieve.
4. Pass the result to the rest of the software.
A RuleSet is comprised of a set of Rules. A Rule can contain other Rules. A Rule has a CONDITION that it tests and a MAP TABLE.
The MAP TABLE maps an incoming field to a field (or property) in the result. There can be one mapping or a multitude. A mapping doesn't have to just poke the input value into an output field; we have a syntax for calculation and string concatenation as well.
This syntax is also used in the CONDITION and can incorporate multiple fields, like ([INFIELD1] & "-" & [INFIELD2])="A-B" or [DIM1] + [DIM2] > 10. Anything between the brackets is substituted with the value of the incoming field.
Rules can contain other Rules. The way this works is that for a sub-Rule's mapping to apply, both its condition and those of its parent (or parents) have to be true. If a sub-Rule has a mapping that conflicts with a parent's mapping, the sub-Rule's mapping applies.
If two Rules on the same level have conditions that are true and have conflicting mappings, then the rule with the higher index (or lower on the list if you are looking at a tree view) will have its mapping applied.
Nested Rules are equivalent to ANDs, while Rules on the same level are equivalent to ORs.
The result is a mapping table that is applied to the incoming data to transform it to the needed output.
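Here is a rough, illustrative JavaScript sketch of those rule semantics (the field names, conditions, and flat data shape are assumptions, not the actual implementation):

// A sub-rule's mappings apply only when its condition AND its ancestors' conditions
// are true; sibling rules act as ORs; sub-rule and later-sibling mappings override
// earlier ones in case of conflict.
function applyRules(rules, row) {
  let mappings = {};
  for (const rule of rules) {
    if (!rule.condition(row)) continue;          // this branch is skipped entirely
    mappings = { ...mappings, ...rule.map };     // later siblings win conflicts
    if (rule.rules) {
      // nested rules only run if the parent matched, and they override the parent
      mappings = { ...mappings, ...applyRules(rule.rules, row) };
    }
  }
  return mappings;
}

// Hypothetical rule set: map a competitor's fields to our own.
const ruleSet = [
  {
    condition: row => row.SHAPE === "FLANGE",
    map: { Shape: "Flange", Width: row => row.DIM1 + row.DIM2 }, // calculated mapping
    rules: [
      { condition: row => row.VENDOR === "ACME", map: { Shape: "AcmeFlange" } }
    ]
  }
];

// Resolve any calculated (function-valued) mappings against the incoming row.
function transformRow(row) {
  const mappings = applyRules(ruleSet, row);
  return Object.fromEntries(
    Object.entries(mappings).map(([k, v]) => [k, typeof v === "function" ? v(row) : v])
  );
}

console.log(transformRow({ SHAPE: "FLANGE", VENDOR: "ACME", DIM1: 4, DIM2: 8 }));
// { Shape: 'AcmeFlange', Width: 12 }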
It is amenable to being displayed in a UI, namely a tree view showing the Rule hierarchy and a side panel showing the mapping table and conditions of the selected Rule. Just as importantly, you can create wizards that automate common rule structures.