BizTalk Mapping: suppress empty attribute in the destination - xslt

I have a case in BizTalk where I want to map an "Attribute Node" whose type is Date and which can't be null or empty. To avoid problems, I need to suppress the attribute in the destination of the transformation map when the source is null.
I followed this link to try the same approach we use with Nodes, but it doesn't answer my problem.
https://social.technet.microsoft.com/Forums/security/en-US/4ab184e0-c978-429c-a80d-e869732de8a2/how-to-suppress-empty-nodes-in-biztalk-map?forum=biztalkgeneral
Anyone have an idea?
Thank you,
Roberto
Edited
The link didn't mention that the Logical String Functoid returns True or Nil depending on your source.
I was obliged to check whether the value is empty as well as null.

From the source, connect a Length Functoid, then a Greater Than Functoid with a constant of 1 as the second input.
Connect the Greater Than Functoid to the target.
However you end up composing it, a Functoid that returns a boolean and is connected to a destination node is treated as a create yes/no decision for that node, regardless of any value also mapped to it.
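If you would rather handle it in custom XSLT (for instance in a Scripting Functoid set to Inline XSLT, or in an external XSLT file for the map), a minimal sketch is to wrap the attribute in an xsl:if so it is only created when the source actually holds a value. The SourceDateField and TargetDateAttribute names below are placeholders for your own schema:

<xsl:if test="string-length(SourceDateField) > 0">
  <xsl:attribute name="TargetDateAttribute">
    <xsl:value-of select="SourceDateField" />
  </xsl:attribute>
</xsl:if>

When the test fails, nothing is emitted, so the destination attribute simply never exists for an empty or missing source value and the Date type is never fed an empty string.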

Corda allowedToSeeCriteria filter always returns empty with consumed states

I have an issue using the AllowedToSeeStateMapping. To me it looks like it only works with unconsumed states but not with consumed ones (if I query the vault for consumed states using the allowedToSeeCriteria, it always returns an empty list). Does anyone have the same problem, or is it intended to work like this?
The default values for status and relevancyStatus are UNCONSUMED and ALL respectively. (https://github.com/corda/corda/blob/release/os/4.8/core/src/main/kotlin/net/corda/core/node/services/vault/QueryCriteria.kt)
When you build a vault query expression the sub-criteria don't copy those values from the root, so you have to include them explicitly for every sub-criteria expression.
So yes, I'm guessing allowedToSeeCriteria is intended to work that way, but you could reverse engineer the logic and set the status to CONSUMED or ALL on any sub-criteria in that expression.

How do I arrange Single cardinality for Vertex properties imported via CSV into AWS Neptune?

The Neptune documentation says it supports only "Set" property cardinality on property data imported via CSV, which means there is no way for a newly arrived property value to overwrite the old value of the same property on the same vertex.
For example, if the first CSV imports
~id,~label,age
Marko,person,29
then Marko has a birthday & a second CSV imports
~id,~label,age
Marko,person,30
the 'age' property of the 'Marko' vertex will contain both age values, which doesn't seem useful.
AWS says this (collapsing Set properties to Single cardinality, keeping only the last arrived value) needs to be done with post-processing, via Gremlin traversals.
Does this mean that there should be a traversal that continuously scans vertices with multiple (Set) property values and sets each property again with Single cardinality, using the last value? If so, what is the optimal Gremlin query to do that?
In pseudo-Gremlin I'd imagine something like:
g.V().property(single, properties(*), _.tail())
Is there a guarantee at all that Set-cardinality properties are always listed in order of arrival?
Or am I completely on the wrong track here?
Any help would be appreciated.
Update:
So the best thing I was able to come up with so far is still far from a perfect solution, but it might be useful for someone in my shoes.
Plan A: if we happen to know the property names and the order of arrival does not matter at all (we just want single cardinality on these props), the traversal for all vertices could be something like:
g.V().has(${propname}).where(property(single, ${propname}, properties(${propname}).value().order().tail() ) )
Plan B is to collect new property values under temporary property names on the same vertex (e.g. starting with _), then traverse the vertices that have such temporary property names and set the original properties to their tailed values with single cardinality:
g.V().has(${temp_propname}).where(property(single, ${propname}, properties(${temp_propname}).value().order().tail() ) ).properties(${temp_propname}).drop()
Plan C, which would be the coolest but unfortunately does not work, is to keep collecting property values on a dedicated vertex, with epoch timestamps as property names and property values as their values:
g.V(${vertexid}).out('has_propnames').properties()
==>vp[1542827843->value1]
==>vp[1542827798->value2]
==>vp[1542887080->latestvalue]
and then sort the property names (keys), take the last one, and use its value to keep the main vertex property up to date with the latest value:
g.V().has(${propname}).where(out(${has_these_properties}).count().is(gt(0))).where(property(single, ${propname}, out(${has_these_properties}).properties().value( out(${has_these_properties}).properties().keys().order().tail() ) ) )
It looks like the parameter of the value() step must be a constant; it can't take the outcome of another traversal as a parameter, so I could not get this working. Perhaps someone with more Gremlin experience knows a workaround for this.
AWS has recently introduced 'single' cardinality support in the CSV bulk loader:
https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-tutorial-format-gremlin.html
So no more Gremlin-level property value arrangement should be needed.
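For illustration, the earlier example CSV would then only need the cardinality flagged in the column header; the exact header syntax is described on the linked format page, so double-check it there:

~id,~label,age:Int(single)
Marko,person,30

With the column declared as single, a later load of a new age for the same vertex id should overwrite the stored value instead of accumulating a set.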
It would probably be more performant to read in the file from which you are bulk loading and set that property using the vertex id, rather than scanning for a vertex with multiple values for that property.
So your Gremlin update query would be as follows:
g.V(${id})
.property(single,${key},${value})
As for whether Set cardinality guarantees insertion order, I do not know. :(

BizTalk Mapping: Source record does not exist but I need to map and pass a default value

I have a source schema in which a particular record is optional, and in the source message instance the record does not exist. I need to map this record to a destination record. The scenario goes like this: if the source record doesn't exist, I need to map a default value of 0 to the destination nodes, and if it does exist, I need to pass the source node values as they are (followed by a few arithmetic operations).
I have tried various combinations of Functoids like Logical Existence followed by Value Mapping, Record Count, String Existence, etc. I have also tried C# within a Scripting Functoid and XSLT; nothing works. It's very tough to deal with mapping non-existing records. I have several records above this one which are mapped just fine, and they do exist; I'm having trouble only with this one. No matter how many combinations of C# and XSLT code I write, it feels like the Scripting Functoid will never accept a non-existent record or node link. Mind you, this record, if it exists, can repeat multiple times.
Using BizTalk 2013 R2.
If the record doesn't exist (the record is not coming at all, not even as <record/>), you can use this simple combination of Functoids.
Link the record to a Logical Existence Functoid; if it exists, the value will be sent by the top Value Mapping. If it doesn't exist, the second condition will be true and the zero will be sent from the Value Mapping at the bottom.
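If the Functoid combination still refuses to cooperate, the same decision can be written as Inline XSLT in a Scripting Functoid connected to the destination record. This is only a sketch with placeholder names (SourceRecord, Amount, DestinationRecord); it copies every occurrence when the record repeats and emits a single default record with 0 when it is missing:

<xsl:choose>
  <xsl:when test="count(SourceRecord) > 0">
    <xsl:for-each select="SourceRecord">
      <DestinationRecord>
        <Amount><xsl:value-of select="Amount/text()" /></Amount>
      </DestinationRecord>
    </xsl:for-each>
  </xsl:when>
  <xsl:otherwise>
    <DestinationRecord>
      <Amount>0</Amount>
    </DestinationRecord>
  </xsl:otherwise>
</xsl:choose>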

Using a Parameter in Expression Transformation

I have a workflow in which I've set up an Expression transformation to select $$Param for a particular field, and then within the target properties I've set a delete value. I've tried substituting a hardcoded value for $$Param and it works fine; however, for some reason when I put in $$Param, it doesn't actually do the delete. Is there a reason? Am I doing something wrong?
Just for clarification, the workflow executes successfully - no error is thrown, but it's not doing what it's supposed to.
Thanks in advance,
$$Param needs to be passed through a parameter file, and you have the option to set an initial value when you declare the parameter in the mapping under Mappings > Parameters and Variables.
Have you looked at the session log to see what override value of $$Param is being used? If it's a SQL delete, try to find the actual query being executed against the database in the session log.
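For reference, a bare-bones parameter file entry could look something like the one below; the folder, workflow and session names are placeholders and have to match your repository objects, and the file itself has to be referenced in the session or workflow properties:

[MyFolder.WF:wf_my_workflow.ST:s_my_session]
$$Param=VALUE_TO_DELETE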

BizTalk xpath query in orchestration

I am using a Decide shape in an orchestration, and I receive 2 XML files.
I have to filter the files using XPath because, depending on the first node, I have to process them with different maps. I use an XPath statement to find out whether the first node equals a specific value; if yes, it is processed by the first map, if not, it is sent to the second map.
How should I do that? I don't do this often, and I'm trying to find out how my statement should look:
xpath(ACKSchema(name(/*))== CstmrPmtStsRpt;
How do I check if the XML file matches a specific condition?
thanks
You can use the xpath query function to probe the value in the message, or set the value. The syntax for receiving a string value is
variable = xpath(BiztalkMessage,"string(xpath-query)");
To set a value in the message
xpath(BiztalkMessage,"xpath-query") = value
An easy way to locate the xpath you want to use is to open the schema in the Visual Studio BizTalk project and select the node that will hold your value. Then look at the Properties window and use the 'Instance XPath' value (see this post for more details).
The xpath query can be a bit verbose, and depending on your situation you could shorten it (with a small loss of fidelity). If you are comparing a string value, you'll want to use the string function:
xpath(msgTestMessage,"string(//MyNode)") == "TestValue"
Without the xpath string function, you'll be receiving the equivalent of a nodeset, rather than the value.
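Applied to the question, if the node you want to test is the root of the incoming message, the Decide shape expression could look something like this (msgAck is just a placeholder for your message variable):

xpath(msgAck, "string(local-name(/*))") == "CstmrPmtStsRpt"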
You may not need to use the xpath and decide shape at all if your two xml files have different root nodes.
Using direct-bound ports, BizTalk can route your messages to the correct "subscriber" automatically. You drop the two input messages into the MessageBox database, and if you create one subscriber for each message type, BizTalk will send the messages to the correct subscriber for you.
BizTalk uses the target namespace and the root node name to decide which subscriber gets which message.