Problem statement: I am trying to use data variables in the pre-request section, and I am unable to retrieve values from the CSV data file. I tried the two options below, but when I log the value nothing is displayed:
- data.Variable
- pm.iterationData.get("variable")
Please verify whether the setup below is right.
Data File:
Account_Number,Account_Name,Customer_ID,Currency,Account_Type,Account_Sub, Category
100000002,SWEEPY GROUPS 001,1507400001508,THB,DA,SAV,I
10000019,SWEEPY GROUPS 019,1507400001508,USD,DA,SAV,E
A9871100000020,SWEEPY GROUPS 020,1507400001508,USD,DA,DDA,E
PRE-REQ:
console.log("Customer ID"+JSON.stringify(data.Account_Name))
In the console, the result is:
Customer ID undefined
Please suggest an alternative, or how I can get it running.
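If the request is sent on its own via Send, iteration data is empty, which would explain the undefined; data variables are only populated during a collection run. Note also the stray space in the CSV header before Category, which would name that column " Category" (though it doesn't affect Account_Name). A minimal pre-request sketch, assuming the collection is run through the Collection Runner or Newman with the CSV file selected:
// Only populated during a collection run with the CSV file attached.
console.log("Account Name: " + pm.iterationData.get("Account_Name"));
console.log("Customer ID: " + pm.iterationData.get("Customer_ID"));
// The older data object should show the same values:
console.log("Account Name (data): " + data.Account_Name);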
Please excuse my lack of knowledge in explaining my problem, as I have only just started learning Power BI.
I am attempting to return data by using a dynamic variable within my source URL.
I have successfully returned the data I needed from multiple queries.
However, I am trying to run a final query in which a job ID needs to be specified.
Source = Json.Document(Web.Contents("https://api.****.com/jobs/{ID}/invoices", [Headers=[Authorization="Bearer "&GetToken()]]))
With {ID} being the variable.
I have successfully returned values by hard-coding the variable.
However, I would like to make it dynamic, so that it returns the values for all the job IDs within the "jobs" table.
I don't know if what I'm asking is possible, or if my explanation is good enough, but any help would be greatly appreciated!
What you are looking for is a custom function.
Make a function out of your query above by adding (ID) => in the first line and splicing ID into your URL string.
(ID) =>
let
    Source = Json.Document(Web.Contents("https://api.****.com/jobs/{" & ID & "}/invoices", [Headers=[Authorization="Bearer " & GetToken()]]))
in
    Source
Of course you can add all your other transformation steps too.
Now take your JobIDs table and add a column by invoking a custom function: select the function above and take the ID parameter from your ID column.
For every row you'll get a separate table, and all that's left is to expand these tables into your query.
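In M, the generated step looks roughly like this (the names Jobs, GetInvoices, and ID are assumptions for your jobs query, the function above, and the job ID column):
// Hypothetical names: Jobs (the jobs table), GetInvoices (the function above), ID (the job ID column).
AddedInvoices = Table.AddColumn(Jobs, "Invoices", each GetInvoices([ID]))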
This will solve your problem.
We have Logstash receiving syslog files and then storing these in an Elasticsearch index.
We are trying to query this index with Kibana to find some particular information but we cannot get the regex queries to work.
The log data we are trying to search within is below.
Field name = message
Field type = keyword
<14>1 2018-05-02T13:53:48.079000Z snrvro04 vco - - [liagent#6876
anctoken="" component="WorkflowManagementServiceImpl" context=""
filepath="/var/log/vco/app-server/integration-server.log"
instanceid="6a6dbf1d-2f72-45db-ab57-04b84aa97b90"
log_message="Workflow 'Get ID of
Workflow/8f59ca66-7472-4efa-ac5f-dfc34059c5f1' updated (with
content)." priority="INFO" product="vro" token="" user="" wfid=""
wfname="" wfstack=""] 2018-05-02 13:53:48.079+0000 vco:
[component="WorkflowManagementServiceImpl" priority="INFO"
thread="https-jsse-nio-0.0.0.0-8281-exec-7" user="" context=""
token="" wfid="" wfname="" anctoken="" wfstack=""
instanceid="6a6dbf1d-2f72-45db-ab57-04b84aa97b90"] Workflow 'Get ID of
Workflow/8f59ca66-7472-4efa-ac5f-dfc34059c5f1' updated (with content).
The information we are trying to search for is:
component="WorkflowManagementServiceImpl"
AND more importantly:
Workflow 'Get ID of Workflow/8f59ca66-7472-4efa-ac5f-dfc34059c5f1'
The first criterion should always be the same, but the workflow name and ID will change. The only part that remains the same within this bit of text is Workflow ' and the final '.
We are currently trying our queries against the Workflow name and ID to see if we can match on that, but our queries return no results.
The regex we currently have is as follows, and we have tried numerous alternatives.
/(?<=Workflow '.*\/)(.*')/
If we run the search *Workflow* (with wildcards, without the spaces), it returns everything containing the word Workflow, as expected.
If we run the search Workflow we get no results.
If anyone can provide pointers towards where we are going wrong, or getting confused, that would be great!
Thanks
The message field was mapped as keyword, so it was stored as a single unanalysed term and only exact or wildcard matches could hit it. We resolved this by using Grok filters in Logstash to organise/clean the data before it hits the Elasticsearch indexes; then we were able to search successfully within Kibana.
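For reference, a rough sketch of the kind of Grok filter involved; the patterns and the field names (component, workflow_ref) are illustrative rather than our exact production config:
filter {
  grok {
    # Pull the component and the quoted workflow reference out of the raw line.
    match => {
      "message" => [
        "component=\"%{DATA:component}\"",
        "Workflow '%{DATA:workflow_ref}' updated"
      ]
    }
    break_on_match => false
  }
}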
I have log lines of the following form in my Google Cloud Console:
Updated blacklist info about 123 minions. max_blacklist_per_minion=20, median_blacklist_per_minion=8, blacklist_free_minions=31
And I'm trying to set up some log-based metrics to get a longer-term overview of the values (i.e. how are they changing? Is it lower or higher than yesterday? etc.).
However, I didn't find any examples for this scenario in the documentation, and what I could think of doesn't seem to work. Specifically, I'm trying to understand what I need to select in "Field name" to get access to the log line (so that I can write a regular expression against it).
I tried textPayload, but that seems to be empty for this log entry. Looking at the actual log entry there should also be a protoPayload.line[0], but that doesn't seem to work either.
In the "Metric Editor" built into the logs viewer UI you can use "protoPayload.line.logMessage" as the field name. For some reason the UI doesn't want to suggest 'line' (seems like a bug; same behavior in the filter box).
The log based metric won't distinguish based on the index of the app log line, so something like 'line[0]' won't work. For a distribution all values are extracted. A count metric would count the log entry (ie 1 regardless the number of 'line' matches).
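As a sketch, a distribution metric for one of the values could be set up like this in the Metric Editor (the metric name is a placeholder; the single capture group is what gets extracted):
Name:       median_blacklist_per_minion
Type:       Distribution
Field name: protoPayload.line.logMessage
Extraction regular expression: median_blacklist_per_minion=(\d+)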
I am trying to get a simple PigActivity to work in Data Pipeline.
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-pigactivity.html#pigactivity
The Input and Output fields are required for this activity. I have them both set to use S3DataNode, and both of these DataNodes have a directoryPath which points to my S3 input and output. I originally tried to use filePath but got the following error:
PigActivity requires 'directoryPath' in 'Output' object.
I am using a custom pig script, also located in S3.
My question is how do I reference these input and output paths in my script?
The example given in the reference uses the stage field (which can be enabled/disabled). My understanding is that this is used to convert the data into tables. I don't want to do this, as it also requires that you specify a dataFormat field.
Determines whether staging is enabled and allows your Pig script to have access to the staged-data tables, such as ${INPUT1} and ${OUTPUT1}.
I have disabled staging and I am trying to access the data in my script as follows:
input = LOAD '$Input';
But I get the following error:
IOException. org.apache.pig.tools.parameters.ParameterSubstitutionException: Undefined parameter : Input
I have tried using:
input = LOAD '${Input}';
But I get an error for this too.
There is the optional scriptVariable field. Do I have to use some sort of mapping here?
Just using
LOAD 's3://your-bucket/your-input-path'
should work.
Normally this is done for you by staging (table creation), so you do not have to access the URI directly from the script and only specify it in the S3DataNode.
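For example, a minimal non-staged script (bucket and file names are placeholders):
-- Staging disabled: load straight from the S3 URI instead of a staged table.
part = LOAD 's3://my-bucket/input/part.csv' USING PigStorage(',');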
Make sure you have set the "stage" property of "pigActivity" to be true.
Once I did that the script below started working for me:
part = LOAD ${input1} USING PigStorage(',') AS (p_partkey,p_name,p_mfgr,p_category,p_brand1,p_color,p_type,p_size,p_container);
grpd = GROUP part BY p_color;
${output1} = FOREACH grpd GENERATE group, COUNT(part);
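On the scriptVariable question: if you keep staging disabled, that field is the mapping you were asking about. Each entry is passed to the Pig script as a parameter; the sketch below assumes the name=value form and uses placeholder paths:
{
  "id": "MyPigActivity",
  "type": "PigActivity",
  "scriptUri": "s3://my-bucket/scripts/myscript.pig",
  "scriptVariable": [
    "Input=s3://my-bucket/input",
    "Output=s3://my-bucket/output"
  ],
  "stage": "false"
}
With a definition like this, LOAD '$Input' in the script should resolve.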
I am trying to pull data from a SharePoint list. The field is a calculated column that takes a yes or no answer and changes the words to Archived and Non-archived.
I can see the data being formatted correctly in the calculated column in IE, but when I try to pull the data, the variable is empty when I check it.
$site = Get-SPSite https://extranet/sites/site
$web = Get-SPWeb -Identity https://extranet/sites/site
$list = $web.GetList("https://extranet/sites/site/lists/List")
$View = $list.Views["LISTVIEW"]
$listitems = $list.GetItems($View)
foreach ($listitem in $listitems) {
I have also tried this, but I get an error about indexing into a null variable.
$mailboxdb = $listitem.Fields["mailboxdb"] -as [Microsoft.SharePoint.SPFieldCalculated];
$mailboxdb.GetFieldValueAsText($listitem["mailboxdb"]);
I also see this in the $listitems output: ows_MailboxDb='string;#Archived'
But when I check $mailboxdb it's empty.
I found this, but I don't know what it means by "stored results":
In PowerShell, although you can reference any field in the list in your script, you can only retrieve values from "static" fields - that is, you cannot use calculated fields. PowerShell will not complain - but you will not get results in your script. This is because the .NET library for SharePoint will not do the field calculation for you - that only happens inside the SharePoint UI itself.
If you need access to a "calculated" field, you actually need two fields - the calculated field (usually hidden) and a "stored result" field, which must be updated from the calculated value in the last step of the "Update" workflow. Then you can use the "stored result" field in PowerShell - and also, incidentally, in view calculations in SharePoint.
You basically have two options here. You can have PowerShell do the calculation for you, which is probably the simpler of the two options, given the basic nature of your calculation.
The second option, as mentioned at the end of your post, is to create a new field that stores the result of the calculation. In your case, you could call it Status. Then you would create a workflow, running whenever a list item is created or updated, that stores the result of the calculated field in the results field. This seems redundant if the field exists for no other reason than to make the calculated value available to a PowerShell script.
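As a sketch of the first option, recomputing the value in PowerShell; the name of the underlying yes/no column ("Archived" below) is an assumption, since the question doesn't give it:
foreach ($listitem in $listitems) {
    # "Archived" is a hypothetical name for the source yes/no column.
    $isArchived = [bool]$listitem["Archived"]
    $status = if ($isArchived) { "Archived" } else { "Non-archived" }
    Write-Output "$($listitem.Title): $status"
}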