WSO2 Data Mapper attribute manipulation

I am currently using the BETA 2 version of the WSO2 ESB tooling. It is a great improvement over the previous version I used, which was the ALPHA version. My question is: is there a way to manipulate an attribute that was mapped in the Data Mapper? For example, if I have a response field minute with a data type of integer and a value of 150, and I want to concatenate a string onto that integer, the result should be "150 min".

For this, you can do a type conversion using the 'ToString' operation and then use the 'Concat' operation to concatenate the two strings. You can find more details on the latest improvements to the Data Mapper at: https://nuwanpallewela.wordpress.com/2016/07/16/understanding-wso2-data-mapper-5-0-0/

Related

Is there a visual component for writing a number directly through PBI in a published report?

I would like to allow users to input an integer to be used in some calculations.
I know that it is possible to use What-if parameters. However, What-if parameters can only be used with value ranges between 0 and 1,000; for ranges greater than 1,000, the parameter values will be sampled.
For example, I can't write 8,529 because the number will be sampled to 8,521.
Maybe there is some hidden workaround or a custom visual component. I tested Smart Filter by OKVIZ, but it works neither in the Power BI Service nor in an embedded application.
Thanks a lot!
--- Miguel-Angel
I used Smart Filter Pro and it works.

How to convert existing Elasticsearch data from string to number

I am streaming AWS Cloudwatch logs (from a Node.js Lambda application) to an AWS Elasticsearch cluster, so that I can view metrics in Kibana.
Some of the data I was streaming was numeric but was being logged as strings. I've updated the application code to log these as numeric values; however, I can't use numeric visualizations in Kibana on those fields because the field type is now mixed, i.e. in the Kibana settings it says 13 fields are defined as several types (string, integer, etc.) across the indices that match this pattern...
Is there a straightforward way to force ES / Kibana to treat that field as always numeric? Or convert all of the older logged data from string to number?
My searches have indicated I can do this with some kind of mutation using the ES API, but I can't track down what this API call would actually look like. Disclaimer: Elasticsearch noob.
There are two approaches here:
1. Convert all the data from strings to numeric values. Essentially, you'll have to reindex the whole data set (you can't just change the field type with one click), making sure that the strings are converted / typecast to numeric values. The best way to reindex is to use Ingest Node Pipelines (a sketch follows this list).
Pros: visualizations built on this data will be fast because the data is already in numeric format.
Cons: if the data set is huge, the conversion can take a long time.
2. Keep all the data in string format as it is and use Scripted Fields in Kibana to convert the data to numeric format at runtime, i.e. whenever you visualize.
Pros: no need to set up a whole new pipeline to convert the data.
Cons: visualizations over large timeframes might be too slow / heavy for your infrastructure.
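For the first approach, here is a rough sketch of what the calls could look like from the Kibana Dev Tools console; the pipeline name (strings-to-numbers), the field name (myfield), and the index names are placeholders, not taken from your setup:

PUT _ingest/pipeline/strings-to-numbers
{
  "description": "Typecast string fields to numbers while reindexing",
  "processors": [
    { "convert": { "field": "myfield", "type": "double", "ignore_missing": true } }
  ]
}

POST _reindex
{
  "source": { "index": "old-logs" },
  "dest": { "index": "new-logs", "pipeline": "strings-to-numbers" }
}

Create the destination index (or an index template) with an explicit numeric mapping for the field before running the reindex, so the converted values are also mapped as numbers.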
Here is the scripted field I created, thanks to Abhishek's answer:
// Scripted field (Painless): return the value as a number whether the field
// was indexed as a keyword string or as a numeric type.
String key = 'myfield';
if (doc.containsKey(key + '.keyword')) {
    // The field was mapped as text with a .keyword sub-field: parse the string.
    key += '.keyword';
    if (doc[key].size() != 0 && doc[key] != null) {
        if (doc[key].value instanceof String) {
            return Double.parseDouble(doc[key].value);
        }
    }
} else if (doc.containsKey(key) && doc[key].size() != 0 && doc[key] != null) {
    // The field was already mapped as a numeric type: return it as-is.
    return doc[key].value;
}

How to use Apache Beam to process historic time series data?

I have an Apache Beam pipeline that processes multiple time series in real time. Deployed on GCP Dataflow, it combines multiple time series into windows and calculates the aggregates etc.
I now need to perform the same operations over historic data (the same (multiple) time series data) stretching all the way back to 2017. How can I achieve this using Apache Beam?
I understand that I need to use the windowing property of Apache Beam to calculate the aggregates etc., but it should accept data from 2 years back onwards.
Effectively, I need the data as it would have been available had I deployed the same pipeline 2 years ago. This is needed for testing / model training purposes.
That sounds like a perfect use case for Beam's focus on event-time processing. You can run the pipeline against any legacy data and get correct results as long as the events have timestamps. Without additional context, I think you will need an explicit step in your pipeline that assigns custom timestamps (from 2017) which you extract from the data. To do this you can probably use either:
context.outputWithTimestamp() in your DoFn;
the WithTimestamps PTransform.
You might also need to configure the allowed timestamp skew if you run into timestamp ordering issues.
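For illustration, here is a minimal, self-contained sketch of the WithTimestamps route; the element type (a KV of epoch millis and value), the sample values, and the five-minute window are assumptions, not taken from the question:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.Sum;
import org.apache.beam.sdk.transforms.Values;
import org.apache.beam.sdk.transforms.WithTimestamps;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;
import org.joda.time.Instant;

public class HistoricAggregation {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Historic records as KV of (event time in epoch millis, measured value).
    // In a real pipeline this would come from your source (files, BigQuery, ...).
    PCollection<KV<Long, Double>> raw = p.apply(Create.of(
        KV.of(1483228800000L, 42.0),    // 2017-01-01T00:00:00Z
        KV.of(1483228860000L, 17.5)));  // one minute later

    // Move each element onto its embedded event time so that windowing uses
    // the 2017 timestamps instead of processing time.
    PCollection<KV<Long, Double>> stamped = raw.apply("AssignEventTimestamps",
        WithTimestamps.of((KV<Long, Double> kv) -> new Instant(kv.getKey())));

    // The same windowed aggregation you run on live data now groups by the
    // historic event times.
    stamped
        .apply(Window.<KV<Long, Double>>into(FixedWindows.of(Duration.standardMinutes(5))))
        .apply(Values.<Double>create())
        .apply(Sum.doublesGlobally().withoutDefaults());

    p.run().waitUntilFinish();
  }
}

If you go the DoFn route instead, the equivalent call is context.outputWithTimestamp(element, timestamp) inside processElement.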
See:
outputWithTimestamp example: https://github.com/apache/beam/blob/efcb20abd98da3b88579e0ace920c1c798fc959e/sdks/java/core/src/test/java/org/apache/beam/sdk/transforms/windowing/WindowingTest.java#L248
documentation for WithTimestamps: https://beam.apache.org/releases/javadoc/2.13.0/org/apache/beam/sdk/transforms/WithTimestamps.html#of-org.apache.beam.sdk.transforms.SerializableFunction-
similar question: Assigning to GenericRecord the timestamp from inner object
another question that may have helpful details: reading files and folders in order with apache beam

How to save a model in TensorFlow using C++

How do I save a model in TensorFlow using C++? I have searched on Google and Baidu but have not found any solution. I then read the TensorFlow API documentation, but there is very little introduction to the C++ API.
Model saving is implemented in Python only. There is currently no way to save a model using the C++ APIs; they allow you to load and use models, not to train or save them.
Assuming you have a basic understanding of the TensorFlow C++ API and know how to construct a graph using it, you can make use of these two facilities:
tensorflow::WriteTextProto(): you can get a tensorflow::GraphDef (which represents all the operations you defined, e.g. Add, Multiply, Mean, etc.) from tensorflow::Scope::ToGraphDef(), and save the tensorflow::GraphDef to a text protobuf file.
tensorflow::checkpoint::TensorSliceWriter: saves the current state of the parameter matrices to an external file (checkpoint); it's a little complicated, but it works well for me.
First you'll have to fetch the trained parameters by calling tensorflow::Session::Run, which will return a list of parameter matrices in output_tensor (see the sample below):
// output_tensor will receive one tensorflow::Tensor per fetched variable;
// session is your existing tensorflow::Session*.
std::vector<tensorflow::Tensor> output_tensor;
session->Run({}, {"name_of_param_mtx_1", "name_of_param_mtx_2"}, {}, &output_tensor);
where name_of_param_mtx_1 and name_of_param_mtx_2 above should be the names of your parameter matrices created with tensorflow::ops::Variable, e.g.
auto name_of_param_mtx_1 = tensorflow::ops::Variable(root.WithOpName("name_of_param_mtx_1"), {7, 17}, tensorflow::DT_FLOAT);
Then you need to prepare the following for tensorflow::checkpoint::TensorSliceWriter (a sketch putting the pieces together follows this list):
the base address of the raw parameter data, obtained by calling tensorflow::Tensor::tensor_data().data();
the shape of each tensorflow::Tensor, obtained by calling tensorflow::Tensor::dim_size(NUM_DIMENSION). For example, for a 7x17 2D parameter matrix, NUM_DIMENSION can be 0 or 1, where tensorflow::Tensor::dim_size(0) is 7 and tensorflow::Tensor::dim_size(1) is 17;
the name of this checkpoint entry, which must be unique among the other entries in the same file;
a tensorflow::TensorSlice, created by calling tensorflow::TensorSlice::ParseOrDie("-:-"). It seems that the string argument of tensorflow::TensorSlice::ParseOrDie is analyzed internally, e.g. "-:-" means taking all items of a matrix. If you only want part of a trained parameter matrix, e.g. only the 2nd column of all rows, then the string argument would likely be "-:2"; I haven't figured out such advanced usage of tensorflow::TensorSlice::ParseOrDie.
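To tie these pieces together, here is a rough sketch (not from the original answer): it assumes output_tensor[0] from the Session::Run call above holds the trained 7x17 DT_FLOAT matrix, and the checkpoint path is just a placeholder:

#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/framework/tensor_shape.h"
#include "tensorflow/core/framework/tensor_slice.h"
#include "tensorflow/core/util/tensor_slice_writer.h"

// Write one fetched parameter matrix into a checkpoint file.
tensorflow::checkpoint::TensorSliceWriter writer(
    "/tmp/my_model.ckpt",
    tensorflow::checkpoint::CreateTableTensorSliceBuilder);

const tensorflow::Tensor& param = output_tensor[0];

// Shape taken from the tensor itself (dim_size(0) == 7, dim_size(1) == 17 here).
tensorflow::TensorShape shape({param.dim_size(0), param.dim_size(1)});

// "-:-" means the slice covers the whole matrix.
tensorflow::TensorSlice slice = tensorflow::TensorSlice::ParseOrDie("-:-");

// tensor_data() exposes the raw buffer; reinterpret it as float because the
// variable was declared with tensorflow::DT_FLOAT.
const float* data = reinterpret_cast<const float*>(param.tensor_data().data());

tensorflow::Status s = writer.Add("name_of_param_mtx_1", shape, slice, data);
if (s.ok()) {
    s = writer.Finish();
}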
Hope that helps.

Google Charts data encoding

I have recently started looking into the Google Charts API for possible use within the product I'm working on. When constructing the URL for a given chart, the data points can be specified in three different formats: unencoded, simple encoding, and extended encoding (http://code.google.com/apis/chart/formats.html). However, there seems to be no way around the fact that the highest value that can be specified for a data point is with extended encoding, and in that case it is 4095 (encoded as "..").
Am I missing something here or is this limit for real?
When using the Google Chart API, you will usually need to scale your data yourself so that it fits within the 0-4095 range required by the API.
For example, if you have data values from 0 to 1,000,000 then you could divide all your data by 245 so that it fits within the available range (1000000 / 245 = 4081).
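As a small illustration of that scaling step (Java is used here only for illustration; the method and names are made up):

// Linearly map raw values in [0, maxValue] onto the 0-4095 range that
// extended encoding can represent.
static int[] scaleForExtendedEncoding(double[] values, double maxValue) {
    int[] scaled = new int[values.length];
    for (int i = 0; i < values.length; i++) {
        scaled[i] = (int) Math.round(values[i] / maxValue * 4095);
    }
    return scaled;
}

With maxValue = 1,000,000, a raw value of 1,000,000 maps to 4095 and 500,000 maps to roughly 2048.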
Regarding data scaling, this may also help you:
http://code.google.com/apis/chart/formats.html#data_scaling
Note the chds parameter option.
You may also wish to consider leveraging a wrapper API that abstracts away some of these ugly details. They are listed here:
http://groups.google.com/group/google-chart-api/web/useful-links-to-api-libraries
I wrote charts4j, which has functionality to help you deal with data scaling.