Passing data frame to MS R Server model operationalisation - Postman

I am running ML Server and have a service deployed that expects one of its inputs to be a data.frame.
When I connect to that API endpoint from R using mrsdeploy, I am able to pass a data.frame. I would like to do the same in Postman using JSON.
How can I format my JSON for, say, an input (a data.frame) of characteristics about someone?
I would assume it's something like
{
...
"bio": { "age" : 23, "height" : 12, "eyeC" : "red" }
}
I have tried a variety of combinations, all of which come back with an error about converting to a data.frame in R.

You have to pass the data frame in column format, not row format. That is, if your data looks like this:
foo bar
1 a
2 b
3 c
Then the API expects the input to be
{
"foo": [1, 2, 3],
"bar": ["a", "b", "c"]
}

When deploying a service, a Swagger definition should be created for it. Can't you import that definition into Postman and go from there?
Read here for more information.

Related

How to enforce double quotes on all template values in input transformer

I have a JSON input that I would like to transform to another JSON output. I defined my list of input JSONPaths, and I am trying to create a simple JSON output in the template like so:
{
"fax": \"<userFax>\"
}
This was one of the formats given in the example from AWS themselves:
{
"instance": \"<instance>\",
"state": [9, \"<state>\", true],
"Transformed": "Yes"
}
However, when I try to update the changes, I get the following error:
Invalid InputTemplate for target ... : [Source: (String)"{ "fax": \"null\" }"; line: 2, column: 13].
Basically, I'd like all incoming values in the input to be converted to strings in the output via the template. This is to prevent values like zip codes from being converted into integers and having their leading zeros stripped away. But it's confusing that even following the simple example from AWS is failing.
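One possible explanation, offered here as an assumption rather than a verified answer: in AWS's examples the \" escapes belong to the outer JSON encoding used when the InputTemplate is supplied as a string through the CLI or CloudFormation; if the template is typed directly into the console, those backslashes become part of the template itself and make it invalid JSON, which would produce exactly this kind of parse error. Under that assumption, the console form would use plain quotes:
{
  "fax": "<userFax>"
}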

Correct way to read data for a graph

What I want to do is show some data in a graph. The data comes from a pandas DataFrame that I generate in my main.py file when crunching some numbers.
Now I want to show this in a Chart.js graph in another HTML page.
Is the correct way to leave the DataFrame in my main.py file and generate the graph by reading it from there, or is it to create a Django model and have the graph read the data from that model?
The DataFrame will change every day, hence the graph will change daily.
If the latter is correct, could someone show me how they would make the model if the DataFrame is just some text with numbers:
print(df["my_data"])
pass: 20
fail: 50
n/a: 8
Here is a basic overview. Let me know where you need elaboration.
views.py
from django.http import JsonResponse

def chart(request):
    # Chart.js data structure created in Python:
    data = {
        "labels": ["2020-01-01", "2020-01-02", ...],
        "datasets": [
            {
                "label": "series 1",
                "data": [0, 1, ...],
                "backgroundColor": "blue"
            },
            ...
        ]
    }
    # for the DataFrame in the question this could be e.g.
    # data = {"labels": ["pass", "fail", "n/a"], "datasets": [{"label": "my_data", "data": [20, 50, 8]}]}
    # send as JsonResponse:
    return JsonResponse(data)
script.js
$.ajax({
    url: "the/url",
    type: "GET",
    success: function(response) {
        // build the chart from the JSON the Django view returned
        var chart = new Chart("<the identifier>", {
            type: 'bar',
            data: response,
        });
    }
});

How to read an entire test data file in Postman as part of a pre-request script?

I am trying to read the entire test data file as part of a pre-request script in Postman.
I tried the variable pm.iterationData; however, it prints only the current iteration's data set in the collection runner.
I need the entire test data file, loaded as an environment variable in Postman.
Is there a way?
The solution that I could find for this is to set the test data in a variable as part of the pre-request script, as follows:
let testdataset =
[
{
"name": "xyz",
"address": "abcd",
"value": "Hello"
},
{
"name" : "mno",
"address" : "defg",
"value" : "Mnop"
}
];
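If the data then needs to survive across requests, it can be stashed in an environment variable. Postman variables hold strings, so serialize with JSON.stringify; a minimal sketch using Postman's pm.environment API:
// pre-request script: store the whole data set as a string
pm.environment.set("testdataset", JSON.stringify(testdataset));

// later, in any request's scripts, read it back:
var data = JSON.parse(pm.environment.get("testdataset"));
console.log(data[1].name); // "mno"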
The best way I have come up with for dealing with this (collecting all data from a file to use in one request) is to:
Have 2 nodes
The first node has
A dummy call to something like https://postman-echo.com/
Code that:
i. stores the table headers in an environment variable;
ii. concatenates the rows into environment variables;
iii. does 'postman.setNextRequest(null)' for all but the last row
The second node
Only runs in the last iteration
Sends collected data in the environment variable to the API
There is currently no way to avoid making a call on the first node.
See the GitHub ticket requesting this: Request a way for nodes in collection to be logic-only, no request issued #5707
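A minimal sketch of the first node's test script under these assumptions (the column names name and address are made up; pm.info.iteration and pm.info.iterationCount are the runner's built-in iteration counters):
// first node's Tests tab -- runs once per data-file row
var rows = JSON.parse(pm.environment.get("rows") || "[]");

// concatenate this iteration's row into the environment variable
rows.push({
    name: pm.iterationData.get("name"),
    address: pm.iterationData.get("address")
});
pm.environment.set("rows", JSON.stringify(rows));

// skip the second node on every iteration except the last
if (pm.info.iteration < pm.info.iterationCount - 1) {
    postman.setNextRequest(null); // ends this iteration; the runner starts the next one
}
// on the last iteration we fall through, so the second node runs
// and can send the collected "rows" to the real API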

Create hierarchy from flat list using map/reduce

Suppose I had a Couch instance full of documents like the following:
{"id":"1","parent":null},
{"id":"2","parent":"1"},
{"id":"3","parent":"1"},
{"id":"4","parent":"3"},
{"id":"5","parent":"null"},
{"id":"6","parent":"5"}
Is there a way using MapReduce to build a view that would return my documents in this format:
{
"id":"1",
"children": [
{"id":"2"},
{"id":"3","children":[
{"id":"4"}
]}
]
},
{
"id":"5",
"children": [ {"id":"6"} ]
}
My instinct says "no" because I imagine you'd need one pass for each level of the hierarchy, and items can be nested indefinitely deep.
You are right that this cannot be achieved with a map function alone. But the reduce function will have access to the whole list of rows emitted by the map function: http://wiki.apache.org/couchdb/Introduction_to_CouchDB_views#Reduce_Functions
To implement that, you would need a robust reduce function, one that can also handle "rereduce" efficiently.
In the end, it might be easier to create a view that maps each document by its parent as the key. Example:
function(doc) {
emit(doc.parent, doc._id);
}
This view lets you query the top-level documents with the key null, and the children of a node with keys like "1", "3" or "5".
A reduce function could be added to create a result like this:
null => [1, 5]
1 => [2, 3]
3 => [4]
5 => [6]
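A minimal sketch of such a reduce function (query the view with group=true so that each parent key is reduced separately; this collapses the mapped child ids into one array per parent):
function (keys, values, rereduce) {
    if (rereduce) {
        // values is a list of arrays produced by earlier reduce calls
        return values.reduce(function (acc, v) { return acc.concat(v); }, []);
    }
    // values is already the list of child ids emitted for this parent key
    return values;
}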
The tree structure you wished for is contained therein, just in a different format, and can be assembled from it on the client.
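For completeness, a sketch of that client-side assembly from such a parent => children map (the variable name childrenOf and the use of the string "null" as the root key are assumptions matching the result above):
// childrenOf: { "null": ["1", "5"], "1": ["2", "3"], "3": ["4"], "5": ["6"] }
function buildTree(childrenOf, parent) {
    return (childrenOf[parent] || []).map(function (id) {
        var node = { id: id };
        var kids = buildTree(childrenOf, id);
        if (kids.length) node.children = kids; // omit "children" for leaves
        return node;
    });
}

var tree = buildTree(childrenOf, "null");
// => [{id:"1", children:[{id:"2"}, {id:"3", children:[{id:"4"}]}]},
//     {id:"5", children:[{id:"6"}]}]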

Create / Update multiple objects from one API response

All-new jsfiddle: http://jsfiddle.net/vJxvc/2/
Currently, I query an API that returns JSON like this. The API cannot be changed for now, which is why I need to work around it.
[
{"timestamp":1406111961, "values":[1236.181, 1157.695, 698.231]},
{"timestamp":1406111970, "values":[1273.455, 1153.577, 693.591]}
]
(could be a lot more lines, of course)
As you can see, each line has a timestamp and then an array of values. My problem is that I would actually like to transpose that. Looking at the first line alone:
{"timestamp":1406111961, "values":[1236.181, 1157.695, 698.231]}
It contains a few measurements taken at the same time. This would need to become the following in my Ember project:
{
"sensor_id": 1, // can be derived from the array index
"timestamp": 1406111961,
"value": 1236.181
},
{
"sensor_id": 2,
"timestamp": 1406111961,
"value": 1157.695
},
{
"sensor_id": 3,
"timestamp": 1406111961,
"value": 698.231
}
And those values would have to be pushed into the respective sensor models.
The transformation itself is trivial, but I have no idea where I would put it in Ember and how I could alter many Ember models at the same time.
You could make your model an array and override the normalize method on your serializer (normalize lives on the serializer, not the adapter). The normalize method is where you do the transformation, and since your JSON is an array, an Ember.Array as a model would work.
I am not an Ember pro, but looking at the manual I would think of something like this:
var a = [
    {"timestamp":1406111961, "values":[1236.181, 1157.695, 698.231]},
    {"timestamp":1406111970, "values":[1273.455, 1153.577, 693.591]}
];
var b = [];
a.forEach(function(item) {
    item.values.forEach(function(value, index) {
        b.push({
            sensor_id: index + 1, // array indices are 0-based, the desired ids are 1-based
            timestamp: item.timestamp,
            value: value
        });
    });
});
console.log(b);
Example http://jsfiddle.net/kRUV4/
Update
Just saw your jsfiddle... You can get the store like this: How to get Ember Data's "store" from anywhere in the application so that I can do store.find()?
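Once you have the store, here is a hedged sketch of pushing the transposed records into it (the model name 'reading' is an assumption, Ember Data needs an id on each pushed record so one is fabricated from the timestamp and sensor index, and the exact push API differs between Ember Data versions):
// b is the transposed array built above
b.forEach(function(item) {
    item.id = item.timestamp + '-' + item.sensor_id; // store.push requires an id
    store.push('reading', item);
});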