Correct way to read data for a graph - django

What I want to do is show some data in a graph. The data comes from a pandas data frame that I generate in my main.py file when crunching some numbers.
Now I want to show this in a Chart.js graph in another HTML page.
Is the correct way to leave the data frame in my main.py file and generate the graph by reading the data frame from there, or is the correct way to create a Django model and have the graph read the data from that model?
The data frame will change every day, hence the graph will change daily.
If the latter is correct, could someone show me how they would make the model if the data frame is just some text with numbers?
print(df["my_data"])
pass: 20
fail: 50
n/a: 8

Here is a basic overview. Let me know where you need elaboration.
views.py
from django.http import JsonResponse

def chart(request):
    # Chart.js data structure created in Python:
    data = {
        "labels": ["2020-01-01", "2020-01-02", ...],
        "datasets": [
            {
                "label": "series 1",
                "data": [0, 1, ...],
                "backgroundColor": "blue"
            },
            ...
        ]
    }
    # send as JsonResponse:
    return JsonResponse(data)
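For the data frame in the question, the same view can build that structure straight from the frame's index and values. A minimal sketch, assuming the column is named my_data as shown above (the hard-coded counts stand in for whatever main.py produces each day):

import pandas as pd
from django.http import JsonResponse

def chart(request):
    # Stand-in for the frame main.py produces daily.
    df = pd.DataFrame({"my_data": [20, 50, 8]}, index=["pass", "fail", "n/a"])
    data = {
        "labels": df.index.tolist(),         # ["pass", "fail", "n/a"]
        "datasets": [{
            "label": "results",
            "data": df["my_data"].tolist(),  # [20, 50, 8]
            "backgroundColor": "blue",
        }],
    }
    return JsonResponse(data)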
script.js
$.ajax({
    url: "the/url",
    type: "GET",
    success: function(response) {
        // response is already the Chart.js data object
        var chart = new Chart("<the identifier>", {
            type: 'bar',
            data: response,
        });
    }
});

Related

I want to get data from JavaScript with Django

There is price filtering written in JavaScript in the template. I want to take the price range given by this filter with Django and write a filtering function. I couldn't do it because I don't know JavaScript. How can I do this?
So, I want to write a Django function that takes the given start and end values and filters the products accordingly.
main.js
// PRICE SLIDER
var slider = document.getElementById('price-slider');

if (slider) {
    noUiSlider.create(slider, {
        start: [1, 100],
        connect: true,
        tooltips: [true, true],
        format: {
            to: function(value) {
                return value.toFixed(2) + '₼';
            },
            from: function(value) {
                return value;
            }
        },
        range: {
            'min': 1,
            'max': 100
        }
    });
}
I'm not familiar with noUiSlider but you would need to get the from and to values into Django - you can do that either by submitting a form when clicking FILTER or by sending an AJAX request. I presume you would just submit the form in a standard page submission as you aren't familiar with JS (and therefore AJAX).
def your_view(request):
    filter_from = request.POST.get('slider_from')
    filter_to = request.POST.get('slider_to')
    products = YourModel.objects.filter(value__gte=filter_from, value__lte=filter_to)
    ...
You will need to replace slider_from and slider_to with the key values that are sent by the slider inputs in request.POST - these will be the names of the inputs themselves. You can wrap request.POST in a print statement to easily see what these are. It's just a matter of getting the values and passing them into the filter() function of your model.
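Putting that together, a sketch of the full view - assuming a hypothetical Product model with a price field, and hidden form inputs named slider_from / slider_to that the slider fills in before the form is submitted:

from django.shortcuts import render
from .models import Product  # hypothetical model with a "price" field

def filter_products(request):
    # Fall back to the slider's full range if no filter was submitted.
    filter_from = request.POST.get('slider_from', 1)
    filter_to = request.POST.get('slider_to', 100)
    products = Product.objects.filter(price__gte=filter_from, price__lte=filter_to)
    return render(request, 'products.html', {'products': products})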

Cannot define custom metrics in ray

I'm using a framework called FLOW RL. It enables me to use RLlib and Ray for my RL algorithm. I have been trying to plot non-learning data on TensorBoard. Following the Ray documentation (link), I have tried to add custom metrics. Therefore, I need to use the info dict, which is accessed by on_episode_step(info). An "episode" element is supposed to be present in this dictionary; it is what gives me access to my custom scalars.
However, every time I try to access the episode element, I get an error because it does not exist in the info dict. Is this normal?
File "examples/rllib/newGreenWaveGrid2.py", line 295, in on_episode_start
episode = info["episode"]
KeyError: 'episode'
def on_episode_step(info):
    episode = info["episode"]
    whatever = abs(episode.last_observation_for()[2])
    episode.user_data["whatever"].append(whatever)
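For context, a minimal sketch of the on_episode_start callback named in the traceback, following the same custom-metrics pattern from the Ray docs (the user_data list also has to be created here before on_episode_step can append to it):

def on_episode_start(info):
    episode = info["episode"]            # the same line that raises the KeyError
    episode.user_data["whatever"] = []   # initialise the list before appending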
if __name__ == '__main__':
    alg_run, gym_name, config = setup_exps()
    ray.init(num_cpus=N_CPUS + 1, redirect_output=False)
    trials = run_experiments({
        flow_params['exp_tag']: {
            'run': alg_run,
            'env': gym_name,
            'config': {
                **config,
                'callbacks': {
                    "on_episode_start": on_episode_start,
                    "on_episode_step": on_episode_step,
                    "on_episode_end": on_episode_end,
                }
            },
            'checkpoint_freq': 20,
            'max_failures': 999,
            'stop': {
                'training_iteration': 200,
            },
        },
    })

SpreadJS FromJson Chart load

I'm using SpreadJS v12 as a reporting tool. Users enter the page, get the data they want, create charts, and save the report for later use.
When a user saves the report I get the JSON data (GC.Spread.Sheets.Workbook.toJSON) and save it to the database, and whenever someone opens the same report I get the JSON from the database and give it to the page (GC.Spread.Sheets.Workbook.fromJSON). Everything works fine, except that if there is a chart on the page, the data source for the chart series (xValues and yValues) changes. When I check the JSON it looks like this: Sheet2!$B$2:$B$25, but in the chart it's: Sheet2!$A$1:$A$24. Am I doing something wrong?
By the way, my serialize options are: { ignoreFormula: false, ignoreStyle: false, rowHeadersAsFrozenColumns: true, columnHeadersAsFrozenRows: true, doNotRecalculateAfterLoad: false }
this.state.spread = new GC.Spread.Sheets.Workbook(document.getElementById("spreadSheetContent"), { sheetCount: 1 });
This is my save method:
var pageJson = this.state.spread.toJSON(this.serializationOption);
let self = this;
let model = {
    Id: "",
    Name: reportName,
    Query: query,
    PageJson: JSON.stringify(pageJson)
};
this.post({ model }, "Query/SaveReportTemplate")
    .done(function(reply) {
        self.createSpreadSheet(reply);
    })
    .fail(function(reply) {
        self.PopUp(reply, 4);
    });
And this is my load method:
var jsonOptions = {
    ignoreFormula: false,
    ignoreStyle: false,
    frozenColumnsAsRowHeaders: true,
    frozenRowsAsColumnHeaders: true,
    doNotRecalculateAfterLoad: false
};
this.state.spread.fromJSON(JSON.parse(template.PageJson), jsonOptions);
this.state.spread.repaint();
Well, after a long day I think I've found what's causing the problem and have started working around it.
Let's say we have two sheets: Sheet1 at index 0 and Sheet2 at index 1.
Because of JSON serialization options like frozenColumnsAsRowHeaders and frozenRowsAsColumnHeaders, Sheet2's row and column numbers in the JSON differ from the ones on screen until Sheet2 is painted.
If a formula or a chart in Sheet1 references Sheet2, its references will end up pointing at different cells than the ones you originally set. So the way to avoid the problem is to only reference sheets that are painted earlier.

How to pass base64 encoded image to Tensorflow prediction?

I have a google-cloud-ml model that I can run predictions on by passing a 3-dimensional array of float32...
{ "instances": [ { "input": [ [ [ 0.0 ], [ 0.5 ], [ 0.8 ] ], ... ] } ] }
However, this is not an efficient format for transmitting images, so I'd like to pass base64-encoded PNG or JPEG. This document talks about doing that, but what is not clear is what the entire JSON object looks like. Does the { "b64": "x0welkja..." } go in place of the [ [ [ 0.0 ], [ 0.5 ], [ 0.8 ] ], ... ], leaving the enclosing "instances" and "input" the same? Or some other structure? Or does the TensorFlow model have to be trained on base64?
The TensorFlow model does not have to be trained on base64 data. Leave your training graph as is. However, when exporting the model, you'll need to export a model that can accept PNG or JPEG (or possibly raw, if it's small) data, and be sure to use a name for the input that ends in _bytes. This signals to CloudML Engine that you will be sending base64-encoded data. Putting it all together would look something like this:
from tensorflow.contrib.saved_model.python.saved_model import utils

# Shape of [None] means we can have a batch of images.
image = tf.placeholder(shape=[None], dtype=tf.string)

# Decode the image.
decoded = tf.image.decode_jpeg(image, channels=3)

# Do the rest of the processing.
scores = build_model(decoded)

# The input name needs to have the "_bytes" suffix.
inputs = {'image_bytes': image}
outputs = {'scores': scores}

utils.simple_save(session, export_dir, inputs, outputs)
The request you send will look something like this:
{
    "instances": [
        {"b64": "x0welkja..."}
    ]
}
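Building that request body in Python is just standard base64 plus json - a sketch, with the file path as a placeholder:

import base64
import json

# Read the image bytes and base64-encode them (placeholder path).
with open("image.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

request_body = json.dumps({"instances": [{"b64": encoded}]})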
If you just want an efficient way to send images to a model (and not necessarily base64-encode them), I would suggest uploading your image(s) to Google Cloud Storage and then having your model read off GCS. This way, you are not limited by image size, and you can take advantage of the multi-part, multithreaded, resumable uploads etc. that the GCS API provides.
TensorFlow's tf.read_file will read directly off GCS. Here's an example of a serving input_fn that will do this. Your request to CMLE would send it an image URL (gs://bucket/some/path/to/image.jpg):
def read_and_preprocess(filename, augment=False):
    # Decode the image file starting from the filename;
    # end up with pixel values that are in the [-1, 1] range.
    image_contents = tf.read_file(filename)
    image = tf.image.decode_jpeg(image_contents, channels=NUM_CHANNELS)
    image = tf.image.convert_image_dtype(image, dtype=tf.float32)  # 0-1
    image = tf.expand_dims(image, 0)  # resize_bilinear needs batches
    image = tf.image.resize_bilinear(image, [HEIGHT, WIDTH], align_corners=False)
    #image = tf.image.per_image_whitening(image)  # useful if mean not important
    image = tf.subtract(image, 0.5)
    image = tf.multiply(image, 2.0)  # -1 to 1
    return image

def serving_input_fn():
    inputs = {'imageurl': tf.placeholder(tf.string, shape=())}
    filename = tf.squeeze(inputs['imageurl'])  # make it a scalar
    image = read_and_preprocess(filename)
    # Make the outer dimension unknown (and not 1).
    image = tf.placeholder_with_default(image, shape=[None, HEIGHT, WIDTH, NUM_CHANNELS])
    features = {'image': image}
    return tf.estimator.export.ServingInputReceiver(features, inputs)
Your training code will train off actual images, just as in rhaertel80's suggestion above. See https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive/08_image/flowersmodel/trainer/task.py#L27 for what the training/evaluation input functions would look like.
I was trying to use @Lak's answer (thanks, Lak) to get online predictions for multiple instances in one JSON file, but kept getting the following error (I had two instances in my test JSON, hence the shape [2]):
input filename tensor must be scalar but had shape [2]
The problem is that ML Engine apparently batches all the instances together and passes them to the serving input receiver function, but @Lak's sample code assumes the input is a single instance (it indeed works fine if you have a single instance in your JSON). I altered the code so that it can process a batch of inputs. I hope it will help someone:
def read_and_preprocess(filename):
    image_contents = tf.read_file(filename)
    image = tf.image.decode_image(image_contents, channels=NUM_CHANNELS)
    image = tf.image.convert_image_dtype(image, dtype=tf.float32)  # 0-1
    return image

def serving_input_fn():
    # Shape [None] accepts a batch of URLs rather than a single scalar.
    inputs = {'imageurl': tf.placeholder(tf.string, shape=[None])}
    filename = inputs['imageurl']
    image = tf.map_fn(read_and_preprocess, filename, dtype=tf.float32)
    # Make the outer dimension unknown (and not 1).
    image = tf.placeholder_with_default(image, shape=[None, HEIGHT, WIDTH, NUM_CHANNELS])
    features = {'image': image}
    return tf.estimator.export.ServingInputReceiver(features, inputs)
The key changes are that (1) you don't squeeze the input tensor (that would cause trouble in the special case when your JSON contains only one instance), and (2) you use tf.map_fn to apply the read_and_preprocess function to the batch of input image URLs.
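With that serving function, a multi-instance request body (the case that originally triggered the shape [2] error) can be built like this - a sketch with placeholder GCS paths:

import json

request_body = json.dumps({
    "instances": [
        {"imageurl": "gs://my-bucket/images/img1.jpg"},
        {"imageurl": "gs://my-bucket/images/img2.jpg"}
    ]
})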

Create / Update multiple objects from one API response

all new jsfiddle: http://jsfiddle.net/vJxvc/2/
Currently, I query an API that returns JSON like this. The API cannot be changed for now, which is why I need to work around it.
[
    {"timestamp": 1406111961, "values": [1236.181, 1157.695, 698.231]},
    {"timestamp": 1406111970, "values": [1273.455, 1153.577, 693.591]}
]
(could be a lot more lines, of course)
As you can see, each line has a timestamp and then an array of values. My problem is that I would actually like to transpose that. Looking at the first line alone:
{"timestamp":1406111961, "values":[1236.181, 1157.695, 698.231]}
It contains a few measurements taken at the same time. In my Ember project, this would need to become:
{
    "sensor_id": 1, // can be derived from the array index
    "timestamp": 1406111961,
    "value": 1236.181
},
{
    "sensor_id": 2,
    "timestamp": 1406111961,
    "value": 1157.695
},
{
    "sensor_id": 3,
    "timestamp": 1406111961,
    "value": 698.231
}
And those values would have to be pushed into the respective sensor models.
The transformation itself is trivial, but I have no idea where I would put it in Ember and how I could alter many Ember models at the same time.
You could make your model an array and override the normalize method on your adapter. The normalize method is where you do the transformation, and since your JSON is an array, an Ember.Array as a model would work.
I am not an Ember pro, but looking at the manual I would think of something like this:
var a = [
    {"timestamp": 1406111961, "values": [1236.181, 1157.695, 698.231]},
    {"timestamp": 1406111970, "values": [1273.455, 1153.577, 693.591]}
];
var b = [];

a.forEach(function(item) {
    item.values.forEach(function(value, sensor_id) {
        b.push({
            // sensor_id here is the 0-based array index;
            // add 1 if ids should start at 1 as in the example above.
            sensor_id: sensor_id,
            timestamp: item.timestamp,
            value: value
        });
    });
});

console.log(b);
Example http://jsfiddle.net/kRUV4/
Update
Just saw your jsfiddle... You can get the store like this: How to get Ember Data's "store" from anywhere in the application so that I can do store.find()?