Cannot define custom metrics in ray - tensorboard

I'm using a framework called Flow RL, which lets me use RLlib and Ray for my RL algorithm. I have been trying to plot non-learning data on TensorBoard. Following the Ray documentation ( link ), I tried to add custom metrics. To do so, I need the info dict that is passed to on_episode_step(info); an "episode" element is supposed to be present in this dictionary, and it is what gives me access to my custom scalars.
However, every time I try to access the episode element, I get an error because it does not exist in the info dict. Is this normal?
File "examples/rllib/newGreenWaveGrid2.py", line 295, in on_episode_start
episode = info["episode"]
KeyError: 'episode'
def on_episode_step(info):
    episode = info["episode"]
    whatever = abs(episode.last_observation_for()[2])
    episode.user_data["whatever"].append(whatever)
if __name__ == '__main__':
    alg_run, gym_name, config = setup_exps()
    ray.init(num_cpus=N_CPUS + 1, redirect_output=False)
    trials = run_experiments({
        flow_params['exp_tag']: {
            'run': alg_run,
            'env': gym_name,
            'config': {
                **config,
                'callbacks': {
                    "on_episode_start": on_episode_start,
                    "on_episode_step": on_episode_step,
                    "on_episode_end": on_episode_end,
                }
            },
            'checkpoint_freq': 20,
            'max_failures': 999,
            'stop': {
                'training_iteration': 200,
            },
        },
    })
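For reference, my on_episode_start and on_episode_end follow the custom-metrics example from the Ray docs, roughly like this (just a sketch; "whatever" is simply the name of my metric):
import numpy as np

def on_episode_start(info):
    episode = info["episode"]
    # initialise the list that on_episode_step appends to
    episode.user_data["whatever"] = []

def on_episode_end(info):
    episode = info["episode"]
    # report the mean as a custom metric so it shows up on TensorBoard
    episode.custom_metrics["whatever"] = np.mean(episode.user_data["whatever"])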

Related

I want to get data from JavaScript with Django

There is price filtering written in JavaScript in the template. I want to take the price range given by this filter in Django and write a filtering function, but I couldn't because I don't know JavaScript. How can I do this?
So, I want to write a Django view that takes the given start and end values and filters the products accordingly.
main.js
// PRICE SLIDER
var slider = document.getElementById('price-slider');

if (slider) {
    noUiSlider.create(slider, {
        start: [1, 100],
        connect: true,
        tooltips: [true, true],
        format: {
            to: function (value) {
                return value.toFixed(2) + '₼';
            },
            from: function (value) {
                return value;
            }
        },
        range: {
            'min': 1,
            'max': 100
        }
    });
}
I'm not familiar with noUiSlider but you would need to get the from and to values into Django - you can do that either by submitting a form when clicking FILTER or by sending an AJAX request. I presume you would just submit the form in a standard page submission as you aren't familiar with JS (and therefore AJAX).
def your_view(request):
    filter_from = request.POST.get('slider_from')
    filter_to = request.POST.get('slider_to')
    YourModel.objects.filter(value__gte=filter_from, value__lte=filter_to)
    ...
You will need to replace slider_from and slider_to with the key values that are sent by the slider input in request.POST - this will be the name of the inputs themselves. You can wrap request.POST in a print statement to easily see what these are. It's just a matter of getting the values and passing them into the filter() function of your model.
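A fuller sketch of that view, purely for illustration (slider_from, slider_to and the value field are placeholders you would swap for whatever your form actually sends, and YourModel for your real model):
from django.shortcuts import render

def your_view(request):
    # print the POST data once to see the real input names your form sends
    print(request.POST)

    filter_from = request.POST.get('slider_from')
    filter_to = request.POST.get('slider_to')

    products = YourModel.objects.all()
    if filter_from and filter_to:
        # the slider formats values like "25.00₼", so strip the sign before casting
        filter_from = float(filter_from.replace('₼', ''))
        filter_to = float(filter_to.replace('₼', ''))
        products = products.filter(value__gte=filter_from, value__lte=filter_to)

    return render(request, 'your_template.html', {'products': products})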

Batch prediction Input

I have a TensorFlow model deployed on Vertex AI on Google Cloud. The model definition is:
item_model = tf.keras.Sequential([
    tf.keras.layers.StringLookup(
        vocabulary=item_vocab, mask_token=None),
    tf.keras.layers.Embedding(len(item_vocab) + 1, embedding_dim)
])

user_model = tf.keras.Sequential([
    tf.keras.layers.StringLookup(
        vocabulary=user_vocab, mask_token=None),
    # We add an additional embedding to account for unknown tokens.
    tf.keras.layers.Embedding(len(user_vocab) + 1, embedding_dim)
])
class NCF_model(tf.keras.Model):
    def __init__(self, user_model, item_model):
        super(NCF_model, self).__init__()
        # define all layers in init
        self.user_model = user_model
        self.item_model = item_model
        self.concat_layer = tf.keras.layers.Concatenate()
        self.feed_forward_1 = tf.keras.layers.Dense(32, activation='relu')
        self.feed_forward_2 = tf.keras.layers.Dense(64, activation='relu')
        self.final = tf.keras.layers.Dense(1, activation='sigmoid')

    def call(self, inputs, training=False):
        user_id, item_id = inputs[:, 0], inputs[:, 1]
        x = self.user_model(user_id)
        y = self.item_model(item_id)
        x = self.concat_layer([x, y])
        x = self.feed_forward_1(x)
        x = self.feed_forward_2(x)
        x = self.final(x)
        return x
The model has two string inputs and it outputs a probability value.
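For reference, calling the model locally on a couple of (user, item) pairs looks roughly like this (a sketch; embedding_dim and the vocab lists are defined elsewhere in my code):
model = NCF_model(user_model, item_model)

# one row per (user_id, item_id) pair, both passed as strings
batch = tf.constant([["yuu", "190767"],
                     ["yuu", "364"]])
probs = model(batch)  # shape (2, 1), sigmoid outputs in [0, 1]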
When I use the following input in the batch prediction file, I get an empty prediction file.
Sample of csv input file:
userid,itemid
yuu,190767
yuu,364
yuu,154828
yuu,72998
yuu,130618
yuu,183979
yuu,588
When I use a jsonl file with the following input:
{"input":["yuu", "190767"]}
I get the following error.
('Post request fails. Cannot get predictions. Error: Exceeded retries: Non-OK result 400 ({\n "error": "Failed to process element: 0 key: input of \'instances\' list. Error: INVALID_ARGUMENT: JSON object: does not have named input: input"\n}) from server, retry=3.', 1)
What seems to be going wrong with these inputs?
After a bit of experimenting, I found out what was wrong with the batch prediction input. In the csv file, the item column was being interpreted as an integer, whereas the model expects a string input. I'm not sure why there was no output at all in that case, and I couldn't find the logs for the batch prediction.
The correct format for jsonlines was:
["user1", "item1"]
["user2", "item2"]
["user3", "item3"]
The format I had used assumed the input was a named layer, 'input'. In all of this, I found Google Cloud's documentation to be lacking.
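As a small illustration, generating a JSONL file in that format from the pairs in the question could look like this (a sketch; the file name and the pair list are placeholders):
import json

pairs = [("yuu", "190767"), ("yuu", "364"), ("yuu", "154828")]

with open("batch_input.jsonl", "w") as f:
    for user_id, item_id in pairs:
        # each line is a bare JSON array, not an object with a named "input" key
        f.write(json.dumps([user_id, str(item_id)]) + "\n")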

Correct way to read data for a graph

What I want to do is show some data in a graph. The data comes from a pandas data frame that I generate in my main.py file when crunching some numbers.
Now I want to show this in a Chart.js graph in another HTML page.
Is the correct way to leave the data frame in my main.py file and generate the graph by reading it from there, or is the correct way to create a Django model and have the graph read the data from that model?
The data frame will change every day, hence the graph will change daily.
If the latter is correct, could someone show me how they would make the model if the data frame is just some text with numbers?
print(df["my_data"])
pass: 20
fail: 50
n/a: 8
Here is a basic overview. Let me know where you need elaboration.
views.py
from django.http import JsonResponse

def chart(request):
    # chart.js data structure created in python:
    data = {
        "labels": ["2020-01-01", "2020-01-02", ...],
        "datasets": [
            {
                "label": "series 1",
                "data": [0, 1, ...],
                "backgroundColor": "blue"
            },
            ...
        ]
    }
    # send as JsonResponse:
    return JsonResponse(data)
script.js
$.ajax({
    url: "the/url",
    type: "GET",
    success: function (response) {
        chart = new Chart("<the identifier>", {
            type: 'bar',
            data: response,
        });
    }
})
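To connect this to the data frame in the question, a sketch of building that dict from the my_data column might look like the following (assuming the pass/fail/n-a labels live in the frame's index, and with df standing in for however you recompute the frame each day):
from django.http import JsonResponse

def chart(request):
    counts = df["my_data"]  # e.g. pass: 20, fail: 50, n/a: 8

    data = {
        "labels": list(counts.index),            # ["pass", "fail", "n/a"]
        "datasets": [
            {
                "label": "daily results",
                "data": [int(v) for v in counts.values],
                "backgroundColor": "blue",
            }
        ],
    }
    return JsonResponse(data)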

How to make a field required for a post request in an API?

I'm making an API route that returns a random number in Flask. When sending a post request to the end-point, I want it to return an error(s) if a certain field(s) is not in the post request (for example, if there is no "name" in the request, it should return an error).
I've tried doing this with a dictionary and try/except. If a field is missing, I add a key: value pair to the dictionary, and if the dictionary isn't empty, I return it. The first problem is that if more than one field is missing, only one of them gets added to the error dictionary. The second problem is that I'm also trying to make sure some fields have a certain value (for example, color needs to be red or blue). If I check for one thing, it works, but if I do
if color != "red" or color != "blue":
it will always show an error, even if I split it up into multiple if statements. I've searched Google, rephrasing my question at least 30 different times, and most answers I've gotten are about Salesforce (which I'm assuming is some company/software).
So... is there a way to make certain fields required? Or am I on the right track with try/except? If so, how do I make it show more than one error and require a field to have a certain value?
@app.route('/api/get-num', methods=["POST"])
def num():
    errors = {}
    try:
        name = request.json['name']
    except:
        errors["errors"] = {"name": "This field is required."}
    try:
        color = request.json['color']
        if color != "red" or color != "blue":
            errors["errors"] = {"color": "Invalid value, must be red or blue."}
    except:
        errors["errors"] = {"color": "Invalid value, must be red or blue."}
    if len(errors) != 0:
        return errors
    create_dict = {
        'name': request.json['name'],
        'email': request.json['email'],
        'year': request.json['year'],
        'color': request.json['color']
    }
    return jsonify(create_dict)
Examples:
If name is missing and color is wrong, it should show:
{
    "errors": {
        "color": [
            "Invalid value, must be red or blue."
        ],
        "name": [
            "This field is required."
        ]
    }
}
With name missing and color being "red", it's currently showing:
{
    "errors": {
        "color": "Invalid value, must be red or blue."
    }
}
Create a dict with the expected errors; in this case it would contain all the required fields:
errorsDict = {"errors": {"color": [], "name": []}}
In your try/except blocks, append to the nested lists as necessary, for example:
errorsDict = {"errors": {"color": [], "name": []}}
try:
    name = request.json['name']
except:
    errorsDict["errors"]["name"].append("This field is required.")
try:
    color = request.json['color']
except:
    errorsDict["errors"]["color"].append("COLORS MISSING!")

How can I fix my estimator classifier code

Hi, I'm new to TensorFlow and I've been practicing with the tensorflow.estimator library. Basically, I ran the built-in tf.estimator.DNNClassifier shown below:
import tensorflow as tf

def train_input_fn(features, labels, batch_size):
    """An input function for training"""
    # Convert the inputs to a Dataset.
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
    # Shuffle, repeat, and batch the examples.
    return dataset.shuffle(1000).repeat().batch(batch_size)

# Feature columns describe how to use the input.
my_feature_columns = []
for key in landmark_features.keys():
    my_feature_columns.append(tf.feature_column.numeric_column(key=key))

# Build a DNN with 2 hidden layers and 10 nodes in each hidden layer.
classifier = tf.estimator.DNNClassifier(feature_columns=my_feature_columns, hidden_units=[10, 10], n_classes=10)

dataset = train_input_fn(landmark_features, emotion_labels, batch_size=1375)
However I keep getting the following error:
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpc_tag0rc
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpc_tag0rc', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
Any idea what I can do to fix my code?
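For context, I haven't called classifier.train() anywhere yet; from the Estimator docs I believe the next step would be something like this (untested sketch, with the batch size and step count as placeholders):
# pass the input function itself (not its result) to the estimator
classifier.train(
    input_fn=lambda: train_input_fn(landmark_features, emotion_labels, batch_size=32),
    steps=1000,
)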