I'm working on a survey web project that requires me to use d3.js for a bar chart visualization. I'm using Django for the web application.
The problem I've run into is: how do I pass all the values related to an object into an array that d3.js can read?
choice = question.choicelist.get(choice_no=choice_no)
where the votes for each choice to the question are choice 1 = 4, choice 2 = 6, choice 3 = 7, choice 4 = 1.
For d3.js, the simplest way to read a data set is:
var data = [4, 6, 7, 1];
How do I pass the data to my template such that d3.js is able to read it as the code above?
Handiest option: convert it to JSON. JSON is valid JavaScript, so you can insert it directly into the template, which seems to be what you want. Something like:
import json

def your_view(request):
    poll_results = [4, 6, 7, 1]
    poll_as_json = json.dumps(poll_results)  # gives you the string '[4, 6, 7, 1]'
    ...
    return render_or_whatever(context={'poll_as_json': poll_as_json})
And in your template:
<script ...>
var data = {{ poll_as_json }};
...
</script>
Something like that?
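If you'd rather build that list from your models instead of hardcoding it, a minimal sketch along these lines might work (the Question model, the votes field, and the template name are assumptions based on your snippets; adjust to your actual models):
import json
from django.shortcuts import render

def poll_view(request, question_id):
    # Question, choice_no, and votes are assumed names here,
    # inferred from the question.choicelist snippet above
    question = Question.objects.get(pk=question_id)
    poll_results = [
        choice.votes
        for choice in question.choicelist.all().order_by('choice_no')
    ]
    return render(request, 'poll.html',
                  {'poll_as_json': json.dumps(poll_results)})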
Given my current data in the db, I want to restrict the POST method from adding another record to the database. Instead of inserting, it should just update the existing data in the db.
Code:
def get(self):
    predict = PredictModel.query.all()
    return {'Predict': [x.json() for x in predict]}

def post(self):
    data = request.get_json()
    new_predict = PredictModel(data['timelag1'], data['timelag2'], data['timelag3'],
                               data['timelag4'], data['timelag5'])
    db.session.add(new_predict)
    db.session.commit()
    db.session.flush()
    return new_predict.json(), 201
Current data in db:
"Predict": [
{
"timelag1":1,
"timelag2": 1,
"timelag3": 1,
"timelag4": 1,
"timelag5": 1
}
]
}
Data in db after a user posts another entry:
"Predict": [
{
"timelag1":2,
"timelag2": 2,
"timelag3": 2,
"timelag4": 2,
"timelag5": 2
}
]
}
I recommend reading this answer on how to do the database manipulation (especially the later answers):
Flask-SQLalchemy update a row's information
You will need some kind of primary key or unique identifier to specify the row you want to change, something like "id".
Here's some sample code which will probably work if you adapt it to your case:
instance = User.query.filter(User.id == id)
data = instance.update(dict(json_data))
db.session.commit()
or
num_rows_updated = User.query.filter_by(username='admin').update(dict(email='my_new_email@example.com'))
db.session.commit()
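Putting that together for your resource, here is a minimal sketch of a post method that updates the single existing row instead of inserting a new one (assuming the table is meant to hold one row, and that PredictModel exposes the values as attributes named timelag1 through timelag5):
def post(self):
    data = request.get_json()
    predict = PredictModel.query.first()  # the one row we keep
    if predict is None:
        # No row yet: create the first (and only) one
        predict = PredictModel(data['timelag1'], data['timelag2'],
                               data['timelag3'], data['timelag4'],
                               data['timelag5'])
        db.session.add(predict)
        db.session.commit()
        return predict.json(), 201
    # A row already exists: update it in place instead of adding another
    for field in ('timelag1', 'timelag2', 'timelag3', 'timelag4', 'timelag5'):
        setattr(predict, field, data[field])
    db.session.commit()
    return predict.json(), 200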
I am using Token Authentication in my Django REST Framework project. I have noticed that when creating a token, if I specify the created datetime field in the create method, then the token is not created with that value. For example:
new_token = Token.objects.create(user=user, created=datetime(2021, 9, 7, 10, 10, 10, tzinfo=timezone.utc))
will not create the token with the specified datetime but will use the current UTC time. To make it work I have to perform another operation on the object and save it again, like this:
new_token = Token.objects.create(user=user)
new_token.created = datetime(2021, 9, 7, 10, 10, 10, tzinfo=timezone.utc)
new_token.save()
I think this makes two queries on the DB: one to create the token and another to modify the created datetime, which is not very elegant. Can someone please tell me why Django does not set the datetime the first way? I'm very new to this framework, so please pardon me if I'm overlooking something very simple here.
Thanks!
Okay, I figured out why this is the behavior of Token. If you see the code of Token model, the created field is defined as follows:
created = models.DateTimeField(_("Created"), auto_now_add=True)
The auto_now_add=True value means this field is automatically set to the current time when the object is first created, overriding any value you pass in.
Now, what I wanted to do was to mock this created datetime in my unit tests to simulate some cases. I found out that you can just mock the django.utils.timezone.now return value to simulate any created datetime like this:
from datetime import datetime, timezone
from unittest.mock import patch

def my_unit_test(self):
    with patch('django.utils.timezone.now') as fake_dt:
        fake_dt.return_value = datetime(2021, 9, 7, 10, 10, 10, tzinfo=timezone.utc)
        # The token's created field will be set to the datetime above
        token = Token.objects.create(user=test_user)
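To confirm the mock took effect, you can assert on the created field afterwards (assuming a standard Django TestCase with assertEqual):
self.assertEqual(token.created,
                 datetime(2021, 9, 7, 10, 10, 10, tzinfo=timezone.utc))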
For a custom prediction routine with a Keras (TensorFlow 2.1) model, I am having trouble figuring out what form the JSON inputs arrive in, and how to read them in the predictor class when the model has multiple inputs. All of the custom prediction routine examples in the documentation use simple flat single-input lists. If, for example, we send in our inputs as:
{"instances": [
{
"event_type_input": [1, 2, 20],
"event_dwelltime_input": [1.368, 0.017, 0.0],
"rf_input": [1.2, -2.8]},
{
"event_type_input": [14, 40, 20],
"event_dwelltime_input": [1.758, 13.392, 0.0],
"rf_input": [1.29, -2.87]}
]}
How should we ingest the incoming json in our predictor class?
import numpy as np

class MyPredictor(object):
    def __init__(self, model):
        self.model = model

    def predict(self, instances, **kwargs):
        inputs = np.array(instances)
        # The above example from the docs is wrong for multiple inputs.
        # What should our inputs look like to get them into the right
        # shape for our Keras model?
        outputs = self.model.predict(inputs)
        return outputs.tolist()
Our JSON inputs to Google AI Platform are a list of dictionaries. However, for a Keras model, our inputs need to be in a different shape, like the following:
inputs = {
    "event_type_input": np.array([[1, 2, 20], [14, 40, 20]]),
    "event_dwelltime_input": np.array([[1.368, 0.017, 0.0], [1.758, 13.392, 0.0]]),
    "rf_input": np.array([[1.2, -2.8], [1.29, -2.87]]),
}
model.predict(inputs)
Am I right that the thing to do is just reshape the instances? The only confusion is that the tensorflow framework (as opposed to a custom prediction routine) handles predicting on this JSON input fine, and I thought all the framework does is call the .predict method on the instances (unless there is some under-the-hood reshaping of the data; I couldn't find a source describing what exactly happens).
Main question: How should we write our predictor class to take in the instances such that we can run the model.predict method on it?
I would suggest creating a new Keras Model and exporting it.
Create a separate Input layer for each of the three inputs to the new Model (with the name of the Input being the name in your JSON struct). Then, in this Model, reshape the inputs, and borrow the weights/structure from your trained model, and export the new model. Something like this:
trained_model = keras.models.load_model(...)  # your trained model

input1 = keras.Input(..., name='event_type_input')
input2 = keras.Input(..., name='event_dwelltime_input')
input3 = keras.Input(..., name='rf_input')

export_inputs = keras.layers.concatenate([input1, input2, input3])
# Reshape to what the first hidden layer of your trained model expects
reshaped_inputs = keras.layers.Lambda(...)(export_inputs)

layer1 = trained_model.get_layer(index=1)(reshaped_inputs)
layer2 = trained_model.get_layer(index=2)(layer1)  # etc. ...
...  # ending in export_output

export_model = keras.Model([input1, input2, input3], export_output)
export_model.save(...)
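Alternatively, if you'd rather keep the trained model untouched, the pivot can be done inside the predictor itself. Here is a minimal sketch (not from the docs; it just converts the list of per-instance dicts into the dict of batched arrays that the model.predict call above expects):
import numpy as np

class MyPredictor(object):
    def __init__(self, model):
        self.model = model

    def predict(self, instances, **kwargs):
        # Pivot [{input_name: row}, ...] into {input_name: batched array},
        # the format a multi-input Keras model expects
        inputs = {
            name: np.array([instance[name] for instance in instances])
            for name in instances[0]
        }
        outputs = self.model.predict(inputs)
        return outputs.tolist()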
I am writing a Flask-RESTful resource. It returns a model object called DocumentSet with the following structure:
DocumentSet:
    id: int
    documents: list of Document
Document is another Model object with the following structure:
Document:
    id: int
    ...other fields...
I want to write a @marshal_with decorator that returns the DocumentSet id along with a list of the Document ids, like so:
{
    "id": 5,
    "document_ids": [1, 2, 3]
}
I've been banging my head against the output marshaling syntax to no avail. Some of the things I've tried:
{'id': fields.Integer, 'document_ids':fields.List(fields.Integer, attribute='documents.id')}
{'id': fields.Integer, 'document_ids':fields.List(fields.Nested({'id':fields.Integer}), attribute='documents')}
What's the magic incantation?
The magic incantation is
{'id': fields.Integer, 'name': fields.String, 'document-ids': {'id': fields.Integer}}
It was right there in the "Complex Structures" paragraph in the documentation.
I have an AJAX POST that returns the following data:
{u'related_id': u'9', u'db_name': u'my_field', u'name': u'jsaving_draft', u'value': u'hypothesis sadfasdfadfws asdfasdf'}
In my view I have:
if request.is_ajax():
    if "jsaving_draft" in request.body:
        results = json.loads(request.body)
        print results
        save_id = results['related_id']
        save_db = results['db_name']
        save_value = results['value']
        temp = Dbtable.objects.filter(id=save_id).update(save_db=save_value)
How can I specify which table column to update based on save_db, without hardcoding the column name? I have a table in my database named Dbtable.
I tried doing something like:
Dbtable.objects.filter(id=save_id).update("%s"=save_value) % save_db
but that failed spectacularly. Does anyone have an idea of how I can make this work?
You can use keyword argument unpacking:
Dbtable.objects.filter(id=save_id).update(**{save_db: save_value})
Example:
>>> def test(a,b):
... return a + b
...
>>> test(**{'a': 1, 'b': 2})
3
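One caveat: since save_db arrives straight from the client, it's worth whitelisting it before unpacking, so a request can't update arbitrary columns. A quick sketch (the set of allowed field names here is an assumption; list your real updatable columns):
ALLOWED_FIELDS = {'my_field'}  # columns clients may update (assumed)

if save_db in ALLOWED_FIELDS:
    Dbtable.objects.filter(id=save_id).update(**{save_db: save_value})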