I have some models for which I'd like to provide initial data. The problem is that there are several models, and I'd like to organize the data.
Currently I have one big JSON file, initial_data.json, with all the data. I was thinking I could use some comments, but JSON has no comments! I really want to use JSON.
So, the file looks like this:
[
  {
    "model": "app1.Model1",
    "pk": 1,
    "fields": {
      "nombre": "A convenir con el vendedor"
    }
  },
  // many more
  {
    "model": "app2.Model1",
    "pk": 1,
    "fields": {
      "nombre": "A convenir con el vendedor"
    }
  },
  // many more
  {
    "model": "app2.Model1",
    "pk": 1,
    "fields": {
      "nombre": "A convenir con el vendedor"
    }
  }
]
So I thought I could organize them into different files and load them with some init script. The idea is to avoid issuing several python manage.py loaddata thisApp.Model commands. But then it would be difficult to separate out the files that are not meant to be loaded at initial time.
Here is the file layout as an example:
+app1
    +fixtures
        model1.json
        model2.json
+app2
    +fixtures
        model1.json
        model2.json
+app3
    +fixtures
        model1.json
        model2.json
Do you have any idea how to keep this simple?
Like you said: create several files, write a script that combines them into initial_data.json, and invoke the needed django.core.management command. This is what I do.
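For reference, here is a minimal sketch of such a combine script. It is only a sketch: the glob pattern matches the layout shown in the question, each fixture file is assumed to hold a plain JSON list, and django.core.management.call_command only works once Django settings are configured (e.g. when run inside a custom management command).

import glob
import json

from django.core.management import call_command

# Collect every per-model fixture into one list of objects.
combined = []
for path in sorted(glob.glob("app*/fixtures/model*.json")):
    with open(path) as f:
        combined.extend(json.load(f))

with open("initial_data.json", "w") as out:
    json.dump(combined, out, indent=2)

# Equivalent to: python manage.py loaddata initial_data.json
call_command("loaddata", "initial_data.json")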
Call the files that contain initial data "initial_data.json": syncdb will load only those automatically. You can load the others manually with manage.py loaddata.
https://docs.djangoproject.com/en/dev/howto/initial-data/#automatically-loading-initial-data-fixtures
Related
I'm working with Django's JSONField and using django-entangled in a form. I need a data format like the one below. Any suggestions on how to achieve this?
[
  {
    "name": "Test 1",
    "roll": 1,
    "section": "A"
  },
  {
    "name": "Test 2",
    "roll": 2,
    "section": "A"
  }
]
With django-entangled this is not possible, because that library does not offer multiple forms of the same kind.
You can, however, try my newer form library, django-formset, which can handle multiple forms of the same kind.
I'm using the following spec in my code to generate experiments:
import ray
from ray.tune import grid_search, run_experiments

experiment_spec = {
    "test_experiment": {
        "run": "PPO",
        "env": "MultiTradingEnv-v1",
        "stop": {
            "timesteps_total": 1e6
        },
        "checkpoint_freq": 100,
        "checkpoint_at_end": True,
        "local_dir": '~/Documents/experiment/',
        "config": {
            "lr_schedule": grid_search(LEARNING_RATE_SCHEDULE),
            "num_workers": 3,
            'observation_filter': 'MeanStdFilter',
            'vf_share_layers': True,
            "env_config": {
            },
        }
    }
}

ray.init()
run_experiments(experiments=experiment_spec)
Note that I use grid_search to try various learning rates. The problem is that "lr_schedule" is defined as:
LEARNING_RATE_SCHEDULE = [
    [
        [0, 7e-5],  # [timestep, lr]
        [1e6, 7e-6],
    ],
    [
        [0, 6e-5],
        [1e6, 6e-6],
    ]
]
So when the experiment checkpoint is generated, it has a lot of [ characters in its path name, making the path unreadable to the interpreter. Like this:
~/Documents/experiment/PPO_MultiTradingEnv-v1_0_lr_schedule=[[0, 7e-05], [3500000.0, 7e-06]]_2019-08-14_20-10-100qrtxrjm/checkpoint_40
The logical solution is to rename it manually, but I discovered that its name is referenced in other files like experiment_state.json, so the better solution is to set a custom experiment path and name.
I didn't find anything in the documentation.
This is my project if it helps
Can someone help?
Thanks in advance
You can set custom trial names - https://ray.readthedocs.io/en/latest/tune-usage.html#custom-trial-names. Let me know if that works for you.
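Roughly, per that doc page, something like the sketch below; trial_name_string is an illustrative name, and the trial_name_creator hook's exact signature has varied across Ray versions, so check the page above for your version:

from ray import tune

def trial_name_string(trial):
    # Keep the grid-searched config (and its brackets) out of the path:
    # use only the trainable name and the unique trial id.
    return "{}_{}".format(trial.trainable_name, trial.trial_id)

tune.run(
    "PPO",
    config={"env": "MultiTradingEnv-v1"},
    trial_name_creator=trial_name_string,
)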
I'm working with Restless, and as stated in the documentation, returning Model.objects.all() produces something like this:
{
  "objects": [
    {
      "id": 1,
      "title": "First Post!",
      "author": "daniel",
      "body": "This is the very first post on my shiny-new blog platform...",
      "posted_on": "2014-01-12T15:23:46",
    },
    {
      # More here...
    }
  ]
}
This works fine. However, I don't want the "objects" wrapper to be here. My front-end code expects an array.
Is there any way of telling Restless not to wrap the array?
You can do this by overriding the Resource.wrap_list_response() method. The default implementation just wraps the data in a dictionary (under the objects key); you can modify it to return the data unchanged.
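For example, a minimal sketch using Restless's Django integration (PostResource is a made-up resource name):

from restless.dj import DjangoResource

class PostResource(DjangoResource):
    def wrap_list_response(self, data):
        # The default implementation returns {"objects": data};
        # return the bare list instead, so the front end gets an array.
        return data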
All-new jsfiddle: http://jsfiddle.net/vJxvc/2/
Currently, I query an API that returns JSON like this. The API cannot be changed for now, which is why I need to work around that.
[
{"timestamp":1406111961, "values":[1236.181, 1157.695, 698.231]},
{"timestamp":1406111970, "values":[1273.455, 1153.577, 693.591]}
]
(could be a lot more lines, of course)
As you can see, each line has a timestamp and then an array of values. My problem is that I would actually like to transpose that. Looking at the first line alone:
{"timestamp":1406111961, "values":[1236.181, 1157.695, 698.231]}
It contains a few measurements taken at the same time. This would need to become the following in my Ember project:
{
  "sensor_id": 1, // can be derived from the array index
  "timestamp": 1406111961,
  "value": 1236.181
},
{
  "sensor_id": 2,
  "timestamp": 1406111961,
  "value": 1157.695
},
{
  "sensor_id": 3,
  "timestamp": 1406111961,
  "value": 698.231
}
And those values would have to be pushed into the respective sensor models.
The transformation itself is trivial, but I have no idea where I would put it in Ember and how I could alter many Ember models at the same time.
You could make your model an array and override the normalize method on your adapter. The normalize method is where you do the transformation, and since your JSON is an array, an Ember.Array as a model would work.
I am not an Ember pro, but looking at the manual I would think of something like this:
var a = [
    {"timestamp": 1406111961, "values": [1236.181, 1157.695, 693.591]},
    {"timestamp": 1406111970, "values": [1273.455, 1153.577, 693.591]}
];
var b = [];
a.forEach(function(item) {
    // The position of each value inside "values" identifies the sensor.
    item.values.forEach(function(value, index) {
        b.push({
            sensor_id: index + 1, // your example ids start at 1, not 0
            timestamp: item.timestamp,
            value: value
        });
    });
});
console.log(b);
Example http://jsfiddle.net/kRUV4/
Update
Just saw your jsfiddle... You can get the store like this: How to get Ember Data's "store" from anywhere in the application so that I can do store.find()?
I'm trying to load around 30k XML files from clinicaltrials.gov into a MySQL database, and the way I am handling multiple locations, keywords, etc. is with separate models linked via ManyToManyFields.
The best way I've figured out is to read the data in using a fixture. So my question is: how do I handle the fields where the data is a pointer to another model?
Unfortunately, I don't know enough about how ManyToManyFields/ForeignKeys work to be able to answer this myself...
Thanks for the help. Sample code below; the ______ blanks represent the ManyToMany fields:
{
  "pk": trial_id,
  "model": "trials.trial",
  "fields": {
    "trial_id": trial_id,
    "brief_title": brief_title,
    "official_title": official_title,
    "brief_summary": brief_summary,
    "detailed_Description": detailed_description,
    "overall_status": overall_status,
    "phase": phase,
    "enrollment": enrollment,
    "study_type": study_type,
    "condition": _______________,
    "elligibility": elligibility,
    "Criteria": ______________,
    "overall_contact": _______________,
    "location": ___________,
    "lastchanged_date": lastchanged_date,
    "firstreceived_date": firstreceived_date,
    "keyword": __________,
    "condition_mesh": condition_mesh,
  }
}
A foreign key is simply the pk of the object you are linking to; a ManyToManyField uses a list of pks. So:
[
  {
    "pk": 1,
    "model": "farm.fruit",
    "fields": {
      "name": "Apple",
      "color": "Green"
    }
  },
  {
    "pk": 2,
    "model": "farm.fruit",
    "fields": {
      "name": "Orange",
      "color": "Orange"
    }
  },
  {
    "pk": 3,
    "model": "person.farmer",
    "fields": {
      "name": "Bill",
      "favorite": 1,
      "likes": [1, 2]
    }
  }
]
You will probably need to write a conversion script to get this done. Fixtures can be very flimsy; it's difficult to get them working, so experiment with a subset before you spend a lot of time converting the 30k records (only to find they might not import).
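As a starting point, here is a rough sketch of what such a conversion script could emit. All names and pk tables here are hypothetical; it assumes you convert the related condition/keyword rows first, so their pks are known when writing the trial fixtures:

import json

# Hypothetical pk lookup tables built while converting the related models.
condition_pks = {"Diabetes": 1, "Hypertension": 2}
keyword_pks = {"insulin": 1}

def trial_to_fixture(trial_id, title, conditions, keywords):
    # ForeignKeys are written as a single pk;
    # ManyToMany fields are written as lists of pks.
    return {
        "model": "trials.trial",
        "pk": trial_id,
        "fields": {
            "brief_title": title,
            "condition": [condition_pks[c] for c in conditions],
            "keyword": [keyword_pks[k] for k in keywords],
        },
    }

fixture = [trial_to_fixture(1, "Example trial", ["Diabetes"], ["insulin"])]
with open("trials_subset.json", "w") as out:
    json.dump(fixture, out, indent=2)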