Problems combining (union) Multipolygons in geodjango - django

I'm using GeoDjango and PostGIS (1.x). What is the best way to combine (union) a list of multipolygons?
In what I assume is a rather inefficient approach, I'm looping through like this:
combined = multipolygon
for item in items:
    combined = combined.union(item.geom)  # geom is a multipolygon
Usually this works fine, but often I get the error: Error encountered checking Geometry returned from GEOS C function "GEOSUnion_r".
Here is the GeoJSON version of the item the error is thrown on, if it helps:
{ "type": "MultiPolygon", "coordinates":
[ [ [ [ -80.077576, 26.572225 ],
[ -80.037729, 26.571180 ],
[ -80.080279, 26.273744 ],
[ -80.147464, 26.310066 ],
[ -80.152851, 26.455851 ],
[ -80.138560, 26.538013 ],
[ -80.077576, 26.572225 ]
] ] ]
}
Does anyone have any ideas? The end goal is to find all the locations (in another table) which fall within this list of n polygons (using coordinates__within=combined_area).
Also, the polygons show up fine on the maps in the GeoDjango admin.

You can always use the Union aggregate method. That should be a bit more efficient because everything is computed at the database level, which means you don't have to loop over things in Python.
combined_area = FooModel.objects.filter(...).aggregate(area=Union('geom'))['area']
final = BarModel.objects.filter(coordinates__within=combined_area)
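For completeness, here is a minimal sketch with the import this relies on; it assumes a GeoDjango version where the Union aggregate is importable from django.contrib.gis.db.models, and FooModel/BarModel and the field names are just the placeholders from above:

from django.contrib.gis.db.models import Union

combined_area = (
    FooModel.objects
    .filter()  # add your filter conditions here
    .aggregate(area=Union('geom'))['area']
)
final = BarModel.objects.filter(coordinates__within=combined_area)

If the error still appears, it is often a sign that one of the input geometries is invalid; a common (if blunt) GEOS-level workaround is to clean each geometry with geom.buffer(0) before unioning.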


How to change the experiment file path generated when running Ray's run_experiments()?

I'm using the following spec in my code to generate experiments:
import ray
from ray.tune import grid_search, run_experiments

experiment_spec = {
    "test_experiment": {
        "run": "PPO",
        "env": "MultiTradingEnv-v1",
        "stop": {
            "timesteps_total": 1e6
        },
        "checkpoint_freq": 100,
        "checkpoint_at_end": True,
        "local_dir": '~/Documents/experiment/',
        "config": {
            "lr_schedule": grid_search(LEARNING_RATE_SCHEDULE),
            "num_workers": 3,
            'observation_filter': 'MeanStdFilter',
            'vf_share_layers': True,
            "env_config": {
            },
        }
    }
}
ray.init()
run_experiments(experiments=experiment_spec)
Note that I use grid_search to try various learning rates. The problem is "lr_schedule" is defined as:
LEARNING_RATE_SCHEDULE = [
    [
        [0, 7e-5],  # [timestep, lr]
        [1e6, 7e-6],
    ],
    [
        [0, 6e-5],
        [1e6, 6e-6],
    ]
]
So when the experiment checkpoint is generated, it has a lot of [ characters in its path name, making the path unreadable to the interpreter, like this:
~/Documents/experiment/PPO_MultiTradingEnv-v1_0_lr_schedule=[[0, 7e-05], [3500000.0, 7e-06]]_2019-08-14_20-10-100qrtxrjm/checkpoint_40
The logical solution is to manually rename it, but I discovered that its name is referenced in other files like experiment_state.json, so the best solution is to set a custom experiment path and name.
I didn't find anything in the documentation.
This is my project if it helps
Can someone help?
Thanks in advance
You can set custom trial names - https://ray.readthedocs.io/en/latest/tune-usage.html#custom-trial-names. Let me know if that works for you.
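For reference, here is a rough sketch of what that docs page describes, switching from run_experiments to tune.run; the trial_name_creator argument and the helper function are assumptions taken from that page and may need adjusting for your Ray version (older releases required wrapping the callable in tune.function):

from ray import tune

def trial_name_string(trial):
    # Short, bracket-free directory name built from the trial id
    # instead of the full grid-searched config.
    return "{}_{}".format(trial.trainable_name, trial.trial_id)

tune.run(
    "PPO",
    name="test_experiment",
    local_dir="~/Documents/experiment/",
    stop={"timesteps_total": 1e6},
    config=experiment_spec["test_experiment"]["config"],  # reuse the config above
    checkpoint_freq=100,
    checkpoint_at_end=True,
    trial_name_creator=trial_name_string,
)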

How to read recursive json data with json spirit

I have a recursive JSON file in the format shown below; it has two parts, condition and action. The condition part can contain nested root and leaves pairs, and each leaves array can itself contain further values. I'm having trouble handling this data structure with json-spirit. Has anyone had the same issue and solved it, or does anyone have a clue? I would appreciate it.
Thanks
{
    "condition": {
        "root": "&",
        "leaves": [ "A",
            { "root": "|",
              "leaves": ["p","r"]
            }
        ]
    },
    "action": ["a=7","event B"]
}
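To make the recursion I need to handle concrete, here is a purely illustrative sketch in Python (not json-spirit; with json-spirit the same shape would come back through its nested object/array value types) of walking the condition tree:

import json

def walk_condition(node):
    # A node is either a plain leaf string, or a dict with a "root"
    # operator and a list of "leaves" that may themselves be nodes.
    if isinstance(node, str):
        return node
    op = node["root"]  # "&" or "|"
    parts = [walk_condition(leaf) for leaf in node["leaves"]]
    return "(" + (" %s " % op).join(parts) + ")"

rule = json.loads('''
{ "condition": { "root": "&",
                 "leaves": [ "A", { "root": "|", "leaves": ["p", "r"] } ] },
  "action": ["a=7", "event B"] }
''')

print(walk_condition(rule["condition"]))  # (A & (p | r))
print(rule["action"])                     # ['a=7', 'event B']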
I don't know json-spirit. Do you absolutely need to use it?
If not, you may try this: https://github.com/Rel4X/HandyJson
It's really easy to use (and I'd love some tests \o/).
Rel4x

Nagiosgraph rrd files not created (maybe because of map file)

I'm having a problem with Nagiosgraph. I have created a Nagios check which monitors the traffic on a server/workstation through SNMP, and the output of the check is a long string that looks like this:
OK - traffmon eth0:incoming:170KB:outgoing:1606KB eth1:incoming:1576KB:outgoing:170KB eth2:incoming:156:outgoing:0|lo;incoming;25;outgoing;25 tunl0;incoming;0;outgoing;0 gre0;incoming;0;outgoing;0 sit0;incoming;0;outgoing;0 eth0;incoming;170KB;outgoing;1606KB eth1;incoming;1576KB;outgoing;170KB eth2;incoming;156;outgoing;0
I'm interested in the first three interfaces, which is why I've separated eth0, eth1, and eth2 from the whole string of interfaces (which I treated as performance data). I followed the instructions on http://www.novell.com/coolsolutions/feature/19843.html and I have this in my service.cfg:
define serviceextinfo{
    host_name            workstation
    service_description  Throughput Monitor
    action_url           /nagiosgraph/cgi-bin/show.cgi?host=$HOSTNAME$&service=$SERVICEDESC$&db=eth0,incoming,outgoing,&geom=500x100&rrdopts%3D-l%200%20-u%2010000%20-t%20Traffic
}
and in my map file I have written this to match the parts that interest me:
/output:.*traffmon ([0-9]+), ([0-9]+), ([0-9]+), ([0-9]+), ([0-9]+), ([0-9]+), ([0-9]+), ([0-9]+), ([0-9]+)/
and push @s, [ 'eth0',
               ['incoming', 'GAUGE', $2],
               ['outgoing', 'GAUGE', $3] ],
             [ 'eth1',
               ['incoming', 'GAUGE', $5],
               ['outgoing', 'GAUGE', $6] ],
             [ 'eth2',
               ['incoming', 'GAUGE', $8],
               ['outgoing', 'GAUGE', $9] ];
I wanted to create three tables (eth0, eth1, eth2) with two columns (incoming, outgoing) and from there try to represent them nicely. The thing is that my rrd files usually get created automatically, but for this check the folder with the workstation's name doesn't get created inside the rrd folder, and neither are the .rrd files. I have the feeling it has something to do with the map file; maybe the matching is not working or something (I'm saying this because I don't know Perl). Any suggestion is appreciated. Thank you.
You can try this regex:
/traffmon eth0:incoming:(\d+)(?:KB):outgoing:(\d+)(?:KB) eth1:incoming:(\d+)(?:KB):outgoing:(\d+)(?:KB) eth2:incoming:(\d+):outgoing:(\d+)/
You can test it on rubular: http://rubular.com/r/vj7VXwDPPU
I'm not familiar with how your Nagios system works, but if there is room for more Perl code, you could also do something like:
use Data::Dumper;

my $res = 'OK - traffmon eth0:incoming:170KB:outgoing:1606KB eth1:incoming:1576KB:outgoing:170KB eth2:incoming:156:outgoing:0|lo;incoming;25;outgoing;25 tunl0;incoming;0;outgoing;0 gre0;incoming;0;outgoing;0 sit0;incoming;0;outgoing;0 eth0;incoming;170KB;outgoing;1606KB eth1;incoming;1576KB;outgoing;170KB eth2;incoming;156;outgoing;0';
my @s;
push @s, map {
    my @f = split /:/;
    [ $f[0], [$f[1], 'GAUGE', $f[2]], [$f[3], 'GAUGE', $f[4]] ]
} (split(/ |\|/, $res))[3..5];
print Dumper @s;
This splits the string at a space or a pipe |, takes elements 3 through 5 of the result (zero-based, i.e. the three eth interfaces), and then loops over them. For each interface it splits on colon :, builds your data structure, and returns it. The returned data structures are pushed into @s.
Output:
$VAR1 = [
          'eth0',
          [
            'incoming',
            'GAUGE',
            '170KB'
          ],
          [
            'outgoing',
            'GAUGE',
            '1606KB'
          ]
        ];
$VAR2 = [
          'eth1',
          [
            'incoming',
            'GAUGE',
            '1576KB'
          ],
          [
            'outgoing',
            'GAUGE',
            '170KB'
          ]
        ];
$VAR3 = [
          'eth2',
          [
            'incoming',
            'GAUGE',
            '156'
          ],
          [
            'outgoing',
            'GAUGE',
            '0'
          ]
        ];

How do Django Fixtures handle ManyToManyFields?

I'm trying to load around 30k XML files from clinicaltrials.gov into a MySQL database, and I am handling multiple locations, keywords, etc. in separate models using ManyToManyFields.
The best way I've figured out is to read the data in using a fixture. So my question is, how do I handle the fields where the data is a pointer to another model?
I unfortunately don't know enough about how ManyToMany/ForeignKey fields work to be able to answer this myself...
Thanks for the help. Sample code is below; the blanks (______) represent the ManyToMany fields:
{
    "pk": trial_id,
    "model": trials.trial,
    "fields": {
        "trial_id": trial_id,
        "brief_title": brief_title,
        "official_title": official_title,
        "brief_summary": brief_summary,
        "detailed_Description": detailed_description,
        "overall_status": overall_status,
        "phase": phase,
        "enrollment": enrollment,
        "study_type": study_type,
        "condition": _______________,
        "elligibility": elligibility,
        "Criteria": ______________,
        "overall_contact": _______________,
        "location": ___________,
        "lastchanged_date": lastchanged_date,
        "firstreceived_date": firstreceived_date,
        "keyword": __________,
        "condition_mesh": condition_mesh,
    }
}
A foreign key is simply the pk of the object you are linking to; a ManyToManyField uses a list of pks. So:
[
    {
        "pk": 1,
        "model": "farm.fruit",
        "fields": {
            "name": "Apple",
            "color": "Green"
        }
    },
    {
        "pk": 2,
        "model": "farm.fruit",
        "fields": {
            "name": "Orange",
            "color": "Orange"
        }
    },
    {
        "pk": 3,
        "model": "person.farmer",
        "fields": {
            "name": "Bill",
            "favorite": 1,
            "likes": [1, 2]
        }
    }
]
You will probably need to write a conversion script to get this done. Fixtures can be very flimsy; it's difficult to get them working, so experiment with a subset before you spend a lot of time converting the 30k records (only to find they might not import).
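For example, a small conversion script can build the fixture programmatically and let the json module handle the formatting; the model and field names below just mirror the farm/person example above and are purely illustrative:

import json

fruits = [
    {"pk": 1, "model": "farm.fruit", "fields": {"name": "Apple", "color": "Green"}},
    {"pk": 2, "model": "farm.fruit", "fields": {"name": "Orange", "color": "Orange"}},
]
farmers = [
    {
        "pk": 3,
        "model": "person.farmer",
        "fields": {
            "name": "Bill",
            "favorite": 1,    # ForeignKey: a single pk
            "likes": [1, 2],  # ManyToManyField: a list of pks
        },
    },
]

with open("farm_fixture.json", "w") as f:
    json.dump(fruits + farmers, f, indent=2)

# Load it with: python manage.py loaddata farm_fixture.json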

Writing a GUI to display statistics

I'm working with a hardware simulator for a project. It outputs statistics at the end in a very structured but ugly way. It can be tiresome to read so I would like to write a GUI to help me display it better. Would anybody have an idea of what framework and widgets I could use to quickly and painlessly construct something clean? I would like to be able to navigate the subnodes of the tree and hide (collapse) nodes I'm not interested in.
The statistics output takes a form like this:
root {
  foo = "bar";
  foo_num = 1;
  machine {
    core0 {
      fetch {
        renamed {
          none = 13559;
          flags = 3013;
          reg_and_flags = 10735;
          reg = 8430;
        }
        width[5] = {
          Minimum: 381
          Maximum: 17450
          Average: 1.248
          Total Sum: 28627
          Weighted Sum: 35737
          Threshold: 3
          [ 61.0% ] [ 61.0% ] 0 0 17450 ******************************
          [ 1.3% ] [ 62.3% ] 1 1 381
          [ 12.1% ] [ 74.4% ] 2 2 3476 ******
          [ 3.1% ] [ 77.5% ] 3 3 876 *
          [ 22.5% ] [ 100% ] 4 4 6444 ***********
        };
        status (total 57920) {
          [ 0.0% ] rob_full = 0; { (zero) }
          [ 35.9% ] ldq_full = 20789;
          [ 2.4% ] fetchq_empty = 1394;
          [ 0.0% ] physregs_full = 0; { (zero) }
          [ 61.7% ] complete = 35737;
          [ 0.0% ] stq_full = 0; { (zero) }
        }
      }
    }
  }
There is already a parser that creates a kind of tree from a binary file; it is written in C++, so perhaps it is better if I choose a framework for that language. An alternative would be to generate XML output and then use another language to process the information.
I'm not very experienced with visual programming and I don't really know what kind of widgets are available. Any suggestions and pointers would be appreciated.
When I'm just trying to display some information and I don't really need interaction, I sometimes make the program output a simple HTML page. It's fast, and it is trivial to do things like tables and images (in virtually any format). If you need graphs, there are web APIs like Google's chart API.
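As a concrete illustration of that approach (assuming you take the "generate XML output and process it in another language" route you mention; the element structure and file names here are made up), a few lines of Python can turn such a tree into a collapsible HTML page using <details>/<summary>:

import xml.etree.ElementTree as ET

def node_to_html(node):
    # Leaves become plain list items; nested nodes become collapsible
    # <details> blocks so uninteresting subtrees can be folded away.
    children = list(node)
    if not children:
        return "<li>{} = {}</li>".format(node.tag, (node.text or "").strip())
    inner = "".join(node_to_html(child) for child in children)
    return ("<li><details open><summary>{}</summary><ul>{}</ul></details></li>"
            .format(node.tag, inner))

tree = ET.parse("stats.xml")  # hypothetical XML export of the simulator stats
html = "<html><body><ul>{}</ul></body></html>".format(node_to_html(tree.getroot()))
with open("stats.html", "w") as out:
    out.write(html)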
I'd recommend boost::spirit::qi for parsing, and Qt + QWT for graphics. They are all C++. QWT (which is based on Qt) has multiple convenient graph widgets out of the box.
spirit: http://www.boost.org/doc/libs/1_46_0/libs/spirit/doc/html/spirit/introduction.html
Qt: http://qt.nokia.com/products
QWT: http://qwt.sourceforge.net/
EDIT
More specifically:
Tree view: http://doc.qt.nokia.com/latest/qtreeview.html
Histograms: http://qwt.sourceforge.net/class_qwt_plot_histogram.html
It's all pretty simple to use; check out the samples to find out exactly how it is done.
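If you do end up exporting the stats and prototyping outside C++, the same tree-view idea is also available from Python through the Qt bindings; this tiny PyQt5 sketch (the widget classes are the real Qt ones, the data is hard-coded just to show the shape of the approach) builds a collapsible tree like the one QTreeView would give you in C++:

import sys
from PyQt5.QtWidgets import QApplication, QTreeWidget, QTreeWidgetItem

app = QApplication(sys.argv)
tree = QTreeWidget()
tree.setHeaderLabels(["stat", "value"])

# Hard-coded sample nodes mirroring part of the stats dump above.
root = QTreeWidgetItem(tree, ["root", ""])
fetch = QTreeWidgetItem(root, ["fetch", ""])
renamed = QTreeWidgetItem(fetch, ["renamed", ""])
QTreeWidgetItem(renamed, ["none", "13559"])
QTreeWidgetItem(renamed, ["flags", "3013"])

tree.expandAll()  # nodes can still be collapsed interactively
tree.show()
sys.exit(app.exec_())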