Writing a GUI to display statistics - C++

I'm working with a hardware simulator for a project. It outputs statistics at the end in a very structured but ugly way. It can be tiresome to read so I would like to write a GUI to help me display it better. Would anybody have an idea of what framework and widgets I could use to quickly and painlessly construct something clean? I would like to be able to navigate the subnodes of the tree and hide (collapse) nodes I'm not interested in.
The statistics output takes a form like this:
root {
  foo = "bar";
  foo_num = 1;
  machine {
    core0 {
      fetch {
        renamed {
          none = 13559;
          flags = 3013;
          reg_and_flags = 10735;
          reg = 8430;
        }
        width[5] = {
          Minimum: 381
          Maximum: 17450
          Average: 1.248
          Total Sum: 28627
          Weighted Sum: 35737
          Threshold: 3
          [ 61.0% ] [ 61.0% ] 0 0 17450 ******************************
          [ 1.3% ] [ 62.3% ] 1 1 381
          [ 12.1% ] [ 74.4% ] 2 2 3476 ******
          [ 3.1% ] [ 77.5% ] 3 3 876 *
          [ 22.5% ] [ 100% ] 4 4 6444 ***********
        };
        status (total 57920) {
          [ 0.0% ] rob_full = 0; { (zero) }
          [ 35.9% ] ldq_full = 20789;
          [ 2.4% ] fetchq_empty = 1394;
          [ 0.0% ] physregs_full = 0; { (zero) }
          [ 61.7% ] complete = 35737;
          [ 0.0% ] stq_full = 0; { (zero) }
        }
      }
    }
  }
}
There is already a parser that creates a kind of tree from a binary file; it is written in C++, so perhaps it is better if I choose a framework for this language. An alternative would be to generate XML output and then use another language to process the information.
I'm not very experienced with visual programming and I don't really know what kind of widgets are available. Any suggestions and pointers would be appreciated.

When I'm just trying to display some information, and I don't really need interaction, I sometimes make the program output a simple HTML page. It's fast, and it is trivial to do things like tables and images (in virtually any format). If you need graphs, there are web APIs like Google's Chart API.
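As a rough sketch of that approach (the file name is hypothetical and the hard-coded names/values are copied from the sample output; a real version would walk the tree produced by your existing parser):
#include <fstream>

// Sketch only: dump one statistics group as an HTML table.
// The names and values come from the "renamed" node in the sample output;
// a real version would iterate over the parser's tree instead.
int main() {
    std::ofstream out("stats.html");
    out << "<html><body><h1>machine.core0.fetch.renamed</h1>\n"
        << "<table border=\"1\"><tr><th>Statistic</th><th>Value</th></tr>\n";

    const char *names[]  = { "none", "flags", "reg_and_flags", "reg" };
    const int   values[] = { 13559, 3013, 10735, 8430 };
    for (int i = 0; i < 4; ++i)
        out << "<tr><td>" << names[i] << "</td><td>" << values[i] << "</td></tr>\n";

    out << "</table></body></html>\n";
    return 0;
}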

I'd recommend boost::spirit::qi for parsing, and Qt + QWT for graphics. They are all C++. QWT (which is based on Qt) has multiple convenient graph widgets out of the box.
spirit: http://www.boost.org/doc/libs/1_46_0/libs/spirit/doc/html/spirit/introduction.html
Qt: http://qt.nokia.com/products
QWT: http://qwt.sourceforge.net/
EDIT
More specifically:
Tree view: http://doc.qt.nokia.com/latest/qtreeview.html
Histograms: http://qwt.sourceforge.net/class_qwt_plot_histogram.html
It's all pretty simple to use; check out the samples to find out exactly how it is done.
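For the tree part specifically, a minimal sketch with QTreeWidget might look like the following (items are hard-coded here from the sample output; in practice you would create them by walking the tree your parser already builds):
#include <QApplication>
#include <QStringList>
#include <QTreeWidget>

// Sketch only: a two-column, collapsible tree of statistic nodes.
int main(int argc, char *argv[]) {
    QApplication app(argc, argv);

    QTreeWidget tree;
    tree.setColumnCount(2);
    tree.setHeaderLabels(QStringList() << "Statistic" << "Value");

    QTreeWidgetItem *root    = new QTreeWidgetItem(&tree,   QStringList("root"));
    QTreeWidgetItem *machine = new QTreeWidgetItem(root,    QStringList("machine"));
    QTreeWidgetItem *core0   = new QTreeWidgetItem(machine, QStringList("core0"));
    QTreeWidgetItem *fetch   = new QTreeWidgetItem(core0,   QStringList("fetch"));
    QTreeWidgetItem *renamed = new QTreeWidgetItem(fetch,   QStringList("renamed"));
    new QTreeWidgetItem(renamed, QStringList() << "none"  << "13559");
    new QTreeWidgetItem(renamed, QStringList() << "flags" << "3013");

    tree.expandAll();  // every node can still be collapsed by clicking its arrow
    tree.show();
    return app.exec();
}
QTreeView with a custom QAbstractItemModel is the more scalable route once the parser's tree gets large, but QTreeWidget is the quickest way to get something on screen.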

Related

Adding classes to headers with a Pandoc lua filter

I'm putting together a Quarto template for Springer Nature journals. The LaTeX document class has some frustrating aspects that I'm trying to circumvent with lua filters so the template can be used as much as possible with vanilla markdown. One such filter (inspired by this discussion) takes any heading with the class {.backmatter} and for the PDF format, uses \bmhead* instead of \section; this bit works as expected:
local backmatter_inserted = false

function Header(el)
  if quarto.doc.isFormat("pdf") then
    if not backmatter_inserted and el.classes:includes("backmatter") then
      backmatter_inserted = true
      el.content = pandoc.utils.stringify(el.content)
      return {
        pandoc.RawBlock('tex', '\\backmatter'),
        pandoc.RawBlock('tex', '\\bmhead*{' .. el.content .. '}')
      }
    elseif el.classes:includes("backmatter") then
      el.content = pandoc.utils.stringify(el.content)
      return pandoc.RawBlock('tex', '\\bmhead*{' .. el.content .. '}')
    end
  elseif quarto.doc.isFormat("html") and el.classes:includes("backmatter") then
    el.classes:insert("appendix")
    el.classes:insert("unnumbered")
    return el
  end
end
However, the HTML portion of the filter does not appear to be working as intended. For the HTML output, I'd like to tuck the content under these headings away, unnumbered, in the appendix, e.g. by adding the classes "appendix" and "unnumbered". Neither of these classes currently appears to be added when running this filter.
I tried adding the classes in the md input and it works as expected, with the heading appearing in the appendix unnumbered:
# Heading {.backmatter .unnumbered .appendix}
Sample text
Running pandoc native, this appears as:
Header
  3
  ( "heading" , [ "backmatter" , "unnumbered" , "appendix" ] , [] )
  [ Str "Heading" ]
With only {.backmatter} as an input class and running the filter, it appears as:
Header
  3
  ( "heading" , [ "backmatter" ] , [] )
  [ Str "Heading" ]
I've read somewhere about a change in how Pandoc handles section classes: am I missing something here?

Understanding the GN build system in Fuchsia OS, what is `build_api_module`?

GN stands for Generate Ninja. It generates Ninja files, which build things. The main file is BUILD.gn at the root of the Fuchsia source tree.
It contains a lot of build_api_module calls:
build_api_module("images") {
testonly = true
data_keys = [ "images" ]
deps = [
# XXX(46415): as the build is specialized by board (bootfs_only)
# for bringup, it is not possible for this to be complete. As this
# is used in the formation of the build API with infrastructure,
# and infrastructure assumes that the board configuration modulates
# the definition of `zircon-a` between bringup/non-bringup, we can
# not in fact have a complete description. See the associated
# conditional at this group also.
"build/images",
# This has the images referred to by $qemu_kernel_label entries.
"//build/zircon/zbi_tests",
]
}
However, it's unclear to me what this does exactly. Looking at its definition in build/config/build_api_module.gn, for example:
template("build_api_module") {
if (current_toolchain == default_toolchain) {
generated_file(target_name) {
outputs = [ "$root_build_dir/$target_name.json" ]
forward_variables_from(invoker,
[
"contents",
"data_keys",
"deps",
"metadata",
"testonly",
"visibility",
"walk_keys",
"rebase",
])
output_conversion = "json"
metadata = {
build_api_modules = [ target_name ]
if (defined(invoker.metadata)) {
forward_variables_from(invoker.metadata, "*", [ "build_api_modules" ])
}
}
}
} else {
not_needed([ "target_name" ])
not_needed(invoker, "*")
}
}
It looks like it simply generates a file.
Can someone explain to me how build_api_module("images") ends up building all the zircon kernel images?
The build_api_module() targets generate JSON files that describe something about the current build system configuration. These files are typically consumed by other tools (in some cases as dependencies of other build rules) that need to know about the current build.
One example is the tests target which generates the tests.json file. This file is used by fx test to determine which tests are available and match the test name you provide to the component URL to invoke.
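To make the collection mechanism concrete: generated_file() with data_keys = [ "images" ] walks the metadata of everything reachable through deps, gathers every list named images that it finds, and writes the combined list to $root_build_dir/images.json. A target somewhere under //build/images might contribute an entry roughly like this (a hypothetical sketch; the actual entry schema is defined by Fuchsia's image rules):
# Hypothetical sketch of a contributing target; field names are illustrative only.
group("zircon_a_image") {
  deps = [ ":build_the_zbi" ]
  metadata = {
    # Picked up by build_api_module("images") via data_keys = [ "images" ].
    images = [
      {
        name = "zircon-a"
        path = "obj/build/images/zircon-a.zbi"
        type = "zbi"
      },
    ]
  }
}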
Can someone explain to me how build_api_module("images") ends up building all the zircon kernel images?
It doesn't. These targets are descriptive of the current build configuration; they are not prescriptive of what artifacts the build generates. In this specific case, the images.json file is typically used by tools like FEMU and ffx to determine what system images to use on a target device.

How to change the experiment file path generated when running Ray's run_experiments()?

I'm using the following spec on my code to generate experiments:
experiment_spec = {
    "test_experiment": {
        "run": "PPO",
        "env": "MultiTradingEnv-v1",
        "stop": {
            "timesteps_total": 1e6
        },
        "checkpoint_freq": 100,
        "checkpoint_at_end": True,
        "local_dir": '~/Documents/experiment/',
        "config": {
            "lr_schedule": grid_search(LEARNING_RATE_SCHEDULE),
            "num_workers": 3,
            'observation_filter': 'MeanStdFilter',
            'vf_share_layers': True,
            "env_config": {
            },
        }
    }
}
ray.init()
run_experiments(experiments=experiment_spec)
Note that I use grid_search to try various learning rates. The problem is "lr_schedule" is defined as:
LEARNING_RATE_SCHEDULE = [
    [
        [0, 7e-5],  # [timestep, lr]
        [1e6, 7e-6],
    ],
    [
        [0, 6e-5],
        [1e6, 6e-6],
    ]
]
So when the experiment checkpoint is generated, it has a lot of [ characters in its path name, making the path unreadable to the interpreter, like this:
~/Documents/experiment/PPO_MultiTradingEnv-v1_0_lr_schedule=[[0, 7e-05], [3500000.0, 7e-06]]_2019-08-14_20-10-100qrtxrjm/checkpoint_40
The logical solution would be to rename it manually, but I discovered that its name is referenced in other files like experiment_state.json, so the best solution is to set a custom experiment path and name.
I didn't find anything in the documentation.
This is my project if it helps
Can someone help?
Thanks in advance
You can set custom trial names - https://ray.readthedocs.io/en/latest/tune-usage.html#custom-trial-names. Let me know if that works for you.
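For reference, a rough sketch of what that could look like with tune.run and a trial_name_creator (the naming scheme is just an example; older Ray versions may require wrapping the callable in tune.function, and Trial attribute names can vary between releases):
from ray import tune

# Sketch only: give each trial a short, bracket-free directory name instead of
# the default name built from the full (nested) lr_schedule value.
def trial_name_string(trial):
    # trial.trainable_name is e.g. "PPO"; trial.trial_id is a short unique id.
    return "{}_{}".format(trial.trainable_name, trial.trial_id)

tune.run(
    "PPO",
    name="test_experiment",
    stop={"timesteps_total": 1e6},
    checkpoint_freq=100,
    checkpoint_at_end=True,
    local_dir="~/Documents/experiment/",
    config={
        "env": "MultiTradingEnv-v1",
        "lr_schedule": tune.grid_search(LEARNING_RATE_SCHEDULE),  # as defined in the question
        "num_workers": 3,
    },
    trial_name_creator=trial_name_string,
)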

MongoDB query - Aggregation $cond if condition implementation in C++ code

I have a MongoDB query as below:
db.getCollection('ABC_COLLECTION_01').aggregate([
    { "$group" : {
        "_id" : 0,
        "total" : { "$sum" : "$columA" },
        "total_sub" : { "$sum" : { $cond: { if: { $gte: [ "$columnB", new ISODate("2018-01-01T04:58:09.000+0100") ] }, then: "$columA", else: 0 } } }
    } }
])
This query is working fine. If I run it on ABC_COLLECTION_01, it will return a result like this (just as an example):
total = 250 & total_sub = 120
Now I have to write this query in C++ code using mongo::BSONArrayBuilder, as below:
//Calculate 2 sum
aBuilder.append(BSON("$group" << BSON("_id" << 0
    << "total" << BSON("$sum" << '$' + columA)
    << "total_sub" << BSON("$sum" << '$cond' << 'if' << '$' + columB << "$gte"
                           << rmsmongo::utils::Adaptor::ToMongoDate(StartTime, true)
                           << 'then' << '$' + columA << 'else' << 0))));
mongo::BSONArray New_AggregationQuery = aBuilder.arr();
std::auto_ptr<dsclient::Cursor> Cursor = _MyCollection.aggregate(New_AggregationQuery,
    dsclient::Option_aggregate().retry(dsclient::Retry(dsclient::ExponentialRetry, 2)).maxTimeMS(200000));
As you can see, the $cond I have written for total_sub in the C++ code is wrong - it is not working.
Can you please help me to get it corrected?
Thanks in advance
Please note that the legacy C++ driver which you appear to be using here is end-of-life. I cannot recommend strongly enough that you immediately switch to the new mongocxx driver, found on the master branch of the same repo. It offers significant advantages, not the least of which is that it is actively maintained.
That said, if you look closely, you can see that the if expression in your working query has a subdocument, but in your C++ code, you aren't starting a new BSON subdocument. There are probably other errors.
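For illustration only, a sketch of that nesting with the legacy BSON()/BSON_ARRAY() macros (reusing the columA/columB/ToMongoDate names from the question) might be:
// Sketch: $cond needs its own subdocument, and $gte takes a two-element array,
// mirroring the structure of the working shell query.
mongo::BSONObj group = BSON("$group" << BSON(
    "_id" << 0
    << "total" << BSON("$sum" << ("$" + columA))
    << "total_sub" << BSON("$sum" << BSON("$cond" << BSON(
           "if" << BSON("$gte" << BSON_ARRAY(("$" + columB)
                        << rmsmongo::utils::Adaptor::ToMongoDate(StartTime, true)))
           << "then" << ("$" + columA)
           << "else" << 0)))));
aBuilder.append(group);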
Seriously though, I really recommend you stop working on this and get migrated to the new driver. Any bugs or non-conformances in the legacy driver (or are you even using 26compat?) will never be fixed.

Problems combining (union) Multipolygons in geodjango

I'm using GeoDjango and PostGIS (1.x).
What is the best way to combine (union) a list of multipolygons?
In what I assume is a rather inefficient approach, I'm looping through like this:
combined = multipolygon
for item in items:
    combined = combined.union(item.geom)  # geom is a multipolygon
Usually this works fine, but I often get the error Error encountered checking Geometry returned from GEOS C function "GEOSUnion_r".
Here is the GeoJSON version of the item the error is thrown on, if it helps:
{ "type": "MultiPolygon", "coordinates":
[ [ [ [ -80.077576, 26.572225 ],
[ -80.037729, 26.571180 ],
[ -80.080279, 26.273744 ],
[ -80.147464, 26.310066 ],
[ -80.152851, 26.455851 ],
[ -80.138560, 26.538013 ],
[ -80.077576, 26.572225 ]
] ] ]
}
Does anyone have any ideas? The end goal is to find all the locations (in another table) which fall within this list of n polygons (using coordinates__within=combined_area).
Also, the polygons show up fine on the maps in the GeoDjango admin.
You can always use the Union aggregate method. That should be a bit more efficient because everything is computed at the database level, which means you don't have to loop over things in Python.
from django.contrib.gis.db.models import Union  # Union aggregate (Django >= 1.9)

combined_area = FooModel.objects.filter(...).aggregate(area=Union('geom'))['area']
final = BarModel.objects.filter(coordinates__within=combined_area)