How to add a branch to an already existing TTree (ROOT, C++)

I have an existing TTree from a simulation. I would like to add a branch called Muon.Mass to this tree, and I want to give the Muon.Mass branch a value of 0.1.
How can I write that?
I have seen how to create TTrees from scratch with branches for different variables, but I am not sure exactly what to do when I already have a TTree.

You can call the TTree::Branch method on an existing TTree just as you would for a new TTree. The one thing to watch is filling: you need to ensure you fill only the new branch, not the whole tree. (This is a strongly cut-down example from https://github.com/pseyfert/tmva-branch-adder.)
void AddABranch(TTree* tree) {
  Float_t my_local_variable;
  // TTree::Branch takes the branch name and the address of the variable
  TBranch* my_new_branch = tree->Branch("my_new_branch", &my_local_variable);
  for (Long64_t entry = 0; entry < tree->GetEntries(); ++entry) {
    tree->GetEntry(entry);
    /* something to compute my_local_variable */
    my_new_branch->Fill();  // fill only the new branch, not the whole tree
  }
}
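For the specific case in the question, a constant value of 0.1, a minimal sketch could look like this (the file name sim.root and tree name mytree are assumptions; see also the note about the . in the branch name below):
#include <TFile.h>
#include <TTree.h>
#include <TBranch.h>

void AddMuonMass() {
  TFile f("sim.root", "UPDATE");            // reopen the existing file writable
  TTree* tree = nullptr;
  f.GetObject("mytree", tree);              // fetch the existing tree
  Float_t mass = 0.1;
  TBranch* branch = tree->Branch("Muon.Mass", &mass);
  for (Long64_t entry = 0; entry < tree->GetEntries(); ++entry) {
    branch->Fill();                         // fill only the new branch
  }
  tree->Write("", TObject::kOverwrite);     // save the updated tree in place
}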
As an alternative, you might want to look at the ROOT tutorials for tree friends; a short sketch follows.
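A minimal sketch of the friend-tree idea (again with assumed names; tree is the existing tree): the new branch lives in a separate tree that gets attached to the original one.
TFile f2("muon_mass.root", "RECREATE");
TTree friend_tree("mytree_friend", "extra branches for mytree");
Float_t mass = 0.1;
friend_tree.Branch("MuonMass", &mass);
for (Long64_t entry = 0; entry < tree->GetEntries(); ++entry)
  friend_tree.Fill();                       // one friend entry per original entry
friend_tree.Write();
// Later, attach it to the original tree:
tree->AddFriend("mytree_friend", "muon_mass.root");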
As a side note: depending on what you want to do with the tree / whom you give the tree to, I advise against using . in branch names, as it causes headaches when running MakeClass (branch names can contain periods, but C++ variable names can't, so the automatically generated class members for each branch undergo character replacement).


TBB Dynamic Flow Graphs

I'm trying to come up with a way to define a flow graph (think TBB) defined at runtime. Currently, we use TBB to define the nodes and the edges between the nodes at compile time. This is sort of annoying because we have people who want to add processing steps and modify the processing chain without recompiling the whole application or really having to know anything about the application beyond how to add processing kernels. In an ideal world I would have some sort of plugin framework using dlls. We already have the software architected so that each node in TBB represents a processing step so it's pretty easy to add stuff if you're willing to recompile.
As a first step, I was trying to come up with a way to define a TBB flow graph in YAML but it was a massive rabbit hole. Does anyone know if something like this exists before I go all in on implementing this from scratch? It will be a fun project but no point in duplicating work.
I am not sure whether anything like this exists in a TBB companion library, but it is definitely doable to implement a small subset of the Flow Graph functionality configurable at runtime.
If the data that transits through your graph has a well-defined type, i.e. your nodes are basically function_node<T, T>, things are manageable. If the graph transforms data from one type to another, it gets more complicated; one solution would be to use a variant of these types and handle the possibly incompatible types at runtime (a sketch of that idea follows). That really depends on the level of flexibility required.
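For the multi-type case, here is a minimal sketch of that variant idea (the payload types are arbitrary choices for illustration):
#include <stdexcept>
#include <string>
#include <variant>

// One runtime data type that can carry any of the supported payloads.
using payload_t = std::variant<int, double, std::string>;

// Each node body checks at runtime that it received a type it can handle.
payload_t times_two(const payload_t& in) {
  if (auto p = std::get_if<double>(&in)) return *p * 2.0;
  throw std::runtime_error("times_two: unsupported payload type");
}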
With:
$ cat nodes.txt
# type concurrency params...
multiply unlimited 2
affine serial 5 -3
and
$ cat edges.txt
# src dst
0 1
1 2
where index 0 is a source node, here is a scaffold of how I would implement it:
using data_t = double;
using node_t = oneapi::tbb::flow::function_node<data_t, data_t>;
oneapi::tbb::flow::graph g;
std::vector<node_t> nodes;
auto node_factory = [&g](std::string type, std::string concurrency, std::string params) -> node_t {
  // Implement a dynamic factory of nodes
};
// Add the source node first. Note that input_node is a different type from
// function_node, so it is kept separately here (alternatively, the vector
// could store a variant of node types).
oneapi::tbb::flow::input_node<data_t> source(g,
  [&](oneapi::tbb::flow_control& fc) -> data_t { /* ... */ });
// Parse the node description file (nodes.txt) and populate the node vector
// using the factory
for (auto&& d : node_descriptions)
  nodes.push_back(node_factory(d.type, d.concurrency, d.params));
// Parse the edge description file (edges.txt) and call make_edge accordingly;
// file index 0 is the source, file index i >= 1 maps to nodes[i - 1]
for (auto&& e : edges) {
  if (e.src == 0)
    oneapi::tbb::flow::make_edge(source, nodes[e.dst - 1]);
  else
    oneapi::tbb::flow::make_edge(nodes[e.src - 1], nodes[e.dst - 1]);
}
// Run the graph
source.activate();
g.wait_for_all();
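One possible shape for the node factory, matching the two node types from nodes.txt above (a sketch: the parsing of concurrency and parameters into std::size_t and doubles is assumed to have happened already, with "unlimited"/"serial" mapped to oneapi::tbb::flow::unlimited and oneapi::tbb::flow::serial):
#include <oneapi/tbb/flow_graph.h>
#include <stdexcept>
#include <string>
#include <vector>

using data_t = double;
using node_t = oneapi::tbb::flow::function_node<data_t, data_t>;

node_t make_node(oneapi::tbb::flow::graph& g, const std::string& type,
                 std::size_t concurrency, const std::vector<double>& p) {
  if (type == "multiply")  // "multiply unlimited 2" -> x * 2
    return node_t(g, concurrency, [f = p.at(0)](data_t x) { return f * x; });
  if (type == "affine")    // "affine serial 5 -3" -> 5 * x + (-3)
    return node_t(g, concurrency,
                  [a = p.at(0), b = p.at(1)](data_t x) { return a * x + b; });
  throw std::runtime_error("unknown node type: " + type);
}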

How to write a proper git pull with libgit2

I want to write a C++ libgit2 wrapper to perform some basic git operations, because libgit2 is, in my opinion, too atomic to be used as-is.
As libgit2 is written in C, it does not matter whether I get a C- or C++-oriented solution; I will adapt it myself.
I am having difficulties with my gitPull() function, which is supposed to be the git pull equivalent.
I planned to implement it as follows:
git_remote_fetch()
git_annotated_commit_from_fetchhead()
git_merge() (with the previously obtained git_annotated_commit)
git_commit_create()
Assuming this is the proper way to do it (tell me if it is not), I am struggling with two issues:
How do I check whether the fetched HEAD is equal to or different from the local HEAD? (In order to know whether the merge + commit are needed; in other words, whether there is something to merge or we are already up to date.)
How or where do I get the git_oid *id required by git_annotated_commit_from_fetchhead() (the last parameter)?
I know these questions may look quite basic, but I could not find any exploitable information or example either in the libgit2 API reference documentation or in the libgit2 samples.
I have already checked the existing stackoverflow threads about this topic, but none of them provides any exploitable code sample.
I would be very grateful if someone could give me some helpful information about how to achieve this or explain what I have misunderstood.
How to check if the fetched HEAD is equal to or different from the local HEAD?
You can call git_merge_analysis, which will tell you if you would need to do a merge. In this case, GIT_MERGE_ANALYSIS_UP_TO_DATE indicates that there is no need to merge (git merge on the command line would respond with "Already up to date").
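A minimal sketch of that check, assuming repo is the open repository and their_head is an annotated commit obtained from the fetch results (see the second question below):
git_merge_analysis_t analysis;
git_merge_preference_t preference;
git_merge_analysis(&analysis, &preference, repo,
                   (const git_annotated_commit **)&their_head, 1);
if (analysis & GIT_MERGE_ANALYSIS_UP_TO_DATE) {
    /* nothing to merge: the local branch already contains the fetched commit */
} else if (analysis & GIT_MERGE_ANALYSIS_FASTFORWARD) {
    /* the branch can simply be fast-forwarded; no merge commit is needed */
} else if (analysis & GIT_MERGE_ANALYSIS_NORMAL) {
    /* a real merge is required: git_merge + git_commit_create */
}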
How or where to get the git_oid * id required by git_annotated_commit_from_fetchhead()?
You can iterate the FETCH_HEAD file with git_repository_fetchhead_foreach. It will provide you with all the information necessary to generate an annotated commit from whichever remote branch you want to merge.
For example:
int fetchhead_cb(const char *ref_name, const char *remote_url, const git_oid *oid, unsigned int is_merge, void *payload)
{
    if (is_merge)
    {
        printf("reference: '%s' is the reference we should merge\n", ref_name);
        git_oid_cpy((git_oid *)payload, oid);
    }
    return 0;   /* a nonzero return value would stop the iteration */
}
void pull(void)
{
    /* here's the id we want to merge */
    git_oid id_to_merge;

    /* Do the fetch */

    /* Populate the id with the for-merge branch from FETCH_HEAD */
    git_repository_fetchhead_foreach(repo, fetchhead_cb, &id_to_merge);
}
(Note that you'd probably want to actually capture all the details of the FETCH_HEAD information - not just the id - so that you could create an annotated commit from the data, but this is an example of how the git_repository_fetchhead_foreach function works with callbacks.)
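For example, assuming the callback also captured ref_name and remote_url alongside the id (the variable names are assumptions), the annotated commit could then be created like this:
git_annotated_commit *their_head = NULL;
int error = git_annotated_commit_from_fetchhead(&their_head, repo,
                                                ref_name, remote_url,
                                                &id_to_merge);
if (error == 0) {
    /* pass &their_head (length 1) to git_merge_analysis / git_merge */
    git_annotated_commit_free(their_head);
}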

SCIP: How to resolve LP after catching a 'node infeasibility' event

I have a working column generation algorithm in SCIP. Due to specific constraints that I include while generating columns, it might happen that the last pricing round determines that the root node is infeasible (by the Farkas pricer of course).
In case that happens, I would like to 1) relax those specific constraints, 2) resolve the LP, and 3) start pricing columns again.
So, I have created my own EventHandler class, catching the node infeasibility event:
SCIP_DECL_EVENTINITSOL(EventHandler::scip_initsol)
{
    SCIP_CALL( SCIPcatchEvent(scip_, SCIP_EVENTTYPE_NODEINFEASIBLE, eventhdlr, NULL, NULL) );
    return SCIP_OKAY;
}
And, corresponding, the scip_exec virtual method:
SCIP_DECL_EVENTEXEC(EventHandler::scip_exec)
{
    double cur_rhs = SCIPgetRhsLinear(scip_, (*d_varConsInfo).c_primal_obj_cut);
    SCIP_CALL( SCIPchgRhsLinear(scip_, (*d_varConsInfo).c_primal_obj_cut, cur_rhs + DELTA) );
    return SCIP_OKAY;
}
Where (*d_varConsInfo).c_primal_obj_cut is the specific constraint to be changed, DELTA is a global parameter, and cur_rhs is the current right-hand side of the specific constraint. This function is neatly called after the node infeasibility proof; however, I do not know how to 'tell' SCIP that the LP should be resolved and possible new columns should be priced. Can somebody help me out with this?
When the event handler catches the NODEINFEASIBLE event, it is already too late to change anything about the infeasibility of the problem; the node processing is already finished. Additionally, you are not allowed to change the rhs of a constraint during the solving process (because this would mean that reductions done before are potentially invalid).
I would suggest the following: if your Farkas pricing is not able to identify new columns that render the LP feasible again, the node will subsequently be declared infeasible. Therefore, at the end of Farkas pricing (if you are at the root node), you could just price an auxiliary variable that you add to the constraint that you want to relax, with bounds corresponding to your DELTA. Note that you need to have marked the constraint as modifiable when creating it. Then, since a variable was added, SCIP will trigger another pricing round. A sketch of this idea follows.
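A sketch of that idea inside the Farkas pricing callback (the variable name, score, and coefficient sign are assumptions; the sign in particular depends on the sense of your constraint):
/* price an auxiliary slack variable into the constraint to be relaxed;
 * the constraint must have been created with modifiable = TRUE */
SCIP_VAR* slack = nullptr;
SCIP_CALL( SCIPcreateVarBasic(scip, &slack, "relax_slack",
                              0.0, DELTA,   /* bounds encode the relaxation amount */
                              0.0,          /* zero cost, or a penalty if preferred */
                              SCIP_VARTYPE_CONTINUOUS) );
SCIP_CALL( SCIPaddPricedVar(scip, slack, 1.0) );  /* any positive score works */
SCIP_CALL( SCIPaddCoefLinear(scip, (*d_varConsInfo).c_primal_obj_cut,
                             slack, -1.0) );      /* sign depends on constraint sense */
SCIP_CALL( SCIPreleaseVar(scip, &slack) );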
Maybe you should take a look at the PRICERFARKAS method of SCIP (https://scip.zib.de/doc/html/PRICER.php#PRICER_FUNDAMENTALCALLBACKS):
If the current LP relaxation is infeasible, it is the task of the pricer to generate additional variables that can potentially render the LP feasible again. In standard branch-and-price, these are variables with positive Farkas values, and the PRICERFARKAS method should identify those variables.

C++: best way to realise global switches/flags to control program behaviour without tying the classes to a common point

Let me elaborate on the title:
I want to implement a system that would allow me to enable/disable/modify the general behavior of my program. Here are some examples:
I could switch logging on and off
I could change whether my graphing program should use floating-point or pixel coordinates
I could change whether my calculations should be based on one method or another
I could enable/disable certain aspects, like maybe an extension API
I could enable/disable some basic integrated profiler (if I had one)
These are some made-up examples.
Now I want to know what the most common solution for this sort of thing is.
I could imagine this working with some sort of singleton class that gets instantiated globally, or with some other globally available object. Another possibility would be just constexpr or other variables floating around in a namespace, again globally.
However, doing something like that globally feels like bad practice.
Second part of the question
This might sound like I can't decide what I want, but I want a way to modify all these switches/flags (or whatever they are actually called) in a single location, without tying any of my classes to it. I don't know if this is possible, however.
Why don't I want to do that? Well, I like to make my classes somewhat reusable, and I don't like tying classes together unless it's required by the DRY principle and/or inheritance. I basically couldn't get rid of the flags without modifying the possibly hundreds of classes that use them.
What I have tried in the past
Having it all as compiler defines. This worked reasonably well; however, I didn't like that there was no way to fall back to default settings if the flag file was gone, defaults that would keep the classes themselves operational and changeable.
Having it as a class and instancing it globally (a system class). This worked OK; however, I didn't like instancing anything globally. Also, same problem as above.
Instancing the system class locally and passing it to the classes on construction. This was kind of cool, since I could make multiple instruction sets. However, at the same time that somewhat defeated the purpose, since objects that needed a flag set the same way could end up with it set differently and therefore fail to work together properly. Also, passing it on every construction was a pain.
A static class. This one worked OK for the longest time; however, there is still the problem of missing dependencies.
Summary
Basically, I am looking for a way to have a single "place" where I can mess with some values (bools, floats, etc.) and thereby change the behaviour of all classes using them, where said values either override default values or get replaced by default values if said "place" isn't defined.
If a singleton class does not work for you, maybe a DI (dependency injection) container would fit your third approach? It may help with construction and make the code more testable.
There are some DI frameworks for C++, such as https://github.com/google/fruit/wiki or https://github.com/boost-experimental/di, which you can use.
If you decide to use switches/flags, pay attention to "cyclomatic complexity".
If you do not change the skeleton of your algorithm but only its behaviour according to the objects passed as parameters, have a look at the "template method" design pattern. It allows you to define a generic algorithm and specialise particular steps for particular situations; a minimal sketch follows.
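For illustration, a minimal sketch of that pattern (the class and method names are invented):
// The base class fixes the algorithm's skeleton; subclasses override
// only the steps that vary between calculation methods.
class Calculation {
public:
  double run(double x) { return postprocess(compute(x)); }  // fixed skeleton
  virtual ~Calculation() = default;
private:
  virtual double compute(double x) = 0;               // varies per method
  virtual double postprocess(double y) { return y; }  // optional hook
};

class MethodA : public Calculation {
  double compute(double x) override { return x * x; }
};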
Here's an approach I found useful; I don't know if it's what you're looking for, but maybe it will give you some ideas.
First, I created a BehaviorFlags.h file that declares the following function:
// Returns true iff the given feature/behavior flag was specified for us to use
bool IsBehaviorFlagEnabled(const char * flagName);
The idea being that any code in any of your classes could call this function to find out if a particular behavior should be enabled or not. For example, you might put this code at the top of your ExtensionsAPI.cpp file:
#include "BehaviorFlags.h"
static const bool enableExtensionsAPI = IsBehaviorFlagEnabled("enable_extensions_api");
[...]
void DoTheExtensionsAPIStuff()
{
if (enableExtensionsAPI == false) return;
[... otherwise do the extensions API stuff ...]
}
Note that the IsBehaviorFlagEnabled() call is executed only once at program startup, for best run-time efficiency; but you also have the option of calling IsBehaviorFlagEnabled() on every call to DoTheExtensionsAPIStuff(), if run-time efficiency is less important than being able to change your program's behavior without having to restart your program.
As far as how the IsBehaviorFlagEnabled() function itself is implemented, it looks something like this (simplified version for demonstration purposes):
#include <cstdio>
#include <string>

bool IsBehaviorFlagEnabled(const char * flagName)
{
   // Note: a real implementation would find the user's home directory
   // using the proper API and not just rely on ~ to expand to the home-dir path
   std::string filePath = "~/MyProgram_Settings/";
   filePath += flagName;

   FILE * fpIn = fopen(filePath.c_str(), "r");   // i.e. does the file exist?
   bool ret = (fpIn != NULL);
   if (fpIn) fclose(fpIn);                       // only close what was actually opened
   return ret;
}
The idea being that if you want to change your program's behavior, you can do so by creating a file (or folder) in the ~/MyProgram_Settings directory with the appropriate name. E.g. if you want to enable your Extensions API, you could just do a
touch ~/MyProgram_Settings/enable_extensions_api
... and then re-start your program, and now IsBehaviorFlagEnabled("enable_extensions_api") returns true and so your Extensions API is enabled.
The benefits I see of doing it this way (as opposed to parsing a .ini file at startup or something like that) are:
There's no need to modify any "central header file" or "registry file" every time you add a new behavior-flag.
You don't have to put a ParseINIFile() function at the top of main() in order for your flags-functionality to work correctly.
You don't have to use a text editor or memorize a .ini syntax to change the program's behavior
In a pinch (e.g. no shell access) you can create/remove settings simply using the "New Folder" and "Delete" functionality of the desktop's window manager.
The settings are persistent across runs of the program (i.e. no need to specify the same command line arguments every time)
The settings are persistent across reboots of the computer
The flags can be easily modified by a script (via e.g. touch ~/MyProgram_Settings/blah or rm -f ~/MyProgram_Settings/blah) -- much easier than getting a shell script to correctly modify a .ini file
If you have code in multiple different .cpp files that needs to be controlled by the same flag-file, you can just call IsBehaviorFlagEnabled("that_file") from each of them; no need to have every call site refer to the same global boolean variable if you don't want them to.
Extra credit: If you're using a bug-tracker and therefore have bug/feature ticket numbers assigned to various issues, you can creep the elegance a little bit further by also adding a class like this one:
/** This class encapsulates a feature that can be selectively disabled/enabled by putting an
  * "enable_behavior_xxxx" or "disable_behavior_xxxx" file into the ~/MyProgram_Settings folder.
  */
class ConditionalBehavior
{
public:
   /** Constructor.
     * @param bugNumber Bug-Tracker ID number associated with this bug/feature.
     * @param defaultState If true, this behavior will be enabled by default (i.e. if no corresponding
     *                     file exists in ~/MyProgram_Settings). If false, it will be disabled by default.
     * @param switchAtVersion If specified, this feature's default-enabled state will be inverted if
     *                        GetMyProgramVersion() returns any version number greater than this.
     */
   ConditionalBehavior(int bugNumber, bool defaultState, int switchAtVersion = -1)
      : _enabled(defaultState)   // start from the default, then apply the overrides below
   {
      if ((switchAtVersion >= 0)&&(GetMyProgramVersion() >= switchAtVersion)) _enabled = !_enabled;

      std::string fn = _enabled ? "disable" : "enable";   // the flag-file flips the current default
      fn += "_behavior_";
      fn += std::to_string(bugNumber);
      if ((IsBehaviorFlagEnabled(fn.c_str()))
        ||(IsBehaviorFlagEnabled("enable_everything")))
      {
         _enabled = !_enabled;
         printf("Note: %s Behavior #%i\n", _enabled?"Enabling":"Disabling", bugNumber);
      }
   }

   /** Returns true iff this feature should be enabled. */
   bool IsEnabled() const {return _enabled;}

private:
   bool _enabled;
};
Then, in your ExtensionsAPI.cpp file, you might have something like this:
// Extensions API feature is tracker #4321; disabled by default for now
// but you can try it out via "touch ~/MyProgram_Settings/enable_behavior_4321"
static const ConditionalBehavior _feature4321(4321, false);

// Also tracker #4222 is now enabled-by-default, but you can disable
// it manually via "touch ~/MyProgram_Settings/disable_behavior_4222"
static const ConditionalBehavior _feature4222(4222, true);
[...]
void DoTheExtensionsAPIStuff()
{
if (_feature4321.IsEnabled() == false) return;
[... otherwise do the extensions API stuff ...]
}
... or if you know that you are planning to make your Extensions API enabled-by-default starting with version 4500 of your program, you can set it so that Extensions API will be enabled-by-default only if GetMyProgramVersion() returns 4500 or greater:
static ConditionalBehavior _feature4321(4321, false, 4500);
[...]
... also, if you wanted to get more elaborate, the API could be extended so that IsBehaviorFlagEnabled() can optionally return a string to the caller containing the contents of the file it found (if any), so that you could do shell commands like:
echo "opengl" > ~/MyProgram_Settings/graphics_renderer
... to tell your program to use OpenGL for its 3D graphics, or etc:
// In Renderer.cpp
std::string rendererType;
if (IsBehaviorFlagEnabled("graphics_renderer", &rendererType))
{
printf("The user wants me to use [%s] for rendering 3D graphics!\n", rendererType.c_str());
}
else printf("The user didn't specify what renderer to use.\n");
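A sketch of that extension (the two-argument overload is an assumption, not part of the code above): if the flag-file exists, its first line is handed back through an optional out-parameter.
#include <cstdio>
#include <string>

bool IsBehaviorFlagEnabled(const char * flagName, std::string * optContents = NULL)
{
   std::string filePath = "~/MyProgram_Settings/";  // same home-dir caveat as above
   filePath += flagName;

   FILE * fpIn = fopen(filePath.c_str(), "r");
   if (fpIn == NULL) return false;

   if (optContents)
   {
      char buf[256];
      if (fgets(buf, sizeof(buf), fpIn) != NULL)
      {
         *optContents = buf;
         while((optContents->empty() == false)&&(optContents->back() == '\n')) optContents->pop_back();  // strip trailing newline
      }
   }
   fclose(fpIn);
   return true;
}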

How to exchange custom data between Ops in Nuke?

This question is addressed to developers using C++ and the Nuke NDK.
Context: Assume a custom Op which implements the interfaces of DD::Image::NoIop and
DD::Image::Executable. The node iterates over a range of frames, extracting information at
each frame, which is stored in a custom data structure. A custom knob, which is a member
variable of the above Op (but invisible in the UI), handles the loading and saving
(serialization) of the data structure.
Now I want to exchange that data structure between Ops.
So far I have come up with the following ideas:
Expression linking
Knobs can share information (matrices, etc.) using expression linking.
Can this feature be exploited for custom data as well?
Serialization to image data
The custom data would be serialized and written into a (new) channel. A
node further down the processing tree could grab that and de-serialize
again. Of course, the channel must not be altered between serialization
and de-serialization or else ... this is a hack, I know, but, hey, any port
in a storm!
GeoOp + renderer
In cases where the custom data is purely point-based (which, unfortunately,
it isn't in my case), I could turn the above node into a 3D node and pass
point data to other 3D nodes. At some point a render node would be required
to come back to 2D.
Am I going in the correct direction with this? If not, what would be a sensible
approach to make this data structure available to other nodes which rely on the
information contained in it?
This question has been answered on the Nuke-dev mailing list:
If you know the actual class of your Op's input, it's possible to cast the
input to that class type and access it directly. A simple example could be
this snippet below:
//! @file DownstreamOp.cpp
#include "UpstreamOp.h" // The Op that contains your custom data.
// ...
UpstreamOp * upstreamOp = dynamic_cast< UpstreamOp * >( input( 0 ) );
if ( upstreamOp )
{
  YourCustomData * data = upstreamOp->getData();
  // ...
}
// ...
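For completeness, the upstream Op would expose an accessor along these lines (a sketch; the class layout and member names are assumptions):
//! @file UpstreamOp.h (sketch)
class UpstreamOp : public DD::Image::NoIop
{
public:
  // ... constructor, knobs, NoIop overrides ...
  YourCustomData * getData() { return &_data; }
private:
  YourCustomData _data;  // filled while executing over the frame range
};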
UPDATE
Update with reference to a question that I received via email:
I am trying to do this exact same thing, pass custom data from one Iop
plugin to another.
But these two plugins are defined in different dso/dll files.
How did you get this to work ?
Short answer:
Compile your Ops into a single shared object.
Long answer:
Say
UpstreamOp.cpp
DownstreamOp.cpp
define the depending Ops.
In a first attempt I compiled the first plugin using only UpstreamOp.cpp,
as usual. For the second plugin I compiled both DownstreamOp.cpp and
UpstreamOp.cpp into that plugin.
Strangely enough, that worked (on Linux; I didn't test Windows).
However, by overriding
bool Op::test_input( int input, Op * op ) const;
things will break. Creating and saving a Comp using the above plugins still
works. But loading that same Comp again breaks the connection in the node graph
between UpstreamOp and DownstreamOp and it is no longer possible to connect
them again.
My hypothesis is this: since both plugins contain symbols for UpstreamOp, it
depends on the load order of the plugins whether a node uses instances of UpstreamOp
from the first or from the second plugin. So, if UpstreamOp from the first plugin
is used, then any dynamic_cast in Op::test_input() will fail and the two Ops cannot
be connected anymore.
It is still surprising that Nuke even starts at all with the above
configuration, since it can be rather picky about symbols from plugins, e.g. if they
are missing.
Anyway, to get around this problem I did the following:
compile both Ops into a single shared object, e.g. myplugins.so, and
add a TCL or Python script (init.py/menu.py) which instructs Nuke how to load
the Ops correctly.
An example of a TCL script can be found in the dev guide, and the instructions
for your menu.py could be something like this:
menu = nuke.menu( 'Nodes' ).addMenu( 'my-plugins' )
menu.addCommand('UpstreamOp', lambda: nuke.createNode('UpstreamOp'))
menu.addCommand('DownstreamOp', lambda: nuke.createNode('DownstreamOp'))
nuke.load('myplugins')
So far, this works reliably for us (on Linux & Windows, haven't tested Mac).