Cross-compiling plugins for multiple DCCs - c++

Been using Haxe off and on for a few years, and I feel like this is something the compiler might be abusable for:
Is it possible to add a target such that when a Haxe class is compiled, the compiler can introspect the class and generate boilerplate C++? An example:
In Autodesk Maya, to create a plugin node you have to override a number of special functions in a child class. Attributes on the node have to be declared first in the header, then defined statically in the .cpp file, and finally added in a particular way (with error checks) in one of the function overrides. Apart from that, you have to write functions that register and de-register the plugin with the system. The pattern is just non-trivial enough that simple search/replace isn't good enough; you have to replace text in different patterns in different places, and also add function calls depending on the type of attributes and the attribute's settings.
A node with the same behavior in Foundry's Modo is very different-- you have to define three classes and compose functionality.
In both cases, the way you access data from within the node is also different, so if you were to wrap a single C++ function for both programs you'd be doing a lot of work outside the function just to prep the data in an agnostic way.
I'd like to be able to write a single node using Haxe code and generate C++ code from the Haxe class. In the case of Maya, it would subclass from MPxNode and provide the proper function overrides. In the case of Modo, it would generate the required classes properly. And in the future if I wanted to target Cinema 4D, I would add an additional target and compile against that SDK to create its version of the same functionality (probably a Tag).
I've actually done this partly in Python (generating C++ code with stubs for node functionality) and while it works, I've always been curious if there could be a better way to do this through Haxe directly. But again, it's something where the compiler would have to be aware of the structure of the Haxe class and the data within it in order to generate the proper code for each target.
Thanks in advance!

Related

How to call a function on header inclusion?

I'm working on a simple framework for making 2D games. It uses components and systems, which will vary from game to game.
To make it easy for other parts of the engine to loop over all possible systems and/or components, I'd like them to make themselves known the moment one of them is included (each has its own header file), effectively creating a list of all possible component types and system types.
I've currently solved this by having a Register struct which is put at the bottom after a system or component definition, passing that component/system pointer as an argument to the constructor of the Register struct, i.e.:
#include <vector>

std::vector<Component*> Components;              // global list of every registered component

struct Register {
    Register(Component* newComponent) {
        Components.push_back(newComponent);      // runs during static initialization
    }
};
Which is then used at the bottom of each component's header:
Register Pos2DReg(&Pos2D);
Which makes sure that before we get to our main code all components are listed in Components. In the same fashion I also add the names of these components and some other details to some global vectors.
However, it seems unnecessarily messy to create a temporary object that never gets used just to execute code in its constructor.
Is there any other way where including the header will make itself 'known' to the rest of the code?
I'd like to avoid my previous solution where I had a long Register(&Pos2D, &Vel, &Acc, ...etc) function that would register all options, as any changes to the used components would require re-editing this function.
(Also, first stackoverflow question, apologies if it's long / has beginners mistakes)
However, it seems unnecessarily messy to create a temporary object that never gets used just to execute code in its constructor.
You are correct with your assessment of the solution's aesthetic qualities. Unfortunately C++ doesn't have a better mechanism to accomplish what you are after.
After all, even the C++ standard library has to employ this technique when it wants to instrument code for execution after header inclusion.
Though, since you did mark this C++17, and you intended to put the object declaration in a header, you need to make it an inline variable:
inline Register whatevs(...);
It should produce exactly one object for that header, no matter how many translation units include it.
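
To make that concrete, a self-registering component header under C++17 could look roughly like this. This is only a sketch: the Component base, PositionComponent and posInstance are illustrative names, and the registry is wrapped in a function-local static (a small deviation from the plain global above) to sidestep static-initialization-order issues.

#include <vector>

struct Component { virtual ~Component() = default; };

// One shared registry; the function-local static avoids initialization-order problems.
inline std::vector<Component*>& components() {
    static std::vector<Component*> registry;
    return registry;
}

struct Register {
    explicit Register(Component* c) { components().push_back(c); }
};

// --- at the bottom of PositionComponent.h ---
struct PositionComponent : Component { float x = 0, y = 0; };

inline PositionComponent posInstance;   // the prototype object being registered
inline Register posReg(&posInstance);   // exactly one registration program-wide

Every translation unit that includes the header sees the same posReg, so the component is pushed into the registry exactly once before main runs.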

Documenting fake classes

I have a function which exposes all of my required C++ functions to Lua; there are various tables representing different aspects of my "Scripting API". What I wish to do is use doxygen to make a scripting reference from the C++ code that exposes these script functions.
I have tried to make 'fake' classes in the body of the function, which successfully makes a new entry with the name I have given it. For instance, if I make a table named 'Math' which has several functions exposed on it, how would I also make 'fake' member functions in this 'fake' class? I have tried simply passing in \fn to define the function, but it does not show up, as they are not actually real members to add a description to. How would I create this sort of effect in doxygen without hand-writing a verbatim definition of every class, but instead treat the comment block as if it were a real class with real members?
It sounds like you're trying to document Lua code as if it were C++. Maybe it's possible, but it's probably more trouble than it's worth.
If you're trying to document Lua code with doxygen, maybe you could try doxygen-lua.
If your Lua API is small, you could just write a page by hand, with \ref's to the relevant C++ code. (Kind of hacky, but I've done this before.)
You could also consider using some other doc generator for your Lua API, such as LuaDoc, or anything else listed on the lua-users wiki DocumentingLuaCode.
I ended up writing a fake .doxy file which had typenames similar to Lua values; apparently doxygen will document any type you throw at it.
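
For anyone trying the same trick, a fake header that is fed only to doxygen might look roughly like this. The Math table and its members are just the example from the question, and doxygen has to be told to pick the file up (e.g. via FILE_PATTERNS, plus EXTENSION_MAPPING if you use a non-standard extension such as .doxy).

/// @file scripting_api.doxy
/// Fake declarations that exist only so doxygen generates a scripting reference.

/// \brief Math utility table exposed to Lua scripts.
class Math
{
public:
    /// \brief Linearly interpolates between a and b.
    /// \param a start value
    /// \param b end value
    /// \param t interpolation factor in [0, 1]
    /// \return the interpolated value
    static double lerp(double a, double b, double t);

    /// \brief Clamps v to the range [lo, hi].
    static double clamp(double v, double lo, double hi);
};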

Parsing different XML messages. Versions

Say we want to parse XML messages into Business Objects. We split the process into two parts, namely:
-Parsing the XML messages into XML Grammar Objects.
-Transforming the XML Grammar Objects into Business Objects.
The first part is done automatically, generating a grammar object for each node.
The second part is done following the structure of the XML so far. Example:
If we have the following XML message (simplified):
<Main>
    <ChildA>XYZ</ChildA>
    <ChildB att1="0">
        <InnerChild>YUK</InnerChild>
    </ChildB>
</Main>
We could find the following classes:
DecodeMain (calls DecodeChildA and DecodeChildB)
DecodeChildA
DecodeChildB (calls DecodeInnerChild)
DecodeInnerChild
The main problem arrives when we need to handle versions of the same messages. Say we have a new version where only DecodeInnerChild changes (e.g., we need to add an "a" at the end of the value).
It is really important that the solution stays agile for further versions and is as clean as possible. I considered the following options:
1) Simple inheritance: create two classes of DecodeInnerChild, one for each version.
Shortcoming: I would need to create different classes for every parent class in order to call the right one.
2) Version parameter: add to each method an object with the version as a parameter. This way we know what to do within each method according to the version.
Shortcoming: not clean at all; the code of different versions is mixed.
3) Inheritance + version parameter: create two classes with a base class for the common code, for the nodes that directly change (like InnerChild), and add the version as a parameter in each method. When a node calls another class to decode the child object, it uses one class or the other depending on the version parameter.
4) Some kind of executor pattern (I do not know how to do it): define at the start some kind of specification object that indicates all the methods that are going to be used, and pass this object to a class that is in charge of executing them.
How would you do it? Other ideas are welcomed.
Thanks in advance. :)
How would you do it? Other ideas are welcomed.
Rather than parse XML myself, I would as a first step let something like CodeSynthesis XSD generate all the needed classes for me and work on those. Later, when performance or something else becomes an issue, I would possibly start to look around for more efficient parsers, and only if that is not fruitful would I start to design and write my own parser for the specific case.
Edit:
Sorry, I should have been more specific :P, the first part is done
automatically, the whole code is generated from the XML schema.
OK, let's discuss then how to handle the usual situation that, as software evolves, you will eventually have evolved input too. I put all the silver bullets and magic wands on the table here; if and what you implement of them is totally up to you.
A version attribute is something I have anyway in most things that I create. It is sane to have it in place before a backward-compatibility issue arrives that cannot be solved elegantly. Most importantly, it ensures that when old software fails to parse newer input, it produces a complaint that immediately makes sense to everybody.
I usually also add some interface for a converter, so old software can be equipped with a converter from a newer version of the input when it fails to parse it. New software can use the same converter to parse older input, and it is also the place to plug in a converter from totally "alien" input. Win-win-win situation. ;)
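
As a rough illustration of such a converter interface (the type names here are placeholders, not part of the original design):

#include <string>

// RawMessage stands in for whatever form the input takes before decoding.
struct RawMessage {
    int         version = 1;
    std::string xml;
};

// Converts a message between schema versions, in either direction.
struct MessageConverter {
    virtual ~MessageConverter() = default;
    virtual RawMessage toNewer(const RawMessage& in) const = 0;  // e.g. append the trailing "a"
    virtual RawMessage toOlder(const RawMessage& in) const = 0;  // e.g. strip the trailing "a"
};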
In the special case of a minor change, I would consider whether it is cheap to make the new DecodeInnerChild internally more flexible, so it accepts the value both with and without that "a" at the end as valid. In the converter I still have to get rid of that "a" when converting for older versions.
Often what actually happens is that InnerChild does split and both versions are used side by side. If there is a sufficient behavioral difference between the two InnerChilds, then there is no point in avoiding polymorphic InnerChilds. When polymorphism is added then indeed, as you say in your option 1), all containing classes that now have such polymorphic members have to be altered. In such cases the converter should usually either produce a crippled InnerChild or report back that the input is outside the older version's capabilities.
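
If the split does happen, a minimal sketch of the polymorphic decoders could look like this; the class names follow the question, while XmlNode stands in for whatever grammar object the generated parser produces.

#include <memory>
#include <string>
#include <utility>

// Placeholder for the generated grammar object; here it only carries the node's text.
struct XmlNode { std::string text; };

// Version 1 behaviour.
struct DecodeInnerChild {
    virtual ~DecodeInnerChild() = default;
    virtual std::string Decode(const XmlNode& node) const { return node.text; }
};

// Version 2: appends the trailing "a" required by the newer message version.
struct DecodeInnerChildV2 : DecodeInnerChild {
    std::string Decode(const XmlNode& node) const override {
        return DecodeInnerChild::Decode(node) + "a";
    }
};

// The containing decoder is handed whichever version it should delegate to.
struct DecodeChildB {
    explicit DecodeChildB(std::unique_ptr<DecodeInnerChild> inner) : inner_(std::move(inner)) {}
    std::string DecodeInner(const XmlNode& node) const { return inner_->Decode(node); }
private:
    std::unique_ptr<DecodeInnerChild> inner_;
};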

What is the best design pattern to register data "chunks"?

I have a library which can save/load to disk "chunks", which are POD structs with a constant size and a unique static CHUNK_ID field. So Load looks something like this:
void Load(int docId, char* ptr, int type, size_t& size)...
If you want to add a new chunk you just add a struct with a new CHUNK_ID and use the Save/Load functions on it.
What I want is to force all "chunks" to have functions like PrintHumanReadable, CompareThisTypeOfChunk, etc. (ideally the program should not compile without such functions). I also want to mark/register/enumerate all chunk structs.
I have a few ideas but all of them have problems.
Create a base class with pure virtual functions PrintHumanReadable and CompareThisTypeOfChunk.
Problem: breaks the POD type and requires rewriting the library.
Implement a factory which creates the chunk struct from its CHUNK_ID. Problem: it still compiles when I add a new chunk without the required functions.
Could you recommend an elegant design solution for my problem?
Implement a simple code generator. You can use something like Mako or Cheetah (both Python libraries). Make a text file containing all the class names, then have the generator build the factory method and a series of methods which aren't really used but which refer to the desired methods in all the classes. This will also make it straightforward to enumerate the classes (again, using generated code).
The proper design pattern for this is called "use Boost.Serialization". It's really the best tool for writing objects to a format and then reading them back later. It can write in text, binary, and even XML formats (and others if you write a proper stream for them). It can be non-intrusive, so you don't need to modify the objects to serialize them. And so forth.
Once you're using the proper tool for this job, you can then use whatever class hierarchy or other method you like to ensure that the proper functions for an object exist.
If you can't/won't use Boost.Serialization, then you're pretty much stuck with a runtime solution. And since the solution is runtime rather than compile time, there's no way to ensure at compile time that any particular chunk ID has the requisite functions.
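
For reference, making a chunk serializable with Boost.Serialization is roughly as small as this; the chunk's fields are invented for the sketch.

#include <fstream>
#include <boost/archive/text_oarchive.hpp>

// Invented example chunk; the serialize() member is what Boost.Serialization hooks into.
struct ExampleChunk {
    int    id    = 0;
    double value = 0.0;

    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & id;
        ar & value;
    }
};

// Usage sketch: write one chunk to a text archive.
// const ExampleChunk chunk{};
// std::ofstream ofs("chunk.txt");
// boost::archive::text_oarchive oa(ofs);
// oa << chunk;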

Flexible application configuration in C++

I am developing a C++ application used to simulate a real world scenario. Based on this simulation our team is going to develop, test and evaluate different algorithms working within such a real world scenario.
We need the possibility to define several scenarios (they might differ in a few parameters, but a future scenario might also require creating objects of new classes) and the possibility to maintain a set of algorithms (which is, again, a set of parameters but also a definition of which classes are to be created). Parameters are passed to the classes in the constructor.
I am wondering which is the best way to manage all the scenario and algorithm configurations. It should be easily possible to have one developer work on one scenario with "his" algorithm and another developer working on another scenario with "his" different algorithm. Still, the parameter sets might be huge and should be "sharable" (if I defined a set of parameters for a certain algorithm in Scenario A, it should be possible to use the algorithm in Scenario B without copy&paste).
It seems like there are two main ways to accomplish my task:
Define a configuration file format that can handle my requirements. This format might be XML based or custom. As there is no C#-like reflection in C++, it seems like I have to update the config-file parser each time a new algorithm class is added to the project (in order to convert a string like "MyClass" into a new instance of MyClass). I could create a name for every setup and pass this name as a command line argument.
Pros: no compilation required to change a parameter and re-run; I can easily store the whole config file with the simulation results.
Cons: seems like a lot of effort, especially because I am using a lot of template classes that have to be instantiated with given template arguments; no IDE support for writing the file (at least not without creating a whole XSD which I would have to update every time a parameter/class is added).
Wire everything up in C++ code. I am not completely sure how I would do this to separate all the different creation logic but still be able to reuse parameters across scenarios. I think I'd also try to give every setup a (string) name and use this name to select the setup via a command line arg.
Pros: type safety, IDE support, no parser needed.
Cons: how can I easily store the setup with the results (maybe some serialization?); needs compilation after every parameter change.
Now here are my questions:
- What is your opinion? Did I miss important pros/cons?
- Did I miss a third option?
- Is there a simple way to implement the config file approach that gives me enough flexibility?
- How would you organize all the factory code in the second approach? Are there any good C++ examples for something like this out there?
Thanks a lot!
There is a way to do this without templates or reflection.
First, you make sure that all the classes you want to create from the configuration file have a common base class. Let's call this MyBaseClass and assume that MyClass1, MyClass2 and MyClass3 all inherit from it.
Second, you implement a factory function for each of MyClass1, MyClass2 and MyClass3. The signatures of all these factory functions must be identical. An example factory function is as follows.
MyBaseClass * create_MyClass1(Configuration & cfg)
{
    // Retrieve config variables and pass them as parameters
    // to the constructor. cfg is a reference, so use '.'
    int age = cfg.lookupInt("age");
    std::string address = cfg.lookupString("address");
    return new MyClass1(age, address);
}
Third, you register all the factory functions in a map.
typedef MyBaseClass * (*FactoryFunc)(Configuration &);   // must match the factory signatures

std::map<std::string, FactoryFunc> nameToFactoryFunc;
nameToFactoryFunc["MyClass1"] = &create_MyClass1;
nameToFactoryFunc["MyClass2"] = &create_MyClass2;
nameToFactoryFunc["MyClass3"] = &create_MyClass3;
Finally, you parse the configuration file and iterate over it to find all the entries that specify the name of a class. When you find such an entry, you look up its factory function in the nameToFactoryFunc table and invoke the function to create the corresponding object.
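
Spelled out, that last lookup-and-create step might look like this (createFromConfig is just an illustrative helper, not part of the answer above):

#include <stdexcept>
#include <string>

MyBaseClass * createFromConfig(const std::string & className, Configuration & cfg)
{
    auto it = nameToFactoryFunc.find(className);
    if (it == nameToFactoryFunc.end()) {
        throw std::runtime_error("unknown class in configuration: " + className);
    }
    return it->second(cfg);   // invoke the registered factory function
}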
If you don't use XML, it's possible that boost::spirit could short-circuit at least some of the problems you are facing. Here's a simple example of how config data could be parsed directly into a class instance.
I found this website with a nice template-supporting factory which I think I will use in my code.