In C++ I have the following two classes that I expose (using Boost) to Python:
struct Foo {
    // Empty
};

struct FooContainer {
    // I use boost::shared_ptr for compatibility with Boost.Python
    vector<boost::shared_ptr<Foo>> foos_;
};
On the Python side I might create a special type of Foo that actually does something instead of being just an empty class, and then add it to a FooContainer:
class Useful(Foo):
    def __init__(self, a, b):
        self.a = a
        self.b = b

x = Useful(3, 5)
# Add 'x' to a `FooContainer`
Back on the C++ side, the FooContainer now has some Foos, but it doesn't know or care that they are from Python. The application runs for a while and the data in the Foo objects changes...
Then I decide I want to save the state of my program so I can load it at a later time. But the problem is that FooContainer doesn't know much about its Foo objects; it doesn't even know that they come from Python, and I wouldn't want to pollute my FooContainer with data that doesn't really belong in it (single-responsibility principle and all that).
Do you have any advice on how I should organize my application so that saving and loading data, as well as loading fresh data (i.e., not from a state that I saved in the past), can be done in a clear way?
You can use boost::python/pickle, and save the data from python. I only have limited experience with the pickling suite, but it should work provided you override appropriate pickling methods in your classes derived in python (see my answer to this question).
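For the C++ side of that, a rough sketch of what registering a pickle suite for the base class might look like (the suite and module names here are made up; Python-derived subclasses still need their own __getstate__/__setstate__ or equivalent, as in the linked answer):

#include <boost/python.hpp>
#include <boost/shared_ptr.hpp>

struct Foo {
    // Empty, as in the question
};

// Hypothetical pickle suite: Foo's constructor takes no arguments,
// so getinitargs() just returns an empty tuple.
struct foo_pickle_suite : boost::python::pickle_suite {
    static boost::python::tuple getinitargs(const Foo&) {
        return boost::python::make_tuple();
    }
};

BOOST_PYTHON_MODULE(example) {
    boost::python::class_<Foo, boost::shared_ptr<Foo> >("Foo")
        .def_pickle(foo_pickle_suite());
}

If the Python subclasses keep their state in the instance __dict__, the Boost.Python pickle documentation also describes a getstate_manages_dict safeguard that has to be taken into account.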
You already have Python code that creates the Foos; let's call it populateFoos, and somehow you have your program call it.
Now the next thing you need is a storeFoos and a loadFoos function that do the saving and loading. If you want to keep it generic, define them as storeFunc and loadFunc (or callback, depending on the context).
Depending on your program structure you might also need to keep, in Python, a list of all foos created (or associated with a container).
I have searched the internet for hours at this point. Does anyone know how to parse a namedtuple returned from a Python function into a struct, or just into separate variables? The part I am having trouble with is getting the data out of the returned pointer. I am calling a Python function embedded in C++ using the PyObject_CallFunction() call and I don't know what to do once I have the PyObject* to the returned data.
I am using Python 2.7 for reference.
EDIT: I ended up moving all of the functionality I was trying to do in both Python and C++ to just Python for now. I will update in the near future about attempting the strategy suggested in the comments of this question.
"I am calling a python function embedded in C++ using the PyObject_CallFunction() call and I don't know what to do once I have the PyObject* to the returned data."
A namedtuple is a tuple subclass that additionally exposes tuple elements as named attributes. This means that you can choose whether to access its data as obj[position] or obj.attribute. The latter is generally more readable, but the former combines well with tuple unpacking. In Python/C, it is probably easier to access it as a tuple, since then you can use the convenience function PyArg_ParseTuple, as indicated in the comment.
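For example, a rough sketch of the tuple-style access (Python 2.7 C API; this assumes the call returned a namedtuple with two numeric fields, such as a Point(x, y), and that callable is obtained elsewhere):

PyObject *result = PyObject_CallFunction(callable, NULL);  // call with no arguments
if (!result)
    return NULL;

double x, y;
// A namedtuple is still a tuple, so positional unpacking works:
if (!PyArg_ParseTuple(result, "dd", &x, &y)) {
    Py_DECREF(result);
    return NULL;
}
Py_DECREF(result);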
To extract arbitrary attributes of an object (not necessarily a namedtuple), one would call PyObject_GetAttrString. Given an object describing, say, a point, extracting an attribute such as x might look like this:
PyObject *point = ...;  // assume we get a new reference to point
if (!point)
    return NULL;

PyObject *x = PyObject_GetAttrString(point, "x");
if (!x) {
    // obj.x raised, possibly because point is of a different type
    Py_DECREF(point);
    return NULL;
}

double x_val = PyFloat_AsDouble(x);
Py_DECREF(x);  // x not used below this line
if (x_val == -1 && PyErr_Occurred()) {
    // obj.x is not float or float-like
    Py_DECREF(point);
    return NULL;
}
Py_DECREF(point);  // point not used below this line
The error checking and reference counting are quite tedious, but they can be mostly eliminated using guard classes or, better yet, by using classes written by others, such as Boost.Python.
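For instance, with Boost.Python the attribute access above collapses to something like this (my own illustration, not code from the question; extract<> throws boost::python::error_already_set on failure instead of returning NULL):

#include <boost/python.hpp>

namespace bp = boost::python;

// Hypothetical helper: read the 'x' attribute of any Python object
// (a namedtuple, a plain class instance, ...) as a double.
double get_x(const bp::object& point)
{
    return bp::extract<double>(point.attr("x"));
}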
namedtuple is implemented purely in Python. You can see its full source in collections.py. It's very short. The thing to keep in mind is that namedtuple itself is a function which creates a class in the frame in which it is called and then returns this class (not an instance of this class). And it is this returned class that is then used to create instances. So the object which you get is not what you want to pass into C++ if you want to pass individual instances.
C++ creates struct definitions at compile time, while namedtuple creates namedtuple classes at run time. If you want to bind them to C++ structs, you can either use the returned PyObject to create instances of the newly minted class inside C++ and copy their fields into your compile-time struct members, or create the instances in Python and pass them to C++.
Or you can use the _asdict method (which the namedtuple factory provides on every class it builds) and pass that dict to C++, which can then bind the run-time-defined data to the compile-time-defined data.
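A rough sketch of that route (Python 2.7 C API; this assumes the namedtuple has an integer field named a, and most error handling is omitted):

// ntuple is the namedtuple instance obtained earlier
PyObject *as_dict = PyObject_CallMethod(ntuple, (char *)"_asdict", NULL);
if (!as_dict)
    return NULL;

PyObject *a = PyMapping_GetItemString(as_dict, (char *)"a");  // new reference
long a_val = a ? PyInt_AsLong(a) : -1;
Py_XDECREF(a);
Py_DECREF(as_dict);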
If you really want to do the bulk of the work in C++, you may also use the struct module instead of using namedtuple.
namedtuple is really the Swiss Army knife of Python for data which stays in Python. It gives positional access, named access, and all the elements are also "properties" (so they have fget accessor methods which can be used in maps, filters, etc., instead of having to write your own lambdas).
It's there for things like DB binding (when you don't know which columns will be there at run time). It's less clunky than OrderedDict for converting data from one format into another. When it's used that way, the overhead of processing strings is nothing compared to actual access of the db (even embedded). But I wouldn't use namedtuple for large arrays of structs which are meant to be used in calculations.
I have a class with many objects that I would like to group in some type of container and also access them with some type of identifier.
class Menu {
    Object title;
    Object play;
    Object instructions;
    Object pause;
    ...
};
Having each object listed in the class, as shown above, is nice because I can access them like menu.title, but then I have to retype every name to add it to a container: vector.push_back(title).
Shown below is how I've always solved the problem: I use an enum's integer value to access the corresponding index, e.g. objects[TITLE] or objects[PLAY].
class Menu {
    std::vector<Object> objects;
};

enum ObjectTypes {
    TITLE, PLAY, INSTRUCTIONS, PAUSE, ...
};
I generally dislike this approach because it seems indirect, and when using nested classes and enums, the identifiers can become long and cumbersome. I'm coming from C and somewhat new to C++. Does anybody have a more practical and/or elegant way to solve this problem? C++11 and above welcomed!
The approach you are using is fine. If you want to avoid cumbersome identifiers, you can make a temporary reference to keep things more succinct. For example, instead of calling:
menu.objects[PAUSE].foo();
menu.objects[PAUSE].bar();
menu.objects[PAUSE].baz();
... you could do this when necessary:
Object & pause = menu.objects[PAUSE];
pause.foo();
pause.bar();
pause.baz();
and it would work the same, but without all of the redundant characters.
I think your approach is fine. Using a std::map<ObjectType, Object> instead of a std::vector might be a little more type-safe.
Either way, if you want to save yourself a little typing you could overload the operator[] on Menu:
Object& operator[](index_type i){ return objects[i]; }
const Object& operator[](index_type i) const { return objects[i]; }
Then you can write menu[TITLE].
Other than that, I don't see how it can be any less cumbersome: there is no redundant information there, and, as Jeremy pointed out, if you need an object multiple times you can always create a local reference: auto& title = menu[TITLE];.
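Put together, a minimal sketch of that combination might look like this (the names are illustrative, and Object is just a stand-in for your real class):

#include <array>
#include <cstddef>

struct Object { /* your real class */ };

enum class ObjectType : std::size_t {
    Title, Play, Instructions, Pause, Count
};

class Menu {
public:
    Object& operator[](ObjectType type) {
        return objects[static_cast<std::size_t>(type)];
    }
    const Object& operator[](ObjectType type) const {
        return objects[static_cast<std::size_t>(type)];
    }

private:
    std::array<Object, static_cast<std::size_t>(ObjectType::Count)> objects;
};

Usage then reads menu[ObjectType::Title], and a local reference (auto& title = menu[ObjectType::Title];) works as before.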
Depending on what other responsibilities Menu has maybe you don't need a Menu class at all and you can just use a map or vector directly?
The best solution to your question depends mostly on your use cases.
I see two main use cases:
You want to represent "functions" of a "device" so you can achieve readable code when manipulating that "device" from your code, such as a MediaPlayer device having play, stop, and pause operations. But you dismissed the option to simply add member functions to your "device" object, such as Play(), because you want to re-use your play code for another device as well, such as a tuner device. Also, you want to apply operations to all or a subset of those "functions", for example Enable()/Disable()/ToString()/Trigger(), Configure(), ..., which is why the member-function approach is not favorable.
You want to create a Document Object model, which is more data focused. Such as an Xml Document.
From what you wrote in your question, I assume you have use case 1 in mind. Your Object type has all the common operations you need.
Yet, then there are the differences between all those "functions".
In order to stick to your simple approach, you would need to manually set up/configure your Object instances, which might in the long run turn out to be annoying, but is hardly avoidable:
// Example of "annoying": If you have more than 1 "device",
// you have to write such a function for each of them.
// Also, there is that fishy bit with Object.Id - the association between
// your configuration data and your object you want to configure.
void ConfigureMenu( std::vector& menuItems, MenuConfigurationData& configData )
{
for( auto& menuItem : menuItems )
{
menuItem.Configure( configData[menuItem.Id] ); // spoiler!
}
}
Also, I am inclined to think that even now you have some code not shown in your question, which configures your objects.
With that in mind, you might want to get rid of the idea of writing one class type per "device" by hand. The next device/menu will need to be treated the same way, only with more dedicated coding.
So my advice is to get rid of your class Menu for good, abstract the problem, and model it like this: Object is your "function", and a device is simply a set of functions/Objects. Your class Menu then simply becomes an instance named Menu.
typedef uint32_t FunctionId; // for example uint32_t...
typedef std::map<FunctionId,Object> Device; // aka. Menu.
Then, in the configuration function you most likely have anyway, you pass in an instance of that Device map and your configuration function fills it with Objects, properly configured.
// those enums are still specific to your concrete device (here Menu) but
// you can consider the time it takes writing them an investment which will
// pay off later when you write your code, using your functions.
// You assign the function id which is used in your meta-data.
enum class MenuFunctions : FunctionId { play = ..., title = ..., instructions, ... };
// "generic" configuration function.
// Does not only configure your Object, but also puts it inside the device.
void ConfigureDevice( Device& device, ConfigData& configData )
{ // ...
}
And later in your code, you can access the functions like this:
menu[MenuFunctions::play].Trigger();
There are alternative approaches and variations to this, of course. For example, assuming you have your meta data (config data, device descriptions), you could stop coding all that by hand and instead write some code generator which does the job for you.
Your first version of such a generator could create your configuration functions and the enums for you.
With all that in place, the use case of "nested classes" becomes just a matter of creating collections of Device instances. Now, as all your devices are of the same type, you can compose and sub-group them at your leisure.
A few different approaches below. I'm not saying which is "best". There are pros/cons to them all.
A) Rather than using a vector (e.g. class Menu { std::vector<Object> objects; };), use an array (class Menu { std::array<Object, NObjectTypes> objects; };) when your element count is constant.
B) Just use a class, but provide an API that returns a std::array<> of pointers to your objects (a rough sketch follows at the end of this answer):
class Menu {
    Object title;
    Object play;
    Object instructions;
    Object pause;
    ...

    std::array<Object*, NObjects> allObjects();
};
C) std::tuple can be useful when your types are not all the same.
For menus, I'll often go with "A".
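For completeness, a rough sketch of what option B's allObjects() could look like (four objects assumed; Object is a stand-in):

#include <array>

struct Object { /* ... */ };

class Menu {
public:
    Object title;
    Object play;
    Object instructions;
    Object pause;

    // Option B: expose the named members as one iterable collection.
    std::array<Object*, 4> allObjects() {
        return {{ &title, &play, &instructions, &pause }};
    }
};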
Developing a modular application, I want to inject some helper classes into each module. This should happen automatically. Note that my helpers have state, so I can't just make them static and include them where needed.
I could store all helpers in a map with a string key and make it available to the abstract base class all modules inherit from.
std::unordered_map<std::string, void*> helpers;

RendererModule* renderer = new RendererModule(helpers); // argument is passed to
                                                         // base class constructor
Then inside a module, I could access helpers like this.
std::string file = ((FileHelper*)helpers["file"])->Read("C:/file.txt");
But instead, I would like to access the helpers like this.
std::string file = File->Read("C:/file.txt");
To do so, at the moment I separately define members for all helpers in the module base class, and set them for each specific module.
FileHelper* file = new FileHelper(); // some helper instances are passed to
                                     // multiple modules, while others are
                                     // newly created for each one

RendererModule* renderer = new RendererModule();
renderer->File = file;
Is there a way to automate this, so that I don't have to change the module code when adding a new helper to the application, while keeping the second syntax? I am not that familiar with C macros, so I don't know if they are capable of that.
I think I see what your dilemma is, but I have no good solution for it. However, since there are no other answers, I will contribute my two cents.
I use the combination of a few strategies to help me with these kinds of problems:
If the helper instance is truly module-specific, I let the module itself create and manage it inside.
If I don't want the module to know about the creation or destruction of the helper(s), or if the lifetime of the helper instance is not tied to the module that is using it, or if I want to share a helper instance among several modules, I create it outside and pass the reference to the entry-point constructor of the module. Passing it to the constructor has the advantage of making the dependency explicit.
If the number of helpers is high (say more than 2-3) I create an encompassing struct (or simple class) that just contains all the pointers and pass that struct into the constructor of the module or subsystem. For example:
struct Platform { // I sometimes call it "Environment", etc.
    FileHelper * file;
    LogHelper * log;
    MemoryHelper * mem;
    StatsHelper * stats;
};
Note: this is not a particularly nice or safe solution, but it's no worse than managing disparate pointers and it is straightforward.
All the above assumes that helpers have no dependency on modules (i.e. they are on a lower abstraction of dependency level and know nothing about modules.) If some helpers are closer to modules, that is, if you start to want to inject module-on-module dependencies into each other, the above strategies really break down.
In these cases (which obviously happen a lot) I have found that a centralized ModuleManager singleton (probably a global object) works best. You explicitly register your modules with it, along with an explicit order of initialization, and it constructs all the modules. The modules can ask the ModuleManager for references to other modules by name (kind of like a map of strings to module pointers), but they do this once and store the pointers internally in any way they want for convenient and fast access.
However, to prevent messy lifetime and order-of-destruction issues, any time a module is constructed or destructed, the ModuleManager notifies all other modules via callbacks, so they have the chance to update their internal pointers to avoid dangling pointers and other problems.
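A very condensed sketch of that idea (all names here are invented for illustration, and the lifetime callbacks are reduced to a single destruction notification):

#include <map>
#include <string>

class Module {
public:
    virtual ~Module() {}
    // Called so other modules can drop their cached pointers.
    virtual void onModuleUnregistered(const std::string& name) {}
};

class ModuleManager {
public:
    static ModuleManager& instance() {
        static ModuleManager mgr;
        return mgr;
    }

    void registerModule(const std::string& name, Module* module) {
        modules_[name] = module;
    }

    Module* getModule(const std::string& name) const {
        std::map<std::string, Module*>::const_iterator it = modules_.find(name);
        return it != modules_.end() ? it->second : 0;
    }

    void unregisterModule(const std::string& name) {
        modules_.erase(name);
        // Notify the remaining modules so they can update their internal pointers.
        for (std::map<std::string, Module*>::iterator it = modules_.begin();
             it != modules_.end(); ++it) {
            it->second->onModuleUnregistered(name);
        }
    }

private:
    std::map<std::string, Module*> modules_;
};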
That's it. By the way, you might want to investigate articles and implementations related to the "service locator" pattern.
I have a game that consists of a few modules.
One of them is a database module.
I want to make it something like this:
class Database {
public:
    void save(Object& obj); // all my classes in all the modules inherit from Object
    void load(Object& obj);
};
What would be the best way to make that module independent of the other modules (the other modules will store data in the Database using the save and load functions)?
I'm considering a few solutions:
All objects have something like a serialize() method inherited from the Object class (by analogy with Java). The Database uses that method to get a string and saves it. The obvious disadvantages are that every object has to implement a new method, and saving strings won't be optimal (the Database knows nothing about the classes' structure).
Make 'manifests' for all the classes (e.g. in a text file that is sent to the Database). Each manifest describes the structure of a class (e.g. one string, two doubles, one rarely used int). The disadvantage is flexibility: changing the classes in other modules will also affect the manifests.
All classes have their own save and load methods and the Database uses them. I don't want that, because every class would have to know about the database type, and save and load should live in the Database class, not be distributed across the whole code base (that's the main point of making such a module).
The Database knows about all the other modules (and therefore how to save all their objects). The bad thing here is the number of dependencies: changes in any of the modules will affect the Database.
Which way will be good? Or maybe there's a better option?
One solution I've come across is to have all Object subclasses implement a virtual void serialize(ISerializer& serializer) method.
ISerializer would have pure virtual methods like void onInt(int value), void onString(const char* string) etc to be called by the Object subclass inside its serialize()-method. Your Database module could implement ISerializer in two separate classes, DatabaseReader and DatabaseWriter. Later on you could add ObjectInspectionFileDumper, OnScreenObjectStateDebugger or NetworkWriter that also implement ISerializer, but in other modules. Each object only needs to implement the serialize()-method once to gain all those possibilities for extension.
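A condensed sketch of the write path (the interface names follow the description above; Player is an invented example, and a reading serializer would typically pass references instead):

#include <string>

class ISerializer {
public:
    virtual ~ISerializer() {}
    virtual void onInt(int value) = 0;
    virtual void onString(const char* value) = 0;
};

class Object {
public:
    virtual ~Object() {}
    virtual void serialize(ISerializer& serializer) = 0;
};

class Player : public Object {  // hypothetical game object
public:
    virtual void serialize(ISerializer& serializer) {
        serializer.onString(name_.c_str());
        serializer.onInt(health_);
    }

private:
    std::string name_;
    int health_ = 0;
};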
Pros:
Reading and writing are pretty much guaranteed to match up, as long as you don't read data for an old version of an object without some kind of versioning scheme on top.
This is an orthogonal design, where the number of Object types and Serializer types can grow independently of each other.
Cons:
Mainly some virtual function overhead, if that is an issue for your project. This isn't something you will typically be doing much during regular gameplay though.
Later, you might come across things you want to call Objects which you don't want to serialize, then it could make sense to separate that out into an ISerializable interface class, only containing the pure virtual serialize()-method. To accommodate serializers where it matters (like debug serializers), you might want to change to void onInt(const char* name, int value) etc instead.
HTH
I had a script with:
Custom language used only for data
Was loaded using a Script class from C++
I had tags like Type, etc
An interface to get a value for a tag - Script::GetValue(Tag, T& value)
The script was used like this:
Script* script = new Script("someFile");
script->GetValue("Type", type);
Object* obj = CreateObject(type);
obj->Load(script);
Where the object's Load function is used to load the rest of obj's parameters.
Now I changed the script language to Lua. My question is:
Should I keep this way of creating objects (using Lua only for data), or should I expose the factory to Lua and use it from there, something like this (in Lua):
CreateObject("someType")
SetProperty(someObj, someProperty, someValue)
First of all, I want to know which is faster, the first or the second approach. Because I'm refactoring this part, I'm also open to other suggestions. I want to keep Lua because it's fast, easy to integrate, and small.
You may allow your script environment to create C++ objects or not, depending on your needs.
tolua++ uses all the metatable features to allow a very straightforward manipulation of your C++ types in Lua.
For example, this declaration:
// tolua_begin
class SomeCppClass
{
public:
    SomeCppClass();
    ~SomeCppClass();
    int some_field;
    void some_method();
};
// tolua_end
Will automatically generate the Lua bindings to allow this Lua script:
#!lua
-- obj1 must be deleted manually
local obj1 = SomeCppClass:new()
-- obj2 will be automatically garbage collected
local obj2 = SomeCppClass:new_local()
obj1.some_field = 3 -- direct access to "some_field"
obj2:some_method() -- direct call to "some_method"
obj1:delete()
The advantage of this technique is that your Lua code will be very consistent with the underlying C++ code. See http://www.codenix.com/~tolua/tolua++.html
In situations like that, I prefer to set up a bound C function that takes a table of parameters as an argument. So, the Lua script would look like the following.
CreateObject{
    Type = "someType",
    someProperty = someValue,
    -- ...
}
This table would be on top of the stack in the callback function, and all parameters can be accessed by name using lua_getfield.
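A rough sketch of such a callback (Lua 5.1-style C API; the field names match the example above, everything else is illustrative):

#include <lua.hpp>
#include <string>

// Hypothetical handler for the CreateObject{ ... } call shown above.
// The parameter table is the single argument at stack index 1.
static int l_create_object(lua_State* L)
{
    luaL_checktype(L, 1, LUA_TTABLE);

    lua_getfield(L, 1, "Type");            // pushes t["Type"] onto the stack
    std::string type = luaL_checkstring(L, -1);
    lua_pop(L, 1);

    lua_getfield(L, 1, "someProperty");    // pushes t["someProperty"]
    double value = luaL_optnumber(L, -1, 0.0);
    lua_pop(L, 1);

    // ... create the C++ object from 'type' and 'value' here ...
    return 0;                              // no values returned to Lua
}

It would be registered with something like lua_register(L, "CreateObject", l_create_object);.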
You may also want to investigate sandboxing your Lua environment.
The first approach would most likely be faster, but the second approach would probably result in less object initialization code (assuming you're initializing a lot of objects). If you choose the first approach, you can do it manually. If you choose the second approach you might want to use a binding library like Luabind to avoid errors and speed up implementation time, assuming you're doing this for multiple object types and data types.
The simplest approach will probably be to just use Lua for data; if you want to expose the factory and use it via Lua, make sure it's worth the effort first.