Multiplatform class design in C++

I have had an idea for a non-standard way to handle multiplatform interfaces in C++ and would like to know whether it is generally a bad idea, and why.
Currently I can only think of one disadvantage: it is very(?) uncommon to do something like that, and maybe it's not obvious at first sight how it works:
I have a class that will be used on different platforms, for example CMat4x4f32 (a 4x4 matrix class using 32-bit floats).
My platform independent interface looks like this:
class CMat4x4f32
{
public:
//Some methods
#include "Mat4x4f32.platform.inl"
};
Mat4x4f32.platform.inl looks like this:
public:
// Fills the matrix from a DirectX11 SSE matrix
void FromXMMatrix(const XMMatrix& _Matrix);
It just adds a platform-dependent interface to the matrix class.
The .cpp and the Mat4x4f32.platform.inl are located inside subfolders like "win32" or "posix", so in win32 I implement the FromXMMatrix function. My build system adds the right subfolder to the include path depending on the platform I build for.
I could even go a step further and implement a .platform.cpp that is located inside win32 and contains only the functions I add to the interface for that platform.
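For illustration, a hypothetical posix counterpart might look like the sketch below; the GLM type, function name, and implementation details are my invention, not from the question:

// posix/Mat4x4f32.platform.inl -- spliced into the class body on posix builds
public:
    // Fills the matrix from a GLM matrix (hypothetical example)
    void FromGlmMat4(const glm::mat4& _Matrix);

// posix/Mat4x4f32.platform.cpp -- implements only the posix additions
void CMat4x4f32::FromGlmMat4(const glm::mat4& _Matrix)
{
    /* copy the 16 floats into the matrix (member layout omitted here) */
}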
I personally think this is a good idea because it makes writing and using interfaces very easy and clean.
Especially in my renderer library, which heavily uses the matrix class from my base library, I can now use platform-dependent functions (FromXMMatrix) in the DirectX part as if I didn't have any other platforms to worry about.
In the base library itself I can still write platform independent code using the common matrix interface.
I also have other classes where this is useful: for example, an Error class that collects errors, automatically translates them into readable messages, and provides some debugging options.
For win32 I can create error instances from bad DirectX and Win32 HRESULTs, and on Linux I can create them from returned errno values. In the base library I have a class that manages these errors through the common error interface.
It heavily reduces the code required and prevents having ugly platform-dependent utility classes.
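A sketch of how the same .inl trick might carry over to the Error class; all names here are illustrative, not from the question:

// CError.h -- platform-independent interface
#include <string>

class CError
{
public:
    const std::string& Message() const { return m_Message; }
    // Platform-specific factory functions are spliced in here:
#include "Error.platform.inl"
private:
    std::string m_Message;
};

// win32/Error.platform.inl (hypothetical)
public:
    static CError FromHResult(long _HResult); // wraps failed HRESULTs

// posix/Error.platform.inl (hypothetical)
public:
    static CError FromErrno(int _Errno);      // wraps errno values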
So is this good or bad design, and what are the alternatives?

It sounds like you're talking about using the bridge pattern:
http://c2.com/cgi/wiki?BridgePattern
In my personal experience I've developed a lot of platform-independent interfaces with specific implementations using this pattern, and it has worked very well. I've often combined it with the Pimpl idiom:
http://c2.com/cgi/wiki?PimplIdiom
As for alternatives, I've found that this site in general is very good at explaining the pros and cons of various patterns and paradigms:
http://c2.com/cgi/wiki

I would recommend using "pimpl" instead:
#include <memory>

class CMat4x4f32
{
public:
    CMat4x4f32();
    ~CMat4x4f32(); // declared here, defined where CMat4x4f32Impl is complete
    void Foo();
    void Bar();
private:
    class CMat4x4f32Impl; // defined in the platform-specific .cpp
    std::unique_ptr<CMat4x4f32Impl> m_impl;
};
And then in your build-file configs pull in platform-specific .cpp files, in which you define your platform-specific functions:
class CMat4x4f32::CMat4x4f32Impl
{
public:
    void Foo() { /* Actual impl */ }
    void Bar() { /* Actual impl */ }
    // Fills the matrix from a DirectX11 SSE matrix
    void FromXMMatrix(const XMMatrix& _Matrix);
};

CMat4x4f32::CMat4x4f32() : m_impl(new CMat4x4f32Impl()) {}
CMat4x4f32::~CMat4x4f32() = default; // CMat4x4f32Impl is complete here
void CMat4x4f32::Foo() { m_impl->Foo(); }
void CMat4x4f32::Bar() { m_impl->Bar(); }
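One caveat with this split: std::unique_ptr needs the complete CMat4x4f32Impl type in order to delete it, which is why the destructor is declared in the header but defined in the .cpp above. Call sites then look identical on every platform, for example:

void Example()
{
    CMat4x4f32 m; // constructs the platform-specific impl behind the scenes
    m.Foo();
    m.Bar();
}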

Related

UML notation for C++ Link Seam (shared header with different implementation)

In a C++ cross-platform library, we use shared headers which are compiled against a different module version for each OS, a.k.a. a link seam:
//Example:
//CommonHeader.h
#ifndef COMMON_HEADER_H
#define COMMON_HEADER_H
class MyClass {
public:
void foo();
};
#endif
//WindowsModule.cpp
#include "CommonHeader.h"
void MyClass::foo() {
//DO SOME WINDOWS API STUFF
printf("This implements the Windows version\n");
}
//LinuxModule.cpp
#include "CommonHeader.h"
void MyClass::foo() {
//DO SOME LINUX SPECIFIC STUFF HERE
printf("This implements the Linux version\n");
}
Of course, in each build you select only one module, matching the environment you are building for.
This is meant to avoid an indirect call to the functions.
My question is: how do I denote this relationship in UML?
"Inheritance"? "Dependency"? A "note"?
class MyClass {
public:
void foo();
};
This is nothing more than a class contract, so basically an interface, which you realize in different modules. To visualize that, you can use the interface realization notation (like a generalization, but with dashed lines).
The reality, I think, is that you've got only one class in the UML class diagram, MyClass, with a public operation foo(); you just have two code implementations of this class, one for Linux and one for Windows. UML class models are not really set up to answer the question of how you implement this, in your case using C++ with header files: imagine if instead you inlined your functions and ended up writing two inline implementations of MyClass, one for Linux and one for Windows. Logically you would still have one class in your class diagram (but with two implementations), and the header file (or lack thereof) wouldn't come into it. What you need, therefore, is a way to show how the C++ is structured, not a way to show the logical class constructs. I'm not aware of a specific way in UML to represent code structure; however, you could perhaps build a code structure model using Artifacts, which could denote the C++ file structure.
It could be seen as some kind of "inheritance". There is, however, no relation between classes, as there is just one class, so there is no relation between two. The actual construct of using a platform-dependent implementation does, however, imitate an "is-a" relation.

Wrapper over Graphics APIs

I'm a huge fan of having a game engine that has the ability to adapt, not just in what it can do, but also in how it can handle new code. Recently, for my graphics subsystem, I wrote a class to be overridden that works like this:
class LowLevelGraphicsInterface {
public:
    virtual ~LowLevelGraphicsInterface() = default;
    virtual bool setRenderTarget(const RenderTarget* renderTarget) = 0;
    virtual bool setStreamSource(const VertexBuffer* vertexBuffer) = 0;
    virtual bool setShader(const Shader* shader) = 0;
    virtual bool draw(void) = 0;
    //etc.
};
My idea was to create a list of functions that are universal among most graphics APIs. Then for DirectX11 I would just create a new child class:
class LGI_DX11 : public LowLevelGraphicsInterface {
public:
    virtual bool setRenderTarget(const RenderTarget* renderTarget) override;
    virtual bool setStreamSource(const VertexBuffer* vertexBuffer) override;
    virtual bool setShader(const Shader* shader) override;
    virtual bool draw(void) override;
    //etc.
};
Each of these functions would then interface with DX11 directly. I do realize that there is a layer of indirection here. Are people turned off by this fact?
Is this a widely used method? Is there something else I could/should be doing? There is the option of using the preprocessor but that seems messy to me. Someone also mentioned templates to me. What do you guys think?
If the virtual function calls become a problem, there is a compile time method that removes virtual calls using a small amount of preprocessor and a compiler optimization. One possible implementation is something like this:
Declare your base renderer with pure virtual functions:
class RendererBase {
public:
    virtual ~RendererBase() = default;
    virtual bool Draw() = 0;
};
Declare a specific implementation:
#include <d3d11.h>

class RendererDX11 : public RendererBase {
public:
    bool Draw() override;
private:
    // D3D11 specific data
};
Create a header RendererTypes.h to forward declare your renderer based on the type you want to use with some preprocessor:
#ifdef DX11_RENDERER
class RendererDX11;
typedef RendererDX11 Renderer;
#else
class RendererOGL;
typedef RendererOGL Renderer;
#endif
Also create a header Renderer.h to include appropriate headers for your renderer:
#ifdef DX11_RENDERER
#include "RendererDX11.h"
#else
#include "RendererOGL.h"
#endif
Now, everywhere you use your renderer, refer to it as the Renderer type: include RendererTypes.h in your header files and Renderer.h in your .cpp files.
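For example, a consumer might look like this (Game is an invented name for illustration):

// Game.h -- only needs the forward-declared alias
#include "RendererTypes.h"

class Game {
public:
    void Render();
private:
    Renderer* m_renderer; // RendererDX11 or RendererOGL, per build config
};

// Game.cpp -- pulls in the concrete renderer header
#include "Game.h"
#include "Renderer.h"

void Game::Render() {
    m_renderer->Draw(); // call through the concrete type, so the optimizer
                        // can typically devirtualize it in release builds
}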
Each of your renderer implementations should be in a different project. Then create different build configurations to compile with whichever renderer implementation you want to use. You don't want to include DirectX code in a Linux configuration, for example.
In debug builds, virtual function calls might still be made, but in release builds they are optimized away, because you never make calls through the base class interface. It is only used to enforce a common signature for your renderer classes at compile time.
While you do need a little bit of preprocessor for this method, it is minimal and doesn't interfere with the readability of your code, since it is isolated and limited to some typedefs and includes. The one downside is that you cannot switch renderer implementations at runtime with this method, as each implementation is built into a separate executable. However, there really isn't much need to switch configurations at runtime anyway.
I use the approach with an abstract base class for the render device in my application. It works fine and lets me choose the renderer dynamically at runtime. (I use it to fall back from DirectX10 to DirectX9 if the former is not supported, e.g. on Windows XP.)
I would like to point out that it is not the virtual function call that costs performance, but the conversion of the argument types involved. To be really generic, the public interface to the renderer uses its own set of parameter types, such as a custom IShader and a custom Matrix3D type. No type declared in the DirectX API is visible to the rest of the application, since e.g. OpenGL has different matrix types and shader interfaces. The downside of this is that I have to convert all matrix and vector/point types from my custom types to the ones the shader uses in the concrete render device implementation. This is far more expensive than the cost of a virtual function call.
If you do the distinction using the preprocessor, you also need to map the different interface types like this. Many are the same between DirectX10 and DirectX11, but not between DirectX and OpenGL.
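To illustrate the kind of boundary conversion being described (Matrix3D is the answer's custom type; the function and the use of DirectXMath here are my sketch):

#include <DirectXMath.h>
#include <cstring>

// Engine-side math type, independent of any graphics API.
struct Matrix3D { float m[4][4]; };

// Inside the DirectX render device implementation:
void SetTransform(const Matrix3D& in)
{
    // The copy/convert work at this boundary is what typically costs more
    // than the virtual call that led here.
    DirectX::XMFLOAT4X4 tmp;
    std::memcpy(&tmp, in.m, sizeof(tmp));
    DirectX::XMMATRIX out = DirectX::XMLoadFloat4x4(&tmp);
    // ... hand 'out' to the shader's constant buffer ...
    (void)out;
}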
Edit: See the answer in c++ Having multiple graphics options for an example implementation.
So, I realize that this is an old question, but I can't resist chiming in. Wanting to write code like this is just a side effect of trying to cope with object-oriented indoctrination.
The first question is whether or not you really need to swap out rendering back-ends, or just think it's cool. If an appropriate back-end can be determined at build time for a given platform, then the problem is solved: use a plain, non-virtual interface with an implementation selected at build time.
If you find that you really do need to swap it out, still use a non-virtual interface; just load the implementations from shared libraries. With this kind of swapping, you will likely want both engine rendering code and some performance-intensive game-specific rendering code factored out and swappable. That way, you can use the common, high-level engine rendering interface for things done mostly by the engine, while still having access to back-end-specific code to avoid the conversion costs mentioned by PMF.
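A minimal POSIX sketch of that kind of swapping; the function table, library name, and entry-point symbol are invented for illustration:

#include <dlfcn.h>
#include <cstdio>

// Plain, non-virtual function table exported by each renderer .so.
struct RendererAPI {
    void (*init)();
    void (*draw)();
    void (*shutdown)();
};

int main()
{
    // Pick the back-end at startup instead of via virtual dispatch.
    void* handle = dlopen("./librenderer_gl.so", RTLD_NOW);
    if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

    // Each back-end exports a single well-known entry point.
    auto get_api = reinterpret_cast<const RendererAPI* (*)()>(
        dlsym(handle, "GetRendererAPI"));
    if (!get_api) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

    const RendererAPI* api = get_api();
    api->init();
    api->draw();
    api->shutdown();
    dlclose(handle);
    return 0;
}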
Now, it should be said that while swapping via shared libraries introduces indirection: 1) you can easily get that indirection to be less than or roughly equal to that of virtual calls, and 2) this high-level indirection is never a performance concern in any substantial game or engine. The main benefit is keeping dead code unloaded (and out of the way) and simplifying APIs and the overall project design, increasing readability and comprehension.
Beginners typically aren't aware of this, because there is so much blind OO pushing these days, but this style of "OO first, ask questions never" is not without cost. This kind of design carries a taxing code-comprehension cost and leads to code (at much lower levels than this example) that is inherently slow. Object orientation certainly has its place, but (in games and other performance-intensive applications) the best way to design that I have found is to write applications as minimally OO as possible, only conceding when a problem forces your hand. You will develop an intuition for where to draw the line as you gain more experience.

Runtime interfaces and object composition in C++

I am searching for a simple, lightweight solution for interface-based runtime object composition in C++. I want to be able to specify interfaces (method declarations) and objects (creatable through the factory pattern) that implement them. At runtime I want mechanisms to instantiate these objects and interconnect them based on interface connectors. The method calls at runtime should remain fairly cheap, i.e. only a few extra instructions per call, comparable to functor patterns.
The whole thing needs to be platform independent (at least MS Windows and Linux). And the solution needs to be liberally licensed, like open-source LGPL or (even better) BSD, especially allowing use in commercial products.
What I do not want are heavy things like networking, inter-process communication, extra compiler steps (one-time code generation is OK though), or dependencies on heavy libraries (like Qt).
The concrete scenario is: I have such a mechanism in a larger piece of software, but it is not very well implemented. Interfaces are realized by base classes exported from DLLs. These DLLs also export factory functions to instantiate the implementing objects, based on hand-written class IDs.
Before I now start to redesign and implement something better myself, I want to know if there is something out there which would be even better.
Edit: The solution also needs to support multi-threaded environments. Additionally, as everything happens inside the same process, I do not need data serialization mechanisms of any kind.
Edit: I know how such mechanisms work, and I know that several textbooks contain corresponding examples. I do not want to write it myself. The aim of my question is: is there some sort of "industry standard" library for this? It is a small problem (within a single process) and I am really only searching for a small solution.
Edit: I got the suggestion to add a pseudo-code example of what I really want to do. So here it is:
Somewhere I want to define interfaces. I do not care if it's C-Headers or some language and code generation.
class interface1 {
public:
virtual void do_stuff(void) = 0;
};
class interface2 {
public:
virtual void do_more_stuff(void) = 0;
};
Then I want to provide (multiple) implementations. These may even be placed in DLL-based plugins. In particular, these two classes may be implemented in two different DLLs that do not know each other at compile time.
class A : public interface1 {
public:
virtual void do_stuff(void) {
// I even need to call further interfaces here
// This call should, however, not require anything heavy, like data serialization or something.
this->con->do_more_stuff();
}
// Interface connectors of some kind. Here I use something like a template
some_connector<interface2> con;
};
class B : public interface2 {
public:
virtual void do_more_stuff() {
// finally doing some stuff
}
};
Finally, in my application's main code, I want to be able to compose my application logic at runtime (e.g. based on user input):
int main(void) {
// first I create my objects through a factory
some_object a = some_factory::create(some_guid<A>);
some_object b = some_factory::create(some_guid<B>);
// Then I want to connect the interface-connector 'con' of object 'a' to the instance of object 'b'
some_thing::connect(a, some_guid<A::con>, b);
// finally I want to call an interface-method.
interface1 *ia = a.some_cast<interface1>();
ia->do_stuff();
}
I am perfectly able to write such a solution myself (including all the pitfalls). What I am searching for is a solution (e.g. a library) that is used and maintained by a wide user base.
While it is not widely used, I wrote a library several years ago that does this.
You can see it on GitHub (the zen-core library), and it's also available on Google Code.
The GitHub version contains only the core libraries, which is really all that you need. The Google Code version contains a LOT of extra libraries, primarily for game development, but it provides a lot of good examples of how to use it.
The implementation was inspired by Eclipse's plugin system, using a plugin.xml file that lists the available plugins and a config.xml file that indicates which plugins to load. I'd also like to change it so that it doesn't depend on libxml2, and to allow plugins to be specified by other methods.
The documentation has been destroyed thanks to some hackers, but if you think this would be useful then I can write enough documentation to get you started.
A co-worker gave me two further tips:
The Loki library (originating from the Modern C++ Design book):
http://loki-lib.sourceforge.net/
A Boost-like library:
http://kifri.fri.uniza.sk/~chochlik/mirror-lib/html/
I still have not looked at all the ideas I got.

Separating public interface from implementation details

I have to design a Font class that will have multiple implementations spanning platforms or different libraries (Win32 GDI or FreeType, for example). So essentially there will be a single shared header/interface file and multiple .cpp implementations (selected at build time). I would prefer to keep the public interface (header file) clean of any implementation details, but this is usually hard to achieve: a font object has to drag along some private state, like a handle in GDI, or a FreeType face object.
In C++, what is the best way to track private implementation details? Should I use static data in the implementation files?
Edit: Found this great article on the subject: Separating Interface and Implementation in C++.
P.S. I recall that in Objective-C there are private categories that let you define a class extension in your private implementation file, making for quite an elegant solution.
You can use the PIMPL design pattern.
This is basically where your object holds a pointer to the actual platform-dependent part and forwards all platform-dependent calls to that object.
Font.h
class FontPlatform;

class Font
{
public:
    Font();
    ~Font(); // defined in the .cpp, where FontPlatform is complete
    void Stuff();
    void StuffPlatform();
private:
    FontPlatform* plat;
};
Font.cpp
#include "Font.h"
#if Win
#include "FontWindows.h"
#else
#error "Not implemented on non Win platforms"
#endif
Font::Font()
: plat(new PLATFORM_SPCIFIC_FONT) // DO this cleaner by using factory.
{}
void Font::Stuff() { /* DoStuff */ }
void Font::StuffPlatform()
{
plat->doStuffPlat(); // Call the platform specific code.
}

Design with (pure) virtual functions in C++

First of all, I have to mention that I have read many questions about C++ virtual functions on Stack Overflow. I have some knowledge of how they work, but when I start a project and try to design something, I never consider or use virtual or pure virtual implementations. Maybe it is because I lack knowledge of how they work, or maybe I don't know how to realize certain things with them. I think this is bad, because I'm not using object-oriented development to its full extent.
Maybe someone can advise me on how to get used to them?
Check out abstract base classes and interfaces in Java or C# to get ideas on when pure virtuals are useful.
Virtual functions are pretty basic to OO. There are plenty of books out there to help you. Myself, I like Larman's Applying UML and Patterns.
but when I start the project and try to design something I never consider/use virtual or pure virtual implementations.
Here's something you can try:
Figure out the set of classes you use
Do you see some class hierarchies? A Circle is-a Shape sort of relationship? (A sketch follows this list.)
Isolate behavior
Bubble behavior up/down to form interfaces (base classes); code to interfaces, not implementations
Implement these as virtual functions
The responsibility for defining the exact semantics of the operation(s) rests with the subclasses
Create your subclasses
Implement (override) the virtual functions
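Applying those steps to the Circle/Shape example above might look like this minimal sketch:

#include <memory>
#include <vector>

// Isolate behavior into an interface (base class).
class Shape {
public:
    virtual ~Shape() = default;
    virtual double Area() const = 0; // subclasses define the exact semantics
};

// Create subclasses and override the virtual functions.
class Circle : public Shape {
public:
    explicit Circle(double r) : m_r(r) {}
    double Area() const override { return 3.14159265358979 * m_r * m_r; }
private:
    double m_r;
};

// Client code works against the interface, not the implementations.
double TotalArea(const std::vector<std::unique_ptr<Shape>>& shapes)
{
    double sum = 0.0;
    for (const auto& s : shapes) sum += s->Area();
    return sum;
}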
But don't force a hierarchy just for the sake of using them. An example from real code I have been working on recently:
class Codec {
public:
    virtual ~Codec() = default; // needed to delete codecs through Codec*
    virtual GUID Guid() { return GUID_NULL; }
};
class JpegEncoder : public Codec {
public:
virtual GUID Guid() { return GUID_JpegEncoder; }
};
class PngDecoder : public Codec {
public:
virtual GUID Guid() { return GUID_PngDecoder; }
};
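Client code then dispatches through the base class; a sketch (the lookup function is mine, and the GUID comparison assumes the usual Windows guiddef.h operator==):

#include <memory>
#include <vector>

const Codec* FindCodec(const std::vector<std::unique_ptr<Codec>>& codecs,
                       const GUID& wanted)
{
    for (const auto& c : codecs) {
        if (c->Guid() == wanted) // virtual dispatch picks the right override
            return c.get();
    }
    return nullptr;
}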
I don't have a ton of time ATM, but here is a simple example.
In my job I maintain an application which talks to various hardware devices. Among these devices, many motors are used for various purposes. Now, I don't know if you have done any development with motors and drives, but they are all a bit different, even if they claim to follow a standard like CANopen. Anyway, you need to write some new code when you switch vendors, perhaps because your motor or drive was end-of-lifed, etc. On top of that, this code has to maintain compatibility with older devices, and we also have various models of similar devices. So, all in all, you have to deal with many different motors and interfaces.
Now, in the code I use an abstract class named "iMotor", which contains only pure virtual functions. The implementation code references only the iMotor class. I create a DLL for each type of motor with a different implementation, but they all implement the iMotor interface. So, all I need to do to add or change a motor is create a new implementation and drop that DLL in place of the old one. Because the code which uses these motor implementations deals only with the iMotor interface, it never needs to change; only the implementation of how each motor does what it does changes.
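A sketch of what such an interface might look like; the answer only names iMotor, so the methods and factory below are invented for illustration:

// iMotor.h -- the only header the application code ever sees.
class iMotor {
public:
    virtual ~iMotor() = default;
    virtual bool MoveTo(double position) = 0;
    virtual bool Stop() = 0;
    virtual double CurrentPosition() const = 0;
};

// VendorAMotor.cpp -- lives in one vendor's DLL.
class VendorAMotor : public iMotor {
public:
    bool MoveTo(double position) override { /* vendor A protocol */ return true; }
    bool Stop() override { return true; }
    double CurrentPosition() const override { return 0.0; }
};

// Exported factory; the application only ever holds an iMotor*.
extern "C" iMotor* CreateMotor() { return new VendorAMotor(); }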
If you google for design patterns like the "strategy pattern" and "command pattern" you will find some good uses of interfaces and polymorphism. Besides that, design patterns are always very useful to know.
You don't HAVE to use them, but they have their advantages.
Generally they are used as an "interface" between two different types of functionality that, code-wise, aren't very related.
An example would be file loading. A simple file-handling class would seem to be perfect. However, at a later stage you are asked to move all your files into a single packaged file while maintaining support for individual files for debugging purposes. How do you handle loading here? Obviously things will be handled rather differently, because suddenly you can't just open a file. Instead you need to look up the file's location and then seek to that location before loading, pretty much, as normal.
The obvious thing to do is implement an abstract base class. Perhaps call it BaseFile. The OpenFile function's handling will differ depending on whether you are using the PackageFile or the DiskFile class. So make that a pure virtual.
Then, when you derive the PackageFile and DiskFile classes, you provide the appropriate implementation for opening a file.
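A sketch of that hierarchy (class and method names follow the description above; the bodies are invented):

#include <cstddef>
#include <cstdio>
#include <string>

class BaseFile {
public:
    virtual ~BaseFile() = default;
    // Pure virtual: opening differs between loose files and a package.
    virtual bool OpenFile(const std::string& name) = 0;
    virtual std::size_t Read(void* dst, std::size_t bytes) = 0;
};

class DiskFile : public BaseFile {
public:
    bool OpenFile(const std::string& name) override {
        m_fp = std::fopen(name.c_str(), "rb"); // open the loose file directly
        return m_fp != nullptr;
    }
    std::size_t Read(void* dst, std::size_t bytes) override {
        return m_fp ? std::fread(dst, 1, bytes, m_fp) : 0;
    }
private:
    std::FILE* m_fp = nullptr;
};

class PackageFile : public BaseFile {
public:
    bool OpenFile(const std::string& name) override {
        // Look up the file's offset in the package index, then seek to it
        // before reading "as normal" (index handling omitted in this sketch).
        return false;
    }
    std::size_t Read(void* dst, std::size_t bytes) override { return 0; }
};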
You can then add something such as
#if !defined( DISK_FILE ) && defined ( _DEBUG )
#define DISK_FILE 1
#elif !defined( DISK_FILE )
#define DISK_FILE 0
#endif
#if DISK_FILE
typedef DiskFile File;
#else
typedef PackageFile File;
#endif
Now you would just use the File typedef for all file handling. Equally, if you don't pre-define DISK_FILE as 0 or 1, then when debug is set it will automatically load from disk; otherwise it will load from the package file.
Of course, such a construct still allows you to load from the package file in a debug build simply by defining DISK_FILE as 1 in advance, and it also allows you to use disk access in a release build by setting DISK_FILE to 0.