How Can I Achieve This Class Structure? - C++

I'm racking my brain trying to figure out how to write cross-platform classes while avoiding the cost of virtual functions and any kind of ugliness in the platform-specific versions of the classes. Here is what I have tried.
PlatformIndependantClass.hpp
class PlatformIndependantClass {
public:
PlatformIndependantClass();
std::string GetPlatformName();
private:
PlatformIndependantClass* mImplementation;
};
LinuxClass.hpp
#include "PlatformIndependantClass.hpp"
class LinuxClass : public PlatformIndependantClass{
public:
std::string GetPlatformName();
};
WindowsClass.hpp
#include "PlatformIndependantClass.hpp"
class WindowsClass : public PlatformIndependantClass {
public:
std::string GetPlatformName();
};
PlatformIndependantClass.cpp
#include "PlatformIndependantClass.hpp"
#include "LinuxClass.hpp"
#include "WindowsClass.hpp"
PlatformIndependantClass::PlatformIndependantClass() {
#ifdef TARGET_LINUX
mImplementation = new LinuxClass();
#endif
#ifdef TARGET_WINDOWS
mImplementation = new WindowsClass();
#endif
}
std::string PlatformIndependantClass::GetPlatformName() {
return mImplementation->GetPlatformName();
}
LinuxClass.cpp
#include "LinuxClass.hpp"
std::string LinuxClass::GetPlatformName() {
return std::string("This was compiled on linux!");
}
WindowsClass.cpp
#include "WindowsClass.hpp"
std::string WindowsClass::GetPlatformName() {
return std::string("This was compiled on windows!");
}
main.cpp
#include <iostream>
#include "PlatformIndependantClass.hpp"
using namespace std;
int main()
{
PlatformIndependantClass* cl = new PlatformIndependantClass();
cout << "Hello world!" << endl;
cout << "Operating system name is: " << cl->GetPlatformName() << endl;
cout << "Bye!" << endl;
return 0;
}
Now, this compiles fine but I get a segmentation fault. I believe this is because the platform-specific classes inherit from PlatformIndependantClass, whose constructor creates another instance of the platform-specific class, so I get infinite recursion. Every time I try, I just get extremely confused!
How can I achieve a design like this properly? Or is this just a horrible idea? I have been trying to find out how to write cross-platform classes, but I just get a load of results about cross-platform libraries. Any help will be gratefully accepted :)

I think what you are trying to do can be accomplished much more easily...
Object.h:
#include <normal includes>
#if WINDOWS
#include <windows includes>
#endif
#if LINUX
#include <linux includes>
#endif
class Object
{
private:
#if WINDOWS
//Windows Specific Fields...
#endif
#if LINUX
//Linux Specific Fields...
#endif
public:
//Function that performs platform specific functionality
void DoPlatformSpecificStuff();
//Nothing platform specific here
void DoStuff();
};
Object.cpp
#include "Object.h"
void Object::DoStuff() { ... }
ObjectWin32.cpp
#if WINDOWS
#include "Object.h"
void Object::DoPlatformSpecificStuff()
{
//Windows specific stuff...
}
#endif
ObjectLinux.cpp
#if LINUX
#include "Object.h"
void Object::DoPlatformSpecificStuff()
{
//Linux specific stuff...
}
#endif
And so on. I think this accomplishes what you are trying to do in a somewhat simpler fashion. Also, no virtual functions are needed.

Starting from the end, yes, truly a horrible idea, as are most ideas that start with "I want to avoid the cost of virtual functions".
As to why you're getting the segmentation fault (a stack overflow, specifically): it's because you aren't using virtual functions, but static (compile-time) binding. The compiler doesn't know that mImplementation is anything but a PlatformIndependantClass, so when you call return mImplementation->GetPlatformName() you're calling the same function over and over.
What you achieved is called shadowing; you're using compile-time function resolution. The compiler will call the GetPlatformName function of the declared type of the variable you're calling it through, since there's no virtual table to override the pointers to the actual functions. Since mImplementation is a PlatformIndependantClass*, mImplementation->GetPlatformName will always be PlatformIndependantClass::GetPlatformName.
Edit: Of course the question of why you need to create both a Windows and a Linux copy of your engine at the same time comes to mind. You'll never use both of them at the same time, right?
So why not just have two different libraries, one for each system, and link the right one from your makefile? You get the best of both worlds!
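For illustration, a minimal sketch of that build-level approach (the names are mine, not the asker's): one class declared once, with two implementation files, and each platform's build only ever compiles and links one of them.
// PlatformClass.hpp - a single class, no inheritance, no virtuals (include guard omitted)
#include <string>

class PlatformClass {
public:
    std::string GetPlatformName(); // defined in exactly one of the .cpp files below
};

// PlatformClass_linux.cpp - listed only in the Linux makefile
#include "PlatformClass.hpp"
std::string PlatformClass::GetPlatformName() {
    return std::string("This was compiled on linux!");
}

// PlatformClass_windows.cpp - listed only in the Windows makefile
#include "PlatformClass.hpp"
std::string PlatformClass::GetPlatformName() {
    return std::string("This was compiled on windows!");
}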

Instead of using the constructor to build the platform-specific instance, I would create a static factory method to create the instances:
PlatformIndependantClass* PlatformIndependantClass::getPlatformIndependantClass() {
#ifdef TARGET_LINUX
return new LinuxClass();
#endif
#ifdef TARGET_WINDOWS
return new WindowsClass();
#endif
}
This way you avoid the recursion, and you also don't need your mImplementation pointer.
I would also try to avoid platform-specific classes, but that's another story :)
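For reference, a sketch of what the header side might then look like; note that GetPlatformName also has to be virtual for the call through the returned base pointer to reach the derived class, and the virtual destructor is my addition so that deleting through the base pointer is safe:
#include <string>

class PlatformIndependantClass {
public:
    virtual ~PlatformIndependantClass() {}
    // virtual, so the derived implementation is actually called
    virtual std::string GetPlatformName() = 0;
    // static factory replaces the recursive constructor; no mImplementation needed
    static PlatformIndependantClass* getPlatformIndependantClass();
};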

When you want polymorphic behavior without any run-time overhead, you can try the curiously recurring template pattern (CRTP). The base class is a template, and the derived class uses itself as the template parameter for the base. This requires the base class to be a template, which in practice means implementing it entirely in the header (.hpp) file.
I'm not sure how to apply the pattern in your particular case.
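That said, a rough, untested sketch of how it might map onto the question's classes could look like this (the typedef used to pick the platform type is just one option, not part of the pattern itself):
#include <string>

template <typename Derived>
class PlatformIndependantClass {
public:
    std::string GetPlatformName() {
        // compile-time dispatch to the derived class, no virtual call
        return static_cast<Derived*>(this)->GetPlatformNameImpl();
    }
};

class LinuxClass : public PlatformIndependantClass<LinuxClass> {
public:
    std::string GetPlatformNameImpl() { return "This was compiled on linux!"; }
};

class WindowsClass : public PlatformIndependantClass<WindowsClass> {
public:
    std::string GetPlatformNameImpl() { return "This was compiled on windows!"; }
};

#ifdef TARGET_LINUX
typedef LinuxClass PlatformClass;
#else
typedef WindowsClass PlatformClass;
#endif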

I don't think the constructor is causing the infinite recursion. It's the GetPlatformName() function. Because it's not set as virtual, it can only call itself.
Two solutions: Make that function virtual, or do away with the inheritance completely.
Either way, a function whose only job is to forward to another function will cost more than a virtual call would have in the first place. So I would say keep the inheritance, virtualize the functions that are specific to the platform, and call them directly, without going through a base class wrapper.
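A minimal sketch of that suggestion, reusing the question's names (how the derived type gets chosen, for example with a single #ifdef at the point of creation, is left as in the other answers):
#include <string>

class PlatformIndependantClass {
public:
    virtual ~PlatformIndependantClass() {}
    // pure virtual: no base implementation left to recurse into
    virtual std::string GetPlatformName() = 0;
};

class LinuxClass : public PlatformIndependantClass {
public:
    std::string GetPlatformName() { return "This was compiled on linux!"; }
};

// main.cpp then creates LinuxClass (or WindowsClass) directly and can still
// hold it through a PlatformIndependantClass* where needed.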

You are correct about the infinite loop. The fix is actually easier than you'd think.
PlatformIndependantClass.hpp
#include <string>
#include <memory>
//other portable headers
struct PlatformDependantClass; //defined in Cpp file
class PlatformIndependantClass {
public:
PlatformIndependantClass();
~PlatformIndependantClass();
std::string GetPlatformName();
private:
std::unique_ptr<PlatformDependantClass> mImplementation; //note, different type
};
LinuxClass.cpp
#ifdef __GNUC__
#include <linux headers>
#include "PlatformIndependantClass.hpp"
struct PlatformDependantClass { //linux only stuff
//stuff
};
PlatformIndependantClass::PlatformIndependantClass() {
mImplementation.reset(new PlatformDependantClass);
}
PlatformIndependantClass::~PlatformIndependantClass() {
}
std::string PlatformIndependantClass::GetPlatformName() {
return std::string("This was compiled on linux!");
}
#endif //__GNUC__
WindowsClass.cpp
#ifdef _MSC_VER
#include <windows headers>
#include "PlatformIndependantClass.hpp"
struct PlatformDependantClass { //windows only stuff
//stuff
};
PlatformIndependantClass::PlatformIndependantClass() {
mImplementation.reset(new PlatformDependantClass);
}
PlatformIndependantClass::~PlatformIndependantClass() {
}
std::string PlatformIndependantClass::GetPlatformName() {
return std::string("This was compiled on Windows!");
}
#endif //_MSC_VER
There's only ONE class defined here. On Windows it only compiles and contains Windows stuff, and on Linux it only compiles and contains Linux stuff. Note that hiding the implementation behind a pointer to a forward-declared struct like this is called an "opaque pointer" or the "pimpl idiom": http://en.wikipedia.org/wiki/Opaque_pointer

Related

Plugins using Pluma

Overview
I am trying to develop a C++ application which allows for user-created plugins.
I found a nice library called Pluma (http://pluma-framework.sourceforge.net/) which functionally seems to be exactly what I want.
After going through their tutorial, I was able to (with a bit of difficulty) convince the plugin to compile. However, it refuses to play nice and connect with the main program, returning various errors depending on how I try to implement things.
Problem
If I comment out the line labeled 'Main problem line' (in the last file, main.cpp), the plugin compiles successfully, and the main app can recognize it, but it says that "Nothing registered by plugin 'libRNCypher'", and none of the functions can be called.
If I compile that line, the main application instead says "Failed to load library 'Plugins/libRNCypher.so'. OS returned error: 'Plugins/libRNCypher.so: undefined symbol: _ZTIN5pluma8ProviderE".
My guess is that it has something to do with the way the plugin was compiled, as compiling it initially did not work and Code::Blocks told me to compile with "-fPIC" as a flag (doing so made it compile).
Code
Code below:
Main.cpp
#include "Pluma/Pluma.hpp"
#include "CryptoBase.h"
int main()
{
pluma::Pluma manager;
manager.acceptProviderType< CryptoBaseProvider >();
manager.loadFromFolder("Plugins", true);
std::vector<CryptoBaseProvider*> providers;
manager.getProviders(providers);
return 0;
}
CryptoBase.h
#ifndef CRYPTOBASE_H_INCLUDED
#define CRYPTOBASE_H_INCLUDED
#include "Pluma/Pluma.hpp"
#include <string>
#include <vector>
#include <bitset>
//Base class from which all crypto plug-ins will derive
class CryptoBase
{
public:
CryptoBase();
~CryptoBase();
virtual std::string GetCypherName() const = 0;
virtual std::vector<std::string> GetCryptoRecApps() const = 0;
virtual void HandleData(std::vector< std::bitset<8> > _data) const = 0;
};
PLUMA_PROVIDER_HEADER(CryptoBase)
#endif // CRYPTOBASE_H_INCLUDED
RNCypher.h (This is part of the plugin)
#ifndef RNCYPHER_H_INCLUDED
#define RNCYPHER_H_INCLUDED
#include <string>
#include <vector>
#include <bitset>
#include "../Encoder/Pluma/Pluma.hpp"
#include "../Encoder/CryptoBase.h"
class RNCypher : public CryptoBase
{
public:
std::string GetCypherName() const
{
return "RNCypher";
}
std::vector<std::string> GetCryptoRecApps() const
{
std::vector<std::string> vec;
vec.push_back("Storage");
return vec;
}
void HandleData(std::vector< std::bitset<8> > _data) const
{
char letter = 'v';
_data.clear();
_data.push_back(std::bitset<8>(letter));
return;
}
};
PLUMA_INHERIT_PROVIDER(RNCypher, CryptoBase);
#endif // RNCYPHER_H_INCLUDED
main.cpp (This is part of the plugin)
#include "../Encoder/Pluma/Connector.hpp"
#include "RNCypher.h"
PLUMA_CONNECTOR
bool connect(pluma::Host& host)
{
host.add( new RNCypherProvider() ); //<- Main problem line
return true;
}
Additional Details
I'm compiling on Ubuntu 16.04, using Code::Blocks 16.01.
The second error message seems not to come from Pluma itself, but from a file I also had to link, #include <dlfcn.h> (which might be a Linux file?).
I would prefer to use an existing library rather than write my own code as I would like this to be cross-platform. I am, however, open to any suggestions.
Sorry for all of the code, but I believe this is enough to reproduce the error that I am having.
Thank You
Thank you for taking the time to read this, and thank you in advance for your help!
All the best, and happy holidays!
I was not able to reproduce your problem; however, looking at
http://pluma-framework.sourceforge.net/documentation/index.htm,
I've noticed that:
in your RNCypher.h file you are missing something like
PLUMA_INHERIT_PROVIDER(RNCypher, CryptoBase)
it also seems that there's no CryptoBase.cpp file containing something like
#include "CryptoBase.h"
PLUMA_PROVIDER_SOURCE(CryptoBase, 1, 1);
finally, in CryptoBase.h I would declare a virtual destructor (see "Why should I declare a virtual destructor for an abstract class in C++?") and provide a definition for it, and you should not declare a default constructor without providing a definition for it (see for instance "Is it correct to use declaration only for empty private constructors in C++?"); of course, the last consideration only applies if there isn't another file in which you have provided such definitions.
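Putting those points together, a rough sketch of the suggested changes might look like the following (hedged: this assumes Pluma's macros behave as in its documentation, and only the relevant parts of CryptoBase.h are shown, include guards omitted):
// CryptoBase.h - drop the undefined default constructor, give the base a defined virtual destructor
#include "Pluma/Pluma.hpp"
#include <string>
#include <vector>
#include <bitset>

class CryptoBase
{
public:
    virtual ~CryptoBase() {}
    virtual std::string GetCypherName() const = 0;
    virtual std::vector<std::string> GetCryptoRecApps() const = 0;
    virtual void HandleData(std::vector< std::bitset<8> > _data) const = 0;
};
PLUMA_PROVIDER_HEADER(CryptoBase)

// CryptoBase.cpp - the missing translation unit that defines the provider
#include "CryptoBase.h"
PLUMA_PROVIDER_SOURCE(CryptoBase, 1, 1);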

Using CRTP to separate platform specific code

I recently got this idea to separate different platform-specific implementations (could be Win32/X, OpenGL/DX/Vulkan, etc.) using CRTP (the curiously recurring template pattern). I thought of something like this:
IDisplayDevice.h
#pragma once
#include "OSConfig.h"
namespace cbn
{
template <class TDerived> // Win32 type here
struct IDisplayDevice
{
bool run_frame(void) {
return static_cast<TDerived*>(this)->run_frame();
}
// a lot of other methods ...
};
}
Win32DisplayDevice.h:
#pragma once
#include "OSConfig.h"
// make sure it only gets compiled on win32/64
#if defined(CBN_OS_WINDOWS)
namespace cbn
{
class CWin32DisplayDevice
: public IDisplayDevice<CWin32DisplayDevice> {
public:
bool run_frame(void) {
call_hInstance();
call_hWnd();
#ifdef CBN_RENDERAPI_DX11
call_dx11_bufferswap();
#endif
return some_state;
}
private:
};
}
#endif
I would then provide another implementation the same way in XDisplayDevice.h.
Finally, I would make a common interface in DisplayDevice.h:
#include "Win32DisplayDevice.h"
#include "XDisplayDevice.h"
namespace cbn
{
class CDisplayDevice
{
public:
CBN_INLINE
bool run_frame(void) { return device_.run_frame(); }
private:
#if defined(CBN_OS_WINDOWS)
CWin32DisplayDevice device_;
#elif defined(CBN_OS_LINUX)
CXDisplayDevice device_;
// further #elif branches for other platforms would go here
#else
// does nothing ...
CNillDisplayDevice device_;
#endif
};
}
So I could call it in main.cpp like:
int main()
{
CDisplayDevice my_device;
while(my_device.run_frame())
{
do_some_magic();
}
}
Do you think this would be a good way to deal with platform-specific code?
PS: I avoid virtuals and polymorphism because of platform constraints (Android, PS4, etc.) where pointer calls matter.
Consider this code:
struct OpenGLTraits // keep this in it's own files (.h and .cpp)
{
bool run_frame() { /* open gl specific stuff here */ }
};
struct VulkanTraits // keep this in it's own files (.h and .cpp)
{
bool run_frame() { /* vulkan specific stuff here */ }
};
template<typename T>
class DisplayDevice
{
using graphic_traits = T;
graphic_traits graphics; // maybe inject this in constructor?
void do_your_operation()
{
if(!graphics.run_frame()) // subsystem-specific call
{ ... }
}
};
This will use subsystem-specific calls and abstract them away behind a common API, without a virtual call involved. You can even inline the run_frame() implementations.
Edit (address comment question):
consider this:
#ifdef FLAG_SPECIFYING_OPEN_GL
using Device = DisplayDevice<OpenGLTraits>;
#elif FLAG_SPECIFYING_VULKAN
using Device = DisplayDevice<VulkanTraits>;
...
#endif
client code:
Device device;
device.do_your_operation();
I don't really see the benefit of CRTP here: you still have platform-specific (as opposed to feature-specific) ifdefs within the code, and this tends to make things harder to read and maintain. I usually prefer having different implementations in different source files - and in fact, generally having separate directories for each platform.
such as:
platform/win64
platform/win32
platform/gnu-linux
platform/freebsd
In this way you can largely avoid the ifdef clutter, and you generally know where to find the platform-specific things. You also know what you need to write in order to port things to another platform. The build system can then be made to select the correct sources, rather than relying on the preprocessor.

How can I change what a class inherits from at compile-time?

In my quest to create a cross-platform GUI Framework, I have hit the following snag:
Suppose I have a central "Window" class, in the project's general, platform-independent include folder:
//include/window.hpp
class Window
{
//Public interface
}
I then have several platform-dependent implementation classes, like so:
//src/{platform}/window.hpp
class WinWindow {...}; //Windows
class OSXWindow {...}; //OSX
class X11Window {...}; //Unix
Finally, there is the original Window class' .cpp file, where I want to "bind" the implementation class to the general class. Purely conceptually, this is what I want to be able to do:
//src/window.cpp
//Suppose we're on Windows
#include "include/window.hpp"
#include "src/win/window.hpp"
class Window : private WinWindow; //Redefine Window's inheritance
I know this is by no means valid C++, and that's the point. I have thought of two possible ways to solve this problem, and I have problems with both.
pImpl-style implementation
Make Window hold a void pointer to an implementing class, and assign it to a different window class for each platform. However, I would have to cast the pointer every time I want to perform a platform-dependent operation, not to mention include the platform-dependent file everywhere.
Preprocessor directives
class Window :
#ifdef WIN32
private WinWindow
#elif defined(X11)
private X11Window //etc.
This, however, sounds more like a hack than an actual solution to the problem.
What to do? Should I change my design completely? Do any of my possible solutions hold a little bit of water?
Using typedef to hide the preprocessor
You could simply typedef the appropriate window type instead:
#ifdef WINDOWS
typedef WinWindow WindowType;
#elif defined // etc
Then your window class could be:
class Window : private WindowType {
};
This isn't a very robust solution, though. It is better to think in a more Object Oriented way, but OO programming in C++ comes at a runtime cost, unless you use the
Curiously recurring template pattern
You could use the curiously repeating template pattern:
template<class WindowType>
class WindowBase {
public:
void doSomething() {
static_cast<WindowType *>(this)->doSomethingElse();
}
};
Then you could do
class WinWindow : public WindowBase<WinWindow> {
public:
void doSomethingElse() {
// code
}
};
And to use it (assuming C++ 14 support):
auto createWindow() {
#ifdef WINDOWS
return WinWindow{};
#elif UNIX
return X11Window{};
#endif
}
With C++ 11 only:
auto createWindow()
->
#ifdef WINDOWS
WinWindow
#elif defined UNIX
X11Window
#endif
{
#ifdef WINDOWS
return WinWindow{};
#elif defined UNIX
return X11Window{};
#endif
}
I recommend using auto when you use it, or using it in combination with a typedef:
auto window = createWindow();
window.doSomething();
Object Oriented Style
You could make your Window class be an abstract class:
class Window {
protected:
void doSomething();
public:
virtual void doSomethingElse() = 0;
};
Then define your platform-dependent classes as subclasses of Window. Then all you'd have to do is have the preprocessor directives in one place:
std::unique_ptr<Window> createWindow() {
#ifdef WINDOWS
return std::make_unique<WinWindow>();
#elif defined OSX
return std::make_unique<OSXWindow>();
// etc
#endif
}
Unfortunately, this incurs a runtime cost through calls to the virtual function. The CRTP version resolves calls to the "virtual function" at compile time instead of at runtime.
Additionally, this requires the Window to be allocated on the heap, whereas CRTP doesn't; this might be a problem depending on the use case, but in general it doesn't matter that much.
Ultimately, you do have to use the #ifdef somewhere, so you can determine the platform (or you could use a library that determines the platform, but it probably uses #ifdef too), the question is just where to hide it.
You can use the CRTP pattern to implement static polymorphism:
class WindowBase {
virtual void doSomething() = 0;
};
template<class WindowType>
class Window : public WindowBase {
// Static cast when accessing the actual implementation:
void doSomething() {
static_cast<WindowType*>(this)->doSomethingElse();
}
};
class X11WindowImpl : public Window<X11WindowImpl> {
void doSomethingElse() {
// blah ...
}
};
class Win32WindowImpl : public Window<Win32WindowImpl> {
void doSomethingElse() {
// blah ...
}
};
Since your code will be compiled to satisfy a particular target, this should be the leanest option.
It's okay. You could also write one class and define its content using #ifdef etc., but your solution isn't a hack - just a proper way to write multi-platform code if you have no other choice.

Classes included within main() method

If I have some code like
main(int argc, char *argv[])
{
...
#include "Class1.H"
#include "Class2.H"
...
}
Generally the main() method is the starting point of every application and the content within main() is to be executed. Am I right in the assumption that the content of all classes included into main() will be executed when main() is started?
greetings
Streight
No, no, NO.
First of all, you don't #include a file within a function. You #include a file at the beginning of a file, before other declarations. OK, you can use #include anywhere, but you really just shouldn't.
Second, #include doesn't execute anything. It's basically just a copy-paste operation. The contents of the #included file are (effectively) inserted exactly where you put the #include.
Third, if you're going to learn to program in C++, please consider picking up one of our recommended texts.
You commented:
I am working with the multiphaseEulerFoam Solver in OpenFoam and
inside the main() of multiphaseEulerFoam.C are classes included. I
assume that the classes have the right structure to be called in
main()
That may be the case, and I don't doubt that the classes have the right structure to be called from main. The problem is main will be malformed after the #includes because you'll have local class definitions and who knows what else within main.
Consider this. If you have a header:
foo.h
#ifndef FOO_H
#define FOO_H
class Foo
{
public:
Foo (const std::string& val)
:
mVal (val)
{
}
private:
std::string mVal;
};
#endif
And you try to include this in main:
main.cpp
int main()
{
#include "foo.h"
}
After preprocessing the #include directive, the resulting file that the compiler will try to compile will look like this:
preprocessed main.cpp
int main()
{
#ifndef FOO_H
#define FOO_H
class Foo
{
public:
Foo (const std::string& val)
:
mVal (val)
{
}
private:
std::string mVal;
};
#endif
}
This is all kinds of wrong. One, you can't declare local classes like this. Two, Foo won't be "executed", as you seem to assume.
main.cpp should look like this instead:
#include "foo.h"
int main()
{
}
#define and #include are just textual operations that take place during the 'preprocessing' phase of compilation, which is technically an optional phase. So you can mix and match them in all sorts of ways and as long as your preprocessor syntax is correct it will work.
However if you do redefine macros with #undef your code will be hard to follow because the same text could have different meanings in different places in the code.
For custom types typedef is much preferred where possible because you can still benefit from the type checking mechanism of the compiler and it is less error-prone because it is much less likely than #define macros to have unexpected side-effects on surrounding code.
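A small, self-contained illustration of why a typedef behaves better than a #define for types (nothing OpenFOAM-specific here, just plain C++):
#define INT_PTR_MACRO int*
typedef int* IntPtrTypedef;

int main()
{
    int x = 0, y = 0;

    // the macro is pure text substitution: this expands to "int* a, b;",
    // so a is an int* but b is a plain int - a common surprise
    INT_PTR_MACRO a, b;

    // the typedef names a real type, so p and q are both int*
    IntPtrTypedef p, q;

    a = &x;
    p = &x;
    q = &y;
    b = y;   // compiles only because b is an int, not a pointer
    return 0;
}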
See Jim Blackler's answer regarding #include inside the main() function.
Try to avoid code like this. The #include directive inserts the contents of the file in its place.
You can simulate the result of your code by copy-pasting file content from Class1.H and Class2.H inside the main function.
Includes do not belong in any function or class method body; this is not a good idea.
No code will be executed unless you instantiate one of your classes in your header files.
Code is executed when:
A class is instantiated; then its constructor is called and the code inside it is executed.
If there are member variables of class type inside your instantiated class, their constructors will run too.
When you call a class method.
Try this example:
#include <iostream>
using namespace std;
int main()
{
class A
{ public:
A() { cout << "A constructor called" << endl; }
};
// A has no instances
class B
{ public:
B() { cout << "B constructor called" << endl; }
void test() { cout << "B test called" << endl; }
} bbb;
// bbb will be new class instance of B
bbb.test(); // example call of test method of bbb instance
B ccc; // another class instance of B
ccc.test(); // another call, this time of ccc instance
}
When you run it, you'll observe that:
there will be no instance of class A created. Nothing will be run from class A.
if you instantiate bbb and ccc, their constructors will be run. To run any other code you must first write a method, for example test, and then call it.
This is OpenFOAM syntax; he is correct in saying that OpenFOAM treats #include like calling a function. In OpenFOAM, using #include "Foo.H" would run through the code, not the class declaration, which is done at a different hierarchy level. I would recommend that OpenFOAM-related questions not be asked in a C++ forum, because there is so much built on top of C++ in OpenFOAM that a lot of the rules need to be broken to produce working code.
You're only including declarations of classes. To execute their code, you need to create class instances (objects).
Also, you shouldn't write #include inside a function or a class method. More often than not it won't compile.

Partial class definition in C++?

Does anyone know if it is possible to have partial class definitions in C++?
Something like:
file1.h:
class Test {
public:
int test1();
};
file2.h:
class Test {
public:
int test2();
};
To me it seems quite useful for defining multi-platform classes that have common, platform-independent functions between them, because inheritance is a cost to pay that buys nothing useful for multi-platform classes.
I mean, you will never have two platform specializations instantiated at runtime, only at compile time. Inheritance could be useful to fulfill your public interface needs, but after that it won't add anything useful at runtime, just costs.
Also you will have to use an ugly #ifdef to use the class because you can't make an instance from an abstract class:
class genericTest {
public:
int genericMethod();
};
Then let's say for win32:
class win32Test: public genericTest {
public:
int win32Method();
};
And maybe:
class macTest: public genericTest {
public:
int macMethod();
};
Let's say that both win32Method() and macMethod() call genericMethod(); you will have to use the class like this:
#ifdef _WIN32
genericTest *test = new win32Test();
#elif MAC
genericTest *test = new macTest();
#endif
test->genericMethod();
Thinking about it for a while, the inheritance was only useful for giving them both a genericMethod() that depends on the platform-specific one, but you pay the cost of calling two constructors because of that. Also you have ugly #ifdefs scattered around the code.
That's why I was looking for partial classes: I could define the platform-dependent partial part at compile time. Of course, in this silly example I would still need an ugly #ifdef inside genericMethod(), but there are other ways to avoid that.
This is not possible in C++, it will give you an error about redefining already-defined classes. If you'd like to share behavior, consider inheritance.
Try inheritance
Specifically
class AllPlatforms {
public:
int common();
};
and then
class PlatformA : public AllPlatforms {
public:
int specific();
};
You can't partially define classes in C++.
Here's a way to get the "polymorphism, where there's only one subclass" effect you're after without overhead and with a bare minimum of #define or code duplication. It's called simulated dynamic binding:
template <typename T>
class genericTest {
public:
void genericMethod() {
// do some generic things
std::cout << "Could be any platform, I don't know" << std::endl;
// base class can call a method in the child with static_cast
(static_cast<T*>(this))->doClassDependentThing();
}
};
#ifdef _WIN32
typedef Win32Test Test;
#elif MAC
typedef MacTest Test;
#endif
Then off in some other headers you'll have:
class Win32Test : public genericTest<Win32Test> {
public:
void win32Method() {
// windows-specific stuff:
std::cout << "I'm in windows" << std::endl;
// we can call a method in the base class
genericMethod();
// more windows-specific stuff...
}
void doClassDependentThing() {
std::cout << "Yep, definitely in windows" << std::endl;
}
};
and
class MacTest : public genericTest<MacTest> {
public:
void macMethod() {
// mac-specific stuff:
std::cout << "I'm in MacOS" << std::endl;
// we can call a method in the base class
genericMethod();
// more mac-specific stuff...
}
void doClassDependentThing() {
std::cout << "Yep, definitely in MacOS" << std::endl;
}
};
This gives you proper polymorphism at compile time. genericTest can non-virtually call doClassDependentThing in a way that gives it the platform version, (almost like a virtual method), and when win32Method calls genericMethod it of course gets the base class version.
This creates no overhead associated with virtual calls - you get the same performance as if you'd typed out two big classes with no shared code. It may create a non-virtual call overhead at con(de)struction, but if the con(de)structor for genericTest is inlined you should be fine, and that overhead is in any case no worse than having a genericInit method that's called by both platforms.
Client code just creates instances of Test, and can call methods on them which are either in genericTest or in the correct version for the platform. To help with type safety in code which doesn't care about the platform and doesn't want to accidentally make use of platform-specific calls, you could additionally do:
#ifdef _WIN32
typedef genericTest<Win32Test> BaseTest;
#elif MAC
typedef genericTest<MacTest> BaseTest;
#endif
You have to be a bit careful using BaseTest, but not much more so than is always the case with base classes in C++. For instance, don't slice it with an ill-judged pass-by-value. And don't instantiate it directly, because if you do and call a method that ends up attempting a "fake virtual" call, you're in trouble. The latter can be enforced by ensuring that all of genericTest's constructors are protected.
or you could try PIMPL
common header file:
class Test
{
public:
...
void common();
...
private:
class TestImpl;
TestImpl* m_customImpl;
};
Then create the cpp files doing the custom implementations that are platform specific.
#include will work as that is preprocessor stuff.
class Foo
{
#include "FooFile_Private.h"
};
////////
FooFile_Private.h:
private:
void DoSg();
How about this:
class WindowsFuncs { public: int f(); int winf(); };
class MacFuncs { public: int f(); int macf(); };
class Funcs
#ifdef Windows
: public WindowsFuncs
#else
: public MacFuncs
#endif
{
public:
Funcs();
int g();
};
Now Funcs is a class known at compile-time, so no overheads are caused by abstract base classes or whatever.
As written, it is not possible, and in some cases it is actually annoying.
There was an official proposal to the ISO committee, with embedded software in mind, in particular to avoid the RAM overhead imposed by both inheritance and the pimpl pattern (both approaches require an additional pointer per object):
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0309r0.pdf
Unfortunately the proposal was rejected.
As written, it is not possible.
You may want to look into namespaces. You can add a function to a namespace in another file. The problem with a class is that each .cpp needs to see the full layout of the class.
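A small sketch of the namespace idea (the file and function names are made up for illustration): the platform-independent and platform-dependent free functions live in the same namespace but in different source files, and the build picks the right platform file.
// test.hpp
#include <string>
namespace test {
std::string GenericName();   // defined in test_common.cpp
std::string PlatformName();  // defined in test_win32.cpp or test_linux.cpp
}

// test_common.cpp
#include "test.hpp"
namespace test {
std::string GenericName() { return "generic"; }
}

// test_linux.cpp - compiled only in the Linux build
#include "test.hpp"
namespace test {
std::string PlatformName() { return "linux"; }
}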
Nope.
But, you may want to look up a technique called "Policy Classes". Basically, you make micro-classes (that aren't useful on their own) then glue them together at some later point.
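A short sketch of what policy classes can look like here (all names invented for illustration): the platform-specific micro-classes each supply one piece of behaviour, and a class template "glues" one of them onto the common code.
#include <string>

// micro-classes that aren't useful on their own
struct Win32NamePolicy {
    static std::string Name() { return "windows"; }
};
struct LinuxNamePolicy {
    static std::string Name() { return "linux"; }
};

// host class glues a policy together with the common, platform-independent code
template <typename NamePolicy>
class Test {
public:
    std::string GetPlatformName() const { return NamePolicy::Name(); }
    // common, platform-independent members go here
};

#ifdef _WIN32
typedef Test<Win32NamePolicy> PlatformTest;
#else
typedef Test<LinuxNamePolicy> PlatformTest;
#endif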
Either use inheritance, as Jamie said, or #ifdef to make different parts compile on different platforms.
For me it seems quite useful for defining multi-platform classes that have common functions between them that are platform-independent.
Except developers have been doing this for decades without this 'feature'.
I believe partial was created because Microsoft has had, for decades also, a bad habit of generating code and handing it off to developers to develop and maintain.
Generated code is often a maintenance nightmare. What happens to that entire MFC-generated framework when you need to bump your MFC version? Or how do you port all that code in *.designer.cs files when you upgrade Visual Studio?
Most other platforms rely more heavily on generated configuration files that the user/developer can modify. Those have a more limited vocabulary and are not prone to being mixed with unrelated code. The configuration files can even be embedded in the binary as a resource if deemed necessary.
I have never seen 'partial' used in a place where inheritance or a configuration resource file wouldn't have done a better job.
Since headers are just textually inserted, one of them could omit the "class Test {" and "}" and be #included in the middle of the other.
I've actually seen this in production code, albeit Delphi not C++. It particularly annoyed me because it broke the IDE's code navigation features.
A dirty but practical way is to use the #include preprocessor directive:
Test.h:
#ifndef TEST_H
#define TEST_H
class Test
{
public:
Test(void);
virtual ~Test(void);
#include "Test_Partial_Win32.h"
#include "Test_Partial_OSX.h"
};
#endif // !TEST_H
Test_Partial_OSX.h:
// This file should be included in Test.h only.
#ifdef MAC
public:
int macMethod();
#endif // MAC
Test_Partial_WIN32.h:
// This file should be included in Test.h only.
#ifdef _WIN32
public:
int win32Method();
#endif // _WIN32
Test.cpp:
// Implement common member function of class Test in this file.
#include "stdafx.h"
#include "Test.h"
Test::Test(void)
{
}
Test::~Test(void)
{
}
Test_Partial_OSX.cpp:
// Implement OSX platform specific function of class Test in this file.
#include "stdafx.h"
#include "Test.h"
#ifdef MAC
int Test::macMethod()
{
return 0;
}
#endif // MAC
Test_Partial_WIN32.cpp:
// Implement WIN32 platform specific function of class Test in this file.
#include "stdafx.h"
#include "Test.h"
#ifdef _WIN32
int Test::win32Method()
{
return 0;
}
#endif // _WIN32
Suppose that I have:
MyClass_Part1.hpp, MyClass_Part2.hpp and MyClass_Part3.hpp
Theoretically someone can develop a GUI tool that reads all these hpp files above and creates the following hpp file:
MyClass.hpp
class MyClass
{
#include <MyClass_Part1.hpp>
#include <MyClass_Part2.hpp>
#include <MyClass_Part3.hpp>
};
The user can theoretically tell the GUI tool where is each input hpp file and where to create the output hpp file.
Of course, the developer can theoretically program the GUI tool to work with any number of hpp files (not necessarily only 3) whose prefix can be any arbitrary string (not necessarily only "MyClass").
Just don't forget to #include <MyClass.hpp> to use the class "MyClass" in your projects.
Declaring a class body twice will likely generate a type redefinition error. If you're looking for a workaround, I'd suggest #ifdef'ing, or using an abstract base class to hide platform-specific details.
You can get something like partial classes using template specialization and partial specialization. Before you invest too much time, check your compiler's support for these. Older compilers like MSC++ 6.0 didn't support partial specialization.
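One way to read that suggestion, sketched with full specialization for simplicity (partial specialization would only come into play if the class had additional template parameters); the tag types are my own invention:
#include <string>

struct Win32Tag {};
struct LinuxTag {};

// primary template intentionally left undefined
template <typename PlatformTag> class Test;

template <> class Test<Win32Tag> {
public:
    std::string GetPlatformName() { return "windows"; }
    int win32Method() { return 0; }
};

template <> class Test<LinuxTag> {
public:
    std::string GetPlatformName() { return "linux"; }
    int linuxMethod() { return 0; }
};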
This is not possible in C++, it will give you an error about redefining already-defined
classes. If you'd like to share behavior, consider inheritance.
I do agree with this. Partial classes are a strange construct that makes code very difficult to maintain afterwards. It is difficult to locate in which partial class each member is declared, and redefinition or even reimplementation of features is hard to avoid.
If you want to extend std::vector, you have to inherit from it. There are several reasons for this. First of all, you change the responsibility of the class and (probably) its class invariants. Secondly, from a security point of view this should be avoided.
Consider a class that handles user authentication...
partial class UserAuthentication {
private string user;
private string password;
public bool signon(string usr, string pwd);
}
partial class UserAuthentication {
private string getPassword() { return password; }
}
A lot of other reasons could be mentioned...
Let platform independent and platform dependent classes/functions be each-others friend classes/functions. :)
Their separate names also permit finer control over instantiation, so coupling is looser. Partial classes break the encapsulation foundation of OO far too completely, whereas the requisite friend declarations relax it just enough to facilitate separation of concerns, such as keeping platform-specific aspects apart from domain-specific, platform-independent ones.
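A sketch of the friend idea (names invented for illustration): the platform-independent class grants access to a platform-specific backend that lives in a per-platform source file.
// widget.hpp - platform-independent part
#include <string>

class Widget {
public:
    std::string GetTitle() const { return title_; }
private:
    // the platform-specific side may touch the internals directly
    friend class Win32WidgetBackend;
    friend class X11WidgetBackend;
    std::string title_;
    void* native_handle_ = nullptr;
};

// widget_win32.cpp - compiled only on Windows
#include "widget.hpp"
class Win32WidgetBackend {
public:
    static void Attach(Widget& w, void* hwnd) {
        w.native_handle_ = hwnd;  // allowed thanks to the friend declaration
        w.title_ = "win32 widget";
    }
};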
I've been doing something similar in my rendering engine. I have a templated IResource interface class from which a variety of resources inherit (stripped down for brevity):
template <typename TResource, typename TParams, typename TKey>
class IResource
{
public:
virtual TKey GetKey() const = 0;
protected:
static shared_ptr<TResource> Create(const TParams& params)
{
return ResourceManager::GetInstance().Load(params);
}
virtual Status Initialize(const TParams& params, const TKey key, shared_ptr<Viewer> pViewer) = 0;
};
The Create static function calls back to a templated ResourceManager class that is responsible for loading, unloading, and storing instances of the type of resource it manages with unique keys, ensuring duplicate calls are simply retrieved from the store, rather than reloaded as separate resources.
template <typename TResource, typename TParams, typename TKey>
class TResourceManager
{
sptr<TResource> Load(const TParams& params) { ... }
};
Concrete resource classes inherit from IResource utilizing the CRTP. ResourceManagers specialized to each resource type are declared as friends to those classes, so that the ResourceManager's Load function can call the concrete resource's Initialize function. One such resource is a texture class, which further uses a pImpl idiom to hide its privates:
class Texture2D : public IResource<Texture2D, Params::Texture2D, Key::Texture2D>
{
typedef TResourceManager<Texture2D, Params::Texture2D, Key::Texture2D> ResourceManager;
friend ResourceManager; // C++11 friend declaration for a typedef-name
public:
virtual Key::Texture2D GetKey() const override final;
int GetWidth() const;
private:
virtual Status Initialize(const Params::Texture2D& params, const Key::Texture2D key, shared_ptr<Texture2D> pTexture) override final;
struct Impl;
unique_ptr<Impl> m;
};
Much of the implementation of our texture class is platform-independent (such as the GetWidth function if it just returns an int stored in the Impl). However, depending on what graphics API we're targeting (e.g. Direct3D11 vs. OpenGL 4.3), some of the implementation details may differ.

One solution could be to inherit from IResource an intermediary Texture2D class that defines the extended public interface for all textures, and then inherit a D3DTexture2D and OGLTexture2D class from that. The first problem with this solution is that it requires users of your API to be constantly mindful of which graphics API they're targeting (they could call Create on both child classes). This could be resolved by restricting the Create to the intermediary Texture2D class, which uses maybe a #ifdef switch to create either a D3D or an OGL child object.

But then there is still the second problem with this solution, which is that the platform-independent code would be duplicated across both children, causing extra maintenance efforts. You could attempt to solve this problem by moving the platform-independent code into the intermediary class, but what happens if some of the member data is used by both platform-specific and platform-independent code? The D3D/OGL children won't be able to access those data members in the intermediary's Impl, so you'd have to move them out of the Impl and into the header, along with any dependencies they carry, exposing anyone who includes your header to all that crap they don't need to know about.
APIs should be easy to use right and hard to use wrong. Part of being easy to use right is restricting the user's exposure to only the parts of the API they should be using. This solution opens the API up to being easily used wrong and adds maintenance overhead. Users should only have to care about the graphics API they're targeting in one spot, not everywhere they use your API, and they shouldn't be exposed to your internal dependencies.

This situation screams for partial classes, but they are not available in C++. So instead, you might simply define the Impl structure in separate header files, one for D3D and one for OGL, put an #ifdef switch at the top of the Texture2D.cpp file, and define the rest of the public interface universally. This way, the public interface has access to the private data it needs, the only duplicated code is the data member declarations (construction can still be done in the Texture2D constructor that creates the Impl), your private dependencies stay private, and users don't have to care about anything except using the limited set of calls in the exposed API surface:
// D3DTexture2DImpl.h
#include "Texture2D.h"
struct Texture2D::Impl
{
/* insert D3D-specific stuff here */
};
// OGLTexture2DImpl.h
#include "Texture2D.h"
struct Texture2D::Impl
{
/* insert OGL-specific stuff here */
};
// Texture2D.cpp
#include "Texture2D.h"
#ifdef USING_D3D
#include "D3DTexture2DImpl.h"
#else
#include "OGLTexture2DImpl.h"
#endif
Key::Texture2D Texture2D::GetKey() const
{
return m->key;
}
// etc...