C++ Multiple Libraries Define Same Class Name - c++

I am developing a project in which I have a vendor library, say vendor.h, for the specific Arduino-compatible board I'm using which defines class HTTPClient that conflicts with an Arduino system library, HTTPClient.h, which also defines class HTTPClient.
These two classes are unrelated other than having the same name, and the vendor implementation of an HTTP client is far less capable than the Arduino system library's implementation, so I'd prefer to use the latter. But I can't omit including the former, because I need quite a bit from the vendor.h. Essentially, I have the problem posed here, but with classes rather than functions. I have the full code of both, but given that one is a system library and the other is a vendor library, I'm reluctant to fork and edit either, as that adds lots of merging work down the road if either of them are updated, so my preference would be to find a tidy solution that doesn't edit either header.
I've tried a variety of solutions posted in other SO questions:
I do not want to leave out either header, as I need vendor.h for quite a few things and need the capabilities of HTTPClient.h's client implementation
Proper namespaces in the headers would solve the problem, but I would prefer to avoid editing either header
I tried wrapping the #include <HTTPClient.h> in a namespace in my main.cpp, but that caused linking errors, as it's not a header-only library, so the header & cpp weren't in the same namespace
I tried a simple wrapper as proposed for the function in the above linked SO question in which the header contained just a forward declaration of my wrapper class & the associated cpp contained the actual class definition. This gave a compiler error of error: aggregate 'HTTP::Client client' has incomplete type and cannot be defined (Code sample of this attempt below)
main.cpp:
#include <vendor.h>
#include "httpclientwrapper.h"
HTTP::Client client;
httpclientwrapper.h:
#ifndef INC_HTTPCLIENTWRAPPER_H
#define INC_HTTPCLIENTWRAPPER_H
namespace HTTP {
class Client;
}
#endif
httpclientwrapper.cpp:
#include "httpclientwrapper.h"
#include <HTTPClient.h>
namespace HTTP {
class Client : public ::HTTPClient {};
}
In that example, I can't inherit from HTTPClient in a class definition in my header, as that would require including HTTPClient.h there, which reintroduces the duplicate class name into the global namespace of my main program (hence the perhaps misguided attempt to see if a forward declaration would do the trick). I suspect that I can resolve the issue by completely duplicating the class definition of HTTPClient in my wrapper class above rather than trying to use inheritance. I would then add member definitions to my wrapper cpp which forward the calls to HTTPClient's members. Before I go through the trouble of rewriting (or more likely, copy/pasting) the entire HTTPClient definition from HTTPClient.h into my own wrapper, I was wondering if there was a better or more proper way to resolve the conflict?
Thanks for your help!

As a solution was never proposed, I'm posting an answer that summarizes my research and my ultimate resolution. Mostly, I encourage the use of namespaces, because proper use of namespaces would have eliminated the conflict. However, Arduino environments try to keep things simple to lower the barrier to entry, eschewing "complicated" features of C++, so more advanced use cases will likely continue to run into issues like this. From other SO answers and forum posts (cited where I could), here are some methods for avoiding name conflicts like this:
If you can edit the source
Edit the source code to remove the conflict or add a namespace to one of both libraries. If this is an open source library, submit a pull request. This is the cleanest solution. However, if you can't push your changes back upstream (such as when one is a system library for some hardware), you may end up with merge issues down the road when the maintainer/developer updates the libraries.
If you can't edit the source
Credit for part of this: How to avoid variable/function conflicts from two libraries in C++
For libraries that are header only libraries (or all functions are inline)
(ie, they have only a .h file without a .o or .cpp)
Include the library inside a namespace. In most code, this is frowned upon as poor form, but if you're already in a situation where you are trying to cope with a library that doesn't contain itself nicely, it's a clean and simple way to contain the code in a namespace and avoid name conflicts.
main.cpp
namespace foo {
#include "library.h"
}
int main() {
foo::bar(1);
}
For libraries with separately compiled functions
The above method will fail at link time, because the declarations in the header end up inside the namespace, but the definitions of those functions in the compiled library do not.
Instead, create a wrapper header and implementation file. In the header, declare your namespace and functions you wish to use, but do not import the original library. In the implementation file, import your library, and use the functions inside your new namespaced functions. That way, the one conflicting library is not imported into the same place as the other.
wrapper.h
namespace foo {
int bar(int a);
}
wrapper.cpp
#include "wrapper.h"
#include "library.h"
namespace foo {
int bar(int a) {
return ::bar(a);
}
}
main.cpp
#include "wrapper.h"
int main() {
foo::bar(1);
}
This method does mean that you will have to put in the effort to write a wrapper for every function you plan to use, and it gets more complicated when you need to use classes from the library (see below). You could also, for the sake of consistency, wrap both libraries so they're each in their own namespace.
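As a rough sketch (library, namespace, and file names here are invented for illustration), each wrapper's implementation file includes only one of the two conflicting libraries, so the identical global names never appear in the same translation unit:
wrapper1.h
namespace lib1 {
    int bar(int a);
}
wrapper1.cpp
#include "wrapper1.h"
#include "library1.h"   // declares a global ::bar
namespace lib1 {
    int bar(int a) { return ::bar(a); }
}
wrapper2.h
namespace lib2 {
    int bar(int a);
}
wrapper2.cpp
#include "wrapper2.h"
#include "library2.h"   // declares its own, unrelated global ::bar
namespace lib2 {
    int bar(int a) { return ::bar(a); }
}
main.cpp
#include "wrapper1.h"
#include "wrapper2.h"
int main() {
    lib1::bar(1);
    lib2::bar(2);
}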
For libraries with classes
This is an extension of the wrapper function model from above, but you will need to put in more work, and there are a few more drawbacks. You can't write a class that inherits from the library's class, as that would require importing the original library in your wrapper header prior to defining your class, so you must write a complete wrapper class. For the same reason, you also cannot give your class a private member of the original class's type to which you could delegate calls. The attempt at using a forward declaration I described in my question also did not work, as any code that creates an object of the class needs its complete definition to compile. This left me with the implementation below, which only works in the case of a singleton (which was my use case anyway).
The wrapper header file should almost completely duplicate the public interface of the class you want to use.
wrapper.h
namespace foo {
class Bar {
public:
void f(int a);
bool g(char* b, int c, bool d);
char* h();
};
}
The wrapper implementation file then creates an instance and passes the calls along.
wrapper.cpp
#include "wrapper.h"
#include "library.h"
namespace foo {
::Bar obj;
void Bar::f(int a) {
return obj.f(a);
}
bool Bar::g(char* b, int c, bool d) {
return obj.g(b, c, d);
}
char* Bar::h() {
return obj.h();
}
}
The main file will interact with only a single instance of the original class, no matter how many times your wrapper class is instantiated.
main.cpp
#include "wrapper.h"
int main() {
foo::Bar obj;
obj.f(1);
obj.g("hello",5,true);
obj.h();
}
Overall, this strikes me as a flawed solution. To fully wrap this class, I think this could be modified to add a factory class that would be fully contained inside the wrapper implementation file. This class would instantiate the original library class every time your wrapper class is instantiated, and then track these instances. In this way, your wrapper class could keep an index to its associated instance in the factory and bypass the need to have that instance as its own private member. This seemed like a significant amount of work, and I did not attempt to do so, but it would look something like the code below. (This probably needs some polish and a real look at its memory usage!)
The wrapper header file adds a constructor & private member to store an instance id
wrapper.h
namespace foo {
class Bar {
public:
Bar();
void f(int a);
bool g(char* b, int c, bool d);
char* h();
private:
unsigned int instance;
};
}
The wrapper implementation file then adds a factory class to manage instances of the original library's class
wrapper.cpp
#include "wrapper.h"
#include "library.h"
namespace foo {
class BarFactory {
public:
    // 'new' is a reserved word in C++, so the factory function is named create()
    static unsigned int create() {
        instances[count] = new ::Bar();
        return count++;
    }
    static ::Bar* get(unsigned int i) {
        return instances[i];
    }
private:
    BarFactory();   // no instances of the factory itself
    static const unsigned int MAX_COUNT = 8;   // pick a limit suited to the application
    static ::Bar* instances[MAX_COUNT];
    static unsigned int count;
};
::Bar* BarFactory::instances[MAX_COUNT];
unsigned int BarFactory::count = 0;
Bar::Bar() {
    instance = BarFactory::create();
}
void Bar::f(int a) {
    return BarFactory::get(instance)->f(a);
}
bool Bar::g(char* b, int c, bool d) {
    return BarFactory::get(instance)->g(b, c, d);
}
char* Bar::h() {
    return BarFactory::get(instance)->h();
}
}
The main file remains unchanged
main.cpp
#include "wrapper.h"
int main() {
foo::Bar obj;
obj.f(1);
obj.g("hello",5,true);
obj.h();
}
If all of this seems like a lot of work, then you're thinking the same thing I did. I implemented the basic class wrapper and realized it wasn't going to work for my use case. Given the hardware limitations of the Arduino, I ultimately decided not to add more code just to be able to use the HTTPClient implementation from either library. Instead, I wrote my own HTTP implementation library, so I used none of the above and saved several hundred kilobytes of memory. But I wanted to share this here in case somebody else was looking to answer the same question!

Related

Is it bad to use #include in the middle of code?

I keep reading that it's bad to do so, but I don't feel those answers fully answer my specific question.
It seems like it could be really useful in some cases. I want to do something like the following:
class Example {
private:
int val;
public:
void my_function() {
#if defined (__AVX2__)
#include <function_internal_code_using_avx2.h>
#else
#include <function_internal_code_without_avx2.h>
#endif
}
};
If using #include in the middle of code is bad in this example, what would be a good-practice approach to achieve what I'm trying to do? That is, I'm trying to provide different implementations of a member function for the cases where AVX2 is and isn't available at compile time.
No, it is not intrinsically bad. #include was meant to allow inclusion anywhere. It's just that it's uncommon to use it like this, and it goes against the principle of least astonishment.
The good practices that were developed around includes are all based on the assumption of an inclusion at the start of a compilation unit and in principle outside any namespace.
This is certainly why the C++ core guidelines recommend not to do it, being understood that they have normal reusable headers in mind:
SF.4: Include .h files before other declarations in a file
Reason
Minimize context dependencies and increase readability.
Additional remarks: How to solve your underlying problem
Not sure about the full context, but first of all, I wouldn't put the function body in the class definition. This better encapsulates the implementation-specific details from the class's consumers, who should not need to know about them.
Then you could use conditional compilation in the body, or, much better, opt for a policy-based design, using templates to configure the classes to be used at compile time.
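For instance, a minimal policy-based sketch (all names here are invented, not taken from the question): the AVX2 and fallback variants become policy classes, and the class is a template that delegates to whichever policy was selected at compile time.
struct Avx2Policy {
    static void run(int val) { /* AVX2-specific implementation */ }
};
struct ScalarPolicy {
    static void run(int val) { /* portable fallback implementation */ }
};

template <typename Policy>
class BasicExample {
public:
    void my_function() { Policy::run(val); }
private:
    int val = 0;
};

// The #if is confined to the single spot that selects the policy.
#if defined (__AVX2__)
using Example = BasicExample<Avx2Policy>;
#else
using Example = BasicExample<ScalarPolicy>;
#endif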
I agree with what #Christophe said. In your case I would write the following code
Write a header commonInterface.h
#pragma once
#if defined (__AVX2__)
void commonInterface (...) {
#include <function_internal_code_using_avx2.h>
}
#else
void commonInterface (...) {
#include <function_internal_code_without_avx2.h>
}
#endif
so you hide the #if defined in the header and still have good readable code in the implementation file.
#include <commonInterface.h>
class Example {
private:
int val;
public:
void my_function() {
commonInterface(...);
}
};
#ifdef __AVX2__
# include <my_function_avx2.h>
#else
# include <my_function.h>
#endif
class Example {
int val;
public:
void my_function() {
# ifdef __AVX2__
my_function_avx2(this);
# else
my_function(this);
# endif
}
};
Whether it is good or bad really depends on the context.
The technique is often used if you have to write a great amount of boilerplate code. For example, the clang compiler uses it all over the place to match/make use of all possible types, identifiers, tokens, and so on. Here is an example, and here another one.
If you want to define a function differently depending on certain compile-time known parameters, it's cleaner to put the definitions where they belong.
You should not split the definition of foo into two separate files and choose the right one at compile time, as it increases the overhead for the programmer (who is often not just you) to understand your code.
Consider the following snippet which is, at least in my opinion, much more expressive:
// platform.hpp
constexpr static bool use_avx2 =
#if defined (__AVX2__)
    true;
#else
    false;
#endif
// example.hpp
class Example {
private:
    int val;
public:
    void my_function() {
        if constexpr (use_avx2) {
            // code of "function_internal_code_using_avx2.h"
        } else {
            // code of "function_internal_code_without_avx2.h"
        }
    }
};
The code can be improved further by generalizing more, adding layers of abstractions that "just define the algorithm" instead of both the algorithm and platform-specific weirdness.
Another important argument against your solution is the fact that both function_internal_code_using_avx2.h and function_internal_code_without_avx2.h require special attention:
They do not build without example.hpp, and it is not obvious without opening the files that they require it. So specific flags/treatment have to be added when building the project, which is difficult to maintain as soon as you use more than one such function_internal_code file.
I am not sure what the bigger picture is in your case, so whatever follows should be taken with a grain of salt.
Anyway: #include COULD happen anywhere in the code, BUT you could think of it as a way of separating code / avoiding redundancy. For definitions, this is already well covered by other means. For declarations, it is the standard approach.
Now, these #includes are placed at the beginning as a courtesy to the reader, who can catch up more quickly on what to expect in the code that follows, even for #ifdef-guarded code.
In your case, it looks like you want a different implementation of the same functionality. The go-to approach in this case would be to link a different portion of code (containing a different implementation), rather than importing a different declaration.
Instead, if you want to really have a different signature based on your #ifdef then I would not see a more effective way than having #ifdef in the middle of the code. BUT, I would not consider this a good design choice!
I consider this bad coding; it makes the code hard to read.
My approach would be to create a base class as an abstract interface and create specialized implementations and then create the needed class.
E.g.:
class base_functions_t
{
public:
virtual void function1() = 0;
};
class base_functions_avx2_t : public base_functions_t
{
public:
virtual void function1()
{
// code here
}
};
class base_functions_sse2_t : public base_functions_t
{
public:
virtual void function1()
{
// code here
}
};
Then you can have a pointer to your base_functions_t and instantiate different versions. E.g.:
base_functions_t *func;
if (avx2)
{
func = new base_functions_avx2_t();
}
else
{
func = new base_functions_sse2_t();
}
func->function1();
As a general rule I would say that it's best to put headers that define interfaces first in your implementation files.
There are of course also headers that don't define any interfaces. I'm thinking mainly of headers that use macro hackery and are intended to be included one or more times. This type of header typically doesn't have include guards. An example would be <cassert>. This allows you to write code something like this
#define NDEBUG 1
#include <cassert>
void foo() {
// do some stuff
assert(some_condition);
}
#undef NDEBUG
#include <cassert>
void bar() {
assert(another_condition);
}
If you only include <cassert> at the start of your file you will have no granularity for asserts in your implementation file other than all on or all off. See here for more discussion on this technique.
If you do go down the path of using conditional inclusion as per your example then I would strongly recommend that you use an editor like Eclipse or Netbeans that can do inline preprocessor expansion and visualization. Otherwise the loss of locality that inclusion brings can severely hurt readability.

Fixing self-blocking includes in a module based library

I have written a simple templated, module-based header library. With module-based, I mean that one can include only string.h or dynarray.h and the header will pull in all of its dependencies.
Now I'm facing an issue with missing types because of the way this system works.
A module does:
#include all dependencies
Define an interface class Foo
#include an implementation file
Unfortunately, in some situations, two interfaces need to be available before including any implementations. I have broken down the problem here:
string.h
#pragma once
// A string depends on a DynArray.
#include "dynarray.h"
template<typename E>
class String {
public:
DynArray<E> arr;
/* ... */
};
// Include the implementation of all the different functions (irrelevant here)
#include "string_impl.h"
dynarray.h
#pragma once
// The dynarray header has no direct dependencies
template<typename E>
class DynArray {
public:
/* ... */
E& Get(int index);
};
// Include the implementation of all the different functions
#include "dynarray_impl.h"
dynarray_impl.h
#pragma once
// The dynarray implementation needs the OutOfRangeException class
#include "out_of_range_exception.h"
template<typename E>
E& DynArray<E>::Get(int index) {
if (index >= size) {
throw OutOfRangeException("Some error message");
}
}
out_of_range_exception.h
class OutOfRangeException {
public:
String message;
OutOfRangeException(String message) {
/* ... */
}
};
Because each module includes its implementation at the bottom, when string.h is included somewhere, the contents of dynarray_impl.h, and with it out_of_range_exception.h, appear before the String class interface. So String is not yet defined inside OutOfRangeException.
Obviously, the solution is to delay only the implementation part of dynarray (dynarray_impl.h) until after the definition of the String interface. The problem is that I have no idea how to do this without creating some kind of common header file, which is not compatible with a module-based approach.
Your problem is that you have one file for both interface and implementation.
#includeing that file represents both depending on the interface of X and the implementation of X.
Sometimes you just want to depend on the interface of X.
X interface:
#include all dependencies of the interface
Define an interface class X.
X implementation:
#include the interface
#include all dependencies of the implementation
Define the implementation of class X.
In a sense, these are two separate modules, where one depends on the other. This permits clients to depend only on the interface of another type, or only on its implementation, or first on the interface, then later on the implementation.
Usually you can just #include "X.h", except when you have a circular dependency of implementations. Then somewhere you have to break the chain with a #include "X_interface.h"
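Applied to the example in the question, the split might look roughly like this (a sketch; the dynarray_interface.h file name is invented): string.h depends only on the DynArray interface, while dynarray.h pulls in the implementation, which in turn drags in OutOfRangeException and hence String.
dynarray_interface.h
#pragma once
template<typename E>
class DynArray {
public:
    E& Get(int index);
    /* ... */
};
dynarray.h
#pragma once
#include "dynarray_interface.h"
#include "dynarray_impl.h"   // needs OutOfRangeException, which needs String
string.h
#pragma once
#include "dynarray_interface.h"   // only the interface is required here
template<typename E>
class String {
public:
    DynArray<E> arr;
    /* ... */
};
#include "string_impl.h"
(out_of_range_exception.h would then #include "string.h" for the String it uses.)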
If you really want to use a single header file, you can do away with #pragma once and have header files that can be included in "two modes". This can massively slow down build times, as any such mechanism requires the compiler to open files just to check whether there is any code of interest there; most compilers can detect #ifdef header guards and #pragma once and avoid reopening files they know won't contain anything of interest, but a fancy "can be included multiple times in different modes" header file cannot be handled by that technique.
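One possible shape for such a header, sketched for the DynArray example (this illustrates the mechanism being warned about, not a recommendation; the macro names and the data/size members are invented here): the interface part and the implementation part are guarded by separate macros, and a client defines DYNARRAY_WANT_IMPL and includes the file again once all the interfaces it needs are visible.
dynarray.h
// no #pragma once: this header is designed to be included more than once
#ifndef DYNARRAY_INTERFACE_SEEN
#define DYNARRAY_INTERFACE_SEEN
template<typename E>
class DynArray {
public:
    E& Get(int index);
private:
    E* data;    // members elided in the question; shown here so Get can return something
    int size;
};
#endif

#if defined(DYNARRAY_WANT_IMPL) && !defined(DYNARRAY_IMPL_SEEN)
#define DYNARRAY_IMPL_SEEN
#include "out_of_range_exception.h"
template<typename E>
E& DynArray<E>::Get(int index) {
    if (index >= size) {
        throw OutOfRangeException("Some error message");
    }
    return data[index];
}
#endif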

Why is it common to not include the word 'class' in C++ .cpp files?

Most classes appear to be separated between declaration and definition in the following form, using the test:: qualifier to define the members:
// test.h
class test
{
public:
void func1(void);
private:
void func2(void);
};
// test.cpp
void test::func1(void)
{
//whatever
}
void test::func2(void)
{
//whatever
}
Why don't we typically see people use the keyword class in the .cpp file? Like in the following form:
// test.cpp
class test {
void func1(void)
{
//whatever
}
void func2(void)
{
//whatever
}
};
Is it just convention to use the qualified names? Or is it because it makes more sense when you start implementing a class across multiple source files?
Let's view this question from another angle...
It is possible to use the same syntax for both, but it's "the other one"; the following is perfectly valid:
namespace ns
{
int foo();
}
int ns::foo() { return 0; }
Looked at like this, it's the opposite question that's interesting, "why is it common to include the word 'namespace' in .cpp files?"
There's one substantial difference between namespaces and classes that makes namespace {} necessary in so many places: namespaces are open to extension, but classes are defined entirely by their (one and only) definition.
Like with classes, you can't add anything to a namespace using the syntax above; you can't add a function bar above with only int ns::bar() { return 9; }, the only way to add names to a namespace is "from within".
And, as many have discovered, it's convenient to wrap an entire file in a namespace and not use the qualified names, even if you're not adding any names to it.
Hence the popularity of "namespace": it's a convenience enabled by the extensibility of namespaces.
Another issue is that the meaning of your "test.cpp" would depend on whether the class definition has already been seen by the compiler – without it, that's a valid and complete definition of a class with two private functions.
This kind of "action from a distance" depending on possibly very distant code is painful to work with.
It's also worth noting that namespaces were added some twenty years after "C with classes" was created, when C++ was a well established language, and changing the meaning of a construct that literally hasn't changed in decades is pretty much unthinkable.
Particularly if all it does is save a few keystrokes.

How to wrap a C struct in a C++ class and keep the same name?

Let's say the Acme company releases a useful library with an extremely ugly C API. I'd like to wrap the structs and related functions in C++ classes. It seems like I can't use the same names for the wrapper classes, because the original library is not inside a namespace.
Something like this is not possible, right?
namespace AcmesUglyStuff {
#include <acme_stuff.h> // declares a struct Thing
}
class Thing {
public:
...
private:
AcmesUglyStuff::Thing thing;
};
Linking will be a problem.
The only way I can think of to wrap the library, and not pollute my namespace with the C library names, is a hack like this, reserving space in the class:
// In mything.h
namespace wrapper {
class Thing {
public:
...
private:
char impl[SIZE_OF_THING_IN_C_LIB];
};
}
// In thing.cc
#include <acme_stuff.h>
wrapper::Thing::Thing() {
c_lib_function((::Thing*)impl); // Thing here referring to the one in the C lib
}
Is that the only way? I'd like to avoid putting prefixes on all my class names, like XYThing, etc.
Seems like you're making this harder than it needs to be.
#include "acme_stuff.h" // puts all of its names in global namespace
namespace acme {
class Thing {
public:
// whatever
private:
::Thing thing;
};
}
Now just use acme::Thing rather than Thing.
If it's really important to you to not have the C names in the global namespace, then you need a level of indirection:
namespace acme {
class Thing {
public:
Thing();
~Thing();
// whatever
private:
void *acme_thing;
};
}
In your implementation file, #include "acme_stuff.h", in your constructor create a new ::Thing object and store its address in acme_thing, in your destructor delete it, and in your member functions cast acme_thing to type ::Thing*.
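A sketch of that implementation file (file names, the frob() member, and the acme_frob() C function are invented stand-ins for whatever the real API provides):
thing.cpp
#include "thing.h"        // declares acme::Thing with its void* acme_thing member
#include "acme_stuff.h"   // only this translation unit sees the C library's ::Thing

namespace acme {

Thing::Thing() : acme_thing(new ::Thing()) {}

Thing::~Thing() {
    delete static_cast<::Thing*>(acme_thing);
}

void Thing::frob() {      // hypothetical member wrapping a hypothetical C function
    acme_frob(static_cast<::Thing*>(acme_thing));
}

} // namespace acme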
It's not a good idea to try to name something the exact same thing as something else. (I mean equal fully-qualified names, including all namespaces.) If some library has already grabbed the obvious best name in the global namespace, you'll need to pick a different name.
You could put your class Thing in a namespace as Pete Becker suggests, and then use ::Thing to access Acme's Thing. That would be fine if you're prepared to always access your class through its fully namespace-qualified name (e.g. My::Thing). It's tempting to try using My::Thing; or using namespace My;, but that won't work, because any translation unit that includes the definition of your class (e.g. via a header file you create) must necessarily pull Acme's Thing into the global namespace first (otherwise an "Undefined symbol" compilation error would occur when parsing the definition of My::Thing).
Is it really a C API? Try wrapping the whole included header in extern "C" {} to solve the linking problem.
namespace AcmesUglyStuff {
extern "C" {
#include <acme_stuff.h>
}
}

Partial class definition on C++?

Does anyone know if it is possible to have partial class definitions in C++?
Something like:
file1.h:
class Test {
public:
int test1();
};
file2.h:
class Test {
public:
int test2();
};
To me it seems quite useful for defining multi-platform classes that share common, platform-independent functions, because inheritance is a cost to pay that buys nothing useful for multi-platform classes.
I mean you will never have two multi-platform specialization instances at runtime, only at compile time. Inheritance could be useful to fulfill your public interface needs but after that it won't add anything useful at runtime, just costs.
Also you will have to use an ugly #ifdef to use the class because you can't make an instance from an abstract class:
class genericTest {
public:
int genericMethod();
};
Then let's say for win32:
class win32Test: public genericTest {
public:
int win32Method();
};
And maybe:
class macTest: public genericTest {
public:
int macMethod();
};
Let's say that both win32Method() and macMethod() call genericMethod(), and you will have to use the class like this:
#ifdef _WIN32
genericTest *test = new win32Test();
#elif MAC
genericTest *test = new macTest();
#endif
test->genericMethod();
Thinking about it for a while, the inheritance was only useful for giving them both a genericMethod() that depends on the platform-specific one, but you pay the cost of calling two constructors for it. You also have ugly #ifdefs scattered around the code.
That's why I was looking for partial classes. I could define the platform-dependent part at compile time; of course, in this silly example I would still need an ugly #ifdef inside genericMethod(), but there are other ways to avoid that.
This is not possible in C++, it will give you an error about redefining already-defined classes. If you'd like to share behavior, consider inheritance.
Try inheritance
Specifically
class AllPlatforms {
public:
int common();
};
and then
class PlatformA : public AllPlatforms {
public:
int specific();
};
You can't partially define classes in C++.
Here's a way to get the "polymorphism, where there's only one subclass" effect you're after without overhead and with a bare minimum of #define or code duplication. It's called simulated dynamic binding:
#include <iostream>

template <typename T>
class genericTest {
public:
    void genericMethod() {
        // do some generic things
        std::cout << "Could be any platform, I don't know" << std::endl;
        // base class can call a method in the child with static_cast
        (static_cast<T*>(this))->doClassDependentThing();
    }
};

// forward declarations so the typedef compiles before the full class definitions
class Win32Test;
class MacTest;

#ifdef _WIN32
typedef Win32Test Test;
#elif MAC
typedef MacTest Test;
#endif
Then off in some other headers you'll have:
class Win32Test : public genericTest<Win32Test> {
public:
void win32Method() {
// windows-specific stuff:
std::cout << "I'm in windows" << std::endl;
// we can call a method in the base class
genericMethod();
// more windows-specific stuff...
}
void doClassDependentThing() {
std::cout << "Yep, definitely in windows" << std::endl;
}
};
and
class MacTest : public genericTest<MacTest> {
public:
void macMethod() {
// mac-specific stuff:
std::cout << "I'm in MacOS" << std::endl;
// we can call a method in the base class
genericMethod();
// more mac-specific stuff...
}
void doClassDependentThing() {
std::cout << "Yep, definitely in MacOS" << std::endl;
}
};
This gives you proper polymorphism at compile time. genericTest can non-virtually call doClassDependentThing in a way that gives it the platform version, (almost like a virtual method), and when win32Method calls genericMethod it of course gets the base class version.
This creates no overhead associated with virtual calls - you get the same performance as if you'd typed out two big classes with no shared code. It may create a non-virtual call overhead at con(de)struction, but if the con(de)structor for genericTest is inlined you should be fine, and that overhead is in any case no worse than having a genericInit method that's called by both platforms.
Client code just creates instances of Test, and can call methods on them which are either in genericTest or in the correct version for the platform. To help with type safety in code which doesn't care about the platform and doesn't want to accidentally make use of platform-specific calls, you could additionally do:
#ifdef _WIN32
typedef genericTest<Win32Test> BaseTest;
#elif MAC
typedef genericTest<MacTest> BaseTest;
#endif
You have to be a bit careful using BaseTest, but not much more so than is always the case with base classes in C++. For instance, don't slice it with an ill-judged pass-by-value. And don't instantiate it directly, because if you do and call a method that ends up attempting a "fake virtual" call, you're in trouble. The latter can be enforced by ensuring that all of genericTest's constructors are protected.
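For instance, a minimal way to enforce that last point (a sketch):
template <typename T>
class genericTest {
public:
    void genericMethod() {
        (static_cast<T*>(this))->doClassDependentThing();
    }
protected:
    genericTest() {}   // not constructible on its own; only Win32Test/MacTest can construct it
};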
or you could try PIMPL
common header file:
class Test
{
public:
...
void common();
...
private:
class TestImpl;
TestImpl* m_customImpl;
};
Then create the cpp files doing the custom implementations that are platform specific.
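Each platform then gets its own implementation file along these lines (only one of them is compiled into a given build; the file name, and the assumption that the "..." in the header includes a constructor and destructor, are mine):
Test_Win32.cpp
#include "Test.h"

class Test::TestImpl {
public:
    void common() { /* Win32-specific code */ }
};

Test::Test() : m_customImpl(new TestImpl()) {}
Test::~Test() { delete m_customImpl; }

void Test::common() {
    m_customImpl->common();
}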
#include will work as that is preprocessor stuff.
class Foo
{
#include "FooFile_Private.h"
}
////////
FooFile_Private.h:
private:
void DoSg();
How about this:
class WindowsFuncs { public: int f(); int winf(); };
class MacFuncs { public: int f(); int macf(); };
class Funcs
#ifdef Windows
: public WindowsFuncs
#else
: public MacFuncs
#endif
{
public:
Funcs();
int g();
};
Now Funcs is a class known at compile-time, so no overheads are caused by abstract base classes or whatever.
As written, it is not possible, and in some cases it is actually annoying.
There was an official proposal to the ISO committee, with embedded software in mind, in particular to avoid the RAM overhead imposed by both inheritance and the pimpl pattern (both approaches require an additional pointer per object):
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0309r0.pdf
Unfortunately the proposal was rejected.
As written, it is not possible.
You may want to look into namespaces. You can add a function to a namespace in another file. The problem with a class is that each .cpp needs to see the full layout of the class.
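For example, unlike a class, a namespace can be reopened and extended from any number of files (names below are invented):
math_part1.cpp
namespace mylib {
    int add(int a, int b) { return a + b; }
}
math_part2.cpp
namespace mylib {
    int sub(int a, int b) { return a - b; }   // extends the same namespace from another file
}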
Nope.
But, you may want to look up a technique called "Policy Classes". Basically, you make micro-classes (that aren't useful on their own) then glue them together at some later point.
Either use inheritance, as Jamie said, or #ifdef to make different parts compile on different platforms.
For me it seems quite useful for definining multi-platform classes that have common functions between them that are platform-independent.
Except developers have been doing this for decades without this 'feature'.
I believe partial was created because Microsoft has had, for decades also, a bad habit of generating code and handing it off to developers to develop and maintain.
Generated code is often a maintenance nightmare. What happens to that entire MFC-generated framework when you need to bump your MFC version? Or how do you port all that code in *.designer.cs files when you upgrade Visual Studio?
Most other platforms instead rely more heavily on generated configuration files that the user/developer can modify. Those have a more limited vocabulary and are not prone to being mixed with unrelated code. The configuration files can even be embedded in the binary as a resource if deemed necessary.
I have never seen 'partial' used in a place where inheritance or a configuration resource file wouldn't have done a better job.
Since headers are just textually inserted, one of them could omit the "class Test {" and "}" and be #included in the middle of the other.
I've actually seen this in production code, albeit Delphi not C++. It particularly annoyed me because it broke the IDE's code navigation features.
A dirty but practical way is to use the #include preprocessor directive:
Test.h:
#ifndef TEST_H
#define TEST_H
class Test
{
public:
Test(void);
virtual ~Test(void);
#include "Test_Partial_Win32.h"
#include "Test_Partial_OSX.h"
};
#endif // !TEST_H
Test_Partial_OSX.h:
// This file should be included in Test.h only.
#ifdef MAC
public:
int macMethod();
#endif // MAC
Test_Partial_WIN32.h:
// This file should be included in Test.h only.
#ifdef _WIN32
public:
int win32Method();
#endif // _WIN32
Test.cpp:
// Implement common member function of class Test in this file.
#include "stdafx.h"
#include "Test.h"
Test::Test(void)
{
}
Test::~Test(void)
{
}
Test_Partial_OSX.cpp:
// Implement OSX platform specific function of class Test in this file.
#include "stdafx.h"
#include "Test.h"
#ifdef MAC
int Test::macMethod()
{
return 0;
}
#endif // MAC
Test_Partial_WIN32.cpp:
// Implement WIN32 platform specific function of class Test in this file.
#include "stdafx.h"
#include "Test.h"
#ifdef _WIN32
int Test::win32Method()
{
return 0;
}
#endif // _WIN32
Suppose that I have:
MyClass_Part1.hpp, MyClass_Part2.hpp and MyClass_Part3.hpp
Theoretically someone can develop a GUI tool that reads all these hpp files above and creates the following hpp file:
MyClass.hpp
class MyClass
{
#include <MyClass_Part1.hpp>
#include <MyClass_Part2.hpp>
#include <MyClass_Part3.hpp>
};
The user can theoretically tell the GUI tool where each input hpp file is and where to create the output hpp file.
Of course, the developer can theoretically program the GUI tool to work with any number of hpp files (not necessarily only 3) whose prefix can be any arbitrary string (not necessarily only "MyClass").
Just don't forget to #include <MyClass.hpp> to use the class "MyClass" in your projects.
Declaring a class body twice will likely generate a type-redefinition error. If you're looking for a workaround, I'd suggest #ifdef'ing, or using an abstract base class to hide platform-specific details.
You can get something like partial classes using template specialization and partial specialization. Before you invest too much time, check your compiler's support for these; older compilers like MSVC++ 6.0 didn't support partial specialization.
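One way to read that suggestion (a sketch with invented names, not the only possibility): put the platform-specific members in per-platform specializations and layer the common code on top.
struct Win32Tag {};
struct MacTag {};

// Platform-specific part: one specialization per platform
template <typename Platform> class TestPlatform;

template <> class TestPlatform<Win32Tag> {
public:
    int win32Method();
};

template <> class TestPlatform<MacTag> {
public:
    int macMethod();
};

// Common part, written once
template <typename Platform>
class BasicTest : public TestPlatform<Platform> {
public:
    int genericMethod();
};

#ifdef _WIN32
typedef BasicTest<Win32Tag> Test;
#else
typedef BasicTest<MacTag> Test;
#endif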
This is not possible in C++, it will give you an error about redefining already-defined
classes. If you'd like to share behavior, consider inheritance.
I agree with this. Partial classes are a strange construct that makes code very difficult to maintain afterwards. It is difficult to locate which partial class each member is declared in, and redefinition or even reimplementation of features is hard to avoid.
If you want to extend std::vector, you have to inherit from it. This is for several reasons: first of all, extending it in place would change the responsibility of the class and (potentially) its class invariants; secondly, from a security point of view it should be avoided.
Consider a class that handles user authentication...
partial class UserAuthentication {
private string user;
private string password;
public bool signon(string usr, string pwd);
}
partial class UserAuthentication {
private string getPassword() { return password; }
}
A lot of other reasons could be mentioned...
Let platform independent and platform dependent classes/functions be each-others friend classes/functions. :)
Their separate identifiers also permit finer control over instantiation, so coupling is looser. Partial classes break the encapsulation foundation of OO far too absolutely, whereas the requisite friend declarations relax it just enough to facilitate separating concerns such as platform-specific aspects from domain-specific, platform-independent ones.
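A sketch of how that might look (all names invented; this is only one reading of the suggestion): the platform-independent class keeps its data private and befriends exactly one platform-specific helper.
class Test {
public:
    int genericMethod();
private:
    int val;
#ifdef _WIN32
    friend class Win32Helper;   // the platform-specific code may touch Test's internals
#else
    friend class MacHelper;
#endif
};

#ifdef _WIN32
class Win32Helper {
public:
    static int platformMethod(Test& t) { return t.val + 1; }   // uses the private member directly
};
#else
class MacHelper {
public:
    static int platformMethod(Test& t) { return t.val - 1; }
};
#endif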
I've been doing something similar in my rendering engine. I have a templated IResource interface class from which a variety of resources inherit (stripped down for brevity):
template <typename TResource, typename TParams, typename TKey>
class IResource
{
public:
virtual TKey GetKey() const = 0;
protected:
static shared_ptr<TResource> Create(const TParams& params)
{
return ResourceManager::GetInstance().Load(params);
}
virtual Status Initialize(const TParams& params, const TKey key, shared_ptr<Viewer> pViewer) = 0;
};
The Create static function calls back to a templated ResourceManager class that is responsible for loading, unloading, and storing instances of the type of resource it manages with unique keys, ensuring duplicate calls are simply retrieved from the store, rather than reloaded as separate resources.
template <typename TResource, typename TParams, typename TKey>
class TResourceManager
{
sptr<TResource> Load(const TParams& params) { ... }
};
Concrete resource classes inherit from IResource utilizing the CRTP. ResourceManagers specialized to each resource type are declared as friends to those classes, so that the ResourceManager's Load function can call the concrete resource's Initialize function. One such resource is a texture class, which further uses a pImpl idiom to hide its privates:
class Texture2D : public IResource<Texture2D , Params::Texture2D , Key::Texture2D >
{
typedef TResourceManager<Texture2D , Params::Texture2D , Key::Texture2D > ResourceManager;
friend class ResourceManager;
public:
virtual Key::Texture2D GetKey() const override final;
void GetWidth() const;
private:
virtual Status Initialize(const Params::Texture2D & params, const Key::Texture2D key, shared_ptr<Texture2D > pTexture) override final;
struct Impl;
unique_ptr<Impl> m;
};
Much of the implementation of our texture class is platform-independent (such as the GetWidth function if it just returns an int stored in the Impl). However, depending on what graphics API we're targeting (e.g. Direct3D11 vs. OpenGL 4.3), some of the implementation details may differ. One solution could be to inherit from IResource an intermediary Texture2D class that defines the extended public interface for all textures, and then inherit a D3DTexture2D and OGLTexture2D class from that.
The first problem with this solution is that it requires users of your API to be constantly mindful of which graphics API they're targeting (they could call Create on both child classes). This could be resolved by restricting Create to the intermediary Texture2D class, which uses maybe an #ifdef switch to create either a D3D or an OGL child object. But then there is still the second problem with this solution, which is that the platform-independent code would be duplicated across both children, causing extra maintenance effort.
You could attempt to solve this problem by moving the platform-independent code into the intermediary class, but what happens if some of the member data is used by both platform-specific and platform-independent code? The D3D/OGL children won't be able to access those data members in the intermediary's Impl, so you'd have to move them out of the Impl and into the header, along with any dependencies they carry, exposing anyone who includes your header to all that crap they don't need to know about.
APIs should be easy to use right and hard to use wrong. Part of being easy to use right is restricting the user's exposure to only the parts of the API they should be using. This solution opens it up to being easily used wrong and adds maintenance overhead. Users should only have to care about the graphics API they're targeting in one spot, not everywhere they use your API, and they shouldn't be exposed to your internal dependencies.
This situation screams for partial classes, but they are not available in C++. So instead, you might simply define the Impl structure in separate header files, one for D3D and one for OGL, put an #ifdef switch at the top of the Texture2D.cpp file, and define the rest of the public interface universally. This way, the public interface has access to the private data it needs, the only duplicated code is the data member declarations (construction can still be done in the Texture2D constructor that creates the Impl), your private dependencies stay private, and users don't have to care about anything except using the limited set of calls in the exposed API surface:
// D3DTexture2DImpl.h
#include "Texture2D.h"
struct Texture2D::Impl
{
/* insert D3D-specific stuff here */
};
// OGLTexture2DImpl.h
#include "Texture2D.h"
struct Texture2D::Impl
{
/* insert OGL-specific stuff here */
};
// Texture2D.cpp
#include "Texture2D.h"
#ifdef USING_D3D
#include "D3DTexture2DImpl.h"
#else
#include "OGLTexture2DImpl.h"
#endif
Key::Texture2D Texture2D::GetKey() const
{
return m->key;
}
// etc...