In a C++ cross-platform library,
we use shared headers that are compiled against a different module implementation for each OS,
a.k.a. a "link seam".
//Example:
//CommonHeader.h
#ifndef COMMON_HEADER_H
#define COMMON_HEADER_H
class MyClass {
public:
void foo();
};
#endif
//WindowsModule.cpp
#include "CommonHeader.h"
#include <cstdio>
void MyClass::foo() {
//DO SOME WINDOWS API STUFF
printf("This implements the Windows version\n");
}
//LinuxModule.cpp
#include "CommonHeader.h"
#include <cstdio>
void MyClass::foo() {
//DO SOME LINUX SPECIFIC STUFF HERE
printf("This implements the Linux version\n");
}
Of course, in each build you select only one module, matching the target environment.
This avoids an indirect (e.g. virtual) call to the functions.
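To make the seam concrete: a consumer translation unit is identical on every platform, and the build links in exactly one of the two modules. A minimal usage sketch (not part of the original example):
//main.cpp - identical on every platform
#include "CommonHeader.h"
int main() {
    MyClass m;
    m.foo(); // resolved at link time against whichever module was compiled in
}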
My question is: how do I denote this relationship in UML?
"Inheritance"? "Dependency"? A "note"?
class MyClass {
public:
void foo();
};
This is nothing more than a class contract; essentially an interface that you realize in different modules. To visualize that, you can use the interface realization notation (drawn like a generalization, but with a dashed line).
The reality, I think, is that you have only one class in a UML class diagram, MyClass, with a public operation foo(); it's just that you have two code implementations of this class, one for Linux and one for Windows.
UML class models are not really set up to answer the question of how you implement this, in your case using C++ with header files. Imagine if instead you inlined your functions and ended up writing two inline implementations of MyClass, one for Linux and one for Windows. Logically you would still have one class in your class diagram (but with two implementations), and the header file (or lack thereof) wouldn't come into it.
What you need, therefore, is a way to show how the C++ source is structured, not a way to show the logical class constructs. I'm not aware of a specific way in UML to represent code structure; however, you could build a code-structure model using artifacts, which could denote the C++ file structure.
It could be seen as some kind of "inheritance". However, there is no relation between classes, as there is just one class, so there is nothing to relate. The construct of selecting a platform-dependent implementation does, however, imitate an "is a" relation.
Related
In C++, classes are usually declared like this:
// Object.h
class Object
{
void doSomething();
};
// Object.cpp
#include "Object.h"
void Object::doSomething()
{
// do something
}
I understand that this improves compile times, because having the whole class in one file means everything that uses it must be recompiled whenever you change either the implementation or the interface.
However, from an OOP point of view, I don't see how separating the interface from the implementation helps. I've read a lot of other questions and answers, but the problem I have is this: if you define the methods for a class properly (in separate header/source files), then how can you make a different implementation? If you define Object::method in two different places, then how will the compiler know which one to call? Do you declare the Object::method definitions in different namespaces?
Any help would be appreciated.
If you want one interface and multiple implementations in the same program, then you use an abstract virtual base class.
Like so:
class Printer {
public:
virtual void print_string(const char *s) = 0;
virtual ~Printer() = default;
};
Then you can have implementations:
class EpsonPrinter : public Printer {
public:
void print_string(const char *s) override;
};
class LexmarkPrinter : public Printer {
public:
void print_string(const char *s) override;
};
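A brief usage sketch (assuming the print_string overrides above are defined in their own source files): callers program against Printer and never name the concrete type.
#include <memory>

void print_report(Printer& p) {
    p.print_string("quarterly report\n"); // virtual dispatch picks the right override
}

int main() {
    std::unique_ptr<Printer> p = std::make_unique<EpsonPrinter>();
    print_report(*p); // prints via the Epson implementation
}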
On the other hand, if you are looking at code which implements OS independence, it might have several subdirectories, one for each OS. The header files are the same, but the source files for Windows are only built for Windows and the source files for Linux/POSIX are only built for Linux.
However, from [an] OOP point of view, I don't see how separating the interface from the implementation helps.
It doesn't help from an OOP point of view, and isn't intended to. This is a text inclusion feature of C++ which is inherited from C, a language that has no direct support for object-oriented programming.
Text inclusion for modularity is a feature borrowed, in turn, from assembly languages. It is almost an antithesis to object-oriented programming or basically anything that is good in the area of computer program organization.
Text inclusion allows your C++ compiler to interoperate with ancient object file formats which do not store any type information about symbols. The Object.cpp file is compiled to this object format, resulting in an Object.o file or Object.obj or what have you on your platform. When other parts of the program use this module, they almost solely trust the information that is written about it in Object.h. Nothing useful emanates out of the Object.o file except for symbols accompanied by numeric information like their offsets and sizes. If the information in the header doesn't correctly reflect Object.obj, you have undefined behavior (mitigated, in some cases, by C++'s support for function overloading, which turns mismatched function calls into unresolved symbols, thanks to name mangling).
For instance if the header declares a variable extern int foo; but the object file is the result of compiling double foo = 0.0; it means that the rest of the program is accessing a double object as an int. What prevents this from happening is that Object.cpp includes its own header (thereby forcing the mismatch between the declaration and definition to be caught by the compiler) and that you have a sane build system in place which ensures that Object.cpp is rebuilt if anything touches Object.h. If that check is based on timestamps, you must also have a sane file system and version control system that don't do wacky things with timestamps.
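Spelling out the mismatch just described as a sketch (file names are illustrative):
// object.h
extern int foo;   // the header claims foo is an int

// object.cpp - note that it does NOT include object.h,
// so the compiler never sees the contradiction
double foo = 0.0; // the object file actually defines a double

// main.cpp
#include "object.h"
int main() { return foo; } // undefined behavior: reads a double as an int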
If you define Object::method in two different places, then how will the compiler know which one to call?
It won't, and in fact you will be breaking the "One Definition Rule" if you do this, which results in undefined behavior, no diagnostic required, according to the standard.
If you want to define multiple implementations for a class interface, you should use inheritance in some way.
One way that you might do it is, use a virtual base class and override some of the methods in different subclasses.
If you want to manipulate instances of the class as value types, then you can use the pImpl idiom, combined with virtual functions. So you would have one class, the "pointer" class, which exposes the interface and holds a pointer to an abstract base class type. Then, in the .cpp file, you would define the abstract base class, define multiple subclasses of it, and have different constructors of the pImpl class instantiate different subclasses as the implementation.
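Here is a minimal sketch of that variant, with illustrative names (Widget, Backend, and so on are not from the answer): the public class is a value type, and its constructor selects one of several hidden implementations.
// Widget.h
#include <memory>

class WidgetImpl; // abstract implementation base, defined in the .cpp

class Widget {
public:
    enum class Backend { A, B };
    explicit Widget(Backend b); // picks the implementation
    ~Widget();                  // defined where WidgetImpl is complete
    void run();
private:
    std::unique_ptr<WidgetImpl> impl_;
};

// Widget.cpp
class WidgetImpl {
public:
    virtual ~WidgetImpl() = default;
    virtual void run() = 0;
};

namespace {
class ImplA : public WidgetImpl { public: void run() override { /* one implementation */ } };
class ImplB : public WidgetImpl { public: void run() override { /* another implementation */ } };
}

Widget::Widget(Backend b) {
    if (b == Backend::A)
        impl_ = std::make_unique<ImplA>();
    else
        impl_ = std::make_unique<ImplB>();
}
Widget::~Widget() = default;
void Widget::run() { impl_->run(); }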
If you want to use static polymorphism, rather than run-time polymorphism, you can use the CRTP idiom (which is still ultimately based on inheritance, just not on virtual functions).
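A minimal CRTP sketch (illustrative names, not from the answer): the base class dispatches to the derived class at compile time, with no virtual function table involved.
template <typename Derived>
class Shape {
public:
    void draw() { static_cast<Derived&>(*this).do_draw(); } // resolved at compile time
};

class Circle : public Shape<Circle> {
public:
    void do_draw() { /* draw a circle */ }
};

int main() {
    Circle c;
    c.draw(); // no virtual dispatch involved
}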
I have had an idea for a non-standard way to handle multiplatform interfaces in C++ and would like to know if that is generally a bad idea, and why.
Currently I can only think of one disadvantage: it is very(?) uncommon to do something like that, and maybe it's not obvious at first sight how it works:
I have a class that will be used on different platforms, for example CMat4x4f32 (a 4x4 matrix class using 32-bit floats).
My platform independent interface looks like this:
class CMat4x4f32
{
public:
//Some methods
#include "Mat4x4f32.platform.inl"
};
Mat4x4f32.platform.inl looks like this:
public:
// Fills the matrix from a DirectX11 SSE matrix
void FromXMMatrix(const XMMatrix& _Matrix);
It just adds a platform-dependent interface to the matrix class.
The .cpp and the Mat4x4f32.platform.inl are located inside subfolders like "win32" or "posix", so in win32 I implement the FromXMMatrix function. My build system adds these subfolders to the include path depending on the platform I am building for.
I could even go a step further and implement a .platform.cpp that is located inside win32 and contains only the functions I add to the interface for that platform.
I personally think this is a good idea because it makes writing and using interfaces very easy and clean.
Especially in my renderer library, which heavily uses the matrix class from my base library, I can now use platform-dependent functions (FromXMMatrix) in the DirectX part as if I didn't have any other platforms to worry about.
In the base library itself I can still write platform independent code using the common matrix interface.
I also have other classes where this is useful: For example an Error class that collects errors and automatically translates them into readable messages and provide some debugging options.
For win32 I can create error instances from failed DirectX and Win32 HRESULTs, and on Linux I can create them from returned errno values. In the base library I have a class that manages these errors using the common error interface.
It greatly reduces the code required and avoids ugly platform-dependent utility classes.
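A sketch of how that Error class might pick up its platform-specific members under the same scheme (all names here are illustrative, not from the original post):
// Error.h - same trick as the matrix class
class CError
{
public:
    // Common, platform-independent interface
    const char* Message() const;
    // Platform-specific additions come from the per-platform include
    // directory, e.g. win32/Error.platform.inl might declare
    // void FromHResult(HRESULT _Result); while posix/Error.platform.inl
    // might declare void FromErrno(int _Errno);
#include "Error.platform.inl"
};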
So is this bad or good design and what are the alternatives?
It sounds like you're talking about using the bridge pattern:
http://c2.com/cgi/wiki?BridgePattern
In my personal experience I've developed a lot of platform independent interfaces, with specific implementations using this pattern and it has worked very well, I've often used it with the Pimpl idiom:
http://c2.com/cgi/wiki?PimplIdiom
As in alternatives I've found that this site in general is very good for explaining pros & cons of various patterns and paradigms:
http://c2.com/cgi/wiki
I would recommend you used "pimpl" instead:
#include <memory>

class CMat4x4f32
{
public:
    CMat4x4f32();
    ~CMat4x4f32(); // defined in the .cpp, where the impl type is complete
    void Foo();
    void Bar();
private:
    class CMat4x4f32Impl; // forward declaration only; defined per platform
    std::unique_ptr<CMat4x4f32Impl> m_impl;
};
And then in build-file configs pull in platform specific .cpp files, where you for instance define your platform specific functions:
class CMat4x4f32::CMat4x4f32Impl
{
public:
    void Foo() { /* Actual impl */ }
    void Bar() { /* Actual impl */ }
    // Fills the matrix from a DirectX11 SSE matrix
    void FromXMMatrix(const XMMatrix& _Matrix);
};

CMat4x4f32::CMat4x4f32() : m_impl(new CMat4x4f32Impl()) {}
CMat4x4f32::~CMat4x4f32() = default; // CMat4x4f32Impl is complete here
void CMat4x4f32::Foo() { m_impl->Foo(); }
void CMat4x4f32::Bar() { m_impl->Bar(); }
I have a class Foo, which I do not implement directly, but which wraps external libraries (e.g., FooXternal1 or FooXternal2).
One way that I have seen to do this is with preprocessor directives:
#include "config.h"
#include "foo.h"
#ifdef _FOOXTERNAL1_WRAPPER_
//implementation of class Foo using FooXternal1
#endif
#ifdef _FOOXTERNAL2_WRAPPER_
//implementation of class Foo using FooXternal2
#endif
and a config.h is used to define these preprocessor flags (_FOOXTERNAL1_WRAPPER_ and _FOOXTERNAL2_WRAPPER_).
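For illustration, config.h might be nothing more than this (a sketch; note, as an aside, that identifiers beginning with an underscore followed by a capital letter are reserved in C++, so names like FOOXTERNAL1_WRAPPER would be safer):
// config.h - exactly one of these is defined per build
#define _FOOXTERNAL1_WRAPPER_
//#define _FOOXTERNAL2_WRAPPER_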
I have the impression this is frowned upon by the C++ programmer community because it uses preprocessor directives, is hard to debug, etc. Further, it does not allow for the parallel existence of both implementations.
I thought about making Foo a base class and inheriting from it to allow for both implementations to exist in parallel with each other. But I ran into two problems:
Pure virtual functions: I cannot instantiate an object of type 'Foo', which I need during use.
Virtual functions run the risk of using an object with no (proper) implementation.
Am I missing something? Is there a cleaner way to do this?
EDIT: To summarize, there are 3(.5?!) ways of doing the wrapping; 2(.5) are given by icepack, and the last by Sergey:
1. Use factory methods
2. Use preprocessor directives
2.5. Use the makefile or IDE to effectively do the work of the preprocessor directives
3. Use templates, as suggested by Sergey
I am working on an embedded system where resources are limited, so I decided to use template<enum = default_library> with template specialization. It is easy for later users to understand; at least that's what I think.
If all method names of external implementations are similar, you can use templates. Let external implementations look like:
#include <iostream>

class FooX1
{
public:
void Met1()
{
std::cout << "X1\n";
}
};
class FooX2
{
public:
void Met1()
{
std::cout << "X2\n";
}
};
Then you can use several variants.
Variant 1. You can declare a member of the template type and wrap all calls to the external implementation, even with some preparation before the call. Don't forget to delete impl in the destructor.
template<typename FooX>
class FooVariant1
{
public:
    FooVariant1()
    {
        impl = new FooX();
    }
    ~FooVariant1()
    {
        delete impl; // clean up the wrapped implementation
    }
    void Met1Wrapper()
    {
        impl->Met1();
    }
private:
    FooX *impl;
};
Usage:
FooVariant1<FooX1> bar;
bar.Met1Wrapper();
Variant 2. You can inherit from a template parameter. In this case you don't declare any members, but just call implementation's methods by their names.
template<typename FooX>
class FooVariant2 : public FooX
{
};
Usage:
FooVariant2<FooX1> bar;
bar.Met1();
A disadvantage of using templates is that there is no easy way to change implementations at runtime. But in return you get more efficient code, because the types are generated at compile time and there is no virtual function table, which can slow a program down.
If you want the two implementations to coexist at runtime, an interface is the way to go (for example, you can use the factory method design pattern to instantiate the concrete object, as @n.m. has suggested).
If you can decide at compilation time what is the implementation that you need, you have several options:
Still use an interface. This allows an easy transition if in the future you need both implementations at runtime.
Use preprocessor directives. There is nothing wrong here as far as C++ is concerned. It's purely a design issue.
Put the implementations in different files and configure your build to compile either one of them according to settings - this is actually similar to using preprocessor directives, but it's cleaner and doesn't add garbage to your code (since the flags live in the solution/makefile/whatever your build system uses).
The only thing I'd frown upon is including both implementations in the same source file. That might get confusing. Otherwise, this is one of the things preprocessor flags are good at, especially if you're not linking both libraries at the same time. It's just like supporting multiple operating systems. Provide a consistent interface in all cases and hide the implementation details somewhere else.
Does type Foo need to hold any information specific to each library? If not, you might be able to get away with this:
#include "Foo.h"
#if defined _FOOXTERNAL1_WRAPPER_
#include "Foo_impl1.cpp"
#elif defined _FOOXTERNAL2_WRAPPER_
#include "Foo_impl2.cpp"
#else
#error "Warn about a missing define here"
#endif
This way you don't have to bother with virtual functions or inheritance and you still prevent any member functions from going unimplemented.
Keep Foo abstract. Provide a factory method
Foo* MakeFoo();
that allocates a new object of either type FooImpl1 or FooImpl2, and returns its address.
Wikipedia on Factory Method pattern.
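A minimal sketch of that factory approach, under the assumption that each build compiles exactly one implementation file (FooImpl1 and doWork are illustrative names):
// Foo.h
class Foo {
public:
    virtual ~Foo() = default;
    virtual void doWork() = 0;
};
Foo* MakeFoo(); // returns whichever implementation this build contains

// FooImpl1.cpp - only compiled into builds that wrap FooXternal1
#include "Foo.h"
class FooImpl1 : public Foo {
public:
    void doWork() override { /* delegate to FooXternal1 */ }
};
Foo* MakeFoo() { return new FooImpl1; }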
I know it's possible to split a class implementation across more than one file (yes, I know that this is a bad idea), but I want to know if it's possible to write the class definition in separate files without getting a redefinition error (maybe with some tricks, or the like...).
No, not the same class, but why would you?
You could define two classes with the same name in the same namespace (the same signature! they would effectively be the same class, just defined in two ways) in two different header files and compile your program, as long as no source file includes both headers. Your application could even link, but then at runtime things won't work as you expect (it is undefined behavior!), unless the two definitions are identical.
See Charles Bailey's answer for a more formal explanation.
EDIT:
If what you want is to have a single class defined by more than one file, in order to (for example) add some automatically generated functions to your class, you could #include a secondary file in the middle of your class.
/* automatically generated file - autofile.gen.h */
void f1( ) { /* ... */ }
void f2( ) { /* ... */ }
void f3( ) { /* ... */ }
/* ... */
/* your header file with your class to be expanded */
class C
{
/* ... */
#include "autofile.gen.h"
/* ... */
};
Not as nice and clean as the partial keyword in C#, but yes, you can split your class into multiple header files:
class MyClass
{
#include "my_class_aspect1.h"
#include "my_class_aspect2.h"
#include "my_class_aspect3.h"
};
#include "my_class_aspect1.inl"
#include "my_class_aspect2.inl"
#include "my_class_aspect3.inl"
Well sort of ...
If you have 1 header file as follows
ClassDef1.h:
class ClassDef
{
protected:
// blah, etc.
public:
// more blah
and another as follows
ClassDef2.h:
public:
// Yet more blah.
};
The class will effectively have been defined across 2 files.
Beyond that sort of trickery... AFAIK, no, you can't.
Yes, you can. Each definition must occur in a separate translation unit but there are heavy restrictions on multiple definitions.
Each definition must consist of the same sequence of tokens and in each definition corresponding names must refer to the same entity (or an entity within the definition of the class itself).
See 3.2 [basic.def.odr] / 5 of ISO 14882:2003 for full details.
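For example, the everyday legal case is the same class definition appearing, token for token, in several translation units because they all include the same header (a sketch):
// point.h - included by both a.cpp and b.cpp
#ifndef POINT_H
#define POINT_H
class Point {
public:
    int x = 0;
    int y = 0;
};
#endif
// Both translation units contain a definition of Point, which is fine
// because the token sequences are identical. If a.cpp and b.cpp each
// hand-wrote a slightly different Point, the program would be ill-formed,
// no diagnostic required.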
Yes. Usually, people put the declarations in a .h file and then put the function body implementations in a .cpp file.
Put the declarations in a .h file, then implement in as many files as you want, but include the .h in each.
But you cannot implement the same thing in more than one file.
I know this is a late post, but a friend just asked me a similar question, so I thought I would post our discussion:
From what I understand you want to do, ThinkingStiff, it's not possible in C++. I take it you want something like "partial class" in C#.
Keep in mind, "partial class" in .NET is necessary so you don't have to put your entire implementation in one big source file. Unlike C++, .NET does not allow your implementation to be separate from the declaration, hence the need for "partial class". So, Rafid, it's not a nice feature, it's a necessary one, and a reinvention of the wheel IMHO.
If your class definition is so large you need to split it across more than one source file, then it can probably be split into smaller, decoupled classes and the primary class can use the mediator design pattern that ties them all together.
If your desire is to hide private members, such as when developing a commercial library and you don't wish to give away intellectual property (or hints to that effect), then you can use pure abstract classes. This way, all that is in the header is a simple class with all the public definitions (the API) and a factory function for creating an instance of it. All the implementation details are in your source file(s).
Another alternative is similar to the above, but uses a normal class (not a pure virtual one), but still exposes only the public interface. In your source file(s), you do all the work inside a nameless namespace, which can include friend functions from your class definition when necessary, etc. A bit more work, but a reasonable technique.
Those are the things you might want to look up and consider when solving your problem.
This is one of the nice features of C#, but unfortunately it is not supported in C++.
Perhaps something like policy-based design would do the trick? Multiple inheritance gives you the possibility to compose classes in ways you can't in other languages. Of course, there are gotchas associated with MI, so make sure you know what you are doing.
The short answer is yes, it is possible. The correct answer is no, you should never even think about doing it (and you will be flogged by other programmers should you disregard that directive).
You can split the definitions of the member functions into other translation units, but not of the class itself. To do that, you'd need to do preprocessor trickery.
// header file with function declarations
class C {
public:
void f(int foo, int bar);
int g();
};
// this goes into a separate file
void C::f(int foo, int bar) {
// code ...
}
// this in yet another
void C::g() {
// code ...
}
There must be a reason why you want to split between two headers, so my guess is that you really want 2 classes that are joined together to form components of a single (3rd) class.
Backgrounder:
The PIMPL Idiom (Pointer to IMPLementation) is a technique for implementation hiding in which a public class wraps a structure or class that cannot be seen outside the library the public class is part of.
This hides internal implementation details and data from the user of the library.
When implementing this idiom, why would you place the public methods on the pimpl class and not the public class, since the public class's method implementations would be compiled into the library and the user only has the header file?
To illustrate, this code puts the Purr() implementation on the impl class and wraps it as well.
Why not implement Purr directly on the public class?
// header file:
class Cat {
private:
    class CatImpl; // Not defined here
    CatImpl *cat_; // Handle
public:
    Cat();  // Constructor
    ~Cat(); // Destructor
    // Other operations...
    void Purr();
};

// CPP file:
#include "cat.h"
#include <cstdio>

class Cat::CatImpl {
public:
    void Purr();
    ... // The actual implementation can be anything
};

Cat::Cat() {
    cat_ = new CatImpl;
}

Cat::~Cat() {
    delete cat_;
}

void Cat::Purr() { cat_->Purr(); }

void Cat::CatImpl::Purr() {
    printf("purrrrrr");
}
I think most people refer to this as the Handle/Body idiom. See James Coplien's book Advanced C++ Programming Styles and Idioms. It's also known as the Cheshire Cat, because of Lewis Carroll's character that fades away until only the grin remains.
The example code should be distributed across two sets of source files. Then Cat.h is the only file that is shipped with the product.
CatImpl.h is included by Cat.cpp and CatImpl.cpp contains the implementation for CatImpl::Purr(). This won't be visible to the public using your product.
Basically the idea is to hide as much as possible of the implementation from prying eyes.
This is most useful where you have a commercial product that is shipped as a series of libraries that are accessed via an API that the customer's code is compiled against and linked to.
We did this with the rewrite of IONA's Orbix 3.3 product in 2000.
As mentioned by others, using this technique completely decouples the implementation from the interface of the object. Then you won't have to recompile everything that uses Cat if you just want to change the implementation of Purr().
This technique is used in a methodology called design by contract.
Because you want Purr() to be able to use private members of CatImpl. Cat::Purr() would not be allowed such an access without a friend declaration.
Because you then don't mix responsibilities: one class implements, one class forwards.
For what it's worth, it separates the implementation from the interface. This is usually not very important in small projects. But in large projects and libraries, it can be used to reduce build times significantly.
Consider that the implementation of Cat may include many headers and may involve template meta-programming, which takes time to compile on its own. Why should a user who just wants to use the Cat have to include all that? Hence, all the necessary files are hidden using the pimpl idiom (hence the forward declaration of CatImpl), and using the interface does not force the user to include them.
I'm developing a library for nonlinear optimization (read "lots of nasty math"), which is implemented in templates, so most of the code is in headers. It takes about five minutes to compile (on a decent multi-core CPU), and just parsing the headers in an otherwise empty .cpp takes about a minute. So anyone using the library has to wait a couple of minutes every time they compile their code, which makes the development quite tedious. However, by hiding the implementation and the headers, one just includes a simple interface file, which compiles instantly.
It does not necessarily have anything to do with protecting the implementation from being copied by other companies - which probably wouldn't happen anyway, unless the inner workings of your algorithm can be guessed from the definitions of the member variables (and if so, it is probably not very complicated and not worth protecting in the first place).
If your class uses the PIMPL idiom, you can avoid changing the header file on the public class.
This allows you to add/remove methods to the PIMPL class, without modifying the external class's header file. You can also add/remove #includes to the PIMPL too.
When you change the external class's header file, you have to recompile everything that #includes it (and if any of those are header files, you have to recompile everything that #includes them, and so on).
Typically, the only reference to a PIMPL class in the header for the owner class (Cat in this case) would be a forward declaration, as you have done here, because that can greatly reduce the dependencies.
For example, if your PIMPL class has ComplicatedClass as a member (and not just a pointer or reference to it), then you would need to have ComplicatedClass fully defined before its use. In practice, this means including the file "ComplicatedClass.h" (which will also indirectly include anything ComplicatedClass depends on). This can lead to a single header file pulling in lots and lots of stuff, which is bad for managing your dependencies (and your compile times).
When you use the PIMPL idiom, you only need to #include the stuff used in the public interface of your owner type (which would be Cat here). Which makes things better for people using your library, and means you don't need to worry about people depending on some internal part of your library - either by mistake, or because they want to do something you don't allow, so they #define private public before including your files.
If it's a simple class, there usually isn't any reason to use a PIMPL, but for times when the types are quite big, it can be a big help (especially in avoiding long build times).
Well, I wouldn't use it. I have a better alternative:
File foo.h
class Foo {
public:
virtual ~Foo() { }
virtual void someMethod() = 0;
// This "replaces" the constructor
static Foo *create();
};
File foo.cpp
namespace {
class FooImpl: virtual public Foo {
public:
void someMethod() {
//....
}
};
}
Foo *Foo::create() {
return new FooImpl;
}
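Usage would look something like this (a sketch; assumes the two files above):
#include <memory>

int main() {
    std::unique_ptr<Foo> f(Foo::create()); // the caller never sees FooImpl
    f->someMethod();
}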
Does this pattern have a name?
As someone who is also a Python and Java programmer, I like this a lot more than the PIMPL idiom.
Placing the call to the impl->Purr inside the .cpp file means that in the future you could do something completely different without having to change the header file.
Maybe next year they discover a helper method they could have called instead and so they can change the code to call that directly and not use impl->Purr at all. (Yes, they could achieve the same thing by updating the actual impl::Purr method as well, but in that case you are stuck with an extra function call that achieves nothing but calling the next function in turn.)
It also means the header contains only declarations and no implementation, which makes for a cleaner separation; that is the whole point of the idiom.
We use the PIMPL idiom in order to emulate aspect-oriented programming where pre, post and error aspects are called before and after the execution of a member function.
#include <iostream>

struct Omg {
    void purr() { std::cout << "purr\n"; }
};

struct Lol {
    Omg* omg;
    /*...*/ // pre(), post() and error() aspect hooks declared here
    void purr() { try { pre(); omg->purr(); post(); } catch (...) { error(); } }
};
We also use a pointer-to-base class to share different aspects between many classes.
The drawback of this approach is that the library user has to take into account all the aspects that are going to be executed, but only sees his/her class. It requires browsing the documentation for any side effects.
I just implemented my first PIMPL class over the last couple of days. I used it to eliminate the problems I was having with including winsock2.h in Borland Builder. It seemed to be screwing up struct alignment, and since I had socket things in the class's private data, those problems were spreading to any .cpp file that included the header.
By using PIMPL, winsock2.h was included in only one .cpp file where I could put a lid on the problem and not worry that it would come back to bite me.
To answer the original question, the advantage I found in forwarding the calls to the PIMPL class was that the PIMPL class is the same as what your original class would have been before you pimpl'd it, plus your implementations aren't spread over two classes in some weird fashion. It's much clearer to implement the public members to simply forward to the PIMPL class.
Like Mr Nodet said, one class, one responsibility.
I don't know if this is a difference worth mentioning but...
Would it be possible to have the implementation in its own namespace and have a public wrapper / library namespace for the code the user sees:
void catlib::Cat::Purr() { cat_->Purr(); }
void cat::Cat::Purr() {
    printf("purrrrrr");
}
This way all the library code can make use of the cat namespace, and as the need to expose a class to the user arises, a wrapper can be created in the catlib namespace.
I find it telling that, in spite of how well-known the PIMPL idiom is, I don't see it crop up very often in real life (e.g., in open source projects).
I often wonder if the "benefits" are overblown; yes, you can make some of your implementation details even more hidden, and yes, you can change your implementation without changing the header, but it's not obvious that these are big advantages in reality.
That is to say, it's not clear that there's any need for your implementation to be that well hidden, and perhaps it's quite rare that people really do change only the implementation; as soon as you need to add new methods, say, you need to change the header anyway.