Will using the same class name in multiple namespaces get me into trouble? I am also trying to remove the dependency on the math library. What do you think of the following design?
first file
#define MATH_RECTANGLE_EXISTS
namespace math {
class Rectangle : public Object2D {
public:
float perimeter();
float area();
float x,y,w,h;
};
}
other file
#define GRAPHIC_RECTANGLE_EXISTS
#ifndef MATH_RECTANGLE_EXISTS
//is this a good idea to remove dependency?
namespace math {
class Rectangle {
public:
float x,y,w,h;
};
}
#endif
namespace graphics {
class Rectangle : public math::Rectangle {
public:
void Draw(Canvas &canvas);
void Translate(float x, float y);
};
}
EDIT
What about this approach to removing the dependency?
1st file
namespace common {
class Rectangle {
public:
float x,y,w,h;
};
}
math lib file
#define MATH_RECTANGLE_EXISTS
namespace math {
class Rectangle : public common::Rectangle, public Object2D {
public:
float perimeter();
float area();
};
}
graphic file
#define GRAPHIC_RECTANGLE_EXISTS
namespace graphics {
#ifdef MATH_RECTANGLE_EXISTS
class Rectangle : public math::Rectangle {
#else
class Rectangle : public common::Rectangle {
#endif
public:
void Draw(Canvas &canvas);
void Translate(float x, float y);
};
}
Thanks in advance.
I don't see a problem with reusing the same identifier in different namespaces; that is exactly what they were created for, after all.
However, I would strongly urge you NOT to 'simulate' the inclusion of math::Rectangle. If you need the file, include it. What you are doing is called copy/paste programming, and it leads to a good number of problems, essentially because your two pieces of code are not synchronized, so any bug fix or feature addition to one is not reflected in the other.
EDIT: answer to the Edit ;)
It is not clear from the comments so I will state it:
If you need the dependency (because you really USE the functionality offered), then you HAVE to include the header. On the other hand, if you only use inheritance to get something that has 4 corners and nearly no methods, then you're better off rolling a new Rectangle class with the minimum functionality.
I can think of one edge case, though. I am under the impression that you are not so much interested in the functionality itself as in the possibility of reusing the methods in the Math library that have been tailored to take a math::Rectangle as a parameter.
According to Herb Sutter (in C++ Coding Standards, I think), the free functions that are bundled with a class are part of the class's public interface. So if you want those functions, you actually need the inheritance.
Now, I can understand that you may be reluctant to include a library that may be huge (I don't know your Math library). In this case you could consider splitting the Math library in two:
A MathShapes library, comprising the basic shapes and the methods that act upon them
A Math library, which includes MathShapes and add all the other stuff
This way you would only depend on the MathShapes library.
On the other hand, if you absolutely do not want the dependency, then a blunt copy/paste will do, but your solution of testing for the presence of Math::Rectangle via its header guard is ill-suited:
It only works if you get the header guard exactly right
AND only if that include is actually performed BEFORE the include of Graphics::Rectangle
Note that in the case where Graphics::Rectangle is included before Math::Rectangle you may get compilation errors...
So make up your mind on whether or not you want the dependency.
That is rather what namespaces are for, and a rectangle is both a mathematical and a graphical object.
The attempt to avoid including a header file however is very ill-advised. It achieves nothing other than a maintenance headache. A change in math::Rectangle should cause a rebuild of graphics::Rectangle - if they end up in a mismatch and you hide that from the compiler, you'll end up with a harder to debug run-time error.
I'm trying to make class functions I can tack on to other classes, like with nested classes. I'm still fairly new to C++, so I may not actually be trying to use nested classes, but to the best of my knowledge that's where I'm at.
Now, I've just written this in Chrome, so it has no real use, but I wanted to keep the code short.
I'm compiling on Windows 7, using Visual Studio 2015.
I have two classes in file_1.h:
#pragma once
#include "file_2.h"
class magic_beans{
public:
magic_beans();
~magic_beans();
int getTotal();
private:
double total[2]; //they have magic fractions
};
class magic_box{
public:
magic_box(); //initiate
~magic_box(); //make sure all objects have been executed
void update();
magic_beans beans; //works fine
magic_apples apples; //does not work
private:
int true_rand; //because it's magic
};
... And I have one class in file_2.h:
#pragma once
#include "file_1.h"
class magic_apples{
public:
magic_apples();
~magic_apples();
int getTotal();
private:
double total[2];
};
Now, I've found that I can simply change:
magic_apples apples;
To:
class magic_apples *apples;
And in my constructor I add:
apples = new magic_apples;
And in my destructor, before you ask:
delete apples;
Why must I refer to a class defined in an external file using pointers, whereas one locally defined is fine?
Ideally I would like to be able to define magic_apples the same way I can define magic_beans. I'm not against using pointers but to keep my code fairly uniform I'm interested in finding an alternative definition method.
I have tried a few alternative defines of magic_apples within my magic_box class in file_1.h but I have been unable to get anything else to work.
You have a circular dependency: file_1.h depends on file_2.h, which depends on file_1.h, and so on. No amount of header include guards or pragmas can solve that problem.
There are two ways of solving the problem. One is to use forward declarations and pointers; pointers solve it because with a pointer you don't need a complete type.
The other way to solve it is to break the circular dependency. Looking at the structures you show, it seems magic_apples doesn't need the magic_beans type, so you can break the circle by simply not including file_1.h. file_2.h should then look like
#pragma once
// Note no include file here!
class magic_apples{
public:
magic_apples();
~magic_apples();
int getTotal();
private:
double total[2];
};
Introduction:
I come from a mechanical engineering background, but took a class in embedded software programming (on a lovely little robot) with the intention of improving some programming skills I already had. However, the class was largely unsatisfactory in what I hoped to achieve (basically, it taught the basics of C++ with some very superficial composition patterns).
Question: We were told to make our code somewhat object oriented by defining classes for the various parts of the code. Since all the parts were heavily dependent on each other, the general structure looked as follows (basically a Drive, a Sensors and a WorldModel class with some dependencies, and a Director class trying to make our robot solve the task at hand):
class Drive{
void update();
Drive(Sensors & sensors);
private:
Sensors & sensors;
};
class Sensors{
void update();
};
class WorldModel {
void update();
WorldModel(Sensors & sensors, Drive & drive);
private:
Sensors & sensors;
Drive & drive;
};
class Director {
void update();
Director(Sensors & sensors, Drive & drive, WorldModel & worldmodel);
private:
Sensors & sensors;
Drive & drive;
WorldModel & worldmodel;
};
This is actually an extremely condensed version. It seems to me, however, that this is not really object-oriented code so much as Clumsily Split-Up Code™. In particular, it seemed almost impossible to make e.g. the Sensors class get data from the Drive class without some fudging around in the Director class (i.e., first call a function on the Drive class to get the velocity setpoint, and then provide that to the update() method of the Sensors class to do some Kalman filtering).
How does one create a project in C++ whose parts depend heavily on each other, without this becoming a problem? I read an SO answer on interfaces, but I'm not sure how to apply it to this problem - is that even the way to go here? Is there a design pattern (not necessarily an object-oriented one) suitable for projects such as this one?
No, there's not a design pattern for projects "like this".
Design patterns are not the goal.
So, let me put a few guesses straight:
you want lightweight code (because otherwise you'd be using Java, right?)
you want maintainable code (because otherwise, spaghetti would be fine)
you want idiomatic code
Here's what I'd do:
declare classes in separate headers
use forward declarations to reduce header coupling
move implementations into the corresponding source files
keep unwanted implementation dependencies out of the header file. Optionally use the Pimpl Idiom here.
e.g. if you use library X to implement Y::frobnicate, don't include libX.h in Y.h; instead, include it in Y.cpp only.
If you find that you need a class member declaration that would require libX.h in the header, use the Pimpl Idiom.
I don't know what else you could want here :)
Maybe, if you need "interfaces", consider using template composition: policy, strategy, and state patterns. E.g. instead of
#include <set>
struct ISensors {
virtual int get(int id) const = 0;
virtual int set(int id, int newval) const = 0;
virtual std::set<int> sensors() const = 0;
};
class Drive {
void update();
Drive(ISensors &sensors);
private:
ISensors &sensors;
};
You could consider
template <typename Sensors>
class Drive {
void update();
Drive(Sensors &sensors);
private:
Sensors &sensors;
};
Which leaves you free to implement Sensors in any which way that statically compiles. The "limitation" is that the injection of dependencies needs to be statically defined/typed. The benefit is ultimate flexibility and zero-overhead: e.g. you couldn't have virtual member function templates, but you can use this as a Sensors policy:
struct TestSensors {
int get(int) { return 9; }
int set(int, int) { return -9; }
template<typename OutputIterator>
OutputIterator sensors(OutputIterator out) const {
int available[] = { 7, 8, 13, 21 };
return std::copy(std::begin(available), std::end(available), out);
}
};
using TestDrive = Drive<TestSensors>;
I am working on a little game engine but I am stuck on something. Explanation: I have two classes, cEntity and ObjectFactory:
cEntity
class cEntity:public cEntityProperty
{
Vector2 position;
Vector2 scale;
public:
cEntity(void);
cEntity(const cEntity&);
~cEntity(void);
public:
void init();
void render();
void update();
void release();
};
ObjectFactory
#include "cEntity.h"
#include <vector>
class ObjectFactory
{
static std::vector<cEntity> *entityList;
static int i, j;
public:
static void addEntity(cEntity entity) {
entityList->push_back(entity);
}
private:
ObjectFactory(void);
~ObjectFactory(void);
};
std::vector<cEntity> *ObjectFactory::entityList = new std::vector<cEntity>();
Now I am adding a new cEntity to the ObjectFactory in the cEntity constructor, but I am facing an error related to circular references: to use ObjectFactory::addEntity() I need to include ObjectFactory.h in the cEntity class, but that creates a circular reference.
I think your code might have an underlying architectural issue given how you have described the problem.
Your ObjectFactory should be handling the cEntities, which in turn should be unaware of the "level above". From your description of the problem, it sounds like you're not sure which class is in charge of which job.
Your cEntitys should expose an interface (i.e. all the stuff marked "public" in a class) that other bits of code interact with. Your ObjectFactory (which is a bit badly named if doing this job, but whatever) should in turn use that interface. The cEntitys shouldn't care who is using the interface: they have one job to do, and they do it. The ObjectFactory should have one job to do that requires it to keep a list of cEntitys around. You don't edit std::string when you use it elsewhere: why is your class any different?
That being said, there are two parts to resolving circular dependencies (beyond "Don't create code that has circular dependencies in the first place" - see the first part of this answer; that's the best way to avoid this sort of problem, in my opinion).
1) Include guards. Do something like this to each header (.h) file:
#ifndef CENTITY_H
#define CENTITY_H
class cEntity:public cEntityProperty
{
Vector2 position;
Vector2 scale;
public:
cEntity(void);
cEntity(const cEntity&);
~cEntity(void);
public:
void init();
void render();
void update();
void release();
};
#endif
What this does:
The first time your file is included, CENTITY_H is not defined. The #ifndef check is therefore true, so the preprocessor moves to the next line (defining CENTITY_H) and then on to the rest of your header.
The second time (and all subsequent times), CENTITY_H is defined, so the #ifndef check skips straight to the #endif, skipping your header. Consequently, your header code only ever ends up in your compiled program once. If you want more details, try looking up how the preprocessor handles includes.
2) Forward-declaration of your classes.
If ClassA needs a member of type ClassB, and ClassB needs a member of type ClassA, you have a problem: neither class knows how much memory it needs to be allocated, because each depends on the other containing itself.
The solution is to have a pointer to the other class. Pointers have a fixed size known to the compiler, so there is no problem. We do, however, need to tell the compiler not to worry too much if it runs into a symbol (class name) that hasn't been defined yet, so we just add class Whatever; before we start using it.
In your case, change cEntity instances to pointers, and forward-declare the class at the start. You are now able to freely use ObjectFactory in cEntity.
#include "cEntity.h"
#include <vector>
class cEntity; // Compiler knows that we'll totally define this later, if we haven't already
class ObjectFactory
{
static std::vector<cEntity*> *entityList; // vector of pointers
static int i, j;
public:
static void addEntity(cEntity* entity) {
entityList->push_back(entity);
}
// Note: it would NOT be valid to take a cEntity by value and push its
// address (entityList->push_back(&entity) inside addEntity(cEntity entity)):
// that pointer would dangle as soon as addEntity returned.
// Function arguments don't matter when the class is trying to work out how big it is in memory
private:
ObjectFactory(void);
~ObjectFactory(void);
};
std::vector<cEntity*> *ObjectFactory::entityList = new std::vector<cEntity*>();
I'm currently working on a little game engine project in C++ using DirectX for rendering. The rendering part of the engine consists of classes such as Model and Texture. Because I would like to keep it (relatively) simple to switch to another rendering library (e.g. OpenGL) (and because I suppose it's just good encapsulation), I would like to keep the public interfaces of these classes completely devoid of any references to DirectX types, i.e. I would like to avoid providing public functions such as ID3D11ShaderResourceView* GetTextureHandle();.
This becomes a problem, however, when a class such as Model requires the internal texture handle used by Texture to carry out its tasks - for instance when actually rendering the model. For simplicity's sake, let's replace DirectX with an arbitrary 3D rendering library that we'll call Lib3D. Here is an example demonstrating the issue I'm facing:
class Texture {
private:
Lib3DTexture mTexture;
public:
Texture(std::string pFileName)
: mTexture(pFileName)
{
}
};
class Model {
private:
Texture* mTexture;
Lib3DModel mModel;
public:
Model(std::string pFileName, Texture* pTexture)
: mTexture(pTexture), mModel(pFileName)
{
}
void Render()
{
mModel.RenderWithTexture( /* how do I get the Lib3DTexture member from Texture? */ );
}
};
Of course, I could provide a public GetTextureHandle function in Texture that simply returns a pointer to mTexture, but this would mean that if I change the underlying rendering library, I would also have to change the type returned by that function, thus changing the public interface of Texture. Worse yet, maybe the new library isn't even structured the same way, meaning I'd have to provide entirely new functions!
The best solution I can think of is making Model a friend of Texture so that it can access Texture's members directly. This seems slightly unwieldy, however, as I add more classes that make use of Texture. I have never used friendship much at all, so I'm not sure if this is even an acceptable usage case.
So, my questions are:
Is declaring Model a friend of Texture an acceptable use of friendship? Would it be a good solution?
If not, what would you recommend? Do I need to redesign my class structure completely? In that case, any tips?
PS: I realize that the title is not very descriptive and I apologize for that, but I didn't really know how to put it.
Whether this is an acceptable use of friendship is debatable. With every feature you use, even a good one, you risk anti-patterns forming in your code. So use friendship in moderation and watch for anti-patterns.
While you can use friendship, you can also simply use inheritance, i.e. IGLTexture : ITexture, and cast to the appropriate interface wherever implementation detail needs to be accessed. For instance, IGLTexture could expose everything OpenGL-related.
And there is yet another paradigm that could be used: pimpl, which stands for "private implementation". In short, rather than exposing implementation details in the class, you supply all implementation detail in a class whose definition is not publicly specified. I've been using this approach myself with few regrets.
//header
class Texture
{
int width, height, depth;
struct Impl;
char reserved[32];
Impl *impl;
Texture();
...
};
//cpp
struct Texture::Impl
{
union
{
int myopenglhandle;
void* mydirectxpointer;
};
};
Texture::Texture()
{
impl = new (reserved) Impl();
}
You need to abstract this mo-fo.
class TextureBase{
public:
virtual Pixels* getTexture() = 0;
virtual ~TextureBase(){}
};
class Lib3DTextureImpl: public TextureBase {
private:
Lib3DTexture mTexture;
public:
Lib3DTextureImpl(std::string pFileName)
: mTexture(pFileName)
{
}
Pixels* getTexture(){ return mTexture.pixels(); }
};
class Renderable{
public:
virtual void render()const = 0;
virtual ~Renderable(){}
};
class ModelBase: public Renderable{
public:
virtual ModelData* getModelData() = 0;
virtual ~ModelBase(){}
};
class Lib3DModel : public ModelBase{
private:
TextureBase* texture;
Lib3DModelData model; // underlying library model type
public:
Lib3DModel(std::string pFileName, TextureBase* pTexture): texture(pTexture), model(pFileName){}
void render()const{
model.renderWithTexture( texture->getTexture() );
}
};
class World: public Renderable{
private:
std::vector< std::shared_ptr<Renderable> > visibleData;
public:
void render()const{
std::for_each(visibleData.begin(), visibleData.end(), std::mem_fn(&Renderable::render));
}
};
You get the idea - not guaranteeing it compiles, but it should give you an idea. Also check out user2384250's comment, a good idea as well.
Make Texture a template with a default template parameter using DirectX.
So you can do this:
template<typename UnderlyingType = Lib3DTexture> class Texture {
private:
UnderlyingType mTexture;
public:
Texture(std::string pFileName)
: mTexture(pFileName)
{
}
UnderlyingType UnderlyingTexture(); //returns UnderlyingType, no matter which library you use
};
I think this could be a clean way of solving that problem, and easily allowing the switching out of underlying libraries.
Since the 2 APIs are mutually exclusive, and since you probably don't need to switch between the 2 at runtime, I think you should aim at building 2 different executables, one for each underlying API.
By that I mean use:
#if OpenGL_implementation
...
#else // DirectX
...
#endif
This may or may not be the sexy solution you were looking for. But I believe this is the cleaner and simpler solution. Going with heavy template use (resp. heavy polymorphic behaviour) will probably cause even more code bloat than an #if solution and it will also compile (resp. run) slower as well. :)
In other words, if you can afford to have the 2 behaviours you want in 2 different executables, you should not allow this to have an impact on your software architecture. Just build 2 sexy, twin software solutions instead of 1 fat one. :)
In my experience, using C++ inheritance for these sorts of problems often ends up in a quite complex and unmaintainable project.
There are basically two solutions:
Abstract all data types, making them not depend on the rendering layer at all. You will have to copy some data structures from rendering layer, but you only need to replace rendering code.
Choose a portable render layer (OpenGL) and stick to it.
I have some classes implementing some computations which I have to optimize for different SIMD implementations, e.g. Altivec and SSE. I don't want to pollute the code with #ifdef ... #endif blocks for each method I have to optimize, so I tried a couple of other approaches, but unfortunately I'm not very satisfied with how they turned out, for reasons I'll try to clarify. So I'm looking for some advice on how I could improve what I have already done.
1. Different implementation files with crude includes
I have the same header file describing the class interface, with different "pseudo" implementation files for plain C++, Altivec and SSE, containing only the relevant methods:
// Algo.h
#ifndef ALGO_H_INCLUDED_
#define ALGO_H_INCLUDED_
class Algo
{
public:
Algo();
~Algo();
void process();
protected:
void computeSome();
void computeMore();
};
#endif
// Algo.cpp
#include "Algo.h"
Algo::Algo() { }
Algo::~Algo() { }
void Algo::process()
{
computeSome();
computeMore();
}
#if defined(ALTIVEC)
#include "Algo_Altivec.cpp"
#elif defined(SSE)
#include "Algo_SSE.cpp"
#else
#include "Algo_Scalar.cpp"
#endif
// Algo_Altivec.cpp
void Algo::computeSome()
{
}
void Algo::computeMore()
{
}
... same for the other implementation files
Pros:
the split is quite straightforward and easy to do
there is no "overhead"(don't know how to say it better) to objects of my class
by which I mean no extra inheritance, no addition of member variables etc.
much cleaner than #ifdef-ing all over the place
Cons:
I have three additional files to maintain; I could put the Scalar implementation in the Algo.cpp file, though, and end up with just two, but then the inclusion part would look and feel a bit dirtier
they are not compilable units per se and have to be excluded from the project structure
if I do not yet have the specific optimized implementation for, let's say, SSE, I have to duplicate some code from the plain (Scalar) C++ implementation file
I cannot fall back to the plain C++ implementation if needed - is it even possible to do that in the described scenario?
I do not feel any structural cohesion in the approach
2. Different implementation files with private inheritance
// Algo.h
class Algo : private AlgoImpl
{
... as before
};
// AlgoImpl.h
#ifndef ALGOIMPL_H_INCLUDED_
#define ALGOIMPL_H_INCLUDED_
class AlgoImpl
{
protected:
AlgoImpl();
~AlgoImpl();
void computeSomeImpl();
void computeMoreImpl();
};
#endif
// Algo.cpp
...
void Algo::computeSome()
{
computeSomeImpl();
}
void Algo::computeMore()
{
computeMoreImpl();
}
// Algo_SSE.cpp
AlgoImpl::AlgoImpl()
{
}
AlgoImpl::~AlgoImpl()
{
}
void AlgoImpl::computeSomeImpl()
{
}
void AlgoImpl::computeMoreImpl()
{
}
Pros:
the split is quite straightforward and easy to do
much cleaner than #ifdef-ing all over the place
still there is no "overhead" to my class - EBCO should kick in
the semantics of the class are much cleaner, at least compared to the above,
since private inheritance == "is implemented in terms of"
the different files are compilable, can be included in the project
and selected via the build system
Cons:
I have three additional files for maintenance
if I do not yet have the specific optimized implementation for, let's say, SSE, I have to duplicate some code from the plain (Scalar) C++ implementation file
I cannot fall back to the plain C++ implementation if needed
3. Basically method 2, but with virtual functions in the AlgoImpl class. That would allow me to overcome the duplicate implementation of plain C++ code, if needed, by providing an empty implementation in the base class and overriding it in the derived class, although I would have to disable that behavior once I actually implement the optimized version. Also, the virtual functions would bring some "overhead" to objects of my class.
4. A form of tag dispatching via enable_if<>
Pros:
the split is quite straightforward and easy to do
much cleaner than #ifdef-ing all over the place
still there is no "overhead" to my class
will eliminate the need for different files for different implementations
Cons:
templates will be a bit more "cryptic" and seem to bring unnecessary overhead (at least for some people in some contexts)
if I do not yet have the specific optimized implementation for, let's say, SSE, I have to duplicate some code from the plain (Scalar) C++ implementation
I cannot fall back to the plain C++ implementation if needed
What I couldn't figure out yet, for any of the variants, is how to properly and cleanly fall back to the plain C++ implementation.
Also, I don't want to over-engineer things, and in that respect the first variant seems the most "KISS"-like, even considering its disadvantages.
You could use a policy based approach with templates kind of like the way the standard library does for allocators, comparators and the like. Each implementation has a policy class which defines computeSome() and computeMore(). Your Algo class takes a policy as a parameter and defers to its implementation.
template <class policy_t>
class algo_with_policy_t {
policy_t policy_;
public:
algo_with_policy_t() { }
~algo_with_policy_t() { }
void process()
{
policy_.computeSome();
policy_.computeMore();
}
};
struct altivec_policy_t {
void computeSome();
void computeMore();
};
struct sse_policy_t {
void computeSome();
void computeMore();
};
struct scalar_policy_t {
void computeSome();
void computeMore();
};
// let user select exact implementation
typedef algo_with_policy_t<altivec_policy_t> algo_altivec_t;
typedef algo_with_policy_t<sse_policy_t> algo_sse_t;
typedef algo_with_policy_t<scalar_policy_t> algo_scalar_t;
// let user have default implementation
typedef
#if defined(ALTIVEC)
algo_altivec_t
#elif defined(SSE)
algo_sse_t
#else
algo_scalar_t
#endif
algo_default_t;
This lets you have all the different implementations defined within the same file (like solution 1) and compiled into the same program (unlike solution 1). It has no performance overheads (unlike virtual functions). You can either select the implementation at run time or get a default implementation chosen by the compile time configuration.
template <class algo_t>
void use_algo(algo_t algo)
{
algo.process();
}
void select_algo(bool use_scalar)
{
if (!use_scalar) {
use_algo(algo_default_t());
} else {
use_algo(algo_scalar_t());
}
}
As requested in the comments, here's a summary of what I did:
Set up policy_list helper template utility
This maintains a list of policies and gives each a "runtime check" call before calling the first suitable implementation
#include <cassert>
template <typename P, typename N=void>
struct policy_list {
static void apply() {
if (P::runtime_check()) {
P::impl();
}
else {
N::apply();
}
}
};
template <typename P>
struct policy_list<P,void> {
static void apply() {
assert(P::runtime_check());
P::impl();
}
};
Set up specific policies
These policies implement both a runtime test and an actual implementation of the algorithm in question. For my actual problem, impl took another template parameter specifying exactly what it was they were implementing; here, though, the example assumes there is only one thing to be implemented. The runtime tests are cached in a static bool because for some (e.g. the Altivec one I used) the test was really slow. For others (e.g. the OpenCL one) the test is actually "is this function pointer NULL?" after one attempt at setting it with dlsym().
#include <iostream>
// runtime SSE detection (That's another question!)
extern bool have_sse();
struct sse_policy {
static void impl() {
std::cout << "SSE" << std::endl;
}
static bool runtime_check() {
static bool result = have_sse();
// have_sse lives in another TU and does some cpuid asm stuff
return result;
}
};
// Runtime OpenCL detection
extern bool have_opencl();
struct opencl_policy {
static void impl() {
std::cout << "OpenCL" << std::endl;
}
static bool runtime_check() {
static bool result = have_opencl();
// have_opencl lives in another TU and does some LoadLibrary or dlopen()
return result;
}
};
struct basic_policy {
static void impl() {
std::cout << "Standard C++ policy" << std::endl;
}
static bool runtime_check() { return true; } // All implementations do this
};
Set per architecture policy_list
This trivial example sets one of two possible lists based on the ARCH_HAS_SSE preprocessor macro. You might generate this from your build script, use a series of typedefs, or hack in support for "holes" in the policy_list that might be void on some architectures, skipping straight to the next one without trying to check for support. GCC sets some preprocessor macros for you that might help, e.g. __SSE2__.
#ifdef ARCH_HAS_SSE
typedef policy_list<opencl_policy,
policy_list<sse_policy,
policy_list<basic_policy
> > > active_policy;
#else
typedef policy_list<opencl_policy,
policy_list<basic_policy
> > active_policy;
#endif
You can use this to compile multiple variants on the same platform too, e.g. an SSE and a no-SSE binary on x86.
Use the policy list
Fairly straightforward: call the apply() static method on the policy_list and trust that it will call the impl() method of the first policy that passes its runtime test.
int main() {
active_policy::apply();
}
If you take the "per operation template" approach I mentioned earlier it might be something more like:
int main() {
Matrix m1, m2;
Vector v1;
active_policy::apply<matrix_mult_t>(m1, m2);
active_policy::apply<vector_mult_t>(m1, v1);
}
In that case you end up making your Matrix and Vector types aware of the policy_list in order that they can decide how/where to store the data. You can also use heuristics for this too, e.g. "small vector/matrix lives in main memory no matter what" and make the runtime_check() or another function test the appropriateness of a particular approach to a given implementation for a specific instance.
I also had a custom allocator for containers, which produced suitably aligned memory always on any SSE/Altivec enabled build, regardless of if the specific machine had support for Altivec. It was just easier that way, although it could be a typedef in a given policy and you always assume that the highest priority policy has the strictest allocator needs.
Example have_altivec():
I've included a sample have_altivec() implementation for completeness, simply because it's the shortest and therefore the most appropriate for posting here. The x86/x86_64 CPUID one is messy because you have to support compiler-specific ways of writing inline asm. The OpenCL one is messy because we also check some of the implementation limits and extensions.
#include <csetjmp>       // setjmp, longjmp
#include <csignal>       // signal, sig_atomic_t, SIGILL
#ifdef __APPLE__
#include <sys/sysctl.h>  // sysctl, CTL_HW, HW_VECTORUNIT
#endif

#if HAVE_SETJMP_H && !(defined(__APPLE__) && defined(__MACH__))
jmp_buf jmpbuf;
void illegal_instruction(int sig) {
    // Bad in general - https://www.securecoding.cert.org/confluence/display/seccode/SIG32-C.+Do+not+call+longjmp%28%29+from+inside+a+signal+handler
    // But actually Ok on this platform in this scenario
    longjmp(jmpbuf, 1);
}
#endif

bool have_altivec()
{
    volatile sig_atomic_t altivec = 0;
#ifdef __APPLE__
    int selectors[2] = { CTL_HW, HW_VECTORUNIT };
    int hasVectorUnit = 0;
    size_t length = sizeof(hasVectorUnit);
    int error = sysctl(selectors, 2, &hasVectorUnit, &length, NULL, 0);
    if (0 == error)
        altivec = (hasVectorUnit != 0);
#elif HAVE_SETJMP_H
    void (*handler) (int sig);
    handler = signal(SIGILL, illegal_instruction);
    if (setjmp(jmpbuf) == 0) {
        // Try an Altivec instruction; if the CPU lacks the unit this
        // raises SIGILL and we longjmp back with altivec still 0.
        asm volatile ("mtspr 256, %0\n\t" "vand %%v0, %%v0, %%v0"::"r" (-1));
        altivec = 1;
    }
    signal(SIGILL, handler);
#endif
    return altivec;
}
Conclusion
Basically you pay no penalty for platforms that can never support an implementation (the compiler generates no code for them), and only a small penalty for platforms that could support something but don't (potentially just a test/jmp pair that the CPU will predict very well, if your compiler is half-decent at optimising). You pay no extra cost on platforms where the first-choice implementation runs. The details of the runtime tests vary with the technology in question.
If the virtual function overhead is acceptable, option 3 plus a few ifdefs seems a good compromise IMO. There are two variations you could consider: one with an abstract base class, and one with the plain C++ implementation as the base class.
Having the C++ implementation as the base class lets you gradually add the vector-optimized versions, falling back on the non-vectorized versions as you please; using an abstract interface would be a little cleaner to read.
Also, having separate plain C++ and vectorized versions of your class lets you easily write unit tests that:
Ensure that the vectorized code gives the right result (it is easy to mess this up, and vector floating-point registers can have different precision than the FPU, causing different results)
Compare the performance of the plain C++ version against the vectorized one. It's often good to make sure the vectorized code is actually doing you any good. Compilers can generate very tight C++ code that sometimes does as well as or better than vectorized code.
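To make the first point concrete, here is a hedged sketch of such a correctness test. The vectorized kernel is simulated with 4-wide partial sums (the accumulation pattern an SSE implementation typically uses); the names are mine, not from the question. Reassociating the sum can change the rounding, which is exactly why the test compares with a tolerance rather than exact equality:

```cpp
#include <cmath>
#include <cstddef>

// Plain scalar reference implementation.
float dot_scalar(const float* a, const float* b, std::size_t n) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i) sum += a[i] * b[i];
    return sum;
}

// Stand-in for a SIMD kernel: four parallel accumulators, the way a
// 4-lane SSE implementation would accumulate, then a horizontal add.
float dot_vectorized(const float* a, const float* b, std::size_t n) {
    float acc[4] = {0.0f, 0.0f, 0.0f, 0.0f};
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)
        for (int lane = 0; lane < 4; ++lane)
            acc[lane] += a[i + lane] * b[i + lane];
    float sum = (acc[0] + acc[1]) + (acc[2] + acc[3]);
    for (; i < n; ++i) sum += a[i] * b[i];  // scalar tail
    return sum;
}

// The kind of check the unit test makes: equal within a relative
// tolerance, never bitwise equality.
bool results_match(float reference, float optimized, float tol = 1e-4f) {
    return std::fabs(reference - optimized) <= tol * std::fabs(reference);
}
```

The same pair of functions is a natural place to hang the performance comparison from the second point as well.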
Here's one with the plain-c++ implementations as the base class. Adding an abstract interface would just add a common base class to all three of these:
// Algo.h:
class Algo_Impl // Default plain C++ implementation
{
public:
    virtual ~Algo_Impl();
    virtual void ComputeSome();
    virtual void ComputeSomeMore();
    ...
};

// Algo_SSE.h:
class Algo_Impl_SSE : public Algo_Impl // SSE implementation
{
public:
    virtual void ComputeSome();
    virtual void ComputeSomeMore();
    ...
};

// Algo_Altivec.h:
class Algo_Impl_Altivec : public Algo_Impl // Altivec implementation
{
public:
    virtual void ComputeSome();
    virtual void ComputeSomeMore();
    ...
};

// Client.cpp:
Algo_Impl *myAlgo = 0;
#if defined(SSE)
myAlgo = new Algo_Impl_SSE;
#elif defined(ALTIVEC)
myAlgo = new Algo_Impl_Altivec;
#else
myAlgo = new Algo_Impl; // default plain C++ implementation
#endif
...
You may consider employing the adapter pattern. There are a few types of adapters, and it's quite an extensible concept. Here is an interesting article, Structural Patterns: Adapter and Façade,
that discusses a matter very similar to the one in your question - the Accelerate framework as an example of the adapter pattern.
I think it is a good idea to discuss a solution at the level of design patterns without focusing on implementation details like the C++ language. Once you decide that the adapter is the right solution for you, you can look for variants specific to your implementation. For example, in the C++ world there is a known adapter variant called the generic adapter pattern.
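To sketch the adapter idea in this context (all names here are hypothetical, and the "vendor" routine is a stand-in for something like an Accelerate framework call): the rest of the code is written against one interface, and a thin adapter translates that interface into the platform library's incompatible signature.

```cpp
#include <cstddef>

// Interface the rest of the code is written against.
class VectorMath {
public:
    virtual ~VectorMath() = default;
    virtual void scale(float* data, std::size_t n, float factor) = 0;
};

// Hypothetical vendor routine with a C-style signature that doesn't match
// our interface (argument order, int count instead of size_t, etc.).
inline void vendor_vsmul(const float* in, float s, float* out, int count) {
    for (int i = 0; i < count; ++i) out[i] = in[i] * s;
}

// Adapter: implements the expected interface by delegating to the vendor
// call, absorbing the signature mismatch in one place.
class VendorVectorMath : public VectorMath {
public:
    void scale(float* data, std::size_t n, float factor) override {
        vendor_vsmul(data, factor, data, static_cast<int>(n));
    }
};
```

Swapping platforms then means swapping which adapter is instantiated, without touching client code.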
This isn't really a whole answer: just a variant on one of your existing options. In option 1 you've assumed that you #include algo_altivec.cpp &c. into algo.cpp, but you don't have to do this. You could omit algo.cpp entirely and have your build system decide which of algo_altivec.cpp, algo_sse.cpp, &c. to build. You'd have to do something like this anyway whichever option you use, since each platform can't compile every implementation. My suggestion is only this: whichever option you choose, instead of scattering #if ALTIVEC_ENABLED through the source (with ALTIVEC_ENABLED set from the build system), have the build system decide directly whether to compile algo_altivec.cpp.
This is a bit trickier to achieve in MSVC than in make, scons, &c., but still possible. It's commonplace to switch in a whole directory rather than individual source files; that is, instead of algo_altivec.cpp and friends, you'd have platform/altivec/algo.cpp, platform/sse/algo.cpp, and so on. This way, when you have a second algorithm that needs platform-specific implementations, you can just add the extra source file to each directory.
Although my suggestion is mainly intended as a variant of option 1, you can combine it with any of your options, to let you decide both in the build system and at runtime which options to offer. In that case, though, you'll probably need implementation-specific header files too.
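As a rough sketch of the per-directory selection in a plain Makefile (the directory layout and the PLATFORM variable are assumptions for illustration; a configure step or the command line would set PLATFORM):

```make
# Pick one implementation directory; everything else is platform-agnostic.
# Usage, e.g.:  make PLATFORM=altivec
PLATFORM ?= scalar
VPATH    := platform/$(PLATFORM)

# algo.cpp is resolved via VPATH to platform/$(PLATFORM)/algo.cpp
OBJS := main.o algo.o

app: $(OBJS)
	$(CXX) -o $@ $^

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@
```

The same idea maps onto scons or an MSVC project configuration: one configuration per platform directory, with no #if in the sources.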
In order to hide the implementation details, you may just use an abstract interface with a static creator and provide three implementation classes:
// --------------------- Algo.h ---------------------
#pragma once
typedef boost::shared_ptr<class Algo> AlgoPtr;

class Algo
{
public:
    static AlgoPtr Create(std::string type);
    virtual ~Algo();
    void Process();
protected:
    virtual void computeSome() = 0;
    virtual void computeMore() = 0;
};

// --------------------- Algo.cpp ---------------------
class PlainAlgo : public Algo { ... };
class AltivecAlgo : public Algo { ... };
class SSEAlgo : public Algo { ... };

AlgoPtr Algo::Create(std::string type) { /* factory implementation */ }
Please note that since the PlainAlgo, AltivecAlgo and SSEAlgo classes are defined in Algo.cpp, they are only visible from this compilation unit, and the implementation details are therefore hidden from the outside world.
Here is how one can use your class then:
AlgoPtr algo = Algo::Create("SSE");
algo->Process();
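One plausible shape for the elided factory body is sketched below. This is an assumption, not the answer's actual code: it uses std::unique_ptr instead of boost::shared_ptr for self-containment, compiles out implementations the target can never run, and falls back to the plain version when a requested implementation is unavailable.

```cpp
#include <memory>
#include <stdexcept>
#include <string>

// Minimal stand-ins for the classes the answer declares in Algo.cpp.
class Algo {
public:
    virtual ~Algo() = default;
    virtual std::string name() const = 0;
    static std::unique_ptr<Algo> Create(const std::string& type);
};

class PlainAlgo : public Algo {
    std::string name() const override { return "plain"; }
};
class SSEAlgo : public Algo {
    std::string name() const override { return "sse"; }
};

// Factory: implementations the build can never use are compiled out;
// anything else falls back to the plain C++ version.
std::unique_ptr<Algo> Algo::Create(const std::string& type) {
#ifdef SSE
    if (type == "SSE") return std::make_unique<SSEAlgo>();
#endif
    if (type == "plain" || type == "SSE")
        return std::make_unique<PlainAlgo>();  // graceful fallback
    throw std::invalid_argument("unknown algo type: " + type);
}
```

A runtime check such as the have_altivec() function shown earlier could equally drive the selection instead of (or in addition to) the compile-time #ifdef.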
It seems to me that your first strategy, with separate C++ files and #including the specific implementation, is the simplest and cleanest. I would only add some comments to your Algo.cpp indicating which methods are in the #included files.
e.g.
// Algo.cpp
#include "Algo.h"
Algo::Algo() { }
Algo::~Algo() { }
void Algo::process()
{
    computeSome();
    computeMore();
}
// The following methods are implemented in separate,
// platform-specific files.
// void Algo::computeSome()
// void Algo::computeMore()
#if defined(ALTIVEC)
#include "Algo_Altivec.cpp"
#elif defined(SSE)
#include "Algo_SSE.cpp"
#else
#include "Algo_Scalar.cpp"
#endif
Policy-like templates (mixins) are fine until you hit the requirement to fall back to a default implementation. That is a runtime operation, and it should be handled by runtime polymorphism; the Strategy pattern handles this fine.
There's one drawback to this approach: a Strategy-like algorithm cannot be inlined. Such inlining can provide a reasonable performance improvement in rare cases. If this is an issue, you'll need to move the Strategy boundary up to cover the higher-level logic.
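The last point can be sketched as follows (names are illustrative): if the virtual call sits at the per-element level, the compiler can never inline the element operation into the loop; moving the loop inside the strategy makes the loop body a plain function the compiler can inline and vectorize.

```cpp
#include <cstddef>

class ScaleOp {
public:
    virtual ~ScaleOp() = default;

    // Fine-grained strategy: one virtual call per element, so the
    // per-element work cannot be inlined into the caller's loop.
    virtual float apply(float x) const = 0;

    // Coarse-grained entry point: the default loops over the virtual call,
    // but implementations may override it with a loop of their own.
    virtual void applyAll(float* p, std::size_t n) const {
        for (std::size_t i = 0; i < n; ++i) p[i] = apply(p[i]);
    }
};

class DoubleOp : public ScaleOp {
public:
    float apply(float x) const override { return 2.0f * x; }

    // The whole loop lives inside the strategy: one virtual call for the
    // entire buffer, and an inlinable (and vectorizable) loop body.
    void applyAll(float* p, std::size_t n) const override {
        for (std::size_t i = 0; i < n; ++i) p[i] *= 2.0f;
    }
};
```

This is what "covering the higher-level logic by Strategy" amounts to in practice: pay the virtual-dispatch cost once per buffer rather than once per element.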