Making a passthrough interface to a library - C++

Sorry if this question has been asked before. I'm really not sure how to search for it.
What I'm trying to do is build up a library of functions for myself to make life easier in my future school work. I built up a set of functions that handle geometry-based things, like 3D vectors, points, and planes, along with all of their associated methods (dot, cross, scale).
Through some googling, I found the Eigen library, which implements everything I want, including a lot of SSE stuff to make it fast.
I don't understand SSE or a lot of the stuff they did in that library, so I don't want to use it as my development library. What I mean by that is, I want to code everything using my dumb, slow versions whose workings I understand, so I can make sure I'm doing everything right. Then, once I'm sure the math is coming out right, I'd like to switch over to the Eigen library for efficiency.
Is there some way to make it so I can set a flag, or pass a variable, (maybe some template thing?) so that I can switch my code from my stuff to the Eigen stuff without having to rewrite my applications?
Basically I'd like to make a standard interface between the two libraries.
The Eigen library code would be included in the same directory. I'd have my Geometry.h, and then an Eigen folder that contains all of that library.
I don't know how to bridge the gap between the two. Would I need to write wrapper methods for all of their stuff so it has a common API?
EDIT ::
I should also add that I'd like this to be as fast as possible. For things like dot, cross, and matrix functions that could get called many times, I'd like to avoid going through virtual functions or overly complex wrapper methods that could hurt performance.
Am I asking too much?
END EDIT
Something like this is what I want, in effect, but I'm pretty sure it doesn't do what I want:
#ifdef BASIC_LIBRARY
class Vector3
{
public:
    float x, y, z;
    float dot(const Vector3& other) const;
};
#endif

#ifdef EIGEN_LIBRARY
//some sort of passthrough to eigen library
#endif

I'd suggest using namespaces for this:
namespace geometry {
    class Vector3 {
        ...
    };
}

namespace eigen {
    using geometry::Vector3;
    ...
}
Basically this would allow you to define your basic geometry types & operations, and then use selected portions directly from within the other namespace.
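If the speed requirement from the edit matters, the switch itself can be done at compile time with a typedef, so there are no virtual functions or wrappers in the way. Here is a minimal sketch under stated assumptions: USE_EIGEN and MyVector3 are made-up names, while Eigen::Vector3f and its dot() and norm() members are real Eigen API; your own class would have to mirror those member names:

// Geometry.h -- compile-time backend switch (sketch)
#ifdef USE_EIGEN
#include <Eigen/Dense>
namespace geometry { typedef Eigen::Vector3f Vector3; }
#else
#include "MyVector3.h"  // your slow-but-understood implementation
namespace geometry { typedef MyVector3 Vector3; }
#endif

// Application code only ever names geometry::Vector3. Because the
// switch is a typedef, every call resolves at compile time; there is
// no virtual dispatch in the way.
inline float cosAngle(const geometry::Vector3& a, const geometry::Vector3& b)
{
    return a.dot(b) / (a.norm() * b.norm());
}

The price of this approach is that the two backends must agree on the parts of the interface you actually use; where they don't, a thin inline wrapper class around Eigen's type restores a common API without adding runtime cost.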

Related

How to make functions with conditional implementations for different platforms, without linking the other implementation [duplicate]

I'm having a bit of a go at developing a platform abstraction library for an application I'm writing, and struggling to come up with a neat way of separating my platform independent code from the platform specific code.
As I see it there are two basic approaches possible: platform independent classes with platform specific delegates, or platform independent classes with platform specific derived classes. Are there any inherent advantages/disadvantages to either approach? And in either case, what's the best mechanism to set up the delegation/inheritance relationship such that the process is transparent to a user of the platform independent classes?
I'd be grateful for any suggestions as to a neat architecture to employ, or even just some examples of what people have done in the past and the pros/cons of the given approach.
EDIT: in response to those suggesting Qt and similar, yes I'm purposely looking to "reinvent the wheel" as I'm not just concerned with developing the app, I'm also interested in the intellectual challenge of rolling my own platform abstraction library. Thanks for the suggestion though!
I'm using platform-neutral header files, keeping any platform-specific code in the source files (using the PIMPL idiom where necessary). Each platform-neutral header has one platform-specific source file per platform, with extensions such as *.win32.cpp and *.posix.cpp. The platform-specific ones are only compiled on the relevant platforms.
I also use boost libraries (filesystem, threads) to reduce the amount of platform specific code I have to maintain.
It's platform-independent class declarations with platform-specific definitions.
Pros: Works fairly well, doesn't rely on the preprocessor - no #ifdef MyPlatform, keeps platform specific code readily identifiable, allows compiler specific features to be used in platform specific source files, doesn't pollute the global namespace by #including platform headers.
Cons: It's difficult to use inheritance with pimpled classes; sometimes the PIMPL structs need their own headers so they can be referenced from other platform-specific source files.
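A minimal sketch of that layout, using a hypothetical Mutex class (the class is invented for illustration; the Win32 critical-section calls are real):

// mutex.h -- platform-neutral header
class Mutex {
public:
    Mutex();
    ~Mutex();
    void lock();
    void unlock();
private:
    struct Impl;   // defined differently in each *.platform.cpp
    Impl* impl_;
};

// mutex.win32.cpp -- compiled only on Windows
#include "mutex.h"
#include <windows.h>

struct Mutex::Impl { CRITICAL_SECTION cs; };

Mutex::Mutex() : impl_(new Impl) { InitializeCriticalSection(&impl_->cs); }
Mutex::~Mutex() { DeleteCriticalSection(&impl_->cs); delete impl_; }
void Mutex::lock()   { EnterCriticalSection(&impl_->cs); }
void Mutex::unlock() { LeaveCriticalSection(&impl_->cs); }

// mutex.posix.cpp would define Impl around a pthread_mutex_t instead,
// and only it would include <pthread.h>.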
Another way is to have platform independent conventions, but substitute platform specific source code at compile time.
That is to say that if you imagine a component, Foo, that has to be platform specific (like sockets or GUI elements), but has these public members:
class Foo {
public:
    void write(const char* str);
    void close();
};
Every module that has to use a Foo obviously has #include "Foo.h", but in a platform-specific makefile you might have -IWin32, which means that the compiler looks in .\Win32 and finds a Windows-specific Foo.h containing the class with the same interface, but perhaps with Windows-specific private members, etc.
So there is never any file which contains Foo as written above, only sets of platform-specific files which are used when selected by a platform-specific makefile.
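For instance, the Windows copy might look like this sketch (the SOCKET member is an invented example of a Windows-specific private):

// Win32/Foo.h -- found first because the Windows makefile passes -IWin32
#include <winsock2.h>

class Foo {
public:
    void write(const char* str);   // same public interface on every platform
    void close();
private:
    SOCKET sock_;                  // Windows-specific private member
};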
Have a look at ACE. It has a pretty good abstraction using templates and inheritance.
I might go for a policy-type thing:
#include <iostream>
#include <string>

template<typename Platform>
struct PlatDetails : private Platform {
    std::string getDetails() const {
        // this-> is required: getName() lives in the dependent base class
        return std::string("MyAbstraction v1.0; ") + this->getName();
    }
};

// For any serious compatibility functions, these would
// of course have to be in different headers, and the implementations
// would call some platform-specific functions to get precise
// version numbers. Using PImpl would be a smart idea for these
// classes if they need any platform-specific members, since as
// Joe Gauterin says, you want to avoid your application code indirectly
// including POSIX or Windows system headers, containing useless definitions.
struct Windows {
    std::string getName() const { return "Windows"; }
};

struct Linux {
    std::string getName() const { return "Linux"; }
};

#ifdef WIN32
typedef PlatDetails<Windows> PlatformDetails;
#else
typedef PlatDetails<Linux> PlatformDetails;
#endif

int main() {
    std::cout << PlatformDetails().getDetails() << "\n";
}
There's not a whole lot to choose, though, between doing this and doing regular simulated dynamic binding with the CRTP, where the generic thing is the base and the specific thing is the derived class:
template<typename Platform>
struct PlatDetails {
    std::string getDetails() const {
        // cast must preserve const, since getDetails() is a const member
        return std::string("MyAbstraction v1.0; ") +
            static_cast<const Platform*>(this)->getName();
    }
};

struct Windows : PlatDetails<Windows> {
    std::string getName() const { return "Windows"; }
};

struct Linux : PlatDetails<Linux> {
    std::string getName() const { return "Linux"; }
};

#ifdef WIN32
typedef Windows PlatformDetails;
#else
typedef Linux PlatformDetails;
#endif

int main() {
    std::cout << PlatformDetails().getDetails() << "\n";
}
Basically, in the latter version getName must be public (although I think you can use friend), and so must the inheritance, whereas in the former the inheritance can be private and/or the interface functions can be protected, if desired. So the adaptor can be a firewall between the interface the platform has to implement and the interface your application code uses. Furthermore, you can have multiple policies in the former (i.e. multiple platform-dependent facets used by the same platform-independent class), but not in the latter.
The advantage of either of them over versions with delegates or non-template-using inheritance is that you don't need any virtual functions. Arguably this isn't a whole lot of advantage, considering how scary both policy-based design and CRTP are at first contact.
In practice, though, I agree with quamrana that normally you can just have different implementations of the same thing on different platforms:
// Or just set the include path with -I or whatever
#ifdef WIN32
#include "windows/platform.h"
#else
#include "linux/platform.h"
#endif

struct PlatformDetails {
    std::string getDetails() const {
        return std::string("MyAbstraction v1.0; ") +
            porting::getName();
    }
};

// windows/platform.h
namespace porting {
    std::string getName() { return "Windows"; }
}

// linux/platform.h
namespace porting {
    std::string getName() { return "Linux"; }
}
If you'd like to use a full-blown C++ framework that is available for many platforms under a permissive copyleft license, use Qt.
So... you don't want to simply use Qt? For real work using C++, I'd very highly recommend it. It's an absolutely excellent cross-platform toolkit. I just wrote a few plugins to get it working on the Kindle, and now the Palm Pre. Qt makes everything easy and fun. Downright rejuvenating, even. Well, until your first encounter with QModelIndex, but they've supposedly realized they over-engineered it and they're replacing it ;)
As an academic exercise though, this is an interesting problem. As a wheel re-inventor myself, I've even done it a few times now. :)
Short answer: I'd go with PIMPL. (Qt sources have examples a-plenty)
I've used base classes and platform specific derived classes in the past, but it usually ends up a bit messier than I had in mind. I've also done part of an implementation using some degree of function pointers for platform specific bits, and I was even less happy with that.
Both times I ended up with a very strong feeling that I was over-architecting and had lost my way.
I found using private implementation classes (PIMPL) with the platform-specific bits in different files the easiest to write AND debug. However... don't be too afraid of an #ifdef or two, if it's just a few lines and it's very clear what's going on. I hate cluttered or nested #ifdef logic, but one or two here and there can really help avoid code duplication.
With PIMPL, you're not constantly refactoring your design as you discover new bits that require different implementations between platforms. That way be dragons.
At the implementation level, hidden from the application... there's nothing wrong with a few platform specific derived classes either. If two platform implementations are fairly well defined and share almost no data members, they'd be a good candidate for that. Just do it after realizing that, not before out of some idea that everything needs to fit your selected pattern.
If anything, the biggest gripe I have about coding today is how easily people seem to get lost in idealism. PIMPL is a pattern, having platform specific derived classes is another pattern. Using function pointers is a pattern. There's nothing that says they're mutually exclusive.
However, as a general guideline... start with PIMPL.
There are also the big boys, such as Qt4 (complete framework + GUI), GTK+ (GUI-only AFAIK), and Boost (framework only, no GUI). All three support most platforms; GTK+ is C, while Qt4/Boost are C++ and for the most part template-based.
You might also want to take a look at POCO:
The POCO C++ Libraries (POCO stands for POrtable COmponents) are open source C++ class libraries that simplify and accelerate the development of network-centric, portable applications in C++. The libraries integrate perfectly with the C++ Standard Library and fill many of the functional gaps left open by it. Their modular and efficient design and implementation makes the POCO C++ Libraries extremely well suited for embedded development, an area where the C++ programming language is becoming increasingly popular, due to its suitability for both low-level (device I/O, interrupt handlers, etc.) and high-level object-oriented development. Of course, the POCO C++ Libraries are also ready for enterprise-level challenges.
(source: pocoproject.org)

Where in the Eigen source is the sum() function defined for a particular matrix?

I am new to Eigen and trying to get a feel for the layout. I noticed that each matrix has a sum() function that returns the sum of all the coefficients in a given matrix. I was interested in how it was implemented, since I wanted to find the best way to loop through an eigen matrix. I went into the source code and found the following interface in "DenseBase.h":
EIGEN_DEVICE_FUNC Scalar sum() const;
Perhaps I misunderstood how Eigen is designed, but I thought it was the case that all functions were defined in their header files. I also looked in "Matrix.h" and "MatrixBase.h", and was unable to find the implementation. Which header file is the definition in?
The sum() function comes from Eigen::internal::scalar_sum_op<Scalar>() being called through redux in Redux.h. scalar_sum_op is defined in Functors.h. After that I lost interest. I found this in two ways. The first was to use Visual Studio, right-click on sum(), choose "Go to Definition", and follow the trail until I was satisfied. The second was to grep for sum() and again follow the trail.
If you read through Redux.h you will get a feel for how the developers did it. They spend considerable effort vectorizing and unrolling things to make them work fast. I would say the best way to loop through an Eigen matrix is to use the provided interfaces to do what you want. I doubt you have a use case that has not been covered by the interface somehow.
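As a small illustration of "use the provided interfaces", here is a sketch comparing a hand-written accumulation loop with the built-in reduction (MatrixXd, Random(), rows(), cols(), operator() and sum() are all documented Eigen API):

#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::MatrixXd m = Eigen::MatrixXd::Random(4, 4);

    // Hand-written loop: Eigen is column-major by default, so the
    // column index goes in the outer loop for cache locality.
    double manual = 0.0;
    for (int j = 0; j < m.cols(); ++j)
        for (int i = 0; i < m.rows(); ++i)
            manual += m(i, j);

    // The provided reduction, vectorized and unrolled via Redux.h.
    std::cout << manual << " == " << m.sum() << "\n";
    return 0;
}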

Swig wrapping GLM library

I am working on a 2D game engine at the moment and have hit a stumbling block while implementing the Lua scripting/interpreter.
I'm using SWIG and have got the basics all working fine.
In the engine I use GLM (OpenGL Mathematics Library, http://glm.g-truc.net/) for all vector and matrix related areas.
I really need to expose glm (at a basic level) via SWIG to Lua, so I can call methods in Lua like:
actor:GetPosition() <- which returns a glm::vec2
GLM is quite a complex library (possibly an understatement lol) and I do not require the whole thing exposed; that would be a ridiculous amount of work, I assume. I simply want to be able to access the x and y components of the glm::vec2 class.
I'm sure this must be easy, as SWIG doesn't require a full class definition, and there must be a way to let SWIG just assume that the glm::vec2 class simply has x, y members.
I'm not sure if using proxy classes in SWIG is the way to do this, or some other method? I'm quite new to Lua integration and SWIG also.
One route I really don't want to go down is ditching GLM and writing my own, much simpler Vector/Matrix library (no templates, etc.) that I can then simply wrap with SWIG, but I feel this would be a waste of time and I would ultimately end up with a less powerful math library :(.
Thanks in advance and I can provide more info if necessary.
The big problem with GLM is that its vector types use things that have dubious C++ validity. Things that many compilers will allow, but are not allowed by the standard. SWIG uses its own parser to work, so it's going to have a hard time making heads or tails of GLM's definitions.
In a case like this, I would suggest using something like Luabind rather than SWIG. It gives a bit more fine-grained control in such circumstances, and it has the benefit of not using its own parser. Granted, you could effectively rewrite the prototypes for the important parts of GLM in your .swig file.
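For a flavour of the rewrite-the-prototypes route, a hand-written interface file might look like this sketch. %module and %{ %} are real SWIG directives; the point is that SWIG's parser only ever sees the simplified struct, while the generated wrapper code is compiled against the real GLM header:

// glm.i -- hand-written prototypes for SWIG's parser (sketch)
%module glm
%{
#include <glm/glm.hpp>   // the real header, seen only by the C++ compiler
%}

namespace glm {
    struct vec2 {
        float x, y;
        vec2();
        vec2(float x, float y);
    };
}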
A solution in luabind would look like this:
#include <luabind/luabind.hpp>
#include <glm/glm.hpp>

extern "C" int init(lua_State* L)
{
    using namespace luabind;
    using namespace glm;

    open(L);
    module(L)
    [
        class_<dvec2>("dvec2")
            .def_readwrite("x", &dvec2::x)
            .def_readwrite("y", &dvec2::y)
    ];
    return 0;
}

Linux g++ Embedding Prolog Logic Engine Within C++

I have some logic in a C++ program that is not only insanely complex, it requires multiple solutions, for which Prolog is ideal. It's sort of like a firewall config script, checking input for actions, but sometimes more than one action is required.
What I want is something like this:
class PrologEngine
{
public:
    // Load a file of Prolog rules, predicates, facts, etc. in textual format.
    // Must be callable multiple times, to load AND COMPILE (for speed) Prolog rule files.
    void LoadLogic(const char* filename) throw(PrologException);

    // Returns a vector of matching predicates in text form.
    std::vector<std::string> Evaluate(
        const char* predicate_in_string_form = "execute(input, Result)") throw(PrologException);
};
It needs no ability to call back into C++.
AMI Prolog seems to get it, but it's not available on Linux. I'm trying to use SWI-Prolog and can only find two examples and an incredibly byzantine API (my opinion).
Can anyone point me to an example that is close to what I'm looking for?
There is A C++ interface to SWI-Prolog, that's high level.
I'm fighting with it; here's an example of bridging to OpenGL:
PREDICATE(glEvalCoord1d, 1) {
    double u = A1;      // implicit conversion from the first Prolog argument
    glEvalCoord1d(u);
    return TRUE;
}
This clean code hides many 'byzantinisms' behind implicit type conversions and some macros. The interface is well thought out and bidirectional: to call Prolog from C++ there is PlCall ('run' a query, similar to the Evaluate you sketch in the question) or the more structured PlQuery, for multiple results.
If you don't need to link to OpenGL, or can wait to hear the answer I hope to get from the SWI-Prolog mailing list, you should evaluate it.
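To give a flavour of what the Evaluate() from the question might look like on top of that interface, here is an untested sketch. PlTermv, PlQuery, and next_solution() are from SWI-Prolog's SWI-cpp.h; the argument handling (binding "input", reading back Result as text) is my assumption about your execute/2 predicate:

#include <SWI-cpp.h>
#include <string>
#include <vector>

// Collect every binding of Result for execute(input, Result).
std::vector<std::string> Evaluate()
{
    std::vector<std::string> results;
    PlTermv av(2);
    av[0] = "input";                 // bind the input argument
    PlQuery q("execute", av);        // open the query execute(input, Result)
    while (q.next_solution())        // backtrack through all solutions
        results.push_back(static_cast<char*>(av[1]));
    return results;
}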
If you don't mind rewriting the Prolog code for use in a native C++ header-only library, I'd look into the Castor library:
http://www.mpprogramming.com/cpp/

Parsing C++ to make some changes in the code

I would like to write a small tool that takes a C++ program (a single .cpp file), finds the "main" function and adds 2 function calls to it, one in the beginning and one in the end.
How can this be done? Can I use g++'s parsing mechanism (or any other parser)?
If you want to make it solid, use clang's libraries.
As suggested by some commenters, let me put forward my idea as an answer:
So basically, the idea is:
// ... original .cpp file ...
#include <yourHeader>

namespace {
    SpecialClass specialClassInstance;
}
Where SpecialClass is something like:
class SpecialClass {
public:
    SpecialClass() {
        firstFunction();
    }
    ~SpecialClass() {
        secondFunction();
    }
};
This way, you don't need to parse the C++ file. Since you are declaring a global, its constructor will run before main starts and its destructor will run after main returns.
The downside is that you don't get to know the relative order in which your global is constructed compared to others. So if you need to guarantee that firstFunction is called before any other constructor elsewhere in the entire program, you're out of luck.
I've heard the GCC parser is both hard to use and even harder to get at without invoking the whole toolchain. I would try the clang C/C++ parser (libparse), and the tutorials linked in this question.
Adding a function call at the beginning of main() and at the end of main() is a bad idea. What if someone returns in the middle?
A better idea is to instantiate a class at the beginning of main() and let that class's destructor call the function you want called at the end. This would ensure that the function always gets called.
If you have control of your main program, you can hack a script to do this, and that's by far the easiest way. Simply make sure the insertion points are obvious (odd comments, required placement of tokens, you choose) and unique (including outlawing general coding practices if you have to, to ensure the uniqueness you need is real). Then a dumb string-hacking tool that reads the source, finds the unique markers, and inserts your desired calls will work fine.
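A sketch of such a tool, assuming a made-up marker comment that you have planted in main():

// insert_call.cpp -- dumb string hacking: find a unique marker comment
// and splice a call in after it. Marker and call are invented examples.
#include <fstream>
#include <sstream>
#include <string>

int main(int argc, char** argv)
{
    if (argc < 2) return 1;

    std::ifstream in(argv[1]);
    std::stringstream buf;
    buf << in.rdbuf();                       // slurp the whole file
    in.close();
    std::string src = buf.str();

    const std::string marker = "// INSERT-STARTUP-CALL";
    std::string::size_type pos = src.find(marker);
    if (pos == std::string::npos) return 1;  // marker must be present

    src.insert(pos + marker.size(), "\n    firstFunction();");

    std::ofstream out(argv[1]);              // rewrite in place
    out << src;
    return 0;
}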
If the source of the main program comes from other sources and you don't have control, then to do this well you need a full C++ program transformation engine. You don't want to build this yourself, as just the C++ parser is an enormous effort to get right. Others here have mentioned Clang and GCC as answers.
An alternative is our DMS Software Reengineering Toolkit with its C++ front end. DMS, using its C++ front end, can parse code (for a variety of C++ dialects), build ASTs, and carry out full name/type resolution to determine the meaning/definition/use of all symbols. It provides procedural and source-to-source transformations to enable changes to the AST, and can regenerate compilable source code complete with the original comments.