I have a Qt input listener class that signals stdin input in a running QCoreApplication. I want to use it on both Windows and Linux.
My current approach is to use #ifdef Q_OS_WIN inside both the header and the .cpp to select the platform-specific code. Since #ifdef is considered harmful and should be avoided, I want to refactor this so that I have a single header file inputlistener.h and let the build system choose between a platform-specific windows/inputlistener.cpp or linux/inputlistener.cpp, perhaps with an additional inputlistener_global.cpp that holds the code that is not platform specific.
However, I can't find a way to get the #ifdef in the header out of the way.
How can I achieve that?
Here is my current approach:
// inputlistener.h
#ifndef INPUTLISTENER_H
#define INPUTLISTENER_H
#include <QtCore>
class inputlistener : public QObject {
    Q_OBJECT
private:
#ifdef Q_OS_WIN
    QWinEventNotifier* m_notifier;
#else
    QSocketNotifier* m_notifier;
#endif
signals:
    void inputeventhappened(int keycode);
private slots:
    void readyRead();
public:
    inputlistener();
};
#endif // INPUTLISTENER_H
// inputlistener.cpp
#include "inputlistener.h"
#include "curses.h"
#ifdef Q_OS_WIN
#include <windows.h>
#endif
inputlistener::inputlistener()
{
#ifdef Q_OS_WIN
    m_notifier = new QWinEventNotifier(GetStdHandle(STD_INPUT_HANDLE));
    connect(m_notifier, &QWinEventNotifier::activated,
#else
    m_notifier = new QSocketNotifier(0, QSocketNotifier::Read, this);
    connect(m_notifier, &QSocketNotifier::activated,
#endif
            this, &inputlistener::readyRead);
    readyRead(); // data might already be available without a notification
}
void inputlistener::readyRead()
{
    // It's OK to call this with no data available to be read.
    int c;
    while ((c = getch()) != ERR) {
        emit inputeventhappened(c);
    }
}
You can create separate inputlistener.cpp files for Windows and Unix and put them into subdirectories such as win/ and linux/. In the makefile or project file you then add only the implementation file for the current platform, so the compiler compiles just one of them.
With this method you can avoid ifdefs entirely.
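For example, with qmake (a sketch; the file names and subdirectory layout follow the question):
# myapp.pro
SOURCES += inputlistener_global.cpp
win32: SOURCES += windows/inputlistener.cpp
unix:  SOURCES += linux/inputlistener.cpp
HEADERS += inputlistener.h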
If the class definitions differ between platforms, you can use the pImpl idiom to separate the implementation details of the class: https://cpppatterns.com/patterns/pimpl.html
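A minimal sketch of how the single header could then look (assuming the build system picks the matching platform .cpp as above; the destructor must be defined in the .cpp, where Impl is complete):
// inputlistener.h
#ifndef INPUTLISTENER_H
#define INPUTLISTENER_H
#include <QObject>
#include <memory>

class inputlistener : public QObject {
    Q_OBJECT
public:
    inputlistener();
    ~inputlistener(); // defined in the .cpp, where Impl is a complete type
signals:
    void inputeventhappened(int keycode);
private slots:
    void readyRead();
private:
    struct Impl;                  // holds the platform-specific notifier
    std::unique_ptr<Impl> m_impl; // defined only in the platform .cpp
};
#endif // INPUTLISTENER_H
Each platform .cpp then defines struct inputlistener::Impl with either a QWinEventNotifier* or a QSocketNotifier* member, and no #ifdef appears in the header.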
You can create WinEventListener and UnixEventListener (or whatever you want to call them), each using its own implementation instead of trying to fit both into one class via ifdefs, each implementing a common Listener interface and residing in a separate file.
Then have a factory function that returns the listener appropriate for the OS. That way there is only one single place that might require ifdefs.
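A sketch of that single place (the class names follow the ones above; WinEventListener and UnixEventListener are assumed to derive from Listener in their own files). Note that even this #ifdef disappears if the build system compiles only the matching platform file, each providing its own makeListener definition:
// listener.h
#include <QObject>
#include <memory>

class Listener : public QObject {
    Q_OBJECT
signals:
    void inputeventhappened(int keycode);
};

std::unique_ptr<Listener> makeListener();

// makelistener.cpp -- the one place that may need an #ifdef
std::unique_ptr<Listener> makeListener()
{
#ifdef Q_OS_WIN
    return std::make_unique<WinEventListener>();
#else
    return std::make_unique<UnixEventListener>();
#endif
}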
But in general, ifdefing something might be the best or the only course of action (e.g. when you are already abstracting something). Conditional compilation is one of the few valid/justified uses of the preprocessor (it's what it was made for).
Also, in your particular case, make sure there isn't already an appropriate class in the Qt library. For most common tasks, chances are an abstraction (or a recommended way of doing it) already exists.
There exists quite a bit of discussion on feature flags/toggles and why you would use them, but most of the discussion on implementing them centers around (web or client) apps. If your product/artifact is a C or C++ library and your public headers are affected by the flags, how would you implement them?
The "naive" way of doing it doesn't really work:
/// Does something
/**
 * Does something really cool
#ifdef FEATURE_FOO
 * @param fooParam describe param for foo
#endif
 */
void doSomethingCool(
#ifdef FEATURE_FOO
    int fooParam = 42
#endif
);
You wouldn't want to ship something like this.
The library you ship was built for a certain feature-flag combination; clients shouldn't need to #define the same feature flags to make things work.
The ifdefs in your public header are ugly.
And most importantly, if you disable a flag, you don't want clients to see anything about the disabled feature; maybe it is something upcoming and you don't want to show your work until it is ready.
Running the preprocessor on the file to get the header for distribution doesn't really work because that would not only act on feature flags but also do everything else the preprocessor does.
What would be a technical solution to this that doesn't have these flaws?
This kind of goo ends up in a codebase due to versioning. It is a broad topic with very few happy answers, but you certainly want to avoid making it more difficult than it needs to be. Focus on the kind of compatibility you want to provide.
The syntax proposed in the snippet is only required when you need binary compatibility. It keeps the library compatible with a doSomethingCool() call in the client code (passing no argument) without having to recompile that client code. In other words, the client programmer does nothing at all beyond copying the updated .dll or .so file, does not need any updated headers, and it is entirely your burden to get the feature flags right. Binary compatibility is pretty difficult to pull off reliably; beyond the flag wrangling, it is easy to make a mistake.
But what you are actually talking about is source compatibility: you provide the user with an updated header and he rebuilds his code to use the library update. In that case you don't need the feature flag; the C++ compiler by itself ensures that an argument is passed, and thanks to the default it will be 42. No flag is required at all, on either your end or the user's.
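As a sketch (reusing the question's hypothetical doSomethingCool), the updated public header simply gains a defaulted parameter:
// updated public header -- old doSomethingCool() call sites still compile;
// the compiler supplies the default 42 at each call site on rebuild
void doSomethingCool(int fooParam = 42);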
Another way to do it is by providing an overload. In other words, both a doSomethingCool() and a doSomethingCool(int) function. The client programmer keeps using the original overload until he's ready to move ahead. You'd also favor an overload when the function body has to change too much. If these functions are not virtual, then it even provides link compatibility, which could be useful in select cases. No feature flags required.
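A sketch of the overload pair (again using the question's names):
void doSomethingCool();             // original signature, kept for existing clients
void doSomethingCool(int fooParam); // new entry point; clients migrate when ready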
I'd say it's a relatively broad question, but I'll throw in my two cents.
First, you really want to separate the public headers from the implementation (sources and internal headers, if any). The public header that gets installed (e.g., under /usr/include) should contain the function declaration and, preferably, a constant to inform the client whether the library has a certain feature compiled in or not, like so:
#define FEATURE_FOO 1
void doSomethingCool();
Such a header is generally generated. Autotools is the de facto standard tool for this purpose on GNU/Linux. Otherwise you can write your own scripts to do it.
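An alternative sketch of the generation step uses CMake's configure_file (the file and option names here are illustrative). In CMakeLists.txt:
option(FEATURE_FOO "Compile the foo feature in" ON)
configure_file(config.h.in config.h)
And in config.h.in, the line
#cmakedefine FEATURE_FOO 1
becomes #define FEATURE_FOO 1 in the generated header when the option is ON, and a commented-out #undef otherwise.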
For completeness, in the .c file you would have the corresponding definition, with the optional parameter compiled in or out to match:
void doSomethingCool(
#ifdef FEATURE_FOO
    int fooParam
#endif
)
{
    /* ... */
}
(The default value stays in the installed header's declaration; C itself has no default arguments.)
It's also up to your distribution tools to keep the installed headers and library binaries in sync.
Use forward declarations.
Hide the implementation behind a pointer (the pImpl idiom).
This code is quoted from the link above:
// Foo.hpp
class Foo {
public:
    //...
private:
    struct Impl;
    Impl* _impl;
};

// Foo.cpp
struct Foo::Impl {
    // stuff
};
Binary compatibility is not a forte of C++; it is probably not worth considering.
For C, you might construct something like an interface class, so that your first touch with the library is something like:
struct kv {
    char *tag;
    int  val;
};
int Bind(struct kv *compat, void **funcs, void **stamp);
and your access to the library is now:
#define MyStrcpy(src, dest) (funcs->mystrcpy((stamp)(src),(dest)))
The contract is that Bind provides/constructs an appropriate (funcs, stamp) pair for the attribute set you provided, or fails if it cannot. Note that Bind is the only bit that has to know about the multiple layouts of *funcs and *stamp, so it can transparently provide a robust interface for this reduced version of the problem.
If you wanted to get really fancy, you might be able to achieve the same by re-writing the PLT that the dlopen/dlsym prepare for you, but:
You are grossly expanding your attack surface.
You are adding a lot of complexity for very little gain.
You are adding platform / architecture specific code where none is warranted.
A few downsides remain. You have to invoke Bind before any part of your program/library attempts to use it. Attempts to solve that lead straight to hell (see: finding C++ static initialization order problems), which must make N. Wirth smile. If you get too clever with your Bind(), you will wish you hadn't. You might want to be careful about re-entrancy, since a given client might Bind multiple times for different attribute sets (users are such a pain).
That's how I would manage this in pure C.
First of all, I would pack the features into a single unsigned integer, 32/64 bits long, to keep them as compact as possible.
Second, a private header used only when compiling the library, where I would define a macro that creates both the public API wrapper and the internal function:
#define CoolFeature1 0x00000001 // code the value as 0 to disable a feature
#define CoolFeature2 0x00000010
#define CoolFeature3 0x00000100
// ... other features
#define Cool (CoolFeature1 | CoolFeature2 | CoolFeature3 /* | ... | CoolFeature_n */)

// Note: token pasting needs "##" (not "#"), and the forwarding argument list
// ("args") must be passed separately from the parameter declarations.
#define ImplementApi(ret, fname, args, ...) \
    ret fname(__VA_ARGS__) { return Internal_##fname args; } \
    ret Internal_##fname(unsigned long flags, __VA_ARGS__)

#include "user_header.h" // standard user header with no reference to Cool features
Now we have a wrapper with a standard prototype that will be available in the user definition header, and an internal version which keeps an additional flag group specifying the optional features.
Using the macro, you can write:
ImplementApi(int, MyCoolFunction, (Cool, param1, param2), int param1, float param2)
{
    // Your code goes here
    if (Cool & CoolFeature2)
    {
        // Do something cool
    }
    else
    {
        // Flat life ...
    }
    /* ... */
    return 0;
}
In the case above you'll get two definitions:
int Internal_MyCoolFunction(unsigned long flags, int param1, float param2);
int MyCoolFunction(int param1, float param2);
You can also add to the macro, for the API function, the export attributes needed if you're distributing a dynamic library.
You can even use the same definition header if the ImplementApi macro is defined on the compiler command line; in that case the following simple definition in the header will do:
#define ImplementApi(ret, fname, args, ...) ret fname(__VA_ARGS__);
This variant generates only the exported API prototypes.
This suggestion, of course, is not exhaustive. There are a lot more adjustments you can make to render the definitions more elegant and automatic, e.g. including a sub-header with the function list so as to create only API prototypes for the user, and both internal and API prototypes for developers.
Why are you using defines for feature flags? Feature flags are supposed to let you turn features on and off at runtime, not at compile time.
In the code you would then branch to an implementation as early as possible, using interfaces and concrete classes that are chosen based on the feature flag, as sketched below.
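A minimal sketch of that idea (all names here are illustrative, not from the question):
#include <memory>

struct Renderer {                        // common interface
    virtual ~Renderer() = default;
    virtual void draw() = 0;
};
struct OldRenderer : Renderer { void draw() override { /* current behavior */ } };
struct NewRenderer : Renderer { void draw() override { /* feature behavior */ } };

// Chosen once, as early as possible, from a runtime flag.
std::unique_ptr<Renderer> makeRenderer(bool useNewRenderer)
{
    if (useNewRenderer)
        return std::make_unique<NewRenderer>();
    return std::make_unique<OldRenderer>();
}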
If users of the header files aren't supposed to be able to access the feature flags, then create header files that you don't distribute, which are only included in the implementation .c/.cpp files. You can then flip the flags in the private headers when you compile the library that they link to.
When you are ready to release a feature you have kept internal, you can move its flag into the public header, or just remove the flag entirely and switch to the new implementation.
A sloppy example, if you want this at compile time:
public_class.h
class Thing
{
public:
    void DoSomething();
};
private_class_feature1.h
#define USE_FEATURE_1
class NewFeatureImpl
{
public:
    static void CoolNewWay1();
};
public_class.cpp
#include "public_class.h"
#include "private_class_feature1.h"

void Thing::DoSomething()
{
#ifdef USE_FEATURE_1
    NewFeatureImpl::CoolNewWay1();
#else
    // Regular impl
#endif
}
I'm working on a C++ project which should run on Linux and Windows 7+. This is also my first week with C++ after a very simple and short basics course some years back.
Let's say I need to access the filesystem, but as OSes have different APIs for that, I need to create a wrapper class to keep things consistent.
Would the following work:
Have a base class File. From File I derive WinFile and LinuxFile, which implement the base class's public methods (e.g. createFile, readFile, etc.). In both subclasses I implement the public methods by mapping them to platform-specific calls (WinAPI file handling and Unix file handling).
Then I would use a preprocessor directive to conditionally load either WinFile or LinuxFile in the main application:
int main()
{
#if defined(_WIN32)
    WinFile fileSystem;
#elif defined(__linux__)
    LinuxFile fileSystem;
#endif
    // Both of the above expose the same public method API.
    std::string filedata;
    filedata = fileSystem.readFile(...);
    ...
}
My gut says that this should work, but are there any drawbacks? Will this easily become a maintainability problem? Are preprocessor directives considered "hacks" or something? I know they're used for header include guards and such, but those are compiler-related logic, not application-related logic.
Any other ways to achieve what I'm trying to do here?
You could define the API in a header file and move the implementation into cpp files.
Add the .cpp source files depending on your OS (or guard the .cpp files with macros), for example:
// File.h
class File
{
public:
    void open(std::string);
};

// File_impl_win.cpp (compiled on Windows)
void File::open(std::string)
{
    // impl
}

// File_impl_lin.cpp (compiled on Linux)
void File::open(std::string)
{
    // impl
}
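The build-system side could look like this with CMake (a sketch; the target name app is assumed):
if(WIN32)
    target_sources(app PRIVATE File_impl_win.cpp)
else()
    target_sources(app PRIVATE File_impl_lin.cpp)
endif()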
The advantage is that you don't need to distinguish between a LinuxFile and a WindowsFile; you get a single API instead.
But there is already an excellent cross-platform Boost library for filesystem work, Boost.Filesystem, which you could use.
I was wondering if there is an elegant way to solve this problem. Suppose there's a common header, e.g.:
// common.h
#ifndef COMMON_H
#define COMMON_H
#define ENABLE_SOMETHING
//#define ENABLE_SOMETHING_ELSE
#define ENABLE_WHATEVER
// many others
#endif
Now this file is included by, let's say, 100 other header files, and the various #defines are used to enable or disable parts of the code that are confined to just 1-2 files.
Every time a single #define is changed, the whole project seems to be rebuilt (I'm working in Xcode 5.1), which makes sense, as the define must be literally replaced all around the code and the compiler can't know a priori where it's used.
I'm trying to find a better way to manage this, to avoid long compilation times, as these defines are indeed changed many times. Splitting the defines into their corresponding files could be a solution, but I'd like the practical convenience of having everything packed together.
So I was wondering if there is a pattern that is usually used to solve this problem. I was thinking of having:
// common.h
class Enables
{
public:
    static const bool feature;
};

// common.cpp
const bool Enables::feature = false;
Will this be semantically equivalent when compiling an optimized binary (i.e. will code inside blocks guarded by a false enable totally disappear)?
You have two distinct problems here:
Splitting each define in their corresponding file/files could be a solution but I'd like the practical way to have everything packed together.
This is your first problem. If I understand correctly, you have more than one functional area, and you are not interested in having to include a header for each of them (you want a single header for everything).
Apply these steps:
do split the code by functionality into different headers; each header should contain (at most) what was enabled by a single #define FEATURESET (and be completely agnostic to the existence of the FEATURESET macro)
ensure each header is only compiled once (add #pragma once at the beginning of each feature header file)
add a convenience header file that performs #if or #ifdef based on your defined features, and includes the feature files as required:
// parsers.h
// this shouldn't be here: #pragma once
#ifdef PARSEQUUX_SAFE
#include <QuuxSafe.h>
#elif defined PARSEQUUX_FAST
#include <QuuxFast.h>
#else
#include <QuuxSafe.h>
#endif
// eventually configure static/global class factory here
// see explanation below for mentions of class factory
Client code:
#include <parsers.h> // use default Quux parser
#define PARSEQUUX_SAFE
#include <parsers.h> // use safe (but slower) Quux parser
So I was wondering if there is a pattern which is usually used to solve this problem
This is your second problem.
The canonical way to enable functionality by feature in C++ is to define a feature API in terms of base classes, class factories, and programming to a generic interface.
// common.h
#pragma once
#include <memory>
#include <Quux.h> // base Quux class

struct QuuxFactory
{
    enum QuuxType { Simple, Feathered };
    static std::unique_ptr<Quux> CreateQuux(int arg);
    static QuuxType type;
};

// common.cpp:
#include <common.h>
#include <SimpleQuux.h>    // SimpleQuux: public Quux
#include <FeatheredQuux.h> // FeatheredQuux: public Quux

QuuxFactory::QuuxType QuuxFactory::type = QuuxFactory::Simple;

std::unique_ptr<Quux> QuuxFactory::CreateQuux(int arg)
{
    switch (type) {
    case Simple:
        return std::unique_ptr<Quux>{new SimpleQuux{arg}};
    case Feathered:
        return std::unique_ptr<Quux>{new FeatheredQuux{arg}};
    }
    return nullptr; // TODO: handle errors
}
Client code:
// configure behavior:
QuuxFactory::type = QuuxFactory::Feathered;
// ...
auto quux = QuuxFactory::CreateQuux(10); // creates a FeatheredQuux in this case
This has the following advantages:
it is straightforward and uses no macros
it is reusable
it provides an adequate level of abstraction
it uses no macros (as in "at all")
the actual implementations of the hypothetical Quux functionality are only included in one file (as an implementation detail, compiled only once). You can include common.h wherever you want and it will not include SimpleQuux.h and FeatheredQuux.h at all.
As a generic guideline, you should write your code such that it requires no macros to run. If you do, you will find that any macros you want to add on top of it are trivial to add. If instead you rely on macros from the start to define your API, the code will be unusable (or close to unusable) without them.
There is a way to split defines but still use one central configuration header.
main_config.h (it must not have an include guard or #pragma once, because that would cause strange results if main_config.h is included more than once in one compilation unit):
#ifdef USES_SOMETHING
#include "something_config.h"
#endif
#ifdef USES_WHATEVER
#include "whatever_config.h"
#endif
something_config.h (must not have include guards for the same reason as main_config.h):
#define ENABLE_SOMETHING
All source and header files would #include only main_config.h, but before the include they must declare which part of it they refer to:
some_source.cpp:
#define USES_SOMETHING
#include "main_config.h"
some_other_file.h:
#define USES_WHATEVER
#include "main_config.h"
My problem is that I would like to organize my code so I can have debug and release versions of the same methods, and multiple definitions of the same methods for different target platforms.
Basically the core of the problem is the same for both: I need the same signature but with different definitions associated with it.
What is the best way to organize my code on the filesystem, and for compilation and production, so I can keep this clean and separated?
Thanks.
// #define DEBUG - we're making a non-debug version
#ifdef DEBUG
// function definition for debug
#else
// function definition for release
#endif
The same can be done for different operating systems. There's of course the problem of recompiling all of it, which can be a pain in the ass in C++.
I suggest you intervene at the source level and not in the header files (just to be sure to keep the same interfaces), something like:
//Foo.h
class Foo {
    void methodA();
    void methodB();
};

//Foo.cpp
// common method
void Foo::methodA() { }

#ifdef _DEBUG_
void Foo::methodB() { }
#elif defined(_PLATFORM_BAR_)
void Foo::methodB() { }
#else
void Foo::methodB() { }
#endif
If, instead, you want to keep everything separated, you will have to work at a higher level; the preprocessor is not enough to conditionally compile one .cpp file instead of another. You will have to work with the makefile or whatever build system you use.
Another choice could be having source files whose contents simply disappear when not on the specific platform, e.g.:
//Foo.h
class Foo {
    void methodA();
    void methodB();
};

//FooCommon.cpp
void Foo::methodA() { }

//FooDebug.cpp
#ifdef _DEBUG_
void Foo::methodB() { }
#endif

//FooRelease.cpp
#ifndef _DEBUG_
void Foo::methodB() { }
#endif
If your compiler allows, you can try keeping the source files for each version in a separate subfolder (e.g. #include "x86dbg/test.h"), then using global macro definitions to control the flow:
#define MODE_DEBUG
#ifdef MODE_DEBUG
#include "x86dbg/test.h"
#else
#include "x86rel/test.h"
#endif
You can also use a similar structure for member function definitions, so that you can have two different definitions in the same file. Many compilers also provide their own predefined global macros, so instead of #define MODE_DEBUG above you might be able to use something like #ifdef _CPP_RELEASE, or even define one through a compiler flag.
Here's a little problem I've been thinking about for a while now that I have not found a solution for yet.
So, to start with, I have this function guard that I use for debugging purpose:
class FuncGuard
{
public:
    FuncGuard(const TCHAR* funcsig, const TCHAR* funcname, const TCHAR* file, int line);
    ~FuncGuard();
    // ...
};

#ifdef _DEBUG
#define func_guard() FuncGuard __func_guard__( TEXT(__FUNCSIG__), TEXT(__FUNCTION__), TEXT(__FILE__), __LINE__ )
#else
#define func_guard() void(0)
#endif
The guard is intended to help trace the path the code takes at runtime by printing some information to the debug console. It is intended to be used such as:
void TestGuardFuncWithCommentOne()
{
    func_guard();
}

void TestGuardFuncWithCommentTwo()
{
    func_guard();
    // ...
    TestGuardFuncWithCommentOne();
}
And it gives this as a result:
..\tests\testDebug.cpp(121):
Entering[ void __cdecl TestGuardFuncWithCommentTwo(void) ]
..\tests\testDebug.cpp(114):
Entering[ void __cdecl TestGuardFuncWithCommentOne(void) ]
Leaving[ TestGuardFuncWithCommentOne ]
Leaving[ TestGuardFuncWithCommentTwo ]
Now, one thing that I quickly realized is that it's a pain to add and remove the guards from the function calls. It's also unthinkable to leave them there permanently as they are, because they drain CPU cycles for no good reason and can quickly bring the app to a crawl. Also, even if there were no impact on the performance of the app in debug, there would soon be a flood of information in the debug console that would render this debug tool useless.
So, I thought it could be a good idea to enable and disable them on a per-file basis.
The idea would be to have all the function guards disabled by default, but they could be enabled automagically in a whole file simply by adding a line such as
EnableFuncGuards();
at the top of the file.
I've thought about many solutions for this. I won't go into details here since my question is already long enough, but let's just say that I've tried more than a few tricks involving macros that all failed, and one involving explicit instantiation of templates, but so far none of them gets me the result I'm looking for.
Another restricting factor to note: the header in which the function guard mechanism is currently implemented is included through a precompiled header. I know it complicates things, but if someone could come up with a solution that works in this situation, that would be awesome. If not, well, I can certainly extract that header from the precompiled header.
Thanks a bunch in advance!
Add a bool to FuncGuard that controls whether it should display anything.
#ifdef NDEBUG
#define SCOPE_TRACE(CAT)
#else
extern bool const func_guard_alloc;
extern bool const func_guard_other;
// Two-level concatenation so that __LINE__ is expanded before pasting.
#define SCOPE_TRACE_CAT2(a, b) a##b
#define SCOPE_TRACE_CAT(a, b) SCOPE_TRACE_CAT2(a, b)
#define SCOPE_TRACE(CAT) \
    NppDebug::FuncGuard SCOPE_TRACE_CAT(npp_func_guard_, __LINE__)( \
        TEXT(__FUNCSIG__), TEXT(__FUNCTION__), TEXT(__FILE__), \
        __LINE__, func_guard_##CAT)
#endif
Implementation file:
void example_alloc() {
    SCOPE_TRACE(alloc);
}

void other_example() {
    SCOPE_TRACE(other);
}
This:
uses specific categories (including one per file if you like)
allows multiple uses in one function, one per category or logical scope (by including the line number in the variable name)
compiles away to nothing in NDEBUG builds (NDEBUG is the standard I'm-not-debugging macro)
You will need a single project-wide file containing the definitions of your category bools. Changing this 'settings' file does not require recompiling any of the rest of your program (just relinking), so you can get back to work quickly. (This also means it works just fine with precompiled headers.)
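A sketch of that settings file (the file name is illustrative; the extern keyword on the definitions gives the const bools external linkage, matching the declarations in the header):
// trace_settings.cpp -- the only file you touch to toggle categories
extern bool const func_guard_alloc = true;  // trace allocation code
extern bool const func_guard_other = false; // keep everything else quiet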
Further improvement involves telling the FuncGuard about the category, so it can even log to multiple locations. Have fun!
You could do something similar to the assert() macro where having some macro defined or not changes the definition of assert() (NDEBUG in assert()'s case).
Something like the following (untested):
#undef func_guard
#ifdef USE_FUNC_GUARD
#define func_guard() NppDebug::FuncGuard __npp_func_guard__( TEXT(__FUNCSIG__), TEXT(__FUNCTION__), TEXT(__FILE__), __LINE__)
#else
#define func_guard() void(0)
#endif
One thing to remember is that the include file that does this can't have include guard macros (at least not around this part).
Then you can use it like so to get tracing controlled even within a compilation unit:
#define USE_FUNC_GUARD
#include "funcguard.h"
// stuff you want traced
#undef USE_FUNC_GUARD
#include "funcguard.h"
// and stuff you don't want traced
Of course this doesn't play 100% well with pre-compiled headers, but I think that subsequent includes of the header after the pre-compiled stuff will still work correctly. Even so, this is probably the kind of thing that shouldn't be in a pre-compiled header set.