I don't know what this concept is called, so the title may sound weird. Imagine the following scenario:
main.cpp:
#define SOME_KEYWORD

int main()
{
    foo();
    return 0;
}
other.cpp:
void foo()
{
    // Do some stuff
#ifdef SOME_KEYWORD
    // Do some additional stuff
#endif
}
I've tried it out, and it doesn't work if the #define is in the other file. Is there a way around this? (I'd rather not modify function parameters just to achieve this, since it will only be present at development time and the functions can be many layers of abstraction away.)
And I guess this is a C way of doing things; I don't know whether it would be considered good practice in C++. If not, what are the alternatives?
In C++, since C++17, an if constexpr is a good way to go about doing this, e.g. in some header file:
// header.hpp
#pragma once
constexpr bool choice = true; // or false, if you don't want to compile some additional stuff
and in an implementation file:
#include "header.hpp"
void foo()
{
    // Do some stuff
    if constexpr (choice)
    {
        // Do some additional stuff
    }
}
Note that this is not a drop-in replacement for #define, but it works in many cases.
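One difference worth knowing: outside a template, the branch discarded by if constexpr is still parsed and type-checked, whereas an #ifdef'd-out block only has to survive the preprocessor. A minimal sketch (use_extra and foo are illustrative names, not from the question):

```cpp
#include <string>

constexpr bool use_extra = false; // flip to true to enable the extra work

std::string foo() {
    std::string result = "common";
    if constexpr (use_extra) {
        // Discarded at compile time, but still parsed and type-checked:
        // this line must be valid C++ even though it never runs.
        result += "+extra";
    }
    return result;
}
```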
A preprocessor symbol defined in one translation unit is not visible in a different translation unit. As suggested in a comment, you can define it in a header and then include that header where needed (it's not a keyword, so I chose a better name):
// defines.h
#define SOME_SYMBOL
// other.cpp
#include "defines.h"
Conditional compilation via preprocessor macros has its uses, e.g. conditionally compiling platform-specific code or excluding debug code from release builds. For anything else I would not use it, because when overused it can create a big mess and is error-prone (e.g. it is too easy to forget to include defines.h). Consider making foo a template:
template <bool SOME_FLAG>
void foo()
{
    // Do some stuff
    if constexpr (SOME_FLAG) {
        // Do some additional stuff
    }
}
And if you still want to make use of the preprocessor, this allows you to concentrate usage of macros to a single location:
// main.cpp
#define SOME_SYMBOL

#ifdef SOME_SYMBOL
constexpr bool flag = true;
#else
constexpr bool flag = false;
#endif

int main()
{
    foo<flag>();
    return 0;
}
I don't know what this concept is called
Generally, pre-processing. More specifically, the pre-processor is used here to conditionally compile the program.
This is a common technique used to create portable interfaces over platform-specific ones. Sometimes it is used to enable or suppress debugging features.
I've tried it out and it doesn't work if #define is present in other file.
Macros only affect the translation unit in which they are defined.
Is there a way around this?
Define the macro in all of the files where you use it. Typically, this is achieved by including the definition from a header, or by specifying a compiler option.
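For the compiler-option route, the symbol is typically passed on the command line rather than written into any source file. A minimal sketch (SOME_SYMBOL and extra_enabled are illustrative names):

```cpp
// Build with:  g++ -DSOME_SYMBOL main.cpp   -> extra_enabled() returns 1
// Build plain: g++ main.cpp                 -> extra_enabled() returns 0
#ifdef SOME_SYMBOL
int extra_enabled() { return 1; }
#else
int extra_enabled() { return 0; }
#endif
```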
And, I guess this is a C way to do things, I don't know if that would be considered as a good practice in C++, if not, what are the alternative ways?
There is no complete alternative in C++. In some cases macros can be replaced by, or combined with, templates and if constexpr.
Related
I want to use conditional compilation for testing different properties of my code; however, I don't want to pollute the global namespace. Would someone be kind enough to let me know if there is a way to use conditional compilation without using #define?
I have searched for an option, but most other posts refer to using static const, etc. to choose different code at run-time. I, however, want to compile different code. For example, instead of:
#define A_HASH_DEFINE
...
#ifdef A_HASH_DEFINE
Some code
#elif defined(ANOTHER_HASH_DEFINE)
Some other code
#endif
I would like to be able to use something with a scope, such as:
scope::A_SCOPED_HASH_DEFINE
...
#ifdef scope::A_SCOPED_HASH_DEFINE
Some code
#elif scope::ANOTHER_SCOPED_HASH_DEFINE
Some other code
#endif
If you're using C++17, you should use if constexpr.
It is essentially an if statement where the branch is chosen at compile-time, and any not-taken branches are discarded. It is cleaner than having the #ifdefs splattered throughout your code.
#ifdef _DEBUG
constexpr bool debug_mode = true;
#else
constexpr bool debug_mode = false;
#endif
if constexpr (debug_mode) {
    // debug code
}
You can read more about how it replaces #if … #else in FooNathan's blog:
The year is 2017 - Is the preprocessor still needed in C++?
When using preprocessor definitions we always have to deal with the tradeoff that we are polluting the "global namespace".
It's not really the global namespace, of course, but a namespace of their own; the trouble is that these macro names take effect in every scope, due to their nature.
We simply accept this.
We try to limit them by perhaps keeping them to individual translation units. Or, if they need to be in a header, we switch to const bools instead.
If you need conditional compilation in the truest sense and you can spell this in non-preprocessor C++ using if constexpr, then so much the better.
Otherwise it's just something we have to deal with. We at least try to use descriptive names and avoid using common terms that may conflict with third-party headers. If/when they do, we change them.
If you're still finding that your macros are too polluting, then it could be that your switching logic encapsulates too much code. In such a case, you may consider moving the logic into your build system and changing which source files you build in the first place.
For example, an OpenGL renderer implementation versus a DirectX renderer implementation (an example that only works if you switch between these at build time, as you would be with a macro!).
When dealing with conditional compilation, it is very common to forget to define the symbol at compile time. A nice trick to make it more type-safe is to use an enumeration (a boolean value may also work) to ensure a valid value is supplied at compile time.
Example
enum class SystemEnum { MAC, LINUX, WINDOWS };

#ifndef MY_SYSTEM
#error "You must define MY_SYSTEM (e.g. -DMY_SYSTEM=MAC) to compile this"
#endif

// If MY_SYSTEM is defined but is not one of MAC, LINUX or WINDOWS,
// this line fails to compile, which is exactly the safety we want:
constexpr SystemEnum mySystem = SystemEnum::MY_SYSTEM;

void func() {
    if constexpr (mySystem == SystemEnum::MAC)
        doMacStuff();
    else if constexpr (mySystem == SystemEnum::LINUX)
        doLinuxStuff();
    else if constexpr (mySystem == SystemEnum::WINDOWS)
        doWindowsStuff();
}
I am creating a library (.lib) in C++ with Visual Studio 2008. I would like to set a variable to change the behaviour of the library depending on it. Simplifying a lot, something like this:
#ifdef OPTION1
i = 1;
#else
i = 0;
#endif
But the macro (in this case OPTION1) should not be defined in the library itself but in the code that links against the library, so that just by changing the definition I could obtain different behaviours from the program while always linking to the same library.
Is this possible, and how? Or is there a more elegant way to achieve what I want?
To pull this off, the code which depends on the macro must be compiled as part of the code which links to the library, not as part of the library itself. The best you could do is something like this:
In your public .h file:
namespace LibraryPrivate {
    void functionForOptionSet();
    void functionForOptionUnset();
}

#ifdef OPTION1
inline void dependentBehaviour() {
    LibraryPrivate::functionForOptionSet();
}
#else
inline void dependentBehaviour() {
    LibraryPrivate::functionForOptionUnset();
}
#endif
In your library's .cpp file:
namespace LibraryPrivate {
    void functionForOptionSet()
    { i = 1; }
    void functionForOptionUnset()
    { i = 0; }
}
That is, you have to implement both options in the library, but you can (partially) limit the interface based on the macro. This is similar to what the WinAPI does with char vs. wchar_t functions: it provides both SomeFunctionA(char*) and SomeFunctionW(wchar_t*) and then a macro SomeFunction which expands to one of them.
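The pattern above can be sketched self-contained in one file. Here both entry points always exist in the "library", and the macro only selects which inline wrapper the client compiles (names and return values are illustrative stand-ins for the real behaviour):

```cpp
namespace LibraryPrivate {
    // both variants are always compiled into the library
    inline int functionForOptionSet()   { return 1; }
    inline int functionForOptionUnset() { return 0; }
}

#define OPTION1  // in real use, defined by the client code, not the library

#ifdef OPTION1
inline int dependentBehaviour() { return LibraryPrivate::functionForOptionSet(); }
#else
inline int dependentBehaviour() { return LibraryPrivate::functionForOptionUnset(); }
#endif
```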
The simple answer is no. Things like #ifdef are entirely processed by the compiler (in fact, by a preprocessor phase of the compiler, before it even parses the code); a .lib file has already been compiled.

One solution would be to supply the library in source form and let the client compile it as part of his project. This has an additional advantage: you automatically support all versions of the compiler, with all possible combinations of compiler options. And the disadvantage that your library will be used with compiler versions and options that you've never tested, and possibly cannot even test.

Otherwise, you'll need to use a variable, along with ifs and ?:, rather than #ifdef. And you'll have to arrange some means of setting the variable.

Finally, if there's only one such variable, you might consider furnishing two different versions of the library: one with it set, and one without. The client then decides which one he wants to use. In many ways this is the simplest solution, but it definitely doesn't scale: with a hundred such variables, if they're independent, you'd need 2^100 different variants, and that won't fit on any disk.
My problem is that I would like to organize my code so I can have a debug and a release version of the same methods, and multiple definitions of the same methods for different target platforms.
Basically the core of the problem is the same for both: I need the same signature but with different definitions associated.
What is the best way to organize my code on the filesystem, and for compilation and production, so I can keep this clean and separated?
Thanks.
// #define DEBUG   // commented out: we're making a non-debug version
#ifdef DEBUG
// function definition for debug
#else
// function definition for release
#endif
The same can be done for different operating systems. There's of course the problem of recompiling all of it, which can be a pain in the ass in C++.
I suggest you work at the source level and not in the header files (just to be sure to keep the same interfaces), something like:
// Foo.h
class Foo {
    void methodA();
    void methodB();
};

// Foo.cpp
// common method
void Foo::methodA() { }

#ifdef _DEBUG_
void Foo::methodB() { }
#elif defined(_PLATFORM_BAR_)
void Foo::methodB() { }
#else
void Foo::methodB() { }
#endif
If, instead, you want to keep everything separated, you will have to work at a higher level; the preprocessor is not enough to conditionally include one .cpp file instead of another. You will have to work with the makefile or whatever build system you use.
Another choice could be having source files that simply compile to nothing when not on the specific platform, e.g.:
// Foo.h
class Foo {
    void methodA();
    void methodB();
};

// FooCommon.cpp
void Foo::methodA() { }

// FooDebug.cpp
#ifdef _DEBUG_
void Foo::methodB() { }
#endif

// FooRelease.cpp
#ifndef _DEBUG_
void Foo::methodB() { }
#endif
If your compiler allows, you can try keeping the source files for each version in a separate subfolder (e.g. #include "x86dbg/test.h") and then using global macro definitions to control the flow:
#define MODE_DEBUG
#ifdef MODE_DEBUG
#include "x86dbg/test.h"
#else
#include "x86rel/test.h"
#endif
You can also use a similar structure for member function definitions, so that you can have two different definitions in the same file. Many compilers also use their own defines for global macros as well, so instead of #define MODE_DEBUG above, you might be able to use something like #ifdef _CPP_RELEASE or maybe even define one through a compiler flag.
I hate macros. I'm trying to avoid using them as much as I can, but I occasionally need them to enable / disable features in my code. Typically:
#ifdef THREAD_SAFE
typedef boost::mutex Mutex;
typedef boost::mutex::scoped_lock ScopedLock;
#else
typedef struct M { } Mutex;
typedef struct S { S(M m) { } } ScopedLock;
#endif
This way I can leave my actual code unchanged. I'm trusting the compiler to remove the placebo code when the macro is undefined.
I'm aware that template specialization could be a solution, but that would involve a lot of rewriting / code duplicating.
No need to be a C++ expert to guess there's something wrong with the way I'm cheating on the compiler. I'm looking for a better solution.
What you are using aren't really macros, but normal preprocessor capabilities. Also, you're not relying on the compiler, but on the preprocessor.
The compiler will only ever see one of the two versions, the other gets eliminated before the compilation step. Nothing wrong with using the preprocessor to do (conditional) inclusion/exclusion of code. It isn't any kind of "cheating", that's totally what the preprocessor is there for.
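To see that the compiler really does get only one version: the inactive branch merely has to survive tokenization, since it is dropped before compilation proper. A small sketch (mode is an illustrative name):

```cpp
#define THREAD_SAFE  // remove this line to select the other branch

#ifdef THREAD_SAFE
int mode() { return 1; }
#else
// The skipped branch only needs to consist of valid tokens;
// it is never compiled, so it doesn't have to be valid C++:
int mode() { these tokens are never seen by the compiler }
#endif
```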
Macros are the only good way to get information from the build system into the program. The other alternative is writing your own code-generation scripts, or tools like SWIG.
The problem I see here is the unnecessary use of typedef. I think this is better because it limits the introduction of new symbols (single-letter ones!), and keeps code looking more canonical.
#ifdef THREAD_SAFE
using boost::mutex;
#else
struct mutex {
    struct scoped_lock {
        scoped_lock(mutex const& m) { }
    };
};
#endif
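With either branch in effect, client code stays identical; only the types behind the names change. A small sketch using just the placebo branch (counter and increment are illustrative names):

```cpp
// placebo types, mirroring the #else branch above
struct mutex {
    struct scoped_lock {
        explicit scoped_lock(mutex const&) { }
    };
};

int counter = 0;
mutex counter_mutex;

int increment() {
    mutex::scoped_lock lock(counter_mutex); // a no-op when THREAD_SAFE is off
    return ++counter;
}
```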
While I wouldn't recommend it for this simple case, you can separate out the stuff that changes and implement it in a separate translation unit, then let your build system select the right file. This would be more appropriate when there are more sweeping changes than just making a variable go away, like pulling out Windows library calls for the Unix equivalent.
I am trying to write something in C++ with an architecture like:
App --> Core (.so) <-- Plugins (.so's)
for linux, mac and windows. The Core is implicitly linked to App and Plugins are explicitly linked with dlopen/LoadLibrary to App. The problem I have:
Static variables in Core are duplicated at run-time: Plugins and App have different copies of them.
At least on Mac, when a Plugin returns a pointer to App, dynamic-casting that pointer in App always results in NULL.
Can anyone give me some explanations and instructions for different platforms please? I know this may seem lazy to ask them all here but I really cannot find a systematic answer to this question.
What I did in the entry_point.cpp for a plugin:
#include "raw_space.hpp"
#include <gamustard/gamustard.hpp>
using namespace Gamustard;
using namespace std;
namespace
{
    struct GAMUSTARD_PUBLIC_API RawSpacePlugin : public Plugin
    {
        RawSpacePlugin(void) : identifier_("com.gamustard.engine.space.RawSpacePlugin")
        {
        }

        virtual string const& getIdentifier(void) const
        {
            return identifier_;
        }

        virtual SmartPtr<Object> createObject(std::string const& name) const
        {
            if(name == "RawSpace")
            {
                Object* obj = NEW_EX RawSpaceImp::RawSpace;
                Space* space = dynamic_cast<Space*>(obj);
                Log::instance().log(Log::LOG_DEBUG, "createObject: %x -> %x.", obj, space);
                return SmartPtr<Object>(obj);
            }
            return SmartPtr<Object>();
        }

    private:
        string identifier_;
    };

    SmartPtr<Plugin> __plugin__;
}
extern "C"
{
    int GAMUSTARD_PUBLIC_API gamustardDLLStart(void) throw()
    {
        Log::instance().log(Log::LOG_DEBUG, "gamustardDLLStart");
        __plugin__.reset(NEW_EX RawSpacePlugin);
        PluginManager::instance().install(weaken(__plugin__));
        return 0;
    }

    int GAMUSTARD_PUBLIC_API gamustardDLLStop(void) throw()
    {
        PluginManager::instance().uninstall(weaken(__plugin__));
        __plugin__.reset();
        Log::instance().log(Log::LOG_DEBUG, "gamustardDLLStop");
        return 0;
    }
}
Some Background
Shared libraries in C++ are quite difficult because the standard says nothing about them. This means that every platform has a different way of doing them. If we restrict ourselves to Windows and some *nix variant (anything ELF), the differences are subtle. The first difference is Shared Object Visibility. It is highly recommended that you read that article so you get a good overview of what visibility attributes are and what they do for you, which will help save you from linker errors.
Anyway, you'll end up with something that looks like this (for compiling with many systems):
#if defined(_MSC_VER)
#  define DLL_EXPORT __declspec(dllexport)
#  define DLL_IMPORT __declspec(dllimport)
#  define DLL_LOCAL
#elif defined(__GNUC__)
#  define DLL_EXPORT __attribute__((visibility("default")))
#  define DLL_IMPORT
#  if __GNUC__ >= 4
#    define DLL_LOCAL __attribute__((visibility("hidden")))
#  else
#    define DLL_LOCAL
#  endif
#else
#  error "Don't know how to export shared object libraries"
#endif
Next, you'll want to make some shared header (standard.h?) and put a nice little #ifdef thing in it:
#ifdef MY_LIBRARY_COMPILE
# define MY_LIBRARY_PUBLIC DLL_EXPORT
#else
# define MY_LIBRARY_PUBLIC DLL_IMPORT
#endif
This lets you mark classes, functions and whatever like this:
class MY_LIBRARY_PUBLIC MyClass
{
    // ...
};

MY_LIBRARY_PUBLIC int32_t MyFunction();
This tells the compiler (and, through it, the linker) which symbols each module exports and which it imports from elsewhere.
Now: To the actual point!
If you're sharing constants across libraries, then you actually should not care if they are duplicated, since your constants should be small and duplication allows for much optimization (which is good). However, since you appear to be working with non-constants, the situation is a little different. There are a billion patterns to make a cross-library singleton in C++, but I naturally like my way the best.
In some header file, let's assume you want to share an integer, so you would have in myfuncts.h:
#ifndef MY_FUNCTS_H__
#define MY_FUNCTS_H__

// include the standard header, which has the MY_LIBRARY_PUBLIC definition
#include "standard.h"

// Notice that it returns a reference
MY_LIBRARY_PUBLIC int& GetSingleInt();

#endif // MY_FUNCTS_H__
Then, in the myfuncts.cpp file, you would have:
#include "myfuncts.h"

int& GetSingleInt()
{
    // keep the actual value as a static local to this function
    static int s_value(0);
    // but return a reference so that everybody can use it
    return s_value;
}
Dealing with templates
C++ has super-powerful templates, which is great. However, pushing templates across library boundaries can be really painful. When a compiler sees a template, it is an instruction to "fill in whatever is needed to make this work," which is perfectly fine if you only have one final target. However, it can become an issue when you're working with multiple dynamic shared objects, since they could theoretically all be compiled with different versions of different compilers, each of which thinks its own fill-in-the-blanks method is correct (and who are we to argue; it's not defined in the standard). This means that templates can be a huge pain, but you do have some options.
Don't allow different compilers.
Pick one compiler (per operating system) and stick to it. Only support that compiler and require that all libraries be compiled with that same compiler. This is actually a really neat solution (that totally works).
Don't use templates in exported functions/classes
Only use template functions and classes when you're working internally. This does save a lot of hassle, but overall is quite restrictive. Personally, I like using templates.
Force exporting of templates and hope for the best
This works surprisingly well (especially when paired with not allowing different compilers).
Add this to standard.h:
#ifdef MY_LIBRARY_COMPILE
#define MY_LIBRARY_EXTERN
#else
#define MY_LIBRARY_EXTERN extern
#endif
And in some consuming class definition (before you declare the class itself):
// force exporting of templates
MY_LIBRARY_EXTERN template class MY_LIBRARY_PUBLIC std::allocator<int>;
MY_LIBRARY_EXTERN template class MY_LIBRARY_PUBLIC std::vector<int, std::allocator<int> >;

class MY_LIBRARY_PUBLIC MyObject
{
private:
    std::vector<int> m_vector;
};
This is almost completely perfect...the compiler won't yell at you and life will be good, unless your compiler starts changing the way it fills in templates and you recompile one of the libraries and not the other (and even then, it might still work...sometimes).
Keep in mind that if you're using things like partial template specialization (or type traits or any of the more advanced template metaprogramming machinery), the producer and all of its consumers must see the same template specializations. That is, if you have a specialized implementation of vector&lt;T&gt; for int and the producer sees it but a consumer does not, the consumer will happily create the wrong kind of vector&lt;T&gt;, which will cause all sorts of really screwed-up bugs. So be very careful.