I hate macros. I'm trying to avoid using them as much as I can, but I occasionally need them to enable / disable features in my code. Typically:
#ifdef THREAD_SAFE
typedef boost::mutex Mutex;
typedef boost::mutex::scoped_lock ScopedLock;
#else
typedef struct M { } Mutex;
typedef struct S { S(M m) { } } ScopedLock;
#endif
This way I can leave my actual code unchanged. I'm trusting the compiler to remove the placebo code when the macro is undefined.
I'm aware that template specialization could be a solution, but that would involve a lot of rewriting / code duplication.
You don't need to be a C++ expert to guess there's something wrong with the way I'm cheating the compiler. I'm looking for a better solution.
What you are using aren't macros, but normal preprocessor capabilities. Also, you're not relying on the compiler, but the preprocessor.
The compiler will only ever see one of the two versions, the other gets eliminated before the compilation step. Nothing wrong with using the preprocessor to do (conditional) inclusion/exclusion of code. It isn't any kind of "cheating", that's totally what the preprocessor is there for.
Macros are the only good way to get information from the build system into the program (typically by passing something like -DTHREAD_SAFE on the compiler command line). The other alternative is writing your own code-generation scripts, or using tools like SWIG.
The problem I see here is the unnecessary use of typedef. I think this is better because it limits the introduction of new symbols (single-letter ones!), and keeps code looking more canonical.
#ifdef THREAD_SAFE
using boost::mutex;
#else
struct mutex {
struct scoped_lock {
scoped_lock(mutex const &m) { }
};
};
#endif
While I wouldn't recommend it for this simple case, you can separate out the stuff that changes and implement it in a separate translation unit, then let your build system select the right file. This would be more appropriate when there are more sweeping changes than just making a variable go away, like pulling out Windows library calls for the Unix equivalent.
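For illustration, a minimal sketch of that separation (the file names, and using std::mutex rather than boost, are my own choices); the build system compiles exactly one of the two .cpp files:
// locking.hpp - shared declarations
#pragma once
void acquire_lock();
void release_lock();

// locking_threaded.cpp - compiled into thread-safe builds
#include "locking.hpp"
#include <mutex>
namespace { std::mutex g_mutex; }
void acquire_lock() { g_mutex.lock(); }
void release_lock() { g_mutex.unlock(); }

// locking_single.cpp - compiled into single-threaded builds
#include "locking.hpp"
void acquire_lock() { } // no-op: nothing to lock
void release_lock() { }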
Related
I don't know what this concept is called, so title may sound weird. Imagine the following scenario:
main.cpp:
#define SOME_KEYWORD
void foo(); // declared here, defined in other.cpp
int main()
{
foo();
return 0;
}
other.cpp:
void foo()
{
//Do some stuff
#ifdef SOME_KEYWORD
//Do some additional stuff
#endif
}
I've tried it out and it doesn't work if the #define is present in the other file. Is there a way around this? (I'd rather not modify function parameters just to achieve this, since it will only be present at development time and the functions can be many layers of abstraction away.)
And I guess this is a C way to do things; I don't know whether it would be considered good practice in C++. If not, what are the alternative ways?
In C++, from C++17 on, constexpr if would be a good way to go about doing this, e.g. in some header file:
// header.hpp
#pragma once
constexpr bool choice = true; // or false, if you don't want to compile some additional stuff
and in an implementation file:
#include "header.hpp"
void foo()
{
//Do some stuff
if constexpr(choice)
{
//Do some additional stuff
}
}
Note that this is not a drop-in replacement for #define, but it works in many cases.
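One reason it is not a drop-in replacement: outside a template, both branches of an if constexpr must still parse and compile, whereas code excluded by #ifdef is never seen by the compiler at all. Inside a template, however, the discarded branch is not instantiated, which is often exactly what you need. A minimal sketch (my own example, not from the question):
#include <type_traits>

template <typename T>
void process(T value)
{
    //Do some stuff
    if constexpr (std::is_integral_v<T>)
    {
        // Only instantiated when T is integral, so 'value & 1'
        // never has to compile for, say, T = double.
        bool odd = (value & 1) != 0;
        (void)odd; // silence unused-variable warnings
    }
}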
A preprocessor symbol defined in one translation unit is not visible in a different translation unit. As suggested in a comment, you can define it in a header and then include that where needed (it's not a keyword, so I chose a better name):
// defines.h
#define SOME_SYMBOL
// other.cpp
#include "defines.h
Conditional compilation via preprocessor macros has some uses, e.g. conditionally compiling platform-specific code or excluding debug code from release builds. For anything else I would not use it, because when overused it can create a big mess and is error-prone (e.g. it is too easy to forget to include defines.h). Consider making foo a template:
template <bool SOME_FLAG>
void foo()
{
//Do some stuff
if constexpr (SOME_FLAG) {
//Do some additional stuff
}
}
And if you still want to make use of the preprocessor, this allows you to concentrate usage of macros to a single location:
// main.cpp
#define SOME_SYMBOL
#ifdef SOME_SYMBOL
constexpr bool flag = true;
#else
constexpr bool flag = false;
#endif
int main()
{
foo<flag>();
return 0;
}
I don't know what this concept is called
Generally, pre-processing. More specifically, the pre-processor is used here to conditionally compile the program.
This is a common technique used to create portable interfaces over platform-specific ones. Sometimes it is used to enable or suppress debugging features.
I've tried it out and it doesn't work if #define is present in other file.
Macros only affect the translation unit in which they are defined (or into which they are included).
Is there a way around this?
Define the macro in all of the files where you use it. Typically, this is achieved by including the definition from a header, or by specifying a compiler option.
And I guess this is a C way to do things; I don't know whether it would be considered good practice in C++. If not, what are the alternative ways?
There is no complete alternative in C++. In some cases macros can be replaced by, or combined with, templates and if constexpr.
There is quite a bit of discussion on feature flags/toggles and why you would use them, but most of the discussion on implementing them centers around (web or client) apps. If your product/artifact is a C or C++ library and your public headers are affected by the flags, how would you implement them?
The "naive" way of doing it doesn't really work:
/// Does something
/**
* Does something really cool
#ifdef FEATURE_FOO
* #param fooParam describe param for foo
#endif
*/
void doSomethingCool(
#ifdef FEATURE_FOO
int fooParam = 42
#endif
);
You wouldn't want to ship something like this.
The library you ship was built for a certain feature flag combination; clients shouldn't need to #define the same feature flags to make things work
The ifdefs in your public header are ugly
And most importantly, if you disable your flag, you don't want clients to see anything about the disabled features - maybe it is something upcoming and you don't want to show your stuff until it is ready
Running the preprocessor on the file to get the header for distribution doesn't really work because that would not only act on feature flags but also do everything else the preprocessor does.
What would be a technical solution to this that doesn't have these flaws?
This kind of goo ends up in a codebase due to versioning. Broad topic with very few happy answers. But you certainly want to avoid making it more difficult than it needs to be. Focus on the kind of compatibility you want to provide.
The syntax proposed in the snippet is only required when you need binary compatibility. It keeps the library compatible with a doSomethingCool() call in the client code (passing no argument) without having to recompile that client code. In other words, the client programmer does nothing at all beyond copying the updated .dll or .so file, does not need any updated headers, and it is entirely your burden to get the feature flags right. Binary compatibility is pretty difficult to pull off reliably; beyond the flag wrangling, it is easy to make a mistake.
But what you are actually talking about is source compatibility: you do provide the user with an updated header, and he rebuilds his code to use the library update. In that case you don't need the feature flag; the C++ compiler by itself ensures that an argument is passed, and if the caller omits it, it will be 42. No flag required at all, either on your end or the user's end.
Another way to do it is by providing an overload. In other words, both a doSomethingCool() and a doSomethingCool(int) function. The client programmer keeps using the original overload until he's ready to move ahead. You also favor an overload when the function body has to change too much. If these functions are not virtual, then it even provides link compatibility, which could be useful in some select cases. No feature flags required.
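A minimal sketch of the overload approach (the forwarding body is my assumption about one reasonable implementation):
// updated public header
void doSomethingCool();             // original signature, kept for old clients
void doSomethingCool(int fooParam); // new entry point exposing the feature

// library implementation: the old overload forwards, so there is
// only one real implementation to maintain
void doSomethingCool() { doSomethingCool(42); }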
I'd say it's a relatively broad question, but I'll throw in my two cents.
First, you really want to separate the public headers from the implementation (sources and internal headers, if any). The public header that gets installed (e.g., at /usr/include) should contain the function declarations and, preferably, a constant to inform the client whether the library has a certain feature compiled in or not, like so:
#define FEATURE_FOO 1
void doSomethingCool();
Such a header is generally generated. Autotools are the de facto standard tools for this purpose on GNU/Linux. Otherwise you can write your own scripts to do it.
For completeness, in the .c file you should have the corresponding definition (the default argument stays in the header declaration only):
void doSomethingCool(
#ifdef FEATURE_FOO
int fooParam
#endif
)
{
/* ... */
}
It's also up to your distribution tools to keep the installed headers and library binaries in sync.
Use forward declarations
Hide the implementation by using a pointer (the Pimpl idiom)
This code is quoted from the previous link:
// Foo.hpp
class Foo {
public:
//...
private:
struct Impl;
Impl* _impl;
};
// Foo.cpp
struct Foo::Impl {
// stuff
};
Binary compatibility is not a forte of C++; it probably isn't worth considering.
For C, you might construct something like an interface class, so that your first touch with the library is something like:
struct kv {
char *tag;
int val;
};
int Bind(struct kv *compat, void **funcs, void **stamp);
and your access to the library is now:
#define MyStrcpy(src, dest) (funcs->mystrcpy((stamp)(src),(dest)))
The contract is that Bind provides/constructs an appropriate (funcs, stamp) pair for the attribute set you provided, or fails if it cannot. Note that Bind is the only bit that has to know about multiple layouts of *funcs, *stamp; so it can transparently provide a robust interface for this reduced version of the problem.
If you wanted to get really fancy, you might be able to achieve the same by re-writing the PLT that the dlopen/dlsym prepare for you, but:
You are grossly expanding your attack surface.
You are adding a lot of complexity for very little gain.
You are adding platform / architecture specific code where none is warranted.
A few downsides remain. You have to invoke Bind before any part of your program/library attempts to use it. Attempts to solve that lead straight to hell (finding C++ static initialization order problems), which must make N. Wirth smile. If you get too clever with your Bind(), you will wish you hadn't. You might want to be careful about re-entrancy, since a given client might Bind multiple times for different attribute sets (users are such a pain).
That's how I would manage this in pure C.
First of all, the features: I would pack them into a single unsigned integer, 32/64 bits long, to keep them as compact as possible.
As a second step, I would write a private header, used only when compiling the library, where I would define a macro that creates both the API function wrapper and the internal function:
#define CoolFeature1 0x00000001 //code the value as 0 to disable the feature
#define CoolFeature2 0x00000010
#define CoolFeature3 0x00000100
.... // Other features
#define Cool (CoolFeature1 | CoolFeature2 | CoolFeature3 | ... | CoolFeature_n)
#define ImplementApi(ret, fname, ...) ret fname(__VA_ARGS__) \
{ return Internal_##fname(Cool, __VA_ARGS__);} \
ret Internal_##fname(unsigned long Cool, __VA_ARGS__)
#include "user_header.h" //Include the standard user header where there is no reference to Cool features
Now we have a wrapper with a standard prototype that will be available in the user definition header, and an internal version which keeps an additional flag group to specify optional features.
When coding using the macro you can write:
ImplementApi(int, MyCoolFunction, int param1, float param2, ...)
{
// Your code goes here
if (Cool & CoolFeature2)
{
// Do something cool
}
else
{
// Flat life ...
}
...
return 0;
}
In the case above you'll get 2 definitions:
int Internal_MyCoolFunction(unsigned long Cool, int param1, float param2, ...);
int MyCoolFunction(int param1, float param2, ...);
You can eventually add to the macro, for the API function, the attributes for exporting if you're distributing a dynamic library.
You can even use the same definition header if the ImplementApi macro is defined on the compiler command line; in that case the following simple definition in the header will do:
#define ImplementApi(ret, fname, ...) ret fname(__VA_ARGS__);
The latter will generate only the exported API prototypes.
This suggestion, of course, is not exhaustive. There are a lot more adjustments you can make to render the definitions more elegant and automatic, e.g. including a sub-header with the function list to create only the API function prototypes for the user, and both the internal and API prototypes for developers (see the sketch below).
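A hypothetical sketch of that sub-header idea, sometimes called an X-macro (all file and macro names here are illustrative, not from the answer above):
// api_list.h - every public function is listed exactly once
API(int, MyCoolFunction, int param1, float param2)

// user_header.h - expands the list into plain prototypes for users
#define API(ret, fname, ...) ret fname(__VA_ARGS__);
#include "api_list.h"
#undef API

// internal_header.h - expands the same list again, adding the
// Internal_ prototypes for developers
#define API(ret, fname, ...) \
    ret fname(__VA_ARGS__); \
    ret Internal_##fname(unsigned long Cool, __VA_ARGS__);
#include "api_list.h"
#undef API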
Why are you using defines for feature flags? Feature flags are supposed to enable you to turn features on and off at runtime, not at compile time.
In the code you would then branch out to the implementation as early as possible, using interfaces and concrete classes that are chosen based on the feature flag, as sketched below.
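A minimal sketch of that idea, assuming a boolean runtime flag (the type and function names are mine, not a prescribed API):
#include <memory>

struct ICoolFeature {                  // interface the rest of the code sees
    virtual ~ICoolFeature() = default;
    virtual void doSomething() = 0;
};

struct RegularImpl : ICoolFeature {
    void doSomething() override { /* regular implementation */ }
};

struct NewImpl : ICoolFeature {
    void doSomething() override { /* cool new way */ }
};

// Chosen once, as early as possible, from the runtime feature flag.
std::unique_ptr<ICoolFeature> makeCoolFeature(bool featureEnabled)
{
    if (featureEnabled)
        return std::make_unique<NewImpl>();
    return std::make_unique<RegularImpl>();
}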
If users of the header files aren't supposed to be able to access the feature flags, then create header files that you don't distribute and that are only included in the implementation .c/.cpp files. You can then flip the flags in the private headers when you compile the library that they link to.
If you are keeping features internal until you are ready to release, you can move the feature flag into the public header, or just remove the feature flag entirely and switch to using the new implementation.
Sloppy example if you want this at compile time:
public_class.h
class Thing
{
public:
void DoSomething();
};
private_class_feature1.h
#define USE_FEATURE_1
class NewFeatureImpl
{
public:
static void CoolNewWay1();
};
public_class.cpp
#include "public_class.h"
#include "private_class_feature1.h"
void Thing::DoSomething()
{
#ifdef USE_FEATURE_1
NewFeatureImpl::CoolNewWay1();
#else
// Regular impl
#endif
}
I am currently working on a general-purpose C++ library.
Well, I like using real-word function names, and my project actually has a consistent function naming system: functions (or methods) start with a verb, unless they return bool (in which case they start with is_).
The problem is that this can be somewhat cumbersome for some programmers. Consider this function:
#include "something.h"
int calculate_geometric_mean(int* values)
{
//insert code here
}
I think such names seem formal, so that is how I name my functions.
However, I designed a simple macro system that lets the user switch to short function names:
#define SHORT_NAMES
#include "something.h"
#ifdef SHORT_NAMES
int calc_geometric_mean(int* values)
#else
int calculate_geometric_mean(int* values)
#endif
{
//some code
}
Is this wiser than using aliases (since each alias of a function would be allocated in memory), or is this solution pure evil?
FWIW, I don't think this dual-naming system adds a lot of value. It does, however, have the potential to cause a lot of confusion (to put it mildly).
In any case, if you are convinced it is a great idea, I would implement it through inline functions rather than macros:
// something.h
int calculate_geometric_mean(int* values); // defined in the .cpp file
inline int calc_geo_mean(int* values) {
return calculate_geometric_mean(values);
}
What symbols will be exported to the object file/library? What if you attempt to use the other version? Will you distribute two binaries with their own symbols?
So - no, bad idea.
Usually, the purpose behind a naming system is to aid the readability and understanding of the code.
Now, you effectively have 2 systems, each of which has a rationale. You're already forcing the reader/maintainer to keep two approaches to naming in mind, which dilutes the end goal of readability. Never mind the ugly #defines that end up polluting your code base.
I'd say choose one system and stick to it, because consistency is the key. I wouldn't say this solution is pure evil per se - I would say that this is not a solution to begin with.
If I want to define a value only if it is not defined, I do something like this:
#ifndef THING
#define THING OTHER_THING
#endif
What if THING is a typedef'd identifier, and not defined? I would like to do something like this:
#ifntypedef thing_type
typedef uint32_t thing_type;
#endif
The issue arose because I wanted to check to see if an external library has already defined the boolean type, but I'd be open to hearing a more general solution.
There is no such thing in the language, nor is it needed. Within a single project you should not have the same typedef alias referring to different types, ever, as that is a violation of the ODR. And if you are going to create the same alias for the same type, then just do it: the language allows you to repeat the same typedef as many times as you wish, and it will usually catch that particular ODR violation (within the same translation unit):
typedef int myint;
typedef int myint; // OK: myint is still an alias to int
//typedef double myint; // Error: myint already defined as alias to int
If what you are intending to do is implementing a piece of functionality for different types by using a typedef to determine which to use, then you should be looking at templates rather than typedefs.
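For instance, a small sketch of what that might look like (the function is made up for illustration):
#include <cstdint>

// The functionality is written once, parameterized on the type,
// instead of being switched through a typedef.
template <typename ThingType>
ThingType twice(ThingType value)
{
    return value + value;
}

// Callers pick the type explicitly, or let deduction do it:
//   auto a = twice<std::uint32_t>(21u);
//   auto b = twice(1.5);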
C++ does not provide any mechanism for code to test presence of typedef, the best you can have is something like this:
#ifndef THING_TYPE_DEFINED
#define THING_TYPE_DEFINED
typedef uint32_t thing_type;
#endif
EDIT:
As @David correctly points out in his comment, this answers the how? part but importantly misses the why?. It can be done in the way shown above, if you want to do it at all, but you probably don't need to do it anyway; @David's answer and comment explain the details, and I think that answers the question correctly.
No, there is no such facility in C++ at the preprocessing stage. The most you can do is:
#ifndef thing_type
#define thing_type uint32_t
#endif
This is not good coding practice, though, and I don't suggest it.
Preprocessor directives (like #define) are crude text replacement tools, which know nothing about the programming language, so they can't act on any language-level definitions.
There are two approaches to making sure a type is only defined once:
Structure the code so that each definition has its place, and there's no need for multiple definitions
#define a preprocessor macro alongside the type, and use #ifndef to check for the macro definition before defining the type.
The first option will generally lead to more maintainable code. The second could cause subtle bugs, if you accidentally end up with different definitions of the type within one program.
As others have already said, there is no such thing, but if you try to create the same alias for a different type, you'll get a compilation error:
typedef int myInt;
typedef int myInt; // ok, same alias
typedef float myInt; // error
However, there is a tool called ctags for finding where a typedef is defined.
The problem is actually a real PITA, because some APIs or SDKs redefine commonly used things. I had an issue where the header files for a map-processing software package (GIS) redefined the TRUE and FALSE macros (generally used by the Windows SDK) to integer literals instead of the true and false keywords (obviously, that can break SOMETHING). And yes, the famous joke "#define true false" is relevant.
#define would never see a typedef or constant declared in C/C++ code, because the preprocessor doesn't analyze code; it only scans for # directives. And it modifies the code prior to handing it to the syntax analyzer. So, in general, it's not possible.
https://msdn.microsoft.com/en-us/library/5xkf423c.aspx?f=255&MSPPError=-2147217396
That one isn't portable so far, though there have been requests to implement it in GCC. I think it also counts as an "extension" in MSVC. It's a compiler statement, not a preprocessor statement, so it will not "see" defined macros; it only detects typedefs outside of a function body. "Full type" there means that it reacts to a full definition, ignoring declarations like "class SomeClass;". Use it at your own risk.
Edit: apparently it is also supported on macOS now, and by the Intel compiler with the -fms-dialect flag (AIX/Linux?).
This might not directly answer the question, but serve as a possible solution to your problem.
Why not try something like this?
#define DEFAULT_TYPE int // just for argument's sake
#ifndef MY_COOL_TYPE
#define MY_COOL_TYPE DEFAULT_TYPE
#endif
typedef MY_COOL_TYPE My_Cool_Datatype_t;
Then if you want to customize the type, you can either define MY_COOL_TYPE somewhere above this (like in a "configure" header that is included at the top of this header) or pass it as a command-line argument when compiling (as far as I know you can do this with GCC and LLVM, maybe others, too), as sketched below.
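To illustrate the command-line variant (the file name is hypothetical; the -D option of GCC/Clang defines a macro before compilation):
// demo.cpp
// build with the default:  g++ demo.cpp
// or override the type:    g++ -DMY_COOL_TYPE=long demo.cpp
#include <cstdio>

#ifndef MY_COOL_TYPE
#define MY_COOL_TYPE int
#endif

typedef MY_COOL_TYPE My_Cool_Datatype_t;

int main()
{
    // prints the size of whichever type was chosen at compile time
    std::printf("sizeof(My_Cool_Datatype_t) = %zu\n",
                sizeof(My_Cool_Datatype_t));
    return 0;
}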
No, there is nothing like what you want. I have had the same problem with libraries that include their own typedefs for things like bool. It becomes a problem when they just don't care about what you use for bool, or whether any other libs might be doing the same thing!!
So here's what I do. I edit the header file for the libs that do such things, find the typedef bool, and add some code like this:
#ifdef USE_LIBNAME_BOOL
typedef unsigned char bool; // This is the lib's bool implementation
#else
#include <stdbool.h>
#endif
Notice that I include <stdbool.h> if I don't want to use the lib's own bool typedef. This means that you need C99 support or later.
As mentioned before this is not included in the C++ standard, but you might be able to use autotools to get the same functionality.
You could use the ac_cxx_bool macro to make sure bool is defined (or different routines for different datatypes).
The solution I ended up using was including stdbool.h. I know this doesn't solve the question of how to check if a typedef is already defined, but it does let me ensure that the boolean type is defined.
This is a good question. C and Unix have a history together, and there are a lot of Unix C typedefs not available on a non-POSIX platform such as Windows (shhh Cygwin people). You'll need to decide how to answer this question whenever you're trying to write C that's portable between these systems (shhhhh Cygwin people).
If cross-platform portability is what you need this for, then knowing the platform-specific preprocessor macro for the compilation target is sometimes helpful. E.g. Windows has the _WIN32 preprocessor macro defined; it's 1 whenever the compilation target is 32-bit ARM, 64-bit ARM, x86, or x64. But its presence also informs us that we're on a Windows machine. This means that e.g. ssize_t won't be available (ssize_t, not size_t). So you might want to do something like:
#ifdef _WIN32
typedef long ssize_t;
#endif
By the way, people in this thread have commented about a similar pattern that is formally called a guard. You see it a lot in header files (i.e. interfaces, or ".h" files) to prevent multiple inclusion. You'll hear them called header guards.
/// #file poop.h
#ifndef POOP_H
#define POOP_H
struct Poop; /* forward declaration so the header stands alone */
void* poop(struct Poop* arg);
#endif
Now I can include the header file in the implementation file poop.c and some other file like main.c, and I know they will always compile successfully and without multiple inclusion, whether they are compiled together or individually, thanks to the header guards.
Salty seadogs write their header guards programmatically or with C++11 function-like macros. If you like books I recommend Jens Gustedt's "Modern C".
It is not transparent, but you can try to compile it once without the typedef (just using the alias) and see whether it compiles or not.
There is no such thing.
It is possible to deactivate this duplicate-typedef compiler error:
"typedef name has already been declared (with same type)"
On another note, for some standardized typedef definitions there is often a preprocessor macro defined, like __bool_true_false_are_defined for bool, which can be used.
I am currently writing various optimizations for some code. Each of these optimizations has a big impact on the code's efficiency (hopefully) but also on the source code. However, I want to keep the possibility of enabling and disabling any of them for benchmarking purposes.
I traditionally use the #ifdef OPTIM_X_ENABLE/#else/#endif method, but the code quickly becomes too hard to maintain.
One can also create SCM branches for each optimization. That's much better for code readability, until you want to enable or disable more than a single optimization.
Is there any other, and hopefully better, way to work with optimizations?
EDIT :
Some optimizations cannot work simultaneously. I may need to disable an old optimization to bench a new one and see which one I should keep.
I would create a branch for an optimization, benchmark it until you know it has a significant improvement, and then simply merge it back to trunk. I wouldn't bother with the #ifdefs once it's back on trunk; why would you need to disable it once you know it's good? You always have the repository history if you want to be able to rollback a particular change.
There are so many ways of choosing which part of your code will execute. Conditional inclusion using the preprocessor is usually the hardest to maintain, in my experience, so try to minimize that if you can. You can separate the functionality (optimized, unoptimized) into different functions and call them conditionally depending on a flag. Or you can create an inheritance hierarchy and use virtual dispatch. Of course it depends on your particular situation. Perhaps if you could describe it in more detail you would get better answers.
However, here's a simple method that might work for you: Create two sets of functions (or classes, whichever paradigm you are using). Separate the functions into different namespaces, one for optimized code and one for readable code. Then simply choose which set to use by conditionally using them. Something like this:
#include <iostream>
#include "optimized.h"
#include "readable.h"
#define USE_OPTIMIZED
#if defined(USE_OPTIMIZED)
using namespace optimized;
#else
using namespace readable;
#endif
int main()
{
f();
}
Then in optimized.h:
namespace optimized
{
void f() { std::cout << "optimized selected" << std::endl; }
}
and in readable.h:
namespace readable
{
void f() { std::cout << "readable selected" << std::endl; }
}
This method does unfortunately need to use the preprocessor, but the usage is minimal. Of course you can improve this by introducing a wrapper header:
wrapper.h:
#include "optimized.h"
#include "readable.h"
#define USE_OPTIMIZED
#if defined(USE_OPTIMIZED)
using namespace optimized;
#else
using namespace readable;
#endif
Now simply include this header and further minimize the potential preprocessor usage. Btw, the usual separation of header/cpp should still be done.
Good luck!
I would work at class level (or file level for C) and embed all the various versions in the same working software (no #ifdef), choosing one implementation or the other at runtime through some configuration file or command-line options.
It should be quite easy as optimizations should not change anything at internal API level.
Another way, if you're using C++, can be to instantiate templates to avoid duplicating high-level code, or to select a branch at run-time (even if run-time selection is often an acceptable option; some switches here and there are usually not such a big issue).
In the end various optimized backend could eventually be turned to libraries.
Unit Tests should be able to work without modifying them with every variant of implementation.
My rationale is that embedding every variant mostly changes software size, and that's very rarely a problem. This approach also has other benefits: you can easily cope with a changing environment. An optimization for some OS or some hardware may not be one on another. In many cases it will even be easy to choose the best version at runtime.
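As a small sketch of what runtime selection can look like (a function pointer chosen from a command-line option; all names are illustrative):
#include <cstring>

// both variants are embedded in the same binary
static int compute_readable(int x)  { return x * 2; }
static int compute_optimized(int x) { return x << 1; }

int main(int argc, char** argv)
{
    // pick the implementation once, at startup
    int (*compute)(int) = compute_readable;
    if (argc > 1 && std::strcmp(argv[1], "--optimized") == 0)
        compute = compute_optimized;
    return compute(21);
}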
You may have two (three/more) versions of the function you optimize, with names like:
function
function_optimized
which have identical arguments and return same results.
Then you may #define a selector in some header like:
#if OPTIM_X_ENABLE
#define OPT(f) f##_optimized
#else
#define OPT(f) f
#endif
Then call functions that have optimized variants as OPT(function)(argument, argument...). This method is not so aesthetic, but it does the job.
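For example, a call site would expand like this (the function and variable names are placeholders):
// with OPTIM_X_ENABLE set:  int r = function_optimized(x, y);
// otherwise:                int r = function(x, y);
int r = OPT(function)(x, y);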
You may go further and use #define to rename all your optimized functions:
#if OPTIM_X_ENABLE
#define foo foo_optimized
#define bar bar_optimized
...
#endif
And leave the caller code as is. The preprocessor does the function substitution for you. I like this one most, because it works transparently while being per-function (and also per-datatype and per-variable) grained, which is enough in most cases for me.
A more exotic method is to make separate .c files for the non-optimized and optimized code and compile only one of them. They may have the same names but different paths, so switching can be done by changing a single option on the command line.
I'm confused. Why don't you just find out where each performance problem is, fix it, and continue? Here's an example.