Avoiding conditional code compilation [closed] - c++

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
Google C++ Style Guide (http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Preprocessor_Macros) says:
"Instead of using a macro to conditionally compile code ... well, don't do that at all"
Why is it so bad to have functions like
void foo()
{
// some code
#ifdef SOME_FUNCTIONALITY
// code
#endif
// more code
}
?

As they say in the doc you linked to:
Macros mean that the code you see is not the same as the code the compiler sees. This can introduce unexpected behavior, especially since macros have global scope.
It's not too bad if you have just one conditional compilation block, but it can quickly get complicated if you start nesting them:
#if PS3
...
#if COOL_FEATURE
...
#endif
...
#elif XBOX
...
#if COOL_FEATURE
...
#endif
...
#elif PC
...
#if COOL_FEATURE
...
#endif
...
#endif

I believe some of the arguments against it go:
#ifdef cuts across C++ expression/statement/function/class syntax. That is to say, like goto, it is too flexible for you to trust yourself to use it.
Suppose the code in // code compiles when SOME_FUNCTIONALITY is not defined. Then just use if with a static const bool and trust your compiler to eliminate dead code.
Suppose the code in // code doesn't compile when SOME_FUNCTIONALITY is not defined. Then you're creating a dog's breakfast of valid code mixed with invalid code, and relevant code with irrelevant code, that could probably be improved by separating the two cases more thoroughly.
The preprocessor was a terrible mistake: Java is way better than C or C++, but if we want to muck around near the metal we're stuck with them. Try to pretend the # character doesn't exist.
Explicit conditionals are a terrible mistake: polymorphism baby!
Google's style guide specifically mentions testing: if you use #ifdef, then you need two separate executables to test both branches of your code. This is a hassle; you should prefer a single executable that can be tested against all supported configurations. The same objection would logically apply to a static const bool, of course. In general, testing is easier when you avoid static dependencies. Prefer to inject them, even if the "dependency" is just on a boolean value.
I'm not wholly sold on any argument individually -- personally I think messy code is still occasionally the best for a particular job under particular circumstances. But the Google C++ style guide is not in the business of telling you to use your best judgement. It's in the business of setting a uniform coding style, and eliminating some language features that the authors don't like or don't trust.

Related

What is the best practice for define in C++? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I was wondering if there are any official recommendations regarding the use of #define in the C++ language; precisely, is it best to #define in your header or your source file?
I am asking this to know if there are any official standards to live by, or whether it is just plain subjective... I don't need the whole set of standards; the source, or a link to the guidelines, will suffice.
LATER EDIT:
What is the explanation for the fact that const and constexpr have become the status quo? I am referring to #define used as a means of avoiding repetitive typing; it is clear in my mind that programmers should use the full potential of the C++ compiler. On the other hand, if it is so feared, why not remove it altogether? I mean, as far as I understand, #define is then used solely for conditional compilation, especially for making the same code work on different compilers.
A secondary, tiny question: is the potential for errors also the main reason why Java doesn't have a true C-style #define?
A short list of #define usage guidelines for C++; points 2, 4, 6 and 7 directly address the question:
Avoid them.
Use them for the common "include guard" pattern in header files.
Otherwise, don't use them unless you can explain why you are using #define and not const, constexpr, an inline function, or a template function, etc., instead.
Use them to allow passing compile-time options from the compiler command line, but only when having the option as a run-time option is not feasible or desirable.
Use them when a library you are using requires them (for example, defining NDEBUG to disable assert()).
In general, put everything in the narrowest possible scope. For some uses of #define macros, this means #define just before a function in a .cpp file, then #undef right after the function.
The exact use case for #define determines whether it should be in the .h or the .cpp file. But note that most use cases actually violate point 3 above, and you should not use #define at all.

List all available function prototypes from within C/C++? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
Is there any way, within a C or C++ program, of getting information on all the functions that could be called? Perhaps a compiler macro of some sort? I know there are programs that can take in source files or .o files and extract the symbols or the prototypes, and I suppose I could just run those programs from within a C program, but I'm curious about maybe returning function pointers to functions, or an array of the function prototypes available in the current scope, or something related.
I'm not phrasing this very well, but the question is part of my curiosity about what I can learn about a program from within the program (and not necessarily by just reading its own code). I rather doubt there is anything like what I'm asking for, but I'm curious.
Edit: It appears that what I was wondering about but didn't know how to describe very well was whether reflection was possible in C or C++. Thank you for your answers.
The language doesn't support reflection yet. However, since you are looking for some sources of information, take a look at the Boost.Reflect library to help you add reflection to your code, to a certain extent. Also, look at ClangTooling and libclang, libraries that let you do automated code analysis.
C and C++ have no way to gather the names of all the functions available.
However, you can use macros to test standards (ANSI, ISO, POSIX, etc) compliance, which can then be used to guarantee the presence of each standard's functions.
For example, if _POSIX_C_SOURCE is defined, you can (usually) assume that functions specified by POSIX will be available:
#ifdef _POSIX_C_SOURCE
/* you can safely call POSIX functions */
#else
/* the system probably isn't POSIX compliant */
#endif
Edit: If you're on a Linux system, you can find some common compatibility macros under feature_test_macros(7). OS X and the BSDs should have roughly the same macros, even though they may not have that manual page. Windows uses the WINVER and _WIN32_WINNT macros to control function visibility across releases.
No.
C++'s meta-programming powers are weak and don't include any form of reflection. You can, however, use tools like gcc-xml to parse a C++ program and export its contents in an easier-to-analyze format.
Writing your own parser for C++ to extract function declarations is going to be a nightmare, unless you only need to do it on your specific project and you're ready to cut some corners.

Is Using #ifdef a correct strategy [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
So, I have a requirement to do a particular task (say, multithreading) that is totally OS dependent (involving Win32 or Linux API calls).
Now I read somewhere that using #ifdef we can actually write OS-dependent code:
#ifdef __linux__
/*some linux codes*/
#endif
Now my question is....
Is it the right way to write my code (i.e., using #ifdef) and then release a single .cpp file for both Windows and Linux? Or should I break my code into two parts and release two different builds, one for Linux and one for Windows?
Edit:
Seems like the question is way too broad and generates a lot of opinions.
Please differentiate between the two approaches I mentioned on the basis of performance, build size, etc. (or any other factor I may have missed).
class A {
    // Some variables and methods
};
class B : public A {
    void DoSomething() {
        // Contains Linux code and some Windows code
    }
};
If I don't use #ifdef, how am I going to write the DoSomething() method so that it calls the right piece of code at the right time?
Solution #1: Use existing, debugged, documented library (e.g. boost) to hide the platform differences. It uses lots of #ifdef's internally, but you don't have to worry about that.
Solution #2: Write your own platform independent library (see solution #1 for a better approach) and hide all the #ifdef's inside.
Solution #3: Do it in macros (ugh; see ACE, although most of ACE is in a library, too).
Solution #4: Use #ifdefs throughout your code whenever a platform difference arises.
Solution #4 is suitable for very-small, throw-away code programs.
Solution #3 is suitable if you are programming in the 1990s.
Solution #2 is suitable only if you can't use a real library for non-technical reasons.
Conclusion: Use Solution #1.
It's possible to use #ifdef for this, but it quickly leads to
unmaintainable code. A better solution is to abstract the
functionality into a class, and provide two different
implementations (two different source files) for that class.
(Even back in the days of C, we'd define a set of functions in
a header, and provide different source files for their
implementation.)
I generally give the source files the same name, but put
them in platform-dependent directories, e.g.: thread.hh, with
the sources in Posix/thread.cc and Windows/thread.cc.
Alternatively, you can put the implementations in files with
different names: posix_thread.cc and windows_thread.cc.
If you need dependencies in a header, the directory approach
also works. Or you can use something like:
#include systemDependentHeader(thread.hh)
where systemDependentHeader is a macro which does some token
pasting (with a token defined on the command line) and
stringizing.
Of course, in the case of threading, C++11 offers a standard
solution, which is what you should use; if you can't,
boost::thread isn't too far from the standard (I think). More
generally, if you can find the work already done, you should
take advantage of it. (But verify the quality of the library
first. In the past, we had to back out of using ACE because it
was so buggy.)
If you need to develop your code for different platforms, you have to consider the following:
You can use #ifdef or #if defined(x), but you should confine it to a single header file, ideally called "platform.h". Inside your source code you use only the macros defined in the platform.h file, so your business logic is the same for both platforms.
Let me provide you an example:
PLATFORM.H
// A platform depended print function inside platform.h file
#if defined( _EMBEDDED_OS_ )
#include <embedded_os.h>
#define print_msg(message) put_uart_bytes(message)
#elif defined( _WINDOWS_ )
#include <windows.h>
#define print_msg(message) printf(message)
#else
#error undefined_platform
#endif
SOURCE.CPP
int main()
{
print_msg("Ciao Mondo!");
}
As you can see, your source is the same for each platform and your business logic is not dirtied by several #ifdef directives.

Using Define for throwing exceptions [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Currently, I am refactoring an old project written by one of our ex-workers. I have encountered exception throwing wrapped in a define.
Something like this:
#define THROWIT(msg) throw common::error(msg)
Example from the code:
#define THROW_FD_ERROR( fd, op )\
throw common::error_fd( errno,\
__ERR_FD_API_3\
.arg( fd )\
.arg( op )\
.arg( strerror(errno) ),\
__FILE__,\
__LINE__ )
I can see some benefits to it, but they are not so huge that I would do it this way.
Anyway, is it a common technique?
In your opinion, what advantages can be gained from it?
Do you use defines for throwing exceptions?
If yes, what is the purpose of that?
UPD: added the define from the code.
UPD2: Thanks all for your answers. I've decided to take out all the macros. For debugging purposes I will extend the base error class with backtrace info; in my opinion that is better than just using the standard defines for file and line.
Typically, the preprocessor is only used if you need a preprocessor-specific feature, like __FILE__ or __LINE__. This macro does nothing a function cannot, and is therefore quite atypical and bad.
The macro as presented doesn't have a whole lot of benefit.
However, a macro can have a benefit if you want to include file name, function name and line numbers in the exception message:
#define POSSIBLY_USEFUL_THROWIT(msg) throw common::error(__FILE__, __FUNCTION__, __LINE__, msg)
Oh, and THROWIT is a horrible name for this.
Alf highlights a good point:
You can use a macro to collect the information, and it's the only way
to do it. However, tying that to the throwing of an exception is a
conflation of responsibilities. This means you would need separate
such macros for logging, UI message, and so on. A single macro would
be far preferable.
I think what he means is having something like this:
// Construct new temporary object source_line_info
#define CURRENT_SRC_LINE_INFO() common::source_line_info(__FILE__, __FUNCTION__, __LINE__)
and then using it like this:
throw common::error(CURRENT_SRC_LINE_INFO(), msg);
to have only that part macro'fied that really needs it.
Personally, I would then prefer to have an additional macro like
#define THROW_COMMON_ERROR(...) throw common::error(CURRENT_SRC_LINE_INFO(), __VA_ARGS__)
Because if I'm going to have a "macro call" on multiple lines, I might just as well make it as short and as centralized as possible, even if that means introducing another macro.
No. Don't. Bad. It makes the code harder to understand and isn't all that shorter to type.
If you really must, use a function. But I don't think you really must, in this case.
Advantages are that there are fewer characters to type and that you could change the throw declaration (like throwing another type) at a single point (the macro). However, you could also use an ordinary function instead of a macro. Using macros where a function can do exactly the same is considered bad practice because of the problems macros have (such as no scoping, and possible pollution of other files that include the macro-defining header). Macros are at most a tool to be used when no other language feature can do the same thing and you desperately need it.
Thus, I would not consider this good practice.
No, it's better to use inline functions in C++. Macros are substituted without compiler checks. Preprocessor macros should be used only where there is no other way to do the task.

Dos and Don'ts of Conditional Compile [closed]

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form.
Closed 12 years ago.
When is doing conditional compilation a good idea and when is it a horribly bad idea?
By conditional compilation I mean using #ifdefs to compile certain bits of code only under certain conditions. The #defines themselves may be in either a common header file or introduced via the -D compiler flag.
The good ideas:
header guards (you can't do much better for portability)
conditional implementation (juggling with platform differences)
debug specific checks (asserts, etc...)
per suggestion: extern "C" { and } so that the same headers may be used by the C++ implementation and by the C clients of the API
The bad idea:
changing the API between compile flags, since it forces the client to changes its uses with the same compile flags... urk!
Don't put #ifdef in your code.
It makes it really hard to read and understand. Please make the code as easy to read as possible for the maintainer (he knows where you live and owns an axe).
Hide the conditional code in separate functions and use the #ifdef to define which functions are being used.
DON'T use the #else part to make a definition. If you do that, you are saying one platform is unique and all the others are the same. That is unlikely; what is more likely is that you know what happens on a couple of platforms, so you should use the #else section to hold a #error, so that when the code is ported to a new platform a developer has to explicitly supply the definition for his platform.
x.h
#if defined(WINDOWS)
#define MyPlatformSleepSeconds(x) Sleep((x) * 1000)
#elif defined (UNIX)
#define MyPlatformSleepSeconds(x) sleep(x)
#else
#error "Please define appropriate sleep for your platform"
#endif
Don't be tempted to expand a macro into multiple lines of code. That leads to madness.
p.h
#if defined(SOLARIS_3_1_1)
#define DO_SOME_TASK(x,y) doPartA(x); \
doPartB(y); \
couple(x,y)
#elif defined(WINDOWS)
#define DO_SOME_TASK(x,y) doAndCouple(x,y)
#else
#error "Please define appropriate DO_SOME_TASK for your platform"
#endif
If you develop the code on Windows and only test on Solaris 3.1.1 later, you may find unexpected bugs when people do things like:
int loop;
for (loop = 0; loop < 10; ++loop)
    DO_SOME_TASK(loop, loop); // Windows: works fine.
                              // Solaris: only doPartA() is inside the loop;
                              // the other statements run once, after the loop finishes.
Basically, you should try to keep the amount of conditionally compiled code to a minimum, because you should be trying to test all of it, and having lots of conditions makes that more difficult. It also reduces the readability of the code; conditionally compiling whole files is clearer, e.g., by putting platform-specific code in a separate file for each platform and having them all present the same API to the rest of the program. Also try to avoid using it in function headers; again, that's because it is particularly confusing there.
But that's not to say that you should never use conditional compilation. Just try to keep it short and minimal. (Where I can, I use conditional compilation to control the definitions of other macros which are then just used in the rest of the code; that seems to be clearer to me at least.)
It's a bad idea whenever you don't know what you're doing. It can be a good idea when you're effectively solving an issue that way :).
The way you describe conditional compiling, include guards are part of it. Using them is not just a good idea; it's a way to avoid compilation errors.
For me, conditional compiling is also a way to target multiple compilers and operating systems. I'm involved in a lib that's supposed to be compilable on Windows XP and newer, 32 or 64 bit, using MinGW and Visual C++, on Linux 32 and 64 bit using gcc/g++, and on MacOS using I-don't-know-what (I'm not maintaining that, but I assume it's a gcc port). Without the preprocessor conditions, it would be pretty much impossible to create a single source file that's compilable anywhere.
Another pragmatic use of conditional compilation is to "comment out" sections of code which themselves contain standard C comments (i.e. /* */). These comments do not nest, for example:
/* comment out block of code
.... code ....
/* This is a standard
* comment.
*/ ... oops! The compiler will try to compile the code after this closing comment.
.... code ....
end of block of code*/
(As you can see in the syntax highlighting, StackOverflow does not nest comments.)
Instead you can use #ifdef to get the right effect, for example:
#ifdef _NOT_DEFINED_
.... code ....
/* This is a standard
* comment.
*/
.... code ....
#endif
In the past, if you wanted to produce truly portable code, you'd have to resort to some form of conditional compilation. With the proliferation of portable libraries (such as APR, Boost, etc.) this reason carries little weight IMHO. If you are using conditional compilation simply to compile out blocks of code that are not needed for particular builds, you should really revisit your design; I should imagine this would become a nightmare to maintain.
Having said all that, if you do need to use conditional compilation, I would hide as much of it as I can away from the main body of the code and limit it to very specific, very well understood cases.
Good/justifiable uses are based on cost/benefit analysis. Obviously, people here are very conscious of the risks:
in linking objects that saw different versions of classes, functions etc.
in making code hard to understand, test and reason about
But, there are uses which often fall into the net-benefit category:
header guards
code customisations for distinct software "ecosystems", such as Linux versus Windows, Visual C++ versus GCC, CPU-specific optimisations, sometimes word size and endianness factors (though with C++ you can often determine these at compile time via template hackery, which may prove messier still) - abstracts away lower-level differences to provide a consistent API across those environments
interacting with existing code that uses preprocessor defines to select versions of APIs, standards, behaviours, thread safety, protocols etc. (sad but true)
compilation that may use optional features when available (think of GNU configure scripts and all the tests they perform on OS interfaces etc)
request that extra code be generated in a translation unit, such as adding main() for a standalone app versus without for a library
controlling code inclusion for distinct logical build modes such as debug and release
It is always a bad idea. What it does is effectively create multiple versions of your source code, all of which need to be tested, which is a pain, to say the least. Unfortunately, like many bad things it is sometimes unavoidable. I use it in very small amounts when writing code that needs to be ported between Windows and Linux, but if I found myself doing it a lot, I would consider alternatives, such as having two separate development sub-trees.