Using Define for throwing exceptions [closed] - c++

Currently, I am refactoring an old project that was written by one of our former employees. I have come across exception throwing wrapped in a #define.
Something like this:
#define THROWIT(msg) throw common::error(msg)
Example from the code:
#define THROW_FD_ERROR( fd, op )                     \
    throw common::error_fd( errno,                   \
                            __ERR_FD_API_3           \
                                .arg( fd )           \
                                .arg( op )           \
                                .arg( strerror(errno) ), \
                            __FILE__,                \
                            __LINE__ )
I can see some benefits to it, but they are not compelling enough for me to keep it this way.
Anyway, is this a common technique?
In your opinion, what advantages can be gained from it?
Are you using defines for throwing exceptions?
If so, what is the purpose of that?
UPD: added the define from the code.
UPD2: Thanks to everyone for your answers. I've decided to remove all the macros. For debugging purposes I will extend the base error class with backtrace info; in my opinion that is better than just using the standard defines for file and line.

Typically, the preprocessor is only used if you need a preprocessor-specific feature, like __FILE__ or __LINE__. This macro does nothing a function cannot and therefore it is quite atypical and bad.

The macro as presented doesn't have a whole lot of benefit.
However, a macro can have a benefit if you want to include file name, function name and line numbers in the exception message:
#define POSSIBLY_USEFUL_THROWIT(msg) throw common::error(__FILE__, __FUNCTION__, __LINE__, msg)
Oh, and THROWIT is a horrible name for this.
Alf highlights a good point:
You can use a macro to collect the information, and it's the only way to do it. However, tying that to the throwing of an exception is a conflation of responsibilities. This means you would need separate such macros for logging, UI messages, and so on. A single macro would be far preferable.
I think what he means is having something like this:
// Construct new temporary object source_line_info
#define CURRENT_SRC_LINE_INFO() common::source_line_info(__FILE__, __FUNCTION__, __LINE__)
and then using it like this:
throw common::error(CURRENT_SRC_LINE_INFO(), msg);
so that only the part that really needs it is macro-ified.
Personally, I would then prefer to have an additional macro like
#define THROW_COMMON_ERROR(...) throw common::error(CURRENT_SRC_LINE_INFO(), __VA_ARGS__)
Because if I'm going to have a "macro call" on multiple lines, I might just as well make it as short and as centralized as possible, even if that means introducing another macro.
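To make that concrete, here is a minimal sketch putting the pieces together; the shape of source_line_info and of the common::error constructor taking it are assumptions based on the snippets above:

#include <string>

namespace common {
    // Assumed layout; the real project's type may differ.
    struct source_line_info {
        const char* file;
        const char* function;
        int line;
        source_line_info(const char* f, const char* fn, int l)
            : file(f), function(fn), line(l) {}
    };

    // Stand-in for the project's error type from the question.
    struct error {
        source_line_info where;
        std::string message;
        error(const source_line_info& w, const std::string& m)
            : where(w), message(m) {}
    };
}

// The only macro-dependent part: capture the call site.
#define CURRENT_SRC_LINE_INFO() \
    common::source_line_info(__FILE__, __FUNCTION__, __LINE__)

// A thin convenience wrapper over it.
#define THROW_COMMON_ERROR(...) \
    throw common::error(CURRENT_SRC_LINE_INFO(), __VA_ARGS__)

// Usage: THROW_COMMON_ERROR("could not open config file");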

No. Don't. Bad. It makes the code harder to understand and isn't all that shorter to type.
If you really must, use a function. But I don't think you really must, in this case.

Advantages are that there are fewer characters to type and that you could change the throw declaration (like throwing another type) at a single point (the macro). However, you could also use an ordinary function instead of a macro. Using macros where a function can do exactly the same thing is considered bad practice because of the problems macros have (like no scoping, and possible pollution of other files that include the macro-defining header). Macros are at most a tool to be used when no other language feature can do the same thing and you desperately need it.
Thus, I would not consider this good practice.
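For comparison, a minimal sketch of the function alternative; the function name is hypothetical, and common::error is stubbed in here only to keep the sketch self-contained:

#include <stdexcept>
#include <string>

namespace common {
    // Stand-in for the project's error type from the question.
    struct error : std::runtime_error {
        explicit error(const std::string& msg) : std::runtime_error(msg) {}
    };
}

// Does what THROWIT(msg) did, but with scoping and type checking.
[[noreturn]] inline void throw_common_error(const std::string& msg)
{
    throw common::error(msg);
}

// Usage: throw_common_error("something went wrong");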

No, it's better to use inline functions in C++. Macros are substituted without any compiler checks. Preprocessor macros should be used only where there is no other way to do the task.

Related

C++: What are the best practices regarding runtime warning errors and messages? [closed]

I just started playing around with SDL2 again this morning. The fact that SDL is used in the example here is not relevant; for the purposes of this question we could consider any framework/toolkit/library which can produce runtime errors.
I started writing the following code:
if(SDL_Init(SDL_INIT_VIDEO) < 0)
{
std::cerr << SDL_GetError() << std::endl;
}
I don't think this is very good.
This style of outputting messages to cerr is not easy to change in the future if we need to do so. If we want to print error messages to a file instead, then we would have to find all occurrences of this code snippet and redirect cerr to a file. This would clearly not be practical or easy.
It isn't very flexible. Perhaps we might want to have errors, warnings, and general info messages. They might need to go to different places depending on how the user has configured our program. Perhaps some users want to see all warnings and info and others want to see nothing but the most "critical" error messages.
From working on other projects I have seen things used which look like macros.
For example, the following sticks in my mind:
DT_THROW_IF(condition, "message");
My assumption is that this is implemented as a macro. If condition evaluates to true, then message appears in the output (cout/cerr).
The use of a macro like this might go some way to addressing the above issues, however I have heard that it is not good practice to make extensive uses of macros.
What are some best practices for error message and warning message handling in C or C++ programs? What good solutions are available, and when is their use appropriate?
Using macros for this is good practice. It enables implicit use of __FILE__, __LINE__, __func__, etc. For example, BOOST_THROW_EXCEPTION bundles up all this metadata about the exception for you.
Personally I always create a project-specific set of macros similar to your DT_THROW_IF. This allows capturing the full metadata at the throw site without clutter. And if the macros are constructed properly, there is no downside, and they rarely need to be modified or maintained.
As an example of what I'm talking about, here's one open source project of mine which has such macros: https://github.com/jzwinck/pccl/blob/master/throw.hpp - they are production tested (with GCC) and you're welcome to use them.
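For illustration, a minimal sketch of what a DT_THROW_IF-style macro might look like; the name comes from the question, while std::runtime_error and the message format are assumptions, and the linked throw.hpp differs in detail:

#include <sstream>
#include <stdexcept>

#define DT_THROW_IF(cond, msg)                                        \
    do {                                                              \
        if (cond) {                                                   \
            std::ostringstream oss_;                                  \
            oss_ << __FILE__ << ':' << __LINE__ << " (" << __func__   \
                 << "): " << msg;                                     \
            throw std::runtime_error(oss_.str());                     \
        }                                                             \
    } while (0)

// Usage, with the SDL example from the question:
//   DT_THROW_IF(SDL_Init(SDL_INIT_VIDEO) < 0, SDL_GetError());

The do { ... } while (0) wrapper makes the macro behave like a single statement, so it composes safely with if/else.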

What is the best practice for define in C++? [closed]

I was wondering if there are any official recommendations regarding the use of #define in the C++ language; specifically, is it best to define in your header or your source file?
I am asking this to know if there are any official standards to live by, or whether it is just plain subjective. I don't need the whole set of standards; the source or a link to the guidelines will suffice.
LATER EDIT:
Why have const and constexpr become the status quo? I am referring to #define used as a means of avoiding repetitive typing; it is clear in my mind that programmers should use the full potential of the C++ compiler. On the other hand, if #define is so feared, why not remove it altogether? As far as I understand, it is now used mainly for conditional compilation, especially for making the same code work with different compilers.
A secondary, tiny question: is the potential for errors also the main reason why Java doesn't have a true C-style #define?
A short list of #define usage guidelines for C++; points 2, 4, 6 and 7 directly address the question:
1. Avoid them.
2. Use them for the common "include guard" pattern in header files.
3. Otherwise, don't use them, unless you can explain why you are using #define and not const, constexpr, or an inline or template function, etc., instead.
4. Use them to allow giving compile-time options from the compiler command line, but only when having the option as a run-time option is not feasible or desirable.
5. Use them when whatever library you are using requires them (example: defining NDEBUG to disable assert()).
6. In general, put everything in the narrowest possible scope. For some uses of #define macros, this means #define just before a function in a .cpp file, then #undef right after the function (see the sketch after this list).
7. The exact use case for #define determines whether it should be in the .h or the .cpp file. But note that most use cases actually violate point 3 above, and you should not use #define at all.
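A sketch of points 2 and 6 in practice; the file and macro names here are hypothetical:

// my_header.h: the classic include-guard pattern (point 2)
#ifndef MY_HEADER_H
#define MY_HEADER_H
// ... declarations ...
#endif // MY_HEADER_H

// my_source.cpp: a macro confined to the narrowest possible scope (point 6)
#define SQUARE(x) ((x) * (x))   // defined just before the function using it
int sum_of_squares(int a, int b) { return SQUARE(a) + SQUARE(b); }
#undef SQUARE                   // undefined right after, limiting pollution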

Is Using #ifdef a correct strategy [closed]

So, I have a requirement to do a particular task (say, multithreading) that is totally OS-dependent (i.e. it needs Win32 or Linux API calls).
Now, I read somewhere that using #ifdef we can actually write OS-dependent code:
#ifdef __linux__
/*some linux codes*/
#endif
Now my question is....
Is it the right way to write my code this way (i.e. using #ifdef) and then release a single .cpp file for both Windows and Linux? Or should I break my code into two parts and release two different builds, one for Linux and one for Windows?
Edit:
It seems the question is way too broad and generates a lot of opinions. Please differentiate between the two approaches that I mentioned on the basis of performance, build size, etc. (or any other factor that I may have missed).
class A {
    // some variables and methods
};

class B : public A {
    void DoSomething() {
        // Contains Linux code and some Windows code
    }
};

Suppose I don't use #ifdef: how am I going to write the DoSomething() method so that it calls the right piece of code at the right time?
Solution #1: Use existing, debugged, documented library (e.g. boost) to hide the platform differences. It uses lots of #ifdef's internally, but you don't have to worry about that.
Solution #2: Write your own platform independent library (see solution #1 for a better approach) and hide all the #ifdef's inside.
Solution #3: Do it with macros (ugh; but see ACE, although most of ACE is in a library, too).
Solution #4: Use #ifdefs throughout your code whenever a platform difference arises.
Solution #4 is suitable only for very small, throw-away programs.
Solution #3 is suitable if you are programming in the 1990's.
Solution #2 is suitable only if you can't use a real library for non-technical reasons.
Conclusion: Use Solution #1.
It's possible to use #ifdef for this, but it quickly leads to unmaintainable code. A better solution is to abstract the functionality into a class, and provide two different implementations (two different source files) for that class. (Even back in the days of C, we'd define a set of functions in a header, and provide different source files for their implementation.)
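A minimal sketch of that abstraction; the class name and members are hypothetical:

// thread.hh: one portable interface, no #ifdef in sight
class Thread {
public:
    Thread();
    ~Thread();
    void start();
    void join();
private:
    struct Impl;   // defined differently per platform
    Impl* impl_;
};

// Posix/thread.cc implements Impl with pthread_create()/pthread_join();
// Windows/thread.cc implements it with CreateThread()/WaitForSingleObject().
// The build system compiles exactly one of the two source files.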
I generally give the source files the same name, but put them in platform-dependent directories, e.g. thread.hh, with the sources in Posix/thread.cc and Windows/thread.cc. Alternatively, you can put the implementations in files with different names: posix_thread.cc and windows_thread.cc.
If you need dependencies in a header, the directory approach also works. Or you can use something like:
#include systemDependentHeader(thread.hh)
where systemDependentHeader is a macro which does some token pasting (with a token defined on the command line) and stringizing.
Of course, in the case of threading, C++11 offers a standard solution, which is what you should use; if you can't, boost::thread isn't too far from the standard (I think). More generally, if you can find the work already done, you should take advantage of it. (But verify the quality of the library first. In the past, we had to back out of using ACE because it was so buggy.)
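For reference, the standard C++11 solution mentioned above needs no #ifdef at all:

#include <iostream>
#include <thread>

int main()
{
    // The same code builds and runs on Windows and Linux.
    std::thread t([] { std::cout << "hello from a worker thread\n"; });
    t.join();
    return 0;
}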
If you need to develop your code for different platforms, you have to consider the following:
You can use #ifdef or #if defined(x), but you should confine it to a single header file, ideally called "platform.h". Inside your source code you then use the macros defined in the platform.h file, so your business logic stays the same for both platforms.
Let me give you an example:
PLATFORM.H
// A platform depended print function inside platform.h file
#if defined( _EMBEDDED_OS_ )
#include <embedded_os.h>
#define print_msg(message) put_uart_bytes(message)
#elif defined( _WINDOWS_ )
#include <windows.h>
#define print_msg(message) printf(message)
#else
#error undefined_platform
#endif
SOURCE.CPP
int main()
{
print_msg("Ciao Mondo!");
}
As you can see, the source is the same for each platform, and your business logic is not dirtied by scattered #ifdef directives.

Avoiding conditional code compilation [closed]

Google C++ Style Guide (http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Preprocessor_Macros) says:
"Instead of using a macro to conditionally compile code ... well, don't do that at all"
Why is it so bad to have functions like
void foo()
{
// some code
#ifdef SOME_FUNCTIONALITY
// code
#endif
// more code
}
?
As they say in the doc you linked to:
Macros mean that the code you see is not the same as the code the compiler sees. This can introduce unexpected behavior, especially since macros have global scope.
It's not too bad if you have just one conditional compilation, but it can quickly get complicated if you start having nested ones like:
#if PS3
...
#if COOL_FEATURE
...
#endif
...
#elif XBOX
...
#if COOL_FEATURE
...
#endif
...
#elif PC
...
#if COOL_FEATURE
...
#endif
...
#endif
I believe some of the arguments against it go:
#ifdef cuts across C++ expression/statement/function/class syntax. That is to say, like goto, it is too flexible for you to trust yourself to use it.
Suppose the code in // code compiles when SOME_FUNCTIONALITY is not defined. Then just use if with a static const bool and trust your compiler to eliminate dead code (see the sketch after this list).
Suppose the code in // code doesn't compile when SOME_FUNCTIONALITY is not defined. Then you're creating a dog's breakfast of valid code mixed with invalid code, and relevant code with irrelevant code, that could probably be improved by separating the two cases more thoroughly.
The preprocessor was a terrible mistake: Java is way better than C or C++, but if we want to muck around near the metal we're stuck with them. Try to pretend the # character doesn't exist.
Explicit conditionals are a terrible mistake: polymorphism baby!
Google's style guide specifically mentions testing: if you use #ifdef, then you need two separate executables to test both branches of your code. This is a hassle; you should prefer a single executable that can be tested against all supported configurations. The same objection would logically apply to a static const bool, of course. In general, testing is easier when you avoid static dependencies. Prefer to inject them, even if the "dependency" is just a boolean value.
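A sketch of the static const bool alternative from the second point above; the flag mirrors the question's hypothetical SOME_FUNCTIONALITY:

static const bool kSomeFunctionality = true;   // set per configuration

void foo()
{
    // some code
    if (kSomeFunctionality) {
        // code: compiled and type-checked in every configuration,
        // but eliminated as dead code when the flag is false
    }
    // more code
}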
I'm not wholly sold on any argument individually -- personally I think messy code is still occasionally the best for a particular job under particular circumstances. But the Google C++ style guide is not in the business of telling you to use your best judgement. It's in the business of setting a uniform coding style, and eliminating some language features that the authors don't like or don't trust.

What would make C++ preprocessor macros an accepted development tool?

Apparently, preprocessor macros in C++ are justifiably feared and shunned by the C++ community.
However, there are several cases where C++ macros are beneficial.
Seeing as preprocessor macros can be extremely useful and can reduce repetitive code in a very straightforward manner, this leaves me with the question: what exactly is it that makes preprocessor macros "evil"? Or, as the question title says, which feature (or removal of a feature) would be needed to make them useful as a "good" development tool (instead of a fill-in that everyone's ashamed of using)? (After all, the Lisp languages seem to embrace macros.)
Please Note: This is not about #include or #pragma or #ifdef. This is about #define MY_MACRO(...) ...
Note: I do not intend for this question to be subjective. Should you think it is, feel free to vote to move it to programmers.SE.
Macros are widely considered evil because the preprocessor is a stupid text-replacement tool that has little to no knowledge of C/C++.
Four very good reasons why macros are evil can be found in the C++ FAQ Lite.
Where possible, templates and inline functions are a better choice. The only reason I can think of why C++ still needs the preprocessor is for #includes and comment removal.
A widely disputed advantage is using it to reduce code repetition; but as you can see from the Boost.Preprocessor library, much effort has to be put in to coax the preprocessor into simple logic such as loops, leading to ugly syntax. In my opinion, it is a better idea to write scripts in a real high-level programming language for code generation instead of using the preprocessor.
Most preprocessor abuse comes from misunderstanding. To quote Paul Mensonides (the author of the Boost.Preprocessor library):
Virtually all issues related to the misuse of the preprocessor stems from attempting to make object-like macros look like constant variables and function-like macro invocations look like underlying-language function calls. At best, the correlation between function-like macro invocations and function calls should be incidental. It should never be considered to be a goal. That is a fundamentally broken mentality.
As the preprocessor is well integrated into C++, it's easier to blur the line, and most people don't see a difference. For example, ask someone to write a macro to add two numbers together; most people will write something like this:
#define ADD(x, y) ((x) + (y))
This is completely wrong. Run it through the preprocessor:
#define ADD(x, y) ((x) + (y))
ADD(1, 2) // outputs ((1) + (2))
But the answer should be 3, since adding 1 to 2 is 3. Yet instead, a macro was written to generate a C++ expression. Not only that, it could be mistaken for a C++ function, but it's not. This is where it leads to abuse: it's just generating a C++ expression, and a function is a much better way to go.
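The function the answer is pointing at; a template keeps it as generic as the macro was (a sketch):

template <typename T>
inline T add(T x, T y) { return x + y; }

// add(1, 2) is an ordinary call that yields 3; it respects scope and
// type-checks its arguments, which the macro never did.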
Furthermore, macros don't work like functions at all. The preprocessor works through a process of scanning and expanding macros, which is very different from using a call stack to call functions.
There are times when it can be acceptable for macros to generate C++ code, as long as it isn't blurring the lines. Just as you could use Python as a preprocessor to generate code, the preprocessor can do the same, and it has the advantage that it doesn't need an extra build step.
Also, the preprocessor can be used with DSLs, like here and here, but these DSLs have a predefined grammar in the preprocessor, which they use to generate C++ code. It's not really blurring the lines, since it uses a different grammar.
Macros have one notable feature: they are very easy to abuse and rather hard to debug. You can write just about anything with macros; then macros are expanded into one-liners, and when nothing works you have a very hard time debugging the resulting code.
That feature alone makes one think ten times about whether and how to use macros for a task.
And don't forget that macros are expanded before actual compilation, so they automatically ignore namespaces, scopes, type safety and a ton of other things.
The most important thing about macros is that they have no scope and do not care about context. They are almost a dumb text-replacement tool. So when you #define max(...), then everywhere you have a max it gets replaced; if someone adds overly generic macro names in their headers, they tend to influence code that they were not intended to.
Another thing is that, when used without care, they lead to code that is quite hard to read, since no one can easily see what the macro could evaluate to, especially when multiple macros are nested.
A good guideline is to choose unique names and, when generating boilerplate code, to #undef them as soon as possible so as not to pollute the namespace.
Additionally, they do not offer type safety or overloading.
Sometimes macros are arguably a good tool for generating boilerplate code; for example, with the help of Boost.PP you could create a macro that helps you create enums like:
ENUM(xenum,(a,b,(c,7)));
which could expand to
enum xenum { a, b, c=7 };
std::string to_string( xenum x ) { .... }
Things like assert() that need to react to NDEBUG are also often easier to implement as macros.
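A sketch of such an assert-style macro; the name is hypothetical, and the stringizing operator #cond is something only the preprocessor can do:

#include <cstdio>
#include <cstdlib>

#ifdef NDEBUG
#define MY_ASSERT(cond) ((void)0)
#else
#define MY_ASSERT(cond)                                             \
    do {                                                            \
        if (!(cond)) {                                              \
            std::fprintf(stderr, "%s:%d: assertion failed: %s\n",   \
                         __FILE__, __LINE__, #cond);                \
            std::abort();                                           \
        }                                                           \
    } while (0)
#endif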
There are many cases where a C developer uses macros and a C++ developer uses templates.
There are obviously corner cases where they're useful, but most of the time it's bad habits from the C world applied to C++ by people who believe there is such a language as "C/C++".
So it's easier to say "they're evil" than to risk a developer misusing them.
Macros do not offer type safety.
Parameters may be evaluated twice, e.g. with #define MAX(a,b) ((a)>(b) ? (a) : (b)) applied as MAX(i++, y--) (demonstrated below).
They cause problems with debugging, as their names do not occur in the symbol table.
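The double-evaluation problem from the second point, demonstrated in a self-contained sketch:

#include <cstdio>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main()
{
    int i = 1, y = 5;
    int m = MAX(i++, y--);   // expands so that y-- runs twice: once in the
                             // comparison, once to produce the result
    std::printf("m=%d i=%d y=%d\n", m, i, y);   // prints m=4 i=2 y=3
    return 0;
}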
Forcing programmers to use proper naming for macros, and better tools to track macro replacement, would fix most of my problems. I can't really say I've had major issues so far; it's something you burn yourself with and learn to take special care with later on. But macros badly need better integration with IDEs and debuggers.