beginner MACRO vs. const conceptual idea [duplicate] - c++

This question already has answers here:
Inline functions vs Preprocessor macros
(14 answers)
C/C++ macros instead of const [duplicate]
(4 answers)
Closed 7 years ago.
What is the most significant difference of these two max operations? Which one do
you prefer to use in your system, and why?
#define max(a,b) (a)<(b)?(b):(a)
int max (const int a, const int b) { return (a) < (b) ? (b) : (a); }
I am trying to see if I am on the right track for the above question. My first thought is that the #define indicates a preprocessor directive, or macro, named "max". Therefore, anywhere "max" is encountered in the program, it will be replaced with the defined value of this macro. Macros also don't require any memory allocation, so we can expect faster execution times.
The const keyword, on the other hand, does require memory allocation and cannot be changed by the executing program. The overall consensus in my notes and some online sources seems to be that macros are more efficient/faster since they do not require the memory allocation. Therefore, it would seem I should prefer macros for their speed advantage.
Basically my question is, am I nailing the main differences between these two? Or am I missing something major?
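One more thing I want to verify: since the macro is pure text substitution, I believe its arguments can be evaluated more than once, and the missing outer parentheses can interact with surrounding operators. A minimal sketch of what I mean (the variable names are just for illustration, and I renamed the function to max_fn so both versions can appear in one snippet):
#define max(a,b) (a)<(b)?(b):(a)
int max_fn(const int a, const int b) { return (a) < (b) ? (b) : (a); }

int i = 1, j = 5;
int x = max(i++, j);      // expands to (i++)<(j)?(j):(i++) -- i may be incremented twice
int y = 2 * max(i, j);    // expands to 2 * (i)<(j)?(j):(i) -- precedence gives a surprising result
int z = 2 * max_fn(i, j); // the function evaluates each argument exactly once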

Related

#define was changed to constexpr auto [duplicate]

This question already has answers here:
Is it bad practice to specify an array size using a variable instead of `#define` in C++? (C error: variably modified at file scope) [closed]
(3 answers)
Closed 3 years ago.
I defined some code in C++, e.g.:
#define array_width 3;
Visual Studio will suggest changing to:
constexpr auto array_width = 3;
What's the reason to change, and what is the benefit?
Thanks.
The main reason for these suggestions is that the preprocessor does nothing but simple textual replacement (no type checking or similar things a compiler performs). There are many potential pitfalls when using the preprocessor; when you can avoid it, do so. `constexpr` is one of the building blocks that allow for fewer macros these days.
To back this with an authority: From S. Meyers, Effective C++, Item 2 ("Prefer consts, enums, and inlines to #defines"):
Things to Remember
For simple constants, prefer const objects or enums to #defines
[...]
From S. Meyers, Effective Modern C++, Item 15 ("Use constexpr whenever possible"):
Things to Remember
constexpr objects are const and are initialized with values known during
compilation.
[...]
constexpr objects and functions may be used in a wider range of contexts than non-constexpr objects and functions.
Macros work by substituting text. With the macro in place, the following example code will be ill-formed:
struct foo
{
    int array_width{}; // the preprocessor turns this into: int 3;{};
};
So in modern C++ one should prefer to avoid macros when alternatives are available. Also, it is a good idea to use a UNIQUE_PREFIX_UPPER_CASE naming convention for macros to avoid possible clashes with normal code.
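For completeness, here is a rough sketch of what the suggested replacement looks like in use, based on the array_width example from the question; the std::array usage is just an illustration, not part of the original code:
#include <array>

constexpr auto array_width = 3;     // typed, scoped constant known at compile time
std::array<int, array_width> row{}; // usable wherever a constant expression is required

struct foo
{
    int array_width{}; // legal: the constant does not leak into other scopes the way a macro would
};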

C++ correct references [duplicate]

This question already has answers here:
Whats the difference between these two C++ syntax for passing by reference? [duplicate]
(1 answer)
Placement of the asterisk in pointer declarations
(14 answers)
Closed 4 years ago.
Can someone explain what the difference is, if any, between these two lines:
int& i;
int &i;
I know these are both references and both seem to work fine. Is there a reason to use one over the other? Is there any rule saying what is the right way?
Thanks in advance.
There is absolutely no difference in meaning between these two, it is a purely stylistic matter. Just pick one and try to be consistent within a project.
I believe the examples in the language standard put the & symbol on the left - that's as good a reason as any to prefer one way over the other, I suppose.
That said, as you've written it, neither line is valid code, because you can't have an uninitialised reference. You would need something like:
int a = 10;
int& b = a;

Divide if different than 0 [duplicate]

This question already has answers here:
Inline function v. Macro in C -- What's the Overhead (Memory/Speed)?
(9 answers)
Closed 6 years ago.
I often have this kind of statement in my code:
(b != 0) ? a / b : a
In terms of speed and best C++ practice, is it better to write a function
float divifnotzero(a,b) { ... return ... }
or a preprocessor macro like this?
#define divifnotzero(a,b) ((b!=0)?a/b:a)
The preprocessor is just going to replace the code wherever you use the macro, so there is no difference there. As for a function, your compiler will almost certainly inline it, so again there should be no difference in speed. So, given that, I would go with a function for readability.
Preprocessor macros inline any code you put in them. A function call allows you to reduce the size of the executable at the expense of some slight overhead. Based solely on that, in this instance you would want to use a preprocessor macro.
In practice, functions can be inlined just like preprocessor macros with the inline keyword, which gets rid of the overhead. The compiler can generally decide whether or not to inline a function itself; one like this would almost certainly have that happen. Go for the function call, unless you're specifically compiling the program without optimizations while still valuing speed.
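To make the function alternative concrete, here is a minimal sketch; the question leaves the parameter types open, so the float types and example values below are assumptions to adjust as needed:
inline float divifnotzero(float a, float b)
{
    // evaluates a and b exactly once, unlike the macro version
    return (b != 0.0f) ? a / b : a;
}

float total = 10.0f, count = 0.0f;         // example values
float result = divifnotzero(total, count); // returns total unchanged because count is zero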

Is it possible to implement C++ as a C library with macros? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 6 years ago.
Would it be theoretically possible to implement C++ keywords, class syntax, etc. using a C library with the right macros? (i.e. prepare a library which would make any C++ code compile using a C compiler). I guess the answer is probably "no", but I wonder if you can prove it.
C does not have function overloading, but C++ does.
It seems to me that it would probably be impossible to make this simple C++ code compile in C:
bool Add(int a, int b);
bool Add(std::string a, std::string b);
(two overloaded functions: same name, different parameters, different implementation)
C would report an error similar to "redefinition of an existing function".
C++ would compile it with no problems.
No.
For a specific proof, consider this template.
template<size_t n>
struct fact {
    static const int value = n * fact<n-1>::value;
};

template<>
struct fact<0> {
    static const int value = 1;
};
Even if you could write an extremely sophisticated preprocessing macro to translate this into C, the preprocessor only runs once. It does not loop or run recursively (which this template definition requires to function correctly). So you cannot implement this template within C with macros only.
You may be able to do a subset of C++, but the preprocessor is fundamentally unsuited for this situation.
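To make the "does not run recursively" point concrete, here is a minimal sketch of what happens if you try to mimic the recursive template with a macro (the macro name FACT is made up for illustration):
#define FACT(n) ((n) <= 1 ? 1 : (n) * FACT((n) - 1))

int x = FACT(3); // expands once to ((3) <= 1 ? 1 : (3) * FACT((3) - 1)) and stops:
                 // the inner FACT is not expanded again, so this does not compile
                 // (FACT is not a function the compiler can call)
The preprocessor deliberately refuses to re-expand a macro inside its own replacement text, so there is no way to express the unbounded recursion that the fact<n> template relies on.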
EDIT: Re: Boost.Preprocessor. Iteration in Boost.Preprocessor is faked. There is an iteration limit of 256 in all cases, because the preprocessor fakes loops using repeated calls. See boost/preprocessor/repetition/for.hpp for an example.
The template mechanism is Turing-complete. The macro preprocessor is not. End of story.
Operator overloading might represent an issue.
You could go with code generation; the g++ front end probably does something of this kind. Fork it and fix it.

C/C++ macros instead of const [duplicate]

This question already has answers here:
What is the difference between #define and const? [duplicate]
(6 answers)
Closed 9 years ago.
The macro #define MAX 80 is equivalent to const int MAX = 80; Both are constant and cannot be modified.
Isn't it better to use the macro instead of the constant integer? The constant integer takes memory. The macro's name is replaced by its value by the pre-processor, right? So it wouldn't take memory.
Why would I use const int rather than the macro?
Reason #1: Scoping. Macros totally ignore scope.
namespace SomeNS {
    enum Functor {
        MIN = 0
        , AVG = 1
        , MAX = 2
    };
}
If the above code happens to appear in a file after the definition of the MAX macro, the enumerator will happily get preprocessed into 80 = 2, and the code will fail to compile spectacularly.
Additionally, const variables are type-safe, can be safely initialised with constant expressions (without the need for parentheses), etc.
Also note that when the compiler has access to the const variable's definition at the point of use, it is allowed to "inline" its value. So if you never take its address, it does not even need to take up space.
There are a few reasons, actually:
Scoping: you can't define a scope for a macro. It is present at global scope, period. Thus you can't have class-specific constants, private constants, etc. Also, you could end up with a name collision if you declare something with the same name as a macro you don't even know exists (in some lib/header you included, e.g.). A sketch of a class-scoped constant is shown after this list.
Debugging: as the preprocessor just replaces instances of the macro with its value, it can become tricky to know why you got an error with a specific value (or just a specific behaviour you didn't expect...). You have to remember where the value comes from. It is even more important with reusable code, as you may not even be able to tell where a value comes from if it has been defined as a macro in a header you didn't write (so it's not a good idea to do this yourself).
Addresses: a const variable is, well, a variable. It means notably that you can pass its address around (when const pointers or const references are needed), but you can't with a macro.
Type safety: you can specify a type for a const variable, something you can't do for a macro.
As a general rule, I'd say that (in my opinion) you should avoid #define directives when you have a clear alternative (i.e. const variables, enums, inlines).
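As referenced in the scoping point above, here is a minimal sketch of what macros cannot give you; the class and constant names are made up for illustration:
class Buffer {
public:
    static const int capacity = 80; // class-scoped, typed, addressable constant
private:
    static const int padding = 4;   // a private constant is impossible with a #define
};

int limit = Buffer::capacity;       // the name is qualified, so it cannot collide
// #define CAPACITY 80              // a macro would ignore class and namespace boundaries entirely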
The thing is, they aren't the same: the macro is just text substitution by the preprocessor, while the const is a normal variable.
If someone ever tries to shadow MAX within a function (like const int MAX = 32;), they get a really weird error message when MAX is a macro.
In C++ the language-idiomatic approach is to use constants rather than macros. Trying to save a few bytes of memory (if it even saves them) doesn't seem worth the cost in readability.
1) Debugging is the main one for me. It's difficult for a debugger to resolve MAX to its value at run time, but it can do it with the const int version.
2) You don't get any type information with #define. If you're using a template-based function, say std::max, where your other datum is unsigned, then the macro version will fail to deduce a common type, while a const declared with the right type will not. To work around that with the macro you'd have to use #define MAX 80U, which is ugly.
3) You cannot control scoping with #define; it will apply to the whole compilation unit following the #define directive.
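To illustrate point 2, here is a minimal sketch of the type-information difference; I've renamed the constants MAX_MACRO and MAX_CONST so both fit in one snippet, and the unsigned count variable is an assumption made up for the example:
#include <algorithm>

#define MAX_MACRO 80                   // just the int literal 80, no type of its own
const unsigned int MAX_CONST = 80;     // carries its type with it

unsigned int count = 100;
// auto a = std::max(MAX_MACRO, count); // fails: deduced types int and unsigned int conflict
auto b = std::max(MAX_CONST, count);    // fine: both arguments are unsigned int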