Inline functions instead of simple macros with constants - C++

In general in C++, you want to use const instead of defining constants with #define, since you get type checking, and that is a good thing.
#define MYCONST 10 // NO
const int MYCONST = 10; // OK.
This is fine, but suppose I want to improve the performance of my app: if that constant has to be read, I might (if I understand correctly) be reading it from some cache level, L1 to L3, and that would introduce slowness.
Would it be better to define that constant as a simple inline function, like the one below?
inline int MYCONST()
{
return 10;
}
Am I correct in expecting some improvement?
According to here, for integers it seems that it depends on the compiler and on the type I am using.

No and no: when you define something like
const int MYCONST = 10;
The value will not be read from "any cache level"; the compiler (at least any compiler built in the last 20 years) will issue exactly the same code as if you had used a macro (or a literal, which is equivalent), i.e. the value will be placed directly inside the machine code.
Therefore your second suggestion (using an inline function) will not only have no performance benefit at all, it will also prevent many uses of constants (like char my_array[MYCONST]), not to mention the lack of readability, wasted space etc. in your code.
Just follow the main C++ credo and use constants, there's nothing wrong with that :) ...

I think that defining a const is better practice anyway, but I also suspect that many compilers would not be able to correctly process a construct such as
char myBuffer[MYCONST()];
without issuing an error message.
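Since C++11, by the way, a constexpr function gives exactly what the asker wanted: it is guaranteed usable at compile time, including in array bounds. A minimal sketch:
constexpr int MYCONST() // C++11
{
    return 10;
}
char myBuffer[MYCONST()]; // OK: evaluated at compile time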

Related

#define or constexpr, which is more suitable here to maximal efficiency?

I have a constant string value
std::string name_to_use = "";
I need to use this value in just one place, calling the below function on it
std::wstring foo (const std::string &x) {...}
// ...
std::wstring result = foo (name_to_use);
I could simply skip declaring the variable and use a string literal in the function call instead, but to allow easy configuration of name_to_use I decided to declare it at the beginning of the file.
Now, since I am not really modifying name_to_use, I thought: why not use a #define preprocessor directive, so I do not have to store name_to_use as a const anywhere in memory while the main program runs continuously (a GUI is displayed)?
It worked fine, but then I came across constexpr. A user on Stack Overflow said to use it instead of #define as it is a safer option.
However, constexpr std::string name_to_use is still going to take up memory in this case, right? Since it's not actually replacing occurrences of name_to_use with a value, but holding an object computed at compile time (which does not offer me any benefit here anyway, if I'm not mistaken?).
If you #define it to "", then at each call there will be a conversion from a C string to std::string, which is pretty inefficient. However, you can (usually) pass macro definitions as arguments to the compiler, which helps customization. Even in that case, it makes sense to write the static constexpr std::string name_to_use.
With static constexpr std::string name_to_use = ...;, the conversion problem goes away (it is likely done at compile time). And do expect the compiler to optimize: if it's a compile-time string, the entire function call might even be optimized away (still, the object will exist and the code will adhere to the as-if rule).
To combine the two, you can do:
#include <string_view> // C++17; a namespace-scope constexpr std::string would not compile
#define STRINGIZE(x) #x
#define EXPAND(x) STRINGIZE(x)
#ifdef NAME_TO_USE
constexpr std::string_view name_to_use = EXPAND(NAME_TO_USE); // stringizes the macro's value
#else
constexpr std::string_view name_to_use = "";
#endif
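With the stringizing helpers above, the value can then be supplied at build time; a hypothetical invocation:
g++ -DNAME_TO_USE=MyName app.cpp
This makes name_to_use equal to "MyName" without editing the source.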
Also, as others said, please consider std::string_view to avoid allocation.
That user is right, and you understood correctly.
The constexpr method will allocate storage for a constant, whereas the macro is simply replaced by its value at compile time. The only benefit of the former is that it is typed, which can make your code a little bit safer when you compile it.
This being said, the choice is yours. Do you want an untyped macro that adds no run-time operations, or a typed constant that uses a little bit of memory?
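To make the trade-off concrete, here is a minimal sketch (C++17; foo's body is a placeholder standing in for the asker's function) where a constexpr std::string_view avoids both the macro's per-call conversion and any heap allocation:
#include <string>
#include <string_view>

constexpr std::string_view name_to_use = "some name"; // lives in read-only data

std::wstring foo(std::string_view x)
{
    return std::wstring(x.begin(), x.end()); // placeholder body
}

std::wstring result = foo(name_to_use); // no std::string temporary is built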

Numerical constant in C++

I have seen a lot of questions about what should be preferred among static const, #define and enum.
None of them discussed the aspect of resource consumption, which is important in embedded systems.
I don't like to use #define, but it doesn't consume resources the way a static does. What about enum: does it consume RAM/ROM?
In addition, I was told that if I use e.g. const int, it will consume neither RAM nor ROM, assuming I do not use a pointer/reference to this variable.
Is this true?
If this is true, wouldn't it be a good solution to use const int inside a namespace in order to save resources?
Remark: I am using the C++03 standard and can't use C++11. (I have already seen enum class and constexpr in C++11, but I can't use them.)
... if I use e.g. const int, it will consume neither RAM nor ROM, assuming I do not use a pointer/reference to this variable
Exactly. If you use #define or enum, the compiler has no reason to put your constant in data memory - it will always put it in your code. If you use const int, the compiler will do the same only (mostly?) if your code has no references to it (i.e. when it's not ODR-used).
There are benign-looking constructs that silently introduce references into your code, so in practice you cannot assume const int will behave the same way as enum. For example:
const int UNIVERSE = 42;
int calc = ...;
int answer = max(calc, UNIVERSE);
If you use max from the std namespace (as you should), it will introduce a const reference to UNIVERSE into your code, which may add 42 to your data ROM.
Also:
bool condition = ...;
int answer = condition ? 0 : UNIVERSE;
The same problem here (see here for details); your code may behave differently with const int than with either #define or enum.
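A common pre-C++11 workaround (my sketch, not part of the original answer) is to force a prvalue so that no reference ever binds to the constant:
#include <algorithm>

const int UNIVERSE = 42;

int f(int calc)
{
    // +UNIVERSE yields a temporary; UNIVERSE itself undergoes only
    // lvalue-to-rvalue conversion, so it is not ODR-used and needs no storage.
    return std::max(calc, +UNIVERSE);
}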
C++11 introduced constexpr, which avoids these subtle problems entirely. So if you cannot use C++11 and want maximum performance (or minimum memory wasted), you should use #define or enum. Also, if you want your constants to have a type other than int, you're stuck with #define:
#define UNIVERSE ((uint64_t)42) // outer parentheses keep uses like sizeof UNIVERSE parsing correctly
Your assumptions make no sense. All data has to be stored somewhere. You cannot store data in thin air.
The difference between #define and const is that the former tends to embed the constant in the code memory, while the latter may give the variable a dedicated address in data memory. In either case, they consume the same amount of memory.
If you don't enable any compiler optimizations, it is however possible that variables stored at a dedicated address in data memory will result in an ever so slight change in memory consumption, in the case where the instruction fetching that data could just as well have contained the raw value.
But this kind of micro-optimization is not something you should even consider. We are talking about ROM, which you should have plenty of.
The advantage of constants allocated at a fixed address is that you can take their address and also view their values in a debugger. So they aren't necessarily the same thing as #defined ones.
Variables that are not used by your program will get optimized away.
The definition of a type does not, by itself, consume memory (ROM), except for debug information (if present).
So, simply;
enum MyValues {
Value1,
Value2
};
Will not consume ROM. However, instantiations of the type (actual objects) will consume memory (loaded from ROM and later held in RAM).
MyValues val1 = Value1;
Above, val1 consumes memory.
So what is the difference to the #define or const int alternatives?
Clarity.
The preprocessor (with #define) and the compiler (with const int) are smart enough to do pretty much exactly the same thing in most cases (outside of things such as taking the address of the const int). The enum is a language construct that groups related values together; it is not the only one, but it is a very natural one.

Will it be odd to #define inside a C++ function?

My little C++ function needs to calculate a simple timeout value.
int CalcTimeout(const mystruct st)
{
    return (st.x + 100) * st.y + 200;
}
The numbers 100 and 200 would be confusing to read in the code later, so I would like to use #define for them. But these defines are only needed for this one function; can I define them inside the function? The advantages this way are:
They are very local values and nobody else needs to know about them
Being closer to where they are used, the intent is clear; they have no other use, they are like local variables (except that they are not)
The disadvantage can be that it is a rather crude way to define something like a local variable/const when it is obviously not actually local.
Other than that, would it be odd to #define inside a C++ function? Most of the time we use #defines at the top of the file. Is using const variables better in any way for replacing a fixed local hard-coded value like this?
The objective really is to make the code more readable/understandable.
Don't use a macro to define a constant; use a constant.
const int thingy = 100; // Obviously, you'll choose a better name
const int doodad = 200;
return (st.x + thingy) * st.y + doodad;
Like macros that expand to constant expressions, these can be treated as compile-time constants. Unlike macros, these are properly scoped within the function.
If you do have a reason for defining a macro that's only used locally, you can use #undef to get rid of it once you're done. But in general, you should avoid macros when (like here) there's a language-level construct that does what you want.
In C++ specifically, it would be rather weird to see macros used for that purpose. In C++, const completely replaces macros for defining manifest constants, and const works much better. (In C you'd have to stick with #define in many (or most) cases, but your question is tagged C++.)
Having said that, pseudo-local macros sometimes come in handy in C++ (especially in pre-C++11 versions of the language). If for some reason you have to #define such a macro "inside" a function, it is a very good idea to add an explicit #undef for that macro at the end of the same scope. (I put "inside" in quotes since the preprocessor does not really care about scopes and can't tell inside from outside.) That way you can simulate the scoped-visibility behavior other local identifiers have, instead of having a "locally" defined macro spill out into the rest of the code all the way to the end of the translation unit.
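A sketch of that pattern, applied to the question's function (macro names hypothetical):
int CalcTimeout(const mystruct st)
{
#define TIMEOUT_OFFSET 100
#define TIMEOUT_EXTRA 200
    return (st.x + TIMEOUT_OFFSET) * st.y + TIMEOUT_EXTRA;
#undef TIMEOUT_OFFSET
#undef TIMEOUT_EXTRA
}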

Why using define or static const? [duplicate]

Is it better to use static const variables than #define preprocessor macros? Or maybe it depends on the context?
What are the advantages/disadvantages of each method?
Pros and cons of #defines, consts and (what you have forgotten) enums, depending on usage:
enums:
only possible for integer values
properly scoped / identifier clash issues handled nicely, particularly in C++11 enum classes where the enumerations for enum class X are disambiguated by the scope X::
strongly typed, but to a big-enough signed-or-unsigned int size over which you have no control in C++03 (though you can specify a bit field into which they should be packed if the enum is a member of struct/class/union), while C++11 defaults to int but can be explicitly set by the programmer
can't take the address - there isn't one as the enumeration values are effectively substituted inline at the points of usage
stronger usage restraints (e.g. incrementing - template <typename T> void f(T t) { cout << ++t; } won't compile, though you can wrap an enum into a class with implicit constructor, casting operator and user-defined operators)
each constant's type is taken from the enclosing enum, so template <typename T> void f(T) gets a distinct instantiation when passed the same numeric value from different enums, all of which are distinct from any actual f(int) instantiation. Each function's object code could be identical (ignoring address offsets), but I wouldn't expect a compiler/linker to eliminate the unnecessary copies, though you could check your compiler/linker if you care.
even with typeof/decltype, can't expect numeric_limits to provide useful insight into the set of meaningful values and combinations (indeed, "legal" combinations aren't even notated in the source code, consider enum { A = 1, B = 2 } - is A|B "legal" from a program logic perspective?)
the enum's typename may appear in various places in RTTI, compiler messages etc. - possibly useful, possibly obfuscation
you can't use an enumeration without the translation unit actually seeing the value, which means enums in library APIs need the values exposed in the header, and make and other timestamp-based recompilation tools will trigger client recompilation when they're changed (bad!)
consts:
properly scoped / identifier clash issues handled nicely
strong, single, user-specified type
you might try to "type" a #define ala #define S std::string("abc"), but the constant avoids repeated construction of distinct temporaries at each point of use
One Definition Rule complications
can take address, create const references to them etc.
most similar to a non-const value, which minimises work and impact if switching between the two
value can be placed inside the implementation file, allowing a localised recompile and just client links to pick up the change
#defines:
"global" scope / more prone to conflicting usages, which can produce hard-to-resolve compilation issues and unexpected run-time results rather than sane error messages; mitigating this requires:
long, obscure and/or centrally coordinated identifiers, and access to them can't benefit from implicitly matching used/current/Koenig-looked-up namespace, namespace aliases etc.
while the trumping best-practice allows template parameter identifiers to be single-character uppercase letters (possibly followed by a number), other use of identifiers without lowercase letters is conventionally reserved for and expected of preprocessor defines (outside the OS and C/C++ library headers). This is important for enterprise scale preprocessor usage to remain manageable. 3rd party libraries can be expected to comply. Observing this implies migration of existing consts or enums to/from defines involves a change in capitalisation, and hence requires edits to client source code rather than a "simple" recompile. (Personally, I capitalise the first letter of enumerations but not consts, so I'd be hit migrating between those two too - maybe time to rethink that.)
more compile-time operations possible: string literal concatenation, stringification (taking size thereof), concatenation into identifiers
downside is that given #define X "x" and some client usage ala "pre" X "post", if you want or need to make X a runtime-changeable variable rather than a constant you force edits to client code (rather than just recompilation), whereas that transition is easier from a const char* or const std::string given they already force the user to incorporate concatenation operations (e.g. "pre" + X + "post" for string)
can't use sizeof directly on a defined numeric literal
untyped (GCC doesn't warn if compared to unsigned)
some compiler/linker/debugger chains may not present the identifier, so you'll be reduced to looking at "magic numbers" (strings, whatever...)
can't take the address
the substituted value need not be legal (or discrete) in the context where the #define is created, as it's evaluated at each point of use, so you can reference not-yet-declared objects, depend on "implementation" that needn't be pre-included, create "constants" such as { 1, 2 } that can be used to initialise arrays, or #define MICROSECONDS *1E-6 etc. (definitely not recommending this!)
some special things like __FILE__ and __LINE__ can be incorporated into the macro substitution
you can test for existence and value in #if statements for conditionally including code (more powerful than a post-preprocessing "if" as the code need not be compilable if not selected by the preprocessor), use #undef-ine, redefine etc.
substituted text has to be exposed:
in the translation unit it's used by, which means macros in libraries for client use must be in the header, so make and other timestamp-based recompilation tools will trigger client recompilation when they're changed (bad!)
or on the command line, where even more care is needed to make sure client code is recompiled (e.g. the Makefile or script supplying the definition should be listed as a dependency)
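A compact side-by-side of the three alternatives discussed above (a sketch):
enum Color { Red, Green, Blue }; // integral only, no address, name scoped to the enum
const int MaxUsers = 64;         // typed, addressable, obeys scope rules
#define BUFFER_SIZE 256          // textual substitution, untyped, file-wide

char buffer[BUFFER_SIZE]; // all three work in constant contexts
int limits[MaxUsers];
int palette[Blue + 1];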
My personal opinion:
As a general rule, I use consts and consider them the most professional option for general usage (though the others have a simplicity appealing to this old lazy programmer).
Personally, I loathe the preprocessor, so I'd always go with const.
The main advantage of a #define is that it requires no memory in your program, as it really just replaces some text with a literal value. It also has the advantage that it has no type, so it can be used for any integer value without generating warnings.
Advantages of "const"s are that they can be scoped, and they can be used in situations where a pointer to an object needs to be passed.
I don't know exactly what you are getting at with the "static" part, though. If you are declaring globally, I'd put it in an anonymous namespace instead of using static. For example:
namespace {
    unsigned const seconds_per_minute = 60;
}

int main(int argc, char *argv[]) {
    ...
}
If this is a C++ question and it mentions #define as an alternative, then it is about "global" (i.e. file-scope) constants, not about class members. When it comes to such constants in C++, static const is redundant: in C++, const variables have internal linkage by default, and there's no point in declaring them static. So it is really about const vs. #define.
And, finally, in C++ const is preferable, at least because such constants are typed and scoped. There are simply no reasons to prefer #define over const, aside from a few exceptions.
String constants, BTW, are one example of such an exception. With #defined string constants one can use the compile-time concatenation feature of C/C++ compilers, as in:
#define OUT_NAME "output"
#define LOG_EXT ".log"
#define TEXT_EXT ".txt"
const char *const log_file_name = OUT_NAME LOG_EXT;
const char *const text_file_name = OUT_NAME TEXT_EXT;
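For contrast (my sketch, not in the original answer): typed pointer constants cannot be concatenated at compile time, so the same result requires a run-time join:
#include <string>

const char *const out_name = "output";
const char *const log_ext = ".log";

// no compile-time concatenation for pointers; a run-time join is needed:
const std::string joined_log_name = std::string(out_name) + log_ext;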
P.S. Again, just in case: when someone mentions static const as an alternative to #define, it usually means they are talking about C, not about C++. I wonder whether this question is tagged properly...
#define can lead to unexpected results:
#include <iostream>
#define x 500
#define y x + 5
int z = y * 2;
int main()
{
std::cout << "y is " << y;
std::cout << "\nz is " << z;
}
This outputs an incorrect result for z:
y is 505
z is 510
However, if you replace this with constants:
#include <iostream>
const int x = 500;
const int y = x + 5;
int z = y * 2;
int main()
{
std::cout << "y is " << y;
std::cout << "\nz is " << z;
}
It outputs the correct result:
y is 505
z is 1010
This is because #define simply replaces the text: z = y * 2 expands to z = x + 5 * 2, which is 500 + 10 = 510. Because this can seriously mess up the order of operations, I would recommend using a constant variable instead.
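If you must keep the macro, the usual defensive fix (a sketch) is to parenthesize the expansion:
#define x 500
#define y (x + 5) // parentheses keep the expansion atomic

int z = y * 2; // now expands to (500 + 5) * 2 == 1010, as intended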
Using a static const is like using any other const variable in your code. This means you can trace where the information comes from, as opposed to a #define, which is simply replaced in the code during preprocessing.
You might want to take a look at the C++ FAQ Lite for this question:
http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.7
A static const is typed (it has a type) and can be checked by the compiler for validity, redefinition etc.
A #define can be redefined, undefined, whatever.
Usually you should prefer static consts. They have no disadvantage. The preprocessor should mainly be used for conditional compilation (and sometimes for really dirty tricks, maybe).
Defining constants with the preprocessor directive #define is not recommended, not only in C++ but also in C. These constants will not have a type. Even in C it has been proposed to use const for constants.
Always prefer the language's own features over additional tools like the preprocessor.
ES.31: Don't use macros for constants or "functions"
Macros are a major source of bugs. Macros don't obey the usual scope
and type rules. Macros don't obey the usual rules for argument
passing. Macros ensure that the human reader sees something different
from what the compiler sees. Macros complicate tool building.
From C++ Core Guidelines
As a rather old and rusty C programmer who never quite made it fully to C++ because other things came along, and who is now getting to grips with Arduino, my view is simple.
#define is a preprocessor directive and should be used as such, for conditional compilation etc., e.g. where low-level code needs to define some possible alternative data structures for portability to specific hardware. It can produce inconsistent results depending on the order in which your modules are compiled and linked. If you need something to be global in scope, then define it properly as such.
const (and static const) should always be used to name static values or strings. They are typed and safe, and the debugger can work fully with them.
enums have always confused me, so I have managed to avoid them.
Please see here: static const vs define
Usually a const declaration (notice it doesn't need to be static) is the way to go.
If you are defining a constant to be shared among all the instances of the class, use static const. If the constant is specific to each instance, just use const (but note that all constructors of the class must initialize this const member variable in the initialization list).
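A sketch of both cases (hypothetical class):
class Connection {
    static const int max_retries = 5; // one value shared by all instances
    const int port;                   // fixed per instance, may differ between objects
public:
    explicit Connection(int p) : port(p) {} // const members must be set here
};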

What are the major advantages of const versus #define for global constants?

In embedded programming, for example, #define GLOBAL_CONSTANT 42 is preferred to const int GLOBAL_CONSTANT = 42; for the following reasons:
it does not need space in RAM (which is usually very limited in microcontrollers, and µC applications usually need a large number of global constants)
const not only needs a storage place in flash, but the compiler also generates extra code at the start of the program to copy it.
Against all these advantages of using #define, what are the major advantages of using const?
In a non-µC environment memory is usually not such a big issue, and const is useful because it can be used locally, but what about global constants? Or is the answer just "we should never ever ever use global constants"?
Edit:
The examples might have caused some misunderstanding, so I have to state that they are in C. If the C compiler generated the exact same code for the two, I think that would be an error, not an optimization.
I just extended the question to C++ without thinking much about it, in the hope of getting new insights, but it was clear to me that in an object-oriented environment there is very little room for global constants, regardless of whether they are macros or consts.
Are you sure your compiler is too dumb to optimize your constant by inserting its value where it is needed, instead of putting it into memory? Compilers are usually good at optimization.
And the main advantage of constants versus macros is that constants have scope. Macros are substituted everywhere, with no respect for scope or context, which leads to really hard-to-understand compiler error messages.
Also, debuggers are not aware of macros.
More can be found here
The answer to your question varies for C and C++.
In C, const int GLOBAL_CONSTANT is not a constant expression, so the primary way to define a true constant in C is by using #define.
In C++, one of the major advantages of using const over #define is that #defines don't respect scope, so there is no way to create a class-scoped macro constant, while const variables can be scoped in classes.
Apart from that, there are other, subtler advantages:
Avoiding weird magic numbers in compilation errors:
If you are using #define, the macro is replaced by the preprocessor before compilation. So if you receive an error during compilation, it will be confusing, because the error message won't refer to the macro name but only to the value; the value appears out of nowhere, and one could waste a lot of time tracking it down in the code.
Ease of debugging:
For the same reasons mentioned above, a #define provides no real help while debugging.
Another reason that hasn't been mentioned yet is that const variables allow the compiler to perform explicit type-checking, but macros do not. Using const can help prevent subtle data-dependent errors that are often difficult to debug.
I think the main advantage is that you can change the constant without having to recompile everything that uses it.
Since a macro change will effectively modify the contents of every file that uses the macro, recompilation is necessary.
In C the const qualifier does not define a constant but instead a read-only object:
#define A 42 // A is a constant
const int a = 42; // a is not constant
A const object cannot be used where a real constant is required, for example:
static int bla1 = A; // OK, A is a constant
static int bla2 = a; // compile error, a is not a constant
Note that this is different in C++ where the const really qualifies an object as a constant.
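The classic C workaround when an integer constant expression is needed is an enum (a sketch, valid in both C and C++):
enum { ANSWER = 42 };     /* enum constants are integer constant expressions */
static int bla3 = ANSWER; /* OK even in C */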
The only problems you list with const sum up as "I've got the most incompetent compiler I can possibly imagine". The problems with #define, however, are universal: for example, no scoping.
There's no reason to use #define instead of a const int in C++. Any decent C++ compiler will substitute the constant value from a const int in the same way it does for a #define where it is possible to do so. Both take approximately the same amount of flash when used the same way.
Using a const does allow you to take the address of the value (whereas a macro does not). At that point, the behavior obviously diverges from the behavior of a macro: the const now needs a place in the program, in both flash and RAM, so that it can have an address. But this is really what you want.
The overhead here is typically going to be an extra 8 bytes, which is tiny compared to the size of most programs. Before you get to this level of optimization, make sure you have exhausted all other options like compiler flags. Using the compiler to carefully optimize for size and not using things like templates in C++ will save you a lot more than 8 bytes.
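To illustrate the address-taking case (a sketch; log_value is a hypothetical API):
const int timeout_ms = 500;

void log_value(const int *p); // hypothetical API that needs an address

void report()
{
    log_value(&timeout_ms); // taking the address forces the constant to have
                            // a real location in flash/RAM
}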