Numerical constant in C++

I saw a lot of questions concerning what should be preferred between static const vs #define vs enum.
In none of them was the resource aspect discussed, which is important in embedded systems.
I don't like to use #define, but it doesn't consume resources the way static does. What about enum, does it consume RAM/ROM?
In addition, I was told that if I use e.g. const int it will not consume either RAM or ROM, assuming I do not use a pointer/reference to this variable.
Is this true?
If this is true, wouldn't it be a good solution to use const int inside a namespace in order to save resources?
Remark: I am using the C++ 2003 standard and can't use the 2011 standard. (I have already seen enum class and constexpr in 2011, but I can't use them.)

... if I use e.g. const int it will not consume either RAM or ROM, assuming I do not use a pointer/reference to this variable
Exactly. If you use #define or enum, the compiler has no reason to put your constant in data memory - it will always put it in your code. If you use const int, the compiler will do the same only (mostly?) if your code has no references to it (i.e. when it's not ODR-used).
There are benign-looking constructs that silently introduce references into your code, so in practice you cannot assume const int will behave the same way as enum. For example:
const int UNIVERSE = 42;
int calc = ...;
int answer = max(calc, UNIVERSE);
If you use max from the std namespace (like you should), it will introduce a const reference to UNIVERSE into your code, which may add 42 to your data ROM.
Also:
bool condition = ...;
int answer = condition ? 0 : UNIVERSE;
The same problem occurs here (see the linked discussion for details); your code may behave differently with const int than with either #define or enum.
C++11 introduced constexpr, which avoids these subtle problems. So if you cannot use C++11 and want maximum performance (or minimum memory wasted), you should use #define or enum. Also, if you want your constants to have a type other than int, you're stuck with #define:
#define UNIVERSE ((uint64_t)42)
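For comparison, here is a minimal C++03 sketch of the enum alternative (the names are illustrative): an enumerator is a pure compile-time value, so std::max's const-reference parameter binds to a temporary created from that value rather than to a named constant that would need a home in data ROM.

#include <algorithm>

enum { UNIVERSE = 42 };

int answer_for(int calc)
{
    // The cast is needed because template argument deduction would otherwise
    // see two different types (int and the unnamed enumeration type).
    return std::max(calc, static_cast<int>(UNIVERSE));
}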

Your assumptions make no sense. All data has to be stored somewhere. You cannot store data in thin air.
The difference between #define and const is that the former tends to embed the constant within the code memory, while the latter may give the variable a dedicated address in data memory. In either case, they will consume the same amount of memory.
If you don't enable any compiler optimizations, it is however possible that a variable stored at a dedicated address in data memory will result in an ever-so-slight change in memory consumption, in the case where the instruction fetching that data could just as well have contained the raw value.
But this kind of micro-optimization is not something you should even consider. We are talking about ROM, which you should have plenty of.
The advantage of such constants allocated at a fixed address is that you can take their address and also view their values in a debugger. So they aren't necessarily the same thing as #defined ones.
Variables that are not used by your program will get optimized away.
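To illustrate the last two points, here is a small sketch (the names are made up): a const object has an address you can take and inspect in a debugger, while a macro is plain text substitution that disappears before compilation.

#define BAUD_RATE_MACRO 9600
const int baud_rate = 9600;

const int* probe()
{
    // return &BAUD_RATE_MACRO;   // would not compile: &9600 is not valid
    return &baud_rate;            // fine: baud_rate is an object with an address
}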

The definition of a type does not, by itself, consume memory (ROM), except for debug information (if present).
So, simply:
enum MyValues {
    Value1,
    Value2
};
will not consume ROM. However, instantiations of the type (actual objects) will consume memory (loaded from ROM and then later in RAM).
MyValues val1 = Value1;
Above, val1 consumes memory.
So what is the difference to the #define or const int alternatives?
Clarity.
The preprocessor (with the #define) and the compiler (with the const int) are smart enough to do pretty much exactly the same thing in most cases (outside of things such as taking the address of the const int). An enum is a language construct that groups related values together; it's not the only one, but it is a very natural one.

Related

Is it possible to make a variable truly read-only in C++?

By using the const qualifier a variable is supposed to be made read-only. As an example, an int marked const cannot be assigned to:
const int number = 5; //fine to initialize
number = 3; //error, number is const
At first glance it looks like this makes it impossible to modify the contents of number. Unfortunately, it actually can be done: for example, const_cast could be used (*const_cast<int*>(&number) = 3). This is undefined behavior, but that doesn't guarantee that number does not actually get modified; it could cause the program to crash, but it could also simply modify the value and continue.
Is it possible to make it actually impossible to modify a variable?
A possible need for this might be security concerns: it might be of the highest importance that some very valuable data is not changed, or that a piece of data being sent is not modified.
No, this is not the concern of a programming language. Any "access" protection is only superficial and only exists at compile-time.
Memory of a computer can always be modified at runtime if you have the corresponding rights. Your OS might provide you with facilities to secure pages of memory though, e.g. VirtualProtect() under Windows.
(Notice that an "attacker" could use the same facilities to restore the access if he has the privilege to do so)
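As a hedged POSIX-style sketch (Linux; error checking omitted), you can give a value its own page and mark that page read-only with mprotect(), so that any later write faults at runtime instead of silently succeeding:

#include <sys/mman.h>
#include <unistd.h>
#include <iostream>

int main()
{
    long page = sysconf(_SC_PAGESIZE);
    void* mem = mmap(0, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    int* number = static_cast<int*>(mem);
    *number = 5;

    mprotect(mem, page, PROT_READ);    // revoke write access for this page

    std::cout << *number << std::endl; // reading still works
    // *number = 3;                    // would now raise SIGSEGV

    munmap(mem, page);
    return 0;
}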
Also I assume that there might be hardware solutions for this.
There is also the option of encrypting the data in question. Yet it appears to be a chicken-and-egg situation, as the key for encryption and decryption has to be stored somewhere in memory as well (with a software-only solution).
Most of the answers in this thread are correct, but they are about const, while the OP is asking for a way to have a constant value defined and used in the source code. My crystal ball says the OP is looking for symbolic constants (preprocessor #define statements).
#define NUMBER 3
//... some other code
std::cout << NUMBER;
This way, the developer is able to parametrize values and maintain them easily, while there's virtually no (easy) way to alter it once the program is compiled and launched.
Just keep in mind that const variables are visible to debuggers, while symbolic constants are not, although they require no additional memory. Another criterion is type checking, which is absent for symbolic constants, as for macros in general.
const is not intended to make a variable read-only.
The meaning of const x is basically:
Hey compiler, please prevent me from casually writing code in this scope which changes x.
That's very different from:
Hey compiler, please prevent any changes to x in this scope.
Even if you don't write any const_casts yourself, the compiler will still not assume that const'ed entities won't change. Specifically, if you use the function
int foo(const int* x);
the compiler cannot assume that foo() doesn't change the memory pointed to by x.
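A short sketch of the consequence (foo is just an illustrative name): since the object x below is not itself const, foo may legally cast away constness and modify it, so the caller cannot assume the value is unchanged after the call.

int foo(const int* x);   // defined in another translation unit

int bar()
{
    int x = 1;    // a non-const object, passed through a pointer-to-const
    foo(&x);
    return x;     // the compiler must re-read x; it may no longer be 1
}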
You could use your value without a variable
Variables vary... so, naturally, a way to prevent that is to use values which aren't stored in variables. You can achieve that by using any of the following (a consolidated sketch follows the list)...
an enumeration with a single value: enum : int { number = 1 }.
the preprocessor: #define NUMBER 1 <- Not recommended
a function: inline int get_number() { return 1; }
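A consolidated sketch of those three options (the names are illustrative; the enum-with-underlying-type syntax requires C++11):

enum : int { number_enum = 1 };          // an enumerator is a value, not an object
#define NUMBER_MACRO 1                   // plain text substitution
inline int get_number() { return 1; }    // a value produced by a function

int total() { return number_enum + NUMBER_MACRO + get_number(); }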
You could use implementation/platform-specific features
As @SebastianHoffman suggests, typical platforms allow marking some of a process' virtual memory space as read-only, so that attempts to change it result in an access violation signal to the process and the suspension of its execution. This is not a solution within the language itself, but it is often useful. Example: When you use string literals, e.g.:
const char* my_str = "Hello world";
const_cast<char*>(my_str)[0] = 'Y';
Your process will likely fail, with a message such as:
Segmentation fault (core dumped)
If you know the value at compile-time, you can place the data in read-only memory. Sure, someone could get around this, but security is about layers rather than absolutes. This makes it harder. C++ has no concept of this, so you'll have to inspect the resulting binary to see if it's happened (this could be scripted as a post-build check).
If you don't have the value at compile-time, your program depends on being able to change / set it at runtime, so you fundamentally cannot stop that from happening.
Of course, you can make it harder though things like const so the code is compiled assuming it won't change / programmers have a harder time accidentally changing it.
You may also find constexpr an interesting tool to explore here.
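A minimal constexpr sketch (C++11): the value is a genuine compile-time constant, usable wherever the language requires one, and no runtime object needs to exist for anyone to modify.

constexpr int number = 5;
static_assert(number == 5, "known at compile time");
int buffer[number];   // fine: number is a constant expression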
There is no way to specify the behavior of code that does not adhere to the specification.
In your example, number is truly constant. You correctly note that modifying it after a const_cast would be undefined behavior, and indeed it is impossible to modify it in a correct program.

Inline functions instead of simple macros with constant

In general in C++, you want to use const constants instead of defining constants with #define, as there is type checking, and this is a good thing.
#define MYCONST 10 // NO
const int MYCONST = 10; // OK.
This is fine, but suppose I want to improve the performance of my app: if I have to read that constant, it might still be read (I hope I am correct) from some cache level, L1 to L3, and this would introduce slowness.
Would it be better to define that constant as a simple inline function, like below?
inline int MYCONST()
{
    return 10;
}
Am I correct that I should expect some improvement?
According to this linked answer, for integers it seems that it depends on the compiler and the type I am using.
No and no: when you define something like
const int MYCONST = 10;
The value will not be read from "any cache level"; instead, the compiler (at least any compiler built in the last 20 years) will issue exactly the same code as if you had used a macro (or a literal, which is equivalent), i.e. the value will be placed directly inside the machine code.
Therefore your second suggestion (using an inline function) will not only have no performance benefit at all, but will also prevent many uses of the constant (like char my_array[MYCONST]), not to mention the lack of readability, wasted space, etc. in your code.
Just follow the main C++ credo and use constants, there's nothing wrong with that :) ...
I think that defining a const is better practice anyway, but I also suspect that many compilers would not be able to correctly process a construct such as
char myBuffer[MYCONST()];
without issuing an error message.
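A small sketch contrasting the two (assuming C++03, i.e. no constexpr; the function is renamed here to avoid clashing with the constant):

const int MYCONST = 10;
inline int myconst_fn() { return 10; }

char ok_buffer[MYCONST];          // fine: MYCONST is an integral constant expression
// char bad_buffer[myconst_fn()]; // ill-formed in standard C++: a function call is
//                                // not a constant expression here (some compilers
//                                // accept it only as a non-standard VLA extension)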

const vs #define in modern compilers

I've read a few things saying that a #define doesn't take up any memory, but a colleague at work was very insistent that with modern compilers there is no difference when it comes to const ints/strings.
#define STD_VEC_HINT 6
const int stdVecHint = 6;
The conversation came about because an old bit of code that dealt with encryption, and had its key as a #define, was being modernised.
I always thought that a variable would end up getting a memory address which would show its contents, but maybe compiling under release removes such stuff.
A good compiler will not allocate space for a const variable that can be elided. In C++, const variables at module scope are also implicitly static in visibility (internal linkage), which makes it easier for the compiler to optimize the variable out. The link-time optimization feature of GCC also helps with cross-module optimization.
Don't forget the even more important fact that const variables have proper scoping and type safety, which are missing from #define.
As with so many things, it depends!
A #define will just inject the constant straight into your code, so it won't take up any memory. The same is potentially true for a const.
However, you can take the address of a const:
const int *value = &stdVecHint;
And since you're taking its address the compiler will need to store the constant in memory in order to generate an address, so in this case it will require memory.
The compiler is likely to replace occurrences of stdVecHint with the literal 6 everywhere it is used. An address and memory space will be taken up if you take its address explicitly, but then again this is a moot point, since you couldn't do that with STD_VEC_HINT. Pedantically, yes, stdVecHint is a variable with internal linkage, and every translation unit that sees the definition will have its own copy of it; in practice, it shouldn't increase the memory footprint.
Preprocessor directives like #define need no memory allocation: the preprocessor simply replaces the constant's name with its value (for example, pi with 3.14 given #define pi 3.14) before the code is compiled, whereas a const declaration (const float pi = 3.14;) may require memory to be allocated. The preprocessor approach poses no compatibility problem with older compilers, though its future suitability is less clear.

manifest constants vs C++ keyword "const"

Reading Meyers' book (Item 2, "Prefer const to #define"), I'd like to understand some sentences that I list below:
With reference to the comparison between #define ASPECT_RATIO 1.653 and const double aspect_ratio = 1.653, Meyers says that "... in the case of a floating point constant (such as in this example) use of the constant may yield smaller code than using #define."
The questions are:
By "smaller code", does Meyers mean a smaller executable file on disk?
Why would it be smaller? I thought this might hold on 32-bit systems, because there an int (or a pointer) needs 4 bytes and a double 8 bytes: since ASPECT_RATIO may not get entered into the symbol table, the name is replaced with the value, while in other cases a const pointer to a unique double value may be used. In that case the argument would no longer hold on 64-bit machines (because a pointer and a double are the same number of bytes). I do not know if I have explained well what I mean, and especially whether this idea is correct.
Then Meyers says that " ...though good compilers won't set aside storage for const objects of integral types (unless you create a pointer or reference to the object) sloppy compilers may, and you may not be willing to set aside memory for such objects..."
In this context, is the memory the RAM occupied by the process during execution? If so, can I verify this with Task Manager (on Windows) or top (on Linux)?
First, micro-optimizations are stupid. Don't care about a couple of constant double values eating up all your RAM. It won't happen. If it does, handle it then, not before you know it's even relevant.
Second, #define can have nasty side effects if used too much, even with the ALL_CAPS_DEFINES convention. Sooner or later you're going to mistakenly define a short macro whose name collides with some other variable's name, and preprocessor replacement will give you an unfathomable yet avoidable error with no debuggability at all. As the question linked in the comments states, macros lack namespace and class scope, and are definitely bad in C++.
Third, C++11 adds constexpr, which allows typesafe macro-performant (whatever this misnomer should mean) constant expressions. There are even those (see the C++ Lounge in SO Chat) that do whole calculations at compile time using constexpr. Unfortunately, not all major compilers claiming C++11 support, actually support enough C++11 features to be truly useful (I'm looking at you, MSVC2012!).
The reason that it "may" yield smaller code is that multiple uses of a define will probably (probably: optimizers do weird stuff) also generate the same constant again and again. Whereas using a const will only generate one definition, and then reference the same definition (if the optimizer doesn't calculate stuff inline).
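A rough sketch of that point (whether the duplicated literals actually survive into the binary is up to the compiler and optimizer):

#define ASPECT_RATIO 1.653                 // the literal is pasted into every use
const double aspect_ratio = 1.653;         // one named object the code can refer to

double f(double w) { return w * ASPECT_RATIO + ASPECT_RATIO / 2.0; }
double g(double w) { return w * aspect_ratio + aspect_ratio / 2.0; }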
The linker outputs several parts when linking your executable: some parts contain constants, others executable code. Whether or not your (operating) system loads the executable into RAM before executing it is not defined within the C++ standard. I've used systems where the code executes from flash storage, so only the stack and dynamically allocated memory use RAM.

What are the major advantages of const versus #define for global constants?

In embedded programming, for example, #define GLOBAL_CONSTANT 42 is preferred to const int GLOBAL_CONSTANT = 42; for the following reasons:
it does not need space in RAM (which is usually very limited in microcontrollers, and µC applications usually need a large number of global constants)
const not only needs a storage place in flash, but the compiler also generates extra code at the start of the program to copy it.
Against all these advantages of using #define, what are the major advantages of using const?
In a non-µC environment memory is usually not such a big issue, and const is useful because it can be used locally, but what about global constants? Or is the answer just "we should never ever ever use global constants"?
Edit:
The examples might have caused some misunderstanding, so I have to state that they are in C. If the C compiler generated the exact same code for the two, I think that would be an error, not an optimization.
I just extended the question to C++ without thinking much about it, in the hope of getting new insights, but it was clear to me that in an object-oriented environment there is very little room for global constants, regardless of whether they are macros or consts.
Are you sure your compiler is too dumb to optimize your constant by inserting its value where it is needed, instead of putting it into memory? Compilers are usually good at optimizations.
And the main advantage of constants versus macros is that constants have scope. Macros are substituted everywhere with no respect for scope or context, which leads to really hard-to-understand compiler error messages.
Also debuggers are not aware of macros.
More can be found here
The answer to your question varies for C and C++.
In C, const int GLOBAL_CONSTANT is not a constant expression, so the primary way to define a true constant in C is by using #define.
In C++, one of the major advantages of using const over #define is that #defines don't respect scope, so there is no way to create a class-scoped constant, while const variables can be scoped in classes.
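A short sketch of such a class-scoped constant, which a macro cannot give you (the class and names are made up):

class Buffer {
public:
    static const int capacity = 64;   // scoped: referred to as Buffer::capacity
};

char storage[Buffer::capacity];       // usable as a compile-time array bound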
Apart from that, there are other subtle advantages, like:
Avoiding weird magic numbers in compilation errors:
If you are using #define, macros are replaced by the preprocessor before compilation, so if you receive an error during compilation it will be confusing: the error message won't refer to the macro name but only to its value, a number that appears out of nowhere, and one can waste a lot of time tracking it down in the code.
Ease of debugging:
For the same reasons mentioned above, a #define provides no real help while debugging.
Another reason that hasn't been mentioned yet is that const variables allow the compiler to perform explicit type-checking, but macros do not. Using const can help prevent subtle data-dependent errors that are often difficult to debug.
I think the main advantage is that you can change the constant without having to recompile everything that uses it.
Since a macro change will effectively modify the contents of the files that use the macro, recompilation is necessary.
In C the const qualifier does not define a constant but instead a read-only object:
#define A 42 // A is a constant
const int a = 42; // a is not constant
A const object cannot be used where a real constant is required, for example:
static int bla1 = A; // OK, A is a constant
static int bla2 = a; // compile error, a is not a constant
Note that this is different in C++ where the const really qualifies an object as a constant.
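A brief sketch of that C++ difference, building on the snippet above: a const int initialized with a constant expression can itself be used where a constant expression is required.

const int a = 42;
static int bla2 = a;   // OK in C++ (an error in C, as shown above)
int table[a];          // also OK in C++: a is an integral constant expression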
The only problems you list with const sum up as "I've got the most incompetent compiler I can possibly imagine". The problems with #define, however, are universal; for example, no scoping.
There's no reason to use #define instead of a const int in C++. Any decent C++ compiler will substitute the constant value from a const int in the same way it does for a #define where it is possible to do so. Both take approximately the same amount of flash when used the same way.
Using a const does allow you to take the address of the value (where a macro does not). At that point, the behavior obviously diverges from that of a macro. The const now needs space in the program, in both flash and RAM, so that it can have an address. But this is really what you want.
The overhead here is typically going to be an extra 8 bytes, which is tiny compared to the size of most programs. Before you get to this level of optimization, make sure you have exhausted all other options like compiler flags. Using the compiler to carefully optimize for size and not using things like templates in C++ will save you a lot more than 8 bytes.