The const qualifier is supposed to make a variable read-only. For example, an int marked const cannot be assigned to:
const int number = 5; //fine to initialize
number = 3; //error, number is const
At first glance it looks like this makes it impossible to modify the contents of number. Unfortunately, it can in fact be done, for example with const_cast (*const_cast<int*>(&number) = 3). This is undefined behavior, so there is no guarantee that number does not actually get modified: the program could crash, but it could also simply write the new value and continue.
Is it possible to make it actually impossible to modify a variable?
A possible need for this might be security concerns: it might be of the highest importance that some very valuable data is never changed, or that a piece of data being sent is never modified.
No, this is not the concern of a programming language. Any "access" protection is only superficial and only exists at compile-time.
A computer's memory can always be modified at runtime if you have the corresponding rights. Your OS might provide facilities to secure pages of memory though, e.g. VirtualProtect() under Windows.
(Notice that an "attacker" could use the same facilities to restore the access if they have the privilege to do so.)
There may also be hardware solutions for this.
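As an illustration of the VirtualProtect() approach, here is a minimal hedged sketch for Windows (error handling omitted; this shows the idea, not a hardened implementation):

#include <windows.h>
#include <cstring>

int main()
{
    // Commit one page of read/write memory and put the "valuable" data in it.
    void* page = VirtualAlloc(nullptr, 4096, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    std::memcpy(page, "secret", 7);

    // Flip the page to read-only; any later write raises an access violation.
    DWORD oldProtect;
    VirtualProtect(page, 4096, PAGE_READONLY, &oldProtect);

    // *static_cast<char*>(page) = 'X'; // would now crash the process
    return 0;
}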
There is also the option of encrypting the data in question. Yet it appears to be a chicken-and-egg situation as the private key for the encryption and decryption has to be stored somewhere in memory as well (with a software-only solution).
Most of the answers in this thread are correct, but they are about const, while the OP is asking for a way to have a constant value defined and used in the source code. My crystal ball says that the OP is looking for symbolic constants (preprocessor #define statements).
#define NUMBER 3
//... some other code
std::cout << NUMBER;
This way, the developer is able to parametrize values and maintain them easily, while there's virtually no (easy) way to alter it once the program is compiled and launched.
Just keep in mind that const variables are visible to debuggers while symbolic constants are not, but symbolic constants require no additional memory. Another criterion is type checking, which const variables get and which is absent for symbolic constants, as for macros in general.
const is not intended to make a variable read-only.
The meaning of const x is basically:
Hey compiler, please prevent me from casually writing code in this scope which changes x.
That's very different from:
Hey compiler, please prevent any changes to x in this scope.
Even if you don't write any const_casts yourself, the compiler will still not assume that const'ed entities won't change. Specifically, if you use the function
int foo(const int* x);
the compiler cannot assume that foo() doesn't change the memory pointed to by x.
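A minimal sketch (names are ours) of why that assumption would be unsound: casting away const and writing is perfectly legal when the pointed-to object was not itself declared const.

int foo(const int* x)
{
    *const_cast<int*>(x) = 7; // legal here: the referent is a non-const int
    return *x;
}

int main()
{
    int value = 5;      // deliberately not const
    return foo(&value); // returns 7, with no undefined behavior
}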
You could use your value without a variable
Variables vary... so, naturally, a way to prevent that is using values which aren't stored in variables. You can achieve that by using...
an enumeration with a single value: enum : int { number = 1 }.
the preprocessor: #define NUMBER 1 <- Not recommended
a function: inline int get_number() { return 1; }
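Putting those options together, a minimal compilable sketch (all names are illustrative):

enum : int { number = 1 };            // an enumerator, not an object: there is no storage to attack
#define NUMBER 1                      // textual substitution only (not recommended)
inline int get_number() { return 1; } // the value lives in code, not in writable data

int main()
{
    return number + NUMBER + get_number();
}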
You could use implementation/platform-specific features
As #SebastianHoffman suggests, typical platforms allow marking some of a process' virtual memory space as read-only, so that attempts to change it result in an access violation signal to the process and the suspension of its execution. This is not a solution within the language itself, but it is often useful. Example: When you use string literals, e.g.:
const char* my_str = "Hello world";
const_cast<char*>(my_str)[0] = 'Y';
Your process will likely fail, with a message such as:
Segmentation fault (core dumped)
If you know the value at compile-time, you can place the data in read-only memory. Sure, someone could get around this, but security is about layers rather than absolutes; this makes it harder. C++ has no concept of this, so you'll have to inspect the resulting binary to see whether it has happened (this could be scripted as a post-build check).
If you don't have the value at compile-time, your program depends on being able to change / set it at runtime, so you fundamentally cannot stop that from happening.
Of course, you can make it harder though things like const so the code is compiled assuming it won't change / programmers have a harder time accidentally changing it.
You may also find constexpr an interesting tool to explore here.
The language does not specify what code does when that code does not adhere to the specification.
In your example, number is truly constant. You correctly note that modifying it after a const_cast would be undefined behavior. And indeed it is impossible to modify it in a correct program.
Related
I saw a lot of questions concerning what should be preferred between static const vs #define vs enum.
None of them discussed the aspect of resource usage, which is important in embedded systems.
I don't like to use #define, but it doesn't consume resources the way static does. What about enum, does it consume RAM/ROM?
In addition, I was told that if I use e.g. const int it will not consume either RAM or ROM, assuming I do not use a pointer/reference to this variable.
Is this true?
If this is true, wouldn't it be a good solution to use const int inside a namespace in order to save resources?
Remark: I am using the C++ 2003 standard and can't use the 2011 standard. (I have already seen enum class and constexpr in 2011, but I can't use them.)
... if I use e.g. const int it will not consume either RAM or ROM, assuming I do not use a pointer/reference to this variable
Exactly. If you use #define or enum, the compiler has no reason to put your constant in data memory - it will always put it in your code. If you use const int, the compiler will do the same only (mostly?) if your code has no references to it (i.e. when it's not ODR-used).
There are benign-looking constructs that silently introduce references into your code, so in practice you cannot assume const int will behave the same way as enum. For example:
const int UNIVERSE = 42;
int calc = ...;
int answer = max(calc, UNIVERSE);
If you use max from the std namespace (like you should), it will introduce a const reference to UNIVERSE to your code, which may add 42 to your data ROM.
Also:
bool condition = ...;
int answer = condition ? 0 : UNIVERSE;
The same problem here (see here for details); your code may behave differently with const int than with either #define or enum.
C++11 introduced constexpr, which avoids these subtle problems. So if you cannot use C++11 and want maximum performance (or minimum memory wasted), you should use #define or enum. Also, if you want your constants to have a type other than int, you're stuck with #define:
#define UNIVERSE ((uint64_t)42)
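For readers who do have C++11, a minimal sketch of the constexpr alternative mentioned above (illustrative):

#include <cstdint>

// Typed and scoped like a const, but usable as a true compile-time constant.
constexpr std::uint64_t UNIVERSE = 42;

It needs no macro tricks to carry a type other than int, and it does not force a data-memory address unless the code ODR-uses it (e.g. takes its address).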
Your assumptions make no sense. All data has to be stored somewhere. You cannot store data in thin air.
The difference between #define and const is that the former tends to embed the constants with the code memory, while the latter may possibly give the variable a dedicated address in data memory. In either case, they will consume the same amount of memory.
If you don't enable any compiler optimizations, it is however possible that variables stored at a dedicated address in data memory will result in an ever so slight change in memory consumption, in the case where the instruction fetching that data could just as well have contained the raw value.
But this kind of micro-optimization is not something you should even consider. We are talking about ROM, which you should have plenty of.
The advantage of such constants allocated at a fixed address, is that you can take their address and also view their values in a debugger. So they aren't necessarily the same thing as #defined ones.
Variables that are not used by your program will get optimized away.
The definition of a type does not, by itself, consume memory (ROM); except for debug information (if present).
So, simply;
enum MyValues {
    Value1,
    Value2
};
Will not consume ROM. However, instantiations of the type (actual objects) will consume memory (loaded from ROM and then later in RAM).
MyValues val1 = Value1;
Above, val1 consumes memory.
So what is the difference to the #define or const int alternatives?
Clarity.
The preprocessor (with the #define) and the compiler (with the const int) are smart enough to do pretty much exactly the same thing in most cases (outside of things such as taking the address of the const int). The enum is a language construct that groups related values together; it's not the only one, but it is a very natural one.
I read this line in a book:
It is provably impossible to build a compiler that can actually determine whether or not a C++ function will change the value of a particular variable.
The paragraph was talking about why the compiler is conservative when checking for const-ness.
Why is it impossible to build such a compiler?
The compiler can always check if a variable is reassigned, a non-const function is being invoked on it, or if it is being passed in as a non-const parameter...
Why is it impossible to build such a compiler?
For the same reason that you can't write a program that will determine whether any given program will terminate. This is known as the halting problem, and it's one of those things that's not computable.
To be clear, you can write a compiler that can determine that a function does change the variable in some cases, but you can't write one that reliably tells you that the function will or won't change the variable (or halt) for every possible function.
Here's an easy example:
int bar(); // defined elsewhere, invisible to this translation unit
int a;

void foo() {
    if (bar() == 0) a = 1;
}
How can a compiler determine, just from looking at that code, whether foo will ever change a? Whether it does or doesn't depends on conditions external to the function, namely the implementation of bar. There's more than that to the proof that the halting problem isn't computable, but it's already nicely explained at the linked Wikipedia article (and in every computation theory textbook), so I'll not attempt to explain it correctly here.
Imagine such a compiler exists. Let's also assume that for convenience it provides a library function that returns 1 if the passed function modifies a given variable and 0 when the function doesn't. Then what should this program print?
int variable = 0;

void f() {
    if (modifies_variable(f, variable)) {
        /* do nothing */
    } else {
        /* modify variable */
        variable = 1;
    }
}

int main(int argc, char **argv) {
    if (modifies_variable(f, variable)) {
        printf("Modifies variable\n");
    } else {
        printf("Does not modify variable\n");
    }
    return 0;
}
Don't confuse "will or will not modify a variable given these inputs" for "has an execution path which modifies a variable."
The former is called opaque predicate determination, and is trivially impossible to decide - aside from reduction from the halting problem, you could just point out the inputs might come from an unknown source (eg. the user). This is true of all languages, not just C++.
The latter statement, however, can be determined by looking at the parse tree, which is something that all optimizing compilers do. The reason they do is that pure functions (and referentially transparent functions, for some definition of referentially transparent) have all sorts of nice optimizations that can be applied, like being easily inlinable or having their values determined at compile-time; but to know if a function is pure, we need to know if it can ever modify a variable.
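As a rough illustration of that distinction (all names are ours), a compiler can prove the first function below is pure, and can see that the second merely has an execution path that writes a global:

int g = 0;

int add(int a, int b) { return a + b; }       // pure: no execution path modifies anything

void maybe_touch(int a) { if (a > 0) g = 1; } // has a path that modifies g

Whether maybe_touch will actually modify g on a given run depends on its input - exactly the opaque-predicate question that cannot be decided in general.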
So, what appears to be a surprising statement about C++ is actually a trivial statement about all languages.
I think the key word in "whether or not a C++ function will change the value of a particular variable" is "will". It is certainly possible to build a compiler that checks whether or not a C++ function is allowed to change the value of a particular variable, but you cannot say with certainty that the change is actually going to happen:
#include <iostream>
#include <string>
using namespace std;

void maybe(int& val) {
    cout << "Should I change value? [Y/N] >";
    string reply;
    cin >> reply;
    if (reply == "Y") {
        val = 42;
    }
}
I don't think it's necessary to invoke the halting problem to explain that you can't algorithmically know at compile time whether a given function will modify a certain variable or not.
Instead, it's sufficient to point out that a function's behavior often depends on run-time conditions, which the compiler can't know about in advance. E.g.
int y;
int main(int argc, char *argv[]) {
if (argc > 2) y++;
}
How could the compiler predict with certainty whether y will be modified?
It can be done, and compilers do it all the time for some functions; this is for instance a trivial optimisation for simple inline accessors or for many pure functions.
What is impossible is to know it in the general case.
Whenever there is a system call, a call to a function coming from another module, or a call to a potentially overridden method, anything could happen - including a hostile takeover from some hacker's use of a stack overflow to change an unrelated variable.
However, you should use const, avoid globals, prefer references to pointers, avoid reusing variables for unrelated tasks, etc.; all of that will make the compiler's life easier when performing aggressive optimisations.
There are multiple avenues to explaining this, one of which is the Halting Problem:
In computability theory, the halting problem can be stated as follows: "Given a description of an arbitrary computer program, decide whether the program finishes running or continues to run forever". This is equivalent to the problem of deciding, given a program and an input, whether the program will eventually halt when run with that input, or will run forever.
Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.
If I write a program that looks like this:
do tons of complex stuff
if (condition on result of complex stuff)
{
    change value of x
}
else
{
    do not change value of x
}
Does the value of x change? To determine this, you would first have to determine whether the do tons of complex stuff part causes the condition to fire - or even more basic, whether it halts. That's something the compiler can't do.
Really surprised that there isn't an answer that uses the halting problem directly! There's a very straightforward reduction from this problem to the halting problem.
Imagine that the compiler could tell whether or not a function changed the value of a variable. Then it would certainly be able to tell whether the following function changes the value of y or not, assuming that the value of x can be tracked in all the calls throughout the rest of the program:
void foo(int x) {
    if (x)
        y = 1;
}
Now, for any program we like, let's rewrite it as:
int y;

int main() {
    int x;
    ...
    run the program normally
    ...
    foo(x);
}
Notice that, if, and only if, our program changes the value of y, does it then terminate - foo() is the last thing it does before exiting. This means we've solved the halting problem!
What the above reduction shows us is that the problem of determining whether a variable's value changes is at least as hard as the halting problem. The halting problem is known to be incomputable, so this one must be also.
As soon as a function calls another function that the compiler doesn't "see" the source of, it either has to assume that the variable is changed, or things may well go wrong further below. For example, say we have this in "foo.cpp":
#include <fstream>
using std::ifstream;

void foo(int& x)
{
    ifstream f("f.dat", ifstream::binary);
    f.read((char *)&x, sizeof(x));
}
and we have this in "bar.cpp":
void foo(int& x); // declared in a header; defined over in foo.cpp

void bar(int& x)
{
    foo(x);
}
How can the compiler "know" that x is not changing (or IS changing, more appropriately) in bar?
I'm sure we can come up with something more complex, if this isn't complex enough.
It is impossible in general for the compiler to determine if the variable will be changed, as has been pointed out.
When checking const-ness, the question of interest seems to be if the variable can be changed by a function. Even this is hard in languages that support pointers. You can't control what other code does with a pointer, it could even be read from an external source (though unlikely). In languages that restrict access to memory, these types of guarantees can be possible and allows for more aggressive optimization than C++ does.
To make the question more specific I suggest the following set of constraints may have been what the author of the book may have had in mind:
1. Assume the compiler is examining the behavior of a specific function with respect to const-ness of a variable. For correctness, a compiler would have to assume (because of aliasing, as explained below) that the variable is changed whenever the function calls another function, so assumption #1 only applies to code fragments that don't make function calls.
2. Assume the variable isn't modified by an asynchronous or concurrent activity.
3. Assume the compiler is only determining if the variable can be modified, not whether it will be modified. In other words, the compiler is only performing static analysis.
4. Assume the compiler is only considering correctly functioning code (not considering array overruns/underruns, bad pointers, etc.).
In the context of compiler design, I think assumptions 1,3,4 make perfect sense in the view of a compiler writer in the context of code gen correctness and/or code optimization. Assumption 2 makes sense in the absence of the volatile keyword. And these assumptions also focus the question enough to make judging a proposed answer much more definitive :-)
Given those assumptions, a key reason why const-ness can't be assumed is due to variable aliasing. The compiler can't know whether another variable points to the const variable. Aliasing could be due to another function in the same compilation unit, in which case the compiler could look across functions and use a call tree to statically determine that aliasing could occur. But if the aliasing is due to a library or other foreign code, then the compiler has no way to know upon function entry whether variables are aliased.
You could argue that if a variable/argument is marked const then it shouldn't be subject to change via aliasing, but for a compiler writer that's pretty risky. It can even be risky for a human programmer to declare a variable const in, say, a large project, where he doesn't know the behavior of the whole system, the OS, or a library well enough to really know the variable won't change.
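To make the aliasing point concrete, a small sketch (the library function is hypothetical):

void library_call(int* p); // defined in foreign code the compiler cannot analyze

void example()
{
    int x = 0;
    const int& view = x;  // a "const" view of x
    library_call(&x);     // may write through the pointer: view changes underneath
}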
Even if a variable is declared const, that doesn't mean some badly written code can't overwrite it:
// g++ -o foo foo.cc
#include <iostream>

void const_func(const int& a, int* b)
{
    b[0] = 2;
    b[1] = 2; // undefined behavior: writes past b and, with a typical stack layout, clobbers a
}

int main() {
    int a = 1;
    int b = 3;
    std::cout << a << std::endl;
    const_func(a, &b);
    std::cout << a << std::endl;
}
output:
1
2
To expand on my comments, that book's text is unclear, which obfuscates the issue.
As I commented, that book is trying to say, "let's get an infinite number of monkeys to write every conceivable C++ function which could ever be written. There will be cases where if we pick a variable that (some particular function the monkeys wrote) uses, we can't work out whether the function will change that variable."
Of course for some (even many) functions in any given application, this can be determined by the compiler, and very easily. But not for all (or necessarily most).
This function can be easily so analysed:
static int global;
void foo()
{
}
"foo" clearly does not modify "global". It doesn't modify anything at all, and a compiler can work this out very easily.
This function cannot be so analysed:
#include <cstdlib>

static int global;

int foo()
{
    if ((rand() % 100) > 50)
    {
        global = 1;
    }
    return 1;
}
Since "foo"'s actions depends on a value which can change at runtime, it patently cannot be determined at compile time whether it will modify "global".
This whole concept is far simpler to understand than computer scientists make it out to be. If the function can do something different based on things can change at runtime, then you can't work out what it'll do until it runs, and each time it runs it may do something different. Whether it's provably impossible or not, it's obviously impossible.
In embedded programming, for example, #define GLOBAL_CONSTANT 42 is preferred to const int GLOBAL_CONSTANT = 42; for the following reasons:
it does not need place in RAM (which is usually very limited in microcontrollers, and µC applications usually need a large number of global constants)
const not only needs a storage place in flash, but the compiler also generates extra code at the start of the program to copy it.
Against all these advantages of using #define, what are the major advantages of using const?
In a non-µC environment memory is usually not such a big issue, and const is useful because it can be used locally, but what about global constants? Or is the answer just "we should never ever ever use global constants"?
Edit:
The examples might have caused some misunderstanding, so I have to state that they are in C. If the C compiler generated the exact same code for the two, I think that would be an error, not an optimization.
I just extended the question to C++ without thinking much about it, in the hopes of getting new insights, but it was clear to me, that in an object-oriented environment there is very little space for global constants, regardless whether they are macros or consts.
Are you sure your compiler is too dumb to optimize your constant by inserting its value where it is needed instead of putting it into memory? Compilers usually are good in optimizations.
And the main advantage of constants versus macros is that constants have scope. Macros are substituted everywhere with no respect for scope or context. And it leads to really hard to understand compiler error messages.
Also debuggers are not aware of macros.
More can be found here
The answer to your question varies for C and C++.
In C, const int GLOBAL_CONSTANT is not a constant, so the primary way to define a true constant in C is by using #define.
In C++, one of the major advantages of using const over #define is that #defines don't respect scope, so there is no way to create a class-scoped constant; const variables, on the other hand, can be scoped in classes.
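A minimal sketch of that scoping difference (names are illustrative):

#define BUFFER_CAPACITY 64          // visible everywhere after this line, respects no scope

struct Buffer {
    static const int capacity = 64; // class-scoped: referred to as Buffer::capacity
};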
Apart from that there are other subtle advantages like:
Avoiding weird magic numbers in compilation errors:
If you are using #define, the preprocessor replaces the name with its value before compilation, so if you receive an error during compilation it will be confusing: the error message won't refer to the macro name but to its value, the value will appear out of nowhere, and one can waste a lot of time tracking it down in the code.
Ease of Debugging:
For the same reasons mentioned in the previous point, a #define provides no real help while debugging.
Another reason that hasn't been mentioned yet is that const variables allow the compiler to perform explicit type-checking, but macros do not. Using const can help prevent subtle data-dependent errors that are often difficult to debug.
I think the main advantage is that you can change the constant without having to recompile everything that uses it.
Since a macro change will effectively modify the contents of the file that use the macro, recompilation is necessary.
In C the const qualifier does not define a constant but instead a read-only object:
#define A 42 // A is a constant
const int a = 42; // a is not constant
A const object cannot be used where a real constant is required, for example:
static int bla1 = A; // OK, A is a constant
static int bla2 = a; // compile error, a is not a constant
Note that this is different in C++ where the const really qualifies an object as a constant.
The only problems you list with const sum up as "I've got the most incompetent compiler I can possibly imagine". The problems with #define, however, are universal- for example, no scoping.
There's no reason to use #define instead of a const int in C++. Any decent C++ compiler will substitute the constant value from a const int in the same way it does for a #define where it is possible to do so. Both take approximately the same amount of flash when used the same way.
Using a const does allow you to take the address of the value (where a macro does not). At that point, the behavior obviously diverges from the behavior of a Macro. The const now needs a space in the program in both flash and in RAM to live so that it can have an address. But this is really what you want.
The overhead here is typically going to be an extra 8 bytes, which is tiny compared to the size of most programs. Before you get to this level of optimization, make sure you have exhausted all other options like compiler flags. Using the compiler to carefully optimize for size and not using things like templates in C++ will save you a lot more than 8 bytes.
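A short sketch of the point where the behaviors diverge (illustrative):

#define LIMIT 42
const int limit = 42;

// const int* p = &LIMIT;  // will not compile: the macro expands to the literal 42
const int* p = &limit;     // fine, but limit now needs an address, hence storage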
In many languages you're allowed to declare a variable and use it before initializing it.
For example, in C++, you can write a snippet such as:
int x;
cout << x;
This would of course return unpredictable (well, unless you knew how your program was mapping out memory) results, but my question is, why is this behavior allowed by compilers?
Is there some application for or efficiency that results from allowing the use of uninitialized memory?
edit: It occurred to me that leaving initialization up to the user would minimize writes for memory mediums that have limited lifespans (write-cycles). Just a specific example under the aforementioned heading of 'performance'. Thanks.
My thoughts (and I've been wrong before, just ask my wife) are that it's simply a holdover from earlier incarnations of the language.
Early versions of C did not allow you to declare variables anywhere you wanted in a function, they had to be at the top (or maybe at the start of a block, it's hard to remember off the top of my head since I rarely do that nowadays).
In addition, you have the understandable desire to set a variable only when you know what it should be. There's no point in initialising a variable to something if the next thing you're going to do with it is simply overwrite that value (that's where the performance people are coming from here).
That's why it's necessary to allow uninitialised variables though you still shouldn't use them before you initialise them, and the good compilers have warnings to let you know about it.
In C++ (and later incarnations of C) where you can create your variable anywhere in a function, you really should create it and initialise it at the same time. But that wasn't possible early on. You had to use something like:
void fn(void) {
    int x, y;
    /* Do some stuff to set y */
    x = y + 2;
    /* Do some more stuff */
}
Nowadays, I'd opt for:
void fn(void) {
    int y;
    /* Do some stuff to set y */
    int x = y + 2;
    /* Do some more stuff */
}
The oldest excuse in programming : it improves performance!
edit: I read your comments and I agree - years ago the focus of performance was on the number of CPU cycles. My first C compiler was traditional C (the one prior to ANSI C) and it allowed all sorts of abominations to compile. In these modern times, performance is about the number of customer complaints. As I tell the new graduates we hire: 'I don't care how quickly the program gives a wrong answer.' Use all the tools of modern compilers and development, write fewer bugs, and everyone can go home on time.
Some API's are designed to return data via variables passed in, e.g:
bool ok;
int x = convert_to_int(some_string, &ok);
It may set the value of 'ok' inside the function, so initializing it is a waste.
(I'm not advocating this style of API.)
The short answer is that for more complicated cases, the compiler may not be able to determine whether a variable is used before initialization or not.
eg.
int x;
if (external_function() == 2) {
    x = 42;
} else if (another_function() == 3) {
    x = 49;
}
yet_another_function(&x);
cout << x; // Is this a use-before-definition?
Good compilers will give a warning message if they can spot a probable use-before-initialize error, but for complex cases - especially involving multiple compilation units - there's no way for the compiler to tell.
As to whether a language should allow the concept of uninitialized variables, that is another matter. C# is slightly unusual in defining every variable as being initialized with a default value. Most languages (C++/C/BCPL/FORTRAN/Assembler/...) leave it up to the programmer as to whether initialization is appropriate. Good compilers can sometimes spot unnecessary initializations and eliminate them, but this isn't a given. Compilers for more obscure hardware tend to have less effort put into optimization (which is the hard part of compiler writing) so languages targeting such hardware tend not to require unnecessary code generation.
Perhaps in some cases, it is faster to leave the memory uninitialised until it is needed (for example, if you return from a function before using a variable). I typically initialise everything anyway, I doubt it makes any real different in performance. The compiler will have its own way of optimising away useless initialisations, I'm sure.
Some languages have default values for some variable types. That being said I doubt there are performance benefits in any language to not explicitly initializing them. However the downsides are:
The possibility that it must be initialized and, if that is not done, you risk a crash
Unanticipated values
Lack of clarity and purpose for other programmers
My suggestion is always initialize your variables and the consistency will pay for itself.
Depending on the size of the variable, leaving the value uninitialized in the name of performance might be regarded as micro-optimization. Relatively few programs (when compared to the broad array of software types out there) would be negatively affected by the extra two-or-three cycles necessary to load a double; however, supposing the variable were quite large, delaying initialization until it is abundantly clear initialization is required is probably a good idea.
for loop style
int i;
for (i = 0; i < something; ++i) {
    .......
}
do something with i
and you would prefer the for loop to look like for(init;condition;inc)
here's one with an absolute necessity
bool b;
do {
    ....
    b = g();
    ....
} while (!b);
horizontal screen real estate with long nested names
longer lived scoping for debugging visibility
very occasionally performance
This is a C++ disaster; check out this code sample:
#include <iostream>

void func(const int* shouldnotChange)
{
    int* canChange = (int*) shouldnotChange;
    *canChange += 2;
    return;
}

int main() {
    int i = 5;
    func(&i);
    std::cout << i;
    return 0;
}
The output was 7!
So, how can we be sure of the behavior of C++ functions, if a function is able to change a supposed-to-be-constant parameter!?
EDIT: I am not asking how I can make sure that my own code works as expected; rather, I am wondering how to trust that someone else's function (for instance some function in some DLL library) isn't going to change a parameter or possess some such behavior...
Based on your edit, your question is "how can I trust 3rd party code not to be stupid?"
The short answer is "you can't." If you don't have access to the source, or don't have time to inspect it, you can only trust the author to have written sane code. In your example, the author of the function declaration specifically claims that the code will not change the contents of the pointer by using the const keyword. You can either trust that claim, or not. There are ways of testing this, as suggested by others, but if you need to test large amounts of code, it will be very labour intensive. Perhaps moreso than reading the code.
If you are working on a team and you have a team member writing stuff like this, then you can talk to them about it and explain why it is bad.
By writing sane code.
If you write code you can't trust, then obviously your code won't be trustworthy.
Similar stupid tricks are possible in pretty much any language. In C#, you can modify the code at runtime through reflection. You can inspect and change private class members. How do you protect against that? You don't, you just have to write code that behaves as you expect.
Apart from that, write a unit test verifying that the function does not change its parameter.
The general rule in C++ is that the language is designed to protect you from Murphy, not Machiavelli. In other words, it's meant to keep a maintenance programmer from accidentally changing a variable marked as const, not to keep someone from deliberately changing it, which can be done in many ways.
A C-style cast means all bets are off. It's sort of like telling the compiler "Trust me, I know this looks bad, but I need to do this, so don't tell me I'm wrong." Also, what you've done is actually undefined. Casting off const-ness and then modifying the value means the compiler/runtime can do anything, including e.g. crash your program.
The only thing I can suggest is to allocate the variable shouldnotChange from a memory page that is marked as read-only. This will force the OS/CPU to raise an error if the application attempts to write to that memory. I don't really recommend this as a general method of validating functions, just as an idea you may find useful.
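On POSIX systems, a minimal sketch of that idea might look like this (error handling omitted; an illustration, not a recommendation):

#include <sys/mman.h>
#include <unistd.h>

int main()
{
    // Allocate a whole page; mprotect() works at page granularity.
    long page = sysconf(_SC_PAGESIZE);
    void* mem = mmap(nullptr, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    int* shouldnotChange = static_cast<int*>(mem);
    *shouldnotChange = 5;

    // Make the page read-only before handing the pointer to untrusted code.
    mprotect(mem, page, PROT_READ);

    // func(shouldnotChange); // any write inside func would now raise SIGSEGV
    return 0;
}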
The simplest way to enforce this would be to just not pass a pointer:
void func(int shouldnotChange);
Now a copy will be made of the argument. The function can change the value all it likes, but the original value will not be modified.
If you can't change the function's interface then you could make a copy of the value before calling the function:
int i = 5;
int copy = i;
func(&copy);
Don't use C style casts in C++.
We have four cast operators in C++ (listed here in order of danger; static_cast appears twice because its danger depends on how it is used):
static_cast<> Safe (When used to 'convert numeric data types').
dynamic_cast<> Safe (but throws exceptions/returns NULL)
const_cast<> Dangerous (when removing const).
static_cast<> Very Dangerous (When used to cast pointer types. Not a very good idea!!!!!)
reinterpret_cast<> Very Dangerous. Use this only if you understand the consequences.
You can always tell the compiler that you know better than it does and the compiler will accept you at face value (the reason being that you don't want the compiler getting in the way when you actually do know better).
Power over the compiler is a two-edged sword. If you know what you are doing, it is a powerful tool that will help; but if you get things wrong, it will blow up in your face.
Unfortunately, the compiler has reasons for most things, so if you override its default behavior then you had better know what you are doing. Casting is one of those things. A lot of the time it is fine. But if you start casting away const(ness), then you'd better know what you are doing.
(int*) is the casting syntax from C. C++ supports it fully, but it is not recommended.
In C++ the equivalent cast should've been written like this:
int* canChange = static_cast<int*>(shouldnotChange);
And indeed, if you wrote that, the compiler would NOT have allowed such a cast.
What you're doing is writing C code and expecting the C++ compiler to catch your mistake, which is sort of unfair if you think about it.
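To see that concretely, a short sketch (the diagnostic wording is paraphrased and varies by compiler):

void func(const int* shouldnotChange)
{
    // error: static_cast from 'const int*' to 'int*' casts away qualifiers
    int* canChange = static_cast<int*>(shouldnotChange);
}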