Issue related to const and pointers - C++

I have written 2 programs. Please go through both programs and help me understand why the variable 'i' and '*ptr' give different values.
//Program I:
//Assumption: Address of i = 100, address of ptr = 500
int i = 5;
int *ptr = (int *) &i;
*ptr = 99;
cout<<i; // 99
cout<<&i;// 100
cout<<ptr; // 100
cout<<*ptr; // 99
cout<<&ptr; // 500
//END_Program_I===============
//Program II:
//Assumption: Address of i = 100, address of ptr = 500
const int i = 5;
int *ptr = (int *) &i;
*ptr = 99;
cout<<i; // 5
cout<<&i;// 100
cout<<ptr; // 100
cout<<*ptr; // 99
cout<<&ptr; // 500
//END_PROGRAM_II===============
The confusion is: why does the variable i still come out as 5, even though *ptr == 99?

In the following three lines, you are modifying a constant:
const int i = 5;
int *ptr = (int *) &i;
*ptr = 99;
This is undefined behavior. Anything can happen. So don't do it.
As for what's happening underneath in this particular case:
Since i is const, the compiler assumes it will not change. Therefore, it simply inlines the 5 to each place where it is used. That's why printing out i shows the original value of 5.

All the answers will probably talk about "undefined behavior", since you are attempting the logical nonsense of modifying a constant.
Although this is technically correct, let me give you some hints about why this happens (about "how", see Mysticial's answer).
It happens because C++ is by design an "imperfectly specified language". The "imperfection" consists in a number of "undefined behaviors" that pervade the language specification.
In fact, the language designers deliberately chose that, in some circumstances, instead of saying "if you do this, you will get that" (which may be: you get this code, or you get this error), they prefer to say "we don't define what will happen".
This leaves compiler manufacturers free to decide what to do. And since there are many compilers working on many platforms, the optimal solution for one is not necessarily the optimal solution for another (which may target a machine with a different instruction set). Hence you, as a programmer, are left in the dramatic situation that you will never know what to expect, and even if you test it, you cannot trust the result of the test, since in another situation (compiling the same code with a different compiler, or just a different version of it, or for a different platform) the result may differ.
The "bad" thing here is that a compiler should warn when undefined behavior is hit (casting away const should be warned about as a potential bug, especially if the compiler performs const-inlining optimizations, since allowing a const to be changed is then nonsense), and most likely it does, if you specify the proper flag (perhaps -W4, -Wall, -pedantic, or similar, depending on the compiler you have).
In particular the line
int *ptr = (int *) &i;
should issue a warning like:
warning: removing cv-qualifier from &i.
So that, if you correct your program as
const int *ptr = (const int *) &i;
to satisfy the warning, you will get an error at
*ptr = 99;
as
error: *ptr is const
thus making the problem evident.
Moral of the story:
From a legal point of view, you wrote bad code since it is -by language definition- relying on undefined behavior.
From a moral point of view: the compiler behaved unfairly. Performing const-inlining (replacing cout << i with cout << 5) after accepting (int*)&i is a self-contradiction, and incoherent behavior should at least be warned about.
If it wants to do one thing, it must not accept the other, or vice versa.
So check if there is a flag you can set to be warned, and if not, report its unfairness to the compiler manufacturer: it didn't warn about its own contradiction.

const int i = 5;
Implies that the variable i is a const and it cannot/should not be changed; it is immutable, and changing it through a pointer results in Undefined Behavior.
Undefined Behavior means that any behavior is possible. Your program might seem to work as desired, or not, or it might even crash. All safe bets are off.
Remember the Rule:
It is Undefined Behavior to modify a const variable. Don't ever do it.

You're attempting to modify a constant through a pointer, which is undefined. This means anything unexpected can happen, from the correct output, to the wrong output, to the program crashing.

Related

How are these two pieces of code different?

I tried these lines of code and found the output surprising. I am expecting some reason related to initialisation, either in general or in the for loop.
1.)
int i = 0;
for(i++; i++; i++){
if(i>10) break;
}
printf("%d",i);
Output - 12
2.)
int i;
for(i++; i++; i++){
if(i>10) break;
}
printf("%d",i);
Output - 1
I expected the statements "int i = 0" and "int i" to be the same. What is the difference between them?
I expected the statements "int i = 0" and "int i" to be the same.
No, that was a wrong expectation on your part. If a variable is declared outside of a function (as a "global" variable), or if it is declared with the static keyword, it's guaranteed to be initialized to 0 even if you don't write = 0. But variables defined inside functions (ordinary "local" variables without static) do not have this guaranteed initialization. If you don't explicitly initialize them, they start out containing indeterminate values.
(Note, though, that in this context "indeterminate" does not mean "random". If you write a program that uses or prints an uninitialized variable, often you'll find that it starts out containing the same value every time you run your program. By chance, it might even be 0. On most machines, what happens is that the variable takes on whatever value was left "on the stack" by the previous function that was called.)
See also these related questions:
Non-static variable initialization
Static variable initialization?
See also section 4.2 and section 4.3 in these class notes.
See also question 1.30 in the C FAQ list.
Addendum: Based on your comments, it sounds like when you fail to initialize i, the indeterminate value it happens to start out with is 0, so your question is now:
"Given the program
#include <stdio.h>
int main()
{
int i; // note uninitialized
printf("%d\n", i); // prints 0
for(i++; i++; i++){
if(i>10) break;
}
printf("%d\n", i); // prints 1
}
what possible sequence of operations could the compiler be emitting that would cause it to compute a final value of 1?"
This can be a difficult question to answer. Several people have tried to answer it, in this question's other answer and in the comments, but for some reason you haven't accepted that answer.
That answer again is, "An uninitialized local variable leads to undefined behavior. Undefined behavior means anything can happen."
The important thing about this answer is that it says that "anything can happen", and "anything" means absolutely anything. It absolutely does not have to make sense.
The second question, as I have phrased it, does not really even make sense, because it contains an inherent contradiction, because it asks, "what possible sequence of operations could the compiler be emitting", but since the program contains Undefined behavior, the compiler isn't even obliged to emit a sensible sequence of operations at all.
If you really want to know what sequence of operations your compiler is emitting, you'll have to ask it. Under Unix/Linux, compile with the -S flag. Under other compilers, I don't know how to view the assembly-language output. But please don't expect the output to make any sense, and please don't ask me to explain it to you (because I already know it won't make any sense).
Because the compiler is allowed to do anything, it might be emitting code as if your program had been written, for example, as
#include <stdio.h>
int main()
{
int i; // note uninitialized
printf("%d\n", i); // prints 0
i++;
printf("%d\n", i); // prints 1
}
"But that doesn't make any sense!", you say. "How could the compiler turn for(i++; i++; i++) ... into just i++?" And the answer -- you've heard it, but maybe you still didn't quite believe it -- is that when a program contains undefined behavior, the compiler is allowed to do anything.
The difference is what you already observed. The first code initializes i, the other does not. Using an uninitialized value is undefined behaviour (UB) in C++. The compiler assumes UB does not happen in a correct program, and hence is allowed to emit code that does whatever.
Simpler example is:
int i;
i++;
The compiler knows that this i++ (reading an uninitialized i) cannot happen in a correct program, and it does not bother to emit correct output for incorrect input; hence when you run this code, anything could happen.
For further reading see here: https://en.cppreference.com/w/cpp/language/ub
There is a rule of thumb that (among other things) helps to avoid uninitialized variables. It is called Almost-Always-Auto, and it suggests using auto almost always. If you write
auto i = 0;
You cannot forget to initialize i, because auto requires an initializer to be able to deduce the type.
PS: C and C++ are two different languages with different rules. Your second code is UB in C++, but I cannot answer your question for C.

Out of bounds pointer arithmetic not detected?

According to Wikipedia and this, this code is undefined behavior:
#include <iostream>
int main(int, char**) {
int data[1] = {123};
int* p = data + 5; // undefined behavior
std::cout << *(p - 5) << std::endl;
}
Compiled with clang++-6.0 -fsanitize=undefined and executed, the undefined behavior is detected which is fantastic, I get this message:
ub.cpp:5:19: runtime error: index 5 out of bounds for type 'int [1]'
But when I don't use an array, the undefined behavior is not detectable:
#include <iostream>
int main(int, char**) {
int data = 123;
int* p = &data + 5; // undefined behavior
std::cout << *(p - 5) << std::endl;
}
The sanitizer detects nothing, even though this is still undefined behavior. Valgrind also does not show any problem. Any way to detect this undefined behavior?
Since I am never accessing any invalid data, this is not a duplicate of Recommended way to track down array out-of-bound access/write in C program.
The standard very clearly specifies that most forms of undefined behaviour are "no diagnostic required". Meaning that your compiler is under no obligation to diagnose UB (which would also be unreasonable, since it is very hard to do so in many cases). Instead, the compiler is allowed to just assume that you "of course did not" write any UB and generate code as if you didn't. And if you did, that's on you and you get to keep the broken pieces.
Some tools (like asan and ubsan, and turning your compiler's warning level up to 11) will detect some UB for you. But not all.
Your compiler implementors are not out to harm you. They do try to warn you of UB when they can. So, at the very least you should enable all warnings and let them help you as best they can.
One way to detect UB is to have intimate knowledge of the C++ standard and read code really carefully. But you cannot really do much better than that, plus letting some tools help you find the low-hanging fruit. You just have to know (all) the rules and know what you are doing.
There are no training wheels or similar in C++.

C++/Address Space: 2 Bytes per address?

I was just trying something and I was wondering how this could be. I have the following code:
int var1 = 132;
int var2 = 200;
int *secondvariable = &var2;
cout << *(secondvariable+2) << endl << sizeof(int) << endl;
I get the Output
132
4
So how is it possible that the second int is only 2 addresses higher? I mean, shouldn't it be 4 addresses higher? I'm currently on Win10 x64.
With cout << *(secondvariable+2) you don't print a pointer, you print the value at secondvariable[2], which is an invalid indexing and leads to undefined behavior.
If you want to print a pointer then drop the dereference and print secondvariable+2.
While you already are far in the field of undefined behaviour (see Some programmer dude's answer) due to indexing an array out of bounds (a single variable is considered an array of length 1 for such matters), some technical background:
Alignment! Compilers are allowed to place variables at addresses such that they can be accessed most efficiently. As you seem to have gotten valid output by adding 2*sizeof(int) to the second variable's address, you apparently reached the first one by accident. Apparently, the compiler decided to leave a gap between the two variables so that both can be aligned to addresses divisible by 8.
Be aware, though, that you don't have any guarantee for such alignment, different compilers might decide differently (or same compiler on another system), and alignment even might be changed via compiler flags.
On the other hand, arrays are guaranteed to occupy contiguous memory, so you would have gotten the expected result in the following example:
int array[2];
int* a0 = &array[0];
int* a1 = &array[1];
uintptr_t diff = reinterpret_cast<uintptr_t>(a1) - reinterpret_cast<uintptr_t>(a0);
std::cout << diff;
The cast to uintptr_t (which requires reinterpret_cast; alternatively, cast to char*) ensures that you get the address difference in bytes, not in units of sizeof(int)...
This is not how C++ works.
You can't "navigate" your scope like this.
Such pointer antics have completely undefined behaviour and shall not be relied upon.
You are not punching holes in tape now, you are writing a description of a program's semantics, that gets converted by your compiler into something executable by a machine.
Code to these abstractions and everything will be fine.

Undefined behaviour observed in C++/memory allocation

#include <iostream>
using namespace std;
int main()
{
int a=50;
int b=50;
int *ptr = &b;
ptr++;
*ptr = 40;
cout<<"a= "<<a<<" b= "<<b<<endl;
cout<<"address a "<<&a<<" address b= "<<&b<<endl;
return 0;
}
The above code prints :
a= 50 b= 50
address a 0x7ffdd7b1b710 address b= 0x7ffdd7b1b714
Whereas when I remove the following line from the above code
cout<<"address a "<<&a<<" address b= "<<&b<<endl;
I get output as
a= 40 b= 50
My understanding was that the stack grows downwards, so the second answer seems to be the correct one. I am not able to understand why the print statement would mess up the memory layout.
EDIT:
I forgot to mention, I am using 64 bit x86 machine, with OS as ubuntu 14.04 and gcc version 4.8.4
First of all, it's all undefined behavior. The C++ standard says that you may increment pointers only as long as you stay within array bounds (plus one element past the end), with some more exceptions for standard-layout classes, but that's about it. So, in general, snooping around with pointers is uncharted territory.
Coming to your actual code: since you never ask for its address, the compiler probably either just left a in a register, or even propagated it as a constant throughout the code. For this reason, a never touches the stack, and you cannot corrupt it through the pointer.
Notice anyhow that the compiler isn't restricted to pushing/popping variables on the stack in the order of their declaration; they are reordered in whatever order seems fit, and they can even move within the stack frame (or be replaced) throughout the function. A seemingly small change to the function may make the compiler alter the stack layout completely. So even comparing the addresses as you did says nothing about the direction of stack growth.
UB - You have taken a pointer to b, you move that pointer with ptr++, which means you are pointing at some unknown, unassigned memory, and you try to write to that memory region, which causes Undefined Behavior.
On VS 2008, stepping through it in the debugger raises a runtime check failure reporting that the stack around the variable was corrupted, which is very self-explanatory.

Is this compiler optimization inconsistency entirely explained by undefined behaviour?

During a discussion I had with a couple of colleagues the other day I threw together a piece of code in C++ to illustrate a memory access violation.
I am currently in the process of slowly returning to C++ after a long spell of almost exclusively using languages with garbage collection and, I guess, my loss of touch shows, since I've been quite puzzled by the behaviour my short program exhibited.
The code in question is as such:
#include <iostream>
using std::cout;
using std::endl;
struct A
{
int value;
};
void f()
{
A* pa; // Uninitialized pointer
cout<< pa << endl;
pa->value = 42; // Writing via an uninitialized pointer
}
int main(int argc, char** argv)
{
f();
cout<< "Returned to main()" << endl;
return 0;
}
I compiled it with GCC 4.9.2 on Ubuntu 15.04 with -O2 compiler flag set. My expectations when running it were that it would crash when the line, denoted by my comment as "writing via an uninitialized pointer", got executed.
Contrary to my expectations, however, the program ran successfully to the end, producing the following output:
0
Returned to main()
I recompiled the code with a -O0 flag (to disable all optimizations) and ran the program again. This time, the behaviour was as I expected:
0
Segmentation fault
(Well, almost: I didn't expect a pointer to be initialized to 0.) Based on this observation, I presume that when compiling with -O2 set, the fatal instruction got optimized away. This makes sense, since no further code accesses the pa->value after it's set by the offending line, so, presumably, the compiler determined that its removal would not modify the observable behaviour of the program.
I reproduced this several times and every time the program would crash when compiled without optimization and miraculously work, when compiled with -O2.
My hypothesis was further confirmed when I added a line, which outputs the pa->value, to the end of f()'s body:
cout<< pa->value << endl;
Just as expected, with this line in place, the program consistently crashes, regardless of the optimization level, with which it was compiled.
This all makes sense, if my assumptions so far are correct.
However, where my understanding breaks somewhat is in case where I move the code from the body of f() directly to main(), like so:
int main(int argc, char** argv)
{
A* pa;
cout<< pa << endl;
pa->value = 42;
cout<< pa->value << endl;
return 0;
}
With optimizations disabled, this program crashes, just as expected. With -O2, however, the program successfully runs to the end and produces the following output:
0
42
And this makes no sense to me.
This answer mentions "dereferencing a pointer that has not yet been definitely initialized", which is exactly what I'm doing, as one of the sources of undefined behaviour in C++.
So, is this difference in the way optimization affects the code in main(), compared to the code in f(), entirely explained by the fact that my program contains UB, and thus compiler is technically free to "go nuts", or is there some fundamental difference, which I don't know of, between the way code in main() is optimized, compared to code in other routines?
Your program has undefined behaviour. This means that anything may happen. The program is not covered at all by the C++ Standard. You should not go in with any expectations.
It's often said that undefined behaviour may "launch missiles" or "cause demons to fly out of your nose", to reinforce that point. The latter is more far-fetched, but the former is feasible: imagine your code is running at a nuclear launch site and the wild pointer happens to overwrite a piece of memory that starts global thermonuclear war.
Writing through unknown pointers has always been something that could have unknown consequences. What's nastier is a currently fashionable philosophy which suggests that compilers should assume that programs will never receive inputs that cause UB, and should thus optimize out any code that would test for such inputs, if such tests would not prevent UB from occurring.
Thus, for example, given:
uint32_t hey(uint16_t x, uint16_t y)
{
if (x < 60000)
{
launch_missiles();
return 0;
}
else
return x*y;
}
uint32_t wow(uint16_t x)
{
return hey(x, 40000);
}
a 32-bit compiler could legitimately replace wow with an unconditional call to launch_missiles without regard for the value of x, since x "can't possibly" be greater than 53687 (any value beyond that would cause the calculation of x*y to overflow). Even though the authors of C89 noted that the majority of compilers of that era would calculate the correct result in a situation like the above, since the Standard doesn't impose any requirements on compilers, hyper-modern philosophy regards it as "more efficient" for compilers to assume programs will never receive inputs that would necessitate reliance upon such things.