#include <iostream>
using namespace std;

int main() {
    cout << (int *)16 - (int *)10;
    return 0;
}
This code produces the output 5, and I cannot understand why.
There is no point in rationalising about this: you do not have an array whose elements live at addresses 10 and 16 in memory. Therefore, the subtraction is undefined and anything can happen.
Speaking practically: pointer subtraction yields the distance in elements, i.e. the byte difference divided by sizeof(int), and here the byte difference of 6 is [probably] not a multiple of sizeof(int), so your compiler appears to be chickening out and returning abject nonsense.
Fortunately, you never have a reason to write this code in your projects, so it doesn't matter.
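For contrast, here is a minimal sketch (array and variable names of my own choosing) of the one case where pointer subtraction is defined: both pointers point into, or one past the end of, the same array, and the result is the distance in elements.

#include <iostream>

int main() {
    int arr[8] = {};
    int* first = &arr[2];
    int* last = &arr[7];

    // Defined: both pointers refer to elements of the same array,
    // so the subtraction yields the element distance, 7 - 2 = 5.
    std::cout << last - first << '\n';  // prints 5

    return 0;
}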
I wrote some code for entering elements and displaying the array at the same time. The code works, but since char A[4] is static memory, why doesn't it terminate/throw an error after entering more than four elements? Code:
#include <iostream>
#include <cstdlib>   // for system()
using namespace std;

void display(char arr[], int n)
{
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
    return;
}

int main()
{
    char A[4];
    int i = 0;
    char c;
    for (;;)
    {
        cout << "Enter an element (enter p to end): ";
        cin >> c;
        if (c == 'p')
            break;
        A[i] = c;   // once i >= 4 this writes past the end of A
        i++;
        display(A, i);
        system("clear");
    }
    return 0;
}
Writing outside of an array, by using an index that is negative or too big, is "undefined behavior", and that does not mean the program will halt with an error.
Undefined behavior means that anything can happen, and the most dangerous form this can take (and it happens often) is that nothing happens; i.e. the program seems to be "working" anyway.
However, later, possibly a million executed instructions later, a perfectly good and valid section of code may start behaving in absurd ways.
The C++ language has been designed around the idea that performance is extremely important and that programmers make no mistakes; therefore the runtime doesn't waste time checking whether array indexes are correct (what would be the point, if programmers never use invalid ones? It would just be a waste of time).
If you write outside of an array, what normally happens is that you overwrite other things in bad ways, possibly breaking complex data structures containing pointers or other indexes that later trigger strange behaviors. This in turn gets more code to do even crazier things, until finally some code does something so bad that even the OS (which doesn't know what the program is meant to do) can tell the operation is nonsense (for example, because you are trying to write outside the whole address space that was given to the process) and kills your program (segfault).
Unfortunately, inspecting where the segfault comes from will only reveal the last victim: code that is itself correct but was using a data structure corrupted by someone else, not the first offender.
Just don't make mistakes, ok? :-)
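If you actually want out-of-range access to be caught, you have to either check the index yourself or use a container that does it for you. A minimal sketch of the question's loop rewritten with std::vector (a hypothetical rewrite, not the asker's code), which grows on demand and therefore has no fixed bound to overrun:

#include <iostream>
#include <vector>

int main()
{
    std::vector<char> A;   // grows as needed; no fixed size to overflow
    char c;
    for (;;)
    {
        std::cout << "Enter an element (enter p to end): ";
        std::cin >> c;
        if (c == 'p')
            break;
        A.push_back(c);    // appends within the vector's own storage
        for (char e : A)
            std::cout << e << ' ';
        std::cout << '\n';
    }
    return 0;
}

Alternatively, keeping a fixed-size std::array<char, 4> and indexing it with .at(i) would throw std::out_of_range on the fifth element instead of silently corrupting memory.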
The code works, but since char A[4] is static memory, why doesn't it terminate/throw an error after entering more than four elements?
The code has a bug. It will not work correctly until you fix the bug. It really is that simple.
I initialized a variable i with the value 3, then put the statement (++i)++ in my code. In C, this produces the error "lvalue required as increment operand". But if I put the same code in C++, it compiles, performs a double increment, and prints 5. However, one of my friends tried it on his compiler using C and it gave the output 4.
//using c
#include <stdio.h>

int main()
{
    int i = 3;
    (++i)++;
    printf("%d", i);
    return 0;
}
//using c++
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int i = 3;
    (++i)++;
    cout << i << endl;
    return 0;
}
I am using GNU GCC compiler.
This is known to be undefined behavior. Syntactically the program is correct in C++, and the compiler produces some binary code... but the standard allows it to produce ANY code, even code that returns 100 or formats your disk. In real situations you may observe very strange, abnormal scenarios; for example, the compiler may drop all of the code after your (++i)++ statement, because the standard allows it to do whatever it wants once the program reaches undefined behavior. In your case that would mean no output at all (or the program printing "Hello World" instead of the integer value). In C, on the other hand, the result of ++i is not an lvalue, which is why your C compiler rejects the post-increment outright.
I believe that you are just conducting an experiment. The result is: both your compiler and your friend's are correct.
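If the intent was simply to increment i twice, a minimal sketch of a well-defined way to write it (shown in C++ here; the same applies in C):

#include <iostream>

int main()
{
    int i = 3;
    ++i;   // first increment: i == 4
    i++;   // second increment: i == 5; separate statements are
           // fully sequenced, so there is no undefined behavior
    // or, equivalently: i += 2;
    std::cout << i << std::endl;   // prints 5
    return 0;
}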
I decided to compile and run this piece of code (out of curiosity), and the G++ compiler successfully compiled the program. I was expecting a compile error or a runtime error, or at least the values of a and b to be swapped (as 5 > 4), since the std::sort() function is being called with two pointers to integers.
(Please note that I know this is not good practice; I was basically just playing with pointers.)
#include <iostream>
#include <algorithm>

int main() {
    int a{5};
    int b{4};
    int c{1};
    int* aptr = &a;
    int* bptr = &b;
    std::sort(aptr, bptr);
    std::cout << a << ' ' << b << ' ' << c << '\n';
    return 0;
}
However, upon executing the program, the output I got was this:
5 4 1
My question is, how did C++ allow this call to the std::sort() function? And how did it not end up actually sorting everything between the memory addresses of a and b (potentially including even garbage values in memory)?
I mean, if we tried this with a C-style array, like std::sort(arr, arr+n), it would successfully sort the array, because arr and arr+n are basically just pointers, where n is the size of the array and arr points to the first element.
(I'm sorry if this question sounds stupid. I'm still learning C++.)
Your program is ill-formed, no diagnostic required (IFNDR): you passed pointers that do not form a range to a standard algorithm.
Any behaviour whatsoever by the program conforms to the C++ standard.
Compilers optimize around the fact that pointers to unrelated objects are incomparable and that their difference is undefined. A sort here would trip over so much UB that the optimizer could eliminate branches like crazy: any branch that necessarily executes UB can be removed and replaced with the alternative, and whatever code results is a legal outcome.
Good C++ coding style thus focuses on avoiding UB and IFNDR code.
C++ accepts your code because it is syntactically valid. But it doesn't work, because sort(it1, it2) expects it1 to be an iterator to the start of a range and it2 an iterator to the end of the same range. You have provided pointers into two unrelated objects, which can yield either of the following situations:
positionof(it1) < positionof(it2): suppose that in the computer's memory a and b happen to be laid out like this: 5(a), -1, -2, 10, 4(b). Then the sort function will sort everything from 5 to 4, resulting in: -2(a), -1, 4, 5, 10(b).
positionof(it1) > positionof(it2) (your machine's case): the sort function will do nothing, as left_position > right_position.
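For comparison, a minimal sketch of a call where the two pointers do form a valid range, so std::sort has defined behaviour:

#include <algorithm>
#include <iostream>

int main()
{
    int arr[3] = {5, 4, 1};

    // arr and arr + 3 point into (and one past the end of) the
    // same array, so [arr, arr + 3) is a valid range.
    std::sort(arr, arr + 3);

    for (int v : arr)
        std::cout << v << ' ';   // prints: 1 4 5
    std::cout << '\n';
    return 0;
}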
Whenever I run the following code, I get garbage (unexpected) output on some online compilers, but if I use Code::Blocks I get the expected output. So my question is: why am I getting this kind of output?
For example, if I input
5 7
+ 5
- 10
- 20
+ 40
- 20
then I get
22 1
in Code::Blocks. But in the online compiler, it's something else.
#include <iostream>
#include <cstdlib>
using namespace std;

int main()
{
    int have, n, i;
    int kid = 0;
    cin >> n >> have;
    int line[n];
    for (i = 0; i < n; i++)
    {
        cin >> line[i];
        if (line[i] >= 0)
            have += line[i];
        else
        {
            if (have >= abs(line[i]))
                have += line[i];
            else
                kid++;
        }
    }
    cout << have << " " << kid << endl;
}
The main problem I can see in your code is this:
int line[n];
This is known as a VLA (variable-length array), and it is not supported in C++; it is valid in C. Most compilers still allow it as an extension, owing to C++'s C heritage, but it is not valid C++ code. In a previous question, I found out that clang supports designated initializers while gcc and vc++ did not; the reason is that some compilers, like clang, enable C99 extensions by default. My point is that just because the code compiles, it doesn't mean it's right.
If you compile with the -pedantic flag, you will see that the compiler warns you that this is a C99 feature; have a look at the rextester example here. As noted in the comments below, using -pedantic-errors in the compiler flags turns the warning into an error.
If you know the size of the array before run time, then you can use a fixed-size array, int line[4];, but if you don't, you need a dynamic array. std::vector is essentially a dynamic array that also handles memory for you; it's easy to use and very efficient: std::vector<int> line;
You can read more about the vector container here: http://www.cplusplus.com/reference/vector/vector/
Btw, I tried your code on rextester, ideone and repl.it and I got the same results: 22 1. I think what you are witnessing is undefined behaviour.
Also, you can qualify int n with constexpr and it'll be fine:
constexpr int n = 200;
int line[n]; // now it's ok
But this again means that you know the size of the array at compile time.
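Putting that advice together, a minimal sketch of the question's program with the VLA replaced by std::vector (a hypothetical rewrite; the logic is unchanged from the asker's code):

#include <iostream>
#include <cstdlib>
#include <vector>

int main()
{
    int have, n;
    int kid = 0;
    std::cin >> n >> have;
    std::vector<int> line(n);   // size chosen at run time: valid C++
    for (int i = 0; i < n; i++)
    {
        std::cin >> line[i];
        if (line[i] >= 0)
            have += line[i];
        else if (have >= std::abs(line[i]))
            have += line[i];
        else
            kid++;
    }
    std::cout << have << " " << kid << std::endl;
}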
#include <bits/stdc++.h>
using namespace std;

int main()
{
    vector<int> p;
    p.push_back(30);
    p.push_back(60);
    p.push_back(20);
    p.erase(p.end());
    for (int i = 0; i < p.size(); ++i)
        cout << p[i] << " ";
}
The above code throws an error, as it is understood that p.end() points to a null pointer.
While the code below runs fine and outputs 30 60. Can anyone explain this?
#include <bits/stdc++.h>
using namespace std;
#define mp make_pair

int main()
{
    vector<pair<int,int>> p;
    p.push_back(mp(30,2));
    p.push_back(mp(60,5));
    p.push_back(mp(20,7));
    p.erase(p.end());
    for (int i = 0; i < p.size(); ++i)
        cout << p[i].first << " ";
}
From std::vector::erase:
The iterator pos must be valid and dereferenceable. Thus the end() iterator (which is valid, but is not dereferencable) cannot be used as a value for pos.
So your code is invalid in both cases and invokes undefined behaviour. This means anything can happen, including crashing, appearing to work, or whatever else it wants.
The above code throws an error, as it is understood that ...
That code is not guaranteed to "throw an error". Rather, the behaviour is undefined. Throwing an error is one possible behaviour. If it does throw an error, you can count yourself lucky, as the bug might otherwise have been difficult to find.
... as it is understood that p.end() points to a null pointer.
No, p.end() does not "point to a null pointer". It points to the end of the vector, where the end of the vector is defined as the position after the last element.
While the code below runs fine and outputs 30 60. Can anyone explain this?
"Running fine" and "output is 30 60" are possible behaviours when behaviour is undefined. Everything is a possible behaviour when it is undefined. But of course, there is no guarantee that it will run fine. As far as the language is concerned, the program could just as well not be running fine tomorrow.
I have checked it on many online compilers but the output is the same!!
Output being the same on many online compilers is also possible behaviour when behaviour is undefined. There is no guarantee that some compiler would behave differently, just as there is no guarantee that they all behave the same.
No matter how many compilers you try, it is impossible to verify that a program is correct simply by executing it and observing the output you hoped for. The only way to prove a program correct is to verify that all pre-conditions and invariants imposed on the program are satisfied.
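For reference, if the goal was to remove the last element, a minimal sketch of the defined ways to do it (assuming the vector is non-empty):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> p{30, 60, 20};

    p.pop_back();              // removes the last element; or equivalently:
    // p.erase(p.end() - 1);   // end() - 1 is dereferenceable here, so it
                               // is a valid argument to erase

    for (std::size_t i = 0; i < p.size(); ++i)
        std::cout << p[i] << ' ';   // prints: 30 60
    std::cout << '\n';
    return 0;
}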