A very simple program testing the STL implementations of the Intel compiler on Linux (Intel 18.0) and Visual C++ on Windows (MSVC 2015).
Primarily: how do I make Linux fail fatally/crash the same way Windows does (since I have a huge codebase)? Technically, I was expecting a signal 11 from Linux, not a garbage value every time, for whatever vector size I tested.
Can someone explain what is happening under the hood (memory-allocation-wise, what the rules are, and whether it is implementation/platform/compiler dependent)? Just for my understanding.
#include "iostream"
#include "vector"
using namespace std;
int main(int argc,char* argv[])
{
vector<int> v;
//v.resize(5); (my bad, please ignore this, i was testing and posted incorrect version)
cout<<"Initial vector size: "<<v.size()<<endl;
for(int i=1;i<=5;++i)
{
v.push_back(i);
}
cout<<"size of vector after: "<<v.size()<<endl;
for(int j=5;j>=0;--j) // Notice my upper bound.
{
cout<< "printing " <<v[j]<<std::endl;
}
return 0;
}
Both compiled without problems, as one would expect. At runtime, Windows crashed with a nice message, "vector subscript out of range", while Linux printed some garbage value every time and continued.
Assuming you put the v.resize(5) in by mistake, here are some answers:
how do I make Linux fail fatally/crash the same way Windows does (since I have a huge codebase)?
Use std::vector::at(), which validates the index and throws an exception by design. std::vector::operator[] is not supposed to validate the index, and even if some platform does so in some configuration (the Windows compiler appears to have such validation in debug builds), you cannot rely on it everywhere.
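For illustration, here is a minimal sketch of the difference (assuming the same five-element vector as above). at() fails deterministically on every platform, either with a catchable exception or, if left uncaught, by terminating the program:

#include <iostream>
#include <stdexcept>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3, 4, 5};
    try
    {
        std::cout << v.at(5) << std::endl;   // index 5 is out of range: throws
    }
    catch (const std::out_of_range& e)
    {
        std::cerr << "caught: " << e.what() << std::endl;   // deterministic on every platform
    }
    return 0;
}

If you leave the exception uncaught, std::terminate is called and the process dies with a diagnostic, which gives you the fatal behavior on Linux as well.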
Technically, I was expecting a signal 11 from Linux, not a garbage value every time, for whatever vector size I tested.
That is a problem with your expectation. Passing an invalid index to std::vector::operator[] leads to undefined behaviour, and you cannot expect it to produce signal 11 or anything else in particular.
FYR:
std::vector::operator[]
Returns a reference to the element at specified location pos. No bounds checking is performed.
std::vector::at()
If pos is not within the range of the container, an exception of type std::out_of_range is thrown.
emphasis is mine.
As for memory allocation, it is implementation specific. Most implementations do not allocate memory for exactly 5 elements but reserve more in advance for efficiency, which is why your code on Linux did not crash but produced garbage: you read values from uninitialized memory. That can change at any moment, though, and you should not rely on this behavior in any way.
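For illustration, you can watch the over-allocation happen through capacity() (a sketch; the exact growth pattern is implementation specific):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v;
    for (int i = 1; i <= 5; ++i)
    {
        v.push_back(i);
        // capacity() >= size(); the surplus is raw, uninitialized storage,
        // which is where the garbage values you saw came from.
        std::cout << "size " << v.size() << ", capacity " << v.capacity() << std::endl;
    }
    return 0;
}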
Related
This question already has answers here:
Accessing an array out of bounds gives no error, why?
(18 answers)
Undefined, unspecified and implementation-defined behavior
(9 answers)
If I declare an array of size 4 and initialize it:

int arr[4] = {1, 2, 3, 4};

why is there no error when I do arr[4] = 5, or use any other out-of-range index, while the size of the array is 4?
Why is there no error, and what is the logic behind this? I know there is no error, but I cannot understand the logic behind it.
As opposed to some other languages, C++ does not perform runtime bounds checking on built-in arrays, as you have observed. The size of the array is not actually stored anywhere and is not available at runtime, so no such check can be performed. This was a conscious choice by the language creators to make the language as fast as possible, while still letting you opt into bounds checking with the solutions outlined below.
Instead, what happens is known as undefined behaviour, of which C++ has a lot. It must always be avoided, as the outcome is not guaranteed and could be different every time you run the program: it might overwrite something else, it might crash, etc.
You have several solution paths here:
Most commonly you will want to use std::array so that you can query its .size() and check the index before accessing (see the sketch after this list):
if (i < arr.size())
    arr[i] = value;
else
    // handle error
If you want to handle the error further down the line, you can use std::array and its .at() method, which will throw a std::out_of_range exception if the index is out of bounds: https://en.cppreference.com/w/cpp/container/array/at
Example: http://coliru.stacked-crooked.com/a/b90cbd04ce68fac4 (make sure you are using C++17 to run the code in the example).
If you just need the bounds checking for debugging, you can enable that option in your compiler, e.g. GCC's STL bounds checking (that also requires the use of std::array).
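A minimal self-contained sketch of the first two options (assuming C++11 or later for std::array):

#include <array>
#include <iostream>
#include <stdexcept>

int main()
{
    std::array<int, 4> arr = {1, 2, 3, 4};
    std::size_t i = 4;   // out of bounds on purpose: valid indices are 0..3

    // Option 1: check against size() before accessing.
    if (i < arr.size())
        arr[i] = 5;
    else
        std::cerr << "index " << i << " is out of bounds\n";

    // Option 2: let at() detect the error and throw.
    try
    {
        arr.at(i) = 5;
    }
    catch (const std::out_of_range& e)
    {
        std::cerr << "caught: " << e.what() << '\n';
    }
}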
This question already has answers here:
What is a buffer overflow and how do I cause one?
(12 answers)
#include <iostream>
using namespace std;

int main(void)
{
    char name[5];            // room for only 4 characters plus the terminating '\0'
    cout << "Name: ";
    cin.getline(name, 20);   // yet up to 19 characters may be written here
    cout << name;
}
Output:
Name: HelloWorld
HelloWorld
Shouldn't this give an error or something?
Also when I write an even longer string,
Name: HelloWorld Goodbye
HelloWorld Goodbye
cmd exits with an error.
How is this possible?
Compiler: G++ (GCC 7), Nuwen
OS: Windows 10
It's called a buffer overflow, and it is a common source of bugs and exploits. It's the developer's responsibility to ensure it doesn't happen. Character strings will be printed until they reach the first '\0' character.
The code produces "undefined behavior". This means anything might happen. In your case, the program works unexpectedly. It might, however, do something completely different with different compiler flags or on a different system.
Shouldn't this give an error or something?
No. The compiler cannot know that you will input a long string, so there cannot be a compiler error. No runtime exception is thrown here either. It is up to you to make sure the program can handle long strings.
Your code has encountered UB, also known as undefined behaviour, which, as Wikipedia defines it, is the result of executing computer code whose behavior is not prescribed by the language specification to which the code adheres. It usually occurs when you do not define variables properly, in this case a too-small char array.
Even the -Wall flag will not give any warning, so you can use tools like Valgrind and GDB to detect memory leaks and buffer overflows.
You can check those questions:
Array index out of bound in C
No out of bounds error
They have competent answers.
My short answer, based on those already given in the questions I posted:
Your code invokes undefined behavior (a buffer overflow), so it doesn't give an error when you run it once, but some other time it may. It's a matter of chance.
When you enter a longer string, you actually corrupt the program's memory (the stack), i.e. you overwrite memory that should contain program-related data with your data, and so the return code of your program ends up different from 0, which is interpreted as an error. The longer the string, the higher the chance of screwing things up (sometimes even short strings do).
You can read more here: https://en.wikipedia.org/wiki/Buffer_overflow
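For reference, here is a version of the program that cannot overflow, using std::string instead of a fixed-size buffer (a sketch):

#include <iostream>
#include <string>

int main()
{
    std::string name;               // grows as needed; no fixed-size buffer
    std::cout << "Name: ";
    std::getline(std::cin, name);   // reads the whole line safely
    std::cout << name << std::endl;
}

If you must keep a char array, at least pass its real size, cin.getline(name, sizeof(name)); then overlong input is truncated (and the stream's failbit is set) instead of overflowing the buffer.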
Just out of curiosity, I'm trying to generate a stack overflow. This code generates a stack overflow according to the OP, but when I run it on my machine, it generates a segmentation fault:
#include <iostream>
#include <cstdlib>   // for system()
using namespace std;

int num = 11;
unsigned long long int number = 22;

int Divisor()
{
    int result;
    result = number % num;
    if (result == 0 && num < 21)
    {
        num + 1;   // note: this has no effect (presumably num += 1 was intended)
        Divisor();
        if (num == 20 && result == 0)
        {
            return number;
        }
    }
    else if (result != 0)
    {
        number++;
        Divisor();
    }
    // note: control can fall off the end without returning a value
}

int main()
{
    Divisor();
    cout << endl << endl;
    system("PAUSE");
    return 0;
}
Also, according to this post, some examples there should do the same. Why is it that I get segmentation faults instead?
Why is it that I get segmentation faults instead?
The segmentation fault you're seeing is a side effect of the stack overflow. The stack overflow is the cause; the segmentation fault is the result.
From the Wikipedia article for "stack overflow" (emphasis mine):
.... When a program attempts to use more space than is available on the call stack (that is, when it attempts to access memory beyond the call stack's bounds, which is essentially a buffer overflow), the stack is said to overflow, typically resulting in a program crash.
A stack overflow can lead to the following errors:
SIGSEGV (segmentation violation) signal for the process.
SIGILL (illegal instruction) signal.
SIGBUS (bus error) signal, for an access to an invalid address.
For more read Program Error Signals. Since the behavior is undefined any of the above can come up on different systems/architectures.
You are essentially asking: what is the behavior of undefined behavior?
The answer is: undefined behavior is behavior which is not defined. Anything might happen.
Researching why you get a certain undefined behavior on a certain system is most often a pointless exercise.
Undefined, unspecified and implementation-defined behavior
In the case of stack overflow, the program might overwrite other variables in RAM, corrupt the running function's own return address, attempt to modify memory outside its given address range, and so on. Depending on the system, you might get hardware exceptions and various error signals such as SIGSEGV (on POSIX systems), sudden program crashes, "program seems to be working fine", or something else.
The other answers posted are all correct.
However, if the intent of your question is to understand why you do not see a printed error stating that a stack overflow has occurred, the answer is that some run-time libraries explicitly detect and report stack overflows, while others do not, and simply crash with a segfault.
In particular, it looks like at least some versions of Windows detect stack overflows and turn them into exceptions, since the documentation suggests you can handle them.
A stack overflow is a cause, a segmentation fault is the result.
On Linux and other Unix-like systems, a segmentation fault may be the result, among other things, of a stack overflow. You don't get any specific information that the program encountered a stack overflow.
In the first post you linked, the person is running the code on Windows, which may behave differently and, e.g., detect a stack overflow specifically.
I guess you're using a compiler that doesn't have stack checking enabled.
Stack checking is a rather simple mechanism: it kills the program, stating that a stack overflow happened, as soon as the stack pointer flies past the stack bound. It is often disabled for optimization purposes, because a program will almost certainly crash on a stack overflow anyway.
Why a segfault? Well, without stack checking enabled, your program doesn't stop after using up the stack; it continues right into unrelated (and quite often protected) memory, which it tries to modify to use as another stack frame for a new function invocation. Madness ensues, and a segfault happens.
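For reference, even the most minimal unbounded recursion typically shows up on Linux as a plain segmentation fault rather than a named stack-overflow error (a sketch; the outcome is undefined behaviour, and an optimizer may turn the tail call into a loop, so build without optimization, e.g. g++ -O0):

#include <iostream>

void recurse(unsigned long long depth)
{
    // Each call pushes a new stack frame; there is no base case,
    // so the stack is eventually exhausted.
    if (depth % 100000 == 0)
        std::cout << "depth " << depth << std::endl;
    recurse(depth + 1);
}

int main()
{
    recurse(0);   // on Linux this usually dies with "Segmentation fault"
}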
Recently I ran into a memory release problem. First, below is the C code:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int *p = (int *)malloc(5 * sizeof(int));
    int i;
    for (i = 0; i < 5; i++)
        p[i] = i;
    p[i] = i;   /* out-of-bounds write: i is 5 here */
    for (i = 0; i < 6; i++)
        printf("[%p]:%d\n", p + i, p[i]);
    free(p);
    printf("The memory has been released.\n");
}
Apparently, there is a memory-out-of-range problem here. When I use the VS2008 compiler, it gives the following output and some errors about the memory release:
[00453E80]:0
[00453E84]:1
[00453E88]:2
[00453E8C]:3
[00453E90]:4
[00453E94]:5
However, when I use the gcc 4.7.3 compiler under Cygwin, I get the following output:
[0x80028258]:0
[0x8002825c]:1
[0x80028260]:2
[0x80028264]:3
[0x80028268]:4
[0x8002826c]:51
The memory has been released.
Apparently the code runs normally, but 5 is not written to that memory.
So there may be some differences between VS2008 and gcc in how they handle these problems.
Could you give me a professional explanation of this? Thanks in advance.
This is normal, as you never allocated any data in the memory space of p[5]. The program will just print whatever data happened to be stored in that space.
There's no deterministic "explanation on this". Writing data into the uncharted territory past the allocated memory limit causes undefined behavior. The behavior is unpredictable. That's all there is to it.
It is still strange, though, to see that 51 printed there. Typically GCC will also print 5, but then fail with a memory corruption message at free. How you managed to make this code print 51 is not exactly clear. I strongly suspect that the code you posted is not the code you ran.
It seems that you have multiple questions, so, let me try to answer them separately:
As pointed out by others above, you write past the end of the array, so once you have done that you are in "undefined behavior" territory, and this means that anything could happen, including printing 5, 6 or 0xdeadbeaf, or blowing up your PC.
In the first case (VS2008), free appears to report an error message on standard output. It is not obvious to me what this error message is, so it is hard to explain exactly what is going on, but you asked later in a comment how VS2008 could know the size of the memory you release. Typically, if you allocate memory and store it in pointer p, many memory allocators (the malloc/free implementation) store the size of the allocated memory at p[-1]. In practice, it is also common to store a special value (say, 0xdeadbeaf) at address p[p[-1]]. This "canary" is checked upon free to see whether you have written past the end of the array. To summarize, your 5*sizeof(int) array is probably at least 5*sizeof(int) + 2*sizeof(char*) bytes long, and the memory allocator used by code compiled with VS2008 has quite a few such checks built in.
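To make the canary idea concrete, here is a toy sketch of such a scheme (purely illustrative; real allocators differ in layout and detail):

#include <cstdio>
#include <cstdlib>
#include <cstring>

static const size_t CANARY = 0xdeadbeef;   // guard value placed just past the block

// Toy wrappers: store the block size in front of the user data
// and a canary right after it, then verify the canary on "free".
void* checked_malloc(size_t size)
{
    char* raw = (char*)malloc(sizeof(size_t) + size + sizeof(size_t));
    memcpy(raw, &size, sizeof(size_t));                            // size lives "at p[-1]"
    memcpy(raw + sizeof(size_t) + size, &CANARY, sizeof(size_t));  // canary past the end
    return raw + sizeof(size_t);
}

void checked_free(void* p)
{
    char* raw = (char*)p - sizeof(size_t);
    size_t size, guard;
    memcpy(&size, raw, sizeof(size_t));
    memcpy(&guard, raw + sizeof(size_t) + size, sizeof(size_t));
    if (guard != CANARY)
        fprintf(stderr, "heap corruption detected!\n");   // an overflow smashed the canary
    free(raw);
}

int main()
{
    int* p = (int*)checked_malloc(5 * sizeof(int));
    p[5] = 5;            // out-of-bounds write lands on the canary
    checked_free(p);     // reports the corruption
    return 0;
}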
In the case of gcc, I find it surprising that you get 51 printed. If you want to investigate why that is exactly, I would recommend getting an asm dump of the generated code as well as running this under a debugger, to check whether 5 is actually written past the end of the array (gcc could well have decided not to generate that code because it is "undefined") and, if it is, to put a watchpoint on that memory location to see who overrides it, when, and why.
This question already has answers here:
Accessing an array out of bounds gives no error, why?
(18 answers)
Why does this work without any errors?
int array[2];
array[5] = 21;
cout << array[5];
It printed out 21 just fine. But check this out! I changed 5 to 46 and it still worked, but when I put 47, it didn't print anything and showed no errors anywhere. What's up with that!?!?
Because it's simply undefined behaviour (there are no bounds checks on arrays in C++). Anything can happen.
Simply put, array[5] is equivalent to *(&array[0] + 5); you are trying to write to/read from memory that you have not allocated.
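To see that equivalence with an index that is in bounds (a minimal sketch):

#include <iostream>

int main()
{
    int array[2] = {10, 20};

    // array[i] is defined as *(array + i); both expressions name the same element.
    std::cout << array[1] << std::endl;       // 20
    std::cout << *(array + 1) << std::endl;   // 20
}

With an out-of-bounds index, the same pointer arithmetic simply produces an address outside the array, and reading or writing through it is undefined behaviour.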
In the C and C++ language there are very few runtime errors.
When you make a mistake, what happens instead is that you get "undefined behavior", and that means anything may happen. You can get a crash (i.e. the OS stops the process because it's doing something nasty) or you can just corrupt memory in your program, and things seem to work anyway until someone needs to use that memory. Unfortunately the second case is by far the most common, so when a program writes outside an array it typically crashes only a million instructions later, in a perfectly innocent and correct part of the code.
The main philosophical assumption of C and C++ is that a programmer never makes mistakes like accessing an array with an out-of-bounds index, deallocating the same pointer twice, generating a signed integer overflow during a computation, dereferencing a null pointer, and so on.
This is also the reason why trying to learn C/C++ just by using a compiler and experimenting with code is a terrible idea: you will not be notified of this pretty common kind of error.
The array has 2 elements, but you are assigning array[5] = 21;, which means 21 is written to memory outside the array. Depending on your system and environment, array[46] happens to land on memory that can hold a number, while array[47] does not.
You should instead declare the array large enough for the indices you use (an array of size 48 has valid indices 0 through 47):

int array[48];
array[47] = 21;
cout << array[47];