What happens to class members when malloc is used instead of new? - c++

I'm studying for a final exam and I stumbled upon a curious question that was part of the exam our teacher gave last year to some poor souls. The question goes something like this:
Is the following program correct, or not? If it is, write down what the program outputs. If it's not, write down why.
The program:
#include <iostream.h>

class cls
{
    int x;
public:
    cls() { x = 23; }
    int get_x() { return x; }
};

int main()
{
    cls *p1, *p2;
    p1 = new cls;
    p2 = (cls*)malloc(sizeof(cls));
    int x = p1->get_x() + p2->get_x();
    cout << x;
    return 0;
}
My first instinct was to answer with "the program is not correct, as new should be used instead of malloc". However, after compiling the program and seeing it output 23, I realized that answer might not be correct.
The problem is that I was expecting p2->get_x() to return some arbitrary number (whatever happened to be in that spot of memory when malloc was called). However, it returned 0. I'm not sure whether this is a coincidence or if class members are initialized to 0 when the object's memory is malloc'ed.
Is this behavior (p2->x being 0 after malloc) the default? Should I have expected this?
What would your answer to my teacher's question be? (besides forgetting to #include <stdlib.h> for malloc :P)

Is this behavior (p2->x being 0 after malloc) the default? Should I have expected this?
No, p2->x can be anything after the call to malloc. It just happens to be 0 in your test environment.
What would your answer to my teacher's question be? (besides forgetting to #include <stdlib.h> for malloc :P)
What everyone has told you: new combines the call to get memory from the free store with a call to the object's constructor. malloc does only half of that.
Fixing it: while the sample program is wrong, it isn't always wrong to use malloc with classes. It is perfectly valid in, for example, a shared-memory situation; you just have to add a placement-new call:
p2 = (cls*)malloc(sizeof(cls));
new (p2) cls;
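Note that placement new requires #include <new>, and an object constructed this way must be destroyed manually before the raw memory is released. A minimal sketch of the full lifecycle (using the question's class):

#include <cstdlib>   // malloc, free
#include <new>       // placement new

cls *p2 = (cls*)malloc(sizeof(cls));  // raw memory, no constructor has run
new (p2) cls;                         // construct the object in place
// ... use p2 ...
p2->~cls();                           // destroy manually; free() won't call it
free(p2);                             // release the raw memory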

new calls the constructor, malloc will not. So your object will be in an unknown state.

The actual behaviour is undefined, because new acts much like malloc plus a constructor call.
In your code the second part is missing; hence it may appear to work in one case and not in another, but you can't say for sure.

Why can't 0 be an arbitrary number too? Are you running in Debug mode? Which compiler?
VC++'s debug runtime fills newly allocated heap memory with 0xCD bytes (and uninitialized stack memory with 0xCC), so you would not have obtained a zero for an answer if you were using it.

malloc makes no guarantee to zero out the memory it allocates, and the result of the program is undefined.
There are also many other things that keep this program from being correct C++: cout is in namespace std, malloc needs to be included through #include <cstdlib>, and iostream.h isn't standard-compliant either.
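Putting all of those fixes together, a corrected version of the program (a sketch that keeps the malloc allocation but adds the missing construction and cleanup) could look like this:

#include <iostream>
#include <cstdlib>   // std::malloc, std::free
#include <new>       // placement new

class cls
{
    int x;
public:
    cls() { x = 23; }
    int get_x() { return x; }
};

int main()
{
    cls *p1 = new cls;
    cls *p2 = static_cast<cls*>(std::malloc(sizeof(cls)));
    new (p2) cls;  // construct the object so get_x() reads an initialized x

    int x = p1->get_x() + p2->get_x();
    std::cout << x;  // prints 46

    p2->~cls();
    std::free(p2);
    delete p1;
    return 0;
}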

Related

How does void(**rptr)() and the call in main() work in this solution to printing 1-100 with no loops in C++?

In this Quora answer I stumbled upon this piece of code and would like to understand what's happening: How can I print 1 to 100 in C++ without a loop, goto or recursion?
I asked my programming teacher and he said he's not too familiar with alloca(), but he was sure this program had undefined behavior, and that I'd better ask on SO.
It's worth noting that the OP of the answer on Quora gave no guarantee this would work on someone else's system.
I have trouble understanding what void(**rptr)() does and how the call in main() works. Why * 200?
#include <iostream>
#include <stdlib.h>

int num;
void (**rptr)();

void foo() {
    if (num >= 100) exit(0);
    std::cout << ++num << std::endl;
    *rptr++ = foo;
}

int main() {
    rptr = (void(**)())alloca(sizeof(*rptr) * 200) - 1;
    foo();
    return 0;
}
This is a horrible hack which leverages undefined behaviour. Analysing undefined behaviour is pretty pointless, but sometimes it's interesting to dig in and find out exactly why it does what it does.
Basically, what is happening is that the alloca(...) allocates enough memory on the stack to store 200 function pointers. So far, unusual but nothing bad. But the key is the -1 at the end: it makes rptr point one element before this storage, so rptr points into the stack at an unknown location.
Then foo is called. At the end of foo we write the address of foo through rptr. But rptr points one element before the valid memory, so we're overwriting something else.
That something else happens to be the return address for where foo returns to (which should be main). So instead of returning to main, it effectively "returns" to the start of foo. And this repeats until we reach the exit.
It's basically moderately controlled stack smashing, and it will only work on architectures with calling conventions where the return address is put onto the stack in this manner.

How to make an uninitialized pointer not equal to 0/null?

I am a C++ learner. Others told me "an uninitialized pointer may point anywhere". How can I prove that in code? I made a little test program, but my uninitialized pointers always point to 0. In which case would they not point to 0? Thanks
#include <iostream>
using namespace std;

int main() {
    int* p;
    printf("%d\n", p);
    char* p1;
    printf("%d\n", p1);
    return 0;
}
Any uninitialized variable by definition has an indeterminate value until a value is supplied, and even accessing it is undefined behaviour. Because this is a grey area of undefined behaviour, there's no way you can guarantee that an uninitialized pointer will be anything other than 0.
Anything you write to demonstrate this would be dictated by the compiler and system you are running on.
If you really want to, you can try writing a function that fills up a local array with garbage values, and create another function that defines an uninitialized pointer and prints it. Run the second function after the first in your main() and you might see it.
Edit: For your curiosity, I exhibited the behaviour with VS2015 on my system with this code:
#include <iostream>
#include <cstdint>

void f1()
{
    // fill the stack frame with junk
    char arr[24];
    for (char& c : arr) c = 1;
}

void f2()
{
    // uninitialized pointers that may overlap f1's old stack frame
    int* ptr[4];
    std::cout << (std::uintptr_t)ptr[1] << std::endl;
}

int main()
{
    f1();
    f2();
    return 0;
}
Which prints 16843009 (0x01010101). But again, this is all undefined behaviour.
Well, I don't think it is worth trying to prove this, because good coding style says: initialise all variables! One example: after you free a pointer, give it a value again, like in this example:
char *p = NULL; // yes, this is not needed, but do it! later you may change your program and add code beneath this line...
p = (char *)malloc(512);
...
free(p);
p = NULL;
That is safe, good style. Also, if you call free(p) again by accident, it will not crash your program, because free(NULL) is a no-op. If you don't set p to NULL after the free(), you can use the pointer again by mistake, and your program will try to access already-freed memory; this will crash your program or (worse) end in strange results.
So don't waste time on your question about a case where pointers do not point to NULL. Just give your variables (pointers) values! :-)
It depends on the compiler. Your code executed on an old MSVC2008 displays in release mode (plain random):
1955116784
1955116784
and in debug mode (after warning about uninitialized pointer usage):
-858993460
-858993460
because that implementation sets uninitialized pointers to 0xcccccccc in debug mode to detect their usage.
The standard says that using an uninitialized pointer leads to undefined behaviour. That means that from the standard anything can happen. But a particular implementation is free to do whatever it wants:
yours happens to set the pointers to 0 (but you should not rely on that unless it is documented in the implementation's documentation)
MSVC sets the pointers to 0xcccccccc in debug mode, but AFAIK does not document it (*), so we still cannot rely on it
(*) at least I could not find any reference...

C Program works for me but shows runtime error online

The following code worked fine for me (Code::Blocks 10.05) and showed no compile-time/runtime errors for various test cases.
But it showed a runtime error when I submitted it online on a programming website.
#include <stdio.h>
#include <stdlib.h>

/*
Here comes newPos()
*/

int main()
{
    int t, i, n, k, j;
    scanf("%d", &t);
    int* a;
    for (i = 0; i < t; i++)
    {
        scanf("%d", &n);
        free(a);
        a = (int*) malloc(n);
        for (j = 0; j < n; j++)
            scanf("%d", &a[j]);
        scanf("%d", &k);
        printf("%d\n", newPos(a, n, k));
    }
    return 0;
}
And then I changed it into a .cpp file after making a few changes:
instead of free(a) I used the statement delete a;, and instead of a=(int*) malloc(n) I used a=new int[n];
Then it executed successfully both on my compiler and online.
First error:
You are not allocating enough memory to store n integer values. So you should change:
a=(int*) malloc(n);
to:
a=malloc(n * sizeof(int));
I have also removed the cast since it's useless and could hide a forgotten include.
Second error:
You must not free a before allocating memory. Free the memory only at the end of your loop.
C/C++ mix:
In the comments of this answer, people are discussing whether the cast is needed, in particular in C++. In C, you should not cast.
If you want to write C++ code, you should use new and delete instead of malloc and free. Honestly, I don't know whether a cast is needed in C++ when using malloc, because in C++ I always use new. But please don't write C code with a C++ compiler. Choose between C and C++ depending on your needs.
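For reference, a sketch of the corrected loop in C, applying both fixes (newPos is assumed to be defined elsewhere, as in the original):

for (i = 0; i < t; i++)
{
    scanf("%d", &n);
    a = malloc(n * sizeof(int));  /* allocate first; no cast needed in C */
    for (j = 0; j < n; j++)
        scanf("%d", &a[j]);
    scanf("%d", &k);
    printf("%d\n", newPos(a, n, k));
    free(a);                      /* free at the end of each iteration */
}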
You are freeing before allocating:
free(a); // undefined behavior: a holds an indeterminate (junk) value here
a = (int*) malloc(n);
Also, there is no specific need in C to cast the return value of malloc, and check your malloc argument: you are not specifying the size in bytes correctly. But in C++ the cast is required (since you tagged both C and C++).
Do I cast the result of malloc?
a = (int*) malloc(n * sizeof(int));
Aside from the mentioned allocation size problem, you can't free(a) unless you have either already allocated something, or have initialized a to have the value NULL.
This is because your argument to malloc() is wrong. The function has no idea what "unit" you're going to be using, so the unit for the argument is always bytes. C++'s new[] operator operates at a higher level in the language, so it can take the type's size into account.
Thus, change your allocation to:
a = malloc(n * sizeof *a);
This removes the pointless and annoying cast, and also adds the missing sizeof to scale the argument by the number of bytes in each pointed-to object. This is a good form of malloc() usage to remember.
Also don't free() random pointers.

deleting an array the wrong way [duplicate]

Possible Duplicate:
How could pairing new[] with delete possibly lead to memory leak only?
I was always told that it's not safe to call delete on an array allocated with new[]. You should always pair new with delete and new[] with delete[].
So I was surprised to discover that the following code compiles and runs ok, in both Debug and Release mode under VS2008.
class CBlah
{
public:
    CBlah() : m_i(0) {}
private:
    int m_i;
};

int _tmain(int argc, _TCHAR* argv[])
{
    for (;;)
    {
        CBlah * p = new CBlah[1000]; // with []
        delete p;                    // no []
    }
    return 0;
}
It took me a while to figure out why this works at all, and I think it's just luck and some undefined behaviour.
BUT... it made me wonder... why doesn't Visual Studio pick this up, at least in the Debug memory manager? Is it because there's lots of code out there that makes this mistake and they don't want to break it, or do they feel it's not the job of the Debug memory manager to catch this kind of mistake?
Any thoughts? Is this kind of misuse common?
It will certainly compile OK, because at compile time there is no information in the pointer that says whether it points to an array or to a single object. For example:
int x;
cin >> x;
int* p;
if (x == 0)
    p = new int;
else
    p = new int[10];
delete p; // correct or not? :)
Now, about running OK. This is called undefined behaviour in C++; that is, there is no guarantee what will happen - everything can run OK, you can get a segfault, you can get just wrong behaviour, or your computer may decide to call 911. UB <=> no guarantee.
It's undefined behavior and everything is fair in love, war and undefined behavior...:)
According to MSDN, it translates delete to delete[] when trying to delete an array. (See there, for instance.) Though you should get a warning after compiling.
The reason the Debug Memory Manager does not pick up on this error is probably that it is not implemented at the level of new/delete, but at the level of the memory manager that gets invoked by new/delete to allocate the required memory.
At that point, the distinction between array new and scalar new is gone.
You can read these SO answers and links about delete and delete[]: About delete, operator delete, delete[], ...
I don't know what makes you think it "works ok". It compiles and completes without crashing. That does not necessarily mean there was no leak or heap corruption. Also, if you got away with it this time, that doesn't make it a safe thing to do.
Sometimes even a buffer overwrite is something you will "get away with" because the bytes you have written to were not used (maybe they are padding for alignment). Still you should not go around doing it.
Incidentally new T[1] is a form of new[] and still requires a delete[] even though in this instance there is only one element.
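For instance, a trivial sketch of that point:

int* q = new int[1]; // array new, even though there is only one element
delete[] q;          // must be delete[], not delete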
Interesting point.
Once I did a code review and tried to convince the programmers to fix a new[]-delete mismatch.
I argued with "Item 5" from Effective C++ by Scott Meyers. However, they countered with "What do you want? It works well!" and proved that there was no memory leak.
However, it worked only with POD types. It looks like MS tries to fix the mismatch, as pointed out by Raveline.
What would happen if you added a destructor?
#include <iostream>

class CBlah
{
    static int instCnt;
public:
    CBlah() : m_i(0) { ++instCnt; }
    ~CBlah()
    {
        std::cout << "d-tor on " << instCnt << " instance." << std::endl;
        --instCnt;
    }
private:
    int m_i;
};

int CBlah::instCnt = 0;

int main()
{
    //for(;;)
    {
        CBlah * p = new CBlah[10]; // with []
        delete p;                  // no []
    }
    return 0;
}
Whatever silly "intelligence" fix is added to VS, the code is not portable.
Remember that "works properly" is within the universe of "undefined behavior". It is quite possible for a particular version of a particular compiler to implement this in such a way that it works for all intents and purposes. The important thing to remember is that this is not guaranteed and you can't really ever be sure it's working 100%, and you can't know that it will work with the next version of the compiler. It's also not portable, since another compiler might work in a different fashion.
This works because the particular C++ runtime library it was linked with uses the same heap for both operator new and operator new[]. Many do, but some don't, which is why the practice is not recommended.
The other big difference is that if CBlah had a non-trivial destructor, the delete p; would only call it for the first object in the array, whereas delete[] p; is sure to call it for all the objects.

Stack corruption in C++

In C++, in which ways may the stack get corrupted? One way, I guess, is overwriting stack variables by accessing an array beyond its boundaries. Is there any other way it can get corrupted?
You could have a random/undefined pointer that ends up pointing to the stack, and write through that.
An assembly function could incorrectly set up/modify/restore the stack.
Cosmic rays could flip bits in the stack.
Radioactive elements in the chip's casing could flip bits.
Anything in the kernel could go wrong and accidentally change your stack memory.
But those are not particular to C++, which doesn't have any idea of the stack.
Violations of the One Definition Rule can lead to stack corruption. The following example looks stupid, but I've seen it a couple of times with different libraries compiled in different configurations.
header.h
struct MyStruct
{
    int val;
#ifdef LARGEMYSTRUCT
    char padding[16];
#endif
};
file1.cpp
#define LARGEMYSTRUCT
#include <cstring>
#include "header.h"

// Here MyStruct looks like it is 20 bytes in size
void func(MyStruct s)
{
    memset(s.padding, 0, 16); // corrupts the stack: file2.cpp is compiled without LARGEMYSTRUCT and passes a 4-byte MyStruct
    return; // will probably crash here, as the return pointer has been overwritten
}
file2.cpp
#include "header.h"
// Here MyStruct looks like it is only 4 bytes in size
extern void func(MyStruct s);

void caller()
{
    MyStruct s;
    func(s); // pushes four bytes onto the stack
}
Taking pointers to stack variables is a good way:
void foo()
{
    my_struct s;
    bar(&s);
}
If bar keeps a copy of the pointer then anything can happen in the future.
Summing up: stack corruption happens when there are stray pointers pointing into the stack.
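To make that concrete, here is a small sketch (with hypothetical names) of how a kept pointer to a dead stack frame can corrupt the stack later:

#include <cstring>

struct my_struct { char buf[16]; };

my_struct* saved; // hypothetical global that keeps the pointer

void bar(my_struct* p)
{
    saved = p; // keeps a copy of the pointer into foo's stack frame
}

void foo()
{
    my_struct s;
    bar(&s);
}   // s dies here, but 'saved' still points at its old stack location

void baz()
{
    // writing through the dangling pointer scribbles over whatever
    // happens to occupy that region of the stack now
    std::memcpy(saved->buf, "overwrite", 10);
}

int main()
{
    foo();
    baz(); // undefined behaviour: stack corruption
    return 0;
}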
The C++ standard does not define stack/heap. Further, there are a number of ways to invoke undefined behavior in a program -- all of which may corrupt your stack (it's UB, after all). The short answer is -- your question is too vague to have a meaningful answer.
Calling a function with the wrong calling convention.
(Though this is technically compiler-specific rather than a question of C++, it is something every C++ compiler has to deal with.)
Throwing an exception inside a destructor is a good candidate. It would mess up the stack unwinding.
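A minimal sketch of why that is dangerous: in C++11 and later, a destructor that throws while the stack is already being unwound for another exception causes std::terminate to be called, so the catch handler is never reached.

#include <iostream>
#include <stdexcept>

struct Evil
{
    ~Evil() noexcept(false)
    {
        // throwing while another exception is unwinding -> std::terminate
        throw std::runtime_error("thrown from destructor");
    }
};

int main()
{
    try
    {
        Evil e;
        throw std::runtime_error("original exception"); // unwinding destroys e
    }
    catch (const std::exception& ex)
    {
        std::cout << "never reached: " << ex.what() << std::endl;
    }
    return 0;
}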