Let's ignore the usefulness of such a practice. (Though real-life examples are welcome, of course.)
For example, the following program outputs the correct value for a:
#include <iostream>
#include <new>

using namespace std;

int main()
{
    int a = 11111;
    int i = 30;
    int* pi = new (&i) int();
    cout << a << " " << endl;
}
But isn't new-allocation supposed to create some bookkeeping information adjacent to i (for correct subsequent deallocation), which in this case would corrupt the stack around i?
Yes, it's perfectly OK to perform placement new with a pointer to an object on the stack. It will just construct the object at that specific address. Placement new doesn't actually allocate any memory - you have already provided that part. It only does construction. The subsequent cleanup won't actually be a delete - there is no placement delete expression - since all you need to do is call the object's destructor. The actual memory is managed by something else - in this case your stack object.
For example, given this simple type:
#include <iostream>

struct A {
    A(int i)
        : i(i)
    {
        std::cout << "make an A\n";
    }

    ~A() {
        std::cout << "delete an A\n";
    }

    int i;
};
The following is completely reasonable, well-behaved code:
char buf[] = {'x', 'x', 'x', 'x', 0};
std::cout << buf << std::endl; // xxxx
auto a = new (buf) A{'a'}; // make an A
std::cout << a->i << std::endl; // 97
a->~A(); // delete an A
The only case where this would be invalid would be if your placement-new-ed object outlasts the memory you new-ed it on - for the same reason that returning a dangling pointer is always bad:
A* getAnA(int i) {
    char buf[4];
    return new (buf) A(5); // oops
}
Placement new constructs the element in place and does not allocate memory.
The "bookkeeping information" in this case is the returned pointer which ought to be used to destroy the placed object.
There is no delete associated with the placement since placement is a construction. Thus, the required "clean up" operation for placement new is destruction.
The "usual steps" are
'Allocate' memory
Construct element(s) in place
Do stuff
Destroy element(s) (reverse of 2)
'Deallocate' memory (reverse of 1)
(Where memory can be stack memory which is neither required to be explicitly allocated nor deallocated but comes and goes with a stack array or object.)
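For heap memory, the five steps map directly onto raw ::operator new / ::operator delete plus placement new and a manual destructor call. A minimal sketch (the type T is made up for illustration):

#include <new>
#include <string>

struct T {
    std::string name = "demo";
};

int main()
{
    // 1. 'Allocate' raw memory -- no constructor runs here
    void* raw = ::operator new(sizeof(T));

    // 2. Construct the element in place
    T* t = new (raw) T;

    // 3. Do stuff
    t->name += "!";

    // 4. Destroy the element (reverse of 2)
    t->~T();

    // 5. 'Deallocate' the raw memory (reverse of 1)
    ::operator delete(raw);
}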
Note: If "placing" an object into memory of the same type on the stack one should keep in mind that there's automatic destruction at the end of the object's lifetime.
{
    X a;
    a.~X();              // destroy a
    X* b = new (&a) X(); // place new X
    b->~X();             // destroy b
}                        // double destruction
No, because you don't delete an object which has been placement-newed; you call its destructor manually.
#include <iostream>
#include <new>

struct A {
    A() { std::cout << "A()\n"; }
    ~A() { std::cout << "~A()\n"; }
};

int main()
{
    alignas(A) char storage[sizeof(A)];
    A *a = new (storage) A;
    std::cout << "hi!\n";
    a->~A();
    std::cout << "bye!\n";
}
Output:
A()
hi!
~A()
bye!
In your case, there's no need to call a destructor either, because int is trivially-destructible, meaning that its destructor would be a no-op anyway.
Beware, though, not to invoke placement new on an object that is still alive, because not only will you corrupt its state, but its destructor will also be invoked twice (once when you call it manually, but also when the original object's lifetime would have ended anyway, for example at the end of its scope). The safe pattern is sketched below.
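A minimal sketch of the safe pattern, assuming a default-constructible type X: end the old object's lifetime first, construct the replacement in the same storage, and let the end of scope destroy it exactly once:

#include <iostream>
#include <new>

struct X {
    X()  { std::cout << "X()\n"; }
    ~X() { std::cout << "~X()\n"; }
};

int main()
{
    X a;
    a.~X();              // end the original object's lifetime
    X* b = new (&a) X(); // reuse the storage for a fresh X
    (void)b;
    // no manual b->~X() here: the implicit destruction of 'a' at the
    // end of scope destroys the replacement object exactly once
}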
Placement new does construction, not allocation, so there's no bookkeeping information to be afraid of.
I can at the moment think of one possible use case, though it (in this form) would be a bad example of encapsulation:
#include <iostream>
#include <new>

using namespace std;

struct Thing {
    Thing(int value) {
        cout << "such an awesome " << value << endl;
    }
};

union Union {
    Union() {}
    Thing thing;
};

int main(int, char **) {
    Union u;
    bool yes;
    cin >> yes;
    if (yes) {
        new (&(u.thing)) Thing(42);
    }
    return 0;
}
Though even when the placement new is hidden in some member function, the construction still happens on the stack.
So: I didn't look in the standard, but I can't think of a reason why placement new on the stack shouldn't be permitted.
A real-world example should be somewhere in the source of https://github.com/beark/ftl ... in their recursive union, which is used for the sum type.
I have a structure defined called Node.
Now, I do:
Node* temp;
temp = new Node();
temp is a pointer to a Node, which itself is a complex data type.
Question-1: Is memory allocation on heap contiguous?
Question-2: Which block of memory on heap does 'temp' exactly point to? Is it the memory address of the very first data member in the struct Node?
Now, I do:
delete temp;
Question-3: This deallocates the memory. So, does temp point to a garbage value now or does it point to NULL?
Question-1: Is memory allocation on heap contiguous?
Yes, you are given a contiguous block. That does not necessarily mean that two consecutive allocations will give you consecutive blocks.
Question-2: Which block of memory on heap does 'temp' exactly point to? Is it the memory address of the very first data member in the struct Node?
Not necessarily; it depends on how Node is defined and on the ABI of your platform and compiler.
Question-3: This deallocates the memory. So, does temp point to a garbage value now or does it point to NULL?
It keeps pointing to the same address (now freed, and likely to be reallocated eventually); it is up to you to set it to NULL.
For question 2, let's try a little experiment:
1) plain struct
#include <iostream>

class A {
public:
    int a;
};

int main() {
    A *b = new A();
    std::cout << std::hex << b << std::endl;
    std::cout << std::hex << &(b->a) << std::endl;
}
and the result (compiled with Cygwin):
0x800102d0
0x800102d0
So in this case we're fine.
2) Inherited
#include <iostream>

class A {
public:
    int a;
};

class B : public A {
public:
    int b;
};

int main() {
    B *b = new B();
    std::cout << std::hex << b << std::endl;
    std::cout << std::hex << &(b->b) << std::endl;
    std::cout << std::hex << &(b->a) << std::endl;
}
And the result:
0x800102d0
0x800102d4
0x800102d0
So the B* no longer points at B's own first member (b), but at the first member of the A subobject (a).
3) Virtual classes
#include <iostream>

class A {
public:
    virtual ~A() { }
    int a;
};

class B : public A {
public:
    int b;
};

int main() {
    B *b = new B();
    std::cout << std::hex << b << std::endl;
    std::cout << std::hex << &(b->b) << std::endl;
    std::cout << std::hex << &(b->a) << std::endl;
}
and the result
0x800102d0
0x800102d8
0x800102d4
Now the pointer points neither at the first member of A nor at the first member of B, but at a control structure.
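On common implementations that control structure is the virtual table pointer, and you can see it reflected in the object size. A quick check (a sketch; exact sizes vary by platform, 4 and 8 being typical for a 32-bit build):

#include <iostream>

struct Plain    { int a; };
struct WithVtbl { virtual ~WithVtbl() { } int a; };

int main() {
    // The virtual destructor adds (at least) one pointer, which
    // typically holds the vtable address.
    std::cout << sizeof(Plain)    << std::endl; // e.g. 4
    std::cout << sizeof(WithVtbl) << std::endl; // e.g. 8
}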
The object itself is allocated from contiguous memory. If you mean whether consecutive news allocate from consecutive parts of the heap: there is no way to know. The heap manager is free to do whatever it wants. In the implementations I have investigated, blocks are peeled off the initial heap consecutively, but after some delete and free calls, the free list is searched for a suitable block under one of several algorithms, including "first fit" and "best fit".
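A quick way to observe this (a sketch; the printed addresses vary from run to run, and adjacency is never guaranteed):

#include <iostream>

int main() {
    int* p = new int(1);
    int* q = new int(2);

    // Each block is contiguous internally, but nothing says q lands
    // right after p: alignment and allocator bookkeeping usually sit
    // in between, and the allocator may place the blocks anywhere.
    std::cout << p << " " << q << std::endl;

    delete p;
    delete q;
}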
Yes, the pointer is usually to the lowest address element of the allocated class instance. But this is not always true for derived classes, virtual objects, etc.
Yes, delete deallocates the object, but the pointer is left pointing to the space that was allocated. It is "bad" to dereference the pointer after delete. If there is any chance of testing it again, it is good practice to set the pointer to NULL after delete.
delete temp; temp = NULL;
What is the difference between these two instantiation and method call types?
Take this code for example:
#include <iostream>

class Test
{
public:
    Test(int nrInstance)
    {
        std::cout << "Class " << nrInstance << " instanced " << std::endl;
    }

    ~Test() { }

    int retornaValue()
    {
        return value;
    }

private:
    const int value = 10;
};

int main(int argc, char *argv[])
{
    Test *test1 = new Test(1);
    Test test2(2);

    std::cout << test1->retornaValue() << std::endl;
    std::cout << test2.retornaValue() << std::endl;

    return 0;
}
From what I've read, the first way allocates the object on the heap and the second on the stack, but aren't both inside main's scope, and deallocated after the function exits?
Also, calling methods is different in both examples - why?
You're right that both variables are in main's scope and deallocated after the function exits, but in the first case it is the Test* pointer that is deallocated, not the Test instance itself. Once the pointer is deallocated, the class instance is leaked. In the second case, the Test instance itself is on the stack, so the instance itself is deallocated.
Also, calling methods is different in both examples - why?
Unless overloaded, foo->bar is equivalent to (*foo).bar. The calling syntax is different because in the first case, test1 is a pointer to an instance, and in the second, test2 is an instance.
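Side by side (a self-contained sketch; retornaValue is simplified from the question's class):

#include <iostream>

struct Test {
    int retornaValue() { return 10; }
};

int main()
{
    Test* test1 = new Test; // pointer to a heap instance
    Test  test2;            // stack instance

    std::cout << test1->retornaValue()   << std::endl; // call through the pointer
    std::cout << (*test1).retornaValue() << std::endl; // equivalent spelling
    std::cout << test2.retornaValue()    << std::endl; // call on the object directly

    delete test1; // don't leak the heap instance
}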
but aren't both inside main's scope, and deallocated after the function exits?
No, not at all: *test1 will not be deallocated until you call delete on it.
but aren't both inside main's scope, and deallocated after the function exits?
The stack instance is unwound as the scope is closed, and thus deallocated. The pointer is as well, but the object it points to is not. You must explicitly delete instances allocated with new.
In the first example, you are creating both a pointer to the object on the stack, and the object itself on the heap.
In the second example you are creating the object itself on the stack.
The syntax difference is the difference between calling a function through a pointer and calling it on an object directly.
You are wrong about the first example being cleaned up: you need a delete before the pointer goes out of scope, or you have what is known as a memory leak. Memory leaks are cleaned up by the OS when the program exits, but good practice is to avoid them. A fixed version of the question's main is sketched below.
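A minimal sketch of the fix, keeping the raw pointer (Test is condensed from the question's class):

#include <iostream>

class Test
{
public:
    Test(int nrInstance) { std::cout << "Class " << nrInstance << " instanced " << std::endl; }
    int retornaValue() { return value; }

private:
    const int value = 10;
};

int main()
{
    Test *test1 = new Test(1);
    Test test2(2);

    std::cout << test1->retornaValue() << std::endl;
    std::cout << test2.retornaValue() << std::endl;

    delete test1; // release the heap instance before the pointer disappears
    return 0;     // test2 is destroyed automatically here
}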
You've clearly stated the difference in your question: one is on the stack, and one is on the heap. And yes, when main() exits the stack instance will be deallocated (the heap instance is only reclaimed by the OS). Now let's take a different approach.
#include <iostream>

using namespace std;

class MemoryLeak {
private:
    int m_instance;

public:
    MemoryLeak(int i) : m_instance(i)
    { cout << "MemoryLeak " << i << " created.\n"; }

    ~MemoryLeak() { cout << "MemoryLeak " << m_instance << " deleted.\n"; }
};

void makeMemoryLeak() {
    static int instance = 0;

    MemoryLeak mem1(++instance);
    MemoryLeak* mem2 = new MemoryLeak(++instance);
} // mem1 is destroyed here; the object mem2 points to is leaked

int main() {
    for (int x = 0; x < 10; ++x) {
        makeMemoryLeak();
        cout << endl;
    }

    cin.get(); // Wait to close
    return 0;
}
You will see 20 "MemoryLeak ... created." lines but only 10 "MemoryLeak ... deleted." lines. So those other 10 instances are still in memory until you close the program. Now let's say that the program never shuts down, that MemoryLeak has a size of 20 bytes, and that makeMemoryLeak() runs once a minute. After one day, or 1440 minutes, you'll have 28.125 KB of memory that is taken up but that you have no way to access.
The solution would be to change makeMemoryLeak():

void makeMemoryLeak() {
    static int instance = 0;

    MemoryLeak mem1(++instance);
    MemoryLeak* mem2 = new MemoryLeak(++instance);

    delete mem2; // now the heap instance is destroyed as well
}
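Alternatively (a sketch, not part of the original answer, reusing the MemoryLeak class above): hand the heap instance to a smart pointer, so the delete happens even on early return or exception:

#include <memory>

void makeNoLeak() {
    static int instance = 0;

    MemoryLeak mem1(++instance);
    std::unique_ptr<MemoryLeak> mem2(new MemoryLeak(++instance));
} // both mem1 and *mem2 are destroyed here automatically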
Regarding what gets created and destroyed:
struct Test {};

void func()
{
    // instantiate ONE automatic object (on the stack) [test1]
    Test test1;

    // instantiate ONE automatic object (on the stack) [test2]
    // AND instantiate ONE object from the free store (heap) [unnamed]
    Test* test2 = new Test; // two objects!!
} // BOTH automatic variables are destroyed (test1 & test2), but the
  // unnamed object created with new (that test2 was pointing at)
  // is NOT destroyed (memory leak)
If you don't delete test1 (the Test*) before coming out of main, it will cause a memory leak.
Consider the program below. It has been simplified from a complex case. It fails on deleting the previously allocated memory, unless I remove the virtual destructor in the Obj class. I don't understand why the two addresses printed by the program differ only if the virtual destructor is present.
// GCC 4.4
#include <iostream>

using namespace std;

class Arena {
public:
    void* alloc(size_t s) {
        char* p = new char[s];
        cout << "Allocated memory address starts at: " << (void*)p << '\n';
        return p;
    }

    void free(void* p) {
        cout << "The memory to be deallocated starts at: " << p << '\n';
        delete [] static_cast<char*>(p); // the program fails here
    }
};

struct Obj {
    void* operator new[](size_t s, Arena& a) {
        return a.alloc(s);
    }

    virtual ~Obj() {} // if I remove this everything works as expected

    void destroy(size_t n, Arena* a) {
        for (size_t i = 0; i < n; i++)
            this[n - i - 1].~Obj();
        if (a)
            a->free(this);
    }
};

int main(int argc, char** argv) {
    Arena a;
    Obj* p = new (a) Obj[5]();
    p->destroy(5, &a);
    return 0;
}
This is the output of the program in my implementation when the virtual destructor is present:
Allocated memory address starts at: 0x8895008
The memory to be deallocated starts at: 0x889500c
RUN FAILED (exit value 1)
Please don't ask what the program is supposed to do. As I said, it comes from a more complex case where Arena is an interface for various types of memory. In this example the memory is just allocated and deallocated from the heap.
this is not the pointer returned by the new in char* p = new char[s];. You can see that the size s there is bigger than the size of 5 Obj instances. The difference (which should be sizeof(std::size_t)) is additional memory containing the length of the array, 5, stored immediately before the address contained in this.
OK, the spec makes it clear:
http://sourcery.mentor.com/public/cxx-abi/abi.html#array-cookies
2.7 Array Operator new Cookies
When operator new is used to create a new array, a cookie is usually stored to remember the allocated length (number of array elements) so that it can be deallocated correctly.
Specifically:
No cookie is required if the array element type T has a trivial destructor (12.4 [class.dtor]) and the usual (array) deallocation function (3.7.3.2 [basic.stc.dynamic.deallocation]) does not take two arguments.
So the virtual-ness of the destructor is irrelevant; what matters is that the destructor is non-trivial (a user-provided destructor is non-trivial even without virtual). You can easily check this by deleting only the keyword virtual in front of the destructor and observing that the program still crashes.
Based on chill's answer, if you want to make it "safe":

#include <type_traits>

// step back over the array cookie only when the destructor is non-trivial;
// this - 1 moves back by sizeof(Obj), which matches the cookie size here
a->free(this - (std::is_trivially_destructible<Obj>::value ? 0 : 1));
I defined a class foo as follows:
#include <iostream>

using namespace std;

class foo {
private:
    static int objcnt;

public:
    foo() {
        if (objcnt == 8)
            throw outOfMemory("No more space!");
        else
            objcnt++;
    }

    class outOfMemory {
    public:
        outOfMemory(const char* msg) { cout << msg << endl; }
    };

    ~foo() { cout << "Deleting foo." << endl; objcnt--; }
};
int foo::objcnt = 0;
And here's the main function:
int main() {
    try {
        foo* p = new foo[3];
        cout << "p in try " << p << endl;
        foo* q = new foo[7];
    } catch (foo::outOfMemory& o) {
        cout << "Out-of-memory Exception Caught." << endl;
    }
}
It is obvious that the line foo* q = new foo[7]; only creates 5 objects successfully, and on the 6th object an out-of-memory exception is thrown. But it turns out that there are only 5 destructor calls, and the destructor is not called for the array of 3 objects stored at the position p points to. So I am wondering why? How come the program only calls the destructor for those 5 objects?
The "atomic" C++ allocation and construction functions are correct and exception-safe: If new T; throws, nothing leaks, and if new T[N] throws anywhere along the way, everything that's already been constructed is destroyed. So nothing to worry there.
Now a digression:
What you always must worry about is using more than one new expression in any single unit of responsibility. Basically, you have to consider any new expression as a hot potato that needs to be absorbed by a fully-constructed, responsible guardian object.
Consider new and new[] strictly as library building blocks: You will never use them in high-level user code (perhaps with the exception of a single new in a constructor), and only inside library classes.
To wit:
// BAD:
A * p = new A;
B * q = new B; // Ouch -- *p may leak if this throws!

// Good:
std::unique_ptr<A> p(new A);
std::unique_ptr<B> q(new B);      // who cares if this throws
std::unique_ptr<C[]> r(new C[3]); // ditto (note the C[] array form)
As another aside: The standard library containers implement a similar behaviour: If you say resize(N) (growing), and an exception occurs during any of the constructions, then all of the already-constructed elements are destroyed. That is, resize(N) either grows the container to the specified size or not at all. (E.g. in GCC 4.6, see the implementation of _M_fill_insert() in bits/vector.tcc for a library version of exception-checked range construction.)
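That all-or-nothing behaviour is easy to observe (a sketch; Flaky and its throwing counter are made up for the demonstration):

#include <iostream>
#include <stdexcept>
#include <vector>

struct Flaky {
    static int made;
    Flaky() {
        if (++made == 5) throw std::runtime_error("boom");
    }
};
int Flaky::made = 0;

int main() {
    std::vector<Flaky> v;
    v.resize(3); // constructs elements 1..3

    try {
        v.resize(8); // the 5th construction throws part-way through
    } catch (const std::runtime_error&) {
        // the partially built elements were destroyed and the vector
        // is untouched: it either grows fully or not at all
        std::cout << "size is still " << v.size() << std::endl; // 3
    }
}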
Destructors are only called for the fully constructed objects - those are objects whose constructors completed normally. That only happens automatically if an exception is thrown while new[] is in progress. So in your example the destructors will be run for the five objects fully constructed while q = new foo[7] was executing.
Since new[] for the array that p points to completed successfully, that array has now been handed to your code and the C++ runtime no longer cares about it - no destructors will be run unless you do delete[] p. A minimal fix is sketched below.
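A sketch of that fix, keeping the raw arrays (foo is the class from the question):

int main() {
    foo* p = nullptr;
    try {
        p = new foo[3];
        cout << "p in try " << p << endl;
        foo* q = new foo[7]; // throws: the 5 constructed elements are destroyed
        delete[] q;          // only reached if construction succeeds
    } catch (foo::outOfMemory& o) {
        cout << "Out-of-memory Exception Caught." << endl;
    }
    delete[] p; // runs the destructors for p's 3 objects (no-op if p is null)
}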
You get the behavior you expect when you declare the arrays on the stack:
int main()
{
    try
    {
        foo p[3];
        cout << "p in try " << p << endl;
        foo q[7];
    }
    catch (foo::outOfMemory& o)
    {
        cout << "Out-of-memory Exception Caught." << endl;
    }
}
In your code only the pointers were local automatic variables. Pointers don't have any associated cleanup when the stack is unwound. As others have pointed out, this is why you generally do not keep raw pointers in C++ code; they are usually wrapped inside a class object that uses the constructor/destructor to control their lifespan (a smart pointer or container).
As a side note, it is usually better to use std::vector than raw arrays (in C++11 std::array is also useful if you have a fixed-size array). This is because the stack has a limited size and these objects put the bulk of the data on the heap. The extra methods provided by these classes make them much nicer to handle in the rest of your code, and if you absolutely must have an old-style array pointer to pass to a C function, it is easy to obtain.
#include <vector>

int main()
{
    try
    {
        std::vector<foo> p(3);
        cout << "p in try " << p.data() << endl; // print the buffer address; a vector itself is not streamable
        std::vector<foo> q(7);

        // Now you can pass p/q to functions much more easily.
    }
    catch (foo::outOfMemory& o)
    {
        cout << "Out-of-memory Exception Caught." << endl;
    }
}
I'm a little confused about the best practice for how to do this. Say I have a class that, for example, allocates some memory. I want it to self-destruct like an automatic variable, but I also want to put it in a vector, for reasons unknown.
#include <iostream>
#include <vector>
class Test {
public:
    Test();
    Test(int a);
    virtual ~Test();

    int counter;
    Test * otherTest;
};

volatile int count = 0;

Test::Test(int a) {
    count++;
    counter = count;
    std::cout << counter << "Got constructed!\n";
    otherTest = new Test();
    otherTest->counter = 999;
}

Test::Test() {
    count++;
    counter = count;
    std::cout << counter << "Alloced got constructed!\n";
    otherTest = NULL;
}

Test::~Test() {
    if (otherTest != 0) {
        std::cout << otherTest->counter << " 1Got destructed" << counter << "\n";
        otherTest->counter = 888;
        std::cout << otherTest->counter << " 2Got destructed" << counter << "\n";
    }
}

int vectorTest() {
    Test a(5);
    std::vector<Test> vecTest;
    vecTest.push_back(a);
    return 1;
}

int main() {
    std::cout << "HELLO WORLD\n";
    vectorTest();
    std::cout << "Prog finished\n";
}
In this case my destructor gets called twice, both times with counter equal to 1, and on the second call the alloc'd object has already been set to 888 (or, in a real case, freed, leading to bad access to a deleted object). What's the correct approach for putting a local variable into a vector? Is this some kind of design that would never happen sensibly? The following behaves differently, and the destructor is called just once (which makes sense given it's an alloc).
int vectorTest() {
    //Test a(5);
    std::vector<Test> vecTest;
    vecTest.push_back(*(new Test(5)));
    return 1;
}
How can I make the local variable behave the same, leading to just one call to the destructor? Would a local simply never be put in a vector? But aren't vectors preferred over arrays? What if there is a load of local objects I want to initialize separately, place into a vector, and pass to another function, without using free/heap memory? I think I'm missing something crucial here. Is this a case for some kind of smart pointer that transfers ownership?
A vector maintains its own storage and copies values into it. Since you did not implement a copy constructor, the default one is used, which just copies the value of the pointer. The pointed-to object is thus deleted twice, once via the local variable's destructor and once via the vector element's. Don't forget the rule of three: you either need to implement the copy constructor and assignment operator, or just use a class that already does this, such as shared_ptr. A rule-of-three sketch follows after the note below.
Note that this line causes a memory leak, since the object you allocated with new is never deleted:
vecTest.push_back(*(new Test(5)));
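A minimal rule-of-three sketch for a class that owns a raw pointer (Owner and its int member are illustrative stand-ins, not the question's Test class):

#include <utility>

class Owner {
public:
    Owner() : other(new int(999)) {}

    // copy constructor: give the copy its own allocation
    Owner(const Owner& rhs) : other(new int(*rhs.other)) {}

    // copy assignment via copy-and-swap
    Owner& operator=(Owner rhs) {
        std::swap(other, rhs.other);
        return *this;
    }

    ~Owner() { delete other; } // each instance deletes only its own pointer

private:
    int* other;
};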
In addition to what Dark Falcon wrote: to avoid reallocating when inserting into a vector, you typically implement a swap function that exchanges a local element with a default-constructed one in the vector. The swap would just exchange ownership of the pointer and all will be well. The new C++0x (now C++11) also has move semantics via rvalue references to help with this problem, as sketched below.
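A sketch of the move-semantics route (MovableOwner is illustrative): the vector steals the pointer instead of copying it, so only one instance ever owns it:

#include <vector>

class MovableOwner {
public:
    MovableOwner() : other(new int(999)) {}

    // move constructor: steal the pointer and leave the source empty
    MovableOwner(MovableOwner&& rhs) noexcept : other(rhs.other) {
        rhs.other = nullptr;
    }

    MovableOwner(const MovableOwner&) = delete;
    MovableOwner& operator=(const MovableOwner&) = delete;

    ~MovableOwner() { delete other; } // deleting nullptr is a no-op

private:
    int* other;
};

int main() {
    std::vector<MovableOwner> v;
    v.push_back(MovableOwner()); // moved into the vector: exactly one real delete runs
}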
More than likely, you'd be better off having your vector hold pointers to Test objects instead of Test objects themselves. This is especially true for objects (like this test object) that allocate memory on the heap. If you end up using any algorithm (e.g. std::sort) on the vector, the algorithm will be constantly allocating and deallocating memory (which will slow it down substantially).
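For illustration, a sketch of that pointer-holding layout with unique_ptr so the deletes stay automatic (Widget and its weight member are hypothetical):

#include <algorithm>
#include <memory>
#include <vector>

struct Widget {
    int weight;
};

int main() {
    std::vector<std::unique_ptr<Widget>> v;
    v.push_back(std::unique_ptr<Widget>(new Widget{3}));
    v.push_back(std::unique_ptr<Widget>(new Widget{1}));

    // Sorting shuffles only the pointers; the Widgets themselves never move.
    std::sort(v.begin(), v.end(),
              [](const std::unique_ptr<Widget>& a,
                 const std::unique_ptr<Widget>& b) { return a->weight < b->weight; });
} // each unique_ptr deletes its Widget exactly once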