Using malloc/free to simulate new/delete - C++

I created some example classes (only for learning purposes) that don't use constructor initialization lists, because I want to get the same effect using new/delete and malloc/free. What other constraints are there besides not using a constructor initialization list? Do you think the following code simulates the new/delete behavior in the right way?
#include <iostream>
#include <cstdlib>   // malloc, free, system
using namespace std;

class X
{
private:
    int* a;
public:
    X(int x)
    {
        this->a = new int;
        *(this->a) = x;
    }
    ~X() { delete this->a; }
    int getA() { return *(this->a); }
};

class Y
{
private:
    int* a;
public:
    void CTor(int x)
    {
        this->a = new int;
        *(this->a) = x;
    }
    void DTor() { delete this->a; }
    int getA() { return *(this->a); }
};

int main()
{
    X *xP = new X(44);
    cout << xP->getA() << endl;
    delete xP;

    Y *yP = static_cast<Y*>(malloc(sizeof(Y)));
    yP->CTor(44);
    cout << yP->getA() << endl;
    yP->DTor();
    free(yP);

    system("pause");
}
Without using delete xP, I assumed the destructor would be called automatically when the program ends, but the free store would not be completely freed (i.e. the free store for xP would remain allocated, while the free store for the field a would be freed). When using delete xP, the destructor is called and then the free store is completely freed.
Please correct me if I'm wrong.

The actual answer to your question is that you are incorrect: not calling delete xP will lead to a memory leak. It is very likely that the OS then cleans up the memory for you, but that is not guaranteed by the C++ runtime library - it's something that OSes generally offer as a nice service [and very handy it is too, since it allows us to write imperfect code that does things like if (condition) { cout << "Oh horror, condition is true, I can't continue" << endl; exit(2); } without having to worry about cleaning everything up when something has gone horribly wrong].
My main point, however, is that this sort of stuff gets very messy as soon as you add members that in themselves need destruction, e.g.
class X
{
private:
std::string str;
...
};
There is no trivial way to call the destructor for str. And it gets even worse if you decide to use exceptions and automated cleanup.

There are many subtle differences that distinguish new from malloc and delete from free. I believe they are only equivalent if the class is a POD type.
I should warn that you can never mix the two freely. A new must always be balanced by a delete and never by a free. Similarly, malloc must be balanced only by free. Mixing the two results in undefined behaviour.
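If the goal really is to get constructor/destructor semantics on top of malloc/free - including for non-POD members like the std::string mentioned above - the usual idiom is placement new plus an explicit destructor call, rather than a hand-written CTor()/DTor() pair. A minimal sketch (the class Z here is made up for illustration):
#include <cstdlib>
#include <new>
#include <string>

struct Z
{
    std::string s;              // non-POD member with a real destructor
    Z(const char* t) : s(t) {}
};

int main()
{
    // simulate `Z* z = new Z("hi");`
    void* raw = std::malloc(sizeof(Z));
    if (!raw) return 1;
    Z* z = new (raw) Z("hi");   // placement new: construct in place

    // simulate `delete z;`
    z->~Z();                    // run the destructor (destroys s too)
    std::free(raw);
}
This keeps the malloc/free pairing intact while still running the real constructor and destructor, which the CTor()/DTor() approach in the question only approximates.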

Related

how to assert a pointer's pointed content is on heap? [duplicate]

Example:
bool isHeapPtr(void* ptr)
{
//...
}
int iStack = 35;
int *ptrStack = &iStack;
bool isHeapPointer1 = isHeapPtr(ptrStack); // Should be false
bool isHeapPointer2 = isHeapPtr(new int(5)); // Should be true
/* I know... it is a memory leak */
Why I want to know this:
If a class has a pointer member and I don't know whether the pointed-to object was allocated with new, then I would need such a utility to decide whether I have to delete the pointer.
But:
My design isn't finished yet, so I will write it in such a way that I always have to delete it. I want to avoid rubbish programming.
There is no way of doing this - and if you need to do it, there is something wrong with your design. There is a discussion of why you can't do this in More Effective C++.
In the general case, you're out of luck, I'm afraid - since pointers can have any value, there's no way to tell them apart. If you had knowledge of your stack start address and size (from your TCB in an embedded operating system, for example), you might be able to do it. Something like:
stackBase = myTCB->stackBase;
stackSize = myTCB->stackSize;
if ((ptrStack < stackBase) && (ptrStack > (stackBase - stackSize)))
isStackPointer1 = TRUE;
The only "good" solution I can think of is to overload operator new for that class and track it. Something like this (brain compiled code):
#include <set>

class T {
public:
void *operator new(size_t n) {
void *p = ::operator new(n);
heap_track().insert(p);
return p;
}
void operator delete(void* p) {
heap_track().erase(p);
::operator delete(p);
}
private:
// a function to avoid static initialization order fiasco
static std::set<void*>& heap_track() {
static std::set<void*> s_;
return s_;
}
public:
static bool is_heap(void *p) {
return heap_track().find(p) != heap_track().end();
}
};
Then you can do stuff like this:
T *x = new T;
if(T::is_heap(x)) {
delete x;
}
However, I would advise against a design which requires you to be able to ask if something was allocated on the heap.
Well, get out your assembler book, and compare your pointer's address to the stack-pointer:
int64_t x = 0;
asm("movq %%rsp, %0;" : "=r" (x) );
if ( myPtr < x ) {
...in heap...
}
Now x contains the address against which you'll have to compare your pointer. Note that this will not work for memory on another thread's stack, since each thread has its own stack.
Here it is; it works for MSVC:
#define isheap(x, res) { \
void* vesp, *vebp; \
_asm {mov vesp, esp}; \
_asm {mov vebp, ebp}; \
res = !(x < vebp && x >= vesp); }
int si;
void func()
{
int i;
bool b1;
bool b2;
isheap(&i, b1);
isheap(&si, b2);
return;
}
It is a bit ugly, but it works. It only works for local variables: if you pass a stack pointer from a calling function, this macro will return true (meaning it claims the pointer is on the heap).
In mainstream operating systems, the stack grows from the top while the heap grows from the bottom. So you might heuristically check whether the address is beyond a large value, for some definition of "large." For example, the following works on my 64-bit Linux system:
#include <iostream>
bool isHeapPtr(const void* ptr) {
return reinterpret_cast<unsigned long long int>(ptr) < 0xffffffffull;
}
int main() {
int iStack = 35;
int *ptrStack = &iStack;
std::cout << isHeapPtr(ptrStack) << std::endl;
std::cout << isHeapPtr(new int(5)) << std::endl;
}
Note that this is a crude heuristic that might be interesting to play with, but it is not appropriate for production code.
First, why do you need to know this? What real problem are you trying to solve?
The only way I'm aware of to make this sort of determination would be to overload global operator new and operator delete. Then you can ask your memory manager if a pointer belongs to it (the heap) or not (stack or global data).
Even if you could determine whether a pointer was on one particular heap, or one particular stack, there can be multiple heaps and multiple stacks for one application.
Based on the reason for asking, it is extremely important for each container to have a strict policy on whether it "owns" pointers that it holds or not. After all, even if those pointers point to heap-allocated memory, some other piece of code might also have a copy of the same pointer. Each pointer should have one "owner" at a time, though ownership can be transferred. The owner is responsible for destructing.
On rare occasions, it is useful for a container to keep track of both owned and non-owned pointers - either using flags, or by storing them separately. Most of the time, though, it's simpler just to set a clear policy for any object that can hold pointers. For example, most smart pointers always own the real pointer they contain.
Of course smart pointers are significant here - if you want an ownership-tracking pointer, I'm sure you can find or write a smart pointer type to abstract that hassle away.
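As a rough sketch of what such an ownership-tracking wrapper could look like (MaybeOwningPtr is a made-up name, C++11 assumed; this is an illustration, not a recommendation over the standard smart pointers):
template <typename T>
class MaybeOwningPtr
{
public:
    MaybeOwningPtr(T* p, bool owns) : p_(p), owns_(owns) {}
    ~MaybeOwningPtr() { if (owns_) delete p_; }

    MaybeOwningPtr(const MaybeOwningPtr&) = delete;
    MaybeOwningPtr& operator=(const MaybeOwningPtr&) = delete;

    MaybeOwningPtr(MaybeOwningPtr&& other) noexcept
        : p_(other.p_), owns_(other.owns_)
    {
        other.p_ = nullptr;
        other.owns_ = false;
    }

    T* get() const { return p_; }
    T& operator*() const { return *p_; }
    T* operator->() const { return p_; }

private:
    T* p_;
    bool owns_;
};

// Usage: the caller states ownership explicitly instead of guessing.
// int local = 5;
// MaybeOwningPtr<int> a(&local, false);     // never deleted
// MaybeOwningPtr<int> b(new int(7), true);  // deleted by the destructor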
Despite loud claims to the contrary, it is clearly possible to do what you want, in a platform-dependent way. However just because something is possible, that does not automatically make it a good idea. A simple rule of stack==no delete, otherwise==delete is unlikely to work well.
A more common way is to say that if I allocated a buffer, then I have to delete it; if the program passes me a buffer, it is not my responsibility to delete it.
e.g.
class CSomething
{
public:
CSomething()
: m_pBuffer(new char[128])
, m_bDeleteBuffer(true)
{
}
CSomething(const char *pBuffer)
: m_pBuffer(pBuffer)
, m_bDeleteBuffer(false)
{
}
~CSomething()
{
if (m_bDeleteBuffer)
delete [] m_pBuffer;
}
private:
const char *m_pBuffer;
bool m_bDeleteBuffer;
};
You're trying to do it the hard way. Clarify your design so it's clear who "owns" data and let that code deal with its lifetime.
Here is a universal way to do it on Windows using the TIB (Thread Information Block):
bool isStack(void* x)
{
void* btn, *top;
_asm {
mov eax, FS:[0x08]
mov btn, eax
mov eax, FS:[0x04]
mov top, eax
}
return x < top && x > btn;
}
int si;   // a global, for comparison with the local below

void func()
{
int i;
bool b1;
bool b2;
b1 = isStack(&i);
b2 = isStack(&si);
return;
}
The only way I know of doing this semi-reliably is if you can overload operator new for the type for which you need to do this. Unfortunately there are some major pitfalls there and I can't remember what they are.
I do know that one pitfall is that something can be on the heap without having been allocated directly. For example:
class A {
int data;
};
class B {
public:
A *giveMeAnA() { return &anA; }
int data;
A anA;
};
void foo()
{
B *b = new B;
A *a = b->giveMeAnA();
}
In the above code, a in foo ends up with a pointer to an object on the heap that was not allocated with new. If your question is really "How do I know if I can call delete on this pointer?", overloading operator new to do something tricky might help you answer it. I still think that if you have to ask that question, you've done something very wrong.
How could you not know if something is heap-allocated or not? You should design the software to have a single point of allocation.
Unless you're doing some truly exotic stuff in an embedded device or working deep in a custom kernel, I just don't see the need for it.
Look at this code (no error checking, for the sake of example):
class A
{
int *mysweetptr;
A()
{
mysweetptr = 0; //always 0 when unalloc'd
}
void doit()
{
if( ! mysweetptr)
{
mysweetptr = new int; //now has non-null value
}
}
void undoit()
{
if(mysweetptr)
{
delete mysweetptr;
mysweetptr = 0; //notice that we reset it to 0.
}
}
bool doihaveit()
{
if(mysweetptr)
return true;
else
return false;
}
~A()
{
undoit();
}
};
In particular, notice that I am using the null value to determine whether the pointer has been allocated or not, or if I need to delete it or not.
Your design should not rely on determining this information (as others have pointed out, it's not really possible). Instead, your class should explicitly define the ownership of pointers that it takes in its constructor or methods. If your class takes ownership of those pointers, then it is incorrect behavior to pass in a pointer to the stack or global, and you should delete it with the knowledge that incorrect client code may crash. If your class does not take ownership, it should not be deleting the pointer.
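For instance, one hedged way to encode that policy in the interface itself (Widget and Holder are made-up names): accept a std::unique_ptr when the class takes ownership, and a raw pointer or reference when it does not.
#include <memory>

struct Widget { int value = 0; };

class Holder
{
public:
    // Taking unique_ptr says: Holder owns the Widget and will delete it.
    explicit Holder(std::unique_ptr<Widget> w) : owned_(std::move(w)) {}

    // Taking a reference says: the caller keeps ownership.
    void inspect(const Widget& w) const { (void)w.value; }

private:
    std::unique_ptr<Widget> owned_;   // released automatically in ~Holder
};

// Usage:
// Holder h(std::unique_ptr<Widget>(new Widget));  // or std::make_unique in C++14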

Appropriate destruction of certain class members

I have a class with several pointer members that can be reallocated. When I use the LoadOBJ() function, I'm supposed to replace the data already held, but I'm having trouble freeing the old data. Below is some code.
class Object3d{
public:
int nVertex;
int nFace;
int nVertexNormal;
Vector3d *vertex;
Vector3d *vertexNormal;
Face3d *face;
void LoadOBJ(char*);
Object3d():nVertex(0), nFace(0), vertex(NULL), face(NULL){}
Object3d(char*);
~Object3d(){}
};
Face3d:
struct Face3d{
int nFaceV;
int *iVertex;
int *iVertexNormal;
Face3d():nFaceV(0){}
};
Every time I load a new object with the LoadOBJ() function, I want to delete the previously allocated memory rather than just call new and leak the previously allocated memory.
I'm not sure how to do this. This is what I thought of for now:
void *vGarbage, *vGarbage2,
*fGarbage,
*iGarbage, *iGarbage2;
//nFace is the previous number of faces; in the case of a new object, it's 0
for(int i=0; i<nFace; i++)
{
iGarbage=face[i].iVertex;
iGarbage2=face[i].iVertexNormal;
delete[] iGarbage;
delete[] iGarbage2;
}
vGarbage=vertex;
fGarbage=face;
vGarbage2=vertexNormal;
delete[] vGarbage;
delete[] vGarbage2;
delete[] fGarbage;
The above code runs every time I use LoadOBJ(), but there is still a memory leak. I'm also wondering whether this is the right way to do it.
To clarify where the problem/question is: why do I still have a memory leak? And is there a better/cleaner way to delete the previously allocated memory?
Check out C++11's smart pointers; they let you allocate memory that is freed automatically when the owning object goes out of scope.
#include <memory>
#include <iostream>
struct Foo {
Foo() { std::cout << "Foo...\n"; }
~Foo() { std::cout << "~Foo...\n"; }
};
struct D {
void operator()(Foo* p) const {
std::cout << "Call delete for Foo object...\n";
delete p;
}
};
int main()
{
{
std::cout << "constructor with no managed object\n";
std::shared_ptr<Foo> sh1;
}
{
std::cout << "constructor with object\n";
std::shared_ptr<Foo> sh2(new Foo);
std::shared_ptr<Foo> sh3(sh2);
std::cout << sh2.use_count() << '\n';
std::cout << sh3.use_count() << '\n';
}
{
std::cout << "constructor with object and deleter\n";
std::shared_ptr<Foo> sh4(new Foo, D());
}
}
Output:
constructor with no managed object
constructor with object
Foo...
2
2
~Foo...
constructor with object and deleter
Foo...
Call delete for Foo object...
~Foo...
(http://en.cppreference.com/w/cpp/memory/shared_ptr/shared_ptr)
Remember that for each new, a delete should be called to free the memory. Local pointers can be dangerous: if they go out of scope before you have freed the memory they point to, that memory is leaked.
The RAII paradigm in object-oriented C++ has been designed specifically to make resource management (and also memory management) easy.
If the damage has already been done, you can track leaks down with something like http://deleaker.com/ or equivalent leak-detection software.
Also: if you can't use C++11 or a C++11-supporting compiler, consider implementing a simple smart pointer yourself; it shouldn't be too hard and will surely help with your memory problems.
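If you do end up rolling your own, a bare-bones scoped array in pre-C++11 style might look something like this (a sketch only; prefer std::unique_ptr<T[]> or std::vector whenever a modern compiler is available):
#include <cstddef>

template <typename T>
class ScopedArray
{
public:
    explicit ScopedArray(T* p = 0) : p_(p) {}
    ~ScopedArray() { delete [] p_; }               // frees on scope exit

    T* get() const { return p_; }
    T& operator[](std::size_t i) const { return p_[i]; }

private:
    ScopedArray(const ScopedArray&);               // non-copyable
    ScopedArray& operator=(const ScopedArray&);
    T* p_;
};

// Usage (with the question's types):
// ScopedArray<Vector3d> vertices(new Vector3d[nVertex]);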
I understand you want to free the memory occupied by Object3d::vertex, Object3d::vertexNormal and Object3d::face before reassigning these members. First, you should provide a custom destructor for your Face3d so that you no longer need to care for its members in the containing class. That is:
Face3d::~Face3d() {
    if (iVertex) delete[] iVertex;
    if (iVertexNormal) delete[] iVertexNormal;
}
In your Object3d class, you can use a dedicated clean-up function:
void Object3d::cleanup() {
if (face) delete[] face;
face = nullptr;
if (vertex) delete[] vertex;
vertex = nullptr;
if (vertexNormal) delete[] vertexNormal;
vertexNormal = nullptr;
nVertex = 0;
nFace = 0;
nVertexNormal = 0;
}
By the way, in the destructor Object3d::~Object3d() you must call that function as well.
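For example (a short sketch matching the cleanup() function above):
Object3d::~Object3d()
{
    cleanup();   // releases vertex, vertexNormal and face exactly once
}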
This question might answer yours. I think you have to cast the void pointer back to a specific type, like int*, before deleting it; calling delete[] on a void* is undefined behaviour, so what actually happens depends heavily on the compiler you use.
Edit: the advice to use smart pointers is probably the easiest and safest way of solving your problem.
Use std::vector instead of manually managed arrays:
struct Face3d {
    int nFaceV;
    std::vector<int> iVertex;
    std::vector<int> iVertexNormal;
    Face3d() : nFaceV(0) {}
};

class Object3d {
public:
    std::vector<Vector3d> vertex;
    std::vector<Vector3d> vertexNormal;
    std::vector<Face3d> face;
    void LoadOBJ(char*);
    Object3d() {}
    Object3d(char*);
    ~Object3d() {}
};
This frees you from the burden of writing destructors. As already said above, this exemplifies the RAII pattern in C++, which should be used instead of manual resource management.
As a general comment, public data members are almost always a bad idea because they break encapsulation. Object3d should provide some services to clients and keep its internals private.

C++11 memory pool design pattern?

I have a program that contains a processing phase that needs to use a bunch of different object instances (all allocated on the heap) from a tree of polymorphic types, all eventually derived from a common base class.
As the instances may cyclically reference each other, and do not have a clear owner, I want to allocate them with new, handle them with raw pointers, and leave them in memory for the phase (even if they become unreferenced), and then after the phase of the program that uses these instances, I want to delete them all at once.
How I thought to structure it is as follows:
struct B; // common base class
vector<unique_ptr<B>> memory_pool;
struct B
{
B() { memory_pool.emplace_back(this); }
virtual ~B() {}
};
struct D : B { ... }
int main()
{
...
// phase begins
D* p = new D(...);
...
// phase ends
memory_pool.clear();
// all B instances are deleted, and pointers invalidated
...
}
Apart from being careful that all B instances are allocated with new, and that no one uses any pointers to them after the memory pool is cleared, are there problems with this implementation?
Specifically I am concerned about the fact that the this pointer is used to construct a std::unique_ptr in the base class constructor, before the derived class constructor has completed. Does this result in undefined behaviour? If so is there a workaround?
In case you haven't already, familiarize yourself with Boost.Pool. From the Boost documentation:
What is Pool?
Pool allocation is a memory allocation scheme that is very fast, but
limited in its usage. For more information on pool allocation (also
called simple segregated storage), see the pool concepts document and Simple Segregated Storage.
Why should I use Pool?
Using Pools gives you more control over how memory is used in your
program. For example, you could have a situation where you want to
allocate a bunch of small objects at one point, and then reach a point
in your program where none of them are needed any more. Using pool
interfaces, you can choose to run their destructors or just drop them
off into oblivion; the pool interface will guarantee that there are no
system memory leaks.
When should I use Pool?
Pools are generally used when there is a lot of allocation and
deallocation of small objects. Another common usage is the situation
above, where many objects may be dropped out of memory.
In general, use Pools when you need a more efficient way to do unusual
memory control.
Which pool allocator should I use?
pool_allocator is a more general-purpose solution, geared towards
efficiently servicing requests for any number of contiguous chunks.
fast_pool_allocator is also a general-purpose solution but is geared
towards efficiently servicing requests for one chunk at a time; it
will work for contiguous chunks, but not as well as pool_allocator.
If you are seriously concerned about performance, use
fast_pool_allocator when dealing with containers such as std::list,
and use pool_allocator when dealing with containers such as
std::vector.
Memory management is tricky business (threading, caching, alignment, fragmentation, etc. etc.) For serious production code, well-designed and carefully optimized libraries are the way to go, unless your profiler demonstrates a bottleneck.
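For example, a minimal use of boost::object_pool might look like the sketch below (Foo is a made-up type; the drop-everything-at-once behaviour on pool destruction is what Boost documents for object_pool):
#include <boost/pool/object_pool.hpp>
#include <iostream>

struct Foo
{
    explicit Foo(int v) : value(v) {}
    ~Foo() { std::cout << "~Foo(" << value << ")\n"; }
    int value;
};

int main()
{
    boost::object_pool<Foo> pool;
    Foo* a = pool.construct(1);   // allocated from the pool
    Foo* b = pool.construct(2);
    pool.destroy(a);              // optional early destruction
    (void)b;
    // When `pool` goes out of scope, destructors of any objects still
    // alive in it (here: b) are run and all pool memory is released.
}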
Your idea is great and millions of applications are already using it. This pattern is most famously known as an «autorelease pool». It forms the basis for "smart" memory management in the Cocoa and Cocoa Touch Objective-C frameworks. Despite the fact that C++ provides a hell of a lot of other alternatives, I still think this idea has a lot of upside. But there are a few things where I think your implementation as it stands may fall short.
The first problem that I can think of is thread safety. For example, what happens when objects of the same base are created from different threads? A solution might be to protect the pool access with mutually exclusive locks. Though I think a better way to do this is to make that pool a thread-specific object.
The second problem is invoking undefined behavior in the case where the derived class's constructor throws an exception. You see, if that happens, the derived object won't be constructed, but B's constructor would have already pushed a pointer to this into the vector. Later on, when the vector is cleared, it would try to call a destructor through the virtual table of an object that either doesn't exist or is in fact a different object (because new could reuse that address).
The third thing I don't like is that you have only one global pool. Even if it is thread-specific, that just doesn't allow more fine-grained control over the scope of the allocated objects.
Taking the above into account, I would do a couple of improvements:
Have a stack of pools for more fine-grained scope control.
Make that pool stack a thread-specific object.
In case of failures (like exception in derived class constructor), make sure the pool doesn't hold a dangling pointer.
Here is my literally 5 minutes solution, don't judge for quick and dirty:
#include <new>
#include <set>
#include <stack>
#include <cassert>
#include <memory>
#include <stdexcept>
#include <iostream>
#define thread_local __thread // Sorry, my compiler doesn't support C++11 thread locals
struct AutoReleaseObject {
AutoReleaseObject();
virtual ~AutoReleaseObject();
};
class AutoReleasePool final {
public:
AutoReleasePool() {
stack_.emplace(this);
}
~AutoReleasePool() noexcept {
std::set<AutoReleaseObject *> obj;
obj.swap(objects_);
for (auto *p : obj) {
delete p;
}
stack_.pop();
}
static AutoReleasePool &instance() {
assert(!stack_.empty());
return *stack_.top();
}
void add(AutoReleaseObject *obj) {
objects_.insert(obj);
}
void del(AutoReleaseObject *obj) {
objects_.erase(obj);
}
AutoReleasePool(const AutoReleasePool &) = delete;
AutoReleasePool &operator = (const AutoReleasePool &) = delete;
private:
// Hopefully, making this private won't allow users to create pool
// not on stack that easily... But it won't make it impossible of course.
void *operator new(size_t size) {
return ::operator new(size);
}
std::set<AutoReleaseObject *> objects_;
struct PrivateTraits {};
AutoReleasePool(const PrivateTraits &) {
}
struct Stack final : std::stack<AutoReleasePool *> {
Stack() {
std::unique_ptr<AutoReleasePool> pool
(new AutoReleasePool(PrivateTraits()));
push(pool.get());
pool.release();
}
~Stack() {
assert(!stack_.empty());
delete stack_.top();
}
};
static thread_local Stack stack_;
};
thread_local AutoReleasePool::Stack AutoReleasePool::stack_;
AutoReleaseObject::AutoReleaseObject()
{
AutoReleasePool::instance().add(this);
}
AutoReleaseObject::~AutoReleaseObject()
{
AutoReleasePool::instance().del(this);
}
// Some usage example...
struct MyObj : AutoReleaseObject {
MyObj() {
std::cout << "MyObj::MyObj(" << this << ")" << std::endl;
}
~MyObj() override {
std::cout << "MyObj::~MyObj(" << this << ")" << std::endl;
}
void bar() {
std::cout << "MyObj::bar(" << this << ")" << std::endl;
}
};
struct MyObjBad final : AutoReleaseObject {
MyObjBad() {
throw std::runtime_error("oops!");
}
~MyObjBad() override {
}
};
void bar()
{
AutoReleasePool local_scope;
for (int i = 0; i < 3; ++i) {
auto o = new MyObj();
o->bar();
}
}
void foo()
{
for (int i = 0; i < 2; ++i) {
auto o = new MyObj();
bar();
o->bar();
}
}
int main()
{
std::cout << "main start..." << std::endl;
foo();
std::cout << "main end..." << std::endl;
}
Hmm, I needed almost exactly the same thing recently (memory pool for one phase of a program that gets cleared all at once), except that I had the additional design constraint that all my objects would be fairly small.
I came up with the following "small-object memory pool" -- perhaps it will be of use to you:
#pragma once
#include "defs.h"
#include <cstdint> // uintptr_t
#include <cstdlib> // std::malloc, std::size_t
#include <type_traits> // std::alignment_of
#include <utility> // std::forward
#include <algorithm> // std::max
#include <cassert> // assert
// Small-object allocator that uses a memory pool.
// Objects constructed in this arena *must not* have delete called on them.
// Allows all memory in the arena to be freed at once (destructors will
// be called).
// Usage:
// SmallObjectArena arena;
// Foo* foo = arena.create<Foo>();
// arena.free(); // Calls ~Foo
class SmallObjectArena
{
private:
typedef void (*Dtor)(void*);
struct Record
{
Dtor dtor;
short endOfPrevRecordOffset; // Bytes between end of previous record and beginning of this one
short objectOffset; // From the end of the previous record
};
struct Block
{
size_t size;
char* rawBlock;
Block* prevBlock;
char* startOfNextRecord;
};
template<typename T> static void DtorWrapper(void* obj) { static_cast<T*>(obj)->~T(); }
public:
explicit SmallObjectArena(std::size_t initialPoolSize = 8192)
: currentBlock(nullptr)
{
assert(initialPoolSize >= sizeof(Block) + std::alignment_of<Block>::value);
assert(initialPoolSize >= 128);
createNewBlock(initialPoolSize);
}
~SmallObjectArena()
{
this->free();
std::free(currentBlock->rawBlock);
}
template<typename T>
inline T* create()
{
return new (alloc<T>()) T();
}
template<typename T, typename A1>
inline T* create(A1&& a1)
{
return new (alloc<T>()) T(std::forward<A1>(a1));
}
template<typename T, typename A1, typename A2>
inline T* create(A1&& a1, A2&& a2)
{
return new (alloc<T>()) T(std::forward<A1>(a1), std::forward<A2>(a2));
}
template<typename T, typename A1, typename A2, typename A3>
inline T* create(A1&& a1, A2&& a2, A3&& a3)
{
return new (alloc<T>()) T(std::forward<A1>(a1), std::forward<A2>(a2), std::forward<A3>(a3));
}
// Calls the destructors of all currently allocated objects
// then frees all allocated memory. Destructors are called in
// the reverse order that the objects were constructed in.
void free()
{
// Destroy all objects in arena, and free all blocks except
// for the initial block.
do {
char* endOfRecord = currentBlock->startOfNextRecord;
while (endOfRecord != reinterpret_cast<char*>(currentBlock) + sizeof(Block)) {
auto startOfRecord = endOfRecord - sizeof(Record);
auto record = reinterpret_cast<Record*>(startOfRecord);
endOfRecord = startOfRecord - record->endOfPrevRecordOffset;
record->dtor(endOfRecord + record->objectOffset);
}
if (currentBlock->prevBlock != nullptr) {
auto memToFree = currentBlock->rawBlock;
currentBlock = currentBlock->prevBlock;
std::free(memToFree);
}
} while (currentBlock->prevBlock != nullptr);
currentBlock->startOfNextRecord = reinterpret_cast<char*>(currentBlock) + sizeof(Block);
}
private:
template<typename T>
static inline char* alignFor(char* ptr)
{
const size_t alignment = std::alignment_of<T>::value;
return ptr + (alignment - (reinterpret_cast<uintptr_t>(ptr) % alignment)) % alignment;
}
template<typename T>
T* alloc()
{
char* objectLocation = alignFor<T>(currentBlock->startOfNextRecord);
char* nextRecordStart = alignFor<Record>(objectLocation + sizeof(T));
if (nextRecordStart + sizeof(Record) > currentBlock->rawBlock + currentBlock->size) {
createNewBlock(2 * std::max(currentBlock->size, sizeof(T) + sizeof(Record) + sizeof(Block) + 128));
objectLocation = alignFor<T>(currentBlock->startOfNextRecord);
nextRecordStart = alignFor<Record>(objectLocation + sizeof(T));
}
auto record = reinterpret_cast<Record*>(nextRecordStart);
record->dtor = &DtorWrapper<T>;
assert(objectLocation - currentBlock->startOfNextRecord < 32768);
record->objectOffset = static_cast<short>(objectLocation - currentBlock->startOfNextRecord);
assert(nextRecordStart - currentBlock->startOfNextRecord < 32768);
record->endOfPrevRecordOffset = static_cast<short>(nextRecordStart - currentBlock->startOfNextRecord);
currentBlock->startOfNextRecord = nextRecordStart + sizeof(Record);
return reinterpret_cast<T*>(objectLocation);
}
void createNewBlock(size_t newBlockSize)
{
auto raw = static_cast<char*>(std::malloc(newBlockSize));
auto blockStart = alignFor<Block>(raw);
auto newBlock = reinterpret_cast<Block*>(blockStart);
newBlock->rawBlock = raw;
newBlock->prevBlock = currentBlock;
newBlock->startOfNextRecord = blockStart + sizeof(Block);
newBlock->size = newBlockSize;
currentBlock = newBlock;
}
private:
Block* currentBlock;
};
To answer your question, you're not invoking undefined behaviour since nobody is using the pointer until the object is fully constructed (the pointer value itself is safe to copy around until then). However, it's a rather intrusive method, as the object(s) themselves need to know about the memory pool. Additionally, if you're constructing a large number of small objects, it would likely be faster to use an actual pool of memory (like my pool does) instead of calling out to new for every object.
Whatever pool-like approach you use, be careful that the objects are never manually deleted, because that would lead to a double free!
I still think this is an interesting question without a definitive reply, but please let me break it down into the different questions you are actually asking:
1.) Does storing a pointer to a base class in a vector before the derived class has finished initialising prevent or cause issues with retrieving the derived object through that pointer? [Slicing, for example.]
Answer: No. So long as you are 100% sure of the type actually being pointed to, this mechanism does not cause those issues; however, note the following points:
If the derived constructor fails, you are left with an issue later: you are likely to have at least a dangling pointer sitting in the vector, since the address space the derived class thought it was getting is released back to the operating environment on failure, yet the vector still holds that address as if it pointed to a valid object of the base class type.
Note that a vector, although kind of useful, is not the best structure for this, and even if it were, there should be some inversion of control involved here to allow the vector object to control the initialisation of your objects, so that you are aware of success or failure.
These points lead to the implied 2nd question:
2.) Is this a good pattern for pooling?
Answer: Not really, for the reasons mentioned above, plus others (pushing a vector past its end point basically ends up with a malloc, which is unnecessary and will impact performance). Ideally you want to use a pooling library, or a template class, and even better, separate the allocation/deallocation policy implementation from the pool implementation. A low-level solution has already been hinted at: allocate adequate pool memory at pool initialisation, and then hand out void pointers into the pool's address space (see Alex Zywicki's solution above). Using this pattern, destroying the pool is safe: the pool, being contiguous memory, can be destroyed en masse without any dangling issues or memory leaks from losing all references to an object. Losing all references to an object whose address was allocated through the pool leaves you with dirty chunks, but it will not cause a memory leak, as that memory is still managed by the pool implementation.
In the early days of C/C++ (before the mass proliferation of the STL), this was a well-discussed pattern and many implementations and designs can be found in good literature, for example:
Knuth (1973, The Art of Computer Programming, multiple volumes), and for a more complete list, with more on pooling, see:
http://www.ibm.com/developerworks/library/l-memory/
The 3rd implied question seems to be:
3) Is this a valid scenario to use pooling?
Answer: This is a localised design decision based on what you are comfortable with, but to be honest, your implementation (no controlling structure/aggregate, possibly cyclic sharing of subsets of objects) suggests to me that you would be better off with a basic linked list of wrapper objects, each of which contains a pointer to your superclass, used only for addressing purposes. Your cyclical structures are built on top of this, and you simply amend, grow or shrink the list as required to accommodate all of your first-class objects; when finished, you can then easily destroy them, effectively in an O(1) operation from within the linked list.
Having said that, I would personally recommend at this time (when you have a scenario where pooling does have a use, and so you are in the right mind-set) carrying out the building of a storage-management/pooling set of classes that are parameterised/typeless now, as it will stand you in good stead for the future.
This sounds like what I have heard called a Linear Allocator.
I will explain the basics of how I understand it works.
Allocate a block of memory using ::operator new(size);
Have a void* that is your pointer to the next free space in memory.
You will have an alloc(size_t size) function that gives you a pointer to a location in the block from step one, onto which you construct your object using placement new.
Placement new looks like... int* i = new(location)int(); where location is a void* into a block of memory you allocated from the allocator.
When you are done with all of your memory, you call a Flush() function that deallocates the memory from the pool, or at least wipes the data clean.
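A minimal, self-contained sketch of those steps (my own illustration, not the class posted below; it assumes single-threaded use, ignores alignment, and never runs destructors of the placed objects):
#include <new>        // ::operator new, placement new
#include <cstddef>    // std::size_t

class BumpArena
{
public:
    explicit BumpArena(std::size_t size)
        : block_(static_cast<char*>(::operator new(size))),
          next_(block_), end_(block_ + size) {}
    ~BumpArena() { ::operator delete(block_); }

    void* alloc(std::size_t n)
    {
        if (n > static_cast<std::size_t>(end_ - next_)) return 0;  // out of space
        void* p = next_;
        next_ += n;                      // bump the free pointer
        return p;
    }

    void flush() { next_ = block_; }     // "wipe" by resetting the pointer

private:
    char* block_;
    char* next_;
    char* end_;
};

// Usage: placement-new into the arena.
// BumpArena arena(4096);
// int* i = new (arena.alloc(sizeof(int))) int(42);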
I programmed one of these recently and I will post my code here for you, as well as do my best to explain.
#include <iostream>
class LinearAllocator:public ObjectBase
{
public:
LinearAllocator();
LinearAllocator(Pool* pool,size_t size);
~LinearAllocator();
void* Alloc(size_t size);
void Flush();
private:
void** m_pBlock;
void* m_pHeadFree;
void* m_pEnd;
};
Don't worry about what I'm inheriting from. I have been using this allocator in conjunction with a memory pool, but basically, instead of getting the memory from operator new, I am getting memory from a memory pool. The internal workings are essentially the same.
Here is the implementation:
LinearAllocator::LinearAllocator():ObjectBase::ObjectBase()
{
m_pBlock = nullptr;
m_pHeadFree = nullptr;
m_pEnd=nullptr;
}
LinearAllocator::LinearAllocator(Pool* pool,size_t size):ObjectBase::ObjectBase(pool)
{
if (pool!=nullptr) {
m_pBlock = ObjectBase::AllocFromPool(size);
m_pHeadFree = * m_pBlock;
m_pEnd = (void*)((unsigned char*)*m_pBlock+size);
}
else{
m_pBlock = nullptr;
m_pHeadFree = nullptr;
m_pEnd=nullptr;
}
}
LinearAllocator::~LinearAllocator()
{
if (m_pBlock!=nullptr) {
ObjectBase::FreeFromPool(m_pBlock);
}
m_pBlock = nullptr;
m_pHeadFree = nullptr;
m_pEnd=nullptr;
}
void* LinearAllocator::Alloc(size_t size)
{
if (m_pBlock!=nullptr) {
void* test = (void*)((unsigned char*)m_pEnd-size);
if (m_pHeadFree<=test) {
void* temp = m_pHeadFree;
m_pHeadFree=(void*)((unsigned char*)m_pHeadFree+size);
return temp;
}else{
return nullptr;
}
}else return nullptr;
}
void LinearAllocator::Flush()
{
if (m_pBlock!=nullptr) {
m_pHeadFree = *m_pBlock;
size_t size = (unsigned char*)m_pEnd-(unsigned char*)*m_pBlock;
memset(*m_pBlock,0,size);
}
}
This code is fully functional except for a few lines which will need to be changed because of my inheritance and use of the memory pool, but I bet you can figure out what needs to change; just let me know if you need a hand changing the code. This code has not been tested in any sort of professional manner and is not guaranteed to be thread safe or anything fancy like that. I just whipped it up and thought I could share it with you since you seemed to need help.
I also have a working implementation of a fully generic memory pool if you think it may help you. I can explain how it works if you need.
Once again if you need any help let me know. Good luck.

Calling delete on automatic objects

Consider the following C++ code:
#include <iostream>
using namespace std;

class test
{
public:
int val;
test():val(0){}
~test()
{
cout << "Destructor called\n";
}
};
int main()
{
test obj;
test *ptr = &obj;
delete ptr;
cout << obj.val << endl;
return 0;
}
I know delete should be called only on dynamically allocated objects, but what would happen to obj now?
OK, I get that we are not supposed to do such a thing. Now, if I am writing the following implementation of a smart pointer, how can I make sure that such a thing doesn't happen?
class smart_ptr
{
public:
int *ref;
int *cnt;
smart_ptr(int *ptr)
{
ref = ptr;
cnt = new int(1);
}
smart_ptr& operator=(smart_ptr &smptr)
{
if(this != &smptr)
{
// House keeping
(*cnt)--;
if(*cnt == 0)
{
delete ref;
delete cnt;
ref = 0;
cnt = 0;
}
// Now update
ref = smptr.ref;
cnt = smptr.cnt;
(*cnt)++;
}
return *this;
}
~smart_ptr()
{
(*cnt)--;
if(*cnt == 0)
{
delete ref;
delete cnt;
ref = 0;
cnt = 0;
}
}
};
You've asked two distinct questions in your post. I'll answer them separately.
but what would happen to obj now ?
The behavior of your program is undefined. The C++ standard makes no comment on what happens to obj now. In fact, the standard makes no comment what your program does before the error, either. It simply is not defined.
Perhaps your compiler vendor makes a commitment to what happens, perhaps you can examine the assembly and predict what will happen, but C++, per se, does not define what happens.
Practically speaking [1], you will likely get a warning message from your standard library, or you will get a segfault, or both.
[1]: Assuming that you are running on either Windows or a UNIX-like system with an MMU. Other rules apply to other compilers and OSes.
How can I make sure that [deleting a stack variable] doesn't happen?
Never initialize smart_ptr with the address of a stack variable. One way to do that is to document the interface to smart_ptr. Another way is to redefine the interface so that the user never passes a pointer to smart_ptr; make smart_ptr responsible for invoking new.
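For instance, a sketch of that second option applied to the smart_ptr in the question (copy semantics omitted for brevity): the class performs the new itself, so it can never be handed the address of a stack variable.
class smart_ptr
{
public:
    explicit smart_ptr(int value)
        : ref(new int(value)), cnt(new int(1)) {}

    ~smart_ptr()
    {
        if (--(*cnt) == 0)
        {
            delete ref;
            delete cnt;
        }
    }

    smart_ptr(const smart_ptr&) = delete;            // or implement proper
    smart_ptr& operator=(const smart_ptr&) = delete; // reference counting

private:
    int *ref;
    int *cnt;
};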
Your code has undefined behaviour because you used delete on a pointer that was not allocated with new. This means anything could happen and it's impossible to say what would happen to obj.
I would guess that on most platforms your code would crash.
delete tries to access obj's space in memory, but the operating system doesn't allow it, and the program crashes (core dumped).
It's undefined what will happen so you can't say much. The best you can do is speculate for particular implementations/compilers.
It's not just undefined behavior, as stated in other answers. This will almost certainly crash.
The first issue is the attempt to free a stack variable.
The second issue will occur upon program termination, when the test destructor is called for obj a second time.

Could you give me a real-world example of a memory leak?

I have heard a lot about memory leak vulnerabilities, but I could not find a real-world example of a memory leak. Could you provide a real-world example, maybe from some big open source project, and explain the solution to me?
Thanks.
It's really simple, actually. In your main, put:
char* c = new char[4];
Then exit. That's a memory leak. Any new that doesn't get followed by delete is a leak.
This answer has some good examples, but as my comment said, it will be fairly hard to find a released application with a leak that an outside observer can look at and easily identify.
I am screaming, cursing and yelling every day about code like this in our (huge) legacy code base:
// returns a raw pointer with changing conventions about who owns it...
HelpFoo* Foo::GetFoo(Bar* pBar, OtherFoo* pFoo)
{
// all 'local' variables even those allocated on freestore declared
// and initialized in a large block at the beginning of the function/method
HelpFoo *A = new HelpFoo;
OtherHelpFoo *B, *C;
EvenMore *D = new EvenMore;
// and so on, these blocks can be huge...
// a complicated spaghetti code in here, with dozens of nested 'ifs'
if (/* some expression */) {
} else if (/* some other expression */) {
// and so on... then suddenly:
if (/* some other nested expression */) {
// I forgot that I've allocated other memory at the beginning...
return A;
}
}
// some miserably written logic here and suddenly
if (D) delete D; return A;
// call to some other function with cryptical name without any
// kind of knowledge what happens with the resource:
FooTakesReferenceToPointer(&A);
// suddenly returning something completely different
// what should I free, A, D...?
return C;
}
I tried to write in comments what the problems are. Clearly, forget about exceptions. The spaghetti code is so bad that nobody can tell what the logic actually is. Therefore it is really, really easy to forget to free memory and that happens very, very frequently. Solution 1: Throw away and rewrite everything. Solution 2: Keep spaghetti as it is, replace all newed resources by smart pointers and make_shared or make_unique, let compiler yell. Of course, first write a test suite (which didn't exist before) to guarantee the same (often screwed) behaviour for all possible sets of inputs (which are not documented).
EDIT
As James said, this is undefined behaviour, so no promises.
You could do something like this:
#include <vector>
class Base
{
public:
Base()
{
baseData = new char [1024];
}
~Base()
{
delete [] baseData;
}
private:
char* baseData;
};
class Derived : public Base
{
public:
Derived()
{
derivedData = new char[1024];
}
~Derived()
{
delete [] derivedData;
}
private:
char* derivedData;
};
int main()
{
std::vector<Base*> datablocks;
datablocks.push_back(new Base());
datablocks.push_back(new Derived());
for(unsigned int i = 0; i < datablocks.size(); ++i)
{
delete datablocks[i];
}
datablocks.clear();
return 0;
}
The data in the Derived class won't be freed here, since we are calling delete on a Base* and the Base class does not declare a virtual destructor.
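For reference, the conventional fix is a one-line change (a sketch of the same Base class):
class Base
{
public:
    Base()
    {
        baseData = new char [1024];
    }
    virtual ~Base()            // virtual: deleting through a Base* now runs
    {                          // ~Derived() first, so derivedData is freed
        delete [] baseData;
    }
private:
    char* baseData;
};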
A lot of examples could be given here. Just allocate some memory and do not free it.
A good example for this would be the following:
char* pBuffer = new char[ 1024 ]; // or something else, dynamically allocated
// do something here
// now suppose, calling f() throws
f();
// do some other things
delete[] pBuffer;
When f() throws and the exception is not caught inside this function, delete[] will never be executed. Thus, a memory leak.
This is one of the best examples why smart pointers should be used.
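For example, the same snippet with the buffer owned by a standard container leaks nothing even when f() throws (a sketch; g() is a made-up wrapper for the code above):
#include <vector>

void f();   // the function from the example above that may throw

void g()
{
    std::vector<char> buffer(1024);   // the vector owns the memory
    // do something here
    f();                              // even if this throws...
    // do some other things
}                                     // ...~vector still releases the memory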
Another example would be a function returning a pointer to dynamically allocated memory. The caller may often forget to free this memory. Something like:
char* f()
{
return new char[ 1024 ];
}
//...
// in some other function
char* pSomething = f();
// do some stuff here and return
Imagine you're processing network data and create polymorphic "message objects" based on the data:
while (true)
{
char buf[1024];
size_t len = read_from_network(buf, 1024); // fictitious, for demonstration only
Message * p = Message::Parse(buf, len); // allocates new, dynamic, concrete object
engine.process(p);
}
The engine object may choose to store the object somewhere and use it again later, and if nobody takes care of deleting it, you have a perfect leak.
While the other answers give enough hints, here are some 'real world' memory leaks which I have seen in our applications.
I don't remember if this was found before or after the release, but I guess that doesn't matter.
void f()
{
BYTE* b = NULL;
f = open a file;
while (!f.end())
{
int size = getNextRecordSize(f);
b = new BYTE[size];
readNextRecord(f,b);
process record;
}
delete [] b;
}
It's a bit hard to detect this. Reviewers might take it for granted that the memory is deleted properly when they see the delete call. However, it deletes only the memory allocated for the last record; the rest is leaked.
class A
{
public:
BYTE* get()
{
// allocate a new buffer, copy the someData buffer and return that.
// The client is expected to delete it.
};
private:
BYTE* someData;
};
void f()
{
A a;
B.initialize(a.get()); // It is so convenient to use the pointer. It is not obvious from the function name
// that the result of get has to be deleted.
}
One example I often run across in our code is in image understanding functions, where a temporary 8-bit buffer is allocated and never released (yeah, I know, when you do a new, do a delete right afterwards...).
unsigned char* dataToBeUsed = new unsigned char[imgsize];
memcpy(dataToBeUsed, original, imgsize);
// use and process the data here
return value;
The allocated memory is never released -> memory leak. Windows will reclaim the memory when the application exits completely, but until then, within the application, that memory is just lost -> leaked.
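A sketch of the usual fix, letting a container own the temporary buffer so it is released on every return path (processImage and its parameters are placeholders for the original function):
#include <vector>
#include <cstring>
#include <cstddef>

int processImage(const unsigned char* original, std::size_t imgsize)
{
    std::vector<unsigned char> dataToBeUsed(imgsize);
    std::memcpy(dataToBeUsed.data(), original, imgsize);

    int value = 0;
    // use and process the data here

    return value;   // the buffer is freed automatically, even on early returns
}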
A memory leak occurs when the programmer has the memory leak of forgetting to free allocated memory :-)
linebuffer = new char[4096];
/* do things */
/* forget to free memory */
Normally, if you cause a memory leak and then exit the program, it is not harmful, since the operating system normally frees the resources allocated by the program. The problem arises when your application runs for a long period of time (for example, a service). If your program causes memory leaks, then you will run out of system memory, unless the operating system has mechanisms to avoid that; in that case, it will terminate your program.
So, be careful and eat fish: it's very good for memory :-)
To give you a real-world example, a bit of googling turned up this memory leak in 389 Directory Server (a RedHat Open Source product).
Just lose the pointer to dynamically allocated memory:
void foo()
{
int *arr = new int[100];
}