Consider the following contrived example:
class AllocatedClass {
public:
AllocatedClass() : dummy(0) {}
private:
int dummy;
};
class AllocatingClass {
public:
AllocatingClass() {}
~AllocatingClass() {
// CANNOT delete the elements in list here because there may
// be more than one instance of AllocatingClass sharing the same
// static list
}
void AddNewObject() {
list.push_back(new AllocatedClass());
}
private:
static std::vector<AllocatedClass*> list;
};
In the implementation file:
std::vector<AllocatedClass*> AllocatingClass::list;
Putting aside whether it is a good idea for multiple instances of a class to share a list of dynamically allocated objects, is there a way to clean up these new'ed AllocatedClass objects at the end of the program? Does it matter if they never get deleted, considering I don't want to delete them until the application ends?
If the lifetime of the object is the lifetime of the program's execution, then there is no need to free the memory in code. The memory will be freed automatically when the process exits.
Many Linux command-line tools do not free their memory in code for performance reasons: it is faster to let the OS reclaim whole pages of memory at exit than to free each object one by one.
Another strategy is to keep a separate list of the unique AllocatedClass instances, such as std::list<AllocatedClass*> to_be_freed, and free them from that list later on (transferring ownership of the objects to that list).
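A minimal sketch of that ownership-transfer idea, reusing the names from the question (to_be_freed and FreeAll are illustrative, not part of the original code; requires #include <list>):

// Hypothetical sketch: a second static list that owns every allocation.
// AllocatingClass keeps raw pointers for day-to-day use; to_be_freed is
// drained exactly once, at a point where the program is done with the objects.
static std::list<AllocatedClass*> to_be_freed;

void AddNewObject() {
    AllocatedClass* p = new AllocatedClass();
    list.push_back(p);          // the shared working list from the question
    to_be_freed.push_back(p);   // the ownership list
}

static void FreeAll() {         // call once, at application end
    for (AllocatedClass* p : to_be_freed)
        delete p;
    to_be_freed.clear();
}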
is there a way to clean up these new'ed AllocatedClass objects at the end
of the program?
One solution is to use std::shared_ptr and have the deallocation done automatically.
#include <memory>
#include <vector>
#include <iostream>
class AllocatedClass
{
public:
AllocatedClass(int n = 0) : dummy(n) {}
~AllocatedClass() { std::cout << "I am being destroyed" << '\n'; }
private:
int dummy;
};
class AllocatingClass
{
public:
AllocatingClass() {}
void AddNewObject(int num = 0)
{ shared_list.push_back(std::make_shared<AllocatedClass>(num)); }
private:
static std::vector<std::shared_ptr<AllocatedClass>> shared_list;
};
std::vector<std::shared_ptr<AllocatedClass>> AllocatingClass::shared_list;
AllocatingClass ac;
int main()
{
ac.AddNewObject();
ac.AddNewObject(1);
}
Live Example
Note that the destructors are called automatically for the objects that were placed in the vector.
(BTW, it isn't a good idea to name your member variable list).
Most of the time (very close to all the time), an object that creates dynamically allocated objects should have a programmer-defined destructor to free the memory when the object reaches the end of its life. AllocatingClass should have a destructor because you allocate dynamic memory with new.
~AllocatingClass() {
    for (std::size_t i = 0; i < list.size(); i++) {
        if (list[i] != NULL) {
            delete list[i];
            list[i] = NULL;
        }
    }
}
This should both provide a method for de-allocating memory and safety so that you do not delete an already deleted pointer.
Related
I designed my application to run such that I call a raw pointer on a class deriving from a Singleton:
class SimulatedIndexFixingStore : public Singleton<SimulatedIndexFixingStore>
{
private:
friend Singleton<SimulatedIndexFixingStore>;
typedef std::unordered_map<std::string, std::unordered_map<int, std::map<Date, double>>> Dictionary;
mutable Dictionary simulatedFixings_;
SimulatedIndexFixingStore& operator=(const SimulatedIndexFixingStore& ) = delete;
SimulatedIndexFixingStore(const SimulatedIndexFixingStore& ) = delete;
SimulatedIndexFixingStore();
~SimulatedIndexFixingStore()
{
std::cout << "\nCalling destructor of store...\n";
}
public:
// creates raw pointers to this store and passes them around...
static SimulatedIndexFixingStore* createPointerToStore()
{
auto tmp = new SimulatedIndexFixingStore(); // <----- memory allocation on the heap!
return tmp;
}
};
I then use the raw pointer returned by the static member function SimulatedIndexFixingStore* createPointerToStore() and pass it around my application.
Question:
I don't call delete on the pointer prior to leaving main(), as this would call the destructor on the static Singleton instance, so this means I have a memory leak in my app. Correct?
I know for a fact that the destructor ~SimulatedIndexFixingStore() is not getting called, because its message is not printed out.
The instrument profiler of my application doesn't seem to agree there is a memory leak. Can someone please help me understand?
I am working on an open-source library that has a memory leak in it. The library is a data streaming service built around boost::asio. The server side uses a heap-based memory management system that provides memory to hold a finite number of samples while they wait to get pushed across a TCP connection. When the server is first constructed, a heap of memory for all the old samples is allocated. After a sample is passed across the socket, its memory is returned to the heap.
This is fine, unless all that pre-allocated heap is already taken. Here is the function that creates a 'sample':
sample_p new_sample(double timestamp, bool pushthrough) {
sample *result = pop_freelist();
if (!result){
result = new(new char[sample_size_]) sample(fmt_, num_chans_, this);
}
return sample_p(result);
}
sample_p is just a typedef'd smart pointer templated to the sample class.
The offending line is in the middle. When there isn't a chunk of memory on the freelist, we need to make some. This leaks memory.
My question is: why is this happening? Since I shove the new sample into a smart pointer, shouldn't the memory be freed when it goes out of scope (it gets popped off of a stack later on)? Do I need to somehow handle the memory allocated on the inside, i.e. the memory allocated by new char[sample_size_]? If so, how can I do that?
Edit:
@RichardHodges here is a compilable MCVE. This is highly simplified, but I think it captures exactly the problem I am facing in the original code.
#include <boost/intrusive_ptr.hpp>
#include <boost/lockfree/spsc_queue.hpp>
#include <iostream>
typedef boost::intrusive_ptr<class sample> sample_p;
typedef boost::lockfree::spsc_queue<sample_p> buffer;
class sample {
public:
double data;
class factory{
public:
friend class sample;
sample_p new_sample(int size, double data) {
sample* result = new(new char[size]) sample(data);
return sample_p(result);
}
};
sample(double d) {
data = d;
}
void operator delete(void *x) {
delete[](char*)x;
}
/// Increment ref count.
friend void intrusive_ptr_add_ref(sample *s) {
}
/// Decrement ref count and reclaim if unreferenced.
friend void intrusive_ptr_release(sample *s) {
}
};
void push_sample(buffer &buff, const sample_p &samp) {
while (!buff.push(samp)) {
sample_p dummy;
buff.pop(dummy);
}
}
int main(void){
buffer buff(1);
sample::factory factory_;
for (int i = 0; i < 10; i++)
push_sample(buff, factory_.new_sample(100,0.0));
std::cout << "press any key to exit" << std::endl;
char foo;
std::cin >> foo;
return 0;
}
When I step through the code, I note that my delete operator never gets called on the sample pointers. I guess that the library I'm working on (which, again, I didn't write, so I am still learning its ways) is misusing the intrusive_ptr type.
You are allocating the memory with new[] so you need to deallocate it with delete[] (on a char*). The smart pointer probably calls delete by default, so you should provide a custom deleter that calls delete[] (after manually invoking the destructor of the sample). Here is an example using std::shared_ptr.
auto s = std::shared_ptr<sample>(
new (new char[sizeof(sample)]) sample,
[](sample* p) {
p->~sample();
delete[] reinterpret_cast<char*>(p);
}
);
However, why are you using placement new when your buffer only contains one object? Why not just use regular new instead?
auto s = std::shared_ptr<sample>(new sample);
Or even better (with std::shared_ptr), use a factory function.
auto s = std::make_shared<sample>();
I want to allocate objects on the heap according to a string entered by the user, but I cannot access the pointers or objects outside the function, even though they are on the heap. I also tried to allocate them using unique pointers, but I'm still getting an error saying "not declared".
How can I create these objects? A user can create up to eight objects at once, with the words separated by spaces (e.g. "Avalanche Toolbox Paperdoll" and so on).
string user_input;
getline(cin, user_input);
istringstream iss(user_input);
copy(istream_iterator<string>(iss),
istream_iterator<string>(),
back_inserter(vec));
for(int i=0; i<8; i++)
{
if(vec.at(i)=="Avalanche")
{
Avalanche *avalanche = new Avalanche(5);
std::unique_ptr<Avalanche> ptr(new Avalanche(5)); // even tried using unique pointer.
cout<<"Avalanche: "<<avalanche->c_game<<endl;
}
else if(vec.at(i)=="Bureaucrat")
{
Bureaucrat *bureaucrat = new Bureaucrat(5);
cout<<"Bureaucrat: "<<bureaucrat->c_game<<endl;
}
else if(vec.at(i)=="Toolbox")
{
Toolbox *toolbox = new Toolbox(5);
cout<<"Toolbox: "<<toolbox->c_game<<endl;
}
else if(vec.at(i)=="Crescendo")
{
Crescendo *crescendo = new Crescendo(5);
cout<<"Crescendo: "<<crescendo->c_game<<endl;
}
else if(vec.at(i)=="Paperdoll")
{
Paperdoll *paperdoll = new Paperdoll(5);
cout<<"Paperdoll: "<<paperdoll->c_game<<endl;
}
else if(vec.at(i)=="FistfullODollars")
{
Fistfullodollars *fistfullodollars = new Fistfullodollars(5);
cout<<"FistfullOdollars: "<<fistfullodollars->c_game<<endl;
}
}
cout<<ptr.c_game<<endl; // give an error not declared
Your pointer's lifetime is limited to its scope. In your case, the scope is the innermost pair of {}. You need to remember that when you have a (smart) pointer to allocated memory, you actually have TWO objects: one is the (smart) pointer itself, and the other is the object it points to. When the pointer goes out of scope, you can no longer reach the actual object through it. And if there is no other pointer pointing to the object, it is lost forever; you have a classic example of a so-called memory leak - there is an object somewhere out there, but it is inaccessible, and the memory it occupies is lost and cannot be reused.
Think about it in the following way: you have a book and a library card. The book is somewhere in vast storage, and unless you know where to look (using the library card) you will never find it. So if you lose all copies of your library card, the book is as good as lost to you - though it obviously still exists somewhere.
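A tiny illustration of the point above, using the Avalanche type from your code:

{
    Avalanche* avalanche = new Avalanche(5);   // the Avalanche object lives on the heap
}   // 'avalanche' (the pointer, a stack object) dies here; the Avalanche object does not,
    // and with no remaining pointer to it, it can never be reached or deleted again: a leak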
The answer is that the objects you created are not deleted, since you created them on the heap, as you pointed out, but the pointer variables go out of scope. The solution is to put the declarations outside the scopes and do the assignment inside:
string user_input;
getline(cin, user_input);
istringstream iss(user_input);
copy(istream_iterator<string>(iss),
istream_iterator<string>(),
back_inserter(vec));
Bureaucrat *bureaucrat;
Toolbox *toolbox;
Crescendo *crescendo;
Paperdoll *paperdoll;
Fistfullodollars *fistfullodollars;
for(int i=0; i<8; i++)
{
if(vec.at(i)=="Avalanche")
{
Avalanche *avalanche = new Avalanche(5);
std::unique_ptr<Avalanche> ptr(new Avalanche(5)); // even tried using unique pointer.
cout<<"Avalanche: "<<avalanche->c_game<<endl;
}
else if(vec.at(i)=="Bureaucrat")
{
bureaucrat = new Bureaucrat(5);
cout<<"Bureaucrat: "<<bureaucrat->c_game<<endl;
}
else if(vec.at(i)=="Toolbox")
{
toolbox = new Toolbox(5);
cout<<"Toolbox: "<<toolbox->c_game<<endl;
}
else if(vec.at(i)=="Crescendo")
{
crescendo = new Crescendo(5);
cout<<"Crescendo: "<<crescendo->c_game<<endl;
}
else if(vec.at(i)=="Paperdoll")
{
paperdoll = new Paperdoll(5);
cout<<"Paperdoll: "<<paperdoll->c_game<<endl;
}
else if(vec.at(i)=="FistfullODollars")
{
fistfullodollars = new Fistfullodollars(5);
cout<<"FistfullOdollars: "<<fistfullodollars->c_game<<endl;
}
}
cout<<ptr.c_game<<endl; // give an error not declared
ptr itself is declared on the stack; it is the object it points at that is bound to be deleted (by the unique_ptr when it goes out of scope).
When the if(vec.at(i)=="Avalanche") {} block ends, ptr (which is stack-allocated) goes out of scope and is no longer accessible.
Even if you allocate your object dynamically you still need a pointer variable to keep track of where that memory is. Your ptr variable is destroyed at the end of the enclosing braces.
I'm guessing all your classes Avalanche, Bureaucrat... inherit from some base class with a c_game member variable. Perhaps something a bit like this:
struct MyBase {
virtual ~MyBase(){} // This is important!
MyBase(std::string c_game) : c_game(c_game) {}
std::string c_game;
};
struct Avalanche : public MyBase {
Avalanche(int num) : MyBase("Avalanche"), num(num) {}
int num;
};
struct Fistfullodollars : public MyBase {
Fistfullodollars(int num) : MyBase("Fistfullodollars"), num(num) {}
int num;
};
//...
In order to keep track of these dynamically allocated objects you could construct a container of (smart) pointers to this base class:
std::vector<std::unique_ptr<MyBase>> objects;
for(size_t i=0; i!=vec.size(); i++)
{
if(vec.at(i)=="Avalanche")
{
objects.push_back(std::make_unique<Avalanche>(5));
std::cout<<"Avalanche: "<<objects.back()->c_game<<"\n";
}
else if(vec.at(i)=="FistfullODollars")
{
objects.push_back(std::make_unique<Fistfullodollars>(5));
std::cout<<"FistfullOdollars: "<<objects.back()->c_game<<"\n";
}
//...
}
for (const auto& ptr : objects)
{
std::cout << ptr->c_game << "\n";
}
Live demo.
I have a class with several pointer members that can be reallocated. When I use the LoadOBJ() function, I'm supposed to replace the data already held, but I'm having trouble cleaning up the old allocations. Below is some code.
class Object3d{
public:
int nVertex;
int nFace;
int nVertexNormal;
Vector3d *vertex;
Vector3d *vertexNormal;
Face3d *face;
void LoadOBJ(char*);
Object3d():nVertex(0), nFace(0), vertex(NULL), face(NULL){}
Object3d(char*);
~Object3d(){}
};
Face3d:
struct Face3d{
int nFaceV;
int *iVertex;
int *iVertexNormal;
Face3d():nFaceV(0){}
};
Every time I load a new object with the LoadOBJ() function, I want to delete the previously allocated memory, rather than just calling new again and leaking the old allocations.
I'm not sure how to do this. This is what I thought of for now:
void *vGarbage, *vGarbage2,
*fGarbage,
*iGarbage, *iGarbage2;
//nFace is the previous number of faces; in the case of a new object, it's 0
for(int i=0; i<nFace; i++)
{
iGarbage=face[i].iVertex;
iGarbage2=face[i].iVertexNormal;
delete[] iGarbage;
delete[] iGarbage2;
}
vGarbage=vertex;
fGarbage=face;
vGarbage2=vertexNormal;
delete[] vGarbage;
delete[] vGarbage2;
delete[] fGarbage;
The above code runs every time I use LoadOBJ(), but there is still a memory leak. I'm also wondering whether this is the right way to do it.
To clarify the problem/question: why do I still have a memory leak? And is there a better/cleaner way to delete the previously allocated memory?
Check out C++11's smart pointers; they let you allocate memory that is freed automatically when the owning object goes out of scope.
#include <memory>
#include <iostream>
struct Foo {
Foo() { std::cout << "Foo...\n"; }
~Foo() { std::cout << "~Foo...\n"; }
};
struct D {
void operator()(Foo* p) const {
std::cout << "Call delete for Foo object...\n";
delete p;
}
};
int main()
{
{
std::cout << "constructor with no managed object\n";
std::shared_ptr<Foo> sh1;
}
{
std::cout << "constructor with object\n";
std::shared_ptr<Foo> sh2(new Foo);
std::shared_ptr<Foo> sh3(sh2);
std::cout << sh2.use_count() << '\n';
std::cout << sh3.use_count() << '\n';
}
{
std::cout << "constructor with object and deleter\n";
std::shared_ptr<Foo> sh4(new Foo, D());
}
}
Output:
constructor with no managed object
constructor with object
Foo...
2
2
~Foo...
constructor with object and deleter
Foo...
Call delete for Foo object...
~Foo...
(http://en.cppreference.com/w/cpp/memory/shared_ptr/shared_ptr)
Remember that for each new, a matching delete should be called to free the memory. Local pointers can be dangerous if they are destroyed before you have freed the memory they point to.
The RAII paradigm in object-oriented C++ has been designed specifically to make resource management (and also memory management) easy.
If the damage is already done, you can track leaks down with something like http://deleaker.com/ or equivalent leak-detection software.
Also: if you can't use C++11 or a C++11-supporting compiler, consider implementing smart pointers yourself; it shouldn't be too hard and will certainly help with your memory problems.
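To make the RAII idea above concrete, here is a bare-bones sketch of a scoped array owner (a toy illustration only; in real code prefer std::vector or std::unique_ptr<T[]>):

#include <cstddef>

// The wrapper owns the array and frees it in its destructor, so every exit
// path out of the scope (including exceptions) releases the memory.
template <typename T>
class ArrayOwner {
public:
    explicit ArrayOwner(std::size_t n) : data_(new T[n]) {}
    ~ArrayOwner() { delete[] data_; }
    ArrayOwner(const ArrayOwner&) = delete;             // copying would double-delete
    ArrayOwner& operator=(const ArrayOwner&) = delete;
    T* get() const { return data_; }
private:
    T* data_;
};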
I understand you want to free the memory occupied by Object3d::vertex, Object3d::vertexNormal and Object3d::face before reassigning these members. First, you should provide a custom destructor for your Face3d so that you no longer need to care about its members in the containing class. That is:
Face3d::~Face3d() {
    // (make sure the Face3d constructor also initializes iVertex and iVertexNormal to nullptr)
    if (iVertex) delete[] iVertex;
    if (iVertexNormal) delete[] iVertexNormal;
}
In your Object3d class, you can use a dedicated clean-up function:
void Object3d::cleanup() {
if (face) delete[] face;
face = nullptr;
if (vertex) delete[] vertex;
vertex = nullptr;
if (vertexNormal) delete[] vertexNormal;
vertexNormal = nullptr;
nVertex = 0;
nFace = 0;
nVertexNormal = 0;
}
Btw, in the destructor Object3d::~Object3d() you must call that function as well.
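For example (assuming the cleanup() member shown above):

Object3d::~Object3d() {
    cleanup();       // release vertex, vertexNormal and face exactly once
}

void Object3d::LoadOBJ(char* filename) {
    cleanup();       // drop the previously loaded data before allocating anew
    // ... read the file and allocate/fill vertex, vertexNormal and face ...
}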
This question might answer yours. I think that you have to cast the void pointer to a specific one, like int*, to make it work. But the behaviour is highly dependent on the compiler you use.
edit: the advice of using smart pointers is probably the easiest and safest way of solving your problem.
Use std::vector instead of manually managed arrays:
struct Face3d{
int nFaceV;
std::vector<int> iVertex;
std::vector<int> iVertexNormal;
Face3d():nFaceV(0){}
};
class Object3d{
public:
std::vector<Vector3d> vertex;
std::vector<Vector3d> vertexNormal;
std::vector<Face3d> face;
void LoadOBJ(char*);
Object3d() {}
Object3d(char*);
~Object3d(){}
};
This frees you from the burden of writing destructors. As already said above, this exemplifies the RAII pattern in C++, which should be used instead of manual resource management. A rough sketch of what LoadOBJ() could then look like follows.
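The parsing itself is omitted here; the member names are taken from the question:

void Object3d::LoadOBJ(char* filename)
{
    // Clearing (or reassigning) the vectors releases the old data automatically;
    // no manual delete[] bookkeeping is needed anymore.
    vertex.clear();
    vertexNormal.clear();
    face.clear();

    // ... parse the file and fill the vectors, e.g.:
    // vertex.push_back(someVertex);
    // face.push_back(someFace);
}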
As a general comment, public data members are almost always a bad idea because it breaks encapsulation. Object3d should provide some services to clients and keep its internal private.
I have a program that contains a processing phase that needs to use a bunch of different object instances (all allocated on the heap) from a tree of polymorphic types, all eventually derived from a common base class.
As the instances may cyclically reference each other and do not have a clear owner, I want to allocate them with new, handle them with raw pointers, leave them in memory for the duration of the phase (even if they become unreferenced), and then, after the phase of the program that uses these instances, delete them all at once.
How I thought to structure it is as follows:
struct B; // common base class
vector<unique_ptr<B>> memory_pool;
struct B
{
B() { memory_pool.emplace_back(this); }
virtual ~B() {}
};
struct D : B { ... }
int main()
{
...
// phase begins
D* p = new D(...);
...
// phase ends
memory_pool.clear();
// all B instances are deleted, and pointers invalidated
...
}
Apart from being careful that all B instances are allocated with new, and that no one uses any pointers to them after the memory pool is cleared, are there problems with this implementation?
Specifically I am concerned about the fact that the this pointer is used to construct a std::unique_ptr in the base class constructor, before the derived class constructor has completed. Does this result in undefined behaviour? If so is there a workaround?
In case you haven't already, familiarize yourself with Boost.Pool. From the Boost documentation:
What is Pool?
Pool allocation is a memory allocation scheme that is very fast, but
limited in its usage. For more information on pool allocation (also
called simple segregated storage), see concepts and Simple Segregated Storage.
Why should I use Pool?
Using Pools gives you more control over how memory is used in your
program. For example, you could have a situation where you want to
allocate a bunch of small objects at one point, and then reach a point
in your program where none of them are needed any more. Using pool
interfaces, you can choose to run their destructors or just drop them
off into oblivion; the pool interface will guarantee that there are no
system memory leaks.
When should I use Pool?
Pools are generally used when there is a lot of allocation and
deallocation of small objects. Another common usage is the situation
above, where many objects may be dropped out of memory.
In general, use Pools when you need a more efficient way to do unusual
memory control.
Which pool allocator should I use?
pool_allocator is a more general-purpose solution, geared towards
efficiently servicing requests for any number of contiguous chunks.
fast_pool_allocator is also a general-purpose solution but is geared
towards efficiently servicing requests for one chunk at a time; it
will work for contiguous chunks, but not as well as pool_allocator.
If you are seriously concerned about performance, use
fast_pool_allocator when dealing with containers such as std::list,
and use pool_allocator when dealing with containers such as
std::vector.
Memory management is tricky business (threading, caching, alignment, fragmentation, etc. etc.) For serious production code, well-designed and carefully optimized libraries are the way to go, unless your profiler demonstrates a bottleneck.
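If Boost.Pool fits your use case, a minimal sketch using boost::object_pool might look roughly like this (B here stands in for the common base from the question; this is one possible usage, not the only one):

#include <boost/pool/object_pool.hpp>

struct B { /* some object type used during the phase */ };

void phase()
{
    boost::object_pool<B> pool;

    B* a = pool.construct();   // allocated from the pool
    B* b = pool.construct();
    // a and b may freely hold raw pointers to each other; the pool owns the memory

    pool.destroy(a);           // optional: run ~B and return a's chunk early
    (void)b;
}   // when 'pool' is destroyed, destructors of objects still allocated from it
    // are run and all of its memory is released at once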
Your idea is great and millions of applications are already using it. This pattern is most famously known as an «autorelease pool». It forms the basis of ”smart” memory management in the Cocoa and Cocoa Touch Objective-C frameworks. Despite the fact that C++ provides a hell of a lot of other alternatives, I still think this idea has a lot of upside. But there are a few things where I think your implementation as it stands may fall short.
The first problem that I can think of is thread safety. For example, what happens when objects of the same base are created from different threads? A solution might be to protect the pool access with mutually exclusive locks. Though I think a better way to do this is to make that pool a thread-specific object.
The second problem is undefined behavior in the case where the derived class's constructor throws an exception. If that happens, the derived object won't be constructed, but B's constructor will already have pushed a pointer to this into the vector. Later on, when the vector is cleared, it will try to call a destructor through the virtual table of an object that either doesn't exist or is in fact a different object (because new could reuse that address).
The third thing I don't like is that you have only one global pool; even if it is made thread-specific, that just doesn't allow for more fine-grained control over the scope of allocated objects.
Taking the above into account, I would do a couple of improvements:
Have a stack of pools for more fine-grained scope control.
Make that pool stack a thread-specific object.
In case of failures (like exception in derived class constructor), make sure the pool doesn't hold a dangling pointer.
Here is my literally-five-minute solution; don't judge it for being quick and dirty:
#include <new>
#include <set>
#include <stack>
#include <cassert>
#include <memory>
#include <stdexcept>
#include <iostream>
#define thread_local __thread // Sorry, my compiler doesn't support C++11 thread locals
struct AutoReleaseObject {
AutoReleaseObject();
virtual ~AutoReleaseObject();
};
class AutoReleasePool final {
public:
AutoReleasePool() {
stack_.emplace(this);
}
~AutoReleasePool() noexcept {
std::set<AutoReleaseObject *> obj;
obj.swap(objects_);
for (auto *p : obj) {
delete p;
}
stack_.pop();
}
static AutoReleasePool &instance() {
assert(!stack_.empty());
return *stack_.top();
}
void add(AutoReleaseObject *obj) {
objects_.insert(obj);
}
void del(AutoReleaseObject *obj) {
objects_.erase(obj);
}
AutoReleasePool(const AutoReleasePool &) = delete;
AutoReleasePool &operator = (const AutoReleasePool &) = delete;
private:
// Hopefully, making this private won't allow users to create pool
// not on stack that easily... But it won't make it impossible of course.
void *operator new(size_t size) {
return ::operator new(size);
}
std::set<AutoReleaseObject *> objects_;
struct PrivateTraits {};
AutoReleasePool(const PrivateTraits &) {
}
struct Stack final : std::stack<AutoReleasePool *> {
Stack() {
std::unique_ptr<AutoReleasePool> pool
(new AutoReleasePool(PrivateTraits()));
push(pool.get());
pool.release();
}
~Stack() {
assert(!stack_.empty());
delete stack_.top();
}
};
static thread_local Stack stack_;
};
thread_local AutoReleasePool::Stack AutoReleasePool::stack_;
AutoReleaseObject::AutoReleaseObject()
{
AutoReleasePool::instance().add(this);
}
AutoReleaseObject::~AutoReleaseObject()
{
AutoReleasePool::instance().del(this);
}
// Some usage example...
struct MyObj : AutoReleaseObject {
MyObj() {
std::cout << "MyObj::MyObj(" << this << ")" << std::endl;
}
~MyObj() override {
std::cout << "MyObj::~MyObj(" << this << ")" << std::endl;
}
void bar() {
std::cout << "MyObj::bar(" << this << ")" << std::endl;
}
};
struct MyObjBad final : AutoReleaseObject {
MyObjBad() {
throw std::runtime_error("oops!");
}
~MyObjBad() override {
}
};
void bar()
{
AutoReleasePool local_scope;
for (int i = 0; i < 3; ++i) {
auto o = new MyObj();
o->bar();
}
}
void foo()
{
for (int i = 0; i < 2; ++i) {
auto o = new MyObj();
bar();
o->bar();
}
}
int main()
{
std::cout << "main start..." << std::endl;
foo();
std::cout << "main end..." << std::endl;
}
Hmm, I needed almost exactly the same thing recently (memory pool for one phase of a program that gets cleared all at once), except that I had the additional design constraint that all my objects would be fairly small.
I came up with the following "small-object memory pool" -- perhaps it will be of use to you:
#pragma once
#include "defs.h"
#include <cstdint> // uintptr_t
#include <cstdlib> // std::malloc, std::size_t
#include <type_traits> // std::alignment_of
#include <utility> // std::forward
#include <algorithm> // std::max
#include <cassert> // assert
// Small-object allocator that uses a memory pool.
// Objects constructed in this arena *must not* have delete called on them.
// Allows all memory in the arena to be freed at once (destructors will
// be called).
// Usage:
// SmallObjectArena arena;
// Foo* foo = arena.create<Foo>();
// arena.free(); // Calls ~Foo
class SmallObjectArena
{
private:
typedef void (*Dtor)(void*);
struct Record
{
Dtor dtor;
short endOfPrevRecordOffset; // Bytes between end of previous record and beginning of this one
short objectOffset; // From the end of the previous record
};
struct Block
{
size_t size;
char* rawBlock;
Block* prevBlock;
char* startOfNextRecord;
};
template<typename T> static void DtorWrapper(void* obj) { static_cast<T*>(obj)->~T(); }
public:
explicit SmallObjectArena(std::size_t initialPoolSize = 8192)
: currentBlock(nullptr)
{
assert(initialPoolSize >= sizeof(Block) + std::alignment_of<Block>::value);
assert(initialPoolSize >= 128);
createNewBlock(initialPoolSize);
}
~SmallObjectArena()
{
this->free();
std::free(currentBlock->rawBlock);
}
template<typename T>
inline T* create()
{
return new (alloc<T>()) T();
}
template<typename T, typename A1>
inline T* create(A1&& a1)
{
return new (alloc<T>()) T(std::forward<A1>(a1));
}
template<typename T, typename A1, typename A2>
inline T* create(A1&& a1, A2&& a2)
{
return new (alloc<T>()) T(std::forward<A1>(a1), std::forward<A2>(a2));
}
template<typename T, typename A1, typename A2, typename A3>
inline T* create(A1&& a1, A2&& a2, A3&& a3)
{
return new (alloc<T>()) T(std::forward<A1>(a1), std::forward<A2>(a2), std::forward<A3>(a3));
}
// Calls the destructors of all currently allocated objects
// then frees all allocated memory. Destructors are called in
// the reverse order that the objects were constructed in.
void free()
{
// Destroy all objects in arena, and free all blocks except
// for the initial block.
do {
char* endOfRecord = currentBlock->startOfNextRecord;
while (endOfRecord != reinterpret_cast<char*>(currentBlock) + sizeof(Block)) {
auto startOfRecord = endOfRecord - sizeof(Record);
auto record = reinterpret_cast<Record*>(startOfRecord);
endOfRecord = startOfRecord - record->endOfPrevRecordOffset;
record->dtor(endOfRecord + record->objectOffset);
}
if (currentBlock->prevBlock != nullptr) {
auto memToFree = currentBlock->rawBlock;
currentBlock = currentBlock->prevBlock;
std::free(memToFree);
}
} while (currentBlock->prevBlock != nullptr);
currentBlock->startOfNextRecord = reinterpret_cast<char*>(currentBlock) + sizeof(Block);
}
private:
template<typename T>
static inline char* alignFor(char* ptr)
{
const size_t alignment = std::alignment_of<T>::value;
return ptr + (alignment - (reinterpret_cast<uintptr_t>(ptr) % alignment)) % alignment;
}
template<typename T>
T* alloc()
{
char* objectLocation = alignFor<T>(currentBlock->startOfNextRecord);
char* nextRecordStart = alignFor<Record>(objectLocation + sizeof(T));
if (nextRecordStart + sizeof(Record) > currentBlock->rawBlock + currentBlock->size) {
createNewBlock(2 * std::max(currentBlock->size, sizeof(T) + sizeof(Record) + sizeof(Block) + 128));
objectLocation = alignFor<T>(currentBlock->startOfNextRecord);
nextRecordStart = alignFor<Record>(objectLocation + sizeof(T));
}
auto record = reinterpret_cast<Record*>(nextRecordStart);
record->dtor = &DtorWrapper<T>;
assert(objectLocation - currentBlock->startOfNextRecord < 32768);
record->objectOffset = static_cast<short>(objectLocation - currentBlock->startOfNextRecord);
assert(nextRecordStart - currentBlock->startOfNextRecord < 32768);
record->endOfPrevRecordOffset = static_cast<short>(nextRecordStart - currentBlock->startOfNextRecord);
currentBlock->startOfNextRecord = nextRecordStart + sizeof(Record);
return reinterpret_cast<T*>(objectLocation);
}
void createNewBlock(size_t newBlockSize)
{
auto raw = static_cast<char*>(std::malloc(newBlockSize));
auto blockStart = alignFor<Block>(raw);
auto newBlock = reinterpret_cast<Block*>(blockStart);
newBlock->rawBlock = raw;
newBlock->prevBlock = currentBlock;
newBlock->startOfNextRecord = blockStart + sizeof(Block);
newBlock->size = newBlockSize;
currentBlock = newBlock;
}
private:
Block* currentBlock;
};
To answer your question, you're not invoking undefined behaviour since nobody is using the pointer until the object is fully constructed (the pointer value itself is safe to copy around until then). However, it's a rather intrusive method, as the object(s) themselves need to know about the memory pool. Additionally, if you're constructing a large number of small objects, it would likely be faster to use an actual pool of memory (like my pool does) instead of calling out to new for every object.
Whatever pool-like approach you use, be careful that the objects are never manually deleted, because that would lead to a double free!
I still think this is an interesting question without a definitive reply, but please let me break it down into the different questions you are actually asking:
1.) Does inserting a pointer to a base class into a vector before initialisation of a subclass prevent or cause issues with retrieving inherited classes from that pointer? [Slicing, for example.]
Answer: No. So long as you are 100% sure of the relevant type that is being pointed to, this mechanism does not cause these issues; however, note the following points:
If the derived constructor fails, you are left with an issue later: you are likely to have a dangling pointer sitting in the vector, as the address space it [the derived class] thought it was getting would be released back to the operating environment on failure, while the vector still holds that address as a pointer to the base class type.
Note that a vector, although kind of useful, is not the best structure for this, and even if it was, there should be some inversion of control involved here to allow the vector object to control initialisation of your objects, so that you have awareness of success/failure.
These points lead to the implied 2nd question:
2.) Is this a good pattern for pooling?
Answer: Not really, for the reasons mentioned above, plus others (growing a vector past its capacity basically ends up in a malloc, which is unnecessary and will impact performance). Ideally you want to use a pooling library or a template class and, even better, separate the allocation/de-allocation policy implementation from the pool implementation, with a low-level solution already being hinted at: allocate adequate pool memory at pool initialisation, and then hand out void pointers from within the pool's address space (see Alex Zywicki's solution above). Using this pattern, pool destruction is safe: the pool, being contiguous memory, can be destroyed en masse without any dangling issues or memory leaks through losing all references to an object (losing all references to an object whose address was allocated through the pool leaves you with dirty chunks, but does not cause a memory leak, because the memory is still managed by the pool implementation).
In the early days of C/C++ (before the mass proliferation of the STL), this was a well-discussed pattern, and many implementations and designs can be found in the literature. For example:
Knuth (1973, The Art of Computer Programming, multiple volumes); for a more complete list, with more on pooling, see:
http://www.ibm.com/developerworks/library/l-memory/
The 3rd implied question seems to be:
3) Is this a valid scenario to use pooling?
Answer: This is a localised design decision based on what you are comfortable with, but to be honest, your implementation (no controlling structure/aggregate, possibly cyclic sharing of subsets of objects) suggests to me that you would be better off with a basic linked list of wrapper objects, each of which contains a pointer to your base class, used only for addressing purposes. Your cyclical structures are built on top of this, and you simply grow/shrink the list as required to accommodate all of your first-class objects; when finished, you can then easily destroy them in a single sweep over the linked list.
Having said that, I would personally recommend, at this time (when you have a scenario where pooling does have a use and you are in the right mind-set), building a storage-management/pooling set of classes that are parameterised/typeless now, as it will hold you in good stead for the future.
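A very small sketch of the "ownership list on the side" idea from the previous answer (create is a hypothetical helper, not from the question; B is the common base from the question and needs its virtual destructor):

#include <list>
#include <memory>
#include <utility>

struct B { virtual ~B() {} };           // common base from the question

// Every object created during the phase is also recorded here; the cyclic raw
// pointers between the objects themselves carry no ownership.
std::list<std::unique_ptr<B>> owners;

template <typename Derived, typename... Args>
Derived* create(Args&&... args)
{
    std::unique_ptr<Derived> obj(new Derived(std::forward<Args>(args)...));
    Derived* raw = obj.get();
    owners.push_back(std::move(obj));   // the list is the sole owner
    return raw;
}

// At the end of the phase:
//     owners.clear();                  // destroys every object in one pass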
This sounds like what I have heard called a Linear Allocator.
I will explain the basics of how I understand it to work.
Allocate a block of memory using ::operator new(size);
Have a void* that is your pointer to the next free space in memory.
You will have an alloc(size_t size) function that will give you a pointer to a location in the block from step one, for you to construct onto using placement new.
Placement new looks like int* i = new(location) int();, where location is a void* to a block of memory you allocated from the allocator.
When you are done with all of your memory, you will call a Flush() function that will deallocate the memory from the pool, or at least wipe the data clean.
I programmed one of these recently and I will post my code here for you, as well as do my best to explain.
#include <iostream>
class LinearAllocator:public ObjectBase
{
public:
LinearAllocator();
LinearAllocator(Pool* pool,size_t size);
~LinearAllocator();
void* Alloc(size_t size);
void Flush();
private:
void** m_pBlock;
void* m_pHeadFree;
void* m_pEnd;
};
Don't worry about what I'm inheriting from. I have been using this allocator in conjunction with a memory pool, but basically, instead of getting the memory from operator new, I am getting it from a memory pool. The internal workings are essentially the same.
Here is the implementation:
LinearAllocator::LinearAllocator():ObjectBase::ObjectBase()
{
m_pBlock = nullptr;
m_pHeadFree = nullptr;
m_pEnd=nullptr;
}
LinearAllocator::LinearAllocator(Pool* pool,size_t size):ObjectBase::ObjectBase(pool)
{
if (pool!=nullptr) {
m_pBlock = ObjectBase::AllocFromPool(size);
m_pHeadFree = * m_pBlock;
m_pEnd = (void*)((unsigned char*)*m_pBlock+size);
}
else{
m_pBlock = nullptr;
m_pHeadFree = nullptr;
m_pEnd=nullptr;
}
}
LinearAllocator::~LinearAllocator()
{
if (m_pBlock!=nullptr) {
ObjectBase::FreeFromPool(m_pBlock);
}
m_pBlock = nullptr;
m_pHeadFree = nullptr;
m_pEnd=nullptr;
}
void* LinearAllocator::Alloc(size_t size)
{
if (m_pBlock!=nullptr) {
void* test = (void*)((unsigned char*)m_pEnd-size);
if (m_pHeadFree<=test) {
void* temp = m_pHeadFree;
m_pHeadFree=(void*)((unsigned char*)m_pHeadFree+size);
return temp;
}else{
return nullptr;
}
}else return nullptr;
}
void LinearAllocator::Flush()
{
if (m_pBlock!=nullptr) {
m_pHeadFree = *m_pBlock;
size_t size = (unsigned char*)m_pEnd-(unsigned char*)*m_pBlock;
memset(*m_pBlock,0,size);
}
}
This code is fully functional except for a few lines which will need to be changed because of my inheritance and use of the memory pool, but I bet you can figure out what needs to change; just let me know if you need a hand changing the code. This code has not been tested in any sort of professional manner and is not guaranteed to be thread-safe or anything fancy like that. I just whipped it up and thought I could share it with you since you seemed to need help.
I also have a working implementation of a fully generic memory pool if you think it may help you. I can explain how it works if you need.
Once again if you need any help let me know. Good luck.