I'm trying to design the internal mechanics of a simple embedded application. Chunks of data arrive on the network and need to be delivered to components determined by an addressing mechanism. Multiple components may subscribe to the same address. I want to design an architecture where incoming chunks are encapsulated into wrapper objects allocated from a memory pool. Each component may hold onto a wrapper (and the data inside it) as long as it needs, and the wrapper should be freed when all components let go of it. At that point it is returned to the pool, ready to be allocated again. Pool exhaustion is not a concern.
I plan to use this memory pool implementation, which satisfies Allocator. For the automatic destruction of wrapper objects I plan to use std::shared_ptr, so that when all components release a wrapper, it is automatically destroyed and the memory returned to the pool.
What I don't see is how these two concepts come together. If I allocate memory from the pool directly (by calling allocate()), I get a pointer to the block of data, which is fine, but then how will deallocate() be called automatically? Or do I need to put my wrapper objects in another container, such as a std::list, and pass it the memory pool allocator?
You can use std::shared_ptr with a custom allocator using std::allocate_shared. This is probably what you want anyway, since I'm assuming you want the control block (i.e. reference counts) to be allocated using the pool allocator as well.
When constructing an object using std::allocate_shared, a copy of the allocator is stored inside the shared_ptr, so the correct deallocate() will be called on destruction.
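For illustration, here is a minimal, self-contained sketch using the standard C++17 pool resource as a stand-in for your own pool (the wiring is identical for any type that satisfies Allocator):
#include <memory>
#include <memory_resource>

struct Chunk { /* wrapped network data */ };

int main() {
    std::pmr::unsynchronized_pool_resource pool;
    std::pmr::polymorphic_allocator<Chunk> alloc(&pool);

    // Both the Chunk and the control block are carved out of `pool`; when
    // the last shared_ptr copy is dropped, both are returned to it.
    std::shared_ptr<Chunk> p = std::allocate_shared<Chunk>(alloc);
    auto q = p;   // components sharing the wrapper just copy the pointer
}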
Note that you could also create your std::shared_ptr using a custom deleter, e.g.:
auto allocator = getAllocator<Foo>();
auto ptr = std::shared_ptr<Foo>(new (allocator.allocate(1)) Foo,
[&allocator](Foo * p) { p->~Foo(); allocator.deallocate(p, 1); });
(Note that allocate()/deallocate() take an element count under the Allocator concept, that the deleter must invoke ~Foo() itself, and that allocator is captured by reference, so it must outlive the pointer.)
However, as I mentioned, that's probably not what you want, since space for the reference counts will not be allocated using your allocator object.
Btw, in case you're still "shopping around", here is another memory pool implementation which I quite like: foonathan::memory. It provides its own allocate_shared.
I'd like to create some shared pointers like this
typedef boost::pool_allocator<MyClass> PoolAlloc;
std::shared_ptr<MyClass> p = std::allocate_shared<MyClass, PoolAlloc>(PoolAlloc());
As I am going to be doing this a number of times, memory gets taken from the pool.
As the program flow continues, the smart pointers get deleted or go out of scope, and the memory is returned to the pool.
However, the pool memory is still allocated, and every once in a while I'd like to cut back on that unused memory and give it back to the OS.
For this purpose boost has the boost::singleton_pool::release_memory() method. However boost::singleton_pool is templated on the size of the allocated objects, like this:
boost::singleton_pool<boost::pool_allocator_tag, sizeof(MyClass)>::release_memory();
I am now unsure whether this call would have the desired effect (my guess is no), because allocate_shared does not actually allocate from the given pool but from a different one, holding objects of an unknown type whose size is roughly sizeof(MyClass) + sizeof(control block).
My question is how to properly release_memory() from a boost::pool_allocator used with allocate_shared(). Would using boost::allocate_shared() instead of std::allocate_shared() make a difference?
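A self-contained sketch of the situation described above (the size of MyClass is illustrative; the size of the internal node type is implementation-defined):
#include <boost/pool/pool_alloc.hpp>
#include <boost/pool/singleton_pool.hpp>
#include <memory>

struct MyClass { char data[16]; };

int main() {
    using PoolAlloc = boost::pool_allocator<MyClass>;
    auto p = std::allocate_shared<MyClass>(PoolAlloc());
    p.reset();   // memory goes back to *some* singleton pool, but which one?

    // This releases the pool keyed on sizeof(MyClass); the combined
    // object-plus-control-block node allocated above has a larger,
    // implementation-defined size, so it likely lives in a different pool.
    boost::singleton_pool<boost::pool_allocator_tag, sizeof(MyClass)>::release_memory();
}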
I am currently writing an allocator that should be usable by C++ standard data structures, i.e., it implements the Allocator concept.
The allocator is quite simple: It allocates chunks of x objects and always hands out the next object if the current chunk is not full, otherwise it allocates a new chunk.
Now my question is: How to handle these chunks when the allocator is destroyed/copied/moved? The allocator concept says nothing about what must happen in these cases.
Here are my thoughts:
Destruction: The allocator may destroy all its chunks. But then, no object that uses any of the allocated objects may outlive the allocator.
Copying: The most straightforward idea would be to copy the chunks. But on second thought, this makes no sense: Nobody knows the address of the objects in the copied chunks, so they are just copied without any benefit. Maybe a copied allocator should start with an empty list of chunks.
Moving: The chunks should be moved to the new allocator. The old one should be left with an empty list of chunks.
Are my assumptions correct? If not, then why and where is this defined?
Allocators usually are lightweight objects that can be copied around and destroyed, e.g. in the standard container classes. Therefore they should not do the heavy memory management themselves but relay it to some more permanent memory manager object. If you do that, the lifetime of the memory chunks does not depend on the allocator lifetimes but on the lifetime of the memory manager object. The lifetime thoughts therefore have to be applied to both types of objects:
Allocator (short lifetime):
Copying/Moving: copy the reference to the memory manager.
Destruction: either does nothing (external lifetime management of the memory manager), or possibly destroys the memory manager, e.g. when each allocator holds a shared_ptr to the memory manager.
Memory manager (long lifetime):
Copying should be forbidden; it makes no sense to duplicate the manager and its managed storage.
Moving could be allowed, but does not make much sense. The memory manager could even be a singleton-like class, i.e. one fixed instance that does not need to be moved around.
Destruction should involve the destruction of the managed memory, since no other object knows how to deallocate the managed storage.
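A minimal sketch of this split, with hypothetical names (ChunkManager and PoolAllocator): the allocator is a cheap handle whose copies all share one manager, and the manager frees everything exactly once.
#include <cstddef>
#include <memory>
#include <vector>

class ChunkManager {                        // long lifetime, non-copyable
public:
    ChunkManager() = default;
    ChunkManager(const ChunkManager&) = delete;
    ChunkManager& operator=(const ChunkManager&) = delete;
    ~ChunkManager() {                       // destruction frees the storage
        for (void* c : chunks_) ::operator delete(c);
    }
    void* allocate(std::size_t bytes) {     // simplistic: one chunk per call;
        void* p = ::operator new(bytes);    // a real manager would hand out
        chunks_.push_back(p);               // objects from larger chunks
        return p;
    }
private:
    std::vector<void*> chunks_;
};

template <class T>
struct PoolAllocator {                      // short lifetime, freely copyable
    using value_type = T;
    explicit PoolAllocator(std::shared_ptr<ChunkManager> m) : mgr(std::move(m)) {}
    template <class U>
    PoolAllocator(const PoolAllocator<U>& o) : mgr(o.mgr) {}  // copy shares the manager

    T* allocate(std::size_t n) { return static_cast<T*>(mgr->allocate(n * sizeof(T))); }
    void deallocate(T*, std::size_t) {}     // reclaimed by the manager, not here

    std::shared_ptr<ChunkManager> mgr;      // keeps the manager alive
};

template <class T, class U>
bool operator==(const PoolAllocator<T>& a, const PoolAllocator<U>& b) { return a.mgr == b.mgr; }
template <class T, class U>
bool operator!=(const PoolAllocator<T>& a, const PoolAllocator<U>& b) { return !(a == b); }
A container such as std::list<int, PoolAllocator<int>> can then copy the allocator freely; the chunks outlive any individual allocator copy and are released when the last copy drops its shared_ptr.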
I understand that boost intrusive collections ultimately store references to the objects, and thus the objects need their own lifetime management.
I was wondering if I can simply use boost pool to manage that lifetime. When I want to store a new object in a boost intrusive list, could I just allocate an object from a boost pool and store it within the list? Then, when I delete from the list, I deallocate using boost pool.
The answer is yes.
It's also not very typical.
If you want to control when and where memory is allocated, you use a pool.
If you want to decouple the memory layout of your data structure and its semantics, you use an intrusive container.
So, there is a sweet spot, but it would look more like:
decorate the element type with intrusive hooks (e.g. for an intrusive map)
create new elements in some type of "optimal" memory layout (this could well be a vector<MyElement, custom_allocator>)
Loose remarks:
then, when I delete from the list, I deallocate using boost pool
A typical scenario for using a pool is expressly when you want to /not/ have to deallocate the elements (beware of non-trivial destructors). Otherwise, you just move the inefficiencies of the heap to be local to the pool (fragmentation, locking).
the objects need their own lifetime management
This sounds slightly off. In fact, the objects need not have "their own" lifetime management. It's just that their lifetime isn't governed by the intrusive data structure they participate in.
E.g. by storing all elements in a vector, you get contiguous storage, and the lifetime of all the elements is governed by the vector[1]. Hence you can decouple element lifetime and allocation from the container semantics.
[1] any issues surrounding vector reallocation are usually prevented by reserving enough capacity up front. If you do, you will realize this is very very similar to a fixed-size pool allocator, but with the added guarantee of zero fragmentation. If you didn't need the latter, you could do a list<T, pool_allocator<T> > so you get locality of reference but stable references on insertion/deletion. Etc. etc.
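A minimal sketch of that sweet spot (names are hypothetical): the elements live contiguously in a vector, which governs their lifetime, while the intrusive list merely links them.
#include <boost/intrusive/list.hpp>
#include <iostream>
#include <vector>

namespace bi = boost::intrusive;

struct Element : bi::list_base_hook<> {   // intrusive hook decorates the type
    int value = 0;
};

int main() {
    std::vector<Element> storage(4);      // sized once, so no reallocation
    for (int i = 0; i < 4; ++i) storage[i].value = i;

    bi::list<Element> lst;                // non-owning view over the storage
    for (auto& e : storage) lst.push_back(e);

    lst.reverse();                        // restructure without moving objects
    for (auto& e : lst) std::cout << e.value << ' ';   // prints: 3 2 1 0
}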
I am using a few library functions that return a pointer created either using malloc or new.
So, I have my own custom deleters based on what type of allocation was used.
E.g.:
shared_ptr<int> ptr1(LibFunctA(), &MallocDeleter); //LibFunctA returns pointer created using malloc
shared_ptr<int> ptr2(LibFunctB(), &newDeleter); //LibFunctB returns pointer created using new
Now, I understand this is a very naive use of a deleter above, but what other scenarios is it heavily used for?
Also, how can one use a custom allocator? I tried to assign a custom allocator as below, but how do I actually get it called? Where does this kind of feature help?
shared_ptr<int> ptr3(nullptr, &CustomDeleter, CustomAllocator()); //assume the deleter function and allocator type are defined somewhere
I don't see anything "naive" about using deleters that way. It is the main purpose of the feature, after all: to destroy objects that aren't allocated using the standard C++ methods.
Allocators are for when you need control of how the shared_ptr's control block of memory is allocated and deleted. For example, you might have a pool of memory that you want these things to come from, or if you're in a memory-limited situation where allocation of memory via new is simply not acceptable. And since the type of the control block is up to shared_ptr, there's no other way to be able to control how it is allocated except with some kind of allocator.
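A short sketch of that constructor form, using the standard C++17 pool resource as a stand-in pool: the deleter destroys the pointee, while the allocator provides the control block.
#include <memory>
#include <memory_resource>

int main() {
    std::pmr::unsynchronized_pool_resource pool;
    std::pmr::polymorphic_allocator<int> alloc(&pool);

    std::shared_ptr<int> p(new int(42),
                           [](int* q) { delete q; },  // deleter: frees the int
                           alloc);                    // allocator: control block
}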
Custom deleters for shared_ptr are very useful for wrapping some (usually) C resource that you need to later call a freeing function on. For example, you might do something like:
shared_ptr<void> file(::CreateFileW(...), ::CloseHandle);
Examples like this abound in C libraries. This saves from having to manually free the resource later and take care of possible exceptions and other nasties.
I think the custom allocator will be used to allocate space for the "shared count" object, which stores a copy of the deleter and the reference counters.
As for what a custom deleter can be used for...
One use was already mentioned: make shared_ptr compatible with objects that must be deleted by some special function (like FILE, which is deleted by fclose), without having to wrap it into a helper class that takes care of the proper deletion.
Another use for a custom deleter is pools. The pool can hand out shared_ptr<T> that were initialized with a "special" deleter, which doesn't really delete anything, but returns the object to the pool instead.
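A minimal sketch of such a pool (all names hypothetical; the pool must outlive every pointer it hands out):
#include <memory>
#include <vector>

template <class T>
class ObjectPool {
public:
    std::shared_ptr<T> acquire() {
        T* obj;
        if (free_.empty()) {
            obj = new T();                  // grow the pool on demand
        } else {
            obj = free_.back();             // reuse a previously released object
            free_.pop_back();
        }
        // The "deleter" doesn't delete: it returns the object to the pool.
        return std::shared_ptr<T>(obj, [this](T* p) { free_.push_back(p); });
    }
    ~ObjectPool() {
        for (T* p : free_) delete p;        // only now are objects destroyed
    }
private:
    std::vector<T*> free_;
};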
And one other thing: the deleter is already necessary to implement some shared_ptr features. E.g. the type that's deleted is always fixed at creation time, and independent of the type of the shared_ptr that's being initialized.
You can create a shared_ptr<Base> by actually initializing it with a Derived. shared_ptr guarantees that when the object is deleted, it will be deleted as a Derived, even if Base does not have a virtual dtor. To make this possible, shared_ptr already has to store some information about how the object shall be deleted. So allowing the user to specify a completely custom deleter doesn't cost anything (in terms of runtime performance), and doesn't require much additional code either.
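For illustration, a small sketch of that guarantee:
#include <iostream>
#include <memory>

struct Base { ~Base() { std::cout << "~Base\n"; } };           // not virtual!
struct Derived : Base { ~Derived() { std::cout << "~Derived\n"; } };

int main() {
    std::shared_ptr<Base> p = std::make_shared<Derived>();
    p.reset();   // prints "~Derived" then "~Base": the deleter captured
}                // the concrete type at creation time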
There are probably dozens of other scenarios where one can make good use of the custom deleter, that's just what I have come up with so far.
I need to create a pool of objects to eliminate dynamic allocations. Is it efficient to use std::stack to contain pointers of allocated objects?
I suspect that every time I push a released object back onto the stack, a new stack element will be dynamically allocated. Am I right? Should I use std::vector instead, to make sure nothing new is allocated?
Whether a stack is suited for your particular purpose or not is an issue I will not deal with. Now, if you are concerned about the number of allocations, the default internal container for a std::stack is a std::deque<>. It will not need to allocate new memory on each push (as long as it has space), and when it does allocate, it does not need to relocate all existing elements as a std::vector<> would.
You can tell the stack to use an std::vector<> as underlying container with the second template argument:
std::stack< int, std::vector<int> > vector_stack;
STL containers of pointers don't do anything with the objects they point to; that's up to you, so you are responsible for not leaking any memory, etc. Have a look at the Boost Pointer Container Library, or try storing the actual objects; you will save yourself hassle in the long run.
If you want to reduce the amount of dynamic allocations made by the container and you know roughly how many objects you need to store, you can use vector's 'reserve()' method, which will preallocate the memory you request in one shot.
You can also specify the number of records you want in the constructor, but this way will create x objects for you and then store them, which might not be what you want.
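A short sketch of the reserve() approach (MyObject is a placeholder type):
#include <vector>

struct MyObject { };

int main() {
    std::vector<MyObject*> free_list;
    free_list.reserve(1000);   // one allocation up front; push_back will not
                               // allocate again until the size exceeds 1000
}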
If, for some technical reason, dynamic allocation is out completely, you might want to try using boost::pool as your allocator (as you know, you can specify a different memory allocator for the std library containers if you don't want to use the default one).
That said, when I tested it, the default one was always faster, at least with g++, so it may not be worth the effort. Make sure you profile it rather than assume you can out-code the standards committee!
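A sketch of plugging Boost.Pool in as a container's allocator:
#include <boost/pool/pool_alloc.hpp>
#include <list>

int main() {
    // Each list node is carved out of a size-keyed singleton pool rather
    // than coming from the default heap allocator.
    std::list<int, boost::fast_pool_allocator<int>> numbers;
    for (int i = 0; i < 100; ++i) numbers.push_back(i);
}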
Doing ANY allocations during a free is WRONG, due to nothrow guarantees. If you have to do an allocation to do a free, and the allocation throws, where do you put the pointer? You either quietly swallow the exception and leak, or propagate the exception. Propagating the exception means objects that use your YourObject can't be put in STL containers. And leaking is, well, leaking. In either case you have violated the rules.
But what data structure to use depends on your object lifetime control idiom.
Is the idiom an object pool, to be used with factory method(s) and a freeInstance:
YourObject* pO = YourObject::getInstance(... reinitialize parameters ...);
......object public lifetime....
pO->freeInstance();
or a memory pool, to be used with a class-specific operator new/operator delete (or an allocator)?
YourObject::operator new(size_t);
......object public lifetime....
delete pO;
If it is an object pool and you have an idea about the number of YourObject*'s, use a vector in release code, and a deque, or preferably a circular buffer, in debug code (deque has no reserve, so you would have to add this, whereas a dynamically self-sizing circular buffer is precisely what you want), and reserve the approximate number. Allocate LIFO in release and FIFO in debug, so you have history in debug.
In the path where there are no free objects, remember to do the reserve(nMade+1) on the YourObject* collection before you dynamically create an object.
(The reason for this reserve is twofold. First, it must be done at getInstance time. Second, it simplifies the code. Otherwise you have the possibility of throwing a std::bad_alloc in freeInstance, which may make destructor guarantees hard to keep. OUCH! E.g., class Y has a YourObject* in it and does a freeInstance for that YourObject* in its destructor; if you don't reserve the space for the YourObject* when you make it, where do you store that pointer at freeInstance time? If you reserve the space afterwards in getInstance, then you have to catch the std::bad_alloc for the reserve, release the just-made YourObject, and rethrow.)
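A minimal sketch of that rule (all names hypothetical): the reserve happens inside getInstance, before the new object exists, so freeInstance can never throw.
#include <cstddef>
#include <vector>

class YourObject {
public:
    static YourObject* getInstance() {
        if (free_list_.empty()) {
            // Reserve space for the pointer *before* creating the object, so
            // freeInstance() below can push_back without ever allocating.
            free_list_.reserve(made_ + 1);
            ++made_;
            return new YourObject;
        }
        YourObject* p = free_list_.back();
        free_list_.pop_back();
        return p;                       // LIFO: reuse the hottest object
    }
    void freeInstance() { free_list_.push_back(this); }   // cannot throw:
                                                          // capacity reserved
private:
    YourObject() = default;             // objects are recycled, not deleted
    static std::vector<YourObject*> free_list_;
    static std::size_t made_;
};
std::vector<YourObject*> YourObject::free_list_;
std::size_t YourObject::made_ = 0;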
If it is a memory pool, then in the memory blocks use an intrusive singly linked list in release and a doubly linked list in debug (I am assuming that sizeof(YourObject) >= 2*sizeof(void*)). BTW, there are a lot of MemoryPool implementations out there. Again, allocate LIFO in release and FIFO in debug, so you have history in debug.
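A minimal sketch of that idiom (names hypothetical): a class-specific operator new/operator delete backed by an intrusive singly linked free list. Note that operator delete allocates nothing, so the nothrow requirement on the free path holds. The sketch assumes no derived classes allocate through this operator new.
#include <cstddef>
#include <new>

class YourObject {
public:
    static void* operator new(std::size_t size) {
        if (FreeNode* n = free_head_) {        // LIFO: reuse the hottest block
            free_head_ = n->next;
            return n;
        }
        return ::operator new(size);           // pool miss: fall back to heap
    }
    static void operator delete(void* p) noexcept {
        // Thread the freed block onto the list; nothing is allocated here.
        FreeNode* n = static_cast<FreeNode*>(p);
        n->next = free_head_;
        free_head_ = n;
    }
private:
    struct FreeNode { FreeNode* next; };
    static FreeNode* free_head_;
    char payload_[32];   // placeholder members; must be >= sizeof(FreeNode*)
};
YourObject::FreeNode* YourObject::free_head_ = nullptr;
With this in place, plain new YourObject / delete pO recycle blocks through the free list, matching the second idiom above.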
BTW, if you use the factory idiom, don't skip the overloaded getInstance()s in favor of adding reinit methods. That just opens up the possibility of leaving out a reinit call. The getInstance methods are your "constructors", in the sense that it is they that get the object to the state that you want. Note that in the object pool case you also need a freeInstance, which may have to do "destructor-like" things to the object.
In this case it makes some sense to speak of "public class invariants" and "private class invariants": the object sits in a limbo state while in the free pool, where the public class invariants may NOT be satisfied. It's a YourObject as far as a YourObject is concerned, but all of the public class invariants may not hold. It is the job of YourObject::getInstance to both get an instance AND ensure that its public invariants are satisfied. In a complementary fashion, freeInstance releases any resources that getInstance acquired to satisfy the "public class invariants", so they are not held during the object's "idle time" on the free list.
LIFO in release also has the SIGNIFICANT benefit of caching the most recently used objects/blocks, whereas FIFO is guaranteed not to cache if there is a sufficiently large number of objects/blocks, or even to page if the number is larger! But you probably already realized this, as you decided to use a stack.