Would it be possible in C++ to create a custom allocator that works simply like this:
{
// Limit memory to 1024 KB
ScopedMemoryPool memoryPool(1024 * 1024);
// From here on all heap allocations ('new', 'malloc', ...) take memory from the pool.
// If the pool is depleted these calls result in an exception being thrown.
// Examples:
std::vector<int> integers(10);
int* a = new int[10];
}
I couldn't find anything like this in the Boost libraries, or anywhere else.
Is there a fundamental problem that makes this impossible?
You would need to create a custom allocator that you pass as a template parameter to vector. This custom allocator would essentially wrap the access to your pool and do whatever size validations it wants.
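For illustration, here is a hedged sketch of that idea as a minimal C++11 allocator. The names MemoryPool and PoolAllocator are invented, and the "pool" is just a byte budget that forwards to ::operator new rather than a real arena:
#include <cstddef>
#include <new>
#include <vector>

// The "pool" here is just a byte budget; a real implementation would hand out
// memory from a pre-allocated block instead of forwarding to ::operator new.
struct MemoryPool {
    std::size_t remaining;
    explicit MemoryPool(std::size_t bytes) : remaining(bytes) {}
};

template <class T>
struct PoolAllocator {
    using value_type = T;
    MemoryPool* pool;

    explicit PoolAllocator(MemoryPool& p) : pool(&p) {}
    template <class U>
    PoolAllocator(const PoolAllocator<U>& other) : pool(other.pool) {}

    T* allocate(std::size_t n) {
        const std::size_t bytes = n * sizeof(T);
        if (bytes > pool->remaining) throw std::bad_alloc();  // pool depleted
        pool->remaining -= bytes;
        return static_cast<T*>(::operator new(bytes));
    }
    void deallocate(T* p, std::size_t n) {
        pool->remaining += n * sizeof(T);
        ::operator delete(p);
    }
};

template <class T, class U>
bool operator==(const PoolAllocator<T>& a, const PoolAllocator<U>& b) { return a.pool == b.pool; }
template <class T, class U>
bool operator!=(const PoolAllocator<T>& a, const PoolAllocator<U>& b) { return !(a == b); }

// Usage:
//   MemoryPool pool(1024 * 1024);            // 1024 KB budget
//   PoolAllocator<int> alloc(pool);
//   std::vector<int, PoolAllocator<int>> v(10, 0, alloc);  // allocations count against the pool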
Yes, you can make such a construct; it's used in many games. But you'll basically need to implement your own containers and call the memory allocation methods of the pool you've created.
You could also experiment with writing a custom allocator for the STL containers, although it seems that that sort of work is generally advised against. (I've done it before and it was tedious, but I don't remember any specific problems.)
Mind you, writing your own memory allocator is not for the faint of heart. You could take a look at Doug Lea's malloc, which provides "memory spaces" that you could use in your scoping construct somehow.
I will answer a different question: look at the Efficient C++ book. One of the things it discusses is implementing this kind of thing, in that case for a web server.
For this particular thing you can either mess at the C++ layer, by overriding operator new and supplying custom allocators to the STL containers,
or you can mess at the malloc level: start with a custom malloc and work from there (like dmalloc).
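If you go the "override new" route, a rough sketch might look like the following. This is an assumption-laden illustration, not the asker's ScopedMemoryPool: the budget is process-wide rather than scoped, the 1 MB limit is hard-coded, and the array/nothrow overloads are omitted.
#include <atomic>
#include <cstdlib>
#include <new>

static std::atomic<std::size_t> g_used{0};
static const std::size_t kLimit = 1024 * 1024;   // 1024 KB budget (assumed)

void* operator new(std::size_t size) {
    if (g_used.fetch_add(size) + size > kLimit) {   // over budget: roll back and fail
        g_used.fetch_sub(size);
        throw std::bad_alloc();
    }
    if (void* p = std::malloc(size))
        return p;
    g_used.fetch_sub(size);
    throw std::bad_alloc();
}

void operator delete(void* p, std::size_t size) noexcept {  // sized delete (C++14)
    g_used.fetch_sub(size);
    std::free(p);
}

void operator delete(void* p) noexcept {  // unsized fallback; cannot adjust the counter
    std::free(p);
}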
Is there a fundamental problem that makes this impossible?
Reasoning about program behavior would become fundamentally impossible. All sorts of weird issues will come up: certain sections of the code may or may not execute, while seemingly having no effect on the sections that follow, which may run unhindered; certain sections may always fail; dealing with the standard library or any other third-party library will become extremely difficult; and there may be run-time fragmentation on some runs and not on others.
If the intent is that all allocations within that scope occur with that allocator object, then it's essentially a thread-local variable.
So, there will be multithreading issues if you use a static or global variable to implement it. Otherwise, not a bad workaround for the statelessness of allocators.
(Of course, you'll need to pass a second template argument, e.g. vector<int, UseScopedPool>.)
I'm wrestling with some pain being caused by std::allocator_traits::construct. In order for a container to be a "conforming" user of the allocator concept, it needs to use construct rather than placement new to construct objects. This is very sticky for me. Currently I have a class (class A) that is designed to be allocator aware, and at some point it needs to create another instance of some other class (class B) in allocated memory. The problem is that class B implements the construction of the new object. If I could use placement new, this wouldn't be an issue: A would handle allocation, pass B the memory address, and B would construct into that. But since the construction needs to be performed via construct, I need to inject the allocator type into B, templating it, which creates a huge mess.
It's bad enough that I am seriously considering just using placement new, and static asserting that my instance of the allocator does not have a construct method (note that the static construct function calls the instance method if it exists, otherwise it calls placement new). I have never felt the tiniest urge to write a construct method for an allocator. The cost of making this part of the allocator concept seems very high to me; construction has gotten entangled with allocation, where allocators were supposed to help separate them. What justifies the existence of construct/destroy? Insight into the design decision, examples of real (not toy) use cases, or thoughts on the gravity of electing to simply use placement new appreciated.
There is a similar question; std::allocator construct/destroy vs. placement new/p->~T(). It was asked quite a long time ago, and I don't find the answer accepted there as sufficient. Logging is a bit trite as a use case, and even then: why is the allocator logging the actual construction of objects? It can log allocations and deallocations in allocate and deallocate; it doesn't answer the question in the sense of: why was construction made a province of the allocator in the first place? I'm hoping to find a better answer; it's been quite a few years and much about allocators has changed since then (e.g. allocators being allowed to be stateful since C++11).
A few points:
There really isn't a std container concept. The container requirements tables in the standard are there to document the containers specified by the standard.
If you have a container that wants to interact with std::allocator_traits<Alloc>, all you have to do is assume that Alloc conforms to the minimum C++11 allocator requirements and interact with it via std::allocator_traits<Alloc>.
You are not required to call std::allocator_traits<Alloc>::construct.
You are forbidden from calling Alloc::construct because it may not exist.
The standard-specified containers are required to call std::allocator_traits<Alloc>::construct only for container::value_type, and are forbidden from using std::allocator_traits<Alloc>::construct on any other types the container may need to construct (e.g. internal nodes).
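To make these points concrete, here is a hedged sketch of a made-up, single-element "container" that interacts with an arbitrary allocator only through std::allocator_traits, and only ever calls construct/destroy for its value_type. Fancy pointer types are ignored for brevity.
#include <memory>

template <class T, class Alloc = std::allocator<T>>
class MiniBox {
    using traits = std::allocator_traits<Alloc>;
    Alloc alloc_;
    T*    p_ = nullptr;
public:
    explicit MiniBox(const T& value, const Alloc& a = Alloc()) : alloc_(a) {
        p_ = traits::allocate(alloc_, 1);            // may throw; nothing to clean up yet
        try {
            traits::construct(alloc_, p_, value);    // note: not alloc_.construct(...)
        } catch (...) {
            traits::deallocate(alloc_, p_, 1);
            throw;
        }
    }
    ~MiniBox() {
        traits::destroy(alloc_, p_);
        traits::deallocate(alloc_, p_, 1);
    }
    MiniBox(const MiniBox&) = delete;
    MiniBox& operator=(const MiniBox&) = delete;
    T& value() { return *p_; }
};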
Why was construct included in the "allocator concept" way back in C++98?
Probably because the committee at the time felt that this would ease dealing with x86 near and far pointers -- a problem that no longer exists today.
That being said, std::scoped_allocator_adaptor is a modern real-world example of an allocator that customizes both construct and destroy. For the detailed specification of those customizations I point you towards the latest C++1z working draft, N4567. The spec is not simple, and that is why I'm not attempting to reproduce it here.
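For a flavor of what those customizations buy you in practice, here is a small usage sketch (not the spec). PoolAlloc is a stand-in for a real allocator; with a stateful arena allocator in its place, the adaptor's construct customization would make every string element draw from the same arena as the vector that owns it.
#include <memory>
#include <scoped_allocator>
#include <string>
#include <vector>

template <class T> using PoolAlloc = std::allocator<T>;  // placeholder allocator

using PooledString = std::basic_string<char, std::char_traits<char>, PoolAlloc<char>>;
using PooledVector = std::vector<
    PooledString,
    std::scoped_allocator_adaptor<PoolAlloc<PooledString>, PoolAlloc<char>>>;

int main() {
    PooledVector v;
    v.emplace_back("hello");  // the adaptor constructs the string with the inner PoolAlloc<char>
}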
I am writing a small particle system in C++ and am still unsure about how I should manage the particle-related data -- should it be stored in a static or dynamic array, in a linked list, some mixture of both, or whatever else one might think of?
At the moment I don't want to make a choice, but would rather like to use an abstract class for memory management that on the one hand provides me with allocation and deallocation routines, and on the other hand takes care of deallocating the supplied resources in its destructor. I hope that in this way I can switch between and test different particle management strategies quickly and transparently.
1) Is this a reasonable thing to do?
2) If yes: Are there any libraries that provide such functionality?
Thank you for your help!
For a particle system you may wish to consider using one std::vector for each coordinate, velocity, colour channel, etc. of the particles, e.g.
std::vector<float> x(100);
std::vector<float> vx(100);
etc
Instead of
std::vector<Particle> p(100);
This is known as SoA (structure of arrays) rather than AoS (array of structures). The former is more amenable to vectorization.
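A small sketch of what that layout looks like in use; the field names and the fixed timestep are assumptions, and the point is that each loop walks one tightly packed array, which compilers can usually auto-vectorize:
#include <cstddef>
#include <vector>

struct Particles {
    std::vector<float> x, y;    // positions
    std::vector<float> vx, vy;  // velocities
};

void integrate(Particles& p, float dt) {
    for (std::size_t i = 0; i < p.x.size(); ++i) p.x[i] += p.vx[i] * dt;
    for (std::size_t i = 0; i < p.y.size(); ++i) p.y[i] += p.vy[i] * dt;
}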
The rule of thumb is to use std::vector unless you really have a reason to choose something else, and at the moment you can stick with it. To control memory management at a lower level you can supply the vector with your own allocator, in case the default std::allocator (which in GCC's library is just a thin wrapper over operator new) needs to be replaced. If your main concern is frequent allocation and deletion of single objects, then you might well consider writing your own user-defined allocator that allocates from a pool of fixed-size elements organized into a linked list, because the conventional, more general operator new() is not efficient when called many times to allocate or deallocate objects one at a time.
Testing different containers is a reasonable thing to do IMO; however, a vector should suffice. In order to decide whether
1) Is this a reasonable thing to do?
and thus whether such tests are worth doing at all, you have to think about the operations you are going to use extensively.
2) If yes: Are there any libraries that provide such functionality?
I don't know of such a library.
I know this is a very old debate that has already been discussed many times all over the world. But I'm currently having trouble deciding between static and dynamic arrays in a particular case. Actually, had I not been using C++11, I would have used static arrays, but now I'm confused, since both could offer equivalent benefits.
First solution:
template<size_t N>
class Foo
{
private:
int array[N];
public:
// Some functions
};
Second solution:
template<size_t N>
class Foo
{
private:
int* array;
public:
// Some functions
};
I can't decide, since the two have their own advantages:
Static arrays are faster, and we don't need to care about memory management at all.
Dynamic arrays weigh nothing as long as no memory is allocated. After that, they are less handy to use than static arrays. But since C++11 we can get great benefits from move semantics, which we cannot use with static arrays.
I don't think there is one good solution, but I would like to get some advice or just to know what you think of all that.
I will actually disagree with the "it depends". Never use option 2. If you want to use a translation-time (compile-time) constant, always use option 1 or std::array. The one advantage you listed, that dynamic arrays weigh nothing until allocated, is actually a horrible, huge disadvantage, and one that needs to be pointed out with great emphasis.
Do not ever have objects that have more than one phase of construction. Never, ever. That should be a rule committed to memory through some large tattoo. Just never do it.
When you have zombie objects that are not quite alive yet, though not quite dead either, the complexity of managing their lifetime grows exponentially. You have to check in every method whether it is fully alive, or only pretending to be alive. Exception safety requires special cases in your destructor. Instead of one simple construction and automatic destruction, you've now added requirements that must be checked in N different places (# methods + dtor). And the compiler doesn't care if you check. And other engineers won't have this requirement broadcast, so they may adjust your code in unsafe ways, using variables without checking. And now all these methods have multiple behaviors depending on the state of the object, so every user of the object needs to know what to expect. Zombies will ruin your (coding) life.
Instead, if you have two different natural lifetimes in your program, use two different objects. But that means you have two different states in your program, so you should have a state machine, with one state having just one object and another state having both, separated by an asynchronous event. If there is no asynchronous event between the two points, if they all fit in one function scope, then the separation is artificial and you should be doing single-phase construction.
The only case where a translation-time size should translate to a dynamic allocation is when the size is too large for the stack. This then gets into memory optimisation, and it should always be evaluated using memory and profiling tools to see what's best. Option 2 will never be best (it uses a naked pointer, so again we lose RAII and any automatic cleanup and management, adding invariants and making the code more complex and easily breakable by others). Vector (as suggested by bitmask) would be the appropriate first thought, though you may not like the heap allocation costs in time. Other options might be static space in your application's image. But again, these should only be considered once you've determined that you have a memory constraint, and what to do from there should be determined by actual measurable needs.
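As a concrete restatement of "option 1 or std::array" under this advice (a sketch, not the asker's actual class):
#include <array>
#include <cstddef>

template <std::size_t N>
class Foo
{
private:
    std::array<int, N> array{};  // lives inside Foo; no second construction phase, copies/moves for free
public:
    // Some functions
};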
Use neither. You're better off using std::vector in nearly any case. In the other cases, that heavily depends on the reason why std::vector would be insufficient and hence cannot be answered generally!
I'm currently having trouble deciding which one I should use in a particular case.
You'll need to consider your options case-by-case to determine the optimal solution for the given context -- that is, a generalization cannot be made. If one container were ideal for every scenario, the other would be obsolete.
As mentioned already, consider using std implementations before writing your own.
More details:
Fixed Length
Be careful of how much of the stack you consume.
May consume more memory, if you treat it as a dynamically sized container.
Fast copies.
Variable Length
Reallocation and resizing can be costly.
May consume more memory than needed.
Fast moves.
The better choice also requires you understand the complexity of creation, copy, assign, etc. of the element types.
And if you do use std implementations, remember that implementations may vary.
Finally, you can create a container for these types which abstract the implementation details and dynamically select an appropriate data member based on the size and context -- abstracting the detail behind a general interface. This is also useful at times to disable features, or to make some operations (e.g. costly copies) more obvious.
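A tiny sketch of that last idea: pick std::array for small compile-time sizes and std::vector otherwise. The 256-byte threshold is an arbitrary assumption, and a real wrapper would still have to hide the interface differences behind the general interface mentioned above.
#include <array>
#include <cstddef>
#include <type_traits>
#include <vector>

template <class T, std::size_t N>
using Storage = typename std::conditional<(N * sizeof(T) <= 256),
                                          std::array<T, N>,
                                          std::vector<T>>::type;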
In short, you need to know a lot about the types and usage, and measure several aspects of your program to determine the optimal container type for a specific scenario.
I need to create a custom allocator for std:: objects (particularly, and initially, for std::vector), but it might eventually come to be used with others.
The reason I need to create a custom allocator is that I need to track allocated (heap & stack) resources by individual components of the application (this is an inherent feature of the application). I will need the custom allocator to monitor the heap portion of the resources, so it is essential that I'm able to pass to the std::vector constructor something like
trackerId idToTrackUsage;
myAlloca<int> allocator(idToTrackUsage);
vector<int> Foo( allocator );
However, after reading a bit I found this little bomb about the STL / C++ standard (see references) saying that all allocator instances of a given type should be equivalent (that is, == should return true for any two instances) and, most fatally, that any allocator instance should be able to deallocate memory allocated by any other instance (that is, without having any way to know what that other instance might be). In short, allocators cannot have state.
So I'm trying to find the best way around this. Any clever ideas? I really really REALLY don't want to have to keep a custom version of std::vector around.
EDIT: I read about scoped allocators for C++0x on http://www2.research.att.com/~bs/C++0xFAQ.html#scoped-allocator but I couldn't really get far into understanding how this applies to my problem. If anyone thinks C++0x alleviates this problem, please comment.
References:
Allocator C++ article in Wikipedia
Some random further reading courtesy of Google
Aside from the obvious answer ("if you violate any requirement, that's undefined behavior, good night and thanks for playing"), I imagine the worst that would likely happen is that the vector implementation can rely on the requirement that "all instances of the allocator class are interchangeable" in the obvious way:
vector(const Allocator &youralloc = Allocator()) {
    const Allocator hawhaw;
    // use hawhaw and ignore youralloc.
    // They're interchangeable, remember?
}
Looking at the source, GCC's vector implementation (which I think is based eventually on SGI's original STL implementation) does sort-of store a copy of the allocator object passed into that constructor, so there's some hope that this won't happen.
I'd say try it and see, and document what you've done very carefully, so that anyone attempting to use your code on an implementation that you haven't checked, knows what's going on. Implementers are encouraged in the standard to relax the restrictions on allocators, so it would be a dirty trick to make it look as though they're relaxed when really they aren't. Which doesn't mean it won't happen.
If you're really lucky, there's some documentation for your container implementation that talks about allocators.
You could, of course, leave a pointer to whatever state you need in any allocated blocks. This does of course mean that any per-block state must be stored in that block, and the allocator instances would act more like handles than actual objects in and of themselves.
Making the allocator state static would do the trick, if you're able to work with that. It does mean that all allocators of that type will have to share their state, but from your requirements, that sounds like it could be acceptable.
To respond to your edit: yes, in C++0x or C++11, allocators can have state.
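Here, for example, is a hedged C++11 sketch of the kind of stateful tracking allocator the question describes. trackerId, myAlloca, and the raw byte counter are stand-ins for the application's real tracking machinery.
#include <cstddef>
#include <new>
#include <vector>

using trackerId = std::size_t;

template <class T>
struct myAlloca {
    using value_type = T;

    myAlloca(trackerId id, std::size_t* bytesUsed) : id(id), bytesUsed(bytesUsed) {}
    template <class U>
    myAlloca(const myAlloca<U>& other) : id(other.id), bytesUsed(other.bytesUsed) {}

    T* allocate(std::size_t n) {
        *bytesUsed += n * sizeof(T);                       // report to this component's tracker
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t n) {
        *bytesUsed -= n * sizeof(T);
        ::operator delete(p);
    }

    trackerId    id;
    std::size_t* bytesUsed;
};

// Two allocators are interchangeable only if they report to the same tracker.
template <class T, class U>
bool operator==(const myAlloca<T>& a, const myAlloca<U>& b) { return a.bytesUsed == b.bytesUsed; }
template <class T, class U>
bool operator!=(const myAlloca<T>& a, const myAlloca<U>& b) { return !(a == b); }

// Usage, mirroring the snippet in the question:
//   std::size_t heapBytes = 0;
//   trackerId idToTrackUsage = 42;
//   myAlloca<int> allocator(idToTrackUsage, &heapBytes);
//   std::vector<int, myAlloca<int>> Foo(allocator);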
My application problem is the following -
I have a large structure foo. Because these are large and for memory management reasons, we do not wish to delete them when processing on the data is complete.
We are storing them in std::vector<boost::shared_ptr<foo>>.
My question is related to knowing when all processing is complete. First decision is that we do not want any of the other application code to mark a complete flag in the structure because there are multiple execution paths in the program and we cannot predict which one is the last.
So in our implementation, once processing is complete, we delete all copies of the boost::shared_ptr<foo> except for the one in the vector. This will drop the reference count in the shared_ptr to 1. Is it practical to use use_count() to see if it is equal to 1, to know when all other parts of my app are done with the data?
One additional reason I'm asking is that the Boost documentation for shared_ptr recommends not using use_count() in production code.
Edit -
What I did not say is that when we need a new foo, we scan the vector of foo pointers looking for one that is not currently in use and use it for the next round of processing. This is why I was thinking that a reference count of 1 would be a safe way to ensure that a particular foo object is no longer in use.
My immediate reaction (and I'll admit, it's no more than that) is that it sounds like you're trying to get the effect of a pool allocator of some sort. You might be better off overloading operator new and operator delete to get the effect you want a bit more directly. With something like that, you can probably just use a shared_ptr like normal, and the other work you want delayed will be handled in operator delete for that class.
That leaves a more basic question: what are you really trying to accomplish with this? From a memory management viewpoint, one common wish is to allocate memory for a large number of objects at once, and after the entire block is empty, release the whole block at once. If you're trying to do something on that order, it's almost certainly easier to accomplish by overloading new and delete than by playing games with shared_ptr's use_count.
Edit: based on your comment, overloading new and delete for the class sounds like the right thing to do. If anything, integration into your existing code will probably be easier; in fact, you can often do it completely transparently.
The general idea for the allocator is pretty much the same as you've outlined in your edited question: have a structure (bitmaps and linked lists are both common) to keep track of your free objects. When new needs to allocate an object, it can scan the bit vector or look at the head of the linked list of free objects, and return its address.
This is one case that linked lists can work out quite well -- you (usually) don't have to worry about memory usage, because you store your links right in the free object, and you (virtually) never have to walk the list, because when you need to allocate an object, you just grab the first item on the list.
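A hedged sketch of that class-level overload (names invented; not thread-safe, assumes nothing derives from foo, and the payload member is just a stand-in for the asker's large structure):
#include <cstddef>
#include <new>

class foo {
public:
    static void* operator new(std::size_t size) {
        if (free_list_) {                      // grab the first item on the free list
            void* p = free_list_;
            free_list_ = free_list_->next;
            return p;
        }
        return ::operator new(size);           // list empty: fall back to the global heap
    }
    static void operator delete(void* p) noexcept {
        Node* n = static_cast<Node*>(p);       // store the link right inside the dead object
        n->next = free_list_;
        free_list_ = n;
    }

private:
    struct Node { Node* next; };
    static Node* free_list_;
    char payload_[256];                        // stand-in for the large data the question mentions
};

foo::Node* foo::free_list_ = nullptr;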
This sort of thing is particularly common with small objects, so you might want to look at the Modern C++ Design chapter on its small object allocator (and an article or two since then by Andrei Alexandrescu about his newer ideas of how to do that sort of thing). There's also the Boost.Pool allocator, which is generally at least somewhat similar.
If you want to know whether or not the use count is 1, use the unique() member function.
I would say your application should have some method that eliminates all references to the Foo from other parts of the app, and that method should be used instead of checking use_count(). Besides, if use_count() is greater than 1, what would your program do? You shouldn't be relying on shared_ptr's features to eliminate all references, your application architecture should be able to eliminate references. As a final check before removing it from the vector, you could assert(unique()) to verify it really is being released.
I think you can use shared_ptr's custom deleter functionality to call a particular function when the last copy has been released. That way, you're not using use_count at all.
You would need to hold something other than a copy of the shared_ptr in your vector so that the shared_ptr is only tracking the outstanding processing.
Boost has several examples of custom deleters in the shared_ptr docs.
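A rough sketch of that approach, assuming an in_use flag on foo stands in for whatever the application uses to mark availability. The vector would then hold plain foo* rather than shared_ptr copies, and the flag handling is not thread-safe as written.
#include <boost/shared_ptr.hpp>

struct foo {
    bool in_use = false;
    // ... large payload ...
};

// Hand out a shared_ptr whose deleter releases the slot instead of deleting it.
// When the last copy held by the processing code goes away, in_use drops back
// to false and the object can be picked up for the next round.
inline boost::shared_ptr<foo> checkout(foo* raw) {
    raw->in_use = true;
    return boost::shared_ptr<foo>(raw, [](foo* p) { p->in_use = false; });
}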
I would suggest that instead of trying to use the shared_ptr's use_count to keep track, it might be better to implement your own usage counter. This way you will have full control over it, rather than relying on the shared_ptr's counter which, as you rightly suggest, is not recommended. You can also pre-set your own counter to allow for the number of threads you know will need to act on the data, rather than relying on them all being initialised at the beginning to get their copies of the structure.