As far as I know, when a vector runs out of space the allocator is used to create new space. However, I want to create a custom resize policy that will remove the bottom 25% of elements instead and maintain the same size all the time. This is to build a cache that has limited space.
Is there a method or default functor I can override to get the behavior I want?
TL;DR, you are trying to use the wrong container.
The allocator is responsible for the allocation, deallocation etc. of the memory as required by the container. It is the responsibility of the container to implement the required semantics and it uses the allocators to assist it in doing so.
std::vector is probably not the best choice for the cache you describe, or at least not in its raw form.
You can look to Boost (boost::circular_buffer) as an alternative.
Given the vector you mention, you could also look to wrap that with the cache interface you desire, but changing the allocator is not the correct route. Changes to the allocator would leave the vector thinking there are valid objects in the "lower" 25% of the container, whilst the allocator has already removed them (or the memory backing them).
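As a rough illustration of that wrapping approach (the EvictingCache name and the drop-the-oldest-quarter policy are just placeholders for this sketch, not anything standard or from Boost):

#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of a fixed-capacity cache built on top of std::vector: when full,
// it evicts the "bottom" 25% (here, the oldest elements) instead of growing.
template <typename T>
class EvictingCache {
public:
    explicit EvictingCache(std::size_t capacity) : capacity_(capacity) {
        items_.reserve(capacity_);
    }

    void push(const T& value) {
        if (items_.size() >= capacity_) {
            // Drop at least one element so we never exceed capacity.
            std::size_t evict = std::min(items_.size(), std::max<std::size_t>(1, capacity_ / 4));
            items_.erase(items_.begin(), items_.begin() + evict);
        }
        items_.push_back(value);
    }

    std::size_t size() const { return items_.size(); }

private:
    std::size_t capacity_;
    std::vector<T> items_;  // plain vector underneath; the policy lives in the wrapper
};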
Related
I want to write a custom memory manager/allocator for learning. I'm tempted to have a master allocator that requests n bytes of RAM from the heap (via new). This would be followed by several allocator... adaptors? Each would interface with the master, requesting a block of memory to manage; these would be stack, linear, pool, slab allocators, etc., each managing allocations from its own slice of the master pool allocator.
The problem I have is whether I should write custom allocator_traits to interface with these for the various STL containers; or if I should just ignore the adaptor idea and simply overload new and delete to use the custom pool allocator/manager, the master one.
What I'm interested in understanding is what tangible benefit I would gain from having separate allocators for STL containers? It seems like the default std::allocator calls new and delete as needed, so if I overload those to instead request from my big custom memory pool, I'd get all the benefit without the cruft of custom std::allocator code.
Or is this a matter where certain allocator models, such as a stack allocator for a std::deque, would work better than the default allocator? And if so, wouldn't the normal STL implementation already specialise the default allocator for the various container types, or otherwise be optimised in its calls to the default allocator?
If it matters at all, I'm using C++20 via GCC 10+
If all you want is to replace the global allocator, including in every library you are using, you don't need std::allocator; replacing the global operator new and operator delete already does that.
std allocators let you do things like create temporary allocation pools. Suppose you have some data structures you can guarantee will not outlive a certain scope, and you know that (whatever is allocated) 90%+ will remain allocated to the end of the scope.
A relatively simple std allocator could hand out memory, never recycle it, and clean it all up at the end of the scope much faster than any global new or delete operator could.
Whenever you have special knowledge of the contents and lifetime patterns of a container, you could hand-tune an allocator for that specific container. The standard allocator cannot. Sometimes when you are willing to make compromises that the std containers are not, you can patch their behavior with a custom allocator.
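A minimal sketch of such a scope-local, never-recycle allocator might look like this (the names and the bump-pointer layout are my own illustration, and the alignment handling assumes nothing over-aligned):

#include <cstddef>
#include <new>

// Arena that grabs one block up front, hands out pieces, never recycles
// them, and releases everything at once when it goes out of scope.
class ScopedArena {
public:
    explicit ScopedArena(std::size_t bytes)
        : buffer_(new char[bytes]), size_(bytes), used_(0) {}
    ~ScopedArena() { delete[] buffer_; }  // the whole pool dies here, at once

    void* allocate(std::size_t n, std::size_t align) {
        // Bump-pointer allocation; assumes fundamental alignment only.
        std::size_t offset = (used_ + align - 1) / align * align;
        if (offset + n > size_) throw std::bad_alloc();
        used_ = offset + n;
        return buffer_ + offset;
    }

private:
    char* buffer_;
    std::size_t size_;
    std::size_t used_;
};

template <typename T>
struct ArenaAllocator {
    using value_type = T;
    ScopedArena* arena;

    explicit ArenaAllocator(ScopedArena& a) : arena(&a) {}
    template <typename U>
    ArenaAllocator(const ArenaAllocator<U>& other) : arena(other.arena) {}

    T* allocate(std::size_t n) {
        return static_cast<T*>(arena->allocate(n * sizeof(T), alignof(T)));
    }
    void deallocate(T*, std::size_t) {}  // intentionally a no-op: never recycle
};

template <typename T, typename U>
bool operator==(const ArenaAllocator<T>& a, const ArenaAllocator<U>& b) { return a.arena == b.arena; }
template <typename T, typename U>
bool operator!=(const ArenaAllocator<T>& a, const ArenaAllocator<U>& b) { return !(a == b); }

// Usage: everything v allocates lives until arena is destroyed.
// ScopedArena arena(1 << 20);
// ArenaAllocator<int> alloc(arena);
// std::vector<int, ArenaAllocator<int>> v(alloc);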
std::deque cannot efficiently use a stack allocator, because it cannot presume you'll mainly use it as a stack. You might use it mainly as a queue. A stack allocator would be a disaster if you use the deque mainly as a queue; but if you used it 90%+ as a stack, a stack allocator could be much faster at the cost of modest memory overhead (and at 99%+, you could use a stack allocator that handles the exceptional cases and cleans up after the non-stack operations).
Finally, allocators can permit you to distinguish between kinds of containers. You might want the memory for your document (persistent) state to be allocated in one region of memory, and your "scratch" non-persistent data to be allocated elsewhere.
And yes, using a std allocator is something you should consider not doing. Optimization is fungible, and tweaking low-level memory allocation is something you can work on after you have made the rest of the system more efficient and functional. Only when you have something that works, isn't fast enough, and you have identified new/delete as a fundamental bottleneck you can't design around should you say "ok, time to replace allocation!"
Use case: security software needs to shred memory on delete, because it cannot afford to let sensitive data remain somewhere in physical RAM, potentially accessible to processes instantiated later. The delete operators of standard runtimes won't do this expensive operation, and overriding the global heap operators might lead to linker problems with libraries that depend on the runtime's versions of them.
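A sketch of that use case (ShreddingAllocator is an illustrative name, and a real implementation would use a wipe the compiler cannot elide, such as explicit_bzero or memset_s, rather than plain memset):

#include <cstddef>
#include <cstring>
#include <new>
#include <vector>

// Allocator that overwrites memory before returning it, so sensitive data
// does not linger in freed heap blocks. Plain memset is shown for brevity;
// the compiler may optimize it away, which a production version must prevent.
template <typename T>
struct ShreddingAllocator {
    using value_type = T;

    ShreddingAllocator() = default;
    template <typename U>
    ShreddingAllocator(const ShreddingAllocator<U>&) {}

    T* allocate(std::size_t n) {
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t n) {
        std::memset(p, 0, n * sizeof(T));  // shred before handing memory back
        ::operator delete(p);
    }
};

template <typename T, typename U>
bool operator==(const ShreddingAllocator<T>&, const ShreddingAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const ShreddingAllocator<T>&, const ShreddingAllocator<U>&) { return false; }

// std::vector<char, ShreddingAllocator<char>> secret;  // shredded on every reallocation/destruction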
Answering the two questions in-order:
Should I write custom allocator_traits to interface my allocators for the various STL containers?
Yes, for ease of manipulation. Fairly soon in the implementation you will run into situations such as controlling memory overlaps, for example while stress-testing individual allocators at full capacity and working out a re-allocation algorithm. For those cases you would specialize the allocator_traits class for your allocators rather than implement its member types from scratch using the new and delete operators.
The reason allocator_traits is used is that it facilitates easy handling of certain rules that need to be respected. Such rules occur all across memory management. [Refer here for three such rules during allocator construction.]
What tangible benefit I would gain from having separate allocators for STL containers?
Absolute control of how the master allocator assigns, re-assigns, copies, moves, and destructs memory (with added control over quantifying/enhancing performance). Pretty cool, isn't it! If the default std allocator is used, you would lose this control and rely on an (albeit very good) default implementation of memory management.
I was looking into how custom containers are created, such as eastl's container and several other models and I see that they all use an "allocator", much like std::vector does with std::allocator. Which got me thinking, why do new implementations of a vector container use an allocator when they typically have an underlying memory management override for new and delete?
Being able to replace operator new() and operator delete() (and their array versions) at program level may be sufficient for a small program. If you have programs consisting of many millions of lines of code, running many different threads, this isn't at all suitable. You often want or even need better control. To make the use of custom allocators effective, you also need to be able to allocate subobjects using the same allocator as the outer objects.
For example, consider the use of a memory arena when answering a request in some sort of server, which is probably running multiple threads. Getting memory from operator new() is probably fairly expensive because it involves acquiring a lock and finding a suitable chunk of memory in a heap which is getting more and more fragmented. To avoid this, you just want to allocate a few chunks of memory (ideally just one, but you may not know the needed size in advance) and put all objects there. An allocator can do this. To do so, you need to inform all entities allocating memory about this chunk of memory, i.e. you need to pass the allocator to everything possibly allocating memory. If you allocate e.g. a std::vector<std::string, A>, the std::string objects should know about the allocator: just telling the std::vector<std::string, A> where and how to allocate memory isn't enough to avoid most memory allocations: you also need to tell the std::string (well, actually the std::basic_string<char, std::char_traits<char>, B> for a suitable allocator type B which is related to A).
That is, if you really mean to take control of your memory allocations, you definitely want to pass allocators to everything which allocates memory. Using replaced versions of the global memory management facilities may help you but it is fairly constrained. If you just want to write a custom container and memory allocation isn't much of your concern you don't necessarily need to bother. In big systems which are running for extensive periods of time memory allocation is one of the many concerns, however.
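Since C++17 the standard library ships a ready-made way to do exactly this kind of propagation: the polymorphic allocators in <memory_resource>. A small sketch of the per-request arena idea using them (the buffer size and strings are arbitrary):

#include <memory_resource>
#include <string>
#include <vector>

int main() {
    // One chunk obtained up front, e.g. per server request; no heap lock
    // is taken for the individual allocations below.
    char buffer[64 * 1024];
    std::pmr::monotonic_buffer_resource arena(buffer, sizeof(buffer));

    // pmr::string is basic_string with a polymorphic allocator, so the inner
    // strings are told about the arena too, not just the outer vector.
    std::pmr::vector<std::pmr::string> rows(&arena);
    rows.emplace_back("a string long enough to defeat the small-string optimization");
    rows.emplace_back("another row allocated from the same arena");
}   // the arena releases the whole chunk here in one go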
Allocators are classes that define memory models to be used by Standard Library containers.
Every Standard Library container has its own default allocator; however, users of the container can provide their own allocator in place of the default.
This is for additional flexibility.
It ensures that users can provide their own allocator, offering an alternative form of memory management (e.g. memory pools) apart from the regular heap.
If you want to produce a standard-compatible container then the answer is of course yes... allocators are described in the standard so they are required.
In my personal experience however allocators are not that useful... therefore if you are developing a container for a specific use to overcome some structural limitation of the standard containers then I'd suggest to forget about allocators unless you really see a reason for using them.
If instead you are developing a container just because you think you can do better than the standard vector, then my guess is that you are wasting your time. I don't like the allocator design (dropping onto the type something that shouldn't be there), but luckily enough they can be just ignored. The only annoyance with allocators when you don't need them (i.e. always) is probably some more confusion in error messages... which, however, are a mess anyway.
Has anyone seen an allocator that calls mlock(2) to prevent an STL container's contents from being swapped to disk?
There is afaik only one tricky part to writing such an allocator, namely minimizing the number of mlocked pages by clustering allocations to be mlocked. Ergo, one should probably start by modifying some shared memory allocator?
If I wanted to implement this (which is difficult to imagine, because I find it hard to believe it's the right solution to any problem :^), I'd try to do it by using a boost::pool_allocator (which provides a standard library compatible Allocator from a pool) and then - I forget the details; I think it'll involve the RequestedSize template argument to singleton_pool and a user_allocator? - there will be some way of having that sit on top of a pool which requests bigger chunks of memory by the mechanism of your choice, which in your case would be allocation of mlocked pages.
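For what it's worth, a bare-bones (POSIX-only) version of the idea, before any clustering onto shared locked pages, could be sketched like this; MlockAllocator is just an illustrative name:

#include <cstddef>
#include <new>
#include <sys/mman.h>  // mlock / munlock
#include <vector>

// Locks every allocation into RAM so it cannot be swapped out. As noted
// above, a serious version would cluster allocations onto a small set of
// pre-locked pages instead of locking each block individually.
template <typename T>
struct MlockAllocator {
    using value_type = T;

    MlockAllocator() = default;
    template <typename U>
    MlockAllocator(const MlockAllocator<U>&) {}

    T* allocate(std::size_t n) {
        void* p = ::operator new(n * sizeof(T));
        if (mlock(p, n * sizeof(T)) != 0) {   // e.g. RLIMIT_MEMLOCK exceeded
            ::operator delete(p);
            throw std::bad_alloc();
        }
        return static_cast<T*>(p);
    }
    void deallocate(T* p, std::size_t n) {
        munlock(p, n * sizeof(T));
        ::operator delete(p);
    }
};

template <typename T, typename U>
bool operator==(const MlockAllocator<T>&, const MlockAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const MlockAllocator<T>&, const MlockAllocator<U>&) { return false; }

// std::vector<char, MlockAllocator<char>> sensitive;  // contents stay out of swap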
I am working on a plugin for an application, where memory should be allocated by the application, which keeps track of it. Hence, memory handles should be obtained from the host application in the form of buffers and later given back to the application. Now, I am planning on using STL vectors and I am wondering what sort of memory allocation they use internally.
Does it use 'new' and 'delete' functions internally? If so, can I just overload 'new' and 'delete' with my own functions? Or should I create my own template allocator which looks like a difficult job for me since I am not that experienced in creating custom templates.
Any suggestions/sample code are welcome. Memory handles can be obtained from the application like this:
void* bufferH = NULL;
bufferH = MemReg()->New_Mem_Handle(size_of_buffer);
MemReg()->Dispose_Mem_Handle(bufferH); //Dispose it
vector uses std::allocator by default, and std::allocator is required to use global operator new (that is, ::operator new(size_t)) to obtain the memory (20.4.1.1). However, it isn't required to call it exactly once per call to allocator::allocate.
So yes, if you replace global operator new then vector will use it, although not necessarily in a way that really allows your implementation to manage memory "efficiently". Any special tricks you want to use could, in principle, be made completely irrelevant by std::allocator grabbing memory in 10MB chunks and sub-allocating.
If you have a particular implementation in mind, you can look at how its vector behaves, which is probably good enough if your planned allocation strategy is inherently platform-specific.
STL containers use an allocator they are given at construction time, with a default allocator that uses operator new and operator delete.
If you find the default is not working for you, you can provide a custom allocator that conforms to the container's requirements. There are some real-world examples cited here.
I would measure performance using the default first, and optimize only if you really need to. The allocator abstraction offers you a relatively clean way to fine-tune here without major redesign. How you use the vector could have far more performance impact than the underlying allocator (reserve() in advance, avoid insert and removal in the middle of the range of elements, handle copy construction of elements efficiently - the standard caveats).
std::vector uses the uninitialized_* functions to construct its elements from raw memory (using placement new). It allocates storage using whatever allocator it was created with, and by default, that allocator uses ::operator new(size_t) and ::operator delete(void *p) directly (i.e., not a type-specific operator new).
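If you do go the custom-allocator route instead of replacing the global operators, a minimal wrapper around the host API from the question might look roughly like this (it assumes the host SDK header declaring MemReg(), New_Mem_Handle and Dispose_Mem_Handle is included, and that handles are plain pointers as the snippet suggests):

#include <cstddef>
#include <new>
#include <vector>
// plus the host SDK header that declares MemReg()

// Routes every container allocation through the host application's handles.
template <typename T>
struct HostAllocator {
    using value_type = T;

    HostAllocator() = default;
    template <typename U>
    HostAllocator(const HostAllocator<U>&) {}

    T* allocate(std::size_t n) {
        void* p = MemReg()->New_Mem_Handle(n * sizeof(T));
        if (!p) throw std::bad_alloc();
        return static_cast<T*>(p);
    }
    void deallocate(T* p, std::size_t) {
        MemReg()->Dispose_Mem_Handle(p);
    }
};

template <typename T, typename U>
bool operator==(const HostAllocator<T>&, const HostAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const HostAllocator<T>&, const HostAllocator<U>&) { return false; }

// std::vector<float, HostAllocator<float>> samples;  // memory comes from the host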
From this article, "The concept of allocators was originally introduced to provide an abstraction for different memory models to handle the problem of having different pointer types on certain 16-bit operating systems (such as near, far, and so forth)" ...
"The standard provides an allocator that internally uses the global operators 'new' and 'delete'"
The author also points out the allocator interface isn't that scary. As Neil Buchanan would say, "try it yourself!"
The actual std::allocator has been optimized across a rather wide range of object sizes. It isn't the best when it comes to allocating many small objects, nor is it the best for many large objects. That being said, it also wasn't written for multi-threaded applications.
May I suggest, before attempting to write your own you check out the Hoard allocator if you're going the multi-threaded route. (Or you can check out the equally appealing Intel TBB page too.)
Looking at vector, I realized that I have never used the second argument when creating vectors.
std::vector<int> myInts; // this is what I usually do
std::vector<int, ???> myOtherInts; // but is there a second argument there?
Looking at the link above it says that it is for:
Allocator object to be used instead of constructing a new one.
or, as for this one:
Allocator: Type of the allocator object used to define the storage allocation model. By default, the allocator class template for type T is used, which defines the simplest memory allocation model and is value-independent.
I guess it has to do with something with memory management. However, I am not sure how to use that.
Any pointers regarding this?
The default allocator, std::allocator<>, will handle all allocations made by std::vector<> (and others). It will make new allocations from the heap each time a new allocation is needed.
By providing a custom allocator, you can for instance allocate a big chunk of memory up front and then slice it up and hand out smaller pieces when separate allocations are needed. This will increase the allocation speed dramatically, which is good for example in games, at the cost of increased complexity as compared to the default allocator.
Some std type implementations have internal stack-based storage for small amounts of data. For instance, std::basic_string<> might use what is called the small string optimization, where only strings longer than some fixed length, say 16 characters (just an example!), get an allocation from the allocator; otherwise an internal array is used.
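If you just want to see the second template argument in action, a do-almost-nothing allocator that merely logs its calls is enough (TracingAllocator is an illustrative name, not a standard type):

#include <cstddef>
#include <cstdio>
#include <vector>

// Forwards to the global operators but prints each call, which makes the
// vector's reallocation pattern visible.
template <typename T>
struct TracingAllocator {
    using value_type = T;

    TracingAllocator() = default;
    template <typename U>
    TracingAllocator(const TracingAllocator<U>&) {}

    T* allocate(std::size_t n) {
        std::printf("allocating space for %zu elements\n", n);
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t n) {
        std::printf("releasing space for %zu elements\n", n);
        ::operator delete(p);
    }
};

template <typename T, typename U>
bool operator==(const TracingAllocator<T>&, const TracingAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const TracingAllocator<T>&, const TracingAllocator<U>&) { return false; }

int main() {
    std::vector<int, TracingAllocator<int>> myOtherInts;  // the second argument filled in
    for (int i = 0; i < 100; ++i) myOtherInts.push_back(i);  // watch it grow
}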
Custom allocators are rarely used in the general case. Some examples of where they can be useful:
Optimization for a specific pattern of allocations. For example, a concurrent program can pre-allocate a large chunk of memory via standard means at the beginning of task execution and then shave pieces off it without blocking on the global heap mutex. When the task is completed, the entire memory block can be disposed of. To use this technique with STL containers, a custom allocator can be employed.
Embedded software, where a device has several ranges of memory with different properties (cached/noncached, fast/slow, volatile/persistent etc). A custom allocator can be used to place objects stored in an STL container in a specific memory region.
Maybe this will help: http://www.codeguru.com/cpp/cpp/cpp_mfc/stl/article.php/c4079
You may try googling for: STL allocator.
Allocators (STL) help you manage memory for the objects in a vector class. You may use a custom allocator for a different memory model, etc.
Hi, you can find an example of a custom allocator at http://www.codeproject.com/KB/cpp/allocator.aspx