I am writing a memory-managing template class in which I want to create a C-style array of fixed size, to serve as a heap. I keep the objects stored in an array like this:
T v[SIZE];
As this array only serves as a heap that can hold T objects, I don't want the T default constructor to be called automatically for every object in the array.
I thought about the solution to define the heap like this:
char v[SIZE * sizeof(T)];
...but this will give me alignment problems.
Is there any better way to achieve this?
ADD: As I have special run-time requirements, it is essential that this class doesn't do any allocations on the global heap.
ADD 2: SIZE is a template argument and known at compile-time.
The standard containers use allocators to separate allocation/deallocation from construction/destruction. The standard library supplies a single allocator, which allocates on the heap.
This code declares an array big enough to hold SIZE elements of type T with the correct alignment:
typedef typename std::tr1::aligned_storage<sizeof(T),std::tr1::alignment_of<T>::value>::type aligned_storage;
aligned_storage array[SIZE];
The solution using std::allocator can't be used to declare an array on the stack, and as the standard containers require that custom allocators hold no state, a custom allocator can't be portably used to allocate on the stack either.
If your compiler doesn't support std::tr1::alignment_of you can use boost::alignment_of instead.
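For illustration, a minimal sketch of how such a pool is then used (assuming slot i is currently free):
T* p = new (&array[i]) T(); // placement new constructs a T into the raw slot
p->~T();                    // destroy explicitly; the storage itself needs no cleanup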
What you are looking for is called an Allocator. A good overview can be found here: http://www.codeproject.com/KB/cpp/allocator.aspx
What I'd probably do is create an array of char (much like you've already considered), but allocate enough space for one more object than you really need. You'll then need a bit of code to find a correctly aligned starting address for your objects in that space.
An object's alignment requirement can never exceed its own size (otherwise arrays of those objects couldn't be contiguous, which is required). Therefore, you pick the first address in the char buffer that's a multiple of sizeof(T), and start your array from there.
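A hedged sketch of that idea (FixedPool and its members are illustrative names, not an established API):
#include <cstddef>
#include <cstdint>

template <class T, std::size_t SIZE>
class FixedPool {
    char raw_[(SIZE + 1) * sizeof(T)]; // one extra element's worth of slack for alignment
public:
    T* slots() {
        std::uintptr_t p = reinterpret_cast<std::uintptr_t>(raw_);
        p = (p + sizeof(T) - 1) / sizeof(T) * sizeof(T); // round up to a multiple of sizeof(T)
        return reinterpret_cast<T*>(p);                  // first aligned slot; SIZE slots fit from here
    }
};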
Weird, but should work:
long long v[SIZE * sizeof(T) / sizeof(long long) + 1];
This buffer will be aligned to 64 bits.
Allocating the memory with new would be better, though.
In any case, if the size were only known at run time, the array couldn't be placed on the stack and the memory would have to be allocated dynamically via malloc/new.
EDIT: You can't use std::auto_ptr to automatically free the buffer, since it calls delete rather than delete[]. boost::scoped_array can be used instead.
You can use a struct to handle the alignment issue (note that a plain char array member only has alignment 1, so an alignment specifier such as C++11's alignas is needed to actually align the buffer for T):
struct Tbuffer { alignas(T) char data_[sizeof(T)]; };
Tbuffer heap[SIZE];
How I'll probably do it (after looking at the EASTL fixed_vector implementation):
PRAGMA_PRE_ALIGN(sizeof(T)) char v[SIZE * sizeof(T)]; PRAGMA_POST_ALIGN(sizeof(T))
...and implement compiler specific PRAGMA_PRE_ALIGN and PRAGMA_POST_ALIGN macros, that insert the correct #pragmas.
Unfortunately, boost and tr1 are not possible for this project.
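For illustration only, one possible spelling of such macros for GCC/Clang; note that the attribute must appear before the semicolon, that aligned() requires a power of two (which sizeof(T) isn't guaranteed to be; alignof(T) would be the safer argument), and that MSVC's __declspec(align(#)) accepts only integer literals, so it would need a different scheme. With C++11, alignas(n) makes the macros unnecessary.
#include <cstddef>

#define PRAGMA_PRE_ALIGN(n)
#define PRAGMA_POST_ALIGN(n) __attribute__((aligned(n)))

template <class T, std::size_t SIZE>
struct FixedHeap {
    PRAGMA_PRE_ALIGN(sizeof(T)) char v[SIZE * sizeof(T)] PRAGMA_POST_ALIGN(sizeof(T));
};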
Suppose I want to create a memory pool so that I can control the memory locality of a large number of small, polymorphism-free objects. Naively I would do std::array<T, MAX_SIZE>, or even just T[MAX_SIZE] and then use placement new. However, both of these insist on initializing all of the objects, which in turn:
Forces the objects to have a default constructor
Could hurt my program's startup time if these objects are expensive
Could open a massive can of static initialization worms if my buffer is static
So what I really want is the buffer contents to stay uninitialized until placement new is called on each slot. The solution I've found is to declare the buffer as std::array<char, MAX_SIZE*sizeof(T)> instead, but this throws away type safety completely and is super ugly. Is there a better way?
Also, why is C++ like this? I find it super bizarre that C++ is so pedantic about uninitialized objects while it cheerfully destroys my object invariants via object slicing, and has no problem leaving my enums uninitialized...
You cannot use std::array<char, MAX_SIZE*sizeof(T)> as a backing storage for arbitrary types as there is no guarantee that the array will be well-aligned to alignof(T).
The idiomatic way to allocate uninitialized space is to use std::aligned_storage_t. For your case, it would be std::aligned_storage_t<sizeof(T) * MAX_SIZE, alignof(T)>. Then you can placement new (or use the standard functions related to uninitialized memory) at the correct offsets into this array without causing undefined behavior.
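For instance, a minimal sketch of a pool built on top of it (Pool and its members are illustrative names):
#include <cstddef>
#include <new>
#include <type_traits>

template <class T, std::size_t MAX_SIZE>
class Pool {
    std::aligned_storage_t<sizeof(T) * MAX_SIZE, alignof(T)> buf_; // raw, uninitialized storage
public:
    T* construct(std::size_t i) {
        // placement new at the i-th slot; no T is created until this call
        return new (reinterpret_cast<char*>(&buf_) + i * sizeof(T)) T();
    }
    void destroy(T* p) { p->~T(); } // explicit destruction; storage is reusable afterwards
};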
If you don't care about heap allocations, you should use std::vector<T> as NathanOliver suggests in the comments.
Since you will ever only store a single type in your pool, instead of std::aligned_storage_t, you could also use a union with a single member:
template <class T>
union uninit_holder {
    uninit_holder() {}  // Intentionally empty
    ~uninit_holder() {} // Ditto
    T val;
};
And use a std::array with it:
std::array<uninit_holder<T>, MAX_SIZE> arr;
When you want to create one, you can placement new into the object and call std::destroy_at to destroy one when you no longer need it.
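For example (assuming slot i currently holds no object):
T* p = new (&arr[i].val) T(); // construct in place
std::destroy_at(p);           // destroy when done (C++17, <memory>; equivalently p->~T())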
(Since this answer proposes a completely different approach, I added another answer instead of editing my original one.)
I have a container class that manages its underlying memory in chunks. The number of chunks varies with the number of objects stored in the container: I allocate a new chunk of memory whenever the container is about to exceed the currently available memory, and deallocate a chunk whenever it is no longer in use. So the number of chunks changes during runtime, and therefore I have to store the chunk pointers in a dynamically growing/shrinking array.
/*** Container Class ***/
template<class T, class Allocator = std::allocator<T>>
class CustomContainer {
public:
    // Some member methods here..
private:
    void createNewChunk()
    {
        // Some code goes here ...
        newChunkAddr = new T*[CHUNK_SIZE];
    }

    void destroyChunk(T* chunkAddr)
    {
        // Some code goes here ...
        delete[] chunkAddr;
    }

private:
    /*** Members ***/
    // Some other members ...
    std::size_t size   = 0;
    T**         chunks = nullptr;
    Allocator   allocator;
};
Everything is fine as long as the system that uses this container has a heap, and thus a properly implemented operator new. The problem occurs when the system doesn't provide operator new and the user assumes that the container will obtain any dynamic resource through the Allocator it supplies.
Immediately after a quick brainstorm, I thought I could use the allocate method of the std::allocator_traits class to allocate the required space. Unfortunately, that method can only allocate areas whose size is an exact multiple of the size of the allocator's value type. Here is the explanation of the corresponding function:
Uses the allocator a to allocate n*sizeof(Alloc::value_type) bytes of
uninitialized storage. An array of type Alloc::value_type[n] is
created in the storage, but none of its elements are constructed.
Here is the question, what is the proper way of allocating/deallocating the space for storing the chunk pointers?
what is the proper way of allocating/deallocating the space for storing the chunk pointers?
Any correct way of allocating memory is a proper way. Each has its benefits and drawbacks. You should choose based on which benefits and drawbacks matter for your use case.
You could use static storage if you wish to avoid dynamic storage, but that would limit your maximum number of chunks. Or if you don't mind dynamic storage, then you can use the global allocator.
You could even let the user customise that allocation by providing a separate custom allocator.
What the standard containers do - which is also a proper way, but not the only proper way - is create a new allocator of type std::allocator_traits<Allocator>::rebind_alloc<T*>. If you do go the way of using a separate custom allocator, this would be a reasonable default for that second allocator.
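A hedged sketch of that rebinding, roughly as a standard container would do it (newCount and oldCount are illustrative names for the new and old capacities):
using PtrAlloc = typename std::allocator_traits<Allocator>::template rebind_alloc<T*>;

PtrAlloc ptrAlloc(allocator); // allocators are required to be convertible across rebinds
T** newChunks = std::allocator_traits<PtrAlloc>::allocate(ptrAlloc, newCount);
// ... copy the old chunk pointers over, then release the old array ...
std::allocator_traits<PtrAlloc>::deallocate(ptrAlloc, chunks, oldCount);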
I have to store the chunk pointers in a dynamically growing/shrinking array.
There's a container for that purpose in the standard library: std::vector. You could use that within your container.
P.S. Your description of your container is quite similar to std::deque. Consider whether it would be simpler to just use that.
I would like to have a class that contains an array member, where the constructor lets me set the size of that array member.
Is this doable? I do not think I need dynamic allocation, since once the class instances are created there is no need for the array to change size; it is just that each class instance will have a different size.
Although several comments suggest that this would be impossible, it is actually not.
The simplest way, of course, is to use an indirection and allocate the array during construction just the normal way (with a = new type[size] and calling delete[] a - not delete a - in the destructor).
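A minimal sketch of that straightforward indirection (double chosen arbitrarily as the element type; copying disabled for brevity):
#include <cstddef>

class DynArray {
public:
    explicit DynArray(std::size_t n) : size_(n), a_(new double[n]()) {}
    ~DynArray() { delete[] a_; }                   // delete[], not delete
    DynArray(const DynArray&) = delete;            // rule of three: forbid
    DynArray& operator=(const DynArray&) = delete; // copies for brevity
    double& operator[](std::size_t i) { return a_[i]; }
    std::size_t size() const { return size_; }
private:
    std::size_t size_;
    double* a_;
};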
But if for some reason you really do not want to have the array data being allocated separately from your object, you can use placement-new to construct your object into a pre-allocated buffer that is large enough to contain all your elements. This avoids a separate allocation for your array and you can still have dynamic size.
I would not recommend using this technique, though, unless you really have a demanding use case for it.
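To make that second technique concrete, a deliberately rough sketch (all names hypothetical; the trailing elements are assumed to end up suitably aligned, which holds here because double's alignment doesn't exceed the class's):
#include <cstddef>
#include <new>

class InlineArray {
public:
    static InlineArray* create(std::size_t n) {
        // one allocation holds the object and its n elements
        void* raw = ::operator new(sizeof(InlineArray) + n * sizeof(double));
        return new (raw) InlineArray(n); // placement-new the object into the buffer
    }
    static void destroy(InlineArray* p) {
        p->~InlineArray();
        ::operator delete(p);
    }
    double& operator[](std::size_t i) {
        return reinterpret_cast<double*>(this + 1)[i]; // elements live right after the object
    }
private:
    explicit InlineArray(std::size_t n) : size_(n) {}
    std::size_t size_;
};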
I am writing a memory manager in C++. The aim is to allocate a set amount of memory at the start using malloc and then overload new and delete so that they use that memory. I almost have it working; my only problem is how I keep track of what is where in the memory.
I created a vector of structs which holds information such as size, location, and whether the block is free or not.
The problem is that when I call push_back, it attempts to use my overloaded new function. This is where it fails, because it can't use my overloaded new until it has pushed back the first structure of information.
Does anyone know how I can resolve this, or a better way to keep track of the memory?
Don't overload global operator new!
The easiest and (WARNING; subjective ->) best solution would be to define your own Allocator which you'll use when dealing with allocation on the free-store (aka. heap). All STL containers have support for passing an AllocatorType as a template argument.
Overloading global operator new/operator delete might seem like a neat solution, but I can almost guarantee you that it will cause you trouble as development goes on.
Inside this custom-made allocator you can keep track of what goes where, but make sure that the internal std::vector (or whatever you'd like to use; a std::map seems more fitting to me) uses the default operator new/operator delete.
How do I create my own allocator?
The link below will lead you to a nice document with information regarding this matter:
stdcxx.apache.org - Building Your Own Allocators (heavily recommended)
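As a starting point, a hedged sketch of a minimal C++11-style allocator that forwards straight to std::malloc/std::free, so it bypasses any overloaded operator new (the name MallocAllocator is made up):
#include <cstdlib>
#include <new>

template <class T>
struct MallocAllocator {
    using value_type = T;
    MallocAllocator() = default;
    template <class U> MallocAllocator(const MallocAllocator<U>&) {} // rebind support
    T* allocate(std::size_t n) {
        if (void* p = std::malloc(n * sizeof(T))) return static_cast<T*>(p);
        throw std::bad_alloc();
    }
    void deallocate(T* p, std::size_t) { std::free(p); }
};
template <class T, class U> bool operator==(const MallocAllocator<T>&, const MallocAllocator<U>&) { return true; }
template <class T, class U> bool operator!=(const MallocAllocator<T>&, const MallocAllocator<U>&) { return false; }

// Usage for the bookkeeping vector: std::vector<Info, MallocAllocator<Info>> blocks;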
Using a custom allocator only where required/wanted means you won't run into the chicken-and-egg problem of allocating memory for the allocator that allocates memory: the allocator must already have memory before it can hand any out - and what allocates memory for the allocator? Another allocator, which itself needs memory, provided by yet another allocator?
Maybe I should just get myself a dog instead, they don't lay eggs - right?
Create a class and overload new only in this class; you will not have problems with your vector. You will be able to use the normal, global new with ::new C and your own new with new C:
class C
{
public:
    void* operator new(size_t n);
    // ...
};
Otherwise, you can use your own allocation function rather than overloading operator new.
The basic idea of such an allocator:
int *i = myGetMem(i); // myGetMem() allocates sizeof(*i) bytes of memory
so you will not have problems using the vector.
In fact, a real memory allocator keeps the information you put in the vector inside the allocated memory itself. You can take a getmem/freemem algorithm and adapt it to your case; it can be helpful.
For example: I want to allocate 10 bytes. The memory at address 0x1024 contains information about the allocated block, and the allocator returns an address just past it, say 0x1030 (depending on the information stored), as the start of the allocated memory. So the user gets address 0x1030 and has the memory between 0x1030 and 0x103A.
When the deallocator is called, the information at the beginning is used to correctly free the memory and to put it back in the list of available memory.
(In popular algorithms, the list of available memory is stored as an array of linked lists of free blocks, organized by size, with heuristics to avoid and minimize fragmentation.)
This can replace your need for the vector.
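A toy illustration of that header-based bookkeeping (first-fit, no block splitting or coalescing, not thread-safe; every name here is made up):
#include <cstddef>

namespace toy {
struct Header { std::size_t size; bool free; };

constexpr std::size_t kAlign = alignof(std::max_align_t);
constexpr std::size_t roundUp(std::size_t n) { return (n + kAlign - 1) / kAlign * kAlign; }
constexpr std::size_t kHeader = roundUp(sizeof(Header)); // header padded to keep payloads aligned
constexpr std::size_t kPoolSize = 64 * 1024;

alignas(std::max_align_t) unsigned char pool[kPoolSize]; // the fixed pool
std::size_t poolUsed = 0;                                // high-water mark

void* getMem(std::size_t n) {
    n = roundUp(n);
    for (std::size_t off = 0; off < poolUsed; ) { // first-fit scan of existing blocks
        Header* h = reinterpret_cast<Header*>(pool + off);
        if (h->free && h->size >= n) { h->free = false; return pool + off + kHeader; }
        off += kHeader + h->size;
    }
    if (poolUsed + kHeader + n > kPoolSize) return nullptr; // pool exhausted
    Header* h = reinterpret_cast<Header*>(pool + poolUsed); // carve a fresh block
    h->size = n;
    h->free = false;
    void* user = pool + poolUsed + kHeader; // user memory starts right after the header
    poolUsed += kHeader + n;
    return user;
}

void freeMem(void* p) {
    if (!p) return;
    Header* h = reinterpret_cast<Header*>(static_cast<unsigned char*>(p) - kHeader);
    h->free = true; // a real allocator would also coalesce neighboring free blocks here
}
} // namespace toy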
You can create a vector using any custom allocator.
It is declared in the following manner:
std::vector<YourStruct, YourOwnAllocator> memory_allocations;
YourOwnAllocator is going to be a class which allocates the data needed by the vector, bypassing your overloaded operators.
It needs to provide all the methods and typedefs listed here.
I have a class which requires a large amount of memory.
class BigClass {
public:
    BigClass() {
        bf1[96000000 - 1] = 1;
    }
    double bf1[96000000];
};
I can only instantiate the class by new-ing an object in heap memory.
BigClass *c = new BigClass();
assert( c->bf1[96000000-1] == 1 );
delete c;
If I instantiate it without new, I get a segmentation fault at runtime.
BigClass c; // SIGSEGV!
How can I determine the memory limit? Or should I just always use new?
First of all, since you've titled this C++ and not C, why are you using raw arrays? Instead, may I suggest vector<double> or, if contiguous memory is causing problems, deque<double>, which relaxes the constraint on contiguous memory without giving up nearly-constant-time lookup.
Using vector or deque may also alleviate other segfault issues which could plague your project at a later date, for instance overrunning the bounds of your array. If you convert to vector or deque you can use the .at(x) member function to retrieve and set values in your collection; should you attempt to access out of bounds, that function will throw an exception.
The stack has a fixed size that depends on the compiler options. See your compiler documentation to change the stack size for your executable.
Anyway, for big objects, prefer using new or, better, smart pointers like shared_ptr (from Boost, from std::tr1, or from std:: if you have a very recent compiler).
You shouldn't play that game ever. Your code could be called from another function or on a thread with a lower stack size limit and then your code will break nastily. See this closely related question.
If you're in doubt use heap-allocation (new) - either directly with smart pointers (like auto_ptr) or indirectly using std::vector.
There is no platform-independent way of determining the memory limit. For "large" amounts of memory, you're far safer allocating on the heap (i.e. using new); you can check for success by using new (std::nothrow) and comparing the resulting pointer against NULL, or by catching the std::bad_alloc exception that plain new throws.
The way your class is designed is, as you discovered, quite fragile. Instead of always allocating your objects on the heap, your class itself should allocate the huge memory block on the heap, preferably with std::vector, or possibly with a shared_ptr if vector doesn't work for some reason. Then you don't have to worry about how your clients use the object; it's safe to put on either the stack or the heap.
On Linux, in the Bash shell, you can check the stack size with ulimit -s. Variables with automatic storage duration will have their space allocated on the stack. As others have said, there are better ways of approaching this:
Use a std::vector to hold your data inside your BigClass.
Allocate the memory for bf1 inside BigClass's constructor and then free it in the destructor.
If you must have a large double[] member, allocate an instance of BigClass with some kind of smart pointer; if you don't need shared access something as simple as std::auto_ptr will let you safely construct/destroy your object:
std::auto_ptr<BigClass> myBigClass(new BigClass);
myBigClass->bf1; // your array
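For completeness, a minimal sketch of the vector-based redesign suggested above, which makes BigClass itself safe to put on the stack:
#include <cassert>
#include <vector>

class BigClass {
public:
    BigClass() : bf1(96000000, 0.0) { bf1[96000000 - 1] = 1; }
    std::vector<double> bf1; // the big buffer lives on the heap
};

int main() {
    BigClass c; // fine on the stack: only the small vector header is stored here
    assert(c.bf1[96000000 - 1] == 1);
}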