Memory Demands: Heap vs Stack in C++

So I had a strange experience this evening.
I was working on a program in C++ that required some way of reading a long list of simple data objects from a file and storing them in main memory - approximately 400,000 entries. The object itself is something like:
class Entry
{
public:
    Entry(int x, int y, int type);
    Entry();
    ~Entry();
    // some other basic functions
private:
    int m_X, m_Y;
    int m_Type;
};
Simple, right? Well, since I needed to read them from a file, I had a loop like:
Entry** globalEntries;
globalEntries = new Entry*[totalEntries];
entries = new Entry[totalEntries]; // totalEntries read from file, about 400,000
for (int i = 0; i < totalEntries; i++)
{
    globalEntries[i] = new Entry(.......);
}
That addition put the program's memory usage up by about 25 to 35 megabytes when I tracked it in Task Manager. A simple change to stack allocation:
Entry* globalEntries;
globalEntries = new Entry[totalEntries];
for (int i = 0; i < totalEntries; i++)
{
    globalEntries[i] = Entry(.......);
}
and suddenly it only required 3 megabytes. Why is that happening? I know heap-allocated objects carry a little extra overhead (4 bytes for the pointer address), but that shouldn't be enough to make THAT much of a difference. Could it be that the program is allocating memory inefficiently, and ending up with chunks of unallocated memory between the allocated blocks?

Your code is wrong, or I don't see how this worked. With new Entry[count] you create a new array of Entry (its type is Entry*), yet you assign it to an Entry**, so I presume you used new Entry*[count].
What you did next was to create another new Entry object on the heap and store it in the globalEntries array. So you need memory for 400,000 pointers plus 400,000 elements. 400,000 pointers take 3 MiB of memory on a 64-bit machine. Additionally, you have 400,000 single Entry allocations, which will all require sizeof(Entry) plus potentially some more memory (for the memory manager -- it might have to store the size of the allocation, the associated pool, alignment/padding, etc.) This additional bookkeeping memory can quickly add up.
If you change your second example to:
Entry* globalEntries;
globalEntries = new Entry[count];
for (...) {
    globalEntries[i] = Entry(...);
}
memory usage should be equal to the stack approach.
Of course, ideally you'll use a std::vector<Entry>.
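For instance, here is a minimal sketch of the vector approach, using the Entry class from the question (the readEntries name and the fscanf-based file format are invented for illustration, not from the post):

#include <cstdio>
#include <vector>

// Reads totalEntries records into one contiguous block. reserve() performs a
// single allocation up front, so there is no per-element heap overhead and no
// reallocation churn while reading.
std::vector<Entry> readEntries(std::FILE* file, int totalEntries)
{
    std::vector<Entry> entries;
    entries.reserve(totalEntries);
    for (int i = 0; i < totalEntries; i++)
    {
        int x, y, type;
        if (std::fscanf(file, "%d %d %d", &x, &y, &type) != 3)
            break; // short or malformed file; stop early
        entries.emplace_back(x, y, type);
    }
    return entries; // moved out, not copied
}

The vector also frees everything automatically, so the manual delete[] bookkeeping disappears along with the overhead.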

First of all, without specifying which column exactly you were watching, the number in Task Manager means nothing. On a modern operating system it's difficult even to define what you mean by "used memory" - are we talking about private pages? The working set? Only the stuff that stays in RAM? Does reserved but not committed memory count? Who pays for memory shared between processes? Are memory-mapped files included?
If you are watching some meaningful metric, it's impossible to see only 3 MB of memory used - your object is at least 12 bytes (assuming 32-bit integers and no padding), so 400,000 elements will need about 4.58 MiB. Also, I'd be surprised if it worked with actual stack allocation - the default stack size in VC++ is 1 MB, so you should already have had a stack overflow.
Anyhow, it is reasonable to expect a different memory usage:
- The stack is (mostly) allocated right from the beginning, so that's memory you nominally consume even without really using it for anything (actually, virtual memory and automatic stack expansion make this a bit more complicated, but it's "true enough").
- The CRT heap is opaque to Task Manager: all it sees is the memory given by the operating system to the process, not what the C heap "really" has in use; the heap grows (requesting memory from the OS) more than strictly necessary, to be ready for further memory requests - so what you see is how much memory it is ready to give away without further syscalls.
- Your "separate allocations" method has a significant overhead. The all-contiguous array you'd get with new Entry[size] costs size*sizeof(Entry) bytes, plus the heap bookkeeping data (typically a few integer-sized fields); the separate-allocations method costs at least size*sizeof(Entry) (the size of all the "bare elements") plus size*sizeof(Entry *) (the size of the pointer array) plus size+1 multiplied by the cost of each allocation. If we assume a 32-bit architecture with a cost of 2 ints per allocation, you quickly see that this costs size*24+8 bytes of memory, instead of size*12+8 for the contiguous array in the heap.
- The heap normally gives away blocks that aren't really the size you asked for, because it manages blocks of fixed sizes; so, if you allocate single objects like that, you are probably also paying for some extra padding - supposing it has 16-byte blocks, you are paying 4 bytes extra per element by allocating them separately; this moves our memory estimate to size*28+8, i.e. an overhead of 16 bytes per each 12-byte element. The short program after this list runs these numbers.
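Here is that arithmetic as a tiny program (using the assumed 32-bit figures from above: 12-byte Entry, 4-byte pointer, 2-int header, 16-byte block granularity):

#include <cstdio>

int main()
{
    const long n = 400000;   // number of entries
    const long entry = 12;   // sizeof(Entry): three ints, no padding
    const long ptr = 4;      // 32-bit pointer
    const long header = 8;   // assumed 2-int bookkeeping per allocation
    const long block = 16;   // assumed allocator block granularity

    long contiguous = n * entry + header;   // one big allocation
    long separate = n * ptr + header        // the pointer array
                  + n * (block + header);   // n padded element blocks

    std::printf("contiguous: %.2f MiB\n", contiguous / (1024.0 * 1024.0));
    std::printf("separate:   %.2f MiB\n", separate / (1024.0 * 1024.0));
}

That prints roughly 4.58 MiB versus 10.68 MiB - and a real allocator holding extra memory in reserve, as described above, widens the gap further towards the 25-35 MB the asker observed.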

I don't understand the memory issue of appending to a string

Runtime error: pointer index expression with base 0x000000000000 overflowed to 0xffffffffffffffff for frequency sort
In the first answer at that link, it says that appending a char to a string can cause a memory issue.
string s = "";
char c = 'a';
int max = INT_MAX;
for(int j=0;j<max;j++)
s = s + c;
The answer explains that s = s + c in the above code copies the same string again and again, so it will cause a memory issue. But I don't understand why that code copies the same string again and again.
Could someone help me understand that part? :)
I don't understand why that code copies the same string again and again.
Okay, let's look at what happens each time the loop iterates:
s = s + c;
There are three things the program has to do in order to execute that line of code:
1) Compute the temporary value s + c -- to do that, the program has to create a temporary, anonymous std::string object, and allocate for it (from the heap) an internal byte-buffer that is at least one byte larger than the number of chars currently in s (so that it can hold all of s's old contents, plus the additional char provided by c).
2) Set s equal to the temporary string. In C++03 and earlier, this would be done by reallocating s's internal byte-buffer to be larger, then copying all of the bytes from the temporary string into s's new/larger buffer. C++11 optimizes this a bit via the new move-assignment operator, so that all the bytes don't have to be copied; rather, s can simply take ownership of the temporary string's byte-buffer.
3) Free the temporary string's resources, now that we're done using it. In practice, this takes the form of the std::string class's destructor calling delete[] on the old (no-longer-large-enough) byte-buffer.
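Spelled out as code, one iteration behaves roughly like the sketch below (conceptual only -- the real library internals differ; reserve/append are used here just to make the three steps visible):

#include <string>
#include <utility>

void one_iteration(std::string& s, char c)
{
    std::string tmp;             // step 1: the temporary for s + c
    tmp.reserve(s.size() + 1);   //         heap buffer at least one char larger
    tmp.append(s);               //         copy every byte of s
    tmp.push_back(c);            //         append c
    s = std::move(tmp);          // step 2: C++11 move-assignment; s takes tmp's buffer
}                                // step 3: tmp's destructor frees whatever buffer it was left holding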
Given that the above is going to be performed at least 2 billion times in a loop, it's already quite inefficient.
However, what I think the answer you referred to was particularly concerned about was heap fragmentation. Keep in mind that heap allocation doesn't work by magic; when you (or the std::string class, or anyone) asks to allocate N bytes of memory from the heap, the heap implementation's job is to find N bytes of contiguous memory and return it. And since there is no provision in C++ for moving blocks of memory around (as doing so would invalidate any pointers that the program might have pointing into those blocks of memory), the heap can't create an N-byte contiguous-memory-chunk out of smaller chunks; instead, there has to be a range of contiguous-memory-space already available. For example, it does the heap no good to have a total of 1GB of memory available, if that 1GB of memory is made up of thousands of nonconsecutive 1KB chunks and the caller is asking for a 2KB allocation.
Therefore, the heap's job is to efficiently allocate chunks of memory of the sizes the program requests, and when they are freed again, it will try to glue them back together into larger chunks again if it can, but it may not always be able to. Certain patterns of allocating and freeing memory may result in heap fragmentation, which is simply a large number of discontinuous memory allocations that render the small regions of free memory between them unusable for large allocations.
Whether or not this particular allocate/free pattern would cause that, I'm not sure; given that only one or two buffers are being allocated at a time, the heap may be able to reabsorb them back into adjacent free-memory chunks as they get freed again -- it probably depends on the particular heap algorithm the system is using, as well as on whether any other threads are allocating/freeing heap memory while this is going on. But I wouldn't be too surprised if there are systems out there where it would cause problems (particularly on 16-bit or 32-bit systems where virtual address space is limited, or embedded systems that don't use virtual memory).
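Incidentally, the usual fix is to append in place rather than build temporaries; a minimal sketch:

#include <cstddef>
#include <string>

// operator+= appends into s's existing buffer (amortized constant time), and
// reserve() removes even the amortized reallocations by allocating once.
std::string build(std::size_t n)
{
    std::string s;
    s.reserve(n);
    for (std::size_t j = 0; j < n; j++)
        s += 'a';   // no temporary string, no copy of the prefix
    return s;
}

(For this particular example, std::string(n, 'a') does the whole job in one line.)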

C++ object memory management

I am trying to understand how memory is allocated as the number of objects increases. The following test program creates 400 million objects, and its memory occupancy is approximately 23 GB.
Why would it occupy such a huge amount of memory when a single object is just 16 bytes? Ideally I would assume it should be 16 bytes multiplied by 400 million, i.e. about 6.4 GB.
struct class1 {
    long long id;
    double value = 0.0;
    class1(long long id) {
        this->id = id;
    }
};

for (int i = 1; i <= 400000000; i++) {
    class1 *t = new class1(i);
}
If the question is how to write the program with the same number of objects and a reduced memory footprint, the answer is to overload the new and delete operators and write your own allocator that is specialised in doling out blocks of 16 bytes from large pools.
The general purpose allocator is inefficient because it is general purpose, and it is written to be fast rather than to save memory when allocating small objects. Most programs don't stress machine capacity with millions of small allocations.
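To make that concrete, here is a minimal sketch of such a specialised allocator (my own illustration rather than a tested implementation; the pool size, the intrusive free list, and the C++17 inline statics are all implementation choices):

#include <cstddef>
#include <vector>

struct class1 {
    long long id;
    double value = 0.0;

    class1(long long id) : id(id) {}

    // Class-level new/delete: objects are carved out of big pools, so no
    // per-object allocator header is paid -- each object costs 16 bytes.
    static void* operator new(std::size_t)
    {
        if (freeList) {                         // reuse a freed slot first
            void* p = freeList;
            freeList = *static_cast<void**>(p);
            return p;
        }
        if (used == kPoolObjects) {             // current pool full: add another
            pools.push_back(new char[kSlot * kPoolObjects]);
            used = 0;
        }
        return pools.back() + kSlot * used++;
    }

    static void operator delete(void* p)
    {
        // Thread the freed slot onto an intrusive free list stored in the
        // slot itself; 16 bytes easily holds one pointer.
        *static_cast<void**>(p) = freeList;
        freeList = p;
    }

private:
    static constexpr std::size_t kSlot = 16;              // sizeof(class1)
    static constexpr std::size_t kPoolObjects = 1 << 20;  // ~1M objects per pool
    static inline void* freeList = nullptr;
    static inline std::vector<char*> pools;
    static inline std::size_t used = kPoolObjects;        // forces the first pool
};

With this in place, each new class1(i) in the question's loop costs exactly 16 bytes plus one header per million-object pool, so 400 million objects come to roughly 6.4 GB instead of 23 GB.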
Every allocation carries a "header" with extra information about the block. It certainly includes the size of the block (so delete knows how much memory to free), a flag saying whether the block is free or allocated, the number of elements in the array (if it's an array), and so forth.
Allocate two blocks, subtract their pointers, and you should get the actual size taken up by each block.
Then, allocate a large array of these objects (say 100,000), and then just one, and subtract those pointers. That should show something closer to the 16 * 100,000 size, as that block would have only one header.
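That experiment might look like the following sketch (comparing pointers from unrelated allocations is formally unspecified behaviour, and the gaps can come out negative or inflated depending on where the allocator places blocks, but it works as a diagnostic on common platforms; the default constructor is an addition made here so the array form compiles):

#include <cstdio>

struct class1 {
    long long id = 0;
    double value = 0.0;
    class1() = default;
    class1(long long id) : id(id) {}
};

int main()
{
    // Two back-to-back single allocations: the address gap approximates the
    // real per-object footprint (payload + header + padding).
    class1* a = new class1(1);
    class1* b = new class1(2);
    std::printf("single-object footprint: ~%td bytes (sizeof is %zu)\n",
                reinterpret_cast<char*>(b) - reinterpret_cast<char*>(a),
                sizeof(class1));

    // An array of 100,000 plus one more object: the array pays one header
    // for all its elements, so the gap is close to 16 * 100,000.
    class1* arr = new class1[100000];
    class1* one = new class1(0);
    std::printf("array block footprint:   ~%td bytes\n",
                reinterpret_cast<char*>(one) - reinterpret_cast<char*>(arr));
}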

C++ allocates more bytes than asked for?

#include <iostream>

int main(int argc, const char* argv[])
{
    for (unsigned int i = 0; i < 10000000; i++)
        char *c = new char;   // ten million one-byte allocations, never freed
    std::cin.get();
}
In the above code, why does my program use 471 MB of memory instead of the 10 MB one would expect?
Allocation of RAM comes from a combined effort of the runtime library and the operating system. In order to track the one byte your example code requests, there is some structure which describes this memory to the runtime. It is sometimes a doubly linked list, but it's defined by the operating system and runtime implementation.
You can analogize it this way: if you have a linked-list container, what you're interested in is simply what you've placed inside each link, but the container must have pointers to the other links in order to maintain the linked list.
If you use a debugger, or some other debugging tool to track memory, these structures can be even larger, making each allocation more costly.
RAM isn't typically allocated out of an array, but it is possible to overload the new operator to change allocation behavior. It would be possible to allocate specifically from an array (a large one, in your example) so that allocations behaved as you seem to have expected, and in some applications this is a specific strategy to control memory and improve performance (though the details are usually more complex than that simple illustration).
The allocation not only contains the allocated memory itself, but at least one word telling delete how much memory it has to release; moreover that is a number that has to be correctly aligned, so there will be a certain padding after the allocated char to ensure that the next block is correctly aligned. On a 64 bit machine, that means at least 16 bytes per allocation (8 bytes to hold the size, 1 byte to hold the character, and 7 bytes padding to ensure correct alignment).
However, most probably that's not the only data stored; to help the memory allocator find free memory, additional data is likely stored; if one assumes that data to consist of three pointers, one gets to a total of 40 bytes per allocation, which matches your data quite well.
Note also that the allocator will also request a bit more memory from the operating system than needed for the actual allocation, so that it won't need to do an expensive OS call for every little allocation. That is, the run time library allocates larger chunks of memory from the operating system, and then cuts those in smaller pieces for your program's allocations. Thus generally there will be some memory allocated from the operating system (and thus showing up in the task manager), but not yet allocated to a certain object in your program.
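To see the difference directly, one can compare the two shapes of the same ten-megabyte request (a sketch; the exact figures will vary by runtime):

#include <cstddef>
#include <iostream>

int main()
{
    const std::size_t n = 10000000;

    // Shape 1: one allocation -- about 10 MB plus a single header.
    char* block = new char[n];

    // Shape 2: ten million allocations -- each char drags along its own
    // header and padding, ~40 bytes apiece on a typical 64-bit allocator,
    // which is how 10 MB of payload becomes hundreds of MB of heap.
    // for (std::size_t i = 0; i < n; i++) { char* c = new char; (void)c; }

    std::cin.get();   // pause here and compare the numbers in Task Manager
    delete[] block;
}

Run it once as shown, then once with the loop uncommented in place of the single allocation, and watch the process's memory at the pause.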

Managing a Contiguous Chunk of Memory without Malloc/New or Free/Delete

How would one go about creating a custom MemoryManager to manage a given, contiguous chunk of memory without the aid of other memory managers (such as Malloc/New) in C++?
Here's some more context:
MemManager::MemManager(void* memory, size_t totalsize)
{
    Memory = memory;
    MemSize = totalsize;
}
I need to be able to allocate and free up blocks of this contiguous memory using a MemManager. The constructor is given the total size of the chunk in bytes.
An Allocate function should take in the amount of memory required in bytes and return a pointer to the start of that block of memory. If no memory is remaining, a NULL pointer is returned.
A Deallocate function should take in the pointer to the block of memory that must be freed and give it back to the MemManager for future use.
Note the following constraints:
-Aside from the chunk of memory given to it, the MemManager cannot use ANY dynamic memory
-As originally specified, the MemManager CANNOT use other memory managers to perform its functions, including new/malloc and delete/free
I have received this question on several job interviews already, but even hours of researching online did not help me and I have failed every time. I have found similar implementations, but they have all either used malloc/new or were general-purpose and requested memory from the OS, which I am not allowed to do.
Note that I am comfortable using malloc/new and free/delete and have little trouble working with them.
I have tried implementations that utilize node objects in a LinkedList fashion that point to the block of memory allocated and state how many bytes were used. However, with those implementations I was always forced to create new nodes on the stack and insert them into the list, but as soon as they went out of scope the entire program broke, since the addresses and memory sizes were lost.
If anyone has some sort of idea of how to implement something like this, I would greatly appreciate it. Thanks in advance!
EDIT: I forgot to directly specify this in my original post, but the objects allocated with this MemManager can be different sizes.
EDIT 2: I ended up using homogeneous memory chunks, which was actually very simple to implement thanks to the information provided by the answers below. The exact rules regarding the implementation itself were not specified, so I separated the chunk into 8-byte blocks. If the user requested more than 8 bytes, I would be unable to give it, but if the user requested fewer than 8 bytes (but > 0) then I would give extra memory. If the amount of memory passed in was not divisible by 8 then there would be wasted memory at the end, which I suppose is much better than using more memory than you're given.
I have tried implementations that utilize node objects in a LinkedList fashion that point to the block of memory allocated and state how many bytes were used. However, with those implementations I was always forced to create new nodes on the stack and insert them into the list, but as soon as they went out of scope the entire program broke, since the addresses and memory sizes were lost.
You're on the right track. You can embed the LinkedList node in the block of memory you're given with reinterpret_cast<>. Since you're allowed to store variables in the memory manager as long as you don't dynamically allocate memory, you can track the head of the list with a member variable. You might need to pay special attention to object size (Are all objects the same size? Is the object size greater than the size of your linked list node?)
Assuming the answers to the previous questions to be true, you can then process the block of memory and split it off into smaller, object sized chunks using a helper linked list that tracks free nodes. Your free node struct will be something like
struct FreeListNode
{
    FreeListNode* Next;
};
When allocating, all you do is remove the head node from the free list and return it. Deallocating is just inserting the freed block of memory into the free list. Splitting the block of memory up is just a loop:
// static_cast only needed if the constructor takes a void pointer; you can't do pointer arithmetic on void*
char* memoryEnd = static_cast<char*>(memory) + totalSize;
for (char* blockStart = static_cast<char*>(memory); blockStart < memoryEnd; blockStart += objectSize)
{
    FreeListNode* freeNode = reinterpret_cast<FreeListNode*>(blockStart);
    freeNode->Next = freeListHead;
    freeListHead = freeNode;
}
As you mentioned, the Allocate function takes in the requested size, so the above will need to be modified to store metadata. You can do this by including the size of the free block in the free-list node data. This removes the need to split up the initial block, but introduces complexity in Allocate() and Deallocate(). You'll also need to worry about memory fragmentation, because if you don't have a free block with enough memory to store the requested amount, there's nothing you can do other than fail the allocation. A couple of Allocate() algorithms might be:
1) Return the first available block large enough to hold the request (first fit), updating the free block as necessary. This is O(n) in terms of searching the free list, but might not need to search many free blocks, and could lead to fragmentation problems down the road.
2) Search the free list for the smallest block that can hold the request (best fit). This is still O(n) in terms of searching the free list, because you have to look at every node to find the least wasteful one, but it can help delay fragmentation problems.
Either way, with variable sizes, you have to store metadata for allocations somewhere as well. If you can't dynamically allocate at all, the best place is before or after the user-requested block; if you add padding initialized to a known value, you can check it during Deallocate() to detect buffer overflows/underflows. You can also add a compaction step as mentioned in another answer if you want to handle that.
One final note: you'll have to be careful when adding metadata to the FreeListNode helper struct, as the smallest free block size allowed is sizeof(FreeListNode). This is because you are storing the metadata in the free memory block itself. The more metadata you find yourself needing to store for your internal purposes, the more wasteful your memory manager will be.
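Putting those pieces together, here is a minimal sketch of the fixed-size variant (the one the asker settled on in EDIT 2); a variable-size version would add a size header per block, as described above:

#include <cstddef>

class MemManager
{
public:
    // Carves the caller's buffer into objectSize-byte blocks and threads
    // them all onto the free list. Uses no dynamic memory of its own.
    MemManager(void* memory, std::size_t totalSize, std::size_t objectSize)
        : freeListHead(nullptr), objectSize(objectSize)
    {
        if (this->objectSize < sizeof(FreeListNode))
            this->objectSize = sizeof(FreeListNode); // a block must hold the link
        char* memoryEnd = static_cast<char*>(memory) + totalSize;
        for (char* blockStart = static_cast<char*>(memory);
             blockStart + this->objectSize <= memoryEnd;
             blockStart += this->objectSize)
        {
            FreeListNode* freeNode = reinterpret_cast<FreeListNode*>(blockStart);
            freeNode->Next = freeListHead;
            freeListHead = freeNode;
        }
    }

    // Pop a block off the free list; NULL if the request is too big for a
    // fixed block or nothing is left.
    void* Allocate(std::size_t bytes)
    {
        if (bytes == 0 || bytes > objectSize || freeListHead == nullptr)
            return nullptr;
        FreeListNode* node = freeListHead;
        freeListHead = node->Next;
        return node;
    }

    // Push the block back onto the free list. No per-block header is needed
    // because every block has the same size.
    void Deallocate(void* p)
    {
        FreeListNode* node = static_cast<FreeListNode*>(p);
        node->Next = freeListHead;
        freeListHead = node;
    }

private:
    struct FreeListNode { FreeListNode* Next; };
    FreeListNode* freeListHead;
    std::size_t objectSize;
};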
When you manage memory, you generally want to use the memory you manage to store any metadata you need. If you look at any of the implementations of malloc (ptmalloc, phkmalloc, tcmalloc, etc...), you'll see that this is how they're generally implemented (neglecting any static data of course). The algorithms and structures are very different, for different reasons, but I'll try to give a little insight into what goes into generic memory management.
Managing homogeneous chunks of memory is different than managing non-homogeneous chunks, and it can be a lot simpler. An example...
MemoryManager::MemoryManager() {
    this->map = std::bitset<count>();
    this->mem = malloc(size * count);
    for (int i = 0; i < count; i++)
        this->map.set(i);
}
Allocating is a matter of finding the next set bit in the std::bitset (which the compiler might optimize), marking the chunk as allocated, and returning it. De-allocation just requires calculating the index and marking the chunk as free again. A free list is another way (what's described here), but it's a little less memory-efficient, and might not use the CPU cache well.
A free list can be the basis for managing non-homogeneous chunks of memory, though. With this, you need to store the size of each chunk, in addition to the next pointer, in the chunk of memory. The size lets you split larger chunks into smaller chunks. This generally leads to fragmentation though, since merging chunks is non-trivial. This is why most data structures keep lists of same-sized chunks, and try to match requests as closely as possible.
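Here is a sketch filling in the allocate/deallocate halves of that bitset scheme (making size and count compile-time parameters is my choice here; a set bit marks a free slot):

#include <bitset>
#include <cstddef>
#include <cstdlib>

template <std::size_t size, std::size_t count>
class MemoryManager
{
public:
    MemoryManager() : mem(static_cast<char*>(std::malloc(size * count)))
    {
        map.set();   // all slots start out free
    }
    ~MemoryManager() { std::free(mem); }

    void* allocate()
    {
        for (std::size_t i = 0; i < count; i++)
            if (map.test(i)) {   // first free slot wins
                map.reset(i);
                return mem + i * size;
            }
        return nullptr;          // pool exhausted
    }

    void deallocate(void* p)
    {
        // The slot index falls straight out of pointer arithmetic.
        std::size_t i = (static_cast<char*>(p) - mem) / size;
        map.set(i);
    }

private:
    std::bitset<count> map;   // one bit of bookkeeping per slot
    char* mem;
};

For example, MemoryManager<16, 1024> tracks 1024 sixteen-byte slots with only 128 bytes of bookkeeping.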

Why is the heap after array allocation so large

I've got a very basic application that boils down to the following code:
char* gBigArray[200][200][200];

unsigned int Initialise() {
    for (int ta = 0; ta < 200; ta++)
        for (int tb = 0; tb < 200; tb++)
            for (int tc = 0; tc < 200; tc++)
                gBigArray[ta][tb][tc] = new char;
    return sizeof(gBigArray);
}
The function returns the expected value of 32000000 bytes, which is approximately 30 MB, yet the Windows Task Manager (and granted, it's not 100% accurate) gives a Memory (Private Working Set) value of around 157 MB. I've also loaded the application into VMMap by SysInternals to see where the memory is going.
I'm unsure what Image means (listed under Type), although regardless of that, its value is around what I'm expecting. What is really throwing things out for me is the Heap value, which is where the apparent enormous size is coming from.
What I don't understand is why this is. According to this answer, if I've understood it correctly, gBigArray would be placed in the data or bss segment - and I'm guessing, as each element is an uninitialised pointer, it would be placed in the bss segment. Why then would the heap value be larger than what is required by such a silly amount?
It doesn't sound silly if you know how memory allocators work. They keep track of the allocated blocks so there's a field storing the size and also a pointer to the next block, perhaps even some padding. Some compilers place guarding space around the allocated area in debug builds so if you write beyond or before the allocated area the program can detect it at runtime when you try to free the allocated space.
You are allocating one char at a time, and there is typically a space overhead per allocation. Allocate the memory in one big chunk (or at least in a few chunks) instead.
Do not forget that char* gBigArray[200][200][200]; allocates space for 200*200*200=8000000 pointers, each word size. That is 32 MB on a 32 bit system.
Add another 8,000,000 chars to that for another 8 MB. Since you are allocating them one by one, the allocator probably can't hand them out at one byte per item, so they'll likely take a word each, resulting in another 32 MB (32-bit system).
The rest is probably overhead, which is also significant because the C++ system must remember how many elements an array allocated with new contains for delete [].
Owww! My embedded systems stuff would roll over and die if faced with that code. Each allocation has quite a bit of extra info associated with it and either is spaced to a fixed size, or is managed via a linked list type object. On my system, that 1 char new would become a 64 byte allocation out of a small object allocator such that management would be in O(1) time. But in other systems, this could easily fragment your memory horribly, make subsequent new and deletes run extremely slowly O(n) where n is number of things it tracks, and in general bring doom upon an app over time as each char would become at least a 32 byte allocation and be placed in all sorts of cubby holes in memory, thus pushing your allocation heap out much further than you might expect.
Do a single large allocation and map your 3D array over it if you need to with a placement new or other pointer trickery.
Allocating 1 char at a time is probably far more expensive. There is a metadata header per allocation, and 1 byte for a character is smaller than the header itself, so you can actually save space by doing one large allocation (if possible); that way you avoid the overhead of each individual allocation carrying its own metadata.
Perhaps this is an issue of memory stride? What size gaps are there between values?
30 MB is for the pointers. The rest is for the storage you allocated with the new call that the pointers are pointing to. Compilers are allowed to allocate more than one byte for various reasons, like to align on word boundaries, or give some growing room in case you want it later. If you want 8 MB worth of characters, leave the * off your declaration for gBigArray.
Edited out of the above post into a community wiki post:
As the answers below say, the issue here is I am creating a new char 200^3 times, and although each char is only 1 byte, there is overhead for every object on the heap. It seems creating a char array for all chars knocks the memory down to a more believable level:
char* gBigArray[200][200][200];
char* gCharBlock = new char[200 * 200 * 200];

unsigned int Initialise() {
    unsigned int mIndex = 0;
    for (int ta = 0; ta < 200; ta++)
        for (int tb = 0; tb < 200; tb++)
            for (int tc = 0; tc < 200; tc++)
                gBigArray[ta][tb][tc] = &gCharBlock[mIndex++];
    return sizeof(gBigArray);
}