There are a number of garbage collection libraries for C++.
I am kind of confused about how the pointer tracking works.
In particular, suppose we have a base pointer P and a list of other pointers that are computed as offsets from P using an array.
For example,
P2 = P+offset[0]
How does the garbage collector know P2 is still in scope? It has no direct reference but it's still accessible.
Probably the most popular C++ gc is
https://en.m.wikipedia.org/wiki/Boehm_garbage_collector
But following their example syntax it seems very easy to break so I must not be understanding something.
This question cannot be answered in general. There are different systems that may be regarded as garbage collection for C++; for example, Herb Sutter's deferred_ptr is basically a garbage collecting smart pointer. I've personally implemented another version of this idea, similar to Sutter's but less fancy.
I can answer about Boehm, however. The way the Boehm garbage collector recognizes pointers during its "mark" phase is basically by scanning memory and assuming that anything that looks like a pointer is a pointer.
The garbage collector knows all the areas of memory where user data lives, and it knows all of the pointers it has allocated and how big those allocations were. It simply looks for chains of pointers starting from the "root segments" defined below, where by "look" we mean explicitly scanning memory for 64-bit values that match the address of one of the allocations the GC has made.
From here:
Since it cannot generally tell where pointer variables are located, it scans the following root segments for pointers:
The registers. Depending on the architecture, this may be done using assembly code, or by calling a setjmp-like function which saves register contents on the stack.
The stack(s). In the case of a single-threaded application, on most platforms this is done by scanning the memory between (an approximation of) the current stack pointer and GC_stackbottom. (For Itanium, the register stack is scanned separately.) The GC_stackbottom variable is set in a highly platform-specific way depending on the appropriate configuration information in gcconfig.h. Note that the currently active stack needs to be scanned carefully, since callee-save registers of client code may appear inside collector stack frames, which may change during the mark process. This is addressed by scanning some sections of the stack "eagerly", effectively capturing a snapshot at one point in time.
Static data region(s). In the simplest case, this is the region between DATASTART and DATAEND, as defined in gcconfig.h. However, in most cases, this will also involve static data regions associated with dynamic libraries. These are identified by the mostly platform-specific code in dyn_load.c.
The address space for 64-bit pointers is huge, so false positives will be rare. Even when they occur, a false positive just causes a leak: the allocation stays alive for as long as some value in the memory the mark phase scans happens to be exactly the same as the address of a block the garbage collector allocated.
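To make that concrete, here is a rough sketch of what such a conservative scan amounts to (hypothetical names, not Boehm's actual code). This version only matches exact base addresses; Boehm, as I understand it, can also recognize interior pointers (a pointer into the middle of a block), which is what keeps P's block alive when only something like P2 = P + offset[0] from the question is still reachable.

#include <cstdint>
#include <set>

// Hypothetical registry of the base addresses the collector has handed out.
std::set<std::uintptr_t> g_allocations;

// Conservatively scan a root region word by word; any word whose value
// equals a known allocation address is treated as a live pointer.
void scan_region(const std::uintptr_t* begin, const std::uintptr_t* end,
                 std::set<std::uintptr_t>& reachable)
{
    for (const std::uintptr_t* p = begin; p != end; ++p) {
        if (g_allocations.count(*p))
            reachable.insert(*p);   // a real collector would now also scan
                                    // the object this address refers to
    }
}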
I'm studying for my data organization final and I'm going over stacks and heaps because I know they will be on the final and I'm going to need to know the differences.
I know what the Stack is and what the Heap is.
But I'm confused about what a stack is and what a heap is.
The Stack is a place in RAM where memory is stored; if it runs out of space, a stack overflow occurs. Objects are stored here by default, memory is deallocated when objects go out of scope, and it is faster.
The Heap is a place in RAM where memory is stored; if it runs out of space, the OS will assign it more. For an object to be stored on the Heap it needs to be created with the new operator, and it will only be deallocated if told. Fragmentation problems can occur, it is slower than the Stack, and it handles large amounts of memory better.
But what is a stack, and what is a heap? Is it the way memory is stored? For example, is a static array or static vector a stack type, and a dynamic array or linked list a heap type?
Thank you all!
"The stack" and "the heap" are memory lumps used in a specific way by a program or operating system. For example, the call stack can hold data pertaining to function calls and the heap is a region of memory specifically used for dynamically allocating space.
Contrast these with stack and heap data structures.
A stack can be thought of as an array where the last element in will be the first element out. Operations on this are called push and pop.
A heap is a data structure that represents a special type of tree in which each node's value is greater than or equal to (max-heap) or less than or equal to (min-heap) the values of its children.
On a side note, keep in mind that "the stack" and "the heap", as well as the stack and heap data structures, are not unique to any particular programming language; they are concepts in the field of computer science.
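To make the data-structure meanings concrete, here is a minimal C++ sketch using the standard containers (this has nothing to do with "the stack" or "the heap" as memory regions):

#include <iostream>
#include <queue>
#include <stack>

int main()
{
    std::stack<int> s;              // stack: last in, first out
    s.push(1);
    s.push(2);
    s.push(3);
    std::cout << s.top() << '\n';   // prints 3, the last element pushed
    s.pop();

    std::priority_queue<int> h;     // backed by a max-heap: largest element on top
    h.push(5);
    h.push(42);
    h.push(7);
    std::cout << h.top() << '\n';   // prints 42
}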
I won't get into virtual memory (read about that if you want) so let's simplify and say you have RAM of some size.
You have your code, with some static initialized data and some static uninitialized data (static in C++ meaning something like global variables).
When you compile something, the compiler (and linker) will organize and translate your code into machine code (ones and zeroes) in the following way:
The binary file (and the object files) is organized into segments (portions of RAM).
First you have the DATA segment. This is the segment that contains the values of initialized variables. So if you have variables, e.g. int a = 3, b = 4, they will go into the DATA segment (4 bytes of RAM containing 00000003h, and another 4 bytes containing 00000004h, in hexadecimal notation). They are stored consecutively.
Then you have the CODE segment. All your code is translated into machine code (1s and 0s) and stored in this segment consecutively.
Then you have the BSS segment. Uninitialized global variables go there (all static variables that weren't initialized).
Then you have the STACK segment. This is reserved for the stack. The stack size is determined by the operating system by default; you can change this value, but I won't get into that now. All local variables go here. When you call some function, first the function arguments are pushed onto the stack, then the return address (where to come back to when you exit the function), then some CPU registers are pushed, and finally all local variables declared in the function get their reserved space on the stack.
And you have the HEAP segment. This is the part of RAM (its size is also determined by the OS) where objects and data are stored when you use operator new.
All of the segments are piled one after the other: DATA, CODE, BSS, STACK, HEAP. (There are some other segments, but they are not of interest here.) All of that is loaded into RAM by the operating system. The binary file also has some headers containing information about the location (address in memory) at which your code begins.
So in short, they are all parts of RAM, since everything that is being executed is loaded into RAM (it can't be executed from ROM, which is read-only, nor from the HDD, since the HDD is just for storing files).
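A rough illustration of where things end up (a sketch only; segment names and layout vary by platform and toolchain):

#include <iostream>

int g_initialized = 3;   // DATA segment: initialized global
int g_uninitialized;     // BSS segment: zero-initialized global

int main()               // the compiled machine code lives in the code segment
{
    int local = 4;               // stack: gone when main returns
    int* heap_obj = new int(5);  // heap: lives until you delete it

    std::cout << g_initialized << ' ' << g_uninitialized << ' '
              << local << ' ' << *heap_obj << '\n';

    delete heap_obj;             // heap memory must be released explicitly
}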
When specifically referring to C++'s memory model, the heap and stack refer to areas of memory. It is easy to confuse this with the stack data structure and heap data structure. They are, however, separate concepts.
When discussing programming languages, stack memory is called 'the stack' because it behaves like a stack data structure. The heap is a bit of a misnomer, as it does not necessarily (or likely) use a heap data structure. See Why are two different concepts both called "heap"? for a discussion of why C++'s heap and the data structure's names are the same, despite being two different concepts.
So to answer your question, it depends on the context. In the context of programming languages and memory management, the heap and stack refer to areas of memory with specific properties. Otherwise, they refer to specific data structures.
The technical definition of "a stack" is a Last In, First Out (LIFO) data structure where data is pushed onto and pulled off of the top. Just as with a stack of plates in the real world, where you wouldn't pull one out from the middle or the bottom, you [usually] wouldn't pull data out of the middle or the bottom of a stack data structure. When someone talks about the stack in terms of programming, it can often (but not always) mean the hardware stack, which is controlled by the stack pointer register in the CPU.
As far as "a heap" goes, that generally becomes much more nebulous in terms of a definition everyone can agree on. The best definition is likely "a large amount of free memory from which space is allocated for dynamic memory management." In other words, when you need new memory, be it for an array, or an object created with the new operator, it comes from a heap that the OS has reserved for your program. This is "the heap" from the POV of your program, but just "a heap" from the POV of the OS.
The important thing for you to know about stacks is the relationship between the stack and function/method calls. Every function call reserves space on the stack, called a stack frame. This space contains your auto variables (the ones declared inside the function body). When you exit from the function, the stack frame and all the auto variables it contains disappear.
This mechanism is very cheap in terms of CPU resources used, but the lifetime of these stack-allocated variables is obviously limited by the scope of the function.
Memory allocations (objects) on the heap, on the other hand, can live "forever", or as long as you need them, without regard to the flow of control of your program. The downside is that since you don't get automatic lifetime management of these heap-allocated objects, you have to either 1) manage the lifetime yourself, or 2) use special mechanisms like smart pointers to manage the lifetime of these objects. If you get it wrong, your program has memory leaks or accesses data that may change unexpectedly.
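A small sketch of that lifetime difference, using std::unique_ptr as one of the "special mechanisms" mentioned above (Widget is just a made-up type):

#include <iostream>
#include <memory>

struct Widget { int value; };

std::unique_ptr<Widget> make_widget()
{
    Widget local{1};                  // auto variable: destroyed when this
    std::cout << local.value << '\n'; // function returns
    return std::make_unique<Widget>(Widget{2});  // heap object outlives the call
}

int main()
{
    auto w = make_widget();           // w keeps the heap Widget alive...
    std::cout << w->value << '\n';
}                                     // ...until w goes out of scope here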
Re: Your question about A stack vs THE stack: When you are using multiple threads, each thread has a separate stack so that each thread can flow into and out of functions/methods independently. Most single threaded programs have only one stack: "the stack" in common terminology.
Likewise for heaps. If you have a special need, it is possible to allocate multiple heaps and choose at allocation time which heap should be used. This is much less common (and a much more complicated topic than I have mentioned here.)
I am a beginner programmer with some experience in C and C++ programming. I was assigned by the university to make a physics simulator, so as you might imagine there's a big emphasis on performance.
My questions are the following:
1. How many assembly instructions does an instance data member access through a pointer translate to (e.g. vector->x)?
2. Is it much more than, say, another approach where you simply access the memory through a char* (at the same memory location as variable x), or is it the same?
3. Is there a big impact on performance, compiler-wise, if I use an object to access that memory location or if I just access it directly?
4. Another question regarding the subject would be whether or not accessing heap memory is faster than accessing stack memory.
C++ is a compiled language. Accessing a memory location through a pointer is the same regardless of whether that's a pointer to an object or a pointer to a char* - it's one instruction in either case. There are a couple of spots where C++ adds overhead, but it always buys you some flexibility. For example, invoking a virtual function requires an extra level of indirection. However, you would need the same indirection anyway if you were to emulate the virtual function with function pointers, or you would spend a comparable number of CPU cycles if you were to emulate it with a switch or a sequence of ifs.
In general, you should not start optimizing before you know what part of your code to optimize. Usually only a small part of your code is responsible for the bulk of the CPU time used by your program. You do not know which part to optimize until you profile your code. Almost universally it's the programmer's code, not the language features of C++, that is responsible for the slowdown. The only way to know for sure is to profile.
1. On x86, a pointer access is typically one extra instruction, above and beyond what you normally need to perform the operation (e.g. y = object->x; would be one load of the address in object, one load of the value of x, and one store to y - in x86 assembler both loads and stores are mov instructions with a memory operand). Sometimes it's "zero" instructions, because the compiler can optimise away the load of the object pointer. On other architectures, it's really down to how the architecture works - some have very limited ways of accessing memory and/or loading addresses into pointers, making it awkward to access data through pointers.
2. Exactly the same number of instructions - this applies to all architectures.
3. As #2 - objects in themselves have no impact at all.
4. Heap memory and stack memory are the same kind of memory. One answer says that "stack memory is always in the cache", which is true if it's near the top of the stack, where all the activity goes on. But if you have an object that was created in main and is passed around by pointer through several layers of function calls before being accessed through that pointer, there is a good chance that this memory hasn't been used for a long while, so there is no real difference there either. The big difference is that heap memory is plentiful while the stack is limited, and that running out of heap allows limited recovery, whereas running out of stack is the immediate end of execution [without tricks that aren't very portable].
If you look at class as a synonym for struct in C (which, aside from some details, it really is), then you will realize that classes and objects in themselves do not add any extra "effort" to the generated code.
Of course, used correctly, C++ can make it much easier to write code where you deal with things that are "do this in a very similar way, but subtly differently". In C, you often end up with:
void drawStuff(Shape *shapes, int count)
{
    for (int i = 0; i < count; i++)
    {
        switch (shapes[i].shapeType)
        {
        case Circle:
            /* ... code to draw a circle ... */
            break;
        case Rectangle:
            /* ... code to draw a rectangle ... */
            break;
        case Square:
            /* ... */
            break;
        case Triangle:
            /* ... */
            break;
        }
    }
}
In C++, we can make that decision at object creation time, and your "drawStuff" becomes:
void drawStuff(std::vector<Shape*> shapes)
{
for(auto s : shapes)
{
s->Draw();
}
}
"Look Ma, no switch..." ;)
(Of course, you do need a switch or something to select which object to create, but once that choice is made, assuming your objects and the surrounding architecture are well defined, everything should work "magically" like the above example).
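That creation-time "switch or something" might look like the sketch below (Circle and Rectangle are hypothetical subclasses, not part of the example above):

#include <memory>

struct Shape { virtual void Draw() = 0; virtual ~Shape() = default; };
struct Circle    : Shape { void Draw() override { /* ... draw a circle ... */ } };
struct Rectangle : Shape { void Draw() override { /* ... draw a rectangle ... */ } };

enum class ShapeType { Circle, Rectangle };

// The one remaining selection point: turning input data into the right object.
std::unique_ptr<Shape> makeShape(ShapeType t)
{
    switch (t) {
        case ShapeType::Circle:    return std::make_unique<Circle>();
        case ShapeType::Rectangle: return std::make_unique<Rectangle>();
    }
    return nullptr;
}

After that, drawStuff never needs to know which concrete type it is holding.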
Finally, if performance is IMPORTANT, then run benchmarks, run profiling and check where the code is spending its time. Don't optimise too early (but if you have strict performance criteria for something, keep an eye on it, because deciding in the last week of a project that you need to reorganise your data and code dramatically because performance sucks due to some bad decision is also not the best of ideas!). And don't optimise for individual instructions; look at where the time is spent, and come up with better algorithms WHERE you need to. (In the above example, using const std::vector<Shape*>& shapes will effectively pass a pointer to the shapes vector passed in, instead of copying the entire thing - which may make a difference if there are a few thousand elements in shapes).
It depends on your target architecture. A struct in C (and a class in C++) is just a block of memory containing the members in sequence. An access to such a field through a pointer means adding an offset to the pointer and loading from there. Many architectures allow a load to specify an offset to the target address directly, meaning that there is no performance penalty there; and even on extreme RISC machines that don't have that, adding the offset should be so cheap that the load completely shadows it.
Stack and heap memory are really the same thing. Just different areas. Their basic access speed is therefore the same. The main difference is that the stack will most likely already be in the cache no matter what, whereas heap memory might not be if it hasn't been accessed lately.
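As a sketch of what "adding an offset to the pointer" means, the two accesses below touch exactly the same memory; offsetof just makes explicit the arithmetic the compiler performs for v->x:

#include <cstddef>   // offsetof
#include <cstdio>

struct Vec { double x, y, z; };

int main()
{
    Vec v{1.0, 2.0, 3.0};
    Vec* p = &v;

    double a = p->x;   // load from (address held in p) + offsetof(Vec, x)

    // The same access spelled out by hand through a char*:
    double b = *reinterpret_cast<double*>(
                   reinterpret_cast<char*>(p) + offsetof(Vec, x));

    std::printf("%f %f\n", a, b);   // both read the same location
}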
1. It varies. On most processors, instructions are translated to something called microcode, similar to how Java bytecode is translated to processor-specific instructions before you run it. How many actual instructions you get differs between processor manufacturers and models.
2. Same as above; it depends on processor internals most of us know little about.
1+2. What you should really be asking is how many clock cycles these operations take. On modern platforms the answer is one. It does not matter how many instructions they are; a modern processor has optimizations to make both run in one clock cycle. I will not get into detail here. In other words, when talking about CPU load there is no difference at all.
Here is the tricky part. While there is no difference in how many clock cycles the instruction itself takes, it needs to have data from memory before it can run - and that can take a HUGE number of clock cycles. Someone showed a few years ago that even with a very optimized program, an x86 processor spends at least 50% of its time waiting for memory access.
When you use stack memory you are actually doing the same thing as creating an array of structs. For the data, instructions are not duplicated unless you have virtual functions. This keeps the data contiguous, and if you are going to do sequential access, you will have optimal cache hits. When you use heap memory you will create an array of pointers, and each object will have its own memory. That memory will NOT be contiguous, and therefore sequential access will cause a lot of cache misses. And cache misses are what really make your application slower and should be avoided at all cost.
I do not know exactly what you are doing, but in many cases even using objects is much slower than plain arrays. An array of objects is laid out contiguously, [object1][object2] etc. If you do something like the pseudocode "for each object o { o.setX(o.getX() + 1) }", you only access one variable, and your sequential access therefore jumps over the other variables in each object and gets more cache misses than if your X variables were packed together in their own array. If you have code that uses all the variables in your objects, plain arrays will not be slower than an array of objects; they will just load the different arrays into different cache blocks.
While plain arrays are faster in C++, they are MUCH faster in other languages like Java, where you should NEVER store bulk data in objects, since Java objects use more memory and are always stored on the heap. This is the most common mistake that C++ programmers make in Java, and then they complain that Java is slow. However, if they know how to write optimal C++ programs, they store data in arrays, which are about as fast in Java as in C++.
What I usually do is write a class to store the data, and that class contains the arrays. Even if you use the heap, it's just one object, which becomes as fast as using the stack. Then I have something like "class myitem { private: int pos; mydata data; public: getVar1() { return data.getVar1(pos); } }". I am not writing out all of the code here, just illustrating how I do this. Then, when I iterate through it, the iterator class does not actually return a new myitem instance for each item; it increases the pos value and returns the same object. This means you get a nice OO API while you actually only have a few objects and nicely packed arrays. This pattern is the fastest pattern in C++, and if you don't use it in Java you will know pain.
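A rough sketch of that pattern (the names below are made up, not the author's actual code): the bulk data lives in plain parallel arrays, and one lightweight accessor object is repositioned instead of allocating an object per item.

#include <cstddef>
#include <vector>

// Bulk data stored as plain, contiguous arrays ("structure of arrays").
struct ParticleData {
    std::vector<float> x;
    std::vector<float> v;
};

// A thin, reusable view that gives an object-like API with no per-item allocation.
class ParticleRef {
public:
    ParticleRef(ParticleData& d, std::size_t i) : data(&d), pos(i) {}
    float getX() const          { return data->x[pos]; }
    void  setX(float value)     { data->x[pos] = value; }
    void  moveTo(std::size_t i) { pos = i; }   // reposition instead of reallocating
private:
    ParticleData* data;
    std::size_t pos;
};

void bumpAll(ParticleData& d)
{
    ParticleRef p(d, 0);
    for (std::size_t i = 0; i < d.x.size(); ++i) {
        p.moveTo(i);
        p.setX(p.getX() + 1.0f);   // sequential walk over one tight array
    }
}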
The fact that we get multiple function calls does not really matter. Modern processors have something called branch prediction, which removes the cost of the vast majority of those calls: long before the code actually runs, the branch predictor will have figured out where the chains of calls go, so their overhead largely disappears.
Also, even if all the calls did run, each would take far fewer clock cycles than the memory accesses they require, which, as I pointed out, makes memory layout the only issue that should bother you.
As far as I know, near pointers (as opposed to far pointers) in C/C++ have an address value which is much smaller than the actual address in RAM. So, if I keep on incrementing a pointer (say an int pointer or any object-type pointer), then at a particular value it will roll over. Now, my question is: after rolling over, is the value pointed to by it valid or not (assuming I have a large amount of data in memory)?
I know this is a strange question to ask, but I have a situation where I am continuously allocating and deallocating memory. I am finding that at a particular point the binary crashes due to an invalid address value like
0x20, 0x45 or 0x10101 etc.
I was wondering whether the issue is due to rollover of the pointer value - that is, since the address rolls over, it ends up as an invalid address and the program crashes when it is accessed.
I hope the situation I am referring to is similar to the question being asked. Even if they are different, I would like to know the answers to both. I tried searching on "continuous incrementing pointers" but didn't find my answer.
EDIT: This is a new code compiled with G++ 4.1.2 20080704 (Red Hat 4.1.2-48) on Red Hat linux.
Actually the code is very large to share. But I can brief it in words:
There are 3 threads:
First thread: it allocates an Alert class object and pushes it into the queue.
Second thread: it reads the Alert from the queue and processes it.
Third thread: it releases the memory allocated to the Alert objects after 20-30 minutes of processing.
I have already verified that the 3rd thread is not deallocating it before it is processed by the 2nd thread.
But since the Alerts are generated at a steady rate (around thousands per second), I was suspecting the issue mentioned in the main question.
Points to note in my implementation:
I am using a Linux pipe queue to push it from one thread to the other. For that, I am pushing only the address value of the object from the sender side, and I made sure not to delete the object there. Is this a possible source of corruption? Following is the code for this particular task:
Alert* l_alert = new Alert(ADD_ACTION,
l_vehicleType,
l_vehicleNo,
l_projPolyline,
l_speed,
l_slotId);
m_ResultHandler->SendToWorker(&l_alert);
Implementation of queue functions:
S32 SendToWorker(queueDataType *p_instPtr)
{
S32 ret_val=SUCCESS;
QueueObj.Lock();
ret_val = QueueObj.Signal();
QueueObj.push(*p_instPtr);
QueueObj.UnLock();
return ret_val;
}
S32 GetFromReceiver(queueDataType *p_instPtr)
{
QueueObj.Lock();
while(QueueObj.size() == 0)
QueueObj.Wait();
*p_instPtr = QueueObj.front();
QueueObj.pop();
QueueObj.UnLock();
return SUCCESS;
}
Receiver End:
m_alertQueue->GetFromReceiver(&l_alert)
What is the OS? Are you using virtual memory? The C standard says that a pointer is allowed to point to one address past the end of an array (but not be dereferenced).
Pointing anywhere else is undefined behaviour.
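A minimal illustration of that rule:

#include <cstdio>

int main()
{
    int a[4] = {1, 2, 3, 4};
    int* end = a + 4;                  // one past the end: a valid pointer value

    for (int* p = a; p != end; ++p)    // comparing against it is fine
        std::printf("%d ", *p);

    // *end;           // dereferencing it would be undefined behaviour
    // int* q = a + 5; // even forming a pointer beyond one-past-the-end is undefined
}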
The concept "near" and "far" pointers is a concept that mainly exists in compilers for x86 in 16-bit mode, where a "near" pointer is a 16-bit offset to a default segment of some sort, and a "far" pointer has a segment and offset value in the pointer itself. In 32- and 64-bit OS's, pointers are (generally) just an offset within a flat memory model (all segments are based at address zero).
A pointer can, according to the C standard, point "to a single object or an array of elements, and one past that". Anything else is undefined behaviour by the standard. One reason for this rule is to support segmented memory, where pointers may not be easy to compare between different segments (in particular not if the segments don't have a directly accessible base address, such as in OS/2 1.x, which used 16-bit protected mode, so code doesn't have easy access to the base address of a segment; since segments CAN overlap, it's not possible to tell whether base address A + offset A is the same as or different from base address B + offset B).
What actually happens if you have a pointer that doesn't fulfil these criteria is, as stated, "undefined". In an x86 environment without memory protection, the real answer is that it won't crash, and nothing bad will happen if you read the memory; but of course, if you try to write to memory that isn't "yours", then something bad could happen. Exactly what depends on which memory you are overwriting and what that memory is used for. It's impossible to say exactly what happens without knowing exactly what the memory is used for and what value is written to it.
In a modern 32- or 64-bit OS, accessing memory that is "invalid" will definitely cause the program to crash, because modern OS's have memory protection that prevent "wild memory accesses".
I'm building a memory manager for C++ using a very .NET-style approach. In doing so I need to know which objects are considered reachable; an object is considered reachable if a reachable object has a handle to the object in question. So this poses the question: which object(s) are the root of our search? The answer would be that these "eve" objects are on the stack, be it in the form of a handle to a managed object or an instance of a scope-local object that itself has a handle to a managed object.
I've read through some articles on this and also checked out implementation details on the MSDN about the StackWalk method in the Win32 API.
As always any help is greatly appreciated. And please don't advise against making a memory manager, or suggest alternatives such as smart pointers. I fully understand what I am doing. Thanks!
Your requirements sort of seem similar to a small project I’m working on at the moment, but my goal isn’t to make a memory manager, my goal is to instrument dmalloc (and the debug-mode long-running application within which it is running) with the ability to periodically halt execution and scan memory looking for heap allocations for which there are no references. Sort of like a “dumb” garbage collector, but not with the goal of freeing memory; instead, with the goal of logging leaked allocations for later analysis (along with stacktraces captured at allocation-time, which I’ve already added to dmalloc). Note that as a general-purpose memory manager’s garbage collector, this will be a pretty inefficient process and will take a “long” time to run (I’m not done yet, but I won’t be surprised if each time it runs it halts normal program execution for over 10 seconds), but for my own purposes I don’t care too much about performance because I’ll enable it only once every few months to test for new memory leaks in my company’s product.
In any case, I assume your memory manager will be the only source of heap memory in your application? And that threads in your system operate in a fully shared-memory environment, where no thread has any memory, including stack space and thread-local storage space, that cannot be seen from other threads? If so...
I believe there are just four categories of memory within which you may find pointers to heap allocations:
On the callstacks of each thread
Within heap allocations themselves
In statically allocated writable memory (.bss & .data/.sdata, but not .rdata/.rodata)
In thread-local storage space for each thread
You are already aware that pointers to heap allocations may occur on the stack. Pointers to allocations may also (may instead) be stored in heap objects themselves, and not even stored on the stack. Your question suggests you may be hoping to use the stack as a “root” of your garbage collector’s search; I’m taking this to mean you hope to be able to follow pointers on the stack outwards to other allocations, searching from one object to another through memory until you’ve traversed all objects in memory and found all pointers to all allocations. "Root" pointers may also exist in statically allocated objects, which can be referenced directly without there even being a pointer to such an object on the stack, so you can't just assume all allocations are reachable from "pointers" you find in the stack. Also, unfortunately with C++, unless you’re able to know the structure of each allocation (which you won’t without help from the compiler), you’ll have to assume that any location is possibly a pointer. So you’ll have to scan through each of these four categories of memory looking for potential pointers to all existing allocations, flagging each with a “possibly still in use” flag if you find a value in memory that matches the address of an allocation, whether or not it’s actually a pointer. As you scan through memory, at each byte location (or at each byte location evenly divisible by sizeof(void*), if you know your platform can’t have pointers at misaligned addresses), you’ll have to search your list of allocations to see if that value is in your list of allocations.
Since you're confident that you know what you’re doing, your memory manager is probably tracking these allocations in a balanced tree structure (perhaps a red-black tree or Andersson tree) which gives you O(log n) insertion & lookup on those allocations, but the constant of proportionality for navigating those trees is going to really kill your garbage collector’s performance. Before doing your garbage collection scan, you’ll want to copy the tree’s allocation pointers into a flat contiguous buffer (i.e. an “array”) in order (i.e. ascending or descending using inorder traversal). I suggest an array of void* of each allocation’s address and a separate bit-array (not bool array) with one bit per allocation, initialized to all-zeros, where an allocation’s corresponding bit is set to 1 if you find a potential reference to it. This will still give you O(log n) lookup (using binary search) while you’re scanning for garbage collection, but with a much more manageable constant of proportionality for your lookups; in addition, this more compact data structure will tend to have better cache hit performance than a balanced tree.
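A sketch of that flattened lookup structure (hypothetical names, assuming allocations are identified by their base address):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct ScanIndex {
    std::vector<std::uintptr_t> addrs;   // allocation base addresses, sorted ascending
    std::vector<bool> possibly_in_use;   // packed bit per allocation, initially all false

    // Called for every word found while scanning roots: O(log n) binary search,
    // flagging the allocation if the word matches a recorded address.
    void note_potential_pointer(std::uintptr_t value) {
        auto it = std::lower_bound(addrs.begin(), addrs.end(), value);
        if (it != addrs.end() && *it == value)
            possibly_in_use[it - addrs.begin()] = true;
    }
};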
Now I'll discuss each of the four categories of memory you'd have to scan:
The callstacks of each thread
For this, you'll have to be able to query your thread manager for the top & bottom of each thread's stack. If you can only get the current stack pointer for each thread, then you may be able to use a "backtrace" API to get a list of function return addresses on that stack. From that, you can scan back toward each stack's base (which you don't know), ticking off each return address in order until you get to the last return address, where you've then found the stack base (or close enough). And for the "current thread", be sure not to include any stack frames associated with your memory manager; i.e., back up a few stack frames & ignore the ones associated with your garbage collector, or else you might find addresses of leaked allocations in your garbage collector's local variables and mistake them for live references.
Within heap allocations themselves
Heap objects can reference each other, and you could have a network of leaked objects that all reference each other yet as a group, they are leaked. You don't want to see their pointers to each other & treat them as "in-use", so you have to handle these carefully... and last. Once all other categories are finished, you can collapse/split your flat array of void* allocation addresses, making a separate list of "considered in-use" allocations and "not yet verified" allocations. Scan through the "considered in-use" allocations looking for potential pointers to allocations still in the "not yet verified" list. As you find any, move them from the "not yet verified" list to the end of the "considered in-use" list so that you'll eventually scan those as well.
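In code form, that last pass is a simple worklist over the allocation table. This is a sketch under the assumption that the manager tracks each allocation's base address and size; none of these names come from an actual implementation.

#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Hypothetical bookkeeping: base address -> size in bytes of each allocation.
using AllocMap = std::unordered_map<std::uintptr_t, std::size_t>;

// Word-by-word conservative scan of one allocation, collecting values that
// match addresses still in `candidates` (same idea as the root scan).
static std::vector<std::uintptr_t>
scan_object(std::uintptr_t base, std::size_t size,
            const std::unordered_set<std::uintptr_t>& candidates)
{
    std::vector<std::uintptr_t> hits;
    const std::uintptr_t* words = reinterpret_cast<const std::uintptr_t*>(base);
    for (std::size_t i = 0; (i + 1) * sizeof(std::uintptr_t) <= size; ++i)
        if (candidates.count(words[i]))
            hits.push_back(words[i]);
    return hits;
}

// Anything reachable from a "considered in-use" allocation is itself promoted
// to "considered in-use"; whatever remains unverified afterwards is leaked.
void close_over_heap(const AllocMap& allocs,
                     std::vector<std::uintptr_t>& considered_in_use,
                     std::unordered_set<std::uintptr_t>& not_yet_verified)
{
    for (std::size_t i = 0; i < considered_in_use.size(); ++i) {
        std::uintptr_t obj = considered_in_use[i];
        for (std::uintptr_t hit : scan_object(obj, allocs.at(obj), not_yet_verified)) {
            if (not_yet_verified.erase(hit) != 0)    // first time this block is seen
                considered_in_use.push_back(hit);    // it will be scanned later in the loop
        }
    }
}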
In statically allocated writable memory (.bss & .data/.sdata, but not .rdata/.rodata)
For this, you’ll need to get symbols from your linker to the start & end (or length) of each of these sections. If such symbols don’t already exist or you can’t get that information from a platform API, you’ll need to get your linker command script (linker script) and modify it to add & initialize global symbols to the start address & end address (or length) of each of these sections. The .bss section contains uninitialized global, file scope, and class static data members. The .data/.sdata section(s) contain non-const pre-initialized global, file scope, and class static data members. You don’t need to worry about the .rdata/.rodata section(s) because your program won’t be writing heap-allocation addresses into static const data.
In thread-local storage space for each thread
For this, you’ll have to be able to query your thread manager for the thread-local storage space for each thread, or else part of the startup of each thread must be to add its thread-local storage to a list of thread-local space for the application, and remove it when the thread exits.
If you’re still on board and want to do this, by now you’ve probably realized it’s a bigger project than you may have initially thought. Let me know how it goes!
class foo { }
writeln(foo.classinfo.init.length); // = 8 bytes
class foo { char d; }
writeln(foo.classinfo.init.length); // = 9 bytes
Is D actually storing anything in those 8 bytes, and if so, what? It seems like a huge waste. If I'm just wrapping a few value types then the class significantly bloats the program, especially if I am using a lot of them. A char becomes 8 times larger, while an int becomes 3 times as large.
A struct's minimum size is 1 byte.
In D, objects have a header containing 2 pointers (so it may be 8 or 16 bytes depending on your architecture).
The first pointer is to the virtual method table. This is an array, generated by the compiler, filled with function pointers so that virtual dispatch is possible. All instances of the same class share the same virtual method table.
The second pointer is the monitor. It is used for synchronization. It is not certain that this field will stay there forever, because D emphasizes thread-local storage and immutability, which make synchronization on many objects unnecessary. As this field is older than those features, it is still there and can be used. However, it may disappear in the future.
Such a header on objects is very common; you'll find the same in Java or C#, for instance. You can look here for more information: http://dlang.org/abi.html
D uses two machine words in each class instance for:
A pointer to the virtual function table. This contains the addresses of virtual methods. The first entry points towards the class's classinfo, which is also used by dynamic casts.
The monitor, which allows the synchronized(obj) syntax, documented here.
These fields are described in the D documentation here (scroll down to "Class Properties") and here (scroll down to "Classes").
I don't know the particulars of D, but in both Java and .net, every class object contains information about its type, and also holds information about whether it's the target of any monitor locks, whether it's eligible for finalization cleanup, and various other things. Having a standard means by which all objects store such information can make many things more convenient for both users and implementers of the language and/or framework. Incidentally, in 32-bit versions of .net, the overhead for each object is 8 bytes except that there is a 12-byte minimum object size. This minimum stems from the fact that when the garbage-collector moves objects around, it needs to temporarily store in the old location a reference to the new one as well as some sort of linked data structure that will permit it to examine arbitrarily-deep nested references without needing an arbitrarily-large stack.
Edit
If you want to use a class because you need to be able to persist references to data items, space is at a premium, and your usage patterns are such that you'll know when data items are still useful and when they become obsolete, you may be able to define an array of structures, and then pass around indices to the array elements. It's possible to write code to handle this very efficiently with essentially zero overhead, provided that the structure of your program allows you to ensure that every item that gets allocated is released exactly once and things are not used once they are released.
If you would not be able to readily determine when the last reference to an object is going to go out of scope, eight bytes would be a very reasonable level of overhead. I would expect most frameworks to force objects to be aligned on 32-bit boundaries (so I'm surprised that adding a byte would push the size to nine rather than twelve). If a system is going to have a garbage collector that works better than a Commodore 64's(*), it will need an absolute minimum of a bit of overhead per object to indicate which things are used and which aren't. Further, unless one wants to have separate heaps for objects which can contain supplemental information and those which can't, one will need every object to either include space for a supplemental-information pointer, or include space for all the supplemental information (locking, abandonment notification requests, etc.). While it might be beneficial in some cases to have separate heaps for the two categories of objects, I doubt the benefits would very often justify the added complexity.
(*) The Commodore 64 garbage collector worked by allocating strings from the top of memory downward, while variables (which are not GC'ed) were allocated bottom-up. When memory got full, the system would scan all variables to find the reference to the string that was stored at the highest address. That string would then be moved to the very top of memory and all references to it would be updated. The system would then scan all variables to find the reference to the string at the highest address below the one it just moved and update all references to that. The process would repeat until it didn't find any more strings to move. This algorithm didn't require any extra data to be stored with strings in memory, but it was of course dog slow. The Commodore 128 garbage collector stored with each string in GC space a pointer to the variable that holds a reference and a length byte that could be used to find the next lower string in GC space; it could thus check each string in order to find out whether it was still used, relocating it to the top of memory if so. Much faster, but at the cost of three bytes' overhead per string.
You should look into the storage requirements for various types. Every instruction and storage allocation (i.e. variable/object, etc.) uses up a specific amount of space. In C#, an Int32 integer object stores integer information to the tune of 4 bytes (32 bits). It might carry other information, too, because it is an object, whereas your character data type probably only requires 1 byte of information. If you have constructs like for or while in your class, those will take up space too, because each of them tells your class to do something. The class itself requires a number of instructions to be created in memory, which would account for the initial 8 bytes.
Take an assembler language course. You'll learn all you ever wanted to know and then some about why your programs use however much memory or take up however much storage when compiled.