Why do C/C++ have memory issues? - c++

I have read lots of programmers saying and writing that when programming in C/C++ there are lots of issues related to memory. I am planning to learn to program in C/C++. I have beginner knowledge of C/C++, and I want to see some short samples of why C/C++ can have issues with memory management. Please provide some samples.

There are many ways you can corrupt or leak memory in C or C++. These errors are some of the most difficult to diagnose, because they are often not easily reproducible.
For example, it is easy to fail to free memory you have allocated. The following code performs a "double free": it frees the block pointed to by a twice, and never frees the block originally allocated for b:
#include <stdlib.h>

char *a = malloc(128 * sizeof(char));
char *b = malloc(128 * sizeof(char));
b = a;      /* the block originally allocated for b is now leaked */
free(a);
free(b);    /* double free: b points at the same block a just freed */
An example of a buffer overrun, which corrupts arbitrary memory, follows. It is a buffer overrun because you do not know how long str is. If it is longer than 256 bytes, the extra bytes will be written somewhere in memory, possibly overwriting your code, possibly not.
#include <string.h>

void somefunc(char *str) {
    char buff[256];
    strcpy(buff, str);  /* writes past the end of buff if str needs more than 256 bytes */
}

Basically, in these languages, you have to manually request every bit of memory that is not a local variable known at compile time, and you have to manually release it when you no longer need it. There are libraries (so-called smart pointers) that can automate this process to some degree, but they are not applicable everywhere. Furthermore, there are absolutely no limits on how you can (try to) access memory via pointer arithmetic.
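To illustrate, here is a minimal sketch (assuming C++11 or later) of how a smart pointer automates the release step:
#include <memory>

void example() {
    std::unique_ptr<char[]> buf(new char[128]);  // the unique_ptr owns the array
    buf[0] = 'x';
}   // no delete[] needed: the memory is released here, even if an exception is thrown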
Manual memory management can lead to a number of bugs:
If you forget to release some memory, you have a memory leak
If you use more memory than you've requested for a given pointer, you have a buffer overrun.
If you release memory and keep using a "dangling pointer" to it, you have undefined behaviour (usually the program crashes); see the sketch after this list
If you miscalculate your pointer arithmetic, you get a crash or corrupted data
And many of these problems are very hard to diagnose and debug.
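As a sketch of the dangling-pointer case, the following compiles cleanly but has undefined behaviour:
#include <stdlib.h>

void dangle() {
    int *p = (int*)malloc(sizeof(int));
    *p = 42;
    free(p);
    *p = 7;   /* use after free: p is dangling, the behaviour is undefined */
}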

I am planning to learn to program in C/C++
What exactly do you mean by that? Do you want to learn to program in C, or do you want to learn to program in C++? I would not recommend learning both languages at once.
From a user's perspective, memory management in C++ is a lot easier than in C, because most of it is encapsulated by classes, for example std::vector<T>. From a conceptual perspective, C's memory management is arguably a lot simpler. Basically, there's just malloc and free :)
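A quick sketch of that difference (illustrative only):
#include <cstdlib>
#include <vector>

void c_style() {
    int *data = (int*)std::malloc(100 * sizeof(int));  // you own this block
    /* ... use data ... */
    std::free(data);                                   // and must remember to free it
}

void cpp_style() {
    std::vector<int> data(100);  // the vector owns its block
    /* ... use data ... */
}                                // freed automatically here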

I can honestly say that I have no "issues" with memory allocation when programming in C++. The last time I had a memory leak was over 10 years ago, and was due to blind stupidity on my part. If you write code using RAII, standard library containers and a small modicum of common sense, the problem really does not exist.
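For readers unfamiliar with the term, RAII just ties a resource's lifetime to an object's scope; a minimal sketch:
#include <cstdio>

class File {
    FILE *f;
public:
    explicit File(const char *path) : f(std::fopen(path, "r")) {}
    ~File() { if (f) std::fclose(f); }  // closed on scope exit, however we leave
    FILE *get() const { return f; }
    // (a real class would also disable or define copying)
};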

One of the common memory management problems in C and C++ is related to the lack of bounds checking on arrays. Unlike Java (for example), C and C++ do not check to make sure that the array index falls within the actual array bounds. Because of this, it is easy to accidentally overwrite memory. For example (C++):
char *a = new char[10];
a[12] = 'x';  // out of bounds: writes past the end of the allocation
There will be no compile-time or runtime error for the above code; it will simply overwrite memory it shouldn't.
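By contrast, the standard library offers checked access if you ask for it; a small sketch:
#include <vector>

void checked() {
    std::vector<char> v(10);
    v.at(12) = 'x';  // throws std::out_of_range instead of corrupting memory
}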

The reason that this is commonly cited as something that is different in C/C++ is that many modern languages perform memory management and garbage collection. In C/C++ this is not the case (for better or for worse). You need to manually allocate and deallocate memory, and failing to do so correctly results in a memory leak that would not be possible in a language that performs memory management for you.

When new is used to obtain a block of memory, the size reserved by the operating system may be bigger than your request, but never smaller. Because of this, and because delete doesn't immediately return the memory to the operating system, when you inspect the total memory your program is using you may be led to believe that your application has serious memory leaks. So inspecting the number of bytes the whole program is using should not be used as a way of detecting memory errors. Only if the memory manager indicates a large and continuous growth of memory used should you suspect memory leaks.

Obviously you have to release memory that is no longer in use and will not be used in the future, but first you have to determine, at the beginning of the program, whether its memory requirement is static or varies as it executes. If it varies and you reserve enough memory up front for the worst case, your program may consume extra memory it never uses.
So you should release memory that is not in use, and allocate it at the time it is needed.
For example:
struct student
{
    char  name[20];
    int   roll;
    float marks;
} s[100];
Here I assume 100 students in the class. There may be more than 100, in which case the program will lose information, or fewer than 100, in which case the program will run but waste memory, and the waste can be big.
So we usually create a record dynamically at execution time, like this:
struct student *s;
s = (struct student *)malloc(sizeof(struct student));
scanf("%19s %d %f", s->name, &s->roll, &s->marks);
and when it is no longer in use, we remove it from memory:
free(s);
That is good programming practice: if you never free memory, your program can eventually exhaust the available memory and hang.

One thing not mentioned so far is performance, and why you might want to manage memory manually.
Managing memory accurately is hard, especially as a program becomes more complex: in particular when you use threads, and when the lifetime of pieces of memory becomes complex (i.e. when it gets hard to tell exactly when you no longer need a piece of information), even with powerful modern programming tools like Valgrind.
So why would you want to manage memory manually? A few reasons:
--To understand how it works.
--To implement garbage collection or automatic memory management, you need to manage memory manually underneath.
--For some lower-level things, such as a kernel, you need the flexibility that manual control of memory can give.
--Most importantly, if you do manual memory management right, you can gain a large speedup and lower memory overhead (better performance). An associated problem with garbage collection (although it is getting better as better garbage collectors such as the HotSpot JVM's are written) is that you can't control memory management, so it is hard to do real-time work (guaranteeing deadlines for objectives like car brakes and pacemakers; for that, try a special real-time GC), and programs that interact with users might freeze up or lag for a moment (which would be terrible for a game).
A lot of "Modern C++" (it has been said that C++ can be thought of as multiple languages, depending on how you use it), as opposed to "C with classes" (or C++ restricted to features x and y), takes a compromise position: it frequently uses simple optional GC/automatic memory management features together with some manual memory management. (Note that optional GC can be worse at memory management than mandatory GC, because a mandatory GC can be a simpler system.) Depending on how you do this, it can have some of the advantages and disadvantages of both GC and manual memory management. Optional GC is also available with some C libraries, but it is less common in C than in C++.

Related

Memory defragmentation/heap compaction - commonplace in managed languages, but not in C++. Why?

I've been reading up a little on zero-pause garbage collectors for managed languages. From what I understand, one of the most difficult things to do without stop-the-world pauses is heap compaction. Only very few collectors (eg Azul C4, ZGC) seem to be doing, or at least approaching, this.
So, most GCs introduce dreaded stop-the-world pauses to compact the heap (bad!). Not doing this seems extremely difficult, and does come with a performance/throughput penalty. So either way, this step seems rather problematic.
And yet - as far as I know, most if not all GCs still do compact the heap occasionally. I've yet to see a modern GC that doesn't do this by default. Which leads me to believe: It has to be really, really important. If it wasn't, surely, the tradeoff wouldn't be worth it.
At the same time, I have never seen anyone do memory defragmentation in C++. I'm sure some people somewhere do, but - correct me if I am wrong - it does not at all seem to be a common concern.
I could of course imagine static memory somewhat lessens this, but surely, most codebases would do a fair amount of dynamic allocations?!
So I'm curious, why is that?
Are my assumptions (very important in managed languages; rarely done in C++) even correct? If yes, is there any explanation I'm missing?
Garbage collection can compact the heap because it knows where all of the pointers are. After all, it just finished tracing them. That means that it can move objects around and adjust the pointers (references) to the new location.
However, C++ cannot do that, because it doesn't know where all the pointers are. If the memory allocation library moved things around, there could be dangling pointers to the old locations.
Oh, and for long running processes, C++ can indeed suffer from memory fragmentation. This was more of a problem on 32-bit systems because it could fail to allocate memory from the OS, because it might have used up all of the available 1 MB memory blocks. In 64-bit it is almost impossible to create so many memory mappings that there is nowhere to put a new one. However, if you ended up with a 16 byte memory allocation in each 4K memory page, that's a lot of wasted space.
C and C++ applications solve that by using storage pools. For a web server, for example, it would start a pool with a new request. At the end of that web request, everything in the pool gets destroyed. The pool makes a nice, constant sized block of RAM that gets reused over and over without fragmentation.
Garbage collection tends to use recycling pools as well, because it avoids the strain of running a big GC trace and reclaim at the end of a connection.
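A minimal sketch of the per-request pool idea described above (the names are illustrative, not any specific library's API; alignment is ignored for brevity):
#include <cstddef>
#include <vector>

// A trivial arena: bump-allocates from one block, releases everything at once.
class Arena {
    std::vector<char> block;
    std::size_t used = 0;
public:
    explicit Arena(std::size_t size) : block(size) {}
    void* allocate(std::size_t n) {
        if (used + n > block.size()) return nullptr;  // out of pool space
        void* p = block.data() + used;
        used += n;
        return p;
    }
    void reset() { used = 0; }  // "end of request": the whole pool is reusable again
};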
One method some old operating systems, like Apple OS 9, used before virtual memory was a thing is handles. Instead of a memory pointer, allocation returned a handle. That handle was a pointer to the real object in memory. When the operating system needed to compact memory or swap it to disk, it would update the pointer behind the handle.
I have actually implemented a similar system in C++ using an array of handles into a shared-memory-mapped pseudo-database. When the map was compacted, the handle table was scanned for affected entries and updated.
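A sketch of that double-indirection idea (hypothetical types, not Apple's actual API):
#include <cstddef>
#include <vector>

// Objects are addressed by an index into a table of pointers.  The compactor
// may move the objects and rewrite the table; the handles stay valid.
struct HandleTable {
    std::vector<void*> slots;

    std::size_t insert(void* obj) {
        slots.push_back(obj);
        return slots.size() - 1;  // the handle
    }
    void* deref(std::size_t handle) const { return slots[handle]; }
    void relocate(std::size_t handle, void* newAddr) { slots[handle] = newAddr; }
};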
Generic memory compaction is not generally useful nor desirable because of its costs.
What may be desirable is to have no wasted/fragmented memory and that can be achieved by other methods than memory compaction.
In C++, one can come up with a different allocation approach for the objects that cause fragmentation in a specific application, e.g. double pointers or double indexes to allow object relocation, or object pools and arenas that prevent or minimize fragmentation. Such solutions for specific object types are superior to generic garbage collection because they employ application/business-specific knowledge, which minimizes the scope and cost of object-storage maintenance and lets it happen at the most appropriate times.
One study found that garbage-collected languages can require as much as five times more memory to achieve the performance of equivalent programs with explicit memory management. Memory fragmentation is also more severe in GC'd languages.

How to make a simple tool to detect double free or memory overflow in Linux for a large project?

I have a large embedded project running Linux, with various processes and threads. I can't log all the malloc and new calls, as that would make the box (an embedded set-top box) sluggish. Sluggishness might even cause a crash because of a mutex timeout or similar. So I want to make a tool that can help debug memory issues, such as memory overflow.
For example, say you malloc 4 bytes but write 8 bytes. This may corrupt the next chunk of allocated data: the other chunk's header can be tampered with, so free() will fail or crash. How can I make a tool to detect such issues, and also a tool to track down memory leaks? Is there a way to do so? I can't use Valgrind, as it slows down my STB. So I want to develop my own tool that can check for memory header corruption or memory leaks; based on my choice, it would do either memory corruption checking or memory leak detection. Also, it should be lightweight.
Firstly, there is probably no way to call this "simple".
Secondly, if you are using C++, I highly suggest not using malloc/free but rather new/delete; the options for overriding those operators are much more flexible.
C++ provides a number of tools to improve memory safety:
smart pointers (the performance cost really is worth the safety improvement)
encapsulating things in classes; for example, std::array::at(i) will throw an exception if your access is out of bounds
lastly, proper use of asserts in your code can go a long way toward catching errors
My point is merely that you should not depend on your debugging tools to negate the necessity of using good C++ programming methods.
OK, so next you need to override new and delete.
A google search will provide many ways to do this.
link1
For your problem it probably makes more sense to overload delete/new globally.
Buffer overflow detection
This is the first part of your problem.
What you need to do is allocate additional memory in your overloaded operator new so that there are buffer regions before and after the user's memory, and then return only the centre part.
How big a buffer is your choice.
pseudo code:
inline void* operator new(size_t s)
{
    char* mem = (char*)malloc(s + 2*BUFFER);
    memset(mem, 0x5A, s + 2*BUFFER);  // fill the guards (and payload) with a known pattern
    return mem + BUFFER;              // hand out only the centre of the block
}
At some stage in the future you need to check that the BUFFER regions still hold the 0x5A values. You should probably do this in the call to free(), but you can also have your own function that does this, which you call periodically. To speed this process up, use a function like memcmp.
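A self-contained sketch of the whole scheme, assuming the requested size is stashed at the start of the front guard so the rear guard can be located at free time (the names are illustrative):
#include <cassert>
#include <cstdlib>
#include <cstring>

static const size_t BUFFER = 32;  // guard size; must be at least sizeof(size_t)
static const unsigned char PATTERN = 0x5A;

void* guarded_alloc(size_t s)
{
    char* mem = (char*)malloc(s + 2 * BUFFER);
    memcpy(mem, &s, sizeof s);                           // remember the payload size
    memset(mem + sizeof s, PATTERN, BUFFER - sizeof s);  // front guard
    memset(mem + BUFFER + s, PATTERN, BUFFER);           // rear guard
    return mem + BUFFER;
}

void guarded_free(void* p)
{
    char* mem = (char*)p - BUFFER;
    size_t s;
    memcpy(&s, mem, sizeof s);
    for (size_t i = sizeof s; i < BUFFER; ++i)
        assert(mem[i] == (char)PATTERN);               // underflow check
    for (size_t i = 0; i < BUFFER; ++i)
        assert(mem[BUFFER + s + i] == (char)PATTERN);  // overflow check
    free(mem);
}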
Memory leak detection
Detecting memory leaks is not trivial.
Firstly, I suggest using stack-based objects whenever possible, to avoid allocating memory on the heap altogether when it is not needed.
The main question regarding memory leaks is knowing whether a given memory block should have been freed or not.
99% of your memory leak problems can probably be solved just by using smart pointers.
However, one of the most difficult memory leaks to catch is a growing data structure (say, a linked list that grows slowly over time).
Firstly, in your overloaded new/malloc functions, keep a list of all memory currently allocated, and also a counter of the total amount of memory allocated.
Method 1: threshold detection:
Essentially, every time your program's memory usage exceeds a threshold amount, you report this and increase the threshold. If your program continues to exceed thresholds as it keeps running, something is wrong.
Method 2: Comparative analysis:
In pseudo code:
Value1 = currentAmountOfMemoryUsed;
runSomeCode()
if (currentAmountOfMemoryUsed != Value1) reportProblem()
Whether this is possible depends a lot on what happens in runSomeCode(), as some code can legitimately "save up" some memory for when it runs again later.
Method 3: Leak detection on program exit:
The premise is that if your code is 100% correctly written, every bit of memory allocated should have been freed by the time your program exits.
This method, once again, is not always possible, because perhaps your program needs to run indefinitely, and it might segfault because of your errors long before the leak can be detected.
Compiler support
On a lower level, most compilers have some support for hooking into the memory management system, but the way to handle this is 100% compiler/platform specific, e.g. Visual Studio C++.
This is another reason I highly suggest not using malloc/free directly: it is problematic for debugging in this way, and it breaks the constructor/destructor design patterns of C++.
overriding malloc/free
There is, however, a more hands-on approach to overriding malloc/free.
That is, by defining your own malloc/free functions.
Typically, under debugging, this will use macros to include __FILE__ and __LINE__ in the call:
#ifdef NDEBUG
#define myMalloc(s) malloc(s)
#else
#define myMalloc(s) myMallocImplementation((s), __FILE__, __LINE__)
#endif
What this allows is that your malloc implementation can save the source location where the memory was allocated. This approach will, however, not catch malloc/free usage within libraries you are using.
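A sketch of what myMallocImplementation might look like (an illustration, not a drop-in tool): it prepends a header recording the size and source location and keeps a running total, with a matching free:
#include <stdio.h>
#include <stdlib.h>

typedef struct AllocHeader {
    size_t      size;
    const char *file;
    int         line;
} AllocHeader;

static size_t totalAllocated = 0;

void* myMallocImplementation(size_t s, const char *file, int line)
{
    AllocHeader *h = (AllocHeader*)malloc(sizeof(AllocHeader) + s);
    if (!h) return NULL;
    h->size = s; h->file = file; h->line = line;
    totalAllocated += s;
    return h + 1;  /* the user's memory starts just after the header */
}

void myFreeImplementation(void *p)
{
    if (!p) return;
    AllocHeader *h = (AllocHeader*)p - 1;
    totalAllocated -= h->size;
    free(h);
}

void reportUsage(void) { printf("%zu bytes currently allocated\n", totalAllocated); }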
This is a bit harder to do with new/delete calls, as it would normally require some amount of digging into the call stack at run time to find out who called your operator new, and that again is fairly compiler specific.
Also see: MSDN blog article
Memory freezing
Given all of that, I'd also like to mention something that is very common in safety-critical code (as used in motor vehicles, airplanes etc.).
Outside of initialization, a safety-critical program is usually not allowed to use malloc/free/new/delete. So all memory allocation must happen during initialization; once the program is running, malloc/free is frozen in some way, and any call to malloc/free after that will cause an assert.
This can be quite a heavy limitation to work with in a C++ environment but it does make for very robust code.
Note this does nothing for buffer overflow access or invalid pointer access problems.

About Memory Management When mentioning C++

When someone mentions the memory management C++ is capable of, how can I see this in practice? Is it done in theory, like guessing?
I took an intro course on logic design that covered number systems, boolean algebra and combinational logic; will this help?
Say, in Visual Studio, is there some kind of tool to visualize the memory? I hope I'm not being ridiculous here.
Thank you.
C++ has a variety of memory areas:
space used for global and static variables, which is pre-allocated by the compiler
"stack" memory that's used for preserving caller context during function calls, passing some function arguments (others may fit in CPU registers), and local variables
"heap" memory that's allocated using new or new[] (C++'s preferred approach) or malloc (a lower-level function inherited from C), and released with delete, delete[] or free respectively.
The heap is important in that it supports run-time requests for arbitrary amounts of memory, and the usage persists until delete or free is explicitly used, rather than being tied to the lifetime of particular function calls as per stack memory.
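A tiny sketch showing all three areas:
int g_counter = 0;             // static storage: pre-allocated by the compiler

void demo() {
    int local = 42;            // stack: lives until the function returns
    int *heap = new int[100];  // heap: lives until you delete[] it
    (void)local;
    delete[] heap;
}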
I'm not aware of any useful tools for visualising and categorising the overall memory usage of a running C++ program, less still for relating that back to which pointers in the source code currently have how much memory associated with them. As a very general guideline, it's encouraged to write code in such a way that pointers are only introduced when the program is ready to point them at something, and they go out of scope once they no longer point at something. When that's impractical, it can be useful to set them to NULL (0), so that if you're monitoring the executing program in a debugger you can tell the pointer isn't meant to point at legitimate data at that point.
Memory management is not something that you can easily visualize while you are programming. Instead, it refers to how your program is allocating and freeing memory while it is running. Many debuggers will provide a way to halt a program while it is running and view information about the dynamic memory that it has allocated. You can plan your classes and interfaces with proper memory management techniques, but it's not as simple as "hit this button for a chart of your memory usage".
You can also implement something like this to keep track of your memory allocations and warn you about anything that your program didn't free. A garbage collector can free you from some of the hassles associated with memory management.
In Visual Studio, there is a Memory window (Alt+6) that lets you read/write memory manually during debugging, provided the location is valid for the operation you are trying to perform.
On the Windows platform, you can get an initial feel for memory management using tools like perfmon.exe, taskmgr.exe and many other tools from Sysinternals.

Heap Behavior in C++

Is there anything wrong with the optimization of overloading the global operator new to round up all allocations to the next power of two? Theoretically, this would lower fragmentation at the cost of higher worst-case memory consumption, but does the OS already have redundant behavior with this technique, or does it do its best to conserve memory?
Basically, given that memory usage isn't as much of an issue as performance, should I do this?
The default memory allocator is probably quite smart and will deal well with large numbers of small- to medium-sized objects, as this is the most common case. For all allocators, the number of bytes requested is not always the amount allocated. For example, if you say:
char * p = new char[3];
the allocator almost certainly does something like:
char * p = new char[16]; // or some minimum power of 2 block size
Unless you can demonstrate that you have an actual problem with allocations, you should not consider writing your own version of new.
You should try implementing it for fun. As soon as it works, throw it away.
Should you do this? No.
Two reasons:
Overloading the global new operator will inevitably cause you pain, especially when external libraries take a dependency on the stock versions.
Modern OS implementations of the heap already take fragmentation into consideration. If you're on Windows, you can look into the "Low Fragmentation Heap" if you have a special need.
To summarize, don't mess with it unless you can prove (by profiling) that it is a problem to begin with. Don't optimize prematurely.
I agree with Neil, Alienfluid and Fredoverflow that in most cases you don't want to write your own memory allocator, but I still wrote my own memory allocator about 15 years ago and refined it over the years (the first version redefined malloc/free; later versions used the global new/delete operators), and in my experience the advantages can be enormous:
Memory leak tracing can be built into your application. No need to run external applications that slow down your application.
If you implement different strategies, you can sometimes pin down difficult problems just by switching to a different memory allocation strategy
To find difficult memory-related bugs, you can easily add logging to your memory allocator and refine it even further (e.g. log all news and deletes for memory of size N bytes)
You can use page-allocation strategies, where you allocate a complete 4 KB page and place the allocation against the page boundary, so that buffer overflows are caught immediately
You can add logic to delete to print out when memory is freed twice
It's easy to add a red zone to memory allocations (a checksum before the allocated memory and one after the allocated memory) to find buffer overflows/underflows more quickly
...

Will Garbage Collected C be Faster Than C++?

I had been wondering for quite some time how to manage memory in my next project, which is writing a DSL in C/C++.
It can be done in any of these three ways.
Reference counted C or C++.
Garbage collected C.
In C++, copying classes and structures from stack to stack, and managing strings separately with some kind of GC.
The community probably already has a lot of experience on each of these methods. Which one will be faster? What are the pros and cons for each?
A related side question: will malloc/free be slower than allocating a big chunk at the beginning of the program and running my own memory manager over it? .NET seems to do it. But I am confused why we can't count on the OS to do this job better and faster than we can ourselves.
It all depends! That's a pretty open question. It needs an essay to answer it!
Hey.. here's one somebody prepared earlier:
http://lambda-the-ultimate.org/node/2552
http://www.hpl.hp.com/personal/Hans_Boehm/gc/issues.html
It depends on how big your objects are, how many of them there are, how fast they're being allocated and discarded, and how much time you want to invest in optimizing and tweaking. If you know the limits of how much memory you need, then for fast performance I would think you can't really beat grabbing all the memory you need from the OS up front and then managing it yourself.
The reason allocating memory from the OS can be slow is that the OS deals with lots of processes and with memory both on disk and in RAM, so to give you memory it has to decide whether there is enough. Possibly it might have to page another process's memory out from RAM to disk to free up enough. There's a lot going on. So managing it yourself (or with a GC-collected heap) can be far quicker than going to the OS for each request. Also, the OS usually deals with bigger chunks of memory, so it might round up the size of your requests, meaning you could waste memory.
Have you got a real hard requirement for going super quick? A lot of DSL applications don't need raw performance. I'd suggest going with whatever's simplest to code. You could spend a lifetime writing memory management systems and worrying which is best.
Why would garbage collected C be faster than C++? The only garbage collectors available for C are pretty inefficient things, more designed to plug memory leaks than to actually improve the quality of your code.
In any case, C++ has the potential for reaching better performance with less code (note that it's only a potential. It's also very possible to write C++ code that is far slower than the equivalent C).
Considering the current state of both languages, GCs are not currently going to improve the performance of your code. GCs can be made very efficient in languages designed for them; C/C++ are not among those. ;)
Apart from that, it's impossible to say. Languages don't have a speed. It doesn't make sense to ask which language is faster. It depends on 1) the specific code, 2) the compiler that compiles it, and 3) the system it's running on (hardware as well as OS).
malloc is a fairly slow operation, far slower than the .NET equivalents, so yes, if you are performing a lot of small allocations, you may be better off allocating a large pool of memory once, and then using chunks of that.
The reason is that the allocator has to find a free chunk of memory, basically by following a linked list of free memory areas. In .NET, a new() call is basically nothing more than moving the heap pointer forward by as many bytes as the allocation requires.
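A sketch of that pointer-bump idea, which is why allocation in a compacting GC can be so cheap (illustrative only):
#include <cstddef>

// Pointer-bump allocation over a pre-reserved region, the way a GC nursery works.
class BumpAllocator {
    char *next, *end;
public:
    BumpAllocator(char *region, std::size_t size) : next(region), end(region + size) {}
    void* allocate(std::size_t n) {
        if (n > (std::size_t)(end - next)) return nullptr;  // exhausted: a real GC would collect here
        void* p = next;
        next += n;  // the whole "search" is a single pointer addition
        return p;
    }
};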
Uh... it depends how you write the garbage collection system for your DSL. Neither C nor C++ comes with a garbage collection facility built in, but either could be used to write a very efficient or a very inefficient garbage collector. Writing such a thing, by the way, is a non-trivial task.
DSLs are often written in higher-level languages such as Ruby or Python, specifically because the language writer can leverage the garbage collection and other facilities of the language. C and C++ are great for writing full, industrial-strength languages, but you certainly need to know what you are doing to use them - knowledge of yacc and lex is especially useful here, but a good understanding of dynamic memory management is important too, as you say. You could also check out keykit, an open-source music DSL written in C, if you still like the idea of a DSL in C/C++.
With most garbage collection implementations, allocation can see a speed improvement, but then you have the additional cost of the collection phase which can be triggered at any point in your program's execution, leading to a sudden (seemingly random) delay.
As for your second question, it depends on your memory management algorithms. You'd be safe sticking with your library's default malloc implementation, but there are alternatives which boast better performance.
A related side question. Will malloc/free be slower than allocating a big chunk at the beginning of the program and running my own memory manager over it? .NET seems to do it. But I am confused why we can't count on the OS to do this job better and faster than what we can do ourselves.
The problem with letting the OS handle memory allocation is that it introduces nondeterministic behaviour. There's no way for the programmer to know how long the OS will take to return a new chunk of memory - an allocation may be quite costly if memory has to be paged out to disk.
Preallocating therefore might be a good idea, especially when using a copying garbage collector. It'll increase memory consumption, but allocation will be fast because in most cases it'll just be a pointer increment.
As people have pointed out, GC is faster to allocate (because it just gives you the next block on its list), but slower overall (because it has to compact the heap regularly in order for allocations to stay fast).
So - go for the compromise solution (which is actually pretty damn good):
You create your own heaps, one for each size of object you commonly allocate (or 4-byte, 8-byte, 16-byte, 32-byte, etc.). Then, when you want a new piece of memory, you grab the next free 'block' from the appropriate heap. Because you pre-allocate these heaps, all you need to do when allocating is grab the next free block. This works better than the standard allocator because you are happily wasting memory: if you want to allocate 12 bytes, you give up a whole 16-byte block from the 16-byte heap. You keep a bitmap of used vs. free blocks so you can allocate quickly without wasting loads of memory or needing to compact (a minimal sketch follows at the end of this answer).
Also, because you're running several heaps, highly parallel systems work much better, as you don't need to lock as often (i.e. you have a separate lock for each heap, so you don't get nearly as much contention).
Try it - we used it to replace the standard heap in a very intensive application, and performance went up by quite a lot.
BTW, the reason the standard allocators are slow is that they try not to waste memory. If you allocate 5 bytes, 7 bytes and 32 bytes from the standard heap, it will keep those 'boundaries'. Next time you allocate, it will walk through them looking for enough space to give you what you asked for. That worked well for low-memory systems, but you only have to look at how much memory most apps use today to see that GC systems go the other way: they try to make allocations as fast as possible while caring nothing for how much memory is wasted.
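A minimal sketch of one of those fixed-size heaps with a used/free bitmap (illustrative, not the poster's actual code):
#include <cstddef>
#include <vector>

// One heap of fixed-size blocks; allocation scans a free bitmap.
class FixedHeap {
    std::vector<char> storage;
    std::vector<bool> used;  // the bitmap
    std::size_t blockSize;
public:
    FixedHeap(std::size_t blockSz, std::size_t count)
        : storage(blockSz * count), used(count, false), blockSize(blockSz) {}

    void* allocate() {
        for (std::size_t i = 0; i < used.size(); ++i)
            if (!used[i]) { used[i] = true; return storage.data() + i * blockSize; }
        return nullptr;  // heap exhausted
    }
    void release(void* p) {
        used[((char*)p - storage.data()) / blockSize] = false;
    }
};
A production version would scan the bitmap a word at a time (or keep a free list) rather than testing one bit per iteration.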
The problem has a lot of variables, but if your application is written with garbage collection in mind, and if you exploit the special features of the Boehm collector, such as different allocation calls for blocks that don't contain pointers, then as a general rule your application
- Will have simpler interfaces
- Will run somewhat faster
- Will require from 1.2x to 2x the space
than a similar application using explicit memory management.
For documentation and evidence supporting these claims, you can see the information on Boehm's web site, and also Ben Zorn's several papers on the measured cost of conservative garbage collection.
Most importantly you'll save a ton of effort and won't have to worry about a significant class of memory-management bugs.
The issue of C vs C++ is orthogonal, but GC will definitely be faster than reference counting, especially when there's no compiler support for reference counting.
Neither C nor C++ will give you garbage collection for free. What they will give you is memory allocation libraries (which provide malloc/free, etc.). There are many online resources about algorithms for writing garbage collection libraries. A good start is link text.
Most non-GC languages allocate and de-allocate memory as it is needed and no longer needed. GC'd languages usually allocate large chunks of memory beforehand and only free it when idle, not in the middle of an intensive task, so I am going to say yes - provided the GC kicks in at the correct time.
The D programming language is a garbage-collected language that is ABI-compatible with C and partly ABI-compatible with C++. This page shows some benchmarks of string performance in C++ and D.
I suggest that if you have written a program where memory allocation and deallocation (explicit or GC'ed) is the bottleneck, then you should rethink your architecture, design and implementation.
If you don't want to explicitly manage memory, don't use C/C++. There are plenty of languages with either reference counting or compiler-supported garbage collectors that will probably work much better for you.
C/C++ are designed in an environment where the programmer manages their own memory. Trying to retrofit GC or ref counting onto them may help some, but you'll find that you either have to compromise the performance of the GC (because it doesn't have any compiler hinting as to where pointers might be), or you'll find new and fascinating ways that you can screw up the reference counts or the GC or whatever.
I know it sounds like a good idea, but really, you should just grab a language more suited to the task.