Is calloc better than malloc? - c++

I have just learned about the C calloc() function the other day. Having read its description and how it differs from malloc (1, 2), I get the idea that, as a non-embedded programmer, I should always use calloc(). But is that really the case?
One reservation I have is the extra delay for accessing the calloc()-ed memory, but I also wonder if there are cases when switching from malloc() to calloc() will break the program in some more serious way.
P. S. The zero-initializing aspect of calloc() is quite clear to me. What I'm interested in learning about is the other difference between calloc() and malloc() - lazy memory allocation provided by calloc(). Please don't post an answer if you're going to focus purely on the memory initialization aspect.

This is really a situation-dependent decision. The rule of thumb is:
If you're going to write to the allocated memory before reading it, malloc() is better (less possible overhead).
Example: Consider the following scenario
char *pointer = malloc(strlen(source) + 1);  /* allocation: contents indeterminate, which is fine here */
strcpy(pointer, source);                     /* every byte is written before it is ever read */
Here, the allocation can very well be done with malloc().
If there's a possibility that the allocated memory is read before it is written, go for calloc(), as it zero-initializes the memory. This avoids the uninitialized read-before-write scenario, which invokes undefined behavior.
Example:
char *pointer = calloc(strlen(source) + 1, 1);  /* zero-filled, so pointer[0] == '\0': a valid (empty) string */
strcat(pointer, source);
Here, strcat() needs its first argument to already be a string, and an allocation with malloc() cannot guarantee that. Because calloc() zero-initializes the memory, the buffer starts out as a valid empty string, so calloc() is the way to go in this case.
To elaborate on the second scenario, quoting from C11, chapter §7.24.3.1 (emphasis mine):
The strcat() function appends a copy of the string pointed to by s2 (including the
terminating null character) to the end of the string pointed to by s1. The initial character
of s2 overwrites the null character at the end of s1. [....]
So, in this case, the destination pointer must point to a string. Allocating via calloc() guarantees that, while allocating with malloc() does not, as we know from chapter §7.22.3.4:
The malloc function allocates space for an object whose size is specified by size and
whose value is indeterminate.
EDIT:
One possible scenario where malloc() is advised over calloc() is writing test stubs for unit / integration testing. In that case, the use of calloc() can hide potential bugs that arise in cases similar to the latter one.

The main difference between malloc and calloc is that calloc will zero-initialize your buffer, and malloc will leave the memory uninitialized.
This gets to the common programming idiom of "don't pay for what you don't use". In other words, why zero-initialize something (which has a cost) if you don't necessarily need to (yet)?
As a side note, since you tagged C++: manual memory management with new/delete is frowned upon in modern C++ (except in rare cases such as memory pools). Use of malloc/free is rarer still and should be done very sparingly.

Use calloc for zero-filled allocations, but only when the zero-filling is really needed.
You should always use calloc(count,size) instead of buff=malloc(total_size); memset(buff,0,total_size).
Avoiding the zeroing memset is the key. Both malloc and calloc are backed by OS-level allocation that can apply lots of optimizations and hardware tricks (such as handing out pages that are already zeroed), but there is little the OS can do to speed up an explicit memset over memory it has already given you.
On the other hand, when do you need to zero-fill the allocated memory? The only common use is for zero-ended arbitrary length elements, such as C-strings. If that's the case, sure, go with calloc.
But if you allocate the structures in which the elements are either fixed-length or carry the length of arbitrary-sized elements with them (such as C++ strings and vectors), zero-filling is not helpful at all, and if you try to rely on it, it can lead to tricky bugs.
Suppose you write your own linked list and decide to skip zeroing the pointer to the next node by allocating the node's memory with calloc. It works fine; then someone uses it with a custom placement new, which doesn't zero-fill. The trouble is that the memory will sometimes happen to be zero anyway, so it can pass all the usual testing, go into production, and crash there only occasionally: the dreaded unrepeatable bug.
For debugging purposes, zero-filling is usually not that good either. 0 is too common a value: you can rarely write something like assert(size), because 0 is usually a valid value too, handled with if(!size) rather than with an assert. In the debugger it won't catch your eye either, since memory is typically full of zeros anyway. (Relatedly, it is best practice to avoid unsigned types for lengths; signed lengths are useful for runtime error handling and for some of the most common overflow checks.) So, while buff=malloc(total_size); memset(buff,0,total_size) is to be avoided, the following is OK:
const signed char UNINIT_MEM = MY_SENTINEL_VALUE;
buff = malloc(total_size);
#if DEBUG_MEMORY
memset(buff, UNINIT_MEM, total_size);
#endif
In debug mode, the runtime library or even the OS sometimes does this for you; for example, check this excellent post on VC++-specific sentinel values.

It's all about what you want to do with the memory. malloc returns uninitialized (and possibly not even physically backed yet) memory; calloc returns real, zeroed memory. If you need it zeroed, then yes, calloc is your best option. If you don't, why pay for the zeroing with a latency hit when you don't need it?

malloc() is far more common in C code than calloc().
A text search for "malloc" will miss the calloc() calls.
Replacement libraries will often have mymalloc(), myrealloc(), myfree() but not mycalloc().
Zero-initialisation of pointers and reals isn't actually guaranteed to have the expected effect, though on every major platform all bits zero is NULL for a pointer and 0.0 for a real.
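As a small sketch of that point (struct and function names below are made up): the all-bits-zero pattern from calloc happens to read back as a null pointer and 0.0 on mainstream platforms, but only explicit assignment is guaranteed by the standard.
#include <stdlib.h>

struct node { struct node *next; double weight; };

/* Sketch: calloc gives all-bits-zero; assigning NULL and 0.0 explicitly is the
   portable way to get those values. */
struct node *make_zeroed_node(void) {
    struct node *n = (struct node *)calloc(1, sizeof *n);
    if (n) {
        n->next = NULL;      /* guaranteed null pointer */
        n->weight = 0.0;     /* guaranteed 0.0 */
    }
    return n;
}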
calloc() tends to hide bugs. A debug malloc usually sets a fill pattern like 0xDEADBEEF, which evaluates to a large negative number and doesn't look like real data, so the program quickly crashes and, with a debugger, the error is flushed out.

Related

`std::string` allocations are my current bottleneck - how can I optimize with a custom allocator?

I'm writing a C++14 JSON library as an exercise and to use it in my personal projects.
By using callgrind I've discovered that the current bottleneck during a continuous value creation from string stress test is an std::string dynamic memory allocation. Precisely, the bottleneck is the call to malloc(...) made from std::string::reserve.
I've read that many existing JSON libraries such as rapidjson use custom allocators to avoid malloc(...) calls during string memory allocations.
I tried to analyze rapidjson's source code but the large amount of additional code and comments, plus the fact that I'm not really sure what I'm looking for, didn't help me much.
How do custom allocators help in this situation?
Is a memory buffer preallocated somewhere (where? statically?) and std::strings take available memory from it?
Are strings using custom allocators "compatible" with normal strings?
They have different types. Do they have to be "converted"? (And does that result in a performance hit?)
Code notes:
Str is an alias for std::string.
By default, std::string allocates memory as needed from the same heap as anything that you allocate with malloc or new. To get a performance gain from providing your own custom allocator, you will need to be managing your own "chunk" of memory in such a way that your allocator can deal out the amounts of memory that your strings ask for faster than malloc does. Your memory manager will make relatively few calls to malloc, (or new, depending on your approach) under the hood, requesting "large" amounts of memory at once, then deal out sections of this (these) memory block(s) through the custom allocator. To actually achieve better performance than malloc, your memory manager will usually have to be tuned based on known allocation patterns of your use cases.
This kind of thing often comes down to the age-old trade off of memory use versus execution speed. For example: if you have a known upper bound on your string sizes in practice, you can pull tricks with over-allocating to always accommodate the largest case. While this is wasteful of your memory resources, it can alleviate the performance overhead that more generalized allocation runs into with memory fragmentation. As well as making any calls to realloc essentially constant time for your purposes.
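To make the "deal out sections of a pre-acquired block" idea concrete, here is a minimal sketch using C++17's std::pmr facilities (the question targets C++14, so treat this purely as an illustration of the pooling approach; the buffer size is arbitrary):
#include <cstddef>
#include <iostream>
#include <memory_resource>
#include <string>

int main() {
    // A fixed buffer handed to a monotonic resource: each allocation is little
    // more than a pointer bump, and nothing is released until the resource dies.
    std::byte buffer[4096];
    std::pmr::monotonic_buffer_resource pool(buffer, sizeof(buffer));

    std::pmr::string s(&pool);  // a string that allocates from the pool
    s = "a parsed JSON string value long enough to defeat the small-string optimization";
    std::cout << s << '\n';
}   // the pool releases everything at once; individual deallocations were no-ops
If the buffer runs out, the resource falls back to its upstream resource (by default the ordinary new/delete-based one), so a wrong size guess costs performance rather than correctness.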
@sehe is exactly right. There are many ways.
EDIT:
To finally address your second question, strings using different allocators can play nicely together, and usage should be transparent.
For example:
#include <iostream>
#include <string>

// "Custom" allocator that is really just std::allocator<char> under the hood.
class myalloc : public std::allocator<char> {};
myalloc customAllocator;

int main()
{
    std::string mystring(customAllocator);      // constructed with the custom allocator instance
    std::string regularString = "test string";  // uses the default allocator
    mystring = regularString;                   // assignment between them is transparent
    std::cout << mystring;
    return 0;
}
This is a fairly silly example and, of course, really uses the same workhorse code under the hood. However, it shows assignment between strings using allocator classes of "different types". Implementing a useful allocator that supplies the full interface required by the STL, without just disguising the default std::allocator, is not so trivial. This seems to be a decent write-up covering the concepts involved. The key to why this works, in the context of your question at least, is that using different allocators doesn't cause the strings to be of different type. Notice that the custom allocator is given as an argument to the constructor, not as a template parameter. The STL still does fun things with templates (such as rebind and Traits) to homogenize allocator interfaces and tracking.
What often helps is the creation of a GlobalStringTable.
See if you can find portions of the old NiMain library from the now defunct NetImmerse software stack. It contains an example implementation.
Lifetime
What is important to note is that this string table needs to be accessible across different DLL spaces, and that it is not a static object. R. Martinho Fernandes already warned that the object needs to be created when the application or DLL thread is created / attached, and disposed of when the thread is destroyed or the DLL is detached, and preferably before any string object is actually used. This sounds easier than it actually is.
Memory allocation
Once you have a single point of access that exports correctly, you can have it allocate a memory buffer up-front. If the memory is not enough, you have to resize it and move the existing strings over. Strings essentially become handles to regions of memory in this buffer.
Placement new
Something that often works well is the placement new operator, where you can specify where in memory your new string object needs to be constructed. Instead of allocating, the operator simply grabs the memory location that is passed in as an argument, zeroes the memory at that location, and returns it. You can also keep track of the allocation, the actual size of the string, etc. in the GlobalStringTable object.
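A minimal sketch of the placement-new idea (the slot pointer and function name are assumptions, standing in for a region handed out by the GlobalStringTable):
#include <memory>    // std::destroy_at
#include <new>       // placement new
#include <string>

// Construct a std::string at a caller-chosen location instead of letting `new`
// pick one. The slot is assumed to be suitably sized and aligned; note that the
// string's character buffer may still allocate elsewhere unless the table's
// allocator is plugged in as well.
void construct_in_slot(void* slot, const char* text) {
    std::string* s = new (slot) std::string(text);  // placement new: no allocation for *s itself
    // ... use *s ...
    std::destroy_at(s);  // destroy explicitly; the slot's memory is managed by the table
}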
SOA
Handling the actual memory scheduling is something that is up to you, but there are many possible ways to approach this. Often, the allocated space is partitioned in several regions so that you have several blocks per possible string size. A block for strings <= 4 bytes, one for <= 8 bytes, and so on. This is called a Small Object Allocator, and can be implemented for any type and buffer.
If you expect many string operations where small strings grow repeatedly, you may change your strategy and allocate larger buffers from the start, so that the number of memmove operations is reduced. Or you can opt for a different approach and use string streams for those.
String operations
It is not a bad idea to derive from std::basic_string, so that most of the operations still work but the internal storage actually lives in the GlobalStringTable; that way you can keep using the same STL conventions. This also makes sure that all the allocations happen within a single DLL, so that there can be no heap corruption from linking different kinds of strings between different libraries, since all the allocation operations are essentially in your DLL (and are rerouted to the GlobalStringTable object).
Custom allocators can help because most malloc()/new implementations are designed for maximum flexibility, thread-safety and bullet-proof behavior. For instance, they must gracefully handle the case where one thread keeps allocating memory and sends the pointers to another thread that deallocates them. Things like these are difficult to handle in a performant way and drive up the cost of malloc() calls.
However, if you know that some things cannot happen in your application (like one thread deallocating stuff another thread allocated, etc.), you can optimize your allocator further than the standard implementation. This can yield significant results, especially when you don't need thread safety.
Also, the standard implementation is not necessarily well optimized: Implementing void* operator new(size_t size) and void operator delete(void* pointer) by simply calling through to malloc() and free() gives an average performance gain of 100 CPU cycles on my machine, which proves that the default implementation is suboptimal.
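Roughly what that replacement looks like (a sketch only; a production replacement would also cover the nothrow, aligned and sized overloads):
#include <cstdlib>
#include <new>

// Global operator new/delete that simply forward to malloc/free.
void* operator new(std::size_t size) {
    if (size == 0) size = 1;                 // malloc(0) may legally return NULL
    if (void* p = std::malloc(size)) return p;
    throw std::bad_alloc();
}

void operator delete(void* pointer) noexcept {
    std::free(pointer);
}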
I think you'd be best served by reading up on the EASTL
It has a section on allocators and you might find fixed_string useful.
The best way to avoid a memory allocation is don't do it!
BUT, if I remember JSON correctly, all the readStr values get used either as keys or as identifiers, so you will have to allocate them eventually; std::string's move semantics should ensure that the allocated array is not copied around but reused until its final use. The default NRVO/RVO/move should reduce any copying of the data, if not of the string header itself.
Method 1:
Pass result as a reference from the caller, which has reserved SomeReasonableLargeValue chars, then clear it at the start of readStr. This is only usable if the caller can actually reuse the string. A sketch follows.
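The readStr signature and the reserve size below are hypothetical; only the reserve-once, clear-then-append pattern matters:
#include <cstddef>
#include <string>

using Str = std::string;                               // alias from the question
constexpr std::size_t SomeReasonableLargeValue = 256;  // arbitrary guess

void readStr(Str& result /*, parser state ... */) {
    result.clear();  // length drops to 0, but the reserved capacity is kept
    // ... append parsed characters into result; no reallocation while it fits ...
}

void caller() {
    Str scratch;
    scratch.reserve(SomeReasonableLargeValue);  // pay for the allocation once
    // readStr(scratch);  ...then reuse scratch across many parse calls
}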
Method 2:
Use the stack.
// Reserve memory for the string (BOTTLENECK)
if (end - idx < SomeReasonableValue) { // 32?
    // Feel free to use std::array if you want bounds checking,
    // but the preceding "if" should ensure it's not a problem.
    char result[SomeReasonableValue] = {0};
    int ridx = 0;
    for (; idx < end; ++idx) {
        // Not an escape sequence
        if (!isC('\\')) { result[ridx++] = getC(); continue; }
        // Escape sequence: skip '\'
        ++idx;
        // Convert escape sequence
        result[ridx++] = getEscapeSequence(getC());
    }
    // Skip closing '"'
    ++idx;
    result[ridx] = 0; // 0-terminated
    // Optional assert here to ensure nothing went wrong.
    return result; // the bottleneck might now move here, as the data is copied into the receiving string
}
// Fallback code, only needed if the string is long:
// Your original code here
Method 3:
If your string implementation by default allocates some size to fill its 32/64-byte boundary, you might want to try to use that; construct result like this instead, in case the constructor can optimize it:
Str result(end - idx, 0);
Method 4:
Most systems already have an optimized allocator that likes specific block sizes: 16, 32, 64, etc.
size_t siz = ((end - idx) & ~0xf) + 16; // round up, if the allocator already works in 16-byte chunks
Str result(siz, '\0');
Method 5:
Use either Google's allocator (tcmalloc) or Facebook's (jemalloc) as a global new/delete replacement.
To understand how a custom allocator can help you, you need to understand what malloc and the heap does and why it is quite slow in comparison to the stack.
The Stack
The stack is a large block of memory allocated for your current scope. You can think of it as this
([] means a byte of memory)
[P][][][][][][][][][][][][][][][]
(P is a pointer that points to a specific byte of memory, in this case its pointing at the first byte)
So the stack is a block with only one pointer. When you allocate memory, it performs pointer arithmetic on P, which takes constant time.
So declaring int i = 0; would mean this,
P + sizeof(int).
[i][i][i][i][P][][][][][][][][][][][],
(i in [] is a block of memory occupied by an integer)
This is blazing fast and as soon as you go out of scope, the entire chunk of memory is emptied simply by moving P back to the first position.
The Heap
The heap allocates memory from a pool of bytes reserved by the C++ runtime. When you call malloc, the heap finds a stretch of contiguous memory that fits your request, marks it as used so nothing else can use it, and returns it to you as a void*.
So, a theoretical heap with little optimization calling new(sizeof(int)), would do this.
Heap chunk
At first : [][][][][][][][][][][][][][][][][][][][][][][][][]
Allocate 4 bytes (sizeof(int)):
A pointer goes through every byte of memory, finds a stretch that is of the correct length, and returns a pointer to it.
After : [i][i][i][i][][][][][][][][][][][][][][][][][][][][][]
This is not an accurate representation of the heap, but from this you can already see numerous reasons for being slow relative to the stack.
The heap is required to keep track of all already allocated memory and their respective lengths. In our test case above, the heap was already empty and did not require much, but in worst case scenarios, the heap will be populated with multiple objects with gaps in between (heap fragmentation), and this will be much slower.
The heap is required to cycle through all the bytes to find one stretch that fits your length.
The heap can suffer from fragmentation since it will never completely clean itself unless you specify it. So if you allocated an int, a char, and another int, your heap would look like this
[i][i][i][i][c][i2][i2][i2][i2]
(i stands for bytes occupied by an int and c stands for bytes occupied by a char.) When you deallocate the char, it will look like this:
[i][i][i][i][empty][i2][i2][i2][i2]
So when you want to allocate another object into the heap,
[i][i][i][i][empty][i2][i2][i2][i2][i3][i3][i3][i3]
unless some later allocation happens to be exactly 1 char in size, that 1-byte gap cannot be reused, so the heap's usable space effectively shrinks. In more complex programs with millions of allocations and deallocations, fragmentation becomes severe: allocations get slower and can even fail although plenty of total memory is technically free.
Add to that concerns like thread safety (someone else mentioned this already).
Custom Heap/Allocator
So, a custom allocator usually needs to address these problems while providing the benefits of the heap, such as personalized memory management and object permanence.
These are usually accomplished with specialized allocators. If you know you don't need to worry about thread safety, or you know exactly how long your strings will be, or you have a predictable usage pattern, you can make your allocator faster than malloc and new by quite a lot.
For example, if your program requires a lot of allocations as fast as possible without lots of deallocations, you could implement a stack allocator, in which you allocate a huge chunk of memory with malloc at startup,
e.g.:
// Super simple bump ("stack") allocator sketch: no alignment handling,
// no bounds checking, not thread-safe.
struct StackAllocator {
    char* stack;    // start of the pre-allocated block
    char* pointer;  // current top of the stack

    explicit StackAllocator(int expectedSize) { stack = new char[expectedSize]; pointer = stack; }
    ~StackAllocator() { delete[] stack; }

    void* allocate(int size) { char* returnedPointer = pointer; pointer += size; return returnedPointer; }
    void empty() { pointer = stack; }
};
Get expected size, get a chunk of memory from the heap.
Assign a pointer to the beginning.
[P][][][][][][][][][] ..... [].
then have one pointer that moves for each allocation. When you no longer need the memory, you simply move the pointer back to the beginning of your buffer. This gives you O(1) allocations and deallocations as well as object permanence, at the cost of no per-object deallocation and a large initial memory requirement.
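Usage of the sketch above might look like this (sizes are arbitrary):
void parse_one_frame() {
    StackAllocator frameAlloc(1 << 20);                            // grab 1 MB from the heap once
    char* scratch = static_cast<char*>(frameAlloc.allocate(256));  // O(1): just a pointer bump
    (void)scratch;  // ... fill and use scratch while parsing ...
    frameAlloc.empty();                                            // "free" everything at once
}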
For strings, you could try a chunk allocator. For every allocation, the allocator gives a set chunk of memory.
Compatibility
Compatibility with other strings is almost guaranteed. As long as you are allocating a contiguous chunk of memory and preventing anything else from using that block of memory, it will work.

Also check realloc() if shrinking allocated size of memory?

When you call realloc() you should check whether the function failed before assigning the returned pointer to the pointer passed as a parameter to the function...
I've always followed this rule.
Now is it necessary to follow this rule when you know for sure the memory will be truncated and not increased?
I've never ever seen it fail. Just wondered if I could save a couple instructions.
realloc may, at its discretion, copy the block to a new address regardless of whether the new size is larger or smaller. This may be necessary if the malloc implementation requires a new allocation to "shrink" a memory block (e.g. if the new size requires placing the memory block in a different allocation pool). This is noted in the glibc documentation:
In several allocation implementations, making a block smaller sometimes necessitates copying it, so it can fail if no other space is available.
Therefore, you must always check the result of realloc, even when shrinking. It is possible that realloc has failed to shrink the block because it cannot simultaneously allocate a new, smaller block.
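A sketch of the pattern the question alludes to, keeping the old pointer until realloc is known to have succeeded (the function and variable names are placeholders):
#include <stdlib.h>

/* Shrink buf to new_size bytes without losing it if realloc fails. */
char *shrink_buffer(char *buf, size_t new_size) {
    char *tmp = (char *)realloc(buf, new_size);
    if (tmp == NULL && new_size != 0) {
        return buf;   /* shrink failed: the old block is still valid and still ours */
    }
    return tmp;
}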
Even if you realloc (read realloc(3) carefully, and the POSIX realloc documentation too) to a smaller size, the underlying implementation may be doing the equivalent of a malloc (of the new, smaller size), followed by a memcpy (from the old zone to the new one), then a free (of the old zone). Or it may do nothing at all (e.g. because some crude malloc implementations maintain a limited set of size classes, such as powers of two or three times powers of two, and the old and new size requirements fall into the same class).
That malloc can fail. So realloc can still fail.
Actually, I usually don't recommend using realloc for that reason: just do the malloc, memcpy, free yourself.
Indeed, dynamic heap memory functions like malloc rarely fail. But when they do, chaos may follow if you don't handle it. On Linux and some other POSIX systems you can use setrlimit(2) with RLIMIT_AS (e.g. via the bash ulimit builtin) to lower the limits for testing purposes.
You might want to study the source code implementations of C memory management. For example MUSL libc (for Linux) is very readable code. On Linux, malloc is often built above mmap(2) (the C library may allocate a large chunk of memory using mmap then managing smaller used and freed memory zones inside it).

What's the advantage of malloc?

What is the advantage of allocating memory for some data? Instead, we could just use an array of them.
Like
int *lis;
lis = (int*) malloc ( sizeof( int ) * n );
/* Initialize LIS values for all indexes */
for ( i = 0; i < n; i++ )
lis[i] = 1;
we could have used an ordinary array.
Well, I don't understand exactly how malloc works or what it actually does, so explaining that would be beneficial to me.
And suppose we replace sizeof(int) * n with just n in the above code and then try to store integer values, what problems might I be facing? And is there a way to print the values stored in the allocated memory directly, for example lis here?
Your question seems to rather compare dynamically allocated C-style arrays with variable-length arrays, which means that this might be what you are looking for: Why aren't variable-length arrays part of the C++ standard?
However the c++ tag yields the ultimate answer: use std::vector object instead.
As long as it is possible, avoid dynamic allocation and responsibility for ugly memory management ~> try to take advantage of objects with automatic storage duration instead. Another interesting reading might be: Understanding the meaning of the term and the concept - RAII (Resource Acquisition is Initialization)
"And suppose we replace sizeof(int) * n with just n in the above code and then try to store integer values, what problems might i be facing?"
- If you still consider n to be the amount of integers that it is possible to store in this array, you will most likely experience undefined behavior.
More fundamental, I think, than the stack-vs-heap and variable-vs-constant issues (and than the fact that you shouldn't be using malloc() in C++ to begin with) is that a local array ceases to exist when the function exits. If you return a pointer to it, that pointer is useless as soon as the caller receives it, whereas memory dynamically allocated with malloc() or new remains valid. You couldn't implement a function like strdup() using a local array, for instance, or sensibly implement a linked representation of a list or a tree.
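For instance, a strdup()-like function has to hand back heap memory (sketch; my_strdup is a made-up name):
#include <stdlib.h>
#include <string.h>

/* The copy must live on the heap: a local array would vanish when we return. */
char *my_strdup(const char *s) {
    size_t len = strlen(s) + 1;          /* include the terminating '\0' */
    char *copy = (char *)malloc(len);
    if (copy) memcpy(copy, s, len);
    return copy;                         /* caller is responsible for free() */
}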
The answer is simple. Local1 arrays are allocated on your stack, which is a small pre-allocated region of memory for your program. Beyond a couple of thousand entries, you can't really do much on the stack. For larger amounts of data, you need to allocate memory outside your stack.
This is what malloc does.
malloc allocates a piece of memory as big as you ask it. It returns a pointer to the start of that memory, which could be treated similar to an array. If you write beyond the size of that memory, the result is undefined behavior. This means everything could work alright, or your computer may explode. Most likely though you'd get a segmentation fault error.
Reading values from the memory (for example for printing) is the same as reading from an array. For example, printf("%d", lis[5]);.
Before C99 (I know the question is tagged C++, but probably you're learning C-compiled-in-C++), there was another reason too. There was no way you could have an array of variable length on the stack. (Even now, variable length arrays on the stack are not so useful, since the stack is small). That's why for variable amount of memory, you needed the malloc function to allocate memory as large as you need, the size of which is determined at runtime.
Another important difference between local arrays, or any local variable for that matter, is the life duration of the object. Local variables are inaccessible as soon as their scope finishes. malloced objects live until they are freed. This is essential in practically all data structures that are not arrays, such as linked-lists, binary search trees (and variants), (most) heaps etc.
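For example, a linked-list node has to outlive the function that creates it (sketch; the names are invented):
#include <stdlib.h>

struct Node { int value; struct Node *next; };

struct Node *make_node(int value) {
    struct Node *n = (struct Node *)malloc(sizeof *n);
    if (n) { n->value = value; n->next = NULL; }
    return n;   /* still valid after this function returns; free() it when done */
}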
An example of a malloc'd object is the FILE structure. Once you call fopen, the structure that holds the data related to the opened file is dynamically allocated using malloc and returned as a pointer (FILE *).
1 Note: Non-local arrays (global or static) are allocated before execution, so they can't really have a length determined at runtime.
I assume you are asking what the purpose of C's malloc() is:
Say you want to take an input from user and now allocate an array of that size:
int n;
scanf("%d",&n);
int arr[n];
This will fail (in C++, and in C before C99) because n is not known at compile time. Here is where malloc() comes in;
you may write:
int n;
scanf("%d",&n);
int* arr = malloc(sizeof(int)*n);
Actually, malloc() allocates memory dynamically in the heap area.
Some older programming environments did not provide malloc or any equivalent functionality at all. If you needed dynamic memory allocation you had to code it yourself on top of gigantic static arrays. This had several drawbacks:
The static array size put a hard upper limit on how much data the program could process at any one time, without being recompiled. If you've ever tried to do something complicated in TeX and got a "capacity exceeded, sorry" message, this is why.
The operating system (such as it was) had to reserve space for the static array all at once, whether or not it would all be used. This phenomenon led to "overcommit", in which the OS pretends to have allocated all the memory you could possibly want, but then kills your process if you actually try to use more than is available. Why would anyone want that? And yet it was hyped as a feature in mid-90s commercial Unix, because it meant that giant FORTRAN simulations that potentially needed far more memory than your dinky little Sun workstation had, could be tested on small instance sizes with no trouble. (Presumably you would run the big instance on a Cray somewhere that actually had enough memory to cope.)
Dynamic memory allocators are hard to implement well. Have a look at the jemalloc paper to get a taste of just how hairy it can be. (If you want automatic garbage collection it gets even more complicated.) This is exactly the sort of thing you want a guru to code once for everyone's benefit.
So nowadays even quite barebones embedded environments give you some sort of dynamic allocator.
However, it is good mental discipline to try to do without. Over-use of dynamic memory leads to inefficiency, of the kind that is often very hard to eliminate after the fact, since it's baked into the architecture. If it seems like the task at hand doesn't need dynamic allocation, perhaps it doesn't.
However however, not using dynamic memory allocation when you really should have can cause its own problems, such as imposing hard upper limits on how long strings can be, or baking nonreentrancy into your API (compare gethostbyname to getaddrinfo).
So you have to think about it carefully.
we could have used an ordinary array
In C++ (this year, at least), arrays have a static size; so creating one from a run-time value:
int lis[n];
is not allowed. Some compilers allow this as a non-standard extension, and it's due to become standard next year; but, for now, if we want a dynamically sized array we have to allocate it dynamically.
In C, that would mean messing around with malloc; but you're asking about C++, so you want
std::vector<int> lis(n, 1);
to allocate an array of size n containing int values initialised to 1.
(If you like, you could allocate the array with new int[n], and remember to free it with delete [] lis when you're finished, and take extra care not to leak if an exception is thrown; but life's too short for that nonsense.)
Well, I don't understand exactly how malloc works or what it actually does, so explaining that would be beneficial to me.
malloc in C and new in C++ allocate persistent memory from the "free store". Unlike memory for local variables, which is released automatically when the variable goes out of scope, this persists until you explicitly release it (free in C, delete in C++). This is necessary if you need the array to outlive the current function call. It's also a good idea if the array is very large: local variables are (typically) stored on a stack, with a limited size. If that overflows, the program will crash or otherwise go wrong. (And, in current standard C++, it's necessary if the size isn't a compile-time constant).
And suppose we replace sizeof(int) * n with just n in the above code and then try to store integer values, what problems might I be facing?
You haven't allocated enough space for n integers; so code that assumes you have will try to access memory beyond the end of the allocated space. This will cause undefined behaviour; a crash if you're lucky, and data corruption if you're unlucky.
And is there a way to print the values stored in the allocated memory directly, for example lis here?
You mean something like this?
for (i = 0; i < len; ++i) std::cout << lis[i] << '\n';

Switching from C++ (with a lot of STL use) to C for interpreter building

I'm switching from C++ to C because I'm rebuilding my toy interpreter. I was used to vectors for dynamically allocating objects like the tokens or instructions of my programs, to stacks, and above all to strings in all their aspects.
Now, in C I'm not going to have all these anymore. I know that I will have to use a lot of memory management, too.
I'm completely new to C; I only know the high-level, easy-life data structures from the STL. How can I get started with strings and dynamic memory allocation?
In C, the landscape is much simpler. You have only malloc, calloc, realloc, and free.
malloc allocates a number of bytes and returns it to you, returning NULL on failure.
calloc is the same thing as malloc, but performs the size multiplication for you. (You give it sizeof(mytype) and the number, and it gives you the right size). It also fills the memory block with zeros.
realloc takes a pointer previously malloc'd and changes the size of the underlying memory block. If the block can be expanded, it is, and if it can't, then a new block is allocated and the contents of the old block are copied to the new block. It returns NULL on failure.
free gives the memory back previously allocated with malloc, calloc, or realloc.
As you may have guessed, there will be a great deal more work involved if you are used to C++ and want to change to C.
For your vectors, the usual approach is to use linked lists when you want a dynamic, array-like structure. This is not quite the same as a vector, though: a vector gives O(1) random access, while a regular linked list gives O(n). Still, it usually "does the job". Alternatively: don't use a dynamic array at all. Lots of situations can get by with a fixed array and a MAX_ARRAY-style constant. For v0.2 at least :-)
For strings, you will probably end up with something like:
struct string {
    char  *buf;
    size_t length;
};
With more fields to account for the buffer allocated vs the actual buffer used, and so on. Then a whole raft of routines to append to the string, free it, copy another string, and so on.
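One of those routines might look like this; the sketch below adds the capacity field mentioned above (names and layout are just one possibility):
#include <stdlib.h>
#include <string.h>

struct string {
    char   *buf;
    size_t  length;    /* bytes used, excluding the terminating 0 */
    size_t  capacity;  /* bytes allocated */
};

/* Append a C string, growing the buffer when needed. Returns 0 on failure. */
int string_append(struct string *s, const char *text) {
    size_t extra = strlen(text);
    if (s->length + extra + 1 > s->capacity) {
        size_t new_cap = (s->length + extra + 1) * 2;
        char *tmp = (char *)realloc(s->buf, new_cap);
        if (!tmp) return 0;           /* old buffer is still valid on failure */
        s->buf = tmp;
        s->capacity = new_cap;
    }
    memcpy(s->buf + s->length, text, extra + 1);  /* copies the terminator too */
    s->length += extra;
    return 1;
}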
Stacks can be implemented in terms of a linked list or an array.
Have you spotted the pattern? Computer Science 101. Plenty of wheel reinvention. The advantage is that you can optimize the data structures for your program. The disadvantages are that you will have to write a whole bunch of code just to get back to where you are now, probably. And you're going to need a lot more unit tests.
Strings are just char arrays terminated with 0 (also '\0') (pretty hard core), but you can make anything you want with your own functions, and maybe your own structure - you can make your own string.
Functions for ANSI strings that are natively available: http://cplusplus.com/reference/clibrary/cstring/

C and C++: Freeing PART of an allocated pointer

Let's say I have a pointer allocated to hold 4096 bytes. How would one deallocate the last 1024 bytes in C? What about in C++? What if, instead, I wanted to deallocate the first 1024 bytes, and keep the rest (in both languages)? What about deallocating from the middle (it seems to me that this would require splitting it into two pointers, before and after the deallocated region).
Don't try and second-guess memory management. It's usually cleverer than you ;-)
The only thing you can achieve is the first scenario, "deallocating" the last 1K:
char * foo = malloc(4096);
foo = realloc(foo, 4096-1024);
However, even in this case, there is NO GUARANTEE that "foo" will be unchanged. Your entire 4K may be freed, and realloc() may move your memory elsewhere, thus invalidating any pointers to it that you may hold.
This is valid for both C and C++; however, use of malloc() in C++ is a bad code smell, and most folk would expect you to use new to allocate storage. Memory allocated with new cannot be realloc()ed, or at least not in any portable way. STL vectors would be a much better approach in C++.
If you have n bytes of mallocated memory, you can realloc m bytes (where m < n) and thus throw away the last n-m bytes.
To throw away from the beginning, you can malloc a new, smaller buffer and memcpy the bytes you want and then free the original.
The latter option is also available using C++ new and delete. It can also emulate the first realloc case.
You don't have "a pointer allocated to hold 4096 bytes", you have a pointer to an allocated block of 4096 bytes.
If your block was allocated with malloc(), realloc() will allow you to reduce or increase the size of the block. The start address of the block won't necessarily stay the same, though.
You can't change the start address of a malloc'd memory block, which is really what your second scenario is asking. There's also no way to split a malloc'd block.
This is a limitation of the malloc/calloc/realloc/free API -- and implementations may rely on these limitations (for example, keeping bookkeeping information about the allocation immediately before the start address, which would make moving the start address difficult.)
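A hypothetical illustration of that bookkeeping (real allocators differ in the details):
#include <stddef.h>

/* A small header often sits immediately before the pointer handed to the user,
   which is why the block's start address cannot simply move. */
struct alloc_header {
    size_t size;               /* usable size of the block that follows */
    /* flags, free-list links, ... */
};
/* user pointer == (char *)header + sizeof(struct alloc_header) */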
Now, malloc isn't the only allocator out there -- your platform or libraries might provide other ones, or you could write your own (which gets memory from the system via malloc, mmap, VirtualAlloc or some other mechanism) and then hands it out to your program in whatever fashion you desire.
For C++, if you allocate memory with std::malloc, the information above applies. If you're using new and delete, you're allocating storage for and constructing objects, and so changing the size of an allocated block doesn't make sense -- objects in C++ are a fixed size.
You can make it shorter with realloc(). I don't think the rest is possible.
You can use realloc() to apparently make the memory shorter. Note that for some implementations such a call will actually do nothing. You can't free the first bit of the block and retain the last bit.
If you find yourself needing this kind of functionality, you should consider using a more complex data structure. An array is not the correct answer to every programming problem.
http://en.wikipedia.org/wiki/New_(C%2B%2B)
SUMMARY: In contrast to C's realloc, it is not possible to directly reallocate memory allocated with new[]. To extend or reduce the size of a block, one must allocate a new block of adequate size, copy over the old memory, and delete the old block. The C++ standard library provides a dynamic array that can be extended or reduced in its std::vector template.