Is the memory allocated by the new operator consecutive? - C++

As the title says, I want to know whether, in C++, the memory allocated by one new operation is consecutive...

BYTE* data = new BYTE[size];
In this code, whatever size is given, the returned memory region is consecutive. If the heap manager can't allocate a consecutive region of that size, the allocation fails: new throws an exception (and malloc returns NULL).
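A small illustration of those two failure modes (the request size below is deliberately absurd so that the allocation fails on any realistic machine; exact behavior is implementation-specific):
#include <cstdint>
#include <cstdlib>
#include <new>

int main() {
    try {
        // new[] reports failure by throwing (std::bad_alloc or a subclass)...
        char* data = new char[SIZE_MAX / 2];
        delete[] data;
    } catch (const std::bad_alloc&) {
        // ...which we can catch here.
    }

    // ...whereas malloc reports failure by returning NULL.
    void* p = std::malloc(SIZE_MAX / 2);
    if (p == nullptr) {
        // handle the failed allocation
    }
    std::free(p);
}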
Programmers will always see the illusion of consecutive (and yes, infinite :-) memory in a process's address space. This is what virtual memory provides to programmers.
Note that programmers (except on a few embedded systems) always see virtual memory. However, virtually consecutive memory can be mapped (at the granularity of the 'page' size, typically 4KB) to physical memory in an arbitrary fashion. You can't see that mapping, and mostly you don't need to understand it (except for very specific page-level optimizations).
What about this?
BYTE* data1 = new BYTE[size1];
BYTE* data2 = new BYTE[size2];
Here you can't say anything about the relative addresses of data1 and data2; the layout is generally non-deterministic. It depends on the heap manager's policies (often new is just a wrapper around malloc) and on the current heap state when each request is made.
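A quick way to see this for yourself (the printed addresses are implementation- and run-specific; nothing about their relative placement is guaranteed):
#include <cstdio>

typedef unsigned char BYTE;

int main() {
    BYTE* data1 = new BYTE[100];
    BYTE* data2 = new BYTE[100];
    // Each block is internally contiguous, but the gap between them and
    // even their relative order is up to the heap manager.
    std::printf("data1 = %p\ndata2 = %p\n",
                static_cast<void*>(data1), static_cast<void*>(data2));
    delete[] data2;
    delete[] data1;
}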

The memory allocated in your process's address space will be contiguous.
How those bytes are mapped into physical memory is implementation-specific; if you allocate a very large block of memory, it is likely to be mapped to different parts of physical memory.
Edit: Since someone disagrees that the bytes are guaranteed to be contiguous, the standard says (3.7.3.1):
The allocation function attempts to allocate the requested amount of storage. If it is successful, it shall return the address of the start of a block of storage whose length in bytes shall be at least as large as the requested size.

Case 1:
Using "new" to allocate an array, as in
int* foo = new int[10];
In this case, each element of foo will be in contiguous virtual memory.
Case 2:
Using separate, consecutive "new" operations, as in
int* foo = new int;
int* bar = new int;
In this case, there is no guarantee that the memory returned by separate calls to "new" will be adjacent in virtual memory.

The virtual addresses of the allocated bytes will be contiguous. They will also be physically contiguous within the resident pages backing your process's address space. The mapping of physical pages to regions of the process's virtual space is very OS- and platform-specific, but in general you cannot assume a physically contiguous range larger than a page, or one that is not aligned on a page boundary.

If by your question you mean "Will successive (in time) new() operations return adjacent chunks of memory, with no gaps in between?", this old programmer will suggest, very politely, that you should not rely on it.
The only reason that question would come up is if you intended to walk a pointer "out" of one data object and "into" the next one. This is a really bad idea, since you have no guarantee that the next object in the address space is anything remotely resembling the same type as the previous one.

Yes.
Don't worry about the "virtual memory" issue: apart from cases where the system doesn't support virtual memory at all, from your point of view you get a consecutive memory chunk. That's all.

Physical memory is not necessarily contiguous; it's the logical (virtual) memory that is contiguous.

Related

I don't understand the memory issue of appending to a string

Runtime error: pointer index expression with base 0x000000000000 overflowed to 0xffffffffffffffff for frequency sort
The first answer at that link says that appending a char to a string can cause a memory issue.
string s = "";
char c = 'a';
int max = INT_MAX;
for (int j = 0; j < max; j++)
    s = s + c;
The answer explains that s = s + c in the above code copies the same string again and again, so it will cause a memory issue. But I don't understand why that code copies the same string again and again.
Could someone help me understand that part :)?
I don't understand why that code copies the same string again and again.
Okay, let's look at the what happens each time the loop is iterated:
s = s + c;
There are three things the program has to do in order to execute that line of code:
1. Compute the temporary value s + c -- to do that, the program has to create a temporary, anonymous std::string object, and allocate for it (from the heap) an internal byte-buffer that is at least one byte larger than the number of chars currently in s (so that it can hold all of s's old contents, plus the additional char provided by c).
2. Set s equal to the temporary-string. In C++03 and earlier, this would be done by reallocating s's internal byte-buffer to be larger, then copying all of the bytes from the temporary-string into s's new/larger buffer. C++11 optimizes this a bit via the new move-assignment operator, so that all the bytes don't have to be copied; rather, s can simply take ownership of the temporary-string's byte-buffer.
3. Free the temporary string's resources, now that we're done using it. In practice, this takes the form of the std::string class's destructor calling delete[] on the old (no-longer-large-enough) byte-buffer.
Given that the above is going to be performed at least 2 billion times in a loop, it's already quite inefficient.
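(As an aside, a conventional fix is to append in place with operator+=, which avoids the per-iteration temporary entirely; a minimal sketch:)
#include <string>

int main() {
    std::string s;
    s.reserve(1000000);           // optional: pre-allocate to avoid repeated growth
    for (int j = 0; j < 1000000; j++)
        s += 'a';                 // appends in place; amortized O(1) per character
}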
However, what I think the answer you referred to was particularly concerned about was heap fragmentation. Keep in mind that heap allocation doesn't work by magic; when you (or the std::string class, or anyone) asks to allocate N bytes of memory from the heap, the heap implementation's job is to find N bytes of contiguous memory and return it. And since there is no provision in C++ for moving blocks of memory around (as doing so would invalidate any pointers that the program might have pointing into those blocks of memory), the heap can't create an N-byte contiguous-memory-chunk out of smaller chunks; instead, there has to be a range of contiguous-memory-space already available. For example, it does the heap no good to have a total of 1GB of memory available, if that 1GB of memory is made up of thousands of nonconsecutive 1KB chunks and the caller is asking for a 2KB allocation.
Therefore, the heap's job is to efficiently allocate chunks of memory of the sizes the program requests, and when they are freed again, it will try to glue them back together into larger chunks again if it can, but it may not always be able to. Certain patterns of allocating and freeing memory may result in heap fragmentation, which is simply a large number of discontinuous memory allocations that render the small regions of free memory between them unusable for large allocations.
Whether or not this particular allocate/free pattern would cause that, I'm not sure; given that only one or two buffers are being allocated at a time, the heap may be able to reabsorb them back into adjacent free-memory chunks as they get freed again -- it probably depends on the particular heap algorithm the system is using, as well as on whether any other threads are allocating/freeing heap memory while this is going on. But I wouldn't be too surprised if there are systems out there where it would cause problems (particularly on 16-bit or 32-bit systems where virtual address space is limited, or on embedded systems that don't use virtual memory).

Malloc memory check if contiguous

I am implementing a memory-pool-type class. One of its methods allocates B bytes of memory and returns a void pointer to it, while internally handling buffers and moving older memory around to ensure that all memory managed by the object during its lifetime is contiguous (similar to how a std::vector has a pre-allocated buffer and allocates extra space once the buffer runs out, copying the contents of the old buffer into the new one to ensure all memory is contiguous). My question is: how do I ensure, or check, that all the allocated memory is contiguous? If I wish to jump from object to object manually, using
static_cast<desired_type*>(buffer_pointer + N)
this method will naturally fail if the location of an object is offset by some amount that isn't just the sum of the sizes of the previous objects. I am new to writing custom memory pools, so I am wondering: how do I either ensure the allocated memory is non-fragmented, or get the location of each new fragment so that I can still manually index through a block of malloc()-ed memory? Thank you.
If I understand your question, you're asking if you can have multiple calls to malloc return contiguous memory.
The answer is no: memory will not be contiguous across multiple mallocs, as most memory managers put head/tail data around the allocated memory, both for their own management and as protection markers around the edges to detect overruns; the details are heavily implementation-dependent.
For your own memory management, you need to allocate a big enough block with malloc and then split it up and manage the internals yourself.
You can look at this github proj as an example of the management required:
https://github.com/bcorriveau/memblock
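As a minimal sketch of that split-it-yourself approach, here is a hypothetical bump allocator (the class and its names are illustrative, not taken from the linked project):
#include <cstddef>
#include <cstdlib>

// Grab one large block up front, then hand out aligned sub-chunks
// sequentially. Since we control the offsets ourselves, everything we
// hand out is contiguous within the block -- no hidden housekeeping
// bytes in between.
class BumpPool {
public:
    explicit BumpPool(std::size_t bytes)
        : base_(static_cast<char*>(std::malloc(bytes))), size_(bytes), offset_(0) {}
    ~BumpPool() { std::free(base_); }

    // align must be a power of two.
    void* allocate(std::size_t n, std::size_t align = alignof(std::max_align_t)) {
        std::size_t p = (offset_ + align - 1) & ~(align - 1);   // round up
        if (base_ == nullptr || p + n > size_) return nullptr;  // out of space
        offset_ = p + n;
        return base_ + p;
    }

private:
    char* base_;
    std::size_t size_;
    std::size_t offset_;
};
Consecutive allocate() calls return adjacent (aligned) addresses within the same block, which is exactly what separate raw malloc() calls cannot promise.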

C++ allocating array non contiguously

Let's consider this C++ code as a rough example.
int *A = new int [5];
int *B = new int [5];
int *C = new int [5];
delete []A;
delete []C;
int *D = new int [10];
Obviously any machine can handle this case without any problems with buffer overflow or memory leaks. However, let's imagine that the lengths are multiplied by one million or an even bigger number. As far as I know, the addresses (at least the virtual addresses) of all array elements are consecutive. So whenever I create an array, I can be sure that it is a contiguous chunk in virtual memory, and I can perform pointer arithmetic to access the n-th element if I have a pointer to the first one. My question is illustrated in the following image (registers representing the ends of the arrays are ignored for the sake of simplicity).
After allocating A, B, C in the heap we free A and C and get two free memory chunks of length 5 (marked with green dots). What happens when I want to allocate an array of length 10? I think that there are 3 possible cases.
I will get a bad_alloc exception because there is no contiguous free memory chunk of length 10.
The program will automatically reallocate array B to the beginning of the heap and join together the rest of the unused memory.
The array D will be split into 2 parts and stored non-contiguously, causing non-constant access time for the n-th element of the array (if there are many more than 2 splits, it starts to resemble a linked list rather than an array).
Which one of these is the most likely answer, or is there another possible case I didn't take into account?
I will get a bad_alloc exception because there is no contiguous free memory chunk of length 10.
This can happen.
The program will automatically reallocate array B to the beginning of the heap and join together the rest of the unused memory.
This cannot happen. Moving an object to a different address is not possible in C++ because it would invalidate existing pointers.
The array D will be split into 2 parts and stored non-contiguously, causing non-constant access time for the n-th element of the array (if there are many more than 2 splits, it starts to resemble a linked list rather than an array).
This also cannot happen. In C++ array elements are stored contiguously, so that pointer arithmetic is possible.
But there are in fact more possibilities. To understand them, we must account for the fact that the memory can be virtual. This, among other things, means that available address space may be larger than the amount of physically present memory. A chunk of physical memory can be assigned any address from the available address space.
As an example, consider a machine with 8GB (2^33 bytes) of memory running a 64-bit OS on a 64-bit CPU. Addresses allocated to the program do not all have to be below 8GB; it can receive a megabyte chunk of memory at address 0x00000000ffff0000 and another megabyte chunk at address 0x0000ffffffff0000. The total amount of memory allocated to the program cannot be more than 2^33 bytes, but each chunk can be located anywhere in the 2^64 space. (In reality this is a bit more complicated, but similar enough to what I describe.)
In your picture, you have 15 little squares that represent chunks of memory. Let's say it's physical memory. Virtual memory is 15,000 little squares, of which you can use any 15 at any given time.
So, considering this fact, the following scenarios are also possible.
A chunk of virtual address space is given to the program that is not backed by real physical memory. When and if the program attempts to access this space, the OS will try to allocate physical memory and map it to the corresponding address so that the program can continue. If this attempt fails, the program may be killed by the OS. The newly-free memory is now available to other programs that may want it.
The two short chunks of memory are mapped to new virtual addresses such that they form one long contiguous chunk in the virtual memory space. Remember that typically there are many more virtual memory addresses than there is physical memory, and it is normally easy to find an unassigned range. Typically this scenario is only realized when the memory chunks in question are large.
The problem you are asking about is called heap fragmentation, and it's a real, hard problem.
I will get a bad_alloc exception because there is no contiguous free memory chunk of length 10.
This is the theory. But such a situation is really only possible within a 32-bit process; the 64-bit address space is vast.
That is, with a 64-bit process, it is more likely that heap fragmentation stops your new implementation from reusing some memory, which leads to an out-of-memory condition because it needs to ask the kernel for new memory for the entire D array instead of just half of it. Also, such an OOM condition will more likely cause your process to get shot by the OOM killer sometime when you try to access a location in D, rather than new throwing an exception, because the kernel won't realize that it has overcommitted its memory until it's too late. For more information, google "memory overcommitment".
The program will automatically reallocate array B to the beginning of the heap and join together the rest of the unused memory.
No, it can't. You are in C++, and your runtime does not know where you might have stored pointers to B, so it would either risk missing a pointer that needs to be modified, or risk modifying something that's not a pointer to B but happens to have the same bit pattern.
The array D will be split into 2 parts and stored non-contiguously, causing non-constant access time for the n-th element of the array (if there are many more than 2 splits, it starts to resemble a linked list rather than an array).
This is also not possible because C++ guarantees contiguous storage of arrays (to allow array accesses to be implemented via pointer arithmetic).
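A tiny illustration of that guarantee: because array storage is contiguous, indexing is just pointer arithmetic.
#include <cassert>

int main() {
    int* D = new int[10];
    for (int n = 0; n < 10; n++)
        assert(&D[n] == D + n);   // element n sits exactly n ints past D
    delete[] D;
}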

Sequential memory allocation

I'm planning an application that allocates a lot of variables in memory. Unlike a "regular" application, I want this memory to be allocated in specific memory blocks of 4096 bytes. My allocated variables must be placed in memory sequentially, one after another, in order to fill the whole allocated region.
For example, I allocate a region (4096 bytes) of memory, and this region is ready for my further use. From then on, each time my application creates a new variable in memory (which in a "regular" application would probably be done with malloc), that variable will be placed in the free space of my memory region.
This sequential memory allocation is similar to how array allocation works. But in my case, I need an array that can contain many types of data (string, byte, int, ...).
One possible way to achieve this is with pointer arithmetic. I want to avoid that method, as it may introduce a lot of bugs into my application.
Maybe someone solved this problem before?
Thank you!
malloc() by no means guarantees that subsequently allocated blocks are at sequential memory addresses. Even worse, most implementations use a small number of bytes before and/or after the allocated block for 'housekeeping'. This means that even if you're lucky and the addresses are sequential, there will be small gaps between the blocks. So the actual allocated blocks are slightly bigger, to make space for those 'housekeeping' bytes.
As you suggest, you'll need to write some code yourself: a few wrapper functions around malloc(), realloc(), and friends. You can hide all the logic in these functions; application code using them should be no more complex than it would be if malloc() did what you wanted.
Important questions: Why do you need to have these blocks adjacent to each other? What about freeing blocks?
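If the answers to those questions permit it, one possible sketch of the idea is a fixed 4096-byte region that constructs mixed-type values back to back using placement new (all names here are illustrative):
#include <cstddef>
#include <memory>
#include <new>
#include <string>
#include <utility>

// One 4096-byte region; objects of different types are constructed
// back to back (with alignment padding) via placement new.
struct Region {
    alignas(std::max_align_t) char buf[4096];
    std::size_t used = 0;

    template <typename T, typename... Args>
    T* create(Args&&... args) {
        std::size_t p = (used + alignof(T) - 1) & ~(alignof(T) - 1);
        if (p + sizeof(T) > sizeof(buf)) return nullptr;  // region is full
        used = p + sizeof(T);
        return new (buf + p) T(std::forward<Args>(args)...);
    }
};

int main() {
    Region r;
    int* i = r.create<int>(42);
    std::string* s = r.create<std::string>("hello");
    (void)i;
    std::destroy_at(s);  // non-trivial objects must be destroyed manually
}
The pointer arithmetic is confined to Region::create, so the rest of the application never touches raw offsets.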

How does a computer 'know' what memory is allocated?

When memory is allocated in a computer, how does it know which bytes are already occupied and can't be overwritten?
So if these are some bytes of memory that aren't being used:
[0|0|0|0]
How does the computer know whether they are or not? They could just be an integer that equals zero. Or it could be empty memory. How does it know?
That depends on the way the allocation is performed, but it generally involves manipulation of data belonging to the allocation mechanism.
When you allocate some variable in a function, the allocation is performed by decrementing the stack pointer. Via the stack pointer, your program knows that anything below the stack pointer is not allocated to the stack, while anything above the stack pointer is allocated.
When you allocate something via malloc() etc. on the heap, things are similar, but more complicated: all these allocators have internal data structures which they never expose to the calling application, but which allow them to select which memory addresses to return for an allocation request. Some malloc() implementations, for instance, use a number of memory pools for small objects of fixed sizes, and maintain a linked list of free objects for each fixed size they track. That way, they can quickly pop a memory region off the appropriate list, only doing more expensive computations when they run out of regions to satisfy a given request size.
In any case, each allocator has to request memory from the system kernel from time to time. This mechanism always works on complete memory pages (usually 4 KiB), via the syscalls brk() and mmap(). Again, the kernel keeps track of which pages are visible in which processes, and at which addresses they are mapped, so there is additional memory allocated inside the kernel for this.
These mappings are made available to the processor via the page tables, which it uses to resolve virtual memory addresses to physical addresses. So here, finally, you have some hardware involved in the process, but that is really far, far down in the guts of the mechanics, much below anything that a userspace process is ever able to see. Still, even the page tables are managed by the kernel's software, not by the hardware; the hardware only interprets what the software writes into the page tables.
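As a rough illustration of the free-list scheme described above (a sketch, not any real malloc implementation):
#include <cstddef>
#include <cstdlib>

// Freed blocks are threaded into a singly linked list; allocation pops
// the head, falling back to malloc() when the list is empty.
struct FreeList {
    struct Node { Node* next; };
    static constexpr std::size_t kBlock = 64;  // fixed block size, >= sizeof(Node)
    Node* head = nullptr;

    void* allocate() {
        if (head != nullptr) {          // fast path: reuse a freed block
            Node* n = head;
            head = n->next;
            return n;
        }
        return std::malloc(kBlock);     // slow path: ask the system allocator
    }

    void deallocate(void* p) {          // push the block for later reuse
        Node* n = static_cast<Node*>(p);
        n->next = head;
        head = n;
    }
};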
First of all, I have the impression that you believe there is some unoccupied memory that doesn't hold any value. That's wrong. You can think of memory as a very large array where each cell contains a value, whether or not someone has put something in it. If a memory location was never written, it contains an arbitrary value.
Now to answer your question: it's not the computer (meaning the hardware) but the operating system. It holds, somewhere in its memory, tables recording which parts of memory are in use. Also, any byte of memory can be overwritten.
In general, you cannot tell by looking at the content of memory at some location whether that portion of memory is used or not. A memory value of '0' does not mean the memory is not used.
To tell what portions of memory are used you need some structure to tell you this. For example, you can divide memory into chunks and keep track of which chunks are used and which are not.
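A sketch of that chunk-tracking idea, using one bit per chunk (purely illustrative):
#include <bitset>
#include <cstddef>

// Divide a region into fixed-size chunks and record usage in a bitmap:
// one bit per chunk, set = occupied.
struct ChunkTracker {
    static constexpr std::size_t kChunks = 1024;
    std::bitset<kChunks> used;

    // Returns the index of a free chunk, or kChunks if all are occupied.
    std::size_t acquire() {
        for (std::size_t i = 0; i < kChunks; i++)
            if (!used[i]) { used[i] = true; return i; }
        return kChunks;
    }
    void release(std::size_t i) { used[i] = false; }
};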
Memory blocks carry a flag saying whether they are occupied or not. On the heap, very complex data structures organise all of this. But as asked, the question is too broad to answer fully.