I am trying to find the maximum amount of memory that can be allocated using new[]. I have used binary search to make the probing a bit faster, in order to find the largest block that can finally be allocated:
bool allocated = false;
int* ptr = nullptr;
int mid = 0;
int low = 0, high = std::numeric_limits<int>::max();
while (true)
{
    try
    {
        mid = (low + high) / 2;
        ptr = new int[mid];
        delete[] ptr;
        allocated = true;
    }
    catch (const std::bad_alloc&)
    { /* ... */ }
    if (allocated)
    {
        low = mid;
    }
    else
    {
        high = low;
        cout << "maximum memory allocated at: " << ptr << endl;
    }
}
I have modified my code and am now using new logic to solve this. My problem right now is that it goes into a never-ending loop. Is there a better way to do this?
This code is useless for a couple of reasons.
Depending on your OS, the memory may or may not be allocated until it is actually accessed. That is, new happily returns a new memory address, but it doesn't make the memory available just yet. It is actually allocated later when and if a corresponding address is accessed. Google up "lazy allocation". If the out-of-memory condition is detected at use time rather than at allocation time, allocation itself may never throw an exception.
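If you want the probe to mean something, you have to actually touch the memory. Here is a minimal sketch, assuming a 4096-byte page size (which varies by platform); note that on an overcommitting OS the write itself may get the process killed rather than make new throw:

#include <cstddef>
#include <new>

// Sketch: write one byte per page so lazily allocated memory is actually
// committed. The 4096-byte page size is an assumption; query it on a real system.
bool tryAllocateAndTouch(std::size_t count)
{
    const std::size_t pageSize = 4096;
    try {
        char* p = new char[count];
        for (std::size_t i = 0; i < count; i += pageSize)
            p[i] = 1; // forces the OS to back this page
        delete[] p;
        return true;
    } catch (const std::bad_alloc&) {
        return false;
    }
}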
If you have a machine with more than 2 gigabytes available, and your int is 32 bits, the size you pass to new[] will eventually overflow and become negative before the memory is exhausted. Then you may get a bad_alloc. Use size_t for all things that are sizes.
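For instance, a sketch of the search bounds done with size_t (this assumes the value passed to new int[...] is an element count, not a byte count):

#include <cstddef>
#include <limits>

// Overflow-safe bounds for the bisection, as element counts for new int[...].
std::size_t low  = 0;
std::size_t high = std::numeric_limits<std::size_t>::max() / sizeof(int);
std::size_t mid  = low + (high - low) / 2; // never overflows, unlike (low + high) / 2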
Assuming you are doing ++alloc and not ++allocation, it shouldn't matter what address it uses. If you want it to use a different address every time, then don't delete the pointer.
This is a particularly bad test.
For the first part you have undefined behaviour. That's because you should only ever delete[] the pointer returned to you by new[]. You need to delete[] pvalue, not value.
The second thing is that your approach will be fragmenting your memory as you're continuously allocating and deallocating contiguous memory blocks. I imagine that your program will understate the maximum block size due to this fragmentation effect. One solution to this would be to launch instances of your program as a new process from the command line, setting the allocation block size as a parameter. Use a divide and conquer bisection approach to attain the maximum size (with some reliability) in log(n) trials.
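A sketch of that driver, assuming a hypothetical helper binary ./allocprobe that takes a byte count, tries to allocate and touch that much memory, and exits with status 0 on success:

#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <string>

int main()
{
    // Bisect over a fresh process per probe, so fragmentation in this
    // process cannot skew the result. ./allocprobe is hypothetical.
    std::size_t low = 0, high = std::size_t(1) << 40; // 1 TiB upper guess
    while (high - low > 1) {
        std::size_t mid = low + (high - low) / 2;
        std::string cmd = "./allocprobe " + std::to_string(mid);
        if (std::system(cmd.c_str()) == 0)
            low = mid;  // mid bytes were allocatable: search upward
        else
            high = mid; // probe failed: search downward
    }
    std::printf("largest allocatable block: about %zu bytes\n", low);
    return 0;
}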
This is the first time I am trying to use std::unique_ptr, but I am getting an access violation when using std::make_unique with a large size.
What is the difference in this case, and is it possible to catch this type of exception in C++?
void SmartPointerfunction(std::unique_ptr<int>& Mem, int Size)
{
    try
    {
        /* declare smart pointer */
        //Mem = std::unique_ptr<int>(new int[Size]); // using new (no crash)
        Mem = std::make_unique<int>(Size); // using make_unique (crash when Size = 10000!!)

        /* set values */
        for (int k = 0; k < Size; k++)
        {
            Mem.get()[k] = k;
        }
    }
    catch (std::exception& e)
    {
        std::cout << "Exception: " << e.what() << std::endl;
    }
}
When you invoke std::make_unique<int>(Size), what you actually do is allocate memory for a single int (commonly 4 bytes) and initialize it to the value Size. Since the allocation holds only one int, Mem.get()[k] will touch addresses out of bounds.
But out of bounds doesn't mean your program crashes immediately. As you may know, the addresses we touch in a program are virtual memory addresses, so let's look at the typical layout of a process's virtual address space.
The address space is divided into several segments (stack, heap, bss, etc.). When we request dynamic memory, the returned address will usually be located in the heap segment (I say usually because sometimes the allocator will use mmap, in which case the address is located in a memory-mapped area between the stack and the heap).
The dynamic memory we obtain is not contiguous, but the heap is a contiguous segment. From the OS's point of view, any access to the heap segment is legal, and this is exactly what the allocator relies on. The allocator manages the heap and divides it into blocks, some of which are marked "used" and some of which are marked "free". When we request dynamic memory, the allocator looks for a free block that can hold the size we need (splitting it into a smaller new block if the free block is much larger than we need), marks it as used, and returns its address. If no such free block can be found, the allocator calls sbrk to grow the heap.
Even if we access an address that is out of range, as long as it is within the heap the OS will regard it as a legal operation, although it might overwrite data in some used blocks or write data into a free block. But if the address we try to access is outside the heap, for example an address greater than the program break or an address located in the bss, the OS will regard it as a segmentation fault and crash the program immediately.
So your program crashing has nothing to do with the exact parameter of std::make_unique<int>. It just so happens that when you specify 10000, the addresses you access fall outside the segment.
std::make_unique<int>(Size);
This doesn't do what you are expecting!
It creates a single int and initializes it to the value Size!
I'm pretty sure your plan was to do:
auto p = std::make_unique<int[]>(Size);
Note the extra brackets. Also note that the result type is different: it is not std::unique_ptr<int> but std::unique_ptr<int[]>, and for this type operator[] is provided!
A fixed version is sketched below, but IMO you should just use std::vector.
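A minimal sketch of that fix, mirroring the function from the question:

#include <iostream>
#include <memory>

void SmartPointerfunction(std::unique_ptr<int[]>& Mem, int Size)
{
    try
    {
        // The array form allocates Size ints; unique_ptr<int[]> provides operator[].
        Mem = std::make_unique<int[]>(Size);
        for (int k = 0; k < Size; k++)
        {
            Mem[k] = k;
        }
    }
    catch (const std::exception& e)
    {
        std::cout << "Exception: " << e.what() << std::endl;
    }
}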
My understanding is that memory allocated on the free store (the heap) should grow upwards as I allocate additional free store memory; however, when I run my code, occasionally the memory location of the next object allocated on the free store will be a lower value. Is there an error with my code, or could someone please explain how this could occur? Thank you!
int main()
{
    int* a = new int(1);
    int* b = new int(1);
    int* c = new int(1);
    int* d = new int(1);
    cout << "Free Store Order: " << int(a) << " " << int(b) << " " << int(c) << " " << int(d) << '\n';
    // An order I found: 13011104, 12998464, 12998512, 12994240
    delete a;
    delete b;
    delete c;
    delete d;
    return 0;
}
The main problem with that code is that you are casting int * to int, an operation that may lose precision, and therefore give you incorrect results.
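If you do want to print the addresses, cast to const void* (or to std::uintptr_t if you need an integer), for example:

#include <cstdint>
#include <iostream>

int main()
{
    int* a = new int(1);
    std::cout << static_cast<const void*>(a) << '\n';         // full pointer value
    std::cout << reinterpret_cast<std::uintptr_t>(a) << '\n'; // same address as an integer
    delete a;
    return 0;
}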
But, aside from that, this statement is a misapprehension:
My understanding is that memory allocated on the free store (the heap) should grow upwards as I allocate additional free store memory.
There is no guarantee that new will return objects with sequential addresses, even if they're the same size and there have been no previous allocations. A simple allocator may well do that but it is totally free to allocate objects in any manner it wants.
For example, it may allocate in a round-robin fashion from multiple arenas to reduce resource contention. I believe the jemalloc implementation does this, albeit on a per-thread basis.
Or maybe it has three fixed-address 128-byte buffers to hand out for small allocations so that it doesn't have to fiddle about with memory arenas in programs with small and short-lived buffers. That means the first three will be specific addresses outside the arena, while the fourth is "properly" allocated from the arena.
Yes, I know that may seem a contrived situation, but I've actually done something similar in an embedded system where, for the vast majority of allocations, there were fewer than 64 128-byte allocations in flight at any given time.
Using that method means that most allocations were blindingly fast, using a count and bitmap to figure out free space in the fixed buffers, while still being able to handle larger needs (> 128 bytes) and overflows (> 64 allocations).
And deallocations simply detected if you were freeing one of the fixed blocks and marked it free, rather than having to return it to the arena and possibly coalesce it with adjacent free memory sections.
In other words, something like (with suitable locking to prevent contention, of course):
def free(address):
    if address is one of the fixed buffers:
        set free bit for that buffer to true
        return
    call realFree(address)

def alloc(size):
    if size is greater than 128 or fixed buffer free count is zero:
        return realAlloc(size)
    find first free fixed buffer
    decrement fixed buffer free count
    set free bit for that buffer to false
    return address of that buffer
The bottom line is that the values returned by new have certain guarantees but ordering is not one of them.
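A minimal C++ rendition of that sketch, with std::malloc/std::free standing in for the hypothetical realAlloc/realFree and locking omitted:

#include <cstddef>
#include <cstdlib>

constexpr std::size_t kBufSize = 128, kBufCount = 64;
alignas(std::max_align_t) static char fixedBufs[kBufCount][kBufSize];
static bool used[kBufCount];                // false = free
static std::size_t freeCount = kBufCount;

void* alloc(std::size_t size)
{
    if (size > kBufSize || freeCount == 0)
        return std::malloc(size);           // realAlloc stand-in
    for (std::size_t i = 0; i < kBufCount; ++i)
        if (!used[i]) {
            used[i] = true;
            --freeCount;
            return fixedBufs[i];            // fast path: fixed buffer
        }
    return std::malloc(size);               // defensive; not reached
}

void dealloc(void* p)
{
    char* c = static_cast<char*>(p);
    char* base = reinterpret_cast<char*>(fixedBufs);
    if (c >= base && c < base + sizeof fixedBufs) {
        used[(c - base) / kBufSize] = false; // just mark the slot free
        ++freeCount;
        return;
    }
    std::free(p);                           // realFree stand-in
}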
The following code gives me a segmentation fault:
bool primeNums[100000000]; // index corresponds to number, t = prime, f = not prime
for (int i = 0; i < 100000000; ++i)
{
    primeNums[i] = false;
}
However, if I change the array declaration to be dynamic:
bool *primeNums = new bool[100000000];
I don't get a seg-fault. I have a general idea of why this is: in the first example, the memory's being put on the stack while in the dynamic case it's being put on the heap.
Could you explain this in more detail?
bool primeNums[100000000];
uses up all your stack space; therefore, you get a segmentation fault, since there is not enough stack space to allocate an array that large with automatic storage.
A dynamic array is allocated on the heap; therefore, it is not that easy to get a segmentation fault. Dynamic arrays are created using new in C++, which calls operator new to allocate the memory and then initializes the allocated memory.
More information about how operator new works is quoted from the standard below [new.delete.single]:
Required behavior:
Return a nonnull pointer to suitably aligned storage (3.7.3), or else throw a bad_alloc exception. This requirement is binding on a replacement version of this function.
Default behavior:
— Executes a loop: Within the loop, the function first attempts to allocate the requested storage. Whether the attempt involves a call to the Standard C library function malloc is unspecified.
— Returns a pointer to the allocated storage if the attempt is successful. Otherwise, if the last argument to set_new_handler() was a null pointer, throw bad_alloc.
— Otherwise, the function calls the current new_handler (18.4.2.2). If the called function returns, the loop repeats.
— The loop terminates when an attempt to allocate the requested storage is successful or when a called new_handler function does not return.
So when a dynamic array created with new cannot be allocated, it throws bad_alloc by default; in that case you will see an exception, not a segmentation fault. When your array size is huge, it is better to use a dynamic array or a standard container such as std::vector.
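A minimal sketch of catching that exception:

#include <iostream>
#include <new>

int main()
{
    try {
        bool* primeNums = new bool[100000000]; // heap: throws std::bad_alloc on failure
        delete[] primeNums;
    } catch (const std::bad_alloc& e) {
        std::cout << "allocation failed: " << e.what() << '\n';
    }
    return 0;
}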
bool primeNums[100000000];
This declaration allocates memory in the stack space. The stack space is a memory block allocated when your application is launched. It is usually in the range of a few kilobytes or megabytes (it depends on the language implementation, compiler, OS, and other factors).
This space is used to store local variables, so you have to be gentle and not overuse it. Because this is a stack, all allocations are contiguous (no empty space between allocations).
bool *primeNums = new bool[100000000];
In this case the memory is allocated on the heap. This is free space where large new chunks of memory can be allocated.
Some compilers or operating systems limit the size of the stack. On Windows the default is 1 MB, but it can be changed.
In the first case you allocate memory on the stack:
bool primeNums[100000000]; // put 100000000 bools on stack
for (int i = 0; i < 100000000; ++i)
{
    primeNums[i] = false;
}
However, this is an allocation on the heap:
bool *primeNums = new bool[100000000]; // put 100000000 bools in the heap
And since the stack is (very) limited, this is the reason for the segfault.
"Process is terminated due to StackOverflowException" is the error I receive when I run the code below. If I change 63993 to 63992 or smaller there are no errors. I would like to initialize the structure to 100,000 or larger.
#include <Windows.h>
#include <vector>

using namespace std;

struct Point
{
    double x;
    double y;
};

int main()
{
    Point dxF4struct[63993]; // if < 63992, runs fine; over, stack overflow
    Point dxF4point;
    vector<Point> dxF4storage;

    for (int i = 0; i < 1000; i++) {
        dxF4point.x = i; // arbitrary values
        dxF4point.y = i;
        dxF4storage.push_back(dxF4point);
    }
    for (int i = 0; i < dxF4storage.size(); i++) {
        dxF4struct[i].x = dxF4storage.at(i).x;
        dxF4struct[i].y = dxF4storage.at(i).y;
    }
    Sleep(2000);
    return 0;
}
You are simply running out of stack space - it's not infinite, so you have to take care not to run out.
Three obvious choices:
Use std::vector<Point>
Use a global variable.
Use dynamic allocation - e.g. Point *dxF4struct = new Point[64000]. Don't forget to call delete [] dxF4struct; at the end.
I listed the above in the order that I think is preferable; a sketch of the first option follows below.
[Technically, before someone else points that out, yes, you can increase the stack, but that's really just moving the problem up a level somewhere else, and if you keep going at it and putting large structures on the stack, you will run out of stack eventually no matter how large you make the stack]
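For example, a minimal sketch of the first option:

#include <cstddef>
#include <vector>

struct Point
{
    double x;
    double y;
};

int main()
{
    // The elements live on the heap, so 100,000 (or far more) is fine.
    std::vector<Point> dxF4struct(100000);
    for (std::size_t i = 0; i < dxF4struct.size(); ++i) {
        dxF4struct[i].x = static_cast<double>(i);
        dxF4struct[i].y = static_cast<double>(i);
    }
    return 0;
}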
Increase the stack size. On Linux, you can use ulimit to query and set the stack size. On Windows, the stack size is part of the executable and can be set during compilation.
If you do not want to change the stack size, allocate the array on the heap using the new operator.
Well, you're getting a stack overflow, so the allocated stack is too small for this much data. You could probably tell your compiler to allocate more space for your executable, though just allocating it on the heap (std::vector, you're already using it) is what I would recommend.
Point dxF4struct[63993]; // if < 63992, runs fine, over, stack overflow
On that line, you're allocating all your Point structs on the stack. I'm not sure of the exact stack size, but the default is around 1 MB. Since your struct is 16 bytes and you're allocating 63,993 of them, you have 16 bytes * 63,993 > 1 MB, which causes a stack overflow (funny, posting about a stack overflow on Stack Overflow...).
So you can either tell your environment to allocate more stack space, or allocate the object on the heap.
If you allocate your Point array on the heap, you should be able to allocate 100,000 easily (assuming this isn't running on some embedded processor with less than 1 MB of memory):
Point *dxF4struct = new Point[63993];
As a commenter wrote, it's important to know that if you "new" memory on the heap, it's your responsibility to "delete" the memory. Since this uses array new[], you need to use the corresponding array delete[] operator. Modern C++ has a smart pointer which will help with managing the lifetime of the array.
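For instance, a sketch using the array form of std::make_unique (C++14), which calls delete[] automatically:

#include <memory>

struct Point
{
    double x;
    double y;
};

int main()
{
    // delete[] happens automatically when dxF4struct goes out of scope.
    auto dxF4struct = std::make_unique<Point[]>(100000);
    dxF4struct[0].x = 1.0;
    dxF4struct[0].y = 2.0;
    return 0;
}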
Ok, so I'm just learning about memory leaks. I ran valgrind to find memory leaks and I get the following:
==6134== 24 bytes in 3 blocks are definitely lost in loss record 4 of 4
==6134== at 0x4026351: operator new(unsigned int) (vg_replace_malloc.c:255)
==6134== by 0x8048B74: readInput(char&) (in calc)
So does that definitively mean the leak is in my readInput function? If so, how do I get rid of the memory leaks? Here's the offending function:
double* readInput(char& command){
    std::string in;
    std::getline(std::cin, in);
    if(!isNumber(in)){
        if(in.length()>1){
            command = 0;
        }
        else{
            command = in.c_str()[0];
        }
        return NULL;
    }
    else{
        return new double(atof(in.c_str()));
    }
}
thanks!
// ...
return new double(atof(in.c_str()));
// ...
new acquires a resource from the free store, and that resource is what is being returned. The returned value must be deallocated using delete to avoid a memory leak.
If you are calling the function in a while loop, number must be deallocated using delete before the next iteration of the loop. Calling delete just once after the loop will only deallocate the very last resource acquired.
Edit:
// ...
while (condition1)
{
    double* number = readInput(command);
    if (condition2)
    { /* ... */ }
    else
    { /* ... */ }

    delete number; // Should be done inside the loop itself.
                   // readInput either returns NULL or a valid memory location.
                   // delete can be called on a NULL pointer.
}
You're returning a new double... When is that freed? You do call delete on it at some point... right?
Personally, I would recommend just returning non-zero for success and zero for failure and put the value in a double * (or double &) parameter. That way you don't need to bother with new at all.
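A sketch of that suggestion (the two-output signature is illustrative, and isNumber is the helper from the question, defined elsewhere):

#include <cstdlib>
#include <iostream>
#include <string>

bool isNumber(const std::string& s); // from the question's code

// Returns true and fills 'value' if the line is a number; otherwise sets
// 'command' and returns false. Nothing is heap-allocated, so nothing leaks.
bool readInput(char& command, double& value)
{
    std::string in;
    std::getline(std::cin, in);
    if (isNumber(in)) {
        value = std::atof(in.c_str());
        return true;
    }
    command = (in.length() == 1) ? in[0] : 0;
    return false;
}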
You return a newly allocated double. Do you delete it somewhere?
Why are you returning a pointer to a newly allocated double? Why not just return a double? It isn't a big deal to return an eight-byte temporary value, and the caller can decide what it wants to do with it (including allocating a new double on the heap if it likes). Assuming the values aren't large, I'd much rather return a temporary. Having the new closer conceptually to the actual use makes the memory management easier.
Moreover, allocating large numbers of very small blocks can lead to inefficient heap use and heap fragmentation, so that the program might run out of memory when it wouldn't otherwise, and might not be able to allocate a large chunk even if it looks like there's plenty left. This may or may not matter (and the extra time needed to allocate memory may or may not matter, especially in a function whose running time is probably dominated by I/O). This is likely micro-optimization, but if there is no good reason for such small allocations you may as well get in the habit of not using them.
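As a variation on returning the value directly, here is a sketch using std::optional (C++17); this is a suggestion of mine rather than code from the answers above. It makes "number or not a number" explicit with no allocation at all:

#include <cstdlib>
#include <iostream>
#include <optional>
#include <string>

bool isNumber(const std::string& s); // from the question's code

std::optional<double> readInput(char& command)
{
    std::string in;
    std::getline(std::cin, in);
    if (isNumber(in))
        return std::atof(in.c_str()); // returned by value: nothing to delete
    command = (in.length() == 1) ? in[0] : 0;
    return std::nullopt;
}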