Is it possible to solve a CUDA memory fragmentation issue?

I'm trying to allocate some memory but sometimes get an "out of memory" error. cudaMemGetInfo says that more memory is available than I need, so the problem is memory fragmentation. Is it possible to fix this? Is it possible to place the data not as one contiguous block, but split into a few pieces that each fit into memory?

If you get "out of memory" because of memory fragmentation, then there is some error in the way that you work with memory!! You are responsible for fragmenting that memory, consider a redesign of your program and for example use a pool of memory to avoid too much new/delete to avoid fragmenting memory

Related

C++: can a memory leak lead to a memory error?

"Memory error" in the title means the type of error that can cause the program to crash or corrupt managed memory.
To make it clearer, also assume memory full is not this type of "memory error".
Thanks
If your leak causes you to run out of memory, then one thing that can happen is that memory allocations will fail. If you are not correctly dealing with these failed allocations, then all sorts of bad things can happen.
But, in general, I would say that if you have memory corruption going on, it is not directly due to the leak. More likely the leak is irrelevant, or the leak and the memory trashing are both symptoms of a different bug.
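For reference, "correctly dealing with failed allocations" means either catching std::bad_alloc from the throwing form of new, or checking the pointer returned by the nothrow form. A minimal sketch (the 1 TiB request is just an arbitrary size that is expected to fail):

    #include <cstdio>
    #include <new>

    int main() {
        // Throwing form: exhaustion is reported via std::bad_alloc.
        try {
            char* big = new char[1ull << 40];   // ~1 TiB, expected to fail
            delete[] big;
        } catch (const std::bad_alloc&) {
            std::fprintf(stderr, "allocation failed, degrade gracefully\n");
        }

        // Non-throwing form: you must check the returned pointer yourself.
        char* p = new (std::nothrow) char[1ull << 40];
        if (p == nullptr) {
            std::fprintf(stderr, "nothrow new returned nullptr\n");
        }
        delete[] p;   // deleting a null pointer is a harmless no-op
        return 0;
    }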
valgrind?
If the leak is big enough, yes, it will.
Yes, it can. Once you are out of memory, allocations start to fail, and a program that does not check for that can end up writing through bad pointers into memory which is in use.
If you are able to run your program in a simulator, you can just put your function in an infinite while loop and check your task manager. If the simulated process keeps growing by tens of MBs, there is almost certainly a memory leak.
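If you want something more precise than eyeballing Task Manager, you could print the process working set from inside that loop, assuming a Windows build. A hedged sketch (suspectedLeakyFunction is a made-up stand-in that deliberately leaks):

    #include <windows.h>
    #include <psapi.h>    // GetProcessMemoryInfo; link with psapi.lib
    #include <cstddef>
    #include <cstdio>

    // Made-up stand-in for the code under test; it leaks 64 bytes per call.
    void suspectedLeakyFunction() { new char[64]; }

    int main() {
        for (int i = 0; ; ++i) {
            suspectedLeakyFunction();
            if (i % 100000 == 0) {
                PROCESS_MEMORY_COUNTERS pmc = {};
                if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
                    // A working set that only ever grows is the signature of a leak.
                    std::printf("iteration %d: working set %zu KB\n", i,
                                static_cast<std::size_t>(pmc.WorkingSetSize) / 1024);
                }
            }
        }
    }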

Preallocating memory space for a program's use

In my Windows C++ program, I allocate thousands of small objects on the heap by calling new CMyClass().
Performance seems to suffer because of this.
Is there a way to preallocate some minimum amount of heap memory for the program's use, so that allocations are served from that preallocated space whenever I call new CMyClass(), to improve performance?
Thanks.
You seem to be looking for a memory pool - http://www.codeproject.com/Articles/27487/Why-to-use-memory-pool-and-how-to-implement-it
Note that you can pre-allocate some memory and then use placement new to prevent multiple allocations.
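A minimal sketch of that idea, with a simplified stand-in for CMyClass: reserve one block up front, construct the objects into it with placement new, and destroy them explicitly when done.

    #include <cstddef>
    #include <new>
    #include <vector>

    // Simplified stand-in for the CMyClass from the question.
    struct CMyClass { int data[4] = {}; };

    int main() {
        const std::size_t count = 10000;

        // One up-front heap allocation instead of thousands of small ones.
        std::vector<unsigned char> arena(count * sizeof(CMyClass));

        std::vector<CMyClass*> objects;
        objects.reserve(count);
        for (std::size_t i = 0; i < count; ++i) {
            void* slot = arena.data() + i * sizeof(CMyClass);
            objects.push_back(new (slot) CMyClass()); // placement new: no allocation
        }

        // Placement-new'd objects must be destroyed explicitly; the arena itself
        // is released when the vector goes out of scope.
        for (CMyClass* obj : objects) obj->~CMyClass();
        return 0;
    }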

CRT memory allocation

Our application allocates a large std::vector<> of geometric coordinates.
It must be a vector (which means contiguous) because it is eventually sent to OpenGL to draw the model,
and OpenGL works with contiguous data.
At some point the allocation fails, meaning that reserving memory throws a std::bad_alloc exception.
However, there is still a lot of free memory at that moment.
The problem is that a contiguous block cannot be allocated.
So the first two questions are:
Is there any way to control the way in which the CRT allocates memory? Or a way to defragment it (crazy idea, I know)?
Maybe there is a way to check whether the runtime can allocate a block of memory of a given size, without using try/catch.
The problem above was partially solved by splitting this one large vector into several vectors and calling OpenGL once for each of them.
However, the question remains how to choose the size of each smaller vector: if there are many of them with fairly small sizes, they will almost certainly fit in memory, but there will be many OpenGL calls, which will slow the visualization down.
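Roughly, the chunking workaround looks like the sketch below; drawChunk is a hypothetical placeholder for the glBufferData/glDrawArrays path, and the chunk size is exactly the knob the question is asking about.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Vertex { float x, y, z; };

    // Hypothetical placeholder for uploading and drawing one contiguous chunk.
    void drawChunk(const std::vector<Vertex>& chunk) { (void)chunk; }

    // Split the model into fixed-size contiguous pieces: each allocation stays
    // small enough to find a free contiguous region, at the cost of one draw
    // call per chunk.
    void drawModelInChunks(const Vertex* src, std::size_t total, std::size_t chunkSize) {
        for (std::size_t offset = 0; offset < total; offset += chunkSize) {
            const std::size_t n = std::min(chunkSize, total - offset);
            std::vector<Vertex> chunk(src + offset, src + offset + n);
            drawChunk(chunk);  // chunk memory is freed before the next iteration
        }
    }

    int main() {
        std::vector<Vertex> model(3000000);  // stand-in for the real coordinates
        drawModelInChunks(model.data(), model.size(), 1000000);
        return 0;
    }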
You can't go beyond ~600MiB of contiguous memory in a 32-bit address space. Compile as 64-bit and run it on a 64-bit platform to get around this (hopefully forever).
That said, if you have such demanding memory requirements, you should look into a custom allocator. You can use a disk-backed allocation that will appear to the vector as memory-based storage. You can mmap the file for OpenGL.
If heap fragmentation really is your problem, and you're running on Windows, then you might like to investigate the Low Fragmentation Heap options at http://msdn.microsoft.com/en-us/library/windows/desktop/aa366750(v=vs.85).aspx

Breaking when a certain amount of bytes is allocated

_CrtDumpMemoryLeaks(), if you didn't know, is a function that dumps all the memory leaks in a program. Mine currently reports a 3632062-byte memory leak (it's not being deallocated).
I was wondering:
Is there any way to make Visual C++ Express break when a certain number of bytes has been allocated? That way I could break once 3632062 bytes have been allocated, then read the stack trace to see where the allocation came from.
This is currently the only method I can think of for finding where the memory is being allocated so I can fix it. I've searched my code a lot, but I can't find anywhere that would need to allocate 3632062 bytes (the only file I load is 2767136 bytes..), although I am certain the leak is related to the file I'm operating on.
Any ideas for finding the source of the memory leak? I'm using native C++ with Visual C++ 2010.
You could do this using _CrtSetAllocHook to track the total memory usage.
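A hedged sketch of that approach using the MSVC debug CRT (debug builds only); the 3632062 threshold is the figure from the leak dump, and the hook deliberately avoids allocating anything itself so it cannot recurse:

    #include <crtdbg.h>   // _CrtSetAllocHook, _CrtDbgBreak (debug CRT)
    #include <cstddef>

    static size_t g_totalAllocated = 0;
    static const size_t g_breakThreshold = 3632062;  // size of the reported leak

    static int __cdecl MyAllocHook(int allocType, void* /*userData*/, size_t size,
                                   int /*blockType*/, long /*requestNumber*/,
                                   const unsigned char* /*filename*/, int /*lineNumber*/)
    {
        if (allocType == _HOOK_ALLOC) {
            g_totalAllocated += size;
            if (g_totalAllocated >= g_breakThreshold)
                _CrtDbgBreak();   // break into the debugger; inspect the call stack
        }
        return 1;                 // non-zero: let the allocation proceed
    }

    int main()
    {
        _CrtSetAllocHook(MyAllocHook);
        // ... run the code that loads the file and leaks ...
        _CrtDumpMemoryLeaks();
        return 0;
    }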
UMDH will give you a list of allocated blocks in all heaps. This might be what you want, since breaking on hitting a given total alloc threshold will not tell you where all of the blocks were allocated.
I have previously used this simple memory leak detector with good success for finding memory leaks.

How do I predict whether a serious memory fragmentation issue is likely to happen?

Can I check the page fault rate, or something else? :)
No question is stupid. If you're worried about memory fragmentation, you will need to rework how you allocate memory so that you can track it. Perhaps you should overload operator new in those classes where you feel fragmentation would cause the most harm, and use some sort of logging to record where it is all going. That should suffice as a first exercise. Then, if you find that fragmentation really is hurting you, you can create chunks of memory of the right size to guarantee that your items are laid out the way you want them to be.
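As a concrete illustration of the operator new overload suggested above (Particle is a made-up example class; real code would log to a file rather than stderr):

    #include <cstdio>
    #include <cstdlib>
    #include <new>

    // Made-up example class whose allocations we want to track.
    struct Particle {
        double x, y, z;

        static void* operator new(std::size_t size) {
            void* p = std::malloc(size);
            if (!p) throw std::bad_alloc();
            std::fprintf(stderr, "Particle new: %zu bytes at %p\n", size, p);
            return p;
        }
        static void operator delete(void* p) noexcept {
            std::fprintf(stderr, "Particle delete: %p\n", p);
            std::free(p);
        }
    };

    int main() {
        Particle* p = new Particle();   // goes through the logging overloads
        delete p;
        return 0;
    }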
Are you on Windows? There is a Low-fragmentation Heap available there that can be used as a preventative measure. It only takes a few lines to set up and should help with these issues.
You use HeapSetInformation to set it. Something like this should do the trick:
    #include <windows.h>
    #include <malloc.h>   // _get_heap_handle

    ULONG hi = 2;         // 2 enables the Low Fragmentation Heap
    HeapSetInformation((HANDLE)_get_heap_handle(),
                       HeapCompatibilityInformation,
                       &hi, sizeof(hi));
If you are using C++, I wouldn't bother trying to predict it. The nice thing that encapsulation gives you is that you can at a later date change the memory allocation strategy without breaking all existing code. So I would do nothing and see if fragmentation does actually occur in real-life circumstances - it very probably won't.
AFAIK memory fragmentation is likely to happen when there is a lot of allocation and deallocation of small objects.
Have a look at Boost.Pool to prevent memory fragmentation.
http://www.boost.org/doc/libs/1_41_0/libs/pool/doc/index.html
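A minimal sketch of using it, assuming Boost is installed (Node is a made-up example type); an object_pool carves many small objects out of a few larger chunks instead of hitting the general-purpose heap each time:

    #include <boost/pool/object_pool.hpp>

    struct Node { int value; Node* next; };

    int main() {
        boost::object_pool<Node> pool;

        Node* a = pool.construct();  // allocated from the pool, not the global heap
        Node* b = pool.construct();
        a->value = 1; a->next = b;
        b->value = 2; b->next = nullptr;

        pool.destroy(a);             // return individual objects early...
        pool.destroy(b);
        return 0;                    // ...or let ~object_pool release everything
    }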
An indication of fragmentation would be for example, if your system reports having 5MB free but you are unable to allocate large chunks (like 1MB) at a time. This suggests that your system has some free memory but it has been chopped up into small non-contiguous pieces.
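One crude way to measure that is sketched below: ask malloc for progressively smaller blocks until one succeeds, which gives a rough upper bound on the largest contiguous block currently available (on systems that overcommit, the figure can be optimistic):

    #include <cstdio>
    #include <cstdlib>

    int main() {
        std::size_t size = std::size_t(1) << 30;   // start at 1 GiB
        while (size >= 1024) {
            void* p = std::malloc(size);
            if (p) {
                std::printf("largest contiguous block: roughly %zu KiB\n", size / 1024);
                std::free(p);
                return 0;
            }
            size /= 2;                             // halve and try again
        }
        std::printf("could not even allocate 1 KiB\n");
        return 0;
    }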
Your memory fragmentation is largely due to the data structures you use and to your memory manager. You can usually assume that any library you want to use will cause a fair amount of memory fragmentation of its own. To completely stop it, you need to use data structures that minimize these issues, try to avoid allocating and deallocating data unnecessarily, and possibly tackle the memory manager itself.