Probably everyone has run into this problem at least once during development:
while (/* some condition here that somehow never becomes false */)
{
    ...
    yourvector.push_back(new SomeType());
    ...
}
As you can see, the program starts to drain all system memory, the program hangs, and the system starts to swap like crazy. If you don't recognize the problem fast enough and kill the process, you get an unresponsive system within seconds where the mouse pointer doesn't even move. You can either wait for the program to crash with an "out of memory" error (which may take several long minutes) or hit the reset button on your computer.
If you can't track down the bug immediately, you will need several test runs and resets to find it, which is very annoying...
I'm looking for a preferably cross-platform way to prevent this. The best would be debug-mode code that exits the program if it has allocated too much memory, but how can I keep track of how much memory is allocated?
Overriding the global new and delete operators won't help, because the free function I would invoke in the delete won't give any idea how many bytes are freed.
Any ideas appreciated.
If you're on a Linux or Unix-ish system, you could check into setrlimit(2) which allows you to configure resource limits for your program. You can do similar things from the shell with ulimit.
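For example, something like this (a rough sketch, Linux/Unix only; RLIMIT_AS caps the whole address space, so the 1 GiB limit here is just illustrative):

#include <sys/resource.h>
#include <cstdio>

// Cap the process's address space so a runaway allocation loop fails with
// std::bad_alloc instead of driving the machine into swap.
static bool limit_address_space(rlim_t bytes)
{
    struct rlimit rl;
    rl.rlim_cur = bytes;   // soft limit: allocations beyond this start failing
    rl.rlim_max = bytes;   // hard limit
    return setrlimit(RLIMIT_AS, &rl) == 0;
}

int main()
{
    if (!limit_address_space(1024UL * 1024UL * 1024UL))   // ~1 GiB
        std::perror("setrlimit");

    // ... rest of the program: operator new now throws std::bad_alloc
    // once the address-space limit is reached, instead of thrashing.
}

From a shell, something like ulimit -v 1048576 before launching the program has a similar effect (the value is in kilobytes).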
Overriding the global new and delete operators won't help, because the free function I would invoke in the delete won't give any idea how many bytes are freed.
But you can make it so. Here's a full framework for overloading the global memory operators (throw it in some global_memory.cpp file):
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <new>

namespace
{
    // utility
    std::new_handler get_new_handler(void)
    {
        std::new_handler handler = std::set_new_handler(0);
        std::set_new_handler(handler);
        return handler;
    }

    // custom allocation scheme goes here!
    void* allocate(std::size_t pAmount)
    {
    }

    void deallocate(void* pMemory)
    {
    }

    // allocate with throw, properly
    void* allocate_throw(std::size_t pAmount)
    {
        void* result = allocate(pAmount);
        while (!result)
        {
            // call failure handler
            std::new_handler handler = get_new_handler();
            if (!handler)
            {
                throw std::bad_alloc();
            }
            handler();

            // try again
            result = allocate(pAmount);
        }
        return result;
    }
}
void* operator new(std::size_t pAmount) throw(std::bad_alloc)
{
    return allocate_throw(pAmount);
}

void* operator new[](std::size_t pAmount) throw(std::bad_alloc)
{
    return allocate_throw(pAmount);
}

void* operator new(std::size_t pAmount, const std::nothrow_t&) throw()
{
    return allocate(pAmount);
}

void* operator new[](std::size_t pAmount, const std::nothrow_t&) throw()
{
    return allocate(pAmount);
}

void operator delete(void* pMemory) throw()
{
    deallocate(pMemory);
}

void operator delete[](void* pMemory) throw()
{
    deallocate(pMemory);
}

void operator delete(void* pMemory, const std::nothrow_t&) throw()
{
    deallocate(pMemory);
}

void operator delete[](void* pMemory, const std::nothrow_t&) throw()
{
    deallocate(pMemory);
}
Then you can do something like:
// custom allocation scheme goes here!
const std::size_t allocation_limit = 1073741824; // 1 GiB
std::size_t totalAllocation = 0;

void* allocate(std::size_t pAmount)
{
    // make sure we're within bounds
    assert(totalAllocation + pAmount < allocation_limit);

    // over-allocate to store size
    void* mem = std::malloc(pAmount + sizeof(std::size_t));
    if (!mem)
        return 0;

    // track amount, return remainder
    totalAllocation += pAmount;
    *static_cast<std::size_t*>(mem) = pAmount;

    return static_cast<char*>(mem) + sizeof(std::size_t);
}

void deallocate(void* pMemory)
{
    // get original block
    void* mem = static_cast<char*>(pMemory) - sizeof(std::size_t);

    // track amount
    std::size_t amount = *static_cast<std::size_t*>(mem);
    totalAllocation -= amount;

    // free
    std::free(mem);
}
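Note that the scheme above is not thread-safe; if the program allocates from several threads, the running total should at least be atomic (C++11 and later), for instance:

#include <atomic>

// assuming C++11: concurrent new/delete calls no longer race on the counter
std::atomic<std::size_t> totalAllocation(0);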
because the free function I would invoke in the delete won't give any idea how many bytes are freed
It can, you'll just have to keep a map of the size of allocated memory by address, and subtract the right amount based on that information during the free.
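For instance, a minimal sketch of that idea (assuming C++11 and a single thread; all names here are mine): the map's own nodes are taken straight from malloc, so its bookkeeping never re-enters the replaced operator new.

#include <cstdlib>
#include <map>
#include <new>

template <typename T>
struct MallocAllocator
{
    typedef T value_type;
    MallocAllocator() = default;
    template <typename U> MallocAllocator(const MallocAllocator<U>&) {}
    T* allocate(std::size_t n)
    {
        void* p = std::malloc(n * sizeof(T));
        if (!p) throw std::bad_alloc();
        return static_cast<T*>(p);
    }
    void deallocate(T* p, std::size_t) { std::free(p); }
};
template <typename T, typename U>
bool operator==(const MallocAllocator<T>&, const MallocAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const MallocAllocator<T>&, const MallocAllocator<U>&) { return false; }

// size of every live allocation, keyed by address
typedef std::map<void*, std::size_t, std::less<void*>,
                 MallocAllocator<std::pair<void* const, std::size_t>>> SizeMap;

SizeMap& size_map() { static SizeMap m; return m; }
std::size_t totalAllocation = 0;

void* operator new(std::size_t size)
{
    void* p = std::malloc(size);
    if (!p) throw std::bad_alloc();
    size_map()[p] = size;
    totalAllocation += size;
    return p;
}

void operator delete(void* p) noexcept
{
    if (!p) return;
    SizeMap::iterator it = size_map().find(p);
    if (it != size_map().end())
    {
        totalAllocation -= it->second;   // now we do know how many bytes are freed
        size_map().erase(it);
    }
    std::free(p);
}

The default operator new[] and operator delete[] forward to these, so array allocations get tracked as well.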
You could implement your own global new operators:
void* operator new (std::size_t size) throw (std::bad_alloc);
void* operator new (std::size_t size, const std::nothrow_t& nothrow_constant) throw();
void* operator new (std::size_t size, void* ptr) throw();
void* operator new[] (std::size_t size) throw (std::bad_alloc);
void* operator new[] (std::size_t size, const std::nothrow_t& nothrow_constant) throw();
void* operator new[] (std::size_t size, void* ptr) throw();
Then just set a hard limit on how much memory you allocate; maybe even on how many KB/sec.
If you want an easy way to find all those potential leaks, simply use your text editor and search for .push_back in all of your source code. Then examine all occurrences of that call and see whether they sit inside a tight loop. That may help you find some bad problems in the code. Sure, you may get 100 hits, but they can be examined in a finite amount of time. Or you could write a static analyzer (using the Scitools API) to find all while loops that have a container's .push_back called inside them.
My question is not a duplicate of Is it safe to `free()` memory allocated by `new`?.
I'm writing a toy garbage collector for PODs, in which I'm defining my own custom operator new/new[] and operator delete/delete[]. Code below:
#include <cstdlib>
#include <iostream>
#include <map>

std::map<void*, std::size_t> memory; // globally allocated memory map

struct collect_t {} collect; // tag for placement new

void* operator new(std::size_t size, const collect_t&)
{
    void* addr = malloc(size);
    memory[addr] = size;
    return addr;
}

void* operator new[](std::size_t size, const collect_t&)
{
    return operator new(size, collect);
}

void operator delete(void* p, const collect_t&) noexcept
{
    memory.erase(p); // should call ::operator delete, no recursion
    free(p);
}

void operator delete[](void* p, const collect_t&) noexcept
{
    operator delete(p, collect);
}

void display_memory()
{
    std::cout << "Allocated heap memory: " << std::endl;
    for (auto && elem : memory)
    {
        std::cout << "\tADDR: " << elem.first << " "
                  << "SIZE: " << elem.second << std::endl;
    }
}

void clear()
{
    for (auto && elem : memory)
        free(elem.first); // is this safe for arrays?
    memory.clear();
}

int main()
{
    // use the garbage collector
    char *c = new(collect) char;
    int *p = new(collect) int[1024]; // true size: sizeof(int)*1024 + y (unknown overhead)
    display_memory();
    clear();
    display_memory();
}
The idea is simple: I store all allocated tracked addresses (the ones allocated with my custom new) in a std::map, and make sure that at the end of the day I clear all memory in my clear() function. I use a tag for my new and delete (and don't overload the global ones) so that std::map's allocator can call the global ones without recursing.
My question is the following: in my clear() function, I de-allocate the memory in the line
for (auto && elem : memory)
free(elem.first); // is this safe for arrays?
Is this safe for arrays, e.g. for int *p = new(collect) int[1024];? I believe it is, since void* operator new[](std::size_t size, const collect_t&) calls operator new(size, collect), and the latter calls malloc. I am not 100% sure though; can something go wrong here?
It appears to me that in order for memory to be in your memory container it must have been allocated with your custom allocator that always calls malloc. Therefore I believe your code calling free should be ok.
Obviously if someone goes around stuffing random addresses into the memory map you will wind up with all sorts of undefined behavior.
Assuming the objects using your garbage collector never implement a destructor, and this holds true for any members that those objects may contain, the code as written is "safe" in the sense that calling free() directly just bypasses the work the compiler would have done to achieve the same thing when it inlined the delete calls.
However, the code is not really safe.
If you ever changed how your garbage collector worked, or how the new function worked, then you would have to hunt down all the direct calls to free() to head off any problems. If the code was ever cut-and-pasted or otherwise reused in a context outside of your garbage collector, you would face a similar problem.
It is just better practice to always match new to delete and malloc to free.
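Applied to the collector above, that means having clear() go back through the tagged operator delete instead of calling free() directly, along these lines (a sketch against the question's code):

void clear()
{
    // Route every tracked block back through the matching tagged delete,
    // which erases it from the map and frees it in one place.
    while (!memory.empty())
    {
        operator delete(memory.begin()->first, collect);
    }
}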
(C++) I have memory-aligned instances allocated on the heap, which are then deleted in another thread. The code looks like this:
ALIGNED class Obj
{
public: ALIGNED_NEW_DELETE
...
};
Thread 1:
Obj *o = new Obj; // overloaded new for aligned memory allocation
postTask(o);
Thread 2:
o->runTask();
delete o; // overloaded delete for aligned memory deletion
// "delete" statement crashes
The delete statement in thread 2 will give an assertion error in Visual Studio 2013 (_BLOCK_TYPE_IS_VALID).
Strangely, if I delete the object in the creation thread, everything runs fine.
Why does this happen? What's the solution?
EDIT:
@galop1n: Actually, what I am currently using is Eigen's built-in new/delete operators (EIGEN_MAKE_ALIGNED_OPERATOR_NEW). I also tried my own operators; both failed.
For Eigen's operators, please look up its source yourself.
For my allocators:
void* operator new(size_t size){ return alignedMalloc(size, align); }
void operator delete(void* ptr) { alignedFree(ptr); }
void* operator new[](size_t size) { return alignedMalloc(size, align); }
void operator delete[](void* ptr) { alignedFree(ptr); }
void* alignedMalloc(size_t size, size_t align)
{
    char* base = (char*)malloc(size + align + sizeof(int));
    if (base == nullptr)
        ASSERT(0, "memory allocation failed");

    char* unaligned = base + sizeof(int);
    char* aligned = unaligned + align - ((size_t)unaligned & (align - 1));
    ((int*)aligned)[-1] = (int)((size_t)aligned - (size_t)base);
    return aligned;
}

void alignedFree(const void* ptr)
{
    int ofs = ((int*)ptr)[-1];
    free((char*)ptr - ofs);
}
And the ALIGNED macro is __declspec(align(16)). It crashes with or without the "ALIGNED" attribute.
This is awkward: the problem is in Thread 2. The Obj* is cast to a base-class pointer, Task*, and, in utter stupidity, ~Task() is not virtual:
class Task
{
public:
    ~Task(); // <-- not virtual, therefore it crashes
    ...
};

ALIGNED class Obj : public Task
{ ... };
I should have discovered this problem much, much earlier, because, as I said in my description of the problem, it gives an assertion error, _BLOCK_TYPE_IS_VALID. That assertion comes from the Visual Studio debug library's default delete operator, which means execution never even reached my overloaded delete operator, which ultimately means I missed a "virtual".
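A minimal sketch of the fix (simplified; the ALIGNED attribute is left out since, as noted above, the crash happens with or without it, and alignedMalloc/alignedFree are the functions shown earlier): once the destructor is virtual, deleting through a Task* dispatches to Obj's destructor and to Obj's overloaded operator delete.

#include <cstddef>

void* alignedMalloc(std::size_t size, std::size_t align);   // as defined above
void  alignedFree(const void* ptr);

class Task
{
public:
    virtual ~Task() {}   // the missing "virtual"
};

class Obj : public Task
{
public:
    void* operator new(std::size_t size)  { return alignedMalloc(size, 16); }
    void  operator delete(void* ptr)      { alignedFree(ptr); }
};

// Task* t = new Obj;
// ... postTask / runTask ...
// delete t;   // now reaches ~Obj and Obj::operator delete, no _BLOCK_TYPE_IS_VALID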
It's my bad that I even forgot to include the class inheritance in the question.
Sometimes I can be stuck on a problem for hours or even days, but after I post the issue online I immediately find the answer. I don't know if any of you have had similar experiences; perhaps I've put too much stress on myself.
Still, thank you, Internet.
I'm working on a memory pool / memory allocator implementation, and I am setting it up in a manner where only a special "Client" object type can draw from the pool. The client can either be constructed directly in the pool, use the pool for dynamic memory calls, or in theory do both.
I would like to be able to overload operator new and operator delete in a way that calls my pool's "alloc()" and "free()" functions in order to get the memory the object is constructed upon.
One of the main issues I am having is getting my operator delete to free the memory by calling the pool->free() function I have written. I came up with a hack that fixes it by passing the pool into the constructor and having the destructor do the deallocation work. This is all fine and dandy until someone needs to inherit from this class, overrides the destructor for their own needs, and then forgets to do the memory deallocation. That is why I want to wrap it all up in the operators, so the functionality is tucked away and inherited by default.
My Code Is on GitHub here: https://github.com/zyvitski/Pool
My class definition for the Client is as follows:
class Client
{
public:
    Client();
    Client(Pool* pool);
    ~Client();

    void* operator new(size_t size, Pool* pool);
    void operator delete(void* memory);

    Pool* m_pPool;
};
And the implementation is:
Client::Client()
{
}

Client::Client(Pool* pool)
{
    m_pPool = pool;
}

Client::~Client()
{
    void* p = (void*)this;
    m_pPool->Free(&p);
    m_pPool = nullptr;
}

void* Client::operator new(size_t size, Pool* pool)
{
    if (pool != nullptr) {
        // use pool allocator
        MemoryBlock** memory = nullptr;
        memory = pool->Alloc(size);
        return *memory;
    }
    else throw std::bad_alloc();
}

void Client::operator delete(void* memory)
{
    // should somehow free up the memory back to the pool
    // the proper call will be:
    //     pool->free(memory);
    // where memory is the address that the pool returned in operator new
}
Here is the example main() that I'm using for the moment:
int main(int argc, const char* argv[])
{
    Pool* pool = new Pool();
    Client* c = new(pool) Client(pool);

    /*
       I'm using a parameter within operator new to pass the pool in for use,
       and I'm also passing the pool as a constructor parameter so I can free
       up the memory in the destructor.
    */

    delete c;
    delete pool;
    return 0;
}
So far my code works, but I want to know whether there is a better way to achieve this.
Please let me know if anything I am asking or doing is simply impossible, bad practice, or just plain dumb. I am on a MacBook Pro right now, but I would like to keep my code cross-platform if at all possible.
If you have any questions that would help you help me, do let me know.
And of course, thanks in advance to anyone who can help.
You might store additional information just before the returned memory address
#include <iostream>
#include <type_traits>

class Pool {
public:
    static void* Alloc(std::size_t size) { return data; }
    static void Dealloc(void*) {}
private:
    static char data[1024];
};
char Pool::data[1024];

class Client
{
public:
    void* operator new(size_t size, Pool& pool);
    void operator delete(void* memory);
};

struct MemoryHeader {
    Pool* pool;
};

void* Client::operator new(size_t size, Pool& pool)
{
    auto header = static_cast<MemoryHeader*>(pool.Alloc(sizeof(MemoryHeader) + size));
    std::cout << "   New Header: " << header << '\n';
    header->pool = &pool;
    return header + 1;
}

void Client::operator delete(void* memory)
{
    auto header = static_cast<MemoryHeader*>(memory) - 1;
    std::cout << "Delete Header: " << header << '\n';
    header->pool->Dealloc(header);
}

int main()
{
    Pool pool;
    Client* p = new(pool) Client;
    std::cout << "Client Pointer: " << p << '\n';
    delete p;
    return 0;
}
With the help of Dieter Lücking I was able to figure out how to use my pool in operator new and operator delete
Here is the code for operator new:
void* ObjectBase::operator new(size_t size, Pool* pool)
{
    if (pool != nullptr) {
        // use pool allocation
        MemoryBlock** block = pool->Alloc(size + sizeof(MemoryHeader));
        MemoryBlock* t = *block;
        t = (MemoryBlock*)((unsigned char*)t + sizeof(MemoryHeader));
        MemoryHeader* header = new(*block) MemoryHeader(pool);
        header = nullptr;
        return t;
    }
    else {
        // use std allocation
        void* temp = ::operator new(size);
        if (temp != nullptr) {
            return temp;
        }
        else throw std::bad_alloc();
    }
}
Here is the code for operator delete
void ObjectBase::operator delete(void* memory)
{
    MemoryBlock* temp = (MemoryBlock*)((unsigned char*)memory - sizeof(MemoryHeader));
    MemoryHeader* header = static_cast<MemoryHeader*>(temp);
    if (header->pool != nullptr) {
        if (!header->pool->Free((MemoryBlock**)&header))
        {
            ::operator delete(memory);
        }
    }
    else {
        ::operator delete(memory);
    }
}
I'm using the "Memory Header" idea that was suggested.
The code is also set up in a way that defaults to using a standard memory allocation call if for some reason the pool fails.
Thanks Again for your help.
If your delete operator simply calls free, your custom allocator will not do a very good job. The idea of a custom allocator is that it works with a predefined memory region it has control over: when it allocates memory, that memory comes from its own region or pool, and when the memory is freed, the allocator is 'informed' that it can reuse that memory.
Now, if you use free, you just return the memory to the heap, not to your memory pool. This part is usually handled with smart pointers, to keep track of which memory is available (see the sketch below).
Any other mechanism will do as long as you can keep track of which addresses are in use and which are available.
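For example, one common shape for this (a sketch with a simplified, assumed pool interface; these are stand-ins, not the Pool and Client classes from the question):

#include <cstddef>
#include <cstdlib>
#include <memory>
#include <new>

struct Pool                        // stand-in pool with a trivial interface
{
    void* Alloc(std::size_t n) { return std::malloc(n); }
    void  Free(void* p)        { std::free(p); }
};

class Client
{
public:
    explicit Client(Pool* pool) : m_pPool(pool) {}
    Pool* m_pPool;
};

struct PoolDeleter
{
    Pool* pool;
    void operator()(Client* c) const
    {
        c->~Client();      // run the destructor by hand
        pool->Free(c);     // hand the raw block back to the pool, not the heap
    }
};

std::unique_ptr<Client, PoolDeleter> make_client(Pool& pool)
{
    void* mem = pool.Alloc(sizeof(Client));
    return std::unique_ptr<Client, PoolDeleter>(new (mem) Client(&pool),
                                                PoolDeleter{&pool});
}

// usage:
//   Pool pool;
//   auto c = make_client(pool);   // memory goes back to the pool automatically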
Hope this helps
I have some code here that implements a dynamic memory pool. The pool starts off at size 0 and grows with each successive allocation. It is used to try and minimise the overhead of tons of allocations and de-allocations.
The call to malloc is NOT matched by a call to free. It seems to rely on the application that uses it not making enough calls to new in succession for the application to leak a significant amount of memory.
I did not write it, so this is my best guess.
My questions are:
Is the absence of a call to free a bug or am I missing something to do with overloading the delete operator?
Is this implementation relying on the OS to clean up the small amount of memory that does leak at exit?
Thanks.
// Obj.h
#include <cstddef>
#include <vector>

class Obj
{
public:
    Obj(){};
    void* operator new(std::size_t size);
    void operator delete(void* p);
private:
    static std::vector<void*> pool_;
    static std::size_t checked_in_;
    static std::size_t checked_out_;
};

// Obj.cpp
#include "Obj.h"
#include <cstdlib>
#include <iostream>

std::vector<void*> Obj::pool_;
std::size_t Obj::checked_out_ = 0;
std::size_t Obj::checked_in_ = 0;

void* Obj::operator new(std::size_t size)
{
    if (pool_.empty())
    {
        ++checked_out_;
        return malloc(size);
    }
    else
    {
        --checked_in_;
        ++checked_out_;
        void* p = pool_.back();
        pool_.pop_back();
        return p;
    }
}

void Obj::operator delete(void* p)
{
    pool_.push_back(p);
    if (pool_.size() % 10000 == 0)
    {
        std::cout << "mem leak\n";
    }
    --checked_out_;
    ++checked_in_;
}
The missing 'free' means that you cannot embed this in some larger application, start it up, shut it down, and end up back where you started. This is fine if you control the entire application, and not fine if this code has to, in fact, be embeddable. To make that work, you would need some entrypoint that walks the vector calling free.
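For example, such an entrypoint could look something like this (a sketch; purge_pool would have to be declared as an extra static member of Obj):

void Obj::purge_pool()
{
    // hand every cached block back to the heap
    for (std::size_t i = 0; i < pool_.size(); ++i)
        free(pool_[i]);
    pool_.clear();
    checked_in_ = 0;
}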
It never leaks in the conventional sense, since each malloc'ed chunk is stored in the vector by operator delete for re-use, and the operator delete complains if it sees too many items in the vector.
You're creating a memory pool. This pool implementation will grow as needed but will never return memory to the OS. That's normally a bug, but not when you are rolling your own method of allocating memory, as long as the pool exists for the life of your program. You'll leak when the program exits, but you can likely live with that. Basically, you're overriding how new/malloc usually work and managing memory completely on your own.
I'm trying to make a mechanism that can tell where an object of a class is allocated.
I thought about adding a flag to the class, but it's not possible to set its value there because the object's lifetime has not yet started during the call to operator new.
Is it possible in C++ to tell if an object is on stack or heap (runtime)?
There is no portable way to do this, but if we assume you only need to support a limited set of system types, you could try the following:
Take the address of some local variable in main (or somewhere else "low in the call stack") and store it in a global variable; let's call it char *stackbase;
Then take the address of a local variable in your function that you are checking in, let's call it char *stacktop;
Now, if we have a char *obj = reinterpret_cast<char *>(object_in_test);, then:
if (obj > stacktop && obj < stackbase) on_stack = true;
else on_stack = false;
Note that there are SEVERAL flaws with this:
It's technically undefined behaviour. It will work on most systems, because the whole memory space is contiguous. But there are systems where the stack and other sections of memory have separate "address spaces", which means that two pointers to different types of memory can have the same address.
Threads will need to have a "per thread stackbase".
The stack is assumed to "grow towards zero" (if not, you'll have to invert the > and < in the if).
Global variables will be seen as not on stack.
USE AT YOUR OWN RISK!
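Putting it together, a minimal single-threaded sketch (same caveats apply, and the stack is assumed to grow towards zero):

#include <cstdio>

static char* stackbase = 0;   // set once, near the top of main

bool probably_on_stack(const void* object_in_test)
{
    char stacktop;   // the address of a local approximates the current stack top
    const char* obj = static_cast<const char*>(object_in_test);
    return obj > &stacktop && obj < stackbase;
}

int main()
{
    char base;
    stackbase = &base;

    int local = 0;
    int* heaped = new int(0);

    std::printf("local on stack?  %d\n", probably_on_stack(&local));  // expect 1
    std::printf("heaped on stack? %d\n", probably_on_stack(heaped));  // expect 0

    delete heaped;
}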
I fully expect to have to delete this answer as it will be downvoted by language lawyers, despite the disclaimer below.
I have been doing some experimentation and have discovered that this seems to work for always being able to tell at runtime whether an object was allocated on the stack or not.
The interface is as follows:
#ifndef HEAPAWARE_H
#define HEAPAWARE_H

#include <cstddef>

class HeapAware
{
public:
    HeapAware();
    void* operator new(std::size_t size);
    void* operator new[](std::size_t size);
    void operator delete(void* ptr, std::size_t);
    void operator delete[](void* ptr, std::size_t);
    bool is_on_heap() const { return on_heap; }
    std::ptrdiff_t get_heap_array_index() const { return heap_array_index; }
private:
    const bool on_heap;
    const std::ptrdiff_t heap_array_index;
    static thread_local HeapAware* last_alloc;
    static thread_local std::size_t allocated_size;
};

#endif
And the implementation is:
#include "HeapAware.h"
#include <cstdlib>

void* HeapAware::operator new(std::size_t size)
{
    auto result = last_alloc = reinterpret_cast<HeapAware*>(malloc(size));
    allocated_size = 1;
    return result;
}

void* HeapAware::operator new[](std::size_t size)
{
    auto result = last_alloc = reinterpret_cast<HeapAware*>(malloc(size));
    allocated_size = size;
    return result;
}

void HeapAware::operator delete(void* ptr, std::size_t)
{
    free(ptr);
}

void HeapAware::operator delete[](void* ptr, std::size_t)
{
    free(ptr);
}

HeapAware::HeapAware()
    : on_heap(this >= last_alloc && this < last_alloc + allocated_size),
      heap_array_index(allocated_size > 1 ? this - last_alloc : -1)
{
}

thread_local HeapAware* HeapAware::last_alloc = nullptr;
thread_local std::size_t HeapAware::allocated_size = 0;
This seems to always work correctly. For arrays allocated on the heap, the index of the entry is also available. For values that are allocated on the stack, or for entries that are just allocated singly, the get_heap_array_index() function returns -1.
The assumption that this code makes is that the new operator is called immediately before construction on any given thread. This assumption seems to hold true for everything I have tried, however.