std::map<int, int> * mp = new std::map<int, int>;
for(int i = 0; i < 999999; i++){
mp->insert(std::pair<int, int>(i, 999999-i ));
}
p("created");
//mp->clear(); - doesn't help either
delete mp;
p("freed");
The problem is: "delete mp" doesn't do anything. To compare:
std::vector<int> * vc = new std::vector<int>;
for(int i = 0; i < 9999999; i++){
vc->push_back(i);
}
p("created");
delete vc;
p("freed");
releases the memory. How do I release the memory used by the map?
PS: p("string") just pauses the program and waits for input.
The RAM used by the application is not a precise way to tell if the memory has been semantically freed.
Freed memory is memory that you can reuse. Sometimes, though, you don't observe this freed memory directly in what the OS reports as memory used by your app.
You know that the memory is freed because the semantics of the language say so.
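To illustrate the reuse point, here is a small sketch (an illustration I'm adding, not a measurement): run two rounds of the map test in the same process, and the second round will very likely land in memory the first round released back to the allocator, even if the OS-reported figure never drops in between.
#include <iostream>
#include <map>

int main() {
    for (int round = 0; round < 2; ++round) {
        std::map<int, int>* mp = new std::map<int, int>;
        for (int i = 0; i < 999999; ++i)
            mp->insert(std::pair<int, int>(i, 999999 - i));
        // Print where the first node ended up; the second round frequently
        // reuses addresses the first round gave back to the allocator.
        std::cout << "round " << round << ": first node at "
                  << static_cast<const void*>(&*mp->begin()) << std::endl;
        delete mp; // memory goes back to the allocator, not necessarily to the OS
    }
    return 0;
}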
Actually, if the following code doesn't leak:
{
std::map<int, int> some_map;
}
The following code shouldn't leak as well:
{
std::map<int, int>* some_map = new std::map<int, int>();
/* some instructions that do not throw or exit the function */
delete some_map;
}
This applies to whatever type you use with new, as long as the type is well written. And std::map is probably very well written.
I suggest you use valgrind to check for your leaks. I highly doubt that what you observed was a real leak.
As Daniel mentions, RAM used by an application is not necessarily an indicator of a memory leak.
With respect to the behavior you notice regarding vectors, a vector guarantees that its memory layout is contiguous, so when you create a vector with millions of elements they are all laid out in sequence and constitute a pretty sizable chunk of memory. Deleting it would definitely impact the process size. A map behaves differently (per Wikipedia, it's often implemented as a self-balancing binary tree), so I guess deleting it causes fragmentation in the process memory space, and maybe because of that the process memory does not drop immediately.
The best way to detect memory leaks would be to use a tool like valgrind, which will clearly indicate what's wrong.
To pinpoint whether you are releasing memory or not, try adding observable effects to the destructors of your objects and... observe them.
For instance, instead of a map, create a custom class which emits output when the destructor is invoked.
Something like this:
#include <map>
#include <iostream>
#include <utility>
class Dtor{
int counter;
public:
explicit Dtor(int c):counter(c) {std::cout << "Constructing counter: " << counter << std::endl; }
Dtor(const Dtor& d):counter(d.counter) {std::cout << "Copy Constructing counter: " << counter << std::endl; }
~Dtor(){ std::cout << "Destroying counter: " << counter << std::endl; }
};
int main(){
std::map<int, Dtor> * mp = new std::map<int, Dtor>; // store Dtor by value so the map owns (and destroys) the elements
for (int i = 0; i < 10; ++i){
mp -> insert(std::make_pair(i, Dtor(i)));
}
delete mp;
return 0;
}
You will witness that deleting the pointer invokes the destructor of your objects, as expected.
I'm building a buffer for network connections where you can either explicitly allocate memory or supply it yourself via some sequential container (e.g. std::vector, std::array). These memory chunks are stored in a list that we later use for read/write operations (the chunks are needed to handle multiple read/write requests).
I have a question about the last part: I want to take a pointer to the container's data and then tell the container to not care about its data anymore.
So, something like move semantics.
std::vector<int> v = {9,8,7,6,5,4,3,2,1,0};
std::vector<int> _v(std::move(v));
Where _v has all the values of v and v is left in a valid state.
The problem is that if I just take a pointer to v.data(), then after the lifetime of the container ends, the data pointed to by the pointer is released along with the container.
For example:
// I would use span to make sure it's a sequential container,
// but for simplicity I use a raw pointer here
// gsl::span<int> s;
int *p;
{
std::vector<int> v = {9,8,7,6,5,4,3,2,1,0};
// s = gsl::make_span(v);
p = v.data();
}
for(int i = 0; i < 10; ++i)
std::cout << p[i] << " ";
std::cout << std::endl;
Now p contains some memory trash, and I would need the memory previously owned by the vector.
I also tried v.data() = nullptr, but v.data() is an rvalue, so it's not possible to assign to it. Do you have any suggestions, or is this possible at all?
Edit:
To make it clearer what I'm trying to achieve:
class readbuf_type
{
struct item_type // representation of a chunk
{
uint8_t * const data;
size_t size;
inline item_type(size_t psize)
: size(psize)
, data(new uint8_t[psize])
{}
template <std::ptrdiff_t tExtent = gsl::dynamic_extent>
inline item_type(gsl::span<uint8_t,tExtent> s)
: size(s.size())
, data(s.data())
{}
inline ~item_type()
{ delete[] data; }
};
std::list<item_type> queue; // contains the memory
public:
inline size_t read(uint8_t *buffer, size_t size); // read from queue
inline size_t write(const uint8_t *buffer, size_t size); // write to queue
inline void *get_chunk(size_t size)
{
queue.emplace_back(size);
return queue.back().data;
}
template <std::ptrdiff_t tExtent = gsl::dynamic_extent>
inline void put_chunk(gsl::span<uint8_t,tExtent> arr)
{
queue.emplace_back(arr);
}
} readbuf;
I have the get_chunk function, which basically just allocates memory of the given size, and I have put_chunk, which is what I'm struggling with. The reason I need it is that before you can write to this queue, you have to allocate memory and then copy all the elements from the buffer (vector, array) you're trying to write from into the queue.
Something like:
std::vector<int> v = {9,8,7,6,5,4,3,2,1,0};
// instead of this
readbuf.get_chunk(v.size());
readbuf.write(v.data(), v.size());
// we want this
readbuf.put_chunk({v});
Since we're developing for distributed systems, memory is crucial, and that's why we want to avoid the unnecessary allocation and copying.
PS: This is my first post, so sorry if I wasn't precise in the first place.
No, it is not possible to "steal" the buffer of the standard vector in the manner that you suggest - or any other standard container for that matter.
You've already shown one solution: Move the buffer into another vector, instead of merely taking the address (or another non-owning reference) of the buffer. Moving from the vector transfers the ownership of the internal buffer.
It would be possible to implement such a custom vector class whose buffer could be stolen, but there is a reason why vector doesn't make it possible. It can be quite difficult to prove the correctness of your program if you release resources willy-nilly. Have you considered how to prevent the data from leaking? The solution above is much simpler and easier to verify for correctness.
Another approach is to re-structure your program in such way that no references to the data of your container outlive the container itself (or any invalidating operation).
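To make the move-based suggestion concrete, here is a minimal sketch (hypothetical code, not the OP's actual readbuf_type): the chunk owns a std::vector that the caller moves in, so ownership of the buffer is transferred without copying the bytes.
#include <cstddef>
#include <cstdint>
#include <list>
#include <vector>

class readbuf_type
{
    struct item_type // a chunk that owns its bytes via a moved-in vector
    {
        std::vector<std::uint8_t> storage;
        explicit item_type(std::vector<std::uint8_t>&& s) : storage(std::move(s)) {}
        std::uint8_t* data() { return storage.data(); }
        std::size_t size() const { return storage.size(); }
    };
    std::list<item_type> queue;
public:
    void put_chunk(std::vector<std::uint8_t>&& v)
    {
        queue.emplace_back(std::move(v)); // transfers the buffer, no per-byte copy
    }
};
Usage would be readbuf.put_chunk(std::move(v)); afterwards v is left in a valid but unspecified (typically empty) state.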
Unfortunately the memory area of the vector cannot be detached from the std::vector object. The memory area may even be freed and reallocated if you insert more data into the std::vector object. Therefore, using this memory area later is not safe, unless you are sure that this particular std::vector object still exists and has not been modified.
The solution to this problem is to allocate a new memory area and copy the content of the vector to this newly allocated memory area. The newly allocated memory area can be safely accessed without worrying about the state of the std::vector object.
std::vector<int> v = {1, 2, 3, 4};
int* p = new int[v.size()];
memcpy(p, v.data(), sizeof(int) * v.size());
Don't forget to delete the memory area after you are finished using this memory area.
delete [] p;
Your mistake is in thinking that the pointer "contains" memory. It doesn't contain anything, trash or ints or otherwise. It is a pointer. It points to stuff. You have deleted that stuff and not transferred it anywhere else, so it can't work any more.
In general, you will need a container to put this information in, be it another vector, or even your own hand-made array. Just having a pointer to data does not mean you have data.
Furthermore, since it is impossible to ask a vector to relinquish its buffer to a non-vector thing, a vector is really your only chance in this particular case. It's not quite clear why that's not a good enough solution for you. :)
I'm not sure what you're trying to achieve, but I would use move semantics like this:
#include <iostream>
#include <memory>
#include <vector>
int main() {
std::unique_ptr<std::vector<int>> p;
{
std::vector<int> v = {9,8,7,6,5,4,3,2,1,0};
p = std::make_unique<std::vector<int>>(std::move(v)); // move v's contents instead of copying them
}
for(int i = 0; i < 10; ++i)
std::cout << (*p)[i] << " ";
std::cout << std::endl;
return 0;
}
Let's assume I have a structure MyStruct and I want to allocate a "big" chunk of memory like this:
std::size_t memory_chunk_1_size = 10;
MyStruct * memory_chunk_1 = reinterpret_cast <MyStruct *> (new char[memory_chunk_1_size * sizeof(MyStruct)]);
and because of the "arbitrary reasons" I would like to "split" this chunk of memory into two smaller chunks without; moving data, resizing the "dynamic array", deallocating/allocating/reallocating memory, etc.
so I am doing this:
std::size_t memory_chunk_2_size = 5; // to remember how many elements there are in this chunk;
MyStruct * memory_chunk_2 = &memory_chunk_1[5]; // points to the 6th element of memory_chunk_1;
memory_chunk_1_size = 5; // to remember how many elements there are in this chunk;
memory_chunk_1 = memory_chunk_1; // nothing changes still points to the 1st element.
Unfortunately, when I try to deallocate the memory, I'm encountering an error:
// release memory from the 2nd chunk
for (int i = 0; i < memory_chunk_2_size ; i++)
{
memory_chunk_2[i].~MyStruct();
}
delete[] reinterpret_cast <char *> (memory_chunk_2); // deallocates memory from both "memory_chunk_2" and "memory_chunk_1"
// release memory from the 1st chunk
for (int i = 0; i < memory_chunk_1_size ; i++)
{
memory_chunk_1[i].~MyStruct(); // Throws exception.
}
delete[] reinterpret_cast <char *> (memory_chunk_1); // Throws exception. This part of the memory was already deallocated.
How can I delete only a selected number of elements (to solve this error)?
Compilable example:
#include <iostream>
using namespace std;
struct MyStruct
{
int first;
int * second;
void print()
{
cout << "- first: " << first << endl;
cout << "- second: " << *second << endl;
cout << endl;
}
MyStruct() :
first(-1), second(new int(-1))
{
cout << "constructor #1" << endl;
print();
}
MyStruct(int ini_first, int ini_second) :
first(ini_first), second(new int(ini_second))
{
cout << "constructor #2" << endl;
print();
}
~MyStruct()
{
cout << "destructor" << endl;
print();
delete second;
}
};
int main()
{
// memory chunk #1:
std::size_t memory_chunk_1_size = 10;
MyStruct * memory_chunk_1 = reinterpret_cast <MyStruct *> (new char[memory_chunk_1_size * sizeof(MyStruct)]);
// initialize:
for (int i = 0; i < memory_chunk_1_size; i++)
{
new (&memory_chunk_1[i]) MyStruct(i,i);
}
// ...
// Somewhere here I decided I want to have two smaller chunks of memory instead of one big,
// but i don't want to move data nor reallocate the memory:
std::size_t memory_chunk_2_size = 5; // to remember how many elements there are in this chunk;
MyStruct * memory_chunk_2 = &memory_chunk_1[5]; // points to the 6th element of memory_chunk_1;
memory_chunk_1_size = 5; // to remember how many elements there are in this chunk;
memory_chunk_1 = memory_chunk_1; // nothing changes still points to the 1st element.
// ...
// some time later i want to free memory:
// release memory from the 2nd chunk
for (int i = 0; i < memory_chunk_2_size ; i++)
{
memory_chunk_2[i].~MyStruct();
}
delete[] reinterpret_cast <char *> (memory_chunk_2); // deallocates memory from both "memory_chunk_2" and "memory_chunk_1"
// release memory from the 1st chunk
for (int i = 0; i < memory_chunk_1_size ; i++)
{
memory_chunk_1[i].~MyStruct(); // Throws exception.
}
delete[] reinterpret_cast <char *> (memory_chunk_1); // Throws exception. This part of the memory was already deallocated.
// exit:
return 0;
}
This kind of selective deallocation is not supported by the C++ programming language, and is probably never going to be supported.
If you intend to deallocate individual portions of memory, those individual portions need to be individually allocated in the first place.
It is possible that a specific OS or platform might support this kind of behavior, but it would be with OS-specific system calls, not through C++ standard language syntax.
Memory allocated with malloc or new cannot be partially deallocated. Many heaps use bins of differently sized allocations for performance and to prevent fragmentation, so allowing partial frees would make such a strategy impossible.
That of course does not prevent you writing your own allocator.
The simplest way I can think of by means of standard C++ would follow this idiomatic code:
std::vector<int> v1(1000);
auto block_start = v1.begin() + 400;
auto block_end = v1.begin() + 500;
std::vector<int> v2(block_start,block_end);
v1.erase(block_start,block_end);
v1.shrink_to_fit();
Whether a compiler is intelligent enough to translate such a pattern into the most efficient low-level OS and CPU memory management operations is an implementation detail.
Here's the working example.
Let's be honest: this is very bad practice! Casting the results of new and delete, and in addition calling the destructor yourself between the two, is evidence of low-level manual memory management.
Alternatives
The proper way to manage dynamic memory structures in contiguous blocks in C++ is to use std::vector instead of manual arrays or manual memory management, and let the library do the work. You can resize() a vector to delete the unneeded elements. You can call shrink_to_fit() to say that you no longer need the extra free capacity, although it's not guaranteed that the unneeded memory is released.
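A minimal sketch of that approach (my own illustration, not from the original answer):
#include <vector>

int main() {
    std::vector<int> v(1000);
    v.resize(500);       // destroys the trailing 500 elements
    v.shrink_to_fit();   // non-binding request to give the spare capacity back
    return 0;
}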
The use of C memory allocation functions, and in particular realloc(), is to be avoided, as it is very error prone and works only with trivially copyable objects.
Edit: Your own container
If you want to implement your own special container and must allow this kind of dynamic behaviour due to unusual special constraints, then you should consider writing your own memory management functions that manage a kind of "private heap".
Heap management is often implemented via a linked list of free chunks.
One strategy could be to allocate a new chunk when there's not enough contiguous memory left in your private heap. You could then offer a more permissive myfree() function that reinserts a freed or partially freed chunk into that linked list. Of course, this requires iterating through the linked list to find out whether the released memory is adjacent to any other chunk of free memory in the private heap, and merging the blocks if so.
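As an illustration of that strategy, here is a minimal sketch with hypothetical names (PrivateHeap, my_alloc, my_free); for simplicity the free list is kept in a std::map keyed by offset rather than an intrusive linked list, and alignment, growth and thread safety are ignored:
#include <cstddef>
#include <iterator>
#include <map>
#include <new>
#include <vector>

class PrivateHeap {
    std::vector<char> storage;                     // the one big block we manage
    std::map<std::size_t, std::size_t> free_list;  // offset -> size, ordered by offset
public:
    explicit PrivateHeap(std::size_t bytes) : storage(bytes) {
        free_list[0] = bytes;                      // initially the whole block is free
    }
    void* my_alloc(std::size_t bytes) {
        for (auto it = free_list.begin(); it != free_list.end(); ++it) {
            if (it->second >= bytes) {             // first fit
                std::size_t off = it->first;
                std::size_t rest = it->second - bytes;
                free_list.erase(it);
                if (rest) free_list[off + bytes] = rest;
                return storage.data() + off;
            }
        }
        throw std::bad_alloc();                    // no contiguous block is big enough
    }
    // Accepts partial frees: any sub-range previously handed out can be returned.
    void my_free(void* p, std::size_t bytes) {
        std::size_t off = static_cast<char*>(p) - storage.data();
        auto it = free_list.emplace(off, bytes).first;
        auto next = std::next(it);                 // merge with the following block if adjacent
        if (next != free_list.end() && it->first + it->second == next->first) {
            it->second += next->second;
            free_list.erase(next);
        }
        if (it != free_list.begin()) {             // merge with the preceding block if adjacent
            auto prev = std::prev(it);
            if (prev->first + prev->second == it->first) {
                prev->second += it->second;
                free_list.erase(it);
            }
        }
    }
};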
I see that MyStruct is very small. Another approach could then be to write a special allocation function optimised for small fixed size blocks. There is an example in Loki's small object library that is described in depth in "Modern C++ Design".
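In the same spirit, here is a minimal sketch of a fixed-size-block pool (a hypothetical FixedPool, loosely following that idea rather than Loki's actual implementation): free slots are chained through the slots themselves, so allocation and deallocation are constant time.
#include <cstddef>

template <std::size_t BlockSize, std::size_t BlockCount>
class FixedPool {
    static_assert(BlockSize >= sizeof(void*), "a block must be able to hold a pointer");
    alignas(std::max_align_t) char slab[BlockSize * BlockCount];
    void* free_head = nullptr;                      // head of the intrusive free list
public:
    FixedPool() {
        for (std::size_t i = 0; i < BlockCount; ++i) {
            void* slot = slab + i * BlockSize;
            *static_cast<void**>(slot) = free_head; // push slot onto the free list
            free_head = slot;
        }
    }
    void* allocate() {
        if (!free_head) return nullptr;             // pool exhausted
        void* slot = free_head;
        free_head = *static_cast<void**>(slot);     // pop
        return slot;
    }
    void deallocate(void* p) {
        *static_cast<void**>(p) = free_head;        // push back
        free_head = p;
    }
};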
Finally, you could perhaps have a look at the Pool library of Boost, which also offers a chunk based approach.
I have a program (full code here) that is exiting around the 46000th iteration:
{
PROCESSER<MONO_CONT> processer;
c_start = std::clock();
for (unsigned long long i = 0; i < iterations; i++) {
BloombergLP::bdlma::BufferedSequentialAllocator alloc(pool, sizeof(pool));
MONO_CONT* container = new(alloc) MONO_CONT(&alloc);
container->reserve(elements);
processer(container, elements);
}
c_end = std::clock();
std::cout << (c_end - c_start) * 1.0 / CLOCKS_PER_SEC << " ";
}
In this case, MONO_CONT is a vector<string, scoped_allocator_adaptor<alloc_adaptor<BloombergLP::bdlma::BufferedSequentialAllocator>>>.
My understanding is that the scoped_allocator_adaptor would make sure that the supplied allocator is used for the allocations of the strings being passed in, thus ensuring the strings would be deallocated at the end of each loop iteration (avoiding #1201ProgramAlarm's suggestion for the problem). The alloc_adaptor is just a wrapper to make Bloomberg allocators conform to the proper interface.
PROCESSER is the following templated functor that just performs some basic operations on the templated container, MONO_CONT:
template<typename DS2>
struct process_DS2 {
void operator() (DS2 *ds2, size_t elements) {
escape(ds2);
for (size_t i = 0; i < elements; i++) {
ds2->emplace_back(&random_data[random_positions[i]], random_lengths[i]);
}
clobber();
}
};
Note that escape and clobber are just some magic that do nothing other than defeat the optimizer (see this talk if you're interested). random_data is just an array of chars containing garbage. random_positions defines valid indices into random_data. random_lengths defines a valid string length (does not go off the end of the garbage data) starting from the corresponding position in random_positions.
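For reference, such optimizer-defeating helpers are typically written as GCC/Clang inline assembly along these lines (the canonical versions from that talk; the ones in my full code may differ slightly):
// Make the optimizer assume the pointed-to data is used somewhere it cannot see.
static void escape(void* p) {
    asm volatile("" : : "g"(p) : "memory");
}

// Make the optimizer assume all memory may have been read or written.
static void clobber() {
    asm volatile("" : : : "memory");
}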
I have similar code that runs the exact same number of iterations, and does not fail:
{
PROCESSER<MONO_CONT> processer;
c_start = std::clock();
for (unsigned long long i = 0; i < iterations; i++) {
BloombergLP::bdlma::BufferedSequentialAllocator alloc(pool, sizeof(pool));
MONO_CONT container(&alloc);
container.reserve(elements);
processer(&container, elements);
}
c_end = std::clock();
std::cout << (c_end - c_start) * 1.0 / CLOCKS_PER_SEC << " ";
}
The main difference between the two snippets is that in the first, I'm newing the container into the allocator and then passing the allocator to the container, relying on the allocator's destruction to deallocate all the memory of the container (without having to actually call the destructor of the container itself). In the second snippet I'm allowing the more natural destruction of the container by letting it go out of scope at the end of each iteration of the loop.
I'm building this with Clang, running in a Docker container on Debian. Any suggestions on what the issue could be or how I could start debugging this?
While you're relying on the allocator's destruction to deallocate the memory allocated for container, this will not free up memory used by the contained strings, which will not be using the allocator for the vector but will be using the global heap (new). When the program runs out of memory it exits without reporting anything, possibly because it doesn't have enough memory available to do so.
In your second version, container is destroyed at the end of each iteration, which frees the allocated strings when the vector is destroyed.
As far as how to debug something like this, the usual advice of "try stepping through it in the debugger" is a start. If you run attached to a debugger it might break when the std::bad_alloc exception is created or thrown. And you can monitor the process's memory usage.
I'm trying to use a hash map in C++. The problem is that when I add a constant entry like mymap["U.S."] = "Washington";, it uses malloc() (I verified this using gdb with a breakpoint on malloc()). Unnecessary calls to malloc will reduce performance, hence I am trying to avoid them as much as possible. Any ideas?
#include <iostream>
#include <unordered_map>
int main ()
{
std::unordered_map<std::string,std::string> mymap;
mymap["U.S."] = "Washington";
std::cout << mymap["U.S."] << std::endl;
return 0;
}
I know the malloc is necessary when the scope of a key is short, since one may need to use mymap[key] even after the key is destroyed. But in my case, all the keys (a huge number of them) will be persistent in memory.
How does an allocator create and destroy an array? For example:
int* someInt = someAllocator(3);
Where without the allocator it would just be
int* someInt = new int[3];
Where the allocator is responsible for creating each element and ensuring the constructor is called.
How are the internals of an allocator written without the use of new? Could someone provide an example of such a function?
I do not want to just use std::vector as I am trying to learn how an allocator will create an array.
The problem of general memory allocation is a surprisingly tricky one. Some consider it solved and some unsolvable ;) If you are interested in internals, start by taking a look at Doug Lea's malloc.
The specialized memory allocators are typically much simpler - they trade the generality (e.g. by making the size fixed) for simplicity and performance. Be careful though, using general memory allocation is usually better than a hodge-podge of special allocators in realistic programs.
Once a block of memory is allocated through the "magic" of the memory allocator, it can be initialized at the container's pleasure using placement new.
--- EDIT ---
The placement new is not useful for "normal" programming - you'd only need it when implementing your own container to separate memory allocation from object construction. That being said, here is a slightly contrived example for using placement new:
#include <new> // For placement new.
#include <cassert>
#include <iostream>
class A {
public:
A(int x) : X(x) {
std::cout << "A" << std::endl;
}
~A() {
std::cout << "~A" << std::endl;
}
int X;
};
int main() {
// Allocate a "dummy" block of memory large enough for A.
// Here, we simply use stack, but this could be returned from some allocator.
alignas(A) char memory_block[sizeof(A)]; // alignas keeps the buffer suitably aligned for A
// Construct A in that memory using placement new.
A* a = new(memory_block) A(33);
// Yup, it really is constructed!
assert(a->X == 33);
// Destroy the object, without freeing the underlying memory
// (which would be disaster in this case, since it is on stack).
a->~A();
return 0;
}
This prints:
A
~A
--- EDIT 2 ---
OK, here is how you do it for the array:
int main() {
// Number of objects in the array.
const size_t count = 3;
// Block of memory big enough to fit 'count' objects.
alignas(A) char memory_block[sizeof(A) * count]; // aligned for A
// To make pointer arithmetic slightly easier.
A* arr = reinterpret_cast<A*>(memory_block);
// Construct all 3 elements, each with different parameter.
// We could have just as easily skipped some elements (e.g. if we
// allocated more memory than is needed to fit the actual objects).
for (int i = 0; i < count; ++i)
new(arr + i) A(i * 10);
// Yup, all of them are constructed!
for (int i = 0; i < count; ++i) {
assert(arr[i].X == i * 10);
}
// Destroy them all, without freeing the memory.
for (int i = 0; i < count; ++i)
arr[i].~A();
return 0;
}
BTW, if A had a default constructor, you could try calling it on all elements like this...
new(arr) A[count];
...but this would open a can of worms you really wouldn't want to deal with.
I've written about it in my second example here:
How to create an array while potentially using placement new
The difference is that the t_allocator::t_array_record would be managed by the allocator rather than the client.
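For completeness, here is a minimal sketch (an addition of mine, not part of the linked answer) of the same allocate/construct/destroy/deallocate sequence expressed through std::allocator and std::allocator_traits, which is how standard containers drive an allocator for an array of elements:
#include <cstddef>
#include <iostream>
#include <memory>

int main() {
    std::allocator<int> alloc;
    using traits = std::allocator_traits<std::allocator<int>>;

    const std::size_t count = 3;
    int* p = traits::allocate(alloc, count);            // raw memory, no constructors run
    for (std::size_t i = 0; i < count; ++i)
        traits::construct(alloc, p + i, static_cast<int>(i) * 10); // construct in place
    for (std::size_t i = 0; i < count; ++i)
        std::cout << p[i] << " ";
    std::cout << std::endl;
    for (std::size_t i = 0; i < count; ++i)
        traits::destroy(alloc, p + i);                   // destroy without freeing
    traits::deallocate(alloc, p, count);                 // release the raw memory
    return 0;
}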