I have a problem with C++ pointers.
Here is my code:
#include <iostream>
#include <vector>

int main() {
    std::vector<int*>* data = new std::vector<int*>;
    for (int i = 0; i < 1000; i++) {
        data->push_back(new int[100000]);
    }
    for (int i = 0; i < 100; i++) {
        delete data->at(i);
    }
    data->clear();
    delete data;
    data = nullptr;
    return 0;
}
After
std::vector<int*>* data = new std::vector<int*>;
for (int i = 0; i < 1000; i++) {
    data->push_back(new int[100000]);
}
It takes 384 MB (I found this in Task Manager)
But after
for (int i = 0; i < 100; i++) {
    delete data->at(i);
}
It still takes 346 MB
After
delete data;
data = nullptr;
It doesn't change anything
My question is: what can I do to completely delete a pointer and free the memory?
First, you probably aren't actually deleting everything. Your loop only goes to 100, but you pushed 1000 items. Secondly, your use of data->clear() is largely pointless and irrelevant here. Vectors frequently do not shrink themselves no matter what you do. And even if they did, you're deleting the vector anyway.
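For completeness: if you ever do need a vector to hand capacity back, the usual idioms are shrink_to_fit (a non-binding request, C++11) or the swap trick. A minimal sketch:

#include <vector>

int main() {
    std::vector<int> v(1000000);
    v.clear();                   // size becomes 0, but capacity is unchanged
    v.shrink_to_fit();           // C++11: non-binding request to release capacity
    std::vector<int>().swap(v);  // pre-C++11 swap trick: v ends up in a fresh, empty state
    return 0;
}

Neither matters for your code, though, since you delete the whole vector anyway.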
Lastly, don't use new and delete, and use raw pointers sparingly. If you hadn't used them in the first place, you wouldn't have made the mistakes you did. You should've done this:
#include <iostream>
#include <vector>
#include <memory>

int main() {
    using ::std::make_unique;
    using ::std::unique_ptr;
    typedef unique_ptr<int[]> ary_el_t;
    auto data = make_unique<::std::vector<ary_el_t>>();
    for (int i = 0; i < 1000; i++)
    {
        data->push_back(make_unique<int[]>(100000));
    }
    data.reset();
    return 0;
}
And, even if you did, you might still not get the memory back. Allocators will reuse freed space when asked for more space, but they often don't return it to the operating system.
The above code does require C++14 to work, but recent versions of Visual Studio should support that. I tested this with g++ on my Linux box; my allocator does return memory to the OS, so I was able to verify that all the deallocation indeed works.
Just curious as to how I would delete this once it is done being used.
TicTacNode *t[nodenum];
for (int i = 0; i < nodenum; ++i)
{
    t[i] = new TicTacNode();
}
Would any pointers that are assigned values from t need to be deleted as well? For example,
TicTacNode * m = (t[i + 1]);
Like this:
TicTacNode *t[nodenum] = {};
for (int i = 0; i < nodenum; ++i)
{
    t[i] = new TicTacNode();
}
...
for (int i = 0; i < nodenum; ++i)
{
    delete t[i];
}
Though, you really should use smart pointers instead; then you don't need to worry about calling delete manually at all:
#include <memory>

std::unique_ptr<TicTacNode> t[nodenum];
for (int i = 0; i < nodenum; ++i)
{
    t[i].reset(new TicTacNode);
    // or, in C++14 and later:
    // t[i] = std::make_unique<TicTacNode>();
}
Or, you could simply not use dynamic allocation at all:
TicTacNode t[nodenum];
Would any pointers that are assigned values from t need to be deleted as well?
No. However, you have to make sure that you don't use those pointers any more after the memory has been deallocated.
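For example, in a minimal fragment continuing the question's code, m aliases the same object as t[i + 1]:

TicTacNode* m = t[i + 1]; // m and t[i + 1] point to the same object
delete t[i + 1];          // the object is gone; both pointers now dangle
// m->...                 // undefined behavior: the memory has been deallocated
m = nullptr;              // optional: makes the stale alias harmless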
Just curious as to how I would delete this once it is done being used.
As simple as this:
std::unique_ptr<TicTacNode> t[nodenum];
for (int i = 0; i < nodenum; ++i)
{
    t[i] = std::make_unique<TicTacNode>();
}
// as soon as the scope of t ends, all data will be cleaned up properly
or even simpler, since it looks like there is no reason to allocate them dynamically at all:
TicTacNode t[nodenum]; // the default ctor is called for each object, and all are destroyed when t goes out of scope
Actually, you don't have to explicitly allocate and deallocate memory at all. All you need is the right data structure for the job.
In your case, either std::vector or std::list might do the job very well.
Using std::vector, the whole code might be replaced by
auto t = std::vector<TicTacNode>(nodenum);
or, using std::list,
auto t = std::list<TicTacNode>(nodenum);
Benefits:
- Less and clearer code.
- No need for new, since both containers will allocate and initialise nodenum objects.
- No need for delete, since the containers free their memory automatically when they go out of scope.
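Putting that together, a minimal sketch of the std::vector version (the TicTacNode definition below is only a placeholder, since the question doesn't show the real one, and the value of nodenum is arbitrary):

#include <vector>

// Placeholder definition; the real TicTacNode comes from the question's code.
struct TicTacNode {
    int value = 0;
};

int main() {
    const int nodenum = 9; // arbitrary value for illustration

    // Constructs nodenum default-initialized nodes in one contiguous allocation.
    auto t = std::vector<TicTacNode>(nodenum);
    t[0].value = 42;

    // No delete needed: the vector releases its storage when it goes out of scope.
    return 0;
}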
I would like to write a 2D integer array to a binary file in binarySave.cc and then read it in binaryRead.cc. But the execution of binaryRead gives: Segmentation fault (core dumped).
However, when the contents of binarySave.cc and binaryRead.cc are placed into the same file (binarySaveRead.cc), the reading works as expected.
binarySave.cc
#include <iostream>
#include <fstream>

using namespace std;

int main()
{
    int** a = new int*[10];
    for (int i = 0; i < 10; ++i)
    {
        a[i] = new int[2];
    }
    for (int i = 0; i < 10; ++i)
    {
        a[i][0] = 1;
        a[i][1] = 2;
    }
    ofstream out("test.bin", ios::binary);
    if (out.is_open())
    {
        out.write((char*)a, 10*2*sizeof(int));
    }
    out.close();
    for (int i = 0; i < 10; ++i)
    {
        delete [] a[i];
    }
    delete [] a;
    return 0;
}
binaryRead.cc
#include <iostream>
#include <fstream>

using namespace std;

int main()
{
    int** a = new int*[10];
    for (int i = 0; i < 10; ++i)
    {
        a[i] = new int[2];
    }
    ifstream input("test.bin", ios::binary);
    input.read((char*)a, 10*2*sizeof(int));
    input.close();
    for (int i = 0; i < 10; ++i)
    {
        std::cout << a[i][0] << " " << a[i][1] << std::endl; // segfault
    }
    for (int i = 0; i < 10; ++i)
    {
        delete [] a[i];
    }
    delete [] a;
    return 0;
}
Execution gives
> ./binarySave
> ./binaryRead
Segmentation fault (core dumped)
But putting the exact same code into the same file makes it work.
binarySaveRead.cc
#include <iostream>
#include <fstream>

using namespace std;

int main()
{
    int** a1 = new int*[10];
    for (int i = 0; i < 10; ++i)
    {
        a1[i] = new int[2];
    }
    for (int i = 0; i < 10; ++i)
    {
        a1[i][0] = 1;
        a1[i][1] = 2;
    }
    ofstream out("test2.bin", ios::binary);
    if (out.is_open())
    {
        out.write((char*)a1, 10*2*sizeof(int));
    }
    out.close();
    delete [] a1;
    //-------------------
    int** a2 = new int*[10];
    for (int i = 0; i < 10; ++i)
    {
        a2[i] = new int[2];
    }
    ifstream input("test2.bin", ios::binary);
    input.read((char*)a2, 10*2*sizeof(int));
    input.close();
    for (int i = 0; i < 10; ++i)
    {
        std::cout << a2[i][0] << " " << a2[i][1] << std::endl;
    }
    for (int i = 0; i < 10; ++i)
    {
        delete [] a2[i];
    }
    delete [] a2;
    return 0;
}
The output:
> ./binarySaveRead
1 2
1 2
1 2
1 2
1 2
1 2
1 2
1 2
1 2
1 2
What is the problem when the write and read are in two files?
I am on openSuse 42.3 using g++ 4.8.5.
You wrote your platform's internal representation of the data to a file. That data likely included references to the program's memory. When you ran a different program and read in the same data, the references pointed to nowhere special.
Say you and I are in the same room and I ask you what color your car is. You might say, "It's exactly the same as the ceiling". That would be perfectly understood by me since we're in the same room. But I can't just describe the color on the Internet that way. It would make no sense to people outdoors or in other rooms.
To store data to a file, you have to serialize it. That is, convert it to a known format that can be understood by other programs. You didn't do that.
You can't assume that what made sense to the first program, in its environment, will continue to make sense in the second program, in a completely different environment. You have to go to the effort of ensuring it can be understood in any environment.
We call the process of converting information into a form that anyone can understand "serialization". And you should learn how to do this so you can write data to files and sockets and be assured that other programs can make sense of the data.
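A minimal sketch of what serialization could look like here, writing the int values row by row instead of the pointer table (names and sizes taken from the question):

#include <fstream>

int main() {
    int** a = new int*[10];
    for (int i = 0; i < 10; ++i) {
        a[i] = new int[2];
        a[i][0] = 1;
        a[i][1] = 2;
    }

    // Serialize the data, not the pointers: write each row's ints.
    std::ofstream out("test.bin", std::ios::binary);
    for (int i = 0; i < 10; ++i) {
        out.write(reinterpret_cast<char*>(a[i]), 2 * sizeof(int));
    }
    out.close();

    // Reading back mirrors the writes, row by row.
    std::ifstream in("test.bin", std::ios::binary);
    for (int i = 0; i < 10; ++i) {
        in.read(reinterpret_cast<char*>(a[i]), 2 * sizeof(int));
    }

    for (int i = 0; i < 10; ++i) delete[] a[i];
    delete[] a;
    return 0;
}

Even this is only readable by programs that agree on the size and endianness of int; a truly portable format would pin both down.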
Apart from many other defects pointed out by others, please note that:
a2 points to a contiguous block of 10*sizeof(int*) bytes (holding pointers, not your int data), which is not in general equal to 10*2*sizeof(int) bytes.
each element in the block pointed to by a2 points to a separate memory block of size 2*sizeof(int)
Consequently, your read (and write) procedure will cause Undefined Behavior.
Serialization (i.e., how to safely generate serial binary data from internal program representations) is a hard nut to crack. Refer to the answer by @DavidSchwartz for general advice.
Please: If you do not understand how pointers and memory allocations by hand (via new/delete) actually work: Do not do it in C++! Use std::vector and the like. (Although using vectors wouldn't have saved you from the problem you're facing here. Take it as off-topic advice anyway.)
Your program leaks memory and certainly does not do what you think it does.
Your array is not contiguously stored in memory.
Every allocation with new results in a, well, new memory location that is unrelated to any other call to new.
See 1D or 2D array, what's faster? for further explanation.
Note that at the location of a/a1/a2 there are only 10 int* and not your data (1s and 2s).
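For comparison, a minimal sketch of a genuinely contiguous layout, one allocation holding all 10x2 ints and indexed manually, with which a single write of 10*2*sizeof(int) bytes would actually be valid:

#include <fstream>

int main() {
    const int rows = 10, cols = 2;

    // One contiguous block: element (i, j) lives at index i * cols + j.
    int* a = new int[rows * cols];
    for (int i = 0; i < rows; ++i) {
        a[i * cols + 0] = 1;
        a[i * cols + 1] = 2;
    }

    // Now the data really is rows*cols ints in a row, so one write works.
    std::ofstream out("test.bin", std::ios::binary);
    out.write(reinterpret_cast<char*>(a), rows * cols * sizeof(int));
    out.close();

    delete[] a;
    return 0;
}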
I'll try to explain why the combination of both codes works (which is only because you do not properly delete things, actually).
So what you (in your combined write/read code) basically do is:
In your write code:
- You allocate 10x int*.
- You allocate 10 times 2x int.
- Each of your 10x int* points to a single 2x int patch.
- You write those 10 int* to your binary file (where int* likely happens to be twice the size of int, so it works by accident).
- You delete the 10x int*.
- Attention: You do NOT delete the 10 times 2x int - they persist!
In your read code:
- You allocate 10x int* to a2.
- You allocate 10 times 2x int.
- You read the binary int* pointers from the file, saving them in a2 (again, this works by the likely coincidence that int* is twice as big as int).
- Attention: At this point the pointers in a2 have been replaced. They now point to the 10 times 2x int patches from the write part!
- Attention: Your newly allocated 10 times 2x int memory patches from the read part are lost here, as you have overwritten the only pointers to them.
- You access/output the int storage pointed to by these pointers (which is the storage you allocated, filled, and didn't delete in the write part).
- You delete the 10 times 2x int patches you allocated in the write part (as your a2 pointers have been replaced).
- You delete the 10x int* patch.
- Attention: The program terminates and you leak the 10 times 2x int patches from the read part, as you do not delete them, nor do you even have a pointer anymore to know where they are.
I'm rather new to C++. I have been programming in Java for quite some time, so I apologize in advance if I use some Java terminology instead of the proper C++ terminology.
I want to create a hash map (unordered_map) which maps int to pointers to a class. The trouble for me is creating "new" instances of the class at different addresses.
#include <stdio.h>
#include <stdlib.h>
#include <iostream>
#include <tr1/unordered_map>

using namespace std;

class paul {
public:
    paul(int n) { stuff = n; }
    int stuff;
};

int main(void) {
    tr1::unordered_map<int, paul*> glenn;
    for (int i = 0; i < 5; i++) {
        paul victor(i*i);
        glenn[i] = &victor;
    }
    for (int i = 0; i < 5; i++) {
        cout << i*i << "," << (*glenn[i]).stuff << "\n";
    }
    return EXIT_SUCCESS;
}
The code above does not work. It produces the output:
0,16
1,16
4,16
9,16
16,16
This is due to the fact that each new instance of paul gets created at the same address and thus every key in glenn will map to the same instance of paul.
So my question is now this: how can I create several instances of a class at different addresses?
So my question is now this: how can I create several instances of a class at different addresses?
Forget about addresses and pointers, and store objects:
tr1::unordered_map<int, paul> glenn;
for (int i = 0; i < 5; i++) {
    // operator[] would require paul to be default-constructible, so insert instead:
    glenn.insert(std::make_pair(i, paul(i*i)));
}
If you really want to store pointers in your map, allocate them on the heap to extend their lifetime and prefer to use smart pointers such as std::unique_ptr or std::shared_ptr
(these smart pointers require the use of C++11 or higher).
Heap allocation stores each newly created object at a different address, and the smart pointers will clean up the objects when their lifetime is up (a primitive form of garbage collection).
int main(void) {
    tr1::unordered_map<int, std::unique_ptr<paul>> glenn;
    for (int i = 0; i < 5; i++) {
        glenn[i].reset(new paul(i*i)); // prefer std::make_unique if/when it is available
    }
    for (int i = 0; i < 5; i++) {
        cout << i*i << "," << (*glenn[i]).stuff << "\n";
    }
    return EXIT_SUCCESS;
}
I have been working on this program for quite some time. These are just two of the functions, extracted because they are causing a memory leak that I can't seem to debug. Any help would be fantastic!
vector<int**> garbage;
CODE for deleting the used memory
void clearMemory()
{
    for (int i = 0; i < garbage.size(); i++)
    {
        int** dynamicArray = garbage[i];
        for (int j = 0; j < 100; j++)
        {
            delete [] dynamicArray[j];
        }
        delete [] dynamicArray;
    }
    garbage.clear();
}
CODE for declaring dynamic array
void main()
{
    int** dynamicArray1 = 0;
    int** dynamicArray2 = 0;
    dynamicArray1 = new int*[100];
    dynamicArray2 = new int*[100];
    for (int i = 0; i < 100; i++)
    {
        dynamicArray1[i] = new int[100];
        dynamicArray2[i] = new int[100];
    }
    for (int i = 0; i < 100; i++)
    {
        for (int j = 0; j < 100; j++)
        {
            dynamicArray1[i][j] = random();
        }
    }
    //BEGIN MULTIPLICATION WITH SELF AND ASSIGN TO SECOND ARRAY
    dynamicArray2 = multi(dynamicArray1); //matrix multiplication
    //END MULTIPLICATION AND ASSIGNMENT
    garbage.push_back(dynamicArray1);
    garbage.push_back(dynamicArray2);
    clearMemory();
}
I stared at the code for some time and I can't seem to find any leak. It looks to me there's exactly one delete for every new, as it should be.
Nonetheless, I really wanted to say that declaring an std::vector<int**> pretty much defeats the point of using std::vector itself.
In C++, there are very few cases when you HAVE to use pointers, and this is not one of them.
I admit it would be a pain to declare and use an std::vector<std::vector<std::vector<int>>> but that would make sure there are no leaks in your code.
So I'd suggest you rethink your implementations in term of objects that automatically manage memory allocation.
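A minimal sketch of what that could look like, with the sizes from the question and the multiplication step omitted:

#include <vector>

int main() {
    using Matrix = std::vector<std::vector<int>>;

    // Two 100x100 matrices, zero-initialized; no new/delete anywhere.
    Matrix dynamicArray1(100, std::vector<int>(100));
    Matrix dynamicArray2(100, std::vector<int>(100));

    dynamicArray1[0][0] = 42;

    // No clearMemory() needed: both matrices free themselves on scope exit.
    return 0;
}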
Point 1: If you have a memory leak, use valgrind to locate it. Just like blue, I can't seem to find a memory leak in your code, but valgrind will tell you for sure what's up with your memory.
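For example, a typical invocation might look like this (the program name is a placeholder):

> g++ -g main.cc -o main
> valgrind --leak-check=full ./main

The -g flag keeps debug symbols so valgrind can point at source lines.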
Point 2: You are effectively creating a 2x100x100 3D array. C++ is not the right language for this kind of thing. Of course, you could use an std::vector<std::vector<std::vector<int>>> with the obvious drawbacks. Or you can drop back to C:
#include <stdlib.h>

int depth = 2, width = 100, height = 100;

// Allocation:
int (*threeDArray)[height][width] = malloc(depth * sizeof(*threeDArray));

// Use of the last element in the 3D array:
threeDArray[depth-1][height-1][width-1] = 42;

// Deallocation:
free(threeDArray);
Note that this is valid C, but not valid C++: the latter language does not allow runtime sizes in array types, while the former has supported that since C99. In this regard, C is more powerful than C++.
I've got the following code:
jobjectArray objects; // function argument, actually a byte[][]
jbyteArray* arrays = (jbyteArray*)malloc(2 * sizeof(jbyteArray)); // assume 2
for (int i = 0; i < 2; ++i) {
    arrays[i] = (jbyteArray)env->GetObjectArrayElement(objects, i);
}

// Do stuff with arrays.

// Do I have to do this?
// for (int i = 0; i < 2; ++i) {
//     env->DeleteLocalRef(arrays[i]);
// }

free(arrays);
Is this enough to avoid leaking memory / keeping stray references? Or should I also be calling DeleteLocalRef?
edit:
I did find this reference in the help for the IBM SDK for Java, which states that local references are automatically cleaned up when the function returns to Java. But if there is no automatic garbage collection, references might leak.
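For what it's worth, a minimal sketch of the explicit-cleanup variant, which matters when a native function creates many local references before returning to Java (the local reference table has limited capacity); it assumes env and objects as in the question:

// Fragment: assumes 'env' (JNIEnv*) and 'objects' (jobjectArray) from the question.
for (int i = 0; i < 2; ++i) {
    jbyteArray arr = (jbyteArray)env->GetObjectArrayElement(objects, i);
    // ... use arr ...
    env->DeleteLocalRef(arr); // drop the local reference as soon as it is done
}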