I've got the following code:
jobjectArray objects; //function argument, actually a byte[][]
jbyteArray* arrays = (jbyteArray*)malloc(sizeof(jbyteArray) * 2); // Assume 2
for(int i = 0; i < 2; ++i) {
arrays[i] = (jbyteArray)env->GetObjectArrayElement(objects, i);
}
// Do stuff with arrays.
// Do I have to do this?
// for(int i = 0; i < 2; ++i) {
// env->DeleteLocalRef(arrays[i]);
// }
free(arrays);
Is this enough to avoid leaking memory / keeping stray references? Or should I also be calling DeleteLocalRef?
edit:
I did find this reference in the help for the IBM SDK for Java, which states that local references are automatically cleaned up when the function returns to Java. But if there is no such automatic cleanup, the references might leak.
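For completeness, here is what the explicit-release variant from the commented-out code would look like in full (a sketch only, assuming the same fixed count of 2 and that env and objects are the JNIEnv* and jobjectArray arguments of the native function):
// Sketch: same code as above, but releasing each local reference explicitly.
jbyteArray* arrays = (jbyteArray*)malloc(sizeof(jbyteArray) * 2); // assume 2 elements
for (int i = 0; i < 2; ++i) {
    arrays[i] = (jbyteArray)env->GetObjectArrayElement(objects, i);
}
// ... do stuff with arrays ...
for (int i = 0; i < 2; ++i) {
    env->DeleteLocalRef(arrays[i]); // drop each local reference when done with it
}
free(arrays); // frees only the native array of handles, not the Java objects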
Just curious as to how I would delete this once it is done being used.
TicTacNode *t[nodenum];
for (int i = 0; i < nodenum; ++i)
{
t[i] = new TicTacNode();
}
Would any pointers that are assigned values from t need to be deleted as well? For example,
TicTacNode * m = (t[i + 1]);
Like this:
TicTacNode *t[nodenum] = {};
for (int i = 0; i < nodenum; ++i)
{
t[i] = new TicTacNode();
}
...
for (int i = 0; i < nodenum; ++i)
{
delete t[i];
}
Though you really should use smart pointers instead; then you don't need to worry about calling delete manually at all:
#include <memory>
std::unique_ptr<TicTacNode> t[nodenum];
for (int i = 0; i < nodenum; ++i)
{
t[i].reset(new TicTacNode);
// or, in C++14 and later:
// t[i] = std::make_unique<TicTacNode>();
}
Or, you could simply not use dynamic allocation at all:
TicTacNode t[nodenum];
Would any pointers that are assigned with values in t need to be deleted as well?
No. However, you have to make sure that you don't use those pointers any more after the memory has been deallocated.
Just curious as to how I would delete this once it is done being used.
As simple as this:
std::unique_ptr<TicTacNode> t[nodenum];
for (int i = 0; i < nodenum; ++i)
{
t[i] = std::make_unique<TicTacNode>();
}
// as soon as scope for t ends all data will be cleaned properly
or, even simpler, since it looks like there is no reason to allocate them dynamically:
TicTacNode t[nodenum]; // the default ctor is called for each element, and all are destroyed when t is destroyed
Actually, you don't have to explicitly allocate and deallocate memory at all. All you need is the right data structure for the job.
In your case either std::vector or std::list would do the job very well.
Using std::vector the whole code might be replaced by
auto t = std::vector<TicTacNode>(nodenum);
or using std::list
auto t = std::list<TicTacNode>(nodenum);
Benefits:
Less and clearer code.
No need for new, since both containers allocate and initialise nodenum objects.
No need for delete, since the containers free their memory automatically when they go out of scope.
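To illustrate, a minimal self-contained sketch of the std::vector approach; TicTacNode is replaced by a stand-in here, since its definition isn't shown in the question:
#include <vector>
#include <cstddef>

struct TicTacNode { /* stand-in for the real class */ };

void example(std::size_t nodenum) {
    std::vector<TicTacNode> t(nodenum); // nodenum default-constructed nodes
    // use t[i] like any other object ...
}   // every node is destroyed automatically when t goes out of scope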
I have a problem with C++ pointers.
Here is my code:
#include <iostream>
#include <vector>
int main() {
std::vector<int*> * data = new std::vector<int*>;
for (int i = 0; i < 1000; i++) {
data->push_back(new int[100000]);
}
for (int i = 0; i < 100; i++) {
delete data->at(i);
}
data->clear();
delete data;
data = nullptr;
return 0;
}
After
std::vector<int*> * data = new std::vector<int*>;
for (int i = 0; i < 1000; i++) {
data->push_back(new int[100000]);
}
It takes 384 MB (I checked in Task Manager)
But after
for (int i = 0; i < 100; i++) {
delete data->at(i);
}
It still takes 346 MB
After
delete data;
data = nullptr;
It doesn't change anything.
My question is: what can I do to completely delete the pointers and free the memory?
First, you probably aren't actually deleting everything. Your loop only goes to 100, but you pushed 1000 items. Secondly, your use of data->clear() is largely pointless and irrelevant. Vectors frequently never shrink themselves no matter what you do, and even if they did, you're deleting the vector right afterwards anyway.
Lastly, don't use new and delete, and use raw pointers sparingly. If you hadn't used them in the first place, you wouldn't have made the mistakes you did. You should've done this:
#include <iostream>
#include <vector>
#include <memory>
int main() {
using ::std::make_unique;
using ::std::unique_ptr;
typedef unique_ptr<int []> ary_el_t;
auto data = make_unique<::std::vector<ary_el_t>>();
for (int i = 0; i < 1000; i++)
{
data->push_back(make_unique<int[]>(100000));
}
data.reset();
return 0;
}
And, even if you did, you might still not get the memory back. Allocators will reuse freed space when asked for more space, but they often don't return it to the operating system.
The above code does require C++14 to work. But recent versions of Visual Studio should support that. I tested this with g++ on my Linux box, and my allocator does return memory to the OS, so I was able to verify that indeed, all the de-allocation works.
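As a further illustration (my own sketch, not part of the tested code above), the same layout can also be expressed with nested std::vector, in which case no manual release step is needed at all; whether the operating system sees the memory back immediately still depends on the allocator:
#include <vector>

int main() {
    // 1000 blocks of 100000 ints, all owned by the outer vector
    std::vector<std::vector<int>> data(1000, std::vector<int>(100000));
    // ... use data[i][j] ...
    return 0; // every block is freed here automatically
}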
I have an array of pointers:
Hotel *hotels[size];
for (int i = 0; i < size; ++i)
hotels[i] = new Hotel();
And I want to insert an object into this array after some object whose name I know:
cin >> tmp_name;
for (int i = 0; i < size; i++) {
if (hotels[i]->get_name() == tmp_name) {
hotels[size] = new Hotel();
size += 1;
Hotel *tmp_hotel;
tmp_hotel = hotels[i+1];
hotels[i+1]->fillHotel();
for (i = i + 2; i < size; i++) {
hotels[i] = tmp_hotel;
tmp_hotel = hotels[i+1];
}
break;
}
}
What am I doing wrong?
UPD:
My solution:
cin >> tmp_name;
for (int i = 0, j = 0; i < size; i++, j++) {
new_hotels[j] = hotels[i];
if (hotels[i]->get_name() == tmp_name) {
new_hotels[j+1]->fillHotel();
++j;
system("clear");
}
}
hotels[size] = new Hotel();
++size;
for (int i = 0; i < size; i++) {
hotels[i] = new_hotels[i];
}
I can see several errors in your code.
For example:
Hotel *hotels[size];
size must be a constant expression, and something tells me that is not the case here. VLAs (variable-length arrays) are not part of the C++ standard; in short, you cannot allocate a runtime-sized array on the stack this way. The proper initialization would be:
Hotel** hotels = new Hotel*[size];
The line in the loop:
hotels[size] = new Hotel();
you're actually accessing your array out of bounds: index size refers to memory that is not part of your array, and this produces undefined behaviour.
Another strange line is the following:
size += 1;
Apart from confirming that size is not actually a constant, you cannot increase the size of your array simply by changing that variable. You're only changing the variable size; the memory allocated for your array stays the same.
How to resolve this?
To increase (or change) the size of an array, the solution is almost always to create a new, larger array and copy the old one into it. In your case that is fairly cheap, because you only have to copy pointers, not entire objects.
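A sketch of that grow-and-copy step, under the assumption that hotels was allocated as Hotel** with new Hotel*[size] as shown above:
// Sketch only: grow the array by one slot and copy the old pointers over.
Hotel** bigger = new Hotel*[size + 1];
for (int i = 0; i < size; ++i)
    bigger[i] = hotels[i];      // copy the pointers, not the Hotel objects
bigger[size] = new Hotel();     // the new element goes into the extra slot
delete[] hotels;                // release only the old array of pointers
hotels = bigger;
++size;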
There are lots of questions on Stack Overflow covering this topic, for example here.
Despite that, I strongly suggest you use the most practical alternative, that is, real C++ facilities.
The most suitable class is std::vector, which is the C++ way to handle a dynamic array.
Finally, you should also consider the std::unique_ptr<T> class to handle dynamic memory and pointers.
The final solution would be:
std::vector<std::unique_ptr<Hotel>> hotels;
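To show the direction this points in, here is a rough sketch of inserting after a hotel with a given name; the Hotel members get_name() and fillHotel() are assumed from the question, and the small stand-in struct is only there to make the snippet compile on its own:
#include <algorithm>
#include <memory>
#include <string>
#include <vector>

struct Hotel {                        // minimal stand-in for the real class
    std::string name;
    std::string get_name() const { return name; }
    void fillHotel() { /* fill the fields, e.g. from std::cin, in the real code */ }
};

void insertAfter(std::vector<std::unique_ptr<Hotel>>& hotels,
                 const std::string& tmp_name) {
    auto it = std::find_if(hotels.begin(), hotels.end(),
                           [&](const std::unique_ptr<Hotel>& h) {
                               return h->get_name() == tmp_name;
                           });
    if (it != hotels.end()) {
        auto fresh = std::make_unique<Hotel>();
        fresh->fillHotel();
        hotels.insert(it + 1, std::move(fresh)); // the vector does the shifting
    }
}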
I get an error "munmap_chunk(): invalid pointer" and I don't know why. The problem appears when I use the MultipliedByMatrix method: it can't properly delete the matrix that was created in that method.
#include "matrix.h"
Matrix::Matrix(int matr_size) {
size = matr_size;
Matr = new int *[size];
for(int i = 0; i < size; i++)
Matr[i] = new int[size];
for(int i = 0 ; i < size; i++)
for(int j = 0; j < size; j++)
Matr[i][j] = rand() % 100;
std::cout << "New matrix is created" << std::endl;
}
Matrix::~Matrix() {
for(int i = 0; i < size; i++)
delete[] Matr[i];
delete[] Matr;
Matr = NULL;
std::cout << "Matrix is deleted" << std::endl;
}
Matrix Matrix::MultipliedByMatrix(Matrix OtherMatr) {
Matrix new_matr = Matrix(this->GetSize());
int new_value;
for(int i = 0 ; i < size; i++)
for(int j = 0; j < size; j++) {
new_value = 0;
new_value += Matr[j][i] * OtherMatr.GetValue(i, j);
new_matr.SetValue(i, j, new_value);
}
return new_matr;
}
int Matrix::GetSize() {
return size;
}
int Matrix::GetValue(int i, int j) {
return Matr[i][j];
}
void Matrix::SetValue(int i, int j, int value) {
Matr[i][j] = value;
}
Did you write the Matrix class yourself? If so, I bet the problem is that you didn't write a copy or move constructor, so the compiler generated one for you. The generated one copies the values of size and Matr, but it won't create copies of the pointed-to arrays. When you write:
return new_matr;
this creates a new matrix (using the copy constructor, which just copies the pointer), and then calls the destructor of new_matr (which deletes the memory that is pointed to). The calling function is then dealing with junk memory, and when it eventually tries to delete the result, all hell will break loose.
You will also need to write a move assignment operator.
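For illustration, a sketch of what deep-copy semantics for this Matrix might look like, keeping the raw int** storage and the size/Matr members from the question (copy-and-swap is used for the assignment operator; <utility> is needed for std::swap):
// Sketch only: deep copy constructor for the raw-pointer Matrix.
Matrix::Matrix(const Matrix& other) : size(other.size) {
    Matr = new int*[size];
    for (int i = 0; i < size; ++i) {
        Matr[i] = new int[size];
        for (int j = 0; j < size; ++j)
            Matr[i][j] = other.Matr[i][j];
    }
}

// Sketch only: copy assignment via copy-and-swap, which stays exception safe.
Matrix& Matrix::operator=(const Matrix& other) {
    if (this != &other) {
        Matrix tmp(other);
        std::swap(size, tmp.size);
        std::swap(Matr, tmp.Matr);   // tmp's destructor frees the old storage
    }
    return *this;
}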
Alternatively make Matr a std::vector<int> (of length 'size' squared), and write:
int Matrix::GetValue(int i, int j) {
return Matr[i*size+j];
}
(and similarly for other functions). std::vector has a proper copy and move constructor, and proper assignment behaviour - so it will all just work. (It will also be a lot faster - you save a whole pointer indirection.)
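As a stand-alone illustration (not a drop-in replacement for the header in the question), the flattened, row-major layout could look like this:
#include <vector>

class Matrix {
    int size;
    std::vector<int> Matr;                    // size*size elements, row-major
public:
    explicit Matrix(int matr_size)
        : size(matr_size), Matr(matr_size * matr_size) {}
    int GetSize() const { return size; }
    int GetValue(int i, int j) const { return Matr[i * size + j]; }
    void SetValue(int i, int j, int value) { Matr[i * size + j] = value; }
    // no destructor, copy/move constructors or assignment operators needed:
    // std::vector provides correct ones automatically
};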
This is not an analytical answer to the question but a piece of advice with respect to solving (or better circumventing) the problem.
Avoid memory handling on your own if you can. (And it is very likely that you actually can avoid it.)
You can read my answer to the question "1D or 2D array, what's faster?" to get a lengthy explanation of why it is probably undesirable to use the kind of memory layout you're using.
Furthermore, you'll find a (yet untested) example of how to implement a simple matrix container on top of std::vector.
You can look at the scheme and try to implement your own if you want. The design has several advantages compared to your implementation:
It is templated and thus usable with int as well as many other types.
Conformance to the standard container concept is achieved easily.
No destructor / copy constructor / move constructor or assignment operators required: std::vector is handling the resources and does the "dirty work" for you.
If you still want to use your raw-pointer approach (for academic purposes or something):
Read What is meant by Resource Acquisition is Initialization (RAII)? and try to understand the answers.
Read What is The Rule of Three? properly and make sure you have implemented (preferably following the RAII concept) these functions:
copy constructor,
destructor,
assignment operator and if desired
move constructor and
move assignment operator.
Still read my answer to the "1D or 2D array, what's faster?" question to see how you would want to organize your allocations in order to be exception safe in case of std::bad_alloc.
Example: your constructor, done 'a little better':
Matrix::Matrix(std::size_t const matr_size) // you have a size here, no sign required
{
size = matr_size; // remember the dimension; the destructor relies on it
Matr = new int*[matr_size];
std::size_t allocs(0U);
try
{ // try block doing further allocations
for (std::size_t i = 0; i < matr_size; ++i)
{
Matr[i] = new int[matr_size]; // allocate
++allocs; // advance counter if no exception occured
for(std::size_t j = 0; j < matr_size; j++)
{
Matr[i][j] = rand() % 100;
}
}
}
catch (std::bad_alloc & be)
{ // if an exception occurs we need to free out memory
for (size_t i = 0; i < allocs; ++i) delete[] Matr[i]; // free all alloced rows
delete[] Matr; // free Matr
throw; // rethrow bad_alloc
}
}
I have been working on this program for quite some time. These are just two functions extracted from it that are causing a memory leak I can't seem to debug. Any help would be fantastic!
vector<int**> garbage;
CODE for deleting the used memory
void clearMemory()
{
for(int i = 0; i < garbage.size(); i++)
{
int ** dynamicArray = garbage[i];
for( int j = 0 ; j < 100 ; j++ )
{
delete [] dynamicArray[j];
}
delete [] dynamicArray;
}
garbage.clear();
}
CODE for declaring dynamic array
void main()
{
int ** dynamicArray1 = 0;
int ** dynamicArray2 = 0;
dynamicArray1 = new int *[100] ;
dynamicArray2 = new int *[100] ;
for( int i = 0 ; i < 100 ; i++ )
{
dynamicArray1[i] = new int[100];
dynamicArray2[i] = new int[100];
}
for( int i = 0; i < 100; i++)
{
for(int j = 0; j < 100; j++)
{
dynamicArray1[i][j] = random();
}
}
//BEGIN MULTIPLICATION WITH SELF AND ASSIGN TO SECOND ARRAY
dynamicArray2 = multi(dynamicArray1); //matrix multiplication
//END MULTIPLICATION AND ASSIGNMENT
garbage.push_back(dynamicArray1);
garbage.push_back(dynamicArray2);
clearMemory();
}
I stared at the code for some time and I can't seem to find any leak. It looks to me like there's exactly one delete for every new, as it should be.
Nonetheless, I really wanted to say that declaring a std::vector<int**> pretty much defeats the point of using std::vector in the first place.
In C++, there are very few cases when you HAVE to use pointers, and this is not one of them.
I admit it would be a pain to declare and use a std::vector<std::vector<std::vector<int>>>, but that would make sure there are no leaks in your code.
So I'd suggest you rethink your implementation in terms of objects that automatically manage memory allocation.
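For instance, a rough sketch of the same setup with containers doing the memory management (this assumes multi() would be adapted to take and return the nested-vector type, which is not shown here):
#include <cstdlib>
#include <vector>

using Grid = std::vector<std::vector<int>>;

int main() {
    Grid a(100, std::vector<int>(100));   // 100x100, zero-initialized
    for (int i = 0; i < 100; ++i)
        for (int j = 0; j < 100; ++j)
            a[i][j] = std::rand();
    Grid b = a;       // copies cleanly; no garbage list, no clearMemory()
    // b = multi(a);  // assumes multi() is rewritten to take/return Grid
    return 0;         // all memory is released automatically
}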
Point 1: If you have a memory leak, use valgrind to locate it. Just like blue, I can't seem to find a memory leak in your code, but valgrind will tell you for sure what's up with your memory.
Point 2: You are effectively creating a 2x100x100 3D array. C++ is not the right language for this kind of thing. Of course, you could use an std::vector<std::vector<std::vector<int>>> with the obvious drawbacks. Or you can drop back to C:
int depth = 2, width = 100, height = 100;
//Allocation:
int (*threeDArray)[height][width] = malloc(depth*sizeof(*threeDArray));
//Use of the last element in the 3D array:
threeDArray[depth-1][height-1][width-1] = 42;
//Deallocation:
free(threeDArray);
Note that this is valid C, but not valid C++: the latter language does not allow runtime sizes for array types, while the former has supported them since C99. In this regard, C is more powerful than C++.
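If you'd rather stay in C++, one alternative (my own sketch, not part of the answer above) is a single flat std::vector with the 3D index arithmetic done by hand:
#include <vector>

int main() {
    int depth = 2, width = 100, height = 100;
    std::vector<int> threeD(depth * height * width);   // one contiguous block
    // element [d][h][w] lives at ((d * height) + h) * width + w
    threeD[((depth - 1) * height + (height - 1)) * width + (width - 1)] = 42;
    return 0;   // storage is released automatically
}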