I have a simple C++ program using the multiple-precision library MPFR, written to try to understand a memory problem in a bigger program:
#include <iostream>
#include <mpfr.h>
using namespace std;

int main() {
    int prec=65536, size=1, newsize=1;
    mpfr_t **mf;
    while(true) {
        size=newsize;
        mf=new mpfr_t*[size];
        for(int i=0;i<size;i++) {
            mf[i]=new mpfr_t[size];
            for(int j=0;j<size;j++) mpfr_init2(mf[i][j], prec);
        }
        cout << "Size of array: ";
        cin >> newsize;
        for(int i=0;i<size;i++) {
            for(int j=0;j<size;j++) mpfr_clear(mf[i][j]);
            delete [] mf[i];
        }
        delete [] mf;
    }
}
The point here is to declare arrays of different sizes and monitor the memory usage with Task Manager (I'm using Windows). This works fine for sizes up to about 200, but if I declare something larger, the memory doesn't seem to be freed when I decrease the size again.
Here's an example run:
I start the program and choose size 50. Then I change sizes between 50, 100, 150 and 200 and see the memory usage go up and down as expected. I then choose size 250 and the memory usage goes up as expected but when I go back to 200 it doesn't decrease but increases to something like the sum of the memory values needed for size 200 and 250 respectively. A similar behaviour is seen with bigger sizes.
Any idea what's going on?
Process Explorer will give you a more realistic view of your process's memory usage (Virtual Size) than Task Manager will. A memory leak is when a program doesn't free memory it should, and if that happens repeatedly its memory usage will never stop increasing.
Windows won't necessarily return your program's freed memory to the system right away, so Task Manager etc. won't tell you the whole truth.
To detect memory leaks in Visual Studio you can enable the _CRTDBG_MAP_ALLOC macro, as described on this MSDN page.
Also, this question talks a bit about making it work with the C++ new keyword.
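For reference, here is a minimal sketch of the CRT leak-check setup that MSDN page describes (the leaked allocation is just a placeholder, not the asker's code):

// Define _CRTDBG_MAP_ALLOC before the CRT headers so that leak reports
// include file/line information for malloc-style allocations.
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

int main() {
    int* leaked = new int[100];   // deliberately never deleted
    (void)leaked;

    // Dump any allocations still live at this point (debug builds only).
    _CrtDumpMemoryLeaks();
    return 0;
}

This only reports in debug builds, and as the linked question explains, getting file/line information for allocations made with new takes a bit of extra macro work.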
In C++ one can declare an array as
typename array[size];
or
typename *array = new typename[size];
where array has length 'size' and its elements are indexed from '0' to 'size - 1'.
My question is: am I allowed to access elements at index >= size?
So I wrote this little code to check it
#include <iostream>
using namespace std;
int main()
{
    //int *c;          //for dynamic allocation
    int n;             //length of the array c
    cin>>n;            //getting the length
    //c = new int[n];  //for dynamic allocation
    int c[n];          //for static allocation
    for(int i=0; i<n; i++)     //getting the elements
        cin>>c[i];
    for(int i=0; i<n+10; i++)  //showing the elements; I have added 10 to the size
        cout<<c[i]<<" ";       //to access memory I haven't allocated
    return 0;
}
And the result is like this
2
1 2
1 2 2686612 1970422009 7081064 4199040 2686592 0 1 1970387429 1971087432 2686700
Shouldn't the program have crashed instead of printing garbage values? And both allocation methods give the same result. This makes for bugs that are hard to detect. Is it related to the environment or the compiler I am using, or something else?
I was using the Code::Blocks IDE with the TDM-GCC 4.8.1 compiler on Windows 8.1.
Thanks in advance.
This is called "undefined behavior" in the C++ standard.
Undefined behavior can mean any one of the following:
The program crashes
The program continues to run, but produces meaningless, garbage results
The program continues to run, and automatically copies the entire contents of your hard drive, and posts it on Facebook
The program continues to run, and automatically subscribes you to Publishers Clearinghouse Sweepstakes
The program continues to run, but your computer catches fire and explodes
The program continues to run, and makes your computer self-aware, which automatically links and networks with other self-aware networks, forming Skynet, and destroying the human race
Conclusion: do not run and access elements past the end of your arrays.
C++ compilers don't enforce bounds checking because the standard doesn't require them to.
When you access an element of an array, no bounds check is done. c[i] simply gets translated to *(c + i), i.e. a read at byte offset i * sizeof(int) from the start of c, and that's it. If that area of memory holds nothing meaningful you'll get garbage, but you could just as well be reading other, possibly useful, information; it all depends on what happens to be there.
Note that depending on the OS and the C++ runtime you're running, you can get different results; on a Linux box, for instance, you may well get a segmentation fault and the program will crash.
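For contrast, here is a small sketch (not part of the question) of how to get a run-time bounds check in C++: std::vector::at() throws std::out_of_range instead of silently reading past the end.

#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    int n;
    std::cin >> n;

    std::vector<int> c(n);                     // allocated on the free store
    for (int i = 0; i < n; i++) std::cin >> c[i];

    try {
        for (int i = 0; i < n + 10; i++)
            std::cout << c.at(i) << " ";       // at() checks the index
    } catch (const std::out_of_range& e) {
        std::cout << "\nout-of-range access caught: " << e.what() << "\n";
    }
    return 0;
}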
So I wrote this program in C++ to solve COJ (Caribbean Online Judge) problem 1456. http://coj.uci.cu/24h/problem.xhtml?abb=1456. It works just fine with the sample input and with some other files I wrote to test it, but I kept getting 'Wrong Answer' as a verdict, so I decided to try a larger input file and got Segmentation Fault: 11. The file was 1000001 numbers long, not counting the first integer, which is the number of inputs to be tested. I know that error is caused by something related to memory, but I am really lacking more information. Hope anyone can help; it is driving me nuts. I program mainly in Java, so I really have no idea how to solve this. :(
#include <stdio.h>
int main(){
    long singleton;
    long N;
    scanf("%ld",&N);
    long arr [N];
    bool sing [N];
    for(int i = 0; i<N; i++){
        scanf("%ld",&arr[i]);
    }
    for(int j = 0; j<N; j++){
        if(sing[j]==false){
            for(int i = j+1; i<N; i++){
                if(arr[j]==arr[i]){
                    sing[j]=true;
                    sing[i]=true;
                    break;
                }
            }
        }
        if(sing[j]==false){
            singleton = arr[j];
            break;
        }
    }
    printf("%ld\n", singleton);
}
If you are writing in C, you should change the first few lines like this:
#include <stdio.h>
#include <stdlib.h>
int main(void){
    long singleton;
    long N;
    printf("enter the number of values:\n");
    scanf("%ld",&N);
    long *arr;
    arr = malloc(N * sizeof *arr);
    if(arr == NULL) {
        // malloc failed: handle error gracefully
        // and exit
    }
This will at least allocate the right amount of memory for your array.
Update: note that you can access these elements with the usual
arr[ii] = 0;
Just as if you had declared the array as
long arr[N];
(which doesn't work for you).
To make it proper C++, you have to convince the standard committee to add Variable length arrays to the language.
To make it valid C, you have to include <stdbool.h>.
Probably your VLA nukes your stack, consuming a whopping 4*1000001 bytes (the bool array adds another quarter on top of that). Unless you use the proper compiler options to enlarge the stack, that is probably too much.
Anyway, you should use dynamic memory for that.
Also, reading sing without ever initialising it is undefined behaviour, so don't do that either.
BTW: the easiest C answer to your programming challenge is: read the numbers into an array (allocated with malloc), sort them (qsort works), and output the first non-duplicate.
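A rough sketch of that idea, written in C++ for consistency with the rest of this page (std::vector and std::sort standing in for malloc and qsort); the input format is assumed from the question:

#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    long N;
    if (std::scanf("%ld", &N) != 1 || N <= 0) return 1;

    std::vector<long> arr(static_cast<std::size_t>(N));   // heap, not the stack
    for (long &x : arr) std::scanf("%ld", &x);

    std::sort(arr.begin(), arr.end());                    // duplicates become adjacent

    // Walk runs of equal values; a run of length 1 is the singleton.
    for (std::size_t i = 0; i < arr.size(); ) {
        std::size_t j = i;
        while (j < arr.size() && arr[j] == arr[i]) ++j;
        if (j - i == 1) {
            std::printf("%ld\n", arr[i]);
            return 0;
        }
        i = j;
    }
    return 0;
}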
When you write long arr[N]; there is no way that your program can gracefully handle the situation where there is not enough memory to store this array. At best, you might get a segfault.
However, with long *arr = malloc( N * sizeof *arr );, if there is not enough memory then you will find arr == NULL, and then your program can take some other action instead, for example exiting gracefully, or trying again with a smaller number.
Another difference between these two versions is where the memory is allocated from.
In C (and in C++) there are two memory pools where variables can be allocated: automatic memory, and the free store. In programming jargon these are sometimes called "the stack" and "the heap" respectively. long arr[N] uses the automatic area, and malloc uses the free store.
Your compiler and operating system combination decides how much memory is available to your program in each pool. Typically, the free store has access to a "large" amount of memory, the maximum that a process can have on your operating system. The automatic storage area, however, may be limited in size, and it has the additional drawback that when allocation fails there is no graceful recovery: your process gets killed or goes haywire.
Some systems use one large area and have the automatic area grow from the bottom and free-store allocations grow from the top until they meet. On those systems you probably wouldn't run out of memory for your long arr[N], although the same drawback remains: there is no way to handle the case where memory does run out.
So you should prefer using the free store for anything that might be "large".
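Since most of this page is C++, here is a minimal sketch (mine, not from the answer) of requesting a "large" block from the free store in a way that can fail gracefully, using new (std::nothrow); a std::vector with a try/catch around std::bad_alloc would work just as well.

#include <cstdio>
#include <new>

int main() {
    long N = 1000001;

    // Nothrow new returns nullptr on failure instead of throwing,
    // mirroring the malloc/NULL check shown above.
    long *arr = new (std::nothrow) long[N];
    if (arr == nullptr) {
        std::fprintf(stderr, "not enough memory for %ld longs\n", N);
        return 1;
    }

    // ... use arr[0] .. arr[N-1] ...

    delete[] arr;
    return 0;
}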
I need help understanding problems with my memory allocation and deallocation on Windows. I'm using VS11 compiler (VS2012 IDE) with latest update at the moment (Update 3 RC).
Problem is: I'm dynamically allocating some memory for a 2-dimensional array and immediately deallocating it. Before allocation my process memory usage is 0,3 MB; on allocation it is 259,6 MB (expected, since 32768 arrays of 64-bit ints (8 bytes) are allocated); while filling the array it reaches 4106,8 MB; but after deallocation it does not drop back to the expected 0,3 MB and instead stays stuck at 12,7 MB. Since I'm deallocating all the heap memory I've taken, I expected memory usage to be back at 0,3 MB.
This is the code in C++ I'm using:
#include <cstdio>   // std::getchar
#include <iostream>
#define SIZE 32768

int main( int argc, char* argv[] ) {
    std::getchar();

    int ** p_p_dynamic2d = new int*[SIZE];
    for(int i=0; i<SIZE; i++){
        p_p_dynamic2d[i] = new int[SIZE];
    }
    std::getchar();

    for(int i=0; i<SIZE; i++){
        for(int j=0; j<SIZE; j++){
            p_p_dynamic2d[i][j] = j+i;
        }
    }
    std::getchar();

    for(int i=0; i<SIZE; i++) {
        delete [] p_p_dynamic2d[i];
    }
    delete [] p_p_dynamic2d;

    std::getchar();
    return 0;
}
I'm sure this is a duplicate, but I'll answer it anyway:
If you are looking at the Task Manager size, it will give you the size of the process. If there is no memory "pressure" (your system has plenty of memory available and no process is being starved), it makes no sense to shrink a process's virtual memory usage. It's not unusual for a process to grow and shrink in a cyclical pattern: it allocates while processing data, releases the data used in one processing cycle, then allocates memory for the next cycle and frees it again. If the OS were to reclaim those pages only to have to hand them back to your process again, that would be a waste of processing power. Assigning and unassigning pages to a particular process isn't entirely trivial, especially since pages have to be "cleaned" (filled with zero or some other constant) so that the new owner can't go fishing for old data, such as a password left behind in memory.
Even if the pages remain in the ownership of this process but are not being used, the actual RAM can be used by another process. So it's not a big deal if the pages haven't been released for some time.
Further, in debug mode, the C++ runtime fills all memory that goes through delete with a "this memory has been deleted" pattern, to help identify use-after-free bugs. So if your application is running in debug mode, don't expect freed memory to ever be released back to the OS. It will get reused, though, so if you run your code three times over, it won't grow to three times the size.
I've been having trouble with a memory leak in a large-scale project I've been working on, but the project has no leaks according to the VS2010 memory checker (and I've checked everything extensively).
I decided to write a simple test program to see if the leak would occur on a smaller scale.
#include <crtdbg.h>
#include <iostream>
#include <string>
#include <tchar.h>
#include <vector>

struct TestStruct
{
    std::string x[100];
};

class TestClass
{
public:
    std::vector<TestStruct*> testA;
    //TestStruct** testA;
    TestStruct xxx[100];

    TestClass()
    {
        testA.resize(100, NULL);
        //testA = new TestStruct*[100];
        for(unsigned int a = 0; a < 100; ++a)
        {
            testA[a] = new TestStruct;
        }
    }

    ~TestClass()
    {
        for(unsigned int a = 0; a < 100; ++a)
        {
            delete testA[a];
        }
        //delete [] testA;
        testA.clear();
    }
};

int _tmain(int argc, _TCHAR* argv[])
{
    _CrtSetDbgFlag ( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
    char inp;
    std::cin >> inp;
    {
        TestClass ttt[2];
        TestClass* bbbb = new TestClass[2];
        std::cin >> inp;
        delete [] bbbb;
    }
    std::cin >> inp;
    std::cin >> inp;
    return 0;
}
Using this code, the program starts at about 1 MB of memory, goes up to more than 8 MB, then at the end drops down to 1.5 MB. Where does the additional 0.5 MB go? I am having a similar problem with a particle system, but on the scale of hundreds of megabytes.
I cannot for the life of me figure out what is wrong.
As an aside, using the raw array (which I commented out) greatly reduces the leftover memory, but does not eliminate it completely. I would expect the memory usage at the last cin to be the same as at the first.
I am using the task manager to monitor memory usage.
Thanks.
"I cannot for the life of me figure out what is wrong."
Probably nothing.
"[Program] still uses more memory at program end after destroying all objects."
You should not really care about memory usage at program end. Any modern operating system takes care of "freeing" all memory associated with a process when the process ends. (Technically speaking, the address space of the process is simply released.)
Freeing memory at program end can actually slow down the termination of your program, since it unnecessarily touches memory pages that may even be sitting in swap space.
That additional 0.5 MB probably remains with your allocator (malloc/free, new/delete, std::allocator). These allocators usually work by requesting memory from the operating system when necessary and giving it back when convenient. Fragmentation is one reason the allocator may have to hold on to more memory than is strictly required at a given moment. It is also usually faster to keep some memory in reserve, since requesting memory from the operating system is slow.
"I am using the task manager to monitor memory usage."
Measuring memory usage is in fact more sophisticated than observing a single number, and it requires good understanding of virtual memory and the memory management between a process and the operating system. Unfortunately I cannot recommend any good tools for Windows.
Overall, I think there is no issue with your simple program.
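To put some numbers behind the "more than a single number" point, here is a small sketch (mine, not from the answer) that queries a few of the per-process counters Windows keeps, via GetProcessMemoryInfo; link against psapi.lib:

#include <windows.h>
#include <psapi.h>
#include <cstdio>

// Print a few of the per-process memory counters. WorkingSetSize is roughly
// what Task Manager shows by default; PagefileUsage reflects committed
// (private) memory, which is closer to what new/delete actually affect.
static void report(const char* label) {
    PROCESS_MEMORY_COUNTERS pmc = {};
    pmc.cb = sizeof(pmc);
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
        std::printf("%-10s working set: %zu KB, commit: %zu KB, peak working set: %zu KB\n",
                    label,
                    pmc.WorkingSetSize / 1024,
                    pmc.PagefileUsage / 1024,
                    pmc.PeakWorkingSetSize / 1024);
    }
}

int main() {
    report("start");
    int** p = new int*[1000];
    for (int i = 0; i < 1000; ++i) p[i] = new int[1000]();   // touch roughly 4 MB
    report("allocated");
    for (int i = 0; i < 1000; ++i) delete[] p[i];
    delete[] p;
    report("freed");
    return 0;
}

Watching how the "freed" line differs from the "start" line illustrates the allocator caching described above far better than a single Task Manager column can.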