Memory leak using a class method - c++

I have a problem with my program: a memory leak. I know this because the memory usage never stops increasing, and then the program crashes.
I have noticed that it happens because of this line (when I comment it out there is no problem):
replica[n].SetTempId(i+1); // MEMORY LEAK ON THIS LINE !!
I have checked that n is never larger than the size of my array (which is a vector, see below).
My class:
class Replica
{
    /* All the functions in this class are in the 'public:' section */
public:
    Replica();
    ~Replica();
    void SetEnergy(double E);
    void SetConfName(string config_name);
    void SetTempId(int id_temp);
    int GetTempId();
    string GetConfName();
    double GetEnergy();
    void SetUp();
    void SetDown();
    void SetZero();
    int GetUpDown();

    /* All the attributes in this class are in the 'private:' section */
private:
    double m_E;           // The energy of the replica
    string m_config_name; // The name of the config file associated
    int m_id_temp;        // The id of the temperature where the spin configuration is.
    int m_updown;
};
The method:
void Replica::SetTempId(int id_temp)
{
    m_id_temp = id_temp;
}
I initialised my objects like this:
vector<Replica> replica(n_temp); // we create a table that will contain information on the replicas.
The constructor:
Replica::Replica() : m_E(0), m_config_name(""), m_id_temp(0), m_updown(0)
{
}
How I initialize my vector:
for(int i=0; i<=n_temp-1; i++) // We fill the object replica that will contain information on a given spin configuration.
{
    string temp_folder = "";
    temp_folder = spin_folder + "/T=" + to_string(Tf[i]) + ".dat";
    replica[i].SetEnergy(Ef[i]+i);       // we save the energy of the config (to avoid having to recalculate it)
    replica[i].SetConfName(temp_folder); // we save the name of the file where the config is saved (to avoid huge variables that would slow down the program)
    replica[i].SetTempId(i);
    replica[i].SetZero();
    if(i==0)
        replica[i].SetDown();
    if(i==(n_temp-1))
        replica[i].SetUp();
}
I am a beginner in C++, so it is probably a basic mistake.
Thank you for your help!
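One quick way to double-check that n really never goes past the end of the vector is to use bounds-checked access. This is only a sketch built on the code above (it needs <stdexcept> for std::out_of_range):
// Sketch: at() checks the index at runtime and throws std::out_of_range
// instead of silently writing outside the vector if n is ever too large.
try {
    replica.at(n).SetTempId(i+1);
} catch (const std::out_of_range&) {
    cerr << "out-of-range index n = " << n << ", size = " << replica.size() << endl;
}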
I have read your answers.
But it is hard to write a minimal example: I tried to remove some code, but as soon as I delete lines the problem disappears.
In fact the problem is very "random". For example, when I delete this line:
replica[n].SetTempId(i+1);
it works, but I can also keep this line and instead delete another line of my code and it works as well (I don't know if you see what I mean).
The bug is very hard to find because of this "randomness"...
I can also say that when it crashes, the program tells me:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
So, could you give me a guess as to what could cause this error (since I can't manage to write a minimal example)?
I don't do any dynamic allocation in my code.
The structure of my code is like this:
while(Nsw_c < Nsw)
{
    cout << "test1";
    // a
    // lot
    // of
    // code
    // with some Nsw_c++
    // and this cout at the end of the loop
    cout << " Nsw_c : " << Nsw_c << endl << " i " << i << " compteur_bug " << compteur_bug;
}
cout << "test2";
It ALWAYS freezes on this cout above, which is at the end of the loop.
I know this because neither test2 nor test1 is displayed when it freezes, and one of them would be the next cout to run.
Nsw, Nsw_c and i are integers that are lower than 100 (they are not too big).
To be more precise, if I replace the cout at the end of the loop with another cout like this:
cout << " test ";
it will also freeze at the same place.
In fact the program always freezes at the end of my while loop (just before evaluating the condition).
But Nsw and Nsw_c are not big at all, so that is why it is strange.
I tried to replace the condition Nsw_c < Nsw with just "1" and it didn't freeze anymore. So it is probably a problem with the condition, but both are just "normal" integers so...
Thanks!
I have launched gdb (I just learnt to use it) and I typed:
catch throw std::bad_alloc
The debugger then prints this (I don't know if it can help):
not stopped at a C++ exception catchpoint
Catchpoint 1 (exception thrown), 0xb7f25290 in __cxa_throw () from /usr/lib/i386-linux-gnu/libstdc++.so.6
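Once that catchpoint triggers, a backtrace shows which line of the program is throwing. This is a generic gdb sketch, not actual output from this program:
(gdb) catch throw
(gdb) run
... program runs until the exception is thrown ...
(gdb) backtrace
backtrace (or bt) then prints the call chain leading to the bad_alloc, which usually points at the container operation that is requesting the huge allocation.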

Related

Apparent method call after destruction in olcSoundMaker.h

I couldn't come up with a better title, so feel free to give suggestions.
I tried to follow OneLoneCoder's tutorial on sound synthesizing, I'm only halfway through the first video and my code already throws an exception.
All I did was download his olcSoundMaker.h from his GitHub and copy the entry point:
#include <iostream>
#include "olcNoiseMaker.h"

double make_noise(double time)
{
    return 0.5 * sin(440.0 * 2 * PI * time);
}

int main()
{
    std::wcout << "Synthesizer, part 1" << std::endl;

    std::vector<std::wstring> devices = olcNoiseMaker<short>::Enumerate();
    for (auto d : devices)
    {
        std::wcout << "Found output device: " << d << std::endl;
    }

    olcNoiseMaker<short> sound(devices[0], 44100, 1, 8, 512);
    sound.SetUserFunction(make_noise);

    while (1) { ; }

    return EXIT_SUCCESS;
}
In the video he runs this just fine; for me, it starts producing a sound, then after 60-80 iterations of the while (1) loop, it stops and raises this:
Unhandled exception thrown: write access violation.
std::_Atomic_address_as<long,std::_Atomic_padded<unsigned int> >(...) was 0xB314F7CC.
(from the <atomic> header file, line 1474.)
By stepping through the code with VS I didn't find out much, except that it happens at different times during every run, which may mean it has something to do with multithreading, but I'm not sure since I'm not very familiar with the topic.
I found this question which is similar, but even though it says [SOLVED] it doesn't show me the answers.
Can anyone help me get rid of that exception?

Severe & Bizarre performance issue

Update
Ok, I removed the 3 couts and replaced them with *buffer = 'a', and there was a big performance difference. Removing that line made the program 2x as fast. If you go on godbolt and compile it using MSVC, that single line of code changes most of the program. (It adds a whole lot more complexity.)
The following might seem extremely weird, but it's true on my computer:
Alright, so I was doing some benchmarking of some code, and I noticed extremely weird performance anomalies that were 100% consistent. I'm running Windows 10 and Visual Studio 2019. Basically, deleting a line of code that is never called completely changes the performance of the program.
Here is exactly what to do:
Create new VS-2019 Console C++ App project
Set the configuration to Release & x64
Paste the code below:
#include <iostream>
#include <chrono>
#include <cstdlib> // for malloc

class Test {
public:
    size_t length;
    size_t doublingVal;
    char* buffer;

    Test() : length(0), doublingVal(2) {
        buffer = static_cast<char*>(malloc(1));
    }
    ~Test() {
        std::cout << "called" << "\n";
        std::cout << "called" << "\n";
        std::cout << "called" << "\n"; // Remove this line and the time decreases DRASTICALLY (ie remove line 14)
    }

    void append() {
        if (doublingVal == length) {
            doublingVal <<= 1;
        }
        *buffer = 'a';
        ++length;
    }
};

int main()
{
    Test test;

    auto start = std::chrono::high_resolution_clock::now();
    for (size_t i = 0; i < static_cast<size_t>(1024) * 1024 * 1024 * 4; ++i) {
        test.append();
    }
    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::high_resolution_clock::now() - start).count() << "\n";
}
Run the program using CTRL+F5, not in debug. Now remember how long it takes to run. (a few seconds)
Then, in the destructor of Test, remove the third line which has the comment.
Run the program again, and you should see that the performance increases drastically. I tested this exact same code in 4 different brand-new projects and on 3 different computers.
The destructor is called at the very end, when the entire program is finished measuring time. The extra cout shouldn't affect anything.
Edit:
You can also see a similar thing happen if you remove the 3 couts and replace them with a single *buffer = 'a'. Then CTRL+F5 once again, record the time, and then remove that line we just added. Then run it again and the time magically decreases by half.
WTF is going on, and how do you solve the weird performance difference?

C++ pointer to function [raspberry pi]

I'm trying to modify some C++ code and run it on a Raspberry Pi. It is an open-source project called freelss (https://github.com/hairu/freelss) and it is laser scanner software. I have changed some parts; for example, I expanded the system with some I2C chips and some buttons. I have changed main.cpp and main.h to scan the buttons. Everything works fine, but now I want a button that starts the scanning process when I push it.
Scanner (*scanner_Max);
std::cout << "before the first function" << std::endl;
scanner_Max->setTask(Scanner::GENERATE_SCAN);
std::cout << "after the first function" << std::endl;
Now, if I push the button, it prints "before the first function", goes into the setTask function and stays there. It never comes back, so I never get the message "after the first function".
void Scanner::setTask(Scanner::Task task)
{
    std::cout << "setTask starts" << std::endl;
    m_task = task;
}
This is the function in scanner.cpp. I always get "setTask starts", but it never comes back to the main program. Can someone please help me with the code?
Greets, Max
In the code you have shown, you have not created an instance of Scanner, just a pointer.
Are you missing a:
scanner_Max = new Scanner ();
Or have you just not shown this?
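If the pointer really is never initialized, calling setTask through it is undefined behaviour, which can easily look like a hang. A minimal sketch of the idea, assuming Scanner is default-constructible (the real freelss constructor may need arguments):
// Sketch only: create an actual Scanner object before calling methods through the pointer.
Scanner scanner_instance;                     // hypothetical: assumes a default constructor exists
Scanner *scanner_Max = &scanner_instance;     // or: scanner_Max = new Scanner();

std::cout << "before the first function" << std::endl;
scanner_Max->setTask(Scanner::GENERATE_SCAN); // now called on a real object
std::cout << "after the first function" << std::endl;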

Can't deque.push_back() 10 million+ deques

I'm a student, and my Operating Systems class project has hit a little snag, which is admittedly a bit superfluous to the assignment specification itself:
While I can push 1 million deques into my deque of deques, I cannot push ~10 million or more.
Now, in the actual program there is a lot going on, and the only existing Stack Overflow question with even the slightest relevance had exactly that: only slight relevance. https://stackoverflow.com/a/11308962/3407808
Since that answer focused on "other functions corrupting the heap", I isolated the code into a new project and ran it separately, and found it failed in exactly the same way.
Here's the code itself, stripped down and renamed for the sake of space.
#include <iostream>
#include <string>
#include <sstream>
#include <deque>
using namespace std;

class cat
{
    cat();
};

bool number_range(int lower, int upper, double value)
{
    while(true)
    {
        if(value >= lower && value <= upper)
        {
            return true;
        }
        else
        {
            cin.clear();
            cerr << "Value not between " << lower << " and " << upper << ".\n";
            return false;
        }
    }
}

double get_double(const char *message, int lower, int upper)
{
    double out;
    string in;
    while(true) {
        cout << message << " ";
        getline(cin, in);
        stringstream ss(in); // convert input to stream for conversion to double
        if(ss >> out && !(ss >> in))
        {
            if (number_range(lower, upper, out))
            {
                return out;
            }
        }
        //(ss >> out) checks for valid conversion to double
        //!(ss >> in) checks for unconverted input and rejects it
        cin.clear();
        cerr << "Value not between " << lower << " and " << upper << ".\n";
    }
}

int main()
{
    int dq_amount = 0;
    deque<deque<cat> > dq_array;
    deque<cat> dq;
    do {
        dq_amount = get_double("INPUT # OF DEQUES: ", 0, 99999999);
        for (int i = 0; i < dq_amount; i++)
        {
            dq_array.push_back(dq);
        }
    } while (!number_range(0, 99999999, dq_amount));
}
In case that's a little obfuscated, the design (just in case it's related to the error) is that my program asks you to input an integer value. It takes your input, verifies that it can be read as an integer, and then further checks that it is within certain numerical bounds. Once it is within bounds, I push deques of myClass into a deque of deques of myClass, a number of times equal to the user's input.
This code has been working for the past few weeks that I've been making this project, but my upper bound had always been 9999, and I decided to standardize it with most of the other inputs in my program, which is an appreciably large 99,999,999. Running this code with 9999 as the user input works fine, even with 99999999 as the upper bound. The issue is a runtime error that happens if the user input is 9999999 or more.
Is there any particular, clear reason why this doesn't work?
Oh, right, the error message itself from Code::Blocks 13.12:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
Process returned 3 (0x3) execution time : 12.559 s
Press any key to continue.
I had screenshots, but I need 10+ reputation in order to put images into my questions.
This looks like address space exhaustion.
If you are compiling for a 32-bit target, you will generally be limited to 2 GiB of user-mode accessible address space per process, or maybe 3 GiB on some platforms. (The remainder is reserved for kernel-mode mappings shared between processes)
If you are running on a 64-bit platform and build a 64-bit binary, you should be able to do substantially more new/alloc() calls, but be advised you may start hitting swap.
Alternatively, you might be hitting a resource quota even if you are building a 64-bit binary. On Linux you can check ulimit -d to see if you have a per-process memory limit.
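To get a feel for the numbers, a rough sketch like this prints how much memory the deque objects alone would take. The exact sizes are implementation-defined; with GCC's libstdc++, even an empty deque typically also allocates an internal buffer of roughly 512 bytes on the heap, which is why ~10 million of them can exhaust a 32-bit address space:
#include <deque>
#include <iostream>

struct cat { }; // stand-in for the real class

int main()
{
    const std::size_t n = 10000000; // ~10 million inner deques
    std::cout << "sizeof(std::deque<cat>) = " << sizeof(std::deque<cat>) << " bytes\n";
    std::cout << "deque objects alone: "
              << (n * sizeof(std::deque<cat>)) / (1024.0 * 1024.0)
              << " MiB, before counting per-deque heap allocations\n";
}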

Crash from a push_back() in a std vector

This program works as expected:
#include <iostream>
#include <string>
#include <vector>
using namespace std;

struct Thumbnail
{
    string tag;
    string fileName;
};

int main()
{
    {
        Thumbnail newThumbnail;
        newThumbnail.tag = "Test_tag";
        newThumbnail.fileName = "Test_filename.jpg";
        std::vector<Thumbnail> thumbnails;
        for(int i = 0; i < 10; ++i) {
            thumbnails.push_back(newThumbnail);
        }
    }
    return 0;
}
If I copy and paste the same block of code into another project (still single-threaded), inside any function, I get this exception from the line commented // <-- crash at the 2nd loop:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
If I clear the vector before any push_back, everything is all right (but of course this is not the desired behaviour); it is as if the vector could not store more than one such object.
This is the function where the code is crashing:
int ImageThumbnails::Load(const std::string &_path)
{
    QDir thumbDir(_path.c_str());
    if(!thumbDir.exists())
        return errMissingThumbPath;

    // Set a filter
    thumbDir.setFilter(QDir::Files);
    thumbDir.setNameFilters(QStringList() << "*.jpg" << "*.jpeg" << "*.png");
    thumbDir.setSorting(QDir::Name);

    // Delete previous thumbnails
    thumbnails.clear();

    Thumbnail newThumbnail;

    ///+TEST+++
    {
        Thumbnail newThumbnail;
        newThumbnail.tag = "Test_tag";
        newThumbnail.fileName = "Test_filename.jpg";
        std::vector<Thumbnail> thumbnails;
        for(int i = 0; i < 10; ++i)
        {
            TRACE << i << ": " << sizeof(newThumbnail) << " / " << newThumbnail.tag.size() << " / " << newThumbnail.fileName.size() << std::endl;
            //thumbnails.clear(); // Ok with this decommented
            thumbnails.push_back(newThumbnail); // <-- crash at the 2nd loop
        }
        exit(0);
    }
    ///+TEST+END+++
    ...
This is the output:
> TRACE: ImageThumbnails.cpp:134:Load
0: 8 / 8 / 17
> TRACE: ImageThumbnails.cpp:134:Load
1: 8 / 8 / 17
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Why do I get this different behaviour for the same piece of code in two different projects?
Platform: Windows 7, MinGW 4.4, GCC
To add to this (because it is the first Google result): in my scenario I got bad_alloc even when I still had a few GB of RAM available.
If your application needs more than 2 GB of memory, you have to enable the /LARGEADDRESSAWARE option in the linker settings.
If you need more than 4 GB, you have to set your build target to x64 (in the project settings and the build configuration).
Due to how the automatic resizing of vectors works, you will hit these thresholds at roughly 1 GB / 2 GB of vector size.
If the exact same code crashes in another application, there is the possibility that the program is out of memory (std::bad_alloc exceptions can be caused by this). Check how much memory your other application is using.
Another thing: use the reserve() method when using std::vector and you know ahead of time how many elements are going to be pushed into the vector. It looks like you are pushing the exact same element 10 times. Why not use the resize() overload that takes a default object parameter?
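As a small sketch of those two suggestions, reusing Thumbnail and newThumbnail from the question:
// reserve(): allocate room for 10 elements once, then push_back as before.
std::vector<Thumbnail> thumbnails;
thumbnails.reserve(10);
for (int i = 0; i < 10; ++i)
    thumbnails.push_back(newThumbnail);

// resize() with a fill value: create 10 copies of the same element in one call.
std::vector<Thumbnail> moreThumbnails;
moreThumbnails.resize(10, newThumbnail);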