`pointer being freed was not allocated` when using stxxl queue - c++

My code seems to work (I haven't tried it with large datasets because of the error below).
Code:
#include <iostream>
#include <queue>
#include <stxxl/queue>
int main()
{
    //std::queue<int> q; // this works
    stxxl::queue<int> q; // this does not
    for (int i = 0; i < 100; i++) {
        q.push(i);
    }
    std::cout << "done copying" << std::endl;
    while (!q.empty()) {
        std::cout << q.front() << std::endl;
        q.pop();
    }
    std::cout << "done popping" << std::endl;
    return 0;
}
My .stxxl config file is simply: disk=./testfile,0,syscall
But my error is:
stackexchangeexample(3884) malloc: *** error for object 0x101c04000: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
The program has unexpectedly finished.
I'm not sure how to troubleshoot this. Do I need to free memory in this case? I'm still learning C++, so sorry if this is really basic (it only happens when I use the stxxl queue).

I've never used stxxl before, but since it's a template you can take a look at the code here: http://algo2.iti.kit.edu/stxxl/trunk/queue_8h_source.html. And since you're a newbie, I'll explain a few things. This queue maintains a queue of pointers internally. Line 00054 shows typedef ValTp value_type, so your int becomes a value_type. Lines 00072 and 00073 show that the front and back elements are of that value_type; do you see how they will be maintained as pointers? Finally, if you look at any constructor, the pool_type* pool defined on line 00069 is "new'd" up (which is the basis of your elements) and the init function is always called. Within init, pool->steal() is called; click on it if you want to learn more.
In short, you need to be pushing new'd up integers onto your queue. Bad interface, not your fault.


Accessing pointers after deletion [duplicate]

This question already has answers here:
What happens to the pointer itself after delete? [duplicate]
(3 answers)
Closed 3 years ago.
I have a code snippet like the one below. I created some objects of my Something class with dynamic memory allocation and then deleted them.
The code prints wrong data, which I expect, but why doesn't ->show() crash?
In what case, and how, would ->show() cause a crash?
Is it possible for the memory locations of i, ii, iii to be overwritten with some other object?
I am trying to understand why, after delete frees up the memory location to be reused by something else, it still has the information that ->show() prints!
#include <iostream>
#include <vector>
class Something
{
public:
    Something(int i) : i(i)
    {
        std::cout << "+" << i << std::endl;
    }
    ~Something()
    {
        std::cout << "~" << i << std::endl;
    }
    void show()
    {
        std::cout << i << std::endl;
    }
private:
    int i;
};

int main()
{
    std::vector<Something *> somethings;

    Something *i = new Something(1);
    Something *ii = new Something(2);
    Something *iii = new Something(3);

    somethings.push_back(i);
    somethings.push_back(ii);
    somethings.push_back(iii);

    delete i;
    delete ii;
    delete iii;

    std::vector<Something *>::iterator n;
    for (n = somethings.begin(); n != somethings.end(); ++n)
    {
        (*n)->show(); // In what case would this line crash?
    }
    return 0;
}
The code prints wrong data, which I expect, but why doesn't ->show() crash?
Why do you simultaneously expect the data to be wrong, but also that it would crash?
The behaviour of indirecting through an invalid pointer is undefined. It is not reasonable to expect the data to be correct, nor to expect it to be wrong, nor to expect the program to crash, nor to expect that it won't crash.
In what case, and how, would ->show() cause a crash?
There is no situation where the C++ language specifies the program to crash. Crashing is a detail of the particular implementation of C++.
For example, a Linux system will typically force the process to crash due to "segmentation fault" if you attempt to write into a memory area that is marked read-only, or attempt to access an unmapped area of memory.
There is no direct way in standard C++ to create memory mappings: The language implementation takes care of mapping the memory for objects that you create.
Here is an example of a program that demonstrably crashes on a particular system:
int main() {
    int* i = nullptr;
    *i = 42;
}
But C++ does not guarantee that it crashes.
Is it possible to overwrite the same memory location of i, ii, iii with some other object?
The behaviour is undefined. Anything is possible as far as the language is concerned.
Remember, a pointer stores a memory address. On a call to delete, the dynamic memory is deallocated, but the pointer itself still stores the old address. If we nulled the pointer after deleting, a later dereference would be much more likely to crash (and easier to detect), though even that is not guaranteed by the language.
See this question: What happens to the pointer itself after delete?

how to handle double free crash in c++

Deleting the same pointer twice can have harmful effects, such as crashing the program, and programmers should avoid it since it is not allowed.
But if somebody does do this, how do we handle it?
Since delete in C++ is a noexcept operator, it will not throw any exceptions, and its return type is void. So how do we catch this kind of error?
Below is the code snippet
#include <iostream>
#include <stdexcept>
#include <string>

using std::cout;

class myException : public std::runtime_error
{
public:
    myException(std::string const& msg) :
        std::runtime_error(msg)
    {
        cout << "inside class \n";
    }
};

int main()
{
    int* set = new int[100];
    cout << "memory allocated \n";
    //use set[]
    delete[] set;
    cout << "After delete first \n";
    try {
        delete[] set; // double delete: undefined behaviour, not an exception
        throw myException("Error while deleting data \n");
    }
    catch (std::exception& e)
    {
        cout << "exception \n";
    }
    catch (...)
    {
        cout << "generic catch \n";
    }
    cout << "After delete second \n";
}
In this case I tried to catch the exception, but with no success.
Please provide your input on how to take care of this type of scenario.
Thanks in advance!
Given that the behaviour on a subsequent delete[] is undefined, there's nothing you can do, aside from writing
set = nullptr;
immediately after the first delete[]. This exploits the fact that a deletion of a nullptr is a no-op.
But really, that just encourages programmers to be sloppy.
Segmentation fault or bad memory access or bus errors cannot be caught by exception. Programmers need to manage their own memory correctly as you do not have garbage collection in C/C++.
But you are using C++, no ? Why not make use of RAII ?
Here is what you should strive to do:
Memory ownership - Explicitly via making use of std::unique_ptr or std::shared_ptr and family.
No explicit raw calls to new or delete. Make use of make_unique or make_shared or allocate_shared.
Make use of containers like std::vector or std::array instead of creating dynamic arrays or allocating arrays on the stack, respectively.
Run your code via valgrind (Memcheck) to make sure there are no memory related issues in your code.
If you are using shared_ptr, you can use a weak_ptr to observe the underlying object without incrementing the reference count. If the object has already been destroyed, weak_ptr::lock() returns an empty shared_ptr, and constructing a shared_ptr directly from the expired weak_ptr throws std::bad_weak_ptr. That is the only scenario I know of where an exception is thrown for you to catch when accessing a deleted object.
Code should undergo multiple levels of testing, maybe with different sets of tools, before being committed.
There is a very important concept in c++ called RAII (Resource Acquisition Is Initialisation).
This concept encapsulates the idea that no object may exist unless it is fully serviceable and internally consistent, and that deleting the object will release any resources it was holding.
For this reason, when allocating memory we use smart pointers:
#include <memory>
#include <iostream>
#include <algorithm>
#include <iterator>
int main()
{
    using namespace std;

    // allocate an array into a smart pointer
    auto set = std::make_unique<int[]>(100);
    cout << "memory allocated \n";

    //use set[]
    for (int i = 0; i < 100; ++i) {
        set[i] = i * 2;
    }
    std::copy(&set[0], &set[100], std::ostream_iterator<int>(cout, ", "));
    cout << std::endl;

    // delete the set
    set.reset();
    cout << "After delete first \n";

    // delete the set again
    set.reset();
    cout << "After delete second \n";

    // set also deleted here through RAII
}
I'm adding another answer here because previous answers focus very strongly on manually managing that memory, while the correct answer is to avoid having to deal with that in the first place.
#include <iostream>
#include <vector>

int main() {
    std::vector<int> set(100);
    std::cout << "memory allocated\n";
    //use set
}
This is it. This is enough. This gives you 100 integers to use as you like. They will be freed automatically when control flow leaves the function, whether through an exception, or a return, or by falling off the end of the function. There is no double delete; there isn't even a single delete, which is as it should be.
Also, I'm horrified to see suggestions in other answers for using signals to hide the effects of what is a broken program. If someone is enough of a beginner to not understand this rather basic stuff, PLEASE don't send them down that path.

Threadsafe circular buffer for storing pointers

I am currently working on a threadsafe circular buffer for storing pointers. As basis I used the code from this thread: Thread safe implementation of circular buffer. My code is shown below.
Because I want to store pointers in the circular buffer, I need to make sure that allocated memory is deleted, in case the buffer is full and the first element is overwritten, as mentioned in the boost documentation:
When a circular_buffer becomes full, further insertion will overwrite the stored pointers - resulting in a memory leak.
So I tried to delete the first element in the add method in case the buffer is full and the type T of the template is actually a pointer. This leads to error C2541, because I try to delete an object which is not seen as a pointer.
Is my basic approach correct? How can I solve the above issue?
#pragma once

#include <boost/thread/condition.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/thread.hpp>
#include <boost/circular_buffer.hpp>
#include <type_traits>
#include <thread>   // std::this_thread
#include <chrono>
#include <iostream>
#include "Settings.hpp"

template <typename T>
class thread_safe_circular_buffer : private boost::noncopyable
{
public:
    typedef boost::mutex::scoped_lock lock;

    thread_safe_circular_buffer(bool *stop) : stop(stop) {}
    thread_safe_circular_buffer(int n, bool *stop) : stop(stop) { cb.set_capacity(n); }

    void add(T imdata) {
        monitor.lock();
        std::cout << "Buffer - Add Enter, Size: " << cb.size() << "\n";
        if (cb.full())
        {
            std::cout << "Buffer - Add Full.\n";
            T temp = cb.front();
            if (std::is_pointer<T>::value)
                delete[] temp; // C2541 here: T is not always a pointer type
        }
        std::cout << "Buffer - Push.\n";
        cb.push_back(imdata);
        monitor.unlock();
        std::cout << "Buffer - Add Exit.\n";
    }

    T get() {
        std::cout << "Buffer - Get Enter, Size: " << cb.size() << "\n";
        monitor.lock();
        while (cb.empty())
        {
            std::cout << "Buffer - Get Empty, Size: " << cb.size() << "\n";
            monitor.unlock();
            std::this_thread::sleep_for(std::chrono::milliseconds(1000));
            if (*stop)
                return NULL;
            monitor.lock();
        }
        T imdata = cb.front();
        // Remove first element of buffer
        std::cout << "Buffer - Pop.\n";
        cb.pop_front();
        monitor.unlock();
        std::cout << "Buffer - Get Exit.\n";
        return imdata;
    }

    void clear() {
        lock lk(monitor);
        cb.clear();
    }

    int size() {
        lock lk(monitor);
        return cb.size();
    }

    void set_capacity(int capacity) {
        lock lk(monitor);
        cb.set_capacity(capacity);
    }

    bool *stop;

private:
    boost::condition buffer_not_empty;
    boost::mutex monitor;
    boost::circular_buffer<T> cb;
};
The error tells you the problem: you can't delete things that aren't pointers. When T isn't a pointer type, delete[] temp; doesn't make sense. It's also a bad idea if your buffer is storing things that weren't allocated with new[], or when your circular buffer doesn't conceptually 'own' the pointers.
I think you maybe misunderstand the whole problem. The warning from the boost documentation only applies to situations where you can't afford to "lose" any of the data stored in the buffer. One example where this is a problem — the one they highlight specifically — is if you were storing raw pointers in the buffer that are your only references to some dynamically allocated memory.
There are, I think, only three reasonable designs:
Don't use a circular buffer when you can't afford to lose data. This can mean modifying your data so you can afford to lose it (e.g. circular_buffer<unique_ptr<T[]>> for storing dynamically allocated arrays), and consequently making your class not worry about what to do with lost data.
Make your class take a 'deleter'; i.e. a function object that specifies what to do with a data element that is about to be overwritten. (and you probably want to have a default parameter of "do nothing")
Change the functionality of the buffer to do something other than overwriting when full (e.g. block)
Do as boost does: let the user of your code handle this. If objects are stored, their destructors handle any memory management; if raw pointers are stored, you have no way of knowing what they actually point to: arrays, objects, memory that needs to be freed, memory that is managed elsewhere, or non-dynamic memory.

c++ segfault on one platform (MacOSX) but not another (linux)

I'm getting a segfault on MacOSX ("Segmentation fault: 11"; in gdb, "Program received signal SIGSEGV, Segmentation fault") in the destructor, where a container is looped over with an iterator and memory is deleted. I've tried clang++, g++ (both part of LLVM), and homebrew g++. The segfault appears when the iterator is incremented for the first time, with the gdb message (having compiled with clang++):
"0x000000010001196d in std::__1::__tree_node_base<void*>* std::__1::__tree_next<std::__1::__tree_node_base<void*>*>(std::__1::__tree_node_base<void*>*) ()"
When starting the program in gdb I also get warnings saying "warning: Could not open OSO archive file".
On a cluster linux node, with gcc 4.8.1, I don't get a segfault. Any ideas what might be wrong and how I can avoid the segfault on my mac (preferably with clang)? I really don't know much about compilers and such.
EDIT:
I think I found the problem, however I'd like to understand still why this works on one platform but not another. Here's a minimal example:
The Word class:
#ifndef WORD_H
#define WORD_H

#include <string>
#include <map>

class Word {
public:
    /*** Constructor ***/
    Word(std::string w) : m_word(w) {
        // Add word to index map, if it's not already in there
        std::map<std::string, Word*>::iterator it = index.find(w);
        if (it == index.end()) {
            index[w] = this;
        }
    }
    ~Word() { index.erase(m_word); } // Remove from index

    static void DeleteAll() { // Clear index, delete all allocated memory
        for (std::map<std::string, Word*>::const_iterator it = index.begin();
             it != index.end();
             ++it)
        { delete it->second; }
    }

private:
    std::string m_word;
    static std::map<std::string, Word*> index; // Index holding all words initialized
    // (needs one definition in a .cpp file: std::map<std::string, Word*> Word::index;)
};
#endif
WordHandler class:
#ifndef _WORDHANDLER_H_
#define _WORDHANDLER_H_

#include <string>
#include "Word.h"

class WordHandler {
public: // members must be public for main() to use them
    WordHandler() {}
    ~WordHandler() { Word::DeleteAll(); } // clear memory

    void NewWord(const std::string& word) { // no WordHandler:: qualifier inside the class
        Word* w = new Word(word);
        (void)w; // owned by Word::index, freed by DeleteAll()
    }
};
#endif
Main program:
#include <iostream>
#include "WordHandler.h"

int main() {
    std::cout << "Welcome to the WordHandler. " << std::endl;
    WordHandler wh;
    wh.NewWord("hallon");
    wh.NewWord("karl");
    std::cout << "About to exit WordHandler after having added two new words " << std::endl;
    return 0;
}
So the segfault occurs upon exit of the program, when the destructor ~WordHandler is called. The reason, I found, is the Word destructor: the Word object erases itself from the map, which breaks the DeleteAll() function because the map is altered while it is being iterated over (some sort of iterator invalidation, I suppose). The segfault disappears either by removing DeleteAll() completely, or by removing the Word destructor.
So I'm still wondering why the segfault doesn't appear on linux with g++ from gcc 4.8.1. (Also, I guess off topic, I'm wondering about the programming itself: what would be the proper way to treat index erasing/memory deletion in this code?)
EDIT 2:
I don't think this is a duplicate of Vector.erase(Iterator) causes bad memory access, because my original question had to do with why I get a segfault on one platform and not another. It's possible that the other question explains the segfault per se (not sure how get around this problem... perhaps removing the Word destructor and calling erase from DeleteAll() rather than "delete"? But that destructor makes sense to me though...), but if it's truly a bug in the code why isn't picked up by gcc g++?
This is a problem:
~Word() { index.erase(m_word); } // Remove from index

static void DeleteAll() { // Clear index, delete all allocated memory
    for (std::map<std::string, Word*>::const_iterator it = index.begin();
         it != index.end();
         ++it)
    { delete it->second; }
}
delete it->second invokes ~Word which erases from the map that you are iterating over. This invalidates your active iterator, leading to undefined behaviour. Because it is UB, the fact that it works on one platform but not another is basically just luck (or lack thereof).
To fix this, you can either make a copy of index and iterate over that, consider a different design that doesn't mutate the index as you delete it, or use the fact that erase returns the next valid iterator to make the loop safe (which means hoisting the erase into DeleteAll).

C++ basics, vectors, destructors

I'm a little confused about the best practice for this. Say I have a class that, for example, allocates some memory. I want it to destroy itself like an automatic variable, but also to put it in a vector, for some reason unknown.
#include <iostream>
#include <vector>
class Test {
public:
    Test();
    Test(int a);
    virtual ~Test();

    int counter;
    Test* otherTest;
};

volatile int count = 0;

Test::Test(int a) {
    count++;
    counter = count;
    std::cout << counter << "Got constructed!\n";
    otherTest = new Test();
    otherTest->counter = 999;
}

Test::Test() {
    count++;
    counter = count;
    std::cout << counter << "Alloced got constructed!\n";
    otherTest = NULL;
}

Test::~Test() {
    if (otherTest != 0) {
        std::cout << otherTest->counter << " 1Got destructed" << counter << "\n";
        otherTest->counter = 888;
        std::cout << otherTest->counter << " 2Got destructed" << counter << "\n";
    }
}

int vectorTest() {
    Test a(5);
    std::vector<Test> vecTest;
    vecTest.push_back(a);
    return 1;
}

int main() {
    std::cout << "HELLO WORLD\n";
    vectorTest();
    std::cout << "Prog finished\n";
}
In this case my destructor gets called twice, both times from counter 1; by the second call the alloc'd object has already been set to 888 (or, in a real case, freed, leading to bad access to a deleted object). What's the correct way to put a local variable into a vector, or is this some kind of design that would never happen sensibly? The following behaves differently and the destructor is called just once (which makes sense given it's an alloc).
int vectorTest() {
    //Test a(5);
    std::vector<Test> vecTest;
    vecTest.push_back(*(new Test(5)));
    return 1;
}
How can I make the local variable behave the same, leading to just one call to the destructor? Would a local simply never be put in a vector? But aren't vectors preferred over arrays? What if there are a load of local objects I want to initialize separately, place into the vector, and pass to another function without using heap memory? I think I'm missing something crucial here. Is this a case for some kind of smart pointer that transfers ownership?
A vector maintains its own storage and copies values into it. Since you did not implement a copy constructor, the default one is used, which just copies the value of the pointer. This pointer is thus deleted twice: once by the local variable's destructor and once by the vector's. Don't forget the rule of three. You either need to implement the copy constructor and copy-assignment operator, or just use a class that already does this, such as shared_ptr.
Note that this line causes a memory leak, since the object you allocated with new is never deleted:
vecTest.push_back(*(new Test(5)));
In addition to what Dark Falcon wrote: to avoid reallocating when inserting into a vector, you typically implement a swap function to swap a local element with a default-constructed one in the vector. The swap would just exchange ownership of the pointer and all will be well. The new C++0x also has move semantics via rvalue references to help with this problem.
More than likely, you'd be better off having your vector hold pointers to Test objects instead of Test objects themselves. This is especially true for objects (like this test object) that allocate memory on the heap. If you end up using any algorithm (e.g. std::sort) on the vector, the algorithm will be constantly allocating and deallocating memory (which will slow it down substantially).