I was told that doing all the pointer cleanup in the catch block is bad OO programming. In my code, cleanup occurs in the catch block. How does that violate the rules of OO design?
Here is some sample code:
#include <iostream>
#include <string>
using namespace std;

class Error
{
    friend int main();
public:
    Error(int* p, string m) : arr(p), msg(m) { }
private:
    int* arr;
    string msg;
};

void initialize();

int main()
{
    try {
        initialize();
    }
    catch (Error& err) {
        cout << endl << "Error! " << err.msg << endl << endl;
        delete [] err.arr;
    }
    return 0;
}

void initialize()
{
    int size;
    cout << "Enter the number of elements: ";
    cin >> size;
    int* myArray = new int[size];
    cout << "Enter the elements: " << endl;
    for (int i = 0; i < size; ++i)
        cin >> myArray[i];
    if (!cin.good())
        throw Error(myArray, "bad input!");
    cout << endl << "You entered:" << endl;
    for (int i = 0; i < size; ++i)
        cout << myArray[i] << " ";
    cout << endl;
    delete [] myArray;
}
To deal with resources, C focused on managing the execution paths: the programmer had to make sure that, for every possible path, the resources were freed.
So it was normal to end up with code like this.
Notice that most of the code is there to handle errors:
HRESULT CreateNotifyIcon(NotifyIcon** ppResult)
{
    NotifyIcon* icon = 0;
    Icon* inner = 0;
    HRESULT hr = S_OK;

    if ( SUCCEEDED(hr) ) {
        icon = new (nothrow) NotifyIcon();
        if ( !icon ) hr = E_OUTOFMEMORY;
    }
    if ( SUCCEEDED(hr) )
        hr = icon->set_text("Blah blah blah");
    if ( SUCCEEDED(hr) ) {
        inner = new (nothrow) Icon(...);
        if ( !inner )
            hr = E_OUTOFMEMORY;
        else {
            Info info;
            hr = GetInfo( &info );
            if ( SUCCEEDED(hr) )
                hr = icon->set_icon(inner, info);
            if ( SUCCEEDED(hr) )
                inner = NULL;
        }
    }
    if ( SUCCEEDED(hr) )
        hr = icon->set_visible(true);
    if ( SUCCEEDED(hr) ) {
        *ppResult = icon;
        icon = NULL;
    } else {
        *ppResult = NULL;
    }

cleanup:
    if ( inner ) delete inner;
    if ( icon ) delete icon;
    return hr;
}
In C++, this is not the right way, because you have exceptions. For instance:
String EvaluateSalaryAndReturnName( Employee e )
{
    if( e.Title() == "CEO" || e.Salary() > 100000 )
    {
        cout << e.First() << " " << e.Last()
             << " is overpaid" << endl;
    }
    return e.First() + " " + e.Last();
}
There are 23 different execution paths in that snippet of code.
So C++ chose to focus on the resources instead. Each function should handle a limited number of resources. Roughly speaking, you put a watchdog on each resource to make sure it is properly released/freed. This watchdog is RAII. Indeed, whatever the execution path may be, you are 100% sure the destructors of all objects allocated on the stack will be called. That way, by putting your resources into RAII objects (STL containers, std::unique_ptr, ...), you can deal with exceptions without any risk of leaking resources.
Look at the difference:
BAD WAY
#include <exception>
#include <iostream>

void function(int n){
    int* p = 0;
    int* c = 0;
    try{
        p = new int[n];
        c = new int[n*2];
    }
    catch(std::exception const& e){
        delete[] c;
        delete[] p;
        throw;
    }
    delete[] c;
    delete[] p;
}

int main(){
    try{
        function(1000);
    } catch (std::exception const& e){
        std::cerr << e.what() << std::endl;
    }
}
GOOD WAY
#include <iostream>
#include <memory>

void function(int n){
    std::unique_ptr<int[]> p(new int[n]); // or std::vector
    std::unique_ptr<int[]> c(new int[n*2]);
}

int main(){
    try{
        function(1000);
    } catch (std::exception const& e){
        std::cerr << e.what() << std::endl;
    }
}
The problem is that you have to delete the array on every possible path out of the function. That may be easy if there are only one or two paths, but it gets confusing with more. Even in your code, one delete is already outside the function, which makes it hard to find.
Use smart pointers to address this issue. They deallocate their contents when they go out of scope, so you don't have to worry about destroying the array yourself. As soon as the function is done, the array is destroyed.
Here is some documentation for smart pointers:
unique_ptr
shared_ptr
The C++ Standard n3337 § 15.2/3 says:
The process of calling destructors for automatic objects constructed on the path from a try block to a throw-expression is called “stack unwinding”. (...)
The problem with your code is that the pointer to the array is allocated inside the try block, so it is no longer in scope when control reaches the catch block; you cannot refer to it there.
To correct this you should declare the pointer before the try block (and initialize it, so the delete is safe even if the allocation never happened):
int* myArray = nullptr;
try{
    function(1000); // allocates an array and points myArray at it
} catch (std::exception const& e){
    delete [] myArray; // OK, the myArray pointer is valid here
}
This will delete the objects and return the memory to the system. Such an approach is taken, for example, in std::uninitialized_fill. But there is a better option: to free yourself from the burden of deallocating memory, you can use a smart pointer or a handle to an array (a class that wraps the resource). Representing each resource as a class is the foundation of the approach called RAII.
try{
MyArray myArray(1000); // allocate an array in constructor
} catch (std::exception const& e){
// destructor for myArray has deleted ints & returned memory already
}
I am using VS2017 and do not understand why I am getting compiler warning C6001, "Using uninitialized memory 'values'", on the line if (values != NULL) in the catch block.
#include <windows.h>

typedef enum
{
    VALUE_STATE_NOT_AVAILABLE = 1,
    VALUE_STATE_ERROR = 2,
    VALUE_STATE_VALID = 3,
    VALUE_STATE_UNKNOWN = 4
} XyzValueState_t;

class XyzValue
{
private:
    XyzValueState_t _valueState;
protected:
    XyzValue( XyzValueState_t valueState ) {
        _valueState = valueState;
    }
};

typedef XyzValue* xyzValuePtr_t;

int main(){
    bool flag = true;
    xyzValuePtr_t* values = NULL;
    unsigned int _argument = 2;
    if (flag == true) {
        values = new xyzValuePtr_t[2]{ NULL, NULL };
        try {
            values[0] = new XyzUnsignedInteger(_argument); // XyzUnsignedInteger: a derived class, defined elsewhere
            values[1] = new XyzUnsignedInteger(_argument);
            xyz(values); // passing the values to a third-party function which returns void
        }
        catch(...) {
            if (values != NULL) {
                for (int k = 0; k < 1; k++) {
                    delete values[k];
                    values[k] = NULL;
                }
                delete [] values;
                values = NULL;
            }
        }
    }
}
Thank you in advance for your help and guidance
Not quite sure why your compiler thinks this might be uninitialized.
But in C++, I'd argue that the way you're building your array with new is unnecessarily complicated and error prone. This looks like someone from 1993 tried to write C++11: you have initializer lists, but you don't use RAII!
So do the C++ thing and use C++'s deterministic object lifetime to manage memory. For an array of objects, this is elegantly handled by std::vector:
#include <vector>

class XyzValue;

int main(){
    bool flag = true;
    unsigned int _argument = 2;
    if (flag == true) {
        std::vector<XyzValue> values(2); // default initialization for two XyzValues
        try {
            xyz(values.data()); // if you need the raw contiguous memory. **You probably don't.**
        }
        catch(...) {
            // All the manual deletion is unnecessary now: at the end of the scope
            // everything is destroyed automatically, so this catch clause is empty.
        }
    }
}
See how this is much shorter, more readable, and has the same functionality, but none of the need to manually delete anything? That's why we write C++ instead of C.
I created a queue data structure using a struct and a dynamically allocated array, and I don't understand the right way to free or delete it without any memory leaks.
I have tried the following:
delete[] q->data;
delete[] &(q->data);
delete &(q->data);
#include "queue.h"

void initQueue(queue* q, unsigned int size)
{
    q->maxSize = size;
    q->size = 0;
    q->data = new unsigned int[size];
    q->front = 0;
    q->rear = 0;
}

void enqueue(queue* q, unsigned int newValue)
{
    if (q->size != q->maxSize)
    {
        q->data[q->rear] = newValue;
        q->size++;
        q->rear++;
    }
    else
    {
        std::cout << "Queue is full! you can clean it and initialize a new one" << std::endl;
    }
}

int dequeue(queue* q)
{
    if (q->size == 0)
    {
        std::cout << "Queue is empty!" << std::endl;
        return EMPTY;
    }
    else
    {
        q->front++;
        q->size--;
        return q->data[q->front];
    }
}

void cleanQueue(queue* q)
{
    // the delete function
}
The technical right answer here is delete[] q->data (the buffer was allocated with new[], so it must be released with delete[]), as others have suggested. But...
right way to free or delete it without any memory leaks
The right way in C++, unless you're doing something exotic with allocation, is not to do your own memory management. Writing a class that allocates in the constructor and deletes in the destructor, as Chris suggested, is a great way to learn about RAII and how it saves you from the mental burden of manually writing delete everywhere.
But the right right way, if someone was paying me? I'd skip all that and use a vector.
#include <vector>

class MyQueue {
public:
    MyQueue(unsigned int size) : data(size) { }
    void enqueue(unsigned int value) { /* whatever... */ }
    int dequeue() { /* whatever... */ }
private:
    std::vector<unsigned int> data;
};
When this class goes out of scope or gets deleted, the vector will automatically be cleaned up. You don't even need to free or delete anything.
I found this interesting exercise in Bruce Eckel's Thinking in C++, 2nd ed., Vol.1 in Chapter 13:
/*13. Modify NoMemory.cpp so that it contains an array of int
and so that it actually allocates memory instead of
throwing bad_alloc. In main( ), set up a while loop like
the one in NewHandler.cpp to run out of memory and
see what happens if your operator new does not test to
see if the memory is successfully allocated. Then add the
check to your operator new and throw bad_alloc*/
#include <iostream>
#include <cstdlib>
#include <new> // bad_alloc definition
using namespace std;

int count = 0;

class NoMemory {
    int array[100000];
public:
    void* operator new(size_t sz) throw(bad_alloc)
    {
        void* p = ::new char[sz];
        if(!p)
        {
            throw bad_alloc(); // "Out of memory"
        }
        return p;
    }
};

int main() {
    try {
        while(1) {
            count++;
            new NoMemory();
        }
    }
    catch(bad_alloc)
    {
        cout << "memory exhausted after " << count << " allocations!" << endl;
        cout << "Out of memory exception" << endl;
        exit(1);
    }
}
My question is: why does this code not throw bad_alloc when it runs completely out of memory (as per the Task Manager's resource monitor on Win7)?
I assume the global ::new char[sz] never returns 0, even when memory is full. But why? It even turns the Win7 OS into a numb, non-responding state once memory runs out, yet it still keeps trying to allocate new space.
(One interesting addition: I tried it on Ubuntu too. bad_alloc is not thrown there either, but that OS does not freeze; the OS kills this dangerous process first. Smart, isn't it?)
Your implementation of operator new is incorrect.
void* operator new(size_t sz) throw(bad_alloc)
{
    void* p = ::new char[sz];
    if(!p)
    {
        throw bad_alloc(); // "Out of memory"
    }
    return p;
}
::new already throws std::bad_alloc on failure, so you don't need to check the returned pointer p.
If you look at g++'s libstdc++ source, it compares the pointer to null after malloc, so you should do the same in order to simulate this:
_GLIBCXX_WEAK_DEFINITION void *
operator new (std::size_t sz) _GLIBCXX_THROW (std::bad_alloc)
{
  void *p;

  /* malloc (0) is unpredictable; avoid it.  */
  if (sz == 0)
    sz = 1;

  while (__builtin_expect ((p = malloc (sz)) == 0, false))
    {
      new_handler handler = std::get_new_handler ();
      if (! handler)
        _GLIBCXX_THROW_OR_ABORT(bad_alloc());
      handler ();
    }

  return p;
}
So it does not return 0; it throws an exception. The reason you don't get it on Linux, I believe, is that in such cases the process is always killed by the kernel (the OOM killer) first.
As @MarcGlisse pointed out, you may want to look at the nothrow (noexcept) version of new:
_GLIBCXX_WEAK_DEFINITION void *
operator new (std::size_t sz, const std::nothrow_t&) _GLIBCXX_USE_NOEXCEPT
{
  void *p;

  /* malloc (0) is unpredictable; avoid it.  */
  if (sz == 0)
    sz = 1;

  while (__builtin_expect ((p = malloc (sz)) == 0, false))
    {
      new_handler handler = std::get_new_handler ();
      if (! handler)
        return 0;
      __try
        {
          handler ();
        }
      __catch(const bad_alloc&)
        {
          return 0;
        }
    }

  return p;
}
As you can see, it returns 0 if allocation fails, and it catches any bad_alloc raised by the new_handler (the default new_handler throws std::bad_alloc).
But even in this case I think the OOM killer will kill your application before you see anything. If your question is more about why it is killed, then I recommend reading about the OOM-killer policy.
I've found my error: I called new wrong, as new NoMemory();.
The correct way is new NoMemory; (without the parentheses). Note that the code below also switched to ::new(std::nothrow), which returns null on failure instead of throwing.
Now it works like a charm, like this:
#include <iostream>
#include <cstdlib>
#include <new> // bad_alloc definition
using namespace std;

int count = 0;

class NoMemory {
    int array[100000];
public:
    void* operator new(size_t sz) throw(bad_alloc)
    {
        void* p = ::new(std::nothrow) char[sz];
        if(0 != p)
            return p;
        throw bad_alloc();
    }
};

int main() {
    try {
        while(1) {
            count++;
            new NoMemory;
        }
    }
    catch(bad_alloc)
    {
        cout << "memory exhausted after " << count << " allocations!" << endl;
        cout << "Out of memory exception" << endl;
        exit(1);
    }
}
I had asked for help on this question here: Static member reclaiming memory and recovering from an exception.
The program below is supposed to allocate memory using its own operator new. I have to throw an exception on the 5th object allocation and recover by freeing up memory (a strange assignment, I know).
I have written the code below. Allocation works, but when I try to call delete (through option '2') I get stuck in an infinite loop.
#include <iostream>
#include <cstdlib>
using namespace std;

class object
{
    int data;
    static int count;
    static void* allocatedObjects[5];
public:
    object() { }
    void* operator new(size_t);
    void operator delete(void *);
    static void release();
    static void printCount()
    {
        cout << count << '\n';
    }
    ~object()
    {
        release();
    }
};

int object::count = 0;
void* object::allocatedObjects[5];

void* object::operator new(size_t size)
{
    if (count > 5)
        throw "Cannot allocate more than 5 objects!\n";
    void *p = malloc(size);
    allocatedObjects[count] = p;
    count++;
    return p;
}

void object::operator delete(void *p)
{
    free(p);
    count--;
}

void object::release()
{
    while (count > 0)
    {
        delete static_cast<object*>(allocatedObjects[count]);
    }
}

int main()
{
    object *o[10];
    int i = -1;
    char c = 1;
    while (c != '3')
    {
        cout << "Number of objects currently allocated : ";
        object::printCount();
        cout << "1. Allocate memory for object.\n";
        cout << "2. Deallocate memory of last object.\n";
        cout << "3. Exit.\n";
        cin >> c;
        if (c == '1')
        {
            try
            {
                i++;
                o[i] = new object;
            }
            catch (char* e)
            {
                cout << e;
                object::release();
                i = 0;
            }
        }
        else if (c == '2' && i >= 0)
        {
            delete o[i];
            i--;
        }
    }
    return 0;
}
What am I doing wrong?
EDIT
I have fixed the delete problem by getting rid of the destructor and explicitly calling release at the end of main.
But now my catch block is not catching the exception. After allocating 5 objects, the exception is thrown (as traced through the debugger) but not caught. The new changes in the code do not affect the related code.
The infinite loop occurs because the destructor (~object()) is called before object::operator delete(). Your destructor attempts to delete the most recently allocated object, which calls the destructor on the same object again, so operator delete() is never reached.
I'm not sure what allocatedObjects achieves here; the code will work without it. Get rid of release() as well.
UPDATE
OK, so there is a need for release() (exception handling).
Don't call release() from the destructor. Instead, make operator delete() set the matching allocatedObjects entry to 0 after calling free (mirroring operator new()). The release() function is then only called during exception handling and makes sure un-freed memory is freed correctly (i.e. it loops through the allocatedObjects array, frees the non-zero entries, and sets them to zero). As for the EDIT: a thrown string literal has type const char*, so a catch (char* e) clause will not catch it; it must be catch (const char* e).
I'm using a vector container to hold instances of an object containing 3 ints and 2 std::strings. It is created on the stack and populated from a function in another class, but running the app through Deleaker shows that the std::strings from the object are all leaked. Here's the code:
// Populator function:
void PopulatorClass::populate(std::vector<MyClass>& list) {
    // m_MainList contains a list of pointers to the master objects
    for( std::vector<MyClass*>::iterator it = m_MainList.begin(); it != m_MainList.end(); it++ ) {
        list.push_back(**it);
    }
}

// Class definition
class MyClass {
private:
    std::string m_Name;
    std::string m_Description;
    int m_nType;
    int m_nCategory;
    int m_nSubCategory;
};

// Code causing the problem:
std::vector<MyClass> list;
populator.populate(list); // populator is an instance of PopulatorClass
When this is run through Deleaker, the leaked memory is in the allocator for the std::string members.
I'm using Visual Studio 2010 (CRT).
Is there anything special I need to do to make the strings delete properly when unwinding the stack and deleting the vector?
Thanks,
J
Maybe Memory leak with std::vector<std::string>, or something like it.
Every time you have a problem with the STL implementation apparently doing something strange or wrong, like a memory leak, try this:
Reproduce the most basic example of what you are trying to achieve. If it runs without a leak, then the problem is in the way you fill the data; your own code is the most probable source of the problem.
Here is an untested, on-the-fly example for your specific case:
#include <string>
#include <sstream>
#include <vector>

// Class definition
struct MyClass { // struct for convenience
    std::string m_Name;
    std::string m_Description;
    int m_nType;
    int m_nCategory;
    int m_nSubCategory;
};

// Prototype of populator function:
void populate(std::vector<MyClass>& list)
{
    const int MAX_TYPE_IDX = 4;
    const int MAX_CATEGORY_IDX = 8;
    const int MAX_SUB_CATEGORY_IDX = 6;
    for( int type_idx = 0; type_idx < MAX_TYPE_IDX ; ++type_idx)
    for( int category_idx = 0; category_idx < MAX_CATEGORY_IDX ; ++category_idx)
    for( int sub_category_idx = 0; sub_category_idx < MAX_SUB_CATEGORY_IDX ; ++sub_category_idx)
    {
        std::stringstream name_stream;
        name_stream << "object_" << type_idx << "_" << category_idx << "_" << sub_category_idx ;

        std::stringstream desc_stream;
        desc_stream << "This is an object of the type N°" << type_idx << ".\n";
        desc_stream << "It is of category N°" << category_idx << ",\n";
        desc_stream << "and of sub-category N°" << sub_category_idx << "!\n";

        MyClass object;
        object.m_Name = name_stream.str();
        object.m_Description = desc_stream.str();
        object.m_nType = type_idx;
        object.m_nCategory = category_idx;
        object.m_nSubCategory = sub_category_idx;

        list.push_back( object );
    }
}

int main()
{
    // Code causing the problem:
    std::vector<MyClass> list;
    populate(list);

    // memory leak check?
    return 0;
}
If you still get the memory leak, first check that it's not a false positive from your leak-detection software.
If it's not, search for memory-leak problems with your STL implementation (most of the time on the compiler vendor's website). The implementor might provide a bug tracker where you can search for the same problem and a potential solution.
If you still can't find the source of the leak, try building your project with a different compiler (if you can) and see if it has the same effect. If the leak still occurs, it most likely comes from your own code.
Probably the same root issue as Alexey's link. The shipped version has broken move code for basic_string, and MS abandoned us VC10 users, so you must fix it yourself. In the xstring header you have this:
_Myt& assign(_Myt&& _Right)
{   // assign by moving _Right
    if (this == &_Right)
        ;
    else if (get_allocator() != _Right.get_allocator()
        && this->_BUF_SIZE <= _Right._Myres)
        *this = _Right;
    else
    {   // not same, clear this and steal from _Right
        _Tidy(true);
        if (_Right._Myres < this->_BUF_SIZE)
            _Traits::move(this->_Bx._Buf, _Right._Bx._Buf,
                _Right._Mysize + 1);
        else
        {   // copy pointer
            this->_Bx._Ptr = _Right._Bx._Ptr;
            _Right._Bx._Ptr = 0;
        }
        this->_Mysize = _Right._Mysize;
        this->_Myres = _Right._Myres;
        _Right._Mysize = 0;
        _Right._Myres = 0;
    }
    return (*this);
}
Note the final
_Right._Myres = 0;
which should happen only under the last condition; in the short-string case _Right is better left alone.
Because the capacity is set to 0 instead of 15, other code will take an unintended branch in the Grow() function when you assign another small string, and will allocate a block of memory just to trample over the pointer with the immediate string content.