Testing raw pointers in C++, but could not find leaks [duplicate] - c++

This question already has answers here:
method on deleted instance of class still work?
(4 answers)
Closed 5 years ago.
I am playing a bit with raw pointers in C++. I know that nowadays it is good practice to use smart pointers, but as I am learning C++ on my own, I would like to first understand raw pointers before moving on to smart pointers.
To play around, I have created a fakeClass and am experimenting in an Xcode C++ console project:
/** fakeClass.hpp **/
#ifndef fakeClass_hpp
#define fakeClass_hpp

namespace RPO {

class fakeClass {
private:
    int _id;
public:
    fakeClass(int id);
    ~fakeClass();
    void message();
}; // end class

} // end namespace

#endif
/** fakeClass.cpp **/
#include "fakeClass.hpp"
#include <iostream>

namespace RPO {

fakeClass::fakeClass(int id) {
    _id = id;
    std::cout << "Creating object: " << _id << std::endl;
}

fakeClass::~fakeClass() {
    std::cout << "Destroying object: " << _id << std::endl;
}

void fakeClass::message() {
    std::cout << "Object: " << _id << std::endl;
}

}
/** main.cpp **/
#include "fakeClass.hpp"

int main(int argc, const char * argv[]) {
    // Instantiate on the stack
    RPO::fakeClass fClass(1);
    fClass.message();

    // Instantiate on the heap
    RPO::fakeClass *fClassPointer = new RPO::fakeClass(2);
    fClassPointer->message();
    fClassPointer = new RPO::fakeClass(3); // Create new object #3 on a non-deleted pointer: object #2 is leaked
    fClassPointer->message();
    delete fClassPointer; // Free memory of object #3, but the pointer still holds that memory address
    fClassPointer = nullptr; // The pointer is now null
    fClassPointer->message(); // undefined behaviour: member call through a null pointer
}
Output:
Creating object: 1
Object: 1
Creating object: 2
Object: 2
Creating object: 3
Object: 3
Destroying object: 3
Destroying object: 1
Edited:
Thanks to the reviewers. I had been searching but could not find anything; the question posted by tobi303 answers one of my questions (why a method still responds after the memory is freed), but as the title says, the main question is about leaks.
As can be seen, object #2 is never destroyed, so it is an obvious memory leak. I set a breakpoint after creating object #3 and ran "leaks pruebaC" in the Terminal, following the instructions from this question, but I get no leak report... why?
leaks Report Version: 2.0
Process 1292: 147 nodes malloced for 17 KB
Process 1292: 0 leaks for 0 total leaked bytes.
I also tried after fClassPointer = nullptr;, with the same result.
So, two questions: why didn't the leak show up? And is the memory used by an app freed when the app is terminated, even memory that was leaked and no longer has any pointer to it?
Thank you.
PS: as an extra bonus, when I see examples with "char *myString", should I "delete myString" afterwards?

That is my question. I suppose it is because the memory is freed but not yet overwritten, and that is why I get the "Object: 2" message after deleting object #2. Is that correct?
This is "correct" in the sense that most implementations will behave this way, but your situation qualifies as "undefined behavior", so there is no guarantee that this will work across different compilers, compiler versions, or architectures.
when running on Terminal "leaks pruebaC", I get no leak report... why?
There is no leak detector added by default, because it adds unnecessary weight to the program (it makes shutdown slow). You have to add a leak detector explicitly yourself (for example by linking to tcmalloc).
when I see examples with "char *myString", should I "delete myString" after?
Here's a rule of thumb for this: Every new needs to be matched with a delete, and vice-versa.
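As a side note on the leak itself, here is a minimal sketch (reusing the fakeClass defined in the question; everything else is illustrative, not the original code) of how main could avoid leaking object #2: either delete before reassigning the raw pointer, or let std::unique_ptr do it on reassignment.
/** main.cpp - leak-free variant (illustrative sketch) **/
#include <memory>
#include "fakeClass.hpp"

int main() {
    RPO::fakeClass fClass(1);      // stack object, destroyed automatically
    fClass.message();

    // Manual management: delete object #2 before reusing the pointer.
    RPO::fakeClass *fClassPointer = new RPO::fakeClass(2);
    fClassPointer->message();
    delete fClassPointer;                         // object #2 is destroyed, no leak
    fClassPointer = new RPO::fakeClass(3);
    fClassPointer->message();
    delete fClassPointer;
    fClassPointer = nullptr;

    // Smart-pointer version: reassignment deletes the old object for you.
    std::unique_ptr<RPO::fakeClass> smart = std::make_unique<RPO::fakeClass>(4);
    smart->message();
    smart = std::make_unique<RPO::fakeClass>(5);  // object #4 is deleted here
    smart->message();
}                                                 // object #5 is deleted when 'smart' goes out of scope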

Related

Do any of these pointer operation trigger undefined behavior?

The question:
I have some code executing which is very unsafe. My question is, does this code trigger undefined behavior or is it 'correct' (= behavior fully defined).
Some context:
I have a program which has shown strange bugs/crashes in the past, which repeatedly vanish again for no apparent reason.
I have identified some pieces of (pointer) logic as a potential source of these bugs. Still, the bugs could also be caused by other parts of that code.
The bugs typically happen in Release builds, and not in Debug builds in case that matters.
I have taken these lines of (pointer) logic code and created a clear minimum-working example.
This minimum-working example runs without problems, which I know is no conclusive proof that no undefined behavior is happening.
Therefore, I ask whether someone sees any 'C++ rules' being broken (= resulting in undefined behavior).
I build and develop this in Visual Studio 2019 and/or 2022.
I know this code contains very bad patterns. However, this is a minimal working example and in the actual code, many things are out of my control so I cannot simply change much of it.
For now, I just want to learn if certain pointer operations result in defined or undefined behavior.
In case one or more operations do trigger undefined behavior, I would be grateful if someone could point out which operations they are and why they trigger undefined behavior.
The code:
Main program, minimum_working_example.cpp:
#include <iostream>
#include <sstream>
#include <memory>   // for std::shared_ptr / std::make_shared
#include <cstdint>  // for intptr_t
#include "CppObject.h"

int main()
{
    /* Create a shared pointer on the heap.
     * The lifetime of this shared_ptr is managed by some managed C# code which wraps this C++ code.
     * That is why the shared_ptr is created on the heap, so C# can manage the lifetime of the pointer and therefore the C++ object.
     */
    std::shared_ptr<void>* regularPointerToSmartPointer = new std::shared_ptr<void>(); // Create a shared pointer on the heap, and store a regular pointer pointing to it.

    // Create a CppObject and store it in the shared_ptr<void> (note that this should add a reference to the correct type deleter into the shared_ptr).
    int someValue = 5; // Some value stored in the CppObject, just for illustration.
    *regularPointerToSmartPointer = std::make_shared<CppObjectStuff::CppObject>(someValue); // Create an object on the heap, create a smart pointer pointing to it, and store this smart pointer on the heap at the memory reserved in the previous step.

    // Just get the raw pointer to the object.
    void* regularPointerToObject = (*regularPointerToSmartPointer).get();

    // Convert pointer to string.
    std::stringstream ss1;
    ss1 << std::hex << regularPointerToObject; // Pointer to hex string.
    std::string regularPointerToObjectString = ss1.str();

    /* Many other operations can happen here,
     * e.g. passing this string to all sorts of objects which could
     * desire to use the instance of CppObject.
     */

    // Convert the string back to a pointer.
    std::stringstream ss2;
    ss2 << regularPointerToObjectString;
    intptr_t raw_ptr = 0;
    ss2 >> std::hex >> raw_ptr; // Hex string to int.
    void* regularPointerToObjectFromString = reinterpret_cast<void*>(raw_ptr);

    // Convert the void pointer to a CppObject pointer.
    CppObjectStuff::CppObject* pointerToCppObject = static_cast<CppObjectStuff::CppObject*>(regularPointerToObjectFromString);

    // Do something with the CppObject behind the pointer.
    int result = pointerToCppObject->GiveIntValue();

    // Print the result.
    std::cout << "This code runs fine:" << "\n";
    std::cout << result;

    // Delete the shared_ptr, which in turn deletes the CppObject (something the managed C# code would normally do).
    delete regularPointerToSmartPointer;
}
For reference, CppObject.h:
#pragma once

namespace CppObjectStuff
{
    class CppObject
    {
    public:
        CppObject(int value) { _intValue = value; };
        int GiveIntValue() { return _intValue; };
    private:
        int _intValue;
    };
}
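A self-contained sketch of just the pointer-to-hex-string round trip used in main above. It is a slight variation that casts to std::uintptr_t before streaming, since operator<< for void* prints in an implementation-defined format; the names here are illustrative and not from the question.
#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    int object = 5;
    void* original = &object;

    // Pointer to hex string.
    std::stringstream out;
    out << std::hex << reinterpret_cast<std::uintptr_t>(original);
    std::string text = out.str();

    // Hex string back to pointer.
    std::uintptr_t bits = 0;
    std::stringstream in(text);
    in >> std::hex >> bits;
    void* recovered = reinterpret_cast<void*>(bits);

    // The round trip through uintptr_t yields the same pointer value.
    std::cout << std::boolalpha << (recovered == original) << '\n';  // true
    std::cout << *static_cast<int*>(recovered) << '\n';              // 5
}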
One item that could cause discussion:
I already researched specifically the following tricky case, which I concluded should be allowed; see e.g. this blog post.
Note, this is similar but not fully identical to the usage in my code above, so there could still be something I misunderstand:
// Start scope
{
    // Create object in shared_ptr
    std::shared_ptr<void> regularPointerToSmartPointer = std::make_shared<CppObjectStuff::CppObject>(someValue);
}
// Scope ended: the shared_ptr went out of scope and should delete the CppObjectStuff::CppObject created before.
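On that point, a minimal, self-contained sketch (Noisy is an illustrative stand-in, not from the question) of why shared_ptr<void> still runs the correct destructor: the deleter is captured when the shared_ptr is created from the concrete type, not when it is destroyed.
#include <iostream>
#include <memory>

struct Noisy {
    ~Noisy() { std::cout << "~Noisy called\n"; }
};

int main() {
    {
        std::shared_ptr<void> erased = std::make_shared<Noisy>();
        // 'erased' has static type shared_ptr<void>, but its control block
        // owns a Noisy and remembers how to destroy it.
    } // prints "~Noisy called" here
}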

What to give a function that expects a raw pointer?

I'm using a library that, in order to construct some object that I use, expects a raw pointer to an object. I'm not sure what it will do with the pointer. To make my code as safe as possible, what should I pass to this function?
1. Use a unique pointer - if they decide to delete the pointer, I will do a double delete.
2. Keep track of a raw pointer - bad because I have to remember to write delete, and it could still be a double delete.
3. Use automatic duration and give them a pointer (edited from "give them a reference" - their code will error if they call delete on it).
4. Use a shared pointer - same double-delete problem as the unique pointer, but now my scope won't hurt their pointer.
Based on my reading, option 3 seems like what I should do - they shouldn't be calling delete on the pointer, and this format enforces that. But what if I don't know whether they will, now or in the future, call delete on what I gave them? Use a shared pointer and say "not my fault about the double delete"?
#include <memory>
#include <iostream>

class ComplexObj {
public:
    ComplexObj() : m_field(0) {}
    ComplexObj(int data) : m_field(data) {}
    void print() { std::cout << m_field << std::endl; }
private:
    int m_field;
};

class BlackBox {
public:
    BlackBox(ComplexObj* data) {
        m_field = *data;
        // Do other things I guess...
        delete data;
        std::cout << "Construction complete" << std::endl;
    }
    void print_data() { m_field.print(); }
private:
    ComplexObj m_field;
};

int main(int argc, char* argv[]) {
    // Use a smart pointer
    std::unique_ptr<ComplexObj> my_ptr(new ComplexObj(1));
    BlackBox obj1 = BlackBox(my_ptr.get());
    obj1.print_data();
    my_ptr->print(); // Bad data, since BlackBox free'd
    // double delete when my_ptr goes out of scope

    // Manually manage the memory
    ComplexObj* manual = new ComplexObj(2);
    BlackBox obj2 = BlackBox(manual);
    obj2.print_data();
    manual->print(); // Bad data, since BlackBox free'd
    delete manual; // Pair new and delete, but this is a double delete

    // Edit: use auto-duration and give them a pointer
    ComplexObj by_ref(3);
    BlackBox obj3 = BlackBox(&by_ref); // they can't call delete on the pointer they have
    obj3.print_data();
    by_ref.print();

    // Use a shared pointer
    std::shared_ptr<ComplexObj> our_ptr(new ComplexObj(4));
    BlackBox obj4 = BlackBox(our_ptr.get());
    obj4.print_data();
    our_ptr->print(); // Bad data, they have free'd
    // double delete when our_ptr goes out of scope

    return 0;
}
Other questions I read related to this topic...
unique_ptr.get() is legit at times
I should pass by reference
I think I am case 2 and should pass by reference
You cannot solve this problem with the information you have. All choices produce garbage.
You have to read the documentation of the API you are using.
Doing any of your 4 options without knowing whether they take ownership of the pointer will result in problems.
Life sometimes sucks.
If you have a corrupt or hostile API, the only halfway safe thing to do is to interact with it in a separate process, carefully flush all communication, and shut down the process.
If the API isn't corrupt or hostile, you should be able to know if it is taking ownership of the pointed to object. Calling an API without knowing this is a common mistake in novice C++ programmers. Don't do it. Yes, this sucks.
If this API is at all internal and you have any control, seek to make all "owning pointer" arguments be std::unique_ptr<>s. That makes it clear in the API that you intend to own the object and delete it later.
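To illustrate that last suggestion, here is a hedged sketch of what an ownership-explicit API could look like; OwningBlackBox and print_once are made-up names for illustration, not part of the library in question.
#include <iostream>
#include <memory>
#include <utility>

class ComplexObj {
public:
    explicit ComplexObj(int data) : m_field(data) {}
    void print() const { std::cout << m_field << '\n'; }
private:
    int m_field;
};

// Takes ownership: the caller hands the object over and cannot double-delete it.
class OwningBlackBox {
public:
    explicit OwningBlackBox(std::unique_ptr<ComplexObj> data) : m_data(std::move(data)) {}
    void print_data() const { m_data->print(); }
private:
    std::unique_ptr<ComplexObj> m_data;
};

// Does not take ownership: a reference (or raw pointer) documents "observe only".
void print_once(const ComplexObj& obj) { obj.print(); }

int main() {
    auto owned = std::make_unique<ComplexObj>(1);
    print_once(*owned);                    // non-owning use
    OwningBlackBox box(std::move(owned));  // ownership transferred; 'owned' is now empty
    box.print_data();
}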

Accessing pointers after deletion [duplicate]

This question already has answers here:
What happens to the pointer itself after delete? [duplicate]
(3 answers)
Closed 3 years ago.
I have the code snippet below. I dynamically allocated some instances of my Something class and then deleted them.
The code prints wrong data, which I expect, but why does ->show() not crash?
In what case / how would ->show() cause a crash?
Is it possible to overwrite the same memory location of i, ii, iii with some other object?
I am trying to understand why, after delete frees up the memory location to be reused by something else, it still has the information needed for ->show()!
#include <iostream>
#include <vector>

class Something
{
public:
    Something(int i) : i(i)
    {
        std::cout << "+" << i << std::endl;
    }
    ~Something()
    {
        std::cout << "~" << i << std::endl;
    }
    void show()
    {
        std::cout << i << std::endl;
    }
private:
    int i;
};

int main()
{
    std::vector<Something *> somethings;

    Something *i = new Something(1);
    Something *ii = new Something(2);
    Something *iii = new Something(3);

    somethings.push_back(i);
    somethings.push_back(ii);
    somethings.push_back(iii);

    delete i;
    delete ii;
    delete iii;

    std::vector<Something *>::iterator n;
    for(n = somethings.begin(); n != somethings.end(); ++n)
    {
        (*n)->show(); // In what case would this line crash?
    }

    return 0;
}
The code prints wrong data, which I expect, but why does ->show() not crash?
Why do you simultaneously expect the data to be wrong, but also that it would crash?
The behaviour of indirecting through an invalid pointer is undefined. It is not reasonable to expect the data to be correct, nor to expect it to be wrong, nor to expect that the program should crash, nor, in particular, to expect that it shouldn't crash.
In what case / how would ->show() cause a crash?
There is no situation where the C++ language specifies that the program must crash. Crashing is a detail of the particular implementation of C++.
For example, a Linux system will typically force the process to crash due to "segmentation fault" if you attempt to write into a memory area that is marked read-only, or attempt to access an unmapped area of memory.
There is no direct way in standard C++ to create memory mappings: The language implementation takes care of mapping the memory for objects that you create.
Here is an example of a program that demonstrably crashes on a particular system:
int main() {
    int* i = nullptr;
    *i = 42;
}
But C++ does not guarantee that it crashes.
Is it possible to overwrite the same memory location of i, ii, iii with some other object?
The behaviour is undefined. Anything is possible as far as the language is concerned.
Remember, a pointer stores a memory address. On a call to delete, the dynamically allocated memory is deallocated, but the pointer still stores the old address. If we nulled the pointers, the program would most likely crash (calling a member function through a null pointer is still undefined behaviour, but it typically faults).
See this question: What happens to the pointer itself after delete?
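A minimal sketch of that suggestion applied to the question's code (Something is restated here in reduced form): null each stored pointer right after deleting it, and check before calling show(), which keeps the loop well-defined.
#include <iostream>
#include <vector>

// Same idea as the question's Something, reduced to the essentials.
struct Something {
    int i;
    explicit Something(int i) : i(i) {}
    void show() const { std::cout << i << '\n'; }
};

int main() {
    std::vector<Something*> somethings{ new Something(1), new Something(2), new Something(3) };

    // Destroy the objects and null the stored pointers at the same time.
    for (Something*& p : somethings) {
        delete p;
        p = nullptr;
    }

    // Dereferencing a null (or dangling) pointer is undefined behaviour,
    // so check before calling show().
    for (Something* p : somethings) {
        if (p != nullptr)
            p->show();   // never reached here, since every entry was deleted
    }
}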

how to handle double free crash in c++

Deleting the same pointer twice can cause harmful effects, such as crashing the program, and the programmer should try to avoid it, as it is not allowed.
But if somebody does it anyway, how do we take care of it?
Since delete in C++ is a noexcept operator, it will not throw any exceptions, and its return type is void, so how do we catch this kind of error?
Below is the code snippet
#include <iostream>
#include <stdexcept>
#include <string>
using namespace std;

class myException : public std::runtime_error
{
public:
    myException(std::string const& msg) :
        std::runtime_error(msg)
    {
        cout << "inside class \n";
    }
};

int main()
{
    int* set = new int[100];
    cout << "memory allocated \n";
    //use set[]
    delete [] set;
    cout << "After delete first \n";
    try {
        delete [] set;
        throw myException("Error while deleting data \n");
    }
    catch (std::exception& e)
    {
        cout << "exception \n";
    }
    catch (...)
    {
        cout << "generic catch \n";
    }
    cout << "After delete second \n";
}
In this case I tried to catch the exception, but with no success.
Please provide your input on how to take care of this type of scenario.
Thanks in advance!
Given that the behaviour on a subsequent delete[] is undefined, there's nothing you can do, aside from writing
set = nullptr;
immediately after the first delete[]. This exploits the fact that a deletion of a nullptr is a no-op.
But really, that just encourages programmers to be sloppy.
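A minimal, self-contained sketch of that suggestion, assuming the same int array as in the question:
#include <iostream>

int main()
{
    int* set = new int[100];
    delete [] set;
    set = nullptr;        // null it immediately after the first delete[]
    delete [] set;        // deleting a null pointer is a no-op: no crash, no undefined behaviour
    std::cout << "second delete was harmless\n";
}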
Segmentation faults, bad memory accesses, and bus errors cannot be caught as C++ exceptions. Programmers need to manage their own memory correctly, as you do not have garbage collection in C/C++.
But you are using C++, no ? Why not make use of RAII ?
Here is what you should strive to do:
Memory ownership - make it explicit by using std::unique_ptr or std::shared_ptr and family.
No explicit raw calls to new or delete. Make use of make_unique, make_shared or allocate_shared.
Make use of containers like std::vector or std::array instead of creating dynamic arrays or allocating arrays on the stack, respectively.
Run your code through valgrind (Memcheck) to make sure there are no memory related issues in your code.
If you are using shared_ptr, you can use a weak_ptr to get access to the managed object without incrementing the reference count. In this case, if the managed object has already been deleted, constructing a shared_ptr from the expired weak_ptr throws a std::bad_weak_ptr exception (weak_ptr::lock() instead returns an empty shared_ptr). This is the only scenario I know of where an exception is thrown for you to catch when accessing a deleted pointer.
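A minimal sketch of that pattern (the names are illustrative): constructing a shared_ptr from an expired weak_ptr throws std::bad_weak_ptr, while weak_ptr::lock() returns an empty shared_ptr instead of throwing.
#include <iostream>
#include <memory>

int main()
{
    std::weak_ptr<int> weak;
    {
        auto owner = std::make_shared<int>(42);
        weak = owner;
        std::cout << *std::shared_ptr<int>(weak) << '\n';  // fine: the object is still alive
    }   // 'owner' goes out of scope, the int is deleted

    try {
        std::shared_ptr<int> locked(weak);   // throws std::bad_weak_ptr: the object is gone
        std::cout << *locked << '\n';
    }
    catch (const std::bad_weak_ptr& e) {
        std::cout << "caught: " << e.what() << '\n';
    }

    if (auto locked = weak.lock()) {         // lock() never throws; it returns empty instead
        std::cout << *locked << '\n';
    } else {
        std::cout << "object already deleted\n";
    }
}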
Code must undergo multiple levels of testing iterations, possibly with different sets of tools, before being committed.
There is a very important concept in c++ called RAII (Resource Acquisition Is Initialisation).
This concept encapsulates the idea that no object may exist unless it is fully serviceable and internally consistent, and that deleting the object will release any resources it was holding.
For this reason, when allocating memory we use smart pointers:
#include <memory>
#include <iostream>
#include <algorithm>
#include <iterator>

int main()
{
    using namespace std;

    // allocate an array into a smart pointer
    auto set = std::make_unique<int[]>(100);
    cout << "memory allocated \n";

    //use set[]
    for (int i = 0 ; i < 100 ; ++i) {
        set[i] = i * 2;
    }
    std::copy(&set[0], &set[100], std::ostream_iterator<int>(cout, ", "));
    cout << std::endl;

    // delete the set
    set.reset();
    cout << "After delete first \n";

    // delete the set again
    set.reset();
    cout << "After delete second \n";

    // set also deleted here through RAII
}
I'm adding another answer here because previous answers focus very strongly on manually managing that memory, while the correct answer is to avoid having to deal with that in the first place.
#include <iostream>
#include <vector>

int main() {
    std::vector<int> set(100);
    std::cout << "memory allocated\n";
    //use set
}
This is it. This is enough. This gives you 100 integers to use as you like. They will be freed automatically when control flow leaves the function, whether through an exception, or a return, or by falling off the end of the function. There is no double delete; there isn't even a single delete, which is as it should be.
Also, I'm horrified to see suggestions in other answers for using signals to hide the effects of what is a broken program. If someone is enough of a beginner to not understand this rather basic stuff, PLEASE don't send them down that path.

Accessing an already destroyed object does not cause segfault [duplicate]

This question already has answers here:
Can a local variable's memory be accessed outside its scope?
(20 answers)
C++ delete - It deletes my objects but I can still access the data?
(13 answers)
Closed 7 years ago.
For fun, I decided to see what gdb would say about this code, which attempts to use the methods of an already destroyed object.
#include <iostream>

class ToDestroy
{
public:
    ToDestroy() { }
    ~ToDestroy() {
        std::cout << "Destroyed!" << std::endl;
    }
    void print() {
        std::cout << "Hello!" << std::endl;
    }
};

class Good
{
public:
    Good() { }
    ~Good() { }
    void setD(ToDestroy* p) {
        mD = p;
    }
    void useD() {
        mD->print();
    }
private:
    ToDestroy* mD;
};

int main() {
    Good g;
    {
        ToDestroy d;
        g.setD(&d);
    }
    g.useD();
    return 0;
}
The output is (built with -O0 flag):
Destroyed!
Hello!
Allocating d on the heap and deleting it causes the same behaviour (i.e., no crash).
I assume the memory has not been overwritten and C++ is 'tricked' into using it normally. However, I am surprised about the fact that, when allocating on the heap and deleting, one can use memory not assigned to them.
Can someone provide any more insight about this? Does this mean that when trying to dereference a pointer, if that memory happens to have something 'coherent' for our context the execution would not cause a SEGFAULT despite the memory not having been assigned to us?
A segfault happens when you try to access an address that the OS forbids you to access. This can be because the memory behind the address is not allocated to your process, because it does not exist, or similar. Here, you are accessing a piece of memory that is still allocated to your process, so there is no segfault.
malloc (the allocator that manages your heap) works with buffers larger than what you request, to limit the number of syscalls, so there is memory, freed or not yet handed out, that your process can still touch without faulting.
You pass an invalid this pointer to print, but it is never dereferenced, as print is not virtual and does not access any member.
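To illustrate that last point, here is a hedged variant of the question's class (not from the original answers): if print() reads a member, or is virtual, the call has to go through the dangling this pointer, and garbage output or a crash becomes much more likely, although it is undefined behaviour either way.
#include <iostream>

class ToDestroy
{
public:
    ToDestroy() : mValue(42) { }
    ~ToDestroy() { std::cout << "Destroyed!" << std::endl; }
    // Unlike the original print(), this one reads a member, so the call
    // must dereference 'this'.
    void print() { std::cout << "Value: " << mValue << std::endl; }
    // A virtual function is even riskier: the call goes through the vtable
    // pointer stored inside the (now dead) object.
    virtual void vprint() { std::cout << "Virtual: " << mValue << std::endl; }
private:
    int mValue;
};

int main() {
    ToDestroy* p = new ToDestroy();
    delete p;
    p->print();   // undefined behaviour: may print 42, garbage, or crash
    p->vprint();  // undefined behaviour: reads a dangling vtable pointer
}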