I have some code and I'm trying to figure out why I'm getting a segmentation fault here:
I add a SpeedEffect to an EffectStack, and this works fine. But when I try to remove one of the effects that is already on the stack, I have to call effect->removeEffect(), and this causes a segmentation fault.
If I call effect.removeEffect() directly from the testStack() function, it works fine (and prints the expected "speed effect removed" on the console):
void Test::testStack() {
Story* st = new Story; //<-- only needed for initialization of an Effect
Veins::TraCIMobility* mob = new Veins::TraCIMobility; //<-- only needed for initialization of an Effect
SpeedEffect a = SpeedEffect(1.0, st, mob);
a.removeEffect(); //<-- This one works quite well
(&a)->removeEffect(); //<-- Clearly, this works too
EffectStack s;
s.addEffect(&a); //<-- Adds an Effect to the effect stack
assert(s.getEffects().size() == 1);
s.removeEffect(&a); //<-- Try to remove effect from stack
}
The stack and the effect are implemented as follows:
class Effect {
public:
Effect(Story* story, Veins::TraCIMobility* car) :
m_story(story), m_car(car) {}
virtual void removeEffect() = 0;
private:
Story* m_story;
protected:
Veins::TraCIMobility* m_car;
};
class SpeedEffect : public Effect {
public:
SpeedEffect(double speed, Story* story, Veins::TraCIMobility* car):
Effect(story, car), m_speed(speed){}
void removeEffect() {
std::cout << "speed effect removed" << std::endl;
}
private:
double m_speed;
};
class EffectStack {
public:
void addEffect(Effect* effect) {
if(std::count(m_effects.begin(), m_effects.end(), effect) == 0) {
m_effects.push_back(effect);
}
}
void removeEffect(Effect* effect) {
if(effect == m_effects.back()) {
// effect points at the same address as before, but this still causes the seg fault
m_effects.back()->removeEffect(); //<--- Seg Fault here!!
effect->removeEffect(); //<-- if I use this, seg fault too
m_effects.pop_back();
}else {
removeFromMiddle(effect);
}
}
const std::vector<Effect*>& getEffects() {
return m_effects;
}
private:
std::vector<Effect*> m_effects;
};
I hope this code is enough; I have removed all functions that are not called by the test scenario.
Could the problem be that the address of the SpeedEffect a becomes invalid inside the stack?
Maybe you can help me with this.
New thoughts about the question:
Now I have tested a bit more, which makes me even more confused:
void dofoo(SpeedEffect* ef) {
ef->removeEffect(); //<-- breaks with a segmentation fault
}
void Test::testStack() {
Story* st = new Story;
Veins::TraCIMobility* mob = new Veins::TraCIMobility;
SpeedEffect e = SpeedEffect(3.0, st, mob);
e.removeEffect(); //<-- Works fine
(&e)->removeEffect(); //<-- Works fine also
dofoo(&e); //<-- Jumps into the dofoo() function
}
This may not help you, but persisting the address of stack-based objects is usually not a great idea. In your code above it's potentially okay since you know EffectStack won't outlive your effect.
Does the crash still occur if you do:
SpeedEffect* a = new SpeedEffect(1.0, st, mob);
(and adjust the rest of the code accordingly)? This will leak memory of course, but it will tell you whether the problem is SpeedEffect being destroyed. Another option is to give SpeedEffect a destructor (and Effect a virtual destructor) and set a breakpoint inside to see when the compiler is destroying 'a'.
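For example, a rough sketch of that second suggestion (hypothetical; only the parts needed for the breakpoint are shown, built on the question's classes):
class Effect {
public:
    Effect(Story* story, Veins::TraCIMobility* car) : m_story(story), m_car(car) {}
    virtual ~Effect() {}   // virtual, so destruction through an Effect* is well defined
    virtual void removeEffect() = 0;
private:
    Story* m_story;
protected:
    Veins::TraCIMobility* m_car;
};
class SpeedEffect : public Effect {
public:
    SpeedEffect(double speed, Story* story, Veins::TraCIMobility* car) :
        Effect(story, car), m_speed(speed) {}
    ~SpeedEffect() {
        std::cout << "speed effect destroyed" << std::endl;   // set a breakpoint here
    }
    void removeEffect() { std::cout << "speed effect removed" << std::endl; }
private:
    double m_speed;
};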
Story* st = new Story; //<-- only needed for initialization of an Effect
Veins::TraCIMobility* mob = new Veins::TraCIMobility; //<-- only needed for initialization
I don't see delete st and delete mob anywhere - memory is allocated for these objects inside void Test::testStack() but never explicitly released.
Add these two statements at the end of the function and try again.
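A minimal sketch of that, assuming nothing else in the simulation owns st and mob:
void Test::testStack() {
    Story* st = new Story;
    Veins::TraCIMobility* mob = new Veins::TraCIMobility;
    // ... the existing test code ...
    delete mob;   // release what was allocated above
    delete st;
}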
I have found the problem.
I'm using the OMNeT++ simulation framework, and something unexpected happens when I instantiate the TraCIMobility.
So without it, there is no error.
Related
Here I have a class definition. It is a little long, but the focus will be on the move constructor and the destructor. Below the class definition is a short test.
#include <cassert>
#include <iostream>
#include <utility>
template <typename T>
class SharedPtr {
public:
SharedPtr() {}
explicit SharedPtr(T* input_pointer) : raw_ptr_(input_pointer), ref_count_(new size_t(1)) {}
SharedPtr(const SharedPtr& other) : raw_ptr_(other.raw_ptr_), ref_count_(other.ref_count_) {
if (ref_count_) {
++*ref_count_;
}
}
SharedPtr(SharedPtr&& other) : raw_ptr_(other.raw_ptr_), ref_count_(other.ref_count_) {}
SharedPtr& operator=(SharedPtr other) {
swap(other, *this);
return *this;
}
size_t use_count() const {
return ref_count_ ? *ref_count_ : 0;
}
~SharedPtr() {
if (ref_count_) {
--*ref_count_;
if (*ref_count_ == 0) {
delete raw_ptr_;
delete ref_count_;
}
}
}
private:
T* raw_ptr_ = nullptr;
size_t* ref_count_ = nullptr;
friend void swap(SharedPtr<T>& left, SharedPtr<T>& right) {
std::swap(left.raw_ptr_, right.raw_ptr_);
std::swap(left.ref_count_, right.ref_count_);
}
};
int main() {
// Pointer constructor
{
SharedPtr<int> p(new int(5));
SharedPtr<int> p_move(std::move(p));
assert(p_move.use_count() == 1);
}
std::cout << "All tests passed." << std::endl;
return 0;
}
If I run the code I get an error message indicating memory corruption:
*** Error in `./a.out': corrupted size vs. prev_size: 0x0000000001e3dc0f ***
======= Backtrace: =========
...
======= Memory map: ========
...
Aborted (core dumped)
We may suspect something is wrong with the move constructor: if we move from a SharedPtr and then later destruct that SharedPtr, it will still destruct as if it were an "active" SharedPtr. So we could fix that by setting the other object's pointers to nullptr in the move constructor.
But that's not the interesting thing about this code. The interesting thing is what happens if I don't do that, and instead simply add std::cout << "x" << std::endl; to the move constructor.
The new move constructor is given below, and the rest of the code is unchanged.
SharedPtr(SharedPtr&& other) : raw_ptr_(other.raw_ptr_), ref_count_(other.ref_count_) {
std::cout << "x" << std::endl;
}
The code now runs without error on my machine and yields the output:
x
All tests passed.
So my questions are:
Do you get the same results as I do?
Why does adding a seemingly innocuous std::cout line cause the program to run "successfully"?
Please note: I am not under any sort of impression that error message gone implies bug gone.
bolov's answer explains the cause of the undefined behavior (UB), when the move constructor of SharedPtr does not invalidate the moved-from pointer.
I disagree with bolov's view that it is pointless to understand UB. The question of why code changes result in different behavior in the face of UB is extremely interesting. Knowing what happens can help with debugging, on one hand, and it can help attackers exploit the system, on the other.
The difference in the code in question comes from adding std::cout << something. In fact, the following change also makes the crash go away:
{
SharedPtr<int> p(new int(5));
SharedPtr<int> p_move(std::move(p));
assert(p_move.use_count() == 1);
std::cout << "hi\n"; // <-- added
}
The first std::cout << allocates an internal buffer that std::cout uses. This allocation happens only once, and the question is whether it happens before or after the double free. Without the additional std::cout, the allocation happens after the double free, when the heap is already corrupted, and that allocation is what triggers the crash. But when there is a std::cout << before the double free, there is no allocation after the double free.
Let's run a few other experiments to validate this hypothesis:
Remove all std::cout << lines. Everything works fine.
Put two calls to new int(some number) right before the end:
int main() {
int *p2 = nullptr;
int *cnt = nullptr;
// Pointer constructor
{
SharedPtr<int> p(new int(5));
SharedPtr<int> p_move(std::move(p));
assert(p_move.use_count() == 1);
}
p2 = new int(100);
cnt = new int(1); // <--- crash
return 0;
}
This crashes, since the new is attempted on a corrupted heap.
(you can try it out here)
Now move the two new lines slightly up, right before the closing } of the inner block. In this case, the new is performed before the heap is corrupted, so nothing triggers a crash. The delete simply puts the data on the free list, which is not corrupted. As long as the corrupted part of the heap is not touched, things will work fine: one can call new int, get back one of the recently released blocks, and nothing bad will happen.
{
SharedPtr<int> p(new int(5));
SharedPtr<int> p_move(std::move(p));
assert(p_move.use_count() == 1);
p2 = new int(100);
cnt = new int(1);
}
delete p2;
delete cnt;
p2 = new int(100); // No crash. We are reusing one of the released blocks
cnt = new int(1);
(you can try it out here)
The interesting fact is that a corrupted heap can go undetected until much later in the program. The computer may run millions of unrelated lines of code and then suddenly crash on a completely unrelated new in a completely different part of the code. This is why sanitizers and the likes of valgrind are needed: memory corruption can be practically impossible to debug otherwise.
Now, the really interesting question is "can this be exploited more than for denial of service?". Yes it can. It depends on the kind of object that is destroyed twice, and what it does in the destructor. It also depends on what happens between the first destruction of the pointer, and its second free. In this trivial example, nothing substantial seems to be possible.
SharedPtr(SharedPtr&& other) : raw_ptr_(other.raw_ptr_), ref_count_(other.ref_count_) {}
When you move, the moved-from object remains unchanged. This means that at some point in your program you will delete raw_ptr_ twice for the same memory, and the same goes for ref_count_. This is undefined behavior.
The behavior you observe falls well within undefined behavior, because that's what UB means: the standard doesn't mandate absolutely any kind of behavior for your program. Trying to understand exactly why what happens happens on your particular compiler, in your particular version, on your particular platform, with your specific flags is... kind of pointless.
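A minimal sketch of the fix this implies: leave the moved-from object empty so that its destructor has nothing to delete.
SharedPtr(SharedPtr&& other) : raw_ptr_(other.raw_ptr_), ref_count_(other.ref_count_) {
    other.raw_ptr_ = nullptr;    // the moved-from SharedPtr no longer owns the object
    other.ref_count_ = nullptr;  // its destructor will now do nothing
}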
I have run into a rather confusing problem. It seems like the IF statement in my program is causing a segmentation fault.
I am working with external libraries and calling code from them inside the IF statement, so I can't provide the full source of those functions because I don't have it either.
Here is a basic example of what happens. This example causes a segmentation fault:
IRank *rank;
//Generating wavelet tree from BWT with sdsl library
if(true) {
std::cout << "I am in IF" << endl; // this gets printed on the screen
wt_huff<> wt; // right after that - segm fault
construct_im(wt, BWT, 1);
WTRank wtrank(&wt);
rank = &wtrank;
}
However, the same example without the IF (when I comment it out) does not cause a segmentation fault and executes normally:
IRank *rank;
//Generating wavelet tree from BWT with sdsl library
//if(true) {
std::cout << "I am in IF" << endl; // again this gets printed
wt_huff<> wt; // no segmentation error this time
construct_im(wt, BWT, 1);
WTRank wtrank(&wt);
rank = &wtrank;
//}
Original example:
// // Decide what rank function to use
IRank *rank;
if(m_wt) {
// Multiary Wavelet Tree rank function :: student implementation
mwt::node *m_wtree = mwt::generateMultiaryWT(BWT, ary);
MultiWTRank m_wt_rank(m_wtree, ary);
rank = &m_wt_rank;
} else if(b_wt) {
// Binary Wavelet Tree rank function :: SDSL implementation
wt_huff<> b_wtree;
construct_im(b_wtree, BWT, 1);
WTRank b_wt_rank(&b_wtree);
rank = &b_wt_rank;
} else if(non_wt) {
// Implementation of rank function not using Wavelet Tree
LinRank lin_rank(BWT);
rank = &lin_rank;
} else {
// should not happen
}
//...
run(rank);
What is happening here? It is so confusing.
EDIT: example of other code being called from this snippet
#include "IRank.h"
#include "mwt.h"
class MultiWTRank : public IRank {
private:
mwt::node *wt;
int n_ary;
public:
MultiWTRank(mwt::node *root, int ary) {
wt = root;
n_ary = ary;
}
~MultiWTRank() {
}
index_type rank(index_type index, symbol_type symbol);
};
So this is being constructed in the first IF.
EDIT2: Providing the code that generates the pointer to the tree that could be causing the trouble:
class mwt {
public:
// Structure of a MW tree node
typedef struct node {
vector<int> data;
vector<node*> next;
} node;
// ...
static node* generateMultiaryWT(string input, int ary) {
//...
return root;
}
Node is created like this:
static node* InitRoot(int ary){
node *root = new node;
for(int iter = 0; iter < ary; iter++){
root->next.push_back(NULL);
}
return root;
}
Declare the 'wt' and 'wtrank' variables before the if. If you declare them inside the block following the if, their scope is limited to that block. After the } they are out of scope and the 'rank' pointer becomes dangling, so accessing it later may cause a segfault.
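A rough sketch of that restructuring, reusing the names from the question (see the next answer for why this alone may only appear to fix the crash):
IRank* rank = nullptr;
wt_huff<> wt;                // now lives until the end of the enclosing function
construct_im(wt, BWT, 1);
WTRank wtrank(&wt);          // also lives until the end of the enclosing function
if (true) {
    rank = &wtrank;          // rank points at an object that outlives this block
}
run(rank);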
Your problem is almost certainly some other code you have not shown doing something untoward - molesting a pointer, falling off the end of an array, accessing value of an uninitialised variable, etc.
Introducing an if (true) around some block, at most, will change memory layout of your program (e.g. if storage is set aside to hold the value true, and if the compiler emits some code to test it before executing the subsequent code). Because the memory layout changes, the implications of misbehaving code (i.e. what gets clobbered) can change.
Naturally, in this case, the possible change depends on the compiler. An aggressive optimisation may detect that true is always (well) true, and therefore eliminate the if (true) entirely from emitted code. In this case, there will be no difference on program behaviour of having it or not. However, not all compilers (or compiler settings) do that.
Incidentally, the advice to change where you define the variable wt might or might not work for similar reasons. Moving the definition might simply change the order of actions in code (machine instructions, etc), or the layout of memory as used by your program (particularly if the constructor for that object allocates significant resources). So it is not a solution, even if it might appear to work. Because it is not guaranteed to work. And may break because of other changes (of your code, compiler, compilation settings, etc) in future.
The thing is, the real problem might be in code you have shown (impact of functions being called, constructors being invoked, etc) or it might be in code executed previously in your program. Such is the nature of undefined behaviour - when a problem occurs, the symptom may not become visible immediately, but may affect behaviour of unrelated code.
Given where the problem occurs, the rank = &wtrank statement is not the cause. The cause will be in previous code. However, that dangling pointer will be another problem for subsequently executed code - once this problem is fixed.
Why would you want the declaration of wt in the IF statement?
Hopefully I don't dumb down my code too much...
Index::Index() : m_first(""), m_count(0)
{
m_ll = new LinkedList;
}
void TestClass::testMethod()
{
if (getIndex(i).getCount() != 0)
{
//do stuff
}
}
Index TestClass::getIndex(int num) const
{
return m_index[num];
}
Index::~Index()
{
delete m_ll;
}
This is the code that's really involved in the crash. When I enter testMethod, I have m_index[num], which contains a pointer to m_ll; both are completely valid. After getIndex returns m_index[num], the destructor is entered even though m_index[num] is still in use, and because of this my program crashes. I don't understand why the destructor would be called so early.
The dtor calls delete, and getIndex returns by value. My crystal ball tells me that Index::Index() calls new but Index::Index(Index const&) does not, so the copy returned by value shares its m_ll pointer with the original and deletes it when the temporary is destroyed.
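In code, a minimal sketch of what that implies (hypothetical, since the full Index class is not shown): give Index a copy constructor (and matching copy assignment) that deep-copies the list, so the copy returned by value does not delete the same m_ll as the original.
Index::Index(const Index& other)
    : m_first(other.m_first),
      m_count(other.m_count),
      m_ll(new LinkedList(*other.m_ll))   // assumes LinkedList itself is copyable
{
}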
I would like to know how delete works.
In the main function I have deleted the cfact object, but cfact->Hello() still works instead of raising an error.
While debugging I found that when delete happens, cfact releases its memory; as soon as the line factory* c2fact = newfun.Newfun("c2_fact"); executes, that memory location gets used again.
class factory{
public:
virtual void Hello() = 0;
};
class c_fact: public factory
{
public:
void Hello(){
cout << "class c_fact: public factory"<<endl;
}
};
class c2_fact: public factory
{
public:
void Hello(){
cout << "class c2_fact: public factory"<<endl;
}
};
class callFun{
public:
virtual factory* Newfun(string data)
{
if(data == "c_fact")
{return new c_fact;}
else
{return new c2_fact;}
}
};
class newFun:public callFun{
public:
factory* Newfun(string data)
{
if(data == "c_fact")
{return new c_fact;}
else if (data == "c2_fact")
{return new c2_fact;}
}
};
int main()
{
newFun newfun;
factory* cfact = newfun.Newfun("c_fact");
delete cfact; //Deleted the instance
factory* c2fact = newfun.Newfun("c2_fact");
cfact->Hello();//Still it prints the output
c2fact->Hello();
system("pause");
return 0;
}
delete doesn't actually invalidate what the pointer points to. It just tells the allocator that the memory can be used for something else and that the program doesn't need it anymore.
If it is not overwritten by other data, your data will still be in memory and will still be accessible. This is a cause of many bugs that go undetected during the development phase and show up later.
The fact that it is working now doesn't mean it will always work. For example, if you move the code to another machine or restart your computer, the code might segfault.
It is always good practice to set pointers to NULL after delete. Or, even better, use smart pointers.
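A minimal sketch of both suggestions, reusing the question's factory classes (the second part needs #include <memory>, and for safety factory should also get a virtual destructor):
factory* cfact = newfun.Newfun("c_fact");
delete cfact;
cfact = nullptr;   // a later cfact->Hello() now fails immediately instead of appearing to work

std::unique_ptr<factory> c2fact(newfun.Newfun("c2_fact"));
c2fact->Hello();   // no manual delete; the memory is released when c2fact goes out of scope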
This is undefined behavior. Most likely it works because the method Hello is not using any of the class's member variables and thus is not using the this pointer. Try outputting this in Hello and you should see an invalid pointer after the call to delete:
std::cout << std::hex << this << std::endl;
In my test case it comes back as 0 after delete.
Dereferencing a deleted pointer is undefined behaviour. That means anything can happen, including the program appearing to "work". You cannot rely on any such behaviour.
When you delete the memory, it is released. However, the content is usually not changed, so anything that was written in that memory is still there after the delete. You don't know how long it will stay there, though, as other allocations can grab the memory and overwrite it with their own data.
On some compilers, when compiling in debug mode, freed memory is marked so that you can detect errors like the one you found by reusing the deleted pointer. However, that is not necessarily the default. So you should never reuse a pointer that was deleted.
Sorry I can't comment...
I compiled your code and you can observe that c2fact replaces the cfact you just destroyed (the output is
class c2_fact: public factory
class c2_fact: public factory
)
BTW, if you put cfact->Hello(); before you create your c2fact, the program may crash (which is what you seem to want), because the memory blocks are not assigned to any object at that point. Note that this behavior may change depending on the memory manager and other running processes.
Consider the following c++ code:
class test
{
public:
int val;
test():val(0){}
~test()
{
cout << "Destructor called\n";
}
};
int main()
{
test obj;
test *ptr = &obj;
delete ptr;
cout << obj.val << endl;
return 0;
}
I know delete should be called only on dynamically allocated objects, but what happens to obj now?
OK, I get that we are not supposed to do such a thing. Now, if I am writing the following implementation of a smart pointer, how can I make sure that such a thing doesn't happen?
class smart_ptr
{
public:
int *ref;
int *cnt;
smart_ptr(int *ptr)
{
ref = ptr;
cnt = new int(1);
}
smart_ptr& operator=(smart_ptr &smptr)
{
if(this != &smptr)
{
// House keeping
(*cnt)--;
if(*cnt == 0)
{
delete ref;
delete cnt;
ref = 0;
cnt = 0;
}
// Now update
ref = smptr.ref;
cnt = smptr.cnt;
(*cnt)++;
}
return *this;
}
~smart_ptr()
{
(*cnt)--;
if(*cnt == 0)
{
delete ref;
delete cnt;
ref = 0;
cnt = 0;
}
}
};
You've asked two distinct questions in your post. I'll answer them separately.
but what would happen to obj now ?
The behavior of your program is undefined. The C++ standard makes no comment on what happens to obj now. In fact, the standard makes no comment what your program does before the error, either. It simply is not defined.
Perhaps your compiler vendor makes a commitment to what happens, perhaps you can examine the assembly and predict what will happen, but C++, per se, does not define what happens.
Practically speaking [1], you will likely get a warning message from your standard library, or you will get a seg fault, or both.
[1]: Assuming that you are running on either Windows or a UNIX-like system with an MMU. Other rules apply to other compilers and OSes.
how can i make sure that [deleteing a stack variable] doesn't happen.
Never initialize smart_ptr with the address of a stack variable. One way to do that is to document the interface to smart_ptr. Another way is to redefine the interface so that the user never passes a pointer to smart_ptr; make smart_ptr responsible for invoking new.
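A rough sketch of that second option, keeping the int-only smart_ptr from the question: let the constructor take a value and perform the new itself, so a caller can never hand in the address of a stack variable.
class smart_ptr
{
public:
    explicit smart_ptr(int value)   // smart_ptr allocates; callers never pass a pointer
        : ref(new int(value)), cnt(new int(1))
    {
    }
    // ... assignment operator and destructor as in the question ...
private:
    int *ref;
    int *cnt;
};
// Usage: the address of a stack variable never enters smart_ptr.
// smart_ptr p(42);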
Your code has undefined behaviour because you used delete on a pointer that was not allocated with new. This means anything could happen and it's impossible to say what would happen to obj.
I would guess that on most platforms your code would crash.
delete tries to access obj's space in memory, but the operating system doesn't allow this and the program aborts (core dumped).
It's undefined what will happen so you can't say much. The best you can do is speculate for particular implementations/compilers.
It's not just undefined behavior, as stated in other answers. This will almost certainly crash.
The first issue is the attempt to free a stack variable.
The second issue will occur upon program termination, when the test destructor is called for obj.