rvalue reference undefined behavior - c++

#include <iostream>

struct Test
{
    int n;
    ~Test() {}
    Test& operator=(int v)
    {
        n = v;
        return *this;
    }
};

Test* ptr = nullptr;

void g(Test&& p)
{
    std::cout << "&&";
}

void g(Test& p)
{
    ptr = &p;
    std::cout << "&";
}

void f(Test&& t)
{
    g(t);
}

void buggy()
{
    *ptr = 5;
}

int main()
{
    f(Test());
    buggy();
    std::cin.ignore();
}
Just to be sure: does the above code lead to undefined behaviour because we keep the address of a temporary?

Declaring a pointer to the struct, Test* ptr;, or "keeping an address" as you call it, doesn't lead to undefined behaviour. Using a pointer to an object whose lifetime has ended does.
The lifetime of the object created by Test() in main ends right after f(Test()); is executed. After that, whatever you do through ptr is undefined. The object most likely stays in memory even after its lifetime has ended, but you shouldn't rely on that.
You should also check out: What are all the common undefined behaviours that a C++ programmer should know about?

Yes, the temporary Test() is allocated on the stack, you take a pointer to it, and its destructor is called after f returns. After that, the pointer still holds the old address, but it points to memory whose contents are no longer defined, so all bets are off when dereferencing it.

Related

Why reference is safer than pointer? [duplicate]

This question already has answers here:
Why is a c++ reference considered safer than a pointer?
(9 answers)
Closed 5 years ago.
Hey, I'm trying to understand the difference between pointers and references in terms of safety. Many people say that references are safer than pointers and that they cannot be null. But the following code shows that a reference can cause a run-time error where a pointer does not:
#include <iostream>

class A {
public:
    A() : mAi(0) {}
    void ff() { std::cout << "This is A" << std::endl; }
    int mAi;
};

class B {
public:
    B() : mBi(0), mAp(NULL) {}
    A* getA() const { return mAp; }
    void ff() { std::cout << "This is B" << std::endl; }
    int mBi;
    A* mAp;
};

int main()
{
    B* b = new B();

    /// Reference part
    A& rA = *b->getA();
    rA.ff();
    std::cout << rA.mAi << std::endl;

    /// Pointer part
    A* pA = b->getA();
    if (NULL != pA)
    {
        pA->ff();
        std::cout << pA->mAi << std::endl;
    }
}
This code will crash in the "reference part" but not in the "pointer part".
My questions are:
Why do we always say that references are safer than pointers, if they can be invalid (as in the previous code) and we can't check them for validity?
Is there any difference in RAM or CPU consumption between using a pointer and a reference? (Is it worth refactoring a big code base to use references instead of pointers where possible?)
References cannot be NULL, that is correct. The reference part of your code however is crashing because you're explicitly trying to dereference a NULL pointer when you try to initialize the reference, not because the reference was ever null:
*b->getA(); // Equivalent to *((A*) NULL)
References can become dangling references though, if you did something like:
int* a = new int(5);
int& b = *a;
// Some time later
delete a;
int c = b + 2; // Ack! Dangling reference
A pointer wouldn't have saved you here, here's the equivalent code using a pointer:
int* a = new int(5);
int* b = a;
// Some time later
delete a;
if (b) { // b is still set to whatever memory a pointed to!
    int c = *b + 2; // Ack! Pointer used after delete!
}
Pointers and references are unlikely to make any performance difference, they're probably similarly implemented under the hood depending on your compiler. References might be optimized out completely if the compiler can tell exactly what the reference is bound to.

Transparently insert temporary into caller's scope

In C++, operator-> has special semantics, in that if the returned type isn't a pointer, it will call operator-> again on that type. But, the intermediate value is kept as a temporary by the calling expression. This allows code to detect changes in the returned value:
#include <iostream>

template<class T>
class wrapper
{
    // ...
    T val;

    struct arrow_helper
    {
        arrow_helper(const T& temp)
            : temp(temp) {}
        T temp;
        T* operator->() { return &temp; }
        ~arrow_helper() { std::cout << "modified to " << temp << '\n'; }
    };

public:
    arrow_helper operator->() { return { val }; }
    // return a const value to prevent mistakes
    const T operator*() const { return val; }
};
and then T's members can be accessed transparently:
wrapper<Foo> f(/*...*/);
f->bar = 6;
Is there anything that could go wrong from doing this? Also, is there a way to get this effect with functions other than operator->?
EDIT: Another issue I've come across is in expressions like
f->bar = f->bar + 6;
since when the arrow_helper from the second operator-> is destroyed, it overwrites the value back to the original. My semi-elegant solution is for arrow_helper to keep a hidden T orig and assert(orig == *owner) in the destructor.
There is no guarantee that all changes will be caught:
Foo &x = f->bar;
x = 6; /* undetected change */
If there is no way to grab a reference to any data within T through T's interface or otherwise, I think this should be safe. If there's any way to grab such a pointer or reference, you're done and in undefined behavior as soon as someone saves off such reference and uses it later.

How do move semantics work with unique_ptr?

I was experimenting with using unique_ptr and wrote some simple code to check how it works with move semantics.
#include <iostream>
#include <memory>
#include <utility>
#include <vector>
using namespace std;

class X
{
public:
    X() {}
    ~X() { cout << "Destructor X" << endl; }
    void Print() { cout << "X" << endl; }
};

int main()
{
    unique_ptr<X> ptr(new X());
    ptr->Print();

    vector<unique_ptr<X>> v;
    v.push_back(move(ptr));

    ptr->Print();
    v.front()->Print();
    return 0;
}
The output is as follows:
X
X
X
Destructor X
My expectation was that the original unique_ptr ptr would be invalidated after the push_back. But the Print() method is called just fine. What would be the explanation for this behavior?
My expectation was that the original unique_ptr ptr would be invalidated after the push_back.
It's set to a null pointer. You can check that by comparing it to nullptr.
But the Print() method is called just fine. What would be the explanation for this behavior?
You're calling a member function on a null pointer, which is undefined behaviour. That member function doesn't actually access any data in the class, so it doesn't crash, but it's still undefined behaviour.
You get similar behaviour for this program, it has nothing to do with unique_ptr:
int main()
{
    X x;
    X* ptr = &x;
    ptr->Print();
    ptr = nullptr;
    ptr->Print();
}
It appears to work fine because X::Print() doesn't actually read anything from the this pointer. If you change the definition of X::Print() to access some member data in the class you'll probably get a crash due to dereferencing a null pointer.
See When does invoking a member function on a null instance result in undefined behavior? for more information.
What you have is plain undefined behavior. If I replace the contents of main with the following
int main()
{
    unique_ptr<X> ptr;
    ptr->Print();
    cout << (static_cast<bool>(ptr) ? "active\n" : "inactive\n");
}
Both gcc and clang still print
X
inactive
You're calling a member function on a nullptr, and I'm guessing it just happens to work because the member function doesn't actually make use of the this pointer. Change your class definition to:
class X
{
    int y = 0;
public:
    X() {}
    ~X() { cout << "Destructor X" << endl; }
    void Print() { cout << "y = " << y << endl; }
};
Now your original code should result in a segmentation fault because it'll attempt to dereference nullptr.
As for your expectation that unique_ptr will be invalidated after you move from it, you're absolutely correct. This is guaranteed by the standard.
§20.8.1/4 [unique.ptr]
Additionally, u can, upon request, transfer ownership to another unique pointer u2. Upon completion of such a transfer, the following postconditions hold:
— u2.p is equal to the pre-transfer u.p,
— u.p is equal to nullptr, and
...
Above u & u2 are unique_ptr objects, and p is the pointer to the managed object.

Side effects when passing objects to function in C++

I have read in C++ : The Complete Reference book the following
Even though objects are passed to functions by means of the normal
call-by-value parameter passing mechanism, which, in theory, protects
and insulates the calling argument, it is still possible for a side
effect to occur that may affect, or even damage, the object used as an
argument. For example, if an object used as an argument allocates
memory and frees that memory when it is destroyed, then its local copy
inside the function will free the same memory when its destructor is
called. This will leave the original object damaged and effectively
useless.
I do not really understand how the side effect occurs. Could anybody help me understand this with an example?
Here is an example:
class bad_design
{
public:
    bad_design( std::size_t size )
        : _buffer( new char[ size ] )
    {}
    ~bad_design()
    {
        delete[] _buffer;
    }
private:
    char* _buffer;
};
Note that the class has a constructor and a destructor to handle the _buffer resource. It would also need a proper copy constructor and assignment operator, but it is such a bad design that they weren't added. The compiler fills those in with the default implementations, which just copy the pointer _buffer.
When calling a function:
void f( bad_design havoc ){ ... }
the copy constructor of bad_design is invoked, creating a new object that points to the same buffer as the one passed as an argument. When the function returns, the destructor of the local copy is invoked, which deletes the resources pointed to by the variable used as an argument. Note that the same thing happens with any copy construction:
bad_design first( 512 );
bad_design error( first );
bad_design error_as_well = first;
That passage is probably talking about this situation:
class A {
    int *p;
public:
    A() : p(new int[100]) {}
    // default copy constructor and assignment
    ~A() { delete[] p; }
};
Now A object is used as pass by value:
void bar(A copy)
{
    // do something
    // copy.~A() is called here, deallocating copy.p
}

void foo()
{
    A a;     // a.p is allocated
    bar(a);  // a.p was shallow copied and deallocated at the end of bar()
    // again a.~A() is called and a.p is deallocated ... undefined behavior
}
Here is another example. The point is that when the destructor of the callee's parameter (obj2 in SomeFunc) runs, it frees the same memory (ptr) pointed to by the caller's argument (obj1). Consequently, any use of obj1 after the call is undefined behaviour, and will typically crash.
#include <iostream>
using namespace std;

class Foo {
public:
    int *ptr;
    Foo(int i) {
        ptr = new int(i);
    }
    ~Foo() {
        cout << "Destructor called." << endl;
        delete ptr;
    }
    int PrintVal() {
        cout << *ptr;
        return *ptr;
    }
};

void SomeFunc(Foo obj2) {
    int x = obj2.PrintVal();
} // here obj2's destructor is invoked and it frees the "ptr" pointer

int main() {
    Foo obj1 = 15;
    SomeFunc(obj1);
    // at this point the memory "ptr" points to is already gone
    obj1.PrintVal();
}

Dangling pointer

Does this piece of code lead to a dangling pointer? My guess is no.
#include <iostream>
using namespace std;

class Sample
{
public:
    int *ptr;
    Sample(int i)
    {
        ptr = new int(i);
    }
    ~Sample()
    {
        delete ptr;
    }
    void PrintVal()
    {
        cout << "The value is " << *ptr;
    }
};

void SomeFunc(Sample x)
{
    cout << "Say i am in someFunc " << endl;
}

int main()
{
    Sample s1 = 10;
    SomeFunc(s1);
    s1.PrintVal();
}
Yes. Sample's copy constructor gets called when you pass s1 to SomeFunc. The default copy constructor does a shallow copy, so ptr will get deleted twice.
Yes, as the other answer said.
~Sample()
{
    delete ptr; // pointer deleted but left dangling
    ptr = NULL; // pointer is no longer dangling
}
Note however that any pointers you copied that pointer into will still be left dangling unless they are set to NULL as well.
When you pass the object to SomeFunc() by value, a shallow copy takes place, and when the function returns, the memory ptr was pointing to has been deleted.
So when you call PrintVal() on s1 and try to dereference the pointer, your program may crash at that point. You can delete a pointer only once; after that, the memory is out of your control.