Temporary object not destroyed correctly? - C++

See this code here:
#include <iostream>
using namespace std;

class test
{
    int n;
    int *j;
public:
    test(int m)
    {
        n = 12;
        j = new int;
        cin >> *j;
    }
    void show()
    {
        cout << n << ' ' << *j << endl;
    }
    ~test()
    {
        delete j;
    }
};

int main()
{
    test var = 123;
    var.show();
    return 0;
}
In this program I expected a double delete of j: the first delete happens when the temporary object test(123) is destroyed, and the second when var is destroyed. Yet the program runs fine.
Does that mean the destructor is never called for the temporary object?

The contentious line is this:
test var = 123;
The relevant standard text (that the pundits in the comments are referencing), I believe, is (8.5, "Declarators"):
The function selected is called with the initializer expression as its argument; if the function is a constructor, the call initializes a temporary of the cv-unqualified version of the destination type. The temporary is an rvalue. The result of the call (which is the temporary for the constructor case) is then used to direct-initialize, according to the rules above, the object that is the destination of the copy-initialization. In certain cases, an implementation is permitted to eliminate the copying inherent in this direct-initialization by constructing the intermediate result directly into the object being initialized;
Indeed, in 12.6, we get an example of this:
complex f = 3;   // construct complex(3) using complex(double)
                 // copy it into f
Thus, in your use of =, your implementation is probably constructing the object directly and eliminating the intermediate temporary entirely (and, as the comments have noted, most implementations do).
This class doesn't copy properly, so creating a copy of it (and then freeing both the copy and the original) would result in a double delete (and crashes, undefined behavior, etc.). Because no copy is created, that scenario does not arise here.

Two points. First, in this particular case, the compiler is allowed to optimize out the temporary, by an explicit authorization of the standard. All of the compilers I'm familiar with do. You can verify whether this is happening in your code by defining a copy constructor and instrumenting it.
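For instance, a minimal sketch of such an instrumented class (the name probe is invented here): if the copy constructor's message appears, the temporary was not elided. Note that C++17 makes this particular elision mandatory, so on a newer compiler you would need pre-C++17 mode (and possibly GCC's -fno-elide-constructors) to ever see the copy.

#include <iostream>

struct probe
{
    probe(int) { std::cout << "probe(int)\n"; }                   // converting constructor
    probe(const probe&) { std::cout << "probe(const probe&)\n"; } // instrumented copy constructor
};

int main()
{
    probe p = 123;   // prints only "probe(int)" if the temporary is elided
}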
Second, if the temporary isn't optimized out, your code has undefined behavior. A double delete can have any imaginable behavior: an immediate crash, corruption of the free-space arena (leading to a much later crash if the program continues running), no effect whatsoever, or anything else. The fact that you are not seeing any symptoms doesn't mean that the code is correct.

The fact that the code happens to not blow up does not mean that it is correct.
Your class is buggy in that it is susceptible to double deletes in the manner you describe.
For example, changing var.show(); to the following:
test(var).show();
makes the code reliably blow up on my computer.
To fix, implement the copy constructor and the assignment operator.
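A sketch of what those might look like for the test class above, assuming you keep the raw int* member and want deep-copy semantics (one possible fix, not the only one):

test(const test &other) : n(other.n), j(new int(*other.j)) {}

test &operator=(const test &other)
{
    if (this != &other)
    {
        n = other.n;
        *j = *other.j;   // reuse the existing allocation
    }
    return *this;
}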

Related

Why not automatically move if the object is destroyed in the next step?

If a function returns a value like this:
std::string foo() {
    std::string ret {"Test"};
    return ret;
}
The compiler is allowed to move ret, since it is not used anymore. This doesn't hold for cases like this:
void foo(std::string str) {
    // do sth. with str
}

int main() {
    std::string a {"Test"};
    foo(a);
}
Although a is obviously not needed anymore, since it is destroyed in the next step, you have to write:
int main() {
    std::string a {"Test"};
    foo(std::move(a));
}
Why? In my opinion, this is unnecessarily complicated, since rvalues and move semantics are hard to understand, especially for beginners. It would be great if you didn't have to care in standard cases but still benefited from move semantics anyway (as with return values and temporaries). It is also annoying to have to look at the class definition to discover whether a class is move-enabled and benefits from std::move at all (or to use std::move anyway in the hope that it will sometimes be helpful). It is also error-prone when you work on existing code:
int main() {
    std::string a {"Test"};
    foo(std::move(a));
    // [...] 100 lines of code
    // new line:
    foo(a); // Oops!
}
The compiler knows better whether an object is no longer used. Writing std::move everywhere is also verbose and reduces readability.
It is not obvious that an object is not going to be used after a given point.
For instance, have a look at the following variant of your code:
#include <iostream>
#include <string>
#include <utility>

struct Bar {
    ~Bar() { std::cout << str.size() << std::endl; }
    std::string& str;
};

Bar make_bar(std::string& str) {
    return Bar{ str };
}

void foo(std::string str) {
    // do sth. with str
}

int main() {
    std::string a {"Test"};
    Bar b = make_bar(a);
    foo(std::move(a));
}
This code would break, because the string a is left in a valid but unspecified (typically empty) state by the move operation, yet Bar holds a reference to it and will try to use it when it is destroyed, which happens after the foo call.
If make_bar is defined in an external binary (e.g. a DLL/.so), the compiler has no way, when compiling Bar b = make_bar(a);, of telling whether b is holding a reference to a or not. So, even if foo(a) is the last use of a, that doesn't mean it's safe to apply move semantics automatically, because some other object might be holding a reference to a as a consequence of previous statements.
Only you can know if you can use move semantics or not, by looking at the specifications of the functions you call.
On the other side, you can always use move semantics in the return case, because that object will go out of scope anyway, which means any object holding a reference to it will result in undefined behaviour regardless of the move semantics.
By the way, you don't even need move semantics there, because of copy elision.
It all comes down to what you mean by "destroyed": std::string does nothing special on destruction beyond deallocating the char array it holds inside.
What if my destructor DOES something special, for example some important logging? Then by simply "moving it because it's not needed anymore" I would miss some special behavior that the destructor might perform.
Because compilers cannot perform optimizations that change the behavior of the program except where the standard allows it. Return-value optimization is allowed in certain cases, but no such optimization is allowed for function arguments. Changing the behavior here would mean skipping a copy-constructor and destructor call, both of which can have side effects (they are not required to be pure); by skipping them, those side effects would not happen and the behavior of the program would change.
(Note that this depends heavily on what you try to pass and, in this case, on the STL implementation. When all the code is available at compile time, the compiler may determine that both the copy constructor and the destructor are pure and optimize them out.)
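As an illustration, here is a minimal sketch (the Tracer class is invented for this example) of a copy constructor whose side effect the compiler must preserve when an lvalue is passed by value:

#include <iostream>
#include <utility>

struct Tracer
{
    Tracer() = default;
    Tracer(const Tracer&) { std::cout << "copied\n"; }      // observable side effect
    Tracer(Tracer&&) noexcept { std::cout << "moved\n"; }   // observable side effect
};

void foo(Tracer) {}

int main()
{
    Tracer t;
    foo(t);              // must print "copied": the compiler may not silently move t
    foo(std::move(t));   // prints "moved": you opted in explicitly
}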
While the compiler is allowed to move ret in your first snippet, it might also perform copy/move elision and construct it directly in the caller's stack frame.
This is why it is not recommended to write the function like this:
std::string foo() {
    auto ret = std::string("Test");
    return std::move(ret);
}
Now for the second snippet: your string a is an lvalue. Move semantics only apply to rvalue references, which are obtained by returning a temporary, unnamed object, or by casting an lvalue. The latter is exactly what std::move does.
std::string GetString();

auto s = GetString();
// s is an lvalue; use std::move to cast it to an rvalue reference and force move semantics
foo(std::move(s));

// GetString() returns a temporary object, which is an rvalue, so move semantics apply automatically
foo(GetString());

std::vector push_back and class constructor not being called?

I have a class like this:
class variable
{
public:
    variable(int _type=0) : type(_type), value(NULL), on_pop(NULL)
    {
    }

    virtual ~variable()
    {
        if (type)
        {
            std::cout << "Variable Deleted" << std::endl;
            on_pop(*this);
            value = NULL;
        }
    }

    int type;
    void* value;

    typedef void(*func1)(variable&);
    func1 on_pop;
};
And then I push instances into a std::vector like this:
stack.push_back(variable(0));
I expect the destructor of variable to be called, but the if not to be entered until a value has been assigned to type, because I expect the constructor I provide to be called when the instance is copied into the vector. But for some reason it is not.
After calling stack.push_back, the destructor (of the copy?) is run and type has some random value, as if the constructor was never called.
I can't seem to figure out what I am doing wrong. Please help! ^_^
EDIT:
OK, here is a self-contained example to show what I mean:
#include <iostream>
#include <vector>

class variable
{
public:
    variable(int _type=0) : type(_type), value(NULL), on_pop(NULL)
    {
    }

    ~variable()
    {
        if (type)
        {
            std::cout << "Variable Deleted" << std::endl;
            on_pop(*this);
            value = NULL;
        }
    }

    int type;
    void* value;

    typedef void(*func1)(variable&);
    func1 on_pop;
};

static void pop_int(variable& var)
{
    delete (int*)var.value;
}

static void push_int(variable& var)
{
    var.type = 1;
    var.value = new int;
    var.on_pop = &pop_int;
}

typedef void(*func1)(variable&);
func1 push = &push_int;

int main()
{
    std::vector<variable> stack;

    stack.push_back(variable(0));
    push(stack[stack.size()-1]);

    stack.push_back(variable(0));
    push(stack[stack.size()-1]);

    stack.push_back(variable(0));
    push(stack[stack.size()-1]);

    return 0;
}
The program above outputs the following:
Variable Deleted
Variable Deleted
Variable Deleted
Variable Deleted
Variable Deleted
Variable Deleted
Process returned 0 (0x0) execution time : 0.602 s
Press any key to continue.
Welcome to RVO and NRVO. This basically means that the compiler can skip creating an object if it's redundant, even if its constructor and destructor have side effects. You cannot depend on an object which is immediately copied or moved to actually exist.
Edit: The actual value in the vector cannot be elided at all. Only the intermediate variable variable(0) can be elided. The object in the vector must still be constructed and destructed as usual. These rules only apply to temporaries.
Edit: Why are you writing your own resource management class? You could simply use unique_ptr with a custom deleter. And your own RTTI?
Every object that was destructed must have been constructed. There is no rule in the Standard that violates this. RVO and NRVO only become problematic when you start, e.g., modifying globals in your constructors/destructors. Else, they have no impact on the correctness of the program. That's why they're Standard. You must be doing something else wrong.
Ultimately, I'm just not sure exactly WTF is happening to you and why it's not working or what "working" should be. Post an SSCCE.
Edit: In light of your SSCCE, then absolutely nothing is going wrong whatsoever. This is entirely expected behaviour. You have not respected the Rule of Three- that is, you destroy the resource in your destructor but make no efforts to ensure that you actually own the resource in question. Your compiler-generated copy constructor is blowing up your logic. You must read about the Rule of Three, copy and swap and similar idioms for resource handling in C++, and preferably, use a smart pointer which is already provided as Standard like unique_ptr which does not have these problems.
After all, you create six instances of variable: three temporaries on the stack, and three inside the vector. All of these have their destructors called. The problem is that you never considered the copy operation, what copying would do, or what would happen to these temporaries (hint: they get destructed).
Consider the equivalent example of
int main()
{
    variable v(0);
    push_int(v);
    variable v2 = v;
    return 0;
}
Variable v is constructed and allocates a new int, and everything is dandy. But wait: then we copy it into v2. The compiler-generated copy constructor copies all the bits over. Then both v2 and v are destroyed, but they both point to the same resource because they both hold the same pointer. Double delete abounds.
You must define copy (shared ownership - std::shared_ptr) or move (unique ownership - std::unique_ptr) semantics.
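As a sketch of the unique-ownership route (the name variable2 is made up for illustration): letting std::unique_ptr own the value means the compiler-generated move operations transfer the pointer instead of duplicating it, so a double delete cannot happen.

#include <memory>
#include <vector>

struct variable2
{
    int type = 0;
    std::unique_ptr<int> value;   // owns the int; deleted exactly once
};

int main()
{
    std::vector<variable2> stack;

    variable2 v;
    v.type = 1;
    v.value = std::make_unique<int>(42);

    stack.push_back(std::move(v));   // ownership moves into the vector; no copy, no double delete
}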
Edit: Just a quick note. I observe that you don't actually push_int into the items until after they're already in the vector. However, the same effect is observed when the vector has to resize as you add more elements, and the fundamental cause is the same.
The destructor is called six times. A constructor is called six times; just not the one you intended.
OK. I've been reading some more about the internals of different containers and, apparently, the one that does the job I'm trying to accomplish here is std::deque.

How the destructor works in C++

Here is my C++ code:
#include <iostream>
using namespace std;

class Sample
{
public:
    int *ptr;
    Sample(int i)
    {
        ptr = new int(i);
    }
    ~Sample()
    {
        delete ptr;
    }
    void PrintVal()
    {
        cout << "The value is " << *ptr;
    }
};

void SomeFunc(Sample x)
{
    cout << "Say i am in someFunc " << endl;
}

int main()
{
    Sample s1 = 10;
    SomeFunc(s1);
    s1.PrintVal();
}
It gives me output like this:
Say i am in someFunc
Null pointer assignment(Run-time error)
As the object is passed by value to SomeFunc, the destructor of the object is called when control returns from the function.
Am I right? If yes, why is this happening, and what is the solution?
Sample is passed by value to SomeFunc, which means a copy is made. The copy has the same ptr, so when that copy is destroyed as SomeFunc returns, ptr is deleted for both objects. Then, when you call PrintVal() in main, you dereference this invalid pointer. This is undefined behavior. Even if that appears to work, ptr is deleted again when s1 is destroyed, which is also UB.
Also, if the compiler fails to elide the copy in Sample s1= 10; then s1 won't even be valid to begin with, because when the temporary is destroyed the pointer will be deleted. Most compilers do avoid this copy though.
You need to either implement copying correctly or disallow copying. The default copy-ctor is not correct for this type. I would recommend either making this type a value type (which holds its members directly rather than by pointer) so that the default copy-ctor works, or use a smart pointer to hold the reference so that it can manage the by-reference resources for you and the default copy-ctor will still work.
One of the things I really like about C++ is that it's really friendly toward using value types everywhere, and if you need a reference type you can just wrap any value type up in a smart pointer. I think this is much nicer than other languages that have primitive types with value semantics but then user defined types have reference semantics by default.
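For example, a minimal sketch of the smart-pointer approach for this Sample class (one possible rewrite, shown with std::shared_ptr so that the compiler-generated copy constructor and destructor are already correct):

#include <iostream>
#include <memory>

class Sample
{
public:
    std::shared_ptr<int> ptr;
    Sample(int i) : ptr(std::make_shared<int>(i)) {}
    // no user-written destructor or copy constructor needed: shared_ptr manages the int
    void PrintVal() const { std::cout << "The value is " << *ptr << '\n'; }
};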
You usually need to obey the Rule of Three since you have a pointer member.
In your code example, to avoid the undefined behavior you are seeing, replace the "usually need to" in the first sentence with "must".
Since SomeFunc() takes its argument by value, the Sample object that you pass to it is copied. When SomeFunc() returns, the temporary copy is destroyed.
Since Sample has no copy constructor defined, its compiler-generated copy constructor simply copies the pointer value, so both Sample instances point to the same int. When one Sample (the temporary copy) is destroyed, that int is deleted, and then when the second Sample (the original) is destroyed, it tries to delete the same int again. That's why your program crashes.
You can change SomeFunc() to take a reference instead, avoiding the temporary copy:
void SomeFunc(Sample const &x)
and/or you can define a copy constructor for Sample which allocates a new int rather than just copying the pointer to the existing one.
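A sketch of that copy constructor, assuming the Sample members shown in the question:

Sample(const Sample &other)
{
    ptr = new int(*other.ptr);   // each Sample now owns its own int
}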
When you pass the argument to the function, the copy constructor is called; since you haven't defined one, the compiler-generated copy just copies the pointer. When the function exits, the copy's destructor deletes that pointer, so the original object is left with a dangling pointer and the program throws an error.
Instead of
int main()
{
    Sample s1 = 10;
    SomeFunc(s1);
    s1.PrintVal();
}
try to use
int main()
{
    Sample* s1 = new Sample(10);
    SomeFunc(*s1);
    s1->PrintVal();
}

Strange behavior of copy-initialization, doesn't call the copy-constructor!

I was reading about the difference between direct-initialization and copy-initialization (§8.5/12):
T x(a); //direct-initialization
T y = a; //copy-initialization
What I understand from reading about copy-initialization is that it needs an accessible & non-explicit copy constructor, or else the program wouldn't compile. I verified it by writing the following code:
#include <iostream>

struct A
{
    int i;
    A(int i) : i(i) { std::cout << " A(int i)" << std::endl; }
private:
    A(const A &a) { std::cout << " A(const A &)" << std::endl; }
};

int main() {
    A a = 10; //error - copy-ctor is private!
}
GCC gives an error (ideone) saying:
prog.cpp:8: error: ‘A::A(const A&)’ is private
So far everything is fine, reaffirming what Herb Sutter says,
Copy initialization means the object is initialized using the copy constructor, after first calling a user-defined conversion if necessary, and is equivalent to the form "T t = u;":
After that, I made the copy-ctor accessible by commenting out the private keyword. Now, naturally, I would expect the following to get printed:
A(const A&)
But to my surprise, it prints this instead (ideone):
A(int i)
Why?
Alright, I understand that first a temporary object of type A is created from 10 (which is of type int) using A(int i), applying the conversion rule needed here (§8.5/14), and then the copy-ctor was supposed to be called to initialize a. But it didn't get called. Why?
If an implementation is permitted to eliminate the call to the copy-constructor (§8.5/14), then why does it not accept the code when the copy-constructor is declared private? After all, it's not calling it. It's like a spoiled kid who first irritatingly asks for a specific toy, and when you give him one, the specific one, he throws it away behind your back. :|
Could this behavior be dangerous? I mean, I might do some other useful thing in the copy-ctor, but if it doesn't call it, then does it not alter the behavior of the program?
Are you asking why the compiler does the access check? 12.8/14 in C++03:
A program is ill-formed if the copy constructor or the copy assignment operator for an object is implicitly used and the special member function is not accessible
When the implementation "omits the copy construction" (permitted by 12.8/15), I don't believe this means that the copy ctor is no longer "implicitly used", it just isn't executed.
Or are you asking why the standard says that? If copy elision were an exception to this rule about the access check, your program would be well-formed in implementations that successfully perform the elision, but ill-formed in implementations that don't.
I'm pretty sure the authors would consider this a Bad Thing. Certainly it's easier to write portable code this way -- the compiler tells you if you write code that attempts to copy a non-copyable object, even if the copy happens to be elided in your implementation. I suspect that it could also inconvenience implementers to figure out whether the optimization will be successful before checking access (or to defer the access check until after the optimization is attempted), although I have no idea whether that warranted consideration.
Could this behavior be dangerous? I mean, I might do some other useful thing in the copy-ctor, but if it doesn't call it, then does it not alter the behavior of the program?
Of course it could be dangerous - side-effects in copy constructors occur if and only if the object is actually copied, and you should design them accordingly: the standard says copies can be elided, so don't put code in a copy constructor unless you're happy for it to be elided under the conditions defined in 12.8/15:
MyObject(const MyObject &other) {
    std::cout << "copy " << (void*)(&other) << " to " << (void*)this << "\n"; // OK
    std::cout << "object returned from function\n"; // dangerous: if the copy is
    // elided then an object will be returned but you won't see the message.
}
C++ explicitly allows several optimizations involving the copy constructor that actually change the semantics of the program. (This is in contrast with most optimizations, which do not affect the semantics of the program). In particular, there are several cases where the compiler is allowed to re-use an existing object, rather than copying one, if it knows that the existing object will become unreachable. This (copy construction) is one such case; another similar case is the "return value optimization" (RVO), where if you declare the variable that holds the return value of a function, then C++ can choose to allocate that on the frame of the caller, so that it doesn't need to copy it back to the caller when the function completes.
In general, in C++, you are playing with fire if you define a copy constructor that has side effects or does anything other than just copying.
In any compiler, syntax (and semantic) analysis is done prior to code optimization.
The code must be syntactically and semantically valid, otherwise it won't even compile. It's only in a later phase (code optimization) that the compiler decides to elide the temporary that it creates.
So you need an accessible copy c-tor.
Here you can find this (with your comment ;)):
[the standard] also says that the temporary copy can be elided, but the semantic constraints (e.g. accessibility) of the copy constructor still have to be checked.
RVO and NRVO, buddy. Perfectly good case of copy elision.
This is an optimization by the compiler.
In evaluating A a = 10;, instead of:
first constructing a temporary object through A(int), and then
constructing a through the copy constructor with the temporary passed in,
the compiler will simply construct a directly using A(int).
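If you want to observe the non-elided behaviour yourself, one way (assuming GCC in pre-C++17 mode, since C++17 makes this particular elision mandatory) is to compile a snippet like the following with and without -fno-elide-constructors and compare the output:

#include <iostream>

struct A
{
    int i;
    A(int i) : i(i) { std::cout << "A(int)\n"; }
    A(const A &other) : i(other.i) { std::cout << "A(const A&)\n"; }
};

int main()
{
    A a = 10;   // with -fno-elide-constructors this also prints "A(const A&)"
}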

Getting undefined behavior for something that shouldn't be getting undefined behavior

#include <iostream>
using namespace std;

class Sample
{
public:
    int *ptr;
    Sample(int i)
    {
        ptr = new int(i);
    }
    ~Sample()
    {
        delete ptr;
    }
    void PrintVal()
    {
        cout << "The value is " << *ptr;
    }
};

void SomeFunc(Sample x)
{
    cout << "Say i am in someFunc " << endl;
}

int main()
{
    Sample s1 = new Sample(10);
    SomeFunc(s1);
    s1.PrintVal();
}
Two things that I think should happen:
s1 can be initialized using the parameterised constructor.
The value of *ptr from PrintVal() should be 10.
For 1) I'm getting invalid conversion from 'Sample*' to 'int' [-fpermissive]. I'm calling the constructor properly, so why is this happening?
For 2) I'm getting either a garbage value or a segmentation fault. This shouldn't happen, because the ptr of the local object x of SomeFunc should be deleted, not the ptr of s1, as it is passed by value, not by reference. IIRC, passing an object by value gives the function's parameter a copy of the object.
Your code does have undefined behaviour. But let’s start at the beginning.
Sample s1 = new Sample(10);
This is what happens in this line:
A Sample object is allocated on the heap and the new expression returns a pointer to it, a Sample*.
You cannot assign a Sample* to a variable of type Sample. But Sample has a constructor that allows implicit construction from an int. If you use the -fpermissive compiler option (hint: don’t!), the compiler allows implicit conversion of a pointer to an integer – after all, a pointer is just a memory address, a.k.a. a number.
Accordingly s1 is constructed by interpreting the memory address of the heap Sample object as an integer (truncating it if sizeof(Sample*) > sizeof(int)). That’s the value that ends up as *(s1.ptr).
To reiterate the key point: In that line you don’t instantiate one Sample object, but two. Bug 1: The one created on the heap is never deleted. That’s a memory leak.
SomeFunc(s1);
Sample has nothing in it that prevents the compiler from generating the default copy constructor and default copy assignment operator. Important: “default” for pointers means to copy the pointer, not the object behind it. So:
s1 is copied to call SomeFunc(). The copy is available as x in the function. Because of the default pointer copy both s1 and x point to the same int object.
x goes out of scope at the end of the function, the destructor runs and deletes the int object.
We are not quite undefined yet, but we’re getting close.
s1.PrintVal();
The function tries to access the int object behind the pointer, but it has already been deleted. s1.ptr is a dangling pointer. Bug 2: Dereferencing a dangling pointer is undefined behaviour.
And all that because of that seemingly innocent implicit pointer-to-int conversion … That’s why it is a compiler error by default, at least in non-ancient compilers.
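Putting the answer's points together, here is one possible corrected sketch (not the only fix): construct s1 directly, delete the copy operations so a stray copy cannot double-delete, and pass by reference.

#include <iostream>

class Sample
{
public:
    int *ptr;
    explicit Sample(int i) : ptr(new int(i)) {}
    ~Sample() { delete ptr; }
    Sample(const Sample &) = delete;              // forbid pointer-copying copies
    Sample &operator=(const Sample &) = delete;
    void PrintVal() const { std::cout << "The value is " << *ptr << '\n'; }
};

void SomeFunc(const Sample &)
{
    std::cout << "Say i am in someFunc" << std::endl;
}

int main()
{
    Sample s1(10);      // direct construction, no Sample* conversion involved
    SomeFunc(s1);       // passed by reference: no copy, no premature delete
    s1.PrintVal();
}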