Uninitialized default constructor in C++: munmap_chunk(): invalid pointer

Having this code:
#include <iostream>
#include <iterator>
#include <initializer_list>
#include <algorithm>
class Foo {
public:
Foo() = default;
explicit Foo(size_t size) :size(size){
ar = new double[size];
}
Foo(std::initializer_list<double> initList): Foo(initList.size()){
std::copy(initList.begin(), initList.end(), ar);
}
Foo(double *values, size_t size):size(size), ar(values){}
Foo(const Foo &rhs): Foo(rhs.size){
std::copy(rhs.ar, rhs.ar+size, ar);
}
~Foo(){delete[] ar;}
Foo &operator=(Foo rhs){
swap(*this, rhs);
return *this;
}
void print(){
std::copy(ar, ar+size, std::ostream_iterator<double>(std::cout, " "));
std::cout << std::endl;
}
private:
size_t size;
double *ar;
static void swap(Foo &f, Foo &s){
std::swap(f.size, s.size);
std::swap(f.ar, s.ar);
}
};
int main() {
using namespace std;
size_t size = 100;
auto *values = new double[size];
for(int i = 0; i<100; i++){
double fraction = ((10+i) % 10) / 10.0;
values[i] = i + fraction;
}
Foo f(values, size);
// Foo g; //IF THIS IS NOT BRACED-INITIALIZED, I GOT munmap_chunk(): invalid pointer
Foo g{};
g = f;
g.print();
}
The only difference between the program running and getting the error is whether I initialize Foo g with braces or not. Why is that important? I know the braces will value-initialize the class, which means the double *ar would be nullptr. If it is not brace-initialized, then the double *ar is indeterminate. But what does that mean? How can a pointer be indeterminate? Is it the same as nullptr? And why does the program break when the pointer is indeterminate?

If it is not brace-initialized, then the double *ar is indeterminate. But what does that mean? How can a pointer be indeterminate?
Because you are not assigning any value to the pointer, not even nullptr. So its value will consist of whatever random bytes were already stored in the memory location that the pointer is occupying.
When a default constructor is declared with = default, that just means the compiler will implicitly generate a constructor that will default-initialize each class member for you. Any member that is a class type will have its default constructor called, and any member that is a non-class type will either have its default value assigned if such a value is explicitly specified, or else it will not be assigned any value at all. The latter is what is happening in your situation.
Is it the same as nullptr?
No.
And why does the program break, when the pointer is indeterminate?
Because the pointer is not pointing at valid memory, any attempt to use it, whether to dereference it or to pass it to delete[], is undefined behavior. That includes your destructor, which unconditionally calls delete[] on the pointer; that is safe only if the pointer is nullptr or points at memory that was actually allocated with new[].
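To see the two cases side by side, here is a minimal sketch (assuming the Foo class from the question, with no default member initializers):
Foo g;   // default-initialized: size and ar hold indeterminate (garbage) values,
         // so ~Foo() calls delete[] on a garbage pointer and crashes (munmap_chunk(): invalid pointer)
Foo h{}; // value-initialized: the object is zero-initialized first,
         // so ar is nullptr and delete[] nullptr in ~Foo() is a harmless no-op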
In your case, you should add default values for your non-class type members, eg:
private:
size_t size = 0;
double *ar = nullptr;
That way, if any constructor does not explicitly set values to them, the compiler will still assign their default values to them. In this case, Foo() = default; will generate an implicit default constructor that is roughly equivalent to:
Foo() : size(0), ar(nullptr) {}
You are also missing a move constructor. Your existing operator= acts sufficiently as a copy assignment operator, but adding a move constructor will allow it to also act as a sufficient move assignment operator, eg:
Foo(Foo &&rhs): size(rhs.size), ar(rhs.ar){
rhs.ar = nullptr;
rhs.size = 0;
}
Or:
Foo(Foo &&rhs){
size = std::exchange(rhs.size, 0);
ar = std::exchange(rhs.ar, nullptr);
}
Or:
Foo(Foo &&rhs): Foo(){
swap(*this, rhs);
}
Also, your Foo(double*, size_t) constructor is broken. It takes ownership of the passed-in double* pointer, which is not guaranteed to have been allocated with new[], so the destructor cannot safely delete[] it.
This constructor needs to allocate its own double[] array and copy the source values into it, just like the Foo(std::initializer_list) constructor is doing, eg:
Foo(const double *values, size_t size): Foo(size) {
std::copy(values, values+size, ar);
}
That all being said, a much better and safer design would be to replace your manual double[] array with std::vector instead, and let it handle all of the memory management and copy/move operations for you, eg:
#include <vector>
class Foo {
public:
Foo() = default;
explicit Foo(size_t size) : ar(size){}
Foo(std::initializer_list<double> initList) : ar(initList){}
Foo(const double *values, size_t size) : ar(values, values+size){}
// compiler-generated copy/move constructors, copy/move assignment operators,
// and destructor will suffice, so no need to declare them explicitly...
void print() const {
std::copy(ar.cbegin(), ar.cend(), std::ostream_iterator<double>(std::cout, " "));
std::cout << std::endl;
}
private:
std::vector<double> ar;
};
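For completeness, a short usage sketch of this std::vector version (assuming the includes from the original program); main behaves the same, minus the manual new[]:
int main() {
    Foo f{0.0, 1.1, 2.2, 3.3};
    Foo g;      // safe now: the vector member is always properly initialized
    g = f;      // the compiler-generated copy assignment does a deep copy of the vector
    g.print();  // prints: 0 1.1 2.2 3.3
}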


Assignment operator as copy constructor

The assignment operator can be used to copy the value of one object to another instead of using the copy constructor, so why do we need a copy constructor?
class example
{
    int data;
public:
    example()
    {
    }
    example(int x)
    {
        data = x;
    }
};
int main()
{
    example a(50);
    example b(a);
    //same can be done with the assignment operator
    //b = a;
    return 0;
}
Because at the point of calling a copy constructor, the object being copied to doesn't yet exist.
An assignment operator assigns the value of another object to one that does exist.
Devices such as member initialisation can be used with a copy constructor, but are not available on assignment. Furthermore it's possible to create a const object using a copy constructor.
Furthermore, the assignment operator typically returns a reference to self.
So a copy constructor and the assignment operator probably will leave the mutated object in an identical state, but it doesn't necessarily have to be the case.
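To illustrate which operation runs when, using the example class from the question:
example a(50);   // ordinary construction
example b(a);    // copy construction: b does not exist yet, so it is created as a copy of a
example c;       // default construction
c = a;           // assignment: c already exists, so operator= copies a's value into it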
As Bathsheba already said: A copy constructor creates a new object, an assignment operator assigns values to an already existing object. One needs to construct a new object, the other needs to handle whatever happens if you assign the values from one object to another. Take this example:
class Foo
{
public:
    Foo(int x) { someValue = x; }
    int getValue() const { return someValue; }
private:
    int someValue;
};
class Bar
{
public:
    Bar(int y)
    {
        myFoo = new Foo(y);
        myValue = y + 1;
        myInitDone = true;
    }
    Bar(const Bar& other)
    {
        // myFoo was not yet initialized, so no need to clean it up
        myFoo = new Foo(other.myFoo->getValue());
        myValue = other.myValue;
        myInitDone = true;
    }
    Bar& operator=(const Bar& other)
    {
        delete myFoo; // If we don't clean up myFoo here we leak memory
        myFoo = new Foo(other.myFoo->getValue());
        myValue = other.myValue;
        // myInitDone is only set during construction due to [reason]
        return *this;
    }
private:
    Foo* myFoo;
    int myValue;
    bool myInitDone;
};
The copy constructor needs to set myInitDone (which is only done during construction because [insert reason here]), while the assignment operator needs to clean up myFoo or it will leak memory.
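As a side note, the operator= above is not self-assignment safe (b = b; would delete myFoo and then read from it). A possible variant, still assuming the same Bar members, is sketched here:
Bar& operator=(const Bar& other)
{
    if (this != &other)  // guard against self-assignment
    {
        Foo* newFoo = new Foo(other.myFoo->getValue());  // allocate first, for basic exception safety
        delete myFoo;                                    // only now release the old resource
        myFoo = newFoo;
        myValue = other.myValue;
    }
    return *this;
}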

How can I efficiently clone a dynamically allocated array?

I have a class which is a templated smart pointer meant for wrapping dynamically allocated arrays. I know that there are classes in the STL that can be used for this, especially in C++11, but this is a widely-used internal class.
I wish to write a Clone() method for it. My initial implementation used std::copy, but I realized I should be able to avoid the default construction when allocating the array.
My attempt at a PoC ends up with a segmentation fault:
#include <iostream>
#include <algorithm>
class A
{
public:
A(int j) : i(j) {}
~A() {std::cout << "Destroying " << i << std::endl;}
private:
int i;
};
int main()
{
int a[] = {1, 2, 3};
A* arr = static_cast<A*>(::operator new[](sizeof(A) * 3));
std::uninitialized_copy(a, a + 3, arr);
delete [] arr;//::operator delete[](arr);
}
How do I create a dynamically allocated array of T, initialized with std::uninitialized_copy, so that it can be deleted with 'delete []' (i.e. treated as if it was allocated with a simple 'new T[N]')?
Since it seems people have had trouble understanding what I'm asking, here's the essence of my question:
#include <algorithm>
template <typename T>
T* CloneArray(T* in_array, size_t in_count)
{
if (!in_array)
return nullptr;
T* copy = new T[in_count];
try
{
std::copy(in_array, in_array + in_count, copy);
}
catch (...)
{
delete [] copy;
throw;
}
return copy;
}
How would I rewrite this function in a way that prevents T::T() from being called (if it even exists!), while returning the exact same result (let's assume our types are well behaved in that T t; t = other; and T t(other); are equivalent), including the fact that the result of the function can be deleted using the standard delete [] operator.
How do I create a dynamically allocated array of T, initialized with std::uninitialized_copy, so that it can be deleted with 'delete []' (i.e. treated as if it was allocated with a simple 'new T[N]')?
So, given the relatively simple requirement that the memory be able to be deleted with delete[], let's see what options we have.
Note: All quotes from the standard are from the C++14 draft N3797 and I'm not the best at standard-interpreting so take this with a grain of salt.
Mixing malloc()/free() and new[]/delete[]
Undefined, since new doesn't necessarily call malloc, see §18.6.1/4 (default behavior of operator new):
Default behavior:
Executes a loop: Within the loop, the function first attempts to allocate the requested storage. Whether the attempt involves a call to the Standard C library function malloc is unspecified.
Avoiding default-initialization
So, seeing that we're required to use new[] if we want to use delete[],
looking at the standard for information about initialization in a new-expression §5.3.4/17:
A new-expression that creates an object of type T initializes that object as follows:
If the new-initializer is omitted, the object is default-initialized (8.5); if no initialization is performed, the object has indeterminate value.
Otherwise, the new-initializer is interpreted according to the initialization rules of 8.5 for direct-initialization.
and going to §8.5/7:
To default-initialize an object of type T means:
if T is a (possibly cv-qualified) class type (Clause 9), the default constructor (12.1) for T is called (and the initialization is ill-formed if T has no default constructor or overload resolution (13.3) results in an ambiguity or in a function that is deleted or inaccessible from the context of the initialization);
if T is an array type, each element is default-initialized;
we see that if we omit a new-initializer in our new[], all the elements of the array will be default initialized via their default constructors.
So, what if we include a new-initializer, do we have any options? Going back to its definition in §5.3.2/1:
new-initializer:
(expression-list opt)
braced-init-list
The only possibility we are left with is a braced-init-list (expression-list is for non-array new-expressions). I managed to get it working for objects with compile time size, but obviously that's not terribly helpful. For reference (portions of code adapted from here):
#include <iostream>
#include <utility>
struct A
{
int id;
A(int i) : id(i) {
std::cout << "c[" << id << "]\t";}
A() : A(0) {}
~A() {std::cout << "d[" << id << "]\t";}
};
template<class T, std::size_t ...I>
T* template_copy_impl(T* a, std::index_sequence<I...>) {
return new T[sizeof...(I)]{std::move(a[I])...};
}
template<class T, std::size_t N,
typename Indices = std::make_index_sequence<N>>
T* template_copy(T* a) {
return template_copy_impl<T>(a, Indices());
}
int main()
{
const std::size_t N = 3;
A* orig = new A[N];
std::cout << std::endl;
// modify original so we can see whats going on
for (int i = 0; i < N; ++i)
orig[i].id = 1 + i;
A* copy = template_copy<A, N>(orig);
for (int i = 0; i < N; ++i)
copy[i].id *= 10;
delete[] orig;
std::cout << std::endl;
delete[] copy;
std::cout << std::endl;
}
Which when compiled with -std=c++1y (or equivalent) should output something like:
c[0] c[0] c[0]
d[3] d[2] d[1]
d[30] d[20] d[10]
Different types in new[] vs delete[]
To summarize, not only are we required to use new[] if we want to use delete[], but when omitting a new-initializer, our objects are default-initialized. So, what if we allocate memory using a fundamental type (similar, in a way, to using placement new); that would leave the memory uninitialized, right? Yes, but it's undefined to delete the memory with something like T* ptr = /* whatever */; delete[] ptr if it was allocated with a different type. See §5.3.5/2:
In the second alternative (delete array), the value of the operand of delete may be a null pointer value or a pointer value that resulted from a previous array new-expression. If not, the behavior is undefined. [ Note: this means that the syntax of the delete-expression must match the type of the object allocated by new, not the syntax of the new-expression. — end note ]
and §5.3.5/3, which hints at the same thing:
In the second alternative (delete array) if the dynamic type of the object to be deleted differs from its static type, the behavior is undefined.
Other options?
Well, you could still use unique_ptrs as others have suggested. Although given that you're stuck with a large code base it's probably not feasible. Again for reference, here's what my humble implementation might look like:
#include <iostream>
#include <memory>
struct A
{
int id;
A(int i) : id(i) {
std::cout << "c[" << id << "]\t";}
A() : A(0) {}
~A() {std::cout << "d[" << id << "]\t";}
};
template<class T>
struct deleter
{
const bool wrapped;
std::size_t size;
deleter() :
wrapped(true), size(0) {}
explicit deleter(std::size_t uninit_mem_size) :
wrapped(false), size(uninit_mem_size) {}
void operator()(T* ptr)
{
if (wrapped)
delete[] ptr;
else if (ptr)
{
// backwards to emulate destruction order in
// normally allocated arrays
for (std::size_t i = size; i > 0; --i)
ptr[i - 1].~T();
std::return_temporary_buffer<T>(ptr);
}
}
};
// to make it easier on ourselves
template<class T>
using unique_wrap = std::unique_ptr<T[], deleter<T>>;
template<class T>
unique_wrap<T> wrap_buffer(T* orig)
{
return unique_wrap<T>(orig);
}
template<class T>
unique_wrap<T> copy_buffer(T* orig, std::size_t orig_size)
{
// get uninitialized memory
auto mem_pair = std::get_temporary_buffer<T>(orig_size);
// get_temporary_buffer can return less than what we ask for
if (mem_pair.second < orig_size)
{
std::return_temporary_buffer(mem_pair.first);
throw std::bad_alloc();
}
// create a unique ptr with ownership of our memory, making sure to pass
// the size of the uninitialized memory to the deleter
unique_wrap<T> a_copy(mem_pair.first, deleter<T>(orig_size));
// perform the actual copy and return the unique_ptr
std::uninitialized_copy_n(orig, orig_size, a_copy.get());
return a_copy;
}
int main()
{
const std::size_t N = 3;
A* orig = new A[N];
std::cout << std::endl;
// modify original so we can see whats going on
for (int i = 0; i < N; ++i)
orig[i].id = 1 + i;
unique_wrap<A> orig_wrap = wrap_buffer(orig);
{
unique_wrap<A> copy = copy_buffer(orig, N);
for (int i = 0; i < N; ++i)
copy[i].id *= 10;
// if we are passing the original back we can just release it
A* back_to_somewhere = orig_wrap.release();
delete[] back_to_somewhere;
std::cout << std::endl;
}
std::cout << std::endl;
}
Which should output:
c[0] c[0] c[0]
d[3] d[2] d[1]
d[30] d[20] d[10]
Lastly, you might be able to override the global or class operator new/delete, but I wouldn't suggest it.
Since A already is a smart pointer to a class that wraps dynamically allocated memory, it's guaranteed that the memory will not be deallocated until all references are released. Thus you can use a simple array or vector and copy the smart pointers around, no need to dynamically allocate the array.
For example:
typedef std::vector<A<SomeType> > AVector;
AVector copy(AVector in)
{
    AVector copyArray = in;
    return copyArray;
}
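If returning a std::vector instead of a raw array that must be freed with delete[] is acceptable, a sketch of the cloning helper along these lines (CloneToVector is a hypothetical name) avoids default construction entirely, because vector's range constructor copy-constructs every element:
#include <cstddef>
#include <vector>

template <typename T>
std::vector<T> CloneToVector(const T* in_array, std::size_t in_count)
{
    if (!in_array)
        return {};
    // Copy-constructs each element directly from the source range,
    // so T does not need a default constructor.
    return std::vector<T>(in_array, in_array + in_count);
}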

why I cannot use "const string* sp = 0" to initialize in a constructor

My problem is that const string* p gives me an error. What is wrong with this, since I am not changing the original value? const int& n = 0 works fine.
#include <iostream>
#include <string>
using namespace std;
class HasPtr
{
private:
int num;
string* sp;
public:
//constructor
HasPtr(const int& n = 0, const string* p = 0): num(n), sp(p) {}
//copy-constructor
HasPtr(const HasPtr& m): num(m.num), sp(m.sp) {}
//assignment operator
HasPtr& operator=(const HasPtr& m)
{
num = m.num;
sp = m.sp;
return *this;
}
//destructor
~HasPtr() {}
};
int main ()
{
return 0;
}
The error output is:
error: invalid conversion from ‘const std::string*’ to ‘std::string*’
private:
int num;
string* sp;
sp is non-const, but p is:
const string* p = 0
The result is that this sp(p) is basically this:
string* sp = (const string*)0;
It's complaining because doing this would remove the string's const-ness.
This is because your sp member is not const.
string* sp;
But your parameter p is a const. The result is that you are trying to assign a const pointer to a non-const pointer - hence the error.
To fix this, you need to declare sp as const as well.
const string* sp;
I think you got confused by the various meanings of const.
const string* sp;
declares a pointer to a constant object, which only allows access to constant methods of class string.
string* const sp;
declares a pointer to a string that is itself a constant member of the class, which means you must initialise sp in the constructor and cannot change it afterwards (except using const_cast<>).
const int& num;
in the argument list means that the function promises not to alter the value of the integer referred to by num, but it can of course copy its value (as you did). The corresponding operation for the string pointer would have been
HasPtr(string*const&p) : sp(p) { /* ... */ }
and would have been perfectly legal albeit rather unorthodox.
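To make the two declarations concrete, a small illustrative sketch (assuming using namespace std as in the question):
#include <string>
using namespace std;

int main()
{
    string s = "hello";

    const string* p1 = &s;   // pointer to const: the string cannot be modified through p1
    // *p1 = "x";            // error: *p1 is const
    p1 = nullptr;            // fine: the pointer itself can be reseated

    string* const p2 = &s;   // const pointer: p2 itself cannot be reseated
    *p2 = "world";           // fine: the pointee is not const
    // p2 = nullptr;         // error: p2 is const
}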

Simulating new[] with argument constructor

If I am not modifying any static variable inside the argument constructor, is the following the proper way to simulate new T[N](x, y); (array new with arguments)?
template<typename T>
void* operator new [] (size_t size, const T &value)
{
T* p = (T*) malloc(size);
for(int i = size / sizeof(T) - 1; i >= 0; i--)
memcpy(p + i, &value, sizeof(T));
return p;
}
Usage will be,
struct A
{
A () {} // default
A (int i, int j) {} // with arguments
};
int main ()
{
A *p = new(A(1,2)) A[10]; // instead of new A[10](1,2)
}
I'd suggest
std::vector<A> v(10, A(1,2));
I realize that this doesn't really address the question for arrays.
You could use
p = &v[0];
since the standard guarantees contiguous storage. Be very careful with resizing the vector though, because it could invalidate p
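A minimal sketch of that suggestion (illustrative only, using the A from the question):
#include <vector>

struct A
{
    A () {}               // default
    A (int i, int j) {}   // with arguments
};

int main ()
{
    // Every element is copy-constructed from A(1, 2),
    // so the two-argument constructor runs exactly once here.
    std::vector<A> v(10, A(1, 2));
    A *p = &v[0];         // contiguous storage; invalidated if v is resized
    (void)p;
}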
I checked boost::array<> (which adapts C style arrays), but it doesn't define constructors...
This isn’t OK. You are copying objects into uninitialised memory without invoking proper copy semantics.
As long as you’re only working with PODs, this is fine. However, when working with objects that are not PODs (such as your A) you need to take precautions.
Apart from that, operator new cannot be used in this way. As Alexandre has pointed out in the comments, the array won’t be initialised properly, since C++ will call constructors for all elements after having called your operator new, thus overwriting the values:
#include <cstdlib>
#include <iostream>
#include <new>   // placement new
template<typename T>
void* operator new [] (size_t size, T value) {
T* p = (T*) std::malloc(size);
for(int i = size / sizeof(T) - 1; i >= 0; i--)
new(p + i) T(value);
return p;
}
struct A {
int x;
A(int x) : x(x) { std::cout << "int ctor\n"; }
A() : x(0) { std::cout << "default ctor\n"; }
A(const A& other) : x(other.x) { std::cout << "copy ctor\n"; }
};
int main() {
A *p = new(A(42)) A[2];
for (unsigned i = 0; i < 2; ++i)
std::cout << p[i].x << std::endl;
}
This yields:
int ctor
copy ctor
copy ctor
default ctor
default ctor
0
0
… not the desired outcome.
That's not okay - C++ will call those objects' non-trivial default constructor if the type T has one (struct A in your example does), and that would lead to reconstructing objects in memory that is already occupied.
An appropriate solution would be to use std::vector (recommended) or call ::operator new[] to allocate memory, then call constructors using placement-new and taking care of exceptions if any.
You should consider that operator new[] may be called asking for more memory than the bare amount sizeof(T) * n.
This extra memory may be needed because C++ must know how many objects to destroy in case of delete[] p;, but it cannot reliably use the size of the block of memory allocated by new T[sz] to infer this number, because the memory may have been requested from a custom memory manager (e.g. your case), so there is no way to know how much memory was allocated just by knowing the pointer.
This also means that your attempt to provide already-initialized objects will fail, because the actual array returned to the application will potentially not start at the address you returned from your custom operator new[], so your initialization could end up at the wrong offset.
#include <cstdlib>  // malloc/free
#include <new>      // placement new

template <typename myType> myType * buildArray(size_t numElements, const myType & startValue) {
    myType * newArray = (myType *)malloc(sizeof(myType) * numElements);
    if (NULL != newArray) {
        size_t index;
        for (index = 0; index < numElements; ++index) {
            new (newArray + index) myType(startValue);
        }
    }
    return newArray;
}
template <typename myType> void destroyArray(size_t numElements, myType * oldArray) {
    size_t index;
    for (index = 0; index < numElements; ++index) {
        (oldArray + index)->~myType();
    }
    free(oldArray);
}
A * p = buildArray(10, A(1,2));
destroyArray(10, p);
destroyArray could also be written like this depending on the platform you are building for:
template <typename myType> void destroyArray(myType * oldArray) {
size_t numElements=malloc_size(oldArray)/sizeof(myType); //or _msize with Visual Studio
size_t index;
for (index=0;index<numElements;++index) {
(oldArray+index)->~myType();
}
free(oldArray);
}

Is this C++ reassignment valid?

Sorry for the basic question, but I'm having trouble finding the right thing to google.
#include <iostream>
#include <string>
using namespace std;
class C {
public:
C(int n) {
x = new int(n);
}
~C( ) {
delete x;
}
int getX() {return *x;}
private:
int* x;
};
void main( ) {
C obj1 = C(3);
obj1 = C(4);
cout << obj1.getX() << endl;
}
It looks like it does the assignment correctly, then calls the destructor on obj1 leaving x with a garbage value rather than a value of 4. If this is valid, why does it do this?
If there is a class C that has a constructor that takes an int, is this code valid?
C obj1(3);
obj1=C(4);
Assuming C has a copy assignment operator (which it will by default), the code is valid. What will happen is that in the first line obj1 is constructed with 3 as the parameter to the constructor. Then on the second line, a temporary C object is constructed with 4 as a parameter, and operator= is invoked on obj1 with that temporary object as a parameter. After that the temporary object will be destructed.
If obj1 is in an invalid state after the assignment (but not before), there likely is a problem with C's operator=.
Update: If x really needs to be a pointer you have three options:
Let the user, instead of the destructor, decide when the value of x should be deleted, by defining a destruction method that the user needs to call explicitly. This will cause memory leaks if the user forgets to do so.
Define operator= so that it will create a copy of the pointed-to integer instead of a copy of the pointer. If in your real code you use a pointer to something that's much bigger than an int, this might be too expensive.
Use reference counting to keep track of how many instances of C hold a pointer to the same object and delete the object when that count reaches 0 (see the sketch below).
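For the reference-counting option, a hedged sketch using std::shared_ptr (not part of the original answer, just illustrative):
#include <memory>

class C {
public:
    explicit C(int n) : x(std::make_shared<int>(n)) {}
    int getX() const { return *x; }
    // The compiler-generated copy constructor, assignment operator and destructor
    // simply copy/release the shared_ptr; the int is deleted automatically when
    // the last C referring to it goes away.
private:
    std::shared_ptr<int> x;
};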
If C contains a pointer to something, you pretty much always need to implement operator=. In your case it would have this signature
class C
{
public:
void operator=(const C& rhs)
{
// For each member in rhs, copy it to ourselves
}
// Your other member variables and methods go here...
};
I do not know enough deep, subtle C++ to explain the problem you are encountering. I do know, however, that it's a lot easier to make sure a class behaves the way you expect if you follow the Rule of Three, which the code you posted violates. Basically, it states that if you define any of the following you should define all three:
Destructor
Copy constructor
Assignment operator
Note as well that the assignment operator implementation needs to correctly handle the case where an object is assigned to itself (so-called "self assignment"). The following should work correctly (untested):
#include <iostream>
#include <string>
using namespace std;

class C {
public:
    C(int n) {
        x = new int(n);
    }
    C(const C &other): C(other.getX()) { }
    ~C( ) {
        delete x;
    }
    void operator=(const C &other) {
        // Just assign over x. You could reallocate if you first test
        // that x != other.x (the pointers, not contents). The test is
        // needed to make sure the code is self-assignment-safe.
        *x = *(other.x);
    }
    int getX() const {return *x;}
private:
    int* x;
};

int main( ) {
    C obj1 = C(3);
    obj1 = C(4);
    cout << obj1.getX() << endl;
}
Basically you are trying to re-implement a smart pointer.
This is not trivial to get correct for all situations.
Please look at the available smart pointers in the standard first.
A basic implementation (Which will fail under certain situations (copy one of the standard ones to get a better one)). But this should cover the basics:
#include <utility>  // std::swap

class X
{
    int* data;
public:
    // Destructor obvious
    ~X()
    {
        delete data;
    }
    // Easy constructor.
    X(int x)
        :data(new int(x))
    {}
    // Copy constructor.
    // Relatively obvious: just do the same as the normal constructor.
    // Get the value from the rhs (copy). Note a class is a friend of
    // itself and thus you can access the private members of copy without
    // having to use any accessor functions like getX()
    X(X const& copy)
        :data(new int(*copy.data))
    {}
    // Assignment operator
    // This is an example of the copy and swap idiom. This is probably overkill
    // for this trivial example but provided here to show how it is used.
    X& operator=(X const& copy)
    {
        X tmp(copy);
        this->swap(tmp);
        return *this;
    }
    // Write a swap() operator.
    // Mark it as no-throw.
    void swap(X& rhs) throw()
    {
        std::swap(data, rhs.data);
    }
};
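A quick usage sketch of the class above (illustrative only):
int main()
{
    X a(3);
    X b(4);
    b = a;        // copy-and-swap assignment: b now owns its own copy of a's int
    X c(b);       // copy constructor: c gets its own int as well
    c.swap(a);    // no-throw swap of the underlying pointers
}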
What's happening is that your destructor has deallocated the memory allocated by the constructor of C(4). So the pointer you have copied over from C(4) is a dangling pointer, i.e. it still points to the location of the deallocated memory:
class C {
public:
C(int n) {
x = new int(n);
}
~C( ) {
//delete x; //Don't deallocate
}
void DeallocateX()
{
delete x;
}
int getX() {return *x;}
private:
int* x;
};
int main(int argc, char* argv[])
{
// Init with C(3)
C obj1 = C(3);
// Deallocate C(3)
obj1.DeallocateX();
// Allocate memory and store 4 with C(4) and pass the pointer over to obj1
obj1 = C(4);
// Use the value
cout << obj1.getX() << endl;
// Cleanup
obj1.DeallocateX();
return 0;
}
Be explicit about ownership of pointers! auto_ptr is great for this. Also, when creating a local, don't write C obj1 = C(3); that creates two instances of C and initializes the first with the copy constructor of the second.
Heed The Guru.
#include <iostream>
#include <memory>

class C {
public:
    C(int n) : x(new int(n)) { }
    int getX(){ return *x; }
    C(const C& other) : x(new int(*other.x)){}
    C& operator=(const C& other) { *x = *other.x; return *this; }
private:
    std::auto_ptr<int> x;
};

int main() {
    C obj1(3);
    obj1 = C(4);
    std::cout << obj1.getX() << std::endl;
}
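std::auto_ptr was deprecated in C++11 and removed in C++17, so a rough modern equivalent would use std::unique_ptr instead (a sketch, assuming C++14 for std::make_unique):
#include <iostream>
#include <memory>

class C {
public:
    explicit C(int n) : x(std::make_unique<int>(n)) {}
    int getX() const { return *x; }
    C(const C& other) : x(std::make_unique<int>(*other.x)) {}
    C& operator=(const C& other) { *x = *other.x; return *this; }
private:
    std::unique_ptr<int> x;
};

int main() {
    C obj1(3);
    obj1 = C(4);   // a temporary C is built, then deep-copied into obj1
    std::cout << obj1.getX() << std::endl;   // prints 4
}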
When are you testing the value of obj1? Is it after you leave the scope?
In your example, obj1 is a stack object. That means as soon as you leave the function in which it defined, it gets cleaned up (the destructor is called). Try allocating the object on the heap:
C *obj1 = new C(3);
delete obj1;
obj1 = new C(4);