Overloaded assignment operator for copy/move operations? - c++

I'm going over the basics of overloaded operators, specifically the assignment operator. I'm trying to understand the use of overloading in dictating copy and move behavior by following this:
operator overloading
I find the example they give to be quite unclear.
This is the basic code I've written so far to illustrate overloading. How can this code be edited to illustrate the use of overloading in customizing copy and move behavior?
#include <iostream>

class Distance
{
public:
    int feet, inches;

    Distance()
    {
        feet = 0;
        inches = 0;
    }
    Distance(int f, int i)
    {
        feet = f;
        inches = i;
    }
    auto operator=(Distance &D) -> void // Use operator to perform an additional operation (adding 100)
    {
        feet = D.feet + 100;
        inches = D.inches + 100;
    }
};

int main()
{
    Distance D1;
    D1.feet = 10;
    D1.inches = 12;
    Distance D2;
    D2 = D1;
    std::cout << D2.feet << std::endl;
}

You shouldn't use the assignment operator like that (even if you can). An assignment operator with an additional operation could be used, for example, to count how many assignments were made. Altering data this way leads to confusion and human error when using the interface: you expect code to work in a certain way, and in the case of the assignment operator, that means copying values. Just follow how the built-in types work and implement your overloaded operators the same way. You wouldn't multiply volumes in operator+(), right? That wouldn't make any sense.
Additional operation in this case could be e.g.:
static int assignmentCount = 0;

auto operator=(Distance &D) -> void
{
    feet = D.feet;
    inches = D.inches;
    std::cout << "Made assignment"; // Just some debug output
    assignmentCount++;              // Count assignments made
}
Don't forget that you used void as the return type, which prevents you from writing D1 = D2 = D3, something that can be useful in some cases. Returning a reference to the receiving object is common practice.
I recommend reading Professional C++ by Marc Gregoire, or one of Stroustrup's books. In my experience, online sources can lead to some confusion; books are generally better for learning the basics.

Is this good or bad practice with dynamic memory allocation?

I've seen this used by other people and it looks really clever, but I'm not sure if it's good or bad practice. It works, and I like the way it works, personally, but is doing this actually useful in the scope of a larger program?
What they've done is dynamically allocate some data type inside the actual function argument, and delete it in the function. Here's an example:
#include <iostream>

class Foo {
private:
    int number;
public:
    Foo(int n) : number(n) { }
    int num() { return number; }
    Foo* new_num (int i) { number = i; }
};
void some_func (int thing, Foo* foo);

int main() {
    std::cout << "Enter number: ";
    int n;
    std::cin >> n;
    some_func(n, new Foo(0)); // <-- uses the 'new' operator with a function argument
    return 0;
}

// calculates difference between 'thing' and 'n'
// then puts it inside the Foo object
void some_func (int thing, Foo* foo) {
    std::cout << "Enter another number: ";
    int n;
    std::cin >> n;
    std::cout << "Difference equals " << foo->new_num(thing - n)->num() << std::endl;
    delete foo; // <-- the Foo object is deleted here
}
I knew that it was possible to use operators in function arguments, but I was only aware of doing this with the operators on levels 2, 4 through 15, and 17, as well as the assignment operators, ? :, ++ and --, unary + and -, !, ~, * and &, sizeof and casts. Stuff like this:
foo((x < 3)? 5 : 6, --y * 7);
bar(player->weapon().decr_durability().charge(0.1), &shield_layers);
So, I actually have two questions.
Is the new-as-an-argument good practice?
Since apparently any operator returning a type works if new works, is using these good practice?
::, new [], throw, sizeof..., typeid, noexcept, alignof
No, this is not clever at all. It takes a function that could be simpler and more general and reduces its capabilities for no reason, while at the same time creating an entry point into your program for difficult-to-debug bugs.
It's not clear to me exactly what Foo::new_num is meant to do (right now it doesn't compile), so I won't address your example directly, but consider the following two code samples:
void bad_function(int i, F * f)
{
    f->doSomething(i);
    delete f;
}
// ...
bad_function(0, new F(1, 2, 3));
versus
void good_function(int i, F & f)
{
    f.doSomething(i);
}
// ...
F f(1, 2, 3);
good_function(0, f);
In both cases you create a new F object for the call and it's destroyed once you're done using it, so you get no advantage from using bad_function instead of good_function. However, there's a bunch of stuff you can do with good_function that's not so easy to do with bad_function, e.g.
void multi_function(const std::vector<int> & v, F & f)
{
    for(int i : v) { good_function(i, f); }
}
Using the good_function version means you're also prevented by the language itself from doing various things you don't want to do, e.g.
F * f;              // never initialized
bad_function(0, f); // undefined behavior, resulting in a segfault if you're lucky
It's also just better software engineering, because it makes it a lot easier for people to guess what your function does from its signature. If I call a function whose purpose involves reading in a number from the console and doing arithmetic, I absolutely do not expect it to delete the arguments I pass in, and after I spent half an hour figuring out what's causing some obscure crash in some unrelated part of the code I'm going to be furious with whoever wrote that function.
By the way, assuming that F::doSomething doesn't alter the value of the current instance of F in any way, it should be declared const:
class F
{
public:
    void doSomething(int i) const;
    // ...
};
and good_function should also take a const argument:
void good_function(int i, const F & f);
This lets anyone looking at the signature confidently deduce that the function won't do anything stupid like mess up the value of f that's passed into the function, because the compiler will prevent it. And that in turn lets them write code more quickly, because it means there's one less thing to worry about.
In fact if I see a function with a signature like bad_function's and there's not an obvious reason for it, then I'd immediately be worried that it's going to do something I don't want and I'd probably read the function before using it.

c++ changing implicit conversion from double to int

I have code with a lot of conversions from double to int. The code can be seen as
double n = 5.78;
int d = n; // double implicitly converted to an int
The implicit conversion from double to int truncates, which means 5.78 will be saved as 5. However, it has been decided to change this behavior to a custom rounding.
One approach would be to have my own DOUBLE and INT data types and use conversion operators, but alas my code is big and I am not allowed to make many changes. Another approach I thought of was to add 0.5 to each of the numbers, but again the code is big and I would be changing too much.
What can be a simple approach to change the double-to-int conversion behaviour that impacts the whole code?
You can use uniform initialization syntax to forbid narrowing conversions:
double a;
int b{a}; // error
If you don't want that, you can use std::round function (or its sisters std::ceil/std::floor/std::trunc):
int b = std::round(a);
If you want minimal diff changes, here's what you can do. Please note, though, that this is a bad solution (if it can even be called one), and is much more likely to leave you crashing and burning from undefined behavior than to actually solve a real problem.
Define your custom Int type that handles conversions the way you want it to:
class MyInt
{
    //...
};
then evilly replace each occurrence of int with MyInt with the help of preprocessor black magic:
#define int MyInt
Problems:
if you accidentally change definitions in the standard library - you're in UB-land
if you change the return type of main - you're in UB-land
if you change the definition of a function but not its forward declarations - you're in UB/linker-error land. Or in silently-calling-a-different-overload land.
probably more.
Do something like this:
#include <iostream>
using namespace std;

int myConvert (double rhs)
{
    int answer = (int)rhs; // do something fancier here to meet your needs
    return answer;
}

int main()
{
    double n = 5.78;
    int d = myConvert(n);
    cout << "d = " << d << endl;
    return 0;
}
You can make myConvert as fancy as you want. Otherwise, you could define your own class for int (e.g. myInt class) and overload the = operator to do the right conversion.

Is it okay to use object pointer to overload assignment operator?

Suppose we want to implement a Complex Number class:
#include <iostream>
using namespace std;

class Complex{
public:
    double real;
    double imag;
    Complex();
    Complex(double _r, double _i)
        :real(_r),imag(_i){}
    const Complex* operator = (const Complex*);
};
Normally to overload assignment operator we would pass a const reference as parameter, but why can't we pass a pointer instead, like this?
const Complex* Complex::operator = (const Complex* cp){
    real = cp->real;
    imag = cp->imag;
    return this;
}

int main(){
    Complex c1(1,2), c2(3,4), c3(5,6);
    Complex *pc = &c3;
    c1 = c2 = pc;
    cout<<"c1 = "<<c1.real<<"+"<<c1.imag<<'i'<<endl;
    cout<<"c2 = "<<c2.real<<"+"<<c2.imag<<'i'<<endl;
    cout<<"c3 = "<<c3.real<<"+"<<c3.imag<<'i'<<endl;
    return 0;
}
The above code runs and gives the answer just as I expected: all three complex numbers yield 5+6i.
I know this approach is rather unorthodox, but it seems to work as well. I'm wondering why our teacher strongly recommended us to use reference for assignment? Thanks a lot guys!
You can do that if you wish, but it's not a good idea.
Usually you'd expect an assignment to take an object of the same type:
/*typeA*/ A = /*typeA*/ B;
A more practical problem is that integers can quietly become pointers: the literal 0 is a valid null pointer constant. You would assume that
c1 = 0;
zeroes out the complex number; instead it passes a null Complex* and segfaults when operator= dereferences it (some compilers will warn that an int-to-Complex* conversion was probably not intended, but sooner or later, with one cast<> or another, it will happen).

C++ What's the usage difference between get and typecasting? Which one should I use?

#include <iostream>
using namespace std;

class A
{
    int k;
public:
    int getK() { return k; }
    operator int() { return k; }
};

int main()
{
    A a;
    cout << a.getK() << " " << int(a) << endl;
}
What's the difference, and which one should I use? I'm wondering if typecasting returns a reference and getK returns a copy.
The only difference is that typecasting can be implicit.
int i = a;
Note that C++11 allows you to require that the cast operator be called explicitly.
explicit operator int() { return k; }
They are both returning copies. Providing a cast operator usually is for when casting is necessary. For example you might do something like this maybe:
#include <iostream>
using namespace std;

class A
{
    double k;
public:
    A(double v) : k(v) {}
    double getK() { return k; }
    operator int() { return static_cast<int>(k); }
};

int main()
{
    A a(3.14);
    cout << a.getK() << " " << int(a) << endl; // 3.14 3
}
In general I avoid cast operators entirely because I prefer explicit casting.
It returns what the return type is. If you cast to a reference, then that's what you get back. What you're doing both times is making a copy.
The "difference" is what your method does. Your "cast" could add 5 to it and then return it. Or anything you want.
As for appropriateness, as chris said in the first comment, it's usually an "is your class an X, or does it merely contain an X?" type question. Operators should be provided for common conversions because your class operates as something, not merely to extract something from it. That's why converting a string to an integer is a separate function rather than a cast on the string class, whereas a complex number type can often be cast directly to a double or int, though that strips information from it. The fact that conversions can be abused is actually why some modern languages don't allow operator overloading. Others take the view that while it can be abused, it can also be powerful. That's the C++ philosophy on most things: give all the tools, and let the user do good or bad with them.
I hope that made sense.

Different destructor behavior between vc9 and gcc

The following code gives a different number of destructor calls when compiled on GCC and VC9. On VC9 I get 5 destructor messages, which I understand: when the overloaded + operator is called, two objects are created (the by-value parameter and the local temp), and a temporary is created for the return value, which accounts for 3 destructions. When the overloaded = operator is called, one parameter object is created and again a temporary for the return value. That sums to five destructions, not counting the three objects created at the start of main.
But when I compile on GCC I get 3.
Which leads me to guess that no temporary object is created when the function returns? Or is it just different behavior between compilers? I simply don't know, and some clarification would be nice.
#include <iostream>
using namespace std;

class planetCord {
    double x, y, z;
public:
    planetCord() { x = y = z = 0; }
    planetCord(double j, double i, double k) { x = j; y = i; z = k; }
    ~planetCord() { cout << "destructing\n"; }
    planetCord operator+(planetCord obj);
    planetCord operator=(planetCord obj);
    void show();
};

planetCord planetCord::operator +(planetCord obj) {
    planetCord temp;
    temp.x = x + obj.x;
    temp.y = y + obj.y;
    temp.z = z + obj.z;
    return temp;
}

planetCord planetCord::operator =(planetCord obj) {
    x = obj.x;
    y = obj.y;
    z = obj.z;
    return *this;
}

void planetCord::show() {
    cout << "x coordinates: " << x << "\n";
    cout << "y coordinates: " << y << "\n";
    cout << "z coordinates: " << z << "\n\n";
}

int main() {
    planetCord jupiter(10, 20, 30);
    planetCord saturn(50, 100, 200);
    planetCord somewhereDark;
    jupiter.show();
    saturn.show();
    somewhereDark.show();
    somewhereDark = jupiter + saturn;
    jupiter.show();
    saturn.show();
    somewhereDark.show();
    return 0;
}
GCC is implementing the "return value optimization" to skip temporaries. Set VC9 to Release mode and it'll probably do the same.
If GCC is really good, it is seeing that temp inside operator+ will be default-initialized, just like somewhereDark, and can just use a reference to somewhereDark directly if it tries to inline the function. Or it is seeing that the pass-by-value is useless and can instead pass-by-reference.
A permissible but not mandatory optimization for a C++ compiler is to turn the tight sequence:
ctor for new temporary object X
copy ctor from X to other object Y
dtor for X
into just performing the ctor directly on Y. A really good C++ optimizer can do that across a function call (i.e. when X is the return value of a function). It looks like GCC is optimizing better. Does the result change as you play with the optimization options of the two compilers?
There are a number of things wrong with your code. Can I suggest you investigate two concepts - const and references? If your C++ textbook doesn't cover these, get a new textbook - I strongly recommend Accelerated C++
by Koenig & Moo.
Actually, in GCC, temporaries ARE being made. They are:
In operator+.
Returned by operator+.
Returned by operator=.
In MSVC (I think; can't test), temporaries are being made as well. However, some are not being optimized away like GCC does. They are:
As a parameter to operator+.
In operator+.
Returned by operator+.
As a parameter to operator=.
Returned by operator=.
Ironically I think MSVC is in the right here, because I'm not sure if GCC's behaviour is standard.
To make them both behave the same, use const references instead of passing the object by value.