C++ Operator Overloading??? Isn't it more like override? [duplicate]

This question already has answers here:
Why is it called operator overloading?
(2 answers)
Closed 4 years ago.
In C++, why is operator overloading called "overloading"?
To me, this seems more like "override."

Because you're not changing the meaning of +, -, *, /, etc. with respect to the fundamental datatypes. You can't change what those mean for char, short, int, float, etc. Therefore, you're not truly overriding anything.
You are instead expanding the meaning of them to new contexts, which seems to fit with the term "overloading": you've loaded the symbols onto new meanings they did not previously have.

This is pretty subjective, and not easy to answer in specific terms.
But generally we use "override" to mean "replacing the behaviour of a function with another behaviour", as in when you have a polymorphic class hierarchy, and a call to a function whose various implementations are virtual can result in totally different behaviour. Certainly that's what the standard means by the term.
Isn't this also what happens with overloading? Kind of. But usually when you overload a function, you do so to give it different parameter lists, and you would still expect each implementation to perform the same job. It doesn't have to, but one expects it to.
Similarly with overloaded operators, if you're overloading say operator+ then generally we expect that it actually still just does the normal, conventional "addition" logic — but overloaded so that it can take an argument of your new class type, instead of the existing overloads that take built-in types.
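For instance, a minimal sketch with a hypothetical Money type: operator+ gains a new overload for the class, while the built-in overloads for int, double, and so on remain exactly as they were.

struct Money {
    long cents;
};

// Still conventional "addition" logic, just for a new operand type.
Money operator+(Money a, Money b) {
    return Money{a.cents + b.cents};
}

int main() {
    Money a{150}, b{275};
    Money total = a + b;               // picks the new overload
    return total.cents == 425 ? 0 : 1;
}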
In practice, that breaks down a bit, because even the standard library makes operator<< mean something completely different (among other examples).
Still, the actual task of creating these new operators is accomplished by what the language considers to be function overloading (particularly as no virtual calls are involved at all).
In short, you're arguably not entirely wrong, but it's pretty arbitrary and this is what we ended up with.

Related

Operator vs functions behaviour

I am reading through the following document,
https://code.google.com/p/go-wiki/wiki/GoForCPPProgrammers
and found the statement below a bit ambiguous:
Unlike in C++, new is a function, not an operator; new int is a syntax error.
In C++ we implement operators as functions, e.g. + using operator+.
So what is the exact difference of operator vs function in programming languages in general?
The actual distinction between functions and operators depends on the programming language. In plain C, operators are a part of the language itself. One cannot add an operator, nor change the behavior of an existing operator. This is not the case with C++, where operators are resolved to functions.
From a totally different point of view, consider Haskell, where ANY (binary) function may be treated as a binary operator:
If you don't speak Haskell, but know about dot products, this example should still be fairly straight-forward. Given:
dotP :: (Double, Double) -> (Double, Double) -> Double
dotP (x1, y1) (x2, y2) = x1 * x2 + y1 * y2
Both
dotP (1,2) (3,4)
and
(1,2) `dotP` (3,4)
will give 11.
To address the quote in the Go documentation: The Go developers are simply stressing that where in C++, one would treat new as a keyword with its own syntax, one should treat new in Go as any other function.
Although I still think the question is basically a duplicate of Difference between operator and function in C++?, it may be worthwhile to clarify what the difference means in the specific context you quoted.
The point there is that a function in C++ is something that has a name and possibly function arguments, and is called using this syntax:
func(arg1,arg2,...)
In other words, the name first, then a round bracket, then the comma-separated list of arguments. This is the function call syntax of C++.
Whereas an operator is used in the way described by clause 5 of the Standard. The details of the syntax vary depending on the kind of operator: there are unary operators like &, binary operators like + and *, the ternary conditional operator ?:, and special keywords like new, delete, and sizeof. Some of these translate to function calls for user-defined types, but they don't use the function call syntax described above. That is, you don't call
new(arg1,arg2,...)
but instead, you use a special "unary expression syntax" (§5.3), which implies, among other things, that there are no round brackets immediately after the keyword new (at least, not necessarily).
It's this syntactic difference that the authors talk about in the section you quoted.
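To make the syntactic difference concrete, a small sketch (standard C++, nothing project-specific):

#include <new>
#include <cstdio>

int main() {
    // Function-call syntax: operator new is an ordinary function that
    // allocates raw memory; no constructor runs.
    void* raw = operator new(sizeof(int));
    operator delete(raw);

    // Unary-expression syntax: the new keyword allocates and constructs,
    // with no round brackets required after the keyword itself.
    int* p = new int(5);
    std::printf("%d\n", *p);   // prints 5
    delete p;
}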
"What is the difference between operators and functions?"
Syntax. But in fact, it's purely a convention with regards to
the language: in C++, + is an infix operator (and only
operators can be infix), and func() would be a function. But
even this isn't always true: MyClass::operator+() is
a function, but it can, and usually is invoked using the
operator syntax.
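To make that concrete, a minimal sketch (MyClass here is hypothetical): the same operator function can be invoked with either syntax.

struct MyClass {
    int v;
    // An ordinary function with a special name.
    MyClass operator+(MyClass rhs) const { return MyClass{v + rhs.v}; }
};

int main() {
    MyClass a{1}, b{2};
    MyClass c = a + b;            // operator syntax
    MyClass d = a.operator+(b);   // function-call syntax; same function
    return c.v == d.v ? 0 : 1;
}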
Other languages have different rules. In languages like Lisp, there is no real difference. One can distinguish between built-in functions and user-defined functions, but the distinction is somewhat artificial, since you could easily extend Lisp to add additional built-in functions. There are also languages which allow infix notation for user-defined functions. And languages like Python map between the two: lhs + rhs maps to the function call lhs.__add__(rhs), so "operators" are really just syntactic sugar.
In sum, there is no rule for programming languages in general. There are simply two different words, and each language is free to use them as it pleases, to best describe the language.
So what is the exact difference of operator vs function in programming languages in general?
It is broad. In an abstract syntax tree, operators are unary, binary, or sometimes ternary nodes, binding expressions together with a certain precedence; e.g. + has lower precedence than *, which in turn has lower precedence than new.
Functions are a much more abstract concept. As a primitive, they are typed subroutine entry points that, depending on the language, can be used as rvalues with lexical scope.
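To illustrate the distinction, a toy sketch of how a syntax tree might represent the two; the node types are hypothetical, not taken from any particular compiler:

#include <memory>
#include <string>
#include <vector>

struct Expr {
    virtual ~Expr() = default;
};

// Operators become fixed-arity nodes carrying a precedence used during parsing.
struct BinaryOp : Expr {
    char op;                        // '+', '*', ...
    int precedence;                 // e.g. '*' binds tighter than '+'
    std::unique_ptr<Expr> lhs, rhs;
};

// A function call is just another node holding an arbitrary argument list.
struct Call : Expr {
    std::string callee;
    std::vector<std::unique_ptr<Expr>> args;
};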
C++ allows operators to be overloaded with functions, dispatching operator evaluation to those functions. This is a language "feature" that, as the existence of this question implies, mostly confuses people, and it is not available in Go.
Operators are part of the C++ language syntax. In C++ you may "overload" them as functions if you don't want the default behaviour. For complex or user-defined types, the language cannot know the semantics of an operator, so the user can overload it with their own implementation.

Member functions for scalar types and operator overloading [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
I've been thinking about some possible features C++ could have, does anyone know why they aren't supported?
Member functions for built-in types. This may not seem necessary, but it's an interesting feature nonetheless. Example pseudocode:
int int::getFirstBit(void) { return *this & 1; }
int a = 2;
a.getFirstBit();
This may seem useless, but it shouldn't be hard to implement either. With this springs up the following thought:
Member functions outside the class definition. I don't see why this shouldn't be supported, except for conflicts with access restrictions (public, protected, private, etc.) and encapsulation, but perhaps only structs could have this feature.
Operator overloading for non-object types; a use for this could be pointers or arrays.
I know these features aren't necessary for much, but they still seem cool. Is it because they don't seem necessary or because they can cause many headaches?
I know these features aren't necessary for much, but they still seem cool. Is it because they don't seem necessary or because they can cause many headaches?
Partly one, partly the other. Every new feature added to a language increases the complexity of the language, compilers, and programs. In general, unless there is a real motivating need (or the new feature will help in writing simpler, safer programs), features are not added.
As for the particular features you suggest:
1- Member functions for built-in types
There is no need: anything you want to do with a member function you can do with a free function at the same cost, the only difference in user code being whether the argument goes before a dot or inside the parentheses (see the sketch below).
The only thing that cannot be done with free functions is dynamic dispatch (polymorphism) but since you cannot derive from those types, you could not have polymorphism either. Then again, to be able to do this you would need 2.
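As a minimal sketch of that equivalence (getFirstBit is the questioner's hypothetical example, written as a free function):

int getFirstBit(int x) { return x & 1; }

int main() {
    int a = 2;
    return getFirstBit(a);   // 0 -- spelled getFirstBit(a) instead of a.getFirstBit()
}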
2- Member functions outside class definition.
I understand that you mean extension methods as in C#, where new methods can be added to a type externally. There are very few uses of this feature that are not simple enough to implement without it. Then there are the complexities.
Currently, the compiler sees a single definition of the class and is able to determine all member methods that can be applied to an element of the type. This includes virtual functions, which means that the compiler can at once determine the virtual function table (while vtables are not mandated by the standard, all implementations use them). If you could add virtual methods outside of the class definition, different translation units would see different, incompatible views of the type. Dispatching to the 3rd virtual function could be calling foo in one .cpp file but bar in another. Solving this without postponing a big part of the linking stage to the loading of the binary into memory for execution would be almost impossible, and postponing it would mean a significant change in the language model.
If you restrict the feature to non-virtual functions, things get simpler, as the calls would be dispatched to the function directly, but even this would imply other levels of complexity. With a separate compilation model as in C++, you would end up having to write headers for the original class and the extension methods, and include both in any translation unit that uses them. In most cases you could simplify that by just declaring those same methods in the original header as real member methods (or free functions; free functions do form part of the interface of user-defined types!).
Additionally, allowing this would mean that simple typos in the code could have unexpected results. Currently the compiler verifies that the definition of a member function has a proper declaration; if this feature were allowed, that check would have to be removed, and a simple typo while writing the name in either the declaration or definition would create two separate functions rather than a quick-to-fix compiler error.
3- Operator overloading for non-object types
The language allows overloading of operators for all user-defined types, which includes classes and enumerations. For the rest of the types, there is a set of operators that are already defined with precise semantics that cannot be changed. Again, with the separate compilation model, allowing it would mean that 1+2 could mean different things in different translation units; in particular, the exact combination of includes could change the semantics of the program, and that would cause havoc. You remove a dependency in your header, and that removes an include that contains the overload for const char* + int, which in turn means that the semantics of "Hi" + 2 in code that included your header changes from the user-defined operation to yielding a pointer to the nul terminator of the string. This is really dangerous, because it means that a simple change in one part of a program can render other parts of the program incorrect.
Even for combinations for which there is no current meaning (char* + int*) you can use a regular function to provide the same operation. Remember that you should only overload an operator when in the domain that you are modeling that operation is naturally understood to have that particular semantics, which is why you can overload for user defined types, but pointers are not part of your domain, but rather part of the language, and in the language there is no natural definition of what "Hi" + new int(5) means. Operator overloading has the purpose of making code more readable and in any context for which there is no natural definition, operator overloading has the exact opposite effect.
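For reference, a tiny sketch of the built-in meaning the answer alludes to:

#include <iostream>

int main() {
    // Pointer arithmetic on the string literal {'H', 'i', '\0'}:
    const char* p = "Hi" + 2;
    std::cout << '[' << p << "]\n";   // prints [] -- p points at the terminating nul
}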
Because you can write a free function that does the same thing. Would the "member function" be so much more desirable so as to offset the tremendous cost of ratifying this in the standard and having compiler vendors implement it? No.
Outside where exactly?
See 1.

Is the C++ * operator "already overloaded?"

My C++ teacher thinks that the * operator in standard C++ is "already overloaded," because it can mean indirection or multiplication depending on the context. He got this from C++ Primer Plus, which states:
Actually, many C++ (and C) operators already are overloaded. For example, the * operator, when applied to an address, yields the value stored at that address. But applying * to two numbers yields the product of the values. C++ uses the number and type of operands to decide which action to take. (pg 502, 5th ed)
At least one other textbook says much the same. So far as I can tell, this is not true; unary * is a different operator from binary *, and the mechanism by which the compiler disambiguates them has nothing to do with operator overloading.
Who is right?
Both are right as the question depends on context and the meaning of the word overloading.
"Overloading" can take a common meaning of "same symbol, different meaning" and allow all uses of "*" including indirection and multiplication, and any user-defined behavior.
"Overloading" can be used to apply to C++'s official operator overloading functionality, in which case indirection and multiplication are indeed different.
ADDENDUM: See Steve's comment below on "operator overloading" versus "token overloading".
I believe you are. The dereference operator and the multiplication operator are different operators, even if written the same. The same goes for +, -, ++, --, and any others I may have forgotten.
I believe the book, in this paragraph, uses the word "overloaded" to mean "used in more than one way", but not by the user. So you could also consider the book to be correct... it also depends on whether you're referring to the overloading of the * operator or of the multiplication operator (for example).
It's overloaded in the sense that the same character is used to mean different things in different places (e.g. pointer dereference, multiplication between ints, multiplication with other built-in types, etc.).
Generally, though, "operator overloading" refers to defining an operator (that has the same symbol as a built-in one) using custom code so that it does something interesting with a user defined type.
So... you're both right :-)
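A short sketch of the distinction, using a hypothetical Num wrapper: unary operator* and binary operator* are declared as two entirely separate functions, just as the built-in dereference and multiplication are distinct operators that merely share a token.

struct Num {
    int value;
    int operator*() const { return value; }                          // unary *: "dereference"
    Num operator*(Num rhs) const { return Num{value * rhs.value}; }  // binary *: multiplication
};

int main() {
    Num a{6}, b{7};
    Num p = a * b;           // resolves to the binary overload
    int v = *p;              // resolves to the unary overload
    return v == 42 ? 0 : 1;
}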

How to dis-ambiguate operator definitions between objects/classes in a programming language?

I'm designing my own programming language (called Lima; if you care, it's on www.btetrud.com), and I'm trying to wrap my head around how to implement operator overloading. I've decided to bind operators on specific objects (it's a prototype-based language). (It's also a dynamic language, where 'var' is like 'var' in JavaScript - a variable that can hold any type of value.)
For example, this would be an object with a redefined + operator:
x =
{ int member
operator +
self int[b]:
ret b+self
int[a] self:
ret member+a
}
I hope it's fairly obvious what that does. The operator is defined for x as either the left or the right operand (using self to denote x's position).
The problem is what to do when you have two objects that define an operator in an open-ended way like this. For example, what do you do in this scenario:
A =
{ int x
operator +
self var[b]:
ret x+b
}
B =
{ int x
operator +
var[a] self:
ret x+a
}
a+b ;; is a's or b's + operator used?
So an easy answer to this question is "well duh, don't make ambiguous definitions", but it's not that simple. What if you include a module that has an A type of object, and then define a B type of object yourself?
How do you create a language that guards against other objects hijacking what you want to do with your operators?
C++ has operator overloading defined as "members" of classes. How does C++ deal with ambiguity like this?
Most languages will give precedence to the class on the left. C++, I believe, doesn't let you overload operators on the right-hand side at all, at least not as member functions. When you define a member operator+, you are defining addition for when this type is on the left, for anything on the right.
In fact, it would not make sense if you allowed your operator + to work for when the type is on the right-hand side. It works for +, but consider -. If type A defines operator - in a certain way, and I write x - y where x is an int and y is an A, I don't want A's operator - to be called, because it would compute the subtraction in reverse!
In Python, which has more extensive operator overloading rules, there is a separate method for the reverse direction. For example, there is a __sub__ method which overloads the - operator when this type is on the left, and a __rsub__ which overloads the - operator when this type is on the right. This is similar to the capability, in your language, to allow the "self" to appear on the left or on the right, but it introduces ambiguity.
Python gives precedence to the thing on the left -- this works better in a dynamic language. If Python encounters x - y, it first calls x.__sub__(y) to see if x knows how to subtract y. This can either produce a result, or return a special value NotImplemented. If Python finds that NotImplemented was returned, it then tries the other way. It calls y.__rsub__(x), which would have been programmed knowing that y was on the right hand side. If that also returns NotImplemented, then a TypeError is raised, because the types were incompatible for that operation.
I think this is the ideal operator overloading strategy for dynamic languages.
Edit: To give a bit of a summary: you have an ambiguous situation, so you really only have three choices:
Give precedence to one side or the other (usually the one on the left). This prevents a class with a right-side overload from hijacking a class with a left-side overload, but not the other way around. (This works best in dynamic languages, as the methods can decide whether they can handle it, and dynamically defer to the other one.)
Make it an error (as #dave is suggesting in his answer). If there is ever more than one viable choice, it is a compiler error. (This works best in static languages, where you can catch this thing in advance.)
Only allow the left-most class to define operator overloads, as in C++. (Then your class B would be illegal.)
The only other option is to introduce a complex system of precedence to the operator overloads, but then you said you want to reduce the cognitive overhead.
I'm going to answer this question by saying "duh, don't make ambiguous definitions".
If I recreate your example in C++ (using a function f instead of the + operator and int/float instead of A/B, but there really isn't much difference)...
template<class t>
void f(int a, t b)
{
std::cout << "me! me! me!";
}
template<class t>
void f(t a, float b)
{
std::cout << "no, me!";
}
int main(void)
{
f(1, 1.0f);
return 0;
}
...the compiler will tell me precisely that: error C2668: 'f' : ambiguous call to overloaded function
If you create a language powerful enough, it's always going to be possible to create things in it that don't make sense. When this happens, it's probably ok to just throw up your hands and say "this doesn't make sense".
In C++, a op b means a.op(b), so it's unambiguous; the order settles it. If, in C++, you want to define an operator whose left operand is a built-in type, then the operator has to be a global function with two arguments, not a member; again, though, the order of the operands determines which function to call. It is illegal to define an operator where both operands are of built-in types.
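A minimal sketch of that rule, with a hypothetical Vec type:

struct Vec {
    double x, y;
    Vec operator*(double k) const { return Vec{x * k, y * k}; }   // handles v * 2.0
};

// The left operand is a built-in type, so this must be a global function.
Vec operator*(double k, const Vec& v) { return v * k; }           // handles 2.0 * v

int main() {
    Vec a = Vec{1.0, 2.0} * 2.0;   // member overload
    Vec b = 2.0 * Vec{1.0, 2.0};   // non-member overload
    return (a.x == b.x && a.y == b.y) ? 0 : 1;
}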
I would suggest that given X + Y, the compiler should look for both X.op_plus(Y) and Y.op_added_to(X); each implementation should include an attribute indicating whether it is a 'preferred', 'normal', or 'fallback' implementation, and optionally also indicating that it is "common". If both implementations are defined, and the implementations are of different priorities (e.g. "preferred" and "normal"), use the priorities to select between them. If both are defined to be of the same priority, and both are "common", favor the X.op_plus(Y) form. If both are defined with the same priority and they are not both "common", flag an error.
I would suggest that the ability to prioritize overloads and conversions would IMHO be a very important feature for a language to have. It is not helpful for languages to squawk about ambiguous overloads in cases where both candidates would do the same thing, but languages should squawk in cases where two possible overloads would have different meanings, each of which would be useful in certain contexts. For example, given someFloat==someDouble or someDouble==someLong, a compiler should squawk, since there can be usefulness in knowing whether the numerical quantities represented by two values match, and there can also be usefulness in knowing whether the left-hand operand holds the best possible representation (for its type) of the value in the right-hand operand. Java and C# do not flag ambiguity in either case, opting instead to use the first meaning for the first expression and the second for the second, even though either meaning might be useful in either case. I would suggest that it would be better to reject such comparisons than to have them implement inconsistent semantics.
Overall, I'd suggest as a philosophy that a good language design should let a programmer indicate what's important and what isn't. If a programmer knows that certain "ambiguities" aren't problems, but other ones are, it should be easy to have the compiler flag the latter but not the former.
Addendum
I looked briefly through your proposal; it seems you're expecting bindings to be fully dynamic. I've worked with a language like that (HyperTalk, circa 1988) and it was "interesting". Consider, for example, that "2X" < "3" < 4 < 10 < "11" < "2X". Double dispatch can sometimes be useful, but only in cases where operator overloads with different semantics (e.g. string and numeric comparisons) are limited to operating on disjoint sets of things. Forbidding ambiguous operations at compile time is a good thing, since the programmer will be in a position to specify what's intended. Having such ambiguity trigger a run-time error is a bad thing, because the programmer may be long gone by the time an error surfaces. Consequently, I really can't offer any advice for how to do run-time double dispatch for operators except to say "don't", unless at compile time you restrict the operands to combinations where any possible overload would always have the same semantics.
For example, if you had an abstract "immutable list of numbers" type, with members to report the length or return the number at a particular index, you could specify that two instances are equal if they have the same length and, for every index, they return the same number. While it would be possible to compare any two instances for equality by examining every item, that could be inefficient if e.g. one instance was a "BunchOfZeroes" type which simply held an integer N=1000000 and didn't actually store any items, and the other was an "NCopiesOfArray" which held N=500000 and {0,0} as the array to be copied. If many instances of those types are going to be compared, efficiency could be improved by having such comparisons invoke a method which, after checking overall array length, checks whether the "template" array contains any non-zero elements. If it doesn't, then it can be reported as equal to the bunch-of-zeroes list without having to perform 1,000,000 element comparisons. Note that the invocation of such a method by double dispatch would not alter the program's behavior--it would merely allow it to execute more quickly.
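A compressed sketch of that idea in C++ (the type and member names are the answer's hypotheticals, not an existing API):

#include <cstddef>
#include <vector>

struct NCopiesOfArray {
    std::size_t n;                 // number of repetitions
    std::vector<int> pattern;      // the "template" array to repeat
    std::size_t length() const { return n * pattern.size(); }
};

struct BunchOfZeroes {
    std::size_t n;                 // conceptually n zeroes; none stored

    // The specialized comparison reachable via double dispatch: scan the
    // small pattern once instead of touching every logical element.
    bool equals(const NCopiesOfArray& other) const {
        if (other.length() != n) return false;
        for (int x : other.pattern)
            if (x != 0) return false;
        return true;
    }
};

int main() {
    BunchOfZeroes zeros{1000000};
    NCopiesOfArray copies{500000, {0, 0}};
    return zeros.equals(copies) ? 0 : 1;   // equal: two pattern checks, not a million
}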

What makes Scala's operator overloading "good", but C++'s "bad"? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
Operator overloading in C++ is considered by many to be A Bad Thing(tm), and a mistake not to be repeated in newer languages. Certainly, it was one feature specifically dropped when designing Java.
Now that I've started reading up on Scala, I find that it has what looks very much like operator overloading (although technically it doesn't have operator overloading because it doesn't have operators, only functions). However, it wouldn't seem to be qualitatively different to the operator overloading in C++, where as I recall operators are defined as special functions.
So my question is what makes the idea of defining "+" in Scala a better idea than it was in C++?
C++ inherits true blue operators from C. By that I mean that the "+" in 6 + 4 is very special. You can't, for instance, get a pointer to that + function.
Scala on the other hand doesn't have operators in that way. It just has great flexibility in defining method names plus a bit of built in precedence for non-word symbols. So technically Scala doesn't have operator overloading.
Whatever you want to call it, operator overloading isn't inherently bad, even in C++. The problem is when bad programmers abuse it. But frankly, I'm of the opinion that taking away programmers' ability to abuse operator overloading doesn't put a drop in the bucket of fixing all the things that programmers can abuse. The real answer is mentoring. http://james-iry.blogspot.com/2009/03/operator-overloading-ad-absurdum.html
None-the-less, there are differences between C++'s operator overloading and Scala's flexible method naming which, IMHO, make Scala both less abusable and more abusable.
In C++ the only way to get infix notation is using operators. Otherwise you must use object.message(argument) or pointer->message(argument) or function(argument1, argument2). So if you want a certain DSLish style to your code then there's pressure to use operators.
In Scala you can get infix notation with any message send. "object message argument" is perfectly ok, which means you don't need to use non-word symbols just to get infix notation.
C++ operator overloading is limited to essentially the C operators. Combined with the limitation that only operators may be used infix, that puts pressure on people to try to map a wide range of unrelated concepts onto relatively few symbols like "+" and ">>".
Scala allows a huge range of valid non-word symbols as method names. For instance, I've got an embedded Prolog-ish DSL where you can write
female('jane)! // jane is female
parent('jane,'john)! // jane is john's parent
parent('jane, 'wendy)! // jane is wendy's parent
mother('Mother, 'Child) :- parent('Mother, 'Child) & female('Mother) // a mother of a child is the child's parent and is female
mother('X, 'john)? // find john's mother
mother('jane, 'X)? // finds all of jane's children
The :-, !, ?, and & symbols are defined as ordinary methods. In C++ only & would be valid so an attempt to map this DSL into C++ would require some symbols that already evoke very different concepts.
Of course, this also opens up Scala to another kind of abuse. In Scala you can name a method $!&^% if you want to.
For other languages that, like Scala, are flexible in the use of non-word function and method names see Smalltalk where, like Scala, every "operator" is just another method and Haskell which allows the programmer to define precedence and fixity of flexibly named functions.
Operator overloading in C++ is considered by many to be A Bad Thing(tm)
Only by the ignorant. It is absolutely required in a language like C++, and it is noticeable that other languages that started off taking a "purist" view, have added it once their designers found out how necessary it is.
Operator overloading was never universally thought to be a bad idea in C++ - just the abuse of operator overloading was thought to be a bad idea. One doesn't really need operator overloading in a language, since operators can be simulated with more verbose function calls anyway. Avoiding operator overloading in Java made the implementation and specification of Java a little simpler, and it forced programmers to not abuse operators. There has been some debate in the Java community about introducing operator overloading.
The advantages and disadvantages of operator overloading in Scala are the same as in C++ - you can write more natural code if you use operator overloading appropriately - and more cryptic, obfuscated code if you don't.
FYI: operators are not defined as special functions in C++; they behave just like any other function - although there are some differences in name lookup, in whether they need to be member functions, and in the fact that they can be called in two ways: 1) operator syntax, and 2) operator-function-id syntax.
This article - "The Positive Legacy of C++ and Java" - answers your question directly.
"C++ has both stack allocation and heap allocation and you must overload your operators to handle all situations and not cause memory leaks. Difficult indeed. Java, however, has a single storage allocation mechanism and a garbage collector, which makes operator overloading trivial" ...
Java mistakenly (according to the author) omitted operator overloading because it was complicated in C++, but forgot why (or didn't realize that it didn't apply to Java).
Thankfully, higher level languages like Scala give developers options, while still running on the same JVM.
Operator overloading is not something that you really "need" very often, but when using Java, if you hit a point where you genuinely need it, it'll make you want to rip your fingernails out just so you have an excuse to stop typing.
That code which you've just found overflows a long? Yup, you're going to have to retype the whole lot to make it work with BigInteger. There is nothing more frustrating than having to reinvent the wheel just to change the type of a variable.
There is nothing wrong with operator overloading. In fact, there's something wrong with not having operator overloading for numeric types. (Take a look at some Java code that uses BigInteger and BigDecimal.)
C++ has a tradition of abusing the feature, though. An often-cited example is that the bitshift operators are overloaded to do I/O.
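For instance, a minimal sketch of that often-cited repurposing (Point is a hypothetical type):

#include <iostream>

struct Point { int x, y; };

// operator<< on a stream means "insert", which has nothing to do with
// shifting bits; whether that is elegant or abuse is exactly the debate.
std::ostream& operator<<(std::ostream& os, Point p) {
    return os << '(' << p.x << ", " << p.y << ')';
}

int main() {
    std::cout << Point{3, 4} << '\n';   // prints (3, 4)
}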
In general it is not a bad thing.
New languages such as C# also have operator overloading.
It is the abuse of operator overloading that is a bad thing.
But there are also problems with operator overloading as defined in C++. Because overloaded operators are just syntactic sugar for function calls, they behave just like functions. On the other hand, the built-in operators do not always behave like functions. These inconsistencies can cause problems.
Off the top of my head: the operators || and &&. The built-in versions of these are short-circuit operators; this is not true for the overloaded versions, and that has caused some problems (see the sketch below).
Another: + - * / all return the same type that they operate on (after operand promotion), whereas the overloaded versions can return anything. This is where the abuse sets in: if your operators start to return some arbitrary type the user was not expecting, things go downhill.
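A small sketch of the && inconsistency, using a hypothetical Flag type: the built-in && short-circuits, but an overloaded && is an ordinary function call, so both operands are always evaluated before it runs.

#include <iostream>

struct Flag { bool b; };

bool operator&&(Flag lhs, Flag rhs) { return lhs.b && rhs.b; }

Flag noisy(const char* name, bool b) {
    std::cout << name << " evaluated\n";
    return Flag{b};
}

int main() {
    // Both calls print, even though the left operand alone decides the result.
    bool r = noisy("lhs", false) && noisy("rhs", true);
    std::cout << std::boolalpha << r << '\n';   // false
}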
Guy Steele argued that operator overloading should be in Java as well, in his keynote speech "Growing a language" - there's a video and a transcription of it, and it's really an amazing speech. You will wonder what he is talking about for the first couple of pages, but if you keep on reading, you will see the point and achieve enlightenment. And the very fact that he could do such a speech at all is also amazing.
At the same time, this talk inspired a lot of fundamental research, probably including Scala - it's one of those papers that everybody should read to work in the field.
Back to the point, his examples are mostly about numeric classes (like BigInteger, and some weirder stuff), but that's not essential.
It is true, though, that misuse of operator overloading can lead to terrible results, and that even proper uses can complicate matters, if you try to read code without studying a bit the libraries it uses. But is that a good idea? OTOH, shouldn't such libraries try to include an operator cheat sheet for their operators?
I believe EVERY answer missed this. In C++ you can overload operators all you want, but you can't affect the precedence with which they're evaluated. Scala doesn't have this issue, IIRC.
As for it being a bad idea: besides precedence issues, people come up with really daft meanings for operators, and it rarely aids readability. Scala libraries are especially bad for this: goofy symbols that you must memorize each time, with library maintainers sticking their heads in the sand saying, 'you only need to learn it once'. Great, now I need to learn some 'clever' author's cryptic syntax multiplied by the number of libraries I care to use. It wouldn't be so bad if there existed a convention of ALWAYS supplying a literate version of the operators.
The only thing known wrong in C++ is the lack of the ability to overload []= as a separate operator. This could be hard to implement in a C++ compiler, for reasons that are probably not obvious, but it would be plenty worth it (a common workaround is sketched below).
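That common workaround, sketched with hypothetical names: operator[] returns a proxy object whose operator= intercepts the write, approximating a separate []= operator.

#include <cstddef>
#include <iostream>

class Table {
    int data[10] = {};

    struct Proxy {
        Table& t;
        std::size_t i;
        Proxy& operator=(int v) {                   // runs on t[i] = v
            std::cout << "write " << v << " at " << i << '\n';
            t.data[i] = v;
            return *this;
        }
        operator int() const { return t.data[i]; }  // runs on plain reads
    };

public:
    Proxy operator[](std::size_t i) { return Proxy{*this, i}; }
};

int main() {
    Table t;
    t[3] = 42;                  // goes through Proxy::operator=
    std::cout << t[3] << '\n';  // 42, via the conversion to int
}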
Operator overloading was not a C++ invention - it came from Algol IIRC and even Gosling does not claim it is a bad idea in general.
As the other answers have pointed out, operator overloading itself isn't necessarily bad. What is bad is when it is used in ways that make the resulting code un-obvious. Generally, when using them you need to make them do the least surprising thing (having operator+ do division would cause trouble for a rational class's usage), or as Scott Meyers says:
Clients already know how types like int behave, so you should strive to have your types behave in the same way whenever reasonable... When in doubt, do as the ints do.
(From Effective C++ 3rd Edition item 18)
Now some people have taken operator overloading to the extreme with things like boost::spirit. At this level you have no idea how it is implemented but it makes an interesting syntax to get what you want done. I'm not sure if this is good or bad. It seems nice, but I haven't used it.
I have never seen an article claiming that C++'s operator overloading is bad.
User-definable operators permit an easier higher level of expressivity and usability for users of the language.
However, it wouldn't seem to be qualitatively different to the operator overloading in C++, where as I recall operators are defined as special functions.
AFAIK, there is nothing special about operator functions compared to "normal" member functions. Of course there is only a certain set of operators that you can overload, but that doesn't make them very special.