Avoid pointers and #defines when programming on an Arduino? - c++

I was looking over the StyleGuide for the Arduino when I noticed that, in the Commenting your Code section, it recommends avoiding pointers and #defines.
Is there a reason the writer stated this? No explanation is given, and it doesn't make sense to me. Is this something specific to embedded systems?

I don't know the specific reason the author wrote it, and I am not familiar with the style in which the library is written - so I am going to answer in general terms for C++ programs.
I assume the preference is stated because modern C++ typically favors other idioms, many of which were designed to avoid or minimize the issues frequently introduced by the preprocessor and by raw pointers.
Avoid Pointers
Instead of a pointer, it is conventional in C++ to use a reference for an object or a container such as a vector for a collection of objects.
//////// For an object
//// Using a pointer
bool getURL(t_url* const outUrl);
// In use:
bool result(obj.getURL(&outUrl));
//// versus using a reference
bool getURL(t_url& outUrl);
// In use:
bool result(obj.getURL(outUrl));
//////// For a collection
//// Using a pointer
bool apply(const double* const values, const size_t& count);
// In use:
bool result(obj.apply(array, count));
//// versus using a container
bool apply(const std::vector<double>& values);
// In use:
bool result(obj.apply(values));
Even when a pointer is genuinely needed, it is usually wrapped in an ownership class (unique pointer, shared pointer, weak pointer, or the old auto pointer) because there can be a lot of complexity or ambiguity when dealing with raw pointers, particularly in clients' code. It's quite rare that I write C++ programs that take or return raw pointers.
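For the cases where dynamic allocation really is needed, here is a minimal sketch of those ownership classes (assuming C++14 for std::make_unique; the Widget type is made up for illustration):
#include <memory>

struct Widget {
    void draw() const {}
};

void useWidgets() {
    // Sole ownership: the Widget is destroyed automatically when 'owner' goes out of scope.
    std::unique_ptr<Widget> owner = std::make_unique<Widget>();
    owner->draw();

    // Shared ownership: the Widget lives until the last shared_ptr referring to it is gone.
    std::shared_ptr<Widget> shared = std::make_shared<Widget>();

    // Non-owning observer that does not keep the object alive.
    std::weak_ptr<Widget> observer = shared;
    if (auto locked = observer.lock()) {
        locked->draw();
    }
}   // no delete anywhere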
Avoid Defines
Preprocessor/defines are also not generally the preferred approach in C++ - you have inline functions, anonymous namespaces, templates, and enums.
The ubiquitous example of a macro that is problematic for many reasons is #define max(a,b) ((a > b) ? a : b), versus std::max.
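To make the problem concrete, here is a minimal sketch of the classic failure mode (the macro is renamed MAX to avoid clashing with std::max): because a macro expands its arguments textually, an argument with a side effect can be evaluated twice, whereas std::max evaluates each argument exactly once.
#include <algorithm>
#include <iostream>

#define MAX(a,b) ((a > b) ? a : b)

int main() {
    int i = 5;
    int j = 3;

    // Expands to ((i++ > j) ? i++ : j): i is incremented twice because it is the larger value.
    int m1 = MAX(i++, j);

    int k = 5;
    // std::max evaluates each argument exactly once.
    int m2 = std::max(k++, j);

    std::cout << m1 << ' ' << i << '\n';   // prints "6 7"
    std::cout << m2 << ' ' << k << '\n';   // prints "5 6"
}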
Conclusion
If I see a C++ program which uses a considerable amount of either, I find myself wondering in what decade it was written, or whether the author was writing in the "C with some more features" dialect.
Another answerer said the "advice is garbage". I disagree. The Arduino guide simply says 'avoid pointers' and 'avoid #defines'. Of course, there will be times when you need these facilities, but you can write a clearer program when you use the language and library facilities that were intended to replace them (in the common ways they were misused or problematic). To avoid them means to use them sparingly and only when necessary, favoring more modern and idiomatic alternatives.

That advice is garbage -- pointers are an extremely useful and powerful tool, and quite often they're required in order to do something effectively.
#defines, on the other hand, should often be avoided for a number of reasons in favor of inline functions, but again there are many situations where macros are the best solution. It depends on your problem -- be smart, and know when to use them and when not to. Don't blindly avoid using them because some FAQ told you not to.

Related

Pertinence of void pointers

Looking through a colleague's code, I see that some of its handles are stored as void pointers.
// Class header
void* hSomeSdk;
// Class implementation
hSomeSdk = new SomeSDK(...);
((SomeSDK*)hSomeSdk)->DoSomeWork();
Now I know that sometimes handles are void pointers because it may be unknown before runtime what will be the actual type of the handle. Or that it can help when we need to share the pointer without revealing its actual structure. But this does not seem to be the case in my situation: it will always be SomeSDK and it is not shared outside the class where it is created. Also the author of this code is gone from the company.
Are there other reasons why it would make sense to have it be a void pointer?
Since this is a member variable, I'm gonna go out on a limb and say your colleague wanted to minimize dependencies. Including the header for SomeSDK is probably undesirable just to define a pointer. The colleague may have had one of two reasons as far as I can see from the code you show:
They just didn't know they could add a forward declaration like class SomeSDK; to allow defining pointers. Some programmers just aren't aware of it.
They couldn't forward declare it. If SomeSDK is not a class but a type alias (aka typedef), then it's not possible to forward declare it exactly. One can only declare the class it aliases, but that in turn may be an implementation detail that's hard to keep track of. Even the standard library has a similar problem, which is why it provides <iosfwd> to make forward declaring standard stream types easier.
If the code is peppered with casts of this handle, then the design should have been reworked ages ago. Otherwise, if it's in one place (or a few at most) only, I can see why the people maintaining it could live with it peacefully.
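For reference, a minimal sketch of the forward-declaration approach described above (SomeSDK and DoSomeWork come from the question; the stand-in definition at the bottom replaces what #include "SomeSDK.h" would provide in the real project):
// --- MyClass.h: no SomeSDK header needed here ---------------------
class SomeSDK;                 // forward declaration: enough to declare a pointer

class MyClass {
public:
    MyClass();
    ~MyClass();
    void work();
private:
    SomeSDK* sdk_;             // typed pointer instead of void*
};

// --- MyClass.cpp: the only place that needs the full definition ---
class SomeSDK {                // stand-in for the real SDK header
public:
    void DoSomeWork() {}
};

MyClass::MyClass() : sdk_(new SomeSDK()) {}
MyClass::~MyClass() { delete sdk_; }
void MyClass::work() { sdk_->DoSomeWork(); }   // no cast required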
Nope.
If I had to guess, the ex-colleague was unfamiliar with forward declarations and thus didn't know they could still do SomeSDK* in the header without including the entire SomeSDK definition.
Given the constraints you've mentioned, the only thing this pattern achieves is to eliminate some type safety, make the code harder to read/maintain, and generate a Stack Overflow question.
void* was popular and often necessary back in C. It is convenient in the sense that it can be cast to and from anything; for instance, if you restrict yourself to static_cast, converting a double* to a char* requires an intermediate cast through void* (reinterpret_cast does it in one step).
The problem with void* is that it is too flexible: it does not convey the writer's intent, which makes it very unsafe, especially in big projects.
In object-oriented design it is popular to create abstract interface classes (all members are virtual and not implemented), make pointers to such classes, and then instantiate various possible implementations depending on the usage.
However, nowadays it is more often recommended to work with templates (a main advantage of C++ over other languages), as they are much faster and enable more compile-time optimization than the OO design allowed. Unfortunately, working with templates is still a huge hassle: they have more complicated syntax, and it is difficult to convey the writer's intentions about the restrictions and requirements of the template parameters (the Concepts TS, which solves this problem decently, will be available in C++20 - currently there is only SFINAE, a horrible stopgap from 20 years ago - while the Reflection TS, which will greatly enhance generic programming in C++, is unlikely to be available even in C++23).
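A minimal sketch of the two approaches mentioned above, using made-up names: a run-time interface dispatched through virtual calls versus a compile-time template.
#include <iostream>

// Run-time polymorphism: an abstract interface used through a reference or pointer.
struct Codec {
    virtual ~Codec() = default;
    virtual void encode(const char* data) = 0;
};

struct ZipCodec : Codec {
    void encode(const char* data) override { std::cout << "zip: " << data << '\n'; }
};

void store(Codec& codec, const char* data) {    // accepts any Codec, chosen at run time
    codec.encode(data);
}

// Compile-time polymorphism: a template, resolved and inlined by the compiler.
template <typename CodecT>
void storeT(CodecT& codec, const char* data) {  // accepts anything with an encode() member
    codec.encode(data);
}

int main() {
    ZipCodec zip;
    store(zip, "hello");    // virtual dispatch
    storeT(zip, "hello");   // static dispatch, no vtable lookup
}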

C++ strings, when to use what?

It's been quite some time now that I've been coding in C++, and I think most who actually code in C++ would agree that one of the trickiest decisions is choosing from an almost dizzying number of available string types. I mostly prefer ATL CString for its ease of use and features, but would like a comparative study of the available options.
I've checked out SO and haven't found any content which assists in choosing the right string. There are websites which cover conversions from one string to another, but that's not what we want here.
Would love to have a comparison based on specialty, performance, portability (Windows, Mac, Linux/Unix, etc.), ease of use/features, multi-language support (Unicode/MBCS), cons (if any), and any other special cases.
I'm listing out the strings that I've encountered so far. I believe, there would be more, so we may edit this later to accommodate other options. Mind you, I've worked mostly on Windows, so the list reflects the same:
char*
std::string
STL's basic_string
ATL's CString
MFC's CString
BSTR
_bstr_t
CComBstr
Don't mean to put a dampener on your enthusiasm for this, but realistically it's inefficient to mix a lot of string types in one project, so the larger the project gets, the more inevitably it should settle on std::string (which is a typedef for an instantiation of basic_string with type char, not a different entity), given that it's the only Standard value-semantic option. char* is OK mainly for fixed-size strings (e.g. string literals, fixed-size buffers) or interfacing with C.
Why do I say it's inefficient? You end up with needless template instantiations for the variety of string arguments (permutations, even, for multiple arguments). You find yourself calling functions that want to load a result into a string&, then have to call .c_str() on that and construct some other type, doing redundant memory allocation. Even const std::string& requires a string temporary if called with an ASCIIZ char* (e.g. into some other string type's buffer). When you want to write a function that handles whichever type of string a particular caller wants to use, you're pushed towards templates and therefore inline code, longer compile times and recompilation dependencies (there are some ways to mitigate this, but they get complex, and to be convenient or automated they tend to require changes to the various string types - e.g. a casting operator or member function returning some common interface/proxy object).
Projects may need to use non-Standard string types to interact with libraries they want to use, but you want to minimise that and limit the pervasiveness if possible.
The sorry story of C++ string handling is too depressing for me to write an essay on, but just a couple of points:
ATL and MFC CString are the same thing (same code and everything). They were merged years ago.
If you're using either _bstr_t or CComBstr, you probably wouldn't use BSTR except on calls into other people's APIs which take BSTR.
char* - fast, features include those that are in the <cstring> header, error-prone (too low-level)
std::string - this is actually a typedef for std::basic_string<char, char_traits<char> >. A beautiful thing - first of all, it's fast too. Second, you can use all the <algorithm>s because basic_string provides iterators. For wide-character support there is another typedef, wstring, which is std::basic_string<wchar_t, char_traits<wchar_t> >. This (basic_string) is a standard type and therefore absolutely portable. I'd go with this one.
ATL's and MFC's CStrings do not even provide iterators, which makes them an abomination to me: they are a class wrapper around C strings and they are very badly designed, IMHO.
don't know about the rest.
Hope this partial information helps.
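A minimal sketch of the iterator point above: because std::string exposes begin()/end(), it plugs straight into the standard algorithms (assuming C++11 for auto and the lambda).
#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>

int main() {
    std::string s = "Hello, World";

    // Count characters with a standard algorithm.
    auto commas = std::count(s.begin(), s.end(), ',');

    // Transform the string in place: upper-case every character.
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c) { return std::toupper(c); });

    std::cout << s << " (" << commas << " comma)\n";   // HELLO, WORLD (1 comma)
}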
Obviously, only the first three are portable, so they should be preferred in most cases. If you're doing C++, then you should avoid char* in most instances, as raw pointers and arrays are error-prone. Interfacing with low-level C, such as in system calls or drivers, is the exception. std::string should be preferred by default, IMHO, because it meshes so nicely with the rest of the STL.
Again, IMHO, if you need to work with e.g. MFC, you should work with everything as std::string in your business logic, and translate to and from CString when you hit the WinApi functions.
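A minimal sketch of what that boundary translation can look like (Windows-only, assuming the ATL headers; CStringA is used here to sidestep the project's TCHAR/Unicode settings, and callWinApiLayer is a made-up stand-in for code that takes a CString):
#include <atlstr.h>   // ATL/MFC string classes
#include <string>

// Stand-in for a layer that ultimately calls WinAPI/MFC functions taking a CString.
void callWinApiLayer(const CStringA& /*text*/) {}

void businessLogic() {
    std::string message = "hello from std::string land";

    // std::string -> CString at the boundary
    CStringA winText(message.c_str());
    callWinApiLayer(winText);

    // CString -> std::string on the way back
    std::string back(winText.GetString());
}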
2 and 3 are the same. 4 and 5 are the same, too. 7 and 8 are wrappers of 6. So, arguably, the list contains just C's strings, standard C++'s strings, Microsoft's C++ strings, and Microsoft's COM strings. That gives you the answer: in standard C++, use standard C++ strings (std::string).

What are the bad habits of C programmers starting to write C++? [closed]

A discussion recently ended with us laughing at the bad habits of programmers who have been too exposed to one language when they start programming in another. The best example would be a Pascal programmer writing #define begin { and #define end } when starting to write C.
The goal is to catch the bad habits of C programmers when they start using C++.
Tell us about the big don'ts you have encountered. One suggestion per answer, please, to try to achieve a kind of 'best of' list.
For the ones interested in good habits, have a look at the accepted answer to this question.
Using raw pointers and resources instead of RAII objects.
using char* instead of std::string
using arrays instead of std::vector (or other containers)
not using other STL algorithms or libraries like boost where appropriate
abusing the preprocessor where constants, typedefs or templates would have been better
writing SESE-style (single-entry single exit) code
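As an illustration of the first few points above, a minimal sketch contrasting the C habits with their idiomatic C++ counterparts (the function names are made up):
#include <algorithm>
#include <cstdlib>
#include <cstring>
#include <string>
#include <vector>

// C-with-classes style: raw buffers and manual cleanup, easy to leak.
void oldHabit() {
    char* name = (char*)std::malloc(64);
    std::strcpy(name, "Arduino");
    int* values = new int[100];
    // ... any early return or exception here leaks both allocations ...
    delete[] values;
    std::free(name);
}

// Idiomatic C++: RAII containers clean up automatically, even on exceptions.
void newHabit() {
    std::string name = "Arduino";
    std::vector<int> values(100);
    std::sort(values.begin(), values.end());
}   // nothing to free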
Declaring all the variables at the top of a function instead of as close as possible to where they are used.
Not using the STL, especially std::string,
and/or
using std::strings and reverting to old c string functions in tight corners.
Writing class definitions that are 2000 lines of code.
Copying and pasting that class definition into 12 different places.
Using switch statements when a simple virtual method would do.
Failing to allocate memory in constructor and deallocate in destructor.
Virtual methods that take optional arguments.
Writing while loops to manipulate char* strings.
Writing giant macros that are a page in length. (Could have used templates instead.)
Adding using declarations to header files so they can avoid writing names like std::string in type declarations.
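A minimal sketch of the switch-versus-virtual point above (the shape types are made up):
#include <iostream>

// Switch-on-type: every new shape means editing this function.
enum ShapeKind { CircleKind, SquareKind };
struct CShape { ShapeKind kind; };

void drawC(const CShape& s) {
    switch (s.kind) {
        case CircleKind: std::cout << "circle\n"; break;
        case SquareKind: std::cout << "square\n"; break;
    }
}

// Virtual dispatch: new shapes are added without touching existing code.
struct Shape {
    virtual ~Shape() = default;
    virtual void draw() const = 0;
};
struct Circle : Shape { void draw() const override { std::cout << "circle\n"; } };
struct Square : Shape { void draw() const override { std::cout << "square\n"; } };

void draw(const Shape& s) { s.draw(); }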
using pointers instead of references
Very experienced developers not understanding casting or even object oriented programming:
I started helping out on a project and one of the senior guys was having a problem with some code that used to work and now didn't.
(Class names have been changed to protect the innocent, and I can't remember the exact names)
He had some C++ code that was listening for incoming message classes and reading them. The way it had worked in the past was that a Message class was passed in and he would interrogate a variable on it to find out what type of message it was. He would then C-style cast the Message to another specialised class he'd written that inherited from Message. This new class had functions on it that extracted the data the way he wanted it. Now, this had been working fine for him but now was not.
After many hours looking through his code he could not see a problem and I had a look over his shoulder. Immediately I told him that it's not a good idea to C-style cast Message to a derived class which it was not. He disagreed with me and said he'd been doing it for years and if that was wrong then everything he does is wrong because he frequently uses this approach. He was further backed up by a contractor who told me I was wrong. They both argued that this always works and the code hasn't changed so it's not this approach but something else that has broken his code.
I looked a bit further and found the difference. The latest version of the Message class had a virtual function and hadn't previously had any use of virtual. I told the pair of them that there was now a virtual table and functions were being looked up, etc, etc.... and this was causing their problem, etc, etc.... They eventually agreed and I was presented with a comment that I will never forget: "Virtual completely screws up polymorphism and object oriented programming".
I forwarded them a copy of the decorator pattern as an example of how to add functionality to an existing class, but heard nothing back from them. How they fixed it in the end, I have no idea.
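For what it's worth, a minimal sketch of the safer alternative in that situation: once Message is polymorphic, dynamic_cast checks the actual type at run time instead of blindly reinterpreting the pointer (the class names here are placeholders, as in the story).
struct Message {
    virtual ~Message() = default;      // makes the hierarchy polymorphic
};

struct TradeMessage : Message {
    int quantity() const { return 42; }
};

void handle(Message& msg) {
    // C-style cast: compiles even when msg is NOT a TradeMessage -> undefined behaviour.
    // TradeMessage& bad = (TradeMessage&)msg;

    // dynamic_cast: yields a null pointer if the object is not actually a TradeMessage.
    if (TradeMessage* trade = dynamic_cast<TradeMessage*>(&msg)) {
        trade->quantity();
    }
}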
One word: macros. I am not saying macros have no place at all in C++, but former C programmers tend to use them way too much after they switch to C++.
Using C-style casts.
C++ allows you to independently choose whether to allow casts between unrelated types, and whether to allow changes to const and volatile qualifiers, giving considerable improvements to compile-time type safety compared with C. It also offers completely safe casts at the cost of a runtime check.
C-style casts, unchecked conversions between just about any types, allow whole classes of error that could be easily identified by more restrictive casts. Their syntax also makes them very difficult to search for, if you want to audit buggy code for dubious conversions.
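A minimal sketch of those more restrictive casts: each one permits only a narrow class of conversion, and each is trivial to grep for.
#include <cstdint>

struct Base { virtual ~Base() = default; };
struct Derived : Base {};

void casts(Base* b, const int* readOnly, double d) {
    // Checked downcast: yields nullptr if *b is not really a Derived.
    Derived* derived = dynamic_cast<Derived*>(b);

    // Well-defined value conversion; refuses conversions between unrelated pointer types.
    int truncated = static_cast<int>(d);

    // Only removes const/volatile; cannot change the underlying type.
    int* writable = const_cast<int*>(readOnly);

    // The "I really mean it" cast for bit-level reinterpretation.
    std::uintptr_t address = reinterpret_cast<std::uintptr_t>(b);

    (void)derived; (void)truncated; (void)writable; (void)address;
}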
Assuming said programmers have already made the mistake of attempting to learn C++:
Mistakes
Not using STL.
Trying to wrap everything in classes.
Trying to use templates for everything.
Not using Boost. (I know Boost can be a real PITA, and a learning curve, but C++ is just C+ without it. Boost gives C++ some batteries).
Not using smart pointers.
Not using RAII.
Overusing exceptions.
Controversial
Moving to C++. Don't do it.
Try to convert C stdio to iostreams. Iostreams SUX. Don't use it. It's inherently broken. Look here.
Using the following parts of the libstdc++ library:
strings (beyond freeing them for me, go the hell away)
localization (what the hell does this have to do with c++, worse yet, it's awful)
input/output (64 bit file offsets? heard of them?)
Naively believing you can still debug on the command line. Don't use C++ extensively without a code crane (IDE).
Following C++ blogs. C++ blogs carp on about what essentially boils down to metadata and sugar. Beyond a good FAQ, and experience, I've yet to see a useful C++ blog. (Note that's a challenge: I'd love to read a good C++ blog.)
Writing using namespace std because everyone does and then never reflecting on its meaning.
Or knowing what it means but saying "std::cout << "Hello World" << std::endl; looks ugly".
Passing objects with pointers instead of references. Yes, there are still times when you need pointers in C++, but references are safer, so you should use them when you can.
Making everything in a class public. So, data members that should be private aren't.
Not fully understanding the semantics of pointers and references and when to use one or the other. Related to pointers is also the issue of not managing dynamically allocated memory correctly, or failing to use "smarter" constructs for that (e.g. smart pointers).
My favourite is the C programmer who writes a single method with multiple, optional, arguments.
Basically, the function would do different things depending on the values and/or nullability of the arguments.
Not using templates when creating algorithms and data structures. It makes things either too localized or too generic.
I.e. writing
void qsort(MyStruct *begin, size_t length); //too localized
void qsort(void *begin, size_t length,
           size_t rec_size, int (*compare)(void*, void*)); //too generic
instead of
template <class RA_Iter>
void qsort(RA_Iter begin, size_t length);
//uses RA_Iter::value_type::operator< for comparison
Well, bad program design transcends languages (casts, ignoring warnings, unnecessary preprocessor magic, unnecessary bit-twiddling, not using the char classification macros), and the C language itself doesn't create too many "bad habits" (OK, macros, especially from the stone ages), and many of the idioms translate directly. But a few that could be considered:
Using a feature just because it's in C++ and so therefore it must be the right way to do something. Some programs just don't need Inheritance, MI, exceptions, RTTI, templates ( great as they are ... the debugging load is steep ), or Virtual class stuff.
Sticking with some code snippet from C without thinking whether C++ has a better way. (There's a reason you now have class, private, public, const (expanded beyond C89), static class funcs, and references.)
Not being familiar with the C++ I/O library (it's BIG, and you do need to know it), and mixing C++ I/O and C I/O.
He thinks that C++ is just a slightly different language from C. He will continue writing C masked as C++: no advanced use of classes, structs treated as less powerful than classes, namespaces, new headers, templates - none of these new elements are used. He will continue declaring integer vars without int, and he will not provide function prototypes. He will use malloc and free, unsafe pointers, and the preprocessor to define inline functions. This is just a small list ;)
Confused uses of structs vs. classes, overuse of global methods that take object pointers as arguments, and globally-accessible instance pointers, a la:
extern Application* g_pApp;
void RunApplication(Application* app, int flags);
Also (not saying it's totally useless, but still):
const void* buf;
Declaring all the variables at the start of the function even if a variable will only be used a hundred lines later.
Happens especially for local variables declared inside a function.
Solving the problem instead of creating a class-based monstrosity guaranteed to keep you in health insurance and 401K benefits.
Implementing lisp in a single file and doing the design in that.
Writing normal readable functions instead of overloading operators?
Writing in a style which can be understood by the junior programmers which see good practice as "not writing in C++".
Talking to the OS in its own language.
Not leaving well enough alone, and using C instead.

Argument order for mixed const and non-const pass-by-reference

In keeping with the practice of using non-member functions where possible to improve encapsulation, I've written a number of classes that have declarations which look something like:
void auxiliaryFunction(
const Class& c,
std::vector< double >& out);
Its purpose is to do something with c's public member functions and fill a vector with the output.
You might note that its argument order resembles that of a python member function, def auxiliaryFunction(self, out).
However, there are other reasonable ways of choosing the argument order: one would be to say that this function resembles an assignment operation, out = auxiliaryFunction(c). This idiom is used in, for example,
char* strcpy ( char* destination, const char* source );
What if I have a different function that does not resemble a non-essential member function, i.e. one that initializes a new object I've created:
void initializeMyObject(
const double a,
const std::vector<double>& b,
MyObject& out);
So, for consistency's sake, should I use the same ordering (mutable variable last) as I did in auxiliaryFunction?
In general, is it better to choose (non-const , const) over (const, non-const), or only in certain situations? Are there any reasons for picking one, or should I just choose one and stick with it?
(Incidentally, I'm aware of Google style guide's suggestion of using pointers instead of non-const references, but that is tangential to my question.)
The STL algorithms place output (non-const) values last. There you have a precedent for C++ that everyone should be aware of.
I also tend to order arguments from important, to less important. (i.e. size of box goes before box-margin tweak value.)
(Note though: Whatever you do, be consistent! That's infinitely more important than choosing this or that way...)
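A minimal sketch of that precedent: the standard algorithms take their (const) inputs first and the output destination last.
#include <algorithm>
#include <vector>

int main() {
    std::vector<int> input = {1, 2, 3};
    std::vector<int> doubled(input.size());

    // Inputs (first, last) come first, the output iterator comes last.
    std::transform(input.begin(), input.end(), doubled.begin(),
                   [](int x) { return x * 2; });

    // std::copy follows the same input-first, output-last ordering.
    std::vector<int> copy(input.size());
    std::copy(input.begin(), input.end(), copy.begin());
}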
Few points that may be of help:
Yes, I like the idea of following the standard library's argument ordering convention as much as possible. Principle of least surprise. So, good to go. However, remember that the C standard library itself is not very well crafted, particularly if you look at the file-handling API. So beware -- learn from those mistakes.
const on by-value parameters of basic arithmetic types is rarely used; it'd be more of a surprise if you did that.
The STL, even with its deficiencies provide, IMO, a better example.
Finally, note that C++ has another advantage called Return Value Optimization, which is turned on by default in most modern compilers. I'd use that and rewrite your initializeMyObject, or even better, use a class with constructors.
Pass by const reference rather than by value -- it saves a lot of copying overhead (both time and memory penalties).
So, your function signature should be more like this:
MyObject initializeMyObject(
    double a,
    const std::vector<double>& b);
(And I may be tempted to hide the std::vector<double> behind a typedef if possible.)
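A short sketch of how the by-value version reads at the call site; with RVO / copy elision the returned object is constructed directly in obj, so nothing is copied (MyObject's members and the function body here are placeholders):
#include <vector>

struct MyObject {
    double scale;
    std::vector<double> data;
};

MyObject initializeMyObject(double a, const std::vector<double>& b) {
    MyObject result;
    result.scale = a;
    result.data = b;
    return result;        // eligible for (N)RVO: no copy in practice
}

int main() {
    std::vector<double> values = {1.0, 2.0, 3.0};
    MyObject obj = initializeMyObject(2.5, values);   // constructed in place
    (void)obj;
}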
In general, is it better to choose (non-const , const) over (const, non-const), or only in certain situations? Are there any reasons for picking one, or should I just choose one and stick with it?
Use a liberal dose of const whenever you can - for parameters, for functions. You are making a promise to the compiler (to be true to your design) and the compiler will help you with diagnostics every time you digress. Make the most of your language features and compilers. Further, learn about const& (const references) and their potential performance benefits.

Features of C++ that can't be implemented in C?

I have read that C++ is a superset of C and provides a real-time implementation by creating objects. Also, C++ is close to the real world, as it is enriched with object-oriented concepts.
What all concepts are there in C++ that can not be implemented in C?
Some say that we cannot overload functions in C, so how can we have different flavors of printf()?
For example, printf("sachin"); will print sachin and printf("%d, %s", count, name); will print 1, sachin, assuming count is an integer whose value is 1 and name is a character array initialised with "sachin".
Some say data abstraction is achieved in C++, so what about structures?
Some responders here argue that most things that can be produced with C++ code can also be produced in C with enough ambition. This is true to some extent, but some things are inherently impossible to achieve unless you modify the C compiler to deviate from the standard.
Fakeable:
Inheritance (pointer to parent-struct in the child-struct)
Polymorphism (Faking vtable by using a group of function pointers)
Data encapsulation (opaque sub structures with an implementation not exposed in public interface)
Impossible:
Templates (which might as well be called preprocessor step 2)
Function/method overloading by arguments (some try to emulate this with ellipses, but it never really comes close)
RAII (Constructors and destructors are automatically invoked in C++, so your stack resources are safely handled within their scope)
Complex cast operators (in C you can cast almost anything)
Exceptions
Worth checking out:
GLib (a C library) has a rather elaborate OO emulation
I posted a question once about what people miss the most when using C instead of C++.
Clarification on RAII:
This concept is usually misinterpreted when it comes to its most important aspect - implicit resource management, i.e. the concept of guaranteeing (usually at the language level) that resources are handled properly. Some believe that RAII can be achieved by leaving this responsibility to the programmer (e.g. explicit destructor calls at goto labels), which unfortunately doesn't come close to providing the safety principles of RAII as a design concept.
A quote from a wikipedia article which clarifies this aspect of RAII:
"Resources therefore need to be tied to the lifespan of suitable objects. They are acquired during initialization, when there is no chance of them being used before they are available, and released with the destruction of the same objects, which is guaranteed to take place even in case of errors."
How about RAII and templates?
It is less about what features can't be implemented, and more about what features are directly supported in the language, and therefore allow clear and succinct expression of the design.
Sure you can implement, simulate, fake, or emulate most C++ features in C, but the resulting code will likely be less readable, or maintainable. Language support for OOP features allows code based on an Object Oriented Design to be expressed far more easily than the same design in a non-OOP language. If C were your language of choice, then often OOD may not be the best design methodology to use - or at least extensive use of advanced OOD idioms may not be advisable.
Of course if you have no design, then you are likely to end up with a mess in any language! ;)
Well, if you aren't going to implement a C++ compiler using C, there are thousands of things you can do with C++, but not with C. To name just a few:
C++ has classes. Classes have constructors and destructors which run code automatically when the object is initialized or destroyed (by going out of scope or via the delete keyword).
Classes define a hierarchy. You can extend a class. (Inheritance)
C++ supports polymorphism. This means that you can define virtual methods. The compiler will choose which method to call based on the type of the object.
C++ supports Run-Time Type Information (RTTI).
You can use exceptions with C++.
Although you can emulate most of the above in C, you need to rely on conventions and do the work manually, whereas the C++ compiler does the job for you.
There is only one printf() in the C standard library. Other varieties are implemented by changing the name, for instance sprintf(), fprintf() and so on.
Structures can't hide implementation, there is no private data in C. Of course you can hide data by not showing what e.g. pointers point to, as is done for FILE * by the standard library. So there is data abstraction, but not as a direct feature of the struct construct.
Also, you can't overload operators in C, so a + b always means that some kind of addition is taking place. In C++, depending on the type of the objects involved, anything could happen.
Note that this implies (subtly) that + in C actually is overloaded; int + int does not generate the same code as float + int, for instance. But you can't do that kind of overloading yourself; it's something only the compiler does.
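A minimal sketch of the kind of user-defined overload meant above (a tiny 2D vector type, chosen purely for illustration):
#include <iostream>

struct Vec2 {
    double x, y;
};

// In C, a + b on structs is impossible; in C++ the meaning is whatever we define it to be.
Vec2 operator+(const Vec2& a, const Vec2& b) {
    return {a.x + b.x, a.y + b.y};
}

int main() {
    Vec2 a{1.0, 2.0}, b{3.0, 4.0};
    Vec2 c = a + b;                            // calls our operator+
    std::cout << c.x << ", " << c.y << '\n';   // 4, 6
}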
You can implement C++ fully in C... The original C++ compiler from AT&T was in fact a preprocessor called Cfront which just translated C++ code into C and compiled that.
This approach is still used today by Comeau Computing, who produce one of the most standards-compliant C++ compilers there is; it supports all C++ features.
namespace
All the rest is "easily" faked :)
printf uses a variable-length argument list; it is not an overloaded version of the function
C structures do not have constructors and are unable to inherit from other structures; they are simply a convenient way to address grouped variables
C is not an OO language and has none of the features of an OO language
Having said that, you are able to imitate C++ functionality with C code, but with C++ the compiler will do all the work for you at compile time
What all concepts are there in C++ that can not be implemented in C?
This is somewhat of an odd question, because really any concept that can be expressed in C++ can be expressed in C. Even functionality similar to C++ templates can be implemented in C using various horrifying macro tricks and other crimes against humanity.
The real difference comes down to 2 things: what the compiler will agree to enforce, and what syntactic conveniences the language offers.
Regarding compiler enforcement, in C++ the compiler will not allow you to directly access private data members from outside of a class or friends of the class. In C, the compiler won't enforce this; you'll have to rely on API documentation to separate "private" data from "publicly accessible" data.
And regarding syntactic convenience, C++ offers all sorts of conveniences not found in C, such as operator overloading, references, automated object initialization and destruction (in the form of constructors/destructors), exceptions and automated stack-unwinding, built-in support for polymorphism, etc.
So basically, any concept expressed in C++ can be expressed in C; it's simply a matter of how far the compiler will go to help you express a certain concept and how much syntactic convenience the compiler offers. Since C++ is a newer language, it comes with a lot more bells and whistles than you would find in C, thus making the expression of certain concepts easier.
One feature that isn't really OOP-related is default arguments, which can be a real keystroke-saver when used correctly.
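A minimal sketch of a default argument in action (the log function is made up):
#include <iostream>
#include <string>

// Callers that don't care about the level or the prefix can simply omit them.
void log(const std::string& message, int level = 0, const std::string& prefix = "app") {
    std::cout << "[" << prefix << ":" << level << "] " << message << '\n';
}

int main() {
    log("starting");                 // [app:0] starting
    log("low on memory", 2);         // [app:2] low on memory
    log("shutting down", 1, "io");   // [io:1] shutting down
}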
Function overloading
I suppose there are many things, such as namespaces and templates, that could not be implemented in C.
There shouldn't be too many such things, because early C++ compilers produced C source code from C++ source code. Basically you can do everything in Assembler - but you don't WANT to do that.
Quoting Joel, I'd say a powerful "feature" of C++ is operator overloading. That for me means having a language that will drive you insane unless you maintain your own code. For example,
i = j * 5;
… in C you know, at least, that j is being multiplied by five and the results stored in i.
But if you see that same snippet of code in C++, you don't know anything. Nothing. The only way to know what's really happening in C++ is to find out what types i and j are, something which might be declared somewhere altogether else. That's because j might be of a type that has operator* overloaded and it does something terribly witty when you try to multiply it. And i might be of a type that has operator= overloaded, and the types might not be compatible so an automatic type coercion function might end up being called. And the only way to find out is not only to check the type of the variables, but to find the code that implements that type, and God help you if there's inheritance somewhere, because now you have to traipse all the way up the class hierarchy all by yourself trying to find where that code really is, and if there's polymorphism somewhere, you're really in trouble because it's not enough to know what type i and j are declared, you have to know what type they are right now, which might involve inspecting an arbitrary amount of code and you can never really be sure if you've looked everywhere thanks to the halting problem (phew!).
When you see i=j*5 in C++ you are really on your own, bubby, and that, in my mind, reduces the ability to detect possible problems just by looking at code.
But again, this is a feature. (I know I will be modded down, but at the time of writing only a handful of posts talked about downsides of operator overloading)