Is it bad practice to use C features in C++? - c++

For example printf instead of cout, scanf instead of cin, using #define macros, etc?

I wouldn't say bad, as it depends on personal choice. My policy is: when a type-safe alternative is available in C++, use it, as this will reduce errors in the code.

It depends on which features. Using define macros in C++ is strongly frowned upon, and for a good reason. You can almost always replace a use of a define macro with something more maintainable and safe in C++ (templates, inline functions, etc.)
Streams, on the other hand, are rightly judged by some people to be very slow and I've seen a lot of valid and high-quality C++ code using C's FILE* with its host of functions instead.
And another thing: with all due respect to the plethora of stream formatting possibilities, for stuff like simple debug printouts, IMHO you just can't beat the succinctness of printf and its format string.

You should definitely use printf in place of cout. The latter does give you most or all of the formatting control printf offers, but it does so in a stateful way, i.e. the current formatting mode is stored as part of the (global) stream object. This means bad code can leave cout in a state where subsequent output gets misformatted unless you reset all the formatting every time you use it. It also wreaks havoc with threaded usage.
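A minimal sketch of that statefulness: a standard manipulator set on one line silently persists and misformats later, unrelated output unless it is explicitly reset.
#include <iostream>

int main() {
    std::cout << std::hex << 255 << '\n';  // prints "ff" and leaves hex mode on
    std::cout << 255 << '\n';              // still prints "ff", not "255"
    std::cout << std::dec << 255 << '\n';  // state must be reset explicitly: "255"
    return 0;
}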

I would say the only ones that are truly harmful to mix are the pairings between malloc/free and new/delete.
Otherwise it's really a style thing... and while C is largely compatible with C++, why would you want to mix the two languages when C++ has everything you need without falling back?

There are better solutions for most cases, but not all.
For example, people quite often use memcpy. I would almost never do that (except in really low-level code). I always use std::copy, even on pointers.
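As an illustration (names and sizes here are just for the sketch), std::copy works directly on raw pointers and, for trivially copyable types, typically compiles down to the same memmove/memcpy call:
#include <algorithm>

int main() {
    int src[4] = {1, 2, 3, 4};
    int dst[4];
    std::copy(src, src + 4, dst);  // type-checked; no size-in-bytes arithmetic to get wrong
    return 0;
}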
The same goes for the input/output routines. But it's true that sometimes C-style printf is substantially easier to use than cout (especially in logging). If Boost.Format isn't an option then sure, use C.
#define is a different beast entirely. It’s not really a C-only feature, and there are many legitimate uses for it in C++. (But many more that aren’t.)
Of course you’d never use it to define constants (that’s what const is for), nor to declare inline functions (use inline and templates!).
On the other hand, it is often useful to generate debugging assertions and generally as a code generation tool. For example, I’m unit-testing class templates and without extensive use of macros, this would be a real pain in the *ss. Using macros here isn’t nice but it saves literally thousands of lines of code.

For allocations, I would avoid using malloc/free altogether and just stick to new/delete.

Not really. printf() is quite a bit faster than cout, and the C++ iostream library is quite large. It depends on user preference and on the program itself (is it needed? etc.). Also, scanf() is no longer suitable to use; I prefer fgets().

What can or cannot be used depends only on the compiler you target. Since you are programming in C++, in my opinion it is better, to maximize compatibility, to use what C++ provides instead of C functions unless you have no other choice.

Coming from a slightly different angle, I'd say it's bad to use scanf in C, never mind C++. User input is just far too variable to be parsed reliably with scanf.

I'd just post a comment on another reply, but since I can't... C's printf() is better than C++'s iostream for internationalization. Want to translate a string and put the embedded number in a different place? You can't do it with an ostream. printf()'s format specification is a whole little language unto itself, interpreted at runtime.
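A hedged illustration of the point: the format string is just data, so a translated string can reorder the arguments at runtime. Note that the positional %1$/%2$ specifiers are a POSIX extension (supported by glibc), not part of ISO C, and the "translation" here is made up for the sketch.
#include <cstdio>

int main() {
    const char* fmt_original  = "%1$d files were found in %2$s\n";
    const char* fmt_reordered = "In %2$s, %1$d files were found\n";  // what a translation might look like
    std::printf(fmt_original, 3, "src");
    std::printf(fmt_reordered, 3, "src");
    return 0;
}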

Related

Compiler optimizations on map/set in c++

Does the compiler make optimizations on data structures when the input size is small?
unordered_set<int> TmpSet;
TmpSet.insert(1);
TmpSet.insert(2);
TmpSet.insert(3);
...
...
Since the size is small, hashing would not be required; we could simply store this in 3 variables. Do optimizations like this happen? If yes, who is responsible for them?
edit: Replaced set with unordered_set as the former doesn't do hashing.
It's possible in theory to totally replace whole data structures with a different implementation (as long as escape analysis can show that a reference to them couldn't be passed to separately-compiled code).
But in practice what you've written is a call to a constructor, and then three calls to template functions which aren't simple. That's all the compiler sees, not the high-level semantics of a set.
Unless it's going to optimize stuff away, real compilers aren't going to use a different implementation which would have different object representations.
If you want to micro-optimize like this, in C++ you should do it yourself, e.g. with a std::bitset<32> and set bits to indicate set membership.
It's far too complex a problem for a compiler to reliably do a good job. And besides, there could be serious quality-of-implementation issues if the compiler starts inventing different data-structures that have different speed/space tradeoffs, and it guesses wrong and uses one that doesn't suit the use-case well.
Programmers have a reasonable expectation that their compiled code will work somewhat like what they wrote, at least on a large scale -- including calling those standard-header template functions with the library's implementation of them.
Maybe there's some scope for a C++ implementation adapting to the use-case, but that would need much debate and probably a special mechanism to allow it in practice. Perhaps name-recognition of std:: names could be enough, making those into builtins instead of header implementations, but currently the implementation strategy is to write those functions in C++ in .h headers, e.g. in libstdc++ or libc++.
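For the DIY micro-optimization mentioned above, a minimal sketch, assuming the keys are small non-negative integers (here below 32):
#include <bitset>

int main() {
    std::bitset<32> tmp;         // stands in for TmpSet, for keys 0..31
    tmp.set(1);
    tmp.set(2);
    tmp.set(3);
    bool has_two = tmp.test(2);  // membership test: no hashing, no allocation
    (void)has_two;
    return 0;
}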

Why does Arduino use macros for certain 'functions' like min(), max(), constrain()?

Though I am a fan of macros in general, I don't get why the Arduino makers chose to use macros instead of actual functions for some of their arithmetic "functions". To name a few examples:
min()
max()
constrain()
Their website informs one not to call functions from within these "functions" or to use pre/postfix inside the brackets() because they are actually macros.
Considering the Arduino language is actually C++, they could have easily used (inline) functions instead and prevented users from falling into one of the well-known macro pitfalls.
People usually do things for reasons, and so far I have not found these reasons. So my question: why did the Arduino makers choose to use macros instead of functions?
Arduino is built on much older code and libraries such as AVR-libc, where macros were used extensively long before Arduino even existed.
In modern programming, macros are not recommended (versus inline functions) as they do no type checking, are not themselves checked for compile errors, and if not crafted carefully can lead to side effects.
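A hedged sketch of the double-evaluation pitfall that the Arduino documentation warns about. The macro below mirrors the usual textbook definition that Arduino cores have historically shipped; the exact current source may differ.
#define min(a, b) ((a) < (b) ? (a) : (b))

int example() {
    int i = 0;
    int m = min(i++, 10);  // expands to ((i++) < (10) ? (i++) : (10)): i++ runs twice
    return m + i;          // an inline (template) function would evaluate i++ only once
}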

In general, does using C++ templates produce larger executables than doing the same code with macros?

In C, when you'd like to do generic programming, your only language-supported option is macros. They work great and are widely used, but are discouraged if you can get by with inline functions or regular functions instead. (If using gcc, you can also use gcc statement expressions, which avoid the double-evaluation "bug". Example.)
C++, however, has done away with the so-called "evils" of macros by creating templates. I'm still somewhat new to the full-blown gigantic behemoth of a language that is C++ (I assess it must have like 4 or 5x as many features and language constructs as C), and generally have favored macros or gcc statement expressions, but am being pressured more and more to use templates in their place. This begs the question: in general, when doing generic programming in C++, which will produce smaller executables: macros or templates?
If you say, "size doesn't matter, choose safety over size", I'm going to go ahead and stop you right there. For large computers and application programming, this may be true, but for microcontroller programming on an Arduino, ATTiny85 with 8KB Flash space for the program, or other small devices, that's hogwash. Size matters too, so tradeoffs must be made.
Which produces smaller executables for the same code when doing generic programming? Macros or templates? Any additional insight is welcome.
Related:
Do c++ templates make programs slow?
Side note:
Some things can only be done with macros, NOT templates. Take, for example, non-name-mangled stringizing/stringifying and X macros. More on X macros:
Real-world use of X-Macros
https://www.geeksforgeeks.org/x-macros-in-c/
https://en.wikipedia.org/wiki/X_Macro
At this point in history, 2020, this is solely the job of the optimizer. You can achieve better speed with assembly, but the point is that it isn't worth it in either size or speed. With proper C++ programming your code will be fast enough and small enough. Getting faster or smaller by wrecking the readability of the code is not worth the trouble.
That said, macros replace stuff at the preprocessor level, while templates do it at the compile level. You may get faster compilation with macros, but a good compiler will optimize templates at least as well as macros. This means you can end up with the same executable size, or possibly a smaller one, with templates.
The vast majority (99%) of speed or size troubles in an application come from programmer errors, not from the language. Very often I discover that some photo resources are PNG instead of proper JPG in my executable and voilà, I have bloat. Or that I accidentally forgot to use weak_ptr to break a reference cycle, and now two shared pointers keep 100MB of memory alive that will never be freed. It's almost always human error.
... in general, when doing generic programming in C++, which will produce smaller executables: macros or templates?
Measure it. There shouldn't be a significant difference assuming you do a good job writing both versions (see the first point above), and your compiler is decent, and your code is equally sympathetic to both.
If you write something that is much bigger with templates - ask a specific question about that.
Note that the linked question's answer is talking about multiple non-mergeable instantiations. IME function templates are very often inlined, in which case they behave very much like type-safe macros, and there's no reason for the inlining site to be larger if it's otherwise the same code. If you start taking the addresses of function template instantiations, for example, that changes.
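A hedged side-by-side sketch of what that means in practice: when the function template is inlined, the code generated for both call sites is typically identical, but only the template version is type-checked and evaluates its arguments once (the identifiers below are made up for the sketch).
#define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))

template <typename T>
inline T max_tpl(T a, T b) { return a > b ? a : b; }

int use(int x, int y) {
    int m1 = MAX_MACRO(x, y);  // textual expansion
    int m2 = max_tpl(x, y);    // instantiation, usually inlined to the same machine code
    return m1 + m2;
}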
... C++ ... generally have favored macros ... being pressured more and more to use templates in their place.
You need to learn C++ properly. Perhaps you already have, but don't use templates much: that still leaves you badly-placed to write a good comparison.
This begs the question
No, it prompts the question.

Why does istream::operator>> accept char pointers/arrays?

char someArray[n];
std::cin >> someArray; // potential buffer overrun
I've seen code like the above numerous times on the C++ forums I frequent. Is there a good reason for this not to be treated as a compile time error? or at the very least, a warning?
An underlying premise with C (and C++) is that the coder should know what they're doing. Otherwise they'd be coding in BASIC :-)
It's not permitted to be an error since it's allowed per the standard, just like gets and scanf("%s") are allowed in C, despite the fact they're a problem waiting to happen.
The code you've posted is bad and has no place in serious software, but it's fine for "toy" programs or testing things. You just need to be aware of its problems (and it sounds very much like you are aware of them).
If C++ had all been invented in one fell swoop, this overload probably wouldn't exist at all -- if you wanted to read a string, you'd have to read it into a std::string, and that would be the end of it.
Unfortunately, C++ was used for quite a while before std::string was standardized (or invented at all). Both operator>> and istream::getline (not to be mistaken for std::getline) were invented during that time. When they were invented, there was little (or no) real alternative, so they worked with arrays of char.
Today, of course, there are alternatives, and it's best to just avoid these unless you get stuck writing code with some ancient compiler that doesn't support the superior alternatives.
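For reference, a minimal sketch of the superior alternative mentioned above, reading into a std::string that grows as needed:
#include <iostream>
#include <string>

int main() {
    std::string word, line;
    std::cin >> word;              // whitespace-delimited token, cannot overrun
    std::getline(std::cin, line);  // or a whole line, via the free std::getline
    std::cout << word << '\n' << line << '\n';
    return 0;
}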

Is a preprocessor needed for a viable language?

How useful is the C++ preprocessor, really? Even in C#, it still has some functionality, but I've been thinking of ditching its use altogether for a hypothetical future language. I guess that there are some languages like Java that survive without such a thing. Is a language without a preprocessing step competitive and viable? What steps do programs written in languages without a preprocessor take to emulate its functionality, e.g. different code for debug and release builds, and how do these compare with #ifdef DEBUG?
In fact, most languages get along very well without a preprocessor. I'd go as far as to say that the need for a preprocessor in C/C++ is rooted in their lack of several pieces of functionality.
For example:
Most languages don't need header files and include guards, because they have the notion of a "module".
Conditional compilation can be easily obtained through static ifs or an analogous mechanism.
Code repetition can almost always be reduced in more clear ways than what you can achieve with the preprocessor: using templates/generics, a reflection system, etc, etc.
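As a hedged illustration of the "static if" point above, C++17's if constexpr discards the untaken branch at compile time. It is not a full #ifdef replacement (both branches must still parse), but it covers many cases; the kDebug flag below is a hypothetical build-configuration constant.
#include <iostream>

constexpr bool kDebug = true;  // hypothetical flag, e.g. set by the build system

void log_step(int value) {
    if constexpr (kDebug) {
        std::cout << "step value = " << value << '\n';  // compiled out when kDebug is false
    }
}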
So my conclusion is: for most "features" you can get with the preprocessor and metaprogramming, a clearer alternative exists that is safer and more convenient to use.
The D programming language, as a compiled low-level language, is a nice example on "how to provide most features usually done via preprocessor, without actually preprocessing" - includes all I've mentioned, plus string mixins and template mixins, and probably some other clever solutions to problems usually solved with preprocessing in C/C++.
I would say no, macros are not needed for a language to be viable and competitive.
That doesn't mean macros are not needed in some languages.
If you need macros it's probably because there is a deficiency in your language. (Or because you're trying to be compatible with some other deficient language, like C++ is with C. :)). Make the language "good enough" and you will need macros so rarely that the language can do without them.
(Of course, it depends what your language's goals are what "good enough" actually means, and whether or not macros are a good way to achieve certain things or just a band-aid for missing concepts/features.)
Even in a "good enough" language, there may still be the odd time where you wish macros were there. Maybe they would still be worth having. Maybe not. Depends on what they bring to the language and what problems they introduce in terms of complexity (to the compiler/runtime and to the programmer). But that is true of any language feature. You wouldn't design a language with the aim to "have every single feature" so you have to pick & choose based on the trade-offs and benefits.
e.g. Templates are a fantastically powerful feature in C++, and something I find I miss occasionally in C#, but should C# have them? Should every language have the feature? Perhaps not, given the complexity they would bring with them (and the fact you can usually use C++/CLI for that kind of work).
BTW, I'm not saying "any good language doesn't have macros"; I'm just saying a good language doesn't need them. And FWIW, it used to irritate me that Java didn't have them, but that was probably because Java was lacking in certain areas and not because macros are essential.
It is very useful, but it should be used with care.
A few examples where you need it:
Currently there is no standard way to handle #include other than the preprocessor, as it is part of the standard; you also need #define to create include guards. But this is a very C/C++-specific issue that does not exist in other languages.
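A minimal sketch of such an include guard (the macro and struct names are arbitrary):
#ifndef MY_WIDGET_H
#define MY_WIDGET_H

struct Widget { int id; };  // contents included at most once per translation unit

#endif // MY_WIDGET_H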
The preprocessor is very useful for conditional compilation. When you need to configure your system to work with different APIs, different OSes or different toolkits, it is the only way to go (unless you want to create abstract interfaces and then do the conditional compilation at the build-system level).
With the current C++ standard (2003), which lacks variadic templates, it makes life much easier in certain situations. For example, when you need to create a bunch of classes like:
template<typename R>
class function<R()> { ... }
template<typename R,typename T1>
class function<R(T1)> { ... }
template<typename R,typename T1,typename T2>
class function<R(T1,T2)> { ... }
...
It is almost impossible to do this properly without the preprocessor under the current C++ standard. (In C++0x there are variadic templates that make it much easier.)
In fact, great tools like boost::function, boost::signal and boost::bind require quite complicated template handling to make this stuff work with current compilers.
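For comparison, a hedged sketch of the C++0x/C++11 variadic-template version mentioned above, which collapses the whole family of specializations into one (signature simplified):
template <typename Signature>
class function;                  // primary template, left undefined

template <typename R, typename... Args>
class function<R(Args...)> {     // one partial specialization covers every arity
    // ...
};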
Sometimes macros provide very nice constructs that are impossible without the preprocessor, for example:
assert(ptr!=0);
That aborts the program, printing:
Assertion failed in foo.cpp, line 134 "ptr!=0"
And of course it is really useful for unit testing:
TEST(3.14159 <=pi && pi < 3.141599);
That aborts the program, printing:
Test failed in foo.cpp, line 134 "3.14159 <=pi && pi < 3.141599"
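A hedged sketch of how such an assertion macro can be written; the stringizing operator (#cond) together with __FILE__ and __LINE__ is exactly what templates cannot replicate:
#include <cstdio>
#include <cstdlib>

#define MY_ASSERT(cond)                                                      \
    do {                                                                     \
        if (!(cond)) {                                                       \
            std::fprintf(stderr, "Assertion failed in %s, line %d \"%s\"\n", \
                         __FILE__, __LINE__, #cond);                         \
            std::abort();                                                    \
        }                                                                    \
    } while (0)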
Logging. Logging is usually much easier to implement with macros. Why?
You either need to write:
if(should_log(info))
log(info) << "this is the message in file foo.cpp, line 10, foo::doit()" << "Value is " << x;
or simpler:
LOG_INFO() << "Value is " << x;
which already includes the file, line number, function name and condition. Very valuable.
In fact, boost::log and Apache logging use things like this.
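A hedged sketch of such a LOG_INFO macro (the names are illustrative, not taken from any particular library); it captures file, line and function automatically and skips the streaming work entirely when logging is disabled:
#include <iostream>

enum Level { Info };
inline bool should_log(Level) { return true; }  // stub for the runtime check

#define LOG_INFO()                                 \
    if (!should_log(Info)) {                       \
    } else                                         \
        std::clog << __FILE__ << ':' << __LINE__   \
                  << ' ' << __func__ << ": "

void doit(int x) {
    LOG_INFO() << "Value is " << x << '\n';
}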
Yes... the preprocessor is sometimes evil, but in many cases it is extremely useful, so use it smartly and with care and it is fine.
However, if you use it to implement:
Macros instead of inline functions.
Unreadable and unclear macros to "reduce the code".
Constants.
...you are doing it wrong.
Bottom line
Like every tool it can be abused (and believe me, I have seen very crazy preprocessor abuse in real code), but today it is a very useful thing.
You do not need a preprocessing step to implement conditional compilation. You do not really need macros (one can live without the stringize and token-paste operators, and even these could be done without the PP). #includes are a very special kind of nightmare, modelling references to other code all wrong.
What else is so important?
It would depend upon what you consider 'viable', but
In C++, a lot of the necessity for/desirability of using macros has been obviated by features such as templates, inlining and namespaces. The only reason I find myself using macros in C++ is for integration with C (#ifdef __cplusplus in headers and processing definitions). With other languages, this is unnecessary with tools like JNI or SWIG to generate C headers/libraries.
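A minimal sketch of that C-integration pattern (the declaration is hypothetical): the #ifdef __cplusplus guard lets a single header be consumed by both C and C++ compilers.
#ifdef __cplusplus
extern "C" {
#endif

int c_visible_function(int x);  /* shared declaration, callable from both C and C++ */

#ifdef __cplusplus
}
#endif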
Rather than use macros to control 'debug' and 'nodebug' builds, it would make more sense for the compiler to include a debug compile option to include/enable the necessary features.
Several languages just work without macros; if you're looking for a C-like language to compare with, I suggest D http://www.digitalmars.com/d/2.0/comparison.html