Why isn't it a compile error if you pass a class object to scanf?

Why is the code below accepted by g++?
#include <cstdio>
#include <string>

int main()
{
    std::string str;
    scanf("%s", str);
}
What sense does it make to pass a class object to scanf()? Does it get converted to anything that could be useful to another function with variadic arguments?

scanf comes from C. In C, if you wanted a function to take a variable number of arguments (as scanf needs), the only solution was a variadic function. Variadic functions are by design not type safe: you can pass absolutely any type, and a varargs function will happily accept it. That is a limitation of the C language. It doesn't mean that any type is valid, though; if a type other than what is actually expected is passed, we are in the wonderful land of undefined behavior.
That being said, scanf is a standard function and what it can accept is known, so most compilers will do extra checks (not required by the standard) if you enable the right flags. See Neil's answer for that.
In C++ (since C++11) we have parameter packs, which are type safe ...ish (oh, concepts cannot get here soon enough).
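To illustrate, here is a minimal sketch (my own invention, not a standard facility) of a scanf-like reader built on parameter packs; read_one and read_all are hypothetical names, and a C++17 fold expression is used for brevity. The point is that each argument keeps its static type, so a std::string argument selects a type-checked overload instead of falling into undefined behavior:

#include <iostream>
#include <string>

// hypothetical helpers: one overload per supported type
void read_one(int& x)         { std::cin >> x; }
void read_one(std::string& s) { std::cin >> s; }

// the parameter pack preserves every argument's type, unlike C varargs
template <typename... Args>
void read_all(Args&... args)
{
    (read_one(args), ...);   // C++17 fold expression
}

int main()
{
    std::string str;
    int n = 0;
    read_all(str, n);        // each argument dispatches to a type-checked overload
    std::cout << str << ' ' << n << '\n';
}

Passing an unsupported type simply fails to compile, which is exactly the diagnosis scanf cannot give.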

Enable some warnings. With -Wextra -Wall -pedantic, you will get:
a.cpp:7:10: warning: format '%s' expects argument of type 'char*', but argument 2 has type 'std::__cxx11::string' {aka 'std::__cxx11::basic_string<char>'} [-Wformat=]
scanf("%s", str);
If you want that to be an error rather than a warning, add -Werror.
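For reference, here is a sketch of what the warning is steering you toward, assuming you want the data in a std::string afterwards: read into a real char array with a bounded width, then copy.

#include <cstdio>
#include <string>

int main()
{
    char buf[256];
    if (std::scanf("%255s", buf) == 1)   // bounded read into an actual char array
    {
        std::string str = buf;           // copy into the C++ string afterwards
        std::printf("read: %s\n", str.c_str());
    }
}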

You have two distinct problems here, not just one:
The passing of a std::string through variadic arguments (which has undefined behaviour), and
The passing of a std::string to a function whose logical semantics expected a char* instead (which has undefined behaviour).
So, no, it doesn't make sense. But it's not a hard error. If you're asking why this has undefined behaviour rather than being ill-formed (and requiring a hard error), I do not know specifically, but the answer is usually that it was deemed insufficiently important to require compilers to go to the trouble it would take to diagnose it.
Also, it would be unusual for a logical precondition violation to be deemed ill-formed (just as a matter of convention and consistency; many such violations could not be detected before runtime), so I'd expect point #2 to have undefined behaviour regardless of what hypothetical changes we made to the language to better reject cases of point #1.
In any case, in the twenty years since standardisation, we've reached a point where the mainstream toolchains do warn on it, and since warnings can be turned into errors, it doesn't much matter in practice.

To answer each of your questions...
The question in the title: "Why isn't it a compile error if you pass a class object to scanf?"
Because the declaration of scanf is int scanf ( const char * format, ... ); which means it will accept any number of arguments after the format string as variadic arguments. The rules for such arguments are:
When a variadic function is called, after lvalue-to-rvalue, array-to-pointer, and function-to-pointer conversions, each argument that is a part of the variable argument list undergoes additional conversions known as default argument promotions:
std::nullptr_t is converted to void*
float arguments are converted to double as in floating-point promotion
bool, char, short, and unscoped enumerations are converted to int or wider integer types as in integer promotion
Only arithmetic, enumeration, pointer, pointer to member, and class type arguments are allowed (except class types with non-trivial copy constructor, non-trivial move constructor, or a non-trivial destructor, which are conditionally-supported with implementation-defined semantics)
Since std::string is a class type with non-trivial copy and move constructors, passing it is at best conditionally-supported with implementation-defined semantics. Interestingly, although this is checkable by the compiler, g++ chooses to accept it rather than reject it as an error.
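A quick sketch of the default argument promotions in action; the printf conversion specifiers match the promoted types, not the declared ones:

#include <cstdio>

int main()
{
    float f = 1.5f;
    char c = 'A';
    short s = 7;
    // through the ellipsis, f is promoted to double, and c and s to int,
    // so %f and %d are the matching specifiers
    std::printf("%f %d %d\n", f, c, s);
}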
The first question in the body: "Why is the code below accepted by g++?"
That is a great question. The other answer, by @LightnessRacesInOrbit, addresses this point very well.
Your second question in the body: "Does it get converted to anything that could be useful to another function with variadic arguments?"
If you run the code, one of the possible results (at run time) is:
.... line 5: 19689 Segmentation fault (core dumped)
so, no, it is not converted into anything, in general, at least not implicitly.
The clarifying question in the comment thread to the question: "I wanted to know "why does the C++ language not disallow this"".
This question is somewhat subjective, touching on why the C++ language designers, and perhaps even the C language designers, did not make the language definition robust enough to prohibit anything other than a string, a memory buffer, or the handful of other sensible things as a non-initial argument to scanf. What we do know is that a compiler can often detect such things (that's what linters do, after all!), but beyond that we can only guess. My guess is that to make scanf truly type safe in the language definition (as opposed to needing a linter), they would have had to redefine scanf to use template arguments of some sort. However, scanf comes from C, so they did not want to change its signature (and that would indeed be wrong, given that C++ wants to remain largely a C superset...).

Related

Is static_cast<int>(myUnsignedLongVar) implementation defined?

Common advice when writing C++ states that implicit conversions should be avoided in favour of explicit casts.
int x = myUnsignedLongVar; // should be avoided
while
int x = static_cast<int>(myUnsignedLongVar); // is preferred
Does a static_cast make this conversion any safer? As far as I'm aware both conversions are implementation defined?
Is the static_cast just a signal to indicate more verbosely that this operation is potentially implementation defined?
And whose implementation? Does the behaviour depend on the implementation of the compiler or of the CPU?
Both examples produce the same code.
But in the second one, everyone can see that there's a cast to int.
In the first, one could assume myUnsignedLongVar is an int.
To make sure nobody misses the cast, guidelines and compilers recommend making it explicit.
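As a sketch of the difference at a call site (values assume a 64-bit long and 32-bit int; the out-of-range result is implementation-defined before C++20 and wraps modulo 2^32 from C++20 on):

#include <iostream>

int main()
{
    unsigned long big = 5000000000UL;        // does not fit in a 32-bit int
    int implicit_x = big;                    // implicit: easy to miss in review
    int explicit_x = static_cast<int>(big);  // explicit: intent visible at the call site
    std::cout << implicit_x << ' ' << explicit_x << '\n';  // same value either way
}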
I believe your example is somewhat too narrow to show the real difference between the kinds of casting.
If you are simply casting from unsigned int to int, or from double to int, it may not show its real value.
The real value comes when you do not want to allow casting to cause bugs in your program, for example when performing comparisons between signed and unsigned types, or when changing pointer or object types. Moreover, C++-style casting, i.e. static_cast, is checked by the compiler.
Another benefit is what you already mentioned: verbosity. It ensures that the author's intent is captured.
Several answers to this question contain a nice summary and comparison between different types of casting.
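For instance, here is the signed/unsigned comparison trap alluded to above, with casts making the intended comparison explicit (assuming the usual platform where long long is wider than unsigned int):

#include <iostream>

int main()
{
    int i = -1;
    unsigned u = 1;
    // i is converted to unsigned, so -1 becomes a huge value: prints 0
    std::cout << (i < u) << '\n';
    // widening both sides first preserves the mathematical comparison: prints 1
    std::cout << (static_cast<long long>(i) < static_cast<long long>(u)) << '\n';
}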

Non-POD object error

So, I've read a lot online for this error, but for some reason, I'm still getting it even after I have tried the suggested things. If anyone could help me understand this and point out what's wrong, that would be awesome.
char * s = strtok(text, ",");
string name = s;
printf("%s", name);
Given your example code, the error you get says something like "cannot pass a non-POD object through an ellipsis". That's because you are trying to pass a non-POD type to a variadic function, one that takes a variable number of arguments; in this case by calling printf, which is declared something like the below:
int printf ( const char * format, ... );
The ellipsis used as the last parameter allows you to pass 0 or more additional arguments to the function as you are doing in your code. The C++ standard does allow you to pass a non-POD type but compilers are not required to support it. This is covered in part by 5.2.2/7 of the standard.
Passing a potentially-evaluated argument of class type having a non-trivial copy constructor, a non-trivial move constructor, or a non-trivial destructor, with no corresponding parameter, is conditionally-supported with implementation-defined semantics.
This means it is up to each compiler maker to decide if they want to support it and how it will behave. Apparently your compiler does not support this and even if it did I wouldn't recommend using it.
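The usual fix is to pass the plain char* that the string owns, or to sidestep the variadic call entirely. A minimal sketch; the "Alice,Bob" buffer is invented sample input, since the question doesn't show how text is declared:

#include <cstdio>
#include <cstring>
#include <string>

int main()
{
    char text[] = "Alice,Bob";          // strtok needs a writable buffer
    char *s = std::strtok(text, ",");
    std::string name = s ? s : "";
    std::printf("%s\n", name.c_str()); // c_str() yields the char* that printf expects
    // or avoid varargs entirely: std::cout << name << '\n';
}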

Widening of integral types?

Imagine you have this function:
void foo(long l) { /* do something with l */}
Now you call it like so at the call site:
foo(65); // here 65 is of type int
Why, (technically) when you specify in the declaration of your function that you are expecting a long and you pass just a number without the L suffix, is it being treated as an int?
Now, I know it is because the C++ Standard says so, however, what is the technical reason that this 65 isn't just promoted to being of type long and so save us the silly error of forgetting L suffix to make it a long explicitly?
I have found this in the C++ Standard:
4.7 Integral conversions [conv.integral]
5 The conversions allowed as integral promotions are excluded from the set of integral conversions.
I can understand a narrowing conversion not being done implicitly, but here the destination type is obviously wider than the source type.
EDIT
This question is based on a question I saw earlier (example), which had funny behavior when you didn't specify the L suffix. But perhaps it's a C thing more than a C++ one?
In C++, objects and values have a type that is independent of how you use them. When you then use them, if a different type is needed, an appropriate conversion is applied.
The problem in the linked question is that varargs is not type-safe. It assumes that you pass in the correct types and that you decode them for what they are. While compiling the caller, the compiler does not know how the callee is going to decode each of the arguments, so it cannot possibly convert them for you. Effectively, varargs is as typesafe as converting to a void* and converting back to a different type: if you get it right, you get what you pushed in; if you get it wrong, you get trash.
Also note that in this particular case, with inlining, the compiler has enough information, but this is just one small case of a general family of errors. Consider the printf family of functions: depending on the contents of the first argument, each of the remaining arguments is processed as a different type. Trying to fix this case at the language level would lead to inconsistencies, where in some cases the compiler does the right thing and in others the wrong one, and it would not be clear to the user when to expect which. It could even do the right thing today and the wrong one tomorrow, if during refactoring the function definition is moved out of reach of inlining, or if the logic of the function changes so that the argument is processed as one type or another based on some earlier parameter.
The function in this instance does receive a long, not an int. The compiler automatically converts the argument to the declared parameter type whenever an implicit conversion exists; here that conversion loses no information. That's one of the main reasons function prototypes are important.
It's essentially the same as with an expression like (1L + 1) - because the integer 1 is not the right type, it's implicitly converted to a long to perform the calculation, and the result is a long.
If you pass 65L in this function call, no type conversion is necessary, but there's no practical difference - 65L is used either way.
Although not C++, this is the relevant part of the C99 standard, which also explains the var args note:
If the expression that denotes the called function has a type that does include a prototype, the arguments are implicitly converted, as if by assignment, to the types of the corresponding parameters, taking the type of each parameter to be the unqualified version of its declared type. The ellipsis notation in a function prototype declarator causes argument type conversion to stop after the last declared parameter. The default argument promotions are performed on trailing arguments.
Why, (technically) when you specify in the declaration of your function that you are expecting a long and you pass just a number without the L suffix, is it being treated as an int?
Because the type of a literal is specified only by the form of the literal, not the context in which it is used. For an integer, that is int unless the value is too large for that type, or a suffix is used to specify another type.
Now, I know it is because the C++ Standard says so, however, what is the technical reason that this 65 isn't just promoted to being of type long and so save us the silly error of forgetting L suffix to make it a long explicitly?
The value should be promoted to long whether or not you specify that type explicitly, since the function is declared to take an argument of type long. If that's not happening, perhaps you could give an example of code that fails, and describe how it fails?
UPDATE: the example you give passes the literal to a function taking untyped ellipsis (...) arguments, not a typed long argument. In that case, the function caller has no idea what type is expected, and only the default argument promotions are applied. Specifically, a value of type int remains an int when passed through ellipsis arguments.
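A sketch contrasting the two situations; the last line is commented out because it is undefined behaviour wherever int and long differ in size:

#include <cstdio>

void takes_long(long l) { std::printf("%ld\n", l); }

int main()
{
    takes_long(65);              // 65 is an int, implicitly converted to long: fine
    std::printf("%ld\n", 65L);   // long argument matches %ld: fine
    // std::printf("%ld\n", 65); // int stays int through the ellipsis: mismatch
}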
The C standard states:
"The type of an integer constant is the first of the corresponding list in which its value can be represented."
In C89, this list is:
int, long int, unsigned long int
C99 extends that list to include:
long long int, unsigned long long int
As such, when your code is compiled, the literal 65 fits in an int, and so its type is accordingly int. The int is then converted to long when the function is called.
If, for instance, sizeof(int) == 2, and your literal is something like 64000, the type of the value will be a long (assuming sizeof(long) > sizeof(int)).
The suffixes are used to overwrite the default behavior and force the specified literal value to be of a certain type. This can be particularly useful when the integer promotion would be expensive (e.g. as part of an equation in a tight loop).
We have to have a standard meaning for types because for lower-level applications the type REALLY matters, especially for integral types. Low-level operators (such as bit-shift, add, etc.) rely on the type of the input to determine where overflow occurs: (65 << 2) as an int is 260 (0x104), but stored into a single 8-bit char it is truncated to 4 (0x004). Sometimes you want this behavior, sometimes you don't. As a programmer, you just need to be able to always know what the compiler is going to do. Thus the design decision was made to have the programmer explicitly declare the integral types of their constants, with "undecorated" meaning the most commonly used type, int.
The compiler does automatically "cast" your constant expressions at compile time, such that the effective value passed to the function is long, but up until the cast it is considered an int for this reason.
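A sketch of that truncation, using std::uint8_t to stand in for an 8-bit char:

#include <cstdio>
#include <cstdint>

int main()
{
    int wide = 65 << 2;                                        // 260 (0x104)
    std::uint8_t narrow = static_cast<std::uint8_t>(65 << 2);  // low byte only: 4
    std::printf("%d %d\n", wide, narrow);  // narrow is promoted back to int here
}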

What's the -complete- list of kinds of automatic type conversions a C++ compiler will do for a function argument?

Given a C++ function f(X x) where x is a variable of type X, and a variable y of type Y, what are all the automatic/implicit conversions the C++ compiler will perform on y so that the statement "f(y);" is legal code (no errors, no warnings)?
For example:
Pass Derived& to function taking Base& - ok
Pass Base& to function taking Derived& - not ok without a cast
Pass int to function taking long - ok, creates a temporary long
Pass int& to function taking long& - NOT ok, taking reference to temporary
Note how the built-in types have some quirks compared to classes: a Derived can be passed to function taking a Base (although it gets sliced), and an int can be passed to function taking a long, but you cannot pass an int& to a function taking a long&!!
What's the complete list of cases that are always "ok" (don't need to use any cast to do it)?
What it's for: I have a C++ script-binding library that lets you bind your C++ code, and it will call C++ functions at runtime based on script expressions. Since expressions are evaluated at runtime, all the legal combinations of source types and function argument types that might be used in an expression have to be anticipated ahead of time and precompiled into the library. If I miss a legal combination, some reasonable runtime expressions won't work; if I accidentally generate a combination that isn't legal C++, my library just won't compile.
Edit (narrowing the question):
Thanks, all of your answers are actually pretty helpful. I knew the answer was complicated, but it sounds like I've only seen the tip of the iceberg.
Let me rephrase the question a little, then, to limit its scope:
I will let the user specify a list of "BaseClasses" and a list of "UserDefinedConversions". For BaseClasses, I'll generate everything, including reference and pointer conversions. But which cases (const/reference/pointer) can I safely generate from the UserDefinedConversions list? (The user will give bare types; I will decorate them with *, &, const, etc. in the template.)
The C++ Standard gives the answer to your question in 13.3.3.1 Implicit conversion sequences, but it is too large to post here. I recommend you read at least that part of the Standard.
Hope this link will help you.
Unfortunately the answer to your question is hugely complex, occupying at least 9 pages in the ISO C++ standard (specifically: ~6 pages in "3 Standard Conversions" and ~3 pages in "13.3.3.1 Implicit Conversion Sequences").
Brief summary: A conversion that does not require a cast is called an "implicit conversion sequence". C++ has "standard conversions", which are conversions between fundamental types (such as char being promoted to int) and things such as array-to-pointer decay; there can be several of these in a row, hence the term "sequences". C++ also permits user-defined conversions, which are defined by conversion functions and converting constructors. The important thing to note is that an implicit conversion sequence can have at most one user-defined conversion, with optionally a sequence of standard conversions on either side -- C++ will never "chain" more than one user-defined conversion together without a cast.
(If anyone would like to flesh this post out with the full details, please go ahead... But for me, that would just be too exhausting, sorry :-/)
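A sketch of that last rule: one user-defined conversion is found implicitly, but two are never chained. The types A, B, C are invented for illustration:

struct A {};
struct B { operator A() const { return A{}; } };  // user-defined conversion B -> A
struct C { operator B() const { return B{}; } };  // user-defined conversion C -> B

void f(A) {}

int main()
{
    B b;
    f(b);                  // ok: one user-defined conversion (B -> A)
    C c;
    // f(c);               // error: would need two user-defined conversions (C -> B -> A)
    f(static_cast<B>(c));  // ok: the cast supplies one of them explicitly
}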
Note how the built-in types have some quirks compared to classes: a Derived can be passed to a function taking a Base (although it gets sliced), and an int can be passed to a function taking a long, but you cannot pass an int& to a function taking a long&!!
That's not a quirk of built-in vs. class types. It's a quirk of inheritance.
If you had classes A and B, and B had a conversion to A (either because A has a constructor taking B, or because B has a conversion operator to A), then they'd behave just like int and long in this respect - conversion can occur where a function takes a value, but not where it takes a non-const reference. In both cases the problem is that there is no object to which the necessary non-const reference can be taken: a long& can't refer to an int, and an A& can't refer to a B, and no non-const reference can refer to a temporary.
The reason the base/derived example doesn't encounter this problem is that a non-const Base reference can refer to a Derived object. The fact that the types are user-defined is a necessary but not a sufficient condition for the reference to be legal. Convertible user-defined classes with no inheritance relationship behave just like built-ins.
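A sketch of the distinction; the types and function names are invented for illustration:

struct Base {};
struct Derived : Base {};

void by_value(long) {}
void by_ref(long&) {}
void base_ref(Base&) {}

int main()
{
    int i = 0;
    by_value(i);   // ok: the conversion creates a temporary long
    // by_ref(i);  // error: a long& cannot bind to an int, nor to the temporary
    Derived d;
    base_ref(d);   // ok: no conversion at all; the reference really refers to d
}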
This comment is way too long for comments, so I've used an answer. It doesn't actually answer your question, though, other than to distinguish between:
"Conversions" where a reference to a derived class is passed to a function taking a reference to a base class.
Conversions where a user-defined or built-in conversion actually creates an object, such as from int to long.

Last named parameter not function or array?

This question is about vararg functions, and their last named parameter, the one just before the ellipsis:
void f(Type paramN, ...) {
    va_list ap;
    va_start(ap, paramN);
    va_end(ap);
}
I was reading in the C Standard, and found the following restriction for the va_start macro:
The parameter parmN is the identifier of the rightmost parameter in the variable parameter list in the function definition (the one just before the , ...). If the parameter parmN is declared with the register storage class, with a function or array type, or with a type that is not compatible with the type that results after application of the default argument promotions, the behavior is undefined.
I wonder why the behavior is undefined for the following code
void f(int paramN[], ...) {
    va_list ap;
    va_start(ap, paramN);
    va_end(ap);
}
and not undefined for the following
void f(int *paramN, ...) {
    va_list ap;
    va_start(ap, paramN);
    va_end(ap);
}
The macros are intended to be implementable by pure C code. But pure C code cannot find out whether or not paramN was declared as an array or as a pointer. In both cases, the type of the parameter is adjusted to be a pointer. The same is true for function type parameters.
I wonder: what is the rationale for this restriction? Do some compilers have problems implementing this when these parameter adjustments are in place internally? (The same undefined behavior is stated for C++, so my question is about C++ as well.)
The restrictions against register parameters and function parameters probably exist for reasons something like these:
you are not allowed to take the address of a variable with the register storage class.
function pointers are sometimes quite different from pointers to objects. For example, they might be larger than pointers to objects (you can't reliably convert a function pointer to an object pointer and back again), so adding some fixed number to the address of a function pointer might not get you to the next parameter. If va_start() and/or va_arg() were implemented by adding some fixed amount to the address of paramN, and function pointers were larger than object pointers, the calculation would end up with the wrong address for the object va_arg() returns. This might not seem a great way to implement these macros, but there might be platforms that have (or even need) this type of implementation.
I can't think of what problem there could be with allowing array parameters, but P.J. Plauger says this in his book "The Standard C Library":
Some of the restrictions imposed on the macros defined in <stdarg.h> seem unnecessarily severe. For some implementations, they are. Each was introduced, however, to meet the needs of at least one serious C implementation.
And I imagine that there are few people who know more about the ins and outs of the C library than Plauger. I hope someone can answer this specific question with an actual example; I think it would be an interesting bit of trivia.
New info:
The "Rationale for International Standard - Programming Languages - C" says this about va_start():
The parmN argument to va_start was intended to be an aid to implementors writing the definition of a conforming va_start macro entirely in C, even using pre-C89 compilers (for example, by taking the address of the parameter). The restrictions on the declaration of the parmN parameter follow from the intent to allow this kind of implementation, as applying the & operator to a parameter name might not produce the intended result if the parameter's declaration did not meet these restrictions.
Not that that helps me with the restriction on array parameters.
It's not undefined. Keep in mind that when a parameter is declared as int paramN[], the actual parameter type still decays to int* immediately (which is visible in C++, for example, if you apply typeid to paramN).
I must admit that I'm not sure what this bit in the spec is even for, considering that you cannot have parameters of function or array types in the first place (since they will pointer-decay).
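A sketch demonstrating the adjustment; the mangled names printed are implementation-specific, but both functions print the same one:

#include <iostream>
#include <typeinfo>

void f(int paramN[], ...)   // the array declaration is adjusted to int*
{
    std::cout << typeid(paramN).name() << '\n';
}

void g(int *paramN, ...)
{
    std::cout << typeid(paramN).name() << '\n';
}

int main()
{
    int arr[3] = {1, 2, 3};
    f(arr);   // prints the same type name as g, e.g. "Pi" under the Itanium ABI
    g(arr);
}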
I found another relevant quote, from Dinkumware.
The last parameter must not have register storage class, and it must have a type that is not changed by the translator. It cannot have:
* an array type
* a function type
* type float
* any integer type that changes when promoted
* a reference type [C++ only]
So apparently, the problem is precisely that the parameter gets passed in a way different from how it is declared. Note that float and promoted integer types such as short are also ruled out by the standard's own wording, since those types are changed by the default argument promotions.
As a hypothesis, it could be that some compilers have problems doing sizeof correctly on such parameters. E.g. it might be that, for
int f(int x[10])
{
    return sizeof(x);
}
some (buggy) compiler will return 10*sizeof(int), thus breaking the va_start implementation.
I can only guess that the register restriction is there to ease library/compiler implementation -- it eliminates a special case for them to worry about.
But I have no clue about the array/function restriction. If it were in the C++ standard only, I would hazard a guess that there is some obscure template-matching scenario where the distinction between a parameter of type T[] and one of type T* matters, and where handling it correctly would complicate va_start etc. But since this clause appears in the C standard too, that explanation is obviously ruled out.
My conclusion: an oversight in the standards. Possible scenario: some pre-standard C compiler implemented parameters of type T[] and T* differently, and the spokesperson for that compiler on the C standards committee had the above restrictions added to the standard; that compiler later became obsolete, but no-one felt the restrictions were compelling enough to update the standard.
C++11 says:
[n3290: 13.1/3]: [..] Parameter declarations that differ only in a pointer * versus an array [] are equivalent. That is, the array declaration is adjusted to become a pointer declaration. [..]
and C99 too:
[C99: 6.7.5.3/7]: A declaration of a parameter as ‘‘array of type’’ shall be adjusted to ‘‘qualified pointer to type’’, where the type qualifiers (if any) are those specified within the [ and ] of the array type derivation. [..]
And you said:
But pure C code cannot find out whether or not paramN was declared as an array or as a pointer. In both cases, the type of the parameter is adjusted to be a pointer.
Right, so there's no difference between the two pieces of code you showed us. Both have paramN declared as a pointer; there is actually no array type there at all.
So why would there be a difference between the two when it comes to the UB?
The passage you quoted...
The parameter parmN is the identifier of the rightmost parameter in the variable parameter list in the function definition (the one just before the , ...). If the parameter parmN is declared with the register storage class, with a function or array type, or with a type that is not compatible with the type that results after application of the default argument promotions, the behavior is undefined.
...applies to neither, as would be expected.