Why are parameters promoted when it comes to a variadic function? - c++

Why are parameters promoted when it comes to a variadic function (for instance, floats are promoted to double, etc.), and in which order are they promoted?
Variadic arguments - cppreference.com
Default conversions
When a variadic function is called, after lvalue-to-rvalue, array-to-pointer, and function-to-pointer conversions, each argument that is a part of the variable argument list undergoes additional conversions known as default argument promotions:
std::nullptr_t is converted to void*
float arguments are converted to double as in floating-point promotion
bool, char, short, and unscoped enumerations are converted to int or wider integer types as in integer promotion

Why are parameters promoted
Because that is how the language has been specified.
You may be thinking, why has the language been specified that way. I don't know if there is published rationale for this choice, but I suspect that the answer is as simple as: because that is how the C language had been specified.
You may be thinking, why was the C language specified that way. There is a standard document N1256 discussing design rationale of some choices for the C99 standard. It seems to not cover this choice. Besides, C language existed long before its standardisation and C99 wasn't even the first standard version. This behaviour may have existed before the involvement of the committee.
For what it's worth, the same promotion rules also apply to calling functions that haven't been declared (until C99), or to calling a fixed-argument function through a prototype which doesn't declare the parameters:
// this is C language
void fun();

int main(void) {
float f = 42;
fun(f); // argument promotes to double (fun has no prototype)
undeclared(f); // ill-formed since C99;
// argument promotes to double prior to C99
}
The reasons for this may be similar to the reasons for promotion in case of variable parameter lists.

Promotion of arguments for variadic functions makes it much easier to deal with them. Since the function's code doesn't know the actual types of the arguments from the function signature, the caller has to communicate the type through some other means, and promotion reduces the number of options without sacrificing flexibility.
For example, consider the classic example of a variadic function: printf. When you give it a %f argument, it already knows that the argument is double precision, since it would have been promoted. Absent promotion, two different modifiers would have to exist, one for single precision and another for double precision.
Another example is the integral promotions. Currently any integer type no wider than int works with the %d modifier, and while modifiers for the short versions do exist, one is not required to use them and can simplify one's code.
In addition, it provides fewer surprises when using some other variadic functions. For example, the POSIX open function is presented as if it were an overloaded function with either 2 or 3 arguments, the last argument being specified in the man page as having type mode_t. In fact, there are no overloads in C, so there are no two versions of open; there is only one, which is variadic.
Absent promotions, one would have to make sure that when the 3-argument version is used, the last argument is exactly of type mode_t, which would be quite inconvenient and counterintuitive, and failure to do so would likely lead to quite unexpected behavior. Automatic promotions save us from this.

Related

Why isn't it a compile error if you pass a class object to scanf?

Why is the code below accepted by g++?
#include <cstdio>
#include <string>
int main()
{
std::string str;
scanf("%s", str);
}
What sense does it make to pass a class object to scanf()? Does it get converted to anything that could be useful to another function with variadic arguments?
scanf comes from C. In C, if you wanted a variable number of arguments (as scanf needs), the only solution was a variadic function. Variadic functions are by design not type safe, i.e. you can pass absolutely any type and a varargs function will happily accept it. This is a limitation of the C language. That doesn't mean that any type is valid: if a type other than what is actually expected is passed, we are in the wonderful land of Undefined Behavior.
That being said, scanf is a standard function and what it can accept is known, so most compilers will do extra checks (not required by the standard) if you enable the right flags. See Neil's answer for that.
In C++ (since C++11) we have parameter packs, which are type safe ...ish (concepts cannot arrive soon enough).
Enable some warnings. With -Wextra -Wall -pedantic, you will get:
a.cpp:7:10: warning: format '%s' expects argument of type 'char*', but argument 2 has type 'std::__cxx11::string' {aka 'std::__cxx11::basic_string<char>'} [-Wformat=]
scanf("%s", str);
If you want that to be an error rather than a warning, add -Werror.
You have two distinct problems here, not just one:
The passing of a std::string through variadic arguments (which has undefined behaviour), and
The passing of a std::string to a function whose logical semantics expected a char* instead (which has undefined behaviour).
So, no, it doesn't make sense. But it's not a hard error. If you're asking why this has undefined behaviour rather than being ill-formed (and requiring a hard error), I do not know specifically but the answer is usually that it was deemed insufficiently important to require compilers to go to the trouble it would take to diagnose it.
Also, it would be unusual for a logical precondition violation to be deemed ill-formed (just as a matter of convention and consistency; many such violations could not be detected before runtime), so I'd expect point #2 to have undefined behaviour regardless of what hypothetical changes we made to the language to better reject cases of point #1.
Anyway, in the twenty years since standardisation, we've reached a point in technology where the mainstream toolchains do warn on it anyway, and since warnings can be turned into errors, it doesn't really matter.
To answer each of your questions...
The question in the title: "Why isn't it a compile error if you pass a class object to scanf?"
Because the declaration of scanf is int scanf ( const char * format, ... ); which means it will accept any number of arguments after the format string as variadic arguments. The rules for such arguments are:
When a variadic function is called, after lvalue-to-rvalue, array-to-pointer, and function-to-pointer conversions, each argument that is a part of the variable argument list undergoes additional conversions known as default argument promotions:
std::nullptr_t is converted to void*
float arguments are converted to double as in floating-point promotion
bool, char, short, and unscoped enumerations are converted to int or wider integer types as in integer promotion
Only arithmetic, enumeration, pointer, pointer to member, and class type arguments are allowed (except class types with non-trivial copy constructor, non-trivial move constructor, or a non-trivial destructor, which are conditionally-supported with implementation-defined semantics)
Since std::string is a class type with non-trivial copy and move constructors, passing it is only conditionally-supported, with implementation-defined semantics. Interestingly, while this is checkable by a compiler, it is not rejected as an error.
The first question in the body: "Why is the code below accepted by g++?"
That is a great question. The other answer, by @LightnessRacesInOrbit, addresses this point very well.
Your second question in the body: "Does it get converted to anything that could be useful to another function with variadic arguments?"
If you run the code, one of the possible results (at run time) is:
.... line 5: 19689 Segmentation fault (core dumped)
so, no, it is not converted into anything, in general, at least not implicitly.
The clarifying question in the comment thread to the question: "I wanted to know "why does the C++ language not disallow this"".
This question appears to be a subjective one, touching on why the C++ language designers, and perhaps even the C language designers, did not make their language definition robust enough to prohibit passing something other than a string, or memory buffer, or any number of other things, as a non-initial argument to scanf. What we do know is that a compiler can often determine such things (that's what linters do, after all!), but we can only guess, really. My guess is that in order to make scanf super typesafe (in the language definition, as opposed to needing a linter) they would need to redefine scanf to use template arguments of some sort. However, scanf comes from C, so they did not want to change its signature (that would indeed be wrong, given that C++ wants to be a C superset...).

Is it valid to pass non-arithmetic types as arguments to cmath functions?

Given the following user-defined type S with a conversion function to double:
struct S
{
operator double() { return 1.0;}
};
and the following calls to cmath functions using the type S:
#include <cmath>
void test(S s) {
std::sqrt(s);
std::log(s);
std::isgreater(s,1.0);
std::isless(s,1.0);
std::isfinite(s) ;
}
This code compiles with gcc using libstdc++ (see it live) but with clang using libc++ it generates errors for several of the calls (see it live) with the following error for isgreater:
error: no matching function for call to 'isgreater'
std::isgreater(s,1.0);
^~~~~~~~~~~~~~
note: candidate template ignored: disabled by 'enable_if' [with _A1 = S, _A2 = double]
std::is_arithmetic<_A1>::value &&
^
and similar errors for isless and isfinite. So libc++ expects the arguments for those calls to be arithmetic types, which S is not; we can confirm this by going to the source of the libc++ cmath header. The requirement for arithmetic types is not consistent across all the cmath functions in libc++, though.
So the question is, is it valid to pass non-arithmetic types as arguments to cmath functions?
TL;DR
According to the standard it is valid to pass non-arithmetic types as arguments to cmath functions, but defect report 2086 argues the original intent was that cmath functions should be restricted to arithmetic types, and it appears possible that using non-arithmetic arguments will eventually be made ill-formed. So although technically valid, using non-arithmetic types as arguments seems questionable in light of defect report 2086.
Details
The cmath header is covered in the draft standard section 26.8 [c.math], which provides an additional float and long double overload for each function defined in math.h that takes a double argument. Further, paragraph 11 provides for sufficient overloads and says:
Moreover, there shall be additional overloads sufficient to ensure:
If any argument corresponding to a double parameter has type long double, then all arguments corresponding to double parameters are effectively cast to long double.
Otherwise, if any argument corresponding to a double parameter has type double or an integer type, then all arguments corresponding to double parameters are effectively cast to double.
Otherwise, all arguments corresponding to double parameters are effectively cast to float.
This seems valid in C++11
In C++11 section 26.8 [c.math] does not include any restrictions disallowing non-arithmetic arguments to cmath functions. In each case from the question we have an overload available which takes double argument(s) and these should be selected via overload resolution.
Defect report 2086
But for C++14 we have defect report 2086: Overly generic type support for math functions, which argues that the original intent of section 26.8 [c.math] was to limit cmath functions to be valid only for arithmetic types, which would mimic how they worked in C:
My impression is that this rule set is probably more generic as
intended, my assumption is that it is written to mimic the C99/C1x
rule set in 7.25 p2+3 in the "C++" way [...] (note that C constraints
the valid set to types that C++ describes as arithmetic types, but see
below for one important difference) [...]
and says:
My current suggestion to fix these problems would be to constrain the
valid argument types of these functions to arithmetic types.
and reworded section 26.8 paragraph 11 to say (emphasis mine):
Moreover, there shall be additional overloads sufficient to ensure:
If any arithmetic argument corresponding to a double parameter has type long double, then all arithmetic arguments corresponding to double parameters are effectively cast to long double.
Otherwise, if any arithmetic argument corresponding to a double parameter has type double or an integer type, then all arithmetic arguments corresponding to double parameters are effectively cast to double.
Otherwise, all arithmetic arguments corresponding to double parameters have type float.
So this is invalid in C++14?
Well, despite the intent, it looks to still be technically valid, as argued in this comment from the discussion in the libc++ bug report "incorrect implementation of isnan and similar functions":
That may have been the intent, but I don't see any way to read the
standard's wording that way. From the example in comment#0:
std::isnan(A());
There are no arguments of arithmetic type, so none of the bullets in
26.8/11 apply. The overload set contains 'isnan(float)', 'isnan(double)', and 'isnan(long double)', and 'isnan(float)' should
be selected.
So, the rewording by DR 2086 of paragraph 11 does not make it ill-formed to call the float, double and long double overloads available otherwise with non-arithmetic arguments.
Technically valid but questionable to use
So although the C++11 and C++14 standards do not restrict cmath functions to arithmetic arguments, DR 2086 argues the intent of 26.8 paragraph 11 was to restrict cmath functions to take only arithmetic arguments, and apparently intended to close the loophole in C++14, but did not provide strong enough restrictions.
It seems questionable to rely on a feature which could become ill-formed in a future version of the standard. Since we have implementation divergence, any code that relies on passing non-arithmetic arguments to cmath functions is non-portable and will be useful only in limited situations. There is an alternative solution: explicitly cast non-arithmetic types to arithmetic types. This bypasses the whole issue; we no longer have to worry about the code becoming ill-formed, and it is portable:
std::isgreater( static_cast<double>(s) ,1.0)
^^^^^^^^^^^^^^^^^^^^^^
As Potatoswatter points out using unary + is also an option:
std::isgreater( +s ,1.0)
Update
As T.C. points out, in C++11 it can be argued that 26.8 paragraph 11 bullet 3 applies, since the argument is neither long double, double nor an integer type, and therefore arguments of type S should be cast to float first. Note, as indicated by the defect report, gcc never implemented this, and as far as I know neither did clang.

When will default argument promotions happen?

In the C language, compilers perform the default argument promotions when the called function does not have a prototype.
But what about C++? When do the default argument promotions happen?
In C++11 standard 5.2.2/7:
When there is no parameter for a given argument, the argument is
passed in such a way that the receiving function can obtain the value
of the argument by invoking va_arg (18.10). [ Note: This paragraph
does not apply to arguments passed to a function parameter pack.
Function parameter packs are expanded during template instantiation
(14.5.3), thus each such argument has a corresponding parameter when a
function template specialization is actually called. —end note ] The
lvalue-to-rvalue (4.1), array-to-pointer (4.2), and
function-to-pointer (4.3) standard conversions are performed on the
argument expression. An argument that has (possibly cv-qualified) type
std::nullptr_t is converted to type void* (4.10). After these
conversions, if the argument does not have arithmetic, enumeration,
pointer, pointer to member, or class type, the program is ill-formed.
Passing a potentially-evaluated argument of class type (Clause 9)
having a nontrivial copy constructor, a non-trivial move constructor,
or a non-trivial destructor, with no corresponding parameter, is
conditionally-supported with implementation-defined semantics. If the
argument has integral or enumeration type that is subject to the
integral promotions (4.5), or a floating point type that is subject to
the floating point promotion (4.6), the value of the argument is
converted to the promoted type before the call. These promotions are
referred to as the default argument promotions.
This paragraph still does not specify when a default argument promotion will happen. It says a lot without a clear logic; I strove to outline the logic but failed. I am also not familiar with invoking va_arg.
I hope you can help me.
Default promotions will happen before the function is called, in the calling context.
If you're really asking about the circumstances under which default promotions are carried out, that's covered in the excerpt, though it's such a tiny piece that it's easy to miss: "When there is no parameter for a given argument...". In other words, it's essentially identical to the situation in C, with the exception that a C-style function declaration that doesn't specify parameter types simply doesn't exist in C++. Therefore, the only time you can have an argument without a parameter specifying its type is when a function has an explicit ellipsis, such as printf: int printf(char const *format, ...);.
From the very paragraph you quote in your question: "the value of the argument is converted to the promoted type before the call".
You say of C "default argument promotion when the function called does not have a prototype", but remember that scenario doesn't exist in C++: you cannot call a function for which no declaration or definition has been seen.
The mention of "invoking va_arg" means that some of the argument promotions are applied when calling a function that will then access the values using the va_arg functions (see http://linux.die.net/man/3/va_arg). Think of it like this: one function call might pass the value int(3), another int(7777), yet another char(7) - how should the called function know what to expect? It will probably promote all values for that parameter to some largest-supported-integral type such as an int or long, then when va_arg is used within the function it will convert from int or long to whatever integral type the va_arg call specifies. This does mean, for example, that int(7777) value might be passed where only a char is expected and the value may be truncated to 8 bits without warning, but that's generally better than having the program crash because the number of bytes of data passed didn't match the number consumed, or some other weird side effect.

Widening of integral types?

Imagine you have this function:
void foo(long l) { /* do something with l */}
Now you call it like so at the call site:
foo(65); // here 65 is of type int
Why, (technically) when you specify in the declaration of your function that you are expecting a long and you pass just a number without the L suffix, is it being treated as an int?
Now, I know it is because the C++ Standard says so, however, what is the technical reason that this 65 isn't just promoted to being of type long and so save us the silly error of forgetting L suffix to make it a long explicitly?
I have found this in the C++ Standard:
4.7 Integral conversions [conv.integral]
5 The conversions allowed as integral promotions are excluded from the set of integral conversions.
That a narrowing conversion isn't done implicitly, I can understand, but here the destination type is obviously wider than the source type.
EDIT
This question is based on a question I saw earlier, which had funny behavior when the L suffix wasn't specified (example). But perhaps it's a C thing more than a C++ one?
In C++, objects and values have a type that is independent of how you use them. When you use them, if a different type is needed, the value will be converted appropriately.
The problem in the linked question is that varargs is not type-safe. It assumes that you pass in the correct types and that you decode them for what they are. While compiling the caller, the compiler does not know how the callee is going to decode each of the arguments, so it cannot possibly convert them for you. Effectively, varargs is as typesafe as converting to a void* and converting back to a different type: if you get it right you get what you pushed in, and if you get it wrong you get trash.
Also note that in this particular case, with inlining, the compiler has enough information, but this is just one small case of a general family of errors. Consider the printf family of functions: depending on the contents of the first argument, each of the other arguments is processed as a different type. Trying to fix this case at the language level would lead to inconsistencies, where in some cases the compiler does the right thing and in others the wrong one, and it would not be clear to the user when to expect which. It could even do the right thing today and the wrong one tomorrow, if during refactoring the function definition is moved and no longer available for inlining, or if the logic of the function changes and an argument is processed as one type or another based on some previous parameter.
The function in this instance does receive a long, not an int. The compiler automatically converts any argument to the required parameter type if it's possible without losing any information (as here). That's one of the main reasons function prototypes are important.
It's essentially the same as with an expression like (1L + 1) - because the integer 1 is not the right type, it's implicitly converted to a long to perform the calculation, and the result is a long.
If you pass 65L in this function call, no type conversion is necessary, but there's no practical difference - 65L is used either way.
Although not C++, this is the relevant part of the C99 standard, which also explains the var args note:
If the expression that denotes the called function has a type that
does include a prototype, the arguments are implicitly converted, as
if by assignment, to the types of the corresponding parameters, taking
the type of each parameter to be the unqualified version of its
declared type. The ellipsis notation in a function prototype
declarator causes argument type conversion to stop after the last
declared parameter. The default argument promotions are performed on
trailing arguments.
Why, (technically) when you specify in the declaration of your function that you are expecting a long and you pass just a number without the L suffix, is it being treated as an int?
Because the type of a literal is specified only by the form of the literal, not the context in which it is used. For an integer, that is int unless the value is too large for that type, or a suffix is used to specify another type.
Now, I know it is because the C++ Standard says so, however, what is the technical reason that this 65 isn't just promoted to being of type long and so save us the silly error of forgetting L suffix to make it a long explicitly?
The value should be promoted to long whether or not you specify that type explicitly, since the function is declared to take an argument of type long. If that's not happening, perhaps you could give an example of code that fails, and describe how it fails?
UPDATE: the example you give passes the literal to a function taking untyped ellipsis (...) arguments, not a typed long argument. In that case, the function caller has no idea what type is expected, and only the default argument promotions are applied. Specifically, a value of type int remains an int when passed through ellipsis arguments.
The C standard states:
"The type of an integer constant is the first of the corresponding list in which its value can be represented."
In C89, this list is:
int, long int, unsigned long int
C99 extends that list to include:
long long int, unsigned long long int
As such, when your code is compiled, the literal 65 fits in an int, and so its type is accordingly int. The int is then promoted to long when the function is called.
If, for instance, sizeof(int) == 2, and your literal is something like 64000, the type of the value will be a long (assuming sizeof(long) > sizeof(int)).
The suffixes are used to override the default behavior and force the specified literal value to be of a certain type. This can be particularly useful when the integer promotion would be expensive (e.g. as part of an equation in a tight loop).
We have to have a standard meaning for types because for lower level applications, the type REALLY matters, especially for integral types. Low level operators (such as bit shifts, addition, etc.) rely on the type of the input to determine overflow locations: (65 << 2) with integers is 260 (0x104), but with a single char it is 4 (0x04)! Sometimes you want this behavior, sometimes you don't. As a programmer, you just need to be able to always know what the compiler is going to do. Thus the design decision was made to make the programmer explicitly declare the integral types of their constants, with "undecorated" as the most commonly used type, int.
The compiler does automatically "cast" your constant expressions at compile time, such that the effective value passed to the function is a long, but up until that cast it is considered an int, for this reason.

C++ type conversion FAQ

Where I can find an excellently understandable article on C++ type conversion covering all of its types (promotion, implicit/explicit, etc.)?
I've been learning C++ for some time and, for example, the virtual function mechanism seems clearer to me than this topic. My opinion is that this is due to textbook authors complicating things too much (see Stroustrup's book and so on).
(Props to Crazy Eddie for a first answer, but I feel it can be made clearer)
Type Conversion
Why does it happen?
Type conversion can happen for two main reasons. One is because you wrote an explicit expression, such as static_cast<int>(3.5). Another reason is that you used an expression at a place where the compiler needed another type, so it will insert the conversion for you. E.g. 2.5 + 1 will result in an implicit cast from 1 (an integer) to 1.0 (a double).
The explicit forms
There are only a limited number of explicit forms. First off, C++ has 4 named versions: static_cast, dynamic_cast, reinterpret_cast and const_cast. C++ also supports the C-style cast (Type) Expression. Finally, there is a "constructor-style" cast Type(Expression).
The 4 named forms are documented in any good introductory text. The C-style cast expands to a static_cast, const_cast or reinterpret_cast, and the "constructor-style" cast is a shorthand for a static_cast<Type>. However, due to parsing problems, the "constructor-style" cast requires a single identifier for the name of the type; unsigned int(-5) or const float(5) are not legal.
The implicit forms
It's much harder to enumerate all the contexts in which an implicit conversion can happen. Since C++ is a typesafe OO language, there are many situations in which you have an object A in a context where you'd need a type B. Examples are the built-in operators, calling a function, or catching an exception by value.
The conversion sequence
In all cases, implicit and explicit, the compiler will try to find a conversion sequence. A conversion sequence is a series of steps that gets you from type A to type B. The exact conversion sequence chosen by the compiler depends on the type of cast. A dynamic_cast is used to do a checked Base-to-Derived conversion, so the steps are to check whether Derived inherits from Base, and via which intermediate class(es). const_cast can remove both const and volatile. In the case of a static_cast, the possible steps are the most complex. It will do conversions between the built-in arithmetic types; it will convert Base pointers to Derived pointers and vice versa; it will consider class constructors (of the destination type) and class cast operators (of the source type); and it will add const and volatile. Obviously, quite a few of these steps are orthogonal: an arithmetic type is never a pointer or class type. Also, the compiler will use each step at most once.
As we noted earlier, some type conversions are explicit and others are implicit. This matters to static_cast because it uses user-defined functions in the conversion sequence. Some of the conversion steps considered by the compiler can be marked as explicit (in C++03, only constructors can be). The compiler will skip (without error) any explicit conversion function when building implicit conversion sequences. Of course, if there are no alternatives left, the compiler will still give an error.
The arithmetic conversions
Integer types such as char and short can be converted to "greater" types such as int and long, and smaller floating-point types can similarly be converted into greater types. Signed and unsigned integer types can be converted into each other. Integer and floating-point types can be changed into each other.
Base and Derived conversions
Since C++ is an OO language, there are a number of casts where the relation between Base and Derived matters. Here it is very important to understand the difference between actual objects, pointers, and references (especially if you're coming from .Net or Java). First, the actual objects. They have precisely one type, and you can convert them to any base type (ignoring private base classes for the moment). The conversion creates a new object of base type. We call this "slicing"; the derived parts are sliced off.
Another type of conversion exists when you have pointers to objects. You can always convert a Derived* to a Base*, because inside every Derived object there is a Base subobject. C++ will automatically apply the correct offset of Base within Derived to your pointer. This conversion will give you a new pointer, but not a new object. The new pointer will point to the existing sub-object. Therefore, the cast will never slice off the Derived part of your object.
The conversion the other way is trickier. In general, not every Base* will point to Base sub-object inside a Derived object. Base objects may also exist in other places. Therefore, it is possible that the conversion should fail. C++ gives you two options here. Either you tell the compiler that you're certain that you're pointing to a subobject inside a Derived via a static_cast<Derived*>(baseptr), or you ask the compiler to check with dynamic_cast<Derived*>(baseptr). In the latter case, the result will be nullptr if baseptr doesn't actually point to a Derived object.
For references to Base and Derived, the same applies except for dynamic_cast<Derived&>(baseref) : it will throw std::bad_cast instead of returning a null pointer. (There are no such things as null references).
User-defined conversions
There are two ways to define user conversions: via the source type and via the destination type. The first way involves defining a member operator DestinationType() const in the source type. Note that it doesn't have an explicit return type (it's always DestinationType), and that it's const. Conversions should never change the source object. A class may define several types to which it can be converted, simply by adding multiple operators.
The second type of conversion, via the destination type, relies on user-defined constructors. A constructor T::T which can be called with one argument of type U can be used to convert a U object into a T object. It doesn't matter if that constructor has additional default arguments, nor does it matter if the U argument is passed by value or by reference. However, as noted before, if T::T(U) is explicit, then it will not be considered in implicit conversion sequences.
Multiple conversion sequences between two types may exist as a result of user-defined conversions. Since these are essentially function calls (to user-defined operators or constructors), the conversion sequence is chosen via overload resolution of the different function calls.
I don't know of one, so let's see if one can't be made here... hopefully I get it right.
First off, implicit/explicit:
Explicit "conversion" happens everywhere that you do a cast. More specifically, a static_cast. Other casts either fail to do any conversion or cover a different range of topics/conversions. Implicit conversion happens anywhere that conversion is happening without your specific say-so (no casting). Consider it thusly: Using a cast explicitly states your intent.
Promotion:
Promotion happens when you have two or more types of different size interacting in an expression. It is a special case of type "coercion", which I'll go over in a second. Promotion just takes the smaller type and expands it to the larger type. The standard only guarantees minimum ranges and relative ordering, not exact sizes, but generally speaking, char < short < int < long < long long, and float < double < long double.
Coercion:
Coercion happens any time types in an expression do not match. The compiler will "coerce" a lesser type into a greater type. In some cases, such as converting a large integer to a double or a signed value to an unsigned type, information can be lost or values can change. Coercion includes promotion, so similar types of different size are resolved in that manner. If promotion is not enough, then integral types are converted to floating types and signed types are converted to unsigned types. This happens until all components of an expression are of the same type.
These compiler actions only take place regarding raw, numeric types. Coercion and promotion do not happen to user defined classes. Generally speaking, explicit casting makes no real difference unless you are reversing promotion/coercion rules. It will, however, get rid of compiler warnings that coercion often causes.
User defined types can be converted though. This happens during overload resolution. The compiler will find the various entities that resemble a name you are using and then go through a process to resolve which of the entities should be used. The "identity" conversion is preferred above all; this means that f(t) will resolve to f(typeof_t) over anything else (see Function with parameter type that has a copy-constructor with non-const ref chosen? for some confusion that can generate). If the identity conversion doesn't work, the system then goes through a complex hierarchy of conversion attempts that includes (hopefully in the right order) conversion to base type (slicing), user-defined constructors, and user-defined conversion functions. There's some funky language about references which will generally be unimportant to you and that I don't fully understand without looking up anyway.
In the case of user type conversion explicit conversion makes a huge difference. The user that defined a type can declare a constructor as "explicit". This means that this constructor will never be considered in such a process as I described above. In order to call an entity in such a way that would use that constructor you must explicitly do so by casting (note that syntax such as std::string("hello") is not, strictly speaking, a call to the constructor but instead a "function-style" cast).
Because the compiler will silently look through constructors and type conversion overloads during name resolution, it is highly recommended that you declare the former as 'explicit' and avoid creating the latter. This is because any time the compiler silently does something, there's room for bugs. People can't keep in mind every detail about the entire code tree, not even what's currently in scope (especially adding in Koenig lookup, i.e. argument-dependent lookup), so they can easily forget about some detail that causes their code to do something unintentional due to conversions. Requiring explicit language for conversions makes such accidents much more difficult to make.
For integer types, check the book Secure Coding in C and C++ by Seacord, the chapter about integer overflows.
As for implicit type conversions, you will find the books Effective C++ and More Effective C++ to be very, very useful.
In fact, you shouldn't be a C++ developer without reading these.