Is it true that when passing an argument to a function, it is like assigning the value to the parameter? - c++

I am new to C++ and I am writing pseudo code here:
void fn(a) {}
fn(b)
Is it correct to assume that inside the body of fn what happens is effectively this assignment:
`a = b`
I know we can pass a reference/pointer instead of just the value. I get it. But at its core, it still does this assignment of parameter = argument, right?
I would like to know:
if there is any official term for this?
when exactly does this assignment happen and what exactly makes this happen? is it the compiler?

if there is any official term for this?
The official semantics of a function call are discussed in the “Function call” section of the C++ standard. There is no term specifically for the assignment of values to parameters.
C++ 2017 draft N4659 8.2.2 “Function call” [expr.call] 4 says:
When a function is called, each parameter (11.3.5) shall be initialized (11.6, 15.8, 15.1) with its corresponding argument…
when exactly does this assignment happen and what exactly makes this happen? is it the compiler?
It happens when a function is called. The compiler is responsible for generating a program that carries out the semantics of the source code (as defined by the C++ standard).

The C++ standard describes an execution environment, the results of various types of statements, and the results of various types of expressions. A compiler is only required to produce code that, when run, produces results as if it were run on that described execution environment.
In terms of your actual question, that means that a function call in source code does not necessarily translate into any sort of call or jump instruction when run on actual hardware.
For example, given the function:
int mul(int x, int y)
{
return x*y;
}
a compiler might well simply compute such a result in-place and not perform any sort of parameter passing. But whether you can actually count on that behavior is a detail left up to the compiler implementor.
All that being said, on actual hardware and without inlining, a function call's parameters are very much like any other variable initialization (think copy rather than assign). The exact details (such as order of parameter evaluation) are left up to each implementation.
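If you want to see that "copy rather than assign" distinction directly, here is a minimal sketch (the Tracer type is invented purely for illustration and is not part of the question):
#include <iostream>

// Invented tracing type: it reports which special member function runs,
// so we can observe what passing by value actually does.
struct Tracer {
    Tracer() { std::cout << "default constructor\n"; }
    Tracer(const Tracer&) { std::cout << "copy constructor\n"; }
    Tracer& operator=(const Tracer&) {
        std::cout << "copy assignment\n";
        return *this;
    }
};

void fn(Tracer a) {}  // parameter taken by value

int main() {
    Tracer b;   // prints "default constructor"
    fn(b);      // prints "copy constructor": the parameter a is
                // initialized from the argument b, not assigned to
}
Running it shows that a = b never happens as an assignment: the parameter is created (copy-constructed) from the argument, which is exactly the "shall be initialized" wording quoted above.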

Related

Does the implicit conversion of a literal to a class type happen at compile time?

I'm trying to write a class that is closely related to integers, and because of that I included a conversion constructor with the form
constexpr example::example(const int &n);
My question is: if I subsequently define the function
void foo(example n);
and I use it like this
foo(3);
in my code, is the integer literal 3 converted into an instance of example at compile time?
If no, is there a way to obtain this behavior?
If yes, does that still happen if the constructor isn't explicitly declared as constexpr?
The fact that the constructor is constexpr does not force the computation to happen at compile time. It only means that the constructor is eligible to be used within both constant expressions and non-constant expressions.
If, on the other hand, you declare the constructor consteval, then only constant expressions are allowed to call it. This in turn implies that every invocation of the constructor must be checked by the compiler to ensure that it is a constant expression (if it is not, the compiler must diagnose the violation). Since checking that something is a constant expression requires checking whether it contains any undefined behaviour, such checking is as difficult as actually evaluating the expression. Therefore, declaring a constructor (or any other function) consteval effectively guarantees that it will not be evaluated at runtime: the compiler is still allowed to generate code that re-evaluates it at runtime, but there is no reason why it would do so. The downsides of this approach are that, first, it becomes impossible to use the constructor in a non-constant expression, and second, constant expression evaluation is much slower than runtime evaluation, so you have to decide whether the increased compile times are worth it.
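To make the constexpr/consteval contrast concrete, here is a minimal C++20 sketch (these stripped-down classes are stand-ins for the example class in the question, not its actual definition):
// Stand-in classes, shown only to contrast the two keywords.
struct ExampleConstexpr {
    int value;
    constexpr ExampleConstexpr(int n) : value(n) {}
};

struct ExampleConsteval {
    int value;
    consteval ExampleConsteval(int n) : value(n) {}  // constant expressions only
};

int main() {
    int runtime_n = 42;

    ExampleConstexpr a(runtime_n);     // OK: may be evaluated at run time
    constexpr ExampleConstexpr b(3);   // must be a constant expression

    ExampleConsteval c(3);             // OK: 3 is a constant expression
    // ExampleConsteval d(runtime_n);  // error: argument is not a constant expression
}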
If you leave the constructor as constexpr then you can still force it to be called at compile time in particular instances by using a constexpr variable:
constexpr example ex = 3; // evaluated at compile time
foo(ex);
This is a consequence of the fact that a constexpr variable is only allowed to be initialized by a constant expression.
In addition to the answer by Brian Bi, it should be mentioned that compiler optimization may cause the evaluation to happen at compile time in your example.
Look at this compilation without optimization https://godbolt.org/z/EccGosc7n versus the same code compiled with -O3: https://godbolt.org/z/Kz51x4acK.

Why isn't it a compile error if you pass a class object to scanf?

Why is the code below accepted by g++?
#include <cstdio>
#include <string>
int main()
{
std::string str;
scanf("%s", str);
}
What sense does it make to pass a class object to scanf()? Does it get converted to anything that could be useful to another function with variadic arguments?
scanf comes from C. In C, if you wanted a variable number of arguments (as scanf needs), the only solution was a variadic function. Variadic functions are by design not type safe, i.e. you can pass absolutely any type and a varargs function will happily accept it. That is a limitation of the C language. It doesn't mean that any type is valid, though: if a type other than what is actually expected is passed, then we are in the wonderful land of Undefined Behavior.
That being said, scanf is a standard function and what it can accept is known, so most compilers will do extra checks (not required by the standard) if you enable the right flags. See Neil's answer for that.
In C++ (since C++11) we have parameter packs, which are type safe... ish (concepts cannot arrive soon enough).
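As a rough sketch of what that buys you (the read_all helper below is invented for illustration, and the fold expression needs C++17):
#include <iostream>
#include <string>

// Invented type-safe reader: each argument keeps its static type, so
// passing a std::string here is perfectly fine, unlike with scanf.
template <typename... Args>
void read_all(Args&... args) {
    (std::cin >> ... >> args);  // C++17 fold expression over operator>>
}

int main() {
    std::string word;
    int number;
    read_all(word, number);  // no format string to get wrong
    std::cout << word << ' ' << number << '\n';
}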
Enable some warnings. With -Wextra -Wall -pedantic, you will get:
a.cpp:7:10: warning: format '%s' expects argument of type 'char*', but argument 2 has type 'std::__cxx11::string' {aka 'std::__cxx11::basic_string<char>'} [-Wformat=]
scanf("%s", str);
If you want that to be an error rather than a warning, add -Werror.
You have two distinct problems here, not just one:
The passing of a std::string through variadic arguments (which has undefined behaviour), and
The passing of a std::string to a function whose logical semantics expected a char* instead (which has undefined behaviour).
So, no, it doesn't make sense. But it's not a hard error. If you're asking why this has undefined behaviour rather than being ill-formed (and requiring a hard error), I do not know specifically but the answer is usually that it was deemed insufficiently important to require compilers to go to the trouble it would take to diagnose it.
Also, it would be unusual for a logical precondition violation to be deemed ill-formed (just as a matter of convention and consistency; many such violations could not be detected before runtime), so I'd expect point #2 to have undefined behaviour regardless of what hypothetical changes we made to the language to better reject cases of point #1.
Anyway, in the twenty years since standardisation, we've reached a point in technology where the mainstream toolchains do warn on it anyway, and since warnings can be turned into errors, it doesn't really matter.
To answer each of your questions...
The question in the title: "Why isn't it a compile error if you pass a class object to scanf?"
Because the declaration of scanf is int scanf ( const char * format, ... ); which means it will accept any number of arguments after the format string as variadic arguments. The rules for such arguments are:
When a variadic function is called, after lvalue-to-rvalue, array-to-pointer, and function-to-pointer conversions, each argument that is a part of the variable argument list undergoes additional conversions known as default argument promotions:
std::nullptr_t is converted to void*
float arguments are converted to double as in floating-point promotion
bool, char, short, and unscoped enumerations are converted to int or wider integer types as in integer promotion
Only arithmetic, enumeration, pointer, pointer to member, and class type arguments are allowed (except class types with non-trivial copy constructor, non-trivial move constructor, or a non-trivial destructor, which are conditionally-supported with implementation-defined semantics)
Since std::string is a class type with a non-trivial copy constructor and a non-trivial move constructor, passing it this way is only conditionally supported, with implementation-defined semantics. Interestingly, although a compiler could check for this, it is not required to reject the call as an error.
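As a side note, here is a small sketch (not from the original answer) of what the default argument promotions quoted above mean in practice:
#include <cstdio>

int main() {
    float f = 1.5f;
    char c = 'A';
    short s = 7;

    // All three undergo default argument promotions before reaching the
    // variadic part of printf: f becomes double, c and s become int,
    // which is why %f and %d are the matching conversion specifiers.
    std::printf("%f %d %d\n", f, c, s);
}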
The first question in the body: "Why is the code below accepted by g++?"
That is a great question. The other answer by @LightnessRacesInOrbit addresses this point very well.
Your second question in the body: "Does it get converted to anything that could be useful to another function with variadic arguments?"
If you run the code, one of the possible results (at run time) is:
.... line 5: 19689 Segmentation fault (core dumped)
so, no, it is not converted into anything, in general, at least not implicitly.
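For completeness, two ways that do work, shown as a sketch rather than as part of the original answer:
#include <cstdio>
#include <iostream>
#include <string>

int main() {
    // The C++ way: operator>> knows about std::string and grows it as needed.
    std::string str;
    std::cin >> str;

    // If scanf must be used, give it a char buffer and a field width
    // so the input cannot overflow it.
    char buf[64];
    std::scanf("%63s", buf);
}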
The clarifying question in the comment thread to the question: "I wanted to know why the C++ language does not disallow this".
This question is a somewhat subjective one, touching on why the C++ language designers (and perhaps the C language designers before them) did not make the language definition strict enough to require that a non-initial argument to scanf be something sensible, such as a string, a memory buffer, or any number of other things. What we do know is that a compiler can often detect such mistakes (that's what linters do, after all!), but beyond that we can only guess. My guess is that in order to make scanf type safe in the language definition (as opposed to relying on a linter) they would have needed to redefine scanf to use template arguments of some sort. However, scanf comes from C, so they did not want to change its signature (and that would indeed have been wrong, given that C++ wants to remain largely compatible with C).

When calling C/C++ functions within another function, why do they stack? Is there a way to fix it? [duplicate]

If we have three functions (foo, bar, and baz) that are composed like so...
foo(bar(), baz())
Is there any guarantee by the C++ standard that bar will be evaluated before baz?
No, there's no such guarantee. It's unspecified according to the C++ standard.
Bjarne Stroustrup also says it explicitly in "The C++ Programming Language" 3rd edition section 6.2.2, with some reasoning:
Better code can be generated in the absence of restrictions on expression evaluation order
Although technically this refers to an earlier part of the same section which says that the order of evaluation of parts of an expression is also unspecified, i.e.
int x = f(2) + g(3); // unspecified whether f() or g() is called first
From [5.2.2] Function call,
The order of evaluation of arguments is unspecified. All side effects of argument expression evaluations take effect before the function is entered.
Therefore, there is no guarantee that bar() will run before baz(), only that bar() and baz() will be called before foo.
Also note from [5] Expressions that:
except where noted [e.g. special rules for && and ||], the order of evaluation of operands of individual operators and subexpressions of individual expressions, and the order in which side effects take place, is unspecified.
so even if you were asking whether bar() will run before baz() in foo(bar() + baz()), the order is still unspecified.
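A small illustration (the shared counter is invented for the example) of why that matters:
#include <iostream>

int counter = 0;

int bar() { return ++counter; }  // side effect on the shared counter
int baz() { return ++counter; }  // side effect on the shared counter

void foo(int a, int b) { std::cout << a << ' ' << b << '\n'; }

int main() {
    foo(bar(), baz());  // may print "1 2" or "2 1": either argument
                        // may be evaluated first
}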
There's no specified order for bar() and baz() - the only thing the Standard says is that they will both be evaluated before foo() is called. From the C++ Standard, section 5.2.2/8:
The order of evaluation of arguments is unspecified.
C++17 specifies the evaluation order for several operators that was unspecified before C++17. See the question What are the evaluation order guarantees introduced by C++17? But note that your expression
foo(bar(), baz())
still has unspecified evaluation order.
In C++11, the relevant text can be found in 8.3.6 Default arguments/9 (Emphasis mine)
Default arguments are evaluated each time the function is called. The order of evaluation of function arguments is unspecified. Consequently, parameters of a function shall not be used in a default argument, even if they are not evaluated.
The same verbiage is used by the C++14 standard as well, and is found under the same section.
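That last sentence of the quote is easy to see in a two-line sketch (f and g are hypothetical declarations):
// int f(int a, int b = a);  // ill-formed: parameter 'a' used in a default argument
int g(int a, int b = 0);      // fine: the default does not depend on another parameter
Because the arguments could be evaluated in any order, b's default could not reliably see a's value, which is why the standard forbids it outright.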
As others have already pointed out, the standard does not give any guidance on the order of evaluation for this particular scenario. The order of evaluation is then left to the compiler, and the compiler might provide its own guarantee.
It's important to remember that the C++ standard is really a set of instructions to the compiler about how to construct the resulting assembly/machine code. The standard is only one part of the equation; where the standard is ambiguous or behaviour is specifically implementation-defined, you should turn to the compiler and understand how it translates C++ code into actual machine language.
So, if order of evaluation is a requirement, or at least important, and cross-compiler compatibility is not a requirement, investigate how your compiler will ultimately piece this together; your answer could ultimately lie there. Note that the compiler could change its methodology in the future.

C/C++ - evaluation of the arguments in a function call [duplicate]

Possible Duplicate:
order of evaluation of function parameters
Is it safe to use the following construction in C/C++?
f(g(), h());
where g() is expected to be evaluated first, then h().
Do all compilers show the same behavior on all architectures?
NO! There is no guarantee what order these are carried out in. Only that both g() and h() are carried out before f().
See this: http://www.gotw.ca/gotw/056.htm
I think there's an updated C++11 version of that, I'll have a look.
Edit: C++11 version http://herbsutter.com/gotw/_102/
Edit 2: If you really want to know what specific compilers do, try this: http://www.agner.org/optimize/calling_conventions.pdf
Section 7 (page 16) may be relevant, though it's a bit over my head. For instance, the __cdecl calling convention means arguments are passed from right to left (at least stored that way), whereas for __fastcall "The first two DWORD or smaller arguments are passed in ECX and EDX registers; all other arguments are passed right to left." (http://msdn.microsoft.com/en-us/library/6xa169sk%28v=vs.71%29.aspx)
So it does vary for different compilers.
Much later edit: It turns out that for constructors using the initializer-list syntax (curly braces {}), the order of evaluation is guaranteed (even if it is a call to a constructor that does not take a std::initializer_list). See this question.
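A small sketch of that distinction, assuming a type with an ordinary two-argument constructor (the Pair class below is invented for illustration):
#include <iostream>

int g() { std::cout << "g "; return 1; }
int h() { std::cout << "h "; return 2; }

struct Pair {
    Pair(int, int) {}  // ordinary constructor, no std::initializer_list
};

int main() {
    Pair p1(g(), h());  // parentheses: may print "g h " or "h g "
    Pair p2{g(), h()};  // braces: guaranteed to print "g h " (left to right)
}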
See 1.9 Program execution:
Certain other aspects and operations of the abstract machine are described in this International Standard as unspecified (for example, order of evaluation of arguments to a function). Where possible, this International Standard defines a set of allowable behaviors.
and 8.3.6 Default arguments, 9:
[...] Default arguments are evaluated each time the function is called. The order of evaluation of function arguments is unspecified. Consequently, parameters of a function shall not be used in a default argument, even if they are not evaluated. [...]
No, it's not safe - if you need a guaranteed order of evaluation, e.g. because of side effects, then you will need to do something like this:
foo = g();
bar = h();
f(foo, bar);
No, the order of evaluation of arguments with respect to each other is unspecified. The only guarantee that you have is that they will not be executed concurrently with each other.
No.
The standard doesn't define the order of evaluation in that case, and each compiler may do whatever it wants.
I think most of them (and especially gcc) evaluate the rightmost argument first.

Why won't this C++ lambda function compile?

Why does this fail to compile:
int myVar = 0;
myVar ? []()->void{} : []()->void{};
with following error msg:
Error 2 error C2446: ':' : no conversion from 'red_black_core::`anonymous-namespace'::<lambda1>' to 'red_black_core::`anonymous-namespace'::<lambda0>'
While this compiles correctly:
void left()
{}
void right()
{}
int myVar = 0;
myVar ? left() : right();
The type of the ?: operator has to be deduced from its two operands, and the rules for determining that type are quite complex. Two distinct lambdas don't satisfy them because neither can be converted to the other, so when the compiler tries to work out the result type of that ?:, there is no answer.
In your second snippet, however, you actually call the functions, whereas you never call the lambdas. Since both calls return void, both operands of ?: have type void, and the result type of ?: is void.
This
void left()
{}
void right()
{}
int myVar = 0;
myVar ? left() : right();
is equivalent to
int myVar = 0;
myVar ? [](){}() : [](){}();
Note the extra () on the end: here I actually called the lambdas.
What you had originally is equivalent to
compiler_deduced_type var;
if (myVar)
var = [](){};
else
var = [](){};
But no single type exists that can hold both lambdas; the compiler is well within its rights to make the two lambdas different types.
EDIT:
I remembered something. In the latest Standard draft, lambdas with no captures can be implicitly converted into function pointers of the same signature. That is, in the above code, compiler_deduced_type could be void(*)(). However, I know for a fact that MSVC does not include this behaviour, because it was not defined at the time they implemented lambdas. This is likely why GCC allows it and MSVC does not: GCC's lambda support is substantially newer than MSVC's.
Rules for conditional operator in the draft n3225 says at one point
Otherwise, the result is a prvalue. If the second and third operands do not have the same type, and either
has (possibly cv-qualified) class type, overload resolution is used to determine the conversions (if any) to be
applied to the operands (13.3.1.2, 13.6). If the overload resolution fails, the program is ill-formed. Otherwise,
the conversions thus determined are applied, and the converted operands are used in place of the original
operands for the remainder of this section.
Up to that point, every other alternative (such as converting one operand to the other) has failed, so we will now do what that paragraph says. The conversions we will apply are determined by overload resolution, by transforming a ? b : c into operator?(a, b, c) (an imaginary call to a function of that name). If you look at what the candidates for the imaginary operator? are, you find (among others):
For every type T, where T is a pointer, pointer-to-member, or scoped enumeration type, there exist candidate operator functions of the form
T operator?(bool, T, T);
And this includes a candidate for which T is the type void(*)(). This is important, because lambda expressions yield an object of a class that can be converted to such a type. The spec says
The closure type for a lambda-expression with no lambda-capture has a public non-virtual non-explicit const conversion function to pointer to function having the same parameter and return types as the closure type’s function call operator. The value returned by this conversion function shall be the address of a function that, when invoked, has the same effect as invoking the closure type’s function call operator.
The lambda expressions can't be converted to any of the other parameter types listed, which means overload resolution succeeds, finds a single operator?, and converts both lambda expressions to function pointers. The remainder of the conditional operator section then proceeds as usual, now that the two branches of the conditional operator have the same type.
That's also why your first version is OK, and why GCC is right to accept it. However, I don't really understand why you show the second version at all: as others explained, it does something different, and it's not surprising that it works while the other doesn't (on your compiler). Next time, try not to include unnecessary code in the question.
Because every lambda is a unique type. It is basically syntactic sugar for a functor, and two separately implemented functors aren't the same type, even if they contain identical code.
The standard does specify that lambdas can be converted to function pointers if they don't capture anything, but that rule was added after MSVC's lambda support was implemented.
With that rule, however, two lambdas can be converted to the same type, and so I believe your code would be valid with a compliant compiler.
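If the immediate goal is just to make something like the original expression compile on a compiler that supports that conversion, one workaround (a sketch, not from the question) is to perform the conversion to a function pointer explicitly:
int main() {
    int myVar = 0;

    using fn_ptr = void (*)();
    // Give the conditional operator a common type by converting both
    // captureless lambdas to the same function pointer type.
    fn_ptr chosen = myVar ? static_cast<fn_ptr>([]{}) : static_cast<fn_ptr>([]{});
    chosen();  // calls whichever lambda was selected
}
The unary plus trick, myVar ? +[]{} : +[]{}, performs the same conversion more tersely.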
Both snippets compile just fine with GCC 4.5.2.
Maybe your compiler has no (or partial/broken) support for C++0x features such as lambdas?
It doesn't fail to compile. It works just fine. You probably don't have C++0x enabled in your compiler.
Edit:
An error message has now been added to the original question! It seems that you do have C++0x support, but that it is not complete in your compiler. This is not surprising.
The code is still valid C++0x, but I recommend only using C++0x features when you really have to, until it's standardised and there is full support across a range of toolchains. You have a viable C++03 alternative that you gave in your answer, and I suggest using it for the time being.
Possible alternative explanation:
Also note that you probably didn't write what you actually meant to write. []()->void{} is a lambda. []()->void{}() executes the lambda and evaluates to its result. Depending on what you're doing with this result, your problem could be that the result of calling your lambda is void, and you can't do much with void.
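For instance, this variant, a sketch using int-returning lambdas instead of void ones, both compiles and yields a usable value:
int main() {
    int myVar = 0;

    // Here the lambdas are called; both calls yield int, so the
    // conditional expression has a common type and a value.
    int r = myVar ? []{ return 1; }() : []{ return 2; }();
    return r;  // r is 2, because myVar is 0 (false)
}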