Why am I always getting output as 1 when printing a function? - c++

I am wondering why I'm always getting output as 1 when I print this function. Here is the code:
#include <iostream>
using namespace std;
int main() {
int x(int());
cout << x; // 1
}
It always prints out 1. Why? I was expecting it to output 0, since int() should give the default value of 0. So why 1?

int x(int());
is a case of the "most vexing parse": you think it's a declaration of an int (int x) initialized to the default value for ints (int()); instead, the compiler interprets it as the declaration of a function returning an int which takes as its parameter a (pointer to a) function that takes no parameters and returns an int (you can get hairy declarations explained by this site, or gain some more understanding of C type declarations here).
Then, when you do:
cout << x;
Here x decays to a function pointer, but there's no overload of operator<< that takes a function pointer; the simplest implicit conversion that yields a valid overload of operator<< is to bool, and, since a function pointer cannot have a 0 (NULL) value, it evaluates to true, which is printed as 1.
Notice that I'm not entirely sure such code should compile without errors: you are taking the address of a function that is only declared and never defined. It's true that the result cannot be anything other than true, but in principle you should get a linker error (here masked by the optimizer, which removes any reference to x since it isn't actually used).
What you actually wanted is:
int x=int();
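A minimal sketch of the intended program, kept in the question's style:
#include <iostream>
using namespace std;

int main() {
    int x = int();  // copy-initialization from a value-initialized int: x is 0
    cout << x;      // prints 0
}
In C++11 and later, int x{}; is an equivalent spelling that also avoids the vexing parse.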

The function is being converted to bool and is being printed as a bool value. The function is at a non-zero address, and so the conversion produces true.
This is a standard conversion sequence consisting of a function-to-pointer conversion followed by a boolean conversion.
This sequence is chosen because there is no better-matching overload of operator<<.
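A small sketch that makes the two conversion steps explicit (the function f is hypothetical, and defined here so the linker has nothing to complain about):
#include <iostream>
using namespace std;

int f() { return 0; }   // defined, so taking its address is unproblematic

int main() {
    int (*p)() = f;     // function-to-pointer conversion
    bool b = p;         // boolean conversion: a non-null pointer becomes true
    cout << b;          // prints 1, just like cout << f would
}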

Related

How to define a class type conversion to a pointer to function?

I am trying to understand class-type conversions better. I am reading C++ Primer, 5th edition. So I've tried this code:
#include <iostream>
using namespace std;

int add(int x, int y) { return x + y; }
struct Foo
{
//operator(int(*)(int, int))(){return add;} // error
using pfn = int(*)(int, int);
operator pfn(){return add;} // OK
double value = 5.45;
};
int main()
{
cout << (int(*)(int, int))Foo()(5, 7) << endl; // why 1
cout << ((int(*)(int, int))Foo())(5, 7) << endl; // Ok 12
std::cout << "\nDone!\n";
}
So why can't I define a conversion for my class directly using the type int(*)(int, int), while I can with a type alias?
Why do I get the value 1 in the first statement, which is erroneous, and the correct result in the second statement, which uses parentheses?
I get this warning from the first statement: cast to pointer from integer of different size [-Wint-to-pointer-cast] (main.cpp, line 31)
So why can't I define a conversion for my class directly using the type int(*)(int, int), while I can with a type alias?
The grammar for the "operator TYPE" name of a conversion function is much more restricted than a more general declarator or type-id. It doesn't allow parentheses at all, only a type specifier (like a type alias name, unsigned int, a class name, etc.), combinations of the *, &, &&, const and volatile tokens, and [[attributes]]. I can't say exactly why, but complicated declarations like that are tricky to write, read, and parse. Maybe there's a potential ambiguity in some case if more were allowed, or maybe they just didn't want to require compilers to have to figure this one out.
Also, if it were allowed, maybe the form would be operator int (*())(int, int); and not operator (int(*)(int, int))()? Or maybe that doesn't make sense either. See? Tricky.
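A minimal sketch of what the grammar does allow: spelling the target type through an alias (typedef or using), reusing the add function from the question:
#include <iostream>

int add(int x, int y) { return x + y; }

struct Foo
{
    typedef int (*pfn_t)(int, int);   // or equivalently: using pfn_t = int(*)(int, int);
    operator pfn_t() { return add; }  // OK: the conversion-type-id is just a type name
};

int main()
{
    Foo f;
    Foo::pfn_t p = f;              // invokes the conversion function
    std::cout << p(5, 7) << "\n";  // 12
}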
Why do I get the value 1 in the first statement, which is erroneous, and the correct result in the second statement, which uses parentheses?
Function call syntax has higher precedence than a C-style cast. So
(int(*)(int, int))Foo()(5, 7) // (1)
(int(*)(int, int)) (Foo()(5, 7)) // (2), same as (1)
((int(*)(int, int))Foo()) (5, 7) // (3), not the same
Expression (1), equivalently (2), evaluates by first creating a temporary Foo. That is followed by function call syntax; Foo doesn't define an operator(), but C++ also checks whether the object implicitly converts to a pointer or reference to function, and it does, so (5, 7) performs the implicit conversion and calls the resulting pointer to add, giving 12. That int is then cast to the function pointer type, which has implementation-defined results. There is no operator<< declared for function pointers, but there is one for bool, and a function pointer implicitly converts to bool. Presumably the result of the strange cast was not a null pointer value, so the end result is true, and you see the value 1. (If you had done std::cout << std::boolalpha earlier, you would see true, or an appropriate translation, instead.)
One piece of this, besides the operator precedence misunderstanding, is the danger of a C-style cast, which can do many different things, some of them rarely intended. Use static_cast<int(*)(int,int)>(Foo())(5,7) instead, and everything is fine. And if you had accidentally typed static_cast<int(*)(int,int)>(Foo()(5,7)) instead, the compiler would give an error about converting from int to int(*)(int,int), since only reinterpret_cast or a C-style cast may do that.
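A hedged sketch of the corrected program, keeping the question's Foo and add but doing the conversion with static_cast and parenthesizing so the call applies to the function pointer:
#include <iostream>
using namespace std;

int add(int x, int y) { return x + y; }

struct Foo
{
    using pfn = int(*)(int, int);
    operator pfn() { return add; }  // conversion to pointer to function
    double value = 5.45;
};

int main()
{
    // convert the temporary Foo to a function pointer first, then call the result
    cout << static_cast<int(*)(int, int)>(Foo())(5, 7) << endl;  // 12
}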
I get this warning from the first statement: cast to pointer from integer of different size [-Wint-to-pointer-cast] (main.cpp, line 31)
Even though the C-style cast forces the conversion from int to function pointer to be accepted, the compiler is warning that an int doesn't have enough bytes to represent a function pointer. It assumes the int value originally came from casting a function pointer to some numeric type and that this cast is meant to convert it back; but when the value was squeezed into an int, part of the pointer value was lost.

How can a variable in C++ be named a function call?

While going through some allegro tutorials I found an odd call.
int al_init();
The function al_init() initializes allegro so that the allegro functions can be used. What is with the int al_init(); line? If I change this line of the code to exclude int it works the same, but if I take out the line altogether it does not work. What is this line doing? The only thing I can imagine is that it creates an integer and assigns it the return value of the al_init() function, with that likely being -1 for failure and 0 for success etc. But if that is what this is doing, then how can you even check the return value?
In C/C++ there are function declarations and function definitions:
Declarations look like these:
int a_function();
void another_function();
double yet_another_function();
Explanation:
The identifier before the function name (e.g. int, void, double) describes the type of value returned by the function.
If you do not specify one, old C rules default it to int (implicit int; this was removed in C99 and is not valid in standard C++, though some compilers still accept it with a warning). That is why it works when you remove int from int al_init(), but not when you remove the declaration altogether.
void means it's not supposed to return a value (even though technically it can, but that's for rarer cases)
Definitions look like these:
int a_function() {
std::cout << "hello world!";
int x = 1;
int y = 2;
return (x + y);
}
Notice the difference:
Declarations end with ;
Definitions are followed by a block of code enclosed by braces: { and }, but no ;!
In some cases, you do not need to declare a function. If it's at a point in the code where the definition has already been seen, the definition can substitute for the declaration (this leads us to the next point...)
The purpose of a declaration is to tell the compiler that, for example, there is a function named a_function, that it expects no arguments, and that it returns a value of type int.
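A short sketch (with a hypothetical a_function) of how a declaration lets you call a function whose definition only appears later:
#include <iostream>

int a_function();  // declaration: name, parameter list, return type

int main() {
    std::cout << a_function();  // OK: only the declaration is needed here
}

int a_function() {  // definition: supplies the body
    return 3;
}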
Are you sure you aren't looking at the function declaration?
According to C's syntactic rules, a prototype is defined as:
<return-type> function-name (param1, param2, ..);
And a function call is defined as:
function-name(param1, param2, ...);
So it's obviously defining a function prototype and NOT calling the function.
C89 and earlier rules said that if a function's return type was omitted, it was implicitly int. With that rule came another: such an implicit-int prototype was supposed to be defined outside of any executable function code. If the same line appeared inside a function, it would be treated as a function call and not a function prototype definition.
That aside, let's start with the question:
While going through some allegro tutorials I found an odd call. int al_init();
That's incorrect: it is not a call, it's a function prototype (a declaration).
The function al_init() initializes allegro so that the allegro functions can be used. What is with the int al_init(); line? If I change this line of the code to exclude int it works the same, but if I take out the line altogether it does not work.
What does not work? Does it stop compiling? That would be expected, because the compiler would not be able to find the al_init() function. Also, the reason it works without the int is that the return type is then implicitly assumed to be int.
What is this line doing?
It's telling the compiler that the al_init() library function is defined elsewhere, typically in a .lib, .a, or .dll, and that you want to use it in your program.
The only thing I can imagine is that it creates an integer and assigns it the return value of the al_init() function,
That is an incorrect interpretation of the code: it does not create any integer, nor does it assign any value to one.
with that likely being -1 for failure and 0 for success etc. But if that is what this is doing, then how can you even check the return value?
Since you want to know what the function returned, you could do this with an actual call to al_init():
int retval = al_init();
printf("al_init() returned %d",retval);

Runtime behavior with "C++ most vexing parse"

While trying to answer this question I found that without () (which invokes the "C++ most vexing parse") the output with g++ is 1 (can be seen here: http://ideone.com/GPBHy), whereas Visual Studio gives a linker error. I couldn't understand how the output can be 1. Any clues?
As the answers to that question already explain, due to the "Most Vexing Parse" the statement, instead of defining an object named str with the two istream_iterators as its initializers, is parsed as the declaration of a function named str that returns a string.
So a simplified version of the program reduces to this online sample:
#include<iostream>
void doSomething()
{
}
void (*ptr)()=&doSomething;
int main()
{
std::cout << ptr << "\n";
std::cout << doSomething;
return 0;
}
Output:
1
1
Note that there is no overloaded operator<< that takes an std::ostream and a function pointer as arguments; this is because there can be any number of user-defined function types, and of course the standard overloads cannot account for them all.
Given that, the compiler tries to find the best match among the existing overloads, which happens to be bool (a function pointer is implicitly convertible to bool [#1]).
In particular,
basic_ostream& operator<< (bool val);
Since the function pointer points to something and is not null, the value is printed as 1.
[#1]C++03 4.12 Boolean conversions
1 An rvalue of arithmetic, enumeration, pointer, or pointer to member type can be converted to an rvalue of type bool.
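One way to confirm that the bool overload is being selected is to switch the stream to boolalpha; a small sketch based on the sample above:
#include <iostream>

void doSomething()
{
}

int main()
{
    // the function converts to bool; boolalpha makes the stream spell it out
    std::cout << std::boolalpha << doSomething << "\n";  // prints "true" instead of 1
    return 0;
}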

Why is the address of a C++ function always true?

well why would,
#include <iostream>
using namespace std;
int afunction() { return 0; }
int anotherfunction() { return 0; }
int main ()
{
cout << &afunction << endl;
}
give this,
1
Why is every function's address true?
And how then can a function pointer work if all functions share (so it seems) the same address?
The function address isn't "true". There is no overload for an ostream that accepts an arbitrary function pointer. But there is one for a boolean, and function pointers are implicitly convertible to bool. So the compiler converts afunction from whatever its value actually is to true or false. Since you can't have a function at address 0, the value printed is always true, which cout displays as 1.
This illustrates why implicit conversions are usually frowned upon. If the conversion to bool were explicit, you would have had a compile error instead of silently doing the wrong thing.
The function pointer type is not supported by std::ostream out of the box. Your pointers are converted to the only compatible type available, bool, and everything that is not zero is true, thanks to backward compatibility with C.
There's no overload of operator<< for function pointers (except stream manipulators), but there is one for bool, so the function pointer is converted to that type before display.
The addresses aren't equal, but they're both non-null, and hence they both convert to true.
There is no overloaded function: operator<<(ostream&, int(*)()), so your function pointer is converted into the only type that works, bool. Then operator<<(ostream&, bool) is printing the converted value: 1.
You may be able to print the function address like so:
cout << (void*)&afunction << endl;
All addresses in C++ are non-zero, because zero is the NULL pointer and is a reserved value. Any non-zero value is considered true.
There cannot be an overload for function pointers for the iostream << operator, as there are an infinite number of possible function pointer types. So the function pointer gets a conversion applied, in this case to bool. Try:
cout << (void *) afunction << endl;
Which will give you the address in hex - for me the result was:
0x401344
Did you check anotherfunction() as well?
Anyway, C++ pointer addresses, like C pointer addresses, are usually virtual addresses on most platforms and don't correspond directly to physical memory locations, so the value may be very small or unusual.
Also, they will always be true, since 0 is NULL, an invalid pointer value, and any non-zero value converts to true.

Difference between variable-length argument and function overloading

This C++ question seems pretty basic and general, but I still want someone to answer it.
1) What is the difference between a function with variable-length argument and an overloaded function?
2) Will we have problems if we have a function with variable-length argument and another same name function with similar arguments?
2) Do you mean the following?
int mul(int a, int b);
int mul(int n, ...);
Let's assume the first multiplies 2 integers and the second multiplies n integers passed via varargs. A call mul(1, 2) will not be ambiguous, because an argument passed through "the ellipsis" is associated with the highest possible conversion cost, while passing an argument to a parameter of the same type is associated with the lowest possible cost. So this particular call will surely be resolved to the first function :)
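A runnable sketch of that case (the varargs overload is only declared here, since the call never selects it):
#include <iostream>

int mul(int a, int b) { return a * b; }
int mul(int n, ...);   // declared only; never chosen for this call, so no definition needed

int main()
{
    // exact match on both arguments: mul(int, int) wins over the ellipsis overload
    std::cout << mul(1, 2) << "\n";  // prints 2
}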
Notice that overload resolution only compares argument-to-parameter conversions at the same position; the call fails as ambiguous if each function is the winner at some position. For example
int mul(int a, int b);
int mul(double a, ...);
Imagine the first multiplies two integers, and the second multiplies a list of doubles that is terminated by a 0.0. This overload set is flawed and will be ambiguous when called by
mul(3.14, 0.0);
This is because the second function wins for the first argument, but the first function wins for the second argument. It doesn't matter that the conversion cost of the second argument for the second function is higher than the cost of the first argument for the first function. Once such a "cross-winner" situation is detected, the call between those two candidates is ambiguous.
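A sketch of that flawed overload set; uncommenting the call should fail to compile precisely because of the cross-winner ambiguity (the bodies are placeholders, not meaningful implementations):
int mul(int a, int b) { return a * b; }

int mul(double a, ...)   // supposedly multiplies doubles terminated by 0.0
{
    return 0;            // body irrelevant for the overload-resolution point
}

int main()
{
    // mul(3.14, 0.0);   // error: ambiguous, each overload wins at one argument position
    return 0;
}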
1) Well an overloaded function will require a HELL of a lot of different prototypes and implementations. It will also be type safe.
2) Yes this will cause you problems as the compiler will not know which function it needs to call. It may or may not warn about this. If it doesn't you may well end up with hard to find bugs.
An overloaded function can have completely different parameter types, including none, with the correct one being picked depending on the parameter types.
A variable-length argument list requires at least one named parameter to be present. You also need some mechanism to "predict" the type of the next argument (as you have to state it in va_arg()), and it has to be a basic type (i.e., integer, floating point, or pointer). Common techniques here are "format strings" (as in printf(), scanf()) or "tag lists" (every odd element in the argument list being an enum telling the type of the following even element, with a zero enum to mark the end of the list).
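For illustration, a hedged sketch of the simplest variant of this: a hypothetical sum_ints whose first parameter tells va_arg() how many ints follow:
#include <cstdarg>
#include <iostream>

// the count parameter n announces how many ints were passed after it
int sum_ints(int n, ...)
{
    va_list args;
    va_start(args, n);
    int total = 0;
    for (int i = 0; i < n; ++i)
        total += va_arg(args, int);  // the caller must really have passed ints
    va_end(args);
    return total;
}

int main()
{
    std::cout << sum_ints(3, 10, 20, 30) << "\n";  // 60
}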
Generally speaking, overloading is the C++ way to go. If you end up really needing something akin to variable-length argument lists in C++, for example for conveniently chaining arguments of various number and type, consider how C++ streams work (those concatenated "<<" and ">>"s):
class MyClass {
public:
MyClass & operator<<( int i )
{
// do something with integer
return *this;
}
MyClass & operator<<( double d )
{
// do something with the double
return *this;
}
};
int main()
{
MyClass foo;
foo << 42 << 3.14 << 0.1234 << 23;
return 0;
}
It is pretty general, and Goz has already covered some of the points. A few more:
1) A variable argument list gives undefined behavior if you pass anything but POD objects. Overloaded functions can receive any kind of objects.
2) You can have ambiguity if one member of an overload set takes a variable argument list. Then again, you can have ambiguity without that as well. The variable argument list might create ambiguity in a larger number of situations though.
The first point is the really serious one -- for most practical purposes, it renders variable argument lists purely a "legacy" item in C++, not something to even consider using in any new code. The most common alternative is chaining overloaded operators instead (e.g. iostream inserters/extractors versus printf/scanf).