Why does my function print '0'? - c++

Here is my code; I don't exactly know why this happens:
#include <stdio.h>

void foo(float *);

int main()
{
    int i = 10, *p = &i;
    foo(&i);
}

void foo(float *p)
{
    printf("%f\n", *p);
}
OUTPUT:
0

As has already been said, you are passing the address of an integer variable where the address of a single precision floating point number is expected.
Ideally, such an implicit conversion would be disallowed; in C++ it is an error, while in C, depending on the compiler, it may result in just a warning.
But why does it print exactly 0?
It is because of the way a single precision FPN is represented in memory:
(1 bit of sign) | (8 bits of biased exponent) | (23 bits of significand (mantissa))
10 in binary is
0 | 0000 0000 | 000 0000 0000 0000 0000 1010
So, when interpreted as a floating point value:
(sign = 0) (biased exponent = 0) (significand bits = 10)
Normally the biased exponent is the actual exponent plus 127 - http://en.wikipedia.org/wiki/Exponent_bias - but a biased exponent of 0 marks a subnormal number, whose value is calculated as:
floatValue = ((sign) ? (-1) : (1)) * significandBits * pow(2, -126 - 23)
That yields:
floatValue = 1 * 10 * pow(2, -149), which is roughly 1.4e-44.
It is such a small number that, represented using the %f specifier, it turns into the "0" string.
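You can check this reinterpretation without relying on the pointer cast by copying the bytes instead; a minimal sketch, assuming 32-bit int and float and IEEE 754 (memcpy is the well-defined way to inspect the bits):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int i = 10;
        float f;
        memcpy(&f, &i, sizeof f);  /* copy the int's bit pattern into a float */
        printf("%f\n", f);         /* prints 0.000000 */
        printf("%g\n", f);         /* prints 1.4013e-44, the subnormal value itself */
        return 0;
    }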
SOLUTION:
To solve this problem, use a temporary variable and an explicit cast before calling foo:
int main()
{
    int i = 10;
    float temp = (float)i;
    foo(&temp);
}

void foo(float *p)
{
    printf("%f\n", *p);
}
P.S. To avoid such problems in the future, always set your compiler to the maximum realistic warning level and deal with every warning before running the app.
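With gcc or clang, for example, that might look like this (one typical invocation, assuming the file is called float1.c; adjust flags to taste):

    cc -Wall -Wextra -Werror float1.c

-Wall and -Wextra enable a broad set of warnings, and -Werror turns them into hard errors so that code like the above fails to build instead of printing garbage.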

Because you are declaring your integer object and pointer object as int and int * instead of float and float *.
You cannot pass an object of type int * to a function that expects an argument of type float * without an explicit conversion.
Change:
int i = 10, *p = &i;
to
float i = 10, *p = &i;
Note that your p pointer in main is not used and its declaration can be removed.

You are essentially casting a pointer to an int to a pointer to a float. Even where the compiler accepts this, it is BAD: the printf tries to interpret the memory reserved for the int as a float (which is represented differently in memory), and the results are undefined.

You take the area allocated to a variable of type int and try to read it as a float.
Floats are written as a sign, an exponent and a mantissa. With this layout, your "10" ends up in the low-order bits of the mantissa while the exponent stays zero, so the reinterpreted value is a subnormal number so close to zero that it prints as 0.

This is because you are passing an integer's address to a function that receives it as a float pointer.
When you compile it, you will get a warning if you enable the warning flags:
root#sathish1:~/My Docs/Programs# cc float1.c
float1.c: In function ‘main’:
float1.c:11:5: warning: passing argument 1 of ‘foo’ from incompatible pointer type [enabled by default]
float1.c:3:6: note: expected ‘float *’ but argument is of type ‘int *’
It tells you clearly what you are doing.
Your function expects a float *, but you are passing an int *. When printing, it will look at that memory area and try to print it as a float. A float is stored in the IEEE 754 format, but the function receives the address of an int, which is not stored in IEEE 754 format, so the result is undefined.

You are calling a function expecting float* with a pointer to int. In C++ this is an error and your code does not compile. In C you will most likely get a warning, and undefined behaviour anyway.

Related

Multiplying float with double overflow in C++

I am a bit confused about the point of having this warning:
Arithmetic overflow: Using operator '*' on a 4 byte value and then casting the result to a 8 byte value. Cast the value to the wider type before calling operator '*' to avoid overflow.
#include <iostream>
using std::cin;
using std::cout;
using std::ios_base;

int main() {
    cout.setf(ios_base::fixed, ios_base::floatfield);
    double mints = 10.0 / 3.0;
    const float c_MILLION = 1e6;
    cout << "\n10 million mints: " << 10 * c_MILLION * mints;
    cin.get();
}
According to my understanding, when we multiply a float value with a double value we are basically multiplying a 4 byte value by an 8 byte value, and hence we will lose some precision, according to the links that I have read:
Cannot implicitly convert type 'double' to 'float'
Multiply a float with a double
http://www.cplusplus.com/articles/DE18T05o/#:~:text=Overflow%20is%20a%20phenomenon%20where,be%20larger%20than%20the%20range
However, when I output this, I get a double value:
https://i.stack.imgur.com/EOQzm.png
If that is the case, why does it bother to warn me to cast c_MILLION to a double value if it is automatically changing the result to a double? It can't convert an 8 byte value to a 4 byte value anyway. So why does it warn the programmer when it is already saving us from this trouble? Or can it convert an 8 byte value to a 4 byte value as well? If so, how does it determine what type to print? This is a question that I cannot find the answer to in the links I read.
If it is automatically converting the result to an 8 byte value, what is the point of displaying this warning?
Here is the warning:
https://i.stack.imgur.com/L2szy.png
Severity Code Description Project File Line Suppression State
Warning C26451 Arithmetic overflow: Using operator '*' on a 4 byte value and then casting the result to a 8 byte value. Cast the value to the wider type before calling operator '*' to avoid overflow (io.2)
Edit: the problem was that I was multiplying an int value with a float value. But still, the warning should not exist when it automatically converts the result of that multiplication to a double value.
The warning is because of this multiplication: 10 * c_MILLION. There can be some values of c_MILLION where some precision is lost that would not have been lost if c_MILLION was first converted to a double. Since the result of this multiplication is converted to double, a mistaken programmer might assume that no precision was lost beyond what might be expected if the operands were double in the first place. Hence the warning.
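To see the kind of case the warning has in mind, here is a small sketch (16777215.0f is a hypothetical stand-in for c_MILLION, chosen because 10 * 16777215 needs 27 significant bits while a float carries only 24):

    #include <iostream>
    #include <iomanip>

    int main()
    {
        const float c = 16777215.0f;             // 2^24 - 1, exactly representable as a float
        float  f = 10 * c;                       // multiplied in float: rounds to 167772144
        double d = 10 * static_cast<double>(c);  // widened first: exactly 167772150
        std::cout << std::setprecision(17) << f << '\n' << d << '\n';
    }

Casting to the wider type before the multiplication, as the warning suggests, keeps the product exact.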

How significant is (int) within a statement?

If I have:
#include <iostream>

int main()
{
    float a, b, c;
    b = 7;
    c = 2;
    a = (int) (b / c);
    std::cout << a;
}
Does (int) only affect the data type during cout so that 'a' can be printed as an integer or does it affect 'a' as a variable changing it to an 'int'?
Does (int) only affect the data type during cout so that a can be printed as an integer or does it affect a as a variable changing it to an int?
Neither.
a = (int)(....);
only changes what is assigned to a. In this case it truncates the floating point number and assigns the integral part of the number to a.
It does not change how a is processed in cout << a. You will notice a truncated value in the output. However, the reason for that is that a truncated value got assigned to a in the previous statement, not because cout << a is processed differently.
It does not change the type of a to an int. The type of a variable cannot be changed in C++. It remains unchanged for the entire life of the program.
In this particular case it converts the float value that results from b/c into an int; then, as a is still a float, it converts that back to a float.
This is an easy, if sometimes problematic, way of rounding something to an integer value.
Remember that in C++ variables never change their fundamental type. Something defined as a float stays a float forever. You can force other values into the same memory location, you can recast them, but the variable itself always has a fixed intrinsic type.
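Note that the cast truncates toward zero rather than rounding to the nearest integer; if rounding is what you want, std::round from <cmath> is the usual tool. A quick sketch of the difference (not from the question, just an illustration):

    #include <cmath>
    #include <iostream>

    int main()
    {
        float b = 7, c = 2;
        std::cout << (int)(b / c) << '\n';       // 3: truncation toward zero
        std::cout << std::round(b / c) << '\n';  // 4: rounds to nearest
        std::cout << (int)(-b / c) << '\n';      // -3: truncation, not floor
    }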
A cast does not change the type of the variable the cast value is assigned to.
In your case, the result of b/c is converted (truncated) to an int, which is then converted back to float.
In this case the (int) is a cast to another data type.
What the computer is thinking, inside the main function:
float a, b, c;
Declares 3 variables of type float.
b = 7;
c = 2;
Assigns the value 7 to b and the value 2 to c.
a = (int) (b / c);
a is equal to b/c ==> 7/2 ==> 3.5; wait, the programmer asked to cast the result to int, so 3.5 ==> 3.
std::cout << a;
Output: 3
Hope this helps.

Confusion about float data type declaration in C++

A complete newbie here. For my school homework, I was asked to write a program that displays:
s = 1 + 1/2 + 1/3 + 1/4 + ... + 1/n
Here's what I did -
#include <iostream.h>
#include <conio.h>

void main()
{
    clrscr();
    int a;
    float s = 0, n;
    cin >> a;
    for (n = 1; n <= a; n++)
    {
        s += 1/n;
    }
    cout << s;
    getch();
}
It perfectly displays what it should. However, in the past I have only written programs that use the int data type. To my understanding, the int data type does not hold any decimal places, whereas float does, so I don't know much about float yet. Later that night, I was watching a video on YouTube in which someone was writing the exact same program, but in a slightly different way. The video was in a foreign language, so I couldn't understand it. What he did was declare 'n' as an integer:
int a, n;
float s=0;
instead of
int a;
float s=0, n;
But this was not displaying the desired result. So he went ahead and showed two ways to correct it. He made changes in the for loop body -
s+=1.0f/n;
and
s+=1/(float)n;
To my understanding, he declared 'n' a float data type later in the program (am I right?). So, my question is: both display the same result, but is there any difference between the two? As we are making 'n' a float, why has he written 1.0f instead of n.f or f.n? I tried those, but they give errors. And in the second method, why can't we write 1(float)/n instead of 1/(float)n? As in the first method, we have added the float suffix to 1. Also, is there a difference between 1.f and 1.0f?
I tried to google my question but couldn't find any answer. Another confusion that came to my mind after a few hours is: why are we even declaring 'n' a float? As per the program, the sum should come out as a real number, so shouldn't we declare only 's' a float? The more I think, the more I confuse my brain. Please help!
Thank you.
The reason is that integer division behaves differently from floating point division.
4 / 3 gives you the integer 1. 10 / 3 gives you the integer 3.
However, 4.0f / 3 gives you the float 1.3333..., 10.0f / 3 gives you the float 3.3333...
So if you have:
float f = 4 / 3;
4 / 3 will give you the integer 1, which will then be stored into the float f as 1.0f.
You instead have to make sure either the divisor or the dividend is a float:
float f = 4.0f / 3;
float f = 4 / 3.0f;
If you have two integer variables, then you have to convert one of them to a float first:
int a = ..., b = ...;
float f = (float)a / b;
float f = a / (float)b;
The first is equivalent to something like:
float tmp = a;
float f = tmp / b;
Since n will only ever have an integer value, it makes sense to define it as an int. However, doing so means that this won't work as you might expect:
s+=1/n;
In the division operation both operands are integer types, so it performs integer division which means it takes the integer part of the result and throws away any fractional component. So 1/2 would evaluate to 0 because dividing 1 by 2 results in 0.5, and throwing away the fraction results in 0.
This is in contrast to floating point division, which keeps the fractional component. C will perform floating point division if either operand is a floating point type.
In the case of the above expression, we can force floating point division by performing a typecast on either operand:
s += (float)1/n
Or:
s += 1/(float)n
You can also specify the constant 1 as a floating point constant by giving a decimal component:
s += 1.0/n
Or appending the f suffix:
s += 1.0f/n
The f suffix (as well as the U, L, and LL suffixes) can only be applied to numerical constants, not variables.
What he is doing is something called casting. I'm sure your school will mention it in later lectures. Basically, n is set as an integer for the entire program. But since int and float are similar (both are numbers), the C/C++ language allows you to use one as the other, as long as you tell the compiler what you want it used as. You do this by adding the target data type in parentheses, i.e.
(float) n
he declared 'n' a float data type later in the program (am I right?)
No, he defined (thereby also declared) n as an int and later explicitly converted (cast) it into a float. The two are very different things.
both display the same result but is there any difference between the two?
Nope, they're the same in this context. When an arithmetic operator has an int and a float operand, the former is implicitly converted into the latter, and thereby the result will also be a float. He has just shown you two ways to do it. When both operands are integers, you get an integer result, which may be incorrect when proper mathematical division would give a non-integer quotient. To avoid this, usually one of the operands is made into a floating-point number, so that the actual result is closer to the expected result.
why he has written 1.0f instead of n.f or f.n. I tried it but it gives error. [...] Also, is there a difference between 1.f and 1.0f?
This is because the language syntax is defined thus. A floating-point literal of type float takes the f suffix after its numeric part. So 5 is an int, while 5.0f or 5.f is a float; there's no difference when you omit trailing 0s. However, n.f is a syntax error, since n is an identifier (a variable name) and not a numeric literal.
And in the second method, why we can't write 1(float)/n instead of 1/(float)n?
(float)n is a valid, C-style cast of the int variable n, while 1(float) is just a syntax error.
s+=1.0f/n;
and
s+=1/(float)n;
... So, my question is, both display the same result but is there any difference between the two?
Yes.
In both C and C++, when a calculation involves expressions of different types, one or more of those expressions will be "promoted" to the type with greater precision or range. So if you have an expression with signed and unsigned operands, the signed operand will be "promoted" to unsigned. If you have an expression with float and double operands, the float operand will be promoted to double.
Remember that division with two integer operands gives an integer result - 1/2 yields 0, not 0.5. To get a floating point result, at least one of the operands must have a floating point type.
In the case of 1.0f/n, the expression 1.0f has type float [1], so the n will be "promoted" from type int to type float.
In the case of 1/(float) n, the expression n is being explicitly cast to type float, so the expression 1 is promoted from type int to float.
Nitpicks:
Unless your compiler documentation explicitly lists void main() as a legal signature for the main function, use int main() instead. From the online C++ standard:
3.6.1 Main function
...
2 An implementation shall not predefine the main function. This function shall not be overloaded. It shall have a declared return type of type int, but otherwise its type is implementation-defined...
Secondly, please format your code - it makes it easier for others to read and debug. Whitespace and indentation are your friends - use them.
[1] The constant expression 1.0 with no suffix has type double. The f suffix tells the compiler to treat it as float. 1.0/n would result in a value of type double.
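Putting the answers together, a standard-conforming version of the original program might look like this (a sketch: int main and <iostream> replace the Turbo C++ specifics, and 1.0f/n forces floating point division):

    #include <iostream>

    int main()
    {
        int a;
        std::cin >> a;

        float s = 0;
        for (int n = 1; n <= a; n++)
            s += 1.0f / n;  // float / int: n is converted to float, so the term for n=2 is 0.5f

        std::cout << s << '\n';
    }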

What does this mean? (int &)a

I define a float variable a and then cast it to float & and int &. What does this mean? After the conversion, is a a reference to itself? And why are the two results different?
#include <iostream>
using namespace std;

int main(void)
{
    float a = 1.0;
    cout << (float &)a << endl;
    cout << (int &)a << endl;
    return 0;
}
thinkpad ~ # ./a.out
1
1065353216
cout << (float &)a <<endl;
cout << (int &)a << endl;
The first one treats the bits in a like it's a float. The second one treats the bits in a like it's an int. The bits for float 1.0 just happen to be the bits for integer 1065353216.
It's basically the equivalent of:
float a = 1.0;
int* b = (int*) &a;
cout << a << endl;
cout << *b << endl;
(int &) a casts a to a reference to an integer. In other words, an integer reference to a. (Which, as I said, treats the contents of a as an integer.)
Edit: I'm looking around now to see if this is valid. I suspect that it's not; it depends on the target type being no larger than the actual object.
It means undefined behavior:-).
Seriously, it is a form of type punning. a is a float, but a is also a block of memory (typically four bytes) with bits in it. (float&)a means to treat that block of memory as if it were a float (in other words, what it actually is); (int&)a means to treat it as an int. Formally, accessing an object (such as a) through an lvalue expression with a type other than the actual type of the object is undefined behavior, unless the type is a character type. Practically, if the two types have the same size, I would expect the results to be a reinterpretation of the bit pattern.
In the case of a float, the bit pattern contains bits for the sign, an exponent and a mantissa. Typically, the exponent will use some excess-n notation, and only 0.0 will have 0 as an exponent. (Some representations, including the one used on PCs, will not store the high order bit of the mantissa, since in a normalized form in base 2, it must always be 1. In such cases, the stored mantissa for 1.0 will have all bits 0.) Also typically (and I don't know of any exceptions here), the exponent will be stored in the high order bits. The result is that when you "type pun" a floating point value to an integer of the same size, the value will be fairly large, regardless of the floating point value.
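If you want to look at the bit pattern without the undefined behaviour, copy the bytes into an integer instead (since C++20 there is also std::bit_cast for this); a sketch using memcpy:

    #include <cstdint>
    #include <cstring>
    #include <iostream>

    int main()
    {
        float a = 1.0f;
        std::uint32_t bits;
        std::memcpy(&bits, &a, sizeof bits);  // well-defined byte copy
        std::cout << bits << '\n';            // 1065353216, i.e. 0x3F800000
    }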
The values are different because interpreting a float as an int & (reference to int) throws the doors wide open. a is not an int, so pretty much anything could actually happen when you do that. As it happens, looking at that float like it's an int gives you 1065353216, but depending on the underlying machine architecture it could be 42 or an elephant in a pink tutu or even crash.
Note that this is not the same as casting to an int, which understands how to convert from float to int. Casting to int & just looks at bits in memory without understanding what the original meaning is.

Issues while printing float values

#include <stdio.h>
#include <math.h>

int main()
{
    float i = 2.5;
    printf("%d\n%d\n%d", i, i, i);
}
When I compile this using gcc and run it, I get this as the output:
0
1074003968
0
Why doesn't it print just
2
2
2
You're passing a float (which will be converted to a double) to printf, but telling printf to expect an int. The result is undefined behavior, so at least in theory, anything could happen.
What will typically happen is that printf will retrieve sizeof(int) bytes from the stack, and interpret whatever bit pattern they hold as an int, and print out whatever value that happens to represent.
What you almost certainly want is to cast the float to int before passing it to printf.
The "%d" format specifier is for decimal integers. Use "%f" instead.
And take a moment to read the printf() man page.
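For reference, a corrected version might look like this (a sketch; %f matches the float argument, which printf receives promoted to double, and an explicit cast feeds %d):

    #include <stdio.h>

    int main(void)
    {
        float i = 2.5f;
        printf("%f\n", i);       /* prints 2.500000 */
        printf("%d\n", (int)i);  /* prints 2, via an explicit conversion */
        return 0;
    }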
The "%d" is the specifier for a decimal integer (typically an 32-bit integer) while the "%f" specifier is used for decimal floating point. (typically a double or a float).
if you only want the non-decimal part of the floating point number you could specify the precision as 0.
i.e.
float i = 2.5;
printf("%.0f\n%.0f\n%.0f",i,i,i);
Note you could also cast each value to an int, and it would give the same result:
printf("%d\n%d\n%d", (int)i, (int)i, (int)i);
%d prints decimal ints, not floats. printf() cannot tell that you passed a float to it (C has no runtime type information; you cannot ask a value what type it is), so you need to use the format character appropriate for the type you passed.