How to get the sizeof a double in gfortran - Fortran

print *, sizeof(DOUBLE)
gives me 4.
DOUBLE PRECISION x
print *, sizeof(x)
gives me 8 (which I assume is correct).
Unfortunately, this will not compile:
print *, "size of DOUBLE:", sizeof(DOUBLE PRECISION)
Is there a way to achieve this in one sizeof() call?

sizeof is a GNU extension and, therefore, not necessarily portable. (Note that in your first example DOUBLE is not a type: it is an implicitly typed variable, a default REAL, which is why sizeof reports 4.)
From the documentation:
The argument shall be of any type, rank or shape.
So the following works:
program test
  print *, sizeof(1.d0)  ! size in bytes of a double precision value
end program
1.d0 is a double precision literal, so this prints the size of a double precision value in one call. To get the size of a single precision real, use 1.e0 or 1.. (If portability matters, the standard intrinsic storage_size (Fortran 2008, returns the size in bits) or c_sizeof from the iso_c_binding module are alternatives to the GNU-specific sizeof.)

Related

Difference between directly assigning a float variable a hexadecimal integer and assigning through pointer conversion

I was investigating the structure of floating-point numbers and found that most compilers use the IEEE 754 standard to store them. When I tried:
float a=0x3f520000; //have to be equal to 0.8203125 in IEEE 754
printf("value of 'a' is: %X [%x] %f\n",(int)a,(int)a, a);
it produces the result:
value of 'a' is: 3F520000 [3f520000] 1062338560.000000
but if I try:
int b=0x3f520000;
float* c = (float*)&b;
printf("value of 'c' is: %X [%x] %f\r\n", *(int*)c, *(int*)c, c[0]);
it gives:
value of 'c' is: 3F520000 [3f520000] 0.820313
The second try gave me the right answer. What is wrong with the first try? And why does the result differ when I cast int to float via a pointer?
The difference is that the first converts the value (0x3f520000 is the integer 1062338560), and is equivalent to this:
float a = 1062338560;
printf("value of 'a' is: %X [%x] %f\n",(int)a,(int)a, a);
The second reinterprets the representation of the int - 00111111010100100000000000000000 - as being the representation of a float instead.
(It's also undefined, so you shouldn't expect it to do anything in particular.)
[Note: This answer assumes C, not C++, which has different rules]
With
float a=0x3f520000;
you take the integer value 1062338560 and the compiler will convert it to 1062338560.0f.
If you want a hexadecimal floating-point constant you must use the exponent format with the letter p, as in 0x1.a4p-1 (which is hexadecimal notation for 0.8203125).
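For example, a minimal sketch (hexadecimal floating constants require C99 or later; the exact digits assume IEEE 754):
#include <stdio.h>

int main(void)
{
    float a = 0x1.a4p-1f;  /* hexadecimal floating constant: 1.640625 * 2^-1 */
    printf("%.7f\n", a);   /* prints 0.8203125 */
    return 0;
}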
What happens with
int b=0x3f520000;
float* c = (float*)&b;
is that you tell the compiler that c points to a float, and reading *c then breaks the strict aliasing rule (the object at that address is really an int, not a float). The compiler will then reinterpret the bits of the int as a float value.
0x3f520000 is an integer constant. When assigned to a float, the integer is converted.
A more proper example of how to do the conversion in the second case:
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    uint32_t fi = 0x3f520000;
    float f;
    memcpy(&f, &fi, sizeof(f));  /* copy the bit pattern; no value conversion */
    printf("%.7g\n", f);
    return 0;
}
it prints:
0.8203125
so that is what you expected.
The approach I used is memcpy, which is the safest for all compilers and the best choice for modern compilers (GCC since approximately 4.6, Clang since 3.x), which interpret such a memcpy as a "bit cast" and optimize it in an efficient and safe way (at least in "hosted" mode). It is still safe for older compilers, though not necessarily as efficient; some people prefer a cast through a union or even through a different pointer type. On the dangers of those approaches, see here, or search for "type punning and strict aliasing" in general.
(Also, there could be some weird platforms where integer endianness differs from floating-point endianness, where a byte is not 8 bits, and so on. I don't consider them here.)
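For completeness, here is a sketch of the union approach mentioned above; this form of type punning is permitted in C, but undefined behaviour in C++:
#include <stdio.h>
#include <stdint.h>

/* Writing one union member and reading another reinterprets the bits (C only). */
union pun {
    uint32_t i;
    float    f;
};

int main(void)
{
    union pun p = { .i = 0x3f520000u };
    printf("%.7g\n", p.f);  /* prints 0.8203125 on IEEE-754 platforms */
    return 0;
}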
UPDATE: I started answering the initial version of the question. Yes, bit casting and value conversion give fundamentally different results. That's how floating-point numbers work.

Why does my function print '0'?

Here is my code; I don't know exactly why this happens:
#include <stdio.h>

void foo(float *);

int main()
{
    int i = 10, *p = &i;
    foo(&i);
}

void foo(float *p)
{
    printf("%f\n", *p);
}
OUTPUT :
0
As has already been said, you are passing the address of an integer variable as the address of a single-precision floating-point number. Ideally, such an implicit conversion would be disallowed, but depending on the compiler and on whether it is C or C++, it may result in just a warning.
But why does it print exactly 0?
It is because of the way a single-precision FPN is represented in memory:
(1 bit of sign)|(8 bits of biased exponent)|(23 bits of significand (mantissa))
10 as a 32-bit integer is
0|0000 0000|000 0000 0000 0000 0000 1010
So, when interpreted as a floating-point value:
(sign = 0)(biased exponent = 0)(significand = 10)
biased exponent is normal exponent plus 127 - http://en.wikipedia.org/wiki/Exponent_bias
A biased exponent of 0 means the number is subnormal: there is no implicit leading 1, and the effective exponent is -126. To calculate the value we use the following formula:
floatValue = ((sign) ? (-1) : (1)) * pow(2, -126) * (significand / pow(2, 23))
That yields:
floatValue = 1 * pow(2, -126) * (10 / pow(2, 23)) = 10 * pow(2, -149), which is about 1.4e-44.
It is a very small number, which the %f specifier displays as 0.000000.
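You can verify this with a small sketch (assuming IEEE-754 single precision; memcpy reinterprets the bits safely):
#include <stdio.h>
#include <string.h>

int main(void)
{
    int i = 10;
    float f;
    memcpy(&f, &i, sizeof f);  /* reinterpret the int's bit pattern as a float */
    printf("%g\n", f);         /* prints 1.4013e-44: a tiny subnormal */
    printf("%f\n", f);         /* prints 0.000000, as the asker saw */
    return 0;
}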
SOLUTION:
To solve the problem, use a temporary variable and an explicit conversion before calling foo:
#include <stdio.h>

void foo(float *);

int main()
{
    int i = 10;
    float temp = (float)i;  /* convert the value, not the bit pattern */
    foo(&temp);
}

void foo(float *p)
{
    printf("%f\n", *p);
}
P.S. To avoid such problems in the future, always set your compiler to a realistically high warning level (for example, -Wall -Wextra with GCC or Clang) and deal with every warning before running the app.
Because you are declaring your integer object and pointer object as int and int * instead of float and float *.
You cannot pass an object of type int * to a function that expects an argument of type float * without an explicit conversion.
Change:
int i = 10, *p = &i;
to
float i = 10, *p = &i;
Note that your p pointer in main is not used and its declaration can be removed.
You are essentially passing a pointer to an int where a pointer to a float is expected. Some compilers accept this with a warning, but it is bad: printf tries to interpret the memory reserved for the int as a float (which is represented differently in memory), and the results are undefined.
You are reading the storage allocated to a variable of type int and trying to read it as a float.
Floats are stored as a significand (mantissa) and an exponent. With the bit pattern of the integer 10, the exponent field is zero and only the low bits of the significand are set, so the value is a subnormal number vanishingly close to zero, which prints as 0.
This happens because you are passing an integer's address to a function that receives it as a pointer to float.
When you compile it, you will get a warning if you enable the warning flags:
root#sathish1:~/My Docs/Programs# cc float1.c
float1.c: In function ‘main’:
float1.c:11:5: warning: passing argument 1 of ‘foo’ from incompatible pointer type [enabled by default]
float1.c:3:6: note: expected ‘float *’ but argument is of type ‘int *’
It clearly tells you what you are doing.
Your function expects a float *, but you are passing an int *. When printing, it reads the respective memory area and tries to print it. But float values are stored in the IEEE 754 format, while the function receives the address of an integer (which is not stored in IEEE 754 format), so the result is undefined.
You are calling a function that expects a float * with a pointer to int. In C++ this is an error and your code does not compile. In C you will most likely get a warning, and undefined behaviour anyway.

Issues while printing float values

#include <stdio.h>
#include <math.h>

int main()
{
    float i = 2.5;
    printf("%d\n%d\n%d", i, i, i);
}
When I compile this using gcc and run it, I get this as the output:
0
1074003968
0
Why doesn't it print just
2
2
2
You're passing a float (which will be converted to a double) to printf, but telling printf to expect an int. The result is undefined behavior, so at least in theory, anything could happen.
What will typically happen is that printf will retrieve sizeof(int) bytes from the stack, and interpret whatever bit pattern they hold as an int, and print out whatever value that happens to represent.
What you almost certainly want is to cast the float to int before passing it to printf.
The "%d" format specifier is for decimal integers. Use "%f" instead.
And take a moment to read the printf() man page.
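For instance, a corrected sketch (a float argument to a variadic function is promoted to double, which is exactly what %f expects):
#include <stdio.h>

int main()
{
    float i = 2.5;
    printf("%f\n%f\n%f\n", i, i, i);  /* prints 2.500000 three times */
}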
The "%d" is the specifier for a decimal integer (typically an 32-bit integer) while the "%f" specifier is used for decimal floating point. (typically a double or a float).
if you only want the non-decimal part of the floating point number you could specify the precision as 0.
For example:
float i = 2.5;
printf("%.0f\n%.0f\n%.0f",i,i,i);
Note you could also cast each value to an int, which gives the same result here (though casting truncates toward zero, while %.0f rounds to nearest):
printf("%d\n%d\n%d", (int)i, (int)i, (int)i);
%d prints decimal ints, not floats. printf() cannot tell that you passed a float to it (values in C carry no run-time type information; you cannot ask a value what type it is); you need to use the appropriate format character for the type you passed.

Why is sizeof(13.33) 8 bytes?

When I take sizeof(a), where a = 13.33 is a float variable, the size is 4 bytes.
But if I take sizeof(13.33) directly, the size is 8 bytes.
I do not understand what is happening. Can someone help?
Those are the rules of the language.
13.33 is a numeric literal. It is treated as a double because it is a double. If you want 13.33 to be treated as a float literal, then you state 13.33f.
13.33 is a double literal. If sizeof(float) == 4, sizeof(13.33f) == 4 should also hold because 13.33f is a float literal.
The literal 13.33 is treated as a double precision floating point value, 8 bytes wide.
The 13.33 literal is being treated as 'double', not 'float'.
Try 13.33f instead.
The type and size of your variable are fine. It's just that the compiler has some default types for literals, those constant values hard-coded in your program.
If you request sizeof(1), you'll get sizeof(int). If you request sizeof(2.5), you'll get sizeof(double). Those would clearly fit into a char and a float respectively, but the compiler has default types for your literals and will treat them as such until assignment.
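For example, a quick sketch (the sizes in the comments assume a typical platform with a 4-byte int and an 8-byte double):
#include <stdio.h>

int main(void)
{
    printf("%zu\n", sizeof(1));    /* sizeof(int): typically 4 */
    printf("%zu\n", sizeof(2.5));  /* sizeof(double): typically 8 */
    return 0;
}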
You can override this default behaviour, though. For example:
2.5 // as you didn't specify anything, the compiler will take it for a double.
2.5f // ah ha! you're specifying this literal to be float
Cheers!
Because 13.33 is a double, which gets converted (losing precision) to float if you assign it to one. And a double is 8 bytes. To write a real float literal, use 13.33f (note the f).

Purpose of a ".f" appended to a number?

I saw 1/3.f in a program and wondered what the .f was for, so I tried my own program:
#include <iostream>

int main()
{
    std::cout << (float) 1/3 << std::endl;
    std::cout << 1/3.f << std::endl;
    std::cout << 1/3 << std::endl;
}
Is the .f used like a cast? Is there a place where I can read more about this interesting syntax?
3. is equivalent to 3.0, it's a double.
f following a number literal makes it a float.
Without the .f the number gets interpreted as an integer, hence 1/3 is (int)1/(int)3 => (int)0 instead of the desired (float)0.333333. The .f tells the compiler to interpret the literal as a floating point number of type float. There are other such constructs such as for example 0UL which means a (unsigned long)0, whereas a plain 0 would be an (int)0.
The .f is actually two components, the . which indicates that the literal is a floating point number rather than an integer, and the f suffix which tells the compiler the literal should be of type float rather than the default double type used for floating point literals.
Disclaimer; the "cast construct" used in the above explanation is not an actual cast, but just a way to indicate the type of the literal.
If you want to know all about literals and the suffixes you can use in them, you can read the C++ standard (1997 draft, C++11 draft, C++14 draft, C++17 draft) or, alternatively, have a look at a decent textbook, such as Stroustrup's The C++ Programming Language.
As an aside, in your example (float)1/3 the literals 1 and 3 are actually integers, but the 1 is first converted to a float by your cast, and then the 3 gets implicitly converted to a float because it is the right-hand operand of a floating-point operator. (The operator is floating point because its left-hand operand is floating point.)
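A few of these literal forms side by side (a sketch in C for brevity; the comments note each literal's type):
#include <stdio.h>

int main(void)
{
    printf("%f\n", 1 / 3.f);  /* 3.f is a float literal: prints 0.333333 */
    printf("%f\n", 1 / 3.0);  /* 3.0 is a double literal: prints 0.333333 */
    printf("%d\n", 1 / 3);    /* both operands int: integer division, prints 0 */
    printf("%lu\n", 0UL);     /* UL suffix: an unsigned long literal, prints 0 */
    return 0;
}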
By default 3.2 is treated as a double, so to force the compiler to treat it as a float, you need to write f at the end.
Just see this interesting demonstration:
#include <iostream>
using namespace std;

int main()
{
    float a = 3.2;
    if (a == 3.2)
        cout << "a is equal to 3.2" << endl;
    else
        cout << "a is not equal to 3.2" << endl;

    float b = 3.2f;
    if (b == 3.2f)
        cout << "b is equal to 3.2f" << endl;
    else
        cout << "b is not equal to 3.2f" << endl;
}
Output:
a is not equal to 3.2
b is equal to 3.2f
(The first comparison fails because a holds 3.2 rounded to float precision; in the comparison it is converted back to double and no longer equals the double literal 3.2.)
Do experiment here at ideone: http://www.ideone.com/WS1az
Try changing the type of the variable a from float to double and see the result again!
3.f is short for 3.0f - the number 3.0 as a floating point literal of type float.
The decimal point and the f serve different purposes, so it is not really a single .f construct.
You have to understand that in C and C++ everything is typed, including literals.
3 is a literal integer.
3. is a literal double
3.f is a literal float.
An IEEE float has less precision than a double: float uses only 32 bits, with a 23-bit mantissa and an 8-bit exponent (plus a sign bit).
double gives you more accuracy, but sometimes you do not need it (e.g. if you are doing calculations on figures that are only estimates in the first place) and the accuracy of float will suffice. And if you are storing large numbers of values (e.g. processing a lot of time-series data), the smaller size can matter more than the accuracy.
Thus float is still a useful type.
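A quick sketch of the precision difference (the printed digits assume IEEE-754 types; %.17g shows enough digits to expose the rounding):
#include <stdio.h>

int main(void)
{
    float  f = 0.1f;
    double d = 0.1;
    printf("%.17g\n", (double)f);  /* 0.10000000149011612 */
    printf("%.17g\n", d);          /* 0.10000000000000001 */
    return 0;
}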
You should not confuse the f suffix with the %f conversion used by printf and equivalent functions.