Infinity in MSVC++ - c++

I'm using MSVC++, and I want to use the special value INFINITY in my code.
What's the byte pattern or constant to use in MSVC++ for infinity?
Why does 1.0f/0.0f appear to have the value 0?
#include <stdio.h>
#include <limits.h>
int main()
{
    float zero = 0.0f;
    float inf = 1.0f/zero;
    printf( "%f\n", inf ) ;  // 1.#INF00
    printf( "%x\n", inf ) ;  // why is this 0?
    printf( "%f\n", zero ) ; // 0.000000
    printf( "%x\n", zero ) ; // 0
}

Use numeric_limits:
#include <limits>
float inf = std::numeric_limits<float>::infinity();
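For instance, a minimal sketch of using and checking it (std::isinf is standard since C++11; the exact text printed for infinity varies by CRT version):

#include <cstdio>
#include <cmath>
#include <limits>

int main()
{
    float inf = std::numeric_limits<float>::infinity();
    std::printf("%f\n", inf);  // "1.#INF00" on older MSVC CRTs, "inf" on newer ones
    std::printf("isinf: %d\n", std::isinf(inf) ? 1 : 0);  // 1: it really is infinity
    return 0;
}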

printf("%x\n", inf) expects an integer (32 bit on MSVC), but receives a double. Hilarity will ensue. Err, I mean: undefined behavior.
(And yes, it receives a double since for a variable argument list, floats are promoted to double).
Edit anyways, you should use numeric_limits, as the other reply says, too.

In the variable arguments list to printf, floats get promoted to doubles. The little-endian byte representation of infinity as a double is 00 00 00 00 00 00 F0 7F.
As peterchen mentioned, "%x" expects an int, not a double. So printf looks at only the first sizeof(int) bytes of the argument. No version of MSVC++ defines int to be larger than 4 bytes, so you get all zeros.
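If the goal is to inspect the bit pattern, a safer sketch (assuming IEEE 754, as on x86) is to memcpy the value into an unsigned integer of the same size and print that, which avoids the int/double mismatch entirely:

#include <cstdio>
#include <cstring>
#include <cstdint>

int main()
{
    float zero = 0.0f;
    float inf = 1.0f / zero;                    // +infinity under IEEE 754
    std::uint32_t fbits;
    std::memcpy(&fbits, &inf, sizeof fbits);    // copy the bytes; no aliasing tricks
    std::printf("float inf:  %08x\n", fbits);   // 7f800000

    double dinf = inf;                          // widen to double
    std::uint64_t dbits;
    std::memcpy(&dbits, &dinf, sizeof dbits);
    std::printf("double inf: %016llx\n", (unsigned long long)dbits); // 7ff0000000000000
    return 0;
}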

Take a look at numeric_limits::infinity.

That's what happens when you lie to printf(): it gets it wrong. When you use the %x format specifier, it expects an integer to be passed on the stack, not a float passed on the FPU stack. Fix:
printf( "%x\n", *(__int32*)&inf ) ;
You can get infinity out of the <limits> C++ header file:
float inf = std::numeric_limits<float>::infinity();

Related

C/C++ Printing bytes in hex, getting weird hex values

I am using the following to print out numbers from an array in hex:
char buff[1000];
// Populate array....
int i;
for(i = 0; i < 1000; ++i)
{
    printf("[%d] %02x\n", i, buff[i]);
}
but I sometimes print weird values:
byte[280] 00
byte[281] 00
byte[282] 0a
byte[283] fffffff4 // Why is this a different length?
byte[284] 4e
byte[285] 66
byte[286] 0a
Why does this print 'fffffff4'?
Use %02hhx as the format string.
From CppReference, %02x accepts unsigned int. When you pass the arguments to printf(), which is a variadic function, buff[i] is automatically converted to int. The format specifier %02x then makes printf() interpret the value as an int, so potential negative values like (char)-1 get interpreted and printed as (int)-1, which is the cause of what you observed.
It can also be inferred that your platform has a signed char type and a 32-bit int type.
The length modifier hh tells printf() to interpret whatever is supplied as a char type, so %hhx is the correct format specifier for unsigned char.
Alternatively, you can cast the data to unsigned char before printing, like
printf("[%d] %02x\n", i, (unsigned char)buff[i]);
This also prevents negative values from showing up as too long, as an int can (almost) always contain an unsigned char value.
See the following example:
#include <stdio.h>
int main(){
    signed char a = +1, b = -1;
    printf("%02x %02x %02hhx %02hhx\n", a, b, a, b);
    return 0;
}
The output of the above program is:
01 ffffffff 01 ff
Your platform apparently has signed char. On platforms where char is unsigned, the output would be f4.
When calling a variadic function, any integer argument smaller than int gets promoted to int.
A char value of f4 (-12 as a signed char) has the sign bit set, so when converted to int it becomes fffffff4 (still -12, but now as a signed int) in your case.
%02x causes printf to treat the argument as an unsigned int and print it using at least 2 hexadecimal digits.
The output doesn't fit in 2 digits, so as many as are required are used.
Hence the output fffffff4.
To fix it, either declare your array unsigned char buff[1000]; or cast the argument:
printf("[%d] %02x\n", i, (unsigned char)buff[i]);

Why is this comparison to zero not working properly?

I have this code:
double A = doSomethingWonderful(); // same as doing A = 0;
if(A == 0)
{
    fprintf(stderr, "A=%llx\n", A);
}
and this output:
A=7f35959a51c0
how is this possible?
I checked the value of 7f35959a51c0 and it seems to be something like 6.91040329973658785751176861252E-310, which is very small but not zero.
EDIT:
OK, I understand now that this way of printing the hex value of a double doesn't work. I need to find a way to print the bytes of the double.
Following the comments, I modified my code:
A = doSomethingWonderful(); // same as doing A = 0;
if(A == 0)
{
    char bytes[8];
    memcpy(bytes, &A, sizeof(double));
    for(int i = 0; i < 8; i++)
        fprintf(stderr, " %x", bytes[i]);
}
and I get this output:
0 0 0 0 0 0 0 0
So finally it seems that the comparison is working properly but I was doing a bad print.
IEEE 754 floating-point values use a bias in the stored exponent so that both positive and negative exponents can be represented. For double-precision values, that bias is 1023, which happens to be 0x3ff in hex; that is exactly the exponent field you see in the byte pattern printed for 1.0, since 1.0 has an unbiased exponent of 0.
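To make the bias concrete, here is a small sketch that extracts the exponent field of a double (assuming the usual IEEE 754 binary64 layout):

#include <cstdio>
#include <cstring>
#include <cstdint>

int main()
{
    double d = 1.0;
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);
    unsigned biased = (unsigned)((bits >> 52) & 0x7FF);   // 11-bit exponent field
    std::printf("biased = 0x%x, unbiased = %d\n", biased, (int)biased - 1023);
    // prints: biased = 0x3ff, unbiased = 0
    return 0;
}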
Two other small notes:
When printing bytes, you can use %hhx to get it to only print 2 hex digits instead of sign-extending to 8.
You can use a union to view the double value as an 8-byte integer (well-defined in C, though this kind of type punning is technically undefined in C++).
double A = 0;
if(A == 0)
{
    A = 1; // Note that you were setting A to 1 here!
    char bytes[8];
    memcpy(bytes, &A, sizeof(double));
    for(int i = 0; i < 8; i++)
        printf(" %hhx", bytes[i]);
}
int isZero;
union {
    unsigned long long i; /* unsigned long is only 32 bits on some platforms, e.g. 64-bit Windows */
    double d;
} u;
u.d = 0;
isZero = (u.d == 0.0);
printf("\n============\n");
printf("hex = %llx\nfloat = %f\nzero? %d\n", u.i, u.d, isZero);
Result:
0 0 0 0 0 0 f0 3f
============
hex = 0
float = 0.000000
zero? 1
So in the first line, we see the byte pattern of 1.0: the little-endian bytes 00 00 00 00 00 00 f0 3f, i.e. 0x3ff0000000000000, whose exponent field is exactly the bias value 0x3ff.
In the following lines, we see that when you use a union to print the hex value of the double 0.0, you get 0 as expected.
When you pass your double to printf(), you pass it as a floating point value. However, since the "%x" format is an integer format, your printf() implementation will try to read an integer argument. Due to this fundamental type mismatch, it is possible, for instance, that the calling code places your double value in a floating point register, while the printf() implementation tries to read it from an integer register. Details depend on your ABI, but apparently the bits that you see are not the bits that you passed. From a language point of view, you have undefined behavior the moment that you have a type mismatch between one printf() argument and its corresponding format specification.
Apart from that, +0.0 is indeed represented as all bits zero, both in single and in double precision formats. However, this is only positive zero, -0.0 is represented with the sign bit set.
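For illustration, a quick sketch showing that -0.0 differs from +0.0 only in the sign bit, yet the two still compare equal (again assuming IEEE 754):

#include <cstdio>
#include <cstring>
#include <cstdint>

int main()
{
    double pz = +0.0, nz = -0.0;
    std::uint64_t pb, nb;
    std::memcpy(&pb, &pz, sizeof pb);
    std::memcpy(&nb, &nz, sizeof nb);
    std::printf("+0.0 bits: %016llx\n", (unsigned long long)pb);  // 0000000000000000
    std::printf("-0.0 bits: %016llx\n", (unsigned long long)nb);  // 8000000000000000
    std::printf("equal: %d\n", pz == nz ? 1 : 0);                 // 1
    return 0;
}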
In your last bit of code, you are inspecting the bit pattern of 1.0, because you overwrite the value of A before you do the conversion. Note also that you get fffffff0 instead of f0 for the seventh byte because of sign extension. For correct output, use an array of unsigned bytes.
The pattern that you are seeing decodes like this:
00 00 00 00 00 00 f0 3f
reordered to big endian:
3f f0 00 00 00 00 00 00
decoded fields:
sign: 0 (1 bit)
exponent: 01111111111 (11 bits), value = 1023
exponent = value - bias = 1023 - 1023 = 0
mantissa: 0...0 (52 bits), value with implicit leading 1 bit: 1.0000...
entire value: (-1)^0 * 2^0 * 1.0 = 1.0

Unexpected behavior from unsigned __int64

unsigned __int64 difference;
difference=(64*33554432);
printf ("size %I64u \n", difference);
difference=(63*33554432);
printf ("size %I64u \n", difference);
The first number is ridiculously large. The second number is the correct answer. How does changing it from 62 to 63 cause such a change?
First value is 18446744071562067968
Second value is 2113929216
Sorry the values were 64 and 63, not 63 and 62.
Unless suffixed otherwise, integer literals are of type int. I would assume that on the platform you're on, an int is 32 bits. So the calculation (64*33554432) overflows, since 64 * 33554432 is exactly 2^31, one past INT_MAX; in practice it wraps to a negative value. You then convert this to an unsigned __int64, and the negative value is sign-extended into a very, very large positive integer.
Voila:
int main()
{
    int a1 = (64*33554432);
    int a2 = (63*33554432);
    printf("%08x\n", a1); // 80000000 (negative)
    printf("%08x\n", a2); // 7e000000 (positive)
    unsigned __int64 b1 = a1;
    unsigned __int64 b2 = a2;
    printf("%016llx\n", b1); // ffffffff80000000
    printf("%016llx\n", b2); // 000000007e000000
}
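The fix is to force the multiplication itself to be done in 64 bits, e.g. by suffixing one operand. A sketch using the portable unsigned long long (unsigned __int64 should behave the same on MSVC):

#include <cstdio>

int main()
{
    int n = 64;
    // int * int overflows at 2^31: undefined behaviour, typically wrapping negative,
    // and the damage is done before the result is widened by the assignment
    unsigned long long bad = n * 33554432;
    // make one operand 64-bit so the multiplication is carried out in 64 bits
    unsigned long long good = n * 33554432ULL;
    std::printf("bad  = %llu\n", bad);   // typically 18446744071562067968
    std::printf("good = %llu\n", good);  // 2147483648
    return 0;
}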
On gcc it works fine and gives out the correct number in both cases.
size 2113929216
size 2080374784
Could it be a bug with printf?
Are you using MSVC or similar? Try stepping through it in the debugger and inspect difference after each evaluation. If the numbers look right there, then it might just be a printf problem. However, under gcc on Linux it's correct.

Display IEEE 32-bit representation of float ... strange behaviour on pointer output

I have a bit of trouble with the following code:
#include <stdio.h>
int main()
{
    float value = 100;
    char * vP;
    vP = (char *) &value;
    printf("%02x ", *(vP+3));
    printf("%02x ", *(vP+2));
    printf("%02x ", *(vP+1));
    printf("%02x ", *(vP+0));
}
The output I get is:
42 ffffffc8 00 00
instead of:
42 c8 00 00 (as required for an IEEE 32-bit conversion)
Can anyone help and explain what goes wrong? If I use
a float value of e.g. 12.2, everything is fine!
Thanks and best regards
Olaf.
That's because char is signed on your machine. Then, in printf, because it's a variadic function, the char is promoted to an int, keeping the negative sign. Therefore the 0xC8 becomes 0xFFFFFFC8. (That's also why 12.2 looked fine: all four bytes of 12.2f are below 0x80, so none of them are negative as a signed char.)
Use an unsigned char * vP to force the representation to be unsigned. See http://www.ideone.com/WPQ4D for a comparison.
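Putting that together, a corrected sketch of the snippet above (output assumes a little-endian machine, like the original):

#include <cstdio>

int main()
{
    float value = 100;                            // 0x42c80000 in IEEE 754
    unsigned char *vP = (unsigned char *)&value;  // unsigned: no sign extension later
    std::printf("%02x %02x %02x %02x\n", vP[3], vP[2], vP[1], vP[0]);
    // prints: 42 c8 00 00
    return 0;
}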

memcpy adds ff ff ff to the beginning of a byte

I have an array that is like this:
unsigned char array[] = {'\xc0', '\x3f', '\x0e', '\x54', '\xe5', '\x20'};
unsigned char array2[6];
When I use memcpy:
memcpy(array2, array, 6);
And print both of them:
printf("%x %x %x %x %x %x", array[0], // ... etc
printf("%x %x %x %x %x %x", array2[0], // ... etc
one prints like:
c0 3f e 54 e5 20
but the other one prints
ffffffc0 3f e 54 ffffffe5 20
what happened?
I've turned your code into a complete compilable example. I also added a third array of a 'normal' char which on my environment is signed.
#include <cstring>
#include <cstdio>
using std::memcpy;
using std::printf;
int main()
{
    unsigned char array[] = {'\xc0', '\x3f', '\x0e', '\x54', '\xe5', '\x20'};
    unsigned char array2[6];
    char array3[6];
    memcpy(array2, array, 6);
    memcpy(array3, array, 6);
    printf("%x %x %x %x %x %x\n", array[0], array[1], array[2], array[3], array[4], array[5]);
    printf("%x %x %x %x %x %x\n", array2[0], array2[1], array2[2], array2[3], array2[4], array2[5]);
    printf("%x %x %x %x %x %x\n", array3[0], array3[1], array3[2], array3[3], array3[4], array3[5]);
    return 0;
}
My results were what I expected.
c0 3f e 54 e5 20
c0 3f e 54 e5 20
ffffffc0 3f e 54 ffffffe5 20
As you can see, only when the array is of a signed char type do the 'extra' ff bytes get appended. The reason is that when memcpy populates the array of signed char, the values with the high bit set now correspond to negative char values. When passed to printf, the chars are promoted to int, which effectively means sign extension.
%x prints them in hexadecimal as though they were unsigned int, but as the argument was passed as int the behaviour is technically undefined. Typically, on a two's complement machine, the behaviour is the same as the standard signed-to-unsigned conversion, which uses mod 2^N arithmetic (where N is the number of value bits in an unsigned int). As the value was only 'slightly' negative (coming from a narrow signed type), post conversion it is close to the maximum possible unsigned int value, i.e. it has many leading 1s in binary, or leading f's in hex.
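As a concrete instance of that mod 2^N conversion, a sketch assuming 32-bit int and signed char, as on the asker's platform:

#include <cstdio>

int main()
{
    signed char c = '\xc0';            // -64 as a signed 8-bit value
    int promoted = c;                  // default argument promotion: still -64
    unsigned int wrapped = (unsigned int)promoted;  // -64 + 2^32 = 4294967232
    std::printf("%x\n", wrapped);      // ffffffc0, matching the question's output
    return 0;
}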
The problem is not memcpy (unless your char type really is 32 bits rather than 8); it looks more like integer sign extension while printing.
You may want to change your printf to explicitly use unsigned char conversion, i.e.:
printf("%hhx %hhx...", array2[0], array2[1],...);
As a guess, it's possible that your compiler/optimizer is handling array (whose size and contents are known at compile time) and array2 differently, pushing constant values onto the stack in the first case and erroneously pushing sign-extended values in the second.
You should mask off the higher bits, since your chars will be extended to int size when calling a varargs function:
printf("%x %x %x %x %x %x", array[0] & 0xff, // ..
The %x format expects an integer type. Try casting:
printf("%x %x %x %x %x %x", (int)array2[0], ...
Edit:
Since there are new comments on my post, I want to add some information. Before calling the printf function, the compiler generates code which pushes the variable list of parameters (...) onto the stack. The compiler doesn't know anything about printf format codes and pushes parameters according to their types, applying the default promotions (so a char is pushed as an int). printf then collects parameters from the stack according to the formatting string, and interprets each one according to its format specifier. Therefore, it is always a good idea to add a cast when a parameter's type doesn't exactly match the format specification, when working with the printf/scanf functions.