My friend and I have just encountered a very weird issue with the range of long long.
So basically, my computer has a 64-bit processor but a 32-bit OS on it. He has both a 32-bit OS and CPU.
First, we printed sizeof(long long). For both of us it was 8.
Then we did this:
long long blah = 1;
printf ("%lld\n", blah<<40);
For me this returns 1099511627776 (which is the correct result). For him it is 0.
How is that possible? We both have the same sizeofs.
Thanks in advance.
EDIT:
I compiled and ran it under Win7 with Code Blocks 12.11. He uses Win XP and the same version of CB.
EDIT2: Source codes as requested:
#include <cstdio>
int main()
{
    long long blah = 1;
    printf("%lld\n", blah << 40);
    return 0;
}
and
#include <cstdio>
int main()
{
    printf("%d", sizeof(long long));
    return 0;
}
I would guess that you and your friend are linking to different versions of the infamous MSVCRT.DLL, or possibly some other library.
From the Code::Blocks FAQ:
Q: What Code::Blocks is not?
A: Code::Blocks is not a compiler, nor a linker. Release packages of Code::Blocks may include a compiler suite (MinGW/GCC), if not provided by the target platform already. However, this is provided "as-is" and not developed/maintained by the Code::Blocks development team.
So the statement "I compiled and ran it under Win7 with Code Blocks 12.11" isn't strictly true; you cannot compile with something that isn't a compiler.
Figure out what compiler you two are actually using (see above: it is not "Code Blocks") and what library.
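To see which compiler each of you is actually invoking, one quick check (a sketch, assuming MinGW/GCC or MSVC) is to print the compiler's predefined version macros:

#include <cstdio>
int main()
{
#if defined(__GNUC__)
    // MinGW/GCC expands __VERSION__ to its version string
    printf("GCC %s\n", __VERSION__);
#elif defined(_MSC_VER)
    printf("MSVC %d\n", _MSC_VER);
#else
    printf("unknown compiler\n");
#endif
    return 0;
}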
It could be one of two problems: either the printing system cannot cope with long long, or the shift operator does not work beyond 32 bits. Try this:
#include <cstdio>
int main()
{
    union
    {
        long long one;
        // Intel-based systems are little-endian, so the low word comes first
        struct {
            long lo;
            long hi;
        } two;
    } xxx;
    xxx.one = 1LL;
    xxx.one = xxx.one << 40;
    printf("%016llx %08lx %08lx\n", xxx.one, xxx.two.hi, xxx.two.lo);
    return 0;
}
If the first number is all zeros but one of the other two isn't, then it is the printf that cannot cope. If all the numbers are zeros, then the shift operator isn't defined for 64 bits.
Related
Consider the piece of code below. Is there an integer literal that would compile on both 32-bit and 64-bit platforms?
#include <iostream>
#include <cstdint>

void f(double)
{
    std::cout << "double\n";
}
void f(int64_t)
{
    std::cout << "int64_t\n";
}
int main()
{
    f(0L);                      // works on 64-bit, fails on 32-bit systems
    f(0LL);                     // fails on 64-bit, but works on 32-bit systems
    f(int64_t(0));              // works on both, but is ugly...
    f(static_cast<int64_t>(0)); // ... even uglier
    return 0;
}
On platforms where int64_t is long, 0LL is a different type and overload resolution doesn't prefer it vs. double.
When int64_t is long long (including on Windows x64), we have the same problem with 0L.
(0LL is int64_t in both the 32-bit and x86-64 Windows ABIs (LLP64), but other OSes use x86-64 System V which is an LP64 ABI. And of course something portable to non-x86 systems would be nice.)
You can make a custom user defined literal for int64_t like
#include <cstdint>

constexpr std::int64_t operator "" _i64(unsigned long long value)
{
    return static_cast<std::int64_t>(value);
}
and then your function call would become
f(0_i64);
This will give you an incorrect value if you try to use the literal -9223372036854775808, for the reason stated here (the minus sign is not part of the literal, so the operator is applied to 9223372036854775808 and the result is only then negated).
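Put together with the overloads from the question, a minimal sketch might look like this:

#include <iostream>
#include <cstdint>

constexpr std::int64_t operator "" _i64(unsigned long long value)
{
    return static_cast<std::int64_t>(value);
}

void f(double)       { std::cout << "double\n"; }
void f(std::int64_t) { std::cout << "int64_t\n"; }

int main()
{
    f(0_i64);   // picks the int64_t overload on both 32-bit and 64-bit targets
    return 0;
}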
Expected result
The program simply prints anyNum in its binary representation.
We accomplish this by "front popping" the first bit of the value and sending it to standard output.
After at most 32 iterations, i (= anyNum initially) will finally fall to zero and the loop ends.
Problem
The v1 version of this code produces the expected result (111...).
However, in v2, when I used a mask structure to get the first bit, it behaved as if the last bit was grabbed (1000...).
Maybe not the last? Anyway, why, and what is happening in the second version of the code?
#include <iostream>

typedef struct{
    unsigned b31: 1;
    unsigned rest: 31;
} mask;

int main()
{
    constexpr unsigned anyNum = -1; // 0b111...
    for (unsigned i = anyNum; i; i <<= 1){
        unsigned bit;
        //bit = (i >> 31);        // v1
        bit = ((mask*)&i)->b31;   // v2 (unexpected behaviour)
        std::cout << bit;
    }
}
Environment
IDE & platform: https://replit.com/
Platform: Linux-5.11.0-1029-gcp-x86_64-with-glibc2.27
Machine: x86_64
Compilation command: clang++-7 -pthread -std=c++17 -o main main.cpp
unsigned b31: 1; is the least significant bit.
Maybe not the last?
The last.
why
Because the compiler chose to do so. The order is whatever the compiler decides.
For example, on GCC the order of bits in bit-fields is controlled by the BITS_BIG_ENDIAN configuration option.
x86 ABI specifies that bit-fields are allocated from right to left.
what is happening in the second version of the code?
Undefined behavior, as in, the code is invalid. You should not expect the code to do anything sane. The compiler happens to generate code that is printing the least significant bit from i.
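As a side note, here is a small sketch (assuming an x86/Linux target, as in the question) showing the first-declared bit-field landing in the least significant bit; std::memcpy is used to inspect the storage without the invalid pointer cast:

#include <cstdio>
#include <cstring>

struct Bits {
    unsigned first : 1;   // on the x86 ABI this occupies bit 0
    unsigned rest  : 31;
};

int main()
{
    Bits b{};
    b.first = 1;                        // set only the first-declared field
    unsigned raw;
    std::memcpy(&raw, &b, sizeof raw);  // well-defined way to view the storage
    std::printf("0x%08x\n", raw);       // prints 0x00000001 on x86/Linux
    return 0;
}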
Thanks to @KamilCuk I found the answer.
It was enough to swap two lines in the code to repair it.
With comments I've marked the memory layout of the struct (left-to-right) and the bit layout of unsigned int (right-to-left, little-endian).
Not taking them into account was my mistake.
#include <iostream>

// struct uses left-to-right memory layout
typedef struct{
    // I've swapped the next 2 lines as a correction.
    unsigned rest: 31;
    unsigned b31: 1;  // NOW `b31` is the most significant bit.
} mask;

int main()
{
    // unsigned is little-endian on Linux-5.11.0-1029-gcp-x86_64-with-glibc2.27,
    // which is right-to-left bit layout.
    constexpr unsigned anyNum = -1; // 0b111...
    for (unsigned i = anyNum; i; i <<= 1){
        unsigned bit;
        //bit = (i >> 31);        // v1
        bit = ((mask*)&i)->b31;   // v2 (NOW expected behaviour)
        std::cout << bit;
    }
}
Due to some libraries, I have to compile my application in 32-bit mode, but I need to use integer variables that exceed the maximum of the 32-bit types. For example, if I try to use uint64_t I get an overflow at 2147483647.
I thought it was possible to use 64-bit integer variables in a 32-bit application, so what did I miss here? Do I have to include some specific header, or do I have to set some option for that? Using VS17.
EDIT:
I did some testing, and in this example program, I can reproduce my overflow problem.
#include <iostream>
int main()
{
    uint64_t i = 0;
    while (true)
    {
        std::printf("%d\n", i);
        i += (uint64_t)10000;
    }
    return 0;
}
The bug is here:
std::printf("%d\n",i);
             ^^
You've used the wrong format specifier, and therefore the behaviour of the program is undefined. %d is for signed int. You need to use
std::printf("%" PRIu64 "\n",i);
PRIu64 is declared in <cinttypes>.
P.S. You also haven't included the header which declares std::printf.
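Putting that together, a corrected version of the test program might look like this (a sketch; the endless loop is replaced by a few iterations):

#include <cstdio>
#include <cstdint>
#include <cinttypes>

int main()
{
    std::uint64_t i = 0;
    for (int n = 0; n < 5; ++n)          // a few iterations instead of an endless loop
    {
        std::printf("%" PRIu64 "\n", i); // PRIu64 matches uint64_t on every platform
        i += UINT64_C(10000);
    }
    return 0;
}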
I have some sample code which works properly on a 32-bit system, but when I cross-compile it for a 64-bit system and try to run it on a 64-bit machine, it behaves differently.
Can anyone tell me why this is happening?
#include <stdio.h>
#include <time.h>
#include <sys/time.h>

void func(time_t *inputArg)
{
    printf("%ld\n", *inputArg);
}

int main()
{
    unsigned int input = 123456;
    func((time_t *)&input);
}
Here "time_t" is a type defined in linux system library header file which is of type "long int".
This code is working fine with a 32-bit system but it isn't with 64-bit.
For 64-bit I have tried this:
#include <stdio.h>
#include <time.h>
#include <sys/time.h>

void func(time_t *inputArg)
{
    printf("%ld\n", *inputArg);
}

int main()
{
    unsigned int input = 123456;
    time_t tempVar = (time_t)input;
    func(&tempVar);
}
This works fine, but I have used the first method throughout my application a number of times. Any alternative solutions would be appreciated.
can anyone tell me why this is happening?
Dereferencing a pointer whose pointed-to integer type has a different size than the actual object has undefined behaviour.
If the pointed-to object is smaller than the pointer's pointed-to type, you will read unrelated bytes as part of the dereferenced number.
Your fix works because you pass a pointer to an object of the proper type, but consider that your input cannot represent all of the values that time_t can.
The best fix is to use the proper type initially: use time_t as the input.
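A minimal sketch of that fix (C++-style headers; the cast to long long keeps the format specifier correct whether time_t is 32 or 64 bits wide):

#include <cstdio>
#include <ctime>

void func(std::time_t *inputArg)
{
    // cast to a known width so "%lld" always matches
    std::printf("%lld\n", static_cast<long long>(*inputArg));
}

int main()
{
    std::time_t input = 123456;   // use time_t from the start; no pointer cast needed
    func(&input);
    return 0;
}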
Your "fixed" code has a cast that lets the compiler convert your unsigned int value to a time_t. Your original code assumes that they're identical.
On your 32-bit system they are identical, so you get lucky. On your 64-bit system you find out what happens when you invoke Undefined Behavior.
In other words, both C and C++ allow you to cast pointers to whatever you want, but it's up to you to make sure such casts are safe.
Thank you for the response.
Actually, I found my mistake when I printed sizeof(long int) on the 32-bit machine and the 64-bit machine.
32-bit machine:
int is 32 bits, long int is 32 bits, long long int is 64 bits
64-bit machine:
int is 32 bits, long int is 64 bits, long long int is 64 bits
I included stdint.h in my solution and used uint64_t, but the result was not what I wanted. Here is the code that I used.
#include <stdio.h>
#include "./stdint.h"

void main (void)
{
    //-- to get the maximum value that the 32-bit integer can take
    unsigned int test = 0;
    test--;

    //-- to see if the 64-bit integer can take the value that the 32-bit integer can't take
    uint64_t test2 = test;
    test2++;

    printf("%u\n", test);
    printf("%u\n", test2);

    while(1) { }
}
And here is the result.
4294967295
0
I want to use the full range that the 64-bit integer can take. How can I do it in x86 Visual Studio 2008? For your information, I'm using 32-bit Windows 7.
The %u format specifier for printf is for unsigned integers. Use %llu.
Since this is C++ though, you might as well use the type-safe std::cout to avoid programmer error:
std::cout << test2;
Use:
#include <inttypes.h>
/* ... */
printf("%" PRIu64 "\n", test2);
to print a uint64_t value.
The u conversion specifier is used to print an unsigned int value.
Note that the PRIu64 macro is a C macro. In C++ the macro may not be available (older implementations require defining __STDC_FORMAT_MACROS before the include), and you may have to use the %llu conversion specification.
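A corrected version of the test program, as a sketch (VS2008 may not ship <cstdint>, so unsigned long long is used directly; if the runtime rejects %llu, the MSVC-specific %I64u spelling is an alternative):

#include <cstdio>
#include <iostream>

int main()
{
    unsigned int test = 0;
    test--;                            // wraps to 4294967295, the 32-bit maximum

    unsigned long long test2 = test;   // 64-bit even in a 32-bit build
    test2++;                           // 4294967296, past the 32-bit range

    std::printf("%u\n", test);
    std::printf("%llu\n", test2);      // %llu matches unsigned long long
    std::cout << test2 << "\n";        // or let iostream deduce the type
    return 0;
}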
Since you've also tagged this as C++, I'll add the obvious way to avoid type mismatches like you've run into with printf: use an ostream instead:
std::cout << test2;
In a book that I have, it says that the program always converts any
variables to integers when it calculates. Is it possible to make the
default integer 64-bit?
This is 1) not correct, and 2) no, you can't tell (most) compilers what size you want int to be. You could of course use something like:
typedef int64_t Int;
typedef uint64_t Uint;
But you can't rename int to something other than what the compiler thinks it should be - which may be 16, 32, 64, 36, or 72 bits - or some other number >= 16.
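If you go the typedef route, a minimal sketch might look like this (the names Int and Uint are just examples, not anything standard):

#include <cstdint>

typedef std::int64_t  Int;   // project-wide 64-bit signed alias
typedef std::uint64_t Uint;  // project-wide 64-bit unsigned alias

int main()
{
    Int total = 0;
    for (Int i = 1; i <= 1000000; ++i)
        total += i * i;      // 64-bit arithmetic even when int is 32 bits
    return 0;
}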