http://msdn.microsoft.com/en-us/library/windows/desktop/aa383742%28v=vs.85%29.aspx
They are supposed to be used like this: set two 32-bit values on LowPart and HighPart, then perform arithmetic on the QuadPart.
int a,b,c;
ULARGE_INTEGER u;
...
u.LowPart = a;
u.HighPart = b;
u.QuadPart += c;
So if you are going to perform arithmetic on the QuadPart (64 bits) anyway, you need a 64-bit processor, right? So what is the whole point? Why not assign the value directly to the QuadPart?
You don't need a 64-bit processor to perform arithmetic on a 64-bit data type. All 32-bit compilers that I know of support arithmetic on 64-bit integers. If the hardware doesn't allow native arithmetic, the compiler has to generate code to do the arithmetic, typically using support functions in the compiler's RTL.
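As a sketch of what that compiler-generated code does (the names here are made up, not taken from any real RTL): a 64-bit add becomes two 32-bit adds plus a carry.

```c
#include <stdint.h>

/* Illustrative only: what a 32-bit compiler's runtime effectively does
   for a 64-bit add. Add the low words, detect wrap-around (the carry),
   and propagate it into the high words. */
typedef struct { uint32_t lo, hi; } u64_pair;

static u64_pair add64(u64_pair a, u64_pair b)
{
    u64_pair r;
    r.lo = a.lo + b.lo;                  /* may wrap around */
    r.hi = a.hi + b.hi + (r.lo < a.lo);  /* wrap implies a carry out */
    return r;
}
```

Multiplication and division need more work, which is why runtimes ship dedicated helper routines for them.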
The struct is intended for use by compilers that don't provide native support for 64 bit data types. The very documentation to which you linked makes that clear:
Note Your C compiler may support 64-bit integers natively. For
example, Microsoft Visual C++ supports the __int64 sized integer type.
For more information, see the documentation included with your C
compiler.
Compilers that don't support native 64-bit integers will not be able to treat the QuadPart union member as an integer.
typedef union _ULARGE_INTEGER {
    struct {
        DWORD LowPart;
        DWORD HighPart;
    };
    struct {
        DWORD LowPart;
        DWORD HighPart;
    } u;
    ULONGLONG QuadPart;
} ULARGE_INTEGER, *PULARGE_INTEGER;
And the definition of ULONGLONG:
#if !defined(_M_IX86)
typedef unsigned __int64 ULONGLONG;
#else
typedef double ULONGLONG;
#endif
Of course, all compilers written in the past 10 years (or more) will have native support for 64 bit integers. But this union was originally introduced a very long time ago and the compiler landscape would have been different then. When looking at Windows header files, always bear in mind the history and legacy.
Typically ULARGE_INTEGER is used when you need to convert a pair of 32-bit integers into a 64-bit integer or vice-versa.
For example, consider manipulating a FILETIME structure:
void add_millisecond(FILETIME * ft)
{
    ULARGE_INTEGER uli;
    uli.LowPart = ft->dwLowDateTime;
    uli.HighPart = ft->dwHighDateTime;
    uli.QuadPart += 10000;
    ft->dwLowDateTime = uli.LowPart;
    ft->dwHighDateTime = uli.HighPart;
}
You can't assign the QuadPart value directly because you don't have it; all you have is the high and low parts.
So if you are going to perform arithmetic on the QuadPart(64 bits)
anyways you need a 64 bit processor, right?
No. But the real question should be: do you need a compiler that supports a 64-bit integer type? In this case the answer to that is also no. That's what these functions are for:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa383711%28v=vs.85%29.aspx
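The idea behind such helper functions can be sketched in portable C: a full 32x32 -> 64 bit multiply built only from 16-bit halves and 32-bit arithmetic (mul32x32 is a made-up name, not one of the Windows functions):

```c
#include <stdint.h>

/* Sketch: compute the full 64-bit product of two 32-bit values using
   only 32-bit arithmetic. Split each factor into 16-bit halves and
   combine the four partial products. */
static void mul32x32(uint32_t a, uint32_t b, uint32_t *hi, uint32_t *lo)
{
    uint32_t a_lo = a & 0xFFFF, a_hi = a >> 16;
    uint32_t b_lo = b & 0xFFFF, b_hi = b >> 16;

    uint32_t p0 = a_lo * b_lo;  /* contributes to bits  0..31 */
    uint32_t p1 = a_lo * b_hi;  /* contributes to bits 16..47 */
    uint32_t p2 = a_hi * b_lo;  /* contributes to bits 16..47 */
    uint32_t p3 = a_hi * b_hi;  /* contributes to bits 32..63 */

    uint32_t mid = (p0 >> 16) + (p1 & 0xFFFF) + (p2 & 0xFFFF);
    *lo = (p0 & 0xFFFF) | (mid << 16);
    *hi = p3 + (p1 >> 16) + (p2 >> 16) + (mid >> 16);
}
```

A compiler or a helper library does essentially this when the hardware has no 64-bit multiply.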
Related
Sometimes I need to declare a type that works like a simple type but may be larger than a CPU register on some machines and not on others.
For example, a UUID (128 bits) or a 128-bit datetime, on a 32-bit or a 64-bit machine.
In some cases there are already multiplatform libraries for this, but in others there aren't.
I know it is recommended to stick to existing libraries, but when none fits, how should I do it?
Example:
typedef uint_16 /* redeclare as */ code16;
typedef uint_32 /* redeclare as */ code32;

// option 1
struct code64
{
    packed uint_32 A, B;
};

// option 2
struct code64
{
    aligned uint32_t A, B;
};

// option 3
typedef packed uint_16 code64[4];

void Example( )
{
    code64 A = Foo();
    code64 B = Bar();
    code64 Q = Zaz(A, B);
}
As the tags indicate, I want it to compile both in C and in C++.
I have already searched for this subject in other questions on Stack Overflow.
A struct works like a simple type for the purpose of passing it around, taking its address, sizeof, etc. To just store some unstructured data inside, make a struct that contains a single array field.
typedef struct {
    uint8_t a[16];
} code128;
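As a sketch of why the wrapper helps (the helper names here are made up): the struct can be assigned, passed and returned by value, and compared with memcmp, none of which works on a bare array.

```c
#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t a[16];
} code128;

/* Made-up helpers to illustrate value semantics of the wrapper struct. */
static code128 make_code128(uint8_t fill)
{
    code128 c;
    memset(c.a, fill, sizeof c.a);  /* fill all 16 bytes */
    return c;                       /* returned by value, unlike an array */
}

static int code128_equal(code128 x, code128 y)
{
    return memcmp(x.a, y.a, sizeof x.a) == 0;
}
```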
Depending on what you want to store, you may want to put more structure in your struct. For example, a UUID is formally defined as having fields of a certain size, some of which are stored in platform endianness. (However not all software cares about endianness, and modern software often treats UUIDs as just a bunch of bytes.)
typedef struct {
    uint32_t time_low;
    uint16_t time_mid;
    uint16_t time_hi_and_version;
    uint8_t  clock_seq_hi_and_res;
    uint8_t  clock_seq_low;
    uint8_t  node[6];
} uuid;
For a 128-bit time, the layout depends on how the time is counted. It's common to represent time as a 64-bit number of seconds plus a fractional part in units such as nanoseconds or 2^-64 of a second. For example, on modern 64-bit Unix systems struct timeval amounts to this:
struct timeval {
    int64_t tv_sec;   /* time_t: seconds, 64-bit on modern systems */
    int64_t tv_usec;  /* suseconds_t: microseconds */
};
You can't do arithmetic on these structures directly, but arithmetic is meaningless on UUIDs anyway, and arithmetic on time is usually not plain integer arithmetic, because time with subsecond precision is usually not represented as a single number.
If you do need arithmetic, some platforms offer a 128-bit integer type, but that's not portable (any C or C++ implementation must offer an integer type that's at least 64 bits, but it doesn't have to go beyond that). For example, GCC and Clang both offer double-width integer types, so they provide __int128 on machines where the CPU has 64-bit registers, but only up to 64-bit integers on 32-bit CPUs.
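For instance, assuming GCC or Clang on a 64-bit target, a full 64x64 -> 128 bit multiply is a one-liner with that extension (mul64x64 is a made-up name):

```c
#include <stdint.h>

/* Sketch assuming GCC or Clang on a 64-bit target, where the
   non-standard __int128 type is available. */
static void mul64x64(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    unsigned __int128 p = (unsigned __int128)a * b;  /* full 128-bit product */
    *lo = (uint64_t)p;
    *hi = (uint64_t)(p >> 64);
}
```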
I'm quite new to programming; I have recently learnt a little C++ and I am using Visual Studio 2017 Community version.
I need to use a 64 bit integer to store a value and carry out some arithmetic operations, however my compiler only allows me to use 32 bits of the "int64" variable I have created.
Here is an example of some code and the behaviour
unsigned __int64 testInt = 0x0123456789ABCDEF;
printf("int value = %016X\n", testInt); // only 32 bits are being stored here? (4 bytes)
printf("size of integer in bytes = %i\n\n", sizeof(testInt)); // size of int is correct (8 bytes)
The value stored in the variable seems to be 0x0000000089ABCDEF.
Why can I not use all 64 bits of this integer, it seems to act as a 32 bit int?
Probably I'm missing something basic, but I can't find anything relating to this from searching :(
It would be nice if it were just something basic, but it turns out that 64-bit ints are not dealt with consistently on all platforms, so we have to lean on macros.
This answer describes the use of PRIu64, PRIx64, and related macros included in <inttypes.h>. It looks funny, but I think the portable solution would look like this:
#include <inttypes.h>
unsigned __int64 testInt = 0x0123456789ABCDEF;
printf("int value = %016" PRIX64 "\n", testInt);
The PRIX64 expands to the appropriate format specifier depending on your platform (probably llX for Visual Studio).
Format specifier %X takes an unsigned int (probably 32 bit on your system), whereas __int64 corresponds to long long.
Use printf("int value = %016llX\n", testInt) instead. Documentation can be found, for example, at cppreference.com.
I can't find the answer anywhere. The question is (?) simple. Let's assume I have such a function:
class Sth {
private:
    long u;
public:
    void set(long u)
    {
        this->u = u;
    }
};
and I run it like this:
Sth s;
CORBA::Long val = 5;
s.set(val);
Can I do this? Will everything be ok?
This should be fine. According to the IBM reference, an IDL long is in range of -2^31 to 2^31-1 and at least 32 bits in size.
It should convert natively to long in C++. The standard doesn't define a size, but it defines the minimum ranges these types can hold. CORBA::Long is a typedef of long, which may change between platforms.
You could use an int_least32_t (from <stdint.h>, or <cstdint>), which is a native type guaranteed to be at least 32 bits wide and typedef'd to the appropriate compiler native type.
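A minimal sketch of that guarantee (store_idl_long is a made-up helper; the CORBA::Long-to-long mapping is taken from the answers above):

```c
#include <stdint.h>
#include <limits.h>

/* int_least32_t is guaranteed to cover at least -2^31 .. 2^31-1, the
   IDL long range, so a value received as CORBA::Long always fits. */
_Static_assert(INT_LEAST32_MAX >= 2147483647, "int_least32_t covers IDL long");

static int_least32_t store_idl_long(long v)
{
    return (int_least32_t)v;  /* no truncation for values in the IDL range */
}
```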
It depends on the implementation of the IDL to C++ mapping you are using. In the new IDL to C++11 mapping, a long in IDL maps to an int32_t.
I have an int[2] representation of a long int from a 32-bit machine and want to convert it to long on a 64-bit machine. Is there a safe, architecture-independent way of doing this conversion?
The source machine is 32-bit and an int is 32 bits. The destination machine is 64-bit and the long long type is definitely 64 bits.
can I do the following?
long i;
int j[2];
#ifdef LITTLEENDIAN
j[1] = *(int*)(&i);
j[0] = *(((int*)(&i)) + 1);
#else
j[0] = *(int*)(&i);
j[1] = *(((int*)(&i)) + 1);
#endif
If the above is incorrect, then what is the best and safest way for this? I am sure this would have been asked previously, but I didn't find a clean answer.
Thanks
I have an int[2] representation of a long int from a 32-bit machine and want to convert it to long on a 64-bit machine. Is there a safe, architecture-independent way of doing this conversion?
Not really, because apart from endianness, the sizes of the two data types may vary as well. On some popular platforms, int and long have the same size (both 32 bits).
Ultimately, it depends on how you created your int[2] representation. Whatever you did to create that int array has to be reversed in order to get a valid long out of it.
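As a sketch of that reversal in portable form, assuming the array was made by splitting the value into a low word in j[0] and a high word in j[1] (swap the indices if yours is the other way around; the helper names are made up):

```c
#include <stdint.h>

/* Rebuild a 64-bit value from two 32-bit words with shifts and masks:
   no pointer casts, no endianness #ifdefs. j[0] holds the low word,
   j[1] the high word. */
static int64_t join_words(const int32_t j[2])
{
    return (int64_t)(((uint64_t)(uint32_t)j[1] << 32) | (uint32_t)j[0]);
}

/* The matching split, so the round trip is well defined. */
static void split_words(int64_t v, int32_t j[2])
{
    j[0] = (int32_t)(uint32_t)((uint64_t)v);
    j[1] = (int32_t)(uint32_t)((uint64_t)v >> 32);
}
```

Because the shifts operate on values rather than memory layout, the same code works on either endianness.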
One approach which will work in practice (but is, technically speaking, undefined behavior), is to place both in a union:
union {
    int i2[2];
    long l;
} u;
Now you can simply write to u.i2 and read from u.l. The C++ standard technically doesn't allow this (it is undefined behavior), but it is such a common trick that major compilers explicitly support it anyway.
However, a better approach might be to use a char[] instead of int[], because chars are explicitly allowed to alias other types.
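In the same spirit, copying through memcpy is always well defined between objects of the same size, unlike the union trick. A sketch of that route (bytes_to_int64 is a made-up name; it interprets the bytes in host order):

```c
#include <stdint.h>
#include <string.h>

/* memcpy between same-sized objects is well defined in both C and C++,
   and compilers typically optimize the copy away entirely. */
static int64_t bytes_to_int64(const unsigned char bytes[8])
{
    int64_t v;
    memcpy(&v, bytes, sizeof v);  /* reinterpret the bytes in host order */
    return v;
}
```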
If you are sure of having a 32-bit int and a 64-bit long, then you can use the union concept.
union Convert
{
    long i;
    int j[2];
};
The width concern could be addressed by using boost::uint64_t on both machines.
http://www.boost.org/doc/libs/1_46_1/libs/integer/doc/html/boost_integer/cstdint.html#boost_integer.cstdint.exact_width_integer_types
I am converting some code from C to C++ in MS dev studio under win32. In the old code I was doing some high speed timings using QueryPerformanceCounter() and did a few manipulations on the __int64 values obtained, in particular a minus and a divide. But now under C++ I am forced to use LARGE_INTEGER because that's what QueryPerformanceCounter() returns. But now on the lines where I try and do some simple maths on the values I get an error:
error C2676: binary '-' : 'LARGE_INTEGER' does not define this operator or a conversion to a type acceptable to the predefined operator
I tried to cast the variables to __int64 but then get:
error C2440: 'type cast' : cannot convert from 'LARGE_INTEGER' to '__int64'
How do I resolve this?
Thanks,
LARGE_INTEGER is a union of a 64-bit integer and a pair of 32-bit integers. If you want to perform 64-bit arithmetic on one you need to select the 64-bit int from inside the union.
LARGE_INTEGER a = { 0 };
LARGE_INTEGER b = { 0 };
__int64 c = a.QuadPart - b.QuadPart;
LARGE_INTEGER is a union, documented here. You probably want the QuadPart member.
Here it is:
LARGE_INTEGER x, y;
///
// Some code...
///
__int64 diff = x.QuadPart - y.QuadPart;
Because QuadPart is defined as a LONGLONG, which is the same as __int64.
LARGE_INTEGER is a union; you can still use .QuadPart if you want to work on the 64-bit value.
As the documentation says in the Remarks section:
The LARGE_INTEGER structure is actually a union. If your compiler has built-in support for 64-bit integers, use the QuadPart member to store the 64-bit integer. Otherwise, use the LowPart and HighPart members to store the 64-bit integer.
So if your compiler supports 64-bit integers, use QuadPart like this:
LARGE_INTEGER a, b;
__int64 diff = a.QuadPart - b.QuadPart;
In addition to the other answers: if you are looking to construct a LARGE_INTEGER with a value other than zero, you can assign the low and high parts separately. LowPart comes first as defined in the union, and only HighPart is signed.
LARGE_INTEGER li = {0x01234567, -1};
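To see why that initializer form works, here is a portable mimic of the layout (not the real winnt.h definition; it assumes C11 anonymous structs and, for the QuadPart check, a little-endian machine like all current Windows targets). Aggregate initialization fills the first union member, the struct, in declaration order.

```c
#include <stdint.h>

/* A mimic of LARGE_INTEGER, not the real winnt.h definition. The braced
   initializer fills the first union member -- the anonymous struct --
   so the values land in LowPart first, then HighPart. */
typedef union {
    struct {
        uint32_t LowPart;
        int32_t  HighPart;  /* only the high part carries the sign */
    };
    int64_t QuadPart;
} MyLargeInteger;
```

With {0x01234567, -1}, QuadPart reads back as 0xFFFFFFFF01234567 on a little-endian machine.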