In C++, can I simply cast a pointer to a DWORD?
MyClass * thing;
DWORD myPtr = (DWORD)thing;
Would that work?
You undoubtedly can do it.
Whether it would work will depend on the environment and what you want it to do.
On 32-bit Windows [1] (the most common place to see DWORD) it'll normally be fine. On 64-bit Windows (where you also see DWORD, but not nearly as much) it generally won't.
[1] Or, more accurately, when compiled as a 32-bit executable that will run as a 32-bit process, regardless of the actual copy of Windows you happen to run that on.
In Windows it's quite common to pass pointers this way, for example in window messages. LPARAM is a typedef for LONG_PTR and is quite often used to pass pointers to structures. You should use reinterpret_cast<DWORD_PTR>(thing) for the cast.
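For illustration, a minimal sketch of round-tripping a pointer through a window message; it assumes <windows.h>, an existing window handle hwnd, and a hypothetical MyClass:

// Sender side: the pointer travels in the LPARAM, a pointer-sized integer.
MyClass* thing = new MyClass;
::PostMessage(hwnd, WM_APP, 0, reinterpret_cast<LPARAM>(thing));

// Receiver side, inside the window procedure:
MyClass* received = reinterpret_cast<MyClass*>(lParam);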
No: in a 64-bit process, a pointer is 64 bits but a DWORD is only 32 bits. Use a DWORD_PTR.
http://en.cppreference.com/w/cpp/language/explicit_cast
Read that, understand that, avoid C-style casts because they hide a lot.
It can be done, but it makes little sense; for example, DWORD is 4 bytes and a pointer (these days) is typically 8.
DWORD myPtr = reinterpret_cast<DWORD&>(thing);
should work, but it may be undefined behaviour or truncate the pointer - if anything will work, that will!
BTW, reinterpret_cast is the C++ way of saying "Trust me my dear compiler, I know what I'm doing" - it attempts to interpret the bits (0s and 1s) of one thing as another, regardless of how much sense that makes.
A legitimate use though is the famous 1/sqrt hack ;)
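For reference, a sketch of that hack (the well-known magic-constant inverse square root). memcpy is used here instead of reinterpret_cast so the bit reinterpretation stays well-defined:

#include <cstdint>
#include <cstring>

float fast_rsqrt(float x)
{
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);   // read the float's bits as an integer
    bits = 0x5f3759df - (bits >> 1);       // the famous magic-constant approximation
    float y;
    std::memcpy(&y, &bits, sizeof y);      // write the bits back as a float
    return y * (1.5f - 0.5f * x * y * y);  // one Newton-Raphson refinement step
}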
Related
I learned this today at my work place. And I read this, this and this before posting my question.
Here's what my senior co-worker told me:
You cannot assign void* to UINT or unsigned int. It won’t work for 64 bit.
But why? Is it because void* and unsigned int carry different sizes on different architectures (as mentioned in other questions), or something else?
Yes, that is the case.
Your implementation may provide the optional type uintptr_t, however, which is defined as follows:
The following type designates an unsigned integer type with the property that any valid
pointer to void can be converted to this type, then converted back to pointer to void,
and the result will compare equal to the original pointer:
uintptr_t
The signed counterpart intptr_t may also be available. These types are available in the <cstdint> header.
By choosing to use these types, you are acknowledging that your code will only compile on the subset of implementations that provide these types for your target machines.
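A minimal sketch of that guaranteed round trip, assuming the implementation provides uintptr_t in <cstdint>:

#include <cstdint>

int value = 42;
void* p = &value;
std::uintptr_t bits = reinterpret_cast<std::uintptr_t>(p); // pointer -> integer
void* q = reinterpret_cast<void*>(bits);                   // integer -> pointer
// The standard guarantees that q == p.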
Size is obviously a show-stopper: if a void* can't physically fit in an unsigned int, the game is over. But even if sizeof(void*) == sizeof(unsigned int) you have a type-compatibility problem: one holds a data pointer, the other data. You'd have to reinterpret_cast<>() one to the other, and all bets are off as to how well that would work.
You're essentially correct: It is not guaranteed that an unsigned int has the same machine word length as a void*, so you can't cast between them without losing information. Here is an excellent FAQ answer about it.
The main thing to keep in mind is that void* is an arbitrary data pointer, not a truly arbitrary pointer. In fact, there is no such thing as a truly generic pointer: certain machines may have different address spaces for programs and data, for example, so the sizes of pointers to each might differ. See this SO answer for more info.
Depends on the target for your application. You tagged VC++ and mention the type UINT, so it seems you're building for Windows. In 32-bit Windows the pointer size is 32 bits, while in 64-bit Windows it's 64 bits. However, UINT is defined as 32 bits in both Windows flavors. You can use the MS-specific unsigned __int64 or UINT64 type instead of UINT to ensure it's big enough for your pointer. You can also use the INT_PTR/UINT_PTR types, which are specifically designed to match the size of a pointer (thus making it transparent across 32/64-bit flavors).
See http://msdn.microsoft.com/en-us/library/s3f49ktz.aspx for reference on various data types.
Of course, all of these will make your program not natively portable to other architecture/OSes.
I want to use a long integer that will be interpreted as a number when the MSB is set, and otherwise as a pointer. Would this work, or would I run into problems in either C or C++?
This is on a 64-bit system.
On x86-64, a pointer whose address doesn't fit in 47 bits WILL have the 63rd bit set, since all the bits above the number of address bits the architecture actually implements (currently 48) must have the same value as the most significant implemented bit - addresses must be in "canonical" form. (That is, valid addresses resume above 0007 FFFF FFFF FFFF at FFF8 0000 0000 0000 - everything in between is "invalid" as a pointer.)
Those may well be addresses ONLY used by the kernel, but I'm not sure that's guaranteed.
However, I would try to avoid using tricks like this - it's likely to come back and haunt you at some point.
People have tried tricks like this before.
It never works out well in the long run.
Simply don't do it.
Edit: better link - see the reference to "bit 31", which was previously never returned as set. Once it could be set (with more than 2 GB of RAM, gasp!) it would break naughty programs, so programs had to opt in once that much memory became the norm, precisely because people had used trickery like this (amongst other things). And now my lovely, short and to-the-point answer has become too long :-)
So would this work or would I run into problems in either C or C++?
Do you have 64 bits? Do you want your code to be portable to 32-bit systems? long does not necessarily have 64 bits. Big-endian vs. little-endian? (Do you know which your system is?)
Plus: hopeless confusion. Please just use an extra variable to store this information, or you will have many, many bugs surrounding this.
It depends on the architecture. The x86_64 architecture, for example, currently uses 48-bit addressing. That means you could use 16 bits for your own needs (a trick sometimes referred to as "pointer packing"). However, even the x86_64 architecture definition allows this limit to be raised in future implementations to the full 64 bits. If that happens, you may run into a situation where a lot of your code needs to be changed. So if you really must go that way, make sure your pointer packing is kept in one place that is easy to change in the future. For other architectures you have to check for yourself.
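As an illustration, a hedged sketch of such pointer packing, assuming 48-bit user-space addresses (the helper names are made up); keeping it in one small spot like this makes it easy to change later:

#include <cstdint>

// Pack a 16-bit tag into the currently-unused top bits of an x86_64 pointer.
// ASSUMES 48-bit user-space addresses; revisit if the architecture changes.
inline std::uint64_t pack(void* p, std::uint16_t tag)
{
    return (reinterpret_cast<std::uint64_t>(p) & 0x0000FFFFFFFFFFFFull)
         | (static_cast<std::uint64_t>(tag) << 48);
}

inline void* unpack_ptr(std::uint64_t v)
{
    return reinterpret_cast<void*>(v & 0x0000FFFFFFFFFFFFull); // user-space pointers only
}

inline std::uint16_t unpack_tag(std::uint64_t v)
{
    return static_cast<std::uint16_t>(v >> 48);
}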
Unless you really need the space, or you're keeping a lot of these things around, I would just use a plain union and add a tag field. If you're going to go down that route, make sure that your memory is aligned to fit your needs.
Take a look at boost::lockfree::detail::tagged_ptr from Boost.Lockfree.
This class was introduced in the latest Boost 1.53. It stores a pointer plus an additional 16 bits in a 64-bit variable.
Don't do such tricks. If you need to distinguish integers from pointers inside some container, consider using a separate bit set to indicate the flag; in C++, std::bitset could be good enough (see the sketch after the list of reasons below).
Reasons:
1. Nobody actually guarantees that pointers fit in unsigned long or unsigned long long. If you need to store them, always use sizeof() and the void* type (if you need to discard information about the pointed-to object).
2. Even on one system, addresses are highly dependent on architecture.
3. Kernel modules could seriously change the mapping logic for a process, so you never know what addresses you will need.
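A minimal sketch of that std::bitset suggestion, assuming a hypothetical fixed-size container of 64 slots:

#include <bitset>
#include <cstddef>
#include <cstdint>

const std::size_t kSlots = 64;
std::uint64_t slots[kSlots];     // raw values: either plain integers or pointer bits
std::bitset<kSlots> is_pointer;  // parallel flags: true if slots[i] holds a pointer

void store_int(std::size_t i, std::uint64_t v) { slots[i] = v; is_pointer[i] = false; }

void store_ptr(std::size_t i, void* p)
{
    slots[i] = reinterpret_cast<std::uintptr_t>(p);
    is_pointer[i] = true;
}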
Remember that the virtual address returned to your program does not necessarily line up with the actual physical address in memory. In fact, unless you are directly manipulating fairly special memory [e.g. some forms of graphics memory], this is absolutely the case.
In this case, it's the MMU that defines the values of the pointers your program sees. For x64 I'm pretty sure that's (currently) 48 bits, but as Mats specifies above, once you've got the top bit of the 48 set, you get the 63rd bit set as well.
So taking his answer and mine together: it's entirely possible to get a pointer with the 47th bit set even with a small amount of RAM, and once you do, you get the 63rd bit set too.
If the "64-bit system" in question is x86_64, then yes, it will work.
A C++-specific question. I read a question about what makes a program 32-bit or 64-bit, and the answer it got was something like this (sorry, I can't find the question; I looked at it some days ago and can't find it again :( ): as long as you don't make any "pointer assumptions", you only need to recompile it. So my question is: what are pointer assumptions? To my understanding there are 32-bit pointers and 64-bit pointers, so I figure it has something to do with that. Please show the difference in code between them. Any other good habits to keep in mind while writing code that make it easy to convert between the two are also welcome, though please share examples with them.
PS. I know there is this post:
How do you write code that is both 32 bit and 64 bit compatible?
but I thought it was kind of too general, with no good examples for new programmers like myself - like, what is a 32-bit storage unit, etc.? I'm hoping to break it down a bit more (no pun intended ^^).
In general it means that your program behavior should never depend on the sizeof() of any types (that are not made to be of some exact size), neither explicitly nor implicitly (this includes possible struct alignments as well).
Pointers are just a subset of them, and it probably also means that you should not try to rely on being able to convert between unrelated pointer types and/or integers, unless they are specifically made for this (e.g. intptr_t).
In the same way, you need to take care with things written to disk, where you should also never rely on the size of e.g. built-in types being the same everywhere.
Whenever you have to (because of e.g. external data formats), use explicitly sized types like uint32_t, as sketched below.
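For instance, a minimal sketch of writing a record with explicitly sized types (the struct and file name are made up; byte order is a separate concern not handled here):

#include <cstdint>
#include <cstdio>

struct Record {
    std::uint32_t id;      // always 4 bytes, on every platform
    std::uint32_t length;  // ditto - never plain int/long in a file format
};

void save(const Record& r)
{
    std::FILE* f = std::fopen("records.bin", "wb"); // hypothetical file name
    if (f) {
        std::fwrite(&r, sizeof r, 1, f);
        std::fclose(f);
    }
}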
For a well-formed program (that is, a program written according to syntax and semantic rules of C++ with no undefined behaviour), the C++ standard guarantees that your program will have one of a set of observable behaviours. The observable behaviours vary due to unspecified behaviour (including implementation-defined behaviour) within your program. If you avoid unspecified behaviour or resolve it, your program will be guaranteed to have a specific and certain output. If you write your program in this way, you will witness no differences between your program on a 32-bit or 64-bit machine.
A simple (forced) example of a program that will have different possible outputs is as follows:
#include <iostream>

int main()
{
    std::cout << sizeof(void*) << std::endl;
    return 0;
}
This program will likely have different output on 32- and 64-bit machines (but not necessarily). The result of sizeof(void*) is implementation-defined. However, it is certainly possible to have a program that contains implementation-defined behaviour but is resolved to be well-defined:
#include <iostream>

int main()
{
    int size = sizeof(void*);
    if (size != 4) {
        size = 4;
    }
    std::cout << size << std::endl;
    return 0;
}
This program will always print out 4, despite the fact it uses implementation-defined behaviour. This is a silly example because we could have just done int size = 4;, but there are cases when this does appear in writing platform-independent code.
So the rule for writing portable code is: aim to avoid or resolve unspecified behaviour.
Here are some tips for avoiding unspecified behaviour:
Do not assume anything about the size of the fundamental types beyond what the C++ standard specifies: a char is at least 8 bits, both short and int are at least 16 bits, and so on.
Don't try to do pointer magic (casting between pointer types or storing pointers in integral types).
Don't use an unsigned char* to read the value representation of a non-char object (for serialisation or related tasks).
Avoid reinterpret_cast.
Be careful when performing operations that may over or underflow. Think carefully when doing bit-shift operations.
Be careful when doing arithmetic on pointer types.
Don't use void*.
There are many more occurrences of unspecified or undefined behaviour in the standard. It's well worth looking them up. There are some great articles online that cover some of the more common differences that you'll experience between 32- and 64-bit platforms.
"Pointer assumptions" is when you write code that relies on pointers fitting in other data types, e.g. int copy_of_pointer = ptr; - if int is a 32-bit type, then this code will break on 64-bit machines, because only part of the pointer will be stored.
So long as pointers are only stored in pointer types, it should be no problem at all.
Typically, pointers are the size of the "machine word", so on a 32-bit architecture pointers are 32 bits, and on a 64-bit architecture all pointers are 64 bits. However, there are SOME architectures where this is not true. I have never worked on such machines myself [other than x86 with its "far" and "near" pointers - but let's ignore that for now].
Most compilers will warn you when you convert a pointer to an integer that the pointer doesn't fit into, so if you enable warnings, MOST of the problems will become apparent - fix the warnings, and chances are pretty decent that your code will work straight away.
There should be no difference between 32-bit and 64-bit code; the goal of C/C++ and other programming languages is portability, in contrast to assembly language.
The only difference is the distribution you compile your code on; all the work is done automatically by your compiler/linker, so you just don't have to think about it.
But: if you are programming on a 64-bit distribution and you need to use an external library, for example SDL, that external library will also have to be compiled as 64-bit if you want your code to link.
One thing to know is that your ELF file will be bigger on a 64-bit distribution than on a 32-bit one; that's only logical.
What's the point about pointers? When you increment/change a pointer, the compiler advances it by the size of the pointed-to type.
The pointer size itself is defined by your processor's register size / the distribution you're working on.
But you just don't have to care about this; the compiler does everything for you.
In sum: that's why you can't execute a 64-bit ELF file on a 32-bit distribution.
Typical pitfalls for 32-bit/64-bit porting are:
1. The implicit assumption by the programmer that sizeof(void*) == 4 * sizeof(char).
If you're making this assumption and e.g. allocate arrays that way ("I need 20 pointers so I allocate 80 bytes"), your code breaks on 64-bit because it'll cause buffer overruns.
2. The "kitten-killer", int x = (int)&something; (and the reverse, void* ptr = (void*)some_int). Again an assumption that sizeof(int) == sizeof(void*). This doesn't cause overflows but loses data - namely, the upper 32 bits of the pointer.
Both of these issues are of a class called type aliasing (assuming identity / interchangeability / equivalence on a binary-representation level between two types), and such assumptions are common; like on UN*X, assuming time_t, size_t, off_t are int, or on Windows, that HANDLE, void* and long are interchangeable, etc.
3. Assumptions about data structure / stack space usage (see 5. below as well). In C/C++ code, local variables are allocated on the stack, and the space used there differs between 32-bit and 64-bit mode due to point 5 below, and due to the different rules for passing arguments (32-bit x86 usually on the stack, 64-bit x86 partly in registers). Code that just about gets away with the default stack size on 32-bit might cause stack-overflow crashes on 64-bit.
This is relatively easy to spot as a cause of the crash, but depending on the configurability of the application, possibly hard to fix.
4. Timing differences between 32-bit and 64-bit code (due to different code sizes / cache footprints, different memory-access characteristics / patterns, or different calling conventions) might break "calibrations". Say, for (int i = 0; i < 1000000; ++i) sleep(0); is likely to have different timings for 32-bit and 64-bit ...
5. Finally, the ABI (Application Binary Interface). There are usually bigger differences between 64-bit and 32-bit environments than just the size of pointers...
Currently, two main "branches" of 64-bit environments exist: IL32P64, also known as LLP64 (what Win64 uses - int and long are int32_t, only uintptr_t/void* is uint64_t, talking in terms of the sized integers from <stdint.h>), and LP64 (what UN*X uses - int is int32_t, long is int64_t and uintptr_t/void* is uint64_t). But there are "subdivisions" of different alignment rules as well - some environments assume long, float or double align at their respective sizes, while others assume they align at multiples of four bytes. In 32-bit Linux, they all align at four bytes, while in 64-bit Linux, float aligns at four bytes and long and double at eight-byte multiples.
The consequence of these rules is that in many cases, both sizeof(struct { ... }) and the offsets of structure/class members are different between 32-bit and 64-bit environments even if the data type declaration is completely identical.
Beyond impacting array/vector allocations, these issues also affect data input/output, e.g. through files - if a 32-bit app writes, say, struct { char a; int b; char c; long d; double e; } to a file that the same app recompiled for 64-bit reads in, the result will not be quite what's hoped for.
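A small sketch that makes this visible (the struct mirrors the example above); the sizes and offsets it prints typically differ between 32-bit and 64-bit builds of the same source:

#include <cstddef>
#include <cstdio>

struct Sample { char a; int b; char c; long d; double e; };

int main()
{
    // sizeof(long) and the alignment rules differ between the ABIs,
    // so these numbers change with the target, not with the source code.
    std::printf("sizeof=%zu d@%zu e@%zu\n",
                sizeof(Sample), offsetof(Sample, d), offsetof(Sample, e));
    return 0;
}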
The examples just given are only about language primitives (char, int, long etc.) but of course affect all sorts of platform-dependent / runtime-library data types, whether size_t, off_t, time_t, HANDLE, or essentially any nontrivial struct/union/class - so the space for error here is large.
And then there are the lower-level differences, which come into play e.g. for hand-optimized assembly (SSE/SSE2/...): 32-bit and 64-bit have different (numbers of) registers and different argument-passing rules; all of this strongly affects how such optimizations perform, and it's very likely that e.g. SSE2 code which gives the best performance in 32-bit mode will need to be rewritten / enhanced to give the best performance in 64-bit mode.
There's also code design constraints which are very different for 32bit and 64bit, particularly around memory allocation / management; an application that's been carefully coded to "maximize the hell out of the mem it can get in 32bit" will have complex logic on how / when to allocate/free memory, memory-mapped file usage, internal caching, etc - much of which will be detrimental in 64bit where you could "simply" take advantage of the huge available address space. Such an app might recompile for 64bit just fine, but perform worse there than some "ancient simple deprecated version" which didn't have all the maximize-32bit peephole optimizations.
So, ultimately, it's also about enhancements / gains, and that's where more work, partly in programming, partly in design/requirements comes in. Even if your app cleanly recompiles both on 32bit and 64bit environments and is verified on both, is it actually benefitting from 64bit ? Are there changes that can/should be done to the code logic to make it do more / run faster in 64bit ? Can you do those changes without breaking 32bit backward compatibility ? Without negative impacts on the 32bit target ? Where will the enhancements be, and how much can you gain ?
For a large commercial project, answers to these questions are often important markers on the roadmap because your starting point is some existing "money maker"...
If we have:
__int32 some_var = 0;
What is the best (if any) way to call InterlockedExchange, InterlockedIncrement and other interlocked functions which require LONG* for some_var ?
Since there is a guarantee that LONG is 32 bits on any Windows, it's probably safe just to pass (LONG*)&some_var. However, that seems quite ugly to me and I can't find confirmation that it's safe.
Note, I can't change type to long because it's not portable. I need exactly 32 bit type.
Update: some research into libraries which provide portable atomic operations has shown that nobody bothers about the casting. Some examples:
Apache Portable Runtime (APR):
typedef WINBASEAPI apr_uint32_t (WINAPI * apr_atomic_win32_ptr_val_fn)
    (apr_uint32_t volatile *,
     apr_uint32_t);

APR_DECLARE(apr_uint32_t) apr_atomic_add32(volatile apr_uint32_t *mem, apr_uint32_t val)
{
#if (defined(_M_IA64) || defined(_M_AMD64))
    return InterlockedExchangeAdd(mem, val);
#elif defined(__MINGW32__)
    return InterlockedExchangeAdd((long *)mem, val);
#else
    return ((apr_atomic_win32_ptr_val_fn)InterlockedExchangeAdd)(mem, val);
#endif
}
atomic_ops:
AO_INLINE AO_t
AO_fetch_and_sub1_full (volatile AO_t *p)
{
    return _InterlockedDecrement64((LONGLONG volatile *)p) + 1;
}
Well, it's a rock and a hard place. An atomic increment is a heavy duty platform implementation detail. That's why the LONG typedef exists in the first place. Some future operating system 20 or 50 years from now might redefine that type. When, say, 256 bit cores are common and atomic increments work differently. Who knows.
If you want to write truly portable code then you should use truly portable types. Like LONG. And it will be Microsoft's burden to make it work, instead of yours.
It's going to be a 32-bit integer for quite a while to come, I'd recommend you don't worry about it.
You might as well change the type to a long, leaving portability behind, because the entire "Interlocked" family of atomic operations is also not portable.
Incidentally, as a side note, I thought Interlocked supported an integer overload. Perhaps that's only in .NET, though.
Well, __int32 isn't a portable type either. So my suggestion to make the problem go away is to use a typedef. On Windows, you can do:
typedef LONG my_int32;
...and safely pass a pointer to such a type to InterlockedExchange(). On other systems, use whatever is a 32 bit type there - for example, if they have stdint.h, you can do:
typedef int32_t my_int32;
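On the Windows branch, usage is then cast-free; a sketch (InterlockedIncrement takes a LONG volatile*):

typedef LONG my_int32;

volatile my_int32 counter = 0;
InterlockedIncrement(&counter); // the types match exactly, no cast required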
Just do assert(sizeof(LONG) == sizeof(some_var)) and only worry about the problem when the assertion fails. YAGNI. As long as the assertion holds, you can use reinterpret_cast<LONG*>(&some_var).
Amusingly enough, there is InterlockedExchange, a Windows API that takes a LONG*, and _InterlockedExchange, an MSVC compiler intrinsic that takes a long*.
Because portability has been invoked, I'll also link a page on GCC atomic intrinsics.
The point is well taken, however: MSVC uses the ILP32 data model for 32-bit builds and LLP64 for 64-bit builds. GCC-based toolchains (such as MinGW) do exist for Windows and may very well implement the LP64 model - leading to amusing(!) occurrences such as long being 64 bits but LONG being 32.
If you are sticking to Microsoft compilers, it's not something you need to worry about.
So, in conclusion:
1. The value being passed MUST be qualified with volatile.
2. Because you are (a) using a 32-bit quantity (and that is your requirement) and (b) using the explicitly 32-bit form of the InterlockedXXX API, it's 100% safe to just do the bloody cast and be done with it: InterlockedIncrement is going to operate on a 32-bit value on all platforms, and your variable is going to be explicitly 32 bits on all platforms - even with different data models in use.
The cast is safe; don't overcomplicate things for no reason.
Hans Passant has expressed it very well:
"An atomic increment is a heavy duty platform implementation detail."
That is why implementations provide type specific overloads.
atomic_ops is one such project.
Theoretically, every Interlocked function could be implemented using full-blown locks - which, in turn, rely on platform specifics :-) - but this would be real performance overkill for types and functions that are supported natively on the target hardware platform.
There is some standardization going on in this regard, see e.g. similar questions answered here and here as well.
I have a package that compiles and works fine on a 32-bit machine. I am now trying to get it to compile on a 64-bit machine and find the following error-
error: cast from ‘void*’ to ‘int’ loses precision
Is there a compiler flag to suppress these errors? or do I have to manually edit these files to avoid these casts?
The issue is that, on a 32-bit platform, an int (which is a 32-bit integer) can hold a pointer value.
When you move to 64-bit, you can no longer store a pointer in an int - it isn't large enough to hold a 64-bit pointer. The intptr_t type is designed for this.
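A brief sketch of the portable fix, assuming <cstdint> is available:

#include <cstdint>

int x = 0;
void* p = &x;
std::intptr_t i = reinterpret_cast<std::intptr_t>(p); // wide enough on 32- and 64-bit
void* back = reinterpret_cast<void*>(i);              // round-trips to the original pointer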
Your code is broken. It won't become any less broken by ignoring the warnings the compiler gives you.
What do you think will happen when you try to store a 64-bit wide pointer into a 32-bit integer? Half your data will get thrown away. I can't imagine many cases where that is the correct thing to do, or where it won't cause errors.
Fix your code. Or stay on the 32-bit platform that the code currently works on.
If your compiler defines intptr_t or uintptr_t, use those, as they are integer types guaranteed to be large enough to store a pointer.
If those types are not available, size_t or ptrdiff_t are also large enough to hold a pointer on most (not all) platforms. Or use long (typically 64 bits on 64-bit platforms with the GCC compiler) or long long (a C99 type which most, but not all, compilers support in C++), or some other implementation-defined integral type that is at least 64 bits wide on a 64-bit platform.
My guess is that in the OP's situation a void* is being used as general storage for an int, where the void* is larger than the int. So e.g.:
int i = 123;
void *v = (void*)i; // 64bit void* being (ab)used to store 32bit value
[..]
int i2 = (int)v; // we want our 32bits of the 64bit void* back
The compiler doesn't like that last line.
I'm not going to weigh in on whether it's right or wrong to abuse a void* this way. If you really want to fool the compiler, the following technique seems to work, even with -Wall:
int i2 = *((int*)&v);
Here it takes the address of v, converts that address to a pointer to the datatype you want, then follows the pointer. Note that this reads the first sizeof(int) bytes of v, so it recovers the low 32 bits only on a little-endian machine.
It's an error for a reason: int is only half as big as void* on your machine, so you can't just store a void* in an int. You would lose half of the pointer, and when the program later tries to get the pointer out of that int again, it won't get anything useful.
Even if the compiler wouldn't give an error the code most likely wouldn't work. The code needs to be changed and reviewed for 64bit compatibility.
Casting a pointer to an int is horrible from a portability perspective. The size of int is defined by the mix of compiler and architecture. This is why the stdint.h header was created, to allow you to explicitly state the size of the type you're using across many different platforms with many different word sizes.
You'd be better off casting to a uintptr_t or intptr_t (from stdint.h, and choose the one that best matches the signedness you need).
You can try to use intptr_t for best portability instead of int where pointer casts are required, such as callbacks.
You do not want to suppress these errors because most likely, they are indicating a problem with the code logic.
If you suppress the errors, this could even work for a while. While the pointer points to an address within the first 4 GB, the upper 32 bits will be 0 and you won't lose any data. But once you get an address above 4 GB, your code will start "mysteriously" not working.
What you should do is modify any int that can hold a pointer to intptr_t.
You have to manually edit those files in order to replace them with code that isn't likely to be buggy and nonportable.
Suppressing the warnings are a bad idea, but there may be a compiler flag to use 64-bit ints, depending on your compiler and architecture, and this is a safe way to fix the problem (assuming of course that the code didn't also assume ints are 32-bit). For gcc, the flag is -m64.
The best answer is still to fix the code, I suppose, but if it's legacy third-party code and these warnings are rampant, I can't see this refactoring as being a very efficient use of your time. Definitely don't cast pointers to ints in any of your new code, though.
As defined by the current C++ standard, there is no integer type which is guaranteed to hold a pointer. Some platforms will have an intptr_t, but this is not a standard feature of C++. Fundamentally, treating the bits of a pointer as if they were an integer is not a portable thing to do (although it can be made to work on many platforms).
If the reason for the cast is to make the pointer opaque, then void* already achieves this, so the code could use void* instead of int. A typedef might make this a little nicer in the code:
typedef void * handle_t;
If the reason for the cast is to do pointer arithmetic with byte granularity, then the best way is probably to cast to a (char const *) and do the math with that.
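For example, a short sketch of byte-granularity arithmetic through a char pointer:

double buffer[8];
const char* bytes = reinterpret_cast<const char*>(buffer);
const void* fourth_byte = bytes + 4; // advances exactly 4 bytes, not 4 doubles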
If the reason for the cast is to achieve compatibility with some existing library (perhaps an older callback interface) which cannot be modified, then I think you need to review the documentation for that library. If the library is capable of supporting the functionality that you require (even on a 64-bit platform), then its documentation may address the intended solution.
I faced a similar problem. I solved it in the following way:
#include <stdint.h>

/* NB: "64BIT" is not a valid macro name (it can't start with a digit);
   use one like ENV64BIT, defined by your build system on 64-bit targets. */
#ifdef ENV64BIT
typedef uint64_t tulong;
#else
typedef uint32_t tulong;
#endif

void * ptr = NULL; /* whatever you want it to hold */
int i;
i = (int)(tulong)ptr;
I think the problem is with typecasting a pointer directly to a shorter data type; casting a larger integer type down to int, on the other hand, compiles fine.
So I converted the problem from typecasting a pointer to long into typecasting a 64-bit integer to a 32-bit integer, and it worked fine. I am still searching for a corresponding compiler option in GCC/Clang.
Sometimes it is sensible to want to split up a 64-bit item into 2 32-bit items. This is how you would do it:
Header file:
//You only need this if you haven't got a definition of UInt32 from somewhere else
typedef unsigned int UInt32;
//NB: these macros assume a little-endian layout; on a big-endian machine
//the two halves come out swapped.
//x, when cast, points to the lower 32 bits
#define LO_32(x) (*( (UInt32 *) &x))
//address like an array to get to the higher bits (which are in position 1)
#define HI_32(x) (*( ( (UInt32 *) &x) + 1))
Source file:
//Wherever your pointer points to
void *ptr = PTR_LOCATION;
//32-bit UInt containing the upper bits
UInt32 upper_half = HI_32(ptr);
//32-bit UInt containing the lower bits
UInt32 lower_half = LO_32(ptr);