(related to my previous question)
In Qt, the QMap documentation says:
The key type of a QMap must provide operator<() specifying a total order.
However, in qmap.h, they seem to use something similar to std::less to compare pointers:
/*
    QMap uses qMapLessThanKey() to compare keys. The default
    implementation uses operator<(). For pointer types,
    qMapLessThanKey() casts the pointers to integers before it
    compares them, because operator<() is undefined on pointers
    that come from different memory blocks. (In practice, this
    is only a problem when running a program such as
    BoundsChecker.)
*/

template <class Key> inline bool qMapLessThanKey(const Key &key1, const Key &key2)
{
    return key1 < key2;
}

template <class Ptr> inline bool qMapLessThanKey(const Ptr *key1, const Ptr *key2)
{
    Q_STATIC_ASSERT(sizeof(quintptr) == sizeof(const Ptr *));
    return quintptr(key1) < quintptr(key2);
}
They just cast the pointers to quintptr (which is the Qt version of uintptr_t, i.e. an unsigned integer type capable of storing a pointer) and compare the results.
The following type designates an unsigned integer type with the property that any valid pointer to void can be converted to this type, then converted back to a pointer to void, and the result will compare equal to the original pointer: uintptr_t
Do you think this implementation of qMapLessThanKey() on pointers is ok?
Of course, there is a total order on integral types. But I think this is not sufficient to conclude that this operation defines a total order on pointers.
I think that it is true only if p1 == p2 implies quintptr(p1) == quintptr(p2), which, AFAIK, is not specified.
As a counterexample of this condition, imagine a target using 40 bits for pointers; it could convert pointers to quintptr, setting the 40 lowest bits to the pointer address and leaving the 24 highest bits unchanged (random). This is sufficient to respect the convertibility between quintptr and pointers, but this does not define a total order for pointers.
What do you think?
The Standard guarantees that converting a pointer to an uintptr_t will yield a value of some unsigned type which, if cast to the original pointer type, will yield the original pointer. It also mandates that any pointer can be decomposed into a sequence of unsigned char values, and that using such a sequence of unsigned char values to construct a pointer will yield the original. Neither guarantee, however, would forbid an implementation from including padding bits within pointer types, nor would either guarantee require that the padding bits behave in any consistent fashion.
If code avoided storing pointers, and instead cast to uintptr_t every pointer returned from malloc, later casting those values back to pointers as required, then the resulting uintptr_t values would form a ranking. The ranking might not have any relationship to the order in which objects were created, nor to their arrangement in memory, but it would be a ranking. If any pointer gets converted to uintptr_t more than once, however, the resulting values might rank entirely independently.
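As a rough sketch of that idea (my own illustration, not from Qt or the standard), one could keep the uintptr_t obtained at allocation time and use only that stored value as the ordering key, so each pointer is converted exactly once:

#include <cstdint>
#include <cstdlib>
#include <map>

// Hypothetical helper: allocate a block and remember the single
// uintptr_t value obtained from its pointer right after malloc.
struct TrackedBlock {
    void*          ptr;  // the pointer we actually use
    std::uintptr_t key;  // converted exactly once, at allocation time
};

TrackedBlock make_block(std::size_t size) {
    void* p = std::malloc(size);
    return TrackedBlock{p, reinterpret_cast<std::uintptr_t>(p)};
}

int main() {
    // The stored keys form *some* ranking; it need not reflect allocation
    // order or memory layout, but it stays consistent because each pointer
    // is converted only this one time.
    std::map<std::uintptr_t, TrackedBlock> blocks;
    for (int i = 0; i < 4; ++i) {
        TrackedBlock b = make_block(16);
        blocks[b.key] = b;
    }
    for (const auto& kv : blocks)
        std::free(kv.second.ptr);
}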
I think that you can't assume that there is a total order on pointers. The guarantees given by the standard for pointer to int conversions are rather limited:
5.2.10/4: A pointer can be explicitly converted to any integral type large enough to hold it. The mapping function is implementation-defined.

5.2.10/5: A value of integral type or enumeration type can be explicitly converted to a pointer. A pointer converted to an integer of sufficient size (...) and back to the same pointer type will have its original value; mappings between pointers and integers are otherwise implementation-defined.
From a practical point of view, most of the mainstream compilers will convert a pointer to an integer in a bitwise manner, and you'll have a total order.
The theoretical problem:
But this is not guaranteed. It might not work on past platforms (x86 real and protected mode), on exotic platforms (embedded systems?), and, who knows, on some future platforms.
Take the example of the segmented memory of the 8086: the real address is given by the combination of a segment (e.g. the DS register for the data segment, SS for the stack segment, ...) and an offset:
Segment:  XXXX YYYY YYYY YYYY 0000   16 bits, shifted left by 4 bits
Offset:   0000 ZZZZ ZZZZ ZZZZ ZZZZ   16 bits, not shifted
          ------------------------
Address:  AAAA AAAA AAAA AAAA AAAA   20-bit address
Now imagine that the compiler converts the pointer to an integer by simply doing the address math and putting the resulting 20 bits in the integer: you're safe and have a total order.
But another equally valid approach would be to store the segment in the 16 upper bits and the offset in the 16 lower bits. In fact, this way would significantly facilitate/accelerate loading pointer values into CPU registers.
This approach is compliant with standard C++ requirements, but then a single address could be represented by many different segment:offset pointers (up to 4096 of them, since segments overlap every 16 bytes): your total order is lost!
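To make the segmented case concrete, here is a small self-contained simulation (ordinary C++ arithmetic standing in for real 8086 far pointers) of the "segment in the upper 16 bits, offset in the lower 16 bits" encoding described above:

#include <cstdint>
#include <iostream>

// Simulated far pointer: segment in the upper 16 bits, offset in the lower 16.
std::uint32_t pack(std::uint16_t seg, std::uint16_t off) {
    return (std::uint32_t(seg) << 16) | off;
}

// The linear (real) 20-bit address the 8086 would actually access.
std::uint32_t linear(std::uint16_t seg, std::uint16_t off) {
    return std::uint32_t(seg) * 16u + off;
}

int main() {
    // Two different segment:offset pairs naming the same byte.
    std::uint32_t a = pack(0x0123, 0x0058);
    std::uint32_t b = pack(0x0128, 0x0008);

    std::cout << std::boolalpha
              << (linear(0x0123, 0x0058) == linear(0x0128, 0x0008)) << '\n'  // true: same address
              << (a == b) << '\n'                                            // false: different integers
              << (a <  b) << '\n';                                           // true: the "order" separates equal addresses
}

The two pointers refer to the same byte, yet as packed integers they compare unequal, which is exactly how the total order gets lost.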
**Are there alternatives for the order?**
One could imagine using pointer arithmetic. There are strong constraints on pointer arithmetic for elements in the same array:
5.7/6: When two pointers to elements of the same array object are subtracted, the result is the difference of the subscripts of the two array elements.
And subscripts are ordered.
An array can have at most SIZE_MAX (the maximum value of size_t) elements. So, naively, if sizeof(pointer) <= sizeof(size_t), one could assume that taking an arbitrary reference pointer and doing some pointer arithmetic should lead to a total order.
Unfortunately, here also, the standard is very prudent:
5.7/7: For addition or subtraction, if the expressions P or Q have type “pointer to cv T”, where T is different from the cv-unqualified array element type, the behavior is undefined.
So pointer arithmetic won't do the trick for arbitrary pointers either. Again, the segmented memory model helps to understand why: arrays could have at most 65535 bytes so as to fit completely in one segment, but different arrays could use different segments, so pointer arithmetic wouldn't be reliable for a total order either.
Conclusion
There's a subtle note in the standard on the mapping between pointers and integral values:
It is intended to be unsurprising to those who know the addressing
structure of the underlying machine.
This means that it should be possible to determine a total order on a given implementation. But keep in mind that it'll be non-portable.
Related
I have the following question.
Given that a pointer holds the value of a memory address, why is it permitted to add an integer
data type value to a pointer variable but not a double data type?
My thoughts: Is it because we assume that the pointer is an int as well, or maybe because adding a double would increase its length?
Thank you for your time.
You almost answered your question yourself: a pointer is a memory address. A memory address is an integer. You can add integers to integers and get integers as a result. Adding a float to an integer gives you a float, which cannot be used as a memory address.
For example, char *x = 0; is the address of a single byte. What would char *y = 0.5; mean? A byte that's somehow made up of the second half of the byte at address 0 and the first half of the byte at address 1? That may make some vague sort of sense, but what about char *x = 3.1415926; or any similar floating-point number?
My thoughts: Is it because we assume that the pointer is an int as well, or maybe because adding a double would increase its length?
If you look at the documentation, it says:
Certain addition, subtraction, increment, and decrement operators are defined for pointers to elements of arrays: such pointers satisfy the LegacyRandomAccessIterator requirements and allow the C++ library algorithms to work with raw arrays.
(emphasis is mine) and you should remember that:
*(ptr + 1)
is equal to:
ptr[1]
and indexes into arrays are integers, so the language does not define operations on pointers with floating-point operands, as that would not make any sense.
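A minimal sketch of that equivalence (my own example, not from the cited documentation):

#include <cassert>

int main() {
    int a[4] = {10, 20, 30, 40};
    int* p = a;

    // Adding an integer moves the pointer by whole elements,
    // and indexing is defined in terms of exactly that addition.
    assert(*(p + 2) == p[2]);
    assert(p[2] == 30);

    // There is no "half an element", so pointer + floating-point
    // is simply not part of the language:
    // auto q = p + 0.5;   // does not compile
}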
You cannot add a double to an int* (pointer) under the conventions of C. A pointer holds the value of a memory address ["stores/points to the address of another variable"], and what that value means for arithmetic is in essence determined by its type, in this case int (a 4-byte block of memory, if I recall). A double is a double-precision, 64-bit floating-point data type. You just can't combine the two, right down at the most "hardware" of levels.
I'm testing some ways of calculating the size, in bytes, of a function (I'm familiar with opcodes on x86). The code is quite self-explanatory:
#include <windows.h>
#include <iostream>
using std::cout;

void exec(void* addr){
    int (WINAPI *msg)(HWND,LPCSTR,LPCSTR,UINT)=(int(WINAPI *)(HWND,LPCSTR,LPCSTR,UINT))addr;
    msg(0,"content","title",0);
}
void dump(){};
int main()
{
    cout<<(char*)dump-(char*)exec; // this is 53
    return 0;
}
It is supposed to subtract the address of 'exec' from 'dump'. This works, but I noticed the values differ when using other types of pointers, like DWORD*:
void exec(void* addr){
    int (WINAPI *msg)(HWND,LPCSTR,LPCSTR,UINT)=(int(WINAPI *)(HWND,LPCSTR,LPCSTR,UINT))addr;
    msg(0,"content","title",0);
}
void dump(){};
int main()
{
    cout<<(DWORD*)dump-(DWORD*)exec; // this is 13
    return 0;
}
From my understanding, no matter the pointer type, it is always the largest possible data type (so that it can handle large addresses), in my case 4 bytes (x86 system). The only thing that changes between pointers is the data type it points to.
What is the explanation?
Pointer arithmetic in C/C++ is designed for accessing elements of an array. In fact, array indexing is merely a simpler syntax for pointer arithmetic. For example, if you have an array named array, array[1] is the same thing as *(array+1), regardless of the data type of the elements in array.
(I'm assuming here that no operator overloading is going on; that could change everything.)
If you have a char* or unsigned char*, the pointer points to a single byte, and incrementing the pointer advances it to the next byte.
In Windows, DWORD is a 32-bit value (four bytes), and DWORD* points to a 32-bit value. If you increment a DWORD*, the pointer is advanced by four bytes, just as array[1] gives you the second element of the array, which is four bytes (one DWORD) after the first element. Similarly, if you add 10 to a DWORD*, it advances 40 bytes, not 10 bytes.
Either way, incrementing or adding to a pointer is only valid if the resulting pointer points into the same array as the original one, or one element past the end. Otherwise it is undefined behavior.
Pointer subtraction works just like addition. When you subtract one pointer from another, they must be the same type, and must be pointers into the same array or one past the end.
What you're doing is counting the number of elements between the two pointers, as if they were pointers into the same array (or one past the end). But when the two pointers don't point into the same array (or again, one past the end), the result is undefined behavior.
Here is a reference from Carnegie Mellon University about this:
ARR36-C. Do not subtract or compare two pointers that do not refer to the same array - SEI CERT C Coding Standard
Pointer subtraction tells you the number of elements between the two addresses, so using DWORD* it will be in DWORD-sized units.
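To see that scaling in a case where the subtraction is well defined (both pointers into the same array), here is a small sketch; std::uint32_t stands in for DWORD so that it compiles outside Windows:

#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t buf[16] = {};      // 16 elements, 64 bytes

    std::uint32_t* first = &buf[0];
    std::uint32_t* last  = &buf[13];

    // Element difference: counted in 4-byte (DWORD-sized) units.
    std::cout << (last - first) << '\n';                    // 13

    // Byte difference: the same distance seen through char*.
    std::cout << (reinterpret_cast<char*>(last) -
                  reinterpret_cast<char*>(first)) << '\n';  // 52
}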
You have:
cout<<(char*)dump-(char*)exec;
where dump and exec are the names of functions. Each cast converts a function pointer to char*.
I'm not sure about the status of such a conversion in C++. I think it either has undefined behavior or is illegal (making your program ill-formed). When I compile with g++ 4.8.4 with options -pedantic -std=c++11, it complains:
warning: ISO C++ forbids casting between pointer-to-function and pointer-to-object [-Wpedantic]
(There's a similar diagnostic for C, which I believe is not strictly correct, but that's another story.)
There's no guarantee that there's any meaningful relationship between object pointers and function pointers.
Apparently your compiler lets you get away with the casts, and presumably the result is a char* representation of the address of the function. Subtracting two pointers yields the distance between the two addresses in units of the type the pointers point to. Subtracting two char* pointers yields a ptrdiff_t result that is the difference in bytes. Subtracting two DWORD* pointers yields the difference in units of sizeof (DWORD) (probably 4 bytes?). That explains why you get different results. If two DWORD* pointers point to addresses that aren't a whole number of DWORDs apart in memory, the results are unpredictable, but in your example getting 13 (53/4, truncated) rather than 53 is plausible.
However, pointer subtraction is defined only when both pointer operands point to elements of the same array object, or just past the end of it. For any other operands, the behavior is undefined.
For an implementation that permits the casts, that uses the same representation for object pointers and for function pointers, and on which the value of a function pointer refers to a memory address in the same way that the value of an object pointer does, you can likely determine the size of a function by converting its address to char* and subtracting the result from the converted address of an adjacent function. But a compiler and/or linker is free to generate code for functions in any order it likes, including perhaps inserting code for other functions between two functions whose definitions are adjacent in your source code.
If you want to determine the size in bytes, use pointers to byte-sized types such as char. And be aware that the method you're using is not portable and is not guaranteed to work.
If you really need the size of a function, see if you can get your linker to generate some kind of map showing the allocated sizes and locations of your functions. There's no portable way to do it from within C++.
Let f: Pointers -> Integer_Representation be a map provided by the implementation (I hope that the map doesn't depend on the way we cast a pointer to an integral type). Let p be a pointer to T and i be a variable of integral type.
Does the standard explicitly require that the map preserves the arithmetic, i.e. that f(p+i) = f(p) + i*sizeof(T)? In general, I would like to understand how the additive operation between pointers and integrals is constrained.
It isn't. The specification does not require anything for it. It is implementation-defined and some implementations may be weird.
In similar cases it always helps to remember the memory models of the 8086 (in 16-bit mode). There, pointers are 32 bits, segment+offset, but these overlap to form only a 20-bit address. In the huge model, pointers are normalized to the smallest offset.
So say p = 0123:0004 (which converts to f(p) = 0x01230004), i = 42 and sizeof(T) = 2. Then p + i = 0128:0008, which converts to f(p+i) = 0x01280008, but f(p) + i*sizeof(T) = 0x01230058, a different representation, though of the same address.
On the other hand, in the large model, pointers are not normalized. So you can have both 0128:0008 and 0123:0058, and they are different pointers, but they point to the same address.
Both follow the letter of the standard, because arithmetic is only required to work on pointers into the same array or allocated block, and the conversion to integer is completely implementation-defined.
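On a typical flat-memory implementation the identity does tend to hold, and you can check it empirically for your particular compiler. A small sketch (it demonstrates the behaviour of one implementation and proves nothing about the standard):

#include <cstddef>
#include <cstdint>
#include <iostream>

int main() {
    short buf[100] = {};            // T = short, sizeof(T) is typically 2
    short* p = buf;
    std::ptrdiff_t i = 42;

    std::uintptr_t f_p  = reinterpret_cast<std::uintptr_t>(p);
    std::uintptr_t f_pi = reinterpret_cast<std::uintptr_t>(p + i);

    // Holds on common flat-memory platforms, but is NOT guaranteed by the
    // standard, as the segmented 8086 models above illustrate.
    std::cout << std::boolalpha
              << (f_pi == f_p + i * sizeof(short)) << '\n';
}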
Now we know that doing out-of-bounds-pointer-arithmetic has undefined behavior as described in this SO question.
My question is: can we work around this restriction by casting to std::uintptr_t for the arithmetic operations and then casting back to a pointer? Is that guaranteed to work?
For example:
char a[5];
auto u = reinterpret_cast<std::uintptr_t>(a) - 1;
auto p = reinterpret_cast<char*>(u + 1); // OK?
The real world usage is for optimizing offsetted memory access -- instead of p[n + offset], I want to do offset_p[n].
EDIT To make the question more explicit:
Given a base pointer p of a char array, if p + n is a valid pointer, will reinterpret_cast<char*>(reinterpret_cast<std::uintptr_t>(p) + n) be guaranteed to yield the same valid pointer?
No, uintptr_t cannot be meaningfully used to avoid undefined behavior when performing pointer arithmetic.
For one thing, at least in C there is no guarantee that uintptr_t even exists. The requirement is that any value of type void* may be converted to uintptr_t and back again, yielding the original value without loss of information. In principle, there might not be any unsigned integer type wide enough to hold all pointer values. (I presume the same applies to C++, since C++ inherits most of the C standard library and defines it by reference to the C standard.)
Even if uintptr_t does exist, there is no guarantee that a given arithmetic operation on a uintptr_t value does the same thing as the corresponding operation on a pointer value.
For example, I've worked on systems (Cray vector systems, T90 and SV1) on which byte pointers are implemented in software. A native address is a 64-bit address that refers to a 64-bit word; there is no hardware support for byte addressing. A char* or void* pointer consists of a word pointer with a 3-bit offset stored in the otherwise unused high-order bits. Conversion between integers and pointers simply copies the bits. So incrementing a char* would advance it to point to the next 8-bit byte in memory; incrementing a uintptr_t obtained by converting a char* would advance it to point to the next 64-bit word.
That's just one example. More generally, conversions between pointers and integers are implementation-defined, and the language standard makes no guarantee about the semantics of those conversions (other than, in some cases, converting back to a pointer).
So yes, you can convert a pointer value to uintptr_t (if that type exists) and perform arithmetic on it without risking undefined behavior -- but the result may or may not be meaningful.
It happens that, on most systems, the mapping between pointers and integers is simpler, and you probably can get away with that kind of game. But you're better off using pointer arithmetic directly, and just being very careful to avoid any invalid operations.
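For the offsetted-access use case from the question, a sketch of doing the arithmetic on the pointer itself while keeping it in bounds (my own illustration, with hypothetical names such as sum_from):

#include <cstddef>
#include <iostream>

// Hypothetical example: sum `count` elements starting at a fixed offset.
// offset_p is formed by ordinary pointer arithmetic and only ever points
// inside the array (or one past the end), so no uintptr_t games are needed.
double sum_from(const double* p, std::size_t offset, std::size_t count) {
    const double* offset_p = p + offset;   // valid as long as offset <= array size
    double total = 0.0;
    for (std::size_t n = 0; n < count; ++n)
        total += offset_p[n];              // same accesses as p[n + offset]
    return total;
}

int main() {
    double data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    std::cout << sum_from(data, 3, 4) << '\n';   // 4+5+6+7 = 22
}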
Yes, that is legal, but you must reinterpret_cast exactly the same uintptr_t value back to char*.
(Therefore, what you're intending to do is illegal; that is, converting a different value back to a pointer.)
5.2.10 Reinterpret cast

4. A pointer can be explicitly converted to any integral type large enough to hold it. The mapping function is implementation-defined.

5. A value of integral type or enumeration type can be explicitly converted to a pointer. A pointer converted to an integer of sufficient size (if any such exists on the implementation) and back to the same pointer type will have its original value;
(Note that there'd be no way, in general, for the compiler to know that you subtracted one and then added it back.)
Is it safe to cast pointer to int and later back to pointer again?
How about if we know that the pointer is 32 bits long and int is 32 bits long?
#include <iostream>

long* juggle(long* p) {
    static_assert(sizeof(long*) == sizeof(int));
    int v = reinterpret_cast<int>(p); // or if sizeof(long*)==8 choose long here
    do_some_math(v);                  // prevent compiler from optimizing
    return reinterpret_cast<long*>(v);
}

int main() {
    long* stuff = new long(42);
    long* ffuts = juggle(stuff);
    std::cout << "Is this always 42? " << *ffuts << std::endl;
}
Is this covered by the Standard?
No.
For instance, on x86-64, a pointer is 64-bit long, but int is only 32-bit long. Casting a pointer to int and back again makes the upper 32-bit of the pointer value lost.
You may use the intptr_t type in <cstdint> if you want an integer type which is guaranteed to be as long as the pointer. You could safely reinterpret_cast from a pointer to an intptr_t and back.
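A minimal sketch of that round trip (my own example):

#include <cstdint>
#include <iostream>

int main() {
    long  value = 42;
    long* p = &value;

    // intptr_t is guaranteed to round-trip a void*; in practice it is wide
    // enough for any object pointer, so the cast-and-back gives p again.
    std::intptr_t i = reinterpret_cast<std::intptr_t>(p);
    long* q = reinterpret_cast<long*>(i);

    std::cout << std::boolalpha << (p == q) << ' ' << *q << '\n';  // true 42
}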
Yes, if... (or "Yes, but...") and no otherwise.
The standard specifies (3.7.4.3) the following:
A pointer value is a safely-derived pointer [...] if it is the result of a well-defined pointer conversion or reinterpret_cast of a safely-derived pointer value [or] the result of a reinterpret_cast of an integer representation of a safely-derived pointer value

An integer value is an integer representation of a safely-derived pointer [...] if its type is at least as large as std::intptr_t and [...] the result of a reinterpret_cast of a safely-derived pointer value [or] the result of a valid conversion of an integer representation of a safely-derived pointer value [or] the result of an additive or bitwise operation, one of whose operands is an integer representation of a safely-derived pointer value

A traceable pointer object is [...] an object of an integral type that is at least as large as std::intptr_t
The standard further states that implementations may be relaxed or may be strict about enforcing safely-derived pointers. Which means it is unspecified whether using or dereferencing a not-safely-derived pointer invokes undefined behavior (that's a funny thing to say!)
Which altogether means no more and no less than "something different might work anyway, but the only safe thing is as specified above".
Therefore, if you either use std::intptr_t in the first place (the preferable thing to do!) or if you know that the storage size of whatever integer type you use (say, long) is at least the size of std::intptr_t, then it is allowable and well-defined (i.e. "safe") to cast to your integer type and back. The standard guarantees that.
If that's not the case, the conversion from pointer to integer representation will probably (or at least possibly) lose some information, and the conversion back will not give a valid pointer. Or it might, by accident, but this is not guaranteed.
An interesting anecdote is that the C++ standard does not directly define std::intptr_t at all; it merely says "the same as 7.18 in the C standard".
The C standard, on the other hand, states that it "designates a signed integer type with the property that any valid pointer to void can be converted to this type, then converted back to pointer to void, and the result will compare equal to the original pointer".
Which means, without the rather complicated definitions above (in particular the last bit of the first bullet point), it wouldn't be allowable to convert to/from anything but void*.
Yes and no.
The language specification explicitly states that it is safe (meaning that in the end you will get the original pointer value) as long as the size of the integral type is sufficient to store the [implementation-dependent] integral representation of the pointer.
So, in the general case it is not "safe", since int can easily turn out to be too small. In your specific case, though, it might be safe, since your int might be sufficiently large to store your pointer.
Normally, when you need to do something like that, you should use the intptr_t/uintptr_t types, which are specifically introduced for that purpose. Unfortunately, intptr_t/uintptr_t are not part of the current C++ standard (they are standard C99 types), but many implementations provide them nevertheless. You can always define these types yourself, of course.
In general, no; pointers may be larger than int, in which case there's no way to reconstruct the value.
If an integer type is known to be large enough, then you can; according to the Standard (5.2.10/5):
A pointer converted to an integer of sufficient size ... and back to the same pointer type will have its original value
However, in C++03, there's no standard way to tell which integer types are large enough. C++11 and C99 (and hence in practice most C++03 implementations), and also Boost.Integer, define intptr_t and uintptr_t for this purpose. Or you could define your own type and assert (preferably at compile time) that it's large enough; or, if you don't have some special reason for it to be an integer type, use void*.
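A sketch of the compile-time assertion mentioned above, using a C++11 static_assert (a C++03 variant of the same check appears in a later answer):

// Fails to compile on any platform where the chosen integer type cannot
// hold a pointer, instead of silently truncating at run time.
static_assert(sizeof(unsigned long long) >= sizeof(void*),
              "integer type too small to round-trip a pointer");

void* roundtrip(void* p) {
    unsigned long long i = reinterpret_cast<unsigned long long>(p);
    return reinterpret_cast<void*>(i);
}

int main() {
    int x = 0;
    return roundtrip(&x) == &x ? 0 : 1;   // expected: 0
}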
Is it safe? Not really.
In most circumstances, will it work? Yes
Certainly if an int is too small to hold the full pointer value and truncates, you won't get your original pointer back (hopefully your compiler will warn you about this case; with GCC, truncating conversions from pointer to integer are hard errors). A long, or uintptr_t if your library supports it, may be a better choice.
Even if your integer type and pointer types are the same size, it will not necessarily work depending on your application runtime. In particular, if you're using a garbage collector in your program it might easily decide that the pointer is no longer outstanding, and when you later cast your integer back to a pointer and try to dereference it, you'll find out the object was already reaped.
Absolutely not. Doing so makes the bad assumption that the size of an int and of a pointer are the same. This is almost always not the case on 64-bit platforms. If they are not the same, a precision loss will occur and the final pointer value will be incorrect.
MyType* pValue = ...
int stored = (int)pValue; // Just lost the upper 4 bytes on a 64 bit platform
pValue = (MyType*)stored; // pValue is now invalid
pValue->SomeOp(); // Kaboom
No, it is not (always) safe (thus not safe in general). And it is covered by the standard.
ISO C++ 2003, 5.2.10:
A pointer can be explicitly converted to any integral type *large enough* to hold it. The mapping function is implementation-defined.

A value of integral type or enumeration type can be explicitly converted to a pointer. A pointer converted to an integer of *sufficient size* (if any such exists on the implementation) and back to the same pointer type will have its original value; mappings between pointers and integers are otherwise implementation-defined.
(The above emphases are mine.)
Therefore, if you know that the sizes are compatible, then the conversion is safe.
#include <iostream>

// C++03 static_assert.
#define ASSURE(cond) typedef int ASSURE[(cond) ? 1 : -1]

// Assure that the sizes are compatible.
ASSURE(sizeof (int) >= sizeof (char*));

int main() {
    char c = 'A';
    char *p = &c;

    // If this program compiles, it is well formed.
    int i = reinterpret_cast<int>(p);
    p = reinterpret_cast<char*>(i);

    std::cout << *p << std::endl;
}
Use uintptr_t from "stdint.h" or from "boost/stdint.h". It is guaranteed to have enough storage for a pointer.
No, it is not. Even if we rule out the architecture issue, the sizes of a pointer and an integer can differ. On some implementations a pointer can be of three types: near, far, and huge, and they have different sizes. An integer, on the other hand, is normally 16 or 32 bits. So casting integers into pointers and vice versa is not safe. Utmost care has to be taken, as there is a very real chance of precision loss. In most cases an integer will be short of space to store a pointer, resulting in loss of value.
If you're going to be doing any system-portable casting, you need to use something like Microsoft's INT_PTR/UINT_PTR; the safety after that relies on the target platforms and what you intend to do with the INT_PTR. Generally, for most arithmetic, char* or uint8_t* works better while being type-safe(ish).
To an int? Not always: if you are on a 64-bit machine, then int is only 4 bytes, whereas pointers are 8 bytes long, and thus you would end up with a different pointer when you cast it back from the int.
There are, however, ways to get around this. You can simply use an 8-byte data type, which works whether you are on a 32- or 64-bit system, such as unsigned long long (unsigned because you don't want sign extension on 32-bit systems).
It is important to note that on Linux unsigned long will always be pointer-sized*, so if you are targeting Linux systems you could just use that.
*According to cppreference; I have also tested it myself, though not on all Linux and Linux-like systems.
If the issue is that you want to do normal math on it, probably the safest thing to do would be to cast it to a pointer to char (or better yet, uint8_t*), do your math, and then cast it back.