Unaligned access through reinterpret_cast - C++

I'm in the middle of a discussion trying to figure out whether unaligned access is allowable in C++ through reinterpret_cast. I think not, but I'm having trouble finding the right part(s) of the standard which confirm or refute that. I have been looking at C++11, but I would be okay with another version if it is more clear.
Unaligned access is undefined in C11. The relevant part of the C11 standard (§ 6.3.2.3, paragraph 7):
A pointer to an object type may be converted to a pointer to a different object type. If the resulting pointer is not correctly aligned for the referenced type, the behavior is undefined.
Since the behavior of an unaligned access is undefined, some compilers (at least GCC) take that to mean that it is okay to generate instructions which require aligned data. Most of the time the code still works for unaligned data because most x86 and ARM instructions these days handle unaligned data, but some don't. In particular, some vector instructions don't, which means that as the compiler gets better at generating optimized instructions, code which worked with older versions of the compiler may stop working with newer versions. And, of course, some architectures (like MIPS) don't handle unaligned data as well.
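For illustration (my sketch, not part of the original question): the cast-based read below relies on the pointer being suitably aligned, while copying the bytes with std::memcpy is well defined regardless of alignment. The function names are made up.

#include <cstdint>
#include <cstring>

// May break if p is not suitably aligned for uint32_t: the compiler
// is allowed to emit alignment-requiring instructions here.
std::uint32_t load_cast(const void* p) {
    return *reinterpret_cast<const std::uint32_t*>(p);
}

// Well defined for any readable 4-byte region, aligned or not.
std::uint32_t load_memcpy(const void* p) {
    std::uint32_t v;
    std::memcpy(&v, p, sizeof v);
    return v;
}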
C++11 is, of course, more complicated. § 5.2.10, paragraph 7 says:
An object pointer can be explicitly converted to an object pointer of a different type. When a prvalue v of type “pointer to T1” is converted to the type “pointer to cv T2”, the result is static_cast<cv T2*>(static_cast<cv void*>(v)) if both T1 and T2 are standard-layout types (3.9) and the alignment requirements of T2 are no stricter than those of T1, or if either type is void. Converting a prvalue of type “pointer to T1” to the type “pointer to T2” (where T1 and T2 are object types and where the alignment requirements of T2 are no stricter than those of T1) and back to its original type yields the original pointer value. The result of any other such pointer conversion is unspecified.
Note that the last word is "unspecified", not "undefined". § 1.3.25 defines "unspecified behavior" as:
behavior, for a well-formed program construct and correct data, that depends on the implementation
[Note: The implementation is not required to document which behavior occurs. The range of possible behaviors is usually delineated by this International Standard. — end note]
Unless I'm missing something, the standard doesn't actually delineate the range of possible behaviors in this case, which seems to indicate to me that one very reasonable behavior is that which is implemented for C (at least by GCC): not supporting them. That would mean the compiler is free to assume unaligned accesses do not occur and emit instructions which may not work with unaligned memory, just like it does for C.
The person I'm discussing this with, however, has a different interpretation. They cite § 1.9, paragraph 5:
A conforming implementation executing a well-formed program shall produce the same observable behavior as one of the possible executions of the corresponding instance of the abstract machine with the same program and the same input. However, if any such execution contains an undefined operation, this International Standard places no requirement on the implementation executing that program with that input (not even with regard to operations preceding the first undefined operation).
Since there is no undefined behavior, they argue that the C++ compiler has no right to assume unaligned access don't occur.
So, are unaligned accesses through reinterpret_cast safe in C++? Where in the specification (any version) is this addressed?
Edit: By "access", I mean actually loading and storing. Something like
#include <cstdint>

void unaligned_cp(void* a, void* b) {
    *reinterpret_cast<volatile std::uint32_t*>(a) =
        *reinterpret_cast<volatile std::uint32_t*>(b);
}
How the memory is allocated is actually outside my scope (it is for a library which can be called with data from anywhere), but malloc and an array on the stack are both likely candidates. I don't want to place any restrictions on how the memory is allocated.
Edit 2: Please cite sources (i.e., the C++ standard, section and paragraph) in answers.

Looking at 3.11/1:
Object types have alignment requirements (3.9.1, 3.9.2) which place restrictions on the addresses at which an object of that type may be allocated.
There's some debate in comments about exactly what constitutes allocating an object of a type. However I believe the following argument works regardless of how that discussion is resolved:
Take *reinterpret_cast<uint32_t*>(a) for example. If this expression does not cause UB, then (according to the strict aliasing rule) there must be an object of type uint32_t (or int32_t) at the given location after this statement. Whether the object was already there, or this write created it, does not matter.
According to the above Standard quote, objects with alignment requirement can only exist in a correctly aligned state.
Therefore any attempt to create or write an object that is not correctly aligned causes UB.
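To make the argument concrete, a minimal sketch (mine, not from the answer) of the kind of access it rules out:

#include <cstdint>

void demo(char* buf) {           // assume buf itself is 4-byte aligned
    char* misaligned = buf + 1;  // cannot be the address of a uint32_t
    // UB by the argument above: no uint32_t object can exist at this address
    *reinterpret_cast<std::uint32_t*>(misaligned) = 0;
}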

EDIT This answers the OP's original question, which was "is accessing a misaligned pointer safe". The OP has since edited their question to "is dereferencing a misaligned pointer safe", a far more practical and less interesting question.
The round-trip cast result of the pointer value is unspecified under those circumstances. Under certain limited circumstances (involving alignment), converting a pointer to A to a pointer to B, and then back again, results in the original pointer, even if you didn't have a B in that location.
If the alignment requirements are not met, then that round trip -- the pointer-to-A to pointer-to-B to pointer-to-A -- results in a pointer with an unspecified value.
As there are invalid pointer values, dereferencing a pointer with an unspecified value can result in undefined behavior. It is no different than *(int*)0xDEADBEEF in a sense.
Simply storing that pointer is not, however, undefined behavior.
None of the above C++ quotes talk about actually using a pointer-to-A as a pointer-to-B. Using a pointer to the "wrong type" in all but a very limited number of circumstances is undefined behavior, period.
An example of this involves creating a std::aligned_storage_t<sizeof(T), alignof(T)>. You can construct your T in that spot, and it will live there happily, even though it "actually" is an aligned_storage_t<sizeof(T), alignof(T)>. (You may, however, have to use the pointer returned from the placement new for full standard compliance; I am uncertain. See strict aliasing.)
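A sketch of that pattern (my example; Widget is a hypothetical type). Using the pointer returned by placement new sidesteps the strict-aliasing concern mentioned above:

#include <new>
#include <type_traits>

struct Widget { int x; };

void demo() {
    // Raw storage with the size and alignment of Widget.
    std::aligned_storage_t<sizeof(Widget), alignof(Widget)> storage;

    // Construct a Widget there and keep the pointer placement new returns.
    Widget* w = ::new (static_cast<void*>(&storage)) Widget{42};

    w->x = 7;      // use it through that pointer
    w->~Widget();  // end its lifetime before the storage goes away
}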
Sadly, the standard is a bit lacking in terms of what object lifetime is. It refers to it, but does not define it well enough last I checked. You can only use a T at a particular location while a T lives there, but what that means is not made clear in all circumstances.

All of your quotes are about the pointer value, not the act of dereferencing.
5.2.10, paragraph 7 says that, assuming int has a stricter alignment than char, then the round trip of char* to int* to char* generates an unspecified value for the resulting char*.
On the other hand, if you convert int* to char* to int*, you are guaranteed to get back the exact same pointer as you started with.
It doesn't talk about what you get when you dereference said pointer. It simply states that in one case, you must be able to round trip. It washes its hands of the other way around.
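In code, the guaranteed direction looks like this (my sketch); note that nothing here is dereferenced:

bool round_trip() {
    int i = 0;
    int* pi = &i;
    char* pc = reinterpret_cast<char*>(pi);  // int* to char*: char is not more aligned
    int* back = reinterpret_cast<int*>(pc);  // char* to int*: the round trip
    return back == pi;                       // guaranteed to be true
}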
Suppose you have some ints, and alignof(int) > 1:
int some_ints[3] = {0};
then you have an int pointer that is offset:
int* some_ptr = (int*)(((char*)&some_ints[0])+1);
We'll presume that copying this misaligned pointer doesn't cause undefined behavior for now.
The value of some_ptr is not specified by the standard. We'll be generous and presume it actually points to some chunk of bytes within some_ints.
Now we have an int* that points to somewhere an int cannot be allocated (3.11/1). Under (3.8) the use of a pointer to an int is restricted in a number of ways. Usual use is restricted to a pointer to a T that was allocated properly and whose lifetime has begun (/3). Some limited use is permitted on a pointer to a T which has been allocated properly, but whose lifetime has not begun (/5 and /6).
There is no way to create an int object that does not obey the alignment restrictions of int in the standard.
So the theoretical int* which claims to point to a misaligned int does not point to an int. The standard places no restrictions on the behavior of said pointer when dereferenced; the usual dereferencing rules describe the behavior only for a valid pointer to an object (including an int).
And now to our other assumption: the standard places no restrictions on the value of some_ptr in int* some_ptr = (int*)(((char*)&some_ints[0])+1);.
It is not a pointer to an int, much like (int*)nullptr is not a pointer to an int. Round-tripping it back to a char* explicitly results, per the standard, in a pointer with an unspecified value (it could be 0xbaadf00d or nullptr).
The standard defines what you may rely on. Almost no requirements are placed on the behavior of some_ptr by the standard (evaluating it in a boolean context must still return a bool, I suppose), other than that converting it back to char* results in an unspecified pointer value.

Related

Why (if that is the case) does the standard say that copying uninitialized memory with memcpy is UB?

When a class member cannot have a sensible meaning at the moment of construction, I don't initialize it. Obviously that only applies to POD types; you cannot NOT initialize an object with constructors.

The advantage of that, apart from saving CPU cycles initializing something to a value that has no meaning, is that I can detect erroneous usage of these variables with valgrind, which is not possible when I'd just give those variables some random value.
For example,
struct MathProblem {
    bool finished;
    double answer;
    MathProblem() : finished(false) { }
};
Until the math problem is solved (finished) there is no answer. It makes no sense to initialize answer in advance (to, say, zero) because that might not be the answer. answer only has a meaning after finished has been set to true.
Usage of answer before it is initialized is therefore an error and perfectly OK to be UB.
However, a trivial copy of answer before it is initialized is currently ALSO UB (if I understand the standard correctly), and that doesn't make sense: the default copy and move constructors should simply be able to make a trivial copy (aka, as-if using memcpy), initialized or not. I might want to move this object into a container:
v.push_back(MathProblem());
and then work with the copy inside the container.
Is moving an object with an uninitialized, trivially copyable member indeed defined as UB by the standard? And if so, why? It doesn't seem to make sense.
Is moving an object with an uninitialized, trivially copyable member indeed defined as UB by the standard?
Depends on the type of the member. Standard says:
[basic.indet]
When storage for an object with automatic or dynamic storage duration is obtained, the object has an indeterminate value, and if no initialization is performed for the object, that object retains an indeterminate value until that value is replaced ([expr.ass]).
If an indeterminate value is produced by an evaluation, the behavior is undefined except in the following cases:
If an indeterminate value of unsigned ordinary character type ([basic.fundamental]) or std::byte type ([cstddef.syn]) is produced by the evaluation of:
the second or third operand of a conditional expression,
the right operand of a comma expression,
the operand of a cast or conversion ([conv.integral], [expr.type.conv], [expr.static.cast], [expr.cast]) to an unsigned ordinary character type or std::byte type ([cstddef.syn]), or
a discarded-value expression,
then the result of the operation is an indeterminate value.
If an indeterminate value of unsigned ordinary character type or std::byte type is produced by the evaluation of the right operand of a simple assignment operator ([expr.ass]) whose first operand is an lvalue of unsigned ordinary character type or std::byte type, an indeterminate value replaces the value of the object referred to by the left operand.
If an indeterminate value of unsigned ordinary character type is produced by the evaluation of the initialization expression when initializing an object of unsigned ordinary character type, that object is initialized to an indeterminate value.
If an indeterminate value of unsigned ordinary character type or std::byte type is produced by the evaluation of the initialization expression when initializing an object of std::byte type, that object is initialized to an indeterminate value.
None of the exceptional cases apply to your example object, so UB applies.
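For illustration (my sketch, not from the answer), the exceptional cases permit shuffling indeterminate values around through unsigned char, while doing the same with int is UB:

void indeterminate_demo() {
    unsigned char u;      // u holds an indeterminate value
    unsigned char v = u;  // OK: initializing an unsigned char is one of the
                          // exceptional cases; v also holds an indeterminate value
    int i;                // i holds an indeterminate value
    // int j = i;         // UB: int is not covered by any exceptional case
    (void)v;
}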
with memcpy is UB?
It is not. std::memcpy interprets the object as an array of bytes, in which exceptional case there is no UB. You still have UB if you attempt to read the indeterminate copy (unless the exceptions above apply).
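A minimal sketch of that distinction (mine), reusing the question's struct: copying the raw bytes of a partially initialized object is fine; reading the indeterminate member afterwards is not.

#include <cstring>

struct MathProblem {
    bool finished;
    double answer;
    MathProblem() : finished(false) {}
};

void demo() {
    MathProblem a;                  // a.answer is indeterminate
    MathProblem b;
    std::memcpy(&b, &a, sizeof a);  // OK: the bytes are copied as unsigned chars
    bool f = b.finished;            // OK: finished was initialized
    // double d = b.answer;         // UB: reads an indeterminate value
    (void)f;
}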
why?
The C++ standard doesn't include a rationale for most rules. This particular rule has existed since the first standard. It is slightly stricter than the related C rule, which is about trap representations. To my understanding, there is no established convention for trap handling, and the authors didn't wish to restrict implementations by specifying it, and instead opted to specify it as UB. This also has the effect of allowing the optimiser to deduce that indeterminate values will never be read.
I might want to move this object into a container:
Moving an uninitialised object into a container is typically a logic error. It is unclear why you might want to do such a thing.
The design of the C++ Standard was heavily influenced by the C Standard, whose authors (according to the published Rationale) intended and expected that implementations would, on a quality-of-implementation basis, extend the semantics of the language by meaningfully processing programs in cases where it was clear that doing so would be useful, even if the Standard didn't "officially" define the behavior of those programs. Consequently, both standards place more priority upon ensuring that they don't mandate behaviors in cases where doing so might make some implementations less useful, than upon ensuring that they mandate everything that should be supported by quality general-purpose implementations.
There are many cases where it may be useful for an implementation to extend the semantics of the language by guaranteeing that using memcpy on any valid region of storage will, at worst, behave in a fashion consistent with populating the destination with some possibly-meaningless bit pattern with no outside side effects, and few if any where it would be either easier or more useful to have it do something else. The only situations where anyone should care about whether the behavior of memcpy is defined in a particular situation involving valid regions of storage would be those in which some alternative behavior would be genuinely more useful than the commonplace one. If such situations exist, compiler writers and their customers would be better placed than the Committee to judge which behavior would be most useful.
As an example of a situation where an alternative behavior might be more useful, consider code which uses memcpy to copy a partially-written structure, and then uses it to make two copies of that structure. In some cases, having the compiler only write the parts of the two destination structures which had been written in the original may improve efficiency, but that behavior would be observably different from having the first memcpy behave as though it stores some bit pattern to its destination. Note that while such a change would not adversely affect a program's overall behavior if no copies of the uninitialized parts of the structure are ever used in a way that would affect behavior, the Standard has no nice way of distinguishing scenarios that could or could not occur under such a model, and thus leaves all such scenarios undefined.

Is it legal to convert a pointer/reference to a fixed array size to a smaller size

Is it legal as per the C++ standard to convert a pointer or reference to a fixed array (e.g. T(*)[N] or T(&)[N]) to a pointer or reference to a smaller fixed array of the same type and CV qualification (e.g. T(*)[M] or T(&)[M])?
Basically, would this always be well-formed for all instantiations of T (regardless of layout-type):
void consume(T(&array)[2]);
void receive(T(&array)[6])
{
    consume(reinterpret_cast<T(&)[2]>(array));
}
I don't see any references to this being a valid conversion in:
expr.reinterpret.cast,
expr.static.cast,
conv.array, or even
basic.types
However, it appears that all major compilers accept this and generate proper code even when optimized when using T = std::string (compiler explorer) (not that this proves much, if it is undefined behavior).
It's my understanding that this should be illegal as per the type-system, since an object of T[2] was never truly created, which means a reference of T(&)[2] would be invalid.
I'm tagging this question c++11 because this is the version I am most interested in the answer for, but I would be curious to know whether the answer is different in newer versions as well.
There’s not much to say here except no, in any language version: the types are simply unrelated. C++20 does allow conversion from T (*)[N] to T (*)[] (and similarly for references), but that doesn’t mean you can treat two different Ns equivalently. The closest you’re going to get to a “reference” for this rule is [conv.array]/1 (“The result is a pointer to the first element of the array.”, which T[2] does not exist in your example) and a note in [defns.undefined] (“Undefined behavior may be expected when this document omits any explicit definition of behavior”).
Part of the reason that compilers don’t “catch” you is that such reinterpret_casts are valid to return to the real type of an object after another reinterpret_cast used to “sneak” it through an interface that expects a pointer or reference to a different type (but doesn’t use it as that type!). That means that the code as given is legitimate, but the obvious sort of definition for consume and caller for receive would together cause undefined behavior. (The other part is that optimizers often leave code alone that’s always undefined unless it can eliminate a branch.)
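A sketch of that legitimate round trip (my example; the names are hypothetical): the wrongly-typed reference is only carried through the interface and converted back, never used as an int[2].

void use_all_six(int(&real)[6]);  // hypothetical consumer of the real type

// An interface that expects int(&)[2] but only forwards the reference.
void forward(int(&narrow)[2]) {
    use_all_six(reinterpret_cast<int(&)[6]>(narrow));  // back to the real type
}

void receive(int(&array)[6]) {
    forward(reinterpret_cast<int(&)[2]>(array));  // "sneak" it through
}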
A late additional answer that is really more of an extended comment, but would far exceed the allowed comment length:

At first: great question! It's remarkable that such a quite obvious issue is so hard to verify and generates a lot of confusion even among experts. Worth mentioning that I've seen code of that category quite often already...
Some words about undefined behavior first
I think at least the question about the pointer usage is a great example where one has to admit that theoretical undefined behavior from one aspect of the language can sometimes be "beaten" by two other strong aspects:
Are there other standard clauses that reduce the degree of UB for the aspect of interest in several cases? Are there maybe clauses whose priorities within the standard are even ambiguous relative to each other? (There are several prominent examples still existing in C++20; see conversion-type-id handling for operator auto(), for instance...)
Are there (Turing-)provable arguments that any theoretical and practical compiler realization has to behave as you expect, since other constraints from the language determine it that way? That is, even though UB nominally means the compiler could do whatever it wants here, even the biggest mess, it might be provable that ensuring other specified(!) language aspects makes such deviations effectively impossible for your case.
So with respect to point 2, there's an often underrated aspect: what are the constraints (if definable) imposed by the model of the abstract machine that determine the outcome of any theoretical (compiler) implementation for the given code?
So, many words so far, but does anything from 1) apply to your concrete case (the pointer way)?
As multiple users mentioned within the comments, a chance for that lies in [basic.compound]/4:
Two objects a and b are pointer-interconvertible if:
...
(4.4) there exists an object c such that a and c are
pointer-interconvertible, and c and b are pointer-interconvertible.
That's the simple rule of transitivity. Can we actually find such a c (for arrays)?
Within the same section, the standard says further on:
If two objects are pointer-interconvertible, then they have the same
address, and it is possible to obtain a pointer to one from a pointer
to the other via a reinterpret_­cast. [ Note: An array object and its
first element are not pointer-interconvertible, even though they have
the same address.  — end note ]
demolishing our hopes for the approach via the pointer to the first element. There is no such c for arrays.
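To illustrate what pointer-interconvertibility does and does not buy (my sketch): a standard-layout class object and its first non-static data member are pointer-interconvertible, but an array and its first element are not.

struct S { int x; };  // standard-layout

void demo() {
    S s{1};
    int* p = reinterpret_cast<int*>(&s);  // OK: points to s.x, which is
    *p = 2;                               // pointer-interconvertible with s
    int a[4] = {0};
    // &a and &a[0] have the same address, yet the array object and its first
    // element are not pointer-interconvertible (see the note quoted above).
    (void)a;
}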
Do we have another chance? You mentioned expr.reinterpret.cast#7 :
An object pointer can be explicitly converted to an object pointer of a different type. When a prvalue v of type “pointer to T1” is converted to the type “pointer to cv T2”, the result is static_cast<cv T2*>(static_cast<cv void*>(v)) if both T1 and T2 are standard-layout types ([basic.types]) and the alignment requirements of T2 are no stricter than those of T1, or if either type is void. Converting a prvalue of type “pointer to T1” to the type “pointer to T2” (where T1 and T2 are object types and where the alignment requirements of T2 are no stricter than those of T1) and back to its original type yields the original pointer value. The result of any other such pointer conversion is unspecified.
This looks promising at first glance, but the devil is in the details. That solely ensures that you can perform the pointer conversion (the alignment requirements for both arrays are equal); it says nothing a priori about interconvertibility, i.e. about using the object itself.
As Davis already said: with the pointer to the first element, one could still use reinterpret_cast as a kind of facade, fully standard compliant, as long as the wrong-type pointer to T[2] is only really used as a forwarder: all actual uses refer to the element pointer via a corresponding reinterpret_cast back, and all use sites "are aware" that the actual type is a T[6]. It is trivial to see that this is still hacky as hell for many scenarios. At least a type alias emphasizing the forwarding quality would be recommended here.
So a strict interpretation of the standard here is: It's undefined behavior with the note that we all know that it should work well with all common modern compilers on many common platforms (I know, the latter was not your question).
Do we have any chance under my point 2) from above, about effectively "weak" UB?
I don't think so, as long as only the abstract machine is in focus here. For instance, IMO there is no restriction in the standard that prevents a compiler/environment from handling (abstract) allocation schemes differently between arrays of different sizes (changed intrinsics for threshold sizes, for instance) while still ensuring alignment requirements. To be very quirky here, one could say a very exotic compiler would be allowed to use underlying dynamic-storage-duration mechanisms even for scoped objects that appear to live on what we know as the stack. Another related possible issue is the question of proper deallocation of arrays of dynamic storage duration (see the similar debate about UB in the context of inheritance from classes that do not provide virtual destructors). I highly doubt that it's trivial to validate that the standard guarantees a valid cleanup here a priori, i.e. effectively calling the destructors for all elements of the T[6] in your example in all cases.

Could reinterpret_cast turn an invalid pointer value into a valid one?

Consider this union:
union A {
    int a;
    struct {
        int b;
    } c;
};
c and a are not layout-compatible types, so it is not possible to read the value of b through a:
A x;
x.c.b = 10;
x.a + x.a;  // undefined behaviour (UB)
Trial 1
For the case below I think that since C++17, I also get an undefined behavior:
A x;
x.a = 10;
auto p = &x.a;  // (1)
x.c.b = 12;     // (2)
*p + *p;        // (3) UB
Let's consider [basic.compound]/3:
Every value of pointer type is one of the following:
a pointer to an object or function (the pointer is said to point to the object or function), or
a pointer past the end of an object ([expr.add]), or
the null pointer value ([conv.ptr]) for that type, or
an invalid pointer value.
Let's call these four categories of pointer value pointer value genres.
The value of a pointer may transition from one of the above-mentioned genres to another, but the standard is not really explicit about that. Feel free to correct me if I am wrong. So I suppose that at (1) the value of p is a pointer to an object. Then in (2) a's lifetime ends and the value of p becomes an invalid pointer value. So in (3) I get UB because I try to access the value of an object (a) outside its lifetime.
Trial 2
Now consider this weird code:
A x;
x.a = 10;
auto p = &x.a;  // (1)
x.c.b = 12;     // (2)
p = reinterpret_cast<int*>(p);  // (2')
*p + *p;        // (3) UB?
Could the reinterpret_cast<int*>(p) change the pointer value genre from invalid pointer value to pointer to an object?
reinterpret_cast<int*>(p) is defined to be equivalent to static_cast<int*>(static_cast<void*>(p)), so let's consider how the static_cast from void* to int* is defined, [expr.static.cast]/13:
A prvalue of type “pointer to cv1 void” can be converted to a prvalue of type “pointer to cv2 T”, where T is an object type and cv2 is the same cv-qualification as, or greater cv-qualification than, cv1. If the original pointer value represents the address A of a byte in memory and A does not satisfy the alignment requirement of T, then the resulting pointer value is unspecified. Otherwise, if the original pointer value points to an object a, and there is an object b of type T (ignoring cv-qualification) that is pointer-interconvertible with a, the result is a pointer to b. Otherwise, the pointer value is unchanged by the conversion.
So in our case the original pointer pointed to the object a. So I suppose the reinterpret_cast will not help, because a is not within its lifetime. Is my reading too strict? Could this code be well defined?
Then in (2) a life ends and the value of p becomes an invalid pointer value.
Incorrect. Pointers only become invalid when they point into memory that has ended its storage duration.
The pointer in this case becomes a pointer to an object outside of its lifetime. The object it points to is gone, but the pointer is not "invalid" in the way the specification means it. [basic.life] spends quite a bit of time explaining what you can and cannot do to pointers to objects outside of their lifetime.
reinterpret_cast cannot turn a pointer to an object outside of its lifetime into a pointer to a different object that is within its lifetime.
The notion of objects in the standard is rather abstract and differs somewhat from intuition. An object may be within its lifetime or not, and objects not within their lifetimes can have the same address, this is why unions work at all: the definition of active member is "the member that is within its lifetime".
A pointer to an object not within its lifetime is still a pointer to an object. reinterpret_cast only changes the type of the pointer, not its validity. The UB you get when casting to non-pointer-interconvertible types is due to the strict-aliasing rule, not due to the validity of the pointer.
In all your trials, including your follow-up question, you are using an object not within its lifetime in ways that aren't allowed, i.e. accessing it, and consequently get UB.
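A sketch of the well-defined variant (mine, not from the answer): writing to a union member makes it the active member, i.e. begins its lifetime, so re-taking the address after switching back is fine.

union A2 { int a; struct { int b; } c; };  // mirrors the question's union

int demo() {
    A2 x;
    x.a = 10;
    x.c.b = 12;      // c becomes the active member; a's lifetime ends
    x.a = 7;         // writing to a begins its lifetime again
    int* p = &x.a;   // taken after the switch, so it points to a live object
    return *p + *p;  // OK
}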
Every version to date of the C and C++ Standards has been ambiguous or contradictory with regard to what can be done with addresses of union members. The authors of the C Standard didn't want to require that compilers make pessimistic allowances for the possibility that functions might be invoked by constructs like:
someFunction(&myUnion.member1, &myUnion.member2);
in cases where the function would cause the value of one member of myUnion to be changed between accesses made via the other. While the ability to take union members' addresses would have been pretty useless if code couldn't do things like:
someFunction1(&myUnion.member1);
someFunction2(&myUnion.member2);
someFunction3(&myUnion.member1);
the authors of the Standard expected that quality implementations intended for various purposes would process constructs that invoke Undefined Behavior "in a documented fashion characteristic of the environment" when doing so would best serve those purposes, and thus thought that making support for such constructs a quality-of-implementation issue would be simpler than trying to formulate precise rules for which patterns must be supported. A compiler that generated code for the called functions in the second example without knowing their calling context wouldn't be able to interleave accesses performed by the two functions, and a quality compiler that expanded them inline while processing the above code would have no trouble noticing when each pointer was derived from myUnion.
The authors of the C89 Standard didn't think it necessary to define precise rules for how pointers to union members behave, because they thought compiler writers' desire to produce quality implementations would drive them to handle appropriate cases sensibly even without such rules. Unfortunately, some compiler writers were too lazy to handle cases like the second example above, and rather than recognizing that there was never any reason for quality compilers to be incapable of handling such cases, the authors of later C and C++ Standards have bent over backward to come up with weirdly contorted, ambiguous, and contradictory rules that justify such compiler behavior.
As a result, the address-of operator should only be regarded as meaningfully applicable to union members in cases where the resulting pointer will be used for accessing individual bytes of storage, either using character types directly, or passing it to functions like memcpy that are defined in such fashion. Unless or until there's a major revamp of the Standard, or an appendix that describes means by which implementations can offer optional guarantees beyond what the Standard requires, it would be best to pretend that union members are, like bitfields, lvalues that don't have addresses.

Is the use of "new" necessary when creating a pointer in C++? [duplicate]

So far I can't find how to deduce that the following:
int* ptr;
*ptr = 0;
is undefined behavior.
First of all, there's 5.3.1/1 that states that * means indirection which converts T* to T. But this doesn't say anything about UB.
Then there's the often-quoted 3.7.3.2/4 saying that using a deallocation function on a non-null pointer renders the pointer invalid, and later usage of the invalid pointer is UB. But in the code above there's nothing about deallocation.
How can UB be deduced in the code above?
Section 4.1 looks like a candidate (emphasis mine):
An lvalue (3.10) of a non-function, non-array type T can be converted to an rvalue. If T is an incomplete type, a program that necessitates this conversion is ill-formed. If the object to which the lvalue refers is not an object of type T and is not an object of a type derived from T, or if the object is uninitialized, a program that necessitates this conversion has undefined behavior. If T is a non-class type, the type of the rvalue is the cv-unqualified version of T. Otherwise, the type of the rvalue is T.
I'm sure just searching on "uninitial" in the spec can find you more candidates.
I found the answer to this question in an unexpected corner of the C++ draft standard: section 24.2 Iterator requirements, specifically section 24.2.1 In general, paragraphs 5 and 10, which respectively say (emphasis mine):
[...][ Example: After the declaration of an uninitialized pointer x (as with int* x;), x must always be assumed to have a singular value of a pointer. —end example ] [...] Dereferenceable values are always non-singular.
and:
An invalid iterator is an iterator that may be singular.268
and footnote 268 says:
This definition applies to pointers, since pointers are iterators. The effect of dereferencing an iterator that has been invalidated is undefined.
Although it does look like there is some controversy over whether a null pointer is singular or not, and it looks like the term singular value needs to be properly defined in a more general manner.
The intent of singular seems to be summed up well in defect report 278, What does iterator validity mean?, under the rationale section, which says:
Why do we say "may be singular", instead of "is singular"? That's because a valid iterator is one that is known to be nonsingular. Invalidating an iterator means changing it in such a way that it's no longer known to be nonsingular. An example: inserting an element into the middle of a vector is correctly said to invalidate all iterators pointing into the vector. That doesn't necessarily mean they all become singular.
So invalidation and being uninitialized may create a value that is singular, but since we cannot prove it is nonsingular we must assume it is singular.
Update
An alternative common-sense approach would be to note that the draft standard, section 5.3.1 Unary operators, paragraph 1, says (emphasis mine):
The unary * operator performs indirection: the expression to which it is applied shall be a pointer to an object type, or a pointer to a function type and the result is an lvalue referring to the object or function to which the expression points.[...]
and if we then go to section 3.10 Lvalues and rvalues, paragraph 1, which says (emphasis mine):
An lvalue (so called, historically, because lvalues could appear on the left-hand side of an assignment expression) designates a function or an object. [...]
but ptr will not, except by chance, point to a valid object.
The OP's question is nonsense. There is no requirement that the Standard say certain behaviours are undefined, and indeed I would argue that all such wording be removed from the Standard because it confuses people and makes the Standard more verbose than necessary.
The Standard defines certain behaviour. The question is, does it specify any behaviour in this case? If it does not, the behaviour is undefined whether or not it says so explicitly.
In fact the specification that some things are undefined is left in the Standard primarily as a debugging aid for the Standards writers, the idea being to generate a contradiction if there is a requirement in one place which conflicts with an explicit statement of undefined behaviour in another: that's a way to prove a defect in the Standard. Without the explicit statement of undefined behaviour, the other clause prescribing behaviour would be normative and unchallenged.
Evaluating an uninitialized pointer causes undefined behaviour. Since dereferencing the pointer first requires evaluating it, this implies that dereferencing also causes undefined behaviour.
This was true in both C++11 and C++14, although the wording changed.
In C++14 it is fully covered by [dcl.init]/12:
When storage for an object with automatic or dynamic storage duration is obtained, the object has an indeterminate value, and if no initialization is performed for the object, that object retains an indeterminate value until that value is replaced.
If an indeterminate value is produced by an evaluation, the behavior is undefined except in the following cases:
where the "following cases" are particular operations on unsigned char.
In C++11, [conv.lval]/2 covered this under the lvalue-to-rvalue conversion procedure (i.e. retrieving the pointer value from the storage area denoted by ptr):
A glvalue of a non-function, non-array type T can be converted to a prvalue. If T is an incomplete type, a program that necessitates this conversion is ill-formed. If the object to which the glvalue refers is not
an object of type T and is not an object of a type derived from T, or if the object is uninitialized, a program that necessitates this conversion has undefined behavior.
The bolded part was removed for C++14 and replaced with the extra text in [dcl.init/12].
I'm not going to pretend I know a lot about this, but some compilers would initialize the pointer to NULL, and dereferencing a NULL pointer is UB.
Also, considering that an uninitialized pointer could point to anything (including NULL), you could conclude that it's UB when you dereference it.
A note in section 8.3.2 [dcl.ref]
[Note: in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the “object” obtained by dereferencing a null pointer, which causes undefined behavior. As described in 9.6, a reference cannot be bound directly to a bitfield. ]
—ISO/IEC 14882:1998(E), the ISO C++ standard, in section 8.3.2 [dcl.ref]
I think I should have written this as a comment instead; I'm not really that sure.
To dereference the pointer, you need to read from the pointer variable (not talking about the object it points to). Reading from an uninitialized variable is undefined behaviour.
What you do with the value of the pointer after you have read it doesn't matter anymore at this point, be it writing to (as in your example) or reading from the object it points to.
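For contrast, the well-defined variants (my sketch) first make the pointer point at an actual object before dereferencing:

void demo() {
    int x = 0;
    int* ptr = &x;       // points at a real object
    *ptr = 0;            // OK

    int* dyn = new int;  // or obtain an object dynamically
    *dyn = 0;            // OK
    delete dyn;
}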
Even if the normal storage of something in memory would have no "room" for any trap bits or trap representations, implementations are not required to store automatic variables the same way as static-duration variables except when there is a possibility that user code might hold a pointer to them somewhere. This behavior is most visible with integer types. On a typical 32-bit system, given the code:
#include <stdint.h>

uint16_t foo(void);
uint16_t bar(void);

uint16_t blah(uint32_t q)
{
    uint16_t a;
    if (q & 1) a = foo();
    if (q & 2) a = bar();
    return a;
}

unsigned short test(void)
{
    return blah(65540);
}
it would not be particularly surprising for test to yield 65540 even though that value is outside the representable range of uint16_t, a type which has no trap representations. If a local variable of type uint16_t holds Indeterminate Value, there is no requirement that reading it yield a value within the range of uint16_t. Since unexpected behaviors could result when using even unsigned integers in such fashion, there's no reason to expect that pointers couldn't behave in even worse fashion.

Is it legal to compare dangling pointers?

Is it legal to compare dangling pointers?
int *p, *q;
{
    int a;
    p = &a;
}
{
    int b;
    q = &b;
}
std::cout << (p == q) << '\n';
Note how both p and q point to objects that have already vanished. Is this legal?
Introduction: The first issue is whether it is legal to use the value of p at all.
After a has been destroyed, p acquires what is known as an invalid pointer value. Quote from N4430 (for discussion of N4430's status see the "Note" below):
When the end of the duration of a region of storage is reached, the values of all pointers representing the address of any part of the deallocated storage become invalid pointer values.
The behaviour when an invalid pointer value is used is also covered in the same section of N4430 (and almost identical text appears in C++14 [basic.stc.dynamic.deallocation]/4):
Indirection through an invalid pointer value and passing an invalid pointer value to a deallocation function have undefined behavior. Any other use of an invalid pointer value has implementation-defined behavior.
[ Footnote: Some implementations might define that copying an invalid pointer value causes a system-generated runtime fault. — end footnote ]
So you will need to consult your implementation's documentation to find out what should happen here (since C++14).
The term use in the above quotes means necessitating lvalue-to-rvalue conversion, as in C++14 [conv.lval]/2:
When an lvalue-to-rvalue conversion is applied to an expression e, and [...] the object to which the glvalue refers contains an invalid pointer value, the behaviour is implementation-defined.
History: In C++11 this said undefined rather than implementation-defined; it was changed by DR1438. See the edit history of this post for the full quotes.
Application to p == q: Supposing we have accepted in C++14+N4430 that the result of evaluating p and q is implementation-defined, and that the implementation does not define that a hardware trap occurs; [expr.eq]/2 says:
Two pointers compare equal if they are both null, both point to the same function, or both represent the same address (3.9.2), otherwise they compare unequal.
Since it's implementation-defined what values are obtained when p and q are evaluated, we can't say for sure what will happen here. But it must be either implementation-defined or unspecified.
g++ appears to exhibit unspecified behaviour in this case; depending on the -O switch I was able to have it say either 1 or 0, corresponding to whether or not the same memory address was re-used for b after a had been destroyed.
Note about N4430: This is a proposed defect resolution to C++14, that hasn't been accepted yet. It cleans up a lot of wording surrounding object lifetime, invalid pointers, subobjects, unions, and array bounds access.
In the C++14 text, it is defined under [basic.stc.dynamic.deallocation]/4 and subsequent paragraphs that an invalid pointer value arises when delete is used. However it's not clearly stated whether or not the same principle applies to static or automatic storage.
There is a definition "valid pointer" in [basic.compound]/3 but it is too vague to use sensibly.The [basic.life]/5 (footnote) refers to the same text to define the behaviour of pointers to objects of static storage duration, which suggests that it was meant to apply to all types of storage.
In N4430 the text is moved from that section up one level so that it does clearly apply to all storage durations. There is a note attached:
Drafting note: this should apply to all storage durations that can end, not just to dynamic storage duration. On an implementation supporting threads or segmented stacks, thread and automatic storage may behave in the same way that dynamic storage does.
My opinion: I don't see any consistent way to interpret the standard (pre-N4430) other than to say that p acquires an invalid pointer value. The behaviour doesn't seem to be covered by any other section besides what we have already looked at. So I am happy to treat the N4430 wording as representing the intent of the standard in this case.
Historically, there have been some systems where using a pointer as an rvalue might cause the system to fetch some information identified by some bits in that pointer. For example, if a pointer could contain the address of an object's header along with an offset into the object, fetching a pointer could cause the system to also fetch some information from that header. If the object has ceased to exist, the attempt to fetch information from its header could fail with arbitrary consequences.
That having been said, in the vast majority of C implementations, all pointers that were alive at some particular moment in time will forever hold the same relationships with regard to the relational and subtraction operators as they had at that particular time. Indeed, in most implementations if one has char *p, one may determine whether it identifies part of an object identified by char *base; size_t size; by checking whether (size_t)(p-base) < size; such comparison will work even retrospectively if there is any overlap in the objects' lifetime.
Unfortunately, the Standard defines no means by which code can indicate that it requires any of the latter guarantees, nor is there a standard means by which code can ask whether a particular implementation can promise any of the latter behaviors and refuse compilation if it does not. Further, some hyper-modern implementations will regard any use of relational or subtraction operators on two pointers as a promise by the programmer that the pointers in question will always identify the same live object, and omit any code which would only be relevant if that assumption didn't hold. Consequently, even though many hardware platforms would be able to offer guarantees that would be useful to many algorithms, there's no safe way by which code can exploit any such guarantees even if code will never need to run on hardware which does not naturally provide them.
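A sketch of that containment check (mine). It relies on the implementation-specific linear pointer arithmetic described above; as the answer notes, the Standard itself gives no such guarantee:

#include <cstddef>

// Non-portable: assumes pointer subtraction behaves linearly even when
// p does not point into the object that starts at base.
bool points_into(const char* p, const char* base, std::size_t size) {
    return static_cast<std::size_t>(p - base) < size;
}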
The pointers contain the addresses of the variables they reference. The addresses are valid even when the variables that used to be stored there are released / destroyed / unavailable.
As long as you don't try to use the values at those addresses you are safe: dereferencing *p or *q would be undefined.
Obviously the result is implementation-defined; therefore this code example can be used to study the features of your compiler if one doesn't want to dig into the assembly code.
Whether this is a meaningful practice is a totally different discussion.