The answers I have received for this question so far fall into two exactly opposite camps: "it's safe" and "it's undefined behaviour". I decided to rewrite the question in whole to get clearer answers, both for me and for anyone who might arrive here via Google.
Also, I removed the C tag, and this question is now C++ specific.
I am making an 8-byte-aligned memory heap that will be used in my virtual machine. The most obvious approach that I can think of is by allocating an array of std::uint64_t.
std::unique_ptr<std::uint64_t[]> block(new std::uint64_t[100]);
Let's assume sizeof(float) == 4 and sizeof(double) == 8. I want to store a float and a double in block and print the value.
float* pf = reinterpret_cast<float*>(&block[0]);
double* pd = reinterpret_cast<double*>(&block[1]);
*pf = 1.1;
*pd = 2.2;
std::cout << *pf << std::endl;
std::cout << *pd << std::endl;
I'd also like to store a C-string saying "hello".
char* pc = reinterpret_cast<char*>(&block[2]);
std::strcpy(pc, "hello\n");
std::cout << pc;
Now I want to store "Hello, world!", which takes more than 8 bytes, but I can still use two consecutive cells.
char* pc2 = reinterpret_cast<char*>(&block[3]);
std::strcpy(pc2, "Hello, world\n");
std::cout << pc2;
For integers, I don't need a reinterpret_cast.
block[5] = 1;
std::cout << block[5] << std::endl;
I'm allocating block as an array of std::uint64_t for the sole purpose of memory alignment. I also do not expect anything larger than 8 bytes on its own to be stored in there. The type of the block can be anything, as long as the starting address is guaranteed to be 8-byte-aligned.
Some people already answered that what I'm doing is totally safe, but some others said that I'm definitely invoking undefined behaviour.
Am I writing correct code to do what I intend? If not, what is the appropriate way?
The global allocation functions
To allocate an arbitrary (untyped) block of memory, the global allocation functions (§3.7.4/2):
void* operator new(std::size_t);
void* operator new[](std::size_t);
can be used to do this (§3.7.4.1/2):
§3.7.4.1/2
The allocation function attempts to allocate the requested amount of storage. If it is successful, it shall return the address of the start of a block of storage whose length in bytes shall be at least as large as the requested size. There are no constraints on the contents of the allocated storage on return from the allocation function. The order, contiguity, and initial value of storage allocated by successive calls to an allocation function are unspecified. The pointer returned shall be suitably aligned so that it can be converted to a pointer of any complete object type with a fundamental alignment requirement (3.11) and then used to access the object or array in the storage allocated (until the storage is explicitly deallocated by a call to a corresponding deallocation function).
And 3.11 has this to say about a fundamental alignment requirement:
§3.11/2
A fundamental alignment is represented by an alignment less than or equal to the greatest alignment supported by the implementation in all contexts, which is equal to alignof(std::max_align_t).
Just to be sure about the requirement that the allocation functions must behave like this:
§3.7.4/3
Any allocation and/or deallocation functions defined in a C++ program, including the default versions in the library, shall conform to the semantics specified in 3.7.4.1 and 3.7.4.2.
Quotes from C++ WD n4527.
Assuming the 8-byte alignment is less than the fundamental alignment of the platform (and it looks like it is, but this can be verified on the target platform with static_assert(alignof(std::max_align_t) >= 8)) - you can use the global ::operator new to allocate the memory required. Once allocated, the memory can be segmented and used given the size and alignment requirements you have.
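For example, a minimal sketch of that check plus allocation (the 100 × 8 byte size is only illustrative):

#include <cstddef>
#include <new>

static_assert(alignof(std::max_align_t) >= 8,
              "the fundamental alignment must be at least 8 bytes for this scheme");

int main() {
    void* raw = ::operator new(100 * 8);  // 100 eight-byte slots, suitably aligned
    // ... segment the block into slots and construct objects in them here ...
    ::operator delete(raw);
}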
An alternative here is std::aligned_storage, which can give you memory aligned to whatever requirement you specify.
typename std::aligned_storage<sizeof(T), alignof(T)>::type buffer[100];
From the question, I assume here that both the size and alignment of T would be 8.
A sample of what the final memory block could look like is (basic RAII included):
struct DataBlock {
    const std::size_t element_count;
    static constexpr std::size_t element_size = 8;
    void* data = nullptr;

    explicit DataBlock(size_t elements) : element_count(elements)
    {
        data = ::operator new(elements * element_size);
    }
    ~DataBlock()
    {
        ::operator delete(data);
    }

    DataBlock(DataBlock&) = delete; // no copy
    DataBlock& operator=(DataBlock&) = delete; // no assign
    // probably shouldn't move either
    DataBlock(DataBlock&&) = delete;
    DataBlock& operator=(DataBlock&&) = delete;

    template <class T>
    T* get_location(std::size_t index)
    {
        // https://stackoverflow.com/a/6449951/3747990
        // C++ WD n4527 3.9.2/4
        void* t = reinterpret_cast<void*>(reinterpret_cast<unsigned char*>(data) + index * element_size);
        // 5.2.9/13
        return static_cast<T*>(t);
        // C++ WD n4527 5.2.10/7 would allow this to be condensed
        //T* t = reinterpret_cast<T*>(reinterpret_cast<unsigned char*>(data) + index*element_size);
        //return t;
    }
};
// ....
DataBlock block(100);
I've constructed more detailed examples of the DataBlock with suitable template construct and get functions etc., live demo here and here with further error checking etc.
A note on aliasing
It does look like there are some aliasing issues in the original code (strictly speaking); you allocate memory of one type and cast it to another type.
It will probably work as you expect on your target platform, but you cannot rely on it. The most practical comment I've seen on this is:
"Undefined behaviour has the nasty result of usually doing what you think it should do, until it doesn’t” - hvd.
The code you have probably will work. I think it is better to use the appropriate global allocation functions and be sure that there is no undefined behaviour when allocating and using the memory you require.
Aliasing will still be applicable; once the memory is allocated - aliasing is applicable in how it is used. Once you have an arbitrary block of memory allocated (as above with the global allocation functions) and the lifetime of an object begins (§3.8/1) - aliasing rules apply.
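To make that concrete, here is a minimal sketch (purely illustrative) of beginning an object's lifetime inside such a raw block with placement new; once the lifetime has begun, accesses through the object's own type are fine:

#include <iostream>
#include <new>

int main() {
    void* raw = ::operator new(64);     // aligned for any type with fundamental alignment
    double* d = new (raw) double(2.2);  // placement new begins the lifetime of a double here
    std::cout << *d << std::endl;       // accessing it through its actual type is well defined
    // double is trivially destructible, so no explicit destructor call is needed
    ::operator delete(raw);
}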
What about std::allocator?
Whilst std::allocator is for homogeneous data containers, and what you are looking for is akin to heterogeneous allocation, the implementation in your standard library (given the Allocator concept) offers some guidance on raw memory allocation and the corresponding construction of the objects required.
Update for the new question:
The great news is there's a simple and easy solution to your real problem: Allocate the memory with new (unsigned char[size]). Memory allocated with new is guaranteed in the standard to be aligned in a way suitable for use as any type, and you can safely alias any type with char*.
The standard reference, 3.7.3.1/2, allocation functions:
The pointer returned shall be suitably aligned so that it can be
converted to a pointer of any complete object type and then used to
access the object or array in the storage allocated
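A minimal sketch of that approach (the 800-byte size is arbitrary; this sketch copies object representations with std::memcpy, which sidesteps the aliasing question entirely):

#include <cstring>
#include <iostream>
#include <memory>

int main() {
    std::unique_ptr<unsigned char[]> block(new unsigned char[800]);
    float f = 1.1f;
    std::memcpy(block.get(), &f, sizeof f);      // copy the float's bytes into the block
    float out;
    std::memcpy(&out, block.get(), sizeof out);  // and copy them back out to read the value
    std::cout << out << std::endl;
}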
Original answer for the original question:
At least in C++98/03 in 3.10/15 we have the following which pretty clearly makes it still undefined behavior (since you're accessing the value through a type that's not enumerated in the list of exceptions):
If a program attempts to access the stored value of an object through
an lvalue of other than one of the following types the behavior is
undefined:
— the dynamic type of the object,
— a cv-qualified version of the dynamic type of the object,
— a type that is the signed or unsigned type corresponding to the dynamic type of the object,
— a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object,
— an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union),
— a type that is a (possibly cvqualified) base class type of the dynamic type of the object,
— a char or unsigned char type.
pc, pf and pd are all pointers of different types that access memory declared in block as uint64_t; so for, say, pf the shared types are float and uint64_t.
One would violate the strict aliasing rule were one to write using one type and read using another, since the compiler could then reorder the operations, thinking there is no shared access. That is not your case, however: since the uint64_t array is only used for the allocation, this is exactly the same as using alloca to allocate the memory.
Incidentally, there is no issue with the strict aliasing rule when casting from any type to a char type and vice versa. This is a common pattern used for data serialization and deserialization.
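For instance, a minimal sketch of that pattern, dumping an object's bytes through an unsigned char pointer (the value is arbitrary):

#include <cstddef>
#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t value = 0xDEADBEEF;
    const unsigned char* bytes = reinterpret_cast<const unsigned char*>(&value);
    for (std::size_t i = 0; i < sizeof value; ++i)   // reading any object's bytes through
        std::cout << std::hex << static_cast<unsigned>(bytes[i]) << ' ';  // a char type is fine
    std::cout << std::endl;
}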
There has been a lot of discussion here, and some answers are slightly wrong while making good points, so I'll try to summarize:
Exactly following the text of the standard (no matter what version): yes, this is undefined behaviour. Note the standard doesn't even have the term strict aliasing -- it just has a set of rules to enforce it, no matter what implementations might define.
Understanding the reason behind the "strict aliasing" rules, it should work nicely on any implementation as long as neither float nor double takes more than 64 bits.
The standard won't guarantee you anything about the size of float or double (intentionally), and that's the reason why it is that restrictive in the first place.
You can get around all this by ensuring your "heap" is an allocated object (e.g. get it with malloc()) and accessing the aligned slots through a char*, shifting your offset left by 3 bits (see the sketch after this summary).
You still have to make sure that anything you store in such a slot won't take more than 64 bits (that's the hard part when it comes to portability).
In a nutshell: your code should be safe on any "sane" implementation as long as the size constraints aren't a problem (meaning the answer to the question in your title is most likely no), BUT it's still undefined behaviour (meaning the answer to your last paragraph is yes).
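A minimal sketch of the malloc-plus-char* suggestion from the summary above (the slot count is arbitrary; std::memcpy is used for the reads and writes to stay clearly within the rules):

#include <cstdlib>
#include <cstring>
#include <iostream>

int main() {
    // 100 slots of 8 bytes each; malloc'd storage is aligned for any fundamental type
    char* heap = static_cast<char*>(std::malloc(100u << 3));
    double d = 2.2;
    std::memcpy(heap + (1 << 3), &d, sizeof d);      // slot 1: the index shifted left by 3 bits
    double out;
    std::memcpy(&out, heap + (1 << 3), sizeof out);  // copy back out to read the stored value
    std::cout << out << std::endl;
    std::free(heap);
}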
I'll make it short: All your code works with defined semantics if you allocate the block using
std::unique_ptr<char[], decltype(&std::free)>
    mem(static_cast<char*>(std::malloc(800)), std::free);
Because
every type is allowed to alias with a char[] and
malloc() is guaranteed to return a block of memory sufficiently aligned for all types (except maybe SIMD ones).
We pass std::free as a custom deleter, because we used malloc(), not new[], so calling delete[], the default, would be undefined behaviour.
If you're a purist, you can also use operator new:
std::unique_ptr<char[]>
mem(static_cast<char*>(operator new[](800)));
Then we don't need a custom deleter. Or
std::unique_ptr<char[]> mem(new char[800]);
to avoid the static_cast from void* to char*. But operator new can be replaced by the user, so I'm always a bit wary of using it. OTOH, malloc cannot be replaced (only in platform-specific ways, such as LD_PRELOAD).
Yes, because the memory locations pointed to by pf and pd could overlap depending on the sizes of float and double. If they didn't, then the results of reading *pd and *pf would be well defined, but not the results of reading from block or pc.
The behavior of C++ and the CPU are distinct. Although the standard provides memory suitable for any object, the rules and optimizations imposed by the CPU make the alignment for any given object "undefined" - an array of short would reasonably be 2 byte aligned, but an array of a 3 byte structure may be 8 byte aligned. A union of all possible types can be created and used between your storage and the usage to ensure no alignment rules are broken.
union copyOut {
    char          Buffer[200]; // max string length
    std::int16_t  shortVal;
    std::int32_t  intVal;
    std::int64_t  longIntVal;
    float         fltVal;
    double        doubleVal;
} copyTarget;

// move from possibly unaligned space into the union; 'data' stands for whatever object is stored in the slot
std::memcpy(copyTarget.Buffer, &block[n], sizeof(data));
// use the appropriate copyTarget member here.
If you tag this as a C++ question:
(1) Why use uint64_t[] and not std::vector?
(2) In terms of memory management, your code lacks the management logic, which should keep track of which blocks are in use and which are free, track contiguous blocks, and of course provide the allocate and release methods.
(3) The code shows an unsafe way of using memory. For example, the char* is not const, so a block can potentially be written to and overwrite the next block(s). The reinterpret_cast is considered dangerous and should be abstracted away from the memory-user logic.
(4) The code doesn't show the allocator logic. In the C world, the malloc function is untyped; in the C++ world, operator new is typed. You should consider something like the new operator.
Or even better, a template <T*>?
In case the memory-mapped file contains a sequence of 32-bit integers, if data() returned a void*, we would be able to static_cast to std::uint32_t directly.
Why did boost authors choose to return a char* instead?
EDIT: as pointed out, in case portability is an issue, a translation is needed. But saying that a file (or a chunk of memory in this case) is a stream of bytes more than it is a stream of bits, or of IEEE754 doubles, or of complex data structures, seems to me a very broad statement that needs some more explanation.
Even having to handle endianness, being able to directly map to a vector of be_uint32_t as suggested (and as implemented here) would make the code much more readable:
struct be_uint32_t {
std::uint32_t raw;
operator std::uint32_t() const { return ntohl(raw); } // ntohl from <arpa/inet.h>
};
static_assert(sizeof(be_uint32_t)==4, "POD failed");
Is it allowed/advised to cast to a be_uint32_t*? Why, or why not?
Which kind of cast should be used?
EDIT2: Since it seems difficult to get to the point instead of discussing whether the memory model of a computer is made of bits, bytes or words, I will rephrase by giving an example:
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>
#include <iostream>
#include <boost/iostreams/device/mapped_file.hpp>

struct entry {
    std::uint32_t a;
    std::uint64_t b;
} __attribute__((packed)); /* compiler specific, but supported
                              in other ways by all major compilers */

static_assert(sizeof(entry) == 12, "entry: Struct size mismatch");
static_assert(offsetof(entry, a) == 0, "entry: Invalid offset for a");
static_assert(offsetof(entry, b) == 4, "entry: Invalid offset for b");

int main(void) {
    boost::iostreams::mapped_file_source mmap("map");
    assert(mmap.is_open());
    const entry* data_begin = reinterpret_cast<const entry*>(mmap.data());
    const entry* data_end = data_begin + mmap.size()/sizeof(entry);
    for(const entry* ii = data_begin; ii != data_end; ++ii)
        std::cout << std::hex << ii->a << " " << ii->b << std::endl;
    return 0;
}
Given that the map file contains the bit expected in the correct order, is there any other reason to avoid using the reinterpret_cast to use my virtual memory without copying it first?
If there is not, why force the user to do a reinterpret_cast by returning a typed pointer?
Please answer all the questions for bonus points :)
In case the memory mapped file contains a sequence of 32 bit integers, if data() returned a void*, we could be able to static cast to std::uint32_t directly.
No, not really. You still have to consider (if nothing else) endianness. This "one step conversion" idea would lead you into a false sense of security. You're forgetting about an entire layer of translation between the bytes in the file and the 32-bit integer you want to get into your program. Even when that translation happens to be a no-op on your present system and for a given file, it's still a translation step.
It's much better to get an array of bytes (literally what a char* points to!); then you know you have to do some thinking to ensure that your pointer conversion is valid and that you are performing whatever other work is required.
char* represents array of raw bytes, which is what mapped_file::data is in most general case.
void* would be misleading, as it provides less information about the contained type and requires more setup to work with than char* - we know that the file contents are some bytes, which is exactly what char* represents.
Template return type would require conversion to that type be performed inside the library, while it makes more sense to do that on the caller side (since library just provides an interface to raw file contents, and the caller knows specifically what those contents are).
Returning a char * seems to be just a (peculiar) design decision of boost::iostreams implementation.
Other APIs like e.g. the boost interprocess return void*.
As observed by sehe the UNIX mmap specification (and malloc) use void* as well.
It is somewhat a duplicate of void* or char* for generic buffer representation?
As a note of caution, the layer of translation mentioned by Lightness in another answer may be needed when the memory is written on one architecture and read on a different one. Endianness is easy to solve using a conversion type, but alignment needs to be considered as well.
About static cast: http://en.cppreference.com/w/cpp/language/static_cast mentions:
A prvalue of type pointer to void (possibly cv-qualified) can be
converted to pointer to any type. If the value of the original pointer
satisfies the alignment requirement of the target type, then the
resulting pointer value is unchanged, otherwise it is unspecified.
Conversion of any pointer to pointer to void and back to pointer to
the original (or more cv-qualified) type preserves its original value.
So if the file to be memory mapped was created on a different architecture with a different alignment, the loading may fail (e.g. with a SIGBUS) depending on the architecture and the OS.
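If alignment (or endianness) is a concern, a hedged alternative to the reinterpret_cast in the EDIT2 example is to memcpy each field out of the mapped bytes; the offsets below correspond to the packed entry layout shown earlier, and the raw array is only a stand-in for mmap.data():

#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    // stand-in for the first 12 mapped bytes: a = 1 (4 bytes), b = 2 (8 bytes), little-endian
    const unsigned char raw[12] = {1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0};
    std::uint32_t a;
    std::uint64_t b;
    std::memcpy(&a, raw + 0, sizeof a);  // no alignment assumptions on the source bytes
    std::memcpy(&b, raw + 4, sizeof b);
    std::cout << a << ' ' << b << std::endl;  // prints "1 2" on a little-endian machine
}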
My company uses a messaging server which gets a message into a const char* and then casts it to the message type.
I've become concerned about this after asking this question. I'm not aware of any bad behavior in the messaging server. Is it possible that const variables do not incur aliasing problems?
For example say that foo is defined in MessageServer in one of these ways:
As a parameter: void MessageServer(const char* foo)
Or as const variable at the top of MessageServer: const char* foo = PopMessage();
Now MessageServer is a huge function, but it never assigns anything to foo; however, at one point in MessageServer's logic foo will be cast to the selected message type.
auto bar = reinterpret_cast<const MessageJ*>(foo);
bar will only be read from subsequently, but will be used extensively for object setup.
Is an aliasing problem possible here, or does the fact that foo is only initialized, and never modified save me?
EDIT:
Jarod42's answer finds no problem with casting from a const char* to a MessageJ*, but I'm not sure this makes sense.
We know this is illegal:
MessageX* foo = new MessageX;
const auto bar = reinterpret_cast<MessageJ*>(foo);
Are we saying this somehow makes it legal?
MessageX* foo = new MessageX;
const auto temp = reinterpret_cast<char*>(foo);
auto bar = reinterpret_cast<const MessageJ*>(temp);
My understanding of Jarod42's answer is that the cast to temp makes it legal.
EDIT:
I've gotten some comments with relation to serialization, alignment, network passing, and so on. That's not what this question is about.
This is a question about strict aliasing.
Strict aliasing is an assumption, made by the C (or C++) compiler, that dereferencing pointers to objects of different types will never refer to the same memory location (i.e. alias each other).
What I'm asking is: Will the initialization of a const object, by casting from a char*, ever be optimized below where that object is cast to another type of object, such that I am casting from uninitialized data?
First of all, casting pointers does not cause any aliasing violations (although it might cause alignment violations).
Aliasing refers to the process of reading or writing an object through a glvalue of different type than the object.
If an object has type T, and we read/write it via a X& and a Y& then the questions are:
Can X alias T?
Can Y alias T?
It does not directly matter whether X can alias Y or vice versa, as you seem to focus on in your question. But the compiler can infer, if X and Y are completely incompatible, that there is no type T that can be aliased by both X and Y, and therefore it can assume that the two references refer to different objects.
So, to answer your question, it all hinges on what PopMessage does. If the code is something like:
const char *PopMessage()
{
static MessageJ foo = .....;
return reinterpret_cast<const char *>(&foo);
}
then it is fine to write:
const char *ptr = PopMessage();
auto bar = reinterpret_cast<const MessageJ*>(ptr);
auto baz = *bar; // OK, accessing a `MessageJ` via glvalue of type `MessageJ`
auto ch = ptr[4]; // OK, accessing a `MessageJ` via glvalue of type `char`
and so on. The const has nothing to do with it. In fact if you did not use const here (or you cast it away) then you could also write through bar and ptr with no problem.
On the other hand, if PopMessage was something like:
const char *PopMessage()
{
static char buf[200];
return buf;
}
then the line auto baz = *bar; would cause UB because char cannot be aliased by MessageJ. Note that you can use placement-new to change the dynamic type of an object (in that case, char buf[200] is said to have stopped existing, and the new object created by placement-new exists and its type is T).
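A minimal sketch of that placement-new case (MessageJ here is just a stand-in aggregate):

#include <iostream>
#include <new>

struct MessageJ { int id; };

int main() {
    alignas(MessageJ) char buf[sizeof(MessageJ)];
    MessageJ* m = new (buf) MessageJ{42};  // the chars of buf stop existing; a MessageJ lives here now
    std::cout << m->id << std::endl;       // accessing it as MessageJ is therefore well defined
    m->~MessageJ();
}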
My company uses a messaging server which gets a message into a const char* and then casts it to the message type.
So long as you mean that it does a reinterpret_cast (or a C-style cast that devolves to a reinterpret_cast):
MessageJ *j = new MessageJ();
MessageServer(reinterpret_cast<char*>(j));
// or PushMessage(reinterpret_cast<char*>(j));
and later takes that same pointer and reinterpret_cast's it back to the actual underlying type, then that process is completely legitimate:
void MessageServer(char *foo)
{
    if (somehow figure out that foo is actually a MessageJ*)
    {
        MessageJ *bar = reinterpret_cast<MessageJ*>(foo);
        // operate on bar
    }
}

// or

void MessageServer()
{
    char *foo = PopMessage();
    if (somehow figure out that foo is actually a MessageJ*)
    {
        MessageJ *bar = reinterpret_cast<MessageJ*>(foo);
        // operate on bar
    }
}
Note that I specifically dropped the const's from your examples as their presence or absence doesn't matter. The above is legitimate when the underlying object that foo points at actually is a MessageJ, otherwise it is undefined behavior. The reinterpret_cast'ing to char* and back again yields the original typed pointer. Indeed, you could reinterpret_cast to a pointer of any type and back again and get the original typed pointer. From this reference:
Only the following conversions can be done with reinterpret_cast ...
6) An lvalue expression of type T1 can be converted to reference to another type T2. The result is an lvalue or xvalue referring to the same object as the original lvalue, but with a different type. No temporary is created, no copy is made, no constructors or conversion functions are called. The resulting reference can only be accessed safely if allowed by the type aliasing rules (see below) ...
Type aliasing
When a pointer or reference to object of type T1 is reinterpret_cast (or C-style cast) to a pointer or reference to object of a different type T2, the cast always succeeds, but the resulting pointer or reference may only be accessed if both T1 and T2 are standard-layout types and one of the following is true:
T2 is the (possibly cv-qualified) dynamic type of the object ...
Effectively, reinterpret_cast'ing between pointers of different types simply instructs the compiler to reinterpret the pointer as pointing at a different type. More importantly for your example though, round-tripping back to the original type again and then operating on it is safe. That is because all you've done is instructed the compiler to reinterpret a pointer as pointing at a different type and then told the compiler again to reinterpret that same pointer as pointing back at the original, underlying type.
So, the round trip conversion of your pointers is legitimate, but what about potential aliasing problems?
Is an aliasing problem possible here, or does the fact that foo is only initialized, and never modified save me?
The strict aliasing rule allows compilers to assume that references (and pointers) to unrelated types do not refer to the same underlying memory. This assumption allows lots of optimizations because it decouples operations on unrelated reference types as being completely independent.
#include <iostream>

int foo(int *x, long *y)
{
    // foo can assume that x and y do not alias the same memory because they have unrelated types
    // so it is free to reorder the operations on *x and *y as it sees fit
    // and it need not worry that modifying one could affect the other
    *x = -1;
    *y = 0;
    return *x;
}

int main()
{
    long a;
    int b = foo(reinterpret_cast<int*>(&a), &a); // violates strict aliasing rule
    // the above call has UB because it both writes and reads a through an unrelated pointer type
    // on return b might be either 0 or -1; a could similarly be arbitrary
    // technically, the program could do anything because it's UB
    std::cout << b << ' ' << a << std::endl;
    return 0;
}
In this example, thanks to the strict aliasing rule, the compiler can assume in foo that setting *y cannot affect the value of *x. So, it can decide to just return -1 as a constant, for example. Without the strict aliasing rule, the compiler would have to assume that altering *y might actually change the value of *x. Therefore, it would have to enforce the given order of operations and reload *x after setting *y. In this example it might seem reasonable enough to enforce such paranoia, but in less trivial code doing so will greatly constrain reordering and elimination of operations and force the compiler to reload values much more often.
Here are the results on my machine when I compile the above program differently (Apple LLVM v6.0 for x86_64-apple-darwin14.1.0):
$ g++ -Wall test58.cc
$ ./a.out
0 0
$ g++ -Wall -O3 test58.cc
$ ./a.out
-1 0
In your first example, foo is a const char * and bar is a const MessageJ * reinterpret_cast'ed from foo. You further stipulate that the object's underlying type actually is a MessageJ and that no reads are done through the const char *. Instead, it is only cast to the const MessageJ * from which only reads are then done. Since you do not read nor write through the const char * alias, there can be no aliasing optimization problem with your accesses through your second alias in the first place. This is because there are no potentially conflicting operations performed on the underlying memory through your aliases of unrelated types. However, even if you did read through foo, there could still be no potential problem, as such accesses are allowed by the type aliasing rules (see below) and any ordering of reads through foo or bar would yield the same results because there are no writes occurring here.
Let us now drop the const qualifiers from your example and presume that MessageServer does do some write operations on bar and furthermore that the function also reads through foo for some reason (e.g. - prints a hex dump of memory). Normally, there might be an aliasing problem here as we have reads and writes happening through two pointers to the same memory through unrelated types. However, in this specific example, we are saved by the fact that foo is a char*, which gets special treatment by the compiler:
Type aliasing
When a pointer or reference to object of type T1 is reinterpret_cast (or C-style cast) to a pointer or reference to object of a different type T2, the cast always succeeds, but the resulting pointer or reference may only be accessed if both T1 and T2 are standard-layout types and one of the following is true: ...
T2 is char or unsigned char
The strict-aliasing optimizations that are allowed for operations through references (or pointers) of unrelated types are specifically disallowed when a char reference (or pointer) is in play. The compiler instead must be paranoid that operations through the char reference (or pointer) can affect and be affected by operations done through other references (or pointers). In the modified example where reads and writes operate on both foo and bar, you can still have defined behavior because foo is a char*. Therefore, the compiler is not allowed to optimize to reorder or eliminate operations on your two aliases in ways that conflict with the serial execution of the code as written. Similarly, it is forced to be paranoid about reloading values that may have been affected by operations through either alias.
The answer to your question is that, so long as your functions are properly round tripping pointers to a type through a char* back to its original type, then your function is safe, even if you were to interleave reads (and potentially writes, see caveat at end of EDIT) through the char* alias with reads+writes through the underlying type alias.
These two technical references (3.10.10) are useful for answering your question. These other references help give a better understanding of the technical information.
====
EDIT: In the comments below, zmb objects that while char* can legitimately alias a different type, the converse is not true, as several sources seem to say in varying forms: the char* exception to the strict aliasing rule is an asymmetric, "one-way" rule.
Let us modify my above strict-aliasing code example and ask would this new version similarly result in undefined behavior?
#include <iostream>

char foo(char *x, long *y)
{
    // can foo assume that x and y cannot alias the same memory?
    *x = -1;
    *y = 0;
    return *x;
}

int main()
{
    long a;
    char b = foo(reinterpret_cast<char*>(&a), &a); // explicitly allowed!
    // if this is defined behavior then what must the values of b and a be?
    std::cout << (int) b << ' ' << a << std::endl;
    return 0;
}
I argue that this is defined behavior and that both a and b must be zero after the call to foo. From the C++ standard (3.10.10):
If a program attempts to access the stored value of an object through a glvalue of other than one of the following types the behavior is undefined:^52
the dynamic type of the object ...
a char or unsigned char type ...
^52: The intent of this list is to specify those circumstances in which an object may or may not be aliased.
In the above program, I am accessing the stored value of an object through both its actual type and a char type, so it is defined behavior and the results have to comport with the serial execution of the code as written.
Now, there is no general way for the compiler to always statically know in foo that the pointer x actually aliases y or not (e.g. - imagine if foo was defined in a library). Maybe the program could detect such aliasing at run time by examining the values of the pointers themselves or consulting RTTI, but the overhead this would incur wouldn't be worth it. Instead, the better way to generally compile foo and allow for defined behavior when x and y do happen to alias one another is to always assume that they could (i.e. - disable strict alias optimizations when a char* is in play).
Here's what happens when I compile and run the above program:
$ g++ -Wall test59.cc
$ ./a.out
0 0
$ g++ -O3 -Wall test59.cc
$ ./a.out
0 0
This output is at odds with the earlier, similar strict-aliasing program's. This is not dispositive proof that I'm right about the standard, but the different results from the same compiler provides decent evidence that I may be right (or, at least that one important compiler seems to understand the standard the same way).
Let's examine some of the seemingly conflicting sources:
The converse is not true. Casting a char* to a pointer of any type other than a char* and dereferencing it is usually in violation of the strict aliasing rule. In other words, casting from a pointer of one type to a pointer of an unrelated type through a char* is undefined.
The bolded bit is why this quote doesn't apply to the problem addressed by my answer nor the example I just gave. In both my answer and the example, the aliased memory is being accessed both through a char* and the actual type of the object itself, which can be defined behavior.
Both C and C++ allow accessing any object type via char * (or specifically, an lvalue of type char). They do not allow accessing a char object via an arbitrary type. So yes, the rule is a "one way" rule.
Again, the bolded bit is why this statement doesn't apply to my answers. In this and similar counter-examples, an array of characters is being accessed through a pointer of an unrelated type. Even in C, this is UB because the character array might not be aligned according to the aliased type's requirements, for example. In C++, this is UB because such access does not meet any of the type aliasing rules as the underlying type of the object actually is char.
In my examples, we first have a valid pointer to a properly constructed type that is then aliased by a char* and then reads and writes through these two aliased pointers are interleaved, which can be defined behavior. So, there seems to be some confusion and conflation out there between the strict aliasing exception for char and not accessing an underlying object through an incompatible reference.
int value;
int *p = &value;
char *q = reinterpret_cast<char*>(&value);
Both p and q refer to the same address; they are aliasing the same memory. What the language does is provide a set of rules defining which behaviors are guaranteed: write through p, then read through q is fine; the other way around is not fine.
The standard and many examples clearly state that "write through q, then read through p (or value)" can be well defined behavior. What is not as abundantly clear, but what I'm arguing for here, is that "write through p (or value), then read through q" is always well defined. I claim even further, that "reads and writes through p (or value) can be arbitrarily interleaved with reads and writes to q" with well defined behavior.
Now there is one caveat to the previous statement and why I kept sprinkling the word "can" throughout the above text. If you have a type T reference and a char reference that alias the same memory, then arbitrarily interleaving reads+writes on the T reference with reads on the char reference is always well defined. For example, you might do this to repeatedly print out a hex dump of the underlying memory as you modify it multiple times through the T reference. The standard guarantees that strict aliasing optimizations will not be applied to these interleaved accesses, which otherwise might give you undefined behavior.
But what about writes through a char reference alias? Well, such writes may or may not be well defined. If a write through the char reference violates an invariant of the underlying T type, then you can get undefined behavior. If such a write improperly modified the value of a T member pointer, then you can get undefined behavior. If such a write modified a T member value to a trap value, then you can get undefined behavior. And so on. However, in other instances, writes through the char reference can be completely well defined. Rearranging the endianness of a uint32_t or uint64_t by reading+writing to them through an aliased char reference is always well defined, for example. So, whether such writes are completely well defined or not depends on the particulars of the writes themselves. Regardless, the standard guarantees that its strict aliasing optimizations will not reorder or eliminate such writes w.r.t. other operations on the aliased memory in a manner that itself could lead to undefined behavior.
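For instance, a minimal sketch of that endianness-rearranging example, interleaving writes through the char alias with a read through the uint32_t itself:

#include <cstdint>
#include <iostream>
#include <utility>

int main() {
    std::uint32_t v = 0x11223344;
    unsigned char* b = reinterpret_cast<unsigned char*>(&v);  // char alias of the same object
    std::swap(b[0], b[3]);  // writes through the char alias reorder the bytes
    std::swap(b[1], b[2]);
    std::cout << std::hex << v << std::endl;  // reading through the uint32_t itself prints 44332211
}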
So my understanding is that you are doing something like this:
#include <iostream>

enum MType { J, K };

struct MessageX { MType type; };

struct MessageJ {
    MType type{ J };
    int id{ 5 };
    //some other members
};

const char* popMessage() {
    return reinterpret_cast<char*>(new MessageJ());
}

void MessageServer(const char* foo) {
    const MessageX* msgx = reinterpret_cast<const MessageX*>(foo);
    switch (msgx->type) {
        case J: {
            const MessageJ* msgJ = reinterpret_cast<const MessageJ*>(foo);
            std::cout << msgJ->id << std::endl;
        }
    }
}

int main() {
    const char* foo = popMessage();
    MessageServer(foo);
}
If that is correct, then the expression msgJ->id is ok (as would be any access to foo), as msgJ has the correct dynamic type. msgx->type, on the other hand, does incur UB, because msgx has an unrelated type. The fact that the pointer to MessageJ was cast to const char* in between is completely irrelevant.
As was cited by others, here is the relevant part in the standard (the "glvalue" is the result of dereferencing the pointer):
If a program attempts to access the stored value of an object through a glvalue of other than one of the following types the behavior is undefined:52
the dynamic type of the object,
a cv-qualified version of the dynamic type of the object,
a type similar (as defined in 4.4) to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object,
an aggregate or union type that includes one of the aforementioned types among its elements or nonstatic data members (including, recursively, an element or non-static data member of a subaggregate or contained union),
a type that is a (possibly cv-qualified) base class type of the dynamic type of the object,
a char or unsigned char type.
As far as the discussion "cast to char*" vs "cast from char*" is concerned:
You might know that the standard doesn't talk about strict aliasing as such, it only provides the list above. Strict aliasing is one analysis technique based on that list for compilers to determine which pointers can potentially alias each other. As far as optimizations are concerned, it doesn't make a difference, if a pointer to a MessageJ object was cast to char* or vice versa. The compiler cannot (without further analysis) assume that a char* and MessageX* point to distinct objects and will not perform any optimizations (e.g. reordering) based on that.
Of course that doesn't change the fact that accessing a char array via a pointer to a different type would still be UB in C++ (I assume mostly due to alignment issues) and the compiler might perform other optimizations that could ruin your day.
EDIT:
What I'm asking is: Will the initialization of a const object, by
casting from a char*, ever be optimized below where that object is
cast to another type of object, such that I am casting from
uninitialized data?
No it will not. Aliasing analysis doesn't influence how the pointer itself is handled, but the access through that pointer. The compiler will NOT reorder the write access (store memory address in the pointer variable) with the read access (copy to other variable / load of address in order to access the memory location) to the same variable.
There is no aliasing problem as you use a (const) char* type; see the last point of:
If a program attempts to access the stored value of an object through a glvalue of other than one of the following types the behavior is undefined:
the dynamic type of the object,
a cv-qualified version of the dynamic type of the object,
a type similar (as defined in 4.4) to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to the dynamic type of the object,
a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object,
an aggregate or union type that includes one of the aforementioned types among its elements or non-static data members (including, recursively, an element or non-static data member of a subaggregate or contained union),
a type that is a (possibly cv-qualified) base class type of the dynamic type of the object,
a char or unsigned char type.
The other answer answered the question well enough (it's a direct quotation from the C++ standard in https://isocpp.org/files/papers/N3690.pdf page 75), so I'll just point out other problems in what you're doing.
Note that your code may run into alignment problems. For example, if the alignment of MessageJ is 4 or 8 bytes (typical on 32-bit and 64-bit machines), strictly speaking, it is undefined behaviour to access an arbitrary character array pointer as a MessageJ pointer.
You won't run into any problems on x86/AMD64 architectures as they allow unaligned access. However, someday you may find that the code you're developing is ported to a mobile ARM architecture and the unaligned access would be a problem then.
It therefore seems you're doing something you shouldn't be doing. I would consider using serialization instead of accessing a character array as a MessageJ type. Potential alignment problems aren't the only issue; another problem is that the data may have a different representation on 32-bit and 64-bit architectures.
Is there a difference between pointer to integer-pointer (int**) and pointer to character-pointer (char**), and any other case of pointer to pointer?
Isn't the memory block size for any pointer the same, so the sub-datatype doesn't play a role here?
Is it just a semantic distinction with no other significance?
Why not to use just void**?
Why should we use void** when we want a pointer to a char*? Why should we not use char**?
With char **, you have type safety. If the pointer is correctly initialized and not null, you know that by dereferencing it once you get a valid char * - and by dereferencing that pointer, in turn, you get a char.
Why should you ignore this advantage in type safety, and instead play pointer Russian roulette with void**?
The difference is in type-safety. T** implicitly interprets the data as T. void**, however, needs to be manually casted first. And no, pointers are not all 4 / 8 bytes on 32 / 64bit architectures respectively. Member function pointers, for instance, contain offset information too, which needs to be stored in the pointer itself (in the most common implementation).
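A small sketch illustrating the point about pointer sizes; the exact numbers are implementation-specific, so treat the output as an example only:

#include <iostream>

struct S { virtual void f() {} virtual void g() {} };

int main() {
    std::cout << sizeof(char*) << ' '               // plain data pointers...
              << sizeof(void*) << ' '               // ...are typically the same size, while
              << sizeof(void (S::*)()) << std::endl; // member function pointers may be larger
}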
Most C implementations use the same size and format for all pointers, but this is not required by the C standard.
Some machines do not have byte addressing, so the C implementation implements it by using shifts and other operations. In these implementations, pointers to larger types, such as int, may be normal addresses, but pointers to char would have to have both a machine address and a byte-within-word offset.
Additionally, C makes use of the type information for a variety of purposes, including reducing mistakes made by programmers (possibly giving warnings or errors when you attempt to use a pointer to int where a pointer to float is needed) and optimization. Regarding optimization, consider this example:
void foo(float *array, int *limit)
{
    for (int i = 0; i < *limit; ++i)
        array[i] = <some calculation>;
}
The C standard says a compiler may use the fact that array and limit are pointers to different types to conclude that they do not overlap. Given this rule, the C implementation may evaluate *limit once when the loop starts, because it knows it will not change during the loop. Without this rule, the compiler would have to assume that one of the assignments to array[i] might change *limit, and it would have to load *limit from memory in each iteration.
Originally being the topic of this question, it emerged that the OP just overlooked the dereference. Meanwhile, this answer got me and some others thinking - why is it allowed to cast a pointer to a reference with a C-style cast or reinterpret_cast?
int main() {
char c = 'A';
char* pc = &c;
char& c1 = (char&)pc;
char& c2 = reinterpret_cast<char&>(pc);
}
The above code compiles without any warning or error (regarding the cast) on Visual Studio while GCC will only give you a warning, as shown here.
My first thought was that the pointer somehow automagically gets dereferenced (I work with MSVC normally, so I didn't get the warning GCC shows), and tried the following:
#include <iostream>
int main() {
char c = 'A';
char* pc = &c;
char& c1 = (char&)pc;
std::cout << *pc << "\n";
c1 = 'B';
std::cout << *pc << "\n";
}
With the very interesting output shown here. So it seems that you are accessing the pointed-to variable, but at the same time, you are not.
Ideas? Explanations? Standard quotes?
Well, that's the purpose of reinterpret_cast! As the name suggests, the purpose of that cast is to reinterpret a memory region as a value of another type. For this reason, using reinterpret_cast you can always cast an lvalue of one type to a reference of another type.
This is described in 5.2.10/10 of the language specification. It also says there that reinterpret_cast<T&>(x) is the same thing as *reinterpret_cast<T*>(&x).
The fact that you are casting a pointer in this case is totally and completely unimportant. No, the pointer does not get automatically dereferenced (taking into account the *reinterpret_cast<T*>(&x) interpretation, one might even say that the opposite is true: the address of that pointer is automatically taken). The pointer in this case serves as just "some variable that occupies some region in memory". The type of that variable makes no difference whatsoever. It can be a double, a pointer, an int or any other lvalue. The variable is simply treated as memory region that you reinterpret as another type.
As for the C-style cast - it just gets interpreted as reinterpret_cast in this context, so the above immediately applies to it.
In your second example you attached reference c1 to the memory occupied by the pointer variable pc. When you did c1 = 'B', you forcefully wrote the value 'B' into that memory, completely destroying the original pointer value (by overwriting one byte of that value). Now the destroyed pointer points to some unpredictable location. Later you tried to dereference that destroyed pointer. What happens in such a case is a matter of pure luck. The program might crash, since the pointer is generally non-dereferenceable. Or you might get lucky and your pointer might end up pointing to some unpredictable yet valid location. In that case the program will output something. No one knows what it will output, and there's no meaning in it whatsoever.
One can rewrite your second program into an equivalent program without references
#include <iostream>

int main() {
    char* pc = new char('A');
    char* c = (char *) &pc;
    std::cout << *pc << "\n";
    *c = 'B';
    std::cout << *pc << "\n";
}
From the practical point of view, on a little-endian platform your code would overwrite the least-significant byte of the pointer. Such a modification will not make the pointer point too far away from its original location. So the code is more likely to print something instead of crashing. On a big-endian platform your code would destroy the most-significant byte of the pointer, throwing it wildly off to point to a totally different location, thus making your program more likely to crash.
It took me a while to grok it, but I think I finally got it.
The C++ standard specifies that a cast reinterpret_cast<U&>(t) is equivalent to *reinterpret_cast<U*>(&t).
In our case, U is char, and t is char*.
Expanding those, we see that the following happens:
we take the address of the argument to the cast, yielding a value of type char**.
we reinterpret_cast this value to char*
we dereference the result, yielding a char lvalue.
reinterpret_cast allows you to cast from any pointer type to any other pointer type. And so, a cast from char** to char* is well-formed.
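A small sketch making that expansion explicit; both expressions end up naming the same byte of the pointer variable itself:

#include <iostream>

int main() {
    char c = 'A';
    char* pc = &c;
    char& r1 = reinterpret_cast<char&>(pc);    // reference into the pointer object itself
    char& r2 = *reinterpret_cast<char*>(&pc);  // the spelled-out equivalent form
    std::cout << (&r1 == &r2) << std::endl;    // prints 1: both refer to the first byte of pc
}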
I'll try to explain this using my ingrained intuition about references and pointers rather than relying on the language of the standard.
C didn't have reference types, it only had values and pointer types (addresses) - since, physically in memory, we only have values and addresses.
In C++ we've added references to the syntax, but you can think of them as a kind of syntactic sugar - there is no special data structure or memory layout scheme for holding references.
Well, what "is" a reference from that perspective? Or rather, how would you "implement" a reference? With a pointer, of course. So whenever you see a reference in some code you can pretend it's really just a pointer that's been used in a special way: if int x; and int& y{x}; then we really have a int* y_ptr = &x; and if we say y = 123; we merely mean *(y_ptr) = 123;. This is not dissimilar from how, when we use C array subscripts (a[1] = 2;) what actually happens is that a is "decayed" to mean pointer to its first element, and then what gets executed is *(a + 1) = 2.
(Side note: Compilers don't actually always hold pointers behind every reference; for example, the compiler might use a register for the referred-to variable, and then a pointer can't point to it. But the metaphor is still pretty safe.)
Having accepted the "reference is really just a pointer in disguise" metaphor, it should now not be surprising that we can ignore this disguise with a reinterpret_cast<>().
PS - std::ref is also really just a pointer when you drill down into it.
It's allowed because C++ allows pretty much anything when you cast.
But as for the behavior:
pc is a 4 byte pointer
(char)pc tries to interpret the pointer as a byte, in particular the last of the four bytes
(char&)pc is the same, but returns a reference to that byte
When you first print pc, nothing has happened and you see the letter you stored
c1 = 'B' modifies the last byte of the 4 byte pointer, so it now points to something else
When you print again, you are now pointing to a different location which explains your result.
Since the last byte of the pointer is modified, the new memory address is nearby, making it unlikely to be in a piece of memory your program isn't allowed to access. That's why you don't get a seg-fault. The actual value obtained is undefined, but is highly likely to be a zero, which explains the blank output when it's interpreted as a char.
When you're casting, with a C-style cast or with a reinterpret_cast, you're basically telling the compiler to look the other way ("don't you mind, I know what I'm doing").
C++ allows you to tell the compiler to do that. That doesn't mean it's a good idea...