memset and a dynamic array of std::complex<double> - c++

Since std::complex is a non-trivial type, compiling the following with GCC 8.1.1
complex<double>* z = new complex<double>[6];
memset(z, 0, 6 * sizeof *z);
delete[] z;
produces a warning
clearing an object of non-trivial type
My question is, is there actually any potential harm in doing so?

The behavior of std::memset is only defined if the pointer it is modifying is a pointer to a TriviallyCopyable type. std::complex is guaranteed to be a LiteralType, but, as far as I can tell, it isn't guaranteed to be TriviallyCopyable, meaning that std::memset(z, 0, ...) is not portable.
That said, std::complex has an array-compatibility guarantee, which states that the storage of a std::complex<T> is exactly two consecutive Ts and can be reinterpreted as such. This seems to suggest that std::memset is actually fine, since it would be accessing through this array-oriented access. It may also imply that std::complex<double> is TriviallyCopyable, but I am unable to determine that.
If you wish to do this, I would suggest being on the safe side and static_asserting that std::complex<double> is TriviallyCopyable:
static_assert(std::is_trivially_copyable<std::complex<double>>::value);
If that assertion holds, then you are guaranteed that the memset is safe.
In either case, it would be safe to use std::fill:
std::fill(z, z + 6, std::complex<double>{});
It optimizes down to a call to memset, albeit with a few more instructions before it. I would recommend using std::fill unless benchmarking and profiling show that those few extra instructions are causing problems.
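Putting the pieces together, a minimal sketch of the cautious approach described above (if the static_assert fails on your implementation, drop the memset and keep only the std::fill):

#include <algorithm>
#include <complex>
#include <cstring>
#include <type_traits>

int main() {
    auto* z = new std::complex<double>[6];

    // Guard: only memset if this implementation makes complex<double> trivially copyable.
    static_assert(std::is_trivially_copyable<std::complex<double>>::value,
                  "memset would not be safe here");
    std::memset(z, 0, 6 * sizeof *z);

    // Portable alternative that needs no such guarantee:
    std::fill(z, z + 6, std::complex<double>{});

    delete[] z;
}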

Never, never, ever memset non-POD types. They have constructors for a reason. Just writing a bunch of bytes on top of them is highly unlikely to give the desired result. If it does, either the types themselves are badly designed (they should clearly have been POD in the first place), or you are simply getting lucky with Undefined Behaviour that happens to work; have fun debugging it when it stops working after you change optimization level, compiler, platform, or moon phase.
Just don't do this.
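For the record, a minimal sketch of the constructor-friendly ways to get zeroed elements (no raw byte writes involved):

#include <complex>
#include <vector>

int main() {
    // Value-initialization runs the default constructor for every element:
    auto* z = new std::complex<double>[6]();
    delete[] z;

    // Better yet, let a container own the memory; elements are value-initialized too:
    std::vector<std::complex<double>> v(6);
}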

The answer to this question is that for a standard-compliant std::complex there is no need for memset after new.
new complex<double>[6] will initialize the complex numbers to (0, 0) because it calls a default (non-trivial) constructor that initializes them to zero.
(I think this is a mistake, unfortunately.)
https://en.cppreference.com/w/cpp/numeric/complex/complex
If the code posted was just an example with missing code between the new and the memset, then std::fill will do the right thing.
(In part because the specific standard library implementation knows internally how std::complex is implemented.)
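A quick check of that claim (each element prints as (0,0) because the default constructor zero-initializes both parts):

#include <complex>
#include <iostream>

int main() {
    auto* z = new std::complex<double>[6];   // default constructor runs for each element
    for (int i = 0; i < 6; ++i)
        std::cout << z[i] << '\n';           // prints (0,0) every time
    delete[] z;
}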

Related

Can I leave dynamic memory for std::complex uninitialized after allocation?

I have a container library that initializes elements only if they are not "is_trivially_default_constructible".
In principle, I use the std::is_trivially_default_constructible trait to determine this.
I think this should work as long as the trait reflects reality; for example, ints and doubles are in effect left uninitialized.
1. Is it OK to do that, or should I expect some undefined behavior?
(Assume that elements are eventually formed by assignment.)
There are plenty of examples like this where it is suggested that one use a special allocator whose construct function is a no-op (a minimal sketch appears after this question).
If the answer above is positive, what can be said about the following more complicated case:
In turn, I also use this mechanism and "override" the behavior for std::complex<double>.
That is, I treat std::complex<double> as if is_trivially_default_constructible were true.
(It is not true, because the default constructor of std::complex is stupid.)
2. Is it OK if I don't initialize the memory for a type that technically is not trivial to default-construct? Should I somehow "bless" this memory with std::launder or something like that?
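A minimal sketch of the no-op-construct allocator mentioned in the question (the name default_init_allocator is illustrative; its construct() default-initializes instead of value-initializing, so trivial element types stay uninitialized):

#include <memory>
#include <utility>
#include <vector>

template <class T, class A = std::allocator<T>>
struct default_init_allocator : A {
    using A::A;

    template <class U>
    struct rebind {
        using other = default_init_allocator<
            U, typename std::allocator_traits<A>::template rebind_alloc<U>>;
    };

    template <class U>
    void construct(U* p) {
        ::new (static_cast<void*>(p)) U;   // default-init: no zeroing for trivial U
    }
    template <class U, class... Args>
    void construct(U* p, Args&&... args) {
        std::allocator_traits<A>::construct(
            static_cast<A&>(*this), p, std::forward<Args>(args)...);
    }
};

int main() {
    // The doubles here are left uninitialized; they are formed by assignment later.
    std::vector<double, default_init_allocator<double>> v(1000000);
    v[0] = 1.0;
}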

Do modern c++ compilers optimize assignments after type casting?

Take the following code:
char chars[4] = {0x5B, 0x5B, 0x5B, 0x5B};
int* b = (int*) &chars[0];
The (int*) &chars[0] value is going to be used in a loop (a long loop). Is there any advantage to using (int*) &chars[0] over b in my code? Is there any overhead in creating b? I only want to use it as an alias to improve code readability.
Also, is it OK to do this kind of type casting as long as I know what I'm doing? Or should I always memcpy() to another array with the correct type and use that? Would I encounter any kind of undefined behavior? In my testing so far it works, but I've seen people discourage this kind of type casting.
Is it OK to do this kind of type casting as long as I know what I'm doing?
No, this is not OK. This is not safe: the C++ standard does not allow it. You can access an object's representation (i.e., cast an object pointer to char*), although the result depends on the target platform (due to endianness and padding). However, you cannot safely do the opposite (i.e., without undefined behaviour).
More specifically, the int type can have stricter alignment requirements (typically 4 or 8 bytes) than char (no alignment requirement). Thus, your array is likely not suitably aligned, and the cast causes undefined behaviour when b is dereferenced. Note that this can cause a crash on some processors (AFAIK, POWER for example), although mainstream x86-64 processors support unaligned accesses. Moreover, compilers are allowed to assume that b is aligned to alignof(int).
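For illustration, one way to check that alignment concern at runtime before dereferencing such a pointer (a sketch; this checks alignment only and does not cure the aliasing problem):

#include <cstdint>

bool is_aligned_for_int(const void* p) {
    return reinterpret_cast<std::uintptr_t>(p) % alignof(int) == 0;
}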
Or should I always memcpy() to another array with the correct type and use that?
Yes, or alternative C++ operations like std::bit_cast, available since C++20. Do not worry about performance: most compilers (GCC, Clang, ICC, and certainly MSVC) do optimize such operations (called type punning).
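A sketch of both well-defined alternatives (std::bit_cast needs C++20; memcpy works everywhere and compilers typically compile it to a plain load; both assume sizeof(int) == 4 here):

#include <array>
#include <bit>       // std::bit_cast (C++20)
#include <cstring>   // std::memcpy

int read_int(const char* bytes) {
    int value;
    std::memcpy(&value, bytes, sizeof value);   // well-defined type punning
    return value;
}

int read_int_cpp20(const std::array<char, 4>& bytes) {
    return std::bit_cast<int>(bytes);           // source and target sizes must match
}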
Would I encounter any kind of undefined behavior?
As said before, yes, if the type punning is not done correctly. For more information, you can read the following links:
What is the Strict Aliasing Rule and Why do we care?
reinterpret_cast conversion
Objects and alignment
In my testing so far it works, but I've seen people discourage this kind of type casting.
It often works in simple examples on x86-64 processors. However, when you are dealing with a big codebase, compilers do perform surprising (but totally correct regarding the C++ standard) optimizations. To quote cppreference: "Compilers are not required to diagnose undefined behaviour (although many simple situations are diagnosed), and the compiled program is not required to do anything meaningful.". Such issues are very hard to debug, as they generally appear only when optimizations are enabled and in some specific cases. The result of a program can change with the inlining of functions, which depends on compiler heuristics. In your case, the behaviour depends on the alignment of the stack, which in turn depends on compiler optimizations and on the variables declared/used in the current scope. Some processors do not support unaligned accesses (e.g., accesses that cross a cache-line boundary), which results in a hardware exception or data corruption.
So, put shortly: "it works so far" does not mean it will always work, everywhere, anytime.
The (int*) &chars[0] value is going to be used in a loop (a long loop). Is there any advantage to using (int*) &chars[0] over b in my code? Is there any overhead in creating b? I only want to use it as an alias to improve code readability.
Assuming you use a correct way to do type punning (e.g., memcpy), optimizing compilers can fully optimize this initialization as long as optimization flags are enabled. You should not worry about that unless you find that the generated code is poorly optimized. Correctness matters more than performance.
AFAIK, a C compiler does not insert any code when casting a pointer, which means that both chars and b are just memory addresses. Normally a C++ compiler should compile this the same way as a C compiler; this is the reason C++ has different, more advanced casting semantics.
But you can always compile this and then disassemble it in gdb to see for yourself.
Otherwise, as long as you are aware of the endianness problems or potentially different int sizes on exotic platforms, your casting is safe.
See this question also: In C, does casting a pointer have overhead?
If code performs multiple discrete byte operations through a pointer derived from a pointer to a type that requires word alignment, clang will sometimes replace the discrete writes with a single word-sized write. That write would succeed if the original object were aligned for the word type, but on systems that don't support unaligned accesses it will fail if the object isn't aligned the way the compiler expects.
Among other things, this means that if one casts a pointer to T into a pointer to a union containing T, code that attempts to use the union pointer to access the original type may fail if the union contains any type with an alignment requirement stricter than T's, even if the union is only ever accessed via the member of type T.
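A sketch of the hazard being described (the types are made up; the problem is the cast, not the byte loop):

#include <cstdint>

union Word {
    char bytes[4];
    std::uint32_t word;   // gives the union a 4-byte alignment requirement
};

void zero4(char* p) {
    // UB if p is not aligned to alignof(Word): the compiler may merge the four
    // byte stores into one word store, which can trap on strict-alignment
    // targets even though only the char member is ever touched.
    Word* w = reinterpret_cast<Word*>(p);
    for (int i = 0; i < 4; ++i)
        w->bytes[i] = 0;
}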

Why is what may be one of the most useful reinterpret cast behavior use cases considered undefined behavior?

For some trivial type T, and for a pointer expression expr that is suitably aligned and refers to a region at least as large as T, can anyone tell me why *reinterpret_cast<T const *>(expr) is not simply defined to do the same thing as memcpy?
I totally understand how it might be UB if T were not trivial, or if expr did not meet T's alignment criteria, so that's not an issue in my question. I am also not asking whether it is UB, because I already know that it is. I am asking what the rationale is for leaving such usage of reinterpret_cast as UB, given that calling memcpy apparently does exactly the same thing as calling the copy constructor for the trivial type anyway (re: http://eel.is/c++draft/basic.types#2 and http://eel.is/c++draft/basic.types#3).
In terms of practical usage, it seems like all compilers that support C++11 and later do exactly this anyway, so this UB isn't actually stopping me from doing what I might otherwise want to do; I'm just wondering what possible reason there could be to leave it undefined by the standard.
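For reference, the well-defined formulation the question is comparing against (a sketch; read_as is an illustrative name):

#include <cstring>
#include <type_traits>

template <class T>
T read_as(const void* p) {
    static_assert(std::is_trivially_copyable<T>::value, "T must be trivially copyable");
    T out;
    std::memcpy(&out, p, sizeof out);   // defined behavior, compiles to a plain load
    return out;
}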

STL copy efficiency

My understanding is that std::copy copies the elements one at a time. This seems to be necessary in order to trigger the constructor on each element. But when no such constructor exists (e.g., PODs), I would think a memcpy would be much more efficient.
So, does the STL require/allow for specializations of, for instance, vector<int> copying that would just do a memcpy?
I would appreciate the following questions answered for both GCC and MSVC, since those are the compilers I use.
If it is allowed but not required, do the above compilers actually do it?
If they do, for which containers would this trigger? Obviously it makes no sense for list, but what about string or deque?
Again, if they do, which contained types would trigger this? Only built-in types, or also my own POD types (e.g. struct Point {int x, y;} )?
If they don't, would it be faster to use my own wrapper around new / delete / pointers that uses memcpy for things like integer/char/my own struct arrays?
First off, std::copy doesn't copy-construct anything. (That would be the job of the algorithm std::uninitialized_copy.) Instead, it assigns the corresponding new value to each element of the destination range.
Secondly, yes indeed, a compiler may optimize the assignments into a memcpy as long as the result is the same "as if" it had performed element-wise assignment. GCC does this, for example, by having compiler support to recognize such trivially copyable types, and C++11 actually adds a new type trait, std::is_trivially_copyable, which is true precisely for types that can be copied with memcpy.
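A sketch of how a library can make that dispatch explicit (using C++17 if constexpr for brevity; standard library implementations do the equivalent internally):

#include <cstddef>
#include <cstring>
#include <type_traits>

template <class T>
void copy_n_fast(const T* src, std::size_t n, T* dst) {
    if constexpr (std::is_trivially_copyable_v<T>) {
        std::memcpy(dst, src, n * sizeof(T));   // bulk copy, same result "as if"
    } else {
        for (std::size_t i = 0; i < n; ++i)
            dst[i] = src[i];                    // element-wise assignment
    }
}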

The future of C++ alignment: passing by value?

Reading the Eigen library documentation, I noticed that some objects cannot be passed by value. Are there any developments in C++11 or planned developments that will make it safe to pass such objects by value?
Also, why is there no problem with returning such objects by value?
It is entirely possible that Eigen is just a terribly written library (or just poorly thought out); just because something is online doesn't make it true. For example:
Passing objects by value is almost always a very bad idea in C++, as this means useless copies, and one should pass them by reference instead.
This is not good advice in general; it depends on the object. Passing by reference is sometimes necessary pre-C++11 (because you might want an object to be uncopyable), but in C++11 it is never necessary. You might still do it, but it is never necessary to always pass an object by reference: you can pass by value and move from the parameter if the object owns allocated memory or the like. Obviously, if it's a "look-but-don't-touch" sort of thing, const& is fine.
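For example, a sketch of the C++11 pass-by-value-and-move pattern alluded to here (Widget is a made-up type):

#include <string>
#include <utility>

struct Widget {
    std::string name;
    // Sink parameter: callers pay one copy for an lvalue argument, zero for an rvalue.
    explicit Widget(std::string n) : name(std::move(n)) {}
};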
Simple struct objects, presumably like Eigen's Vector2d, are probably cheap enough to copy (especially on x86-64, where pointers are 64 bits) that the copy won't mean much in terms of performance. At the same time, it is (theoretically) overhead, so in performance-critical code avoiding it may help.
Then again, it may not.
The particular crash issue that Eigen seems to be talking about has to do with the alignment of objects. However, most C++03 compiler-specific alignment support guarantees that alignment in all cases. So there's no reason it should "make your program crash!". I've never seen an SSE/AltiVec/etc.-based library that used compiler-specific alignment declarations and crashed with value parameters. And I've used quite a few.
So if they're having some kind of crash problem with this, then I would consider Eigen to be of... dubious merit. Not without further investigation.
Also, if an object is unsafe to pass by value, as the Eigen docs suggest, then the proper way to handle this would be to make the object non-copy-constructible. Copy assignment would be fine, since it requires an already existing object. However, Eigen doesn't do this, which again suggests that the developers missed some of the finer points of API design.
However, for the record, C++11 has the alignas keyword, which is a standard way to declare that an object shall be of a certain alignment.
Also, why is there no problem with returning such objects by value?
Who says that there isn't (noting the copying problem, not the alignment problem)? The difference is that you can't return a temporary by reference. So they're not returning by reference because it's not possible.
They could do this in C++11:
class alignas(16) Matrix4f
{
// ...
};
Now the class will always be aligned on a 16-byte boundary.
Also, maybe I'm being silly, but this shouldn't be an issue anyway. Given a class like this:
class Matrix4f
{
public:
// ...
private:
// their data type (aligned however they decided in that library):
aligned_data_type data;
// or in C++11
alignas(16) float data[16];
};
Compilers are now obligated to allocate a Matrix4f on a 16-byte boundary anyway, because anything else would break the aligned member; the class-level alignas should be redundant. But I've been known to be wrong in the past, somehow.
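That last claim is easy to check (a sketch with a stand-in Matrix4f; the static_assert fires if the member's alignment did not propagate to the class):

struct Matrix4f {
    alignas(16) float data[16];
};

// The member's requirement propagates to the enclosing class:
static_assert(alignof(Matrix4f) >= 16, "class inherits the member's alignment");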