In what sense is valarray free from aliasing?

An oft-made claim is that std::valarray was intended to eliminate some forms of aliasing in order to enable better optimization (e.g. see valarray vs. vector: Why was valarray introduced?).
Can anyone elaborate on this claim? It seems to me that aliasing is always possible as long as you can obtain a pointer to an element---which you can, because operator[] returns a reference.

The "no aliasing" thing refers to the global functions like cos that accept a valarray as a parameter. cos (or whatever function) gets applied to the entire array, and the compiler and standard library implementation can assume that the array does not alias and can perform the operation on each element independently.
It also refers to things like valarray's operator+, which does memberwise addition, etc.


Can I leave dynamic memory for std::complex uninitialized after allocation?

I have a container library that initializes elements only if they are not trivially default-constructible.
In principle, I use the std::is_trivially_default_constructible trait to determine this.
I think this should work as long as the trait reflects reality; for example, ints and doubles are in effect left uninitialized.
1. Is it OK to do that, or should I expect some undefined behavior?
(Assume that elements are eventually formed by assignment.)
There are plenty of examples like this where it is suggested that one use a special allocator whose construct function is a no-op.
If the answer above is positive, what can be said about the following, more complicated case?
In turn, I also use this mechanism and "override" the behavior for std::complex<double>.
That is, I treat std::complex<double> as if is_trivially_default_constructible were true.
(It is not true, because the default constructor of std::complex is stupid.)
2. Is it OK if I don't initialize the memory for a type that technically is not trivially default-constructible? Should I somehow "bless" this memory with std::launder or something like that?

Is it possible to store a function reference in an unordered_map or vector?

I am making a callback system and I wonder if I can store a function reference in an unordered_map so that I can invoke it later.
#include <unordered_map>
#include <vector>

float func_imp(float x)
{
    return x;
}

int main()
{
    using Tumap = std::unordered_map<int, float (&)(float)>;
    Tumap umap;
    umap.emplace(0, func_imp);
    (umap[0])(10.1f); // error: a reference type cannot be value-initialized

    using Tvec = std::vector<float (&)(float)>;
    Tvec uvec; // error: pointer to reference is illegal
    //uvec.emplace_back(func_imp);
}
Is it possible to use these types of containers to store callback functions? If not, is using a function pointer the only way?
Regardless of whether this is something you should be doing or not (the comments under your question already cover this), it's still worth answering your question as asked.
The [] operator of map types is a veritable Swiss Army knife. You can do a lot of things with it:
You can assign a value.
You can look up a value.
You can look up a value even if it doesn't exist yet.
Because of this, using that operator imposes some requirements on whatever type is stored in the map. In this specific case, you are running into the requirement that the type has to be value-initializable, which references are not.
This applies to regular references as well by the way, not just function references.
So the answer is: You can store references in a map as much as you like as long as you don't use any function of the map that requires the reference to do something it's not allowed to.
In this case, you can use umap.at(0)(10.1);. One of the big differences between [] and at() is that if the key is not set yet, at() will throw an exception instead of creating a value-initialized value.
Is it possible to use these types of containers to ...
Regardless of how this sentence continues: no, it is not possible. Specifically, no standard container can have a reference as its element type; reference types do not satisfy the requirements that containers place on their element types (at least not when using the standard allocator).
If not, is using a function pointer the only way?
No, a function pointer is not the only way, but it is a way that works.
Other alternatives are function objects, such as the type-erasing wrapper std::function, or a reference wrapper such as std::reference_wrapper.
I just thought there would be no need to dereference.
If you mean syntactically, then I have good news that makes your concern irrelevant: there is no need to explicitly indirect through a pointer to a function. The indirection is implicit, just as with function references. Their call syntax is identical. Example:
float(&ref)(float) = func_imp;
float(*ptr)(float) = func_imp;
ref(42.);
ptr(42.);
As such, you needn't worry.
If you are talking about having to indirect through the pointer at runtime, at a cost in performance, I have bad news that makes your concern irrelevant: references are just as much a form of indirection as pointers are. They are (typically) not an optimisation.

What is the purpose of a scalar in the C++ type traits sense?

I have been able to find reference material at cppreference.com, cplusplus.com, and this site (What is a scalar Object in C++?) that enables me to determine whether a particular C++ data type is a scalar. Namely, I can apply a mental algorithm that runs like this: "Is it a reference type, a function type, or void? If not, is it an array, class, or union? If not, it's a scalar type." In code, of course, I can apply std::is_scalar<T>. And finally, I can apply the working definition "A scalar type is a type that has built-in functionality for the addition operator without overloads (arithmetic, pointer, member pointer, enum and std::nullptr_t)."
What I have not been able to find is a description of the purpose of the scalar classification. Why would anyone care if something is a scalar? It seems like a kind of "leftover" classification, like "reptile" in zoological taxonomy ("Well, a reptile is, um, an amniote that's not a bird or a mammal"). I'm guessing that it must have some use to justify its messiness. I can understand why someone would want to know whether a type is a reference -- you can't take a reference of a reference, for instance. But why would people care whether something is a scalar? What is scalarness all about?
Given is_scalar<T>, you can be sure that operator=(), operator==() and operator!=() do what you think (that is, assignment, comparison, and the inverse of that, respectively) for any T.
a class T might or might not have any of these, with arbitrary meaning;
a union T is problematic;
a function doesn't have =;
a reference might hold any of these;
an array - for two arrays (even of different sizes), == and != will decay them to pointers and compare addresses, while = will fail at compile time.
Thus if you have is_scalar<T>, you can be sure that these work consistently. Otherwise, you need to look further.
One purpose is to write more efficient template specializations. On many architectures, it would be more efficient to pass around pointers to objects than to copy them, but scalars can fit into registers and be copied with a single machine instruction. Or a generic type might need locking, while the machine guarantees that it will read or update a properly-aligned scalar with a single atomic instruction.
Clue here in the notes on cppreference.com?
Each individual memory location in the C++ memory model, including the hidden memory locations used by language features (e.g. virtual table pointer), has scalar type (or is a sequence of adjacent bit-fields of non-zero length). Sequencing of side-effects in expression evaluation, interthread synchronization, and dependency ordering are all defined in terms of individual scalar objects.

STL copy efficiency

My understanding is that std::copy copies the elements one at a time. This seems to be necessary in order to trigger the constructor on each element. But when no such constructor exists (e.g. PODs), I would think a memcpy would be much more efficient.
So, does the STL require/allow for specializations of, for instance, vector<int> copying that would just do a memcpy?
The following questions I would appreciate answered for both GCC / MSVC, since those are the compilers I use.
If it is allowed but not required, do the above compilers actually do it?
If they do, for which containers would this trigger? Obviously it makes no sense for list, but what about string or deque?
Again, if they do, which contained types would trigger this? Only built-in types, or also my own POD types (e.g. struct Point {int x, y;} )?
If they don't, would it be faster to use my own wrapper around new / delete / pointers that uses memcpy for things like integer/char/my own struct arrays?
First off, std::copy doesn't copy-construct anything. (That would be the job of the algorithm std::uninitialized_copy.) Instead, it assigns to each element of the old range the corresponding new value.
Secondly, yes indeed: a compiler may optimize the assignment into a memcpy as long as the result is the same "as if" it had performed element-wise assignment. GCC does this, for example, by having compiler support to recognize such trivially copyable types, and C++11 actually adds a new type trait, std::is_trivially_copyable, which is true precisely for types that can be copied with memcpy.

references in C++

Once I read in a statement that
The language feature that "sealed the deal" to include references is operator overloading.
Why are references needed to effectively support operator overloading? Any good explanation?
Here's what Stroustrup said in "The Design and Evolution of C++" (3.7 "references"):
References were introduced primarily to support operator overloading. ...
C passes every function argument by value, and where passing an object by value would be inefficient or inappropriate the user can pass a pointer. This strategy doesn't work where operator overloading is used. In that case, notational convenience is essential because users cannot be expected to insert address-of operators if the objects are large. For example:
a = b - c;
is acceptable (that is, conventional) notation, but
a = &b - &c;
is not. Anyway, &b - &c already has a meaning in C, and I didn't want to change that.
An obvious example would be the typical overload of ">>" as a stream extraction operator. To work as designed, this has to be able to modify both its left- and right-hand arguments. The right has to be modified, because the primary purpose is to read a new value into that variable. The left has to be modified to do things like indicating the current status of the stream.
In theory, you could pass a pointer as the right-hand argument, but to do the same for the left argument would be problematic (e.g. when you chain operators together).
Edit: it becomes more problematic for the left side partly because the basic syntax of overloading is that x#y (where "#" stands for any overloaded operator) means x.operator#(y). Now, if you change the rules so you somehow turn that x into a pointer, you quickly run into another problem: for a pointer, a lot of those operators already have a valid meaning separate from the overload -- e.g., if I translate x+2 as somehow magically working with a pointer to x, then I've produced an expression that already has a meaning completely separate from the overload. To work around that, you could (for example) decide that for this purpose, you'd produce a special kind of pointer that didn't support pointer arithmetic. Then you'd have to deal with x=y -- so the special pointer becomes one that you can't modify directly either, and any attempt at assigning to it ends up assigning to whatever it points at instead.
We've only restricted them enough to support two operator overloads, but our "special pointer" is already about 90% of the way to being a reference with a different name...
References constitute a standard means of telling the compiler to handle the addresses of objects as though they were the objects themselves. This is well suited to operator overloading because operations usually need to be chained in expressions; to do this with a uniform pointer-based interface, you would often need to take the address of a temporary, which is illegal in C++ because pointers make no guarantee about the lifetimes of their referents. References, on the other hand, do: binding a temporary to a const reference works a certain kind of very specific magic that preserves the lifetime of the referent.
Typically when you're implementing an operator you want to operate directly on the operand -- not a copy of it -- but passing a pointer risks that you could delete the memory inside the operator. (Yes, it would be stupid, but it would be a significant danger nonetheless.) References allow for a convenient way of allowing pointer-like access without the "assignment of responsibility" that passing pointers incurs.