I noticed that most, if not all, containers now require their ::iterator type to satisfy LegacySomethingIterator instead of SomethingIterator.
For example, std::vector<>::iterator is now documented as requiring:
iterator: LegacyRandomAccessIterator
The same seems to hold for most of the other containers: their iterator requirements have all gone from SomethingIterator to LegacySomethingIterator.
There are also "new" requirements that took the names of the old ones, such as RandomAccessIterator. Why were these added? It seems to me that the new variants just shadow the legacy variants, with no differences.
Why were new ones created in the first place, when their requirements look the same to me? And why don't the new ones simply replace the old requirements, rather than leaving us with two different names (e.g. RandomAccessIterator and LegacyRandomAccessIterator)?
These are not new things, hence the term "legacy". This is simply how the cppreference site chooses to reconcile the fact that C++20 will have two different things that are both "concepts" called "RandomAccessIterator" (well, until C++20 decided to rename its version to random_access_iterator).
Pre-C++20, a "concept" was just a set of requirements in the standard that represented the behavior expected of certain template parameters. In C++20, with concepts becoming an actual language feature, that needed to shift. The problem is that the Ranges concept of "RandomAccessIterator" is not the same as the old-style "concept" of "RandomAccessIterator".
Since cppreference considers them both to be "concepts" (though only the newer one is a concept in the language sense), they would both have the same page name on the wiki. And MediaWiki doesn't really allow that.
So the maintainers of the site settled on using "Legacy" to differentiate them. Note that the actual standard doesn't use this "Legacy" prefix.
Note that the C++20 standard does have a prefix for the older concepts: "Cpp17". So the old concept would be "Cpp17RandomAccessIterator". That was not deemed appropriate for Cppreference for obvious reasons.
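To make the distinction concrete, here is a small sketch (C++20) contrasting the two. The language-level concept can be checked directly in code; the legacy named requirement exists only as prose, its closest programmatic proxy being the iterator_category tag:

```cpp
#include <iterator>
#include <list>
#include <type_traits>
#include <vector>

// The C++20 language-level concept (what cppreference called
// "RandomAccessIterator" before the std::random_access_iterator rename)
// can be tested directly:
static_assert(std::random_access_iterator<std::vector<int>::iterator>);
static_assert(!std::random_access_iterator<std::list<int>::iterator>);

// The legacy named requirement (cppreference: "LegacyRandomAccessIterator",
// standard: "Cpp17RandomAccessIterator") is prose-only; code conventionally
// approximates it via the iterator_category tag:
static_assert(std::is_base_of_v<
    std::random_access_iterator_tag,
    std::iterator_traits<std::vector<int>::iterator>::iterator_category>);
```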
Related
It's a well-known fact that the C++ standard library containers, in general, cannot be instantiated with incomplete types. The result of doing so is UB, although in practice a given implementation will either accept the code or reject it with a compilation error. Discussion about this restriction can be found here: Why C++ containers don't allow incomplete types?
However, in C++17, there are three containers that explicitly allow incomplete types: std::forward_list (26.3.9.1/4), std::list (26.3.10.1/4), and std::vector (26.3.11.1/4).
This is the result of N4510. The paper notes that "based on the discussion on the Issaquah meeting" the decision was made to, at least at first, limit such support to those three containers. But why?
Because we know how to implement those containers to deal with incomplete types, without breaking the ABI.
std::array, on the other hand, needs to know how big an element is (for example).
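For example, the following recursive structure, a sketch of the pattern N4510 enables, is well-defined in C++17 precisely because std::vector may now be instantiated with an incomplete element type:

```cpp
#include <vector>

// Node is incomplete when std::vector<Node> is named below; since C++17
// this is explicitly allowed for vector, list and forward_list (given an
// allocator, like std::allocator, that meets the completeness requirements).
struct Node {
    int value;
    std::vector<Node> children;  // OK in C++17; std::array<Node, N> could
                                 // not work here, as it needs sizeof(Node)
};
```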
But why?
The reason incomplete types weren't allowed in the standard containers was that some containers can work with them while others cannot. The committee didn't want to think too hard about this issue at the time, so it made a blanket prohibition of incomplete types in all standard containers.
Matt Austern documented that in his great article "The Standard Librarian: Containers of Incomplete Types", which is no longer available, but there are still quotes from it in Boost Containers of Incomplete Types.
This C++17 change does justice by undoing the harm inflicted by that blanket prohibition.
I've been reading a bit about C++20's consistent comparison (i.e. operator<=>) but couldn't understand the practical difference between std::strong_ordering and std::weak_ordering (the same goes for the _equality versions, for that matter).
Other than being very descriptive about the substitutability of the type, does it actually affect the generated code? Does it add any constraints on how one could use the type?
Would love to see a real-life example that demonstrates this.
Does it add any constraints on how one could use the type?
One very significant constraint (which wasn't intended by the original paper) was the adoption of the significance of strong_ordering by P0732 as an indicator that a class type can be used as a non-type template parameter. weak_ordering isn't sufficient for this case due to how template equivalence has to work. This is no longer the case, as non-type template parameters no longer work this way (see P1907R0 for explanation of issues and P1907R1 for wording of the new rules).
Generally, it's possible that some algorithms simply require weak_ordering but other algorithms require strong_ordering, so being able to annotate that on the type might mean a compile error (insufficiently strong ordering provided) instead of simply failing to meet the algorithm's requirements at runtime and hence just being undefined behavior. But all the algorithms in the standard library and the Ranges TS that I know of simply require weak_ordering. I do not know of one that requires strong_ordering off the top of my head.
Does it actually affect the generated code?
Outside of the cases where strong_ordering is required, or an algorithm explicitly chooses different behavior based on the comparison category, no.
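To illustrate that second case, here is a minimal sketch (assuming C++20's <compare>) of how generic code can dispatch on a type's comparison category at compile time:

```cpp
#include <compare>
#include <type_traits>

// A minimal sketch: generic code can inspect a type's comparison category
// and branch at compile time. Outside explicit dispatch like this, the
// chosen category does not change the generated code.
template <typename T>
constexpr bool equal_means_substitutable =
    std::is_same_v<std::compare_three_way_result_t<T>, std::strong_ordering>;

static_assert(equal_means_substitutable<int>);      // int: strong_ordering
static_assert(!equal_means_substitutable<double>);  // double: partial_ordering
```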
There really isn't any reason to have std::weak_ordering. It's true that the standard describes operations like sorting in terms of a "strict" weak order, but there's an isomorphism between strict weak orderings and a totally ordered partition of the original set into incomparability equivalence classes. It's rare to encounter generic code that is interested both in the order structure (which considers each equivalence class to be one "value") and in some possibly finer notion of equivalence: note that when the standard library uses < (or <=>) it does not use == (which might be finer).
The usual example for std::weak_ordering is a case-insensitive string, since, for instance, printing two strings that differ only by case certainly produces different behavior despite their equivalence under every comparison operator. However, lots of types can have different behavior despite comparing ==: two std::vector<int> objects, for instance, might have the same contents but different capacities, so that appending to them might invalidate iterators differently.
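For concreteness, here is a minimal sketch of such a type (the name and implementation are illustrative only), whose operator<=> can honestly return nothing stronger than std::weak_ordering:

```cpp
#include <algorithm>
#include <cctype>
#include <compare>
#include <cstddef>
#include <string>

// Hypothetical case-insensitive string: "HELLO" and "hello" compare
// equivalent, yet remain observably different (e.g. when printed), so
// weak_ordering, not strong_ordering, is the honest category to report.
struct CaseInsensitiveString {
    std::string s;

    friend std::weak_ordering operator<=>(const CaseInsensitiveString& a,
                                          const CaseInsensitiveString& b) {
        const std::size_t n = std::min(a.s.size(), b.s.size());
        for (std::size_t i = 0; i != n; ++i) {
            const int ca = std::tolower(static_cast<unsigned char>(a.s[i]));
            const int cb = std::tolower(static_cast<unsigned char>(b.s[i]));
            if (ca != cb)
                return ca < cb ? std::weak_ordering::less
                               : std::weak_ordering::greater;
        }
        return a.s.size() <=> b.s.size();  // strong_ordering converts to weak
    }

    friend bool operator==(const CaseInsensitiveString& a,
                           const CaseInsensitiveString& b) {
        return (a <=> b) == 0;
    }
};
```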
The simple fact is that the "equality" implied by std::strong_ordering::equivalent but not by std::weak_ordering::equivalent is irrelevant to the very code that stands to benefit from it, because generic code doesn't depend on the implied behavioral changes, and non-generic code doesn't need to distinguish between the ordering types because it knows the rules for the type on which it operates.
The standard attempts to give the distinction meaning by talking about "substitutability", but that is inevitably circular because it can sensibly refer only to the very state examined by the comparisons. This was discussed prior to publishing C++20, but (perhaps for the obvious reasons) not much of the planned further discussion has taken place.
I know ContiguousIterator exists as a concept in the written-specification sense, but I wonder whether it can be written using C++20/Concepts TS syntax.
My problem with this is that, unlike RandomAccessIterator, ContiguousIterator requires not just that operations like it + 123 work, but depends on the runtime result of that operation.
No, you cannot, not without a traits class or other helper through which types opt in to being contiguous.
Your problem is currently unsolvable. The committee is considering what to do about deducing contiguous memory access. The flub is that iterator_category is not a trait (although it resides in iterator_traits); it is an ad-hoc type. It cannot be subtyped without breaking existing code. (Beginner mistake, eh what?) The committee has recognized the mess. This recent discussion tells all: How to deduce contiguous memory from iterator
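For illustration, here is a minimal sketch of the opt-in approach mentioned above; every name in it is hypothetical rather than standard:

```cpp
#include <iterator>
#include <type_traits>

// Hypothetical opt-in trait: contiguity cannot be deduced from the
// iterator's operations alone, so types must declare it themselves.
template <typename It>
struct is_contiguous_iterator : std::false_type {};

// Raw pointers are the one family we can opt in generically:
template <typename T>
struct is_contiguous_iterator<T*> : std::true_type {};

// A container author would then add, e.g.:
//   template <>
//   struct is_contiguous_iterator<MyVector::iterator> : std::true_type {};

template <typename It>
concept ContiguousIterator =
    std::random_access_iterator<It> && is_contiguous_iterator<It>::value;
```

(C++20 as eventually published took a similar opt-in route: std::contiguous_iterator consults an iterator_concept member type that iterator authors provide.)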
Issue
In C++17, associative containers in the standard library will have an insert_or_assign member function, which will do what its name suggests. Unfortunately, it seems it doesn't have an iterator-based interface for bulk insert/assign. I even tried to compile a small example, and from the compiler error it was clear the compiler couldn't find a suitable overload, and no candidate was reasonably close to an iterator-based interface.
Question
Why didn't C++17 include an iterator-based insert_or_assign for bulk operations? Were there technical reasons? Design issues?
My assumptions and ideas
I don't see any technical reason not to add iterator-based bulk insertion/assignment. It seems quite doable. It needs to look up the key anyway, so I don't see any violation of "don't pay for what you don't use".
Actually, not having the overload makes the standard library less consistent, as if it were working against itself: plain insert supports a bulk iterator overload, so I'd expect insert_or_assign to support one too. And I don't think omitting the overload makes anything "easier to use correctly, and harder to use incorrectly".
The only clue left is this notice from cppreference:
insert_or_assign returns more information than operator[] and does not require default-constructibility of the mapped type.
I'm not sure why this might be a limitation, as the associative container has access to all of the internals and doesn't need to deal with operator[] at all.
Binary compatibility is not applicable here, unless I've forgotten something. The modification would be in a header, and everything would need to be recompiled anyway.
I couldn't find any associated paper either. Splicing maps and sets doesn't seem to mention it. The feature looks like a phantom.
The standard could at least include an insert_assign_iterator, so that one could write std::copy(input_first, input_last, insert_assign_iterator{map});, but it includes neither.
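For illustration, here is a minimal sketch of what such an insert_assign_iterator could look like; it is hypothetical, modeled loosely on std::insert_iterator, and not part of any standard:

```cpp
#include <cstddef>
#include <iterator>

// Hypothetical output iterator that forwards each assigned key-value
// pair to the map's single-element insert_or_assign.
template <typename Map>
class insert_assign_iterator {
    Map* map_;

public:
    using iterator_category = std::output_iterator_tag;
    using value_type        = void;
    using difference_type   = std::ptrdiff_t;
    using pointer           = void;
    using reference         = void;

    explicit insert_assign_iterator(Map& m) : map_(&m) {}

    insert_assign_iterator& operator=(const typename Map::value_type& kv) {
        map_->insert_or_assign(kv.first, kv.second);
        return *this;
    }

    // Output-iterator boilerplate: all no-ops.
    insert_assign_iterator& operator*() { return *this; }
    insert_assign_iterator& operator++() { return *this; }
    insert_assign_iterator& operator++(int) { return *this; }
};
```

With that, std::copy(input_first, input_last, insert_assign_iterator{map}); works for any map-like container exposing single-element insert_or_assign.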
insert_or_assign is intended to be a better form of doing some_map[key] = value;. The latter requires that the mapped_type be default-constructible, while insert_or_assign does not.
What you're talking about is similar, but different. You have some range of key-value pairs, and you want to stick all of those values into the map, whether they have equivalent keys or not. This was not the problem that these functions were created to solve, as evidenced by the original proposal for them.
It's not that they're necessarily a bad idea. It simply wasn't the problem the function was added to solve.
I was reading the VC11 blog post on VC11's C++11 features when I came across the SCARY iterators topic.
What are SCARY iterators and how does this affect my C++ coding experience?
If you're using them, there's no need to get SCAREd... just ignore their SCARY-ness.
If you're making them, that means you have to make your iterators independent of the container's allocator type, and of other generic parameters to the container that don't affect the iterators.
From the linked PDF, at http://www.open-std.org/jtc1/sc22/WG21/docs/papers/2009/n2911.pdf:
The acronym SCARY describes assignments and initializations that are Seemingly erroneous (appearing Constrained by conflicting generic parameters), but Actually work with the Right implementation (unconstrained bY the conflict due to minimized dependencies).
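To see what that means in code, here is a minimal sketch; whether it compiles is implementation-dependent, and it works exactly when the implementation's iterator type is independent of the comparator parameter:

```cpp
#include <functional>
#include <set>

int main() {
    std::set<int, std::less<int>>    a{1, 2, 3};
    std::set<int, std::greater<int>> b;

    // Seemingly erroneous: the iterator type is requested from a container
    // with a conflicting comparator parameter. With a SCARY implementation
    // the two iterator types are actually the same type, so this compiles.
    std::set<int, std::greater<int>>::iterator it = a.begin();
    (void)it;
    (void)b;
}
```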