This question cites the C++ standard to demonstrate that the alignment and size of CV qualified types must be the same as those of the non-CV qualified equivalent type. This seems obvious, because we can convert an object of type T to a const T& implicitly, or explicitly with static_cast or reinterpret_cast.
However, suppose we have two types which both have the same member variable types, except one has all const member variables and the other does not. Such as:
typedef std::pair<T, T> mutable_pair;
typedef std::pair<const T, const T> const_pair;
Here, the standard does not allow us to produce a const_pair& from an instance of mutable_pair. That is, we cannot say:
mutable_pair p;
const_pair& cp = reinterpret_cast<const_pair&>(p);
This would yield undefined behavior, as it is not listed as a valid use of reinterpret_cast in the standard. Yet, there seems to be no reason, conceptually, why this shouldn't be allowed.
So... why should anyone care? You can always just say:
const mutable_pair& cp = p;
Well, you might care in the event you only want ONE member to be const qualified. Such as:
typedef std::pair<T, U> pair;
typedef std::pair<const T, U> const_first_pair;
pair p;
const_first_pair& cp = reinterpret_cast<const_first_pair&>(p);
Obviously that is still undefined behavior. Yet, since CV qualified types must have the same size and alignment, there's no conceptual reason this should be undefined.
So, is there some reason the standard doesn't allow it? Or is it simply a matter that the standard committee didn't think of this use case?
For anyone wondering what sort of use this could have: in my particular case, I ran into a use case where it would have been very useful to be able to cast a std::pair<T, U> to a std::pair<const T, U>&. I was implementing a specialized balanced tree data structure that provides log(N) lookup by key, but internally stores multiple elements per node. The find/insert/rebalance routines require internal shuffling of data elements. (The data structure is known as a T-tree.) Since this internal shuffling adversely affects performance by triggering countless copy constructors, it is beneficial to implement it to take advantage of move constructors where possible.
Unfortunately... I also would have liked to be able to provide an interface which meets the C++ standard requirements for AssociativeContainer, which requires a value_type of std::pair<const Key, Data>. Note the const. This means individual pair objects cannot be moved (or at least the keys can't). They have to be copied, because the key is stored as a const object.
To get around this, I would have liked to store elements internally as mutable objects, but simply cast the key to a const reference when the user accesses them via an iterator. Unfortunately, I can't cast a std::pair<Key, Data> to a std::pair<const Key, Data>&. And I can't provide some kind of workaround that returns a wrapper class or something, because that wouldn't meet the requirements for AssociativeContainer.
Hence this question.
So again, given that the size and alignment requirements of a CV qualified type must be the same as the non-CV qualified equivalent type, is there any conceptual reason why such a cast shouldn't be allowed? Or is it simply something the standard writers didn't really think about?
Having a type as a template parameter does not mean that you won't get different alignments: the class contents could change, e.g. via specialization or template metaprogramming. Consider:
template<typename T> struct X { int i; };
template<typename T> struct X<const T> { double i; };

template<typename T> struct Y {
    typename std::conditional<std::is_const<T>::value, int, double>::type x;
};
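To make that concrete, here is a minimal, self-contained check (repeating the X above); the printed sizes are typical for common platforms, not guaranteed:

#include <cstdio>

template <typename T> struct X { int i; };            // same X as above
template <typename T> struct X<const T> { double i; };

int main() {
    // X<int> uses the primary template (int member), while X<const int>
    // selects the partial specialization (double member), so the sizes
    // typically differ -- e.g. this prints "4 8" on a usual 64-bit platform.
    std::printf("%zu %zu\n", sizeof(X<int>), sizeof(X<const int>));
}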
When using boost::any_range, what's the correct way of specifying that the underlying container (if any) shouldn't be modified?
E.g., with the alias
template<typename T>
using Range = boost::any_range<T, boost::forward_traversal_tag>;
to declare a range that isn't capable of modifying the contents of the underlying container or "data source", should it be declared as
const Range<T> myRange;
or as
Range<const T> myRange;
?
I suspect the first version is the correct one. But is it guaranteed to keep the constness of the container, if, for example, I apply any of the boost::adaptors?
Edit
From the documentation, apparently the range_iterator metafunction "deduces" the constness of the underlying container from whether the range is declared with const T instead of T. That is, range_iterator<const T>::type is const_iterator (if the underlying container has such a member type) instead of iterator, so the container can't be modified through this iterator.
Does that mean that Range<const T> also uses const_iterators to traverse the range?
Apparently the correct way to ensure that the values aren't modified is neither of those I mentioned.
From Boost.Range's documentation, we can see that any_range takes the following template parameters:
template<
    class Value
  , class Traversal
  , class Reference
  , class Difference
  , class Buffer = any_iterator_default_buffer
>
class any_range;
I strongly suspect the way to declare a "const range" is to specify const T as the Reference type template parameter, although, surprisingly, I still haven't been able to find any explicit indication in the documentation that this is so.
So a const range could be declared as:
template<class C>
using ConstRange = boost::any_range<C, boost::forward_traversal_tag, const C, std::ptrdiff_t>;
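For what it's worth, here is a small usage sketch (untested; it assumes any_range's converting constructor accepts a std::vector, and that const C works as the Reference parameter as described above):

#include <boost/range/any_range.hpp>
#include <cstddef>
#include <iostream>
#include <vector>

template <class C>
using ConstRange = boost::any_range<C, boost::forward_traversal_tag, const C, std::ptrdiff_t>;   // alias from above

int main() {
    std::vector<int> v{1, 2, 3};
    ConstRange<int> r(v);               // type-erased, read-only view of v
    for (int x : r)
        std::cout << x << ' ';
    // *r.begin() = 42;                 // would not compile: dereferencing yields const int
}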
When choosing between passing by const reference and passing by value, the choice seems clear for large classes: you have to use const ref to avoid an expensive copy, since copy elision is permitted only in limited circumstances (when the copy constructor has no side effects, or when copying from an rvalue).
For smaller classes, it seems passing by value is more attractive:
it's just as fast as dereferencing
it avoids aliasing (which is both bug-prone and bad for performance as it forces the compiler to be more conservative)
if a copy is necessary anyway, it makes the copy in the scope where the compiler may be able to use copy elision
So what is the best practice when writing a template function, where it's not always obvious whether the class of the argument is large or small?
(Assuming that you only want to read from the value you're passing.)
I think that the best choice is passing by const&, as:
Some objects cannot be copied, are expensive to copy, or contain side-effects in their copy constructor.
While taking primitives by const& might result in a minor performance degradation, this is a smaller loss compared to the problems described in the bullet point above.
Ideally, you would want to do something like this (I'm not being careful about small classes that have side-effects in the copy constructor here):
template <typename T>
using readonly = std::conditional_t<
    sizeof(T) <= sizeof(void*),
    T,
    const T&
>;

template <typename T>
void foo(readonly<T> x);
The problem is that T cannot be deduced from a call to foo, as it is in a non-deduced context.
This means that your users will have to call foo<int>(0) instead of foo(0), since T=int cannot be deduced by the compiler.
(I want to reiterate that the condition I'm using above is very naive and potentially dangerous. You might want to simply check if T is a primitive or a POD smaller than void* instead.)
Another possible thing you can do is use std::enable_if_t to control what function gets called:
template <typename T>
auto foo(T x) -> std::enable_if_t<(sizeof(T) <= sizeof(void*))>;
template <typename T>
auto foo(const T& x) -> std::enable_if_t<(sizeof(T) > sizeof(void*))>;
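A hypothetical usage sketch, assuming both overloads above are visible; the size threshold is of course platform-dependent:

struct Big { char buf[128]; };   // comfortably larger than a pointer

void demo() {
    foo(42);      // sizeof(int) <= sizeof(void*): SFINAE selects the by-value overload
    foo(Big{});   // sizeof(Big) >  sizeof(void*): SFINAE selects the const& overload
}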
This obviously requires a lot of extra boilerplate... maybe there will be a better solution using code generation when (if?) we get constexpr blocks and "metaclasses".
What you want is a type trait that tests if the type is a scalar and then switches on that
template <typename Type>
using ConstRefFast = std::conditional_t<
    std::is_scalar<std::decay_t<Type>>::value,
    std::add_const_t<Type>,
    std::add_lvalue_reference_t<std::add_const_t<std::decay_t<Type>>>
>;
And then pass an object by reference like this
template <typename Type>
void foo(ConstRefFast<Type> val);
Note that this means that the function will not be able to deduce the type T automatically anymore. But in some situations it might give you what you want.
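For example (a small sketch; the explicit template arguments are needed precisely because Type sits in a non-deduced context):

#include <string>

void demo() {
    int i = 42;
    std::string s = "hello";

    foo<int>(i);           // ConstRefFast<int> is const int: passed by value
    foo<std::string>(s);   // ConstRefFast<std::string> is const std::string&: passed by reference
}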
Note that when it comes to template functions, sometimes the question of ownership is more important than just whether you want to pass the value by const ref or by value.
For example consider a method that accepts a shared_ptr to some type T and does some processing on that pointer (either at some point in the future or immediately), you have two options
void insert_ptr(std::shared_ptr<T> ptr);
void insert_ptr(const std::shared_ptr<T>& ptr);
When the user looks at both of these functions, one conveys meaning and semantics clearly while the other just leaves questions in their mind. The first one obviously makes a copy of the pointer before the method starts, thus incrementing the ref count. But the second one's semantics are not quite clear. In an asynchronous setting this might leave room for doubt in the user's mind: is my pointer going to be used safely (and the object released safely) if, for example, the pointed-to object is used asynchronously at some point in the future?
You can also look at another case, one that does not involve asynchrony: a container that copies values of type T into internal storage.
template <typename T>
class Container {
public:
    void set_value(T val) {
        this->insert_into_storage(std::move(val));
    }
};
or
template <typename T>
class Container {
public:
    void set_value(const T& val) {
        this->insert_into_storage(val);
    }
};
Here the first one does convey the fact that the value is copied into the method, after which the container presumably stores it internally. But if the lifetime of the copied object is not a concern, the second one is more efficient, simply because it avoids an extra move.
So in the end it just comes down to whether you need clarity of your API or performance.
I think as a basic rule, you should just pass by const&, preparing your generic template code for the general case of expensive-to-copy objects.
For example, if you take a look at std::vector's constructors, in the overload that takes a count and a value, the value is simply passed by const&:
explicit vector( size_type count,
                 const T& value = T(),
                 const Allocator& alloc = Allocator() );
I'm making a simple, non-owning array view class:
template <typename T>
class array_view {
    T* data_;
    size_t len_;
    // ...
};
I want to construct it from any container that has data() and size() member functions, but SFINAE-d correctly such that array_view is only constructible from some container C if it would then be valid and safe behavior to actually traverse data_.
I went with:
template <typename C,
          typename D = decltype(std::declval<C>().data()),
          typename = std::enable_if_t<
              std::is_convertible<D, T*>::value &&
              std::is_same<std::remove_cv_t<T>,
                           std::remove_cv_t<std::remove_pointer_t<D>>>::value>
         >
array_view(C&& container)
    : data_(container.data()), len_(container.size())
{ }
That seems wholly unsatisfying and I'm not even sure it's correct. Am I correctly including all the right containers and excluding all the wrong ones? Is there an easier way to write this requirement?
If we take a look at the proposed std::experimental::array_view in N4512, we find the following Viewable requirement in Table 104:
Expression    Return type                                        Operational semantics
v.size()      Convertible to ptrdiff_t.
v.data()      A type T* such that T* is implicitly               static_cast<U*>(v.data()) points to a
              convertible to U*, and                             contiguous sequence of at least v.size()
              is_same_v<remove_cv_t<T>, remove_cv_t<U>>          objects of (possibly cv-qualified) type
              is true.                                           remove_cv_t<U>.
That is, the authors are using essentially the same check for .data(), but add another one for .size().
In order to do pointer arithmetic on U via operations on T, the two types need to be similar according to [expr.add]p6. Similarity is defined in terms of qualification conversions; this is why checking for implicit convertibility and then checking similarity (via the is_same) is sufficient for pointer arithmetic.
Of course, there's no guarantee for the operational semantics.
In the Standard Library, the only contiguous containers are std::array and std::vector. There's also std::basic_string which has a .data() member, but std::initializer_list does not, despite it being contiguous.
All of the .data() member functions are specified for each individual class, but they all return an actual pointer (no iterator, no proxy).
This means that checking for the existence of .data() is currently sufficient for Standard Library containers; you'd still want the convertibility check to make array_view less greedy (e.g. so that array_view<int> rejects a container whose data() returns char*).
The implementation can of course be moved away from the interface; you could use Concepts, a concepts emulation, or simply enable_if with an appropriate type function. E.g.
template<typename T, typename As,
         typename size_rt = decltype(std::declval<T>().size()),
         typename data_rt = decltype(std::declval<T>().data())>
constexpr bool is_viewable =
       std::is_convertible_v<size_rt, std::ptrdiff_t>
    && std::is_convertible_v<data_rt, As*>
    && std::is_same_v<std::remove_cv_t<As>,
                      std::remove_cv_t<std::remove_pointer_t<data_rt>>>;

template <typename C,
          typename = std::enable_if_t<is_viewable<C, T>>
         >
array_view(C&& container)
    : data_(container.data()), len_(container.size())
{ }
And yes, that doesn't follow the usual technique for a type function, but it is shorter and you get the idea.
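As a quick sanity check of what gets accepted (illustrative only, using either of the constructors above):

#include <array>
#include <list>
#include <vector>

void demo() {
    std::vector<int> v{1, 2, 3};
    std::array<int, 3> a{{1, 2, 3}};

    array_view<int> from_vector(v);    // ok: v.data() returns int*
    array_view<int> from_array(a);     // ok: a.data() returns int*

    // std::list<int> l{1, 2, 3};
    // array_view<int> from_list(l);   // would not compile: std::list has no data()
}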
This is a long post, so I would like to write the sole question at the top:
It seems I need to implement "allocator-extended" constructors for a custom container that doesn't itself use an allocator, but propagates it to its internal implementation, which is a variant type whose allowed alternatives may be a container such as std::map, but may also be a type that doesn't need an allocator at all, say a boolean.
Alone, I have no idea how to accomplish this.
Help is greatly appreciated! ;)
The "custom container" is a class template value which is an implementation of a representation of a JSON data structure.
Class template value is a thin wrapper around a discriminated union: class template variant (similar to boost::variant). The allowed types of this variant represent the JSON types Object, Array, String, Number, Boolean and Null.
Class template value has a variadic template template parameter pack Policies which basically defines how the JSON types are implemented. By default the JSON types are implemented with std::map (for Object), std::vector (for Array), std::string (for String) and a few custom classes representing the remaining JSON types.
Type machinery defined in value creates the recursive type definitions for the container types in terms of the given Policies and of value itself. (The variant class does not need a "recursive wrapper" for the JSON containers when it uses std::map or std::vector, for example.) That is, this machinery creates the actual types used to represent the JSON types, e.g. a std::vector for Array whose value_type equals value, and a std::map for Object whose mapped_type equals value. (Yes, value is still incomplete at the moment these types are generated.)
The class template value basically looks as this (greatly simplified):
template <template <typename, typename> class... Policies>
class value
{
    typedef json::Null                      null_type;
    typedef json::Boolean                   boolean_type;
    typedef typename <typegenerator>::type  float_number_type;
    typedef typename <typegenerator>::type  integral_number_type;
    typedef typename <typegenerator>::type  string_type;
    typedef typename <typegenerator>::type  object_type;
    typedef typename <typegenerator>::type  array_type;

    typedef variant<
        null_type
      , boolean_type
      , float_number_type
      , integral_number_type
      , string_type
      , object_type
      , array_type
    > variant_type;

public:
    ...

private:
    variant_type value_;
};
value implements the usual suspects, e.g. constructors, assignments, accessors, comparators, etc. It also implements forwarding constructors so that a certain implementation type of the variant can be constructed with an argument list.
The typegenerator basically finds the relevant implementation policy and uses it; if it doesn't find one, it falls back to a default implementation policy (this is not shown in detail here, but please ask if something is unclear).
For example array_type becomes:
std::vector<value, std::allocator<value>>
and object_type becomes
std::map<std::string, value, std::less<std::string>, std::allocator<std::pair<const std::string, value>>>
So far, this works as intended.
Now, the idea is to enable the user to specify a custom allocator which is used for all allocations and all constructions within the "container", that is value. For example, an arena-allocator.
For that purpose, I've extended the template parameters of value as follows:
template <
    typename A = std::allocator<void>,
    template <typename, typename> class... Policies
>
class value ...
I also adapted the type machinery to use a scoped_allocator_adaptor where appropriate.
Note that template parameter A is not the allocator_type of value - but instead is just used in the type-machinery in order to generate the proper implementation types. That is, there is no embedded allocator_type in value - but it affects the allocator_type of the implementation types.
Now, when using a stateful custom allocator, this only works halfway. More precisely, it works -- except that propagation of the scoped allocator does not happen correctly. E.g.:
Suppose there is a stateful custom allocator with an integer id property, and it cannot be default-constructed.
typedef test::custom_allocator<void> allocator_t;
typedef json::value<allocator_t> Value;
typedef typename Value::string_type String;
typedef Value::array_type Array;
allocator_t a1(1);
allocator_t a2(2);
// Create an Array using allocator a1:
Array array1(a1);
EXPECT_EQ(a1, array1.get_allocator());
// Create a value whose impl-type is a String which uses allocator a2:
Value v1("abc",a2);
// Insert via copy-ctor:
array1.push_back(v1);
// We expect array1 to have used allocator a1 to construct its internal copy of v1 (which contains a string):
EXPECT_EQ(a1, array1.back().get<String>().get_allocator());   // --> FAILS !!
The reason seems to be that array1 does not propagate its allocator (which is a1) through the copy of value v1 down to its current implementation type, the actual copy of the string.
Maybe this can be achieved through "allocator-extended" constructors in value, even though value itself does not use allocators but instead needs to "propagate" them appropriately when needed.
But how can I accomplish this?
Edit: here is the relevant part of the type generation:
A "Policy" is a template template parameter whose first param is the value_type (in this case value), and the second param is an allocator type. The "Policy" defines how a JSON type (e.g. an Array) shall be implemented in terms of the value type and the allocator type.
For example, for a JSON Array:
template <typename Value, typename Allocator>
struct default_array_policy : array_tag
{
private:
    typedef Value                                                    value_type;
    typedef typename Allocator::template rebind<value_type>::other  value_type_allocator;
    typedef GetScopedAllocator<value_type_allocator>                 allocator_type;

public:
    typedef std::vector<value_type, allocator_type> type;
};
where GetScopedAllocator is defined as:
template <typename Allocator>
using GetScopedAllocator = typename std::conditional<
    std::is_empty<Allocator>::value,
    Allocator,
    std::scoped_allocator_adaptor<Allocator>
>::type;
The logic for deciding whether to pass an allocator to child elements is called uses-allocator construction in the standard, see 20.6.7 [allocator.uses].
There are two standard components which use the uses-allocator protocol: std::tuple and std::scoped_allocator_adaptor. You can also write user-defined allocators that support it, but it's often easier to just use scoped_allocator_adaptor to add support for the protocol to existing allocators.
If you're using scoped_allocator_adaptor internally in value then all you should need to do to get scoped allocators to work is ensure value supports uses-allocator construction, which is specified by the std::uses_allocator<value, Alloc> trait. That trait is automatically true if value::allocator_type is defined and std::is_convertible<Alloc, value::allocator_type> is true. If value::allocator_type doesn't exist you can specialize the trait to be true (this is what std::promise and std::packaged_task do):
namespace std
{
    template <typename A, template <typename, typename> class... P, typename A2>
    struct uses_allocator<value<A, P...>, A2>
        : is_convertible<A2, A>
    { };
}
This will mean that when a value is constructed by a type that supports uses-allocator construction it will attempt to pass the allocator to the value constructor, so you do also need to add allocator-extended constructors so it can be passed.
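A rough sketch of what those could look like (hypothetical; it assumes your variant type itself provides matching allocator-extended constructors that forward the allocator to the active alternative):

#include <memory>   // std::allocator_arg_t, std::allocator_arg

// inside class value:
template <typename Alloc>
value(std::allocator_arg_t, const Alloc& a)
    : value_(std::allocator_arg, a) { }

template <typename Alloc>
value(std::allocator_arg_t, const Alloc& a, const value& other)
    : value_(std::allocator_arg, a, other.value_) { }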
For this to work as you want:
// Insert via copy-ctor:
array1.push_back(v1);
the custom_allocator template must support uses-allocator construction, or you must have wrapped it so that Value::array_type::allocator_type is scoped_allocator_adaptor<custom_allocator<Value>>, I can't tell from your question if that's true or not.
Of course, for this to work the standard library implementation has to support scoped allocators. What compiler are you using? I'm only familiar with GCC's status in this area: GCC 4.7 supports it for std::vector only, and for GCC 4.8 I've added support to forward_list too. I hope the remaining containers will all be done for GCC 4.9.
N.B. Your types should also use std::allocator_traits for all allocator-related operations, instead of calling member functions on the allocator type directly.
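For example, a generic helper (the name is illustrative) would go through the traits rather than calling members on the allocator directly:

#include <memory>

template <typename Alloc>
typename std::allocator_traits<Alloc>::pointer
make_one(Alloc& a, const typename std::allocator_traits<Alloc>::value_type& v)
{
    // allocator_traits supplies construct()/destroy() defaults even if the
    // allocator itself does not define them.
    auto p = std::allocator_traits<Alloc>::allocate(a, 1);
    std::allocator_traits<Alloc>::construct(a, std::addressof(*p), v);
    return p;
}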
Yes, value is actually incomplete at this moment when the types are generated
It is undefined behaviour to use incomplete types as template arguments when instantiating standard library templates unless specified otherwise; see 17.6.4.8 [res.on.functions]. It might work with your implementation, but it isn't required to.
I know that compilers have much freedom in implementing std::type_info functions' behavior.
I'm thinking about using it to compare object types, so I'd like to be sure that:
std::type_info::name must return two different strings for two different types.
std::type_info::before must say that Type1 is before Type2 exclusive-or Type2 is before Type1.
// like this:
typeid(T1).before( typeid(T2) ) != typeid(T2).before( typeid(T1) )
Two different specialization of the same template class are considered different types.
Two different typedef-initions of the same type are the same type.
And finally:
Since std::type_info is not copyable, how could I store type_infos somewhere (e.g. in a std::map)? Is the only way to have a std::type_info always allocated somewhere (e.g. on the stack or in a static/global variable) and use a pointer to it?
How fast are operator==, operator!= and before on most common compilers? I guess they should only compare a value. And how fast is typeid?
I've got a class A with a virtual bool operator==( const A& ) const. Since A has many subclasses (some of which are unknown at compile time), I'd override that virtual operator in each subclass B this way:
virtual bool operator==( const A& other ) const {
    if( typeid(*this) != typeid(other) ) return false;
    // bool B::operator==( const B& other ) const is defined for every subclass B
    return operator==( static_cast<const B&>( other ) );
}
Is this an acceptable (and standard) way to implement such operator?
After a quick look at the documentation, I would say that :
std::type_info::name always returns two different strings for two different types; otherwise it would mean the compiler lost track of types while resolving them, and you shouldn't use it anymore.
The reference says: "before returns true if the type precedes the type of rhs in the collation order. The collation order is just an internal order kept by a particular implementation and is not necessarily related to inheritance relations or declaring order."
You therefore have the guarantee that no two types have the same rank in the collation order.
Each instantiation of a template class is a different type; specializations are no exception.
I don't really understand what you mean. If you mean having typedef foo bar; in two separate compilation units, then bar names the same type in both, so it works that way. If you mean typedef foo bar; typedef int bar; in the same scope, that doesn't work (unless foo is int).
About your other questions :
You should store references to std::type_info, or wrap it somehow.
Absolutely no idea about performance; I assume the comparison operators take constant time regardless of the type's complexity. before probably has linear complexity in the number of different types used in your code.
This is really strange, imho. You should overload your operator== instead of making it virtual and overriding it.
Standard 18.5.1 (Class type_info):
"The class type_info describes type information generated by the implementation. Objects of this class effectively store a pointer to a name for the type, and an encoded value suitable for comparing two types for equality or collating order. The names, encoding rule, and collating sequence for types are all unspecified and may differ between programs."
From my understanding :
You don't have this guarantee regarding std::type_info::name. The standard only states that name returns an implementation-defined NTBS, and I believe a conforming implementation could very well return the same string for every type.
I don't know, and the standard isn't clear on this point, so I wouldn't rely on such behavior.
That one should be a definite 'Yes' for me
That one should be a definite 'Yes' for me
Regarding the second set of questions :
No, you cannot store a type_info directly. Andrei Alexandrescu proposes a TypeInfo wrapper in his Modern C++ Design book. Note that the objects returned by typeid have static storage duration, so you can safely store pointers to them without worrying about object lifetime.
I believe you can assume that type_info comparisons are extremely efficient (there really isn't much to compare).
You can store it like this.
class my_type_info
{
public:
    my_type_info(const std::type_info& info) : info_(&info) {}

    // Return a reference: std::type_info is neither copyable nor assignable.
    const std::type_info& get() const { return *info_; }

private:
    const std::type_info* info_;
};
EDIT:
C++ standard 5.2.8:
"The result of a typeid expression is an lvalue of static type const std::type_info..."
Which means you can use it like this.
my_type_info(typeid(my_type));
The typeid operator yields an lvalue (it is not a temporary), and therefore the address of the returned type_info is always valid.
The current answers for questions 1 and 2 are perfectly correct, and they're essentially just details for the type_info class - no point in repeating those answers.
For questions 3 and 4, it's important to understand what precisely a type is in C++, and how types relate to names. For starters, there are a whole bunch of predefined types, and those have names: int, float, double. Next, there are some constructed types that do not have names of their own: const int, int*, const int*, int* const. There are function types such as int (int) and function pointer types such as int (*)(int).
It's sometimes useful to give a name to an unnamed type, which is possible using typedef. For instance, typedef int* pint or typedef int (*pf)(int);. This introduces a name, not a new type.
Next are user-defined types: structs, classes, unions. It's a good convention to give them names, but it's not mandatory. Don't add such a name with typedef; you can do so directly: struct Foo { }; instead of typedef struct {} Foo;. It's common to have class definitions in headers, which end up in multiple translation units. That does mean the class is defined more than once. This is still the same type, and therefore you aren't allowed to play tricks with macros to change the class member definitions.
A template class is not a type, it's a recipe for types. Two instantiations of a single class template are distinct types if the template arguments are different types (or values). This works recursively: Given template <typename T> struct Foo{};, Foo<Foo<int> > is the same type as Foo<Foo<Bar> > if and only if Bar is another name for the type int.
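A small self-contained check of that last point:

#include <cassert>
#include <typeinfo>

template <typename T> struct Foo { };
typedef int Bar;   // Bar is just another name for int, not a new type

int main() {
    assert(typeid(Foo<Foo<Bar>>)    == typeid(Foo<Foo<int>>));    // same type, same type_info
    assert(typeid(Foo<Foo<double>>) != typeid(Foo<Foo<int>>));    // distinct instantiations
}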
type_info behavior is implementation-defined, so I really wouldn't rely on it. However, based on my experience with g++ and MSVC, assumptions 1, 3 and 4 hold... I'm not really sure about #2.
Is there any reason you can't use another method like this?
template<typename T, typename U>
struct is_same { static bool const result = false; };
template<typename T>
struct is_same<T, T> { static bool const result = true; };
template<typename S, typename T>
bool IsSame(const S& s, const T& t) { return is_same<S,T>::result; }
Since std::type_info is not copyable, how could I store type_infos somewhere (eg: in a std::map)? The only way it to have a std::type_info always allocated somewhere (eg: on the stack or on a static/global variable) and use a pointer to it?
This is why std::type_index exists -- it's a wrapper around a type_info & that is copyable and compares (and hashes) by using the underlying type_info operations
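For example (a minimal sketch; the stored strings are just illustrative):

#include <map>
#include <string>
#include <typeindex>
#include <typeinfo>

int main() {
    std::map<std::type_index, std::string> names;
    names[std::type_index(typeid(int))]    = "int";
    names[std::type_index(typeid(double))] = "double";
    // std::type_index is copyable, ordered and hashable, so it also works
    // as a key for std::unordered_map.
}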