SFINAE-ing any container into a c-style array view - c++

I'm making a simple, non-owning array view class:
template <typename T>
class array_view {
    T* data_;
    size_t len_;
    // ...
};
I want to construct it from any container that has data() and size() member functions, but SFINAE-d correctly, so that array_view is only constructible from a container C if it is then valid and safe to actually traverse data_.
I went with:
template <typename C,
          typename D = decltype(std::declval<C>().data()),
          typename = std::enable_if_t<
              std::is_convertible<D, T*>::value &&
              std::is_same<std::remove_cv_t<T>,
                           std::remove_cv_t<std::remove_pointer_t<D>>>::value>
         >
array_view(C&& container)
    : data_(container.data()), len_(container.size())
{ }
That seems wholly unsatisfying and I'm not even sure it's correct. Am I correctly including all the right containers and excluding all the wrong ones? Is there an easier way to write this requirement?

If we take a look at the proposed std::experimental::array_view in N4512, we find the following Viewable requirement in Table 104:
Expression: v.size()
Return type: Convertible to ptrdiff_t

Expression: v.data()
Return type: Type T* such that T* is implicitly convertible to U*, and is_same_v<remove_cv_t<T>, remove_cv_t<U>> is true.
Operational semantics: static_cast<U*>(v.data()) points to a contiguous sequence of at least v.size() objects of (possibly cv-qualified) type remove_cv_t<U>.
That is, the authors are using essentially the same check for .data(), but add another one for .size().
In order to do pointer arithmetic on objects of type U through operations on a T*, the two types need to be similar according to [expr.add]p6. Similarity allows for differences in cv-qualification only, which is why checking for implicit convertibility and then checking similarity (via the is_same on the remove_cv'd types) is sufficient for pointer arithmetic.
Of course, there's no guarantee for the operational semantics.
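To see concretely why implicit convertibility alone would not be enough, here is a small illustration (not part of the original requirement, just the two conditions applied to hand-picked pointer types): a Derived* converts implicitly to a Base*, but the element types are not similar, so traversing a Derived array through a Base* would use the wrong stride; a pure qualification conversion such as int* to const int* passes both tests.

#include <type_traits>

struct Base { int b; };
struct Derived : Base { int d; };

// Convertibility alone would admit Derived* -> Base*, which is unsafe to traverse...
static_assert( std::is_convertible<Derived*, Base*>::value,
              "converts implicitly, but...");
// ...so the is_same check on the remove_cv'd element types rules it out.
static_assert(!std::is_same<std::remove_cv_t<Base>,
                            std::remove_cv_t<Derived>>::value,
              "...the element types are not similar: rejected");

// A qualification conversion keeps the element types similar and is accepted.
static_assert( std::is_convertible<int*, const int*>::value &&
               std::is_same<std::remove_cv_t<const int>,
                            std::remove_cv_t<int>>::value,
              "int* data() is fine for a view of const int");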
In the Standard Library, the only contiguous containers are std::array and std::vector. std::basic_string also stores its elements contiguously and has a .data() member, whereas std::initializer_list does not have .data(), despite being contiguous.
All of the .data() member functions are specified for each individual class, but they all return an actual pointer (no iterator, no proxy).
This means that checking for the existence of .data() is currently sufficient for Standard Library containers; you'd want to add a check for convertibility to make array_view less greedy (e.g. array_view<int> rejecting some char* data()).
The implementation can of course be moved away from the interface; you could use Concepts, a concepts emulation, or simply enable_if with an appropriate type function. E.g.
template<typename C, typename As,
         typename size_rt = decltype(std::declval<C>().size()),
         typename data_rt = decltype(std::declval<C>().data())>
constexpr bool is_viewable =
       std::is_convertible_v<size_rt, std::ptrdiff_t>
    && std::is_convertible_v<data_rt, As*>
    && std::is_same_v<std::remove_cv_t<As>,
                      std::remove_cv_t<std::remove_pointer_t<data_rt>>>;
template <typename C,
          typename = std::enable_if_t<is_viewable<C, T>>
         >
array_view(C&& container)
    : data_(container.data()), len_(container.size())
{ }
And yes, that doesn't follow the usual technique for a type function, but it is shorter and you get the idea.
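For what it's worth, a couple of illustrative checks against that sketch (assuming the C++17 _v traits it uses are available, and only using types that actually provide data() and size(), so the decltype defaults are well-formed):

#include <string>
#include <vector>

static_assert( is_viewable<std::vector<int>, int>,       "vector<int> can feed an array_view<int>");
static_assert( is_viewable<std::vector<int>, const int>, "...and an array_view<const int>");
static_assert(!is_viewable<std::string, int>,            "char data cannot feed an array_view<int>");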

Related

Proper way of declaring `const` `boost::range`s

When using boost::any_range, what's the correct way of specifying that the underlying container (if any) shouldn't be modified?
E.g., with the alias
template<typename T>
using Range = boost::any_range<T, boost::forward_traversal_tag>;
to declare a range that isn't capable of modifying the contents of the underlying container or "data source", should it be declared as
const Range<T> myRange;
or as
Range<const T> myRange;
?
I suspect the first version is the correct one. But is it guaranteed to keep the constness of the container, if, for example, I apply any of the boost::adaptors?
Edit
From the documentation, apparently the range_iterator metafunction "deduces" the constness of the underlying container by declaring the range with const T instead of T. That is, range_iterator<const T>::type is const_iterator (if the underlying container has such a member type), instead of iterator, so the container can't be modified through this iterator.
Does that mean that Range<const T> also uses const_iterators to traverse the range?
Apparently the correct way to ensure that the values aren't modified is neither of those I mentioned.
From Boost.Range's documentation, we can see that any_range takes the following template parameters:
template<
class Value
, class Traversal
, class Reference
, class Difference
, class Buffer = any_iterator_default_buffer
>
class any_range;
I strongly suspect the way to declare a "const range" is to specify const T as the Reference type template parameter, although, surprisingly, I still haven't been able to find any explicit indication in the documentation that this is so.
So a const range could be declared as:
template<class C>
using ConstRange = boost::any_range<C, boost::forward_traversal_tag, const C, std::ptrdiff_t>;

CV-qualified data members and casting

This question cites the C++ standard to demonstrate that the alignment and size of CV-qualified types must be the same as those of the non-CV-qualified equivalent type. This seems obvious, because an object of type T can always be viewed as a const T&, whether implicitly or via static_cast or reinterpret_cast.
However, suppose we have two types which both have the same member variable types, except one has all const member variables and the other does not. Such as:
typedef std::pair<T, T> mutable_pair;
typedef std::pair<const T, const T> const_pair;
Here, the standard does not allow us to produce a const_pair& from an instance of mutable_pair. That is, we cannot say:
mutable_pair p;
const_pair& cp = reinterpret_cast<const_pair&>(p);
This would yield undefined behavior, as it is not listed as a valid use of reinterpret_cast in the standard. Yet, there seems to be no reason, conceptually, why this shouldn't be allowed.
So... why should anyone care? You can always just say:
const mutable_pair& cp = p;
Well, you might care in the event you only want ONE member to be const qualified. Such as:
typedef std::pair<T, U> pair;
typedef std::pair<const T, U> const_first_pair;
pair p;
const_first_pair& cp = reinterpret_cast<const_first_pair&>(p);
Obviously that is still undefined behavior. Yet, since CV qualified types must have the same size and alignment, there's no conceptual reason this should be undefined.
So, is there some reason the standard doesn't allow it? Or is it simply a matter that the standard committee didn't think of this use case?
For anyone wondering what sort of use this could have: in my particular case, I ran into a use case where it would have been very useful to be able to cast a std::pair<T, U> to a std::pair<const T, U>&. I was implementing a specialized balanced tree data structure that provides log(N) lookup by key, but internally stores multiple elements per node. The find/insert/rebalance routines require internal shuffling of data elements. (The data structure is known as a T-tree.) Since internal shuffling of data elements adversely affects performance by triggering countless copy constructors, it is beneficial to implement the internal data shuffling to take advantage of move constructors if possible.
Unfortunately... I also would have liked to be able to provide an interface which meets the C++ standard requirements for AssociativeContainer, which requires a value_type of std::pair<const Key, Data>. Note the const. This means individual pair objects cannot be moved (or at least the keys can't). They have to be copied, because the key is stored as a const object.
To get around this, I would have liked to be able to store elements internally as mutable objects, but simply cast the key to a const reference when the user access them via an iterator. Unfortunately, I can't cast a std::pair<Key, Data> to a std::pair<const Key, Data>&. And I can't provide some kind of workaround that returns a wrapper class or something, because that wouldn't meet the requirements for AssociativeContainer.
Hence this question.
So again, given that the size and alignment requirements of a CV qualified type must be the same as the non-CV qualified equivalent type, is there any conceptual reason why such a cast shouldn't be allowed? Or is it simply something the standard writers didn't really think about?
Having the same class template with differently cv-qualified arguments does not mean the layouts and alignments will match: the class contents could be changed, e.g., via specialization or template metaprogramming. Consider:
template<typename T> struct X { int i; };
template<typename T> struct X<const T> { double i; };
template<typename T> struct Y {
    typename std::conditional<std::is_const<T>::value, int, double>::type x;
};
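A self-contained restatement of the X example, just to make the consequence explicit (the member types of the two instantiations genuinely differ, so their sizes and alignments may differ too):

#include <type_traits>

template<typename T> struct X { int i; };
template<typename T> struct X<const T> { double i; };

static_assert(std::is_same<decltype(X<int>::i), int>::value,
              "the primary template holds an int");
static_assert(std::is_same<decltype(X<const int>::i), double>::value,
              "the const specialization holds a double");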

How to detect if a container is guaranteed to have sequence storage

Checking if a sequence container is contiguous in memory.
C++ templates that accept only certain types
I am writing a simple send() method, which internally works with C-style pointers. I would like it to be able to work with all the guaranteed sequence containers. My motivation being twofold:
a flexible interface
efficiency - using std::array avoids heap allocations.
Here is how far I am:
template <typename Container>
void poll( Container &out )
{
    static_assert( std::is_base_of< std::array<typename Container::value_type>, Container >::value ||
                   std::is_base_of< std::vector<typename Container::value_type>, Container >::value ||
                   std::is_base_of< std::string<typename Container::value_type>, Container >::value,
                   "A contiguous memory container is required.");
}
Trouble is, std::array requires a second parameter, and that cannot be known at compile time. Is this problem solvable? Possibly by a different approach?
The right way here is to use a trait class. std::is_base_of is a kind of trait. Basically: You have a templated struct that takes a (template) param and returns its result via a nested type/value.
In your case something like this
template<typename T>
struct HasContiguousStorage: public std::false_type{};
template<typename T>
struct HasContiguousStorage<std::vector<T>>: public std::true_type{};
// Specialize others
As you should not derive from standard containers, this should be enough. This can also check for the array:
template<typename T, size_t N>
struct HasContiguousStorage<std::array<T,N>>: public std::true_type{};
In your function you can then either overload it (see enable_if) or branch on it (branch will be evaluated at compile-time)
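Here is a hedged sketch tying the pieces together: the trait marks the containers discussed above as contiguous, and poll() is constrained with enable_if. Names follow the discussion; this is illustrative, not a complete solution.

#include <array>
#include <cstddef>
#include <string>
#include <type_traits>
#include <vector>

template<typename T>
struct HasContiguousStorage : std::false_type {};

template<typename T, typename A>
struct HasContiguousStorage<std::vector<T, A>> : std::true_type {};
template<typename A>   // vector<bool> is bit-packed, so exclude it explicitly
struct HasContiguousStorage<std::vector<bool, A>> : std::false_type {};
template<typename T, std::size_t N>
struct HasContiguousStorage<std::array<T, N>> : std::true_type {};
template<typename C, typename T, typename A>
struct HasContiguousStorage<std::basic_string<C, T, A>> : std::true_type {};

template<typename Container,
         typename = typename std::enable_if<HasContiguousStorage<Container>::value>::type>
void poll(Container& out)
{
    auto* first = out.data();   // safe: the trait only admits contiguous containers
    (void)first;
}

// std::vector<int> v; poll(v);   // fine
// std::list<int> l;   poll(l);   // does not compile: not marked as contiguous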
How about if the container has a data() member function? (that returns a pointer)
While you cannot do it yet, I am in the process of updating n4183 Contiguous Iterators: Pointer Conversion & Type Trait for (hopeful) inclusion in a future C++ standard.

How to avoid strict aliasing errors when using aligned_storage

I'm using std::aligned_storage as the backing storage for a variant template. The problem is, once I enable -O2 on gcc I start getting warnings of "dereferencing type-punned pointer will break strict aliasing".
The real template is much more complex (type checked at runtime), but a minimal example to generate the warning is:
struct foo
{
    std::aligned_storage<1024> data;

    // ... set() uses placement new, stores type information etc ...
    template <class T>
    T& get()
    {
        return reinterpret_cast<T&>(data); // warning: breaks strict aliasing rules
    }
};
I'm pretty sure boost::variant is doing essentially the same thing as this, but I can't seem to find how they avoid this issue.
My questions are:
If using aligned_storage in this way violates strict-aliasing, how should I be using it?
Is there actually a strict-aliasing problem in get() given that there are no other pointer based operations in the function?
What about if get() is inlined?
What about get<int>() = 4; get<float>() = 3.2f? Could that sequence be reordered due to int and float being different types?
std::aligned_storage is part of <type_traits>; like most of the rest of the inhabitants of that header file, it is just a holder for some typedefs and is not meant to be used as a datatype. Its job is to take a size and alignment, and make you a POD type with those characteristics.
You cannot use std::aligned_storage<Len, Align> directly. You must use std::aligned_storage<Len, Align>::type, the transformed type, which is "a POD type suitable for use as uninitialized storage for any object whose size is at most Len and whose alignment is a divisor of Align." (Align defaults to the most stringent alignment requirement of any object type whose size is at most Len.)
As the C++ standard notes, normally the type produced by std::aligned_storage will be an array (of the specified size) of unsigned char with an alignment specifier. That sidesteps the strict-aliasing problem because a character type may alias any other type.
So you might do something like:
template<typename T>
using raw_memory = typename std::aligned_storage<sizeof(T),
                                                 std::alignment_of<T>::value>::type;

template<typename T>
void* allocate() { return static_cast<void*>(new raw_memory<T>); }

template<typename T, typename ...Arg>
T* maker(Arg&&...arg) {
    return new(allocate<T>()) T(std::forward<Arg>(arg)...);
}
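As a follow-up, here is a minimal sketch closer to the foo class in the question, fixed to a single type (std::string) for brevity and assuming only one object lives in the storage at a time: keep the ::type as the member, create the object with placement new, and access the storage only as the type that was actually constructed in it.

#include <new>
#include <string>
#include <type_traits>

struct small_box
{
    typename std::aligned_storage<sizeof(std::string), alignof(std::string)>::type data;

    std::string& construct(const char* s)
    {
        // placement new starts the lifetime of a std::string inside data
        return *new (static_cast<void*>(&data)) std::string(s);
    }

    std::string& get()
    {
        // only valid between construct() and destroy(): the object accessed
        // through this pointer is the string actually living in the storage
        return *reinterpret_cast<std::string*>(&data);
    }

    void destroy()
    {
        get().~basic_string();   // explicit destructor call before reuse/destruction
    }
};

// small_box b;
// b.construct("hello");
// b.get() += ", world";
// b.destroy();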

How to Enable a Custom Container for the Scoped Allocator Model

This is a long post, so I would like to write the sole question at the top:
It seems I need to implement "allocator-extended" constructors for a custom container that itself doesn't use an allocator, but propagates it to its internal implementation, which is a variant type whose allowed types may be a container like std::map, but also a type which doesn't need an allocator, say a boolean.
On my own, I have no idea how to accomplish this.
Help is greatly appreciated! ;)
The "custom container" is a class template value which is an implementation of a representation of a JSON data structure.
Class template value is a thin wrapper around a discriminated union: class template variant (similar like boost variant). The allowed types of this variant represent the JSON types Object, Array, String, Number Boolean and Null.
Class template value has a variadic template template parameter pack Policies which basically defines how the JSON types are implemented. Per default the JSON types are implemented with std::map (for Object), std::vector (for Array), std::string (for JSON data string) and a few custom classes representing the remaining JSON types.
A type-machinery defined in value is used to create the recursive type definitions for the container types in terms of the given Policies and also value itself. (The variant class does not need to use a "recursive wrapper" for the implementation of the JSON containers when it uses std::map or std::vector for example). That is, this type machinery creates the actual types used to represent the JSON types, e.g. a std::vector for Array whose value_type equals value and a std::map for Object whose mapped_type equals value. (Yes, value is actually incomplete at this moment when the types are generated).
The class template value basically looks as this (greatly simplified):
template <template <typename, typename> class... Policies>
class value
{
    typedef json::Null    null_type;
    typedef json::Boolean boolean_type;
    typedef typename <typegenerator>::type float_number_type;
    typedef typename <typegenerator>::type integral_number_type;
    typedef typename <typegenerator>::type string_type;
    typedef typename <typegenerator>::type object_type;
    typedef typename <typegenerator>::type array_type;

    typedef variant<
          null_type
        , boolean_type
        , float_number_type
        , integral_number_type
        , string_type
        , object_type
        , array_type
    > variant_type;

public:
    ...

private:
    variant_type value_;
};
value implements the usual suspects, e.g. constructors, assignments, accessors, comparators, etc. It also implements forwarding constructors so that a certain implementation type of the variant can be constructed with an argument list.
The typegenerator basically finds the relevant implementation policy and uses it; if it doesn't find one, it falls back to a default implementation policy (this is not shown in detail here, but please ask if something is unclear).
For example array_type becomes:
std::vector<value, std::allocator<value>>
and object_type becomes
std::map<std::string, value, std::less<std::string>, std::allocator<std::pair<const std::string, value>>>
So far, this works as intended.
Now, the idea is to enable the user to specify a custom allocator which is used for all allocations and all constructions within the "container", that is value. For example, an arena-allocator.
For that purpose, I've extended the template parameters of value as follows:
template <
typename A = std::allocator<void>,
template <typename, typename> class... Policies
>
class value ...
And also adapted the type machinery in order to use a scoped_allocator_adaptor when appropriate.
Note that template parameter A is not the allocator_type of value - but instead is just used in the type-machinery in order to generate the proper implementation types. That is, there is no embedded allocator_type in value - but it affects the allocator_type of the implementation types.
Now, when using a stateful custom allocator, this works only half-way. More precisely, it works, except that propagation of the scoped allocator does not happen correctly. E.g.:
Suppose there is a stateful custom allocator with a property id, an integer. It cannot be default-constructed.
typedef test::custom_allocator<void> allocator_t;
typedef json::value<allocator_t> Value;
typedef typename Value::string_type String;
typedef Value::array_type Array;
allocator_t a1(1);
allocator_t a2(2);
// Create an Array using allocator a1:
Array array1(a1);
EXPECT_EQ(a1, array1.get_allocator());
// Create a value whose impl-type is a String which uses allocator a2:
Value v1("abc",a2);
// Insert via copy-ctor:
array1.push_back(v1);
// We expect, array1 used allocator a1 in order to construct internal copy of value v1 (containing a string):
EXPECT_EQ(a1, array1.back().get<String>().get_allocator());
--> FAILS !!
The reason seems to be that array1 does not propagate its allocator member (which is a1) through the copy of value v1 down to its current implementation type, the actual copy of the string.
Maybe this can be achieved through "allocator-extended" constructors in value, even though value itself does not use allocators but instead needs to "propagate" them appropriately when needed.
But how can I accomplish this?
Edit: revealing part of the type generation:
A "Policy" is a template template parameter whose first param is the value_type (in this case value), and the second param is an allocator type. The "Policy" defines how a JSON type (e.g. an Array) shall be implemented in terms of the value type and the allocator type.
For example, for a JSON Array:
template <typename Value, typename Allocator>
struct default_array_policy : array_tag
{
private:
    typedef Value value_type;
    typedef typename Allocator::template rebind<value_type>::other value_type_allocator;
    typedef GetScopedAllocator<value_type_allocator> allocator_type;
public:
    typedef std::vector<value_type, allocator_type> type;
};
where GetScopedAllocator is defined as:
template <typename Allocator>
using GetScopedAllocator = typename std::conditional<
    std::is_empty<Allocator>::value,
    Allocator,
    std::scoped_allocator_adaptor<Allocator>
>::type;
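To illustrate what GetScopedAllocator selects (stateful_alloc below is a hypothetical non-empty allocator, not part of the actual code):

#include <memory>
#include <scoped_allocator>
#include <type_traits>

struct stateful_alloc : std::allocator<int> { int id; };   // hypothetical, non-empty

static_assert(std::is_same<GetScopedAllocator<std::allocator<int>>,
                           std::allocator<int>>::value,
              "stateless allocators are left untouched");
static_assert(std::is_same<GetScopedAllocator<stateful_alloc>,
                           std::scoped_allocator_adaptor<stateful_alloc>>::value,
              "stateful allocators are wrapped so they propagate downwards");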
The logic for deciding whether to pass an allocator to child elements is called uses-allocator construction in the standard, see 20.6.7 [allocator.uses].
There are two standard components which use the uses-allocator protocol: std::tuple and std::scoped_allocator_adaptor, and you can also write user-defined allocators that support it too (but it's often easier to just use scoped_allocator_adaptor to add support for the protocol to existing allocators.)
If you're using scoped_allocator_adaptor internally in value then all you should need to do to get scoped allocators to work is ensure value supports uses-allocator construction, which is specified by the std::uses_allocator<value, Alloc> trait. That trait is automatically true if value::allocator_type is defined and std::is_convertible<Alloc, value::allocator_type>::value is true. If value::allocator_type doesn't exist you can specialize the trait to be true (this is what std::promise and std::packaged_task do):
namespace std
{
    template<typename A, template<typename, typename> class... P, typename A2>
    struct uses_allocator<value<A, P...>, A2>
    : is_convertible<A2, A>
    { };
}
This will mean that when a value is constructed by a type that supports uses-allocator construction it will attempt to pass the allocator to the value constructor, so you do also need to add allocator-extended constructors so it can be passed.
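To make the mechanism concrete, here is a self-contained sketch of the protocol with a stand-in type; wrapper below is hypothetical and merely imitates value's situation: no allocator_type of its own, an allocator-aware member inside, a specialized uses_allocator trait, and allocator-extended constructors.

#include <cassert>
#include <memory>
#include <scoped_allocator>
#include <string>
#include <vector>

struct wrapper
{
    std::string s;   // stands in for the variant member that needs the allocator

    wrapper(const char* text) : s(text) {}

    // allocator-extended constructors, found by uses-allocator construction
    template <typename Alloc>
    wrapper(std::allocator_arg_t, const Alloc& a, const char* text) : s(text, a) {}
    template <typename Alloc>
    wrapper(std::allocator_arg_t, const Alloc& a, const wrapper& other) : s(other.s, a) {}
};

// Opt in to the protocol, since wrapper has no allocator_type member.
namespace std
{
    template <typename Alloc>
    struct uses_allocator<wrapper, Alloc> : true_type {};
}

int main()
{
    using outer_alloc =
        std::scoped_allocator_adaptor<std::allocator<wrapper>, std::allocator<char>>;

    std::vector<wrapper, outer_alloc> vec;
    vec.push_back(wrapper("hello"));   // the vector re-constructs the element via
                                       // (allocator_arg, inner allocator, element)
    vec.emplace_back("world");         // also goes through an allocator-extended constructor
    assert(vec[0].s == "hello" && vec[1].s == "world");
}

With a stateful allocator in place of std::allocator, this is exactly the path by which array1's allocator would reach the string inside the copied value.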
For this to work as you want:
// Insert via copy-ctor:
array1.push_back(v1);
the custom_allocator template must itself support uses-allocator construction, or you must have wrapped it so that Value::array_type::allocator_type is scoped_allocator_adaptor<custom_allocator<Value>>; I can't tell from your question whether that's the case.
Of course, for this to work the standard library implementation has to support scoped allocators; what compiler are you using? I'm only familiar with GCC's status in this area, where GCC 4.7 supports it for std::vector only. For GCC 4.8 I've added support to forward_list too. I hope the remaining containers will all be done for GCC 4.9.
N.B. Your types should also use std::allocator_traits for all allocator-related operations, instead of calling member functions on the allocator type directly.
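For example, a small hypothetical helper making that concrete (the name construct_element is not from the original discussion):

#include <memory>
#include <utility>

template <typename Alloc, typename T, typename... Args>
void construct_element(Alloc& a, T* p, Args&&... args)
{
    // Going through allocator_traits means allocators that omit the optional
    // construct() member still work and get the default placement-new behaviour.
    std::allocator_traits<Alloc>::construct(a, p, std::forward<Args>(args)...);
    // rather than calling a.construct(p, ...) directly
}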
Yes, value is actually incomplete at this moment when the types are generated
It is undefined behaviour to use incomplete types as template arguments when instantiating standard template components unless specified otherwise, see 17.6.4.8 [res.on.functions]. It might work with your implementation, but isn't required to.