How can I determine the arity of an aggregate in logarithmic (at least base-two) compilation time (strictly speaking, in a logarithmic number of instantiations)?
What I can currently do achieves the desired result in linear time:
#include <type_traits>
#include <utility>
struct filler { template< typename type > operator type (); };
template< typename A, typename index_sequence = std::index_sequence<>, typename = void >
struct aggregate_arity
: index_sequence
{
};
template< typename A, std::size_t ...indices >
struct aggregate_arity< A, std::index_sequence< indices... >, std::void_t< decltype(A{(indices, std::declval< filler >())..., std::declval< filler >()}) > >
: aggregate_arity< A, std::index_sequence< indices..., sizeof...(indices) > >
{
};
struct A0 {};
struct A1 { double x; };
struct A2 { int i; char c; };
struct C50 { template< typename ...Args, typename = std::enable_if_t< (sizeof...(Args) < 51) > > C50(Args &&...) { ; } };
static_assert(aggregate_arity< A0 >::size() == 0);
static_assert(aggregate_arity< A1 >::size() == 1);
static_assert(aggregate_arity< A2 >::size() == 2);
static_assert(aggregate_arity< C50 >::size() == 50);
Live example.
Please correct me if the term "arity" is a poor choice.
I think it is possible in principle: first one needs to double the number of arguments tried, starting from one, until SFINAE fails (in a soft manner, of course), then use bisection.
A bit of terminology first: we can argue that you are not so much looking for the aggregate initialization arity but the maximum aggregate initialization arity. E.g. the aptly named A2 can be aggregate initialized from 0, 1, and 2 arguments so its maximum arity is 2.
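For example, with the A2 from the question, each of the following compiles, while a fourth initializer-clause would not:
A2 a = {};             // 0 arguments
A2 b = {1};            // 1 argument
A2 c = {1, 'x'};       // 2 arguments: the maximum arity
// A2 d = {1, 'x', 3}; // error: more initializer-clauses than members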
Let’s turn 'is aggregate initializable from N arguments' into a trait (although with a shorter name):
struct filler { template<typename type> operator type () const; };
template<typename Arg> void accept(Arg);
template<typename Aggr, std::size_t... Indices,
typename = decltype( accept<Aggr>({ (static_cast<void>(Indices), filler {})... }) )>
void aggregate_arity_test(std::index_sequence<Indices...>);
template<typename Aggr, int N, typename Sfinae = void>
struct has_aggregate_arity: std::false_type {};
template<typename Aggr, int N>
struct has_aggregate_arity<Aggr, N, std::void_t<decltype( aggregate_arity_test<Aggr>(std::make_index_sequence<N>()) )>>
: std::true_type {};
(We use accept<Aggr>({ args... }) because that's the same as checking for Aggr aggr = { args... };, i.e. copy-list-initialization, which is what people have in mind when they talk about aggregate initialization. Aggr aggr{ args... }; is direct-list-initialization, but you can still check against that if that's what you care about.)
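In short, with the A2 from the question, the two forms being distinguished are:
A2 a = {1, 'x'};   // copy-list-initialization: what accept<A2>({ ... }) checks
A2 b{1, 'x'};      // direct-list-initialization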
Now we can find an arity for which initialization fails in not too many instantiations with iterated doubling (i.e. we will check at arity 0, then arity 1, arity 2, arity 4, arity 8, ..., arity 2^i):
template<typename Aggr, int Acc = 0>
struct find_failing_init_fast: std::conditional_t<
has_aggregate_arity<Aggr, Acc>::value,
find_failing_init_fast<Aggr, Acc == 0 ? 1 : Acc * 2>,
std::integral_constant<int, Acc>
> {};
Now it’s a matter of a binary search inside [0, N) where N is an arity for which initialization fails:
// constexpr midpoint helper used by the search below
constexpr int midpoint(int low, int high) { return low + (high - low) / 2; }
// binary search invariant:
// has_aggregate_arity<Aggr, Low> && !has_aggregate_arity<Aggr, High>
template<typename Aggr, int Low, int High>
struct max_aggregate_arity_impl
: std::conditional_t<
has_aggregate_arity<Aggr, midpoint(Low, High)>::value
&& !has_aggregate_arity<Aggr, midpoint(Low, High) + 1>::value,
std::integral_constant<int, midpoint(Low, High)>,
std::conditional<
has_aggregate_arity<Aggr, midpoint(Low, High)>::value,
max_aggregate_arity_impl<Aggr, midpoint(Low, High), High>,
max_aggregate_arity_impl<Aggr, Low, midpoint(Low, High)>
>
>::type {};
// special case that 'errors' out (important for SFINAE purposes)
// as the invariant obviously cannot be maintained
template<typename Aggr>
struct max_aggregate_arity_impl<Aggr, 0, 0> {};
template<typename Aggr>
struct max_aggregate_arity
: max_aggregate_arity_impl<Aggr, 0, find_failing_init_fast<Aggr>::value> {};
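A quick usage sketch (my addition), reusing the example types from the question:
static_assert(max_aggregate_arity<A0>::value == 0);
static_assert(max_aggregate_arity<A1>::value == 1);
static_assert(max_aggregate_arity<A2>::value == 2);
static_assert(max_aggregate_arity<C50>::value == 50);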
Live On Coliru
Discussion
(The discussion is based on another answer of mine which I will delete now.)
As in the original question, the following answer checks whether the invocation of the constructor of the aggregate is possible with a given number of arguments. For aggregates, one can base a binary search on this pattern by using the following properties from the standard:
8.5.1 (6):
An initializer-list is ill-formed if the number of initializer-clauses
exceeds the number of members or elements to initialize. [ Example:
char cv[4] = { 'a', 's', 'd', 'f', 0 }; // error is ill-formed. — end
example ]
and
8.5.1 (7):
If there are fewer initializer-clauses in the list than there are
members in the aggregate, then each member not explicitly initialized
shall be initialized from its default member initializer (9.2) or, if
there is no default member initializer, from an empty initializer list
(8.5.4). [ Example: struct S { int a; const char* b; int c; int d =
b[a]; }; S ss = { 1, "asdf" }; initializes ss.a with 1, ss.b with
"asdf", ss.c with the value of an expression of the form int{} (that
is, 0), and ss.d with the value of ss.b[ss.a] (that is, 's'), and in
struct X { int i, j, k = 42; }; X a[] = { 1, 2, 3, 4, 5, 6 }; X b[2] =
{ { 1, 2, 3 }, { 4, 5, 6 } }; a and b have the same value — end
example ]
However, as you already implied by the question title, a binary search will in general not work with non-aggregates: first because those are usually not callable with fewer parameters than necessary, and second because non-aggregates can have explicit constructors, so the "conversion-to-anything" trick via the struct filler won't work.
Implementation
The first ingredient is a constructibility check, adapted from the is_callable check from here:
template<typename V, typename ... Args>
struct is_constructible_impl
{
template<typename C> static constexpr auto test(int) -> decltype(C{std::declval<Args>() ...}, bool{}) { return true; }
template<typename> static constexpr auto test(...) { return false; }
static constexpr bool value = test<V>(int{});
using type = std::integral_constant<bool, value>;
};
template<typename ... Args>
using is_constructible = typename is_constructible_impl<Args...>::type;
Note that this one is also usable with fewer parameters than necessary (unlike your check).
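For example, reusing filler and A2 from the question (these checks are my addition):
static_assert(is_constructible<A2>::value);                          // 0 arguments
static_assert(is_constructible<A2, filler>::value);                  // 1 argument
static_assert(is_constructible<A2, filler, filler>::value);          // 2 arguments
static_assert(!is_constructible<A2, filler, filler, filler>::value); // too many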
Next, a helper function which takes the number of arguments as a template parameter and returns whether the aggregate can be initialized with that many constructor arguments:
template<typename A, size_t ... I>
constexpr auto check_impl(std::index_sequence<I ...>)
{
return is_constructible<A, decltype(I, filler{}) ...>::value;
}
template<typename A, size_t N>
constexpr auto check()
{
return check_impl<A>(std::make_index_sequence<N>{});
}
And finally the binary search:
template<typename A, size_t Low, size_t Up, size_t i = Low + (Up - Low)/2>
struct binary_search
: public std::conditional_t<check<A, i>() && !check<A,i+1>()
, std::integral_constant<size_t, i>
, std::conditional_t<check<A, i>()
, binary_search<A, i, Up>
, binary_search<A, Low, i> >
>
{};
Use it as
int main()
{
static_assert(binary_search<A2,0,10>::value==2);
}
Live on Coliru
Related
Consider the following code:
struct A
{
// No data members
//...
};
template<typename T, size_t N>
struct B : A
{
T data[N];
};
This is how you have to initialize B: B<int, 3> b = { {}, {1, 2, 3} };
I want to avoid the unnecessary empty {} for the base class.
There is a solution proposed by Jarod42 here; however, it doesn't work with default initialization of the remaining elements: B<int, 3> b = {1, 2, 3}; is fine, but B<int, 3> b = {1}; is not: b.data[1] and b.data[2] aren't default-initialized to 0, and a compiler error occurs.
Is there any way (or will there be with C++20) to "hide" the base class from construction?
The easiest solution is to add a variadic constructor:
struct A { };
template<typename T, std::size_t N>
struct B : A {
template<class... Ts, typename = std::enable_if_t<
(std::is_convertible_v<Ts, T> && ...)>>
B(Ts&&... args) : data{std::forward<Ts>(args)...} {}
T data[N];
};
void foo() {
B<int, 3> b1 = {1, 2, 3};
B<int, 3> b2 = {1};
}
If you provide fewer elements in the {...} initializer list than N, the remaining elements in the array data will be value-initialized as by T().
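For instance, a small check of that behavior (my addition, using the B defined above):
#include <cassert>
int main() {
    B<int, 3> b = {1};
    assert(b.data[0] == 1);
    assert(b.data[1] == 0 && b.data[2] == 0); // value-initialized remainder
}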
Since C++20 you could use designated initializers in aggregate initialization.
B<int, 3> b = { .data {1} }; // initialize b.data with {1},
// b.data[0] is 1, b.data[1] and b.data[2] would be 0
Sticking with a constructor, you might do something like:
template<typename T, size_t N>
struct B : A
{
public:
constexpr B() : data{} {}
template <typename ... Ts,
std::enable_if_t<(sizeof...(Ts) != 0 && sizeof...(Ts) < N)
|| !std::is_same_v<B, std::decay_t<T>>, int> = 0>
constexpr B(T&& arg, Ts&&... args) : data{std::forward<T>(arg), std::forward<Ts>(args)...}
{}
T data[N];
};
Demo
The SFINAE is there mainly to avoid creating a pseudo copy constructor B(B&).
You would need an extra private tag to support B<std::index_sequence<0, 1>, 42> ;-)
I've found another solution that (I don't know how) works perfectly and solves the problem we were discussing under Evg's answer:
struct A {};
template<typename T, size_t N>
struct B_data
{
T data[N];
};
template<typename T, size_t N>
struct B : B_data<T, N>, A
{
// ...
};
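A quick check (my addition) of how this behaves, assuming B stays an aggregate as shown (C++17 aggregate initialization with brace elision):
constexpr B<int, 3> b1 = {1, 2, 3};  // 1, 2, 3 fill B_data<int, 3>::data; A is value-initialized
constexpr B<int, 3> b2 = {1};        // remaining elements are value-initialized
static_assert(b1.data[2] == 3);
static_assert(b2.data[1] == 0 && b2.data[2] == 0);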
How would one best implement a single function that accepts two std::array<int, [size]> arguments, each with a size constrained by a corresponding set of values known at compile-time?
The function must only accept arrays with sizes derived from a given set (enum/macro/etc)
The sets of allowable array "sizes" may be changed in the future and may be large (effectively precluding function overloading)
The function itself should remain fixed regardless of changes to the sets of allowable array "sizes"
The question "Passing a std::array of unknown size to a function", while similar, doesn't appear to directly apply.
The following works in C++14 but seems unnecessarily redundant & messy:
#include <type_traits>
#include <array>
// Add legal/allowable sizes for std::array<> "types" here
// Note: Not married to this; perhaps preprocessor instead?
enum class SizesForArrayX : size_t { Three = 3, Four, Forty = 40 };
enum class SizesForArrayY : size_t { Two = 2, Three, EleventyTwelve = 122 };
// Messy, compile-time, value getter for the above enum classes
template <typename S>
constexpr size_t GetSizeValue(const S size)
{ return static_cast<std::underlying_type_t<S>>(size); }
// An example of the function in question; is Template Argument Deduction
// possible here?
// Note: only arrays of "legal"/"allowable" sizes should be passable
template <SizesForArrayX SX, SizesForArrayY SY>
void PickyArrayHandler(
    std::array<int, GetSizeValue(SX)>& x,
    std::array<int, GetSizeValue(SY)>& y)
{
// Do whatever
for (auto& i : x) i = 42;
for (auto& i : y) while (i --> -41) i = i;
}
Calling the above:
int main()
{
// Declare & (value-)initialize some arrays
std::array<int, GetSizeValue(SizesForArrayX::Forty)> x{};
std::array<int, GetSizeValue(SizesForArrayY::Two)> y{};
//PickyArrayHandler(x, y); // <- Doesn't work; C2672, C2783
// This works & handles arrays of any "allowable" size but the required
// template params are repetitions of the array declarations; ick
PickyArrayHandler<SizesForArrayX::Forty, SizesForArrayY::Two>(x, y);
}
...which is ugly, inelegant, slow-to-compile, and requires the declared array size match the explicit "size" passed to the PickyArrayHandler function template.
For the specific example above: Is there a way for the PickyArrayHandler template to deduce the sizes of the passed arrays?
Generally speaking: Is there a different, better approach?
Since you don't seem to be picky about how the valid sizes are defined, you can use type traits:
#include <array>
template <size_t N> struct valid_size1 { enum { value = false }; };
template <size_t N> struct valid_size2 { enum { value = false }; };
template <> struct valid_size1<3> { enum { value = true }; };
template <> struct valid_size1<4> { enum { value = true }; };
template <> struct valid_size1<40> { enum { value = true }; };
template <> struct valid_size2<2> { enum { value = true }; };
template <> struct valid_size2<122> { enum { value = true }; };
template <size_t TX, size_t TY>
void PickyArrayHandler(const std::array<int, TX> &x,
const std::array<int, TY> &y)
{
static_assert(valid_size1<TX>::value, "Size 1 is invalid");
static_assert(valid_size2<TY>::value, "Size 2 is invalid");
// Do whatever
}
int main()
{
// Declare & (value-)initialize some arrays
std::array<int, 40> x{};
std::array<int, 2> y{};
PickyArrayHandler(x, y);
PickyArrayHandler(std::array<int, 4>{}, std::array<int, 2>{});
// PickyArrayHandler(std::array<int, 1>{}, std::array<int, 5>{}); // BOOM!
}
Here's a solution using an array:
#include <iostream>
#include <array>
constexpr size_t valid_1[] = { 3, 4, 40 };
constexpr size_t valid_2[] = { 2, 122 };
template <size_t V, size_t I=0>
struct is_valid1 { static constexpr bool value = V==valid_1[I] || is_valid1<V,I+1>::value; };
template <size_t V, size_t I=0>
struct is_valid2 { static constexpr bool value = V==valid_2[I] || is_valid2<V,I+1>::value; };
template <size_t V>
struct is_valid1<V, sizeof(valid_1)/sizeof(valid_1[0])>
{static constexpr bool value = false; };
template <size_t V>
struct is_valid2<V, sizeof(valid_2)/sizeof(valid_2[0])>
{static constexpr bool value = false; };
template <size_t TX, size_t TY>
void PickyArrayHandler(const std::array<int, TX> &x,
const std::array<int, TY> &y)
{
static_assert(is_valid1<TX>::value, "Size 1 is invalid");
static_assert(is_valid2<TY>::value, "Size 2 is invalid");
// Do whatever
}
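Usage is the same as in the previous answer; a brief sketch:
int main()
{
    std::array<int, 40> x{};
    std::array<int, 122> y{};
    PickyArrayHandler(x, y);
    // PickyArrayHandler(std::array<int, 5>{}, std::array<int, 2>{}); // static_assert fires
}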
I twiddled around a bit and got this reduced version working; maybe it helps:
#include <array>
enum SizesForArrayX : size_t { Three = 3, Four, Forty = 40 };
enum SizesForArrayY : size_t { Two = 2, EleventyTwelve = 122 };
template <size_t TX, size_t TY>
void PickyArrayHandler(
const std::array<int, TX>& x,
const std::array<int, TY>& y)
{
// Do whatever
}
int main()
{
// Declare & (value-)initialize some arrays
std::array<int, SizesForArrayX::Forty> x{};
std::array<int, SizesForArrayY::Two> y{};
PickyArrayHandler(x, y);
return 0;
}
Unfortunately, your enums are not contiguous, so you cannot simply iterate over them; you have to handle all cases individually. Since the sizes are known at compile time, you can static_assert for them.
#include <array>
enum SizesForArrayX : size_t { Three = 3, Four, Forty = 40 };
enum SizesForArrayY : size_t { Two = 2, EleventyTwelve = 122 };
template <size_t TX, size_t TY>
void PickyArrayHandler(const std::array<int, TX> &x,
const std::array<int, TY> &y)
{
static_assert(TX == Three || TX == Four || TX == Forty,
"Size mismatch for x");
static_assert(TY == Two || TY == EleventyTwelve, "Size mismatch for y");
// Do whatever
}
int main()
{
// Declare & (value-)initialize some arrays
std::array<int, SizesForArrayX::Forty> x{};
std::array<int, SizesForArrayY::Two> y{};
PickyArrayHandler(x, y);
PickyArrayHandler(std::array<int, 4>{}, std::array<int, 2>{});
//PickyArrayHandler(std::array<int, 1>{}, std::array<int, 5>{}); // BOOM!
}
The best way I see to solve this problem is writing a custom type trait:
template <std::underlying_type_t<SizesForArrayX> SX>
struct is_size_x {
static constexpr bool value = false;
};
template <>
struct is_size_x<static_cast<std::underlying_type_t<SizesForArrayX>>(SizesForArrayX::Forty)>{
static constexpr bool value = true;
};
I'd put these right under the enum class declarations, just so it's easy to check that you got them all. Somebody more clever than I could probably figure out a way to even do this with variadic templates so you only need one specialization.
While tedious, if you have a small set of values this should be fast enough and easy to put in unit tests. The other nice thing about this approach is that if you have multiple functions that need one of these special sizes, you don't have to copy/paste static_asserts around.
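For what it's worth, a sketch (my addition, not part of the original answer) of such a variadic trait, which would replace the hand-written specializations above with a single definition:
#include <cstddef>
#include <type_traits>
template <std::size_t N, std::size_t... Allowed>
struct is_one_of_sizes : std::false_type {};
template <std::size_t N, std::size_t First, std::size_t... Rest>
struct is_one_of_sizes<N, First, Rest...>
    : std::conditional_t<N == First, std::true_type, is_one_of_sizes<N, Rest...>> {};
// the per-array traits then become one-liners
template <std::size_t SX>
using is_size_x = is_one_of_sizes<SX, 3, 4, 40>;
template <std::size_t SY>
using is_size_y = is_one_of_sizes<SY, 2, 3, 122>;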
With the type traits, your function becomes trivial:
template <std::size_t SX, std::size_t SY>
void PickyArrayHandler(
std::array<int, SX>& x,
std::array<int, SY>& y)
{
static_assert(is_size_x<SX>::value, "Invalid size SX");
static_assert(is_size_y<SY>::value, "Invalid size SY");
// Do whatever
for (auto& i : x) i = 42;
for (auto& i : y) while (i --> -41) i = i;
}
Lastly, you can make a type alias to avoid creating invalid arrays in the first place:
template <typename T, SizesForArrayX SIZE>
using XArray =
std::array<T, static_cast<std::underlying_type_t<SizesForArrayX>>(SIZE)>;
template <typename T, SizesForArrayY SIZE>
using YArray =
std::array<T, static_cast<std::underlying_type_t<SizesForArrayY>>(SIZE)>;
That'll prevent you from declaring an array if it's not an approved size:
XArray<int, SizesForArrayX::Forty> x{};
YArray<int, SizesForArrayY::Two> y{};
Personally I would just manually type the allowable sizes into a static_assert inside PickyArrayHandler. If that's not an option because the sizes will be used in other parts of your program and you're adhering to the DRY principle, then I'd use the preprocessor.
#include <array>
#include <cstddef>
#define FOREACH_ALLOWABLE_X(SEP_MACRO) \
SEP_MACRO(3) \
SEP_MACRO(4) \
SEP_MACRO(40) \
#define FOREACH_ALLOWABLE_Y(SEP_MACRO) \
SEP_MACRO(2) \
SEP_MACRO(3) \
SEP_MACRO(122) \
#define COMMA_SEP(NUM) NUM,
#define LOGIC_OR_SEP_X(NUM) N1 == NUM ||
#define LOGIC_OR_SEP_Y(NUM) N2 == NUM ||
#define END_LOGIC_OR false
// some arrays with your sizes incase you want to do runtime checking
namespace allowable_sizes
{
size_t x[] {FOREACH_ALLOWABLE_X(COMMA_SEP)};
size_t y[] {FOREACH_ALLOWABLE_Y(COMMA_SEP)};
}
template <size_t N1, size_t N2>
void PickyArrayHandler(const std::array<int, N1>& x, const std::array<int, N2>& y)
{
static_assert(FOREACH_ALLOWABLE_X(LOGIC_OR_SEP_X) END_LOGIC_OR);
static_assert(FOREACH_ALLOWABLE_Y(LOGIC_OR_SEP_Y) END_LOGIC_OR);
// do whatever
}
#undef FOREACH_ALLOWABLE_X
#undef FOREACH_ALLOWABLE_Y
#undef COMMA_SEP
#undef LOGIC_OR_SEP_X
#undef LOGIC_OR_SEP_Y
#undef END_LOGIC_OR
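Usage then looks like the following (a brief sketch of mine):
int main()
{
    PickyArrayHandler(std::array<int, 3>{}, std::array<int, 122>{});  // OK
    // PickyArrayHandler(std::array<int, 5>{}, std::array<int, 2>{}); // static_assert: 5 is not allowed for x
}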
Some C++ purists will frown at it but it gets the job done.
You could have an is_of_size-like template that checks the size of the array, and then use it to disable the function template if one of the sizes does not match, something like:
#include <array>
#include <type_traits>
// Forward template declaration without definition.
template <class T, T N, T... Sizes>
struct is_one_of;
// Specialization when there is a single value: Ends of the recursion,
// the size was not found, so we inherit from std::false_type.
template <class T, T N>
struct is_one_of<T, N>: public std::false_type {};
// Generic case definition: We inherit from std::integral_constant<bool, X>, where X
// is true if N == Size or if N is in Sizes... (via recursion).
template <class T, T N, T Size, T... Sizes>
struct is_one_of<T, N, Size, Sizes... >:
public std::integral_constant<
bool, N == Size || is_one_of<T, N, Sizes... >::value> {};
// Alias variable template, for simpler usage.
template <class T, T N, T... Sizes>
constexpr bool is_one_of_v = is_one_of<T, N, Sizes... >::value;
template <std::size_t N1, std::size_t N2,
std::enable_if_t<
(is_one_of_v<std::size_t, N1, 3, 4, 40>
&& is_one_of_v<std::size_t, N2, 2, 3, 122>), int> = 0>
void PickyArrayHandler(
const std::array<int, N1>& x,
const std::array<int, N2>& y)
{
}
Then you can simply:
PickyArrayHandler(std::array<int, 3>{}, std::array<int, 122>{}); // OK
PickyArrayHandler(std::array<int, 2>{}, std::array<int, 3>{}); // NOK
In C++17, you could (I think) replace is_one_of with:
template <auto N, auto... Sizes>
struct is_one_of;
...and automatically deduce std::size_t.
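A sketch of what that C++17 variant might look like (my addition), using a fold expression:
template <auto N, auto... Sizes>
struct is_one_of : std::bool_constant<((N == Sizes) || ...)> {};
template <auto N, auto... Sizes>
constexpr bool is_one_of_v = is_one_of<N, Sizes...>::value;
// used as is_one_of_v<N1, 3, 4, 40>; the type of each non-type argument is deduced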
In C++20, you could use a concept to have clearer error messages ;)
Using static_assert for invalid sizes is not a good solution because it doesn't play well with SFINAE; i.e., TMP facilities like std::is_invocable and the detection idiom will return false positives for calls that in fact always yield an error. Far better is to use SFINAE to remove invalid sizes from the overload set, resulting in something resembling the following:
template<std::size_t SX, std::size_t SY,
typename = std::enable_if_t<IsValidArrayXSize<SX>{} && IsValidArrayYSize<SY>{}>>
void PickyArrayHandler(std::array<int, SX> const& x, std::array<int, SY> const& y) {
// Do whatever
}
First we need to declare our valid sizes; I don't see any benefit to stronger typing here, so for a compile-time list of integers, std::integer_sequence works just fine and is very lightweight:
using SizesForArrayX = std::index_sequence<3, 4, 40>;
using SizesForArrayY = std::index_sequence<2, 3, 122>;
Now for the IsValidArraySize traits... The straightforward route is to make use of C++14's relaxed-constexpr rules and perform a simple linear search:
#include <initializer_list>
namespace detail {
template<std::size_t... VSs>
constexpr bool idx_seq_contains(std::index_sequence<VSs...>, std::size_t const s) {
for (auto const vs : {VSs...}) {
if (vs == s) {
return true;
}
}
return false;
}
} // namespace detail
template<std::size_t S>
using IsValidArrayXSize
= std::integral_constant<bool, detail::idx_seq_contains(SizesForArrayX{}, S)>;
template<std::size_t S>
using IsValidArrayYSize
= std::integral_constant<bool, detail::idx_seq_contains(SizesForArrayY{}, S)>;
Online Demo
However if compile times are at all a concern, I suspect the following will be better, if potentially less clear:
namespace detail {
template<bool... Bs>
using bool_sequence = std::integer_sequence<bool, Bs...>;
template<typename, std::size_t>
struct idx_seq_contains;
template<std::size_t... VSs, std::size_t S>
struct idx_seq_contains<std::index_sequence<VSs...>, S>
: std::integral_constant<bool, !std::is_same<bool_sequence<(VSs == S)...>,
bool_sequence<(VSs, false)...>>{}>
{ };
} // namespace detail
template<std::size_t S>
using IsValidArrayXSize = detail::idx_seq_contains<SizesForArrayX, S>;
template<std::size_t S>
using IsValidArrayYSize = detail::idx_seq_contains<SizesForArrayY, S>;
Online Demo
Whichever implementation route is chosen, using SFINAE in this way enables very nice error messages – e.g. for PickyArrayHandler(std::array<int, 5>{}, std::array<int, 3>{});, current Clang 7.0 ToT yields the following, telling you which array's size is invalid:
error: no matching function for call to 'PickyArrayHandler'
PickyArrayHandler(std::array<int, 5>{}, std::array<int, 3>{});
^~~~~~~~~~~~~~~~~
note: candidate template ignored: requirement 'IsValidArrayXSize<5UL>{}' was not satisfied [with SX = 5, SY = 3]
void PickyArrayHandler(std::array<int, SX> const& x, std::array<int, SY> const& y) {
^
I would like to iterate over a tuple in some way with member function templates (to later create a new type of tuple from the given template type T).
However, the break condition (function) is not selected, so I get this error:
invalid use of incomplete type: 'class std::tuple_element<0ul, std::tuple<> >'
The problem seems to be that even when N == size of the tuple, std::tuple_element_t<N, tuple> in the other overload is still instantiated during substitution, and the resulting error is not treated as SFINAE.
Both examples show different non-working attempts. What am I doing wrong?
Note: The function that is selected when is_same matches is omitted to minimize the example.
#include <type_traits>
#include <tuple>
template<typename...Ts>
struct A
{
using tuple = std::tuple<Ts...>;
static constexpr std::size_t size = sizeof...(Ts);
template<typename T, std::size_t N = 0, typename std::enable_if_t<N == size>* = nullptr>
int get()
{
return 0;
}
template<typename T, std::size_t N = 0, typename std::enable_if_t<N != size && !std::is_same<T, std::tuple_element_t<N, tuple>>::value>* = nullptr>
int get()
{
return get<T, N + 1>() - 1;
}
};
int main()
{
A<int, float, double, float, float> a;
return a.get<char>();
}
Live Example 1
#include <type_traits>
#include <tuple>
template<typename...Ts>
struct A
{
using tuple = std::tuple<Ts...>;
static constexpr std::size_t size = sizeof...(Ts);
template<typename T, std::size_t N = 0>
std::enable_if_t<N == size, int> get()
{
return 0;
}
template<typename T, std::size_t N = 0>
std::enable_if_t<N != size && !std::is_same<T, std::tuple_element_t<N, tuple>>::value, int> get()
{
return get<T, N + 1>() - 1;
}
};
int main()
{
A<int, float, double, float, float> a;
return a.get<char>();
}
Live Example 2
One workaround would be to use a third function, so that the recursion only runs up to the tuple size - 2 and the case tuple size - 1 is evaluated separately, but is this really necessary?
#include <type_traits>
#include <tuple>
template<typename...Ts>
struct A
{
using tuple = std::tuple<Ts...>;
static constexpr std::size_t size = sizeof...(Ts);
template<typename T, std::size_t N = 0, typename std::enable_if_t<(N == size - 1) && std::is_same<T, std::tuple_element_t<N, tuple>>::value>* = nullptr>
int get()
{
return 1;
}
template<typename T, std::size_t N = 0, typename std::enable_if_t<(N == size - 1) && !std::is_same<T, std::tuple_element_t<N, tuple>>::value>* = nullptr>
int get()
{
return 2;
}
template<typename T, std::size_t N = 0, typename std::enable_if_t<(N < size - 1) && !std::is_same<T, std::tuple_element_t<N, tuple>>::value>* = nullptr>
int get()
{
return get<T, N + 1>() - 1;
}
};
int main()
{
A<int, float, double, float, float> a;
return a.get<char>();
}
Live Example 3
As suggested by #PiotrSkotnicki in the comments to the question, here is your second example once fixed:
#include <type_traits>
#include <tuple>
template<typename...Ts>
struct A
{
using tuple = std::tuple<Ts...>;
static constexpr std::size_t size = sizeof...(Ts);
template<typename T, std::size_t N = 0>
std::enable_if_t<N == size-1, int>
get()
{
return std::is_same<T, std::tuple_element_t<N, tuple>>::value ? N : 0;
}
template<typename T, std::size_t N = 0>
std::enable_if_t<N != size-1 && !std::is_same<T, std::tuple_element_t<N, tuple>>::value, int>
get()
{
return get<T, N + 1>() - 1;
}
};
int main()
{
A<int, float, double, float, float> a;
return a.get<char>();
}
What was the problem?
Consider the following line:
std::enable_if_t<N != size && !std::is_same<T, std::tuple_element_t<N, tuple>>::value, int> get()
In this case, N was substituted into the condition of the enable_if even when N == size (the substitution is required precisely to find out that N == size).
Thus the tuple_element_t went out of range, and that's why you got the compilation error.
I've simply updated your code to avoid reaching size while iterating over N. It was a matter of using size-1 as a value on which to switch between functions.
In a comment to this answer the OP said:
It does solve the problem but not for automatic type return type deduction based on which function is used (returning int was just an example). I should have been clearer on this.
What follows is a minimal, working example that probably solves that problem as well.
It's far easier to reason in terms of inheritance and tag dispatching in this case, so as to reduce the boilerplate due to SFINAE. Moreover, one can use specializations to introduce specific behaviors for specific types if needed.
The final case, the one for the type that is not part of the types list, is easily handled in a dedicated function as well.
Here is the code:
#include <type_traits>
#include <tuple>
template<typename>
struct tag {};
template<typename...>
struct B;
template<typename T, typename... Ts>
struct B<T, Ts...>: B<Ts...> {
using B<Ts...>::get;
auto get(tag<T>) {
return T{};
}
};
template<>
struct B<> {
template<typename T>
auto get(tag<T>) {
return nullptr;
}
};
template<typename...Ts>
struct A: private B<Ts...>
{
template<typename T>
auto get() {
return B<Ts...>::get(tag<T>{});
}
};
int main()
{
A<int, float, double, float, float> a;
static_assert(std::is_same<decltype(a.get<char>()), std::nullptr_t>::value, "!");
static_assert(std::is_same<decltype(a.get<float>()), float>::value, "!");
}
What about using an additional struct that, with partial specialization, can avoid the use of std::tuple_element_t?
I mean, something like
template <typename T, std::size_t N>
struct checkType
{ constexpr static bool value
= std::is_same<T, std::tuple_element_t<N, tuple>>::value; };
template <typename T>
struct checkType<T, size>
{ constexpr static bool value = false; };
template <typename, std::size_t N = 0>
std::enable_if_t<N == size, int> get ()
{ return 0; }
template <typename T, std::size_t N = 0>
std::enable_if_t<(N < size) && ! checkType<T, N>::value, int> get()
{ return get<T, N + 1>() - 1; }
In all the modern C++ compilers I've worked with, the following is legal:
std::array<float, 4> a = {1, 2, 3, 4};
I'm trying to make my own class that has similar construction semantics, but I'm running into an annoying problem. Consider the following attempt:
#include <array>
#include <cstddef>
template<std::size_t n>
class float_vec
{
private:
std::array<float, n> underlying_array;
public:
template<typename... Types>
float_vec(Types... args)
: underlying_array{{args...}}
{
}
};
int main()
{
float_vec<4> v = {1, 2, 3, 4}; // error here
}
When using int literals like above, the compiler complains it can't implicitly convert int to float. I think it works in the std::array example, though, because the values given are compile-time constants known to be within the domain of float. Here, on the other hand, the variadic template uses int for the parameter types and the conversion happens within the constructor's initializer list where the values aren't known at compile-time.
I don't want to do an explicit cast in the constructor since that would then allow for all numeric values even if they can't be represented by float.
The only way I can think of to get what I want is to somehow have a variable number of parameters, but of a specific type (in this case, I'd want float). I'm aware of std::initializer_list, but I'd like to be able to enforce the number of parameters at compile time as well.
Any ideas? Is what I want even possible with C++11? Anything new proposed for C++14 that will solve this?
A little trick is to use constructor inheritance. Just make your class derive from another class which has a pack of the parameters you want.
#include <cstddef>
#include <iostream>
template <class T, std::size_t N, class Seq = repeat_types<N, T>>
struct _array_impl;
template <class T, std::size_t N, class... Seq>
struct _array_impl<T, N, type_sequence<Seq...>>
{
_array_impl(Seq... elements) : _data{elements...} {}
const T& operator[](std::size_t i) const { return _data[i]; }
T _data[N];
};
template <class T, std::size_t N>
struct array : _array_impl<T, N>
{
using _array_impl<T, N>::_array_impl;
};
int main() {
array<float, 4> a {1, 2, 3, 4};
for (int i = 0; i < 4; i++)
std::cout << a[i] << std::endl;
return 0;
}
Here is a sample implementation of the repeat_types utility. This sample uses logarithmic template recursion, which is a little less intuitive to implement than linear recursion.
template <class... T>
struct type_sequence
{
static constexpr inline std::size_t size() noexcept { return sizeof...(T); }
};
template <class, class>
struct _concatenate_sequences_impl;
template <class... T, class... U>
struct _concatenate_sequences_impl<type_sequence<T...>, type_sequence<U...>>
{ using type = type_sequence<T..., U...>; };
template <class T, class U>
using concatenate_sequences = typename _concatenate_sequences_impl<T, U>::type;
template <std::size_t N, class T>
struct _repeat_sequence_impl
{ using type = concatenate_sequences<
typename _repeat_sequence_impl<N/2, T>::type,
typename _repeat_sequence_impl<N - N/2, T>::type>; };
template <class T>
struct _repeat_sequence_impl<1, T>
{ using type = T; };
template <class... T>
struct _repeat_sequence_impl<0, type_sequence<T...>>
{ using type = type_sequence<>; };
template <std::size_t N, class... T>
using repeat_types = typename _repeat_sequence_impl<N, type_sequence<T...>>::type;
First of all, what you are seeing is default aggregate initialization. It has been around since the earliest K&R C. If your type is an aggregate, it already supports aggregate initialization. Also, your example will most likely compile, but the correct way to initialize it is std::array<int, 3> x = {{1, 2, 3}}; (note the double braces).
What has been added in C++11 is the initializer_list construct which requires a bit of compiler magic to be implemented.
So, what you can do now is add constructors and assignment operators that accept a value of std::initializer_list and this will offer the same syntax for your type.
Example:
#include <initializer_list>
struct X {
X(std::initializer_list<int>) {
// do stuff
}
};
int main()
{
X x = {1, 2, 3};
return 0;
}
Why does your current approach not work? Because in C++11 std::initializer_list::size() is not constexpr, nor is the size part of the initializer_list type, so you cannot use it as a template parameter.
A few possible hacks: make your type an aggregate.
#include <array>
template<std::size_t N>
struct X {
std::array<int, N> x;
};
int main()
{
X<3> x = {{{1, 2, 3}}}; // triple braces required
return 0;
}
Provide a make_* function to deduce the number of arguments:
#include <array>
template<std::size_t N>
struct X {
std::array<int, N> x;
};
template<typename... T>
auto make_X(T... args) -> X<sizeof...(T)>
// might want to find the common_type of the argument pack as well
{ return X<sizeof...(T)>{{{args...}}}; }
int main()
{
auto x = make_X(1, 2, 3);
return 0;
}
If you use several braces to initialize the instance, you can leverage list-init of another type to accept these conversions for compile-time constants. Here's a version that uses a raw array, so you only need parens + braces for construction:
#include <array>
#include <cstddef>
template<int... Is> struct seq {};
template<int N, int... Is> struct gen_seq : gen_seq<N-1, N-1, Is...> {};
template<int... Is> struct gen_seq<0, Is...> : seq<Is...> {};
template<std::size_t n>
class float_vec
{
private:
std::array<float, n> underlying_array;
template<int... Is>
constexpr float_vec(float const(&arr)[n], seq<Is...>)
: underlying_array{{arr[Is]...}}
{}
public:
constexpr float_vec(float const(&arr)[n])
: float_vec(arr, gen_seq<n>{})
{}
};
int main()
{
float_vec<4> v0 ({1, 2, 3, 4}); // fine
float_vec<4> v1 {{1, 2, 3, 4}}; // fine
float_vec<4> v2 = {{1, 2, 3, 4}}; // fine
}
Explicitly specify the data type of the initializers as a floating-point type. You can do this by writing "1.0f" instead of "1". If it is a double, write "1.0"; if it is a long, write "1l"; for an unsigned long, write "1ul"; and so on.
I've tested it here: http://coliru.stacked-crooked.com/a/396f5d418cbd3f14
and here: http://ideone.com/ZLiMhg
Your code was fine. You just initialized the class a bit incorrectly.
#include <array>
#include <cstddef>
#include <iostream>
template<std::size_t n>
class float_vec
{
private:
std::array<float, n> underlying_array;
public:
template<typename... Types>
float_vec(Types... args)
: underlying_array{{args...}}
{
}
float get(int index) {return underlying_array[index];}
};
int main()
{
float_vec<4> v = {1.5f, 2.0f, 3.0f, 4.5f}; //works fine now..
for (int i = 0; i < 4; ++i)
std::cout<<v.get(i)<<" ";
}
How can I get a count of the number of arguments to a variadic template function?
ie:
template<typename... T>
void f(const T&... t)
{
int n = number_of_args(t);
...
}
What is the best way to implement number_of_args in the above?
Just write this:
const std::size_t n = sizeof...(T); //you may use `constexpr` instead of `const`
Note that n is a constant expression (i.e known at compile-time), which means you may use it where constant expression is needed, such as:
std::array<int, n> a; //array of n elements
std::array<int, 2*n> b; //array of (2*n) elements
auto middle = std::get<n/2>(tupleInstance);
Note that if you want to compute the aggregated size of the packed types (as opposed to the number of types in the pack), then you have to do something like this:
template<std::size_t ...>
struct add_all : std::integral_constant< std::size_t,0 > {};
template<std::size_t X, std::size_t ... Xs>
struct add_all<X,Xs...> :
std::integral_constant< std::size_t, X + add_all<Xs...>::value > {};
then do this:
constexpr auto size = add_all< sizeof(T)... >::value;
In C++17 (and later), computing the sum of the sizes of the types is much simpler using a fold expression:
constexpr auto size = (sizeof(T) + ...);
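One caveat (my addition, not from the original answer): a unary fold over + is ill-formed if the pack is empty, so if T... may be empty, fold over an initial value instead:
constexpr auto size = (std::size_t{0} + ... + sizeof(T));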
#include <iostream>
#include <string>
template<typename ...Args>
struct SomeStruct
{
static const int size = sizeof...(Args);
};
template<typename... T>
void f(const T&... t)
{
// this is first way to get the number of arguments
constexpr auto size = sizeof...(T);
std::cout<<size <<std::endl;
}
int main ()
{
f("Raje", 2, 4, "ASH");
// this is 2nd way to get the number of arguments
std::cout<<SomeStruct<int, std::string>::size<<std::endl;
return 0;
}