Idiomatic way to calculate a template parameter depending on other parameters - C++

I am looking for an idiomatic way to optimize this template I wrote.
My main concern is how to correctly define the template parameter n and use it in the return type, while making sure the user cannot override it.
I am also open to other suggestions on how to write this template in idiomatic C++14.
template<
    typename InType=uint32_t,
    typename OutType=float,
    unsigned long bits=8,
    unsigned long n=(sizeof(InType) * 8) / bits
>
std::array<OutType,n> hash_to_color(InType in) noexcept {
    InType mask = ~0;
    mask = mask << bits;
    mask = ~mask;
    std::array<OutType,n> out;
    auto out_max = static_cast<OutType>((1 << bits) - 1);
    for (auto i = 0; i < n; i++) {
        auto selected = (in >> (i * bits)) & mask;
        out[i] = static_cast<OutType>(selected) / out_max;
    }
    return out;
}

Regarding the n template parameter, you can avoid it by using auto as the return type in C++14. Here's a simpler example of the principle:
template<int N>
auto f()
{
    constexpr int bar = N * 3;
    std::array<int, bar> foo;
    return foo;
}
Naturally the calculation of the array template parameter must be a constant expression.
Another option (compatible with C++11) is a trailing return type:
template<int N>
auto f() -> std::array<int, N * 3>
{
    return {};
}
This is a wee bit more verbose than taking advantage of C++14's return type deduction from the return statement.
Note: ~0 in your code is wrong because 0 is an int; it should be ~(InType)0. Also, (1 << bits) - 1 has potential overflow issues.
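For reference, here is a sketch (untested, not part of the original answer) that applies both the C++14 auto return type and InType-typed constants to the question's function; it still assumes bits is smaller than the bit width of InType:
#include <array>
#include <cstdint>

template<typename InType = uint32_t, typename OutType = float, unsigned long bits = 8>
auto hash_to_color(InType in) noexcept {
    constexpr unsigned long n = (sizeof(InType) * 8) / bits;
    InType mask = ~InType{0};                    // build the mask in InType, not int
    mask = ~(mask << bits);
    const auto out_max = static_cast<OutType>((InType{1} << bits) - 1);
    std::array<OutType, n> out{};
    for (unsigned long i = 0; i < n; i++) {
        auto selected = (in >> (i * bits)) & mask;
        out[i] = static_cast<OutType>(selected) / out_max;
    }
    return out;
}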

I think M.M.'s answer is excellent, and, in your case, I'd definitely use one of the two alternatives suggested there.
Suppose you later encounter a situation where, given n, the logic is to use not 3n, but something more complicated, e.g., n² + 3n + 1. Alternatively, maybe the logic is not very complicated, but it is subject to change.
The first option - using an automatically deduced auto return type - is pithy, but the omission sometimes makes the declaration less clear.
The second option - a trailing return type - violates DRY to some extent.
(Just to clarify again, I don't think that these are significant problems in the context of your question or M.M.'s answer.)
So, a third option would be to factor out the logic to a constexpr function:
#include <array>
constexpr int arr_size(int n) { return n * n + 3 * n + 1; }
Since it's constexpr, it can be used to instantiate the template:
template<int N>
std::array<int, arr_size(N)> f() {
    return std::array<int, arr_size(N)>();
}
Note that now the function has an explicit return type, but the logic of arr_size appears only once.
You could use this as usual:
int main() {
    auto a = f<10>();
    a[0] = 3;
}

Related

Lambda expression in C++17: trailing return type vs static_cast for type conversion

How do I properly convert int to long inside/outside of a lambda expression? And how do I properly check the arithmetic inside the lambda for overflow?
int n = 12; // input parameter from std::cin
int a = 23; // input parameter from std::cin
int i = 34; // input parameter from std::cin
auto f = [n, a] (int i) { return a * (n - (i - 1)); };
auto result = f(i);
What's the best way to check for overflow after multiplying integers inside/outside of a lambda?
auto result = f(i);
if ((result > std::numeric_limits<int>::max()) || (result < std::numeric_limits<int>::min())) {
    cout << "overflow was detected" << endl;
}
Do I need to add a TRT (trailing return type) -> long to the lambda for a proper conversion from int to long?
auto f = [n, a] (int i) -> long { return a * (n - (i - 1)); };
Do I need to add a static_cast to the lambda for a proper conversion from int to long?
auto f = [n, a] (int i) { return static_cast<long>(a) * (n - (i - 1)); };
Or maybe I need to combine the two?
auto f = [n, a] (int i) -> long { return static_cast<long>(a) * (n - (i - 1)); };
Or maybe I need to write out the type of the lambda?
std::function< long( int ) > f = [n, a] (int i) { return a * (n - (i - 1)); };
The first part of your question is a duplicate (the canonical question is about C, but C++ does not differ here, and there are C++ duplicates as well). The most important point: you cannot detect signed integer overflow after the multiplication, because at that point it has already caused undefined behaviour; the only way out is to do the calculation in the next larger data type.
Be aware, though, that int and long have the same size on many (but not all!) platforms, including Windows on modern PC hardware, so switching to long is not a guarantee against overflow! If you want to be safe, use the fixed-width types from the <cstdint> header (such as int16_t, int32_t, int64_t). If you have some specific reason to insist on int, you'd have to invest some extra work to get a guaranteed next-larger type.
This issue is not specific to lambdas, though; it concerns any multiplication (and even addition).
To guarantee a specific return type (again, this is not specific to lambdas, but applies to any automatic type deduction, be it a return type or an auto variable), both approaches are viable (an explicit trailing return type and a cast); however, as presented, the results differ!
Just applying the trailing return type is equivalent to having
[n, a] (int i) { return static_cast<long>(a * (n - (i - 1))); };
// note: the conversion wraps the whole expression, which is still computed in int
Note that the cast is placed around the entire calculation, i.e. the conversion happens after the calculation, which is still done in int. If the sizes of int and long differ, the result can differ from your own casting variant:
[n, a] (int i) { return static_cast<long>(a) * (n - (i - 1)); };
as this already forces the multiplication to be done in long (but not the subtractions! To force those as well, you'd need to cast i). This cast is entirely sufficient to get the desired return type; an explicit trailing return type (in addition) shows more clearly, though, what the lambda actually returns without having to peek into the implementation. This is especially useful if you have a more complex lambda, and it can help to guarantee a consistent return type if there are multiple exit points.
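To make the "calculate in the next larger type, then range-check" idea concrete, here is a minimal sketch (not from the original answer) using std::int64_t from <cstdint>; it assumes a platform where int is 32 bits, so that int64_t is strictly wider:
#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    int n = 12, a = 23, i = 34;   // stand-ins for the values read from std::cin

    // Do the whole calculation in the wider type, so this multiplication cannot overflow here.
    auto f = [n, a](int i) {
        return static_cast<std::int64_t>(a) * (static_cast<std::int64_t>(n) - (i - 1));
    };

    std::int64_t result = f(i);
    if (result > std::numeric_limits<int>::max() ||
        result < std::numeric_limits<int>::min()) {
        std::cout << "overflow was detected" << std::endl;
    } else {
        std::cout << static_cast<int>(result) << std::endl;   // safe to narrow back to int
    }
}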

Template metaprogramming: ternary value not hitting base case

I am writing a simple test program using TMP to calculate the Nth Fibonacci number. I have already found many ways to do this, but I'm just trying out a bunch of approaches to improve my understanding. The one I am having a problem with is this:
template<int A>
struct fib
{
    static const bool value = (A<2);
    static const int num = (value?A:(fib<A-1>::num + fib<A-2>::num));
};
The error message I am getting is:
error: template instantiation depth exceeds maximum of 900 (use -ftemplate-depth= to increase the maximum) instantiating 'fib<-1796>::value'|
I have tried substituting many values into the "false" field of the ternary, just to play with it and see what it does. I still do not understand why this does not work. Can anyone help me and tell me why? Thanks.
EDIT: My guess is that the compiler might be evaluating the T/F fields of the ternary before checking to see if the value is true or false, but I'm not sure since that's not how an if statement is supposed to work at all, and these are supposed to roughly emulate if statements
I must admit that I'm not that experienced concerning template programming. But in OP's case, a simple solution would be template specialization.
Sample code:
#include <iostream>

template<int A>
struct fib
{
    static const int num = fib<A-1>::num + fib<A-2>::num;
};

template<>
struct fib<1>
{
    static const int num = 1;
};

template<>
struct fib<2>
{
    static const int num = 1;
};

int main()
{
    fib<10> fib10;
    std::cout << "fib<10>: " << fib10.num << '\n';
    return 0;
}
Output:
fib<10>: 55
Live Demo on coliru
One way to write this in a more straightforward manner is to use if constexpr. Unlike with a regular if (and with the ternary operator), templates in the branch that is not taken are not instantiated.
template <int n>
struct fib {
    constexpr static int eval() {
        if constexpr (n < 2)
            return n;
        else
            return fib<n-1>::eval() + fib<n-2>::eval();
    }
};
Of course once you have if constexpr you don't really need templates to make a compile-time function of this type. A constexpr non-template function will do just fine. This is just an illustration of the technique.
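For illustration, that non-template constexpr function can be as simple as the sketch below; the ternary is fine here because, when a call is evaluated at compile time, only the branch actually taken is evaluated:
constexpr int fib(int n) {
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

static_assert(fib(10) == 55, "computed at compile time");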
The first comment on my original post is the correct answer; the author is (n.m.). Both arms of the ternary trigger template instantiation, even though only one arm's value is actually evaluated. To solve this, I will look into std::conditional or std::enable_if.
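As an illustration (a sketch, not part of the original post) of how the std::conditional route can work: push the recursion into helper templates that are only instantiated when their nested num member is actually accessed. This uses the C++14 std::conditional_t alias:
#include <type_traits>

template<int N> struct fib;   // forward declaration

// Only instantiated (and therefore only recursing) when ::num is accessed.
template<int N>
struct fib_recurse {
    static const int num = fib<N - 1>::num + fib<N - 2>::num;
};

template<int N>
struct fib_base {
    static const int num = N;
};

template<int N>
struct fib {
    // Naming both alternatives does not instantiate them; accessing ::num
    // instantiates only the alternative that std::conditional_t selected.
    static const int num =
        std::conditional_t<(N < 2), fib_base<N>, fib_recurse<N>>::num;
};

static_assert(fib<10>::num == 55, "only the taken branch is instantiated");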

Why isn't a for-loop a compile-time expression?

If I want to do something like iterate over a tuple, I have to resort to crazy template metaprogramming and template helper specializations. For example, the following program won't work:
#include <iostream>
#include <tuple>
#include <utility>
constexpr auto multiple_return_values()
{
    return std::make_tuple(3, 3.14, "pi");
}

template <typename T>
constexpr void foo(T t)
{
    for (auto i = 0u; i < std::tuple_size<T>::value; ++i)
    {
        std::get<i>(t);
    }
}

int main()
{
    constexpr auto ret = multiple_return_values();
    foo(ret);
}
Because i can't be const or we wouldn't be able to implement it. But for loops are a compile-time construct that can be evaluated statically. Compilers are free to remove it, transform it, fold it, unroll it or do whatever they want with it thanks to the as-if rule. But then why can't loops be used in a constexpr manner? There's nothing in this code that needs to be done at "runtime". Compiler optimizations are proof of that.
I know that you could potentially modify i inside the body of the loop, but the compiler can still be able to detect that. Example:
// ...snip...
template <typename T>
constexpr int foo(T t)
{
    /* Dead code */
    for (auto i = 0u; i < std::tuple_size<T>::value; ++i)
    {
    }
    return 42;
}

int main()
{
    constexpr auto ret = multiple_return_values();
    /* No error */
    std::array<int, foo(ret)> arr;
}
Since std::get<>() is a compile-time construct, unlike std::cout.operator<<, I can't see why it's disallowed.
πάντα ῥεῖ gave a good and useful answer; I would like to mention another issue with constexpr for, though.
In C++, at the most fundamental level, all expressions have a type which can be determined statically (at compile-time). There are things like RTTI and boost::any of course, but they are built on top of this framework, and the static type of an expression is an important concept for understanding some of the rules in the standard.
Suppose that you can iterate over a heterogeneous container using a fancy for syntax, like this maybe:
std::tuple<int, float, std::string> my_tuple;
for (const auto & x : my_tuple) {
    f(x);
}
Here, f is some overloaded function. Clearly, the intended meaning of this is to call different overloads of f for each of the types in the tuple. What this really means is that in the expression f(x), overload resolution has to run three different times. If we play by the current rules of C++, the only way this can make sense is if we basically unroll the loop into three different loop bodies, before we try to figure out what the types of the expressions are.
What if the code is actually
for (const auto & x : my_tuple) {
    auto y = f(x);
}
auto is not magic, it doesn't mean "no type info", it means, "deduce the type, please, compiler". But clearly, there really need to be three different types of y in general.
On the other hand, there are tricky issues with this kind of thing -- in C++ the parser needs to be able to know what names are types and what names are templates in order to correctly parse the language. Can the parser be modified to do some loop unrolling of constexpr for loops before all the types are resolved? I don't know but I think it might be nontrivial. Maybe there is a better way...
To avoid this issue, in current versions of C++, people use the visitor pattern. The idea is that you will have an overloaded function or function object and it will be applied to each element of the sequence. Then each overload has its own "body", so there's no ambiguity as to the types or meanings of the variables in them. There are libraries like boost::fusion or boost::hana that let you do iteration over heterogeneous sequences using a given visitor -- you would use their mechanism instead of a for-loop.
If you could do constexpr for with just ints, e.g.
for (constexpr int i = 0; i < 10; ++i) { ... }
this raises the same difficulty as the heterogeneous for loop. If you can use i as a template parameter inside the body, then you can make variables that refer to different types in different runs of the loop body, and then it's not clear what the static types of the expressions should be.
So, I'm not sure, but I think there may be some nontrivial technical issues associated with actually adding a constexpr for feature to the language. The visitor pattern / the planned reflection features may end up being less of a headache IMO... who knows.
Let me give another example I just thought of that shows the difficulty involved.
In normal C++, the compiler knows the static type of every variable on the stack, and so it can compute the layout of the stack frame for that function.
You can be sure that the address of a local variable won't change while the function is executing. For instance,
std::array<int, 3> a{{1,2,3}};
for (int i = 0; i < 3; ++i) {
    auto x = a[i];
    int y = 15;
    std::cout << &y << std::endl;
}
In this code, y is a local variable in the body of a for loop. It has a well-defined address throughout this function, and the address printed will be the same each time.
What should be the behavior of similar code with constexpr for?
std::tuple<int, long double, std::string> a{};
for (int i = 0; i < 3; ++i) {
    auto x = std::get<i>(a);
    int y = 15;
    std::cout << &y << std::endl;
}
The point is that the type of x is deduced differently in each pass through the loop -- since it has a different type, it may have different size and alignment on the stack. Since y comes after it on the stack, that means that y might change its address on different runs of the loop -- right?
What should be the behavior if a pointer to y is taken in one pass through the loop, and then dereferenced in a later pass? Should it be undefined behavior, even though it would probably be legal in the similar "no-constexpr for" code with std::array showed above?
Should the address of y not be allowed to change? Should the compiler have to pad the address of y so that the largest of the types in the tuple can be accommodated before y? Does that mean that the compiler can't simply unroll the loops and start generating code, but must unroll every instance of the loop beforehand, then collect all of the type information from each of the N instantiations and then find a satisfactory layout?
I think you are better off just using a pack expansion; it's a lot clearer how it is supposed to be implemented by the compiler, and how efficient it's going to be at compile time and run time.
Here's a way to do it that does not need too much boilerplate, inspired by http://stackoverflow.com/a/26902803/1495627:
#include <cstddef>
#include <utility>

template<std::size_t N>
struct num { static const constexpr auto value = N; };

template <class F, std::size_t... Is>
void for_(F func, std::index_sequence<Is...>)
{
    using expander = int[];
    (void)expander{0, ((void)func(num<Is>{}), 0)...};
}

template <std::size_t N, typename F>
void for_(F func)
{
    for_(func, std::make_index_sequence<N>());
}
Then you can do:
for_<N>([&] (auto i) {
    std::get<i.value>(t); // do stuff
});
If you have a C++17 compiler accessible, it can be simplified to
template <class F, std::size_t... Is>
void for_(F func, std::index_sequence<Is...>)
{
    (func(num<Is>{}), ...);
}
In C++20 most of the functions from <algorithm> will be constexpr. For example, using std::transform, many operations requiring a loop can be done at compile time. Consider this example calculating the factorial of every number in an array at compile time (adapted from the Boost.Hana documentation):
#include <array>
#include <algorithm>
#include <type_traits>

constexpr int factorial(int n) {
    return n == 0 ? 1 : n * factorial(n - 1);
}

template <typename T, std::size_t N, typename F>
constexpr std::array<std::result_of_t<F(T)>, N>
transform_array(std::array<T, N> array, F f) {
    auto array_f = std::array<std::result_of_t<F(T)>, N>{};
    // This is a constexpr "loop":
    std::transform(array.begin(), array.end(), array_f.begin(), [&f](auto el){ return f(el); });
    return array_f;
}
int main() {
    constexpr std::array<int, 4> ints{{1, 2, 3, 4}};
    // This can be done at compile time!
    constexpr std::array<int, 4> facts = transform_array(ints, factorial);
    static_assert(facts == std::array<int, 4>{{1, 2, 6, 24}}, "");
}
See how the array facts can be computed at compile time using a "loop", i.e. a standard algorithm. At the time of writing this, you need an experimental version of the newest clang or gcc release, which you can try out on godbolt.org. But soon C++20 will be fully implemented by all the major compilers in their release versions.
The proposal "Expansion Statements" is interesting and worth reading for further explanation.
The proposal introduces the syntactic sugar for..., similar in spirit to the sizeof... operator. A for... loop statement is a compile-time construct, which means it does no work at runtime.
For example:
std::tuple<int, float, char> Tup1 {5, 3.14, 'K'};
for... (auto elem : Tup1) {
    std::cout << elem << " ";
}
The compiler will generate the code at the compile-time and this is the equivalence:
std::tuple<int, float, char> Tup1 {5, 3.14, 'K'};
{
    auto elem = std::get<0>(Tup1);
    std::cout << elem << " ";
}
{
    auto elem = std::get<1>(Tup1);
    std::cout << elem << " ";
}
{
    auto elem = std::get<2>(Tup1);
    std::cout << elem << " ";
}
Thus, as the document says, the expansion statement is not a loop but a repetition of the loop body.
Since this proposal isn't in the current version of C++ or in a technical specification yet (if it gets accepted), we can use the alternative from the Boost library: specifically hana::for_each from <boost/hana/for_each.hpp>, together with Boost's tuple from <boost/hana/tuple.hpp>.
#include <boost/hana/for_each.hpp>
#include <boost/hana/tuple.hpp>
using namespace boost;
...
hana::tuple<int, std::string, float> Tup1 {5, "one", 5.55};
hana::for_each(Tup1, [](auto&& x){
    std::cout << x << " ";
});
// Which will print:
// 5 "one" 5.55
The first argument of boost::hana::for_each must be a foldable container.
Why isn't a for-loop a compile-time expression?
Because a for() loop is used to define runtime control flow in the C++ language.
Generally variadic templates cannot be unpacked within runtime control flow statements in c++.
std::get<i>(t);
cannot be deduced at compile time, since i is a runtime variable.
Use variadic template parameter unpacking instead.
You might also find this post useful (if it doesn't even make your question a duplicate, it has answers to it):
iterate over tuple
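As one concrete illustration of such unpacking (a C++17 sketch, not from the linked post): std::apply expands the tuple elements into a parameter pack, and a fold expression plays the role of the loop body, once per element.
#include <iostream>
#include <tuple>

int main() {
    auto t = std::make_tuple(3, 3.14, "pi");
    std::apply([](const auto&... elem) {
        ((std::cout << elem << ' '), ...);   // fold over the comma operator
    }, t);
}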
Here are two examples attempting to replicate a compile-time for loop (which isn't part of the language at this time), using fold expressions and std::integer_sequence. The first example shows a simple assignment in the loop, and the second example shows tuple indexing and uses a lambda with template parameters available in C++20.
For a function with a template parameter, e.g.
template <int n>
constexpr int factorial() {
    if constexpr (n == 0) { return 1; }
    else { return n * factorial<n - 1>(); }
}
Where we want to loop over the template parameter, like this:
template <int N>
constexpr auto example() {
    std::array<int, N> vals{};
    for (int i = 0; i < N; ++i) {
        vals[i] = factorial<i>(); // this doesn't work
    }
    return vals;
}
One can do this:
template <int... Is>
constexpr auto get_array(std::integer_sequence<int, Is...> a) -> std::array<int, a.size()> {
    std::array<int, a.size()> vals{};
    ((vals[Is] = factorial<Is>()), ...);
    return vals;
}
And then get the result at compile time:
constexpr auto x = get_array(std::make_integer_sequence<int, 5>{});
// x = {1, 1, 2, 6, 24}
Similarly, for a tuple:
constexpr auto multiple_return_values()
{
    return std::make_tuple(3, 3.14, "pi");
}
int main(void) {
    static constexpr auto ret = multiple_return_values();
    constexpr auto for_constexpr = [&]<int... Is>(std::integer_sequence<int, Is...> a) {
        ((std::get<Is>(ret)), ...); // std::get<i>(t); from the question
        return 0;
    };
    // use it:
    constexpr auto w = for_constexpr(std::make_integer_sequence<int, std::tuple_size_v<decltype(ret)>>{});
}

C++ Auto Keyword - Float vs Int Trouble

I'm relatively new to C++. I just read about the auto keyword in regards to type deduction. I've tried implementing this in a couple of functions, only to find that it was causing all kinds of issues when working with math operators. I believe what was happening was that my functions started performing integer division when I actually needed float division (variables 'i' and 'avg'). I posted the code using the auto keyword below.
Now when I explicitly declared the variables as floats, the function worked fine.
So is this an example in which using auto would not be preferred? I can definitely see, though, that it helps when declaring the iterators.
namespace Probability
{
    /* ExpectedValueDataSet - Calculates the expected value of a data set */
    template <typename T, std::size_t N>
    double ExpectedValueDataSet(const std::array<T, N>& data)
    {
        auto i = 0;
        auto avg = 0;
        for(auto it = data.begin(); it != data.end(); it++)
        {
            i = it - data.begin() + 1;
            avg = ((i-1)/i)*avg + (*it)/i;
        }
        std::cout << avg << " \n";
        return avg;
    }
};
The literal 0 is of type int.
A variable auto avg = 0; therefore has type int.
The literal 0.0 (or e.g. 3.14) has type double, which is what you want.
As a general rule, use auto for a variable declaration where the type is explicitly specified in the initializer, or where the type is awfully verbose, like some iterator type. But don't use it without reason. :)
If for e.g. aesthetic reasons you want to keep i as an integer, then rewrite the computation
((i-1)/i)*avg + (*it)/i
to e.g.
((i-1)*avg + *it)/i
to avoid pure integer arithmetic for (i-1)/i.
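Putting this together, a sketch (untested) of the question's function that keeps auto for the iterator but uses double for the accumulators:
#include <array>
#include <cstddef>
#include <iostream>

template <typename T, std::size_t N>
double ExpectedValueDataSet(const std::array<T, N>& data)
{
    double i = 0.0;     // explicit double, so (i-1)/i below is floating-point division
    double avg = 0.0;
    for (auto it = data.begin(); it != data.end(); it++)   // auto is still fine for the iterator
    {
        i = it - data.begin() + 1;
        avg = ((i - 1) / i) * avg + (*it) / i;
    }
    return avg;
}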

Initialization of all elements of an array to one default value in C++?

C++ Notes: Array Initialization has a nice overview of array initialization. I have a
int array[100] = {-1};
expecting it to be full of -1's, but it's not; only the first value is, and the rest are 0's mixed with random values.
The code
int array[100] = {0};
works just fine and sets each element to 0.
What am I missing here? Can't one initialize the array if the value isn't zero?
And question 2: Is the default initialization (as above) faster than the usual loop through the whole array assigning a value, or does it do the same thing?
Using the syntax that you used,
int array[100] = {-1};
says "set the first element to -1 and the rest to 0" since all omitted elements are set to 0.
In C++, to set them all to -1, you can use something like std::fill_n (from <algorithm>):
std::fill_n(array, 100, -1);
In portable C, you have to roll your own loop. There are compiler extensions, or you can depend on implementation-defined behavior as a shortcut if that's acceptable.
There is an extension to the gcc compiler which allows the syntax:
int array[100] = { [0 ... 99] = -1 };
This would set all of the elements to -1.
This is known as "Designated Initializers" see here for further information.
Note this isn't implemented for the GCC C++ compiler.
The page you linked to already gave the answer to the first part:
If an explicit array size is specified, but a shorter initialization list is specified, the unspecified elements are set to zero.
There is no built-in way to initialize the entire array to some non-zero value.
As for which is faster, the usual rule applies: "The method that gives the compiler the most freedom is probably faster".
int array[100] = {0};
simply tells the compiler "set these 100 ints to zero", which the compiler can optimize freely.
for (int i = 0; i < 100; ++i){
    array[i] = 0;
}
is a lot more specific. It tells the compiler to create an iteration variable i, it tells it the order in which the elements should be initialized, and so on. Of course, the compiler is likely to optimize that away, but the point is that here you are overspecifying the problem, forcing the compiler to work harder to get to the same result.
Finally, if you want to set the array to a non-zero value, you should (in C++, at least) use std::fill:
std::fill(array, array+100, 42); // sets every value in the array to 42
Again, you could do the same with a loop, but this is more concise, and gives the compiler more freedom. You're just saying that you want the entire array filled with the value 42. You don't say anything about the order in which it should be done, or anything else.
C++11 has another (imperfect) option:
std::array<int, 100> a;
a.fill(-1);
With {} you initialize the elements you list explicitly; the rest are initialized with 0.
If there is no = {} initializer at all, the contents are undefined.
Using std::array, we can do this in a fairly straightforward way in C++14. It is possible to do with only C++11, but it is slightly more complicated.
Our interface is a compile-time size and a default value.
template<typename T>
constexpr auto make_array_n(std::integral_constant<std::size_t, 0>, T &&) {
    return std::array<std::decay_t<T>, 0>{};
}

template<std::size_t size, typename T>
constexpr auto make_array_n(std::integral_constant<std::size_t, size>, T && value) {
    return detail::make_array_n_impl<size>(std::forward<T>(value), std::make_index_sequence<size - 1>{});
}

template<std::size_t size, typename T>
constexpr auto make_array_n(T && value) {
    return make_array_n(std::integral_constant<std::size_t, size>{}, std::forward<T>(value));
}
The third function is mainly for convenience, so the user does not have to construct a std::integral_constant<std::size_t, size> themselves, as that is a pretty wordy construction. The real work is done by one of the first two functions.
The first overload is pretty straightforward: It constructs a std::array of size 0. There is no copying necessary, we just construct it.
The second overload is a little trickier. It forwards along the value it got as the source, and it also constructs an instance of make_index_sequence and just calls some other implementation function. What does that function look like?
namespace detail {

template<std::size_t size, typename T, std::size_t... indexes>
constexpr auto make_array_n_impl(T && value, std::index_sequence<indexes...>) {
    // Use the comma operator to expand the variadic pack
    // Move the last element in if possible. Order of evaluation is well-defined
    // for aggregate initialization, so there is no risk of copy-after-move
    return std::array<std::decay_t<T>, size>{ (static_cast<void>(indexes), value)..., std::forward<T>(value) };
}

} // namespace detail
This constructs the first size - 1 arguments by copying the value we passed in. Here, we use our variadic parameter pack indexes just as something to expand. There are size - 1 entries in that pack (as we specified in the construction of make_index_sequence), and they have values of 0, 1, 2, 3, ..., size - 2. However, we do not care about the values (so we cast it to void, to silence any compiler warnings). Parameter pack expansion expands out our code to something like this (assuming size == 4):
return std::array<std::decay_t<T>, 4>{ (static_cast<void>(0), value), (static_cast<void>(1), value), (static_cast<void>(2), value), std::forward<T>(value) };
We use those parentheses to ensure that the variadic pack expansion ... expands what we want, and also to ensure we are using the comma operator. Without the parentheses, it would look like we are passing a bunch of arguments to our array initialization, but really, we are evaluating the index, casting it to void, ignoring that void result, and then returning value, which is copied into the array.
The final argument, the one we call std::forward on, is a minor optimization. If someone passes in a temporary std::string and says "make an array of 5 of these", we would like to have 4 copies and 1 move, instead of 5 copies. The std::forward ensures that we do this.
The full code, including headers and some unit tests:
#include <array>
#include <type_traits>
#include <utility>

namespace detail {

template<std::size_t size, typename T, std::size_t... indexes>
constexpr auto make_array_n_impl(T && value, std::index_sequence<indexes...>) {
    // Use the comma operator to expand the variadic pack
    // Move the last element in if possible. Order of evaluation is well-defined
    // for aggregate initialization, so there is no risk of copy-after-move
    return std::array<std::decay_t<T>, size>{ (static_cast<void>(indexes), value)..., std::forward<T>(value) };
}

} // namespace detail

template<typename T>
constexpr auto make_array_n(std::integral_constant<std::size_t, 0>, T &&) {
    return std::array<std::decay_t<T>, 0>{};
}

template<std::size_t size, typename T>
constexpr auto make_array_n(std::integral_constant<std::size_t, size>, T && value) {
    return detail::make_array_n_impl<size>(std::forward<T>(value), std::make_index_sequence<size - 1>{});
}

template<std::size_t size, typename T>
constexpr auto make_array_n(T && value) {
    return make_array_n(std::integral_constant<std::size_t, size>{}, std::forward<T>(value));
}

struct non_copyable {
    constexpr non_copyable() = default;
    constexpr non_copyable(non_copyable const &) = delete;
    constexpr non_copyable(non_copyable &&) = default;
};

int main() {
    constexpr auto array_n = make_array_n<6>(5);
    static_assert(std::is_same<std::decay_t<decltype(array_n)>::value_type, int>::value, "Incorrect type from make_array_n.");
    static_assert(array_n.size() == 6, "Incorrect size from make_array_n.");
    static_assert(array_n[3] == 5, "Incorrect values from make_array_n.");
    constexpr auto array_non_copyable = make_array_n<1>(non_copyable{});
    static_assert(array_non_copyable.size() == 1, "Incorrect array size of 1 for move-only types.");
    constexpr auto array_empty = make_array_n<0>(2);
    static_assert(array_empty.empty(), "Incorrect array size for empty array.");
    constexpr auto array_non_copyable_empty = make_array_n<0>(non_copyable{});
    static_assert(array_non_copyable_empty.empty(), "Incorrect array size for empty array of move-only.");
}
The page you linked states
If an explicit array size is specified, but a shorter initialization list is specified, the unspecified elements are set to zero.
Speed issue: Any differences would be negligible for arrays this small. If you work with large arrays and speed is much more important than size, you can have a const array of the default values (initialized at compile time) and then memcpy them to the modifiable array.
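As a sketch of that idea (illustrative only; the size and names here are made up): keep the defaults in a const array and copy them over the working array in one call.
#include <cstring>   // std::memcpy

// The default contents, written out once (for a large array you would generate this table).
static const int defaults[8] = { -1, -1, -1, -1, -1, -1, -1, -1 };

void reset_to_defaults(int (&array)[8])
{
    // One bulk copy instead of a per-element assignment loop.
    std::memcpy(array, defaults, sizeof(array));
}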
Another way of initializing the array to a common value would be to actually generate the list of elements in a series of defines:
#define DUP1( X ) ( X )
#define DUP2( X ) DUP1( X ), ( X )
#define DUP3( X ) DUP2( X ), ( X )
#define DUP4( X ) DUP3( X ), ( X )
#define DUP5( X ) DUP4( X ), ( X )
/* ... continue the pattern ... */
#define DUP100( X ) DUP99( X ), ( X )
#define DUPx( X, N ) DUP##N( X )
#define DUP( X, N ) DUPx( X, N )
Initializing an array to a common value can easily be done:
#define LIST_MAX 6
static unsigned char List[ LIST_MAX ]= { DUP( 123, LIST_MAX ) };
Note: DUPx is introduced to enable macro substitution in the parameters of DUP.
For the case of an array of single-byte elements, you can use memset to set all elements to the same value.
There's an example here.
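Since the linked example is not reproduced here, a minimal sketch of the memset approach (single-byte elements only):
#include <cstring>   // std::memset

int main()
{
    unsigned char buffer[100];
    std::memset(buffer, 0xFF, sizeof(buffer));   // every element becomes 0xFF
}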
The simplest way is to use std::array and write a function template that will return the required std::array with all of its elements initialized with the passed argument, as shown below.
C++11 Version
#include <array>
#include <iostream>

template<std::size_t N> std::array<int, N> make_array(int val)
{
    std::array<int, N> tempArray{};
    for(int &elem:tempArray)
    {
        elem = val;
    }
    return tempArray;
}

int main()
{
    //---------------------V-------->number of elements
    auto arr = make_array<8>(5);
    //------------------------^---->value of element to be initialized with
    //lets confirm if all objects have the expected value
    for(const auto &elem: arr)
    {
        std::cout << elem << std::endl; //prints all 5
    }
}
Working demo
C++17 Version
With C++17 you can add constexpr to the function template so that it can be used in constexpr context:
//-----------------------------------------vvvvvvvvv--->added constexpr
template<std::size_t N> std::array<int, N> constexpr make_array(int val)
{
    std::array<int, N> tempArray{};
    for(int &elem:tempArray)
    {
        elem = val;
    }
    return tempArray;
}

int main()
{
    //--vvvvvvvvv------------------------------>constexpr added
    constexpr auto arr = make_array<8>(5);
    for(const auto &elem: arr)
    {
        std::cout << elem << std::endl;
    }
}
Working demo
1) When you use an initializer, for a struct or an array like that, the unspecified values are essentially default constructed. In the case of a primitive type like ints, that means they will be zeroed. Note that this applies recursively: you could have an array of structs containing arrays and if you specify just the first field of the first struct, then all the rest will be initialized with zeros and default constructors.
2) The compiler will probably generate initializer code that is at least as good as you could do by hand. I tend to prefer to let the compiler do the initialization for me, when possible.
In The C++ Programming Language, 4th edition, Stroustrup recommends using vectors or valarrays over built-in arrays. With valarrays, when you create them, you can initialize them to a specific value like:
valarray<int> seven7s(7777777, 7);
This initializes an array 7 members long with "7777777".
This is a C++ way of implementing the answer using a C++ data structure instead of a "plain old C" array.
I switched to using the valarray as an attempt in my code to try to use C++'isms v. C'isms....