How can I initialize a div_t object? (C++)

So the order of the members of the structure returned from the div functions seems to be implementation-defined.
Is quot the first member, or is rem?
Let's say that I'm doing something like this:
generate(begin(digits), end(digits), [i = div_t{ quot, 0 }]() mutable {
    i = div(i.quot, 10);
    return i.rem;
})
Of course the problem here is that I don't know whether I initialized i.quot or i.rem in my lambda capture. Is initializing i with div(quot, 1) the only cross-platform way to do this?

You're right that the order of the members is unspecified. The definition is inherited from C, which explicitly states it is (emphasis mine):
7.20.6.2 The div, ldiv, and lldiv functions
3 [...] The structures shall contain (in either order) the members quot (the quotient) and rem (the remainder), each of which has the same type as the arguments numer and denom. [...]
In C, the fact that the order is unspecified doesn't matter, and an example is included specifically regarding div_t:
6.7.8 Initialization
34 EXAMPLE 10 Structure members can be initialized to nonzero values without depending on their order:
div_t answer = { .quot = 2, .rem = -1 };
Unfortunately, C++ never adopted this syntax.
I'd probably go for simple assignment in a helper function:
div_t make_div_t(int quot, int rem) {
    div_t result;
    result.quot = quot;
    result.rem = rem;
    return result;
}
For plain int values, whether you use initialisation or assignment doesn't really matter, they have the same effect.
Your division by 1 is a valid option as well.

To quote the C11 Standard Draft N1570 §7.22.6.2
The div, ldiv, and lldiv functions return a structure of type div_t, ldiv_t, and lldiv_t, respectively, comprising both the quotient and the remainder. The structures shall contain (in either order) the members quot (the quotient) and rem (the remainder), each of which has the same type as the arguments numer and denom.
So in this case div_t is a plain POD struct, consisting of two ints.
You can initialize it like any plain struct; your way is something I would have done too, and it's portable.
Otherwise I can't find any special mechanism to initialize them, in either the C or the C++ standard. But for PODs (Plain Old Data types), there isn't any need for one.

EDIT:
I think the VS workaround could look like this:
#include <cstdlib>
#include <type_traits>
template<class T>
struct DTMaker {
    using D = decltype(div(T{}, T{}));
    static constexpr D dt = D{0, 1};
    static constexpr auto quot = dt.quot;
};

template <class T, typename std::enable_if<DTMaker<T>::quot == 0>::type* = nullptr>
typename DTMaker<T>::D make_div(const T &quot, const T &rem) { return {quot, rem}; }

template <class T, typename std::enable_if<DTMaker<T>::quot == 1>::type* = nullptr>
typename DTMaker<T>::D make_div(const T &quot, const T &rem) { return {rem, quot}; }

int main() {
    div_t d_t = make_div(1, 2);
}
[live demo]
OLD ANSWER:
If you are using C++17 you could also try to use structured bindings, a constexpr function, and SFINAE overloading to detect which field is declared first in the structure:
#include <cstdlib>
#include <algorithm>
#include <iterator>
constexpr bool first_quot() {
    auto [x, y] = std::div_t{1, 0};
    (void)y;
    return x;
}

template <bool B = first_quot()>
std::enable_if_t<B, std::div_t> foo() {
    int quot = 1;
    int rem = 0;
    return {quot, rem};
}

template <bool B = first_quot()>
std::enable_if_t<!B, std::div_t> foo() {
    int quot = 1;
    int rem = 0;
    return {rem, quot};
}

int main() {
    foo();
}
[live demo]
Or even simpler use if constexpr:
#include <cstdlib>
#include <algorithm>
#include <iterator>
constexpr bool first_quot() {
    auto [x, y] = std::div_t{1, 0};
    (void)y;
    return x;
}

std::div_t foo() {
    int quot = 1;
    int rem = 0;
    if constexpr (first_quot())
        return {quot, rem};
    else
        return {rem, quot};
}

int main() {
    foo();
}
[live demo]

Try something like this :)
int quot = 10;
auto l = [i = [=] { div_t tmp{}; tmp.quot = quot; return tmp; }()]() mutable
{
    i = div(i.quot, 10);
    return i.rem;
};
It looks like using a compound literal in C. :)
Or you can simplify the task by defining the variable i outside the lambda expression and using it in the lambda by reference.
For example
int quot = 10;
div_t i = {};
i.quot = quot;
auto l = [&i]()
{
    i = div(i.quot, 10);
    return i.rem;
};

You can use a ternary to initialize this:
generate(rbegin(digits), rend(digits), [i = div_t{ 1, 0 }.quot ? div_t{ quot, 0 } : div_t{ 0, quot }]() mutable {
    i = div(i.quot, 10);
    return i.rem;
});
gcc 6.3, for example, will compile identical code with the ternary and without it.
clang 3.9, on the other hand, compiles longer code with the ternary than without it.
So whether the ternary is optimized out will vary between compilers. But in all cases it gives you implementation-independent code that doesn't require a secondary function to be written.
Incidentally if you are into creating a helper function to create a div_t (or any of the other div returns) you can do it like this:
template <typename T>
enable_if_t<decltype(div(declval<T>(), declval<T>())){ 1, 0 }.quot != 0, decltype(div(declval<T>(), declval<T>()))>
make_div(const T quot, const T rem) { return { quot, rem }; }

template <typename T>
enable_if_t<decltype(div(declval<T>(), declval<T>())){ 1, 0 }.quot == 0, decltype(div(declval<T>(), declval<T>()))>
make_div(const T quot, const T rem) { return { rem, quot }; }
Note this does work on gcc but fails to compile on Visual Studio because of some non-conformity.

My solution uses a constexpr function that itself wraps and executes a lambda function that determines and initializes the correct div_t depending on the template parameters.
template <typename T>
constexpr auto make_div(const T quot, const T rem)
{
    return [&]() {
        decltype(std::div(quot, rem)) result;
        result.quot = quot;
        result.rem = rem;
        return result;
    }();
}
This works with MSVC15, gcc 6.3 and clang 3.9.1.
http://rextester.com/AOBCH32388
The lambda allows us to initialize a value step by step within a constexpr function. So we can set quot and rem correctly and independently from the order they appear in the datatype itself.
By wrapping it into a constexpr function we allow the compiler to completely optimize the call to make_div:
clang: https://godbolt.org/g/YdZGkX
gcc: https://godbolt.org/g/sA61LK

Related

std::variant constructed from uint32_t prefers to hold int32_t rather than std::optional<uint32_t> using GCC 8.2.0

I've got the following code:
#include <variant>
#include <optional>
#include <cstdint>
#include <iostream>
#include <type_traits>

using DataType_t = std::variant<
    int32_t,
    std::optional<uint32_t>
>;

constexpr uint32_t DUMMY_DATA = 0;

struct Event
{
    explicit Event(DataType_t data)
        : data_(data)
    {}

    template <class DataType>
    std::optional<DataType> getData() const
    {
        if (auto ptr_data = std::get_if<DataType>(&data_))
        {
            return *ptr_data;
        }
        return std::nullopt;
    }

    DataType_t data_;
};

int main() {
    auto event = Event(DUMMY_DATA);
    auto eventData = event.getData<int32_t>();
    if (!eventData) {
        std::cout << "missing\n";
        return 1;
    }
    return 0;
}
The code is pretty simple and straightforward but I've encountered a weird behavior.
When I compile it using gcc 8.2, the return code is 0 and there is no 'missing' message on the output console, which indicates that the variant was constructed using int32_t.
On the other hand, when I compile it using gcc 10.2 it behaves the opposite way.
I'm trying to figure out what has changed in standard which would explain this behavior.
Here is also a compiler explorer link: click
Here's a reduced version:
constexpr int f() {
    return std::variant<int32_t, std::optional<uint32_t>>(0U).index();
}
For gcc 8.3, f() == 0 but for gcc 10.2, f() == 1. The reasoning here is ultimately that variant initialization is... complicated.
Originally, when C++17 shipped, the way that initializing a variant<T, U> from an expression E worked was basically by way of overload resolution to determine the index. Something like this:
constexpr int __index(T) { return 0; }
constexpr int __index(U) { return 1; }
constexpr int which_index = __index(E);
In this particular example, T=int32_t and U=optional<uint32_t>, and E is an expression of type uint32_t. This overload resolution would give us 0: the conversion from uint32_t to int32_t is better than the conversion from uint32_t to optional<uint32_t> (former is standard, latter is user-defined). You can verify this:
constexpr int __index(int32_t) { return 0; }
constexpr int __index(std::optional<uint32_t>) { return 1; }
static_assert(__index(0U) == 0);
But this rule has some surprising results. This was summarized in P0608, which included this example:
variant<string, bool> x = "abc"; // holds bool
This is because the conversion to bool is still a standard conversion, while the conversion to string was user-defined. Which is... very unlikely to be what the user intended.
So the new rule ended up being (since modified further by way of P1957) that before we do the round of overload resolution to determine the index, we first prune the list of types to those for which the conversion isn't narrowing. That is, those types Ti from the pack for which:
Ti x[] = {E};
is a valid declaration. That is no longer the case for bool x[] = {"abc"};, which is why the variant<string, bool> example now holds a string as desired.
But for the original example here, int32_t x[] = {u}; (for u a uint32_t) is not a valid declaration - this is a narrowing conversion (it would work for 0U directly, but we lose the constantness by the time we get to this check).
Once this defect report was applied, we now have this overload set:
// constexpr int __index(int32_t) { return 0; } // removed from consideration
constexpr int __index(std::optional<uint32_t>) { return 1; }
static_assert(__index(0U) == 1);
Which is why your variant now holds an optional<uint32_t> rather than an int32_t.
The code is pretty simple and straightforward
I hope by now you recognize that attempting to initialize a variant<T, U> from a type that is neither T nor U isn't exactly simple or straightforward.

Bidirectional static value mapping in c++17

I want to efficiently bidirectionally map some values of different types in C++17 (1:1 mapping of only very few values). Consider for example mapping enum values and integers, though the problem is applicable to other types as well. Currently, I'm doing it like this:
#include <optional>
enum class ExampleEnum { A, B, C, D, E };
class MyMapping {
public:
    std::optional<int> enumToInt(ExampleEnum v) {
        switch (v) {
        case ExampleEnum::A:
            return 1;
        case ExampleEnum::B:
            return 5;
        case ExampleEnum::D:
            return 42;
        }
        return std::nullopt;
    }
    std::optional<ExampleEnum> intToEnum(int v) {
        switch (v) {
        case 1:
            return ExampleEnum::A;
        case 5:
            return ExampleEnum::B;
        case 42:
            return ExampleEnum::D;
        }
        return std::nullopt;
    }
};
This has the obvious disadvantage of having to write everything twice, and forgetting to update one of the functions will lead to inconsistencies. Is there a better method?
I need:
Consistency. It shouldn't be possible to have different semantics in mapping and reverse mapping.
Compile-time definition. The values which are mapped are known in advance, and will not change at runtime.
Runtime lookup. Which values will be looked up is not known at compile-time, and may even not contain a mapping at all (returning an empty optional instead).
I would like to have:
No additional memory allocations
Basically the same performance as the double-switch-method
An implementation which makes the mapping definition easily extendable (i.e. adding more values in the future or applying it to other types)
I've given a shot at a very naive and simple implementation. https://godbolt.org/z/MtcHw8
#include <cstdlib>   // rand
#include <optional>

enum class ExampleEnum { A, B, C, D, E };

template<typename Enum, int N>
struct Mapping
{
    Enum keys[N];
    int values[N];

    constexpr std::optional<Enum> at(int x) const noexcept
    {
        for (int i = 0; i < N; i++)
            if (values[i] == x) return keys[i];
        return std::nullopt;
    }

    constexpr std::optional<int> at(Enum x) const noexcept
    {
        for (int i = 0; i < N; i++)
            if (keys[i] == x) return values[i];
        return std::nullopt;
    }
};

constexpr Mapping<ExampleEnum, 3> mapping{{ExampleEnum::A, ExampleEnum::B, ExampleEnum::D},
                                          {111, 222, 333}};

int main()
{
    int x = rand(); // Force runtime implementation
    auto optEnum = mapping.at(x);
    if (optEnum.has_value())
        return *mapping.at(ExampleEnum::B); // Returns 222, (asm line 3) constexpr works

    auto y = (ExampleEnum)rand(); // Force runtime implementation
    auto optInt = mapping.at(y);
    if (optInt.has_value())
        return (int)*mapping.at(333); // Returns 3, constexpr works

    return 0;
}
It utilizes loop unrolling to achieve switch-method performance for int -> ExampleEnum mappings.
The assembly for the ExampleEnum -> int mapping is quite obscure, as the optimizer exploits the fact that the enum values are sequential and prefers a jump table over an if-else implementation.
Anyway, the interface requires no duplication: just create a constexpr object with the two arrays fed into the constructor. You can have multiple mappings for the same types. Also, the enum type is templated.
It can also be easily extended to support two enum classes instead of only enum-int.
I've also created a snippet with raw switch implementations for assembly comparison:
https://godbolt.org/z/CbEcnZ
PS. I believe the syntax constexpr Mapping<ExampleEnum, 3> mapping could be simplified with a proper template deduction guide, but I have not found out how to do it.
PPS. I went with N up to 15, and loop unrolling is still on: https://godbolt.org/z/-Cpmgm
It is better to avoid such code; it tends to violate one of the fundamental principles of software development, the Open-Closed Principle.
You can improve MyMapping by making it general. Let a higher level class/function define the mappings.
class MyMapping {
public:
    void registerItem(ExampleEnum eValue, int intValue)
    {
        enumToIntMap[eValue] = intValue;
        intToEnumMap[intValue] = eValue;
    }

    std::optional<int> enumToInt(ExampleEnum v) {
        auto iter = enumToIntMap.find(v);
        if (iter != enumToIntMap.end())
        {
            return iter->second;
        }
        else
        {
            return std::nullopt;
        }
    }

    std::optional<ExampleEnum> intToEnum(int v) {
        auto iter = intToEnumMap.find(v);
        if (iter != intToEnumMap.end())
        {
            return iter->second;
        }
        else
        {
            return std::nullopt;
        }
    }

    std::map<ExampleEnum, int> enumToIntMap;
    std::map<int, ExampleEnum> intToEnumMap;
};
A higher level function can be:
void initMyMapping(MyMapping& mapping)
{
    mapping.registerItem(ExampleEnum::A, 1);
    mapping.registerItem(ExampleEnum::B, 2);
    mapping.registerItem(ExampleEnum::D, 42);
}
I understand that this still violates the open-closed principle but to a lesser degree. If you want to add mapping data for C and E, you'll have to add code for that. However, you can do that without changing MyMapping. You also have the option of doing that in a second function, and not change initMyMapping.
void initMyMapping_extend(MyMapping& mapping)
{
    mapping.registerItem(ExampleEnum::C, 22);
    mapping.registerItem(ExampleEnum::E, 38);
}

C++ std::plus as template parameter

I want to write a class for a binary indexed array,
which uses two non-type template parameters with defaults, op and identity.
And I need to check the constraint that op(identity, identity) == identity.
My problems are:
I don't know how to specify op; my current attempt does not compile:
‘class std::function<T(T, T)>’ is not a valid type for a template non-type parameter
I don't know how to check op(identity, identity) == identity; currently I cannot verify it, since I failed at step 1 - maybe static_assert?
So currently I use the workaround below, but then I cannot specify op as, e.g., std::multiplies<int>.
Can anyone tell me how to achieve the goal?
#include <vector>
#include <functional>

// template <typename T = int, std::function<T(T,T)> op = std::plus<T>(), const T identity = T()>
template <typename T = int, const T identity = T()> // current workaround
class BIT { // binary indexed array
    const std::function<T(T,T)> op = std::plus<T>(); // current workaround
public:
    BIT(std::vector<T> value) : value(value), prefixSum(value.size() + 1, identity) {
        for (size_t i = 1; i < prefixSum.size(); ++i) {
            incrementNodeByValue(i, value[i-1]);
        }
        // print(prefixSum, "prefixSum");
    }
    T getSum(size_t i) {
        auto sum = identity;
        while (i) {
            sum = op(sum, prefixSum[i]);
            i = firstSmallerAncestor(i);
        }
        return sum;
    }
    void incrementNodeByValue(size_t i, T x) {
        while (i < prefixSum.size()) {
            prefixSum[i] = op(prefixSum[i], x);
            i = firstLargerAncestor(i);
        }
    }
private:
    inline size_t firstLargerAncestor(size_t node) { return node + (node & -node); }
    inline size_t firstSmallerAncestor(size_t node) { return node & (node - 1); }
    std::vector<T> value;
    std::vector<T> prefixSum;
};

int main() {
    auto vec = std::vector<int> {5,1,15,11,52,28,0};
    auto bit = BIT<>(vec);
}
The use of std::function here is a waste and seems to be the source of your confusion.
Note that templates may only be parameterized on typenames and values of certain kinds (integral types, enumerations, pointers, and the like; C++20 widens this to other structural types). Here you're attempting to parameterize on a value of a std::function instantiation, which isn't such a type. That said, you don't actually need to parameterize on a value in this case.
Because your constructor doesn't accept an argument to initialize the op member variable nor is it accessible via the interface, I gather it's safe to assume the operator is known at compile-time, is guaranteed immutable, and default constructible.
As such, I declared the op member to be of a template parameter type called operation.
#include <functional>
#include <vector>

template< typename T = int,
          typename operation = std::plus<T>,
          const T identity = T() >
class BIT {
    const operation op = operation();
    static_assert( operation()(identity, identity) == identity );

    std::vector<T> value;
    std::vector<T> prefixSum;

    inline size_t firstLargerAncestor(size_t node) { return node + (node & -node); }
    inline size_t firstSmallerAncestor(size_t node) { return node & (node - 1); }

public:
    BIT(std::vector<T> value) :
        value(value),
        prefixSum(value.size() + 1, identity) {
        for (size_t i = 1; i < prefixSum.size(); ++i) {
            incrementNodeByValue(i, value[i-1]);
        }
    }

    T getSum(size_t i) {
        auto sum = identity;
        while (i) {
            sum = op(sum, prefixSum[i]);
            i = firstSmallerAncestor(i);
        }
        return sum;
    }

    void incrementNodeByValue(size_t i, T x) {
        while (i < prefixSum.size()) {
            prefixSum[i] = op(prefixSum[i], x);
            i = firstLargerAncestor(i);
        }
    }
};
live example
As a note, you'll likely want to define an identity template elsewhere, parameterized on the operation and value types, to default the third parameter here. As is, it seems you'll almost always be specifying all three parameters during instantiation.

Overload function for arguments (not) deducible at compile time

Is there a way to overload a function so as to distinguish between the argument being evaluable at compile time or only at runtime?
Suppose I have the following function:
std::string lookup(int x) {
    return table<x>::value;
}
which allows me to select a string value based on the parameter x in constant time (with space overhead). However, in some cases x cannot be provided at compile time, and I need to run a version of lookup which does the lookup with a higher time complexity.
I could use functions with a different name of course, but I would like to have an unified interface.
I accepted an answer, but I'm still interested in whether this distinction is possible with exactly the same function call.
I believe the closest you can get is to overload lookup on int and std::integral_constant<int>; then, if the caller knows the value at compile time, they can call the latter overload:
#include <type_traits>
#include <string>

std::string lookup(int const& x) // a
{
    return "a"; // high-complexity lookup using x
}

template<int x>
std::string lookup(std::integral_constant<int, x>) // b
{
    return "b"; // return table<x>::value;
}

template<typename T = void>
void lookup(int const&&) // c
{
    static_assert(
        !std::is_same<T, T>{},
        "to pass a compile-time constant to lookup, pass"
        " an instance of std::integral_constant<int>"
    );
}

template<int N>
using int_ = std::integral_constant<int, N>;

int main()
{
    int x = 3;
    int const y = 3;
    constexpr int z = 3;

    lookup(x); // calls a
    lookup(y); // calls a
    lookup(z); // calls a
    lookup(int_<3>{}); // calls b
    lookup(3); // calls c, compile-time error
}
Online Demo
Notes:
I've provided an int_ helper here so construction of std::integral_constant<int> is less verbose for the caller; this is optional.
Overload c will have false negatives (e.g. constexpr int variables are passed to overload a, not overload c), but this will weed out any actual int literals.
One option would be to use overloading in a similar manner:
template <int x> std::string find() {
    return table<x>::value;
}

std::string find(int x) {
    return ...
}
There is also this trick:
std::string lookup(int x) {
    switch (x) {
    case 0: return table<0>::value;
    case 1: return table<1>::value;
    case 2: return table<2>::value;
    case 3: return table<3>::value;
    default: return generic_lookup(x);
    }
}
This sort of thing works well when it's advantageous, but not required, for the integer to be known at compile time. For example, if it helps the optimizer. It can be hell on compile times though, if you're calling many instances of some complicated function in this way.

How to initialise a floating point array at compile time?

I have found two good approaches to initialise integral arrays at compile time here and here.
Unfortunately, neither can be converted to initialise a float array straightforwardly; I find that I am not fit enough in template metaprogramming to solve this through trial and error.
First let me declare a use-case:
constexpr unsigned int SineLength = 360u;
constexpr unsigned int ArrayLength = SineLength+(SineLength/4u);
constexpr double PI = 3.1415926535;

float array[ArrayLength];

void fillArray(unsigned int length)
{
    for(unsigned int i = 0u; i < length; ++i)
        array[i] = sin(double(i)*PI/180.*360./double(SineLength));
}
}
As you can see, as far as the availability of information goes, this array could be declared constexpr.
However, for the first approach linked, the generator function f would have to look like this:
constexpr float f(unsigned int i)
{
    return sin(double(i)*PI/180.*360./double(SineLength));
}
And that means that a template argument of type float is needed. Which is not allowed.
Now, the first idea that springs to mind would be to store the float in an int variable - nothing happens to the array contents after their calculation, so pretending that they were of another type than they are (as long as the byte length is equal) is perfectly fine.
But see:
constexpr int f(unsigned int i)
{
    float output = sin(double(i)*PI/180.*360./double(SineLength));
    return *(int*)&output;
}
is not a valid constexpr, as it contains more than the return statement.
constexpr int f(unsigned int i)
{
    return reinterpret_cast<int>(sin(double(i)*PI/180.*360./double(SineLength)));
}
does not work either; even though one might think that reinterpret_cast does exactly what is needed here (namely nothing), it apparently only works on pointers.
Following the second approach, the generator function would look slightly different:
template<size_t index> struct f
{
    enum : float { value = sin(double(index)*PI/180.*360./double(SineLength)) };
};
With what is essentially the same problem: That enum cannot be of type float and the type cannot be masked as int.
Now, even though I have only approached the problem on the path of "pretend the float is an int", I do not actually like that path (aside from it not working). I would much prefer a way that actually handled the float as float (and would just as well handle a double as double), but I see no way to get around the type restriction imposed.
Sadly, there are many questions about this issue, which always refer to integral types, swamping the search for this specialised issue. Similarly, questions about masking one type as the other typically do not consider the restrictions of a constexpr or template parameter environment.
See [1][2][3] and [4][5] etc.
Assuming your actual goal is to have a concise way to initialize an array of floating point numbers and it isn't necessarily spelled float array[N] or double array[N] but rather std::array<float, N> array or std::array<double, N> array this can be done.
The significance of the type of array is that std::array<T, N> can be copied - unlike T[N]. If it can be copied, you can obtain the content of the array from a function call, e.g.:
constexpr std::array<float, ArrayLength> array = fillArray<ArrayLength>();
How does that help us? Well, if we can call a function taking the indices as arguments, we can use std::make_index_sequence<N> to give us a compile-time sequence of std::size_t from 0 to N-1. If we have that, we can initialize an array easily with a formula based on the index like this:
constexpr double const_sin(double x) { return x * 3.1; } // dummy...

template <std::size_t... I>
constexpr std::array<float, sizeof...(I)> fillArray(std::index_sequence<I...>) {
    return std::array<float, sizeof...(I)>{
        const_sin(double(I)*M_PI/180.*360./double(SineLength))...
    };
}

template <std::size_t N>
constexpr std::array<float, N> fillArray() {
    return fillArray(std::make_index_sequence<N>{});
}
Assuming the function used to initialize the array elements is actually a constexpr expression, this approach can generate a constexpr array. The function const_sin(), which is there just for demonstration purposes, does that, but it obviously doesn't compute a reasonable approximation of sin(x).
The comments indicate that the answer so far doesn't quite explain what's going on. So, let's break it down into digestible parts:
The goal is to produce a constexpr array filled with suitable sequence of values. However, the size of the array should be easily changeable by adjusting just the array size N. That is, conceptually, the objective is to create
constexpr float array[N] = { f(0), f(1), ..., f(N-1) };
Where f() is a suitable function producing a constexpr. For example, f() could be defined as
constexpr float f(int i) {
    return const_sin(double(i) * M_PI / 180.0 * 360.0 / double(SineLength));
}
However, typing in the calls to f(0), f(1), etc. would need to change with every change of N. So, essentially the same as the above declaration should be done but without extra typing.
The first step towards the solution is to replace float[N] by std::array<float, N>: built-in arrays cannot be copied while std::array<float, N> can be copied. That is, the initialization could be delegated to a function parameterized by N. That is, we'd use
template <std::size_t N>
constexpr std::array<float, N> fillArray() {
    // some magic explained below goes here
}

constexpr std::array<float, N> array = fillArray<N>();
Within the function we can't simply loop over the array because the non-const subscript operator isn't a constexpr. Instead, the array needs to be initialized upon creation. If we had a parameter pack std::size_t... I which represented the sequence 0, 1, .., N-1 we could just do
std::array<float, N>{ f(I)... };
as the expansion would effectively become equivalent to typing
std::array<float, N>{ f(0), f(1), .., f(N-1) };
So the question becomes: how to get such a parameter pack? I don't think it can be obtained directly in the function but it can be obtained by calling another function with a suitable parameter.
The alias std::make_index_sequence<N> names the type std::index_sequence<0, 1, .., N-1>. The details of the implementation are a bit arcane, but std::make_index_sequence<N>, std::index_sequence<...>, and friends are part of C++14 (they were proposed by N3493 based, e.g., on this answer from me). That is, all we need to do is call an auxiliary function with a parameter of type std::index_sequence<...> and get the parameter pack from there:
template <std::size_t... I>
constexpr std::array<float, sizeof...(I)>
fillArray(std::index_sequence<I...>) {
    return std::array<float, sizeof...(I)>{ f(I)... };
}

template <std::size_t N>
constexpr std::array<float, N> fillArray() {
    return fillArray(std::make_index_sequence<N>{});
}
The [unnamed] parameter to this function is only used to have the parameter pack std::size_t... I be deduced.
Here's a working example that generates a table of sin values, and that you can easily adapt to logarithm tables by passing a different function object
#include <array>    // array
#include <cmath>    // sin
#include <cstddef>  // size_t
#include <utility>  // index_sequence, make_index_sequence
#include <iostream>

namespace detail {

template<class Function, std::size_t... Indices>
constexpr auto make_array_helper(Function f, std::index_sequence<Indices...>)
-> std::array<decltype(f(std::size_t{})), sizeof...(Indices)>
{
    return {{ f(Indices)... }};
}

} // namespace detail

template<std::size_t N, class Function>
constexpr auto make_array(Function f)
{
    return detail::make_array_helper(f, std::make_index_sequence<N>{});
}

static auto const pi = std::acos(-1);
static auto const make_sin = [](int x) { return std::sin(pi * x / 180.0); };
static auto const sin_table = make_array<360>(make_sin);

int main()
{
    for (auto elem : sin_table)
        std::cout << elem << "\n";
}
Live Example.
Note that you need to use -stdlib=libc++ because libstdc++ has a pretty inefficient implementation of index_sequence.
Also note that you need a constexpr function object (both pi and std::sin are not constexpr) to initialize the array truly at compile-time rather than at program initialization.
There are a few problems to overcome if you want to initialise a floating point array at compile time:
std::array is a little broken in that the operator[] is not constexpr in the case of a mutable constexpr std::array (I believe this will be fixed in the next release of the standard).
the functions in <cmath> are not marked constexpr!
I had a similar problem domain recently. I wanted to create an accurate but fast version of sin(x).
I decided to see if it could be done with a constexpr cache with interpolation to get speed without loss of accuracy.
An advantage of making the cache constexpr is that sin(x) for a value known at compile time is pre-computed and simply exists in the code as an immediate register load! In the worst case of a runtime argument, it's merely a constant array lookup followed by a weighted average.
This code will need to be compiled with -fconstexpr-steps=2000000 on clang, or the equivalent option on Windows.
enjoy:
#include <iostream>
#include <cmath>
#include <utility>
#include <cassert>
#include <string>
#include <vector>
namespace cpputil {
// a fully constexpr version of array that allows incomplete
// construction
template<size_t N, class T>
struct array
{
// public constructor defers to internal one for
// conditional handling of missing arguments
constexpr array(std::initializer_list<T> list)
: array(list, std::make_index_sequence<N>())
{
}
constexpr T& operator[](size_t i) noexcept {
assert(i < N);
return _data[i];
}
constexpr const T& operator[](size_t i) const noexcept {
assert(i < N);
return _data[i];
}
constexpr T& at(size_t i) noexcept {
assert(i < N);
return _data[i];
}
constexpr const T& at(size_t i) const noexcept {
assert(i < N);
return _data[i];
}
constexpr T* begin() {
return std::addressof(_data[0]);
}
constexpr const T* begin() const {
return std::addressof(_data[0]);
}
constexpr T* end() {
// todo: maybe use std::addressof and disable compiler warnings
// about array bounds that result
return &_data[N];
}
constexpr const T* end() const {
return &_data[N];
}
constexpr size_t size() const {
return N;
}
private:
T _data[N];
private:
// construct each element from the initialiser list if present
// if not, default-construct
template<size_t...Is>
constexpr array(std::initializer_list<T> list, std::integer_sequence<size_t, Is...>)
: _data {
(
Is >= list.size()
?
T()
:
std::move(*(std::next(list.begin(), Is)))
)...
}
{
}
};
// convenience printer
template<size_t N, class T>
inline std::ostream& operator<<(std::ostream& os, const array<N, T>& a)
{
os << "[";
auto sep = " ";
for (const auto& i : a) {
os << sep << i;
sep = ", ";
}
return os << " ]";
}
}
namespace trig
{
constexpr double pi() {
return M_PI;
}
template<class T>
auto constexpr to_radians(T degs) {
return degs / 180.0 * pi();
}
// compile-time computation of a factorial
constexpr double factorial(size_t x) {
double result = 1.0;
for (size_t i = 2 ; i <= x ; ++i)
result *= double(i);
return result;
}
// compile-time replacement for std::pow
constexpr double power(double x, size_t n)
{
double result = 1;
while (n--) {
result *= x;
}
return result;
}
// compute a term in a taylor expansion at compile time
constexpr double taylor_term(double x, size_t i)
{
int powers = 1 + (2 * i);
double top = power(x, powers);
double bottom = factorial(powers);
auto term = top / bottom;
if (i % 2 == 1)
term = -term;
return term;
}
// compute the sin(x) using `terms` terms in the taylor expansion
constexpr double taylor_expansion(double x, size_t terms)
{
auto result = x;
for (size_t term = 1 ; term < terms ; ++term)
{
result += taylor_term(x, term);
}
return result;
}
// compute our interpolatable table as a constexpr
template<size_t N = 1024>
struct sin_table : cpputil::array<N, double>
{
static constexpr size_t cache_size = N;
static constexpr double step_size = (pi() / 2) / cache_size;
static constexpr double _360 = pi() * 2;
static constexpr double _270 = pi() * 1.5;
static constexpr double _180 = pi();
static constexpr double _90 = pi() / 2;
constexpr sin_table()
: cpputil::array<N, double>({})
{
for(size_t slot = 0 ; slot < cache_size ; ++slot)
{
double val = trig::taylor_expansion(step_size * slot, 20);
(*this)[slot] = val;
}
}
double checked_interp_fw(double rads) const {
size_t slot0 = size_t(rads / step_size);
auto v0 = (slot0 >= this->size()) ? 1.0 : (*this)[slot0];
size_t slot1 = slot0 + 1;
auto v1 = (slot1 >= this->size()) ? 1.0 : (*this)[slot1];
auto ratio = (rads - (slot0 * step_size)) / step_size;
return (v1 * ratio) + (v0 * (1.0-ratio));
}
double interpolate(double rads) const
{
rads = std::fmod(rads, _360);
if (rads < 0)
rads += _360;
if (rads < _90) {
return checked_interp_fw(rads);
}
else if (rads < _180) {
return checked_interp_fw(_90 - (rads - _90));
}
else if (rads < _270) {
return -checked_interp_fw(rads - _180);
}
else {
return -checked_interp_fw(_90 - (rads - _270));
}
}
};
double sine(double x)
{
if (x < 0) {
return -sine(-x);
}
else {
constexpr sin_table<> table;
return table.interpolate(x);
}
}
}
void check(float degs) {
using namespace std;
cout << "checking : " << degs << endl;
auto mysin = trig::sine(trig::to_radians(degs));
auto stdsin = std::sin(trig::to_radians(degs));
auto error = stdsin - mysin;
cout << "mine=" << mysin << ", std=" << stdsin << ", error=" << error << endl;
cout << endl;
}
auto main() -> int
{
check(0.5);
check(30);
check(45.4);
check(90);
check(151);
check(180);
check(195);
check(89.5);
check(91);
check(270);
check(305);
check(360);
return 0;
}
expected output:
checking : 0.5
mine=0.00872653, std=0.00872654, error=2.15177e-09
checking : 30
mine=0.5, std=0.5, error=1.30766e-07
checking : 45.4
mine=0.712026, std=0.712026, error=2.07233e-07
checking : 90
mine=1, std=1, error=0
checking : 151
mine=0.48481, std=0.48481, error=2.42041e-08
checking : 180
mine=-0, std=1.22465e-16, error=1.22465e-16
checking : 195
mine=-0.258819, std=-0.258819, error=-6.76265e-08
checking : 89.5
mine=0.999962, std=0.999962, error=2.5215e-07
checking : 91
mine=0.999847, std=0.999848, error=2.76519e-07
checking : 270
mine=-1, std=-1, error=0
checking : 305
mine=-0.819152, std=-0.819152, error=-1.66545e-07
checking : 360
mine=0, std=-2.44929e-16, error=-2.44929e-16
I am just keeping this answer for documentation. As the comments say, I was misled by gcc being permissive. It fails when f(42) is used e.g. as a template parameter, like this:
std::array<int, f(42)> asdf;
Sorry, this was not a solution.
Separate the calculation of your float value and its conversion to an int into two different constexpr functions:
constexpr int floatAsInt(float float_val) {
return *(int*)&float_val;
}
constexpr int f(unsigned int i) {
return floatAsInt(sin(double(i)*PI/180.*360./double(SineLength)));
}