I have found two good approaches to initialise integral arrays at compile time here and here.
Unfortunately, neither can be converted to initialise a float array in a straightforward way, and I am not versed enough in template metaprogramming to solve this through trial and error.
First let me declare a use-case:
constexpr unsigned int SineLength = 360u;
constexpr unsigned int ArrayLength = SineLength+(SineLength/4u);
constexpr double PI = 3.1415926535;
float array[ArrayLength];
void fillArray(unsigned int length)
{
for(unsigned int i = 0u; i < length; ++i)
array[i] = sin(double(i)*PI/180.*360./double(SineLength));
}
As you can see, as far as the availability of information goes, this array could be declared constexpr.
However, for the first approach linked, the generator function f would have to look like this:
constexpr float f(unsigned int i)
{
return sin(double(i)*PI/180.*360./double(SineLength));
}
And that means that a template argument of type float is needed. Which is not allowed.
Now, the first idea that springs to mind would be to store the float in an int variable - nothing happens to the array values after their calculation, so pretending that they were of another type than they are (as long as the byte-length is equal) is perfectly fine.
But see:
constexpr int f(unsigned int i)
{
float output = sin(double(i)*PI/180.*360./double(SineLength));
return *(int*)&output;
}
is not a valid constexpr, as it contains more than the return statement.
constexpr int f(unsigned int i)
{
return reinterpret_cast<int>(sin(double(i)*PI/180.*360./double(SineLength)));
}
does not work either; even though one might think that reinterpret_cast does exactly what is needed here (namely nothing), it apparently only works on pointers.
Following the second approach, the generator function would look slightly different:
template<size_t index> struct f
{
enum : float{ value = sin(double(index)*PI/180.*360./double(SineLength)) };
};
With what is essentially the same problem: That enum cannot be of type float and the type cannot be masked as int.
Now, even though I have only approached the problem on the path of "pretend the float is an int", I do not actually like that path (aside from it not working). I would much prefer a way that actually handled the float as float (and would just as well handle a double as double), but I see no way to get around the type restriction imposed.
Sadly, there are many questions about this issue, which always refer to integral types, swamping the search for this specialised issue. Similarly, questions about masking one type as the other typically do not consider the restrictions of a constexpr or template parameter environment.
See [1][2][3] and [4][5] etc.
Assuming your actual goal is to have a concise way to initialize an array of floating-point numbers, and that it isn't necessarily spelled float array[N] or double array[N] but rather std::array<float, N> array or std::array<double, N> array, this can be done.
The significance of the type of array is that std::array<T, N> can be copied - unlike T[N]. If it can be copied, you can obtain the content of the array from a function call, e.g.:
constexpr std::array<float, ArrayLength> array = fillArray<ArrayLength>();
How does that help us? Well, when we can call a function taking an integer as an argument, we can use std::make_index_sequence<N> to give us a compile-time sequence of std::size_t from 0 to N-1. If we have that, we can initialize an array easily with a formula based on the index like this:
constexpr double const_sin(double x) { return x * 3.1; } // dummy...
template <std::size_t... I>
constexpr std::array<float, sizeof...(I)> fillArray(std::index_sequence<I...>) {
return std::array<float, sizeof...(I)>{
const_sin(double(I)*M_PI/180.*360./double(SineLength))...
};
}
template <std::size_t N>
constexpr std::array<float, N> fillArray() {
return fillArray(std::make_index_sequence<N>{});
}
Assuming the function used to initialize the array elements is actually a constant expression, this approach can produce a constexpr array. The function const_sin(), which is there just for demonstration purposes, satisfies that requirement but obviously doesn't compute a reasonable approximation of sin(x).
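If an actual compile-time sine is wanted, a small Taylor-series helper could replace the dummy const_sin() above. This is only a rough sketch, assuming the argument is reasonably small (no range reduction) and that a handful of terms is accurate enough:

constexpr double const_sin(double x) {
    double term = x;   // current Taylor term, starting with x^1/1!
    double sum  = x;
    for (int n = 1; n < 10; ++n) {
        term *= -x * x / double((2 * n) * (2 * n + 1));  // next term of the series
        sum  += term;
    }
    return sum;
}

Note that the loop requires C++14's relaxed constexpr rules, which this answer already assumes.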
The comments indicate that the answer so far doesn't quite explain what's going on. So, let's break it down into digestible parts:
The goal is to produce a constexpr array filled with a suitable sequence of values. However, the size of the array should be easily changeable by adjusting just the array size N. That is, conceptually, the objective is to create
constexpr float array[N] = { f(0), f(1), ..., f(N-1) };
Where f() is a suitable function producing a constexpr. For example, f() could be defined as
constexpr float f(int i) {
return const_sin(double(i) * M_PI / 180.0 * 360.0 / double(SineLength));
}
However, typing in the calls to f(0), f(1), etc. would need to change with every change of N. So, essentially the same declaration as above should be produced, but without the extra typing.
The first step towards the solution is to replace float[N] by std::array<float, N>: built-in arrays cannot be copied while std::array<float, N> can be copied. That is, the initialization can be delegated to a function parameterized by N. That is, we'd use
template <std::size_t N>
constexpr std::array<float, N> fillArray() {
// some magic explained below goes here
}
constexpr std::array<float, N> array = fillArray<N>();
Within the function we can't simply loop over the array because the non-const subscript operator isn't constexpr. Instead, the array needs to be initialized upon creation. If we had a parameter pack std::size_t... I which represented the sequence 0, 1, .., N-1 we could just do
std::array<float, N>{ f(I)... };
as the expansion would effectively become equivalent to typing
std::array<float, N>{ f(0), f(1), .., f(N-1) };
So the question becomes: how to get such a parameter pack? I don't think it can be obtained directly in the function but it can be obtained by calling another function with a suitable parameter.
The alias std::make_index_sequence<N> names the type std::index_sequence<0, 1, .., N-1>. The details of the implementation are a bit arcane, but std::make_index_sequence<N>, std::index_sequence<...>, and friends are part of C++14 (they were proposed by N3493, based, e.g., on this answer of mine). That is, all we need to do is call an auxiliary function with a parameter of type std::index_sequence<...> and get the parameter pack from there:
template <std::size_t...I>
constexpr std::array<float, sizeof...(I)>
fillArray(std::index_sequence<I...>) {
return std::array<float, sizeof...(I)>{ f(I)... };
}
template <std::size_t N>
constexpr std::array<float, N> fillArray() {
return fillArray(std::make_index_sequence<N>{});
}
The [unnamed] parameter to this function is only used to have the parameter pack std::size_t... I be deduced.
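Putting the pieces together with the constants from the question, usage could look like this (a small sketch; it assumes f() is built on a genuinely constexpr sine approximation such as the one sketched above):

constexpr std::array<float, ArrayLength> table = fillArray<ArrayLength>();
static_assert(table.size() == ArrayLength, "table is usable in constant expressions");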
Here's a working example that generates a table of sin values, and that you can easily adapt to logarithm tables by passing a different function object
#include <array> // array
#include <cmath> // sin
#include <cstddef> // size_t
#include <utility> // index_sequence, make_index_sequence
#include <iostream>
namespace detail {
template<class Function, std::size_t... Indices>
constexpr auto make_array_helper(Function f, std::index_sequence<Indices...>)
-> std::array<decltype(f(std::size_t{})), sizeof...(Indices)>
{
return {{ f(Indices)... }};
}
} // namespace detail
template<std::size_t N, class Function>
constexpr auto make_array(Function f)
{
return detail::make_array_helper(f, std::make_index_sequence<N>{});
}
static auto const pi = std::acos(-1);
static auto const make_sin = [](int x) { return std::sin(pi * x / 180.0); };
static auto const sin_table = make_array<360>(make_sin);
int main()
{
for (auto elem : sin_table)
std::cout << elem << "\n";
}
Live Example.
Note that you need to use -stdlib=libc++ because libstdc++ has a pretty inefficient implementation of index_sequence.
Also note that you need a constexpr function object (both pi and std::sin are not constexpr) to initialize the array truly at compile-time rather than at program initialization.
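For instance, a constexpr-callable function object would make the table a true compile-time constant when passed to the make_array above. A hedged sketch, where const_sin stands in for any constexpr sine approximation (e.g. a Taylor series); a crude placeholder is used here:

constexpr double const_sin(double x) { return x - x * x * x / 6.0; } // placeholder approximation

struct const_sin_deg {
    constexpr double operator()(std::size_t x) const {
        return const_sin(3.141592653589793 * double(x) / 180.0);
    }
};

constexpr auto const_sin_table = make_array<360>(const_sin_deg{});
static_assert(const_sin_table.size() == 360, "built at compile time");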
There are a few problems to overcome if you want to initialise a floating point array at compile time:
std::array is a little broken in that the operator[] is not constexpr in the case of a mutable constexpr std::array (I believe this will be fixed in the next release of the standard).
the functions in <cmath> are not marked constexpr!
I had a similar problem domain recently. I wanted to create an accurate but fast version of sin(x).
I decided to see if it could be done with a constexpr cache with interpolation to get speed without loss of accuracy.
An advantage of making the cache constexpr is that, for an argument known at compile time, sin(x) is pre-computed and simply exists in the code as an immediate register load! In the worst case of a runtime argument, it's merely a constant array lookup followed by a weighted average.
This code will need to be compiled with -fconstexpr-steps=2000000 on clang, or the equivalent on Windows.
enjoy:
#include <iostream>
#include <cmath>
#include <utility>
#include <cassert>
#include <string>
#include <vector>
namespace cpputil {
// a fully constexpr version of array that allows incomplete
// construction
template<size_t N, class T>
struct array
{
// public constructor defers to internal one for
// conditional handling of missing arguments
constexpr array(std::initializer_list<T> list)
: array(list, std::make_index_sequence<N>())
{
}
constexpr T& operator[](size_t i) noexcept {
assert(i < N);
return _data[i];
}
constexpr const T& operator[](size_t i) const noexcept {
assert(i < N);
return _data[i];
}
constexpr T& at(size_t i) noexcept {
assert(i < N);
return _data[i];
}
constexpr const T& at(size_t i) const noexcept {
assert(i < N);
return _data[i];
}
constexpr T* begin() {
return std::addressof(_data[0]);
}
constexpr const T* begin() const {
return std::addressof(_data[0]);
}
constexpr T* end() {
// todo: maybe use std::addressof and disable compiler warnings
// about array bounds that result
return &_data[N];
}
constexpr const T* end() const {
return &_data[N];
}
constexpr size_t size() const {
return N;
}
private:
T _data[N];
private:
// construct each element from the initialiser list if present
// if not, default-construct
template<size_t...Is>
constexpr array(std::initializer_list<T> list, std::integer_sequence<size_t, Is...>)
: _data {
(
Is >= list.size()
?
T()
:
std::move(*(std::next(list.begin(), Is)))
)...
}
{
}
};
// convenience printer
template<size_t N, class T>
inline std::ostream& operator<<(std::ostream& os, const array<N, T>& a)
{
os << "[";
auto sep = " ";
for (const auto& i : a) {
os << sep << i;
sep = ", ";
}
return os << " ]";
}
}
namespace trig
{
constexpr double pi() {
return M_PI;
}
template<class T>
auto constexpr to_radians(T degs) {
return degs / 180.0 * pi();
}
// compile-time computation of a factorial
constexpr double factorial(size_t x) {
double result = 1.0;
for (int i = 2 ; i <= x ; ++i)
result *= double(i);
return result;
}
// compile-time replacement for std::pow
constexpr double power(double x, size_t n)
{
double result = 1;
while (n--) {
result *= x;
}
return result;
}
// compute a term in a taylor expansion at compile time
constexpr double taylor_term(double x, size_t i)
{
int powers = 1 + (2 * i);
double top = power(x, powers);
double bottom = factorial(powers);
auto term = top / bottom;
if (i % 2 == 1)
term = -term;
return term;
}
// compute the sin(x) using `terms` terms in the taylor expansion
constexpr double taylor_expansion(double x, size_t terms)
{
auto result = x;
for (int term = 1 ; term < terms ; ++term)
{
result += taylor_term(x, term);
}
return result;
}
// compute our interpolatable table as a constexpr
template<size_t N = 1024>
struct sin_table : cpputil::array<N, double>
{
static constexpr size_t cache_size = N;
static constexpr double step_size = (pi() / 2) / cache_size;
static constexpr double _360 = pi() * 2;
static constexpr double _270 = pi() * 1.5;
static constexpr double _180 = pi();
static constexpr double _90 = pi() / 2;
constexpr sin_table()
: cpputil::array<N, double>({})
{
for(int slot = 0 ; slot < cache_size ; ++slot)
{
double val = trig::taylor_expansion(step_size * slot, 20);
(*this)[slot] = val;
}
}
double checked_interp_fw(double rads) const {
size_t slot0 = size_t(rads / step_size);
auto v0 = (slot0 >= this->size()) ? 1.0 : (*this)[slot0];
size_t slot1 = slot0 + 1;
auto v1 = (slot1 >= this->size()) ? 1.0 : (*this)[slot1];
auto ratio = (rads - (slot0 * step_size)) / step_size;
return (v1 * ratio) + (v0 * (1.0-ratio));
}
double interpolate(double rads) const
{
rads = std::fmod(rads, _360);
if (rads < 0)
rads = std::fmod(_360 - rads, _360);
if (rads < _90) {
return checked_interp_fw(rads);
}
else if (rads < _180) {
return checked_interp_fw(_90 - (rads - _90));
}
else if (rads < _270) {
return -checked_interp_fw(rads - _180);
}
else {
return -checked_interp_fw(_90 - (rads - _270));
}
}
};
double sine(double x)
{
if (x < 0) {
return -sine(-x);
}
else {
constexpr sin_table<> table;
return table.interpolate(x);
}
}
}
void check(float degs) {
using namespace std;
cout << "checking : " << degs << endl;
auto mysin = trig::sine(trig::to_radians(degs));
auto stdsin = std::sin(trig::to_radians(degs));
auto error = stdsin - mysin;
cout << "mine=" << mysin << ", std=" << stdsin << ", error=" << error << endl;
cout << endl;
}
auto main() -> int
{
check(0.5);
check(30);
check(45.4);
check(90);
check(151);
check(180);
check(195);
check(89.5);
check(91);
check(270);
check(305);
check(360);
return 0;
}
expected output:
checking : 0.5
mine=0.00872653, std=0.00872654, error=2.15177e-09
checking : 30
mine=0.5, std=0.5, error=1.30766e-07
checking : 45.4
mine=0.712026, std=0.712026, error=2.07233e-07
checking : 90
mine=1, std=1, error=0
checking : 151
mine=0.48481, std=0.48481, error=2.42041e-08
checking : 180
mine=-0, std=1.22465e-16, error=1.22465e-16
checking : 195
mine=-0.258819, std=-0.258819, error=-6.76265e-08
checking : 89.5
mine=0.999962, std=0.999962, error=2.5215e-07
checking : 91
mine=0.999847, std=0.999848, error=2.76519e-07
checking : 270
mine=-1, std=-1, error=0
checking : 305
mine=-0.819152, std=-0.819152, error=-1.66545e-07
checking : 360
mine=0, std=-2.44929e-16, error=-2.44929e-16
I am just keeping this answer for documentation. As the comments say, I was misled by gcc being permissive. It fails when f(42) is used, e.g., as a template parameter like this:
std::array<int, f(42)> asdf;
sorry, this was not a solution
Separate the calculation of your float and the conversion to an int into two different constexpr functions:
constexpr int floatAsInt(float float_val) {
return *(int*)&float_val;
}
constexpr int f(unsigned int i) {
return floatAsInt(sin(double(i)*PI/180.*360./double(SineLength)));
}
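As an aside: the dereferenced pointer cast above is generally not accepted in a constant expression either. If C++20 is available, std::bit_cast performs exactly this bit-level reinterpretation and is usable in constexpr context; a sketch:

#include <bit>
#include <cstdint>

constexpr std::int32_t floatAsInt(float float_val) {
    return std::bit_cast<std::int32_t>(float_val);   // constexpr-friendly type punning
}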
Related
I have objects that I need to hash with SHA256. The object has several fields as follows:
class Foo {
// some methods
protected:
std::array<int, 32> x;
char y[32];
long z;
};
Is there a way I can directly access the bytes representing the 3 member variables in memory, as I would with a struct? These hashes need to be computed as quickly as possible, so I want to avoid malloc'ing a new set of bytes and copying them to a heap-allocated array. Or is the answer to simply embed a struct within the class?
It is critical that I get the exact binary representation of these variables, so that the SHA256 comes out exactly the same given that the 3 variables are equal (so I can't have any extra padding bytes etc. included going into the hash function).
Most Hash classes are able to take multiple regions before returning the hash, e.g. as in:
class Hash {
public:
virtual void update(const void *data, size_t size) = 0;
virtual std::vector<uint8_t> digest() = 0;
};
So your hash method could look like this:
std::vector<uint8_t> Foo::hash(Hash *hash) const {
hash->update(&x, sizeof(x));
hash->update(&y, sizeof(y));
hash->update(&z, sizeof(z));
return hash->digest();
}
You can solve this by making an iterator that knows the layout of your member variables. Make Foo::begin() and Foo::end() functions and you can even take advantage of range-based for loops.
If you can increment it and dereference it, you can use it any other place you're able to use a LegacyForwardIterator.
Add in comparison functions to get access to the common it = X.begin(); it != X.end(); ++it idiom.
Some downsides include: ugly library code, poor maintainability, and (in this current form) no regard for endianness.
The solution to the latter downside is left as an exercise to the reader.
#include <array>
#include <iostream>
class FooByteIter; // forward declaration, so Foo can name it below
class Foo {
friend class FooByteIter;
public:
FooByteIter begin() const;
FooByteIter end() const;
Foo(const std::array<int, 2>& x, const char (&y)[2], long z)
: x_{x}
, y_{y[0], y[1]}
, z_{z}
{}
protected:
std::array<int, 2> x_;
char y_[2];
long z_;
};
class FooByteIter {
public:
FooByteIter(const Foo& foo)
: ptr_{reinterpret_cast<const char*>(&(foo.x_))}
, x_end_{reinterpret_cast<const char*>(&(foo.x_)) + sizeof(foo.x_)}
, y_begin_{reinterpret_cast<const char*>(&(foo.y_))}
, y_end_{reinterpret_cast<const char*>(&(foo.y_)) + sizeof(foo.y_)}
, z_begin_{reinterpret_cast<const char*>(&(foo.z_))}
{}
static FooByteIter end(const Foo& foo) {
FooByteIter fbi{foo};
fbi.ptr_ = reinterpret_cast<const char*>(&foo.z_) + sizeof(foo.z_);
return fbi;
}
bool operator==(const FooByteIter& other) const { return ptr_ == other.ptr_; }
bool operator!=(const FooByteIter& other) const { return ! (*this == other); }
FooByteIter& operator++() {
ptr_++;
if (ptr_ == x_end_) {
ptr_ = y_begin_;
}
else if (ptr_ == y_end_) {
ptr_ = z_begin_;
}
return *this;
}
FooByteIter operator++(int) {
FooByteIter pre = *this;
++(*this);
return pre;
}
char operator*() const {
return *ptr_;
}
private:
const char* ptr_;
const char* const x_end_;
const char* const y_begin_;
const char* const y_end_;
const char* const z_begin_;
};
FooByteIter Foo::begin() const {
return FooByteIter(*this);
}
FooByteIter Foo::end() const {
return FooByteIter::end(*this);
}
template <typename InputIt>
char checksum(InputIt first, InputIt last) {
char check = 0;
while (first != last) {
check += (*first);
++first;
}
return check;
}
int main() {
Foo f{{1, 2}, {3, 4}, 5};
for (const auto b : f) {
std::cout << (int)b << ' ';
}
std::cout << std::endl;
std::cout << "Checksum is: " << (int)checksum(f.begin(), f.end()) << std::endl;
}
You can generalize this further by making serialization functions for all data types you might care about, allowing serialization of classes that aren't plain-old-data types.
Warning
This code assumes that the underlying types being serialized have no internal padding themselves. This answer works for this datatype because it is made of types which themselves do not pad. To make this work for datatypes containing members that do have padding, this method would need to be recursed all the way down.
Just cast a pointer to the object to a pointer to char. You can iterate through the bytes by incrementing it. Use sizeof(foo) to avoid running past the end of the object.
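A minimal sketch of that idea (note that any padding bytes inside the object are visited too, which the question explicitly wants to avoid):

#include <cstddef>

template <class T>
unsigned byte_sum(const T& obj) {
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&obj);
    unsigned sum = 0;
    for (std::size_t i = 0; i < sizeof(obj); ++i)   // sizeof bounds the iteration
        sum += p[i];
    return sum;
}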
As long as you're able to make your class an aggregate, i.e. std::is_aggregate_v<T> == true, you can actually sort-of reflect the members of the structure.
This allows you to easily hash the members without actually having to name them. (Also, you don't have to remember to update your hash function every time you add a new member.)
Step 1: Getting the number of members inside the aggregate
First we need to know how many members a given aggregate type has.
We can check this by (ab-)using aggregate initialization.
Example:
Given struct Foo { int i; int j; };:
Foo a{}; // ok
Foo b{{}}; // ok
Foo c{{}, {}}; // ok
Foo d{{}, {}, {}}; // error: too many initializers for 'Foo'
We can use this to get the number of members inside the struct, by trying to add more initializers until we get an error:
template<class T>
concept aggregate = std::is_aggregate_v<T>;
struct any_type {
template<class T>
operator T() {}
};
template<aggregate T>
consteval std::size_t count_members(auto ...members) {
if constexpr (requires { T{ {members}... }; } == false)
return sizeof...(members) - 1;
else
return count_members<T>(members..., any_type{});
}
Notice that I used {members}... instead of members....
This is because of arrays - a structure like struct Bar{int i[2];}; could be initialized with 2 elements, e.g. Bar b{1, 2}, so our function would have returned 2 for Bar if we had used members....
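A quick sanity check of the helper (a hypothetical example):

struct Foo { int i; int j; };
struct Bar { int i[2]; };
static_assert(count_members<Foo>() == 2);
static_assert(count_members<Bar>() == 1);   // the int[2] array counts as one member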
Step 2: Extracting the members
Now that we know how many members our structure has, we can use structured bindings to extract them.
Unfortunately there is no way in the current standard to create a structured binding with a variable number of bindings, so we have to add a few extra lines of code for each additional member we want to support.
For this example I've only added support for a maximum of 4 members, but you can add as many as you like / need:
template<aggregate T>
constexpr auto tie_struct(T const& data) {
constexpr std::size_t fieldCount = count_members<T>();
if constexpr(fieldCount == 0) {
return std::tie();
} else if constexpr (fieldCount == 1) {
auto const& [m1] = data;
return std::tie(m1);
} else if constexpr (fieldCount == 2) {
auto const& [m1, m2] = data;
return std::tie(m1, m2);
} else if constexpr (fieldCount == 3) {
auto const& [m1, m2, m3] = data;
return std::tie(m1, m2, m3);
} else if constexpr (fieldCount == 4) {
auto const& [m1, m2, m3, m4] = data;
return std::tie(m1, m2, m3, m4);
} else {
static_assert(fieldCount!=fieldCount, "Too many fields for tie_struct! add more if statements!");
}
}
The fieldCount!=fieldCount in the static_assert is intentional, this prevents the compiler from evaluating it prematurely (it only complains if the else case is actually hit)
Now we have a function that can give us references to each member of an arbitrary aggregate.
Example:
struct Foo {int i; float j; std::string s; };
Foo f{1, 2, "miau"};
// tup is of type std::tuple<int const&, float const&, std::string const&>
auto tup = tie_struct(f);
// this will output "12miau"
std::cout << std::get<0>(tup) << std::get<1>(tup) << std::get<2>(tup) << std::endl;
Step 3: hashing the members
Now that we can convert any aggregate into a tuple of its members, hashing it shouldn't be a big problem.
You can basically hash the individual types like you want and then combine the individual hashes:
// for merging two hash values
std::size_t hash_combine(std::size_t h1, std::size_t h2)
{
return (h2 + 0x9e3779b9 + (h1<<6) + (h1>>2)) ^ h1;
}
// Handling primitives
template <class T, class = void>
struct is_std_hashable : std::false_type { };
template <class T>
struct is_std_hashable<T, std::void_t<decltype(std::declval<std::hash<T>>()(std::declval<T>()))>> : std::true_type { };
template <class T>
concept std_hashable = is_std_hashable<T>::value;
template<std_hashable T>
std::size_t hash(T value) {
return std::hash<T>{}(value);
}
// Handling tuples
template<class... Members>
std::size_t hash(std::tuple<Members...> const& tuple) {
return std::apply([](auto const&... members) {
std::size_t result = 0;
((result = hash_combine(result, hash(members))), ...);
return result;
}, tuple);
}
template<class T, std::size_t I>
using Arr = T[I];
// Handling arrays
template<class T, std::size_t I>
std::size_t hash(Arr<T, I> const& arr) {
std::size_t result = 0;
for(T const& elem : arr) {
std::size_t h = hash(elem);
result = hash_combine(result, h);
}
return result;
};
// Handling structs
template<aggregate T>
std::size_t hash(T const& agg) {
return hash(tie_struct(agg));
}
This allows you to hash basically any aggregate struct, even with arrays and nested structs:
struct Foo{ int i; double d; std::string s; };
struct Bar { Foo k[10]; float f; };
std::cout << hash(Foo{1, 1.2f, "miau"}) << std::endl;
std::cout << hash(Bar{}) << std::endl;
full example on godbolt
Footnotes
This only works with aggregates
No need to worry about padding because we access the members directly.
You have to add a few more ifs into tie_struct if you need more than 4 members
The provided hash() function doesn't handle all types - if you need e.g. std::array, std::pair, etc... you need to add overloads for those.
It's a lot of boilerplate code, but it's insanely powerful.
You can also use Boost.PFR for the aggregate-to-tuple part, if you are allowed to use Boost.
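A rough sketch of that variant, assuming Boost.PFR is available as <boost/pfr.hpp>: boost::pfr::for_each_field visits the members for us, replacing the hand-written tie_struct (the hash()/hash_combine() helpers from above are assumed to be in scope):

#include <boost/pfr.hpp>

template<aggregate T>
std::size_t hash_pfr(T const& agg) {
    std::size_t result = 0;
    boost::pfr::for_each_field(agg, [&](auto const& member) {
        result = hash_combine(result, hash(member));
    });
    return result;
}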
So I'm currently refactoring a giant function:
int giant_function(size_t n, size_t m, /*... other parameters */) {
int x[n]{};
float y[n]{};
int z[m]{};
/* ... more array definitions */
And when I find a group of related definitions with discrete functionality, grouping them into a class definition:
class V0 {
std::unique_ptr<int[]> x;
std::unique_ptr<float[]> y;
std::unique_ptr<int[]> z;
public:
V0(size_t n, size_t m)
: x{new int[n]{}}
, y{new float[n]{}}
, z{new int[m]{}}
{}
// methods...
};
The refactored version is inevitably more readable, but one thing I find less-than-satisfying is the increase in the number of allocations.
Allocating all those (potentially very large) arrays on the stack is arguably a problem waiting to happen in the unrefactored version, but there's no reason that we couldn't get by with just one larger allocation:
class V1 {
int* x;
float* y;
int* z;
public:
V1(size_t n, size_t m) {
char *buf = new char[n*sizeof(int)+n*sizeof(float)+m*sizeof(int)];
x = (int*) buf;
buf += n*sizeof(int);
y = (float*) buf;
buf += n*sizeof(float);
z = (int*) buf;
}
// methods...
~V1() { delete[] ((char *) x); }
};
Not only does this approach involve a lot of manual (read:error-prone) bookkeeping, but its greater sin is that it's not composable.
If I want to have a V1 value and a W1 value on the stack, then that's
one allocation each for their behind-the-scenes resources. Even simpler, I'd want the ability to allocate a V1 and the resources it points to in a single allocation, and I can't do that with this approach.
What this initially led me to was a two-pass approach - one pass to calculate how much space was needed, then make one giant allocation, then another pass to parcel out the allocation and initialize the data structures.
class V2 {
int* x;
float* y;
int* z;
public:
static size_t size(size_t n, size_t m) {
return sizeof(V2) + n*sizeof(int) + n*sizeof(float) + m*sizeof(int);
}
V2(size_t n, size_t m, char** buf) {
x = (int*) *buf;
*buf += n*sizeof(int);
y = (float*) *buf;
*buf += n*sizeof(float);
z = (int*) *buf;
*buf += m*sizeof(int);
}
};
// ...
size_t total = ... + V2::size(n,m) + ...
char* buf = new char[total];
// ...
void* here = buf;
buf += sizeof(V2);
V2* v2 = new (here) V2{n, m, &buf};
However that approach had a lot of repetition at a distance, which is asking for trouble in the long run. Returning a factory got rid of that:
class V3 {
int* const x;
float* const y;
int* const z;
V3(int* x, float* y, int* z) : x{x}, y{y}, z{z} {}
public:
class Factory {
size_t const n;
size_t const m;
public:
Factory(size_t n, size_t m) : n{n}, m{m} {}
size_t size() {
return sizeof(V3) + sizeof(int)*n + sizeof(float)*n + sizeof(int)*m;
}
V3* build(char** buf) {
void * here = *buf;
*buf += sizeof(V3);
int* x = (int*) *buf;
*buf += n*sizeof(int);
float* y = (float*) *buf;
*buf += n*sizeof(float);
int* z = (int*) *buf;
*buf += m*sizeof(int);
return new (here) V3{x,y,z};
}
};
};
// ...
V3::Factory v3factory{n,m};
// ...
size_t total = ... + v3factory.size() + ...
char* buf = new char[total];
// ..
V3* v3 = v3factory.build(&buf);
Still some repetition, but the params only get input once. And still a lot of manual bookkeeping. It'd be nice if I could build this factory out of smaller factories...
And then my Haskell brain hit me. I was implementing an Applicative Functor. This could totally be nicer!
All I needed to do was write some tooling to automatically sum sizes and run build functions side-by-side:
namespace plan {
template <typename A, typename B>
struct Apply {
A const a;
B const b;
Apply(A const a, B const b) : a{a}, b{b} {};
template<typename ... Args>
auto build(char* buf, Args ... args) const {
return a.build(buf, b.build(buf + a.size()), args...);
}
size_t size() const {
return a.size() + b.size();
}
Apply(Apply<A,B> const & plan) : a{plan.a}, b{plan.b} {}
Apply(Apply<A,B> const && plan) : a{plan.a}, b{plan.b} {}
template<typename U, typename ... Vs>
auto operator()(U const u, Vs const ... vs) const {
return Apply<decltype(*this),U>{*this,u}(vs...);
}
auto operator()() const {
return *this;
}
};
template<typename T>
struct Lift {
template<typename ... Args>
T* build(char* buf, Args ... args) const {
return new (buf) T{args...};
}
size_t size() const {
return sizeof(T);
}
Lift() {}
Lift(Lift<T> const &) {}
Lift(Lift<T> const &&) {}
template<typename U, typename ... Vs>
auto operator()(U const u, Vs const ... vs) const {
return Apply<decltype(*this),U>{*this,u}(vs...);
}
auto operator()() const {
return *this;
}
};
template<typename T>
struct Array {
size_t const length;
Array(size_t length) : length{length} {}
T* build(char* buf) const {
return new (buf) T[length]{};
}
size_t size() const {
return sizeof(T) * length;
}
};
template <typename P>
auto heap_allocate(P plan) {
return plan.build(new char[plan.size()]);
}
}
Now I could state my class quite simply:
class V4 {
int* const x;
float* const y;
int* const z;
public:
V4(int* x, float* y, int* z) : x{x}, y{y}, z{z} {}
static auto plan(size_t n, size_t m) {
return plan::Lift<V4>{}(
plan::Array<int>{n},
plan::Array<float>{n},
plan::Array<int>{m}
);
}
};
And use it in a single pass:
V4* v4;
W4* w4;
std::tie{ ..., v4, w4, .... } = *plan::heap_allocate(
plan::Lift<std::tie>{}(
// ...
V4::plan(n,m),
W4::plan(m,p,2*m+1),
// ...
)
);
It's not perfect (among other issues, I need to add code to track destructors, and have heap_allocate return a std::unique_ptr that calls all of them), but before I went further down the rabbit hole, I thought I should check for pre-existing art.
For all I know, modern compilers may be smart enough to recognize that the memory in V0 always gets allocated/deallocated together and batch the allocation for me.
If not, is there a preexisting implementation of this idea (or a variation thereof) for batching allocations with an applicative functor?
First, I'd like to provide feedback on problems with your solutions:
You ignore alignment. Relying on the assumption that int and float share the same alignment on your system, your particular use case might be "fine". But try to add some double into the mix and there will be UB. You might find your program crashing on ARM chips due to unaligned access.
new (buf) T[length]{}; is unfortunately bad and non-portable. In short: the Standard allows the compiler to reserve an initial y bytes of the given storage for internal use. Your program fails to allocate these y bytes on systems where y > 0 (and yes, those systems apparently exist; VC++ does this allegedly).
Having to allocate for y is bad, but what makes array-placement-new unusable is the inability to find out how big y is until placement new is actually called. There's really no way to use it for this case.
You're already aware of this, but for completeness: You don't destroy the sub-buffers, so if you ever use a non-trivially-destructible type, then there will be UB.
Solutions:
Allocate extra alignof(T) - 1 bytes for each buffer. Align the start of each buffer with std::align (a condensed sketch follows after this list).
You need to loop and use non-array placement new. Technically, doing non-array placement new means that using pointer arithmetic on these objects has UB, but the standard is just silly in this regard and I choose to ignore it. Here's a language-lawyerish discussion about that. As I understand it, the p0593r2 proposal includes a resolution to this technicality.
Add destructor calls that correspond to the placement-new calls (or static_assert that only trivially destructible types shall be used). Note that support for non-trivial destruction raises the need for exception safety. If construction of one buffer throws an exception, then the sub-buffers that were constructed earlier need to be destroyed. The same care needs to be taken when the constructor of a single element throws after some have already been constructed.
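A condensed sketch of points 1 and 2 (exception safety and the matching destructor calls from point 3 are omitted for brevity):

#include <cstddef>
#include <memory>
#include <new>

template <class T>
T* construct_subbuffer(char*& cursor, std::size_t count, std::size_t& space) {
    void* p = cursor;
    // carve an aligned region of count * sizeof(T) bytes out of the raw buffer
    if (!std::align(alignof(T), sizeof(T) * count, p, space))
        throw std::bad_alloc{};
    T* first = static_cast<T*>(p);
    for (std::size_t i = 0; i != count; ++i)
        new (first + i) T{};                        // element-wise, non-array placement new
    cursor = static_cast<char*>(p) + sizeof(T) * count;
    space -= sizeof(T) * count;
    return first;
}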
I don't know of prior art, but how about some subsequent art? I decided to take a stab at this from a slightly different angle. Be warned though, this lacks testing and may contain bugs.
A buffer_clump template for constructing / destructing objects in external raw storage, and for calculating the aligned boundaries of each sub-buffer:
#include <cstddef>
#include <memory>
#include <vector>
#include <tuple>
#include <cassert>
#include <type_traits>
#include <utility>
// recursion base
template <class... Args>
class buffer_clump {
protected:
constexpr std::size_t buffer_size() const noexcept { return 0; }
constexpr std::tuple<> buffers(char*) const noexcept { return {}; }
constexpr void construct(char*) const noexcept { }
constexpr void destroy(const char*) const noexcept {}
};
template<class Head, class... Tail>
class buffer_clump<Head, Tail...> : buffer_clump<Tail...> {
using tail = buffer_clump<Tail...>;
const std::size_t length;
constexpr std::size_t size() const noexcept
{
return sizeof(Head) * length + alignof(Head) - 1;
}
constexpr Head* align(char* buf) const noexcept
{
void* aligned = buf;
std::size_t space = size();
assert(std::align(
alignof(Head),
sizeof(Head) * length,
aligned,
space
));
return (Head*)aligned;
}
constexpr char* next(char* buf) const noexcept
{
return buf + size();
}
static constexpr void
destroy_head(Head* head_ptr, std::size_t last)
noexcept(std::is_nothrow_destructible<Head>::value)
{
if constexpr (!std::is_trivially_destructible<Head>::value)
while (last--)
head_ptr[last].~Head();
}
public:
template<class... Size_t>
constexpr buffer_clump(std::size_t length, Size_t... tail_lengths) noexcept
: tail(tail_lengths...), length(length) {}
constexpr std::size_t
buffer_size() const noexcept
{
return size() + tail::buffer_size();
}
constexpr auto
buffers(char* buf) const noexcept
{
return std::tuple_cat(
std::make_tuple(align(buf)),
tail::buffers(next(buf))
);
}
void
construct(char* buf) const
noexcept(std::conjunction<std::is_nothrow_default_constructible<Head>, std::is_nothrow_default_constructible<Tail>...>::value)
{
Head* aligned = align(buf);
std::size_t i;
try {
for (i = 0; i < length; i++)
new (&aligned[i]) Head;
tail::construct(next(buf));
} catch (...) {
destroy_head(aligned, i);
throw;
}
}
constexpr void
destroy(char* buf) const
noexcept(std::conjunction<std::is_nothrow_destructible<Head>, std::is_nothrow_destructible<Tail>...>::value)
{
tail::destroy(next(buf));
destroy_head(align(buf), length);
}
};
A buffer_clump_storage template that leverages buffer_clump to construct sub-buffers into a RAII container.
template <class... Args>
class buffer_clump_storage {
const buffer_clump<Args...> clump;
std::vector<char> storage;
public:
constexpr auto buffers() noexcept {
return clump.buffers(storage.data());
}
template<class... Size_t>
buffer_clump_storage(Size_t... lengths)
: clump(lengths...), storage(clump.buffer_size())
{
clump.construct(storage.data());
}
~buffer_clump_storage()
noexcept(noexcept(clump.destroy(nullptr)))
{
if (storage.size())
clump.destroy(storage.data());
}
buffer_clump_storage(buffer_clump_storage&& other) noexcept
: clump(other.clump), storage(std::move(other.storage))
{
other.storage.clear();
}
};
Finally, a class that can be allocated as automatic variable and provides named pointers to sub-buffers of buffer_clump_storage:
class V5 {
// macro tricks or boost mpl magic could be used to avoid repetitive boilerplate
buffer_clump_storage<int, float, int> storage;
public:
int* x;
float* y;
int* z;
V5(std::size_t xs, std::size_t ys, std::size_t zs)
: storage(xs, ys, zs)
{
std::tie(x, y, z) = storage.buffers();
}
};
And usage:
int giant_function(size_t n, size_t m, /*... other parameters */) {
V5 v(n, n, m);
for(std::size_t i = 0; i < n; i++)
v.x[i] = i;
In case you need only the clumped allocation and not so much the ability to name the group, this direct usage avoids pretty much all boilerplate:
int giant_function(size_t n, size_t m, /*... other parameters */) {
buffer_clump_storage<int, float, int> v(n, n, m);
auto [x, y, z] = v.buffers();
Criticism of my own work:
I didn't bother making V5 members const which would have arguably been nice, but I found it involved more boilerplate than I would prefer.
Compilers will warn that there is a throw in a function that is declared noexcept when the constructor cannot throw. Neither g++ nor clang++ were smart enough to understand that the throw will never happen when the function is noexcept. I guess that can be worked around by using partial specialization, or I could just add (non-standard) directives to disable the warning.
buffer_clump_storage could be made copyable and assignable. This involves loads more code, and I wouldn't expect to have need for them. The move constructor might be superfluous as well, but at least it's efficient and concise to implement.
I want to write a class for a binary indexed array,
which uses two non-type template parameters with defaults, op and identity,
and I need to check the constraint that op(identity,identity) == identity.
My problems are:
1. I don't know how to specify op; my current solution does not compile:
‘class std::function<T(T, T)>’ is not a valid type for a template non-type parameter
2. How to check that op(identity,identity) == identity? Currently I cannot verify it, since I failed on step 1. Maybe a static_assert?
So currently I use the workaround below, but then I cannot specify op as, e.g., std::multiplies<int>.
Can anyone tell me how to achieve the goal?
#include <vector>
#include <functional>
// template <typename T = int, std::function<T(T,T)> op = std::plus<T>(), const T identity = T()>
template <typename T = int, const T identity = T()> // currently workaround
class BIT { // binary indexed array
const std::function<T(T,T)> op = std::plus<T>(); // currently workaround
public:
BIT(std::vector<T> value) : value(value), prefixSum(value.size() + 1, identity) {
for (size_t i = 1; i < prefixSum.size(); ++i) {
incrementNodeByValue(i, value[i-1]);
}
// print(prefixSum,"prefixSum");
}
T getSum(size_t i) {
auto sum = identity;
while (i) {
sum = op(sum, prefixSum[i]);
i = firstSmallerAncestor(i);
}
return sum;
}
void incrementNodeByValue(size_t i, T x) {
while (i < prefixSum.size()) {
prefixSum[i] = op(prefixSum[i], x);
i = firstLargerAncestor(i);
}
}
private:
inline size_t firstLargerAncestor(size_t node) { return node + (node & -node); }
inline size_t firstSmallerAncestor(size_t node) { return node & (node - 1); }
std::vector<T> value;
std::vector<T> prefixSum;
};
int main() {
auto vec = std::vector<int> {5,1,15,11,52,28,0};
auto bit = BIT<>(vec);
}
The use of std::function here is a waste and seems to be the source of your confusion.
Note that templates may only be parameterized on types and on a restricted set of values - essentially integral types (char, int, long, etc.), enumerations, pointers and references. Here you're attempting to parameterize on a value of a std::function instantiation, which isn't such a type. That said, you don't actually need to parameterize on a value in this case.
Because your constructor doesn't accept an argument to initialize the op member variable nor is it accessible via the interface, I gather it's safe to assume the operator is known at compile-time, is guaranteed immutable, and default constructible.
As such, I declared the op member to be of a parameter type called operation.
#include <functional>
#include <vector>
template< typename T = int,
typename operation = std::plus<T>,
const T identity = T() >
class BIT {
const operation op = operation();
static_assert( operation()(identity, identity) == identity );
std::vector<T> value;
std::vector<T> prefixSum;
inline size_t firstLargerAncestor(size_t node) { return node + (node & -node); }
inline size_t firstSmallerAncestor(size_t node) { return node & (node - 1); }
public:
BIT(std::vector<T> value) :
value(value),
prefixSum(value.size() + 1, identity) {
for (size_t i = 1; i < prefixSum.size(); ++i) {
incrementNodeByValue(i, value[i-1]);
}
}
T getSum(size_t i) {
auto sum = identity;
while (i) {
sum = op(sum, prefixSum[i]);
i = firstSmallerAncestor(i);
}
return sum;
}
void incrementNodeByValue(size_t i, T x) {
while (i < prefixSum.size()) {
prefixSum[i] = op(prefixSum[i], x);
i = firstLargerAncestor(i);
}
}
};
live example
As a note, you'll likely want to define an identity trait elsewhere, parameterized on the operation and value types, to default the third parameter here. As is, it seems you'll almost always be specifying all three parameters during instantiation.
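A hedged sketch of such an identity trait (hypothetical name identity_element), which would let the third parameter default per operation:

template <class Op, class T>
struct identity_element;                      // primary template intentionally undefined

template <class T>
struct identity_element<std::plus<T>, T> {
    static constexpr T value = T(0);          // additive identity
};

template <class T>
struct identity_element<std::multiplies<T>, T> {
    static constexpr T value = T(1);          // multiplicative identity
};

// e.g.
// template <typename T = int,
//           typename operation = std::plus<T>,
//           const T identity = identity_element<operation, T>::value>
// class BIT { ... };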
So the order of the members returned from the div functions seems to be implementation defined.
Is quot the 1st member or is rem?
Let's say that I'm doing something like this:
generate(begin(digits), end(digits), [i = div_t{ quot, 0 }]() mutable {
i = div(i.quot, 10);
return i.rem;
})
Of course the problem here is that I don't know whether I initialized i.quot or i.rem in my lambda capture. Is initializing i with div(quot, 1) the only cross-platform way to do this?
You're right that the order of the members is unspecified. The definition is inherited from C, which explicitly states it is (emphasis mine):
7.20.6.2 The div, ldiv, and lldiv functions
3 [...] The structures shall contain (in either order) the members quot (the quotient) and rem (the remainder), each of which has the same type as the arguments numer and denom. [...]
In C, the fact that the order is unspecified doesn't matter, and an example is included specifically regarding div_t:
6.7.8 Initialization
34 EXAMPLE 10 Structure members can be initialized to nonzero values without depending on their order:
div_t answer = { .quot = 2, .rem = -1 };
Unfortunately, C++ did not adopt this syntax (and C++20's designated initializers require the designators to follow declaration order, so they don't help when that order is unspecified).
I'd probably go for simple assignment in a helper function:
div_t make_div_t(int quot, int rem) {
div_t result;
result.quot = quot;
result.rem = rem;
return result;
}
For plain int values, whether you use initialisation or assignment doesn't really matter, they have the same effect.
Your division by 1 is a valid option as well.
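With that helper, the capture from the question becomes order-independent (quot and digits as defined in the question):

generate(begin(digits), end(digits), [i = make_div_t(quot, 0)]() mutable {
    i = div(i.quot, 10);
    return i.rem;
});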
To quote the C11 Standard Draft N1570 §7.22.6.2
The div, ldiv, and lldiv functions return a structure of type div_t, ldiv_t, and lldiv_t, respectively, comprising both the quotient and the remainder. The structures shall contain (in either order) the members quot (the quotient) and rem (the remainder), each of which has the same type as the arguments numer and denom.
So in this case div_t is a plain POD struct, consisting of two ints.
So you can initialize it like any plain struct; your way is something I would have done too. It's also portable.
Otherwise I can't find any special mechanism to initialize them, neither in the C nor in the C++ standard. But for PODs, aka Plain Old Datatypes, there isn't any need for one.
EDIT:
I think the VS workaround could look like this:
#include <cstdlib>
#include <type_traits>
template<class T>
struct DTMaker {
using D = decltype(div(T{}, T{}));
static constexpr D dt = D{0,1};
static constexpr auto quot = dt.quot;
};
template <class T, typename std::enable_if<DTMaker<T>::quot == 0>::type* = nullptr>
typename DTMaker<T>::D make_div(const T &quot, const T &rem) { return {quot, rem}; }
template <class T, typename std::enable_if<DTMaker<T>::quot == 1>::type* = nullptr>
typename DTMaker<T>::D make_div(const T &quot, const T &rem) { return {rem, quot}; }
int main() {
div_t d_t = make_div(1, 2);
}
[live demo]
OLD ANSWER:
If you are using C++17 you could also try to use structured bindings, a constexpr function and SFINAE overloading to detect which field is declared first in the structure:
#include <cstdlib>
#include <algorithm>
#include <iterator>
constexpr bool first_quot() {
auto [x, y] = std::div_t{1, 0};
(void)y;
return x;
}
template <bool B = first_quot()>
std::enable_if_t<B, std::div_t> foo() {
int quot = 1;
int rem = 0;
return {quot, rem};
}
template <bool B = first_quot()>
std::enable_if_t<!B, std::div_t> foo() {
int quot = 1;
int rem = 0;
return {rem, quot};
}
int main() {
foo();
}
[live demo]
Or even simpler use if constexpr:
#include <cstdlib>
#include <algorithm>
#include <iterator>
constexpr bool first_quot() {
auto [x, y] = std::div_t{1, 0};
(void)y;
return x;
}
std::div_t foo() {
int quot = 1;
int rem = 0;
if constexpr(first_quot())
return {quot, rem};
else
return {rem, quot};
}
int main() {
foo();
}
[live demo]
Try something like this:)
int quot = 10;
auto l = [i = [=] { div_t tmp{}; tmp.quot = quot; return tmp; }()]() mutable
{
i = div(i.quot, 10);
return i.rem;
};
It looks like using a compound literal in C.:)
Or you can simplify the task by defining the variable i outside the lambda expression and using it in the lambda by reference.
For example
int quot = 10;
div_t i = {};
i.quot = quot;
auto l = [&i]()
{
i = div(i.quot, 10);
return i.rem;
};
You can use a ternary to initialize this:
generate(rbegin(digits), rend(digits), [i = div_t{ 1, 0 }.quot ? div_t{ quot, 0 } : div_t{ 0, quot }]() mutable {
i = div(i.quot, 10);
return i.rem;
});
gcc6.3 for example will compile identical code with the ternary and without the ternary.
On the other hand clang3.9 compiles longer code with the ternary than it does without the ternary.
So whether the ternary is optimized out will vary between compilers. But in all cases it will give you implementation independent code that doesn't require a secondary function to be written.
Incidentally, if you are into creating a helper function to build a div_t (or any of the other div return types), you can do it like this:
template <typename T>
enable_if_t<decltype(div(declval<T>(), declval<T>())){ 1, 0 }.quot != 0, decltype(div(declval<T>(), declval<T>()))> make_div(const T quot, const T rem) { return { quot, rem }; }
template <typename T>
enable_if_t<decltype(div(declval<T>(), declval<T>())){ 1, 0 }.quot == 0, decltype(div(declval<T>(), declval<T>()))> make_div(const T quot, const T rem) { return { rem, quot }; }
Note this does work on gcc but fails to compile on Visual Studio because of some non-conformity.
My solution uses a constexpr function that itself wraps and executes a lambda function that determines and initializes the correct div_t depending on the template parameters.
template <typename T>
constexpr auto make_div(const T quot, const T rem)
{
return [&]() {
decltype(std::div(quot, rem)) result;
result.quot = quot;
result.rem = rem;
return result;
}();
}
This works with MSVC15, gcc 6.3 and clang 3.9.1.
http://rextester.com/AOBCH32388
The lambda allows us to initialize a value step by step within a constexpr function. So we can set quot and rem correctly and independently from the order they appear in the datatype itself.
By wrapping it into a constexpr function we allow the compiler to completely optimize the call to make_div:
clang: https://godbolt.org/g/YdZGkX
gcc: https://godbolt.org/g/sA61LK
I am trying to make a simple LookUpTable based on an array of integers, where the idea is to have it calculated at compile time.
Trying to make it possible to use it for any other future tables of various integer types I might have, I need it as a template.
So I have a LookUpTable.h
#ifndef LOOKUPTABLE_H
#define LOOKUPTABLE_H
#include <stdexcept> // out_of_range
template <typename T, std::size_t NUMBER_OF_ELEMENTS>
class LookUpTableIndexed
{
private:
//constexpr static std::size_t NUMBER_OF_ELEMENTS = N;
// LookUpTable
T m_lut[ NUMBER_OF_ELEMENTS ] {}; // ESSENTIAL T Default Constructor for COMPILE-TIME INTERPRETER!
public:
// Construct and Populate the LookUpTable such that;
// INDICES of values are MAPPED to the DATA values stored
constexpr LookUpTableIndexed() : m_lut {}
{
//ctor
}
// Returns the number of values stored
constexpr std::size_t size() const {return NUMBER_OF_ELEMENTS;}
// Returns the DATA value at the given INDEX
constexpr T& operator[](std::size_t n)
{
if (n < NUMBER_OF_ELEMENTS)
return m_lut[n];
else throw std::out_of_range("LookUpTableIndexed[] : OutOfRange!");
}
constexpr const T& operator[](std::size_t n) const
{
if (n < NUMBER_OF_ELEMENTS)
return m_lut[n];
else throw std::out_of_range("LookUpTableIndexed[] const : OutOfRange!");
}
using iterator = T*;
// Returns beginning and end of LookUpTable
constexpr iterator begin() {return &m_lut[0 ];}
constexpr iterator end () {return &m_lut[NUMBER_OF_ELEMENTS];}
};
#endif // LOOKUPTABLE_H
And I'm trying to use it in a class for rapid attenuation of an integer signal wrt an integer distance.
eg. This is just a sample usage as Foo.h
#ifndef FOO_H
#define FOO_H
#include <limits> // max, digits
#include <stdlib.h> // abs
#include "LookUpTable.h" // LookUpTableIndexed
class Foo
{
private:
template <typename TDistance,
TDistance MAXIMUM_DISTANCE,
std::size_t NUMBER_OF_DIGITS>
struct DistanceAttenuation
{
private:
// Maximum value that can be held in this type
//constexpr auto MAXIMUM_DISTANCE = std::numeric_limits<TDistance>::max();
// Number of bits used by this type
//constexpr auto NUMBER_OF_DIGITS = std::numeric_limits<TDistance>::digits;
// LookUpTable
LookUpTableIndexed<TDistance, NUMBER_OF_DIGITS> m_attenuationRangeUpperLimit {}; // ESSENTIAL LookUpTable Default Constructor for COMPILE-TIME INTERPRETER!
// Returns the number of bits to BIT-SHIFT-RIGHT, attenuate, some signal
// given its distance from source
constexpr std::size_t attenuateBy(const TDistance distance)
{
for (std::size_t i {NUMBER_OF_DIGITS}; (i > 0); --i)
{
// While distance exceeds upper-limit, keep trying values
if (distance >= m_attenuationRangeUpperLimit[i - 1])
{
// Found RANGE the given distance occupies
return (i - 1);
}
}
throw std::logic_error("DistanceAttenuation::attenuateBy(Cannot attenuate signal using given distance!)");
}
public:
// Calculate the distance correction factors for signals
// so they can be attenuated to emulate the effects of distance on signal strength
// ...USING THE INVERSE SQUARE RELATIONSHIP OF DISTANCE TO SIGNAL STRENGTH
constexpr DistanceAttenuation() : m_attenuationRangeUpperLimit {}
{
//ctor
// Populate the LookUpTable
for (std::size_t i {0}; (i < NUMBER_OF_DIGITS); ++i)
{
TDistance goo = 0; // Not an attenuation calculation
TDistance hoo = 0; // **FOR TEST ONLY!**
m_attenuationRangeUpperLimit[i] = MAXIMUM_DISTANCE - goo - hoo;
}
static_assert((m_attenuationRangeUpperLimit[0] == MAXIMUM_DISTANCE),
"DistanceAttenuation : Failed to Build LUT!");
}
// Attenuate the signal, s, by the effect of the distance
// by some factor, a, where;
// Positive contribution values are attenuated DOWN toward ZERO
// Negative UP ZERO
constexpr signed int attenuateSignal(const signed int s, const int a)
{
return (s < 0)? -(abs(s) >> a) :
(abs(s) >> a);
}
constexpr signed int attenuateSignalByDistance(const signed int s, const TDistance d)
{
return attenuateSignal(s, attenuateBy(d));
}
};
using SDistance_t = unsigned int;
constexpr static auto m_distanceAttenuation = DistanceAttenuation<SDistance_t,
std::numeric_limits<SDistance_t>::max(),
std::numeric_limits<SDistance_t>::digits>();
public:
Foo() {}
~Foo() {}
// Do some integer foo
signed int attenuateFoo(signed int signal, SDistance_t distance) {return m_distanceAttenuation::attenuateSignalByDistance(signal, distance);}
};
#endif // FOO_H
I have tried to do this several ways, using the YouTube video tutorial CppCon 2015: Scott Schurr “constexpr: Applications” and others, but it won't compile, giving the error:
error: 'constexpr static auto m_distanceAttenuation...' used before its definition
and the static asserts fail with
error: non-constant condition for static assertion
indicating it isn't calculating anything at compile-time.
I'm new to C++.
I know I'm doing something obviously wrong, but I don't know what it is.
Am I misusing static or constexpr?
Aren't numeric_limits constexpr?
What am I doing wrong?
Thank you.
Some observations
1) As observed by michalsrb, Foo isn't complete when you initialize m_distanceAttenuation, and DistanceAttenuation is part of Foo, so it is incomplete too.
Unfortunately, you can't initialize a static constexpr member of an incomplete type (as better explained by jogojapan in this answer).
Suggestion: define DistanceAttenuation outside (and before) Foo, so it's a complete type and can be used to initialize m_distanceAttenuation; something like
template <typename TDistance,
TDistance MAXIMUM_DISTANCE,
std::size_t NUMBER_OF_DIGITS>
struct DistanceAttenuation
{
// ...
};
class Foo
{
// ...
};
2) In C++14, a constexpr method isn't implicitly a const method; suggestion: define the following methods as const too, or you can't use them in some constexpr expressions
constexpr std::size_t attenuateBy (const TDistance distance) const
constexpr signed int attenuateSignal(const signed int s, const int a) const
constexpr signed int attenuateSignalByDistance(const signed int s, const TDistance d) const
3) In attenuateBy(), the test in the following for loop is always true
for (std::size_t i {NUMBER_OF_DIGITS - 1}; (i >= 0); --i)
because a std::size_t is always >= 0, so the loop never exits; suggestion: redefine i as an int or a long
4) In attenuateFoo() you use m_DistanceAttenuation where the variable is defined as m_distanceAttenuation; suggestion: correct the name of the variable used
5) In attenuateFoo() you call the method attenuateSignalByDistance() using the :: operator; suggestion: use the . operator instead, so (considering point (4) too):
signed int attenuateFoo(signed int signal, SDistance_t distance)
{return m_distanceAttenuation.attenuateSignalByDistance(signal, distance);}