Iterating over a tuple is a common question in the C++ world, and I know the various ways it can be done, usually with recursion and variadic templates.
The reason is that a tuple usually holds different element types, and the only way to handle every type correctly is to have a specific overload that matches each type. Then we let the compiler dispatch every element to the correct function using recursion.
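For reference, the classic recursive dispatch I mean looks roughly like this (just a C++11 sketch, where f is assumed to be a functor with an overload for each element type):

#include <cstddef>
#include <tuple>
#include <type_traits>
#include <utility>

// end of the recursion: one past the last element
template <std::size_t I = 0, typename F, typename... Ts>
typename std::enable_if<I == sizeof...(Ts)>::type
for_each_in_tuple(std::tuple<Ts...>&, F&&) {}

// visit element I, then recurse on I + 1
template <std::size_t I = 0, typename F, typename... Ts>
typename std::enable_if<(I < sizeof...(Ts))>::type
for_each_in_tuple(std::tuple<Ts...>& t, F&& f) {
    f(std::get<I>(t));                               // overload resolution picks the matching overload of f
    for_each_in_tuple<I + 1>(t, std::forward<F>(f));
}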
My case goes like this: I have a tuple of elements, every element is an instantiation of a template class:
std::tuple<my_class<int>, my_class<string>, my_class<float>>
Also, my_class<T> is a derived class of my_class_base, which isn't a template.
Considering this constraint, is it possible to write an at(tuple_type& tuple, size_t n) function that returns the nth element of the tuple, as a reference to the base class, in O(1) steps?
Then I can write
for (size_t i = 0; i < N; i++) {
    auto& o = at<my_class_base>(tuple, i);
    o.call_base_method(...);
}
Thanks.
You might just use std::apply:
std::apply([](auto&...args){ (args.call_base_method(), ...); }, tuple);
Otherwise, to answer your question directly, you might do something like:
template <std::size_t... Is, typename tuple_type>
my_class_base& at_impl(std::index_sequence<Is...>, tuple_type& tuple, size_t n)
{
    my_class_base* bases[] = {&std::get<Is>(tuple)...};
    return *bases[n];
}

template <typename tuple_type>
my_class_base& at(tuple_type& tuple, size_t n)
{
    auto seq = std::make_index_sequence<std::tuple_size<tuple_type>::value>();
    return at_impl(seq, tuple, n);
}
I want to optimize a little program/library I'm writing. I've been stuck for about two weeks and am now wondering whether what I had in mind is even possible this way.
(Please be gentle; I don't have much experience with metaprogramming.)
My goal is, of course, to have certain computations done by the compiler, so that the programmer hopefully only has to edit code at one point in the program and the compiler "creates" all the boilerplate. I have a reasonably good idea of how to do what I want with macros, but the preference is for me to do it with templates if possible.
My goal is:
Let's say I have a class that a programmer using my library can derive from. In it, they can have multiple incoming and outgoing data types that I want to register somehow, so that the base class can do its operations on them.
class my_own_multiply : function_base {
    in<int> a;
    in<float> b;
    out<double> c;
    // ... other content of the class that actually does something, but it is irrelevant here
    register_ins<a, b> ins_of_function;   // example meta-function calls
    register_outs<c> outs_of_function;
};
The meta-code I have so far is this (but it's not yet working/complete):
template <typename... Ts>
struct register_ins {
    const std::array<std::unique_ptr<in_type_erasured>, sizeof...(Ts)> ins;

    constexpr std::array<std::unique_ptr<in_type_erasured>, sizeof...(Ts)>
    build_ins_array() {
        std::array<std::unique_ptr<in_type_erasured>, sizeof...(Ts)> ins_build;
        for (unsigned int i = 0; i < sizeof...(Ts); ++i) {
            ins_build[i] = std::make_unique<in_type_erasured>();
        }
        return ins_build;
    }

    constexpr register_ins() : ins(build_ins_array()) {
    }

    template <typename T>
    T getValueOf(unsigned int in_nr) {
        return ins[in_nr]->getValue();
    }
};
As you may see, I want to call my meta-template code with a variable number of ins. (Variable in the sense that the programmer can put in however many they like, but they won't change at runtime, so they can be "baked in" at compile time.)
The meta-code is supposed to create an array whose length is the number of ins and which is initialized so that every field points to the original in in the my_own_multiply class. Basically, this gives the programmer an indexable data structure that will always have the correct size, and one that I could access from the function_base class to use all ins for certain functions, which are also iterable, making things convenient for me.
Now I have looked into how one might do that, but I am getting the feeling that I might not really be allowed to "create" this array at compile time in a fashion that still lets the ins a and b be non-static and non-const, so that I can mutate them. From my side they wouldn't have to be const anyway, but my compiler doesn't seem to like them being left free. The only thing I need to be const is the array with the pointers. But does using constexpr possibly force me to make them const?
Okay, I will clarify what I don't get:
When I try to create an "instance" of my meta-stuff structure, it fails because the compiler expects all kinds of const, constexpr, and so on. But I don't want those, since I need to be able to mutate most of these variables. I only need this meta-stuff to create an array of the correct size at compile time, and I don't want to have to make everything static and const to achieve this. So is this even possible under these terms?
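To make the intended structure more concrete, here is a rough, non-constexpr sketch of the kind of registration I have in mind (the names in_type_erasured and ins_ref are only illustrative, not my real code):

#include <array>

struct in_type_erasured { virtual ~in_type_erasured() = default; };
template <typename T> struct in : in_type_erasured { T value{}; };

// Holds non-owning pointers to the ins of the derived class.
// The array size is fixed at compile time by the number of template arguments.
template <typename... Ts>
struct ins_ref {
    std::array<in_type_erasured*, sizeof...(Ts)> ptrs;
    explicit ins_ref(Ts&... ins) : ptrs{{ &ins... }} {}
};

struct my_own_multiply {
    in<int>   a;
    in<float> b;
    ins_ref<in<int>, in<float>> ins_of_function{a, b};  // a and b stay mutable, only the pointer array is fixed
};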
I do not get all the things you have in mind (also regarding that std::unique_ptr in your example), but maybe this helps:
Starting from C++14 (or C++11, but that is strictly limited) you may write constexpr functions which can be evaluated at compile-time. As a precondition (in simple words), all arguments "passed by the caller" must be constexpr. If you want to enforce that the compiler replaces that "call" by the result of a compile-time computation, you must assign the result to a constexpr.
Writing ordinary functions (just with constexpr added) lets you write code that is simple to read. Moreover, you can use the same code for both compile-time and run-time computations.
C++17 example (similar things are possible in C++14, although some stuff from std is just missing the constexpr qualifier):
http://coliru.stacked-crooked.com/a/154e2dfcc41fb6c7
#include <cassert>
#include <array>

template<class T, std::size_t N>
constexpr std::array<T, N> multiply(
    const std::array<T, N>& a,
    const std::array<T, N>& b
) {
    // may be evaluated in `constexpr` or in non-`constexpr` context
    // ... in simple man's words this means:
    // inside this function, `a` and `b` are not `constexpr`,
    // but the return value can be used as `constexpr` if all arguments are `constexpr` for the "caller"
    std::array<T, N> ret{};
    for (std::size_t n = 0; n < N; ++n) ret[n] = a[n] * b[n];
    return ret;
}

int main() {
    {// compile-time evaluation is possible if the input data is `constexpr`
        constexpr auto a = std::array{2, 4, 6};
        constexpr auto b = std::array{1, 2, 3};
        constexpr auto c = multiply(a, b);// assigning to a `constexpr` guarantees compile-time evaluation
        static_assert(c[0] == 2);
        static_assert(c[1] == 8);
        static_assert(c[2] == 18);
    }
    {// for run-time data, the same function can be used
        auto a = std::array{2, 4, 6};
        auto b = std::array{1, 2, 3};
        auto c = multiply(a, b);
        assert(c[0] == 2);
        assert(c[1] == 8);
        assert(c[2] == 18);
    }
    return 0;
}
Right now I have this code:
uint64_t buffer = 0;
const uint8_t * data = reinterpret_cast<uint8_t*>(&buffer);
And this works, but it seems risky because of the raw pointer hanging around (and it looks ugly too). I don't want naked pointers sitting around. I want to do something like this:
uint64_t buffer = 0;
const std::array<uint8_t, 8> data = partition_me_softly(buffer);
Is there a C++11-style construct that allows me to get this into a safe container, preferably a std::array, out of an unsigned int like this without introducing overhead?
If not, what would be the ideal way to make this code safer?
So I modified dauphic's answer to be a little more generic:
template <typename T, typename U>
std::array<T, sizeof(U) / sizeof(T)> ScalarToByteArray(const U v)
{
    static_assert(std::is_integral<T>::value && std::is_integral<U>::value,
                  "Template parameter must be a scalar type");
    std::array<T, sizeof(U) / sizeof(T)> ret;
    // note: the end pointer must advance by the number of T elements, not by bytes
    std::copy((T*)&v, ((T*)&v) + sizeof(U) / sizeof(T), ret.begin());
    return ret;
}
This way you can use it with more types like so:
uint64_t buffer = 0;
ScalarToByteArray<uint8_t>(buffer);
If you want to store an integer in a byte array, the best approach is probably to just cast the integer to a uint8_t* and copy it into an std::array. You're going to have to use raw pointers at some point, so your best option is to encapsulate the operation into a function.
template<typename T>
std::array<uint8_t, sizeof(T)> ScalarToByteArray(const T value)
{
    static_assert(std::is_integral<T>::value,
                  "Template parameter must be a scalar type");
    std::array<uint8_t, sizeof(T)> result;
    std::copy((uint8_t*)&value, ((uint8_t*)&value) + sizeof(T), result.begin());
    return result;
}
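If the pointer casts feel uneasy, the same idea can also be expressed with std::memcpy, which is well defined for trivially copyable types. This is only a sketch of an alternative, not part of the original answers:

#include <array>
#include <cstdint>
#include <cstring>
#include <type_traits>

template <typename T>
std::array<uint8_t, sizeof(T)> ScalarToByteArray(const T value)
{
    static_assert(std::is_trivially_copyable<T>::value,
                  "Template parameter must be trivially copyable");
    std::array<uint8_t, sizeof(T)> result;
    std::memcpy(result.data(), &value, sizeof(T));  // copies the object representation byte by byte
    return result;
}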
While playing around and trying to calculate the total size of a vector, I tried something like:
vector<double> vd;
auto area = vd.size() * sizeof(vd::value_type);
// I've seen Stepanov use "area" as a name for this kind of size; I don't know if he also adds sizeof(vd) to the area :)
Unfortunately this doesn't work...
I need to use vector<double>::value_type, but that makes the code less readable.
Can it be made to work? I don't like sizeof vd.front() because it just looks ugly to write front() for this.
EDIT: decltype variants also fit in what I would call the ugly category...
I think decltype can be used:
auto area = vd.size() * sizeof(decltype(vd)::value_type);
As you are using auto, I assume C++11 is permitted.
Confirmed with g++ v4.7.2 and clang v3.3.
How about a simple helper function?
template <typename Container>
size_t value_size(const Container &)
{
    return sizeof(typename Container::value_type);
}
[...]
vector<double> vd;
auto area = vd.size() * value_size(vd);
You could even overload the function so that it works with other containers such as arrays (of course, you would need to wrap size as well).
Ideally, the entire computation could be wrapped into a generic function:
template <typename Container>
size_t area(const Container &c)
{
    return c.size() * sizeof(typename Container::value_type);
}

// possible overload for arrays (not sure it's the best implementation)
template <typename T, size_t N>
size_t area(const T (&arr)[N])
{
    return sizeof(arr);
}
[...]
std::vector<double> vd;
auto vd_area = area(vd);
double arr[] = { 1., 2. };
auto arr_area = area(arr);
In C++11, you could use decltype(vd[0]):
auto area = vd.size()* sizeof (decltype(vd[0]));
But in the particular scenario, you could just write this:
auto area = vd.size()* sizeof (vd[0]);
Since the expression inside sizeof (and decltype too) is not evaluated, both will work even if vd is empty.
C++ Notes: Array Initialization has a nice list on initializing arrays. I have a
int array[100] = {-1};
expecting it to be full of -1s, but it's not; only the first value is, and the rest are 0s mixed with random values.
The code
int array[100] = {0};
works just fine and sets each element to 0.
What am I missing here? Can't one initialize it if the value isn't zero?
And second: is the default initialization (as above) faster than the usual loop through the whole array assigning a value, or does it do the same thing?
Using the syntax that you used,
int array[100] = {-1};
says "set the first element to -1 and the rest to 0" since all omitted elements are set to 0.
In C++, to set them all to -1, you can use something like std::fill_n (from <algorithm>):
std::fill_n(array, 100, -1);
In portable C, you have to roll your own loop. There are compiler-extensions or you can depend on implementation-defined behavior as a shortcut if that's acceptable.
There is an extension to the gcc compiler which allows the syntax:
int array[100] = { [0 ... 99] = -1 };
This would set all of the elements to -1.
This is known as "Designated Initializers" see here for further information.
Note that this isn't implemented for the GCC C++ compiler.
The page you linked to already gave the answer to the first part:
If an explicit array size is specified, but a shorter initialization list is specified, the unspecified elements are set to zero.
There is no built-in way to initialize the entire array to some non-zero value.
As for which is faster, the usual rule applies: "The method that gives the compiler the most freedom is probably faster".
int array[100] = {0};
simply tells the compiler "set these 100 ints to zero", which the compiler can optimize freely.
for (int i = 0; i < 100; ++i) {
    array[i] = 0;
}
is a lot more specific. It tells the compiler to create an iteration variable i, it tells it the order in which the elements should be initialized, and so on. Of course, the compiler is likely to optimize that away, but the point is that here you are overspecifying the problem, forcing the compiler to work harder to get to the same result.
Finally, if you want to set the array to a non-zero value, you should (in C++, at least) use std::fill:
std::fill(array, array+100, 42); // sets every value in the array to 42
Again, you could do the same with a loop, but this is more concise and gives the compiler more freedom. You're just saying that you want the entire array filled with the value 42. You don't say anything about the order in which it should be done, or anything else.
C++11 has another (imperfect) option:
std::array<int, 100> a;
a.fill(-1);
With {}, you initialize the elements in the order they are listed; the rest are initialized to 0.
If there is no = {} initializer at all, the contents are indeterminate.
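A minimal illustration of both cases:

int a[3] = {1};  // a[0] == 1, a[1] == 0, a[2] == 0
int b[3] = {};   // all three elements are 0
int c[3];        // non-static local array without an initializer: contents are indeterminate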
Using std::array, we can do this in a fairly straightforward way in C++14. It is possible to do in C++11 as well, but it is slightly more complicated.
Our interface is a compile-time size and a default value.
template<typename T>
constexpr auto make_array_n(std::integral_constant<std::size_t, 0>, T &&) {
    return std::array<std::decay_t<T>, 0>{};
}

template<std::size_t size, typename T>
constexpr auto make_array_n(std::integral_constant<std::size_t, size>, T && value) {
    return detail::make_array_n_impl<size>(std::forward<T>(value), std::make_index_sequence<size - 1>{});
}

template<std::size_t size, typename T>
constexpr auto make_array_n(T && value) {
    return make_array_n(std::integral_constant<std::size_t, size>{}, std::forward<T>(value));
}
The third function is mainly for convenience, so the user does not have to construct a std::integral_constant<std::size_t, size> themselves, as that is a pretty wordy construction. The real work is done by one of the first two functions.
The first overload is pretty straightforward: It constructs a std::array of size 0. There is no copying necessary, we just construct it.
The second overload is a little trickier. It forwards along the value it got as the source, and it also constructs an instance of make_index_sequence and just calls some other implementation function. What does that function look like?
namespace detail {

template<std::size_t size, typename T, std::size_t... indexes>
constexpr auto make_array_n_impl(T && value, std::index_sequence<indexes...>) {
    // Use the comma operator to expand the variadic pack
    // Move the last element in if possible. Order of evaluation is well-defined
    // for aggregate initialization, so there is no risk of copy-after-move
    return std::array<std::decay_t<T>, size>{ (static_cast<void>(indexes), value)..., std::forward<T>(value) };
}

} // namespace detail
This constructs the first size - 1 elements by copying the value we passed in. Here, we use our variadic parameter pack indexes just as something to expand. There are size - 1 entries in that pack (as we specified in the construction of make_index_sequence), and they have values of 0, 1, 2, 3, ..., size - 2. However, we do not care about the values (so we cast each one to void, to silence any compiler warnings). Parameter pack expansion expands out our code to something like this (assuming size == 4):
return std::array<std::decay_t<T>, 4>{ (static_cast<void>(0), value), (static_cast<void>(1), value), (static_cast<void>(2), value), std::forward<T>(value) };
We use those parentheses to ensure that the variadic pack expansion ... expands what we want, and also to ensure we are using the comma operator. Without the parentheses, it would look like we are passing a bunch of arguments to our array initialization, but really, we are evaluating the index, casting it to void, ignoring that void result, and then returning value, which is copied into the array.
The final argument, the one we call std::forward on, is a minor optimization. If someone passes in a temporary std::string and says "make an array of 5 of these", we would like to have 4 copies and 1 move, instead of 5 copies. The std::forward ensures that we do this.
The full code, including headers and some unit tests:
#include <array>
#include <type_traits>
#include <utility>

namespace detail {

template<std::size_t size, typename T, std::size_t... indexes>
constexpr auto make_array_n_impl(T && value, std::index_sequence<indexes...>) {
    // Use the comma operator to expand the variadic pack
    // Move the last element in if possible. Order of evaluation is well-defined
    // for aggregate initialization, so there is no risk of copy-after-move
    return std::array<std::decay_t<T>, size>{ (static_cast<void>(indexes), value)..., std::forward<T>(value) };
}

} // namespace detail

template<typename T>
constexpr auto make_array_n(std::integral_constant<std::size_t, 0>, T &&) {
    return std::array<std::decay_t<T>, 0>{};
}

template<std::size_t size, typename T>
constexpr auto make_array_n(std::integral_constant<std::size_t, size>, T && value) {
    return detail::make_array_n_impl<size>(std::forward<T>(value), std::make_index_sequence<size - 1>{});
}

template<std::size_t size, typename T>
constexpr auto make_array_n(T && value) {
    return make_array_n(std::integral_constant<std::size_t, size>{}, std::forward<T>(value));
}

struct non_copyable {
    constexpr non_copyable() = default;
    constexpr non_copyable(non_copyable const &) = delete;
    constexpr non_copyable(non_copyable &&) = default;
};

int main() {
    constexpr auto array_n = make_array_n<6>(5);
    static_assert(std::is_same<std::decay_t<decltype(array_n)>::value_type, int>::value, "Incorrect type from make_array_n.");
    static_assert(array_n.size() == 6, "Incorrect size from make_array_n.");
    static_assert(array_n[3] == 5, "Incorrect values from make_array_n.");

    constexpr auto array_non_copyable = make_array_n<1>(non_copyable{});
    static_assert(array_non_copyable.size() == 1, "Incorrect array size of 1 for move-only types.");

    constexpr auto array_empty = make_array_n<0>(2);
    static_assert(array_empty.empty(), "Incorrect array size for empty array.");

    constexpr auto array_non_copyable_empty = make_array_n<0>(non_copyable{});
    static_assert(array_non_copyable_empty.empty(), "Incorrect array size for empty array of move-only.");
}
The page you linked states
If an explicit array size is specified, but a shorter initialization list is specified, the unspecified elements are set to zero.
Speed issue: Any differences would be negligible for arrays this small. If you work with large arrays and speed is much more important than size, you can have a const array of the default values (initialized at compile time) and then memcpy them to the modifiable array.
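A minimal sketch of that idea (the names and the tiny size are only for illustration):

#include <cstring>

// table of default values, set up at compile time
static const int kDefaults[4] = { -1, -1, -1, -1 };

// copy the defaults into the modifiable working array whenever it needs resetting
void reset_to_defaults(int (&working)[4])
{
    std::memcpy(working, kDefaults, sizeof kDefaults);
}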
Another way of initializing the array to a common value would be to actually generate the list of elements in a series of defines:
#define DUP1( X ) ( X )
#define DUP2( X ) DUP1( X ), ( X )
#define DUP3( X ) DUP2( X ), ( X )
#define DUP4( X ) DUP3( X ), ( X )
#define DUP5( X ) DUP4( X ), ( X )
.
.
#define DUP100( X ) DUP99( X ), ( X )
#define DUPx( X, N ) DUP##N( X )
#define DUP( X, N ) DUPx( X, N )
Initializing an array to a common value can easily be done:
#define LIST_MAX 6
static unsigned char List[ LIST_MAX ]= { DUP( 123, LIST_MAX ) };
Note: DUPx is introduced to enable macro substitution in the parameters to DUP.
For the case of an array of single-byte elements, you can use memset to set all elements to the same value.
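A minimal example, assuming an array of single-byte elements:

#include <cstring>

int main()
{
    unsigned char flags[100];
    std::memset(flags, 0xFF, sizeof flags);  // every element is now 0xFF
}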
The simplest way is to use std::array and write a function template that returns the required std::array with all of its elements initialized with the passed argument, as shown below.
C++11 Version
#include <array>
#include <cstddef>
#include <iostream>

template<std::size_t N> std::array<int, N> make_array(int val)
{
    std::array<int, N> tempArray{};
    for (int &elem : tempArray)
    {
        elem = val;
    }
    return tempArray;
}

int main()
{
    //---------------------V-------->number of elements
    auto arr = make_array<8>(5);
    //------------------------^---->value of element to be initialized with

    // let's confirm that all elements have the expected value
    for (const auto &elem : arr)
    {
        std::cout << elem << std::endl; // prints 5 for every element
    }
}
C++17 Version
With C++17 you can add constexpr to the function template so that it can be used in constexpr context:
//-----------------------------------------vvvvvvvvv--->added constexpr
template<std::size_t N> std::array<int, N> constexpr make_array(int val)
{
    std::array<int, N> tempArray{};
    for (int &elem : tempArray)
    {
        elem = val;
    }
    return tempArray;
}

int main()
{
    //--vvvvvvvvv------------------------------>constexpr added
    constexpr auto arr = make_array<8>(5);
    for (const auto &elem : arr)
    {
        std::cout << elem << std::endl;
    }
}
1) When you use an initializer for a struct or an array like that, the unspecified elements are essentially value-initialized. For a primitive type like int, that means they will be zeroed. Note that this applies recursively: you could have an array of structs containing arrays, and if you specify just the first field of the first struct, then all the rest will be initialized with zeros and default constructors.
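A small illustration of that recursive behaviour:

struct S { int header; int data[4]; };
S table[3] = { { 7 } };  // table[0].header == 7; every other field of every element is 0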
2) The compiler will probably generate initializer code that is at least as good as you could do by hand. I tend to prefer to let the compiler do the initialization for me, when possible.
In The C++ Programming Language, 4th edition, Stroustrup recommends using vectors or valarrays over built-in arrays. With valarrays, you can initialize them to a specific value when you create them:
valarray<int> seven7s(7777777, 7);
This initializes an array 7 members long with "7777777".
This is a C++ way of implementing the answer using a C++ data structure instead of a "plain old C" array.
I switched to using valarray as an attempt in my code to use C++'isms rather than C'isms...