I have trouble understanding the order of template instantiation. It seems that the compiler does not consider a function if it is defined "too late." The following steps illustrate the main ideas of the code below:
The framework should provide a free function convert<From, To> if it can find a working overload for the function generate.
The function to<T> is a shortcut for convert<From,To> and should only work if convert<From,To> is valid.
Users should be able to provide an overload of generate and be able to use to and convert.
The corresponding code:
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <iterator>
#include <stdexcept>
#include <string>
#include <type_traits>
#include <utility>
// If I move the code down below at [*] to this location, everything works as
// expected.
// ------------- Framework Code -------------
// Anything that can be generated can also be converted to a string.
template <typename From>
auto convert(From const& from, std::string& to)
-> decltype(
generate(std::declval<std::back_insert_iterator<std::string>&>(), from)
)
{
to.clear();
auto i = std::back_inserter(to);
return generate(i, from);
}
// Similar to convert, except that it directly returns the requested type.
template <typename To, typename From>
auto to(From const& f) -> decltype(convert(f, std::declval<To&>()), To())
{
To t;
if (! convert(f, t))
throw std::invalid_argument("invalid conversion");
return t;
}
// ------------- User Code -------------
// [*] Support arithmetic types.
template <typename Iterator, typename T>
auto generate(Iterator& out, T i)
-> typename std::enable_if<std::is_arithmetic<T>::value, bool>::type
{
// Note: I merely use std::to_string for illustration purposes here.
auto str = std::to_string(i);
out = std::copy(str.begin(), str.end(), out);
return true;
}
int main()
{
uint16_t s = 16;
std::cout << to<std::string>(s) << std::endl;
return 0;
}
The problem is that this code only works if the function generate appears before the definitions of convert and to. How can I work around this?
Maybe my mental model is wrong here, but I thought that when the compiler sees to<std::string>(uint16_t), it goes back and instantiates the templates as needed. Any guidance would be appreciated.
The compiler does not know of the existence of generate by the time it sees the definitions of convert and to, as you have already guessed yourself. Contrary to what you thought, it does not put the definitions of convert and to "on hold" until it sees what generate is. To work around this problem you need to forward-declare generate, which can be done with the following declaration:
template <typename Iterator, typename T>
auto generate(Iterator& out, T i)
-> typename std::enable_if<std::is_arithmetic<T>::value, bool>::type;
This should appear right before the definition of convert, so that the compiler knows generate exists and is a function by the time it compiles convert and to. This way the compiler can check the syntax and guarantee it is a valid call to generate, even before it knows what generate actually does, since all it needs to do at this point is check whether the types of the arguments and of the return value match, according to the rules defined by the language standard.
By doing this, you naturally enforce a specific signature for generate (remember, the compiler is required to check the types when it compiles convert and to!). If you don't want that, and you probably don't, then the best approach is to give convert (and likewise to) an additional template parameter for something you expect to be callable, that is, something you can use as in a function call:
template <typename From, typename Generator>
auto convert(From const& from, std::string& to, Generator generate)
-> decltype(
generate(std::declval<std::back_insert_iterator<std::string>&>(), from)
)
{
to.clear();
auto i = std::back_inserter(to);
return generate(i, from);
}
Such objects are commonly known as callable objects.
The drawback of this approach is that, because C++ unfortunately does not support concepts yet, there isn't much you can do to enforce the requirements the callable object generate must satisfy. Nonetheless, this is the approach the standard library successfully uses for its algorithms.
The advantage is flexibility: any callable object that meets the type requirements can be used, including free functions, function objects, and member functions through binding, among others. Not to mention that the user is absolutely free to choose whatever name she wants for her callable object instead of being forced to use generate, as your initial idea would require if it were valid C++.
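The to function needs the same extra parameter so it can pass the callable through; here is a minimal sketch mirroring the original to (my own completion, not code from the question):
// Sketch: to() with an explicit generator parameter, forwarded on to convert().
template <typename To, typename From, typename Generator>
auto to(From const& f, Generator generate)
    -> decltype(convert(f, std::declval<To&>(), generate), To())
{
    To t;
    if (! convert(f, t, generate))
        throw std::invalid_argument("invalid conversion");
    return t;
}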
Now, to call this modified version of convert using the free function generate you defined, you would write:
to<std::string>(s, generate<std::back_insert_iterator<std::string>, uint16_t>);
That isn't very nice: since you must explicitly state the template arguments, this approach fails to take full advantage of the fact that generate is a function template. Fortunately, this inconvenience can be overcome by using a function object, for instance:
struct Generator
{
template <typename Iterator, typename T>
auto operator()(Iterator& out, T i)
-> typename std::enable_if<std::is_arithmetic<T>::value, bool>::type
{
// Note: I merely use std::to_string for illustration purposes here.
auto str = std::to_string(i);
out = std::copy(str.begin(), str.end(), out);
return true;
}
};
The previous call would become simply
to<std::string>(s, Generator());
taking full advantage of its template nature.
At any rate, if I got the idea correctly, this part of the code is the user's responsibility, so she has, as she deserves, full autonomy to decide which way she prefers.
After messing around with concepts I came across something in Visual Studio that I didn't understand, although I don't know whether the issue here has anything to do with concepts specifically. I'm sure there's a reason for this behaviour, but it would be great if someone could explain. There are two parts to this question. For the following snippet:
#include <concepts>
#include <utility>
template <typename PolicyType, typename T, typename... Ts>
concept concept_policy = requires(Ts&&... args)
{
{ PolicyType::template Create<T>(args...) } -> std::same_as<T*>;
};
struct basic_policy
{
template <typename T, typename... Ts>
static T* Create(Ts&&... args)
{
return new T { std::forward<Ts>(args)... };
}
};
struct type_a
{
int m_val;
};
template <concept_policy<int> TPolicy = basic_policy>
static void DoSomething()
{
//works on msvc, msvc needs the "template" for no args, but not with?
{
type_a* type1 = TPolicy::Create<type_a>(5); //why is this fine without template?
type_a* type2 = TPolicy::template Create<type_a>(); //why does this require template if the above doesn't?
}
// //clang requires both to have "template"
// {
// type_a* type1 = TPolicy::template Create<type_a>(5);
// type_a* type2 = TPolicy::template Create<type_a>();
// }
}
int main()
{
DoSomething();
{
//both versions compile fine without "template"
basic_policy policy;
type_a* type1 = basic_policy::Create<type_a>(5);
type_a* type2 = basic_policy::Create<type_a>();
}
return 0;
}
Why do MSVC and Clang behave differently here? MSVC is fine with "template" omitted for the call with arguments, but not for the one without.
Using a similar policy design, is there any way around prefixing Create with "template"? Ideally I'd like to be able to call TPolicy::Create<type>(...);
Clang is correct: the call to TPolicy::Create<type_a> requires the word template because TPolicy is a dependent type.
Specifically, according to the standard, when we have a fragment of the form T::m< where T is a dependent type other than the current instantiation, the compiler must assume that < is the less-than operator, not the beginning of a template argument list. If you mean < to begin a template argument list, then you must prefix m with the keyword template.
This behaviour is specified in [temp.names]/3. A < that doesn't satisfy any of the conditions listed must be interpreted to mean the less-than operator; the compiler cannot use contextual information to determine that it means the beginning of a template argument list.
As for why MSVC sometimes fails to diagnose the violation, I am not sure.
There is no way to make TPolicy::Create<type>(...); just work without the template keyword. If you really hate writing template, you have to restructure your code so that Create is a non-member, sort of like std::get in the standard library (which would have to be invoked in the form .template get<i>() if it were a class member and the object expression were of dependent type). I guess in this case, Create could be a class template that takes the policy class as one of its template arguments, and the type you want to create as another. I have been told that people often do make their templates into non-members for this exact reason (to avoid having to write template). I think that's a big mistake. It's better to write template than to choose a less natural design.
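For illustration, a minimal sketch of that non-member route (my own example; policy_create is a made-up name, not part of the question's code): the one remaining template keyword lives inside the wrapper, so call sites no longer need it.
// Hypothetical non-member wrapper (sketch): the policy becomes a template
// argument, and the dependent-name "template" is written exactly once here.
template <typename T, typename TPolicy, typename... Ts>
T* policy_create(Ts&&... args)
{
    return TPolicy::template Create<T>(std::forward<Ts>(args)...);
}
// Inside DoSomething, the calls would then read:
//   type_a* type1 = policy_create<type_a, TPolicy>(5);
//   type_a* type2 = policy_create<type_a, TPolicy>();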
I'm working on a library which uses lambdas for delineating the scopes of expression terms. Because the library has to hand out unique integer numbers to identify each variable, it is ideal if the library, not the user, constructs the variables and the user code receives them as lambda arguments.
(In other words, I am implementing a C++ analog of "call/fresh" from miniKanren.)
Since the user may want to introduce any number from zero to many fresh variables at a particular scope, I want the user to be able to pass lambdas with differing numbers of arguments to the library. However, I'm not aware of any (simple) way (in C++14) to deduce the number of parameters to an arbitrary lambda object.
An idea occurred to me: why not pass a fixed number (say, 10) of variable-id arguments to the lambda, and have the user code use an ellipsis in the lambda to ignore the ones it doesn't need? Something like this:
auto no_args = call_fresh([](...) { return success(); });
auto one_arg = call_fresh([](var A, ...) { return A == 1; });
auto two_args = call_fresh([](var A, var B, ...) { return A == 1 && B == 2; });
Compiler Explorer seems to accept ellipses in lambda parameter lists, at least with GCC.
It would be called something like this (note how the code always passes 10 variable ids no matter whether "f" names only one, two, or none of them):
template <typename F>
auto call_fresh(F f)
{
return [f](StateCounter sc) {
return f(sc+0,sc+1,sc+2,sc+3,sc+4,
sc+5,sc+6,sc+7,sc+8,sc+9);
};
}
Granted, it's a feature I was surprised exists; is there any reason not to use lambdas with ellipses?
However, I'm not aware of any (simple) way (in C++14) to deduce the number of parameters to an arbitrary lambda object.
It seems to me that you're looking for sizeof...() over a variadic auto list of parameters:
#include <iostream>
int main ()
{
auto l = [](auto ... as) { return sizeof...(as); };
std::cout << l(1, 2L, 3.0, 4.0f, "5") << std::endl; // prints 5
}
Your lambdas are essentially C-style variadic functions. There's nothing wrong with using them, and as long as you don't need to access the ignored values (which would be somewhat ugly), that is fine.
However, the underlying problem that it seems like you actually want to solve is to let your library find the number of arguments (or arity) of a function/lambda/..., which you can do with template metaprogramming - no need for your users to work around that issue.
Disclosure: There is an implementation of this in a library that I also work on, here.
Here is a simple example:
template <typename Callable>
struct function_arity : public function_arity<decltype(&Callable::operator())>
{};
template <typename ClassType, typename ReturnType, typename... Args>
struct function_arity<ReturnType(ClassType::*)(Args...) const>
{
constexpr static size_t arity = sizeof...(Args);
};
template <typename ClassType, typename ReturnType, typename... Args>
struct function_arity<ReturnType(ClassType::*)(Args...)>
{
constexpr static size_t arity = sizeof...(Args);
};
The compiler will automatically deduce the argument types for you, and sizeof... will get you the number of arguments that you need.
Then, you can use function_arity<decltype(lambda)>::arity to get the number of arguments of your lambda. The last version deals with mutable lambdas, where the call operator is non-const. You may also want to extend this to work properly with noexcept, or you will run into errors like this libc++ bug.
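For example, a small usage sketch (assuming the trait above is in scope):
auto lambda = [](int a, double b) { return a + b; };
static_assert(function_arity<decltype(lambda)>::arity == 2,
              "lambda should take exactly two parameters");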
Unfortunately, this will not work with overloaded or templated operator() (e.g. if you use auto-type parameters in your lambda). If you also want to support functions instead of lambdas, additional specializations may be necessary.
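Such additional specializations might look roughly like this (a sketch, compiled as C++17, since the noexcept variant relies on noexcept being part of the function type there):
// Plain functions and function pointers:
template <typename ReturnType, typename... Args>
struct function_arity<ReturnType(*)(Args...)>
{
    constexpr static size_t arity = sizeof...(Args);
};
// noexcept call operators (C++17 and later):
template <typename ClassType, typename ReturnType, typename... Args>
struct function_arity<ReturnType(ClassType::*)(Args...) const noexcept>
{
    constexpr static size_t arity = sizeof...(Args);
};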
The short version of my question is this: How can I use something like std::bind() with a standard library algorithm?
Since the short version is a bit devoid of details, here is a bit of an explanation: Assume I have the algorithm std::transform() and now I want to implement std::copy() (yes, I realize that there is std::copy() in the standard C++ library). Since I'm hideously lazy, I clearly want to use the existing implementation of std::transform(). I could, of course, do this:
struct identity {
template <typename T>
auto operator()(T&& value) const -> T&& { return std::forward<T>(value); }
};
template <typename InIt, typename OutIt>
auto copy(InIt begin, InIt end, OutIt to) -> OutIt {
return std::transform(begin, end, to, identity());
}
Somehow this implementation feels like a configuration of an algorithm. For example, it seems as if std::bind() should be able to do the job, but simply using std::bind() doesn't work:
namespace P = std::placeholders;
auto copy = std::bind(std::transform, P::_1, P::_2, P::_3, identity());
The problem is that the compiler can't determine the appropriate template arguments from just the algorithm and it doesn't matter if there is an & or not. Is there something which can make an approach like using std::bind() work? Since this is looking forward, I'm happy with a solution working with anything which is already proposed for inclusion into the C++ standard. Also, to get away with my laziness I'm happy to do some work up front for later easier use. Think of it this way: in my role as a library implementer, I'll put things together once such that every library user can be lazy: I'm a busy implementer but a lazy user.
In case you want to have a ready-made test bed: here is a complete program.
#include <algorithm>
#include <functional>
#include <iostream>
#include <iterator>
#include <utility>
#include <vector>
using namespace std::placeholders;
struct identity {
template <typename T>
T&& operator()(T&& value) const { return std::forward<T>(value); }
};
int main()
{
std::vector<int> source{ 0, 1, 2, 3, 4, 5, 6 };
std::vector<int> target;
#ifdef WORKS
std::transform(source.begin(), source.end(), std::back_inserter(target),
identity());
#else
// the next line doesn't work and needs to be replaced by some magic
auto copy = std::bind(&std::transform, _1, _2, _3, identity());
copy(source.begin(), source.end(), std::back_inserter(target));
#endif
std::copy(target.begin(), target.end(), std::ostream_iterator<int>(std::cout, " "));
std::cout << "\n";
}
When trying to std::bind() an overloaded function the compiler can't determine which overload to use: at the time the bind() expression is evaluated the function arguments are unknown, i.e., overload resolution can't decide which overload to pick. There is no direct way in C++ [yet?] to treat an overload set as an object. Function templates simply generate an overload set with one overload for each possible instantiation. That is, the entire problem of not being able to std::bind() any of the standard C++ library algorithms revolves around the fact that the standard library algorithms are function templates.
One approach to have the same effect as std::bind()ing an algorithm is to use C++14 generic lambdas to do the binding, e.g.:
auto copy = [](auto&&... args){
return std::transform(std::forward<decltype(args)>(args)..., identity());
};
Although this works, it is actually equivalent to a fancy implementation of a function template rather than a configuration of an existing function. However, using generic lambdas to create the primary function objects in a suitable standard library namespace could make the actual underlying function objects readily available, e.g.:
namespace nstd {
auto const transform = [](auto&&... args){
return std::transform(std::forward<decltype(args)>(args)...);
};
}
Now, with this approach to implementing transform(), it is actually trivial to use std::bind() to build copy():
auto copy = std::bind(nstd::transform, P::_1, P::_2, P::_3, identity());
Despite the looks and use of generic lambdas it is worth pointing out that it actually takes roughly the same effort to create corresponding function objects using only features available for C++11:
struct transform_t {
template <typename... Args>
auto operator()(Args&&... args) const
-> decltype(std::transform(std::forward<decltype(args)>(args)...)) {
return std::transform(std::forward<decltype(args)>(args)...);
}
};
constexpr transform_t transform{};
Yes, it is more typing, but it is only a reasonably small constant factor over the use of generic lambdas, i.e., wherever the function objects can be built with generic lambdas, they can be built with the C++11 formulation, too.
Of course, once we have function objects for the algorithms, it may be neat not to have to std::bind() them at all, as that forces us to mention all the unbound arguments. In the example case it is currying (well, I think currying only applies to binding the first argument, but whether it's the first or the last argument seems a bit arbitrary). What if we had curry_first() and curry_last() to curry the first or the last argument? The implementation of curry_last() is trivial, too (for brevity I'm using a generic lambda, but the same rewrite as above could be used to make it available with C++11):
template <typename Fun, typename Bound>
auto curry_last(Fun&& fun, Bound&& bound) {
return [fun = std::forward<Fun>(fun),
bound = std::forward<Bound>(bound)](auto&&... args){
return fun(std::forward<decltype(args)>(args)..., bound);
};
}
Now, assuming that curry_last() lives in the same namespace as either nstd::transform or identity(), the definition of copy() could become:
auto const copy = curry_last(nstd::transform, identity());
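With the test bed from earlier, this curried version is then used just like the std::bind() one (a usage sketch):
std::vector<int> source{ 0, 1, 2, 3, 4, 5, 6 };
std::vector<int> target;
copy(source.begin(), source.end(), std::back_inserter(target));
// target now contains 0 1 2 3 4 5 6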
OK, maybe this question didn't get me any hat but maybe I'll get some support for turning our standard library algorithms into function objects and possibly adding a few cool approaches to creating bound versions of said algorithms. I think this approach is much saner (although in the form described above possibly not as complete) than some of the proposals in this area.
I'm trying to make a stream manipulator for colour for use with output to the console. It works, changing the colour of text and the background:
std::cout << ConColor::Color::FgBlue << 123 << "abc"; //text is blue, sticky
The problem is with the signature:
std::ostream &FgBlue(std::ostream &);
This signature allows for derived classes such as std::ostringstream as well, but there is no way to change the colour of a string stream. The function would change the colour of the console regardless, even if it was called with such an argument.
Therefore, I want to ensure the argument is something along the lines of std::cout, std::wcout, etc. I would prefer it be general in the case that more std::ostream objects are added in a future standard.
I tried many things involving std::is_same and std::is_base_of (when the former wouldn't work), only to eventually realize that it was pointless: any argument type inheriting from std::basic_ostream<> will be converted to the type I'm comparing against when passed to the function, giving false positives.
This eventually led me to my answer below (variadic template template arguments? Wow, that's a mouthful!). There are a couple of problems, however:
The compiler must support variadic templates. I would prefer the solution work on MSVC.
The compiler gives cryptic errors in the case that a derived class with a different number of template arguments (such as std::ostringstream, which has 3 instead of 2) is used, as it doesn't get past the function signature.
It's possible to redirect stdout, say, to a file, so even if the argument is std::cout, the same thing as the stringstream case happens.
I encourage people to post any other solutions, hopefully better than mine, and really hopefully something that works with at least VS11.
Here's a trait for detecting std::basic_ostream instantiations:
template<typename T> struct is_basic_ostream {
template<typename U, typename V>
static char (&impl(std::basic_ostream<U, V> *))[
std::is_same<T, std::basic_ostream<U, V>>::value ? 2 : 1];
static char impl(...);
static constexpr bool value = sizeof(impl((T *)0)) == 2;
};
Use as:
template<typename T>
void foo(T &) {
static_assert(is_basic_ostream<T>::value,
"Argument must be of type std::basic_ostream<T, U>.");
}
We use template argument deduction to infer the template parameters on the (non-proper) basic_ostream base class, if any. As a more general solution, replacing U and V with a single variadic parameter would allow writing a generic is_instantiation_of trait on compilers that support variadic template parameters.
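That more general trait might look roughly like this (a sketch assuming variadic template support and <type_traits>):
template <template <typename...> class Template, typename T>
struct is_instantiation_of {
    // Deduction succeeds for an exact instantiation or for a class derived from
    // one; the is_same check keeps only exact instantiations.
    template <typename... Args>
    static char (&impl(Template<Args...> *))[
        std::is_same<T, Template<Args...>>::value ? 2 : 1];
    static char impl(...);
    static constexpr bool value = sizeof(impl((T *)0)) == 2;
};
// e.g. is_instantiation_of<std::basic_ostream, std::ostream>::value is true,
// while is_instantiation_of<std::basic_ostream, std::ostringstream>::value is false.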
To detect whether stdout is piped to a file (which can only be detected at runtime, of course) use isatty; see how to use isatty() on cout, or can I assume that cout == file descriptor 1?
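A minimal runtime check along those lines (POSIX; MSVC has the equivalents _isatty and _fileno in <io.h>):
#include <stdio.h>   // fileno (POSIX)
#include <unistd.h>  // isatty
bool stdout_is_a_terminal()
{
    // Non-zero when stdout refers to a terminal rather than a pipe or a file.
    return isatty(fileno(stdout)) != 0;
}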
This is what I came up with after a lot of trial:
template<template<typename...> class T, typename... U>
void foo(T<U...> &os) {
static_assert(
std::is_same<
std::basic_ostream<U...>,
typename std::remove_reference<decltype(os)>::type
>::value,
"Argument must be of type std::basic_ostream<T, U>."
);
//...
}
Source code containing each of the below tests can be found here.
Source code replacing the types with similar self-made ones that are more explicit and offer more freedom (e.g., instantiation), which might be more useful for testing, can be found here.
Passing in std::cout and std::wcout makes it compile fine.
Passing in an instance of std::ostringstream causes it to complain about the number of template arguments.
Passing in an instance of std::fstream, which has the same number of template parameters, causes the static assertion to fail.
Passing in a self-made 2-parameter template class causes the static assertion to fail.
Please feel free to improve upon this any way you can.
I was working with C++ again over the weekend and came across something whose cause I can't pin down.
Following the advice in this thread, I decided to implement a map_keys_iterator and map_values_iterator. I took the -- I think -- recommended-against approach of deriving a class from std::map<K,V>::iterator and implementing it as such:
template <typename K, typename V>
struct map_values_iterator:
public std::map<K,V>::iterator {
// explicitly call base's constructor
typedef typename std::map<K,V>::iterator mIterator;
map_values_iterator (const mIterator& mi) :
mIterator(mi) {};
const V& operator* () const { return (*this)->second; }
};
So far, so good, and the following code works (never mind the Unicode; I work with i18n-capable terminals by default):
typedef std::map<double,string> Map;
Map constants;
constants[M_PI] = "π";
constants[(1+sqrt(5))/2] = "φ";
constants[exp(M_PI)-M_PI] = "fake_20";
// ... fill map with more constants!
map_values_iterator<double, std::string> vs(constants.begin());
for (; vs != constants.end(); ++vs) {
cout<< (vs != constants.begin() ? ", " : "")<< *vs;
}
cout<< endl;
This code prints the expected result, something like (because a Map's elements are ordered):
..., φ, ..., π, ...., fake_20, ....
So I'd guess a map_keys_iterator would work in a similar way as well. I took into account that a Map's value_type is actually pair<const K, V>, so the keys version would return by value.
However, it is unwieldy to have to spell out the iterator's type, so I wanted to create a maker function in the classical make_pair idiom. And this is where the trouble begins:
template <typename K, typename V>
map_values_iterator<K,V> map_values(const typename std::map<K,V>::iterator &i) {
return lpp::map_values_iterator<K,V> (i);
}
template <typename K, typename V>
map_values_iterator<K,V> map_values(const typename std::map<K,V>::const_iterator &i) {
return lpp::map_values_iterator<K,V> (i);
}
I'm relatively sure this function has the right signature and constructor invocation. However, if I attempt to call the function from code:
auto vs= map_values(constants.begin());
I get a single STL compiler error of the form:
error: no matching function for call to ‘map_values(std::_Rb_tree_iterator<std::pair<const double, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >)’
I'm assuming here that in this particular case the whole _Rb_tree_iterator is actually the correct iterator for which map::iterator is typedefed; I'm not completely sure however. I've tried to provide more overloads to see if one of them matches (drop the reference, drop the const, use only non-const_iterator variants, etc) but so far nothing allows the signature I'm interested in.
If I store the base iterator in a variable before calling the function (as in auto begin= constants.begin(); auto vs= map_values(begin);) I get the exact same base error, only the description of the unmatched call is obviously different (in that it is a "const blah&").
My first attempt at implementing this sort of iterator was by creating a base class that aggregated map::iterator instead of inheriting, and deriving two classes, each with the adequate operator*, but that version ran into many more problems than the above and still forced me to replicate too much of the interface. So I tried this option for code-expedition.
I've tried to look for answers to this issue but my Google-fu isn't very strong today. Maybe I am missing something obvious, maybe I forgot something with the derivation (although I'm almost sure I didn't -- iterators are unlike containers), maybe I am actually required to specify all the template parameters for the map, or maybe my compiler is broken, but whatever it is I can't find it, and I am having real trouble understanding what the compiler is actually complaining about here. In my previous experience, if you are doing something wrong with the STL you are supposed to see a diarrhoea of errors, not just one (and one that isn't even about the STL, to boot).
So... any (well-encapsulated) pointers would be appreciated.
The reason is that your K and V type parameters are in a non-deduced context, so your function template is never even instantiated during overload resolution.
Look at it again:
template <typename K, typename V>
map_keys_iterator<K,V> map_keys(const typename std::map<K,V>::iterator &i)
For this to work, the C++ compiler would somehow have to walk from a specific iterator class back to its "parent container" type - map in this case - to get its K and V. In general, this is impossible - after all, a particular iterator might be a typedef for some other class, and the actual type of the argument in the call is that other class; there's no way the compiler can "retrace" it. So, per the C++ standard, it doesn't even try in this case - and, more generally, in any case where you have typename SomeType<T>::OtherType, and T is a type parameter.
What you can do is make the entire parameter type a template type parameter. This requires some trickery to derive K and V, though.
template <typename Iterator>
map_keys_iterator<
typename std::iterator_traits<Iterator>::value_type::first_type,
typename std::iterator_traits<Iterator>::value_type::second_type
> map_keys(Iterator i)
Unfortunately, you'll have to repeat those two in the body of the function as well, when invoking the constructor of your type.
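For completeness, the full definition might look roughly like this (my own elaboration, assuming <iterator> and <type_traits> are included and that map_keys_iterator is defined analogously to map_values_iterator; note that value_type::first_type is const K for a map, so std::remove_const is used to recover the plain key type that map_keys_iterator<K,V> expects):
template <typename Iterator>
map_keys_iterator<
    typename std::remove_const<
        typename std::iterator_traits<Iterator>::value_type::first_type>::type,
    typename std::iterator_traits<Iterator>::value_type::second_type
> map_keys(Iterator i)
{
    // The same two types again, this time for the constructor call.
    typedef typename std::remove_const<
        typename std::iterator_traits<Iterator>::value_type::first_type>::type K;
    typedef typename std::iterator_traits<Iterator>::value_type::second_type V;
    return map_keys_iterator<K, V>(i);
}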
As a side note, iterators are generally passed by value (they're meant to be lightweight).