SFINAE-based operator overloading across namespaces - C++

I'm attempting to use an approach that allows automatic enabling of bitmask operators for strongly typed enum classes. See the header and cpp of an example:
https://www.justsoftwaresolutions.co.uk/files/bitmask_operators.hpp
https://www.justsoftwaresolutions.co.uk/files/testbitmask.cpp
The approach in testbitmask.cpp works when everything is in the same namespace; however, I would like to separate the SFINAE code into a namespace different from the one where it is used by other classes (see below or https://wandbox.org/permlink/05xXaViZT3MVyiBl).
#include <type_traits>

namespace ONE {
template<typename E>
struct enable_bitmask_operators{
    static const bool enable=false;
};

template<typename E>
inline typename std::enable_if<enable_bitmask_operators<E>::enable,E>::type
operator|(E lhs,E rhs){
    typedef typename std::underlying_type<E>::type underlying;
    return static_cast<E>(
        static_cast<underlying>(lhs) | static_cast<underlying>(rhs));
}
}

namespace TWO {
enum class A{ x=1, y=2};
}

namespace ONE {
template<>
struct enable_bitmask_operators<TWO::A>{
    static const bool enable=true;
};
}

int main(){
    TWO::A a1 = TWO::A::x | TWO::A::y;
}
With this setup, the overloaded operator is not found in main. Explicitly calling the function works (TWO::A a1 = ONE::operator|(TWO::A::x, TWO::A::y);), but of course that is not the desired usage.
If we instead put the specialization inside namespace TWO, the compiler throws an error: declaration of 'struct ONE::enable_bitmask_operators<TWO::A>' in namespace 'TWO' which does not enclose 'ONE'. I'm wondering whether the desired approach is possible in C++?

Your function cannot be found by ADL; you can add a using-declaration to make it visible:
using ONE::operator|;
TWO::A a1 = TWO::A::x | TWO::A::y;
Demo
using namespace ONE; might be an alternative too.
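For reference, here is the complete program with the fix applied (the using-declaration in main is the only change to the code from the question):

#include <type_traits>

namespace ONE {
template<typename E>
struct enable_bitmask_operators{ static const bool enable=false; };

template<typename E>
inline typename std::enable_if<enable_bitmask_operators<E>::enable,E>::type
operator|(E lhs,E rhs){
    typedef typename std::underlying_type<E>::type underlying;
    return static_cast<E>(
        static_cast<underlying>(lhs) | static_cast<underlying>(rhs));
}
}

namespace TWO {
enum class A{ x=1, y=2};
}

namespace ONE {
template<>
struct enable_bitmask_operators<TWO::A>{ static const bool enable=true; };
}

int main(){
    using ONE::operator|; // or: using namespace ONE;
    TWO::A a1 = TWO::A::x | TWO::A::y;
    (void)a1;
}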

Related

Abbreviate argument to member function expecting introduced type (enum class)

TL;DR Is there a shorter syntax for the enum class type argument to a member function (field_inst.write(decltype(field_inst)::Type::cpr1_4096);) in the following code?
namespace Hal {

// complex template definition `Bit_Field`
template<
    class Tregister,
    typename Tregister::Data Toffset,
    typename Tregister::Data Tmask,
    class Tfield_type,
    class Tmutability_policy = typename Tregister::Mutability_Policy
>
struct Bit_Field : Tregister {
    using Type = Tfield_type;

    static Type read()
    { // ... reading ...
    }

    static void write(Type)
    { // ... writing ...
    }
};

namespace Aeat_8800_Q24 {
enum class Cpr_Setting1 {cpr1_4096 = 0x5,
    // ... more settings ...
};
} // namespace Aeat_8800_Q24

} // namespace Hal

int main(void) {
    // this template is used multiple times, different template arguments
    // on each instantiation (using / typedef not practical)
    Hal::Bit_Field<reg<0x8FA>, 0x0, 0x7, Hal::Aeat_8800_Q24::Cpr_Setting1>
        field_inst;
    // QUESTION: How can I write that more pleasingly?
    field_inst.write(decltype(field_inst)::Type::cpr1_4096);
    field_inst.write(Hal::Aeat_8800_Q24::Cpr_Setting1::cpr1_4096);
}
Disclaimer: The question itself is a duplicate of: How to prevent class qualification when using nested enum class in member function arguments.
However, I want to know whether there have been improvements since 2016 (date of that question) / C++11 that would make the library easier to use (more pleasant syntax).
Disclaimer
The solution presented in this answer intends to answer the original need: writing shorter yet expressive client code. In doing so, I will go to great unnecessary lengths. To me, the advisable approach is the use of sound using-declarations such as:
int main() {
    using Hal::Aeat_8800_Q24::Cpr_Setting1;
    // Or if enums are alone and well encapsulated in their namespace:
    //using namespace Hal::Aeat_8800_Q24;
    Hal::Bit_Field<reg<0x8FA>, 0x0, 0x7, Cpr_Setting1>
        field_int;
    field_int.write(Cpr_Setting1::cpr1_4096);
    // ...
}
Overkill solution
You can devise a (very overengineered) solution based on user-defined literals.
#include <utility> // std::integer_sequence

// This is just a little helper for later
namespace literals {
template <typename T, T... Cs>
constexpr auto operator ""_as_iseq() {
    return std::integer_sequence<T, Cs...>{};
}
}
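As an aside, this string-literal form of the literal operator template is a GCC/Clang extension rather than standard C++. A quick sanity check of what it produces (assuming <type_traits> is also included):

using namespace literals;

// "ab"_as_iseq yields std::integer_sequence<char, 'a', 'b'>
static_assert(std::is_same<
    decltype("ab"_as_iseq),
    std::integer_sequence<char, 'a', 'b'>>::value, "");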
Then, the fun begins. Declare a trait class like this, along with its helper alias:
// Inside namespace Hal::Aeat_8800_Q24
template <typename T> struct setting_enum;
template <typename T>
using setting_enum_t = typename setting_enum<T>::type;
Then, specialize it for each of your enums:
// (Still) Inside namespace Hal::Aeat_8800_Q24
using namespace literals;

template <>
struct setting_enum<decltype("Cpr_Setting1"_as_iseq)> {
    using type = Cpr_Setting1;
};
Finally, let's define one last literal operator:
// Inside namespace Hal::Aeat_8800_Q24
namespace settings_literals {
template <typename T, T... Cs>
constexpr auto operator""_s()
    -> setting_enum_t<std::integer_sequence<T, Cs...> >;
}
Now your client code just needs to do this:
using namespace Hal::Aeat_8800_Q24::settings_literals;
// ...
field_inst.write(decltype("Cpr_Setting1"_s)::cpr1_4096);
That's still quite long... Is there a way to do better? Yes indeed... Instead of the trait above, let's use a variable template.
// In namespace Hal
namespace enum_traits {
using namespace literals;

template <typename Enum, typename ValueIntSeq>
constexpr void *ENUM_VALUE = nullptr;

template <>
constexpr Aeat_8800_Q24::Cpr_Setting1 ENUM_VALUE<
    Aeat_8800_Q24::Cpr_Setting1, decltype("cpr1_4096"_as_iseq)> =
        Aeat_8800_Q24::Cpr_Setting1::cpr1_4096;
// ...
} // ns enum_traits
The variable template needs to be specialized for each value of each enum (that's tedious! I'll tip my hat to anyone who can come up with preprocessor tricks to avoid writing all that boilerplate by hand; a sketch of one such macro follows).
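For what it's worth, here is a rough, untested sketch of such a macro (the name HAL_ENUM_VALUE is made up). It relies on the standard rule that concatenating a plain string literal with an adjacent empty suffixed one propagates the ud-suffix, so #name ""_as_iseq becomes "..."_as_iseq:

// Hypothetical boilerplate reducer: one ENUM_VALUE specialization
// per enumerator; to be used inside namespace Hal::enum_traits.
#define HAL_ENUM_VALUE(EnumType, name)                       \
    template <>                                              \
    constexpr EnumType ENUM_VALUE<EnumType,                  \
        decltype(#name ""_as_iseq)> = EnumType::name;

// Usage:
// HAL_ENUM_VALUE(Aeat_8800_Q24::Cpr_Setting1, cpr1_4096)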
Let's add an overload to the write member function:
struct Bit_Field : Tregister {
    // ...
    template <typename T, T... Cs>
    void write(std::integer_sequence<T, Cs...> s) {
        constexpr auto v_ = enum_traits::ENUM_VALUE<Type, decltype(s)>;
        static_assert(
            !std::is_pointer_v<decltype(v_)>,
            "Invalid enum int sequence provided");
        write(v_);
    }
};
In the end, the client code will look like this:
field_int.write("cpr1_4096"_as_iseq);
Now we're talking! Demo on Coliru.

traits class, namespace and forward declaration

I am currently having trouble using namespaces with traits classes. Here is my tentative code structure:
namespace project {

namespace internal {
template<typename T> struct traits;
} // internal

namespace moduleA {
namespace internal {
class AImpl {
    using some_typeA = traits<A>::some_type;
    using some_typeAImpl = traits<AImpl>::some_type;
    // where to put the traits specialization? How could the forward declaration be done?
};
} // internal

class A {
    A(): imp(new internal::AImpl()) {}
private:
    internal::AImpl* imp;
};

} // moduleA
} // project
Here are my questions and I am looking for suggestions to make this code better follow the established conventions and best practices:
I am defining two internal namespaces, ::project::internal and ::project::moduleA::internal; is this bad practice? My thinking is that with two levels it might be easier for users to browse the documentation generated by doxygen, as all the moduleA-related things, both in moduleA::internal and not, are grouped together.
Because moduleA::internal::AImpl depends on its own traits class traits<AImpl>, and my traits template resides in ::project::internal, I have to either (1) define a traits template in moduleA::internal and specialize it, or (2) define the traits specialization in ::project::internal. Either way, I'll need to forward-declare AImpl. How exactly should that be done in case (1) or (2)? Does that mean I have to write code like this:
namespace project {
namespace moduleA {class A;}

namespace internal {
template<>
struct traits<moduleA::A> {};
}

namespace moduleA {
... // more code
}
}
It looks like I am making too much use of namespace {} clauses.
Similarly to 2, moduleA::internal::AImpl depends on traits<A>, so again I need to forward-declare A: the same problem.
I'd greatly appreciate your help on this, thank you!
Instead of using class templates for traits in C++11 you can use function declarations (no definition is necessary). Functions can be found using argument-dependent name lookup, so that you can specialise traits for your class in the same namespace where your class is declared.
This completely removes the nuisance of having to close the namespace of your class, open the traits namespace, specialise the trait for your class using its fully qualified name, close the traits namespace, re-open the namespace of your class. And also removes the need to include the declaration of the primary template.
Example:
#include <type_traits>

template<class T> struct Type {};

template<class T>
void trait_of(Type<T>); // Generic trait version.

namespace N {
struct A;
int trait_of(Type<A>); // Trait specialisation for A.
} // N

int main() {
    using trait_of_a = decltype(trait_of(Type<N::A>{})); // trait_of is found using ADL.
    static_assert(std::is_same<int, trait_of_a>::value, "");
}
The return type of the trait function can be a container of more types, e.g.:
template<class T>
void more_traits(Type<T>); // Generic trait version. Must be specialized.

namespace N {
struct MoreTraitsOfA {
    using type_X = ...;
    using type_Y = ...;
};
MoreTraitsOfA more_traits(Type<A>); // Trait specialisation for A.
} // N

using MoreTraits = decltype(more_traits(Type<N::A>{}));
using type_X = MoreTraits::type_X;
using type_Y = MoreTraits::type_Y;

What is the rationale behind ADL for arguments whose type is a class template specialization

I've spent some time trying to understand why my code doesn't compile, and I've realized that in C++, argument-dependent lookup also uses template type arguments to determine the name lookup scope.
#include <string>
#include <functional>

namespace myns {

template<typename T>
struct X
{};

template<typename T>
auto ref(T) -> void
{}

} // namespace myns

auto main() -> int
{
    ref(myns::X<int>{});
    ref(myns::X<std::string>{}); // error: call to 'ref' is ambiguous
}
So the former ref call compiles, because for myns::X<int> only myns::ref is considered, while the latter doesn't compile because it finds myns::ref as well as std::ref.
My question is how this can be useful? Why would I need this? Do you have any ideas, examples? For now I can only see drawbacks like in the example above, where it introduces unneeded ambiguity.
Suppose you put everything into your own namespace, including a user-defined class and a function that takes a std::vector as its parameter, i.e.:
#include <vector>

namespace myns {

struct X {};

template<typename T>
auto my_func(const std::vector<T>&) -> void
{}

} // namespace myns
then you can take advantage of the fact that ADL also considers the types provided as template arguments and just write:
my_func(std::vector<myns::X>{});
on the other hand:
my_func(std::vector<int>{}); // error, can't find my_func
myns::my_func(std::vector<int>{}); // fine
Getting back to your original question, the lesson here is: don't reuse names from the standard library; it just makes code confusing.
In one word: reuse. It allows you to use useful components from other libraries, and still have ADL applied.
For instance:
#include <memory>

namespace my_stuff {
class my_class {
    // Something useful here
};

void process(std::unique_ptr<my_class> item);
}
Now you can write code naturally, as you would when working with the class directly:
process(std::make_unique<my_class>());
If that weren't the case, you'd need to roll your own smart pointer, in your own namespace, just to facilitate good coding idioms and ADL.

How to have ADL prefer a function template to another

I was wondering if it is possible to have ADL select the function template defined in the namespace of the class of one of the arguments (or in some other well defined place) in a situation when other function templates are visible. I have a motivating example that follows, and although I know the way around for that particular case (I discuss it below), the question in general seems to make sense.
I thought it kind of cool to avoid friend declarations and instead delegate work to methods, and thus came up with:
namespace n
{
struct a
{
    auto swap(a& a2) -> void;
};

auto swap(a& a1, a& a2) -> void
{
    a1.swap(a2);
}
}

auto main(void) -> int
{
    n::a a1, a2;
    using std::swap;
    swap(a1,a2);    // use case 1
    n::swap(a1,a2); // use case 2
}
So far, so good, both use cases work fine, but then, I added a second class with its own swap method and decided to save on boilerplate by turning the freestanding swap into a template:
namespace n
{
struct a
{
    auto swap(a& a2) -> void;
};

struct b
{
    auto swap(b& b2) -> void;
};

template<class T>
auto swap(T& t1, T& t2) -> void
{
    t1.swap(t2);
}
}

auto main(void) -> int
{
    n::a a1, a2;
    using std::swap;
    swap(a1,a2);    // use case 1
    n::swap(a1,a2); // use case 2
}
And here use case 1 breaks: the compiler complains about ambiguity with the std::swap template. If one anticipates the problem, it is possible to define swap functions rather than methods (they will usually be friend, since they replace methods):
namespace n
{
struct a
{
    friend auto swap(a& a1, a& a2) -> void;
};

struct b
{
    friend auto swap(b& b1, b& b2) -> void;
};
}
Now everything works, so in the case of swap it is enough to remember to use friend functions rather than methods, but what about the general case? Is there any hack, however dirty, that would let the compiler unambiguously select n::foo<a> (or some other foo<a> under our control) in a situation where other template<class T> foo are visible, either in the global namespace or because of some using clause, especially if the latter are not ours to modify?
The culprit here is not just that you write using std::swap, but fundamentally that you have provided your own unrestricted function template swap that will give an overload resolution error with std::swap whenever namespace std is being considered during name lookup (either by an explicit using directive, or by ADL).
To illustrate: just leaving out the using std::swap will rescue you in this case
Live On Coliru
auto main() -> int
{
    n::a a1, a2;
    swap(a1,a2);    // use case 1
    n::swap(a1,a2); // use case 2
}
But suppose that you refactor your classes a and b into class templates a<T> and b<T>, and instantiate them with a template argument from namespace std (e.g. std::string); then you get an overload resolution error:
Live On Coliru
#include <iostream>
#include <string>

namespace n
{
template<class>
struct a /* as before */;

template<class>
struct b /* as before */;
}

auto main() -> int
{
    n::a<std::string> a1, a2; // oops, ADL will look into namespace std
    swap(a1,a2);    // use case 1 (ERROR)
    n::swap(a1,a2); // use case 2 (OK)
}
Conclusion: if you define your own version of swap with the same signature as std::swap (as far as overload resolution is concerned), always qualify calls to it in order to disable ADL.
Tip: better yet, don't be lazy, and just provide your own swap function (not function template) for each class in your own namespace.
See also this Q&A where a similar mechanism is explained for why it is a bad idea to provide your own begin and end templates and expect them to work with ADL.
I know I must look silly to be answering my own question, but the fact of posting it, and the discussion, really brought some new understanding to me.
In retrospection, what should have struck me in the first place is the sequence
using std::swap;
swap(a1,a2);
It's so old-hat, and it clearly must be wrong, since using it repeatedly requires one to copy-paste the algorithm (of writing the using and then swapping). And you should not copy-paste, even if the algorithm is a two-liner. So what can we do better? How about turning it into a one-liner:
stdfallback::do_swap(a1,a2);
Let me provide the code that allows this:
#include <type_traits>
#include <utility>

namespace stdfallback
{
template<class T>
auto lvalue(void) -> typename std::add_lvalue_reference<T>::type;

template <typename T>
struct has_custom_swap
{
    template<class Tp>
    using swap_res = decltype(swap(lvalue<Tp>(),lvalue<Tp>()));

    template <typename Tp>
    static std::true_type test(swap_res<Tp> *);
    template <typename Tp>
    static std::false_type test(...);

    static const bool value = decltype(test<T>(nullptr))::value;
};

template<class T>
auto do_swap(T& t1, T& t2) -> typename std::enable_if<has_custom_swap<T>::value,void>::type
{
    swap(t1,t2);
}

template<class T>
auto do_swap(T& t1, T& t2) -> typename std::enable_if<!has_custom_swap<T>::value,void>::type
{
    std::swap(t1,t2);
}
}
In the solution you find a SFINAE-based traits class has_custom_swap, whose value is true or false depending on whether an unqualified call to swap for lvalues of the instantiated type is found (for that we need the lvalue template, similar to declval but resolving to an l-value rather than an r-value), and then two overloads of a do_swap function: one for the case where a custom swap is present, and one for where it is not. They have to be named differently from swap; otherwise the one calling the unqualified custom swap would not compile, because it would itself be ambiguous with the swap it tries to call.
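Usage then is a one-liner in both cases (a minimal sketch, reusing the namespace n classes from earlier):

n::a x, y;
stdfallback::do_swap(x, y); // n::swap found via ADL, custom path taken

int i = 1, j = 2;
stdfallback::do_swap(i, j); // no custom swap anywhere: falls back to std::swap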
So maybe we should consider using this pattern instead of the established using?
(To give proper credit, the traits solution was inspired by http://blog.quasardb.net/sfinae-hell-detecting-template-methods/)

Simulating argument-dependent lookup for template arguments

I've encountered this problem while writing some library-like code recently, and I thought discussing it might help others as well.
Suppose I have a library with some function templates defined in a namespace. The function templates work on types supplied by client code, and their inner workings can be customized based on type traits defined for the client types. All client definitions are in other namespaces.
For the simplest example possible, a library function would basically have to look like this (note that all the code snippets are just wishful thinking, nothing compiles):
namespace lib
{
template<typename T> void f()
{
    std::cout << traits_for<T>::str() << '\n'; //Use the traits in some way.
}
}
Client code would look like this:
namespace client
{
struct A { };
template<> std::string traits_for<A>::str() { return "trait value"; }
}
And then someone, somewhere could call
lib::f<client::A>();
and everything would magically work (the specialization of lib::f() would find the traits explicit specialization in the namespace where the template argument for T is declared, just like ADL does for functions and their arguments). The goal is to make it as easy as possible for client code to define those traits (there could be several) for each client class (there could be lots of those).
Let's see what we could do to make this work. The obvious thing is to define a traits class primary template in lib, and then explicitly specialize it for client types. But then clients can't define those explicit specializations in their own namespace; they have to exit it, at least up to the global namespace, define the explicit specialization, then re-enter the client namespace, which, for maximum fun, could be nested. I'd like to keep the trait definitions close to each client class definition, so this namespace juggling would have to be done near each class definition. Suddenly, a one-liner in client code has turned into a messy several-liner; not good.
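To make the complaint concrete, each client class would need something like this (illustrative only):

namespace client { namespace nested {
struct A { };
}} // leave the client namespace...

namespace lib {
template<> struct traits_for<client::nested::A> { /* ... */ };
} // ...specialize using the fully qualified name...

namespace client { namespace nested {
// ...then re-enter and carry on with the next class.
}}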
To allow the traits to be defined in the client namespace, we could turn the traits class into a traits function, that could be called from lib like this:
traits_for(T())
but now we're creating an object of class T just to make ADL kick in. Such objects could be expensive to construct (or even impossible in some circumstances), so this isn't good either. We have to keep working with types only, not their instances.
Giving up and defining the traits as members of the client classes is not an option either.
Some plumbing required to make this work would be acceptable, as long as it doesn't complicate the definitions for each class and trait in the client namespace (write some code once, but not for every definition).
I've found a solution that satisfies these stringent requirements, and I'll write it up in an answer, but I'd like to find out what people think about this: alternatives, critique of my solution, comments about how all of this is either bleeding obvious or completely useless in practice, the works...
To find a declaration based on some argument, ADL looks like the most promising direction. So, we'll have to use something like
template<typename T> ??? traits_helper(T);
But we can't create objects of type T, so this function should only appear as an unevaluated operand; decltype springs to mind. Ideally, we shouldn't even assume anything about T's constructors, so std::declval could also be useful:
decltype(traits_helper(std::declval<T>()))
What could this do? Well, it could return the actual traits type if the helper would be declared like this:
template<typename T> traits_for<T> traits_helper(T);
We've just found a class template specialization in another namespace, based on the declaration of its argument.
EDIT: Based on a comment from Yakk, traits_helper() should take a T&&, to allow it to work if T's move constructor is not available (the function may not actually be called, but the semantic constraints required for calling it must be met). This is reflected in the complete sample below.
All put together in a standalone example, it looks like this:
#include <iostream>
#include <string>
#include <utility>

namespace lib
{
//Make the syntax nicer for library code.
template<typename T> using traits_for = decltype(traits_helper(std::declval<T>()));

template<typename T> void f()
{
    std::cout << traits_for<T>::str() << '\n';
}
}

namespace client_1
{
//The following two lines are needed only once in every client namespace.
template<typename> struct traits_for { static std::string str(); };
template<typename T> traits_for<T> traits_helper(T&&); //No definition needed.

struct A { };
template<> std::string traits_for<A>::str() { return "trait value for client_1::A"; }

struct B { };
template<> std::string traits_for<B>::str() { return "trait value for client_1::B"; }
}

namespace client_2
{
//The following two lines are needed only once in every client namespace.
template<typename> struct traits_for { static std::string str(); };
template<typename T> traits_for<T> traits_helper(T&&); //No definition needed.

struct A { };
template<> std::string traits_for<A>::str() { return "trait value for client_2::A"; }
}

int main()
{
    lib::f<client_1::A>(); //Prints 'trait value for client_1::A'.
    lib::f<client_1::B>(); //Prints 'trait value for client_1::B'.
    lib::f<client_2::A>(); //Prints 'trait value for client_2::A'.
}
Note that no objects of type T or traits_for<T> are created; the traits_helper specialization is never called - only its declaration is used.
What's wrong with just requiring clients to throw their specializations in the right namespace? If they want to use their own, they can:
namespace client
{
struct A { };

struct traits_for_A {
    static std::string str() { return "trait value"; }
};
}

namespace lib
{
template <>
struct traits_for<client::A>
    : client::traits_for_A
{ };
}
Could even give your users a macro if you don't want them to write all that out:
#define PROVIDE_TRAITS_FOR(cls, traits) \
namespace lib { \
template <> struct traits_for<cls> : traits { }; \
}
So the above can become
PROVIDE_TRAITS_FOR(client::A, client::traits_for_A)
ADL is awesome. Keep it simple:
namespace lib {
// helpers for client code:
template<class T>
struct default_traits{
    using some_type=void;
};
struct no_traits{};

namespace details {
template<class T,class=void>
struct traits:lib::no_traits{};

template<class T>
struct traits<T,decltype(void(
    traits_func((T*)0)
))>:decltype(
    traits_func((T*)0)
){};
}

template<class T>
struct traits:details::traits<T>{};
}
Now simply add, in the namespace of the type foo:
namespace bob{
// use custom traits impl:
struct foo{};
struct foo_traits{
    using some_type=int;
};
foo_traits traits_func(foo const*);

// use default traits impl:
struct bar {};
lib::default_traits<bar> traits_func(bar const*);

// use SFINAE test for any type `T`:
struct baz {};
template<class T>
std::enable_if_t<
    std::is_base_of<T,baz>{},
    lib::default_traits<T>
>
traits_func(T const*);
}
and we are done. Defining a traits_func that takes a pointer convertible from foo* is enough to inject the trait.
If you fail to write such an overload, we get an empty traits, which is SFINAE friendly.
You can return lib::no_traits in an overload to explicitly turn off support, or just don't write an overload that matches a type.
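To round things off, here is how client code would observe the result (a sketch, assuming <type_traits> is included):

static_assert(std::is_same<lib::traits<bob::foo>::some_type, int>::value,
              "foo picked up its custom foo_traits");
static_assert(std::is_same<lib::traits<bob::bar>::some_type, void>::value,
              "bar picked up the default traits");
// A type with no matching traits_func overload yields lib::no_traits,
// which simply has no some_type member (SFINAE-friendly).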