How does this C++ template class code work?

I am trying to port Google Test (gtest) code to VxWorks 5.5. A serious drawback is that the Tornado 2.2 development environment uses the ancient GCC 2.96 compiler.
While analyzing the code I've found a part of gtest.h that I do not understand. How does this C++ template class work?
// ImplicitlyConvertible<From, To>::value is a compile-time bool
// constant that's true iff type From can be implicitly converted to
// type To.
template <typename From, typename To>
class ImplicitlyConvertible {
 private:
  // We need the following helper functions only for their types.
  // They have no implementations.

  // MakeFrom() is an expression whose type is From. We cannot simply
  // use From(), as the type From may not have a public default
  // constructor.
  static From MakeFrom();

  // These two functions are overloaded. Given an expression
  // Helper(x), the compiler will pick the first version if x can be
  // implicitly converted to type To; otherwise it will pick the
  // second version.
  //
  // The first version returns a value of size 1, and the second
  // version returns a value of size 2. Therefore, by checking the
  // size of Helper(x), which can be done at compile time, we can tell
  // which version of Helper() is used, and hence whether x can be
  // implicitly converted to type To.
  static char Helper(To);
  static char (&Helper(...))[2];  // NOLINT

  // We have to put the 'public' section after the 'private' section,
  // or MSVC refuses to compile the code.
 public:
  // MSVC warns about implicitly converting from double to int for
  // possible loss of data, so we need to temporarily disable the
  // warning.
#ifdef _MSC_VER
# pragma warning(push)          // Saves the current warning state.
# pragma warning(disable:4244)  // Temporarily disables warning 4244.
  static const bool value =
      sizeof(Helper(ImplicitlyConvertible::MakeFrom())) == 1;
# pragma warning(pop)           // Restores the warning state.
#elif defined(__BORLANDC__)
  // C++Builder cannot use member overload resolution during template
  // instantiation. The simplest workaround is to use its C++0x type traits
  // functions (C++Builder 2009 and above only).
  static const bool value = __is_convertible(From, To);
#else
  static const bool value =
      sizeof(Helper(ImplicitlyConvertible::MakeFrom())) == 1;
#endif  // _MSC_VER
};
When an object of this class is created, a boolean constant named value should contain the answer to whether template type From is implicitly convertible to template type To. To get the answer, two private functions are used, MakeFrom() and Helper(). But these two functions are only declared here, and I cannot find a definition for either of them. If nothing else, this implementation should fail to link.
Nor do I understand the syntax of the following:
static char (&Helper(...))[2];
Of course, this code compiles just fine (under Microsoft Visual C++ 7.1 or newer, or GCC 3.4 or newer), and the guys at Google know exactly what they are doing.
Please enlighten me! Not understanding this code will make me go crazy! :)

This is a standard trick with template programming.
Note that the comments say "by checking the size of Helper(x)": this underscores that the only thing the code does with Helper is evaluate sizeof(Helper(x)) for some x. The sizeof operator does not actually evaluate its argument (it doesn't need to; it only needs to find out how large it is, which is possible using only information available at compile time) and this is why there is no linker error (Helper is never really called).
The syntax that gives you trouble means that Helper is a function that accepts any number and type of parameters and returns a reference to a char[2]. To write a signature for this type of function (a variadic function) one needs to use ellipsis (...) as the specification for the last argument.
Variadic functions are a feature inherited from C that should generally be avoided and that wreaks havoc when used with class types, but in this case it does not matter because -- as mentioned earlier -- Helper will not be actually called.
The class ties this all together by allowing you to use the syntax
ImplicitlyConvertible<From, To>::value
To produce value, the code "fakes" calling Helper and passing it an instance of From as an argument¹. It relies on the compiler's overload resolution to determine if the overload that takes a To would be called in this scenario; if so, the return value of that overload is char which has a guaranteed size of 1 and value ends up being true. Otherwise the variadic overload (which can take any type of argument) is selected, which returns a char[2]. This has a size greater than 1, so value ends up false.
¹ Note that here the "sizeof does not actually evaluate the expression" trick is used again: how do you tell the compiler that the argument to Helper is an instance of From? You could use From(), but then From would need to have a default public constructor for the code to compile. So you just tell the compiler "I have a function MakeFrom that returns a From" -- the function will not be actually called.
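To see the whole mechanism in one piece, here is a stripped-down, self-contained sketch of the same trick (same names as the gtest code, minus the compiler-specific workarounds):

```cpp
#include <cassert>

// Minimal re-creation of the gtest trick: value is true iff From is
// implicitly convertible to To. Nothing runs at run time; sizeof only
// inspects which Helper overload would be chosen by overload resolution.
template <typename From, typename To>
class ImplicitlyConvertible {
 private:
  static From MakeFrom();         // declared only, never defined or called
  static char Helper(To);         // picked if From -> To converts; size 1
  static char (&Helper(...))[2];  // fallback; returns a char[2], size 2
 public:
  static const bool value =
      sizeof(Helper(MakeFrom())) == 1;
};
```

Since neither MakeFrom() nor Helper() is ever called or referenced outside an unevaluated sizeof, the missing definitions never reach the linker.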


Difference between boost optional and std::experimental optional assignment

Usually when a function returns boost::optional, I've seen a lot of people return an empty brace {} to designate an empty value; that works fine and is shorter than returning boost::none.
I tried to do something similar to empty a boost::optional<int>, but when the copy assignment operator (or, more probably, the move assignment operator) is called with an empty brace on the right-hand side, the empty brace is converted to an int and that value is then assigned to the optional, so I end up with the variable set to 0 and not the empty value I was expecting. Here's an example: https://godbolt.org/g/HiF92v. If I try the same with std::experimental::optional I get the result I'm expecting (just replace boost::optional with std::experimental::optional in the example and you will see that the instruction becomes mov eax, eax).
Also, if I try a different template argument for the boost::optional (a non-integer type), some compilers accept the code (with the behavior I'm expecting; an example here: http://cpp.sh/5j7n) and others don't. So even for the same library the behavior differs depending on the template argument.
I'd like to understand what is going on here. I know it has something to do with the fact that I'm using a C++14 feature with a library whose design doesn't take it into account. I read the boost/optional header but got lost in the details; I also tried to study the compiled code without inlining, with a similar result.
I'm using gcc 4.9.2 with -std=c++14 and boost 1.57.
btw: I know I should have used boost::optional::reset or boost::none, but I was trying to be consistent with the semantics in the rest of the code base.
To understand what is going on, consider this example first:
void fun(int) { puts("f int"); }
void fun(double) { puts("f double"); }
int main() {
    fun({}); // error
}
This results in a compiler error, because the overload resolution is inconclusive: double and int fit equally well. But, if a non-scalar type comes into play, the situation is different:
struct Wrap{};
void fun(int) { puts("f(int)"); }
void fun(Wrap) { puts("f(Wrap)"); }
int main() {
    fun({}); // ok: f(int) selected
}
This is because a scalar is a better match. If, for some reason, I want the same two overloads but at the same time I would like fun({}) to select overload fun(Wrap), I can tweak the definitions a bit:
template <typename T>
std::enable_if_t<std::is_same<int, std::decay_t<T>>::value>
fun(T) { puts("f int"); }
void fun(Wrap) { puts("f(Wrap)"); }
That is, fun(Wrap) remains unchanged, but the first overload is now a template that takes any T. With enable_if, however, we constrain it so that it only works with type int. It is quite an 'artificial' template, but it does the job. If I call:
fun(0); // picks fun(T)
The artificial template gets selected. But if I type:
fun({}); // picks fun(Wrap)
The artificial overload is still a template, and T cannot be deduced from a braced-init-list, so it is not considered in this case; the only viable overload is fun(Wrap), so it gets selected.
The same trick is employed in std::optional<T>: it does not have an assignment from T. Instead it has a similar artificial assignment template that takes any U, but is later constrained, so that T == U. You can see it in the reference implementation here.
boost::optional<T> was implemented before C++11, unaware of this 'reset idiom'. Therefore it has a normal assignment from T, and in cases where T happens to be a scalar this assignment from T is preferred. Hence the difference.
Given all that, I think Boost.Optional has a bug in that it does the opposite of std::optional. Even if the std behavior is not implementable in Boost.Optional, the assignment should at least fail to compile, in order to avoid run-time surprises.
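To make the difference concrete, here is a toy sketch contrasting the two designs. BoostishOpt and StdishOpt are invented names; real boost::optional and std::optional are far more involved, but the overload-resolution effect is the same:

```cpp
#include <cassert>
#include <type_traits>

// Boost-style: a plain operator=(T). For scalar T, `o = {}` converts {}
// to T (i.e. 0) and assigns it, leaving the optional engaged with 0.
struct BoostishOpt {
    BoostishOpt() : engaged(false), val(0) {}
    BoostishOpt& operator=(int v) { engaged = true; val = v; return *this; }
    bool engaged;
    int val;
};

// std-style: the value assignment is an "artificial" constrained template.
// A braced-init-list {} cannot deduce U, so `o = {}` falls through to the
// implicit copy/move assignment from a value-initialized (empty) object.
struct StdishOpt {
    StdishOpt() : engaged(false), val(0) {}
    template <typename U,
              typename std::enable_if<
                  std::is_same<int, typename std::decay<U>::type>::value,
                  int>::type = 0>
    StdishOpt& operator=(U&& v) { engaged = true; val = v; return *this; }
    bool engaged;
    int val;
};
```

With the plain operator=(int), converting {} to int is an identity conversion, which beats the user-defined conversion to the class type, so the scalar assignment wins. With the constrained template, {} cannot deduce U, so assignment from an empty, value-initialized object is chosen instead.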

Why can't I apply mem_fn to a member function of std::string?

#include <string>
#include <functional>
using std::mem_fn;

struct int_holder {
    int value;
    int triple() { return value * 3; }
};

int main(int argc, const char * argv[])
{
    std::string abc{"abc"};
    int_holder one{1};
    auto f1 = mem_fn(&std::string::clear);
    auto f2 = mem_fn(&int_holder::triple);
    f1(abc);
    f2(one);
}
I tested this code in Xcode and the compiler issued an error.
It seems mem_fn is fine with member functions of a user-defined class, but not with member functions of std::string. What's the difference, and why?
Thanks for reading. Please help me out!
I can reproduce this with Clang 3.1-3.3 as well as 3.6. Looks like bug 16478.
The simplest fix is to just use a lambda or equivalent. Other, completely non-portable workarounds include disabling extern templates with
#ifndef _LIBCPP_EXTERN_TEMPLATE
#define _LIBCPP_EXTERN_TEMPLATE(...)
#endif
before you include any headers (in essence applying r189610), or doing an explicit instantiation of either the member function (template void std::string::clear();) or the entire class.
That said, you should not take the address of a member function of a standard library class. [member.functions]/p2:
An implementation may declare additional non-virtual member function
signatures within a class:
by adding arguments with default values to a member function signature;187
by replacing a member function signature with default values by two or more member function signatures with equivalent behavior; and
by adding a member function signature for a member function name.
187) Hence, the address of a member function of a class in
the C++ standard library has an unspecified type.
As for the standard, you can't take a pointer to any standard nonstatic member because the library implementation is allowed to add hidden overloads, defaulted function template parameters, SFINAE in the return type, etc.
In other words, & std::string::clear is simply not a supported operation.
In terms of Clang, it looks like an issue with hidden symbol visibility. Each shared object (linker output) file gets its own copy of certain functions to avoid the appearance that third-party shared libraries implement the standard library. With different copies floating around, the equality operator over PTMFs would not work: If you retain the value & std::string::clear from an inline function, it might not compare equal to itself later. However, it's probably a bug, since std::string should be completely implemented by the libc++.so shared library. Only other specializations of std::basic_string could really justify this behavior.
A good fix would be to use a lambda instead. []( std::string & o ) { o.clear(); } is a superior alternative to mem_fn. It avoids the indirect call and its sizeof is smaller. Really, you shouldn't use mem_fn unless absolutely necessary.
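A minimal sketch of that portable alternative (make_clearer is an invented name for illustration):

```cpp
#include <string>
#include <functional>

// Portable alternative to mem_fn(&std::string::clear): wrap the call in a
// lambda, so no pointer to a standard-library member function is formed.
std::function<void(std::string&)> make_clearer() {
    return [](std::string& s) { s.clear(); };
}
```

The lambda relies only on the guaranteed, callable interface of std::string, not on the unspecified type of its member function pointers.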

Overloading function calls for compile-time constants

I'm interested to know whether one can distinguish between function calls whose arguments are compile-time constants and calls whose arguments are not.
For example:
int a = 2;
foo( a ); // #1: Compute at run-time
foo( 3 ); // #2: Compute at compile-time
Is there any way to provide overloads that distinguish between these two cases? Or more generally, how do I detect the use of a literal type?
I've looked into constexpr, but a function parameter cannot be constexpr. It would be neat to have the same calling syntax, but be able to generate different code based on the parameters being literal types or not.
You cannot distinguish between a compile-time literal int and a run-time variable int. If you need to do this, you can provide an overload that can only work at compile-time:
void foo(int ); // run-time
template <int I>
void foo(std::integral_constant<int, I> ); // compile-time
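A usage sketch of this two-overload approach (the doubling bodies are invented for illustration): the caller opts into the compile-time path explicitly by wrapping the constant in an integral_constant.

```cpp
#include <cassert>
#include <type_traits>

int foo(int i) { return i * 2; }                     // run-time version

template <int I>
constexpr int foo(std::integral_constant<int, I>) {  // compile-time version
    return I * 2;
}
```

A plain int always selects the first overload; an integral_constant argument matches the template exactly (beating the implicit conversion to int), and the result is usable in constant expressions.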
I think the above answers somehow miss the point that the question was trying to make.
Is there any way to provide overloads that distinguish between these two cases? Or more generally, how do I detect the use of a literal type?
This is what an 'rvalue reference' is for: a literal is an rvalue. (Note, though, that this distinguishes rvalues from lvalues, not compile-time constants from run-time values: foo(a + 1) would also pick the rvalue overload.)
It would be neat to have the same calling syntax, but be able to generate different code based on the parameters being literal types or not.
you can simply overload your foo() function as:
void foo(int&& a);
So when you call the function with a literal, e.g. foo(3), the compiler knows you need the above overload, as 3 is an rvalue. If you call it as foo(a), the compiler will pick your original version, foo(const int& a), as int a = 2; declares an lvalue.
And this gives you the same calling syntax.
In the general case you couldn't get foo(3) evaluated at compile time. What if foo(x) were defined as "add x days to the current date", and you first run the program next Tuesday? If it really is a constant, then use a symbolic constant. If it is a simple function, you could try a #define (which will be replaced at compile time with the implementation, but will still be evaluated at runtime), e.g.:
#define MIN(x,y) ((x)<(y)?(x):(y))
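For completeness: on a C++11 compiler, a constexpr function gives what the question was after with the same calling syntax. It folds to a constant when its arguments are constant expressions and runs at run time otherwise, without the macro's double-evaluation hazard. A minimal sketch (min_int is an invented name):

```cpp
#include <cassert>

// constexpr alternative to the MIN macro: evaluated at compile time when
// both arguments are constant expressions, at run time otherwise, and
// unlike MIN(i++, j) it evaluates each argument exactly once.
constexpr int min_int(int x, int y) { return x < y ? x : y; }
```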

C++ - What is a "type transparent class"?

Using gcc to compile a program that includes support for decimal data types, I recently encountered the following error:
error: type transparent class 'std::decimal::decimal32' has base classes
A quick look at GCC's source tree shows that this error message is found in gcc/cp/class.c.
What is a "type transparent class"? Why is it an error for such a class to have "base classes"?
Reading the source code of GCC a bit more, in semantics.c:
if (TREE_CODE (t) == RECORD_TYPE
    && !processing_template_decl)
  {
    tree ns = TYPE_CONTEXT (t);
    if (ns && TREE_CODE (ns) == NAMESPACE_DECL
        && DECL_CONTEXT (ns) == std_node
        && DECL_NAME (ns)
        && !strcmp (IDENTIFIER_POINTER (DECL_NAME (ns)), "decimal"))
      {
        const char *n = TYPE_NAME_STRING (t);
        if ((strcmp (n, "decimal32") == 0)
            || (strcmp (n, "decimal64") == 0)
            || (strcmp (n, "decimal128") == 0))
          TYPE_TRANSPARENT_AGGR (t) = 1;
      }
  }
This code means that a type is marked transparent if:
It is a struct, but not a template;
And it is at namespace level, and that namespace is std::decimal.
And it is named decimal32, decimal64 or decimal128.
In class.c there is the error check you encountered, and a few more.
And in mangle.c:
/* According to the C++ ABI, some library classes are passed the
same as the scalar type of their single member and use the same
mangling. */
if (TREE_CODE (type) == RECORD_TYPE && TYPE_TRANSPARENT_AGGR (type))
type = TREE_TYPE (first_field (type));
The comment is key here. I think it means that a transparent type is replaced by the type of its first (and only) member, so it can be used anywhere its first member can. For example, in my include/decimal the class std::decimal::decimal32 has a single field of type __decfloat32 (from a previous typedef float __decfloat32 __attribute__((mode(SD)));), so any function that takes a __decfloat32 can take a std::decimal::decimal32 and vice versa. Even the function decoration (name mangling) is done the same. The idea is probably to make these classes ABI-compatible with the C types _Decimal32, _Decimal64 and _Decimal128.
Now, how are you getting a decimal32 class with base classes? My only guess is that you are including incompatible (maybe older) header files with a totally different implementation.
UPDATE
After some investigation, it looks like my guess about the ABI and function decoration is right. The following code:
#include <decimal/decimal>
using namespace std::decimal;
//This is a synonym of C99 _Decimal32, but that is not directly available in C++
typedef float Decimal32 __attribute__((mode(SD)));
void foo(decimal32 a) {}
void foo(Decimal32 a) {}
gives the curious error:
/tmp/ccr61gna.s: Assembler messages:
/tmp/ccr61gna.s:1291: Error: symbol `_Z3fooDf' is already defined
That is, the compiler front end sees no problem with the overload and emits the assembly code, but since both functions are decorated identically, the assembler fails.
Now, is this a non-conformance in GCC, as Ben Voigt suggests in the comments? I don't know... you should be able to write overloaded functions for any two different types you want. But on the other hand, it is impossible to get the Decimal32 type without using a compiler extension, so the meaning of this type is implementation-defined...
As mentioned in one of my comments, a type-transparent class is a wrapper class around some primitive type, such as an integer.
They are called transparent because of their use of operator overloading, which makes them act just like the primitive type they wrap.
That is, to wrap an int transparently in a class, you'll need to overload the = operator, the ++ operator, and so on...
Apparently, GNU's libstdc++ uses such classes for some types. Not sure why...
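As an illustration of that idea, here is a hypothetical IntWrapper fleshed out with a few overloads. Note this only mimics an int at the source level; the ABI-level transparency discussed above is what the compiler-internal TYPE_TRANSPARENT_AGGR flag adds on top:

```cpp
#include <cassert>

// A wrapper that behaves like the int it wraps, via operator overloading.
class IntWrapper
{
    int _x;
public:
    IntWrapper(int x = 0) : _x(x) {}
    IntWrapper& operator=(int x) { _x = x; return *this; }
    IntWrapper& operator++() { ++_x; return *this; }  // pre-increment
    operator int() const { return _x; }               // read back as an int
};
```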
About the base class issue, while I'm not 100% sure, here's a guess.
When dealing with inheritance in C++, you'll often need to declare virtual methods to resolve issues with upcasting.
Declaring a method as virtual tells the compiler to create a virtual table for the class's methods, so they can be looked up at runtime.
This, of course, increases the instance size of the class.
For a type-transparent class this is not acceptable: the compiler would no longer be able to place an instance of the class in a register (e.g. when passing arguments), unlike the wrapped type, and so the class would not be transparent anymore.
Edit
I've no idea how to declare such a transparent-class in GCC. The closest thing I can think of is transparent unions:
http://gcc.gnu.org/onlinedocs/gcc/Type-Attributes.html
Something like:
class IntWrapper
{
    int _x;
    /* Constructor, operator overloads... */
};

typedef union
{
    int integerValue;
    IntWrapper integerWrapper;
} IntUnion __attribute__( ( __transparent_union__ ) );
My GCC version does not seem to support it, but according to the documentation (see the link above), this would allow an int or an IntWrapper to be passed to functions transparently, using the same calling convention as int.

Strange compilation behaviour when calling a template method from another template object

Could someone explain why the following C++ code is not behaving as expected?
struct Object {
    template< int i >
    void foo(){ }
};

template<int counter>
struct Container {
    Object v[counter];
    void test(){
        // this works as expected
        Object a; a.foo<1>();
        // This works as well:
        Object *b = new Object(); b->foo<1>();
        // now try the same thing with the array:
        v[0] = Object(); // that's fine (just testing access to the array)
# if defined BUG1
        v[0].foo<1>(); // compilation fails
# elif defined BUG2
        (v[0]).foo<1>(); // compilation fails
# elif defined BUG3
        auto &o = v[0];
        o.foo<1>(); // compilation fails
# else
        Object &o = v[0];
        o.foo<1>(); // works
# endif
    }
};

int main(){
    Container<10> container;
}
The code above compiles fine without any flag. If one of the flags BUG1 to BUG3 is defined, compilation fails with both GCC 4.6 and 4.7 and with Clang 3.2 (which seems to indicate it is not a GCC bug).
The BUG1, BUG2 and BUG3 variants and the working version are semantically doing exactly the same thing (i.e. calling a method of the first element of the Object array), but only the last version compiles. The problem only seems to arise when I try to call a templated method on a member of a template class.
BUG1 is just the "normal" way of writing the call.
BUG2 is the same thing, but the array access is wrapped in parentheses in case there was a precedence problem (though there shouldn't be one).
BUG3 shows that type inference does not work either (it needs to be compiled with C++11 support).
The last version works fine, but I don't understand why using a temporary variable to store the reference solves the problem.
I am curious to know why the other three are not valid.
Thanks
You have to use template as:
v[0].template foo<1>();
auto &o = v[0];
o.template foo<1>();
Because the declaration of v depends on the template argument, which makes v a dependent name.
Here the template keyword tells compiler that whatever follows is a template (in your case, foo is indeed a template). If foo is not a template, then the template keyword is not required (in fact, it would be an error).
The problem is that o.foo<1>() can be parsed/interpreted in two ways: one is just as you expect (a function call), the other way is this:
(o.foo) < 1 //partially parsed
that is, foo is a data member (not a function), and you're comparing it with 1. So to tell the compiler that < is not used to compare o.foo with 1, but rather to pass the template argument 1 to the function template, you're required to use the template keyword.
Inside templates, expressions can be type-dependent or value-dependent. From 14.6.2:
types and expressions may depend on the type and/or value of template parameters
In your case, counter is a template argument, and the declaration of v depends on it, making v[0] a value-dependent expression. Thus the name foo is a dependent name, which you must disambiguate as a template name by saying:
v[0].template foo<1>();
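Putting it together, here is a minimal compilable sketch of the fix (the int return values are added for illustration, so the calls can be observed):

```cpp
#include <cassert>

// `v` depends on the template parameter `counter`, so v[0] is a dependent
// expression and `foo` must be marked as a template at the call site.
struct Object {
    template <int i>
    int foo() { return i; }
};

template <int counter>
struct Container {
    Object v[counter];
    int test() {
        Object a;
        int x = a.foo<1>();              // fine: `a` is not dependent
        int y = v[0].template foo<2>();  // dependent: disambiguator needed
        return x + y;
    }
};
```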