c++11 tie name clash with boost - c++

I am trying to migrate some code from boost::tuple to std::tuple, but I'm getting some odd errors. After a using namespace std directive (and never one for boost), I expect an unqualified tie to resolve to std::tie. However, this seems to fail when the tuple contains a pointer to a Boost container, for example.
#include <tuple>
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/identity.hpp>

#ifdef USE_STD
#define TIE std::tie
#else
#define TIE tie
#endif

typedef boost::multi_index_container<
    int,
    boost::multi_index::indexed_by<
        boost::multi_index::ordered_non_unique<
            boost::multi_index::identity<int>
        >
    >
> Set;

std::tuple<int, int*> make_i_ptr();
std::tuple<int, Set*> make_set();

int main()
{
    using namespace std;
    int i;
    int* i_ptr;
    Set* set_ptr;
    tie(i, i_ptr) = make_i_ptr();
    TIE(i, set_ptr) = make_set();
    return 0;
}
If I compile with g++ -std=c++0x -c test.cpp -DUSE_STD, all is well. However, without -DUSE_STD, I get compile errors suggesting g++ tries to use boost::tuples::tie. I'm using g++ 4.8.1 and boost 1.55.0. Do you think this is a bug in Boost? Or is there some part of the spec I'm missing?

Lookup is complicated. The problem, as others have mentioned, is Argument Dependent Lookup, or ADL. The rules for ADL were added to allow operators to be defined in the same namespace as the types they apply to, and to enable lookup to find those operators when present. This was later extended to all other functions in the process of standardization.
The issue here is that tie(...) is a function call. The compiler attempts regular unqualified lookup from the point of use (inside main), moving outward through the enclosing scopes. The using directive adds ::std to the search when lookup reaches the global namespace (the nearest enclosing namespace common to ::std and main). At that point, since the identifier resolves to a function, ADL kicks in.
ADL adds the namespaces associated with the arguments of the function call, which in this case is boost (fundamental types like int have no associated namespaces). At this point the compiler sees two declarations of tie, std::tie and boost::tie, causing the ambiguity.
As you already know, the solution is to qualify the call as std::tie (which I would advise you to do even without this issue). Regarding the comment:
If ADL makes it resolve to boost::tie for... "my convenience" and then the compilation fails, shouldn't that be a clue to the compiler that it picked the wrong function?!
I don't know exactly what error you are getting (I don't use Boost, and I don't know what overloads of tie it provides). If the problem is indeed one of ambiguity, then the compiler cannot resolve the identifier and cannot continue, so it stops and asks the programmer to resolve it. If instead it uniquely picked boost::tie and the compilation failed later, it means there is an overload of boost::tie that is a better match than std::tie, and that one was selected. The assignment from std::tuple may then have failed, but the compiler cannot know whether the real problem was during lookup or in the assignment itself (did you intend to assign that variable? maybe a different one?), so again it fails and tells you what the problem is.
Note that in general the process of compilation always moves forward; the compiler does not backtrack to second-guess its own decisions*. There is a set of rules, and those rules are applied at each step. If there is ambiguity, compilation stops; if not, there is a single best candidate, that point is resolved, and the compiler moves on to the next. Attempting to go back and undo decisions would make compilation painfully slow (the number of paths that could be taken would be exponential).
* As always there are some exceptions, but they are just that: exceptions. Notably, during overload resolution, if a template is picked as the best candidate but substitution of its type arguments fails, it is discarded and the next best candidate is chosen.

Related

Warn against missing std:: prefixes due to ADL

It is possible to omit the std:: prefix for <algorithm> functions when the argument types are in that namespace, which is usually the case. Is there any warning or clang-tidy rule that finds such omissions?
#include <vector>
#include <algorithm>

int main()
{
    std::vector<int> v;
    for_each(v.begin(), v.end(), [](auto){});
    return 0;
}
The above example, compiled with the latest clang and -Wall, -Wextra and -Wpedantic, does not emit any diagnostic:
https://godbolt.org/z/dTsKbbEKe
There is an open change in clang-tidy that could be used to flag this:
D72282 - [clang-tidy] Add bugprone-unintended-adl
[patch] Summary
This patch adds bugprone-unintended-adl which flags uses of ADL that are not on the provided whitelist.
bugprone-unintended-adl
Finds usages of ADL (argument-dependent lookup), or potential ADL in the case of templates, that are not on the provided lists of allowed identifiers and namespaces. [...]
.. option:: IgnoreOverloadedOperators
   If non-zero, ignores calls to overloaded operators using operator syntax (e.g. a + b), but not function call syntax (e.g. operator+(a, b)). Default is 1.
.. option:: AllowedIdentifiers
   Semicolon-separated list of names that the check ignores. Default is swap;make_error_code;make_error_condition;data;begin;end;rbegin;rend;crbegin;crend;size;ssize;empty.
.. option:: AllowedNamespaces
   Semicolon-separated list of namespace names (e.g. foo;bar::baz). If the check finds an unqualified call that resolves to a function in a namespace in this list, the call is ignored. Default is an empty list.
There seems to have been no activity on the patch since July 2020, though; if this is of interest to the OP, they could try to resuscitate the patch.

Using Qt's Q_DECLARE_FLAGS and Q_DECLARE_OPERATORS_FOR_FLAGS without Class Declaration

I have the following enum declaration and I'd like to make use of the QFlags support in Qt for extra type safety:
namespace ssp
{
    enum VisualAttribute
    {
        AttrBrushColor     = 0x001,
        AttrBrushTexture   = 0x002,
        AttrPenCapStyle    = 0x004,
        AttrPenColor       = 0x008,
        AttrPenJoinStyle   = 0x010,
        AttrPenPattern     = 0x020,
        AttrPenScalable    = 0x040,
        AttrPenWidth       = 0x080,
        AttrSymbolColor    = 0x100,
        AttrTextColor      = 0x200,
        AttrTextFontFamily = 0x400,
        AttrTextHeight     = 0x800,
        AttrAllFlags       = 0xfff
    };

    Q_DECLARE_FLAGS (VisualAttributes, VisualAttribute)
    Q_DECLARE_OPERATORS_FOR_FLAGS (VisualAttributes)
}
This declaration works for methods where I declare a VisualAttributes parameter and pass an OR'd list of values, so that part is fine. But it (apparently) breaks everywhere other flag types, such as Qt::WindowFlags, are used. The compilation error I'm getting is:
error C2664: 'void QWidget::setWindowFlags(Qt::WindowFlags)' : cannot convert argument 1 from 'int' to 'Qt::WindowFlags'
No constructor could take the source type, or constructor overload resolution was ambiguous
The issue seems to be with the Q_DECLARE_OPERATORS_FOR_FLAGS declaration: if I remove it, the compilation issues with other flag types are resolved, but since it declares the operators for my flags, the compiler then won't accept the OR'd list. Including the declaration results in some kind of ambiguous definition, but I don't understand what it is.
The QFlags documentation shows an example of embedding the enum into a class declaration, and that not only seems cumbersome, but made a bigger mess than what I'm already dealing with. I also looked at Qt's flag declarations (for Qt::AlignmentFlag), and I don't see that they're doing anything different than I am in the code segment above.
This is actually a very old Qt bug, that was fixed in Qt 5.12. As a general rule, due to argument dependent lookup, custom operators should be declared in the same namespace as their arguments. Back when Qt first introduced these flag enums and operators they chose to declare them in the global namespace, possibly due to poor namespace or argument dependent lookup support in compilers at that time.
So if one is a good, modern C++ citizen and declares one's custom operator| in the same namespace as its arguments, lookup fails to find the Qt operator| when compiling code in that same namespace. Ordinary unqualified lookup finds the non-matching operator| in the current namespace and stops there, and argument dependent lookup doesn't help, because Qt declared its operators in the global namespace rather than in the namespace of their argument types. Lookup therefore never reaches the global namespace, where it would find Qt's operator|.
You can see a very simplified example of this in action here.
So you have three options:
Do what Qt (< 5.12) does and declare your custom operators in the global namespace, knowing that they might not be found by more modern C++ constructs.
Do what C++ best practices recommend and place your custom operators in the same namespace as their arguments and sprinkle your code with using ::operator|; to make the Qt enum operators compile.
Upgrade to Qt 5.12 or newer.
I was able to resolve this by moving the Q_DECLARE_OPERATORS_FOR_FLAGS declaration out of the namespace block, so it becomes:
Q_DECLARE_OPERATORS_FOR_FLAGS (ssp::VisualAttributes)
This resolved all of the compilation issues.

std::bind and winsock.h bind confusion

I'm working on a very large project, and in one file we all of a sudden got a compile-time error where the compiler seems to think that our call to the winsock.h bind() is actually a call to std::bind(). It seems that somewhere in an include file there is a using namespace std directive. We could try to find all the places these using namespace std directives appear and remove them, but perhaps there is a better way?
You can change your calls to use ::bind() to specify the global namespace.
Yes, this is unfortunate. As I described at http://gcc.gnu.org/ml/libstdc++/2011-03/msg00143.html the std::bind template is a better match unless you use exactly the right argument types:
The problem is that the socket bind() function has this signature:
int bind(int, const sockaddr*, socklen_t);
so the call in the example using a non-const pointer finds that the
variadic template std::bind is a better match. The same would happen
if the third argument was any integral type except socklen_t.
Your code would work with GCC because I added a conforming extension to GCC's std::bind to prevent this ambiguity, by removing std::bind from the overload set if the first argument is "socket-like", which I defined using is_integral and is_enum. That doesn't help with other implementations though.
Removing the using namespace std; is a good idea anyway, but may not be entirely sufficient, because an unqualified call to bind() that happens to use an argument whose type is declared in namespace std (such as std::string) could still find std::bind by argument dependent lookup. Jonathan Potter's answer is the best way to ensure you get the right function: qualify it as ::bind.

Incorrect overload resolution for 2-argument functions

Let's take the following example program:
#include <cmath>

namespace half_float
{
    template<typename T> struct half_expr {};

    struct half : half_expr<half>
    {
        operator float() const;
    };

    template<typename T> half sin(const half_expr<T>&);
    template<typename T> half atan2(const half_expr<T>&, const half_expr<T>&);
}

using namespace std;
using half_float::half;

int main()
{
    half a, b;
    half s = sin(a);
    half t = atan2(a, b);
}
In VS 2010 this compiles just fine (ignore the obvious linker errors for now). But in VS 2012 this gives me:
error C2440: 'conversion' : cannot convert from 'float' to
'half_float::half'
So it seems overload resolution doesn't pick the version from namespace half_float (which ADL should accomplish), but the one from std, using the implicit conversion to float. The strange thing is that this only happens for the atan2 call and not for the sin call.
In the larger project, where this error first occurred, it also happens for other 2-argument functions (or rather, those with two half arguments), like fmod, but not for any 1-argument function. Likewise, in the larger project it works fine with gcc 4.6/4.7 and clang 3.1 without error, though I didn't explicitly test this SSCCE version there.
So my question is: is this erroneous behaviour on VS 2012's side (given that it only happens in 2012 and only for the 2-argument functions), or did I overlook some subtlety in the overload resolution rules (which can indeed get a bit tricky, I guess)?
EDIT: It also happens if I write using namespace half_float directly, or put the whole thing in the global namespace. Likewise, it also happens if I don't write using namespace std at all, but that is just because the VS implementation also puts the math functions in the global namespace.
EDIT: It happens both with the original VC 2012 compiler as well as the November 2012 CTP of it.
EDIT: Although I'm not completely sure it is really a violation of the standard in the strictest sense, I have filed a bug for it based on the findings in my answer, since it is at least inconsistent with the definition of the 1-argument functions and deserves further investigation by the VS team.
I think I found the cause. The C++ standard says in section 26.8 [c.math], that for the mathematical functions of the C library,
there shall be additional overloads sufficient to ensure:
If any argument corresponding to a double parameter has type long double, then all arguments corresponding to double parameters are effectively cast to long double.
Otherwise, if any argument corresponding to a double parameter has type double or an integer type, then all arguments corresponding to double parameters are effectively cast to double.
Otherwise, all arguments corresponding to double parameters are effectively cast to float.
Which can also be seen in the atan2 documentation.
Those overloads are provided by VS 2012 through the use of a general function template of the form:
template<typename T, typename U> typename common_float_type<T, U>::type atan2(T, U);
So we have a function template whose instantiation would involve an implicit conversion (from half to const half_expr<half>&) and a function template that can be instantiated without any conversion. Thus the latter is preferred. This doesn't happen for the 1-argument functions, because for those a general version only has to exist for integral arguments, and VS 2012 provides it for only those, using std::enable_if with std::is_integral.
But I think the standard is a bit unclear about the fact that those "additional overloads" are to be provided only for builtin types. So in the end I'm still not sure if VS 2012 strictly violates the standard with its overly generic functions or if it is a viable implementation option to provide those.
EDIT: As it turns out, there is already defect report 2086 about the standard's unclear wording, and a fix is on its way, limiting the requirement for those additional overloads to arithmetic types only. Since this seems to have always been the original intent (and is what nearly all existing implementations do) and it was merely the wording that was unclear, I would indeed regard this as a bug in VS 2012's implementation.
I just tried your code, and I figured out what was wrong with it.
Since you haven't implemented half_float::sin and half_float::atan2, the linker will throw an error anyway. So if you implement those two functions, that should solve it (I implemented them by letting them return an empty half, which is, of course, meaningless).
After I took that step (providing a (meaningless) implementation of the two required functions), the error messages almost magically disappeared.
Maybe this isn't the solution to your problem, as I'm using GCC, and not VS.
EDIT: I just tried the sample I used with G++ in Visual Studio, which gave me a peculiar error message. Given the strangeness of the error, and the code working with GCC, I must conclude that this is a bug in VC2012.
A workaround is to specialise _Common_float_type for half and half_expr to be an undefined type, so that SFINAE gets rid of the VS2012 version of atan2.
namespace std {
    template<class T1, class T2>
    struct _Common_float_type<half_float::half_expr<T1>, half_float::half_expr<T2>>;
    template<class T2>
    struct _Common_float_type<half_float::half, half_float::half_expr<T2>>;
    template<class T1>
    struct _Common_float_type<half_float::half_expr<T1>, half_float::half>;
    template<>
    struct _Common_float_type<half_float::half, half_float::half>;
}
Note that you have to specialise for all four combinations of half and half_expr, because template specialisation doesn't consider base classes.

Why is the "using" directive still needed in C++11 to bring forward methods from the base class that are overloaded in the derived class

The example below gets the following compiled error:
test.cpp: In function ‘int main(int, char**)’:
test.cpp:26:8: error: no match for call to ‘(Derived) (p1&)’
test.cpp:14:8: note: candidate is:
test.cpp:16:10: note: void Derived::operator()(const p2&)
test.cpp:16:10: note: no known conversion for argument 1 from ‘p1’ to ‘const p2&’
It was my understanding this was getting changed in C++11 so you weren't required to put the using statement in. Is that not correct? Is there some other way around this?
Example (Compiled with gcc 4.7 using --std=c++11):
#include <iostream>
#include <string>

using namespace std;

struct p1{};
struct p2{};

struct Base
{
    void operator()(const p1&) { cout << "p1" << endl; }
};

struct Derived : public Base
{
    void operator()(const p2&) { cout << "p2" << endl; }
    //Works if I include: using Base::operator();
};

int main(int argc, char** argv)
{
    p1 p;
    p2 pp;
    Derived d;
    d(p);
    d(pp);
}
To the best of my knowledge, no, this has not changed in C++11.
And the reason it has not changed is that this behavior is not an accident. The language works like this by design. There are advantages and disadvantages to it, but it's not something that just happened because the people on the standards committee forgot about it.
And no, there's no way around it. It's just how member name lookup works in C++.
It was my understanding this was getting changed in C++11 so you weren't required to put the using statement in. Is that not correct?
No, member functions can still be hidden in C++11.
Is there some other way around this?
Using declarations are the intended remedy.
Just to clarify the situation: I can't imagine this ever changing in C++. To even hope to make it a tenable change, you'd have to tighten up type safety to the point that it would no longer be compatible with C (e.g., you'd pretty much have to eliminate all implicit conversions).
The situation is fairly simple: right now, name lookup stops at the first scope found that includes at least one instance of the name being used. If that instance of the name doesn't match the way you tried to use the name, then compilation fails.
Consider the obvious alternative: instead of stopping the search at that point, the compiler continues searching outward through the scopes, essentially creating an overload set of all those names, and then picking the one that matches best.
In a case like this, a seemingly trivial change at an outer scope could (completely) change the meaning of some code, quite unintentionally. Consider something like this, for example:
int i;

int main() {
    long i;
    i = 1;
    std::cout << i;
    return 0;
}
Under the current rules, the meaning of this is clear and unequivocal: the i = 1; assigns the value 1 to the i defined locally in main.
Under the revised rules, that would be open to question; in fact, it probably wouldn't be the case. The compiler would find both instances of i, and since 1 has type int, it would probably be matched with the global i instead. When we then print i, the compiler finds an overload of << that takes a long, so it prints the local i (which still contains garbage).
Note that this adds another wrinkle, though: the cout << i; would refer to the local i because there is an overload that can work with it. So instead of the type of the variable controlling which overload is used, the set of available overloads would also control which variable is used. I'm not sure, but I'd guess this makes parsing much more difficult (quite possibly an NP-hard or NP-complete problem).
In short, for any code that (intentionally or otherwise) used almost any kind of implicit conversion at an inner scope, seemingly unrelated changes at outer scopes could suddenly change the meaning of that code completely, and, as in the example above, easily break it quite thoroughly in the process.
In the case above, with only a half dozen lines of code, it's pretty easy to figure out what's going on. Consider, however, what happens when (for example) you define a class in a header, then include that header into some other file: the compiler looks at the other code where you included the header, finds a better match, and suddenly code you swore was thoroughly vetted and tested breaks most spectacularly.
With headers, it would (or at least could) get even worse though. You define your class, and include the header into two different files. One of those files defines a variable, function, etc. at outer scope and the other doesn't. Now code in one file using name X refers to the global, while code in the other file refers to the local, because the global doesn't happen to be visible in that file. This would completely break modularity, and render virtually all code completely broken (or at least breakable).
Of course, there would be other possibilities that might not be quite this broken. One possibility would be to eliminate all implicit conversions, so only a perfect type match would ever be considered. This would eliminate most of the obvious problems, but only at the expense of completely breaking compatibility with C (not to mention probably making a lot of programmers quite unhappy). Another possibility would be to search like it does now, stopping at the first scope where it found a match -- then continuing to outer scopes if and only if compilation was going to fail if it used that name at the inner scope.
Either of these could work, but (at the very least) you'd need quite a few restrictions to keep them from leading to almost insane levels of confusion. Just for example, consider something like a = 1; a = '\2';. Right now, those must refer to the same variable, but under the first of these rules they wouldn't.
With some special cases, you could probably eliminate that particular oddity too -- for example, use the current rules for looking up variable names, and new/separate rules only for function names.
Summary: the simple/obvious way to do this would create a language that was almost irretrievably broken. Modifications to prevent that would be possible, but only at the expense of throwing away compatibility with both C and essentially all existing C++ code. The latter might be possible in a brand new language, but not for a language that's already as well established as C++, especially one that became established largely on the basis of backward compatibility.