Implicit conversion from int to vector? - c++

vector<T> has a constructor that takes the size of the vector, and as far as I know it is explicit, which is demonstrated by the fact that the following code fails to compile:
void f(std::vector<int> v);
int main()
{
f(5);
}
What I cannot understand and am asking you to explain is why the following code compiles
std::vector<std::vector<int>> graph(5, 5);
Not only does it compile, it actually resizes graph to 5 elements and sets each element to a vector of five zeros, i.e. it does the same as the code I would normally write:
std::vector<std::vector<int>> graph(5, std::vector<int>(5));
How? Why?
Compiler: MSVC10.0
OK, it seems it's an MSVC bug (yet another one). If someone can elaborate on the bug in an answer (i.e. summarize the cases in which it is reproduced), I would gladly accept it.

It is not really a bug. The question to ask is: what allows the second piece of code to compile while the first does not?
The issue is that while it seems obvious to you what constructor you want to call when you do:
std::vector<std::vector<int>> graph(5, 5);
it is not so clear for the compiler. In particular there are two constructor overloads that can potentially accept the arguments:
vector(size_type, const T& value = T());
template <typename InputIterator>
vector(InputIterator first, InputIterator last);
The first one requires the conversion of 5 to size_type (which is unsigned), while the second is a perfect match, so that will be the one picked up by the compiler...
... but the standard requires that the second overload, if the deduced type InputIterator is an integral type, behave as if the call were:
vector(static_cast<size_type>(first), static_cast<T>(last))
What the C++03 standard effectively mandates is that the second argument is explicitly converted from the original type int to the destination type std::vector<int>. Because the conversion is explicit you get the error.
The C++11 standard changes the wording to use SFINAE to disable the iterator constructor if the argument is not really an input iterator, so in a C++11 compiler the code should be rejected (which is probably the reason some have claimed this to be a bug).
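For illustration, a C++11-style library could disable the iterator constructor with something along these lines (a minimal sketch with a made-up my_vector, not the wording or code of any real implementation; the is_integral check is only the minimum the standard demands):
#include <cstddef>
#include <type_traits>

template <typename T>
class my_vector {
public:
    typedef std::size_t size_type;

    // Fill constructor: explicit, takes a count and a value.
    explicit my_vector(size_type n, const T& value = T());

    // Range constructor: removed from overload resolution for integral
    // arguments via a default template parameter (SFINAE).
    template <typename InputIterator,
              typename = typename std::enable_if<
                  !std::is_integral<InputIterator>::value>::type>
    my_vector(InputIterator first, InputIterator last);
};
With such a constraint, my_vector<my_vector<int>> graph(5, 5); has only the fill constructor left to consider, and that fails because 5 does not implicitly convert to my_vector<int>, matching the C++11 behavior described above.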

To me it looks like it's calling this constructor:
template <class InputIterator>
vector (InputIterator first, InputIterator last,
const allocator_type& alloc = allocator_type());
I'm not sure where explicit comes into it, because the constructor takes multiple parameters. It's not auto casting from an int to a vector.

This is actually an extension, not a bug.
The constructor being invoked is the one that takes two iterators (but really, the signature will match any two parameters of the same type); it then invokes a specialization for the case where the two "iterators" are actually int, which explicitly constructs a value_type from the second argument and populates the vector with as many copies of it as the first argument specifies.
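A rough sketch of the kind of dispatch being described, using tag dispatch on is_integral (the toy_vector name and its helpers are made up for illustration; real implementations differ in the details):
#include <type_traits>

template <typename T>
class toy_vector {
public:
    // Matches ANY two arguments of the same type, including (int, int).
    template <typename InputIterator>
    toy_vector(InputIterator first, InputIterator last) {
        construct(first, last, std::is_integral<InputIterator>());
    }

private:
    // Chosen when the "iterators" are really integers: treat the call as
    // (count, value) and fill with `first` copies of T(last).
    template <typename Int>
    void construct(Int count, Int value, std::true_type) {
        for (Int i = 0; i < count; ++i)
            push_back(static_cast<T>(value));
    }

    // Chosen for genuine iterators: copy the range.
    template <typename InputIterator>
    void construct(InputIterator first, InputIterator last, std::false_type) {
        for (; first != last; ++first)
            push_back(*first);
    }

    void push_back(const T&) { /* storage omitted in this sketch */ }
};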


How can I check that assignment of const_reverse_iterator to reverse_iterator is invalid?

Consider the following:
using vector_type = std::vector<int>;
using const_iterator = typename vector_type::const_iterator;
using const_reverse_iterator = typename vector_type::const_reverse_iterator;
using iterator = typename vector_type::iterator;
using reverse_iterator = typename vector_type::reverse_iterator;
int main()
{
static_assert(!std::is_assignable_v<iterator, const_iterator>); // passes
static_assert(!std::is_assignable_v<reverse_iterator, const_reverse_iterator>); // fails
static_assert(std::is_assignable_v<reverse_iterator, const_reverse_iterator>); // passes
}
I can check that assignment of iterator{} = const_iterator{} is not valid, but not an assignment of reverse_iterator{} = const_reverse_iterator{} with this type trait.
This behavior is consistent across gcc 9.0.0, clang 8.0.0, and MSVC 19.00.23506
This is unfortunate, because the reality is that reverse_iterator{} = const_reverse_iterator{} doesn't actually compile with any of the above-mentioned compilers.
How can I reliably check such an assignment is invalid?
This behavior of the type trait implies that the expression
std::declval<reverse_iterator>() = std::declval<const_reverse_iterator>()
is well formed according to [meta.unary.prop], and this appears consistent with my own attempts at an is_assignable type trait.
The trait passes because a matching method exists, can be found via overload resolution, and is not deleted.
Actually calling it fails, because the implementation of the method contains code that isn't legal with those two types.
In C++, you cannot test if instantiating a method will result in a compile error, you can only test for the equivalent of overload resolution finding a solution.
The C++ language and standard library originally relied heavily on "well, the method body is only compiled in a template if called, so if the body is invalid the programmer will be told". More modern C++ (both inside and outside the standard library) uses SFINAE and other techniques to make a method "not participate in overload resolution" when its body would not compile.
The constructor of reverse iterator from other reverse iterators is the old style, and hasn't been updated to "not participate in overload resolution" quality.
From n4713, 27.5.1.3.1 [reverse.iter.cons]/3:
template<class U> constexpr reverse_iterator(const reverse_iterator<U>& u);
Effects: Initializes current with u.current.
Notice no mention of "does not participate in overload resolution" or similar words.
The only way to get the trait you want here would be to either:
1. change (well, fix) the C++ standard, or
2. special-case it.
I'll leave 1. as an exercise. For 2., you know that reverse iterators are templates over the underlying (forward) iterator type:
#include <iterator>
#include <type_traits>

// By default, defer to std::is_assignable...
template<class... Ts>
struct my_trait : std::is_assignable<Ts...> {};

// ...but for a pair of reverse iterators, test the wrapped iterators instead.
template<class T0, class T1>
struct my_trait<std::reverse_iterator<T0>, std::reverse_iterator<T1>>
  : my_trait<T0, T1>
{};
and now my_trait is is_assignable except on reverse iterators, where it instead tests assignability of the contained iterators.
(As extra fun, a reverse reverse iterator will work with this trait).
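Usage would look roughly like this (assuming the my_trait definition above is in scope; the asserts mirror the behavior described in the question):
#include <type_traits>
#include <vector>

using vector_type            = std::vector<int>;
using reverse_iterator       = vector_type::reverse_iterator;
using const_reverse_iterator = vector_type::const_reverse_iterator;

// The raw trait still claims the assignment is fine...
static_assert(std::is_assignable_v<reverse_iterator, const_reverse_iterator>);
// ...while the special-cased trait inspects the wrapped iterators and says no.
static_assert(!my_trait<reverse_iterator, const_reverse_iterator>::value);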
I once had to do something very similar with std::vector<T>::operator<, which also blindly called T<T and didn't SFINAE disable it if that wasn't legal.
It may also be the case that a C++ standard library implementation is allowed to make a constructor that would not compile not participate in overload resolution. Such a change could break otherwise well-formed programs, but only ones as contrived as your static_assert (which would flip) or things logically equivalent.
It's just a question of constraints. Or lack thereof. Here's a reduced example with a different trait:
#include <string>
#include <type_traits>

using namespace std::string_literals;

struct X {
    template <typename T>
    X(T v) : i(v) { }

    int i;
};

static_assert(std::is_constructible_v<X, std::string>); // passes
X x("hello"s);                                          // fails to compile
Whenever people talk about being SFINAE-friendly - this is fundamentally what they're referring to. Ensuring that type traits give the correct answer. Here, X claims to be constructible from anything - but really it's not. is_constructible doesn't actually instantiate the entire construction, it just checks the expression validity - it's just a surface-level check. This is the problem that enable_if and later Concepts are intended to solve.
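To make the trait tell the truth, the constructor itself has to be constrained so that the surface-level check already fails. A minimal sketch of X with one possible constraint (is_convertible to int is just an example condition):
#include <string>
#include <type_traits>

struct X {
    // Participates in overload resolution only when T actually converts to int,
    // so is_constructible now reports the real answer.
    template <typename T,
              typename = std::enable_if_t<std::is_convertible_v<T, int>>>
    X(T v) : i(v) { }

    int i;
};

static_assert( std::is_constructible_v<X, int>);         // still passes
static_assert(!std::is_constructible_v<X, std::string>); // now also passes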
For libstdc++ specifically, we have:
template<typename _Iter>
_GLIBCXX17_CONSTEXPR
reverse_iterator(const reverse_iterator<_Iter>& __x)
: current(__x.base()) { }
There's no constraint on _Iter here, so is_constructible_v<reverse_iterator<T>, reverse_iterator<U>> is true for all pairs T, U, even if it's not actually constructible. The question uses assignable, but in this case, assignment would go through this constructor template, which is why I'm talking about construction.
Note that this is arguably a libstdc++ bug, and probably an oversight. There's even a comment that this should be constrained:
/**
* A %reverse_iterator across other types can be copied if the
* underlying %iterator can be converted to the type of #c current.
*/
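What that comment promises would look roughly like the following if it were actually enforced; this is a sketch on a toy wrapper, not libstdc++ code:
#include <type_traits>

template <typename Iterator>
class rev_iter {                       // stand-in for std::reverse_iterator
public:
    rev_iter() = default;

    // Participates in overload resolution only if the other wrapped iterator
    // converts to ours, which is exactly what the libstdc++ comment describes.
    template <typename U,
              typename = typename std::enable_if<
                  std::is_convertible<const U&, Iterator>::value>::type>
    rev_iter(const rev_iter<U>& other) : current(other.base()) { }

    Iterator base() const { return current; }

private:
    Iterator current{};
};

// With the constraint in place, the trait gives the expected answers:
static_assert(!std::is_constructible<rev_iter<int*>, rev_iter<const int*>>::value, "");
static_assert( std::is_constructible<rev_iter<const int*>, rev_iter<int*>>::value, "");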

Priorities of constructors c++

I've got a very weird problem with code like this:
template <typename T>
struct A {
    explicit A(unsigned int size = 0, const T& t = T())
    {
    }

    template <typename InputIterator>
    A(InputIterator first, InputIterator last) {
        for (; first != last; ++first)
        {
            *first; // do something with iterator
        }
    }
};
when, for example, I define
A<int> a(10,10);
the second constructor (the iterator one) is used instead of the first one.
How then do vector's constructors work, when they look pretty much the same?
explicit vector (size_type n, const value_type& val = value_type(),
const allocator_type& alloc = allocator_type());
template <class InputIterator>
vector (InputIterator first, InputIterator last,
const allocator_type& alloc = allocator_type());
And I can construct vector v(10,10) without any trouble.
P.S. I get an error like this:
temp.cpp: In instantiation of ‘A<T>::A(InputIterator, InputIterator) [with InputIterator = int; T = int]’:
temp.cpp:17:15: required from here
temp.cpp:12:4: error: invalid type argument of unary ‘*’ (have ‘int’)
The reason the compiler chooses the second constructor in the case of your A is simple: your 10 is a value of signed type int, while size_type is some unsigned integer type. This means that 10 has to be converted to that unsigned integer type. The need for that conversion is what makes the first constructor lose overload resolution to the second one (which is an exact match for InputIterator = int). You can work around this problem by doing
A<int> a(10u, 10);
This eliminates the need for the int -> unsigned conversion and makes the first constructor win overload resolution through the "non-template is better than template" rule.
Meanwhile, the reason it works differently with std::vector is that the language specification gives special treatment to the constructors of standard sequences. It simply requires that std::vector constructor invocation with two integers of the same type as arguments is somehow "magically" resolved to the first constructor from your quote (i.e. the size-and-initializer constructor). How each specific implementation achieves that is up to the implementation. It can overload the constructor for all integer types. It can use a functionality similar to enable_if. It can even hardcode it into the compiler itself. And so on.
This is how it is stated in C++03, for example
23.1.1 Sequences
9 For every sequence defined in this clause and in clause 21:
— the constructor
template <class InputIterator>
X(InputIterator f, InputIterator l, const Allocator& a = Allocator())
shall have the same effect as:
X(static_cast<typename X::size_type>(f),
static_cast<typename X::value_type>(l), a)
if InputIterator is an integral type
C++11 takes it even further by approaching it from a different angle, although the intent remains the same: it states that if InputIterator does not qualify as an input iterator, then the template constructor should be excluded from overload resolution.
So, if you want your class template A to behave the same way std::vector does, you have to deliberately design it that way. You can actually take a peek into the standard library implementation on your platform to see how they do it for std::vector.
Anyway, a low-tech brute-force solution would be to add a dedicated overloaded constructor for int argument
explicit A(unsigned int size = 0, const T &t = T())
{ ... }
explicit A(int size = 0, const T &t = T())
{ ... }
That might, of course, imply that you'll eventually have to add overloads for all integer types.
A better solution, which I already mentioned above, would be to disable the template constructor for integer arguments by using enable_if or similar SFINAE-based technique. For example
template <typename InputIterator>
A(InputIterator first, InputIterator last,
typename std::enable_if<!std::is_integral<InputIterator>::value>::type* = 0)
Do you have C++11 features available to you in your compiler?
When InputIterator is int, the instantiated
A(int first, int last)
is a better match than the instantiated
explicit A(unsigned int size = 0, const int &t = int())
because the first parameter of the latter is unsigned, which would require a conversion from the int argument. A<int> a((unsigned int)10, 10) should call the constructor you expect. You can also use SFINAE to prevent a match unless the constructor really is being passed two iterators to T:
#include <iostream>
#include <iterator>     // std::iterator_traits
#include <type_traits>  // std::enable_if, std::is_same

using namespace std;

template <typename T>
struct A {
    explicit A(unsigned int size = 0, const T& t = T())
    {
        cout << "size constructor for struct A" << endl;
    }

    template <class I>
    using is_T_iterator = typename enable_if<is_same<typename iterator_traits<I>::value_type, T>::value, T>::type;

    template <typename InputIterator>
    A(InputIterator first, InputIterator last, is_T_iterator<InputIterator> = 0) {
        cout << "iterator constructor for struct A" << endl;
        for (; first != last; ++first)
        {
            *first; // do something with iterator
        }
    }
};

int main()
{
    A<int>(10, 10);
    A<int>((int*)0, (int*)0);
    //A<int>((char*)0, (char*)0); // <-- would cause a compile-time error since (char*) doesn't dereference to int
    return 0;
}
If the condition that both arguments are iterators to T is too strict, there are looser formulations. For instance, you can guarantee that both arguments are iterators. You could go further (but not quite as far as the example above) and ensure that they "point" to a type that is convertible to T (using std::is_convertible).
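For example, the looser "dereferences to something convertible to T" variant could be written like this (a hypothetical struct B, shown with C++17 in mind, where iterator_traits is guaranteed to be SFINAE-friendly for non-iterator types):
#include <iostream>
#include <iterator>
#include <type_traits>

using namespace std;

template <typename T>
struct B {
    explicit B(unsigned int size = 0, const T& t = T())
    {
        cout << "size constructor for struct B" << endl;
    }

    // Accept any iterator whose value_type merely converts to T.
    template <class I>
    using is_compatible_iterator = typename enable_if<
        is_convertible<typename iterator_traits<I>::value_type, T>::value, T>::type;

    template <typename InputIterator>
    B(InputIterator first, InputIterator last, is_compatible_iterator<InputIterator> = 0) {
        cout << "iterator constructor for struct B" << endl;
    }
};

int main()
{
    char chars[] = {'a', 'b', 'c'};
    B<int> b1(chars, chars + 3);  // OK now: char converts to int
    B<int> b2(10, 10);            // still selects the size constructor
    return 0;
}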
That is correct: the templated overload is a better match, so it is selected. Standard library implementations fight hard to steer toward sensible behavior with all their templated members. You may want to look up some implementation code for the specializations if you want to implement a similar collection of your own.
Or you may find a way to dodge the problem.
There is a fine GOTW article covering all the cases of function overload selection and some advice on dealing with it.
Your first argument in A<int>(10, 10) doesn't match your explicit constructor because 10 is signed, so the templatized constructor is used instead. Change it to A<int>(10u, 10) and you may end up with the results you expect.
If you're writing a general library, you may want to make the extra effort and use template metaprogramming to catch all of the cases. Or simply provide explicit overloads for all of the integral types. For less generic use, it is usually sufficient to follow the rule that any time you provide an overload for any integral type, you also provide one for int (so you would have a constructor A::A(int size, T const& initialValue = T()), in addition to the ones you already provide).

More generally: you should probably just make the size an int, and be done with it. The standard library is caught up in a lot of historical issues, and must use size_t by default, but in general, unless there is a very strong reason to do otherwise, the normal integral type in C++ is int; in addition, the unsigned types in C++ have very strange semantics, and should be avoided any time there is any chance of arithmetic operations occurring.

choose which constructor in c++

I am practicing C++ by building my own version of vector, named "Vector". I have two constructors among others, fill constructor and range constructor. Their declarations look like following:
template <typename Type>
class Vector {
public:
    // fill constructor
    Vector(size_t num, const Type& cont);

    // range constructor
    template <typename InputIterator> Vector(InputIterator first, InputIterator last);

    /*
    other members
    ......
    */
};
The fill constructor fills the container with num copies of cont, and the range constructor copies every value in the range [first, last) into the container. They are supposed to work the same as the corresponding two constructors of the STL vector.
Their definitions go as follows:
// fill constructor
template <typename Type>
Vector<Type>::Vector(size_t num, const Type& cont) {
    content = new Type[num];
    for (int i = 0; i < num; i++)
        content[i] = cont;
    contentSize = contentCapacity = num;
}

// range constructor
template <typename Type>
template <typename InputIterator>
Vector<Type>::Vector(InputIterator first, InputIterator last) {
    this->content = new Type[last - first];
    int i = 0;
    for (InputIterator iitr = first; iitr != last; iitr++, i++)
        *(content + i) = *iitr;
    this->contentSize = this->contentCapacity = i;
}
However, when I try to use them, I have a problem distinguishing between them.
For example:
Vector<int> v1(3, 5);
With this line of code, I intended to create a Vector that contains three elements, each of which is 5. But the compiler goes for the range constructor, treating both "3" and "5" as instances of "InputIterator", which, unsurprisingly, causes an error.
Of course, if I change the code to:
Vector<int> v1(size_t(3), 5);
Everything is fine and the fill constructor is called. But that is obviously neither intuitive nor user-friendly.
So, is there a way that I can use the fill constructor intuitively?
You can use std::enable_if (or boost::enable_if if you don't use C++11) to disambiguate the constructors.
#include <iostream>
#include <type_traits>
#include <vector>

using namespace std;

template <typename Type>
class Vector {
public:
    // fill constructor
    Vector(size_t num, const Type& cont)
    {
        cout << "Fill constructor" << endl;
    }

    // range constructor
    template <typename InputIterator>
    Vector(InputIterator first, InputIterator last,
           typename std::enable_if<!std::is_integral<InputIterator>::value>::type* = 0)
    {
        cout << "Range constructor" << endl;
    }
};

int main()
{
    Vector<int> v1(3, 5);
    std::vector<int> v2(3, 5);
    Vector<int> v3(v2.begin(), v2.end());
}
The above program should first call the fill constructor, by checking whether the type is an integral type (and thus not an iterator).
By the way, in your implementation of the range constructor, you should use std::distance(first, last) rather than last - first. Explicitly using the - operator on iterators limits you to RandomAccessIterator types, whereas you want to support InputIterator, the most generic iterator category.
Even std::vector seems to have this issue.
std::vector<int> v2(2,3);
chooses
template<class _Iter>
vector(_Iter _First, _Iter _Last)
in Visual C++, even though it seems it should match the non-templated overload more closely...
Edit: The above function (correctly) delegates construction to the one below. I am totally lost...
template<class _Iter>
void _Construct(_Iter _Count, _Iter _Val, _Int_iterator_tag)
Edit #2: AH!
Somehow the function below identifies which version of the constructor is meant to be called.
template<class _Iter> inline
typename iterator_traits<_Iter>::iterator_category
_Iter_cat(const _Iter&)
{   // return category from iterator argument
    typename iterator_traits<_Iter>::iterator_category _Cat;
    return (_Cat);
}
The _Construct function shown above has (at least) two versions, overloaded on the third parameter, which is a tag returned by the _Iter_cat function above. Based on the type of this category, the correct overload of _Construct is picked.
Final edit:
iterator_traits<_Iter> is a class that seems to be specialized for many different common varieties, each returning the appropriate "category" type.
Solution: It appears that template specialization on the first argument's type is how the standard library handles this messy situation (a primitive value type) in the case of MS VC++. Perhaps you could look into it and follow suit?
The problem arises (I think) because with primitive value types, the Type and size_t variables are similar, and so the template version with two identical types gets picked.
The problem is the same as the one faced by the standard library implementations. There are several ways to solve it:
1. You can meticulously provide non-template overloaded constructors for all integral types (in place of the first parameter).
2. You can use an SFINAE-based technique (like enable_if) to make sure the range constructor is not selected for integer arguments.
3. You can branch inside the range constructor (with an ordinary if) after detecting an integral argument (using is_integral) and redirect control to the proper constructing code. The branching condition will be a compile-time value, meaning that the code will probably be reduced at compile time by the compiler (a sketch of this option follows the list).
4. You can simply peek into your version of the standard library implementation and see how they do it (although their approach is not required to be portable and/or valid from the point of view of the abstract C++ language).
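A sketch of the third option: the branch looks like a run-time if but is resolved at compile time. This uses C++17's if constexpr for brevity; in C++03/11 you would delegate to overloaded helper functions instead, because with a plain if both branches must compile for every instantiation:
#include <iostream>
#include <type_traits>

template <typename Type>
class Vector {
public:
    template <typename IteratorOrInt>
    Vector(IteratorOrInt first, IteratorOrInt last) {
        if constexpr (std::is_integral<IteratorOrInt>::value) {
            // Called as Vector(count, value): fill with `first` copies of Type(last).
            std::cout << "fill path: " << first << " copies\n";
        } else {
            // Genuine iterator range: walk and copy it.
            std::cout << "range path\n";
            for (; first != last; ++first) { /* copy *first */ }
        }
    }
};

int main()
{
    int arr[] = {1, 2, 3};
    Vector<int> v1(3, 5);          // fill path
    Vector<int> v2(arr, arr + 3);  // range path
}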
This ambiguity caused problems for early library implementers. It's called the "Do The Right Thing" effect. As far as I know, you need SFINAE to solve it… it might have been one of the first applications of that technique. (Some compilers cheated and hacked their overload resolution internals, until the solution was found within the core language.)
The standard specification of this issue is one of the key differences between C++98 and C++03. From C++11, §23.2.3:
14 For every sequence container defined in this Clause and in Clause 21:
— If the constructor
template <class InputIterator>
X(InputIterator first, InputIterator last,
const allocator_type& alloc = allocator_type())
is called with a type InputIterator that does not qualify as an input iterator, then the constructor shall not participate in overload resolution.
15 The extent to which an implementation determines that a type cannot be an input iterator is unspecified, except that as a minimum integral types shall not qualify as input iterators.

Can enable_if be used as a non-extra parameter (e.g. for a constructor)?

I am writing a new container, and trying to comply with N3485 23.2.3 [sequence.reqmts]/14, which states:
For every sequence container defined in this Clause and in Clause 21:
If the constructor
template <class InputIterator>
X(InputIterator first, InputIterator last,
const allocator_type& alloc = allocator_type())
is called with a type InputIterator that does not qualify as an
input iterator, then the constructor shall not participate in overload
resolution.
(/14 repeats this almost verbatim for the member functions taking iterator ranges)
N3485 23.2.3 [sequence.reqmts]/15 says:
The extent to which an implementation determines that a type cannot be an input iterator is unspecified, except that as a minimum integral types shall not qualify as input iterators.
My understanding is that the phrase "shall not participate in overload resolution" means that the container implementer is supposed to use SFINAE tricks to disable that constructor or member function during template argument deduction. For the member functions, this is no big deal, as the return type of a function is the normal place to apply enable_if. But for a constructor, there is no return type to which enable_if can be applied. This was my first attempt at declaring the constructor:
// The enable_if use below is to comply with 23.2.3 [sequence.reqmts]/14:
// ... is called with a type InputIterator that does not qualify as an input iterator
// then the constructor shall not participate in overload resolution.
template <typename InputIterator>
path(typename std::enable_if<!std::is_integral<InputIterator>::value, InputIterator>::type first,
     InputIterator last, Allocator const& allocator = allocator_type());
However, boost's enable_if docs suggest using a dummy pointer parameter initialized to nullptr instead of using an actual parameter to a function. Is that necessary for correct behavior here or is the preceding declaration of path's iterator range constructor okay?
The Good Robot (R. Martinho Fernandes) discusses the issue with a clean C++11 solution, namely using a default template parameter to apply enable_if, in his blog.
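Applied to the constructor in question, that default-template-parameter approach would look roughly like this (a sketch only; the allocator typedef is a placeholder, not the questioner's actual allocator_type):
#include <memory>
#include <type_traits>

class path {
public:
    typedef std::allocator<char> allocator_type;  // placeholder for the sketch

    // InputIterator is still deduced from the arguments; when it is an integral
    // type, substitution into the default template argument fails and the
    // constructor silently drops out of overload resolution.
    template <typename InputIterator,
              typename = typename std::enable_if<
                  !std::is_integral<InputIterator>::value>::type>
    path(InputIterator first, InputIterator last,
         allocator_type const& allocator = allocator_type());
};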
However, let me just point out here that doing
template< class Type >
void foo( typename Something< Type >::T )
foils argument deduction.
It's still possible to call the function, by explicitly providing the template argument. But in C++ the compiler will simply refuse to match e.g. a MyType actual argument to formal argument type Something<Blah>::T, because while this can be done in some special cases it cannot always be done (there could be countless choices of Blah where Something<Blah>::T was MyType).
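A tiny illustration of that non-deduced context (identity is a made-up helper):
template <typename T>
struct identity { typedef T type; };

template <typename T>
void bar(typename identity<T>::type x) { }  // T appears only in a non-deduced context

int main()
{
    // bar(42);     // error: T cannot be deduced from the argument
    bar<int>(42);   // OK: T supplied explicitly
}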
So, your current approach won't work in general, but the whole problem of the specification's requirement is a C++11 problem, so the C++11 specific solution is OK! :-)

Can it cause problems to explicitly pass a reference into a function template that's not expecting one?

There was a question regarding taking the following function template and instantiating it with a by-reference string parameter:
template <typename T>
void foo(T t) {}
The answer was, of course, to give the argument explicitly:
int main() {
std::string str("some huge text");
foo<std::string&>(str);
}
(As an aside, there was a suggestion about using deduction and passing C++0x's std::ref(str), but this requires the use of .get() inside the function, and the OP's requirement was transparency.)
However, IMO there can be little doubt that the author of the function template intended for the argument to be passed by value, or he would have written:
template <typename T>
void foo(T& t) {}
(It's possible that he deliberately intended for either to be possible, but this seems unlikely to me for some reason.)
Are there any "reasonable" scenarios where passing a reference into foo<T>, where the author of foo had intended the function always to take its argument by value, may cause problems?
The answer I'm looking for most likely consists of a function body for foo, where the use of a reference type for T leads to unexpected/undesirable semantics when compared to the use of a value type for T.
Consider that it's common to write algorithms like this:
template <typename InputIterator>
void dosomething(InputIterator first, InputIterator last, otherparams) {
    while (first != last) {
        do the work;
        ++first;
    }
}
It's also how the standard algorithms are implemented both in GCC and in the old SGI STL.
If first is taken by reference then it will be modified, so certainly there are plenty of template functions out there that will "go wrong" with reference template arguments. In this example, first is changed to a value equal to last. In another implementation, or in the next release, it might be copied and not modified at all. That's "unexpected/undesirable semantics".
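A concrete illustration with a stripped-down stand-in for the algorithm above (the do-the-work part is left empty here):
#include <cassert>
#include <vector>

template <typename InputIterator>
void dosomething(InputIterator first, InputIterator last) {
    while (first != last) {
        // do the work;
        ++first;
    }
}

int main()
{
    std::vector<int> v{1, 2, 3};
    auto it  = v.begin();
    auto end = v.end();

    // Forcing a reference template argument binds `first` to the caller's iterator...
    dosomething<std::vector<int>::iterator&>(it, end);

    // ...so the caller sees it advanced all the way to the end of the range.
    assert(it == end);
}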
Whether you call this a "problem" or not, I'm not sure. It's not the behavior that the author of the function intended, and it's probably not documented what happens to first if it's passed by reference (it isn't for the standard algorithms).
For the standard algorithms I think it's UB, so the caller is at fault. The standard says that the template argument should be an iterator, and while T* or some library iterator is an iterator type, T* & or reference-to-library-iterator isn't.
As long as authors of template functions have documented the requirements of their template arguments clearly, I suspect that typically it will just fall out in the same way as for standard algorithms and iterators -- reference types are not valid template arguments, and hence the caller is at fault. In cases where the requirements are very simple (a couple of expressions with specified behavior), a reference type probably isn't ruled out, but again as long as the function doesn't say that it doesn't modify the argument, and doesn't say how it modifies the argument, callers should consider that since it's not documented how the argument is modified, then it's unspecified. If they call the function with a reference type, and get surprised whether or how the argument is modified, again it's the caller's fault.
I expect that under-documented functions are at risk of a dispute whose fault it is when it goes wrong, though, since sometimes it'll rely on quite a close reading.
If the programmer decided to be lazy, and use the input parameter as a temporary variable to calculate the output, then you would get problematic results:
template <class T>
T power2n(T a, int n) {
    if (n == 0) return 1;
    for (int i = 0; i < n; i++)
    {
        a *= a;
    }
    return a;
}
Now if I pass the first parameter by reference, its value gets messed up.
Not saying this is good programming, just saying it happens.