std::make_shared fails to compile when constructing with parameters from Bitfields - c++

Consider the following smallest reproducible, standard-compliant code:
#include <vector>
#include <memory>

struct Foo
{
    int m_field1;
    Foo(int field1) : m_field1(field1) {}
};

typedef unsigned long DWORD;
typedef unsigned short WORD;

struct BitField {
    struct {
        DWORD Field1 : 31;
        DWORD Field2 : 1;
    } DUMMY;
};

int main()
{
    std::vector<std::shared_ptr<Foo>> bar;
    BitField *p = new BitField();
    // This line compiles
    auto sp1 = std::shared_ptr<Foo>(new Foo((DWORD)p->DUMMY.Field1));
    // But std::make_shared fails to compile
    auto sp2 = std::make_shared<Foo>((DWORD)p->DUMMY.Field1);
    return 0;
}
This code fails to compile in VC11 Update 2 with the following error message:
1>Source.cpp(23): error C2664: 'std::shared_ptr<_Ty> std::make_shared<Foo,DWORD&>(_V0_t)' : cannot convert parameter 1 from 'DWORD' to 'unsigned long &'
1>          with
1>          [
1>              _Ty=Foo,
1>              _V0_t=DWORD &
1>          ]
I cross-checked on IDEONE, and it compiled successfully. Am I missing something obvious?
A Connect bug was opened: https://connect.microsoft.com/VisualStudio/feedback/details/804888/with-language-extension-enabled-vc11-an-explicit-cast-is-not-creating-an-rvalue-from-bit-fields

This is an odd one. The following snippet compiles under the /Za (disable language extensions) compiler flag, but not without:
struct {
    unsigned field:1;
} dummy = {0};

template<class T>
void foo(T&&){}

int main(){
    foo((unsigned)dummy.field);
}
Error without /Za:
error C2664: 'foo' : cannot convert parameter 1 from 'unsigned int' to 'unsigned int &'
This is obviously a bug, since the cast to unsigned should simply create an rvalue, which should not be deduced as an lvalue-reference and which should not be treated as a bit-field. I have a feeling the extension for "rvalues bind to lvalue-references" plays a role here.
Please file a bug report on Microsoft Connect.

Here's more of a comment than an answer. It may shed some light on what's happening.
Example by Xeo
struct {
unsigned field:1;
unsigned nonfield;
} dummy = {0};
template<class T>
void foo(T&&){}
Step one: Type deduction.
[class.bit]/1 specifies "The bit-field attribute is not part of the type of the class member." Consequently, type deduction for foo(dummy.field) deduces the template parameter to be unsigned&.
Step two: overload resolution.
Although not strictly necessary here, the Standard has a nice example concerning this in [over.ics.ref]/4
[Example: a function with an “lvalue reference to int” parameter can be a viable candidate even if the corresponding argument is an int bit-field. The formation of implicit conversion sequences treats the int bit-field as an int lvalue and finds an exact match with the parameter. If the function is selected by overload resolution, the call will nonetheless be ill-formed because of the prohibition on binding a non-const lvalue reference to a bit-field (8.5.3). —end example ]
So this function is well-formed and will be selected, but the call will be ill-formed nevertheless.
Step three: Workarounds.
The OP's conversion should resolve the problem, foo( (unsigned)dummy.field ), as it yields an rvalue, which leads to T being deduced as unsigned, and the parameter unsigned&& is initialized from a temporary. But it seems that MSVC ignores the lvalue-to-rvalue conversion if source and destination have the same type: writing foo( (unsigned)dummy.nonfield ) deduces T as unsigned& as well (even with a static_cast).
The lvalue-to-rvalue conversion required to deduce T to unsigned rather than unsigned& can be enforced by using a unary +: foo( +dummy.field )

The compiler's error message is correct insofar as it really can't create a DWORD& from the value you pass in. The bitfield isn't the right size to be a real reference to DWORD. Whether the compiler is correct to reject your program, I can't say.
It's easy to work around, though. Simply specify the second template parameter when you call make_shared:
    auto sp2 = std::make_shared<Foo, int>(p->DUMMY.Field1);
I used int because that's the constructor's argument type. You could say DWORD instead; any non-reference numeric type would probably be sufficient. You can then also forgo the cast to DWORD, since it no longer accomplishes anything.

Bit fields can't have references bound to them, but they are sometimes treated somewhat like lvalues because they can be assigned to. Bit fields are messy, IMO, and you should avoid them.
But if you need a bit field to behave exactly like an rvalue of the same type, you can use a function like the one below.
template<class T>
T frombits(const T& x) {
    return x;
}

//...
std::make_shared<Foo>(frombits(p->DUMMY.Field1));
I'm rather against specifying the template type. When you can, and always when it is intended, let the compiler deduce the type. Template argument deduction can get messy in C++11, but it has been engineered to work very well in almost every case. Don't try to outsmart the compiler; eventually you will lose.

Related

variadic template method to create object

I have a variadic template method inside a template class (of type T_) that looks like this:
template <typename T_>
class MyContainer {
public:
    ...

    template <typename... A>
    ulong add(A&&... args)
    {
        T_ t{args...};
        // other stuff ...
        vec.push_back(t);
        // returning an ulong
    }
};
So basically I'm trying to make this class adapt to whatever type T_ but since I can't know in advance which types its constructor requires, I use a variadic. I took inspiration from the emplace_back method from the stl library.
Nevertheless, I get a narrowing-conversion warning with integer types if I try to do something like this:
    MyContainer<SomeClassRequiringAnUlong> mc;
    mc.add(2);
which produces:
    warning: narrowing conversion of ‘args#0’ from ‘int’ to ‘long unsigned int’ [-Wnarrowing]
So I was wondering if I can do anything about it. Is there any way to tell the method which parameters' type it is supposed to take according to the template parameter T_ (which is known when the object is created) ?
Is there any way to tell the method which parameters' type it is
supposed to take according to the template parameter T_ (which is
known when the object is created)?
In your case, you should use direct initialization (with ()) instead of list initialization (with {}), to avoid unnecessary narrowing checks.
Consider the case where T_ is a vector<int>:
    MyContainer<std::vector<int>> mc;
    mc.add(3, 0);
What do you expect mc.add(3, 0) to do? In your add() function, T_ t{args...} will invoke vector<int>{3,0} and create a vector of size 2, which is obviously wrong. You should use T_ t(args...) to call the overload of vector(size_type count, const T& value) to construct a vector of size 3, just like emplace_back() does.
It is worth noting that due to P0960R3, T_ t(std::forward<Args>(args)...) can also perform aggregate initialization if T_ is aggregate.
In general, no. The rules of C++ explicitly allow implicit conversions to take place. The fact that the authors of C++ made some of those conversions potentially unsafe is another matter.
You could add std::is_constructible<T,A&&...> static_assert or SFINAE to the code to make the compiler errors less ugly if the user inputs wrong arguments, but it won't solve implicit conversions.
From a design perspective, the code should not care about this; the purpose of emplace_XXX is to allow exactly the calls that are allowed for T(args...).
Note: you most likely want to forward the arguments, as in T_ t{std::forward<A>(args)...};, and also move the element into the vector: vec.push_back(std::move(t));.
That said, the code
T_ t{args...};
//...
vec.push_back(t);
is the exact opposite what emplace functions do, their purpose is to create the element in-place at its final destination. Not to copy or move it there.
You are looking in the wrong direction. It is not the template, but the user code that needs fix. To see the issue, first understand that 2 is of type signed int. So you are trying to construct an object with a signed int, while the constructor expects long unsigned int instead. A minimal repro of the issue can then be written as below.
class SomeClassRequiringAnUlong {
public:
    SomeClassRequiringAnUlong(unsigned long) {}
};

int main() {
    int v = 2;
    SomeClassRequiringAnUlong obj{v};
}
The warning simply states that this narrowing conversion from signed int to long unsigned int is potentially risky and may catch you off guard. E.g.,
    int v = -1;
    SomeClassRequiringAnUlong obj{v};
still compiles and runs, but the result may not be what you want. Now you see that the issue lies in the user code. I see two ways to fix it. 1) Use the type the constructor expects from the very beginning: in your case, change mc.add(2) to mc.add(2ul); 2ul is of type long unsigned int. 2) Make the type conversion explicit, to inform the compiler that the narrowing conversion is by design and is fine: change mc.add(2) to mc.add(static_cast<long unsigned int>(2)).
Note that there are issues in your template (though not quite related to the warning), as other answers have noted. But they are irrelevant to the specific question you asked, so I will not elaborate on them further.

Does the standard prevent narrowing conversion of literal with small enough literal values within variadic templates

Here is the minimal example:
#include <array>

template <class... T>
constexpr std::array<unsigned char, sizeof...(T)> act(T... aArgs)
{
    return std::array<unsigned char, sizeof...(T)>{aArgs...};
}

int main()
{
    act(5, 5);
}
EDIT: I originally wrote that GCC and Clang can compile that snippet without complaint. They cannot.
The latest MSVC fails with:
<source>(6): error C2397: conversion from 'int' to '_Ty' requires a narrowing conversion
        with
        [
            _Ty=unsigned char
        ]
See: https://godbolt.org/z/1PmeLk
Since in this situation, the compiler has everything it needs to statically verify that the call to act(5, 5) does not overflow for the provided values, is it a standard-compliant behaviour to fail on this code?
Bonus question:
Since there is no literal suffix to get an unsigned char literal, how to workaround this bug || fix this non-standard code?
Since in this situation, the compiler has everything it needs to statically verify that the call to act(5, 5) does not overflow for the provided values, is it a standard-compliant behaviour to fail on this code?
Yes. The compiler is only allowed to perform a narrowing conversion when it can guarantee that the value is representable in the narrower type. If you had
    std::array<unsigned char, 2> foo = {5, 127};
then this would be okay, because the compiler knows 5 and 127 are representable. Your case, though, is not the same. You do your initialization inside a function, and inside the function {aArgs...} does not have the same guarantee, since it is not a constant expression (no parameter passed to a function is a constant expression). Because of this, the compiler can't prove in all cases that the conversion will be valid, so it issues a warning/error.
Since there is no literal suffix to get an unsigned char literal, how to workaround this bug || fix this non-standard code?
You can define your own user-defined literal that produces an unsigned char, like
inline constexpr unsigned char operator ""_uc( unsigned long long arg ) noexcept
{
    return static_cast< unsigned char >( arg );
}

//...
act(5_uc, 5_uc);
or you can just cast to them like
act((unsigned char)5, (unsigned char)5);
The actual wording that covers this case can be found in [dcl.init.list]/7:
A narrowing conversion is an implicit conversion [...]
from an integer type or unscoped enumeration type to an integer type that cannot represent all the values of the original type, except where the source is a constant expression whose value after integral conversions will fit into the target type [...]
emphasis mine
As you can see, it requires the initializer to be a constant expression. Since function parameters are never constant expressions, it doesn't matter how much static analysis the compiler does or how much proof it has: the initializer is not a constant expression, so the conversion cannot be done.
Bonus question:
Since there is no literal suffix to get an unsigned char literal, how to workaround this bug || fix this non-standard code?
Just for fun, here is a (not really practical) way to pass your literal values as arguments of the function (in a sense) while avoiding the narrowing problem: you can receive the args values through a std::integral_constant.
template <typename ... T, T ... args>
constexpr std::array<unsigned char, sizeof...(T)>
act (std::integral_constant<T, args>...)
{
    return std::array<unsigned char, sizeof...(T)>{args...};
}
and call it as follows:
act(std::integral_constant<int, 5>{},
    std::integral_constant<long long, 5ll>{});
This way, the args values are template arguments, so the compiler can guarantee that a particular function instantiation is safe from the narrowing point of view (when called with adequate values, obviously).
If you can use C++17, you can obtain the same result (in a simpler and more practical way) by passing the args as auto template values.
template <auto ... args>
constexpr auto act ()
{
    return std::array<unsigned char, sizeof...(args)>{args...};
}

// ...
act<5, 5ll>();

Is MSVC right to find this method call ambiguous, whilst Clang/GCC don't?

Clang (3.9.1) and GCC (7, snapshot) print "1", "2" to the console when this code is run.
However, MSVC fails to compile this code:
source_file.cpp(15): error C2668: 'Dictionary::set': ambiguous call to overloaded function
source_file.cpp(9): note: could be 'void Dictionary::set(int64_t)'
source_file.cpp(8): note: or 'void Dictionary::set(const char *)'
source_file.cpp(15): note: while trying to match the argument list '(const unsigned int)'
#include <iostream>

static const unsigned ProtocolMajorVersion = 1;
static const unsigned ProtocolMinorVersion = 0;

class Dictionary {
public:
    void set(const char *Str) { std::cout << "1"; }
    void set(int64_t val) { std::cout << "2"; }
};

int main() {
    Dictionary dict;
    dict.set(ProtocolMajorVersion);
    dict.set(ProtocolMinorVersion);
}
I think MSVC is right: the value of ProtocolMinorVersion is 0, which can be NULL or int64_t(0).
However, this does seem to be the case when replacing
    dict.set(ProtocolMinorVersion)
with
    dict.set(0);
which gives:
    source_file.cpp:15:10: error: call to member function 'set' is ambiguous
        dict.set(0);
    source_file.cpp:8:10: note: candidate function
        void set(const char *Str) { std::cout << "1"; }
    source_file.cpp:9:10: note: candidate function
        void set(int64_t val) { std::cout << "2"; }
So what's going on here: which compiler is right? It would surprise me if both GCC and Clang accepted incorrect code. Or is MSVC just being buggy? Please refer to the standard.
In C++11 and before, any integral constant expression which evaluates to 0 is considered a null pointer constant. This has been restricted in C++14: only integer literals with value 0 are considered. In addition, prvalues of type std::nullptr_t are null pointer constants since C++11. See [conv.ptr] and CWG 903.
Regarding overload resolution, both the integral conversion unsigned -> int64_t and the pointer conversion null pointer constant -> const char* have the same rank: Conversion. See [over.ics.scs] / Table 12.
So if ProtocolMinorVersion is considered a null pointer constant, then the calls are ambiguous. If you just compile the following program:
static const unsigned ProtocolMinorVersion = 0;

int main() {
    const char* p = ProtocolMinorVersion;
}
You will see that clang and gcc reject this conversion, whereas MSVC accepts it.
Since CWG 903 is considered a defect, I'd argue that clang and gcc are right.
When two compilers agree and one doesn't, it's nearly always the one that doesn't that is wrong.
I would argue that if you declare a value as const unsigned somename = 0;, it is no longer a simple zero, it is a named unsigned constant with the value zero. So should not be considered equivalent to a pointer type, leaving only one plausible candidate.
Having said that, BOTH of the set functions require a conversion (the argument is neither an int64_t nor a const char *), so one could argue that MSVC is right [the compiler must pick the overload that requires the least conversion; if multiple overloads require an equal amount of conversion, the call is ambiguous], although I still don't think the compiler should accept a named constant with the value zero as equivalent to a pointer...
Sorry, this is probably more of a "comment" than an answer: I started writing with the intention of saying "gcc/clang are right", but thinking more about it, I came to the conclusion that although I would be happier with that behaviour, it's not clear that it is the correct behaviour.

Visual C++: forward an array as a pointer

I've cut down some C++ 11 code that was failing to compile on Visual Studio 2015 to the following which I think should compile (and does with clang and gcc):
#include <utility>

void test(const char* x);

int main()
{
    const char x[] = "Hello world!";
    test(std::forward<const char*>(x));
}
I understand the call to forward isn't necessary here. This is cut down from a much more complex bit of code that decays any arrays in a variadic argument down to pointers and forwards everything on. I'm sure can find ways to work around this with template specialization or SFINAE, but I'd like to know whether it's valid C++ before I go down that road. The compiler is Visual Studio 2015, and the problem can be recreated on this online MSVC compiler. The compile error is:
main.cpp(13): error C2665: 'std::forward': none of the 2 overloads could convert all the argument types
c:\tools_root\cl\inc\type_traits(1238): note: could be '_Ty &&std::forward<const char*>(const char *&&) noexcept'
        with
        [
            _Ty=const char *
        ]
c:\tools_root\cl\inc\type_traits(1231): note: or '_Ty &&std::forward<const char*>(const char *&) noexcept'
        with
        [
            _Ty=const char *
        ]
main.cpp(13): note: while trying to match the argument list '(const char [13])'
Update:
@Yakk has suggested an example more like this:
void test(const char*&& x);

int main()
{
    const char x[] = "Hello world!";
    test(x);
}
Which gives a more informative error:
main.cpp(7): error C2664: 'void test(const char *&&)': cannot convert argument 1 from 'const char [13]' to 'const char *&&'
main.cpp(7): note: You cannot bind an lvalue to an rvalue reference
Again, this compiles on gcc and clang. The compiler flags for Visual C++ were /EHsc /nologo /W4 /c. @Crazy Eddie suggests this might be down to a VC++ extension that allows temporaries to bind to non-const references.
To me this looks like a bug in MSVC where it tries to be clever with array-to-pointer and gets it wrong.
Breaking down your second example:
The compiler needs to initialize a const char*&& from an lvalue of type const char[13]. To do this, 8.5.3 says it creates a temporary of type const char* and initializes it with the const char[13], then binds the reference to the temporary.
Initializing a const char* from a const char[13] involves a simple array-to-pointer conversion, yielding a prvalue of const char* which is then copied into the temporary.
Thus the conversion is well defined, despite what MSVC says.
In your first example, it's not test() that is causing the issue, but the call to std::forward. std::forward<const char*> has two overloads, and MSVC is complaining neither is viable. The two forms are
const char*&& std::forward(const char*&&);
const char*&& std::forward(const char*&);
One takes an lvalue reference, one takes an rvalue reference. When considering whether either overload is viable, the compiler needs to find a conversion sequence from const char[13] to a reference to const char*.
Since the lvalue reference isn't const (it's a reference to a pointer to a const char; the pointer itself isn't const), the compiler can't apply the conversion sequence outlined above. In fact, no conversion sequence is valid, as the array-to-pointer conversion requires a temporary but you can't bind non-const lvalue references to temporaries. Thus MSVC is correct in rejecting the lvalue form.
The rvalue form, however, as I've established above, should be accepted but is incorrectly rejected by MSVC.
I believe std::decay<const char []>::type is what you're looking for: http://en.cppreference.com/w/cpp/types/decay
I think it should compile, but why are you bothering to use std::forward?
Isn't the correct solution simply to replace
    std::forward<const char*>(x)
with:
    (const char*)x
or, for the generic case, to replace:
    std::forward<decay_t<decltype(x)>>(x)
with:
    decay_t<decltype(x)>(x)
Using std::forward doesn't seem to have any purpose here, you have an array, you want to decay it to a pointer, so do that.

How to pass an array size as a template with template type?

My compiler behaves oddly when I try to pass a fixed-size array to a template function. The code looks as follows:
#include <algorithm>
#include <iostream>
#include <iterator>

template <typename TSize, TSize N>
void f(TSize (& array)[N]) {
    std::copy(array, array + N, std::ostream_iterator<TSize>(std::cout, " "));
    std::cout << std::endl;
}

int main() {
    int x[] = { 1, 2, 3, 4, 5 };
    unsigned int y[] = { 1, 2, 3, 4, 5 };
    f(x);
    f(y); // line 15 (see the error message)
}
It produces the following compile error in GCC 4.1.2:
test.cpp|15| error: size of array has non-integral type ‘TSize’
test.cpp|15| error: invalid initialization of reference of type
‘unsigned int (&)[1]’ from expression of type ‘unsigned int [5]’
test.cpp|6| error: in passing argument 1 of ‘void f(TSize (&)[N])
[with TSize = unsigned int, TSize N = ((TSize)5)]’
Note that the first call compiles and succeeds. This seems to imply that while int is integral, unsigned int isn't.
However, if I change the declaration of my above function template to
template <typename TSize, unsigned int N>
void f(TSize (& array)[N])
the problem just goes away! Notice that the only change here is from TSize N to unsigned int N.
Section [dcl.type.simple] in the final draft ISO/IEC FDIS 14882:1998 seems to imply that an "integral type" is either signed or unsigned:
The signed specifier forces char objects and bit-fields to be signed; it is redundant with other integral types.
Regarding fixed-size array declarations, the draft says [dcl.array]:
If the constant-expression (expr.const) is present, it shall be an integral constant expression and its value shall be greater than zero.
So why does my code work with an explicit unsigned size type, with an inferred signed size type but not with an inferred unsigned size type?
EDIT: Serge wants to know where I'd need the first version. First, this code example is obviously simplified. My real code is a bit more elaborate. The array is actually an array of indices/offsets into another array. So, logically, the type of the array should be the same as its size type for maximum correctness. Otherwise, I might get a type mismatch (e.g. between unsigned int and std::size_t). Admittedly, this shouldn't be a problem in practice, since the compiler implicitly converts to the larger of the two types.
EDIT 2: I stand corrected (thanks, litb): size and offset are of course logically different types, and offsets into C arrays in particular are of type std::ptrdiff_t.
Hmm, the Standard says in 14.8.2.4 / 15:
If, in the declaration of a function template with a non-type template-parameter, the non-type template-parameter is used in an expression in the function parameter-list and, if the corresponding template-argument is deduced, the template-argument type shall match the type of the template-parameter exactly, except that a template-argument deduced from an array bound may be of any integral type.
Providing this example:
template<int i> class A { /* ... */ };

template<short s> void f(A<s>);

void k1() {
    A<1> a;
    f(a);    // error: deduction fails for conversion from int to short
    f<1>(a); // OK
}
That suggests that the compilers that fail to compile your code (apparently GCC and Digital Mars) get it wrong. I tested the code with Comeau, and it compiles your code fine. I don't think it makes a difference whether the type of the non-type template parameter depends on the type-parameter or not: 14.8.2.4/2 says the template arguments should be deduced independently from each other and then combined into the type of the function parameter. Combined with /15, which allows the type of the dimension to be of a different integral type, I think your code is all fine. As always, I play the C++-is-complicated-so-I-may-be-wrong card :)
Update: I've looked into the passage in GCC where it spits out that error message:
...
type = TREE_TYPE (size);

/* The array bound must be an integer type.  */
if (!dependent_type_p (type) && !INTEGRAL_TYPE_P (type))
  {
    if (name)
      error ("size of array %qD has non-integral type %qT", name, type);
    else
      error ("size of array has non-integral type %qT", type);
    size = integer_one_node;
    type = TREE_TYPE (size);
  }
...
It seems the compiler failed to mark the type of the size as dependent in an earlier code block. As that type is a template parameter, it is a dependent type (see 14.6.2.1).
Update: GCC developers fixed it: Bug #38950