I have implemented type-parameterized tests (Sample #6) to apply the same test case to more than one class. When I assign a string literal to a signed char[], unsigned char[], const signed char[], or const unsigned char[], I get:
../stackoverflow.cpp: In member function ‘void IosTest_DummyTest_Test<gtest_TypeParam_>::TestBody() [with gtest_TypeParam_ = std::basic_istream<char, std::char_traits<char> >]’:
../stackoverflow.cpp:34: instantiated from here
../stackoverflow.cpp:32: error: char-array initialized from wide string
What is more interesting is that when I apply the test case to one type everything goes just fine, but when I add a second type it blows up. I could reproduce the error with the following code:
#include "gtest/gtest.h"
#include <iostream>
// Factory methods
template<class T> std::ios* CreateStream();
template<>
std::ios* CreateStream<std::istream>() {
return &std::cin;
}
template<>
std::ios* CreateStream<std::ostream>() {
return &std::cout;
}
// Fixture class
template<class T>
class IosTest: public ::testing::Test {
protected:
IosTest() : ios_(CreateStream<T>()) {}
virtual ~IosTest() {}
std::ios* const ios_;
};
using testing::Types;
typedef Types<std::istream, std::ostream> Implementations;
TYPED_TEST_CASE(IosTest, Implementations);
TYPED_TEST(IosTest, DummyTest) {
signed char c[] = ".";
this->ios_->fill(c[0]);
};
The line typedef Types<std::istream, std::ostream> Implementations; creates a list of types called Implementations, and the following line, TYPED_TEST_CASE(IosTest, Implementations);, specifies that the test case IosTest will be applied to the types defined in the Implementations list.
As I have already said, if I remove either std::istream or std::ostream from the Implementations list I can compile and run the tests without any warning (I am using the -Wall flag). Can anyone explain this phenomenon?
Is it possible your gtest library was built with a different compiler version than the one you are compiling your app (stackoverflow.cpp) with? I recall seeing this error message in connection with a lib I had built with a newer version of gcc and then tried to link with an older version of gcc.
You can try building gtest from source. It comes with a script that extracts and fuses everything into a single header file and a single cpp file.
Look in your gtest installation for this python script:
gtest/scripts/fuse_gtest_files.py
There are instructions in the script for how to run it. You end up with two files:
gtest-all.cc
gtest.h
You only need to do this once, then add the two files to your makefile. I do exactly this for distributing a Linux-based app to a customer.
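If I remember the script's interface correctly, it takes an output directory (check the instructions at the top of the script itself, as the exact arguments may differ between gtest versions):

  python gtest/scripts/fuse_gtest_files.py OUTPUT_DIR

After that, compile gtest-all.cc together with your test sources and add OUTPUT_DIR to your include path.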
It looks like the GCC bug described here.
If you change signed char c[] = "."; to char c[] = "."; everything seems to work just fine.
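For example, the test body from the question compiles cleanly on both stream types with just that one change:

  TYPED_TEST(IosTest, DummyTest) {
    char c[] = ".";
    this->ios_->fill(c[0]);
  }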
I just bounced into something subtle in the vicinity of std::visit and std::function that baffles me. I'm not alone, but the only other folks I could find did the "workaround and move on" dance, and that's not enough for me:
https://github.com/fmtlib/fmt/issues/851
https://github.com/jamboree/bustache/issues/11
This may be related to an open issue in the LWG, but I think something more sinister is happening here:
https://cplusplus.github.io/LWG/issue3052
Minimal Example:
// workaround 1: don't include <variant>
#include <variant>
#include <functional>

struct Target
{
    Target *next = nullptr;
};

struct Visitor
{
    void operator()(const Target &tt) const { }
};

// workaround 2: concretely use 'const Visitor &' instead of 'std::function<...>'
void visit(const Target &target, const std::function<void(const Target &)> &visitor)
{
    visitor(target);
    if(target.next)
        visit(*target.next, visitor); // workaround 3: explicitly invoke ::visit(...)
        //^^^ problem: compiler is trying to resolve this as std::visit(...)
}

int main(int argc, char **argv, char **envp)
{
    return 0;
}
Compile with g++ -std=c++17, tested using:
g++-7 (Ubuntu 7.5.0-3ubuntu1~18.04)
g++-8 (Ubuntu 8.4.0-1ubuntu1~18.04)
The net result is that the compiler tries to use std::visit for the clearly-not-std invocation of visit(*target.next, visitor):
g++-8 -std=c++17 -o wtvariant wtvariant.cpp
In file included from sneakyvisitor.cpp:3:
/usr/include/c++/8/variant: In instantiation of ‘constexpr decltype(auto) std::visit(_Visitor&&, _Variants&& ...) [with _Visitor = Target&; _Variants = {const std::function<void(const Target&)>&}]’:
wtvariant.cpp:20:31: required from here
/usr/include/c++/8/variant:1385:23: error: ‘const class std::function<void(const Target&)>’ has no member named ‘valueless_by_exception’
if ((__variants.valueless_by_exception() || ...))
~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/8/variant:1390:17: error: no matching function for call to ‘get<0>(const std::function<void(const Target&)>&)’
std::get<0>(std::forward<_Variants>(__variants))...));
~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In my real use case, I thought someone had snuck a "using namespace std" into the header space of my tree and I was gonna be grumpy. However, this minimal example demonstrates otherwise.
Critical Question: given that I have not created nor used any namespaces, why is std::visit(...) getting involved here at all?
WRT workaround 1: At least in the variant header, visit(...) is declared properly in the std namespace
WRT workaround 2: If the second argument is anything other than a std::function, it compiles just fine, leading me to believe that something more subtle is going on here.
WRT workaround 3: I understand that two colons are a small price to pay, but that they are necessary at all feels dangerous to me given my expectations for what it means to put a free function like visit(...) into a namespace.
Any one of the three marked workarounds will suppress the compiler error, but I'm personally intolerant of language glitches that I can't wrap my head around (Though I understand the necessity, I'm still uneasy about how often I have to sprinkle 'typename' into templates to make them compile).
Also of note, if I try to make use of other elements of the std namespace without qualification (e.g., try a naked 'cout'), the compiler properly grumps about not being able to figure out the 'cout' that I'm after, so it's not as though the variant header is somehow flattening the std namespace.
Lastly, this problem persists even if I put my visit() method in its own namespace: the compiler really wants to use std::visit(...) unless I explicitly invoke my_namespace::visit(...).
What am I missing?
The argument visitor is a std::function, which lives in namespace std, so argument-dependent lookup (ADL) finds visit in namespace std as well.
If you always want the visit in the global namespace, say so with ::visit.
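Here is a minimal sketch of the same mechanism with a made-up namespace standing in for std (all names here are hypothetical, for illustration only):

  #include <iostream>

  namespace ns {
      struct Widget { };              // argument type living in ns

      void frob(const Widget &) {     // found via ADL because Widget is in ns
          std::cout << "ns::frob\n";
      }
  }

  int main() {
      ns::Widget w;
      frob(w);    // unqualified call: ADL adds ns to the lookup set and finds ns::frob
      return 0;
  }

In the question, the std::function argument puts namespace std into the lookup set, so std::visit becomes a candidate and overload resolution happens to prefer it; the qualified call ::visit(...) does not use ADL at all, which is why it fixes the build.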
I use a native C++ code base from C#, with a C++/CLI wrapper built around it (with Visual Studio 2013). There are two projects:
NativeCodeBase: simple C++ project set to be built into a static lib.
ManagedWrapper: C++/CLI project referencing NativeCodeBase.
I have the following native "interface" in NativeCodeBase:
class ITest {
public:
    virtual void Foo(const std::string& str, MyEnum me) = 0;
};
For which I have a native implementation in the ManagedWrapper project.
In the header:
class TestManaged : public ITest {
public:
    virtual void Foo(const std::string& str, MyEnum me) override;
};
In the cpp:
void TestManaged::Foo(const std::string& str, MyEnum me) {
    int length = str.length();
}
The MyEnum enum is used both in native and managed code, so in its implementation I use a conditionally compiled C++/CLI extension, to make it usable from C#:
#ifdef _MANAGED
public
#endif
enum class MyEnum : unsigned char
{
    Baz = 0,
    Qux = 1
};
In my native code I have a reference to ITest and call its Foo function with a local std::string variable. When Foo is called, I can see in the debugger that the string passed as an argument is a valid string object.
The call is similar to this:
void Bar(ITest& test) {
std::string str = "test";
test.Foo(str, MyEnum::Baz);
}
However, if I put a breakpoint at the beginning of TestManaged::Foo, the debugger says that str has <undefined value>, and the length() call crashes in the following function in the <xstring> header:
size_type length() const _NOEXCEPT
{   // return length of sequence
    return (this->_Mysize);
}
The debugger displays <undefined value> for the this pointer as well.
What can be the reason for this? Do references somehow get corrupted when passed between the two libraries?
(Additional info: I previously did not build the NativeCodeBase project as a separate lib, but linked all its source files directly into the CLI project, and the same code base worked without any problem. It started failing once I configured it to be built into a separate lib and added a reference in the CLI project to the native one.)
The problem wasn't with the reference itself. The problem was with the second enum parameter. The implementation of the enum class looked like this:
#ifdef _MANAGED
public
#endif
enum class MyEnum : unsigned char
{
    Baz = 0,
    Qux = 1
};
The #ifdef directive was put there in order to create a native enum when built for native C++, but create a CLI enum when built for C++/CLI.
This worked well when all the source files were linked into the CLI project and every piece of source was built again for the CLI project. However, this approach no longer works now that I want to use the native lib from the CLI side.
I guess the problem was that the same header was compiled differently in the two libraries, so the caller and the callee saw different binary interfaces of the object, and thus the arguments got garbled when passed. Is this correct?
I got rid of the conditionally compiled public keyword and it started working properly again.
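A sketch of one way to keep the two builds ABI-compatible (hypothetical names, mirroring the fix described above rather than the exact project code): keep the shared enum purely native, and give the wrapper its own managed enum plus a trivial conversion helper.

  // shared native header, compiled identically in both projects
  enum class MyEnum : unsigned char
  {
      Baz = 0,
      Qux = 1
  };

  // C++/CLI wrapper only (hypothetical)
  public enum class ManagedMyEnum : unsigned char
  {
      Baz = 0,
      Qux = 1
  };

  inline MyEnum ToNative(ManagedMyEnum me)
  {
      return static_cast<MyEnum>(static_cast<unsigned char>(me));
  }

This way the header seen by NativeCodeBase and by ManagedWrapper always describes the same type, so both sides agree on the binary interface of ITest::Foo.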
Our project uses boost::serialization to serialize many things.
But some types are not correctly registered, and when serializing them we get an "unregistered class" error.
I have narrowed the problem down to BOOST_CLASS_EXPORT_KEY, which, for some types, is not generating code.
What BOOST_CLASS_EXPORT_KEY does is:
namespace boost {
namespace serialization {

template<>
struct guid_defined< T > : boost::mpl::true_ {};

template<>
inline const char * guid< T >() {
    return K;
}

} /* serialization */
} /* boost */
All objects that are serialized inherit from a base class called Serializable.
Serialization is always done via a pointer to the base class.
This works fine except for one case:
There is a class template SerializableList, which is a Serializable that holds a list of T:
template< typename T>
class SerializableList
{
...
std::vector<T> m_list;
template<class Archive>
void serialize( Archive & ar, const unsigned int /*version*/ )
{
ar & boost::serialization::base_object<businessObjects::Serializable>(*this);
ar & mList;
}
};
In dedicated hpp and cpp files we then declare each instantiation of this template to Boost.Serialization like this:
hpp:
BOOST_CLASS_EXPORT_KEY( SerializableList<SomeT*> );
BOOST_CLASS_EXPORT_KEY( SerializableList<SomeOtherT*> );
BOOST_CLASS_EXPORT_KEY( SerializableList<AThirdT*> );
cpp:
BOOST_CLASS_EXPORT_IMPLEMENT( SerializableList<SomeT*> );
BOOST_CLASS_EXPORT_IMPLEMENT( SerializableList<SomeOtherT*> );
BOOST_CLASS_EXPORT_IMPLEMENT( SerializableList<AThirdT*> );
But half of these lines do not produce executable code in the final executable! If we put a breakpoint on each of those lines and run, half the breakpoints disappear; those that stay are on the working types (the ones we can serialize).
For instance, the breakpoints would stay on SerializableList<SomeT*> and SerializableList<AThirdT*> but not on SerializableList<SomeOtherT*>.
Btw, we have also tried calling boost::serialization::guid<T>() directly, and while it works fine for, say,
boost::serialization::guid<SerializableList<SomeT*> >(), which returns the key,
it doesn't for
boost::serialization::guid<SerializableList<SomeOtherT*> >(), which calls the default implementation...
So is this a compiler bug (we use Visual C++ 2010 SP1), or is there some good reason for the compiler to ignore some of those specializations?
I forgot to mention, all this code lies in a library, which is linked against the exe project. I've tried with different exe projects and sometimes it works, sometimes it doesn't... The compilation options are the same... I really have no clue what's going on :'(
We found the solution.
One (serializable) class had several SerializableList members but did not include the file with all the BOOST_CLASS_EXPORT_KEY lines.
The other projects, which were working, didn't use that particular class...
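In other words (a sketch with hypothetical file names), every class that holds or serializes these lists must see the export-key declarations:

  // serializable_exports.hpp (hypothetical)
  BOOST_CLASS_EXPORT_KEY( SerializableList<SomeT*> );
  BOOST_CLASS_EXPORT_KEY( SerializableList<SomeOtherT*> );
  BOOST_CLASS_EXPORT_KEY( SerializableList<AThirdT*> );

  // the class that was missing the include (hypothetical)
  #include "serializable_exports.hpp"

  class SomeClass : public businessObjects::Serializable {
      SerializableList<SomeOtherT*> m_lists;
      // ...
  };

Without that include, the translation units serializing SomeClass never see the guid<SerializableList<SomeOtherT*>>() specialization, which matches both the disappearing breakpoints and the default guid implementation being called.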
I have written a custom std::basic_streambuf and std::basic_ostream because I want an output stream that I can get a JNI string from in a manner similar to how you can call std::ostringstream::str(). These classes are quite simple.
namespace myns {

class jni_utf16_streambuf : public std::basic_streambuf<char16_t>
{
    JNIEnv * d_env;
    std::vector<char16_t> d_buf;

    virtual int_type overflow(int_type);

public:
    jni_utf16_streambuf(JNIEnv *);
    jstring jstr() const;
};

typedef std::basic_ostream<char16_t, std::char_traits<char16_t>> utf16_ostream;

class jni_utf16_ostream : public utf16_ostream
{
    jni_utf16_streambuf d_buf;

public:
    jni_utf16_ostream(JNIEnv *);
    jstring jstr() const;
};

// ...

} // namespace myns
In addition, I have made four overloads of operator<<, all in the same namespace:
namespace myns {
// ...
utf16_ostream& operator<<(utf16_ostream&, jstring) throw(std::bad_cast);
utf16_ostream& operator<<(utf16_ostream&, const char *);
utf16_ostream& operator<<(utf16_ostream&, const jni_utf16_string_region&);
jni_utf16_ostream& operator<<(jni_utf16_ostream&, jstring);
// ...
} // namespace myns
The implementation of jni_utf16_streambuf::overflow(int_type) is trivial. It just doubles the buffer width, puts the requested character, and sets the base, put, and end pointers correctly. It is tested and I am quite sure it works.
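For reference, a minimal sketch of what such an overflow could look like, reconstructed from the description above (this is my sketch under those assumptions, not the actual tested implementation):

  myns::jni_utf16_streambuf::int_type
  myns::jni_utf16_streambuf::overflow(int_type ch)
  {
      if (traits_type::eq_int_type(ch, traits_type::eof()))
          return traits_type::not_eof(ch);

      std::ptrdiff_t used = pptr() - pbase();               // characters already written
      d_buf.resize(d_buf.empty() ? 64 : 2 * d_buf.size());  // double the buffer width

      setp(d_buf.data(), d_buf.data() + d_buf.size());      // re-anchor the put area
      pbump(static_cast<int>(used));                        // restore the put position

      *pptr() = traits_type::to_char_type(ch);              // put the requested character
      pbump(1);
      return ch;
  }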
The jni_utf16_ostream works fine when inserting Unicode characters. For example, this results in the stream containing "hello, world":
myns::jni_utf16_ostream o(env);
o << u"hello, wor" << u'l' << u'd';
My problem is that as soon as I try to insert an integer value, the stream's badbit gets set, for example:
myns::jni_utf16_ostream o(env);
if (o.bad()) throw "bad bit before"; // does not throw
int32_t x(5);
o << x;
if (o.bad()) throw "bad bit after"; // throws :(
I don't understand why this is happening! Is there some other method on std::basic_streambuf that I need to implement?
It looks like the answer is that char16_t support is only partly implemented in GCC 4.8. The library headers don't install the facets needed to convert numbers. Here is what the Boost.Locale project says about it:
GNU GCC 4.5/C++0x Status
GNU C++ compiler provides decent support for C++0x characters, however: the standard library does not install any std::locale facets for this support, so any attempt to format numbers using char16_t or char32_t streams will just fail. The standard library is missing the specializations for the required char16_t/char32_t locale facets, so the "std" backend is not buildable, as essential symbols are missing, and the codecvt facet can't be created either.
Visual Studio 2010 (MSVC10)/C++0x Status
MSVC provides all required facets, however: the standard library does not export std::locale::id for these facets in the DLL, so it is not usable with the /MD and /MDd compiler flags and requires static linking of the runtime library. Also, char16_t and char32_t are not distinct types but rather aliases of unsigned short and unsigned int, which contradicts the C++0x requirements and makes it impossible to write char16_t/char32_t to a stream, causing multiple faults.
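Until those facets exist, one workaround is to format numbers with the ordinary char machinery and widen the result, since digits and signs are plain ASCII. A sketch (to_u16 is a hypothetical helper, not part of the classes above):

  #include <cstdint>
  #include <sstream>
  #include <string>

  std::u16string to_u16(std::int32_t value)
  {
      std::ostringstream narrow;
      narrow << value;                    // the char facets are installed and work
      const std::string s = narrow.str();
      return std::u16string(s.begin(), s.end());  // safe: only ASCII characters
  }

  // usage: o << to_u16(x);  instead of  o << x;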
I am trying to cross-compile some code for Windows using MinGW. The code is fairly simple:
Header:
class DragLabel : public QLabel
{
    Q_OBJECT

public:
    DragLabel();
    void fn(QString path, int id, bool small);
};
cpp:
#include "draglabel.h"
DragLabel::DragLabel()
{
/* Snip ... */
};
void DragLabel::fn(QString path, int id, bool small)
{
(void)d;
};
The example function fails to compile, giving me:
error: two or more data types in declaration of 'parameter'
for the declaration of fn(QString...).
[EDIT:] Sorry, I forgot to mention that this error happens only if the bool parameter is declared, so the function without it:
void fn(QString path, int id);
works just fine.
It compiles fine using qmake and make under Debian Linux.
Does anyone know what might be happening here?
Thanks
It seems that small is some extension keyword of MinGW (I couldn't find it in the standard). Judging by the observation that "when I change everything to int it works", small behaves like a qualifier such as long or signed that extends an int declaration.
Try changing the variable name from small to anything else.
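For example, the declaration compiles once the parameter no longer collides (isSmall is just an arbitrary replacement name):

  void fn(QString path, int id, bool isSmall); // 'small' renamed to avoid the clash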