In a shared library (.so) I define a std::shared_ptr to a class object, which is returned across the library boundary to the caller, the main routine of a Qt 5.4 project. There the pointer is used in an if statement. Since the temporary evaluated in the bool condition is the last owner of the shared pointer, the object is deleted after that operation finishes and the destructor is called.
.so file (an Autotools-project):
#define STD_SHARED_PTR std::shared_ptr
#define STD_WEAK_PTR std::weak_ptr
typedef STD_SHARED_PTR<RenderingControl> RDCH;
typedef STD_WEAK_PTR<RenderingControl> WEAK;
class MediaRenderer {
public:
    RDCH rdc();
};

class RenderingControl {
public:
    RenderingControl();
    virtual ~RenderingControl();
};
RenderingControl::RenderingControl() {
    ...
}

RenderingControl::~RenderingControl() {
    cerr << "Destructor called" << endl;
}

RDCH MediaRenderer::rdc() {
    RDCH rdcl = RDCH(new RenderingControl());
    long foo = rdcl.use_count();
    WEAK rdc = rdcl;
    return rdcl;
}
.cpp (a Qt5.4 project):
typedef STD_SHARED_PTR<MediaRenderer> MRDH;

MRDH renderer = MRDH(new MediaRenderer());
if (renderer->rdc()) {
    ...
    return;
}
Everything works fine on an x86 machine compiled with either Qt 4.8 or Qt 5.4: the destructor is called after the if statement finishes. Cross-compiled for ARM (a Raspberry Pi 2) using Qt 5.4, however, the destructor is not called. If I additionally print use_count() for debugging, it yields 1 in both the .so and the .cpp file on x86, but 1 in the .so and 0 in the .cpp on ARM.
If I compile on ARM using Qt 4.8, everything is fine on ARM, too. But why does it not work on ARM using Qt 5.4?
Thank you!
Obviously, the reason was that I had different versions of libstdc++ on the same system for compiling an Autotools project and a Qt project. However, even though I solved the problem by converting the library into a Qt project, ensuring that both parts use the same libstdc++, I do not understand why there are different versions on the same machine. Is this a specific feature of Qt? Maybe somebody could explain...
Related
I have a DLL containing multiple functions that can quickly perform arithmetic operations on extremely large integers. My test program runs smoothly in Visual Studio 2019, as follows.
#include <iostream>
#include <string>
#include <Windows.h>
using namespace std;

int main()
{
    HINSTANCE myDll = LoadLibrary(L".\\BigIntDLL.dll");
    typedef string (*func)(string a, string b);
    func expBigInt = (func)GetProcAddress(myDll, "expBigInt");
    string y = expBigInt("2", "10000"); // calculates 2^10000 and returns it as a string
    cout << y;
}
So I moved the code directly into my Qt project as part of widget.cpp, and also placed BigIntDLL.dll and the .lib in the project directory. The compilation succeeded, but when debugging my interface, the program crashed with a segmentation fault on the call to the expBigInt function.
void Widget::on_expButton_clicked()
{
    getTextEditNum();
    Output = expBigInt(Input1, Input2); // crashes here
    writeResult(Output);
}
I am not really sure where the real problem is, but I now suspect that I have not successfully called the functions in this DLL, causing some memory issues.
I've been porting a C++ app from Visual Studio 2013 to Visual Studio 2017. Aside from the plethora of new warnings I had to fix, compilation and linking went okay.
However, when running the app, it 'stalled' when trying to re-enter the constructor of a singleton (when successive function calls form a loop back to the constructor). It seems that this behaviour was okay in VS2013, but is no longer valid in VS2017. There is no error message.
I'm aware of all the bad things related to singletons, and that there should at least not be loops. The question is not there.
Is there a way to tell the VS2017 compiler that I'd like to shoot myself in the foot, and allow the same behaviour that was there in VS2013?
I don't have access to the code that causes this behaviour because it comes from a third-party library, this is why I can't 'just fix it', unfortunately.
Here is an example which works in VS2013, but doesn't work in VS2017:
main.cpp
#include <iostream>
#include "Singleton.h"

int
main( void )
{
    std::cout << "let's do this!" << std::endl;
    int two = Singleton::GetReference().getTwo();
    std::cout << "ok" << std::endl;
    return 0;
}
Singleton.h
#pragma once

class Stuff;

class Singleton
{
public:
    static Singleton& GetReference();
    int getTwo() { return 2; }
private:
    Singleton();
    Stuff* stuff;
};
Singleton.cpp
#include "Singleton.h"
#include "Stuff.h"

Singleton&
Singleton::GetReference() {
    static Singleton theInstance;
    return theInstance;
}

Singleton::Singleton()
{
    stuff = new Stuff();
}
Stuff.h
#pragma once

class Stuff
{
public:
    Stuff();
private:
    int two;
};
Stuff.cpp
#include "Stuff.h"
#include "Singleton.h"

Stuff::Stuff()
{
    two = Singleton::GetReference().getTwo();
}
In the code above, when stepping through with the debugger, the first time we reach the line static Singleton theInstance; it works as expected, but the second time, F11 steps into the file thread_safe_statics.cpp, into the method extern "C" void __cdecl _Init_thread_header(int* const pOnce). Shift+F11 exits the method, and the program then waits indefinitely at the line mentioned above (observed by pausing the program from the debugger).
PS
This issue probably occurs in Visual Studio 2015 too, as the documentation linked from the accepted answer mentions VS2015.
/Zc:threadSafeInit-
The general "Conformance" page is MSDN: Conformance, which details which new features you can disable.
I needed the analogous flag for sized deallocation (/Zc:sizedDealloc-), where my new compiler was creating a sized operator delete for a library, which broke the expectations of older compiled code.
As this is a compile flag, at least some of the code would be in your control, and you should be able to unravel the beast.
The constructor Stuff::Stuff is calling a function on an incompletely constructed object, which would be undefined behavior if, for example, the value 2 were not set until the end of the constructor.
Probably the Singleton needs to be split in two: one part that delivers the early static data (e.g. the 2), and a second that delivers the held Stuff object. Stuff would rely only on the first part, which would break the deadlock.
Alternatively, give Stuff a second constructor that tells it which object to use, called from Singleton::Singleton.
The MSDN article on disabling "magic statics": MSDN: disable thread-safe static initialization
I stumbled upon an issue while using libstdc++'s std::any implementation with MinGW across a shared library boundary. It produces a std::bad_any_cast where it obviously should not (I believe).
I use mingw-w64, gcc-7, and compile the code with -std=c++1z.
The simplified code:
main.cpp:
#include <any>
#include <string>

// prototype from lib.cpp
void do_stuff_with_any(const std::any& obj);

int main()
{
    do_stuff_with_any(std::string{"Hello World"});
}
lib.cpp:
Will be compiled into a shared library and linked with the executable from main.cpp.
#include <any>
#include <iostream>

void do_stuff_with_any(const std::any& obj)
{
    std::cout << std::any_cast<const std::string&>(obj) << "\n";
}
This triggers a std::bad_any_cast, although the any passed to do_stuff_with_any does contain a string. I dug into gcc's any implementation, and it seems to compare the address of a static inline member function (a manager function chosen from a template struct depending on the type of the stored object) to check whether the any holds an object of the requested type.
And the address of this function seems to change across the shared library boundary.
Isn't std::any guaranteed to work across shared library boundaries? Does this code trigger UB somewhere? Or is this a bug in the gcc implementation? I am pretty sure it works on Linux, so is this only a bug in MinGW? Is it known, or should I report it somewhere if so? Any ideas for (temporary) workarounds?
While it is true that this is an issue on how Windows DLLs work, and that as of GCC 8.2.0, the issue still remains, this can be easily worked around by changing the __any_caster function inside the any header to this:
template<typename _Tp>
void* __any_caster(const any* __any)
{
    if constexpr (is_copy_constructible_v<decay_t<_Tp>>)
    {
#if __cpp_rtti
        if (__any->type().hash_code() == typeid(_Tp).hash_code())
#else
        if (__any->_M_manager == &any::_Manager<decay_t<_Tp>>::_S_manage)
#endif
        {
            any::_Arg __arg;
            __any->_M_manager(any::_Op_access, __any, &__arg);
            return __arg._M_obj;
        }
    }
    return nullptr;
}
Or something similar, the only relevant part is the comparison line wrapped in the #if.
To elaborate: there are two copies of the manager function, one in the exe and one in the dll. The passed object holds the address from the exe, because that is where it was created; but once it reaches the dll side, the pointer gets compared against the one in the dll's address space, which will never match. So the type_info hash_codes should be compared instead.
While trying to replicate the behavior from this question in Visual Studio 2017, I found that instead of &FuncTemplate<C> linking to the exact same address, the function template<> FuncTemplate<C>() {} gets copied into dllA and dllB, so the corresponding test program always prints "not equal".
The solution was set up fresh with 3 Win32 projects: one console application and two DLLs. To link the DLLs, I added them as references to the console project (linking manually didn't work either). The only change I made to the code was adding __declspec(dllexport) to a() and b().
Is this behavior standard-conformant? It seems like the ODR should collapse the copies of the function here. Is there a way to get the same behavior seen in the other question?
Template.h
#pragma once

typedef void (*FuncPtr)();

template<typename T>
void FuncTemplate() {}

class C {};
a.cpp - dll project 1
#include "Template.h"
__declspec(dllexport) FuncPtr a() {
    return &FuncTemplate<C>;
}
b.cpp - dll project 2
#include "Template.h"
__declspec(dllexport) FuncPtr b() {
    return &FuncTemplate<C>;
}
main.cpp - console project
#include <iostream>
#include "i.h"

// seems like there is no __declspec(dllimport) needed here
FuncPtr a();
FuncPtr b();

int main() {
    std::cout << (a() == b() ? "equal" : "not equal") << std::endl;
    return 0;
}
C++ compilation is generally split into two parts, the compiler itself and the linker. It is the job of the linker to find and consolidate all the compilations of an identical function into a single unit and throw away the duplicates. At the end of a linking step, every function should either be part of the linker output or flagged as needing to be resolved at execution time from another DLL. Each DLL will contain a copy of the function if it is being used within that DLL or exported from it.
The process of resolving dynamic links at execution time is outside of the C++ tool chain, it happens at the level of the OS. It doesn't have the ability to consolidate duplicates like the linker does.
I think as far as ODR is concerned, each DLL is considered a separate executable.
I'm working with some legacy C++ code that is behaving in a way I don't understand. I'm using the Microsoft compiler but I've tried it with g++ (on Linux) as well—same behavior.
I have 4 files listed below. In essence, it's a registry that's keeping track of a list of members. If I compile all files and link the object files into one program, it shows the correct behavior: registry.memberRegistered is true:
>cl shell.cpp registry.cpp member.cpp
>shell.exe
1
So somehow the code in member.cpp gets executed (which I don't really understand, but OK).
However, what I want is to build a static library from registry.cpp and member.cpp, and link that against the executable built from shell.cpp. But when I do this, the code in member.cpp does not get executed and registry.memberRegistered is false:
>cl registry.cpp member.cpp /c
>lib registry.obj member.obj -OUT:registry.lib
>cl shell.cpp registry.lib
>shell.exe
0
My questions: how come it works the first way and not the second, and is there a way (e.g. compiler/linker options) to make it work the second way?
registry.h:
class Registry {
public:
    static Registry& get_registry();
    bool memberRegistered;
private:
    Registry() {
        memberRegistered = false;
    }
};
registry.cpp:
#include "registry.h"

Registry& Registry::get_registry() {
    static Registry registry;
    return registry;
}
member.cpp:
#include "registry.h"

int dummy() {
    Registry::get_registry().memberRegistered = true;
    return 0;
}

int x = dummy();
shell.cpp:
#include <iostream>
#include "registry.h"

class shell {
public:
    shell() {};
    void init() {
        std::cout << Registry::get_registry().memberRegistered;
    };
};

int main() {
    shell *cf = new shell;
    cf->init();
}
You have been hit by what is popularly known as the static initialization order fiasco.
The basic issue is that the order of initialization of static objects across translation units is unspecified.
The call Registry::get_registry().memberRegistered in "shell.cpp" may happen before int x = dummy(); in "member.cpp".
EDIT:
Well, x isn't ODR-used. Therefore, the compiler is permitted to defer the evaluation of int x = dummy(); until after entering main(), or even to omit it entirely.
Just a quote about it from CppReference (emphasis mine)
It is implementation-defined whether dynamic initialization
happens-before the first statement of the main function (for statics)
or the initial function of the thread (for thread-locals), or deferred
to happen after.
If the initialization is deferred to happen after the first statement
of main/thread function, it happens before the first odr-use of any
variable with static/thread storage duration defined in the same
translation unit as the variable to be initialized. If no variable or function is odr-used from a given translation unit, the non-local variables defined in that translation unit may never be initialized (this models the behavior of an on-demand dynamic library)...
The only way to get your program working as you want is to make sure x is ODR-used:
shell.cpp
#include <iostream>
#include "registry.h"

class shell {
public:
    shell() {};
    void init() {
        std::cout << Registry::get_registry().memberRegistered;
    };
};

extern int x; //or extern int dummy();

int main() {
    shell *cf = new shell;
    cf->init();
    int k = x; //or dummy();
}
^ Now, your program should work as expected. :-)
This is a result of the way linkers treat libraries: they pick and choose the objects that define symbols left undefined by other objects processed so far. This helps keep executable sizes smaller, but when a static initialization has side effects, it leads to the fishy behavior you've discovered: member.obj / member.o doesn't get linked in to the program at all, although its very existence would do something.
Using g++, you can use:
g++ shell.cpp -Wl,-whole-archive registry.a -Wl,-no-whole-archive -o shell
to force the linker to put all of your library in the program. There may be a similar option for MSVC.
Thanks a lot for all the replies. Very helpful.
So both the solution proposed by WhiZTiM (making x ODR-used) and the one by aschepler (forcing the linker to include the whole library) work for me. The latter has my preference, since it doesn't require any changes to the code. However, there seems to be no MSVC equivalent for --whole-archive.
In Visual Studio I managed to solve the problem as follows (I have a project for the registry static library, and one for the shell executable):
In the shell project, add a reference to the registry project;
In the linker properties of the shell project, under General, set "Link Library Dependencies" and "Use Library Dependent Inputs" to "Yes".
If these options are set, registry.memberRegistered is properly initialized. However, after studying the compiler/linker commands, I concluded that setting these options simply results in VS passing the registry.obj and member.obj files to the linker, i.e.:
>cl /c member.cpp registry.cpp shell.cpp
>lib /OUT:registry.lib member.obj registry.obj
>link /OUT:shell.exe "registry.lib" shell.obj member.obj registry.obj
>shell.exe
1
To my mind, this is essentially the first approach from my original question. If you leave out registry.lib in the linker command, it works fine as well.
Anyway, for now, it's good enough for me.
I'm working with CMake, so now I need to figure out how to adjust the CMake settings to make sure the object files get passed to the linker. Any thoughts?
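One way to do this in CMake (a sketch, assuming source files named as in the question) is an object library: its object files are handed to the linker directly instead of being archived, so member.obj and its side-effectful static initializer cannot be dropped:

```cmake
# Build registry.cpp and member.cpp as an object library; their object
# files go straight onto the linker command line of any consumer.
add_library(registry_objs OBJECT registry.cpp member.cpp)

# Linking the objects directly is equivalent to passing member.obj and
# registry.obj to link.exe, as in the working command above.
add_executable(shell shell.cpp $<TARGET_OBJECTS:registry_objs>)
```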