I saw in a C++ program that the dlfcn library is used for dynamically linking to a shared object library chosen by the user of the C++ program at runtime, and for calling functions in the chosen shared object library via dlsym() and other functions in dlfcn.
Assume the user chooses a shared object library called x.so at runtime. x.so was compiled from a .cpp file with definitions of functions enclosed within extern "C". A comment in the .cpp file says that the usage of extern "C" is important, but gives no further explanation, and I wonder why.
Is it correct that there is only C++ code and no C code involved here? So is extern "C" not necessarily used only when mixing C and C++ code together?
Does whether the dlfcn library is statically or dynamically linked to the C++ program matter to the questions above?
Now compare to a simpler case, where the shared object library is known well before runtime, and the author of the C++ program specifies it in the C++ program before compiling it, without using dlfcn, and the shared object library and the C++ program are then dynamically linked at runtime. In this case, is extern "C" still necessary in the .cpp file which was compiled into the shared object library?
Thanks.
extern "C" changes the linkage, and affects name mangling. See What is name mangling, and how does it work?
Without it, exported names in the compiled object will normally be mangled. That means that they cannot be used from C. It also means that looking them up via dlsym() requires using the mangled name.
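As a rough sketch (the library path and function name here, do_work in x.so, are hypothetical), the lookup from the C++ program might look like this:

#include <dlfcn.h>
#include <cstdio>

int main() {
    // Open the library chosen by the user at runtime.
    void *handle = dlopen("./x.so", RTLD_NOW);
    if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

    // Because do_work was defined inside extern "C", its exported symbol is
    // simply "do_work". Without extern "C" you would have to pass the
    // compiler-specific mangled name instead (e.g. "_Z7do_worki" with g++).
    using work_fn = int (*)(int);
    auto do_work = reinterpret_cast<work_fn>(dlsym(handle, "do_work"));
    if (!do_work) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

    std::printf("%d\n", do_work(42));
    dlclose(handle);
    return 0;
}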
Is it correct that there is only C++ code and no C code involved here? So is extern "C" not necessarily used only when mixing C and C++ code together?
It is not clear what you mean here. If you are compiling C++, then only C++ code is involved at this stage. If you are then linking in any way with a module written in C, then C is involved. If you need an external C library or program to link to your C++-defined functions, then they need to be declared extern "C".
Does whether the dlfcn library is statically or dynamically linked to the C++ program matter to the questions above?
No (perhaps you should have explained why you think it might matter).
Now compare to a simpler case, where the shared object library is known well before runtime, and the author of the C++ program specifies it in the C++ program before compiling it, without using dlfcn, and the shared object library and the C++ program are then dynamically linked at runtime. In this case, is extern "C" still necessary in the .cpp file which was compiled into the shared object library?
It is necessary that the declared linkage of the symbols is consistent in the two C++ modules. Certainly, you could remove extern "C" if the modules are both C++. But if one module declares the symbol as extern "C" then the other must also.
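A tiny sketch of what "consistent" means in practice (the names here are hypothetical): put one declaration in a shared header and let both sides include it, so they cannot disagree.

// mylib.h -- included by both the shared object and the program
#ifdef __cplusplus
extern "C" {
#endif
int do_work(int x);
#ifdef __cplusplus
}
#endif

// mylib.cpp (compiled with g++ into the shared object):
//     #include "mylib.h"
//     int do_work(int x) { return 2 * x; }

// main.cpp (linked against the shared object at build time):
//     #include "mylib.h"
//     int main() { return do_work(21); }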
Related
I have several C++ classes which I decided to compile into a separate library usable by other applications. Therefore I made a simple C-compatible envelope which wraps the C++ classes. The function declarations in the header file are put in an extern "C" block.
When I compile this using MinGW g++, I get a shared library with decorated function names. Each function name has an @<args_size> suffix. To get rid of these names I passed the -Wl,--kill-at argument to g++. Then I finally got undecorated names.
Then I made a simple C++/Qt application which serves as a tester of the new library. Unfortunately when I include the library's header file into my C++ application, I get linking errors:
release/mainwindow.o:mainwindow.cpp:(.text+0xd6f): undefined reference to `_imp__TdbGetLastErr@12'
release/mainwindow.o:mainwindow.cpp:(.text+0x1163): undefined reference to `_imp__TdbGetAsyncErr@12'
release/mainwindow.o:mainwindow.cpp:(.text+0x166f): undefined reference to `_imp__TdbStop@4'
release/mainwindow.o:mainwindow.cpp:(.text+0x1698): undefined reference to `_imp__TdbForceTrigger@0'
...
To get rid of these errors I have to exclude -Wl,--kill-at when compiling the library. But then the library is not compatible with C applications. I believe the key is to make MinGW g++ link against the undecorated functions. Does anybody know how to do this?
EDIT: Adding more details requested in comments.
When compiling the library, I include its header (with extern "C") from the
source code, so the compiler should be aware of extern "C".
The library is not such a simple wrapper. Actually, it creates several C++ objects which are operated on through handles from C applications. It also catches exceptions from the C++ classes, etc.
I am having trouble with a piece of C code that is compiled as C++, producing a linker error from a third-party package.
The setup is that I have Lua, OOLua, and my program which utilizes the two. This project was moved from luabind due to its horrid build process. My code will occasionally throw an exception (which is expected) when constructing an object, and OOLua does not handle this error.
From the reading I've done, compiling Lua as C++ should alleviate this problem and stop the program from simply quitting, but the problem is that OOLua doesn't like Lua being C++ for some reason, and I cannot find references to why this could be.
tl;dr: If C code is compiled as C++, what problems could occur with linking, assuming it compiles correctly?
When Lua is compiled as C++ it actually uses C++ name mangling, and the normal headers should then be used from C++ as well. Don't be confused by the lua.hpp header: that should only be used from C++ when Lua is compiled as C. As a result, when you compile Lua as C++ you should not use extern "C". Unfortunately OOLua does use extern "C" here. You could try modifying the header in OOLua to not use extern "C" and then recompile OOLua, or file a bug with them to fix it properly.
Unfortunately not many people seem to be aware of the pitfalls of using Lua with C++ so many projects assume Lua is compiled as C as normal.
Long explanation:
When Lua is compiled as C there are no extern statements and therefore all functions get the default (extern "C"). When C files use Lua they use the normal header files which again contain no extern statements so the compiler presumes everything to be extern "C" and it all matches the library. If you use Lua from C++ you use lua.hpp which contains the extern "C" block so your compiler knows that Lua is extern "C".
When Lua is compiled as C++ there are no extern statements so the compiler assumes all functions in Lua are extern "C++". You can no longer use Lua directly from C. If you use Lua from C++ with the lua.hpp then it sees the extern "C" block, assumes Lua functions are extern "C" and fails at link time because of the wrong mangling. If you use the normal headers directly from C++ then there are no extern statements so extern "C++" is assumed. This matches the library and all is OK.
What OOLua does is include the normal headers but with its own extern "C" around them, so the compiler uses C linkage for all Lua functions when they in fact have C++ linkage; the mangling is wrong and you get lots of linker errors about missing symbols.
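A sketch of the two include styles; which one is right depends entirely on how the Lua library you actually link against was built (LUA_BUILT_AS_CPP is a hypothetical flag you would define in your own build system):

#ifdef LUA_BUILT_AS_CPP
  // Lua compiled as C++: include the headers directly, so the declarations
  // get C++ linkage and match the mangled symbols in the library.
  #include "lua.h"
  #include "lualib.h"
  #include "lauxlib.h"
#else
  // Lua compiled as C (the usual case): wrap the headers in extern "C",
  // which is exactly what lua.hpp does.
  extern "C" {
  #include "lua.h"
  #include "lualib.h"
  #include "lauxlib.h"
  }
#endif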
Under Windows we have a C interface (extern "C" { /* interface */ }) to our C++ library that exports unmangled functions using a module definition file (.def).
I am trying to recreate the same thing under Linux, where I am relatively inexperienced. I understand that under *nix systems, all functions are exported by default. With this in mind I created a shared object, which I ran through the nm command.
I was surprised to see that, unlike on Windows, my function names had been mangled!
How can I prevent this please?
The usual solution is to declare the functions extern "C". This not only causes the names to be exported as they would be from C (unmangled), but also makes the functions use the C calling convention.
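A minimal sketch (hypothetical names) showing the effect on the exported symbols of a shared object built with g++:

// mylib.cpp
#include <cstdio>

extern "C" void hello(int n) {   // exported as plain "hello"
    std::printf("hello %d\n", n);
}

void world(int n) {              // exported mangled, e.g. "_Z5worldi"
    std::printf("world %d\n", n);
}

// Build and inspect:
//     g++ -fPIC -shared mylib.cpp -o libmylib.so
//     nm -D libmylib.so
// shows "hello" unmangled and "world" as a mangled C++ symbol.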
I am trying to add some code to a larger C project that requires the use of a C++ library.
The C++ library provides a wrapper using functions marked extern "C" that provide access to the functionality of the library:
//Some C++ library
//mywrapper.h
#ifdef __cplusplus
extern "C" {
#endif
void wrapperFunction1( int width );
void wrapperFunction2( int height );
#ifdef __cplusplus
} // extern "C"
#endif
This compiles with g++ without problem.
When making a C program and including mywrapper.h, I continually get linker errors referencing vtable and __cxa_pure_virtual:
undefined reference to `vtable for __cxxabiv1::__class_type_info'
undefined reference to `__cxa_pure_virtual'
In testing, these errors go away and allow the program to compile when I add the stdc++ library, but this isn't an option for the larger C project. Is there a way to compile a C program to access a C++ library without these errors and without stdc++? Also, the details of the errors refer to modules that are deep inside the C++ library and not referenced in mywrapper.h, so why is the C program even trying to refer to them?
A C++ library depends on the standard C++ library (libstdc++; the -lstdc++ linker option, added by default if you run the linker via g++). That's a fact. Nothing can be done about it. Get over it.
The wrapper header does not refer to those symbols, but the actual object files in the library do. So they need to be defined, and the way to define them is to link in the standard C++ library.
Note that if the wrapped library itself is dynamic, it will know its dependencies and you won't have to specify linking against libstdc++ explicitly.
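For illustration, a hypothetical C caller and the corresponding link steps (mywrapper.h and wrapperFunction1 come from the question above; the library file names libwrapper.a / libwrapper.so are assumptions):

/* main.c */
#include "mywrapper.h"

int main(void) {
    wrapperFunction1(640);
    return 0;
}

/* If the wrapper library is static, name the C++ runtime at the final link:
 *     gcc main.c libwrapper.a -lstdc++ -o app
 * If the wrapper library is a shared object built with g++, it already
 * records its libstdc++ dependency, so
 *     gcc main.c -L. -lwrapper -o app
 * is usually enough. */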
It is the C++ library you've wrapped that requires stdc++, not your wrapper.
You've wrapped the C++ functions so that they are callable from C but they still use a library internally that depends on the C++ standard library. If you don't provide stdc++, the library you're using is missing part of its implementation. Unless you rewrite the C++ library, there is no way round that.
In general, C++ code needs some level of run-time support to deal with specific C++ features. For example, there is a library function dealing with dynamic_cast<>(). Even if you don't use any of the standard C++ library, you will still need this language-support library. It is typically included in the standard C++ library, but specifically for gcc it is also available separately as libsupc++ (linked with -lsupc++). That is, you may get away with not linking libstdc++, but you won't get away without this library.
It is perfectly valid not to link libstdc++, with some caveats. Depending on what you do in the C++ code (or, in this case, the library you're wrapping), it may expect the underlying runtime to provide certain things:
Like you mention, RTTI info
__cxa_pure_virtual if you have unimplemented pure virtual functions (for example, in pure abstract classes)
Calls to constructors before main() and destructors after it
This applies not only when you don't link the libstdc++ library, but also to stripped-down custom versions of the library.
For example, Rowley CrossWorks for ARM (a toolchain for embedded systems) recommends that you write the __cxa_pure_virtual error handler yourself.
Okay, I've done some more research and stumbled on the answer. I had two problems. The first was these error messages:
./lib.so: undefined reference to `vtable for __cxxabiv1::__si_class_type_info'
./lib.so: undefined reference to `vtable for __cxxabiv1::__class_type_info'
These are related to C++'s run-time type information, or RTTI. Compiling the library with -fno-rtti eliminated these errors.
The second problem was:
./lib.so: undefined reference to `__cxa_pure_virtual'
This was caused by the C compiler's inability to handle a virtual function from C++. Creating a stub function with extern "C" eliminated the error and allowed the project to compile.
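For reference, the stub can be as small as this (a sketch; exactly what to do on a pure-virtual call is up to you):

extern "C" void __cxa_pure_virtual(void) {
    // Called if a pure virtual function is invoked, e.g. from the constructor
    // or destructor of an abstract base; there is no sensible way to continue.
    while (1) { }   // or abort()
}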
I'm still confirming if the library is working correctly, but those changes seem to have corrected the errors.
Thanks for the help everyone.
I'm converting my App Delegate file from .m to .mm (Objective-C to Objective-C++) so that I can access a third-party library written in Objective-C++. In Objective-C, my app delegate builds and runs fine. But when I change the extension, the project compiles but I get link errors, all of which are missing symbols from a static library written in C that I use. The errors are classic link errors with the following format:
"MyFunction(arguments)", referenced from:
-[MyAppDelegate myMethod] in MyAppDelegate.o
Symbol(s) not found
All of the problems are in the app delegate object. I know that I'm all set to compile Objective-C++ because my ViewController file is .mm. So my question has a few parts to it.
First, are these symbols truly not there in the sense that I can't use them? In other words, is it not possible to access plain old C functions from an Objective-C++ file? If this is true, that's pretty unfortunate. I thought that almost all Objective-C code, and certainly all Objective-C code that at least builds as .mm, was valid Objective-C++. Am I wrong?
If not, any idea how I can prevent these errors? Are there header rules that are different in Objective-C++ that I don't know about?
Thanks for any and all help.
Link errors with mixed C++/C or C++/Objective-C programs are usually due to C++ name mangling. Make sure you have extern "C" attached to all the appropriate declarations, and also that all of your code agrees on the linkage. That is, make sure that the function definition as well as the places where it is used can all see the extern "C" or extern "C++".
In your particular situation, it looks like the call to MyFunction() in your myMethod Objective-C++ file is being compiled with C++ linkage and looking for a mangled name, while the C static library only provides the unmangled symbol.
Here is a link to the Wikipedia article about name mangling.
Surround your header include with extern "C".
This tells the compiler that the function names in the library are not C++ name-mangled, so references to them are emitted unmangled.
E.g.:
extern "C" {
#include "my-lib.h"
}
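If the same header also has to stay includable from plain C or Objective-C files, the usual alternative is to put the guard inside the header itself, exactly like the mywrapper.h example earlier (MyFunction here is a stand-in for whatever the C library actually declares):

/* my-lib.h */
#ifdef __cplusplus
extern "C" {
#endif

void MyFunction(int argument);   /* hypothetical declaration */

#ifdef __cplusplus
}
#endif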