I'm trying to use two large, complex linear algebra libraries which define many of the same functions. I can't rewrite either of them (for legal reasons in one case, and for technical reasons in both). Let's call them "special" and "normal", because I only call a couple of functions from special. To consistently call the functions defined in normal.h, and only in some cases the ones from special.h, I've done something like this:
namespace special_space
{
#include "special.h" // Defines foo()
}
#include "normal.h" // Defines foo()
int main() {
    foo();                // Calls foo() defined in normal.h
    special_space::foo(); // Calls foo() defined in special.h
}
With g++-4.4, which was the default where I was developing this, the code compiles and links without warnings, and it executes as I would expect and as I want. This seems to be consistent across platforms: various Linux, Unix and BSD environments. But if I compile with g++ > 4.4, I get warnings about multiple foo() definitions:
In file special.h::line:col: warning: declaration of ‘void
special_space::foo()’ with C language linkage [enabled by default]
The resulting executable then segfaults at the call to special_space::foo(). I /think/ that specifying extern "C++" in the definitions found in special.h might fix this, but I'm not allowed to change special.h. So what should I do? More specifically:
1) Is it safe to use g++-4.4? If so -- what changed in subsequent versions and why?
2) If specifying the C++ linkage model really would fix this, is there a way to tell ld to use it by default?
3) If neither of those -- is there another way to call functions from libraries that define functions of the same name?
So as I posted in a comment, wrap the header includes with
#ifdef __cplusplus
extern "C" {
#endif
#include "normal.h"
#ifdef __cplusplus
}
#endif
Do this with both headers.
Basically, since you're linking C libraries from C++, which does name mangling (this is what allows overloading), your calls to the symbols in your C lib were being compiled with mangled names. The #ifdef __cplusplus wrapper tells the C++ compiler not to mangle the names of those specific function symbols.
Ideally, the creators of the library should include this in their headers but you mentioned that you have no control over that. I had a similar problem with some generated code. I had to wrap all my header includes of the generated code in this to allow C++ to call it.
What I don't know is how it ever worked without this. I've definitely had to do this going back to gcc < 4.4.
Related
This problem has perplexed me for a week now so I thought it might finally be time to ask you guys for help. Here is the story in a nutshell:
We are developing an embedded server in-house using Qt/C++. It is a very simple server that processes client requests and loads the proper function through dlopen()/dlsym() calls to do something vendor specific. This means the vendor would simply provide us a .so file in C with their function (which is transparent to us) the way we define. This would be written in C because of a lot of low level things it would need to do, and our server is in Qt because we plan on eventually having a frontend for it.
Here is some pseudocode:
In our main.cpp file (this is written in C fashion but uses the g++ compiler as is defined in the Qt mkspec, compiled using -ldl and -rdynamic to export all symbols):
dlopen() vendor.so file (tried with both RTLD_NOW and RTLD_LAZY)
dlsym() vendor.so init() method (this will call the vendor's init method, which will set up the name/properties of this "vendor plugin" through setter methods inside our code, call this plugin_set_name(args...))
In the shared header file (shared.h) (both code bases would use this; ours would have complete definitions of the structs, vendors would simply have prototypes of setters/getters):
extern "C" int plugin_set_name(args...)
In the vendor main.c file (compiled using gcc, -fPIC, and -shared)
Implementation of init() function as mentioned above
So essentially, what is happening is that the C++ code will use the dl calls to load the init() function from the C .so library, and then the C .so library will call a function that is defined in the C++ code (in this case, plugin_set_name). Is this even possible? Neither is linked against each other since they are compiled independently of each other and with different compilers (gcc vs g++).
The error I am getting is on runtime: "undefined symbol: plugin_set_name" (so it finds and gets inside the library's init() method just fine). It works flawlessly when I use gcc and straight C code to do everything so I know it's not the code but something with mixing C/C++. I also understand the use of extern "C" to prevent name mangling and have used nm / readelf to determine that there is no mangling of any kind. Any ideas? What is the best way to go about this?
Somehow, this just magically works today. I can't explain it. I simply have extern "C" declarations around the shared header, so in shared.h:
#ifdef __cplusplus
extern "C" {
#endif
plugin_set_name(args...)
other_shared_functions
#ifdef __cplusplus
}
#endif
I've always had this however. In either case, it works now with vendor plugins being compiled in C and server being compiled in Qt and C++. I think the problem was a combination of where to place all the externs as well as g++ linking flags (where rdynamic is crucial). Thanks. Just putting this here in case anyone else runs into the same issue.
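For anyone hitting the same thing, here is a minimal sketch of the working arrangement described above. The names vendor.so, init() and plugin_set_name() come from the question; the exact signature of plugin_set_name (a single const char*) and the vendor name string are assumptions for illustration. The key points are the extern "C" guards in shared.h and linking the host with -rdynamic so the plugin can resolve plugin_set_name back from the executable.

/* shared.h -- seen by both the host and the vendor plugin */
#ifdef __cplusplus
extern "C" {
#endif
int plugin_set_name(const char *name);   /* implemented in the host */
void init(void);                         /* implemented by the vendor */
#ifdef __cplusplus
}
#endif

// main.cpp (host) -- build with: g++ main.cpp -o server -ldl -rdynamic
#include <dlfcn.h>
#include <cstdio>
#include "shared.h"

extern "C" int plugin_set_name(const char *name) {
    std::printf("plugin registered as %s\n", name);
    return 0;
}

int main() {
    void *handle = dlopen("./vendor.so", RTLD_NOW);
    if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }
    // dlsym returns void*, so cast it to the expected function type
    void (*init_fn)(void) = reinterpret_cast<void (*)(void)>(dlsym(handle, "init"));
    if (!init_fn) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }
    init_fn();   // the vendor's init() calls back into plugin_set_name()
    dlclose(handle);
    return 0;
}

/* vendor.c (plugin) -- build with: gcc -fPIC -shared vendor.c -o vendor.so */
#include "shared.h"
void init(void) { plugin_set_name("example-vendor"); }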
I have a C++ project that due to its directory structure is set up as a static library A, which is linked into shared library B, which is linked into executable C. (This is a cross-platform project using CMake, so on Windows we get A.lib, B.dll, and C.exe, and on Linux we get libA.a, libB.so, and C.) Library A has an init function (A_init, defined in A/initA.cpp), that is called from library B's init function (B_init, defined in B/initB.cpp), which is called from C's main. Thus, when linking B, A_init (and all symbols defined in initA.cpp) is linked into B (which is our desired behavior).
The problem comes in that the A library also defines a function (Af, defined in A/Afort.f) that is intended to be dynamically loaded (i.e. LoadLibrary/GetProcAddress on Windows and dlopen/dlsym on Linux). Since there are no references to Af from library B, symbols from A/Afort.o are not included in B. On Windows, we can artificially create a reference by using the pragma:
#pragma comment (linker, "/export:_Af")
Since this is a pragma, it only works on Windows (using Visual Studio 2008). To get it working on Linux, we've tried adding the following to A/initA.cpp:
extern void Af(void);
static void (*Af_fp)(void) = &Af;
This does not cause the symbol Af to be included in the final link of B. How can we force the symbol Af to be linked into B?
It turns out my original attempt was mostly there. The following works:
extern "C" void Af(void);
void (*Af_fp)(void) = &Af;
For those that want a self-contained preprocessor macro to encapsulate this:
#if defined(_WIN32)
# if defined(_WIN64)
# define FORCE_UNDEFINED_SYMBOL(x) __pragma(comment (linker, "/export:" #x))
# else
# define FORCE_UNDEFINED_SYMBOL(x) __pragma(comment (linker, "/export:_" #x))
# endif
#else
# define FORCE_UNDEFINED_SYMBOL(x) extern "C" void x(void); void (*__ ## x ## _fp)(void)=&x;
#endif
Which is used thusly:
FORCE_UNDEFINED_SYMBOL(Af)
With MSVC: #pragma comment(linker, "/include:__mySymbol")
With gcc: pass -u symbol to the linker
There is a better way to write that FORCE_UNDEFINED_SYMBOL macro: just cast the function pointer to a void*. Then it works with any function, or data for that matter. Also, why bother with MSVC pragmas when the gcc part of your macro will work for MSVC as well? So my simplified version would be:
#define FORCE_UNDEFINED_SYMBOL(x) void* __ ## x ## _fp =(void*)&x;
Which is used thusly:
FORCE_UNDEFINED_SYMBOL(Af)
But it must be used in the program that includes the library that is having its symbols stripped.
You can use the --undefined option when you build B:
g++ -Wl,--undefined,Af -o libB.so ...
Try putting those lines into B/initB.cpp so that they're (hopefully) forced into the libB.so library at link time.
But why do you have to do it in this way at all? Can't you set it up so that the executable references that function (or a caller of it), causing the linker to do the right thing automatically?
If you can use the C++0x features of gcc (-std=c++0x), then default template arguments for function templates may do the trick. In the current C++ standard, default arguments are not allowed for function templates; with C++0x they are, so you can do something like:
In some header file of static library ...
template< class T = int >
void Af()
{
}
Then in its corresponding cpp file use explicit template instantiation...
template void Af<>();
This will generate the symbols for the function Af though it is not yet called/referenced.
This won't affect callers: because of the default template argument, you need not specify a type. Just add template< class T = int > before the function declaration and explicitly instantiate it in its implementation file.
HTH,
How does this work in C or C++?
extern "C" {
#include <unistd.h>
#include <fd_config.h>
#include <ut_trace.h>
#include <sys/stat.h>
#include <sys/types.h>
}
The C++ standard does not specify how compilers should name the symbols in their object files (for instance, Foo::bar() might end up as __clsFoo_fncBar or some gobbledygook). C compilers, by contrast, use a simple and predictable scheme (essentially the plain function name, perhaps with a leading underscore), which is almost always different from what C++ compilers produce (C doesn't have to deal with classes, namespaces, overloading, etc.).
As a result, when you are linking against an object file that was output by a C compiler, you have to tell your C++ compiler to look for symbols with names that correspond to the C standard. You are essentially putting it in "C mode." This is what the "C" part of extern "C" does.
(Alternatively, you might also be declaring functions or variables that could be used by an external C object file. In this case, it means export those symbols the C way.)
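As a small sketch of both directions (the header name c_math.h and the functions c_add/add_one here are made up for illustration):

/* c_math.h -- header of an existing C library, written without C++ guards */
double c_add(double a, double b);

// consumer.cpp -- C++ code using the C library and exporting one symbol the C way
extern "C" {
#include "c_math.h"          // look these symbols up using C naming rules
}

extern "C" double add_one(double x) {   // visible to C (or dlsym) as plain "add_one"
    return c_add(x, 1.0);
}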
If your project has both C and C++ source files and needs to be built as a whole (the C files call some functions defined in the C++ files), then the functions and symbols that the C files use need to be protected by declaring them in the C++ files like this:
extern "C"
{
    /* symbols used from the C files */
    uint8 GetCurrentthreadState(HANDLE ThreadId);
}
The C++ compiler then generates the same output for these declared functions and symbols as a C compiler would, so at link time the linker can match up the C- and C++-defined symbols without any link error.
So in my opinion the #ifdef __cplusplus check is not needed here, because we only need to protect the symbols in the C++ files, right? And C++ files can only be compiled by a C++ compiler anyway, right?
/renjith g
That will not work; you need to add the __cplusplus preprocessor guard...
#ifdef __cplusplus
extern "C" {
#endif
// your code
#ifdef __cplusplus
}
#endif
EDIT:
With extern "C", the name will be treated as it is in C, which means it will not be mangled. Name mangling is what allows C++ to distinguish between two different functions with different argument types/counts, or in different namespaces, within a library (libname.so, libname.a). If the name is mangled, a C program will not be able to recognize it.
e.g.:
int myfunction()
void myfunction(int)
void myfunction(int, char)
C library: myfunction
C++ library: int_myfunction (it depends on your compiler)
C++ library: void_myfunction_int (it depends on your compiler)
C++ library: void_myfunction_int_char (it depends on your compiler)
// ... which is not possible in a C program
Every C++ compiler needs to support extern "C" linkage.
The code in such a block could be legacy code written in C for certain functionality that the current program requires.
How this is implemented is mostly compiler dependent; however, many compilers reportedly just disable name mangling for those declarations and may also change the calling convention.
I generated a DLL in Visual Studio from C++ code. Dependency Walker sees 3 functions exported as C functions.
I made an SCons project to generate the DLL, and 2 of the 3 functions are no longer seen as C functions.
What makes a function be seen as a C or C++ function, without modifying the code? It must be in the compilation/linking options, but I haven't found anything relevant.
The functions are prefixed by a macro: AM_LIB_EXPORT
In the .h, I have :
#ifdef _WIN32
#define AM_LIB_EXPORT __declspec(dllexport)
#else
#define AM_LIB_EXPORT
#endif // _WIN32
Thanks.
What makes a function be seen as a C or C++ function, without modifying the code?
A function compiled by a C++ compiler is automatically a 'C++-function' and name-mangling occurs to resolve C++ features such as namespaces and overloading.
In order to get 'C' export names, one must use the aforementioned extern "C" qualifier in the function declaration. Alternatively a huge extern "C" { .. } block around the header containing the prototypes.
If this does not solve your issue, it may be related to dllimport/dllexport. If you #define AM_LIB_EXPORT __declspec(dllexport) in your DLL build, you'll typically also need to make it dllimport for apps linking against your DLL, so that the linker knows where to fetch the symbols from.
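A common shape for that export macro, sketched under the assumption that something like AM_BUILDING_DLL (a made-up name) is defined only while building the DLL itself, and that the exported functions should also have C linkage:

#ifdef _WIN32
#  ifdef AM_BUILDING_DLL                        /* defined only by the DLL's own build */
#    define AM_LIB_EXPORT __declspec(dllexport)
#  else
#    define AM_LIB_EXPORT __declspec(dllimport) /* consumers import instead */
#  endif
#else
#  define AM_LIB_EXPORT
#endif

#ifdef __cplusplus
extern "C" {
#endif

AM_LIB_EXPORT int am_do_something(int value);   /* illustrative prototype */

#ifdef __cplusplus
}
#endif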
Is this a name mangling issue? If you don't use extern "C" around your function declarations, they will get name-mangled.
I found the reason:
The export was also added as an additional command line option (/EXPORT). In this case, it is exported as a C function. I don't understand why...
I removed this additional command line switch.
Thank you all.
I still don't know how to mark a thread as "resolved" ?
I recently found some code where extern "C" was added around functions in the source files as well as in the header files where they were declared.
I was under the assumption that adding extern "C" in the header files was sufficient.
Where should extern "C" blocks be added?
UPDATE:
Suppose I am compiling my C code using a CPP compiler and have added extern "C" guards for all the functions in header files (i.e. all my functions have their prototypes in headers), but in source files I have not added the same. Will this cause a problem?
Since you mean
extern "C" { ... }
style guards, these declare some functions to be of "C" linkage, rather than "C++" linkage (which typically has a bunch of extra name decoration to support things like overloaded functions).
The purpose, of course, is to allow C++ code to interface with C code, which is usually in a library. If the library's headers weren't written with C++ in mind, then they won't include the extern "C" guards for C++.
A C header written with C++ in mind will include something along the lines of
#ifdef __cplusplus
extern "C" {
#endif
...
#ifdef __cplusplus
}
#endif
to make sure C++ programs see the correct linkage. However, not all libraries were written with C++ in mind, so sometimes you have to do
extern "C" {
#include "myclibrary.h"
}
to get the linkage correct. If the header file is provided by someone else then it's not good practice to change it (because then you can't update it easily), so it's better to wrap the header file with your own guard (possibly in your own header file).
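For example, a thin wrapper header of your own (file names here are illustrative) keeps the third-party header untouched:

/* myclibrary_wrapper.h -- C++ code includes this instead of myclibrary.h */
#ifndef MYCLIBRARY_WRAPPER_H
#define MYCLIBRARY_WRAPPER_H

extern "C" {
#include "myclibrary.h"   /* third-party C header, left as-is */
}

#endif /* MYCLIBRARY_WRAPPER_H */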
extern "C" isn't (AFAIK) ANSI C, so can't be included in normal C code without the preprocessor guards.
In response to your edit:
If you are using a C++ compiler, and you declare a function as extern "C" in the header file, you do not need to also declare that function as extern "C" in the implementation file. From section 7.5 of the C++ standard (emphasis mine):
If two declarations of the same function or object specify different linkage-specifications (that is, the linkage-specifications of these declarations specify different string-literals), the program is ill-formed if the declarations appear in the same translation unit, and the one definition rule applies if the declarations appear in different translation units. Except for functions with C++ linkage, a function declaration without a linkage specification shall not precede the first linkage specification for that function. A function can be declared without a linkage specification after an explicit linkage specification has been seen; the linkage explicitly specified in the earlier declaration is not affected by such a function declaration.
I'm not convinced it's good practice though, since there's the potential for the linkage specifications to diverge by accident (if, for example, the header file containing the linkage specification isn't included in the implementing file). I think it's better to be explicit in the implementation file.
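Being explicit in the implementation file looks like this (widget_count is a made-up name for illustration):

/* widget.h */
#ifdef __cplusplus
extern "C" {
#endif
int widget_count(void);
#ifdef __cplusplus
}
#endif

// widget.cpp
#include "widget.h"

// Repeating the linkage here means a signature that drifts from the header
// is rejected at compile time instead of silently becoming a C++ overload.
extern "C" int widget_count(void) {
    return 42;
}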
They only need to go in anything that is included by other source files.
With some idioms you'll find people including source files.
They should be added to all files that get included in other files.
Normally, one doesn't include source files.
Apologia
The question has changed to be much clearer what it was asking about. This answer addressed the original question, when it was at least debatable whether it was discussing guards against multiple inclusion in header files - which is what my answer addresses. Clearly, if the question had been as clear then as it is now, I would not have submitted this answer.
Original answer
No, it is not necessary to include the guards in the C code too.
If the header file 'header.h' says:
#ifndef HEADER_H_INCLUDED
#define HEADER_H_INCLUDED
...
#endif
Then it is perfectly safe for a source file 'source.c' to say:
#include "header.h"
It is also safe for other headers to include 'header.h'.
However, people note that opening a header file and reading it takes time, which slows up a compilation, so sometimes people do things like:
#ifndef HEADER_H_INCLUDED
#include "header.h"
#endif
This means that if some other header included in 'source.c' has already included 'header.h', then the '#include' is not re-processed. (Or, if 'header.h' has already been included directly in 'source.c', though that's a silly buglet.)
So, when encountered, it is likely to be an attempt to optimize the compilation performance. It is far from clear that it buys you much; modern C preprocessors are fairly intelligent about the issue and will avoid re-including the file if they can. And there's always a risk that the test in 'source.c' has a typo (#ifndef HEARER_H_INCLUDED, perhaps) in which case the test slows the compilation because the preprocessor tests the irrelevant condition and then proceeds to include 'header.h' after all. It is 'safe'; the header is itself protected-- or should be.
If you see the code in 'source.c' also doing '#define HEADER_H_INCLUDED', then there are problems. The #define has to be either before or after the #include, and neither is good as a general technique.
If 'source.c' does '#define HEADER_H_INCLUDED' before including 'header.h', then if the guard appears in 'header.h', the contents of the header will not be included. If the guard does not appear in 'header.h', then things work OK.
If 'source.c' does '#define HEADER_H_INCLUDED' after including 'header.h', then if the guard appears in 'header.h', we get a benign redefinition of HEADER_H_INCLUDED. If 'header.h' does not contain the guard but does include a file which includes 'header.h', you are not protected from multiple inclusion after all.
Note that body of the header appears after the '#define HEADER_H_INCLUDED'. This is again protection if nested includes include 'header.h'.
You mean the extern "C" guards? They have to be on the function definition as well, since that affects how the function's symbol is stored in the compiled binary. It's only really needed if you are linking compiled C++ together with C which is compiled as C (as opposed to C code in a .cpp file).
The "C" guards have two purposes:
When your code is compiled, the functions will be exported in a way that allows a non-C++ compiler/linker to use them (no C++ name mangling, etc.).
When a C++ compiler uses your header files, it will know that it should bind the symbols the C way, which in turn makes sure that the resulting program links successfully. The guards don't carry a meaning for a non-C++ compiler, but since the symbols were generated C-style in (1), this is the desired effect.
Since you include the header with the "C" guards also in your implementation file, the information on how the symbols should be created at compile time is available to the compiler and the compiler will create the symbols in a way that can be used by a non C++ compiler. Consequently you only need to specify extern "C" in your header file as long as the header file is also included by the implementation file.
It is not required for extern "C" to be used in the source files if it is used in the header file and that header is included by the rest of the source files.
As far as I remember the standard, all function declarations are considered "extern" by default, so there is no need to specify it explicitly. That doesn't make the keyword useless, since it can also be used with variables (and in that case it's the only solution to linkage problems). But with functions, yes, it's optional.
A little more verbose answer is that extern allows you to use a variable that is compiled into another source file, without reserving memory for it again. So, to use extern, you have to have a source file or a library unit that contains memory space for the variable at the top level (not within functions). You can then refer to that variable by declaring an extern variable of the same name in your other source files.
In general, the use of extern variables should be avoided. They easily lead to unmanageable code and errors that are hard to locate. Of course, there are examples where other solutions would be impractical, but they are rare. For example, stdin and stdout are macros that map to extern variables of type FILE* declared in stdio.h; the memory space for them is in a standard C library unit.
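A minimal sketch of that variable case (all names invented for illustration):

/* counters.h -- declares the variable, reserves no storage */
extern int g_request_count;

/* counters.c -- exactly one translation unit defines (and so allocates) it */
int g_request_count = 0;

/* handler.c -- any other file can refer to the same object via the header */
#include "counters.h"
void handle_request(void) { ++g_request_count; }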
We had always added extern "C" only to the header declarations, but this allows the C++ code to implement the function with a different signature via overloading, without errors, and yet the symbol of the definition may still not be mangled, so at link time this mismatching function gets used.
If the header and the definition both have extern "C" then a mismatched signature generates an error with Visual Studio 2017 and with g++ 5.4.
The following code compiles without an error with Visual Studio 2017 and g++ 5.4:
extern "C" {
int test(float a, int b);
}
int test(int b)
{
return b;
}
It seems that gcc mangles the symbol name in this case but Visual Studio 2017 does not.
However including the extern "C" with the definition catches the mismatch at compile time.
extern "C" {
int test(float a, int b);
}
extern "C" int test(int b)
{
return b;
}
gives the following error on g++:
g++ -c -o test.o test.cpp
test.cpp: In function ‘int test(int)’:
test.cpp:4:26: error: conflicting declaration of C function ‘int test(int)’
extern "C" int test(int b)
^
test.cpp:2:9: note: previous declaration ‘int test(float, int)’
int test(float a, int b);
or with cl
cl /c test.cpp
Microsoft (R) C/C++ Optimizing Compiler Version 19.16.27042 for x86
Copyright (C) Microsoft Corporation. All rights reserved.
test.cpp
test.cpp(4): error C2733: 'test': second C linkage of overloaded function not allowed
test.cpp(2): note: see declaration of 'test'