I want to build two DLLs; let's call them Foo and Bar. I want Bar to import a class from Foo.
Foo.h:
#ifdef EXPORT
#define DECL __declspec(dllexport)
#else
#define DECL __declspec(dllimport)
#endif
class DECL Foo {
public:
Foo();
void bar();
};
Bar.cpp:
#include "bar.h"
void bar(){
Foo f;
f.bar();
}
To build Foo.dll, I do
g++ -DEXPORT -c Foo.cpp -o Foo.o
g++ -shared Foo.o -o Foo.dll
This produces the following references in Foo.o:
$ nm Foo.o
00000000 b .bss
00000000 d .data
00000000 i .drectve
00000000 t .text
0000000c T __ZN3Foo3barEv
00000006 T __ZN3FooC1Ev
00000000 T __ZN3FooC2Ev
Now, when I want to build Bar.dll, I do
$ g++ -shared Bar.cpp -o Bar.dll
/tmp/ccr8F57C.o:Bar.cpp:(.text+0xd): undefined reference to `__imp___ZN3FooC1Ev'
/tmp/ccr8F57C.o:Bar.cpp:(.text+0x1a): undefined reference to `__imp___ZN3Foo3barEv'
If I try to build Foo.cpp with EXPORT not defined (so that the macro DECL evaluates to __declspec(dllimport)), I get the following:
$ g++ -c Foo.cpp
Foo.cpp:3: warning: function 'Foo::Foo()' is defined after prior declaration as dllimport: attribute ignored
Foo.cpp: In constructor `Foo::Foo()':
Foo.cpp:3: warning: function 'Foo::Foo()' is defined after prior declaration as dllimport: attribute ignored
Foo.cpp: In member function `void Foo::bar()':
Foo.cpp:7: warning: function 'void Foo::bar()' is defined after prior declaration as dllimport: attribute ignored
which does make sense, since a function declared dllimport can't then be defined.
How am I supposed to reference Foo in Bar?
When you build Bar.dll you also need to link against Foo's import library (Foo.lib / libFoo.dll.a) or, with MinGW, against Foo.dll itself, using the -l option.
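For example, a minimal sketch with MinGW-style g++, assuming Foo.dll was built as shown above and sits in the current directory:

g++ -DEXPORT -c Foo.cpp -o Foo.o
g++ -shared Foo.o -o Foo.dll -Wl,--out-implib,libFoo.dll.a
g++ -c Bar.cpp -o Bar.o
g++ -shared Bar.o -o Bar.dll -L. -lFoo

Here -Wl,--out-implib asks the linker to also emit an import library; MinGW's ld can usually link against the DLL directly as well, so -L. -lFoo resolves the __imp___ZN3Foo... references that were previously undefined.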
The following is a simple example for separate compilation:
// mod.cpp
#include <cstdio>
class MyModule {
public:
void print_msg();
};
void MyModule::print_msg() {
printf("hello from module\n");
}
// main.cpp
class MyModule {
public:
void print_msg();
};
int main() {
MyModule a;
a.print_msg();
}
We can compile and run it with
g++ main.cpp -c -o main.o
g++ mod.cpp -c -o mod.o
g++ main.o mod.o -o main
./main
The above works fine, but if I move the definition of MyModule::print_msg inside the class:
// mod.cpp
#include <cstdio>
class MyModule {
public:
void print_msg() { printf("hello from module\n"); }
};
I get an 'undefined reference' error when linking main:
g++ main.cpp -c -o main.o # OK
g++ mod.cpp -c -o mod.o # OK
g++ main.o mod.o -o main # undefined reference error
/usr/bin/ld: main.o: in function `main':
main.cpp:(.text+0x23): undefined reference to `MyModule::print_msg()'
collect2: error: ld returned 1 exit status
I know that the former is the standard way and the class definition should go to a header file, but I wonder why the second method doesn't work.
Functions defined inside the class are implicitly inline. C++ requires:
The definition of an inline function [or variable (since C++17)] must be reachable in the translation unit where it is accessed.
Since you only defined it in mod.cpp, no definition is reachable in the translation unit of main.cpp, so the program is ill-formed and in practice the link fails with an undefined reference.
Typically, you'd put the definition of the class, and the definition of all functions defined within it, in a header file to be included by all users of the class. The functions defined outside the class then go in a .cpp file. That way a single consistent definition of all the inline functions is available to all users of the class, and you're not repeating the definition of the class in each .cpp file manually.
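A minimal sketch of that layout for the example above (the header name mod.h is an assumption):

// mod.h
#ifndef MOD_H
#define MOD_H
#include <cstdio>
class MyModule {
public:
    // Defined inside the class, so implicitly inline: every translation
    // unit that includes this header sees the definition.
    void print_msg() { printf("hello from module\n"); }
};
#endif // MOD_H

// main.cpp
#include "mod.h"
int main() {
    MyModule a;
    a.print_msg();
}

With everything inline in the header, g++ main.cpp -o main links without a separate mod.o; any functions you define outside the class would go into mod.cpp as in the first version.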
Is it possible to build lib1.so from the source file common.cpp and lib2.so from the same common.cpp, and then build my application APP using these two libraries?
My questions are:
Is it possible, or will it give me an error?
If it builds successfully, how do the names get resolved?
For example, say foo is a class in common.cpp, foo_v1 is an object of foo in lib1.so, and foo_v2 is an object of foo in lib2.so. What happens during the build of APP? Also, is it possible to create an object of foo in the APP application?
Naturally one would suggest you consider building the common functionality shared
by lib1.so and lib2.so into a distinct shared library, libcommon.so.
But if you want nevertheless to statically link the common functionality
identically[1]
into both lib1.so and lib2.so, you can link these two shared libraries with
your program. The linker will have no problem with that. Here is an
illustration:
common.h
#ifndef COMMON_H
#define COMMON_H
#include <string>
struct common
{
void print1(std::string const & s) const;
void print2(std::string const & s) const;
static unsigned count;
};
#endif
common.cpp
#include <iostream>
#include "common.h"
unsigned common::count = 0;
void common::print1(std::string const & s) const
{
std::cout << s << ". (count = " << count++ << ")" << std::endl;
}
void common::print2(std::string const & s) const
{
std::cout << s << ". (count = " << count++ << ")" << std::endl;
}
foo.h
#ifndef FOO_H
#define FOO_H
#include "common.h"
struct foo
{
void i_am() const;
private:
common _c;
};
#endif
foo.cpp
#include "foo.h"
void foo::i_am() const
{
_c.print1(__PRETTY_FUNCTION__);
}
bar.h
#ifndef BAR_H
#define BAR_H
#include "common.h"
struct bar
{
void i_am() const;
private:
common _c;
};
#endif
bar.cpp
#include "bar.h"
void bar::i_am() const
{
_c.print2(__PRETTY_FUNCTION__);
}
Now we'll make two shared libraries, libfoo.so and libbar.so. The
source files we need are foo.cpp, bar.cpp and common.cpp. First
compile them all to PIC (Position Independent Code)
object files:
$ g++ -Wall -Wextra -fPIC -c foo.cpp bar.cpp common.cpp
And here are the object files we just made:
$ ls *.o
bar.o common.o foo.o
Now link libfoo.so using foo.o and common.o:
$ g++ -shared -o libfoo.so foo.o common.o
Then link libbar.so using bar.o and (again) common.o
$ g++ -shared -o libbar.so bar.o common.o
We can see that common::... symbols are defined and exported by libfoo.so:
$ nm -DC libfoo.so | grep common
0000000000202094 B common::count
0000000000000e7e T common::print1(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
0000000000000efa T common::print2(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
(T means defined in the code section, B means defined in the uninitialized data section). And exactly the same is true of libbar.so:
$ nm -DC libbar.so | grep common
0000000000202094 B common::count
0000000000000e7e T common::print1(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
0000000000000efa T common::print2(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
Now we'll make a program linked with these libraries:
main.cpp
#include "foo.h"
#include "bar.h"
int main()
{
foo f;
bar b;
common c;
f.i_am();
b.i_am();
c.print1(__PRETTY_FUNCTION__);
return 0;
}
It calls foo; it calls bar,
and it calls common::print1.
$ g++ -Wall -Wextra -c main.cpp
$ g++ -o prog main.o -L. -lfoo -lbar -Wl,-rpath=$PWD
It runs like:
$ ./prog
void foo::i_am() const. (count = 0)
void bar::i_am() const. (count = 1)
int main(). (count = 2)
Which is just fine. You might perhaps have worried that two copies of the static class variable
common::count would end up in the program - one from libfoo.so and another from libbar.so,
and that foo would increment one copy and bar would increment the other. But that didn't happen.
How did the linker resolve the common::... symbols? Well to see that we need to find their mangled forms,
as the linker sees them:
$ nm common.o | grep common
0000000000000140 t _GLOBAL__sub_I_common.cpp
0000000000000000 B _ZN6common5countE
0000000000000000 T _ZNK6common6print1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
000000000000007c T _ZNK6common6print2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
There they all are and we can tell which one is which with c++filt:
$ c++filt _ZN6common5countE
common::count
$ c++filt _ZNK6common6print1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
common::print1(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
$ c++filt _ZNK6common6print2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
common::print2(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
Now we can re-do the linkage of prog, this time asking the linker to tell us the names of the
input files in which these common::... symbols were defined or referenced. This diagnostic
linkage command is a bit of a mouthful, so I'll split it across lines with backslashes:
$ g++ -o prog main.o -L. -lfoo -lbar -Wl,-rpath=$PWD \
-Wl,-trace-symbol=_ZN6common5countE \
-Wl,-trace-symbol=_ZNK6common6print1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE \
-Wl,-trace-symbol=_ZNK6common6print2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
main.o: reference to _ZNK6common6print1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
./libfoo.so: definition of _ZN6common5countE
./libfoo.so: definition of _ZNK6common6print2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
./libfoo.so: definition of _ZNK6common6print1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
./libbar.so: reference to _ZN6common5countE
./libbar.so: reference to _ZNK6common6print2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
./libbar.so: reference to _ZNK6common6print1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
So the linker tells us that it linked the definition of common::count from ./libfoo.so. Likewise the
definition of common::print1. Likewise the definition of common::print2. It linked all the
common::... symbol definitions from libfoo.so.
It tells us that the reference(s) to common::print1 in main.o was resolved to the definition in libfoo.so. Likewise
the reference(s) to common::count in libbar.so. Likewise the reference(s) to common::print1 and
common::print2 in libbar.so. All the common::... symbol references in the program were resolved to the
definitions provided by libfoo.so.
So there were no multiple definition errors, and there is no uncertainty about which "copies" or "versions" of the common::... symbols are used
by the program: it just uses the definitions from libfoo.so.
Why? Simply because libfoo.so was the first library in the linkage that provided definitions
for the common::... symbols. If we relink prog with the order of -lfoo and -lbar reversed:
$ g++ -o prog main.o -L. -lbar -lfoo -Wl,-rpath=$PWD \
-Wl,-trace-symbol=_ZN6common5countE \
-Wl,-trace-symbol=_ZNK6common6print1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE \
-Wl,-trace-symbol=_ZNK6common6print2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
main.o: reference to _ZNK6common6print1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
./libbar.so: definition of _ZN6common5countE
./libbar.so: definition of _ZNK6common6print2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
./libbar.so: definition of _ZNK6common6print1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
./libfoo.so: reference to _ZN6common5countE
./libfoo.so: reference to _ZNK6common6print2ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
./libfoo.so: reference to _ZNK6common6print1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
then we get exactly the opposite answers. All the common::... symbol references in the program
are now resolved to the definitions provided by libbar.so. Because libbar.so provided them first.
There is still no uncertainty, and it makes no difference to the program, because both libfoo.so
and libbar.so linked the common::... definitions from the same object file, common.o.
The linker does not try to find multiple definitions of symbols. Once it has found a
definition of a symbol S, in an input object file or shared library, it binds references to
S to the definition it has found and is done with resolving S. It does
not care if a shared library it finds later can provide another definition of S, the same or different,
even if it uses that later shared library to resolve symbols other than S.
The only way in which you can cause a multiple definition error is by compelling the linker
to statically link multiple definitions, i.e. compel it to physically merge into the output binary
two object files obj1.o and obj2.o, both of which contain a definition of S.
If you do that, the competing static definitions have exactly the same status, and only
one definition can be used by the program, so the linker has to fail you. But it does not need to take any notice of a
dynamic symbol definition of S provided by a shared library if it has already resolved S, and it does not do so.
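A hedged illustration using the files above: physically merging two copies of common.o into one binary is exactly that situation, and would fail with multiple definition errors for common::count, common::print1 and common::print2, since those are ordinary strong definitions.

$ cp common.o common2.o
$ g++ -o prog main.o foo.o bar.o common.o common2.o    # multiple definition errors expected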
[1] Of course, if you compile and link lib1 and lib2 with different preprocessor, compiler or linkage options, you can sabotage the "common" functionality to an arbitrary extent.
Been fighting with this on and off for 48 hours now; I'm still getting undefined reference errors when attempting to link a dynamic library with its dependency - despite all exports existing, and the library being found successfully.
Scenario:
libmemory (C++) - exports functions with extern "C"
libstring (C) - exports functions, imports from libmemory
libmemory builds successfully:
$ g++ -shared -fPIC -o ./builds/libmemory.so ...$(OBJECTS)...
libstring compiles successfully, but fails to link:
$ gcc -shared -fPIC -o ./builds/libstring.so ...$(OBJECTS)... -L./builds -lmemory
./temp/libstring/string.o: In function `STR_duplicate':
string.c:(.text+0x1cb): undefined reference to `MEM_priv_alloc'
./temp/libstring/string.o: In function `STR_duplicate_replace':
string.c:(.text+0x2a0): undefined reference to `MEM_priv_free'
string.c:(.text+0x2bf): undefined reference to `MEM_priv_alloc'
/usr/bin/ld: ./builds/libstring.so: hidden symbol `MEM_priv_free' isn't defined
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
Verifying that libmemory exports its symbols, and that the library itself is found, by passing -v to gcc:
...
attempt to open ./builds/libmemory.so succeeded
-lmemory (./builds/libmemory.so)
...
$ nm -gC ./builds/libmemory.so | grep MEM_
0000000000009178 T MEM_exit
0000000000009343 T MEM_init
00000000000093e9 T MEM_print_leaks
00000000000095be T MEM_priv_alloc
000000000000971d T MEM_priv_free
00000000000099c1 T MEM_priv_realloc
0000000000009d26 T MEM_set_callback_leak
0000000000009d3f T MEM_set_callback_noleak
$ objdump -T ./builds/libmemory.so | grep MEM_
0000000000009d3f g DF .text 0000000000000019 Base MEM_set_callback_noleak
00000000000093e9 g DF .text 00000000000001d5 Base MEM_print_leaks
0000000000009d26 g DF .text 0000000000000019 Base MEM_set_callback_leak
00000000000099c1 g DF .text 0000000000000365 Base MEM_priv_realloc
0000000000009343 g DF .text 00000000000000a6 Base MEM_init
00000000000095be g DF .text 000000000000015f Base MEM_priv_alloc
000000000000971d g DF .text 00000000000002a4 Base MEM_priv_free
0000000000009178 g DF .text 00000000000000a7 Base MEM_exit
$ readelf -Ws ./builds/libmemory.so | grep MEM_
49: 0000000000009d3f 25 FUNC GLOBAL DEFAULT 11 MEM_set_callback_noleak
95: 00000000000093e9 469 FUNC GLOBAL DEFAULT 11 MEM_print_leaks
99: 0000000000009d26 25 FUNC GLOBAL DEFAULT 11 MEM_set_callback_leak
118: 00000000000099c1 869 FUNC GLOBAL DEFAULT 11 MEM_priv_realloc
126: 0000000000009343 166 FUNC GLOBAL DEFAULT 11 MEM_init
145: 00000000000095be 351 FUNC GLOBAL DEFAULT 11 MEM_priv_alloc
192: 000000000000971d 676 FUNC GLOBAL DEFAULT 11 MEM_priv_free
272: 0000000000009178 167 FUNC GLOBAL DEFAULT 11 MEM_exit
103: 0000000000009343 166 FUNC GLOBAL DEFAULT 11 MEM_init
108: 0000000000009178 167 FUNC GLOBAL DEFAULT 11 MEM_exit
148: 0000000000009d3f 25 FUNC GLOBAL DEFAULT 11 MEM_set_callback_noleak
202: 00000000000095be 351 FUNC GLOBAL DEFAULT 11 MEM_priv_alloc
267: 000000000000971d 676 FUNC GLOBAL DEFAULT 11 MEM_priv_free
342: 0000000000009d26 25 FUNC GLOBAL DEFAULT 11 MEM_set_callback_leak
346: 00000000000099c1 869 FUNC GLOBAL DEFAULT 11 MEM_priv_realloc
366: 00000000000093e9 469 FUNC GLOBAL DEFAULT 11 MEM_print_leaks
Is there something horribly simple I'm missing? The other questions related to this all have simple answers such as library link order and search paths - but I've already verified those are in place and working as expected.
Tinkering with -fvisibility led to no changes either.
The same result exists whether using clang or gcc.
Linux 3.16.0-38-generic
gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
You should mark the MEM_priv_alloc() function as extern "C", or wrap the function body in extern "C" { /* function implementation */ } (which, judging from the description, is already done).
And use a combination of __cplusplus and extern "C" in the header:
#ifdef __cplusplus
#define EXTERNC extern "C"
#else
#define EXTERNC
#endif
EXTERNC int MEM_priv_alloc (void);
Please also double-check how the MEM_priv_alloc prototype is declared. For example, MEM_priv_alloc should not be inline (unfortunately I cannot explain the underlying mechanics, but an inline version failed for me).
Below is a simplified example that works for me:
files:
$ ls
main.c Makefile mem.cpp mem.h strings.c strings.h
mem.cpp
#include <stdio.h>
#include "mem.h"
extern "C" int MEM_priv_alloc (void)
{
return 0;
}
mem.h
#ifdef __cplusplus
#define EXTERNC extern "C"
#else
#define EXTERNC
#endif
EXTERNC int MEM_priv_alloc (void);
strings.c:
#include <stdio.h>
#include "mem.h"
int STR_duplicate_replace (void)
{
return MEM_priv_alloc();
}
strings.h:
int STR_duplicate_replace (void);
main.c
#include <stdio.h>
#include "string.h"
int main (void)
{
STR_duplicate_replace ();
return 0;
}
Makefile:
prog: libmemory.so libstrings.so
gcc -o $@ -L. -lmemory -lstrings main.c
libmemory.so: mem.cpp
g++ -shared -fPIC -o $@ $^
libstrings.so: strings.c
gcc -shared -fPIC -o $@ $^
and the build log:
g++ -shared -fPIC -o libmemory.so mem.cpp
gcc -shared -fPIC -o libstrings.so strings.c
gcc -o prog -L. -lmemory -lstrings main.c
Please also see more details in Using C++ library in C code and the wiki How to mix C and C++.
So, I was stripping out the final parts of the amalgamation, and uncovered the issue.
My import/export is modelled off of this: https://gcc.gnu.org/wiki/Visibility
My equivalent implementation ends up looking like this:
# if GCC_IS_V4_OR_LATER
# define DLLEXPORT __attribute__((visibility("default")))
# define DLLIMPORT __attribute__((visibility("hidden")))
# else
# define DLLEXPORT
# define DLLIMPORT
# endif
The DLLIMPORT (visibility hidden) was causing the problem; I replaced it with a blank definition and it all works now. Yes, I also had the equivalent for clang, which is why that failed in the same way.
My takeaway from this is that the C code only ever saw these would-be-symbols as hidden, and therefore couldn't import them no matter how hard it tried, and however much they actually existed!
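For reference, a minimal sketch of macros that work on ELF (following the pattern on the GCC visibility wiki; GCC_IS_V4_OR_LATER is the project's own feature-test macro from the snippet above):

# if GCC_IS_V4_OR_LATER
#  define DLLEXPORT __attribute__((visibility("default")))
#  define DLLIMPORT /* empty: symbols to be imported must stay visible */
# else
#  define DLLEXPORT
#  define DLLIMPORT
# endif

Unlike __declspec(dllimport) on Windows, ELF has no separate import attribute; marking the consumer-side declarations hidden just makes the references unresolvable, which is exactly the "hidden symbol ... isn't defined" error above.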
I have a header file that declares a class template with a static data member and also defines it:
/* my_header.hpp */
#ifndef MY_HEADER_HPP_
#define MY_HEADER_HPP_
#include <cstdio>
template<int n>
struct foo {
static int bar;
static void dump() { printf("%d\n", bar); }
};
template<int n>
int foo<n>::bar;
#endif // MY_HEADER_HPP_
This header is included both by main.cpp and a shared library mylib. In particular, mylib_baz.hpp just includes this template and declares a function that modifies a specialization of the template.
/* mylib_baz.hpp */
#ifndef MYLIB_BAZ_HPP_
#define MYLIB_BAZ_HPP_
#include "my_header.hpp"
typedef foo<123> mylib_foo;
void init_mylib_foo();
#endif // MYLIB_BAZ_HPP_
and
/* mylib_baz.cpp */
#include "mylib_baz.hpp"
void init_mylib_foo() {
mylib_foo::bar = 123;
mylib_foo::dump();
};
When I make mylib.so (containing mylib_baz.o), the symbol for foo<123>::bar is present and declared weak. However, the symbol for foo<123>::bar is declared weak also in my main.o:
/* main.cpp */
#include "my_header.hpp"
#include "mylib_baz.hpp"
int main() {
foo<123>::bar = 456;
foo<123>::dump(); /* outputs 456 */
init_mylib_foo(); /* outputs 123 */
foo<123>::dump(); /* outputs 123 -- is this guaranteed? */
}
It appears that I am violating the one definition rule (foo<123>::bar ends up defined in both translation units that include my_header.hpp, i.e. mylib_baz.cpp and main.cpp). However, with both g++ and clang the symbols are declared weak (or unique), so I am not getting bitten by this -- all accesses to foo<123>::bar modify the same object.
Question 1: Is this a coincidence (maybe ODR works differently for static members of templates?) or am I in fact guaranteed this behavior by the standard?
Question 2: How could I have predicted the behavior I'm observing? That is, what exactly makes the compiler declare symbol weak?
There is no ODR violation. You have one definition of bar. It is here:
template<int n>
int foo<n>::bar; // <==
As bar is a static data member of a class template, there is one definition shared across all translation units. Even though bar will show up once in each of your object files (they need a symbol for it, after all), the linker will merge them into the one true definition of bar. You can see that:
$ g++ -std=c++11 -c mylib_baz.cpp -o mylib_baz.o
$ g++ -std=c++11 -c main.cpp -o main.o
$ g++ main.o mylib_baz.o -o a.out
Produces:
$ nm mylib_baz.o | c++filt | grep bar
0000000000000000 u foo<123>::bar
$ nm main.o | c++filt | grep bar
0000000000000000 u foo<123>::bar
$ nm a.out | c++filt | grep bar
0000000000601038 u foo<123>::bar
Where u indicates:
"u"
The symbol is a unique global symbol. This is a GNU extension to the standard set of ELF symbol bindings. For such a symbol the dynamic linker will make sure that in the entire process there is just one symbol with this name and type in use.
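As an aside (worth verifying on your own toolchain), this unique binding is a GCC/ELF extension; compiling with -fno-gnu-unique makes GCC fall back to an ordinary weak symbol for such template static data members:

$ g++ -std=c++11 -fno-gnu-unique -c main.cpp -o main.o
$ nm main.o | c++filt | grep bar

You should then see foo<123>::bar bound as a weak object rather than u. Either way, only one object survives in the final process, which is what makes the behaviour observed in the question reliable in practice.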
Will this code result in undefined behavior?
header.h
#ifdef __cplusplus
extern "C"
{
#endif
inline int foo(int a)
{
return a * 2;
}
#ifdef __cplusplus
}
#endif
def.c
#include "header.h"
extern inline int foo(int a);
use.c
#include "header.h"
int bar(int a)
{
return foo(a + 3);
}
main.cpp
#include <stdio.h>
#include "header.h"
extern "C"
{
int bar(int a);
}
int main(int argc, char** argv)
{
printf("%d\n", foo(argc));
printf("%d\n", bar(argc));
}
This is an example of a program where an inline function has to be used in both C and C++. Would it work if def.c were removed and foo was not used in C? (This assumes the C compiler is C99.)
This code works when compiled with:
gcc -std=c99 -pedantic -Wall -Wextra -c -o def.o def.c
g++ -std=c++11 -pedantic -Wall -Wextra -c -o main.o main.cpp
gcc -std=c99 -pedantic -Wall -Wextra -c -o use.o use.c
g++ -std=c++11 -pedantic -Wall -Wextra -o extern_C_inline def.o main.o use.o
foo appears in extern_C_inline only once because the different versions that the compiler emits in different object files get merged, but I would like to know whether this behavior is specified by the standard. If I remove the extern definition of foo and make it static, then foo appears in extern_C_inline multiple times, because the compiler emits it in each translation unit.
The program is valid as written, but def.c is required to ensure the code always works with all compilers and any combination of optimisation levels for the different files.
Because there is a declaration with extern on it, def.c provides an external definition of the function foo(), which you can confirm with nm:
$ nm def.o
0000000000000000 T foo
That definition will always be present in def.o no matter how that file is compiled.
In use.c there is an inline definition of foo(), but according to 6.7.4 in the C standard it is unspecified whether the call to foo() uses that inline definition or uses an external definition (in practice whether it uses the inline definition depends on whether the file is optimised or not). If the compiler chooses to use the inline definition it will work. If it chooses not to use the inline definition (e.g. because it is compiled without optimisations) then you need an external definition in some other file.
Without optimisation use.o has an undefined reference:
$ gcc -std=c99 -pedantic -Wall -Wextra -c -o use.o use.c
$ nm use.o
0000000000000000 T bar
U foo
But with optimisation it doesn't:
$ gcc -std=c99 -pedantic -Wall -Wextra -c -o use.o use.c -O3
$ nm use.o
0000000000000000 T bar
In main.cpp there will be a definition of foo() but it will typically generate a weak symbol, so it might not be kept by the linker if another definition is found in another object. If the weak symbol exists, it can satisfy any possible reference in use.o that requires an external definition, but if the compiler inlines foo() in main.o then it might not emit any definition of foo() in main.o, and so the definition in def.o would still be needed to satisfy use.o
Without optimisation main.o contains a weak symbol:
$ g++ -std=c++11 -pedantic -Wall -Wextra -c -o main.o main.cpp
$ nm main.o
U bar
0000000000000000 W foo
0000000000000000 T main
U printf
However compiling main.cpp with -O3 inlines the call to foo and the compiler does not emit any symbol for it:
$ g++ -std=c++11 -pedantic -Wall -Wextra -c -o main.o main.cpp -O3
$ nm main.o
U bar
0000000000000000 T main
U printf
So if foo() is not inlined in use.o but is inlined in main.o, then you need the external definition in def.o.
Would it work if def.c was removed and foo was not used in C?
Yes. If foo is only used in the C++ file then you do not need the external definition of foo in def.o, because main.o either contains its own (weak) definition or will inline the function. The definition in def.o is only needed to satisfy non-inlined calls to foo from other C code.
Aside: the C++ compiler is allowed to skip generating any symbol for foo when optimising main.o because the C++ standard says that a function declared inline in one translation unit must be declared inline in all translation units, and to call a function declared inline the definition must be available in the same file as the call. That means the compiler knows that if some other file wants to call foo() then that other file must contain the definition of foo(), and so when that other file is compiled the compiler will be able to generate another weak symbol definition of the function (or inline it) as needed. So there is no need to output foo in main.o if all the calls in main.o have been inlined.
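A hedged sketch of that point, using a hypothetical extra C++ file other.cpp that reuses header.h from above:

// other.cpp (hypothetical)
#include "header.h"
// foo is declared inline, so its definition must be visible here anyway;
// the compiler can therefore inline this call or emit its own weak
// definition of foo, and never needs to rely on main.o for the symbol.
int baz(int a)
{
    return foo(a) + 1;
}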
These are different semantics from C, where the inline definition in use.c might be ignored by the compiler and the external definition in def.o must exist even if nothing in def.c calls it.