I have a simple question. I have three C files, f1.c, f2.c and f3.c, which contain:
// f1.c
int f1()
{
    return 2;
}

// f2.c
int f2()
{
    return 4;
}

// f3.c
int f3()
{
    return 10;
}
I already have 3 object files after running the following command (I use MinGW under Windows 7):
gcc -c f1.c f2.c f3.c
and I create the DLL with:
gcc f1.o f2.o f3.o -o test1.dll -shared
Using DLL Export Viewer I have opened this file and can see the exported functions.
How can I use this file in my application (cross-platform)? How can I call the functions f1, f2, f3?
Sorry for my bad English
Assuming you have the header, the only thing you are missing is the import library (a .a archive), which the linker needs in order to figure out what a DLL provides.
Change this:
gcc f1.o f2.o f3.o -o test1.dll -shared
To this:
gcc f1.o f2.o f3.o -o test1.dll -shared -Wl,--out-implib,libtest1.a
Then, to link with the shared library, pass -ltest1 to gcc.
You will still need to recompile the library for each platform and architecture (x86, AMD64, IA64, etc.). Windows uses DLLs, but Linux uses Shared Objects, for example.
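For what it's worth, the equivalent build on Linux would look something like this (a sketch using the same file names):
gcc -fPIC -c f1.c f2.c f3.c
gcc f1.o f2.o f3.o -o libtest1.so -shared
and the link step is the same -ltest1, with -L pointing at the directory that contains libtest1.so.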
See http://www.mingw.org/wiki/sampleDLL for more information.
Include the header file in the calling program.
Link the calling program against the import library (libtest1.a) generated above.
Make sure the linker can find the import library (-L) and that the DLL itself can be found at run time. A minimal example is sketched below.
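For instance, a minimal caller could look like this (the header test1.h and the file main.c are assumptions, not part of the question):
/* test1.h -- assumed header declaring the exported functions */
int f1(void);
int f2(void);
int f3(void);

/* main.c -- a small test program */
#include <stdio.h>
#include "test1.h"

int main(void)
{
    printf("%d %d %d\n", f1(), f2(), f3());
    return 0;
}
Built and linked with:
gcc main.c -o main.exe -L. -ltest1
At run time the DLL itself (test1.dll) has to be somewhere the loader can find it, e.g. next to main.exe or on PATH.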
I have some C++ code in MSYS2 that I am trying to link dynamically, to show how a dynamic-link library works.
On Linux, showing the call is no problem: stepping in gdb, we can watch the call go through the jump vector, eventually landing in the desired function.
But in MSYS2 it looks as though they wanted to eliminate DLLs; all the libraries I can find are .dll.a files, which I think are really static libraries.
I built a trivial little function like this:
#include <cstdint>

extern "C" {
    uint64_t f(uint64_t a, uint64_t b) {
        return a + b;
    }
}
compiling in the makefile with:
g++ -g -fPIC -c lib1.cc
g++ -g -shared lib1.o -o libtest1.so
When I run the file utility, it says that:
libtest1.so: PE32+ executable (DLL) (console) x86-64, for MS Windows
When I compile the code using it:
g++ -g main.cc -ltest1 -o prog
The error is -ltest1 no such file or directory.
MinGW uses the .dll extension for shared libraries, not .so.
lib??.dll.a is an import library, a shim that will load the corresponding lib??.dll at runtime.
At some point in time, you couldn't link .dlls directly and had to link the .dll.a import libraries instead. Modern MinGW can link against .dlls directly, so you shouldn't need to use import libraries anymore.
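A sketch of that direct linking, assuming the library from the question is rebuilt with the conventional .dll extension:
g++ -g -shared lib1.o -o libtest1.dll
g++ -g main.cc -o prog.exe ./libtest1.dll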
-ltest1 no such file or directory
Wouldn't you get the same error on Linux? You have to specify the library search path with -L. The option -ltest1 needs either libtest1.a or libtest1.dll.a or libtest1.dll to exist there (perhaps some other variants are checked too).
The reason your linker cannot find the library is that your current working directory is not in the library search path. Add -L. to your link command.
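For example:
g++ -g main.cc -L. -ltest1 -o prog
The linker will then look for libtest1.dll.a, libtest1.a, or libtest1.dll in the current directory.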
It is untrue that MSYS2 "wanted to eliminate DLLs". Just run ls /mingw64/bin/*.dll and you will see plenty of DLLs (assuming you have some MINGW64 packages installed). The .dll.a files used for linking are called import libraries.
On some Linux systems this works. Can I generally design plugin-based apps such that there is no library, but only header files and the executable?
AFAIK this always works if the interface classes are interfaces in the sense that they only contain pure virtual functions. But can I also define classes in the interface containing symbols that have to be bound by linking against an executable containing them?
Use case: an executable foo, the app, offers plugins an interface through a shared library libfoo. Plugins (shared libs) are loaded at runtime. Both the app and the plugins link against libfoo to resolve the symbols in the classes both of them use. Is this necessary, or can I put the classes in the executable target and let the plugins link against the executable instead?
I'm most familiar with Linux (or other ELF-based systems).
If you use a PIE executable and link it with --export-dynamic, you can skip the use of (e.g.) libfoo.so.
The executable will load the plugins and will provide any needed API symbols to the plugin(s).
In other words, the plugins do not need to know how the foo executable gets the API symbols. They can be linked without reference to any libraries.
Below is a complete test case for you ...
Note that I bound the API calls into the executable directly, but the executable could load the API calls from a shared libfoo.so if it wanted to; the plugins would know nothing of this library.
Note that you may be able to add or remove some options. As the ### lines in the Makefile suggest, I did some considerable hacking until I hit upon a combination that worked.
FILE: Makefile
# pieplugin/Makefile -- make file for pieplugin
#
# SO: can we use an executable file as shared library on all platforms
#     (windows, mac, linux)?
# SITE: stackoverflow.com
# SO: 70370572
XFILE = foo
XOBJ += foo.o
PLUGINS += libplug1.so
PLUGINS += libplug2.so
CFLAGS += -Wall -Werror -I.
###CFLAGS += -g
PIEFLAGS += -fpie
PIEFLAGS += -fPIC
###PIEFLAGS += -fpic
PICFLAGS += -fPIC
###PICFLAGS += -fpic
PICFLAGS += -nostdlib
PICFLAGS += -nodefaultlibs
PLUG_CFLAGS += $(CFLAGS)
PLUG_CFLAGS += $(PICFLAGS)
PLUG_LFLAGS += -shared
###PLUG_LFLAGS += $(PICFLAGS)
###PLUG_LFLAGS += -no-pie
CC = gcc
###CC = clang
###LDSO = $(CC)
LDSO = ld
XFILE_LFLAGS += -Wl,--export-dynamic
all: $(PLUGINS) $(XFILE)

foo.o: foo.c
	$(CC) $(CFLAGS) $(XFILE_CFLAGS) -c foo.c

$(XFILE): foo.o
	$(CC) -o $(XFILE) $(XFILE_LFLAGS) foo.o -ldl
	file $(XFILE)

plug1.o: plug1.c
	$(CC) $(PLUG_CFLAGS) -c plug1.c

libplug1.so: plug1.o
	$(LDSO) $(PLUG_LFLAGS) -o libplug1.so plug1.o
	file libplug1.so

plug2.o: plug2.c
	$(CC) $(PLUG_CFLAGS) -c plug2.c

libplug2.so: plug2.o
	$(LDSO) $(PLUG_LFLAGS) -o libplug2.so plug2.o
	file libplug2.so

test:
	./$(XFILE) $(PLUGINS)

xtest: clean all test

clean:
	rm -f $(XFILE) $(PLUGINS) *.o
FILE: foo.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <dlfcn.h>
#include <foopriv.h>
#define PLUGSYM(_fnc) \
    plug->plug_##_fnc = plugsym(plug,"plugin_" #_fnc)

// plugsym -- load symbol from plugin file
void *
plugsym(plugin_t *plug,const char *sym)
{
    void *fnc = dlsym(plug->plug_so,sym);
    int sverr = errno;

    printf("plugsym: loading %s from %s at %p\n",
        sym,plug->plug_file,fnc);

    if (fnc == NULL) {
        printf("plugsym: failed -- %s\n",strerror(sverr));
        exit(1);
    }

    return fnc;
}

// plugload -- load plugin file
void
plugload(const char *tail)
{
    char file[1000];
    plugin_t *plug = calloc(1,sizeof(*plug));

    strcpy(plug->plug_file,tail);
    sprintf(file,"./%s",tail);

    printf("plugload: dlopen of %s ...\n",file);
    //plug->plug_so = dlopen(file,RTLD_LOCAL);
    //plug->plug_so = dlopen(file,RTLD_GLOBAL);
    plug->plug_so = dlopen(file,RTLD_LAZY);
    int sverr = errno;
    printf("plugload: plug_so=%p\n",plug->plug_so);

#if 1
    if (plug->plug_so == NULL) {
        printf("plugload: failed -- %s\n",strerror(sverr));
        exit(1);
    }
#endif

    PLUGSYM(fncint);
    PLUGSYM(fncflt);

    plug->plug_next = plugin_list;
    plugin_list = plug;
}

int
main(int argc,char **argv)
{
    --argc;
    ++argv;

    // NOTE: in production code, maybe we use opendir/readdir to find plugins
    for (; argc > 0; --argc, ++argv)
        plugload(*argv);

    for (plugin_t *plug = plugin_list; plug != NULL; plug = plug->plug_next) {
        printf("main: calling plugin %s fncint ...\n",plug->plug_file);
        plug->plug_fncint(NULL);
    }

    for (plugin_t *plug = plugin_list; plug != NULL; plug = plug->plug_next) {
        printf("main: calling plugin %s fncflt ...\n",plug->plug_file);
        plug->plug_fncflt(NULL);
    }

    return 0;
}

// functions provided by foo executable to plugins ...
void
foo_fncint(fooint_t *ptr,const char *who)
{
    printf("foo_fncint: called from %s ...\n",who);
}

void
foo_fncflt(fooflt_t *ptr,const char *who)
{
    printf("foo_fncflt: called from %s ...\n",who);
}
FILE: plug1.c
// plug1.c -- a plugin
#include <foopub.h>

void ctors
initme(void)
{
}

void
plugin_fncint(fooint_t *ptr)
{
    foo_fncint(ptr,"plug1_fncint");
}

void
plugin_fncflt(fooflt_t *ptr)
{
    foo_fncflt(ptr,"plug1_fncflt");
}
FILE: plug2.c
// plug2.c -- a plugin
#include <foopub.h>

void ctors
initme(void)
{
}

void
plugin_fncint(fooint_t *ptr)
{
    foo_fncint(ptr,"plug2_fncint");
}

void
plugin_fncflt(fooflt_t *ptr)
{
    foo_fncflt(ptr,"plug2_fncflt");
}
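The headers foopriv.h and foopub.h are not reproduced above; a minimal sketch of what they would need to contain, inferred from how the code uses them (the exact layout and sizes are assumptions):
// FILE: foopub.h -- public API seen by plugins (assumed)
#ifndef FOOPUB_H
#define FOOPUB_H

#define ctors __attribute__((constructor))

typedef struct fooint fooint_t;
typedef struct fooflt fooflt_t;

void foo_fncint(fooint_t *ptr,const char *who);
void foo_fncflt(fooflt_t *ptr,const char *who);

#endif

// FILE: foopriv.h -- private definitions for the foo executable (assumed)
#ifndef FOOPRIV_H
#define FOOPRIV_H

#include <foopub.h>

typedef struct plugin plugin_t;
struct plugin {
    plugin_t *plug_next;                // next plugin in the loaded list
    void *plug_so;                      // handle returned by dlopen
    char plug_file[256];                // plugin file name
    void (*plug_fncint)(fooint_t *);    // resolved plugin entry points
    void (*plug_fncflt)(fooflt_t *);
};

plugin_t *plugin_list;                  // head of the loaded-plugin list

#endif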
Here is the output of make xtest:
rm -f foo libplug1.so libplug2.so *.o
gcc -Wall -Werror -I. -fPIC -nostdlib -nodefaultlibs -c plug1.c
ld -shared -o libplug1.so plug1.o
file libplug1.so
libplug1.so: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, not stripped
gcc -Wall -Werror -I. -fPIC -nostdlib -nodefaultlibs -c plug2.c
ld -shared -o libplug2.so plug2.o
file libplug2.so
libplug2.so: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, not stripped
gcc -Wall -Werror -I. -c foo.c
gcc -o foo -Wl,--export-dynamic foo.o -ldl
file foo
foo: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=d136c53c818056fdbec75294ea472ab8c056ca52, not stripped
./foo libplug1.so libplug2.so
plugload: dlopen of ./libplug1.so ...
plugload: plug_so=0x7fd320
plugsym: loading plugin_fncint from libplug1.so at 0x7fad7f93e037
plugsym: loading plugin_fncflt from libplug1.so at 0x7fad7f93e059
plugload: dlopen of ./libplug2.so ...
plugload: plug_so=0x7fd940
plugsym: loading plugin_fncint from libplug2.so at 0x7fad7f939037
plugsym: loading plugin_fncflt from libplug2.so at 0x7fad7f939059
main: calling plugin libplug2.so fncint ...
foo_fncint: called from plug2_fncint ...
main: calling plugin libplug1.so fncint ...
foo_fncint: called from plug1_fncint ...
main: calling plugin libplug2.so fncflt ...
foo_fncflt: called from plug2_fncflt ...
main: calling plugin libplug1.so fncflt ...
foo_fncflt: called from plug1_fncflt ...
I'm most familiar with Windows, where the answer to the strict question you asked is NO but you can still do what you want.
On Windows, EXE and DLL files both use the same file format, "Portable Executable". But they still have very important differences:
the relocation table is most often missing from EXE files
entry points are different
As a result of these differences, attempting to load a native EXE file via LoadLibrary() WILL FAIL. (LoadLibraryEx() with LOAD_LIBRARY_AS_DATAFILE is fine and can be used with the Windows resource APIs.) .NET assembly EXE files are an exception: they don't contain any actual code, just intermediate language, and code addresses are always determined dynamically, so relocation fixups aren't needed.
However, your scheme "let the plugins link the executable" will still work, because EXE files do support an export table, and the loader can bind imports of DLLs to exports of the EXE.
Unfortunately, the C++ ABI is not standardized on Windows, so exporting C++ classes is very fragile and results in lock-in to a particular compiler. To maintain loose coupling, you need to export either plain C functions or COM-style interfaces.
Using interfaces allows you to avoid the entire issue -- you can define a class in the executable that implements an interface described in a header file, pass an interface pointer to the plugin, and the plugin can save that pointer and use it for all calls back to the executable without ever having any import entries. For example, the predecessor to COM, "Object Linking and Embedding", defined the IObjectWithSite and IOleClientSite interfaces.
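A minimal sketch of that pattern (IHostServices, Log, and plugin_init are illustrative names, not any real API):
// host_api.h -- shared header; no link-time symbols needed on either side
struct IHostServices {
    virtual void Log(const char *msg) = 0;
    virtual ~IHostServices() {}
};

// plugin.cpp -- inside the plugin DLL
extern "C" __declspec(dllexport) void plugin_init(IHostServices *host) {
    host->Log("plugin loaded");   // calls back into the EXE through the vtable
}
The EXE implements IHostServices, loads the plugin with LoadLibrary(), looks up plugin_init with GetProcAddress(), and hands it the interface pointer; the plugin never imports anything from the EXE or from a helper DLL.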
Absolutely, for certain file formats.
Mono, .NET Core, and .NET 5.0+ assemblies will run on Windows, Linux, and Mac.
Ditto for Java .class and .jar files.
There are others.
Linux Debian Buster
go version go1.11.6 linux/amd64
gcc version 8.3.0 (Debian 8.3.0-6)
libmylib.go
package main

import "C"

import (
    "fmt"
)

func say(text string) {
    fmt.Println(text)
}

func main() {}
mylib.h
#ifndef MY_LIB_H
#define MY_LIB_H
#include <string>
void say(std::string text);
#endif
main.cpp
#include <string>
#include "mylib.h"

using namespace std;

int main() {
    string text = "Hello, world!";
    say(text);
    return 0;
}
CGO_ENABLED=1 go build -o libmylib.so -buildmode=c-shared libmylib.go
g++ -L/path/to/lib/ -lmylib main.cpp -o my-test-program
/usr/bin/ld: /tmp/ccu4fXFB.o: in function `main':
main.cpp:(.text+0x53): undefined reference to `say(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
collect2: error: ld returned 1 exit status
With the change package main -> package mylib:
CGO_ENABLED=1 go build -o libmylib.so -buildmode=c-shared libmylib.go
-buildmode=c-shared requires exactly one main package
You can't pass a std::string to the Go function; you have to use the C-compatible types from cgo (such as *C.char, converted with GoString on the Go side). The error message you are getting is that type mismatch manifesting as a link-time error.
See the cgo reference.
Here's a working example. There are a few differences from yours: the //export directive is needed to include the function in the generated header file, the argument is *C.char rather than string or GoString, the C++ code uses the header generated by cgo, and there has to be a const-removing cast on the string literal (because the generated header declares the parameter as a plain char* and Go has no notion of C-style const).
libmylib.go
package main

import "C"

import (
    "fmt"
)

//export say
func say(text *C.char) {
    fmt.Println(C.GoString(text))
}

func main() {}
main.cpp
#include "libmylib.h"
int main(void) {
say(const_cast<char*>("hello world"));
return 0;
}
commands
This compiles the Go file, generating libmylib.so and libmylib.h in the current directory.
go build -o libmylib.so -buildmode=c-shared libmylib.go
This compiles the C++ program, linking it against the shared library built above:
g++ -L. main.cpp -lmylib -o hello_program
To run the program, LD_LIBRARY_PATH needs to be set to the current directory. That would be different if the program were installed and the shared library put in a sensible place.
LD_LIBRARY_PATH=. ./hello_program
g++ -L/path/to/lib/ -lmylib main.cpp -o test
is probably wrong. Read the Invoking GCC chapter of the GCC documentation; the order of arguments to g++ matters a lot.
Also, test(1) is an existing command, so I recommend using some other name.
So consider compiling with
g++ -Wall -g -O main.cpp -L/path/to/lib/ -lmylib -o my-test-program
You probably want debugging information (-g), warnings (-Wall) and some optimization (-O)
You did comment
I need to use some functions from a Go file in my C++ project.
This is curious. I assume your operating system is some Linux. Then, can't you just use inter-process communication facilities between a process running a Go program and another process running your C++ program? Consider perhaps using JSONRPC or HTTP between them. There exist several open source libraries in Go and in C++ to help you.
PS. As I commented, calling C++ code from a Go program could be much simpler. Of course you do need to read the Go documentation and the blog about cgo and probably the C++ dlopen minihowto and some C++ reference.
I have a dummy.hpp
#ifndef DUMMY
#define DUMMY
void dummy();
#endif
and a dummy.cpp
#include <iostream>

void dummy() {
    std::cerr << "dummy" << std::endl;
}
and a main.cpp which uses dummy()
#include "dummy.hpp"
int main(){
dummy();
return 0;
}
Then I compiled dummy.cpp to three libraries, libdummy1.a, libdummy2.a, libdummy.so:
g++ -c -fPIC dummy.cpp
ar rvs libdummy1.a dummy.o
ar rvs libdummy2.a dummy.o
g++ -shared -fPIC -o libdummy.so dummy.cpp
When I try to compile main and link the dummy libs:
g++ -o main main.cpp -L. -ldummy1 -ldummy2
there is no duplicate symbol error produced by the linker. Why does this happen when I link two identical libraries statically?
When I try:
g++ -o main main.cpp -L. -ldummy1 -ldummy
there is also no duplicate symbol error. Why?
The loader seems always to choose the dynamic libs and not the code compiled in the .o files.
Does it mean the same symbol is always loaded from the .so file if it is both in a .a and a .so file?
Does it mean symbols in the static symbol table in a static library never conflict with those in the dynamic symbol table in a .so file?
There's no error in either Scenario 1 (two static libraries) or Scenario 2 (a static and a shared library) because the linker takes the first object file from a static library, or the first shared library, it encounters that provides a definition of a symbol it does not yet have a definition for. It simply ignores any later definitions of the same symbol because it already has a good one. In general, the linker only takes what it needs from a library. With static libraries, that's strictly true. With shared libraries, all the symbols in the shared library become available once it satisfies any missing symbol; with some linkers the symbols of a shared library may be available regardless, while other versions only record the use of a shared library if that shared library provides at least one needed definition.
It's also why you need to link libraries after object files. You could add dummy.o to your linking commands and as long as that appears before the libraries, there'll be no trouble. Add the dummy.o file after libraries and you'll get doubly-defined symbol errors.
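For example, with the files from the question (the commands are illustrative):
g++ -o main main.cpp dummy.o -L. -ldummy1
links cleanly, because dummy.o already defines dummy() and nothing needs to be pulled from libdummy1.a, whereas
g++ -o main main.cpp -L. -ldummy1 dummy.o
fails with a multiple-definition error, because the library member defining dummy() has already been pulled in by the time the explicit dummy.o is seen.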
The only time you run into problems with these double definitions is if there's an object file in Library 1 that defines both dummy and extra, and there's an object file in Library 2 that defines both dummy and alternative, and the code needs the definitions of both extra and alternative. Then you have duplicate definitions of dummy that cause trouble. Indeed, the object files could be in a single library and would still cause trouble.
Consider:
/* file1.h */
extern void dummy();
extern int extra(int);
/* file1.cpp */
#include "file1.h"
#include <iostream>
void dummy() { std::cerr << "dummy() from " << __FILE__ << '\n'; }
int extra(int i) { return i + 37; }
/* file2.h */
extern void dummy();
extern int alternative(int);
/* file2.cpp */
#include "file2.h"
#include <iostream>
void dummy() { std::cerr << "dummy() from " << __FILE__ << '\n'; }
int alternative(int i) { return -i; }
/* main.cpp */
#include "file1.h"
#include "file2.h"
int main()
{
    return extra(alternative(54));
}
You won't be able to link the object files from the three source files shown because of the double-definition of dummy, even though the main code does not call dummy().
Regarding:
The loader seems always to choose the dynamic libs and not the code compiled in the .o files.
No; the linker always includes the object files given on the command line unconditionally. It scans libraries as it encounters them on the command line, collecting definitions it needs. If the object files precede the libraries, there's not a problem unless two of the object files define the same symbol (does 'one definition rule' ring any bells?). If some of the object files follow libraries, you can run into conflicts if the libraries define symbols that the later object files also define. Note that when it starts out, the linker is looking for a definition of main. It collects the defined symbols and referenced symbols from each object file it is told about, and keeps adding code (from libraries) until all the referenced symbols are defined.
Does it mean the same symbol is always loaded from the .so file if it is both in a .a and a .so file?
No; it depends which was encountered first. If the .a was encountered first, the .o file is effectively copied from the library into the executable (and the symbol in the shared library is ignored because there's already a definition for it in the executable). If the .so was encountered first, the definition in the .a is ignored because the linker is no longer looking for a definition of that symbol — it's already got one.
Does it mean symbols in the static symbol table in a static library never conflict with those in the dynamic symbol table in a .so file?
You can have conflicts, but the first definition encountered resolves the symbol for the linker. It only runs into conflicts if the code that satisfies the reference causes a conflict by defining other symbols that are needed.
If I link two shared libs, can I get conflicts so that the link phase fails?
As I noted in a comment:
My immediate reaction is "Yes, you can". It would depend on the content of the two shared libraries, but you could run into problems, I believe. […cogitation…] How would you show this problem? … It's not as easy as it seems at first sight. What is required to demonstrate such a problem? … Or am I overthinking this? … […time to go play with some sample code…]
After some experimentation, my provisional, empirical answer is "No, you can't" (or "No, on at least some systems, you don't run into a conflict"). I'm glad I prevaricated.
Taking the code shown above (2 headers, 3 source files), and running with GCC 5.3.0 on Mac OS X 10.10.5 (Yosemite), I can run:
$ g++ -O -c main.cpp
$ g++ -O -c file1.cpp
$ g++ -O -c file2.cpp
$ g++ -shared -o libfile2.so file2.o
$ g++ -shared -o libfile1.so file1.o
$ g++ -o test2 main.o -L. -lfile1 -lfile2
$ ./test2
$ echo $?
239
$ otool -L test2
test2:
libfile2.so (compatibility version 0.0.0, current version 0.0.0)
libfile1.so (compatibility version 0.0.0, current version 0.0.0)
/opt/gcc/v5.3.0/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 7.21.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1213.0.0)
/opt/gcc/v5.3.0/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
$
It is unconventional to use .so as the extension on Mac OS X (it's usually .dylib), but it seems to work.
Then I revised the code in the .cpp files so that extra() calls dummy() before the return, and so do alternative() and main(). After recompiling and rebuilding the shared libraries, I reran the programs. The first line of output is from the dummy() called by main(). Then you get the other two lines, produced by alternative() and extra() in that order, because the calling sequence for return extra(alternative(54)); demands that.
$ g++ -o test2 main.o -L. -lfile1 -lfile2
$ ./test2
dummy() from file1.cpp
dummy() from file2.cpp
dummy() from file1.cpp
$ g++ -o test2 main.o -L. -lfile2 -lfile1
$ ./test2
dummy() from file2.cpp
dummy() from file2.cpp
dummy() from file1.cpp
$
Note that the function called by main() is the first one that appears in the libraries it is linked with. But (on Mac OS X 10.10.5 at least) the linker does not run into a conflict. Note, though, that the code in each shared object calls 'its own' version of dummy() — there is disagreement between the two shared libraries about which function is dummy(). (It would be interesting to have the dummy() function in separate object files in the shared libraries; then which version of dummy() gets called?) But in the extremely simple scenario shown, the main() function manages to call just one of the dummy() functions. (Note that I'd not be surprised to find differences between platforms for this behaviour. I've identified where I tested the code. Please let me know if you find different behaviour on some platform.)
I work in Linux with C++ (Eclipse), and want to use a library.
Eclipse shows me an error:
undefined reference to 'dlopen'
Do you know a solution?
Here is my code:
#include <stdlib.h>
#include <stdio.h>
#include <dlfcn.h>
int main(int argc, char **argv) {
    void *handle;
    double (*desk)(char*);
    char *error;

    handle = dlopen("/lib/CEDD_LIB.so.6", RTLD_LAZY);
    if (!handle) {
        fputs(dlerror(), stderr);
        exit(1);
    }

    // in C++ the void* returned by dlsym() must be cast explicitly
    desk = (double (*)(char*)) dlsym(handle, "Apply");
    if ((error = dlerror()) != NULL) {
        fputs(error, stderr);
        exit(1);
    }

    dlclose(handle);
}
You have to link against libdl, add
-ldl
to your linker options
@Masci is correct, but in case you're using C (and the gcc compiler), take into account that this doesn't work:
gcc -ldl dlopentest.c
But this does:
gcc dlopentest.c -ldl
Took me a bit to figure out...
this doesn't work:
gcc -ldl dlopentest.c
But this does:
gcc dlopentest.c -ldl
That's one annoying "feature" for sure
I was struggling with this when writing heredoc syntax and found some interesting facts. With CC=clang, this works:
$CC -ldl -x c -o app.exe - << EOF
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    if (dlopen("libc.so.6", RTLD_LAZY | RTLD_GLOBAL))
        printf("libc.so.6 loading succeeded\n");
    else
        printf("libc.so.6 loading failed\n");
    return 0;
}
EOF
./app.exe
as well as all of these:
$CC -ldl -x c -o app.exe - << EOF
$CC -x c -ldl -o app.exe - << EOF
$CC -x c -o app.exe -ldl - << EOF
$CC -x c -o app.exe - -ldl << EOF
However, with CC=gcc, only the last variant works; -ldl after - (the stdin argument symbol).
I was using CMake to compile my project and I ran into the same problem.
The solution described here works like a charm: simply add ${CMAKE_DL_LIBS} to the target_link_libraries() call.
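A minimal sketch of what that looks like in CMakeLists.txt (the target name my_app is an assumption):
add_executable(my_app main.cpp)
target_link_libraries(my_app ${CMAKE_DL_LIBS})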
The topic is quite old, yet I struggled with the same issue today while compiling cegui 0.7.1 (openVibe prerequisite).
What worked for me was to set: LDFLAGS="-Wl,--no-as-needed"
in the Makefile.
I've also tried -ldl for LDFLAGS but to no avail.
You can try adding this:
LIBS=-ldl CFLAGS=-fno-strict-aliasing
to the configure options
You need to do something like this for the makefile:
LDFLAGS='-ldl'
make install
That will pass the linker flags from make through to the linker. It doesn't matter that the makefile was autogenerated.
I met the same problem even when using -ldl.
Besides this option, the source files need to be placed before the libraries; see undefined reference to `dlopen'.
In order to use dl functions you need to use the -ldl flag for the linker.
How do you do it in Eclipse?
Go to Project --> Properties --> C/C++ Build --> Settings --> GCC C++ Linker --> Libraries.
In the "Libraries (-l)" box press the "+" sign, write "dl" (without the quotes), press OK, then clean and rebuild your project.
$ gcc -o program program.c -l <library_to_resolve_program.c's_unresolved_symbols>
A good description of why the placement of -l dl matters
But there's also a pretty succinct explanation in the docs
From man gcc:
-llibrary
-l library
Search the library named library when linking. (The second
alternative with the library as a separate argument is only for POSIX
compliance and is not recommended.)
It makes a difference where in the command you write this option; the
linker searches and processes libraries and object files in the order
they are specified. Thus, foo.o -lz bar.o searches library z after
file foo.o but before bar.o. If bar.o refers to functions in z,
those functions may not be loaded.
Try to rebuild OpenSSL (if you are linking with it) with the no-threads flag.
Then try to link like this:
target_link_libraries(${project_name} dl pthread crypt m ${CMAKE_DL_LIBS})
In older versions of the GNU toolchain, glibc did not provide dlopen and dlsym in libc itself, so you had to link with -ldl (libdl). Since glibc 2.34 the dl functions are part of libc proper, so -ldl is no longer required (it is still accepted for compatibility). Just include <dlfcn.h> and you are good to go.