I've read through the "C bindings" section in the tutorial, but I'm a novice at C stuff.
Could someone please let me know if a Crystal program can be built as a static library to link to, and if so could you please provide a simple example?
Yes, but it is not recommended. Crystal depends on a GC, which makes it less desirable to produce shared (or static) libraries. Accordingly, there are no syntax-level constructs to aid in creating them, nor a simple compiler invocation to do so. The C bindings section in the documentation is about making libraries written in C available to Crystal programs, not the other way around.
Here's a simple example anyhow:
logger.cr
fun init = crystal_init : Void
  # We need to initialize the GC
  GC.init
  # We need to invoke Crystal's "main" function, the one that initializes
  # all constants and runs the top-level code (none in this case, but without
  # constants like STDOUT and others the last line will crash).
  # We pass 0 and null to argc and argv.
  LibCrystalMain.__crystal_main(0, Pointer(Pointer(UInt8)).null)
end

fun log = crystal_log(text: UInt8*): Void
  puts String.new(text)
end
logger.h
#ifndef _CRYSTAL_LOGGER_H
#define _CRYSTAL_LOGGER_H
void crystal_init(void);
void crystal_log(char* text);
#endif
main.c
#include "logger.h"
int main(void) {
  crystal_init();
  crystal_log("Hello world!");
}
We can create a shared library with
crystal build logger.cr --single-module --link-flags="-shared" -o liblogger.so
Or a static library with
crystal build logger.cr --single-module --emit obj
rm logger # we're not interested in the executable
strip -N main logger.o # Drop duplicated main from the object file
ar rcs liblogger.a logger.o
Let's confirm our functions got included
nm liblogger.so | grep crystal_
nm liblogger.a | grep crystal_
Alright, time to compile our C program
# Folder where we can store either liblogger.so or liblogger.a, but
# not both at the same time, so we can be sure to use the right one
rm -rf lib
mkdir lib
cp liblogger.so lib
gcc main.c -o dynamic_main -Llib -llogger
LD_LIBRARY_PATH="lib" ./dynamic_main
Or the static version
# Folder where we can store either liblogger.so or liblogger.a, but
# not both at the same time, so we can be sure to use the right one
rm -rf lib
mkdir lib
cp liblogger.a lib
gcc main.c -o static_main -Llib -llogger -levent -ldl -lpcl -lpcre -lgc
./static_main
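If you'd rather drive the dynamic build from make, here is a minimal sketch that simply wires the commands above together (same file names as above; recipe lines start with a TAB):
# Minimal sketch reusing the commands shown above
liblogger.so: logger.cr
	crystal build logger.cr --single-module --link-flags="-shared" -o liblogger.so
dynamic_main: main.c liblogger.so
	mkdir -p lib && cp liblogger.so lib
	gcc main.c -o dynamic_main -Llib -llogger
run: dynamic_main
	LD_LIBRARY_PATH="lib" ./dynamic_main
Typing make run then rebuilds the library and the executable only when their sources change.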
With much inspiration from https://gist.github.com/3bd3aadd71db206e828f
Related
"ar" -- is just tool to create archives.
And all static libraries of form "lib*.a" are in fact just archived compiled objects + additional file with symbol table, added there by "ranlib".
No linking is performing during creation of such library.
So why do most projects use ***_LDFLAGS and ***_LIBADD in their Makefile.am during the creation of such ("lib*.a" static library) archives?
Does automake ignore those flags (when they relate to a "lib*.a" static library), or does it in fact link something there?
So why do most projects use ***_LDFLAGS and ***_LIBADD in their Makefile.am during the creation of such ("lib*.a" static library) archives?
The GNU Build System is capable of creating dynamic and static libs (or both), determined at configure time using the --enable-shared and --enable-static flags. As you guessed, _LDFLAGS and _LIBADD are oriented more toward dynamic shared objects or program linkage than toward the static linker. Libtool's static link pass is essentially another link pass that invokes ar to create the archive (omitting all the flags). For example:
lib_LTLIBRARIES=libfoo.la
libfoo_la_SOURCES=$(SRCS)
libfoo_la_LDFLAGS=-Wl,-t
When both shared and static libs are generated, this outputs something like:
libtool: link: gcc -shared -fPIC -DPIC .libs/foo.o -g -O2 -Wl,-t -Wl,-soname -Wl,libfoo.so.0 -o .libs/libfoo.so.0.0.0
...
libtool: link: (cd ".libs" && rm -f "libfoo.so.0" && ln -s "libfoo.so.0.0.0" "libfoo.so.0")
libtool: link: (cd ".libs" && rm -f "libfoo.so" && ln -s "libfoo.so.0.0.0" "libfoo.so")
libtool: link: ar cru .libs/libfoo.a foo.o
libtool: link: ranlib .libs/libfoo.a
libtool: link: ( cd ".libs" && rm -f "libfoo.la" && ln -s "../libfoo.la" "libfoo.la" )
automake does ignore _LDFLAGS; however, the script that performs the linking (libtool) does not. It also looks there for flags that affect linking. For example:
lib_LTLIBRARIES=libfoo.la
libfoo_la_SOURCES=$(SRCS)
libfoo_la_LDFLAGS=-Wl,-t -static
will only generate a static lib, even if configure --disable-static was run to generate the Makefile.
libtool is just a wrapper script over the native compiler/linker tools for portability.
The answer to your question "Does automake ignore them?" is: no.
It is completely true that "ar is just a tool to create archives."
But Automake does not ignore ***_LDFLAGS and ***_LIBADD in Makefile.am at all; otherwise, what would be the point of having such flags if they meant nothing to the build system?
From the documentation (links given below):
The ‘library_LIBADD’ variable should be used to list extra libtool objects (.lo files) or libtool libraries (.la) to add to library.
The ‘library_LDFLAGS’ variable is the place to list additional libtool linking flags, such as -version-info, -static, and a lot more.
For more details and more clarity on these flags, you should go through the Libtool documentation:
8.3.7 _LIBADD, _LDFLAGS, and _LIBTOOLFLAGS
Libtool Documentation
EDIT:
As your question is still generic, let me put it in two different scenarios...
Static Lib is stand-alone:
In this scenario ***_LIBADD may not be of much use, as mentioned in the documentation: "the library_LIBADD variable should be used to list extra libtool objects (.lo files) or libtool libraries (.la) to add to library"
Which means that if your static lib does not have dependencies, then this flag is of no use in its Makefile.am.
Static Lib is Dependent (not stand-alone):
From the above quote from the documentation, it is now clear that the ***_LIBADD flag is used to list the libraries required to build your current library.
So this flag would be necessary in such a case.
And about ***_LDFLAGS: as mentioned in the documentation, the library_LDFLAGS variable is the place to list additional libtool linking flags for that library.
If your lib does not require such flags, then this can be ignored too. It is all about what you want your final result to be.
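For instance, a hypothetical Makefile.am sketch of the dependent case (libfoo and libbar are made-up names) might look like this:
# libbar is built on top of libfoo, so libfoo.la is listed in libbar's LIBADD
lib_LTLIBRARIES = libfoo.la libbar.la
libfoo_la_SOURCES = foo.c
libbar_la_SOURCES = bar.c
libbar_la_LIBADD = libfoo.la
# extra libtool linking flags for libbar go in LDFLAGS
libbar_la_LDFLAGS = -version-info 1:0:0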
A few more links for your reference:
LDFLAGS usage in autotools with libtool
What is the difference between LDADD and LIBADD?
I hope this EDIT covers what you're looking for.
PS: If you read my answer carefully and go through the documentation, you will find the same information there. Njoy. :)
I have a Makefile for a C++ Linux project:
MODE ?= dbg
DIR = ../../../../../somdir/$(MODE)
SRC_FILES = a.cpp b.cpp
H_FILES = a.h
LDFLAGS += -L$(DIR)/lib/linux '-Wl,-R$$ORIGIN'
CPPFLAGS = -I$(DIR)/include
LIBRARIES = -lsomeso
ifeq (rel, $(MODE))
CFLAGS = -Wall -g -DNDEBUG
else
CFLAGS = -Wall -ansi -pedantic -Wconversion -g -DDEBUG -D_DEBUG
endif
sample: $(SRC_FILES) $(H_FILES) Makefile
g++ $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) $(LIBRARIES) $(SRC_FILES) -o sample
When I run 'make' it builds the project with no errors.
But when I run the program it complains:
error while loading shared libraries: libsomeso.so: cannot open shared object file: No such file or directory
The path that I give in DIR goes to the folder where the shared object is held (relative to where the Makefile is placed), and if it were the wrong path, why didn't it complain during the make process?
Does someone know what I am missing?
Thanks
Matt
LDFLAGS += -L$(DIR)/lib/linux '-Wl,-R$$ORIGIN'
The above should be:
LDFLAGS += -L$(DIR)/lib/linux -Wl,-R$(DIR)/lib/linux '-Wl,-R$$ORIGIN'
That is, for each non-standard dynamic library location given with -L, a corresponding -Wl,-R should be specified. $ORIGIN is needed to locate dynamic libraries relative to the executable; I am not sure you need it here.
People often advise using LD_LIBRARY_PATH. This is bad advice, in my opinion, because it makes deployment more complicated.
When you run your application, the location of libsomeso.so should be in the LD_LIBRARY_PATH environment variable. Try running the program like this:
LD_LIBRARY_PATH="path_to_libsomeso_so:$LD_LIBRARY_PATH" myprogram
Here path_to_libsomeso_so is the full path to the directory where libsomeso.so is located, and myprogram is your program executable. Note that you should specify the path to the directory containing libsomeso.so, not to the libsomeso.so file itself.
The trouble is not at compile time; everything goes fine there. The problem is at runtime.
Indeed, your program has been linked with a shared object library. Therefore, at runtime, it needs to load this shared object file. During compilation, you told the compiler where this file was with the -L flag.
For the runtime, you have to set the LD_LIBRARY_PATH environment variable to point to the directory where your libsomeso.so file resides.
Alternatively, you can place this file in one of the standard directories where shared object files are searched for (/usr/local/lib, /usr/lib, /lib), but that is something you should only do for the final, distributed version of your library.
As Maxim Egorushkin said, LD_LIBRARY_PATH is a bad choice. Meanwhile, using the -L$(your lib path) -l$(your lib name) gcc/g++ arguments to link the shared library isn't a good choice either, because after building the exe you still have to tell it where the shared library directory is; by default, the executable only searches for shared libraries in /usr/lib or /usr/local/lib. You told the Makefile where the shared library was when building the executable, but running the executable is a different matter.
Linking a static library, however, doesn't have such a problem.
So the best way to deal with your problem is to change the way you link your custom shared library, like this:
DYNAMIC_LIB_DIR = ../lib   # your lib path; this is an example
OBJLIBS = xxx.so           # your lib name
gcc/g++ -o exe_name sourcefile/object_file $(DYNAMIC_LIB_DIR)/$(OBJLIBS)
Refresh that dynamic library cache!
After adding a custom, non-standard library to /usr/local/lib, first check that /usr/local/lib is listed under /etc/ld.so.conf.d/libc.conf.
Then, finish off with a dynamic link library cache refresh:
$ sudo ldconfig
I am running Windows 7 with gcc/g++ under Cygwin. What would be the Makefile format (and extension, I think it's .mk?) for compiling a set of .cpp (C++ source) and .h (header) files into a library (a .dll)? Say I have a variable set of files:
file1.cpp
file1.h
file2.cpp
file2.h
file3.cpp
file3.h
....
What would be the makefile format (and extension) for compiling these into such a library? (I'm very new to makefiles.) What would be the fastest way to do this?
The extension would be none at all, and the file is called Makefile (or makefile) if you want GNU Make to find it automatically.
GNU Make, at least, lets you rely on certain built-in variables that alone give you control over much of the build process for C/C++ files. These variables include CC, CPP, CFLAGS, CPPFLAGS, CXX, CXXFLAGS, and LDFLAGS. They control the switches that make passes to the C/C++ preprocessor, compiler, and linker (the program that in the end assembles your program).
GNU Make also includes a lot of implicit rules designed to let it automatically build programs from C/C++ source code, so you don't [always] have to write your own rules.
For instance, even without a makefile, if you try to run make foobar, GNU Make will first attempt to build foobar.o from foobar.c or foobar.cpp if it finds either, by invoking the appropriate compiler, and then will attempt to build foobar by assembling (incl. linking) its parts from system libraries and foobar.o. In short, GNU Make knows how to build the foobar program even without a makefile being present, thanks to implicit rules. You can see these rules by invoking make with the -p switch.
Some people like to rely on GNU Make's implicit rule database to have lean and short makefiles where only what is specific to their project is specified, while others may go as far as to disable the entire implicit rule database (using the -r switch) and take full control of the build process by specifying everything in their makefile(s). I won't comment on the superiority of either strategy; rest assured both work to some degree.
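For example, a lean makefile that leans on those implicit rules might contain little more than this (file names are placeholders; the recipe line starts with a TAB):
# The built-in %.o: %.cpp rule picks up CXX and CXXFLAGS automatically
CXX = g++
CXXFLAGS = -Wall -O2
# Only the final link step is spelled out explicitly
sample: file1.o file2.o file3.o
	$(CXX) $(LDFLAGS) $^ -o $@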
There are a lot of options you can set when building a dll, but here's a basic command that you could use if you were doing it from the command line:
g++ -shared -o mydll.dll file1.o file2.o file3.o
And here's a makefile (typically called Makefile) that will handle the whole build process:
# You will have to modify this line to list the actual files you use.
# You could set it to use all the "fileN" files that you have,
# but that's dangerous for a beginner.
FILES = file1 file2 file3
OBJECTS = $(addsuffix .o,$(FILES)) # This is "file1.o file2.o..."
# This is the rule it uses to assemble file1.o, file2.o... into mydll.dll
mydll.dll: $(OBJECTS)
	g++ -shared $^ -o $@ # The whitespace at the beginning of this line is a TAB.
# This is the rule it uses to compile fileN.cpp and fileN.h into fileN.o
$(OBJECTS): %.o : %.cpp %.h
	g++ -c $< -o $@ # Again, a TAB at the beginning.
Now to build mydll.dll, just type "make".
A couple of notes. If you just type "make" without specifying the makefile or the target (the thing to be built), Make will try to use the default makefile ("GNUmakefile", "makefile" or "Makefile") and the default target (the first one in the makefile, in this case mydll.dll).
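If you also want a target that exercises the DLL, you could add a rule along these lines (test.cpp and test.exe are hypothetical names; with the GNU toolchain you can link directly against the DLL file, and at runtime the DLL just needs to sit next to the executable or on PATH):
# Hypothetical test program linked directly against the freshly built DLL
test.exe: test.cpp mydll.dll
	g++ test.cpp mydll.dll -o test.exe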
I am building a .so library and was wondering: what is the difference between the -h and -o cc compiler options (using Sun Studio C++)?
Aren't they referring to the same thing, the name of the output file?
-o is the name of the file that will be written to disk by the compiler
-h is the name that will be recorded in ELF binaries that link against this file.
One common use is to provide library minor version numbers. For instance, if
you're creating the shared library libfoo, you might do:
cc -o libfoo.so.1.0 -h libfoo.so.1 *.o
ln -s libfoo.so.1.0 libfoo.so.1
ln -s libfoo.so.1 libfoo.so
Then if you compile your hello world app and link against it with
cc -o hello hello.c -lfoo
the ELF binary for hello will record a NEEDED entry for libfoo.so.1 (which you can
see by running elfdump -d hello).
Then when you need to add new functions later, you could change the -o value to
libfoo.so.1.1 but leave the -h at libfoo.so.1: all the programs you already built
against 1.0 will still try to load libfoo.so.1 at runtime, so they continue to work
without being rebuilt, but you'll see via ls that it's 1.1.
This is also sometimes used when building libraries in the same directory they're
used at runtime, if you don't have a separate installation directory or install
via a packaging system. To avoid crashing programs that are running when you
overwrite the library binary, and to avoid programs not being able to start when
you're in the middle of building, some Makefiles will do:
cc -o libfoo.so.1.new -h libfoo.so.1 *.o
rm libfoo.so.1 ; mv libfoo.so.1.new libfoo.so.1
(Makefiles built by the old Imake makefile generator from X commonly do this.)
They are referring to different names. Specifically, the -o option is the file's actual name, the one on the filesystem. The -h option sets the internal DT_SONAME in the final object file. This is the name by which the shared object is referenced internally by other modules. I believe it's also the name that you see when you run ldd on objects that link to it.
The -o option will name the output file, while the -h option will set an intrinsic name inside the library. This intrinsic name takes precedence over the file name when used by the dynamic loader and allows it to use predefined rules to pick the right library.
You can see what intrinsic name was recorded into a given library with that command:
elfdump -d xxx.so | grep SONAME
Have a look here for details:
http://docs.oracle.com/cd/E23824_01/html/819-0690/chapter4-97194.html
EDIT: I suppose I should clarify, in case it matters. I am on an AIX Unix box, so I am using VAC compilers, not GNU compilers.
End edit
I am pretty rusty in C/C++, so forgive me if this is a simple question.
I would like to take common functions out of a few of my C programs and put them in shared libraries or shared objects. If I was doing this in perl I would put my subs in a perl module and use that module when needed.
For the sake of an example, let's say I have this function:
int giveInteger()
{
return 1034;
}
Obviously this is not a real world example, but if I wanted to share that function, how would I proceed?
I'm pretty sure I have 2 options:
Put my shared function in a file, and have it compile with my main program at compile time. If I ever make changes to my shared function, I would have to recompile my main program.
Put my shared function in a file, and compile it as a shared library (if I have my terms correct), and have my main program link to that shared library. Any changes I make to my shared library (after compiling it) would be integrated into my main program at runtime without re-compiling my main program.
Am I correct on that thinking?
If so, how can I accomplish either/both of those methods? I've searched a lot and I seem to find information on how I could have my own program link to someone else's shared library, but not how to create my own shared functions and compile them in a way I can use them in my own program.
Thanks so much!
Brian
EDIT: Conclusion
Thanks everyone for your help! I thought I would add to this post what is working for me (for dynamic shared libraries on AIX) so that others can benefit:
I compile my shared functions:
xlc -c sharedFunctions.c -o sharedFunctions.o
Then make it a shared object:
xlc -qmkshrobj -qexpfile=exportlist sharedFunctions.o
xlc -G -o libsharedFunctions.so sharedFunctions.o -bE:exportlist
Then link it to another program:
xlc -brtl -o mainProgram mainProgram.c -L. -lsharedFunctions
And another comment helped me find this link, which also helped:
http://publib.boulder.ibm.com/infocenter/comphelp/v7v91/topic/com.ibm.vacpp7a.doc/proguide/ref/compile_library.htm
Thanks again to all who helped me out!
Yeah, you are correct. The first is called a static library, while the second is called a shared library, because the code is not bound to the executable at compile time but is bound anew every time your program is loaded.
Static library
Compile your library's code as follows:
gcc -c *.c
The -c tells the compiler not to link, and just leaves you with an object file for each .c file that was compiled. Now, archive them into one static library:
ar rcs libmystuff.a *.o
man ar will tell you what the rcs options mean. Now, libmystuff.a is an archive file (you can open it with some zip-file viewers) which contains those object files, together with an index of symbols for each object file. You can link it to your program:
gcc *.c libmystuff.a -o myprogram
Now, your program is ready. Note that the order in which static libraries appear in the command matters. See my Link order answer.
Shared library
For a shared library, you will create your library with
gcc -shared -fPIC -o libmystuff.so *.c
That's all it takes: libmystuff.so is now a shared object file. If you want to link a program to it, you have to put it into a directory that is listed in the /etc/ld.so.conf file, or that is given by the -L switch to GCC, or listed in the LD_LIBRARY_PATH variable. When linking, you drop the lib prefix and the .so suffix from the library name you pass to gcc.
gcc -L. -lmystuff *.c -o myprogram
Internally, gcc will just pass your arguments on to the GNU linker. You can see what arguments it passes by using the -### option: gcc will print the exact arguments given to each subprocess.
For details about the linking process (how some stuff is done internally), view my Linux GCC linker answer.
You've got a third option. In general, your C++ compiler should be able to link C routines. The necessary options may vary from compiler to compiler, so read your compiler's fine manual, but basically you should be able to compile with g++ as here:
$ g++ -o myapp myapp.cpp myfunc.c giveint.c
... or compile separately
$ gcc -c myfunc.c
$ gcc -c giveint.c
$ g++ -c myapp.cpp
$ g++ -o myapp myapp.o myfunc.o giveint.o
You also need to include your declaration of the functions; you do that in C++ as
extern "C" {
int myfunc(int,int);
int giveInteger(void);
}
You need to distinguish between recompiling and relinking.
If you put giveInteger() into a separate (archive) library, and then modify it later, you'll (obviously) need to recompile the source file in which it is defined, and relink all programs that use it; but you will not need to recompile such programs [1].
For a shared library, you'll need to recompile and relink the library; but you will not have to relink or recompile any of the programs which use it.
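As a rough illustration of the archive case, here is a Makefile sketch with a generic Unix toolchain (file and target names are hypothetical, and the recipe lines start with a TAB):
# Editing giveInteger.c recompiles giveInteger.o, rebuilds libgive.a, and
# relinks myapp; main.o itself is not recompiled.
libgive.a: giveInteger.o
	ar rcs libgive.a giveInteger.o
myapp: main.o libgive.a
	$(CC) -o myapp main.o libgive.a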
Building C++ shared libraries on AIX used to be complicated; you needed to use the makeC++SharedLib shell script. But with VAC 5.0 and 6.0 it became quite easy. I believe all you need to do is [2]:
xlC -G -o shr.o giveInteger.cc
xlC -o myapp main.cc shr.o
[1] If you write a correct Makefile (which is recommended practice), all of this will happen automatically when you type make.
[2] There is a certain feature of AIX which may complicate matters: by default, shared libraries are loaded into memory and "stick" there until the next reboot. So you may rebuild shr.o, rerun the program, and observe the "old" version of the library being executed. To prevent this, a common practice is to make shr.o world-unreadable:
chmod 0750 shr.o