ocamlmktop with oasis - ocaml

I am having trouble adding a library to ocamlmktop.
I have a directory com, with an object file com/com.cma.
If I run ocamlmktop com.cma -o top within the com directory, then the resulting executable top seems to have the library; i.e., I can enter "Com.foo;;" and it will give the type signature of foo in the module Com.
However, if I run ocamlmktop com/com.cma -o top within the directory above com, then the resulting executable does not seem to have the library; i.e., it responds to "Com.foo;;" with "Error: Unbound module Com".
Is there a way to include libraries from different folders, or do I need to put all the .cma files in the same folder?
Also, I'm using the OASIS build system; can I inform OASIS that I want a toplevel with these libraries?
Edit:
I've found a partial solution: ocamlc -pack a/a.cmo b/b.cmo -o everything.cmo, and then ocamlmktop everything.cmo -o top; however, this requires duplicating all the libraries and forces them to be submodules of a single supermodule.

The reason that you cannot use the toplevel from the directory above is that toplevels do not include interface files (.cmi), and the toplevel needs to find them on disk when the user accesses some module. So either launch the toplevel with the -I com switch, or issue #directory "com";; after it starts.
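For example, assuming com/com.cma and the matching .cmi files stay in the com directory, either of these should work from the parent directory:
ocamlmktop com/com.cma -o top
./top -I com
or, once the toplevel is already running:
#directory "com";;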
NB OASIS should support building toplevels natively in the next release (0.4.0).


Issues in C++ compilation for an embedded system

I have a few related questions about my issues with compilation for an embedded system. My questions are not only about HOW to do something, but more about WHY, because I have solutions for my problems (but maybe there are better ones?) yet have no idea why some things work in some conditions and not in others. I have already spent some time with this, but until yesterday I was doing things somewhat blindly, by trial and error, without knowing what I was doing. Time to stop that! Please, help.
Scenario
I want to develop an application for Xilinx's Zynq ARM processor, on a Zedboard. The app will involve multithreading, some audio manipulation, and an HTTP server, so I will need the pthread, alsa, sndfile and microhttpd libraries. I created a rootfs with Yocto. In the original local.conf file I added/modified these lines:
BB_NUMBER_THREADS ?= "${@oe.utils.cpu_count()}"
PARALLEL_MAKE ?= "-j ${@oe.utils.cpu_count()}"
MACHINE ?= "zedboard-zynq7"
PACKAGE_CLASSES ?= "package_deb"
EXTRA_IMAGE_FEATURES = "debug-tweaks eclipse-debug"
IMAGE_INSTALL_append = "libgcc alsa-utils mpg123 libstdc++ sthttpd libmicrohttpd libsndfile1"
LICENSE_FLAGS_WHITELIST = "commercial_mpg123"
I also had to add some additional layers to bblayers.conf (and of course downloaded them):
meta-xilinx
meta-multimedia (from meta-openembedded)
meta-oe (from meta-openembedded)
meta-webserver (from meta-openembedded)
Lastly, I generated core-image-minimal with bitbake.
This, together with Linux kernel, and other stuff compiled separately, boots and works fine.
Problems
1. Simple app with this rootfs
It is an app for Zynq, so I use XSDK, which is the SDK from Xilinx, based on Eclipse. I created a new Application project. In the dialog window I chose Linux as the platform, C++ as the language, and I provided the path to my unpacked rootfs (exactly the one that the system boots with, via NFS). My rootfs path is /home/stas/ZedboardPetalinuxFS (it is not Petalinux, I just used to use it, and this folder name is still the same). This sets the proper paths for library and header searches in the rootfs.
I started with something very simple:
#include <pthread.h>

int main()
{
    int i;
    i = 1;
    return 0;
}
I also added the pthread library for the linker (in the Eclipse settings). The linking command at this point:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" -o "test.elf" ./src/main.o -lpthread
At this point it compiles. But it stops working when I add the sndfile header:
#include <sndfile.h>
This is reasonable, because this rootfs does not have all the headers. I need to add another path to search for headers, so I added the path inside the Yocto tmp folder that has all the headers that were needed for building the rootfs. After adding it, it compiles successfully again. But the problems started when I added the sndfile library for linking. Here is the linking command and the error:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" -o "test.elf" ./src/main.o -lpthread -lsndfile
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find -lsndfile
I looked in usr/lib to check if libsndfile.so is there, and I found only libsndfile.so.1 and libsndfile.so.1.27. But that is also the case for pthread, and the linker does not complain about that. I decided to create libsndfile.so by hand (I symlinked it to libsndfile.so.1). The linker stopped complaining about it, but started complaining about its dependencies. So I also created .so files for all the dependencies, and their dependencies, and added them for linking. Then it succeeded. In the end, the linking command looked like this:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" -o "test.elf" ./src/main.o -lpthread -lvorbisenc -lvorbis -logg -lFLAC -lsndfile
So here goes the first question – why did I not need a .so file for pthread, but needed it for all the other libraries? Or more generally – when do I need a .so file, and when is a .so.X file enough?
2. Simple app - another approach
After the first try, I thought I should make another image, this time more suitable for development. Luckily, in Yocto this is quite easy – I just had to modify one line:
EXTRA_IMAGE_FEATURES = "debug-tweaks eclipse-debug dev-pkgs"
The dev-pkgs option adds -dev packages for all installed packages.
So now I have a rootfs with all the needed headers, and .so files pointing where they should.
Before compiling, I removed the unnecessary include path, leaving only the one from the rootfs, and removed all the libraries except pthread and sndfile. But then I got new errors:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" -o "test.elf" ./src/main.o -lsndfile -lpthread
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find /lib/libpthread.so.0
makefile:48: recipe for target 'test.elf' failed
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find /usr/lib/libpthread_nonshared.a
I spotted that it looks for libraries in my root folder. A quick search on Google (and SO :)) told me that I should set the --sysroot variable. So I added it to the Eclipse options (in the Miscellaneous tab of the linker options) like this:
--sysroot=/home/stas/ZedboardPetalinuxFS
So now the linker command looked like this:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" --sysroot=/home/stas/ZedboardPetalinuxFS -o "test.elf" ./src/main.o -lsndfile -lpthread
And it all succeeded! I also wrote a simple example that uses pthreads and sndfile, and it also worked. But WHY? This leads me to the second question:
Why do I need the --sysroot option in this case? When do I need to use this option in general? And why this time didn't I have to add all the dependencies to the linking command?
3. Another idea
At this point, I had an idea to check what would happen if I added the --sysroot option while the rootfs was populated with the old, non-development image. But this gave me new errors:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" --sysroot=/home/stas/ZedboardPetalinuxFS -o "test.elf" ./src/main.o -lpthread -lvorbisenc -lvorbis -logg -lFLAC -lsndfile
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find crt1.o: No such file or directory
makefile:48: recipe for target 'test.elf' failed
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find crti.o: No such file or directory
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find -lpthread
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find -lm
So the third question – what do these errors mean?
Thanks very much in advance!
"why I did not needed .so file for pthread, but needed it for all
other libraries?"
Actually you do need the pthread .so file. You included pthread.h but didn't link with -lpthread, so it's normal that you don't see any linker errors.
"when do I need .so file, and when .so.X file is enough"
When you give the "-lNAME" parameter to g++, the compiler tells the linker to find libNAME.so within the library search paths. Since there may exist multiple versions of the same library (libNAME.so.1, libNAME.so.1.20), the *.so file is a symlink to the desired actual library file. (See versioning of shared objects and the ld man pages.)
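For illustration, a typical on-disk layout looks like this (the libfoo name is made up):
libfoo.so.1.2.0                    # the real library file
libfoo.so.1 -> libfoo.so.1.2.0     # soname link, used by the runtime loader
libfoo.so -> libfoo.so.1           # development link, what -lfoo resolves to at link time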
"Why do I need --sysroot option in this case? When do I need to use this option in general? And why this time I didn't have to add all the dependencies to linking command?"
The "dev-pkgs" in EXTRA_IMAGE_FEATURES changes your sysroot implicitly to let you link against the dev packages(yoctoproject image-features). That's why you need -sysroot option. You generally need this option when cross compiling to provide a root for standard search paths for headers and libraries. You didn't need it because you didn't have dev-pkgs image feature that changes your sysroot
"So third question – what does this errors mean?"
Even the most basic hello world code gets linked with the standard C library (unless you specify otherwise). libm.so, libpthread.so and crt1.o are parts of the C library and come with the libc dev package. So the linker can't see the standard library directories when it looks inside your old sysroot.
why did I not need a .so file for pthread, but needed it for all the other libraries?
A cross compiler will normally come with a C Runtime (including pthread), typically in a directory that is part of the cross compiler installation.
The linker has built-in search paths for libraries. These are relative to the sysroot, which by default is set to search the cross compiler's own included target C runtime. If you added any -L options it would search those first and then move on to these pre-defined directories.
When you linked against pthread, it would have found at least libpthread.a in the cross compiler's library directory.
Or more generally – when do I need a .so file, and when is a .so.X file enough?
Shared libraries on Linux typically have a major and a minor version number. Libraries are ABI compatible between different minor versions with the same major version, but not between major versions. Sometimes there are three levels of versions, but the principle is similar.
When installing libraries it is common to install the actual file with the full name, eg. libmy.so.1.2, then provide symlinks to libmy.so.1 and libmy.so.
If you are linking an application that can work with any version of the library, then you would just specify the name, e.g. -lmy. In that case you would need a symlink from libmy.so to libmy.so.1.
If you required a specific version you would put -l:libmy.so.1. The ':' indicates a literal file name.
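For example (libmy is a placeholder name):
g++ main.o -L/path/to/libs -lmy -o app              # resolves libmy.so (or libmy.a) via the search path
g++ main.o -L/path/to/libs -l:libmy.so.1 -o app     # links against that exact file name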
Linker scripts may affect things and may result in specific versions being selected even when you do specify the short name.
Why do I need the --sysroot option in this case? When do I need to use this option in general?
What --sysroot does is prepend the given path onto all the search directories which would normally be used to search for includes and libraries. It is most useful when cross compiling (as you are doing now) to get the compiler and the linker to search inside the target root instead of the build host's own root.
If you have specified a sysroot you probably do not need to specify include paths via -I or linker paths via -L, assuming that the files are within their normal spots inside your target root.
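For instance, with the dev-pkgs image in place, a link line along these lines should be enough on its own, with no explicit -I or -L, since the standard target paths are derived from the sysroot:
arm-linux-gnueabihf-g++ --sysroot=/home/stas/ZedboardPetalinuxFS ./src/main.o -lsndfile -lpthread -o test.elf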
And why this time didn't I have to add all the dependencies to the linking command?
One possible scenario is that the first time, sndfile was statically rather than dynamically linked. This would happen if your first root image had only libsndfile.a in the lib dir, or elsewhere on the search path. To then satisfy the requirements of libsndfile.a you would also need to link the other libraries.
When linking against sndfile.so the dependencies will automatically get loaded via the dynamic linking process.
That's just a working theory at present.
So the third question – what do these errors mean?
They mean it cannot find even the C runtime library to link.
As described for the first question, it was previously finding the C runtime in the pre-defined search path (relative to the predefined sysroot) which located the C runtime supplied by the cross compiler.
You disturbed this by supplying your own sysroot. It was now only searching the target root. Since this target root filesystem did not have the development libraries installed, there was no C runtime there to find.
You are doing several things wrong:
It looks like you are not using environment variables, but calling the cross-compiler directly. So, instead of compiling with arm-linux-gnueabihf-g++ ..., you should do $CXX .... CXX is the environment variable set by the Yocto environment script to set up cross compilation. Using CXX, you do not need to pass --sysroot manually.
You should not link directly to the pthread library with -lpthread. You should use -pthread instead.
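For example (the SDK installation path and script name below are placeholders; they depend on how your Yocto SDK was generated):
. /opt/poky/2.2/environment-setup-cortexa9hf-neon-poky-linux-gnueabi
$CXX ./src/main.o -o test.elf -pthread -lsndfile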

Is there an equivalent of -I (capital I) for object files in g++?

I have several object files coming from different directories (they are stored near the corresponding source that generated them). Is there a way that given this directory structure
Root
    main.o
Root/Some_long_path
    object_1.o
    object_2.o
I could run a command like this
g++ -Wall main.o -ISome_long_path object_1.o object_2.o -o app
So that I don't have to put the full path in front of every object file. What would go instead of the -I command?
I am using gcc version 4.8.3 (from Cygwin installation).
So that I don't have to put the full path in front of every object file. What would go instead of the -I command?
There is no gcc option like -I to add a directory to an object "search path", because there is no search path for objects at all. But there is a search path for libraries (usually named lib*.a for static libraries and lib*.so for shared libraries in the Unix world).
The Directory Options section of the gcc manual lists only the -I option for include paths and the -L option for library paths. There is no object path option:
https://gcc.gnu.org/onlinedocs/gcc/Directory-Options.html#Directory-Options
And the Link Options section of the gcc manual mentions only the -L option (near the -l description):
https://gcc.gnu.org/onlinedocs/gcc/Link-Options.html#Link-Options
What you can use:
shell variables or environment variables (in bash: OBJ_DIR=./path/to/dir then g++ ... $OBJ_DIR/obj1.o $OBJ_DIR/obj2.o)
Packing several objects from a single directory into a single static library libSome_Long_Component_Name.a (this type of library is just an archive of several *.o object files plus some extra info, made with ar rcs); then you can use g++ ... -Ldir/ -lSome_Long_Component_Name (see the sketch after this list)
Makefiles and some variant of the make utility (GNU make for Linux and Cygwin, nmake for Windows's MSVC; you can start from the GNU make manual: http://www.gnu.org/software/make/manual/make.html#Simple-Makefile, then "2.4 Variables Make Makefiles Simpler", then "4.3 Types of Prerequisites" – with an example of the addprefix function to make make find objects in some objdir. There is also the VPATH special variable, which instructs make to search for sources and objects in several directories; but then all objects in the project should have different names)
Some IDE with support for Unix projects (e.g. Code::Blocks or another C/C++ IDE); they usually manage all the sources of your project and are able to generate Makefiles for the project
Some more modern build system instead of make, like CMake or SCons (or qmake if you use Qt)
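A minimal sketch of the static-library option from the list above, using the directory and object names from the question:
ar rcs Some_long_path/libSome_Long_Component_Name.a Some_long_path/object_1.o Some_long_path/object_2.o
g++ -Wall main.o -LSome_long_path -lSome_Long_Component_Name -o app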

Autotools: Including a prebuilt 3rd party library

I'm currently working to upgrade a set of C++ binaries that each use their own set of Makefiles to something more modern based on Autotools. However, I can't figure out how to include a third-party library (e.g. the Oracle Instant Client) in the build/packaging process.
Is this something really simple that I've missed?
Edit to add more detail
My current build environment looks like the following:
/src
    /lib
        /libfoo
            ... source and header files
            Makefile
        /oci    # Oracle Instant Client
            ... header and shared libraries
            Makefile
    /bin
        /bar
            ... source and header files
            Makefile
    Makefile
/build
    /bin
    /lib
build.sh
Today the top level build.sh does the following steps:
Runs each lib's Makefile and copies the output to /build/lib
Runs each binary's Makefile and copies the output to /build/bin
Each Makefile has a set of hardcoded paths to the various sibling directories. Needless to say, this has become a nightmare to maintain. I have started testing out autotools, but where I am stuck is figuring out the equivalent of copying /src/lib/oci/*.so to /build/lib for compile-time linking and bundling into a distribution.
I figured out how to make this happen.
First I switched to a non-recursive make.
Next I made the following changes to configure.ac, as per this page http://www.openismus.com/documents/linux/using_libraries/using_libraries
AC_ARG_WITH([oci-include-path],
  [AS_HELP_STRING([--with-oci-include-path],
    [location of the oci headers, defaults to lib/oci])],
  [OCI_CFLAGS="-I$withval"],
  [OCI_CFLAGS="-Ilib/oci"])
AC_SUBST([OCI_CFLAGS])

AC_ARG_WITH([oci-lib-path],
  [AS_HELP_STRING([--with-oci-lib-path],
    [location of the oci libraries, defaults to lib/oci])],
  [OCI_LIBS="-L$withval -lclntsh -lnnz11"],
  [OCI_LIBS='-L./lib/oci -lclntsh -lnnz11'])
AC_SUBST([OCI_LIBS])
In the Makefile.am you then use the following lines (assuming a binary named foo)
foo_CPPFLAGS = $(OCI_CFLAGS)
foo_LDADD = libnavycommon.la $(OCI_LIBS)
ocidir = $(libdir)
oci_DATA = lib/oci/libclntsh.so.11.1 \
           lib/oci/libnnz11.so \
           lib/oci/libocci.so.11.1 \
           lib/oci/libociicus.so \
           lib/oci/libocijdbc11.so
The autotools are not a package management system, and attempting to put that type of functionality in is a bad idea. Rather than incorporating the third party library into your distribution, you should simply have the configure script check for its existence and abort if the required library is not available. The onus is on the user to satisfy the dependency. You can then release a binary package that will allow the user to use the package management system to simplify dependency resolution.
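For illustration, a configure.ac fragment along these lines implements the "check and abort" approach (the clntsh library and the OCIEnvCreate symbol follow the Instant Client example above; treat them as placeholders for whatever your code actually needs):
AC_CHECK_LIB([clntsh], [OCIEnvCreate],
  [],
  [AC_MSG_ERROR([Oracle Instant Client (libclntsh) not found])])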

Using "ocamlfind" to make the OCaml compiler and toplevel find (project specific) libraries

I'm trying to use ocamlfind with both the OCaml compiler and toplevel. From what I understood, I need to place the required libraries in the _tags file at the root of my project, so that the ocamlfind tool will take care of loading them - allowing me to open them in my modules like so :
open Sdl
open Sdlvideo
open Str
Currently, my _tags file looks like this:
<*>: pkg_sdl,pkg_str
I can apparently launch the ocamlfind command with the ocamlc or ocamlopt argument, provided I want to compile my project, but I did not see an option to launch the toplevel in the same manner. Is there any way to do this (something like "ocamlfind ocaml")?
I also don't know how to place my project-specific modules in the _tags file: imagine I have a module named Land. I am currently using the #use "land.ml" directive to open the file and load the module, but it has been suggested that this is not good practice. What syntax should I use in _tags to specify that it should be loaded by ocamlfind (considering land.ml is not in the ocamlfind search path)?
Thank you,
Charlie P.
Edit: According to the first answer to this post, the _tags file is not to be used with ocamlfind. The questions above still stand; there is just a new one for the list: what is the correct way to specify the libraries to ocamlfind?
try this:
$ cat >> .ocamlinit
#use "topfind";;
#require "sdl";;
#require "sdlvideo";;
#require "str";;
open Sdl;;
open Sdlvideo;;
open Str;;
.ocamlinit is sourced from the current directory, falling back to $HOME/.ocamlinit. You can explicitly override it via ocaml -init <filename>.
One should distinguish between ocamlfind packages, module names and file names. Source code references only module names. Modules are provided by .cma, .cmo, .cmx (and .cmi) files. Ocamlfind packages are named collections of such files (binaries built from some OCaml library sources). Ocamlfind is not strictly necessary to build a project – you could just specify the paths to all used libraries via -I. But if the project is distributed in source form, other people will have trouble building it because the libraries it uses are installed in different places. So one seeks a way to specify the names of the used "third party code pieces" and an external tool to resolve those names to actual paths. This tool is ocamlfind.
First, find the ocamlfind package that provides the modules you need (in this case Sdl and Sdlvideo); just run ocamlfind list – most probably the package is named sdl.
Compile with ocamlfind ocamlc -package <package name> -linkpkg source.ml -o program
Alternatively, use ocamlfind to extract the path to the .cma file (and others) provided by the package (ocamlfind query <package name>) and use this path with your build tool (plain Makefile, ocamlbuild, etc.).
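For example (the package name sdl is a guess here; check the output of ocamlfind list for the exact name):
ocamlfind list | grep -i sdl                             # find the exact package name
ocamlfind query sdl                                      # print the directory holding the package's .cma files
ocamlfind ocamlc -package sdl -linkpkg main.ml -o main   # compile and link against the package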
Moreover, ocamlfind is intended to take away the burden of remembering how the actual .cma files are named, which matching compiler options are required, and in what order the packages depend on each other. One just needs to specify the package name.
Keep in mind that there is no strict one-to-one formal relationship between a library name (a purely human-targeted name, e.g. ocaml-extlib), module names (ExtLib, Enum, IO, etc.), file names (extLib.cma) and the ocamlfind package name (extlib), though for convenience package maintainers and library authors usually choose guessable names :)
The _tags file is not for ocamlfind, but for ocamlbuild.
Example:
ocamlbuild land.native
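If you also want ocamlbuild to pull in the ocamlfind packages, a common setup (assuming a reasonably recent ocamlbuild with built-in ocamlfind support) is to invoke it with -use-ocamlfind and tag the packages in _tags:
ocamlbuild -use-ocamlfind land.native
with a _tags file containing, for instance:
<*>: package(sdl), package(str)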

How do I link against libtool static la library?

I have compiled this library successfully. It generates a libcds2.la file that I am trying to link into my own project. I have all files (including the .h file) in the same directory as my project file. When I try to link and use the functions of said library, using:
g++ -o test -I/opt/include/ -L/opt/lib/ -lcds2 libcdsNoptrs.cpp util.cpp
comes back with
./test: error while loading shared libraries: libcds2.so.2:
cannot open shared object file: No such file or directory
whatever that is. But the point is that most of the time it just doesn't recognize the library. I assume I'm doing something wrong with the file paths, but can't figure it out. My test file is a C++ file including #include "libcds2/array.h", and everything is installed in /opt/lib and /opt/include – ugly, I know, but that's what the Makefile generated.
Any pointers?
The libtool .la file is a metadata file. After building the cds2 library, it's expected that libtool will also be used in 'link' mode to build any of the package's tests, etc.
Typically, in the directory where you find the .la file, you will find the .a and .so under the .libs subdirectory. The .libs/libcds2.a file will be found there, provided configure was given the --enable-static option (or it is enabled by default). But my understanding is that you've installed the package in /opt:
g++ -I/opt/include/ libcdsNoptrs.cpp util.cpp -o test /opt/lib/libcds2.a
Otherwise, if libcds2 isn't installed, just supply a path to: .../libcds2/lib/.libs/libcds2.a
Unless you want to use libtool in --link mode with -static to handle everything. But learning the advantages of libtool is usually an exercise for a rainy day :)
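A rough sketch of that approach, assuming the installed libcds2.la sits in /opt/lib (here -static tells libtool to prefer the static form of the libtool libraries):
g++ -c -I/opt/include libcdsNoptrs.cpp util.cpp
libtool --mode=link g++ -static -o test libcdsNoptrs.o util.o /opt/lib/libcds2.la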