I need to set LD_LIBRARY_PATH for an application using native DLLs. The reason is that I need to import two native DLLs, one depending on the other. The JVM's -Djava.library.path only resolves the first library; its dependency is resolved by the dynamic loader, whose policy is governed by LD_LIBRARY_PATH. I use fork in test := true and fork in run := true, and I set javaOptions ++= Seq(s"-Djava.library.path=...").
How do I set LD_LIBRARY_PATH? Meta-question: how do I navigate sbt APIs so I can learn how to set LD_LIBRARY_PATH on my own?
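For reference, here is a minimal build.sbt sketch of the setup described above; the native directory is a placeholder, and sbt's envVars setting (which is applied to forked JVMs) is included only as one candidate place for LD_LIBRARY_PATH:

fork in Test := true
fork in run := true

// Only helps the JVM resolve the first DLL, as described above.
javaOptions ++= Seq("-Djava.library.path=/path/to/native")

// envVars is passed to forked test/run JVMs; shown here as a possible
// spot for LD_LIBRARY_PATH (the path is a placeholder).
envVars := Map("LD_LIBRARY_PATH" -> "/path/to/native")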
Related
I've been having trouble with the QCoreApplication::addLibraryPath(const QString &path) method on Windows.
I've been trying to use it to add directories where the application should look for DLLs that I'm loading dynamically with QLibrary.
I soon realized that this was not the right way to go, and I now use a putenv approach to modify my environment variables directly.
Still, I don't understand what exactly the addLibraryPath method is supposed to be used for.
I think the Qt documentation is not clear enough on this topic.
There are (at least) two sorts of libs/DLLs:
Essential libs/DLLs that are already needed at program start (like Qt5Core.dll).
"Functionality libs" like the Qt plugins and third-party stuff, which can be loaded later.
It is not obvious (at least to me) which DLL belongs to which sort. That makes it nasty to find out which ones you can move into subfolders and point your application to via addLibraryPath().
For me the following solution worked:
Use windeployqt to find most of the dependencies (my app's executable is in a "bin" folder below the project folder):
c:\Qt\Qt5.3.2\5.3\mingw482_32\bin\windeployqt.exe ..\bin\myapp.exe --release --force --compiler-runtime -libdir ..\bin -dir ..\bin\plugins
This puts the "sort 1" libs into the app folder and the "sort 2" libs into a plugins subfolder.
Additionally, the installer needs to set a QT_PLUGIN_PATH environment variable in the registry so the app can find the plugins. I wasted hours finding out that setting this path with addLibraryPath() at runtime simply does not work; a qt.conf file did not seem to work either. The only alternative for me is setting the environment variable in a .bat file, which is essentially the same as the registry setting (a sketch of such a .bat file follows the registry entry below).
Here's the registry key (in inno setup syntax):
Root: HKLM; Subkey: "SYSTEM\CurrentControlSet\Control\Session Manager\Environment"; ValueType: string; ValueName: "QT_PLUGIN_PATH"; ValueData: "lib"
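For completeness, the .bat alternative mentioned above might look roughly like this (assuming the bat file sits next to the executable and the plugins folder produced by windeployqt above; names are only examples):

@echo off
rem Point Qt's plugin search at the plugins subfolder for this session only,
rem then start the application.
set QT_PLUGIN_PATH=%~dp0plugins
start "" "%~dp0myapp.exe"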
Another annoying thing is that some libs are not identified by windeployqt. These are mainly the compiler redistributables, which vary with the compiler you use; others depend on the functions you use and are somehow not processed by windeployqt. This is not obviously documented (at least I didn't see it) and is not easy to understand either. For my app, these are the following compiler redists and some database-related libs:
libeay32.dll
libgcc_s_dw2-1.dll
libintl.dll
libpq.dll
libstdc++-6.dll
libwinpthread-1.dll
Dependency Walker is often suggested as the way to find this out. For me that didn't work either: not all of the libs listed above showed up, yet the app won't run without them. Maybe that's because those libs are only loaded under special circumstances?
addLibraryPath adds a path to the ones that the application will search when dynamically loading libraries.
From the Qt documentation about QCoreApplication::libraryPaths():
This list will include the installation directory for plugins if it exists (the default installation directory for plugins is INSTALL/plugins, where INSTALL is the directory where Qt was installed). The directory of the application executable (NOT the working directory) is always added, as well as the colon separated entries of the QT_PLUGIN_PATH environment variable.
The Qt documentation also states that:
An application has an applicationDirPath() and an applicationFilePath(). Library paths (see QLibrary) can be retrieved with libraryPaths() and manipulated by setLibraryPaths(), addLibraryPath(), and removeLibraryPath().
So it seems you can add the path for QLibrary with addLibraryPath.
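A minimal sketch of what that would look like in code, with a hypothetical directory and library name; whether QLibrary actually consults these paths is exactly what the question above disputes:

#include <QCoreApplication>
#include <QLibrary>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Hypothetical folder holding the DLLs that are loaded at runtime.
    QCoreApplication::addLibraryPath("C:/myapp/extra-dlls");

    QLibrary lib("mydll");          // no path, no suffix
    if (!lib.load())
        qDebug() << lib.errorString();

    return 0;
}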
If you have an application that relies on another app being installed, you ideally want your installer to find that dependency's path automatically. On Windows you can use the registry, but what about Mac/Linux? In this particular case it's a C++ application, if that makes a difference.
If you distribute your application through any of the common package managers on Linux (apt, yum), you can declare the other application as a dependency.
If you go down the route of custom install scripts, you need to resort to some kind of hackery: either find out which package manager is in use on the system and query it (which can fail if the other application was installed without the package manager), or try something like which required_app.
Go for the first, if you want to do it right.
On Mac OS X, if you're looking for an application that's packaged in a typical .app bundle, you can use Spotlight to find it by its bundle ID with the command-line utility mdfind(1). For example, to find out whether Firefox is installed (and where), run this command:
mdfind 'kMDItemCFBundleIdentifier == org.mozilla.firefox'
Generally, on UNIX systems you can expect all programs to reside in directories listed in $PATH, instead of being scattered across a hodge-podge of oddly named and partially localized directories. So essentially you don't need to find any dependency path: you just call the other "app" (program) via execvp, and libc takes care of walking the entries of $PATH and finding the executable.
In the classic UNIX model, you don't check anything in an installer, but just check at runtime whether an executable is available (with which, for example) or not.
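As a sketch of that "just call it and see" model (the program name is hypothetical):

#include <unistd.h>
#include <cstdio>
#include <cstdlib>

int main()
{
    // execvp() walks $PATH for us; if it returns at all, the program
    // could not be found or started.
    char *const args[] = { const_cast<char *>("required_app"), nullptr };
    execvp("required_app", args);
    std::perror("required_app is not available");
    return EXIT_FAILURE;
}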
The equivalent of a Windows installer is the Linux package manager. The package manager handles dependencies and installs them if they are not already present on the system. The dependency information for an application is stored within the package file. Each distribution has its own package manager, but the concept is the same.
There are plenty of resources online for specifics about a Package Manager. However, if you would like to get an overview in comparison with a Windows Installer, check out application management in GNU/Linux for Windows users.
I am trying to keep my project self-contained, with all major 3rd party library dependencies built and referenced within the project repository. The main ocaml portions of my project rely on ocamlbuild.
But for complex packages like Batteries Included, there seems to be a strong expectation that they be linked into a project via ocamlfind, and ocamlfind seems to assume that packages are installed globally. (I realize it allows environment variables and its configuration file to point to alternate locations, but it still fundamentally seems built around the assumption that packages are configured globally; it has no equivalent of -I or -L flags to dynamically extend the search path for packages, for example. It may be possible to set environment variables to override the ocamlfind configuration and search a project-local tree, but that is much more awkward than plain command-line arguments, and it also seems hard to do without simultaneously losing discoverability of the main system packages in the primary site-lib, which may also be needed.)
What is a sane strategy for building and building against nontrivial 3rd party packages within a project-local tree for a project using ocamlbuild?
Using environment variables (or a separate findlib.conf) is the way to go, and it's easy. It doesn't require removing discoverability of global packages; see the reference manual for path and destdir in findlib.conf (the OCAMLPATH and OCAMLFIND_DESTDIR environment variables, respectively).
Basically, you set destdir to a local path when installing project-local packages, and prepend that path to path when using them (don't forget to create stublibs in destdir, and add it to ld.conf in the stdlib if you are building bytecode binaries).
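A rough sketch of how that looks on the command line; the vendor/lib prefix and the package name are made up:

# Install a project-local package into the local tree.
LOCAL="$PWD/vendor/lib"
mkdir -p "$LOCAL/stublibs"
OCAMLFIND_DESTDIR="$LOCAL" ocamlfind install mypkg META mypkg.cma mypkg.cmxa mypkg.a

# Build against it: prepend the local tree; the global site-lib stays visible.
export OCAMLPATH="$LOCAL${OCAMLPATH:+:$OCAMLPATH}"
ocamlbuild -use-ocamlfind main.native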
PS I think this is the approach used in ocsigen-bundler.
Please tell me if you experience any problems (because I am interested in using this same approach too).
Imagine an overall project with several components:
basic
io
web
app-a
app-b
app-c
Now, let's say web depends on io which depends on basic, and all those things are in one repo and have a CMakeLists.txt to build them as shared libraries.
How should I set things up so that I can build the three apps, if each of them is optional and may not be present at build time?
One idea is to have an empty "apps" directory in the main repo and we can clone whichever app repos we want into that. Our main CMakeLists.txt file can use GLOB to find all the app directories and build them (not knowing in advance how many there will be). Issues with this approach include:
Apparently CMake doesn't re-glob when you just say make, so if you add a new app you must run cmake again.
It imposes a specific structure on the person doing the build.
It's not obvious how one could make two clones of a single app and build them both separately against the same library build.
The general concept is like a traditional recursive CMake project, but where the lower-level modules don't necessarily know in advance which higher-level ones will be using them. Yet, I don't want to require the user to install the lower-level libraries in a fixed location (e.g. /usr/local/lib). I do however want a single invocation of make to notice changed dependencies across the entire project, so that if I'm building an app but have changed one of the low-level libraries, everything will recompile appropriately.
My first thought was to use the CMake import/export target feature.
Have a CMakeLists.txt for basic, io, and web, plus one CMakeLists.txt that references them. You can then use the CMake export feature to export those targets, and the application projects can import them.
When you build the library project first, the application projects should be able to find the compiled libraries automatically (without the libraries having to be installed to /usr/local/lib); otherwise you can always set the appropriate CMake variable to point at the correct directory.
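A rough sketch of that export/import wiring (target names and the variable pointing at the library build tree are made up):

# libraries/CMakeLists.txt
add_library(basic SHARED basic.cpp)
add_library(io    SHARED io.cpp)
add_library(web   SHARED web.cpp)
target_link_libraries(io  basic)
target_link_libraries(web io)
# Writes an import script that the app builds can include.
export(TARGETS basic io web FILE "${CMAKE_BINARY_DIR}/MyLibsTargets.cmake")

# app-a/CMakeLists.txt
include("${MYLIBS_BUILD_DIR}/MyLibsTargets.cmake")  # MYLIBS_BUILD_DIR set by the user
add_executable(app-a main.cpp)
target_link_libraries(app-a web)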
When doing it this way, a make in the application project won't trigger a make in the library project; you would have to take care of that yourself.
Have multiple CMakeLists.txt.
Many open-source projects take this approach (LibOpenJPEG, LibPNG, Poppler, etc.); take a look at their CMakeLists.txt to find out how they've done it.
Basically this lets you just toggle features as required.
I see two additional approaches. One is to simply have basic, io, and web be submodules of each app. Yes, there is duplication of code and wasted disk space, but it is very simple to implement and guarantees that different compiler settings for each app will not interfere with each other across the shared libraries. I suppose this means the libraries are no longer really shared, but maybe that doesn't need to be a big deal in 2011: RAM and disk have gotten cheaper, but engineering time has not, and sharing source is arguably more portable than sharing binaries.
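For the submodule variant, the setup inside each app repository could be as simple as the following (repository URLs are hypothetical):

# Run inside app-a's repository; repeat for the other apps.
git submodule add https://example.com/libs/basic.git basic
git submodule add https://example.com/libs/io.git    io
git submodule add https://example.com/libs/web.git   web
git commit -m "Vendor the shared libraries as submodules"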
Another approach is to have the layout specified in the question, and have CMakeLists.txt files in each subdirectory. The CMakeLists.txt files in basic, io, and web generate standalone shared libraries. The CMakeLists.txt files in each app directory pull in each shared library with the add_subdirectory() command. You could then pull down all the library directories and whichever app(s) you wanted and initiate the build from within each app directory.
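A sketch of that second approach, assuming the layout from the question and that each library directory defines its own shared-library target:

# app-a/CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
project(app-a)

# Pull the library builds into this one; out-of-tree source directories
# need an explicit binary directory as the second argument.
add_subdirectory(../basic ${CMAKE_BINARY_DIR}/basic)
add_subdirectory(../io    ${CMAKE_BINARY_DIR}/io)
add_subdirectory(../web   ${CMAKE_BINARY_DIR}/web)

add_executable(app-a main.cpp)
target_link_libraries(app-a web)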
You can use ADD_SUBDIRECTORY for this!
https://cmake.org/cmake/help/v3.11/command/add_subdirectory.html
I ended up doing what I outlined in my question: check in an empty directory (containing a .gitignore file which ignores everything) and tell CMake to GLOB any directories the user puts in there. Then I can just say cmake myrootdir and it finds all the various components. This works more or less OK. It does have some drawbacks, though; for example, third-party tools like BuildBot expect a more traditional project structure, which makes integrating them with this sort of arrangement a little more work.
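For reference, the globbing part of the top-level CMakeLists.txt can be as small as this (directory names are only an example of the idea):

# Build whatever the user has cloned into apps/.
file(GLOB children RELATIVE "${CMAKE_SOURCE_DIR}/apps" "${CMAKE_SOURCE_DIR}/apps/*")
foreach(child ${children})
  if(IS_DIRECTORY "${CMAKE_SOURCE_DIR}/apps/${child}")
    add_subdirectory("apps/${child}")
  endif()
endforeach()
# As noted above, adding a new app requires re-running cmake.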
The CMake BASIS tool provides utilities where you can create independent modules of a project and selectively enable and disable them using the ccmake command.
Full disclosure: I'm a developer for the project.
I have NUnit installed on my machine in "C:\Program Files\NUnit 2.4.8\", but on my integration server (running CruiseControl.NET) it is installed in "D:\Program Files\NUnit 2.4.8\". The problem is that my NAnt build file works correctly on my development machine because in the task I'm using the path "C:\Program Files\NUnit 2.4.8\bin\NUnit.Framework.dll" to reference the NUnit.Framework.dll assembly, but that same build file cannot build on my integration server (because the reference path is different there). Do I have to install NUnit at the same location on both machines? That solution seems too restrictive to me. Are there better ones? What is the general solution to this kind of problem?
Typically I distribute NUnit and any other dependencies with my project, in some common location (for me that's a libs directory in the top level).
/MyApp
/libs
/NUnit
/NAnt
/etc...
/src
/etc...
I then just reference those libs from my application, and they're always in the same location relative to the project solution.
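In the NAnt build file the reference then becomes relative; something like this (the task and file names are only an illustration, not my exact script):

<csc target="library" output="build/MyApp.Tests.dll" debug="true">
    <sources>
        <include name="src/**/*.cs" />
    </sources>
    <references>
        <!-- Relative to the build file, so it works on any machine. -->
        <include name="libs/NUnit/bin/NUnit.Framework.dll" />
    </references>
</csc>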
In general, dependencies on absolute paths should be avoided. As far as CI goes, you should be able to build and run your solution on a clean machine completely from scratch, using only resources found in your source code control, via automated scripts.
The "ultimate" solution can be to have the entire tool chain stored in your source control, and to store any libraries/binaries you build in source control as well. Set up correctly, this ensures you can rebuild any release, from any point in time, exactly as it was shipped; furthermore, you rarely need to, because every binary you've ever generated is under source control.
However, getting to that point is some serious work.
I'd use one of two approaches:
1) Use two different staging scripts (dev build / integration build) with different paths.
2) Put all needed executables in a folder on your PATH and call them directly.
I'd agree that absolute paths are evil. If you can't get around them, you can at least set an NUNIT_HOME property within your script that defaults to C:..., and have your CI server pass in the NUNIT_HOME property on the command line when it calls the script.
Or you can make your script require an NUNIT_HOME environment variable in order for NUnit to work. Then, instead of requiring that the machine it runs on has NUnit in some exact location, your script only requires that NUnit be present and reachable via the environment variable.
Either approach lets you change the version of NUnit you are using without modifying the build script; is that what you want?
The idea of having all the tools in the tool chain under version control is a good one. But while you're on your way there, you can use a couple of different techniques to specify different paths per machine.
NAnt lets you define a <property> that you can override with -D:name=value on the command line. You could use this to give your development machines a default location that you override in your CI system.
You can also read environment variables with environment::get-variable() to change the location per machine.
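A sketch of each idea (pick one; the property name is arbitrary):

<!-- Default for development machines; the CI server can override it on the
     command line with -D:nunit.dir="D:\Program Files\NUnit 2.4.8". -->
<property name="nunit.dir" value="C:\Program Files\NUnit 2.4.8" overwrite="false" />

<!-- Alternative: read it from an NUNIT_HOME environment variable if one is set. -->
<property name="nunit.dir"
          value="${environment::get-variable('NUNIT_HOME')}"
          if="${environment::variable-exists('NUNIT_HOME')}" />

<!-- The rest of the build file only ever refers to ${nunit.dir}. -->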