The other day I set up an Ubuntu installation in a VM and went to gather the tools and libraries I figured I would need for programming mostly in C++.
I had a problem, though: where to put things such as 3rd-party source libraries. From what I can gather, many source distributions assume that their dependencies are already installed in certain locations, and that the required tools are installed in particular locations as well.
To give an example of what I currently do on Windows: I have a directory where I keep all source code, C:\code. In this directory, I have a directory for all 3rd-party libraries, C:\code\thirdparty\libs. This way I can easily set up relative paths for all of the dependencies of any projects I write or come across and wish to compile. The reason I am interested in setting up a Linux programming environment is that both the tool and library dependency problems seem to have been solved efficiently there, making it easy, for example, to build OpenSSH from source.
So what I was looking for was a decent convention I can use when organizing my projects and libraries on Linux that is easy to maintain and easy to use.
Short answer: don't do a "heaps of code in local dir" thing.
Long answer: don't do a "heaps of code in local dir" thing, because it will be a nightmare to keep up-to-date, and if you decide to distribute your code, it will be a nightmare to package it for any decent distribution.
Whenever possible, stick to the libraries shipped with the distribution (Ubuntu has 20,000+ packages, so it ought to have most of what you'll need prepackaged). When there is no package, you can install by hand to /usr/local (but see above about upgrades, and DON'T do that).
Better, use "stow" or "installwatch" (or both) to install to per-library dirs (/usr/local/stow/libA-ver123) and then symlink files from there to /usr/local or /usr/ (stow does the simlinking part). Or just package the lib for your distribution.
For libraries/includes...
/usr/local/lib
/usr/local/include
Where possible, code against the system/distro-provided libraries. This makes it easiest to ship a product on that distro.
However, if you are building a commercial application, the sheer number of Linux distro flavors can mean you have to maintain a plethora of different application builds, one per distro. That isn't necessarily a bad thing, as it means you can integrate more cleanly with each distro's package management system.
But in the case where you can't do that, it should be fairly easy to download the source of each 3rd-party dependency you have and integrate the building of that dependency into a static lib that is linked into your executable. That way you know exactly what you're linking against, but this has the downside of bloating your executable size. It can also be required if you need a specific library (or version) not provided by the distro.
If you want your code to build on as broad a variety of Unix systems as possible, then you're probably wise to look into GNU autoconf and automake. These help you construct a configure script and makefile for your project so that it will build on practically any Unix system.
Also look into pkg-config which is used quite a bit now on Linux distributions for helping you include and link to the right libraries (for libs that support pkg-config).
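For instance, once a library ships a .pc file, compiling against it looks something like this (openssl is just an example of a library that provides one):

g++ -o myprog main.cpp $(pkg-config --cflags --libs openssl)
pkg-config --cflags openssl    # show just the include flags
pkg-config --libs openssl      # show just the linker flags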
If you're using Subversion to manage your source, there is a "convention" that most Subversion repositories use to manage their own code and "vendor" code.
Most svn repositories have a "vendor" tree (alongside the trunk, branches and tags trees). That is the top for all 3rd-party vendor code. In that directory you have directories for each library you use. E.g.:
branches/
tags/
trunk/
vendor/somelib
vendor/anotherlib
Beneath each of these libs is a directory for each library version and a "current" directory for the most up-to-date version in your repository.
vendor/somelib/1.0
vendor/somelib/1.1
vendor/somelib/current
Then your project's tree should be laid out something like this:
trunk/source # all your code in here
trunk/libs # all vendor code in here
The libs directory should be empty, but it will have svn:externals metadata associated with it, set via:
svn propedit svn:externals trunk/libs
The contents of this property would be something along the lines of (assumes subversion 1.5):
^/vendor/somelib/current somelib
^/vendor/anotherlib/1.0 anotherlib
This means that when you check out your code, Subversion also checks out your vendor libraries into your trunk/libs directory, so that when checked out it looks like this:
trunk/source
trunk/libs/somelib
trunk/libs/anotherlib
This is described (probably a whole lot better) in the Subversion Book. Particularly the section on handling vendor branches and externals.
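If you prefer to set the property non-interactively instead of through propedit, something along these lines should work (paths as in the example above):

svn propset svn:externals '^/vendor/somelib/current somelib
^/vendor/anotherlib/1.0 anotherlib' trunk/libs
svn commit -m "pull vendor libs into trunk/libs via externals" trunk/libs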
Ubuntu = Debian = apt-get goodness
Start with Linux Utilities:
%> sudo apt-get install util-linux
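Then pull in a basic C++ toolchain; the package names below are the usual Ubuntu ones, adjust to taste:

%> sudo apt-get install build-essential gdb cmake git pkg-config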
I am making a modified C++ compiler and I have it built and tested locally. However, I would like to be able to package my build for Windows, Linux (Debian), and Mac OSX.
All of the instructions I can find online deal with building gcc but have no regard for making something distributable (or perhaps I am missing something?). I know that for Windows I will need to bundle MinGW somehow, but that only further confuses me, and I have no idea how well Mac works with GCC these days.
Can anyone layout a set of discrete high-level steps I could try on each system so I can allow people to install my modified compiler easily?
First make sure your project installs well, including executables, headers and runtime dependencies. If you're using something like CMake, this is a matter of installing things relative to CMAKE_INSTALL_PREFIX, possibly with the help of the GNUInstallDirs module. If you are using plain make, then you need to ensure that make install with an overridable prefix (e.g. make prefix=... install) works well.
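For example, a quick way to sanity-check the install step with CMake is to install into a scratch prefix (the path here is just an example):

mkdir -p build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/tmp/mycompiler-staging ..
make
make install    # everything should land under /tmp/mycompiler-staging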
From here, you can target each platform independently. Treat the packaging independently from your project. As Chipster mentioned, making rpm files isn't so tough; deb files for Debian-based OSs and tar.xz files for Arch-based OSs are similar. The rules for creating these packages can reuse your install rules. You mentioned mingw: if you're targeting an msys distribution of mingw for Windows deployment, then the Arch-style pacman packaging will work on msys as well. You can slowly work on supporting one platform at a time with almost no changes to your actual project.
Typically in the open-source world, people will release a tar.gz file supporting ./configure && make && make install or similar. Then someone associated with the platform (like a Debian developer) will find your project, write some packaging rules for it, and release it into their distribution. That means your project can be totally agnostic to where it's being released. It also means you don't really need to worry about MacOS yet; you can wait until you have someone who wants it there, or some hardware to test it on.
If you really want to be in control of how things are packaged for each platform from inside your project, and you are already using cmake, cpack is a great tool which helps out. After writing cpack rules for your project, you can simply type cpack to generate many types of deployable archives. You won't get the resulting *.deb file into the official Debian or Ubuntu archives, but at least people using those formats can install your package.
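A minimal sketch of that workflow, assuming you have already added include(CPack) (and the usual CPACK_* variables) to your CMakeLists.txt:

cd build
cpack -G DEB    # produces a .deb package
cpack -G TGZ    # produces a relocatable .tar.gz archive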
Also, consider releasing one package with the runtime libraries, and one with the development content (headers, compiler, static libraries). This way, if someone uses your compiler, they can re-distribute the runtime libraries which is probably going to be a much simpler install.
I'm rather new to Linux filesystems, so could you help me, please? I have to write a sample C++ project (tests) using Ubuntu.
Could you clarify me with a file/folder structure from a developer's perspective? Here are some questions I would like answered:
Where is the typical location for projects (sources, object files, etc.)?
Where is a typical place for the dev environment (Eclipse, Qt Creator, etc.)?
Where is a typical place for the libraries? Are there different places for binaries and for header-only libraries?
Where is a typical place for the various tools used for development (code analyser, git client, etc.)?
Answers and links would be appreciated. Thank you.
Where is the typical location for projects (sources, object files, etc.)?
I store my projects in $HOME/dev but it's entirely up to you.
Where is a typical place for the dev environment (Eclipse, Qt Creator, etc.)?
I use Eclipse and set its workspace to $HOME/dev.
Where is a typical place for the libraries? Are there different places for binaries and for header-only libraries?
Typically libraries are installed to /usr/lib and headers are installed to /usr/include.
Where is a typical place for the various tools used for development (code analyser, git client, etc.)?
Typically these are installed into /usr/bin. I also put tools in $HOME/bin, especially if I made them.
But it's more complicated than that. What if you want to develop/test an application with a version of a library that is different from the one that comes with your Linux distribution? Sometimes I will install different library versions in my $HOME/dev folder and compile against those rather than the system version.
Also, I run Fedora 21, which comes with GCC 4.9.2; however, I have installed GCC 5.1.0 into /opt/gcc-5.1.0 and use that for some things.
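One way to actually build with the compiler in /opt rather than the system one is to point the usual environment variables at it for the current shell (a sketch; the lib64 path assumes a 64-bit GCC install):

export PATH=/opt/gcc-5.1.0/bin:$PATH
export CC=/opt/gcc-5.1.0/bin/gcc
export CXX=/opt/gcc-5.1.0/bin/g++
# binaries built this way may need the newer libstdc++ at run time:
export LD_LIBRARY_PATH=/opt/gcc-5.1.0/lib64:$LD_LIBRARY_PATH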
A typical project structure for me would be something like:
$HOME/
    /dev/
        /my-prog/
            /src/
                /include/
                    /my-prog.h
                /my-prog.cpp
            /build-debug/
                /src/
                    /my-prog
            /build-release/
                /src/
                    /my-prog
            /Makefile
The implementation varies somewhat between Linux distributions, but all seek to implement the Linux Filesystem Hierarchy Standard (FHS) for the most part, some distributions more so than others. At the time of writing the latest version is 2.3, which can be read on the FHS website or directly as the FHS 2.3 PDF.
Take the time to view the document and compare it with the implementation Ubuntu uses. It will give you a good background on where and why files are located where they are in Linux, and also a quick lesson in how this is more of a goal for distributions than a hard requirement, since distributions are pretty much free to put things anywhere they want.
As for your project source tree layout, a common setup looks like this:
src/ this is where your .cpp files go
include/ for headers that should get installed when you are writing a library
data/ is where graphics, icons, sounds, etc. go
external/ for git submodules or external libraries your project requires and that are obscure enough that you don't want users to manually install them
a build file in the top level directory (autoconf/automake was used historically, but these days CMake seems to be more popular)
a README or README.md explaining what the project does and which dependencies it needs
Naming and organisation can of course vary between projects; there is no real standard. Some use Source/ instead of src/, some might keep the headers within the src/ directory, and so on. It's really up to your project's needs. If a library is commonly shipped with most distributions there is no need to include it in the source tree. The same goes for most programming tools: git, gcc and co. are provided by the Linux distribution, so you don't have to worry about where they go.
If you are writing a library you might also want to look into pkg-config and the .so naming rules.
As for installation directories, the FHS explains that in detail, short summary:
/usr/bin for executables
/usr/lib for public libraries
/usr/lib/${APPNAME} for private libraries
/usr/include for public header files
/usr/share/${APPNAME} for data files
/opt/${APPNAME} is used by some commercial products instead of spreading the data over the hierarchy
/usr itself is reserved for distribution installed packages, manually compiled stuff should go to /usr/local instead. Most build systems will have a --prefix=PREFIX option to allow the user to change that.
If you need to compile a library yourself, I would generally recommend not installing it into /usr/local, as uninstalling software from there is hard and it also makes it impossible to keep different versions of the same software around at the same time.
Instead, install it into a PREFIX of its own, something like ~/run/somelibrary-0.0.1. Installing software this way has the disadvantage that tools won't find it by default. You can fix that by setting a few environment variables; I use a simple bash function for that purpose:
function activateprefix {
    PREFIX="$1"
    # point pkg-config, the compiler/linker and the dynamic loader at the given prefix
    export PKG_CONFIG_PATH="${PREFIX}/lib/pkgconfig/"
    export LD_LIBRARY_PATH="${PREFIX}/lib/"
    export LD_RUN_PATH="${PREFIX}/lib/"
    export LIBRARY_PATH="${PREFIX}/lib/"
    export CPLUS_INCLUDE_PATH="${PREFIX}/include/"
    export C_INCLUDE_PATH="${PREFIX}/include/"
}
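Usage is then simply (the prefix path is the example from above):

activateprefix ~/run/somelibrary-0.0.1
# for the current shell, the compiler, linker and pkg-config will now pick up
# headers and libraries from that prefix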
I have a library that I distribute to my customers. I'm exploring the idea of leaving my 3rd party dependencies as dynamically linked dependencies. In this case, deployment for my customers becomes more complicated, as they must install my dependencies before they can use my library. I am a bit new to this, so I have a broad question:
Assuming that all of my customers are on Linux, is an RPM package that simply installs the dependency .so files into system library directories the best route? From what I'm reading about RPMs, this isn't really the way they are meant to be used. I suppose that what I'm looking for is a sort of 'installer' for Linux, but maybe such a thing doesn't exist.
Is the best way to just build a package that includes all of the relevant binaries (and licenses, where applicable), and has instructions on how to install?
You have multiple options:
static linking (if the licenses in play allow it)
support a range of distros and provide packages for all of them (viability depends on who your customers are). This is the easiest option for your customers and the most complicated one for you.
provide an installer that installs your application in a Windows-style self-contained dir structure (e.g. /opt/myapp or /home/someuser/myapp). Put the shared libraries in there too and start via a script with LD_LIBRARY_PATH set accordingly (a minimal launcher sketch follows below). I've seen this option used by Loki games, Adobe Reader, Google Earth and others.
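A minimal sketch of such a launcher script, assuming an install layout of /opt/myapp/bin and /opt/myapp/lib (the names are illustrative):

#!/bin/sh
# launcher installed as e.g. /opt/myapp/myapp.sh
APPDIR="$(dirname "$(readlink -f "$0")")"
export LD_LIBRARY_PATH="$APPDIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$APPDIR/bin/myapp" "$@"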
Do not:
provide a custom installer that will copy your binaries and libraries in the standard directory structure. This may overwrite specific library versions needed for other apps your customer has. It also leaves a terrible mess as the distro's package management won't know about these files.
provide an rpm for everyone. On non-rpm distros, this would require your customers to manually convert the package to fit their package management system.
I'm trying to use the Quadprog++ library (http://quadprog.sourceforge.net/). I don't understand the instructions though.
To build the library simply go through the ./configure; make; make
install cycle.
In order to use it, you will be required to include in your code file
the "Array.hh" header, which contains a handy C++ implementation of
Vector and Matrices.
There are some "configure", and "MakeFile" files, but they have no extension and I have no idea what to do with them. There are also some ".am", ".in" and ".ac" extensions in the folder.
Does this look familiar to anyone? What do I do with this?
(Edit: On Windows.)
This package is built using the autotools. The files you mention (*.am, *.in, ...) come from the tools automake and autoconf.
Autotools is a de-facto standard in the GNU/Linux world. Not everybody uses it, but when a project does, it eases the work of package and distribution managers. In fact, autotools-based packages should be portable to any POSIX system.
That said, I'm guessing that you are using a non-Unix machine, such as Windows, so the configure script is not directly runnable on your system. If you insist on keeping Windows, which you probably will, your options are:
Use MinGW and MSYS to get a minimal build environment compatible with the autotools.
Use Cygwin and create a POSIX like environment in your Windows.
Create a VS project, add all the sources of the library to it, then compile and fix the errors that may arise, as if the code had been written by you.
Search for someone that already did the work and distributes a binary DLL, or similar.
(My favourite!) Get a Linux machine, install a cross-compiler environment to build Windows binaries, and do configure --host i686-mingw32 ; make.
These instructions describe how to build a program delivered as a tarball on Linux. For background, take a look at "Why always ./configure; make; make install; as 3 separate steps?".
This can be confusing at first, but here you go. Type these in as shown below:
cd <the_directory_with_the_configure_file>
./configure
At this point, a bunch of stuff will roll past on the screen. This is Autoconf running (for more details, see http://www.edwardrosten.com/code/autoconf/index.html)
When it's done, type:
make
This initiates the build process. (To learn more about GNU make, check out Comprehensive gnu make / gcc tutorial). This will cause several build messages to be printed out.
When this is done, type:
sudo make install
You will be asked for the root password. If this is not your own machine (or you do not have superuser access), then contact the person who administers this computer.
If this is your computer, type in the root password and the library should install in /usr/local/lib/ or something similar (watch the screen closely to see where it puts the .so file).
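Depending on the system, you may also need to refresh the dynamic linker's cache so that the freshly installed .so is found at run time:

sudo ldconfig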
The rest of it (include the .hh file) seems self-explanatory.
Hope that helps!
I'm in the middle of setting up a build environment for a C++ game project. Our main requirement is the ability to build not just our game code, but also its dependencies (Ogre3D, Cegui, boost, etc.). Furthermore, we would like to be able to build on Linux as well as on Windows, as our development team consists of members using different operating systems.
Ogre3D uses CMake as its build tool. This is why we based our project on CMake too so far. We can compile perfectly fine once all dependencies are set up manually on each team member's system, as CMake is able to find the libraries.
The question is whether there is a feasible way to get the dependencies set up automatically. As a Java developer I know of Maven, but what tools exist in the world of C++?
Update: Thanks for the nice answers and links. Over the next few days I will be trying out some of the tools to see what meets our requirements, starting with CMake. I've indeed had my share with autotools so far and as much as I like the documentation (the autobook is a very good read), I fear autotools are not meant to be used on Windows natively.
Some of you suggested letting an IDE handle the dependency management. We consist of individuals using all possible technologies to code, from pure Vim to fully blown Eclipse CDT or Visual Studio. This is where CMake gives us some flexibility with its ability to generate native project files.
In the latest CMake 2.8 version there is the new ExternalProject module.
This allows you to download or check out code, and configure and build it as part of your main build tree.
It should also allow you to set dependencies between the external projects.
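A rough sketch of what a minimal superbuild using ExternalProject might look like; the project name, URL and build commands below are placeholders, not a recipe for any particular library:

cat > CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 2.8)
project(superbuild NONE)
include(ExternalProject)
ExternalProject_Add(somelib
  URL http://example.com/somelib-1.0.tar.gz
  CONFIGURE_COMMAND <SOURCE_DIR>/configure --prefix=<INSTALL_DIR>
  BUILD_COMMAND make
  INSTALL_COMMAND make install)
EOF
mkdir -p build && cd build && cmake .. && make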
At my work (medical image processing group) we use CMake to build all our own libraries and applications. We have an in-house tool to track all the dependencies between projects (defined in an XML database). Most of the third-party libraries (like Boost, Qt, VTK, ITK etc.) are built once for each system we support (MSWin32, MSWin64, Linux32 etc.) and are committed as zip files to the version control system. CMake will then extract and configure the correct zip file depending on which system the developer is working on.
I have been using GNU Autotools (Autoconf, Automake, Libtool) for the past couple of months in several projects that I have been involved in and I think it works beautifully. Truth be told it does take a little bit to get used to the syntax, but I have used it successfully on a project that requires the distribution of python scripts, C libraries, and a C++ application. I'll give you some links that helped me out when I first asked a similar question on here.
The GNU Autotools Page provides the best documentation on the system as a whole but it is quite verbose.
Wikipedia has a page which explains how everything works. Autoconf configures the project based upon the platform that you are about to compile on, Automake builds the Makefiles for your project, and Libtool handles libraries.
A Makefile.am example and a configure.ac example should help you get started.
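In the same spirit, a minimal (illustrative, untested) configure.ac/Makefile.am pair for a single C++ program might look like this:

cat > configure.ac <<'EOF'
AC_INIT([myapp], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CXX
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
EOF
cat > Makefile.am <<'EOF'
bin_PROGRAMS = myapp
myapp_SOURCES = main.cpp
EOF
autoreconf --install && ./configure && make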
Some more links:
http://www.lrde.epita.fr/~adl/autotools.html
http://www.developingprogrammers.com/index.php/2006/01/05/autotools-tutorial/
http://sources.redhat.com/autobook/
One thing that I am not certain on is any type of Windows wrapper for GNU Autotools. I know you are able to use it inside of Cygwin, but as for actually distributing files and dependencies on Windows platforms you are probably better off using a Windows MSI installer (or something that can package your project inside of Visual Studio).
If you want to distribute dependencies you can set them up under a different subdirectory, for example, libzip, with a specific Makefile.am entry which will build that library. When you perform a make install the library will be installed to the lib folder that the configure script determined it should use.
Good luck!
There are several interesting make replacements that automatically track implicit dependencies (from header files), are cross-platform and can cope with generated files (e.g. shader definitions). Two examples I used to work with are SCons and Jam/BJam.
I don't know of a cross-platform way of getting *make to automatically track dependencies.
The best you can do is use some script that scans source files (or has the C++ compiler do that) and finds #includes (conditional compilation makes this tricky) and generates part of the makefile.
But you'd need to call this script whenever something might have changed.
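That said, GCC and Clang can emit make-style dependency rules for you, which is what most hand-rolled setups build on (file names are illustrative):

g++ -MM -MF main.d main.cpp    # writes "main.o: main.cpp foo.h ..." to main.d
# a Makefile can then pull these fragments in with something like:
#   -include $(OBJS:.o=.d)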
The question is whether there is a feasible way to get the dependencies set up automatically.
What do you mean set up?
As you said, CMake will compile everything once the dependencies are on the machines. Are you just looking for a way to package up the dependency source? Once all the source is there, CMake and a build tool (gcc, nmake, MSVS, etc.) are all you need.
Edit: Side note, CMake has the file command which can be used to download files if they are needed: file(DOWNLOAD url file [TIMEOUT timeout] [STATUS status] [LOG log])
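For example, a hedged sketch of using it from a standalone CMake script run with cmake -P (the URL is a placeholder):

cat > fetch_dep.cmake <<'EOF'
file(DOWNLOAD http://example.com/somelib-1.0.tar.gz somelib-1.0.tar.gz
     STATUS dl_status)
message(STATUS "download status: ${dl_status}")
EOF
cmake -P fetch_dep.cmake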
Edit 2: CPack is another tool by the CMake guys that can be used to package up files and such for distribution on various platforms. It can create NSIS for Windows and .deb or .tgz files for *nix.
At my place of work (we build embedded systems for power protection) we used CMake to solve the problem. Our setup allows cmake to be run from various locations.
/
    CMakeLists.txt        "install precompiled dependencies and build project"
    project/
        CMakeLists.txt    "build the project managing dependencies of subsystems"
        subsystem1/
            CMakeLists.txt    "build subsystem 1, assume dependencies are already met"
        subsystem2/
            CMakeLists.txt    "build subsystem 2, assume dependencies are already met"
The trick is to make sure that each CMakeLists.txt file can be called in isolation but that the top level file can still build everything correctly. Technically we don't need the sub CMakeLists.txt files but it makes the developers happy. It would be an absolute pain if we all had to edit one monolithic build file at the root of the project.
I did not set up the system (I helped, but it is not my baby). The author said that the Boost CMake build system had some really good stuff in it that helped him get the whole thing building smoothly.
On many *nix systems, some kind of package manager or build system is used for this. The most common one for source stuff is GNU Autotools, which I've heard is a source of extreme grief. However, with a few scripts and an online repository for your deps, you can set up something similar like so:
In your project Makefile, create a target (optionally with subtargets) that covers your dependencies.
Within the target for each dependency, first check to see if the dep source is in the project (on *nix you can use touch for this, but you could be more thorough)
If the dep is not there, you can use curl, etc to download the dep
In all cases, have the dep targets make a recursive make call (make; make install; make clean; etc) to the Makefile (or other configure script/build file) of the dependency. If the dep is already built and installed, make will return fairly promptly.
There are going to be lots of corner cases that will cause this to break though, depending on the installers for each dep (perhaps the installer is interactive?), but this approach should cover the general idea.
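For concreteness, here's a rough shell sketch of what the recipe behind one such dependency target might do (the URL and directory names are placeholders):

DEP=third_party/somelib
if [ ! -d "$DEP" ]; then
  mkdir -p third_party
  curl -L http://example.com/somelib-1.0.tar.gz | tar xz -C third_party
  mv third_party/somelib-1.0 "$DEP"
fi
make -C "$DEP" && make -C "$DEP" install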
Right now I'm working on a tool able to automatically install all dependencies of a C/C++ app with exact version requirements:
compiler
libs
tools (cmake, autotools)
Right now it works for my app (installing UnitTest++, Boost, Wt, sqlite and cmake, all in the correct order).
The tool, named «C++ Version Manager» (inspired by the excellent Ruby Version Manager), is coded in bash and hosted on GitHub: https://github.com/Offirmo/cvm
Any advice and suggestions are welcome.