I have two applications: one I wrote and one that comes from a third-party vendor. My application is written in C, theirs in C++. My project builds in two ways: within Eclipse, or with autotools using a Makefile.am and configure.ac. To build their application in Eclipse I have to run a build target. The build targets are:
build:  make --makefile=./Makefile ARCH=x86IsoApp
clean:  make --makefile=./Makefile ARCH=x86IsoApp clean
This works and gives me an executable, isoApp, which can then be placed into a run configuration, allowing me to debug.
My question is, how do I include this in my application? I have tried copying in the files and adding build paths, but their code is large and complex. There are many dependencies that I am unable to resolve, and once they are resolved I get linking errors. I have tried many times to get their code into my codebase. (They have C++ files with C extensions that refuse to compile until renamed.) Is there a way to use the generated binary to access their APIs?
My code runs on an i.MX and is compiled using an SDK I created with Yocto. If I cannot build the third-party code in Eclipse, I cannot place it in my Yocto build.
I am tweaking LLVM files and doing a "make" in my build directory to rebuild LLVM with the tweaked files, which is taking a while even though my changes were rather small (I understand that my one file will affect other files). Do I have to use 'cmake --build .' to generate a new makefile in the build directory, or is it right to just call 'make'? And is it common for rebuilds to take a while?
I think most of the time is spent re-linking the binaries, of which LLVM has many (opt, llc, etc.). One option to speed up the build is to enable LLVM_BUILD_LLVM_DYLIB and LLVM_LINK_LLVM_DYLIB; the other is to issue make opt instead of make, if you are mostly working with opt.
These options make the build system produce a single giant dynamic library (.so or .dll) containing all LLVM components (LLVMSupport, LLVMCodegen, etc.) and make the tools link against it. Linking to a dynamic library is much faster, because you don't need to re-link all the static code for each tool executable.
Also, if you for example just modify a backend target, it's enough to issue make in the tools/llc directory. This way only the required tool will be relinked, which speeds up the build.
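For reference, both suggestions in concrete form (a sketch; the ../llvm source path is illustrative):

cmake -DLLVM_BUILD_LLVM_DYLIB=ON -DLLVM_LINK_LLVM_DYLIB=ON ../llvm
make opt               # rebuild and relink only the opt tool
make -C tools/llc      # or relink just llc from inside the build tree

And to answer the 'cmake --build .' part of the question: plain make is fine once the build tree is configured; cmake --build . is just a generator-agnostic wrapper that invokes the same make.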
At the company I'm currently working for, several IDEs are being used (they develop firmware for different embedded platforms).
All their C projects use a Makefile, so we decided to also add rules to their default Makefile to run static code analysis tools.
One of the IDEs they use is Eclipse.
Here we have added additional targets to the Make Target view that trigger, for example, the lint target from the Makefile.
Since we use multiple IDEs we can tell the tools called by the Makefile to generate specific output for the IDE being used.
For Eclipse we do this by adjusting the Build Command and adding something like IDE_ENV=eclipse to the end.
This works just fine.
Recently one of the engineers mentioned that it would be really helpful if he could run the tools, as defined in the Makefile, for a single file.
So, I updated the Makefile and it now accepts a variable SOURCE_FILE with the path of the file that needs to be checked.
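For illustration, such a target might look something like this (a sketch, not the actual Makefile; LINT, LINTFLAGS, and SRCS are placeholder variables, and recipe lines start with a tab):

lint:
ifdef SOURCE_FILE
	$(LINT) $(LINTFLAGS) $(SOURCE_FILE)
else
	$(LINT) $(LINTFLAGS) $(SRCS)
endif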
In Eclipse I tried adding SOURCE_FILE=${selected_resource_loc} and just SOURCE_FILE=${resource_loc}, but these variables do not seem to be resolved when running a Make Target.
I also tried to use $(selected_resource_loc) and $(resource_loc) directly in the Makefile, but without any luck.
Can somebody tell me how I can pass the current selected file to Make when running a target from the Make Target view?
Some Eclipse special variables are not recognized in a build configuration. Instead of running the build procedure, try using an External Tools Configuration.
A similar problem was described here: Custom command for Eclipse on current file.
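For example, an External Tools Configuration along these lines should resolve ${resource_loc} (the make path and project name are illustrative; the lint target and IDE_ENV come from the setup described above):

Location:          /usr/bin/make
Working Directory: ${workspace_loc:/MyProject}
Arguments:         lint IDE_ENV=eclipse SOURCE_FILE=${resource_loc}

With a file selected in the Project Explorer, running this launches make with that file's absolute path.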
I would like to launch multiple ninja builds simultaneously. Each build is in its own directory and has its own build.ninja file.
I could just do the following:
cd <build-dir-1>
ninja &
cd <build-dir-2>
ninja &
...
cd <build-dir-last>
ninja
...but there are a couple of issues with this:
The default number of threads used by Ninja probably isn't optimal when launching multiple independent builds simultaneously.
Output will, I expect, be interleaved in a nonsensical way.
EDIT: I could also just keep the ninja calls in the foreground (which is what I'm currently doing), but then there would be no easy way to estimate the current progress of the (entire) build.
So, I would like to do one of the following:
merge the build.ninja files into one big file that can perform both builds in one ninja invocation, or
somehow launch ninja with multiple build.ninja files.
It doesn't look like that second option is supported by ninja, but the first seems like it could be done easily enough using subninja <build-dir-n>/build.ninja. Has anyone done something like this before? Are there any hidden pitfalls? Alternatively, I could just perform the builds in sequence (i.e. the above sequence but without the &s), but this doesn't feel like the right solution.
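A minimal top-level file for the first option might look like this (file and directory names are illustrative):

# combined.ninja
subninja build-dir-1/build.ninja
subninja build-dir-2/build.ninja

ninja -f combined.ninja

One pitfall to check for: paths inside each build.ninja are interpreted relative to the directory ninja is invoked from, not relative to the subninja'd file, so generated manifests that use relative paths (as CMake's typically do) may not combine cleanly.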
Use-case
I'm using CMake, which generates a separate build.ninja file for each build configuration (release and debug). I'm also targeting multiple platforms, so I have multiple compilers, and CMake must be run separately for each platform. So if I want to build release and debug code for all platforms, I need to run ninja multiple times.
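(For context, the per-configuration build trees come out of invocations like these; directory names are illustrative:)

mkdir -p build/linux-release && cd build/linux-release
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release ../..
# ...repeat with -DCMAKE_BUILD_TYPE=Debug, and again per toolchain,
# leaving one build.ninja per directory and one ninja run each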
I'm in the middle of setting up a build environment for a C++ game project. Our main requirement is the ability to build not just our game code, but also its dependencies (Ogre3D, CEGUI, Boost, etc.). Furthermore, we would like to be able to build on Linux as well as on Windows, as our development team consists of members using different operating systems.
Ogre3D uses CMake as its build tool, which is why we have based our project on CMake so far. We can compile perfectly fine once all dependencies are set up manually on each team member's system, as CMake is able to find the libraries.
The question is whether there is a feasible way to get the dependencies set up automatically. As a Java developer I know of Maven, but what tools exist in the world of C++?
Update: Thanks for the nice answers and links. Over the next few days I will be trying out some of the tools to see what meets our requirements, starting with CMake. I've indeed had my share of autotools so far, and as much as I like the documentation (the autobook is a very good read), I fear autotools are not meant to be used natively on Windows.
Some of you suggested letting an IDE handle the dependency management. We consist of individuals using all possible technologies to code, from pure Vim to fully blown Eclipse CDT or Visual Studio. This is where CMake allows us some flexibility, with its ability to generate native project files.
In the latest CMake 2.8 version there is the new ExternalProject module.
This allows you to download or check out code, then configure and build it as part of your main build tree.
It should also allow you to set dependencies.
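A minimal sketch (the target name, URL, and install prefix are placeholders):

include(ExternalProject)
ExternalProject_Add(ogre3d
    URL        http://example.com/ogre3d-src.tar.gz
    CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/deps
)

Your own targets can then be ordered after it with add_dependencies() and pointed at the deps prefix for headers and libraries.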
At my work (a medical image processing group) we use CMake to build all our own libraries and applications. We have an in-house tool to track all the dependencies between projects (defined in an XML database). Most of the third-party libraries (like Boost, Qt, VTK, ITK, etc.) are built once for each system we support (MSWin32, MSWin64, Linux32, etc.) and are committed as zip files to the version control system. CMake will then extract and configure the correct zip file depending on which system the developer is working on.
I have been using GNU Autotools (Autoconf, Automake, Libtool) for the past couple of months in several projects that I have been involved in, and I think it works beautifully. Truth be told, it does take a little while to get used to the syntax, but I have used it successfully on a project that requires the distribution of Python scripts, C libraries, and a C++ application. I'll give you some links that helped me out when I first asked a similar question on here.
The GNU Autotools Page provides the best documentation on the system as a whole, but it is quite verbose.
Wikipedia has a page which explains how everything works. Autoconf configures the project based upon the platform that you are about to compile on, Automake builds the Makefiles for your project, and Libtool handles libraries.
A Makefile.am example and a configure.ac example should help you get started.
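To give a flavour of the syntax, a minimal pair might look like this (the project name and sources are illustrative):

# configure.ac
AC_INIT([mygame], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CXX
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am
bin_PROGRAMS = mygame
mygame_SOURCES = main.cpp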
Some more links:
http://www.lrde.epita.fr/~adl/autotools.html
http://www.developingprogrammers.com/index.php/2006/01/05/autotools-tutorial/
http://sources.redhat.com/autobook/
One thing that I am not certain about is any type of Windows wrapper for GNU Autotools. I know you are able to use it inside of Cygwin, but as for actually distributing files and dependencies on Windows platforms, you are probably better off using a Windows MSI installer (or something that can package your project inside of Visual Studio).
If you want to distribute dependencies, you can set them up under a separate subdirectory, for example libzip, with a specific Makefile.am entry which will build that library. When you perform a make install, the library will be installed to the lib folder that the configure script determined it should use.
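(In the top-level Makefile.am that is just a matter of listing the subdirectories; libzip is the example name from above, and src is an assumed source directory:)

SUBDIRS = libzip src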
Good luck!
There are several interesting make replacements that automatically track implicit dependencies (from header files), are cross-platform and can cope with generated files (e.g. shader definitions). Two examples I used to work with are SCons and Jam/BJam.
I don't know of a cross-platform way of getting *make to automatically track dependencies.
The best you can do is use some script that scans the source files (or have the C++ compiler do it), finds the #includes (conditional compilation makes this tricky), and generates part of the makefile.
But you'd need to call this script whenever something might have changed.
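In practice the usual trick is to have GCC/Clang emit the dependency fragments and include them from the makefile; a sketch, assuming GNU make, a SOURCES variable, and that recipe lines start with a tab:

# generate a .d fragment per source listing its header dependencies
%.d: %.cpp
	$(CXX) -MM -MT '$(@:.d=.o) $@' -MF $@ $<

-include $(SOURCES:.cpp=.d)

Because each fragment names both the .o and the .d as targets, the fragment itself is regenerated whenever the source or one of its headers changes.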
The question is whether there is a feasible way to get the dependencies set up automatically.
What do you mean by set up?
As you said, CMake will compile everything once the dependencies are on the machine. Are you just looking for a way to package up the dependency source? Once all the source is there, CMake and a build tool (gcc, nmake, MSVS, etc.) are all you need.
Edit: Side note, CMake has the file command which can be used to download files if they are needed: file(DOWNLOAD url file [TIMEOUT timeout] [STATUS status] [LOG log])
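For instance (a sketch; the URL and destination are placeholders):

file(DOWNLOAD
    http://example.com/deps/boost.tar.gz
    ${CMAKE_BINARY_DIR}/deps/boost.tar.gz
    STATUS dl_status)

You would still have to unpack and build the result yourself, e.g. via execute_process().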
Edit 2: CPack is another tool by the CMake guys that can be used to package up files for distribution on various platforms. It can create NSIS installers for Windows and .deb or .tgz packages for *nix.
At my place of work (we build embedded systems for power protection) we used CMake to solve the problem. Our setup allows cmake to be run from various locations.
/
  CMakeLists.txt        "install precompiled dependencies and build project"
  project/
    CMakeLists.txt      "build the project, managing dependencies of subsystems"
    subsystem1/
      CMakeLists.txt    "build subsystem 1, assuming dependencies are already met"
    subsystem2/
      CMakeLists.txt    "build subsystem 2, assuming dependencies are already met"
The trick is to make sure that each CMakeLists.txt file can be called in isolation, but that the top-level file can still build everything correctly. Technically we don't need the sub CMakeLists.txt files, but it makes the developers happy. It would be an absolute pain if we all had to edit one monolithic build file at the root of the project.
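One way to get that behaviour (a sketch; the project and file names are illustrative) is to give every directory its own project() call, so each CMakeLists.txt can also serve as a top-level entry point:

# project/subsystem1/CMakeLists.txt
project(subsystem1)
add_library(subsystem1 src1.cpp src2.cpp)

# project/CMakeLists.txt
project(whole_system)
add_subdirectory(subsystem1)
add_subdirectory(subsystem2)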
I did not set up the system (I helped, but it is not my baby). The author said that the Boost CMake build system had some really good stuff in it that helped him get the whole thing building smoothly.
On many *nix systems, some kind of package manager or build system is used for this. The most common one for source-based distribution is GNU Autotools, which I've heard is a source of extreme grief. However, with a few scripts and an online repository for your deps, you can set up something similar (sketched in the Makefile fragment after this list):
In your project Makefile, create a target (optionally with subtargets) that covers your dependencies.
Within the target for each dependency, first check to see if the dep source is in the project (on *nix you can use touch for this, but you could be more thorough)
If the dep is not there, you can use curl, etc. to download the dep
In all cases, have the dep targets make a recursive make call (make; make install; make clean; etc) to the Makefile (or other configure script/build file) of the dependency. If the dep is already built and installed, make will return fairly promptly.
There are going to be lots of corner cases that will cause this to break, though, depending on the installers for each dep (perhaps the installer is interactive?), but this approach should cover the general idea.
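Putting the steps above together, a Makefile fragment might look like this (the dep name, URL, and paths are placeholders, and recipe lines start with a tab):

DEPS := deps

.PHONY: libfoo
libfoo: $(DEPS)/libfoo/Makefile
	$(MAKE) -C $(DEPS)/libfoo
	$(MAKE) -C $(DEPS)/libfoo install

$(DEPS)/libfoo/Makefile:
	mkdir -p $(DEPS)
	curl -L http://example.com/libfoo.tar.gz | tar -xz -C $(DEPS)
	cd $(DEPS)/libfoo && ./configure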
Right now I'm working on a tool able to automatically install all dependencies of a C/C++ app, with exact version requirements:
compiler
libs
tools (cmake, autotools)
Right now it works for my app, installing UnitTest++, Boost, Wt, sqlite, and cmake, all in the correct order.
The tool, named "C++ Version Manager" (inspired by the excellent Ruby Version Manager), is coded in bash and hosted on GitHub: https://github.com/Offirmo/cvm
Any advice and suggestions are welcome.