Many a time, I would do the standard ./configure, make, and make install when compiling a package.
However, there are times when I see a package being built with optional make parameters that are specific to that package.
Is there an easy way to list the bundled targets other than going through the Makefile source?
As a general statement, no.
If there are meaningful variables the project should call them out specifically.
Bash (and probably zsh) tab completion does attempt to discover the available make targets, with varying degrees of success, if that is of any help though.
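If a rough list is enough, a commonly circulated (and admittedly imperfect) one-liner for GNU make dumps make's internal database and filters out target-looking lines; treat it as an approximation, since it also picks up some implicit and internal rules:

make -qp | awk -F':' '/^[a-zA-Z0-9][^$#\/\t=]*:([^=]|$)/ {split($1,A,/ /); for(i in A) print A[i]}' | sort -u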
I have to deal with a cross-platform application which needs an additional 3rd-party library on Windows but not on Linux (and that library doesn't support Linux anyway). I have packed the library into a Conan package, supporting only `os=Windows compiler="Visual Studio"`. Now if I put this library as
[requires]
Library/1.2.3#foo/bar
in my conanfile.txt, conan install will, logically, fail on Linux with the error "Missing prebuilt package".
So is there a way to specify required packages conditionally in conanfile.txt? Something like Library/1.2.3#foo/bar [os="Windows"]. I read through the Conan docs but found nothing.
Or is there a way to tell conan install to ignore the error?
Or is my only option to use two different conanfile.txt files on the two platforms?
I cannot use a conanfile.py, since the build process is not managed by Conan, only the dependencies.
I would say the best option in your case is conanfile.py, but as you are not able to use it, you will need to keep two conanfile.txt files, one per platform.
It's not possible to add conditions in conanfile.txt: supporting them would take a big development effort, whereas conanfile.py can deal with conditions easily, since it is a Python script.
nada's solution is good: you can use CMake to call Conan according to your OS. You can also try cmake-conan, which sounds like a better fit for your specific scenario.
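For reference, if conanfile.py ever becomes an option, the conditional require is only a few lines. A minimal sketch using the reference from the question (Conan 1.x API; the class name is illustrative):

from conans import ConanFile

class MyAppConan(ConanFile):
    # declaring "os" in settings lets the recipe branch per platform
    settings = "os", "compiler", "build_type", "arch"

    def requirements(self):
        # pull in the Windows-only library on Windows only
        if self.settings.os == "Windows":
            self.requires("Library/1.2.3#foo/bar")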
I've been trying to figure out and adopt the latest cmake best practices, as I'm setting up a large project. Several people have advocated what NOT to put in your cmake files. In general, it seems to boil down to (paraphrasing):
CMake files should just describe what's required for a target to build, but not make assumptions about anything else.
This all makes sense; however, it leads to the question:
Where should you define all of the "optional" settings, if not in the cmake scripts?
Also, it seems different if we're just talking about a library, vs an application that's dependent on a lot of libraries.
If you're just building a small library, this all seems fine, whoever uses the library is responsible for deciding all of these extra details.
But when building a larger application with many dependencies, it'd be nice to define all of these settings somewhere. In most build systems you get common configurations like debug and release builds. CMake has standard support for "Debug", "Release", "MinSizeRel", and "RelWithDebInfo" (I can never remember the abbreviations, by the way), but none of these is enforced, so you might just get an empty string.
And even if you intend to respect these, do you just check the build config and set everything in the root cmake script or what?
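For instance, the only pattern I've come across is defaulting CMAKE_BUILD_TYPE in the top-level script when nothing was specified; a sketch of what I mean (the Release default is just one convention, not the only one):

if(NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
  set(CMAKE_BUILD_TYPE "Release" CACHE STRING "Build type" FORCE)
  set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS
               "Debug" "Release" "MinSizeRel" "RelWithDebInfo")
endif()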
As a specific example: My project depends on a bunch of 3rd party libraries, and instead of building them all as part of the project, I am trying to pre-build them.
So to make it simple, I currently have a build script which standardizes how I build all of the 3rd party libraries:
cmake -DBUILD_SHARED_LIBS:BOOL=OFF \
-DBUILD_STATIC_LIBS:BOOL=ON \
-DCMAKE_POSITION_INDEPENDENT_CODE=On \
-DCMAKE_INSTALL_PREFIX=${install_prefix} \
-DCMAKE_PREFIX_PATH="${DIST}" "$@" ${CMAKE_SRC_DIR}
However, this build script is bash, and it's not exactly portable. There are better options, right?
I don't expect the other developers of the project to have to memorize a lot of arguments to pass to cmake to build the project the way it was intended.
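For instance, would an initial-cache file passed with cmake -C be the portable replacement? A sketch of what I have in mind (file name and install prefix are made up):

# third-party-defaults.cmake -- use as: cmake -C third-party-defaults.cmake <src-dir>
set(BUILD_SHARED_LIBS OFF CACHE BOOL "Build static libraries")
set(CMAKE_POSITION_INDEPENDENT_CODE ON CACHE BOOL "Build position-independent code")
set(CMAKE_INSTALL_PREFIX "$ENV{HOME}/dist" CACHE PATH "Install prefix")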
When building multiple interdependent C++ CMake projects (on Linux) in topologically sorted order, we have two possibilities:
Go through every project, and ...
... "make install" it into some prefix. When building a library in a project, link against the already installed libraries.
... build it via "make" without installing. When building a library in a project, link against the already built libraries in place.
What are the pros/cons of these choices? The build is performed by a home-grown script, which resolves dependencies, builds in the right order, etc.
Of course you can do both. But the idea of 'installing' is that libraries, headers, documentation, etc. are placed in a well-defined directory that does not depend on the layout of the source code trees.
This separates the source, which is usually only of interest to the programmer of that package, from the compiled programs, libraries, etc., which are of interest to users and to programmers of other packages.
Imagine you have to change the directory structure of one subpackage. Without installing, you would have to adapt all the other make scripts.
So:
Pros of solution 1 (== Cons of solution 2)
Better maintainability of the whole package
The "expected" way
make and make install are expected to perform two conceptually different things; neither is better or worse than the other. I will explain by describing the usual sequence of program installation using make (from "The Art of Unix Programming"):
make (all) - Your all production should make every executable of your project. Usually
the all production doesn’t have an explicit rule; instead it refers to all of your
project’s top-level targets (and, not accidentally, documents what those are).
Conventionally, this should be the first production in your makefile, so it will
be the one executed when the developer types make with no argument.
make test - Run the program’s automated test suite, typically consisting of a set of unit
tests to find regressions, bugs, or other deviations from expected behavior
during the development process. The ‘test’ production can also be used
by end-users of the software to ensure that their installation is functioning
correctly.
make install - Install the project’s executables and documentation in system directories so
they will be accessible to general users (this typically requires root privileges).
Initialize or update any databases or libraries that the executables require in
order to function.
Credit for this answer goes to Eric Steven Raymond.
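To make the convention concrete, here is a minimal illustrative Makefile with these three targets (program name, objects, and test script are placeholders; recipe lines must start with a TAB):

PREFIX ?= /usr/local

all: myprog

myprog: main.o util.o
	$(CC) -o $@ $^

test: all
	./run_tests.sh

install: all
	install -d $(PREFIX)/bin
	install -m 755 myprog $(PREFIX)/bin/myprog

.PHONY: all test install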
I am trying to understand the exact actions being taken by a makefile (this is my first time working with make-based builds).
I want to know whether the make file is intended to be used by BSD make, GNU make or Windows nmake. 1) How can I do this without reading documentation of all three and understanding their differences?
2) After reading several articles, I have come to the conclusion that make is a utility whose primary purpose is building source code into an executable, a DLL, or some other form of output, something that IDEs usually let us do. Makefile is the means of giving instructions to the make utility. Is this right?
3) What is the relation between make and the platform for which the code will be built?
4) The commands that are issued under a dependency line (target:components) are shell commands?
Thanks in advance.
How can I do this without reading documentation of all three and understanding their differences?
Well, most likely you will use GNU Make. I believe it is relatively simple to distinguish Makefiles written for the different flavors of make.
AFAIK, GNU Make and BSD Make have many differences, at least in their language syntax. For example, in GNU Make a typical conditional directive looks like:
ifdef VARIABLE
# ...
endif
And in BSD Make the equivalent looks like this:
.if defined(VARIABLE)
# ...
.endif
See also:
How similar/different are gnu make, microsoft nmake and posix standard make?
Use the same makefile for make (Linux) and nmake(Windows)
BSD Make and GNU Make compatible makefile
Makefile is the means of giving instructions to the make utility. Is this right?
Yep, absolutely right.
What is the relation between make and the platform for which the code will be built?
Make is just a utility for tracking dependencies between the artifacts to be built and for running commands in the proper order to satisfy those dependencies. Which exact commands accomplish the task is up to the user, so you can use make for cross-compilation as well.
The commands that are issued under a dependency line (target : components) are shell commands?
Yes, typically Make spawns a sub-shell which executes the specified recipe.
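A tiny illustrative rule (the file name is a placeholder; recipe lines begin with a TAB):

hello.txt:
	echo hello > hello.txt    # runs in its own sub-shell
	wc -c hello.txt           # a second, independent sub-shell

Because each recipe line gets its own shell by default, a cd on one line does not affect the next line.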
A little background, we have a fairly large code base, which builds in to a set of libraries - which are then distributed for internal use in various binaries. At the moment, the build process for this is haphazard and everything is built off the trunk.
We would like to explore whether there is a build system which will allow us to manage releases and automatically pull in dependencies. Such a tool exists for Java: Maven. I like its package, repository, and dependency mechanisms, and I know that with either the maven-native or maven-nar plugin we could get this. However, the problem is that we cannot restructure the source trees the "Maven way", and unfortunately the plugins (at least maven-nar) don't seem to like code that is not structured this way...
So my question is, is there a tool which satisfies the following for C++
build
package (for example libraries with all headers, something like the .nar)
upload package to a "repository"
automatically pull in the required dependencies from said repository, extract headers and include them in the build, extract libraries and link against them. The dependencies would be described in the "release" for that binary, so if we were to use a CI server to build that "release", the build script would have the necessary dependencies listed (like the pom.xml files).
I could roll my own by modifying either make+shell scripts or waf/scons with extra python modules for the packaging and dependency management - however I would have thought that this is a common problem and someone somewhere has a tool for this? Or does everyone roll their own? Or have I missed a significant feature of waf/scons or CMake?
EDIT: I should add, open source is preferred, and non-MS...
Most of the Linux distributions, for example, contain dependency tracking for their packages. Of all the things that I've tried to cobble together myself to take on your problem, in the end they all were "not quite perfect". The best thing to do, IMHO, is to create a local yum/deb repository or something (continuing my Linux example) and then pull stuff from there as needed.
Many of the source-packages also quickly tell you the minimum components that must be installed to do a self-build (as opposed to installing a binary pre-compiled package).
Unfortunately, none of these methods is that much easier, though any of them beats trying to do it all yourself. In the end, to support multiple platforms, you need one of these systems per OS as well. Fun!
I am not sure if I understand correctly what you want to do, but I will tell you what we use and hope it helps.
We use CMake for our build. It has to be noted that CMake is quite powerful. Among other things, you can "make install" into custom directories to collect headers and binaries there to build your release. We combine this with some Python scripting to build our releases. YMMV, but some things might just be too specific for a generic tool, and a custom script may be the simpler solution.
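For instance, a sketch of such install rules (the target name and paths are placeholders); with these in place, "make install" under a custom CMAKE_INSTALL_PREFIX collects the release artifacts:

install(TARGETS mylib
        ARCHIVE DESTINATION lib
        LIBRARY DESTINATION lib
        RUNTIME DESTINATION bin)
install(DIRECTORY include/ DESTINATION include)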
Our build tool builds releases directly from an svn repository (checkout, build, ...), which I can really recommend to avoid some local state polluting the release in some unforeseen way. It also enforces reproducibility.
It depends a lot on the platforms you're targeting. I can only really speak for Linux, but there it also depends on the distributions you're targeting, packages being a distribution-level concept. To make things a bit simpler, there are families of distributions using similar packaging mechanisms and package names, meaning that the same recipe for making a Debian package will probably make an Ubuntu package too.
I'd definitely say that if you're willing to target a subset of all known Linux distros using a manageable set of packaging mechanisms, you will benefit in the long run from not rolling your own and building packages the way the distribution creators intended. These systems allow you to specify run- and build-time dependencies, and automatic CI environments also exist (like OBS for rpm-based distros).