I'm new to make and makefiles, so forgive me if this is very basic.
I'm looking through some makefiles in my project and I'm seeing two types of targets: targets that don't begin with a . character and targets that do.
From what I can tell, the ".target-name" targets seem to always be executed. Is my assumption true? I did read about makefiles by Googling, but didn't find anything specific to this.
And as always, thanks for the answers!
No.
The targets with a dot are normally special targets (i.e. their functionality is built into make). One of them is
.PHONY, which is the one that defines the targets that are always executed (that is, the commands in their rules are run unconditionally).
But there are also others, like .DEFAULT for the default rule, or .PRECIOUS, which tells make not to delete the listed targets when it is interrupted.
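For example, here is a minimal rule marked phony, so that clean is run even when a file named clean happens to exist (note that the recipe line must start with a tab):

.PHONY: clean
clean:
	rm -f *.o myprog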
For learning about make, and especially gmake, I'd suggest having a look at the excellent book "Managing Projects with GNU Make" (sanitised Amazon link).
HTH.
cheers,
I am working on a C++ project. It is not much complicated so far, yet depends on a bunch of "popular" libraries (nlohmann/json, ToruNiina/toml11 just to name a few). All of them have some CMakeLists.txt and from my not-that-experienced point of view, I consider them well structured.
Now of course I can compile the libraries one by one, or include a "copy" into my project repo, but I want to be better than that. After researching about available build tools, I have decided to use cmake to build and manage a C++ project. The promise was to get a stable, widely supported tool that will help to simplify & unify the build process. Moreover, from the project nature I have no privilege to impose any requirements on the target machine; I need to pack everything for the deployment.
I have spent several days reading, watching and testing out various cmake tutorials, handbooks and manuals. I have to admit, I quickly started to feel that a tool that is supposed to clarify development process keeps introducing new obscurities contrary to its purpose. Originally, I credited this to my lack of experience, yet...
I read articles about why not to bundle dependencies, only to be followed by methods of doing so. I found recommendations to prefer approach A over B, then C over B, and later A over C. It took me a while to figure out the differences between 2.8 and 3.0, the obscurity of target_link_libraries, setting the C++ standard and/or compiler warning flags, and so on.
My point is that even after an exhausting expedition into the seas of cmake, I am still not sure about some elementary questions:
How is cmake meant to be used?
What is a standard, what is a courtesy, and what is none of those?
How can I tell that something is a feature, an archaic backwards compatibility, or both?
Now I will illustrate this with my project. I only need something like this:
cmake_minimum_required(VERSION 3.14)
project(CaseCore CXX)
add_executable(myBinary list/of/cpp/sources.cpp)
target_link_libraries(myBinary PUBLIC someExternalLibs likeForExample nlohmann_json::nlohmann_json oqs)
The only problem is with the libraries (there is no space for other problems anyway). I want to build them with the project, and I don't want to keep a local copy (so as not to drag a ton of unrelated files along). First, I created forks of the library repos in order to have a reliable source and to be able to merge newer versions into my forks.
Then the decision was whether to use git submodules or some other scheme; I've read that submodules don't perform that well, and I also preferred the whole thing to be managed by cmake alone. I started with ExternalProject_Add, but later I found out about FetchContent, which let me add the external dependencies easily to my cmake list:
FetchContent_Declare(nlohmann
GIT_REPOSITORY https://github.com/my-reliable-fork-of/json
GIT_TAG v3.7.3
)
message(STATUS "Fetching Json...this may take a while")
FetchContent_MakeAvailable(nlohmann)
This seems minimal and works well. However, I always have to search the library itself in order to find (or guess) which targets to link to my executable target. There seems to be close to zero convention, and unless the respective CMakeLists.txt is simple enough to read, I tend to guess target names until I find the right one.
Recently I wanted to link liboqs from here, and the above-mentioned approach did not really help; for some reason, I can link oqs and #include "oqs/oqs.h", yet the dynamic library is not created and execution terminates. I am pretty sure I can resolve the problem after another span of time spent googling and playing around with various cmake variables. Yet this is not the way I expected cmake to help me manage my project; it is actually quite the opposite.
Just to be clear, I turned down other methods including
add_subdirectory from local repo copy (git submodule)
ExternalProject_Add from local repo copy (git submodule)
ExternalProject_Add from online repo
find_package
as they seemed much more obscure/old-style, etc. (even though, despite hours of research, they all still seem to me like just so many ways of doing the same thing)
Now, to my questions:
Am I doing something wrong, or is it really what working with cmake should look like?
Do I really have to "reverse-engineer" other people's CMakeLists in order to use a library?
Under these circumstances, how can I convince my coworkers to adopt a similar workflow?
and finally
How can I adjust my work in order to ease these difficulties for others?
I love C++ the more I use it. Yet I spend a tremendous amount of my productive time on solving dependencies...and I do not want to make this guy even more angry.
How is cmake meant to be used?
The typical cmake usage matches the old autotools usage:
$ cmake /path/to/src #replaces /path/to/src/configure
$ make
$ make install
Some targets changed (e.g., make check vs make test), and cmake doesn't provide all the same standard targets (e.g., make distclean), but the usage I have above is what most developers will do (and since cmake re-runs itself, it's really just the second step most of the time).
If your CMakeLists.txt doesn't support this workflow, you should have a very good reason. Most tooling will assume a workflow like this, so you're severely limiting yourself.
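For what it's worth, newer cmake versions also offer generator-agnostic spellings of the same workflow (the -S/-B flags need cmake 3.13+, cmake --install needs 3.15+; the paths are placeholders):

$ cmake -S /path/to/src -B build
$ cmake --build build      # drives make, ninja, etc.
$ cmake --install build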
What is a standard, what is a courtesy, and what is none of those?
Outside of the above, cmake is pretty much the wild west. Things are becoming more standardized thanks to better documentation and training, but it's far from perfect.
A well-behaved cmake project should export its targets (lots of questions and answers on Stack Overflow about this) and propagate flags and dependencies. This makes it far easier for dependent projects to consume exported targets, and luckily it's easy to do.
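A minimal sketch of what exporting can look like (the target name mylib and the install paths are placeholders, and the destination-less install(TARGETS ...) form needs cmake 3.14+):

add_library(mylib src/mylib.cpp)
target_include_directories(mylib PUBLIC
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
    $<INSTALL_INTERFACE:include>)
install(TARGETS mylib EXPORT mylibTargets)
install(EXPORT mylibTargets NAMESPACE mylib:: DESTINATION lib/cmake/mylib)

With a small config file on top of that, consumers can find_package the project and link against mylib::mylib.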
How can I tell that something is a feature, an archaic backwards compatibility, or both?
There's nothing I'm aware of that makes these distinctions. In general, newer methods leverage the target_* functions instead of the global ones (e.g., target_include_directories vs include_directories). The target_* functions are also used to propagate flags, include directories, compiler features, and dependent libraries like I mentioned above.
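For example (mylib is a placeholder target):

# old, directory-global style: affects every target defined in this directory
include_directories(include)
# newer, target-scoped style: PRIVATE/PUBLIC/INTERFACE control propagation to consumers
target_include_directories(mylib PUBLIC include)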
Am I doing something wrong, or is it really what working with cmake should look like?
You're talking about managing external dependencies, and I'm going to skip this to avoid getting into opinions. The short version is that C and C++ dependencies are hard, and there are many competing ways of managing them in a project. They each have pros and cons, but most are still designed for their authors' use cases. You'll have to figure out what use cases you need, and choose tools and workflows based on that.
Do I really have to "reverse-engineer" other people's CMakeLists in order to use a library?
A well-behaved cmake project will export its targets properly, even if they use different dependency management than you do. If they don't, send the project a pull request (exporting isn't hard, and it's good to learn how) or just file bugs against them, especially if they're already using cmake as a build system.
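As a stopgap when a project doesn't document its targets, you can also ask cmake what a fetched directory defined instead of guessing. A hedged sketch, reusing the nlohmann name from the question (BUILDSYSTEM_TARGETS is a standard directory property, but it only lists targets defined in that exact directory, not its subdirectories):

get_property(dep_targets DIRECTORY "${nlohmann_SOURCE_DIR}" PROPERTY BUILDSYSTEM_TARGETS)
message(STATUS "Targets defined by the dependency: ${dep_targets}")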
Under these circumstances, how can I convince my coworkers to use similar work process?
It depends on your coworkers, and mileage will vary. I've dealt with coworkers who want to embrace best practices and support flexibility, and I've dealt with coworkers who are content only doing enough to solve the problems we're facing right now.
Many a time, I would do the standard ./configure, make, and make install when compiling a package.
However, there are times when I see a package being built with optional make parameters that are specific to that package.
Is there an easy way to list the bundled targets other than going through the Makefile source?
As a general statement, no.
If there are meaningful variables the project should call them out specifically.
The bash (and probably zsh) tab-completion support does attempt to discover the available make targets (with varying degrees of success), if that is of any help.
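If you just want a rough list, a common (imperfect) heuristic is to grep the Makefile for rule headers; this misses implicit and generated targets and can produce false positives, but it's often good enough for a first look:

$ grep -E '^[a-zA-Z0-9_.-]+:([^=]|$)' Makefile | cut -d: -f1 | sort -u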
My program depends on two libraries. The first one uses scons and the last one uses qmake. The program itself uses scons. So to build the whole project, I have a makefile that builds the first library with scons and the second library with qmake.
Is it considered bad practice to use multiple build tools on the same project? Should I create a scons-file to build the last library too?
I would prefer not to over-complicate things if it's not necessary. This is often referred to as KISS.
SCons can do both normal compilations and Qt compilations. Likewise with Qt (qmake).
My personal preference would be to use SCons for both. To build Qt with SCons, refer to the qt4tools, or the normal SCons qt tool, mentioned here.
I wouldn't say so. On complex projects it's common for source to come from several different source control systems (I think I read somewhere that the Chromium project references something like 7 different repositories). Projects grow and have quirks of history that mean different parts may be grafted on over time, contributed by different people with different backgrounds.
If it looks like it might be trivial to convert the project to use scons then do that. If it looks any more complicated than that then it may be worth sticking with what you've got. If you find that maintaining the qmake system becomes a time-sink then it may be worth investing the time in migrating to scons.
Think of it in economic terms: if it's not taking time or effort to maintain then leave it as is. If it is taking time to maintain, consider if it would actually take longer to migrate to something else. Don't forget the xkcd classic:
I am trying to understand the exact actions being taken by a makefile (this is my first time working with builds using make).
I want to know whether the makefile is intended to be used with BSD make, GNU make, or Windows nmake. 1) How can I tell without reading the documentation of all three and understanding their differences?
2) After reading several articles, I have come to the conclusion that make is a utility for the primary purpose of building source code into an executable or a DLL or some form of output, something that IDEs usually allow us to do. Makefile is the means of giving instructions to the make utility. Is this right?
3) What is the relation between make and the platform for which the code will be built?
4) The commands that are issued under a dependency line (target:components) are shell commands?
Thanks in advance.
How can I do this without reading documentation of all three and understanding their differences?
Most likely, you will be using GNU Make. I believe it is relatively simple to distinguish Makefiles written for the different flavors of make.
AFAIK, GNU Make and BSD Make have many differences, at least in their language syntax. For example, in GNU Make a typical conditional directive looks like:
ifdef VARIABLE
# ...
endif
And in BSD Make it is something like this (though I'm not sure):
.if defined(VARIABLE)
# ...
.endif
See also:
How similar/different are gnu make, microsoft nmake and posix standard make?
Use the same makefile for make (Linux) and nmake(Windows)
BSD Make and GNU Make compatible makefile
Makefile is the means of giving instructions to the make utility. Is this right?
Yep, absolutely right.
What is the relation between make and the platform for which the code will be built?
Make is just a utility for tracking dependencies between the artifacts to be built, and for running commands in the proper order to satisfy those dependencies. Which exact commands are used to accomplish the task is up to the user, so you can use Make for cross-compilation as well.
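For example, in this toy Makefile the same rule serves native and cross builds, because the compiler is just a variable the caller can override (the ARM toolchain name is only an example):

hello: hello.c
	$(CC) $(CFLAGS) -o $@ $<

# native build:      make
# cross build, e.g.: make CC=arm-linux-gnueabihf-gcc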
The commands that are issued under a dependency line (target : components) are shell commands?
Yes. Typically Make spawns a sub-shell to execute the specified recipe (by default, a separate shell for each recipe line).
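One consequence worth knowing: since each recipe line gets its own shell by default, shell state such as the working directory does not carry over between lines:

showdir:
	cd subdir
	pwd    # runs in a fresh shell, so it prints the directory make started in

Joining the lines with && (or using GNU make's .ONESHELL) keeps them in a single shell.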
Those who have compiled from source know how much of a pain it is to run "./configure" only to find that library X is missing; worse yet, it spits out a silly line saying some cryptic lib file is missing, which you then have to type into a web browser and cross your fingers that Google can find the answer for you...
I find that very repetitive, so my question is:
Is there a way to work out all the required dependencies without actually running "./configure"?
Read the README* or INSTALL* files in the source distribution, if there are any, or look for any documentation on the website where you downloaded it from. If the package is well documented, dependencies will usually be listed somewhere.
Given that no specific package has been mentioned, I assume this is a generic "how to avoid running configure" question. From a source tarball, no, there is no automated way to work the dependencies out. That's what configure is for (you can always read the Makefiles and autoconf files and work out the dependencies manually, but then you'll miss configure very quickly). To avoid it, you need to use something other than the straight tarball, something that has already worked out the dependencies.
For example, you can switch to building source rpms (or debs, depending on your system). Or you can use a system such as Gentoo, which is really good at working out the dependencies for you. But all of these require the package you're interested in to be available in their format, so they won't work for tarballs that you download from the source provider.
Read configure.ac/configure.in. Look for calls to AC_CHECK_LIB, AC_CHECK_LIBS, AC_SEARCH_LIBS, AM_PATH_* (some old packages that don't use pkg-config put their checks into the AM_* namespace for some reason), PKG_CHECK_MODULES (for pkg-config), AX_* (many autoconf-archive macros are written to check for uncommon dependencies), and any macro call that starts with an odd name (i.e., not AC_*, AM_*, or AX_*; try grep '^[^A]'?).
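For example, here is a hypothetical configure.ac excerpt showing what those checks tend to look like; every such line is a dependency you would need installed before configure succeeds:

# does -lz provide inflate()?
AC_CHECK_LIB([z], [inflate], [], [AC_MSG_ERROR([zlib is required])])
# pkg-config check for a minimum glib version
PKG_CHECK_MODULES([GLIB], [glib-2.0 >= 2.40])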
One thing you can do that would be good for the community is to submit a bug report/feature request to the package maintainers. There are quite a few packages whose configure script does not abort on the first missing dependency, but runs to completion and then prints a summary of all the dependencies that are missing. That greatly reduces the tedium you describe. Unfortunately, "quite a few" translates to less than .00001 percent (this is a made up statistic). If you can convince the package maintainers to re-write their configure script to support this behavior, you will contribute to making the world a better place.
Good luck with that!