Is there a "Keep-going" flag for configure scripts? - build

There's a flag for the make program that causes compilation to continue as far as possible, so as to show as many errors as possible.
From make(1):
-k, --keep-going
Continue as much as possible after an error. While the target
that failed, and those that depend on it, cannot be remade, the
other dependencies of these targets can be processed all the same.
I was wondering if there's anything one can do to get the same behavior from ./configure scripts.
I was trying to ./configure Pidgin to install it from source. But the configure script kept bugging me about dependencies that I don't need, and my only solution to the interruptions was to give --disable flags to the configure script.
That's why I would like to run through the configure script as far as possible so that it can notify me of all the dependencies at once. That way I can choose which I need to --disable and which I need to install, in one pass rather than having to run the configure script for each and every unmet dependency.
Is this possible?

autoconf doesn't have a concept of dependencies, so the person writing the autoconf input would have to implement that themselves, and it would be highly painful. The reason autoconf doesn't try is that, while compilation dependencies in a Makefile are generally simple, figuring out build dependencies can be complex; and even when they're simple, it needs to be done in m4, so it's going to be nightmarish. (m4 is easy for computers but hard for people.)

I believe it's not possible. If the author of configure.ac decides to call exit from the script, there's not much we can do about it. The normal way (using AC_MSG_ERROR et al.) calls exit too (have a look at the as_fn_exit function).
You may try, though, to hack the configure script and change the calls to exit into something that doesn't quit. But beware that the logic may end up completely broken…
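A rough sketch of that kind of hack, assuming a stock autoconf-generated script (back up first; direct calls to plain exit will still quit, and the remaining logic may no longer make sense):
cp configure configure.orig
# Shadow the generated exit helper with a no-op so later checks still run:
sed -i 's/^as_fn_exit ()/as_fn_exit () { return 0; }; as_fn_exit_orig ()/' configure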

Is it possible to use Go build with extra build steps?

What do you do when go build is not enough and you need to run extra commands along with go build? Do the go tools have this use case covered? If so, what's the convention?
I noticed it's possible to pass extra flags to build tools with:
//#cgo pkg-config: glib-2.0 gobject-2.0 etc etc
import "C"
Is it possible to run extra commands or at least tell go build to use a Makefile?
No. The go tool isn't intended to be a generic build system. There are some provisions made for cgo (like pkg-config), but it's not extensible.
In Go 1.4 there will be the generate command (a sketch follows below). It will let you run arbitrary commands to pre-process source files, but it always has to be a separate step that is run explicitly; you can't hook it into go get, go build, or go install.
Many projects that require a more complicated build use a script or a Makefile, and give up general go get-ability. Library packages, however, should strive to be get-able, for simplicity in dependency resolution.
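For a taste of what the generate command mentioned above looks like, here is the canonical example from its design document (the stringer tool generates a String() method for the type):
package painkiller

//go:generate stringer -type=Pill

type Pill int

const (
	Placebo Pill = iota
	Aspirin
	Ibuprofen
)
You then run go generate explicitly, followed by the usual go build.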
I don't believe you can add extra steps.
pkg-config is a special keyword for the built-in build system.
Normally, more complex builds are accomplished with a makefile that calls go build at the appropriate step.
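A minimal sketch of that kind of Makefile (the generate step and the script it runs are placeholders for whatever extra work your build needs):
.PHONY: all generate build
all: generate build

generate:
	./gen-sources.sh

build:
	go build ./...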

Is there a way to work out all the required dependencies but without doing "./configure" - C

Those who have compiled from source know how much of a pain it is to run "./configure" only to find that library X is missing; worse yet, it spits out a cryptic line saying some lib file is missing, which you then have to type into a web browser, crossing your fingers that Google can find the answer for you...
I find that very repetitive, so my question is:
Is there a way to work out all the required dependencies but without doing "./configure"
Read the README* or INSTALL* files in the source distribution, if there are any, or look for any documentation on the website where you downloaded it from. If the package is well documented, dependencies will usually be listed somewhere.
Given that no specific package has been mentioned, I assume this is a generic "how to avoid using configure" question. From a source tarball, no, there is no automated way to work the dependencies out. That's what configure is for (you can always read the Makefiles and autoconf files and work out the dependencies manually, but then you'll miss configure very quickly). To avoid it, you need to use something other than the straight tarball, something which has already worked out the dependencies.
For example, you can switch to building source rpms (or debs, depending on your system). Or you can use a system such as Gentoo, which is really good at working out the dependencies for you. But all of these require the package you're interested in to be available in their format, so they won't work for tarballs that you download straight from the source provider.
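For instance, on a Debian-style system the distribution already knows each packaged program's build dependencies and can install them in one go (using pidgin as an example package name):
sudo apt-get build-dep pidgin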
Read configure.ac/configure.in. Look for calls to AC_CHECK_LIB, AC_CHECK_LIBS, AC_SEARCH_LIBS, AM_PATH_* (some old packages that don't use pkg-config put their checks into the AM_* namespace for some reason), PKG_CHECK_MODULES (for pkg-config), AX_* (many autoconf-archive macros are written to check for uncommon dependencies), and any macro call that starts with an odd name (i.e., not AC_*, AM_*, or AX_*. Try grep '^[^A]'?).
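For example, a quick scan along those lines (the macro list is illustrative, not exhaustive):
grep -E 'AC_CHECK_LIB|AC_SEARCH_LIBS|PKG_CHECK_MODULES|AM_PATH_|AX_' configure.ac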
One thing you can do that would be good for the community is to submit a bug report/feature request to the package maintainers. There are quite a few packages whose configure script does not abort on the first missing dependency, but runs to completion and then prints a summary of all the dependencies that are missing. That greatly reduces the tedium you describe. Unfortunately, "quite a few" translates to less than .00001 percent (this is a made-up statistic). If you can convince the package maintainers to rewrite their configure script to support this behavior, you will contribute to making the world a better place.
Good luck with that!

How to unit test WIX merge modules?

I am building merge modules with WiX. The batch files which call the WiX tools to generate the merge modules from *.wxs files are run by my daily build.
I am trying to figure out how I could automate the testing of these merge modules. Things I would like to test are whether the merge module installs the required files, whether the versions of the files are correct, etc.
One idea I have is to write a script (maybe VBScript) to install the merge module at a temporary location and check if it has installed everything correctly. However, I am not sure if this is the right way to do it.
Are there any standard ways of writing unit tests for merge modules? Any ideas around how to go about this are welcome.
When you test an installer, the primary goals are to verify that:
1. When installing the msi file, msiexec reports success (i.e. return code 0).
2. After running the installer, your application can be started and works as expected.
The first point should be easy enough to do, though if you want to keep the test automated you can only test the non-interactive install. Here is an example of how to do that from a batch file or on the command line:
msiexec /i myinstaller.msi /passive || echo ERROR: non-zero return code!
The second point is a bit more complicated. I think the best way to achieve this is to build some integration tests into your application, and invoke those tests after the install:
"c:\program files\mystuff\app.exe" /selftest || echo ERROR: non-zero return code!
In your case you're testing a merge module rather than an entire installer, but the same approach can be used. You will just have to do the additional work of building a "self test" application and an installer for it that consumes your merge module.
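For illustration, the consuming installer's .wxs could pull in the module roughly like this (all Ids and file names here are hypothetical):
<Directory Id="TARGETDIR" Name="SourceDir">
  <Merge Id="MyModule" SourceFile="MyModule.msm" Language="1033" DiskId="1" />
</Directory>
<Feature Id="Main" Level="1">
  <MergeRef Id="MyModule" />
</Feature>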
I've often thought about this but haven't come up with anything that I like. First, I would call this integration testing, not strictly unit testing. Second, the problem of "right files" and "right versions" is difficult to define.
I'm often tempted to say WiX/MSI is just data that defines what the installer is to do. It's declarative in nature and therefore, by definition, correct. It's tempting to want to create yet another set of data that cross-checks the implementation of the installer, but what exactly does that accomplish that the first data set didn't already represent? It's hard enough sometimes to maintain the list of files that goes into an application, let alone a second list of files.
I continue to think about this and wonder if there's an approach that would make sense but at this point I just do my normal MSI validation.
You could use a script or a small console program that will do the job, just as you suggested.
With your build process you could also build a basic setup that just consumes the merge module. Your script would install this, then run another script or console app that checks whether all the files are in place, whether they have the correct versions, whether all the registry keys are installed, etc. After all the output is gathered, your main script would uninstall everything. You could also run the check program after uninstalling, to be sure that everything is gone and that the uninstall works correctly. I would recommend this if, for example, you have custom actions set for install and uninstall.
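A skeleton of that loop as a batch file (setup.msi and check.exe are placeholder names; /installed and /removed are hypothetical flags your own checker would implement):
msiexec /i setup.msi /qn || echo ERROR: install failed
check.exe /installed || echo ERROR: post-install check failed
msiexec /x setup.msi /qn || echo ERROR: uninstall failed
check.exe /removed || echo ERROR: post-uninstall check failed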
Ideally this whole install / uninstall process should be done on a separate machine, or a virtual one, in order to avoid messing up the build server.
You'll have some work to do with all these scripts, but once you have them, you'll be able to reuse them with little change for any future merge module project or plain setup project.
Hope this helps,
Adrian.

Differences between build and make?

Recently I downloaded a copy of the MySQL source code from their source tree, but I am not sure how to compile it. I do not quite understand the different processes involved in building C++ software. After I have built the code, how can I install it? Or do I still need to run make? And how do I even know whether the build was successful? It printed a lot of information.
Thanks in advance!
Well, the make program is used to build the entire program; it controls how the compiler will compile MySQL. If you are using a *NIX OS, the standard way of doing things is
./configure
which will customize the makefile used by make for your system. Then comes
make
which will build the program. In the end, if you want to install it for everyone, run
sudo make install
I also recommend that you run
./configure --help
first. It will show you options which can be used with configure. This way you won't miss some optional feature you might want to use.
Also, the wall of text you got may be important. If there are any errors or warnings during compilation, they will show up there. You may want to redirect the output of make to a file so you can read it later.
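For example, to keep a log you can read later:
make 2>&1 | tee build.log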

Making a Makefile

How can I make a Makefile? I ask because it's the best way when you distribute a program by source code. Note that this is for a C++ program and I'm just starting out in the C development world. But is it possible to make a Makefile for my Python programs too?
From your question it sounds like a tutorial or an overview of what Makefiles actually do might benefit you.
A good place to start is the GNU Make documentation.
It includes the following overview "The make utility automatically determines which pieces of a large program need to be recompiled, and issues commands to recompile them."
And its first three chapters cover:
Overview of make
An Introduction to Makefiles
Writing Makefiles
I use Makefiles for some Python projects, but this is highly dubious... I do things like:
SITE_ROOT=/var/www/apache/...
site_dist:
	cp -a assets/css build/$(SITE_ROOT)/css
	cp -a src/public/*.py build/$(SITE_ROOT)
and so on. Makefiles are nothing but batch execution systems (and fairly complex ones at that). You can use your normal Python tools (to generate .pyc files and the like) the same way you would use GCC.
PY_COMPILE_TOOL=pycompiler
all: myfile.pyc
	cp myfile.pyc /usr/share/python/...wherever
myfile.pyc: <deps>
	$(PY_COMPILE_TOOL) myfile.py
Then
$ make all
And so on. Just treat your operations like any other. Your pycompiler might be something simple like:
#!/usr/bin/python
import sys
import py_compile

# Compile the file named on the command line into a .pyc
py_compile.compile(sys.argv[1])
or some variation on
$ python -mcompileall .
It is all the same. Makefiles are nothing special, just automated executions and the ability to check if files need updating.
How can I make a Makefile, because it's the best way when you distribute a program by source code
It's not. For example, KDE uses CMake, and Wesnoth uses SCons. I would suggest one of these systems instead; they are easier and more powerful than make. CMake can generate makefiles. :-)
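For a taste, a complete CMakeLists.txt for a single-executable C++ project can be as small as this (project and file names are placeholders):
cmake_minimum_required(VERSION 2.8)
project(myprogram)
add_executable(myprogram main.cpp)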
A simple Makefile usually consists of a set of targets, their dependencies, and the actions performed by each target:
all: output.out

output.out: dependency.o dependency2.o
	gcc -o output.out dependency.o dependency2.o

dependency.o: dependency.c
	gcc -c -o dependency.o dependency.c

dependency2.o: dependency2.c
	gcc -c -o dependency2.o dependency2.c
The target all (the first one in the example) is the one run when no target argument is specified on the make command line; make then tries to build its dependencies if they don't exist or are not up to date.
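Invoking it then looks like this (illustrative):
$ make                # builds the default target, all
$ make dependency.o   # builds only that object and its prerequisites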
Python programs, meanwhile, are usually distributed with a setup.py script, which uses distutils to build the software. distutils has extensive documentation, which should be a good starting point.
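A minimal setup.py sketch (the project name and package layout are hypothetical):
from distutils.core import setup

# Build with `python setup.py build`, install with `python setup.py install`.
setup(
    name="myprogram",
    version="0.1",
    packages=["myprogram"],
)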
If you are asking about a portable way of creating Makefiles, you can take a look at http://www.cmake.org/cmake/project/about.html