Is there any quick guideline for when it is safe to ask make to do its work with multiple jobs?
I ask because, in the past, it usually seemed to work fine for me but recently it was persistently causing troubles.
I did a "make -j8" (use eight jobs to speed the building) and kept getting:
".so file format not recognized" on one of the shared libraries that was being generated. This was even after cleaning the shared library out (make clean successfully did remove it, but once I also did the unnecessary step of manually removing that) and starting again.
After seeing the problem I'm now leery to use multiple jobs at all. Is there some way to tell ahead of time if multiple jobs can or can't be used with make?
This all depends on how well your dependencies are laid out in your Makefile. You have to be very careful about specifying every dependency, and not just rely on line order and coincidence.
I've had situations where the output of make -j2 worked just fine but make -j4 didn't, because one item was getting compiled before it should have been, and I hadn't been careful enough in specifying that. Similarly, I've had make -j4 appear to work, only to find that certain parts were compiling with stale code, meaning the final product was different from what I expected. I had to run make clean before every build until I found the dependency issue; once that was fixed I could safely use make -j4 at will again.
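For illustration, here is a hypothetical, minimal Makefile showing the kind of hole I mean (all file names made up; recipe lines must begin with a tab):

    # version.h is generated, and main.c includes it.  Without the
    # "version.h" prerequisite on main.o, a serial make may happen to
    # generate the header first, but "make -j4" can compile main.c
    # before (or while) version.h is written.
    all: app

    version.h:
        ./gen-version.sh > $@

    main.o: main.c version.h
        $(CC) -c -o $@ main.c

    app: main.o
        $(CC) -o $@ main.o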
Let me answer each of your questions:
Is there any quick guideline for when it is safe to ask make to do
its work with multiple jobs?
Not in my book. If your dependencies are entirely correct, you are good to go. Otherwise, be careful.
Is there some way to tell ahead of time if multiple jobs can or can't
be used with make?
I think the answer is the same as the previous item. I usually assume that it will work until I know it doesn't, and then I try to find the problem. That might not be a safe way to work, however. If you are noticing unexpected results, use make clean and then make without using multiple jobs, and see if that resolves the issue. If it does, you can reasonably assume your Makefile's dependencies are not correct.
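A quick way to run that check from the shell:

    make clean && make          # serial baseline
    make clean && make -j8      # parallel build to compare against it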
You also have an implied question about the ".so file format not recognized" issue. That sounds like the same problem. Specifically, perhaps the .so is getting built with the wrong dependencies, or the wrong .so is getting pulled in before the correct one is found or built, or the .so is in an incomplete state when it is being linked against.
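To sketch what a fix for that usually looks like (again a hypothetical Makefile with made-up names):

    # If app does not list libfoo.so as a prerequisite, "make -j8" may
    # run the link step while the library is still being written, and
    # the linker then sees a half-written file ("file format not
    # recognized").  Listing it makes the ordering explicit.
    all: app

    libfoo.so: foo.o
        $(CC) -shared -o $@ $^

    app: main.o libfoo.so
        $(CC) -o $@ main.o -L. -lfoo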
Related
Every so often I (re)compile some C (or C++) file I am working on -- which, by the way, succeeds without any warnings -- and then I execute my program only to realize that nothing has changed since my previous compilation. To keep things simple, let's assume that I added an instruction to my source to print some debugging information to the screen, so that I have visual evidence of the trouble: indeed, I compile, execute, and unexpectedly nothing is printed to the screen.
This happened to me once when I had buggy code (I ran out of the bounds of a static array). Of course, if your code has some kind of hidden bug (What are all the common undefined behaviours that a C++ programmer should know about?), the compiled code can be pretty much anything.
This happened to me twice when I used a ridiculously slow network hard drive which -- I guess -- simply did not update my executable file after compilation, and I kept running and re-running the old version despite the updated source. I am just speculating here, and feel free to correct me if such a phenomenon is impossible, but I suspect it had something to do with certain processes waiting for IO.
Well, such things can of course happen (and they indeed do) when you execute an old version from the wrong directory (that is: you execute something similar to, but actually completely unrelated to, your source).
It is happening again, and it annoys me enough to ask: how do you make sure that your executable matches the source you are working on? Should I compare the date strings of the source and the executable in the main function? Should I delete the executable prior to compilation? I guess people might do something similar by means of version control.
Note: I was warned that this might be a subjective topic likely doomed to be closed.
Just use good ol' version control.
In the easy case, you can just add a visible version-id in the code and check it (a hash, revision-id, or timestamp).
If your project has a lot of dependent files and you suspect that an older version than the "latest" ended up in the produced code, you can (besides, obviously, good makefile rules) also monitor the version of every file used to build the code (a VCS-dependent trick, but not a heavy one).
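A minimal sketch of the version-id idea in a Makefile, assuming a git checkout (the macro name BUILD_REV is made up, and the program is expected to print it at startup):

    # Bake the current revision into the build; compare what the program
    # prints against "git rev-parse --short HEAD" in the working tree.
    REV := $(shell git rev-parse --short HEAD)
    CXXFLAGS += -DBUILD_REV=\"$(REV)\"

    myprog: main.o
        $(CXX) $(CXXFLAGS) -o $@ main.o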
Check the timestamp of your executable. That should give you a hint regarding whether or not it is recent/up-to-date.
Alternatively, calculate a checksum for your executable and display it on startup; then, if the checksum is the same, you know the executable was not updated.
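Both checks are one-liners from the shell (binary and source names hypothetical):

    # Newest file first: the executable should be newer than every
    # source file it was built from.
    ls -lt myprog *.cpp
    # A checksum gives a second check: if it hasn't changed, neither
    # has the binary.
    md5sum myprog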
I am working on a large C++ code base and I want to speed up its compilation times. One thing I know is that all of my library includes are on a network drive, which slows things down a lot. If I can get make or something else to automatically cache them, either in /tmp or on a ramdisk, I expect that to improve the compile times for me quite a bit. Is there a way to do that? Of course I can copy them manually and set up a sync job, but then every developer will have to do that on every box they ever compile on, so I am looking for an automated solution.
Thanks!
Of course. There are lots and lots of ways. You can just have a make rule that copies things and have people run make copylocal or whatever. This is fast, but people have to remember to do it, and if the libraries change a lot this could be annoying. You can make a rule that copies things and put it as a prerequisite to some other target so it's done first on every build. That adds time to every build: will the copy step plus using the local copies end up taking less total time than just using the remote copies? Who knows?
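A sketch of that kind of copy rule, with made-up paths (assuming rsync is available):

    # Mirror the network headers into a local cache and point the
    # compiler at the local copy; rsync only transfers what changed.
    LOCAL_INC := /tmp/libcache/include

    CXXFLAGS += -I$(LOCAL_INC)

    .PHONY: copylocal
    copylocal:
        mkdir -p $(LOCAL_INC)
        rsync -a /net/libs/include/ $(LOCAL_INC)/

    # Either have people run "make copylocal" by hand, or make it a
    # prerequisite of the targets that need the headers so the copy
    # happens on every build.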
You could also use a tool like ccache to locally cache the results of the compilation, rather than copying the library headers. This will give you a lot more savings for a typical small incremental build and it's easily integrated into the build system, but it will slightly slow down "rebuild the world" steps to cache the objects.
Etc. Your question is too open-ended to be easily answerable.
Avoid using network file systems for code. Use a version control system like git.
You might also consider using ccache.
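If the Makefile uses g++ directly, the usual ccache hookup is a single line (assuming ccache is installed):

    # Route all C++ compiles through ccache; cache hits skip the
    # compiler entirely, which helps most when rebuilding unchanged files.
    CXX := ccache g++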
In C++ I can achieve the same results by using a shell script in which I write all the compilation instructions. So my question is:
Are there any good reasons for using a makefile?
Do you have any examples to demonstrate this?
One of the main reasons to use a makefile is that it will recompile only the source files which have changed since the last time you built your project. Writing a shell script to do this will take much more work than writing the makefile.
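A minimal, hypothetical example of what that buys you:

    # Only the .o whose .cpp (or the listed header) changed is
    # recompiled; a naive shell script would recompile all three
    # every time.
    OBJS = a.o b.o c.o

    app: $(OBJS)
        $(CXX) -o $@ $(OBJS)

    %.o: %.cpp common.h
        $(CXX) $(CXXFLAGS) -c -o $@ $<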
Wear and tear on the keyboard.
Preventing it from taking ages to compile everything.
Easier to switch between compiling for debugging and for production.
As for examples, see most GNU projects written in C/C++.
You might want to take a look at autotools. They will generate a Makefile for you and can help with code portability as well. However, you have to write some relatively simple template files that the autotools use to construct the configure script, so that an end user can run ./configure [options]; make. They add many features to your makefile that an end user might expect. For a good introduction see: http://www.freesoftwaremagazine.com/articles/brief_introduction_to_gnu_autotools
Let's say you do write a shell script. It will work and you will be happy. You will keep using it every chance you get. You will add parameters to it to allow you to specify options. You will also notice that it re-compiles everything, all the time. So you will then try and make it smarter so it only re-compiles the files that have changed. What you will be doing, in effect, is writing your own make system.
That's fine as long as you had a good reason to do it. For example: Existing make solutions don't do X well, so you wrote one to solve that problem.
You, however, don't have a problem that cannot be solved by an existing make system (or at least, it sounds like you don't :) ). The problem you're trying to solve has already been solved. Just read up and use the solution - a make file :)
So, to answer your question, yes, there are a lot - most of which you won't be aware of until you need the functionality. When you do, you will be grateful it already does what you want.
It's the same logic you apply to using libraries in code.
I'm working on a project with very lightweight build steps that look like this:
cat f1.js f2.js f3.js f4.js > glom.js
So first I wrote a makefile that does it and it was good.
Then as I was iterating on the project I realized that having to run make manually was really annoying so I wrote a python script that watches the filesystem and makefile and runs make whenever something changes.
That was fine too, but it occurred to me that this is something make should do on its own, and I would rather not have a python script floating around the source tree when make can do the job just fine.
So I searched around but didn't find any examples of this. My questions are as follows:
Does make have this feature?
If not...
What's a sensible way to get it to behave this way?
Is this a sensible feature for make to have? (if I were to implement it, would anyone care?)
This isn't the responsibility of Make, which is why it doesn't do it. In many cases, rebuilding is a complex, time-consuming process, in which case you certainly don't want it to occur on every single change to the source files.
However, many IDEs are capable of performing auto-rebuild when changes are made (e.g. Eclipse CDT).
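If you still want it outside an IDE, a few lines of shell can replace the python watcher. A sketch assuming the inotify-tools package (on other platforms, tools like entr or fswatch play the same role), using the file names from the question:

    # Re-run make whenever a watched source file is written.
    while inotifywait -e close_write f1.js f2.js f3.js f4.js Makefile; do
        make
    done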
I created a script to remove useless code in many C++ libs (like ifdefs, comments, etc.).
Now, I want to compare the original lib and the "treated" lib to check if my script has done a good job.
The only solution I found is to compare the exported symbols.
I'm wondering if you have any other ideas to check the integrity?
FIRST of all: Unit tests are designed for this purpose.
You might get some mileage out of:
compiling without optimization (-O0) and without debug information (or stripping it afterwards),
objdump -dCS,
and comparing the disassemblies. Prepare to meet some (possibly many) spurious differences (the strip step was there to prevent needless differences in source line number info). In particular you will have to
ignore addresses
ignore generated label names
But if the transformation really does lead to unmodified code, you'd be able to verify it 1:1 using this technique and a little work.
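A sketch of that workflow in shell (library paths are made up; the sed step strips the leading instruction addresses so the diff is readable):

    objdump -dC original/libfoo.so | sed 's/^[ 0-9a-f]*://' > orig.asm
    objdump -dC treated/libfoo.so  | sed 's/^[ 0-9a-f]*://' > new.asm
    # Expect leftover noise from function addresses and call targets;
    # those are the "addresses" and "generated label names" to ignore.
    diff -u orig.asm new.asm | less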
Assert-based unit tests would help you. Have some test cases, run them against the original library, and then run them again with the code removed.
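One way to drive that from the shell, with hypothetical file and library names:

    # Link the same assert-based tests once against each version of the
    # library; both runs should pass.
    g++ -o test_orig    tests.cpp -Loriginal -lfoo
    g++ -o test_treated tests.cpp -Ltreated  -lfoo
    LD_LIBRARY_PATH=original ./test_orig
    LD_LIBRARY_PATH=treated  ./test_treated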