I'm working on a project with very lightweight build steps that look like this:
cat f1.js f2.js f3.js f4.js > glom.js
So first I wrote a makefile that does it and it was good.
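For reference, the makefile is roughly this (using the file names from the build step above; recipe lines must begin with a tab):

    SRCS := f1.js f2.js f3.js f4.js

    # Concatenate the sources into glom.js, but only when one of them
    # is newer than the current glom.js.
    glom.js: $(SRCS)
    	cat $(SRCS) > $@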
Then, as I was iterating on the project, I realized that having to run make manually was really annoying, so I wrote a Python script that watches the filesystem and the makefile and runs make whenever something changes.
That was fine too, but it occurred to me that this is something make should do on its own, and I would rather not have a python script floating around the source tree when make can do the job just fine.
So I searched around but didn't find any examples of this. My questions are as follows:
Does make have this feature?
If not...
What's a sensible way to get it to behave this way?
Is this a sensible feature for make to have? (if I were to implement it, would anyone care?)
This isn't the responsibility of Make, which is why it doesn't do it. In many cases, rebuilding is a complex, time-consuming process, in which case you certainly don't want it to occur on every single change to the source files.
However, many IDEs are capable of performing auto-rebuild when changes are made (e.g. Eclipse CDT).
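If you do want the watch-and-rebuild loop without a Python script in the tree, a shell one-liner can stand in for it. A minimal sketch, assuming inotify-tools is installed (Linux-specific), and excluding the generated file so the loop doesn't trigger on its own output:

    while inotifywait -qq -r -e modify,create,delete --exclude 'glom\.js' .; do
        make
    done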
CodeMap looks like it would be incredibly useful, but I haven't been able to make it usable yet. My most frequent use case is extracting functionality into new classes to clean things up, but despite it appearing to be exactly what I need, I can't get it to tap into any of the things the IDE knows about.
Here are my biggest roadblocks:
Making a new .dgml and adding even a single method requires connecting to the code index, a process which takes 5-10 minutes, and which requires a full rebuild regardless of what state the code is in at the time. I can turn off the build, but how do I make the code stay indexed? Must it re-index the entire codebase again for each file?
Once I successfully add a method, when I add another, no relations are ever detected. I want to map what calls what, and can only get relations when adding an entire project. This would take me under 1s each using the Call Hierarchy functionality. What am I doing wrong that it can't obtain this information on its own?
I know I can do these things manually, but I might as well just be using View Call Hierarchy and Visio, which at least wouldn't make me index for every new file!
I haven't been able to find anything on the internet about issues with native C++, just very high-level instructions on use. Has anyone managed to make this thing work like professional software for a production native C++ project?
Is there any quick guideline for when it is safe to ask make to do its work with multiple jobs?
I ask because, in the past, it usually seemed to work fine for me but recently it was persistently causing troubles.
I did a "make -j8" (use eight jobs to speed the building) and kept getting:
".so file format not recognized" on one of the shared libraries that was being generated. This was even after cleaning the shared library out (make clean successfully did remove it, but once I also did the unnecessary step of manually removing that) and starting again.
After seeing the problem I'm now leery to use multiple jobs at all. Is there some way to tell ahead of time if multiple jobs can or can't be used with make?
This all depends on how well your dependencies are laid out in your Makefile. You have to be very careful about specifying every dependency, and not just rely on line order and coincidence.
I've had situations where the output of make -j2 worked just fine but make -j4 didn't, because one item was getting compiled before it should have been, and I hadn't been careful enough in specifying that dependency. Similarly, I've had make -j4 appear to work, only to find that certain parts were compiling with stale code, meaning the final product was different than I expected. I had to run make clean before every build until I found the dependency issue; once it was fixed, I could safely use make -j4 at will again.
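To illustrate with a contrived makefile (hypothetical targets): this works serially, because make happens to build libfoo.so before prog, but under -j both can start at once and the link races with the library's creation:

    all: libfoo.so prog          # serial make builds left to right by luck

    libfoo.so: foo.c
    	$(CC) -shared -fPIC -o $@ foo.c

    # BUG: prog really needs libfoo.so, but never says so.
    prog: main.c
    	$(CC) -o $@ main.c -L. -lfoo

    # Fix: declare the real dependency, and -j orders the build correctly:
    # prog: main.c libfoo.so

With -j, make may try to link prog while libfoo.so is missing or only half-written -- exactly the kind of situation that produces errors like "file format not recognized".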
Let me answer each of your questions:
Is there any quick guideline for when it is safe to ask make to do its work with multiple jobs?
Not in my book. If your dependencies are entirely correct, you are good to go. Otherwise, be careful.
Is there some way to tell ahead of time if multiple jobs can or can't be used with make?
I think the answer is the same as the previous item. I usually assume that it will work until I know it doesn't, and then I try to find the problem. That might not be a safe way to work, however. If you are noticing unexpected results, use make clean and then make without using multiple jobs, and see if that resolves the issue. If it does, you can reasonably assume your Makefile's dependencies are not correct.
You also have an implied question about the ".so file format not recognized" issue. That sounds like the same kind of problem. Specifically, perhaps the .so is getting built with the wrong dependencies, or the wrong .so is getting pulled in before the correct one is found or built, or the .so is in an incomplete state when it is called upon.
I am using Emacs + Tuareg mode to do my OCaml project.
It works fine and I've gotten used to it.
However, as my project's source base gets bigger and bigger, I find managing the project is getting harder and harder.
This is especially true for refactoring. If I change a module name or a function name, I have to search everywhere for the parts that need to change accordingly, or else compile again and again and let the compiler tell me where to go.
It is not convenient.
Can anyone suggest a good way to manage a source base?
thanks
A good option is TypeRex. This is an alternative Emacs mode created by OCamlPro that has a bunch of OCaml-aware features including proper support for refactoring (like renaming identifiers).
It also has a bunch of other nice features like good auto-complete, semantic grep and so on.
Unfortunately, this involves changing your build process to use some wrapper programs. These generate the additional information the mode needs to function. However, once you get the build set up, it's a really awesome editing environment.
In C++ I can achieve the same results (as with a makefile) by using a shell script where I write all the compilation instructions. So my question is:
Are there any good reasons for using a makefile?
Do you have any examples to demonstrate this?
One of the main reasons to use a makefile is that it will recompile only the source files which have changed since the last time you built your project. Writing a shell script to do this will take much more work than writing the makefile.
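A minimal sketch of such a makefile (hypothetical file names):

    CXX      := g++
    CXXFLAGS := -Wall -O2
    SRCS     := main.cpp util.cpp
    OBJS     := $(SRCS:.cpp=.o)

    app: $(OBJS)
    	$(CXX) $(CXXFLAGS) -o $@ $(OBJS)

    # make compares timestamps: a .o is rebuilt only when its .cpp is newer.
    %.o: %.cpp
    	$(CXX) $(CXXFLAGS) -c $< -o $@

    .PHONY: clean
    clean:
    	rm -f app $(OBJS)

Touch util.cpp and only util.o is recompiled before relinking; a naive shell script would recompile everything every time.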
Wear and tear on the keyboard.
Preventing it from taking ages to compile everything
Easier to change between compiling for debugging and production
As for examples - see most GNU projects written in C/C++
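On the debugging/production point, a single variable can switch the whole build. A minimal sketch (hypothetical flags and target):

    # make              -> optimized production build
    # make BUILD=debug  -> debug build
    BUILD ?= release
    ifeq ($(BUILD),debug)
        CXXFLAGS := -g -O0
    else
        CXXFLAGS := -O2 -DNDEBUG
    endif

    app: main.cpp
    	$(CXX) $(CXXFLAGS) -o $@ main.cpp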
You might want to take a look at the autotools. They will make a Makefile for you, and they can help with code portability as well. However, you have to write some relatively simple template files that the autotools use to construct the configure script, so that an end user can run ./configure [options]; make. They provide many features in your makefile that an end user might expect. For a good introduction see: http://www.freesoftwaremagazine.com/articles/brief_introduction_to_gnu_autotools
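To give a feel for it, here is a minimal sketch of those template files for a hypothetical "hello" package (run autoreconf --install, then ./configure && make):

    # configure.ac
    AC_INIT([hello], [1.0])
    AM_INIT_AUTOMAKE([foreign])
    AC_PROG_CXX
    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT

    # Makefile.am
    bin_PROGRAMS  = hello
    hello_SOURCES = main.cpp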
Let's say you do write a shell script. It will work and you will be happy. You will keep using it every chance you get. You will add parameters to it to allow you to specify options. You will also notice that it re-compiles everything, all the time. So you will then try and make it smarter so it only re-compiles the files that have changed. What you will be doing, in effect, is writing your own make system.
That's fine as long as you had a good reason to do it. For example: Existing make solutions don't do X well, so you wrote one to solve that problem.
You, however, don't have a problem that cannot be solved by an existing make system (or at least, it sounds like you don't :) ). The problem you're trying to solve has already been solved. Just read up and use the solution - a make file :)
So, to answer your question, yes, there are a lot - most of which you won't be aware of until you need the functionality. When you do, you will be grateful it already does what you want.
It's the same logic you apply to using libraries in code.
I'm working on a large C++ system built with ant+cpptasks. It works well enough, but the build.xml file is getting out of hand, because the standard operating procedure for adding a new library or executable target is to copy and paste another lib/exe's rules (which are already quite large). If this were "proper code", it'd be screaming out for refactoring, but being an ant newbie (more used to make or VisualStudio solutions) I'm not sure what the options are.
What are ant users' best practices for stopping ant build files from exploding?
One obvious option would be to produce the build.xml via XSLT, defining our own tags for commonly recurring patterns. Does anyone do that, or are there better ways?
You may be interested in:
<import>
<macrodef>
<subant>
Check also this article on "ant features for big projects".
If the rules are repetitive then you can factor them into an ant macro using macrodef and reuse that macro (there's a sketch of this after these suggestions).
If it is the sheer size of the file that is unmanageable, then you can perhaps break it into smaller files and have the main build.xml call targets within those files.
If it's neither of these, then you may want to consider using a build system. Even though I have not used Maven myself, I hear it can solve many issues of large and unmanageable build files.
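As a sketch of the macrodef route, with hypothetical names and assuming the cpptasks <cc> task from the question (details depend on your toolchain):

    <project name="example" default="libs">
      <!-- One reusable rule instead of a copy-pasted block per library. -->
      <macrodef name="shared-lib">
        <attribute name="name"/>
        <attribute name="srcdir"/>
        <sequential>
          <mkdir dir="obj/@{name}"/>
          <cc outtype="shared" objdir="obj/@{name}" outfile="lib/@{name}">
            <fileset dir="@{srcdir}" includes="**/*.cpp"/>
          </cc>
        </sequential>
      </macrodef>

      <target name="libs">
        <shared-lib name="foo" srcdir="src/foo"/>
        <shared-lib name="bar" srcdir="src/bar"/>
      </target>
    </project>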
Generally, if your build file is large and complex, it's a clear indication that the way your code is laid out, in terms of folders and packages, is complex and too complicated. I find that a complex ant script is a clear smell of a poorly laid-out code base.
To fix this, think about how your code is laid out. How many projects do you have? Do those projects know how to build themselves, with a master build script that knows how to bundle the individual projects/apps/components together into a larger whole?
When you are refactoring code, you are looking at ways of breaking things down so that they are easier to understand: smaller methods, smaller classes, methods and classes that do one thing. You need to apply these same principles to your code base as well.
Create smaller components that are functionally cohesive and very loosely coupled to the rest of the code. Use a build script to build each component into a library. Do this with the rest of your code. Now create a master build script that knows how to bundle up all of your libraries and build them into your application. If you have several applications, then create a build script for each app, and a master one that knows how to bundle the apps into distributables.
You should be able to see and understand the layout and structure of your code base just by looking at your build scripts. If they/it is not clean and understandable then neither is your source code.
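For the master build, <subant> can delegate to each component's own build file. A rough sketch (hypothetical layout):

    <project name="master" default="build-all">
      <!-- Build each component with its own build.xml, in dependency order. -->
      <target name="build-all">
        <subant target="build">
          <filelist dir=".">
            <file name="core/build.xml"/>
            <file name="gui/build.xml"/>
            <file name="app/build.xml"/>
          </filelist>
        </subant>
      </target>
    </project>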
Use Antlib files. It's a very clean way to
remove copy/pasted code
define default values
If you want to see an example, you can take a look at some of the build scripts I'm writing for my sandbox projects.
I would try Ant-Ivy, the agile dependency manager. We have recently started using it for some of our more complex systems and it works like a charm. The advantage here is that you don't get the overhead and transition cost of moving to Maven (it uses ant targets, so it will work with your current setup). Here is a comparison between the two.