For the past four years, I have been programming with Eclipse (for Java) and Visual Studio Express (for C#). These IDEs always seemed to provide every facility a programmer might ask for (related to programming, of course).
Lately I have been hearing about something called "build tools". I heard they're used in almost all kinds of real-world development. What are they exactly? What problems are they designed to solve? How come I never needed them in the past four years? Are they some kind of stripped-down, command-line IDEs?
What are build tools?
Build tools are programs that automate the creation of executable applications from source code (e.g., an .apk for an Android app). Building incorporates compiling, linking, and packaging the code into a usable or executable form.
Basically, build automation is the act of scripting or automating the wide variety of tasks that software developers do in their day-to-day activities (a toy sketch of such a script follows this list), such as:
Downloading dependencies.
Compiling source code into binary code.
Packaging that binary code.
Running tests.
Deploying to production systems.
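To make this concrete, here is a toy sketch, in Python, of the kind of compile/package/test sequence a build tool automates (real tools such as Make, Ant, or Maven do this declaratively). The file names, project layout, and the app's --self-test flag are hypothetical.

    #!/usr/bin/env python3
    # Toy build script: the tasks a build tool automates.
    # Paths and commands below are hypothetical.
    import subprocess
    import sys

    def run(cmd):
        """Run one build step; abort the whole build on the first failure."""
        print("==>", " ".join(cmd))
        if subprocess.call(cmd) != 0:
            sys.exit("build step failed: " + cmd[0])

    # 1. Compile source code into binary code.
    run(["javac", "-d", "build/classes", "src/Main.java"])
    # 2. Package that binary code into a usable form (a .jar here).
    run(["jar", "cfe", "build/app.jar", "Main", "-C", "build/classes", "."])
    # 3. Run tests against the packaged artifact (hypothetical flag).
    run(["java", "-jar", "build/app.jar", "--self-test"])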
Why do we use build tools or build automation?
In small projects, developers will often invoke the build process manually. This is not practical for larger projects, where it is very hard to keep track of what needs to be built, in what sequence, and what dependencies there are in the building process. Using an automation tool allows the build process to be more consistent.
Various build tools are available (naming only a few):
For Java: Ant, Maven, Gradle.
For the .NET Framework: NAnt.
For C#: MSBuild.
For further reading, you can refer to the following links:
1. Build automation
2. List of build automation software
Build tools are tools to manage and organize your builds, and they are very important in environments with many projects, especially if they are interconnected. They serve to make sure that where various people are working on various projects, they don't break anything, and that when you make your changes, you don't break anything either.
The reason you have not heard of them before is that you have not been working in a commercial environment. There is a whole lot of stuff you have probably not encountered that you will within a commercial environment, especially in software houses.
As others have said, you have been using them; however, you have not had to consider them, because you have probably been working in a different way from the usual commercial way of working.
Build tools are usually run from the command line, either inside an IDE or completely separate from it.
The idea is to separate the work of compiling and packaging your code from the work of writing and debugging it.
A build tool can be run at the command line or inside an IDE, both triggered by you. It can also be used by continuous integration tools after checking your code out of a repository and onto a clean build machine.
make was an early command-line tool used in *nix environments for building C/C++.
For a Java developer, the most popular build tools are Ant and Maven. Both can be run in IDEs like IntelliJ, Eclipse, or NetBeans. They can also be used by continuous integration tools like CruiseControl or Hudson.
Build tools are generally there to transform source code into binaries: they organize source code, set compile flags, and manage dependencies. Some of them also integrate with running unit tests, doing static analysis, and generating documentation.
Eclipse and Visual Studio are also build systems (but more IDEs); in Visual Studio's case it is the underlying MSBuild that parses the Visual Studio project files under the hood.
The origin of all build systems seems to be the famous 'make'.
There are build systems for different languages:
C++: make, cmake, premake
Java: ant+ivy, maven, gradle
C#: msbuild
Usually, build systems use either a proprietary domain-specific language (make, cmake) or XML (ant, maven, msbuild) to specify a build. The current trend is to use a real scripting language to write build scripts, such as Lua for premake and Groovy for Gradle. The advantage of using a scripting language is that it is much more flexible, and also allows you to come up with a set of standard APIs (as a build DSL).
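SCons is another instance of that trend: its build files are plain Python. As a small illustration (source file names are invented), the following SConstruct declares a library and a program, and SCons works out dependencies and build order; you run it simply by invoking scons in that directory.

    # SConstruct -- build files in SCons are ordinary Python, the
    # "real scripting language" trend described above.
    # Source file names here are hypothetical.
    env = Environment(CCFLAGS=["-O2", "-Wall"])   # compiler settings as data

    # Declare what to build; SCons derives dependencies and ordering.
    lib = env.StaticLibrary("util", ["util.c"])
    env.Program("app", ["main.c"], LIBS=[lib])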
These are the different types of process by which you can get your builds done:
1. Continuous integration builds: Here, developers check in their code, and a build is initiated right after each check-in to build the recent changes, so we know immediately whether a developer's change works. This is preferred for smaller projects or components of projects. Where multiple teams are associated with the project, or a large number of developers are working on the same project, this scenario becomes difficult to handle: if there are 'n' check-ins and the build fails at certain points, it becomes very difficult to trace whether the breakage came from one issue or from multiple issues, and if the older issues are not addressed properly it becomes very difficult to trace down later defects. The main benefit of these builds is that we know right away whether a particular check-in was successful.
2. Gated check-in builds: Here, a build is initiated right after a check-in is made, with the changes kept in a shelveset. If the build succeeds, the shelveset is committed; otherwise it is not committed to the Team Foundation Server. This gives a slightly better picture than the continuous integration build, since only successful check-ins are allowed to be committed. (A control-flow sketch of this follows the list.)
3. Nightly builds: Also referred to as scheduled builds. Here we schedule builds to run at a specific time. All the changes checked in since the last build are built during this process. This is practiced when we want to check in multiple times but do not want a build every time we check in, so we fix a time or period at which to build the checked-in code.
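The gated check-in logic above can be sketched in a few lines of Python; the repo object and run_build() are hypothetical stand-ins for what a server such as Team Foundation Server does internally.

    # Hypothetical sketch of gated check-in; repo and run_build() stand in
    # for what a server like TFS does internally.
    def gated_checkin(repo, shelveset):
        """Commit a developer's changes only if they build cleanly."""
        if run_build(shelveset):       # build the shelved changes first
            repo.commit(shelveset)     # only successful check-ins land
        else:
            repo.reject(shelveset)     # changes stay in the shelveset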
More details about these builds can be found at the locations below:
Gated Check-in Builds
Continuous Integration Builds
Nightly Builds
A build process is a process of compiling your source code, checking it for errors using some build tool, and creating builds (which are executable versions of the project). We (mainly developers) make some modifications to the source code and check that code in for the build process to happen. After the build process there are two possible results:
1. Either the build PASSES and you get an executable version of your project (the build is ready).
2. Or it FAILS, you get certain errors, and the build is not created.
There are different types of build process, such as:
1. Nightly builds
2. Gated builds
3. Continuous integration builds, etc.
Build tools help with and automate the process of creating builds.
*So, in short, a build is a version of the software in pre-release format, used by the developer or development team to gain confidence in the final result of their product by continuously monitoring the product and solving any issues early in the development process.*
You have been using them - an IDE is a build tool. For the command line you can use things like make.
People use command-line tools for things like nightly builds - so that in the morning, hangover and all, the programmer realises that the code he has been fiddling with does not work with the latest builds of the libraries!
"...it is very hard to keep track of what needs to be built" - Build tools does not help with that all. You need to know what you want to build. (Quoted from Ritesh Gun's answer)
"I heard they're used almost in all kind of real-world development" - For some reason, software developers like to work in large companies. They seem to have more unclear work directives for every individual working there.
"How come I never needed them in past four years". Probably because you are a skilled programmer.
Pseudo, meta: I think build tools do not provide any real benefit at all. They are just there to add a sense of security arising from bad company practices and lack of direction - bad software architectural leadership leading to poor actual knowledge of the project. You should never have to use build tools (for testing) in your project. Doing random testing with a lack of knowledge of the software project does not give any sort of help at all.
You should never ever add something to a project without knowing its purpose and how it will work with the other components. Components can be functionally separate, but not work together. (This is the responsibility of the software architect, I assume.)
What if 4-5 components are added to the project and you add a 6th? Together with the first added component, it might screw up everything. Nothing automatic would help detect that.
There is no shortcut other than to think, think, think.
Then there is the automatic download from repositories. Why would you ever want to do that? You need to know what you download and what you add to the project. How do you detect changes in versions of the repositories? You need to know. You can't "auto" anything.
What if we tested bicycles and prams blindfolded, with a stick, just randomly hitting around with it? That seems to be the idea of build-tool testing.
I'm sorry, there are no shortcuts:
https://en.wikipedia.org/wiki/Scientific_method
and
https://en.wikipedia.org/wiki/Analysis
Related
I haven't done much "front-end" development in about 15 years since moving to database development. I'm planning to start work on a personal project using C++ and since I already have MSDN I'll probably end up doing it in Visual Studio 2010. I'm thinking about using Subversion as a version control system eventually. Of course, I'd like to get up and running as quickly as I can, but I'd also like to avoid any pitfalls from a poorly organized project environment.
So, my question is, are there any good resources with common best practices for setting up a development environment? I'm thinking along the lines of where to break down a solution into multiple projects if necessary, how to set up a unit testing process, organizing resources, directories, etc.
Are there any great add-ons that I should make sure I have set up from the start?
Most tutorials just have one simple project, type in your code and click on build to see that your new application says, "Hello World!".
This will be a Windows application with several DLLs as well (no web development), so there doesn't need to be a deploy to a web server kind of process.
Mostly I just want to make sure that I don't miss anything big and then have to extensively refactor because of it.
Thanks!
I would also like a good answer to this question. What I've done is set things up so that each solution references a $(SolutionDir)\build directory for includes and libraries. That way each project that has dependencies on other projects can access them, and versions won't compete. Post-build commands then package up headers and .lib files into a "distribution" folder. I use CC.net to build each package on check-in. When we decide to update a dependency project, we "release" it to ourselves, which requires manually tagging, manually copying current.zip into a releases area and giving it a version number, and copying that into the /build of the projects that depend on the upgrade.
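That post-build packaging step might look roughly like the following Python sketch; the directory layout, file patterns, and version label are all hypothetical.

    # Rough sketch of the post-build packaging step described above;
    # directory names and the version label are hypothetical.
    import zipfile
    from pathlib import Path

    def package_distribution(build_dir: str, version: str) -> Path:
        """Zip headers and .lib files into a versioned distribution archive."""
        out = Path(build_dir) / ("distribution-" + version + ".zip")
        with zipfile.ZipFile(out, "w") as zf:
            for pattern in ("include/**/*.h", "lib/**/*.lib"):
                for f in Path(build_dir).glob(pattern):
                    zf.write(f, f.relative_to(build_dir))
        return out

    package_distribution("build", "1.2.0")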
Everything works pretty well except this manual process at the end. I'd really love to get rid of it but can't seem to. I read an article from the ACM about "Continuous Release" that it would be really nice to have an implementation of, but there isn't any. I keep telling myself I'll make one.
If I use "junctions" in the Windows filesystem, I can link "distribute" to "build" and then build a secondary solution that includes all the inter-dependent projects to build a product. When I did that, though, it encouraged developers to use it for active development, which discouraged TDD and proper releasing.
What we need in our firm is a sort of release-management tool for Linux/C++. Our products consist of multiple libraries and config files. Here I will list the basic features we want such a system to have:
Ability to track dependencies, and to easily increase the major versions of libraries whose dependencies have had their major version increased. It should build some sort of dependency graph internally so it knows who is affected by an update.
Know how to build the products it handles, either from a specific build file or, even better, by being able to read and understand makefiles.
Work with SVN, so it can check for new releases there and do the builds.
Generate some installers, in rpm or tar.gz format. For that purpose it should be able to understand the rpm spec file format.
Currently we are working on such a tool, and it is already pretty usable. However, I believe our task is not unique and there should be some tool out there that does the job.
You should look into using a mix of Hudson, Maven (for build management), Ivy (for dependency management), and Archiva (for artifact archival).
Also, if you are looking into cross-compilation, take a look at Make Project Creator (MPC) and Bakefile.
Have fun!!
In the project I'm currently working on, we use CMake and other Kitware tools to handle most of these issues for native code (C++). Answering point by point:
The CMake scripts handle the dependencies for our different projects. We have a dependency graph, but I don't know whether it is a home-made script or functionality that CMake provides.
CMake generates the makefiles appropriate to the platform. It also generates projects for Eclipse CDT and Visual Studio if asked to do so, for development.
CMake has a couple of tools, CTest and CDash, that we use to do the daily build and see how the tests are doing.
To create the installer, CMake has CPack. From just one script it can generate tar.gz, deb, or rpm files on Linux, or an automatically generated NSIS script to build installers on Windows.
For Java code we use Maven and Hudson, which have already been mentioned here.
Take a look at this article from DDJ, in which a more robust build-system concept (than make) is presented and implemented. Not sure it will fit your requirements well, but it's the closest I've ever seen. I was looking for the same thing months ago when I discovered it.
http://www.drdobbs.com/architect/218400678
Maven has a native-code plugin. I don't think it'll do everything you want, but it's good at tracking the version numbers of dependencies, it will build artefacts, and it'll work with your VCS.
No idea
cmake/scons: I have used CMake, and while I don't exactly love it, I have heard really good things about SCons. SCons is Python-based, though, so you need Python installed on the build/dev machines.
I use Hudson, which has a plugin to fetch from svn. It behaves intelligently in general; in particular, it builds only if some file has changed in an svn update. Hudson is easy to get started with. It is Java-based and pretty popular with the Java community, which makes it quite cross-platform, but you need a JRE installed on the build machine.
You can probably call some rpm tool from within Hudson.
What are the best policies for unit testing build files?
The reason I ask is that my company produces highly reliable embedded devices. Software patches are simply not an option, as they cost our customers thousands to distribute. Because of this we have very strict code-quality procedures (unit tests, code reviews, traceability, etc.). Those procedures are now being applied to our build files (autotools, if you must know; I expect pity), but it feels like a hack.
Uh... the project compiles... mark the build files as reviewed and unit tested.
There has got to be a better way. Ideas?
Here's the approach we've taken when building a large code base (many millions of lines of code) across more than a dozen platforms.
Makefile changes are reviewed by the build team. These people know the errors people tend to make in our build environment, and they are the ones who feel the brunt of it when a build breaks, so they're motivated to find issues.
Minimize what needs to go in a Makefile, so there are fewer opportunities for error. We have a layer on top of make that generates the Makefile. A developer just has to indicate in the higher-level file, using tags, that, for example, a given target is a shared library or a unit test. Usually a target is defined on one line, which then results in multiple settings/targets in the generated Makefile. Similar things could be done with build tools like SCons that let you abstract away things like platform-specific details, making targets very simple.
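Their layer is an in-house tool (written in Perl, as the next point explains). Purely as an illustration of the idea, a toy generator in Python might expand a one-line, tag-based target description into Makefile boilerplate; the input format and generate_makefile() are invented for this sketch.

    # Toy "layer on top of make": a one-line, tag-based target description
    # expands into Makefile boilerplate. The input format and this function
    # are hypothetical; the real tool described above is in-house Perl.
    def generate_makefile(targets):
        """Expand {name: (kind, sources)} into Makefile rules."""
        rules = []
        for name, (kind, srcs) in targets.items():
            objs = " ".join(s.replace(".c", ".o") for s in srcs)
            if kind == "shared_lib":
                rules.append(f"lib{name}.so: {objs}\n\t$(CC) -shared -o $@ $^")
            elif kind == "unit_test":
                rules.append(f"{name}: {objs}\n\t$(CC) -o $@ $^\n\t./{name}")
        return "\n\n".join(rules) + "\n"

    # One tagged line in the higher-level file becomes several make rules:
    print(generate_makefile({"util": ("shared_lib", ["util.c", "io.c"])}))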
Unit tests of our build tool. The tool is written in Perl, so we use Perl's Test::More unit test framework there to verify that the tool generates the correct Makefile given our higher-level file. If we used something like scons instead, I'd use their testing framework.
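Their tests use Perl's Test::More; a rough Python analogue, exercising the toy generate_makefile() from the previous sketch, might look like this.

    # Rough Python analogue of Test::More-style checks on the generator.
    import unittest

    class TestMakefileGenerator(unittest.TestCase):
        def test_shared_lib_rule(self):
            out = generate_makefile({"util": ("shared_lib", ["util.c"])})
            self.assertIn("libutil.so: util.o", out)  # right target and objects
            self.assertIn("$(CC) -shared", out)       # link line is emitted

    if __name__ == "__main__":
        unittest.main()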
Unit tests of our nightly build/test scripts. We have a set of scripts that start nightly builds on each platform, run static analysis tools, run unit tests, run functional tests, and report all results to a central database. We test the various scripts individually, mostly using the shunit2 unit-testing framework for sh/bash/ksh/etc.
End-to-end tests of our build/test process. I am working on an end-to-end test that operates on a tiny source tree rather than our production code, since the latter can take hours to build. These tests are mainly aimed at verifying that our build targets still work and report results into our central database even after, for example, upgrading our code coverage tool or making changes to our build scripts.
Have your build files compile a known version of your software (or a simpler piece of code that is similar from a build perspective) and compare the result obtained with your new build tools to an expected result (built with a validated version of the build tools).
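Sketched in Python, assuming the validated and candidate builds drop their artifacts into separate directories (all names invented); note that in practice you may first need to normalize timestamps or paths embedded in the binaries.

    # Sketch: diff the artifacts of a validated build against those from
    # the new build tools. Directory and artifact names are hypothetical.
    import filecmp
    import sys

    match, mismatch, errors = filecmp.cmpfiles(
        "expected-build",       # artifacts from the validated build tools
        "candidate-build",      # artifacts from the new build tools
        ["app", "libfoo.so"],   # which artifacts to check
        shallow=False,          # compare file contents, not just metadata
    )
    if mismatch or errors:
        sys.exit("build tools changed the output: %s" % (mismatch + errors))
    print("new build tools reproduce the expected artifacts")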
In my projects, build files don't change very often. What's more, I can reuse build files from earlier projects, changing only some variables (which I moved to an easy-to-recognize section). That's why for me it is unnecessary to unit-test the build files. That can be different in other projects.
What would be the best choice of build system for a multi-platform project of more than a million lines, which produces drivers, libraries, command-line tools, GUIs, and OS install packages for all the mainstream OSes, using both the GNU and Microsoft toolchains?
Our source code is mainly C, with Python, C#, and GNU makefiles, and a little C++ and bash. It resides mainly in one repository, but we push source code to various third parties, all of whom have their own source code repositories. There is also some interest in keeping the build fast, which might involve splitting up the project.
Currently we use a mixture of GNU make, bash, Python, and Microsoft's DDKBUILD. The main problems are that we are maintaining a complex set of scripts on top of make and would prefer to use third-party (preferably open source) tools; that Cygwin is not proving to be robust on Windows (e.g., fork isn't always possible); and that our current build system does not build or install the toolchain, so it is vulnerable to toolchain version changes.
I vote for CMake, the meta-build tool that was used to rewrite the KDE4 build system from scratch - and KDE4 is now a cross-platform desktop that even runs on Windows CE!
CMake is what carried the KDE4 port to practically any OS on earth, by generating Makefiles (or vcprojs, in the Windows case) for about 40 OSes and their respective toolchains!
JetBrains TeamCity works very well in general, so it should be worth having on the eval list.
ThoughtWorks Cruise is also in the same space. While it's v1, it comes from a stable that's been around for a while.
There's nothing about Team Foundation Server that would necessarily count it out for your situation, but out of the box it might be more MS-shop-centric than the other two I've mentioned.
As a general comment, with the level of variety you have, you definitely want to trial whatever it is you want to use - just because something is supported as a tick on the box doesn't mean it's going to suit what you're looking for.
Dickson,
Is your build mostly monolithic or do you want to build some libraries separately and assemble them into the larger application? If inter-project dependencies are a big deal, your choices become limited quickly. AnthillPro does it well, and I think TeamCity has some Ivy integration support. From what you're saying, it sounds like this is not an absolute need, but might be helpful in speeding the build. It's certainly a strategy that we've seen a number of teams execute effectively.
Since you're looking at cross-platform (I assume multiple machine) builds, most of the open source tools other than Hudson are ruled out.
A build server comparison matrix is hosted by our friends at Thoughtworks here: confluence.public.thoughtworks.org/display/CC/CI+Feature+Matrix
Good luck.
You should have CMake on your list of alternatives to investigate. CMake is a meta-tool, i.e. it generates the input to the build-tool of your choice (GNU make, Visual Studio, etc.). I can recommend it strongly.
You may want to look at Cruise. It is built on Java, so it will run on any platform that supports it. You can also have multiple build agents on different machines performing different tasks on different platforms. ThoughtWorks is still building it out, so some functionality is lacking, but it may be a good option since you are looking for true cross-platform capabilities.
SCons is a cross-platform build system implemented in Python. We use it to build our code on three platforms. It can automatically detect your build tools, but you can also put arbitrary Python code in your build script. It also lets you separate your environment setup from the description of your project structure, a great feature for reusing your build scripts in different environments. Besides building your project directly, it can also generate Visual Studio project files.
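A small illustration of that separation, with hypothetical file names: the platform check and flags live apart from the target descriptions, so the same SConstruct builds on each platform.

    # SConstruct -- separating environment setup from project structure.
    # File names are hypothetical.
    import os

    # Environment setup: toolchain detection and per-platform flags.
    env = Environment(ENV=os.environ)
    if env["PLATFORM"] == "win32":
        env.Append(CPPDEFINES=["WIN32_BUILD"])
    else:
        env.Append(CCFLAGS=["-Wall"])

    # Project structure: the same target description on all platforms.
    env.Program("tool", ["main.c", "args.c"])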
Does anyone have experience using makefiles for Visual Studio C++ builds (under VS 2005), as opposed to using the project/solution setup? For us, the way the projects/solutions work is not intuitive and leads to configuration explosion when you are trying to tweak builds with specific compile-time flags.
Under Unix, it's pretty easy to set up a makefile that has its default options overridden by user settings (or other configuration setting). But doing these types of things seems difficult in Visual Studio.
By way of example, we have a project that needs to get built for 3 different platforms. Each platform might have several configurations (for example debug, release, and several others). One of my goals on a newly formed project is to have a solution that can have all the platform builds living together, which makes building and testing code changes easier since you aren't having to open 3 different solutions just to test your code. But Visual Studio will require 3 * (number of base configurations) configurations, i.e. PC Debug, X360 Debug, PS3 Debug, etc.
It seems like a makefile solution is much better here. Wrapped with some basic batch files or scripts, it would be easy to keep the configuration explosion to a minimum and only maintain a small set of files for all of the different builds we have to do.
However, I have no experience with makefiles under Visual Studio and would like to know if others have experience or issues they can share.
Thanks.
(post edited to mention that these are C++ builds)
I've found some benefits to makefiles with large projects, mainly related to unifying the location of the project settings. It's somewhat easier to manage the list of source files, include paths, preprocessor defines and so on, if they're all in a makefile or other build config file. With multiple configurations, adding an include path means you need to make sure you update every config manually through Visual Studio's fiddly project properties, which can get pretty tedious as a project grows in size.
Projects which use a lot of custom build tools can be easier to manage too, such as if you need to compile pixel / vertex shaders, or code in other languages without native VS support.
You'll still need to have various different project configurations however, since you'll need to differentiate the invocation of the build tool for each config (e.g. passing in different command line options to make).
Immediate downsides that spring to mind:
Slower builds: VS isn't particularly quick at invoking external tools, or even working out whether it needs to build a project in the first place.
Awkward inter-project dependencies: It's fiddly to set up so that a dependee causes the base project to build, and fiddlier to make sure that they get built in the right order. I've had some success getting SCons to do this, but it's always a challenge to get working well.
Loss of some useful IDE features: Edit & Continue being the main one!
In short, you'll spend less time managing your project configurations, but more time coaxing Visual Studio to work properly with it.
Visual Studio is built on top of the MSBuild configuration files. You can consider *proj and *sln files as makefiles. They allow you to fully customize the build process.
While it's technically possible, it's not a very friendly solution within Visual Studio. It will be fighting you the entire time.
I recommend you take a look at NAnt. It's a very robust build system where you can do basically anything you need to.
Our NAnt script does this on every build (a language-neutral sketch of the pipeline follows the list):
Migrate the database to the latest version
Generate C# entities off of the database
Compile every project in our "master" solution
Run all unit tests
Run all integration tests
Additionally, our build server leverages this and adds one more task: generating Sandcastle documentation.
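The real script is NAnt XML; purely as a language-neutral sketch of the same pipeline, in Python, with every command and path a hypothetical stand-in:

    # Language-neutral sketch of the pipeline above; every command and
    # path is a hypothetical stand-in for the real NAnt XML targets.
    import subprocess

    steps = [
        ["migrate-db", "--to-latest"],                   # migrate the database
        ["generate-entities", "--out", "src/Entities"],  # C# entities from the DB
        ["msbuild", "Master.sln", "/p:Configuration=Release"],
        ["nunit-console", "UnitTests.dll"],              # all unit tests
        ["nunit-console", "IntegrationTests.dll"],       # all integration tests
    ]
    for step in steps:
        subprocess.run(step, check=True)  # stop the build on the first failure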
If you don't like XML, you might also take a look at Rake (ruby), Bake/BooBuildSystem (Boo), or Psake (PowerShell)
You can use NAnt to build the projects individually, thus replacing the solution, and have one coding solution and no build solutions.
One thing to keep in mind is that the solution and csproj files from VS 2005 and up are MSBuild scripts. So if you get acquainted with MSBuild, you might be able to wield the existing files, to make VS easier and to make your deployment easier.
We have a similar setup to the one you are describing. We support at least 3 different platforms, and we found that using CMake to manage the different Visual Studio solutions works well. Setup can be a bit painful, but it pretty much boils down to reading the docs and a couple of tutorials. You should be able to do virtually everything you can do by going to the properties of the projects and the solution.
I'm not sure you can have all three platform builds living together in the same solution, but you can use CruiseControl to take care of your builds and run your testing scripts as often as needed.