My task is to define and implement a homogeneous update system for all the tools of a large organization.
The various tools have grown over many years and are implemented in many different technologies and languages.
For the Eclipse/RCP-based tools, the Equinox/p2 mechanism seems perfect and easy to implement.
But what about, e.g., the .NET/C# tools and the C/C++ tools? p2 seems to be tailored to RCP applications.
I'd like to have a common/uniform update repository for all tools. Any ideas/thoughts?
Use the tools you already have.
The Eclipse repository format should be easy to generate (e.g., via some task you could integrate into your CI server).
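For instance, a CI task could shell out to the headless p2 publisher that ships with Eclipse; a rough Python sketch, where all paths are placeholders for your environment:

```python
# Sketch: turn a directory of features/ and plugins/ into a p2 repository
# by invoking Eclipse's FeaturesAndBundlesPublisher headlessly.
import subprocess

ECLIPSE = "/opt/eclipse/eclipse"           # headless Eclipse install on the CI box
SOURCE = "/var/ci/build/output"            # must contain features/ and plugins/
REPO = "file:///var/ci/repos/p2/nightly"   # repository the tools will update from

subprocess.check_call([
    ECLIPSE, "-nosplash",
    "-application", "org.eclipse.equinox.p2.publisher.FeaturesAndBundlesPublisher",
    "-metadataRepository", REPO,
    "-artifactRepository", REPO,
    "-source", SOURCE,
    "-publishArtifacts",
])
```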
For the .NET applications, ClickOnce is a solution.
There are frameworks out there, like Google's Omaha (http://code.google.com/p/omaha/), for dealing with application updates.
You could build your own tools; it's basically file download & replacement. It can get nasty if you have to deal with UAC, though.
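As a rough illustration, here's a minimal Python sketch of such an updater; the manifest format and URLs are made up, and this ignores locked files, signatures, and the UAC elevation just mentioned:

```python
# Minimal "download & replace" updater sketch. A real one must handle locked
# files, UAC elevation on Windows, and verify what it downloads.
import json
import shutil
import urllib.request
from pathlib import Path

SERVER = "https://updates.example.org/mytool"      # hypothetical update server
INSTALL_DIR = Path("C:/Program Files/MyTool")      # writing here needs elevation

def check_and_update(current_version: str) -> None:
    with urllib.request.urlopen(f"{SERVER}/manifest.json") as resp:
        manifest = json.load(resp)     # e.g. {"version": "1.4", "files": ["mytool.exe"]}
    if manifest["version"] == current_version:
        return                         # already up to date
    for name in manifest["files"]:
        tmp = INSTALL_DIR / (name + ".new")
        with urllib.request.urlopen(f"{SERVER}/{manifest['version']}/{name}") as resp, \
             open(tmp, "wb") as out:
            shutil.copyfileobj(resp, out)   # download next to the target...
        tmp.replace(INSTALL_DIR / name)     # ...then swap it in atomically
```

In practice you'd download to a temp directory, verify a checksum or signature, and elevate only for the final swap.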
Having only one repository seems difficult, but you could build a tool to generate the needed formats (Equinox/p2, ClickOnce, etc.).
Building a server and update tools yourself could be time-consuming; avoid it unless you have a really good reason to do so.
A build server, some scripts on top of git repositories (or something like that), and an open-source framework should do the job.
I am beginning a fairly large new project using ColdFusion. This new project will include several developers and, as such, documentation of code will be key.
Another issue I am hoping to avoid (either with adequate code documentation or some other tool) is duplication of code. A tool that could "index" the code for searching or diagramming would likely help here.
What are others out there using, either specifically for ColdFusion or language-agnostic? We will likely be using ColdBox for the underlying framework, if that makes a difference.
Thanks for any and all suggestions.
-c
Well, it's impossible to tell you which framework to use without knowing more about your project, but I can list out some tools that will be useful no matter which framework you use.
Language-agnostic tools:
GitHub.com organization+teams
Jenkins continuous integration
Apache ANT build scripts
Apache Maven for project management
ColdFusion-specific tools:
MXUnit unit testing framework
MockBox for unit testing (if you use ColdBox: ColdBox-specific tutorial)
ColdDoc documentation generator
Javascript-specific tools:
JSLint or JSHint for JS code cleanup
Jasmine unit testing
Ideally, your Jenkins build server should:
Do a fresh checkout from source control
Run all unit tests and stop the build if they fail
Generate documentation
Generate a production-ready package of your project
At a minimum, I highly recommend using source control, setting up Jenkins with MXUnit tests, and scheduling daily automated builds.
We used the Atlassian suite at my last job: mostly JIRA for tracking and GreenHopper for agile, but the other tools (FishEye, Bamboo, Crucible) may help too. If you host it yourself, I believe they have a one-time $10/product price tag that, depending on your team's size, may or may not work for you. If money is no object, the suite worked really nicely. It also has built-in support for SVN, and maybe more by now.
http://www.atlassian.com/software
Sounds to me like what you need is a methodology, not a tool. If you have a clearly defined set of objects/responsibilities and a common API for the objects being coded, there should be no crossover in scripting, and I would think you'd be fine.
I'm currently using Jenkins/Hudson for continuous integration on a large, mostly C++ project. We have separate projects for trunk and every branch. There are also some related projects for the Java code, but the setup for those is fairly basic right now (we may do more later, though). The C++ projects do the following:
Builds everything with options for whether to reconfigure, do a clean build, or use a fresh checkout
Optionally builds and runs all tests
Optionally runs all tests using Valgrind's memcheck
Runs cppcheck
Generates doxygen documentation
Publishes reports: unit tests, valgrind, cppcheck, compiler warnings, SLOC, open tasks, and code coverage (using gcov, gcovr, and the cobertura plugin)
Deploys code nightly or on demand to a test environment and a package repository
Everything is configurable for automatic builds and optional for on-demand builds. Underneath, there's a bash script that controls much of this, which in turn depends on our build system, which uses automake and autoconf along with custom bash scripts.
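To give a flavor of what that driver does, here's a compressed Python sketch of the same control flow (the actual script is bash, and the test-runner name here is invented):

```python
# Rough sketch of the kind of driver script a Jenkins job would call.
import argparse
import subprocess

def run(cmd):
    print("+", " ".join(cmd))       # log every command the build executes
    subprocess.check_call(cmd)      # any failure stops the build

def main():
    p = argparse.ArgumentParser()
    p.add_argument("--reconfigure", action="store_true")
    p.add_argument("--clean", action="store_true")
    p.add_argument("--tests", action="store_true")
    p.add_argument("--valgrind", action="store_true")
    args = p.parse_args()

    if args.reconfigure:
        run(["autoreconf", "--install"])
        run(["./configure"])
    if args.clean:
        run(["make", "clean"])
    run(["make", "-j8"])
    if args.tests:
        run(["make", "check"])
    if args.valgrind:
        run(["valgrind", "--tool=memcheck", "--xml=yes",
             "--xml-file=valgrind.xml", "./run_tests"])   # hypothetical runner
    run(["cppcheck", "--xml", "--enable=all", "src/"])    # static analysis
    run(["doxygen", "Doxyfile"])                          # API docs

if __name__ == "__main__":
    main()
```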
We started using Hudson (at the time) because that's what the Java guys were using and we just wanted nightly builds. Since then, we've added a lot more and continue to add more. In some ways Hudson is great, but certainly isn't ideal.
I've looked at other solutions and the only one that looks like it could be a replacement is buildbot. Would buildbot be better for this situation? Is the investment worth it since we're already using Hudson? Why?
EDIT: Someone asked why I haven't found Hudson/Jenkins to be ideal. The short answer is that everything can be improved. I'm simply wondering if Jenkins is the best current solution for my use case or whether there is something better (buildbot?) that would be easier to maintain in the long run even as new requirements come up.
Both are open-source projects, but you do not need to change Buildbot's code to "extend" it; it is actually quite easy to import your own packages in its configuration and subclass most of the features with your own additions. Examples: your own compilation or test code, some parsing of outputs/errors to be fed to the next steps, your own formatting of alert emails, etc. There are lots of possibilities.
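For example, with the 0.8-era API, a master.cfg can subclass a shell step to do its own pass/fail parsing (the FLAKY marker is just an example; check your Buildbot version's docs, as these modules have moved around since):

```python
# master.cfg excerpt: a custom step that re-interprets the test output.
from buildbot.steps.shell import ShellCommand
from buildbot.status.results import SUCCESS, WARNINGS, FAILURE

class TestStep(ShellCommand):
    name = "tests"
    command = ["make", "check"]          # keep the command itself simple

    def evaluateCommand(self, cmd):
        output = self.getLog("stdio").getText()
        if cmd.rc != 0:
            return FAILURE
        if "FLAKY" in output:            # our own parsing of the output
            return WARNINGS
        return SUCCESS
```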
Generally I would say that Buildbot is the most "general-purpose" automatic build tool, whereas Jenkins might be the best at running tests, especially for parsing and presenting the results in nice ways (results, details, charts... a few clicks away), things that Buildbot does not do "out of the box". I'm actually thinking of using both to have sexier test-result pages. :-)
Also, as a rule of thumb, it should not be difficult to create a new tool's config: if the specification of what to do (configs, builds, tests) is too hard to move from one tool to another, it is a (bad) sign that not enough of the configuration scripts live in the sources. Buildbot (or Jenkins) should only call simple commands. If it is simple to run the tests, then developers will run them too, which improves the success rate; whereas if only the continuous-integration system runs the tests, you will be running after it to fix new code failures and will lose its non-regression value. Just my 0.02€ :-)
Hope it'll help.
The 'result integration' is also in Jenkins/Hudson, and you can relatively easily capture build products without having to 'copy them elsewhere'.
In our instance, the coverage reports, unit-test metrics, and Javadoc for the Java code are all integrated. For our C++ code, the plugins are a little lacking, but you can still get most of it.
We have run Buildbot since before 0.7 and are now running 0.8, and we are only now seeing any real reason to switch, as Buildbot 0.8 forgot about Windows slaves for an extended period of time and the support was pretty poor.
There are many other solutions out there, besides Jenkins/Hudson/BuildBot:
TeamCity by JetBrains
Bamboo by Atlassian
Go by ThoughtWorks
CruiseControl
OpenMake Meister
The specifics of what you are doing are not so important, in fact, as long as the agents (aka nodes) you run them on support those tasks.
The beauty of a CI server is noticing when the source changes, triggering a new build (and tests), publishing the artifacts, and publishing the test results.
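Stripped of everything else, that core loop is tiny; a toy Python version just to make the point (the publish script is imaginary, and real servers use commit hooks rather than polling):

```python
# Toy CI loop: poll the repository, build on change, publish the outcome.
import subprocess
import time

last_rev = None
while True:
    head = subprocess.check_output(
        ["git", "ls-remote", "origin", "refs/heads/main"])
    rev = head.split()[0]
    if rev != last_rev:
        subprocess.check_call(["git", "pull", "origin", "main"])
        ok = subprocess.call(["make", "all", "test"]) == 0
        subprocess.call(["./publish_results.sh", rev.decode(),   # hypothetical
                         "pass" if ok else "fail"])
        last_rev = rev
    time.sleep(60)
```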
When you compare CI tools like those mentioned here, consider features like the usability of the interface, how easy branching is (and features it might offer, like automatic merging), notifications (like XMPP/Jabber), or an information radiator (like hooking up a monitor to always show status). Product support is another thing to consider: Jenkins' support is only as good as whoever is responding to community questions at the time you have them.
My personal favorite is Bamboo, but it comes with a license fee.
I'm a long-time Jenkins user in the middle of evaluating Buildbot and would like to offer a few items for folks considering using Buildbot for multi-module solutions:
*) Buildbot doesn't have any out-of-the-box concept of file artifacts related to each build. It's not in the UI, and it's not in any of the built-in "steps" modules as far as I can see:
http://docs.buildbot.net/current/manual/configuration/buildsteps.html
...and I see no third party plugin:
https://github.com/buildbot/buildbot/wiki/PluginList#steps
Buildbot does collect all the console output from a given build, but critically, you can't collect files related to it.
*) Given that artifacts are not supported, it's not easy to create "collector" projects that bring multiple modules into, say, a single installer. Jenkins has a great feature that lets you parameterize a build with builds from other modules (the parameter type is a run).
*) Establishing dependencies between modules is trickier in Buildbot. Say you have a library that three binaries depend on, and you want those binaries to rebuild each time the library changes. Jenkins has triggers built into the UI. If you want to do triggers in Buildbot you have to script them using schedulers.Dependent, and it causes a lot of item congestion in the Schedulers UI.
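For reference, the scripted version looks roughly like this in master.cfg (using the newer buildbot.plugins API; builder names are invented):

```python
# master.cfg excerpt: rebuild three binaries after the library they depend on.
from buildbot.plugins import schedulers, util

lib_sched = schedulers.SingleBranchScheduler(
    name="lib",
    change_filter=util.ChangeFilter(branch="master"),
    builderNames=["build-lib"])

# fires only after an upstream "lib" build succeeds
bin_sched = schedulers.Dependent(
    name="bins-after-lib",
    upstream=lib_sched,
    builderNames=["build-app1", "build-app2", "build-app3"])

c['schedulers'] = [lib_sched, bin_sched]
```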
*) When you're working in Buildbot, it seems that pretty much all of the configuration is done in master.cfg in code. This is awesome and frustrating.
*) Buildbot forces you to create a worker in addition to a master server. This is annoying for beginners and systems for which a single build server is sufficient.
My impression after two days of Buildbot evaluation is that we'll stick with Jenkins, primarily due to its artifact support. Buildbot is a tool we'd only use if we had more extensive customization needs, and the time to pursue them.
On the subject of Buildbot and artifacts (I don't have enough reputation to make a comment): you can get artifacts from the Buildbot 2.x series pretty easily with the built-in file/directory upload actions. However, you rarely want to just move files. Typically you make a triggered build step that does deployment directly off the worker for best results, e.g., pushing to cloud storage, containers, third parties (Steam uploads), etc.
This way you can get metrics on the uploads and control them conditionally (or even mix and match artifacts across worker machines).
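Concretely, in a 2.x master.cfg that looks something like this (paths and the scheduler name are placeholders):

```python
# master.cfg excerpt: built-in artifact upload steps, plus a triggered deploy.
from buildbot.plugins import steps, util

factory = util.BuildFactory()
factory.addStep(steps.FileUpload(
    workersrc="build/installer.exe",
    masterdest="/srv/artifacts/installer.exe"))
factory.addStep(steps.DirectoryUpload(
    workersrc="build/reports",
    masterdest="/srv/artifacts/reports"))

# or deploy straight off the worker via a triggered build, as suggested above
factory.addStep(steps.Trigger(schedulerNames=["deploy"], waitForFinish=False))
```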
Do you know of any generic tool that can be used to check custom software/hardware prerequisites on a local machine? I mean, I'd like a tool which I can easily configure to check whether the machine has, say, .NET version such-and-such, whether SQL Server 2005 is installed, whether IIS 6 or later is installed, etc., then send it to somebody who would run the tool and immediately know whether the machine meets all the defined conditions.
I imagine endless plugin possibilities for such a tool: checking for Python, Java, Ruby, PHP, all kinds of components, app servers, database servers, etc. It could do basic checks (e.g., only find whether, say, Python 2.5 is installed) or more advanced ones (if installed, check whether it's configured in such-and-such a way). It could then check hardware prerequisites like CPU, memory, hard drives, etc.
I'm saying 'tool', but this could be some low-level C or C++ library with documented methods. It would be possible to build higher-level tools around it, use it in installation wizards, etc. Is there anything like this out there? It would be nice and would sometimes save a lot of effort.
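To make the idea concrete, here's a rough sketch of the plugin shape I have in mind (the checks and version numbers are only examples):

```python
# Each "plugin" is just a callable returning (name, passed, details).
import shutil
import sys

def check_python(minimum=(2, 5)):
    ok = sys.version_info[:2] >= minimum
    return ("Python >= 2.5", ok, sys.version.split()[0])

def check_command(name, executable):
    path = shutil.which(executable)            # looks the program up on PATH
    return (name, path is not None, path or "not found")

CHECKS = [
    check_python,
    lambda: check_command("Java runtime", "java"),
    lambda: check_command("Ruby interpreter", "ruby"),
]

for check in CHECKS:
    name, passed, details = check()
    print(f"[{'OK' if passed else 'MISSING'}] {name}: {details}")
```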
I'm not quite sure it's what you need, but you might want to take a look at NSIS (it's an installer framework).
There are a number of plugins and scripts already written for it that check for such prerequisites, and they even offer a way to install those alongside your application.
I need your recommendations for continuous build products for a large (1-2MLOC) software development project. Characteristics:
ClearCase revision control
Approx 80% C++; 15% Java; 5% script or low-level
Compiles for the Green Hills Integrity OS, but also some Windows and JVM chunks
Mostly an embedded system; also includes some UI pieces and some development support (simulation tools, config tools, etc...)
Each notional "version" of the deliverable includes deployment images for a number of boards, UI machines, etc... (~10 separate images; 5 distinct operating systems)
Need to maintain/track many simultaneous versions which, notably, are built for a variety of different board support packages
Build cycle time is a major issue on the project, need support for whatever features help address this (mostly need to manage a large farm of build machines, I guess..)
Operates in a secure environment (this is a gov't program) (Edited to add: This is a classified program; outsourcing the build infrastructure is a non-starter.)
I'm interested in any best practices or peripheral guidance you might offer. Build automation is one of several overlapping best practices that appear to be missing on the program, but please keep your answers focused on the build-infrastructure piece and directly related observations.
Cost is not the driving concern. Scalability and ease of retrofitting onto an existing infrastructure are key.
(Edited to address @Dan's comment. ;-)
From my experience with similar systems, there are approximately two parts to this problem:
A repeatable method for checking out sources, building the software, and testing it (if you want to do continual testing as well as building), using a small number of command-line invocations.
A means of calling these command lines on various servers in the build farm.
For the latter, we've been using BuildBot, which seems to work pretty well.
For the former, we have a homegrown solution that started out as a simple bash shell script and grew... rather substantially. From experience, I'd suggest starting out in Python rather than bash: you'll spend far more code on handling setup and configuration than on actually invoking programs. (Also, it's probably easier to run it on Windows if you're doing that.)
The things I've found to be really key in our script's usefulness are:
Ironclad repeatability. We have a standard set of build tools, and the scripts start out by scrubbing environment variables. There are very few command-line options; everything goes into configuration files, and those go in version control.
Logging. We produce a log of every command that the build script executes.
Configuration file inheritance. Each variant of our software gets a configuration file, and those files can include more-general settings (which in turn include even-more-general settings); see the sketch after this list.
Extensibility. When we add a new source component, it's pretty easy to add a set of instructions for building that component (and the instructions can be arbitrary bash code). The "can be arbitrary code" part is probably key here; no way is a pre-existing product going to be able to do all of the quirky things that you need for a large complex real-world system.
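The inheritance scheme is simple to sketch; assuming INI-style files where a child names its parent (file names and keys are illustrative):

```python
# Load a variant's config, recursively merging in its more-general parents.
import configparser
from pathlib import Path

def load_config(path: Path) -> dict:
    cp = configparser.ConfigParser()
    cp.read(path)
    settings = dict(cp["build"])
    parent = settings.pop("inherit", None)   # e.g. inherit = common.cfg
    if parent is None:
        return settings
    merged = load_config(path.parent / parent)
    merged.update(settings)                  # more-specific values win
    return merged

# board-x.cfg inherits platform-arm.cfg, which inherits common.cfg
cfg = load_config(Path("configs/board-x.cfg"))
```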
You can get started with a reasonably simple script and let it grow organically as the need arises; honestly, although ours is a bit messy, I think we got a much more usable result that way than we would have with heavy top-down design.
Cost isn't an object? I've worked for Green Hills, and they've solved these issues for their in-house build/test farms. Ask them to do the same for you.
When I see emphasis on things like scalability and security in a build system, I start thinking that you might be a candidate for the enterprise-class build/CI systems. Conveniently, it sounds like you can afford them as well. A year-old SD Times article provides a basic breakdown between the enterprise- and team-level build tools.
My company makes AnthillPro and we've worked with a number of companies on large embedded projects as well as highly secure projects. IBM is probably the largest other player in the space with BuildForge.
AnthillPro puts some extra emphasis on what you do with the images in the minutes/hours/days post build (do you install them onto simulators / hardware and run automated tests? stage them? promote them?) but we also see folks using it for just build.
I've spent four years developing C++ using Visual Studio 2008 for a commercial company; it's now time for me to upgrade my development process.
Here's the problem: I don't have one-button build automation. I also don't have a CI server that automatically builds when a commit happens and emails me whether the build is broken or not. Worse, we don't even have a single unit test!
Can someone please point me to how I can get started?
I have looked at many, many tools, and I think I might go with:
Visual Build (for build automation) (Note: I also considered FinalBuilder)
Cruise (for the CI server)
I'm also just now starting to practice TDD, so I will want to automate my unit tests as well. I chose Google Test/Mock for their extensive documentation. (Can't go wrong with the Google brand, can I? =p)
Price is not the issue; I want what's best and easiest to get started with.
Can people who use real CI/automation tools for unmanaged MSVC++ tell me which tools they use and how I can go about starting?
Our source control is Subversion.
Last point: I'm also considering a project management/tracking tool that integrates right into Visual Studio, and I'm thinking about using OnTime. VSTS costs too much. I tried FogBugz, but I think it's too simple. Any others?
I would take some time to seriously consider TeamCity. We used CruiseControl.NET for a while and TeamCity completely demolishes it. Plus it has built-in plugins for Boost and CppUnit, so your unit testing will come for free.
Best of all, the tool is free for < 20 users and gives you three build agents.
I just finished setting this up for our C++ product at work, and it was fairly simple. We did it with MSBuild, basically using the MSBuild task to compile the solution. Other targets can be used to copy files, run unit tests, etc.
The last time I worked on an unmanaged MSVC++ project (which was moderately sized, I might add), we used FinalBuilder to do the automated builds and versioning (and even to run PC-lint and other profiling tools).
Having said that, if you're willing to invest the time, MSBuild (or NAnt, perhaps?) can do everything you need, even for unmanaged solutions.
Which brings us to the trade-off: tools like Visual Build Pro and FinalBuilder get you up and running quickly. If you want something that offers a greater range of customization, you'll probably spend a decent amount of time learning and understanding it; MSBuild, CI Factory, NAnt, etc. are no cakewalk.
So if price isn't an issue, is time an issue? If time is at a premium, I'd investigate the GUI-driven tools; they'll get you where you want to go quickly. If you know you're going to need to extend beyond the simple one-button build + unit tests + deploy scenario (which happens a lot!), then decide whether you can invest the time into the more complex tools like MSBuild.
We use a combination of Boost.Build, NAnt, CppUnit, and either CruiseControl.NET or Hudson (we've used both on various projects but are starting to prefer Hudson).
They are all good tools, though we're considering replacing CppUnit; the Google unit-test system is pretty good from what I've seen.
If you're happy running on just Windows, you can lose Boost.Build and just call out to Visual Studio from NAnt.
As for issue tracking/project management, we settled on Vision Project after a long investigation. It's not well known (yet), but we've found it a very good fit in our environment. FogBugz is great, with a nice, clear interface, but we came to the same conclusion you did: way too simple for our needs.
Although the .NET world is spoilt for these kinds of tools, continuous integration is still pretty easy to set up for C++! I wouldn't think of starting a non-trivial project without putting these systems in place.
We use Subversion + CruiseControl + WiX to accomplish CI automated builds outputting one-click installers; this combo has worked very well for us. We've created our own site for administering SVN user groups and permissions and added CC's web interface to it. We have a SQL Server database storing all the collected stats from SVN and CC and use them for custom reports available on our site. We are looking to add other tools to the mix for checking various attributes of the code stored in SVN.
At my company we use CruiseControl (http://cruisecontrol.sourceforge.net/), the Java version rather than .NET, to build our wxWidgets application on Windows and OS X. It's working great for us so far.