A typical solution is to have a CI (Continuous Integration) build running on a build server: it analyzes the source code, builds it (in debug), runs tests, measures test coverage, etc.
Another commonly used build type is the "nightly build": it does the slow stuff, such as generating code documentation, creating a setup package, deploying to a test environment, and running automated (smoke or acceptance) tests against that environment.
Now, the question:
Is it better to have a third, separate "release build" used as the release?
Or should the nightly build run in release mode and be used as the release?
What do you do in your company?
(The release build should also tag the potential product version in source control somehow.)
Usually the way I do it is my nightly builds get promoted to Release builds. You usually don't want to create a separate release build. Most QA teams should test nightly builds (hopefully they aren't testing from your CI builds). Once they have deemed it good enough for release, you promote it to release status. You determine what that means. Move it to another location, rename it, tag it, label it, burn it, etc.
You don't want QA to test a nightly build and then, once they deem it good, build another one that you claim is the same. You never know: they can differ. An OS patch may have been applied to your build machine, a third-party tool may have been updated, etc. You don't want to make your QA team test the "same exact build" twice. It might be built from the same source, but there is no guarantee it is the exact same build.
The answer to your question depends highly on the project you're working on and the goals you want to set.
In general (and trivially so for small projects), a build should be very fast and should include everything needed for deployment. For me this is always the goal, even if I don't reach it right away; it keeps me looking for what can be improved.
I know from working on big legacy projects that there can be so many accumulated problems slowing things down that this might not be feasible, at least not as an immediate target. In large legacy projects, compiling and linking usually take too long, tests (if they exist) may also run too long, and generating all the information required for deployment might be slow or even manual. Build hardware might also be insufficient. There are many other things to add to this incomplete list.
When working on a project like this, I try to set up separate cycles that each do different things.
The first cycle is a solid CI server that builds, runs the automated unit tests, and packages and archives the builds. It must be fast to give development quick feedback on changes. If it is slow, get better build hardware, sort out dependencies, fix slow unit tests, etc. You want this to be as fast as possible. Every build it produces is a deployable build.
The second cycle is slower and only picks up builds produced by the CI system; its input is not source code but the packaged builds. These are picked up however you like: every build produced, or the latest one available when you are ready to start another cycle. This longer cycle deploys the build to a test server, runs automated functional tests, and does whatever else is "too slow", "not yet fast", or simply takes a long time. Depending on your organization, this is also where you can add to the deployable package (docs etc.), rename the release to something visible to clients, and so on. Builds that pass here could be good to go live.
If you also have performance tests to run, you might want a third cycle which works with the second cycle's builds as input.
This is only briefly described, but the main point is to separate things so that you can have everything in the chain while getting feedback more quickly than with a single cycle. I find it a good approach because you get the benefit of speed (fast feedback) as well as a natural place for the slower work.
Finally, I want to mention that the way to go about this varies from project to project, especially if you are retrofitting CI. You may even want a separate continuous build with only compilation and unit tests, plus a once-a-day (or similar) build that feeds the releases and testing. That would of course mean that only development uses the fast CI builds, because they are incomplete and not suitable for deployment. Long-term, though, this is not where you want to be: you want the whole chain automated.
Over the years I've done this a number of ways. The first was that the release build would happen IF and ONLY IF the 'sanity test' passed for the debug build. It would also auto-deploy to our pre-production environment for user-driven validation.
I've also seen this done where release builds are treated as almost sacred, and are only made when it is deemed "time to get ready to really deploy". Along with that comes some "paperwork" and approvals; then the release build is made (manually), sanity-checked, and deployed.
In my experience it doesn't really matter, as long as you are consistent and it matches how the company/team understands things should work. Going against the grain is easy at first, but at one client it ended with them abandoning a structured build/deployment approach altogether (a $100M company did that, imagine, but they did).
Is it better to have a unit-test project per solution or a unit-test project per project?
With per solution, if you have 5 projects in the solution you end up with 1 unit-test project containing tests for each of the 5 projects.
With per project, if you have 5 projects in the solution you end up with 5 unit-test projects.
What is the right way?
I don't think it's the same question as "Write unit tests into an assembly or in a separate assembly?"
Assemblies are a packaging/deployment concern, so we usually split tests out into their own assemblies because we don't want to deploy them with the product. Whether you split them out per library or per solution, there are merits to both.
Ultimately, you want tests to be immediately available to all developers, so that developers know where to find them when needed. You also want an obstacle-free environment with minimal overhead to writing new tests, so that you aren't arming the cynics who don't want to write tests. Tests must also compile and execute quickly, and project structure can play a part in all of this.
You may also want to consider that different levels of testing are possible, such as unit, integration, or UI-automation tests. Segregating these types of tests is possible in some tools by using test categories, but sometimes it's easier for execution or reporting if they are separate libraries.
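If you go the single-library route, one way to keep the different levels of testing apart is a category attribute. A minimal sketch, assuming NUnit (the class under test is invented for the example):

    using NUnit.Framework;

    // Hypothetical class under test, defined inline so the sample compiles.
    public class Order
    {
        private readonly int _a, _b;
        public Order(int a, int b) { _a = a; _b = b; }
        public int Total { get { return _a + _b; } }
    }

    [TestFixture]
    public class OrderTests
    {
        [Test]                              // fast, isolated unit test
        public void Total_SumsLineItems()
        {
            Assert.AreEqual(30, new Order(10, 20).Total);
        }

        [Test, Category("Integration")]     // slower test; excluded from the quick run
        public void Save_WritesToDatabase()
        {
            Assert.Ignore("Placeholder for a test that needs a real database.");
        }
    }

The console runner can then include or exclude categories (for example, the /exclude option of the NUnit 2.x console runner), so a fast CI run can skip the integration tests while the nightly run executes everything.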
If you have special packaging considerations such as a modular application where modules should not be aware of one another, your test projects should also reflect this.
In small solutions where there aren't many projects, a 1:1 ratio is usually the preferred approach. However, Visual Studio performance quickly degrades as the number of projects increases. Around the 40-project mark, compile times become an obstacle to building and running the tests, so larger solutions may benefit from consolidating test projects.
I tend to prefer a pragmatic approach, so that complexity stays appropriate to the problem. Typically, an application is composed of several layers, where each layer may have multiple projects. I like to start with a single test library per layer, and I mimic the solution structure using folders. Divide when complexity warrants it. If you design your test projects for flexibility, the changeover is usually painless.
I would say a separate test project for each project, rather than one test project per solution. I think this is better because it will save you a lot of hassle if you decide to move a particular project out of one solution and into another.
We have a one-to-one ratio of test projects to projects in a very large system, and we have reached the point where the build takes more than 90 minutes every time we check in. I have created a new solution configuration that builds only the test projects; it runs once a day to make sure all test cases are still working, and developers can switch to this unit-test configuration to test their code in their development environment. Please let me know your feedback.
I am going with one of those "it depends" answers. Personally, I tend to put all of the tests in a single project, with separate folders within the project for each assembly (plus further sub-folders as necessary). This makes it easy to run the entire set, either from within Visual Studio or CruiseControl.NET.
If you have thousands of tests, a single project might prove too difficult to maintain. Also, as Peter Kelly mentioned in his response, being able to easily split the tests if you move a project can be useful.
Personally, I write one test assembly per project. I then create one NUnit project per solution and reference all of the relevant test assemblies from it. This reflects the organisation of the projects and means that if projects are reused in different solutions, only the relevant unit tests need to be run.
I used to code in C# in a TDD style: write or change a small chunk of code, recompile the whole solution in 10 seconds, re-run the tests, and repeat. Easy...
That development methodology worked very well for me for a few years, until last year, when I had to go back to C++, and it really feels like my productivity has dramatically decreased since. C++ as a language is not the problem - I have quite a lot of C++ development experience... but it was a while ago.
My productivity is still OK for small projects, but it gets worse as the project size increases, and once compilation time hits 10+ minutes it gets really bad. And if I find an error I have to start the compilation again, etc. That is just purely frustrating.
Thus I have concluded that working in small chunks (as before) is no longer acceptable. Any recommendations on how I can get myself back into the old, long-gone habit of coding for an hour or so, reviewing the code manually (without relying on a fast C# compiler), and only recompiling/re-running unit tests once every couple of hours?
With C# and TDD it was very easy to write code in an evolutionary way: after a dozen iterations, whatever crap I started with ended up as good code. But that just does not work for me anymore (in a slow-compilation environment).
I would really appreciate your input and recommendations.
p.s. not sure how to tag the question - anyone is welcome to re-tag the question appropriately.
Cheers.
I've found that recompiling and testing tends to pull me out of the "zone", so in order to keep the benefits of TDD, I commit fairly often into a git repository and run a background process that checks out each new commit, runs the full test suite, and annotates the commit object in git with the result. When I get around to it (usually in the evening), I go back over the test results, fix any issues and "rewrite history", then re-run the tests on the new history. This way I don't have to interrupt my work even for the short time it takes to recompile (most of) my projects.
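A rough sketch of that kind of watcher, assuming .NET and git notes for the annotation; the repository path and test-runner command are placeholders, and it is simplified to watch the current HEAD rather than checking out every commit separately:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CommitTestWatcher
    {
        const string RepoPath = @"C:\work\myproject";   // placeholder path
        const string TestCommand = "nunit-console";      // placeholder test runner
        const string TestArgs = "Tests.dll";             // placeholder test assembly

        static void Main()
        {
            string lastSeen = null;
            while (true)
            {
                string head = Run("git", "rev-parse HEAD").Trim();
                if (head != lastSeen)
                {
                    lastSeen = head;
                    bool passed = RunTests();
                    // Attach the result to the commit without touching the commit itself.
                    Run("git", string.Format("notes add -f -m \"tests: {0}\" {1}",
                                             passed ? "PASS" : "FAIL", head));
                }
                Thread.Sleep(TimeSpan.FromMinutes(1));    // poll for new commits
            }
        }

        static bool RunTests()
        {
            var p = Process.Start(new ProcessStartInfo(TestCommand, TestArgs)
            {
                WorkingDirectory = RepoPath,
                UseShellExecute = false
            });
            p.WaitForExit();
            return p.ExitCode == 0;                       // most runners return non-zero on failure
        }

        static string Run(string exe, string args)
        {
            var p = Process.Start(new ProcessStartInfo(exe, args)
            {
                WorkingDirectory = RepoPath,
                UseShellExecute = false,
                RedirectStandardOutput = true
            });
            string output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
            return output;
        }
    }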
Sometimes you can avoid the long compile. Aside from improving the quality of your build files/process, you may be able to pick just a small thing to build. If the file you're working on is a .cpp file, just compile that one TU and unit-test it in isolation from the rest of the project. If it's a header (perhaps containing inline functions and templates), do the same with a small number of TUs that between them reference most of the functionality (if no such set of TUs exists, write unit tests for the header file and use those). This lets you quickly detect obvious stupid errors (like typos) that don't compile, and runs the subset of tests you believe to be relevant to the changes you're making. Once you have something that might vaguely work, do a proper build/test of the project to ensure you haven't broken anything you didn't realise was relevant.
Where a long compile/test cycle is unavoidable, I work on two things at once. For this to be efficient, one of them needs to be simple enough that it can just be dropped when the main task is ready to be resumed, and picked up again immediately when the main task's compile/test cycle is finished. This takes a bit of planning. And of course the secondary task has its own build/test cycle, so sometimes you want to work in separate checked-out copies of the source so that errors in one don't block the other.
The secondary task could for example be, "speed up the partial compilation time of the main task by reducing inter-component dependencies". Even so you may have hit a hard limit once it's taking 10 minutes just to link your program's executable, since splitting the thing into multiple dlls just as a development hack probably isn't a good idea. The key thing to avoid is for the secondary task to be, "hit SO", or this.
If a simple change triggers a 10-minute recompilation, that means you have a bad build system. Your build should recompile only the changed files and the files that depend on them.
Other than that, there are further techniques to speed up the build (for example, remove unneeded includes, and where possible use a forward declaration instead of including a header), but the speedup from these is not as important as what gets recompiled on a change.
I don't see why you can't use TDD with C++. I used CppUnit back in 2001, so I assume it's still in place.
You don't say what IDE or build tool you're using, so I can't comment on how those affect your pace. But small, incremental compiles and running unit tests are both still possible.
Perhaps looking into Cruise Control, Team City, or another hands-off build and test process would be your cup of tea. You can just check in as fast as you can and let the automated build happen on another server.
We develop a product for internal customers. We don't have a QA team, and don't use assertions. Performance is important, application size isn't.
Is it a good idea to have a single configuration (instead of separate Debug and Release ones), which has the debug information (PDBs) and also has performance optimizations enabled?
Are there any cons to this approach?
Keep both. There is a reason for having two configurations! Use the Debug one for debugging and the Release one for every-day use.
The cons of "merging" configurations are obvious: you won't get the best optimizations you could with a clean Release configuration, and debugging will be awkward. The few seconds (or minutes) needed to rebuild the project in a different configuration are worth it, trust me.
I would say that you should always keep debug and release versions separate. Release versions are for your customers; debug versions are for your developers. You say that you don't use assertions: perhaps you should? Even if you don't use assertions in your own code, you can still trigger assertions in the underlying library code, e.g. when using invalid iterators. These give the developer a warning that something's wrong. What would the user do if they saw such a message: panic, call tech support, do nothing?
The debug version is there to provide you with extra tools to fix problems before you ship the release version. You should use every tool available to you to increase the quality of your product.
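To illustrate the assertion point, a minimal C# sketch, assuming a .NET codebase (Debug.Assert calls are compiled in only when the DEBUG symbol is defined, i.e. in the Debug configuration):

    using System.Diagnostics;

    public class Account
    {
        private decimal _balance;

        public void Withdraw(decimal amount)
        {
            // Fires only in Debug builds; stripped from Release builds entirely.
            Debug.Assert(amount > 0, "Withdraw called with a non-positive amount");

            _balance -= amount;
        }
    }

With a single merged configuration you would have to choose between shipping such checks to users or not having them at all, which is exactly the trade-off separate configurations avoid.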
The debug info will be mostly worthless in an optimized build, because the optimizer transforms the program into something unrecognizable. Also, errors related to undefined behavior are easier to expose if you have a secondary configuration with different optimization flags.
Debugging and optimization tend to work against each other. The compiler's optimizations typically make debugging a pain (functions can be inlined, loops unrolled, etc), and the strictness that makes debug info worthwhile ties the compiler's hands so it can't optimize as well. Basically, if you combine the two, it's the worst of both worlds.
Performance of the finished product thus pretty much demands that it be a "release" version, not a debug version, and certainly not some odd mix of the two.
You should have at least two. One for release (performance) and one for debugging - or do you write perfect code, first time every time?
Is it OK to have a single configuration, rather than separating Debug and Release (in our case)?
It may be OK - it depends heavily on your case (though based on the details you give, I think it is very much not OK).
We don't have a QA team, and don't use assertions.
Assertions are not the issue with a debug build. They are another tool you can use (or not).
Having a QA team or not should not influence (heavily) the decision between debug and release builds (but if you do have a QA team, sooner or later you will probably want to have a debug version of your product).
A QA team will affect the quality of your product heavily. Without dedicated QA (by someone other than the people who develop the application), you have no guarantee of the quality or stability of your product, you can provide no guarantee that it does what it's supposed to do (or that it's fit for any purpose), and you cannot make meaningful measurements of your product in lots of areas.
It may be you actually don't need a QA team, but in most cases you're just depriving your development team and customers (internal or not) of a lot of necessary data.
A debug build should make it easier to - well - debug your product, track down issues, and fix them. If you are doing no organized QA, you may not even know what your main issues to fix are.
Methinks you actually do have a QA team, you just don't see it as such: your internal customers (who may even be you) are your QA team. That's a bad idea, to the degree that your application's function is important.
Working with no QA team is like building a car by yourself and taking it on the road for testing: you have no idea whether the wheels are held on properly or whether the brakes work until you are in traffic. It may be that you don't kill anyone, but I wouldn't put your company's critical data in your untested application, unless it's not really critical.
Performance is important, application size isn't.
If performance is important, who measures it? Does the measurement code belong to your released application? Do you add it and remove it in the released code?
It sounds like you're doing ad-hoc development, and with a performance-critical application, no QA team, and no dedicated debug configuration, I'd have a lot of doubt that your team can actually deliver.
I don't know your situation and there may be a lot I don't see in this so maybe it's OK.
Are there any cons to this approach?
Yes: you will either end up with diagnostics code in your release version, or have to remove the diagnostics code after fixing each problem and add it again when working on the next problem.
You should not drop the debug configuration just for the sake of optimization, though. That's not a valid argument, since you can optimize the release version and leave the debug version as it is.
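One concrete way to avoid that dilemma in a .NET project is to compile the diagnostics in only for the Debug configuration, so they never have to be removed by hand. A hedged sketch (the class and method names are purely illustrative):

    using System;
    using System.Diagnostics;

    public static class Diagnostics
    {
        // Calls to this method are removed by the compiler unless DEBUG is defined,
        // so the Release configuration ships without the diagnostic output.
        [Conditional("DEBUG")]
        public static void Trace(string message)
        {
            Console.Error.WriteLine("[diag] {0:HH:mm:ss} {1}", DateTime.Now, message);
        }
    }

    public class Importer
    {
        public void Import(string file)
        {
            Diagnostics.Trace("Importing " + file);   // disappears in Release builds
            // ... actual work ...
        }
    }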
I've always programmed alone; I'm still a student, so I've never programmed with anyone else, and I haven't even used a version control system before.
I'm working on a project now that requires knowledge of how programmers work together on a piece of software in a company.
How is the software compiled? Is it from the version control system? Is it by individual programmers? Is it periodic? Is it when someone decides to build or something? Are there any tests that are done to make sure it "works"?
Anything will do.
Actually, there are about as many variations on these processes as there are companies. Meaning: every company's conventions differ a little from everyone else's, but there are some common best practices that are generally used in most places.
Best practices that are always useful
All the source code of the project and anything that is required to build it is under version control (also called source control). Anyone should be able to build the entire project with one click.
Furthermore, unnecessary files (object files or compiled binaries) should not be added to the repository, as they can be regenerated quite easily and would just waste space in the repo.
Every developer should update and commit to version control a few times per day, mostly when they have finished the task they are working on and tested it enough to know it doesn't contain trivial bugs.
Again: anyone should be able to build the project with a single click. This is important and makes testing easy for everyone. It's a big advantage if non-programmers (e.g. the boss) are able to do so, too. (It lets them see exactly what the team is working on.)
Every developer should test the new feature or bug fix they are adding before they commit those to the repository.
Set up a server that regularly (at predetermined intervals) updates itself from the repository and tries to build everything in the entire project. If the build fails, it e-mails the team, along with the latest commits to version control (i.e. since which commit the build has been failing), to help debug the issue.
This practice is called continuous integration; when the build runs once a day, it is often called a nightly build.
(This doesn't imply that developers should not build and test the code on their own machines. As mentioned above, they should do that.)
Obviously, everyone should be familiar with the basic design/architecture of the project, so if something is needed, different members of the team don't have to reinvent the wheel. Writing reusable code is a good thing.
Some sort of communication is needed between the team members. Everyone should be aware of what the others are doing, at least a little. The more, the better. This is why the daily standup is useful in SCRUM teams.
Unit testing is a very good practice that automates testing of the basic functionality of your code (a short example follows this list).
Bug tracking software (often called issue tracking software) is a very good means of keeping track of what bugs exist and what tasks the different team members have. It is also good for testing: your project's alpha/beta testers can communicate with the development team this way.
These simple things ensure that the project doesn't spiral out of control and that everyone works on the same version of the code. The continuous integration process helps when something goes terribly wrong.
It also prevents people from committing things that don't build to the main repository.
If you want to include a new feature that would take days to implement and it would block other people from building (and testing) the project, use the branches feature of your version control.
If that is not enough, you can set it up to do automated testing, too, if that is possible with the project in question.
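For the unit-testing point above, here is what a single automated test looks like in practice. A tiny sketch using NUnit (the class under test is invented for the example):

    using NUnit.Framework;

    public static class PriceCalculator          // invented example class
    {
        public static decimal ApplyDiscount(decimal price, decimal percent)
        {
            return price - price * percent / 100m;
        }
    }

    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]
        public void TenPercentOffOneHundredIsNinety()
        {
            Assert.AreEqual(90m, PriceCalculator.ApplyDiscount(100m, 10m));
        }
    }

A CI server (or a developer before committing) runs all such tests automatically and reports any failures.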
Some more thoughts
The above list can look very heavyweight at first glance. I recommend that you adopt it on an as-needed basis: start with version control and a bug tracker, then set up the continuous integration server later, if you need it. (If it's a large project, you're going to need it very soon.) Start writing unit tests for the most important parts. If that's not enough, write more of them.
Some useful links:
Continuous integration, Daily builds are your friends, Version control, Unit testing
Examples:
For version control, I tend to use Git for my personal projects nowadays. Subversion is also popular, and for example, VisualSVN is quite easy to set up if you use a Windows server. For client, TortoiseSVN works best for many people. Here is a comparison between Git and SVN.
For bug tracking software, Jira and Bugzilla are very popular. We also used Mantis at a previous workplace.
For continuous integration software, there is TeamCity for one (CruiseControl and its .NET counterpart are also notable).
Answer to your question "who decides the main design of the project?"
Of course, that would be the lead developer.
In companies, the lead developer is the person who talks to the financial/marketing people on the project and decides the architecture according to the financial capability of the company, the planned features, the requirements from users, and the time available.
It is a complex task, and usually more than one person is involved. Sometimes members of the team are also asked to participate or brainstorm about the design of the entire project or of specific parts.
I'm a student as well, who completed a software engineering course recently where the entire semester consisted of a giant group project. Let me just start by saying we could have done with 3 people what it took 12 of us the whole semester to do. Working with people is a tough thing. Communication is key.
Definitely use a repository. Each person can remotely access all the code and add/delete/change anything. The best part about Subversion is that if someone breaks the code, you can revert to an earlier version and assess what went wrong from there. Communication is still key, though: know what your teammates are doing so that there are no conflicts. Don't sit on your code either; make quick, meaningful commits to the repository to be most effective.
I'd also recommend a bug tracker, such as Redmine. You can set up accounts for everyone and assign people tasks with different priorities, and also track and see whether people have taken care of certain problems, or whether more have come up.
And, as has been said before, unit testing will help greatly. Best of luck! Hope this helped :-)
how programmers work together on a piece of software in a company
Developers never work as a team. Teams suck. Dilbert is funny not because he's a comical character like Goofy. He's funny because he's real and people recognize the situations he's in.
Generally it is good practice not to check build artifacts into the repository. The repository will contain the source tree, build configuration, etc - anything written by a human. Software engineers will check out a copy of their code onto their local filesystem and build it locally.
It is also good practice to have unit tests which are run as part of the build process. This way, a developer will know instantly if his changes have invalidated any of the unit tests, and will have the opportunity to fix them before checking in his changes.
You might like to look into the documentation for a version control system (one of Subversion, CVS, Git, etc) and for a build system (for example, in Java there are Ant and Maven).
The big things are:
A plan — If people don't know where they're going, they won't go anywhere. The start of any project therefore needs a few people (often the project graybeards) to get into a huddle and come up with a plan; the plan need not be very detailed, but it's still required.
Version control system — Without this, you aren't working together. You also need the firm commitment that if things aren't committed, they don't count. “Oh, it's in one of my sandboxes” is just a lame excuse.
Issue tracker — You can't keep track of these things by email folders. Should definitely be database-backed.
Notification system — People need to know when things are committed to code that they maintain or comments are made to bugs they are responsible for. Email can work for this, as can IRC (provided everyone uses it, of course).
Build system — It doesn't really matter how this happens, so long as with one action you can get a complete build of the current state of things, both of your development sandbox and of the main repository. The best option for this depends on what language(s) you're using.
Test suite — A test suite helps people avoid silly errors. It needs to be as easy to run as the build (being part of the build is good). Note that tests are only a crude substitute for correctness, but they're a heck of a lot better than nothing.
Finally, you need a willingness to work together toward fulfilling the plan. That's all too often the tough part.
There is no standard for the things you're asking about. Rather, there are conventions and these depend heavily on the size and maturity of the organization. If you're in a small organization, say a couple of programmers, then things will probably be somewhat informal with the individual developers doing coding, builds, and test.
In larger organizations, there may be a dedicated build engineer and process. This kind of organization will usually do a formal build on a regular basis, say once a day, using whatever source code is checked in. The process will also usually include BVT (Build Validation Tests) and perhaps some regression tests. Developers will check out the code from the repository, work on their own portion locally, then check it in.
In the largest organizations, like Microsoft or Google, they will have a completely dedicated group and full lab that will build on a more-or-less continual basis, making the results of each run available. These organizations have very formal processes and procedures in place about what gets checked in and when, what the code review processes are, etc.
There is no cookbook for software development, but in general the version control system should be the heart of your build system, even if you are working on a project where you are the only developer. Even then, being able to revert versions and read the version log is a very welcome help in fixing bugs. This is not the only feature of a version control system, but it alone justifies installing, configuring and maintaining one.
The build can be done either by each developer when adding new code, or periodically by a "build server". The latter approach requires more setup, but helps catch build errors sooner.
The short answer - "It depends".
Currently, I'm working on a project by myself, so I'm the one who builds and uses the VCS. I know of other places where teams work on a project together by (shudder) email, or big (5+) teams using a VCS.
On that note, I highly recommend learning at least some VCS, and Joel Spolsky has a great introductory tutorial for Mercurial. Bazaar (my personal choice) is similar, and then Git is the next nearest in terms of similarity, but probably more popular than either (at least ATM). After that you have SVN which is pretty weak in comparison.
Actually, Joel talks about most of your questions - I'd recommend reading the 10 years of archives he has - it's all highly useful information, and most of it pertinent to your current and near-future situation.
Proper programming is a deep thing that benefits greatly from experience. Pair programming is like running multiple processors of awareness: one person can overlook something the other sees, and as long as they are communicating, it can result in great progress.
First of all, teams work by using repositories (which can be a professional version control system, or just a bunch of directories considered the 'live' one; however, a revision control system is the de facto standard). How the project is managed depends on how you work (waterfall, agile, etc.). If you work in iterations, you build components/plugins/modules/libraries which are self-contained, and you perform unit testing until each is signed off as finished. Working as a team means you don't all work on the entire project everywhere at the same time. Instead, you get a task to perform within one area of the project. On some occasions you have to fix code that isn't yours, but that usually happens only when strange behavior occurs. Basically, you test the parts you develop.
Let me give you an example. You are part of a team of construction workers. The architect comes with a plan for a building, the foreman looks at what is needed to construct it and then hires the workers. The mason does the walls, checks them for strength and mortars them up nicely. The electrician does all the wiring inside the building so electricity can flow. Each person has their own job. Sometimes the electrician might want to discuss with the mason whether certain walls can be carved into, but always in conjunction with the foreman.
I hope this is some help for you!
Typically, the source control system contains the source code and usually does not have the binaries. If you want to build it and run it, you would check the code out and build it on your local machine.
Some places run nightly builds to make sure everything works. There may even be some automated tests that are run server-side. If the build or anything else fails, someone is notified automatically.
A good introduction to a method of using source control is Eric Sink's Source Control HOWTO http://www.ericsink.com/scm/source_control.html
In his examples he uses SourceGear Vault since he wrote it and all, but the methods can be applied to other version control systems.
This is another good reason to look into open-source projects.
The lead developers who work on big open-source projects (like Chromium, Mozilla Firefox, MySQL, and popular GNU software) are professionals. They have a lot of experience, and these projects have evolved over the years with ideas from hundreds of such professionals.
Everything others have mentioned in their answers (a plan, a version control system, an issue tracker, a notification system, a build system, a test suite) can be found in these open-source projects.
If you really want hands-on experience, I strongly suggest going through some popular, big open-source projects, then getting the source for one (using version control) and building it yourself.
PS: I'm also a student, and getting involved in open-source projects is the best thing I've ever done. Trust me, you'll feel the same.
Is there any way to do automated profiling of unit tests when we run them via TeamCity?
The reason I'm asking is that while we should, and most of the time do, focus on not writing code with poor performance, sometimes such code slips through: it looks OK and indeed works correctly, but the routine is used in multiple places, and in some cases the elapsed run time of a method is now 10x what it was before.
This is not necessarily a bug, but it would be nice to be told "Hey, did you know? One of your unit-tests now takes 10x the time it did before you checked in this code.".
So I'm wondering, is there any way to do this?
Note that I say TeamCity because that's what will ultimately run the code, tools, whatever (if something is found), but of course it could be a wholly standalone tool that we could integrate ourselves.
I also see that TeamCity is gathering elapsed-time statistics for our unit tests, so my thought was that perhaps there is a tool that could analyze that data, comparing the latest elapsed times against statistical trends, etc.
Perhaps it's as "easy" as making our own test-runner program?
Has anyone done this, or seen/know of a potential solution for this?
I'm running TeamCity Professional Version 4.5.5 (build 9103). Does the "test" tab under each individual build do what you need? I'm seeing statistical trends for each test as a function of either each build or averaged over time.
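If you do end up writing the small comparison tool the question suggests, the core of it can be quite simple. A hedged sketch in C# (the CSV files of per-test timings, their format, and the 10x threshold are all assumptions; how you export the timing data from TeamCity or your test runner is not shown here):

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    class TestTimeRegressionCheck
    {
        // Assumed CSV format: testName,elapsedMilliseconds
        static Dictionary<string, double> Load(string path)
        {
            return File.ReadAllLines(path)
                       .Select(line => line.Split(','))
                       .ToDictionary(p => p[0], p => double.Parse(p[1]));
        }

        static void Main(string[] args)
        {
            var baseline = Load(args[0]);   // timings from a previous, "known good" run
            var current = Load(args[1]);    // timings from the run that just finished
            const double threshold = 10.0;  // flag tests that got 10x slower

            foreach (var test in current)
            {
                double before;
                if (baseline.TryGetValue(test.Key, out before) && before > 0 &&
                    test.Value / before >= threshold)
                {
                    Console.WriteLine("WARNING: {0} went from {1} ms to {2} ms",
                                      test.Key, before, test.Value);
                }
            }
        }
    }

Run as an extra build step, its warning output would surface timing regressions alongside the normal test results.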