Not really a Ruby on Rails question, but that is the framework which we are working in.
We are migrating data from a legacy system into our own system, and have been testing the code that will do the data migrations. These tests live alongside the rest of the application's tests, and so run on our build server on commits, etc.
Once we've migrated this data, these tests will seemingly be useless to us, since the code they are testing will never be run again. What's more, the tests will most likely go stale and might require maintenance, lest they break our build.
Should we just be throwing these tests out afterward? Tagging them in some way so that they don't get run after we do things for real? Something else?
Get rid of them.*
*Which is to say, let them sit in source control if you ever need to refer to them.
If it were me, I would separate out the project that does the data migration along with its tests. That way the tests don't generate noise in your current build process, and you only have to modify them if you (for some reason) touch the migration project again.
If this isn't possible, then just rip all of it out once you are done. If you ever need to get it back it should be in source control... right!?!
We've had problems recently where developers commit code to SVN that doesn't pass unit tests, fails to compile on all platforms, or even fails to compile on their own platform. While this is all picked up by our CI server (Cruise Control), and we've instituted processes to try to stop it from happening, we'd really like to be able to stop the rogue commits from happening in the first place.
Based on a few other questions around here, it seems to be a Bad Idea™ to force this as a pre-commit hook on the server side mostly due to the length of time required to build + run the tests. I did some Googling and found this (all devs use TortoiseSVN):
http://cf-bill.blogspot.com/2010/03/pre-commit-force-unit-tests-without.html
Which would solve at least two of the problems (it wouldn't build on Unix), but it doesn't reject the commit if it fails. So my questions:
Is there a way to make a pre-commit hook in TortoiseSVN cause the commit to fail?
Is there a better way to do what I'm trying to do in general?
There is absolutely no reason why your pre-commit hook can't run the Unit tests! All your pre-commit hook has to do is:
Checkout the code to a working directory
Compile everything
Run all the unit tests
Then fail the hook if the unit tests fail.
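A rough sketch of such a server-side hook in Python follows; the export, build, and test commands are placeholders, and note that this simplification checks out HEAD rather than inspecting the incoming transaction itself (which would need svnlook and is messier still):

#!/usr/bin/env python
# pre-commit: Subversion calls this with the repository path and the
# transaction name. A non-zero exit code rejects the commit, and anything
# written to stderr is shown to the committer.
import subprocess
import sys
import tempfile

REPOS, TXN = sys.argv[1], sys.argv[2]

def fail(message):
    sys.stderr.write(message + "\n")
    sys.exit(1)  # non-zero exit = commit rejected

workdir = tempfile.mkdtemp(prefix="precommit-")

# 1. Check out the code to a working directory (placeholder: exports HEAD,
#    not the incoming transaction).
if subprocess.call(["svn", "export", "file://" + REPOS, workdir + "/src"]) != 0:
    fail("pre-commit: could not export the repository")

# 2. Compile everything (placeholder build command).
if subprocess.call(["make", "all"], cwd=workdir + "/src") != 0:
    fail("pre-commit: build failed")

# 3. Run all the unit tests (placeholder test command).
if subprocess.call(["make", "test"], cwd=workdir + "/src") != 0:
    fail("pre-commit: unit tests failed")

sys.exit(0)  # allow the commit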
It's completely possible to do. And, afterwards, everyone in your development shop will hate your guts.
Remember that a pre-commit hook has to run to completion before the commit can take place and control is returned to the user.
How long does it take to do a build and run through the unit tests? 10 minutes? Imagine doing a commit and sitting there for 10 minutes waiting for your commit to take place. That's the reason why you're told not to do it.
Your continuous integration server is a great place to do your unit testing. I prefer Hudson or Jenkins over CruiseControl. They're easier to set up, and their web pages are more user-friendly. Even better, they have a variety of plugins that can help.
Developers don't like it to be known that they broke the build. Imagine if everyone in your group got an email stating you committed bad code. Wouldn't you make sure your code was good before you committed it?
Hudson/Jenkins have some nice graphs that show you the results of the unit testing, so you can see from the webpage what tests passed and failed, so it's very clear exactly what happened. (CruiseControl's webpage is harder for the average eye to parse, so these things aren't as obvious).
One of my favorite Hudson/Jenkins plugins is the Continuous Integration Game. In this plugin, users are given points for good builds, fixing unit tests, and creating more passing unit tests. They lose points for bad builds and breaking unit tests. There's a scoreboard that shows all the developers' points.
I was surprised how seriously developers took to it. Once they realized that their CI game scores were public, they became very competitive. They would complain when the build server itself failed for some odd reason, and they lost 10 points for a bad build. However, the number of failed unit tests dropped way, way down, and the number of unit tests that were written soared.
There are two approaches:
Discipline
Tools
In my experience, #1 can only get you so far.
So the solution is probably tools. In your case, the obstacle is Subversion. Replace it with a DVCS like Mercurial or Git. That will allow every developer to work on their own branch without the merge nightmares of Subversion.
Every once in a while, a developer will mark a feature or branch as "complete". That is the time to merge the feature branch into the main branch. Push that into a "staging" repository which your CI server watches. The CI server can then pull the last commit(s), compile and test them and only if this passes, push them to the main repository.
So the loop is: main repo -> developer -> staging -> main.
There are many answers here which give you the details. Start here: Mercurial workflow for ~15 developers - Should we use named branches?
[EDIT] So you say you don't have the time to solve the major problems in your development process ... I'll let you guess how that sounds to anyone... ;-)
Anyway ... Use hg convert to get a Mercurial repo out of your Subversion tree. If you have a standard setup, that shouldn't take much of your time (it will just need a lot of time on your computer but it's automatic).
Clone that repo to get a work repo. The process works like this:
Develop in your second clone. Create feature branches for that.
If you need changes from someone else, convert them into the first clone. Pull from that into your second clone (that way, you always have a "clean" copy from Subversion just in case you mess up).
Now merge the Subversion branch (default) and your feature branch. That should work much better than with Subversion.
When the merge is OK (all the tests pass for you), create a patch from a diff between the two branches.
Apply the patch to a local checkout from Subversion. It should apply without problems. If it doesn't, you can clean your local checkout and repeat. No chance to lose work here.
Commit the changes in Subversion, convert them back into repo #1, and pull into repo #2.
This sounds like a lot of work, but within a week you'll come up with a script or two to do most of it.
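For illustration, the diff-patch-commit part of that loop might be scripted roughly like this; the repository paths, branch names, and commit message here are placeholders:

#!/usr/bin/env python
# Rough sketch of the "diff, patch, commit back to Subversion" step above.
# HG_REPO, SVN_WC, and the branch names are placeholders.
import subprocess

HG_REPO = "/path/to/hg-work-repo"
SVN_WC = "/path/to/svn-working-copy"

# 1. Diff the merged result against the converted Subversion branch.
patch = subprocess.check_output(
    ["hg", "diff", "-r", "default", "-r", "feature-x"], cwd=HG_REPO)

# 2. Apply the patch to a clean Subversion working copy
#    (hg diff uses a/ and b/ prefixes, hence -p1).
patcher = subprocess.Popen(["patch", "-p1"], stdin=subprocess.PIPE, cwd=SVN_WC)
patcher.communicate(patch)
if patcher.returncode != 0:
    raise SystemExit("patch did not apply cleanly; revert the checkout and retry")

# 3. Commit the result to Subversion.
subprocess.check_call(
    ["svn", "commit", "-m", "Feature X, merged and tested in Mercurial"],
    cwd=SVN_WC)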
When you notice someone broke the build (the tests aren't passing for you anymore), undo the merge (hg update -C) and continue to work on your working feature branch.
When your colleagues complain that someone broke the build, tell them that you don't have a problem. When people start to notice that your productivity is much better despite all the hoops you have to jump through, mention that "it would be much simpler if we scrapped SVN".
The best thing to do is to work to improve the culture of your team, so that each developer feels enough of a commitment to the process that they'd be ashamed to check in without making sure it works properly, in whatever ways you've all agreed.
I've been ignoring the need to test my project for far too long.
So I spent more than a day looking for ways to implement tests for my current apps and trying to get some TDD going for new apps.
I found a lot of "tutorials" with the steps: "1. Install this 2. Install that 3. Install thisnthat 4. Done!",
but no one seems to talk about how to structure your tests, both file- and code-wise.
And no one ever talks about how to set up a CI server, or how to integrate the testing with the deployment of your project.
A lot of people mention fabric, virtualenv and nose - but no one describes how to make them work together as a whole.
What I keep finding is detailed information about how you set up a proper Rails environment with testing and CI etc...
Does anyone else feel that the Django community lacks in this area, or is it just me? :)
Oh, and does anyone else have any suggestions on how to do it?
As I see it, there are several parts to the problem.
One thing you need is good unit tests. The primary characteristic of unit tests is that they are very fast, so that they can test the combinatorial possibilities of function inputs and branch coverage. To get their speed, and to maintain isolation between tests even if they are running in parallel, unit tests should not touch the database or network or file system.
Such tests are hard to write in Django projects, because the Django ORM makes it so convenient to scatter database access calls throughout your product code. Hence any tests of Django code will inevitably hit the database. Ideally, you should approach this by limiting the database access in your product code to a thin data access layer built on top of the Django ORM, which exposes methods pertinent to your application. Another approach is for your tests to mock out the ORM calls.
In the worst case, you will give up on this: your unit tests become integration tests. They actually hit the database, cross multiple layers of your architecture, and take minutes to run, which discourages developers from running them frequently enough.
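To make the thin-data-access-layer and mocking ideas concrete, here is a small sketch (the app, module, and function names are all invented for illustration): the product code depends on one narrow function rather than on the ORM directly, so a unit test can stub that function out and never touch the database.

# dataaccess.py -- the only module that touches the ORM (names are illustrative)
from myapp.models import Invoice

def unpaid_invoices_for(customer_id):
    return list(Invoice.objects.filter(customer_id=customer_id, paid=False))


# billing.py -- product code built on top of that thin layer
from myapp import dataaccess

def outstanding_balance(customer_id):
    return sum(inv.amount for inv in dataaccess.unpaid_invoices_for(customer_id))


# test_billing.py -- a fast unit test: the ORM call is stubbed out, no database
import unittest
from unittest import mock
from myapp.billing import outstanding_balance

class OutstandingBalanceTest(unittest.TestCase):
    @mock.patch("myapp.dataaccess.unpaid_invoices_for")
    def test_sums_unpaid_amounts(self, fake_unpaid):
        fake_unpaid.return_value = [mock.Mock(amount=10), mock.Mock(amount=25)]
        self.assertEqual(outstanding_balance(42), 35)
        fake_unpaid.assert_called_once_with(42)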
The implication of this is that writing integration tests is easy - the canonical style of Django tests covers this perfectly.
The final, and hardest, part of the problem is running your acceptance tests. The defining characteristic of acceptance tests is that they invoke your application end-to-end, as a user does in production, to prove that your application actually works. Canonical Django tests using the Django test runner fall short of this. They do not issue actual HTTP requests (instead, they examine the URL config to figure out what middleware and view would get called to handle a particular request, and then they call it, in process). This means that such tests are not testing your web server config, nor any JavaScript, nor rendering in the browser, etc. To test this, you need something like Selenium.
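For example, a bare-bones Selenium acceptance test that drives a real browser against a running server might look roughly like this; the URL and element IDs are placeholders:

# A stand-alone acceptance test: real HTTP, real browser, real JavaScript.
# The URL and element IDs below are placeholders.
import unittest
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginAcceptanceTest(unittest.TestCase):
    def setUp(self):
        self.browser = webdriver.Firefox()

    def tearDown(self):
        self.browser.quit()

    def test_user_can_log_in(self):
        self.browser.get("http://staging.example.com/login/")
        self.browser.find_element(By.ID, "id_username").send_keys("alice")
        self.browser.find_element(By.ID, "id_password").send_keys("secret")
        self.browser.find_element(By.ID, "login-button").click()
        self.assertIn("Dashboard", self.browser.title)

if __name__ == "__main__":
    unittest.main()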
Additionally, we have many server-side processes, such as cron jobs, which use code from our Django project. Acceptance tests which involve these processes should invoke the jobs just like cron does, as a new process.
Both these scenarios have some problems. Firstly, you can't simply run such tests under the Django test runner. If you try to do so, you will find that the test data you have written during the test setup (either using the Django fixtures mechanism, or by simply calling "MyModel().save()" in a test) is in a transaction which your product code, running in a different process, is not party to. So your tests have to commit the changes they make before the product code can see them. This interferes with the clean-up between tests that Django's test runner helpfully does, so you have to switch it into a different mode, which does explicit deletes rather than rolling back. Sadly, this is much slower.
At last night's London Django user group, a couple of Django core developers assured me that this scenario also has other complications (which I confess I don't know the details of), and that it is best to avoid them by not running acceptance tests within the Django test runner at all, but creating them as a completely stand-alone set of tests.
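If you do stay inside the test runner for a test like this, the switch described above is from TestCase to TransactionTestCase, so the data really is committed before the external process runs. A rough sketch, with an invented model and management command:

# Sketch: with TransactionTestCase the test data is actually committed, so a
# separate process can see it. The Report model and the send_reports command
# are invented names for illustration.
import subprocess

from django.test import TransactionTestCase
from myapp.models import Report

class NightlyJobTest(TransactionTestCase):
    def test_job_marks_reports_sent(self):
        report = Report.objects.create(title="weekly", sent=False)

        # Run the job as its own process, exactly as cron would.
        # (It must also be pointed at the test database -- see below.)
        subprocess.check_call(["python", "manage.py", "send_reports"])

        self.assertTrue(Report.objects.get(pk=report.pk).sent)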
If you do this, then your immediate problem is that you have lost the benefits the Django test runner provides: namely, it creates a test database and cleans it between each test. You will have to create some equivalent mechanism for yourself. You will need your product code to run against a test database if it is being invoked as part of a test. You need to be absolutely certain that if product code is run as part of a test, even on a production box, then it can NEVER accidentally touch the production database, so this mechanism has to be absolutely failsafe. Forgetting to set an environment variable in a test setup, for example, should not cause a blooper in this regard.
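One possible shape for that failsafe, sketched here with invented settings-module and database names: a stand-alone harness forces an explicit test settings module into the environment (so any product-code process it spawns inherits it too) and refuses to go any further if the configured database doesn't look like a test database.

# run_acceptance_tests.py (an invented harness name) -- stand-alone, outside
# the Django test runner. Settings module and database names are placeholders.
import os

# Everything in this process, and every product-code process it spawns,
# inherits this environment variable, so they all use the same test settings.
os.environ["DJANGO_SETTINGS_MODULE"] = "myproject.settings_test"

import django
django.setup()

from django.conf import settings

# The failsafe: stop dead unless the configured database is unmistakably
# a test database, no matter which box this is running on.
db_name = settings.DATABASES["default"]["NAME"]
if not db_name.startswith("test_"):
    raise SystemExit("Refusing to run acceptance tests against %r" % db_name)

# ... create or reset the test schema, run the tests, then tear it all down ...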
This is all before even considering the complications that arise from deployment, having parts of your project in different repos, dependent on each other, creating pip-installable packages, etc.
All in all, I'd love to hear from someone who feels they have found a good solution to this problem. It is far from a trivial issue as some commenters imply.
Harry Percival is creating a Django / TDD / Selenium tutorial (and an accompanying workshop, if you live in London). His book reads like a hands-on tutorial and goes into great detail on the subject:
https://www.obeythetestinggoat.com/book/part1.harry.html
In my experience, fine-grained unit tests for web apps are not worth it; the setup/teardown is too expensive and the tests are too fragile. The only exception is isolated components, especially those with clear inputs and outputs and complicated algorithms. Do unit-test those down to the smallest details.
I had the best testing experience using a semi-functional testing tool called testbrowser, which simulates browser actions in Python. For integration with Django, install the homophony app (disclaimer: I am the author of the app).
Testbrowser may be a little too coarse for test-driven development, but it's the best testing tool of the ones I have used so far. Most importantly, it scales up fairly well, whereas unit tests and browser-based functional test tools tend to become very brittle as your app grows in size.
As for a CI tool, go with Buildbot or Jenkins.
I use a combination of Django's excellent extension of the Python unittest framework for testing APIs / models / helper functions, and Selenium for in-browser testing. Selenium has great instructions for how to set it up and write tests in Python.
In one of my Django projects I have a suite of unit tests based on the TransactionTestCase class (which takes much longer than TestCase). It is impossible to run the tests after each change in the code because a full run takes more than half an hour. We looked some time ago for an easy continuous integration tool that would allow us to (at least) run the tests on a test server and send emails with errors to the team members (we of course have a code repository, and we don't need auto deployment at the moment). Do you have working solutions or ideas for how to accomplish this?
We wrote a 'super extra simple CI server' which does nothing more than run the tests and send email reports (it is integrated with our code repository). But since we've had some problems with our not-ideal simple tool recently, I'm wondering whether you have successfully handled similar scenarios in your working environment?
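(Roughly speaking, the whole of our tool is just this shape - update, run the suite, email the output if it fails - sketched here with placeholder paths and addresses:)

# Minimal "run the tests and email the failures" loop; the paths, addresses,
# and SMTP host below are placeholders.
import smtplib
import subprocess
from email.mime.text import MIMEText

PROJECT_DIR = "/srv/checkouts/myproject"
RECIPIENTS = ["team@example.com"]

subprocess.call(["svn", "update"], cwd=PROJECT_DIR)  # or hg/git pull

result = subprocess.run(
    ["python", "manage.py", "test"],
    cwd=PROJECT_DIR, capture_output=True, text=True)

if result.returncode != 0:
    msg = MIMEText(result.stdout + "\n" + result.stderr)
    msg["Subject"] = "Test run FAILED"
    msg["From"] = "ci@example.com"
    msg["To"] = ", ".join(RECIPIENTS)
    server = smtplib.SMTP("localhost")
    server.sendmail(msg["From"], RECIPIENTS, msg.as_string())
    server.quit()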
I'm looking for something lightweight, easy to install and use.
Disclaimer: I don't know Django. But I do know that I use Hudson as my continuous integration tool for a number of languages and platforms. I found it easy to install and configure on both Windows and Linux (set & forget) and was impressed with the number of plugins available.
Basically, if what you want to do can be automated by a script file, then you can use Hudson. It really is worth checking out.
It took me only a few minutes to set it up so that I get an email if, and only if, something goes wrong, although you might want to do something else (for which there probably exists a plugin). Hudson also plays well with other tools like Bugzilla, all major version control tools, etc.
Have you considered having two kinds of tests, basic and advanced, and adding an additional Django management command that runs only the basic, fast tests? That way you could do basic testing on small changes and run the full test suite only when you are about to commit/push changes.
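A minimal version of such a command might look like the sketch below (the app and test labels are placeholders); developers then run the fast suite locally and leave the full run, including the slow TransactionTestCase-based tests, to the CI server.

# myapp/management/commands/test_basic.py -- runs only the fast "basic" tests.
# The test labels below are placeholders for whatever you designate as basic.
from django.core.management import call_command
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Run only the fast, basic test suite"

    def handle(self, *args, **options):
        call_command("test", "myapp.tests.basic", "otherapp.tests.basic")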
I am seriously having a very non-pleasant time testing using Grails. I will describe my experience, and I'd like to know if there's a better way.
The first problem I have with testing is that Grails doesn't give immediate feedback to the developer when .save() fails inside of an integration test. So let's say you have a domain class with 12 fields, and 1 of them is violating a constraint and you don't know it when you create the instance... it just doesn't save. Naturally, the test code afterward is going to fail.
This is most troublesome because the thingy under test is probably fine... and the real risk and pain is the setup code for the test itself.
So, I've tried to develop the habit of using .save(failOnError: true) to avoid this problem, but that's not something that can be easily enforced on everyone working on the project... and it's kind of bloaty. It'd be nice to turn this on automatically for code that is running as part of a unit test.
Integration tests run slowly. I cannot understand how one integration test that saves one object takes 15-20 seconds to run. With some careful test planning, I've been able to get 1000 tests talking to an actual database and doing DbUnit dumps after every test to run in about the same time! This is dumb.
It is hard to run all the unit tests and not integration tests in IDEA.
Integration tests are a massive pain. IDEA actually shows a GREEN BAR when integration tests fail. The output given by Grails indicates that something failed, but it doesn't say what it was. It says to look in the test reports... which forces the developer to go digging through the file system to hunt the stupid HTML file down. What a pain.
Then once you've got the HTML file and clicked through to the failing test, it'll tell you a line number. Since these reports are not in the IDE, you can't just click the stack trace to go to that line of code... you gotta go back and find it yourself. ARGGH!#!#!
Maybe people put up with this, but I refuse. Testing should not be this painful. It should be fast and painless, or people won't do it.
Please help. What is the solution? Rails instead of Grails? Something else entirely? I love the Grails framework, but they never demo their testing for a reason. They have a snazzy framework, but the testing is painful.
After having used Scala for the last 1.5 months, and being totally spoiled by ScalaTest... I can't go back to this.
You can set this property in your config file:
grails.gorm.failOnError=true
That will make it a system wide default for save (which you can override with .save(failOnError: false) if you want).
If you only want this behavior in tests, you can put it in the environment-specific stanza in Config.groovy. I actually like it as a project-wide behavior.
I'm sure there's a way that you could turn failOnError on/off within a defined scope, but I haven't investigated how to do it yet (it might be a good blog post; I'll update this if I write one).
I'm not sure what you've got misconfigured in IDEA, but it shows me a red bar when my tests fail, and I can click on the lines in the stack trace to get right to the issues. The latest version of IntelliJ even collapses down the majority of metaclass cruft that isn't interesting when trying to fix issues.
If you haven't already done this to generate your project, I'd try wiping away your existing .ipr/.iml/.iws/.idea files and running this command to have Grails regenerate your configuration:
grails integrate-with --intellij
Then run the .ipr file that gets generated.
I've been using MSTest so far for my unit tests, and found that it would sometimes randomly break my builds for no reason. The builds would fail in VS but compile fine in MSBuild, with an error like "option strict does not allow IFoo to cast to type IFoo". I believe I have finally fixed it, but after the bug coming back and struggling to make it go away again, with little help from MS, it left a bad taste in my mouth. I also noticed, looking at this forum and other blogs, that most people are using NUnit, xUnit, or MbUnit. We are on VS2008 at work, BTW. So now I am looking to explore other options.
I'm working on moving our team to start doing TDD and real unit testing and have some training planned, but first I would like to come up with a set of standard tools and best practices. To this end I've been looking online to come up with the right infrastructure for both a build server and dev machines... I was looking at the Typemock website, as I've heard great things about their mocking framework, and noticed that they seem to promote MSTest, and even have some links of people moving TO MSTest from NUnit.
This is making me re-think my decision... so I guess I'm asking: is anyone using MSTest as part of their TDD infrastructure? Does it have any known limitations if I want to integrate with a build/CI server, code coverage, or any other kind of TDD tool I may need? I did search these forums and mostly found people comparing the 3rd-party frameworks to each other and not even giving MSTest much of a chance... Is there a good reason why?
Thanks for the advice
EDIT: Thanks to the replies in this thread, I've confirmed MSTest works for my purposes and integrates gracefully with CI tools and build servers.
But does anyone have any experience with FinalBuilder? This is the tool I'd like us to use for the build scripts, to avoid having to write a ton of XML compared to other build tools. Any limitations here that I should be aware of before committing to MSTest?
I should also note - we are using VSS =(. I'm hoping we can ax this soon - hopefully as part of, maybe even the first step, of setting up all of this infrastructure.
At Safewhere we currently use MSTest for TDD, and it works out okay.
Personally, I love the IDE integration, but dislike the API. If it ever becomes possible to integrate xUnit.NET with the VS test runner, we will migrate very soon thereafter.
At least with TFS, MSTest works pretty well as part of our CI.
All in all I find that MSTest works adequately for me, but I don't cling to it.
If you are evaluating mock libraries, take a look at this comparison.
I've been using MS Test since VS 2008 came out, but I haven't managed to strong-arm anything like TDD or CI here at work, although I've messed with Cruise Control a little in an attempt to build a CI server on my local box.
In general I've found MS Test to be pretty decent for testing locally, but there are some pain points for institutional use.
First, MS Test adds quite a few things that probably don't belong in source control. The .VSMDI files are particularly annoying; just running MS Test creates anywhere from 1 to 5 of them and adds them to the solution file, which means churn on your .SLN in source control, and churn of that sort is bad.
I understand the supposed point behind these extra files -- tracking test run history and such -- but I don't find them particularly useful for anything but a single developer. You should use your build service and CI for that sort of thing!
Second, you either must have Team Foundation Server to run your unit tests as part of CI, or you have to have a copy of Visual Studio installed on your build server if you use, for example, Cruise Control.NET. See this Stack Overflow question for details.
In general, there's nothing wrong with MS Test. But going CI will not be as smooth as it could be.
I have been using MSTest very successfully in our company. We are currently setting up standardised build processes, and so far we have had good success with TeamCity. For continuous integration, we use out-of-the-box TeamCity configurations. For the actual release builds, we set up large MSBuild scripts that automate the entire process.
I really like MSTest because of the IDE integration and also because all our devs can use it without installing any 3rd-party dependencies. I would not recommend switching just because of the problem you are experiencing. I have come full circle: we went over to NUnit and then came back again. These frameworks are all much the same at the end of the day, so pick the one that is easiest for most of your devs to get access to and start using.
What I suspect your problem might be... it sounds like an obscure problem I have had before, where incorrect DLL references (e.g. adding explicit references (via Browse) to projects in your solution instead of using project references) lead to out-of-date problems that only show up after clean checkouts or builds.
The other really suspect issue that I have found before is if you have some visual component or control with a public property of some custom type that is being serialised into the form's .resx file. I typically need to flag such properties with a DesignerSerializationVisibility.Hidden attribute. This means that the designer will not try to generate code that sets the property value (which is typically some object graph). Just a thought. Could be way out.
I trust the tools; they don't really lie about there being a genuine problem, they just misrepresent it or report it as something completely obscure. It sounds to me like that's what you have. I suspect this because the error message doesn't make sense if all is in order, but it does make sense if some piece of code has loaded an out-of-date or modified version of the DLL at that point.
I have successfully deployed several FinalBuilder installations and the customers have been very happy with the outcome. I can highly recommend it.