I do TDD, and I've been fairly loose in organizing my unit tests. I tend to start with a file representing the next story or chunk of functionality and write all the unit-tests to make that work.
Of course, if I'm introducing a new class, I usually make a separate unit-test module or file for that class, but I don't organize the tests themselves into any higher level structure. The result is I write code fast and I believe my actual program is reasonably well structured, but the unit tests themselves are "messy". Especially, their structure tends to recapitulate the phylogeny of the development process. Sometimes I see myself as trading laziness in the code for laziness in the tests.
How big a problem is this? Who here continually refactors and reorganizes their unit tests to try to improve their overall structure? Any tips for this? What should the overall structure of tests look like?
(Note that I'm not so much asking the "how many assertions per function" question asked here: How many unit tests should I write per function/method? I'm talking about the bigger picture.)
Divide your tests into two sets:
functional tests
unit tests
Functional tests are per-user story. Unit tests are per-class. The former check that you actually support the story, the latter exercise and document your functionality.
There is one directory (package) for functional tests. Unit tests should be closely bound to the functionality they exercise (so they're scattered across the codebase). You move them around and refactor them as you move and refactor your code.
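As an illustration (all names hypothetical), such a layout might look like:

src/test/functional/              one package holding all functional tests
    PlaceOrderStoryTest           (one test class per user story)
    CancelOrderStoryTest
src/test/com/example/billing/     unit tests mirror the production packages
    InvoiceTest                   (lives alongside the Invoice class it exercises)
    TaxCalculatorTest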
The less important part is organizing the tests.
I start by putting the tests into a class that relates to the class under test, so com.jeffreyfredrick.Foo has a test com.jeffreyfredrick.FooTest. But if some subset of those tests needs a different setup, then I'll move them into their own test class. I put my tests into a separate source directory but keep them in the same project.
The more important part is refactoring the tests.
Yes, I try to refactor my tests as I go. The goal is to remove duplication while remaining declarative and easy to read. This is true both within test classes and across test classes. Within a test class I might have a parameterized method for creating a test fake (mock or stub). My test fakes are usually inner classes within the test class, but if I find there's a need, I'll pull them out for reuse across tests. I'll also create a TestUtil class with common methods when it seems appropriate.
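A minimal Java sketch of that idea (all names hypothetical, JUnit 4 assumed):

import static org.junit.Assert.assertFalse;

import org.junit.Test;

// Hypothetical production types, just enough to compile.
interface PaymentGateway {
    boolean charge(int amountInCents);
}

class Order {}

class OrderService {
    private final PaymentGateway gateway;
    OrderService(PaymentGateway gateway) { this.gateway = gateway; }
    boolean place(Order order) { return gateway.charge(100); }
}

public class OrderServiceTest {

    // The fake lives as an inner class until other tests need it.
    private static class StubPaymentGateway implements PaymentGateway {
        private final boolean approves;
        StubPaymentGateway(boolean approves) { this.approves = approves; }
        public boolean charge(int amountInCents) { return approves; }
    }

    // Parameterized creation method: each test states only what varies.
    private OrderService serviceWhosePaymentsAre(boolean approved) {
        return new OrderService(new StubPaymentGateway(approved));
    }

    @Test
    public void rejectsOrderWhenPaymentIsDeclined() {
        assertFalse(serviceWhosePaymentsAre(false).place(new Order()));
    }
}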
I think refactoring your tests is important to the long-term success of unit testing on large projects. Have you ever heard people complaining about how their tests are too brittle or prevent them from making changes? You don't want to be in a position where changing the behavior of a class means making dozens or even hundreds of changes to your tests. And just like with code, you achieve this through refactoring and keeping the tests clean.
Tests are code.
I write a unit test class for each class in the application, and keep the test classes organized in the same package structure as the classes under test.
Inside each test class I don't really have much organizational structure. Each one only has a handful of methods for each public method in the class under test, so I've never had any problem finding what I'm looking for.
For every class in the software, I maintain a unit test class. The unit test classes follow the same package hierarchy as the classes which are tested.
I keep my unit test code in a separate project. Some people also prefer to keep their test code in the same project under a separate source directory called 'test'. You could follow whatever feels comfortable to you.
I try to look at the unit tests as a project on their own. As with any project the organisation should follow some internal logic. It does not however have to be specific or formally defined - anything you're comfortable with is OK as long as it keeps your project well-organised and clean.
So for the unit tests I usually either follow the main project code structure or (sometimes, when the situation calls for it) focus on the functional areas instead.
Leaving them in one heap is, as you might imagine, messy and difficult to maintain.
Related
Let's say that I am refactoring some classes that have unit tests already written. Let's assume that the test coverage is reasonable in the sense that it covers most use cases.
While refactoring I change some implementations: move some variables, add or remove a few, abstract things out into functions, etc. The API of the classes and their functions has remained the same, though.
Should I be adding tests when I am refactoring these classes? Or should I add a new test for every bit of refactoring? This is what I usually do when I am building up code rather than refactoring.
PS: Apologies if this is really vague.
Unit tests are usually working/design/use-case specifications of how your refactored system under test or class under test (e.g. your classes) should really work. With that in mind, I would simply:
Write the test according to your specification
Refactor the code to adhere to the specification
Check the assertion results of the test
In practice I have come to the conclusion that you don't need to test each and every line of your code just for the sake of a high percentage of code coverage, but you should make sure you always test the parts of your code where the behaviour or logic lies.
If your changes are verified by current tests, there's no need to add new ones. Check your code coverage. If there are holes in your new code, that means you made an unverified functional change.
New tests might be valuable if a newly extracted class is moved to another project, where you don't have the original tests.
I've written a 220-line class with 5 public methods. I have a unit testing class that runs 28 tests on this class and takes up over 1200 lines of code, but this is mostly due to repeated code used in setting the tests up. This code tests the DAL in my project to ensure it interacts with the database correctly and that the stored procedures involved run correctly. It seems like I have done a lot of work to test very little code. I am using mocks with Rhino Mocks to avoid writing my own stubs where possible.
Is this a typical unit testing experience?
In what way typical?
If you mean you have more unit test code than actual code, then yes. But you should treat your unit test code the same way as your 'real' code in that you should remove duplication and refactor it until it's as lean as possible/desirable.
Also if you're testing the DAL and the interaction with a real database then what you have there is an integration test.
EDIT
I've recently taken to writing unit test base classes for common testing patterns; I keep a lot of setup code and helper methods in there. My most recent unit test base class is a generic one which allows me to test wcf-web-api classes very easily. So my actual test classes are very lean and to the point. YMMV.
It is fairly common that unit test classes contain more LOC than actual tested classes. That's reasonable considering setting up dependencies, preparing faked data and all the unit testing related fuss.
However, testing DAL in terms of interacting with database and checking if correct procedures are invoked smells like an integration test. You might want to rethink what you want to do. With unit testing, all the DB-talking should be mocked/stubbed.
If you're having issues with 1200 lines of code, you can break your tests up into contexts, e.g. with every context matching a particular part of the tested class (a public method, a set of properties, and so on).
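For instance, a hypothetical sketch (JUnit 4, invented names) with one small test class per context:

import static org.junit.Assert.assertTrue;

import org.junit.Before;
import org.junit.Test;

// Minimal stand-in for the class under test, just enough to compile.
class Parser {
    java.util.List<String> parse(String input) {
        return input.isEmpty()
                ? java.util.Collections.<String>emptyList()
                : java.util.Arrays.asList(input.split(","));
    }
}

// ParserParseTests.java -- everything about parse() lives here,
// with setup specific to this context.
public class ParserParseTests {
    private Parser parser;

    @Before
    public void setUp() {
        parser = new Parser();
    }

    @Test
    public void emptyInputYieldsEmptyResult() {
        assertTrue(parser.parse("").isEmpty());
    }
}

// A sibling file, ParserValidateTests.java, would hold the
// validate() context with its own setup.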
Edit:
Just to add an example that others do this as well: you can check the sources of the Aggregate and AggregateTests classes from the Edulinq project. 15 tests to test 3 public methods, with the test class being twice as big as the tested one.
Yes this is quite normal for unit testing.
The size of the code required to run tests is often underestimated, particularly code that requires a lot of set up boilerplate, such as database access.
While you could try to refactor the set up code into separate methods, this is quite normal for the situation you are describing.
With 28 tests, your 1200 lines work out to about 43 per test. Considering you are repeating your setup code, this is quite reasonable.
28 tests for one class sounds like the class is doing too many things.
You might want to try writing some tests that run directly against the database and consider whether that's better. I find it's more work to get the database set up, but then less work since I don't need to fake out the lower layers. The tests run slower, but they verify things more completely. Of course, it's not really a unit test at that point.
I am new to unit testing, but I tend to believe in beautifully written code and properly designed architectures.
My question is: aren't unit tests focusing too much on dependencies between objects? What do you do when your unit test fails because a dependency your method used to call is no longer called (a design decision), or because your method now calls another method or dependency (again a design decision)? Do you redesign your tests? If so, then unit testing helps very little to reduce coupling and improve cohesion between components.
Maybe my opinion is too broad, but in general, how do people treat dependencies in well-mannered unit tests? I guess the best way would be to have no dependencies at all, with every method relying only on the parameters given to it, but this is hardly the case in reality. In addition, faking every dependency method for every possible call is also somewhat subjective and time-wasting, because at a future point in time the class under test may simply no longer need the dependency.
I would suggest that you look at Test Driven Development (TDD), as I believe this technique will help you with your design issues. By writing unit tests before writing the production code, you will need to think about how to make your production code testable. This is better than the test-later approach, where you write the production code first and then try to shoehorn tests around it.
To deal with dependencies, think about what dependencies are causing you problems.
External Dependencies
If your tests use an external resource, such as a file, then you are writing an integration test, not a unit test. I've written many tests that use an external file, and I simply created a copy of the file in my test project. This file copy will contain dummy data required for my tests.
If your test requires a database, then again you're writing an integration test. Personally, I create a local copy of the database on my PC and run my tests against it.
Object Dependencies
If you are worried about code dependencies (e.g. your test will fail if a private method's signature is changed) then you are testing at the wrong level of abstraction. By that I mean make sure that your tests are calling public API's and not private ones. To cement this point, use interfaces for your objects to ensure an expected contract for an object that implements it.
I would also recommend that you try using a mocking framework such as RhinoMocks, Moq or TypeMock
A mocking framework will help you remove the dependency on, for example, having a database available for your tests. I personally use TypeMock; it's not cheap, but it's by far the most powerful tool out there.
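As a rough Java analogue (hypothetical names; EasyMock, which another answer on this page mentions, plays the role of those .NET tools):

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical collaborator and class under test, just enough to compile.
interface UserRepository {
    String findName(int id);
}

class GreetingService {
    private final UserRepository repository;
    GreetingService(UserRepository repository) { this.repository = repository; }
    String greet(int id) { return "Hello, " + repository.findName(id); }
}

public class GreetingServiceTest {
    @Test
    public void greetsUserByName() {
        // The mock stands in for the database-backed repository.
        UserRepository repository = createMock(UserRepository.class);
        expect(repository.findName(42)).andReturn("Ada");
        replay(repository);

        assertEquals("Hello, Ada", new GreetingService(repository).greet(42));
        verify(repository); // the collaborator was used as expected
    }
}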
If you are talking about unit testing, you have no dependencies, because a unit test tests only a single class (Java, C++, Ruby, Python). What you are talking about sounds more like integration testing, which is different. Furthermore, if you have too many dependencies, your coupling is too high, which is not very good, though not always avoidable.
Unit tests shall test the behavior, not the implementation. That way, one can rely on the unit tests when changing the implementation, or when refactoring the code. Removing a dependency (by inlining the class, for instance) does not break the test.
Testing the implementation leads to brittle tests that get in the way when refactoring.
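A minimal sketch of the contrast (hypothetical names, JUnit 4):

import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

// Hypothetical class under test.
class Cache {
    private final Map<String, String> store = new HashMap<String, String>();
    void put(String key, String value) { store.put(key, value); }
    String get(String key) { return store.get(key); }
}

public class CacheTest {
    // Behavior-focused: only observable results are asserted, so the
    // backing HashMap can be swapped out without breaking the test.
    @Test
    public void storedValueCanBeRetrieved() {
        Cache cache = new Cache();
        cache.put("key", "value");
        assertEquals("value", cache.get("key"));
    }

    // An implementation-focused test would instead reach into the internal
    // map, and would break on any change to the storage strategy even
    // though the behavior stayed the same.
}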
I've been working on an ASP.NET MVC project for about 8 months now. For the most part I've been using TDD; some aspects were covered by unit tests only after I had written the actual code. In total, the project has pretty good test coverage.
I'm quite pleased with the results so far. Refactoring really is much easier and my tests have helped me uncover quite a few bugs even before I ran my software the first time. Also, I have developed more sophisticated fakes and helpers to help me minimize the testing code.
However, what I don't really like is the fact that I frequently find myself having to update existing unit tests to account for refactorings I made to the software. Refactoring the software is now quick and painless, but refactoring my unit tests is quite boring and tedious. In fact the cost of maintaining my unit tests is higher than the cost of writing them in the first place.
I am wondering whether I might be doing something wrong or if this relation of cost of test development vs. test maintenance is normal. I've already tried to write as many tests as possible so that they cover my user stories instead of systematically covering my objects' interfaces, as suggested in this blog article.
Also, do you have any further tips on how to write TDD tests so that refactoring breaks as few tests as possible?
Edit: As Henning and tvanfosson correctly remarked, it's usually the setup part that is most expensive to write and maintain. Broken tests are (in my experience) usually a result of a refactoring to the domain model that is not compatible with the setup part of those tests.
This is a well-known problem that can be addressed by writing tests according to best practices. These practices are described in the excellent xUnit Test Patterns. This book describes test smells that lead to unmaintainable tests, as well as providing guidance on how to write maintainable unit tests.
After having followed those patterns for a long time, I wrote AutoFixture which is an open source library that encapsulates a lot of those core patterns.
It works as a Test Data Builder, but can also be wired up to work as an Auto-Mocking container and do many other strange and wonderful things.
It helps a lot with regard to maintenance because it raises the abstraction level of writing a test considerably. Tests become a lot more declarative because you can state that you want an instance of a certain type instead of explicitly writing how it is created.
Imagine that you have a class with this constructor signature:
public MyClass(Foo foo, Bar bar, Sgryt sgryt)
As long as AutoFixture can resolve all the constructor arguments, you can simply create a new instance like this:
var sut = fixture.CreateAnonymous<MyClass>();
The major benefit is that if you decide to refactor the MyClass constructor, no tests break because AutoFixture will figure it out for you.
That's just a glimpse of what AutoFixture can do. It's a stand-alone library, so it will work with your unit testing framework of choice.
You might be writing your unit tests too close to your classes. What you should do is test public APIs. By public APIs, I don't mean public methods on all your classes; I mean your public controllers.
By having your tests mimic how a user would interact with your controllers without ever touching your model classes or helper functions directly, you allow yourself to refactor your code without having to refactor your tests. Of course, sometimes even your public API changes, and then you'll still have to change your tests, but that will happen far less often.
The downside of this approach is that you'll often have to go through complex controller setup just to test a new tiny helper function you want to introduce, but I think that in the end, it's worth it. Moreover, you'll end up organizing your test code in a smarter way, making that setup code easier to write.
This article helped me a lot: http://msdn.microsoft.com/en-us/magazine/cc163665.aspx
On the other hand, there's no miracle method to avoid refactoring unit tests.
Everything comes with a price, and that's especially true if you want to do unit testing.
What I think he means is that it is the setup part that is quite tedious to maintain.
We're having the exact same problem, especially when we introduce new dependencies, split dependencies, or otherwise change how the code is supposed to be used.
For the most part, when I write and maintain unit tests, I spend my time in writing the setup/arrange code.
In many of our tests we have the exact same setup code, and we've sometimes used private helper methods to do the actual setup, but with different values.
However, that isn't really a good thing, because we still have to create all those values in every test. So we are now looking into writing our tests in a more specification/BDD style, which should help reduce the setup code and therefore the amount of time spent maintaining the tests.
A few resources you can check out: http://elegantcode.com/2009/12/22/specifications/, and a BDD style of testing with MSpec: http://elegantcode.com/2009/07/05/mspec-take-2/
Most of the time I see such refactorings affecting the set up of the unit test, frequently involving adding dependencies or changing expectations on these dependencies. These dependencies may be introduced by later features but affect earlier tests. In these cases I've found it to be very useful to refactor the set up code so that it is shared by multiple tests (parameterized so that it can be flexibly configured). Then when I need to make a change for a new feature that affects the set up, I only need to refactor the tests in a single place.
Two areas that I focus on when I start to feel refactoring pain around setup are making my unit tests more specific and making my methods/classes smaller. Essentially, I find I am getting away from SOLID/SRP, or I have tests that are trying to do too much.
It is worth noting that I try to stay away from BDD/context-spec the further I get from the UI. Testing a behavior is great, but it always leads me (perhaps I am not doing it right?) to bigger, messier tests, with more context specification than I like.
Another way I have seen this happen to me is as code debt creeping into methods that grow their business logic over time. Of course there are always big methods and classes with multiple dependencies, but the fewer I have, the less test rewriting I have to do.
If you find yourself creating complicated test scaffolding involving deep object graphs like Russian dolls, consider refactoring your code so that the Class Under Test gets exactly what it needs in its constructor/arguments, rather than having it walk the graph.
Instead of:

public class A {
    public void foo(B b) {
        String someField = b.getC().getD().getSomeField();
        // ...
    }
}

Change it to:

public class A {
    public void foo(String someField) {
        // ...
    }
}
Then your test setup becomes trivial.
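A sketch of the resulting test (reusing the hypothetical classes above):

public class ATest {
    @org.junit.Test
    public void fooAcceptsItsDataDirectly() {
        new A().foo("some value"); // no B, C or D graphs to build
    }
}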
We have tried to introduce unit testing to our current project but it doesn't seem to be working. The extra code seems to have become a maintenance headache as when our internal Framework changes we have to go around and fix any unit tests that hang off it.
We have an abstract base class for unit testing our controllers that acts as a template calling into the child classes' abstract method implementations i.e. Framework calls Initialize so our controller classes all have their own Initialize method.
I used to be an advocate of unit testing but it doesn't seem to be working on our current project.
Can anyone help identify the problem and how we can make unit tests work for us rather than against us?
Tips:
Avoid writing procedural code
Tests can be a bear to maintain if they're written against procedural-style code that relies heavily on global state or lies deep in the body of an ugly method.
If you're writing code in an OO language, use OO constructs effectively to reduce this.
Avoid global state if at all possible.
Avoid statics as they tend to ripple through your codebase and eventually cause things to be static that shouldn't be. They also bloat your test context (see below).
Exploit polymorphism effectively to prevent excessive ifs and flags
Find what changes, encapsulate it and separate it from what stays the same.
There are choke points in code that change a lot more frequently than other pieces. Do this in your codebase and your tests will become more healthy.
Good encapsulation leads to good, loosely coupled designs.
Refactor and modularize.
Keep tests small and focused.
The larger the context surrounding a test, the more difficult it will be to maintain.
Do whatever you can to shrink tests and the surrounding context in which they are executed.
Use composed method refactoring to test smaller chunks of code.
Are you using a newer testing framework like TestNG or JUnit4?
They allow you to remove duplication in tests by providing you with more fine-grained hooks into the test lifecycle.
Investigate using test doubles (mocks, fakes, stubs) to reduce the size of the test context.
Investigate the Test Data Builder pattern.
Remove duplication from tests, but make sure they retain focus.
You probably won't be able to remove all duplication, but still try to remove it where it's causing pain. Make sure you don't remove so much duplication that someone can't come in and tell what the test does at a glance. (See Paul Wheaton's "Evil Unit Tests" article for an alternative explanation of the same concept.)
No one will want to fix a test if they can't figure out what it's doing.
Follow the Arrange, Act, Assert Pattern.
Use only one assertion per test.
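For example, a minimal JUnit 4 test laid out in that shape:

import static org.junit.Assert.assertEquals;

import java.util.ArrayDeque;
import java.util.Deque;

import org.junit.Test;

public class StackTest {
    @Test
    public void popReturnsTheLastPushedValue() {
        // Arrange
        Deque<Integer> stack = new ArrayDeque<Integer>();
        stack.push(42);

        // Act
        int popped = stack.pop();

        // Assert: exactly one assertion, on the behavior under test
        assertEquals(42, popped);
    }
}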
Test at the right level to what you're trying to verify.
Think about the complexity involved in a record-and-playback Selenium test and what could change under you versus testing a single method.
Isolate dependencies from one another.
Use dependency injection/inversion of control.
Use test doubles to initialize an object for testing, and make sure you're testing single units of code in isolation.
Make sure you're writing relevant tests
"Spring the Trap" by introducing a bug on purpose and make sure it gets caught by a test.
See also: Integration Tests Are A Scam
Know when to use State Based vs Interaction Based Testing
True unit tests need true isolation. Unit tests don't hit a database or open sockets; mock those interactions instead of performing them. Verify that you talk to your collaborators correctly, not that the proper result from this method call was "42".
Demonstrate Test-Driving Code
It's up for debate whether or not a given team will take to test-driving all code, or writing "tests first" for every line of code. But should they write at least some tests first? Absolutely. There are scenarios in which test-first is undoubtedly the best way to approach a problem.
Try this exercise: TDD as if you meant it (Another Description)
See also: Test Driven Development and the Scientific Method
Resources:
Test Driven by Lasse Koskela
Growing OO Software, Guided by Tests by Steve Freeman and Nat Pryce
Working Effectively with Legacy Code by Michael Feathers
Specification By Example by Gojko Adzic
Blogs to check out: Jay Fields, Andy Glover, Nat Pryce
As mentioned in other answers already:
XUnit Patterns
Test Smells
Google Testing Blog
"OO Design for Testability" by Miško Hevery
"Evil Unit Tests" by Paul Wheaton
"Integration Tests Are A Scam" by J.B. Rainsberger
"The Economics of Software Design" by J.B. Rainsberger
"Test Driven Development and the Scientific Method" by Rick Mugridge
"TDD as if you Meant it" exercise originally by Keith Braithwaite, also workshopped by Gojko Adzic
Are you testing small enough units of code? You shouldn't see too many changes unless you are fundamentally changing everything in your core code.
Once things are stable, you will appreciate the unit tests more; but even now, your tests are highlighting the extent to which changes to your framework propagate through the code.
It is worth it, stick with it as best you can.
Without more information it's hard to make a decent stab at why you're suffering these problems. Sometimes it's inevitable that changing interfaces etc. will break a lot of things, other times it's down to design problems.
It's a good idea to try and categorise the failures you're seeing. What sort of problems are you having? E.g. is it test maintenance (as in making them compile after refactoring!) due to API changes, or is it down to the behaviour of the API changing? If you can see a pattern, then you can try to change the design of the production code, or better insulate the tests from changing.
If changing a handful of things causes untold devastation to your test suite in many places, there are a few things you can do (most of these are just common unit testing tips):
Develop small units of code and test small units of code. Extract interfaces or base classes where it makes sense so that units of code have 'seams' in them. The more dependencies you have to pull in (or worse, instantiate inside the class using 'new'), the more exposed to change your code will be. If each unit of code has a handful of dependencies (sometimes a couple or none at all) then it is better insulated from change.

Only ever assert on what the test needs. Don't assert on intermediate, incidental or unrelated state. Design by contract and test by contract (e.g. if you're testing a stack pop method, don't test the count property after pushing; that should be in a separate test). I see this problem quite a bit, especially if each test is a variant. If any of that incidental state changes, it breaks everything that asserts on it (whether the asserts are needed or not).

Just as with normal code, use factories and builders in your unit tests. I learned that one when about 40 tests needed a constructor call updated after an API change... (see the builder sketch after this list).

Just as importantly, use the front door first. Your tests should always use normal state if it's available. Only use interaction-based testing when you have to (i.e. there is no state to verify against).
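As a sketch of the factories-and-builders point (hypothetical names), a Test Data Builder centralizes construction so an API change becomes a one-line fix:

// When the Order constructor grows a parameter, only build() changes,
// not the dozens of tests creating orders.
class Order {
    final String customer;
    final int quantity;

    Order(String customer, int quantity) {
        this.customer = customer;
        this.quantity = quantity;
    }
}

class OrderBuilder {
    private String customer = "any customer"; // safe defaults
    private int quantity = 1;

    OrderBuilder withCustomer(String customer) {
        this.customer = customer;
        return this;
    }

    OrderBuilder withQuantity(int quantity) {
        this.quantity = quantity;
        return this;
    }

    Order build() {
        return new Order(customer, quantity);
    }
}

// In a test, only the details that matter to the scenario appear:
//   Order bigOrder = new OrderBuilder().withQuantity(100).build();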
Anyway the gist of this is that I'd try to find out why/where the tests are breaking and go from there. Do your best to insulate yourself from change.
One of the benefits of unit testing is that when you make changes like this you can prove that you're not breaking your code. You do have to keep your tests in sync with your framework, but this rather mundane work is a lot easier than trying to figure out what broke when you refactored.
I would urge you to stick with TDD. Review your unit testing framework, do a root cause analysis (RCA) with your team, and identify the problem areas.
Fix the unit test code at the suite level, and avoid changing your code frequently, especially function names and other modules.
It would help if you could share your case study in more detail; then we could dig deeper into the problem area.
Good question!
Designing good unit tests is as hard as designing the software itself. This is rarely acknowledged by developers, so the result is often hastily written unit tests that require maintenance whenever the system under test changes. So, part of the solution to your problem could be spending more time improving the design of your unit tests.
I can recommend one great book that deserves its billing as the design patterns of unit testing: xUnit Test Patterns.
HTH
If the problem is that your tests are getting out of date with the actual code, you could do one or both of:
Train all developers to not pass code reviews that don't update unit tests.
Set up an automatic test box that runs the full set of unit tests after every check-in and emails those who break the build. (We used to think that that was just for the "big boys" but we used an open source package on a dedicated box.)
Well, if the logic has changed in the code and you have written tests for those pieces of code, I would assume the tests need to be changed to check the new logic. Unit tests are supposed to be fairly simple code that tests the logic of your code.
Your unit tests are doing what they are supposed to do: bringing to the surface any breaks in behavior due to changes in the framework, the immediate code, or other external sources. This is supposed to help you determine whether the behavior changed and the unit tests need to be modified accordingly, or whether a bug was introduced, causing the unit test to fail until it is corrected.
Don't give up. While it's frustrating now, the benefit will be realized.
I'm not sure about the specific issues that make it difficult to maintain tests for your code, but I can share some of my own experiences when I had similar issues with my tests breaking. I ultimately learned that the lack of testability was largely due to some design issues with the class under test:
Using concrete classes instead of interfaces
Using singletons
Calling lots of static methods for business logic and data access instead of interface methods
Because of this, I found that my tests were usually breaking not because of a change in the class under test, but due to changes in other classes that the class under test was calling. In general, refactoring classes to ask for their data dependencies and testing with mock objects (EasyMock et al. for Java) makes the testing much more focused and maintainable. I've really enjoyed some sites in particular on this topic:
Google testing blog
The guide to writing testable code
Why should you have to change your unit tests every time you make changes to your framework? Shouldn't this be the other way around?
If you're using TDD, then you should first decide that your tests are testing the wrong behavior, and that they should instead verify that the desired behavior exists. Now that you've fixed your tests, your tests fail, and you have to go squish the bugs in your framework until your tests pass again.
Everything comes at a price, of course. At this early stage of development it's normal that a lot of unit tests have to be changed.
You might want to review some bits of your code to do more encapsulation, create fewer dependencies, etc.
When you near production date, you'll be happy you have those tests, trust me :)
Aren't your unit tests too white-box oriented? I mean... let me take an example: suppose you are unit testing some sort of container. Do you use the get() method of the container to verify that a new item was actually stored, or do you get a handle to the actual storage to retrieve the item directly from where it is stored? The latter makes brittle tests: when you change the implementation, you break the tests.
You should test against the interfaces, not the internal implementation.
And when you change the framework, you'd be better off trying to change the tests first, and then the framework.
I would suggest investing in a test automation tool. If you are using continuous integration, you can make them work in tandem. There are tools out there which will scan your codebase, generate tests for you, and then run them. The downside of this approach is that it's too generic, because in many cases a unit test's purpose is to break the system.
I have written numerous tests, and yes, I have to change them if the codebase changes.
There is a fine line here: with an automation tool you would definitely have better code coverage, but with well-written developer-authored tests you will test system integrity as well.
Hope this helps.
If your code is really hard to test and the test code breaks or requires much effort to keep in sync, then you have a bigger problem.
Consider using the extract-method refactoring to pull out small blocks of code that do one thing and only one thing, without dependencies, and write your tests against those small methods.
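A hypothetical before/after sketch: a calculation pulled out of a method that previously also did I/O becomes trivially testable on its own:

// Extracted block: one thing, no collaborators, no scaffolding needed.
class PriceCalculator {
    static int discountedPrice(int priceInCents, int discountPercent) {
        return priceInCents - (priceInCents * discountPercent / 100);
    }
}

// The test needs no setup at all:
//   assertEquals(900, PriceCalculator.discountedPrice(1000, 10));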
The extra code seems to have become a maintenance headache as when our internal Framework changes we have to go around and fix any unit tests that hang off it.
The alternative is that when your Framework changes, you don't test the changes. Or you don't test the Framework at all. Is that what you want?
You may try refactoring your Framework so that it is composed from smaller pieces that can be tested independently. Then when your Framework changes, you hope that either (a) fewer pieces change or (b) the changes are mostly in the ways in which the pieces are composed. Either way will get you better reuse of both code and tests. But real intellectual effort is involved; don't expect it to be easy.
I found that unless you use an IoC/DI methodology that encourages writing very small classes, and follow the Single Responsibility Principle religiously, the unit tests end up testing the interaction of multiple classes, which makes them very complex and therefore fragile.
My point is that many of these novel software development techniques only work when used together: particularly MVC, ORM, IoC, unit testing and mocking. DDD (in the modern primitive sense) and TDD/BDD are more independent, so you may use them or not.
Sometimes designing the TDD tests prompts questions about the design of the application itself. Check whether your classes are well designed and your methods are performing only one thing at a time. With a good design it should be simple to write code to test simple methods and classes.
I have been thinking about this topic myself. I'm very sold on the value of unit tests, but not on strict TDD. It seems to me that, up to a certain point, you may be doing exploratory programming where the way you have things divided up into classes/interfaces is going to need to change. If you've invested a lot of time in unit tests for the old class structure, that increases inertia against refactoring, and it's painful to discard that additional code.