I understand the idea behind test-driven development: write tests first, then code against the tests until they pass. It's just not coming together for me in my workflow yet.
Can you give me some examples where unit tests could be used in a front-end or back-end web development context?
You didn't specify a language, so I'll try to keep this somewhat generic. It'll be hard though, since it's a lot easier to express concepts with actual code.
Unit tests can be a bit confusing at first. It's not always clear how to test something, or what the purpose of a test is.
I like to treat unit testing as a way to test small individual pieces of code.
The first place I use unit tests is to verify that some method works as I expect it to for all cases. I recently wrote a validation method for phone numbers on my site. I accept inputs in any common format: 123-555-1212, (123) 555-1212, etc. I want to make sure my validation method works for all the possible formats. Without a unit test, I'd be forced to manually enter each different format and check that the form posts correctly. This is very tedious and error-prone. Later on, if someone makes a change to the phone validation code (maybe we added support for country codes), it would be nice if we could easily check that nothing else broke. So, here's a trivial example:
public class PhoneValidator
{
    public bool IsValid(string phone)
    {
        return UseSomeRegExToTestPhone(phone);
    }
}
I could write a unit test like this:
public void TestPhoneValidator()
{
    string goodPhone = "(123) 555-1212";
    string badPhone = "555 12";

    PhoneValidator validator = new PhoneValidator();

    Assert.IsTrue(validator.IsValid(goodPhone));
    Assert.IsFalse(validator.IsValid(badPhone));
}
Those 2 Assert lines will verify that the value returned from IsValid() is true and false respectively.
In the real world, you would probably have lots and lots of examples of good and bad phone numbers; I have about 30 phone numbers that I test against. Simply running this unit test in the future will tell you if your phone validation logic is broken.
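With that many formats, a data-driven test keeps the cases manageable. Here's a minimal sketch using NUnit's [TestCase] attribute (other frameworks have similar data-driven features; the sample numbers below are made up):

// Each [TestCase] row runs the same test body with different inputs.
[TestCase("(123) 555-1212", true)]
[TestCase("123-555-1212", true)]
[TestCase("123.555.1212", true)]
[TestCase("555 12", false)]
[TestCase("", false)]
public void TestPhoneValidator(string phone, bool expected)
{
    PhoneValidator validator = new PhoneValidator();
    Assert.AreEqual(expected, validator.IsValid(phone));
}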
We can also use unit tests to simulate things outside of our control.
Unit tests should run independently of any outside resources. Your tests shouldn't depend on a database being present or a web service being available. So instead, we simulate these resources so we can control what they return. In my app, for example, I can't easily produce a rejected credit card during registration. The bank probably would not like me submitting thousands of bad credit cards just to make sure my error handling code is correct. Here's some sample code:
public class AccountServices
{
    private IBankWebService _webService = new BankWebService();

    public string RegisterUser(string username, string creditCard)
    {
        AddUserToDatabase(username);

        bool success = _webService.BillUser(creditCard);
        if (success == false)
            return "Your credit card was declined";
        else
            return "Success!";
    }
}
This is where unit testing can be confusing and not obvious. What should a test of this method do? First, it would be very nice if we could check that when billing fails, the appropriate error message is returned. As it turns out, by using a mock, there's a way. We use what's called Inversion of Control. Right now, AccountServices is responsible for creating the BankWebService object. Let's let the caller of this class supply it instead:
public class AccountServices
{
    private IBankWebService _webService;

    public AccountServices(IBankWebService webService)
    {
        _webService = webService;
    }

    public string RegisterUser(string username, string creditCard)
    {
        AddUserToDatabase(username);

        bool success = _webService.BillUser(creditCard);
        if (success == false)
            return "Your credit card was declined";
        else
            return "Success!";
    }
}
Because the caller is responsible for creating the BankWebService object, our unit test can create a fake one:
public class FakeBankWebService : IBankWebService
{
    public bool BillUser(string creditCard)
    {
        return false; // our fake object always says billing failed
    }
}
public void TestRegisterUserWithDeclinedCard()
{
    IBankWebService fakeBank = new FakeBankWebService();

    AccountServices services = new AccountServices(fakeBank);
    string registrationResult = services.RegisterUser("test_username", "test_credit_card");

    Assert.AreEqual("Your credit card was declined", registrationResult);
}
With that fake object in place, whenever the bank's BillUser() is called it will return false. Our unit test now verifies that if the call to the bank fails, RegisterUser() returns the correct error message.
Suppose one day you are making some changes, and a bug creeps in:
public string RegisterUser(string username, string creditCard)
{
    AddUserToDatabase(username);

    bool success = _webService.BillUser(creditCard);
    if (success) // IT'S BACKWARDS NOW
        return "Your credit card was declined";
    else
        return "Success!";
}
Now, when your billing fails, your RegisterUser() method returns "Success!". Fortunately, you have a unit test written. That unit test will now fail, because RegisterUser() no longer returns "Your credit card was declined".
It's much easier and quicker to find the bug this way than to manually fill out your registration form with a bad credit card, just to check the error message.
Once you look at different mocking frameworks, there are even more powerful things you can do. You can verify your fake methods were called, you can verify the number of times a method was called, you can verify the parameters that methods were called with, etc.
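For example, here's a minimal sketch using Moq, one of several .NET mocking frameworks, against the AccountServices example above (the setup and verification calls are standard Moq API, but treat the details as illustrative):

// Illustrative sketch using the Moq framework instead of a hand-written fake.
var bank = new Mock<IBankWebService>();
bank.Setup(b => b.BillUser(It.IsAny<string>())).Returns(false); // simulate a declined card

var services = new AccountServices(bank.Object);
services.RegisterUser("test_username", "test_credit_card");

// Verify BillUser was called exactly once, with the card we passed in.
bank.Verify(b => b.BillUser("test_credit_card"), Times.Once());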
I think once you understand these 2 ideas, you will understand more than enough to write plenty of unit tests for your project.
If you tell us the language you're using, we can direct you better though.
I hope this helps. I apologize if some of it is confusing. I'll clean it up if something doesn't make sense.
Now I have a question about unit testing.
Business
When placing an order, the system should display the list of available coupons.
Code
public List<Coupon> getAvailableCouponsWhenOrder(String productCategory, int productPrice, String shopId, String userId) {
    // get all of the user's coupons that have not been used yet
    List<Coupon> couponList = couponMapper.queryCoupons(userId);

    // filter the coupons
    List<Coupon> availableList = new ArrayList<>();
    for (Coupon coupon : couponList) {
        // some coupons only apply to a specific category, e.g. books or phones
        if (!checkCategory(coupon, productCategory)) {
            continue;
        }
        // some coupons have a usage condition, e.g. spend 100 to get 50 off
        if (!checkPrice(coupon, productPrice)) {
            continue;
        }
        // you cannot use another shop's coupon
        if (!checkShopCoupon(coupon, shopId)) {
            continue;
        }
        availableList.add(coupon);
    }
    return availableList;
}
And I have the unit test below:
@Test
public void test_exclude_coupon_which_belong_to_other_shop() {
    String productCategory = "book";
    int productPrice = 200;
    String shopId = RandomStringUtils.randomAlphanumeric(10);
    String userId = RandomStringUtils.randomAlphanumeric(10);

    Coupon coupon = new Coupon();
    String anotherShopId = RandomStringUtils.randomAlphanumeric(10);
    coupon.setShopId(anotherShopId); // another shop's coupon
    coupon.setCategory("book");      // only valid when buying books
    coupon.setFullPrice(200);        // spend 200 to get 50 off
    coupon.setPrice(50);
    when(couponMapper.queryCoupons(userId)).thenReturn(newArrayList(coupon));

    List<Coupon> availableCoupons = service.getAvailableCouponsWhenOrder(productCategory, productPrice, shopId, userId);

    // check that another shop's coupon is excluded
    assertEquals(0, availableCoupons.size());
}
But even if it passes, I still have no confidence that it is right, because maybe a previous check failed, e.g. checkCategory always returns false.
So how do I know it really executed checkShopCoupon?
Please stay away from using a mocking framework to test your private methods. The only thing that (probably!) requires mocking is this line:
List<Coupon> couponList = couponMapper.queryCoupons(userId);
It is really simple: you have to make sure that your list couponList has specific content. In other words: you create a whole set of different test methods that each pre-set couponList with different values. And then you check that the list returned matches your expectations.
And for the record: better to use the one and only assert that one really needs:
assertThat(service.getAvailableCoupons(productCategory, productPrice, shopId, userId), is(somePredefinedList));
(where is() is a Hamcrest matcher method)
Long story short: you probably need to use mocking to gain control over the return value of that call to queryCoupons() - but that is the only part that requires mocking here!
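To make that concrete, here is a minimal sketch of the idea, one test method per pre-set list. For consistency with the earlier examples in this thread it is written in C# with Moq; the Java/Mockito version is structurally identical, and the ICouponMapper/CouponService names and Coupon properties are assumptions mirroring the question's code:

// One test per pre-set coupon list; only QueryCoupons is mocked.
[Test]
public void ExcludesCouponBelongingToAnotherShop()
{
    var mapper = new Mock<ICouponMapper>();
    mapper.Setup(m => m.QueryCoupons("user1")).Returns(new List<Coupon> {
        new Coupon { ShopId = "other-shop", Category = "book", FullPrice = 200, Price = 50 }
    });
    var service = new CouponService(mapper.Object);

    var result = service.GetAvailableCouponsWhenOrder("book", 200, "my-shop", "user1");

    Assert.AreEqual(0, result.Count);
}

[Test]
public void IncludesCouponFromTheSameShop()
{
    var mapper = new Mock<ICouponMapper>();
    mapper.Setup(m => m.QueryCoupons("user1")).Returns(new List<Coupon> {
        new Coupon { ShopId = "my-shop", Category = "book", FullPrice = 200, Price = 50 }
    });
    var service = new CouponService(mapper.Object);

    var result = service.GetAvailableCouponsWhenOrder("book", 200, "my-shop", "user1");

    Assert.AreEqual(1, result.Count);
}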
There are 2 things you need to check in a test: that the logic is correct and that the logic is invoked correctly. In your case that can be:
A test to check the logic: CouponMapperTest#queryingCouponReturnsCouponByUserId()
To check whether it's actually invoked you create a component test (without a mock) that invokes getAvailableCouponsWhenOrder() which in turn invokes couponMapper.queryCoupons(userId)
But with the suggested approach there is a risk that you end up with a lot of component tests, which can take time to run. To combat that you need to build a balanced Test Pyramid (here is a thorough example) to ensure that you have a lot of unit tests, fewer component tests and even fewer system tests. Some advice on that:
Move towards OOP if possible. That means ObjectA contains ObjectB, as opposed to containing IdOfObjectB. That would allow you to move logic lower, to the domain objects. Which leads to the next point:
Move domain logic as low as possible (preferably to the domain model where it actually belongs). A.k.a. Rich Model. Then you'll be able to test things without initializing half of the app. And that's faster.
Mocking frameworks add a lot of burden to maintaining your tests. If something is refactored, you often have to change a whole bunch of tests. Another problem with mocking frameworks: you test things in isolation, while you also have to ensure that the project works as a whole.
So I'd discourage the use of mocking unless it's at a boundary of your app or a very heavy dependency.
Imagine that we have a game with different types of servers representing different countries. Now, let's say the game only allows users to "buddy" other players that are on the same server their account is attached to. As a developer, I am tasked with writing a test case to see whether the feature for users to "buddy" one another works. Here is my dilemma: since I only have to test whether a user can "buddy" someone else, do I also have to test that the user cannot add users from another server in the same test case, or should that be written in a totally separate test case?
Yes, you should.
The first test you're describing, where you just test if they can have a buddy, is known as Happy Path testing. You're testing that it works with no exceptions or errors or weird use cases.
Happy Path testing is a good start. Everything else is where the real fun begins.
In your case, the things that come to mind are...
What if they buddy with an invalid buddy?
What if they buddy with someone who is already their buddy?
What if they buddy with something that's not a user?
How these should be organized is a matter of taste. Ideally they're all separate tests. This makes the purpose of each test clear via the test name, and it avoids interactions and dependencies between the tests. Here's a sketch in no particular language.
describe add_buddy {
    test happy_path {
        assert user.add_buddy(valid_buddy);
        assert user.buddies.contains(valid_buddy);
    }

    test buddy_on_another_server {
        buddies = user.buddies;
        assert !buddies.contains(invalid_buddy);

        assertThrows {
            user.add_buddy(invalid_buddy);
        } InvalidBuddy;

        assert buddies == user.buddies, "buddy list unchanged";
    }

    test buddy_with_non_user {
        buddies = user.buddies;

        assertThrows {
            user.add_buddy(non_user);
        } ArgumentError;

        assert buddies == user.buddies, "buddy list unchanged";
    }

    test buddy_an_existing_buddy {
        assert user.add_buddy(valid_buddy);

        # Should this return true? False? An exception?
        # You have to decide. I decided false.
        assert !user.add_buddy(valid_buddy);

        # The buddy is only on the list once.
        assert user.buddies.numContains(valid_buddy) == 1;
    }
}
Things like user and valid_buddy can be created in the setup routine or, better, made available via a fixtures generator such as Factory_Girl.
When people say "test only one thing". Does that mean that test one feature at a time or one scenario at a time?
method() {
    // setup data
    def data = new Data()

    // send external webservice call
    def success = service.webserviceCall(data)

    // persist
    if (success) {
        data.save()
    }
}
Based on the example, do we test by feature of the method:
testA() //test if service.webserviceCall is called properly, so assert if called once with the right parameter
testB() //test if service.webserviceCall succeeds, assert that it should save the data
testC() //test if service.webserviceCall fails, assert that it should not save the data
By scenario:
testA() //test if service.webserviceCall succeeds, so assert if service is called once with the right parameter, and assert that the data should be saved
testB() //test if service.webserviceCall fails, so again assert if service is called once with the right parameter, then assert that it should not save the data
I'm not sure if this is a subjective topic, but I'm trying to follow the by-feature approach. I got the idea from Roy Osherove's blogs, but I'm not sure I understood it correctly.
It was mentioned there that it would be easier to isolate the errors, but I'm not sure if it's overkill. Complex methods will tend to have lots of tests.
(Please excuse my by-feature/by-scenario wording; I'm not sure how else to put it.)
You are right in that this is a subjective topic.
Think about how you want this method to behave, not just about how it's currently implemented. Otherwise your tests will just mirror the production code and will break every time the implementation changes.
Based on the limited context provided, I'd write the following (separate) tests:
Is the webservice command called with the expected data?
If the command returns successfully, is the data saved? Don't overspecify the arguments provided to your webservice call here, as the previous test covers this.
If it's important that the data is not saved when the command returns a failure, I'd write a third test for this. If it's not important, I wouldn't even bother.
You might have heard the adage "one assert per test". This is good advice in general, because a test stops executing as soon as a single assert fails, and all asserts further down are not executed. By splitting the asserts across multiple tests you receive more feedback when something goes wrong. When tests go red, you know exactly which asserts failed and don't have to run through the "fix assertion failure, run tests, fix next assertion failure, repeat" cycle.
So in the terminology you propose, my approach would also be to write a test per feature of the method.
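To make that concrete, here is a sketch of the three tests in C# with Moq. It assumes the method has been pulled into a class whose collaborators are injected; IWebService, IDataRepository, DataProcessor and Data are all made-up names for illustration:

// Sketch: one test per feature of the method.
[Test]
public void SendsTheDataToTheWebservice()
{
    var webService = new Mock<IWebService>();
    var repository = new Mock<IDataRepository>();
    var sut = new DataProcessor(webService.Object, repository.Object);

    sut.Process();

    // the call happened exactly once, with some Data instance
    webService.Verify(w => w.WebserviceCall(It.IsAny<Data>()), Times.Once());
}

[Test]
public void SavesTheDataWhenTheCallSucceeds()
{
    var webService = new Mock<IWebService>();
    webService.Setup(w => w.WebserviceCall(It.IsAny<Data>())).Returns(true);
    var repository = new Mock<IDataRepository>();
    var sut = new DataProcessor(webService.Object, repository.Object);

    sut.Process();

    repository.Verify(r => r.Save(It.IsAny<Data>()), Times.Once());
}

[Test]
public void DoesNotSaveTheDataWhenTheCallFails()
{
    var webService = new Mock<IWebService>();
    webService.Setup(w => w.WebserviceCall(It.IsAny<Data>())).Returns(false);
    var repository = new Mock<IDataRepository>();
    var sut = new DataProcessor(webService.Object, repository.Object);

    sut.Process();

    repository.Verify(r => r.Save(It.IsAny<Data>()), Times.Never());
}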
Sidenote: you construct your data object in the method itself and call the save method of that object. How do you sense that the data is saved in your tests?
I understand it like this:
"unit test one thing" == "unit test one behavior"
(After all, it is the behavior that the client wants!)
I would suggest that you approach your testing "one feature at a time". I agree with the point you quoted, that with this approach it is "easier to isolate the errors". Roy Osherove really does know what he is talking about, especially when it comes to TDD.
In my experience I like to focus on the behaviors that I am trying to test (and I am not particularly referring to BDD here). Essentially I would test each behavior that I am expecting from this code. You said that you are mocking out the dependencies (web service and data storage), so I would still class this as a unit test, with the following expected behaviors:
a call to this method will result in a particular call to a web service
a successful web service call will result in the data being saved
an unsuccessful web service call will result in the data not being saved
Having tests for these three behaviors will help you isolate any issues with the code immediately.
Your tests should also have no dependency on the actual code written to achieve the behavior. For example, if my implementation called some decorator internal to my class which in turn called the webservice correctly, then that should be of no concern to my test. My test should only be concerned with the external dependencies and the public interface of the class itself.
If I exposed internal methods of my class (or implementation details, such as the decorator mentioned above) for the purposes of testing its particular implementation then I have created brittle tests that will fail when the implementation changes.
In summary, I would recommend that your tests should lock down the behavior of a class and isolate failures to identify the 'unit of behavior' that is failing.
A unit test, in general, is a test that runs without a call to a database or file system, and to that effect does not call a web service either. The idea of a unit test is that if you did not have any internet connection, you should still be able to run it. So, having said that, if a method calls a web service or a database, then you are expected to mock the responses from the external system. You should be testing that unit of work only. As prgmtc mentioned above, one assert per test is the way to go.
Second, if you are calling a real web service or database, then consider calling those tests integration tests, depending upon what you are trying to test.
In my opinion, to get the most out of TDD you want to be doing test-first development. Have a look at Uncle Bob's 3 Rules of TDD.
If you follow these rules strictly, you end up writing tests that generally have only a single assert statement. In reality you will often find you end up with a number of assert statements that act as a single logical assert, as it often helps with the understanding of the unit test itself.
Here is an example
[Test]
public void ValidateBankAccount_GivenInvalidAccountType_ShouldReturnValidationFailure()
{
    //---------------Set up test pack-------------------
    const string validBankAccount = "99999999999";
    const string validBranchCode = "222222";
    const string invalidAccountType = "99";
    const string invalidAccountTypeResult = "3";

    var bankAccountValidation = Substitute.For<IBankAccountValidation>();
    bankAccountValidation.ValidateBankAccount(validBankAccount, validBranchCode, invalidAccountType)
        .Returns(invalidAccountTypeResult);

    var service = new BankAccountCheckingService(bankAccountValidation);
    //---------------Assert Precondition----------------

    //---------------Execute Test ----------------------
    var result = service.ValidateBankAccount(validBankAccount, validBranchCode, invalidAccountType);

    //---------------Test Result -----------------------
    Assert.IsFalse(result.IsValid);
    Assert.AreEqual("Invalid account type", result.Message);
}
And the ValidationResult class that is returned from the service
public interface IValidationResult
{
    bool IsValid { get; }
    string Message { get; }
}

public class ValidationResult : IValidationResult
{
    public static IValidationResult Success()
    {
        return new ValidationResult(true, "");
    }

    public static IValidationResult Failure(string message)
    {
        return new ValidationResult(false, message);
    }

    public ValidationResult(bool isValid, string message)
    {
        Message = message;
        IsValid = isValid;
    }

    public bool IsValid { get; private set; }
    public string Message { get; private set; }
}
Note that I would have unit tested the ValidationResult class itself, but in the test above I feel it gives more clarity to include both asserts.
I am trying to become more familiar with test-driven development. So far I have seen some easy examples, but I still have problems approaching complex logic, for example this method in my DAL:
public static void UpdateUser(User user)
{
    SqlConnection conn = new SqlConnection(ConfigurationSettings.AppSettings["WebSolutionConnectionString"]);
    SqlCommand cmd = new SqlCommand("WS_UpdateUser", conn);
    cmd.CommandType = CommandType.StoredProcedure;

    cmd.Parameters.Add("@UserID", SqlDbType.Int, 4);
    cmd.Parameters.Add("@Alias", SqlDbType.NVarChar, 100);
    cmd.Parameters.Add("@Email", SqlDbType.NVarChar, 100);
    cmd.Parameters.Add("@Password", SqlDbType.NVarChar, 50);
    cmd.Parameters.Add("@Avatar", SqlDbType.NVarChar, 50);

    cmd.Parameters[0].Value = user.UserID;
    cmd.Parameters[1].Value = user.Alias;
    cmd.Parameters[2].Value = user.Email;
    cmd.Parameters[3].Value = user.Password;
    if (user.Avatar == string.Empty)
        cmd.Parameters[4].Value = System.DBNull.Value;
    else
        cmd.Parameters[4].Value = user.Avatar;

    conn.Open();
    cmd.ExecuteNonQuery();
    conn.Close();
}
What would be good TDD practices for this method?
Given that the code is already written, let's talk about what makes it hard to test instead. The main issue here is that this method is purely a side-effect: it returns nothing (void), and its effect is not observable in your code, in object-land - the observable side effect should be that somewhere in a database far away, a record has now been updated.
If you think of your unit tests in terms of "Given these conditions, When I do this, Then I should observe this", you can see that the code you have is problematic for unit testing, because the pre-conditions (given a connection to a DB that is valid) and post-conditions (a record was updated) are not directly accessible by the unit test, and depend on where that code is running (2 people running the code "as is" on 2 machines have no reason to expect the same results).
This is why technically, tests that are not purely in-memory are not considered unit tests, and are somewhat out of the realm of "classic TDD".
In your situation, here are 2 thoughts:
1) Integration test. If you want to verify how your code works with the database, you are in the realm of integration testing, not unit testing. TDD-inspired techniques like BDD can help. Instead of testing a "unit of code" (usually a method), focus on a whole user or system scenario, exercised at a higher level. In this case, assuming that somewhere on top of your DAL you have methods called CreateUser, UpdateUser and ReadUser, the scenario you may want to test is something like "Given I created a User, when I update the User name, then when I read the User the name should be updated". You would then exercise the scenario against a complete setup, involving data as well as the DAL and possibly the UI.
I found the following MSDN article on BDD + TDD interesting in that regard; it illustrates well how the 2 can mesh together.
2) If you want to make your method testable, you have to expose some state. The main part of the method revolves around building a command. You could outline the method this way:
* grab a connection
* create the parameters and types of the command
* fill in the parameters of the command from the object
* execute the command and clean up
You can actually test most of these steps: the observable state is then the Command itself. You could have something along these lines:
public class UpdateUserCommandBuilder
{
    IConnectionConfiguration config;

    public void BuildAndExecute(User user)
    {
        var command = BuildCommand(user);
        ExecuteCommand(command);
    }

    public SqlCommand BuildCommand(User user)
    {
        var connection = config.GetConnection(); // so that you can mock it
        var command = new SqlCommand(...);
        command = CreateArguments(command); // you can verify that method now
        command = FillArguments(command, user); // and this one too
        return command;
    }
}
I won't go all the way here, but I assume the outline conveys the idea. Going that route would help make the steps of the builder verifiable: you could assert whether a correct command was created. This has some value, but would still tell you nothing about whether the execution of the command succeeded, so it's worth thinking whether this is a worthwhile use of your testing budget! Arguably, a higher-level integration test which exercises the whole DAL may be more economical.
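For instance, a test against the builder could assert on the command it produces, without ever opening a real connection. This is only a sketch: it assumes UpdateUserCommandBuilder takes its IConnectionConfiguration through the constructor, that BuildCommand fills in the same command text and parameters as the original method, and the User initializer is made up:

// Sketch: verify the built command in memory, no database required.
[Test]
public void BuildCommand_CreatesStoredProcedureCallWithUserValues()
{
    var config = new Mock<IConnectionConfiguration>(); // mocked so no DB is needed
    var builder = new UpdateUserCommandBuilder(config.Object);
    var user = new User { UserID = 42, Alias = "jdoe", Email = "j@example.com", Password = "secret", Avatar = "" };

    SqlCommand command = builder.BuildCommand(user);

    Assert.AreEqual("WS_UpdateUser", command.CommandText);
    Assert.AreEqual(CommandType.StoredProcedure, command.CommandType);
    Assert.AreEqual(42, command.Parameters["@UserID"].Value);
    Assert.AreEqual(DBNull.Value, command.Parameters["@Avatar"].Value); // empty avatar maps to DBNull
}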
Hope this helps!
I would change the method declaration to:
public static void UpdateUser(User user, SqlConnection conn);
then you can pass in a configured SQL connection. In the real application you rely on what AppSettings tells you about the desired connection, but in your test you give it a fake connection that just lets you record the commands executed against it. You can then verify that the method requests the stored procedure correctly and sends the correct parameters.
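One caveat: SqlConnection is sealed, so you cannot subclass it to build that fake. A common variation is to accept the IDbConnection interface instead, which a mocking framework can fake. Here is a rough sketch under that assumption, with Moq; the UserDal name and the exact stubs are illustrative:

// Sketch: the test hands UpdateUser(User, IDbConnection) a mocked connection
// and records what was executed against it.
[Test]
public void UpdateUser_ExecutesTheStoredProcedure()
{
    var command = new Mock<IDbCommand>();
    command.Setup(c => c.CreateParameter()).Returns(new Mock<IDbDataParameter>().Object);
    command.SetupGet(c => c.Parameters).Returns(new Mock<IDataParameterCollection>().Object);
    var connection = new Mock<IDbConnection>();
    connection.Setup(c => c.CreateCommand()).Returns(command.Object);

    UserDal.UpdateUser(new User(), connection.Object); // UserDal: the class holding UpdateUser

    command.VerifySet(c => c.CommandText = "WS_UpdateUser");
    command.Verify(c => c.ExecuteNonQuery(), Times.Once());
    connection.Verify(c => c.Open(), Times.Once());
}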
I've read tons of articles and seen tons of screencasts about TDD, but I'm still struggling with using it in a real-world project. My main issue is that I don't know where to start, i.e. which test should be the first one.
Suppose I have to write client library calling external system's methods (e.g. notification).
I want this client to work as follows
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Response code = client.notifyOnEvent(Event.LIMIT_REACHED, 100); // some params of call
There is some translation and message format preparation behind the scenes, so I'd like to hide it from my client apps.
I don't know where and how to start.
Should I sketch out a rough set of classes for this library first?
Should I start with testing NotificationClient as below
public void testClientSendInvalidEventCommand() {
    NotificationClient client = new NotificationClient(...);
    Response code = client.notifyOnEvent(Event.WRONG_EVENT);
    assertEquals(1223, code.codeValue());
}
If so, with such a test I'm forced to write a complete working implementation at once, with no baby steps as TDD prescribes. I can mock out something in the client, but then I have to know upfront what to mock, so some upfront design needs to be made.
Maybe I should start from the bottom, test this message formatting component first, and then use it in the right client test?
What way is the right one to go?
Should we always start from the top (and if so, how do we deal with the huge first step that requires)?
Can we start with any class realizing a tiny part of the desired feature (such as a Formatter in this example)?
If I knew where to aim my tests, it would be a lot easier to proceed.
I'd start with this line:
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Sounds like we need a NotificationClient, which needs a client ID. That's an easy thing to test for. My first test might look something like:
public void testNewClientAbcd1234HasClientId() {
    NotificationClient client = new NotificationClient("abcd1234");
    assertEquals("abcd1234", client.clientId());
}
Of course, it won't compile at first - not until I'd written a NotificationClient class with a constructor that takes a string parameter and a clientId() method that returns a string - but that's part of the TDD cycle.
public class NotificationClient {
    public NotificationClient(string clientId) {
    }

    public string clientId() {
        return "";
    }
}
At this point, I can run my test and watch it fail (because I've hard-coded clientId()'s return to be an empty string). Once I've got my failing unit test, I write just enough production code (in NotificationClient) to get the test to pass:
public string clientId() {
    return "abcd1234";
}
Now all my tests pass, so I can consider what to do next. The obvious (well, obvious to me) next step is to make sure that I can create clients whose ID isn't "abcd1234":
public void testNewClientBcde2345HasClientId() {
    NotificationClient client = new NotificationClient("bcde2345");
    assertEquals("bcde2345", client.clientId());
}
I run my test suite and observe that testNewClientBcde2345HasClientId() fails while testNewClientAbcd1234HasClientId() passes, and now I've got a good reason to add a member variable to NotificationClient:
public class NotificationClient {
    private string _clientId;

    public NotificationClient(string clientId) {
        _clientId = clientId;
    }

    public string clientId() {
        return _clientId;
    }
}
Assuming no typographical errors have snuck in, that'll get all my tests to pass, and I can move on to whatever the next step is. (In your example, it would probably be testing that notifyOnEvent(Event.WRONG_EVENT) returns a Response whose codeValue() equals 1223.)
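That next test would look a lot like the one from the question, and the first implementation can cheat just like clientId() did. A sketch, where Response, Event and the 1223 code are the question's own assumptions:

// First, the failing test for the error response:
public void testNotifyOnWrongEventReturnsCode1223() {
    NotificationClient client = new NotificationClient("abcd1234");
    Response code = client.notifyOnEvent(Event.WRONG_EVENT);
    assertEquals(1223, code.codeValue());
}

// Then just enough production code in NotificationClient to pass it,
// hard-coded exactly like the first clientId() was:
public Response notifyOnEvent(Event evt) {
    return new Response(1223); // generalize once a second test forces it
}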
Does that help any?
Don't confuse acceptance tests, which hook into each end of your application and form an executable specification, with unit tests.
If you are doing 'pure' TDD, you write an acceptance test which drives the unit tests that drive the implementation. testClientSendInvalidEventCommand is your acceptance test, but depending on how complicated things are, you will delegate the implementation to multiple classes that you can unit test separately.
How complicated things get before you have to split them up in order to test and understand them properly is why it is called Test-Driven Design.
You can choose to let tests drive your design from the bottom up or from the top down. Both work well for different developers in different situations. Either approach will force you to make some of those "upfront" design decisions, but that's a good thing. Making those decisions in order to write your tests is test-driven design!
In your case you have an idea of what the high-level external interface to the system should be, so let's start there. Write a test for how you think users of your notification client should interact with it, and let it fail. This test is the basis for your acceptance or integration tests, and they are going to continue failing until the features they describe are finished. That's OK.
Now step down one level. What are the steps which need to occur to provide that high-level interface? Can we write an integration or unit test for those steps? Do they have dependencies you had not considered which might cause you to change the notification client interface you have started to define? Keep drilling down depth-first, defining behavior with failing tests, until you find that you have actually reached a unit test. Now implement enough to pass that unit test and continue. Get unit tests passing until you have built enough to pass an integration test, and so on. You'll eventually have completed a depth-first construction of a tree of tests, and you should have a well-tested feature whose design was driven by your tests.
One goal of TDD is that the testing informs the design. So the fact that you need to think about how to implement your NotificationClient is a good thing; it forces you to think of (hopefully) simple abstractions up front.
Also, TDD sort of assumes constant refactoring. Your first solution probably won't be the last; so as you refine your code the tests are there to tell you what breaks, from compile errors to actual runtime issues.
So I would just jump right in and start with the test you suggested. As you create mocks, you will need to create tests for the actual implementations of what you are mocking. You will find things make sense and need to be refactored, so you will need to modify your tests as you go. That's the way it's supposed to work...