TDD and "honesty" of test - unit-testing

I have a concern with the "honesty" of tests when doing TDD. TDD is:
Write a red test
Write just enough code to make it green
Refactor and keep the test green
So far so good. Now here is an example of applying the principle above; I have already come across this kind of example in tutorials and in real life:
I want to check that the current user's email is displayed on the default page of my webapp.
Write a red test: "example@user.com" is displayed inside default_page.html
Write just enough code to make it green: hardcode "example@user.com" inside default_page.html
Refactor by implementing get_current_user(), some other code in some other layers, etc., keeping the test green.
I'm "shocked" by step 2. There is something wrong here : the test is green even if nothing is actually working. There a test smell here, it means that maybe at some point someone could break the production code without breaking the test suite.
What am I missing here?

Your assertion that "nothing is working" is false. The code functions correctly for the case where the email address is example@user.com. And you do not need that final refactoring. Your next failing test might be one for the case where the user has a different email address.

I would say that what you have is only partially complete. You said:
I want to check that the current user's email is displayed on the default page of my webapp.
The test doesn't check the current user's email address on the default page; it checks that the fixed email address "example@user.com" is in the page.
To address this, you either need to provide more examples (i.e. have multiple tests with different email addresses) or randomly generate the email address in the test setup.
So I would say what you have is something like this in pseudocode:
Given current user has email address "example@user.com"
When they visit the default page
The page should contain the email address "example@user.com"
This is the first test you can write in TDD, and you can indeed hardcode the result to avoid implementing unnecessary stuff. You can now add another test which will force you to implement the correct behavior:
Given current user has email address "example2@user.com"
When they visit the default page
The page should contain the email address "example2@user.com"
Now you have to remove the hardcoding, as you cannot satisfy both of these tests with a hardcoded solution. This will force you to get the actual email address from the current user and display it.
Often it makes sense to end up with three examples in your tests. These don't need to be three separate tests; you can use data-driven tests to reuse the same test method with different values. You don't say what test framework you are using, so any concrete example is only illustrative.
This approach is common in TDD and is called triangulation.
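For instance, a data-driven, triangulating version might look like this in pytest (set_current_user() and render_default_page() are hypothetical stand-ins for your app's API):

import pytest

@pytest.mark.parametrize("email", [
    "example@user.com",
    "example2@user.com",
    "example3@user.com",
])
def test_default_page_shows_current_user_email(email):
    set_current_user(email=email)    # hypothetical: arrange the logged-in user
    html = render_default_page()     # hypothetical: render default_page.html
    assert email in html             # a hardcoded page cannot satisfy all three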

You are correct about
step 2. There is something wrong here
but it's not in the TDD approach; IMHO it's in the test logic. After all, this step validates that the test harness is working correctly: that the new test does not mistakenly pass without requiring any new code, and that the required feature does not already exist.
What am I missing here?
This step also tests the test itself, in the negative: it rules out the possibility that the new test always passes and is therefore worthless. The new test should also fail for the expected reason. It's vital that this step increases the developer's confidence that the test checks the right thing and passes only in the intended cases.

Best way to set up test cases that are "late" in a process

What is the best way of handling many test cases that need to navigate to a particular place before they run their asserts? For example, a process has 5 steps and a test case needs to test part of step 5; how can I set it up? Should I call the test methods of the previous steps inside this test case, and do that for all test cases that test step 5?
Similarly, if a test case goes deep into the website, through many pages, should that navigation be rewritten for every test case, or should it just call some test that already does that?
Any tips on these situations?
What is the best way of handling many test cases that need to navigate to a particular place before they run their asserts? For example, a process has 5 steps and a test case needs to test part of step 5; how can I set it up?
I would create a Transporter class / pattern that the test case can call to get to that state. That makes the navigation code reusable by other tests and keeps the tests from getting too big or complicated. You can also use the setUp() methods of xUnit testing frameworks, which are called before each test, and place the navigation code there if you need it in other tests.
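As a sketch of that setUp() idea in Python's unittest (the Transporter here is a hypothetical placeholder for your app-specific navigation code, along the lines of the class shown further below):

import unittest

class CheckoutTests(unittest.TestCase):
    def setUp(self):
        # Runs before each test: navigate to the state the asserts need.
        Transporter.login()           # hypothetical navigation helper
        Transporter.goto_checkout()   # hypothetical navigation helper

    def test_checkout_totals(self):
        # The test body contains only the step-5 asserts,
        # not the navigation needed to get there.
        ...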
Similarly, if a test case goes deep into the website, through many pages, should that navigation be rewritten for every test case, or should it just call some test that already does that?
I would extract that code into a helper class called Transporter and have the tests call it to easily navigate to the deep page in one method call.
I wrote about this and other test design patterns in a conference paper at the Pacific Northwest Software Quality Conference. Look for the Transporter pattern in that paper.
Here's an example using a Transporter class where you have to login and navigate to the checkout page:
public class Transporter {
    public static void login() {
        // App-specific code to navigate to the login screen and log in to the application
    }

    public static void gotoCheckout() {
        // App-specific code to navigate to the checkout page
    }
}
Now your tests can just call this Transporter class to do the navigation for them.
If you are using BDD, such as JBehave (not sure if Cucumber has the same feature), where you have the Given/When/Then story (feature) structure in Gherkin syntax, you can use the "GivenStories" feature, which works like a prequel test case to set you up for your specific test case, exactly as you are describing.
There's nothing wrong, however, with using BDD to simply write multiple scenarios leading up to the particular test case, i.e. the first scenario logs in, the second scenario navigates to a certain page, and the third scenario performs your actual test.
By writing it as a separate story (feature), however, you can re-use those as "GivenStories" in JBehave as a shortcut to get where you need to be without duplicating the steps.

Django tests reliant on other pages/behaviour

I've started writing some tests for my Django app and I'm unsure how best to structure the code.
Say I have a register page and a page for logged in users only.
My first plan was to have an earlier method perform the registration and a later method use that login to test the page:
from django.test import Client

def test_register_page(self):
    # send a request to the register page and check the user has been registered correctly
    ...

def test_restricted_page(self):
    c = Client()
    c.login(username="someUser", password="pass")
    response = c.post("/someRestrictedPage/")
    # test the response
However, this means that one of my tests now relies on the other.
One alternative I see is calling register in setUp(), but this still means that the restricted-page test relies on the register page working.
I could try creating a new user manually in setup, which I also don't like, because then I'm not testing a user created by the system.
What is the usual pattern for testing this kind of situation?
You are trying to mix together a lot of different functionality in one test case. A clean design would have one test case for user registration and one for the view.
Having them depend on each other will introduce a lot of dependencies between them, and if a test fails the error will be even harder to debug. The success of the registration test should be determined by the correct creation of the user instance (so check the necessary attributes etc. of the user) and not by being able to log in on a certain page. Therefore you will need to set up a "correct" user instance for the view test case. This may seem a bit more complicated than necessary, but it will make future maintenance a lot easier.
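For example, with Django's built-in auth app, the view test can create a known-good user directly in setUp(), independent of the register view (the URL is the one from your question):

from django.contrib.auth.models import User
from django.test import TestCase

class RestrictedPageTest(TestCase):
    def setUp(self):
        # Create a "correct" user instance directly; the register view is not involved.
        User.objects.create_user(username="someUser", password="pass")

    def test_restricted_page(self):
        self.client.login(username="someUser", password="pass")
        response = self.client.post("/someRestrictedPage/")
        self.assertEqual(response.status_code, 200)  # adjust to whatever your view should return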
What you are trying to do is more like an integration test, which tests a whole system; but before that you should split up your system into functional units and do unit tests on these units!
The smaller and better-defined the individual tests are, the easier their maintenance and debugging will be.

Unit Testing Authentication

I am fairly new to unit testing. I am building an ASP.NET MVC3 application (although my question seems language agnostic) and am confused about a basic test.
I want to make a unit test that makes sure my "ValidatePassword" function works - It will take in a username and password, then hash the password and see if it matches the hash for a user in the database. If so, it returns true. The problem is that I am using a mock repository, so I will have to add the user to the db before running my test. I can't really create this user in my test setup because I don't know what the encrypted password will be until I actually run it through the function I am testing. Is the answer to run it through the Hash function once, write it down in my test, and then test with that?
Hope this is clear. Thanks!
I prefer to set up my test data where possible through the public interface of my code, rather than giving the test code knowledge of how the code is implemented. So personally I would not use a hardcoded encrypted password in the test code. Let me explain...
Presumably, you have a method to add a new user, which internally will create a new entry in the database with a hashed password. Then the test would look something like this:
AddNewUser("username", "password");
bool isValid = ValidateUser("username", "password");
Assert.IsTrue(isValid);
This of course would have to be complemented with invalid user/password tests:
test: ValidUser_InvalidPassword:
    AddNewUser("username2", "pwd");
    bool isValid = ValidateUser("username2", "wrongPassword");
    Assert.IsFalse(isValid);

test: NonExistingUser:
    bool isValid = ValidateUser("non_existing_user", "anyPassword");
    Assert.IsFalse(isValid);
The argument against this would be that you are testing more than one unit in a single test. But personally I think this is better. Why?
Because the tests are not so brittle: if you make an internal change to the hashing algorithm, the test is there to check that everything still works, and you don't have to change a hardcoded encrypted password in the test code.
That is one of the main benefits of unit tests: to check that we don't break anything when we refactor. So when we want to change the internal implementation for whatever reason (code cleanliness/performance or security improvements), the tests give us confidence that we have not broken the functionality.
An interesting article discussing the benefits of higher-level tests can be found in Dr. Dobb's Journal.
Yes, you could have your setup function add the user with a hardcoded encrypted password to the mock repository. When unit-testing, you should use known values so that the behavior of the tested functions can be predicted.
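A minimal sketch of that, assuming SHA-256 is the hash the production code uses (FakeUserRepository and ValidatePassword are hypothetical stand-ins for your mock repository and the function under test):

import hashlib

# Known value: the same hash the production code would compute for "password".
KNOWN_HASH = hashlib.sha256(b"password").hexdigest()

def test_validate_password_with_known_hash():
    repo = FakeUserRepository()                    # hypothetical mock repository
    repo.add_user("username", KNOWN_HASH)          # seed it with the precomputed hash
    assert ValidatePassword(repo, "username", "password")  # hypothetical function under test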

Unit testing style question: should the creation and deletion of data be in the same method?

I am writing unit tests for a PHP class that maintains users in a database. I now want to test if creating a user works, but also if deleting a user works. I see multiple possibilities to do that:
I only write one method that creates a user and deletes it afterwards
I write two methods. The first one creates the user and saves its ID. The second one deletes that user with the saved ID.
I write two methods. The first one only creates a user. The second method creates a user so that there is one that can afterwards be deleted.
I have read that every test method should be independent of the others, which means the third possibility is the way to go, but that also means every method has to set up its test data by itself (e.g. if you want to test if it's possible to add a user twice).
How would you do it? What is good unit testing style in this case?
Two different things = Two tests.
Test_DeleteUser() could be in a different test fixture as well, because it has different Setup() code: ensuring that a user already exists.
[SetUp]
public void SetUp()
{
    CreateUser("Me");
    Assert.IsTrue(User.Exists("Me"), "Setup failed!");
}

[Test]
public void Test_DeleteUser()
{
    DeleteUser("Me");
    Assert.IsFalse(User.Exists("Me"));
}
This means that if Test_CreateUser() passes and Test_DeleteUser() doesn't, you know there is a bug in the section of the code responsible for deleting users.
Update: I was just giving some thought to Charlie's comments on the dependency issue, by which I mean that if creation is broken, both tests fail even though deletion may be fine. The best I could do was to add a guard assert so that a setup failure shows up in the Errors and Failures tab, to distinguish setup failures from test failures. (In general, setup failures should be easy to spot by an entire test fixture showing red.)
How you do this depends on how you utilize mocks and stubs. I would go for the more granular approach and have two different tests.
Test A
    CreateUser("testuser")
    assertTrue(CheckUserInDatabase("testuser"))

Test B
    LoadUserIntoDB("testuser2")
    DeleteUser("testuser2")
    assertFalse(CheckUserInDatabase("testuser2"))

TearDown
    RemoveFromDB("testuser")
    RemoveFromDB("testuser2")

CheckUserInDatabase(string user)
    ... // Access the DAL and check the item is in the DB
If you utilize mocks and stubs, you don't need to access the DAL until you do your integration testing, so you won't need as much work on the asserts and on setting up the data.
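A sketch of that in Python: the tests exercise a hypothetical UserService against an in-memory fake repository, so no real database access happens until integration testing:

class InMemoryUserRepository:
    # Stands in for the real DAL during unit tests.
    def __init__(self):
        self.users = {}

    def add(self, username):
        self.users[username] = {}

    def remove(self, username):
        del self.users[username]

    def exists(self, username):
        return username in self.users

def test_create_user():
    repo = InMemoryUserRepository()
    service = UserService(repo)          # hypothetical class under test
    service.create_user("testuser")
    assert repo.exists("testuser")

def test_delete_user():
    repo = InMemoryUserRepository()
    repo.add("testuser2")                # arrange directly; no service call needed
    service = UserService(repo)
    service.delete_user("testuser2")
    assert not repo.exists("testuser2")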
Usually you should have two methods, but reality still wins over text on paper in the following case:
You need a lot of expensive setup code to create the object to test. This is a code smell and should be fixed, but sometimes you really have no choice (think of code that aggregates data from several places: you really need all those places). In this case, I write mega tests (where a test case can have thousands of lines of code spread over many methods). Such a test creates the database and all tables, fills them with defined data, runs the code step by step, and verifies each step.
This should be a rare case. If you need one, you must actively ignore the rule "Tests should be fast". This scenario is so complex that you want to check as many things as possible. I had a case where I would dump the contents of 7 database tables to files and compare them for each of the 15 SQL updates (which gave me 105 files to compare in a single test) plus about a million asserts that would run.
The goal here is to make the test fail in such a way that you notice the source of the problem right away. It's like pouring all the constraints into code and making them fail early so you know which line of app code to check. The main drawback is that these test cases are hell to maintain. Every change to the app code means that you'll have to update many of the 105 "expected data" files.

At what level should I unit test?

Let's say in my user model I have a ChangePassword method. Given an already initialised user model, it takes the new password as a parameter and does the database work to make the magic happen. The front end to this is a web form, where the user enters their current password and their desired new password. The controller then checks to see if the user's current password is correct. If so, it invokes the user model's ChangePassword method. If not, it displays an error to the user.
From what I hear you're supposed to unit test the smallest piece of code possible, but doing that in this case completely ignores the check to make sure the user entered the correct current password. So what should I do?
Should I:
A) Unit test only from the controller, effectively testing the model function too?
OR
B) Create 2 different tests; one for the controller and one for the model?
When in doubt, test both. If you only test the controller and the test fails, you don't know whether the issue is in the controller or the model. If you test both, then you know where the problem lies by looking at the model's test result: if it passes, the controller is at fault; if it fails, the model is at fault.
A)
The test fails. You have a problem in either the model or the controller, or both, and you spend time searching through both the model and the controller.
B)
The model and controller tests fail... chances are you have a problem in the model.
Only the controller test fails... chances are better that the problem is not in the model, only in the controller.
Only the model test fails... hard to see this happening, but if it does somehow then you know the problem is in the model, not in the controller.
It's good to test both layers. It'll make finding the problem later that much easier.
There should be multiple tests here:
Verify the correct current password was entered.
Validate the new password, e.g. it doesn't match the existing one, meets minimum length and complexity requirements, and errors are thrown where expected.
Verify the database is updated with the new password.
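A sketch of those tests in Python (make_user() and change_password() are hypothetical helpers; the point is one test per behavior):

def test_rejects_incorrect_current_password():
    user = make_user(password="current-pw")                  # hypothetical fixture helper
    assert not change_password(user, "wrong-pw", "New-pw-1")

def test_rejects_invalid_new_password():
    user = make_user(password="current-pw")
    # e.g. the new password may not match the existing one
    assert not change_password(user, "current-pw", "current-pw")

def test_stores_new_password():
    user = make_user(password="current-pw")
    assert change_password(user, "current-pw", "New-pw-1")
    assert user.check_password("New-pw-1")                   # hypothetical verification hook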
Don't forget that the tests also act as documentation of the code in a sense, making it clear what each part of the code is for.
You might want to consider another option: mock objects. Using these, you can test the controller without the model, which can result in faster test execution and increased test robustness (if the model fails, you know that the controller still works). Now you have two proper unit tests (each testing only a single piece of code), and you can still add an integration test if required.
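For instance, with Python's unittest.mock (UserController is a hypothetical controller under test; the Mock stands in for the model):

from unittest.mock import Mock

def test_controller_delegates_when_current_password_correct():
    model = Mock()
    model.check_current_password.return_value = True    # hypothetical model method
    controller = UserController(model)                  # hypothetical controller under test
    controller.change_password(current="old-pw", new="new-pw")
    # The controller verified the password and delegated; no real model or DB ran.
    model.ChangePassword.assert_called_once_with("new-pw")

def test_controller_shows_error_when_current_password_wrong():
    model = Mock()
    model.check_current_password.return_value = False
    controller = UserController(model)
    controller.change_password(current="bad-pw", new="new-pw")
    model.ChangePassword.assert_not_called()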
Unit testing means testing every unit on its own, so in this case you would need two unit tests: one for the frontend and one for the backend.
To test the combination of the two, an integration test is needed; at least that's what the ISTQB calls it.
If your code is object-oriented, you usually build unit tests for every class, as that is the smallest independently testable unit.
A) is not a unit test in my opinion since it uses more than one class (or layer). So you should really be unit-testing the model only.