Django tests reliant on other pages/behaviour - django

I've started writing some tests for my Django app and I'm unsure how best to structure the code.
Say I have a register page and a page for logged in users only.
My first plan was to have an earlier method perform the register and a later method use that login to test the page:
def test_register_page(self):
    # send a request to the register page and check the user has been registered correctly

def test_restricted_page(self):
    c = Client()
    c.login(username="someUser", password="pass")
    response = c.post("/someRestrictedPage/")
    # test the response
However, this means that one of my tests now relies on the other.
The alternative I see is calling register in setUp(), but the restricted-page test would still rely on the register page working.
I could also create a new user manually in setUp(), which I don't like either, because then I'm not testing a user created by the system.
What is the usual pattern for testing this kind of situation?

You are trying to mix a lot of different functionality together in one test case. A cleaner design is to have one test case for user registration and one for the view.
Having them depend on each other introduces coupling between the tests, and if a test fails the error becomes much harder to debug. The success of the registration test should be determined by the correct creation of the user instance (check the necessary attributes of the user, etc.), not by being able to log in on a certain page. For the view test case you will therefore need to set up a "correct" user instance yourself. This may seem a bit more complicated than necessary, but it will make future maintenance a lot easier.
What you are trying to do is closer to an integration test, which tests a whole system; before that you should split your system into functional units and write unit tests for those units!
The smaller and better defined the individual tests are, the easier their maintenance and debugging will be.
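For example, a minimal sketch of the two independent test cases could look like the following; the /register/ URL, its form fields and the expected status code are assumptions for illustration, not details from the question. The view test builds its user directly with Django's ORM instead of going through the register page.

from django.contrib.auth.models import User
from django.test import TestCase


class RegisterPageTests(TestCase):
    def test_register_creates_user(self):
        # hypothetical register URL and form fields
        self.client.post("/register/", {"username": "someUser", "password": "pass"})
        # success is judged by the created user instance, not by logging in later
        self.assertTrue(User.objects.filter(username="someUser").exists())


class RestrictedPageTests(TestCase):
    def setUp(self):
        # create a known-good user directly, so this test no longer depends
        # on the register view working
        self.user = User.objects.create_user("someUser", password="pass")

    def test_restricted_page(self):
        self.client.login(username="someUser", password="pass")
        response = self.client.post("/someRestrictedPage/")
        self.assertEqual(response.status_code, 200)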


Django DRF APITestCase chain test cases

For example, I want to write several test cases like this:
class Test(APITestCase):
    def setUp(self):
        ...  # some payloads

    def test_create_user(self):
        ...  # create the object using the payload from setUp

    def test_update_user(self):
        ...  # update the object created in the test case above
In the example above, test_update_user fails because, say, it cannot find the user object. Therefore, for that test case to work, I have to create the user again inside test_update_user.
One possible solution I found is to create the user in setUp. However, I would like to know if there is a way to chain test cases to run one after another without deleting the objects created by the previous test case.
REST framework's test helpers are classes that extend Django's existing test framework and improve support for making API requests, so all tests for DRF calls are executed with Django's built-in test framework.
An important principle of unit testing is that each test should be independent of all the others. If in your case the code in test_create_user must come before test_update_user, then you could combine both into one test:
def test_create_and_update_user(self):
    ...  # create and update the user
Tests in Django can be executed in parallel to minimize the time it takes to run all tests.
As you said above, if you want to share code between tests, you have to set it up in the setUp method:
def setUp(self):
    pass
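As a rough sketch under those constraints, assuming a hypothetical /api/users/ endpoint whose responses include the created object's id, the combined test might look like this:

from rest_framework import status
from rest_framework.test import APITestCase


class UserApiTests(APITestCase):
    def setUp(self):
        # shared payload only; each test creates whatever objects it needs
        self.payload = {"username": "alice", "email": "alice@example.com"}

    def test_create_and_update_user(self):
        created = self.client.post("/api/users/", self.payload, format="json")
        self.assertEqual(created.status_code, status.HTTP_201_CREATED)

        user_id = created.data["id"]
        updated = self.client.patch(
            "/api/users/%s/" % user_id, {"email": "new@example.com"}, format="json"
        )
        self.assertEqual(updated.status_code, status.HTTP_200_OK)
        self.assertEqual(updated.data["email"], "new@example.com")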

TDD and "honesty" of test

I have a concern with the "honesty" of tests when doing TDD. TDD is:
Write a red test
Write just enough code to make it green
Refactor and keep the test green
So far so good. Now here is an example of applying the principle above; I have already seen this kind of example in tutorials and in real life:
I want to check that the current user email is displayed on the default page of my webapp.
Write a red test: "example@user.com" is displayed inside default_page.html
Write just enough code to make it green: hardcode "example@user.com" inside default_page.html
Refactor by implementing get_current_user(), some other code in some other layers etc., keeping the test green.
I'm "shocked" by step 2. There is something wrong here : the test is green even if nothing is actually working. There a test smell here, it means that maybe at some point someone could break the production code without breaking the test suite.
What I am missing here ?
Your assertion that "nothing is working" is false. The code functions correctly for the case that the email address is example#user.com. And you do not need that final refactoring. Your next failing test might be to make it fail for the case that the user has a different email address.
I would say that what you have is only partially complete. You said:
I want to check that the current user email is displayed on the default page of my webapp.
The test doesn't check the current user's email address on the default page; it checks that the fixed email address "example@user.com" is in the page.
To address this you either need to provide more examples (i.e. have multiple tests with different email addresses) or randomly generate the email address in the test setup.
So I would say what you have is something like this in pseudocode:
Given the current user has email address "example@user.com"
When they visit the default page
Then the page should contain the email address "example@user.com"
This is the first test you can write in TDD, and you can indeed hardcode the answer to avoid implementing unnecessary stuff. You can now add another test which will force you to implement the correct behavior:
Given the current user has email address "example2@user.com"
When they visit the default page
Then the page should contain the email address "example2@user.com"
Now you have to remove the hardcoding, as you cannot satisfy both of these tests with a hardcoded solution. This forces you to get the actual email address from the current user and display it.
Often it makes sense to end up with 3 examples in your tests. These don't need to be 3 separate tests; you can use data-driven tests to reuse the same test method with different values. You don't say what test framework you are using, so I can't give a specific example.
This approach is common in TDD and is called triangulation.
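For instance, with Django's test framework the triangulation step could look roughly like this; the URL, the user name and the way the page renders the address are assumptions for illustration only:

from django.contrib.auth.models import User
from django.test import TestCase


class DefaultPageEmailTests(TestCase):
    def assert_email_shown(self, email):
        user = User.objects.create_user("someuser", email=email, password="pass")
        self.client.force_login(user)
        response = self.client.get("/")
        self.assertContains(response, email)

    def test_shows_first_users_email(self):
        self.assert_email_shown("example@user.com")

    def test_shows_second_users_email(self):
        # a second example forces the view to read the real user's address
        # instead of returning a hardcoded string (triangulation)
        self.assert_email_shown("example2@user.com")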
You are correct about
step 2. There is something wrong here
but it's not in the TDD approach. IMHO it's in the test logic. After all, this step (step 2) validates that the test harness is working correctly: that the new test does not mistakenly pass without requiring any new code, and that the required feature does not already exist.
What am I missing here?
This step also tests the test itself, in the negative: it rules out the possibility that the new test always passes and is therefore worthless. The new test should also fail for the expected reason. It's vital that this step increases the developer's confidence that it is testing the right thing and passes only in the intended cases.

Unit Testing basic Controllers

I have a number of simple controller classes that use Doctrine's entity manager to retrieve data and pass it to a view.
public function indexAction() {
    $pages = $this->em->getRepository('Model_Page')->findAll();
    $this->view->pages = $pages;
}
What exactly should we be testing here?
I could test routing on the action to ensure that's configured properly
I could potentially test that the appropriate view variables are being set, but this is cumbersome
The findAll() method should probably be in a repository layer which can be tested using mock data, but then this constitutes a different type of test and brings us back to the question of
What should we be testing as part of controller tests?
The controller holds core logic for your application. Although simple "index" actions don't do anything specific, the actions that validate and actively use data and generate view models contain much of the system's functionality.
For example, consider a login form. Whenever the login data is posted, the controller should validate the login/password and either: 1) redirect to the index page when the credentials are good, showing a "Welcome, {user}" text; 2) return to the login page saying the login is not found in the database; or 3) return to the login page saying the password is wrong.
These three types of output make perfect test cases. You should verify that the correct view models/views are sent back to the client for the appropriate actions.
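Sticking with Python for consistency with the rest of this page, a rough Django-style analogue of those three cases might look like this (the URLs, messages and credentials are all assumptions):

from django.contrib.auth.models import User
from django.test import TestCase


class LoginViewTests(TestCase):
    def setUp(self):
        User.objects.create_user("alice", password="correct-pass")

    def test_valid_credentials_redirect_to_index(self):
        response = self.client.post(
            "/login/", {"username": "alice", "password": "correct-pass"}
        )
        self.assertRedirects(response, "/", fetch_redirect_response=False)

    def test_unknown_user_returns_to_login_page(self):
        response = self.client.post(
            "/login/", {"username": "bob", "password": "whatever"}
        )
        self.assertContains(response, "login is not found")

    def test_wrong_password_returns_to_login_page(self):
        response = self.client.post(
            "/login/", {"username": "alice", "password": "wrong"}
        )
        self.assertContains(response, "password is wrong")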
You shouldn't look at a controller as something mysterious. It's just another piece of code, and it's tested like any other code: any complicated logic that gives business value to the user should be tested.
Also, I'd recommend acceptance testing with a framework like Cucumber to generate meaningful test cases.
The controller is probably the hardest thing to test, as it has many dependencies. In theory you should test it in full isolation, but as you have already seen, that makes little sense.
You should probably start with a functional or acceptance test, which exercises your controller action as a whole. I agree with the previous answer that you should try acceptance testing tools, but Cucumber is for Ruby; for PHP you can try Codeception. It makes tests simple and meaningful.
The Codeception site also has an article on how to test sample controllers.

Unit testing style question: should the creation and deletion of data be in the same method?

I am writing unit tests for a PHP class that maintains users in a database. I now want to test if creating a user works, but also if deleting a user works. I see multiple possibilities to do that:
I only write one method that creates a user and deletes it afterwards
I write two methods. The first one creates the user and saves its ID. The second one deletes that user using the saved ID.
I write two methods. The first one only creates a user. The second method creates a user itself, so that there is one that can afterwards be deleted.
I have read that every test method should be independent of the others, which means the third possibility is the way to go, but that also means every method has to set up its test data by itself (e.g. if you want to test if it's possible to add a user twice).
How would you do it? What is good unit testing style in this case?
Two different things = Two tests.
Test_DeleteUser() could be in a different test fixture as well, because it has different SetUp() code that ensures a User already exists.
[SetUp]
public void SetUp()
{
    CreateUser("Me");
    Assert.IsTrue( User.Exists("Me"), "Setup failed!" );
}

[Test]
public void Test_DeleteUser()
{
    DeleteUser("Me");
    Assert.IsFalse( User.Exists("Me") );
}
This means that if Test_CreateUser() passes and Test_DeleteUser() doesn't - you know that there is a bug in the section of the code that is responsible for deleting users.
Update: I was just giving some thought to Charlie's comments on the dependency issue, by which I mean that if creation is broken, both tests fail even though delete may be fine. The best I could do was to move a guard check into SetUp so that setup failures show up in the Errors and Failures tab, to distinguish them from test failures (in general, setup failures should be easy to spot because an entire test fixture shows red).
How you do this depends on how you utilize mocks and stubs. I would go for the more granular approach and have two different tests.
Test A
    CreateUser("testuser");
    assertTrue(CheckUserInDatabase("testuser"))

Test B
    LoadUserIntoDB("testuser2")
    DeleteUser("testuser2")
    assertFalse(CheckUserInDatabase("testuser2"))

TearDown
    RemoveFromDB("testuser")
    RemoveFromDB("testuser2")

CheckUserInDatabase(string user)
    ... // Access the DAL and check the item is in the DB
If you utilize mocks and stubs you don't need to access the DAL until you do your integration testing, so you won't need as much work asserting and setting up the data.
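A small sketch of that idea in Python; the UserService class and the DAL method names are invented for illustration, the point is only how the mock replaces the database access:

from unittest import TestCase
from unittest.mock import MagicMock


class UserService:
    # hypothetical class under test; persistence is delegated to a DAL object
    def __init__(self, dal):
        self.dal = dal

    def delete_user(self, name):
        if self.dal.find_user(name):
            self.dal.delete_user(name)


class UserServiceDeleteTests(TestCase):
    def test_delete_user_removes_existing_user(self):
        dal = MagicMock()
        dal.find_user.return_value = {"name": "testuser2"}  # stubbed lookup

        UserService(dal).delete_user("testuser2")

        # assert on the interaction with the DAL instead of hitting a real database
        dal.delete_user.assert_called_once_with("testuser2")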
Usually, you should have two methods but reality still wins over text on paper in the following case:
You need a lot of expensive setup code to create the object to test. This is a code smell and should be fixed, but sometimes you really have no choice (think of code that aggregates data from several places: you really need all those places). In this case, I write mega tests (where a test case can have thousands of lines of code spread over many methods). It creates the database and all tables, fills them with defined data, runs the code step by step, and verifies each step.
This should be a rare case. If you need one, you must actively ignore the rule "Tests should be fast". This scenario is so complex that you want to check as many things as possible. I had a case where I would dump the contents of 7 database tables to files and compare them for each of the 15 SQL updates (which gave me 105 files to compare in a single test) plus about a million asserts that would run.
The goal here is to make the test fail in such a way that you notice the source of the problem right away. It's like pouring all the constraints into code and making them fail early, so you know which line of app code to check. The main drawback is that these test cases are hell to maintain: every change to the app code means updating many of the 105 "expected data" files.

At what level should I unit test?

Let's say in my user model I have a ChangePassword method. Given an already initialised user model, it takes the new password as a parameter and does the database work to make the magic happen. The front end to this is a web form, where the user enters their current password and their desired new password. The controller then checks to see if the user's current password is correct. If so, it invokes the user model's ChangePassword method. If not, it displays an error to the user.
From what I hear you're supposed to unit test the smallest piece of code possible, but doing that in this case completely ignores the check to make sure the user entered the correct current password. So what should I do?
Should I:
A) Unit test only from the controller, effectively testing the model function too?
OR
B) Create 2 different tests; one for the controller and one for the model?
When in doubt, test both. If you only test the controller and the test fails, you don't know whether the issue is in the controller or the model. If you test both, then you know where the problem lies by looking at the model's test result - if it passes, the controller is at fault, if it fails, then the model is at fault.
A)
The test fails. You have a problem in either the model or the controller, or both and spend time searching through the model and controller.
B)
The model and controller tests fail... chances are you have a problem in the model.
Only the controller test fails... chances are better that the problem is not in the model, only in the controller.
Only the model test fails... hard to see this happening, but if it does somehow then you know the problem is in the model, not in the controller.
It's good to test both layers. It'll make finding the problem later that much easier.
There should be multiple tests here:
Verify the correct password was entered.
Validate the new password, e.g. doesn't match existing one, has minimum length, sufficient complexity, tests for errors thrown, etc.
Update the database with the new password.
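A rough Python sketch of those cases, with a deliberately minimal User model invented just for illustration (a real model would persist the password and enforce a proper policy):

from unittest import TestCase


class User:
    # hypothetical minimal model used only to illustrate the test cases
    def __init__(self, password):
        self._password = password

    def check_password(self, candidate):
        return candidate == self._password

    def change_password(self, new_password):
        if len(new_password) < 8:
            raise ValueError("too short")
        if new_password == self._password:
            raise ValueError("must differ from the current password")
        self._password = new_password  # a real model would also update the database


class ChangePasswordTests(TestCase):
    def test_correct_current_password_is_accepted(self):
        self.assertTrue(User("old-secret").check_password("old-secret"))

    def test_new_password_must_meet_minimum_length(self):
        with self.assertRaises(ValueError):
            User("old-secret").change_password("short")

    def test_new_password_is_stored(self):
        user = User("old-secret")
        user.change_password("new-secret-123")
        self.assertTrue(user.check_password("new-secret-123"))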
Don't forget that the tests can also help act as documentation of the code in a sense so that it becomes clear for what each part of the code is there.
You might want to consider another option: Mock objects. Using these, you can test the controller without the model, which can result in faster test execution and increased test robustness (if the model fails, you know that the controller still works). Now you have two proper unit tests (both testing only a single piece of code each), and you can still add an integration test if required.
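For example, a sketch of that idea; the PasswordController and its return values are invented, and the point is only that the model is replaced by a mock so the controller's branching can be tested in isolation:

from unittest import TestCase
from unittest.mock import MagicMock


class PasswordController:
    # hypothetical controller; a real one would render views instead of returning strings
    def __init__(self, user):
        self.user = user

    def change_password(self, current, new):
        if not self.user.check_password(current):
            return "error: wrong current password"
        self.user.change_password(new)
        return "ok"


class PasswordControllerTests(TestCase):
    def test_rejects_wrong_current_password_without_touching_the_model(self):
        user = MagicMock()
        user.check_password.return_value = False

        result = PasswordController(user).change_password("bad", "new-secret")

        self.assertEqual(result, "error: wrong current password")
        user.change_password.assert_not_called()

    def test_delegates_to_the_model_when_current_password_is_correct(self):
        user = MagicMock()
        user.check_password.return_value = True

        result = PasswordController(user).change_password("old-secret", "new-secret")

        self.assertEqual(result, "ok")
        user.change_password.assert_called_once_with("new-secret")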
Unit testing means testing every unit on its own, so in this case you would need to build two unit tests: one for the frontend and one for the backend.
To test the combination of both, an integration test is needed; at least that is what the ISTQB calls it.
If you write object-oriented code, you usually build unit tests for every class, as that is the smallest independently testable unit.
A) is not a unit test in my opinion, since it uses more than one class (or layer). So you should really be unit testing the model only.