I'm setting up some unit tests in Codeception and need to create an instance of an object. I can do this in _before, but that creates a new instance before every test. I have tried using _beforeSuite, but the constructor for the object requires an environment variable, and from my understanding this won't work because _beforeSuite is run before the bootstrap? When I try this out I seem to get null instead of the variable.
I am new to testing, so I am curious whether it is okay to create the object in _before or whether I should be using something else.
What you should strive for in testing in general is that your system under test (SUT) executes in a clearly defined context: during test execution you want every aspect that could influence the SUT under your control. Re-using objects between tests is therefore (normally) not advisable, because a previous test might have modified those objects, which could then affect the results of later tests. The advice against sharing objects between tests holds even if you know the exact order in which the tests will be executed (tests should be independent - there is lots of information about this on the web, e.g. Why in unit testing tests should not depend on the order of execution?).
Therefore, except in exceptional circumstances, prefer a fresh object for every single test. You can have it created in _before, but it might even be better (for readability) to create it directly within each test case that needs it.
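As a minimal sketch of the first option (the service class and environment variable name are hypothetical), a fresh instance can be built in _before so every test starts from a clean, fully defined state:

```php
<?php

use Codeception\Test\Unit;

class ExampleServiceTest extends Unit
{
    /** @var ExampleService */
    private $service;

    protected function _before()
    {
        // Hypothetical service and variable name: a fresh instance is created
        // before every test, so no state leaks from one test to the next.
        // getenv() only returns a value here if the variable has already been
        // defined (e.g. loaded in a bootstrap file or exported in the shell).
        $this->service = new ExampleService(getenv('EXAMPLE_API_KEY'));
    }

    public function testServiceIsReadyAfterConstruction()
    {
        $this->assertTrue($this->service->isReady());
    }
}
```

If getenv() returns nothing in your setup hook, the variable simply is not defined at that point in the run; loading it before the tests execute (for example in the suite bootstrap) is the usual remedy.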
Related
I have a large program that I need to write tests for, and I'm wondering if it would be wrong to write tests that run in a specific order, as some necessarily have to run in order and depend upon a previous test.
For example a scenario like the following:
CreateEmployer
CreateEmployee (requires employer)
Add Department
The drawback I see to this approach is that if one test fails, all of the following tests will also fail. But I am going to have to write the code to build the database anyway, so it might be more effective to use the code that builds the mock database as a sort of integration-like unit test.
Should I create the database without using the tests as a seed method, and then run each of the methods again to see the result? The problem I see with this approach is that if the seed method does not work all of the tests will fail and it won't be immediately clear that the error is in the seed method and not the services or the tests themselves.
Yes, this is discouraged. Tests shouldn't be "temporally coupled".
Each test should run in complete isolation of other tests. If you find yourself in a situation where the artifacts created by Test A are needed by Test B then you have two problems to correct:
Test A shouldn't be creating artifacts (or side-effects).
Test B should be using mock data as part of its "arrange" step.
Basically, the unit tests shouldn't be using a real database. They should be testing the logic of the code, not the interaction with the infrastructure dependencies. Mock those dependencies in order to test just the code.
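As an illustration of mocking the infrastructure dependency (hypothetical names, PHPUnit-style mocks; adapt to whatever framework you use), the test hands the code under test a mocked repository instead of a real database connection:

```php
<?php

use PHPUnit\Framework\TestCase;

class EmployerServiceTest extends TestCase
{
    public function testCreateEmployerPersistsAndReturnsEmployer(): void
    {
        // Hypothetical repository interface standing in for the database.
        $repository = $this->createMock(EmployerRepository::class);
        $repository->expects($this->once())
            ->method('save')
            ->with($this->isInstanceOf(Employer::class));

        $service = new EmployerService($repository);
        $employer = $service->createEmployer('Acme Ltd');

        $this->assertSame('Acme Ltd', $employer->getName());
    }
}
```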
It is a bad idea to have unit tests depend upon each other. In your example you could have a defect in both CreateEmployer and AddDepartment, but when all three tests fail because of the CreateEmployer test, you might mistakenly assume that only the CreateEmployer test is 'really' failing. This means you've lost the potentially valuable information that AddDepartment is failing as well.
Another problem is that you might create a separate workflow in the future that calls AddDepartment without calling CreateEmployer. Now your tests assume that CreateEmployer will always be called, but in reality it isn't. All three tests could pass, but the application could still break because you have a dependency you didn't know was there.
The best tests won't rely on a database at all, but will instead allow you to manually specify or "Mock" the data. Then you don't have to worry about an unrelated database problem breaking all of your tests.
If these are truly Unit tests then yes, requiring a specific order is a bad practice for several reasons.
Coupling - As you point out, if one test fails, then subsequent ones will fail as well. This will mask real problems.
TDD - A core principle of TDD is make tests easy to run. If you do so, developers are more likely to run them. If they are hard to run (e.g. I have to run the entire suite), then they are less likely to be run and their value is lost.
Ideally, unit tests should not depend upon the completion of another test in order to run. It is also hard to run them in a given sequence, but that may depend upon your unit testing tool.
In your example, I would create a test that tests the CreateEmployer method and makes sure it returns a new object the way you expect.
The second test I would create would be for CreateEmployee, and if that test requires an Employer object, your CreateEmployee method could receive its Employer object via dependency injection. Here is where you would use a mock object (one set up to return a fixed/known Employer) as the Employer object the CreateEmployee method would consume. This lets you test the CreateEmployee method and its actions upon that object with a given/known instance of the Employer object.
Your third test, AddDepartment, I assume also depends upon an Employer object. This unit test can follow the same pattern, and receive a mock Employer object to consume during its test. (The same object you pass to unit test two above.)
Each test now runs/fails on its own, and can run in any sequence.
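A rough sketch of that second test (hypothetical class and method names, PHPUnit-style mocks rather than any particular framework):

```php
<?php

use PHPUnit\Framework\TestCase;

class EmployeeServiceTest extends TestCase
{
    public function testCreateEmployeeIsAttachedToGivenEmployer(): void
    {
        // A fixed/known Employer is supplied as a mock, so this test does
        // not depend on CreateEmployer having run first.
        $employer = $this->createMock(Employer::class);
        $employer->method('getId')->willReturn(42);

        $service = new EmployeeService();
        $employee = $service->createEmployee('Ada Lovelace', $employer);

        $this->assertSame(42, $employee->getEmployerId());
    }
}
```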
Exactly how independent should unit tests be? What should be done in a "before" section of a unit testing suite?
Say, for example, I am testing the functionality of a server - should the server be created, initialised, connected to its various data sources, etc. inside the body of every test case? Or are there situations where it may be appropriate to initialise the server once and then test more than one case?
The other situation I am considering is mobile app testing, where phone objects need to be created to perform a unit test. Should this be done every time: create phone, initialise, run test, destroy phone, repeat?
Unit tests should be completely independent, i.e. each should be able to run in any order so each will need to have its own initialization steps.
Now, if you are talking about server or phone initialization, it sounds more like integration tests rather than unit tests.
Ideally yes. Every test should start from scratch, and put the system into a particular well-defined state before executing the function under test. If you don't, then you make it more difficult to isolate the problem when the test fails. Even worse, you may cause the test not to fail because of some extra state left behind by an earlier test.
If you have a situation where the setup time is too long, you can mock or stub some of the ancillary objects.
If you are worried about having too much setup code, you can refactor the setup code into reusable functions.
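For example (hypothetical names, PHPUnit-style), the shared wiring can live in a small helper that each test calls, so every test still gets its own fresh, stubbed objects without repeating the setup code:

```php
<?php

use PHPUnit\Framework\TestCase;

class MessageServerTest extends TestCase
{
    // Reusable setup helper: each call builds a brand-new server wired to a
    // stubbed data source, so tests stay independent and share no state.
    private function newServer(): MessageServer
    {
        $dataSource = $this->createMock(DataSource::class);
        $dataSource->method('connect')->willReturn(true);

        return new MessageServer($dataSource);
    }

    public function testServerAcceptsConnectionsAfterInit(): void
    {
        $server = $this->newServer();
        $server->init();

        $this->assertTrue($server->isAcceptingConnections());
    }
}
```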
I'm currently broadening my unit testing by utilising mock objects (NSubstitute in this particular case). However, I'm wondering what the current wisdom is when creating mock objects. For instance, I'm working with an object that contains various routines to grab and process data - no biggie here, but it will be utilised in a fair number of tests.
Should I create a shared function that returns the mock object with all the appropriate methods and behaviours mocked for pretty much the whole testing project, and call that object into my unit tests? Or should I mock the object in every unit test, mocking only the behaviour I need for that test (although there will be times I'll be mocking the same behaviour on more than one occasion)?
Thoughts or advice are gratefully received...
I'm not sure if there is an agreed "current wisdom" on this, but here's my 2 cents.
First, as @codebox pointed out, re-creating your mocks for each unit test is a good idea, as you want your unit tests to run independently of each other. Doing otherwise can result in tests that pass when run together but fail when run in isolation (or vice versa). Creating the mocks required for tests is commonly done in the test setup ([SetUp] in NUnit, the constructor in xUnit), so each test will get a newly created mock.
In terms of configuring these mocks, it depends on the situation and how you test. My preference is to configure them in each test with the minimum amount of configuration necessary. This is a good way of communicating exactly what that test requires of its dependencies. There is nothing wrong with some duplication in these cases.
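To illustrate that idea (hypothetical names, shown with PHPUnit-style mocks rather than NSubstitute), each test configures only the one behaviour it actually relies on:

```php
<?php

use PHPUnit\Framework\TestCase;

class PriceCalculatorTest extends TestCase
{
    public function testUsesExchangeRateFromProvider(): void
    {
        // Only the single call this test cares about is configured.
        $rates = $this->createMock(RateProvider::class);
        $rates->method('rateFor')->with('EUR')->willReturn(2.0);

        $calculator = new PriceCalculator($rates);

        $this->assertSame(200.0, $calculator->inEuros(100.0));
    }
}
```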
If a number of tests require the same configuration, I would consider using a scenario-based test fixture (link disclaimer: shameless self-promotion). A scenario could be something like When_the_service_is_unavailable, and the setup for that scenario could configure the mocked service to throw an exception or return an error code. Each test then makes assertions based on that common configuration/scenario (e.g. should display error message, should send email to admin etc).
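A rough sketch of that scenario style (hypothetical names, PHPUnit-flavoured analogy of the same idea): the scenario's setup configures the mock once, and every test in the class asserts against that shared context.

```php
<?php

use PHPUnit\Framework\TestCase;

// Scenario: When_the_service_is_unavailable
class WhenTheServiceIsUnavailableTest extends TestCase
{
    private $page;

    protected function setUp(): void
    {
        // Shared scenario configuration: the mocked service always fails.
        $service = $this->createMock(PaymentService::class);
        $service->method('charge')->willThrowException(new RuntimeException('down'));

        $this->page = new CheckoutPage($service);
        $this->page->submitOrder();
    }

    public function testDisplaysErrorMessage(): void
    {
        $this->assertSame('Payment is currently unavailable.', $this->page->errorMessage());
    }

    public function testDoesNotCreateAnOrder(): void
    {
        $this->assertFalse($this->page->orderWasCreated());
    }
}
```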
Another option, if you have lots of duplicated bits of configuration, is to use a Test Data Builder. This gives you reusable ways of configuring a number of different aspects of your mock or any other test data.
Finally, if you're finding a large amount of configuration is required it might be worth considering changing the interface of the test dependency to be less "chatty". By looking for a valid abstraction that reduces the number of calls required by the class under test you'll have less to configure in your tests, and have a nice encapsulation of the responsibilities on which that class depends.
It is worth experimenting with a few different approaches and seeing what works for you. Any removal of duplication needs to be balanced against keeping each test case independent, simple, maintainable and reliable. If you find that a large number of tests fail for small changes, or that you can't figure out the configuration an individual test needs, or that tests fail depending on the order in which they are run, then you'll want to refine your approach.
I would create new mocks for each test - if you re-use them you may get unexpected behaviour where the state of the mock from earlier tests affects the outcome of later tests.
It's hard to provide a general answer without looking at a specific case.
I'd stick with the same approach as I do everywhere else: first look at the tests as independent beings, then look for similarities and extract the common part out.
Your goal here is to follow DRY, so that your tests are maintainable in case the requirements change.
So...
If it's obvious that every test in a group is going to use the same mock behaviour, provide it in your common set-up
If each of them is significantly different, as in: the content of the mock constitutes a significant part of what you're testing and the test/mock relationship looks like 1:1, then it's reasonable to keep them close to the tests
If the mocks differ between tests, but only to some degree, you still want to avoid redundancy. A common SetUp won't help you here, but you may want to introduce a utility like PrepareMock(args...) that covers the different cases (see the sketch after this list). This keeps the actual test methods free of repetitive set-up, but still lets you introduce any degree of difference between them.
The tests look nice when you extract all similarities upwards (to a SetUp or helper methods) so that the only thing that remains in test methods is what's different between them.
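A minimal sketch of such a helper (hypothetical names, PHPUnit-style; PrepareMock above is just an illustrative signature):

```php
<?php

use PHPUnit\Framework\TestCase;

class ImportTest extends TestCase
{
    // Parameterised helper: covers the common configuration, while each test
    // varies only the parts that matter to it.
    private function prepareFeedMock(array $rows = [], bool $reachable = true): Feed
    {
        $feed = $this->createMock(Feed::class);
        $feed->method('isReachable')->willReturn($reachable);
        $feed->method('fetch')->willReturn($rows);
        return $feed;
    }

    public function testImportsAllRows(): void
    {
        $importer = new Importer($this->prepareFeedMock([['id' => 1], ['id' => 2]]));
        $this->assertSame(2, $importer->run());
    }

    public function testReportsZeroWhenFeedUnreachable(): void
    {
        $importer = new Importer($this->prepareFeedMock([], false));
        $this->assertSame(0, $importer->run());
    }
}
```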
I'm just getting started with boost-test and unit testing in general with a new application, and I am not sure how to handle the application's initialisation (e.g. loading config files, connecting to a database, starting an embedded Python interpreter, etc.).
I want to test this initialisation process, and also most of the other modules in the application require that the initialisation occurred successfully.
Some way to run some shut down code would also be appreciated.
How should I go about doing this?
It seems what you intend to do is more integration testing than unit testing. This is not to split hairs over wording, but it makes a difference. Unit testing means testing methods in isolation, in an environment called a fixture, created just for one test and then deleted. Another instance of the fixture will be re-created if the next case requires the same fixture. This is done to isolate the tests, so that an error in one test does not affect the outcome of subsequent tests.
Usually, one test has three steps:
Arrange - prepare the fixture : instantiate the class to be tested, possibly other objects needed
Act - call the method to be tested
Assert - verify the expectations
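To make the three steps concrete, here is a minimal PHPUnit-style sketch (the class under test is hypothetical):

```php
<?php

use PHPUnit\Framework\TestCase;

class CartTest extends TestCase
{
    public function testTotalSumsItemPrices(): void
    {
        // Arrange: build the fixture for this one test.
        $cart = new Cart();
        $cart->add(new Item('book', 12.50));
        $cart->add(new Item('pen', 2.50));

        // Act: call the method under test.
        $total = $cart->total();

        // Assert: verify the expectations.
        $this->assertSame(15.00, $total);
    }
}
```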
Unit tests typically stay away from external resources such as files and databases. Instead, mock objects are used to satisfy the dependencies of the class to be tested.
However, depending on the type of your application, you can try to run tests from the application itself. This is not "pure" unit testing, but it can be valuable anyway, especially if the code has not been written with unit testing in mind - it might not be "flexible" enough to be unit tested.
This needs a special execution mode, with a "-test" parameter for instance, which initializes the application normally and then invokes tests that simulate inputs and use assertions to verify that the application reacted as expected. Likewise, it might be possible to invoke the shutdown code and verify with assertions that the database connection has been closed (if the objects are not deleted).
This approach has several drawbacks compared to unit tests: it depends on the config files (the software may behave differently depending on the parameters), it depends on the database (on its content and on the ability to connect to it), and the tests are not isolated ... The first two can be overcome by using default values for the configuration and by connecting to a test database in test mode.
Are you defining BOOST_TEST_MAIN? If so, and you have no main function of your own (where you would otherwise put initialization code), you could feasibly use some form of singleton object that exposes an init function, which you can call before each test if required.
Presume you have a class which passes all its current unit tests.
If you were to add or pull out some methods/introduce a new class, and then use composition to incorporate the same functionality, would the new class require testing?
I'm torn on whether or not you should, so any advice would be great.
Edit:
I suppose I should have added that I use DI (dependency injection) - so should I inject the new class as well?
Not in the context of TDD, no, IMHO. The existing tests justify everything about the existence of the class. If you need to add behavior to the class, that would be the time to introduce a test.
That being said, it may make your code and tests clearer to move the tests into a class that relates to the new class you made. That depends very much on the specific case.
EDIT: After your edit, I would say that that makes a good case for moving some existing tests (or a portion of the existing tests). If the class is so decoupled that it requires injection, then it sounds like the existing tests may not be obviously covering it if they stay where they are.
Initially, no, they're not necessary. If you had perfect coverage, extracted the class and did nothing more, you would still have perfect coverage (and those tests would confirm that the extraction was indeed a pure refactoring).
But eventually - and probably soon - yes. The extracted class is likely to be used outside its original context, and you want to constrain its behavior with tests that are specific to the new class, so that changes for a new context don't inadvertently affect behavior for the original caller. Of course the original tests would still reveal this, but good unit tests point directly to the problematic unit, and the original tests are now a step removed.
It's also good to have the new tests as executable documentation for the newly-extracted class.
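For instance (hypothetical names, PHPUnit-style), once a Formatter has been extracted from Report and injected, it can get its own focused tests while the original Report tests continue to cover the composed behaviour:

```php
<?php

use PHPUnit\Framework\TestCase;

// Focused tests for the newly extracted class, independent of its original caller.
class FormatterTest extends TestCase
{
    public function testWrapsAmountInCurrencySymbol(): void
    {
        $formatter = new Formatter('€');
        $this->assertSame('€12.00', $formatter->money(12.0));
    }
}

// The original class is now tested with the extracted dependency injected,
// which is also where a test double could be substituted if needed.
class ReportTest extends TestCase
{
    public function testRendersTotalUsingFormatter(): void
    {
        $report = new Report(new Formatter('€'));
        $this->assertStringContainsString('€12.00', $report->render(12.0));
    }
}
```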
Well, yes and no.
If I understand correctly, you have written tests, and wrote production code that makes the tests pass - i.e. the simplest thing that works.
Now you are in the refactoring phase. You want to extract code from one class and put it in a class of its own, probably to keep up with the Single Responsibility Principle (or SRP).
You may make the refactoring without adding tests, since your tests are there precisely to allow you to refactor without fear. Remember - refactor means changing the code, without modifying the functionality.
However, it is quite likely that refactoring the code will break your tests. This is most likely caused by fragile tests that test behavior rather than state - i.e. you mocked the methods you ported out.
On the other hand, if your tests are primarily state-driven (i.e. you assert results and ignore implementation), then your new service component (the block of code you extracted to a new class) will not be tested directly. If you use some form of code coverage tool, you'll find out. If that is the case, you may wish to test that it works - "may", because 100% code coverage is neither desirable nor feasible. If possible, I'd try to add the tests for that service.
In the end, it may very well boil down to a judgment call.
I would say no. It is already being tested by the tests run on the old class.
As others have said, it's probably not entirely needed right away, since all the same stuff is still under test. But once you start making changes to either of those two classes individually, you should separate the tests.
Of course, the tests shouldn't be too hard to write; since you have the stuff being tested already, it should be fairly trivial to break out the various bits of the tests.