I'm just getting started with boost-test and unit testing in general with a new application, and I am not sure how to handle the application's initialisation (e.g. loading config files, connecting to a database, starting an embedded Python interpreter, etc.).
I want to test this initialisation process, and most of the other modules in the application also require that the initialisation occurred successfully.
Some way to run some shut down code would also be appreciated.
How should I go about doing this?
It seems what you intend to do is more integration testing than unit testing. This isn't nitpicking about wording, but it makes a difference. Unit testing means testing methods in isolation, in an environment called a fixture, created just for one test and then deleted. Another instance of the fixture will be re-created if the next test case requires the same fixture. This is done to isolate the tests so that an error in one test does not affect the outcome of subsequent tests.
Usually, one test has three steps:
Arrange - prepare the fixture: instantiate the class to be tested, possibly other objects needed
Act - call the method to be tested
Assert - verify the expectations
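For instance, with Boost.Test (which you mentioned), a minimal self-contained sketch of the pattern might look like the following; the ConfigParser class is hypothetical, defined inline just so the example compiles on its own:

    #define BOOST_TEST_MODULE example
    #include <boost/test/included/unit_test.hpp>
    #include <map>
    #include <string>

    // Hypothetical class under test, defined inline for illustration.
    struct ConfigParser {
        std::map<std::string, std::string> values;
        std::string get(const std::string& key, const std::string& fallback) const {
            auto it = values.find(key);
            return it == values.end() ? fallback : it->second;
        }
    };

    BOOST_AUTO_TEST_CASE(get_returns_fallback_when_key_is_missing)
    {
        // Arrange: build a fresh fixture for this one test
        ConfigParser parser;
        parser.values["host"] = "localhost";

        // Act: call the method under test
        std::string result = parser.get("port", "5432");

        // Assert: verify the expectation
        BOOST_CHECK_EQUAL(result, "5432");
    }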
Unit tests typically stay away from external resources such as files and databases. Instead, mock objects are used to satisfy the dependencies of the class to be tested.
However, depending on the type of your application, you can try to run tests from the application itself. This is not "pure" unit testing, but it can be valuable anyway, especially if the code has not been written with unit testing in mind and might not be "flexible" enough to be unit tested.
This needs a special execution mode, with a "-test" parameter for instance, which will initialize the application normally and then invoke tests that simulate inputs and use assertions to verify the application reacted as expected. Likewise, it might be possible to invoke the shutdown code and verify with assertions that the database connection has been closed (if the objects are not deleted).
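A minimal sketch of such a "-test" entry point (the function names here are hypothetical placeholders, not part of any framework):

    #include <cstring>
    #include <iostream>

    // Hypothetical hooks, stubbed out so the sketch compiles on its own.
    void initialize_application() { /* load config, connect to database, ... */ }
    int  run_application()        { std::cout << "running normally\n"; return 0; }
    int  run_self_tests()         { std::cout << "self-tests passed\n"; return 0; }

    int main(int argc, char* argv[])
    {
        initialize_application();  // normal startup happens in both modes

        // In "-test" mode, exercise the initialized application with
        // simulated inputs; a non-zero exit code signals a failure.
        if (argc > 1 && std::strcmp(argv[1], "-test") == 0)
            return run_self_tests();

        return run_application();
    }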
This approach has several drawbacks compared to unit tests: it depends on the config files (the software may behave differently depending on the parameters), it depends on the database (on its content and on the ability to connect to it), and the tests are not isolated. The first two can be overcome by using default values for the configuration and connecting to a test database in test mode.
Are you defining BOOST_TEST_MAIN? If so, and you have no main function of your own (where you would otherwise put initialization code), you could feasibly use some form of singleton object that exposes an init function you can call before each test if required.
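For what it's worth, Boost.Test also provides BOOST_GLOBAL_FIXTURE, whose constructor and destructor run once around the whole suite; that may fit your initialisation/shutdown requirement more directly. A minimal sketch:

    #define BOOST_TEST_MODULE app_tests
    #include <boost/test/included/unit_test.hpp>

    // Constructor runs once before the first test, destructor once after
    // the last one -- a natural home for init and shutdown code.
    struct GlobalInit {
        GlobalInit()  { /* load config, connect to test database, ... */ }
        ~GlobalInit() { /* shutdown: close connections, ... */ }
    };
    BOOST_GLOBAL_FIXTURE(GlobalInit);

    BOOST_AUTO_TEST_CASE(initialisation_succeeded)
    {
        BOOST_CHECK(true);  // placeholder; assert on real state here
    }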
I'm setting up some unit tests in Codeception and need to create an instance of an Object. I can do this in _before, but this then creates a new instance before every test. I have tried to use _beforeSuite, but the constructor for the Object requires an environment variable; from my understanding this won't work, as _beforeSuite is run before bootstrap? When I try this out I seem to be getting null instead of the variable.
I am new to testing so I am curious if it is okay to create the Object in _before or if I should be using something else?
What you should strive for in testing in general is that executing your system under test (SUT) happens in a clearly defined context. During test execution you want all the aspects that could influence the execution of your SUT under control. Re-using objects between tests is therefore (normally) not advisable, because a previous test might have made modifications to the objects, which may then have an impact on the results of later tests. The advice against sharing objects between tests holds even if you know the exact order in which the tests will be executed (tests should be independent - there is lots of information about this on the web, e.g. Why in unit testing tests should not depend on the order of execution?).
Therefore, except in exceptional circumstances, prefer a fresh object for every single test. You can have it created in _before, but it might be even better (for readability purposes) to create it directly within each test case that needs it.
My business web application (PHP with HTML/JavaScript) has lots of very different options (about 1000) which are stored in the database, so the users can change them themselves. These options, for example, define whether a button, tab or input field is visible, the validation of inputs, and the workflow, like when e-mails should be sent. Each user has a user role, which also defines what they're able to see and do.
My users can use any combination of these options, so I find it very difficult to write tests for all these situations. I have 100+ customers, so writing tests for each customer is definitely not an option.
The problem is that some options work together, so while testing one option it's necessary to know the value of some other options. Ideally the tests should also be able to read the options profiles for each customer. But that would almost be like rewriting the whole application just for testing, which seems error-prone by itself.
Is it common in unit testing to read the database to get the test-data and options, or is that not a good idea?
How would you handle the situation I described?
First of all: yes, that's perfectly possible, although writing unit tests after the application is already written is not recommended and is extremely difficult.
Here are a few pieces of advice for your case:
Data Providers
Data providers make it possible to call the same test with different parameters, which prevents code duplication in your tests. They are perfect if you want to test the same method with different configurations.
https://phpunit.de/manual/3.7/en/writing-tests-for-phpunit.html#writing-tests-for-phpunit.data-providers
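The link above shows the PHPUnit syntax; the idea carries across frameworks. As an illustration in C++ (to keep the code examples on this page in one language), here is a sketch of the same pattern using Boost.Test's data-driven test cases; the is_visible() helper is hypothetical:

    #define BOOST_TEST_MODULE options
    #include <boost/test/included/unit_test.hpp>
    #include <boost/test/data/test_case.hpp>

    namespace bdata = boost::unit_test::data;

    // Hypothetical helper under test.
    bool is_visible(int option) { return option > 0; }

    // The test body runs once per sample -- the same idea as a data provider.
    BOOST_DATA_TEST_CASE(visibility_for_each_option, bdata::make({1, 2, 3}), option)
    {
        BOOST_CHECK(is_visible(option));
    }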
Mock objects
If objects depend on other objects, you use mock objects. Mocking an object is basically nothing more than creating a dummy object which has a defined behavior and won't do anything other than the things you told it to do.
Note that you can also mock the tested class itself! A mock will keep the methods of the mocked class by default, so you can mock the class you want to test and define a specific behavior for some methods while testing another.
If this is still not enough, you might want to think about splitting up your methods into smaller, more specific methods to get smaller units.
https://phpunit.de/manual/3.7/en/test-doubles.html
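To illustrate the "mock the tested class itself" idea outside of PHPUnit, here is a hand-rolled C++ sketch: the real methods are kept, and only the risky ones are overridden with canned behaviour (all class names are hypothetical):

    #include <cassert>
    #include <string>

    // Hypothetical class under test: notify() is what we want to test,
    // send() is the side effect we want to stub away.
    class Mailer {
    public:
        virtual ~Mailer() = default;
        bool notify(const std::string& user) {
            if (!option_enabled("email")) return false;
            return send(user);
        }
    protected:
        virtual bool option_enabled(const std::string&) { return true; }
        virtual bool send(const std::string&) { /* real SMTP call */ return true; }
    };

    // "Mocking the tested class itself": real methods are kept,
    // only send() is replaced with a recording stub.
    class TestableMailer : public Mailer {
    public:
        bool sent = false;
    protected:
        bool send(const std::string&) override { sent = true; return true; }
    };

    int main() {
        TestableMailer mailer;
        assert(mailer.notify("alice"));  // exercises the real notify()
        assert(mailer.sent);             // ...which delegated to the stub
    }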
Keep it small
Unit tests are called unit tests because they test the smallest possible unit of your code without executing anything else. So instead of testing whether a button behaves the way it should, just test whether it's visible when it should be. Test one kind of behavior and nothing more.
Don't read the database
It is highly unusual to read the database when writing unit tests, and it's even more unusual to use actual user data. Instead, you define test data. Rather than testing your users' configurations, you should test every possible configuration.
Code Coverage
A decent way to check whether your code is covered by the tests is code coverage. It will show you how much of your code, and which parts, were executed by the tests. Be aware that 100% coverage does not mean full coverage in reality, especially in your case: just because all lines of code were executed does not mean every option was considered. But it's a handy tool anyway, and you can see which code you are done with and which you forgot about.
https://phpunit.de/manual/current/en/code-coverage-analysis.html
Conclusion:
What you are trying to do is error-prone itself, yes, because usually you'd write your tests before writing the actual methods. And you will probably write more test code than the application itself contains, but that's not uncommon.
I have a large program that I need to write tests for. I'm wondering if it would be wrong to write tests that run in a specific order, as some necessarily have to run in order and depend upon a previous test.
For example a scenario like the following:
CreateEmployer
CreateEmployee (requires employer)
Add Department
The drawback I see to this approach is that if one test fails, all of the following tests will also fail. But I am going to have to write the code to build the database anyway, so it might be a more effective approach to use the code that builds the mock database as a sort of integration-like unit test.
Should I create the database without using the tests as a seed method, and then run each of the methods again to see the result? The problem I see with this approach is that if the seed method does not work, all of the tests will fail and it won't be immediately clear that the error is in the seed method and not in the services or the tests themselves.
Yes, this is discouraged. Tests shouldn't be "temporally coupled".
Each test should run in complete isolation from other tests. If you find yourself in a situation where the artifacts created by Test A are needed by Test B, then you have two problems to correct:
Test A shouldn't be creating artifacts (or side-effects).
Test B should be using mock data as part of its "arrange" step.
Basically, the unit tests shouldn't be using a real database. They should be testing the logic of the code, not the interaction with the infrastructure dependencies. Mock those dependencies in order to test just the code.
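As a sketch of what that looks like, here Test B's "arrange" step builds its own data through an in-memory fake instead of relying on artifacts from Test A; the EmployerStore interface and the logic under test are hypothetical:

    #include <cassert>
    #include <map>
    #include <string>

    // Seam: the logic under test talks to storage only through this interface.
    struct EmployerStore {
        virtual ~EmployerStore() = default;
        virtual bool exists(int employerId) const = 0;
    };

    // In-memory fake used in the "arrange" step instead of a real database.
    struct FakeEmployerStore : EmployerStore {
        std::map<int, std::string> rows;
        bool exists(int id) const override { return rows.count(id) > 0; }
    };

    // Hypothetical logic under test.
    bool can_create_employee(const EmployerStore& store, int employerId) {
        return store.exists(employerId);
    }

    int main() {
        // Arrange: this test creates everything it needs by itself.
        FakeEmployerStore store;
        store.rows[42] = "Acme";

        // Act + Assert: no other test had to run first.
        assert(can_create_employee(store, 42));
        assert(!can_create_employee(store, 7));
    }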
It is a bad idea to have unit tests depend upon each other. In your example you could have a defect in both CreateEmployer and AddDepartment, but when all three tests fail because of the CreateEmployer test you might mistakenly assume that only the CreateEmployer test is 'really' failing. This means you've lost the potentially valuable information that AddDepartment is failing as well.
Another problem is that you might create a separate workflow in the future that calls AddDepartment without calling CreateEmployer. Now your tests assume that CreateEmployer will always be called, but in reality it isn't. All three tests could pass, but the application could still break because you have a dependency you didn't know was there.
The best tests won't rely on a database at all, but will instead allow you to manually specify or "Mock" the data. Then you don't have to worry about an unrelated database problem breaking all of your tests.
If these are truly Unit tests then yes, requiring a specific order is a bad practice for several reasons.
Coupling - As you point out, if one test fails, then subsequent ones will fail as well. This will mask real problems.
TDD - A core principle of TDD is to make tests easy to run. If you do so, developers are more likely to run them. If they are hard to run (e.g. having to run the entire suite), then they are less likely to be run and their value is lost.
Ideally, unit tests should not depend upon the completion of another test in order to run. It is also hard to run them in a given sequence, but that may depend upon your unit testing tool.
In your example, I would create a test that tests the CreateEmployer method and makes sure it returns a new object the way you expect.
The second test I would create would be for CreateEmployee, and if that test requires an Employer object, your CreateEmployee method could receive its Employer object through dependency injection. Here is where you would use a mock object (one coded to return a fixed/known Employer) as the Employer object the CreateEmployee method would consume. This lets you test the CreateEmployee method and its actions upon that object with a given/known instance of the Employer object.
Your third test, AddDepartment, I assume also depends upon an Employer object. This unit test can follow the same pattern and receive a mock Employer object to consume during its test. (The same object you pass to unit test two above.)
Each test now runs/fails on its own, and can run in any sequence.
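A minimal sketch of the dependency-injection idea described above, with hypothetical Employer/Employee types standing in for yours:

    #include <cassert>
    #include <string>

    // Hypothetical domain types, for illustration only.
    struct Employer { std::string name; };

    struct Employee {
        // Dependency injection: the Employer is passed in, never created here.
        Employee(const Employer& boss, std::string who)
            : employer(boss), name(std::move(who)) {}
        Employer employer;
        std::string name;
    };

    int main() {
        // A fixed/known Employer replaces the output of the CreateEmployer test.
        Employer acme{"Acme"};

        Employee alice(acme, "Alice");

        assert(alice.employer.name == "Acme");  // passes in any test order
    }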
Exactly how independent should unit tests be? What should be done in a "before" section of a unit testing suite?
Say, for example, I am testing the functionality of a server: should the server be created, initialised, connected to its various data sources, etc., inside the body of every test case? Are there situations where it may be appropriate to initialise the server once and then test more than one case?
The other situation I am considering is mobile app testing, where phone objects need to be created to perform a unit test. Should this be done every time: create phone, initialise, run test, destroy phone, repeat?
Unit tests should be completely independent, i.e. each should be able to run in any order so each will need to have its own initialization steps.
Now, if you are talking about server or phone initialization, it sounds more like integration tests rather than unit tests.
Ideally yes. Every test should start from scratch and put the system into a particular well-defined state before executing the function under test. If you don't, then you make it more difficult to isolate the problem when a test fails. Even worse, extra state left behind by an earlier test may cause a test to pass when it should fail.
If you have a situation where the setup time is too long, you can mock or stub some of the ancillary objects.
If you are worried about having too much setup code, you can refactor the setup code into reusable functions.
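For example, with Boost.Test a fixture gives you exactly this: the setup code lives in one place, but each test still starts from scratch. The Server class here is a hypothetical stand-in with its data sources already stubbed out:

    #define BOOST_TEST_MODULE server_tests
    #include <boost/test/included/unit_test.hpp>

    // Hypothetical server, for illustration only.
    struct Server {
        bool started = false;
        void start() { started = true; }
    };

    // Reusable setup: each test gets a fresh, fully initialised server.
    struct ServerFixture {
        ServerFixture()  { server.start(); }    // runs before every test
        ~ServerFixture() { /* stop server */ }  // runs after every test
        Server server;
    };

    BOOST_FIXTURE_TEST_CASE(server_reports_started, ServerFixture)
    {
        BOOST_CHECK(server.started);
    }

    BOOST_FIXTURE_TEST_CASE(state_does_not_leak_between_tests, ServerFixture)
    {
        BOOST_CHECK(server.started);  // a brand new fixture, not the one above
    }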
Basically I have two main questions:
What exactly should you unit test?
How do you do it?
The problem is I have several applications that rely on a database connection and/or are communication applications, which means most of the test cases are integration tests (or so I think).
Most classes are fairly simple by themselves, but the ones that implement the communication protocol, which are the ones that would be most useful to test automatically, I can't seem to fit well into the "unit test" model.
Another example: I developed a pipe structure with multithreading support for a consumer/producer pattern. When a thread reads the pipe and finds it empty, it blocks until a writer writes into the pipe. Should I use unit tests to test that class?
How do you decide what to unit test?
Edit: I mean writing unit tests for automated unit testing.
Unit tests test units of your code. The real question is: what exactly makes up a unit?
In an object-oriented environment, a unit is a class. A class, because the behaviour of an object varies with its state, so testing a method in isolation will not yield the most complete results.
First you need to identify the invariants of the class. That is, the things that will always be true for all instances of the class. E.g. in a Fraction class, an invariant may be denominator != 0.
Next you need to identify the contracts of each method, that is, the pre and post conditions of the methods.
Then you write tests for each condition that may arise. So for a single class you may end up with many test methods to cover the various conditions that each method could encounter. In each test you ensure that the invariants of the class hold and the contract of the method is never broken.
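Sticking with the Fraction example, a minimal sketch of invariant and contract checks; the class itself is written inline just so the example is self-contained:

    #include <cassert>
    #include <stdexcept>

    // Minimal Fraction with the invariant: denominator != 0.
    class Fraction {
    public:
        Fraction(int num, int den) : num_(num), den_(den) {
            if (den_ == 0) throw std::invalid_argument("denominator is zero");
        }
        int den() const { return den_; }
    private:
        int num_, den_;
    };

    int main() {
        // Contract: a valid call establishes the invariant.
        Fraction half(1, 2);
        assert(half.den() != 0);

        // Contract: a violated precondition is rejected, so the
        // invariant can never be observed broken.
        bool rejected = false;
        try { Fraction bad(1, 0); }
        catch (const std::invalid_argument&) { rejected = true; }
        assert(rejected);
    }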
In some cases like the example that you provided it may be necessary to create other objects in the environment in order to test the conditions of your class. In those instances you may use mock objects.
You should abstract your infrastructure concerns (i.e. code that retrieves data from your database, code that does file I/O, etc.) so that you can stub/mock those parts in order to unit test your application. Then you will be able to write targeted/specific tests against your infrastructure code to test that out.
You will find yourself creating more interfaces (to create seams within your application) and needing to use better OO principles (i.e. SOLID) in order to develop an application that is more testable.
I was in the same boat a year ago that you are in now. And the one book that really helped me through it (along with some hands-on practice) is The Art of Unit Testing by Roy Osherove.
Unit tests test units (that is, a method or function) in isolation, in a dedicated, controlled environment. Each unit test creates its own environment by instantiating only the classes needed to execute one test, putting them in a known state; then it invokes the method to be tested and verifies the outcome. This verification is done by assertions on the behavior of the method (as opposed to its implementation).
Performing the verification on the behavior and not on the implementation is important as this allows modifying the implementation without breaking the unit tests, and therefore using the unit tests as a safety net for the modification.
Every language has [at least] one unit test framework whose role is to execute the unit tests. There are two ways to write unit tests: test-first or test-last.
Test-first is also called Test-Driven Development. Basically it takes three steps:
write a failing test
write just enough code to make it pass
refactor the code to clean it up (remove duplication ...)
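A toy illustration of the first two steps (add() is a hypothetical function):

    #include <cassert>

    // Step 1 (red): this test is written first; add() does not exist yet,
    // so the test fails (it does not even compile).
    // Step 2 (green): write just enough code to make it pass:
    int add(int a, int b) { return a + b; }

    int main() {
        assert(add(2, 3) == 5);  // the test that drove the implementation
        // Step 3 would be refactoring add() while keeping this test green.
    }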
Proponents of TDD claim that this leads to testable code, whereas it can be hard to write unit tests after the fact, especially when methods do several things. It is recommended to follow the Single Responsibility Principle.
Regarding the pipe structure and communications protocol example, some guidelines say that
a test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
...
"When a thread reads the pipe and finds it empty, it blocks until a writer writes into the pipe. Should I use unit tests to test that class?"
I would test the class, but not the blocking read method, as I presume it is built from a blocking call to the read() function of the operating system.
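To make that concrete, here is a sketch that tests the observable behaviour of such a pipe without asserting anything about how the blocking is implemented; the Pipe class below is a minimal stand-in for yours:

    #include <cassert>
    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>

    // Minimal stand-in for the pipe described above, for illustration only.
    template <typename T>
    class Pipe {
    public:
        void write(T value) {
            { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(value)); }
            cv_.notify_one();
        }
        T read() {  // blocks until a writer writes
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return !q_.empty(); });
            T v = std::move(q_.front()); q_.pop();
            return v;
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<T> q_;
    };

    int main() {
        Pipe<int> pipe;

        // Deterministic test: pre-fill, then read -- no blocking involved.
        pipe.write(1);
        pipe.write(2);
        assert(pipe.read() == 1 && pipe.read() == 2);  // FIFO order preserved

        // Behavioural test: a (possibly) blocked reader is released by a writer.
        int got = 0;
        std::thread reader([&] { got = pipe.read(); });
        pipe.write(42);
        reader.join();
        assert(got == 42);
    }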
Unit testing is testing that the entire unit works the same as it did before the change, with the improvements made in the change. If you are making a change to a window of a module, then you test the module. This is in comparison to a system test, which tests every module. If your change affects many modules (I'm not sure how your system is set up), then it should get a system test.
Here's another way to think about it - particularly if you are not focusing on object oriented code, but perhaps more functional or procedural code.
Ideally, unit tests should cover every path through the code. They are your chance to see that every path through your code works as expected and as needed. If you are practicing Test-Driven Development, then it's implied that everything gets a test.
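As a tiny illustration of "every path": a function with one branch needs at least two tests, one per path (the function is hypothetical):

    #include <cassert>

    // Hypothetical function with two paths through it.
    int clamp_to_positive(int x) {
        if (x < 0) return 0;  // path 1
        return x;             // path 2
    }

    int main() {
        assert(clamp_to_positive(-5) == 0);  // exercises path 1
        assert(clamp_to_positive(7) == 7);   // exercises path 2
    }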
I think looking at this blog post will help clarify: https://earnestengineer.blogspot.com/2018/03/unit-testing-what-to-test.html