I am not very familiar with the "unittest" topic. I'd like to create a unit test that says "Hey buddy, this is the wrong (or right) answer because blabla!". I need to add a unit test because it took three long weeks to find out why the prediction of the machine learning model did not work, and I want to avoid that type of error in the future.
Questions:
How can I make the code alert me when len(X) - len(pred_values) is not equal to num_step?
Do I need to create a unit test file to gather all the unit tests, e.g. unittest.py?
Do we need to place the unit tests away from the main code?
1.
The test code can alert you by means of an assertion. In your test, you can use self.assertEqual()
self.assertEqual(len(X) - len(pred_values), num_step)
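For example, a minimal test case could look like this (make_prediction and the module name my_model are assumptions standing in for however you actually produce X, pred_values and num_step):

import unittest

from my_model import make_prediction  # hypothetical: the code that builds X, pred_values, num_step

class PredictionShapeTest(unittest.TestCase):

    def test_prediction_length_matches_num_step(self):
        X, pred_values, num_step = make_prediction()
        # Fails with a clear message when the lengths do not line up as expected
        self.assertEqual(len(X) - len(pred_values), num_step,
                         "len(X) - len(pred_values) should equal num_step")

if __name__ == '__main__':
    unittest.main()

Running this file (or python -m unittest) will then tell you immediately whenever the relationship between X, pred_values and num_step is broken.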
2.
Yes, you would normally gather your TestCase classes in a module prefixed with test_ (avoid naming the file unittest.py itself, since that would shadow the standard library module). So if the code under test lives in a module called foo.py, you would place your tests in test_foo.py.
Within test_foo.py you can create multiple TestCase classes that group related tests together.
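As a structural sketch, assuming the code under test lives in foo.py (the class names and the ... placeholders are only there to illustrate the layout):

import unittest

import foo  # the module under test (placeholder name)

class ParsingTests(unittest.TestCase):
    def test_parses_valid_input(self):
        ...  # assertions about foo's parsing behaviour

class PredictionTests(unittest.TestCase):
    def test_prediction_length_matches_num_step(self):
        ...  # assertions about foo's prediction behaviour

if __name__ == '__main__':
    unittest.main()

Sticking to the test_ prefix also means the built-in discovery (python -m unittest) will pick the file up automatically.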
3.
It is a good idea to separate the tests from the main code, although it is not mandatory. Reasons why you might want to separate the tests include (as quoted from the docs):
The test module can be run standalone from the command line.
The test code can more easily be separated from shipped code.
There is less temptation to change test code to fit the code it tests without a good reason.
Test code should be modified much less frequently than the code it tests.
Tested code can be refactored more easily.
Tests for modules written in C must be in separate modules anyway, so why not be consistent?
If the testing strategy changes, there is no need to change the source code.
Lots more info in the official docs.
If I have UI automation tests, why do I need to write unit tests?
If I need to check that a method returns some output for a given input, for example the result of an addition which is then displayed in a view, why do I need a unit test if I can confirm that the output in the view is correct (or not correct) through a UI automation test?
Unit tests and end-to-end tests (UI tests) have two different purposes:
Unit tests tell you when a unit of code (module, class, function, interface) has an issue.
End-to-end tests tell you how that failure affects the end-to-end output.
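For the addition example from the question, the two kinds of tests target different layers. A unit test exercises only the calculation, with no view involved at all (Calculator, add and the module name are assumed names for illustration):

import unittest

from calculator import Calculator  # hypothetical module holding the addition logic

class CalculatorTest(unittest.TestCase):

    def test_add_returns_sum(self):
        # Exercises just the unit: no view rendering, no UI driver
        self.assertEqual(Calculator().add(2, 3), 5)

The UI automation test is still needed to confirm that the result actually reaches the view, but when something breaks, the unit test tells you immediately whether the calculation itself or only the display is at fault.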
Let's use an analogy to understand why we need both.
Suppose you were manufacturing a car by assembling different components like the carburettor, gearbox, tyres, crankshaft etc. All these parts are made by different vendors (think developers).
When the car fails to work as expected, will you need to test the individual components to figure out where the problem originates?
Will testing the components before assembling the car save you time and effort?
Typically what you want to do is make sure each component works as expected (unit tests) before you add it to your car.
When the car does not work as expected, you test each component to find the root cause of the problem.
This typically works by creating an assembly line (CI pipeline). Your testing strategy looks like:
test individual components
test if they work when interfaced with other components
test the car once all components are assembled together.
This testing strategy is what we call a testing pyramid in programming.
Reading this might give you more insight : https://martinfowler.com/bliki/TestPyramid.html
Two reasons immediately come to mind as to why you would want unit tests despite having automation tests.
Unit tests make ruthless code refactoring a much less daunting challenge and mitigate the risk involved.
Unit tests provide invaluable documentation of the code and of what each module does (automation tests don't give you this). When the code changes, the unit tests change with it, unlike stale documentation in some wiki or doc that never gets updated as the code continues to change and evolve over time.
In addition to Nishant's and James' answers: with UI/end-to-end tests it is much harder to test for certain error conditions.
First of all, you need to understand that unit test cases and user interface (UI) test automation are two different concepts. In unit test cases, you write test cases per unit and test them module by module; you're actually testing each module separately.
Test automation, on the other hand, covers end-to-end testing. It tests your end-to-end inputs and their respective outputs. Both have their own advantages, so you need to use both on your product to make sure it is bug-free. Let's better understand the need for unit tests with the help of an example:
You're building a chatting app. For the app, you're integrating different modules like login, registration, send and receive a message, message history etc. Now, suppose there are multiple developers working on this product: each developer has worked on a different module. In this scenario, you need to join all the modules into the system flow to make the complete product. When you integrate all the modules, you find that the product is not able to store messages. So, now you need to test each module independently because you can't tell which specific module didn't work.
To avoid such cases, it's better to test each module before merging it with the others. This is called unit testing. If unit testing is done correctly, you will get the bug immediately. Once all the unit test cases pass, you can finally start integrating modules.
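A sketch of what testing one module in isolation might look like for the messaging example (MessageHistory, its storage dependency and the method names are all hypothetical):

import unittest
from unittest.mock import Mock

from chat.history import MessageHistory  # hypothetical module from the chat-app example

class MessageHistoryTest(unittest.TestCase):

    def test_received_message_is_stored(self):
        storage = Mock()                   # stand-in for the real storage module
        history = MessageHistory(storage)
        history.record("hello")
        # The history module is tested on its own: if this fails, the bug is here,
        # not in login, registration or any other module.
        storage.save.assert_called_once_with("hello")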
Unit testing is generally executed through an assembly line (CI pipeline). Your product is far more likely to work if you create a good testing strategy and write solid test cases. The flow is a bit like this:
Test individual modules
Start integrating and testing each functionality and see if it's working or not
Run UI automation test cases on the product once you have integrated all the modules
In the end, if all test cases pass, you can be reasonably confident that the system works as intended.
We use SWTBot for writing functional tests. Some cases are very difficult to test, and some programmers use classes and their methods directly from the implementation (for example, they call methods from the class AddUserDialog). Is this a good approach, and why?
And the next question, please: is SWTBot enough for testing an Eclipse RCP based application? Is it necessary to write unit tests as well?
Note: we are a Scrum team.
SWTBot and JUnit serve two different purposes.
JUnit
As the name implies, JUnit is meant for unit testing. Unit tests should be small and fast to execute. They test only a single unit of code, and the above-mentioned attributes allow them to be executed often while developing the unit under test.
But there is more to (good) unit tests. You may want to read one of the following posts for further attributes of unit tests:
Key qualities of a good unit test
What attribute should a good Unit-Test have?
I would go one step further and say that unit tests only make sense in TDD, that is, you write the test before the production code. Otherwise you neglect the tests. Who wants to make the extra effort of writing tests for something that already works? And even if you have the discipline to write the tests afterwards, they merely manifest the current state of your production code. Writing tests beforehand, as in TDD, leads to lean production code that only does what is required by the tests.
But I guess that's something not everyone will agree on.
In an RCP setting, unit tests would ideally be able to run without starting the platform (which takes rather a long time), i.e. they would not need to be run as PDE JUnit tests but as plain JUnit tests instead. Therefore the unit under test should be isolated from the RCP APIs.
On a related note, see also this question: How to efficiently JUnit test Eclipse RCP Plugins
SWTBot
While SWTBot uses the JUnit runtime to execute the tests, it is rather meant as a utility to create integration or functional tests. SWTBot, when used with RCP, starts the entire workbench and runs all tests within the same instance. Therefore great care should be taken to ensure that each test leaves the environment in the same state as it was before the test started. Specialized Rules may help here to set up and tear down a particular recurring scenario.
It is perfectly valid to call methods from your application in order to set up an SWTBot test. For example, you could programmatically open the wizard and then use SWTBot to simulate a user that enters data and presses the OK button. There is no need to use SWTBot to laboriously open the wizard itself.
In my experience, SWTBot can even be too much for simple use cases. Consider a test that should enter some data into a dialog and then press OK. If you already have the dialog opened programmatically, you can just as well continue without SWTBot:
dialog.textField.setText( "data" );
dialog.okButton.notifyListeners( SWT.Selection, null );
assertThat( dialog.getEnteredData() ).isEqualTo( "data" );
Use Both
The best bet is to have both, unit tests that ensure the behavior of the respective units and functional tests that make sure that the particular units play together as desired.
Not sure if that answers the question, if you have further concerns please leave a comment.
I have often seen tests where canned inputs are fed into a program and the generated outputs are checked against canned (expected) outputs, usually via diff. If the diff is accepted, the code is deemed to pass the test.
Questions:
1) Is this an acceptable unit test?
2) Usually the unit test inputs are read in from the file system and are big XML files (maybe they represent a very large system). Are unit tests supposed to touch the file system? Or would a unit test create a small input on the fly and feed that to the code to be tested?
3) How can one refactor existing code to be unit testable?
Output differences
If your requirement is to produce output with a certain degree of accuracy, then such tests are absolutely fine. It's you who makes the final decision: "Is this output good enough, or not?"
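A sketch of such a golden-output check written as a test (generate_report and the expected text are assumptions; with real data the expected output would typically live next to the test as a checked-in resource):

import unittest

from report import generate_report  # hypothetical function under test

class ReportOutputTest(unittest.TestCase):

    maxDiff = None  # show the full diff when the comparison fails

    def test_output_matches_expected(self):
        expected = "header\nrow 1\nrow 2\n"
        # assertEqual on strings prints a readable diff, much like the manual diff workflow
        self.assertEqual(generate_report(), expected)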
Talking to the file system
You don't want your tests to talk to the file system in the sense of relying on files that happen to exist somewhere in order for your tests to work (for example, reading values from configuration files). It's a bit different with test input resources: you can usually embed them in your tests (or at least in the test project), treat them as part of the codebase, and load them before the test executes. For example, when testing rather large XMLs it is reasonable to have them stored as separate files, rather than as strings in code files (which can sometimes be done instead).
The point is: you want to keep your tests isolated and repeatable. If you can achieve that with a file being loaded at runtime, it's probably fine. However, it is still better to have such files as part of the codebase/resources than as arbitrary system files lying around somewhere.
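For small cases, the input can simply be built in the test itself instead of depending on a file that happens to exist on the machine (parse_orders and the XML shape are assumptions; for genuinely large documents, a resource file checked in next to the test is the better option, as described above):

import unittest

from orders import parse_orders  # hypothetical code under test

SMALL_INPUT = """<orders>
    <order id="1" total="19.99"/>
</orders>"""

class ParseOrdersTest(unittest.TestCase):

    def test_parses_single_order(self):
        # The input lives in the test itself: no dependency on the file system,
        # so the test stays isolated and repeatable on any machine.
        orders = parse_orders(SMALL_INPUT)
        self.assertEqual(len(orders), 1)
        self.assertEqual(orders[0].total, 19.99)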
Refactoring
This question is fairly broad, but to put you in the right direction: you want to introduce a more solid design, decouple objects and separate responsibilities. A better design will make testing easier and, most importantly, possible. Like I said, it's a broad and complex topic, with entire books dedicated to it.
1) is this an acceptable unit test?
This is not a unit test by definition. A unit test focuses on the smallest possible amount of code. Your test can still be a useful test, regression test, self-documenting test, TDD test, etc. It is just not a unit test, although it may be equally useful.
2) Are unit tests supposed to touch the file system?
Typically not, unless you need to unit test something explicitly related to the file system. One reason is that if you have several hundred unit tests, it is nice to have them run in a couple of seconds rather than minutes.
3) How can one refactor existing code to be unit testable?
A better question is why do you want the code to be unit testable? If you are trying to learn TDD it is better to start with a new project. If you have bugs then try to write tests for the bugs. If the design is slowing you down then you can refactor towards testability over time.
Addressing only the 3rd question. It is extremely difficult. You really need to write tests at the same time you write the code, or before. It is a nightmare to try to slap tests onto an existing code base, and it is often more productive to throw away the code and start over.
This is an acceptable unit test.
The files being read should be part of the test project, so anyone who checks out the project from the repository will have the same files at the same relative location.
Having black-box tests is a great start: you can refactor the existing code and use the current tests to verify that it is still working (depending on the quality of the tests). Here is a short blog post about refactoring for testability: http://www.beletsky.net/2011/02/refactoring-to-testability.html
A diff test can be acceptable as a unit test, especially when you're using test data that is shared between unit tests.
If you don't know how many items there are in the SUT you could use the following:
int itemsBeforeTest = SUT.Items.Count;
SUT.AddItem();
Assert.AreEqual(itemsBeforeTest + 1, SUT.Items.Count);
If a unit test requires so much data that it needs to be read from a big XML file, it's not a real unit test. A unit test should test a class in complete isolation and mock out all dependencies.
Using a pattern like the Builder pattern can also help in creating test data for your unit test. The biggest problem with having your test data in a separate file is that it's hard to understand what the test does exactly. If you create your test data in the arrange part of your unit test, it's immediately clear what is important for the test.
For example, let's say you have the following arrange code to test if the price of an invoice is correct:
Address billingAddress = new Address("Stationsweg 9F", "Groningen", "Nederland", "9726AE");
Address shippingAddress = new Address("Aweg 1", "Groningen", "Nederland", "9726AB");
Customer customer = new Customer(99, "Piet", "Klaassens", 30, billingAddress, shippingAddress);
Product product = new Product(88, "Tafel", 19.99);
Invoice invoice = new Invoice(customer);
This can be changed to the following when using a Builder:
Invoice invoice = Builder<Invoice>.CreateNew()
.With(i => i.Product = Builder<Product>.CreateNew()
.With(p => p.Price = 19.99)
.Build())
.Build();
When using a Builder it's much easier to see what is important, and your code is also more maintainable.
Refactoring code to become more testable is a broad topic. It comes down to thinking about 'how would I test this code?' while you are writing the code.
Take the following example:
public class AlarmClock
{
public AlarmClock()
{
SatelliteSyncService = new SatelliteSyncService();
HardwareClient = new HardwareClient();
}
}
This is hard to test. You need to make sure that both the SatelliteSyncService and the HardwareClient are functional when testing the AlarmClock.
This change to the constructor makes it much easier to test:
public AlarmClock(IHardwareClient hardwareClient, ISatelliteSyncService satelliteSyncService)
{
SatelliteSyncService = satelliteSyncService;
HardwareClient = hardwareClient;
}
Techniques like Dependency Injection help with refactoring your code to be more testable. Also watch out for static calls like DateTime.Now and for Singletons, because they are hard to test.
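The same idea applies to time: inject the clock instead of reading it directly, so a test can pin it to a known value. A small sketch in Python, just to illustrate the principle (the Invoice class and its fields are made up):

from datetime import datetime

class Invoice:
    # The clock is injected; production code passes nothing and gets the real time,
    # while a test passes a fixed function so the result is predictable.
    def __init__(self, clock=datetime.now):
        self.created_at = clock()

# In a test:
fixed_time = datetime(2020, 1, 1, 12, 0, 0)
invoice = Invoice(clock=lambda: fixed_time)
assert invoice.created_at == fixed_time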
A really good introduction to writing testable code can be found here.
You should not need to refactor the code in order to be able to perform unit tests. Unit tests, as the name implies, test a unit of code of the system. The best unit tests are small, quick to execute and exercise only a very small subset of the piece of code being tested (e.g. a class).
The reason for having small, compact unit tests that only exercise one part of the code is that the objective of unit tests is to find bugs in that unit of code. If the unit test takes a long time to execute and tests lots of things it makes the identification of a bug in the code that much harder.
As to accessing the file system, I see no problem. Some unit tests may require a database to be constructed before the test is carried out, or produce output for which writing the checks in code would be difficult or time-consuming.
The files used for unit testing should be treated like the rest of the code: put them under version control. If you are paranoid, you could implement a check within the unit test, such as computing an MD5 hash of the file and comparing it against a hard-coded value, so future reruns of the test can verify that the test data has not inadvertently changed.
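A sketch of such a guard in Python (the file path is a placeholder, and the expected digest string has to be computed once from the real file and pasted in):

import hashlib
import unittest

class TestDataIntegrityTest(unittest.TestCase):

    def test_input_file_has_not_changed(self):
        with open("testdata/big_input.xml", "rb") as f:  # placeholder path
            digest = hashlib.md5(f.read()).hexdigest()
        # Placeholder value: replace with the digest of the known-good file
        self.assertEqual(digest, "<known-good-md5-digest>")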
Just my humble thoughts.
I just started to use TDD and it seems to be quite helpful in my case. The only thing that bothers me is that I do not see a way to mark a test as "to be implemented". While I develop an application I sometimes come up with new tests that it should pass in the future, once I am done with the current changes. So I write these tests, and of course they fail, as there is no such functionality yet. But since I am not going to "fix" them until I finish the current part, I would like to see something like a yellow state (between red and green): red only for broken tests, and another colour to mark TODO tests. Is there any practice that can help me here? I could keep such tests in some kind of list, but that would be double work, as I would state the same thing first in words and then in code.
EDIT: I just found that there are TODO tests in Perl's standard unit testing framework; maybe there is something similar in Java?
The point of TDD is to write tests first and then keep coding until all your tests pass. Having yellow tests might help you organize yourself, but TDD loses some clarity with that.
Both MSTest and NUnit, for example, support an Inconclusive state, but whether it appears as yellow in the UI depends on your test runner. JUnit may also have support for inconclusive results.
In Kent Beck's TDD by Example, he suggests writing a list of tests to write on a notepad aka a "test list". You then work on only one test at a time and progress through the list in an order that makes sense to you. It's a good approach because you might realize that some tests on your list might be unnecessary after you finish working on a test. In effect, you only write the tests (and code) that you need.
To do what you're suggesting, you would write your test list in code, naming the test methods accordingly with all the normal attributes, but the body for each test would be "Assert.Inconclusive()"
I've done this in the past, but my test body would be "Assert.Fail()". It's a matter of how you work -- I wouldn't check in my changes until all the tests passed. As asserting inconclusive is different than asserting failure, it can be used as a means to check code in to share changes without breaking the build for your entire team (depending on your build server configuration and agreed upon team process).
In JUnit you can use an @Ignore annotation on the test to have the framework skip it during the run. Presuming you have a test list, you could just place them in the test as follows:
@Ignore
@Test
public void somethingShouldHappenUnderThisCircumstance() {}
@Ignore
@Test
public void somethingShouldHappenUnderThatCircumstance() {}
Of course, if you don't mark them as a test in the first place you won't need the ignore. IDEs such as IntelliJ will flag ignored tests so that they stand out better.
Perfectly alright, as long as you do not break your train of thought to go and implement these new tests right away. You could make a note of this on your "test list" (a piece of paper), OR write empty test stubs with good names and mark them up with an Ignore("Impl pending") attribute.
The NUnit GUI shows ignored tests as yellow, so there's a good chance that JUnit does the same. You'd need the corresponding Ignore annotation to decorate your tests.
I am using the Boost 1.34.1 unit test framework. (I know the version is ancient, but right now updating or switching frameworks is not an option for technical reasons.)
I have a single test module (#define BOOST_TEST_MODULE UnitTests) that consists of three test suites (BOOST_AUTO_TEST_SUITE( Suite1 );) which in turn consist of several BOOST_AUTO_TEST_CASE()s.
My question:
Is it possible to run only a subset of the test module, i.e. limit the test run to only one test suite, or even only one test case?
Reasoning:
I integrated the unit tests into our automake framework, so that the whole module is run on make check. I wouldn't want to split it up into multiple modules, because our application generates lots of output and it is nice to see the test summary at the bottom ("X of Y tests failed") instead of spread across several thousand lines of output.
But a full test run is also time-consuming, and the output of the test you're looking for is likewise drowned out; thus, it would be nice if I could somehow limit the scope of the tests being run.
The Boost documentation left me pretty confused and none the wiser; anyone around who might have a suggestion? (Some trickery allowing to split up the test module while still receiving a usable test summary would also be welcome.)
Take a look at the --run_test parameter; it should provide what you're after. For example, invoking the test binary with --run_test=Suite1 limits the run to that suite, and something like --run_test=Suite1/MyTestCase (check the exact syntax for your Boost version) narrows it down to a single test case.