I just started to use TDD and it seems to be quite helpful in my case. The only thing that bothers me is that I don't see a way to mark a test as "to be implemented". While I develop an application I sometimes come up with new tests that it should pass in the future, once I'm done with the current changes, so I write these tests and of course they fail, as there is no such functionality yet. But since I'm not going to "fix" them until I finish the current part, I would like to see something like a yellow state (between red and green): red only for broken tests, and another color to mark TODO tests. Is there any practice that can help me here? I could write such tests down on some kind of list, but then it would be double work, as I would say the same thing first in words and then in code.
EDIT: I just found that there are TODO tests in Perl's standard unit testing framework; maybe there is something similar in Java?
The point of TDD is to write tests first and then keep coding until all your tests pass. Having yellow tests might help you organize yourself, but TDD loses some clarity with that.
Both MSTest and NUnit, for example, support an Inconclusive state, but whether these appear as yellow in the UI depends on your test runner. JUnit may also have support for inconclusive results.
In Kent Beck's TDD by Example, he suggests keeping a list of the tests you plan to write on a notepad, a.k.a. a "test list". You then work on only one test at a time and progress through the list in an order that makes sense to you. It's a good approach because you might realize that some tests on your list are unnecessary after you finish working on a test. In effect, you only write the tests (and code) that you need.
To do what you're suggesting, you would write your test list in code, naming the test methods accordingly with all the normal attributes, but the body of each test would just be Assert.Inconclusive().
I've done this in the past, but my test body would be Assert.Fail(). It's a matter of how you work -- I wouldn't check in my changes until all the tests passed. Since asserting inconclusive is different from asserting failure, it can be used as a means to check in code and share changes without breaking the build for your entire team (depending on your build server configuration and your agreed-upon team process).
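If you are on JUnit, there is no direct Assert.Inconclusive(), but a failed assumption gives a similar skipped-rather-than-failed result. A minimal sketch, assuming JUnit 4 (the class and method names are made up):

import static org.junit.Assume.assumeTrue;

import org.junit.Test;

public class TestListPlaceholders {

    @Test
    public void somethingToImplementLater() {
        // Not implemented yet: a failed assumption makes JUnit report
        // the test as skipped rather than failed, so it will not turn
        // the bar red the way Assert.fail() would.
        assumeTrue(false);
    }
}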
In JUnit you can use the @Ignore annotation on a test to have the framework skip it during the run. Presuming you have a test list, you could just place the entries in your test class as follows:
@Ignore
@Test
public void somethingShouldHappenUnderThisCircumstance() {}

@Ignore
@Test
public void somethingShouldHappenUnderThatCircumstance() {}
Of course, if you don't mark them as a test in the first place you won't need the ignore. IDEs such as IntelliJ will flag ignored tests so that they stand out better.
Perfectly alright - as long as you do not break your train of thought to go implement these new tests. You could make a note of this on your "test list" (a piece of paper), OR write empty test stubs with good names and mark them up with an Ignore("Impl pending") attribute.
The NUnit GUI shows ignored tests as yellow, so there's a good chance that JUnit runners do the same. You'd need the corresponding @Ignore annotation to decorate your tests.
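In JUnit 4 the same idea looks like this (a sketch; the reason string is optional, but most runners display it):

import org.junit.Ignore;
import org.junit.Test;

public class PendingTests {

    @Ignore("Impl pending") // skipped by the runner, reason shown in the report
    @Test
    public void newBehaviourShouldWork() {}
}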
I am not that good with the "unittest" topic. I'd like to create a unit test in order to say "Hey buddy, this is the wrong (or right) answer because blabla!". I need to add a unit test because it took 3 MF weeks to find out why the prediction in the machine learning model did not work! Hence I want to avoid that type of error in the future.
Questions:
How can I ask the code to alert me when len(X) - len(pred_values) is not equal to num_step?
Do I need to create a unit test file to gather all the unit tests, e.g. unittest.py?
Do we need to place the unit tests away from the main code?
1.
The test code can alert you by means of an assertion. In your test, you can use self.assertEqual():
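# fails (and the test runner alerts you) when len(X) - len(pred_values) != num_step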
self.assertEqual(len(X) - len(pred_values), num_step)
2.
Yes, you would normally gather your TestCase classes in a module prefixed with test_. So if the code under test lives in a module called foo.py, you would place your tests in test_foo.py.
Within test_foo.py you can create multiple TestCase classes that group related tests together.
3.
It is a good idea to separate the tests from the main code, although it is not mandatory. Reasons why you might want to separate the tests include (as quoted from the docs):
The test module can be run standalone from the command line.
The test code can more easily be separated from shipped code.
There is less temptation to change test code to fit the code it tests without a good reason.
Test code should be modified much less frequently than the code it tests.
Tested code can be refactored more easily.
Tests for modules written in C must be in separate modules anyway, so why not be consistent?
If the testing strategy changes, there is no need to change the source code.
Lots more info in the official docs.
Scenario: I need to write a complex nHibernate query that returns a projected DTO, but I want to use a TDD approach. The method would look like this:
public PrintDTO GetUsersForPrinting(int userId)
{
    Session.QueryOver<User>() // some joins, conditions etc.
    // returns projected dto
}
Questions:
Since the most common approach is to use an in-memory database for this kind of operation, should I write an integration test?
If I am using an in-memory db, can I write unit tests?
Is one test enough?
Since my integration test will probably check the projection, how should I name it? "GetUserForPrinting_return_correct_DTO" seems too abstract and silly.
I ask because:
There is a lot of abstract information about TDD and integration testing, but when it comes to a concrete implementation it is very difficult to apply that information.
TDD suggests that an integration test should be made of unit tests.
This is not really a very good problem to learn TDD with. I assume you don't already know what the complex query looks like, and you want to use test-driven techniques to drive it out. Awesome :)
But let's see if I can answer your questions.
1. Yes.
2. Any test that includes a real db, whether it is in-memory or on-disk, is not a unit test. A unit test would use a mock db.
3. Maybe - if your query is complex enough, then no.
4. testGetUsersForPrinting or getUsersForPrintingTest or similar.
Most probably I would drive out the query in a SQL interpreter, not in code. The aim would be to produce a series of integration tests against an in-memory db based on what I learn during this process.
Start from the minimum possible DTO you can think of, and build up from there.
Finally convert the query into nhibernate calls, then make the integration tests pass.
Test-driven, but not really unit-test-driven.
If you are willing to accept maximum TDD discipline and deal with working slower and being more annoyed than usual, you can automate each integration test as you develop it and write code to make it pass. This will mean you are switching frequently among 3 levels of abstraction / editors / environments (direct SQL queries, integration tests, C# code) - I deal with this by setting up techniques to force myself to follow the right steps each time.
This last bit is why this is not a good problem to learn TDD with. You will need a lot of discipline you probably haven't forced yourself to acquire yet!
Good luck.
OK, some concrete examples. I would modify your code sample to look like this:
public PrintDTO GetUsersForPrinting(int userId, ISession session)
{
    var data = session.QueryOver<User>(); // some joins, conditions etc.
    return data; // or whatever
}
In your unit test you would write:
[Test]
public void TestDTO()
{
    // Arrange
    StubSession session = /* ... set up a stub session that returns hardcoded values ... */;
    // Act
    PrintDTO users = GetUsersForPrinting(111, session);
    // Assert
    Assert.That(users.Count, Is.EqualTo(1));
    Assert.That(users[0].UserId, Is.EqualTo(111));
}
In your integration test, you would use a real db; your session object would actually connect to it, and the queries would be resolved against that db.
Arrange-Act-Assert is a standard method for organizing unit tests.
Generally you want as few Asserts as possible in a unit test. And you will have multiple unit tests.
When you are writing a unit test, start by writing the Assert, then fill in the rest to make it compile/get the result you want. Make the test fail first, because then you know you have really delivered something when it passes.
In this example to implement a stub ISession you would derive a local StubSession class (only visible to the test suite) from ISession and just fill in the absolute minimum to get it to compile, and return the minimum data to get the test to pass.
To build up to your whole DTO - assuming you know what you want in your DTO - proceed, as you say in the comments, incrementally. Build up your DTO a piece at a time, adding a unit test for each piece.
Keeping track of this is another piece of TDD discipline.
Set yourself up with a TODO list - just a simple text file, or possibly a lengthy comment at the start of your test suite. List all the things you want to test, e.g. zero results, one result, two results, 20 results, user id, whatever other pieces of information you need to have.
If you are doing a complex query across tables or whatever, add a todo item for each join, each part of the where clause, etc.
Add items for ordering and paging, etc., if you are using those.
Pick the simplest things first. Only do one small thing (in a single red-green-refactor cycle) at a time. As you work through your list, you might want to break items up into smaller pieces, or you might think of additional things you need to do. Add them to the TODO list rather than working directly on them.
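Such a list can be as simple as the comment block mentioned above at the top of the test suite - for example (items taken from the cases listed earlier; the checkboxes are just a convention):

// TODO test list for GetUsersForPrinting:
// [x] zero results
// [ ] one result
// [ ] two results
// [ ] 20 results (paging)
// [ ] ordering
// [ ] one item per join / part of the where clause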
In this particular case I would swap - after each red-green-refactor cycle - into the SQL environment and/or the sqlite integration test to work out how to make the next piece work. I guess this is a sort of step between red and green - choose what you will test next, write the test (which fails obviously), fiddle around in SQL until you know how to make it pass, write the nHibernate calls to make your test green, then refactor.
Be aware that some of the things you list might turn out not to be necessary, or take too long, etc. It's still good to write them down, so you know what you are not doing as well as what you are doing. Keep focused on your goal.
I tend to also develop a list of "smells" and/or refactorings that I can see I will want to do but am not quite ready for this cycle. Remember to minimise duplication/refactor your tests as well as your SUT (System Under Test).
It's a doing rather than a seeing thing. The list of unit tests you end up with, and the code they exercise, is not a very good description of the journey. Kent Beck's original TDD book is slim and will give you some good overall pointers, but not really about constructing queries.
Does any of that help?
Since the most common approach is to use an in-memory database for this kind of operation, should I write an integration test?
Using an in-memory database is still an integration test (because it actually tests whether your query generates correct SQL and executes it against a database).
If I am using an in-memory db, can I write unit tests?
No, it would be an integration test.
Is one test enough?
Probably not; you should check each condition of your query, for example one test per where clause, one for paging and one for sorting, if applicable.
Since my integration test will probably check the projection, how should I name it? "GetUserForPrinting_return_correct_DTO" seems too abstract and silly.
GivenUserForPrinting_WhenGetUserForPrinting_ThenMapToDTO would be a better name.
We use SWTBot for writing functional tests. Some cases are very difficult to test, and some programmers use classes and their methods directly from the implementation (for example, calling methods of the class AddUserDialog, etc.). Is this a good approach? And why?
And the next question, please: is SWTBot enough for testing an Eclipse RCP based application? Is it necessary to write unit tests?
Note: we are a scrum team.
SWTBot and JUnit serve two different purposes.
JUnit
As the name implies, JUnit is meant for unit testing. Unit tests should be small and fast to execute. They test only a single unit of code, and the above-mentioned attributes allow them to be executed often while developing the unit under test.
But there is more to (good) unit tests. You may want to read one of the following posts for further attributes of unit tests:
Key qualities of a good unit test
What attribute should a good Unit-Test have?
I would go one step further and say that unit tests only make sense in TDD, that is, when you write the test before the production code. Otherwise you neglect the tests. Who wants to make the extra effort of writing tests for something that already works? And even if you have the discipline to write the tests afterwards, they merely manifest the current state of your production code. Whereas writing tests beforehand, as in TDD, leads to lean production code that only does what is required by the tests.
But I guess that's something not everyone will agree on.
In an RCP setting, unit tests would ideally be able to run without starting the platform (which takes rather long), i.e. they would not need to be run as PDE JUnit tests but as plain JUnit tests instead. Therefore the unit under test should be isolated from the RCP APIs.
On a related note, see also this question: How to efficiently JUnit test Eclipse RCP Plugins
SWTBot
While SWTBot uses the JUnit runtime to execute the tests, it is rather meant as a utility to create integration or functional tests. SWTBot, when used with RCP, starts the entire workbench and runs all tests within the same instance. Therefore great care should be taken to ensure that each test leaves the environment in the same state as it was before the test started. Specialized Rules may help here to set up and tear down a particular recurring scenario.
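For example, a rule along these lines (a sketch with hypothetical names, using JUnit 4's ExternalResource) keeps a recurring open-dialog scenario out of the individual tests:

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExternalResource;

public class AddUserDialogTest {

    // Opens the dialog before each test and closes it again afterwards,
    // so every test starts from the same workbench state.
    @Rule
    public final ExternalResource dialogRule = new ExternalResource() {
        @Override
        protected void before() {
            // open the dialog programmatically via your application code
        }

        @Override
        protected void after() {
            // close the dialog / reset the workbench state
        }
    };

    @Test
    public void enteredDataIsAccepted() {
        // drive the open dialog with SWTBot here
    }
}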
It is perfectly valid, when setting up an SWTBot test, to call methods from your application. For example, you could programmatically open the wizard and then use SWTBot to simulate a user that enters data and presses the OK button. There is no need to use SWTBot to laboriously open the wizard itself.
In my experience, SWTBot is even too much for simple use cases. Consider a test that should enter some data into a dialog and then press OK. If you already have the dialog opened programmatically you can as well continue without SWTBot:
dialog.textField.setText( "data" );
dialog.okButton.notifyListeners( SWT.Selection, null );
assertThat( dialog.getEnteredData() ).isEqualTo( "data" );
Use Both
The best bet is to have both, unit tests that ensure the behavior of the respective units and functional tests that make sure that the particular units play together as desired.
Not sure if that answers the question, if you have further concerns please leave a comment.
I'm new to unit testing so I'd like to get the opinion of some who are a little more clued-in.
I need to write some screen-scraping code shortly. The target system is a web UI where there'll be copious HTML parsing and similar volatile goodness involved. I'll never be notified of any changes by the target system (e.g. a site redesign or other change in functionality). So I anticipate my code breaking regularly.
So I think my real question is, how much, if any, of my unit testing should worry about or deal with the interface (the website I'm scraping) changing?
I think unit tests or not, I'm going to need to test heavily at runtime since I need to ensure the data I'm consuming is pristine. Even if I ran unit tests prior to every run, the web UI could still change between tests and runtime.
So do I focus on in-code testing and exception handling? Does that mean to draw a line in the sand and exclude this kind of testing from unit tests altogether?
Thanks
Unit testing should always be designed to have repeatable, known results.
Therefore, to unit test a screen-scraper, you should write the test against a known set of HTML (you may use a mock object to represent this).
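For example, a minimal sketch (jsoup and the price markup are assumptions; substitute whatever parser and markup you actually use):

import static org.junit.Assert.assertEquals;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.junit.Test;

public class ScraperTest {

    // A fixed, known HTML snippet stands in for the live site, so the
    // result is repeatable no matter what the real page looks like today.
    private static final String KNOWN_HTML =
            "<html><body><span class=\"price\">42.00</span></body></html>";

    @Test
    public void extractsPriceFromKnownMarkup() {
        Document doc = Jsoup.parse(KNOWN_HTML);
        assertEquals("42.00", doc.select("span.price").text());
    }
}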
The sort of thing you are talking about doesn't really sound like a scenario for unit-testing to me - if you want to ensure your code runs as robustly as possible, then it is more, as you say, about in-code testing and exception handling.
I would also include some alerting code, so the system makes you aware of any occasion when the HTML does not get parsed as expected.
You should try to separate your tests as much as possible. Test the data handling with low-level tests that execute the actual code (i.e. not via a simulated browser).
In the simulated browser, just make sure that the right things happen when you click on buttons, when you submit forms, and when you follow links.
Never try to test whether the layout is correct.
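For the simulated-browser checks, a sketch along these lines (Selenium WebDriver with HtmlUnit; the URL and element ids are hypothetical):

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;

public class FormFlowTest {

    @Test
    public void submittingTheFormShowsConfirmation() {
        WebDriver driver = new HtmlUnitDriver();   // simulated browser
        driver.get("http://localhost:8080/form");  // hypothetical URL
        driver.findElement(By.id("name")).sendKeys("Bob");
        driver.findElement(By.id("submit")).click();
        // check behavior, not layout
        assertTrue(driver.getPageSource().contains("Thanks"));
        driver.quit();
    }
}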
I think the thing unit tests might be useful for here is that, if you have a build server, they will give you an early warning that the code no longer works. You can't write a unit test to prove that screen-scraping will still work if the site changes its HTML (because you can't tell what they will change).
You might be able to write a unit test to check that something useful is returned from your efforts.
I have a project in .NET and I want to test it.
But I don't know anything about testing and its methods.
How can I go ahead with testing?
Which method is better for me as a beginner?
Is there anything that helps decide which testing method to use for a better result?
There is no "right" or "wrong" in testing. Testing is an art and what you should choose and how well it works out for you depends a lot from project to project and your experience.
But as a professional testing expert, my suggestion is that you have a healthy mix of automated and manual testing.
AUTOMATED TESTING
Unit Testing
Use NUnit to test your classes, functions and the interaction between them.
http://www.nunit.org/index.php
Automated Functional Testing
If possible, you should automate a lot of the functional testing. Some frameworks have functional testing built into them; otherwise you have to use a tool for it. If you are developing web sites/applications, you might want to look at Selenium.
http://www.peterkrantz.com/2005/selenium-for-aspnet/
Continuous Integration
Use CI to make sure all your automated tests run every time someone on your team makes a commit to the project.
http://martinfowler.com/articles/continuousIntegration.html
MANUAL TESTING
As much as I love automated testing, it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been informed to view as pass/fail. A human can use their intelligence to find faults and raise questions that appear while testing something else.
Exploratory Testing
ET is a very low-cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being and teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn, and fun!
http://www.satisfice.com/articles/et-article.pdf
Since it is not clear what the scale of your project is, all you need to do is make sure:
Your tests are trustworthy - you should know they are telling you the truth.
They are repeatable.
They are consistent - if you repeat a test with the same test data, it should provide the same output.
They prove you are covering all the problem areas.
To get this you can use:
Standard way: NUnit, MbUnit (my favorite) or xUnit (haven't got around to working with it) or MSTest
Quick and dirty: a console app (not cool, not so flexible)
If you are using .Net, I'd recommend checking out NUnit. It's a great testing framework to use.
As far as learning about the "testing method", there are many different ways to test an application. When using a tool like NUnit, for example, you are writing automated tests which run without user interaction. In these types of tests, you typically write tests for each of the public methods in your application, and you ensure that given known inputs, these methods produce the expected outputs. Over time as the application changes (via enhancements, bug fixes, etc.) you have a core set of tests that you can re-run to ensure nothing breaks as a result of the changes. You can also do failure testing to ensure that given an invalid set of inputs to a method, it throws the proper exceptions, etc.
Besides automated testing with a tool like NUnit, it's also important to ensure that your end users test the product. "End users" here could be a Quality Assurance group in your company, or it could be the actual customer. The point is that you need to ensure that someone actually uses your application to make sure it works as expected, because no matter how good the automated tests are, there will still be many things you won't think of that your users will discover. One way to approach this type of testing is to write test scenarios, and have your users execute them to make sure the scenario results in the correct behavior.
I think the best testing approach combines both of the above, namely automated testing and user testing (with documented test scenarios).