Is it possible to use cxxunit or any other unit testing framework (excluding QtTestLib) to test Qt widgets?
If yes, then there are two more questions :
How?
Since I am running the unit tests under Valgrind, can this report errors?
Yes, it should be possible. I'm not sure about cxxunit specifically, but in principle any unit testing framework can do it.
To properly test Qt objects, you will probably need to create/destroy a QApplication object in your global setup and teardown functions. Unless you are specifically testing QApplication functionality, you should only create one for the entire run of the test application. This will allow you to test portions of the widget's logic, but not easily the appearance or UI interactivity of the widget. Also, testing certain items may rely on having the application's event loop running, which would be more difficult.
Valgrind may report some errors. It may also report errors with Qt's code, in particular static allocations that are left to application teardown to reclaim.
If you want to test your UI, I suggest using a UI testing tool like Squish. I find unit tests better suited to testing the logic behind the widgets, not the widgets themselves. If you really want to unit-test your Qt widgets, I don't think there is a better solution than QtTestLib.
Valgrind: there is a Valgrind plugin for Squish, although I haven't used it myself. Other unit tests can of course easily be run under Valgrind, although I don't know of any solution that fully automates this. One would have to make sure to suppress all warnings from outside one's own code, so that an error in e.g. the X11 libs doesn't cause the unit test to fail.
I'm debugging a set of WCF services. Initially, I created some unit tests, but since I'm using threading I often receive "Aborted" or "Stopped" tests without any clear explanation why (this is a known bug in Visual Studio).
I found it extremely challenging to debug the services when I can't even read the log output, so I quickly wrote a custom Assert class and converted all unit tests to console applications. This way I was able to immediately fix a huge number of simple problems that were hard or impossible to track down before.
So I'm wondering if it is a good idea to write unit tests as (fully automated) console apps first and convert them to real tests (ones that execute when launching unit tests in VS) later.
If you want to stick with the standalone console app, you can take a one-size-fits-all approach:
Change the application type of the MSTest (or NUnit) test project to "Console application".
Add a public static void Main() that calls the unit tests you are interested in.
This exe can run on its own, or it can run in the unit-test IDE.
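A minimal sketch of what such a Main() might look like, assuming an NUnit/MSTest-style fixture (the class and method names below are placeholders, not anything from your project):

using System;

// Hypothetical test fixture standing in for your MSTest/NUnit test class.
public class MyTests
{
    public void TestApplicationStart()
    {
        // ... your existing test body and asserts ...
    }
}

public static class Program
{
    public static void Main()
    {
        var tests = new MyTests();

        // Call the test methods you are interested in directly.
        RunTest("TestApplicationStart", tests.TestApplicationStart);
    }

    private static void RunTest(string name, Action test)
    {
        try
        {
            test();
            Console.WriteLine("PASS: " + name);
        }
        catch (Exception e)
        {
            // Assert failures surface as exceptions, so they land here too.
            Console.WriteLine("FAIL: " + name + " - " + e);
        }
    }
}

The same binary still works in the test IDE because the test attributes and asserts are untouched; Main() is just an extra entry point.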
I prefer a standalone console runner, as described in how-do-i-use-mstest-without-visual-studio.
I am attempting to add some tests to an existing Qt GUI application using QTest. The GUI uses quite a bit of complicated start-up code, so I'd rather not write another main() to start it again.
To me, it seems like the easiest way would be to instantiate the app and then run the tests on it. I am just not sure what function I could plug my test object into so that I don't block the flow of messages.
I could send a special message to start the test or set a timer but that's complicated and tests are supposed to simplify things.
So where would be the best place in the existing GUI to insert a QTest object and call QTest::qExec() on it?
I'm willing to be proven wrong, but test frameworks in general (and QTest specifically, from what I've used of it) tend to assume that the test framework will be driving the code to be tested, as opposed to running alongside it.
I'm also concerned about "The GUI uses quite a bit of complicated start-up code...." Are you intending to test the startup code itself, or to test other things around it?
Generally speaking, when I start looking at adding tests to an application, I try to find smaller pieces that are used in a lot of the application, and write tests for those. I then build up to testing the bigger pieces that integrate those smaller ones. My general idea is that if the small pieces work properly, then either I've put them together correctly and things should work, or I haven't and things should obviously fail when I try to run the application.
I should mention that there are other options for testing GUIs for Qt applications in particular. They tend to be more like scripts run on your program, with the output recorded. If that interests you, then you could look into Squish for Qt.
As I am coding my unit tests, I tend to find that I insert the following lines:
Console.WriteLine("Starting InteropApplication, with runInBackground set to true...");
try
{
InteropApplication application = new InteropApplication(true);
application.Start();
Console.WriteLine("Application started correctly");
}
catch(Exception e)
{
Assert.Fail(string.Format("InteropApplication failed to start: {0}", e.ToString()));
}
//test code continues ...
All of my tests are pretty much the same: they display information about why they failed, or about what they are doing. I haven't followed any formal method of how unit tests should be coded. Should they display information about what they are doing? Or should the tests be silent, displaying nothing about what they are doing and only showing failure messages?
NOTE: The language is C#, but I don't care about a language specific answer.
I'm not sure why you would do that - if your unit test is named well, you already know what it's doing. If it fails, you know what test failed (and what assert failed). If it didn't fail you know that it succeeded.
This is subjective, but to me it seems like redundant information that just adds noise.
I personally would recommend that you output only errors and a summary of the number of tests run and how many passed. This is a completely subjective view though. Display what suits your needs.
I recommend against it - I think that the unit testing should work on the Unix tools philosophy - don't say anything when things are going well.
I find that constructing tests to give meaningful information when they fail is best - that way you get nice short output when things work and it's easy to see what went wrong when there are problems - errors aren't lost to scroll blindness.
I would actually suggest against it (though not militantly). It couples the user interface of your tests to the test implementation (what if the tests are run through a GUI viewer?). As an alternative, I would suggest one of the following:
I'm not familiar with NUnit, but PyUnit allows you to add a description of the test and when tests are run with the verbose option the description is printed. I would look into the NUnit documentation to see if this is something you can do.
Extend the TestCase class that you're inheriting from to add a function you can call that logs what the test is trying to do. That way different implementations can handle messages in different ways.
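As a rough sketch of that second option (the class names and the NUnit attributes here are my assumptions, not anything from the question):

using System;
using NUnit.Framework;

// Hypothetical base class: tests call Log() instead of Console.WriteLine,
// so how (and whether) messages are shown can be changed in one place.
public abstract class LoggingTestCase
{
    protected virtual void Log(string message)
    {
        // Swap this out for Trace, a file, or a no-op without touching the tests.
        Console.WriteLine(message);
    }
}

[TestFixture]
public class InteropApplicationTests : LoggingTestCase
{
    [Test]
    public void StartsInBackground()
    {
        Log("Starting InteropApplication with runInBackground set to true...");
        // ... arrange / act / assert ...
    }
}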
I'd say you should output whatever suits your needs, but showing too much can dilute the output from the test runner.
BTW, your example code hardly looks like a unit test; it's more of an integration/system test.
I like to buffer the verbose log (about last 20 lines or so), but I don't display it until it gets to some error. When the error happens, it's nice to have some context.
OTOH, unit tests should be small pieces of unrelated code with specific input and output requirements. In most cases, displaying input that caused the error (i.e. wrong output) is enough to trace the problem to its roots.
This might be a bit too language-specific, but when I'm writing NUnit tests I tend to do the same thing, except that I use the System.Diagnostics.Trace library instead of the console. That way the information is only shown if I decide to watch the tracing.
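For illustration, a small sketch of that approach (the fixture is hypothetical; Trace output only appears when a trace listener is attached, e.g. the IDE's output window):

using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class InteropApplicationTraceTests
{
    [Test]
    public void StartsInBackground()
    {
        // Shown only when someone is listening for trace output,
        // so a normal test run stays quiet.
        Trace.WriteLine("Starting InteropApplication with runInBackground set to true...");
        // ... test body ...
    }
}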
You don't need to; if the tests run silently, that means there was no error. There is usually no reason for a test to give any output other than when it fails. If a test passes, the test runner indicates it, i.e. it shows "green". Running many tests with console output through a test runner in an IDE just spams the console log with messages nobody will care about.
The test you've written is not a unit test, but looks more like an integration/system test because you seem to be running an application as a whole. A unit test will test a public method in a class, preferably keeping the class as isolated as possible.
Using console I/O kind of defeats the whole purpose of a unit testing framework; you might as well code the whole test manually. If you are using a unit testing framework, your tests should be very malleable and tied to as few things as possible.
Displaying information can be useful; if you're trying to find out why a test failed, it can be useful to be able to see more than just a stack trace, and what happened before the program reached the point where it failed.
However, in the "normal" case where everything succeeds, these messages are unnecessary clutter that distract from what you're really trying to do - ie. looking at an overview of which tests succeeded and failed.
I'd suggest redirecting your debugging messages to a log file. You can either do this by writing all your log message code to call a special "log print" function, or if you're writing a console program, you should be able to redirect stdout to a different file (I know for a fact that you can do this in both Unix and Windows). This way, you get the high level overview but the details are there if you need them.
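A minimal sketch of the "log print" variant (the file name and class below are made up for the example):

using System;
using System.IO;

// Detailed messages go to a log file instead of the test runner's output.
public static class TestLog
{
    private static readonly TextWriter Writer =
        TextWriter.Synchronized(new StreamWriter("test-run.log", true));

    public static void Print(string message)
    {
        Writer.WriteLine("{0:HH:mm:ss} {1}", DateTime.Now, message);
        Writer.Flush();
    }
}

Alternatively, for a console program you can redirect stdout when launching it, e.g. myTests.exe > test-run.log.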
I would avoid putting extra try/catch statements in unit tests. First of all, an unexpected exception in a unit test will already cause the test to fail; that is the default behavior of NUnit. Essentially, the test harness already wraps each call to your test functions in equivalent code. Also, by just using e.ToString() to display what happened, I believe you are losing a lot of information. By default, I believe NUnit will display not just the exception type but also the call stack, which I don't believe you're seeing with your method.
Secondly, there are times when dealing with exceptions is necessary. For instance, you can use the [ExpectedException] attribute to actually state when one should occur. Just be sure that when you test non-exception-related asserts (for instance, asserting that a list count is > 0) you put a good description in the argument to the assert. That is useful.
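For example (older NUnit/MSTest style; newer NUnit versions favour Assert.Throws over the attribute):

using System;
using NUnit.Framework;

[TestFixture]
public class ExceptionExamples
{
    // The attribute states which exception is expected; no try/catch needed.
    [Test]
    [ExpectedException(typeof(FormatException))]
    public void Parse_RejectsNonNumericInput()
    {
        int.Parse("not a number");
    }

    [Test]
    public void Split_ReturnsAtLeastOneItem()
    {
        var parts = "a,b,c".Split(',');
        // A good description makes a non-exception assert much easier to diagnose.
        Assert.Greater(parts.Length, 0, "Split returned no items for non-empty input.");
    }
}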
Everything else is generally not needed. If your unit tests are so large that you start putting in WriteLines with what "step" of the test you're on, that is generally a sign that your test should really be broken out into multiple smaller tests. In other words, that you're not doing a unit test, but rather an integration test.
Have you looked at the xUnit style of unit test frameworks?
See Ron Jeffries site for a rather large list.
One of the principles of these frameworks is that they produce little or no output during the test run, only an indicator of success at the end. In the case of failures, it's possible to get a more descriptive output of the reason for the failure.
The reason for this mode is that while everything is OK you don't want to be bothered by extra output, and certainly if there is a failure you don't want to miss it because of the noise of other output.
Well, you should only know when a test failed and why it failed. There's no use in knowing what's going on unless, for example, you have a loop and you want to know exactly where in the loop the test died.
I think you're making far more work for yourself. The tests either pass or fail; failure should hopefully be the exception to the rule, and you should let the unit test runner handle and report the exception. What you're doing is adding cruft; the exception logged by the test runner will tell you the same thing.
The only time I would display what's happening is if there was some aspect of it that would be easier to test non-automatically. For example, if you've got code that takes a little while to run, and might get stuck in an infinite loop, you might want to print out a message every so often to indicate that it is still making progress.
Always make sure failure messages clearly stand out from other output, however.
You could have written the test method like this. It's up to your code-nose which style of test you prefer. I prefer not writing extra try-catches and Console.WriteLines.
public void TestApplicationStart()
{
    InteropApplication application = new InteropApplication(true);
    application.Start();
}
Test frameworks that I have worked with would interpret any unhandled (and unexpected) exception as a failed test.
Think about the time you took to gold-plate this test and how many more meaningful tests you could have written with that time.
I'm convinced from this presentation and other commentary here on the site that I need to learn to unit test. I also realize that there have been many questions here about what unit testing is. Each time I consider how it should be done in the application I am currently working on, I walk away confused. It is a XULRunner application, and a lot of the logic is event-based: when a user clicks here, this action takes place.
Often the examples I see for testing are testing classes - they instantiate an object, give it mock data, then check the properties of the object afterward. That makes sense to me - but what about the non-object-oriented pieces?
This guy mentioned that GUI-based unit testing is difficult in most any testing framework, maybe that's the problem. The presentation linked above mentions that each test should only touch one class, one method at a time. That seems to rule out what I'm trying to do.
So the question: how does one unit test procedural or event-based code? Provide a link to good documentation, or explain it yourself.
On a side note, I also have the challenge of not having found a testing framework that is set up to test XULRunner apps; it seems the tools just aren't developed yet. I imagine this is more peripheral than understanding the concepts, writing testable code, and applying unit testing.
The idea of unit testing is to test small sections of code with each test. In an event-based system, one form of unit testing you could do would be to test how your event handlers respond to various events. So your unit test might set an aspect of your program into a specific state, then call the event listener method directly, and finally test the subsequent state of your program.
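A tiny sketch of that pattern, using C# like the other examples here (the controller class and its handler are invented purely for illustration):

using System;
using NUnit.Framework;

// Hypothetical "smart object" whose click handler can be called directly,
// with no GUI involved.
public class SaveButtonController
{
    public bool DocumentSaved { get; private set; }

    // The method a real button-click event would invoke.
    public void OnSaveClicked(object sender, EventArgs e)
    {
        DocumentSaved = true;
    }
}

[TestFixture]
public class SaveButtonControllerTests
{
    [Test]
    public void ClickHandler_MarksDocumentAsSaved()
    {
        // Arrange: put the program into a known state.
        var controller = new SaveButtonController();

        // Act: call the event listener directly, as if the event had fired.
        controller.OnSaveClicked(this, EventArgs.Empty);

        // Assert: check the resulting state.
        Assert.IsTrue(controller.DocumentSaved);
    }
}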
If you plan on unit testing an event-based system, you will make your life a lot easier if you use the dependency injection pattern, and ideally go the whole way and use inversion of control (see http://martinfowler.com/articles/injection.html and http://msdn.microsoft.com/en-us/library/aa973811.aspx for details of these patterns).
(thanks to pc1oad1etter for pointing out I'd messed up the links)
At first I would test events like this:
private bool fired;

private void HandlesEvent(object sender, EventArgs e)
{
    fired = true;
}

public void Test()
{
    // "subject" stands in for the object under test (the original used the
    // reserved word "class" as a placeholder).
    fired = false;
    subject.FireEvent += HandlesEvent;
    subject.PerformEventFiringAction(null, null);
    Assert.IsTrue(fired);
}
And then I discovered RhinoMocks.
RhinoMocks is a framework that creates mock objects and it also handles event testing. It may come in handy for your procedural testing as well.
Answering my own question here, but I came across an article that explains the problem and does a walk-through of a simple example: Agile User Interface Development. The code and images are great, and here is a snippet that shows the idea:
Agile gurus such as Kent Beck and David Astels suggest building the GUI by keeping the view objects very thin, and testing the layers "below the surface." This "smart object/thin view" model is analogous to the familiar document-view and client-server paradigms, but applies to the development of individual GUI elements. Separation of the content and presentation improves the design of the code, making it more modular and testable. Each component of the user interface is implemented as a smart object, containing the application behavior that should be tested, but no GUI presentation code. Each smart object has a corresponding thin view class containing only generic GUI behavior. With this design model, GUI building becomes amenable to TDD.
The problem is that "event based programming" links far too much logic to the events. The way such a system should be designed is that there should be a subsystem that raises events (and you can write tests to ensure that these events are raised in the proper order). And there should be another subsystem that deals only with managing, say, the state of a form. And you can write a unit test that will verify that given the correct inputs (ie. events being raised), will set the form state to the correct values.
Beyond that, the actual event handler that is raised from component 1 and calls the behavior on component 2 is just integration testing, which can be done manually by a QA person.
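A rough sketch of what a test for that second, state-only subsystem could look like (the form class and its inputs are invented for the example):

using NUnit.Framework;

// Hypothetical state-only subsystem: no UI code, just inputs and resulting state.
public class LoginFormState
{
    public bool SubmitEnabled { get; private set; }

    // Called in response to "credentials changed" events raised elsewhere.
    public void OnCredentialsChanged(string user, string password)
    {
        SubmitEnabled = !string.IsNullOrEmpty(user) && !string.IsNullOrEmpty(password);
    }
}

[TestFixture]
public class LoginFormStateTests
{
    [Test]
    public void SubmitIsEnabledOnlyWhenBothFieldsAreFilled()
    {
        var form = new LoginFormState();

        form.OnCredentialsChanged("alice", "");
        Assert.IsFalse(form.SubmitEnabled);

        form.OnCredentialsChanged("alice", "secret");
        Assert.IsTrue(form.SubmitEnabled);
    }
}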
Your question doesn't state your programming language of choice, but mine is C#, so I'll give an example in that. This is, however, just a refinement of Gilligan's answer, using anonymous delegates to inline your test code. I'm all in favor of making tests as readable as possible, and to me that means keeping all test code within the test method:
// Arrange
var car = new Car();
string changedPropertyName = "";
car.PropertyChanged += delegate(object sender, PropertyChangedEventArgs e)
{
    if (sender == car)
        changedPropertyName = e.PropertyName;
};

// Act
car.Model = "Volvo";

// Assert
Assert.AreEqual("Model", changedPropertyName,
    "The notification of a property change was not fired correctly.");
The class I'm testing here implements the INotifyPropertyChanged interface, and therefore a PropertyChanged event should be raised whenever a property's value has changed.
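For completeness, one possible shape of the Car class being tested (an assumption on my part; only the INotifyPropertyChanged behaviour matters for the test above):

using System.ComponentModel;

public class Car : INotifyPropertyChanged
{
    private string model;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Model
    {
        get { return model; }
        set
        {
            if (model == value)
                return;
            model = value;
            OnPropertyChanged("Model");
        }
    }

    // Raise PropertyChanged so that listeners (like the test above) are notified.
    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}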
An approach I've found helpful for procedural code is to use TextTest. It's not so much about unit testing, but it helps you do automated regression testing. The idea is that you have your application write a log, then use TextTest to compare the logs before and after your changes.
See the oft-linked Working Effectively with Legacy Code. See the sections titled "My Application Is All API Calls" and "My Project is Not Object-Oriented. How Do I Make Safe Changes?".
In the C/C++ world (in my experience), the best solution in practice is to use the linker "seam" and link against test doubles for all the functions called by the function under test. That way you don't change any of the legacy code, but you can still test it in isolation.