BRAND NEW to unit testing, I mean really new. I've read quite a bit and am moving slowly, trying to follow best practices as I go. I'm using MS-Test in Visual Studio 2010.
I have come up against a requirement that I'm not quite sure how to proceed on. I'm working on a component that's responsible for interacting with external hardware. There are a few more developers on this project and they don't have access to the hardware so I've implemented a "dummy" or simulated implementation of the component and moved as much shared logic up into a base class as possible.
Now this works fine as far as allowing them to compile and run the code, but it's not terribly useful for simulating the events and internal state changes needed for my unit tests (don't forget I'm new to testing).
For example, there are a couple events on the component that I want to test, however I need them to be invoked in order to test them. Normally to raise the event I would push a button on the hardware or shunt two terminals, but in the simulated object (obviously) I can't do that.
There are two concerns/requirements that I have:
I need to provide state changes and raise events for my unit tests
I need to provide state changes and raise events for my team to test dependencies on the component (e.g. a button on a WPF view becomes enabled when a certain hardware event occurs)
For the latter I thought about some complicated control panel dialog that would let me trigger events and generally simulate hardware operation and user interaction. This is complicated as it requires a component with no message pump to provide a window with controls. Stinky. Or another approach could be to implement the simulated component to take a "StateInfo" object that I could use to change the internals of the object.
This can't be a new problem; I'm sure many of you have had to do something similar to this, and I'm just wondering what patterns or strategies you've used to accomplish it. I know I can access private fields with an accessor, but this doesn't really allow for interactive changes (in the case of runtime simulation).
If there is an interface on the library you use to interact with the external hardware you can just create a mock object for it and raise events from that in your unit tests.
If there isn't, then you'll need to wrap the hardware calls in a wrapper class so you can mock it and provide the behaviours you want in your test.
For examples of how to raise events from mock objects, have a look at Mocking Comparison - Raising Events.
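If you'd rather not pull in a mocking framework, a hand-rolled fake works too. Here's a minimal sketch, assuming a hypothetical IHardwareDevice interface with a ButtonPressed event (all names invented for illustration):

using System;

// Hypothetical abstraction over the real hardware -- names are illustrative only.
public interface IHardwareDevice
{
    event EventHandler ButtonPressed;
}

// Hand-rolled fake that lets tests (or teammates) raise the event on demand.
public class FakeHardwareDevice : IHardwareDevice
{
    public event EventHandler ButtonPressed;

    // Simulates pressing the physical button.
    public void SimulateButtonPress()
    {
        var handler = ButtonPressed;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}

A unit test can then subscribe whatever it wants to ButtonPressed and call SimulateButtonPress where you would normally push the real button or shunt the terminals.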
I hope that helps!
When we are testing Rx.NET code we can use TestScheduler, which implements VirtualTimeScheduler and allows us to create a virtual timeline of events (CreateColdObservable) that occur in the system, and then test the state of the system at certain points in time using methods like AdvanceTo.
I'm wondering if there is an equivalent in Akka.NET for testing a system using some kind of virtual timeline, like in Rx.NET?
While testing your actors using Akka.TestKit, your actor system is configured to make use of TestScheduler, which has Advance/AdvanceTo methods allowing you to move forward in time. It's directly inspired by the VirtualTimeScheduler known from Rx.NET.
Keep in mind that TestScheduler has limited usage: AFAIK it's only used in the context of scheduling future events via e.g. the Context.System.Scheduler.ScheduleTellxxx methods. It doesn't affect other time-based actions happening inside the actor system.
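As a rough sketch of how that looks (assuming the xUnit flavour of Akka.TestKit; the config line is what swaps the TestScheduler in, and the message and timings are made up):

using System;
using Akka.Actor;
using Akka.TestKit;
using Akka.TestKit.Xunit2;
using Xunit;

public class VirtualTimeSpec : TestKit
{
    public VirtualTimeSpec()
        : base(@"akka.scheduler.implementation = ""Akka.TestKit.TestScheduler, Akka.TestKit""") { }

    [Fact]
    public void ScheduledTellFiresWhenVirtualTimeAdvances()
    {
        var scheduler = (TestScheduler)Sys.Scheduler;

        // Schedule a message five virtual minutes into the future.
        scheduler.ScheduleTellOnce(TimeSpan.FromMinutes(5), TestActor, "tick", ActorRefs.NoSender);

        ExpectNoMsg(TimeSpan.FromMilliseconds(100)); // nothing arrives in real time

        scheduler.Advance(TimeSpan.FromMinutes(5));  // jump forward on the virtual timeline
        ExpectMsg("tick");
    }
}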
So the docs suggest using a mock store, but it just records all the actions and is not connected to any reducer. I basically just want to unit test a component and see that, given an action has been dispatched, it changed. Something like (in the most general terms):
expect(counter.props).to.equal(1)
dispatch(increment())
expect(counter.props).to.equal(2)
any ideas? thanks
There are a couple of factors involved here.
First, even in normal rendering and usage, dispatching an action does not immediately update a component's props. The wrapper components generated by connect are immediately notified after the action is dispatched, but the actual re-rendering of the wrapped component generally gets batched up and queued by React. So, dispatching an action on one line will not be reflected in the props on the next line.
Second, ideally the "plain" component shouldn't actually know anything about Redux itself. It just knows that it's getting some data as props, and when an event like a button click occurs, calls some function it was given as a prop. So, testing the component should be independent from testing anything Redux-related.
If it helps, I have a whole bunch of articles on React and Redux-related testing as part of my React/Redux links list. Some of those articles might help give you some ideas.
I am making an application in C++ that runs a simulation for a health club. The user simply enters the simulation data at the beginning (3 integer values) and presses run. After that there is no user input - so very simple.
After starting the simulation, a lot of the logic is deep down in lower classes, but many of them need to print simple output messages to the UI. Returning a message is not possible, as objects need to print to the UI but keep working.
I was going to pass a reference to the UI object to all classes that need it but I end up passing it around quite a lot - there must be a better way.
What I really need is something that makes calling the UI's printOutput(string) function as easy as (or not much more difficult than) cout << string;
The UI also has a displayConnectionStatus(bool[] connections) method.
Bear in mind the UI inherits an abstract 'UserInterface' class so simple console UIs and GUIs can be changed in and out easily.
How do you suggest I implement this link to the UI?
If I were to use a global function, how can I redirect it to call methods of the UserInterface implementation that I selected to use?
Don't be afraid of globals.
Global objects hurt encapsulation, but for a targeted solution with no concern for immediate reusability, globals are just fine.
Expose a global object that processes events from your simulation. You can then choose to print the events, send them by e-mail, render them with OpenGL or whatever you fancy. Make a uniform interface that catches what happens inside the simulation via report callbacks, and then you can subclass this global object to suit your needs.
If the object weren't global, you'd be passing a pointer around the whole codebase.
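One way to wire that up, sketched here in C# for consistency with the other examples on this page (the idea translates directly to a C++ singleton or extern pointer; IUserInterface stands in for your abstract UserInterface class, and all names are invented):

// Global access point that forwards to whichever UserInterface was selected.
public static class Ui
{
    // Set once at startup to the chosen implementation (console, GUI, ...).
    public static IUserInterface Current { get; set; }

    public static void PrintOutput(string message)
    {
        if (Current != null)
            Current.PrintOutput(message);
    }
}

// Deep inside the simulation, printing is now as easy as:
//   Ui.PrintOutput("member 42 checked in");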
I would suggest going for a logging framework, i.e. your own class LogMessages, which has functions that take data and log it, whether to a UI, a file, over the network, or anything else.
Each class that needs logging can then use your logging class.
This way you avoid globals and get a generic solution. Also have a look at http://www.pantheios.org/, an open-source C/C++ diagnostic logging API library that you may want to use as well.
I have a program that (amongst other things) has a command line interface that lets the user enter strings, which will then be sent over the network. The problem is that I'm not sure how to connect the events, which are generated deep inside the GUI, to the network interface. Suppose for instance that my GUI class hierarchy looks like this:
GUI -> MainWindow -> CommandLineInterface -> EntryField
Each GUI object holds some other GUI objects and everything is private. Now the entryField object generates an event/signal that a message has been entered. At the moment I'm passing the signal up the class hierarchy so the CLI class would look something like this:
public:
sigc::signal<void, string> msgEntered;
And in the c'tor:
entryField.msgEntered.connect(sigc::mem_fun(this, &CLI::passUp));
The passUp function just emits the signal again for the owning class (MainWindow) to connect to until I can finally do this in the main loop:
gui.msgEntered.connect(sigc::mem_fun(networkInterface, &NetworkInterface::sendMSG));
Now this seems like a real bad solution. Every time I add something to the GUI I have to wire it up all through the class hierarchy. I do see several ways around this. I could make all objects public, which would allow me to just do this in the main loop:
gui.mainWindow.cli.entryField.msgEntered.connect(sigc::mem_fun(networkInterface, &NetworkInterface::sendMSG));
But that would go against the idea of encapsulation. I could also pass a reference to the network interface all over the GUI, but I would like to keep the GUI code as separate as possible.
It feels like I'm missing something essential here. Is there a clean way to do this?
Note: I'm using GTK+/gtkmm/LibSigC++, but I'm not tagging it as such because I've had pretty much the same problem with Qt. It's really a general question.
The root problem is that you're treating the GUI as if it were a monolithic application, just one where the GUI is connected to the rest of the logic via a bigger wire than usual.
You need to re-think the way the GUI interacts with the back-end server. Generally this means your GUI becomes a stand-alone application that does almost nothing and talks to the server without any direct coupling between the internals of the GUI (i.e. your signals and events) and the server's processing logic. For example, when you click a button you may want it to perform some action, in which case you need to call the server; but nearly all the other events need only change state inside the GUI and do nothing to the server until you're ready, the user wants some response, or you have enough idle time to make the calls in the background.
The trick is to define an interface for the server totally independently of the GUI. You should be able to change GUIs later without modifying the server at all.
This means you will not be able to have the events sent automatically, you'll need to wire them up manually.
Try the Observer design pattern. Link includes sample code as of now.
The essential thing you are missing is that you can pass a reference without violating encapsulation if that reference is cast as an interface (abstract class) which your object implements.
Short of having some global pub/sub hub, you aren't going to get away from passing something up or down the hierarchy. Even if you abstract the listener to a generic interface or a controller, you still have to attach the controller to the UI event somehow.
With a pub/sub hub you add another layer of indirection, but there's still a duplication - the entryField still says 'publish message ready event' and the listener/controller/network interface says 'listen for message ready event', so there's a common event ID that both sides need to know about, and if you're not going to hard-code that in two places then it needs to be passed into both files (though as global it's not passed as an argument; which in itself isn't any great advantage).
I've used all four approaches - direct coupling, controller, listener and pub-sub - and in each successor you loosen the coupling a bit, but you don't ever get away from having some duplication, even if it's only the id of the published event.
It really comes down to variance. If you find you need to switch to a different implementation of the interface, then abstracting the concrete interface as a controller is worthwhile. If you find you need to have other logic observing the state, change it to an observer. If you need to decouple it between processes, or want to plug into a more general architecture, pub/sub can work, but it introduces a form of global state, and isn't as amenable to compile-time checking.
But if you don't need to vary the parts of the system independently it's probably not worth worrying about.
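To make the pub/sub variant concrete, here is a minimal hub sketch (in C#, since the question is language-agnostic; all names are invented). Note how the event ID "msgEntered" is exactly the shared knowledge both sides need:

using System;
using System.Collections.Generic;

// A minimal pub/sub hub keyed by event ID.
public class EventHub
{
    private readonly Dictionary<string, List<Action<object>>> subscribers =
        new Dictionary<string, List<Action<object>>>();

    public void Subscribe(string eventId, Action<object> handler)
    {
        List<Action<object>> handlers;
        if (!subscribers.TryGetValue(eventId, out handlers))
            subscribers[eventId] = handlers = new List<Action<object>>();
        handlers.Add(handler);
    }

    public void Publish(string eventId, object payload)
    {
        List<Action<object>> handlers;
        if (subscribers.TryGetValue(eventId, out handlers))
            foreach (var handler in handlers)
                handler(payload);
    }
}

// The network side subscribes, the entry field publishes; neither knows the other:
//   hub.Subscribe("msgEntered", msg => networkInterface.SendMsg((string)msg));
//   hub.Publish("msgEntered", enteredText);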
As this is a general question I’ll try to answer it even though I’m “only” a Java programmer. :)
I prefer to use interfaces (abstract classes, or whatever the corresponding mechanism is in C++) on both sides of my programs. On one side there is the program core that contains the business logic. It can generate events that e.g. GUI classes can receive, such as (for your example) "stringReceived". The core, on the other hand, implements a "UI listener" interface which contains methods like "stringEntered".
This way the UI is completely decoupled from the business logic. By implementing the appropriate interfaces you can even introduce a network layer between your core and your UI.
[Edit] In the starter class for my applications there is almost always this kind of code:
Core core = new Core(); /* Core implements GUIListener */
GUI gui = new GUI(); /* GUI implements CoreListener */
core.addCoreListener(gui);
gui.addGUIListener(core);
[/Edit]
You can decouple ANY GUI and communicate easily with messages using templatious virtual packs. Check out this project also.
In my opinion, the CLI should be independent of the GUI. In an MVC architecture, it should play the role of the model.
I would put in a controller which manages both EntryField and CLI: each time EntryField changes, the CLI gets invoked, all of this managed by the controller.
I'm convinced from this presentation and other commentary here on the site that I need to learn to Unit Test. I also realize that there have been many questions about what unit testing is here. Each time I go to consider how it should be done in the application I am currently working on, I walk away confused. It is a XULRunner application, and a lot of the logic is event-based - when a user clicks here, this action takes place.
Often the examples I see for testing are testing classes - they instantiate an object, give it mock data, then check the properties of the object afterward. That makes sense to me - but what about the non-object-oriented pieces?
This guy mentioned that GUI-based unit testing is difficult in most any testing framework, maybe that's the problem. The presentation linked above mentions that each test should only touch one class, one method at a time. That seems to rule out what I'm trying to do.
So the question - how does one unit test procedural or event-based code? Provide a link to good documentation, or explain it yourself.
On a side note, I also have the challenge of not having found a testing framework that is set up to test XULRunner apps - it seems that the tools just aren't developed yet. I imagine this is more peripheral than understanding the concepts, writing testable code, and applying unit testing.
The idea of unit testing is to test small sections of code with each test. In an event-based system, one form of unit testing you could do would be to test how your event handlers respond to various events. So your unit test might set an aspect of your program into a specific state, then call the event listener method directly, and finally test the subsequent state of your program.
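A tiny illustration of that, in C# (the presenter class and its members are hypothetical):

[TestMethod]
public void SaveClick_EnablesUndo()
{
    // Arrange: put the relevant part of the program into a known state.
    var presenter = new FormPresenter();   // hypothetical class under test
    presenter.LoadDocument("draft.txt");

    // Act: call the event handler directly instead of clicking a real button.
    presenter.OnSaveClicked(this, EventArgs.Empty);

    // Assert: verify the state the handler should have produced.
    Assert.IsTrue(presenter.CanUndo);
}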
If you plan on unit testing an event-based system, you will make life a lot easier for yourself if you use the dependency injection pattern, and ideally go the whole way and use inversion of control (see http://martinfowler.com/articles/injection.html and http://msdn.microsoft.com/en-us/library/aa973811.aspx for details of these patterns).
(thanks to pc1oad1etter for pointing out I'd messed up the links)
At first I would test events like this:
private bool fired;

// Handler that records that the event was raised.
private void HandlesEvent(object sender, EventArgs e)
{
    fired = true;
}

public void Test()
{
    // "myClass" stands in for the instance under test.
    myClass.FireEvent += HandlesEvent;
    myClass.PerformEventFiringAction(null, null);

    Assert.IsTrue(fired);
}
And then I discovered RhinoMocks.
RhinoMocks is a framework that creates mock objects, and it also handles event testing. It may come in handy for your procedural testing as well.
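With Rhino Mocks' AAA syntax, raising an event from a stub looks roughly like this (a sketch from memory, so treat the exact calls as an assumption; IAlarm and AlarmMonitor are invented names):

// Invented interface for the component that raises the event.
public interface IAlarm
{
    event EventHandler Triggered;
}

[TestMethod]
public void MonitorReactsWhenAlarmTriggers()
{
    var alarm = MockRepository.GenerateStub<IAlarm>();
    var monitor = new AlarmMonitor(alarm);  // hypothetical subscriber under test

    // Capture an event raiser for the Triggered event, then fire it.
    alarm.GetEventRaiser(a => a.Triggered += null).Raise(alarm, EventArgs.Empty);

    Assert.IsTrue(monitor.AlarmReceived);
}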
Answering my own question here, but I came across an article that explains the problem and does a walk-through of a simple example -- Agile User Interface Development. The code and images are great, and here is a snippet that shows the idea:
Agile gurus such as Kent Beck and David Astels suggest building the GUI by keeping the view objects very thin, and testing the layers "below the surface." This "smart object/thin view" model is analogous to the familiar document-view and client-server paradigms, but applies to the development of individual GUI elements. Separation of the content and presentation improves the design of the code, making it more modular and testable. Each component of the user interface is implemented as a smart object, containing the application behavior that should be tested, but no GUI presentation code. Each smart object has a corresponding thin view class containing only generic GUI behavior. With this design model, GUI building becomes amenable to TDD.
The problem is that "event based programming" links far too much logic to the events. The way such a system should be designed is that there should be a subsystem that raises events (and you can write tests to ensure that these events are raised in the proper order). And there should be another subsystem that deals only with managing, say, the state of a form. And you can write a unit test that will verify that given the correct inputs (ie. events being raised), will set the form state to the correct values.
Beyond that, the actual event handler that is raised from component 1 and calls the behavior on component 2 is just integration testing, which can be done manually by a QA person.
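A minimal sketch of that separation in C# (all names invented):

// Subsystem 1: raises events, knows nothing about form state.
public class Sensor
{
    public event EventHandler ThresholdExceeded;

    public void Trip()
    {
        var handler = ThresholdExceeded;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}

// Subsystem 2: pure form state, no knowledge of where events come from.
public class FormState
{
    public bool AlertVisible { get; private set; }

    public void OnThresholdExceeded()
    {
        AlertVisible = true;
    }
}

// Unit test: given the correct input (the event arriving), the state is correct.
[TestMethod]
public void ThresholdEvent_ShowsAlert()
{
    var state = new FormState();
    state.OnThresholdExceeded();  // simulate the event being raised
    Assert.IsTrue(state.AlertVisible);
}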
Your question doesn't state your programming language of choice, but mine is C# so I'll exemplify using that. This is however just a refinement of Gilligan's answer, using anonymous delegates to inline your test code. I'm all in favor of making tests as readable as possible, and to me that means all test code within the test method:
// Arrange
var car = new Car();
string changedPropertyName = "";
car.PropertyChanged += delegate(object sender, PropertyChangedEventArgs e)
{
    if (sender == car)
        changedPropertyName = e.PropertyName;
};

// Act
car.Model = "Volvo";

// Assert
Assert.AreEqual("Model", changedPropertyName,
    "The notification of a property change was not fired correctly.");
The class I'm testing here implements the INotifyPropertyChanged interface, and therefore a PropertyChanged event should be raised whenever a property's value has changed.
An approach I've found helpful for procedural code is to use TextTest. It's not so much about unit testing, but it helps you do automated regression testing. The idea is that you have your application write a log, then use TextTest to compare the log before and after your changes.
See the oft-linked Working Effectively with Legacy Code. See the sections titled "My Application Is All API Calls" and "My Project is Not Object-Oriented. How Do I Make Safe Changes?".
In the C/C++ world (in my experience) the best solution in practice is to use the linker "seam" and link against test doubles for all the functions called by the function under test. That way you don't change any of the legacy code, but you can still test it in isolation.