I want to test a function in which a new View is created within TornadoFX. When I call the function, however, I get this error:
java.lang.ExceptionInInitializerError
at tornadofx.ControlsKt.button(Controls.kt:190)
at tornadofx.ControlsKt.button$default(Controls.kt:190)
at view.PeopleMenuView$setupTopBox$1$1.invoke(PeopleMenuView.kt:33)
at view.PeopleMenuView$setupTopBox$1$1.invoke(PeopleMenuView.kt:8)
at tornadofx.LayoutsKt.vbox(Layouts.kt:388)
at tornadofx.LayoutsKt.vbox$default(Layouts.kt:103)
at view.PeopleMenuView$setupTopBox$1.invoke(PeopleMenuView.kt:31)
at view.PeopleMenuView$setupTopBox$1.invoke(PeopleMenuView.kt:8)
at tornadofx.LayoutsKt.hbox(Layouts.kt:384)
at tornadofx.LayoutsKt.hbox$default(Layouts.kt:96)
at view.PeopleMenuView.setupTopBox(PeopleMenuView.kt:29)
at view.PeopleMenuView.<init>(PeopleMenuView.kt:15)
at presenter.MainMenuPresenter.managePeoplePressed(MainMenuPresenter.kt:11)
at presenter.TestMainMenuPresenter.testManagePeoplePressed(TestMainMenuPresenter.kt:16)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: java.lang.IllegalStateException: Toolkit not initialized
at com.sun.javafx.application.PlatformImpl.runLater(PlatformImpl.java:273)
at com.sun.javafx.application.PlatformImpl.runLater(PlatformImpl.java:268)
at com.sun.javafx.application.PlatformImpl.setPlatformUserAgentStylesheet(PlatformImpl.java:550)
at com.sun.javafx.application.PlatformImpl.setDefaultPlatformUserAgentStylesheet(PlatformImpl.java:512)
at javafx.scene.control.Control.<clinit>(Control.java:87)
... 36 more
It is because a new instance of a view is created in the function. The simplified code looks like this:
fun managePeoplePressed() {
    view.replaceWith(PeopleMenuView())
}
When I call the method from a test, I get the error. I googled around, but there's not much to be found about this.
I want to be able to test methods that create a view. Thank you.
In general: you don't want to test UI views. Hold tight, I'm still going to tell you how. I just want you to know that it's a rare case, and one that can indicate design problems.
Without TestFX, such tests are quite difficult to write. With TestFX they're easier, but still both very slow and very fragile, and you'll have to take extra steps in any of the standard CI environments to allow them to run on the build boxes, which are normally neither equipped to run JavaFX nor fitted with virtual displays.
The biggest problem you (and TestFX) encounter is getting your threads right inside the test. The tests are on one thread. The visual parts of JavaFX are on another. Your own application and JavaFX itself frequently pour their tasks into Platform.runLater(), and if you don't account for the emptying of that queue, you'll get results that are either uniformly wrong or, worse, flickery. Sleeps work in some cases, but a) they are sleeps and slow you down, and b) they won't work as well when you run on a slower box, like a low-spec Windows box in the cloud.
Back to your general question: wanting to test a View likely means that your intra-View connections are complex, where UI component X depends on UI component Y. Broadly, you want UI component X to depend on properties in the Model, UI component Y to depend on properties in the Model, and the Model to handle the complex interactions between properties. TornadoFX has direct support for this, and Model classes don't need the UI running to be tested. For some cases, the Controller is the place to put that interconnected logic, but that's relatively rare. Most of what I do just doesn't call for that, but it does call for Model classes. The key insight, that (Model != Domain), is well supported by TornadoFX's ViewModel and ItemViewModel classes.
Having said all that, if you need robot-control over the view, the way to do it is to use TestFX. If you don't (and here I'm agreeing with and elaborating on Edvin's answer), the way to do it is to have something like this:
import java.util.concurrent.CountDownLatch
import javafx.embed.swing.JFXPanel
import javax.swing.SwingUtilities

class UiTest {
    companion object {
        private var javaFxRunning: Boolean = false

        fun start() {
            // StartWith and Errors are app-specific test switches from my
            // own codebase; substitute your own or drop these two lines.
            StartWith.isUi = true
            Errors.reallyShow = false
            try {
                runJavaFx()
            } catch (e: InterruptedException) {
                throw RuntimeException(e)
            }
        }

        @Throws(InterruptedException::class)
        private fun runJavaFx() {
            if (javaFxRunning) return
            val latch = CountDownLatch(1)
            SwingUtilities.invokeLater {
                // Constructing a JFXPanel initializes the JavaFX toolkit.
                JFXPanel()
                latch.countDown()
            }
            latch.await()
            javaFxRunning = true
        }
    }
}
In @BeforeEach, or even in individual @Test methods, call UiTest.start(), which will only start JavaFX one time no matter how many tests need it running.
My actual experience: the JavaFX Component <-> property relationship "just works". That is, I gain very little from testing it, as it's not my code, and it works "every time". What is my code, and doesn't work every time, is the relationship between the properties. That's why I use ViewModel, put the interactions between properties there, and test them rigorously with microtests, which don't require the JavaFX thread to be running. (JavaFX properties use no multi-threading in their callbacks, so addListener targets are called synchronously.) This is particularly handy when you need to be clever with your bindings. Inlining complex JavaFX bindings inside the TornadoFX view builder is a mug's game. Extracting them to the model is dead easy.
I learned all this by trial and error. There is a third approach, described in this Stack Overflow question, that can be made to work: Basic JUnit test for JavaFX 8. It amounts to making 100% of your tests run on the JavaFX thread. I found it difficult, because most of my work isn't on that thread in production.
Good luck, and reach out if you need more help!
GeePawHill
P.S. A shout out to Edvin: Great work on TornadoFX. I use it every day.
You need to initialize the JavaFX Toolkit. If you're using TestFX, you can make a call to FxToolkit.registerPrimaryStage(); if not, you can instantiate a JFXPanel to achieve the same goal.
Related
I've read tons of articles and seen tons of screencasts about TDD, but I'm still struggling with using it in a real-world project. My main issue is that I don't know where to start, or what the first test should be.
Suppose I have to write a client library calling an external system's methods (e.g. notification).
I want this client to work as follows:
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Response code = client.notifyOnEvent(Event.LIMIT_REACHED, 100); // some params of call
There is some translation and message format preparation behind the scenes, so I'd like to hide it from my client apps.
I don't know where and how to start.
Should I make up some rough set of classes for this library?
Should I start with testing NotificationClient as below?
public void testClientSendInvalidEventCommand() {
    NotificationClient client = new NotificationClient(...);
    Response code = client.notifyOnEvent(Event.WRONG_EVENT);
    assertEquals(1223, code.codeValue());
}
If so, with such a test I'm forced to write a complete working implementation at once, without the baby steps TDD prescribes. I could mock out something in the client, but then I'd have to know upfront what to mock, so some upfront design would need to be made.
Maybe I should start from the bottom: test the message-formatting component first and then use it in the right client test?
Which way is the right one to go?
Should we always start from the top (and how do we deal with the huge step that requires)?
Can we start with any class realizing a tiny part of the desired feature (such as the Formatter in this example)?
If I knew where to hit with my tests, it would be a lot easier for me to proceed.
I'd start with this line:
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Sounds like we need a NotificationClient, which needs a client ID. That's an easy thing to test for. My first test might look something like:
public void testNewClientAbcd1234HasClientId() {
    NotificationClient client = new NotificationClient("abcd1234");
    assertEquals("abcd1234", client.clientId());
}
Of course, it won't compile at first - not until I'd written a NotificationClient class with a constructor that takes a string parameter and a clientId() method that returns a string - but that's part of the TDD cycle.
public class NotificationClient {
    public NotificationClient(String clientId) {
    }

    public String clientId() {
        return "";
    }
}
At this point, I can run my test and watch it fail (because I've hard-coded clientId()'s return to be an empty string). Once I've got my failing unit test, I write just enough production code (in NotificationClient) to get the test to pass:
public String clientId() {
    return "abcd1234";
}
Now all my tests pass, so I can consider what to do next. The obvious (well, obvious to me) next step is to make sure that I can create clients whose ID isn't "abcd1234":
public void testNewClientBcde2345HasClientId() {
    NotificationClient client = new NotificationClient("bcde2345");
    assertEquals("bcde2345", client.clientId());
}
I run my test suite and observe that testNewClientBcde2345HasClientId() fails while testNewClientAbcd1234HasClientId() passes, and now I've got a good reason to add a member variable to NotificationClient:
public class NotificationClient {
    private String _clientId;

    public NotificationClient(String clientId) {
        _clientId = clientId;
    }

    public String clientId() {
        return _clientId;
    }
}
Assuming no typographical errors have snuck in, that'll get all my tests to pass, and I can move on to whatever the next step is. (In your example, it would probably be testing that notifyOnEvent(Event.WRONG_EVENT) returns a Response whose codeValue() equals 1223.)
Does that help any?
Don't confuse acceptance tests, which hook into each end of your application and form an executable specification, with unit tests.
If you are doing 'pure' TDD, you write an acceptance test that drives the unit tests that drive the implementation. testClientSendInvalidEventCommand is your acceptance test, but depending on how complicated things are, you will delegate the implementation to multiple classes that you can unit test separately.
How complicated things get before you have to split them up to test and understand them properly is why it is called Test-Driven Design.
You can choose to let tests drive your design from the bottom up or from the top down. Both work well for different developers in different situations. Either approach will force you to make some of those "upfront" design decisions, but that's a good thing. Making those decisions in order to write your tests is test-driven design!
In your case you have an idea what the high level external interface to the system you are developing should be so let's start there. Write a test for how you think users of your notification client should interact with it and let it fail. This test is the basis for your acceptance or integration tests and they are going to continue failing until the features they describe are finished. That's ok.
Now step down one level. What are the steps which need to occur to provide that high level interface? Can we write an integration or unit test for those steps? Do they have dependencies you had not considered which might cause you to change the notification center interface you have started to define? Keep drilling down depth-first defining behavior with failing tests until you find that you have actually reached a unit test. Now implement enough to pass that unit test and continue. Get unit tests passing until you have built enough to pass an integration test and so on. You'll eventually have completed a depth-first construction of a tree of tests and should have a well tested feature whose design was driven by your tests.
One goal of TDD is that the testing informs the design. So the fact that you need to think about how to implement your NotificationClient is a good thing; it forces you to think of (hopefully) simple abstractions up front.
Also, TDD sort of assumes constant refactoring. Your first solution probably won't be the last; so as you refine your code the tests are there to tell you what breaks, from compile errors to actual runtime issues.
So I would just jump right in and start with the test you suggested. As you create mocks, you will need to create tests for the actual implementations of what you are mocking. You will find things make sense and need to be refactored, so you will need to modify your tests as you go. That's the way it's supposed to work...
Short version of my questions:
Can anyone point me toward some good, detailed sources from which I can learn how to implement testing in my MVC 3 application, using NUnit, Ninject 2, and Moq?
Can anyone here help clarify for me how Controller-Repository decoupling, mocking, and dependency injection work together?
Longer version of my questions:
What I'm trying to do ...
I am currently beginning to create an MVC 3 application, which will use Entity Framework 4, with a database first approach. I want to do this right, so I am trying to design the classes, layers, etc., to be highly testable. But, I have little to no experience with unit testing or integration testing, other than an academic understanding of them.
After lots of research, I've settled on using:
NUnit as my testing framework
Ninject 2 as my dependency injection framework
Moq as my mocking framework.
I know the topic of which framework is best, etc., could enter into this, but at this point I really don't know enough about any of it to form a solid opinion. So, I just decided to go with these free solutions which seem to be well liked and well maintained.
What I've learned so far ...
I've spent some time working through some of this stuff, reading resources such as:
Implementing the Repository and Unit of Work Patterns in an ASP.NET MVC Application
Building Testable ASP.NET MVC Applications
NerdDinner Step 12: Unit Testing
Using Repository and Unit of Work patterns with Entity Framework 4.0
From these resources, I've managed to work out the need for a Repository pattern, complete with repository interfaces, in order to decouple my controllers from my data access logic. I have written some of that into my application already, but I admit I am not clear on the mechanics of the whole thing, or whether I am doing this decoupling in support of mocking, dependency injection, or both. As such, I certainly wouldn't mind hearing from you guys about this too. Any clarity I can gain on this stuff will help me at this point.
Where things got muddy for me ...
I thought I was grasping this stuff pretty well until I started trying to wrap my head around Ninject, as described in Building Testable ASP.NET MVC Applications, cited above. Specifically, I got completely lost around the point at which the author begins describing the implementation of a Service layer, about halfway into the document.
Anyway, I am now looking for more resources to study, in order to try to get various perspectives around this stuff until it begins to make sense to me.
Summarizing all of this, boiling it down to specific questions, I am wondering the following:
Can anyone point me toward some good, detailed sources from which I can learn how to implement testing in my MVC 3 application, using NUnit, Ninject 2, and Moq?
Can anyone here help clarify for me how Controller-Repository decoupling, mocking, and dependency injection work together?
EDIT:
I just discovered the official Ninject wiki on GitHub, so I'm going to start working through that to see if it clarifies things for me. But I'm still very interested in the SO community's thoughts on all of this :)
If you are using the Ninject.MVC3 NuGet package, then some of the article you linked that was causing confusion will not be required. That package has everything you need to start injecting your controllers, which is probably the biggest pain point.
Upon installing that package, it will create a NinjectMVC3.cs file in the App_Start folder; inside that class is a RegisterServices method. This is where you should create the bindings between your interfaces and your implementations:
private static void RegisterServices(IKernel kernel)
{
    kernel.Bind<IRepository>().To<MyRepositoryImpl>();
    kernel.Bind<IWebData>().To<MyWebDataImpl>();
}
Now in your controller you can use constructor injection:
public class HomeController : Controller {
    private readonly IRepository _Repo;
    private readonly IWebData _WebData;

    public HomeController(IRepository repo, IWebData webData) {
        _Repo = repo;
        _WebData = webData;
    }
}
If you are after very high test coverage, then basically any time one logical piece of code (say, a controller) needs to talk to another (say, a database), you should create an interface and implementation, add the binding to RegisterServices, and add a new constructor argument.
This applies not only to Controllers but to any class; so, in the example above, if your repository implementation needed an instance of IWebData for something, you would add the readonly field and the constructor parameter to your repository implementation, as sketched below.
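For example, a minimal sketch of that, reusing the MyRepositoryImpl and IWebData names from the binding above:
public class MyRepositoryImpl : IRepository
{
    private readonly IWebData _WebData;

    // Ninject resolves IWebData from the binding in RegisterServices,
    // exactly as it does for the controller's constructor arguments.
    public MyRepositoryImpl(IWebData webData)
    {
        _WebData = webData;
    }
}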
Then when it comes to testing, what you want to do is provide mocked versions of all required interfaces, so that the only thing you are testing is the code in the method you are writing the test for. So in my example, say that IRepository has a
bool TryCreateUser(string username);
which is called by a controller method:
public ActionResult CreateUser(string username) {
if (_Repo.TryCreateUser(username))
return RedirectToAction("CreatedUser");
else
return RedirectToAction("Error");
}
What you are really trying to test here is that if statement and the returned results; you do not want to have to create a real repository that will return true or false based on special values you give it. This is where you want to mock.
public void TestCreateUserSucceeds() {
    var repo = new Mock<IRepository>();
    repo.Setup(d => d.TryCreateUser(It.IsAny<string>())).Returns(true);
    var webData = new Mock<IWebData>();

    var controller = new HomeController(repo.Object, webData.Object);
    var result = controller.CreateUser("test");

    Assert.IsNotNull(result);
    Assert.IsInstanceOf<RedirectToRouteResult>(result);
    Assert.AreEqual("CreatedUser", ((RedirectToRouteResult)result).RouteValues["action"]);
}
^ Double-check the assertion names there; I know xUnit better than NUnit. Note that in MVC a RedirectToAction call produces a RedirectToRouteResult, whose RouteValues dictionary holds the action name.
So to sum up, if you want one piece of code to talk to another, whack an interface in between. This then allows you to mock the second piece of code, so that when you test the first you can control the output and be sure you are testing only the code in question.
I think it was this point that really made the penny drop for me with all this: you do this not necessarily because the code demands it, but because the testing demands it.
One last piece of advice specific to MVC: any time you need to access the basic web objects (HttpContext, HttpRequest, etc.), wrap these behind an interface as well (like the IWebData in my example), because while you can mock them using the *Base classes, it becomes painful very quickly, as they have a lot of internal dependencies you also need to mock.
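A minimal sketch of such a wrapper (the member names here are illustrative, not from a real library):
public interface IWebData
{
    string UserAgent { get; }
    string QueryString(string key);
}

public class MyWebDataImpl : IWebData
{
    // Thin pass-through over the real web objects; everything above this
    // class depends only on IWebData, which is trivial to mock.
    public string UserAgent
    {
        get { return HttpContext.Current.Request.UserAgent; }
    }

    public string QueryString(string key)
    {
        return HttpContext.Current.Request.QueryString[key];
    }
}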
Also, with Moq, set the MockBehavior to Strict when creating mocks, and it will tell you if anything is being called that you have not provided a setup for.
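For example:
// With MockBehavior.Strict, any call that has no matching Setup throws a
// MockException instead of silently returning a default value.
var repo = new Mock<IRepository>(MockBehavior.Strict);
repo.Setup(d => d.TryCreateUser(It.IsAny<string>())).Returns(true);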
Here is the application that I'm creating. It is open source and available on github, and utilizes all of the required stuff - MVC3, NUnit, Moq, Ninject - https://github.com/alexanderbeletsky/trackyt.net/tree/master/src
Controller-Repository decoupling is simple. All data operations are moved into the Repository. The Repository is an implementation of some IRepository type. The controller never creates repositories itself (with the new operator) but rather receives them, either as a constructor argument or via a property.
public class HomeController {
    public HomeController(IUserRepository users) {
    }
}
This technique is called "Inversion of Control." To support inversion of control you have to provide some "Dependency Injection" framework. Ninject is a good one. Inside Ninject you associate some particular interface with an implementation class:
Bind<IUserRepository>().To<UserRepository>();
You also substitute the default controller factory with your custom one. Inside the custom one you delegate the call to the Ninject kernel:
public class TrackyControllerFactory : DefaultControllerFactory
{
    private IKernel _kernel = new StandardKernel(new TrackyServices());

    protected override IController GetControllerInstance(
        System.Web.Routing.RequestContext requestContext,
        Type controllerType)
    {
        if (controllerType == null)
        {
            return null;
        }
        return _kernel.Get(controllerType) as IController;
    }
}
When the MVC infrastructure is about to create a new controller, the call is delegated to the custom controller factory's GetControllerInstance method, which delegates it to Ninject. Ninject sees that the controller's constructor takes one argument, of type IUserRepository, and from the declared binding it knows to create a UserRepository to satisfy that need. It creates the instance and passes it to the constructor.
The constructor is never aware of exactly which instance will be passed in. It all depends on the binding you provide.
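That indifference is also what makes the controller testable: a test can play the role of Ninject and pass in any implementation it likes. A minimal sketch, assuming a hand-written FakeUserRepository test double:
[Test]
public void HomeControllerWorksAgainstAFakeRepository()
{
    // No kernel, no controller factory: the test does the injecting.
    IUserRepository fake = new FakeUserRepository(); // hypothetical test double
    var controller = new HomeController(fake);
    // ...invoke actions on the controller and assert on the results.
}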
Code examples:
https://github.com/alexanderbeletsky/trackyt.net/blob/master/src/Web/Infrastructure/TrackyServices.cs
https://github.com/alexanderbeletsky/trackyt.net/blob/master/src/Web/Infrastructure/TrackyControllerFactory.cs
https://github.com/alexanderbeletsky/trackyt.net/blob/master/src/Web/Controllers/LoginController.cs
Check it out: DDD Melbourne video - New development workflow
The whole ASP.NET MVC 3 development process was very well presented.
The third-party tools I like most are:
Using NuGet to install Ninject to enable DI throughout the MVC3 framework
Using NuGet to install NSubstitute to create mocks to enable unit testing
Sorry for the long post...
While being introduced to a brownfield project, I'm having doubts regarding certain sets of unit tests and what to think of them. Say you have a repository class wrapping a stored procedure, and in the developer guide book a certain set of guidelines (rules) describes how this class should be constructed. The class could look like the following:
public class PersonRepository
{
    public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
    {
        using (new SomeProfiler("someKey"))
        {
            var sp = Ioc.Resolve<IPersonStoredProcedure>();
            sp.addNameArgument(personName);
            sp.addCityArgument(cityName);
            return sp.invoke();
        }
    }
}
Now, I would of course write some integration tests, testing that the SP can be invoked, and that the behavior is as expected. However, would I write unit tests that assert that:
The constructor for SomeProfiler is called with the input parameter "someKey"
The constructor of PersonStoredProcedure is called
The addNameArgument method on the stored procedure is called with parameter personName
The addCityArgument method on the stored procedure is called with parameter cityName
The invoke method is called on the stored procedure
If so, I would potentially be testing the whole structure of a method, in addition to its behavior. My initial thought is that this is overkill. However, with regard to the coding practices enforced by the team, these tests ensure a uniform and 'correct' structure, and that the next layer is called correctly (from DAL to DB, BLL to DAL, etc.).
In my case, these types of tests are performed for each layer of the application.
Follow-up question: the use of the SomeProfiler class smells a little like a convention to me. Instead of creating explicit tests for this, could one create convention-style tests using static code analysis or unit tests + reflection?
Thanks in advance.
I think that your initial thought was right: this is overkill. Although you can use reflection to make sure that the class has the methods you expect, I'm not sure you want to test it that way.
Perhaps instead of unit testing you should use a tool such as FxCop/StyleCop or NDepend to make sure all of the classes in a specific assembly/DLL have these properties.
Having said that, I'm a believer in "only code what you need": why test that a method exists? Either you use it somewhere in your code, in which case you can test that specific case, or you don't, and so it's irrelevant.
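If you did decide to go the reflection route despite that, a convention-style test might look something like this sketch (the naming rule checked here is purely illustrative):
[Test]
public void AllRepositoriesFollowTheFinderConvention()
{
    // One test scans the assembly instead of one structural test per class.
    var repositoryTypes = typeof(PersonRepository).Assembly
        .GetTypes()
        .Where(t => t.Name.EndsWith("Repository"));

    foreach (var type in repositoryTypes)
    {
        // Example convention: every repository exposes at least one Find* method.
        Assert.IsTrue(
            type.GetMethods().Any(m => m.Name.StartsWith("Find")),
            type.Name + " has no public Find* method");
    }
}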
Unit tests should focus on behavior, not implementation. So writing a test to verify that certain arguments are set or passed in doesn't add much value to your testing strategy.
As the example provided appears to communicate with your database, it can't truly be considered a "unit test", as it must talk to physical dependencies that have additional setup and preconditions, such as availability of the environment, database schema, existing data, stored procedures, etc. Any test you write is actually verifying these preconditions as well.
In its present condition, your best bet for these types of tests is to test the behavior provided by the class: invoke a method on your repository and then validate that the results are what you expected. However, you'll suddenly realize that there's a hidden cost here: the database maintains state between test runs, and you'll need additional setup or tear-down logic to ensure that the database is in a well-known state.
While I realize the intent of the question was about testing a "black box", it seems obvious that there's some hidden magic in your API. My preference for solving the well-known-state problem is to use an in-memory database that is scoped to the current test, which isolates me from environment considerations and enables me to parallelize my integration tests. I'd wager that under the current design there is no "seam" to programmatically introduce a database configuration, so you're "hemmed in". In my experience, magic hurts.
However, a slight change to the existing design solves this problem and the "magic" goes away:
public class PersonRepository : IPersonRepository
{
    private ConnectionManager _mgr;

    public PersonRepository(ConnectionManager mgr)
    {
        _mgr = mgr;
    }

    public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
    {
        using (var p = _mgr.CreateProfiler("somekey"))
        {
            var sp = new PersonStoredProcedure(p);
            sp.addArgument("name", personName);
            sp.addArgument("city", cityName);
            return sp.invoke();
        }
    }
}
This is a tough one, because not too many people use Pex & Moles, or so I think (even though Pex is a really great product, much better than any other unit testing tool).
I have a Data project that has a very simple model with just one entity (DBItem). I've also written a DBRepository within this project that manipulates this EF model. The repository has a method called GetItems() that returns a list of business-layer items (BLItem) and looks similar to this (simplified example):
public IList<BLItem> GetItems()
{
    using (var ctx = new EFContext("name=MyWebConfigConnectionName"))
    {
        DateTime limit = DateTime.Today.AddDays(-10);
        List<DBItem> result = ctx.Items.Where(i => i.Changed > limit).ToList();
        return result.ConvertAll(i => i.ToBusinessObject());
    }
}
So now I'd like to create some unit tests for this particular method. I'm using Pex & Moles. I created my moles and stubs for my EF object context.
I would like to write a parametrised unit test (I know I've written my production code first, but I had to, since I'm testing Pex & Moles) that tests that this method returns a valid list of items.
This is my test class:
[PexClass]
public class RepoTest
{
    [PexMethod]
    public void GetItemsTest(ObjectSet<DBItem> items)
    {
        MEFContext.ConstructorString = (@this, name) => {
            var mole = new SEFContext();
        };

        DBRepository repo = new DBRepository();
        IList<BLItem> result = repo.GetItems();
        IList<DBItem> manual = items.Where(i => i.Changed > DateTime.Today.AddDays(-10)).ToList();

        if (result.Count != manual.Count)
        {
            throw new Exception();
        }
    }
}
Then I run Pex explorations for this particular parametrised unit test, but I get a "path bounds exceeded" error. Pex starts this test by providing null to the test method (so items = null). This is the code that Pex is running:
[Test]
[PexGeneratedBy(typeof(RepoTest))]
[Ignore("the test state was: path bounds exceeded")]
public void DBRepository_GetTasks22301()
{
    this.GetItemsTest((ObjectSet<DBItem>)null);
}
This was the additional comment provided by Pex:
The test case ran too long for these inputs, and Pex stopped the analysis. Please notice: The method Oblivious.Data.Test.Repositories.TaskRepositoryTest.b__0 was called 50 times; please check that the code is not stuck in an infinite loop or recursion. Otherwise, click on 'Set MaxStack=200', and run Pex again.
Update the attribute to [PexMethod(MaxStack = 200)].
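That is, the parametrised test's attribute becomes:
[PexMethod(MaxStack = 200)]
public void GetItemsTest(ObjectSet<DBItem> items)
{
    // ... body as above ...
}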
Question
Am I doing this the correct way, or not? Should I use an EFContext stub instead? Do I have to add additional attributes to the test method so the Moles host will be running (I'm not sure it does now)? I'm running just Pex & Moles; no VS test or NUnit or anything else.
I guess I should probably set some limit on how many items Pex should provide for this particular test method.
Moles is not designed to test the parts of your application that have external dependencies (e.g. file access, network access, database access, etc.). Instead, Moles allows you to mock these parts of your app so that you can do true unit testing on the parts that don't have external dependencies.
So I think you should just mock your EF objects and queries, e.g., by creating in-memory lists and having query methods return fake data from those lists based on whatever criteria is relevant.
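As a sketch of that idea (the IItemsContext seam is an assumption here; the production GetItems would need to be routed through it, or the EF context moled to return the fake data):
// Hypothetical seam over the EF context so the query source can be faked.
public interface IItemsContext
{
    IQueryable<DBItem> Items { get; }
}

public class FakeItemsContext : IItemsContext
{
    private readonly List<DBItem> _items;

    public FakeItemsContext(IEnumerable<DBItem> items)
    {
        _items = items.ToList();
    }

    // LINQ-to-Objects stands in for LINQ-to-Entities in tests.
    public IQueryable<DBItem> Items
    {
        get { return _items.AsQueryable(); }
    }
}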
I am just getting to grips with Pex also... my issues surrounded me wanting to use it with Moq ;)
Anyway...
I have some methods similar to yours that have the same problem. When I increased the max, the errors went away; presumably Pex was satisfied that it had sufficiently explored the branches. I have methods where I have had to increase the timeout on the code contract validation as well.
One thing that you should probably be doing, though, is passing in all the dependent objects as parameters, i.e., don't instantiate the repo in the method, but pass it in.
A general problem you have is that you are instantiating big objects in your methods. I do the same in my DAL classes, but then I am not trying to unit test them in isolation. I build up datasets and use those to test my data access code against.
I use Pex on my business logic and objects.
If I were to try to test my DAL code, I'd have to use IoC to pass the data context into the methods, which would then make testing possible, as you can mock the data context, as sketched below.
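In other words, something like this sketch, reusing the hypothetical IItemsContext seam from the previous answer:
public class DBRepository
{
    private readonly IItemsContext _ctx;

    // The data context arrives from outside (IoC), so a test can pass a fake.
    public DBRepository(IItemsContext ctx)
    {
        _ctx = ctx;
    }

    public IList<BLItem> GetItems()
    {
        DateTime limit = DateTime.Today.AddDays(-10);
        return _ctx.Items.Where(i => i.Changed > limit)
            .ToList()
            .ConvertAll(i => i.ToBusinessObject());
    }
}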
You should use the Entity Framework Repository Pattern: http://www.codeproject.com/KB/database/ImplRepositoryPatternEF.aspx
A few weeks ago I jumped on the MEF (ComponentModel) bandwagon, and am now using it for a lot of my plugins and shared libraries. Overall, it's been great, aside from the frequent mistakes on my part that result in frustrating debugging sessions.
Anyhow, my app has been running great, but my MEF-related code changes have caused my automated builds to fail. Most of my unit tests were failing simply because the modules I was testing were dependent upon other modules that needed to be loaded by MEF. I worked around these situations by bypassing MEF and directly instantiating those objects.
In other words, via MEF I would have something like
[Import]
public ICandyInterface ci { get; set; }
and
[Export(typeof(ICandyInterface))]
public class MyCandy : ICandyInterface
{
    [ImportingConstructor]
    public MyCandy([Import("name_param")] string name) {}
    ...
}
But in my unit tests, I would just use
ICandyInterface candy = new MyCandy("Godiva");
In addition, MyCandy requires a connection to a database, which I have worked around by just adding a test database to my unit test folder, and I have NUnit use that for all of the tests.
Ok, so here are my questions regarding this situation:
Is this a Bad Way to do things?
Would you recommend composing parts in [SetUp]?
I haven't yet learned how to use mocks in unit testing -- is this a good example of a case where I might want to mock the underlying database connection (somehow) to just return dummy data and not really require a database?
If you've encountered something like this before, can you offer your experience and the way you solved your problem? (or should this go into the community wiki?)
It sounds like you are on the right track. A unit test should test a unit, and that's what you do when you directly create instances. If you let MEF compose instances for you, they would tend towards integration tests. Not that there's anything wrong with integration tests, but unit tests tend to be more maintainable because you test each unit in isolation.
You don't need a container to wire up instances in unit tests.
I generally recommend against composing Fixtures in SetUp, as it leads to the General Fixture anti-pattern.
It is best practice to replace dependencies with Test Doubles. Dynamic mocks are one of the more versatile ways of doing this, so that's definitely something you should learn.
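For instance, if the database access behind MyCandy were pulled out into an interface, a Moq-based test double could stand in for it. A sketch (the ICandyStore interface, the extra constructor, and the Flavor property are all assumptions made for illustration):
// Hypothetical abstraction standing in for the real database access.
public interface ICandyStore
{
    string LookupFlavor(string name);
}

[Test]
public void MyCandyResolvesItsFlavorThroughTheStore()
{
    var store = new Mock<ICandyStore>();
    store.Setup(s => s.LookupFlavor("Godiva")).Returns("dark chocolate");

    // Assumes MyCandy gains a constructor that accepts the store.
    var candy = new MyCandy("Godiva", store.Object);

    Assert.AreEqual("dark chocolate", candy.Flavor);
}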
I agree that creating the DOCs (depended-on components) manually is much better than using the MEF composition container to satisfy imports, but regarding the note that composing fixtures in setup leads to the General Fixture anti-pattern, I want to mention that that's not always the case.
If you're using the static container and satisfy imports via CompositionInitializer.SatisfyImports, you will have to face the General Fixture anti-pattern, as CompositionInitializer.Initialize cannot be called more than once. However, you can always create a CompositionContainer, add catalogs, and call SatisfyImportsOnce on the container itself. In that case you can use a new CompositionContainer in every test and avoid the shared/General Fixture anti-pattern.
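A minimal sketch of that per-test approach (the assembly scanned is whichever one contains your exports; MyCandy is used as a stand-in here):
[SetUp]
public void CreateFreshContainer()
{
    // A brand-new container per test: no fixture shared between tests.
    var catalog = new AssemblyCatalog(typeof(MyCandy).Assembly);
    var container = new CompositionContainer(catalog);
    container.SatisfyImportsOnce(this); // satisfies the [Import]s on the test class
}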
I blogged on how to do unit tests with MEF (not NUnit, but it works just the same).
The trick was to use a MockExportProvider, and I created a test base for all my tests to inherit from.
This is my main AutoWire function, which works for integration and unit tests:
protected void AutoWire(MockExportProvider mocksProvider, params Assembly[] assemblies)
{
    CompositionContainer container = null;
    var assCatalogs = new List<AssemblyCatalog>();
    foreach (var a in assemblies)
    {
        assCatalogs.Add(new AssemblyCatalog(a));
    }

    if (mocksProvider != null)
    {
        var providers = new List<ExportProvider>();
        providers.Add(mocksProvider); // need to use the mocks provider before the assembly ones
        foreach (var ac in assCatalogs)
        {
            var assemblyProvider = new CatalogExportProvider(ac);
            providers.Add(assemblyProvider);
        }
        container = new CompositionContainer(providers.ToArray());

        // Must set the source provider for each CatalogExportProvider back to
        // the container (kinda stupid, but apparently no way around this).
        foreach (var p in providers)
        {
            if (p is CatalogExportProvider)
            {
                ((CatalogExportProvider)p).SourceProvider = container;
            }
        }
    }
    else
    {
        container = new CompositionContainer(new AggregateCatalog(assCatalogs));
    }
    container.ComposeParts(this);
}
More info in my post: https://yoavniran.wordpress.com/2012/10/18/unit-testing-wcf-and-mef/