We mainly use class injection these days, and that works great with unit tests, but there is some (actually a lot of) code that calls the container directly, and this causes problems when running tests.
In each test class I have a SetupData() method that runs once per class and adds the data for that class's tests. This works fine as long as I do not run multiple test classes one after another. If I do, I get an error telling me that the container is locked.
private void SetupData()
{
    TestEntityviewCache evCache;
    if (!_dataIsSetup)
    {
        evCache = new TestEntityviewCache();
        IOC.Current.RegisterSingleton<IEntityViewCache>(evCache);
        EntityView.GetEntityViewListMethod = (key) => evCache.Get(key);
        EntityView.GetEntityViewListByIdMethod = (key) => evCache.Get(key);
    }

    evCache = IOC.Current.Get<TestEntityviewCache>();
    evCache.Update(EntityViewKey.Mappning, false, MyModelData.CreateMappning(), null);
    evCache.Update(EntityViewKey.Kontrakt, false, MyModelData.CreateKontrakt(), null);
    evCache.Update(EntityViewKey.Folder, false, MyModelData.CreateFolder(), null);
    evCache.Update(EntityViewKey.Kontaktgrupp, false, MyModelData.CreateKontaktGrupp(), null);
}
I have tried resetting or recreating the container for each test class, but I can't find a solution.
The best solution would be to replace all direct IOC calls, but that is too much work at this point.
In this case I had to run the tests in sequence.
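For what it's worth, here is a minimal sketch of how the sequential run can be enforced, assuming the tests run under NUnit 3 (the question does not say which framework is used; the attributes below are NUnit's, and the fixture name is made up for the example):
using NUnit.Framework;

// Assembly level: run all fixtures on a single worker thread.
[assembly: LevelOfParallelism(1)]

// Or per fixture: keep this class out of any parallel run so two test
// classes never register into the shared IOC container at the same time.
[NonParallelizable]
[TestFixture]
public class MappningTests
{
    [OneTimeSetUp]
    public void SetupData()
    {
        // ... register the TestEntityviewCache as in the snippet above ...
    }
}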
How do you configure a unit testing framework to help develop code that is part of AnyLogic agents?
To have a suitable test driven development rhythm, I need to be able to run all tests in a few seconds. I thought of exporting the project as a standalone application (jar) each time, but that's pretty slow.
I thought of trying to write all the code outside AnyLogic in separate classes, but there are many references to built-in AnyLogic classes, as well as various agents. My code would need to refer to these somehow, and I'm not sure how to do that except by writing the code inside AnyLogic.
I wonder if there's a way of adding the test runner as a dependency, and executing that test runner from within AnyLogic.
Does anyone have a setup that works nicely?
This definitely requires some advanced Java, but testing, especially unit testing, is too often neglected when building good, robust models. I hope this simple example is enough to get you (and lots of other modellers) going.
For JUnit testing, we make use of two libraries that you can add as dependencies to your model.
Now there are two main types of logic that you will want to test in simulation models:
1. Functions in Java classes
2. Model execution
Type 1: Suppose I have this very simple Java class
public class MyClass {

    public MyClass() {
    }

    public boolean getResult() {
        return true;
    }
}
And I want to test the function getResult()
I can simply create a new class with a function that I annotate with the @Test annotation, and then make use of the assertEquals() method, which is standard in JUnit testing.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class MyTestClass {

    @Test
    public void testMyClassFunction1() {
        boolean result = new MyClass().getResult();
        assertEquals("The value of the test class 1", true, result);
    }
}
Now comes the AnyLogic-specific implementation (there are other ways to do this, but this is the easiest/most useful, as you will see in a minute).
You need to create a custom experiment that runs the JUnit tests and prints the result.
Now if you run this custom experiment from the Run Model button you will get this output:
SUCCESS
Run: 1
Failed: 0
You can obviously update and change the output to your liking.
Type 2: Suppose we have this very simple model, where the function getResult() simply returns an int of 2.
Now we need to create another custom experiment (SingleRun) that runs this model and returns the result.
And then we can write a test to run this Custom Experiment and check the result
Simply add the following to your MyTestClass:
@Test
public void testMyClassFunction2() {
    int result = new SingleRun(null).runExperiment();
    assertEquals("Value of a single run", 2, result);
}
And now if you run the RunAllTests custom experiment it will give you this output:
SUCCESS
Run: 2
Failed: 0
This is just the beginning; you can read up tons on using JUnit to your advantage.
This is a tough one because not too many people use Pex & Moles or so I think (even though Pex is a really great product - much better than any other unit testing tool)
I have a Data project that has a very simple model with just one entity (DBItem). I've also written a DBRepository within this project, that manipulates this EF model. Repository has a method called GetItems() that returns a list of business layer items (BLItem) and looks similar to this (simplified example):
public IList<BLItem> GetItems()
{
    using (var ctx = new EFContext("name=MyWebConfigConnectionName"))
    {
        DateTime limit = DateTime.Today.AddDays(-10);
        List<DBItem> result = ctx.Items.Where(i => i.Changed > limit).ToList();
        return result.ConvertAll(i => i.ToBusinessObject());
    }
}
So now I'd like to create some unit tests for this particular method. I'm using Pex & Moles. I created my moles and stubs for my EF object context.
I would like to write a parametrised unit test (I know I've written my production code first, but I had to, since I'm testing Pex & Moles) that tests that this method returns a valid list of items.
This is my test class:
[PexClass]
public class RepoTest
{
    [PexMethod]
    public void GetItemsTest(ObjectSet<DBItem> items)
    {
        MEFContext.ConstructorString = (@this, name) =>
        {
            var mole = new SEFContext();
        };

        DBRepository repo = new DBRepository();
        IList<BLItem> result = repo.GetItems();
        IList<DBItem> manual = items.Where(i => i.Changed > DateTime.Today.AddDays(-10)).ToList();

        if (result.Count != manual.Count)
        {
            throw new Exception();
        }
    }
}
Then I run Pex Explorations for this particular parametrised unit test, but I get an error: path bounds exceeded. Pex starts this test by providing null to the test method (so items = null). This is the code that Pex is running:
[Test]
[PexGeneratedBy(typeof(RepoTest))]
[Ignore("the test state was: path bounds exceeded")]
public void DBRepository_GetTasks22301()
{
    this.GetItemsTest((ObjectSet<DBItem>)null);
}
This was additional comment provided by Pex:
The test case ran too long for these inputs, and Pex stopped the analysis. Please notice: The method Oblivious.Data.Test.Repositories.TaskRepositoryTest.b__0 was called 50 times; please check that the code is not stuck in an infinite loop or recursion. Otherwise, click on 'Set MaxStack=200', and run Pex again.
Update the attribute to [PexMethod(MaxStack = 200)].
Question
Am I doing this the correct way or not? Should I use an EFContext stub instead? Do I have to add additional attributes to the test method so the Moles host will be running (I'm not sure it does now)? I'm running just Pex & Moles - no VS test or NUnit or anything else.
I guess I should probably set some limit on how many items Pex should provide for this particular test method.
Moles is not designed to test the parts of your application that have external dependencies (e.g. file access, network access, database access, etc). Instead, Moles allows you to mock these parts of your app so that way you can do true unit testing on the parts that don't have external dependencies.
So I think you should just mock your EF objects and queries, e.g., by creating in-memory lists and having query methods return fake data from those lists based on whatever criteria is relevant.
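For illustration only (none of these types come from the question): if the repository is given a seam to query against, an in-memory fake can stand in for the EF context, roughly like this:
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical seam: the repository asks this for items instead of
// newing up an EFContext itself.
public interface IItemSource
{
    IQueryable<DBItem> Items { get; }
}

// In-memory fake backed by a plain list - no database involved.
public class InMemoryItemSource : IItemSource
{
    private readonly List<DBItem> _items = new List<DBItem>();

    public void Add(DBItem item)
    {
        _items.Add(item);
    }

    public IQueryable<DBItem> Items
    {
        get { return _items.AsQueryable(); }
    }
}

// In a test: seed the fake with one item inside the 10-day window and one outside it,
// run the same Where clause, and assert that only one item comes back.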
I am just getting to grips with Pex also ... my issues surrounded me wanting to use it with Moq ;)
anyway ...
I have some methods similar to yours that have the same problem. When I increased the max they went away - presumably Pex was satisfied that it had sufficiently explored the branches. I have methods where I have had to increase the timeout on the code contract validation as well.
One thing that you should probably be doing, though, is passing in all the dependent objects as parameters ... i.e. don't instantiate the repo in the method but pass it in.
A general problem you have is that you are instantiating big objects in your method. I do the same in my DAL classes, but then I am not trying to unit test them in isolation. I build up datasets and use those to test my data access code against.
I use Pex on my business logic and objects.
If I were to try and test my DAL code, I'd have to use IOC to pass the data context into the methods - which would then make testing possible, as you can mock the data context.
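A rough sketch of that idea (the IEFContext abstraction and the constructor below are made up here; they are not part of the original code): inject a factory for the context so a test can substitute a fake or a mock.
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical abstraction over the EF context.
public interface IEFContext : IDisposable
{
    IQueryable<DBItem> Items { get; }
}

public class DBRepository
{
    private readonly Func<IEFContext> _contextFactory;

    // Production code passes a factory that opens the real context;
    // a test passes one that returns an in-memory fake or a mock.
    public DBRepository(Func<IEFContext> contextFactory)
    {
        _contextFactory = contextFactory;
    }

    public IList<BLItem> GetItems()
    {
        using (var ctx = _contextFactory())
        {
            DateTime limit = DateTime.Today.AddDays(-10);
            return ctx.Items
                      .Where(i => i.Changed > limit)
                      .ToList()
                      .ConvertAll(i => i.ToBusinessObject());
        }
    }
}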
You should use Entity Framework Repository Pattern: http://www.codeproject.com/KB/database/ImplRepositoryPatternEF.aspx
A few weeks ago I jumped on the MEF (ComponentModel) bandwagon, and am now using it for a lot of my plugins and also shared libraries. Overall, it's been great aside from the frequent mistakes on my part, which result in frustrating debugging sessions.
Anyhow, my app has been running great, but my MEF-related code changes have caused my automated builds to fail. Most of my unit tests were failing simply because the modules I was testing were dependent upon other modules that needed to be loaded by MEF. I worked around these situations by bypassing MEF and directly instantiating those objects.
In other words, via MEF I would have something like
[Import]
public ICandyInterface ci { get; set; }
and
[Export(typeof(ICandyInterface))]
public class MyCandy : ICandyInterface
{
    [ImportingConstructor]
    public MyCandy([Import("name_param")] string name) { }
    ...
}
But in my unit tests, I would just use
ICandyInterface candy = new MyCandy("Godiva");
In addition, the CandyInterface requires a connection to a database, which I have worked around by just adding a test database to my unit test folder, and I have NUnit use that for all of the tests.
Ok, so here are my questions regarding this situation:
Is this a Bad Way to do things?
Would you recommend composing parts in [SetUp]?
I haven't yet learned how to use mocks in unit testing -- is this a good example of a case where I might want to mock the underlying database connection (somehow) to just return dummy data and not really require a database?
If you've encountered something like this before, can you offer your experience and the way you solved your problem? (or should this go into the community wiki?)
It sounds like you are on the right track. A unit test should test a unit, and that's what you do when you directly create instances. If you let MEF compose instances for you, they would tend towards integration tests. Not that there's anything wrong with integration tests, but unit tests tend to be more maintainable because you test each unit in isolation.
You don't need a container to wire up instances in unit tests.
I generally recommend against composing Fixtures in SetUp, as it leads to the General Fixture anti-pattern.
It is best practice to replace dependencies with Test Doubles. Dynamic mocks are one of the more versatile ways of doing this, so definitely something you should learn.
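As a small example of what a dynamic mock looks like in this situation (using Moq purely for illustration; the GetName() member and the CandyConsumer class are invented for the sketch, not part of the question):
using Moq;
using NUnit.Framework;

[TestFixture]
public class CandyConsumerTests
{
    [Test]
    public void Consumer_describes_the_candy_it_is_given()
    {
        // A dynamic mock stands in for the MEF-composed ICandyInterface export.
        var candy = new Mock<ICandyInterface>();
        candy.Setup(c => c.GetName()).Returns("Godiva"); // GetName() is a hypothetical member

        // Hand the mock to the class under test instead of letting MEF compose it.
        var consumer = new CandyConsumer(candy.Object);  // CandyConsumer is hypothetical too
        var label = consumer.DescribeCandy();

        Assert.AreEqual("Candy: Godiva", label);
        candy.Verify(c => c.GetName(), Times.Once());
    }
}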
I agree that creating the DOCs manually is much better than using the MEF composition container to satisfy imports, but regarding the note that 'composing fixtures in setup leads to the General Fixture anti-pattern' - I want to mention that that's not always the case.
If you're using the static container and satisfy imports via CompositionInitializer.SatisfyImports, you will have to face the General Fixture anti-pattern, since CompositionInitializer.Initialize cannot be called more than once. However, you can always create a CompositionContainer, add catalogs, and call SatisfyImportsOnce on the container itself. In that case you can use a new CompositionContainer in every test and avoid the shared/General Fixture anti-pattern.
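A rough per-test sketch of that approach (which assembly goes into the catalog, and the "name_param" export value, are just placeholders based on the question):
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using NUnit.Framework;

[TestFixture]
public class PerTestContainerTests
{
    [Import]
    public ICandyInterface Candy { get; set; }

    [Test]
    public void Imports_are_satisfied_from_a_fresh_container()
    {
        // A brand-new container per test, so no shared (general) fixture.
        var catalog = new AssemblyCatalog(typeof(MyCandy).Assembly);
        using (var container = new CompositionContainer(catalog))
        {
            // Provide any values the parts need, e.g. the "name_param" string import.
            container.ComposeExportedValue("name_param", "Godiva");

            // Satisfy this test's [Import]s from the fresh container.
            container.SatisfyImportsOnce(this);

            Assert.IsNotNull(Candy);
        }
    }
}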
I blogged on how to do unit tests (not NUnit, but it works just the same) with MEF.
The trick was to use a MockExportProvider, and I created a test base for all my tests to inherit from.
This is my main AutoWire function that works for integration and unit tests:
protected void AutoWire(MockExportProvider mocksProvider, params Assembly[] assemblies)
{
    CompositionContainer container = null;
    var assCatalogs = new List<AssemblyCatalog>();

    foreach (var a in assemblies)
    {
        assCatalogs.Add(new AssemblyCatalog(a));
    }

    if (mocksProvider != null)
    {
        var providers = new List<ExportProvider>();
        providers.Add(mocksProvider); // need to use the mocks provider before the assembly ones

        foreach (var ac in assCatalogs)
        {
            var assemblyProvider = new CatalogExportProvider(ac);
            providers.Add(assemblyProvider);
        }

        container = new CompositionContainer(providers.ToArray());

        // must set the source provider for CatalogExportProvider back to the container
        // (kinda stupid but apparently no way around this)
        foreach (var p in providers)
        {
            if (p is CatalogExportProvider)
            {
                ((CatalogExportProvider)p).SourceProvider = container;
            }
        }
    }
    else
    {
        container = new CompositionContainer(new AggregateCatalog(assCatalogs));
    }

    container.ComposeParts(this);
}
More info on my post: https://yoavniran.wordpress.com/2012/10/18/unit-testing-wcf-and-mef/
I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it looks deceiving when both Add() and Get() fail because it looks like there's two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, in that if the first test fails, the following tests are ignored?
Along the same lines, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations first, before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory (thanks clairestreb). In the NUnit.Framework namespace is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
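A minimal sketch of that, assuming NUnit; Database and TestData are stand-in names for whatever the question's db and data types actually are:
using NUnit.Framework;

[TestFixture]
public class GetTests
{
    private Database db;        // stand-in for the question's db object
    private TestData someData;  // stand-in for the data type

    [SetUp]
    public void SeedKnownData()
    {
        db = new Database();
        someData = new TestData("known-value");

        // Arrange the data Get() relies on right here, instead of
        // depending on AddTest having run (and passed) first.
        db.Add(someData);
    }

    [Test]
    public void GetTest()
    {
        var retrieved = db.Get(someData);
        Assert.IsNotNull(retrieved);
    }
}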
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
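A sketch of what that seam might look like (IDataStore and the method shapes are invented for the example):
using System.Collections.Generic;

// The production code depends on this interface rather than a live connection.
public interface IDataStore
{
    void Add(TestData item);
    TestData Get(TestData key);
}

// Hand-rolled test double: keeps everything in memory, never opens a connection.
public class FakeDataStore : IDataStore
{
    private readonly Dictionary<TestData, TestData> _items = new Dictionary<TestData, TestData>();

    public void Add(TestData item)
    {
        _items[item] = item;
    }

    public TestData Get(TestData key)
    {
        TestData value;
        return _items.TryGetValue(key, out value) ? value : null;
    }
}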
I don't think that's possible out of the box.
In any case, your test class design as you described it will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test-class e.g. AddTests, and put the Get test(s) into another test-class, e.g. class GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an NUnit newbie.)
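Roughly, using the classic NUnit 2.x naming the answer refers to (the connection-check helper is made up; use whatever check fits your setup):
using NUnit.Framework;

[TestFixture]
public class GetTests
{
    [TestFixtureSetUp]
    public void CheckPrerequisites()
    {
        // If the prerequisite (a working connection / working Add) isn't met,
        // skip every test in this fixture instead of failing them all.
        if (!TestDatabase.CanConnect())   // hypothetical helper
        {
            Assert.Ignore("Database unavailable - skipping GetTests.");
        }
    }

    [Test]
    public void GetTest()
    {
        // ... normal Get() assertions ...
    }
}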
Create a shared flag that Add sets when it fails, and bail out of the Get test if that flag is set:
public bool addFailed = false;

[Test]
public void TestAdd()
{
    try
    {
        // ... old test code ...
    }
    catch
    {
        addFailed = true; // remember that Add failed
        throw;            // don't forget to rethrow
    }
}

[Test]
public void TestGet()
{
    if (addFailed) return;
    // ... old test code ...
}
I am using an API which interacts with a db. This API has methods for querying, loading and saving elements to the db. I have written integration tests which do things like create a new instance, then check that when I do a query for that instance, the correct instance is found. This is all fine.
I would like to have faster-running unit tests for this code, but am wondering about the usefulness of any unit tests and whether they would actually give me anything. For example, let's say I have a class for saving some element I have via the API. This is pseudocode, but it gets across the idea of how the API I am using works.
public class ElementSaver
{
    private ITheApi m_api;

    public bool SaveElement(IElement newElement, IElement linkedElement)
    {
        IntPtr elemPtr = m_api.CreateNewElement();
        if (elemPtr == IntPtr.Zero)
        {
            return false;
        }
        if (m_api.SetElementAttribute(elemPtr, newElement.AttributeName, newElement.AttributeValue) == false)
        {
            return false;
        }
        if (m_api.SaveElement(elemPtr) == false)
        {
            return false;
        }

        IntPtr linkedElemPtr = m_api.GetElementById(linkedElement.Id);
        if (linkedElemPtr == IntPtr.Zero)
        {
            return false;
        }
        if (m_api.LinkElements(elemPtr, linkedElemPtr) == false)
        {
            return false;
        }
        return true;
    }
}
Is it worth writing unit tests which mock out the m_api member? It seems that I can test that false is returned if any of the various calls fail, and that true is returned if all of them succeed, and I could set expectations that the various methods are called with the expected parameters - but is this useful? If I were to refactor this code so that it used some slightly different methods of the API but achieved the same result, this would break my tests and I would need to change them. This brittleness doesn't seem very useful.
Should I bother with unit tests for code like this, or should I just stick with the integration tests that I've got?
Look at what the tests are like. If you're only testing whether stuff that comes in ends up in the database etc., you're probably doing the right thing by only doing automated integration tests. If there's logic you want to test, then you might want to look at whether you can factor that logic out into separate classes that you can unit test, with facades around the infrastructure code that contain no logic.
It is a good idea to mock out m_api in your code for the following reasons (not all of which apply to your pseudocode example):
As you mentioned, you can verify that your class performs error handling properly (see the sketch after this list).
In cases where you have more complex code in your class (e.g., caching), you can use expectations on your mock to ensure that the class is behaving properly - for example, retrieve the same object twice but ensure that m_api is only called once.
Your unit test can test behavior without creating an appropriate data set. This increases maintainability over time as the data model underneath m_api changes.
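To make the first point concrete, here is a sketch of such an error-handling test using Moq and NUnit (both assumed; it also assumes ElementSaver takes ITheApi through its constructor and that the attribute name/value are strings, neither of which is shown in the pseudocode):
using System;
using Moq;
using NUnit.Framework;

[TestFixture]
public class ElementSaverTests
{
    [Test]
    public void SaveElement_returns_false_and_stops_when_the_attribute_cannot_be_set()
    {
        var api = new Mock<ITheApi>();
        api.Setup(a => a.CreateNewElement()).Returns(new IntPtr(1));
        api.Setup(a => a.SetElementAttribute(It.IsAny<IntPtr>(), It.IsAny<string>(), It.IsAny<string>()))
           .Returns(false); // simulate the failure path

        var saver = new ElementSaver(api.Object); // assumes the API is injected via the constructor

        bool ok = saver.SaveElement(Mock.Of<IElement>(), Mock.Of<IElement>());

        Assert.IsFalse(ok);
        // The saver should bail out early: nothing is saved or linked after the failure.
        api.Verify(a => a.SaveElement(It.IsAny<IntPtr>()), Times.Never());
        api.Verify(a => a.LinkElements(It.IsAny<IntPtr>(), It.IsAny<IntPtr>()), Times.Never());
    }
}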