How do we unit test logic in Promises.task?
task{service.method()}
I want to validate invocation of the service method inside the task.
Is this possible? If yes, how?
I read in the documentation that in unit testing async processes, one can use this:
Promises.promiseFactory = new SynchronousPromiseFactory()
I tried adding it in my setup method, but it still does not work.
The long way
I've been struggling with this for a while too.
I tried these:
grails unit test + Thread
Verify Spock mock with specified timeout
Also tried the same solution from the docs as you:
Promises.promiseFactory = new SynchronousPromiseFactory()
No luck with any of them.
The solution
So I ended up with metaclassing.
In the test's setup method, I mocked the Promises.task closure so that it runs the given closure in the current thread, not in a new one:
def setup() {
Promises.metaClass.static.task = { Closure c -> c() }
// ...more stuff if needed...
}
Thanks to that, I can test the code as if it didn't use multithreading.
Even though I'm far from 100% happy with this, I haven't found anything better so far.
In recent versions of Grails (3.2.3, for instance), there is no need to mock, metaClass, or use a promise factory: promises in unit tests are executed synchronously. I found no documentation for this, but verified it empirically by adding a sleep inside a promise and noticing that the test waited for the pause to complete.
For integration tests and functional tests, that's another story: you have to change the promise provider, for instance in BootStrap.groovy:
if (Environment.current == Environment.TEST) {
Promises.promiseFactory = new SynchronousPromiseFactory()
}
As Marcin suggested, the metaClass option is not satisfactory. Also bear in mind that previous (or future) versions of Grails are likely to behave differently.
If you are stuck on Grails 2 like dinosaurs such as me, you can copy the SynchronousPromiseFactory and SynchronousPromise classes from Grails 3 into your project, after which the following works:
Promises.promiseFactory = new Grails3SynchronousPromiseFactory()
(The class names are prefixed with Grails3 to make the hack more obvious.)
I'd simply mock/override the Promises.task method to invoke the provided closure directly.
Related
I know that stubs verify state and mocks verify behavior.
How can I create a mock in PHPUnit that verifies the behavior of methods? PHPUnit does not have verification methods (verify()), and I do not know how to create a mock in PHPUnit.
The documentation explains well how to create a stub:
// Create a stub for the SomeClass class.
$stub = $this->createMock(SomeClass::class);
// Configure the stub.
$stub
->method('doSomething')
->willReturn('foo');
// Calling $stub->doSomething() will now return 'foo'.
$this->assertEquals('foo', $stub->doSomething());
But in this case I am verifying state, by telling the stub to return an answer and asserting on it.
What would an example look like that creates a mock and verifies behavior?
PHPUnit used to support two ways of creating test doubles out of the box: next to the legacy PHPUnit mocking framework, we could choose Prophecy as well.
Prophecy support was removed in PHPUnit 9, but it can be added back by installing phpspec/prophecy-phpunit.
PHPUnit Mocking Framework
The createMock method is used to create the three best-known test doubles. It's how you configure the object that makes it a dummy, a stub, or a mock.
You can also create test stubs with the mock builder (getMockBuilder returns the mock builder). It's just another way of doing the same thing, one that lets you tweak some additional mock options with a fluent interface (see the documentation for more).
Dummy
A dummy is passed around but never actually called, or if it is called, it responds with a default answer (mostly null). It mainly exists to satisfy a list of arguments.
$dummy = $this->createMock(SomeClass::class);
// SUT - System Under Test
$sut->action($dummy);
Stub
Stubs are used with query-like methods: methods that return things, where it's not important whether they're actually called.
$stub = $this->createMock(SomeClass::class);
$stub->method('getSomething')
->willReturn('foo');
$sut->action($stub);
Mock
Mocks are used with command-like methods: it's important that they're called, and we don't care much about their return value (command methods usually don't return any value).
$mock = $this->createMock(SomeClass::class);
$mock->expects($this->once())
->method('doSomething')
->with('bar');
$sut->action($mock);
Expectations are verified automatically after your test method finishes executing. In the example above, the test will fail if the method doSomething wasn't called on SomeClass, or if it was called with arguments different from the ones you configured.
Spy
Not supported.
Prophecy
Prophecy can be used as an alternative to the legacy mocking framework (it came out of the box before PHPUnit 9; since then you need phpspec/prophecy-phpunit, as noted above). Again, it's the way you configure the object that makes it become a specific type of test double.
Dummy
$dummy = $this->prophesize(SomeClass::class);
$sut->action($dummy->reveal());
Stub
$stub = $this->prophesize(SomeClass::class);
$stub->getSomething()->willReturn('foo');
$sut->action($stub->reveal());
Mock
$mock = $this->prophesize(SomeClass::class);
$mock->doSomething('bar')->shouldBeCalled();
$sut->action($mock->reveal());
Spy
$spy = $this->prophesize(SomeClass::class);
// execute the action on system under test
$sut->action($spy->reveal());
// verify expectations after
$spy->doSomething('bar')->shouldHaveBeenCalled();
Dummies
First, look at dummies. A dummy object is both what I look like if you ask me to remember where I left the car keys... and also the object you get if you add a type-hinted argument in phpspec to get a test double... and then do absolutely nothing with it. So if we get a test double, add no behavior, and make no assertions on its methods, it's called a "dummy object".
Oh, and inside of their documentation, you'll see things like $prophecy->reveal(). That's a detail that we don't need to worry about because phpspec takes care of that for us. Score!
Stubs
As soon as you start controlling even one return value of even one method... boom! The object is suddenly known as a stub. From the docs: "a stub is an object double" - all of these things are known as test doubles, or object doubles - "that, when put in a specific environment, behaves in a specific way". That's a fancy way of saying: as soon as we add one of these willReturn() calls, it becomes a stub.
And actually, most of the documentation is spent talking about stubs and the different ways to control exactly how it behaves, including the Argument wildcarding that we saw earlier.
Mocks
If you keep reading down, the next thing you'll find is "mocks". An object becomes a mock when you call shouldBeCalled(). So, if you want to add an assertion that a method is called a certain number of times, and you want to put that assertion before the actual code - using shouldBeCalledTimes() or shouldBeCalled() - congratulations! Your object is now known as a mock.
Spies
And finally, at the bottom, we have spies. A spy is the exact same thing as a mock, except it's when you add the expectation after the code - like with shouldHaveBeenCalledTimes().
https://symfonycasts.com/screencast/phpspec/doubles-dummies-mocks-spies
I am new to writing test cases for Web APIs. I have seen similar questions asked in the past but not answered, and I am wondering how I would test my APIs when they have an ODataQueryOptions as part of the parameters. See below:
public IQueryable<Item> GetByIdAndLocale(ODataQueryOptions opts,
Guid actionuniqueid,
string actionsecondaryid)
Would I have to mock this (e.g. with Moq)? If so, how would that look? Any help would be appreciated.
From the ODataQueryOptions perspective, you may want to test that all the OData query options work with your function, so first you need to create an instance of ODataQueryOptions. Here is an example:
HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Get, requestUri);
ODataQueryContext context = new ODataQueryContext(EdmCoreModel.Instance, elementType);
ODataQueryOptions options = new ODataQueryOptions(context, request);
So you need to create your own EDM model to replace EdmCoreModel.Instance, and replace requestUri with your query. elementType in ODataQueryContext is "the CLR type of the element of the collection being queried".
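For example, here is a hedged sketch of putting that together for a small Item model (the Name property, the query string, and ItemsController are placeholder assumptions; namespaces shown are the Web API 2 ones, so adjust for your OData version):
// using System.Web.Http.OData.Builder; using System.Web.Http.OData.Query;
var builder = new ODataConventionModelBuilder();
builder.EntitySet<Item>("Items");
IEdmModel model = builder.GetEdmModel();

// Fake an incoming request that carries the query options to exercise.
var request = new HttpRequestMessage(
    HttpMethod.Get,
    "http://localhost/Items?$filter=Name eq 'foo'&$top=10");

var context = new ODataQueryContext(model, typeof(Item));
var options = new ODataQueryOptions(context, request);

// Call the unit under test with real, parsed query options.
var controller = new ItemsController();
IQueryable<Item> result = controller.GetByIdAndLocale(
    options, Guid.NewGuid(), "some-secondary-id");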
I cannot tell from the phrasing, but is the above call (GetByIdAndLocale) the Web API that you are trying to test, or are you trying to test something that is calling it?
One uses a mock or a stub to replace dependencies of a Unit Under Test (UUT). If you are testing GetByIdAndLocale() then you would not mock it, though if it calls something else that takes the ODataQueryOptions as a parameter, you could use Moq to stub/mock that call.
If you are testing some unit that calls GetByIdAndLocale(), then yes, you could mock it. How exactly you might do this depends upon the goal (checking that the correct values are being passed in vs. checking that the returned IQueryable is correctly processed): basically, matching against It.IsAny() or against some matcher.
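As a rough Moq sketch of that second case (IItemService and ItemFacade are hypothetical names introduced only for illustration; they are not part of the question):
// using Moq; using System.Linq;
var expectedId = Guid.NewGuid();

var service = new Mock<IItemService>();
service.Setup(s => s.GetByIdAndLocale(
        It.IsAny<ODataQueryOptions>(), expectedId, "en-US"))
    .Returns(new List<Item>().AsQueryable());

var sut = new ItemFacade(service.Object);
sut.Load(expectedId, "en-US");

// Fails unless the call happened once with matching arguments.
service.Verify(s => s.GetByIdAndLocale(
    It.IsAny<ODataQueryOptions>(), expectedId, "en-US"), Times.Once());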
Which do you want to test?
GetByIdAndLocale(), something that calls it or something (not shown) that it calls?
What are you interested in verifying?
Correct options are passed in or the processing of the return from the mocked call?
This is a tough one, because not too many people use Pex & Moles, or so I think (even though Pex is a really great product - much better than any other unit testing tool).
I have a Data project that has a very simple model with just one entity (DBItem). I've also written a DBRepository within this project that manipulates this EF model. The repository has a method called GetItems() that returns a list of business-layer items (BLItem) and looks similar to this (simplified example):
public IList<BLItem> GetItems()
{
using (var ctx = new EFContext("name=MyWebConfigConnectionName"))
{
DateTime limit = DateTime.Today.AddDays(-10);
IList<DBItem> result = ctx.Items.Where(i => i.Changed > limit).ToList();
return result.ConvertAll(i => i.ToBusinessObject());
}
}
So now I'd like to create some unit tests for this particular method. I'm using Pex & Moles. I created my moles and stubs for my EF object context.
I would like to write a parametrised unit test (I know I've written my production code first, but I had to, since I'm testing Pex & Moles) that tests that this method returns a valid list of items.
This is my test class:
[PexClass]
public class RepoTest
{
[PexMethod]
public void GetItemsTest(ObjectSet<DBItem> items)
{
MEFContext.ConstructorString = (@this, name) => {
var mole = new SEFContext();
};
DBRepository repo = new DBRepository();
IList<BLItem> result = repo.GetItems();
IList<DBItem> manual = items.Where(i => i.Changed > DateTime.Today.AddDays(-10)).ToList();
if (result.Count != manual.Count)
{
throw new Exception();
}
}
}
Then I run Pex explorations for this particular parametrised unit test, but I get a "path bounds exceeded" error. Pex starts this test by providing null to the test method (so items = null). This is the code that Pex runs:
[Test]
[PexGeneratedBy(typeof(RepoTest))]
[Ignore("the test state was: path bounds exceeded")]
public void DBRepository_GetTasks22301()
{
this.GetItemsTest((ObjectSet<DBItem>)null);
}
This was the additional comment provided by Pex:
The test case ran too long for these inputs, and Pex stopped the analysis. Please notice: The method Oblivious.Data.Test.Repositories.TaskRepositoryTest.b__0 was called 50 times; please check that the code is not stuck in an infinite loop or recursion. Otherwise, click on 'Set MaxStack=200', and run Pex again.
Clicking that updates the attribute to [PexMethod(MaxStack = 200)].
Question
Am I doing this the correct way or not? Should I use an EFContext stub instead? Do I have to add additional attributes to the test method so the Moles host will be running (I'm not sure it is now)? I'm running just Pex & Moles, no VS test or NUnit or anything else.
I guess I should probably set some limit on how many items Pex should provide for this particular test method.
Moles is not designed to test the parts of your application that have external dependencies (e.g. file access, network access, database access, etc.). Instead, Moles allows you to mock these parts of your app so that you can do true unit testing on the parts that don't have external dependencies.
So I think you should just mock your EF objects and queries, e.g. by creating in-memory lists and having query methods return fake data from those lists based on whatever criteria are relevant.
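A minimal sketch of that idea (IItemSource is a hypothetical abstraction; you'd refactor DBRepository to accept it instead of constructing EFContext itself):
public interface IItemSource : IDisposable
{
    IQueryable<DBItem> Items { get; }
}

public class InMemoryItemSource : IItemSource
{
    private readonly List<DBItem> items = new List<DBItem>
    {
        new DBItem { Changed = DateTime.Today },             // inside the 10-day window
        new DBItem { Changed = DateTime.Today.AddDays(-30) } // outside the window
    };

    public IQueryable<DBItem> Items
    {
        get { return items.AsQueryable(); }
    }

    public void Dispose() { }
}
With that in place, a repository reading from InMemoryItemSource should return exactly one business item from GetItems(), with no database involved.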
I am just getting to grips with Pex also... my issues surrounded me wanting to use it with Moq ;)
Anyway...
I have some methods similar to yours that have the same problem. When I increased the max, the errors went away; presumably Pex was satisfied that it had sufficiently explored the branches. I have methods where I have had to increase the timeout on the code contract validation as well.
One thing that you should probably be doing, though, is passing in all the dependent objects as parameters, i.e. don't instantiate the repo in the method but pass it in.
A general problem you have is that you are instantiating big objects in your methods. I do the same in my DAL classes, but then I am not trying to unit test them in isolation. I build up datasets and use those to test my data access code against.
I use Pex on my business logic and objects.
If I were to try to test my DAL code, I'd have to use IoC to pass the data context into the methods, which would then make testing possible, as you can mock the data context.
You should use the Entity Framework repository pattern: http://www.codeproject.com/KB/database/ImplRepositoryPatternEF.aspx
A few weeks ago I jumped on the MEF (ComponentModel) bandwagon, and am now using it for a lot of my plugins and also shared libraries. Overall, it's been great aside from the frequent mistakes on my part, which result in frustrating debugging sessions.
Anyhow, my app has been running great, but my MEF-related code changes have caused my automated builds to fail. Most of my unit tests were failing simply because the modules I was testing were dependent upon other modules that needed to be loaded by MEF. I worked around these situations by bypassing MEF and directly instantiating those objects.
In other words, via MEF I would have something like
[Import]
public ICandyInterface ci { get; set; }
and
[Export(typeof(ICandyInterface))]
public class MyCandy : ICandyInterface
{
[ImportingConstructor]
public MyCandy( [Import("name_param")] string name) {}
...
}
But in my unit tests, I would just use
ICandyInterface candy = new MyCandy("Godiva");
In addition, MyCandy requires a connection to a database, which I have worked around by adding a test database to my unit test folder and having NUnit use that for all of the tests.
Ok, so here are my questions regarding this situation:
Is this a Bad Way to do things?
Would you recommend composing parts in [SetUp]?
I haven't yet learned how to use mocks in unit testing - is this a good example of a case where I might want to mock the underlying database connection (somehow) so it just returns dummy data and doesn't really require a database?
If you've encountered something like this before, can you offer your experience and the way you solved your problem? (or should this go into the community wiki?)
It sounds like you are on the right track. A unit test should test a unit, and that's what you do when you directly create instances. If you let MEF compose instances for you, they would tend towards integration tests. Not that there's anything wrong with integration tests, but unit tests tend to be more maintainable because you test each unit in isolation.
You don't need a container to wire up instances in unit tests.
I generally recommend against composing Fixtures in SetUp, as it leads to the General Fixture anti-pattern.
It is best practice to replace dependencies with test doubles. Dynamic mocks are one of the more versatile ways of doing this, so they're definitely something you should learn.
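For example, a hedged Moq sketch against the question's code (ICandyStore, FindFlavor, the Flavor property, and the extra constructor parameter are hypothetical, standing in for the real database dependency):
// using Moq;
var store = new Mock<ICandyStore>();
store.Setup(s => s.FindFlavor("Godiva"))
     .Returns("dark chocolate");

var candy = new MyCandy("Godiva", store.Object); // assumes an added dependency parameter

Assert.AreEqual("dark chocolate", candy.Flavor);         // state-based check
store.Verify(s => s.FindFlavor("Godiva"), Times.Once()); // behavior-based check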
I agree that creating the DOCs (depended-on components) manually is much better than using the MEF composition container to satisfy imports, but regarding the note that composing fixtures in setup leads to the General Fixture anti-pattern: I want to mention that this is not always the case.
If you're using the static container and satisfying imports via CompositionInitializer.SatisfyImports, you will have to face the General Fixture anti-pattern, as CompositionInitializer.Initialize cannot be called more than once. However, you can always create a CompositionContainer, add catalogs, and call SatisfyImportsOnce on the container itself. In that case you can use a new CompositionContainer in every test and avoid the shared/General Fixture anti-pattern.
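A minimal sketch of that per-test wiring, using only the standard System.ComponentModel.Composition types:
private CompositionContainer container;

[SetUp]
public void SetUp()
{
    var catalog = new AssemblyCatalog(typeof(MyCandy).Assembly);
    container = new CompositionContainer(catalog);
    container.SatisfyImportsOnce(this); // a fresh container per test, no shared fixture
}

[TearDown]
public void TearDown()
{
    container.Dispose();
}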
I blogged about how to do unit tests with MEF (not NUnit, but it works just the same).
The trick was to use a MockExportProvider, and I created a test base class for all my tests to inherit from.
This is my main AutoWire function that works for integration and unit tests:
protected void AutoWire(MockExportProvider mocksProvider, params Assembly[] assemblies){
CompositionContainer container = null;
var assCatalogs = new List<AssemblyCatalog>();
foreach(var a in assemblies)
{
assCatalogs.Add(new AssemblyCatalog(a));
}
if (mocksProvider != null)
{
var providers = new List<ExportProvider>();
providers.Add(mocksProvider); //need to use the mocks provider before the assembly ones
foreach (var ac in assCatalogs)
{
var assemblyProvider = new CatalogExportProvider(ac);
providers.Add(assemblyProvider);
}
container = new CompositionContainer(providers.ToArray());
foreach (var p in providers) //must set the source provider for CatalogExportProvider back to the container (kinda stupid but apparently no way around this)
{
if (p is CatalogExportProvider)
{
((CatalogExportProvider)p).SourceProvider = container;
}
}
}
else
{
container = new CompositionContainer(new AggregateCatalog(assCatalogs));
}
container.ComposeParts(this);
}
More info in my post: https://yoavniran.wordpress.com/2012/10/18/unit-testing-wcf-and-mef/
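For completeness, a hypothetical usage sketch (MefTestBase stands in for the test base class from the post, and I'm assuming MockExportProvider has a parameterless constructor):
[TestFixture]
public class CandyTests : MefTestBase
{
    [Import]
    public ICandyInterface Candy { get; set; }

    [SetUp]
    public void SetUp()
    {
        var mocks = new MockExportProvider();
        // ...register test doubles with the provider as the post describes...
        AutoWire(mocks, typeof(MyCandy).Assembly);
    }

    [Test]
    public void ImportsAreSatisfied()
    {
        Assert.That(Candy, Is.Not.Null);
    }
}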
I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it looks deceiving when both Add() and Get() fail because it looks like there's two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, such that if the first test fails, the following tests are ignored?
Along the same lines, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations before the tests for round-tripping data from the UI.
Note: this is a little different from having tests depend on each other; it's more like ensuring that something works before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
db.Add(someData);
}
[Test]
public void GetTest()
{
db.Get(someData);
Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
db.Add(someData);
}
[Test]
public void GetTest()
{
// need some way here to ensure that db.Add() can actually be performed successfully
db.Add(someData);
db.Get(somedata);
Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before testing whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test, which is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory feature (thanks clairestreb). In the NUnit.Framework namespace there is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
public void TestGet()
{
    MyList sut = new MyList();
    object expecting = new object();
    sut.Put(expecting);
    Assume.That(sut.Size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
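In NUnit that looks something like this (db and someData are the question's placeholders, and CreateTestDatabase is a hypothetical helper):
[SetUp]
public void SetUp()
{
    // Each test arranges its own data; no test depends on another test.
    db = CreateTestDatabase();
    db.Add(someData);
}

[Test]
public void GetTest()
{
    Assert.That(db.Get(someData), Is.Not.Null);
}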
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
I don't think that's possible out of the box.
In any case, a test class design like the one you describe will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test class, e.g. AddTests, and put the Get test(s) into another test class, e.g. GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an NUnit newbie.)
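Something like this sketch (NUnit 2.x attribute names; db, someData, and CanConnect are placeholders for your own setup):
[TestFixture]
public class GetTests
{
    [TestFixtureSetUp] // [OneTimeSetUp] in NUnit 3
    public void CheckPrerequisites()
    {
        if (!db.CanConnect())
            Assert.Ignore("No database connection; skipping all Get tests.");
        db.Add(someData); // prove Adds work before testing Gets
    }

    [Test]
    public void GetReturnsWhatWasAdded()
    {
        Assert.That(db.Get(someData), Is.Not.Null);
    }
}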
Create a flag field, set it when Add fails (rethrowing so the Add test still fails), and bail out of the Get test when the flag is set:
private bool addFailed = false;

[Test]
public void TestAdd()
{
    try
    {
        // ... old test code ...
    }
    catch
    {
        addFailed = true; // remember the failure for TestGet
        throw;            // don't forget to rethrow
    }
}

[Test]
public void TestGet()
{
    if (addFailed) return; // or Assert.Ignore("Add failed") to mark it as skipped
    // ... old test code ...
}