My iOS app (Swift 5.3) has the structure below for the API and for unit testing via mocking.
This does let me test some things against a mocked-up API, but it's not a complete test: when the tests run, setupInitialStateForApp has already run against the real API (not mocked out). Is there some way in the test to say: don't initialize the app until I have mocked out the API? Or is there a better way of writing this that avoids the problem?
MyApp.swift:
@main
struct MyApp: App {
    var apiServer: ApiServer?
    var dataHandler: DataHandler? // Stores the app state as an observable object

    init() {
        self.apiServer = ApiServer()
        self.dataHandler = setupInitialStateForApp(apiServer: apiServer)
    }
    // ... body, etc.
}
MyAppTest.swift:
func testApi() {
    let apiServer = ApiServer()
    apiServer.session = getMockSession()
    // Now testing some stuff, and everything is mocked. Good!
    // But `setupInitialStateForApp` already ran,
    // so I made real API calls, which I want to avoid.
}
I've been following the guidelines here: https://docs.servicestack.net/testing
I'm trying to do unit testing rather than integration testing, just to cut down on the level of mocking and other complexity.
Some of my services call some of my other services via the recommended IServiceGateway API, e.g. Gateway.Send(MyRequest).
However, when running tests I'm getting System.NotImplementedException: 'Unable to resolve service 'GetMyContentRequest''.
I've used container.RegisterAutoWired() to register the service that handles this request.
I'm not sure where to go next. I really don't want to have to start again and set up an integration test pattern.
You're likely going to keep running into issues if you try to execute service integrations as unit tests instead of as integration tests, which would start in a verified valid state.
Gateway requests, however, are executed using an IServiceGateway, which you can override either by implementing GetServiceGateway() in your custom AppHost with a custom implementation, or by registering an IServiceGatewayFactory or IServiceGateway in your IOC. Here's the default implementation:
public virtual IServiceGateway GetServiceGateway(IRequest req)
{
if (req == null)
throw new ArgumentNullException(nameof(req));
var factory = Container.TryResolve<IServiceGatewayFactory>();
return factory != null ? factory.GetServiceGateway(req)
: Container.TryResolve<IServiceGateway>()
?? new InProcessServiceGateway(req);
}
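As a sketch of the AppHost override option (the class names here are illustrative, and FakeServiceGateway stands in for whatever IServiceGateway implementation you want your tests to hit):

public class AppHost : AppSelfHostBase
{
    public AppHost() : base("Gateway Test AppHost", typeof(MyServices).Assembly) { }

    public override void Configure(Container container) { }

    // Every in-process Gateway.Send() call now goes through the fake gateway
    public override IServiceGateway GetServiceGateway(IRequest req) => new FakeServiceGateway();
}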
Based on the discussion in the answer by @mythz, this is my solution:
My use case is like the OP's: test the "main" service and mock the "sub" service. Like the OP, I wanted to do this as a unit test (so BasicAppHost), because it's quicker, and I believe it's easier to mock services that way. (Side note: for an AppHost-based integration test, ServiceStack will scan assemblies for the real services, so how would you mock one? Unregister it from the container and replace it with a mock?)
Anyway, for the unit test:
My "main" service is using another service, via the IServiceGateway (which is the officially recommended way):
public MainDtoResponse Get(MainDto request)
{
    // do some stuff
    var subResponse = Gateway.Send(new SubDto { /* params */ });
    // do some stuff with subResponse
    return new MainDtoResponse { /* ... */ };
}
In my test setup:
appHost = new BasicAppHost().Init();
var container = appHost.Container;

var mockGateway = new Mock<IServiceGateway>(); // using Moq
mockGateway.Setup(x => x.Send<SubDtoResponse>(It.IsAny<SubDto>()))
           .Returns(new SubDtoResponse { /* ... */ });

container.Register(mockGateway.Object);
So it's the IServiceGateway that must be mocked, and its Send method is the important one. What I was doing wrong was mocking the service, when I should have been mocking the Gateway.
Then call the main service (under test) in the normal fashion for a Unit Test, like in the docs:
var s = appHost.Container.Resolve<MainService>(); // MainService must be registered in the container beforehand (see the note below)
s.Get(new MainDto { /* ... */ });
PS: The mockGateway.Setup can be used inside each test, not necessarily in the OneTimeSetUp.
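For completeness, the registration mentioned in the comment above can be as simple as the RegisterAutoWired call the question already mentions (a sketch; place it in your fixture setup):

// right after appHost = new BasicAppHost().Init();
appHost.Container.RegisterAutoWired<MainService>(); // makes Resolve<MainService>() work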
I have a background task initiated in .NET Core 2.0 startup; it inherits from BackgroundService and implements StartAsync, StopAsync and ExecuteAsync. The task periodically updates some data in a database table based on some business logic.
While I can run the background task as an application and test it using logs, database checks and other tools, is unit testing necessary for the background task? If so, how do I register the task as a service with its dependencies and trigger the start and stop methods to assert actual vs expected? I'd appreciate a basic sample unit test for a timer-based .NET Core IHostedService background task.
Here is my basic starting point for the test, just as a sample; it isn't complete or working yet, which is where I need some help from the community. Could I also add some more asserts, e.g. a Verify()?
[Fact]
public async Task Run_background_task_success()
{
    // Arrange
    IServiceCollection services = new ServiceCollection();
    services.AddHostedService<BackgroundManagerTask>();
    var serviceProvider = services.BuildServiceProvider();
    var service = serviceProvider.GetService<IHostedService>() as BackgroundManagerTask;
    var isExecuted = false;

    // Act (StartAsync returns a Task, not a bool, so there is nothing to branch on here)
    await service.StartAsync(CancellationToken.None);
    isExecuted = true;
    await Task.Delay(10000);

    // Assert
    Assert.True(isExecuted);
    await service.StopAsync(CancellationToken.None);
}
Here's how I usually do it. You mention you are going to the database to update some data, so I'm assuming you are expecting that as a dependency of BackgroundManager.
[Fact]
public async Task BackgroundManagerUpdatingDataTest()
{
    // Arrange
    Mock<IDataAccess> dbMock = new Mock<IDataAccess>();
    dbMock.Setup(x => x.UpdateSomethingInDB(It.IsAny<BusinessObject>())).Returns(1); // One row updated by the DML in UpdateSomethingInDB for the BusinessObject
    BackgroundManager sut = new BackgroundManager(dbMock.Object); // System under test.

    // Act
    await sut.StartAsync(CancellationToken.None);
    await Task.Delay(500); // Give the test some time to execute.
    await sut.StopAsync(CancellationToken.None); // Stop the Background Service.

    // Assert
    dbMock.Verify(x => x.UpdateSomethingInDB(It.IsAny<BusinessObject>()), Times.Exactly(1));
}
Above, we are simply testing that the database update occurred, by mocking the data access call and verifying that it was called exactly once.
You could of course mock out any other dependency with Moq and assert on anything else you want to verify.
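For context, here is a rough sketch of what a BackgroundManager shaped for this kind of test might look like; IDataAccess, BusinessObject and the timer interval are assumptions rather than code from the original post:

// requires the Microsoft.Extensions.Hosting package
public class BackgroundManager : BackgroundService
{
    private readonly IDataAccess _dataAccess;

    public BackgroundManager(IDataAccess dataAccess) => _dataAccess = dataAccess;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // the single call the mock verifies above
            _dataAccess.UpdateSomethingInDB(new BusinessObject());
            try
            {
                await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken); // periodic interval
            }
            catch (TaskCanceledException)
            {
                // host is shutting down
            }
        }
    }
}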
So let me start by saying I've seen all the threads about the wars between creating a wrapper vs mocking the HttpMessageHandler. In the past I've done the wrapper method with great success, but this time I thought I'd go down the path of mocking the HttpMessageHandler.
For starters, here is an example of the debate: Mocking HttpClient in unit tests. I want to add that that's not what this is about.
What I've found is that I have tests upon tests that inject an HttpClient. I've been doing a lot of serverless AWS Lambdas, and the basic flow is like so:
// some pseudo code
public class Functions
{
    private readonly HttpClient _httpClient;

    public Functions(HttpClient client)
    {
        _httpClient = client;
    }

    public async Task<APIGatewayResponse> GetData(ApiGatewayRequest request, ILambdaContext context)
    {
        var result = await _httpClient.GetAsync("http://example.com");
        return new APIGatewayResponse
        {
            StatusCode = (int)result.StatusCode,
            Body = await result.Content.ReadAsStringAsync()
        };
    }
}
...
[Fact]
public async Task ShouldDoCall()
{
    var requestUri = new Uri("http://example.com");
    var expectedResponse = "expected body";
    var mockResponse = new HttpResponseMessage(HttpStatusCode.OK) { Content = new StringContent(expectedResponse) };

    var mockHandler = new Mock<HttpMessageHandler>();
    mockHandler
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>())
        .ReturnsAsync(mockResponse);

    var f = new Functions(new HttpClient(mockHandler.Object));
    var result = await f.GetData(new ApiGatewayRequest(), null);

    mockHandler.Protected().Verify(
        "SendAsync",
        Times.Exactly(1), // we expected a single external request
        ItExpr.Is<HttpRequestMessage>(req =>
            req.Method == HttpMethod.Get &&
            req.RequestUri == requestUri // to this uri
        ),
        ItExpr.IsAny<CancellationToken>()
    );

    Assert.Equal(200, result.StatusCode);
}
So here's where I have the problem!
When all my tests run in NCrunch, they pass, and pass fast!
When I run them all manually with ReSharper 2018, they fail.
Equally, when they run on the CI/CD platform, which is a Docker container with the .NET Core 2.1 SDK on a Linux distro, they fail too.
These tests should not be running in parallel (as I read it, that's the default for tests in the same class). I have about 30 tests around these methods combined, and each one randomly fails on the Moq Verify portion. Sometimes they pass, sometimes they fail. If I break the tests down per test class and only run the groups that way, instead of all at once, then they all pass in chunks. I'll also add that I have gone through isolating the variables per test method to make sure there is no overlap.
So I'm really lost trying to handle this and make sure it's testable.
Are there different ways to approach the HttpClient so that the tests pass consistently?
After lots of back and forth, I found two things.
I couldn't get parallel processing disabled within the Docker setup, which is where I thought the issue was. I even made the tests Thread.Sleep between runs to slow things down, which felt really icky to me. (A way to switch parallelization off assembly-wide is sketched below.)
I found that all the tests I ran locally through the test runners were reported as passing, while about half of them failed on the Docker test runner. What ended up being the issue was a magic-string problem when setting and getting environment variables.
A small caveat to call out: Amazon updated their .NET Core Lambda tools to install via the dotnet CLI, so this was updated in our Docker image.
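As an aside, if ruling out parallel execution is still useful, xUnit 2.x can switch it off for the whole test assembly (a minimal sketch; adjust to your runner and configuration):

// anywhere in the test assembly, e.g. AssemblyInfo.cs
[assembly: Xunit.CollectionBehavior(DisableTestParallelization = true)]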
Sorry if this comes across as a stupid question; I'm just not sure how to get started writing unit tests.
I have a solution containing an API and a unit test project. The API has a repository/interface used for data access, wired up with Ninject.
My question is: what is the best way to unit test my API controllers? I have read a little about Moq but I'm not sure if I need to use it, as I want to test against my database.
I have read that I need to use a [TestInitialize] attribute:
[TestInitialize]
public void MyTestInitialize()
{
var kernel = NinjectWebCommon.CreatePublicKernel();
kernel.Bind<BusinessController>().ToSelf();
}
My problem is that my test project can't resolve CreatePublicKernel.
Checking the NinjectWebCommon class in the API, there is no function called CreatePublicKernel.
What am I missing here?
Ninject (or any other DI library) is used only to provide dependencies to your controller's constructor. For example, if you need a BusinessController which requires two repositories, then the controller should have a constructor which expects these dependencies:
public BusinessController(IUserRepository userRepository,
IOrderRepository orderRepository)
{
_userRepository = userRepository;
_orderRepository = orderRepository;
}
If you want to write unit tests for your controller, you should provide mocked implementations of these repositories. Use Moq or another framework for creating the mocks:
var userRepositoryMock = new Mock<IUserRepository>();
var orderRepositoryMock = new Mock<IOrderRepository>();
// setup mocks here
var controller = new BusinessController(userRepositoryMock.Object,
orderRepositoryMock.Object);
If you are writing integration tests for your controller, you should provide real implementations of these repositories, which use some real database.
var userRepository = new NHibernateUserRepository();
var orderRepository = new NHibernateOrderRepository();
// prepare some data in database here
var controller = new BusinessController(userRepository, orderRepository);
You can move the controller instantiation into a method which runs before each test (a SetUp or TestInitialize method) in order to remove duplication from your tests.
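For example, with MSTest and Moq that could look roughly like this (the field names are illustrative):

private Mock<IUserRepository> _userRepositoryMock;
private Mock<IOrderRepository> _orderRepositoryMock;
private BusinessController _controller;

[TestInitialize]
public void MyTestInitialize()
{
    _userRepositoryMock = new Mock<IUserRepository>();
    _orderRepositoryMock = new Mock<IOrderRepository>();
    // setup mocks here
    _controller = new BusinessController(_userRepositoryMock.Object,
                                         _orderRepositoryMock.Object);
}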
UPDATE: You can also use Ninject for integration testing. Just create a Ninject module which is used both by your real application and by your integration tests:
public class FooModule : NinjectModule
{
public override void Load()
{
Bind<IUserRepository>().To<NHibernateUserRepository>();
Bind<IOrderRepository>().To<NHibernateOrderRepository>();
Bind<BusinessController>().ToSelf();
}
}
Then use this module both to create the kernel in the NinjectWebCommon.CreateKernel method and to create the kernel in your tests:
var kernel = new StandardKernel(new FooModule());
var controller = kernel.Get<BusinessController>();
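On the application side, the same module can be plugged into the bootstrapper; a sketch, assuming the standard Ninject.Web.Common template:

// In NinjectWebCommon (App_Start)
private static IKernel CreateKernel()
{
    var kernel = new StandardKernel(new FooModule());
    // ... the rest of the standard Ninject.Web.Common wiring stays as generated
    return kernel;
}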
I am struggling to write high quality tests around my Node modules. The problem is the require module system. I want to be able to check that a certain required module has a method, or that its state has changed. There seem to be two relatively small libraries that can be used here: node-gently and mockery. However, their low profile makes me think that either people don't test this, or there is another way of doing it that I am not aware of.
What is the best way to mock out and test a module that has been required?
----------- UPDATE ---------------
node-sandbox works on the same principles as stated below but is wrapped up in a nice module. I am finding it very nice to work with.
--------------- detailed answer ---------------
After much trial, I have found that the best way to test Node modules in isolation while mocking things out is the method by Vojta Jina: run each module inside a vm with a new context, as explained here.
With this testing vm module:
var vm = require('vm');
var fs = require('fs');
var path = require('path');
/**
* Helper for unit testing:
* - load module with mocked dependencies
* - allow accessing private state of the module
*
 * @param {string} filePath Absolute path to module (file to load)
 * @param {Object=} mocks Hash of mocked dependencies
*/
exports.loadModule = function(filePath, mocks) {
mocks = mocks || {};
// this is necessary to allow relative path modules within loaded file
// i.e. requiring ./some inside file /a/b.js needs to be resolved to /a/some
var resolveModule = function(module) {
if (module.charAt(0) !== '.') return module;
return path.resolve(path.dirname(filePath), module);
};
var exports = {};
var context = {
require: function(name) {
return mocks[name] || require(resolveModule(name));
},
console: console,
exports: exports,
module: {
exports: exports
}
};
vm.runInNewContext(fs.readFileSync(filePath), context);
return context;
};
It is possible to test each module in its own context and easily stub out all of its external dependencies:
fsMock = mocks.createFs();
mockRequest = mocks.createRequest();
mockResponse = mocks.createResponse();
// load the module with mock fs instead of real fs
// publish all the private state as an object
module = loadModule('./web-server.js', {fs: fsMock});
I highly recommend this way for writing effective tests in isolation. Only acceptance tests should hit the entire stack. Unit and integration tests should test isolated parts of the system.
I think the mockery pattern is a fine one. That said, I usually opt to pass dependencies in as parameters to a function (similar to passing dependencies into a constructor).
// foo.js
module.exports = function(dep1, dep2) {
return {
bar: function() {
// A function doing stuff with dep1 and dep2
}
}
}
When testing, I can send in mocks, empty objects, or whatever seems appropriate. Note that I don't do this for all dependencies, basically only I/O; I don't feel the need to test that my code calls path.join or whatever.
I think the "low profile" that is making you nervous is due to a couple of things:
Some people structure their code similar to mine
Some people have their own helper fulfilling the same objective as mockery et al (it's a very simple module)
Some people don't unit test such things, instead spinning up an instance of their app (and db, etc) and testing against that. Cleaner tests, and the server is so fast it doesn't affect test performance.
In short, if you think mockery is right for you, go for it!
You can easily mock require by using "a": https://npmjs.org/package/a
// Example faking require('./foo') in a unit test:
var fakeFoo = {};
var expectRequire = require('a').expectRequire;
expectRequire('./foo').return(fakeFoo);

// in sut:
var foo = require('./foo'); // returns fakeFoo