When building APIs in Nest, I often come across an issue where a particular API might need to make use of several Nest modules to do its job. I'd like to know if there's a better way to structure my modules so that unit testing them is easier.
For example, imagine we have an OrderService. The OrderService makes use of a few different dependencies in order to do its job:
It uses the ProductsService in order to look up product details and prices.
It uses the UsersService in order to look up customer information, such as a Stripe customer ID.
It uses TypeORM to handle writing the Order to the database.
And finally, let's say it uses a CouponsService in order to look up and validate coupon details for the Order.
For the sake of this example, let's just imagine each of these 4 dependencies to be its own Nest module.
When it comes to unit testing the OrderService, I'd then need to stub out 4 different dependencies. This seems like more work than it should be, so I figured there must be a better way.
One thing that I've tried doing is creating separate helper files for setting up each service's mocks. This cuts down on the amount of boilerplate for each service that uses a particular dependency, but you still end up in a situation where one service might have 4 different dependencies.
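For example, a helper along these lines (just a sketch; the stubbed method names on ProductsService are made up for illustration):

// products-service.mock.ts - shared by every suite that depends on ProductsService
import { Provider } from '@nestjs/common';
import { ProductsService } from '../products/products.service';

// Returns a Nest provider that substitutes a hand-stubbed ProductsService
export function mockProductsServiceProvider(): Provider {
  return {
    provide: ProductsService,
    useValue: {
      findById: jest.fn(), // hypothetical method
      getPrice: jest.fn(), // hypothetical method
    },
  };
}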
Ideally, I'd like to implement some pattern or structure so that when I test a file, the dependencies are either really simple or automatic to mock, or a particular file only has 1 dependency at a time, without oversimplifying the system.
My question really boils down to:
How do you handle many dependencies like this in a service without ending up in a situation where testing a function requires you to mock 4+ methods? I'd love to hear a Nest-specific way to do this, but I'd also appreciate any generic software engineering examples or patterns to look into as well.
There's a pull request to allow for auto-mocking with Nest's @nestjs/testing package, but it hasn't been accepted yet (working on it though!), and there's a package called @golevelup/ts-jest that allows you to set up mocks of an object very quickly and easily. So instead of having to define the entire { get: jest.fn(), create: jest.fn(), ...etc } useValue provider, you could have something as quick as
import { Test } from '@nestjs/testing';
import { createMock } from '@golevelup/ts-jest';

const modFixture = await Test.createTestingModule({
  providers: [
    OrderService,
    {
      provide: ProductService,
      useValue: createMock<ProductService>(),
    },
    {
      provide: UserService,
      useValue: createMock<UserService>(),
    },
    // ...etc
  ],
}).compile();
And all of the methods will be set up as jest.fn() methods that can be extended based on your specific test.
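For example (a sketch - findById here is a stand-in for whatever your ProductService actually exposes), a test can pull the auto-created mock back out of the fixture and give one method a canned value:

// Retrieve the auto-created mock from the testing module...
const productService = modFixture.get(ProductService);

// ...and extend only the method this test cares about
(productService.findById as jest.Mock).mockResolvedValue({
  id: 'prod_1',
  price: 1999,
});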
I've recently had an interesting experience but haven't found a satisfying answer so far: I'm a big fan of DDD and try to define rich domain objects with behavior and good information hiding, even if the team officially doesn't practice DDD. At the end of the day it doesn't matter, as you have a well-defined object that represents something in the problem domain.
That said, I would also like to practice TDD more. Unfortunately, if I test a service that uses such rich domain models, the models are usually not abstracted. Therefore, to test the behavior of the service, I need to set up the model as well. The model comes with its own invariants etc., so with every service test I also test the model the service is using.
This seems like a big no-go, as I'm not only "not really unit-testing", but it's also troublesome to set up the tests, as the arrange-code gets large.
In my opinion, there seems to be no way around this but to start creating interfaces for models. But it seems like I'm the only person thinking so. For example, here is a long article on why this is an anti-pattern:
https://lostechies.com/jamesgregory/2009/05/09/entity-interface-anti-pattern/
I'm also not too keen on creating interfaces for all models, as they should really represent something, and adding another layer of abstraction just for testing seems like overkill. That said, what would be the best solution here? How are people in the field who combine DDD and TDD handling this?
This seems like a big no-go, as I'm not only "not really unit-testing", but it's also troublesome to set up the tests, as the arrange-code gets large.
I think you can dismiss "not really unit-testing"; the important thing is to use tools that are fit for purpose, not the branding.
That said, troublesome to set up the tests is a legitimate concern, and all by itself sufficient excuse to look for a way to improve the design.
If your service were tightly coupled to some third party implementation, that offered no affordances for substitution, what would you do to decouple that from your tests? The usual answer would be to introduce a seam - a new design element between your code and the 3rd party code.
The two important characteristics of the seam:
it does afford substitution; which is to say, you have an interface.
the implementation of the interface that integrates with the third party code is "so simple there are obviously no deficiencies".
Then, in your tests, you introduce a substitute implementation.
The game with your "domain model" is exactly the same. Assuming that you are applying the usual lifecycle patterns, the seam includes a substitute for the repository and a substitute for the aggregate root entity.
Some good news - you don't necessarily need to shadow the entire aggregate: only the parts of the interface that your service cares about. In effect, what you are doing is defining - for each service - the contract that describes the interactions between your service and the domain model. "Role interfaces" will be a useful search term here.
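A minimal TypeScript sketch of such a role interface (all the names here are invented for illustration):

// The role: only the slice of the domain model this service actually uses
interface DiscountSource {
  discountFor(itemCount: number): number;
}

// The service depends on the role, never on the concrete aggregate
class CheckoutService {
  constructor(private readonly discounts: DiscountSource) {}

  total(itemCount: number, unitPrice: number): number {
    return itemCount * unitPrice - this.discounts.discountFor(itemCount);
  }
}

// In a test, the substitute is trivial - no aggregate invariants to arrange
const service = new CheckoutService({ discountFor: () => 5 });
// service.total(2, 10) === 15

The production implementation of the role is then the thin, "so simple there are obviously no deficiencies" wrapper over the real aggregate.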
First I will make sure these two conditions are met:
Domain models are POJOs
Domain layer isolation (other layers can access the domain layer, but not the other way around)
Then a Factory, Builder, or TestHelpers can be used to bring models to the desired state for tests.
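For instance, a small test-data builder (a TypeScript sketch with a made-up Order model) keeps that arrange code in one place:

// Made-up domain model, kept as a plain object
interface Order {
  status: 'draft' | 'paid';
  lines: { sku: string; qty: number }[];
}

// Builder with sensible defaults; each test overrides only what it needs
class OrderBuilder {
  private readonly order: Order = { status: 'draft', lines: [] };

  paid(): this {
    this.order.status = 'paid';
    return this;
  }

  withLine(sku: string, qty: number): this {
    this.order.lines.push({ sku, qty });
    return this;
  }

  build(): Order {
    return this.order;
  }
}

// Usage: const order = new OrderBuilder().paid().withLine('ABC-1', 2).build();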
Basics
Testing Scopes
Unit Testing
Integration Testing
Domain Models
These should be unit tests, which test the Domain Model / Aggregate's methods.
Services
These should be integration tests, which test the integration of Service methods and the associated models.
My Broad Approach
When you're testing your domain models, there may be many variants that you'll need to account for in your unit tests.
When these then translate over to a requirement to use within an integration test, I tend to go for some sort of CreationFactory (or ArrangementFactory) for your domain models.
You can then use these in both sets of tests.
So for example...
public class ArrangeUser {
    public static User StandardUser() {
        return new User(...standard...);
    }

    public static User AdminUser() {
        return new User(...admin...);
    }
}
Then in your Unit Test...
// Arrange
User standardUser = ArrangeUser.StandardUser();
// Act
bool canDoSomething = standardUser.CanDoSomething();
// Assert
Assert.True(canDoSomething);
Then in your Integration Test...
// Arrange
User standardUser = ArrangeUser.StandardUser();
ServiceToTest service = new ServiceToTest(standardUser); // replace with some sort of Repository Mock or whatever suits.
// Act
bool canDo = service.CanDoService();
// Assert
Assert.True(canDo);
This way you can test both the unit aspect and the service aspect - by creating a common way to build the arrangements, without having to abstract out the entities, and it solves the problem of recreating the same thing over and over again.
NB. This is just a basic code demo that can be made more complex, based on the scenario or your preferred test style.
I had a similar challenge and, together with my team, we created a tool that simplifies the test data arranging process by employing a random data generator: https://github.com/ocadotechnology/test-arranger. Especially take a look at:
How to organize tests with Test Arranger as it explicitly refers to the common DDD building blocks and explains how to arrange test data around them. In my case, following those recommendations resulted in a significant reduction in the amount and complexity of code for preparing the test data.
Custom Arrangers as it shows how to deal with the model invariants.
Besides the recommendations given on the test-arranger page, it is also handy to use Lombok's @Builder(toBuilder = true) (or an equivalent like Kotlin's copy method from data classes) on your domain classes. With the toBuilder method you can easily adjust randomly generated value objects and entities to the needs of a certain test case.
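The same adjust-only-what-you-need move, sketched in TypeScript as a loose analogue of Kotlin's copy / Lombok's toBuilder (the Customer model and the generator are made up):

interface Customer {
  id: string;
  email: string;
  vip: boolean;
}

// Stand-in for a random-data generator (test-arranger itself is a Java library)
function randomCustomer(): Customer {
  return {
    id: Math.random().toString(36).slice(2),
    email: `user${Date.now()}@example.com`,
    vip: false,
  };
}

// Override only the field the test case cares about; keep the rest random
const vipCustomer: Customer = { ...randomCustomer(), vip: true };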
How can I write a unit test for a Chef provider?
So far, our unit testing strategy uses ChefSpec for recipes, and we stuff most of the interesting logic for our providers in libraries to make the logic more testable. However, we still run into issues where our providers are calling other resources (among other simple logic issues). For example:
action :run do
  helper = Helper.new

  template '/etc/hosts' do
    source 'hosts.erb'
    variables({
      "host" => @new_resource.host,
      "ip_address" => node['ipaddress']
    })
    only_if { helper.update_hosts }
  end

  service 'httpd' do
    action :restart
  end
end
(this is not real code, just a trivial example)
What we'd like to do is test this provider in isolation to check for logic errors. ChefSpec has the capability of stepping into an LWRP, but it looks like that would force us to put the LWRP into a recipe, and many of our cookbooks are basically LWRP libraries with no recipes. We'd also just like to keep a clean separation in our tests, so it's obvious what component failed by looking at the file name.
Additionally, it would be nice if the test would automatically fail if there are any syntax errors in the LWRP definition. For example:
action :run do
  template '/etc/hosts/' do
    source_whoops 'hosts.erb' # attribute name misspelled on purpose
    action :whoops            # action that does not exist, on purpose
  end
end
It would be really nice if the above statement would cause the test to fail due to the attribute name being defined incorrectly, and the action name not existing (just like ChefSpec).
The only solution I've come up with is to basically create a "test cookbook" - a separate cookbook that defines each LWRP 1:1 with a single recipe, so ChefSpec can step into it that way. It seems like a reasonable, but less than ideal solution.
Looks like there is a (very recent) solution to this.
First, this pull request would basically do what I'm asking, although it has been rejected by the ChefSpec maintainer for understandable reasons.
The maintainer suggests using a mycookbook_test pattern - a separate cookbook that holds all the unit tests. This allows a simple one-recipe-per-LWRP approach.
Additionally, this approach keeps the cookbook clear of any unit tests, which is nice for consumers of the cookbook. Consumers may want to run their own unit tests, and there's no need (or desire) to run tests on third party cookbooks.
I am trying to understand the best practices for structuring an ember.js application. This slide from tomdale:
https://speakerdeck.com/u/tomdale/p/emberjs-more-than-meets-the-eye?slide=55
has a concise description of how to apportion the application logic. However, in trying to follow these guidelines I am finding some problems:
The router is growing too large. According to the presentation the router "responds to events from views", but this results in a lot of code when there are dozens of views.
There are a huge number of controllers. In a Rails application the CRUD actions typically reside in the same controller, however for ember apps it seems that there should be a one controller to list records, one to view a record, one to create a record, etc.
It doesn't feel very DRY because I am ending up with so many files between the controllers, views and handlebars templates that each only have a couple of lines of code.
I am trying to decide if the problem is that I am applying the guidelines incorrectly, or whether these guidelines only work for trivial applications.
Does anyone have any advice - especially on how to manage the growth of the router?
I think that saying Ember encourages too many controllers is like saying Javascript encourages too many functions. Yeah, you can go crazy with proliferation of either. Or you can do the opposite, and have it work exactly as you need. In general, always remember that your app should be exactly as complex as it needs to be, and no more so. You don't need to use a certain architecture or pattern just because some famous coder guy used it, nor even because it seems to be 'the Ember way'. Even 'Universal Good Things' like Separation of Concerns, MVC, etc. are principles & models which you should try to understand fully then use to the extent that they serve your needs. I think that the ability to selectively break rules and patterns for the right reasons is far more telling a sign of a great hacker than slavish devotion to the dogma of the programming gods. This is a craft, not a religion. (But YMMV. Perhaps there's a special circle of hell reserved for coders like me. I'm betting against it.)
Specific to Ember, I tend to use Controllers around my data models and/or around a particular user workflow, rather than around each view. Then use Routing/State Managers as the glue between your views, and I generally use Event Managers on views to handle browser events within each view, including sending instructions to the router. So, if I have an app that revolves around, say, Customers and Products, I'll have a controller for each, just as I tend to do in Rails. This will result in each controller holding more functions and computed properties than some people like to have in one place. It also means that I can't necessarily reuse my views in another context, because they're hard-wired to the controller. And yes, this is poor Separation of Concerns. But that's not an absolute good if it causes complexity that has no payoff.
Also on the subject of Controllers, I think folks particularly tend to proliferate controllers unnecessarily for subsets of your main data model. Say you've got a products controller, and you want to store the products that a given user is collecting in a comparison tool. Most people seem to create a new controller for this, but it's perfectly legit to push these into an additional array or other Enumerable inside of your Products controller or Customers controller, or on your Customer model. This keeps objects that rely on the same functions and properties within a closer scope. The content object in each controller is, AFAIK, just another Enumerable. It has a few special implicit references to the Controller, but isn't magic. There's no functional reason I've found not to use additional ones too. They work just as well with bindings, with #each, etc.
Similarly, some people just LOOOVE to break their app down into a million files, nest them 15 deep in the file structure, etc. More power to you, if that helps you to visualize the underlying logic, and make it clear to the rest of your team. For me, it just slows me down on projects with only a 1-3 person engineering team. Folks also tend to reproduce the file-based style of other MVC systems they're familiar with (like Rails), where files are the necessary explicit structure for separating views and other logic objects. This becomes an article of faith and a deeply ingrained habit. But in Javascript MVC, I have found that it often serves no such purpose and is strictly redundant to the implicit design. I tend to use a single, carefully organized file for my entire Ember app (separating it from any other non-library JS), with lots of indentation and nesting where that helps me to visualize the hierarchy. Whatever you do, file-wise, it's all the same at runtime, provided that you deliver it all to the right place at the right time. With Ember and JS, file structure is for the needs of your team, and nothing else. Calibrate accordingly.
(IMPORTANT CAVEAT: if you do use a million files, you'd better be using a pre-compiler to manifest them all together for delivery to the user, or you're going to take a huge latency hit on delivering all those files separately.)
(ANOTHER IMPORTANT CAVEAT: with a large team or a rapid daily release schedule like GitHub's, file-based separation of your logic can make version-control easier than doing lots of merges into the same file, where your merge tool may get confused. Again, this is an issue of managing and monitoring your human processes, and doing merges carefully, rather than a technical requirement imposed by your JS framework.)
(LAST IMPORTANT CAVEAT: Then again, sometimes the difference between a technical requirement and a human/procedural requirement is fuzzy. If you break your developer's brain, you also tend to have a broken app. So, do what works for the people and processes you have to deal with in getting it built.)
As I said before, YMMV. I'm not a coder God, as you can tell from my reputation score, so you may feel free to disregard me. But I stand behind the idea that you should use only as much complexity, only as much file-structure, and only as many higher-level abstractions (like routing, which may actually be overkill for limited-purpose single-page apps) as serves your needs; and no more.
We are developing quite a large Ember app (about 45 views at the moment), which implies almost the same number of controllers and templates.
Indeed our router is quite large, but we manage it quite easily by splitting it into many files. Basically, each file represents one screen of the app and is responsible for maintaining a functional set. Here is an extract of the router:
Router = Ember.Router.extend({
  root: Ember.Route.extend({
    index: Ember.Route.extend({
      route: '/',

      unlogged: Ember.Route.extend({
        route: 'welcome',
        connectOutlets: function (router) {
          var applicationController = router.get('applicationController');
          applicationController.connectOutlet('welcome');
        }
      }),

      logged: Ember.Route.extend({
        route: 'app',

        projects: Ember.Route.extend({
          route: 'projects',
          collection: ProjectsRoute,
          member: ProjectRoute,

          showProjects: function (router) {
            router.transitionTo('projects.collection');
          }
        })
      })
    })
  })
});
Then it's the same in the ProjectRoute. Each time there seem to be too many features in one route, we split it.
You can even reopen a route to extend it, and plug other functionality into it:
ProjectState.reopen({
  scenarios: ScenariosRoute,

  showScenarios: function (router) {
    router.transitionTo('scenarios.collection');
  }
});
It implies more files, but with good organization it's not hard to maintain, as it's very rare that you work on all features at the same time. Usually I have no more than 4 files open simultaneously (view, controller, template, route).
I don't know if it's a best practice, but it works pretty well for us.
Ok - I love NancyFx. Writing a web application with so few lines is just amazing!
But how do you test drive your NancyModules on the unit level?
Please note that I am aware of the excellent testing framework supplied with Nancy (Nancy.Testing on NuGet), which gives excellent ways to test (almost) the whole application stack. But now I mean the unit-level tests I use to flesh out the contents of my NancyModule, in TDD fashion.
Since the routes are defined in the constructor, often together with a lambda expression that constitutes the whole action, it feels a bit "unreachable" from a unit test. But have I missed something obvious on how to test the actions of the route?
For example, how would a unit test for this simple application look?
public class ResourceModule : NancyModule
{
    private IProductRepository _productRepo;

    public ResourceModule(IProductRepository repo) : base("/products")
    {
        _productRepo = repo;

        Get["/list"] = parameters => {
            return View["productList.cshtml", _productRepo.GetAllProducts()];
        };
    }
}
See there - now I wrote the production code before the test... :) Any suggestions on how to start with the test?
You can do test-first dev with the testing tools we provide:
In your test startup, configure a bootstrapper that only contains the module you have under test and any fake objects you want.
In your test, execute a specific route (like GET /list) - you might want a small helper for this to remove some repeated code.
Assert on what comes back - you have full access to the request and response objects (for headers, cookies etc), along with helpers for HTML bodies and, coming in 1.8, helpers for handling JSON, XML and plain string responses in the body.
Move onto the next route, rinse and repeat.
Ok, so you're not just testing the module, but if you look at the call stack, there's not much going on before or after you hit your route so it's not that big of a deal in my book :-) If you really do want to test the module in complete isolation then you can just construct it yourself and poke the individual routes accordingly (they're just dictionaries in the module).
As part of Nancy.Testing you can use the ConfigurableBootstrapper to control the setup, including the IoC setup. That should enable testing the module without lower-level dependencies, and enable TDD.
I am trying to build a test program in C++ to automate testing for a specific application. The testing will involve sending requests to a server; each request has a 'CommandType' field and some other fields.
The CommandType can be 'NEW', 'CHANGE' or 'DELETE'
The tests can be
Send a bunch of random requests with no pattern
Send 100 'NEW' requests, then a huge amount of 'CHANGE' requests followed by 200 'DELETE' requests
Send 'DELETE' requests followed by 'CHANGE' requests
... and so on
How can I design my software (what kind of modules or layers) so that adding any new type of test case is easy and modular?
EDIT: To be more specific, this test will be to only test one specific application that gets requests of the type described above and handles them. This will be a client application that will send the requests to the server.
I would not create your own framework. There are many already written that follow a common pattern and can likely accommodate your needs elegantly.
The xUnit framework in all incarnations I have seen allows you to add new test cases without having to edit the code that runs the tests. For example, CppUnit provides a macro that when added to a test case will auto-register the test case with a global registry (through static initialization I assume). This allows you to add new test cases without cracking open and editing the thing that runs them.
And don't let the "unit" in xUnit and CppUnit make you think it is inappropriate. I've used the xUnit framework for all different kinds of testing.
I would separate out each individual test into its own procedure or, if it requires code beyond a function or two, its own source file. Then in my main routine I'd do something like:
int main()
{
    run_test_1();
    run_test_2();
    //...
    run_test_N();
    return 0;
}
Alternatively, I'd recommend leveraging the Boost Test Library and following their conventions.
I'm assuming you're not talking about creating unit tests.
IMHO, your question is too vague to provide useful answers. Is this to test a specific application, or are you trying to make something generic enough to test as many different applications as possible? Where do these applications live? Are they client-server apps, web apps, etc.?
If it's more than one application that you want your tool to test, you'll need an architecture that creates a protocol between the testing tool and the applications, so that you can convert instructions that your tool and its consumers understand into instructions that the application being tested can understand. I've done similar things in the past, but I've only ever had to worry about maybe 5 different "applications", so it was a pretty simple matter of summing up all the unique functionality of the apps and then creating an interface that supports them all.
I wouldn't presume that NEW, CHANGE, and DELETE would be your only command types either. A lot of testing involves data cleanup, test reporting, etc. And applications all handle this their own special ways.
Use a C++ unit testing framework; read this for details and examples.