Are Spring/Micronaut repository tests necessary [closed] - unit-testing

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
Hello Stack Overflow community.
I had an argument at work about whether repository tests for Spring Data JPA (or the Micronaut equivalent) are necessary at all.
This would be my setup of the application:
@Controller ➡ @Service/@Singleton ➡ Repository<Entity>
In my service tests I'd use the @ExtendWith(SpringExtension::class) extension (JUnit 5).
When creating the test setup, I'd @MockBean away other systems I need to call (like REST APIs) but @Autowired my repositories.
When setting up my test data I'd simply save the required entities into an H2 in-memory database using the injected repositories.
This tests my database logic as well as my business logic. With 100% test coverage I've tested all the database calls that can happen in production.
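Concretely, the setup I have in mind looks roughly like the sketch below (the entity, repository, and REST client names are made up, and it assumes an H2 datasource on the test classpath):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.test.context.junit.jupiter.SpringExtension;

@ExtendWith(SpringExtension.class)
@SpringBootTest
class OrderServiceTest {

    @Autowired
    private OrderRepository orderRepository; // real Spring Data repository backed by H2

    @MockBean
    private PaymentClient paymentClient;     // external REST API mocked away

    @Autowired
    private OrderService orderService;       // service under test

    @Test
    void settlesAnOpenOrder() {
        // arrange test data through the real repository
        Order order = orderRepository.save(new Order("open"));
        when(paymentClient.charge(order.getId())).thenReturn(true);

        orderService.settle(order.getId());

        // both the business logic and the actual database call are exercised
        assertEquals("paid", orderRepository.findById(order.getId()).orElseThrow().getStatus());
    }
}
```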
However, what I usually see in projects is that the repository is mocked away.
To cover the custom repository calls, there are separate tests that make sure the repository methods work as expected.
What are your thoughts on this? Do you prefer the approach with repository mocks or without, and why?

I have no experience with Micronaut, but I believe my answer applies to both frameworks:
Spring, for example, supports different types of integration tests.
The fact that you're running a Spring context in the test already means it's more than a unit test. In general, the Spring test package is for integration tests.
Now, in Spring there are annotations like @DataJpaTest and @WebMvcTest. They create a "slice" of the application context, loading only the necessary beans and mocking or omitting the others. For example, in a @WebMvcTest the JPA repositories are not loaded at all.
You can also create your own slices.
Now, if you want to check your REST layer (the controller is defined correctly, the request is validated properly, a request produces a response in the proper format, and so forth), you can use @WebMvcTest.
If you want to test that the SQL queries are correct, you can use @DataJpaTest (assuming you work with JPA / Spring Data, of course).
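As a rough sketch (the entity and repository here are hypothetical), a @DataJpaTest slice loads only the JPA infrastructure and by default swaps in an embedded database:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;

@DataJpaTest // loads only the JPA slice; by default replaces the datasource with an embedded DB
class CustomerRepositoryTest {

    @Autowired
    private CustomerRepository customerRepository; // hypothetical Spring Data repository

    @Test
    void findsCustomersByCity() {
        customerRepository.save(new Customer("Alice", "Berlin"));
        customerRepository.save(new Customer("Bob", "Paris"));

        List<Customer> berliners = customerRepository.findByCity("Berlin");

        assertEquals(1, berliners.size());
        assertEquals("Alice", berliners.get(0).getName());
    }
}
```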
Now, if you want to test service logic, sometimes you don't even need an integration test (loading the Spring context), since you can run a plain unit test for the service and mock out the repository calls, or calls to external services if you have those.
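For the service-logic case, a plain Mockito test without any Spring context could look like this (again, the names are just illustrative):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Optional;
import org.junit.jupiter.api.Test;

class DiscountServiceTest {

    @Test
    void grantsLoyaltyDiscountToLongTermAccounts() {
        // no Spring context, no database: the repository is a plain Mockito mock
        AccountRepository repository = mock(AccountRepository.class);
        when(repository.findById(42L)).thenReturn(Optional.of(new Account(42L, 2015)));

        DiscountService service = new DiscountService(repository);

        assertTrue(service.isEligibleForLoyaltyDiscount(42L));
    }
}
```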
Regarding the H2 approach: while people use it for testing their DB layer (SQL queries), it doesn't always behave exactly like your production database, which means that sometimes you can't run the same SQL query in your tests. For these cases I recommend the Testcontainers project: it runs a Docker image of your database when the test starts and stops it after the test ends.
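A sketch of that approach with the Testcontainers PostgreSQL module and Spring Boot (the image tag and test names are illustrative):

```java
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@SpringBootTest
@Testcontainers // starts and stops the container with the test lifecycle
class CustomerRepositoryIT {

    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15");

    @DynamicPropertySource
    static void datasourceProperties(DynamicPropertyRegistry registry) {
        // point Spring at the containerised database instead of H2
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }

    @Test
    void nativeQueryBehavesLikeProduction() {
        // repository calls here run against the same database engine as production
    }
}
```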
Update
Based on OP's comment:
lot of the 'mysql query' is already been taken care of by the framework, so why should I explicitly test the repository
This is subjective, but let me put it this way: first of all, tests are a tool for the developer to gain confidence in the code.
If the developer wants to be sure that the query behaves as expected, then a test is a tool that can help. Specifically regarding queries: maybe the query is just wrong, maybe it's a native query that should be checked anyway, maybe there are many queries that run one after another. It's up to you to decide.
Is it worth it to write Unit tests for services?
Well, this depends on what the services actually do.
If they run a complicated algorithm that can't easily be checked by integration tests (e.g. the service requires various mocks in various cases), then unit tests for the services make sense.
Besides, in general, unit tests are way faster than Spring tests. So (again, subjectively) my personal rule is: if you can gain confidence in the code with a unit test, do it; if you need to check integration, go for an integration test.

If you are not mocking your repository in the service JUnit tests, then it is an integration test and not a unit test. And that is fine if you want to keep it this way.
But if you want to write unit tests that test your individual layers, then you should mock your repository and write separate integration tests for the repository.

Related

Testing process in Laravel application development

I'm following the TDD technique in my new Laravel project, so I have a set of tests that cover my controllers, model classes, services, etc. Most of these tests are HTTP tests, so I store them in the /tests/Feature directory. Additionally I have a few unit tests, which cover quite specific methods that are not (easily) reachable from the HTTP tests.
If I understand correctly, each HTTP test is a functional test, because it covers a lot of classes, including the controller. In that situation, should I separately create unit tests for each method in my project even if it is already covered by HTTP tests? If yes, what benefit do I get from it?
Thank you in advance for any explanations.
The philosophy of unit tests is to test a small piece of your code. For example, if you have an endpoint that only returns the list of the 10 most recent posts, then there's probably no need to create a unit test for it; it can be covered by functional or HTTP tests. But let's assume you have an endpoint that the user calls to upgrade their account from the regular type to the golden type so they can access more posts or videos. There will definitely be a lot going on in that endpoint, so you should write unit tests in addition to functional tests. One more thing: when you write functional tests you should see them from the QA perspective. I can categorize it like this:
Functional/HTTP/Feature => [validation checks, response checks => [the endpoint throws an error in different situations or returns success if all goes well]]
Unit => [write tests for the small functions that get called by that endpoint, or write a test for the endpoint and mock everything so you get what you expect]
Integration => [write tests for the third-party APIs, database persistence, or caching persistence that you use in your application]
So if you are writing tests for an endpoint and your functional tests have already covered part of that endpoint's logic, then I guess it's not necessary to write more unit tests for it.

Automated Unit Testing Oracle BPEL SOA projects

Is there an alternative to the Oracle SCA Unit tests provided with JDeveloper for testing SOA projects/BPEL?
The problem I have with it is the amount of effort required to write the tests through the clunky UI, and the fact that the smallest change invalidates all the tests built so far, which makes them unmaintainable.
The other issue is that, because of the graphical interface, the SOA composite must be written before the unit tests can be, which means test-driven development is not possible.
The final issue is that the emulation functionality is incomplete for database partner links.
I use SOAP-UI to perform unit testing. I create separate test scripts with SOAP-UI which allow me to generate a number of different test case scenarios which can be targeted at individual services.
I then invoke these from a Jenkins/Hudson script to provide continuous integration testing.
In this way you can do your TDD without first creating the composite.
Your database partner links can be emulated either with a stub composite or with SoapUI, depending on your configuration and exactly what you use the data for.

Mocking and Repository / Service layers in C#

I am trying to write some unit tests for my service layer, which I think is going well. The service layer has a dependency on a repository, so I am mocking the repository using RhinoMocks; that way I am testing the service layer without hitting the database, which is great.
Now I need to test my repository layer. It has a direct connection to a database, so I have to test it against that, don't I? I have no other option, do I?
If I test another implementation of the repository that doesn't hit the database, then I am not testing my implementation.
I have managed to mock out anything that depends on code that takes a while to run, i.e. the repository. The result is that all my tests for the layers that depend on the repository complete fast and do not hit the database.
The problem is what to do with the repository itself. I have to test it, but it has a dependency on a SQL database.
Well, the general answer goes like this: I would write unit tests that verify the logic of the repository layer, break the SQL dependency out into a new class, and mock that in the tests of the repository. If the repository layer contains only a SQL connection and no logic, there is nothing to unit test in my opinion; then you are better served by integration tests with the database connected.
Since mocking code you don't own is a bad practice, I think the best option for you is to test repositories via acceptance/integration tests.
You certainly can test your repository layer without talking to a database. Most ADO.NET classes are mockable if you are careful about how you create them and you couple to interfaces instead of concrete classes. Unfortunately, ADO.NET was created before mocking was a popular practice, and it is still a bit of a pain to do.
The real question in my mind is whether you should try to mock them. The benefits of mocking are twofold: the tests run faster, and they force you to encapsulate more details about your database (making it easier to switch out DB technologies, if you ever want to). The benefits of functional tests are that they also test your database layer (stored procedures, etc.), they are arguably easier to write, and they are easier to maintain in the sense that if a DB change is made, integration tests notice automatically instead of you hunting down the mocked-out tests.
I would say the "best" approach is to test them both with mocks and with the real database, since this gives you the best of both worlds. However, it is of course quite costly.

Mocking WebService consumed by a Biztalk Request-Response port

I'm using BizUnit to unit test my BizTalk orchestrations, but some orchestrations consume a web service, and testing these seems more like integration testing than unit testing.
I'm familiar with using a mocking framework to mock the generated proxy objects in order to test a web service from a Windows Forms application, but I would like to be able to do it in a more integrated way with a request-response port.
How would you approach this problem?
This goes to the heart of one of my main irritations as a BizTalk developer - BizTalk does not lend itself to unit testing. From the fact that 99% of your interfaces into BizTalk applications are message based and have a huge number of possible inputs, through to the opaque nature of orchestrations, BizTalk offers no real way of testing units of functionality as... well... units.
For BizTalk, integration tests are sadly often the only game in town.
That results in BizUnit being (IMO) a misnomer, through no fault on the part of Kevin Smith. A better name would perhaps be BizIntegrationIt. BizUnit offers a range of tools that assist in integration testing; the majority of its tests, like checking whether a file has been written to a given directory or sending an HTTP request to a BizTalk HTTPReceive location, are, strictly speaking, testing integration.
Now that I've gotten that rant out, what you are asking for is something I've been thinking about for a long time: the ability to create automated unit tests that give some real confidence that making a small change to a map won't suddenly break something downstream, as well as a way to remove dependence on external services.
I've never thought of a really nice way of doing this, but below is a solution that should work. I've done variations of each part of this in isolation but never tried to put them all together in this specific form.
So, given the desire to mock a call to some external service (one that may not even exist yet) without actually making any external call, while having the ability to set expectations for that service call and to specify the nature of the response, the only method I can think of is to develop a custom adapter.
Mock web service using a custom adapter
If you build a custom request-response adapter you can plug it into your send port in place of the SOAP adapter. You can then specify properties for the adapter that allow it to behave as a mock of your webservice. The adapter would be similar in concept to a loopback adapter but would allow internal mocking logic.
Things that you might want to include as adapter properties:
Expected document (perhaps a disk location that specifies an example of what you expect your BizTalk application to send to the web service).
Response document - the document that the adapter will send back to the messaging engine.
Specific expectations for the test such as lookup values in document elements.
You could also have the custom adapter write to disk and setup a BizUnit step to validate the file that was written out.
Building a custom adapter is non-trivial but possible; you can get a good start from the BizTalk Adapter Wizard, and there is an article on deploying custom adapters here.
There is a bug in the code generated by the wizard: you will need to change new Guid("") to new Guid().
There are also some examples of building custom adapters in the BizTalk SDK.
Another option is to use a plain http page and the HTTP solicit response as discussed here, all your logic goes in the http page. This is probably simpler if you are happy having an http call, and setting up an IIS port to listen for your test.
Initialising unit tests
You can import binding files into a BizTalk application using a .bat file.
If you make a new binding file for each test you run, as well as for your standard application setup, you can then run the appropriate batch file to apply the right binding.
Each binding file would change your webservice sendport to use the mock custom adapter and set the specific properties for that test.
You could then even make a custom BizUnit step that (perhaps) generated binding settings based on settings in the test step and then ran the shell commands to update the bindings.
Testing Message Contents
A final thing that you might want to consider, to really tie all this together, is some way of testing the contents of messages. You could do this in your mock adapter, but that would get tedious very quickly for large messages, or for a large range of possible input messages.
One option is to make a custom pipeline that calls Schematron to validate the files it receives. Schematron is a schema language that allows a much richer level of file inspection than XSD, so you can check things like "if element x contains this content, I expect element y to be present".
If you built a custom pipeline that took a Schematron schema as a parameter, you could then swap in a testing file for a specific unit test, validating that for this test, when you call the web service, you get a file that actually matches what you want (and doesn't just match the XSD).
As a co-author of BizUnitExtensions (www.codeplex.com/bizunitextensions) I agree that the name "unit" in BizUnit can be confusing, but for BizTalk, the 'integration test' is the unit test. Some BizTalk folk have successfully used mocks to test pipeline components and other test harnesses (+ BizUnit/Extensions) to test schemas and maps.
Orchestrations unfortunately are opaque. But there are good reasons for that.
(a) Because of the huge subscription system in the message box - which orchestrations use when being activated etc. - it is not possible to fire up some "virtual" process to host the orchestration (which can be done for pipelines; Tomas Restrepo has done something along these lines).
(b) Also, how would this virtual process handle persistence and dehydration? I'd wager that people using WF would have the same problem in trying to test the workflow fully.
(c) We don't work with the C# directly, so there is no way we can "inject" a mock interface into the orchestration code.
(d) An orchestration is not really a "unit". It's a composite element. The units are the messages going to and from the message box and the external components called through expression shapes. So even if you could inject a mock web service interface, you cannot inject mock message boxes and correlation sets and other things.
One thing that can be done for orchestrations (and I've been considering an addition to the BizUnitExtensions library to do this) is to link in with the OrchestrationProfiler tool, as that tool gives a pretty detailed report of all the shapes, and somehow check that individual steps were executed (and perhaps how long they took to execute). This could go quite far in making the orchestration a bit more of a white box. Also, considering that the orchestration debugger shows a lot of the variable values, surely it must be possible to get that info via an API to show what the values of variables were at a given point for a given instance.
Back to Richard's question though, my previous dev team had a solution. Basically what we did was to write a generic configurable HttpHandler that parsed incoming service requests and returned pre-set responses. The response sent back was configurable based on conditions such as XPath. In the BUILD and DEV binding files, the webservice end point was the mock. This worked brilliantly in isolating the BUILD and DEV environments from the actual third party webservices. This also helped in a "contract first" approach where we built the mock and the orch developer used it while the webservice author went ahead and built the actual service.
[Update 17-FEB-09: this tool is now on CodePlex: http://www.codeplex.com/mockingbird. If this approach sounds interesting, check it out and let me know what you think of the tool.]
Now, before someone throws the old "WHAT ABOUT MOCK OBJECT FRAMEWORKS" chestnut in, let me say that the utility above was used for both BizTalk 'consumers' as well as non-BizTalk consumers, BUT I have also worked with NMock2 and found that to be an excellent way to mock interfaces and set expectations when writing CLR consumers. (I'm going to be looking into Moq and TypeMock etc. soon.) However, it won't work with orchestrations, for the reasons described above.
Hope this helps.
Regards,
Benjy
Don't.
Don't test against arbitrary interfaces, and don't create mocks for them.
Most people seem to see developer (unit) testing as intended for testing nontrivial, individual units of functionality such as a single class. On the other hand, it is also important to perform customer (acceptance/integration) testing of major subsystems or the entire system.
For a web service, the nontrivial unit of functionality is hidden in the classes that actually perform the meaningful service, behind the communication wiring. Those classes should have individual developer test classes that verify their functionality, but completely without any of the web-service-oriented communication wiring. Naturally, but maybe not obviously, that means that your implementation of the functionality must be separate from your implementation of the wiring. So, your developer (unit) tests should never ever see any of that special communication wiring; that is part of integration and it can be viewed (appropriately) as a "presentation" issue rather than "business logic".
The customer (acceptance/integration) tests should address a much bigger scale of functionality, but still not focused on "presentation" issues. This is where the use of the Facade pattern is common--exposing a subsystem with a unified, coarse-grained, testable interface. Again, the web service communication integration is irrelevant and is implemented separately.
However, it is very useful to implement a separate set of tests that actually do include the web service integration. But I strongly recommend against testing only one side of that integration: test it end-to-end. That means building tests that are web service clients just like the real production code; they should consume the web services exactly the way that the real application(s) do(es), which means that those tests then serve as examples to anyone who must implement such applications (like your customers if you are selling a library).
So, why go to all that trouble?
Your developer tests verify that your functionality works in-the-small, regardless of how it is accessed (independent of presentation tier since it is all inside the business logic tier).
Your customer tests verify that your functionality works in-the-large, again regardless of how it is accessed, at the interface boundary of your business logic tier.
Your integration tests verify that your presentation tier works with your business logic tier, which is now manageable since you can ignore the underlying functionality (because you separately tested it above). In other words, these tests are focused on a thin layer of a pretty face (GUI?) and a communication interface (web services?).
When you add another method of accessing your functionality, you only have to add integration tests for that new form of access (presentation tier). Your developer and customer tests ensure that your core functionality is unchanged and unbroken.
You do not need any special tools, such as a test tool specifically for web services. You use the tools/components/libraries/techniques that you would use in production code, exactly as you would use them in such production code. This makes your tests more meaningful, since you are not testing someone else's tools. It saves you lots of time and money, since you are not buying, deploying, developing for, and maintaining for a special tool. However, if you are testing through a GUI (don't do that!), you might need one special tool for that part (e.g., HttpUnit?).
So, let's get concrete. Assume that we want to provide some functionality for keeping track of the cafeteria's daily menu ('cause we work in a mega-corp with its own cafe in the building, like mine). Let's say that we are targeting C#.
We build some C# classes for menus, menu items, and other fine-grained pieces of functionality and its related data. We establish an automated build (you do that, right?) using nAnt that executes developer tests using nUnit, and we confirm that we can build a daily menu and look at it via all these little pieces.
We have some idea of where we are going, so we apply the Facade pattern by creating a single class that exposes a handful of methods while hiding most of the fine-grained pieces. We add a separate set of customer tests that operate only through that new facade, just as a client would.
Now we decide that we want to provide a web page for our mega-corp knowledge workers to check today's cafeteria menu. We write an ASP.NET page, have it invoke our facade class (which becomes our model if we are doing MVC), and deploy it. Since we have already thoroughly tested the facade class via our customer tests, and since our single web page is so simple, we forego writing automated tests against the web page--a manual test using a few fellow knowledge workers will do the trick.
Later, we start adding some major new functionality, like being able to preorder our lunch for the day. We extend our fine-grained classes and the corresponding developer tests, knowing that our pre-existing tests guard us against breaking existing functionality. Likewise, we extend our facade class, perhaps even splitting off a new class (e.g., MenuFacade and OrderFacade) as the interface grows, with similar additions to our customer tests.
Now, perhaps, the changes to the website (two pages is a website, right?) make manual testing unsatisfactory. So, we bring in a simple tool comparable to HttpUnit that allows nUnit to test web pages. We implement a battery of integration/presentation tests, but against a mock version of our facade classes, because the point here is simply that the web pages work--we already know that the facade classes work. The tests push and pull data through the mock facades, only to test that the data successfully made it to the other side. Nothing more.
Of course, our grand success prompts the CEO to request (demand) that we expose the web application to mega-corp's BlackBerrys. So we implement some new pages and a new battery of integration tests. We don't have to touch the developer or customer tests, because we have added no new core functionality.
Finally, the CTO requests (demands) that we extend our cafeteria application to all of mega-corp's robotic workers--you did notice them over the last few days? So, now we add a web services layer that communicates through our facade. Again, no changes to our core functionality, our developer tests, or our customer tests. We apply the Adapter/Wrapper pattern by creating classes that expose the facade with an equivalent web service API, and we create client-side classes to consume that API. We add a new battery of integration tests, but they use plain nUnit to create client-side API classes, which communicate over the web service wiring to the service-side API classes, which invoke mock facade classes, which confirm that our wiring works.
Note that throughout this whole process, we did not need anything significant beyond our production platform and code, our chosen development platform, a few open-source components for automated building and testing, and a few well-defined batteries of tests. Also note that we didn't test anything that we don't use in production, and we didn't test anything twice.
We ended up with a solid core of functionality (business logic tier) that has proven itself mature (hypothetically). We have three separate presentation tier implementations: a website targeted to desktops, a website targeted to BlackBerrys, and a web service API.
Now, please forgive me for the long answer--I tire of inadequate answers and I did not want to provide one. And please note that I have actually done this (though not for a cafeteria menu).
This is a very interesting question that I still haven't seen a good generic answer to. Some people suggest using SoapUI but I haven't had time to actually test that yet. This page might be interesting on that.
Another way might be to somehow wrap the WebDev.WebHost.dll and use that... Phil Haack discusses that in this post.
It's also been discussed before on SO here.
Please let us know if you find another solution to this!
This is the way to do it:
Back to Richard's question though, my previous dev team had a solution. Basically what we did was to write a generic configurable HttpHandler that parsed incoming service requests and returned pre-set responses. The response sent back was configurable based on conditions such as XPath.
I haven't had to do this in a while, but when I tested my BizTalk apps I always used either SoapUI or Web Service Studio. I was able to test different input values without effort.

Unit Testing in web applications that use databases

I am building a web application that uses the database for Users, Security/roles, and to store content.
It seems a little daunting to me to begin on the road of unit testing because I have to make sure my database has been initialized properly for my tests to run.
What are common practices to help in this regard?
i.e. while developing/testing, I might delete a user, but for my test to pass that user has to be in the database, along with his profile, security settings etc.
I know I can create a setup script, something to recreate the database, etc.
I don't want to end up spending all my time maintaining my tests and making sure my database is in sync.
Or is that the cost of unit testing/TDD?
The solution is mocking. Mocks "replace" the connection: the unit under test "connects" to the mock and executes its statements, and the mock returns ordinary result sets (or the like).
After the test, the mock can give you a list of all the methods that were called by the unit under test. See easymock.org.
As the others said: tests that need a DB connection aren't unit tests. So drop the connection and do it locally with mock objects.
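As a rough EasyMock sketch (the DAO and service here are hypothetical):

```java
import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class UserServiceTest {

    @Test
    void looksUpDisplayName() {
        UserDao dao = createMock(UserDao.class);          // hypothetical DAO interface
        expect(dao.findNameById(7L)).andReturn("Alice");  // record the expected call
        replay(dao);                                      // switch the mock to replay mode

        UserService service = new UserService(dao);
        assertEquals("Alice", service.displayNameFor(7L));

        verify(dao); // fails if any recorded call was not made by the unit under test
    }
}
```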
It's not a unit test if you are testing more than one unit.
Usually you'll have one component (your page, or the business layer) talking to a data layer object that is responsible for actually connecting to and querying the database. My recommendation is to develop a unit test for the first component, using dependency injection to pass in a mock version of the data layer (which acts on hardcoded data, or a list you pass in, etc.). This way you are testing your higher-level code in isolation from the other components.
Then you are free to develop other unit tests (and integration tests) for the data layer to ensure that it is handling its job (writing to the database) correctly.
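A minimal sketch of that idea in Java, with hypothetical names: the component under test receives a data-layer interface, and the test passes in a fake backed by a hardcoded list instead of a database.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import org.junit.jupiter.api.Test;

class ReportServiceTest {

    // hypothetical data-layer interface the business code depends on
    interface UserDataLayer {
        List<String> loadActiveUserNames();
    }

    // the component under test only sees the interface, never the database
    static class ReportService {
        private final UserDataLayer dataLayer;

        ReportService(UserDataLayer dataLayer) {
            this.dataLayer = dataLayer;
        }

        String buildHeadline() {
            return dataLayer.loadActiveUserNames().size() + " active users";
        }
    }

    @Test
    void buildsHeadlineFromFakeData() {
        // fake data layer backed by a hardcoded list - no database involved
        UserDataLayer fake = () -> List.of("Alice", "Bob");

        assertEquals("2 active users", new ReportService(fake).buildHeadline());
    }
}
```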
We use an in-memory database (HSQLDB) for our unit tests. In the setup we populate it with test data, then before each test case we start a transaction and after each test case we roll the transaction back. This means each test case starts with a clean database.
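A minimal JUnit sketch of that pattern over plain JDBC and an in-memory HSQLDB (table and fixture data are illustrative; in a real suite the schema would typically be created once up front):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class UserQueryTest {

    private Connection connection;

    @BeforeEach
    void openTransaction() throws Exception {
        // in-memory HSQLDB; the driver must be on the test classpath
        connection = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        connection.setAutoCommit(false); // start a transaction for this test case
        try (Statement s = connection.createStatement()) {
            // schema created here for brevity; IF NOT EXISTS makes it a no-op after the first test
            s.execute("CREATE TABLE IF NOT EXISTS users (id INT PRIMARY KEY, name VARCHAR(50))");
            s.execute("INSERT INTO users VALUES (1, 'Alice')");
        }
    }

    @AfterEach
    void rollBack() throws Exception {
        connection.rollback(); // undo everything the test did, leaving a clean DB
        connection.close();
    }

    @Test
    void countsUsers() throws Exception {
        try (Statement s = connection.createStatement();
             ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM users")) {
            rs.next();
            assertEquals(1, rs.getInt(1));
        }
    }
}
```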
It sounds like you actually want functional/integration testing. For Web applications I recommend you look into Selenium or Canoo WebTest.
These are also great for automating tasks you do on the Web. I have a set-up suite and a tear-down suite that create business entities and testing users through the admin interface as well as tests for the customer-facing site.
Michael Feathers argues that tests that communicate with databases aren't unit tests by definition. The main reason for this is the point you bring up: unit tests should be simple and easy to run.
This isn't to say that you shouldn't test database code. But you don't want to consider them unit tests. Thus, if you do any database testing, you want to separate the tests from the rest of your unit tests.
Since I use Doctrine for my PHP database work, and since Doctrine has a query abstraction layer (called DQL), I can swap out back ends without having to worry too much about compatibility issues. So in this case, at the beginning of my unit tests I load the schema and fixtures into a SQLite DB, test my models, and discard the SQLite DB at the end of testing.
This way I've tested my models and data access to make sure their queries are formed correctly.
Now testing the specific database instance to make sure the current schema is correct is a different story and IMHO probably doesn't belong in your Unit Tests, so much as it belongs in your deployment task list.
The cost of unit testing/TDD is that you have to alter your design so that you have a clean separation between the database and the domain layer, so that you can create a fake that will allow you to create tests that don't hit the database.
But that cleaner design is just the beginning of the cost. After that you have to create tests that both help you make the code work right the first time and alert you when anyone breaks something that used to work.
And after you have a good fundamental design with tests that protect your existing functionality, you'll find yourself cleaning up code to make it easier to work with, with confidence you aren't breaking things along the way.
And so on and so on... The costs of unit testing/TDD just keep piling up over time.
For Java, you may also want to look into DbUnit: http://www.dbunit.org/
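A minimal DbUnit sketch (the dataset file, JDBC URL, and table are placeholders) that loads a known dataset before each test:

```java
import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class UserDaoDbUnitTest {

    @BeforeEach
    void seedDatabase() throws Exception {
        // placeholder JDBC URL; point this at your test database
        Connection jdbc = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        IDatabaseConnection dbUnit = new DatabaseConnection(jdbc);

        // users.xml is a hypothetical flat-XML dataset describing the rows to load
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(getClass().getResourceAsStream("/users.xml"));

        // wipe the tables named in the dataset and insert the known rows
        DatabaseOperation.CLEAN_INSERT.execute(dbUnit, dataSet);
    }

    @Test
    void findsSeededUser() {
        // exercise the DAO here against the seeded data
    }
}
```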