Unit test - best practice for a multilayer project [closed] - unit-testing

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 9 years ago.
Improve this question
I have a project with different layers of abstraction, which can be split into the following groups:
Internal API;
Data Access Layer (DAL)
Business Access Layer (BAL)
...
Public API
Publicly accessible classes that have access to the internal data;
REST endpoints.
...
And inside Public API services I use Internal APIs.
Is it required to write unit tests for all these layers, or only for the Internal API?
Are there any best practices?
Should I start writing my tests from Internal API and move to the next layer bottom-up?

The first thing I would say is "Yes." In other words, test everything.
For the internal API, you can write true unit tests with mock objects for the DAL, testing each class in isolation. This isn't just for verification's sake; it also gives you confidence that your code works and serves as documentation of the code. That confidence comes in handy when, for example, a REST API call fails later and you need to narrow down where the problem is.
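To make that concrete, here is a minimal sketch of the idea. All names (UserDal, UserService) are hypothetical, and the hand-rolled stub stands in for what a mocking library such as Mockito would generate:

```java
import java.util.Objects;

// Hypothetical data-access dependency we want to isolate from the test
interface UserDal {
    String findNameById(int id);
}

// Hypothetical internal-API class under test; the DAL is injected
class UserService {
    private final UserDal dal;
    UserService(UserDal dal) { this.dal = dal; }

    String greeting(int id) {
        String name = dal.findNameById(id);
        return name == null ? "Hello, guest!" : "Hello, " + name + "!";
    }
}

public class UserServiceTest {
    public static void main(String[] args) {
        // Stub DAL: no database involved, so this is a true unit test
        UserDal stub = id -> id == 1 ? "Ada" : null;
        UserService service = new UserService(stub);

        assertEquals("Hello, Ada!", service.greeting(1));
        assertEquals("Hello, guest!", service.greeting(99));
        System.out.println("all tests passed");
    }

    static void assertEquals(Object expected, Object actual) {
        if (!Objects.equals(expected, actual))
            throw new AssertionError("expected " + expected + " but got " + actual);
    }
}
```

In a real project you would express the same test with JUnit plus a mocking library instead of a hand-written stub and assert helper.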
You can test your DAL against an in-memory database for speed. I would call that an integration test, while others would call it a unit test; that's just semantics. But you need to do it too.
The Internal API tests are by developers for developers.
The testers should help with anything public facing. You simply write integration tests for the API services and REST client tests to verify the common cases and the obvious exceptional cases.
It sounds like a lot, and it kind of is. But if you take the time to get to know your tools and set up automation everywhere you can, you will be amazed how much you can accomplish pretty fast.
Hope this helps.

Related

Should I create an interface for every class to make my code testable (unit testing) [closed]

Closed 4 years ago.
I'm trying to learn how to create valuable unit tests.
In each tutorial I saw people create interfaces for every dependency to create a mock.
Does that mean I should always create an interface for every class I have in my project? I don't know whether it's a good or a bad idea, but every time I see a rule with "always" I get suspicious.
Should I always create an interface for every class I have in my project?
No.
There's no one single rule you can follow or thing you can do which would make all of your code automatically unit-testable. What you can do is write code with abstractable dependencies. If you want to test whether or not the code you've written is easily unit testable, try to write a unit test for it. If a dependency gets in your way, you have a coupled dependency. Abstract it.
How you abstract it is up to you. You have a variety of tools at your disposal:
Interfaces
Abstract classes
Concrete classes with lots of virtual members
Mockable pass-through wrapper classes (very useful for non-unit-testable 3rd party dependencies)
etc.
You also have a variety of ways to get the dependency into the code that uses it:
Constructor injection
Property injection
Passing it to the method as an argument
Factories
In some circumstances, service locators (useful when introducing dependency abstraction to a legacy codebase, for example)
etc.
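As a small illustration of the two lists above (an interface as the abstraction, constructor injection as the delivery mechanism), here is a sketch. The names are hypothetical; the wrapped dependency is the system clock, a classic hard-to-test coupling:

```java
// Small interface abstracting a hard-to-test dependency (the system clock)
interface TimeSource {
    long nowMillis();
}

class SessionChecker {
    private final TimeSource time;      // dependency arrives via constructor injection
    private final long expiresAtMillis;

    SessionChecker(TimeSource time, long expiresAtMillis) {
        this.time = time;
        this.expiresAtMillis = expiresAtMillis;
    }

    boolean isExpired() {
        return time.nowMillis() >= expiresAtMillis;
    }
}

public class SessionCheckerTest {
    public static void main(String[] args) {
        // Production code would inject System::currentTimeMillis;
        // the test pins time to fixed values instead.
        SessionChecker fresh   = new SessionChecker(() -> 1_000L, 2_000L);
        SessionChecker expired = new SessionChecker(() -> 3_000L, 2_000L);

        if (fresh.isExpired())    throw new AssertionError("should not be expired");
        if (!expired.isExpired()) throw new AssertionError("should be expired");
        System.out.println("all tests passed");
    }
}
```

Note that the concrete class under test needs no interface of its own; only the dependency that would otherwise make the test slow or non-deterministic gets abstracted.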
How you structure your code really depends on what you're building and what makes sense for those objects. For a service class that integrates with an external system, an interface makes a lot of sense. For a domain model with a variety of potential implementations that share functionality, an abstract class may make a lot of sense. There are many possibilities for many potential uses.
The real litmus test of whether or not your code is unit-testable isn't "do I use interfaces?", it's "can I write a meaningful unit test for this?" If the functionality is isolatable without relying on dependencies (either by not having them or by allowing them to be mocked for testing), then it seems pretty unit-testable to me.

Confusion about unit testing frameworks? [closed]

Closed 5 years ago.
I get the concept of unit testing and TDD on a whole.
However, I'm still a little confused about what exactly unit testing frameworks are. Whenever I read about unit testing, it's usually an explanation of what it is, followed by "oh, here are the frameworks for this language, e.g. JUnit".
But what does that really mean? Are frameworks just a sort of testing library that allows programmers to write simpler/more efficient unit tests?
Also, what are the benefits of using a framework? As I understand it, unit testing is done on small chunks of code at a time, e.g. a method. However, I could individually write a test for a method without using a unit testing framework. Is it maybe for standardization of testing practices?
I'm just very new to testing and unit-testing, clarification on some basic concepts would be great.
A bit of a broad question, but I think there are certain thoughts that could count as facts for an answer:
When 5, 10, 100, ... people go forward to "work" with the same idea/concept (for example unit testing), then, most likely, certain patterns and best practices will evolve. People have ideas, and by trial and error they find out which of those ideas are helpful and which are not.
Then people start to communicate their ideas, and those "commonly used" patterns undergo discussions and get further refined.
And sooner or later, people start thinking "I am doing the same task over and over again; I should write a program for me to do that".
And that is how frameworks come into existence: they are tools to support certain aspects of a specific activity.
Let's give an example: using a framework like JUnit, I can completely focus on writing test cases. I don't need to worry about accumulation of failure statistics; I don't need to worry how to make sure that really all my tests are executed when I want that to happen.
I simply understand how to use the JUnit framework; and I know how to further utilize JUnit test cases in conjunction with build systems such as gradle or maven - in order to have all my unit tests executed automatically; each time I push a commit into my source code management system for example.
Of course you can re-invent the wheel here and implement all of that yourself. But that is just a waste of time. It is like saying: "I want to move my crop to the market - let's start by building the truck myself." No. You rent or buy a pre-built truck, and you use that to do what you actually want to do (move things around).
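To see what "re-inventing the wheel" would even mean here, this toy runner sketches the core service a framework like JUnit provides: running every registered test and accumulating pass/fail statistics. Real frameworks add far more (assertion helpers, fixtures, reporting, build-tool integration), so this is an illustration, not a substitute:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ToyRunner {
    // Run every test and accumulate pass/fail counts
    static int[] runAll(Map<String, Runnable> tests) {
        int passed = 0, failed = 0;
        for (Map.Entry<String, Runnable> t : tests.entrySet()) {
            try {
                t.getValue().run();
                passed++;
            } catch (AssertionError e) {
                failed++;
                System.out.println("FAILED: " + t.getKey());
            }
        }
        return new int[] { passed, failed };
    }

    public static void main(String[] args) {
        Map<String, Runnable> tests = new LinkedHashMap<>();
        tests.put("additionWorks", () -> { if (2 + 2 != 4) throw new AssertionError(); });
        tests.put("stringLength",  () -> { if ("abc".length() != 3) throw new AssertionError(); });

        int[] result = runAll(tests);
        System.out.println(result[0] + " passed, " + result[1] + " failed");
    }
}
```

Everything outside the test bodies themselves is exactly the boilerplate a framework takes off your hands.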

Would this be a bad use case of Apache Camel? [closed]

Closed 6 years ago.
We are implementing a Web Service using Apache Camel that has many (20-50) "direct:" routes calling Java methods. Every method basically has a route to it, whether it's for business rule processing, or DAO access methods. All the routes use from("direct:").to("direct"), but never to any other component.
While this may seem like it decouples the system from the standard Controller->bo->dao layers, it adds unnecessary bookkeeping of the Camel routes.
A better alternative would simply be to define Java interfaces for the Business Object and DAO layers, with an additional interface for any other service (external to the system, like file:// or http://) requests that would be a dependency inside the Business Objects or Controllers. The implementation of this additional interface would use Apache Camel to talk to those external services.
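A rough sketch of that proposed structure, with all names hypothetical: the business object depends only on plain Java interfaces, and only the gateway interface would be backed by Camel in production:

```java
// Plain Java interface for the DAO layer
interface OrderDao {
    void save(String orderId);
}

// Interface for external-service requests; in production this would be
// implemented with Apache Camel (file://, http://, ...), keeping Camel
// out of the business code entirely.
interface ExternalOrderGateway {
    void notifyShipped(String orderId);
}

// Business object: no Camel imports anywhere, trivially unit-testable
class OrderService {
    private final OrderDao dao;
    private final ExternalOrderGateway gateway;

    OrderService(OrderDao dao, ExternalOrderGateway gateway) {
        this.dao = dao;
        this.gateway = gateway;
    }

    void ship(String orderId) {
        dao.save(orderId);
        gateway.notifyShipped(orderId);
    }
}

public class OrderServiceDemo {
    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        OrderService service = new OrderService(
                id -> log.append("saved:").append(id).append(' '),
                id -> log.append("notified:").append(id));
        service.ship("42");
        System.out.println(log);
    }
}
```

Swapping the in-memory fakes for a JDBC-backed DAO and a Camel-backed gateway changes nothing in OrderService itself.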
As a side note, I'm thinking about how to convince my current colleagues to see my point.
Thoughts?
tl;dr:
Should Apache Camel be used where there is only 1 or 2 applications present?
I have worked on applications where 20 systems were involved, of varying complexity, protocols and patterns. I know other places where 50+ systems are involved. The only limit is your design, performance, etc.
Apache Camel is a middleware framework. Essentially your business logic should not know about how the data got to it or where it is to be delivered, only what it should deliver. Camel should take care of the rest.
By the way, does your middleware not talk to the external world? Why only use the direct and not other components?
You can also hide the middleware by using bean integration. That gives you even more decoupling. See here: http://camel.apache.org/bean-integration.html
It really depends on what it is you want to accomplish and what your requirements are.
Well, for your specific use case I would say that you would probably have been better off with just a regular Java implementation with Camel being used as a glue between your system and the outside world. Like Souciance explains in his answer, Camel really shines when you have to integrate with multiple systems so you should at least keep it for communicating with the outside-world.
However, having already implemented the system's internals using Camel, I would say that it doesn't make much sense to put additional effort into replacing it with pure Java. The current implementation gives you the ability to make the system more robust, for instance by using a high-performance MQ as a replacement for the direct routes, which would help your system become more resistant to failures and more easily decoupled later on. Having routes around your DAO objects also makes it much easier to implement batching of DB updates when your system's load grows.

Best practices for Design by Contract (DbC) and Test Driven Development [closed]

Closed 7 years ago.
I'm interested in obtaining a concise recommendation on how best to use code contracts and tests in practice, across the board from development to production. I understand that the two are different paradigms, testing different things.
On a method level, a contract may specify that a parameter needs to be of type STRING and have a minimum length of THREE CHARACTERS to pass. A unit test may ensure that for ANY given string that matches this, the output hash is correct (assuming that to be the role of the function).
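The division of labour described above can be sketched as follows. The hashing logic is a trivial stand-in, not a real hash function; the point is only where the contract ends and the unit test begins:

```java
public class Hasher {
    static int shortHash(String s) {
        // DbC-style precondition: non-null string of at least 3 characters.
        // The contract guards the boundary; it says nothing about correctness.
        if (s == null || s.length() < 3)
            throw new IllegalArgumentException("need a string of length >= 3");
        // Stand-in for the actual hashing logic
        return s.length() * 31 + s.charAt(0);
    }

    public static void main(String[] args) {
        // Unit-test concern: correct output for valid input
        System.out.println("hash of \"abc\" = " + shortHash("abc"));

        // Contract concern: invalid input is rejected before the body runs
        try {
            shortHash("ab");
            throw new AssertionError("precondition should have failed");
        } catch (IllegalArgumentException expected) {
            System.out.println("contract rejected short input");
        }
    }
}
```

Java's built-in `assert` statement offers a related mechanism that is disabled by default and enabled with `java -ea`, which maps naturally onto the common practice of checking contracts in development builds but not in production.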
Must a test exist to ensure that the contract fails? I understand that for a test to be valuable it must be repeatable, so tests are not good candidates for assisting in fuzzing. Conversely, DbC would allow for it.
Initially, I thought that a test could simply ensure the contract, but I'm assuming that this moves the contract outside of the definition of the method, and DbC is trying to enforce a tight coupling?
Similarly, are contracts actively enforced only during development? In production code, a build carries no sign of the unit tests, but what about DbC? Are contracts "ignored", or are assertions allowed to blatantly fail in practice?
Is it true to say DbC does not exist in methods whose role is validation? For example, one would expect invalid inputs from the front end. I'm assuming DbC is defined strictly within the realms of where any input is assumed "clean".
I use DBC and unit tests for different things, along the lines you are reasoning.
I don't verify contracts using unit tests, instead I create a "bot" that simulates (fuzzy) user behavior and use that to try to trigger error conditions in my app.
I find contracts to be really helpful especially in the cases where the number of permutations of input to my app is really large, since it in that case is very hard to get confidence only from unit tests.
I don't run my code in production with contracts enabled.
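A minimal sketch of the fuzzing-bot idea from this answer (the target method and all names are hypothetical): random inputs are thrown at a method whose contract is checked on every call, so any violation surfaces immediately:

```java
import java.util.Random;

public class FuzzBot {
    // Hypothetical target under test: clamps a value into [lo, hi]
    static int clamp(int v, int lo, int hi) {
        int result = Math.max(lo, Math.min(hi, v));
        // Contract postcondition: the result is always within bounds.
        // Using `assert` means this check vanishes unless run with -ea,
        // matching the practice of disabling contracts in production.
        assert result >= lo && result <= hi : "contract violated for v=" + v;
        return result;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);        // fixed seed keeps the fuzz run repeatable
        for (int i = 0; i < 100_000; i++) {
            int lo = rnd.nextInt(100);
            int hi = lo + rnd.nextInt(100);
            clamp(rnd.nextInt(), lo, hi);   // postcondition checked inside
        }
        System.out.println("no contract violations in 100000 random calls");
    }
}
```

This is the complementary relationship the answer describes: a handful of repeatable unit tests pin down specific behaviour, while the contract plus random input covers the huge permutation space.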

TDD approaches for Development DBAs? [closed]

Closed 5 years ago.
The software development team at my company develops using TDD and BDD practices. Consequently we have lots of unit, integration and acceptance tests to let us know whether our code base is working as expected. Needless to say, we now could not live without these tests giving us constant feedback.
The development DBA on our team writes table views with complex logic. He develops these without unit tests, and they invariably break when he does subsequent development, causing frustration in the software development team.
My question is, are DBAs encouraged to use TDD practices when working in an agile environment? Do DBAs have test frameworks that allow them to work in this way? We use IBM's DB2 database; are there any test frameworks for this database that allow database views to be developed in a TDD manner?
In the past I've used two approaches:
Having a very thin Data Access layer in the application and writing tests around that. In other words (assuming your dba uses sprocs), for each new sproc a method to access it is written and a test is created which exercises it appropriately (or better, tests first). This is nice because it integrates easily with test runners. You can use transactions to rollback tests without side effects.
Another option is to use native SQL testing frameworks. I've evaluated tsqlt which is a SQL Server framework, so not appropriate in your case, but the approach is solid and there could be appropriate frameworks for DB2.
There are several frameworks to test routines in different kinds of databases. Some of them follow the xUnit specification, which allows you to have JUnit-like tests at the database level.
For DB2, there is a framework called db2unit: https://github.com/angoca/db2unit
With this framework you can compare objects (numbers, dates, booleans, strings, etc.) as you do in JUnit.
You can include the results of your database-level tests in your global test suite by capturing the error code, and this can be included in a Continuous Integration system. db2unit uses Travis-CI to test itself.