Unit testing Bean managed transactions in EJB 3.x outside the container

Is there a good way to unit test BMT in EJB 3.x outside the container? I believe it makes sense to test transactions right when we are coding them; IMHO it is important to test what we code from the earliest possible stage. So if there is a simple way to test BMT that does not take too much time to execute, it would be really welcome.
At present, I am using an in-memory DB to test my JPQL in EJBs outside the container, and I use Unitils/DbUnit to inject test data into the DB. With that test bed in place, what should I do in the special scenarios where I need to test BMT?
P.S.: I have taken a look at tools like Bitronix, but I am not sure whether it would help my case. I need a tool that is lightweight and fast, so that it does not frustrate the developers - this is unit testing, after all. Please give me your input on this too: in your opinion, would such a tool be good for my purpose? If so, do you have any examples that I can refer to?
Thanks a lot

Do you really need to test "outside the container", or is "outside the server" sufficient? If the former, have you looked at the embeddable EJB container support in EJB 3.1? The embeddable EJB container runs in a standalone Java process (ideal for unit tests), and BMT is required for the embeddable EJB container per table 27 (section 21.1) of the EJB 3.1 spec.
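For illustration, here is a minimal sketch of what such a test could look like with the EJB 3.1 embeddable API. AccountService, its methods, and the JNDI name are made-up placeholders; the real global JNDI name depends on your module layout.

    import javax.ejb.embeddable.EJBContainer;
    import javax.naming.Context;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class AccountServiceBmtTest {

        @Test
        public void depositCommitsItsOwnTransaction() throws Exception {
            // Boots an embeddable EJB container inside the test JVM - no server required
            EJBContainer container = EJBContainer.createEJBContainer();
            try {
                Context ctx = container.getContext();
                // The global JNDI name depends on the name of the (test) module
                AccountService service =
                        (AccountService) ctx.lookup("java:global/test-classes/AccountService");
                service.deposit(42L, 100);                 // the BMT bean begins/commits its own UserTransaction
                assertEquals(100, service.balanceOf(42L));
            } finally {
                container.close();
            }
        }
    }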

For unit testing transactions, what is needed is a complete transaction manager that is standalone, takes very little time to execute, and does not force you to deploy any beans or jars.
Bitronix works fine and satisfies my purpose: the tests take less than a second to execute. So I no longer have to mock my transactions, and I can be sure the transactions work as I expect them to before my code moves to the integration testing phase. I have also seen positive feedback about Atomikos, but I have never tried it; maybe I will update this thread when I evaluate Atomikos too.
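To give an idea of how little ceremony this needs, here is a rough sketch of obtaining a standalone UserTransaction from Bitronix in a plain JUnit test; the bean under test and the JPA setup are left as comments because they depend on your project.

    import javax.transaction.UserTransaction;
    import bitronix.tm.BitronixTransactionManager;
    import bitronix.tm.TransactionManagerServices;
    import org.junit.Test;

    public class StandaloneJtaTest {

        @Test
        public void transactionsWorkOutsideTheContainer() throws Exception {
            // Standalone JTA transaction manager - no container, no deployment
            BitronixTransactionManager btm = TransactionManagerServices.getTransactionManager();
            try {
                // BitronixTransactionManager implements UserTransaction, so it can be handed
                // to the BMT bean under test via constructor/setter injection instead of JNDI
                UserTransaction utx = btm;
                utx.begin();   // in the real test the BMT bean itself calls begin()/commit()
                // ... JPA work against the in-memory database ...
                utx.commit();
            } finally {
                btm.shutdown();
            }
        }
    }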

Related

How can I test EJB 3.0?

I am using EJB 3.0 on Oracle WebLogic.
I need help with the following question:
How do I test EJB 3.0? I mean unit tests and/or integration tests. Can I use some kind of embedded EJB container, or create a mock for it to write unit tests? Maybe there are special test frameworks or approaches? EJBs aren't new to me, but I have never written tests for them.
Any information will be useful to me.
Thanks.
One option is using an embedded container. It starts up on every test execution, you have to get your beans through JNDI lookups, manage the container's configuration yourself, and do all kinds of boring, unproductive stuff.
On the other hand, there are frameworks like Arquillian that do this for you. It supports annotations like @EJB in tests, does dependency injection, and manages the container. Read the guide on their website; it's worth it.
From my experience, mocks are not a good solution for complex EJB testing, even though they may work for testing some non-container-dependent functionality.
My advice is to go with Arquillian.
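As a rough idea of what an Arquillian test looks like (GreetingService is just a stand-in for one of your beans):

    import javax.ejb.EJB;
    import org.jboss.arquillian.container.test.api.Deployment;
    import org.jboss.arquillian.junit.Arquillian;
    import org.jboss.shrinkwrap.api.ShrinkWrap;
    import org.jboss.shrinkwrap.api.asset.EmptyAsset;
    import org.jboss.shrinkwrap.api.spec.JavaArchive;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import static org.junit.Assert.assertEquals;

    @RunWith(Arquillian.class)
    public class GreetingServiceTest {

        @Deployment
        public static JavaArchive createDeployment() {
            // Arquillian deploys this micro-archive into the container it manages
            return ShrinkWrap.create(JavaArchive.class)
                    .addClass(GreetingService.class)
                    .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
        }

        @EJB
        private GreetingService service;   // injected by Arquillian, no JNDI lookup needed

        @Test
        public void greets() {
            assertEquals("Hello, EJB", service.greet("EJB"));
        }
    }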
I would try to use plain JUnit and Mockito as much as possible to test small units of code, and use embedded containers only for integration tests, since those tests take much longer to run than simple unit tests. Running the unit tests should never become annoying for developers.
An embedded EJB container used in JUnit tests is a good way to integration test your services and EJBs.
Using OpenEJB (or any other embedded container, such as GlassFish) lets you write small tests simply, with JUnit as the framework. Even JPA is integrated very well, using an in-memory database.
When it comes to mocking, say for remote services, you can still use Mockito inside.
Find some documented & runnable examples here: https://tomee.apache.org/examples-trunk/
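For instance, loosely following the TomEE examples, an embedded OpenEJB test can declare an in-memory HSQLDB datasource purely through properties. MoviesBean, its methods, the datasource name, and the matching persistence.xml are assumptions for the sake of the sketch.

    import java.util.Properties;
    import javax.ejb.EJB;
    import javax.ejb.embeddable.EJBContainer;
    import org.junit.Test;

    public class MoviesTest {

        @EJB
        private MoviesBean movies;   // injected into the test instance via the "inject" binding below

        @Test
        public void persistsAgainstAnInMemoryDatabase() throws Exception {
            Properties p = new Properties();
            // Declare an in-memory HSQLDB datasource for the embedded container;
            // the persistence unit in persistence.xml is expected to reference "movieDatabase"
            p.put("movieDatabase", "new://Resource?type=DataSource");
            p.put("movieDatabase.JdbcDriver", "org.hsqldb.jdbcDriver");
            p.put("movieDatabase.JdbcUrl", "jdbc:hsqldb:mem:moviedb");

            EJBContainer container = EJBContainer.createEJBContainer(p);
            try {
                container.getContext().bind("inject", this);   // OpenEJB-specific: injects the test's @EJB fields
                movies.addMovie("Bad Boys");
                // assert via movies.getMovies() etc.
            } finally {
                container.close();
            }
        }
    }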

What should be mocked for an integration test?

Upon reading Growing Object-Oriented Software, Guided by Tests, I learnt about test isolation and test fragility: the idea that each test should be very specific to a piece of code or functionality, and that the overlap in code coverage between tests should be kept to a minimum.
The implied ideal is that each change in the code should break only one test.
This avoids spending time going through multiple broken tests to confirm that one change is the cause and that it is fixed by the test modifications.
Now this seems easy enough for unit tests; they are very isolated by nature.
However, with integration tests it seems hard to avoid having multiple tests exercise the same code paths, particularly when they run in addition to the unit tests.
So my question is: what dependencies should be mocked when doing integration testing? Should anything be mocked at all? Should a single execution path be tested, with all side effects not directly relevant to that code path mocked out?
I'm toying with the idea of doing pairwise integration testing: test one relationship between two objects and mock everything else. Then changes in either of these objects would have minimal impact on other integration tests, while the pairs together still form a complete chain of end-to-end tests.
Thanks for any info.
Edit: Just to clarify, I'm basically asking "How do I avoid large numbers of failing integration tests during the normal course of development?", which I assume is achieved by using mocks, and why I asked about what to mock.
Update: I've found a very interesting talk about integration tests by J.B. Rainsberger which I think answers this fairly well, if perhaps a bit controversially. The title is "Integration Tests Are a Scam", so as you can guess, he does not advocate integration tests (end-to-end type tests) at all. The argument is that integration tests will always fall far short of the number needed to test the possible interactions thoroughly (due to combinatorial explosion) and may give false confidence.
Instead he recommends what he calls collaboration tests and contract tests. It's a 90-minute talk, and unfortunately the whiteboard is not very clear and there are no code examples, so I'm still getting my head around it. When I have a clear explanation I'll write it here, unless someone else beats me to it.
Here's a brief summary of contract tests. They sound like Design by Contract-style assertions, which I believe could be implemented with a Non-Virtual Interface pattern in C++.
http://thecodewhisperer.tumblr.com/post/1325859246/in-brief-contract-tests
Integration Tests are a Scam video talk:
http://www.infoq.com/presentations/integration-tests-scam
Summary: Integration tests are a scam. You're probably writing 2-5% of the integration tests you need to test thoroughly. You're probably duplicating unit tests all over the place. Your integration tests probably duplicate each other all over the place. When an integration test fails, who knows what's broken? Learn the two-pronged attack that solves the problem: collaboration tests and contract tests.
For integration tests you should mock the minimum number of dependencies needed to get the test working, but not less :-)
Since the integration of the components in your system is obviously the thing you want to test during integration testing, you should use real implementations as much as possible. However, there are some components you obviously do want to mock, since you don't want your integration tests to start mailing your users, for instance. When you don't mock those dependencies, you are mocking too little.
That doesn't mean, by the way, that an integration test should never send mail, but at the very least you want to replace the mail component with one that only delivers mail to some internal test mailbox, for example along the lines of the sketch below.
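For example, the mail swap often amounts to nothing more than putting a test double behind the same interface; MailSender and RecordingMailSender below are made-up names for such a pair.

    import java.util.ArrayList;
    import java.util.List;

    // The production code only ever depends on this interface.
    interface MailSender {
        void send(String to, String subject, String body);
    }

    // Test double wired in during integration tests: records mails instead of delivering them
    // (or, as described above, forwards everything to an internal test mailbox).
    class RecordingMailSender implements MailSender {
        final List<String> recipients = new ArrayList<>();

        @Override
        public void send(String to, String subject, String body) {
            recipients.add(to);   // the test can later assert who would have been mailed
        }
    }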
For integration tests I lean towards mocking the service rather than the representation, for example using Mirage instead of a third-party REST API and Dumbster rather than a real SMTP server.
This means that all layers of your code are tested, but none of the third parties are, so you are free to refactor without worrying that the tests will fail.
Unit tests should have mock objects, but integration tests should have few if any mocks (otherwise, what is being integrated?). I think it's overkill to do pairwise mocking; it will lead to an explosion of tests that might each take a long time, and to lots of copy-and-paste code which will be a pain to change if requirements change or new features are added later.
I think it's fine to not have any mocks in the integration tests. You should have everything mocked in the unit tests, to know that each individual unit works as expected in isolation; the integration test verifies that everything works wired together.
Regarding the contract/collaboration test pattern (described by J.B. Rainsberger in "Integration Tests Are a Scam", mentioned in the question above), and relating to the question here: I interpreted his talk to mean that when you own the code for both the service side and the client side, you should not need any integration tests at all. Instead you should be able to rely on mocks which implement a contract.
The talk is a good reference for a high-level description of the pattern, but (for me at least) it doesn't go into detail about how to define or reference a contract from a collaborator.
One common example of the need for the contract/collaborator pattern is an API's server and client (where you own the code of both). Here's how I've implemented it:
Define the contract:
First define the API schema; if your API uses JSON you might consider JSON Schema. The schema definition can be considered the "contract" of the API. (As a side note, if you're about to do that, make sure you know about RAML or Swagger, since they make writing JSON Schema APIs a lot easier.)
Create fixtures which implement the contract:
On the server side, mock out the client requests to allow unit testing of the requests/responses. To do this you create client request fixtures (a.k.a. mocks). Once you have your API defined, validate the fixtures against the JSON Schema to ensure that they comply. There are a host of schema validators; I currently use AJV (JavaScript) and jsonschema (Python), but most languages have an implementation.
On the client side(s) you will likely mock out the server responses to allow unit testing of the requests. Follow the same pattern as on the server, validating the request and response fixtures via JSON Schema.
If both the client and the server validate their fixtures against the contract, then whenever the API contract changes, the out-of-date implementations on both sides will fail JSON Schema validation, and you'll know it's time to update your fixtures and possibly the code which relies on them.
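As an illustration in Java (the answer itself uses AJV and Python's jsonschema; here I'm assuming the everit json-schema library and made-up schema/fixture file names), such a fixture-validation test boils down to:

    import java.io.InputStream;
    import org.everit.json.schema.Schema;
    import org.everit.json.schema.loader.SchemaLoader;
    import org.json.JSONObject;
    import org.json.JSONTokener;
    import org.junit.Test;

    public class CreateUserFixtureTest {

        @Test
        public void requestFixtureStillMatchesTheContract() throws Exception {
            // Load the shared contract (JSON Schema) and a client-request fixture from the classpath
            try (InputStream schemaStream = getClass().getResourceAsStream("/contract/create-user-request.json");
                 InputStream fixtureStream = getClass().getResourceAsStream("/fixtures/create-user-request.json")) {
                Schema schema = SchemaLoader.load(new JSONObject(new JSONTokener(schemaStream)));
                // Throws a ValidationException (failing the test) if the fixture no longer complies
                schema.validate(new JSONObject(new JSONTokener(fixtureStream)));
            }
        }
    }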

When to perform my unit test and why use Moq

Q1: When is it ideal to run unit tests? Should they be run every time before I debug the app? Should they be run before I commit changes to SVN? I think if an app has only a couple of unit tests, they should be run each time the app is about to be debugged. But let's say we have hundreds of unit tests that take a while to complete; I'm not sure that's ideal. In that case it might be better to just run them before committing or deploying.
Q2: In my app I'm using the repository pattern with a service layer. I've done some research on how to test a service when the service calls a repository and the repository queries the DB. For it to be a true unit test and not an integration test, I have to find a way to test without touching the database. I found that people use Moq to mock their repository. Here's where I have a problem: it seems to me that if I mock a repository then I'm changing the behaviour of how the method is supposed to work, which makes it seem like a pointless unit test - it doesn't seem like you are actually testing your code. Am I completely wrong about this? Thanks for any advice.
Let me take a shot.
A1: When you refactor existing code, you should execute the corresponding unit tests (not all of them) and see whether anything is broken by your changes. For new functionality you should implement new unit tests in parallel, using TDD. You should never have to execute all the unit tests yourself; rely on continuous integration for that.
A2: I used to share your opinion, but now I am convinced that unit testing the service layer is required. Whatever can be covered by unit testing should be covered. At the moment, the core of your services might just be delegation to repositories, but services evolve: they take on responsibility for parameter validation, authorization, logging, transactions, batch-support APIs, and so on. Then they are not only data access but many more things. If I were in your place, I would unit test the services by mocking the repositories. Sometimes services also provide convenience methods on top of the repository.
Hope it might be of some help to you.
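The question is about Moq, but the idea is the same in any mocking library. A rough Java/Mockito equivalent, with UserService, UserRepository, and User standing in for your own types: you fix the repository's answers, then assert on what the service does with them.

    import static org.mockito.Mockito.*;
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class UserServiceTest {

        @Test
        public void returnsUserFromRepositoryWhenIdIsKnown() {
            // The repository is replaced with a mock, so no database is touched
            UserRepository repository = mock(UserRepository.class);
            when(repository.findById(7L)).thenReturn(new User(7L, "alice"));

            UserService service = new UserService(repository);
            User user = service.getUserById(7L);

            assertEquals("alice", user.getName());
            verify(repository).findById(7L);   // the service delegated to the repository exactly once
        }
    }

Such a test is not pointless: it pins down the service's own behaviour (delegation, validation, error handling) independently of the data access code.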
A1. When making changes to your code, the more often you run the unit tests, the faster you get feedback on whether the behaviour they assert has been affected, so the more often the better! Unit tests should be very fast; running several hundred should take only a couple of minutes at most. It might also be worth looking into Infinitest (if you are working with Java; I expect an alternative exists for .NET etc.). It is an Eclipse plugin that automatically runs your unit tests when Eclipse builds your project, and it is clever enough to run only the tests that have been affected since the last run, e.g. if you update a test, or update some application code that is covered by some unit tests, only those specific tests will be executed.
A2. Unit tests will cover many different scenarios that call your services and DAOs many times. Using "real" services makes it difficult to guarantee the results of each call (and setting up the data for each test can be painful), and it can also be slow. It's usually better to mock those services in unit tests and test them separately with integration tests.

Should I Unit Test Data Access Layer? Is this a good practice and how to do it?

If I have a data access layer (NHibernate), for example a class called UserProvider, and a business logic class UserBl, should I test the methods of both, such as SaveUser or GetUserById, or any other public method in the DA layer that is called from the BL layer? Is this redundant, or is it common practice?
Is it common to unit test the DA layer, or does that belong to the integration test domain?
Is it better to have a test database, or to create database data during the test?
Any help is appreciated.
There's no right answer to this; it really depends. Some people (e.g. Roy Osherove) say you should only test code which has conditional logic (if statements etc.), which may or may not include your DAL. Some people (often those doing TDD) will say you should test everything, including the DAL, and aim for 100% code coverage.
Personally I only test it if it has logic in it, so I end up with some DAL methods tested and some not. Most of the time you just end up checking that your BL calls your DAL, which has some merit but I don't find necessary. I think it makes more sense to have integration tests which cover the app end to end, including the database, and which cover things like GetUserById.
Either way, and you probably know this already, make sure your unit tests don't touch an actual database. (There's nothing wrong with doing that, but it's an integration test rather than a unit test, as it takes a lot longer, involves more complex setup, and should be run separately.)
It is good practice to write unit tests for every layer, even the DAL.
I don't think running tests against the real DB is a good idea; you might ruin important data. We used to set up a copy of the DB for tests, with just enough data in it to run the tests against.
In our test project we had a special web.config file with test settings, such as a ConnectionString pointing to our test DB.
In my experience it was helpful to test each layer on its own, then integrate and test again. An integration test normally does not cover all aspects. If the data access layer (I don't know NHibernate) is generated code, or generic code of some sort, testing it can look like overkill, but I have seen more than once that systematic testing pays off.
Is it redundant? In my opinion it is not.
Is it common practice? Hard to tell. I would say no; I have seen it in some projects but not in all projects I have worked on. It often depended on time, resources, and the mentality of the team or the individual developer.
Is it better to have a test database, or create database data during the test? That is quite a different question and cannot be answered easily; it depends on your project. Creating a fresh one is good, but sometimes it throws up unrealistic bugs (although still bugs). It also depends on whether you are doing product development or proprietary, on-site development. In a proprietary on-site development, a database usually gets migrated in from somewhere, so a second test with the migrated data is definitely needed - but that is rather at the system test level.
Unit testing the DAL is worth it, as mentioned, if there is logic in there. For example, if you use the same stored procedure for insert and update, it's worth knowing that an insert works, that a subsequent call updates the previous record, and that a select returns a single record and not a list. In your case the SaveUser method probably inserts the first time around and updates subsequently, and it's nice to know at the unit test stage that this is what happens.
If you're using a framework like iBatis or Hibernate where you can implement type handlers, it's worth confirming that the handlers handle values in a way that's acceptable to your underlying DB.
As for testing against an actual DB: if you use a framework like Spring, you can take advantage of its database test support with automatic rollback of transactions, so you can run your tests and the DB is unaffected afterwards. See here for information. Others probably offer similar support.
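A minimal sketch of such a rollback test with the Spring TestContext framework (UserDao, User, and the test-context.xml location are placeholders):

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
    import org.springframework.transaction.annotation.Transactional;
    import static org.junit.Assert.assertNotNull;

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration("classpath:test-context.xml")   // test configuration pointing at the test DB
    @Transactional   // each test runs in a transaction that Spring rolls back afterwards
    public class UserDaoIntegrationTest {

        @Autowired
        private UserDao userDao;   // the DAO under test

        @Test
        public void savedUserCanBeLoadedAgain() {
            userDao.save(new User("alice"));
            assertNotNull(userDao.findByName("alice"));
            // no cleanup needed: the transaction is rolled back when the test finishes
        }
    }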

Integration testing - can it be done right?

I have used TDD as a development style on some projects over the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?
What I am currently doing is writing a test case per class (this is my rule of thumb: a "unit" is a class, and each class has one or more test cases). I try to resolve dependencies by using mocks and stubs, and this works really well, as each class can be tested independently. After some coding, all the important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want?
An example: think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records for those ids, and then iterates over the records and writes them as a string to an output file.
To keep it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation.
What I would do to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then look in the filesystem to see whether the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was treated more as an anti-pattern.
IMO (and I have no literature to back me up on this), the key difference between our various forms of testing is scope:
Unit testing is testing isolated pieces of functionality (typically a method or a stateful class).
Integration testing is testing the interaction of two or more dependent pieces (typically a service and a consumer, or even a database connection, or a connection to some other remote service).
System integration testing is testing a system end to end (a special case of integration testing).
If you are familiar with unit testing, it should come as no surprise that there is no such thing as a perfect or 'magic bullet' test. Integration and system integration testing are very much like unit testing, in that each is a suite of tests set up to verify a certain kind of behaviour.
For each test, you set the scope, which then dictates the input and expected output. You then execute the test and compare the actual result to the expected one.
In practice, you may have a good idea of how the system works, so writing typical positive- and negative-path tests will come naturally. However, for any application of sufficient complexity, it is unreasonable to expect total coverage of every possible scenario.
Unfortunately, this means unexpected scenarios will crop up in the Quality Assurance (QA), pre-production (PP), and production (Prod) cycles. At that point, your attempts to replicate those scenarios in dev should make their way into your integration and system integration suites as automated tests.
Hope this helps :)
PS: pet peeve #1: managers or devs calling integration and system integration tests "unit tests" simply because NUnit or MSTest was used to automate them...
What you describe is indeed integration testing (more or less). And no, it is not an anti-pattern, but a necessary part of the software development lifecycle.
Any reasonably complicated program is more than the sum of its parts, so however well you unit test it, you still don't have much of a clue about whether the whole system will work as expected.
There are several reasons why this is so:
unit tests are performed in an isolated environment, so they can't say anything about how the parts of the program work together in real life;
the "unit tester hat" easily limits one's view, so there are whole classes of factors which developers simply don't recognize as something that needs to be tested*;
even if they do, there are things which can't reasonably be tested in unit tests - e.g. how do you test whether your app server survives under high load, or what happens if the DB connection goes down in the middle of a request?
* One example I just read in Luke Hohmann's book Beyond Software Architecture: in an app which applied strong anti-piracy protection by creating and maintaining a "snapshot" of the IDs of the hardware components in the actual machine, the developers had the code very well covered with unit tests. Then QA managed to crash the app within 10 minutes by trying it out on a machine without a network card. As it turned out, since the developers were working on Macs, they took it for granted that the machine had a network card whose MAC address could be incorporated into the snapshot...
What I would do in order to test the "real" application: making the http request (either manually or automated) with some ids from the database and then look in the filesystem if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test-logic?
Maybe you are duplicating code, but you are not duplicating effort. Unit tests and integration tests serve two different purposes, and usually both are desired in the SDLC. If possible, factor code used by both unit and integration tests into a common library. I would also try to have separate projects for your unit and integration tests, because your unit tests should be run separately (fast and with no dependencies), whereas your integration tests will be more brittle and break more often, so you will probably have a different policy for running and maintaining them.
Is this what is called an "integration test"?
Yes indeed it is.
In an integration test, just as in a unit test, you need to validate what happened in the test. In your example you mentioned an OutfileWriter: you need some mechanism to verify that the file and its data are good. You really want to automate this, so you might want to have something like:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
class OutFileValidator {
    // Opens the written file and checks that its lines match the expected data.
    boolean isCorrect(Path file, List<String> expectedLines) throws IOException {
        return Files.readAllLines(file).equals(expectedLines);
    }
}
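The integration test can then simply assert on the result, e.g. assertTrue(new OutFileValidator().isCorrect(outFile, expectedLines)), where outFile and expectedLines come from the test setup.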
You might review "Taming the Beast", a presentation by Markus Clermont and John Thomas about automated testing of AJAX applications.
YouTube Video
A very rough summary of the relevant part: you want to use the smallest testing technique you can for any specific verification. To put the same idea another way, you are trying to minimize the time required to run all of the tests without sacrificing any information.
The larger tests are therefore mostly about making sure that the plumbing is right: is tab A actually in slot A rather than slot B; do both components agree that length is measured in meters rather than feet; and so on.
There will be duplication in which code paths are executed, and you may reuse some of the setup and verification code, but I wouldn't normally expect your integration tests to involve the same level of combinatorial explosion that happens at the unit level.
Driving your TDD with BDD would cover most of this for you. You can use Cucumber/SpecFlow with Watir/WatiN. Each feature has one or more scenarios, and you work on one scenario (behaviour) at a time; when it passes, you move on to the next scenario until the feature is complete.
To complete a scenario, you use TDD to drive the code necessary to make each step in the current scenario pass. The scenarios are agnostic to your back-end implementation, yet they verify that your implementation works; if something in the web app doesn't work for that feature, that behaviour needs to be captured in a scenario.
You can of course also use integration testing, as others have pointed out.