In a J2EE web application, how do people manage resources so that they are visible to both the web context and to unit/integration tests?
I find that often you end up having your source/resource folders configured a certain way during development (i.e., what Maven expects) and so your unit tests will run in your IDE. But once the web app is built and packaged into a WAR file (i.e., when your Continuous Integration server has done a build) your unit tests won't run anymore because the resources are located elsewhere.
Do you end up keeping resources in two different places and manually keeping them in sync?
We tried using unit tests in the container but gave up on it years ago. It's much better (for us at least) to make each unit test cover a single class and nothing else, mocking out the dependencies on other classes (see JMock or its many competitors). A good basic rule is that if it touches the database, network, or the filesystem, it isn't a unit test. (It may be useful for something else, but it isn't a unit test. See these unit testing rules for more on this.)
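For illustration, here is a minimal sketch of this style of test using JMock (any of its competitors would look similar); the PaymentGateway and OrderProcessor types are made-up stand-ins for your own collaborator interface and class under test:

```java
import java.math.BigDecimal;

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;

public class OrderProcessorTest {

    // Hypothetical collaborator interface; in a real project this is the
    // dependency you want to keep out of the unit test.
    public interface PaymentGateway {
        void charge(BigDecimal amount);
    }

    // Hypothetical class under test: its only collaborator is the gateway.
    public static class OrderProcessor {
        private final PaymentGateway gateway;

        public OrderProcessor(PaymentGateway gateway) {
            this.gateway = gateway;
        }

        public void process(BigDecimal orderTotal) {
            gateway.charge(orderTotal);
        }
    }

    private final Mockery context = new Mockery();

    @Test
    public void chargesTheGatewayForTheOrderTotal() {
        final PaymentGateway gateway = context.mock(PaymentGateway.class);
        OrderProcessor processor = new OrderProcessor(gateway);

        // Expect exactly one charge for the order total; no database,
        // network, or filesystem is touched anywhere in this test.
        context.checking(new Expectations() {{
            oneOf(gateway).charge(new BigDecimal("42.00"));
        }});

        processor.process(new BigDecimal("42.00"));
        context.assertIsSatisfied();
    }
}
```

Because everything lives in memory, a test like this runs in milliseconds and has no dependency on how the WAR is packaged or where resources end up.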
Unit tests written this way can be run anywhere, and they are blazingly fast (we have thousands and run them in under 60 seconds on medium-spec hardware).
You may also want to run integration tests that check a subsystem or the whole application. We find that subsystem tests can also use mocking at their borders - for instance, we fake an external pricing feed - and that end-to-end tests work best with tools like Selenium or WebDriver, which let you deploy the whole application on a server and then hit it with a browser just like users do.
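As a rough sketch of the end-to-end style with WebDriver (the URL, element ids, and expected text are assumptions about a hypothetical deployed app, not taken from our real suite):

```java
import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginEndToEndTest {

    // Drives a real browser against the deployed application; requires the
    // matching browser driver binary on the test machine.
    private final WebDriver driver = new FirefoxDriver();

    @Test
    public void userCanLogInThroughTheBrowser() {
        driver.get("http://test-server:8080/myapp/login");
        driver.findElement(By.id("username")).sendKeys("demo");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("loginButton")).click();

        // Assert on something a real user would actually see after logging in.
        assertTrue(driver.getPageSource().contains("Welcome"));
    }

    @After
    public void quitBrowser() {
        driver.quit();
    }
}
```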
(By the way, our method of unit testing makes us mockists, rather than classicists, in Martin Fowler's taxonomy.)
Normally this is the reason for multi-module builds. The external services live in a separate build unit from the web application, so you build, package, and run your integration tests when you build that module.
Another module can contain your domain model and its unit tests, which are also run at build time.
It is quite common for a module that results in a WAR to have no Java code in it at all, only web-related artifacts. Although not necessary, this is often done because code in a WAR module cannot be included in another module.
The last special case is the module containing web tests. This module may often need test-scoped artifacts from the other modules (because it is testing the application from the outside, but may need data from the inside). This can be solved by also packaging test resources in JAR files, creating a separate set of "test" JARs for each module.
Multi-module builds are the norm for Maven projects, and they are also easy to set up with other build systems like Ant.
I wouldn't package test resources or tests in a WAR file, nor run unit tests from the WAR. Why are you trying to do so?
I am assessing the pros and cons of using each approach.
To begin with, I am not sure whether a MockMvc test can be considered a true integration test, since it mocks internal dependencies.
Even if I used an actual instance with real requests for my tests, I would still be mocking my external dependencies, and I am not quite sure that the aim of a true integration/verify test is to test the environment as if it were real.
Besides, putting these controller tests in verify makes my pipeline longer and slower, since a failure will only interrupt the build after an unnecessary package phase and the like.
What do you think is a proper scheme for organizing these tools in a build process?
One of the ideas I have is to use two profiles:
- Profile test would execute all IT tests with mocked external dependencies in the test phase
- Profile integration would execute all IT tests with the real prod config in the verify phase
But the tests themselves would be the same.
From my personal experience: we've been in the same dilemma and ended up using both types of test:
- unit tests managed by the Surefire plugin
- integration tests managed by the Failsafe plugin
Both ran during the build (but at different phases, of course).
Now, regarding the controller tests:
I believe unit tests should be blazing fast: tens or hundreds of them should run within a second or so. They also should not have external dependencies and should run entirely in memory (no sockets, networking, databases, etc.).
These tests should be run by the programmer at any time during development, maybe five times a minute, just to make sure a small refactoring doesn't break something, for example.
On the other hand, controller tests run the whole Spring machinery, which by definition is not that fast. As for external dependencies, depending on the configuration of MockMvc you can even end up running some kind of internal server to serve the requests, so it's far (IMO) from being a unit test.
That's why we decided to run those with the Failsafe plugin and treat them as integration tests.
Of course, Spring configurations, if used properly, can be cached by Spring between tests, but that only makes the integration tests run faster; it doesn't turn them into unit tests.
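To make the split concrete, here is a hedged sketch of what such a controller test might look like, assuming a Spring Boot application; the controller name and endpoint path are invented, and the IT suffix matches the Failsafe plugin's default naming pattern so the class runs in the integration-test/verify phase rather than with Surefire:

```java
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;

// Boots the whole Spring context, which is exactly why we treat this as an
// integration test and let Failsafe (not Surefire) run it.
@SpringBootTest
@AutoConfigureMockMvc
class PriceControllerIT {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void returnsOkForAKnownProduct() throws Exception {
        // The endpoint is hypothetical; external dependencies would be
        // replaced with test doubles in the test configuration.
        mockMvc.perform(get("/api/prices/ABC"))
               .andExpect(status().isOk());
    }
}
```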
The question is more about the fundamental understanding of a normal/ideal CI flow and the scope of integration testing within it.
As per my understanding, the basic CI/CD flow is:
Unit testing --> Integration testing --> Build artifact --> Deploy to Dev/Sandbox or any other subsequent environments.
So unit testing and integration testing collectively determine whether the build is stable and ready to be deployed.
But recently we had a discussion in my team about running integration tests against deployed instances on Dev/Sandbox, etc., to verify that the application is working fine after deployment.
And Microsoft's article on Build-Deploy-Test workflows suggests that this could be a possible approach.
So, my questions are:
Are integration tests supposed to test the configuration of different environments?
Are integration tests supposed to be run before packaging or deploying the application?
Is some kind of automated testing required at all to verify that the deployed application works in every environment?
If not integration tests, then what could the alternative solutions be?
You're mixing Integration testing with System testing.
Integration testing checks that some components can work together (can be integrated). You may have integration tests to verify how the data layer API operates with a database, or how the web API responds to HTTP calls. You might not have the entire system completely working in order to do integration testing of its components.
Unlike integration tests, system tests require all the components to be implemented and configured. That is end-to-end testing (e.g., from a web request to a database record). This kind of testing requires the entire system to be deployed, which makes it more 'real' but also more expensive.
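For example, here is a minimal sketch of an integration test in this narrower sense, assuming JUnit and an in-memory H2 database (both assumptions, not part of the original question): it talks to a real database over JDBC without deploying the rest of the system; in a real project the SQL would live in your own data-layer class.

```java
import static org.junit.Assert.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.Test;

public class CustomerTableIT {

    @Test
    public void storesAndReadsBackACustomerRow() throws Exception {
        // In-memory H2 database: a real JDBC driver and real SQL, but no
        // web tier, no deployment, and nothing left behind after the test.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:testdb");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(50))");
            st.execute("INSERT INTO customer VALUES (1, 'Alice')");

            try (ResultSet rs = st.executeQuery("SELECT name FROM customer WHERE id = 1")) {
                rs.next();
                assertEquals("Alice", rs.getString("name"));
            }
        }
    }
}
```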
I am attempting to set up Java code coverage for a fairly complex app that:
- combines multiple large modules, only one of which I need to check coverage on
- uses a combination of Ant and Maven for builds
- cannot be run except as an installed application on a server, with configuration
- has automated tests to be analyzed for coverage that are not part of the application build and make use of API calls to the application server from a remote client
The examples given in the JaCoCo documentation and in the online sources I have found assume the app under test is not previously installed and that the tests are unit/integration tests run as part of the build. The documentation does not cover the details of how the JaCoCo instrumentation is done or when a call to a particular line of code is recorded. If I use Ant or Maven to instrument a particular module, use that module to build the full app, install it on a server, and configure it, will my remote tests then generate the .exec file?
Any advice on how to achieve the end goal (knowing how much of our code is covered by the tests) is greatly appreciated, including better search terms than "jacoco for installed app" which as you can imagine is ... not very useful. My google-fu is humbled.
Within a short time I'm going to start a project based on Windows Azure, and I was wondering what your experiences are with testing Windows Azure projects in continuous integration (with a TFS build server), eventually using TDD.
Some things I was wondering:
Do you use mocking (in a wrapper class you've written yourself)?
Do you use the storage emulator?
Do you deploy the services to Azure and run the tests from the build server against the cloud? (And what about costs?)
Thanks in advance!
The same good practices that apply to writing unit tests for applications outside of Windows Azure apply here. If what you are actually testing has an external dependency, that dependency should be mocked and injected for your granular unit test.
For example, when I'm using Windows Azure Storage Queues I will have an interface that I use to interact with the queue itself, so in my code consuming the queue service I can mock the subsystem using the interface and use dependency injection to inject the mock. This removes the necessity to actually deal with the emulator during unit tests. For the most part the actual concrete implementation of the code working with the queue is not much more than a very thin wrapper.
I personally don't shoot for 100% test coverage, so I may not have direct unit tests that utilize the concrete implementation of the wrappers. In many cases I try to have integration tests that will exercise these wrappers and exercise multiple aspects of the system working together. In some cases I can run the integration tests in the emulator (for Storage operations for example), but in some cases they simply have to be run with access to the Windows Azure environment (in the case of usage of ACS or Service Bus).
Ideally you'd like to have a set of scripts that can be run to spin up a minimal set of test servers in Azure, deploy your solution, and exercise the integration tests that can't be done on premises; then collect the results and have the script shut everything down (or optionally leave it running if you need that). Run the integration test suite that uses these scripts often enough to detect issues, but you certainly don't need to run it every time you check something in unless you are happy with running the test environment all the time. If you're okay with the cost of a semi-permanent test environment running in Azure, just make sure the scripts do an update deployment rather than a delete and redeploy to cut down on cost a bit (the savings would be relative to how often the deploy occurs).
I believe this question is a very subjective one as you're likely to get several different opinions.
I'm familiar with TDD and use it in both my workplace and my home-brewed web applications. However, every time I have used TDD in a web application, I have had the luxury of having full access to the web server. That means that I can update the server then run my unit tests directly from the server. My question is, if you are using a third party web host, how do you run your unit tests on them?
You could argue that if your app is designed well and your build process is sound and automated, that running unit tests on your production server isn't necessary, but personally I like the peace of mind in knowing that everything is still "green" after a major update.
For everyone who has responded with "just test before you deploy" and "don't you have a staging server?", I understand where you're coming from. I do have a staging server and a CI process set up. My unit tests do run and I make sure they all pass before an update to production.
I realize that in a perfect world I wouldn't be concerned with this. But I've seen it happen before. If a file is left out of the update or a SQL script isn't run, the effects are immediately apparent when running your unit tests but can go unnoticed for quite some time without them.
What I'm asking here is if there is any way, if only to satisfy my own compulsive desires, to run a unit test on a server that I cannot install applications on or remote into (e.g. one which I will only have FTP access to in order to update files)?
I think I would probably have to argue that running unit tests on your production server isn't really part of TDD, because by the time you deploy to your production environment, technically speaking, you're past "development".
I'm quite a stickler for TDD, and when I'm preaching the benefits to clients I often find myself saying "you can't half adopt TDD, it's all or nothing"
What you probably should have is some form of automated testing that you perform "after" deployment, but these tests are not part of TDD.
Maybe you should look at your process again.
You could write functional tests in something like WATIR, WATIN or Selenium that test what is returned in the response page after posting certain form data or requesting specific URLs.
For clarification: what sort of access do you have to your web server? FTP or WebDAV only? From your question, I'm guessing ssh access isn't available - you're dropping files in a directory to deploy. Is that correct?
If so, the answer for unit testing is likely 'do it before you deploy'. You can set up functional testing driven by an automated tool like Selenium to test your app remotely via the web interface, but that's not really unit testing in the sense that you're restricted to testing the system as a whole.
Have you considered setting up a staging server, perhaps as a VMWare instance, that mirrors or at least mimics your deployment environment?
What's preventing you from running unit tests on the server? If you can upload your production code and let it run there, why can't you upload this other code and run it as well?
I've written test tools for sites using Python and httplib/urllib2. Generally it would have been overkill, but it was suitable in these cases. Not sure it's going to be of general use, though.