I am using Arquillian to test a Java EE application against GlassFish. So far I am facing a performance problem: each test case takes more than a minute to complete, so having 60 test cases means an hour to run, and hence the build takes more than an hour.
I understand that a test case might take this long because it starts a GlassFish container and creates and deploys a WAR file.
Is there a way to group the test cases under each project, add all of the classes, create a single deployment archive, and run multiple tests in a single deployment as if they were a single test class?
Arquillian does not support suites by itself.
But I wrote an extension that makes suite testing possible:
https://github.com/ingwarsw/arquillian-suite-extension
There you should find documentation with examples.
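For illustration, a minimal sketch of how the extension is typically wired up, assuming the @ArquillianSuiteDeployment annotation and package name from the project's README (check the repository for the current API and Maven coordinates):

import org.eu.ingwar.tools.arquillian.extension.suite.annotations.ArquillianSuiteDeployment;
import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;

// The extension looks for a class annotated like this and reuses its
// deployment for every Arquillian test class in the module, so the WAR
// is built and deployed only once per run.
@ArquillianSuiteDeployment
public class Deployments {

    @Deployment
    public static WebArchive deploy() {
        return ShrinkWrap.create(WebArchive.class, "suite-test.war")
                .addPackages(true, "com.example.myapp"); // hypothetical root package
    }
}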
Are you using an embedded GlassFish instance? Running against a remote instance should be faster.
Use a test suite (@Suite) and set up your Arquillian container in a @BeforeClass annotated method;
see http://www.mkyong.com/unittest/junit-4-tutorial-5-suite-test/
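For reference, a JUnit 4 suite is just an annotated holder class (the test class names here are hypothetical):

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Runs all listed test classes together; shared, expensive setup can go
// in a @BeforeClass method here or in a common base class (see the edit below).
@RunWith(Suite.class)
@Suite.SuiteClasses({
        UserRepositoryTest.class,   // hypothetical test classes
        OrderRepositoryTest.class
})
public class RepositoryTestSuite {
}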
Edit:
What if all your test classes extended an AbstractTestClass that declares the @BeforeClass annotated method, which initializes the container only if it hasn't already been done?
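A minimal sketch of that idea, using the Java EE 6 embeddable EJB container and a static guard so the container is only booted once per JVM (names are illustrative):

import javax.ejb.embeddable.EJBContainer;

import org.junit.BeforeClass;

public abstract class AbstractTestClass {

    protected static EJBContainer container;

    @BeforeClass
    public static void startContainerOnce() {
        // @BeforeClass runs once per subclass, but the guard ensures the
        // container is only started the first time; later test classes
        // reuse the same instance.
        if (container == null) {
            container = EJBContainer.createEJBContainer();
        }
    }
}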
When testing my Apache Spark application, I want to do some integration tests. For that reason I create a local Spark application (with Hive support enabled) in which the tests are executed.
How can I ensure that the Derby metastore is cleared after each test, so that the next test has a clean environment again?
What I don't want to do is restarting the spark application after each test.
Are there any best practices to achieve what I want?
I think that introducing application-level logic just for integration testing somewhat breaks the concept of integration testing.
From my point of view, the correct approach is to restart the application for each test.
Anyway, I believe another option is to start/stop the SparkContext for each test. It should clean up any relevant state.
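A minimal sketch of that option with JUnit, assuming Spark 2.x with the spark-hive test dependency on the classpath; note that stopping the context does not delete the on-disk Derby files (metastore_db, spark-warehouse), which may still need cleaning between runs:

import org.apache.spark.sql.SparkSession;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class HiveIntegrationTest {

    private SparkSession spark;

    @Before
    public void startSpark() {
        // A fresh session (and SparkContext) for every test.
        spark = SparkSession.builder()
                .master("local[2]")
                .appName("integration-test")
                .enableHiveSupport()
                .getOrCreate();
    }

    @After
    public void stopSpark() {
        // Stops the underlying SparkContext so the next test starts clean.
        spark.stop();
    }

    @Test
    public void createsAndDropsATable() {
        spark.sql("CREATE TABLE IF NOT EXISTS t (id INT)");
        spark.sql("DROP TABLE t");
    }
}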
UPDATE - answer to comments
Maybe it's possible to do a cleanup by deleting tables/files?
I would ask more general question - what do you want to test with your test?
Software development defines unit testing and integration testing, and nothing in between. If you want to do something that is neither an integration test nor a unit test, you're doing something wrong. Specifically, with your test you are trying to test something that is already tested.
For the difference and general idea of unit and integration tests you can read here.
I suggest you to rethink your testing and depending on what you want to test do either integration or unit test. For example:
To test application logic - unit test
To test that your application works in its environment - integration test. But here you shouldn't test WHAT is stored in Hive, only that the storage actually happened, because WHAT is stored should already be covered by unit tests.
So. The conclusion:
I believe you need integration tests to achieve your goals, and the best way to do that is to restart your application for each integration test, because:
In real life your application will be started and stopped
In addition to your Spark stuff, you need to make sure that all the objects in your code are correctly deleted/reused. Singletons, persistent objects, configurations - they can all interfere with your tests
Finally, the code that exists only to support integration tests - where is the guarantee that it will not break production logic at some point?
Within a short time I'm going to start a project based on Windows Azure, and I was wondering what the experiences are with testing Windows Azure projects in continuous integration (with a TFS build server), eventually using TDD.
Some things I was wondering:
Do you use mocking (with your own hand-written wrapper classes)?
Do you use the storage emulator?
Do you deploy the services to Azure and run the tests from the build server against the cloud? (What about costs?)
Thanks in advance!
The same good practices for writing unit tests apply as for applications outside of Windows Azure. If the code you are actually testing has an external dependency, that dependency should be mocked and injected for your granular unit test.
For example, when I'm using Windows Azure Storage Queues I will have an interface that I use to interact with the queue itself, so in my code consuming the queue service I can mock the subsystem using the interface and use dependency injection to inject the mock. This removes the necessity to actually deal with the emulator during unit tests. For the most part the actual concrete implementation of the code working with the queue is not much more than a very thin wrapper.
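As an illustration of that pattern (all names here are hypothetical, not the actual Azure SDK), a thin interface plus constructor injection lets the unit test run without any emulator, using Mockito for the mock:

import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class OrderProcessorTest {

    // Thin abstraction over the queue; the concrete implementation would
    // wrap the Windows Azure Storage Queue client.
    public interface MessageQueue {
        void enqueue(String message);
        String dequeue(); // null when the queue is empty
    }

    // The code under test depends only on the interface.
    public static class OrderProcessor {
        private final MessageQueue queue;

        public OrderProcessor(MessageQueue queue) {
            this.queue = queue;
        }

        public boolean processNext() {
            String message = queue.dequeue();
            if (message == null) {
                return false;
            }
            // ... domain logic for the message would go here ...
            return true;
        }
    }

    @Test
    public void processesAQueuedMessage() {
        MessageQueue queue = mock(MessageQueue.class);
        when(queue.dequeue()).thenReturn("order-42");

        assertTrue(new OrderProcessor(queue).processNext());
        verify(queue).dequeue();
    }
}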
I personally don't shoot for 100% test coverage, so I may not have direct unit tests that exercise the concrete implementations of the wrappers. In many cases I try to have integration tests that exercise these wrappers and multiple aspects of the system working together. In some cases I can run the integration tests in the emulator (for Storage operations, for example), but in others they simply have to be run with access to the Windows Azure environment (when using ACS or the Service Bus).
Ideally you'd like to have a set of scripts that can be run to spin up a minimum set of test servers in Azure, deploy your solution, and exercise the integration tests that can't be done on premises. Then get the results and have the script shut everything down (or optionally leave it running if you need that). Run the integration test suite that uses these scripts often enough to detect issues, but you certainly don't need to run it every time you check something in, unless you are happy with running the test environment all the time. If you're okay with the cost of a semi-permanent test environment running in Azure, then just make sure the scripts do an update deployment rather than a delete-and-redeploy to cut down on cost a bit (the savings are relative to how often you deploy).
I believe this question is a very subjective one as you're likely to get several different opinions.
I have a web application that runs in GlassFish v3. It's built with JSF 2 and JPA (so there's a persistence.xml in which a JTA data source is declared).
If I try to test my repositories with JUnit, the lookup fails and gives me this error:
javax.naming.NamingException: Lookup failed for 'java:comp/env/persistence/em' in SerialContext[myEnv=
java.naming.factory.initial=com.sun.enterprise.naming.impl.SerialInitContextFactory,
java.naming.factory.url.pkgs=com.sun.enterprise.naming,
java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl}
[Root exception is javax.naming.NamingException: Invocation exception: Got null ComponentInvocation ]
It seems to ask for transaction-type="RESOURCE_LOCAL", which I can't provide, since it would conflict with GlassFish's transaction-type="JTA".
So, what I'd like to ask is whether it's possible to run JUnit without [strongly] changing my webapp's configuration.
Thanks,
AN
For real in-container tests you should have a look at Arquillian. It allows you to run your unit tests within the container.
You should have a look at the documentation at http://arquillian.org/guides/ and the showcases on GitHub at https://github.com/arquillian/arquillian-showcase/. There is also a JSF-related showcase.
Regarding your configuration: I would strongly suggest configuring your project in such a way that you can use a different configuration than in production.
If you need only a working JPA environment for your tests, then you should do the following:
Create a second JPA configuration with transaction-type="RESOURCE_LOCAL".
Add a setter for the entity manager to your beans.
Create the entity manager within your test setup as you would in a standalone Java application.
Inject the entity manager manually into the beans.
Try to use a mocking framework like Mockito to mock all other parts of the application which are not part of the current test but are required for it.
The second approach depends on your architecture and the possibilities it offers you, and it allows you to write very fine-grained unit tests. The first approach is very useful for testing the real behaviour of your application in the container. A sketch of the second approach follows below.
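A minimal sketch of the second approach, assuming a separate test persistence unit (here called "testPU") declared with transaction-type="RESOURCE_LOCAL" and a hypothetical UserRepository bean that exposes a setter for its entity manager:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class UserRepositoryTest {

    private EntityManagerFactory emf;
    private EntityManager em;
    private UserRepository repository; // hypothetical bean under test

    @Before
    public void setUp() {
        // "testPU" is a RESOURCE_LOCAL unit defined in a test persistence.xml,
        // typically pointing at an in-memory database.
        emf = Persistence.createEntityManagerFactory("testPU");
        em = emf.createEntityManager();

        repository = new UserRepository();
        repository.setEntityManager(em); // manual injection via the setter
    }

    @After
    public void tearDown() {
        em.close();
        emf.close();
    }

    @Test
    public void savesAUser() {
        em.getTransaction().begin();
        repository.save(new User("alice")); // hypothetical entity and method
        em.getTransaction().commit();
    }
}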
Java EE is a new world for me; my experience is with embedded systems. But I started a new job and I would like to know if there is a test process to follow for web applications based on Java EE. Which test strategy is usually adopted in this field?
Basic Unit test
Functional test
Integration test
System test, stress test, load test,....
....
and what is the scope of each test phase for web development? As server code and client code are both involved, I don't know which is the best approach in this field. Also, several machines are involved: DB, business tier, presentation tier, load balancers, authentication with CAS, Active Directory,...
Which is the best test environment for each phase? When using the production CAS authentication, ...
Links, books, simple explanations or any other kind of pointers are much appreciated.
The best test framework for unit tests, in my opinion, is JUnit:
http://www.junit.org/
For mocking objects, which you will need a lot (to mock the database, services, and other objects in a J2EE environment so you can test in isolation), use http://www.jmock.org/, http://code.google.com/p/mockito/, or http://www.easymock.org/.
For acceptance and functional testing there is Selenium (http://seleniumhq.org/); this framework enables you to automate your tests.
I advise you to read these books about testing in general and testing in a J2EE environment in particular:
http://www.manning.com/rainsberger/
http://www.amazon.com/Test-Driven-Development-By-Example/dp/0321146530
http://manning.com/massol/
http://manning.com/koskela/
First, whatever you plan to do for testing, take care of your build process (a good starting point is Maven as the build tool).
JUnit (or TestNG) is good for almost everything (due to its simplicity).
Unit test:
For mocks, I would prefer Mockito to JMock or EasyMock.
Acceptance test:
Regarding UI testing, Selenium is fine for web applications (have a look at the PageObject pattern if you plan to do a lot of UI testing; a sketch follows below).
For other interface testing (such as web services), soapUI is a nice starting point.
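A small sketch of the PageObject pattern with Selenium WebDriver; the page structure and element IDs are made up for illustration:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Encapsulates one page, so tests call an intention-revealing API instead
// of raw locators; when the markup changes, only this class needs touching.
public class LoginPage {

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public LoginPage open(String baseUrl) {
        driver.get(baseUrl + "/login"); // hypothetical URL
        return this;
    }

    public void loginAs(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("submit")).click();
    }
}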
Integration testing:
You will face the middleware problem, mainly solved in Java by a container. Now it becomes fun :) If you run in a "real" Java EE container, it depends on whether you are prior to Java EE 6 or not, since from Java EE 6 on you have an embedded container (which really eases testing). Otherwise, go for a dependency injection framework (Spring, Guice, ...).
Other hints for integration or acceptance testing:
you may need to mock some interfaces (have a look at Moco to mock external services based on HTTP).
also think about an embedded servlet container (Jetty) to ease the web testing.
configuration and provisioning can be a problem too, e.g. for the DB you can automate this with Flyway or Liquibase
for DB testing you have two approaches: resetting data after each test (see DBUnit) or in-transaction testing (see Spring Test for an example; a sketch follows after this list)
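A sketch of the in-transaction style with Spring Test: each test method runs inside a transaction that is rolled back afterwards, so the database is left untouched (the context file and DAO are hypothetical):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml") // hypothetical config
@Transactional // each test runs in a transaction, rolled back by default
public class CustomerDaoIT {

    @Autowired
    private CustomerDao customerDao; // hypothetical DAO under test

    @Test
    public void insertIsVisibleInsideTheTransaction() {
        customerDao.save(new Customer("alice"));
        // assertions go here; the insert is rolled back after the test
    }
}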
I have developed a JSP/servlet based web application and I would like to perform some functional testing on it. I know that a functional test is meant to make sure that the application performs the actions it is supposed to perform.
I have googled and found out that Selenium can be used for automated functional testing. I saw that I can record my actions, which can then be replayed to me.
Now, since I am new to testing applications, I don't understand how replaying the actions is useful in testing.
I have not performed any unit tests on my application, I mean formally using JUnit and the like, although I used to just run parts of my code to check if it was working properly. Is that a bad thing, not using formal unit testing frameworks?
Replaying is only useful to verify whether the test does everything the tester intended. The key point is that Selenium can export the test case you see replaying to a full-fledged test case class for, among others, JUnit. This class can then be added to the group of other test cases you have for the webapp and executed after an automatic build as part of continuous integration.
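What such an exported test case boils down to is roughly this kind of WebDriver-based JUnit class (URL and locators are placeholders):

import static org.junit.Assert.assertEquals;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginFlowTest {

    private WebDriver driver;

    @Before
    public void setUp() {
        driver = new FirefoxDriver();
    }

    @Test
    public void userCanLogIn() {
        // The recorded clicks and keystrokes become explicit driver calls.
        driver.get("http://localhost:8080/myapp/login"); // placeholder URL
        driver.findElement(By.id("username")).sendKeys("demo");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("submit")).click();

        assertEquals("Welcome", driver.getTitle()); // placeholder assertion
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}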
For basic functional testing, the Selenium IDE, in addition to record/playback capabilities, provides assertions and verifications for elements in your web app. Establishing these strategically (around perceived problem areas) will enable you to regression-test your application, ensuring newer implementations do not break existing functionality.