Start spock test from jar on server without installed groovy - unit-testing

I implemented some Spock tests in Groovy and built a test jar that includes them. Now I want to run it on a server. The server knows nothing about Spock or Groovy. I decompiled the jar to find a valid name for my test function. Here it is:
/* Error */
@org.spockframework.runtime.model.FeatureMetadata(line=65, name="connect via jdbc", ordinal=0, blocks={@org.spockframework.runtime.model.BlockMetadata(kind=org.spockframework.runtime.model.BlockKind.WHEN, texts={}), @org.spockframework.runtime.model.BlockMetadata(kind=org.spockframework.runtime.model.BlockKind.THEN, texts={})}, parameterNames={})
public void $spock_feature_0_0()
{
...
So it looks like the valid name to start the test function is $spock_feature_0_0. I can upload my test jar to the server. How can I start the test function on the server?

Assuming you have a Maven project, there is no need to clean your whole local repo as Georgi Stoyanov said. Just call mvn dependency:tree on your project to see something like this:
...
[INFO] +- de.scrum-master:test-resources:jar:1.2:test
[INFO] | +- junit:junit:jar:4.12:test
[INFO] | +- org.spockframework:spock-core:jar:1.1-groovy-2.4:test
[INFO] | +- org.codehaus.groovy:groovy-all:jar:2.4.7:test
[INFO] | +- cglib:cglib-nodep:jar:3.2.5:test
[INFO] | +- org.objenesis:objenesis:jar:2.5.1:test
...
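For reference, Spock compiles each feature into a synthetic method named $spock_feature_<block>_<ordinal>, as the decompiled output above shows. Here is a small reflection sketch that lists such methods on a spec class; MyDbSpec is a hypothetical stand-in for the real compiled spec, which on the server you would load from the uploaded test jar instead:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class FindFeatureMethods {

    // Hypothetical stand-in for a compiled Spock spec class.
    static class MyDbSpec {
        public void $spock_feature_0_0() { /* "connect via jdbc" */ }
        public void someHelper() { }
    }

    // Collect the generated $spock_feature_* method names of a spec class.
    static List<String> featureMethods(Class<?> spec) {
        List<String> names = new ArrayList<>();
        for (Method m : spec.getDeclaredMethods()) {
            if (m.getName().startsWith("$spock_feature_")) {
                names.add(m.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(featureMethods(MyDbSpec.class)); // [$spock_feature_0_0]
    }
}
```

In practice you would not invoke $spock_feature_0_0 directly: with spock-core, groovy-all, and junit on the classpath (the jars listed in the dependency tree above), a Spock 1.x spec runs as an ordinary JUnit 4 test, e.g. via java -cp ... org.junit.runner.JUnitCore plus the spec's fully qualified class name.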

Related

Cannot create directory while running `sbt IntegrationTest/test` with HBaseTestingUtility.startMiniDFSCluster

When creating a mini HDFS cluster in integration tests with the help of HBaseTestingUtility.startMiniDFSCluster, the tests run fine in IntelliJ IDEA but fail when run via sbt IntegrationTest/test. The error looks like this:
22:00:38.430 [pool-5-thread-4] WARN o.a.h.hdfs.server.namenode.NameNode - Encountered exception during format:
java.io.IOException: Cannot create directory /Users/jay/foobar/target/test-data/afd8c5d6-29a7-2a60-685a-d1c80c73a9c8/cluster_aa70cf12-8c75-2fd1-5602-e49c7026f79e/dfs/name-0-1/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:361)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:571)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:592)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:185)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1211)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:406)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:233)
at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1071)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:987)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:884)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:798)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:667)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:640)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1129)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1104)
...
...
22:00:38.435 [pool-5-thread-4] ERROR o.apache.hadoop.hdfs.MiniDFSCluster - IOE creating namenodes. Permissions dump:
path '/Users/jay/foobar/target/test-data/afd8c5d6-29a7-2a60-685a-d1c80c73a9c8/cluster_aa70cf12-8c75-2fd1-5602-e49c7026f79e/dfs/data':
absolute:/Users/jay/foobar/target/test-data/afd8c5d6-29a7-2a60-685a-d1c80c73a9c8/cluster_aa70cf12-8c75-2fd1-5602-e49c7026f79e/dfs/data
permissions: ----
...
...
[info] FooIntegrationTest:
[info] bar.foo.FooIntegrationTest *** ABORTED ***
[info] java.io.IOException: Cannot create directory /Users/jay/foobar/target/test-data/afd8c5d6-29a7-2a60-685a-d1c80c73a9c8/cluster_aa70cf12-8c75-2fd1-5602-e49c7026f79e/dfs/name-0-1/current
...
Parallel execution of the test suites was causing the issue.
Set execution to serial in build.sbt:
IntegrationTest / parallelExecution := false, // Embedded HBase is having troubles when parallelled
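For context, a minimal build.sbt sketch showing where that setting lives (the project name and layout are assumed):

```scala
// build.sbt sketch (assumed layout): run the IntegrationTest configuration's
// suites one at a time, so several suites don't try to format and start the
// embedded HBase/HDFS mini-cluster concurrently.
lazy val root = (project in file("."))
  .configs(IntegrationTest)
  .settings(
    Defaults.itSettings,
    IntegrationTest / parallelExecution := false
  )
```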

Is the use of 'aggregate' followed by 'dependsOn' redundant with the same modules?

In SBT, is the use of aggregate following dependsOn redundant if they both contain the same sub-modules? According to the documentation it seems so, but I have seen this pattern used before and I don't understand what the benefit is. If a project is defined with dependencies, doesn't that already imply what aggregate does for those same dependencies? I notice that my project build is much slower with this redundant aggregate than without it, and I'd like to know whether I can safely remove it.
lazy val module = sbt.Project(...) dependsOn (foo, bar) aggregate (foo, bar)
OR just...
lazy val module = sbt.Project(...) dependsOn (foo, bar)
I am using SBT 0.13.6
tl;dr: aggregate causes tasks to be executed in the aggregating module and all aggregated ones, while dependsOn sets a CLASSPATH dependency so the libraries are visible to the depending module (depending on the configuration; that's compile, the default, in the example).
A sample to demonstrate the differences.
I'm using the following build.sbt (nothing really interesting):
lazy val a = project
lazy val b = project
lazy val c = project dependsOn b aggregate (a,b)
The build defines three modules a, b, and c, with c aggregating a and b. There is also a fourth, implicit module that aggregates all the modules a, b, and c.
> projects
[info] In file:/Users/jacek/sandbox/aggregate-dependsOn/
[info] a
[info] * aggregate-dependson
[info] b
[info] c
When I execute a task in an aggregating module, the task is also executed in the aggregated modules.
> compile
[info] Updating {file:/Users/jacek/sandbox/aggregate-dependsOn/}b...
[info] Updating {file:/Users/jacek/sandbox/aggregate-dependsOn/}a...
[info] Updating {file:/Users/jacek/sandbox/aggregate-dependsOn/}aggregate-dependson...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Updating {file:/Users/jacek/sandbox/aggregate-dependsOn/}c...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[success] Total time: 0 s, completed Oct 22, 2014 9:33:20 AM
The same happens when I execute a task in c that will in turn execute it against a and b, but not in the top-level project.
> show c/clean
[info] a/*:clean
[info] ()
[info] b/*:clean
[info] ()
[info] c/*:clean
[info] ()
[success] Total time: 0 s, completed Oct 22, 2014 9:34:26 AM
When a task is executed in a or b, it runs only within that project.
> show a/clean
[info] ()
[success] Total time: 0 s, completed Oct 22, 2014 9:34:43 AM
Whether or not a task is executed in aggregating projects is controlled by the aggregate key, scoped to a project and/or a task.
> show aggregate
[info] a/*:aggregate
[info] true
[info] b/*:aggregate
[info] true
[info] c/*:aggregate
[info] true
[info] aggregate-dependson/*:aggregate
[info] true
Change it as described in Aggregation:
In the project doing the aggregating, the root project in this case, you can control aggregation per-task. (...) aggregate in update is the aggregate key scoped to the update task.
Below I change the key for the c module and the clean task, so clean is no longer executed in the aggregated modules a and b:
> set aggregate in (c, clean) := false
[info] Defining c/*:clean::aggregate
[info] The new value will be used by no settings or tasks.
[info] Reapplying settings...
[info] Set current project to aggregate-dependson (in build file:/Users/jacek/sandbox/aggregate-dependsOn/)
> show c/clean
[info] ()
[success] Total time: 0 s, completed Oct 22, 2014 9:39:13 AM
The other tasks in c are unaffected, and executing them in c still runs them in the aggregated modules:
> show c/libraryDependencies
[info] a/*:libraryDependencies
[info] List(org.scala-lang:scala-library:2.10.4)
[info] b/*:libraryDependencies
[info] List(org.scala-lang:scala-library:2.10.4)
[info] c/*:libraryDependencies
[info] List(org.scala-lang:scala-library:2.10.4)
While aggregate sets a dependency between sbt tasks so they are also executed in the aggregated modules, dependsOn sets a CLASSPATH dependency, i.e. code in the depended-on module is visible in the depending one.
Let's assume b has a main object as follows:
object Hello extends App {
  println("Hello from B")
}
Save the Hello object to b/hello.scala, i.e. under the b module.
Since c was defined to depend on b (see build.sbt above), the Hello object is visible in b (because it belongs to that module), but also in c.
> b/run
[info] Running Hello
Hello from B
[success] Total time: 0 s, completed Oct 22, 2014 9:46:44 AM
> c/runMain Hello
[info] Running Hello
Hello from B
[success] Total time: 0 s, completed Oct 22, 2014 9:46:58 AM
(I had to use runMain in c, as run alone couldn't see the class; I can't explain that.)
Trying to run it in a ends with java.lang.ClassNotFoundException: Hello, since the class is not visible in that module.
> a/runMain Hello
[info] Updating {file:/Users/jacek/sandbox/aggregate-dependsOn/}a...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Running Hello
[error] (run-main-6) java.lang.ClassNotFoundException: Hello
java.lang.ClassNotFoundException: Hello
at java.lang.ClassLoader.findClass(ClassLoader.java:530)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
[trace] Stack trace suppressed: run last a/compile:runMain for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last a/compile:runMain for the full output.
[error] (a/compile:runMain) Nonzero exit code: 1
[error] Total time: 0 s, completed Oct 22, 2014 9:48:15 AM
Redefine a to depend on b in build.sbt and the exception vanishes.
You should read Multi-project builds in the official documentation.

Java binding for LMDB, need help to refresh LMDB JNI

We are using LMDB within a Java application.
The available Java bindings are a year old, and I would like to refresh LMDB JNI:
https://github.com/chirino/lmdbjni
However, the project owner did not provide any instructions on how to build the project.
So I cannot just clone his git repository, drop in the .c and .h files of the new LMDB version ( https://git.gitorious.org/mdb/mdb.git ), and rebuild.
It seems that underneath, LMDB JNI is using HawtJNI, but that's as far as I had gotten.
These are the steps I tried:
a) git clone https://github.com/chirino/lmdbjni.git
b) cd lmdbjni; mvn install
It finishes successfully; however, the resulting JAR does not contain the compiled native lmdb library.
So my test program fails with:
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:293)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no lmdbjni64-99-vspmaster-SNAPSHOT in java.library.path, no lmdbjni-99-vspmaster-SNAPSHOT in java.library.path, no lmdbjni in java.library.path]
at org.fusesource.hawtjni.runtime.Library.doLoad(Library.java:182)
at org.fusesource.hawtjni.runtime.Library.load(Library.java:140)
at org.fusesource.lmdbjni.JNI.<clinit>(JNI.java:41)
at org.fusesource.lmdbjni.Env.create(Env.java:42)
at org.fusesource.lmdbjni.Env.<init>(Env.java:36)
at com.db.locrefdcache.App.main(App.java:27)
... 6 more
c) Then I figured that I may not be able to just run mvn install for lmdbjni; instead I need to explicitly build its 64-bit Linux subsystem.
So I did:
cd lmdbjni/lmdbjni-linux64
mvn install
There I can see that it is trying to run the configure script (generated by autotools), but I get:
...
[INFO] checking lmdb.h usability... no
[INFO] checking lmdb.h presence... no
[INFO] checking for lmdb.h... no
[INFO] configure: error: cannot find headers for lmdb
[INFO] rc: 1
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
So what I do not quite understand is whether the lmdb files (lmdb.h, mdb.c, midl.h) need to be explicitly dropped somewhere, or whether HawtJNI actually needs to be run on them first to create some sort of 'intermediate' .c and .h files that are later dropped into this build environment.
Update with the compile error I get when using the deephacks LMDBJNI fork
Reason for recompiling: deephacks' LMDBJNI project has published a Maven artifact of LMDBJNI with the latest LMDB, but it was compiled with Java 8 (which we do not use yet), so I need to recompile it with Java 7.
I modified pom.xml and changed the source level from 1.8 to 1.7.
Then mvn install -P linux64 produces an error:
...
[INFO] [hawtjni:build {execution: default}]
[INFO] Extracting /home/dev01/.m2/repository/org/deephacks/lmdbjni/lmdbjni/0.1.3-SNAPSHOT/lmdbjni-0.1.3-SNAPSHOT-native-src.zip to /home/dev01/devel/3dp/lmdbjni/lmdbjni-linux64/target/native-build-extracted
[INFO] executing: /bin/sh -c make install
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] build failed: org.apache.maven.plugin.MojoExecutionException: Make based build did not generate: /home/dev01/devel/3dp/lmdbjni/lmdbjni-linux64/target/native-build/target/lib/liblmdbjni.so
You might have better luck with this fork, which is actively maintained:
https://github.com/deephacks/lmdbjni
They've also provided LMDB itself in Maven; you can see how that was set up here:
https://github.com/deephacks/lmdb
It builds fine on my machine with Java 7. Did you provide the correct profile when building the packages? For Linux you must use: mvn install -P linux64

Grails 2.3 unittesting with IVY resolver

If I run create-app with Grails 2.3, create a simple Spock unit test, and change the Grails configuration to use the Ivy resolver:
grails.project.dependency.resolver = "ivy" // or maven
The unit test crashes with the following error:
| Running without daemon...
| Running 1 unit test...
| Running 1 unit test... 1 of 1
| Error Error running unit tests: org/hamcrest/SelfDescribing (NOTE: Stack trace has been filtered. Use --verbose to see entire trace.)
java.lang.NoClassDefFoundError: org/hamcrest/SelfDescribing
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at org.junit.runner.JUnitCore.run(JUnitCore.java:138)
at org.junit.runner.JUnitCore.run(JUnitCore.java:117)
at org.springsource.loaded.ri.ReflectiveInterceptor.jlrMethodInvoke(ReflectiveInterceptor.java:1259)
at org.springsource.loaded.ri.ReflectiveInterceptor.jlrMethodInvoke(ReflectiveInterceptor.java:1259)
at org.springsource.loaded.ri.ReflectiveInterceptor.jlrMethodInvoke(ReflectiveInterceptor.java:1259)
Caused by: java.lang.ClassNotFoundException: org.hamcrest.SelfDescribing
... 7 more
| Error Error running unit tests: org/hamcrest/SelfDescribing
| Running 1 unit test....
| Running 1 unit test.....
| Tests FAILED - view reports in C:\ivytry\foobar\target\test-reports
Any ideas how to get around this? The reason we need to use Ivy is that Maven doesn't seem to support custom remote repositories where I need to specify a username/password - besides in BuildConfig, but I don't want my credentials under source control :)
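As an aside, the credentials don't have to be committed along with BuildConfig.groovy; they can be pulled in from the environment at build time. A sketch, assuming the Grails 2 mavenRepo/auth syntax and a hypothetical repository URL and variable names:

```groovy
// BuildConfig.groovy sketch - repo URL and env variable names are hypothetical
repositories {
    mavenRepo(url: "https://repo.example.com/releases") {
        auth([
            username: System.getenv("REPO_USER"),
            password: System.getenv("REPO_PASS")
        ])
    }
}
```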
EDIT (Solved): See the comments!
The issue was caused by the "infamous" IntelliJ fix for IDEA 12 and Grails 2.3 - restoring the "sources" and "javadoc" jar files fixes the issue!

Confusion about using OpenEJB in embedded mode for unit-testing

I started exploring the possibilities of using OpenEJB in embedded mode for unit testing my EJB3 components. At first I got errors like the output below:
Testsuite: HelloBeanTest
Tests run: 4, Failures: 0, Errors: 4, Time elapsed: 1,779 sec
------------- Standard Output ---------------
Apache OpenEJB 3.1.4 build: 20101112-03:32
http://openejb.apache.org/
------------- ---------------- ---------------
------------- Standard Error -----------------
log4j:WARN No appenders could be found for logger
(org.apache.openejb.resource.activemq.ActiveMQResourceAdapter).
log4j:WARN Please initialize the log4j system properly.
------------- ---------------- ---------------
Testcase: sum took 1,758 sec
Caused an ERROR
Name "HelloBeanLocal" not found.
javax.naming.NameNotFoundException: Name "HelloBeanLocal" not found.
at org.apache.openejb.core.ivm.naming.IvmContext.federate(IvmContext.java:193)
at org.apache.openejb.core.ivm.naming.IvmContext.lookup(IvmContext.java:150)
at
org.apache.openejb.core.ivm.naming.ContextWrapper.lookup(ContextWrapper.java:115)
at javax.naming.InitialContext.lookup(InitialContext.java:392)
at HelloBeanTest.bootContainer(Unknown Source)
# ... output is the same for all the rest of the tests
The openejb.home property is set as a system property and points to my OpenEJB installation dir.
HelloBeanTest#bootContainer() is a setUp method, and it fails on the JNDI lookup shown below:
@Before
public void bootContainer() throws Exception {
    Properties props = new Properties();
    props.put(Context.INITIAL_CONTEXT_FACTORY,
        "org.apache.openejb.client.LocalInitialContextFactory");
    Context context = new InitialContext(props);
    hello = (Hello) context.lookup("HelloBeanLocal");
}
After struggling with problems like this, I started trying out OpenEJB in non-embedded mode: I started the container from its installation directory and deployed the components as an ejb.jar. Deployment was successful, and I started creating a stand-alone Java client. The stand-alone client is still unfinished, but meanwhile I came back to testing in embedded mode.
To my surprise, the tests suddenly started to pass. I added some more functionality to the component and tests for those. Everything worked just fine. Below is the output for that run.
Testsuite: HelloBeanTest
Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 2,281 sec
------------- Standard Output ---------------
Apache OpenEJB 3.1.4 build: 20101112-03:32
http://openejb.apache.org/
------------- ---------------- ---------------
------------- Standard Error -----------------
log4j:WARN No appenders could be found for logger
(org.apache.openejb.resource.activemq.ActiveMQResourceAdapter).
log4j:WARN Please initialize the log4j system properly.
------------- ---------------- ---------------
Testcase: sum took 2,263 sec
Testcase: hello took 0,001 sec
Testcase: sum2 took 0 sec
Testcase: avg took 0,001 sec
I was happily coding and testing until it broke again. Removing the ejb.jar from the apps/ directory seems to have caused it. So it appears that OpenEJB still does the JNDI lookup from the installation dir, but uses the current dir to find the actual implementations when running in embedded mode. I drew this conclusion because the ejb.jar deployed in the apps/ dir does not have all the methods that the local version has (I double-checked with javap); only the class signature is the same.
After this very long introduction, it's question time.
Can anyone provide any explanation for this behaviour?
Packaging and deploying the EJBs in the apps/ dir before testing is a simple task, but can I be sure that even then I am testing the correct implementation?
Does this all have something to do with the openejb.home property pointing at the OpenEJB installation dir?
To summarize, the OpenEJB version is Apache OpenEJB 3.1.4 build 20101112-03:32, which is visible in the log output as well.
Thanks in advance.
It does have something to do with setting openejb.home to point to the installation dir.
There's a conf/openejb.xml file that likely lists apps/ as the place where deployments live. All the log output went to the logs/ dir rather than to the System.out of the test case, where you could read it easily.
To use OpenEJB embedded you don't need any config files, directories, or ports - just include the libs in your project's classpath.
The first thing I'd suggest is to check out openejb-examples-3.1.4.zip. There are roughly two dozen example projects, all set up with both Ant and Maven build scripts, and all of them will work in any environment as long as the OpenEJB libraries are on the classpath. There's a video of using one of the examples to unit test in Eclipse. I recommend the simple-stateless example as the best starting point.