PHPUnit restarting tests randomly

I am trying to test my Symfony2 application using PHPUnit. I have one project where everything works as expected, but on my other project PHPUnit behaves strangely: it either stops executing the test suite randomly near the end and restarts, or it restarts the tests after finishing the suite and writing the code coverage. Other times it runs normally.
Here is some output to illustrate what is happening (the test suite restarts over and over):
PHPUnit 3.6.10 by Sebastian Bergmann.
Configuration read from C:\workspace\cllctr\app\phpunit.xml
................................................................. 65 / 83 ( 78%)
...........PHPUnit 3.6.10 by Sebastian Bergmann.
Configuration read from C:\workspace\cllctr\app\phpunit.xml
................................................................. 65 / 83 ( 78%)
...PHPUnit 3.6.10 by Sebastian Bergmann.
Configuration read from C:\workspace\cllctr\app\phpunit.xml
................................................................. 65 / 83 ( 78%)
............PHPUnit 3.6.10 by Sebastian Bergmann.
Configuration read from C:\workspace\cllctr\app\phpunit.xml
................................................................. 65 / 83 ( 78%)
............PHPUnit 3.6.10 by Sebastian Bergmann.
Configuration read from C:\workspace\cllctr\app\phpunit.xml
................................................................. 65 / 83 ( 78%)
..................
Time: 01:03, Memory: 43.00Mb
OK (83 tests, 145 assertions)
Writing code coverage data to XML file, this may take a moment.
Generating code coverage report, this may take a moment.
Here is an example of the test suite restarting after executing all tests:
PHPUnit 3.6.10 by Sebastian Bergmann.
Configuration read from C:\workspace\cllctr\app\phpunit.xml
................................................................. 65 / 83 ( 78%)
..................
Time: 01:29, Memory: 53.25Mb
OK (83 tests, 145 assertions)
Writing code coverage data to XML file, this may take a moment.
Generating code coverage report, this may take a moment.
PHPUnit 3.6.10 by Sebastian Bergmann.
Configuration read from C:\workspace\cllctr\app\phpunit.xml
................................................................. 65 / 83 ( 78%)
............PHPUnit 3.6.10 by Sebastian Bergmann.
As my other project runs without any problems, there must be some problem within my code. But I cannot figure out what triggers this behaviour! The logs don't show anything unexpected or strange.
EDIT
Yesterday, I noticed something strange: I decided to switch from MongoDB to MySQL for some unrelated reasons. After the transition was done, all tests ran without any problems. I tried it many times and I'm no longer able to reproduce the issue. As this only happened with my functional tests, I tend to think that the problem was my WebTestCase class, which runs some commands to clear and rebuild the database. Maybe someone who also uses MongoDB can reproduce this behaviour?

I'd suggest checking the database server's connection limits and pools.
For example, if you have a maximum of 100 connections and some of the tests leave connections open ("leaks"), you'd hit that limit.
That would also explain why it sometimes works and sometimes hits the limit: your database server may be handling other tasks at the same time, so sometimes you hit the ceiling, and other times, when nothing else is running, your tests complete successfully.
Check for persistent network connections and other external resources.
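If the MongoDB suspicion from the edit above is right, one thing to rule out is functional tests leaking connections across kernel reboots. Below is a minimal sketch, assuming Symfony2 with the Doctrine MongoDB ODM bundle; the service id "doctrine_mongodb" is the bundle's usual registry alias, but verify it against your own container. On the server side, db.serverStatus().connections in the mongo shell shows how many connections are in use versus available.
<?php
use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

// Hypothetical base class: close every ODM connection after each
// functional test so repeated kernel boots don't leak sockets and
// eventually exhaust the server's connection limit.
abstract class DatabaseAwareTestCase extends WebTestCase
{
    protected function tearDown()
    {
        if (null !== static::$kernel && null !== static::$kernel->getContainer()) {
            $registry = static::$kernel->getContainer()->get('doctrine_mongodb');
            foreach ($registry->getConnections() as $connection) {
                $connection->close(); // release the socket back to the server
            }
        }
        parent::tearDown();
    }
}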

Related

Module test path not found

I'm trying to configure a GitHub Action. My workflow contains a job that runs the unit tests and collects code coverage. In the log I see:
Test Run Successful.
Total tests: 336
Passed: 336
Total time: 14.0930 Seconds
Calculating coverage result...
Generating report 'TestResults/coverage.netcoreapp2.1.info'
Nonetheless, after these lines the log contains an error message:
/home/runner/.nuget/packages/coverlet.msbuild/2.9.0/build/coverlet.msbuild.targets(31,5): error : Module test path not found [/home/runner/work/ObservableComputations/ObservableComputations/src/ObservableComputations.Test/ObservableComputations.Test.csproj]
The job fails.
I tried to run
dotnet test --no-build --filter Name~Casting --verbosity normal /p:CollectCoverage=true /p:CoverletOutput=TestResults/ /p:CoverletOutputFormat=lcov
on my local machine (MS Windows) and didn't get this error.
Any help is greatly appreciated.
The reason was the --no-build parameter for dotnet test. It seems coverage collection requires dotnet test to perform the build itself.
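For reference, this would be the same invocation with the flag dropped, so that dotnet test performs its own build (filter and coverlet parameters taken unchanged from the question):
dotnet test --filter Name~Casting --verbosity normal /p:CollectCoverage=true /p:CoverletOutput=TestResults/ /p:CoverletOutputFormat=lcov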

TYPO3 extensions with good tests

I am looking for TYPO3 extensions with good test examples: unit tests, functional tests, etc. By good I mean tests that cover a reasonably large amount of code and are up to date, so I can actually execute them without fixing anything upfront.
Here are some examples which I checked:
news 7.0.7 - 261 tests, 14 failed
realurl 2.4.0 - fails to execute
femanager 4.2.2 - fails to execute
devlog 3.0.2 - fails to execute
cs_seo 3.0.2 - fails to execute
aoe_ipauth 1.1.0 - fails to execute
Check out the extensions from Oliver Klee, who started implementing tests in the TYPO3 core in 2009.
oelib https://extensions.typo3.org/extension/oelib/
realty https://extensions.typo3.org/extension/realty/
seminars https://extensions.typo3.org/extension/seminars/
Also have a look at his example extension 'ext_tea' on GitHub: https://github.com/oliverklee/ext-tea.
It is a TYPO3 example extension for unit testing and best practices.
Oliver Klee gives workshops and has more examples on GitHub.
Christian Kuhn implemented, a few days ago, the possibility to run tests in TYPO3 9.5 using Docker. Documentation is in progress.
Run Build/Scripts/runTests.sh -h to see what is possible.
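For example, two typical invocations might look like this (option names can vary between core versions, so treat these as a sketch and trust the -h output over anything here):
Build/Scripts/runTests.sh -s unit                   # run the unit test suite in a Docker container
Build/Scripts/runTests.sh -s functional -d mariadb  # functional tests against MariaDB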

PhantomJS Ghostdriver and Nose --with-id

I am working with Selenium tests in Python, using Nose as a test runner. I run my tests like so:
nosetests -a level=gold --with-id --with-xunit
Once tests are done, I usually run
nosetests --failed
So far we have run tests using the Firefox and Chrome webdrivers with no issues. It is not uncommon for one or two tests to fail (our website undergoes frequent builds, which causes tests to fail briefly), and only these tests are retried.
When I use PhantomJS's GhostDriver, behavior is similar to Chrome/Firefox in that one or two tests fail. But when I run nosetests --failed, ALL tests are rerun, not just the failed ones.
The webdriver setup is as follows:
self.driver = webdriver.PhantomJS(executable_path='C:\\SeleniumTests\\phantomjs.exe')
On the first pass with PhantomJS, nosetests.xml contains
<?xml version="1.0" encoding="UTF-8"?><testsuite name="nosetests" tests="37" errors="1" failures="0" skip="0">
but on the second pass all 37 tests are rerun.
Is this a known issue with GhostDriver? Or is there something I am missing?
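One thing worth ruling out (a hedged sketch, not a confirmed fix): if the PhantomJS process is not shut down after every test, stray processes can interfere with subsequent runs and with nose's id bookkeeping. Making the teardown unconditional keeps each run independent:
import unittest
from selenium import webdriver

class GoldLevelTest(unittest.TestCase):  # hypothetical test class name

    def setUp(self):
        self.driver = webdriver.PhantomJS(
            executable_path='C:\\SeleniumTests\\phantomjs.exe')

    def tearDown(self):
        # Always terminate the PhantomJS process, even after a failure,
        # so no orphaned phantomjs.exe survives into the next run.
        self.driver.quit()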

Grails 2.3 unit testing with Ivy resolver

If I do create-app with Grails 2.3, create a simple Spock unit test, and change the Grails configuration to use the Ivy resolver:
grails.project.dependency.resolver = "ivy" // or maven
The unit test crashes with the following error:
| Running without daemon...
| Running 1 unit test...
| Running 1 unit test... 1 of 1
| Error Error running unit tests: org/hamcrest/SelfDescribing (NOTE: Stack trace has been filtered. Use --verbose to see entire trace.)
java.lang.NoClassDefFoundError: org/hamcrest/SelfDescribing
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at org.junit.runner.JUnitCore.run(JUnitCore.java:138)
at org.junit.runner.JUnitCore.run(JUnitCore.java:117)
at org.springsource.loaded.ri.ReflectiveInterceptor.jlrMethodInvoke(ReflectiveInterceptor.java:1259)
at org.springsource.loaded.ri.ReflectiveInterceptor.jlrMethodInvoke(ReflectiveInterceptor.java:1259)
at org.springsource.loaded.ri.ReflectiveInterceptor.jlrMethodInvoke(ReflectiveInterceptor.java:1259)
Caused by: java.lang.ClassNotFoundException: org.hamcrest.SelfDescribing
... 7 more
| Error Error running unit tests: org/hamcrest/SelfDescribing
| Running 1 unit test....
| Running 1 unit test.....
| Tests FAILED - view reports in C:\ivytry\foobar\target\test-reports
Any ideas how to get around this? The reason we need to use Ivy is that the Maven resolver doesn't seem to support custom remote repositories where I need to specify a username/password, other than in BuildConfig, and I don't want my credentials under source control :)
EDIT (solved): see comments!
The issue was caused by the "infamous" IntelliJ fix for IDEA 12 and Grails 2.3; restoring the "sources" and "javadoc" jar files fixes the issue!
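A different workaround often suggested for this particular NoClassDefFoundError (not the fix the author applied, so treat it as an assumption) is to pin hamcrest-core explicitly as a test dependency, since JUnit 4.11 expects the org.hamcrest classes to be provided separately:
// BuildConfig.groovy (sketch)
grails.project.dependency.resolver = "ivy"
grails.project.dependency.resolution = {
    dependencies {
        // makes org.hamcrest.SelfDescribing available on the test classpath
        test "org.hamcrest:hamcrest-core:1.3"
    }
}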

Confusion about using OpenEJB in embedded mode for unit-testing

I started exploring the possibilities of using OpenEJB in embedded mode for unit-testing my EJB3 components. At first I got errors like the output below:
Testsuite: HelloBeanTest
Tests run: 4, Failures: 0, Errors: 4, Time elapsed: 1,779 sec
------------- Standard Output ---------------
Apache OpenEJB 3.1.4 build: 20101112-03:32
http://openejb.apache.org/
------------- ---------------- ---------------
------------- Standard Error -----------------
log4j:WARN No appenders could be found for logger
(org.apache.openejb.resource.activemq.ActiveMQResourceAdapter).
log4j:WARN Please initialize the log4j system properly.
------------- ---------------- ---------------
Testcase: sum took 1,758 sec
Caused an ERROR
Name "HelloBeanLocal" not found.
javax.naming.NameNotFoundException: Name "HelloBeanLocal" not found.
at org.apache.openejb.core.ivm.naming.IvmContext.federate(IvmContext.java:193)
at org.apache.openejb.core.ivm.naming.IvmContext.lookup(IvmContext.java:150)
at
org.apache.openejb.core.ivm.naming.ContextWrapper.lookup(ContextWrapper.java:115)
at javax.naming.InitialContext.lookup(InitialContext.java:392)
at HelloBeanTest.bootContainer(Unknown Source)
# ... output is the same for all the rest of the tests
The openejb.home property is set as a system property and points to my OpenEJB installation dir.
HelloBeanTest#bootContainer() is a setup method, and it fails on the JNDI lookup, as shown below:
@Before
public void bootContainer() throws Exception {
    Properties props = new Properties();
    props.put(Context.INITIAL_CONTEXT_FACTORY,
        "org.apache.openejb.client.LocalInitialContextFactory");
    Context context = new InitialContext(props);
    hello = (Hello) context.lookup("HelloBeanLocal");
}
After struggling with problems like this, I started to try out OpenEJB in non-embedded mode: I started the container from its installation directory and deployed the components as an ejb.jar. Deployment was successful, and I started creating a stand-alone Java client. The stand-alone client is still unfinished, but in the meantime I came back to testing in embedded mode.
To my surprise, the tests suddenly started to pass. I added some more functionality to the component and tests for those. Everything worked just fine. Below is the output for that run.
Testsuite: HelloBeanTest
Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 2,281 sec
------------- Standard Output ---------------
Apache OpenEJB 3.1.4 build: 20101112-03:32
http://openejb.apache.org/
------------- ---------------- ---------------
------------- Standard Error -----------------
log4j:WARN No appenders could be found for logger
(org.apache.openejb.resource.activemq.ActiveMQResourceAdapter).
log4j:WARN Please initialize the log4j system properly.
------------- ---------------- ---------------
Testcase: sum took 2,263 sec
Testcase: hello took 0,001 sec
Testcase: sum2 took 0 sec
Testcase: avg took 0,001 sec
I was happily coding and testing until it broke again. It seems that removing the ejb.jar from the apps/ directory caused it. So it seems that, when running in embedded mode, OpenEJB still does the JNDI lookup against the installation dir but uses the current dir to find the actual implementations. I came to this conclusion because the ejb.jar deployed in the apps/ dir does not have all the methods that the local version has (I double-checked with javap); only the class signature is the same.
After this very long introduction, it's question time.
Can anyone provide any explanation for this behaviour?
Packaging and deploying the EJBs in the apps/ dir before testing is a simple task, but can I be sure that even then I am testing the correct implementation?
Does this all have something to do with the openejb.home property pointing at the OpenEJB installation dir?
For summary, OpenEJB version is Apache OpenEJB 3.1.4 build: 20101112-03:32, which is visible in the log outputs as well.
Thanks in advance.
It does have something to do with setting openejb.home to point to the installation dir.
There's a conf/openejb.xml file that likely has apps/ listed as a place where deployments live. All the log output went to the logs/ dir instead of the System.out of the test case, where you could read it easily.
To use OpenEJB embedded you don't need any config files, directories, or ports; you just include the libs in your project's classpath.
First thing I'd say is to check out the openejb-examples-3.1.4.zip. There are maybe two dozen example projects, all set up with both Ant and Maven build scripts. All the examples will work in any environment as long as the OpenEJB libraries are in the classpath. Here's a video of using one of the examples to unit test in Eclipse. I recommend the simple-stateless example as the best starting point.
First thing I'd say is to check out the openejb-examples-3.1.4.zip. There are maybe two dozen example projects all setup with both Ant and Maven build scripts. All the examples will work in any environment as long as the OpenEJB libraries are in the classpath. Here's a video of using one of the examples to unit test in Eclipse. I recommend the simple-stateless example as the best starting point.