I was trying to execute my test cases using the phpunit command in Laravel, but the command replies with the following:
$ phpunit --env=testing
PHPUnit 3.7.28 by Sebastian Bergmann.
unrecognized option --env
Is this familiar to anyone? Please help me.
Edit:
The following is the help output for the command, and the option does not exist there. What should I do?
$ phpunit --help
PHPUnit 3.7.28 by Sebastian Bergmann.
Usage: phpunit [switches] UnitTest [UnitTest.php]
phpunit [switches] <directory>
--log-junit <file> Log test execution in JUnit XML format to file.
--log-tap <file> Log test execution in TAP format to file.
--log-json <file> Log test execution in JSON format.
--coverage-clover <file> Generate code coverage report in Clover XML format.
--coverage-html <dir> Generate code coverage report in HTML format.
--coverage-php <file> Serialize PHP_CodeCoverage object to file.
--coverage-text=<file> Generate code coverage report in text format.
Default to writing to the standard output.
--testdox-html <file> Write agile documentation in HTML format to file.
--testdox-text <file> Write agile documentation in Text format to file.
--filter <pattern> Filter which tests to run.
--testsuite <pattern> Filter which testsuite to run.
--group ... Only runs tests from the specified group(s).
--exclude-group ... Exclude tests from the specified group(s).
--list-groups List available test groups.
--test-suffix ... Only search for test in files with specified
suffix(es). Default: Test.php,.phpt
--loader <loader> TestSuiteLoader implementation to use.
--printer <printer> TestSuiteListener implementation to use.
--repeat <times> Runs the test(s) repeatedly.
--tap Report test execution progress in TAP format.
--testdox Report test execution progress in TestDox format.
--colors Use colors in output.
--stderr Write to STDERR instead of STDOUT.
--stop-on-error Stop execution upon first error.
--stop-on-failure Stop execution upon first error or failure.
--stop-on-skipped Stop execution upon first skipped test.
--stop-on-incomplete Stop execution upon first incomplete test.
--strict Run tests in strict mode.
-v|--verbose Output more verbose information.
--debug Display debugging information during test execution.
--process-isolation Run each test in a separate PHP process.
--no-globals-backup Do not backup and restore $GLOBALS for each test.
--static-backup Backup and restore static attributes for each test.
--bootstrap <file> A "bootstrap" PHP file that is run before the tests.
-c|--configuration <file> Read configuration from XML file.
--no-configuration Ignore default configuration file (phpunit.xml).
--include-path <path(s)> Prepend PHP's include_path with given path(s).
-d key[=value] Sets a php.ini value.
-h|--help Prints this usage information.
--version Prints the version and exits.
You should know your PHPUnit version is VERY outdated (this is Nov 2014 as I write this), so please update.
Also, what are you trying to do with --env? You can configure the application environment in the phpunit.xml file in the root directory like so:
<php>
    <env name="APP_ENV" value="testing"/>
    <!-- ... more variables -->
</php>
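For context, in a Laravel application of that era the default phpunit.xml in the project root already contains most of this; a minimal version might look roughly like the following (the bootstrap path and test directory are the usual Laravel 4 defaults as I remember them, so adjust them to your project):
<phpunit bootstrap="bootstrap/autoload.php" colors="true">
    <testsuites>
        <testsuite name="Application Test Suite">
            <directory>./app/tests/</directory>
        </testsuite>
    </testsuites>
    <php>
        <!-- PHPUnit sets these environment variables before running the tests -->
        <env name="APP_ENV" value="testing"/>
    </php>
</phpunit>
With that in place, running plain phpunit from the project root picks the file up automatically, so no --env switch is needed.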
I started to work on a series of unit tests for different kernel modules I am currently writing. To that end, I am using the excellent KUnit framework.
Following the simple test described on the KUnit webpage, I was able to create a first test series that compiles, runs and displays the results as expected.
For me, the next step was to run code coverage on those tests and generate a report on how well the different modules are covered by the testing strategy.
The problem comes when I open the results of the code coverage: it indicates that no lines in the module I am writing were executed by my tests. I know for a fact that this is not the case, because in the test function I generated a failing test using:
KUNIT_FAIL(test, "This test never passes.");
And kunit.py reports that the test failed. Even the source code of the test itself was not reported as covered...
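For reference, that failing test lives in a standard KUnit suite along these lines (the suite and case names are placeholders, not the real module names):
#include <kunit/test.h>

/* A test case that always fails, just to prove the test code is executed. */
static void always_fails_test(struct kunit *test)
{
	KUNIT_FAIL(test, "This test never passes.");
}

static struct kunit_case my_module_test_cases[] = {
	KUNIT_CASE(always_fails_test),
	{}
};

static struct kunit_suite my_module_test_suite = {
	.name = "my-module-tests",
	.test_cases = my_module_test_cases,
};
kunit_test_suite(my_module_test_suite);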
Does someone have an idea of how to solve this?
I'll post the answer to my own question, hoping this will help someone in the same situation.
There were two issues:
The GCC version must be lower than 7.x (as stated in the KUnit documentation).
Code coverage for KUnit is only available starting with kernel 5.14.
Once I used GCC 6.5.0 and a 5.15 kernel and followed the KUnit manual for code coverage, everything worked as hoped!
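For anyone following along, the sequence I used is roughly the one from the kernel's KUnit documentation; the exact config symbols and flags can differ between kernel versions, so treat this as a sketch and check Documentation/dev-tools/kunit/ in your tree:
# .kunit/.kunitconfig: enable gcov for the kernel that kunit.py builds
# (CONFIG_GCOV_PROFILE_ALL is the symbol mentioned in the follow-up below;
# your kernel version may need slightly different gcov options)
CONFIG_KUNIT=y
CONFIG_GCOV_KERNEL=y
CONFIG_GCOV_PROFILE_ALL=y

# build and run the tests with an older GCC, then collect and render the data
./tools/testing/kunit/kunit.py run --make_options=CC=gcc-6
lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/
genhtml -o coverage_html coverage.info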
Thanks for providing the information.
I am also working on code coverage in KUnit, and I am facing an issue where I am unable to generate .gcda files. Some abnormal things I observed are listed below:
I am unable to enable CONFIG_GCOV_PROFILE_ALL.
The generated coverage.info file has no content.
No .gcda files are generated.
I received a warning while running the command below:
lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/
WARNING: no .gcda files found in .kunit/ - skipping!
and an error while running this command:
genhtml -o /tmp/coverage_html coverage.info
ERROR: no valid records found in tracefile coverage.info
The links to the sources I have followed are given below. Could you please point me in the right direction to generate code coverage in KUnit?
kunit tutorial
gcov in kernel
I have a jacoco-agent-generated file from my Maven project (Java), named jacoco.exec.
How can I convert this file into a human-readable format (HTML/XML)?
I believe this is described in the official JaCoCo documentation. In particular, there is the jacoco-maven-plugin goal "report" and an example of its usage.
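Roughly, the setup described there boils down to binding the agent and the report goal in the pom.xml; the version number below is only an example, so use a current one:
<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.8.1</version>
    <executions>
        <!-- attaches the JaCoCo agent so jacoco.exec is written during the tests -->
        <execution>
            <goals>
                <goal>prepare-agent</goal>
            </goals>
        </execution>
        <!-- turns jacoco.exec into HTML/XML/CSV reports during verify -->
        <execution>
            <id>report</id>
            <phase>verify</phase>
            <goals>
                <goal>report</goal>
            </goals>
        </execution>
    </executions>
</plugin>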
Starting from JaCoCo version 0.8.0 there is also a command-line interface. Here is an example of how to use it to produce an HTML report from jacoco.exec; note that this also requires the class files:
java -jar jacoco-0.8.1/lib/jacococli.jar report jacoco.exec --classfiles directory_with_classes --html directory_for_report
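If I remember the CLI flags correctly, the same tool can also write the XML variant (and --sourcefiles adds source highlighting to the HTML report), for example:
java -jar jacoco-0.8.1/lib/jacococli.jar report jacoco.exec --classfiles directory_with_classes --xml report.xml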
In the Gradle world (for anyone who happened to stumble upon this question), use the JacocoReport task, documented here.
The configured task would look something like this:
jacocoTestReport {
    reports {
        html.enabled = true
        html.destination = file("${buildDir}/coverage/report/html")
        xml.enabled = true
        xml.destination = file("${buildDir}/coverage/report/xml")
    }
    executionData = files("path/to/jacoco.exec")
}
Godin's answer is perfect, but if you are using Maven, you can simply run the following command:
mvn clean verify
JaCoCo's report goal is attached to the verify phase of the Maven lifecycle, so mvn verify will invoke the report goal and generate all the reports under the following path:
./target/site/jacoco
I run go test in my pkg directory and the test results are printed to the console as they run, but it would be ideal if I could have them written to a txt file or even an html file. Is it possible to do this? I know you can get coverage reports and generate html files for those, which is excellent, but I would have thought it possible to do the same for the actual test results, i.e. which tests ran, which passed and which failed. I've been searching the net, but even go test help doesn't offer any suggestions for printing results to a file.
Since I only want to see failed tests, I have this script "gt" that I run instead of go test:
go test -coverprofile=coverage.out %*|grep -v -e "^\.\.*$"|grep -v "^$"|grep -v "thus far"
That way, it filters everything but the failed cases.
And you can redirect its content to a file, as mentioned: gt > test.out
It also generates code coverage, which is why I have another script "gc":
grep -v -e " 1$" coverage.out
That way, I don't even wait for a browser to open; I directly see the list of lines which are not yet covered (i.e., those which don't end with '1' in the coverage.out file).
This will overwrite test.out on each test run:
go test > test.out
This appends to test.out instead of overwriting it:
go test >> test.out
And this writes to test.out while still echoing the results to the console:
go test |& tee test.out
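If you also want the per-test PASS/FAIL lines in the file rather than just the package summary, the -v flag is what adds them; a small sketch:
# verbose results for all packages, written to the file only
go test -v ./... > test.out
# same, but also shown in the console (2>&1 is the portable form of |&)
go test -v ./... 2>&1 | tee test.out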
I am running tests through Jenkins on a windows box. In my "Execute Windows Batch command" portion of the project configuration I have the following command:
nosetests --nocapture --with-xunitmp --eval-attr "%APPLICATION% and priority<=%PRIORITY% and smoketest and not dev" --processes=4 --process-timeout=2000
The post build actions have "Publish JUnit test result report" with the Test report XMLs path being:
trunk\automation\selenium\src\nosetests.xml
When I do a test run, the nosetests.xml file is created; however, it is empty, and I am not getting any test results for the build.
I am not really sure what is wrong here.
EDIT 1
I ran the tests with just --with-xunit and REM'd out the --processes option, and got test results. Does anyone know of problems with xunitmp not working in a Windows environment?
EDIT 2
I uninstalled and reinstalled nose and nose_xunitmp to no avail.
The nosetests plugin for parallelizing tests and the plugin for producing XML output are incompatible; enabling them at the same time produces exactly the result you got.
If you want to keep using nosetests, you need to execute tests sequentially or find other means of parallelizing them, e.g. by executing multiple parallel nosetests commands, which is what I do at work (see the sketch below).
Alternatively, you can use another test runner like nose2 or py.test, which do not have this limitation.
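A rough Unix-shell sketch of the "multiple parallel nosetests commands" idea (the attribute expressions and file names are placeholders; on a Windows box you would translate this to start commands or separate Jenkins jobs). Each invocation runs sequentially inside itself and writes its own XML file, and a wildcard in the Jenkins "Test report XMLs" field picks them all up:
# run two independent slices of the suite in parallel, one xunit file each
nosetests --with-xunit --xunit-file=nosetests_smoke.xml --eval-attr "smoketest" &
nosetests --with-xunit --xunit-file=nosetests_rest.xml --eval-attr "not smoketest" &
wait
# Jenkins "Test report XMLs" pattern: trunk\automation\selenium\src\nosetests_*.xml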
Apparently the problem is indeed Windows and how it handles threads. We attempted several tests outside of our Windows Jenkins server and they do not work either. Stupid Windows.
When testing with Zend and PHPUnit, a header error shows up on the console.
This is the error:
Cannot modify header information - headers already sent by (output started at /usr/share/php/PHPUnit/Util/Printer.php:173)
I tried to debug using the instructions in these topics:
Is there a way to test STDERR output in PHPUnit?
and
PHPUnit output causing Zend_Session exceptions.
But when I use the --stderr option, I cannot see any output or report for the test case.
This is the output on the console:
root#ubuntu:/home/boingonline/www/testunit# phpunit --stderr
PHPUnit 3.5.15 by Sebastian Bergmann.
root#ubuntu:/home/boingonline/www/testunit#
Any ideas on this problem? Thanks for all the answers.
This is a bug in PHP. Whenever something is output (even on CLI, that's the problem), you cannot use header() calls anymore.
A workaround is to use process isolation for the test via the @runInSeparateProcess annotation; a sketch follows.
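A minimal sketch of how the annotation is applied (the class, test name and body are placeholders, not taken from your suite):
<?php
class RedirectTest extends PHPUnit_Framework_TestCase
{
    /**
     * Runs in its own PHP process, so output from earlier tests has not
     * already sent the headers by the time header() is called here.
     *
     * @runInSeparateProcess
     * @preserveGlobalState disabled
     */
    public function testRedirectSendsHeaders()
    {
        // ... exercise code that calls header() ...
        $this->assertTrue(true); // placeholder assertion
    }
}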