I am using ctest with gtest_discover_tests to unit test my applications. It appears that ctest calls my test executable once for each individual test (see below for an example). I believe this results in the default.profraw code coverage file being overwritten for each test, so I only get code coverage for the last test that was executed.
As an example:
ctest --verbose -R "Test1|Test2"
57: Test command: /home/user/project_dir/build_unit_test/project/project_test "--gtest_filter=TestUnit.Test1" "--gtest_also_run_disabled_tests"
57: Test timeout computed to be: 10000000
57: Running main() from gtest_main.cc
57: Note: Google Test filter = TestUnit.Test1
57: [==========] Running 1 test from 1 test case.
57: [----------] Global test environment set-up.
57: [----------] 1 test from TestUnit
57: [ RUN ] TestUnit.Test1
57: [ OK ] TestUnit.Test1 (6 ms)
57: [----------] 1 test from TestUnit (6 ms total)
57:
57: [----------] Global test environment tear-down
57: [==========] 1 test from 1 test case ran. (6 ms total)
57: [ PASSED ] 1 test.
9/10 Test #57: project.TestUnit.Test1 .............. Passed 0.05 sec
test 58
Start 58: project.TestUnit.Test2
58: Test command: /home/user/project_dir/build_unit_test/project/project_test "--gtest_filter=TestUnit.Test2" "--gtest_also_run_disabled_tests"
58: Test timeout computed to be: 10000000
58: Running main() from gtest_main.cc
58: Note: Google Test filter = TestUnit.Test2
58: [==========] Running 1 test from 1 test case.
58: [----------] Global test environment set-up.
58: [----------] 1 test from TestUnit
58: [ RUN ] TestUnit.Test2
58: [ OK ] TestUnit.Test2 (1 ms)
58: [----------] 1 test from TestUnit (1 ms total)
58:
58: [----------] Global test environment tear-down
58: [==========] 1 test from 1 test case ran. (2 ms total)
58: [ PASSED ] 1 test.
10/10 Test #58: project.TestUnit.Test2 .............. Passed 0.04 sec
In package.json I have 2 script commands:
"test:unit": "jest --watch --testNamePattern='^(?!\\[functional\\]).+$'",
"test:functional": "jest --watch --testNamePattern='\\[functional\\]'",
Copying ^(?!\\[functional\\]).+$ into https://regex101.com/, it does not match the test string below, taken from the first argument of describe():
describe("[functional] live tests", () => {
When changed to ([functional]).+$, the pattern does match. (I have to drop one backslash from each escaped bracket, since the doubled backslashes are only needed inside the .json file, I think.)
Here is what I see when running npm run test:unit in my project root:
// the functional test runs (not desired)
$ npm run test:unit
functions/src/classes/__tests__/Functional.test.ts:30:47 - error TS2339: Property 'submit' does not exist on type 'Element'.
30 await emailForm.evaluate(form => form.submit());
~~~~~~
RUNS ...s/__tests__/Functional.test.ts
Test Suites: 1 failed, 1 skipped, 3 passed, 4 of 5 total
Tests: 2 skipped, 16 passed, 18 total
Snapshots: 0 total
Time: 8.965s, estimated 27s
Ran all test suites with tests matching "^(?!\[functional\]).+$".
Active Filters: test name /^(?!\[functional\]).+$/
The functional tests are not built out, which explains the syntax error; that's not important here. The key issue is why those tests were not skipped.
I believe the problem has to do with the negative regex matcher. The positive matcher (without the !) only matches tests that contain [functional] or are nested in a describe block that does:
$ npm run test:functional
Test Suites: 1 failed, 4 skipped, 1 of 5 total
Active Filters: test name /\[functional\]/
Anyone know why the negative regex pattern is failing during npm run test:unit?
Instead of fixing the regex, I changed the flag on the unit-testing script to an ignore pattern, reusing the matching pattern for [functional]:
"test:unit": "jest --watch --testIgnorePattern='\\[functional\\]'",
"test:functional": "jest --watch --testNamePattern='\\[functional\\]'",
I am trying to configure/run my first unit test for Vuejs. But I can't get past the configuration issues. I have tried installing the libraries but for some reason I keep getting errors.
Here is an example of what my code looks like:
My directory structure:
hello/
dist/
node_modules/
src/
components/
hello.vue
test/
setup.js
test.spec.js
.babelrc
package.json
webpack.config.js
Contents inside my files
src/components/hello.vue
<template> <div> {{message}} </div> </template>
<script>
export default {
name: 'hello',
data () { return { message: 'Hi' } },
created () {
// ...
}
}
</script>
test/setup.js
// setup JSDOM
require('jsdom-global')()
// make expect available globally
global.expect = require('expect')
test/test.spec.js
import { shallow } from '@vue/test-utils'
import hello from '../src/components/hello.vue'
describe('hello', () => {
  // just testing simple data to see if it works
  it('works', () => {
    expect(1).toBe(1)
  })
})
.babelrc
{
"env": {
"development": {
"presets": [
[
"env",
{
"modules": false
}
]
]
},
"test": {
"presets": [
[
"env",
{
"modules": false,
"targets": {
"node": "current"
}
}
]
],
"plugins": [
"istanbul"
]
}
}
}
package.json
...
"scripts": {
"build": "webpack -p",
"test": "cross-env NODE_ENV=test nyc mocha-webpack --webpack-config webpack.config.js --require test/setup.js test/**/*.spec.js"
},
"devDependencies": {
"babel-core": "^6.26.0",
"babel-loader": "^7.1.2",
"babel-preset-env": "^1.6.1",
"cross-env": "^5.1.1",
"css-loader": "^0.28.7",
"file-loader": "^1.1.5",
"node-sass": "^4.7.2",
"sass-loader": "^6.0.6",
"vue-loader": "^13.5.0",
"vue-template-compiler": "^2.5.9",
"webpack": "^3.10.0",
"webpack-dev-server": "^2.9.7",
"jsdom": "^11.3.0",
"jsdom-global": "^3.0.2",
"mocha": "^3.5.3",
"mocha-webpack": "^1.0.0-rc.1",
"nyc": "^11.4.1",
"expect": "^21.2.1",
"#vue/test-utils": "^1.0.0-beta.12"
},
...
"nyc": {
"include": [
"src/**/*.(js|vue)"
],
"instrument": false,
"sourceMap": false
}
and finally my webpack.config.js
...
if(process.env.NODE_ENV == "test") {
module.exports.externals = [ require ('webpack-node-externals')()]
module.exports.devtool = 'inline-cheap-module-source-map'
}
now when I run npm test from my root folder hello/ I get this error:
> hello@1.0.0 test C:\Users\john\vue-learn\hello
> npm run e2e
> hello@1.0.0 e2e C:\Users\john\vue-learn\hello
> node test/e2e/runner.js
Starting selenium server... started - PID: 12212
[Test] Test Suite
=====================
Running: default e2e tests
× Timed out while waiting for element <#app> to be present for 5000 milliseconds. - expected "visible" but got: "not found"
at Object.defaultE2eTests [as default e2e tests] (C:/Users/john/Google Drive/lab/hello/test/e2e/specs/test.js:13:8)
at _combinedTickCallback (internal/process/next_tick.js:131:7)
FAILED: 1 assertions failed (20.281s)
_________________________________________________
TEST FAILURE: 1 assertions failed, 0 passed. (20.456s)
× test
- default e2e tests (20.281s)
Timed out while waiting for element <#app> to be present for 5000 milliseconds. - expected "visible" but got: "not found"
at Object.defaultE2eTests [as default e2e tests] (C:/Users/john/Google Drive/lab/hello/test/e2e/specs/test.js:13:8)
at _combinedTickCallback (internal/process/next_tick.js:131:7)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! hello@1.0.0 e2e: `node test/e2e/runner.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the hello@1.0.0 e2e script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\john\AppData\Roaming\npm-cache\_logs\2018-04-03T23_53_15_976Z-debug.log
npm ERR! Test failed. See above for more details.
I don't know why this happens. When I first created my webpack project I didn't install a testing library with npm init, so there should be no conflicts, but I still get that error.
Update (after bounty)
I'm just trying to test my Vue.js application, hopefully with Jasmine/Karma. If anyone knows how to integrate these into a simple app and run a first test, I can take it from there. My problem is not writing tests but configuring the tooling.
First of all, you don't need to enable end-to-end testing in your project. I would say start fresh:
$ npm install -g vue-cli
$ vue init webpack vue-testing
? Project name vue-testing
? Project description A Vue.js project
? Author Tarun Lalwani <tarun.lalwani@payu.in>
? Vue build standalone
? Install vue-router? Yes
? Use ESLint to lint your code? Yes
? Pick an ESLint preset Standard
? Set up unit tests Yes
? Pick a test runner karma
? Setup e2e tests with Nightwatch? No
? Should we run `npm install` for you after the project has been created? (recommended) yarn
Say No to "Setup e2e tests with Nightwatch" and choose Karma for "Pick a test runner".
$ npm test
> vue-testing@1.0.0 test /Users/tarun.lalwani/Desktop/tarunlalwani.com/tarunlalwani/workshop/ub16/so/vue-testing
> npm run unit
> vue-testing@1.0.0 unit /Users/tarun.lalwani/Desktop/tarunlalwani.com/tarunlalwani/workshop/ub16/so/vue-testing
> cross-env BABEL_ENV=test karma start test/unit/karma.conf.js --single-run
07 04 2018 21:35:28.620:INFO [karma]: Karma v1.7.1 server started at http://0.0.0.0:9876/
07 04 2018 21:35:28.629:INFO [launcher]: Launching browser PhantomJS with unlimited concurrency
07 04 2018 21:35:28.645:INFO [launcher]: Starting browser PhantomJS
07 04 2018 21:35:32.891:INFO [PhantomJS 2.1.1 (Mac OS X 0.0.0)]: Connected on socket M1HeZIiOis3eE3mLAAAA with id 44927405
HelloWorld.vue
✓ should render correct contents
PhantomJS 2.1.1 (Mac OS X 0.0.0): Executed 1 of 1 SUCCESS (0.061 secs / 0.041 secs)
TOTAL: 1 SUCCESS
=============================== Coverage summary ===============================
Statements : 100% ( 2/2 )
Branches : 100% ( 0/0 )
Functions : 100% ( 0/0 )
Lines : 100% ( 2/2 )
================================================================================
Now your npm test would work fine.
According to the error logs you provided, the failing tests are the end-to-end ones. Indeed, your npm test command runs npm run e2e, which tests using Nightwatch. Look under test/e2e/specs: there you should have a default test file checking that your Vue application properly creates a DOM element identified as app.
The test should be the following:
// For authoring Nightwatch tests, see
// http://nightwatchjs.org/guide#usage
module.exports = {
'default e2e tests': function (browser) {
// automatically uses dev Server port from /config.index.js
// default: http://localhost:8080
// see nightwatch.conf.js
const devServer = browser.globals.devServerURL
browser
.url(devServer)
.waitForElementVisible('#app', 5000)
.assert.elementPresent('.hello')
.assert.containsText('h1', 'Welcome to Your Vue.js App')
.assert.elementCount('img', 1)
.end()
}
}
In your case this test is failing because you have probably removed the App.vue file that is generated by the vue-cli scaffolding. The error you get is because the above test checks, with a 5-second timeout, whether a DOM node with the id "app" is rendered (i.e. .waitForElementVisible('#app', 5000)).
Basically, it is failing because you no longer provide this div in your application (due to the removal of App.vue, maybe).
So you have two options here:
restoring the App.vue file (i.e. creating a div with the id 'app' where you mount a Vue instance);
editing the end-to-end test according to your needs.
Hope this helps!
I'm having the strangest bug I've ever seen on a Linux system right now, and there seem to be only a few possible explanations for it:
Either appending sudo makes file writes instant
Or appending sudo produces a short delay in executing statements
Or I've got no clue what's happening with my program
Well, let me give you some background. I'm currently writing a C++ program for Raspberry Pi GPIO manipulation. There are no visible errors in the program as far as I know, since it works successfully both with sudo and with delays. So here's how the Pi's GPIOs work.
First you have to export one to reserve it for manipulation; this creates a new directory named gpio plus the pin number, with several files in it.
echo 17 > /sys/class/gpio/export
Then set its direction (in means read and out means write)
echo "out" > /sys/class/gpio/gpio17/direction
Then write the value (0 or 1 for off and on)
echo 1 > /sys/class/gpio/gpio17/value
At the end, unexport it; the directory will get deleted.
echo 17 > /sys/class/gpio/unexport
It doesn't matter whether you do this through bash commands, through C/C++, or through any other language's I/O, since on Unix these are just files and you only need to read from and write to them (see the quick sketch below). Everything works fine up to this point: I've tested this manually and it works, so my manual test passes.
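Just to illustrate the point, the same four steps as bare C++ file I/O would be something like this (a rough sketch, not my actual Led code; it assumes C++11 streams and the sysfs paths above):
#include <fstream>
int main() {
    // Reserve GPIO 17 for manipulation (creates /sys/class/gpio/gpio17/).
    std::ofstream("/sys/class/gpio/export") << 17 << std::endl;
    // Configure it as an output, then drive it high.
    std::ofstream("/sys/class/gpio/gpio17/direction") << "out" << std::endl;
    std::ofstream("/sys/class/gpio/gpio17/value") << 1 << std::endl;
    // Release the pin again (removes the gpio17 directory).
    std::ofstream("/sys/class/gpio/unexport") << 17 << std::endl;
    return 0;
}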
Now I have a simple test written for my program, which looks like this:
TEST(LEDWrites, LedDevice)
{
Led led1(17, "MyLED");
// auto b = sleep(1);
EXPECT_EQ(true, led1.on());
}
The Led class constructor does the export part (echo 17 > /sys/class/gpio/export), while the .on() call sets the direction (echo "out" > /sys/class/gpio/gpio17/direction) and writes the value as well (echo 1 > /sys/class/gpio/gpio17/value). Forget about unexport here; it is handled by the destructor and plays no role here.
If you're curious, these functions handle the I/O like this:
{
const std::string direction = _dir ? "out" : "in";
const std::string path = GPIO_PATH + "/gpio" + std::to_string(powerPin) + "/direction";
std::ofstream dirStream(path.c_str(), std::ofstream::trunc);
if (dirStream) {
dirStream << direction;
} else {
// LOG error here
return false;
}
return true;
}
meaning basic C++ file I/O. Now let me explain the bug.
First, here are three runs of the same test:
Normal run FAILS
[isaac@alarmpi build]$ ./test/testexe
Running main() from gtest_main.cc
[==========] Running 2 tests from 2 test cases.
[----------] Global test environment set-up.
[----------] 1 test from LEDConstruction
[ RUN ] LEDConstruction.LedDevice
[ OK ] LEDConstruction.LedDevice (1 ms)
[----------] 1 test from LEDConstruction (1 ms total)
[----------] 1 test from LEDWrites
[ RUN ] LEDWrites.LedDevice
../test/test.cpp:20: Failure
Value of: led1.on()
Actual: false
Expected: true
[ FAILED ] LEDWrites.LedDevice (2 ms)
[----------] 1 test from LEDWrites (3 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 2 test cases ran. (6 ms total)
[ PASSED ] 1 test.
[ FAILED ] 1 test, listed below:
[ FAILED ] LEDWrites.LedDevice
1 FAILED TEST
run with sudo PASSES
[isaac@alarmpi build]$ sudo ./test/testexe
[sudo] password for isaac:
Running main() from gtest_main.cc
[==========] Running 2 tests from 2 test cases.
[----------] Global test environment set-up.
[----------] 1 test from LEDConstruction
[ RUN ] LEDConstruction.LedDevice
[ OK ] LEDConstruction.LedDevice (1 ms)
[----------] 1 test from LEDConstruction (2 ms total)
[----------] 1 test from LEDWrites
[ RUN ] LEDWrites.LedDevice
[ OK ] LEDWrites.LedDevice (2 ms)
[----------] 1 test from LEDWrites (2 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 2 test cases ran. (5 ms total)
[ PASSED ] 2 tests.
wtf delay run PASSES (with // auto b = sleep(1); uncommented)
[isaac@alarmpi build]$ ./test/testexe
Running main() from gtest_main.cc
[==========] Running 2 tests from 2 test cases.
[----------] Global test environment set-up.
[----------] 1 test from LEDConstruction
[ RUN ] LEDConstruction.LedDevice
[ OK ] LEDConstruction.LedDevice (1 ms)
[----------] 1 test from LEDConstruction (2 ms total)
[----------] 1 test from LEDWrites
[ RUN ] LEDWrites.LedDevice
[ OK ] LEDWrites.LedDevice (1001 ms)
[----------] 1 test from LEDWrites (1003 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 2 test cases ran. (1005 ms total)
[ PASSED ] 2 tests.
The only difference between the delay run and the normal run is that single uncommented line: auto b = sleep(1);. Everything else is the same: device, directory structure, build configuration, everything. The only thing that could explain this is that Linux might be creating that file and its friends somewhat later, or it takes some time, and I call .on() before that. Well, that could explain it...
But then why does the sudo invocation with no delay pass? Does it make those writes faster/instant, or does it insert a delay by itself? Is this caused by some kind of buffering? Please say no :/
If it matters, I'm using the following udev rules to get non-sudo access to the gpio directory:
SUBSYSTEM=="bcm2835-gpiomem", KERNEL=="gpiomem", GROUP="gpio", MODE="0660"
SUBSYSTEM=="gpio", KERNEL=="gpiochip*", ACTION=="add", PROGRAM="/bin/sh -c 'chown root:gpio /sys/class/gpio/export /sys/class/gpio/unexport ; chmod 220 /sys/class/gpio/export /sys/class/gpio/unexport'"
SUBSYSTEM=="gpio", KERNEL=="gpio*", ACTION=="add", PROGRAM="/bin/sh -c 'chown root:gpio /sys%p/active_low /sys%p/direction /sys%p/edge /sys%p/value ; chmod 660 /sys%p/active_low /sys%p/direction /sys%p/edge /sys%p/value'"
EDIT: As @charles mentioned, I added std::flush after every write in my I/O operations. Still failing.
Strace to the rescue
Let's look at the execution of the failing run:
open("/sys/class/gpio/export", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/unexport", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/export", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/gpio17/value", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = -1 EACCES (Permission denied)
open("/sys/class/gpio/gpio17/direction", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = -1 EACCES (Permission denied)
open("/sys/class/gpio/unexport", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
..., 0666) = -1 EACCES (Permission denied)
Okaaay, here's something that explains why it is passing with sudo. But why is it passing with the delay? Let's check that too:
open("/sys/class/gpio/export", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/unexport", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/export", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/gpio17/value", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/gpio17/direction", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 4
open("/sys/class/gpio/unexport", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
No wait, wtf? This means the permission denied must happen when the files haven't been created yet at that point. But how does using sudo solve that?
Here's the relevant output for sudo:
open("/sys/class/gpio/export", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/unexport", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/export", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/gpio17/value", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
open("/sys/class/gpio/gpio17/direction", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 4
open("/sys/class/gpio/unexport", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
There is a race between udev and your program. When you write to /sys/class/gpio/export, the write will not return until the GPIO is fully created. However, once it has been created, you have two processes that simultaneously take action on the new device:
A hotplug/uevent triggers udev to evaluate its rules. As part of these rules, it will change the ownership and permissions of /sys/class/gpio/gpio17/value.
Your program continues. It will immediately try to open /sys/class/gpio/gpio17/value.
So there is some chance that your program will open the value file before udev has changed its ownership and permissions. This is in fact very likely, because your udev handler does an execve of a shell which then execve's chown and chmod. But even without that, the scheduler will normally give priority to the task that was already running when returning from a syscall, so your program will usually open the value file before udev has even woken up.
By inserting a sleep, you allow udev to do its thing. So to make it robust, you could poll the file with access() before opening it.
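For example, a rough sketch of such a poll (my own illustration; the path argument and the one-second budget are arbitrary choices):
#include <unistd.h>   // access(), usleep()
#include <string>
// Poll until udev has made the freshly created sysfs file writable for us,
// or give up after roughly one second. Returns true once access() succeeds.
static bool waitForWritable(const std::string& path, int retries = 100)
{
    for (int i = 0; i < retries; ++i) {
        if (access(path.c_str(), W_OK) == 0)
            return true;
        usleep(10 * 1000);  // wait 10 ms between attempts
    }
    return false;
}
// Usage: call this after writing to /sys/class/gpio/export and before
// opening gpio17/direction or gpio17/value.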
It would also help to give udev a higher priority, e.g. chrt -f -p 3 $(pidof systemd-udevd). This gives udev real-time priority, which means it will always run before your program. It can also make your system unresponsive, so take care.
From your strace output
open("/sys/class/gpio/gpio17/value", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = -1 EACCES (Permission denied)
open("/sys/class/gpio/gpio17/direction", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = -1 EACCES (Permission denied)
You are first writing value, then direction.
Of course, you should first set the proper direction before writing the value.
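So the write path should look something like this (the helper names here are my own guess at your class's internals; only the ordering matters):
bool Led::on()
{
    // Set the pin up as an output *first*, then write the value.
    if (!setDirection(true))    // writes "out" to .../gpio17/direction
        return false;
    return writeValue(1);       // writes "1" to .../gpio17/value
}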
Also, you should probably end your output
if (dirStream) {
dirStream << direction;
} else {
// LOG error here
return false;
}
with a newline.
The echo command also appends a newline.
if (dirStream) {
dirStream << direction << std::endl;
} else {
// LOG error here
return false;
}
(In this case, I would explicitly use std::endl to flush. Of course just adding '\n' works as well, but making the flush explicit makes the code more robust. As it is, you are now relying on the fact that the stream gets closed immediately after writing—which it might not if you later decide to keep the stream open until the end of your program.)
The missing trailing newline might explain why it works with a delay: after that delay, the driver might interpret the data as if there were a newline and assume no more characters are waiting in the stream.
I'm setting up a C++ test environment with CMake. I've mostly achieved what I want to do, but I'm confused by two different test output styles.
In my example below, what does 'make test' actually do? I expected the output of 'make test' and './test/Test' to be the same, but it is not: the 'make test' output differs from the googletest output style. Although the test results look the same, I'm not satisfied with this output.
Output Differences
$ make test
Running tests...
Test project /path/to/sample/build
Start 1: MyTest
1/1 Test #1: MyTest ...........................***Failed 0.02 sec
0% tests passed, 1 tests failed out of 1
Total Test time (real) = 0.02 sec
The following tests FAILED:
1 - MyTest (Failed)
Errors while running CTest
make: *** [test] Error 8
$ ./test/Test
Running main() from gtest_main.cc
[==========] Running 2 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 2 tests from MyLibTest
[ RUN ] MyLibTest.valCheck
/path/to/test/test.cc:10: Failure
Expected: sqr(1.0)
Which is: 1
To be equal to: 2.0
Which is: 2
[ FAILED ] MyLibTest.valCheck (0 ms)
[ RUN ] MyLibTest.negativeValCheck
[ OK ] MyLibTest.negativeValCheck (0 ms)
[----------] 2 tests from MyLibTest (0 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 1 test case ran. (0 ms total)
[ PASSED ] 1 test.
[ FAILED ] 1 test, listed below:
[ FAILED ] MyLibTest.valCheck
1 FAILED TEST
Commands
mkdir build
cd build
cmake ..
make test // NOT googletest output style
./test/Test // looks like googletest output
My Environment
root
- CMakeLists.txt
+ src/
- CMakeLists.txt
- main.cc
- sqr.cc
- sqr.h
+ test/
- CMakeLists.txt
- test.cc
root/CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
project (MYTEST)
add_subdirectory(src)
add_subdirectory(test)
enable_testing()
add_test(NAME MyTest COMMAND Test)
test/CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
set (CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR})
set(GTEST_ROOT /path/to/googletest/googletest)
include_directories(${GTEST_ROOT}/include/)
link_directories(${GTEST_ROOT}/build/)
add_executable(Test ${CMAKE_CURRENT_SOURCE_DIR}/test.cc)
target_link_libraries(Test sqr gtest gtest_main pthread)
test/test.cc
#include "../src/sqr.h"
#include <gtest/gtest.h>
namespace {
class MyLibTest : public ::testing::Test{};
TEST_F(MyLibTest, valCheck) {
EXPECT_EQ(sqr(3.0), 9.0);
EXPECT_EQ(sqr(1.0), 2.0); // it fails!
}
TEST_F(MyLibTest, negativeValCheck) {
EXPECT_EQ(sqr(-3.0), 9.0);
}
}
You can modify the behaviour of ctest (which is what make test will ultimately execute) with environment variables.
For example:
CTEST_OUTPUT_ON_FAILURE=1 make test
This will print full output for test executables that had a failure.
Another one you may be interested in is CTEST_PARALLEL_LEVEL, which controls how many tests ctest runs in parallel.
I have a Grails 2.5.0 app running and this test:
package moduleextractor
import grails.test.mixin.TestFor
import spock.lang.Specification
/**
* See the API for {@link grails.test.mixin.web.ControllerUnitTestMixin} for usage instructions
*/
@TestFor(ExtractorController)
class ExtractorControllerSpec extends Specification {
def moduleDataService
def mockFile
def setup() {
moduleDataService = Mock(ModuleDataService)
mockFile = Mock(File)
}
def cleanup() {
}
void "calls the moduleDataService"() {
given: 'a term is passed'
params.termCode = termCode
when: 'the getModuleData action is called'
controller.getModuleData()
then: 'the service is called 1 time'
1 * moduleDataService.getDataFile(termCode, 'json') >> mockFile
where:
termCode = "201415"
}
}
If I run grails test-app unit:spock I get this:
| Tests PASSED - view reports in /home/foo/Projects/moduleExtractor/target/test-reports
I don't understand why it sees 2 tests. I have not included spock in my BuildConfig file as it is already included in Grails 2.5.0. Also the test is not supposed to pass, as I do not have a service yet. Why does it pass?
Also, when I run grails test-app ExtractorController I get a different result:
| Running 2 unit tests...
| Running 2 unit tests... 1 of 2
| Failure: calls the moduleDataService(moduleextractor.ExtractorControllerSpec)
| Too few invocations for:
1 * moduleDataService.getDataFile(termCode, 'json') >> mockFile (0 invocations)
Unmatched invocations (ordered by similarity):
None
at org.spockframework.mock.runtime.InteractionScope.verifyInteractions(InteractionScope.java:78)
at org.spockframework.mock.runtime.MockController.leaveScope(MockController.java:76)
at moduleextractor.ExtractorControllerSpec.calls the moduleDataService(ExtractorControllerSpec.groovy:27)
| Completed 1 unit test, 1 failed in 0m 3s
| Tests FAILED - view reports in /home/foo/Projects/moduleExtractor/target/test-reports
| Error Forked Grails VM exited with error
If I run grails test-app unit: I get:
| Running 4 unit tests...
| Running 4 unit tests... 1 of 4
| Failure: calls the moduleDataService(moduleextractor.ExtractorControllerSpec)
| Too few invocations for:
1 * moduleDataService.getDataFile(termCode, 'json') >> mockFile (0 invocations)
Unmatched invocations (ordered by similarity):
None
at org.spockframework.mock.runtime.InteractionScope.verifyInteractions(InteractionScope.java:78)
at org.spockframework.mock.runtime.MockController.leaveScope(MockController.java:76)
at moduleextractor.ExtractorControllerSpec.calls the moduleDataService(ExtractorControllerSpec.groovy:27)
| Completed 1 unit test, 1 failed in 0m 3s
| Tests FAILED - view reports in /home/foo/Projects/moduleExtractor/target/test-reports
| Error Forked Grails VM exited with error
First of all, could somebody tell me the correct syntax to run Spock tests?
Also, what is the difference between unit, unit:, and unit:spock in the command?
(Since Spock comes with Grails 2.5.0, it will run Spock tests anyway.)
What is the correct syntax, and why does it see 2 tests instead of 1?
Don't be concerned with the number of tests. It's never been a problem for me. You can always check the report HTML file to see exactly what ran.
I always run my tests with either
grails test-app
or
grails test-app ExtractorController
The error you're getting means you coded the test to expect moduleDataService.getDataFile() to get called with parameters null and 'json' when controller.getModuleData() is called. However, moduleDataService.getDataFile() never got called, so the test failed.
Spock takes some getting used to. I recommend looking at examples in the Grails documentation and reading the Spock Framework Reference.
First question: for grails test-app unit:spock, have you looked at the results to see which tests it says passed? The test count at the CLI can be wrong; check your results to see what actually ran (if no tests actually ran, then there were no failures).
Your test method doesn't start with 'test', nor does it have a @Test annotation, so void "calls the moduleDataService" isn't being seen as a Spock test case (I believe that is the reason).
When you run 'grails test-app ExtractorController', you aren't specifying that it has to be a spock test, so grails testing finds and executes the 'calls the moduleDataService' test method.
Since spock is the de facto testing framework, you can just use:
grails test-app -unit
Second question:
@TestFor creates your controller, but if you're running a unit test, then the usual grails magic isn't happening. Your controller code is executing in isolation. If your ExtractorController usually has the moduleDataService injected, you'll have to take care of that.
I work in grails 2.4.3, and here would be my interpretation of your test (assuredly in need of tweaking since I'm inferring a lot in this example):
import grails.test.mixin.TestFor
import grails.test.mixin.Mock
import spock.lang.Specification
import some.pkg.ModuleDataService // if necessary
import some.pkg.File // if necessary
@TestFor(ExtractorController)
@Mock([ModuleDataService, File])
class ExtractorControllerSpec extends Specification {
def "test callsModuleDataService once for a termCode"() {
setup:
def mockFile = mockFor(File)
def mockService = mockFor(ModuleDataService, true) // loose mock
// in this mockService, we expect getDataFile to be called
// just once, with two parameters, and it'll return a mocked
// file
mockService.demand.getDataFile(1) { String termCode, String fmt ->
return mockFile.createMock()
}
controller.moduleDataService = mockService.createMock()
when:
controller.params.termCode = "201415"
controller.getModuleData()
then:
response.status == 200 // all good?
}
}
Last question: is that a Banner term code? (just curious)