Boost data-driven test output: "Assertion occurred in a following context" - c++

I have a question regarding the output of the following minimal example that uses Boost test.
#define BOOST_TEST_MODULE ExampleTestSuite
#include <boost/test/included/unit_test.hpp>
#include <boost/test/unit_test.hpp>
using boost::unit_test::test_suite;
using boost::unit_test::framework::master_test_suite;
#include <boost/test/data/test_case.hpp>
namespace bdata = boost::unit_test::data;
BOOST_DATA_TEST_CASE(ExampleTest, bdata::xrange(2), testDatum) {
    int exp = testDatum == 0 ? 0 : 1;
    BOOST_CHECK_EQUAL(testDatum, exp);
}
If I run this test with ./boost_testing --log_level=all, I get the following output:
Running 2 test cases...
Entering test module "ExampleTestSuite"
boost_testing.cpp(11): Entering test suite "ExampleTest"
boost_testing.cpp(11): Entering test case "_0"
boost_testing.cpp(15): info: check testDatum == exp has passed
Assertion occurred in a following context:
testDatum = 0;
boost_testing.cpp(11): Leaving test case "_0"
boost_testing.cpp(11): Entering test case "_1"
boost_testing.cpp(15): info: check testDatum == exp has passed
Assertion occurred in a following context:
testDatum = 1;
boost_testing.cpp(11): Leaving test case "_1"
boost_testing.cpp(11): Leaving test suite "ExampleTest"
Leaving test module "ExampleTestSuite"
*** No errors detected
What is the meaning of the output line Assertion occurred in a following context: testDatum = 0;? Does it indicate an issue with the way that I have set up the data-driven test case or can I safely ignore it?

I agree the message is confusing: it is in fact the description of the assertion context for the check BOOST_CHECK_EQUAL(testDatum, exp). It is not describing an error, just the context attached to the check. If you change --log_level=all to something else, or use another output format, it should go away.
Feel free to raise an issue on the Boost.Test project.
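For example, running the same binary with a less verbose log level (a sketch; which level you pick below all is up to you):
./boost_testing --log_level=test_suite
At that level the per-assertion context lines should no longer be printed, while the test suite enter/leave messages are kept.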

Monkey Patching in Go shows different results when running from CLI

I am using monkey patching in Go. When I debug the following code in VSCode, it shows that the function proc.Signal returns the programmed error.
func TestCheckProcessRunning(t *testing.T) {
    monkey.Patch((*os.Process).Signal, func(p *os.Process, sig os.Signal) error {
        return errors.New("Signal failed")
    })
    proc := &os.Process{}
    sig_e := proc.Signal(syscall.Signal(0))
    fmt.Printf("%s\n", sig_e)
}
Signal failed
But when I run the test with go test ., the patch is no longer applied and I get a different error:
os: process not initialized
Any idea what I am doing wrong?
It seems the test needs to be compiled with inlining disabled, which is what the -gcflags=-l flag does:
go test -gcflags=-l .
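Why this helps (my reading, not something the original answer spells out): the monkey library patches the target function by rewriting it in memory, so if the compiler inlines the call the patched code is never reached; -gcflags=-l disables inlining for the packages being compiled. If the function you patch lives in a dependency, you may additionally need to disable inlining across the whole build, for example:
go test -gcflags=all=-l .
(the all= pattern prefix requires Go 1.10 or newer).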

Too many parts after splitting with regexes

I'm trying to parse some logs using split and regexes in PowerShell.
Here's my code:
$string = "Starting ChromeDriver 78.0.3904.70Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code. Test 229: Passed Test 260: Failed. Error message: Status: Test case failed. Steps: Navigate to: PurchReqTableListPage (purchreqpreparedbyme) Use the Quick Filter to find records. For example, filter on the Purchase requisition fION()</StackTrace> </Error> Playback results: Tests: 2 Passed: 1 Failed: 1"
$string -Split '(Test (\d)+:)'
Result:
Starting ChromeDriver 78.0.3904.70Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
Test 229:
9
Passed
Test 260:
0
Failed. Error message: Status: Test case failed. Steps: Navigate to: PurchReqTableListPage (purchreqpreparedbyme) Use the Quick Filter to find records. For example, filter on the Purchase requisition fION()</StackTrace> </Error> Playback results: Tests: 2 Passed: 1 Failed: 1
Expected result:
Starting ChromeDriver 78.0.3904.70Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
Test 229:
Passed
Test 260:
Failed. Error message: Status: Test case failed. Steps: Navigate to: PurchReqTableListPage (purchreqpreparedbyme) Use the Quick Filter to find records. For example, filter on the Purchase requisition fION()</StackTrace> </Error> Playback results: Tests: 2 Passed: 1 Failed: 1
On this site, https://regexr.com/3c0lf, I tried this regex and the groups captured were Test 260: and Test 229: (which is exactly what I want).
I do not understand where the 0 and the 9 come from.
Thanks a lot
Those are the last digits of the numbers: the 0 from 260 and the 9 from 229.
You are seeing those because you've created an additional capturing group by putting parentheses around the digits. Just remove them like so:
$string -Split '(Test \d+:)'
You probably don't even need those parentheses either, leaving just
$string -Split 'Test \d+:'
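For illustration, and assuming the same $string as above, here is a rough sketch of the two variants. With -split, any text captured by a group in the delimiter pattern is kept in the resulting array, which is why removing the inner (\d) group makes the stray digits disappear:
$string -Split '(Test \d+:)'   # single group: the 'Test NNN:' headers stay in the output array
$string -Split 'Test \d+:'     # no group: only the text between the markers is returned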

CakePHP 3.0 unit testing issue

When I execute the plugins/SamplePlugin test cases, everything passes except the controller tests that hit URLs.
The test case looks like this:
public function testIndex()
{
    $this->get('/sample-plugin /mycontroller/index');
    $this->assertResponseOk();
}
When I execute the above test case, I get this exception:
There was 1 error:
1) SamplePlugin\Test\TestCase\Controller\MyControllerTest::test
Index
include(D:\xampp\htdocs\EATZ_V2_3.X\vendor\cakephp\cakephp\tests\test_app\config\routes.php): failed to open stream: No such file or directory
D:\xampp\htdocs\MyApp\vendor\cakephp\cakephp\src\Routing\Router.php:974
D:\xampp\htdocs\MyApp\vendor\cakephp\cakephp\src\Routing\Router.php:974
D:\xampp\htdocs\MyApp\vendor\cakephp\cakephp\src\Routing\Router.php:547
D:\xampp\htdocs\MyApp\vendor\cakephp\cakephp\src\TestSuite\IntegrationTestCase.php:451
D:\xampp\htdocs\MyApp\vendor\cakephp\cakephp\src\TestSuite\IntegrationTestCase.php:392
D:\xampp\htdocs\MyApp\vendor\cakephp\cakephp\src\TestSuite\IntegrationTestCase.php:312
D:\xampp\htdocs\MyApp\vendor\cakephp\cakephp\src\TestSuite\IntegrationTestCase.php:233
D:\xampp\htdocs\MyApp\plugins\SamplePlugin\tests\TestCase\Controller\MyControllerTest.php:29
D:\xampp\php\pear\PHPUnit\TextUI\Command.php:176
D:\xampp\php\pear\PHPUnit\TextUI\Command.php:129
FAILURES!
Tests: 30, Assertions: 42, Errors: 1.
Please help me resolve the issue. Thanks in advance!
The error states that the routes.php file is missing from the config folder. Refer to the CakePHP 3 documentation to create a meaningful routes.php.
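As a starting point, a minimal config/routes.php in the style of the CakePHP 3 application skeleton might look like the sketch below. The plugin scope is an assumption based on the URL used in the test; adjust the plugin and route class names to your application:
<?php
use Cake\Routing\Router;

// Default application routes, falling back to /controller/action conventions.
Router::scope('/', function ($routes) {
    $routes->fallbacks('DashedRoute');
});

// Routes for the plugin under test, served under /sample-plugin.
Router::plugin('SamplePlugin', ['path' => '/sample-plugin'], function ($routes) {
    $routes->fallbacks('DashedRoute');
});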

What is the correct Spock syntax for Grails?

I have a Grails 2.5.0 app running and this test:
package moduleextractor

import grails.test.mixin.TestFor
import spock.lang.Specification

/**
 * See the API for {@link grails.test.mixin.web.ControllerUnitTestMixin} for usage instructions
 */
@TestFor(ExtractorController)
class ExtractorControllerSpec extends Specification {

    def moduleDataService
    def mockFile

    def setup() {
        moduleDataService = Mock(ModuleDataService)
        mockFile = Mock(File)
    }

    def cleanup() {
    }

    void "calls the moduleDataService"() {
        given: 'a term is passed'
        params.termCode = termCode

        when: 'the getModuleData action is called'
        controller.getModuleData()

        then: 'the service is called 1 time'
        1 * moduleDataService.getDataFile(termCode, 'json') >> mockFile

        where:
        termCode = "201415"
    }
}
If I run grails test-app unit:spock I get this:
| Tests PASSED - view reports in /home/foo/Projects/moduleExtractor/target/test-reports
I don't understand why it sees 2 tests. I have not included Spock in my BuildConfig file, as it is already included in Grails 2.5.0. Also, the test is not supposed to pass, as I do not have a service yet. Why does it pass?
Also, when I run grails test-app ExtractorController I get another result:
| Running 2 unit tests...
| Running 2 unit tests... 1 of 2
| Failure: calls the moduleDataService(moduleextractor.ExtractorControllerSpec)
| Too few invocations for:
1 * moduleDataService.getDataFile(termCode, 'json') >> mockFile (0 invocations)
Unmatched invocations (ordered by similarity):
None
at org.spockframework.mock.runtime.InteractionScope.verifyInteractions(InteractionScope.java:78)
at org.spockframework.mock.runtime.MockController.leaveScope(MockController.java:76)
at moduleextractor.ExtractorControllerSpec.calls the moduleDataService(ExtractorControllerSpec.groovy:27)
| Completed 1 unit test, 1 failed in 0m 3s
| Tests FAILED - view reports in /home/foo/Projects/moduleExtractor/target/test-reports
| Error Forked Grails VM exited with error
If I run grails test-app unit: I get:
| Running 4 unit tests...
| Running 4 unit tests... 1 of 4
| Failure: calls the moduleDataService(moduleextractor.ExtractorControllerSpec)
| Too few invocations for:
1 * moduleDataService.getDataFile(termCode, 'json') >> mockFile (0 invocations)
Unmatched invocations (ordered by similarity):
None
at org.spockframework.mock.runtime.InteractionScope.verifyInteractions(InteractionScope.java:78)
at org.spockframework.mock.runtime.MockController.leaveScope(MockController.java:76)
at moduleextractor.ExtractorControllerSpec.calls the moduleDataService(ExtractorControllerSpec.groovy:27)
| Completed 1 unit test, 1 failed in 0m 3s
| Tests FAILED - view reports in /home/foo/Projects/moduleExtractor/target/test-reports
| Error Forked Grails VM exited with error
First of all, could somebody tell me the correct syntax to run Spock tests?
Also, what is the difference between unit, unit:, and unit:spock in the command?
(Since Spock comes with Grails 2.5.0, it will run Spock tests anyway.)
What is the correct syntax, and why does it see 2 tests instead of 1?
Don't be concerned with the number of tests. It's never been a problem for me. You can always check the report HTML file to see exactly what ran.
I always run my tests with either
grails test-app
or
grails test-app ExtractorController
The error you're getting means you coded the test to expect moduleDataService.getDataFile(termCode, 'json') to be called exactly once when controller.getModuleData() is called. However, moduleDataService.getDataFile() never got called (0 invocations), so the test failed.
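One likely cause, though this is a guess since the controller code isn't shown: the mock is created in setup() but never attached to the controller, so the controller keeps using whatever service it already has (or none at all). In a Grails unit test you normally inject it yourself, for example:
def setup() {
    moduleDataService = Mock(ModuleDataService)
    mockFile = Mock(File)
    // wire the mock into the controller under test so the interaction can be verified
    controller.moduleDataService = moduleDataService
}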
Spock takes some getting used to. I recommend looking at examples in the Grails documentation and reading the Spock Framework Reference.
First question: for grails test-app unit:spock, have you looked at the report to see which tests it says passed? The test count at the CLI can be wrong; check your results to see what actually ran (if no tests actually ran, then there were no failures).
Your test method doesn't start with 'test', nor does it have a @Test annotation, so the void "calls the moduleDataService" method isn't being seen as a Spock test case (I believe that is the reason).
When you run grails test-app ExtractorController, you aren't specifying that it has to be a Spock test, so Grails testing finds and executes the 'calls the moduleDataService' test method.
Since Spock is the de facto testing framework, you can just use:
grails test-app -unit
Second question:
@TestFor creates your controller, but if you're running a unit test, the usual Grails magic isn't happening. Your controller code is executing in isolation. If your ExtractorController usually has the moduleDataService injected, you'll have to take care of that yourself.
I work in grails 2.4.3, and here would be my interpretation of your test (assuredly in need of tweaking since I'm inferring a lot in this example):
import grails.test.mixin.TestFor
import grails.test.mixin.Mock
import spock.lang.Specification
import some.pkg.ModuleDataService // if necessary
import some.pkg.File // if necessary

@TestFor(ExtractorController)
@Mock([ModuleDataService, File])
class ExtractorControllerSpec extends Specification {

    def "test callsModuleDataService once for a termCode"() {
        setup:
        def mockFile = mockFor(File)
        def mockService = mockFor(ModuleDataService, true) // loose mock
        // in this mockService, we expect getDataFile to be called
        // just once, with two parameters, and it'll return a mocked
        // file
        mockService.demand.getDataFile(1) { String termCode, String fmt ->
            return mockFile.createMock()
        }
        controller.moduleDataService = mockService.createMock()

        when:
        controller.params.termCode = "201415"
        controller.getModuleData()

        then:
        response.status == 200 // all good?
    }
}
Last question: is that a Banner term code? (just curious)

Suppressing stack trace when Rails tests error

I'm a Ruby on Rails newbie and writing tests. Some of these generate exceptions; I would like the "rake test" output to give me the exception error message but not the whole backtrace. (I'd like to write tests which exercise unimplemented functionality, which I'll then fill in.)
For example, actual output:
Started
E
Finished in 0.081054 seconds.
1) Error:
test_should_fail(VersioningTest):
ActiveRecord::StatementInvalid: PGError: ERROR: null value in column "client_ip" violates not-null constraint
: INSERT INTO "revisions" ("created_at", "id") VALUES ('2011-02-03 20:14:17', 980190962)
/Users/rpriedhorsky/.rvm/gems/ruby-1.9.2-p136/gems/activerecord-3.0.3/lib/active_record/connection_adapters/abstract_adapter.rb:202:in `rescue in log'
/Users/rpriedhorsky/.rvm/gems/ruby-1.9.2-p136/gems/activerecord-3.0.3/lib/active_record/connection_adapters/abstract_adapter.rb:194:in `log'
/Users/rpriedhorsky/.rvm/gems/ruby-1.9.2-p136/gems/activerecord-3.0.3/lib/active_record/connection_adapters/postgresql_adapter.rb:496:in `execute'
[... etc. etc. etc. ...]
1 tests, 0 assertions, 0 failures, 1 errors, 0 skips
Desired output:
Started
E
Finished in 0.081054 seconds.
1) Error:
test_should_fail(VersioningTest):
ActiveRecord::StatementInvalid: PGError: ERROR: null value in column "client_ip" violates not-null constraint
1 tests, 0 assertions, 0 failures, 1 errors, 0 skips
I found information about going in the opposite direction, but not on suppressing stack traces.
Edit:
It would be nice to turn them on and off easily; as pointed out below, sometimes they are useful for tracking down bugs.
You could take a look at "backtrace silencers" - for me (Rails 2.3.8), this is the file config/initializers/backtrace_silencers.rb:
# Be sure to restart your server when you modify this file.
# You can add backtrace silencers for libraries that you're using but
# don't wish to see in your backtraces.
# Rails.backtrace_cleaner.add_silencer { |line| line =~ /my_noisy_library/ }
# You can also remove all the silencers if you're trying to debug a
# problem that might stem from framework code.
# Rails.backtrace_cleaner.remove_silencers!
Rails.backtrace_cleaner.add_silencer {|line| line =~ /gems/}
Rails.backtrace_cleaner.add_silencer {|line| line =~ /passenger/}
It looks like you should be able to put a line like
Rails.backtrace_cleaner.add_silencer {|line| true}
in your config/environments/test.rb file, and that would wipe your backtraces clean away (though it might just apply to the logger - I'm not very familiar with the method).
But ask yourself - do you really want to do away with backtraces entirely? They can be pretty useful for tracking down bugs...
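To address the "turn them on and off easily" part of the question, one option is to guard the silencer with an environment variable. This is only a sketch; BACKTRACE is an illustrative variable name, not a Rails convention:
# config/environments/test.rb
unless ENV["BACKTRACE"]
  Rails.backtrace_cleaner.add_silencer { |line| true }
end
Then a plain rake test gives the short output, while BACKTRACE=1 rake test restores the full traces when you need them for bug hunting.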