C++ Google Test aborts on Ubuntu when Notify() is called multiple times on a Notification object

I use Google Test and Google Mock.
There is a mock object on which I expect a call to OnConnectionError(), which notifies the absl::Notification object done three times.
absl::Notification done;
EXPECT_CALL(*client, OnConnectionError(::testing::_)).Times(3)
.WillRepeatedly(Notify(&done));
bool result = client->ConnectToServer("localhost", 5000, 2);
done.WaitForNotificationWithTimeout(absl::Seconds(30));
The method client->ConnectToServer contains a loop that results in repeated calls to OnConnectionError, which is fine and the desired behaviour.
On Windows the unit test passes. When Jenkins runs it on Ubuntu, it aborts the whole test run (not just failing one test!) with the following output.
[notification.cc : 32] RAW: Notify() method called more than once for Notification object 0x7ffffde87320
Is it not allowed to notify the Notification object multiple times? Why does the test succeed on Windows but abort on Ubuntu?
Many thanks for your support!

I found the answer myself:
I reviewed the relevant source of Google Abseil. In notification.cc I found the error message; the respective code is guarded by an
#ifndef NDEBUG
I edited the CMakeLists.txt file to rebuild in Release mode by adding the line set(CMAKE_BUILD_TYPE Release), so NDEBUG is defined at compile time.
As a consequence, though not directly connected to this issue, I refactored the code under test to avoid the loop that notifies the absl::Notification object multiple times, since this issue showed me that the code needed improvement.
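For completeness, another way to keep the loop but have the expectation notify only once could look roughly like this (a sketch only; error_calls is an illustrative helper added here, and the mock and Notification are the ones from the question):
std::atomic<int> error_calls{0};   // requires <atomic>
absl::Notification done;
EXPECT_CALL(*client, OnConnectionError(::testing::_))
    .Times(3)
    .WillRepeatedly(::testing::Invoke([&](const auto& /*error*/) {
      // Notify exactly once, on the third and last expected call.
      if (++error_calls == 3) done.Notify();
    }));
Notify() then fires exactly once, so the NDEBUG-only check in notification.cc no longer aborts the run.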

Related

Tell Google Test to resume executing the rest of the tests after a crashed test

I have a large unit test suite written in C++ using Google Test.
I have recently made a change to the codebase which may affect different parts of the system, so various tests should now probably fail or even crash. I would like to run the entire suite once (which unfortunately takes a long time to complete), summarize the list of failed tests, and fix them one by one.
However, whenever a test crashes (e.g. with a segmentation fault) as opposed to simply logically failing, it seems that GTest stops and executes no more tests.
I can then fix the crashed test; however, rerunning the entire suite will again take a long time.
Is there a way to tell GTest to resume executing the rest of the tests after a test has crashed?
Or, alternatively, at least a way to launch GTest starting from a particular test (assuming the order of the tests is always the same)?
If you need to test whether an assertion is triggered when an API is used incorrectly, gtest provides something called a DEATH TEST.
If your test crashed because of a segmentation fault, you should fix that ASAP! You can disable the test temporarily by adding the DISABLED_ prefix, or by adding GTEST_SKIP() in the test body. Alternatively, there is also the command-line argument --gtest_filter=<colon separated positive patterns>[:-<colon separated negative patterns>]. There is no way to recover from a segmentation fault, so the test suite can't continue.
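For illustration, a minimal sketch of those options (FooTest and Frobnicate are made-up names; Frobnicate stands in for your own API):
#include <gtest/gtest.h>

// 1) Disable the test until it is fixed; it can still be forced with --gtest_also_run_disabled_tests.
TEST(FooTest, DISABLED_CrashesUntilFixed) { /* ... */ }

// 2) Or skip it at runtime.
TEST(FooTest, NotReadyYet)
{
    GTEST_SKIP() << "crashes with a segfault, fix pending";
}

// 3) Death test: verify that misusing the API really aborts the process.
TEST(FooDeathTest, AbortsOnNullInput)
{
    EXPECT_DEATH(Frobnicate(nullptr), "must not be null");
}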
If you use gcc or clang (msvc has this feature experimentally) you can enable AddressSanitizer to quickly detect all memory issues in your tested code. You will be able to fix those issues faster.
There are handy IDE plugins for gtest that should help you track which tests were run, which failed, and which crashed.
Google Test is not able to do what you need. I'd suggest you write a simple test runner that:
Runs the test executable with --gtest_list_tests to get a list of all tests.
Runs a loop through all tests that prints out the test number and runs the test executable with --gtest_filter=FooTest.Bar to invoke only one test in each loop iteration.
The loop can skip the required number of iterations and resume from test number N after the test with number N has been fixed.
You only need to write such a runner script once, and it shouldn't be hard.
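For illustration, here is a rough single-file sketch of such a runner in C++ (a sketch under assumptions: POSIX popen, a test binary called ./unit_tests, and naive parsing of --gtest_list_tests output that ignores parameterised-test annotations):
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <string>
#include <vector>

// Collect "Suite.Test" names by parsing the binary's --gtest_list_tests output.
std::vector<std::string> ListTests(const std::string& binary) {
  std::vector<std::string> tests;
  FILE* pipe = popen((binary + " --gtest_list_tests").c_str(), "r");
  if (!pipe) return tests;
  char buffer[512];
  std::string suite;
  while (std::fgets(buffer, sizeof(buffer), pipe)) {
    std::string line(buffer);
    while (!line.empty() && (line.back() == '\n' || line.back() == '\r')) line.pop_back();
    if (line.empty()) continue;
    if (line[0] != ' ') suite = line;                 // "SuiteName."
    else tests.push_back(suite + line.substr(2));     // "  TestName" -> "SuiteName.TestName"
  }
  pclose(pipe);
  return tests;
}

int main(int argc, char** argv) {
  const std::string binary = argc > 1 ? argv[1] : "./unit_tests";
  const std::size_t start = argc > 2 ? std::atoi(argv[2]) : 0;   // resume from test number N
  const std::vector<std::string> tests = ListTests(binary);
  for (std::size_t i = start; i < tests.size(); ++i) {
    std::cout << "[" << i << "] " << tests[i] << std::endl;
    const int rc = std::system((binary + " --gtest_filter=" + tests[i]).c_str());
    if (rc != 0) std::cout << "  -> failed or crashed (exit status " << rc << ")" << std::endl;
  }
  return 0;
}
Since every test runs in its own process, a crash only kills that one invocation and the loop simply moves on to the next test.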

CLIPS system halted and not continuing to execute

I am integrating the CLIPS expert system following the APG docs (thanks for the great docs). I have successfully integrated CLIPS into my C++ project. My application runs continuously, feeds facts to the CLIPS system using the EnvAssert method, and invokes EnvRun. Everything works fine until I receive this error.
[PRNTUTIL7] Attempt to divide by zero in / function.
[DRIVE1] This error occurred in the join network
Problem resides in associated join
Of pattern #1 in rule RULE-1
[PRCCODE4] Execution halted during the actions of defrule RULE-2.
Once I receive this error, further asserts still work but Run seems not to work any more; I am sure there are matching rules available, but still no rules are fired on the next Run.
I understand the error and I can fix it, but I cannot understand the behaviour. So I tested it in the CLIPS console: there, when the error was reported, consecutive Runs kept working as I expected, but not in the case of my application. I want to know the underlying difference.
Reference pseudo-code of the application:
<code to create and initialize CLIPS environment>
EnvReset()
While(true)
{
<my code to get facts>
EnvAssert(Fact)
EnvRun(-1)
<my code to receive the generated result facts>
}
Note: I don't call RESET before every RUN.
Fixes for resetting the error flags for API calls have been checked into the subversion repository on sourceforge: https://sourceforge.net/p/clipsrules/code/HEAD/tree/branches/
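Until you pick up those fixes, one possible workaround is to clear the halt and evaluation-error flags yourself before each run. This is only a sketch, assuming a CLIPS 6.30-style C API that exposes EnvSetHaltExecution and EnvSetEvaluationError; the asserted fact string is illustrative:
while (true)
{
   /* <my code to get facts> */
   EnvAssertString(theEnv, "(sensor-reading 42)");
   /* clear state left over from a previous rule error so the next Run fires again */
   EnvSetHaltExecution(theEnv, FALSE);
   EnvSetEvaluationError(theEnv, FALSE);
   EnvRun(theEnv, -1);
   /* <my code to receive the generated result facts> */
}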

Testify is seemingly running test suites concurrently?

Basically I created a new test file in a particular package with some bare-bones test structure - no actual tests... just an empty struct type that embeds suite.Suite, and a function that takes in a *testing.T object and calls suite.Run() on said struct. This immediately caused all our other tests to start failing non-deterministically.
The nature of the failures was associated with database unique-key integrity violations on inserts and deletes into a single Postgres DB. This leads me to believe that the tests were being run concurrently without calling our setup methods to prepare the environment properly between tests.
Needless to say, the moment I move this test file to another package, everything magically works!
Has anyone else run into this problem before and can possibly provide some insights?
What I've found from my use is that "go test" runs a single package's test cases sequentially (unless t.Parallel() is called), but if you supply multiple packages (go test ./foo ./bar ./baz), each package's tests are run in parallel with the other packages. Definitely caused similar headaches with database testing for me.
As it turns out, this is a problem rooted in how go test works, and has nothing to do with testify. Our tests were being run with ./..., which causes the underlying go test tool to run the tests of each package in parallel, as justinas pointed out. After digging around more on StackOverflow and reading through testify's active issue on this problem, it seems that the best immediate solution is to use the -p=1 flag (e.g. go test -p 1 ./...) to limit the number of packages run in parallel.
However, it is still unexplained why the tests consistently passed prior to adding these new packages. A hunch is that perhaps the packages/test files were sorted and run in such an order that concurrency wasn't an issue before the new packages/files were added.

VS2012 - Disable parallel test runs

I've got some unit tests (c++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means that it should never be a concern whether the tests execute synchronously or asynchronously.
Of course there are test cases that expect some fundamental piece of code to work; this might be your own code or part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and ensure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services, such as HTTP requests, file streams etc.
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. Names for NUnit, MSTest and xUnit.net can be found on the xUnit.net CodePlex page.
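In the VS C++ test framework from the question, for example, the per-test hooks look roughly like this (DatabaseTests and its contents are made-up names for illustration):
#include "CppUnitTest.h"
using namespace Microsoft::VisualStudio::CppUnitTestFramework;

TEST_CLASS(DatabaseTests)
{
public:
    TEST_METHOD_INITIALIZE(SetUp)       // runs before every TEST_METHOD in this class
    {
        // prepare a clean fixture, e.g. open a fresh test database
    }

    TEST_METHOD_CLEANUP(TearDown)       // runs after every TEST_METHOD in this class
    {
        // undo whatever SetUp did
    }

    TEST_METHOD(InsertsOneRow)
    {
        Assert::IsTrue(true);           // placeholder assertion
    }
};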
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this:
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test whether the read method returns the desired result (object, string, or however you've designed it)
create a mock config of what you expect from the read method and test whether the verify method accepts it
at this point, you should create multiple mock configs covering all possible scenarios, to see whether it works for each of them, and fix the code accordingly. This also feeds into code coverage.
create a mock object of an accepted config and use the setter to update the user's config, then use the getter to check whether it was set correctly
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work correctly. Additional tests, for example End-to-End (E2E) tests, aren't necessarily needed; I use them only to assure that the whole application flow works and to easily catch errors (e.g. an HTTP connection error).
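A compact sketch of the read/verify/update steps above in the same VS C++ framework (UserSettings and IsValidConfig are invented purely for this illustration):
#include "CppUnitTest.h"
#include <string>
using namespace Microsoft::VisualStudio::CppUnitTestFramework;

// Hypothetical code under test, invented for this illustration.
struct UserSettings
{
    std::string theme;
    void Set(const std::string& value) { theme = value; }
    std::string Get() const { return theme; }
};

static bool IsValidConfig(const std::string& config) { return !config.empty(); }

TEST_CLASS(ConfigTests)
{
public:
    TEST_METHOD(AcceptsMockConfig)
    {
        const std::string mockConfig = "theme=dark";   // the config we expect the read method to produce
        Assert::IsTrue(IsValidConfig(mockConfig));
    }

    TEST_METHOD(RejectsEmptyConfig)
    {
        Assert::IsFalse(IsValidConfig(""));
    }

    TEST_METHOD(UpdatesUserSettingsFromConfig)
    {
        UserSettings settings;
        settings.Set("dark");                          // setter under test
        Assert::IsTrue(settings.Get() == "dark");      // getter confirms it was applied
    }
};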

Google Closure Javascript testing, disable autodiscover tests

Currently I am implementing the Google Closure testing possibilities.
It works like a charm.
I define the TestCase by hand and add the tests by hand. I also create a separate runner for the tests so I can catch all the results and pass them to another function.
This function sends the results through AJAX to PHP so the results can be logged in the database (this also works as expected).
The problem, however, is that because I do this, when I load the page in the browser the tests get executed twice (once because of the auto-discovery and once because I defined them in the TestCase).
I would like to disable the auto-discovery, but I don't want to change the flag in the Closure library itself, because when the library gets updated we would need to reset the flag to false again.
So how can I disable auto-discovery without modifying the code in the Closure library?
Thanks in advance!
If you look into jsunit.js, you'll see that goog.testing.jsunit.AUTO_RUN_ONLOAD = true; is hard-coded there, and you can override this variable only through the Closure Compiler's define.
If you don't compile your test code (I don't, for speed of iteration), the only option seems to be to change this to false and redo the change on Closure library updates.