Go not running some tests - unit-testing

I'm new to Go and I'm trying to run some tests from a Go app.
The tests run in a Docker container with Go 1.12.
My problem is that some tests appear to run correctly, but others do not.
Example:
I have a test function that I wrote to fail on purpose.
func TestLol(t *testing.T) {
    assert.EqualValues(t, 1, 2) // assert is github.com/stretchr/testify/assert
    t.Fail()
}
When I run the container with "docker run ... go test -v ./...", it should run all tests, and this function in particular should fail. Instead, it doesn't fail: Go just logs "ok" next to the test package.
Then I tried to run only the folder containing the test file that should fail.
Log:
ok github.com/eventials/csw-notifications/services 0.016s
2021/09/25 21:08:44 Command finished successfully.
Tests exited with status code: 0
Stopping csw-notifications_db_1 ... done
Stopping csw-notifications_broker_1 ... done
Going to remove csw-notifications_app_run_ed70597b5c20, csw-notifications_db_1, csw-notifications_broker_1
Removing csw-notifications_app_run_ed70597b5c20 ... done
Removing csw-notifications_db_1 ... done
Removing csw-notifications_broker_1 ... done
My question is: why doesn't Go output a FAIL message for this file in particular?
I think it's somewhat related to this question, but as it didn't receive any answer, I'm reposting it.
Why the tests are not running ? ( Golang ) - goapp test - bug?
EDIT: I'm editing this question to make it clearer.

You can try adding -timeout.
If your test files contain Test functions with the same names, rename them.
You can also try changing the output format with go test -v -json ./... (see the sample events after the quoted docs below).
-timeout d
If a test binary runs longer than duration d, panic.
If d is 0, the timeout is disabled.
The default is 10 minutes (10m).
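For illustration, go test -json emits a stream of JSON events, one per test action, which makes it easy to see per-test results even when the plain output only prints "ok". The lines below are a rough sketch of what the events for the failing test could look like; the timestamps and values are made up, following the format described in go doc test2json:

{"Time":"2021-09-25T21:08:44Z","Action":"run","Package":"github.com/eventials/csw-notifications/services","Test":"TestLol"}
{"Time":"2021-09-25T21:08:44Z","Action":"output","Package":"github.com/eventials/csw-notifications/services","Test":"TestLol","Output":"--- FAIL: TestLol (0.00s)\n"}
{"Time":"2021-09-25T21:08:44Z","Action":"fail","Package":"github.com/eventials/csw-notifications/services","Test":"TestLol","Elapsed":0}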

main.go
package main

import "fmt"

func Start(s ...string) {
    fmt.Println(s)
}
main_test.go
package main

import (
    "fmt"
    "testing"
)

func TestStart(t *testing.T) {
    Start("rocket")
    if 0 == 1 {
        t.Log("Houston, we have a problem")
        t.Fail()
    }
}

func ExampleStart() {
    fmt.Println("Ground Control to Major Tom")
    // Output:
    // Ground Control to Major Tom
}
Change the if condition to 0 == 0 to see the failure along with its logs.
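To run just that one test verbosely, go test's -run flag takes a regular expression matched against test names:

go test -v -run TestStart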

I found out that an import was causing this issue.
Every test file importing porthos-go or porthos-go/mock wasn't being run. Removing those imports fixed my problem.
lib: https://github.com/porthos-rpc/porthos-go
I still don't know why, but when I find out I'll update this answer.

Related

GoLand not showing individual test results after test run

I am trying to fix some unit tests in GoLand using the (I believe) standard "testing" package in Go, but I'm having trouble figuring out which test is failing. After I run the tests, nothing is shown in the test results dropdown; it is just empty (see below).
I wrote a dummy test that just prints "here" to check whether even a trivial test would show up, and even then I get no test results in the explorer. The test passes and prints the expected output.
func Test_ResultsShow(t *testing.T) {
    println("here")
}
=== RUN Test_ResultsShow
here
--- PASS: Test_ResultsShow (0.00s)
PASS
Process finished with the exit code 0
Additionally, when I try to run my larger suite of tests, the numbers of passed (24) and failed (1) tests don't add up to the total number of tests indicated (26). I see no indication of any test failure in the test output either, and I've run all the tests individually to see which one is failing, but all of them succeeded.
The blacked-out section below covers the repository name. But the individual test names are not shown below it (though the output confirms they run).

Command for karma-jasmine to stop unit-test after first fail

Is there any command for karma-jasmine unit tests to stop the run when it encounters the first failing test? For example, in Python the commands look like:
py.test -x # stop after first failure
py.test --maxfail=2 # stop after two failures
Currently I am using node_modules/karma/bin/karma start, which runs all the tests and stops only after everything has executed.
This would require creating a custom reporter, or changing the reporter in the karma-jasmine adapter to stop on spec failure, like so:
this.specDone = function (specResult)
{
    var failure = specResult.failedExpectations.length;
    if (failure)
    {
        // end the current suite and the whole run as soon as one spec fails
        suiteDone();
        jasmineDone();
    }
}
References
jasmine.io: custom_reporter.js
karma-jasmine source: adapter.js
Jasmine Issue #842: Async reporter hooks
Protractor Issue #1938: Find a good pattern for waiting for Jasmine Reporters
Alternatively, you can just tell Jasmine to run a specific spec or the specs in a folder (for example with Jasmine's focused specs, fdescribe/fit), so you only test a subset of your suite instead of running everything.

Go: how to run tests for multiple packages?

I have multiple packages under a subdirectory under src/;
running the tests for each package with go test works fine.
When I try to run all tests with go test ./..., the tests run, but they fail.
The tests run against local database servers; each test file has global variables with DB pointers.
I tried running the tests with -parallel 1 to prevent contention in the DB, but the tests still fail.
What could the issue be here?
EDIT: Some tests fail on missing DB entries, even though I completely clear the DB before and after each test. The only reason I can think of for this is some contention between tests.
EDIT 2:
Each one of my test files has two global variables (using mgo):
var session *mgo.Session
var db *mgo.Database
It also has the following setup and teardown functions:
func setUp() {
    s, err := cfg.GetDBSession()
    if err != nil {
        panic(err)
    }
    session = s
    db = cfg.GetDB(session)
    db.DropDatabase()
}

func tearDown() {
    db.DropDatabase()
    session.Close()
}
Each test starts with setUp() and defer tearDown(), along the lines of the sketch below.
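A minimal sketch of that pattern (TestSomething is a placeholder name, not one of my actual tests):

func TestSomething(t *testing.T) {
    setUp()          // fresh session, empty database
    defer tearDown() // drop the database and close the session

    // ... test logic using the global db ...
}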
And cfg is:
package cfg

import (
    "labix.org/v2/mgo"
)

func GetDBSession() (*mgo.Session, error) {
    session, err := mgo.Dial("localhost")
    return session, err
}

func GetDB(session *mgo.Session) *mgo.Database {
    return session.DB("test_db")
}
EDIT 3:
I changed cfg to use a random database, and the tests passed.
It seems that the tests from multiple packages run somewhat in parallel.
Is it possible to force go test to run everything sequentially across packages?
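A minimal sketch of that cfg change (not the exact code; the point is that the database name is derived once per test binary, so packages running in parallel stop sharing a database):

package cfg

import (
    "fmt"
    "time"

    "labix.org/v2/mgo"
)

// dbName is computed once per test binary, so each package's tests
// get their own database even when packages run in parallel.
var dbName = fmt.Sprintf("test_db_%d", time.Now().UnixNano())

func GetDBSession() (*mgo.Session, error) {
    return mgo.Dial("localhost")
}

func GetDB(session *mgo.Session) *mgo.Database {
    return session.DB(dbName)
}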
Update: As pointed out by @Gal Ben-Haim, adding the (undocumented) go test -p 1 flag builds and tests all packages serially. As put by the testflag usage message in the Go source code:
-p=n: build and test up to n packages in parallel
Old answer:
When running go test ./..., the tests of the different packages are in fact run in parallel, even if you set -parallel 1 (only tests within a specific package are guaranteed to run one at a time). If it is important that the packages be tested in sequence, as when there is database setup/teardown involved, it seems like the only way right now is to use the shell to emulate the behavior of go test ./... and force the packages to be tested one by one.
Something like this, for example, works in Bash:
find . -name '*.go' -printf '%h\n' | sort -u | xargs -n1 -P1 go test
The command first lists all the subdirectories containing *.go files. Then it uses sort -u to list each subdirectory only once (removing duplicates). Finally all the subdirectories containing go files get fed to go test via xargs. The -P1 indicates that at most one command is to be run at a time.
Unfortunately, this is a lot uglier than just running go test ./..., but it might be acceptable if it is put into a shell script or aliased into a function that's more memorable:
function gotest(){ find $1 -name '*.go' -printf '%h\n' | sort -u | xargs -n1 -P1 go test; }
Now all tests can be run in the current directory by calling:
gotest .
Apparently, running go test -p 1 runs everything sequentially (including the build); I haven't seen this argument in go help test or go help testflag.
I am assuming that, because the packages pass individually, you are also dropping the DB before those tests as well.
It therefore sounds like each package's tests expect the DB to start out empty.
So the DB must be emptied between each package's test run. Not knowing your entire situation, I will briefly explain two options:
Option 1. Test Setup
Add an init() function to each package's _test file, in which you put the processing that removes the database contents. It will run before any of that package's tests execute:
func init() {
    fmt.Println("INIT TEST")
    // My test state initialization
    // Remove database contents
}
Assuming the package also had a similar print line, you would see this in the output (note that stdout is only displayed when a test fails or you supply the -v option):
INIT TEST
INIT PACKAGE
Option 2. Mock the database
Create a mock for the database (unless the database integration is specifically what you are testing). The mock DB can always behave as if the DB is blank at the start of each test.
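For example, here is a minimal sketch of the idea in Go (Store, mockStore, and the other names are hypothetical, not taken from the question): the code under test depends on a small interface, and the tests supply an in-memory implementation that always starts blank. All of this would live in a _test.go file:

package store

import "testing"

// Store abstracts the database operations the code under test needs.
type Store interface {
    Get(id string) (string, bool)
    Put(id, value string)
}

// mockStore is an in-memory Store; every test builds a fresh, blank one,
// so there is no state to clean up between tests or packages.
type mockStore struct {
    data map[string]string
}

func newMockStore() *mockStore {
    return &mockStore{data: make(map[string]string)}
}

func (m *mockStore) Get(id string) (string, bool) {
    v, ok := m.data[id]
    return v, ok
}

func (m *mockStore) Put(id, value string) {
    m.data[id] = value
}

func TestWithMock(t *testing.T) {
    var s Store = newMockStore() // blank "database"
    s.Put("a", "1")
    if v, _ := s.Get("a"); v != "1" {
        t.Fatalf("Get(%q) = %q, want %q", "a", v, "1")
    }
}

Since every test builds its own mockStore, nothing needs to be emptied between package runs.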
Please try out the following GitHub repository:
https://github.com/appleboy/golang-testing
Copy coverage.sh to /usr/local/bin/coverage and make it executable:
$ curl -fsSL https://raw.githubusercontent.com/appleboy/golang-testing/master/coverage.sh -o /usr/local/bin/coverage
$ chmod +x /usr/local/bin/coverage

Is there a way to delay the start of a QUnit test suite?

I'm trying to troubleshoot a unit test issue.
I used to have a mostly working Maven -> PhantomJS -> QUnit setup, but it was unpredictable, so I took it apart to try to fix it.
I upgraded the software:
QUnit: 1.11.0
PhantomJS: 1.8
PhantomJS QUnit runner (latest): https://github.com/jquery/qunit/tree/master/addons/phantomjs
I see the web GUI working. It runs and passes all 102 tests. The console prints this:
$ phantomjs --disk-cache=false runner.js http://localhost/ui/dcx/test.html
$ Took 16ms to run 0 tests. 0 passed, 0 failed.
If I comment out the exit command in the runner, it prints the console output for QUnit.done multiple times.
$ phantomjs --disk-cache=false runner.js http://localhost/ui/dcx/test.html
$ PhantomJS successfully loaded a page
$ QUnit.done callback fired
$ Took 15ms to run 0 tests. 0 passed, 0 failed.
$ QUnit.done callback fired
$ Took 1840ms to run 102 tests. 102 passed, 0 failed.
$ QUnit.done callback fired
$ Took 1841ms to run 102 tests. 102 passed, 0 failed.
$ QUnit.done callback fired
$ Took 1842ms to run 102 tests. 102 passed, 0 failed.
$ QUnit.done callback fired
$ Took 1848ms to run 102 tests. 102 passed, 0 failed.
$ ^C
$
It looks to me like the QUnit.done callback is getting executed too soon, and then multiple times.
Does anyone know why that callback fires like this?
My test inclusion and login delay might be relevant. I use AMD modules to define tests and curl.js to bring them in. Nothing happens until the security login completes:
curl(['dolla'], function($){
    $.ajax({
        type: 'POST',
        url: '/svc/j_spring_security_check',
        data: {
            j_username: '7',
            j_password: '7'
        },
        success: function() {
            loadTests()
        }
    });
})

var loadTests = function () {
    curl([
        // Unit tests
        'dcx/dataControls/activity.test'
        , 'dcx/dataControls/eventList.test'
        , 'dcx/dataControls/mapViewer.view.test'
        , 'dcx/pages/deviceDetails.view.test'
        , 'dcx/pages/login.test'
        , 'dcx/pages/nodeProfiles.test'
        , 'dcx/pages/settings.view.test'
    ], function() {}, function(ex) { throw new Error(ex) })
}
EDIT:
I'm down to a root cause, I think.
If you include QUnit on a blank page, it calls QUnit.begin and QUnit.done right away.
I need to delay execution of QUnit until after the security login succeeds and curl has brought in my unit tests. Is there a way to delay the start of QUnit but still keep the QUnit object available? I can't use stop() because there are many async tests that will call start().
Found the answer. You can configure QUnit not to start automatically, then start it manually once all your tests are loaded. This prevents the duplicate calling of QUnit.done, which is the root cause of this issue.
http://forum.jquery.com/topic/are-qunit-and-requirejs-compatible#14737000001967123
This is one way to do it: modify the runner to not exit if there are no test results.
https://gist.github.com/SimpleAsCouldBe/5059623
This doesn't work, though: QUnit.done fires whenever the test stack is cleared. In an asynchronously loaded environment like curl/RequireJS, this can happen at any time.
This approach is for when you don't want to use Require, for example in a plain browser context.
Having spent ages finding a method to turn the loading of scripts (and stylesheets) into Promises (see here), I then found big problems with QUnit test suites starting to run before all of these had loaded. Typically a handful of tests at the start would complain that a certain variable or class was undefined, although later tests wouldn't have that difficulty.
You can stop automatic starting by going like this:
QUnit.config.autostart = false;
... seemingly just putting it in one of several files will suffice.
To start the QUnit tests, then, you have to call QUnit.start();. But, understandably perhaps, you can't execute this from inside any code that is being run by a QUnit test. This gets complicated. In the end I did this in my app-starting code:
await this.loadInjectedFile( GLOBAL_SCRIPT );
await this.loadInjectedFile( DBFORM_SCRIPT );
await this.loadInjectedFile( UDV_SCRIPT );
await this.loadInjectedFile( REACTIVITY_SCRIPT );
console.log( '... injected files loaded' );
// to allow QUnit to start testing
window.QUnitGreenLight = true;
... strictly speaking a naughty thing to do (allowing test-related code to sneak into your app code). A more compartmentalised approach could probably be found.
Then, inline in the HTML file from where you launch your testing:
<script>
const tryToStartTesting = function(){
    setTimeout( function(){
        if( window.QUnitGreenLight ){
            QUnit.start();
        }
        else {
            console.log( 'QUnit green light not yet given!' );
            tryToStartTesting();
        }
    }, 10 );
};
tryToStartTesting();
</script>
... in practice it seems to take maybe a few hundredths of a second before the green light is given.
A bit scrappy, perhaps, but it seems to work.

How do you run OpenERP yaml unit tests

I'm trying to run unit tests on my OpenERP module, but no matter what I write, it doesn't show whether the test passes or fails! Does anyone know how to output the results of a test? (Using Windows, OpenERP version 6.1.)
My YAML test is:
-
  I test the tests
-
  !python {model: mymodelname}: |
    assert False, "Testing False!"
    assert True, "Testing True!"
The output when I reload the module with
openerp-server.exe --update mymodule --log-level=test -dtestdb
shows that the test ran but has no errors?!
... TEST testdb openerp.tools.yaml_import: I test the tests
What am I doing wrong?
Edit:
OK, so after much fiddling with !python, I tried out another test:
-
  I test that the state
-
  !assert {model: mymodel, id: mymodel_id}:
    - state == 'badstate'
Which gave the expected failure:
WARNING demo_61 openerp.tools.yaml_import: Assertion "NONAME" FAILED
test: state == 'badstate'
values: ! active == badstate
So I'm guessing there is something wrong with my syntax, which may work as expected in version 7.
Thanks for everyone's answers and help!
This is what I've tried. It seems to work for me:
!python {model: sale.order}: |
  assert True, "Testing True!"
  assert False, "Testing False!"
(Maybe you forgot the "|" character.)
And then:
bin/start_openerp --init=your_module_to_test -d your_testing_database --test-file=/absolute/path/to/your/testing_file.yml
You might want to create your testing database beforehand:
createdb mytestdb --encoding=unicode
Hope it helps you
UPDATE: Here are my logs (I called my test file sale_order_line_test.yml):
ERROR mytestdb openerp.tools.yaml_import: AssertionError in Python code : Testing False!
mytestdb openerp.modules.loading: At least one test failed when loading the modules.
loading test file /path/to/module/test/sale_order_line_test.yml
AssertionError in Python code : Testing False!
Looking at the docs (e.g. here and here), I can't see anything obviously wrong with your code.
However, I'm not familiar with --log-level=test. Maybe try running it with the -v, --debug or --log-level=debug flags instead of --log-level=test? You may also need to try the uppercase variants for the --log-level argument, i.e. --log-level=DEBUG.
test certainly isn't one of the standard Python logging module's logging levels, and while I can't exclude the possibility of them adding a custom log level, I don't think that's the case.
It might also be worthwhile trying to remove the line obj = self.browse(cr, uid, ref("HP001")), just in case.
Try the following commands in your terminal when you start your server.
./openerp-server --addons-path=<..Path> ... --test-enable
    Enable YAML and unit tests.
./openerp-server --addons-path=<..Path> ... --test-commit
    Commit database changes performed by YAML or XML tests.
Try this in your terminal; it should work:
./openerp-server --addons-path=<..Path> --log-level=test --test-enable
Hope this will help you.