GoLand not showing individual test results after test run - unit-testing

I am trying to fix some unit tests in GoLand using the (I believe) standard "testing" package in Go, but I'm having trouble figuring out which test is failing. After I run the tests, nothing is shown in the test results dropdown; it is just empty (see below).
I wrote a dummy test that just prints "here" to check whether even a trivial test would show results, and even then I get no test results in the explorer. The test passes and prints the expected output:
func Test_ResultsShow(t *testing.T) {
    println("here")
}
=== RUN Test_ResultsShow
here
--- PASS: Test_ResultsShow (0.00s)
PASS
Process finished with the exit code 0
Additionally, when I try to run my larger suite of tests, the numbers of passed (24) and failed (1) tests don't add up to the total number of tests indicated (26). I see no indication of any test failure in the test output either, and I've run all the tests individually to see which one is failing, but all of them succeed.
The blacked-out section below covers the repository name, but the individual test names are not shown below it (though the output confirms they ran).

Related

Flag test as expected to fail in JUnit 5

I have a unit test, written with JUnit 5 (Jupiter), that is failing. I do not currently have time to fix the problem, so I would like to mark the test as an expected failure. Is there a way to do that?
I see @Disabled, which causes the test to not be run. I would like the test to still run (and ideally fail the build if it starts to work), so that I remember that the test is there.
Is there such an annotation in JUnit 5? I could use assertThrows to catch the error, but I would like the build output to indicate that this is not a totally normal test.
You can disable the failing test with the @Disabled annotation. You can then add another test that asserts the first one does indeed fail:
@Test
@Disabled
void fixMe() {
    Assertions.fail();
}

@Test
void fixMeShouldFail() {
    assertThrows(AssertionError.class, this::fixMe);
}

Why is Jest not outputting my assertions to the terminal when it runs?

When I run my test, none of the assertions I've written are shown.
What I'm expecting is something like this, where I can see the individual assertions with a tick next to them.
This is the expected output of Jest: you just get a single success message per suite. If tests fail, then you get more detailed info about the failed tests.
https://github.com/facebook/jest/issues/148
This can be changed by running jest --verbose.

Questions about google test and assertion output (test results); can I trust when gtest says a test passed?

When I create a TEST or TEST_F test, how can I know that my assertion is actually executing?
The problem I have is, when I have an empty TEST_F, for example,
TEST_F(myFixture, test1) {}
When it runs, gtest says this test passes. I would have expected the test to fail, until I write test code. Anyway.
So, my problem is that when gtest says a test is "OK" or that it passed, I can't trust it, because a test could "pass" even if there is no test code.
It would be nice to print what my EXPECT_ or ASSERT calls are doing and then see that they pass. The problem is that if I make any std::cout calls, they seem to be out of sync with the test results printed at the end.
Is there a verbose option to google test? How can I be sure the EXPECT that I coded is actually running?
You might consider looking at TDD, Test Driven Development, https://en.wikipedia.org/wiki/Test-driven_development
write one test => it will fail
write code to make the test pass => test passes
Rinse and repeat: express each requirement as a test that initially fails, then write code to make that test pass.
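A rough sketch of that loop (written in Python with the standard unittest module just to keep the illustration short; the add function is made up for the example, but the same red/green sequence applies with gtest):
import unittest

# Step 1 (red): write the test first. At that point add() does not exist
# (or returns the wrong thing), so the run fails -- which proves the
# assertion really executes.
# Step 2 (green): write just enough code to make the test pass.
def add(a, b):
    return a + b

class AddTests(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()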

mock.patch not reset between Django test runs

I have 2 tests which are testing a view that makes a call to an external module. I've mocked it with mock.patch. I'm calling the view by using django's test client.
The first test (a test for 404 being returned) completes successfully and the correct mock is called.
When the second test runs, everything runs as normal, but the mock that the code-under-test has access to is the mock from the previous test.
You can see in this example https://dpaste.de/7zT8 that the ids in the test output are incorrect (around line 91).
Where is this getting cached? My initial thought was that the import of the main module is somehow cached between test runs due to urlconf stuff. Tracing through the source code, I couldn't find that as the case.
Expected: Both tests pass.
Actual: Second test fails due to stale mocked import.
If I comment out the 404 test, the other test passes.
The view is registered in the url conf as the string-y version 'repos.views.github_webhook'.
I do not fully understand what causes the exact behaviour you are seeing, especially not why the mock is seemingly working correctly in the first test. But according to the mock docs, you should patch in the namespace under test, i.e. patch("views.tasks").
http://www.voidspace.org.uk/python/mock/patch.html#where-to-patch
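A minimal sketch of that advice, assuming a layout like the one in the question (the module paths, task function name, and URL below are guesses for illustration only; the view module is assumed to do "from repos import tasks" and then call tasks.handle_hook()):
from unittest import mock
from django.test import TestCase

class GithubWebhookTests(TestCase):
    # Patch the name the view actually looks up (repos.views.tasks),
    # not the module where the function is defined (repos.tasks).
    @mock.patch("repos.views.tasks")
    def test_webhook_calls_task(self, mock_tasks):
        # The URL is hypothetical -- use whatever your url conf maps to the view.
        self.client.post("/webhook/github/")
        mock_tasks.handle_hook.assert_called_once()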

Two asserts in the same unit test method, how to do it?

I'm starting out using unit tests. I have a situation and don't know how to proceed:
For example:
I have a class that opens and reads a file.
In my unit test, I want to test the open method and the read method, but to read the file I need to open the file first.
If the "open file" test fails, the "read file" test would fail too!
So, how do I make it explicit that the read failed because of the open? Should I test the open inside the read?
The key feature of unit tests is isolation: one specific unit test should cover one specific functionality - and if it fails, it should report it.
In your example, read clearly depends on the open functionality: if the latter is broken, there's no reason to test the former, as we already know the result. Moreover, reporting a read failure will only add irrelevant noise to your test results.
What can (and should be) reported for read in this case is test skipped or something similar. That's how it's done in PHPUnit, for example:
class DependencyFailureTest extends PHPUnit_Framework_TestCase
{
    public function testOne()
    {
        $this->assertTrue(FALSE);
    }

    /**
     * @depends testOne
     */
    public function testTwo()
    {
    }
}
Here we have testTwo dependent on testOne. And that's what's shown when the test is run:
There was 1 failure:
1) testOne(DependencyFailureTest)
Failed asserting that <boolean:false> is true.
/home/sb/DependencyFailureTest.php:6
There was 1 skipped test:
1) testTwo(DependencyFailureTest)
This test depends on "DependencyFailureTest::testOne" to pass.
FAILURES!
Tests: 2, Assertions: 1, Failures: 1, Skipped: 1.
Explanation:
To quickly localize defects, we want our attention to be focused on relevant failing tests. This is why PHPUnit skips the execution of a test when a depended-upon test has failed.
Opening the file is a prerequisite to reading the file, so it's fine to include that in the test. You can throw an exception in your code if the file failed to open. The error message in the test will then make it clear why the test failed.
I would also recommend that you consider creating the file in the test itself to remove any dependencies on existing files. That way you ensure that you always have a valid file to reference.
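For instance, a small sketch of both points in Python (FileReader is just a hypothetical stand-in for the class under test): the file is created inside the test itself, and a failed open raises an exception whose message makes the cause of the failure obvious.
import tempfile
import unittest

class FileReader:
    # Hypothetical class under test.
    def read(self, path):
        # If the file cannot be opened, this raises FileNotFoundError,
        # so a failing test clearly reports *why* it failed.
        with open(path) as f:
            return f.read()

class FileReaderTest(unittest.TestCase):
    def test_read_returns_contents(self):
        # Create the file inside the test so it never depends on a
        # file already existing on disk.
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
            tmp.write("hello")
            path = tmp.name
        self.assertEqual(FileReader().read(path), "hello")

if __name__ == "__main__":
    unittest.main()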
Generally speaking, you wouldn't find yourself testing your proposed scenario of unit testing the ability to read from a file, since you will usually end up using a file manipulation library of some kind and can usually safely assume that the maintainers of said library have the appropriate unit tests already in place (for example, I feel pretty confident that I can use the File class in .NET without worry).
That being said, the idea of one condition being an impediment to testing a second is certainly valid. That's why mock frameworks were created, so that you can easily set up a mock object that will always behave in a defined manner that can then be substituted for the initial dependency. This allows you to focus squarely on unit testing the second object/condition/etc. in a test scenario.
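As a small sketch of that idea using Python's built-in unittest.mock (read_greeting is a made-up function under test): mock_open stands in for the real open(), so the test exercises only the reading logic and can never fail because a file is missing.
import unittest
from unittest import mock

def read_greeting(path):
    # Made-up function under test: read a file and upper-case its contents.
    with open(path) as f:
        return f.read().upper()

class ReadGreetingTest(unittest.TestCase):
    def test_uppercases_file_contents(self):
        # Replace the built-in open() with a mock that always "succeeds",
        # isolating the read logic from the file system entirely.
        with mock.patch("builtins.open", mock.mock_open(read_data="hello")):
            self.assertEqual(read_greeting("ignored.txt"), "HELLO")

if __name__ == "__main__":
    unittest.main()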