I'm pretty new to Jenkins. There is this Test Result Trend graph which shows the Failed, Skipped and Passed tests for each build.
My question is: what causes a test to be counted as Skipped? If JUnit tests are marked with @Ignore and Jasmine tests are marked using xit/xdescribe, do those show up as Skipped tests on Jenkins?
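For concreteness, the Jasmine side of what I mean looks like the sketch below (the suite and spec names are just placeholders, not my real tests); my assumption is that the xit/xdescribe specs are the ones that would end up counted as Skipped once the results reach Jenkins:

    // Minimal Jasmine sketch: xdescribe/xit mark suites/specs as pending,
    // so the runner reports them as pending/skipped instead of executing them.
    describe('calculator', () => {
      it('adds two numbers', () => {
        expect(1 + 2).toBe(3);            // runs and passes
      });

      // A single spec disabled with xit -- reported as pending/skipped.
      xit('divides by zero safely', () => {
        expect(1 / 0).toBe(Infinity);
      });
    });

    // A whole suite disabled with xdescribe -- every spec inside is skipped.
    xdescribe('calculator (work in progress)', () => {
      it('multiplies two numbers', () => {
        expect(2 * 3).toBe(6);
      });
    });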
I couldn't find any answers regarding this; any insights would be helpful. Thanks!
If this is a silly question, I apologize, but I am sincerely having a hard time understanding whether we absolutely need Karma to run unit tests. I am new to Angular 2, Karma and Jasmine, and I was hoping someone with more experience than me could explain whether we could just use standalone Jasmine to run unit tests.
I know that Karma is a test runner, and I know all the niceties that come with it (testing on multiple browsers and devices, automatically re-running tests when files change, test coverage results, easy integration with Jenkins... the list goes on and on!). But I am having a hard time understanding whether it is completely necessary. A lot of the examples I've seen (if not all of them) use Karma, and I have not seen an example where only Jasmine is used, so I would like an explanation of why that is. Is it because of how Karma loads the spec/app files?
The example I was looking at was the Jasmine standalone distribution, and I am trying to apply it to my current structure, but so far no luck:
https://github.com/jasmine/jasmine/releases
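Roughly, what I'm attempting looks like the sketch below (the class and file names are placeholders for my real code): a plain Jasmine spec that I'd like to run either through the standalone SpecRunner.html or via the jasmine package on Node, with no Karma involved.

    // greeter.ts -- a tiny stand-in for my real application code
    export class Greeter {
      greet(name: string): string {
        return `Hello, ${name}!`;
      }
    }

    // greeter.spec.ts -- a plain Jasmine spec, no Karma involved.
    // With the standalone distribution, the compiled JS would be listed as
    // script tags in SpecRunner.html; with the jasmine npm package it would
    // be picked up by the spec_files glob in spec/support/jasmine.json.
    import { Greeter } from './greeter';

    describe('Greeter', () => {
      let greeter: Greeter;

      beforeEach(() => {
        greeter = new Greeter();
      });

      it('greets by name', () => {
        expect(greeter.greet('World')).toBe('Hello, World!');
      });
    });

My (possibly wrong) understanding is that Karma's main job is loading files like these into real browsers and re-running them on change, which is exactly the part I'd be doing by hand with the standalone runner.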
Thank you in advance for all your help!
Has anyone done kdb unit testing before? I have no idea how I should approach it, and I am confused about TimeStored's qunit as well.
I will load my script, the qunit script and my test script. However, I am not sure how to write the test code for queries. What should I write as the expected result?
QUnit, a unit testing framework for kdb, is documented here:
http://www.timestored.com/kdb-guides/kdb-regression-unit-tests
The source code is available here:
https://github.com/timeseries/kdb/tree/master/qunit
If you have any specific questions, just ask.
I would like to generate statistics from the test run, and would like to keep track of these four numbers: failed, passed, inconclusive and ignored tests.
My question is: is it possible to get the number of skipped/ignored tests (that is, test methods decorated with the [Ignore] attribute)?
I'm not aware of any working solutions for this yet.
Please see this issue in the Microsoft tracker: mstest.exe report file (trx) does not list ignored tests. Its status is "Closed as Deferred".
However, the current design seems to make more sense to me. Statistics for test runs should only include the tests that are supposed to be executed; tests marked with [Ignore] should not be considered part of the test run. If people have intentionally excluded some tests from a test run, why would they want to see them in the results?
That said, I'd personally like to see this feature; more statistics won't hurt, after all.
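In the meantime, if the three counts that do get recorded are enough, one option is to read them straight out of the .trx report yourself. Here is a rough sketch in Node/TypeScript (the file path is a placeholder, and the exact element/attribute names may vary with your MSTest version) that tallies the outcome attribute of each UnitTestResult element; as noted above, [Ignore]d tests simply do not appear in the file, so the fourth number cannot be recovered this way:

    // count-trx-outcomes.ts -- rough sketch: tally test outcomes in an MSTest .trx file.
    // Uses a naive regex instead of a real XML parser to stay dependency-free.
    import { readFileSync } from 'fs';

    function countOutcomes(trxPath: string): Record<string, number> {
      const xml = readFileSync(trxPath, 'utf8');
      const counts: Record<string, number> = {};

      // Each executed test appears as <UnitTestResult ... outcome="Passed|Failed|Inconclusive|...">.
      const pattern = /<UnitTestResult\b[^>]*\boutcome="([^"]+)"/g;
      let match: RegExpExecArray | null;
      while ((match = pattern.exec(xml)) !== null) {
        counts[match[1]] = (counts[match[1]] ?? 0) + 1;
      }
      return counts;
    }

    // Example usage (path is made up):
    console.log(countOutcomes('TestResults/latest.trx'));
    // e.g. { Passed: 120, Failed: 3, Inconclusive: 1 } -- no entry for ignored tests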
When I use PHPUnit, some tests fail, and I want to re-run only the failed tests, not the ones that passed. Is there a way to do that?
I can filter tests manually, but I want it to happen automatically.
Thanks
For anyone else with the same problem, the following link is useful: Re-run last failed test in PHPUnit.
PHPUnit does not keep track of failed and passed tests; results are reported on the fly. Arguably, a feature like that would undermine the whole idea of automated testing. Think about it: you automate your tests because you want to be warned when a change breaks your code, but you only know whether something broke when you run the automated tests. There is no guarantee that the fix you made for one test case will not break another test case.
PHPUnit helps you make sure your code still works even after you have fixed whatever was causing some test case to fail.
I'm working on a variation of this Stack Overflow answer that provides reliable cleanup of tests: How do you write unit tests for NUnit addins?
Examining how NUnit tests itself, I have determined the following:
You can write tests that pass while verifying NUnit's correct behavior for failing tests.
You write unit tests against test fixtures in a separate assembly (otherwise the fixtures under test will execute with your unit tests)
You use NUnit.TestUtilities.TestBuilder to create fixtures and call the TestSuite.Run method.
What I don't see are any tests of the add-in process itself. I've got errors occurring somewhere between installation and execution. How would I unit test implementations of the following?
IAddin.Install
ITestDecorator.Decorate
Here's an article by someone who hacked a way to do it: manipulating some of the singletons in the NUnit add-in implementation to swap his add-in in and out.
http://www.bryancook.net/2009/09/testing-nunit-addins-from-within-nunit.html
Sometimes, the easiest thing to do is run integration tests. It's been a while since I played with the NUnit add-in API, so I can't really say whether there are any existing unit tests for the extensibility mechanism. If you have looked through the NUnit source code and haven't found any, then I guess it is not something that was tested, or even written using TDD.
Like I said, sometimes it's easier to just run integration tests. Have your add-in, for example, print something to the output stream, and have your test verify that the exact message was written. That way you can test that both the installation and the initialization of your add-in succeeded.
Hope that helps...