Is there an MSTest equivalent to NUnit's Explicit Attribute?

No, the closest you will get is with the [Ignore] attribute.
However, MSTest offers other ways of disabling or enabling tests using Test Lists. Whether you like them or not, Test Lists are the recommended way to select tests in MSTest.

When you want the test to assert only when run under the debugger (which implies it is being run manually), you may find this useful:
if (!System.Diagnostics.Debugger.IsAttached) return;
Add the line above at the beginning of the method marked with [TestMethod].
The test is then always run, but nothing is asserted when no debugger is attached.
So when you want to run it manually, do it in debug mode.
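For illustration, a minimal sketch of the whole pattern (the test name and the guarded assertion are placeholders):

[TestMethod]
public void ManualOnlyTest()
{
    // Bail out (pass trivially) when no debugger is attached
    if (!System.Diagnostics.Debugger.IsAttached) return;

    // The real checks only execute when the test is run manually in debug mode
    Assert.IsTrue(RunExpensiveManualScenario()); // RunExpensiveManualScenario is hypothetical
}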

I am using this helper:
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class TestUtilities
{
    public static void CheckDeveloper()
    {
        // Report the calling test as inconclusive when DEVELOPER is not set
        var _ =
            Environment.GetEnvironmentVariable("DEVELOPER") ??
            throw new AssertInconclusiveException("DEVELOPER environment variable is not found.");
    }
}
Call it at the beginning of the tests you want to guard. Such a test only runs when the DEVELOPER environment variable is set; otherwise it is reported as inconclusive, the rest of the tests still execute normally, and the dotnet test command still returns a successful result.
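A guarded test would then look roughly like this (the test name and body are illustrative only):

[TestMethod]
public void DeveloperOnlyIntegrationTest()
{
    TestUtilities.CheckDeveloper(); // inconclusive unless DEVELOPER is set

    // ... the rest of the test runs only on machines where DEVELOPER is defined ...
}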

Related

tcltest unit tests: how to check if constraint is active to enable code reuse

We are using tcltest to do our unit testing but we are finding it difficult to reuse code within our test suite.
We have a test that is executed multiple times for different system configurations. I created a proc which contains this test and reuse it everywhere instead of duplicating the test's code many times throughout the suite.
For example:
proc test_config { config_name } {
    test $test_name {} -constraints $config_name -body {
        <test body>
    } -returnCodes ok
}
The problem is that I sometimes want to test only certain configurations. I pass the configuration name as a parameter to the proc as shown above, but the -constraints option of the test does not resolve the $config_name parameter as expected. The test is always skipped unless I hard-code the configuration name, but then the proc no longer helps and I would need to duplicate the code everywhere just to hard-code the constraint.
Is there a way to look if the constraint is enabled in the tcltest configuration?
Something like this:
proc test_config { config_name } {
    testConstraint X [expr { ::tcltest::isConstraintActive $config_name }]
    test $test_name {} -constraints X -body {
        <test body>
    } -returnCodes ok
}
So, is there a function in tcltest doing something like ::tcltest::isConstraintActive $config_name?
Is there a way to look if the constraint is enabled in the tcltest configuration?
Yes. The testConstraint command will do that if you don't pass in an argument to set the constraint's status:
if {[tcltest::testConstraint foo]} {
    # ...
}
But don't use this to decide whether to run tests, or for per-test setup or cleanup. Tests should only ever be turned on or off by constraints directly, so that the report generated by tcltest can properly track which tests were disabled and for what reasons; each test also has -setup and -cleanup options that allow scripts to be run before and after the test body when the constraints are matched.
Personally, I don't recommend putting tests inside procedures or using a variable for a test name. It works and everything, but it's confusing when you're trying to figure out what test failed and why; debugging is hard enough without adding to it. (I also find that apply is great as a way to get a procedure-like thing inside a test without losing the “have the code inspectable right there” property.)

How do I use TestNG SkipException?

How do I use TestNG's throw new SkipException() effectively? Does anyone have an example?
I tried throwing this exception at the start of a test method, but it blows up the setup and teardown methods, has collateral damage by causing a few (not all) of the subsequent tests to be skipped as well, and shows a bunch of garbage on the TestNG HTML report.
I use TestNG to run my unit tests and I already know how to use an option on the @Test annotation to disable a test. I would like my test to show up as "existent" in my report but without counting it in the net result. In other words, it would be nice if there were a @Test annotation option to "skip" a test, so that I can mark tests as sort of ignored without having them disappear from the list of all tests.
Is SkipException required to be thrown in a @BeforeXXX method before the @Test is run? That might explain the weirdness I am seeing.
Yes, my suspicion was correct. Throwing the exception within @Test doesn't work, and neither did throwing it in @BeforeTest while running tests in parallel by classes. If you do that, the exception breaks the test setup, your TestNG report shows exceptions within all of the related @Configuration methods, and it may even cause subsequent tests to fail rather than be skipped.
But when I throw it within @BeforeMethod, it works perfectly. Glad I was able to figure it out. The documentation of the class suggests it will work in any of the @Configuration-annotated methods, but something about my setup didn't allow that.
@BeforeMethod
public void beforeMethod() {
    throw new SkipException("Testing skip.");
}
I'm using TestNG 6.8.1.
I have a few #Test methods from which I throw SkipException, and I don't see any weirdness. It seems to work just as expected.
@Test
public void testAddCategories() throws Exception {
    if (SupportedDbType.HSQL.equals(dbType)) {
        throw new SkipException("Using HSQL will fail this test. aborting...");
    }
    ...
}
Maven output:
Results :
Tests run: 85, Failures: 0, Errors: 0, Skipped: 2
Another use case: when a test is driven by a DataProvider whose input is read with Apache POI, you can add a separate check (for example in @BeforeTest) and throw SkipException when the data source is empty or null. That way the whole data-driven test is skipped instead of iterating over, say, 1000 empty input rows.
To skip a test case via the @Test annotation itself, you can use the enabled=false attribute, as below:
@Test(enabled = false)
This will skip the test case without running it, but the other tests, setup and teardown will run without any issue.

How do I ignore a test based on another test in NUnit?

I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it looks deceiving when both Add() and Get() fail because it looks like there's two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, in that if the first test fails, the following tests are ignored?
Along the same lines, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations first, before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory (thanks clairestreb). In the NUnit.Framework namespace is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet()
{
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.Size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
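Applied to the question's scenario, a hedged sketch might look like this (db, CanConnect and someData stand in for whatever the real fixture provides):

[Test]
public void GetTest()
{
    // Produces an inconclusive result, rather than a failure, when the prerequisite isn't met
    Assume.That(db.CanConnect(), "No database connection available");

    db.Add(someData);
    Assert.That(db.Get(someData), Is.Not.Null);
}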
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
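As a rough illustration of that suggestion (the interface and class names here are made up for the example), the code under test depends on an abstraction and the test supplies an in-memory fake instead of a real connection:

using System.Collections.Generic;

public interface IDataStore
{
    void Add(object item);
    object Get(object item);
}

// Trivial in-memory fake used only by the tests, so no database connection is needed
public class FakeDataStore : IDataStore
{
    private readonly List<object> items = new List<object>();

    public void Add(object item) { items.Add(item); }

    public object Get(object item) { return items.Contains(item) ? item : null; }
}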
I don't think that's possible out of the box.
Anyway, your test class design as you described will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test-class e.g. AddTests, and put the Get test(s) into another test-class, e.g. class GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
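A minimal sketch of that approach (Database.CanConnect is a stand-in for whatever prerequisite check applies; note that NUnit 3 renamed [TestFixtureSetUp] to [OneTimeSetUp]):

[TestFixture]
public class GetTests
{
    [TestFixtureSetUp]
    public void CheckPrerequisites()
    {
        // Skip the whole fixture when the prerequisite isn't met
        if (!Database.CanConnect())
            Assert.Ignore("No database connection; skipping the Get tests.");
    }

    [Test]
    public void GetTest()
    {
        // ... runs only when the prerequisite check above passed ...
    }
}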
Create a shared flag that the Add test sets when it fails (by wrapping its body in a catch-all), and return early from the Get test when that flag is set:
private static bool addFailed = false;

[Test]
public void TestAdd()
{
    try
    {
        // ... old test code ...
    }
    catch // catch all errors
    {
        addFailed = true;
        throw; // don't forget to rethrow
    }
}

[Test]
public void TestGet()
{
    if (addFailed) return;
    // ... old test code ...
}

How do I write NUnit unit tests without having to surround them with try catch statements?

At my company we are writing a bunch of unit tests. What we'd like is for each unit test to execute and, whenever it succeeds or fails, write that result somewhere at the end of the test, but we don't want to put that logic in every test.
Any idea how we could write tests without having to surround the content of each one with the try/catch logic we've been using?
I'm guessing you do something like this:
[Test]
public void FailBecauseOfException()
{
    try
    {
        throw new Exception();
    }
    catch (Exception e)
    {
        Assert.Fail(e.Message);
    }
}
There is no need for this. The tests will fail automatically if they throw an exception. For example, the following test will show up as a failure:
[Test]
public void FailBecauseOfException()
{
    throw new Exception();
}
I'm not entirely sure what you are trying to do here. Are you saying you are wrapping it in a try/catch so that you can catch when an exception occurs and log this?
If so, then a better way, probably, is just to get NUnit to write an output file and use this. I haven't used NUnit for about a year, but IIRC you can redirect its output to any file you like using the /out directive.
If there is a reason why you have to log it the way you say, then you'll either have to add your custom code to each test, or have a common "runner" that takes your code (for each test) as an anonymous method and runs it inside a single try..catch. That would prevent you having to repeat the try..catch for every test.
Apologies if I've misunderstood the question.
MSTest has TestCleanup, which runs after every test. In NUnit, the attribute to use is TearDown (runs after every test) or TestFixtureTearDown (runs once after all the tests in the fixture have completed).
If you want something to run only when a test passes, you could have a member variable shouldRunExtraMethod, which is initialized to false before each test and set to true at the end of the test. Then, in the TearDown, you decide what to do based on that variable's value, as sketched below.
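A minimal sketch of that flag idea, assuming the "write it somewhere" step is just a log line:

private bool shouldRunExtraMethod;

[SetUp]
public void SetUp()
{
    shouldRunExtraMethod = false;
}

[Test]
public void SomeTest()
{
    Assert.AreEqual(4, 2 + 2);
    shouldRunExtraMethod = true; // only reached when every assertion above succeeded
}

[TearDown]
public void TearDown()
{
    // Runs after every test, whether it passed or failed
    if (shouldRunExtraMethod)
    {
        Console.WriteLine("SomeTest passed"); // replace with whatever logging you need
    }
}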
If your unit test method covers the scenario in which you expect exceptions to be thrown, use the ExpectedException attribute. There's a post here on SO about using that attribute.
Expect exceptions in nUnit...
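For reference, the classic NUnit 2 usage looks roughly like this (the attribute was removed in NUnit 3, where Assert.Throws is the replacement):

[Test]
[ExpectedException(typeof(ArgumentNullException))]
public void ParseRejectsNull()
{
    // The test passes only if this call throws the declared exception type
    int.Parse((string)null);
}

// NUnit 2.5+ alternative that also works in NUnit 3:
[Test]
public void ParseRejectsNull_AssertThrows()
{
    Assert.Throws<ArgumentNullException>(() => int.Parse((string)null));
}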
NUnit assert statements all accept an optional message that is reported when the assert fails.
If you'd like to have something written out somewhere at the end of each test, you can set it up in the teardown of each method. Just set the string to what you want written inside the test itself, and during teardown (which happens after each test) it can do whatever you want with it.
I'm fairly certain teardown occurs even if an exception is thrown. That should do what you're wanting.
The problem you have is that the NUnit Assert.* methods will throw an AssertionException whenever an assert fails - but it does nothing else. So it doesn't look like you can check anything outside of the unit test to verify whether the test failed or not.
The only alternative I can think of is to use AOP (Aspect Oriented Programming) with a tool such as PostSharp. This tool allows you to create aspects that can act on certain events. For example:
public class ExceptionDialogAttribute : OnExceptionAspect
{
    public override void OnException(MethodExecutionEventArgs eventArgs)
    {
        string message = eventArgs.Exception.Message;
        Window window = Window.GetWindow((DependencyObject) eventArgs.Instance);
        MessageBox.Show(window, message, "Exception");
        eventArgs.FlowBehavior = FlowBehavior.Continue;
    }
}
This aspect is code which runs whenever an exception is raised:
[ExceptionDialog]
[Test]
public void Test()
{
    Assert.AreEqual(2, 4);
}
Since the above test will raise an exception, the code in ExceptionDialogAttribute will run. You can get information about the method, such as its name, so that you can log it to a file.
It's been a long time since I used PostSharp, so it's worth checking out the examples and experimenting with it.

Resharper running all tests when only a single one is selected

I'm using ReSharper 4.5 with Visual Studio 2008 and MbUnit testing, and there seems to be something odd with using ReSharper to run the tests.
In the margin there are icons beside the class and each test method with the options Run and Debug. When I select Run it just shows me the results of the single test. However, I noticed that the test was taking a considerably long time to run.
When I ran SQL Server Profiler and started stepping through the code, I realized that it's not just running the selected test, but every single one in the class. Is there any reason it makes it look like it's only running one unit test while it's actually running them all?
It's getting to be a pain waiting for all the integration tests to run when I only care about the result of one. Is there any way to change this?
I just encountered this today and I think I might have realized what causes this bug. I had my methods named similarly:
[TestMethod]
public void TestSomething()
[TestMethod]
public void TestSomethingPart2()
I saw that running TestSomething() would run both, whereas running TestSomethingPart2() would not. I concluded that when one test method's name is a prefix of another's, running the shorter-named test also runs the matching ones. After renaming my second test to TestPart2Something, this issue went away.
I can confirm that this is a problem with ReSharper 5.1.
To reproduce, run test A from my sample code below (all tests will execute); run test AB (all except A will execute); etc.:
[TestMethod]
public void A()
{
    Console.WriteLine("A");
}

[TestMethod]
public void AB()
{
    Console.WriteLine("AB");
}

[TestMethod]
public void ABC()
{
    Console.WriteLine("ABC");
}

[TestMethod]
public void ABCD()
{
    Console.WriteLine("ABCD");
}

[TestMethod]
public void ABCDE()
{
    Console.WriteLine("ABCDE");
}
It took me ages to work this out. I had the remote debugger attached to a development server, and it was breaking a bit more often than I was expecting it to...
It seems to be doing a StartsWith instead of a Contains as others have said.
The workaround is to not have test method names that start with the name of another test method name.
I hope this shows up under Chris's post.
I had a similar situation that confirms the behavior he noticed.
[TestMethod()]
public void ArchiveAccountTest()
[TestMethod()]
public void ArchiveAccountTestRestore()
So running the first method would execute both, while running the second would not. Renaming my second method to TestRestore made the problem go away.
Note: I'm using Resharper 5.1 so it's still a problem.
When you right-click in the editor, the context menu appears from which you can run and debug tests. Right-click inside a test method to run or debug that single test. Right-click outside of any test method to run or debug the entire test class contained in the current file.
The current release of Gallio includes a Unit Test runner with MbUnit (and NUnit) support built-in.
From the Resharper menu, you have the option of running a Single unit test or all Tests in your solution. What is cool, is that the Keyboard-shortcuts for this are:
Alt + R, U, R - Run test from current context (if you are at a [Test] level, it runs one test, if you are at a [TestFixture] level, it runs all in the fixture!)
Alt + R, U, N - Runs all Unit Tests in your Solution
I highly recommend that you uninstall your current Gallio, then check C:\Program Files\Jetbrains\Resharper\plugins\bin and clear out any files there. Then install Gallio afresh.
Once you've done this, start up VS2008 and go to the Resharper | Plugins menu to check that the Gallio plugin is active. This will give you support for MbUnit.