UnitTest WorkflowInstanceID Exception - unit-testing

I am unit testing a StateMachineWorkflow. I create my test methods by right-clicking in my test project and choosing Add - Unit Test, and in the project window I select the workflow that I want to test and all the methods in it.
Visual Studio generated a Test References folder in my test project with an accessor to the workflow. It also generated all the TestMethod() stubs necessary for the testing. All test methods use a MyWorkflow_Accessor target = new MyWorkflow_Accessor(). When I need to call a function I just do something like target.SendEmail().
Everything works fine except for one thing: I can't use the workflow's WorkflowInstanceId. When execution reaches a line that uses it, the workflow throws an exception: "This is an invalid design time operation. You can only perform the operation at runtime."
Is it possible to inject the WorkflowInstanceId by code? Is there any workaround for this situation? I use WorkflowInstanceId in a lot of functions, and changing the workflow code to match my test doesn't seem like a good idea, because I believe the problem is in the test and not in the workflow.

It's not clear from your question if you're using WF 3.5 or WF4 with the state machine update. For the latter, you can use Microsoft.Activities.UnitTesting to test workflows.
It sounds like you're using WF 3.5, though. If this is new development, I would seriously consider moving to WF4. Microsoft basically rewrote WF, and the sooner you switch, the easier your migration path will be.
Otherwise, there is some information on testing with WF 3.5 on MSDN.
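If you do stay on WF 3.5, one workaround is to host the workflow in a real WorkflowRuntime inside the test instead of calling methods on the accessor directly: WorkflowInstanceId is only populated while a runtime is hosting the instance, which is exactly what the design-time exception is complaining about. A minimal sketch, assuming MyWorkflow is the workflow type from the question (a real state machine would also need its events raised, e.g. via ExternalDataExchange, before it completes):

using System;
using System.Threading;
using System.Workflow.Runtime;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MyWorkflowRuntimeTests
{
    [TestMethod]
    public void Workflow_RunsWithValidInstanceId()
    {
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            runtime.StartRuntime();
            AutoResetEvent done = new AutoResetEvent(false);
            runtime.WorkflowCompleted += delegate { done.Set(); };
            runtime.WorkflowTerminated += delegate { done.Set(); };

            // Hosted this way, the instance gets a real WorkflowInstanceId,
            // so code inside the workflow can read it without the exception.
            WorkflowInstance instance = runtime.CreateWorkflow(typeof(MyWorkflow));
            instance.Start();

            Assert.IsTrue(done.WaitOne(TimeSpan.FromSeconds(30), false));
        }
    }
}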

Related

Any way to manually trigger a Test Discovery pass in VS2019 from a VSPackage?

We're currently building an internal apparatus to run unit tests on a large C++ codebase, using Catch2 as the framework and our in-house VS test adapter (implementing ITestDiscoverer and ITestExecutor) to fit them to our code practices. However, we've encountered issues with unit tests not always being discovered after a build.
There are a couple of things we're doing out of the norm that may be contributing. While we're using VS2019 for coding, we use FASTBuild and Sharpmake to build our solutions (which can contain countless projects). When we realised that VS would try to build the tests again using MSBuild before running them (even after a full rebuild), we disabled that behaviour in the VS options. Everything else seems to run as expected, except that sometimes tests aren't picked up.
After doing some digging (namely outputting a verification message to VS's Tests output the moment our test discoverer is entered), it seems that a test discovery pass isn't always invoked when we would expect it, sometimes even after a full solution rebuild. Beyond the usual expectation that building a project with new changes (or rebuilding outright) would start a pass, the methodology VS uses to determine when to invoke the installed test adapters seems to be fairly black-box in terms of what exact parameters/conditions trigger it.
An alternative seems to be to let the user manually trigger a test discovery pass via some means that could be wrapped in a VSPackage. However, initial looks through the VSSDK API for anything that would do the job have come up short.
Using the VSSDK, are there any means to invoke a Test Discovery pass independently from VS's normal means of detecting whether a pass is required?
You would want to use the ITestContainerDiscoverer.TestContainersUpdated event. The platform should then call into your container discoverer to get the latest set of containers (ITestContainerDiscoverer.TestContainers). As long as the containers returned from the discoverer are different (based on ITestContainer.CompareTo()), the platform should trigger a discovery for the changed containers. This blog post has been quite helpful in the past: https://matthewmanela.com/blog/anatomy-of-the-chutzpah-test-adapter-for-vs-2012-rc/
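A minimal sketch of wiring that up, assuming the MEF-exported discoverer pattern; the class name, RequestDiscovery and FindContainers are hypothetical, while the three interface members are the real ones from Microsoft.VisualStudio.TestWindow.Extensibility:

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using Microsoft.VisualStudio.TestWindow.Extensibility;

[Export(typeof(ITestContainerDiscoverer))]
public class MyContainerDiscoverer : ITestContainerDiscoverer
{
    public event EventHandler TestContainersUpdated;

    // Must match the ExecutorUri your ITestExecutor is registered with.
    public Uri ExecutorUri
    {
        get { return new Uri("executor://MyCompany/Catch2Executor"); }
    }

    // Queried by the platform after TestContainersUpdated fires; containers
    // that differ according to ITestContainer.CompareTo() get re-discovered.
    public IEnumerable<ITestContainer> TestContainers
    {
        get { return FindContainers(); }
    }

    // Call this from your VSPackage (e.g. when FASTBuild finishes) to nudge
    // the platform into a discovery pass.
    public void RequestDiscovery()
    {
        EventHandler handler = TestContainersUpdated;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }

    private IEnumerable<ITestContainer> FindContainers()
    {
        // Hypothetical: enumerate your built test binaries here.
        yield break;
    }
}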

Restart appdomain for each test

I realize it may sound like an odd request, and it certainly will not do wonders for test performance, but it's critical that I get a new AppDomain for the start of each unit test.
Currently I'm using xUnit and ReSharper as the test runner, but I'm willing to change if there's a different framework that would yield the behaviour that I need.
The xUnit ReSharper runner doesn't have this kind of functionality, and I don't know of any test framework that does this out of the box. If you need each test to run in a new AppDomain, I'd write each test so that it creates a new AppDomain and runs some custom code in there.
You could probably use some of xUnit's features to make this a little easier - the BeforeAfterTestAttribute allows you to run code before and after each test, or you could pass in a fixture that provides functionality to set up and tear down the AppDomain.
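For illustration, a hedged sketch of the hand-rolled approach on .NET Framework (AppDomains don't exist on .NET Core/5+); Isolated and its Run method are made-up names:

using System;
using Xunit;

// Runs on the far side of the domain boundary, so it must be a MarshalByRefObject.
public class Isolated : MarshalByRefObject
{
    public int Run()
    {
        // ...the code that needs a pristine AppDomain goes here...
        return 42;
    }
}

public class AppDomainIsolationTests
{
    [Fact]
    public void RunsInFreshAppDomain()
    {
        AppDomain domain = AppDomain.CreateDomain("test-" + Guid.NewGuid());
        try
        {
            Isolated proxy = (Isolated)domain.CreateInstanceAndUnwrap(
                typeof(Isolated).Assembly.FullName,
                typeof(Isolated).FullName);
            Assert.Equal(42, proxy.Run());
        }
        finally
        {
            AppDomain.Unload(domain); // tear the domain down even if the test fails
        }
    }
}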

VS2012 - Disable parallel test runs

I've got some unit tests (c++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never rely on another test case running first or behaving as expected. This means it should not matter whether the tests execute sequentially or in parallel.
Of course there are test cases that expect some fundamental piece of code to work; this might be your own code or part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and ensure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services such as HTTP requests, file streams, etc.
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names for NUnit, MSTest and xUnit.net can be found on the xUnit.net CodePlex page.
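For instance, a minimal MSTest version; the UserConfig type is a made-up stand-in, and in NUnit the attributes would be [SetUp]/[TearDown], while xUnit.net uses the test class constructor and IDisposable.Dispose instead:

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class UserConfigTests
{
    // Hypothetical class under test.
    public class UserConfig
    {
        public Dictionary<string, string> Settings = new Dictionary<string, string>();
    }

    private UserConfig _config;

    [TestInitialize] // runs before each test method
    public void Setup()
    {
        _config = new UserConfig();
    }

    [TestCleanup] // runs after each test method
    public void TearDown()
    {
        _config = null;
    }

    [TestMethod]
    public void NewConfig_HasNoSettings()
    {
        Assert.AreEqual(0, _config.Settings.Count);
    }
}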
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this (a minimal code sketch follows the list):
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test that the read method returns the desired result (object, string or however you've designed it)
create a mock config like the one you expect from the read method and test whether the verify method accepts it
at this point, create multiple mock configs that cover all the scenarios you can think of, check that the code handles every one of them, and fix it accordingly; this is also how you build up code coverage
create a mock object of an accepted config, use the setter to update the user's config, then use the getter to check that it was set correctly
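A minimal sketch of the read/verify steps above, with a hand-rolled mock so the tests never touch the file system; every type and member name here is hypothetical, and the validation rule is a toy one:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// "a method to read config"
public interface IConfigSource
{
    string Read();
}

// "second one to verify it" - toy rule: a config needs at least one key=value pair.
public class ConfigValidator
{
    public bool IsValid(string config)
    {
        return !string.IsNullOrEmpty(config) && config.Contains("=");
    }
}

// Hand-rolled mock standing in for the real file-based source.
public class MockConfigSource : IConfigSource
{
    private readonly string _content;
    public MockConfigSource(string content) { _content = content; }
    public string Read() { return _content; }
}

[TestClass]
public class ConfigTests
{
    [TestMethod]
    public void ValidMockConfig_PassesValidation()
    {
        IConfigSource source = new MockConfigSource("theme=dark");
        Assert.IsTrue(new ConfigValidator().IsValid(source.Read()));
    }

    [TestMethod]
    public void EmptyMockConfig_FailsValidation()
    {
        IConfigSource source = new MockConfigSource("");
        Assert.IsFalse(new ConfigValidator().IsValid(source.Read()));
    }
}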
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work as intended. Additional tests, for example End-to-End (E2E) tests, aren't strictly necessary; I use them only to assure that the whole application flow works and to catch errors easily (e.g. an HTTP connection error).

Unit Testing in QTestLib - running single test / tests in class / all tests

I'm just starting to use QTestLib. I have gone through the manual and tutorial. Although I understand how to create tests, I'm just not getting how to make those tests convenient to run. My unit test background is NUnit and MSTest. In those environments, it was trivial (using a GUI, at least) to alternate between running a single test, or all tests in a single test class, or all tests in the entire project, just by clicking the right button.
All I'm seeing in QTestLib is that either you use the QTEST_MAIN macro to run the tests in a single class, then compile and run each file separately; or you use QTest::qExec() in main() to define which objects to test, and then manually change that and recompile when you want to add or remove test classes.
I'm sure I'm missing something. I'd like to be able to easily:
Run a single test method
Run the tests in an entire class
Run all tests
Any of those would call the appropriate setup / teardown functions.
EDIT: Bounty now available. There's got to be a better way, or a GUI test runner that handles it for you or something. If you are using QtTest in a test-driven environment, let me know what is working for you. (Scripts, test runners, etc.)
You can run only selected test cases (test methods) by passing the test names as command-line arguments:
myTests.exe myCaseOne myCaseTwo
It will run all inits/cleanups too. Unfortunately there is no support for wildcards/pattern matching, so to run all cases beginning with a given string (I assume that's what you mean by "running the tests in an entire class"), you'd have to create a script (Windows batch/bash/Perl/whatever) that calls:
myTests.exe -functions
parses the results, and runs the selected tests using the first syntax.
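For example, a small C# sketch of such a wrapper (any scripting language would do); the exact -functions output format is an assumption here - it typically prints one function per line as name(), so adjust the parsing if your Qt version differs:

using System;
using System.Diagnostics;
using System.Linq;

class RunByPrefix
{
    static void Main(string[] args)
    {
        string exe = args[0];    // e.g. myTests.exe
        string prefix = args[1]; // run every test whose name starts with this

        ProcessStartInfo info = new ProcessStartInfo(exe, "-functions");
        info.UseShellExecute = false;
        info.RedirectStandardOutput = true;

        Process list = Process.Start(info);
        string[] names = list.StandardOutput.ReadToEnd()
            .Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(line => line.Trim().TrimEnd('(', ')'))
            .Where(name => name.StartsWith(prefix))
            .ToArray();
        list.WaitForExit();

        // Re-invoke the test executable with the matching names (first syntax above).
        if (names.Length > 0)
            Process.Start(exe, string.Join(" ", names)).WaitForExit();
    }
}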
To run all cases, just don't pass any parameter:
myTests.exe
The three features requested by the OP are nowadays integrated into Qt Creator.
The project is automatically scanned for tests, and they appear in the Tests pane at the bottom left of Qt Creator.
Each test and its corresponding data can be enabled by clicking the checkbox.
The context menu allows you to run all tests, all tests of a class, only the selected tests, or a single test.
As requested.
The test results are available from Qt Creator too. A color indicator shows pass/fail for each test, along with additional information such as debug messages.
In combination with Qt Creator, using the QTEST_MAIN macro for each test case works well, as each compiled executable is invoked by Qt Creator automatically.
For a more detailed overview, refer to the Running Autotests section of the Qt Creator Manual.

Using Post-Build Event To Execute Unit Tests With MS Test in .NET 2.0+

I'm trying to set up a post-build event in .NET 3.5 that will run a suite of unit tests with MSTest. I found a post that shows how to call a bat file using MbUnit, but I'm wondering whether anyone has done this type of thing with MSTest.
If so, I would be interested in a sample of what the bat file would look like.
We were using NUnit in the same style and decided to move to MSTest. When doing so, we just added the following to our Post-Build event of the applicable MSTest project:
CD $(TargetDir)
"$(DevEnvDir)MSTEST.exe" /testcontainer:$(TargetFileName)
The full set of MSTest command line options can be found at the applicable MSDN site.
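One variation worth knowing about (the results-file name below is just an example): MSTest returns a non-zero exit code when any test fails, which makes the post-build event, and therefore the build, fail. Keeping a .trx results file lets you inspect the failures afterwards, but MSTest refuses to overwrite an existing results file, hence the DEL:

CD $(TargetDir)
DEL /Q results.trx 2>NUL
"$(DevEnvDir)MSTEST.exe" /testcontainer:$(TargetFileName) /resultsfile:"$(TargetDir)results.trx"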
Personally I would not recommend running unit tests as part of the compilation process. Instead, consider something like ReSharper (plus the appropriate unit test runner, or whatever they call those nowadays) or some other GUI runner.
Instead of doing it in a post-build event, which will run every time you compile, I would look at setting up a Continuous Integration server like CruiseControl.NET. It'll give you a tight feedback cycle without blocking your work by running the tests every time you build your application.
If you want to run the set of tests you are currently developing, Anton's suggestion of using ReSharper will work great. You can create a subset of tests to execute whenever you wish, and it's smart enough to compile for you if it needs to. While you're there picking up the demo, if you don't already have a license, pick up TeamCity as well. It is another CI server that shows some promise.
If you want to use this method to control build quality, you'll probably find that as the number of tests grows, you no longer want to wait for 1000 tests to run each time you press F5 to test a change.