Why is my NUnit Assert failing when deploying to Azure? - unit-testing

I have a test project I'm using to practice deploying to Azure, located here:
https://github.com/EdLichtman/HelloAzureCI
When I use ReSharper to run the NUnit tests, all of them pass except for the environment-specific test case, which is expected.
However, when I run deploy.cmd on my local computer, all 4 tests fail with "Object reference not set to an instance of an object."
One of my unit tests is just "Assert.AreEqual(1, 1)", and even that throws a NullReferenceException, which leads me to think that Assert itself is not an instance of an object.
Why is this happening? Can anyone else reproduce it?

There are a few odd things here, but the main one is that you are trying to run NUnit 3.7.1 tests using an NUnit V2 console runner (2.6.2). This is never going to work. My suggestion is that whenever you have trouble with running NUnit in a remote environment or using a third-party runner, you fall back to using the console runner locally. Even if that's not your preferred mode of working, you will usually be able to figure out what's wrong more easily if you eliminate as many middlemen as possible.
If you actually want to run under vstest.console, then you need to install the nunit3-vs-adapter NuGet package and point to its location on your command line. Note that the adapter, even though it is ours, constitutes another middleman, so debugging with nunit3-console is still a good choice.
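For illustration, a rough local sanity check along those lines might look like the commands below (the assembly name, package versions and folders are placeholders on my part; adjust them to whatever NuGet restore actually puts on disk):

REM run the tests with the NUnit 3 console runner (NUnit.ConsoleRunner package)
packages\NUnit.ConsoleRunner.3.7.0\tools\nunit3-console.exe bin\Debug\HelloAzureCI.Tests.dll

REM or run them under vstest.console, pointing it at the NUnit 3 adapter
vstest.console.exe bin\Debug\HelloAzureCI.Tests.dll /TestAdapterPath:packages\NUnit3TestAdapter.3.8.0\build\net35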

Related

Any way to manually trigger a Test Discovery pass in VS2019 from a VSPackage?

We're currently building an internal apparatus to run unit tests on a large C++ codebase, using Catch2 for the framework and our in-house VS test adapter (using ITestDiscoverer and ITestExecutor) to fit them to our code practices. However, we've encountered issues with unit tests not always being discovered after a build.
There are a couple of things we're doing out of the norm that may be contributing. While we're using VS2019 for coding, we use FASTBuild and Sharpmake to build our solutions (which can contain countless projects). When we realised that VS would try to build the tests again using MSBuild before running them (even after a full rebuild), we disabled that behaviour in the VS options. Everything else seems to be running as expected, except that sometimes tests aren't picked up.
After doing some digging (namely outputting a verification message to VS's Tests output window the moment our TestDiscoverer is entered), it seems that a test discovery pass isn't always invoked when we would expect it, sometimes even after a full solution rebuild. Beyond the usual expectation that building a project with new changes (or rebuilding outright) would cause a pass to start, the methodology VS uses to decide when to invoke the installed test adapters is fairly black-box in terms of what exact parameters/conditions trigger it.
An alternative seems to be to allow the user to manually trigger a test discovery pass via some means that could be wrapped in a VSPackage. However, initial looks through the VSSDK API for anything that would do the job have come up short.
Using the VSSDK, are there any means to invoke a Test Discovery pass independently from VS's normal means of detecting whether a pass is required?
You would want to use the ITestContainerDiscoverer.TestContainersUpdated event. The platform should then call into your container discoverer to get the latest set of containers (ITestContainerDiscoverer.TestContainers). As long as the containers returned from the discoverer are different (based on ITestContainer.CompareTo()), the platform should trigger a discovery for the changed containers. This blog has been quite helpful in the past: https://matthewmanela.com/blog/anatomy-of-the-chutzpah-test-adapter-for-vs-2012-rc/
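To make that concrete, here is a minimal sketch of a discoverer that raises the event on demand. It assumes a MEF-exported container discoverer; the RefreshTestContainers method, class name and executor URI are illustrative names of mine rather than anything prescribed by the platform:

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using Microsoft.VisualStudio.TestWindow.Extensibility;

[Export(typeof(ITestContainerDiscoverer))]
public class Catch2TestContainerDiscoverer : ITestContainerDiscoverer
{
    // Must match the executor URI your ITestExecutor registers.
    public Uri ExecutorUri => new Uri("executor://MyCatch2Executor/v1");

    // The platform re-reads this whenever it decides to discover containers.
    public IEnumerable<ITestContainer> TestContainers => FindContainers();

    public event EventHandler TestContainersUpdated;

    // Call this from your VSPackage command (e.g. a "Refresh tests" menu item)
    // to nudge the platform into another discovery pass; it re-queries
    // TestContainers and diffs them via ITestContainer.CompareTo().
    public void RefreshTestContainers()
    {
        TestContainersUpdated?.Invoke(this, EventArgs.Empty);
    }

    private IEnumerable<ITestContainer> FindContainers()
    {
        // Illustrative: scan your build output for test binaries and wrap each
        // one in your ITestContainer implementation, stamped with the file's
        // timestamp so CompareTo() reports a change after every build.
        yield break;
    }
}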

Run unit tests which fail during TFS-Build

I have a huge number of test cases running during the TFS build process.
Is there a way to rerun, on my local machine, all the test cases that fail on TFS? Maybe via configuration or an extension?
My problem is that it takes quite a while to run all the tests again, so I would like to run just those which fail.
The second problem is that the TFS build sometimes fails tests which work locally, so I'd like to figure out which ones I really broke.
I've never seen anything like this. I do think it would be possible to write a VS extension to pull the test results from TFS and create a test list file with all the failed tests and then load that in VS to rerun only the failed tests.
I wrote a simple extension and it wasn't that bad - http://dotnetcatch.com/2014/09/08/parameterizationpreview-visual-studio-extension/
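As a rough sketch of the test-list half of that idea: Visual Studio's Test Explorer can open a .playlist file and run only the tests it names, so once you have the failed test names from the TFS build (via the TFS client API or the build's .trx file - that part is not shown here), writing the playlist is straightforward. The class and method names below are mine, not part of any existing tool:

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

static class FailedTestPlaylist
{
    // Writes a Visual Studio .playlist file; open it in Test Explorer
    // to run only the listed tests.
    public static void Write(IEnumerable<string> fullyQualifiedTestNames, string path)
    {
        var playlist = new XElement("Playlist",
            new XAttribute("Version", "1.0"),
            fullyQualifiedTestNames.Select(
                name => new XElement("Add", new XAttribute("Test", name))));
        new XDocument(playlist).Save(path);
    }
}

// Example usage:
// FailedTestPlaylist.Write(new[] { "MyProject.Tests.OrderTests.Total_IsCorrect" },
//                          @"C:\temp\failed.playlist");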
I've tried the exact same thing. However, rerunning the tests locally didn't change anything; the tests still passed locally (even after more than 1000 tries) but failed sporadically on TFS. (For that part I just put the tests in a for-loop.)
Check your log on TFS - or post it up here - the log should tell you what failed in the tests, and maybe the failing tests need to be reconsidered or refactored. Even if they pass locally, that doesn't mean they are right, if that makes sense. So check the log, rewrite the tests and try again, would be my suggestion.

VS2012 - Disable parallel test runs

I've got some unit tests (C++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means it should never matter whether the tests execute sequentially or in parallel.
Of course there are test cases that expect some fundamental piece of code to work; this might be your own code or part of the framework/library you work with. In that case, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and ensure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services such as HTTP requests, file streams, etc.
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names used by NUnit, MSTest and xUnit.net can be found on the xUnit.net CodePlex page.
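As a minimal NUnit-flavoured sketch of that idea (the fixture and its contents are illustrative, not taken from the question), each test gets its own fresh state in SetUp and cleans it up in TearDown, so no test can step on, or depend on, another test:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ShoppingCartTests
{
    private List<string> _cart;

    [SetUp]
    public void SetUp()
    {
        // Runs before every test: each test starts from a brand-new cart,
        // so no test can see another test's leftover items.
        _cart = new List<string>();
    }

    [TearDown]
    public void TearDown()
    {
        // Runs after every test, even when the test fails.
        _cart.Clear();
    }

    [Test]
    public void Cart_StartsEmpty()
    {
        Assert.AreEqual(0, _cart.Count);
    }

    [Test]
    public void Add_PutsItemInCart()
    {
        _cart.Add("apple");
        Assert.AreEqual(1, _cart.Count);
    }
}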
A simple example application:
it should read a config file
it should verify that the config file is valid
it should update the user's config
The way I would go about building and testing this:
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test that the read method returns the desired result (object, string or however you've designed it)
create a mock config of the kind you expect from the read method and test that the verify method accepts it
at this point, you should create multiple mock configs covering all possible scenarios, see whether the code works for each of them and fix it accordingly. This also drives up your code coverage.
create a mock object of an accepted config, use the setter to update the user's config, then use the getter to check that it was set correctly (a sketch of these steps follows this list)
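A compact NUnit sketch of those steps, using a hand-rolled mock instead of a mocking library; every type and member name here (Config, IConfigReader, FakeConfigReader, UserSettings) is illustrative rather than taken from any real project:

using NUnit.Framework;

public class Config { public string Theme; }

public interface IConfigReader { Config Read(); }

// Hand-rolled mock: returns whatever config the test hands it,
// so no real file or network access is involved.
public class FakeConfigReader : IConfigReader
{
    private readonly Config _config;
    public FakeConfigReader(Config config) { _config = config; }
    public Config Read() { return _config; }
}

public class UserSettings
{
    public Config Current { get; private set; }

    public bool TryApply(IConfigReader reader)
    {
        var config = reader.Read();
        if (config == null || config.Theme == null)   // the "verify" step
            return false;
        Current = config;                              // the "update" step
        return true;
    }
}

[TestFixture]
public class UserSettingsTests
{
    [Test]
    public void TryApply_UpdatesCurrentConfig_WhenConfigIsValid()
    {
        var settings = new UserSettings();
        bool accepted = settings.TryApply(new FakeConfigReader(new Config { Theme = "dark" }));
        Assert.IsTrue(accepted);
        Assert.AreEqual("dark", settings.Current.Theme);
    }

    [Test]
    public void TryApply_RejectsConfig_WhenConfigIsInvalid()
    {
        var settings = new UserSettings();
        Assert.IsFalse(settings.TryApply(new FakeConfigReader(new Config())));
    }
}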
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work perfectly. Additional tests, for example end-to-end (E2E) tests, aren't necessarily needed; I use them only to make sure the whole application flow works and to catch errors easily (e.g. an HTTP connection error).

dart unittest suite never ending on drone.io

I have continuous integration on drone.io for my Dart projects. Normally there aren't any issues with this, apart from actual bugs in my code, but my latest tests all pass and the test suite reports that it completed successfully, yet the drone.io test runner never exits; it just keeps running until it times out and reports the build as failed. Has anyone else seen something similar to this, or know how to fix it? Here is the build; if you kick off a new build from the big-refactor-and-enhancement branch, that is where it has this odd behaviour.
After a quick look at your code, I would bet that the server launched under the covers is not shut down. You should add a close() method to it and call that in _tearDown().

Using Post-Build Event To Execute Unit Tests With MS Test in .NET 2.0+

I'm trying to set up a post-build event in .NET 3.5 that will run a suite of unit tests with MSTest. I found this post that shows how to call a bat file using MbUnit, but I'm wanting to see if anyone has done this type of thing with MSTest.
If so, I would be interested in a sample of what the bat file would look like.
We were using NUnit in the same style and decided to move to MSTest. When doing so, we just added the following to our Post-Build event of the applicable MSTest project:
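REM Change to the build output folder and run every test in the freshly built assembly.
REM A failing test makes MSTest return a non-zero exit code, which in turn fails the build.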
CD $(TargetDir)
"$(DevEnvDir)MSTEST.exe" /testcontainer:$(TargetFileName)
The full set of MSTest command line options can be found at the applicable MSDN site.
Personally I would not recommend running unit tests as part of the compilation process. Instead, consider something like ReSharper (plus its unit test runner, or whatever they call it nowadays) or some other GUI runner.
Instead of doing it in a post-build event, which will happen every time you compile, I would look at setting up a continuous integration server like CruiseControl.NET. It'll give you a tight feedback cycle without blocking your work by running tests every time you build your application.
If you want to run the set of tests you are currently developing, Anton's suggestion of using ReSharper will work great. You can create a subset of tests to execute when you wish, and it's smart enough to compile for you if it needs to. While you're there picking up the demo, if you don't already have a license, pick up TeamCity as well. It is another CI server that shows some promise.
If you want to use this method to control build quality, you'll probably find that as the number of tests grows, you no longer want to wait for 1000 tests to run each time you press F5 to test a change.