I'm debugging a set of WCF services. Initially, I created some unit tests, but since I'm using threading I often receive "Aborted" or "Stopped" tests without any clear explanation why (this is a known bug in Visual Studio).
I found it extremely challenging to debug the services when I couldn't even read the log output, so I quickly wrote a custom Assert class and converted all the unit tests to console applications. This way I was able to immediately fix a huge number of simple problems that had been hard or impossible to track down before.
So I'm wondering: is it a good idea to write unit tests as (fully automated) console apps first and convert them to real tests (ones that execute when you launch the unit tests in VS) later?
If you want to stick with the standalone console app, you can have a one-size-fits-all approach:
• Change the application type of the MsUnitTest (or NUnitTest) project to "Console application".
• Add a public static void Main() that calls the unit tests you are interested in.
This EXE can run on its own, or it can run inside the unit test IDE.
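The idea itself is language-agnostic. As a minimal sketch of the same standalone-runner pattern with a hand-rolled Assert (all names here are hypothetical, not from the question), in C++ it could look like this; returning a nonzero exit code on failure also makes the EXE easy to wrap with other tooling later:

    #include <cstdio>
    #include <functional>
    #include <stdexcept>
    #include <string>
    #include <utility>
    #include <vector>

    // Hand-rolled assert: throw on failure so the runner can log and keep going.
    static void AssertTrue(bool condition, const std::string& message) {
        if (!condition) throw std::runtime_error(message);
    }

    // A test you are interested in (hypothetical; replace with real calls).
    static void ServiceRespondsToPing() {
        AssertTrue(2 + 2 == 4, "service did not respond");
    }

    int main() {
        // Register the tests you want this runner to execute.
        const std::vector<std::pair<std::string, std::function<void()>>> tests = {
            { "ServiceRespondsToPing", ServiceRespondsToPing },
        };
        int failures = 0;
        for (const auto& test : tests) {
            try {
                test.second();
                std::printf("PASS %s\n", test.first.c_str());
            } catch (const std::exception& e) {
                std::printf("FAIL %s: %s\n", test.first.c_str(), e.what());
                ++failures;
            }
        }
        return failures; // nonzero exit code signals failure to callers
    }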
I prefer a standalone console runner, as described in how-do-i-use-mstest-without-visual-studio.
I have an application with a mess of code and a bunch of non-isolated components, which makes unit testing difficult. So along with some unit tests in their own separate test DLL, I'm trying to also create some tests within the application DLL. The application DLL is normally invoked from an application EXE.
For some background, this code is 20+ years old and written in native C++. I cannot execute the tests in the DLL directly because the framework is not set up, so any calls executed within the DLL will not run correctly. I've already tried this without success, but maybe I need a more fundamental understanding of the MFC framework to pull it off.
A colleague suggested that it might be possible to have vstest.console run the tests through the EXE, where the framework can be brought up; the test calls would be forwarded to the DLL, and the results returned through the EXE back to vstest.console, effectively making the EXE a proxy of sorts.
I'm thinking this might be a long shot, but I'm at a loss as to how I can run the tests in the DLL properly. Could it be done? Is there a better way?
For a legacy EXE, you may use a Generic Test (for a console app) or a Coded UI Test (for a GUI app). Technically, a Generic Test or Coded UI Test is a system-level test. You can still get code coverage for both kinds of test.
More on Generic Test
Use a generic test to wrap a console app or test harness console that
• Runs from a command line
• Returns error code: 0 <- Pass; Nonzero <- Fail
• Positive tests only for a console app; a test harness may include negative tests within.
Visual Studio treats a generic test just like any other test, but:
• You add Generic Tests to a Unit Test project
• The command line must run the .GenericTest file instead of UnitTest1.dll:
• vstest.console GenericTest1.GenericTest
NOTE: Set the “run duration” long enough for your EXE.
I've got some unit tests (C++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means it should never matter whether the tests execute sequentially or in parallel.
Of course there are test cases that expect some fundamental piece of code to work; this might be your own code or part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects let you mimic a part of the code and ensure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services such as HTTP requests, file streams, etc.
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names for NUnit, MSTest, and xUnit.net can be found on the xUnit.net CodePlex page.
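As an illustration, in the native C++ test framework used in the question above, the per-test hooks look roughly like this (the class and test names here are made up):

    #include "CppUnitTest.h"
    using namespace Microsoft::VisualStudio::CppUnitTestFramework;

    TEST_CLASS(UserConfigTests)
    {
    public:
        // Runs before every TEST_METHOD: build fresh fixtures here so
        // tests never depend on each other or on execution order.
        TEST_METHOD_INITIALIZE(SetUp)
        {
        }

        // Runs after every TEST_METHOD: release whatever SetUp created.
        TEST_METHOD_CLEANUP(TearDown)
        {
        }

        TEST_METHOD(ReadsDefaults)
        {
            Assert::IsTrue(true); // replace with a real assertion
        }
    };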
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this:
have a method to read config and second one to verify it
have a getter/setter for user's settings
test read method if it returns desired result (object, string or however you've designed it)
create mock config which you're expecting from read method and test if method accepts it
at this point, you should create multiple mock configs covering all possible scenarios, check that the method handles each of them, and fix it accordingly; this is also what drives up your code coverage
create a mock object of an accepted config and use the setter to update the user's config, then use the getter to check that it was set correctly
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, then all these parts, connected together, should work. Additional tests, for example End-to-End (E2E) tests, aren't strictly needed; I use them only to make sure the whole application flow works and to catch errors easily (e.g. an HTTP connection error).
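To sketch the read/verify steps above in code (a rough illustration only; the interface and all names are hypothetical, and the "mock" is hand-rolled rather than taken from a mocking library):

    #include <cassert>
    #include <string>

    // Hypothetical interface so the real file reader can be swapped for a mock.
    struct IConfigSource {
        virtual std::string Read() = 0;
        virtual ~IConfigSource() = default;
    };

    // Mock that returns a canned config instead of touching the file system.
    struct MockConfigSource : IConfigSource {
        std::string contents;
        std::string Read() override { return contents; }
    };

    // Code under test: accept any source and verify its contents.
    static bool IsValidConfig(IConfigSource& source) {
        return source.Read().find("version=") != std::string::npos;
    }

    int main() {
        MockConfigSource good;  good.contents = "version=1";
        MockConfigSource bad;   bad.contents  = "";
        assert(IsValidConfig(good));   // valid config is accepted
        assert(!IsValidConfig(bad));   // invalid config is rejected
        return 0;
    }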
We are just about to start a new project. The Proof of Concept (PoC) for this project was done simply using Win32. The plan is/was to flesh out the PoC, tidy the uglier parts and meet the requirements set by the project owners.
One of the requirements for the actual project is 100% code coverage, but I can see problems ahead: how can I achieve 100% code coverage with Win32? The message pump will be exceptionally difficult to test effectively! I could compile to a DLL, but won't there be code in the main app that won't be under coverage?
I am thinking of dropping the Win32 code and moving to MFC; at least then a lot of the boilerplate will be hidden from view (and therefore from coverage).
Any thoughts on the problem?
I mean WndProc, but the same applies for WinMain. How can you unit-test that?
I do testing but not unit-testing: I do system/integration testing.
If you exercise your (whole) application while it's running under a debugger/profiler/code coverage analyser, then of course you will find (and the coverage analyser will show) that WinMain etc. are being run (are being covered).
The question then might be: how do you automate the system/integration testing of the whole application? You might have a test framework that automates driving the GUI; I don't know of any myself, but for example there's a list here. Alternatively, it might be acceptable (to the client) for the acceptance test suite to be a sequence of non-automated/manual tests.
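As a rough sketch of what "driving the GUI" from a harness can look like using nothing but the Win32 API (the window title and command ID below are hypothetical placeholders, not from the question):

    #include <windows.h>
    #include <cstdio>
    #include <cwchar>

    int main() {
        // Find the running app's main window by its (hypothetical) title.
        HWND hwnd = FindWindowW(nullptr, L"My Application");
        if (!hwnd) { std::printf("application is not running\n"); return 1; }

        // Drive the GUI: send a (hypothetical) menu command, as a user would.
        const WORD IDM_FILE_NEW = 0xE100;
        SendMessageW(hwnd, WM_COMMAND, MAKEWPARAM(IDM_FILE_NEW, 0), 0);

        // Check an observable effect, e.g. that the title bar changed.
        wchar_t title[256] = {};
        GetWindowTextW(hwnd, title, 256);
        return wcscmp(title, L"Untitled - My Application") == 0 ? 0 : 1;
    }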
See also Should one test internal implementation, or only test public behaviour?
I am attempting to add some tests to an existing Qt GUI application using QTest. The GUI uses quite a bit of complicated start-up code, so I'd rather not write another main() to start it again.
To me, it seems like the easiest way would be to instantiate the app and then run the tests on it. I am just not sure, however, what function I could plug my test object into so that I don't block the flow of messages.
I could send a special message to start the test or set a timer but that's complicated and tests are supposed to simplify things.
So where would be the best place in the existing GUI to insert and qExec a QTest object?
I'm willing to be proven wrong, but test frameworks in general (and QTest specifically, from what I've used of it) tend to assume that the test framework will be driving the code under test, as opposed to running alongside it.
I'm also concerned about "The GUI uses quite a bit of complicated start-up code...." Are you intending on testing the startup code? Or testing other stuff around it?
Generally speaking, when I start looking at adding tests to an application, I try to find smaller pieces that are used in a lot of the application, and write tests for those. I then build up to testing the bigger pieces that integrate those smaller ones. My general idea is that if the small pieces work properly, then either I've put them together correctly and things should work, or I haven't and things should obviously fail when I try to run the application.
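To make that concrete, here is a minimal QTest sketch that tests one such small extracted piece, with the framework in the driver's seat (the helper, class, and moc file names are all hypothetical):

    #include <QtTest>
    #include <QString>

    // A small, isolated piece pulled out of the application for testing.
    static int parseRetryCount(const QString& text) {
        bool ok = false;
        const int n = text.toInt(&ok);
        return ok ? n : 0; // fall back to 0 on bad input
    }

    class TestStartupHelpers : public QObject {
        Q_OBJECT
    private slots:
        void parsesNumbers()  { QCOMPARE(parseRetryCount("3"), 3); }
        void rejectsGarbage() { QCOMPARE(parseRetryCount("x"), 0); }
    };

    QTEST_APPLESS_MAIN(TestStartupHelpers)
    #include "teststartuphelpers.moc" // moc file name matches this .cpp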
I should mention that there are other options for testing GUIs for Qt applications in particular. They tend to be more like scripts run on your program, with the output recorded. If that interests you, then you could look into Squish for Qt.
I'm trying to set up a post-build event in .NET 3.5 that will run a suite of unit tests with MSTest. I found a post that shows how to call a bat file using MbUnit, but I'm wondering whether anyone has done this type of thing with MSTest.
If so, I would be interested in a sample of what the bat file would look like.
We were using NUnit in the same style and decided to move to MSTest. When doing so, we just added the following to our Post-Build event of the applicable MSTest project:
CD $(TargetDir)
"$(DevEnvDir)MSTEST.exe" /testcontainer:$(TargetFileName)
The full set of MSTest command line options can be found at the applicable MSDN site.
Personally, I would not recommend running unit tests as part of the compilation process. Instead, consider something like ReSharper (plus the appropriate unit test runner, or whatever they call these nowadays) or some other GUI runner.
Instead of doing it in a post-build event, which will happen every time you compile, I would look at setting up a Continuous Integration server like CruiseControl.NET. It'll give you a tight feedback cycle without blocking your work by running tests every time you build your application.
If you want to run the set of tests you are currently developing, Anton's suggestion of using ReSharper will work great. You can create a subset of tests to execute whenever you wish, and it's smart enough to compile for you if it needs to. While you're there picking up the demo, if you don't already have a license, pick up TeamCity as well. It is another CI server that shows some promise.
If you want to use this method to control build quality, you'll probably find that as the number of tests grows, you no longer want to wait for 1000 tests to run each time you press F5 to test a change.