I've been working with Smart Unit Tests (formerly Pex) for some time now. Pex had the ability to run as a standalone, command-line application, which was really useful in several scenarios (e.g. extending the parameter list of a Parameterized Unit Test, or PUT).
However, with Smart Unit Tests (integrated in VS2015) I was not able to find a way to run it standalone (it only works by right-clicking the method to be analyzed). So, for example, when I want to extend the parameter list of a PUT, I also have to change the method under test (which I really want to avoid) in order to have the change picked up by Smart Unit Tests.
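For illustration, a generated PUT looks roughly like the sketch below. The Calculator class and all names here are made up; only the [PexClass]/[PexMethod] attributes come from the Pex framework. Extending the parameter list of AddTest is exactly the kind of edit I want to re-run the exploration from, without touching the code under test.

using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// hypothetical code under test
public static class Calculator
{
    public static int Add(int a, int b) { return a + b; }
}

[PexClass(typeof(Calculator))]
[TestClass]
public partial class CalculatorTest
{
    // the PUT: Pex/Smart Unit Tests explores this method with generated inputs
    [PexMethod]
    public int AddTest(int a, int b)
    {
        int result = Calculator.Add(a, b);
        Assert.AreEqual(a + b, result);
        return result;
    }
}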
So my question is: is there any way to run Smart Unit Tests directly from the generated PUT method, as was possible in Pex?
Please try the VS 2015 RC build. You should be able to 'Run IntelliTest' directly from the PUT.
Yes, running an IntelliTest exploration is supported from both the code under test and from the generated PUT.
Please note that starting with the VS 2015 RC build "Smart Unit Tests" has been renamed to "IntelliTest".
I'm a new NUnit user, using NUnit 3.9 under Visual Studio Community 2017. I'm using it on a pet open source library project, and it has been going well since I got the hang of it.
The library accesses a publicly available government website via a documented API. Most of my tests use local data, so that I have a stable bed to compare against, and so that I can test without going out to the website every time.
I would like to set it up so that normally, the tests that hit the server do not run. I run the tests over and over as I tweak the code, and just as a matter of courtesy, don't want to bang on the server. Also, I'd like to be able to test even when the remote system is down or when I don't have Internet access.
Is there any way to group or tag my tests so that normally only the ones using local data run, but that I can still, when necessary, run the ones that exercise the server access? Either specifying "run these" or "exclude these" would be fine.
I've grouped the tests into two different classes, UnitTestOffline.cs and UnitTestOnline.cs, and was hoping I could somehow run the tests on a class-by-class basis, but haven't found a way to do that.
You'll get better answers if you say specifically how you run your tests, since there are a number of ways to do it. Since you mention VS2017, I'm going to assume that you are using the NUnit 3 VS Adapter, but let us know if you are using some other approach.
In the VS adapter, use the dropdown to display your tests by class. Right click on the class for which you want to run tests and run them.
If you decide to categorize tests using the CategoryAttribute, you can display tests by "trait" in Visual Studio. As before, right click on the group you want to run tests for and run them.
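For example, you could tag the server-hitting fixture from the question like this (the test name is made up):

using NUnit.Framework;

// the whole fixture is tagged, so every test in it inherits the category
[TestFixture, Category("Online")]
public class UnitTestOnline
{
    [Test]
    public void Search_ReturnsResults_FromLiveServer()
    {
        // ... the code that actually hits the government API ...
    }
}

Test Explorer will then show "Online" under Traits, and you can right-click that group to run or exclude it.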
If you get a lot of tests, you might want to put your unit tests in one assembly and your integration tests in another. In that case, display the tests by project, right click on the project you want and run them.
All of this can be done using the nunit3-console command-line runner as well. To select by class or category, use the --where option. To select by assembly, you merely enter the name of the assembly you want on the command line.
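For instance, assuming the tests are in MyLibrary.Tests.dll and use the "Online" category shown above, the first command below skips the server tests, the second runs only them, and the third selects a single class by its full name:

nunit3-console MyLibrary.Tests.dll --where "cat != Online"
nunit3-console MyLibrary.Tests.dll --where "cat == Online"
nunit3-console MyLibrary.Tests.dll --where "class == 'MyLibrary.Tests.UnitTestOffline'"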
Seems like you want to categorize your tests (unit tests, integration tests, ...) and run only the unit tests; you could use [Category] for that.
In the NUnit GUI you can then /include or /exclude categories and run only the ones you want.
The filtering in Visual Studio will probably work as well.
Also try the solutions suggested here.
We use SWTBot for writing functional tests. Some cases are very difficult to test, and some programmers call classes and their methods directly from the implementation (for example, methods of the class AddUserDialog). Is this a good approach? And why?
And the next question, please: is SWTBot enough for testing an Eclipse RCP based application? Is it necessary to write unit tests as well?
Note: we are a Scrum team.
SWTBot and JUnit serve two different purposes.
JUnit
As the name implies, JUnit is meant for unit testing. Unit tests should be small and fast to execute. They test only a single unit of code and the above mentioned attributes allow them to be executed often while developing the unit under test.
But there is more to (good) unit tests. You may want to read one of the following posts for further attributes of unit tests:
Key qualities of a good unit test
What attribute should a good Unit-Test have?
I would go one step further and say that unit tests only make sense with TDD, that is, when you write the test before the production code. Otherwise you neglect the tests. Who wants to make the extra effort of writing tests for something that already works? And even if you have the discipline to write the tests afterwards, they merely manifest the current state of your production code. Writing the tests beforehand, as in TDD, leads to lean production code that only does what is required by the tests.
But I guess that's something not everyone will agree on.
In an RCP setting, unit tests would ideally be able to run without starting the platform (which takes rather long). That is, they should not need to run as PDE JUnit tests but as plain JUnit tests. Therefore the unit under test should be isolated from the RCP APIs.
On a related note, see also this question: How to efficiently JUnit test Eclipse RCP Plugins
SWTBot
While SWTBot uses the JUnit runtime to execute the tests, it is rather meant as a utility to create integration or functional tests. SWTBot, when used with RCP, starts the entire workbench and runs all tests within the same instance. Therefore great care should be taken to ensure that each test leaves the environment in the same state as it was before the test started. Specialized Rules may help here to set up and tear down a particular recurring scenario.
It is perfectly valid in order to setup an SWTBot test to call methods from your application. For example, you could programmatically open the wizard and then use SWTBot to simulate a user that enters data and presses the OK button. There is no need to use SWTBot to laboriously open the wizard itself.
In my experience, SWTBot can even be too much for simple use cases. Consider a test that should enter some data into a dialog and then press OK. If you have already opened the dialog programmatically, you can just as well continue without SWTBot:
// simulate the user typing into the dialog's text field
dialog.textField.setText( "data" );
// fire the selection event that pressing the OK button would produce
dialog.okButton.notifyListeners( SWT.Selection, null );
// verify that the dialog captured the entered data (AssertJ assertion)
assertThat( dialog.getEnteredData() ).isEqualTo( "data" );
Use Both
The best bet is to have both, unit tests that ensure the behavior of the respective units and functional tests that make sure that the particular units play together as desired.
Not sure if that answers the question, if you have further concerns please leave a comment.
I am working on a project where I decided to use unit tests. This was new to me, but after researching I feel pretty confident I am doing it correctly, creating mock objects and testing that the correct methods are called. This is working great, but now I would like to actually run some tests that use the actual database and external components. How should I go about testing the actual execution of code? I do not want these tests to run when I run all tests. Is there a way to accomplish this using the built-in testing in VS2012?
Not that I'm aware of.
What you could do is create a separate project for your integration tests and then, in the Test Explorer, separate your tests by class or some other logical separation.
The current implementation seems to have mainly unit testing in mind, but that should change with VS2013 and its stronger push towards TDD and Agile development.
Have a look here: http://msdn.microsoft.com/en-us/library/ms243147(v=vs.80).aspx
Scroll down to "Attributes for Identifying and Sorting Tests". There might be something there that's useful that you can use.
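In particular, [TestCategory] (available since VS2010) lets you tag the database-backed tests so they stay out of your normal runs. A sketch with made-up names:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RepositoryIntegrationTests
{
    [TestMethod]
    [TestCategory("Integration")] // group by Traits in Test Explorer to run or skip these
    public void Save_WritesRowToRealDatabase()
    {
        // ... exercises the actual database and external components ...
    }
}

Test Explorer can then group by Traits, and mstest.exe accepts a /category:Integration filter on the command line.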
I'm working on a variation of this Stack Overflow answer that provides reliable cleanup of tests. How do you write unit tests for NUnit addins?
Examining how NUnit self tests, I have determined:
You can write tests, that pass, that verify correct behavior of NUnit for failing tests.
You write unit tests against test fixtures in a separate assembly (otherwise the fixtures under test will execute with your unit tests)
Use NUnit.TestUtilities.TestBuilder to create fixtures and call the TestSuite.Run method (see the sketch below).
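For example, roughly like this. This is a sketch modeled on NUnit 2.x internals; TestBuilder's exact API differs between NUnit versions, so verify the method names against the NUnit source you build against. DeliberatelyFailingFixture stands for one of the fixtures in the separate assembly:

using NUnit.Core;
using NUnit.Framework;
using NUnit.TestUtilities;

[TestFixture]
public class AddinBehaviorTests
{
    [Test]
    public void DeliberatelyFailingFixture_IsReportedAsFailure()
    {
        // build the fixture from the separate "fixtures under test" assembly
        TestSuite suite = TestBuilder.MakeFixture(typeof(FixturesUnderTest.DeliberatelyFailingFixture));
        TestResult result = suite.Run(NullListener.NULL, TestFilter.Empty);
        // the fixture fails on purpose; our test passes when NUnit reports that failure
        Assert.IsTrue(result.IsFailure);
    }
}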
What I don't see are any tests of the add-in process. I've got errors occurring somewhere between install and execution. How would I unit test implementations of the following?
IAddin.Install
ITestDecorator.Decorate
Here's an article by someone who hacked a way to do it: manipulating some of the singletons in the NUnit add-in implementation to swap his add-in in and out.
http://www.bryancook.net/2009/09/testing-nunit-addins-from-within-nunit.html
Sometimes, the easiest thing to do is run integration tests. It's been a while since I played with the NUnit add-in API, so I can't really say whether there are any existing unit tests for the extensibility mechanism. If you have looked through the NUnit source code and haven't found any, then I guess that is not something that was tested, or even written using TDD.
Like I said, sometimes it's easier to just run integration tests. Have your addon, for example, print something to the output stream, and have your test verify that the exact message was written. This way you could test that both the installation and initialization of your plugin succeeded.
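A minimal sketch of that idea, assuming the addin prints a known marker line when Install succeeds; MyCleanupAddin and the marker text are made up:

using System;
using System.IO;
using NUnit.Core;
using NUnit.Framework;

[TestFixture]
public class AddinInstallSmokeTest
{
    [Test]
    public void Install_PrintsMarker()
    {
        var captured = new StringWriter();
        TextWriter original = Console.Out;
        Console.SetOut(captured); // capture everything the addin writes to Console
        try
        {
            var addin = new MyCleanupAddin(); // hypothetical IAddin implementation
            Assert.IsTrue(addin.Install(CoreExtensions.Host));
        }
        finally
        {
            Console.SetOut(original);
        }
        StringAssert.Contains("MyCleanupAddin installed", captured.ToString());
    }
}

Note that installing into the live CoreExtensions.Host from inside a test run is exactly the kind of singleton manipulation the article above describes, so undo any global state the addin changes.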
Hope that helps...
You know how they say, "There's an app for that"? Well, is there a VS plugin for this...?
I want to be able to right click on a method, select "Create unit test method ...", and have it generate an NUnit stub in a particular place in my project tree. So, for example, I have a TheNextBigThing library with an Idea class and a MakeMeRich() method. I want it to create a unit test method in my Tests project, in a sub-folder named TheNextBigThing, in a class named IdeaTests.
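Using the names from the question, the generated stub might look something like this (purely illustrative):

using NUnit.Framework;
using TheNextBigThing;

namespace Tests.TheNextBigThing
{
    [TestFixture]
    public class IdeaTests
    {
        [Test]
        public void MakeMeRich_ReturnsExpectedResult()
        {
            // TODO: arrange an Idea, call MakeMeRich(), assert on the outcome
            Assert.Fail("Write this test");
        }
    }
}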
I know. I know. All the TDD advocates will tell me I'm doing it backward, but humor me. I have some code I want to retrofit with some tests, and I sometimes write methods before tests.
If it doesn't exist, any pointers on how to write it myself?
If you are running Visual Studio 2010 Professional or Premium you have the option to create a unit test with MSTest by right clicking on the method.
Also, I would suggest using Pex. Pex will create the unit test for you, in addition to all unit tests needed to achieve 100% code coverage of a particular method.
It wouldn't be too hard to get the addin started. Since you already have VS, simply create a new project > VS Extensibility. :)
You'll most likely have to learn some codegen, unless you utilize a templating language of some sort.
Me? I think it's a cool idea and I'd like to see it implemented. Start it up, share it on github (or similar) and watch it grow.