I've got a bunch of unit tests built using Visual Studio 2005's built-in unit testing functionality. For the last little while, it's been taking absolutely forever to start the tests... Everything just sits there at "Pending" for two minutes or more. Now Visual Studio's decided to take things to a new level and never even start the tests. After two to three minutes, it aborts the run and barfs an error message into the log:
Failed to Queue Test Run '(blahblahblah)' with id {bfba05b1-afe5-499e-b452-29167f414f0f}: Microsoft.VisualStudio.TestTools.TestManagement.ExecutionException: Failed to establish communication environment for local run.
Anyone have any ideas? In the medium term, we are planning to switch to NUnit, but for now I'd prefer to stick to Visual Studio because the rest of the team already has that installed and that makes it easier to convince them to actually run the tests... ;-)
Is your computer name UPPERCASE? Change it to lowercase and try it again.
I've experienced this problem. However, for me the answer was to change my machine name from all lowercase to all UPPERCASE.
Some references for this...
http://teamfoundation.blogspot.com/2008/12/case-of-never-ending-unit-tests.html
http://social.msdn.microsoft.com/Forums/en-US/vststest/thread/fd6f2128-e248-4336-b8be-1eb5480e3de8/
Note that if you're just changing the case of the machine name, you'll need to use the registry method of renaming, as the dialog box greys out the OK button when it sees what looks like the same name.
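For reference, this is roughly what the registry rename looks like from an elevated command prompt (the key and value names below are from memory, so verify them on your own machine before changing anything; MYMACHINE is a placeholder for your actual name):

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\ComputerName\ComputerName" /v ComputerName /t REG_SZ /d MYMACHINE /f
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v "NV Hostname" /t REG_SZ /d MYMACHINE /f

Reboot afterwards so the active computer name picks up the change.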
Let me say this: MSTest is not a professional product and should be avoided whenever possible. If you want to use a good testing framework, use MBUnit with TestDriven.NET. MBUnit comes with many additional attributes, including [Rollback], [RowTest], and [Row].
Also, TestDriven.NET will allow you to debug your unit tests (how cool is that?). You can also run code coverage from TestDriven.NET, which shows you how much of your code is covered by tests.
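As a rough illustration of those row-testing attributes (the fixture and values below are made up; the attributes are MBUnit's):

    using MbUnit.Framework;

    [TestFixture]
    public class ArithmeticTests
    {
        // [RowTest] runs the test once per [Row]; the row values bind to the parameters
        [RowTest]
        [Row(1, 2, 3)]
        [Row(-1, 1, 0)]
        public void AddsTwoNumbers(int a, int b, int expected)
        {
            Assert.AreEqual(expected, a + b);
        }
    }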
Give it a try; I'm sure you'll like it.
PS: It is FREE!
I'm a new NUnit user, using NUnit 3.9 under Visual Studio Community 2017. I'm using it on a pet open-source library project, and it's going well now that I've got the hang of it.
The library accesses a publicly available government website via a documented API. Most of my tests use local data, so that I have a stable bed to compare against, and so that I can test without going out to the website every time.
I would like to set it up so that normally, the tests that hit the server do not run. I run the tests over and over as I tweak the code, and just as a matter of courtesy, don't want to bang on the server. Also, I'd like to be able to test even when the remote system is down or when I don't have Internet access.
Is there any way to group or tag my tests so that normally only the ones using local data run, but that I can still, when necessary, run the ones that exercise the server access? Either specifying "run these" or "exclude these" would be fine.
I've grouped the tests into two different classes, UnitTestOffline.cs and UnitTestOnline.cs, and was hoping I could somehow run the tests on a class-by-class basis, but haven't found a way to do that.
You'll get better answers if you say specifically how you run your tests, since there are a number of ways to do it. Since you mention VS2017, I'm going to assume that you are using the NUnit 3 VS Adapter, but let us know if you are using some other approach.
In the VS adapter, use the dropdown to display your tests by class. Right click on the class for which you want to run tests and run them.
If you decide to categorize tests using the CategoryAttribute, you can display tests by "trait" in Visual Studio. As before, right click on the group you want to run tests for and run them.
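For example, using the two classes you already have (the "Online" category name is just an arbitrary label I've picked):

    using NUnit.Framework;

    [TestFixture]
    public class UnitTestOffline
    {
        [Test]
        public void ParsesLocalData()
        {
            // uses canned local data; runs in every test run
        }
    }

    [TestFixture, Category("Online")]  // the category applies to every test in the fixture
    public class UnitTestOnline
    {
        [Test]
        public void FetchesLiveDataFromServer()
        {
            // hits the real server; run only when explicitly included
        }
    }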
If you get a lot of tests, you might want to put your unit tests in one assembly and your integration tests in another. In that case, display the tests by project, right click on the project you want and run them.
All of this can also be done using the nunit3-console command-line runner. To select by class or category, use the --where option. To select by assembly, merely enter the name of the assembly you want on the command line.
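A few hedged examples of that selection, assuming the "Online" category shown earlier and an assembly named MyTests.dll:

    nunit3-console MyTests.dll --where "cat != Online"
    nunit3-console MyTests.dll --where "cat == Online"
    nunit3-console MyTests.dll --where "class == 'MyProject.UnitTestOnline'"

The first line is the everyday run that skips the server tests; the second and third run only the online tests, selected by category or by class.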
Seems like you want to categorize your tests (unit tests, integration tests...) and run only the unit tests; you could use [Category] for that.
In the NUnit GUI you can then /include or /exclude those categories and run only the ones you want.
The test filtering in Visual Studio would probably work too.
Try one of the solutions suggested here as well.
Roy Osherove, author of The Art Of Unit Testing, has commented on a blog that of the many things NUnit supposedly does better, it being much faster is one of them.
My question is how much faster though, if at all? Are we talking an order of magnitude? 10%? 50%?
I'm asking this because for the moment I can't compare the two. I am trying to set up my test project in a dual mode so that I can switch between them. Unfortunately, I am having a problem with NUnit integrating with the latest version of Microsoft Moles, and NUnit is also conflicting with a third-party library (it appears log4net-related).
So far MSTest seems much easier to use within Visual Studio 2008. All of the version issues and compatibility problems with NUnit (at least for me) are steering me towards choosing MSTest as the framework for the project (though I may keep the dual-mode option). Another plus for MSTest is that I can still use most of NUnit's asserts with:
using Assert = NUnit.Framework.Assert;
using Is = NUnit.Framework.Is;
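To make that concrete, here's a minimal sketch of what such a test class could look like (the class and method names are invented):

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Assert = NUnit.Framework.Assert;
    using Is = NUnit.Framework.Is;

    [TestClass]
    public class CalculatorTests
    {
        // MSTest discovers and runs this; the assertion syntax is NUnit's
        [TestMethod]
        public void TwoPlusTwoIsFour()
        {
            Assert.That(2 + 2, Is.EqualTo(4));
        }
    }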
But... if NUnit truly is much faster, then I'd prefer to use it, despite the pain points.
Lastly, has there been any speed improvement in VS2010 for MSTest?
Well, I took the time to remove Microsoft Moles and the production code that dealt with the log4net-related issues so that I could actually compare the two. Then I ran tests in both MSTest 2008 and NUnit 2.5.2.
What I found is that MSTest reports the duration of each individual test better. If a test is fast enough, NUnit logs it as taking 0 seconds, both in the TestResults.xml file and in the GUI when one clicks on a test's properties. Nonetheless, I tried to compare the sums of all the test times; in some cases NUnit is faster and in other cases MSTest is, and when one is faster than the other it's by about 30%.
Now, where NUnit definitely seems faster is the wait time before the unit tests run. When I attach the GUI (or console) to the VS debugger and run the test project, it takes about 3-6 seconds for NUnit to launch and load before the tests can execute. With MSTest it takes between 15-20 seconds, and it doesn't matter whether there is just 1 test or 26; the load time seems to be the same. As for how these wait times scale as the test project gets bigger, say into the thousands of unit tests, I can't comment on the difference, though I'd be very interested in knowing.
There also seems to be a little bit of delay in MSTest while the tests are running, perhaps to update the results pane. Thus, I suspect when people say that NUnit is much faster than MSTest, it is because of the loading and updating delays, but the actual test execution time appears to be very similar.
I have used both MSTest and NUnit (the former more than the latter), and I can't say I noticed any big difference between the two of them regarding speed (don't get me wrong, the difference might be there; I just haven't noticed it).
The reason I chose MSTest is its integration with Visual Studio, as it makes things a lot simpler. Additionally, at one point I had some issues running a couple of tests because the NUnit tests ran in a different thread apartment than the MSTest ones.
I am seriously having a very non-pleasant time testing using Grails. I will describe my experience, and I'd like to know if there's a better way.
The first problem I have with testing is that Grails doesn't give immediate feedback to the developer when .save() fails inside of an integration test. So let's say you have a domain class with 12 fields, and 1 of them is violating a constraint and you don't know it when you create the instance... it just doesn't save. Naturally, the test code afterward is going to fail.
This is most troublesome because the thingy under test is probably fine... and the real risk and pain is the setup code for the test itself.
So, I've tried to develop the habit of using .save(failOnError: true) to avoid this problem, but that's not something that can be easily enforced on everyone working on the project... and it's kind of bloaty. It'd be nice to have this turned on automatically for code running as part of a unit test.
Integration Tests run slow. I cannot understand how 1 integration test that saves 1 object takes 15-20 seconds to run. With some careful test planning, I've been able to get 1000 tests talking to an actual database and doing dbunit dumps after every test to happen in about the same time! This is dumb.
It is hard to run all the unit tests and not integration tests in IDEA.
Integration tests are a massive pain. IDEA actually shows a GREEN BAR when integration tests fail. The output given by Grails indicates that something failed, but it doesn't say what. It says to look in the test reports... which forces the developer to go hunting through the file system for the stupid HTML file. What a pain.
Then, once you've got the HTML file and click through to the failing test, it'll tell you a line number. Since these reports are not in the IDE, you can't just click the stack trace to go to that line of code... you've got to go back and find it yourself. ARGGH!#!#!
Maybe people put up with this, but I refuse. Testing should not be this painful. It should be fast and painless, or people won't do it.
Please help. What is the solution? Rails instead of Grails? Something else entirely? I love the Grails framework, but they never demo their testing for a reason. They have a snazzy framework, but the testing is painful.
After having used Scala for the last 1.5 months, and being totally spoiled by ScalaTest... I can't go back to this.
You can set this property in your config file:
grails.gorm.failOnError=true
That will make it a system wide default for save (which you can override with .save(failOnError: false) if you want).
If you only want this behavior in the test environment, you can put it in the environment-specific stanza in Config.groovy. I actually like it as a project-wide behavior, though.
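A minimal sketch of that stanza (assuming a Grails version, 1.2 or later, that supports the grails.gorm.failOnError setting):

    // in Config.groovy: fail fast on save() only while tests run
    environments {
        test {
            grails.gorm.failOnError = true
        }
    }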
I'm sure there's a way you could turn failOnError on and off within a defined scope, but I haven't investigated how to do it yet (might be a good blog post; I'll update this if I write one).
I'm not sure what you've got misconfigured in IDEA, but it shows me a red bar when my tests fail, and I can click on the lines in the stack trace and get right to the issues. The latest version of IntelliJ even collapses the majority of metaclass cruft that isn't interesting when you're trying to fix issues.
If you haven't done this already to generate your project, I'd try wiping away your existing .ipr/.iml/.iws/.idea files and running this command to have grails regenerate your configuration:
grails integrate-with --intellij
Then run the .ipr file that gets generated.
I've been using MSTest so far for my unit tests, and found that it would sometimes randomly break my builds for no reason. The builds would fail in VS but compile fine in MSBuild, with an error like 'option strict does not allow IFoo to cast to type IFoo'. I believe I have finally fixed it, but after the bug came back and I struggled to make it go away again, with little help from MS, it left a bad taste in my mouth. I also noticed, looking at this forum and other blogs and such, that most people are using NUnit, xUnit, or MBUnit... We are on VS2008 at work, BTW. So now I am looking to explore other options.
I'm working on moving our team to start doing TDD and real unit testing and have some training planned, but first I would like to come up with a set of standard tools & best practices. To this end I've been looking online to put together the right infrastructure for both a build server and dev machines... I was looking at the Typemock website, as I've heard great things about their mocking framework, and noticed that they seem to promote MSTest, and even have some links about people moving TO MSTest from NUnit.
This is making me re-think my decision... so I guess I'm asking: is anyone using MSTest as part of their TDD infrastructure? Does it have any known limitations if I want to integrate with a build/CI server, code coverage, or any other kind of TDD tool I may need? I did search these forums and mostly found people comparing the third-party frameworks to each other, without giving MSTest much of a chance... Is there a good reason why?
Thanks for the advice
EDIT: Thanks to the replies in this thread, I've confirmed MSTest works for my purposes and integrates gracefully with CI tools and build servers.
But does anyone have any experience with FinalBuilder? This is the tool I'd like us to use for the build scripts, to avoid having to write a ton of XML compared to other build tools. Any limitations here that I should be aware of before committing to MSTest?
I should also note: we are using VSS =(. I'm hoping we can ax it soon, hopefully as part of (maybe even as the first step of) setting up all of this infrastructure.
At Safewhere we currently use MSTest for TDD, and it works out okay.
Personally, I love the IDE integration, but dislike the API. If it ever becomes possible to integrate xUnit.NET with the VS test runner, we will migrate very soon thereafter.
At least with TFS, MSTest works pretty well as part of our CI.
All in all I find that MSTest works adequately for me, but I don't cling to it.
If you are evaluating mock libraries, take a look at this comparison.
I've been using MS Test since VS 2008 came out, but I haven't managed to strong-arm anything like TDD or CI here at work, although I've messed with Cruise Control a little in an attempt to build a CI server on my local box.
In general I've found MS Test to be pretty decent for testing locally, but there are some pain points for institutional use.
First, MS Test adds quite a few things that probably don't belong in source control. The .VSMDI files are particularly annoying; just running MS Test creates anywhere from 1 to 5 of them and adds them to the solution file. Which means churn on your .SLN in source control, and churn of that sort is bad.
I understand the supposed point behind these extra files -- tracking test run history and such -- but I don't find them particularly useful for anything but a single developer. You should use your build service and CI for that sort of thing!
Second, you either must have Team Foundation Server to run your unit tests as part of CI, or you have to have a copy of Visual Studio installed on your build server if you use, for example, Cruise Control.NET. See this Stack Overflow question for details.
In general, there's nothing wrong with MS Test. But going CI will not be as smooth as it could be.
I have been using MSTest very successfully in our company. We are currently setting up standardised build processes within our company, and so far we have had good success with TeamCity. For continuous integration we use out-of-the-box TeamCity configurations; for the actual release builds we set up large MSBuild scripts that automate the entire process.
I really like MSTest because of the IDE integration and because all our devs can use it without installing any 3rd-party dependencies. I would not recommend switching just because of the problem you are experiencing. I have come full circle: we went over to NUnit and then came back again. These frameworks are all much the same at the end of the day, so pick the one that is easiest for most of your devs to get access to and start using.
As for what I suspect your problem might be... it sounds like an obscure problem I have had before, where incorrect DLL references (e.g. adding explicit references via Browse to projects in your own solution, rather than using project references) lead to out-of-date problems that only come up after clean checkouts or builds.
The other really suspect issue I have found before is when you have some visual component or control with a public property of some custom type that is being serialised into the form's .resx file. I typically need to flag such properties with DesignerSerializationVisibility.Hidden, which means the IDE will not try to generate setter code for the property value (which is typically some object graph). Just a thought. Could be way out.
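A minimal sketch of that flagging, with invented control and property names:

    using System.ComponentModel;
    using System.Windows.Forms;

    public class ChartControl : UserControl
    {
        // Hidden tells the designer not to serialise this property's object
        // graph into the form's designer output / .resx
        [Browsable(false)]
        [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
        public ChartSettings Settings { get; set; }
    }

    public class ChartSettings { /* stand-in for the custom type */ }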
I trust the tools; they don't really lie about there being a genuine problem. They just misrepresent it, or report it as something completely obscure. It sounds to me like that's what you have here. I suspect this because the error message doesn't make sense if all is in order, but it does make sense if some piece of code has loaded an out-of-date or modified version of the DLL at that point.
I have successfully deployed several FinalBuilder installations and the customers have been very happy with the outcome. I can highly recommend it.
I've spent 4 years developing C++ using Visual Studio 2008 for a commercial company; it's now time for me to upgrade my development process.
Here's the problem: I don't have one-button build automation. I also don't have a CI server that automatically builds when a commit happens and emails me whether the build is broken or not. Worse, we don't even have a single unit test!!
Can someone please point to me how I can get started?
I have looked at many many tools and I think I might go with:
Visual Build (for build automation) (Note: I also considered Final Builder)
Cruise (for CI server)
I'm also now just starting to practice TDD... so I will want to automate my unit tests as well. I chose Google Test/Mock for their extensive documentation. (Can't go wrong with the Google brand, can I? =p)
Price is not the issue, I want what's best and easiest to get started.
Can people that use real CI/automation tool for unmanaged MSVC++ tell me their tools and how I can go about starting?
Our source control is Subversion.
Last point: I'm also considering a project management/tracking tool that integrates right into Visual Studio, and I'm thinking about using OnTime. VSTS costs too much. I tried FogBugz, but I think it's too simple. Any others?
I would take some time to seriously consider TeamCity. We used CruiseControl.NET for a while and TeamCity completely demolishes it. Plus it has built-in plugins for Boost and CppUnit, so your unit testing will come for free.
Best of all, the tool is free for < 20 users and gives you three build agents.
I just finished setting up the automated build for our C++ product at work and it was fairly simple. We did it with MSBuild, basically using the MSBuild task to compile the solution. Other targets can be used to copy files, run unit tests, etc.
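A minimal sketch of that kind of wrapper script (the solution name, target names, and test binary path are invented):

    <Project DefaultTargets="Test" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <Target Name="Build">
        <!-- the built-in MSBuild task compiles the solution -->
        <MSBuild Projects="MyProduct.sln" Properties="Configuration=Release" />
      </Target>
      <Target Name="Test" DependsOnTargets="Build">
        <!-- Exec runs the test binary; a non-zero exit code fails the build -->
        <Exec Command="Release\UnitTests.exe" />
      </Target>
    </Project>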
The last time I worked on an unmanaged MSVC++ project (which was moderately sized I might add), we used FinalBuilder to do the automated build & versioning (and even executing PCLint and other profiling tools as well).
Having said that, if you're willing to invest the time, MSBuild (or nAnt perhaps?) can do everything you need - even for unmanaged solutions.
Which brings us to the trade-off: tools like Visual Build Pro and FinalBuilder get you up and running quickly, while tools that offer a greater range of customization will probably cost you a decent amount of time spent learning and understanding them; MSBuild, CI Factory, NAnt, etc. are no cakewalk.
So if price isn't an issue, is time? If time is at a premium, I'd investigate the GUI-driven tools; they'll get you where you want to go quickly. If you know you're going to need to extend beyond the simple one-button build + unit tests + deploy scenario (which happens a lot!), then decide whether you can invest the time into the more complex tools like MSBuild.
We use a combination of Boost.Build, NAnt, CPPUnit and either Cruise Control.NET or Hudson (we've used them both for various projects but are starting to prefer Hudson).
They are all good tools though we're considering replacing CPPUnit - the Google unit test system is pretty good from what I've seen.
If you're happy running on just Windows you can lose Boost.Build and just call out to Visual Studio from NAnt.
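For example, a hedged sketch of what that call-out could look like in a NAnt build file (the solution name and configuration are placeholders):

    <target name="build">
      <!-- devenv.com is the console flavour of the Visual Studio executable -->
      <exec program="devenv.com">
        <arg value="MyApp.sln" />
        <arg value="/build" />
        <arg value="Release" />
      </exec>
    </target>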
As for issue tracking/project management, we settled on Vision Project after a long investigation. It's not well known (yet), but we've found it a very good fit in our environment. FogBugz is great, with a nice, clear interface, but we came to the same conclusion you did: way too simple for our needs.
Although the .NET world is spoilt for these kinds of tools, Continuous Integration is still pretty easy to set up for C++! I wouldn't think of starting a non-trivial project without putting these systems in place.
We use Subversion + CruiseControl + WiX to accomplish CI automated builds that output one-click installers, and this combo has worked very well for us. We've created our own site for administering SVN user groups and permissions, and added the web interface to CC to it. We have a SQL Server storing all the collected stats from SVN and CC, and we use them for custom reports available on our site. We are looking to add other tools to the mix for checking various attributes of the code stored in SVN.
At my company we use CruiseControl (http://cruisecontrol.sourceforge.net/), the Java version rather than .NET, to build our wxWidgets application on Windows and OS X. It's working great for us so far.