Can Karma pick up file changes without running the whole suite again? - unit-testing

I am using Karma through Grunt. We have around 1000 tests and it is a bit painful to have them all run whenever we change a file (autoWatch = true).
This is what we are doing now:
1. Start Karma with singleRun=false, autoWatch=false.
2. Open the debug page and grep for a specific suite (using mocha html reporter).
3. Change a test or file related to that suite.
4. Refresh the debug page to run the set of tests again.
My changes from step 3 haven't been picked up by Karma, so the tests still behave as if nothing had changed.
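For reference, the relevant bits of our karma.conf.js currently look roughly like this (a simplified sketch; the file patterns are placeholders):
// karma.conf.js (simplified)
module.exports = function (config) {
  config.set({
    frameworks: ['mocha'],
    files: ['src/**/*.js', 'test/**/*.spec.js'],
    reporters: ['mocha'],
    singleRun: false,  // keep the Karma server running between runs
    autoWatch: false   // do not rerun the whole suite on every file change
  });
};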
This is what I need:
1. Start Karma with singleRun=false, magicOption=true.
2. Open the debug page and grep for a specific suite (using mocha html reporter).
3. Change a test or file related to that suite.
4. Refresh the debug page to run the set of tests again.
My changes are properly picked up and only the grepped tests are run.
If I set autoWatch=true I get what I need but the whole suite of 1000 tests is run in the background whenever I change a file, which soon collapses my environment.
I don't think there is anything equivalent to magicOption according to the Karma docs, but is there any way to achieve the same behaviour?
Thanks a lot.

Related

Can you have cross-browser testing using Robot Framework?

I have a Robot Framework test suite that runs all OK.
I have it running with pabot and Selenium Grid, so parallel testing is all good.
My question is: can I run my test suite against multiple browsers without manually running the same scripts for each browser or duplicating my test suite for each browser?
Essentially, I want to use a "Resource.txt" file to tell the test to instantiate the browser that the grid node is set up for.
For example, in a TestNG project (using the POM method) I use "if" and "else" statements to tell the test to use the browser that the Selenium Grid node is set up for.
Python 2.7
RF 3.0.2
Grid 3.5
The common way to do this is to use a variable to hold the name of the browser, and then set the variable from the command line.
In your test case:
Open Browser    ${ROOT_URL}    ${BROWSER}
From the command line:
robot --variable BROWSER:firefox ...
-or-
robot --variable BROWSER:chrome ...
An alternative to setting the variable on the command line is to have your tests use a variable file which dynamically sets the value of the variable based on runtime conditions.
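For example, a variable file could derive the browser from an environment variable set on each grid node (the file name and environment variable here are just examples):
# browser_variables.py - example Robot Framework variable file
import os

# Module-level names in a variable file become Robot Framework variables,
# so ${BROWSER} resolves to whatever TARGET_BROWSER is set to on this node.
BROWSER = os.environ.get("TARGET_BROWSER", "firefox")
You would then start the tests with robot --variablefile browser_variables.py ...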

Run unit tests which fail during TFS-Build

I have a huge number of test cases running during the TFS build process.
Is there a way to rerun, on my local machine, all the test cases that fail on TFS? Maybe via configuration or an extension?
My problem is that it takes quite a while to run all the tests again, so I would like to run just the ones that fail.
The second problem is that the TFS build sometimes fails tests which work locally, so I'd like to figure out which ones I really broke.
I've never seen anything like this. I do think it would be possible to write a VS extension to pull the test results from TFS and create a test list file with all the failed tests and then load that in VS to rerun only the failed tests.
I wrote a simple extension and it wasn't that bad - http://dotnetcatch.com/2014/09/08/parameterizationpreview-visual-studio-extension/
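For what it's worth, the test list such an extension would generate could simply be a Visual Studio .playlist file, which is plain XML listing fully qualified test names (the names below are placeholders):
<Playlist Version="1.0">
  <Add Test="MyProject.Tests.OrderTests.CreatesOrder" />
  <Add Test="MyProject.Tests.OrderTests.RejectsInvalidOrder" />
</Playlist>
Opening that file in Test Explorer limits a run to exactly those tests.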
I've tried the exact same thing. However, rerunning the tests locally didn't change anything: the tests still passed locally (even after more than 1000 tries) but failed sporadically on TFS. (For that part I just put the tests in a for loop.)
Check your log on TFS (or post it here); the log should tell you what failed in the tests, and maybe the failing tests should be reconsidered or refactored. Just because they pass locally doesn't mean they are right, if that makes sense. So my suggestion would be: check the log, rewrite the tests and try again.

How to debug ember-cli tests running in phantomjs

Context: I have an acceptance test for my ember-cli application, and the test passes just fine in Chrome. However, in phantomjs, my test fails -- the UI doesn't get created the same way, and I'm trying to work out why. (I think the test is broken because of https://github.com/ember-cli/ember-cli/issues/1763, but the general question of how to debug remains)
In Chrome, I can use the standard debugging tools on my tests and all is well -- but in phantomjs, I can't get at it with a debugger. I also don't see console.log() messages show up in the output -- all I get is a list of test results in my terminal window.
I can sort-of get diagnostic info by writing things like
equal(true, false, "This is a log message");
and then I get the message as details for the assertion that failed, or I can try and work out what's in the DOM with
equal(true, false, document.getElementsByClassName("my-class")[0].innerHTML);
but both of those (a) stop the test from going any further, and (b) only let me log information from the test itself, not my application.
Is there a way to run my tests outside of "ember test", or some way to attach to the running test processes? Alternatively, is there a way to get console.log() messages to show up in the output?
You can expose the PhantomJS debug port and open it in a browser; then you can interact with the context at your debugger breakpoints.
Debugging tests on PhantomJS using Testem test runner
In testem.json add "phantomjs_debug_port": 9000.
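A minimal testem.json with that option might look like this (the framework and launcher entries are just examples, not required values):
{
  "framework": "qunit",
  "launch_in_dev": ["PhantomJS", "Chrome"],
  "launch_in_ci": ["PhantomJS"],
  "phantomjs_debug_port": 9000
}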
While your tests are running, visit http://localhost:9000 in your browser and click the long link that shows up.
Source: cssugared
I had no luck with the other answers, so here's what I found out:
Add a return pauseTest(); at the point in your test where you want to be able to interact with the container in the browser. This is in the docs but I'm not sure it's in the guides.
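As a rough sketch (the route name and test description are placeholders), an acceptance test using it looks something like:
test('debugging the broken flow', function() {
  visit('/some-route');
  // Halts the test run here so you can inspect the app and the DOM in the browser
  return pauseTest();
});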
To answer the part of my original question about "how do I get log messages to show up": if I use the TAP reporter, then console.log messages (in my app and in my tests) show up in the output; the xunit reporter doesn't pass console.log on, which was confusing me.
(I've also hit issues where running the tests on TeamCity behaves differently from running locally; in that situation, combining the TAP reporter with https://github.com/aghassemi/tap-xunit (or the TAP TeamCity plugin) lets me get log messages and also test counts.)

Why am I getting different behaviour when clicking Resharper "Run all tests" button vs using the keyboard shortcut command?

The code I'm unit testing refers to an appsetting in the app.config file. To cater for this, I've added an app.config file to my unit tests project. If I click the "Run All Tests" icon in the Unit Test Sessions window, all my tests pass.
I have mapped the "ReSharper.ReSharper_UnitTest_RunSolution" command to Ctrl+Shift+Alt+U. If I run the tests by pressing this combination, the tests all run, but they fail to find the appsetting, which comes through as null.
I'm assuming this means that the button click runs under the context of the test project, whilst the command does not, but I can't quite work out what the command is doing.
Have I mapped the wrong command?
EDIT 1: I've also tried using the keyboard shortcut Alt-RUN (ReSharper > Unit Tests > Run All), as well as clicking the menus manually, and found that this also causes all unit tests to not find the appsetting and therefore fail. Clicking the Run All Tests icon in Unit Test Sessions (the double green arrow) continues to work fine.
EDIT 2: I realised I should probably be mocking a separate class that fetches appsettings from the config file anyway, so this is what I'm now doing. So now there is no dependence on the config file when unit testing.
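For illustration, the kind of wrapper I mean looks roughly like this (the names are hypothetical):
public interface ISettingsProvider
{
    string Get(string key);
}

public class AppConfigSettingsProvider : ISettingsProvider
{
    // Production implementation reads app.config; unit tests substitute a stub or mock.
    public string Get(string key)
    {
        return System.Configuration.ConfigurationManager.AppSettings[key];
    }
}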
There are two things going on here. Firstly, the Run All Tests icon in the sessions window runs all tests in the session, while the Run All Tests menu item runs all tests in the solution. Slightly confusing that they've got the same name, but it does make sense given the context. This is why they give different results.
Secondly, when running all tests in a solution, the app setting might not get found. This is due to an optimisation the test runner makes which runs all tests in the same AppDomain. This avoids the overhead of creating a new AppDomain for each assembly, but has the downside that only one app.config will be used for all assemblies. If it picks the wrong one, your app setting is lost.
You can disable this by checking ReSharper » Options » Unit Testing » "Use separate AppDomain for each assembly with tests". Ideally, it should be disabled if any project has an app.config - I've added a feature request you can vote for and track: https://youtrack.jetbrains.com/issue/RSRP-428958

Using Post-Build Event To Execute Unit Tests With MS Test in .NET 2.0+

I'm trying to set up a post-build event in .NET 3.5 that will run a suite of unit tests with MSTest. I found this post that shows how to call a bat file using MbUnit, but I wanted to see if anyone has done this type of thing with MSTest.
If so, I would be interested in a sample of what the bat file would look like.
We were using NUnit in the same style and decided to move to MSTest. When doing so, we just added the following to our Post-Build event of the applicable MSTest project:
CD $(TargetDir)
"$(DevEnvDir)MSTEST.exe" /testcontainer:$(TargetFileName)
The full set of MSTest command line options can be found at the applicable MSDN site.
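If you'd rather keep the commands in a standalone bat file called from the post-build event, a rough equivalent would be something like this (an untested sketch; the VS macros are passed in as arguments from the post-build event line):
REM run-tests.bat -- call from the post-build event as:
REM   "$(ProjectDir)run-tests.bat" "$(DevEnvDir)" "$(TargetDir)" "$(TargetFileName)"
CD /d "%~2"
"%~1MSTEST.exe" /testcontainer:"%~3"
REM Propagate a failure exit code so failing tests fail the build
IF ERRORLEVEL 1 EXIT /B 1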
Personally, I would not recommend running unit tests as part of the compilation process. Instead, consider something like ReSharper (plus the appropriate unit test runner, or whatever they call these nowadays) or some other GUI runner.
Instead of doing it in a post-build event, which will happen every time you compile, I would look at setting up a continuous integration server like CruiseControl.NET. It'll give you a tight feedback cycle without blocking your work by running tests every time you build your application.
If you want to run the set of tests you are currently developing, Anton's suggestion of using ReSharper will work great. You can create a subset of tests to execute whenever you wish, and it's smart enough to compile for you if it needs to. While you're there picking up the demo, if you don't already have a license, pick up TeamCity. It is another CI server that has some promise.
If you want to use this method to control build quality, you'll probably find that as the number of tests grows, you no longer want to wait for 1000 tests to run each time you press F5 to test a change.