Google Closure JavaScript testing, disable autodiscover tests - unit-testing

Currently I am implementing the Google Closure testing possibilities.
It works like a charm.
I define the TestCase by hand and add the tests by hand. I also create a separate runner for the tests so I can catch all the results and pass them to another function.
This function sends the results through AJAX to PHP so the results can be logged in the database (this also works as expected).
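A simplified sketch of that kind of hand-built setup (the test name and assertion are placeholders, not my actual code):
goog.require('goog.testing.TestCase');
goog.require('goog.testing.TestRunner');
goog.require('goog.testing.asserts');

// Build the test case by hand (no auto-discovery involved here).
var testCase = new goog.testing.TestCase('MyTests');
testCase.add(new goog.testing.TestCase.Test('testAddition', function() {
  assertEquals(2, 1 + 1);
}));

// Separate runner so the results can be collected and posted to PHP afterwards.
var runner = new goog.testing.TestRunner();
runner.initialize(testCase);
runner.execute();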
The problem, however, is that because I do this, when I load the page in the browser the tests get executed twice (once because of the auto-discovery and once because I defined them in the TestCase).
I would like to disable the auto-discovery, but I don't want to flip the flag in the Closure Library itself, because when the library gets updated we would need to reset the flag to false again.
So how can I disable auto-discovery without modifying the code in the Closure Library?
Thanks in advance!

If you look into jsunit.js, you'll see that goog.testing.jsunit.AUTO_RUN_ONLOAD = true; is hard-coded there, and you can override this variable only through the Closure Compiler's define mechanism.
If you don't compile your test code (I don't, for speed of iteration), the only option seems to be to change this to false and redo the change on Closure Library updates.
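For reference, if you do compile your tests, the override would look something like this (file names are placeholders):
java -jar closure-compiler.jar --js mytest.js --define='goog.testing.jsunit.AUTO_RUN_ONLOAD=false' --js_output_file mytest_compiled.js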


DryIOC, MediatR - DecoratorWith condition evaluated multiple times with keyed parameter

This question is yet another follow-up to a previous question regarding the setup of DryIOC with MediatR and decorators: DryIOC and MediatR: Injection using InResolutionScopeOf for both IAsyncNotificationHandler and IAsyncRequestHandler
In this example, the setup is similar to the one of my previous question: we have requests (IAsyncRequestHandler) and notifications (IAsyncNotificationHandler), the notifications are fired from the requests, and both have a dependency on a DbContext which needs to be injected per resolution scope.
What I'm doing now is decorating IAsyncRequestHandler and passing a dependency of type IActionHandler to the decorator using a key. I'm registering the dependency like this:
c.Register<IActionHandler, SomeActionHandler>(serviceKey: "key1");
And then, passing the parameter to the decorator like this:
c.Register(typeof(IAsyncRequestHandler<,>), typeof(Decorator<,>),
made: Parameters.Of.Type<IActionHandler>(serviceKey: "key1"),
setup: DryIoc.Setup.Decorator);
Set up like this, the notification is fired from the request handler successfully. However, if I add more decorators and change the setup parameter of the decorator to DecoratorWith with a condition specified (even one that simply returns true), the notification isn't fired from the request handler, because the DbContext isn't injected successfully into the IAsyncNotificationHandler.
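Roughly, the change that triggers the problem looks like this (the condition here just returns true, purely for illustration):
c.Register(typeof(IAsyncRequestHandler<,>), typeof(Decorator<,>),
made: Parameters.Of.Type<IActionHandler>(serviceKey: "key1"),
setup: DryIoc.Setup.DecoratorWith(r => true));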
Here is a fiddle which shows the problem https://dotnetfiddle.net/ob0nfA
When debugging, I found out that the condition in DecoratorWith of the first decorator is called twice for the same service type when there are two registrations. I'm not sure whether this is intended, but I believe it might be related to the problem, because if I simply return true, multiple decorators will be registered for the same handler when there should only be one.
I know I would be able to register the decorator dependencies using Made instead, but in this specific instance keyed registration seems better for my intended setup.
So I'd like to know if there's something I'm missing, or, in case DecoratorWith works as intended by being called more than once for the same service type, whether there is a way to distinguish the calls so that I can register the decorator properly only once.
Or maybe the problem lies elsewhere entirely.
Thanks
Found the reason. In the current DryIoc version 2.9.3, adding a condition to a decorator makes it context dependent (which is true, btw). But then the context-dependent service is injected via a resolution call instead of being inlined into the expression. Using a resolution call here messes up the resolution scopes (not yet 100% clear how).
So if I remove the switch to a resolution call for context-dependent decorators, your code works again.
A fix will be released soon. I will update my answer with the fix version.
Update with fix:
Problem is fixed in DryIoc 2.9.5

VS2012 - Disable parallel test runs

I've got some unit tests (C++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means that it should never be a concern whether the tests execute synchronously or asynchronously.
Of course there are test cases that expect some fundamental part of the code to work; this might be your own code or part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and assure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services, such as HTTP requests, file streams, etc.
You can read more here.
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names for NUnit, MSTest and xUnit.net can be found on the xUnit.net CodePlex page.
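For example, in the VS C++ test framework used in the question, the per-test hooks look roughly like this (the class and fixture code here are made up for illustration):
#include "CppUnitTest.h"
using namespace Microsoft::VisualStudio::CppUnitTestFramework;

TEST_CLASS(ConfigTests)
{
public:
    // Runs before every TEST_METHOD in this class (the "Setup" hook).
    TEST_METHOD_INITIALIZE(SetUp)
    {
        // create fresh fixtures here so tests don't share state
    }

    // Runs after every TEST_METHOD in this class (the "TearDown" hook).
    TEST_METHOD_CLEANUP(TearDown)
    {
        // release fixtures here
    }

    TEST_METHOD(PlaceholderTest)
    {
        Assert::IsTrue(true); // placeholder assertion
    }
};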
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this (a rough sketch follows the list):
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test whether the read method returns the desired result (object, string, or however you've designed it)
create a mock config of the kind you expect from the read method and test that it is accepted as valid
at this point, create multiple mock configs that cover all the scenarios you can think of, and fix the code accordingly (this is also where code coverage comes in)
create a mock object of an accepted config and use the setter to update the user's config, then use the getter to check whether it was set correctly
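A rough sketch of the read/verify steps with a hand-rolled mock, using the same C++ test framework as the question (every type and name below is invented just for illustration):
#include "CppUnitTest.h"
#include <string>
using namespace Microsoft::VisualStudio::CppUnitTestFramework;

// Hypothetical config source interface so a mock can replace the real file.
struct IConfigSource
{
    virtual ~IConfigSource() {}
    virtual std::string ReadAll() const = 0;
};

// Mock returning canned contents instead of touching the file system.
struct MockConfigSource : IConfigSource
{
    std::string contents;
    explicit MockConfigSource(const std::string& c) : contents(c) {}
    std::string ReadAll() const override { return contents; }
};

// Minimal stand-in for the class under test, only to keep the sketch self-contained.
struct ConfigReader
{
    std::string Read(const IConfigSource& source) const { return source.ReadAll(); }
    bool Verify(const std::string& config) const { return config.find('=') != std::string::npos; }
};

TEST_CLASS(ConfigReaderTests)
{
public:
    TEST_METHOD(Read_ReturnsMockContents)
    {
        MockConfigSource source("timeout=30");
        ConfigReader reader;
        Assert::IsTrue(reader.Read(source) == "timeout=30");
    }

    TEST_METHOD(Verify_AcceptsValidMockConfig)
    {
        ConfigReader reader;
        Assert::IsTrue(reader.Verify("timeout=30"));
    }
};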
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work perfectly. Additional tests, for example End-to-End (E2E) tests, aren't necessarily needed; I use them only to assure that the whole application flow works and to catch errors easily (e.g. an HTTP connection error).

How can I intercept Selenium errors?

In developing Selenium extensions I have scripting to verify the correct handling of failure cases. Unfortunately, I have to execute those commands one-by-one in the IDE, and manually examine each error message. What I would like to do is define a custom Selenium command that I can insert before each command that intentionally fails in a given way. Eg: willFail|expected-error-text.
In other words, I want to alter Selenium command completion behavior such that if the next command throws the given error message, then the result is success and the script continues. But if it succeeds or throws a different error, then the script stops with an error.
I imagine this will involve setting observer function(s), and/or intercepting Selenium function(s). I'd expect the issues to be:
How/where to do the initialization. The relevant Selenium objects can be hard to find.
What/when to return in order to alter the result.
Is there something else left out-of-sync by altering a result?
The PowerDebugger extension allows you to pause the IDE upon a failure, and then resume. So I suspect that the how-to is in there somewhere. But I can't quite figure out how it hooks into Selenium command processing. Samit Badle, are you out there?
I am using Selenium IDE 2.2.0.
With some experimentation I have found that the function TestLoop.resume() is responsible for determining the outcome of each command.
It is defined in chrome/content/selenium-core/scripts/selenium-executionloop.js.
This function executes the command, and either halts the script, or allows it to continue.
To alter this behavior, a Selenium extension can temporarily replace this function with a custom version. To accomplish this, save a reference to editor.selDebugger.runner.IDETestLoop.prototype.resume and replace it with the custom function. The custom function should then restore the native function and carry out command execution as appropriate.
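A rough, untested sketch of what such an extension command could look like; the internals it calls (_executeCurrentCommand, _handleCommandError, continueTestWhenConditionIsTrue, continueTest, testComplete) are the ones used by the native resume() in selenium-executionloop.js, and willFail is the hypothetical command name from the question:
// user-extensions.js -- defines a "willFail" command for the IDE.
Selenium.prototype.doWillFail = function(expectedErrorText) {
  var proto = editor.selDebugger.runner.IDETestLoop.prototype;
  var nativeResume = proto.resume;

  proto.resume = function() {
    proto.resume = nativeResume;  // restore the native behavior right away
    try {
      selenium.browserbot.runScheduledPollers();
      this._executeCurrentCommand();
      // The command that was expected to fail actually succeeded.
      this._handleCommandError(new SeleniumError('Expected failure did not occur'));
      this.testComplete();
    } catch (e) {
      if (String(e.message).indexOf(expectedErrorText) != -1) {
        // The expected error occurred: treat it as success and keep going.
        this.continueTestWhenConditionIsTrue();
      } else if (!this._handleCommandError(e)) {
        this.testComplete();
      } else {
        this.continueTest();
      }
    }
  };
};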

Can I get log output only for failures with boost unit tests

I have some logging in my application (it happens to be log4cxx but I am flexible on that), and I have some unit tests using the boost unit test framework. When my unit tests run, I get lots of log output, from both the passing and failing tests (not just boost assertions logged, but my own application code's debug logging too). I would like to get the unit test framework to throw away logs during tests that pass, and output logs from tests that fail (I grew to appreciate this behaviour while using python/nose).
Is there some standard way of doing this with the boost unit test framework? If not, are there some start of test/end of test hooks that I could use to buffer my logs and conditionally output them to implement this behaviour myself?
There are start-of-test and end-of-test hooks that you can use for this purpose. To set up these hooks, you need to define a subclass of boost::unit_test::test_observer, create an instance of that class that persists throughout the entire test run (either a static global object or one held by a BOOST_TEST_GLOBAL_FIXTURE), and then pass the instance to boost::unit_test::framework::register_observer.
The method to override with a start-of-test hook is test_unit_start, and the method to override with an end-of-test hook is test_unit_finish. However, these hooks fire for test suites as well as individual test cases, which may be an issue depending on how the hooks are set up.
The test_unit_finish hook also doesn't explicitly tell you whether a given test actually passed, and there doesn't seem to be one clear and obvious way to get that information. There is a boost::unit_test::results_collector singleton, which has a results() method, and if you pass it the test_unit_id of the test unit provided to test_unit_finish, you get a test_results object that has a passed() method. I can't really see a way to get the test_unit_id that is clearly part of the public API -- you can just directly access the p_id member, but that could always change in a future Boost version.
You could also manually track whether each test is passing or failing using the assertion_result, exception_caught, test_unit_aborted, and test_unit_timed_out hooks from the test_observer subclass (assertion_result indicates a failure of the current test whenever its argument is false, and every other hook indicates a failure if it is called at all).
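To make that concrete, here is a rough, untested sketch of such an observer. It assumes your application logger can be pointed at the buffer member, and it accesses results_collector via p_id with the caveats mentioned above (BOOST_TEST_GLOBAL_FIXTURE is the newer macro name; older Boost releases use BOOST_GLOBAL_FIXTURE):
#include <boost/test/unit_test.hpp>
#include <boost/test/results_collector.hpp>
#include <iostream>
#include <sstream>

// Buffers log output per test case and only prints it when the test did not pass.
struct LogBufferObserver : boost::unit_test::test_observer
{
    std::ostringstream buffer;  // assumption: the application logger is redirected here

    virtual void test_unit_start(boost::unit_test::test_unit const& tu)
    {
        if (tu.p_type == boost::unit_test::TUT_CASE)
            buffer.str("");  // start each test case with an empty buffer
    }

    virtual void test_unit_finish(boost::unit_test::test_unit const& tu, unsigned long /*elapsed*/)
    {
        if (tu.p_type != boost::unit_test::TUT_CASE)
            return;  // ignore suite-level notifications
        // p_id is not clearly documented as public API (see the caveat above).
        boost::unit_test::test_results const& r =
            boost::unit_test::results_collector.results(tu.p_id);
        if (!r.passed())
            std::cerr << buffer.str();  // dump buffered logs only for failing tests
    }
};

// Keep one observer alive for the whole run and register it with the framework.
struct LogBufferFixture
{
    LogBufferObserver observer;
    LogBufferFixture()  { boost::unit_test::framework::register_observer(observer); }
    ~LogBufferFixture() { boost::unit_test::framework::deregister_observer(observer); }
};
BOOST_TEST_GLOBAL_FIXTURE(LogBufferFixture);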
According to the Boost.Test documentation, you can run your test executable with --log_level=error. This will log only failing test cases.
I checked that it works using a BOOST_CHECK(false) on an otherwise correctly running project with a few thousand unit tests.
Running with --log_level=all gives the result of all assertions. I checked, by piping the output to wc -l, that the number of lines in the log is exactly the same as the number of assertions in the tests (that number is also reported by --report_level=detailed). You could of course also grep the log for the strings error or failed.

UnitTest WorkflowInstanceID Exception

I am unit testing a StateMachineWorkflow, and I create my test methods by clicking in my test project and selecting Add - UnitTest. In the project window I select the workflow that I want to test and all the methods in it.
Visual Studio generated a Test Reference folder in my test project with an accessor to the workflow. It also generated all the TestMethod()s necessary for the testing. All test methods use a MyWorkflow_Accessor target = new MyWorkflow_Accessor(). When I need to call a function I just do something like target.SendEmail().
Everything works fine, except for one thing: I can't use the WorkflowInstanceId of the workflow. When the code reaches a line that uses it, the workflow throws an exception: "This is an invalid design time operation. You can only perform the operation at runtime."
Is it possible to inject the workflow instance ID by code? Is there any workaround for this situation? I use the WorkflowInstanceId in a lot of functions, and changing the workflow code to match my test doesn't seem like a good idea, because I believe the problem is in the test and not in the workflow.
It's not clear from your question if you're using WF 3.5 or WF4 with the state machine update. For the latter, you can use Microsoft.Activities.UnitTesting to test workflows.
It sounds like you're using WF 3.5, though. If this is new development, I would seriously consider moving to WF4. Microsoft basically rewrote WF, and the sooner you switch, the easier your migration path will be.
Otherwise, there is some information on testing with WF 3.5 on MSDN.