AngularJS mocking $logProvider in config block - unit-testing

Is there a way to inject providers when writing unit tests using Karma (formerly Testacular) and Jasmine in Angular?
Our team recently decided to use the AngularJS $log service to write debugging details to the console. This way we can leverage the ability to disable the logging via the $logProvider.debugEnabled() method.
angular.module("App", ["prismLogin", "ui.bootstrap"])
    .config(["$routeProvider", "$logProvider",
        function ($routeProvider, $logProvider) {
            $routeProvider; // routes here, edited for brevity
            // This is the offending line, it breaks several pre-existing tests
            $logProvider.debugEnabled(true);
        }]);
However, after adding the $logProvider.debugEnabled(true); line, several of our tests no longer execute successfully, failing with the following message:
TypeError: Object doesn't support property or method 'debugEnabled' from App
So my question again, is it possible to mock the $logProvider? Or should I provide my own configuration block for the test harness?
I attempted searching for a way to mock the App module, with no luck. It seems to me that using the concrete App module instead of a mock is very brittle. I would like to avoid reworking the tests associated with the App module every time a change is made in the app's config or run blocks.
The tests that are failing exercise units of code with no relation to $logProvider. I feel as if I am missing something here and making things much harder than they should be. How should one go about writing tests that are flexible and not affected by side effects introduced elsewhere in the application?

It appears that this is a known issue with angular-mocks.
Until the issue is addressed, I was able to resolve it by adding the following method to the angular.mock.$LogProvider definition in angular-mocks.js at line 295.
this.debugEnabled = function(flag) {
    // No-op stub: accept the flag and keep the provider chainable.
    return this;
};
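Alternatively, instead of editing the library file, you may be able to patch the mock provider from the test itself, in a config block registered before the App module loads. An untested sketch, assuming Jasmine with ngMock:

beforeEach(module(function ($logProvider) {
  // Add a no-op debugEnabled to the mock provider if it is missing,
  // so App's config block can call it safely.
  if (!$logProvider.debugEnabled) {
    $logProvider.debugEnabled = function () { return this; };
  }
}, "App"));

Because the anonymous config function is registered before "App", it runs first, so the property exists by the time App's config block executes.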

Related

Mocking PDFtron webviewer with jest in react application

Hi, I am new to unit testing and I am trying to mock the annotationManager of the PDFTron WebViewer.
I am using the code below in my test file:
jest.mock('@pdftron/webviewer', () => ({
  annotationManager: {
    getAnnotationsList: jest.fn().mockReturnValue([]),
    deleteAnnotations: jest.fn()
  },
}));
In the code, I'm getting the list of annotations using the getAnnotationsList function and deleting them using the deleteAnnotations function.
In the log of the unit tests, I'm getting the following error:
Cannot read properties of undefined (reading 'getAnnotationsList')
Is this the correct way to do things, or am I missing something?
Are you able to share an example of a test you are writing where you need to mock the annotation manager? Depending on how you are using the WebViewer package, mocking the annotation manager can be different. If you prefer to reach out to Apryse directly, you can also contact them via their support form.
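For instance, if your component consumes the default export of '@pdftron/webviewer' (the usual pattern, where the factory resolves to an instance whose annotation manager lives under Core), the mock needs to mirror that shape. A hedged sketch; the exact nesting is an assumption about how your code uses the instance:

jest.mock('@pdftron/webviewer', () => ({
  __esModule: true,
  // WebViewer is typically a default-exported factory returning a promise
  // of an instance; mirror that shape so awaiting it in the component works.
  default: jest.fn().mockResolvedValue({
    Core: {
      annotationManager: {
        getAnnotationsList: jest.fn().mockReturnValue([]),
        deleteAnnotations: jest.fn(),
      },
    },
  }),
}));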

Why doesn't the log4net XmlConfigurator attribute work for my unit tests

I'm using log4net, trying to get logging in my unit tests. If I manually call
log4net.Config.XmlConfigurator.Configure();
it works, but there are a large number of test classes, so that is not good.
Since that works, that seems to eliminate all of the "bad config, config location" issues.
I added
[assembly: log4net.Config.XmlConfigurator(Watch=true)]
to the assemblyinfo of my test project, but when I run (either via native MSTest, or Resharper test runner) I get no logging.
Help?
Source
[AssemblyInitialize()]
public static void MyTestInitialize(TestContext testContext)
{
    // Take care that the log4net.config file is added to the deployment files of the test config
    string fullPath = Path.Combine(System.Environment.CurrentDirectory, "log4net.config");
    FileInfo fileInfo = new FileInfo(fullPath);
    log4net.Config.XmlConfigurator.ConfigureAndWatch(fileInfo);
}
As it says in the documentation for assembly attributes
Therefore if you use configuration attributes you must invoke log4net
to allow it to read the attributes. A simple call to
LogManager.GetLogger will cause the attributes on the calling assembly
to be read and processed. Therefore it is imperative to make a logging
call as early as possible during the application start-up, and
certainly before any external assemblies have been loaded and invoked.
Because the unit test runners load the test assembly in order to find and run the tests, it isn't possible to initialise log4net using an assembly attribute in unit test projects, and you will have to use the XmlConfigurator.
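For context, in a normal application the attribute approach works because your own code can make the early logging call; a hedged sketch (class name hypothetical):

using log4net;
using log4net.Config;

[assembly: XmlConfigurator(Watch = true)]

public static class Program
{
    // This first GetLogger call makes log4net read the assembly-level
    // attribute above. In a test runner there is no equivalent hook that
    // runs early enough, which is why the attribute appears to do nothing.
    private static readonly ILog Log = LogManager.GetLogger(typeof(Program));

    public static void Main()
    {
        Log.Info("Application starting");
    }
}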
Edit: as linked in a comment by the OP, this can be done in one place for the whole test project by using the AssemblyInitializeAttribute, as in the source snippet above.

iOS Unit Testing: XCTestSuite, XCTest, XCTestRun, XCTestCase methods

In my daily unit test coding with Xcode, I only use XCTestCase. There are also other classes that don't seem to get used much, such as XCTestSuite, XCTest, and XCTestRun.
What are XCTestSuite, XCTest, XCTestRun for? When do you use them?
Also, XCTestCase header has a few methods such as:
defaultTestSuite
invokeTest
testCaseWithInvocation:
testCaseWithSelector:
How and when to use the above?
I am having trouble finding documentation on the above XCTest-classes and methods.
Well, this question is pretty good, and I wonder why it has been ignored.
As the documentation says:
XCTestCase is a concrete subclass of XCTest that should be the override point for
most developers creating tests for their projects. A test case subclass can have
multiple test methods and supports setup and tear down that executes for every test
method as well as class level setup and tear down.
On the other hand, this is what XCTestSuite defined:
A concrete subclass of XCTest, XCTestSuite is a collection of test cases. Alternatively, a test suite can extract the tests to be run automatically.
Well, with XCTestSuite you can construct your own test suite for a specific subset of test cases, instead of the default suite ([XCTestCase defaultTestSuite]), which has all test cases.
Actually, the default XCTestSuite is composed of every test case found in the runtime environment: all methods with no parameters, returning no value, and prefixed with 'test' in all subclasses of XCTestCase.
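As a hedged illustration (the class and test selectors are hypothetical), a subclass can override defaultTestSuite to run only a subset of its tests:

// Hypothetical: inside the @implementation of MyTests (an XCTestCase
// subclass), build a suite containing just two selected test methods.
+ (XCTestSuite *)defaultTestSuite {
    XCTestSuite *suite = [XCTestSuite testSuiteWithName:@"SmokeTests"];
    [suite addTest:[self testCaseWithSelector:@selector(testLogin)]];
    [suite addTest:[self testCaseWithSelector:@selector(testLogout)]];
    return suite;
}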
What about the XCTestRun class?
A test run collects information about the execution of a test. Failures in explicit
test assertions are classified as "expected", while failures from unrelated or
uncaught exceptions are classified as "unexpected".
With XCTestRun, you can record information like startDate, totalDuration, and failureCount while the test is running, and things like hasSucceeded when it is done, so you end up with the result of running a test. XCTestRun gives you the ability to observe what is happening, or what happened, during the test.
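For example (a hedged sketch; the helper is hypothetical, the properties are from the XCTestRun API), you could inspect the run object attached to a test after it executes:

#import <XCTest/XCTest.h>

// Hypothetical helper: log the outcome recorded on a test's XCTestRun.
static void LogOutcome(XCTest *test) {
    XCTestRun *run = test.testRun;
    NSLog(@"%@: %.3fs, %lu failures, succeeded: %d",
          run.test.name, run.totalDuration,
          (unsigned long)run.failureCount, run.hasSucceeded);
}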
Back to XCTestCase: you will notice that there are methods named testCaseWithInvocation: and testCaseWithSelector: if you read the source code, and I recommend digging into them further.
How do they work together?
I've found that there is an awesome explanation in Quick's QuickSpec source file.
XCTest automatically compiles a list of XCTestCase subclasses included
in the test target. It iterates over each class in that list, and creates
a new instance of that class for each test method. It then creates an
"invocation" to execute that test method. The invocation is an instance of
NSInvocation, which represents a single message send in Objective-C.
The invocation is set on the XCTestCase instance, and the test is run.
Some links:
http://modocache.io/probing-sentestingkit
https://github.com/Quick/Quick/blob/master/Sources/Quick/QuickSpec.swift
https://developer.apple.com/reference/xctest/xctest?language=objc
Launch Xcode and use Cmd + Shift + O to bring up the Open Quickly dialog; type 'XCTest' and you will find some related files, such as XCTest.h and XCTestCase.h. You can go inside these files to check out the interfaces they offer.
There is a good website about XCTest: http://iosunittesting.com/xctest-assertions/

xunit programmatically add new tests/"[Facts]"?

We have a folder full of JSON text files that need to be sent to a single URI. Currently it's all done with a single xUnit [Fact], as below:
[Fact]
public void TestAllCases()
{
    PileOfTests pot = new PileOfTests();
    pot.RunAll();
}
pot.RunAll() then parses the folder and loads the JSON files (say 50 files). Each is then hammered against the URI to see if each returns HTTP 200 ("ok"). If any fail, we're currently printing it as a failure by using
System.Console.WriteLine("\n >> FAILED ! << " + testname + "\n");
This does ensure that failures catch our eye, but xUnit (understandably) treats the whole batch as a single failed test. Most importantly, we can't tell xUnit "here, run only this specific test". It's all or nothing the way it's currently built.
How can I programmatically add test cases? I'd like to add them when I read the number and names of the *.json files.
The simple answer is:
No, not directly. But there exists an, albeit a bit hacky, workaround, which is presented below.
Current situation (as of xUnit 1.9.1)
By specifying [RunWith(typeof(CustomRunner))] on a class, one can instruct xUnit to use the CustomRunner class - which must implement Xunit.Sdk.ITestClassCommand - to enumerate the tests available on the test class decorated with this attribute.
But unfortunately, while the invocation of test methods has been decoupled from System.Reflection plus the actual methods, the way of passing the tests to run to the test runner hasn't.
Somewhere down in the xUnit framework code for invoking a specific test method, there is a call to typeof(YourTestClass).GetMethod(testName).
This means that if the class implementing the test discovery returns a test name that doesn't refer to a real method on the test class, the test is shown in the xUnit GUI - but any attempts to run / invoke it end up with a TargetInvocationException.
Workaround
If one thinks about it, the workaround itself is relatively straightforward.
A working implementation of it can be found here.
The presented solution first reads in the names of the files which should appear as different tests in the xUnit GUI.
It then uses System.Reflection.Emit to dynamically generate an assembly with a test class containing a dedicated test method for each of the input files.
The only thing that each of the generated methods does is to invoke the RunTest(string fileName) method on the class that specified the [EnumerateFilesFixture(...)] attribute. See linked gist for further explanation.
Hope this helps; feel free to use the example implementation if you like.
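For what it's worth, in later xUnit versions (2.x) this scenario no longer needs the workaround: [Theory] with [MemberData] can enumerate the files as individual test cases, each of which shows up in the runner and can be run on its own. A minimal sketch, with the folder name and URI being hypothetical:

using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Xunit;

public class JsonEndpointTests
{
    // One test case per .json file in the folder.
    public static IEnumerable<object[]> JsonFiles() =>
        Directory.EnumerateFiles("TestCases", "*.json")
                 .Select(path => new object[] { path });

    [Theory]
    [MemberData(nameof(JsonFiles))]
    public async Task Endpoint_ReturnsOk(string path)
    {
        using var client = new HttpClient();
        var body = new StringContent(File.ReadAllText(path), Encoding.UTF8, "application/json");
        var response = await client.PostAsync("https://example.test/endpoint", body); // hypothetical URI
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}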

Grails Unit Test Buggy Dynamic Finder

I am in the process of writing unit tests for a service class. This service class calls MyDomain.findAllByIdNotInList. The issue I am facing is that Grails does not recognize NotInList as a dynamic finder for a mocked domain. I tried metaclassing this functionality away, but was having issues with it.
Are there any creative ways of bypassing this, short of turning the unit test into an integration test? I would like to avoid that for multiple reasons (time to run, only our unit tests run at build time, etc.).
Also, it is possible my metaclassing is written poorly:
MyDomain.metaClass.findAllByIdNotInList = { ArrayList list ->
    return []
}
Edit: Using Grails 1.3.7. I also tried:
MyDomain.metaClass.findAllByIdNotInList = { deflist ->
    return []
}
Bug report here:
http://jira.grails.org/browse/GRAILS-8593
@Sagar V's comment is correct: you should be able to utilize all dynamic finders when a domain is properly mocked. If you're using a version of Grails before 2.0, you'd have to extend GrailsUnitTestCase and call mockDomain(MyDomain) before attempting to invoke the dynamic finders. As an FYI, your metaClassing is not written properly (in my opinion you should use the mocking framework to get your test working; I'm providing the correct syntax so you can use it properly in the future):
MyDomain.metaClass.'static'.findAllByIdNotInList = { defList ->
    []
}
When the method that you're overriding is static, you need to add the .'static'. in between metaClass and the method name.
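Putting it together, a hedged sketch of how that override might sit inside a Grails 1.3.x unit test (the test data and empty result are hypothetical stand-ins; the finder still needs stubbing because of the bug linked above):

import grails.test.GrailsUnitTestCase

class MyServiceTests extends GrailsUnitTestCase {
    void testIgnoresExcludedIds() {
        mockDomain(MyDomain)
        // Stub the finder the mocked domain doesn't support in 1.3.7;
        // note the .'static'. because the finder is a static method.
        MyDomain.metaClass.'static'.findAllByIdNotInList = { List ids -> [] }
        assert MyDomain.findAllByIdNotInList([1L, 2L]) == []
    }
}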