Robolectric/Espresso does not work when we change the container

I am trying to implement unit test cases using the Robolectric/AndroidX/Espresso APIs.
I launch the fragment, which is hosted in a container, using launchFragmentInContainer.
I then click an item in the RecyclerView, which is supposed to inflate another view hosted inside a different container.
The test case fails here with an error stating it cannot find the coordinates of the click position.
Is it possible to carry out Robolectric testing when the fragment container ID changes during the flow?

Related

Flutter unit testing - unable to pump full widget tree

I am just getting started with unit testing in Flutter, and I have hit a bit of a wall. I have a fairly simple app located here:
https://github.com/chuckntaylor/kwjs_flutter_demo
The app is essentially a list view of events where you can tap on one to read more details about the event.
I have two screens for this: events_screen.dart for the list view, and event_screen.dart for the details. I have been trying to write my tests in events_screen_test.dart
My testing difficulties are with the events screen (the list view). After running await tester.pumpWidget(MaterialApp(home: EventsScreen())) I can use find.text('Events'), for example, to find the title in the AppBar, but I cannot find any of the elements that make up the list.
To clarify further: I am using get_it as a service locator to get the viewModel for the EventsScreen when it loads. The viewModel is the ChangeNotifierProvider, and EventsScreen contains a Consumer to update the list. In EventsScreen's initState(), it calls loadEvents() on the viewModel. After loadEvents() is done, the viewModel calls notifyListeners() so that EventsScreen can update.
How do I ensure that all these steps occur so that I can properly test if the EventsScreen is rendering properly?
I could be approaching this all wrong, so any advice is appreciated.
I have solved my problem, but perhaps someone can shed some light on this. In the end I executed:
await tester.pumpWidget(MaterialApp(home: EventsScreen(),));
// followed immediately by this second pump without arguments
await tester.pump();
At this point, the Widget tree was complete and I could create Finders as I hoped and could run all my expect statements without issue.
I am not sure exactly why this works though.
As you saw, calling tester.pump() after tester.pumpWidget() does the job. It works for the following reason: you wrote that you are using a Provider and that you call notifyListeners() after the data are fetched. In a normal application run you see the widget rebuild, since you are using a Consumer of that provider. In the test environment this does not happen unless you explicitly advance time. You can do that by calling await tester.pump() (as you did), which asks for a new frame to be rendered on screen, so your list now appears.

ViewModelLocator was not found in UWP / Windows IoT Core

App1
I have a UWP app which uses a ViewModelLocator class (no MVVMLight or Prism).
The ViewModelLocator is declared as a resource in App.xaml and used as the DataContext of the view.
Running this app in Release and Debug mode works fine.
UnitTestApp1
I have a unit test app which references App1 from above.
When I run the unit tests in Release mode, all tests run.
When I run the unit tests in Debug mode, I get the error:
Cannot deserialize XBF metadata type list as 'ViewModelLocator' was not found in namespace 'App1.UI'. [Line: 0 Position: 0]
This problem has existed since the Fall Creators Update was set as the minimum target version.
I have read that in UWP, ResourceDictionaries do not have any code-behind and are not initialized directly. Could this be related to that?
@Schaf,
The ViewModel needs to be able to access the actual model. You have all of them being initialized before any data is available for them. That's not how they are intended to be used.
The Model-View-ViewModel construct is meant to allow an aggregation of different data points to present a specific set of information, AND be testable at the same time. In Debug mode, the Resources are not used, because that is essentially a set of static objects (images, lists that don't change, etc.) that are called on at actual runtime.
Additionally, in your scenario, it sounds like your data access is integrated into the ViewModel itself. Testing in Debug mode is supposed to be White-Box, to ensure that the flow, and transformation, of data is easily accessible from beginning to end. By default, this requires that the classes under test (the ViewModels in this case) must be accessible directly from the Test Harness, and thus must be able to be instantiated apart from the overall application context (where the application resources live), which isn't fully assembled in Debug mode.
So to answer your question: yes, the inability to test your ViewModels in Debug mode is directly related to them living underneath the ResourceDictionary. If you pull your ViewModels out into their own folder in the solution, at the same level as your model, you should be able to reach them in Debug mode and test not only the data access but also that the information populating each ViewModel is the correct set of information to satisfy the business rules you are trying to meet.

Unit testing my service and mocking dependencies

I have a service which has two dependencies. One of them is the $http service, responsible for making Ajax calls to my REST API.
Inside my service I have this function:
this.getAvailableLoginOptions = function () {
    return $http.get(path.api + '/security/me/twoFA/options').then(function (resp) {
        return new TwoFaLoginOptions(resp.data);
    });
};
This function gets some options from my API and returns an object.
Now, how should I unit test this function properly? Normally I would just mock the $http service so that when the get function is called with a string parameter ending with '/security/me/twoFA/options' I would return a valid response with options.
But what if some other developer comes in and refactors this function so that it takes the options from another source, e.g. another API or the browser's local storage, while the function still works perfectly because it returns what it is supposed to return?
So what really is unit testing? Should we test every function as a black box and assume that if we give some input then we expect some particular output, OR should we test it as a white box by looking at every line of code inside a function and mocking everything, so that the test becomes strongly dependent on all the dependencies and the way I use them?
Is it possible to write a unit test which checks that my function works properly no matter what algorithm or source of data is used to implement it? Or is it actually part of unit testing to check that my function really uses a dependency in this or that way (in addition to testing the function's logic)?
But what if some other developer comes in and refactors this function so now it takes the options from another source
This is exactly why Dependency Injection is used: it makes it simple to manage dependencies between objects, since coherent functionality can be broken off into separate contracts (interfaces), which solves the problem of swapping out object dependencies at runtime or compile time.
For example, you could have an ITwoFaLoginOptions contract with multiple implementations (HTTP service, local storage, etc.), and then mock the interface's get method in your tests.
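A minimal sketch of that idea in TypeScript (the contract name comes from this answer; the TwoFaLoginOptions shape, the constructor parameters, and the 'twoFaOptions' storage key are made-up placeholders, not your actual code):

    // Placeholder for the asker's options model.
    class TwoFaLoginOptions {
        constructor(public data: any) {}
    }

    // The contract: callers no longer care where the options come from.
    interface ITwoFaLoginOptions {
        getOptions(): Promise<TwoFaLoginOptions>;
    }

    // Implementation backed by the REST API.
    class HttpTwoFaLoginOptions implements ITwoFaLoginOptions {
        constructor(private http: { get(url: string): Promise<{ data: any }> },
                    private apiPath: string) {}

        getOptions(): Promise<TwoFaLoginOptions> {
            return this.http
                .get(this.apiPath + '/security/me/twoFA/options')
                .then(resp => new TwoFaLoginOptions(resp.data));
        }
    }

    // Implementation backed by the browser's local storage.
    class LocalStorageTwoFaLoginOptions implements ITwoFaLoginOptions {
        getOptions(): Promise<TwoFaLoginOptions> {
            const raw = localStorage.getItem('twoFaOptions') || '{}';
            return Promise.resolve(new TwoFaLoginOptions(JSON.parse(raw)));
        }
    }

    // In a unit test, the consumer is handed a hand-rolled fake of the contract:
    const fakeOptions: ITwoFaLoginOptions = {
        getOptions: () => Promise.resolve(new TwoFaLoginOptions({ sms: true, totp: false })),
    };

Whichever implementation is injected, the consumer's test only asserts against the contract, so a later switch from HTTP to local storage does not break it.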
Should we test every function as a black box [...] OR should we test it as a white box?
In general, unit testing is considered white-box testing (mocking the dependencies you have in order to obtain predefined responses that help you reach various code paths, while also asserting that those dependencies were called with the expected parameters), while system (or integration) tests take a black-box approach (for example, calling the service like a client and asserting against the response/DB).
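As an illustration of that white-box style applied to the original AngularJS service, a Jasmine/angular-mocks sketch could look like this (the module name 'app' and the service name 'loginService' are assumptions; TwoFaLoginOptions is the asker's own class):

    declare const angular: any;            // provided by angular + angular-mocks
    declare const TwoFaLoginOptions: any;  // the asker's own model class

    describe('getAvailableLoginOptions', function () {
        let loginService: any;
        let $httpBackend: any;

        beforeEach(angular.mock.module('app'));

        beforeEach(angular.mock.inject(function (_loginService_: any, _$httpBackend_: any) {
            loginService = _loginService_;
            $httpBackend = _$httpBackend_;
        }));

        afterEach(function () {
            // White-box bookkeeping: every expected request must actually have happened.
            $httpBackend.verifyNoOutstandingExpectation();
            $httpBackend.verifyNoOutstandingRequest();
        });

        it('wraps the API response in TwoFaLoginOptions', function () {
            // Predefined response for the dependency, plus an implicit assertion
            // that the expected URL was requested.
            $httpBackend.expectGET(/\/security\/me\/twoFA\/options$/)
                .respond(200, { sms: true, totp: false });

            let options: any;
            loginService.getAvailableLoginOptions().then(function (result: any) {
                options = result;
            });

            $httpBackend.flush();  // resolves the mocked request and the promise chain

            expect(options instanceof TwoFaLoginOptions).toBe(true);
        });
    });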

Storing regularly used Grails commands for later use in NetBeans?

I regularly use right mouse button > "Run/Debug Grails Command..." from within NetBeans.
When I do so, it's cumbersome because I have to wait for "Reloading Grails commands...", then I have to choose the command and manually type all parameters, e.g. "unit:spock -coverage ExampleController".
I have to compose the commands every time I restart NetBeans.
Is there a better solution to this?
Also, every time I run "test-app", Grails restarts completely - is it possible to leave Grails running and just execute the tests in question via a click, again and again and again...?
Thanks to the help of david, I can now answer my question #2:
When clicking right mouse button > "Run/Debug Grails Command..." from within NetBeans, simply double-click "interactive" from the list.
Then, in the new shell, type the command you want to run without the leading grails, e.g. just test-app unit:spock -coverage ExampleController
Every time you want to execute the test again, just hit ENTER within the shell/console.
Note that Grails won't be able to handle certain changes correctly. In this case you will most likely see unexpected exceptions and the like.
If that happens, just close the shell, clean the project, rinse & repeat.

Unit Testing in QTestLib - running single test / tests in class / all tests

I'm just starting to use QTestLib. I have gone through the manual and tutorial. Although I understand how to create tests, I'm just not getting how to make those tests convenient to run. My unit test background is NUnit and MSTest. In those environments, it was trivial (using a GUI, at least) to alternate between running a single test, or all tests in a single test class, or all tests in the entire project, just by clicking the right button.
All I'm seeing in QTestLib is either you use the QTEST_MAIN macro to run the tests in a single class, then compile and test each file separately; or use QTest::qExec() in main() to define which objects to test, and then manually change that and recompile when you want to add/remove test classes.
I'm sure I'm missing something. I'd like to be able to easily:
Run a single test method
Run the tests in an entire class
Run all tests
Any of those would call the appropriate setup / teardown functions.
EDIT: Bounty now available. There's got to be a better way, or a GUI test runner that handles it for you or something. If you are using QtTest in a test-driven environment, let me know what is working for you. (Scripts, test runners, etc.)
You can run only selected test cases (test methods) by passing the test names as command line arguments:
myTests.exe myCaseOne myCaseTwo
It will run all inits/cleanups too. Unfortunately there is no support for wildcards/pattern matching, so to run all cases beginning with a given string (I assume that's what you mean by "running the tests in an entire class"), you'd have to create a script (Windows batch/bash/Perl/whatever) that calls:
myTests.exe -functions
parses the results, and runs the selected tests using the first syntax.
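Such a wrapper might look roughly like the following sketch (TypeScript for Node.js rather than batch/bash/Perl; the executable name myTests.exe, the prefix argument, and the exact -functions output format are assumptions to adapt):

    // run-prefix.ts: list the test functions, keep those starting with a prefix,
    // then invoke the executable with the selected names.
    import { execFileSync } from "child_process";

    const exe = "myTests.exe";              // assumption: your compiled QTestLib binary
    const prefix = process.argv[2] ?? "";   // e.g. "parser" for parserCase1, parserCase2, ...

    // "-functions" prints each test slot on its own line, typically as "name()".
    const listing = execFileSync(exe, ["-functions"], { encoding: "utf8" });

    const selected = listing
        .split(/\r?\n/)
        .map(line => line.trim().replace(/\(\)$/, ""))   // strip a trailing "()"
        .filter(name => name.length > 0 && name.startsWith(prefix));

    if (selected.length === 0) {
        console.error(`No test functions start with "${prefix}"`);
        process.exit(1);
    }

    // Same as running: myTests.exe nameOne nameTwo ...
    execFileSync(exe, selected, { stdio: "inherit" });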
To run all cases, just don't pass any parameter:
myTests.exe
The three features requested by the OP are nowadays integrated into Qt Creator.
The project is automatically scanned for tests, and they appear in the Tests pane.
Each test and its corresponding data can be enabled by clicking its checkbox.
The context menu allows running all tests, all tests of a class, only the selected tests, or a single test, as requested.
The test results are available from Qt Creator too. A color indicator shows pass/fail for each test, along with additional information like debug messages.
In combination with Qt Creator, using the QTEST_MAIN macro for each test class works well, as each compiled test executable is invoked by Qt Creator automatically.
For a more detailed overview, refer to the Running Autotests section of the Qt Creator Manual.