I have two tests for a view that makes a call to an external module, which I've mocked with mock.patch. I'm calling the view using Django's test client.
The first test (a test for 404 being returned) completes successfully and the correct mock is called.
When the second test runs, everything runs as normal, but the mock that the code-under-test has access to is the mock from the previous test.
You can see in this example https://dpaste.de/7zT8 that the ids in the test output are incorrect (around line 91).
Where is this getting cached? My initial thought was that the import of the main module is somehow cached between test runs due to urlconf stuff. Tracing through the source code, though, I couldn't find that to be the case.
Expected: Both tests pass.
Actual: Second test fails due to stale mocked import.
If I comment out the 404 test, the other test passes.
The view is registered in the url conf as the string-y version 'repos.views.github_webhook'.
I do not fully understand what causes the exact behaviour you are seeing, especially not why the mock seemingly works correctly in the first test. But according to the mock docs, you should patch in the namespace under test, i.e. patch("views.tasks").
http://www.voidspace.org.uk/python/mock/patch.html#where-to-patch
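For reference, a minimal sketch of patching in the consuming module's namespace, assuming the view lives in repos/views.py (per the urlconf string above) and imports the external module as tasks; the URL and assertion here are placeholders:

import mock  # the standalone mock library; on Python 3, use unittest.mock

from django.test import TestCase


class GithubWebhookTests(TestCase):

    # Patch the name where the view looks it up (repos.views.tasks),
    # not where the module is defined (tasks).
    @mock.patch("repos.views.tasks")
    def test_webhook_uses_fresh_mock(self, mock_tasks):
        self.client.post("/webhook/")  # placeholder URL
        self.assertTrue(mock_tasks.mock_calls)  # the view touched this test's mock

The decorator form also undoes the patch when each test returns, so a mock from one test cannot leak into the next.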
For a scenario unit testing a user entering a password and a password confirmation: when I try to verify the same method being called in a different on() block, I get the following error on the second on() block.
org.mockito.exceptions.verification.TooManyActualInvocations:
activationPasswordView.disableButton();
Wanted 1 time:
But was twice
Here is the code:
given("user set password "){
on(“password is null”){
presenterImpl.validatePassword(null, null)
it("done button should be disabled"){
verify(view).disableButton()
}
}
on("input only one password"){
presenterImpl.validatePassword("Password", "")
it("done button should be disabled"){
verify(view).disableButton()
}
}
}
But if I call a different method, it works correctly. I assume this is not how the Spek framework was intended to be used, as all the examples I have seen always use an assert. Is there a way I can write these conditions in Spek without the error? Even a different given() still causes the error.
The mocked object counts the number of times each function is invoked on that specific mock.
Since you did not reset the mock between the tests, the counter increases each time you invoke the method.
You should call reset(view) to reset the mock's invocation counters.
This issue is not related to the Spek framework.
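A minimal sketch of where that reset fits, reusing the names from the question (reset is org.mockito.Mockito.reset, nothing Spek-specific):

// import org.mockito.Mockito.reset
on("input only one password") {
    reset(view)  // clear the invocation count left over from the previous on() block
    presenterImpl.validatePassword("Password", "")
    it("done button should be disabled") {
        verify(view).disableButton()
    }
}

If your Spek version supports it, a beforeEachTest { reset(view) } avoids repeating the call in every on() block.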
I am facing an issue when I run the tests of my Django app with the command
python manage.py test app_name OR
python manage.py test
All the test cases that fetch data by calling a GET API seem to fail because there is no data in the response, in spite of it being present in the test data. The structure I have followed in my test suite is: there is a base class extending Django REST framework's APITestCase with a set_up method that creates test objects of the different models used in the APIs, and I inherit this class in my app's test_views class for any particular API,
such as
class BaseTest(APITestCase):
    def set_up(self):
        '''
        create the test objects which can be accessed by the main test
        class.
        '''
        self.person1 = Person.objects.create(.......)

class SomeViewTestCase(BaseTest):
    def setUp(self):
        self.set_up()

    def test_some_api(self):
        url = '/xyz/'
        self.client.login(username='testusername3', password='testpassword3')
        response = self.client.get(url, {'person_id': self.person3.id})
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(len(response.data), 6)
So whenever I run the test as
./manage.py test abc.tests.test_views.SomeViewTestCase
it works fine, but when I run as
./manage.py test abc
In the test above, response.data has 0 entries, and similarly with the other tests within the same class: the data is just not fetched, and hence all the asserts fail.
How can I ensure the tests run successfully when they are run as a whole? During deployment they have to go through CI.
The versions of the packages and system configuration are as follows:
Django - 1.6
Django REST Framework - 3.1.1
Python - 2.7
Operating System - Mac OS (Sierra)
Appreciate the help. Thanks.
Your test methods are executed in arbitrary order, and after each test a tearDown() method takes care of rolling back to the initial state, so you have isolation between test executions.
The only part shared among them is your setUp() method, which is invoked each time before a test runs.
This means that if the runner starts from the second test method, and the data it relies on is only created in your first test, all the tests are going to fail apart from the posted one.
Hope it helps...
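A short sketch of the fix implied above: create every object a test relies on inside the shared set_up, so no test depends on another having run first (the Person fields, user credentials, and import path are placeholders):

from django.contrib.auth.models import User
from rest_framework.test import APITestCase

from myapp.models import Person  # placeholder import path


class BaseTest(APITestCase):
    def set_up(self):
        # Runs before every test (via setUp), so each test starts from
        # the same data no matter which order the runner picks.
        self.person1 = Person.objects.create(name="person1")  # placeholder fields
        self.person3 = Person.objects.create(name="person3")
        User.objects.create_user(username="testusername3",
                                 password="testpassword3")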
We use CPPUnit as our test framework.
The tests are organized in Test fixtures (inherited from CPPUNIT_NS::TestFixture)
There is a new requirement: flush the application buffer at the end of a test ONLY if the test has failed.
I can do this in the overridden tearDown() function in the test fixture.
But how do I know whether a test has failed?
The result of a test is checked using CPPUNIT_ASSERT.
There are around 12 test fixtures with each fixture having around 10 tests.
How can I achieve this with minimal code change?
I think it depends a bit on how you run your tests, but my first idea would be to use a TestListener and react to the TestListener::addFailure call.
Note, however, that tearDown() can in theory also throw an exception (possibly through a CPPUNIT_ASSERT), which would likewise trigger TestListener::addFailure.
If that does not work, an obvious but really ugly solution is to set a flag at the end of each test method signalling that the test finished successfully, and to run your flush code when the flag is not set.
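A sketch of that listener idea (flushApplicationBuffer() is a stand-in for whatever flushes your application's buffer; the runner wiring follows CppUnit's stock TestRunner/TestResult pattern):

#include <cppunit/TestListener.h>
#include <cppunit/TestFailure.h>
#include <cppunit/TestResult.h>
#include <cppunit/TestRunner.h>
#include <cppunit/extensions/TestFactoryRegistry.h>

void flushApplicationBuffer();  // stand-in for the real flush routine

// addFailure is called for every assertion failure or unexpected
// exception in the run, so no per-fixture changes are needed.
class FlushOnFailureListener : public CPPUNIT_NS::TestListener
{
public:
    void addFailure(const CPPUNIT_NS::TestFailure &failure)
    {
        flushApplicationBuffer();
    }
};

int main()
{
    CPPUNIT_NS::TestResult controller;
    FlushOnFailureListener listener;
    controller.addListener(&listener);

    CPPUNIT_NS::TestRunner runner;
    runner.addTest(CPPUNIT_NS::TestFactoryRegistry::getRegistry().makeTest());
    runner.run(controller);
    return 0;
}

Because tearDown() failures also arrive at addFailure (as noted above), guard the flush routine if flushing twice would be a problem.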
Is there a way to inject providers when writing unit tests using Karma (Testacular) and Jasmine in Angular?
Our team recently decided to use AngularJS's $log to write debugging details to the console. This way we can leverage the ability to disable the logging via the $logProvider.debugEnabled() method.
angular.module("App", ["prismLogin", "ui.bootstrap"])
    .config(["$routeProvider", "$logProvider",
        function ($routeProvider, $logProvider) {
            $routeProvider
            // routes here edited for brevity

            // This is the offending line, it breaks several pre-existing tests
            $logProvider.debugEnabled(true);
        }]);
However, after adding the $logProvider.debugEnabled(true); line, several of our tests no longer execute successfully, failing with the following message:
TypeError: Object doesn't support property or method 'debugEnabled' from App
So my question again, is it possible to mock the $logProvider? Or should I provide my own configuration block for the test harness?
I attempted searching for a way to mock the app module, with no luck. It seems to me that using the concrete app module instead of a mock is very brittle. I would like to avoid reworking the tests associated with the app module every time a change is made in the app's config or run blocks.
The tests that are failing are units of code with no relation to $logProvider. I feel as if I am missing something here and making things much harder than they should be. How should one go about writing tests that are flexible and not affected by side effects introduced elsewhere in the application?
It appears that this is a known issue with angular-mocks.
Until the issue is addressed, I was able to resolve it by adding the following method to the angular.mock.$LogProvider definition in angular-mocks.js at line 295.
this.debugEnabled = function(flag) {
    return this;
};
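With that stub in place, an ordinary Jasmine spec can load the real App module again; a minimal sketch (the expectation is just a placeholder):

describe("App", function () {
    // module() and inject() come from angular-mocks
    beforeEach(module("App"));  // runs the config block, debugEnabled() included

    it("loads the module without a debugEnabled TypeError", inject(function ($log) {
        expect($log).toBeDefined();
    }));
});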
I've been trying to write some initial NUnit unit tests for MonoRail, having got some basics working already. However, while I've managed to check whether a Flash["message"] value has been set by a controller action, the BaseControllerTest class doesn't seem to store the output for a view at all, so whether I call RenderView or the action itself, nothing gets added to the Response.OutputContent data.
I've also tried calling InPlaceRenderView to try to get it to write to a StringWriter, and the StringWriter seems to get nothing back - the StringBuilder it returns is also empty.
I'm creating a new controller instance, then calling
PrepareController(controller, "", "home", "index");
So far it just seems like BaseControllerTest is causing any output to be abandoned. Am I missing something? Should this work? I'm not 100% sure, because I'm running these unit tests in MonoDevelop on Linux, although MonoRail itself is working OK there.
While I haven't got an ideal method for testing views, this is possibly less important when ViewComponents can be tested adequately. To test views within the site itself, I can use Selenium. In theory that can be made part of an NUnit test suite, but it didn't run successfully under MonoDevelop in my tests (it consistently failed to start the connection to Selenium RC, despite the RC interactive session working fine). However, the Selenium tests can be run as a set from Firefox, which is not too bad: unit testing with NUnit, then integration/system testing scripted as a Selenium suite, and that setup will work under Linux/MonoDevelop.
As for testing the underlying elements, you can check for redirections, check the flash values set, and the like, so that's all fine. For testing ViewComponents, the part-mocked rendering does return the rendered output in an accessible form, so they've proved much easier to test in NUnit (with a base test class of BaseViewComponentTest), as follows:
[Test]
public void TestMenuComponentRendersOK()
{
    var mc = new MenuComponent();
    PrepareViewComponent(mc);
    var dict = new System.Collections.Specialized.ListDictionary();
    dict.Add("data", getSampleMenuData());
    dict.Add("Name", "testmenu");
    // other additional parameters
    mc.RenderComponent(mc, dict);
    Assert.IsTrue(this.Output.Contains("<li"), "List items should have been added");
}