How to do unit testing for tornado + async def? - unit-testing

Environment: Python 3, tornado 4.4. Normal unit tests cannot be used because the methods are asynchronous. There is http://www.tornadoweb.org/en/stable/testing.html, which explains how to do unit testing for asynchronous code, but that works with tornado coroutines ONLY. The classes I want to test use async def statements, and they cannot be tested this way. For example, here is a test case that uses AsyncHTTPClient.fetch and its callback parameter:
class MyTestCase2(AsyncTestCase):
    def test_http_fetch(self):
        client = AsyncHTTPClient(self.io_loop)
        client.fetch("http://www.tornadoweb.org/", self.stop)
        response = self.wait()
        # Test contents of response
        self.assertIn("FriendFeed", response.body)
But my methods are declared like this:
class Connection:
    async def get_data(self, url, *args):
        # ....
And there is no callback. How can I "await" for this method from a test case?
UPDATE: based on Jessie's answer, I created this MWE:
import unittest
from tornado.httpclient import AsyncHTTPClient
from tornado.testing import AsyncTestCase, gen_test, main

class MyTestCase2(AsyncTestCase):
    @gen_test
    async def test_01(self):
        await self.do_more()

    async def do_more(self):
        self.assertEqual(1 + 1, 2)

main()
The result is this:
>py -3 -m test.py
E
======================================================================
ERROR: all (unittest.loader._FailedTest)
----------------------------------------------------------------------
AttributeError: module '__main__' has no attribute 'all'
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
[E 170205 10:05:43 testing:731] FAIL
There is no traceback. But if I replace tornado.testing.main() with unittest.main() then it suddenly starts working.
But why? I guessed that for async unit tests, I need to use tornado.testing.main ( http://www.tornadoweb.org/en/stable/testing.html#tornado.testing.main )
I'm confused.
UPDATE 2: It is a bug in tornado.testing: when called with no test names, tornado.testing.main() falls back to running a module attribute named "all" (hence the AttributeError above). Workaround:
all = MyTestCase2
main()

Instead of using the self.wait / self.stop callbacks, you can wait for "fetch" to complete by using it in an "await" expression:
import unittest
from tornado.httpclient import AsyncHTTPClient
from tornado.testing import AsyncTestCase, gen_test

class MyTestCase2(AsyncTestCase):
    @gen_test
    async def test_http_fetch(self):
        client = AsyncHTTPClient(self.io_loop)
        response = await client.fetch("http://www.tornadoweb.org/")
        # Test contents of response
        self.assertIn("FriendFeed", response.body.decode())

unittest.main()
The other change I had to make in your code is to call "decode" on the body, in order to compare the body (which is bytes) to "FriendFeed" (which is a string).
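The same pattern works for the Connection class from the question: any async def method can simply be awaited inside a gen_test-decorated test method. A minimal sketch, where the body of get_data is a stand-in since the original implementation isn't shown:

import unittest
from tornado.httpclient import AsyncHTTPClient
from tornado.testing import AsyncTestCase, gen_test

class Connection:
    async def get_data(self, url, *args):
        # Stand-in implementation: fetch the URL and return its body.
        response = await AsyncHTTPClient().fetch(url)
        return response.body

class ConnectionTest(AsyncTestCase):
    @gen_test
    async def test_get_data(self):
        conn = Connection()
        body = await conn.get_data("http://www.tornadoweb.org/")
        self.assertIn("FriendFeed", body.decode())

unittest.main()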

Related

Python mock/unittest: How do you mock a class member?

I am trying to unit test a script that reads a requirements.txt file and goes through each line, installing it with pip. However, in my unit test it doesn't seem to enter the loop because self.requirements is empty. I tried setting self.setupvirtualenv.requirements to ['foobar'] but this doesn't work.
My question is how can I mock the self.requirements list so that it enters the for loop and the subprocess.call gets called?
def install_requirements(self):
    self.requirements = open(self.requirements_path, 'r').readlines()
    for req in self.requirements:
        req = req.strip()
        pip_command = r'pip install {}'.format(req)
        subprocess.call(pip_command, shell=True)
My unit test:
import setupvirtualenv
import unittest
from mock import patch, Mock

def test_install_requirements(self, mock_open, mock_subprocess_call):
    self.setupvirtualenv.requirements = ['foobar']
    self.setupvirtualenv.install_requirements()
    mock_open.assert_called_once_with('foobar', 'r')
    mock_subprocess_call.assert_called_once()
The failure message
AssertionError: Expected 'call' to have been called once. Called 0 times.
You should mock the open call, so that readlines() returns the dummy data.
It looks as though you may be missing the patch decorators from your test method. I have included what they should look like below.
from mock import patch

@patch('setupvirtualenv.subprocess.call')
@patch('setupvirtualenv.open')
def test_install_requirements(self, mock_open, mock_subprocess_call):
    # Mock the call to open() and the return value from readlines()
    mock_open.return_value.readlines.return_value = ['foobar\n']
    self.setupvirtualenv.install_requirements()
    mock_open.assert_called_once_with('foobar', 'r')
    mock_subprocess_call.assert_called_once()
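For completeness, here is a self-contained version of that test as a TestCase class. The VirtualenvSetup class name is a guess, since the original script isn't shown, and create=True is needed when patching open because it is a builtin rather than a module attribute:

import unittest
from mock import patch
import setupvirtualenv  # module under test; VirtualenvSetup below is a hypothetical class name

class InstallRequirementsTest(unittest.TestCase):
    def setUp(self):
        self.setupvirtualenv = setupvirtualenv.VirtualenvSetup()
        self.setupvirtualenv.requirements_path = 'requirements.txt'

    @patch('setupvirtualenv.subprocess.call')
    @patch('setupvirtualenv.open', create=True)
    def test_install_requirements(self, mock_open, mock_subprocess_call):
        # open() returns a mock whose readlines() yields the dummy data.
        mock_open.return_value.readlines.return_value = ['foobar\n']
        self.setupvirtualenv.install_requirements()
        mock_open.assert_called_once_with('requirements.txt', 'r')
        mock_subprocess_call.assert_called_once_with('pip install foobar', shell=True)

if __name__ == '__main__':
    unittest.main()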

Testing Motor calls with IOLoop

I'm running unittests in the callbacks for motor database calls, and I'm successfully catching AssertionErrors and having them surface when running nosetests, but the AssertionErrors are being caught in the wrong test. The tracebacks are to different files.
My unittests look generally like this:
def test_create(self):
    @self.callback
    def create_callback(result, error):
        self.assertIs(error, None)
        self.assertIsNot(result, None)

    question_db.create(QUESTION, create_callback)
    self.wait()
And the unittest.TestCase class I'm using looks like this:
class MotorTest(unittest.TestCase):
    bucket = Queue.Queue()

    # Ensure IOLoop stops to prevent blocking tests
    def callback(self, func):
        def wrapper(*args, **kwargs):
            try:
                func(*args, **kwargs)
            except Exception as e:
                self.bucket.put(traceback.format_exc())
            IOLoop.current().stop()
        return wrapper

    def wait(self):
        IOLoop.current().start()
        try:
            raise AssertionError(self.bucket.get(block=False))
        except Queue.Empty:
            pass
The errors I'm seeing:
======================================================================
FAIL: test_sync_user (app.tests.db.test_user_db.UserDBTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/----/Documents/app/app-Server/app/tests/db/test_user_db.py", line 39, in test_sync_user
self.wait()
File "/Users/----/Documents/app/app-Server/app/tests/testutils/mongo.py", line 25, in wait
raise AssertionError(self.bucket.get(block = False))
AssertionError: Traceback (most recent call last):
File "/Users/----/Documents/app/app-Server/app/tests/testutils/mongo.py", line 16, in wrapper
func(*args, **kwargs)
File "/Users/----/Documents/app/app-Server/app/tests/db/test_question_db.py", line 32, in update_callback
self.assertEqual(result["question"], "updated question?")
TypeError: 'NoneType' object has no attribute '__getitem__'
Where the error is reported to be in UsersDbTest but is clearly in test_questions_db.py (which is QuestionsDbTest)
I'm having issues with nosetests and asynchronous tests in general, so if anyone has any advice on that, it'd be greatly appreciated as well.
I can't fully understand your code without an SSCCE, but I'd say you're taking an unwise approach to async testing in general.
The particular problem you face is that you don't wait for your test to complete (asynchronously) before leaving the test function, so there's work still pending in the IOLoop when you resume the loop in your next test. Use Tornado's own "testing" module -- it provides convenient methods for starting and stopping the loop, and it recreates the loop between tests so you don't experience interference like what you're reporting. Finally, it has extremely convenient means of testing coroutines.
For example:
import unittest
from tornado.testing import AsyncTestCase, gen_test
import motor

# AsyncTestCase creates a new loop for each test, avoiding interference
# between tests.
class Test(AsyncTestCase):
    def callback(self, result, error):
        # Translate from Motor callbacks' (result, error) convention to the
        # single arg expected by "stop".
        self.stop((result, error))

    def test_with_a_callback(self):
        client = motor.MotorClient()
        collection = client.test.collection
        collection.remove(callback=self.callback)
        # AsyncTestCase starts the loop, runs until "remove" calls "stop".
        self.wait()
        collection.insert({'_id': 123}, callback=self.callback)
        # Arguments passed to self.stop appear as return value of "self.wait".
        _id, error = self.wait()
        self.assertIsNone(error)
        self.assertEqual(123, _id)
        collection.count(callback=self.callback)
        cnt, error = self.wait()
        self.assertIsNone(error)
        self.assertEqual(1, cnt)

    @gen_test
    def test_with_a_coroutine(self):
        client = motor.MotorClient()
        collection = client.test.collection
        yield collection.remove()
        _id = yield collection.insert({'_id': 123})
        self.assertEqual(123, _id)
        cnt = yield collection.count()
        self.assertEqual(1, cnt)

if __name__ == '__main__':
    unittest.main()
(In this example I create a new MotorClient for each test, which is a good idea when testing applications that use Motor. Your actual application must not create a new MotorClient for each operation. For decent performance you must create one MotorClient when your application begins, and use that same client throughout the process's lifetime.)
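A minimal sketch of that pattern, with the client created once at module level (the module layout is an assumption):

import motor

# Create the one MotorClient when the application begins...
client = motor.MotorClient()
db = client.test

def get_collection(name):
    # ...and reuse that same client for every subsequent operation.
    return db[name]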
Take a look at the testing module, and particularly the gen_test decorator:
http://tornado.readthedocs.org/en/latest/testing.html
These test conveniences take care of many details related to unittesting Tornado applications.
I gave a talk and wrote an article about testing in Tornado, there's more info here:
http://emptysqua.re/blog/eventually-correct-links/

Why are my tests not getting run in my TestCase subclass?

I am learning test driven development...
I wrote a test that should fail but it's not...
(env)glitch:ipals nathann$ ./manage.py test npage/
Creating test database for alias 'default'...
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Destroying test database for alias 'default'...
in npage/ I have tests.py:
from django.test import TestCase
from npage.models import Tip
import datetime

# Example
class TipTester(TestCase):
    def setUp(self):
        print dir(self)
        Tip.objects.create(pk=1,
                           text='Testing',
                           es_text='Probando')

    def tips_in_spanish(self):
        my_tip = Tip.objects.get(pk=1)
        my_tip.set_language('es')
        self.assertEqual(my_tip.text, 'this does not just say \'Probando\'')
What am I doing wrong? I've read this but I still can't figure out what is going wrong here.
Your test functions need to start with test:
def test_tips_in_spanish(self):
Docs here
"When you run your tests, the default behavior of the test utility is to find all the test cases (that is, subclasses of unittest.TestCase) in any file whose name begins with test, automatically build a test suite out of those test cases, and run that suite."

How do I unit test akka actors in Play Framework 2.2.0 Scala (spec2, Mockito)

I am trying to set up some unit tests with the Play framework. A lot of my logic is built into scheduled akka actors that go off and gather data in the background. My problem is that I can't figure out how to unit test them. I literally have no clue how to approach it. I'm trying to use akka-testkit, but I'm basically flailing around. Does anyone have any suggestions on how to even approach it? Examples would be incredibly useful. This is an example of the abomination I am currently working with:
package test

import org.specs2.mutable._
import controllers.Scanner
import java.util.UUID
import org.joda.time.DateTime
import akka.testkit.TestActorRef
import play.api.Logger
import play.api.test.{FakeApplication, TestServer}
import models.PSqlEnum

class ScannerTest extends Specification {
  val appId = UUID.randomUUID()
  val app = models.App(appId, "TestApp", "TestServer", "TestComponent", "Test Description", DateTime.now(),
    DateTime.now(), true, 3, 60, PSqlEnum("scanType", "mandatory"), "http://localhost")
  val rules = <Rule name="DivisionDataIsAvailable" elementsToCheck="DivisionDataIsAvailable"
                    ruleType="is, true, yellow" />
              <Rule name="DivisionDataLoadIsHealthy" elementsToCheck="DivisionDataLoadIsHealthy"
                    ruleType="is, true, red" />;

  "Scanner" should {
    "test something" in {
      val fakeApp = TestServer(3333)
      fakeApp.start()
      implicit val actorSystem = play.api.libs.concurrent.Akka.system(fakeApp.application)
      val scanner = TestActorRef(new Scanner(app, rules)).underlyingActor
      Logger.warn(scanner.getResponseFromWebService.toString)
      fakeApp.stop()
      1 === 1
    }
  }
}
This is obviously not really testing anything. I am basically just trying to get through to the 1 === 1 at this point, to see if I can get the runtime errors to stop. The errors this code is generating are these:
INFO - Starting application default Akka system.
[info] ScannerTest
[info] Scanner should
[info] ! test something
[error] ThrowableException: akka.actor.LocalActorRef.<init>(Lakka/actor/ActorSystemImpl;Lakka/actor/Props;Lakka/actor/InternalActorRef;Lakka/actor/ActorPath;)V (TestActorRef.scala:21)
[error] akka.testkit.TestActorRef.<init>(TestActorRef.scala:21)
[error] akka.testkit.TestActorRef$.apply(TestActorRef.scala:135)
[error] akka.testkit.TestActorRef$.apply(TestActorRef.scala:132)
[error] akka.testkit.TestActorRef$.apply(TestActorRef.scala:125)
[error] test.ScannerTest$$anonfun$1$$anonfun$apply$3.apply(ScannerTest.scala:27)
[error] test.ScannerTest$$anonfun$1$$anonfun$apply$3.apply(ScannerTest.scala:23)
[error] akka.actor.LocalActorRef.<init>(Lakka/actor/ActorSystemImpl;Lakka/actor/Props;Lakka/actor/InternalActorRef;Lakka/actor/ActorPath;)V
[error] akka.testkit.TestActorRef.<init>(TestActorRef.scala:21)
[error] akka.testkit.TestActorRef$.apply(TestActorRef.scala:135)
[error] akka.testkit.TestActorRef$.apply(TestActorRef.scala:132)
[error] akka.testkit.TestActorRef$.apply(TestActorRef.scala:125)
[error] test.ScannerTest$$anonfun$1$$anonfun$apply$3.apply(ScannerTest.scala:27)
[error] test.ScannerTest$$anonfun$1$$anonfun$apply$3.apply(ScannerTest.scala:23)
[info] Total for specification ScannerTest
[info] Finished in 86 ms
[info] 1 example, 0 failure, 1 error
[info] test.ScannerTest
I believe that I need to create a FakeApplication and use that FakeApplication's Akka.system; however, I am not sure how to do it. To be honest, I am not even sure if that is the correct approach. If I could just generate a generic Akka.system and have that work I'd be ecstatic. If anyone has any ideas on how to tackle this I'd be really appreciative.
Okay, I figured it out. Make sure you are using the correct version of akka-testkit. In Play 2.2.0 I was trying to use akka 2.2.M3. Obviously, that doesn't work. I had to put the correct dependencies in my Build.scala, which ended up being this:
"com.typesafe.akka" %% "akka-testkit" % "2.2.0" % "test"
My actual test code looks like this:
package test

import org.specs2.mutable._
import controllers.Scanner
import java.util.UUID
import org.joda.time.DateTime
import akka.testkit.TestActorRef
import play.api.Logger
import models.PSqlEnum
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory
import scala.concurrent.ExecutionContext.Implicits.global

class ScannerTest extends Specification {
  val appId = UUID.randomUUID()
  val app = models.App(appId, "TestApp", "TestServer", "TestComponent", "Test Description", DateTime.now(),
    DateTime.now(), true, 3, 60, PSqlEnum("scanType", "mandatory"), "http://localhost")
  val rules = <Rule name="DivisionDataIsAvailable" elementsToCheck="DivisionDataIsAvailable"
                    ruleType="is, true, yellow" />
              <Rule name="DivisionDataLoadIsHealthy" elementsToCheck="DivisionDataLoadIsHealthy"
                    ruleType="is, true, red" />;

  "Scanner" should {
    "test something" in {
      implicit val actorSystem = ActorSystem("testActorSystem", ConfigFactory.load())
      val scanner = TestActorRef(new Scanner(app, rules)).underlyingActor
      val response = scanner.getResponseFromWebService
      response onSuccess {
        case result => Logger.warn(result.toString)
      }
      response onFailure {
        case error => Logger.warn(error.toString)
      }
      1 === 1
    }
  }
}
Obviously again, this test isn't really doing anything; the actual assertion being evaluated is 1 === 1. It does print to the log now, though, which means I can go back and verify datatypes and the payload of the response, and then build some actual unit tests. I hope someone finds this useful. The error messages in the original question were caused by the akka-testkit dependency not being the same version as Akka, which might be useful for someone to know.
I'm not using spec2 or Mockito but here is what I'm doing to unit test Akka actors in Play 2.2.1:
import org.specs2.mutable.Specification
import scala.concurrent.duration._
import org.scalatest.concurrent._
import akka.testkit._
import org.scalatest.matchers.ShouldMatchers
import org.scalatest.{FlatSpec, BeforeAndAfterAll}
import akka.actor.{Props, ActorSystem}
import akka.pattern.ask
import akka.util.Timeout
import scala.util.{Failure, Success}
import model.BlacklistEntry
import scala.concurrent.{Future, Promise, Await}
import scala.concurrent.ExecutionContext.Implicits.global
import play.api.test.FakeApplication
import model.BlacklistEntryImpl

class LicenceBlackListSpec(_system: ActorSystem) extends TestKit(_system) with ImplicitSender
    with ShouldMatchers with FlatSpec with BeforeAndAfterAll {

  play.api.Play.start(FakeApplication())

  import akka.testkit.TestKit._

  def this() = this(ActorSystem("LicenceBlackListSpec"))

  override def afterAll: Unit = {
    system.shutdown()
    system.awaitTermination(10.seconds)
  }

  implicit val timeout = Timeout(10 seconds)
  val blacklistRef = TestActorRef(Props[LicenceBlackList])

  "An LicenceBlackList Actor" should "be able to create a new blacklist entry" in {
    blacklistRef ! CreateEntry(BlacklistEntryImpl("NEW_KEY", 1000, "Test creation"))
    val expected: BlacklistEntry = BlacklistEntryImpl("NEW_KEY", 1000, "Test creation")
    expectMsg(expected)
  }
}
You'll need to include the scalatest lib as a dependency, as well as the akka-testkit:
"org.scalatest" % "scalatest_2.10" % "1.9.1"
Hope this will help.
I don't use ActorSystem("testActorSystem"); I have tried to let the Play framework use its usual Akka plugin instead. This approach has the following benefits: I can use play.api.libs.concurrent.Akka.system in all code, and I can have extra config options understood by both the Play and Akka parts of the code.
class BaseActorTester(_app: Application) extends akka.testkit.TestKit(play.api.libs.concurrent.Akka.system(_app))
    with FunSuiteLike with BeforeAndAfterAll {

  def this() = this(FakeApplication(additionalConfiguration = Map("currency.db" -> "airando-test")))

  implicit val app: Application = _app
  implicit val ec = play.api.libs.concurrent.Akka.system.dispatcher

  override def beforeAll {
    Play.start(app)
  }

  // play plugin do it itself ??
  //override def afterAll {
  //  akka.testkit.TestKit.shutdownActorSystem(system)
  //}

  ...
}

How to test custom django-admin commands

I created custom django-admin commands, but I don't know how to test them in standard Django tests.
If you're using some coverage tool, it would be good to call the command from your code with:
from django.core.management import call_command
from django.test import TestCase

class CommandsTestCase(TestCase):
    def test_mycommand(self):
        "Test my custom command."
        args = []
        opts = {}
        call_command('mycommand', *args, **opts)
        # Some Asserts.
From the official documentation
Management commands can be tested with the call_command() function. The output can be redirected into a StringIO instance
You should make your actual command script the minimum possible, so that it just calls a function elsewhere. The function can then be tested via unit tests or doctests as normal.
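A minimal sketch of that layout, with the command and function names made up for illustration:

# myapp/management/commands/mycommand.py
from django.core.management.base import BaseCommand
from myapp.tasks import do_the_work  # hypothetical function holding the real logic

class Command(BaseCommand):
    help = 'Thin wrapper: all real logic lives in do_the_work().'

    def handle(self, *args, **options):
        self.stdout.write(do_the_work())

do_the_work() can then be unit tested directly, without going through the management-command machinery at all.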
You can see an example in the Django test suite on github.com:
def test_command_style(self):
    out = StringIO()
    management.call_command('dance', style='Jive', stdout=out)
    self.assertEquals(out.getvalue(),
                      "I don't feel like dancing Jive.")
To add to what has already been posted here: if your django-admin command takes a file as a parameter, you could do something like this:
from django.test import TestCase
from django.core.management import call_command
from io import StringIO
import os

class CommandTestCase(TestCase):
    def test_command_import(self):
        out = StringIO()
        call_command(
            'my_command', os.path.join('path/to/file', 'my_file.txt'),
            stdout=out
        )
        self.assertIn(
            'Expected Value',
            out.getvalue()
        )
This works when your django-command is used in a manner like this:
$ python manage.py my_command my_file.txt
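On the command side, that corresponds to a positional file argument. A minimal sketch, with the command name and output made up to match the test above (add_arguments is how arguments are declared in Django 1.8+):

# myapp/management/commands/my_command.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def add_arguments(self, parser):
        parser.add_argument('file_path')

    def handle(self, *args, **options):
        with open(options['file_path']) as f:
            first_line = f.readline().strip()
        self.stdout.write('Expected Value: {}'.format(first_line))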
A simple alternative to parsing stdout is to make your management command exit with an error code if it doesn't run successfully, for example using sys.exit(1).
You can catch this in a test with:
with self.assertRaises(SystemExit):
    call_command('mycommand')
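On the command side, that might look like this minimal sketch (the command name and run_checks helper are hypothetical):

# myapp/management/commands/mycommand.py
import sys
from django.core.management.base import BaseCommand
from myapp.checks import run_checks  # hypothetical: whatever the command actually does

class Command(BaseCommand):
    def handle(self, *args, **options):
        if not run_checks():
            # Non-zero exit status surfaces as SystemExit in the test above.
            sys.exit(1)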
I agree with Daniel that the actual command script should do the minimum possible, but you can also test it directly in a Django unit test using os.popen4.
From within your unit test you can have a command like
fin, fout = os.popen4('python manage.py yourcommand')
result = fout.read()
You can then analyze the contents of result to test whether your Django command was successful.
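For instance, a sketch of such a check, where the expected substring is an assumption about what yourcommand prints on success (note os.popen4 is Python 2 only):

import os
import unittest

class YourCommandTest(unittest.TestCase):
    def test_yourcommand_output(self):
        # popen4 returns (child_stdin, combined stdout+stderr) file objects.
        fin, fout = os.popen4('python manage.py yourcommand')
        result = fout.read()
        self.assertIn('expected output', result)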