Unit testing in Web2py

I'm following the instructions from this post but cannot get my methods recognized globally.
The error message:
ERROR: test_suggest_performer (__builtin__.TestSearch)
----------------------------------------------------------------------
Traceback (most recent call last):
File "applications/myapp/tests/test_search.py", line 24, in test_suggest_performer
suggs = suggest_flavors("straw")
NameError: global name 'suggest_flavors' is not defined
My test file:
import unittest
from gluon.globals import Request

db = test_db
execfile("applications/myapp/controllers/search.py", globals())

class TestSearch(unittest.TestCase):
    def setUp(self):
        request = Request()

    def test_suggest_flavors(self):
        suggs = suggest_flavors("straw")
        self.assertEqual(len(suggs), 1)
        self.assertEqual(suggs[0][1], 'Strawberry')
My controller:
def suggest_flavors(term):
    return []
Has anyone successfully completed unit testing like this in web2py?

Please see: http://web2py.com/AlterEgo/default/show/260
Note that in your example the function 'suggest_flavors' should be defined at 'applications/myapp/controllers/search.py'.

I don't have any experience with web2py, but I have used other frameworks a lot, and looking at your code I'm a bit confused. Is there an objective reason to use execfile? Isn't it better to use a regular import statement? Instead of execfile you could write:
from applications.myapp.controllers.search import suggest_flavors
That's clearer code for Pythonistas.
Note that in this case you should place an __init__.py in each directory along the path, so that the directories form a package/module hierarchy.
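For illustration, here is a minimal sketch of the test module rewritten around that import (it assumes __init__.py files exist in applications/, applications/myapp/, and applications/myapp/controllers/):
import unittest

# Regular import instead of execfile; needs an __init__.py in each
# directory so Python treats the path as a package hierarchy.
from applications.myapp.controllers.search import suggest_flavors

class TestSearch(unittest.TestCase):
    def test_suggest_flavors(self):
        # The controller stub above returns [], so only check the type here.
        self.assertIsInstance(suggest_flavors("straw"), list)

if __name__ == '__main__':
    unittest.main()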

ImportError: No module named stanford_segmenter

The StanfordSegmenter does not have an interface in nltk, unlike StanfordPOStagger or StanfordNER. So to use it, I basically have to create an interface for StanfordSegmenter manually, namely stanford_segmenter.py under ../nltk/tokenize/. I followed the instructions here: http://textminingonline.com/tag/chinese-word-segmenter
However, when I tried to run from nltk.tokenize.stanford_segmenter import stanford_segmenter, I got an error:
Traceback (most recent call last):
File "C:\Users\qubo\Desktop\stanfordparserexp.py", line 48, in <module>
from nltk.tokenize.stanford_segmenter import stanford_segmenter
ImportError: No module named stanford_segmenter
[Finished in 0.6s]
The instructions mentioned reinstalling nltk after creating stanford_segmenter.py. I didn't quite get the point, but I did it anyway. However, the process could hardly be called a 'reinstall'; it was more like detaching and reconnecting nltk to the Python libs.
I'm using Windows 64 and Python 2.7.11. NLTK and all relevant packages are updated to the latest versions. I wonder if you guys can shed some light on this. Thank you all so much.
I was able to import the module by running the following code:
import imp
yourmodule = imp.load_source("module_name", "/path/to/module_name.py")
yourclass = yourmodule.TheClass()
Here yourclass is an instance of the class, and TheClass is the name of the class you want to create the object from. This is similar to:
from pkg_name.module_name import TheClass
So in the case of StanfordSegmenter, the complete code is as follows:
# -*- coding: utf-8 -*-
import imp
import os

# Point the segmenter at the Stanford jar before loading the module.
ini_path = 'D:/jars/stanford-segmenter-2015-04-20/'
os.environ['STANFORD_SEGMENTER'] = ini_path + 'stanford-segmenter-3.5.2.jar'

# Load stanford_segmenter.py directly from the nltk install directory.
stanford_segmenter = imp.load_source("stanford_segmenter",
    "C:/Users/qubo/Miniconda2/pkgs/nltk-3.1-py27_0/Lib/site-packages/nltk/tokenize/stanford_segmenter.py")

seg = stanford_segmenter.StanfordSegmenter(
    path_to_model='D:/jars/stanford-segmenter-2015-04-20/data/pku.gz',
    path_to_jar='D:/jars/stanford-segmenter-2015-04-20/stanford-segmenter-3.5.2.jar',
    path_to_dict='D:/jars/stanford-segmenter-2015-04-20/data/dict-chris6.ser.gz',
    path_to_sihan_corpora_dict='D:/jars/stanford-segmenter-2015-04-20/data')

sent = '我有一只小毛驴我从来也不骑。'
text = seg.segment(sent.decode('utf-8'))
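For what it's worth, the imp module is deprecated on Python 3; the equivalent load-from-path there goes through importlib.util (a sketch with an illustrative path, not part of the original answer):
import importlib.util

# Python 3 replacement for imp.load_source: load a module from an
# explicit file path.
spec = importlib.util.spec_from_file_location(
    "stanford_segmenter", "/path/to/nltk/tokenize/stanford_segmenter.py")
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)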

Encountering remote error using grpc with protobuf 2.6 in Python

I am using grpc with protobuf 2.6.1 in python 2.7, and when I run my client side code, I have the following errors:
Traceback (most recent call last):
File "debate_client.py", line 31, in <module>
run_client()
File "debate_client.py", line 17, in run_client
reply = stub.Answer(debate_pb2.AnswerRequest(question=question, timeout=timeout), 30)
File "/Users/elaine/Desktop/gitHub/grpc/python2.7_virtual_environment/lib/python2.7/site-packages/grpc/framework/crust/implementations.py", line 73, in __call__
protocol_options, metadata, request)
File "/Users/elaine/Desktop/gitHub/grpc/python2.7_virtual_environment/lib/python2.7/site-packages/grpc/framework/crust/_calls.py", line 109, in blocking_unary_unary
return next(rendezvous)
File "/Users/elaine/Desktop/gitHub/grpc/python2.7_virtual_environment/lib/python2.7/site-packages/grpc/framework/crust/_control.py", line 412, in next
raise self._termination.abortion_error
grpc.framework.interfaces.face.face.RemoteError: RemoteError(code=StatusCode.UNKNOWN, details="")
Here is my client side code:
from grpc.beta import implementations
import debate_pb2
import sys

def run_client():
    params = sys.argv
    print params
    how = params[1]
    question = params[2]
    channel = implementations.insecure_channel('localhost', 29999)
    stub = debate_pb2.beta_create_Candidate_stub(channel)
    if how.lower() == "answer":
        timeout = int(params[3])
        reply = stub.Answer(debate_pb2.AnswerRequest(question=question, timeout=timeout), 30)
    elif how.lower() == "elaborate":
        blah = params[3:len(sys.argv)]
        for i in range(0, len(blah)):
            blah[i] = int(blah[i])
        reply = stub.Elaborate(debate_pb2.ElaborateRequest(topic=question, blah_run=blah), 30)
    if reply is None:
        print "No comment"
    else:
        print reply.answer

if __name__ == "__main__":
    run_client()
And here is my server side code:
import debate_pb2
import consultation_pb2
import re
import random
from grpc.beta import implementations

class Debate(debate_pb2.BetaCandidateServicer):
    def Answer(self, request, context=None):
        # Answer implementation
        pass

    def Elaborate(self, request, context=None):
        # Elaborate implementation
        pass

def run_server():
    server = debate_pb2.beta_create_Candidate_server(Debate())
    server.add_insecure_port('localhost:29999')
    server.start()

if __name__ == "__main__":
    run_server()
Any idea where the remote error comes from? Thank you so much!
Hello Elaine and thank you for trying out gRPC Python.
Nothing leaps out at me as an obvious smoking gun, but a couple of things I see are:
gRPC Python isn't known to work with protobuf 2.6.1. Have you tried working with the very latest protobuf release (3.0.0a3 at this time)?
context isn't an optional keyword parameter in servicer methods; it's a required, positional parameter. Does dropping =None from your servicer method implementations effect any change?
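In other words, a minimal sketch of that second point, reusing the servicer from the question:
class Debate(debate_pb2.BetaCandidateServicer):
    # "context" is a required positional parameter that the framework
    # always passes; it is not an optional keyword.
    def Answer(self, request, context):
        pass  # Answer implementation elided as in the question

    def Elaborate(self, request, context):
        pass  # Elaborate implementation elided as in the question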
The same thing happened to me just now, and I figured out why.
Make sure the messages in your proto definition and the messages in your implementation match the format.
For example, my Response message had a message= param in my Python server, but not in my proto definition.
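In Python terms the mismatch looks something like this (the reply type and field names here are hypothetical, not taken from the question):
# Suppose the .proto declares:  message AnswerReply { string answer = 1; }
# Constructing the reply with an undeclared field name fails server-side
# and surfaces on the client only as the opaque RemoteError:
reply = debate_pb2.AnswerReply(message="hello")   # wrong: no such field

# Constructing it with the declared field name works:
reply = debate_pb2.AnswerReply(answer="hello")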
I think your function implementations should be outside class Debate, or else your functions may not be correctly implemented to give the desired result.
I faced a similar error because my functions were inside the class, and moving them outside the class fixed it.

Testing Motor calls with IOLoop

I'm running unittests in the callbacks for Motor database calls, and I'm successfully catching AssertionErrors and having them surface when running nosetests, but the AssertionErrors are being caught in the wrong test: the tracebacks point to different files.
My unittests look generally like this:
def test_create(self):
    @self.callback
    def create_callback(result, error):
        self.assertIs(error, None)
        self.assertIsNot(result, None)

    question_db.create(QUESTION, create_callback)
    self.wait()
And the unittest.TestCase class I'm using looks like this:
class MotorTest(unittest.TestCase):
    bucket = Queue.Queue()

    # Ensure IOLoop stops to prevent blocking tests
    def callback(self, func):
        def wrapper(*args, **kwargs):
            try:
                func(*args, **kwargs)
            except Exception as e:
                self.bucket.put(traceback.format_exc())
            IOLoop.current().stop()
        return wrapper

    def wait(self):
        IOLoop.current().start()
        try:
            raise AssertionError(self.bucket.get(block=False))
        except Queue.Empty:
            pass
The errors I'm seeing:
======================================================================
FAIL: test_sync_user (app.tests.db.test_user_db.UserDBTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/----/Documents/app/app-Server/app/tests/db/test_user_db.py", line 39, in test_sync_user
self.wait()
File "/Users/----/Documents/app/app-Server/app/tests/testutils/mongo.py", line 25, in wait
raise AssertionError(self.bucket.get(block = False))
AssertionError: Traceback (most recent call last):
File "/Users/----/Documents/app/app-Server/app/tests/testutils/mongo.py", line 16, in wrapper
func(*args, **kwargs)
File "/Users/----/Documents/app/app-Server/app/tests/db/test_question_db.py", line 32, in update_callback
self.assertEqual(result["question"], "updated question?")
TypeError: 'NoneType' object has no attribute '__getitem__'
The error is reported in UserDBTest but clearly originates in test_question_db.py (the questions test case).
I'm having issues with nosetests and asynchronous tests in general, so if anyone has any advice on that, it'd be greatly appreciated as well.
I can't fully understand your code without an SSCCE, but I'd say you're taking an unwise approach to async testing in general.
The particular problem you face is that you don't wait for your test to complete (asynchronously) before leaving the test function, so there's work still pending in the IOLoop when you resume the loop in your next test. Use Tornado's own "testing" module -- it provides convenient methods for starting and stopping the loop, and it recreates the loop between tests so you don't experience interference like what you're reporting. Finally, it has extremely convenient means of testing coroutines.
For example:
import unittest

from tornado.testing import AsyncTestCase, gen_test
import motor

# AsyncTestCase creates a new loop for each test, avoiding interference
# between tests.
class Test(AsyncTestCase):
    def callback(self, result, error):
        # Translate from Motor callbacks' (result, error) convention to the
        # single arg expected by "stop".
        self.stop((result, error))

    def test_with_a_callback(self):
        client = motor.MotorClient()
        collection = client.test.collection
        collection.remove(callback=self.callback)

        # AsyncTestCase starts the loop, runs until "remove" calls "stop".
        self.wait()

        collection.insert({'_id': 123}, callback=self.callback)

        # Arguments passed to self.stop appear as return value of "self.wait".
        _id, error = self.wait()
        self.assertIsNone(error)
        self.assertEqual(123, _id)

        collection.count(callback=self.callback)
        cnt, error = self.wait()
        self.assertIsNone(error)
        self.assertEqual(1, cnt)

    @gen_test
    def test_with_a_coroutine(self):
        client = motor.MotorClient()
        collection = client.test.collection
        yield collection.remove()
        _id = yield collection.insert({'_id': 123})
        self.assertEqual(123, _id)
        cnt = yield collection.count()
        self.assertEqual(1, cnt)

if __name__ == '__main__':
    unittest.main()
(In this example I create a new MotorClient for each test, which is a good idea when testing applications that use Motor. Your actual application must not create a new MotorClient for each operation. For decent performance you must create one MotorClient when your application begins, and use that same one client throughout the process's lifetime.)
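A minimal sketch of that pattern (names illustrative):
import motor

# Created once, at process startup...
client = motor.MotorClient()

def question_collection():
    # ...and the same client is reused for every operation afterwards.
    return client.test.questions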
Take a look at the testing module, and particularly the gen_test decorator:
http://tornado.readthedocs.org/en/latest/testing.html
These test conveniences take care of many details related to unit testing Tornado applications.
I gave a talk and wrote an article about testing in Tornado, there's more info here:
http://emptysqua.re/blog/eventually-correct-links/

__import__ vs imp.load_module

I got an error while trying to install autopep8 with IronPython:
ImportError: No module named logilab
The code snippet where it failed is:
def load_module(self, fullname):
    self._reopen()
    try:
        mod = imp.load_module(fullname, self.file, self.filename, self.etc)
    finally:
        if self.file:
            self.file.close()
    # Note: we don't set __loader__ because we want the module to look
    # normal; i.e. this is just a wrapper for standard import machinery
    return mod
Using the ipy64 interpreter, importing logilab did not fail.
I added a print statement for the filename and it showed:
C:\Program Files (x86)\IronPython 2.7\Lib\site-packages\logilab_common-0.59.1-py2.7.egg\logilab
The path exists and it contains an __init__.py with the following content:
"""generated file, don't modify or your data will be lost"""
try:
    __import__('pkg_resources').declare_namespace(__name__)
except ImportError:
    pass
I fixed the error quick and dirty by adding:
except ImportError:
    mod = __import__(fullname)
but I don't have a good feeling about this fix, as I don't know its possible impacts.
Now, why does imp.load_module fail here, and what is the difference when using __import__?
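One difference worth knowing when weighing that quick fix: for a dotted name, __import__ returns the top-level package rather than the requested submodule, whereas imp.load_module returns exactly the module it loaded. A small illustration with a stdlib package (not from the original question):
import imp
import logging

# __import__ on a dotted name returns the TOP-LEVEL package...
mod = __import__('logging.handlers')
print mod.__name__        # -> 'logging'

# ...so "mod = __import__(fullname)" can hand back the wrong module object
# for a dotted fullname. imp works one path component at a time instead:
f, path, desc = imp.find_module('handlers', logging.__path__)
try:
    handlers = imp.load_module('logging.handlers', f, path, desc)
finally:
    if f:
        f.close()
print handlers.__name__   # -> 'logging.handlers'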

test function with Google App Engine `files` api

I have a function that uses the Google Blobstore API, and here's a degenerate case:
#!/usr/bin/python
from google.appengine.ext import testbed

def foo():
    from google.appengine.api import files
    blob_filename = files.blobstore.create(mime_type='text/plain')
    with files.open(blob_filename, 'a') as googfile:
        googfile.write("Test data")
    files.finalize(blob_filename)

tb = testbed.Testbed()
tb.activate()
tb.init_blobstore_stub()

foo()  # in reality, I'm a function called from a 'faux client'
       # in a unittest testcase.
The error this generates is:
Traceback (most recent call last):
File "e.py", line 18, in
foo() # in reality, I'm a function called from a 'faux client'
File "e.py", line 8, in foo
blob_filename = files.blobstore.create(mime_type='text/plain')
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/files/blobstore.py", line 68, in create
return files._create(_BLOBSTORE_FILESYSTEM, params=params)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/files/file.py", line 491, in _create
_make_call('Create', request, response)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/files/file.py", line 230, in _make_call
rpc = _create_rpc(deadline=deadline)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/files/file.py", line 213, in _create_rpc
return apiproxy_stub_map.UserRPC('file', deadline)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 393, in __init__
self.__rpc = CreateRPC(service, stubmap)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 67, in CreateRPC
assert stub, 'No api proxy found for service "%s"' % service
AssertionError: No api proxy found for service "file"
I don't want to have to modify foo in order to be able to test it. Is there a way to make foo work as expected (i.e. create the given file) in Google App Engine's unit tests?
I would expect to be able to do this with Google's API Proxy, but I don't understand it well enough to figure it out on my own.
I'd be grateful for your thoughts and suggestions.
Thanks for reading.
It seems like testbed.init_blobstore_stub() is outdated, because dev_appserver inits blobstore stubs differently. Here is my implementation of init_blobstore_stub that allows you to write to and read from blobstore in your tests.
from google.appengine.ext import testbed
from google.appengine.api.blobstore import blobstore_stub, file_blob_storage
from google.appengine.api.files import file_service_stub

class TestbedWithFiles(testbed.Testbed):
    def init_blobstore_stub(self):
        blob_storage = file_blob_storage.FileBlobStorage(
            '/tmp/testbed.blobstore', testbed.DEFAULT_APP_ID)
        blob_stub = blobstore_stub.BlobstoreServiceStub(blob_storage)
        file_stub = file_service_stub.FileServiceStub(blob_storage)
        self._register_stub('blobstore', blob_stub)
        self._register_stub('file', file_stub)

# Your code...
def foo():
    from google.appengine.api import files
    blob_filename = files.blobstore.create(mime_type='text/plain')
    with files.open(blob_filename, 'a') as googfile:
        googfile.write("Test data")
    files.finalize(blob_filename)

tb = TestbedWithFiles()
tb.activate()
tb.init_blobstore_stub()
foo()
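If you also want the test to assert on the written content, a read-back along these lines should work against the same stubs (a sketch: it assumes foo() is changed to return blob_filename, and uses the standard get_blob_key/BlobReader Blobstore APIs):
from google.appengine.api import files
from google.appengine.ext import blobstore

blob_filename = foo()  # assumes foo() returns the created file name
blob_key = files.blobstore.get_blob_key(blob_filename)
assert blobstore.BlobReader(blob_key).read() == "Test data"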
I don't know if it was added to the SDK later, but using Testbed.init_files_stub should fix it:
tb = testbed.Testbed()
tb.activate()
tb.init_blobstore_stub()
tb.init_files_stub()
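Combined with the question's code, the setup becomes (a sketch; foo() is the function from the question):
tb = testbed.Testbed()
tb.activate()
tb.init_blobstore_stub()
tb.init_files_stub()   # registers the 'file' stub the assertion complained about

foo()   # no more: AssertionError: No api proxy found for service "file"

tb.deactivate()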
Any chance you are trying to do this using the gaeunit.py test runner? I see the same error while using it, since it has its own code to replace the API proxy.
The error disappeared when I added 'file' to the "as-is" list of proxies in the _run_test_suite function of gaeunit.py.
Honestly, I'm not sure the gaeunit.py proxy-replacement code is needed at all, since I'm also using the more recently recommended testbed code in the test cases as per http://code.google.com/appengine/docs/python/tools/localunittesting.html. So at this point I've commented it all out of gaeunit.py, which also seems to be working.
Note that I'm doing all this on a dev server only, in highly experimental mode on the python27 runtime in GAE with Python 2.7.
Hope this helps.