I'm using Django 2.1 with MySQL.
I have one custom SQL view, which is bound to a model with Meta managed = False. Django's TestCase has no idea how the view is created, so I'd like to provide the SQL command that creates it. The best option would be to do this when the test database is created, but I have no idea how to do that.
What I've done so far is override TestCase's setUp method. It looks like this:
from django.core.files import File
from django.db import connection
from django.test import TestCase


class TaskDoneViewTest(TestCase):
    def setUp(self):
        """
        Create custom SQL view
        """
        cursor = connection.cursor()
        file_handle = open('app/tests/create_sql_view.sql', 'r+')
        sql_file = File(file_handle)
        sql = sql_file.read()
        cursor.execute(sql)
        cursor.close()

    def test_repeatable_task_done(self):
        # ...

    def test_one_time_task_done(self):
        # ...
I got this solution from a similar SO post: How to use database view in test cases. It would be a nice temporary solution, but the problem is that with both test cases active I'm getting the following error:
$ python manage.py test app.tests
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
...E..
======================================================================
ERROR: test_repeatable_task_done (app.tests.test_views.TaskDoneViewTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/asmox/AppDev/Python/bubblechecklist/project_root/app/tests/test_views.py", line 80, in setUp
cursor.execute(sql)
File "/home/asmox/AppDev/Python/bubblechecklist/env/lib/python3.6/site-packages/django/db/backends/utils.py", line 68, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/home/asmox/AppDev/Python/bubblechecklist/env/lib/python3.6/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/home/asmox/AppDev/Python/bubblechecklist/env/lib/python3.6/site-packages/django/db/backends/utils.py", line 80, in _execute
self.db.validate_no_broken_transaction()
File "/home/asmox/AppDev/Python/bubblechecklist/env/lib/python3.6/site-packages/django/db/backends/base/base.py", line 437, in validate_no_broken_transaction
"An error occurred in the current transaction. You can't "
django.db.transaction.TransactionManagementError: An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block.
For some reason this error doesn't happen when only one test case is active (why?). The error remains until I change my test's base class from TestCase to TransactionTestCase.
I would ask why this happens and whether there is any way to make it work with the plain TestCase class, because my tests have nothing to do with transactions and this working solution feels a bit too dirty, but...
I would rather stick to the main issue, which is to do the following globally (for all my test cases):
When the test database is created, run one more custom SQL script from a provided file, which creates the required view.
Can you please help me with that?
If you read the documentation for TestCase, you'll see that it wraps each test in a double transaction, one at the class level and one at the test level. The setUp() method runs for each test and is thus inside this double wrapping.
As shown in the above mentioned docs, it is suggested you use setUpTestData() to set up your db at the class level. This is also where you'd add initial data to your db for all your tests to use.
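For the view in the question, that could look roughly like the sketch below, reusing the SQL file path and class name from above; this only illustrates the suggestion and hasn't been verified against the question's database setup:

from django.db import connection
from django.test import TestCase


class TaskDoneViewTest(TestCase):
    @classmethod
    def setUpTestData(cls):
        # Runs once per test class, before the per-test setUp() calls.
        with open('app/tests/create_sql_view.sql') as file_handle:
            sql = file_handle.read()
        with connection.cursor() as cursor:
            cursor.execute(sql)

    def test_repeatable_task_done(self):
        ...  # test body as in the question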
Using the EasyPost Python library, I call the buy function and pass in the rate as the documentation says, but it returns an error.
Can you use your test API key with buy for EasyPost or not? I didn't see anything about it in the documentation. It seems to work with production, but I am not able to test that yet, so I was wondering if I could test it with the test API key.
The code is:
import easypost


def get_shipment(shipment_id):
    return easypost.Shipment.retrieve(shipment_id)

......

shipment = get_shipment(shipment_id)
try:
    shipment.buy(rate=shipment.lowest_rate())
except Exception as e:
    raise ValidationError({'detail': e.message})
The error message I get with the test key:
Traceback (most recent call last):
File "/app/returns/serializers.py", line 237, in handle_shipment_purchase
shipment.buy(rate=shipment.lowest_rate())
File "/usr/local/lib/python3.6/dist-packages/easypost/__init__.py", line 725, in buy
response, api_key = requestor.request('post', url, params)
File "/usr/local/lib/python3.6/dist-packages/easypost/__init__.py", line 260, in request
response = self.interpret_response(http_body, http_status)
File "/usr/local/lib/python3.6/dist-packages/easypost/__init__.py", line 321, in interpret_response
self.handle_api_error(http_status, http_body, response)
File "/usr/local/lib/python3.6/dist-packages/easypost/__init__.py", line 383, in handle_api_error
raise Error(error.get('message', ''), http_status, http_body)
easypost.Error: The request could not be understood by the server due to malformed syntax.
I got the same issue with Python even though my shipment id and API_KEY are correct. The EasyPost Python exception message will not show the root cause of the exception. Try making the request with curl, or inside the exception handler check e.json_body and raise the ValidationError accordingly:
try:
    shipment.buy(rate=shipment.lowest_rate())
except Exception as e:
    # Put a debugger here and check exception e.json_body
    e.json_body
    raise ValidationError({'detail': e.http_body})
Yes, you can buy shipments with your TEST API key. From the code you shared, I don't see any obvious problems, but you'll want to double check that your shipment_id is being set correctly and that your API key is as well. Beyond that, write to us at support@easypost.com and we can look into our system logs to see what may be coming in "malformed".
I have two models in a wagtail app, PageType and NewPageType, and need to replace PageType with NewPageType.
I thought I could remove PageType from my models.py and then run a migration to remove it, and then rename NewPageType to PageType and run a second migration.
However, I'm running into errors when I do this:
[2019-01-22 23:20:26,344] [ERROR] Internal Server Error: /cms/
Traceback (most recent call last):
File "/.../python3.6/site-packages/django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "/.../python3.6/site-packages/django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
[...snip...]
File "/.../python3.6/site-packages/django/db/models/query.py", line 1121, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/.../python3.6/site-packages/wagtail/core/query.py", line 397, in specific_iterator
yield pages_by_type[content_type][pk]
KeyError: 278
It seems like something didn't get updated automatically by Django's built-in migration handling. I couldn't tell what steps I'm missing here so would love to get some help. Thanks!
That's because Wagtail pages use multi-table inheritance, and some of your deleted PageType pages are still around.
Let's take a look at a fresh install of Wagtail (i.e. wagtail start mysite), which comes with a home.HomePage model and creates one HomePage by default. We can have a look at the database and confirm that there is indeed an entry for it:
sqlite> SELECT * FROM home_homepage;
page_ptr_id
3
However, it's rather empty. There's no title, no path, nothing but a page_ptr_id. This is because HomePage inherits from the Page model, which isn't abstract. Therefore, there is a database table for that Page model as well (this is how multi-table inheritance works with Django). Let's have a look at the corresponding table (voluntarily omitting some columns):
sqlite> SELECT id, path, title, slug, url_path, content_type_id FROM wagtailcore_page;
id|path |title|slug|url_path|content_type_id
1 |0001 |Root |root|/ |1
3 |00010001|Home |home|/home/ |2
Here it is!
Similarly, in your case, there are the wagtailcore_page, myapp_pagetype and myapp_newpagetype tables. By deleting the PageType model, Django created a migration which dropped the myapp_pagetype table but left the corresponding entries in the wagtailcore_page table. So now, when you load the admin interface, Wagtail tries to load those orphaned pages (#278 in your traceback) but fails to do so.
For that reason, before deleting a Page model, you need to delete all of its pages first. You can achieve this by adding a RunPython step to your migration, along the lines of the sketch below.
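A minimal sketch of such a data migration, assuming the app is labelled myapp and that the PageType model still exists when the migration runs (the dependency name is made up):

from django.db import migrations


def delete_pagetype_pages(apps, schema_editor):
    # Import the real model (not the historical one) so that Wagtail's
    # delete() also removes the corresponding wagtailcore_page rows.
    from myapp.models import PageType
    for page in PageType.objects.all():
        page.delete()


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0002_newpagetype'),  # hypothetical previous migration
    ]

    operations = [
        migrations.RunPython(delete_pagetype_pages, migrations.RunPython.noop),
    ]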
You would still be left with renaming your second model, though, which can be difficult with Django. If you're lucky, renaming it in your models.py file and running makemigrations might be enough for Django to detect that it should rename the model. If not, or if you have relationships which need to be renamed too, it might be more involved; see 1 and 2.
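If makemigrations doesn't pick up the rename on its own, the operation can be spelled out explicitly. A rough sketch, again with a made-up app label and dependency, and assuming no other relations need adjusting:

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0003_delete_pagetype'),  # hypothetical previous migration
    ]

    operations = [
        migrations.RenameModel(old_name='NewPageType', new_name='PageType'),
    ]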
To recover from it and be able to load admin again, do the following:
Delete the page revisions referencing the orphaned page:
import django
django.setup()
from wagtail.core.models import PageRevision
PageRevision.objects.filter(page_id=278).delete()
exit()
Then delete the page.
django-admin dbshell
DELETE FROM wagtailcore_page WHERE id=278;
Hope that helps.
I'm new to Wagtail, but I had no issues renaming the model and the corresponding template, then running
python manage.py makemigrations
python manage.py migrate
That being said, I was not reusing an old name like the OP was. I might recommend that anyone having this issue come up with a new, descriptive name for the model.
I'm running unit tests in the callbacks of Motor database calls, and I'm successfully catching AssertionErrors and having them surface when running nosetests, but the AssertionErrors are being caught in the wrong test. The tracebacks point to different files.
My unittests look generally like this:
def test_create(self):
    @self.callback
    def create_callback(result, error):
        self.assertIs(error, None)
        self.assertIsNot(result, None)

    question_db.create(QUESTION, create_callback)
    self.wait()
And the unittest.TestCase class I'm using looks like this:
import Queue
import traceback
import unittest

from tornado.ioloop import IOLoop


class MotorTest(unittest.TestCase):
    bucket = Queue.Queue()

    # Ensure IOLoop stops to prevent blocking tests
    def callback(self, func):
        def wrapper(*args, **kwargs):
            try:
                func(*args, **kwargs)
            except Exception as e:
                self.bucket.put(traceback.format_exc())
            IOLoop.current().stop()
        return wrapper

    def wait(self):
        IOLoop.current().start()
        try:
            raise AssertionError(self.bucket.get(block=False))
        except Queue.Empty:
            pass
The errors I'm seeing:
======================================================================
FAIL: test_sync_user (app.tests.db.test_user_db.UserDBTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/----/Documents/app/app-Server/app/tests/db/test_user_db.py", line 39, in test_sync_user
self.wait()
File "/Users/----/Documents/app/app-Server/app/tests/testutils/mongo.py", line 25, in wait
raise AssertionError(self.bucket.get(block = False))
AssertionError: Traceback (most recent call last):
File "/Users/----/Documents/app/app-Server/app/tests/testutils/mongo.py", line 16, in wrapper
func(*args, **kwargs)
File "/Users/----/Documents/app/app-Server/app/tests/db/test_question_db.py", line 32, in update_callback
self.assertEqual(result["question"], "updated question?")
TypeError: 'NoneType' object has no attribute '__getitem__'
The error is reported as being in UserDBTest but clearly originates in test_question_db.py (which is QuestionDBTest).
I'm having issues with nosetests and asynchronous tests in general, so if anyone has any advice on that, it'd be greatly appreciated as well.
I can't fully understand your code without an SSCCE, but I'd say you're taking an unwise approach to async testing in general.
The particular problem you face is that you don't wait for your test to complete (asynchronously) before leaving the test function, so there's work still pending in the IOLoop when you resume the loop in your next test. Use Tornado's own "testing" module -- it provides convenient methods for starting and stopping the loop, and it recreates the loop between tests so you don't experience interference like what you're reporting. Finally, it has extremely convenient means of testing coroutines.
For example:
import unittest

from tornado.testing import AsyncTestCase, gen_test
import motor


# AsyncTestCase creates a new loop for each test, avoiding interference
# between tests.
class Test(AsyncTestCase):
    def callback(self, result, error):
        # Translate from Motor callbacks' (result, error) convention to the
        # single arg expected by "stop".
        self.stop((result, error))

    def test_with_a_callback(self):
        client = motor.MotorClient()
        collection = client.test.collection
        collection.remove(callback=self.callback)

        # AsyncTestCase starts the loop, runs until "remove" calls "stop".
        self.wait()

        collection.insert({'_id': 123}, callback=self.callback)

        # Arguments passed to self.stop appear as the return value of "self.wait".
        _id, error = self.wait()
        self.assertIsNone(error)
        self.assertEqual(123, _id)

        collection.count(callback=self.callback)
        cnt, error = self.wait()
        self.assertIsNone(error)
        self.assertEqual(1, cnt)

    @gen_test
    def test_with_a_coroutine(self):
        client = motor.MotorClient()
        collection = client.test.collection
        yield collection.remove()
        _id = yield collection.insert({'_id': 123})
        self.assertEqual(123, _id)
        cnt = yield collection.count()
        self.assertEqual(1, cnt)


if __name__ == '__main__':
    unittest.main()
(In this example I create a new MotorClient for each test, which is a good idea when testing applications that use Motor. Your actual application must not create a new MotorClient for each operation. For decent performance you must create one MotorClient when your application begins, and use that same one client throughout the process's lifetime.)
Take a look at the testing module, and particularly the gen_test decorator:
http://tornado.readthedocs.org/en/latest/testing.html
These test conveniences take care of many details related to unit testing Tornado applications.
I gave a talk and wrote an article about testing in Tornado, there's more info here:
http://emptysqua.re/blog/eventually-correct-links/
I have a function that uses the Google Blobstore API, and here's a degenerate case:
#!/usr/bin/python
from google.appengine.ext import testbed


def foo():
    from google.appengine.api import files
    blob_filename = files.blobstore.create(mime_type='text/plain')
    with files.open(blob_filename, 'a') as googfile:
        googfile.write("Test data")
    files.finalize(blob_filename)


tb = testbed.Testbed()
tb.activate()
tb.init_blobstore_stub()

foo()  # in reality, I'm a function called from a 'faux client'
       # in a unittest testcase.
The error this generates is:
Traceback (most recent call last):
File "e.py", line 18, in
foo() # in reality, I'm a function called from a 'faux client'
File "e.py", line 8, in foo
blob_filename = files.blobstore.create(mime_type='text/plain')
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/files/blobstore.py", line 68, in create
return files._create(_BLOBSTORE_FILESYSTEM, params=params)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/files/file.py", line 491, in _create
_make_call('Create', request, response)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/files/file.py", line 230, in _make_call
rpc = _create_rpc(deadline=deadline)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/files/file.py", line 213, in _create_rpc
return apiproxy_stub_map.UserRPC('file', deadline)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 393, in __init__
self.__rpc = CreateRPC(service, stubmap)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 67, in CreateRPC
assert stub, 'No api proxy found for service "%s"' % service
AssertionError: No api proxy found for service "file"
I don't want to have to modify foo in order to be able to test it. Is there a way to make foo work as expected (i.e. create the given file) in Google App Engine's unit tests?
I would expect to be able to do this with Google's API Proxy, but I don't understand it well enough to figure it out on my own.
I'd be grateful for your thoughts and suggestions.
Thanks for reading.
It seems like testbed.init_blobstore_stub() is outdated, because dev_appserver inits blobstore stubs differently. Here is my implementation of init_blobstore_stub that allows you to write to and read from blobstore in your tests.
from google.appengine.ext import testbed
from google.appengine.api.blobstore import blobstore_stub, file_blob_storage
from google.appengine.api.files import file_service_stub


class TestbedWithFiles(testbed.Testbed):

    def init_blobstore_stub(self):
        blob_storage = file_blob_storage.FileBlobStorage(
            '/tmp/testbed.blobstore', testbed.DEFAULT_APP_ID)
        blob_stub = blobstore_stub.BlobstoreServiceStub(blob_storage)
        file_stub = file_service_stub.FileServiceStub(blob_storage)
        self._register_stub('blobstore', blob_stub)
        self._register_stub('file', file_stub)


# Your code...
def foo():
    from google.appengine.api import files
    blob_filename = files.blobstore.create(mime_type='text/plain')
    with files.open(blob_filename, 'a') as googfile:
        googfile.write("Test data")
    files.finalize(blob_filename)


tb = TestbedWithFiles()
tb.activate()
tb.init_blobstore_stub()
foo()
I don't know if it was added later to the SDK, but using Testbed.init_files_stub should fix it:
tb = testbed.Testbed()
tb.activate()
tb.init_blobstore_stub()
tb.init_files_stub()
Any chance that you are trying to do this using the gaeunit.py test runner? I see the same error while using that, since it has its own code to replace the api proxy.
The error disappeared when I added 'file' to the "as-is" list of proxies in the _run_test_suite function of gaeunit.py.
Honestly, I'm not sure that the gaeunit.py proxy replacement code is needed at all, since I'm also using the more recently recommended testbed code in the test cases as per http://code.google.com/appengine/docs/python/tools/localunittesting.html. So, at this point I've commented it all out of gaeunit.py, which also seems to be working.
Note that I'm doing all this on a dev server only, in highly experimental mode on python27 in GAE with Python 2.7.
Hope this helps.
I'm following the instructions from this post but cannot get my methods recognized globally.
The error message:
ERROR: test_suggest_performer (__builtin__.TestSearch)
----------------------------------------------------------------------
Traceback (most recent call last):
File "applications/myapp/tests/test_search.py", line 24, in test_suggest_performer
suggs = suggest_flavors("straw")
NameError: global name 'suggest_flavors' is not defined
My test file:
import unittest

from gluon.globals import Request

db = test_db
execfile("applications/myapp/controllers/search.py", globals())


class TestSearch(unittest.TestCase):

    def setUp(self):
        request = Request()

    def test_suggest_flavors(self):
        suggs = suggest_flavors("straw")
        self.assertEqual(len(suggs), 1)
        self.assertEqual(suggs[0][1], 'Strawberry')
My controller:
def suggest_flavors(term):
    return []
Has anyone successfully completed unit testing like this in web2py?
Please see: http://web2py.com/AlterEgo/default/show/260
Note that in your example the function 'suggest_flavors' should be defined at 'applications/myapp/controllers/search.py'.
I don't have any experience with web2py, but I've used other frameworks a lot, and looking at your code I'm a bit confused. Is there an objective reason why execfile should be used? Isn't it better to use a regular import statement? So instead of execfile you could write:
from applications.myapp.controllers.search import suggest_flavors
It's clearer code for Python programmers.
Note that you should place an __init__.py in each directory along the path in this case, so that the directories form a package/module hierarchy.
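A minimal sketch of the test file under that approach, assuming the controller module really is importable this way (web2py controllers normally rely on the framework's execution environment, so this may need adjusting) and using the stub controller from the question, which returns an empty list:

import unittest

from applications.myapp.controllers.search import suggest_flavors


class TestSearch(unittest.TestCase):

    def test_suggest_flavors(self):
        # The stub controller from the question returns [].
        self.assertEqual(suggest_flavors("straw"), [])


if __name__ == '__main__':
    unittest.main()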