pytest-django won't allow database access even with mark

I'm having a difficult time figuring out what is wrong with my setup. I'm trying to test a login view, and no matter what I try, I keep getting:
Database access not allowed, use the "django_db" mark, or the "db" or "transactional_db" fixtures to enable it.
My test:
import pytest

from ..models import User


@pytest.mark.django_db
def test_login(client):
    # If an anonymous user accesses the login page:
    response = client.get('/users/login/')
    # Then the server should respond with a successful status code:
    assert response.status_code == 200
    # Given an existing user:
    user = User.objects.get(username='user')
    # If we attempt to log into the login page:
    response = client.post('/users/login/', {'username': user.username, 'password': 'somepass'})
    # Then the server should redirect the user to the default redirect location:
    assert response.status_code == 302
My conftest.py file, in the same tests directory:
import pytest

from django.core.management import call_command


@pytest.fixture(autouse=True)
def django_db_setup(django_db_setup, django_db_blocker):
    with django_db_blocker.unblock():
        call_command('loaddata', 'test_users.json')
My pytest.ini file (which specifies the correct Django settings file):
[pytest]
DJANGO_SETTINGS_MODULE = config.settings
I'm stumped. I've tried using scope="session" like in the documentation, together with either a @pytest.mark.django_db mark, a db fixture (as a parameter to the test function), or both, with no luck. I've commented out each line of the test to figure out which one was triggering the problem, but couldn't narrow it down. I could only get the test to run at all if I removed all db-dependent fixtures/marks/code from the test and had a simple assert True. I don't believe the issue is in my Django settings, as the development server runs fine and is able to access the database.
What am I missing here?

Apparently this is a case of "deceiving exception syndrome". I had a migration that creates groups with permissions, and since tests run all migrations in one batch, the post_migrate signal that creates the permissions my migration depends on had not yet fired by the time that migration ran.
It seems that if there is any database-related error before the actual tests start running, this exception is raised, which makes it very difficult to debug what is actually going wrong. I ended up updating my migration to create the permissions manually so that it could run, and the error went away.
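For reference, here is a minimal sketch of what such a migration step can look like, assuming the groups are created in a later RunPython operation of the same migration (the ensure_permissions name and the app/dependency labels are hypothetical):

from django.contrib.auth.management import create_permissions
from django.db import migrations


def ensure_permissions(apps, schema_editor):
    # Permissions are normally created by the post_migrate signal, which has
    # not fired yet when all migrations run in one batch (as they do in tests),
    # so create them explicitly before any step that looks them up.
    for app_config in apps.get_app_configs():
        app_config.models_module = True
        create_permissions(app_config, apps=apps, verbosity=0)
        app_config.models_module = None


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),  # hypothetical dependency
    ]

    operations = [
        migrations.RunPython(ensure_permissions, migrations.RunPython.noop),
        # ... followed by the step that creates the groups and assigns the
        # now-existing permissions to them.
    ]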

You can add the code below to your conftest.py, as per the official documentation, to allow DB access without the django_db marker.
@pytest.fixture(autouse=True)
def enable_db_access_for_all_tests(db):
    pass
Ref: https://pytest-django.readthedocs.io/en/latest/faq.html#how-can-i-give-database-access-to-all-my-tests-without-the-django-db-marker

Related

Django Rest Framework with Firebase Firestore - API endpoint returns NoneType because initialize_app runs more than once

I'm trying to build a read-only API that fetches its data from Firebase Firestore. The issue is that when I request any endpoint in my API more than once, I get an error.
I won't include the Django-related files and classes. Here are the code pieces you need to know.
firebase_initilizer.py
import firebase_admin
from firebase_admin import credentials, firestore

if not firebase_admin._apps:
    cred = credentials.Certificate('./FILE_PATH.json')
    firebase_admin.initialize_app(cred)

db = firestore.client()
collection_ref = db.collection(u"collection-name")
docs = collection_ref.stream()
views.py [A simplified version of what I use in one of my API endpoints]
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

from .firebase_initilizer import docs  # import path assumed from the file shown above


class Contact(APIView):
    """
    Returns the user's contact details.
    """
    def get(self, request, uid, format="json"):
        for doc in docs:
            if uid == doc.id:
                return Response(data=doc.to_dict()["contact"], status=status.HTTP_200_OK)
Again, the issue is that I get an error saying "NoneType" whenever I request any endpoint more than once. At this point, I can run my API only once.
The error:
AssertionError at /api/v1/contact/
Expected a `Response`, `HttpResponse` or `HttpStreamingResponse` to be returned from the view, but received a `<class 'NoneType'>`
"GET /api/v1/contact/ HTTP/1.1" 500 78864
From what I know, I need to initialize Firebase only once. Then, I only need to request whatever I want using the variable I assigned the Firebase reference to. However, I don't know how to do it.
I solved my problem by inserting my firebase initializer code piece into manage.py. Plus, it also works in settings.py.
For example, the manage.py file can be rearranged as follows:
import os, sys

import firebase_admin
from firebase_admin import credentials, firestore


def main():
    """Run administrative tasks."""
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'api.settings')
    try:
        from django.core.management import execute_from_command_line
        if not firebase_admin._apps:
            cred = credentials.Certificate('./FILE_PATH.json')
            firebase_admin.initialize_app(cred)
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)
or you can simply add the following lines to anywhere in settings.py:
import firebase_admin
from firebase_admin import credentials

if not firebase_admin._apps:
    cred = credentials.Certificate('./FILE_PATH.json')
    firebase_admin.initialize_app(cred)
I hope this answer helps others.
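One extra observation, not part of the answer above: collection_ref.stream() returns a one-shot generator, so a module-level docs variable is exhausted after the first request, which matches a NoneType error on the second call. A minimal sketch of a per-request lookup that avoids keeping such a generator around (the relative import path is an assumption based on the file names in the question):

from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

from .firebase_initilizer import db  # hypothetical import; db is the firestore.client() created at startup


class Contact(APIView):
    """Returns the user's contact details."""

    def get(self, request, uid, format="json"):
        # Fetch the document by id on every request instead of iterating a
        # generator that was created (and used up) at import time.
        doc = db.collection(u"collection-name").document(uid).get()
        if doc.exists:
            return Response(data=doc.to_dict()["contact"], status=status.HTTP_200_OK)
        return Response(status=status.HTTP_404_NOT_FOUND)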

SQLAlchemy doesn't let me set up Flask apps multiple times in a test fixture

I am writing a Flask application that uses SQLAlchemy for its database backend.
The Flask application is created with an app factory called create_app.
from flask import Flask


def create_app(config_filename=None):
    app = Flask(__name__)
    if config_filename is None:
        app.config.from_pyfile('config.py', silent=True)
    else:
        app.config.from_mapping(config_filename)

    from .model import db
    db.init_app(app)
    db.create_all(app=app)

    return app
The database model consists of a single object called Document.
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()


class Document(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    document_uri = db.Column(db.String, nullable=False, unique=True)
I am using pytest to do unit testing. I create a pytest fixture called app_with_documents that calls the application factory to create an application and adds some Document objects to the database before the test is run, then empties out the database after the unit test has completed.
import pytest

from model import Document, db
from myapplication import create_app


@pytest.fixture
def app():
    config = {
        'SQLALCHEMY_DATABASE_URI': f"sqlite:///:memory:",
        'TESTING': True,
        'SQLALCHEMY_TRACK_MODIFICATIONS': False
    }
    app = create_app(config)
    yield app
    with app.app_context():
        db.drop_all()


@pytest.fixture
def app_with_documents(app):
    with app.app_context():
        document_1 = Document(document_uri='Document 1')
        document_2 = Document(document_uri='Document 2')
        document_3 = Document(document_uri='Document 3')
        document_4 = Document(document_uri='Document 4')
        db.session.add_all([document_1, document_2, document_3, document_4])
        db.session.commit()
    return app
I have multiple unit tests that use this fixture.
def test_unit_test_1(app_with_documents):
    ...


def test_unit_test_2(app_with_documents):
    ...
If I run a single unit test everything works. If I run more than one test, subsequent unit tests crash at the db.session.commit() line in the test fixture setup with "no such table: document".
def do_execute(self, cursor, statement, parameters, context=None):
> cursor.execute(statement, parameters)
E sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: document [SQL: 'INSERT INTO document (document_uri) VALUES (?)'] [parameters: ('Document 1',)] (Background on this error at: http://sqlalche.me/e/e3q8)
What I expect is that each unit test gets its own brand-new identical prepopulated database so that all the tests would succeed.
(This is an issue with the database tables, not the unit tests. I see the bug even if my unit tests consist of just pass.)
The fact that the error message mentions a missing table makes it look like the db.create_all(app=app) in create_app is not being called after the first unit test runs. However, I have verified in the debugger that this application factory function is called once for every unit test as expected.
It is possible that my call to db.drop_all() is an incorrect way to clear out the database. So instead of an in-memory database, I tried creating one on disk and then deleting it as part of the test fixture cleanup. (This is the technique recommended in the Flask documentation.)
import os
import tempfile


@pytest.fixture
def app():
    db_fd, db_filename = tempfile.mkstemp(suffix='.sqlite')
    config = {
        'SQLALCHEMY_DATABASE_URI': f"sqlite:///{db_filename}",
        'TESTING': True,
        'SQLALCHEMY_TRACK_MODIFICATIONS': False
    }
    yield create_app(config)
    os.close(db_fd)
    os.unlink(db_filename)
This produces the same error.
Is this a bug in Flask and/or SQLAlchemy?
What is the correct way to write Flask test fixtures that prepopulate an application's database?
This is Flask 1.0.2, Flask-SQLAlchemy 2.3.2, and pytest 3.6.0, which are all the current latest versions.
In my conftest.py I was importing the contents of model.py in my application like so.
from model import Document, db
I was running the unit tests in PyCharm using PyCharm's pytest runner. If instead I ran the tests from the command line with python -m pytest, I saw the following error:
ModuleNotFoundError: No module named 'model'
ERROR: could not load /Users/wmcneill/src/FlaskRestPlus/test/conftest.py
I can get my tests running from the command line by fully-qualifying the import path in conftest.py.
from myapplication.model import Document, db
When I do this, all the unit tests pass. They also pass when I run them from inside PyCharm.
So it appears I had written an incorrect import statement in my unit tests. However, when I ran those tests via PyCharm, instead of an error message about the import, the scripts launched and then failed with odd SQL errors.
I still haven't figured out why I saw those strange SQL errors; presumably it's something subtle about how global state is handled. But changing the import line fixed my problem.
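For completeness, a sketch of how the top of conftest.py looks after the fix, assuming the package layout implied by the question; the fixtures themselves stay exactly as shown above.

# conftest.py
import pytest

from myapplication import create_app
from myapplication.model import Document, db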

Django Selenium LiveServerTestCase not loading the page in browser (error 500)

I have a test Django project called MyApp, running over WSGI on port 8083. When I go to http://myapp:8083, I see the standard Django "it's working" page. I wrote a functional test using the Selenium bindings in Django to launch a browser and load the above-mentioned page. When I run the test, though, I get an "Address already in use" error message. So I run the test on another port like this: python manage.py test --liveserver=myapp:8084
This opens the browser, but shows "Page not found" error instead of the default Django page. What am I doing wrong? Any ideas? Thank you!
The test.py file content:
from django.test import LiveServerTestCase
from selenium import webdriver


class CoreSeleniumTestCase(LiveServerTestCase):

    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Chrome()
        cls.driver.maximize_window()
        super(CoreSeleniumTestCase, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()
        super(CoreSeleniumTestCase, cls).tearDownClass()

    def testIndexShouldLoad(self):
        self.driver.get('%s%s' % (self.live_server_url, '/'))
I finally found the problem. At some point, Django removed MEDIA_ROOT from the settings.py file by default. It turns out that this setting must be in the file for Selenium tests to work properly. Once I reintroduced the setting and assigned a directory to it, the Selenium tests started to work as expected.
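For reference, a minimal sketch of the reintroduced setting (the directory name is an assumption, any existing directory will do):

import os

# settings.py -- BASE_DIR comes from the default settings template
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')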

How do you set DEBUG to True when running a Django test?

I'm currently running some Django tests and it looks like DEBUG=False by default. Is there a way to run a specific test with DEBUG=True, either at the command line or in code?
For a specific test inside a test case, you can use the override_settings decorator:
from django.conf import settings
from django.test import TestCase
from django.test.utils import override_settings


class TestSomething(TestCase):

    @override_settings(DEBUG=True)
    def test_debug(self):
        assert settings.DEBUG
Starting with Django 1.11 you can use --debug-mode to set the DEBUG setting to True prior to running tests.
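For example, assuming a standard manage.py layout:

# run the whole suite with DEBUG=True
python manage.py test --debug-mode

# or a single test case (the dotted path is hypothetical)
python manage.py test myapp.tests.TestSomething --debug-mode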
The accepted answer didn't work for me. I use Selenium for testing, and setting @override_settings(DEBUG=True) makes the test browser always display a 404 error on every page, while DEBUG=False does not show exception tracebacks. So I found a workaround.
The idea is to emulate the DEBUG=True behaviour using a custom 500 handler and Django's built-in exception reporting.
Add this to myapp.views:
import sys

from django import http
from django.views.debug import ExceptionReporter


def show_server_error(request):
    """
    500 error handler to show Django default 500 template
    with nice error information and traceback.
    Useful in testing, if you can't set DEBUG=True.

    Templates: `500.html`
    Context: sys.exc_info() results
    """
    exc_type, exc_value, exc_traceback = sys.exc_info()
    error = ExceptionReporter(request, exc_type, exc_value, exc_traceback)
    return http.HttpResponseServerError(error.get_traceback_html())
urls.py:
from django.conf import settings

if settings.TESTING_MODE:
    # enable this handler only for testing,
    # so that if DEBUG=False and we're not testing,
    # the default handler is used
    handler500 = 'myapp.views.show_server_error'
settings.py:
# detect testing mode
import sys
TESTING_MODE = 'test' in sys.argv
Now if any of your Selenium tests encounters a 500 error, you'll see a nice error page with a full traceback. In a normal, non-testing environment, the default 500 handler is used.
Inspired by:
Where in django is the default 500 traceback rendered so that I can use it to create my own logs?
django - how to detect test environment
Okay, let's say you want to write tests for the error pages, for which the URLs are:
urls.py
if settings.DEBUG:
    urlpatterns += [
        url(r'^404/$', page_not_found_view),
        url(r'^500/$', my_custom_error_view),
        url(r'^400/$', bad_request_view),
        url(r'^403/$', permission_denied_view),
    ]
test_urls.py:
from django.conf import settings
from django.test import TestCase


class ErroCodeUrl(TestCase):

    def setUp(self):
        settings.DEBUG = True

    def test_400_error(self):
        response = self.client.get('/400/')
        self.assertEqual(response.status_code, 500)
Hope you got some idea!
Nothing worked for me except https://stackoverflow.com/a/1118271/5750078: use the Python 3.7 breakpoint() method. It works fine in PyCharm.
You can't see the results of DEBUG=True when running a unit test: the pages (with the debugging output) are never rendered in any browser, so changing DEBUG has no visible effect.
If you want to see a debugging web page related to a failing unit test, then do this.
Drop your development database.
Rerun syncdb to build an empty development database.
Run the various loaddata scripts to rebuild the fixtures for that test in your development database.
Run the server and browse the page.
Now you can see the debug output.
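A sketch of that workflow as shell commands, for the older Django versions this answer targets (the database and fixture names are hypothetical):

dropdb myproject_dev                  # or however you drop your development database
./manage.py syncdb                    # rebuild an empty development database
./manage.py loaddata test_users.json  # reload the fixtures the failing test uses
./manage.py runserver                 # then browse the page to see the debug output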

django - how to detect test environment (check / determine if tests are being run)

How can I detect whether a view is being called in a test environment (e.g., from manage.py test)?
# pseudo-code
def my_view(request):
    if not request.is_secure() and not TEST_ENVIRONMENT:
        return HttpResponseForbidden()
Put this in your settings.py:
import sys
TESTING = len(sys.argv) > 1 and sys.argv[1] == 'test'
This tests whether the second commandline argument (after ./manage.py) was test. Then you can access this variable from other modules, like so:
from django.conf import settings

if settings.TESTING:
    ...
There are good reasons to do this: suppose you're accessing some backend service, other than Django's models and DB connections. Then you might need to know when to call the production service vs. the test service.
Create your own TestSuiteRunner subclass and change a setting or do whatever else you need to for the rest of your application. You specify the test runner in your settings:
TEST_RUNNER = 'your.project.MyTestSuiteRunner'
In general, you don't want to do this, but it works if you absolutely need it.
from django.conf import settings
from django.test.simple import DjangoTestSuiteRunner


class MyTestSuiteRunner(DjangoTestSuiteRunner):
    def __init__(self, *args, **kwargs):
        settings.IM_IN_TEST_MODE = True
        super(MyTestSuiteRunner, self).__init__(*args, **kwargs)
Just look at request.META['SERVER_NAME']
def my_view(request):
    if request.META['SERVER_NAME'] == "testserver":
        print("This is test environment!")
There's also a way to temporarily override settings in a unit test in Django. This might be an easier/cleaner solution in certain cases.
You can do this inside a test:
with self.settings(MY_SETTING='my_value'):
    # test code
Or add it as a decorator on the test method:
@override_settings(MY_SETTING='my_value')
def test_my_test(self):
    # test code
You can also set the decorator for the whole test case class:
@override_settings(MY_SETTING='my_value')
class MyTestCase(TestCase):
    # test methods
For more info check the Django docs: https://docs.djangoproject.com/en/1.11/topics/testing/tools/#django.test.override_settings
I think the best approach is to run your tests using their own settings file (i.e. settings/tests.py). That file can look like this (the first line imports settings from a local.py settings file):
from local import *
TEST_MODE = True
Then use duck typing to check whether you are in test mode.
try:
    if settings.TEST_MODE:
        print('foo')
except AttributeError:
    pass
If you use multiple settings files for different environments, all you need to do is create one settings file for testing.
For instance, your settings files might be:
your_project/
|_ settings/
   |_ __init__.py
   |_ base.py     <-- your original settings
   |_ testing.py  <-- for testing only
In your testing.py, add a TESTING flag:
from .base import *
TESTING = True
In your application, you can access settings.TESTING to check if you're in testing environment.
To run tests, use:
python manage.py test --settings your_project.settings.testing
While there's no official way to see whether we're in a test environment, Django actually leaves some clues for us.
By default, Django's test runner automatically redirects all Django-sent email to a dummy outbox. This is accomplished by replacing EMAIL_BACKEND in a function called setup_test_environment, which in turn is called by a method of DiscoverRunner. So we can check whether settings.EMAIL_BACKEND is set to 'django.core.mail.backends.locmem.EmailBackend'; if it is, we're in a test environment.
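A minimal sketch of that check (it assumes the code runs after the test runner has already called setup_test_environment):

from django.conf import settings

# True only while Django's test runner has swapped in the in-memory backend.
in_test_environment = (
    settings.EMAIL_BACKEND == 'django.core.mail.backends.locmem.EmailBackend'
)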
A less hacky solution is to follow the developers' lead: subclass DiscoverRunner and override its setup_test_environment method to add a setting of our own.
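A sketch of that approach; the module path, runner name, and IN_TESTS setting are all hypothetical:

# myproject/test_runner.py
from django.conf import settings
from django.test.runner import DiscoverRunner


class FlaggingTestRunner(DiscoverRunner):
    def setup_test_environment(self, **kwargs):
        super().setup_test_environment(**kwargs)
        settings.IN_TESTS = True

Point TEST_RUNNER = 'myproject.test_runner.FlaggingTestRunner' at it in settings.py, and application code can then check getattr(settings, 'IN_TESTS', False).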
Piggybacking off of @Tobia's answer, I think it is better implemented in settings.py like this:
import sys

try:
    TESTING = 'test' == sys.argv[1]
except IndexError:
    TESTING = False
This will prevent it from catching things like ./manage.py loaddata test.json or ./manage.py i_am_not_running_a_test
I wanted to exclude some data migrations from being run in tests, and came up with this solution on a Django 3.2 project:
class Migration(migrations.Migration):

    def apply(self, project_state, schema_editor, collect_sql=False):
        import inspect
        if 'create_test_db' in [i.function for i in inspect.stack()]:
            return project_state
        else:
            return super().apply(project_state, schema_editor, collect_sql=collect_sql)
I haven't seen this suggested elsewhere, and for my purposes it's pretty clean. Of course, it might break if Django changes the name of the create_test_db method (or the return value of the apply method) at some point in time, but modifying this to work should be reasonably simple, since it's likely that some method exists in the stack that doesn't exist during non-test migration runs.