I'm attempting to write tests for the frontend behavior of my application, using Selenium. However, the pages I'm attempting to test get their data from Solr, and I don't want to spin up a Solr instance in order to run the tests.
I'm using py.test and pytest-django for my tests, and I'm attempting to monkeypatch the views in order to make assertions about the data sent by the Selenium browser.
For example, this is a test that I would expect to fail:
def test_search(self, live_server, browser, monkeypatch):
    def mockview(request):
        from django.http import HttpResponse
        assert True == False
        return HttpResponse('Test')
    monkeypatch.setattr(project.app.views, 'search', mockview)
    browser.get(live_server.url + reverse('app:search'))
I would expect this to fail when the browser attempts to load the 'app:search' page. Instead, it loads the normal version of the page and the test succeeds.
Is there a way to get this behavior? Or is there a better way to approach these tests?
You are monkey patching the view function in the view module. Any location that has already imported that view (a reference to the function) will still hold the reference to the old (real) view function.
Django's urlconf mechanism imports and configures itself with the real view on the first request (which probably happens in another test case).
When you change the function in your views module, the urlconf will not notice it, since it already holds a reference to the old view function. Monkey patching in Python changes names/references, not functions themselves.
You are using pytest's monkeypatch helper, but this guide in the mock library documentation provides some good information about where to apply monkey patches:
http://www.voidspace.org.uk/python/mock/patch.html#where-to-patch
In this particular case, I think your best bet is to patch the Solr call with static test data, rather than the view. Since you are doing a Selenium test, I think it is valuable to keep the real view: what are you actually testing if you replace the entire view?
If the view itself contains a lot of Solr-specific code, you might want to break that code out into a separate function, which you can then easily patch out.
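For instance, a rough sketch of that split, assuming a hypothetical run_search helper in the same views module that holds all the Solr-specific code:
# project/app/views.py (hypothetical layout):
#     def run_search(query): ...      # all the Solr-specific code lives here
#     def search(request): ...        # calls run_search() and renders the results

def test_search(self, live_server, browser, monkeypatch):
    # Patch the helper where the view looks it up, so the live server
    # returns canned data instead of talking to Solr.
    monkeypatch.setattr(project.app.views, 'run_search',
                        lambda query: ['canned', 'result'])
    browser.get(live_server.url + reverse('app:search'))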
If you really want to change the view, I suggest you override the urlconf to point at your new view:
https://pytest-django.readthedocs.org/en/latest/helpers.html#pytest-mark-urls-override-the-urlconf
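A sketch of that approach using pytest-django's urls marker; the project.app.test_urls module name is hypothetical, and it would need to expose a stub view under the same app:search name the test reverses:
import pytest
from django.core.urlresolvers import reverse  # django.urls on newer Django versions

@pytest.mark.urls('project.app.test_urls')  # hypothetical urlconf wiring in a stub view
def test_search(live_server, browser):
    # The overridden urlconf now serves the stub instead of the real search view.
    browser.get(live_server.url + reverse('app:search'))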
Related
I am trying to write some tests for a Django application I'm working on but I haven't yet decided on the exact urls I want to use for each view. Therefore, I'm using named urls in the tests.
For example, I have a url named dashboard:
c = Client()
resp = c.get(reverse('dashboard'))
This view should only be available to logged in users. If the current user is anonymous, it should redirect them to the login page, which is also a named url. However, when it does this, it uses an additional GET parameter to keep track of the url it just came from, which results in the following:
/login?next=dashboard
When I then try to test this redirect, it fails because of these additional parameters:
# It's expecting '/login' but gets '/login?next=dashboard'
self.assertRedirects(resp, reverse('login'))
Obviously, it works if I hard code them into the test:
self.assertRedirects(resp, '/login?next=dashboard')
But then, if I ever decide to change the URL for my dashboard view, I'd have to update every test that uses it.
Is there something I can do to make it easier to handle these extra parameters?
Any advice appreciated.
Thanks.
As you can see, reverse(...) just returns a string, so you can build the expected URL around it:
self.assertRedirects(resp, '%s?next=dashboard' % reverse('login'))
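If you also want to avoid hard-coding the next value, the same idea extends to the dashboard URL (a sketch, assuming the redirect carries the reversed dashboard path in the next parameter):
expected = '%s?next=%s' % (reverse('login'), reverse('dashboard'))
self.assertRedirects(resp, expected)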
I have a project that uses a SOLR search engine through django-haystack. The search engine is on a different live server, and touching it during the test run is undesirable (actually, it's impossible, since access to that host is firewalled).
I'm using the standard Django test runner. Luckily, it gives me a test-settings object I can modify to my liking, but it turns out that's not the end of the story.
A lot of stuff in django-haystack is instantiated at import time, so by the time I change the test settings in my test runner it is too late, and despite the fact that I change SEARCH_BACKEND to dummy, the tests still make calls to SOLR. The problem is not specific to Haystack; the same issue happens with mongoengine. Any class-level statements (e.g. CharField(default=Blah.objects.find(...))) are executed at instantiation time, before Django has a chance to change the settings.
Of course, the root of the problem is that Django settings are a scary, globally mutable mess and that Django provides no centralized place for instantiation code. Given that, are there any suggestions on which testing solution would be easiest? At the moment I'm thinking about a shell script that sets the DJANGO_SETTINGS_MODULE environment variable to test_settings and runs ./manage.py test afterwards. It would be nicer if I could still do things via ./manage.py, though.
Any better ideas? People with similar problems?
I took the answer from here and modified it slightly. This works great for me:
from contextlib import contextmanager

@contextmanager
def connection(**kwargs):
    # Temporarily point haystack's connection handler at the supplied settings.
    from haystack import connections
    for key, new_value in kwargs.items():
        setattr(connections, key, new_value)
    connections['default'].options['URL'] = connections.connections_info['default']['URL']
    yield
My test, then, looks like:
def test_job_detail_by_title_slug_job_id(self):
    with connection(connections_info=solr_settings.HAYSTACK_CONNECTIONS):
        resp = self.client.get('/0/rts-crb-unix-production-engineer/27216666/job/')
        self.assertEqual(resp.status_code, 404)
        resp = self.client.get('/indianapolis/indiana/usa/jobs/')
        self.assertEqual(resp.status_code, 200)
Forgive me if this has been asked repeatedly, but I couldn't find an example of this anywhere.
I'm struggling to understand how to share code among view functions in Django. For example, I want to check if the user is authenticated in many views. If they're not, I'd like to log some information about that request (IP address, etc.) then display a canned message about needing authentication.
Any advice on how to accomplish this?
You can write that code in a function, then call it from many views.
For example:
def check_login(request):
    # Shared helper: do the authentication check and logging here.
    pass

def view1(request):
    check_login(request)
    pass

def view2(request):
    check_login(request)
    pass
This is probably best accomplished by creating a utils.py file, rather than a view. Views that don't return an HttpResponse object are not technically valid.
See: https://docs.djangoproject.com/en/dev/intro/tutorial03/#write-views-that-actually-do-something
"Each view is responsible for doing one of two things: Returning an HttpResponse object containing the content for the requested page, or raising an exception such as Http404." ... "All Django wants is that HttpResponse. Or an exception."
Heroku will throw an error if the view does not return an HttpResponse.
What I usually do in this situation is write the function in a separate file called utils.py, then import and use it from the application files that need it.
from utils import check_login

def view1(request):
    check_login(request)
    pass

def view2(request):
    check_login(request)
    pass
One simple solution would be to use a decorator, just like Django's login_required; however, if you need something more complex, you may want something like class-based views.
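A minimal sketch of the decorator route, assuming all you need is to log the client address and return a canned message for anonymous users (the decorator name ensure_authenticated is made up here):
import logging
from functools import wraps
from django.http import HttpResponse

logger = logging.getLogger(__name__)

def ensure_authenticated(view_func):
    @wraps(view_func)
    def wrapper(request, *args, **kwargs):
        # Note: on older Django versions is_authenticated is a method, is_authenticated().
        if not request.user.is_authenticated:
            logger.info('Anonymous request from %s to %s',
                        request.META.get('REMOTE_ADDR'), request.path)
            return HttpResponse('Please log in to use this feature.', status=401)
        return view_func(request, *args, **kwargs)
    return wrapper

@ensure_authenticated
def view1(request):
    return HttpResponse('OK')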
In my Django application, I have a section of code that uploads a file to Amazon S3, and I would like to skip this section during unittests. Unittests happen to run with DEBUG=False, so I can't test for settings.DEBUG == True to skip this section. Any ideas?
You really don't want to "skip" code in your unit tests -- if you do, you'll never have coverage for those areas. It's far better to provide a mock interface to external systems, so you can ensure that the rest of the code behaves as expected. This is especially critical when dealing with external resources that may be unavailable, as S3 can be in case of network issues, service interruptions, or configuration errors.
Alternatively, you could just use the Django S3 storage backend in your production environment, while configuring your tests to use local file storage instead.
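For instance, a sketch of a test settings module that swaps the storage backend; the production backend name in the comment assumes django-storages, so adjust it to whatever you actually use:
# test_settings.py (hypothetical)
from settings import *

# Production would set something like:
#   DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
# For tests, fall back to the local filesystem so nothing ever touches S3.
DEFAULT_FILE_STORAGE = 'django.core.files.storage.FileSystemStorage'
MEDIA_ROOT = '/tmp/test_media'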
You could -- and yes, this is a hack -- import the module that does the uploading, and replace the upload function in that module with another function, that does nothing. Something like this:
foo.py:
def bar():
    return 42
biz.py:
import foo

print(foo.bar())  # prints 42
foo.bar = lambda: 37
print(foo.bar())  # prints 37
Again, it's a hack, but if this is the only place where you're going to need such functionality it might work for you.
You don't skip a function for testing.
You provide a mock implementation for something that you don't want to run as if it were production.
First, you design for testing by making the S3 Uploader a separate class that has exactly the API your application needs.
Then you write a mock version of this class with the same API. All it does is record that it was called.
Finally, you make sure your unit test plugs in your mock object instead of the real S3 Uploader.
Your Django application should not have any changes made -- except the change "injected" into it by the unit test.
Your views.py that does the upload:
import the_uploader
import mock_uploader
from django.conf import settings
uploadClass = eval(settings.S3_UPLOAD_CLASS_NAME)
uploader = uploadClass( ... )
Now, you provide two settings.py files. The default settings.py has the proper uploader class name.
For testing, you have a test_settings.py which looks like this.
from settings import *
S3_UPLOAD_CLASS_NAME = "mock_uploader.mock_upload_class"
This allows you to actually test everything.
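The mock class itself can be as small as something that records how it was called; a sketch, assuming the real uploader exposes an upload(name, content) method (the module and class names follow the settings value above):
# mock_uploader.py (hypothetical)
class mock_upload_class(object):
    def __init__(self, *args, **kwargs):
        self.uploads = []  # record of everything that would have gone to S3

    def upload(self, name, content):
        # Just remember the call so the test can assert on it later.
        self.uploads.append((name, content))
        return 'http://example.com/fake/%s' % name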
I have a requirement something like this:
As soon as the user signs up (and sits in a waiting state until he confirms his email address), a session variable is set, something like "FIRST_TIME_FREE_SESSION_EXPIRY_AGE_KEY" (sorry if the name sounds confusing!), which holds a datetime object 8 hours ahead of the current time.
The effect on the user is that he gets 8 hours to use all the features of our site without confirming the email address he signed up with. After 8 hours, every view/page will show a big banner telling the user to confirm. (All of this is achieved with a single "ensure_confirmed_user" decorator applied to every view.)
I want to test this functionality using Django's unittest support (the TestCase class). How do I do it?
Update: Do I need to manually change the mentioned session variable's value (shrinking the 8 hours to a few seconds) to get this done? Or is there a better way?
Update: This may sound insane, but I want to simulate a request from the future.
Generally, if unit testing is difficult because the product code depends on external resources that won't cooperate, you can abstract away those resources and replace them with dummies that do what you want.
In this case, the external resource is the time. Instead of using datetime.now(), refactor the code to accept an external time function. It can default to datetime.now. Then in your unit tests, you can change the time as the test progresses.
This is better than changing the session timeout to a few seconds, because even then, you have to sleep for a few seconds in the test to get the effect you want. Unit tests should run as fast as you can get them to, so that they will be run more often.
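A rough sketch of that idea against the decorator described in the question; the session key and decorator name come from the question, while the now argument, the fake clock, the placeholder view, and the banner flag are assumptions for illustration:
from datetime import datetime, timedelta
from functools import wraps

def ensure_confirmed_user(view_func, now=datetime.now):
    # 'now' is the injected clock; production code never passes it,
    # while tests can supply a fake that returns a time in the future.
    @wraps(view_func)
    def wrapper(request, *args, **kwargs):
        expiry = request.session.get('FIRST_TIME_FREE_SESSION_EXPIRY_AGE_KEY')
        request.show_confirm_banner = bool(expiry and now() > expiry)
        return view_func(request, *args, **kwargs)
    return wrapper

# In the test, "a request from the future" is just a fake clock:
def fake_now():
    return datetime.now() + timedelta(hours=9)

def some_view(request):
    return None  # placeholder view for the sketch

future_view = ensure_confirmed_user(some_view, now=fake_now)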
I can think of a couple of possibilities. During your test run, override the FIRST_TIME_FREE_SESSION_EXPIRY_AGE_KEY variable and set it to a smaller time limit. You can then wait until that time limit is over and verify if the feature is working as expected.
Alternatively, replace your own datetime functions (assuming your feature relies on datetime).
You can accomplish this by overriding the setup_test_environment and teardown_test_environment methods.
My settings.py differs slightly depending on whether Django runs in a production environment or in a development environment. I have two settings modules: settings.py and settings_dev.py. The development version looks like this:
from settings import *

DEBUG = True

INSTALLED_APPS = tuple(list(INSTALLED_APPS) + [
    'dev_app',
])
Now you can solve your problem in different ways:
Add the variable with different values to both settings modules;
Where you set the variable, choose between two values according to the value of the DEBUG setting. You can also use DEBUG to omit the unit test when on the production server, because the test will probably take too long there anyway.
You can use the active settings module like this:
from django.conf import settings

if settings.DEBUG:
    ...
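And if you take the suggestion of omitting the slow unit test on the production server, the standard library skip decorators cover that; a sketch (the class and test names here are placeholders):
import unittest
from django.conf import settings

class SearchIntegrationTest(unittest.TestCase):
    @unittest.skipUnless(settings.DEBUG, "skipped on production settings")
    def test_expensive_search(self):
        pass  # the slow, environment-dependent test body goes here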