I would like to use @unittest.skip so that my automated test run skips some long-running tests, while still being able to run them manually from PyCharm. However, once I add the decorator, PyCharm can no longer execute the test: if the decorator is on the class, the run fails immediately; if it is on the method, the class initialization runs and then the method is skipped. I'm using the "Unittests" runner. Are there any workarounds? Detecting whether the code is running inside PyCharm isn't enough; I really want something like NUnit's Ignore behavior.
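For context, the closest workaround I've found is gating the skip on an environment variable that a PyCharm run configuration can set manually (the variable name RUN_SLOW_TESTS here is my own choice, not anything standard):

```python
import os
import unittest

# Skip slow tests unless the environment variable is set; a PyCharm run
# configuration can set RUN_SLOW_TESTS=1 to run them manually.
SKIP_SLOW = os.environ.get("RUN_SLOW_TESTS") != "1"

@unittest.skipIf(SKIP_SLOW, "long-running; set RUN_SLOW_TESTS=1 to run")
class LongRunningTests(unittest.TestCase):
    def test_expensive_operation(self):
        self.assertTrue(True)  # placeholder for the real slow test
```

With this, the automated run reports the tests as skipped rather than failing, and a dedicated run configuration flips the variable to include them.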
Recently I changed the db engine from SQLite to PostgreSQL. I successfully migrated the whole db design to PostgreSQL (just a simple makemigrations and migrate). Once I ran the tests, some of them failed for no obvious reason (the errors suggest that some objects were not created). The failure doesn't apply to all tests, just a selected few. Everything worked before.
I'd start investigating on a test-by-test basis, but some bizarre behavior has appeared. Let's say my test is called test_do_something and lives in the class MyTestClass, alongside other tests.
When I'm running python manage.py test MyTestClass I'm getting info that test_do_something has failed.
When I'm running python manage.py test MyTestClass.test_do_something everything passes.
On SQLite both ways pass.
I'm assuming that the setUpTestData() and setUp() methods work the same way on SQLite and PostgreSQL. Or don't they?
Any clue why such discrepancy might be happening?
EDIT
I think I've noticed what might be wrong, but I don't understand why. The problem seems to be that a helper function I call to create an object is only executed once, which differs from how it behaves under SQLite.
What I mean is that in my tests I have something like this:
def create_object(self):
    self.client.post(reverse('myurl', kwargs={'myargs': arg}))

def test_mytest1(self):
    # Do something
    self.create_object()
    # Do something

def test_mytest2(self):
    # Do something
    self.create_object()
    # Do something

def test_mytest3(self):
    # Do something
    self.create_object()
    # Do something

Only for one of the tests is create_object() actually executed.
I believe I've found the cause of those failures. It actually wasn't an issue with the one-time execution of the support function as I suspected. The problem was the hardcoded ids I had used for various reasons. The object I was expecting to find simply didn't exist under that id.
Let me explain what I experienced in a bit more detail. For example, I had a test that referred to a particular object by passing its id in the URL kwargs. Before that, I created the object and passed id=1 in the kwargs, assuming that since this was the only object created in this test and in setUp(), its id would be 1. With PostgreSQL this turns out not to hold: ids keep incrementing despite the db flush, which is completely different from the behavior SQLite provided.
I'd very much appreciate it if someone could explain in more detail why this happens. Is the id counter not reset to zero in PostgreSQL on flush? It would look so.
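If my reading is right, this matches how PostgreSQL sequences behave: deleting rows (or truncating without RESTART IDENTITY) does not rewind the sequence that hands out ids. A minimal pure-Python sketch of that reasoning (my own analogy, not Django or PostgreSQL code):

```python
import itertools

# A sequence keeps counting no matter what happens to the table's rows,
# so a test must never assume the next assigned id is 1.
class FakeSequence:
    def __init__(self):
        self._counter = itertools.count(1)

    def next_id(self):
        return next(self._counter)

seq = FakeSequence()
first = seq.next_id()   # first test: id 1, as the SQLite intuition expects
# ...rows are deleted between tests, but the sequence is NOT reset...
second = seq.next_id()  # next test: id 2, so a hardcoded id=1 finds nothing
```

The robust fix either way is to use the pk the database actually assigned (e.g. obj.pk, or obj.id on the object returned by create()) instead of a hardcoded literal.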
I am using RSpec.shared_context to set up variables that all the describe blocks will use. Something like this:
RSpec.shared_context "common" do
  let(:name) { ... } # creates a database object
  # more let statements
end
Now I invoke it from a describe block like so:
describe "common test" do
  include_context "common"
  # run a few tests
end
Now, after running the describe block, I want to clean up. How do I roll back all the objects created in the shared context?
I tried cleaning up in the after(:context) hook, but since name is defined by a let statement, it is only accessible inside examples.
Is there some way I can use use_transactional_fixtures to clean this up after running the tests in the describe block?
You don't need to worry about cleaning up your "lets" if you just set up your test suite properly to wipe the database. From the RSpec documentation:
"Use let to define a memoized helper method. The value will be cached across multiple calls in the same example but not across examples. Note that let is lazy-evaluated: it is not evaluated until the first time the method it defines is invoked."
In almost every case you want teardown to happen automatically and per example. That's what use_transactional_fixtures does: it rolls back the database after every example so that you start from a fresh slate and don't get test-ordering issues. Relying on each example or context to explicitly clean up after itself is a recipe for failure.
I'm having trouble moving away from django_nose.FastFixtureTestCase to django.test.TestCase (or even the more conservative django.test.TransactionTestCase). I'm using Django 1.7.11 and testing against Postgres 9.2.
I have a TestCase class that loads three fixture files. The class contains two tests. If I run each test individually as a single run (manage test test_file:TestClass.test_name), each passes. If I run them together (manage test test_file:TestClass), I get
IntegrityError: Problem installing fixture '<path>/data.json': Could not load <app>.<Model>(pk=1): duplicate key value violates unique constraint "<app_model_field>_49810fc21046d2e2_uniq"
To me it looks like the db isn't actually getting flushed or rolled back between tests since it only happens when I run the tests in a single run.
I've stepped through the Django code and it looks like they are getting flushed or rolled back -- depending on whether I'm trying TestCase or TransactionTestCase.
(I'm moving away from FastFixtureTestCase because of https://github.com/django-nose/django-nose/issues/220)
What else should I be looking at? This seems like it should be a simple matter and is squarely what django.test.TestCase and django.test.TransactionTestCase are designed for.
Edit:
The test class more or less looks like this:
class MyTest(django.test.TransactionTestCase):  # or django.test.TestCase
    fixtures = ['data1.json', 'data2.json', 'data3.json']

    def test1(self):
        return  # I simplified it to just this for now.

    def test2(self):
        return  # I simplified it to just this for now.
Update:
I've managed to reproduce this a couple of times with a single test, so I suspect something in the fixture loading code.
One of my basic assumptions was that my db was clean for every TestCase. Tracing into the Django core code, I found instances where an object (in one case a django.contrib.auth.User) already existed.
I temporarily overrode _fixture_setup() to assert the db was clean prior to loading fixtures. The assertion failed.
I was able to narrow the problem down to code that was in a TestCase.setUpClass() instead of TestCase.setUp(), and so the object was leaking out of the test and conflicting with other TestCase fixtures.
What I don't understand completely is I thought that the db was dropped and recreated between TestCases -- but perhaps that is not correct.
Update: Recent versions of Django include setUpTestData(), which should be used instead of setUpClass() for creating per-class test data.
How can I update a column of a fixture, for temporary use only, with update_column?
Right now I have the following, which runs fine:
name = names(:one)
role = roles(:one)
name.role_id = role.id
assert name.save
Is there a more efficient way to do it in one line, something like name.update_column(..., ...)? Thanks.
Thanks @richfisher for your answer; later on I figured out another way to do it. update_attributes is not a good idea in tests, because it runs callbacks and validations, and usually we do not want those to run in test cases.
Instead of update_attributes we can use update_column like this:
name.update_column(:role_id, roles(:one).id)
The advantage of update_column is that it does not run callbacks or validations.
name = names(:one)
name.update_attributes(role_id: roles(:one).id)
I have a series of unit tests (all subclasses of TransactionTestCase) spread out through multiple apps in a single Django project. When I run all of them in one go using ./manage.py test an error occurs in one of the tests. But when I run each app's tests individually, one at a time, using ./manage.py test my_project.app_name I get no errors.
The specific error I get is a FieldError in modelform_factory, but my question isn't so much about the specific solution to this error. I'm just curious what possible data/processes/whatever could bleed over between the supposedly-self-contained test cases in Django. Any thoughts?
(For the curious, if I make all my tests subclasses of TestCase (rather than TransactionTestCase) I get a bunch of different errors, but I've chalked those up to some separate issue relating to problems with rolling back the transactions within which Django encapsulates each test case. But who knows, maybe there's a connection?)
Found the answer:
Regardless of how many different test cases are run during one ./manage.py test call, and regardless of how the database is rolled back or truncated (for TestCase and TransactionTestCase, respectively) between tests, all tests run in the same Python thread and against the same imported class objects. Thus any variables stashed on the thread, and any modifications to class definitions, persist across test cases.
In my case, it was both. A view got a user instance out of the currently running thread, but that user had been deleted when a previous test was rolled back. Later, a view modified a list that was declared (as an empty list) on its parent class, altering the parent class's attribute so it was no longer empty and causing problems in a later test.
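The second kind of bleed is easy to reproduce without Django: a mutable class attribute is shared by the class and every subclass, so mutating it in one test changes it for all later ones. The class names here are invented for illustration:

```python
class BaseView:
    items = []  # class-level mutable attribute: ONE list shared by everyone

class MyView(BaseView):
    pass

# "Test 1" appends via the subclass...
MyView.items.append("leaked")

# ...and the parent class (and every other subclass) now sees the change,
# because MyView.items resolves to the very same list as BaseView.items.
print(BaseView.items)  # ['leaked']

# The fix is per-instance state, assigned in __init__:
class SafeView:
    def __init__(self):
        self.items = []  # a fresh list for every instance
```

No amount of database rollback between tests undoes this, since the mutation lives on the class object, not in the database.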