I've been trying to run some unit tests in my Spree application, which involve creating a new Order. The first hurdle I ran into was that countries weren't loading, because seed data hadn't been entered into the test database. A question was posted about it here, if you want extra credit: https://github.com/spree/spree/issues/5308
However, I was able to bypass that issue by inventing a country inside the test, for the sake of testing the rest of my code. I've tried doing the same for a variant, but I keep running into this error:
Error:
VariantTest#test_variant_test:
RuntimeError: No master variant found to infer price
test/models/variant_test.rb:10:in `block in <class:VariantTest>'
I created a second test to see if Variants were getting made at all, and I got the same error message. This is the test I've run:
require 'test_helper'

class VariantTest < ActiveSupport::TestCase
  test "variant test" do
    f = Spree::Variant.new
    f.cost_price = 20
    f.sku = "test"
    f.is_master = true
    f.track_inventory = false
    f.save!
    test1 = Spree::Variant.find_by sku: "test"
    assert_not_nil(test1, "Variant wasn't created")
  end
end
I've tried creating two Variants, one of which is master and one of which is not, and testing the sku for the non-master variant, but I keep getting the exact same error message about the master variant not being found. Am I missing something?
Just to answer your question: you need to set a price for your variant. You will then get a new error because you're missing a product for that variant, and so on.
Believe me, you really want to use the default factories with FactoryGirl; you won't have to lose time reinventing the wheel. Just look at them here, or directly at the variant factory; if you have any questions about them, just ask.
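For what it's worth, here is a minimal sketch of a factory-based version of the test above, assuming Spree's bundled factories are loaded (the require path and test body are illustrative, not a definitive setup):

# test/models/variant_test.rb
require 'test_helper'
require 'spree/testing_support/factories' # assumes spree_core's factories are available

class VariantTest < ActiveSupport::TestCase
  test "factory-built variant is persisted" do
    # The :variant factory builds the associated product (and its master
    # variant), so Spree can infer a price without manual setup.
    FactoryGirl.create(:variant, sku: "test")

    assert_not_nil Spree::Variant.find_by(sku: "test"), "Variant wasn't created"
  end
end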
Recently I changed the db engine from SQLite to PostgreSQL. I successfully migrated the whole db design to PostgreSQL (just a simple makemigrations, migrate). Once I ran the tests, some failed for some unknown reason (the errors suggest that some objects were not created). The failure doesn't apply to all tests, just a select few. Everything had been working before.
I'd start investigating what's going on on a test-by-test basis, but some bizarre behavior has appeared. Let's say my test is in the class MyTestClass, the test is called test_do_something, and MyTestClass contains other tests as well.
When I'm running python manage.py test MyTestClass I'm getting info that test_do_something has failed.
When I'm running python manage.py test MyTestClass.test_do_something everything passes.
On SQLite both ways pass.
I'm assuming that the setUpTestData() and setUp() methods work the same way on SQLite and PostgreSQL. Or don't they?
Any clue why such discrepancy might be happening?
EDIT
I think I've noticed what might be wrong, but I don't understand why. The problem seems to be that the helper function I call to create an object is only executed once, which differs from how it ran on SQLite.
What I mean is that in my tests I have something like this:
def create_object(self):
    self.client.post(reverse('myurl', kwargs={'myargs': arg}))

def test_mytest1(self):
    # Do something
    self.create_object()
    # Do something

def test_mytest2(self):
    # Do something
    self.create_object()
    # Do something

def test_mytest3(self):
    # Do something
    self.create_object()
    # Do something
create_object() appears to be executed for only one of the tests.
I believe I've found the cause of those failures. As it turns out, it wasn't an issue with the one-time execution of the support function as I expected. The problem was the hardcoded IDs I had used for various reasons. It appears that the object I was hoping to see didn't exist.
Let me explain a bit more about what I experienced. E.g. I had a test where I referred to a particular object by passing its ID in the URL kwargs. Before this operation I created the object and passed id=1 in the kwargs, because I assumed that if this was the only object created within this test and setUp(), its ID would be 1. It appears that with PostgreSQL it's not so simple: IDs keep incrementing despite the DB flush, which is completely different behavior from what SQLite provided.
I'd very much appreciate it if someone could provide a more detailed answer as to why this is happening. Is the ID counter not zeroed in PostgreSQL on flush? It would appear so.
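For anyone else hitting this, here is a minimal sketch of the pattern that avoids the problem (MyModel and 'myurl' are placeholder names): read the pk back from the created object instead of hardcoding it.

from django.test import TestCase
from django.urls import reverse  # django.core.urlresolvers on older Django versions

from myapp.models import MyModel  # placeholder model for illustration

class MyTestClass(TestCase):
    def test_do_something(self):
        # PostgreSQL sequences keep counting across test flushes, so the
        # first object created here is not guaranteed to get pk=1.
        # Ask the object for its actual pk instead of hardcoding it.
        obj = MyModel.objects.create(name="example")
        response = self.client.post(reverse('myurl', kwargs={'myargs': obj.pk}))
        self.assertEqual(response.status_code, 200)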
Say I have this model definition:
class Foo(Model):
    ...

class Bar(Model):
    some_m2m_field = ManyToManyField(Foo)
and this code:
bar = Bar.objects.create()
bar.some_m2m_field.set(an_id_array_with_some_invalid_pks)
When I run that normally, the last line will throw an IntegrityError, as it should. However, if I run the same code from a django.test.TestCase, the last line will NOT throw an error. Instead, it waits until the _post_teardown() phase of the test to throw the IntegrityError.
Here's a small project that demonstrates the issue: https://github.com/t-evans/m2mtest
How do I fix that? I suppose that's configurable, but I haven't been able to find it...
Follow-up question:
Ultimately, I need to handle the case when there are bad IDs being passed to the m2m_field.set() method (and I need unit tests that verify that bad IDs are being handled correctly, which is why the delayed IntegrityError in the unit test won't work).
I know I can find the bad IDs by looping over the array and hitting the DB once for each ID. Is there a more efficient way to find the bad IDs, or (better) simply tell the set() method to ignore/drop the bad IDs?
TestCase wraps tests in additional atomic() blocks, compared to TransactionTestCase, so to test specific database transaction behaviour, you should use TransactionTestCase.
I believe an IntegrityError is thrown only when the transaction is committed, as that's the moment the db would find out about missing ids.
In general if you want to test for db exceptions raised during a test, you should use a TransactionTestCase and test your code using:
with self.assertRaises(IntegrityError):
    # do something that gets committed to db
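For example, a minimal sketch against the Foo/Bar models from the question (the import path for the models is an assumption) might look like this:

from django.db import IntegrityError
from django.test import TransactionTestCase

from myapp.models import Bar  # assumed import path for the Foo/Bar models

class BadM2MIdsTest(TransactionTestCase):
    def test_set_with_invalid_pks_raises(self):
        bar = Bar.objects.create()
        an_id_array_with_some_invalid_pks = [999999]  # no Foo row with this pk
        # TransactionTestCase runs without TestCase's wrapping atomic()
        # block, so the constraint violation should surface here rather
        # than at _post_teardown().
        with self.assertRaises(IntegrityError):
            bar.some_m2m_field.set(an_id_array_with_some_invalid_pks)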
See the answer from @dirkgroten for how to fix the unit test issue.
As for the followup question on how to more efficiently eliminate the bad IDs, one way is as follows:
good_ids = Foo.objects.filter(id__in=an_id_array_with_some_invalid_ids).values_list('id', flat=True)
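The resulting ids can then be handed straight to set(), e.g.:
bar.some_m2m_field.set(good_ids)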
I have a Rails 5.0.0.1 app that has models for books and for authors.
In production and development, the pages display as expected, showing book.author.name.
However, in my tests, book.author is valid and as expected, but book.author.name produces the error "ActionView::Template::Error: undefined method `name' for nil:NilClass".
Using byebug, I found that the author_id for the book is set to 459301548, but there is no author with that id.
How does minitest handle things differently such that the error only occurs in the test environment?
I have tried moving things around and trying different ways to populate @books for use in my @books.each do |book| loop, but the only thing that seems to work is removing the reference to book.author.name.
The error only occurs when not logged in, but I need to check my tests to make sure that I don't get it in other cases.
I don't know why this makes a difference, but when I change the variable assignment in my controller from:
@books = Book.paginate(...)
to:
@books = Book.all.paginate(...)
the error goes away.
The pages still seem to show the correct information in development and production either way.
I'm having trouble moving away from django_nose.FastFixtureTestCase to django.test.TestCase (or even the more conservative django.test.TransactionTestCase). I'm using Django 1.7.11 and I'm testing against Postgres 9.2.
I have a TestCase class that loads three fixture files. The class contains two tests. If I run each test individually as a single run (manage test test_file:TestClass.test_name), they each work. If I run them together (manage test test_file:TestClass), I get
IntegrityError: Problem installing fixture '<path>/data.json': Could not load <app>.<Model>(pk=1): duplicate key value violates unique constraint "<app_model_field>_49810fc21046d2e2_uniq"
To me it looks like the db isn't actually getting flushed or rolled back between tests since it only happens when I run the tests in a single run.
I've stepped through the Django code and it looks like they are getting flushed or rolled back -- depending on whether I'm trying TestCase or TransactionTestCase.
(I'm moving away from FastFixtureTestCase because of https://github.com/django-nose/django-nose/issues/220)
What else should I be looking at? This seems like it should be a simple matter and is right within what django.test.TestCase and django.test.TransactionTestCase are designed for.
Edit:
The test class more or less looks like this:
class MyTest(django.test.TransactionTestCase):  # or django.test.TestCase
    fixtures = ['data1.json', 'data2.json', 'data3.json']

    def test1(self):
        return  # I simplified it to just this for now.

    def test2(self):
        return  # I simplified it to just this for now.
Update:
I've managed to reproduce this a couple of times with a single test, so I suspect something in the fixture loading code.
One of my basic assumptions was that my db was clean for every TestCase. Tracing into the Django core code, I found instances where an object (in one case a django.contrib.auth.User) already existed.
I temporarily overrode _fixture_setup() to assert that the db was clean prior to loading fixtures. The assertion failed.
I was able to narrow the problem down to code that was in a TestCase.setUpClass() instead of TestCase.setUp(), so the object was leaking out of the test and conflicting with other TestCase fixtures.
What I don't completely understand is that I thought the db was dropped and recreated between TestCases -- but perhaps that is not correct.
Update: Recent versions of Django include setUpTestData(), which should be used instead of setUpClass().
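A rough sketch of the difference, using django.contrib.auth.User since that was the object leaking in my case (the test body itself is only illustrative):

from django.contrib.auth.models import User
from django.test import TestCase

class MyLeakFreeTest(TestCase):
    @classmethod
    def setUpTestData(cls):
        # Runs once per class, inside the class-wide transaction that
        # TestCase opens, so this User is rolled back when the class
        # finishes and cannot leak into other TestCases' fixtures.
        cls.user = User.objects.create_user('tester', password='secret')

    def test_user_exists(self):
        self.assertTrue(User.objects.filter(username='tester').exists())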
How can I update a column of a fixture, for temporary use only, with the update_column method?
Right now I have the following code, which runs fine:
name = names(:one)
role = roles(:one)
name.role_id = role.id
assert name.save
It works, but is there a more efficient way to do it in one line, something like name.update_column(---, ----)?
Thanks @richfisher for your answer; later on I figured out another way to do it. update_attributes is not a good idea to use in tests, because the problem with update_attributes is:
It runs callbacks and validations
and usually we do not want to run these things in test cases.
Instead of update_attributes we can use update_column, like this:
name.update_column(:role_id, roles(:one).id)
The advantage of using update_column is:
It does not run callbacks or validations.
name = names(:one)
name.update_attributes(role_id: roles(:one).id)