I am using be.janbols.spock.extension.dbunit.
However, the content closure is applied to all test cases.
How do I specify it for each case?
@DbUnit
def content = {
    CATEGORY(CATEGORY_ID: 1L, CATEGORY_NAME: "N", CATEGORY_IMAGE: "I")
}
https://github.com/janbols/spock-dbunit
Yes, you would specify it at the global (class) level of the test case for the class you are testing.
It is generally needed when testing service-layer classes that have many DAOs injected into them.
After the initial content setup, i.e. the values you want to see in your in-memory database such as H2, specify the table schema in the setup method, like this:
def setup() {
    new Sql(dataSource).execute("CREATE TABLE Category(category_id INT PRIMARY KEY, category_name VARCHAR(255), category_image VARCHAR(255))")
}
Do not forget to clean up after each test case.
I'm a big fan of Django-parler, but I've run into a problem when storing a translated model in two different databases.
My model is:
class InstrumentFamily(TranslatableModel):
    primary_key = True
    translations = TranslatedFields(
        label=CharNullField(_('Label'), max_length=100, unique=False, null=True),
    )
I have two database aliases, 'default' and 'test', and my database router directs my model to 'test'.
I insert models in both databases by doing this:
fam = InstrumentFamily(code=TEST_CODE)
with switch_language(fam, 'en'):
    fam.label = "test_family_test EN"
with switch_language(fam, 'fr'):
    fam.label = "test_family_test FR"
fam.save()
which stores the object and its translations in database 'test', or by doing this:
fam = InstrumentFamily(code="TEST_FAM")
with switch_language(fam, 'en'):
    fam.label = "test_family_default_EN"
with switch_language(fam, 'fr'):
    fam.label = "test_family_default_FR"
fam.save(using='default')
which saves the object and its translations to database 'default'. So far, so good.
But when accessing the object previously saved in 'default' by doing this (after properly clearing all caches to force a database read):
fam = InstrumentFamily.objects.using('default').get(code=TEST_CODE)
print(f" label: {fam.label}")
django-parler properly retrieves the object from database 'default', but looks for the translation in database 'test'! (SQL trace below; see the very end of each line):
SELECT "orchestra_instrumentfamily"."id", "orchestra_instrumentfamily"."code" FROM "orchestra_instrumentfamily" WHERE "orchestra_instrumentfamily"."code" = 'TEST_FAM' LIMIT 21; args=('TEST_FAM',); alias=default
SELECT "orchestra_instrumentfamily_translation"."id", "orchestra_instrumentfamily_translation"."language_code", "orchestra_instrumentfamily_translation"."label", "orchestra_instrumentfamily_translation"."master_id" FROM "orchestra_instrumentfamily_translation" WHERE ("orchestra_instrumentfamily_translation"."master_id" = 34 AND "orchestra_instrumentfamily_translation"."language_code" = 'en') LIMIT 21; args=(34, 'en'); alias=test
I'm obviously missing something big... What am I supposed to do to have the 'using("default")' information propagated to the second query? I couldn't find anything in the documentation about storing TranslatableModels in more than one database. Am I trying to achieve something parler does not support?
Thanks in advance for enlightening me!
This looks like a bug in django-parler. It doesn't pass the using information on to the internal queries that retrieve the translation model data. You can file a bug in the GitHub repository so this can be addressed.
A workaround would be to implement a database router that forces a particular database to be used for this model.
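For illustration, a minimal sketch of such a router, assuming the app label is 'orchestra' (taken from the table names in the question) and that you want to pin its models to the 'default' alias:

# settings.py would list it, e.g. DATABASE_ROUTERS = ['myproject.routers.OrchestraRouter']
class OrchestraRouter:
    """Pin all 'orchestra' models, including the parler translation model,
    to the 'default' database."""

    def db_for_read(self, model, **hints):
        if model._meta.app_label == 'orchestra':
            return 'default'
        return None

    def db_for_write(self, model, **hints):
        if model._meta.app_label == 'orchestra':
            return 'default'
        return None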
To make a long story short, I am very grateful for hints on how I can accomplish the following. I have an app A that I don't want to change. I have an app B that needs to select data from A or to request data to be added/changed if necessary. Think of B as an app to suggest data that should end up in A only after review/approval. By itself, B is pretty useless. Furthermore, a significant amount of what B's users will enter needs to be rejected. That's why I want A to be protected, so to say.
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.db import models
from picklefield.fields import PickledObjectField

# in app A
class Some_A_Model(models.Model):  # e.g., think artist
    some_value = models.TextField()

# in app B
class MyCustomField(models.ForeignKey):
    ...

class Some_B_Model(models.Model):  # e.g., think personal preference
    best_A = MyCustomField('Some_A_Model')
    worst_A = MyCustomField('Some_A_Model')
    how_many_times_I_like_the_one_better_than_the_other = models.FloatField()

class Mediator(models.Model):
    # already exists: generic foreign key
    content_type = models.ForeignKey(
        ContentType,
        on_delete=models.CASCADE
    )
    object_id = models.PositiveIntegerField()
    content_object = GenericForeignKey(
        'content_type',
        'object_id'
    )
    # does not yet exist or needs to be changed:
    add_or_change = PickledObjectField()
Django should create a form for Some_B_Model where I can select instances of Some_A_Model for best_A and worst_A, respectively; if, however, my intended best_A is not yet in A's database, I want to be able to request this item to be added. And if I find worst_A is present but has a typo, I want to be able to request this item to be corrected. An editor should be required to review/edit the data entered in B and either reject or release all the associated changes to A's database as an atomic transaction. I don't want any garbage in A, and I want to refrain from adding some status field to track what is considered valid, which would require filtering all the time. If it's in A, it must be good.
I figured I need to define a MyCustomField, which could be a customized ForeignKey. In addition, I need some intermediate model ('mediator' maybe?) that MyCustomField would actually be pointing to and that can hold a (generic) ForeignKey to the item I selected, and a pickled instance of the item I would like to see added to A's database (e.g., a pickled, unsaved instance of Some_A_model), or both to request a change. Note that I consider using PickledObjectField from 'django-picklefield', but this is not a must.
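To make the idea concrete, here is a rough, untested sketch of what I imagine MyCustomField doing, i.e. silently redirecting the relation to the Mediator model:

class MyCustomField(models.ForeignKey):
    # Rough sketch only: accept the name of the A model for readability,
    # but actually relate to Mediator behind the scenes.
    def __init__(self, to, **kwargs):
        kwargs.setdefault('on_delete', models.PROTECT)
        super().__init__('Mediator', **kwargs)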
As there is only some documentation on custom model fields, but not on the further steps regarding form fields and widgets, it seems I have to dig through Django's source to find out how to tie my intended functionality into its magic. That's where I am hoping for some comments and hints. Does my plan sound reasonable to you? Is this a known pattern, and if so, what is it called? Maybe someone has already done this, or there is a plugin I could look into? What alternatives would you consider?
Many thanks in advance!
Best regards
This is a very specific question regarding Flask-AppBuilder. During development I found that FAB's ModelView is suitable for the admin role, but I need more user-level logic handlers/views for complex designs.
There is a many-to-many relationship between devices and users, since each device can be shared between many users, and each user can own many devices. So there is a secondary table called accesses, which describes the access control between devices and users. In this table I added an "isHost" flag to indicate whether the user owns the device. Therefore we have two roles: host and (regular) user. However, these are not two roles in the sense other applications define them, since one person can be both host and user at the same time. In a very simple application, forcing the user to switch between two roles is not very convenient; that only makes things worse.
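For reference, a simplified sketch of that accesses table as a model (the column types and the device table name are guesses/simplifications; the attribute names match the Access usage further below):

from flask_appbuilder import Model
from sqlalchemy import Boolean, Column, ForeignKey, Integer

class Access(Model):
    id = Column(Integer, primary_key=True)
    account_id = Column(Integer, ForeignKey('ab_user.id'))  # FAB user table
    device_id = Column(Integer, ForeignKey('device.id'))
    is_host = Column(Boolean, default=False)  # True if this user owns the device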
Anyway, I need to design some custom handlers with traditional Flask/Jinja2 templates. For example:
class PageView(ModelView):
    # FAB default URL: "/pageview/list"
    datamodel = SQLAInterface(Page)
    list_columns = ['name', 'date', 'get_url']

    @expose("/p/<string:url>")
    def p(self, url):
        title = urllib.unquote(url)
        r = db.session.query(Page).filter_by(name=title).first()
        if r:
            md = r.markdown
            parser = mistune.Markdown()
            body = parser(md)
            return self.render_template('page.html', title=title, body=body)
        else:
            return self.render_template('404.html'), 404
The markdown page URL above is simple, since it is a separate UI. But when I go to DeviceView/AccountView/AccessView for list/show/add/edit operations, I realize that I need a consistent style of UI.
So, how can I reuse the existing templates/widgets of FAB with custom SQLAlchemy queries? Here is my code for DeviceView.
class DeviceView(ModelView):
    datamodel = SQLAInterface(Device)
    related_views = [EventView, AccessView]
    show_template = 'appbuilder/general/model/show_cascade.html'
    edit_template = 'appbuilder/general/model/edit_cascade.html'

    @expose('/host')
    @has_access
    def host(self):
        base_filters = [['name', FilterStartsWith, 'S']]
        # if there is no return, FAB will throw an error
        return "host view:{}".format(repr(base_filters))

    @expose('/my')
    @has_access
    def my(self):
        # a pure testing method
        rec = db.session.query(Access).filter_by(id=1).all()
        if rec:
            for r in rec:
                print "rec, acc:{}, dev:{}, host:{}".format(r.account_id, r.device_id, r.is_host)
            return self.render_template('list.html', title="My Accesses", body="{}".format(repr(r)))
        else:
            return repr(None)
Besides raw SQLAlchemy code combined with render_template(), I guess base_filters could also help to define custom queries; however, I have no idea how to get the query results and have them rendered.
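From the documentation I suspect the class-level usage looks roughly like the sketch below, but I do not see how to combine it with my own per-user queries (the view name here is just an illustration):

from flask_appbuilder.models.sqla.filters import FilterStartsWith

class FilteredDeviceView(ModelView):
    # base_filters applied at class level, so the stock FAB list/show/add/edit
    # templates render an already-filtered query
    datamodel = SQLAInterface(Device)
    base_filters = [['name', FilterStartsWith, 'S']]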
Please give me some reference code or an example if possible. I have actually grepped for the keywords "db.session/render_template/expose" in FAB's GitHub sources, but no luck.
Good afternoon,
I have my Django server running with a REST API on top to serve my mobile devices. Now, at some point, a mobile device will communicate with Django.
Let's say the device is asking Django to add an object in the database, and within that object, I need to set a FK like this:
objectA = ObjectA.objects.create(title=title,
                                 category_id=c_id, order=order, equipment_id=e_id,
                                 info_maintenance=info_m, info_security=info_s,
                                 info_general=info_g, alphabetical_notation=alphabetical_notation,
                                 allow_comments=allow_comments,
                                 added_by_id=user_id,
                                 last_modified_by_id=user_id)
If e_id and c_id are received from my mobile devices, should I check that they actually still exist in the DB before calling this creation? That is two extra queries... but if they can avoid any problems, I don't mind!
Thanks a lot!
I think that Django creates a foreign key constraint by default (it might depend on the database, though). This means that if your foreign keys point to something that does not exist, saving will fail (resulting in an exception on the Python side).
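For example, a minimal sketch that relies on that constraint instead of pre-checking (the field names are taken from the question; whether the exception is actually raised depends on the database enforcing the constraint):

from django.db import IntegrityError

try:
    objectA = ObjectA.objects.create(title=title,
                                     category_id=c_id,
                                     equipment_id=e_id)  # other fields omitted
except IntegrityError:
    # c_id or e_id does not reference an existing row; reject the request
    objectA = None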
You can reduce it to a single query (it should be a single query at least; warning, I haven't tested the code):
if MyObject.objects.filter(id__in=[e_id, c_id]).distinct().count() == 2:
    # create the object
    ObjectA.objects.create(...)
else:
    # objects corresponding to e_id and c_id do not exist, do NOT create ObjectA
    pass
You should always validate any information that's coming from a user or that can be altered by a determined user. It wouldn't be difficult for someone to sniff the traffic and start constructing their own REST requests to your server. Always clean and validate external data that's being added to the system.
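As a sketch of that kind of explicit validation (Category and Equipment are assumed model names, based on the category_id and equipment_id fields in the question):

ids_are_valid = (Category.objects.filter(pk=c_id).exists()
                 and Equipment.objects.filter(pk=e_id).exists())
if not ids_are_valid:
    # the client sent IDs that do not (or no longer) exist; reject the request
    raise ValueError("invalid category or equipment id")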
I want to make sure I am testing Models/Objects in isolation and not as one huge system.
If I have an Order object and it has Foreign Keys to Customers, Payments, OrderItems, etc. and I want to test Order functionality, I need to create fixtures for all of that related data, or create it in code. I think what I really need to be doing is mocking out the other items, but I don't see an easy (or possible) solution for that if I am doing queries on these Foreign Keys.
The common solutions (fixtures) don't really let me test one Object at a time. I am sure this is partly caused by my app being way over coupled.
I am trying my darndest to adopt TDD as my main method of working, but the way things work with Django, it seems you can either run very trivial unit tests, or these massive integration tests.
[Edit] A better, more explicit example and some more humility
What I mean is that I seem to be able to run only trivial unit tests. I have seen people with very well-tested and granular modules. I am certain some of this can be traced back to poor design.
Example:
I have a model called Upsell which is linked to a Product model. Then I have a Choices model whose instances are children of Upsell (do you want what's behind door #1, #2, or #3?).
The Upsell model has several methods that derive the items necessary to render the template from its choices. The most important one creates a URL for each choice, which it does through some string mangling, etc. When I test the Upsell.get_urls() method, I want it not to depend on the values of the Choices in the fixtures, and not to depend on the value of the Product in the fixtures.
Right now I populate the DB in the setUp method for the tests, and that works well with the way Django rolls back the transaction every time, but only outside of setUp and tearDown. This works fairly well, except that some of the models are fairly complex to set up while I actually only need one attribute from them.
I can't give you an example of that, since I can't accomplish it, but here is the type of thing I am doing now. Basically I input an entire order, create the A/B experiment it was attached to, etc., and that's not counting Product, Categories, etc. all set up by fixtures. It's not the extra work I am concerned about; the problem is that I can't even test one database-backed object at a time. The tests below are important, but they are integration tests. I would like to build up to something like this by testing each item separately. As you pointed out, maybe I shouldn't have chosen a framework so closely tied to the DB. Does any sort of dependency injection exist for something like this? (not just for my testing, but for the code itself as well)
class TestMultiSinglePaySwap(TestCase):
    fixtures = ['/srv/asm/fixtures/alchemysites.json', '/srv/asm/fixtures/catalog.json',
                '/srv/asm/fixtures/checkout_smallset.json', '/srv/asm/fixtures/order-test-fixture.json',
                '/srv/asm/fixtures/offers.json']

    def setUp(self):
        self.o = Order()
        self.sp = SiteProfile.objects.get(pk=1)
        self.c = Customer.objects.get(pk=1)
        signals.post_save.disconnect(order_email_first, sender=Order)
        self.o.customer = self.c
        p = Payment()
        p.cc_number = '4444000011110000'
        p.cc_exp_month = '12'
        p.cc_type = 'V'
        p.cc_exp_year = '2020'
        p.cvv2 = '123'
        p.save()
        self.o.payment = p
        self.o.site_profile = self.sp
        self.o.save()
        self.initial_items = []
        self.main_kit = Product.objects.get(pk='MOA1000D6')
        self.initial_items.append(self.main_kit)
        self.o.add_item('MOA1000D6', 1, False)
        self.item1 = Product.objects.get(pk='MOA1041A-6')
        self.initial_items.append(self.item1)
        self.o.add_item('MOA1041A-6', 1, False)
        self.item2 = Product.objects.get(pk='MOA1015-6B')
        self.initial_items.append(self.item2)
        self.o.add_item('MOA1015-6B', 1, False)
        self.item3 = Product.objects.get(pk='STP1001-6E')
        self.initial_items.append(self.item3)
        self.o.add_item('STP1001-6E', 1, False)
        self.swap_item1 = Product.objects.get(pk='MOA1041A-1')

    def test_single_pay_swap_wholeorder(self):
        o = self.o
        swap_all_skus(o)
        post_swap_order = Order.objects.get(pk=o.id)
        swapped_skus = ['MOA1000D', 'MOA1041A-1', 'MOA1015-1B', 'STP1001-1E']
        order_items = post_swap_order.get_all_line_items()
        self.assertEqual(order_items.count(), 4)
        pr1 = Product()
        pr1.sku = 'MOA1000D'
        item = OrderItem.objects.get(order=o, sku='MOA1000D')
        self.assertTrue(item.sku.sku == 'MOA1000D')
        pr2 = Product()
        pr2.sku = 'MOA1015-1B'
        item = OrderItem.objects.get(order=o, sku='MOA1015-1B')
        self.assertTrue(item.sku.sku == 'MOA1015-1B')
        pr1 = Product()
        pr1.sku = 'MOA1041A-1'
        item = OrderItem.objects.get(order=o, sku='MOA1041A-1')
        self.assertTrue(item.sku.sku == 'MOA1041A-1')
        pr1 = Product()
        pr1.sku = 'STP1001-1E'
        item = OrderItem.objects.get(order=o, sku='STP1001-1E')
        self.assertTrue(item.sku.sku == 'STP1001-1E')
Note that I have never actually used a Mock framework though I have tried. So I may also just be fundamentally missing something here.
Look into model_mommy. It can automagically create objects with foreign keys.
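For example, a minimal sketch with mommy.make (the 'myapp' app label and the pinned field are illustrative):

from model_mommy import mommy

# creates and saves an Order, automatically generating values for required
# fields and for related objects such as Customer and Payment
order = mommy.make('myapp.Order')

# attributes the test actually cares about can still be set explicitly
open_order = mommy.make('myapp.Order', status='open')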
This will probably not answer your question but it may give you food for thought.
In my opinion, when you are testing a database-backed project or application, there is a limit to what you can mock. This is especially so when you are using a framework and an ORM such as the one Django offers. In Django there is no distinction between the business model class and the persistence model class. If you want such a distinction, then you'll have to add it yourself.
Unless you are willing to add that additional layer of complexity yourself, it becomes tricky to test the business objects alone without having to add fixtures, etc. If you must do so, you will have to tackle some of the auto-magic voodoo done by Django.
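As a sketch of what adding that distinction yourself could look like, using the Upsell example from the question (the method and attribute names are assumptions):

# keep the string mangling in a plain function that takes plain values, so it
# can be unit tested without model instances or a database
def build_choice_url(product_slug, choice_slug):
    return "/upsell/{0}/{1}/".format(product_slug, choice_slug)

class Upsell(models.Model):
    # ... fields ...
    def get_urls(self):
        # the thin ORM-facing wrapper stays on the model
        return [build_choice_url(self.product.slug, choice.slug)
                for choice in self.choices.all()]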
If you do choose to grit your teeth and dig in, then Michael Foord's Python Mock library will come in quite handy.
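An illustrative sketch of that kind of mocking, again using the Upsell example (the import path and the product/choices attributes are assumptions):

from mock import patch, MagicMock  # 'unittest.mock' in newer Python versions
from myapp.models import Upsell    # hypothetical import path

# get_urls() is assumed to read self.product and self.choices; both are
# patched, so no fixtures or database rows are needed
fake_choice = MagicMock(slug='door-1')
with patch('myapp.models.Upsell.product', MagicMock(slug='some-product')), \
        patch('myapp.models.Upsell.choices') as mock_choices:
    mock_choices.all.return_value = [fake_choice]
    upsell = Upsell()          # unsaved instance, nothing touches the database
    urls = upsell.get_urls()   # exercises only the URL string mangling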
I am trying my darndest to adopt TDD as my main method of working, but the way things work with Django, it seems you can either run very trivial unit tests, or these massive integration tests.
I have used Django's unit testing mechanism to write non-trivial unit tests. My requirements were doubtless very different from yours. If you can provide more specific details about what you are trying to accomplish, then users here will be able to suggest other alternatives.