Is it correct that in web2py you cannot create custom methods within "models", so that they can contain the business logic you want your models to implement?
In Django you can just do something like:
class Aircraft(models.Model):
    '''I am an aircraft. I can fly, if I am created in Django.
    '''
    name = models.CharField(max_length=20)

    def fly(self):
        # ... some advanced logic here ...
        return 'I am flying'
But is it possible to do something like that (create custom methods) in web2py without having to write a whole ORM system from scratch, and without sharing a single method between the rows of all tables? Is there an established way to do this? For example:
db.define_table("aircrafts",
    Field("name", type="string", length=20)
)
aircraft = db(db.aircrafts).select().first()
# I am an aircraft too, please make me fly
aircraft.fly()
Yes, you can define virtual fields:
db.aircrafts.fly = Field.Virtual(lambda row: 'I am flying')
aircraft = db(db.aircrafts).select().first()
print aircraft.fly
or
db.aircrafts.fly = Field.Lazy(lambda row: 'I am flying')
aircraft = db(db.aircrafts).select().first()
print aircraft.fly()
In the first example above, the "fly" value is automatically calculated for all records when they are selected. In the second example, the calculation is lazy and only executed when .fly() is actually called on a specific record.
You can also do this with old style virtual fields, which may be better for complex functions.
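For example, a minimal sketch of the old-style approach (the class and method names here are illustrative, not taken from your code): you define a container class whose methods compute values from the row, then append an instance of it to the table's virtualfields list:
class AircraftExtras(object):
    def fly(self):
        # inside an old-style virtual field, the row is reached via self.<tablename>
        return 'I am flying, my name is %s' % self.aircrafts.name

db.aircrafts.virtualfields.append(AircraftExtras())

aircraft = db(db.aircrafts).select().first()
print aircraft.fly  # computed when the rows were selected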
Note, this is handled differently from Django because web2py uses a database abstraction layer (DAL) rather than an ORM. Tables are not modeled as custom classes but as instances of the DAL Table class.
Related
I've been trying to create entities with a property that should be unique or None, something similar to:
class Thing(ndb.Model):
    something = ndb.StringProperty()
    unique_value = ndb.StringProperty()
Since ndb has no way to specify that a property should be unique, it is only natural that I do this manually, like so:
def validate_unique(the_thing):
    if the_thing.unique_value and Thing.query(Thing.unique_value == the_thing.unique_value).get():
        raise NotUniqueException
This works like a charm until I want to do it inside an ndb transaction, which I use for creating/updating entities. Like:
@ndb.transactional
def create(the_thing):
    validate_unique(the_thing)
    the_thing.put()
However, ndb seems to only allow ancestor queries inside transactions, and the problem is that my model does not have an ancestor/parent. I could do the following to prevent this error from popping up:
@ndb.non_transactional
def validate_unique(the_thing):
    ...
This feels a bit out of place: declaring something to be a transaction and then having one (important) part of it done outside of the transaction. I'd like to know if this is the way to go or if there is a (better) alternative.
Also some explanation as to why ndb only allows ancestor queries would be nice.
Since your uniqueness check involves a (global) query, it is subject to the datastore's eventual consistency, which means it won't work reliably: the query might not detect freshly created entities.
One option would be to switch to an ancestor query (or some other strongly consistent method), if your expected usage allows such a data architecture; the datastore documentation on eventual consistency has more details.
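A rough sketch of that option, assuming you can give every Thing a shared synthetic parent (the ThingRoot kind and key name below are made up for illustration); note that keeping all entities in one entity group limits write throughput for that group:
THING_ROOT = ndb.Key('ThingRoot', 'root')  # hypothetical shared ancestor

@ndb.transactional
def create(the_thing):
    # the_thing must have been constructed with parent=THING_ROOT
    if the_thing.unique_value and Thing.query(
            Thing.unique_value == the_thing.unique_value,
            ancestor=THING_ROOT).get():
        raise NotUniqueException
    the_thing.put()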
Another option is to use an additional piece of data as a temporary cache, in which you'd store a list of all newly created unique values for "a while" (giving them ample time to become visible to the global query) and which you'd check in validate_unique() in addition to the query result. This would allow you to make the query outside the transaction and only enter the transaction if uniqueness is still possible; the ultimate check is then the manual check of the cache, inside the transaction (i.e. no query inside the transaction).
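A very rough sketch of the idea using memcache as the temporary store (the cache key and time window are arbitrary, and memcache itself is best-effort, so treat this as an illustration rather than a finished recipe):
from google.appengine.api import memcache

RECENT_UNIQUES = 'recent_unique_values'  # hypothetical cache key
RECENT_WINDOW = 600                      # "a while", in seconds

def create(the_thing):
    validate_unique(the_thing)           # global query, outside the transaction
    _transactional_create(the_thing)

@ndb.transactional
def _transactional_create(the_thing):
    # final check against the cache only - no query inside the transaction
    recent = memcache.get(RECENT_UNIQUES) or []
    if the_thing.unique_value:
        if the_thing.unique_value in recent:
            raise NotUniqueException
        memcache.set(RECENT_UNIQUES, recent + [the_thing.unique_value], time=RECENT_WINDOW)
    the_thing.put()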
A third option exists (with some extra storage consumption as the price), based on the datastore's enforcement of unique entity IDs for a given entity kind under the same parent (or with no parent at all). You could have a model like this:
class Unique(ndb.Model):  # will use the unique values as the specified entity IDs!
    something = ndb.BooleanProperty(default=False)
which you'd use like this (the example uses a Unique parent key, which allows re-using the model for multiple properties with unique values; you can drop the parent altogether if you don't need it):
@ndb.transactional(xg=True)  # cross-group: Unique and the_thing live in different entity groups
def create(the_thing):
    if the_thing.unique_value:
        parent_key = get_unique_parent_key()
        exists = Unique.get_by_id(the_thing.unique_value, parent=parent_key)
        if exists:
            raise NotUniqueException
        Unique(id=the_thing.unique_value, parent=parent_key).put()
    the_thing.put()
from google.appengine.api import memcache

def get_unique_parent_key():
    parent_id = 'the_thing_unique_value'
    parent_key = memcache.get(parent_id)
    if not parent_key:
        parent = Unique.get_by_id(parent_id)
        if not parent:
            parent = Unique(id=parent_id)
            parent.put()
        parent_key = parent.key
        memcache.set(parent_id, parent_key)
    return parent_key
I am designing a contact relationship application that needs to store contacts in groups. Basically I have 7 "group types" (simplified to 3 for my image). Each group type shares the same fields, so I thought it would make sense to use an abstract "group" and let all group types inherit the methods from this abstract group.
So this is basically the idea:
However, this approach results in a couple of unexpected difficulties. For example:
I am not able to point a ForeignKey at an abstract class, so if I want to model a relationship between a group and a contact, I have to use the following approach:
limit = (models.Q(app_label='groups', model="Group type A") |
         models.Q(app_label='groups', model="Group type B") |
         models.Q(app_label='groups', model="Group type C")
         )

group_type = models.ForeignKey(ContentType, limit_choices_to=limit)
group_id = models.PositiveIntegerField()
group = GenericForeignKey('group_type', 'group_id')
This seems quite hacky, and with this approach I am forced to do some hard-coding as well. I am not able to fetch all groups with a simple query, and a new group type may be added in the future.
Is there a better approach to model a relationship like this? Am I using the abstract class completely wrong?
Edit: some extra explanation in response to the questions.
A user is connected to a group with another object called "WorkRelation", because there is some extra data that is relevant when assigning a user to a group (for example his function).
I initially went for an abstract class because I thought that this would give me the flexibility to get all group types by just calling Group.objects.all(). If I would use a base model, the groups aren't connected and I would also have to hard-code all the group names.
Since your child models do not have additional fields, you can make them proxy models of the base group model. Proxy models do not create new database tables, they just allow having different programmatic interfaces over the same table.
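A rough sketch of what that could look like (the field names are placeholders, not taken from your models):
class BaseGroup(models.Model):               # concrete base: one table holds every group
    name = models.CharField(max_length=100)  # shared fields go here

class GroupTypeA(BaseGroup):
    class Meta:
        proxy = True                         # no extra table, just a different Python class

class GroupTypeB(BaseGroup):
    class Meta:
        proxy = True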
You could then define your ForeignKey to the base group model:
group = ForeignKey(BaseGroup)
Use django-polymodels or a similar app to have the groups cast to the right type when queried.
More on model inheritance in the docs.
Why not use a concrete base model instead of an abstract one? Then you just point contacts at the base model with either a ForeignKey or a ManyToManyField.
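A minimal sketch of that approach, assuming the WorkRelation object from your edit carries the extra data (field and model names here are illustrative):
class Group(models.Model):                   # concrete base: gets its own table
    name = models.CharField(max_length=100)  # shared fields live here

class GroupTypeA(Group):                     # multi-table inheritance
    pass

class GroupTypeB(Group):
    pass

class WorkRelation(models.Model):
    contact = models.ForeignKey('Contact')   # assumed contact model
    group = models.ForeignKey(Group)         # any group type can be referenced here
    function = models.CharField(max_length=100)
Group.objects.all() then returns every group regardless of its concrete type.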
I have a scenario where a user needs to enter a type of contribution. It can be cash or material. Based on the contribution type, I need to store either the cash in an IntegerField or the material in a CharField. How can I do this without having two fields in the model and always leaving one of them empty?
class Contribution(models.Model):
    CONTRIBUTION_TYPE_CASH = "cash"
    CONTRIBUTION_TYPE_MATERIAL = "material"
    CONTRIBUTION_TYPE_CHOICES = [
        (CONTRIBUTION_TYPE_CASH, _("cash")),
        (CONTRIBUTION_TYPE_MATERIAL, _("material"))
    ]

    contributor = models.ForeignKey(Contributor, related_name="donor", verbose_name=_("contributor"))
    type = models.CharField(max_length=20, choices=CONTRIBUTION_TYPE_CHOICES, verbose_name=_("contribution type"))
First variant: keep a single CharField and make sure you properly validate the input depending on the type. You will have to deal with strings all the time, even when the actual value is a number.
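A minimal sketch of that validation (the value field name and error message are made up for illustration):
from django.core.exceptions import ValidationError

class Contribution(models.Model):
    # ... constants, contributor and type as in the question ...
    value = models.CharField(max_length=100, verbose_name=_("value"))

    def clean(self):
        # a cash contribution must parse as a whole number, even though it is stored as text
        if self.type == self.CONTRIBUTION_TYPE_CASH and not self.value.isdigit():
            raise ValidationError(_("A cash contribution must be a whole number."))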
Second variant: use model inheritance and define two different models, one for material contributions and another for cash contributions. You can use an abstract parent, in which case you'd have to manually merge the two querysets to get a global contribution list. Or you could use a concrete parent and a third-party package such as django_polymorphic to seamlessly deal with inherited instances. Either way you'd have to create the appropriate model instance in your backend, even if you use the same dynamic form in your frontend.
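A sketch of the abstract-parent variant (field names are placeholders; note the %(class)s related_name, which avoids reverse-accessor clashes between the two children):
class BaseContribution(models.Model):
    contributor = models.ForeignKey(Contributor, related_name="%(class)s_donor",
                                    verbose_name=_("contributor"))

    class Meta:
        abstract = True

class CashContribution(BaseContribution):
    amount = models.IntegerField(verbose_name=_("amount"))

class MaterialContribution(BaseContribution):
    material = models.CharField(max_length=100, verbose_name=_("material"))
A global list then requires merging, e.g. list(CashContribution.objects.all()) + list(MaterialContribution.objects.all()).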
In a nutshell: my models are B --> A <-- C, and I want to filter Bs for which at least one C exists that satisfies some arbitrary conditions and is related to the same A as that B. Help with some complicating factors (see below) is also appreciated.
Details:
I'm trying to create a generic model to limit user access to rows in other models. Here's a (simplified) example:
class CanRead(models.Model):
    user = models.ForeignKey(User)
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    content_object = generic.GenericForeignKey('content_type', 'object_id')

class Direct(models.Model):
    ...

class Indirect(models.Model):
    direct = models.ForeignKey(Direct)
    ...

class Indirect2(models.Model):
    indirect = models.ForeignKey(Indirect)
    ...
It's not feasible to associate a CanRead with every row in every model (too costly in space), so only some models are expected to have that association (like Direct above). In this case, here's how I'd check whether a Direct is accessible to a user or not:
Direct.objects.filter(Q(canread__user=current_user), rest_of_query)
(Unfortunately, this query won't work - in 1.2.5 at least - because of the generic fk; any help with this would be appreciated, but there are workarounds; the real issue is what follows next.)
The accessibility of the others will be dictated by their relations to other models. So, Indirect will be accessible to a user if its direct is accessible, and Indirect2 will be if its indirect__direct is, etc.
My problem is, how can I do this query? I'm tempted to write something like:
Indirect.objects.filter(Q(canread__content_object=F('direct'), canread__user=current_user), rest_of_query)
Indirect2.objects.filter(Q(canread__content_object=F('indirect__direct'), canread__user=current_user), rest_of_query)
but that doesn't work (Django expects a relation between CanRead and Indirect - which doesn't exist - for the reverse query to work). If I were writing it directly in SQL, I would do something like:
SELECT *
FROM indirect i
JOIN direct d ON i.direct = d.id
JOIN canread c ON c.object_id = d.id
WHERE
    c.content_type = <<content type for Direct>> AND
    c.user = <<current user>> AND
    <<rest_of_query>>
but I can't translate that query to Django. Is it possible? If not, what would be the least intrusive way of doing it (using as little raw SQL as possible)?
Thanks for your time!
Note: The workaround mentioned would be not to use generic fk... :( I could discard the CanRead model and have many CanReadDirect, CanReadDirect2, CanReadDirect3, etc. It's a minor hassle, but wouldn't hinder my project too much.
For the simple case you've given, the solution is simple:
B.objects.filter(a__c__isnull=False)
For the actual query, here's my try:
Indirect.objects.filter(direct__id__in=
    zip(*CanRead.objects.filter(
        content_type=ContentType.objects.get_for_model(Direct)
    ).values_list('object_id'))[0])  # object_id holds the pk of the related Direct
But this way is very slow: you extract IDs from one queryset, then do a query with
where id in (1, 2, 3, ... 10000)
Which is VERY SLOW. We had a similar issue with joins on generic foreign keys in our project and decided to resort to raw queries in the model manager.
class DirectManager(Manager):
    def can_edit(self, user):
        return self.raw(...)
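As a rough illustration only (the table and column names below are guesses based on the SQL in the question, not the actual schema), such a manager method might look like:
class IndirectManager(models.Manager):
    def readable_by(self, user):  # hypothetical method name
        return self.raw("""
            SELECT i.*
            FROM app_indirect i
            JOIN app_direct d ON i.direct_id = d.id
            JOIN app_canread c ON c.object_id = d.id
            WHERE c.content_type_id = %s AND c.user_id = %s
        """, [ContentType.objects.get_for_model(Direct).pk, user.pk])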
I'd also recommend checking out the per-row permissions framework in Django 1.3.
Access control models are not that simple...
Use a well-known access control model such as:
DAC/MAC
or
RBAC
There is also a project called django-rbac.
I'm building a food logging database in Django and I've got a query related problem.
I've set up my models to include (among other things) a Food model connected to the User model through an M2M-field "consumer" via the Consumption model. The Food model describes food dishes and the Consumption model describes a user's consumption of Food (date, amount, etc).
class Food(models.Model):
    food_name = models.CharField(max_length=30)
    consumer = models.ManyToManyField("User", through="Consumption")

class Consumption(models.Model):
    food = models.ForeignKey("Food")
    user = models.ForeignKey("User")
I want to create a query that returns all Food objects ordered by the number of times that Food object appears in the Consumption table for that user (the number of times the user has consumed the food).
I'm trying something along the lines of:
Food.objects.all().annotate(consumption_times=Count('consumer')).order_by('consumption_times')
But this will of course count all Consumption objects related to the Food object, not just the ones associated with the user. Do I need to change my models or am I just missing something obvious in the queries?
This is a pretty time-critical operation (among other things, it's used to fill an autocomplete field in the frontend) and the Food table has a couple of thousand entries, so I'd rather do the sorting on the database end than use the brute-force method of iterating over the results, doing:
Consumption.objects.filter(food=food, user=user).count()
and then sorting them in Python. I don't think that method would scale very well as the user base grows, and I want to make the database design as future-proof as I can from the start.
Any ideas?
Perhaps something like this?
Food.objects.filter(consumer__user=user)\
    .annotate(consumption_times=Count('consumer'))\
    .order_by('consumption_times')
I am having a very similar issue. Basically, I know that the SQL query you want is:
SELECT food.*, COUNT(IF(consumption.user_id=123,TRUE,NULL)) AS consumption_times
FROM food LEFT JOIN consumption ON (food.id=consumption.food_id)
GROUP BY food.id
ORDER BY consumption_times;
What I wish is that you could mix aggregate functions and F expressions, annotate F expressions without an aggregate function, have a richer set of operations/functions for F expressions, and have virtual fields that are basically an automatic F expression annotation. Then you could do:
Food.objects.annotate(consumption_times=Count(If(F('consumer')==user, True, None)))\
    .order_by('consumption_times')
Also, being able to more easily add your own complex aggregate functions would be nice, but in the meantime, here's a hack that adds an aggregate function to do this.
from django.db.models import aggregates, sql

class CountIf(sql.aggregates.Count):
    sql_template = '%(function)s(IF(%(field)s=%(equals)s,TRUE,NULL))'

sql.aggregates.CountIf = CountIf

consumption_times = aggregates.Count('consumer', equals=user.id)
consumption_times.name = 'CountIf'
rows = Food.objects.annotate(consumption_times=consumption_times)\
    .order_by('consumption_times')