There is an Institution entity, and institutions can be deleted. After deletion, however, the row is not removed from the database; instead a "deleted" column, which defaults to NULL, is set to the date and time of deletion.
The project is already well advanced and institutions are queried in many places. The problem is that not all of those queries filter on the "deleted" value.
To avoid manually adding an extra WHERE clause everywhere, I have been searching for a way to add a default condition to all institution repository queries so that only institutions whose "deleted" value is NULL are returned, but nothing I found really helped.
I want to add fields from the "Customer Bank Account" table to the "Bank Receipt Journal" page, whose current source table is "Gen. Journal Line".
It would be helpful to know the process for adding fields as a lookup from a different table source.
I guess it depends on your requirements:
If you think the additional fields may also be used on other pages that have "Gen. Journal Line" as their SourceTable, I would consider adding FlowFields with the "Lookup" method to your "Gen. Journal Line" table (assuming you have sufficient permissions to do so). In your response, please elaborate on how you want your lookup logic to work for further assistance, if needed.
If your lookup logic is more complex than FlowFields can handle, a function on the "Gen. Journal Line" table that returns the relevant field value may be a good solution.
Alternatively, if this is the only page where you will need your new fields, or if you don't have permissions to modify table definitions, define a function in your page object that performs the lookup and returns the value. Use this function as the SourceExpr of your page control.
My application creates several rows of data per customer per day. Each row is modified as necessary using a form. Several modifications to a row may take place daily. At the end of the day the customer will "commit" the changes, at which point no further changes will be allowed. In each row I have a 'stage' field: stage=1 allows edits, stage=2 is committed, no further changes allowed.
How can I update the stage value to 2 on commit?
In my model I have:
@property
def commit_stage(self):
    self.stage = 2
    self.save()
Is this the correct way to do this? And if so, how do I attach this function to a "commit" button?
I suspect you are confused about what properties do. You should absolutely not attach this sort of functionality to a property. It would be extremely surprising behaviour for something which is supposed to retrieve a value from an instance to instead modify it and save it to the db.
You can of course put this in a standard model method. But it's so trivial there is no point in doing so.
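For reference, such a method would look roughly like this (a minimal sketch; the Entry model name is an assumption, since the question doesn't name its model):

from django.db import models

class Entry(models.Model):                   # hypothetical model holding the daily rows
    stage = models.IntegerField(default=1)   # 1 = editable, 2 = committed

    def commit(self):
        # Mark this row as committed and persist it.
        self.stage = 2
        self.save()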
In terms of "attaching it to a button", nothing in Django can be called from the front-end without a URL and a view. You need a view that accepts the ID of the model instance from a POST request, gets the instance and modifies its stage value, then saves it. There is nothing different from the form views you already use.
So I have a list of unique pupils (pupil is the primary key in an LDAP database), each with an associated teacher, which can be the same for several pupils.
There is a box in an edit form for each teacher's pupils, where a user can add/remove a pupil, and then the database is updated accordingly using the function below (teacher is the teacher associated with the edit page form, and updated_list is a list of the pupils' names that has been submitted and passed to this function):
def update_pupils(teacher, updated_list):
    old_pupils = Pupil.objects.filter(teacher=teacher)
    for pupils in old_pupils:
        if pupil.name not in updated_list:
            pupil.delete()
        else:
            updated_list.remove(pupil.name)
    for pupil in updated_list:
        if not Pupil.objects.filter(name=name):
            new_pupil = pupil(name=name, teacher=teacher)
            new_pupil.save()
As you can see, the function basically takes the old pupil list for the teacher, looks at each one and, if it is not in our new updated_list, deletes it from the database. For the ones we keep, we remove their names from updated_list, meaning the names left are the newly added ones, which we then iterate over and save.
Now ideally, I would like to access the database as infrequently as possible if that makes sense. So can I do any of the following?
In the initial iteration, can I simply mark those pupils up for deletion and potentially do the deleting and saving together, at a later date? I know I can bulk delete items but can I somehow mark those which I want to delete, without having to access the database which I know can be expensive if the number of deletions is going to be high...and then delete a lot at once?
In the second iteration, is it possible to create the various instances and then save them all in one go? Again, I see in Django 1.4 that you can use bulk_create but then how do you save these? Plus, I'm actually using Django 1.3 :(...
I am kinda assuming that the above steps would actually help with the performance of the function?...But please let me know if that's not the case.
I have of course been reading this: https://docs.djangoproject.com/en/1.3/ref/models/querysets/
First, in this line
if not Pupil.objects.filter(name=name):
It looks like the name variable is undefined, no?
Then here is a shortcut for your code, I think:
def update_pupils(teacher, updated_list):
    # Step 1 : delete all the not-updated objects for this teacher
    Pupil.objects.filter(teacher=teacher).exclude(name__in=updated_list).delete()
    # Step 2 : update
    # either: for each name, if an object with this name exists, update its
    # teacher, else create a new object with that name and the input teacher
    for name in updated_list:
        Pupil.objects.update_or_create(name=name, defaults={'teacher': teacher})
    # or (but I'm not sure this one will work)
    Pupil.objects.update_or_create(name__in=updated_list, defaults={'teacher': teacher})
Another solution, if your Pupil object only has those 2 attributes and isn't referenced by a foreign key in another relation, is to delete all the "Pupil" instances of this teacher and then use bulk_create. It needs only two hits on the DB, but it's ugly.
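Roughly like this (a sketch only; note that bulk_create needs Django 1.4+, so it won't help you on 1.3):

def update_pupils(teacher, updated_list):
    # first DB hit: wipe everything this teacher currently has
    Pupil.objects.filter(teacher=teacher).delete()
    # second DB hit: recreate the whole list in a single INSERT
    Pupil.objects.bulk_create(
        [Pupil(name=name, teacher=teacher) for name in updated_list]
    )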
EDIT: in the first loop, pupil is also undefined.
Ember Data's Adapter saves edited records in different groups of Ember.OrderedSets, namely: commitDetails.created, commitDetails.updated, and commitDetails.deleted.
model.save() from the model controller's createRecord() will be placed in the commitDetails.created group. model.save() from the model controller's acceptChanges will be placed in the commitDetails.updated group. But I can't find where in the code this placement happens.
I know that they are instantiated in Ember Transaction's commit function (which calls Adapter's commit, in turn calling Adapter's save). Throughout this process, I can't figure out where exactly the records are sorted according to the created/updated/deleted criteria.
I'm not quite clear what you're asking, but if you're looking for where records get added to their appropriate commitDetails set, I believe this is the line you're looking for, in the commitDetails property itself.
Here's the relevant code.
forEach(records, function(record) {
    if(!get(record, 'isDirty')) return;
    record.send('willCommit');
    var adapter = store.adapterForType(record.constructor);
    commitDetails.get(adapter)[get(record, 'dirtyType')].add(record);
});
Let's walk through it.
forEach(records, function(record) {
    if(!get(record, 'isDirty')) return;
The above says, for each record in the transaction, if it's not dirty, ignore it.
record.send('willCommit');
Otherwise, update its state to inFlight.
var adapter = store.adapterForType(record.constructor);
Get the record's adapter.
commitDetails.get(adapter)
Look up the adapter's created/updated/deleted trio object, which was instantiated at the top of this method here. It's simply an object with the 3 properties created, updated, and deleted, whose values are empty OrderedSets.
[get(record, 'dirtyType')]
Get the appropriate OrderedSet from the object we just obtained. For example, if the record we're on has been updated, get(record, 'dirtyType') will return the string updated. The brackets are just standard JavaScript property lookup, and so it grabs the updated OrderedSet from our trio object in the previous step.
.add(record);
Finally, add the record to the OrderedSet. On subsequent iterations of the loop, we'll add other records of the same type, so all created records get added to one set, all updated records get added to another set, and all deleted records get added to the third set.
What we end up with at the end of the entire method and return from the property is a Map whose keys are adapters, and whose values are these objects with the 3 properties created, updated, and deleted. Each of those, in turn, are OrderedSets of all the records in the transaction that have been created for that adapter, updated for that adapter, and deleted for that adapter, respectively.
Notice that this computed property is marked volatile, so it will get recomputed each time that someone gets the commitDetails property.
http://msdn.microsoft.com/en-us/library/dd918848.aspx
"It is important to understand that a scope is the combination of tables and filters. For example, you could define a filtered scope named sales-WA that contains only the sales data for the state of Washington from the customer_sales table. If you define another filter on the same table, such as sales-OR, this is a different scope. If you define filters, be aware that Sync Framework does not automatically handle the deletion of rows that no longer satisfy a filter condition. For example, if a user or application updates a value in a column that is used for filtering, a row moves from one scope to another. The row is sent to the new scope that the row now belongs to, but the row is not deleted from the old scope. Your application must handle this situation."
I am just wondering whether someone can shed some light on how to handle "Sync Framework does not automatically handle the deletion of rows that no longer satisfy a filter condition"?
Many thanks.
The sync providers will (as part of the provisioning step) automatically create tombstone tables and triggers to track row deletions. When rows are not deleted, but updated in such a way as to fall out of the scope, the automatically generated schema won't log these as deletions. It will log them as updates. So to extend the Microsoft example, assume your application is syncing only Washington data to Washington sales reps. Some sales that were originally entered as Washington sales are corrected and moved to Oregon. The sync framework won't know that it should remove these now-Oregon records from the Washington reps' local databases.
You have a couple of options to solve this:
Modify the provisioning tools to generate triggers that would handle the situation, instead of the default triggers that don't. Look into extending SqlSyncScopeProvisioning to accomplish this. If done correctly, this is probably the most scalable/extensible solution.
Modify your application to detect the attempt to move a row out of a scope and have the application delete the row and re-insert it instead of just updating it (probably in a stored procedure). If you already use stored procedures to handle updates, this might be a good option.
Add a background service or process that goes through, looks for records that don't match the scope, and deletes them. This may end up being the easiest solution - especially if your application is already deployed.