Are required fields forced when updating while consuming CrmService? - web-services

MSCRM 4.0
When writing plugins, I have assumed that the required fields will always exist either in the Target image or the PreImage image.
But recently, when coding an external application that consumes the CrmService, I realised that the service will allow a business entity (or dynamic entity) to be created using the 'Create' method even if the required fields are missing or contain no value.
Is this the case? Is there a way to enforce required fields when calling the Update method of the service? Does anyone know why this may not be the case? Can anyone shed some light on the issue? Will I have to manage these required fields myself?

There is no validation. That's why we need to make sure those properties are filled in with valid values.
Proper validation rules need to be enforced in the Pre-Create event, where you can throw an InvalidPluginExecutionException to notify users that certain mandatory properties are not filled in properly.

No, there is no validation. For standard entities you can look for platform-required fields - these are enforced, but they're generally limited to things like the business unit on a report - rare cases. If you want business validation you will need to add it in a Pre-Create/Update plugin.

Related

Best Practice Using Django Signal (For user authentication?)

I am new to Django and want to know deeper about the concept of signals.
I know how it works, but I really don't understand when one should use it.
From the doc it says 'They’re especially useful when many pieces of code may be interested in the same events.'
What are some real applications that use signals for its advantage?
e.g. I'm trying to add phone verification after user signup. Because it can be integrated inside a single app and the only piece of code interested in the event is this 'verify' function, I don't really need a signal. I can just pass the information from one view to the other rather than using the pre_save signal from the registration.
I'm sorry if my question is kind of basic, but I really want some insight into what the real applications are - where many pieces of code are interested in one particular event - and what the trade-offs are in my application.
Thanks!!
Signals are often used when you need to do some database-specific low-level stuff. For example, if you use ElasticSearch for better document search on your site, you may want to automatically update the search indexes when a new document is created or an existing one is edited.
You may also have some complex logic for managing database objects. For example, you may need specific logic for deleting an object: when a user is deleted, you may want to replace all links to their profile with a placeholder. Or, when a new message is created or some other action is performed by a user, you may want to update the "last visited" field in the user's profile, even though there's no direct relation between that action and updating the profile.
But when you're just implementing business logic, as in your example with verification, you don't need to use signals, because you don't need any universal logic related to deleting/creating/editing any object: you have a certain object you work with and can do the stuff directly.
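Here's a minimal sketch of the search-index case (the Document model and update_search_index helper are made-up names): a post_save receiver keeps an external index in sync no matter where the save came from.

```python
from django.db.models.signals import post_save
from django.dispatch import receiver

from myapp.models import Document              # hypothetical model
from myapp.search import update_search_index   # hypothetical helper

@receiver(post_save, sender=Document)
def reindex_document(sender, instance, created, **kwargs):
    # runs no matter which view, form, admin page or shell session saved it
    update_search_index(instance)
```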

Where the permissions should be checked in web service?

I have got an architectural question. Where should I check user permissions for certain operations?
For example:
1) In a controller, I get parameters from the view and start a process in an intermediate model.
2) The intermediate model decides which parameters should be converted or transformed and modifies or creates data through the models.
3) The models communicate directly with the database.
Where do you think is the right place in that "architecture" to check privileges to, for example, save something to the database?
I would actually put the authorization check before the controller is called, kinda like described here (I really need to update that old post). Preferably as a decorator around the controller instance, which would give you fine-grained control over which operations a user is permitted to perform, based on the controller+method pair.
Another point you might think about is an "authorization lookup" helper function for use in your templates, because you might need to show or hide some UI elements from users who should not be able to perform the associated operations. The controller+method check before execution would still work as the actual safeguard; the helper just tends to be a quality-of-life improvement.
You should not put the authorization checks inside each controller or (even worse) the model layer, because that tends to promote an excessive amount of copy-paste, which in turn causes mistakes and becomes a huge problem when you want to alter the mechanics of your authorization system.
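To illustrate the decorator idea (this is not code from the linked post - the class and rule names are made up), here is a rough Python sketch of wrapping a controller and checking the controller+method pair against an ACL before delegating:

```python
class ForbiddenError(Exception):
    pass

class ArticleController:
    def edit(self, article_id):
        return f"editing article {article_id}"

class AclDecorator:
    """Checks permissions before delegating to the wrapped controller."""

    def __init__(self, controller, acl, user_role):
        self._controller = controller
        self._acl = acl          # {(controller_name, method): {allowed roles}}
        self._role = user_role

    def __getattr__(self, method):
        key = (type(self._controller).__name__, method)
        if self._role not in self._acl.get(key, set()):
            raise ForbiddenError(f"{self._role} may not call {key}")
        return getattr(self._controller, method)

acl = {("ArticleController", "edit"): {"editor", "admin"}}
controller = AclDecorator(ArticleController(), acl, user_role="editor")
print(controller.edit(42))   # permitted; a "guest" role would raise ForbiddenError
```

The same ACL table can back the template-level "authorization lookup" helper, so the UI and the safeguard stay in sync.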

QT: Form validation with inter-fields rules

I would like to dynamically build a form to edit a set of properties (say, from an XML file or similar).
On top of that, I would like to perform validation for each property (mandatory values/optional values) with a set of rules (ideally also dynamically loaded).
These rules could be associated to a single field (allowed values, range, ...) but could also link several fields (conditional validation).
I would like to be able to save the results "on the fly" (as soon as a field loses focus).
Does someone have a good lead to get me started?
Here is what I found so far:
I could start from the Qt property browser framework for the dynamic form generation and extend it to suit my needs.
Regarding validation, I read about QValidator, which seems to be a good start. However, I couldn't find anything involving several fields (cross-parameter validation).
The QSettings framework does this auto-save feature quite nicely and I guess I could reuse that.
I just wanted to be sure I am not missing an existing framework that covers my goals, since it seems like a relatively standard thing to do.
Assuming the fields of the form are fixed, you could use a shared instance of a QValidator to validate the text in all the fields by running your validation over a list/dictionary/map containing pointers to the fields. The list/dictionary/map would have to be dynamically populated and cleared, and a pointer to it made available inside QValidator::validate. If sharing a QValidator is not possible, you will have to create individual validators and run your cross-field validation from each of them.
Alternatively, you could use Qt's signal-slot mechanism to run your validation whenever the text in a field changes.
I had no idea QSettings existed, and would have used the same signal-slot mechanism to do the autosave.
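To make the signal-slot idea concrete, here is a rough sketch written with PyQt5 (treat the Python binding as an assumption; the same signals exist in C++ Qt). Either field changing re-runs a cross-field check, and losing focus triggers an auto-save through QSettings:

```python
import sys

from PyQt5.QtCore import QSettings
from PyQt5.QtWidgets import QApplication, QFormLayout, QLabel, QLineEdit, QWidget

class RangeForm(QWidget):
    """Two fields with a cross-field rule: Start must not exceed End."""

    def __init__(self):
        super().__init__()
        self.start = QLineEdit()
        self.end = QLineEdit()
        self.status = QLabel()

        layout = QFormLayout(self)
        layout.addRow("Start", self.start)
        layout.addRow("End", self.end)
        layout.addRow(self.status)

        for field in (self.start, self.end):
            field.textChanged.connect(self.validate)     # re-check on every edit
            field.editingFinished.connect(self.save)     # auto-save when focus is lost

    def validate(self):
        try:
            ok = int(self.start.text()) <= int(self.end.text())
        except ValueError:
            ok = False
        self.status.setText("" if ok else "Start must be a number not greater than End")
        return ok

    def save(self):
        if self.validate():
            settings = QSettings("example-org", "range-form")   # hypothetical org/app names
            settings.setValue("start", self.start.text())
            settings.setValue("end", self.end.text())

if __name__ == "__main__":
    app = QApplication(sys.argv)
    form = RangeForm()
    form.show()
    sys.exit(app.exec_())
```

The same pattern extends to dynamically generated fields: connect each created widget's textChanged to one shared validation slot that walks your list/map of fields.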

What kind of validations should I use in my db models?

My form validators are pretty good, and if a form passes is_valid, all the data should be OK to insert into the db. Should I still validate something on the db model? What else could be validated on the db side? Because right now, except maybe for uniqueness (which I can't do from my FormModel), I can't think of anything else.
EDIT:
I did some work with Rails earlier, and there you would validate a form on the client side, using JS, and on the server side using model validations. I saw in django you can validate on the client side, using JS, and on the server side you have 2 validation checks: forms and models. This is what confused me.
All data should be validated in the database if possible, whether you validate from the front end or not. The first validation should be the datatype; for instance, using a date datatype ensures that no non-dates can ever get into your database. If you have relationships between tables, these absolutely must be enforced at the database level. If the data must be unique, it is irresponsible not to put a unique index on it. If you have a distinct set of values that are the only ones allowed, then put them in a lookup table and add a foreign key constraint to that table.
The reason why it is CRITICAL to do validations in the database itself is that the user interface will not be the only thing that interacts with the database (even if you think it will be). Other applications may do so, and people will need to make data changes through imports or at a query window (to fix/change large amounts of data, such as when client A buys client B and you need to convert all the data to client A). Also, if you change the application interface you might lose some of the critical data integrity checks in the rewrite. Data integrity is one of the most critical factors in database design and maintenance. If you can't count on data integrity, you have no data. I have never seen a database that lets this stuff be handled by the application that didn't lose data integrity over time. Remember, the database will far outlast the current application. People will still be looking at this data for years to come. The application typically doesn't consider reporting, which is where data integrity problems tend to come to light. You don't want to have to explain why you have 10,000,000 in orders that you can't identify who they were shipped to, for instance.
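As a sketch of what that looks like from Django (the models are hypothetical, and CheckConstraint needs Django 2.2 or newer), these field options all turn into real database-level constraints when migrated:

```python
from django.db import models

class Status(models.Model):
    # lookup table: the only allowed status values live here
    code = models.CharField(max_length=20, unique=True)

class Customer(models.Model):
    email = models.EmailField(unique=True)       # unique index in the database

class Order(models.Model):
    customer = models.ForeignKey(Customer, on_delete=models.PROTECT)  # foreign key constraint
    status = models.ForeignKey(Status, on_delete=models.PROTECT)      # values restricted to the lookup table
    quantity = models.PositiveIntegerField()
    shipped_on = models.DateField(null=True, blank=True)              # date column rejects non-dates

    class Meta:
        constraints = [
            # CHECK constraint enforced by the database itself
            models.CheckConstraint(check=models.Q(quantity__gt=0),
                                   name="order_quantity_positive"),
        ]
```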
If your data has a constraint that must always hold, you should enforce it at the model/database level (and optionally at the form level). Your DB can receive input in multiple ways besides a form where validation was checked. E.g., someone can go to the Django shell to save models directly, someone could create/edit a model in the admin interface, or some later developer could create a new form that doesn't validate correctly.
Granted, this is only required if there are additional constraints on the data. Django will automatically validate things like fields storing proper values, if you are using the correct field types. E.g., IntegerField validates that it contains an integer, EmailField checks that it's entered in the form of a valid email address, django.contrib.localflavor.us.models.PhoneNumberField is a US phone number, etc. Note, this only happens if your models have the proper fields (e.g., if you use CharFields for email addresses, no validation can be performed).
But there may be other links between data structures where you should write your own validation. E.g., if all custom orders require special instructions (and non-custom orders only sometimes have them), you should check that every custom order has something in the special instructions field (and maybe enforce some minimum length).
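A rough sketch of that kind of cross-field rule at the model level (the field names are made up); clean() runs whenever full_clean() is called, for example when a ModelForm validates:

```python
from django.core.exceptions import ValidationError
from django.db import models

class Order(models.Model):
    is_custom = models.BooleanField(default=False)
    special_instructions = models.TextField(blank=True)

    def clean(self):
        # cross-field rule: custom orders must carry real special instructions
        if self.is_custom and len(self.special_instructions.strip()) < 10:
            raise ValidationError(
                {"special_instructions":
                 "Custom orders need special instructions (at least 10 characters)."}
            )
```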
EDIT: In response to your edit, the reason for three potential validations in django is straightforward -- different validations at different points for different reasons.
Client-side (javascript/jquery) validation can't be trusted at all and should only be provided as a convenience for users, almost as an afterthought (if you want a spiffy, smooth interface). AFAIK, Django doesn't do JS validation unless you use an external package like django-ajax-forms or something, and even then you shouldn't trust that the validation actually happened.
Second, there's a difference between form and model validation. One model may have multiple forms for different purposes. For example, you may have a blog with a Comment Model and allow two types of users to comment: signed in users, or anonymous users. The form for anonymous users may require giving a name/email before they comment, while the form for logged in users doesn't need those fields. The signed in user form, when processed in a view may automatically add the correct name and email addresses of the signed in user to the comment model before being saved.
In contrast, model validation always applies and will always hold at the database level, regardless of how someone tried saving the data. If you want to make sure some condition always applies, make sure it is enforced at the model/DB level. (And then you don't have to put that validation in at the form level.)
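As a sketch of the one-model, two-forms point above (the names are illustrative):

```python
from django import forms
from django.db import models

class Comment(models.Model):
    name = models.CharField(max_length=100)
    email = models.EmailField()
    body = models.TextField()

class AnonymousCommentForm(forms.ModelForm):
    class Meta:
        model = Comment
        fields = ["name", "email", "body"]   # anonymous visitors must identify themselves

class UserCommentForm(forms.ModelForm):
    class Meta:
        model = Comment
        fields = ["body"]                    # name/email come from request.user in the view
```

In the signed-in view you would call form.save(commit=False), fill in name and email from request.user, and then save the instance; the model-level rules still apply either way.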

How can I easily mark records as deleted in Django models instead of actually deleting them?

Instead of deleting records in my Django application, I want to just mark them as "deleted" and have them hidden from my active queries. My main reason to do this is to give the user an undelete option in case they accidentally delete a record (these records may also be needed for certain backend audit tracking.)
There are a lot of foreign key relationships, so when I mark a record as deleted I'd have to "Cascade" this delete flag to those records as well. What tools, existing projects, or methods should I use to do this?
Warning: this is an old answer and it seems that the documentation is recommending not to do that now: https://docs.djangoproject.com/en/dev/topics/db/managers/#don-t-filter-away-any-results-in-this-type-of-manager-subclass
Django offers out of the box the exact mechanism you are looking for.
You can change the manager that is used for access through related objects. If your new custom manager filters the objects on a boolean field, objects flagged inactive won't show up in your queries.
See here for more details:
http://docs.djangoproject.com/en/dev/topics/db/managers/#using-managers-for-related-object-access
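A minimal sketch of such a manager (the field and manager names are my own; note the warning above about filtering results away in a default manager):

```python
from django.db import models

class ActiveManager(models.Manager):
    def get_queryset(self):
        # hide rows flagged as deleted from every query made through this manager
        return super().get_queryset().filter(is_deleted=False)

class Bookmark(models.Model):
    title = models.CharField(max_length=200)
    is_deleted = models.BooleanField(default=False)

    objects = ActiveManager()          # default access path excludes deleted rows
    all_objects = models.Manager()     # plain manager that still sees everything
```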
Nice question, I've been wondering how to efficiently do this myself.
I am not sure if this will do the trick, but django-reversion seems to do what you want, although you probably want to examine how it achieves this goal, as there are some inefficient ways to do it.
Another thought would be to have the dreaded boolean flag on your models and then create a custom manager that automatically adds the filter in, although this wouldn't work for searches across different models. Yet another solution suggested here is to have duplicate models of everything, which seems like overkill, but may work for you. The comments there also discuss different options.
I will add that for the most part I don't consider any of these solutions worth the hassle; I usually just suck it up and filter my searches on the boolean flag. It avoids many issues that can come up if you try to get too clever. It is a pain and not very DRY, of course. A reasonable compromise would be the custom-manager approach, as long as you stay aware of its limitations when searching a related model through it.
I think using a boolean 'is_active' flag is fine - you don't need to cascade the flag to related entries at the db level, you just need to keep referring to the status of the parent. This is what happens with contrib.auth's User model, remember - marking a user as not is_active doesn't prompt django to go through related models and magically try to deactivate records, rather you just keep checking the is_active attribute of the user corresponding to the related item.
For instance if each user has many bookmarks, and you don't want an inactive user's bookmarks to be visible, just ensure that bookmark.user.is_active is true. There's unlikely to be a need for an is_active flag on the bookmark itself.
Here's a quick blog tutorial from Greg Allard from a couple of years ago, but I implemented it using Django 1.3 and it was great. I added methods to my objects named soft_delete, undelete, and hard_delete, which set self.deleted=True, self.deleted=False, and returned self.delete(), respectively.
A Django Model Manager for Soft Deleting Records and How to Customize the Django Admin
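A rough sketch of those methods (the abstract base model is my own framing, and I added save() calls so the flag is actually persisted):

```python
from django.db import models

class SoftDeleteModel(models.Model):
    deleted = models.BooleanField(default=False)

    class Meta:
        abstract = True

    def soft_delete(self):
        self.deleted = True
        self.save()

    def undelete(self):
        self.deleted = False
        self.save()

    def hard_delete(self):
        return self.delete()   # falls through to Django's real delete
```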
There are several packages which provide this functionality: https://www.djangopackages.com/grids/g/deletion/
I'm developing one https://github.com/meteozond/django-permanent/
It replaces default Manager and QuerySet delete methods to bring in logical deletion.
It completely shadows the default Django delete methods, with one exception: models that inherit from PermanentModel are marked instead of deleted, even if their deletion is caused by a relation.