I have a view that saves data in multiple models, as there are numerous relations.
Model1.objects.create(**name)
Model2.objects.create(**name)
Model3.objects.create(**name)
Currently I'm using a try/except around each create call.
Is there a better way to handle exceptions for all of these?
A good way to handle this is Design by Contract rather than defensive programming. In your case, that means verifying the integrity of the data passed as arguments and handling possible errors before calling the create methods, so that you only call them in situations where no error will occur. That way, there is no need for try/except.
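A minimal sketch of that idea (the check_integrity helper and its rules are hypothetical; adapt them to whatever your models actually require):

def check_integrity(name):
    # Hypothetical pre-conditions; replace with your models' real constraints.
    return isinstance(name, dict) and bool(name.get("name"))

def create_all(name):
    if not check_integrity(name):
        return None  # reject bad input before touching the database
    m1 = Model1.objects.create(**name)
    m2 = Model2.objects.create(**name)
    m3 = Model3.objects.create(**name)
    return m1, m2, m3

Because the pre-conditions are checked up front, each create call runs only when it is guaranteed a valid argument.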
Can we use the same serializer for creating, updating and getting a resource? Is it a best practice to do so?
Can we use the same serializer for creating, updating and getting a resource?
Why, yes, of course. Even more than that, we can use the exact same serializer for partially updating (PATCH) and deleting (DELETE) a resource.
This is because the serializer doesn't actually "know" about any of these operations; it only serializes and deserializes data -- it is the view that handles HTTP methods.
Is it a best practice to do so?
It is most definitely not bad practice.
But is it good? It really depends on what type of behaviour you expect for each of these, whether you have nested objects or not, and so on.
I would strongly suggest you read more from the docs, especially about ModelSerializer.
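For instance, a minimal sketch with a hypothetical Book model -- one ModelSerializer wired into a ModelViewSet serves every HTTP method:

from rest_framework import serializers, viewsets
from myapp.models import Book  # hypothetical model

class BookSerializer(serializers.ModelSerializer):
    class Meta:
        model = Book
        fields = "__all__"

class BookViewSet(viewsets.ModelViewSet):
    # GET, POST, PUT, PATCH and DELETE all reuse the same serializer; the
    # viewset maps HTTP methods to actions, the serializer only (de)serializes.
    queryset = Book.objects.all()
    serializer_class = BookSerializer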
Good luck.
What is the best way to implement functions when writing an app in Django? For example, I'd like a function that reads some data from other tables, merges it into the result, and updates a user's score based on it.
I'm using a PostgreSQL database, so I could implement it as a database function and have Django call that function directly.
I could also fetch all those values in Python and implement it as a Django function.
Since the model is defined in Django, I feel like I shouldn't define functions in the database itself but rather implement them in Python. Also, if I wanted to recreate the database on another computer, I'd have to hardcode those functions and load them into the database.
On the other hand, if the database is on another machine, such a function would need to call the database multiple times.
Which is the preferred option when implementing an app in Django?
Also, how should I handle constraints that I'd like the fields to have? By overriding the save() function, or by adding constraints to the database fields by hand?
This is a classic problem: do it in the code or do it in the DBMS? For me, the answer comes from asking myself this question: is this logic/functionality intrinsic to the data itself, or is it intrinsic to the application?
If it is intrinsic to the data, then you want to avoid doing it in the application. This is particularly true where more than one app is going to be accessing or modifying the data, in which case you may end up implementing the same logic in multiple languages and environments. That situation is rife with ways to screw up, now or in the future.
On the other hand, if this is just one app's way of thinking about the data, but other apps have different views (pun intended), then do it in the app.
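To make the "do it in the app" option concrete, here is a minimal sketch, assuming hypothetical Activity and Profile models where activities carry the points that feed the score:

from django.db.models import Sum

def update_user_score(user):
    # One aggregate query instead of pulling rows into Python one by one.
    total = Activity.objects.filter(user=user).aggregate(
        total=Sum("points"))["total"] or 0
    Profile.objects.filter(user=user).update(score=total)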
BTW, beware of premature optimization. It is good to be aware of DB accesses and their costs, but unless you are talking big data, or a very time sensitive UI, then machine-time, and to a lesser degree user-time, is less important than your time. Getting v1.0 out the door is often more important. As the inimitable Fred Brooks said, "Plan to throw one away; you will anyhow."
I'm writing an app in Django where I'd like to make use of implicit inheritance when using ForeignKeys. As far as I can tell, the only way to handle this nicely is the django_polymorphic library (no single-table inheritance in Django, WHY OH WHY??).
I'd like to know about the performance implications of this solution. What kind of joins are performed when doing polymorphic queries? Does it have to hit the database multiple times compared to regular queries (the infamous N+1 queries problem)? The docs warn that "the type of queries that are performed aren't handled efficiently by the modern RDBMs", but they don't really say what those queries are. Any statistics or experiences would be really helpful.
EDIT:
Is there any way of retrieving a list of objects, each being an instance of its actual class, with a constant number of queries? I thought this is what the aforementioned library does, but now I'm confused and not so certain anymore.
Django-Typed-Models is an alternative to Django-Polymorphic that takes a simple, clean approach to solving the single-table inheritance issue. It works off a 'type' attribute added to your model: when you save an instance, its class is persisted into the 'type' attribute, and at query time the attribute is used to set the class of the resulting object.
It does what you expect query-wise (every object returned from a queryset is the downcasted class) without needing special syntax or the scary volume of code associated with Django-Polymorphic. And no extra database queries.
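A minimal sketch of what that looks like, assuming the typedmodels package's import path and made-up model names:

from django.db import models
from typedmodels.models import TypedModel

class Animal(TypedModel):
    name = models.CharField(max_length=100)

class Cat(Animal):
    pass

class Dog(Animal):
    pass

# All rows live in one table; the stored type column downcasts each row, so
# Animal.objects.all() yields Cat and Dog instances with no extra queries.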
In Django, inherited models are internally represented through a OneToOneField. If you use select_related() in a query, Django will follow a one-to-one relation forwards and backwards to include the referenced table with a join, so you wouldn't need to hit the database twice.
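For example, with hypothetical Place and Restaurant models where Restaurant inherits from Place:

# The child is exposed on the parent through a lowercased accessor backed by
# the implicit OneToOneField, so one joined query fetches both tables:
places = Place.objects.select_related("restaurant")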
OK, I've dug a little further and found this nice passage:
https://github.com/bconstantin/django_polymorphic/blob/master/DOCS.rst#performance-considerations
So happily this library does something reasonably sane. That's good to know.
I'm writing a GUI app with wxWidgets in C++ for one of my programming classes. We have to validate input and throw custom exceptions if it doesn't meet certain conditions. My question is: what is best practice here? Should I write a separate function that checks for errors and have my event handlers call that function? Or should I do my error checking in the event handlers themselves? Or does it really matter?
Thanks!
Throwing exceptions for this seems a little odd.
I've always felt that a GUI app should prevent invalid data from even being entered in a control, which can be done using event handlers. For data that is inconsistent between several controls on a form, that should be validated when OK is pressed or whatever, and if the data isn't OK, prevent the form from closing and indicate the error.
The validation can also be done before OK is pressed, but it shouldn't prevent the data from being entered, only indicate the problem (and perhaps disable the OK button until it's fixed). After all, a value that is inconsistent in this control may no longer be inconsistent once the user has edited the value in the next control he's moving on to.
Try to avoid message boxes for reporting these errors - a status bar or a read-only text control within the form is better.
A function/method should be defined that does all the consistency checks for the form, but it shouldn't throw an exception. Bad input isn't the "exceptional" case from typical users - it happens all the time, especially when I'm the user. More to the point, you should only use exceptions when error-handling would otherwise mess up the structure of your code, which shouldn't be the case in an "is the data in this form OK - yes, fine, accept and close" event handler.
The function should just return an error code to indicate which error to report - or report the error itself and return an error flag.
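The shape of that idea, sketched in Python for brevity (the question is C++/wxWidgets, and the field names and reporting hook here are made up):

def validate_form(data):
    # Collect every problem instead of bailing out at the first one.
    errors = []
    if not data.get("username"):
        errors.append("Username is required.")
    if data.get("age") is not None and data["age"] < 0:
        errors.append("Age cannot be negative.")
    return errors  # an empty list means the form is consistent

def on_ok(data, show_status):
    errors = validate_form(data)
    if errors:
        show_status("; ".join(errors))  # status bar, not a message box
        return False  # keep the form open
    return True  # accept and close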
Not so long ago, wxWidgets wasn't even exception-safe - IIRC, you still have to enable exception-safety as an option when you build the library.
You are raising a bigger concern than that, in fact. To answer your question: you can factor the validation code into a separate function if you have, say, multiple widgets requiring the same validation, or simply because you see fit -- it's up to you.
But the decision of where error handling should be is a more architectural question. Consider design by contract, which means any method assumes its parameters or input are in a valid state, and guarantees that its output or return value is valid as well. This implies that you should validate the user's input as soon as possible, rather than have the internal logic take care of that.
In our current codebase we are using the MFC database classes to connect to DB2. This is all old code that has been passed on to us by another development team, so we know some of the history but not all.
Most of the code abstracts away the creation of SQL queries through functions such as Update() and Insert(), which prepend something like "INSERT INTO SCHEMA.TABLE" onto a string that you supply. This is done through the recordset classes that sit on top of the database class.
The other way to do the SQL queries is to execute them directly on the database class using dbclass.ExecuteSQL(String).
We are wondering what the pros and cons of each approach are. From our perspective it's much easier to make the ExecuteSQL() call, as we don't have to write another class, but there must be good reasons to do it the other way; we are just not sure what they are.
Any help would be great!
Thanks Mark
Update:
I think I may have misunderstood dynamic and static SQL. I think our code always uses dynamic SQL, so my question really becomes: should I construct the SQL strings myself and call ExecuteSQL(), or should this be abstracted away in a class for each table in the database, as the MFC recordset classes seem to do?
The ATL OLE DB consumer database classes are absolutely the way to go. Beyond the risks of injection (mentioned by Skurmedel), piles of string-concatenated queries will become impossible to maintain very quickly.
While the ATL classes can be initially tedious, they provide the benefit of strongly typed and named columns, result navigation, connection and session management, and so on.
I would try to abstract it away if there are many SQL statements. Managing dozens of different SQL queries quickly becomes tedious. It's also easier to validate input that way.