Django BinaryField - Justification for statement in documentation

In the Django 1.10 documentation for the BinaryField field type, they give a warning about its use:
Abusing BinaryField
Although you might think about storing files in the database, consider that it is bad design in 99% of the cases. This field is not a replacement for proper static files handling.
It does not continue with any justification for this claim. Are there any generalized indicators for what falls in the 99% "bad design" or 1% "not bad design" cases? Does this ring particularly true with Django because it has great static files support?

I consider this premature optimization at best and cargo cult programming at worst.
While it is true that relational database systems aren't optimized for storing large fields (whether binary or text) and some of them treat them specially or at least have some restrictions on their use, most of them handle at least moderately sized binary values (let's say up to a few hundred megabytes) quite well. Storing pictures or PDFs in the database will be less efficient than storing them in the file system, but for 99% of all applications it will be efficient enough.
On the other hand, if you store these files in the file system, you lose several advantages:
Updates happen outside of database transactions, so you can't be sure that an update to a file (in the filesystem) and to its metadata (in the database) will be atomic (see the sketch below).
You lose referential integrity: Your database may refer to files which have been deleted or renamed.
You have two different places where you store your data. This complicates access, backups, etc.
I would try to keep data that logically belongs together in one place. Usually that means storing everything in the database. If this is not technically possible (e.g. because your files are too big - most RDBMSs have a size limit on blobs), or because tests show that it is too slow or otherwise inconvenient, you can always optimize it later.
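To illustrate the first point about atomicity, here is a minimal sketch, assuming a hypothetical Django model whose metadata lives in the database while the payload lives on disk at doc.path:

from django.db import transaction

def update_document(doc, new_bytes):
    # The metadata update is covered by the transaction...
    with transaction.atomic():
        doc.size = len(new_bytes)
        doc.save()  # rolled back automatically on error
    # ...but the file write is not: a crash between the commit above and the
    # write below leaves the metadata and the file permanently out of sync.
    with open(doc.path, "wb") as f:
        f.write(new_bytes)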

Django models are an abstraction over relational databases. These excel at storing small amounts of data with a well-defined format and relationships. They are optimised for fixed-length rows and low memory usage.
Is your data fixed-length, smaller than 4 KB, and not meant to be served directly by a web server? Then you are probably in the 1%.
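As a rough illustration of that heuristic, here is a minimal sketch (the model and field names are hypothetical): the large, web-served payload goes through Django's normal file handling, while a small, fixed-size value that is only ever used for lookups is a plausible BinaryField candidate.

from django.db import models

class Track(models.Model):
    title = models.CharField(max_length=255)
    # The 99% case: large, variable-size content that a web server should
    # serve directly -- use Django's file/media handling, not BinaryField.
    audio = models.FileField(upload_to="tracks/")
    # A plausible 1% case: a small, fixed-size blob (a 20-byte SHA-1 digest)
    # that is never served to clients and is only used for duplicate detection.
    fingerprint = models.BinaryField(null=True)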

Related

Is there a persistency layer for list/queue containers?

Is there some kind of persistency layer that can be used for a regularly-modified list/queue container that stores strings?
The data in the list is just strings, nothing fancy. It could be useful, though, to store a key or hash with each string for definite references, so I thought I'd wrap each string in a struct with an extra key field.
The data should be persisted on each modification, more or less, as spontaneous power-offs might happen.
I looked into Boost.Serialization and it seems easy to use, but I guess I'd have to write out the whole queue and close the file every time it gets modified to be safe against power-offs, as I see no journaling option there.
I saw SQLite, but it could be over the top as I don't need relations or any sophisticated queries.
And I don't want to reinvent the wheel by doing it manually in some files.
Is there anything available worth looking into?
I have little experience with C++ and the OS beneath it, so I'm unaware of what's available and what's suitable, and I couldn't find anything better myself.
A potentially simpler alternative to relational databases, when you don't need the relations, is a "NoSQL" database. A document-oriented database might be a reasonable choice based on the description.

Efficiency of using JSONFields vs. Pure SQL approach Django Postgres

I am wondering what is the difference in efficiency using JSONFields vs. a pure SQL approach in my Postgres DB.
I now know that I can query JSONFields like this:
MyModel.objects.filter(json__title="My title")
With a regular column, the query would look like this:
MyModel.objects.filter(title="My title")
Are these equal in efficiency?
Having separate columns for each thing is definitely more efficient.
The advantage of a JSONField is flexibility. You can store anything you want in there, and you don't have to change your database schema. But this comes at a cost. If you have a column that is a CharField with max 255 characters for example, then lots of time and effort will have gone into making a database that can optimise for that particular type (likewise for other types). With a JSONField however, it can be literally anything and it becomes very difficult to optimise a query (at the actual database level) for this.
Unless you have a good reason to use a JSON field (namely you need that level of flexibility) it is much much much better to go with separate columns for each of your fields. There are other advantages besides performance as well. You can define defaults, you can know for certain what types different variables are, which will make programming with them a whole heap easier and avoid a load of errors.
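A minimal sketch of the trade-off (assuming a Django version where models.JSONField is available; the model and field names are hypothetical):

from django.db import models

class Article(models.Model):
    # Dedicated column: the database knows the type and maximum length,
    # enforces them on write, and a plain b-tree index works well.
    title = models.CharField(max_length=255, db_index=True)
    # JSONField: anything can go in here, so the database can do far less
    # up-front optimisation; fast key lookups usually need extra work,
    # such as a GIN or expression index in Postgres.
    data = models.JSONField(default=dict)

# Filtering on the dedicated, indexed column:
Article.objects.filter(title="My title")

# Filtering on a key inside the JSON document:
Article.objects.filter(data__title="My title")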

Fast embedded database

I am working on an application which will need to store metadata associated with music files (artist, title, play count, etc.), as well as sets of integers (in particular, SHA-1 hashes).
The solution I pick needs to:
Provide "fast" storage & retrieval (when viewing a list of potentially thousands of songs I need to be able to retrieve metadata more or less interactively).
Be cross-platform (to Linux, Windows and OSX).
Provide an interface I can interact with from C++.
Be open-source (or, at the very least, be free as in beer).
Provide fast set operations (union, intersection, difference) - if the solution doesn't provide this, but it will allow me to store binary data, I could implement this myself using a technique like "Fast Set Operations Using Treaps".
Be "embedded" - that is, operate without me having to fork another process, or at least provide an easy interface to do so (like libmysqld).
Solutions I have considered include:
Flat files. This is extremely simple, but doesn't provide any features besides flat data storage.
SQLite. This seems to be a very popular option, but it seems to have some issues regarding performance and concurrency (see KDE's Akonadi for some example issues).
Embedded MySQL/MariaDB. This seems to be a reasonable option, but it also might be a bit heavyweight considering I won't be needing a lot of complicated SQL features.
A hypothetical solution I'm thinking would be perfect would be something like Redis, but which persists data to the disk, and only stores some portion of the data in memory to make retrieval fast. Redis itself might not be a good option because 1) I would need to fork it manually, 2) its Windows port seems less than rock-solid, and 3) storing all of my data in RAM would be less than ideal.
Are there any other solutions for this type of problem, or is one of the solutions I have already listed far better than the others?
In the end, I've decided to use SQLite for metadata. It seems to be as fast as, if not faster than, e.g. libmysqld, and it has a really simple, clean C interface. According to benchmarks, it should be way more than fast enough to suit my needs.
For larger data structures, I'm planning on just storing them in separate binary files (the SQLite website says it can store binary data, but that if your data size exceeds a certain amount it is faster to store it in flat files instead - see this page).
Don't store your binary files as BLOBs inside SQLite, unless you want an elephant-sized database. Just store a string with the file's path on the file system. The only downside of SQLite is that it does not allow remote (web) access, but you can embed it inside a small TCP/HTTP server.
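A minimal sketch of that layout, shown with Python's sqlite3 module for brevity (the table and column names are hypothetical; the same schema can be created through the SQLite C API): small, fixed-size values such as SHA-1 digests stay in the row, while large payloads stay on disk and only their paths are stored.

import sqlite3

conn = sqlite3.connect("library.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS tracks (
        id         INTEGER PRIMARY KEY,
        artist     TEXT,
        title      TEXT,
        play_count INTEGER DEFAULT 0,
        sha1       BLOB,  -- small, fixed-size digest: fine to keep in the row
        data_path  TEXT   -- large payload stays on disk; only its path is stored
    )
""")
conn.execute(
    "INSERT INTO tracks (artist, title, sha1, data_path) VALUES (?, ?, ?, ?)",
    ("Some Artist", "Some Title", b"\x00" * 20, "blobs/0001.bin"),
)
conn.commit()
conn.close()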

An alternative to hierarchical data model

Problem domain
I'm working on a rather big application, which uses a hierarchical data model. It takes images, extracts images' features and creates analysis objects on top of these. So the basic model is like Object-(1:N)-Image_features-(1:1)-Image. But the same set of images may be used to create multiple analysis objects (with different options).
Then an object and image can have a lot of other connected objects, like the analysis object can be refined with additional data or complex conclusions (solutions) can be based on the analysis object and other data.
Current solution
This is a sketch of the solution. Stacks represent sets of objects, arrows represent pointers (i.e. image features link to their images, but not vice versa). Some parts - images, image features, additional data - may be included in multiple analysis objects (because the user wants to run analyses on different sets of objects, combined differently).
Images, features, additional data and analysis objects are stored in global storage (god-object). Solutions are stored inside analysis objects by means of composition (and contain solution features in turn).
All the entities (images, image features, analysis objects, solutions, additional data) are instances of corresponding classes (like IImage, ...). Almost all of the parts are optional (e.g., we may want to discard images after we have a solution).
Current solution drawbacks
1. Navigating this structure is painful when you need connections like the dotted one in the sketch. If you have to display an image with a couple of solution features on top, you first have to iterate through the analysis objects to find which of them are based on this image, and then iterate through the solutions to display them.
2. If, to solve 1., you choose to explicitly store the dotted links (i.e. the image class will have pointers to the solution features related to it), you'll spend a lot of effort maintaining the consistency of these pointers and constantly updating the links when something changes.
My idea
I'd like to build a more extensible (2) and flexible (1) data model. The first idea was to use a relational model, separating objects and their relations. And why not use an RDBMS here - SQLite seems an appropriate engine to me. So complex relations will be accessible via simple (left) JOINs on the database (pseudocode: "images JOIN images_to_image_features JOIN image_features JOIN image_features_to_objects JOIN objects JOIN solutions JOIN solution_features"), and then the actual C++ objects for the solution features are fetched from global storage by ID.
The question
So my primary question is
Is using an RDBMS an appropriate solution for the problems I described, or is it not worth it and are there better ways to organize the information in my app?
If an RDBMS is OK, I'd appreciate any advice on using an RDBMS and the relational approach to store relationships between C++ objects.
You may want to look at Semantic Web technologies, such as RDF, RDFS and OWL that provide an alternative, extensible way of modeling the world. There are some open-source triple stores available, and some of the mainstream RDBMS also have triple store capabilities.
In particular, take a look at Manchester University's Protege/OWL tutorial: http://owl.cs.manchester.ac.uk/tutorials/protegeowltutorial/
And if you decide this direction is worth looking at further, I can recommend "SEMANTIC WEB for the WORKING ONTOLOGIST"
Just based on the diagram, I would suggest that an RDBMS solution would indeed work. It has been years since I was a developer on an RDBMS (called RDM, of course!), but I was able to renew my knowledge and gain very many valuable insights into data structure and layout very similar to what you describe by reading the fabulous book "The Art of SQL" by Stephane Faroult. His book will go a long way toward answering your questions.
I've included a link to it on Amazon, to ensure accuracy: http://www.amazon.com/The-Art-SQL-Stephane-Faroult/dp/0596008945
You will not go wrong by reading it, even if in the end it does not solve your problem fully, because the author does such a great job of breaking down a relation in clear terms and presenting elegant solutions. The book is not a manual for SQL, but an in-depth analysis of how to think about data and how it interrelates. Check it out!
Using an RDBMS to track the links between data can be an efficient way to store and think about the analysis you are seeking, and the links are "soft" -- that is, they go away when the hard objects they link are deleted. This ensures data integrity, and M. Faroult can answer what to do to ensure that remains true.
I don't recommend RDBMS based on your requirement for an extensible and flexible model.
Whenever you change your data model, you will have to change the DB schema, and that can involve more work than a change in code.
Any problems with DB queries are discovered only at runtime. This can make a lot of difference to the cost of maintenance.
I strongly recommend using standard C++ OO programming with STL.
You can make use of encapsulation to ensure any data change is done properly, with updates to related objects and indexes.
You can use STL to build highly efficient indexes on the data
You can create facades to get you the information easily, rather than having to go to multiple objects/collections. This will be one-time work
You can make unit test cases to ensure correctness (much less complicated compared to unit testing with databases)
You can make use of polymorphism to build different kinds of objects, different types of analysis etc
All very basic points, but I reckon your effort would be best spent improving the current solution rather than looking for a DB-based solution.
http://www.boost.org/doc/libs/1_51_0/libs/multi_index/doc/index.html
"you'll put very much effort maintaining consistency of these pointers
and constantly updating the links when something changes."
With the help of Boost.MultiIndex you can create almost every kind of index on a "table". I think the quoted problem is not so serious, so the original solution is manageable.

Where to store SQL code for a C++ application?

We have a C++ application that utilizes some basic APIs to send raw queries to a MS SQL Server. Scattered through the various translation units in our program, we have simple 1-2 line queries as C++ strings, and every now and then you'll see more complex queries that can be over 20 lines.
I can't help but think that the larger queries, specifically the 20+ line ones, should not be embedded in C++ code as constant strings. I want to propose pulling these out into separate text files that are loaded on-demand by the C++ application, however I'm not sure if this is the best approach.
What design choices are typical for situations like this? I definitely feel there needs to be improvement, I just don't know if moving the SQL queries out into data files (text files) is the best idea.
You could make a DAL (Data Access Layer).
It would be the API that the rest of the program talks to. Then you can mess around and try anything and everything (Stored procedures, caching, etc.) without disturbing the main program.
Move them into their own files, or even into their own stored procedures. Queries embedded in the application cannot be changed without a recompile, and depending on your release procedures, that could severely impair your ability to respond to emergencies or deploy hot fixes. You could alter your app to cache the file contents, if you go down that road, and even periodically check the files for updates.
the best "design choice" - for many different reasons - is to use MSSQL stored procedures whenever/wherever possible.
I've seen code that segregates SQL queries into a common module, but I don't think there's much benefit to a common "queries module" (or a standalone text file) over having the SQL queries spelled out as string literals in the module that's calling them.
Stored procedures, on the other hand, increase modularity, enhance security, and can vastly improve performance.
IMHO...
I would leave the SQL embedded in the C++ functions that use it: it will be easier to read and understand what the code does.
If you have SQL queries scattered around your code I'd say that there is some problem with the overall structure of the classes you are using: you should have some (or even just one) 'low level' classes that handle the interaction with the database, and the rest of the code uses these classes.
I personally don't like using stored procedures: if you have to support a different database server, the porting will be a pain. I never saw that much of a performance improvement, and to understand what the code does you have to jump back and forth between the stored procedures and the C++.
It really depends, here are some notes:
1) If all your SQL code resides in the application, then your application is pretty much self-contained in terms of logic. This is good, as you have done in the current application. In terms of speed, this can be a little slower, as the SQL will need to be parsed when you run these queries (it also depends on whether you use prepared statements, etc., which can speed things up).
2) The second approach is to put all the SQL logic in stored procedures on the server. This is very much the preferred approach, even for small SQL queries, whether one line or not. You just build a DAL layer. In terms of performance this is very good; however, the logic lives in two different systems: your C++ app and the SQL server. You will quite likely need to build a small utility application that can translate the stored procedures' inputs and outputs into template code (be it C++ or any other language) to make your life easier.
3) A mixed approach with the above two. I would not recommend this route.
You need to think about how these queries are likely to change over time, and compare that to how the related C++ code is likely to change. If the queries are relatively independent of the code and have a higher likelihood of change, then I would either load them at runtime from separate files or use stored procedures instead. That approach allows the queries to be changed without recompiling the C++ code. On the other hand, if the queries are highly coupled to the C++ code, with a change in one likely to accompany a change in the other, I would keep the queries in the code. This approach makes a change more localized and less error-prone.