An alternative to a hierarchical data model - C++

Problem domain
I'm working on a rather big application, which uses a hierarchical data model. It takes images, extracts images' features and creates analysis objects on top of these. So the basic model is like Object-(1:N)-Image_features-(1:1)-Image. But the same set of images may be used to create multiple analysis objects (with different options).
An object and an image can then have many other connected objects: the analysis object can be refined with additional data, or complex conclusions (solutions) can be built on top of the analysis object and other data.
Current solution
This is a sketch of the solution. Stacks represent sets of objects, arrows represent pointers (i.e. image features link to their images, but not vice versa). Some parts (images, image features, additional data) may be included in multiple analysis objects, because the user wants to run analyses on different sets of objects, combined differently.
Images, features, additional data and analysis objects are stored in global storage (god-object). Solutions are stored inside analysis objects by means of composition (and contain solution features in turn).
All the entities (images, image features, analysis objects, solutions, additional data) are instances of corresponding classes (like IImage, ...). Almost all the parts are optional (i.e., we may want to discard images after we have a solution).
Current solution drawbacks
Navigating this structure is painful when you need connections like the dotted one in the sketch. If you have to display an image with a couple of solution features on top, you first have to iterate through the analysis objects to find which of them are based on this image, and then iterate through the solutions to display them.
If, to solve 1., you choose to store the dotted links explicitly (i.e. the image class will have pointers to the solution features related to it), you'll spend a lot of effort maintaining the consistency of these pointers and constantly updating the links when something changes.
My idea
I'd like to build a more extensible (2) and flexible (1) data model. The first idea was to use a relational model, separating objects and their relations. And why not use an RDBMS here - SQLite seems an appropriate engine to me. Complex relations then become accessible through simple (left) JOINs on the database, in pseudocode: images JOIN images_to_image_features JOIN image_features JOIN image_features_to_objects JOIN objects JOIN solutions JOIN solution_features, followed by fetching the actual C++ objects for the solution features from the global storage by ID.
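A minimal sketch of how such a query could look with the sqlite3 C API; the link-table and column names here are illustrative assumptions, not the actual schema:

#include <sqlite3.h>
#include <stdexcept>
#include <vector>

// Returns the IDs of all solution features that are (indirectly) based on the
// given image, by walking the hypothetical link tables with one JOIN query.
// The actual C++ objects would then be fetched from the global storage by ID.
std::vector<long long> solutionFeatureIdsForImage(sqlite3* db, long long imageId)
{
    const char* sql =
        "SELECT sf.id "
        "FROM images i "
        "JOIN images_to_image_features itf ON itf.image_id = i.id "
        "JOIN image_features_to_objects fto ON fto.feature_id = itf.feature_id "
        "JOIN solutions s ON s.object_id = fto.object_id "
        "JOIN solution_features sf ON sf.solution_id = s.id "
        "WHERE i.id = ?;";

    sqlite3_stmt* stmt = nullptr;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK)
        throw std::runtime_error(sqlite3_errmsg(db));
    sqlite3_bind_int64(stmt, 1, imageId);

    std::vector<long long> ids;
    while (sqlite3_step(stmt) == SQLITE_ROW)
        ids.push_back(sqlite3_column_int64(stmt, 0));
    sqlite3_finalize(stmt);
    return ids;
}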
The question
So my primary question is
Is using an RDBMS an appropriate solution for the problems I described, or is it not worth it and are there better ways to organize the information in my app?
If an RDBMS is OK, I'd appreciate any advice on using an RDBMS and the relational approach to store C++ objects' relationships.

You may want to look at Semantic Web technologies, such as RDF, RDFS and OWL that provide an alternative, extensible way of modeling the world. There are some open-source triple stores available, and some of the mainstream RDBMS also have triple store capabilities.
In particular take a look at Manchester University's Protege/OWL tutorial: http://owl.cs.manchester.ac.uk/tutorials/protegeowltutorial/
And if you decide this direction is worth looking at further, I can recommend "SEMANTIC WEB for the WORKING ONTOLOGIST"

Just based on the diagram, I would suggest that an RDBMS solution would indeed work. It has been years since I was a developer on an RDMS (called RDM, of course!), but I was able to renew my knowledge and gain many valuable insights into data structures and layouts very similar to what you describe by reading the fabulous book "The Art of SQL" by Stephane Faroult. His book will go a long way toward answering your questions.
I've included a link to it on Amazon, to ensure accuracy: http://www.amazon.com/The-Art-SQL-Stephane-Faroult/dp/0596008945
You will not go wrong by reading it, even if in the end it does not solve your problem fully, because the author does such a great job of breaking down a relation in clear terms and presenting elegant solutions. The book is not a manual for SQL, but an in-depth analysis of how to think about data and how it interrelates. Check it out!
Using an RDBMS to track the links between data can be an efficient way to store and think about the analysis you are seeking, and the links are "soft" -- that is, they go away when the hard objects they link are deleted. This ensures data integrity; and M. Faroult can answer what to do to ensure that remains true.

I don't recommend RDBMS based on your requirement for an extensible and flexible model.
Whenever you change your data model, you will have to change the DB schema, and that can involve more work than a change in code.
Any problems with DB queries are discovered only at runtime. This can make a lot of difference to the cost of maintenance.
I strongly recommend using standard C++ OO programming with STL.
You can make use of encapsulation to ensure any data change is done properly, with updates to related objects and indexes.
You can use STL to build highly efficient indexes on the data.
You can create facades to get you the information easily, rather than having to go to multiple objects/collections. This will be one-time work.
You can make unit test cases to ensure correctness (much less complicated compared to unit testing with databases).
You can make use of polymorphism to build different kinds of objects, different types of analysis, etc.
All very basic points, but I reckon your effort would be best utilized if you improve the current solution (for instance with indexes and facades, as sketched below) rather than look for a DB-based solution.
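A rough sketch of the index/facade idea with plain STL; the class and member names are invented for illustration, not taken from the question:

#include <memory>
#include <unordered_map>
#include <vector>

struct ISolutionFeature;   // assumed to exist elsewhere in the application

// The storage keeps an index from image ID to the solution features derived
// from it, and updates that index inside the same member function that
// creates the link, so consistency is maintained in exactly one place.
class Storage {
public:
    void linkFeatureToImage(int imageId, std::shared_ptr<ISolutionFeature> f) {
        byImage_.emplace(imageId, std::move(f));
    }

    // Facade query: everything needed to draw solution features over an image.
    std::vector<std::shared_ptr<ISolutionFeature>> featuresForImage(int imageId) const {
        std::vector<std::shared_ptr<ISolutionFeature>> out;
        auto range = byImage_.equal_range(imageId);
        for (auto it = range.first; it != range.second; ++it)
            out.push_back(it->second);
        return out;
    }

private:
    std::unordered_multimap<int, std::shared_ptr<ISolutionFeature>> byImage_;
};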

http://www.boost.org/doc/libs/1_51_0/libs/multi_index/doc/index.html
"you'll put very much effort maintaining consistency of these pointers
and constantly updating the links when something changes."
With the help of Boost.MultiIndex you can create almost every kind of index on a "table". I think the quoted problem is not so serious, so the original solution is manageable.
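For example, a sketch of such a "table" of image-to-solution-feature links, indexed from both sides; the record type and field names are assumptions for illustration:

#include <boost/multi_index_container.hpp>
#include <boost/multi_index/member.hpp>
#include <boost/multi_index/ordered_index.hpp>

// One row per (image, solution feature) relation.
struct ImageFeatureLink {
    int imageId;
    int solutionFeatureId;
};

namespace mi = boost::multi_index;

// The container behaves like a table with two indexes, so the dotted
// connections can be looked up efficiently from either side.
using LinkTable = mi::multi_index_container<
    ImageFeatureLink,
    mi::indexed_by<
        mi::ordered_non_unique<
            mi::member<ImageFeatureLink, int, &ImageFeatureLink::imageId>>,
        mi::ordered_non_unique<
            mi::member<ImageFeatureLink, int, &ImageFeatureLink::solutionFeatureId>>
    >
>;

// Usage: all solution features linked to image 42.
//   auto range = links.get<0>().equal_range(42);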

Related

Is there a persistency layer for list/queue containers?

Is there some kind of persistency layer that can be used for a regularly-modified list/queue container that stores strings?
The data in the list is just strings, nothing fancy. It could be useful, though, to store a key or hash with each string for definite references, so I thought I'd wrap each string in a struct with an extra key field.
The state should be persisted on each modification, more or less, as spontaneous power-offs might happen.
I looked into Boost.Serialization and it seems easy to use, but I guess I'd have to write the whole queue and close the file every time it gets modified to be safe against power-offs, as I see no journaling option there.
I saw SQLite, but it could be over the top as I don't need relations or any sophisticated queries.
And I don't want to reinvent the wheel by doing it manually in some files.
Is there anything available worth looking into?
I have little experience with C++ and the OS beneath it, so I'm unaware of what's available and what's suitable, and couldn't find anything better.
A potentially simpler alternative to relational databases, when you don't need the relations, is a "NoSQL" database. A document-oriented database might be a reasonable choice based on the description.
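A minimal sketch of that route, assuming LevelDB as the embedded store (strictly a key-value store rather than a document database, but it covers a plain string queue); the key layout and options here are illustrative only:

#include <leveldb/db.h>
#include <memory>
#include <string>

int main() {
    leveldb::DB* raw = nullptr;
    leveldb::Options options;
    options.create_if_missing = true;
    leveldb::Status st = leveldb::DB::Open(options, "queue_db", &raw);
    if (!st.ok()) return 1;
    std::unique_ptr<leveldb::DB> db(raw);

    leveldb::WriteOptions wo;
    wo.sync = true;   // flush each write to disk, so a power-off loses at most the write in flight

    // Each queue entry lives under a monotonically increasing key.
    db->Put(wo, "000001", "first string in the queue");    // enqueue
    db->Delete(wo, "000001");                               // dequeue
    return 0;
}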

How to call SQL functions / stored procedure when using the Repository pattern

What is the best way to call a SQL function / stored procedure when converting code to use the repository pattern? Specifically, I am interested in read/query capabilities.
Options
1. Add an ExecuteSqlQuery to IRepository
2. Add a new repository interface specific to the context (i.e. ILocationRepository) and add resource-specific methods
3. Add a special "repository" for all the random stored procedures until they are all converted
4. Don't. Just convert the stored procedures to code and place the logic in the service layer
Option #4 does seem to be the best long-term solution, but it's also going to take a lot more time, and I was hoping to push it to a future phase.
Which option (above or otherwise) would be "best"?
NOTE: my architecture is based on ardalis/CleanArchitecture using ardalis/Specification, though I'm open to all suggestions.
https://github.com/ardalis/CleanArchitecture/issues/291
If necessary, or create logically grouped Query services/classes for that purpose. It depends a bit on the functionality of the SPROC how I would do it. Repositories should be just simple CRUD, at most with a specification to help shape the result. More complex operations that span many entities and/or aggregates should not be added to repositories but modeled as separate Query objects or services. Makes it easier to follow SOLID that way, especially SRP and OCP (and ISP) since you're not constantly adding to your repo interfaces/implementations.
Don't treat stored procedures as second-class citizens. In general, avoid using them, because they very often pull your domain code away and hide it inside the database; but sometimes, for performance reasons, they are your only choice. In that case you should use option 2 and treat them the same as a simple database fetch.
Option 1 is really bad because you will soon have tons of SQL in places you don't want it (Application Services) and it will prevent portability to another storage medium.
Option 3 is unnecessary, stored procedures are no worse than simple Entity Framework Core database access requests.
Option 4 is the reason why you cannot always avoid stored procedures. Sometimes trying to query stuff in application service/repositories will create very big performance issues. That's when, and only when, you should step in with stored procedures.
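As a hypothetical sketch of option 2 (shown in C++ for brevity rather than the original .NET stack, with invented names): the application service sees only a resource-specific repository interface, and the fact that the implementation happens to call a stored procedure stays hidden behind it:

#include <string>
#include <vector>

struct Location { int id; std::string name; };

class ILocationRepository {
public:
    virtual ~ILocationRepository() = default;
    // To callers this looks like any other fetch, even if it runs a SPROC.
    virtual std::vector<Location> findNearby(double lat, double lon, double radiusKm) = 0;
};

class SqlLocationRepository : public ILocationRepository {
public:
    std::vector<Location> findNearby(double lat, double lon, double radiusKm) override {
        // e.g. execute the stored procedure through whatever database client
        // the project already uses, then map the result set to Location objects.
        return {};
    }
};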

Common information model for SOA systems

We are looking at the possibility of implementing a Common Information Model for data across several systems in a SOA architecture.
Many of these services will be consumed by a composite UI; we therefore see a benefit in having common data types.
What we are wondering is if this is a feasible approach, or if we should just map to common types in the client?
This question is framed pretty broadly, so my answer is going to remain pretty broad as well.
The key consideration here would seem to be location independence - though you're working with several applications, they're all going to share certain sorts of data (though not, as far as I can see from your question, actual data). An obvious use case for this is authentication and authorization data.
If you have determined that the common data is truly cooked enough to isolate in the fashion you're describing then I think it makes perfect sense to layer it off into a service. I think the perfect example of this is Windows Identity Framework. It takes something that we as architects have always treated as data and turns it into a service.
What you lose with the location independence is a little bit of efficiency that you would otherwise have in making batched calls to the same server, though SOA applications lose this efficiency early in their design, in my experience. But the efficiency you gain from "patternizing" a section of your apps generally outweighs that enormously.
Having a common information model doesn't imply common data types or common classes. Simply defining the relationships between, for instance, Customer, Order, OrderItem and Product goes a great distance toward common business logic and the ability to have different services and applications be able to interoperate in an SOA environment.
You might consider having an actual common model in some modeling language. From this, concrete data types and classes could be generated for particular circumstances. One might use UML for this, but I personally prefer to use NORMA, an Object-Role Modeling tool. It works at the conceptual level, so creates models that are independent of the data store technology.
NORMA runs as an add-in to Visual Studio Standard edition or above, but out of the box generates artifacts for several databases, as well as LINQ to SQL classes and even PHP web services, all from the same model. It is extensible so that you can generate your own artifacts from the model. And of course, the model is represented as XML, so you can do whatever you like with it.

Does SAP BusinessObjects require a Universe for relational database?

Goal: I wish users to be able to directly connect to an RDBMS (e.g., MS SQL Server) and do some queries with possible cross references.
Tool: SAP BusinessObjects XI Enterprise
Description:
The main reason is that Universe creation is pretty techy. Imagine the SQL DB structure changing frequently, maybe even daily. Hence the synchronization issues.
Is BO capable of doing a cross reference using the BO query GUI, usable by a non-techie, to generate a request like:
SELECT
Classroom.Location
FROM
Student,
Classroom
WHERE
Student.Name = 'Foo' AND
Student.ClassroomName = Classroom.Name
...with only an ODBC connection and no Universe (or an auto-generated Universe)?
If yes, does it require foreign keys to be defined?
If no, is there a simple way to create and update (synch) a BO Universe directly from the DB structure? May be using their new XML format?
Good question.
Background
I have implemented one very large and "complex" banking database, 500+ tables, that the customer bought BO for. The "complex" is in quotes because although I created a pure 5NF (correctly Normalised to 5NF) RDB, and most developers and the power users did not find it "complex", some developers and users found it "complex". The first BO consultant could not even create a working Universe, and overran his budgeted one month. The second BO consultant created the entire Universe in 10 days. The whole structure (one 5NF RDB; 5 apps; one Universe; web reporting) all worked beautifully.
But as a result of that exercise, it was clear to me that although the Universe is very powerful, it is only required to overcome the impediments of an un-normalised database, or a data warehouse that has tables from many different source databases, which then need to be viewed together as one logical table. The first consultant was simply repeating what he was used to, doing his techie thing, and did not understand what a Normalised db meant. The second realisation was that BO Universe was simply not required for a true (normalised) RDB.
Therefore on the next large banking project, in which the RDB was pretty much 120% of the previous RDB, I advised against BO, and purchased Crystal Reports instead, which was much cheaper. It provided all the reports that users required, but it did not have the "slice and dice" capability or the data cube on the local PC. The only extra work I had to do was to provide a few Views to ease the "complex" bits of the RDB, all in a days work.
Since then, I have been involved in assignments that use BO, and fixed problems, but I have not used XI (and its auto-generated Universe). Certainly, a preponderance towards simple reporting tools, and avoiding the Universe altogether, which has been proved many times.
In general then, yes, BO Query GUI (even pre-XI) will absolutely read the RDB catalogue directly and you can create and execute any report you want from that, without a Universe. Your example is no sweat at all. "Cross references" are no sweat at all. The non-techie users can create and run such reports themselves. I have done scores of these, it takes minutes. Sometimes (eg. for Supertype-Subtype structures), creating Views eases this exercise even further.
Your Question
Exposes issues which are obstacles to that.
What is coming across is that you do not have a Relational Database. Pouring some data into a container called "relational DBMS" does not transform that content into a Relational Database.
One aspect of a true RDB is that all definitions are in the ISO/IEC/ANSI standard SQL catalogue.
If your "foreign keys" are not in the catalogue, then you do not have Foreign Keys, and you do not have Referential Integrity that is defined and maintained by the server.
You probably do not have Rules and Check Constraints either; therefore you do not have Data Integrity that is defined and maintained by the server.
Noting your comments regarding changing "db" structure. Evidently then, you have not normalised the data.
If the data was normalised correctly, then the structure will not change.
Sure, the structure will be extended (columns added; new tables added) but the existing structure of Entities and Attributes will not change, because they have been (a) modelled correctly and (b) normalised
therefore any app code written, or any BO Universe built (and reports created from that), are not vulnerable to such extensions to the RDB; they continue running merrily along.
Yes of course they cannot get at the new columns and new tables, but providing that is part of the extension; the point is the existing structure, and everything that was dependent on it, is stable.
Noting your example query. That is prima facie evidence of complete lack of normalisation: Student.ClassroomName is a denormalised column. Instead of existing once for every Student, it should exist once for each Classroom.
I am responding to your question only, but it should be noted that lack of normalisation will result in many other problems, not immediately related to your question: massive data duplication; Update Anomalies; lack of independence between the "database" and the "app" (changes in one will affect the other); lack of integrity (data and referential); lack of stability, and therefore a project that never ends.
Therefore you not only have some "structure" that changes almost daily, you have no structure in the "structure" of that, that does not change. That level of ongoing change is classic to the Prototype stage in a project; it has not yet settled down to the Development stage.
If you use BO, or the auto-generated Universe, you will have to auto-generate the Universe daily, and then re-create the report definitions daily. The users may not like the idea of re-developing a Universe plus their reports daily. Normally they wait for the UAT stage of a project, if not the Production stage.
If you have Foreign Keys, since they are in the Standard SQL catalogue, BO will find them.
If you do not have Foreign Keys, but you have some sort of "relation" between files, and some sort of naming convention from which such "relations" can be inferred, BO has a check box somewhere in the auto-generate window that will "infer foreign keys from column names". Of course, it will find "relations" that you may not have intended.
If you do not have naming conventions, then there is nothing that BO can use to infer such "relations"; there is only so much magic that a product can perform.
And you still have the problem of the "structure" changing all the time, so whatever magic you are relying on today may not work tomorrow.
Answer
Business Objects, Crystal Reports, and all high-end to low-end report tools are primarily written for Relational Databases, which reside in an ISO/IEC/ANSI Standard SQL DBMS. That means, if the definition is in the catalogue, they will find it. The higher-end tools have various additional options (that's what you pay for) to assist with overcoming the limitations of sub-standard contents of an RDBMS, culminating in the Universe; but as you are aware, that takes a fair amount of effort and technical qualification to implement.
The best advice I can give you, therefore, is to get a qualified modeller and model your data, such that it is stable, free of duplication, and your code is stable, etc., etc.; such that simple (or heavy-duty) report tools can be used to (a) define reports easily and (b) run those report definitions without changing them daily. You will find that the "structure" that changes daily, doesn't. What is changing daily is your understanding of the data.
Then your wish will come true: the reports can be easily defined once, by the users, "cross references" and all, without a Universe, and they can be run whenever they like.
Related Material
This, your college or project, is not the first in the universe to be attempting to either (a) model their data or (b) implement a Database, relational or not. You may be interested in the work that other have already done in this area, as often much information is available free, in order to avoid re-inventing the wheel, especially if your project does not have qualified staff. Here is a simplified version (they are happy for me to publish a generic version but not the full customer-specific version) of a recent project I did for a local college; I wrote the RDB, they wrote the app.
Simplified College Data Model
Readers who are not familiar with the Relational Modelling Standard may find IDEF1X Notation useful.
Response to Comments
To be clear then. First a definition.
a Relational Database is, in chronological order, in the context of the last few days of 2010, with over 25 years of commonly available true relational technology [over 35 years of hard-to-use relational technology], for which there are many applicable Standards, and using such definitions (Wikipedia being unfit to provide said definitions, due to the lack of technical qualification in the contributors):
adheres to the Relational Model as a principle
Normalised to at least Third Normal Form (you need 5NF to be completely free of data duplication and Update Anomalies)
complies with the various existing Standards (as applicable to each particular area)
modelled by a qualified and capable modeller
is implemented in ISO/IEC/ANSI Standard SQL (that's the Declarative Referential Integrity ala Foreign Key definitions; Rule and Check constraints; Domains; Datatypes)
is Open Architecture (to be used by any application)
treated as a corporate asset, of substantial value
and therefore reasonably secured against unauthorised access, loss of data and referential integrity, and uncontrolled change (unplanned changes affecting other users, etc).
Without that, you cannot enjoy the power, performance, ease of change, and ease of use, of a Relational Database.
What it is not, is the content of an RDBMS platform. Pouring unstructured or un-organised data into a container labelled "Relational Database Engine" does not magically transform the content into the label of the container.
Therefore, if it is reasonably (not perfectly, not 100% Standard-compliant) a Relational Database, the BO Universe is definitely not required to access and use it to its full capability (limited only by the functions of the report tool).
If it has no DRI (FK definitions), and no older-style "defined keys", and no naming conventions (from which "relations" can be derived), and no matching datatypes, then no report tool (or human being) will be able to find anything.
It is not just the FK definitions.
Depending on exactly which bits of a Relational Database has been implemented in the data heap, and on the capability of the report tool (how much the licence costs), some capability somewhere within the two ends of the spectrum, is possible. BO without the Universe is the best of breed for report tools; their Crystal Reports item is about half the grunt. The Universe is required to provide the database definitions for the non-database.
Then there is the duplication issue. Imagine how an user is going to feel when they find out that the data that they finally got through to, after 3 months, turns out to be a duplicate that no one keeps up-to-date.
"Database" Object Definition
If you have unqualified developers or end users implementing "tables" in the "database", then there is no limit to the obstacles and contradictions they place on themselves. ("Here, I've got an RDBMS but the content isn't; I've got BO but it can't; I've got encryption but I've copied the payroll data to five places, so that people can get at it when they forget their encryption key".) Every time I think I have seen the limit of insanity, someone posts a question on SO, and teaches me again that there is no limit to insanity.
BO via an ODBC connection is capable of doing JOIN (cross reference) without a Universe as long as the correct FKs are defined?
(ODBC has nothing to do with it; it will operate the same via a native connection or via a browser.)
For that one time, re FKs defined correctly, yes. But the purpose of my long response is to identify that there are many other factors.
It isn't a BO or BO Universe question, it is "just how insane are the users' definitions and duplication". FKs could work sometimes and not others; could work today and not tomorrow.

Relational databases application [closed]

When developing an application which mostly interacts with a database, what is a good way to start? The application requires a lot of filtering based on user input, sorting and structuring.
The best way to start is by figuring out "user stories" (or "use cases" -- but the "story" approach tends to really work great and start dragging stakeholders into the shared storytelling...!-); on top of that, designing the database schema as the best-normalized idea you can find to satisfy all data layer needs of the user stories.
Thirdly, you may sketch layers such as views on top of the schema; fourthly, and optionally, triggers and stored procedures that might live in the DB to ensure consistency and ease of use for higher layers (but, no matter how strongly DBAs will push you towards those, don't accept their assurances that they're a MUST: they aren't -- if your storage layer is well designed in terms of normalization and maybe useful views on top, non-storage-layer functionality CAN always reside elsewhere, it's an issue of convenience and performance, NOT logical consistency, completeness, correctness).
I think the business layer and user-experience layers should come after. I realize that's a controversial position, but my point is that the user stories (and implied business-rules that come with them) have ALREADY told you a LOT about the business and user layers -- so, "nailing down" (relatively speaking -- agility and "embrace change!" should always rule;-) the data storage layer is the next order of business, and refining ("drilling down") the higher layers can and should come after.
When you get to the database layer you'll want to handle the database access via stored procedures. This will help give you additional protection against SQL Injection attacks, and make it much easier to push logic changes to the database layer.
If it's mostly users interacting with data, you can design using a form perspective.
What forms are needed for user input?
What forms are needed for output reports?
Once you've determined that, the use of the forms will dictate the business logic needed to be coded behind the scenes. You'll take the inputs, create the set of procedures or methods to deal with them, and output what is necessary. Once you know the inputs and outputs, you will be able to easily design the necessary functions.
The scope of the question is very broad. You are expecting me to tell you what to do; I can only do a good job of telling you how to do things. Do investigate using Hibernate/Spring. Since most of your operations look like querying the DB, Hibernate should help. Make sure the tables are sufficiently indexed so your queries can run faster when filtered on indexed fields. The challenging task is designing your DB layer, which will be the glue between your application and the DB. Design your DB layer to be generic enough that it can build queries based on the params you pass to it. Then move on to developing the presentation layer above it. Developing your application layer by layer helps, since it will force you to decouple the DB logic from the presentation logic. When you develop the DB layer, assume that not just your presentation layer but any client can call it. This will help you design applications that are scalable and adaptable to new requirements.
So bottom line: start with the DB, then the DB integration layer, the Controller, and last the Presentation Layer.
For the purpose of discussion, I'm going to assume that you are working with a starting application that doesn't have a pre-existing database. If this is false, I'd probably move the order of steps around quite a bit.
1 - Understand the Universe
First, you've got to get a sense of what's around you so you can really understand the problem that you are trying to solve.
User stories or use cases are often a good starting point. Starting with what tasks the user will try to do, and evaluating how frequently they are likely to occur, is a great starting point. I like to start with screen mockups as well; with or without lots of hands-on time with users, I find that having a screen gives our team something really finite to argue about.
What other tools exist in this sphere? These days, it seems to me that users never use just one tool; they swap around a lot. You need to know two main things about the other tools your users use:
(1) What will they be using as part of the process, alongside your tool? Consider direct input/output needs - what might they want to cut/copy/paste from or to? What tools might you want to offer file upload/download for, with specific formats? What tools are they using alongside yours that you might want to share terminology, layout, color coding, icons or other GUI elements with? Focus especially on the edges of the tools - a real gotcha I hit in a recent project was emulating the databases of previous tools. It turned out that we had massive database shift, and we would likely have been better off starting fresh.
(2) What (if anything) are you replacing or competing with? Steal the good stuff, dump and improve the bad stuff. Asking users is always best. If you can't, at least understanding the management initiative is important - is this tool replacing a horrible legacy tool? It may be legacy, but there may be the One True Feature that has kept the tool in business all these years...
At this stage, I find that things are really mushy - there's some screen shots, some writing, some schemas or ICDs - but not a really gelled clue.
2 - Logical Entities
Or at least that's what the OO books call it.
I don't care much for all the writing I see on this task - but I find that in any given system, I have one true diagram that I draw over and over. It's usually about 3-10 boxes, and hopefully less than an exponentially large number of lines connecting them.
The earlier you can get that diagram the better.
It doesn't matter to me if it's in UML, a database logical model, something older, or on the back of a napkin (as long as the napkin is shrouded in plastic and hung where everyone can see it).
The earlier you can make this diagram correctly, the better.
After the diagram is made, you can start working on the follow on work that may be more official.
I think it's a chicken-and-egg question whether you start with your data or you start with your screens and business logic. I know that you certainly want to optimize for database sizing and searchability... but how do you know exactly what your database needs are without screens and interfaces giving you a sense for the data?
In practice, I think this is an ever-churning cycle. You do a little bit everywhere, and then you change it all.
Even if you don't get to do a formal agile lifecycle, I think your best bet is to view design as agile -- it will take many repetitions and arguments before you really feel it's "right".
The most important thing to keep in mind is that your first, and most likely your 2nd and 3rd, attempts at designing the database will be wrong in some way. That might sound negative, maybe even a little rash (it's certainly more towards the 'agile' software design philosophy), but it's an important thing to keep in mind.
You still need to do your analysis thoroughly, of course; try to implement one feature at a time, but try to get all layers working first. That way you won't have to do too much rework when the specs change and you understand the issues better. Once you have a lot of data loaded into a system, changing things becomes increasingly difficult.
The main benefit of this approach is that you find out quickly where your design is broken, where you haven't separated your design layers correctly. One trick I find extremely useful is to do both an SQLite and a MySQL version, so seamless switching between the two is possible. Because the two use different dialects of SQL, it highlights where you have too tight a coupling between the layers.
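A rough sketch of that separation (the interface and names are hypothetical): the rest of the application talks only to an abstract storage interface, and SQLite- and MySQL-backed implementations can be swapped behind it, which quickly exposes any SQL-dialect assumptions leaking into higher layers:

#include <string>
#include <vector>

struct Record { int id; std::string payload; };

// Everything above the storage layer depends only on this interface.
class IDataStore {
public:
    virtual ~IDataStore() = default;
    virtual void save(const Record& r) = 0;
    virtual std::vector<Record> filterByPayload(const std::string& needle) = 0;
};

class SqliteStore : public IDataStore {
    // implement save/filterByPayload with the sqlite3 API
};

class MysqlStore : public IDataStore {
    // implement save/filterByPayload with the MySQL client library
};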
A good start would be to get familiar with Multitier architecture.
Then you design your presentation layer.
In your business logic layer you implement all the logic.
And finally you implement your data access layer.
Try to set up a prototype with something that is more productive than C++, for example Ruby, Python, or maybe even PHP.
When the prototype works and you see your data model is okay but your queries are too slow, then you can start using C++.
But as your question suggests, you have more options than data, and in this case the speed of a scripting language should be enough.