Comparing two SQLite databases in C++

I have two C++ functions, each of which constructs an SQLite database.
The first function constructs database version 1 and then upgrades it to the newest version by adding all tables/columns that have been added to the database since the first version. The other function constructs a database that is already in the newest version. As a result, each function produces one database that has all the necessary tables and columns, but no values.
I wish to write a unit test that compares the results of those two functions. I want to test that they have exactly the same tables and columns, and that all columns have the same CHECK and NOT NULL constraints. I only need to compare columns and tables, because the databases have no values in them at this point.
I would prefer to get the differences in a human readable form (to place them in an error message), but a boolean value (different/not different) is also fine.
How can I do that, given that both databases are in different variables and I cannot combine them?
There are other questions that suggest external applications for this, but can I do it in a simple way in C++? One possibility is to execute some SQL commands for each database, and compare the results in a for loop, but which commands do I need?

You can query the sqlite_master table to get the SQL used to create each table and compare that:
SELECT name, type, sql FROM sqlite_master;
For more information on sqlite_master, consult the SQLite documentation.
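For example, here is a minimal sketch of how that comparison could look with the SQLite C API, assuming both databases are already open as sqlite3* handles; the helper names loadSchema and describeDifferences are made up for illustration.

#include <sqlite3.h>
#include <map>
#include <sstream>
#include <string>

// Map "type:name" -> CREATE statement for every object in one database.
static std::map<std::string, std::string> loadSchema(sqlite3* db)
{
    std::map<std::string, std::string> schema;
    const char* query = "SELECT name, type, sql FROM sqlite_master;";
    sqlite3_stmt* stmt = nullptr;
    if (sqlite3_prepare_v2(db, query, -1, &stmt, nullptr) != SQLITE_OK)
        return schema;
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        const char* name = reinterpret_cast<const char*>(sqlite3_column_text(stmt, 0));
        const char* type = reinterpret_cast<const char*>(sqlite3_column_text(stmt, 1));
        const char* sql  = reinterpret_cast<const char*>(sqlite3_column_text(stmt, 2));
        schema[std::string(type ? type : "") + ":" + (name ? name : "")] = sql ? sql : "";
    }
    sqlite3_finalize(stmt);
    return schema;
}

// Produce a human-readable list of differences, suitable for a test failure message.
static std::string describeDifferences(sqlite3* db1, sqlite3* db2)
{
    std::ostringstream diff;
    const auto s1 = loadSchema(db1);
    const auto s2 = loadSchema(db2);
    for (const auto& [key, sql] : s1) {
        auto it = s2.find(key);
        if (it == s2.end())
            diff << "only in db1: " << key << "\n";
        else if (it->second != sql)
            diff << "definition differs for " << key << ":\n  db1: " << sql
                 << "\n  db2: " << it->second << "\n";
    }
    for (const auto& kv : s2)
        if (s1.find(kv.first) == s1.end())
            diff << "only in db2: " << kv.first << "\n";
    return diff.str();  // an empty string means the schemas match
}

One caveat: a table upgraded with ALTER TABLE ADD COLUMN may store slightly different CREATE statement text in sqlite_master than a table created fresh, so if the raw sql strings differ only in formatting you may prefer to compare PRAGMA table_info output per table instead.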

Related

When to use CTE and temp table?

I know this is a common question, but most frequently people ask about the performance difference between the two.
What I'm asking for is use cases of CTEs and temp tables, to better understand when to use each.
With a temp table you can use CONSTRAINTs and INDEXes. You can also create a CURSOR on a temp table, whereas a CTE terminates at the end of the query (emphasizing a single query).
I will answer with specific use cases from an application I have experience with, in order to illustrate my point.
Common use cases in an example enterprise application I've worked with are as follows:
Temp Tables
Normally, we use temp tables to transform data before an INSERT or UPDATE into the appropriate tables when the work requires more than one query: gathering similar data from multiple tables in order to manipulate and process it.
There are different types of orders (order_type1, order_type2, order_type3), all of which live in different TABLEs but have similar COLUMNs. We have a STORED PROCEDURE that UNIONs all these tables into one #orders temp table and UPDATEs a person's suggested orders depending on existing orders.
CTEs
CTEs are great for readability when dealing with single queries. When creating reports that require analysis using PIVOTs, aggregates, etc. with tons of lines of code, CTEs aid readability by letting you separate a huge query into logical sections.
Sometimes there is a combination of both: when more than one query is required, it's still useful to break down some of those queries with CTEs.
I hope this is of some usefulness, cheers!

MySQL check if table has correct schema

I am currently developing server software in C++ with a MySQL data backend. I am using the official MySQL/connector library from Oracle to work with MySQL. The connection itself is working and I'm not having any issues with that.
My problem is that the database and the table schemas tend to change every once in a while because new tables and columns keep getting added. Also, existing columns may be changed for the same reason. To make sure I recognize outdated server software quickly, I wanted to add a warning when the database has changed.
My first idea was to hardcode how the database (and tables and such) should look and then check whether the current database matches the hardcoded data. But I have no clue how to achieve that.
In summary I want to be able to detect whether
A table has been added or removed
A column in a table has been altered
A column in a table has been added or removed
with as little C++ code as possible. Also it should be quite easy to maintain.
Additional information will be added when required.
I would suggest the following approach:
1) Fork and execute the mysql command-line client. Set up a pair of pipes to mysql's standard input and output.
2) At this point you should be able to execute simple commands by piping them to mysql via the standard input pipe, and read the output from the standard output pipe (a simplified, one-shot sketch of this idea appears after step 4 below).
You will need to make careful notes as to the output format of each mysql command, so that you know when you finished reading its output, and you can send the next command.
3) As the first order of business, execute:
show tables;
The output that comes back will list all tables in the database. Parsing the output into a list of table names is trivial. Then execute for each table:
show create table <tablename>;
The resulting output shows all fields in the table, its keys, and constraints: pretty much all of the table's schema. Lather, rinse, repeat for every table.
4) In this manner you can capture a basic schema of the entire database for comparison purposes. If necessary, use the same approach to capture the triggers and other objects. You'll likely need to do some minor massaging of the data and exclude a few bits. "show create table", for example, will include the current AUTO_INCREMENT values, which you can ignore.
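To make steps 1 and 2 concrete, here is a simplified sketch of the same idea. Instead of keeping one mysql process alive behind bidirectional pipes, it runs a single statement per invocation with POSIX popen() and captures the output; the database name is a placeholder, and credentials/host options would be appended as needed.

#include <cstdio>
#include <string>

// Run one statement through the mysql command-line client and return its output.
// -N suppresses the column-header row so the result is just the data lines.
static std::string runMysql(const std::string& statement)
{
    const std::string cmd = "mysql -N -e \"" + statement + "\" my_database";
    std::string output;
    if (FILE* pipe = popen(cmd.c_str(), "r")) {
        char buf[4096];
        while (fgets(buf, sizeof(buf), pipe))
            output += buf;
        pclose(pipe);
    }
    return output;
}

// Usage sketch:
//   std::string tables = runMysql("show tables");
//   std::string ddl    = runMysql("show create table some_table");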
This general approach, of driving a mysql process via its standard input and output, is a bit wobbly, of course. With a little bit of work, you can use mysql's native client library to execute all of these commands and capture their results directly. This should be more reliable.
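A rough sketch of that native-library route, using the MySQL C API (Connector/C++ could be used the same way). Connection setup is omitted; conn is assumed to be an already-connected handle, and the header path may differ by platform.

#include <mysql/mysql.h>
#include <string>
#include <vector>

// Collect the SHOW CREATE TABLE output for every table into one schema string
// that can be compared against a hardcoded or previously captured snapshot.
static std::string captureSchema(MYSQL* conn)
{
    std::string schema;
    std::vector<std::string> tables;

    if (mysql_query(conn, "SHOW TABLES") == 0) {
        if (MYSQL_RES* res = mysql_store_result(conn)) {
            while (MYSQL_ROW row = mysql_fetch_row(res))
                tables.push_back(row[0]);
            mysql_free_result(res);
        }
    }

    for (const auto& table : tables) {
        const std::string q = "SHOW CREATE TABLE `" + table + "`";
        if (mysql_query(conn, q.c_str()) == 0) {
            if (MYSQL_RES* res = mysql_store_result(conn)) {
                if (MYSQL_ROW row = mysql_fetch_row(res))
                    schema += row[1];  // second column holds the CREATE TABLE text
                schema += "\n";
                mysql_free_result(res);
            }
        }
    }
    return schema;  // strip AUTO_INCREMENT=... values before comparing
}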

Django and Oracle nested table support

Can Django support Oracle nested tables or varrays or collections in some manner? Asking just for completeness as our project is reworking the data model, attempting to move away from EAV organization, but I don't like creating a bucket load of dependent supporting tables for each main entity.
e.g.
(not the proper Oracle syntax, but gets the idea across)
Events
eventid
report_id
result_tuple (result_type_id, result_value)
anomaly_tuple(anomaly_type_id, anomaly_value)
contributing_factors_tuple(cf_type_id, cf_value)
etc,
where there can be multiple rows of the tuples for one eventid.
Each of these tuples can, of course, exist as a separate table, but this seems more concise. If it's something Django can't do, or I can't easily modify the model classes to do, then perhaps just having Django create the extra tables is the way to go.
--edit--
I note that django-hstore is doing something very similar to what I want to do, but using PostgreSQL's hstore capability. Maybe I can branch off of that for an Oracle nested table implementation. I dunno... I'm pretty new to Python and Django, so my reach may exceed my grasp in this case.
Querying a nested table gives you a cursor to traverse the tuples, one member of which is yet another cursor, so you can get the rows from the nested table.

Verify the structure of a database? (SQLite in C++ / Qt)

I was wondering what the "best" way to verify the structure of my database is with SQLite in Qt / C++. I'm using SQLite, so there is a file which contains my database, and I want to make sure that, when launching the program, the database is structured the way it should be, i.e., it has X tables each with their own Y columns, appropriately named, etc. Could someone point me in the right direction? Thanks so much!
You can get a list of all the tables in the database with this query:
select tbl_name from sqlite_master;
And then, for each table returned, run this query to get column information:
pragma table_info(my_table);
For the pragma, each row of the result set will contain: a column index, the column name, the column's type affinity, whether the column may be NULL, and the column's default value.
(I'm assuming here that you know how to run SQL queries against your database in the SQLite C interface.)
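In case a concrete starting point helps, here is a minimal sketch of running that pragma through the C interface, assuming db is an already-open sqlite3* handle and the table name is trusted (it is interpolated directly into the pragma):

#include <sqlite3.h>
#include <cstdio>
#include <string>

static void printTableInfo(sqlite3* db, const std::string& table)
{
    const std::string query = "PRAGMA table_info(" + table + ");";
    sqlite3_stmt* stmt = nullptr;
    if (sqlite3_prepare_v2(db, query.c_str(), -1, &stmt, nullptr) != SQLITE_OK)
        return;
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        // Result columns: cid, name, type, notnull, dflt_value, pk
        const char* name = reinterpret_cast<const char*>(sqlite3_column_text(stmt, 1));
        const char* type = reinterpret_cast<const char*>(sqlite3_column_text(stmt, 2));
        const char* dflt = reinterpret_cast<const char*>(sqlite3_column_text(stmt, 4));
        std::printf("%d %s %s notnull=%d default=%s pk=%d\n",
                    sqlite3_column_int(stmt, 0),
                    name ? name : "",
                    type ? type : "",
                    sqlite3_column_int(stmt, 3),
                    dflt ? dflt : "NULL",
                    sqlite3_column_int(stmt, 5));
    }
    sqlite3_finalize(stmt);
}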
If you have Qt and thus QtSql at hand, you can also use the QSqlDatabase::tables() (API doc) method to get the tables and QSqlDatabase::record(tablename) to get the field names. It can also give you the primary key(s), but for further details you will have to follow pkh's advice to use the table_info pragma.
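A short sketch of that Qt route, assuming the connection has already been opened (for example via QSqlDatabase::addDatabase("QSQLITE") followed by open()):

#include <QSqlDatabase>
#include <QSqlRecord>
#include <QStringList>
#include <QDebug>

static void dumpStructure(const QSqlDatabase& db)
{
    const QStringList tables = db.tables();        // all user table names
    for (const QString& table : tables) {
        const QSqlRecord rec = db.record(table);   // field metadata for the table
        QStringList fields;
        for (int i = 0; i < rec.count(); ++i)
            fields << rec.fieldName(i);
        qDebug() << table << ":" << fields.join(", ");
    }
}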

Simple SubSonic 3 Generation issues

I'm trying to do a proof of concept using SubSonic 3, but straight away I'm hitting numerous errors with the generation. I started making alterations to the generator settings, but that seems a little odd when I'm just trying to do a simple one-to-one mapping of my DB.
Firstly, I found an SP that had #delagate as an SP parameter name. This was easily fixed, but the fix should probably be in the standard templates, as a user shouldn't have to make template changes for this simple an issue.
Next, I found that the system choked on two tables and tried to create identical signatures for them.
The tables were:
Field
Fields
Now, I know SubSonic 2 had a fixPluralClassName property, but I'm buggered if I can find one in the template for SubSonic 3.
Any help on that one will get me started.
Generally, 'X' and 'Datum' type appendages/substitutions happen when you have used a 'reserved' word in a column or table name; in this case, 'reserved' means a word that SubSonic doesn't like to use for data objects.
A couple of rules I follow are:
Ensure both table names and column names are not 'reserved' words (i.e. 'Data' or 'Int' or 'Table')
Ensure that each table has a primary key
Don't use date and time column types, as they are not supported yet (DateTime is, just not the Date and Time types)
Don't have a column with the same name as the table
The SubSonic FAQ might be helpful.