Need to create Indices (Index) for SQLite3 Database - C++

I am using C++ with SQLite3 to create a database. The database is updated regularly and after a point becomes very large (several GB), which makes queries executed from C++ very slow.
I read on the SQLite site that for large database tables we can create indexes to optimize and speed up queries. I have now successfully written a CREATE INDEX statement just after creating my database:
sqlQuery << "CREATE INDEX IF NOT EXISTS 'IdxNode_Val' ON node_values (aliasDevice,aliasProperty,sourceTimestamp);";
int rc = sqlite3_exec(this->db, sqlQuery.str().c_str(), 0, 0, &zErrMsg);
Question:
Since my database is updated on a daily basis (new entries added), will the index also be updated automatically, or do I have to write a statement to update the index just after my INSERT statements?
Thanks & best Regards
rG

SQLite updates indexes automatically. Every INSERT, UPDATE, and DELETE on a table also maintains all of that table's indexes as part of the same statement, so you do not need to write anything after your INSERTs.
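For illustration, a minimal sketch following the question's sqlite3_exec pattern (the column types and sample values are assumptions; they are not given in the question): once IdxNode_Val exists, plain INSERTs are all that is needed, and SQLite maintains the index as part of each statement.

#include <sqlite3.h>
#include <sstream>

// Sketch: the index was created once with CREATE INDEX IF NOT EXISTS;
// every subsequent INSERT below also updates IdxNode_Val automatically.
// Column types and sample values are assumptions, not from the question.
void insertNodeValue(sqlite3 *db) {
    char *zErrMsg = nullptr;
    std::ostringstream sqlQuery;
    sqlQuery << "INSERT INTO node_values "
             << "(aliasDevice, aliasProperty, sourceTimestamp) "
             << "VALUES ('device1', 'property1', 1700000000);";
    int rc = sqlite3_exec(db, sqlQuery.str().c_str(), nullptr, nullptr, &zErrMsg);
    if (rc != SQLITE_OK) {
        // Handle/log the error as appropriate for your application.
        sqlite3_free(zErrMsg);
    }
}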


Doctrine Migration from string to Entity

I have an apparently simple task to perform: I have to convert several table columns from a string to a new entity (integer FOREIGN KEY) value.
I have 10 tables in my DB with a column called "app_version", which at the moment is a VARCHAR column. Since I'm about to do a small refactor of the project, I'd like to convert those VARCHAR columns into a new column containing an ID that represents the newly mapped value, so:
V1 -> ID: 1
V2 -> ID: 2
and so on
I've prepared a Doctrine migration (I'm using Symfony 3.4) which performs the conversion by DROPPING the old column and adding the new id column for the AppVersion table.
Of course, I need to preserve my existing data.
I know about preUp and postUp, but I can't figure out how to do it without hitting DB performance too hard. I could collect the data via SELECT in preUp and store it in some PHP variables to use later in postUp to write the new values to the DB, but since I have 10 tables with many rows, this becomes a disaster real fast.
Do you have any suggestions I could apply to make this smooth and easy?
Please don't ask why I have to do this refactor now instead of setting up the DB correctly the first time. :D
Keywords for ideas: transactions? Bulk queries? Avoiding PHP variable storage? Writing an SQL file? Anything could be good.
I feel dumb, but the solution was much simpler: I created a custom migration with all the "ALTER TABLE [table_name] DROP app_version" statements, to be executed AFTER one that simply does:
UPDATE [table_name] SET app_version_id = 1 WHERE app_version = 'V1';

How to temporarily disable Django indexes (for SQLite)

I'm trying to create one large SQLite database from around 500 smaller databases (each 50-200 MB) to load into Django, and would like to speed up this process. I'm doing this via a custom management command.
This answer helped me a lot, reducing the processing time to around a minute per smaller database. However, that's still quite a long time.
The one thing from that answer I haven't done is disabling database indexing in Django and re-creating the indexes afterwards. I think this matters in my case, as my database has a few tables with many rows.
Is there a way to do that in Django while it's running live? If not in Django, then perhaps there's some SQLite query to remove all the indexes and re-create them after I insert my records?
I just used raw SQL to remove the indexes and re-create them. This improved the time to create a big database from 2 of my small databases from 1:46 to 1:30, which is quite significant. It also reduced the size from 341.7 MB to 321.1 MB.
# Delete all indexes for faster database creation
from django.db import connection

with connection.cursor() as cursor:
    # app_label is defined elsewhere in the management command
    cursor.execute(
        "SELECT name, sql FROM sqlite_master "
        f"WHERE name LIKE '{app_label}_%' AND type = 'index'"
    )
    indexes = cursor.fetchall()
    names, create_sqls = zip(*indexes)
    for name in names:
        cursor.execute(f'DROP INDEX "{name}"')
After I create the database, re-create the indexes:
# Re-create indexes
with connection.cursor() as cursor:
    for create_sql in create_sqls:
        cursor.execute(create_sql)

What happens if I drop some "special" SQLite tables

First, some background info; maybe someone can suggest a better way than what I'm trying to do. I need to export a SQLite database into a text file. For that I have to use C++, and I have chosen the CppSQLite lib.
What I do is collect the CREATE queries and then export every table's data. The problem is that there are tables like sqlite_sequence and sqlite_statN. During import I cannot create these tables because they serve special purposes, so the main question: would it affect stability if these tables are gone?
Another part of the question: is there any way to export and import a SQLite database using CppSQLite or any other SQLite lib for C++?
P.S. Copying the database file is not an appropriate solution in this particular situation.
Object names beginning with sqlite_ are reserved; you cannot create them directly even if you wanted to. (But you can change the contents of some of them, and you can drop the sqlite_stat* tables.)
The sqlite_sequence table is created automatically when a table with an AUTOINCREMENT column is created.
The record for the actual sequence value of a table is created when it is first needed.
If you want to save/restore the sequence value, you have to re-insert the old value.
The sqlite_stat* tables are created by ANALYZE.
Running ANALYZE after importing the SQL text would be easiest, but slow; faster would be to create an empty sqlite_stat* table by running ANALYZE on a table that will not be analyzed (such as sqlite_master), and then inserting the old records manually.
All this is implemented in the .dump command of the sqlite3 command-line tool (source code in shell.c):
SQLite version 3.8.4.3 2014-04-03 16:53:12
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create table t(x integer primary key autoincrement);
sqlite> insert into t default values;
sqlite> insert into t default values;
sqlite> analyze;
sqlite> .dump
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE t(x integer primary key autoincrement);
INSERT INTO "t" VALUES(1);
INSERT INTO "t" VALUES(2);
ANALYZE sqlite_master;
INSERT INTO "sqlite_stat1" VALUES('t',NULL,'2');
DELETE FROM sqlite_sequence;
INSERT INTO "sqlite_sequence" VALUES('t',2);
COMMIT;
sqlite>
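
For the export part of the question: as far as I know, neither CppSQLite nor the underlying sqlite3 C API has a built-in dump function, so you would re-create the relevant part of .dump yourself. Below is a minimal sketch of the schema half, reading the CREATE statements from sqlite_master while skipping the reserved sqlite_* tables (the file name is a placeholder; dumping the row data would take additional SELECTs per table):

#include <sqlite3.h>
#include <cstdio>

// Sketch: print the CREATE statements of all ordinary objects, skipping
// the reserved sqlite_* tables, the same way .dump starts its output.
int dumpSchema(const char *filename) {
    sqlite3 *db = nullptr;
    if (sqlite3_open(filename, &db) != SQLITE_OK)
        return 1;
    const char *query =
        "SELECT sql FROM sqlite_master "
        "WHERE sql IS NOT NULL AND name NOT LIKE 'sqlite_%'";
    sqlite3_stmt *stmt = nullptr;
    if (sqlite3_prepare_v2(db, query, -1, &stmt, nullptr) == SQLITE_OK) {
        while (sqlite3_step(stmt) == SQLITE_ROW)
            std::printf("%s;\n", (const char *)sqlite3_column_text(stmt, 0));
        sqlite3_finalize(stmt);
    }
    sqlite3_close(db);
    return 0;
}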

How to check in SQLite3 whether the number of columns has changed or not

I am coding in C and using SQLite3 as the database. I want to ask how we can check whether the number of columns in a table has changed. The situation is this: I am going to run the application with a new executable, according to which new columns will be added to the table. So when the DB is created again, the application should check whether the table schema is the same or not, and create the table according to the new schema. I am developing an application for an embedded environment (specifically for a device).
When I change the number of columns of a table in the DB and run the new executable on the device, the new tables are not created because the old tables are still present; but when I delete the old DB and create fresh tables, the changes show up. So how do I handle this situation?
Platform: Linux, gcc compiler
Thanks in advance
Please guide me like this (assuming the old DB is already present): first check the schema of the old DB, and if some of the tables have changed (new columns added or deleted), create the new DB according to that.
Use Versioning and Explicit Column References
You can make use of database versioning to help with this sort of problem.
Create a separate table with only one column and one record to store the database version.
Whenever you upgrade your database, set the version number in the separate table.
Design your insert queries to specify the columns.
Define default values for new columns so that old programs insert default values.
Examples
UPDATE databaseVersion SET version=2;
Version 1 Query
INSERT INTO MyTable (id, var1, var2) VALUES (2, '5', '6');
Version 2 Query
INSERT INTO MyTable (id, var1, var2, var3) VALUES (3, '5', '6', '7');
This way your queries should still be compatible with the new DB when using the old program.
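
At startup, the application can read the stored version and decide whether it needs to run its migration statements before using the tables. Here is a minimal sketch against the plain sqlite3 C API (the databaseVersion table and its version column follow the example above; treating a missing table as version 0 is an assumption of this sketch):

#include <sqlite3.h>

// Sketch: read the single version record; 0 means the table does not
// exist yet (fresh database), so the full schema should be created.
static int getDatabaseVersion(sqlite3 *db) {
    sqlite3_stmt *stmt = nullptr;
    int version = 0;
    const char *query = "SELECT version FROM databaseVersion";
    if (sqlite3_prepare_v2(db, query, -1, &stmt, nullptr) == SQLITE_OK) {
        if (sqlite3_step(stmt) == SQLITE_ROW)
            version = sqlite3_column_int(stmt, 0);
        sqlite3_finalize(stmt);
    }
    return version;
}

// Usage on startup (table/column names are the example's, not fixed):
//   if (getDatabaseVersion(db) < 2) {
//       sqlite3_exec(db, "ALTER TABLE MyTable ADD COLUMN var3 TEXT DEFAULT '';", 0, 0, 0);
//       sqlite3_exec(db, "UPDATE databaseVersion SET version=2;", 0, 0, 0);
//   }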

SQLite - pre-allocating database size

Is there a way to pre-allocate my SQLite database to a certain size? Currently I'm adding and deleting a number of records, and I would like to avoid this overhead at creation time.
The fastest way to do this is with the zeroblob function:
Example:
Y:> sqlite3 large.sqlite
SQLite version 3.7.4
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create table large (a);
sqlite> insert into large values (zeroblob(1024*1024));
sqlite> drop table large;
sqlite> .q
Y:> dir large.sqlite
Volume in drive Y is Personal
Volume Serial Number is 365D-6110
Directory of Y:\
01/27/2011 12:10 PM 1,054,720 large.sqlite
Note: As Kyle properly indicates in his comment, there is a limit to how big each blob can be, so you may need to insert multiple blobs if you expect your database to be larger than ~1 GB.
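
For example, the same trick through the C API could look like the sketch below (the function and table names and the 512 MB chunk size are arbitrary choices of this sketch, kept under the default blob limit; error handling is omitted):

#include <sqlite3.h>

// Sketch: grow the database file by inserting zero-filled blobs in
// chunks below the blob size limit, then drop the table. The freed
// pages stay allocated in the file as long as auto_vacuum is off.
void preallocate(sqlite3 *db, int halfGigChunks) {
    sqlite3_exec(db, "CREATE TABLE large(a);", nullptr, nullptr, nullptr);
    for (int i = 0; i < halfGigChunks; ++i) {
        sqlite3_exec(db,
                     "INSERT INTO large VALUES (zeroblob(512*1024*1024));",
                     nullptr, nullptr, nullptr);
    }
    sqlite3_exec(db, "DROP TABLE large;", nullptr, nullptr, nullptr);
}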
There is a hack - insert a bunch of data into the database until the database size is what you want, and then delete the data. This works because:
"When an object (table, index, or
trigger) is dropped from the database,
it leaves behind empty space. This
empty space will be reused the next
time new information is added to the
database. But in the meantime, the
database file might be larger than
strictly necessary."
Naturally, this isn't the most reliable method. (Also, you will need to make sure that auto_vacuum is disabled for this to work.) You can learn more here: http://www.sqlite.org/lang_vacuum.html