SQLite UPDATE 100ms - c++

I'm using the Qt database abstraction layer to interface with Sqlite3.
int x = GetTickCount();
database.exec("UPDATE controls SET dtype=32 WHERE id=2");
qDebug() << GetTickCount()-x;
The table is:
CREATE TABLE controls (
id INTEGER PRIMARY KEY AUTOINCREMENT,
internal_id TEXT,
name TEXT COLLATE NOCASE,
config TEXT,
dtype INTEGER,
dconfig TEXT,
val TEXT,
device_id INTEGER REFERENCES devices(id) ON DELETE CASCADE
);
Results in an update time of ~100 ms! Even though nothing else is accessing the db and there are a grand total of 3 records in that table.
That seems ridiculously long to me; at that rate, updating 10 records one by one would already take a second. Is this the performance I should expect from SQLite, or is something messing me up somewhere? SELECT queries are fast enough (~1 ms).
Edit 1
So it's not Qt.
sqlite3 *db;
if ( sqlite3_open("example.db",&db ) != SQLITE_OK )
{
qDebug() << "Could not open";
return;
}
int x = GetTickCount();
sqlite3_exec(db, "UPDATE controls SET dtype=3 WHERE id=2",0,0,0);
qDebug() << "Took" << GetTickCount() - x;
sqlite3_close(db);
This guy takes just the same amount of time.

Accessing the hard disk can take a long time, and by default every UPDATE runs in its own transaction that has to be synced to disk before it returns.
Try one of these:
PRAGMA journal_mode = memory;
PRAGMA synchronous = off;
With those settings SQLite will not touch the disk on every commit.
SQLite can be very, very fast when it is used well. The SELECT on your small database is answered from the cache, which is why it only takes about a millisecond.
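For example, with the sqlite3 C API from the edit above, applying those pragmas (or wrapping several updates in one explicit transaction so only a single commit hits the disk) would look roughly like this sketch:
sqlite3 *db = nullptr;
if (sqlite3_open("example.db", &db) != SQLITE_OK)
    return;
// Trade durability for speed: keep the journal in memory and skip the sync on commit.
sqlite3_exec(db, "PRAGMA journal_mode = memory;", nullptr, nullptr, nullptr);
sqlite3_exec(db, "PRAGMA synchronous = off;", nullptr, nullptr, nullptr);
// Alternatively (or additionally), batch several updates into one transaction.
sqlite3_exec(db, "BEGIN;", nullptr, nullptr, nullptr);
sqlite3_exec(db, "UPDATE controls SET dtype=3 WHERE id=2;", nullptr, nullptr, nullptr);
sqlite3_exec(db, "COMMIT;", nullptr, nullptr, nullptr);
sqlite3_close(db);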
There are other ways to tweak your database. See other questions like this:
Improve INSERT-per-second performance of SQLite?

Related

X devAPI batch insert extremely slow

I use the C++ connector for MySQL with the X DevAPI.
On my test server (my machine), doing single inserts in a loop is pretty slow (about 1000 per second) on a basic table with a few columns. It has a unique index on a char(40) field, which is possibly the cause of the slowness. But since the DB is configured in developer mode, I guess this should be expected.
I wanted to improve this by doing batch inserts. The problem is that it is even slower (about 20 per second). The execute() itself is quite fast, but the .values() calls are extremely slow. The code looks something like this:
try
{
mysqlx::TableInsert MyInsert = m_DBRegisterConnection->GetSchema()->getTable("MyTable").insert("UniqueID", "This", "AndThat");
for (int i = 0; i < ToBeInserted; i++)
{
MyInsert = MyInsert.values(m_MyQueue.getAt(i)->InsertValues[0],
m_MyQueue.getAt(i)->InsertValues[1],
m_MyQueue.getAt(i)->InsertValues[2]);
}
MyInsert.execute();
}
catch (std::exception& e)
{
}
Here is the table create:
CREATE TABLE `players` (
`id` bigint NOT NULL AUTO_INCREMENT,
`UniqueID` char(32) CHARACTER SET ascii COLLATE ascii_general_ci NOT NULL,
`PlayerID` varchar(500) DEFAULT NULL,
`Email` varchar(255) DEFAULT NULL,
`Password` varchar(63) DEFAULT NULL,
`CodeEmailValidation` int DEFAULT NULL,
`CodeDateGenerated` datetime DEFAULT NULL,
`LastLogin` datetime NOT NULL,
`Validated` tinyint DEFAULT '0',
`DateCreated` datetime NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `UniqueID_UNIQUE` (`UniqueID`)
) ENGINE=InnoDB AUTO_INCREMENT=21124342 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
Any clue why this is much slower? Is there a better way to do a batch insert?
The issue is in your code.
MyInsert = MyInsert.values(m_MyQueue.getAt(i)->InsertValues[0],
m_MyQueue.getAt(i)->InsertValues[1],
m_MyQueue.getAt(i)->InsertValues[2]);
You are copying the MyInsert object to a temporary over and over again and destroying it.
It should only be:
MyInsert.values(m_MyQueue.getAt(i)->InsertValues[0],
m_MyQueue.getAt(i)->InsertValues[1],
m_MyQueue.getAt(i)->InsertValues[2]);
However, since this could be prevented in the connector code, I'll report a bug to fix the copy behavior.
INSERT up to 1000 rows in a single INSERT statement. That will run 10 times as fast.
Is that CHAR(40) some form of UUID or hash? If so, would it be possible to sort the data before inserting? That may help it run faster. However, please provide SHOW CREATE TABLE so I can discuss this aspect further; I really need to see all the indexes and datatypes.
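Putting both suggestions together, a rough sketch of the loop with the copy removed and the rows grouped into chunks (the 1000-row chunk size is an assumption; m_DBRegisterConnection, m_MyQueue and ToBeInserted are carried over from the question as-is):
const int kChunkSize = 1000;      // up to ~1000 rows per INSERT, per the advice above
int inserted = 0;
while (inserted < ToBeInserted)
{
    mysqlx::TableInsert MyInsert = m_DBRegisterConnection->GetSchema()->getTable("MyTable").insert("UniqueID", "This", "AndThat");
    int end = inserted + kChunkSize;
    if (end > ToBeInserted)
        end = ToBeInserted;
    for (int i = inserted; i < end; i++)
    {
        // No reassignment: values() just appends a row to the statement.
        MyInsert.values(m_MyQueue.getAt(i)->InsertValues[0],
                        m_MyQueue.getAt(i)->InsertValues[1],
                        m_MyQueue.getAt(i)->InsertValues[2]);
    }
    MyInsert.execute();           // one multi-row INSERT per chunk
    inserted = end;
}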

Using C++ and MariaDB to operate on database records

I am trying to use the data records present in my database (MariaDB) by connecting from C++ and using mysql_query to execute SQL queries. My goal is to read the data from the database into my C++ code, perform operations on it, and then push the operated data, which is stored in a variable, back to the database.
I am facing the following problems:
Segmentation fault
Query out of sync
I think this is happening because I am using a loop to read the data from the database and then update the records back to the database inside the same loop. While updating the records I can update static data, but I am not able to update them with the variable that stores the operated values.
con = mysql_connection_setup(mysqlD); // connection using database id
resultRecord = mysql_execute_query(con, "SELECT Date FROM tableData LIMIT 1000;");
//take data from database into C++ code
cout << "Displaying 10 Database Records: \n"
for (int i = 0; i < 10; i++) // operation on first 10 values
{
databaseTable = mysql_fetch_row(resultRecord); // Fetch the particular data from database
char *DatePtr[i] = {databaseTable[0]}; // Date Column
string dateString = *DatePtr; // function uses string
cout << "Operated Date: ";
DateOperator(dateString); // Function for date operation which returns the operated date.
updateRecord = mysql_execute_query(con, "UPDATE tableData SET DataAge = (#variable value) WHERE Id < 0 ");
}
mysql_free_result(resultRecord); //Free Up the Query
mysql_close(con);
This is my first question on Stack Overflow so please let me know if any update is required in the question. Please help me out of this problem.
Thank you for your time and response.
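For what it's worth, here is a rough sketch of the same flow with the plain MySQL C API (the question's mysql_connection_setup/mysql_execute_query wrappers aren't shown, so standard calls are used instead; the connection parameters, the Id column in the SELECT, and the DateOperator stub are assumptions): read all rows with a buffered result set, check mysql_fetch_row for NULL, free the result, and only then issue the UPDATEs.
#include <mysql/mysql.h>
#include <cstdio>
#include <string>
#include <vector>

// Placeholder for the question's date logic.
std::string DateOperator(const std::string &date) { return date; }

int main()
{
    MYSQL *con = mysql_init(nullptr);
    if (!mysql_real_connect(con, "localhost", "user", "password", "mydb", 0, nullptr, 0))
        return 1;

    if (mysql_query(con, "SELECT Id, Date FROM tableData LIMIT 10") != 0)
        return 1;

    // mysql_store_result() buffers the whole result on the client, which avoids
    // "commands out of sync" when further statements are issued afterwards.
    MYSQL_RES *result = mysql_store_result(con);
    std::vector<std::pair<std::string, std::string>> updates; // (Id, operated date)

    MYSQL_ROW row;
    while ((row = mysql_fetch_row(result)) != nullptr) // the NULL check avoids the segfault
    {
        std::string id   = row[0] ? row[0] : "";
        std::string date = row[1] ? row[1] : "";
        updates.emplace_back(id, DateOperator(date));
    }
    mysql_free_result(result); // finish with the SELECT before updating

    for (const auto &u : updates)
    {
        char sql[256];
        std::snprintf(sql, sizeof(sql),
                      "UPDATE tableData SET DataAge = '%s' WHERE Id = %s",
                      u.second.c_str(), u.first.c_str()); // real code should escape or use prepared statements
        if (mysql_query(con, sql) != 0)
            std::fprintf(stderr, "UPDATE failed: %s\n", mysql_error(con));
    }

    mysql_close(con);
    return 0;
}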

Query breaks whenever I add MAX() functions or grouping statements

I have some tables laid out like so:
Airplane
(airplaneID number(2) primary key, airplaneName char(20), cruisingRange number(5));
Flights
(airplaneID number (2), flightNo number(4) primary key,
fromAirport char(20), toAirport char(20), distance number(4), depart timestamp,
arrives timestamp, foreign key (airplaneID) references Airplane);
Employees
(employeeID number(10) primary key, employeeName char(18), salary number(7));
Certified
(employeeID number(10), airplaneID number(2),
foreign key (airplaneID) references Airplane,
foreign key (employeeID) references Employees );
And I need to write a query to get the following information:
For each pilot who is certified for at least 4 airplanes, find the
employeeName and the maximum cruisingRange of the airplanes for which
that pilot is certified.
The query I have written is this:
SELECT Employees.employeeName, MAX(Airplane.cruisingRange)
FROM Employees
JOIN Certified ON Employees.employeeID = Certified.employeeID
JOIN Airplane ON Airplane.airplaneID = Certified.airplaneID
GROUP BY Employees.employeeName
HAVING COUNT(*) > 3
Lastly, this is the function that executes and reads in the query information:
void prepareAndExecuteIt() {
// Prepare the query
//sqlQueryToRun.len = strlen((char *) sqlQueryToRun.arr);
exec sql PREPARE dbVariableToHoldQuery FROM :sqlQueryToRun;
/* The declare statement, below, associates a cursor with a
* PREPAREd statement.
* The cursor name, like the statement
* name, does not appear in the Declare Section.
* A single cursor name can not be declared more than once.
*/
exec sql DECLARE cursorToHoldResultTuples cursor FOR dbVariableToHoldQuery;
exec sql OPEN cursorToHoldResultTuples;
int i = 0;
exec sql WHENEVER NOT FOUND DO break;
while(1){
exec sql FETCH cursorToHoldResultTuples INTO empName, cruiseRange;
printf("%s\t", empName);
printf("%s\n", cruiseRange);
i++;
// This is temporary while I debug so it doesn't just loop on forever when the query breaks.
if (i > 500){
printf("Entered break statement\n");
break;
}
}
exec sql CLOSE cursorToHoldResultTuples;
}
The query works until I add the MAX(), GROUP BY, and HAVING clauses. Then it just reads in nothing infinitely. I don't know if this is an issue with the way I've written my query or an issue with the C++ code that executes it. I'm using the Pro*C interface to access an Oracle database. Any ideas as to what's going wrong?
You can't mix implicit and explicit joins. I suggest
SELECT Employees.employeeName, MAX(Airplane.cruisingRange)
FROM Employees
JOIN Certified ON Employees.employeeID = Certified.employeeID
JOIN Airplane ON Airplane.airplaneID = Certified.airplaneID
GROUP BY Employees.employeeName
HAVING COUNT(*) > 3
which works fine.
db<>fiddle here
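Separately, one thing worth double-checking on the C++ side: the fetch loop prints cruiseRange with %s, but MAX(cruisingRange) is numeric. Since the DECLARE SECTION isn't shown in the question, the following is only a sketch of host variables that would match the query (the names and sizes are assumptions):
EXEC SQL BEGIN DECLARE SECTION;
    char empName[19];             /* employeeName is char(18), plus a terminator */
    int  cruiseRange;             /* MAX(cruisingRange) is number(5), so fetch it into an int */
    VARCHAR sqlQueryToRun[512];   /* has the .arr and .len members used above */
EXEC SQL END DECLARE SECTION;

/* ... after the FETCH ... */
printf("%s\t%d\n", empName, cruiseRange);   /* %d for the numeric column, not %s */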

QSqlQuery not giving correct results when selecting TINYINT column

I have a very simple MySQL Table and want to select some of the rows using QSqlQuery.
I am connected to my local development mysql server (Windows; 64-bit).
When I run a SELECT query with other tools like mysql workbench, I always get the correct results (of course!).
Doing the same with QSqlQuery gives me no rows at all!
QString sql = "SELECT `ID`, `Name`, `ModbusID`, `DeviceType` FROM `Devices`;";
qDebug() << sql;
QSqlQuery query(m_db);
if(!query.prepare(sql))
{
qFatal("could not prepare query");
return;
}
if(!query.exec())
{
qFatal("could not execute query");
return;
}
while(query.next())
qDebug() << "result";
qDebug() << "finished.";
SQL-Table Definition
CREATE TABLE IF NOT EXISTS `Devices` (
`ID` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
`MeasurementID` INT UNSIGNED NOT NULL,
`Name` VARCHAR(255) NOT NULL,
`ModbusID` TINYINT UNSIGNED NOT NULL,
`DeviceType` INT NOT NULL,
`TimestampCreated` DATETIME NOT NULL,
`TimestampLastModified` DATETIME,
FOREIGN KEY(`MeasurementID`) REFERENCES `Measurements`(`ID`) ON DELETE CASCADE,
UNIQUE (`MeasurementID`, `Name`)
);
In both programs I am connected to the same local mysql instance. There are no other mysql servers on my machine or on the network.
This happens with Oracle's MySQL and MariaDB! So the problem must be in my program.
Am I doing something wrong or what's happening here?
UPDATE: After multiple tests, the problem only occurs when I select ModbusID. I changed the table definition from
`ModbusID` TINYINT UNSIGNED NOT NULL,
to
`ModbusID` INT UNSIGNED NOT NULL,
and now it works. It looks like a bug in Qt, but I don't think that this is a good solution. If you know a way to use TINYINT then please write an answer or comment!
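One workaround that keeps the column as TINYINT (an untested sketch; it may or may not sidestep whatever the driver is doing): cast the column in the SELECT so the server reports it to Qt as a plain integer:
QString sql = "SELECT `ID`, `Name`, CAST(`ModbusID` AS UNSIGNED) AS `ModbusID`, `DeviceType` FROM `Devices`;";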

Qt/SQL - Get column type and name from table without record

Using Qt, I have to connect to a database and list the columns' types and names from a table. I have two constraints:
1. The database type must not be a problem (this has to work on PostgreSQL, SQL Server, MySQL, ...).
2. When I looked on the internet, I found solutions that work, but only if there are one or more records in the table. I have to get the columns' types and names whether or not the table contains records.
I searched a lot on the internet but I didn't find any solutions.
I am looking for an answer in Qt/C++ or using a query that can do that.
Thanks for your help!
QSqlDriver::record() takes a table name and returns a QSqlRecord, from which you can fetch the fields using QSqlRecord::field().
So, given a QSqlDatabase db,
fetch the driver with db.driver(),
fetch the list of tables with db.tables(),
fetch a QSqlRecord for each table with driver->record(tableName), and
fetch the number of fields with record.count() and the name and type with record.field(x); a short sketch follows below.
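A minimal sketch (assuming db is an already-open QSqlDatabase; this works even for empty tables because the metadata comes from the driver, not from a result row):
#include <QSqlDatabase>
#include <QSqlDriver>
#include <QSqlRecord>
#include <QSqlField>
#include <QStringList>
#include <QVariant>
#include <QDebug>

void dumpColumns(const QSqlDatabase &db)
{
    QSqlDriver *driver = db.driver();
    const QStringList tables = db.tables();
    for (const QString &tableName : tables) {
        const QSqlRecord record = driver->record(tableName);
        for (int i = 0; i < record.count(); ++i) {
            const QSqlField field = record.field(i);
            // Qt 5 style; in Qt 6, field.metaType().name() gives the type name.
            qDebug() << tableName << field.name() << QVariant::typeToName(field.type());
        }
    }
}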
Following the previous answers, I made the implementation below. It works well; I hope it helps.
{
    QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE", "demo_conn"); // create a db connection
    QString strDBPath = "db_path";
    db.setDatabaseName(strDBPath); // set the db file
    db.open();                     // the connection must be open before asking for the record
    QSqlRecord record = db.record("table_name"); // get the record of the given table
    int n = record.count();
    for (int i = 0; i < n; i++)
    {
        QString strField = record.fieldName(i); // column name; record.field(i) also carries the type
    }
}
QSqlDatabase::removeDatabase("demo_conn"); // remove the db connection (after db has gone out of scope)
Getting column names and types is a database-specific operation. But you can have a single C++ function that will use the correct sql query according to the QSqlDriver you currently use:
QStringList getColumnNames(const QSqlDatabase &db)
{
    QString sql;
    if (db.driverName().contains("QOCI", Qt::CaseInsensitive))
    {
        sql = ...
    }
    else if (db.driverName().contains("QPSQL", Qt::CaseInsensitive))
    {
        sql = ...
    }
    else
    {
        qCritical() << "unsupported db";
        return QStringList();
    }
    QSqlQuery res = db.exec(sql);
    ...
    // getting names from db-specific sql query results
}
I don't know of any existing mechanism in Qt which allows that (though it might exist, maybe by using QSqlTableModel). If no one else knows of such a thing, I would just do the following:
Create data classes to store the information you require, e.g. a class TableInfo which stores a list of ColumnInfo objects which have a name and a type.
Create an interface e.g. ITableInfoReader which has a pure virtual TableInfo* retrieveTableInfo( const QString& tableName ) method.
Create one subclass of ITableInfoReader for every database you want to support. This allows doing queries which are only supported on one or a subset of all databases.
Create a TableInfoReaderFactory class which allows creation of the appropriate ITableInfoReader subclass dependent on the used database
This allows you to have your main code independent from the database, by using only the ITableInfoReader interface.
Example:
Input:
database: The QSqlDatabase which is used for executing queries
tableName: The name of the table to retrieve information about
ITableInfoReader* tableInfoReader =
_tableInfoReaderFactory.createTableReader( database );
QList< ColumnInfo* > columnInfos = tableInfoReader->retrieveTableInfo( tableName );
foreach( ColumnInfo* columnInfo, columnInfos )
{
qDebug() << columnInfo->name() << columnInfo->type();
}
I found the solution: you just have to call the record() function on QSqlDatabase. You get an empty record, but you can still read the column types and names from it.