SQLite3 slow execution (ESP32 & C++)?

I have set up the SQLite3 C interface on an ESP32 (compiler: Arduino IDE) with a CPU frequency of 240 MHz.
All the functions work properly except sqlite3_step().
This command is incredibly slow.
My function inserts the contents of a vector of size 1k (vector<struct DATA>), where each element carries three data pairs:
struct DATA {
    pair<int, byte> id;             // .first = value, .second = bind-parameter index
    pair<int, byte> value;
    pair<string, byte> s_timestamp;
};

bool ESPDB::insert_vector(const vector<struct DATA> &data_buffer) {
    char *errmsg;
    if (sqlite3_open(("/sdcard/" + database_).c_str(), &db) != SQLITE_OK) {
        return 0;
    }
    sqlite3_stmt *stmt;
    sqlite3_prepare_v2(db, ("INSERT INTO " + table_ + " " + columns_ + " VALUES " +
                            placeholder_).c_str(), -1, &stmt, 0);
    sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &errmsg);
    for (const struct DATA &data_pair : data_buffer) {
        if (data_pair.id.first != 0 && data_pair.id.second != 0) {
            sqlite3_bind_int(stmt, data_pair.id.second, data_pair.id.first);
        }
        if (data_pair.value.first != 0 && data_pair.value.second != 0) {
            sqlite3_bind_int(stmt, data_pair.value.second, data_pair.value.first);
        }
        if (data_pair.s_timestamp.first.length() > 0 && data_pair.s_timestamp.second != 0) {
            sqlite3_bind_text(stmt, data_pair.s_timestamp.second, data_pair.s_timestamp.first.c_str(),
                              data_pair.s_timestamp.first.length(), SQLITE_STATIC);
        }
        sqlite3_step(stmt);
        sqlite3_reset(stmt);
    }
    sqlite3_exec(db, "END TRANSACTION", NULL, NULL, &errmsg);
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 1;
}
The average result of the performance test was around 3000 ms for a 1k vector (a few times it executed in less than 300 ms).
The step command itself takes around 16-25 ms for each row insert.
The strange thing is that sqlite3_step() speeds up after around 100 inserts, finally reaching 0 ms (<1 ms) per row.
The sequence of the code is as follows:
-open database
-drop table
-create table if not exists
-single insert function (single DATA struct)
-select query
-insert the vector (size = 1000)
-select query in JSON
In ALL functions that handle the database connection, I open and close the database as in the following code.
Drop-table example:
bool ESPDB::drop_table(const string &table) {
    if (sqlite3_open(("/sdcard/" + database_).c_str(), &db) != SQLITE_OK) {
        return 0;
    }
    sql_stmt_(("DROP TABLE IF EXISTS " + table).c_str());
    sqlite3_close(db);
    return 1;
}
Update:
-opened the database only once
-ensured 240 MHz CPU speed
Here is a chart for visualization:
(The issue is that later on I have a vector size of 250, due to the small available heap in my code - this makes it really slow. As you can see, the first ~130 inserts take around 15 ms each; after that it operates really fast.)
Chart 3:
time to insert a vector of size 100 in a while loop - ~85 total executions

Related

How to store varbinary(max) and varchar(max) data in SQL Server when length is not known (using C++ ODBC)

How do I store varbinary(max) and varchar(max) columns using the C++ ODBC APIs? Any advice here?
I am using the SQL Server Native Client.
I am binding arrays of parameters using column-wise binding. The aim here is to prepare the statement once and insert/update multiple rows at a time to improve performance. I got sample code for this from this link:
https://learn.microsoft.com/en-us/sql/odbc/reference/develop-app/binding-arrays-of-parameters?view=sql-server-ver15
I also found this link, where the use of SQLPutData is shown:
https://learn.microsoft.com/en-us/sql/odbc/reference/syntax/sqlputdata-function?view=sql-server-ver15
But in that link the example uses SQL_LEN_DATA_AT_EXEC, i.e. the length of the column is known at compile time. What is the solution when the data length is not known in advance, or when the data length changes for every row of the same column? How should we bind such columns in that case?
I tried passing SQL_SS_LENGTH_UNLIMITED as the columnSize parameter to SQLBindParameter, but this does not even compile.
I also tried passing 0 as the columnSize parameter to SQLBindParameter, hoping it would somehow work, but this also does not work.
I have a table called "Parts" which has 5 columns, of which 2 are varchar(max), and I am trying to insert 3 rows. Below is my code for the same. I have TEXTSIZE defined as 12000; as my sample application's data length will not cross 12000, I use this value for the time being. But in the real application the data can be bigger than this.
Sample application code
SQLBindParameter(hstmt1, 1, SQL_PARAM_INPUT, SQL_C_ULONG, SQL_INTEGER, 5, 0,
                 PartIDArray, 0, PartIDIndArray);
SQLBindParameter(hstmt1, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR, DESC_LEN - 1, 0,
                 DescArray, DESC_LEN, DescLenOrIndArray);
SQLBindParameter(hstmt1, 3, SQL_PARAM_INPUT, SQL_C_ULONG, SQL_INTEGER, 7, 0,
                 PriceArray, 0, PriceIndArray);
retcode = SQLBindParameter(hstmt1, 4, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_LONGVARCHAR, TEXTSIZE, 0,
                           (VOID *)4, 0, cbTextSize);
retcode = SQLBindParameter(hstmt1, 5, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_LONGVARCHAR, TEXTSIZE, 0,
                           (VOID *)5, 0, cbTextSize1);

for (i = 0; i < ARRAY_SIZE; i++) {
    //GetNewValues(&PartIDArray[i], DescArray[i], &PriceArray[i]);
    std::cout << "Enter the value for PartID (integer)" << std::endl;
    std::cin >> PartIDArray[i];
    std::cout << std::endl;
    std::cout << "Enter the value for description (string)" << std::endl;
    std::cin >> DescArray[i];
    std::cout << std::endl;
    std::cout << "Enter the value for price (integer)" << std::endl;
    std::cin >> PriceArray[i];
    std::cout << std::endl;

    PartIDIndArray[i] = 0;
    DescLenOrIndArray[i] = SQL_NTS;
    PriceIndArray[i] = 0;
    cbTextSize[i] = SQL_DATA_AT_EXEC;
    cbTextSize1[i] = SQL_DATA_AT_EXEC;
}
retcode = SQLPrepare(hstmt1, (SQLCHAR*)TEXT("INSERT INTO Parts (PartID, Description, Price, memo, memo1) VALUES (?, ?, ?, ?, ?)"), SQL_NTS);
// Execute the statement.
retcode = SQLExecute(hstmt1);
if ((retcode != SQL_SUCCESS) && (retcode != SQL_NEED_DATA) && (retcode != SQL_SUCCESS_WITH_INFO)) {
printf("SQLExecDirect Failed\n\n");
Cleanup();
return(9);
}
PTR pParmID;
int index1 = -1;
for (i = 0; i < 6; i++)
{
    if (i % 2 == 0)
        index1++;
    retcode = SQLParamData(hstmt1, &pParmID);
    size_t index = ((size_t)pParmID) - 4;
    char* data = Data[index][index1];
    lbytes = strlen(data);
    if (retcode == SQL_NEED_DATA) {
        while (lbytes > 256) {
            retcode = SQLPutData(hstmt1, (SQLPOINTER)data, 256);
            lbytes -= 256;
        }
        // Put final batch.
        retcode = SQLPutData(hstmt1, (SQLPOINTER)data, lbytes);
    }
    if ((retcode != SQL_SUCCESS) && (retcode != SQL_SUCCESS_WITH_INFO)) {
        printf("SQLParamData Failed\n\n");
        Cleanup();
        return(9);
    }
}
// Make final SQLParamData call.
retcode = SQLParamData(hstmt1, &pParmID);
if ((retcode != SQL_SUCCESS) && (retcode != SQL_SUCCESS_WITH_INFO)) {
printf("Final SQLParamData Failed\n\n");
Cleanup();
return(9);
}
Please let me know if there is any solution for this.
If I remember right, I dynamically queried that information from the data source using the SQLDescribeParam function.
And, if I remember right, if you only want to transfer binary values you are interested in the transfer octet length, while if you want to get the result as a string, you probably want the display size.
Sorry, I don't remember the details, but I'm pretty sure the entry point for getting the required size dynamically is on these pages:
SQLDescribeParam: https://learn.microsoft.com/en-us/sql/odbc/reference/syntax/sqldescribeparam-function?view=sql-server-ver15
Column Size, Decimal Digits, Transfer Octet Length, and Display Size: https://learn.microsoft.com/en-us/sql/odbc/reference/appendixes/column-size-decimal-digits-transfer-octet-length-and-display-size?view=sql-server-ver15
Take care with what the driver returns as a result code - some databases/drivers do not support calculating those values in advance, so you will need to test every database/driver you want to support.
Good luck

sqlite query not prepared: row inserted but empty values

I am writing a function to prepare a SQL query and execute it against a SQLite DB. I am using a query to insert values into a table; the query looks like this:
"insert into files values (?, ?, ?, ?, ?, 0)";
A row is inserted, but all values for the text fields are empty, except the last field, which is 0. The first 5 fields are of type TEXT.
// If I hard code a value for value.data() then the row is inserted correctly with my hardcoded data, the exact line is below
status = sqlite3_bind_text(ppStmt, index, /* if I hardcode it works*/ value.data(), -1, SQLITE_STATIC);
The full function is shown below. The list is created on the stack by the caller. I am not sure why it is not replacing the question marks with my string args (hardcoded ones work, though); no error codes are returned.
//db is already open before I call this
void MediaCache::prepareAndExecuteQuery(string query, list<string> args)
{
    sqlite3_stmt *ppStmt = 0;
    const char **pzTail = 0;
    int status = 0;
    if (sqlite3_prepare_v2(db, query.data(), query.length(), &ppStmt, pzTail) != SQLITE_OK)
    {
        string error = sqlite3_errmsg(db);
        //throw an exception
    }
    if (ppStmt)
    {
        list<string>::iterator current = args.begin();
        int index = 1;
        for (current = args.begin(); current != args.end(); current++)
        {
            string value = *current;
            status = sqlite3_bind_text(ppStmt, index, value.data(), -1, SQLITE_STATIC);
            if (status != SQLITE_OK)
            {
                //log error;
            }
            index++;
        }
        status = sqlite3_step(ppStmt);
        status = sqlite3_finalize(ppStmt);
        //sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
    }
    else
    {
        //ppStmt is null
        //throw an exception
    }
}
I suspect the problem is here:
string value = *current;
status = sqlite3_bind_text(ppStmt, index, value.data(), -1, SQLITE_STATIC);
In this case value goes out of scope before the execution of the statement takes place, so the pointers given to sqlite3_bind_text are no longer valid. To "fix" that, you could use SQLITE_TRANSIENT, which forces the library to make its own copy of the data before returning.
Also, if you are not using C++11, I don't believe string::data() is guaranteed to be NUL-terminated, in which case the -1 length parameter could be incorrect; it should perhaps be value.length() instead.

SQLite C++ image to blob won't store anything

I googled a lot and couldn't find any solution. I'm currently writing an app in CodeGear 2007 C++. I'm writing this app for my little kittens; it's kind of a diary, but I call it KittyBook.
So I have two tables (sorry, I didn't understand how to format them as a code block):
KittenInfo
KittenData
KittenInfo stores their names, their ID (primary key), their gender and their birth. This one works.
The other one should store a blob. After trying so many ways, it won't be stored in the table - not even the other data, if I do the query with a normal insert but excluding the blob column.
So, I dunno what I'm doing wrong, BUT I love SQLite so far. No turning back then, eh?
The function is :
void CDatabase::InsertKittenData(int Kitten_ID, int kittenDay, bool DayOrWeek,
char * kitten_weight, char * Kitten_Comment, string PhotoFile) {
unsigned char * blob;
ifstream::pos_type size;
int size2 = 0;
if (FileExists(PhotoFile.c_str())) {
ifstream file(PhotoFile.c_str(), ios::in | ios::binary | ios::ate);
if (file.is_open()) {
size = file.tellg();
blob = new unsigned char[size];
file.seekg(0, ios::beg);
file.read(reinterpret_cast<char *>(blob), size);
file.close();
}
}
else {
blob = NULL;
}
sqlite3 *dbp;
sqlite3_stmt *ppStmt;
// NULL = primary key autoinc.
char * Sql = "INSERT INTO KittenData VALUES ( NULL, ? , ? ,? , ? , ? , ?);";
int rc = sqlite3_open("KittyBook.db", &dbp);
if (rc)
return;
if (sqlite3_prepare_v2(dbp, Sql, -1, &ppStmt, NULL) != SQLITE_OK) {
return;
}
if (ppStmt) {
if (sqlite3_bind_int(ppStmt, 1, Kitten_ID) != SQLITE_OK)
return;
if (sqlite3_bind_int(ppStmt, 2, kittenDay) != SQLITE_OK)
return;
if (sqlite3_bind_int(ppStmt, 3, DayOrWeek) != SQLITE_OK)
return;
if (sqlite3_bind_text(ppStmt, 4, // Index of wildcard
kitten_weight, strlen(kitten_weight), // length of text
SQLITE_STATIC) != SQLITE_OK)
return;
if (sqlite3_bind_text(ppStmt, 5, // Index of wildcard
Kitten_Comment, strlen(Kitten_Comment), // length of text
SQLITE_STATIC) != SQLITE_OK)
return;
if (sqlite3_bind_blob(ppStmt, 6, blob, size2, SQLITE_TRANSIENT)
!= SQLITE_OK)
return;
if (sqlite3_step(ppStmt) != SQLITE_DONE)
return;
}
sqlite3_finalize(ppStmt);
sqlite3_exec(dbp, "COMMIT", NULL, NULL, NULL);
sqlite3_close(dbp);
}
You declare and initialize size2 as zero:
int size2 = 0;
then the next use of size2 is when you bind the blob:
sqlite3_bind_blob(ppStmt, 6, blob, size2, SQLITE_TRANSIENT)
From the fine manual:
In those routines that have a fourth argument, its value is the number of bytes in the parameter. To be clear: the value is the number of bytes in the value, not the number of characters.
and the fourth argument in this case is size2 and that's zero. The result is that you're telling SQLite to bind a zero-length blob and then you're wondering why nothing gets stored. Well, nothing gets stored because you're asking SQLite to store zero bytes and it is only doing what it is told.
Perhaps you want to use size instead of size2.
mu is too short gave me the idea to go through the function step by step, increasing column by column. Dunno what I did wrong, but it's working now.
Cheers, mu is too short :)

SQLite multi insert from C++ just adding the first one

I have the following code for SQLite:
std::vector<std::vector<std::string> > InternalDatabaseManager::query(std::string query)
{
    sqlite3_stmt *statement;
    std::vector<std::vector<std::string> > results;
    if (sqlite3_prepare_v2(internalDbManager, query.c_str(), -1, &statement, 0) == SQLITE_OK)
    {
        int cols = sqlite3_column_count(statement);
        int result = 0;
        while (true)
        {
            result = sqlite3_step(statement);
            std::vector<std::string> values;
            if (result == SQLITE_ROW)
            {
                for (int col = 0; col < cols; col++)
                {
                    std::string s;
                    char *ptr = (char *)sqlite3_column_text(statement, col);
                    if (ptr) s = ptr;
                    values.push_back(s);
                }
                results.push_back(values);
            }
            else
            {
                break;
            }
        }
        sqlite3_finalize(statement);
    }
    std::string error = sqlite3_errmsg(internalDbManager);
    if (error != "not an error") std::cout << query << " " << error << std::endl;
    return results;
}
When I try to pass a query string like:
INSERT INTO CpuUsage (NODE_ID, TIME_ID, CORE_ID, USER, NICE, SYSMODE, IDLE, IOWAIT, IRQ, SOFTIRQ, STEAL, GUEST) VALUES (1, 1, -1, 1014711, 117915, 175551, 5908257, 112996, 2613, 4359, 0, 0); INSERT INTO CpuUsage (NODE_ID, TIME_ID, CORE_ID, USER, NICE, SYSMODE, IDLE, IOWAIT, IRQ, SOFTIRQ, STEAL, GUEST) VALUES (1, 1, 0, 1014711, 117915, 175551, 5908257, 112996, 2613, 4359, 0, 0); INSERT INTO CpuUsage (NODE_ID, TIME_ID, CORE_ID, USER, NICE, SYSMODE, IDLE, IOWAIT, IRQ, SOFTIRQ, STEAL, GUEST) VALUES (1, 1, 1, 1014711, 117915, 175551, 5908257, 112996, 2613, 4359, 0, 0);
It results in only the first INSERT being executed. Using some other tool like SQLiteStudio, it performs OK.
Any ideas to help me, please?
Thanks,
Pedro
EDIT
My query is a std::string.
const char** pzTail;
const char* q = query.c_str();
int result = -1;
do {
    result = sqlite3_prepare_v2(internalDbManager, q, -1, &statement, pzTail);
    q = *pzTail;
} while (result == SQLITE_OK);
This gives me Description: cannot convert ‘const char*’ to ‘const char**’ for argument ‘5’ to ‘int sqlite3_prepare_v2(sqlite3*, const char*, int, sqlite3_stmt*, const char*)’
SQLite's sqlite3_prepare_v2 will only create a statement from the first INSERT in your string. You can think of it as a "pop front" mechanism.
int sqlite3_prepare_v2(
sqlite3 *db, /* Database handle */
const char *zSql, /* SQL statement, UTF-8 encoded */
int nByte, /* Maximum length of zSql in bytes. */
sqlite3_stmt **ppStmt, /* OUT: Statement handle */
const char **pzTail /* OUT: Pointer to unused portion of zSql */
);
From http://www.sqlite.org/c3ref/prepare.html
If pzTail is not NULL then *pzTail is made to point to the first byte
past the end of the first SQL statement in zSql. These routines only
compile the first statement in zSql, so *pzTail is left pointing to
what remains uncompiled.
The pzTail parameter will point to the rest of the inserts, so you can loop through them all until they have all been prepared.
The other option is to only do one insert at a time, which makes the rest of your handling code a little bit simpler usually.
Typically I have seen people do this sort of thing under the impression that the statements will be evaluated in the same transaction. That is not the case, though: the second one may fail and the first and third will still succeed.

MySQL optimization of insert sequences

I have a realtime application that processes information and logs it to a MySQL database (actually MariaDB, a fork of MySQL). It does around 1.5 million inserts a day, plus 150,000 deletes.
I am having great problems with performance and don't know how to make it perform any better.
The basic structure of the application is that I have a producer class that pushes a struct onto a thread-safe deque. The following code consumes that queue:
#include "dbUserQueue.h"
dbUserQueue::~dbUserQueue() {
}
void dbUserQueue::createConnection()
{
sql::Driver * driver = sql::mysql::get_driver_instance();
std::auto_ptr< sql::Connection > newCon(driver->connect(dbURL, dbUser, dbPass));
con = newCon;
std::auto_ptr< sql::Statement > stmt(con->createStatement());
stmt->execute("USE twitter");
}
inline void dbUserQueue::updateStatement(const std::string & value,
std::auto_ptr< sql::PreparedStatement> & stmt, const int index)
{
if(value != "\0") stmt->setString(index, value);
else stmt->setNull(index,sql::DataType::VARCHAR);
}
inline void dbUserQueue::updateStatement(const boost::int64_t & value,
std::auto_ptr< sql::PreparedStatement> & stmt, const int index)
{
if(value != -1) stmt->setInt64(index,value);
else stmt->setNull(index,sql::DataType::BIGINT);
}
inline void dbUserQueue::updateStatement(const bool value,
std::auto_ptr< sql::PreparedStatement> & stmt, const int index)
{
stmt->setBoolean(index, value);
}
inline void dbUserQueue::updateStatement(const int value,
std::auto_ptr< sql::PreparedStatement> & stmt, const int index)
{
if(value != -1) stmt->setInt(index,value);
else stmt->setNull(index,sql::DataType::INTEGER);
}
inline void dbUserQueue::updateStatementDateTime(const std::string & value,
std::auto_ptr< sql::PreparedStatement> & stmt, const int & index)
{
if(value != "\0") stmt->setDateTime(index, value);
else stmt->setNull(index,sql::DataType::DATE);
}
/*
* This method creates a database connection
* and then creates a new thread to process the incoming queue
*/
void dbUserQueue::start() {
createConnection();
if(con->isClosed() == false)
{
insertStmt = std::auto_ptr< sql::PreparedStatement>(con->prepareStatement("\
insert ignore into users(contributors_enabled, created_at, \
description, favourites_count, followers_count, \
following, friends_count, geo_enabled, id, lang, listed_count, location, \
name, notifications, screen_name, show_all_inline_media, statuses_count, \
url, utc_offset, verified) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)"));
}
thread = boost::thread(&dbUserQueue::processLoop, this);
}
/*
* Stops the thread once it is finished processing the information
*/
void dbUserQueue::join(){
thread.interrupt();
thread.join();
}
/*
* The worker function of the thread.
* Pops items from the queue and updates the database accordingly.
*/
void dbUserQueue::processLoop() {
user input;
int recordCount = 0;
con->setAutoCommit(false);
while (true) {
try {
if(recordCount >= 1000)
{
recordCount = 0;
con->commit();
}
// Insert all the data into the prepared statement
if (userQ.wait_and_pop(input)) {
updateStatement(input.contributors_enabled, insertStmt, 1);
updateStatementDateTime(input.created_at, insertStmt, 2);
updateStatement(input.description, insertStmt, 3);
updateStatement(input.favourites_count, insertStmt, 4);
updateStatement(input.followers_count, insertStmt, 5);
updateStatement(input.following, insertStmt, 6);
updateStatement(input.friends_count, insertStmt, 7);
updateStatement(input.geo_enabled, insertStmt, 8);
updateStatement(input.id, insertStmt, 9);
updateStatement(input.lang, insertStmt, 10);
updateStatement(input.listed_count, insertStmt, 11);
updateStatement(input.location, insertStmt, 12);
updateStatement(input.name, insertStmt, 13);
updateStatement(input.notifications, insertStmt, 14);
updateStatement(input.screenName, insertStmt, 15);
updateStatement(input.show_all_inline_media, insertStmt, 16);
updateStatement(input.statuses_count, insertStmt, 17);
updateStatement(input.url, insertStmt, 18);
updateStatement(input.utc_offset, insertStmt, 19);
updateStatement(input.verified, insertStmt, 20);
insertStmt->executeUpdate();
insertStmt->clearParameters();
recordCount++;
continue;
}
} catch (std::exception & e) {
}
}// end of while
// Close the statements and the connection before exiting
insertStmt->close();
con->commit();
if(con->isClosed() == false)
con->close();
}
My question is how to improve the performance. Things I have tried:
Having multiple consumers connecting to one MySQL/MariaDB
Committing after a large number of records
Single Producer, Single consumer, commit after 1000 records = ~275 Seconds
Dual Producer, Triple consumers, commit after 1000 records = ~100 Seconds
Dual Producer, Triple consumers, commit after 2000 records = ~100 Seconds
Dual Producer, Triple consumers, commit every 1 record = ~100 Seconds
Dual Producer, 6 Consumers, commit every 1 record = ~95 Seconds
Dual Producer, 6 Consumers, commit every 2000 records = ~100 Seconds
Triple Producer, 6 Consumers, commit every 2000 records = ~100 Seconds
A couple of notes on the problem domain: the messages to insert and/or delete come randomly throughout the day, with an average of ~20 inserts/deletes per second and much higher bursts, but there is no reason the updates can't be queued for a short period, as long as the queue doesn't grow too large.
The table the data is currently being inserted into has approximately 52 million records. Here is the MySQL table definition:
CREATE TABLE `users` (
`id` bigint(20) unsigned NOT NULL,
`contributors_enabled` tinyint(4) DEFAULT '0',
`created_at` datetime NOT NULL,
`description` varchar(255) DEFAULT NULL,
`favourites_count` int(11) NOT NULL,
`followers_count` int(11) DEFAULT NULL,
`following` varchar(255) DEFAULT NULL,
`friends_count` int(11) NOT NULL,
`geo_enabled` tinyint(4) DEFAULT '0',
`lang` varchar(255) DEFAULT NULL,
`listed_count` int(11) DEFAULT NULL,
`location` varchar(255) DEFAULT NULL,
`name` varchar(255) DEFAULT NULL,
`notifications` varchar(45) DEFAULT NULL,
`screen_name` varchar(45) NOT NULL,
`show_all_inline_media` tinyint(4) DEFAULT NULL,
`statuses_count` int(11) NOT NULL,
`url` varchar(255) DEFAULT NULL,
`utc_offset` int(11) DEFAULT NULL,
`verified` tinyint(4) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `id_UNIQUE` (`id`)
) ENGINE=MARIA DEFAULT CHARSET=latin1 CHECKSUM=1 PAGE_CHECKSUM=1 TRANSACTIONAL=1
You could change the code to do bulk inserts rather than inserting one row at a time.
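One way to read "bulk inserts": build a single multi-row INSERT ... VALUES (...),(...),... statement and prepare/execute it once per batch instead of once per row, so the server processes a batch in one round-trip. The sketch below only constructs the SQL text (table and column names are illustrative); with Connector/C++ you would still bind the actual values to the ? placeholders of the prepared statement:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Build a multi-row INSERT of the form
//   INSERT INTO <table> (<cols>) VALUES (?,?),(?,?),...
// with one placeholder tuple per row.
std::string build_bulk_insert(const std::string &table,
                              const std::vector<std::string> &columns,
                              std::size_t rows) {
    std::string sql = "INSERT INTO " + table + " (";
    for (std::size_t i = 0; i < columns.size(); ++i)
        sql += (i ? ", " : "") + columns[i];
    sql += ") VALUES ";

    // One "(?,...,?)" tuple, repeated `rows` times.
    std::string tuple = "(";
    for (std::size_t i = 0; i < columns.size(); ++i)
        tuple += (i ? ",?" : "?");
    tuple += ")";

    for (std::size_t r = 0; r < rows; ++r)
        sql += (r ? "," : "") + tuple;
    return sql;
}
```

For example, build_bulk_insert("users", {"id", "name"}, 3) yields a statement with three value tuples, which you would prepare once and rebind for each batch of queued records.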