libmysqlclient.18.dylib memory leak - c++

PROBLEM: What's the cause of the memory leaks?
SITUATION:
I've built a simple command-line program in C++ that talks to MySQL through the MySQL C API.
The problem is, the program has many "minor" memory leaks reported as "Malloc xx bytes" objects, with xx ranging from a few bytes to 8 kB. All of the leaks trace back to the library libmysqlclient.18.dylib.
I've already removed all the mysql_free_result() calls from the code to see if that was the problem, but it's still the same.
My MySQL code mainly consists of simple code like:
to connect:
MYSQL *databaseConnection()
{
    // declarations
    MYSQL *connection = mysql_init(NULL);
    // connecting to database
    if (!mysql_real_connect(connection, SERVER, USER, PASSWORD, DATABASE, 0, NULL, 0))
    {
        std::cout << "Connection error: " << mysql_error(connection) << std::endl;
    }
    return connection;
}
executing a query:
MYSQL_RES *getQuery(MYSQL *connection, std::string query)
{
    // send the query to the database
    if (mysql_query(connection, query.c_str()))
    {
        std::cout << "MySQL query error: " << mysql_error(connection);
        exit(1);
    }
    return mysql_store_result(connection);
}
example of a query:
void resetTable(std::string table)
{
    MYSQL *connection = databaseConnection();
    MYSQL_RES *result;
    std::string query = "truncate table " + table;
    result = getQuery(connection, query);
    mysql_close(connection);
}

First of all: Opening a new connection for every query (like you're doing in resetTable()) is incredibly wasteful. What you really want to do is open a single connection when the application starts, use that for everything (possibly by storing the connection in a global), and close it when you're done.
To answer your question, though: You need to call mysql_free_result() on result sets once you're done with them.
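As a minimal sketch of what that looks like for the resetTable() example above (with the connection passed in rather than opened per call, as suggested; note that statements such as TRUNCATE return no result set, so mysql_store_result() returns NULL for them, which is why the check matters):

void resetTable(MYSQL *connection, std::string table)
{
    std::string query = "truncate table " + table;
    MYSQL_RES *result = getQuery(connection, query);

    if (result != NULL)
    {
        mysql_free_result(result);  // release the result set once you're done with it
    }
}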

Related

Random Segmentation Fault - Raspberry Pi, C++, MySQL

I'm having an issue with segmentation faults crashing my C++ program. I've narrowed it down to executing MySQL queries. If I remove the query it never crashes, but when the query does run it may work 5 times in a row and then crash, or it might go 20 times before a segmentation fault; it's very random.
This is the code to connect.
MYSQL *conn;
void connectMysql(){
    const char *server = "localhost";
    const char *user = "root";
    const char *password = "myPpassword";
    const char *database = "myDatabase";
    conn = mysql_init(NULL);
    /* Connect to database */
    if (!mysql_real_connect(conn, server,
                            user, password, database, 0, NULL, 0)) {
        fprintf(stderr, "%s\n", mysql_error(conn));
        exit(1);
    }
}
Then I do this to run my queries. The crash happens with other queries as well.
connectMysql();
std::string query = "UPDATE settings SET tempFormat = '" + to_string(tempFormat) + "'";
std::cout << query << std::endl; // print string to terminal
mysql_query(conn, query.c_str());
std::cout << "done" << std::endl; // print done to terminal
mysql_close(conn);
Using std::cout to write to the terminal, it looks like the crash is happening in the mysql_query function. I print the query string to the terminal and it always looks good going into mysql_query, even when it crashes. When it does crash I never see "done" printed to the terminal, so it doesn't make it that far.
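For reference, here is the same snippet with the return value of mysql_query() checked (the check is added here purely for illustration; mysql_query() returns non-zero on failure and mysql_error() gives the reason, so any library-level error is at least reported before the crash):

connectMysql();
std::string query = "UPDATE settings SET tempFormat = '" + to_string(tempFormat) + "'";
std::cout << query << std::endl;

if (mysql_query(conn, query.c_str()) != 0)   // non-zero return means the statement failed
{
    fprintf(stderr, "query failed: %s\n", mysql_error(conn));
}

std::cout << "done" << std::endl;
mysql_close(conn);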
I've only recently discovered this, as I don't save to the database that often, so I'm not exactly sure which C++ MySQL library I'm using, but this is what I use to install it:
sudo apt-get install libmariadb-dev-compat libmariadb-dev -y
Thanks

C++ - MySQL Connector

I'm using the C++ MySQL connector to do operations on my MySQL database.
My C++ program is a real-time application (a REST API) that is always running in the cloud, waiting for user requests.
When I start my program for the first time, I automatically create a connection to the database (the fields for the connector are loaded from a configuration file). Example:
conDataBase = new ConDatabase;
if (!conDataBase->Init()) return false;
The conDataBase is a global pointer accessible to all classes.
The Init() function:
bool conDatabase::Init()
{
    GetParameterStr("DATABASE", "HOST", "", hostname, 255);
    db_hostname = hostname;
    GetParameterStr("DATABASE", "USER", "", user, 255);
    db_user = user;
    GetParameterStr("DATABASE", "PASSWORD", "", password, 255);
    db_password = password;
    GetParameterStr("DATABASE", "SCHEMA", "", schema, 255);
    db_schema = schema;
    printf("DATABASE: Connecting to %s \n", db_hostname.c_str());
    printf("DATABASE: Connecting at %s with user %s \n", db_schema.c_str(), db_user.c_str());
    try
    {
        driver = get_driver_instance();
        con = driver->connect(db_hostname.c_str(), db_user.c_str(), db_password.c_str());
        con->setSchema(db_schema.c_str());
        stmt = con->createStatement();
        printf("DATABASE: Connected to database... OK \n");
        return true;
    }
    catch (sql::SQLException &e)
    {
        std::cout << "# ERR: SQLException in " << __FILE__;
        std::cout << "(" << __FUNCTION__ << ") on line " << __LINE__ << std::endl;
        std::cout << "# ERR: " << e.what();
        std::cout << " (MySQL error code: " << e.getErrorCode();
        std::cout << ", SQLState: " << e.getSQLState() << " )" << std::endl;
        return false;
    }
}
So when I receive a request, for example to list the user info, the userInfo request class calls the global pointer for the database class like this:
conDataBase->GetUserInfo(// the parameters);
Inside GetUserInfo() I build my query like this:
res = stmt->executeQuery(query);
It works, but my real doubt is: is it necessary to delete the pointers from the MySQL connector (res, pstmt, con, etc.)? I'm worried about memory leaks in the future. I only delete the pointers when the program exits, but since this is a real-time program it is not expected to terminate. If I delete the pointers after each query, insert, etc. (like the MySQL connector examples do), I get a segmentation fault, because the database pointers con, res, etc. are created once when the program starts; if they are deleted after each database operation they are gone the next time I need them, and accessing them results in a segmentation fault. What is the solution in this case to prevent memory leaks in the future?
For such cases you can write a connectionManager class (a sketch follows below). It can be used to provide APIs for:
1- creating and maintaining a connection pool,
2- a getConnection API to get a connection instance from the pool,
3- a releaseConnection API to put the connection instance back into the pool of open connections,
4- you should use STL containers to store the open connections, etc.
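A minimal sketch of such a class, using the same JDBC-style Connector/C++ calls as the Init() code above; the class name, the pool size handling, and the userInfo query are illustrative only, and error handling is omitted:

#include <cppconn/driver.h>
#include <cppconn/connection.h>
#include <cppconn/statement.h>
#include <cppconn/resultset.h>

#include <deque>
#include <memory>
#include <mutex>
#include <string>

class ConnectionManager
{
public:
    // Open a fixed number of connections up front and keep them in the pool.
    ConnectionManager(const std::string &host, const std::string &user,
                      const std::string &password, const std::string &schema,
                      std::size_t poolSize)
    {
        sql::Driver *driver = get_driver_instance();
        for (std::size_t i = 0; i < poolSize; ++i)
        {
            sql::Connection *con = driver->connect(host, user, password);
            con->setSchema(schema);
            pool.push_back(con);
        }
    }

    // Hand out an open connection (waiting/error handling omitted for brevity).
    sql::Connection *getConnection()
    {
        std::lock_guard<std::mutex> lock(poolMutex);
        sql::Connection *con = pool.front();
        pool.pop_front();
        return con;
    }

    // Put the connection back so the next request can reuse it.
    void releaseConnection(sql::Connection *con)
    {
        std::lock_guard<std::mutex> lock(poolMutex);
        pool.push_back(con);
    }

private:
    std::deque<sql::Connection *> pool;  // STL container holding the open connections
    std::mutex poolMutex;
};

// Example request handler: the Statement and ResultSet are per-query objects and
// are deleted (here via unique_ptr) as soon as the rows have been read; only the
// Connection itself goes back into the pool.
void listUserInfo(ConnectionManager &manager)
{
    sql::Connection *con = manager.getConnection();
    std::unique_ptr<sql::Statement> stmt(con->createStatement());
    std::unique_ptr<sql::ResultSet> res(stmt->executeQuery("SELECT * FROM userInfo"));
    while (res->next())
    {
        // ... read the columns of the current row ...
    }
    manager.releaseConnection(con);
}

The point is that the Connection objects stay alive for the whole run of the program, while res and stmt are short-lived and freed after every query; that is what keeps memory usage flat without ever deleting the connection you still need.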

SQLite in C++. DB is BUSY (Multithread)

I've got a problem. I'm using SQLite3 in my C++ project. In the log I get errors: DB is locked, error code 5. As far as I know, error code 5 means the DB is busy. To solve this, I started using WAL journal mode, but it doesn't help.
In my program, I've got 2 connections to the same DB. I use mutexes for both DB connections.
I'm opening connections with this code:
if (sqlite3_open_v2(db_path.c_str(), &this->db, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_NOMUTEX, 0)) {
    LOG4CPLUS_FATAL(this->logger, "Can not open/create DB " << sqlite3_errmsg(db));
    sqlite3_close(this->db);
}
if (sqlite3_exec(this->db, "PRAGMA journal_mode = WAL;", 0, 0, &err)) {
    LOG4CPLUS_ERROR(this->logger, "SQL set journal mode error: " << err);
    sqlite3_free(err);
}
The first connection is used for inserting data to the DB. It happens 4 times every second.
The second connection is used for starting transaction, selecting, updating, deleting data, and committing. It happens every 5 seconds.
I'm getting errors from the first connection.
Please help me to solve this problem.
Update:
First connection:
void readings_collector::flushToDb()
{
    this->db_mutex.lock();
    LOG4CPLUS_DEBUG(this->logger, "Flush to DB start.");
    const char *query = "INSERT INTO `readings` (`sensor_id`, `value`, `error`, `timestamp`) VALUES (?,?,?,?)";
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(this->db, query, -1, &stmt, NULL);
    if (SQLITE_OK != rc) {
        LOG4CPLUS_ERROR(this->logger, "sqlite prepare insert statement error: " << sqlite3_errmsg(this->db));
    }
    LOG4CPLUS_TRACE(this->logger, "--------------------");
    LOG4CPLUS_TRACE(this->logger, this->readings.size());
    while (!this->readings.empty()) {
        sensor::reading temp_reading = this->readings.front();
        this->readings.pop();
        LOG4CPLUS_TRACE(this->logger, "Reading " << temp_reading.sensor_id << " : " << temp_reading.value << " : " << temp_reading.error << " : " << temp_reading.timestamp);
        sqlite3_clear_bindings(stmt);
        sqlite3_bind_int(stmt, 1, temp_reading.sensor_id);
        sqlite3_bind_text(stmt, 2, temp_reading.value.c_str(), sizeof(temp_reading.value.c_str()), NULL);
        sqlite3_bind_int(stmt, 3, temp_reading.error);
        sqlite3_bind_int(stmt, 4, temp_reading.timestamp);
        rc = sqlite3_step(stmt);
        if (SQLITE_DONE != rc) {
            LOG4CPLUS_ERROR(this->logger, "sqlite insert statement exec error: " << sqlite3_errmsg(this->db) << "; status: " << rc);
        }
    }
    sqlite3_finalize(stmt);
    LOG4CPLUS_TRACE(this->logger, "Flush to DB finish.");
    this->db_mutex.unlock();
}
Second connection:
void dataSend_task::sendData()
{
    this->db_mutex.lock();
    char *err = 0;
    LOG4CPLUS_INFO(this->logger, "Send data function");
    if (sqlite3_exec(this->db, "BEGIN TRANSACTION", 0, 0, &err)) {
        LOG4CPLUS_ERROR(this->logger, "SQL exec error: " << err);
        sqlite3_free(err);
    }
    if (sqlite3_exec(this->db, this->SQL_UPDATE_READINGS_QUERY, 0, 0, &err)) {
        LOG4CPLUS_ERROR(this->logger, "SQL exec error: " << err);
        sqlite3_free(err);
    }
    this->json.clear();
    this->readingsCounter = 0;
    if (sqlite3_exec(this->db, this->SQL_SELECT_READINGS_QUERY, +[](void *instance, int x, char **y, char **z) {
            return static_cast<dataSend_task *>(instance)->callback(0, x, y, z);
        }, this, &err)) {
        LOG4CPLUS_ERROR(this->logger, "SQL exec error: " << err);
        sqlite3_free(err);
    } else {
        LOG4CPLUS_TRACE(this->logger, "Json data: " << this->json);
        if (this->curlSend()) {
            if (sqlite3_exec(this->db, this->SQL_DELETE_READINGS_QUERY, 0, 0, &err)) {
                LOG4CPLUS_ERROR(this->logger, "SQL exec error: " << err);
                sqlite3_free(err);
            }
        }
    }
    if (sqlite3_exec(this->db, "COMMIT", 0, 0, &err)) {
        LOG4CPLUS_ERROR(this->logger, "SQL exec error: " << err);
        sqlite3_free(err);
    }
    this->db_mutex.unlock();
    this->json.clear();
}
As you've no doubt realized, SQLite only allows one connection at a time to be updating the database.
From the code you have pasted, it looks as though you have two separate mutexes, one for the readings_collector instance, another for the dataSend_task instance. These would protect against multiple executions of each of the two functions but not against both of those functions running at once.
It wasn't entirely clear from your question what the purpose of the mutexes is, but as written they certainly aren't going to prevent both of those connections from simultaneously trying to update the database.
I can suggest two approaches to fix your problem.
The first would be to use a single shared mutex between those two instances, so that only one of them at a time can be updating the database.
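A minimal sketch of that first approach, assuming both objects can be given a reference to one shared std::mutex at construction time (the shared_db_mutex name and the constructors are illustrative; the method bodies stand in for the code you posted):

#include <mutex>

std::mutex shared_db_mutex;  // one mutex guarding the database, shared by both classes

class readings_collector
{
public:
    explicit readings_collector(std::mutex &m) : db_mutex(m) {}

    void flushToDb()
    {
        std::lock_guard<std::mutex> lock(db_mutex);  // held for the whole insert loop
        // ... prepare / bind / step the INSERT statements as in the code above ...
    }

private:
    std::mutex &db_mutex;  // refers to shared_db_mutex
};

class dataSend_task
{
public:
    explicit dataSend_task(std::mutex &m) : db_mutex(m) {}

    void sendData()
    {
        std::lock_guard<std::mutex> lock(db_mutex);  // blocks flushToDb() while the transaction runs
        // ... BEGIN TRANSACTION / UPDATE / SELECT / DELETE / COMMIT as in the code above ...
    }

private:
    std::mutex &db_mutex;  // refers to the same shared_db_mutex
};

// Both instances are constructed with the same mutex, so the two connections
// can never be writing to the database at the same time:
//   readings_collector collector(shared_db_mutex);
//   dataSend_task sender(shared_db_mutex);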
The second would be to take advantage of the facilities SQLite provides for resolving contention when accessing the database. SQLite allows you to install a 'busy handler' which will be called whenever an attempt is made to access a database that is already locked by another thread or process. The busy handler can take whatever action is desired, but the simplest case is normally just to wait a while and try again; that is catered for by the built-in busy handler, which you can install by calling sqlite3_busy_timeout.
For example, immediately after opening your database connection, you could do this:
sqlite3_busy_timeout(this->db, 1000); // Wait 1000mS if busy
It is also possible to set such a timeout by command, using the busy_timeout pragma.
You may also wish to consider starting your transaction with BEGIN IMMEDIATE TRANSACTION or BEGIN EXCLUSIVE TRANSACTION, so that the write lock is acquired up front and any contention is reported when the transaction starts rather than partway through it. See the documentation on transactions.
Please check these two Stack Overflow posts. They seem to be related to your issue.
Can different connections of the same sqlite's database begin transactions concurrently?
If you read the SQLite documentation, you will see that it supports multiple connections for reading only; you cannot write to the database from multiple connections, because it's not designed for that.
Read and Write Sqlite database data concurrently from multiple connections
Multiple processes can have the same SQLite database open at the same time, and several read accesses can be satisfied in parallel.
In the case of a write, a single write to the database locks the database for a short time, during which nothing, not even reading, can access the database file at all.
Beginning with version 3.7.0, a new “Write-Ahead Logging” (WAL) option is available, in which reading and writing can proceed concurrently.
By default, WAL is not enabled. To turn WAL on, please refer to the SQLite documentation.

MySQL server-side timeout

I have some connection code that causes a timeout when queries take too long. The connection options are setup like this (timeout is an integer):
sql::ConnectOptionsMap com;
com["hostName"] = url; // Some string
com["userName"] = user; // Some string
com["password"] = pwd; // Some string
com["OPT_RECONNECT"] = true;
com["OPT_READ_TIMEOUT"] = timeout; // Usually 1 (second)
com["OPT_WRITE_TIMEOUT"] = timeout; // Usually 1 (second)
After testing the timeout setup above, what I found is that an exception is thrown, but MySQL continues trying to execute the query. In other words, the try block below goes to the catch after the configured timeout with error code 2013 (an error code related to a lost connection), but that doesn't stop MySQL from trying to execute the query:
// Other code
try
{
    stmt = con->createStatement();
    stmt->execute("DROP DATABASE IF EXISTS MySQLManagerTest_TimeoutRead");
    stmt->execute("CREATE DATABASE MySQLManagerTest_TimeoutRead");
    stmt->execute("USE MySQLManagerTest_TimeoutRead");
    stmt->execute("CREATE TABLE foo (bar INT)");
    for (int i = 0; i < 100; i++)
        stmt->execute("INSERT INTO foo (bar) VALUES (" + LC(i) + ")");
    // A bit of playing is needed in the loop condition
    // Make it longer than a second but not too long
    // Using 10000 seems to take roughly 5 seconds
    stmt->execute(
        "CREATE FUNCTION waitAWhile() "
        "RETURNS INT READS SQL DATA "
        "BEGIN "
        "DECLARE baz INT DEFAULT 0; "
        "WHILE baz < 10000 DO "
        "SET baz = baz + 1; "
        "END WHILE; "
        "RETURN baz; "
        "END;"
    );
    res = stmt->executeQuery("SELECT 1 FROM foo WHERE bar = waitAWhile()");
} catch (sql::SQLException &e) {
    std::cout << e.getErrorCode() << std::endl;
}
// Other code
By running "top" at the same time as the testing code above, I was able to see that MySQL did not stop. Making the waitAWhile() function an infinite loop instead confirmed this further, because I had to kill the MySQL process to make it stop.
This kind of timeout is not what I wanted; I wanted MySQL to give up on a query if it takes too long. Can this be done (so that both my code and MySQL stop doing work)? Additionally, can this be specified only for INSERT queries?
With SQL Server you can set a maximum query execution time on the server side, but it doesn't look like MySQL supports this; see the accepted answer to the following SO question for more details:
MySQL - can I limit the maximum time allowed for a query to run?

Can't connect to SQL database with mysql++

I've been trying to use the mysql++ library in my application (Windows x64 based), but I can't seem to connect to my SQL server.
Some information:
I used this code to connect to the server:
mysqlpp::Connection conn(db, 0, user, pass, 3306);
This definitely has the right data in it.
My SQL server is the standard service from the MySQL install, and I'm pretty sure I used the standard settings. I can connect to it using MySQL Workbench, and I have created some new tables and such, but my own program doesn't seem to connect.
I read the documentation and I can't find anything specific that might suggest why I can't connect.
Oh, so many issues, so little time...
Have you checked that your program has permissions to access the database?
Does your program have the correct privileges?
Is your host name correct?
What errors are you getting?
What exception is thrown?
When you use the debugger, what line is the error on?
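In particular, for those last two questions: mysql++ can report the failure reason itself. A minimal sketch, assuming the library's exception-free mode and a server reachable over TCP on 127.0.0.1:3306 (the header path, host, and variable names here are illustrative and may differ in your setup):

#include <mysql++.h>
#include <iostream>

bool tryConnect(const char *db, const char *user, const char *pass)
{
    mysqlpp::Connection conn(false);  // false = report errors via return values instead of throwing

    if (!conn.connect(db, "127.0.0.1", user, pass, 3306))
    {
        std::cerr << "Connection failed: " << conn.error() << std::endl;
        return false;
    }

    std::cout << "Connected." << std::endl;
    return true;
}

Whatever conn.error() prints (bad host, bad credentials, server not listening on TCP, ...) usually points straight at the cause.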
Here's my method:
sql::Connection * const
Manager ::
get_db_connection(void) const
{
    //-------------------------------------------------------------------------
    // Use only one connection until proven that more connections will make
    // the program more efficient or have a beneficial impact on the user.
    // Thus the change in returning sql::Connection * rather than a smart pointer.
    // A smart pointer will delete its contents.
    //-------------------------------------------------------------------------
    static const char host_text[] = "tcp://127.0.0.1:3306/";
    static std::string host_name;
    if (!m_connection_initialized)
    {
        host_name = host_text;
        initialize_db_driver();
        host_name += m_dataset_info.m_dsn_name;
        try
        {
            m_p_connection = m_p_sql_driver->connect(host_name.c_str(),
                                                     m_dataset_info.m_user_name.c_str(),
                                                     m_dataset_info.m_password.c_str());
            // Mark the connection as initialized only once connect() has succeeded,
            // so that a thrown exception leaves the flag false.
            m_connection_initialized = true;
        }
        catch (sql::SQLException &e)
        {
            /*
              The MySQL Connector/C++ throws three different exceptions:
              - sql::MethodNotImplementedException (derived from sql::SQLException)
              - sql::InvalidArgumentException (derived from sql::SQLException)
              - sql::SQLException (derived from std::runtime_error)
            */
            wxString wx_text = wxT("# ERR: SQLException in ");
            wx_text += wxT(__FILE__);
            wxLogDebug(wx_text);
            wx_text.Printf(wxT("# ERR: (%s) on line %d"),
                           __FUNCTION__,
                           __LINE__);
            wxLogDebug(wx_text);
            wx_text.Printf(wxT("# ERR: %s (MySQL error code: %d, SQLState: %s)"),
                           e.what(),
                           e.getErrorCode(),
                           e.getSQLState());
            wxLogDebug(wx_text);
            wxLogDebug(wxT("Verify that mysqlcppconn.dll is in the PATH or in the working directory."));
            // throw Manager_Connection_Not_Initialized();
            m_connection_initialized = false;
        }
        catch (...)
        {
            std::cout << "Unhandled database SQL exception\n" << std::flush;
            m_connection_initialized = false;
        }
    }
    return m_p_connection;
}