I am trying to run through a series of checks/inserts into a MySQL 5.5 db, but I am having frequent yet intermittent issues with SIGSEGV errors. Over the course of many queries being executed, the SELECT statements run just fine. However, after some variable amount of time or number of executed queries (sometimes thousands of checks, sometimes 1 or 2, sometimes not at all and the program exits normally), I inexplicably get a segfault...
Program received signal SIGSEGV, Segmentation fault.
0x100188a8 in mysql_send_query () from K:\Programming\C\Test\libmysql.dll
(gdb) bt full
#0 0x100188a8 in mysql_send_query () from K:\Programming\C\Test\libmysql.dll
No symbol table info available.
#1 0x100188e5 in mysql_real_query () from K:\Programming\C\Test\libmysql.dll
No symbol table info available.
#2 0x00000000 in ?? ()
No symbol table info available.
(gdb)
This is from my heavily reduced code:
int main() {
    for (int i = 0; i < 5000; i++) {
        int iNewX = GenerateRandomInt(1, 50);
        int iNewY = GenerateRandomInt(1, 50);
        std::string str = "SELECT * FROM Resources WHERE XPOS = ";
        str = str +
            StatToString(iNewX) + " AND YPOS = " +
            StatToString(iNewY) + ";";
        const char * Query = str.c_str();
        MYSQL *connect;
        connect = mysql_init(NULL);
        connect = mysql_real_connect(connect, SERVER, USER, PASSWORD, DATABASE, 0, NULL, 0);
        // Print SQL statement for debugging only...
        // This appears to always be good, even in the case of the segfault.
        std::cout << Query << std::endl;
        if (mysql_query(connect, Query)) {
            // Supposed to log an error; I don't get this far...
            // This does work when I intentionally break the statement.
            std::cout << printf("Failed to SELECT, Error: %s", mysql_error(connect));
            std::cout << printf("Query: %s", Query) << std::endl;
            mysql_close(connect);
            return 0;
        }
        mysql_close(connect);
    }
    return 1;
}
I have been unsuccessful in searching online for a case that really matches what I have going on here (though there are lots of MySQL/segfault related forum/Q+A topics/threads). Since this appears to be happening within the .dll itself, how can I fix this?
Can anyone explain why the issue seems to come and go?
I have not yet tried to reinstall MySQL, as that will likely be a very big headache that I would rather avoid. If I must, then I must.
If I am missing any details in my question or any pertinent code, please let me know and I will add.
After following Christian.K's advice, I was able to see that this was error 23 (as returned by mysql_error(connect)) after connect=mysql_init(NULL).
This led me to a few resources, most clearly this one, which says that this is a known problem when working within Windows, and there's not much I can do about it.
You might get around the open file limit (error 23) by not opening a new connection for every loop iteration (which is questionable anyway), but rather using one connection for all loop iterations.
Together with my comments about error handling and the strange cout << printf use, you end up with something like this:
int main() {
    MYSQL *connect;
    connect = mysql_init(NULL);
    if (connect == NULL)
    {
        printf("Insufficient memory to initialize.\n");
        return 1;
    }
    if (mysql_real_connect(connect, SERVER, USER, PASSWORD, DATABASE, 0, NULL, 0) == NULL)
    {
        // Don't overwrite the handle from mysql_init(), or mysql_error()
        // would be called on a NULL pointer here.
        printf("Could not connect: %s\n", mysql_error(connect));
        mysql_close(connect);
        return 1;
    }
    for (int i = 0; i < 5000; i++) {
        int iNewX = GenerateRandomInt(1, 50);
        int iNewY = GenerateRandomInt(1, 50);
        std::string str = "SELECT * FROM Resources WHERE XPOS = ";
        str = str +
            StatToString(iNewX) + " AND YPOS = " +
            StatToString(iNewY) + ";";
        const char * Query = str.c_str();
        if (mysql_query(connect, Query)) {
            printf("Failed to SELECT, Error: %s\n", mysql_error(connect));
            printf("Query: %s\n", Query);
            mysql_close(connect);
            return 1;
        }
    }
    mysql_close(connect);
    return 0;
}
Note that I also changed the return values: by convention, main() should return 0 on success and something else (usually 1) otherwise.
I'm learning how to use RDMA over InfiniBand, and one problem I'm having is using a connection with more than one thread. Because I can't figure out how to create another completion queue, the work completions get mixed up between the threads and it craps out. How do I create a queue for each thread using the connection?
Take this vomit for example:
void worker(struct ibv_cq* cq){
    while(conn->peer_mr.empty()) Sleep(1);
    struct ibv_wc wc{};
    struct ibv_send_wr wr{};
    memset(&wr, 0, sizeof wr);
    struct ibv_sge sge{};
    sge.addr = reinterpret_cast<unsigned long long>(conn->rdma_memory_region);
    sge.length = RDMA_BUFFER_SIZE;
    sge.lkey = conn->rdma_mr->lkey;
    wr.wr_id = reinterpret_cast<unsigned long long>(conn);
    wr.opcode = IBV_WR_RDMA_READ;
    wr.sg_list = &sge;
    wr.num_sge = 1;
    wr.send_flags = IBV_SEND_SIGNALED;
    struct ibv_send_wr* bad_wr = nullptr;
    while(true){
        if(queue >= maxqueue) continue;
        for(auto i = 0ULL; i < conn->peer_mr.size(); ++i){
            wr.wr.rdma.remote_addr = reinterpret_cast<unsigned long long>(conn->peer_mr[i]->mr.addr) + conn->peer_mr[i]->offset;
            wr.wr.rdma.rkey = conn->peer_mr[i]->mr.rkey;
            const auto err = ibv_post_send(conn->qp, &wr, &bad_wr);
            if(err){
                std::cout << "ibv_post_send " << err << "\n" << "Errno: " << std::strerror(errno) << "\n";
                exit(err);
            }
            ++queue;
            conn->peer_mr[i]->offset += RDMA_BUFFER_SIZE;
            if(conn->peer_mr[i]->offset >= conn->peer_mr[i]->mr.length) conn->peer_mr[i]->offset = 0;
        }
        int ne;
        do{
            ne = ibv_poll_cq(cq, 1, &wc);
        } while(!ne);
        --queue;
        ++number;
    }
}
If I had more than one of these threads, they would all receive each other's work completions. I want each to receive only its own, not those of other threads.
The completion queues are created somewhere outside of this code (you are passing in an ibv_cq *). If you'd like to figure out how to create multiple ones, that's the area to focus on.
However, the "crapping out" is not (just) happening because completions are mixed up between threads: the ibv_poll_cq and ibv_post_send functions are thread safe. Instead, the likely problem is that your code isn't thread-safe: there are shared data structures that are accessed without locks (conn->peer_mr). You would have the same issues even without RDMA.
The first step is to figure out how to split up the work into pieces. Think about the pieces that each thread will need to make it independent from the others. It'll likely be a single peer_mr, a separate ibv_cq *, and a specific chunk of your rdma_mr. Then code that :)
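As a starting point, here's a minimal sketch of creating one completion queue per worker thread. The ctx (device context), NUM_THREADS, and CQ_DEPTH names are assumptions for illustration, not part of the original code:

#include <infiniband/verbs.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Hypothetical sizes; tune for your workload.
constexpr int NUM_THREADS = 4;
constexpr int CQ_DEPTH = 64;

std::vector<ibv_cq*> createPerThreadCqs(ibv_context* ctx) {
    std::vector<ibv_cq*> cqs;
    for (int t = 0; t < NUM_THREADS; ++t) {
        // One private CQ per thread, all from the same device context.
        ibv_cq* cq = ibv_create_cq(ctx, CQ_DEPTH, nullptr, nullptr, 0);
        if (!cq) {
            std::perror("ibv_create_cq");
            std::exit(1);
        }
        cqs.push_back(cq);
    }
    return cqs;
}

Keep in mind that completions are delivered to whichever CQ a queue pair was created with (its send_cq/recv_cq), so each thread will also need its own QP attached to its private CQ for this separation to work.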
We have a C++ server application that connects to a PostgreSQL database using the libpq library. The application creates hundreds of connections to the database, and most of them live for the lifetime of the application.
Initially the application was running fine, but over a period of time the Postgres server consumed more and more memory for long-running connections. By writing the sample program below, I came to know that creating prepared statements using PQsendPrepare and PQsendQueryPrepared is causing the memory consumption issue in the database server.
How can we fix this server memory issue? Is there any libpq function to free the memory on the server?
#include <iostream>
#include <fstream>
#include <string>
#include <sstream>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <libpq-fe.h>

// Close the connection and bail out on a fatal error.
static void do_exit(PGconn *conn)
{
    PQfinish(conn);
    exit(1);
}

int main(int argc, char *argv[]) {
    const int LEN = 10;
    const char *paramValues[1];
    int paramFormats[1];
    int rowId = 7369;
    Oid paramTypes[1];
    char str[LEN];
    snprintf(str, LEN, "%d", rowId);
    paramValues[0] = str;
    paramTypes[0] = 20;
    paramFormats[0] = 0;
    long int c = 1;
    PGresult* result;
    //PGconn *conn = PQconnectdb("user=scott dbname=dame");
    PGconn *conn = PQsetdbLogin("", "", NULL, NULL, "dame", "scott", "tiger");
    if (PQstatus(conn) == CONNECTION_BAD) {
        fprintf(stderr, "Connection to database failed: %s\n",
                PQerrorMessage(conn));
        do_exit(conn);
    }
    const char *stm = "SELECT coalesce(ename,'test') from emp where empno=$1";
    for (;;)
    {
        std::stringstream strStream;
        strStream << c++;
        std::string strStatementName = "s_" + strStream.str();
        if (PQsendPrepare(conn, strStatementName.c_str(), stm, 1, paramTypes))
        {
            result = PQgetResult(conn);
            if (PQresultStatus(result) != PGRES_COMMAND_OK)
            {
                PQclear(result);
                result = NULL;
                do
                {
                    result = PQgetResult(conn);
                    if (result != NULL)
                    {
                        PQclear(result);
                    }
                } while (result != NULL);
                std::cout << "error prepare" << PQerrorMessage(conn) << std::endl;
                break;
            }
            PQclear(result);
            result = NULL;
            do
            {
                result = PQgetResult(conn);
                if (result != NULL)
                {
                    PQclear(result);
                }
            } while (result != NULL);
        }
        else
        {
            std::cout << "error:" << PQerrorMessage(conn) << std::endl;
            break;
        }
        // Lengths may be NULL for text-format parameters.
        if (!PQsendQueryPrepared(conn, strStatementName.c_str(), 1,
                (const char* const *)paramValues, NULL, paramFormats, 0))
        {
            std::cout << "error:prepared " << PQerrorMessage(conn) << std::endl;
        }
        if (!PQsetSingleRowMode(conn))
        {
            std::cout << "error singrow mode " << PQerrorMessage(conn) << std::endl;
        }
        result = PQgetResult(conn);
        if (result != NULL)
        {
            if ((PGRES_FATAL_ERROR == PQresultStatus(result)) || (PGRES_BAD_RESPONSE == PQresultStatus(result)))
            {
                PQclear(result);
                result = NULL;
                do
                {
                    result = PQgetResult(conn);
                    if (result != NULL)
                    {
                        PQclear(result);
                    }
                } while (result != NULL);
                break;
            }
            if (PQresultStatus(result) == PGRES_SINGLE_TUPLE)
            {
                std::ofstream myfile;
                myfile.open("native.txt", std::ofstream::out | std::ofstream::app);
                myfile << PQgetvalue(result, 0, 0) << "\n";
                myfile.close();
                PQclear(result);
                result = NULL;
                do
                {
                    result = PQgetResult(conn);
                    if (result != NULL)
                    {
                        PQclear(result);
                    }
                } while (result != NULL);
                sleep(10);
            }
            else if (PQresultStatus(result) == PGRES_TUPLES_OK || PQresultStatus(result) == PGRES_COMMAND_OK)
            {
                PQclear(result);
                result = NULL;
                do
                {
                    result = PQgetResult(conn);
                    if (result != NULL)
                    {
                        PQclear(result);
                    }
                } while (result != NULL);
            }
        }
    }
    PQfinish(conn);
    return 0;
}
Initially the application was running fine, but over a period of time the Postgres server consumed more and more memory for long-running connections. By writing the sample program below, I came to know that creating prepared statements using PQsendPrepare and PQsendQueryPrepared is causing the memory consumption issue in the database server.
Well that seems unsurprising. You are generating a new prepared statement name at each iteration of your outer loop, and then creating and executing a prepared statement of that name. All the resulting, differently-named prepared statements will indeed remain in the server's memory as long as the connection is open. This is intentional.
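If you want to see this happening, the same session can list its accumulated statements through the pg_prepared_statements view. A sketch using the conn from the question (the view is per-session, so it must run on that same connection):

// List every prepared statement the current session still holds.
PGresult *r = PQexec(conn, "SELECT name, statement FROM pg_prepared_statements");
if (PQresultStatus(r) == PGRES_TUPLES_OK)
    for (int i = 0; i < PQntuples(r); ++i)
        printf("%s: %s\n", PQgetvalue(r, i, 0), PQgetvalue(r, i, 1));
PQclear(r);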
How can we fix this server memory issue?
I'd characterize it as a program logic issue, not a server memory issue, at least as far as the test program goes. You obtain resources (prepared statements) and then allow them to hang around when you have no further use for them. The statements aren't leaked per se, as you could recreate the algorithmically-generated statement names, but the problem is similar to a resource leak. In your program, not in Postgres.
If you want to use one-off prepared statements then give them the empty string, "", as their name. Postgres calls these "unnamed" statements. Each unnamed statement you prepare will replace any previous one belonging to the same connection.
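As a sketch (reusing conn, stm, and paramTypes from the question), preparing an unnamed statement just means passing "" as the name:

// Each unnamed prepare replaces the previous unnamed statement
// on this connection, so nothing accumulates server-side.
if (!PQsendPrepare(conn, "", stm, 1, paramTypes))
    fprintf(stderr, "prepare failed: %s\n", PQerrorMessage(conn));
// ...then execute with PQsendQueryPrepared(conn, "", ...) and drain
// the results with PQgetResult() as in the original program.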
But even that's a hack. The most important feature of prepared statements in the first place is that they can be reused. Every statement prepared by your test program is identical, so not only are you wasting memory, you are also wasting CPU cycles. You should prepare it once only -- via PQsendPrepare(), or maybe simply PQprepare() -- and when it has successfully been prepared, execute it as many times as you want with PQsendQueryPrepared() or PQexecPrepared(), passing the same statement name every time (but possibly different parameters).
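A minimal synchronous sketch of that pattern, reusing the question's conn, stm, paramTypes, paramValues, and paramFormats (the statement name get_ename is made up for illustration):

// Prepare exactly once...
PGresult *res = PQprepare(conn, "get_ename", stm, 1, paramTypes);
if (PQresultStatus(res) != PGRES_COMMAND_OK)
    fprintf(stderr, "prepare failed: %s\n", PQerrorMessage(conn));
PQclear(res);

// ...then execute the same named statement as often as needed.
for (long i = 0; i < 1000; ++i) {
    res = PQexecPrepared(conn, "get_ename", 1, paramValues,
                         NULL /* lengths ignored for text params */,
                         paramFormats, 0);
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
        printf("%s\n", PQgetvalue(res, 0, 0));
    PQclear(res);
}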
Is there any libpq function to free the memory on the server?
The documentation for the synchronous versions of the query functions says:
Prepared statements for use with PQexecPrepared can also be created by
executing SQL PREPARE statements. Also, although there is no libpq
function for deleting a prepared statement, the SQL DEALLOCATE
statement can be used for that purpose.
To the best of my understanding, there is only one flavor of prepared statement in Postgres, used by the synchronous and asynchronous functions alike. So no, libpq provides no function specifically for dropping prepared statements associated with a connection, but you can write a statement in SQL to do the job. Of course, it would be pointless to create a new, uniquely-named prepared statement to execute such a statement.
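For example (a sketch; s_1 stands in for whichever of the program's generated names you want to drop):

// DEALLOCATE is plain SQL, so no new prepared statement is
// needed to run it; a simple PQexec() call does the job.
PGresult *res = PQexec(conn, "DEALLOCATE s_1");
if (PQresultStatus(res) != PGRES_COMMAND_OK)
    fprintf(stderr, "DEALLOCATE failed: %s\n", PQerrorMessage(conn));
PQclear(res);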
Most programs do not need anywhere near so many distinct prepared statements as to produce the kind of problem you report having.
I have some connection code that causes a timeout when queries take too long. The connection options are setup like this (timeout is an integer):
sql::ConnectOptionsMap com;
com["hostName"] = url; // Some string
com["userName"] = user; // Some string
com["password"] = pwd; // Some string
com["OPT_RECONNECT"] = true;
com["OPT_READ_TIMEOUT"] = timeout; // Usually 1 (second)
com["OPT_WRITE_TIMEOUT"] = timeout; // Usually 1 (second)
After testing the timeout setup above, I found that a throw does occur, but MySQL keeps executing the query. In other words, the try block below reaches the catch after the configured timeout with error code 2013 (an error code related to lost connections), but that doesn't stop MySQL from trying to execute the query:
// Other code
try
{
    stmt = con->createStatement();
    stmt->execute("DROP DATABASE IF EXISTS MySQLManagerTest_TimeoutRead");
    stmt->execute("CREATE DATABASE MySQLManagerTest_TimeoutRead");
    stmt->execute("USE MySQLManagerTest_TimeoutRead");
    stmt->execute("CREATE TABLE foo (bar INT)");
    for (int i = 0; i < 100; i++)
        stmt->execute("INSERT INTO foo (bar) VALUES (" + LC(i) + ")");
    // A bit of playing is needed in the loop condition
    // Make it longer than a second but not too long
    // Using 10000 seems to take roughly 5 seconds
    stmt->execute(
        "CREATE FUNCTION waitAWhile() "
        "RETURNS INT READS SQL DATA "
        "BEGIN "
        "DECLARE baz INT DEFAULT 0; "
        "WHILE baz < 10000 DO "
        "SET baz = baz + 1; "
        "END WHILE; "
        "RETURN baz; "
        "END;"
    );
    res = stmt->executeQuery("SELECT 1 FROM foo WHERE bar = waitAWhile()");
} catch (sql::SQLException &e) {
    std::cout << e.getErrorCode() << std::endl;
}
// Other code
By running "top" alongside the test code above, I could see that MySQL did not stop. Changing waitAWhile() into an infinite loop confirmed this further: I had to kill the MySQL process to make it stop.
This kind of timeout is not what I wanted, I wanted MySQL to give up on a query if it took too long. Can this be done (so that both my execution and MySQL stop doing work)? Additionally, can this be specified only for INSERT queries?
Some databases let you do this in regular SQL (SQL Server, for example, can set a maximum query execution time). However, it doesn't look like MySQL supports this; see the following accepted SO answer for more details:
MySQL - can I limit the maximum time allowed for a query to run?
I wrote the whole application in debug mode, and everything works fine there. Unfortunately, when I try to run the release build, two unexpected things happen.
Base information:
Qt 5.1.1
Qt Creator 2.8.1
Windows 7 64x
The application has a second thread that decapsulates data from a buffer which is updated in the main thread.
First problem - memory race:
In one of my methods, a strange memory race occurs in the release version - in debug everything is OK. The method looks like this:
std::vector<double> dataVec;
std::string raw("U+014-00300027950l");
std::vector<unsigned char> frame(raw.begin(), raw.end());
//EFrame_AccXPos == 1;
dataVec.push_back(decapsulateAcc(frame.begin() + EFrame_AccXPos));

double Deserializator::decapsulateAcc(std::vector<unsigned char>::iterator pos)
{
    const char frac[2] = {*(pos+2), *(pos+3)};
    const char integ[] = {*(pos+1)};
    double sign;
    if (*pos == consts::frame::plusSign) {
        sign = 1.0;
    } else {
        sign = -1.0;
    }
    double integer = (std::strtod(integ, 0));
    double fractial = (std::strtod(frac, 0))/100;
    qDebug() << QString::fromStdString(std::string(integ));
    //prints "014Rd??j?i" should be "0 ?s"
    qDebug() << QString::number(integer);
    //prints "14" should be "0"
    qDebug() << QString::number(fractial);
    //prints "0.14" - everything ok.
    return sign*integer + sign*fractial;
}
What is wrong with this method?
Second problem:
In the additional thread, I emit a signal to manage the data it decapsulates from the buffer. After the emit, the thread waits until a flag changes to false. When I add some qDebug prints it starts working, but without them it blocks (even though the flag is already false). Code below:
void DataManager::sendPlottingRequest()
{
    numberOfMessurement++;
    if (numberOfMessurement == plotAfterEquals) {
        numberOfMessurement = consts::startsFromZero;
        isStillPlotting = true;
        emit requestPlotting(dataForChart);
        //block in next line
        while (isStillPlotting);
        //it starts to work when:
        //int i = 0;
        //while (isStillPlotting) {
        //    i++;
        //    if (i == 10000) qDebug() << isStillPlotting;
        //}
    }
}

void DataManager::continueProcess()
{
    plottingState++;
    if (plottingState == consts::plottingFinished) {
        //program reaches this point
        isStillPlotting = false;
        plottingState = consts::startsFromZero;
    }
}
while (isStillPlotting); gets optimized to if (isStillPlotting) while (true); in release builds.
You should make isStillPlotting volatile, or use a QAtomicInt (or std::atomic) instead.
Alternatively, you can emit a plottingDone() signal from the if in continueProcess(), and connect it to a slot that executes the code that currently follows the while loop.
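For illustration, a minimal sketch of the atomic-flag variant, assuming the DataManager members from the question (std::atomic is used here; QAtomicInt would work the same way, and the waitForPlotting/finishPlotting names are made up):

#include <atomic>
#include <thread>

// Replaces the plain bool member; the optimizer can no longer hoist
// the load out of the loop, so the spin actually observes updates.
std::atomic<bool> isStillPlotting{false};

void waitForPlotting()
{
    // Reader side: spin until the other thread clears the flag.
    while (isStillPlotting.load(std::memory_order_acquire))
        std::this_thread::yield();  // avoid burning a full core
}

void finishPlotting()
{
    // Writer side: the store is visible to the spinning thread.
    isStillPlotting.store(false, std::memory_order_release);
}

That said, spinning on a flag inside a Qt slot is still fragile; the signal-based approach above avoids the busy wait entirely.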
PROBLEM: What's the cause of the memory leaks?
SITUATION:
I've built a simple command line program using C++ together with MySQL, using the MySQL C API.
The problem is, the program has many "minor" memory leaks from the object "malloc xx bytes", with xx ranging from a few bytes to 8 KB. All of the leaks link to the library libmysqlclient.18.dylib.
I've already removed all the mysql_free_result() calls from the code to see if that was the problem, but it's still the same.
My MySQL code mainly consists of simple code like:
to connect:
MYSQL *databaseConnection()
{
    // declarations
    MYSQL *connection = mysql_init(NULL);

    // connecting to database
    if (!mysql_real_connect(connection, SERVER, USER, PASSWORD, DATABASE, 0, NULL, 0))
    {
        std::cout << "Connection error: " << mysql_error(connection) << std::endl;
    }
    return connection;
}
executing a query:
MYSQL_RES *getQuery(MYSQL *connection, std::string query)
{
    // send the query to the database
    if (mysql_query(connection, query.c_str()))
    {
        std::cout << "MySQL query error: " << mysql_error(connection);
        exit(1);
    }
    return mysql_store_result(connection);
}
example of a query:
void resetTable(std::string table)
{
    MYSQL *connection = databaseConnection();
    MYSQL_RES *result;
    std::string query = "truncate table " + table;
    result = getQuery(connection, query);
    mysql_close(connection);
}
First of all: Opening a new connection for every query (like you're doing in resetTable()) is incredibly wasteful. What you really want to do is open a single connection when the application starts, use that for everything (possibly by storing the connection in a global), and close it when you're done.
To answer your question, though: You need to call mysql_free_result() on result sets once you're done with them.
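For illustration, a minimal sketch of resetTable() with the result freed, and taking the single shared connection as a parameter per the advice above (the signature change is my suggestion, not the original code):

void resetTable(MYSQL *connection, const std::string &table)
{
    std::string query = "truncate table " + table;
    MYSQL_RES *result = getQuery(connection, query);

    // TRUNCATE returns no row data, so result may be NULL here;
    // for SELECTs this releases the client-side result buffer.
    if (result != NULL)
        mysql_free_result(result);
}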