C++ - SQLite3 leaks handles in a multithreaded environment

I wrote a simple program that spawns 10 threads. Each thread opens a database (common to all the threads), or creates it (with the "Write-Ahead Log" option) if the open fails, creates a table in the database, and then goes into an infinite loop in which it adds one row at a time to its table. I found that the program leaks about 2 handles every 5 minutes. A tool called Memory Verify tells me that the leaked handles are SQLite3 file locks (line 34034 in version 3.7.13), but I am not sure whether the bug is in SQLite or in the way I use it.
I haven't specified any compiler options when building SQLite3, so it is built as Multi-Thread, and as far as I understand Multi-Thread should work fine in my case since every thread has its own SQLite connection.
To open or create a database I use the following code:
bool Create()
{
    int iFlags = 0;
    iFlags = iFlags | SQLITE_OPEN_READWRITE | SQLITE_OPEN_NOMUTEX | SQLITE_OPEN_CREATE;
    return sqlite3_open_v2(dbName_sm.c_str(), &pHandle_m, iFlags, 0) == SQLITE_OK;
}

bool Open()
{
    int iFlags = 0;
    iFlags = iFlags | SQLITE_OPEN_READWRITE | SQLITE_OPEN_NOMUTEX;
    return sqlite3_open_v2(dbName_sm.c_str(), &pHandle_m, iFlags, 0) == SQLITE_OK;
}
The tight loop in every thread calls ExecuteQuery, which does the prepare, step and finalize of an INSERT statement:
bool ExecuteQuery(const std::string& statement)
{
    bool res = Prepare(statement);
    if(!res)
    {
        return false;
    }
    SQLiteStatus status = Step();
    Finalize();
    res = (ESuccess == status || EDatabaseDone == status);
    return res;
}

bool Prepare(const std::string& statement)
{
    return sqlite3_prepare_v2(pHandle_m, statement.c_str(), -1, &pStmt_m, 0) == SQLITE_OK;
}
enum SQLiteStatus { ESuccess, EDatabaseDone, EDatabaseTimeout, EDatabaseError };

SQLiteStatus Step()
{
    int iRet = sqlite3_step(pStmt_m);
    if (iRet == SQLITE_DONE)
    {
        return EDatabaseDone;
    }
    else if (iRet == SQLITE_BUSY)
    {
        return EDatabaseTimeout;
    }
    else if (iRet != SQLITE_ROW)
    {
        return EDatabaseError;
    }
    return ESuccess;
}

bool Finalize()
{
    int iRet = sqlite3_finalize(pStmt_m);
    pStmt_m = 0;
    return iRet == SQLITE_OK;
}
Do you guys see any mistake in my code, or is it a known issue in SQLite? I have been googling it for a couple of days but I couldn't find anything about it.
Thank you very much for your help.
Regards,
Andrea
P.S. I forgot to say that I am running my test on a Windows XP 64-bit PC, the compiler is VS2010, the application is compiled as 32-bit, and the SQLite version is 3.7.13.

Check whether you call sqlite3_reset after every sqlite3_step, because this is one case that can cause leaks: after preparing a statement with sqlite3_prepare and executing it with sqlite3_step, you should always reset it with sqlite3_reset.
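For illustration, a minimal sketch of that pattern applied to the question's ExecuteQuery (same member names assumed; only the sqlite3_reset call is new):

bool ExecuteQuery(const std::string& statement)
{
    if(!Prepare(statement))
    {
        return false;
    }
    SQLiteStatus status = Step();
    // Reset the statement after stepping it, as suggested above,
    // before it is finalized (or stepped again).
    sqlite3_reset(pStmt_m);
    Finalize();
    return (ESuccess == status || EDatabaseDone == status);
}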

Related

Implementing background process in a dummy C++ shell

I've been trying to mimic & in my dummy shell.
The foreground process works fine, but as soon as I include the & symbol it doesn't behave as expected: the program first executes the process as if it were a foreground process (which it should not) and then it just freezes until I press the Enter key.
Here is the snippet of my code.
if(background)
{
    int bgpid;
    pid_t fork_return;
    fork_return = fork();
    if(fork_return == 0)
    {
        setpgid(0,0);
        if(execvp(path, args) == -1)
        {
            bgpid = getpid();
            cout<<"Error\n";
            return 1;
        }
        else if(fork_return != -1)
        {
            addToTable(bgpid);
            return 1;
        }
    }else{
        cout<<"ERROR\n";
        return 1;
    }
}
Output Image is also attached here
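For reference, the usual shape of this pattern is to fork, exec in the child, and have the parent record the child's pid without waiting on it. A rough sketch under those assumptions (path, args and addToTable stand in for the question's own variables and helper):

#include <sys/types.h>
#include <unistd.h>
#include <iostream>

// Hypothetical stand-in for the question's bookkeeping function.
void addToTable(pid_t pid);

int launchBackground(const char* path, char* const args[])
{
    pid_t fork_return = fork();
    if(fork_return == 0)
    {
        // Child: move into its own process group, then exec.
        setpgid(0, 0);
        execvp(path, args);
        // execvp only returns on failure.
        std::cout << "Error\n";
        _exit(1);
    }
    else if(fork_return > 0)
    {
        // Parent: remember the child's pid and return immediately
        // instead of waiting, so the job keeps running in the background.
        addToTable(fork_return);
        return 0;
    }
    // fork() failed.
    std::cout << "ERROR\n";
    return 1;
}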

postgresql consume more memory in db server for long running connection

We have a C++ server application which connects to a PostgreSQL database using the libpq library. The application creates hundreds of connections to the database, and most of the connections live for the lifetime of the application.
Initially the application was running fine, but over a period of time the postgres server consumes more and more memory for the long-running connections. By writing the sample program below I found out that creating prepared statements using PQsendPrepare and PQsendQueryPrepared is what causes the memory consumption on the database server.
How can we fix this server memory issue? Is there any libpq function to free the memory on the server?
#include <iostream>
#include <fstream>
#include <string>
#include <sstream>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>   // for sleep()
#include <libpq-fe.h>
// Helper assumed by the snippet (not shown in the original post):
// close the connection and exit with an error status.
static void do_exit(PGconn *conn)
{
    PQfinish(conn);
    exit(1);
}
int main(int argc, char *argv[]) {
const int LEN = 10;
const char *paramValues[1];
int paramFormats[1];
int rowId = 7369;
Oid paramTypes[1];
char str[LEN];
snprintf(str, LEN, "%d", rowId);
paramValues[0] = str;
paramTypes[0]=20;
paramFormats[0]=0;
long int c=1;
PGresult* result;
//PGconn *conn = PQconnectdb("user=scott dbname=dame");
PGconn *conn = PQsetdbLogin ("", "", NULL, NULL, "dame", "scott", "tiger") ;
if (PQstatus(conn) == CONNECTION_BAD) {
fprintf(stderr, "Connection to database failed: %s\n",
PQerrorMessage(conn));
do_exit(conn);
}
char *stm = "SELECT coalesce(ename,'test') from emp where empno=$1";
for(;;)
{
std::stringstream strStream ;
strStream << c++ ;
std::string strStatementName = "s_" + strStream.str() ;
if(PQsendPrepare(conn,strStatementName.c_str(), stm,1,paramTypes) )
{
result = PQgetResult(conn);
if (PQresultStatus(result) != PGRES_COMMAND_OK)
{
PQclear(result) ;
result = NULL ;
do
{
result = PQgetResult(conn);
if(result != NULL)
{
PQclear (result) ;
}
} while (result != NULL) ;
std::cout<<"error prepare"<<PQerrorMessage (conn)<<std::endl;
break;
}
PQclear(result) ;
result = NULL ;
do
{
result = PQgetResult(conn);
if(result != NULL)
{
PQclear (result) ;
}
} while (result != NULL) ;
}
else
{
std::cout<<"error:"<<PQerrorMessage (conn)<<std::endl;
break;
}
if(!PQsendQueryPrepared(conn,
strStatementName.c_str(),1,(const char* const *)paramValues,paramFormats,paramFormats,0))
{
std::cout<<"error:prepared "<<PQerrorMessage (conn)<<std::endl;
}
if (!PQsetSingleRowMode(conn))
{
std::cout<<"error singrow mode "<<PQerrorMessage (conn)<<std::endl;
}
result = PQgetResult(conn);
if (result != NULL)
{
if((PGRES_FATAL_ERROR == PQresultStatus(result)) || (PGRES_BAD_RESPONSE == PQresultStatus(result)))
{
PQclear(result);
result = NULL ;
do
{
result = PQgetResult(conn);
if(result != NULL)
{
PQclear (result) ;
}
} while (result != NULL) ;
break;
}
if (PQresultStatus(result) == PGRES_SINGLE_TUPLE)
{
std::ofstream myfile;
myfile.open ("native.txt",std::ofstream::out | std::ofstream::app);
myfile << PQgetvalue(result, 0, 0)<<"\n";
myfile.close();
PQclear(result);
result = NULL ;
do
{
result = PQgetResult(conn) ;
if(result != NULL)
{
PQclear (result) ;
}
}
while(result != NULL) ;
sleep(10);
}
else if(PQresultStatus(result) == PGRES_TUPLES_OK || PQresultStatus(result) == PGRES_COMMAND_OK)
{
PQclear(result);
result = NULL ;
do
{
result = PQgetResult(conn) ;
if(result != NULL)
{
PQclear (result) ;
}
}
while(result != NULL) ;
}
}
}
PQfinish(conn);
return 0;
}
Initially the application was running fine, but over a period of time the postgres server consumes more and more memory for the long-running connections. By writing the sample program below I found out that creating prepared statements using PQsendPrepare and PQsendQueryPrepared is what causes the memory consumption on the database server.
Well that seems unsurprising. You are generating a new prepared statement name at each iteration of your outer loop, and then creating and executing a prepared statement of that name. All the resulting, differently-named prepared statements will indeed remain in the server's memory as long as the connection is open. This is intentional.
How can we fix this server memory issue?
I'd characterize it as a program logic issue, not a server memory issue, at least as far as the test program goes. You obtain resources (prepared statements) and then allow them to hang around when you have no further use for them. The statements aren't leaked per se, as you could recreate the algorithmically-generated statement names, but the problem is similar to a resource leak. In your program, not in Postgres.
If you want to use one-off prepared statements then give them the empty string, "", as their name. Postgres calls these "unnamed" statements. Each unnamed statement you prepare will replace any previous one belonging to the same connection.
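Applied to the question's test program, that would look roughly like this (a sketch; the asynchronous calls are kept and the result-draining loops are elided):

// Prepare the unnamed statement; each new unnamed prepare replaces the previous one.
PQsendPrepare(conn, "", stm, 1, paramTypes);
// ... drain the results with PQgetResult() exactly as in the original loop ...

// Execute the unnamed statement by passing "" as the statement name.
PQsendQueryPrepared(conn, "", 1, paramValues, NULL, paramFormats, 0);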
But even that's a hack. The most important feature of prepared statements is that they can be reused. Every statement prepared by your test program is identical, so not only are you wasting memory, you are also wasting CPU cycles. You should prepare it once only -- via PQsendPrepare(), or maybe simply PQprepare() -- and when it has successfully been prepared, execute it as many times as you want with PQsendQueryPrepared() or PQexecPrepared(), passing the same statement name every time (but possibly different parameters).
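A minimal sketch of that approach, using the synchronous PQprepare()/PQexecPrepared() for brevity and assuming the question's conn, stm, paramTypes, paramValues and paramFormats are in scope:

// Prepare the statement once, under a fixed name...
PGresult* prep = PQprepare(conn, "get_ename", stm, 1, paramTypes);
if (PQresultStatus(prep) != PGRES_COMMAND_OK)
    fprintf(stderr, "prepare failed: %s\n", PQerrorMessage(conn));
PQclear(prep);

// ...then execute it as many times as needed; only the parameters change.
for (int i = 0; i < 100; ++i)
{
    PGresult* res = PQexecPrepared(conn, "get_ename", 1, paramValues,
                                   NULL /* lengths ignored for text params */,
                                   paramFormats, 0);
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
        printf("%s\n", PQgetvalue(res, 0, 0));
    PQclear(res);
}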
Is there any libpq function to free the memory on the server?
The documentation for the synchronous versions of the query functions says:
Prepared statements for use with PQexecPrepared can also be created by executing SQL PREPARE statements. Also, although there is no libpq function for deleting a prepared statement, the SQL DEALLOCATE statement can be used for that purpose.
To the best of my understanding, there is only one flavor of prepared statement in Postgres, used by the synchronous and asynchronous functions alike. So no, libpq provides no function specifically for dropping prepared statements associated with a connection, but you can write a statement in SQL to do the job. Of course, it would be pointless to create a new, uniquely-named prepared statement to execute such a statement.
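So if you did need to drop a named prepared statement from the server side, a sketch of that would simply run the SQL over the same connection (s_1 being one of the question's generated names):

// DEALLOCATE drops one named prepared statement;
// DEALLOCATE ALL drops every statement prepared on this connection.
PGresult* res = PQexec(conn, "DEALLOCATE s_1");
if (PQresultStatus(res) != PGRES_COMMAND_OK)
    fprintf(stderr, "DEALLOCATE failed: %s\n", PQerrorMessage(conn));
PQclear(res);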
Most programs do not need anywhere near so many distinct prepared statements as to produce the kind of problem you report having.

EnumProcesses - weird behaviour

I am seeing some weird behaviour while using the Windows API function EnumProcesses().
I have a function that determines whether a process with a certain name is already running, and it delivers different results depending on whether I open the executable manually (double-click) or open it via the shell.
When I open it via the shell it detects that it is running only once (itself) and all is fine. When I open it by double-clicking the .exe file, however, the function returns true (already running) because the loop lists the same process twice.
For the following code snippet it should be mentioned that
this->thisExecutableFile
contains argv[0] (initialised when the program starts) to get the process's own name, as you can see here:
int main(int argc, char* argv[])
{
    ClientUpdate* update = ClientUpdate::getInstance();
    update->setThisExecutableFile(argv[0]);
    if (update->clientUpdateProcessIsRunning() == false) {
    ...
My goal is to find out if another instance of this process is already running and in this case exit it.
Here is my code:
bool ClientUpdate::clientUpdateProcessIsRunning()
{
    bool retVal = false;
    uint16_t processCount = 0;
    unsigned long aProcesses[1024], cbNeeded, cProcesses;
    if(!EnumProcesses(aProcesses, sizeof(aProcesses), &cbNeeded))
        return false;
    cProcesses = cbNeeded / sizeof(unsigned long);
    for(unsigned int i = 0; i < cProcesses; i++) {
        if (aProcesses[i] == 0) {
            continue;
        }
        HANDLE hProcess = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, 0, aProcesses[i]);
        wchar_t buffer[50];
        GetModuleBaseNameW(hProcess, 0, buffer, 50);
        CloseHandle(hProcess);
        std::wstring tempBuffer(buffer);
        std::string tempStringBuffer(tempBuffer.begin(), tempBuffer.end());
        boost::filesystem::path p(this->thisExecutableFile);
        if(_strcmpi(p.filename().string().c_str(), tempStringBuffer.c_str()) == 0) {
            processCount++;
            if(processCount > 1) {
                retVal = true;
                break;
            }
        }
    }
    return retVal;
}
I know that the base path is different when double-clicking the file versus calling it via the shell (the shell produces only the filename, while double-clicking passes the entire path plus filename into argv[0]), but I fixed that issue using
boost::filesystem::path p(this->thisExecutableFile);
p.filename()
which returns the correct filename (without the path) in both cases; I verified this with print output.
I am pretty puzzled why EnumProcesses() returns the same file twice when the program is started via double-click instead of the shell. It's not spawning two processes, and in Task Manager I don't see anything like that either.
Is this a bug, or is there something about this function that I need to know and couldn't find in the docs?
Thanks to the hint by Richard Critten I was able to fix it. My method is much smaller and easier now (and probably also a lot more performant than scanning the entire process list). :D
Here is the solution
bool ClientUpdate::clientUpdateProcessIsRunning()
{
    HANDLE hMutex = CreateMutexA(NULL, TRUE, "client-updater-mtx");
    DWORD dwErr = GetLastError();
    return dwErr == ERROR_ALREADY_EXISTS;
}
Thanks!

Nvidia graphics driver causing noticeable frame stuttering

OK, I've been researching this issue for a few days now, so let me go over what I know so far, which leads me to believe this might be an issue with NVIDIA's driver and not my code.
Basically my game starts stuttering after running for a few seconds (random frames take 70ms instead of 16ms, on a fairly regular pattern). This ONLY happens if a setting called "Threaded Optimization" is enabled in the NVIDIA control panel (latest drivers, Windows 10). Unfortunately this setting is enabled by default, and I'd rather not have to make people tweak their settings to get an enjoyable experience.
The game is not CPU or GPU intensive (2ms a frame without vsync). It's not calling any OpenGL functions that need to synchronize data, and it's not streaming any buffers or reading data back from the GPU or anything. It's about the simplest possible renderer.
The problem was always there; it only started becoming noticeable when I added in fmod for audio. fmod is not the cause of this (more on that later in the post).
Trying to debug the problem with NVIDIA Nsight made the problem go away: "Start Collecting Data" instantly makes the stuttering disappear. No dice here.
In the profiler, a lot of CPU time is spent in "nvoglv32.dll". This thread only spawns if Threaded Optimization is on. I suspect it's a synchronization issue then, so I debug with the Visual Studio Concurrency Visualizer.
A-HA!
Investigating these blocks of CPU time on the NVIDIA thread, the earliest named function I can get in their call stack is "CreateToolhelp32Snapshot", followed by a lot of time spent in Thread32Next. I noticed Thread32Next in the profiler when looking at CPU times earlier, so it does seem like I'm on the right track.
So it looks like the NVIDIA driver periodically grabs a snapshot of the whole process for some reason. What could possibly be the reason, why is it doing this, and how do I stop it?
This also explains why the problem started becoming noticeable once I added in fmod: the driver is grabbing info for all of the process's threads, and fmod spawns a lot of threads.
Any help? Is this just a bug in NVIDIA's driver, or is there something I can do to fix it other than telling people to disable Threaded "Optimization"?
Edit 1: The same issue occurs with current NVIDIA drivers on my laptop too, so I'm not crazy.
Edit 2: The same issue occurs on version 362 (the previous major version) of NVIDIA's driver.
... or is there something I can do to fix it other than telling people to disable Threaded "Optimization"?
Yes.
You can create a custom "Application Profile" for your game using NVAPI and disable the "Threaded Optimization" setting in it.
There is a .PDF file on the NVIDIA site with some help and code examples regarding NVAPI usage.
In order to see and manage all your NVIDIA profiles I recommend using NVIDIA Inspector. It is more convenient than the default NVIDIA Control Panel.
Also, here is my code example, which creates an "Application Profile" with "Threaded Optimization" disabled:
#include <stdlib.h>
#include <stdio.h>
#include <nvapi.h>
#include <NvApiDriverSettings.h>
const wchar_t* profileName = L"Your Profile Name";
const wchar_t* appName = L"YourGame.exe";
const wchar_t* appFriendlyName = L"Your Game Casual Name";
const bool threadedOptimization = false;
void CheckError(NvAPI_Status status)
{
if (status == NVAPI_OK)
return;
NvAPI_ShortString szDesc = {0};
NvAPI_GetErrorMessage(status, szDesc);
printf("NVAPI error: %s\n", szDesc);
exit(-1);
}
void SetNVUstring(NvAPI_UnicodeString& nvStr, const wchar_t* wcStr)
{
for (int i = 0; i < NVAPI_UNICODE_STRING_MAX; i++)
nvStr[i] = 0;
int i = 0;
while (wcStr[i] != 0)
{
nvStr[i] = wcStr[i];
i++;
}
}
int main(int argc, char* argv[])
{
NvAPI_Status status;
NvDRSSessionHandle hSession;
status = NvAPI_Initialize();
CheckError(status);
status = NvAPI_DRS_CreateSession(&hSession);
CheckError(status);
status = NvAPI_DRS_LoadSettings(hSession);
CheckError(status);
// Fill Profile Info
NVDRS_PROFILE profileInfo;
profileInfo.version = NVDRS_PROFILE_VER;
profileInfo.isPredefined = 0;
SetNVUstring(profileInfo.profileName, profileName);
// Create Profile
NvDRSProfileHandle hProfile;
status = NvAPI_DRS_CreateProfile(hSession, &profileInfo, &hProfile);
CheckError(status);
// Fill Application Info
NVDRS_APPLICATION app;
app.version = NVDRS_APPLICATION_VER_V1;
app.isPredefined = 0;
SetNVUstring(app.appName, appName);
SetNVUstring(app.userFriendlyName, appFriendlyName);
SetNVUstring(app.launcher, L"");
SetNVUstring(app.fileInFolder, L"");
// Create Application
status = NvAPI_DRS_CreateApplication(hSession, hProfile, &app);
CheckError(status);
// Fill Setting Info
NVDRS_SETTING setting;
setting.version = NVDRS_SETTING_VER;
setting.settingId = OGL_THREAD_CONTROL_ID;
setting.settingType = NVDRS_DWORD_TYPE;
setting.settingLocation = NVDRS_CURRENT_PROFILE_LOCATION;
setting.isCurrentPredefined = 0;
setting.isPredefinedValid = 0;
setting.u32CurrentValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;
setting.u32PredefinedValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;
// Set Setting
status = NvAPI_DRS_SetSetting(hSession, hProfile, &setting);
CheckError(status);
// Apply (or save) our changes to the system
status = NvAPI_DRS_SaveSettings(hSession);
CheckError(status);
printf("Success.\n");
NvAPI_DRS_DestroySession(hSession);
return 0;
}
Thanks for subGlitch's answer first. Based on that proposal, here is a safer version which lets you cache the current threaded optimization setting, change it, and then restore it afterward.
The code is below:
#include <stdlib.h>
#include <stdio.h>
#include <nvapi.h>
#include <NvApiDriverSettings.h>
enum NvThreadOptimization {
NV_THREAD_OPTIMIZATION_AUTO = 0,
NV_THREAD_OPTIMIZATION_ENABLE = 1,
NV_THREAD_OPTIMIZATION_DISABLE = 2,
NV_THREAD_OPTIMIZATION_NO_SUPPORT = 3
};
bool NvAPI_OK_Verify(NvAPI_Status status)
{
if (status == NVAPI_OK)
return true;
NvAPI_ShortString szDesc = {0};
NvAPI_GetErrorMessage(status, szDesc);
char szResult[255];
sprintf(szResult, "NVAPI error: %s\n\0", szDesc);
printf(szResult);
return false;
}
NvThreadOptimization GetNVidiaThreadOptimization()
{
NvAPI_Status status;
NvDRSSessionHandle hSession;
NvThreadOptimization threadOptimization = NV_THREAD_OPTIMIZATION_NO_SUPPORT;
status = NvAPI_Initialize();
if(!NvAPI_OK_Verify(status))
return threadOptimization;
status = NvAPI_DRS_CreateSession(&hSession);
if(!NvAPI_OK_Verify(status))
return threadOptimization;
status = NvAPI_DRS_LoadSettings(hSession);
if(!NvAPI_OK_Verify(status))
{
NvAPI_DRS_DestroySession(hSession);
return threadOptimization;
}
NvDRSProfileHandle hProfile;
status = NvAPI_DRS_GetBaseProfile(hSession, &hProfile);
if(!NvAPI_OK_Verify(status))
{
NvAPI_DRS_DestroySession(hSession);
return threadOptimization;
}
NVDRS_SETTING originalSetting;
originalSetting.version = NVDRS_SETTING_VER;
status = NvAPI_DRS_GetSetting(hSession, hProfile, OGL_THREAD_CONTROL_ID, &originalSetting);
if(NvAPI_OK_Verify(status))
{
threadOptimization = (NvThreadOptimization)originalSetting.u32CurrentValue;
}
NvAPI_DRS_DestroySession(hSession);
return threadOptimization;
}
void SetNVidiaThreadOptimization(NvThreadOptimization threadedOptimization)
{
NvAPI_Status status;
NvDRSSessionHandle hSession;
if(threadedOptimization == NV_THREAD_OPTIMIZATION_NO_SUPPORT)
return;
status = NvAPI_Initialize();
if(!NvAPI_OK_Verify(status))
return;
status = NvAPI_DRS_CreateSession(&hSession);
if(!NvAPI_OK_Verify(status))
return;
status = NvAPI_DRS_LoadSettings(hSession);
if(!NvAPI_OK_Verify(status))
{
NvAPI_DRS_DestroySession(hSession);
return;
}
NvDRSProfileHandle hProfile;
status = NvAPI_DRS_GetBaseProfile(hSession, &hProfile);
if(!NvAPI_OK_Verify(status))
{
NvAPI_DRS_DestroySession(hSession);
return;
}
NVDRS_SETTING setting;
setting.version = NVDRS_SETTING_VER;
setting.settingId = OGL_THREAD_CONTROL_ID;
setting.settingType = NVDRS_DWORD_TYPE;
setting.u32CurrentValue = (EValues_OGL_THREAD_CONTROL)threadedOptimization;
status = NvAPI_DRS_SetSetting(hSession, hProfile, &setting);
if(!NvAPI_OK_Verify(status))
{
NvAPI_DRS_DestroySession(hSession);
return;
}
status = NvAPI_DRS_SaveSettings(hSession);
NvAPI_OK_Verify(status);
NvAPI_DRS_DestroySession(hSession);
}
Based on the two functions (Get/Set) above, you can save the original setting and restore it when your application exits. That way, your disabling of threaded optimization only affects your own application.
static NvThreadOptimization s_OriginalNVidiaThreadOptimization = NV_THREAD_OPTIMIZATION_NO_SUPPORT;

// Set
s_OriginalNVidiaThreadOptimization = GetNVidiaThreadOptimization();
if( s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_NO_SUPPORT
 && s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_DISABLE)
{
    SetNVidiaThreadOptimization(NV_THREAD_OPTIMIZATION_DISABLE);
}

// Restore
if( s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_NO_SUPPORT
 && s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_DISABLE)
{
    SetNVidiaThreadOptimization(s_OriginalNVidiaThreadOptimization);
}
Hate to state the obvious but I feel like it needs to be said.
Threaded optimization is notorious for causing stuttering in many games, even those that take advantage of multithreading. Unless your application works well with the threaded optimization setting, the only logical answer is to tell your users to disable it. If users are stubborn and don't want to do that, that's their fault.
The only bug in recent memory I can think of is that older versions of the nvidia driver caused applications w/ threaded optimization running in Wine to crash, but that's unrelated to the stuttering issue you describe.
Building off of subGlitch's answer, the following code checks whether an application profile already exists and, if so, updates the existing profile instead of creating a new one. It is also encapsulated in a function that can be called and that will bypass the logic if the NVIDIA API is not found on the system (AMD/Intel users) or if an issue is encountered that prevents modifying the profile:
#include <iostream>
#include <nvapi.h>
#include <NvApiDriverSettings.h>
const wchar_t* profileName = L"Application for testing nvidia api";
const wchar_t* appName = L"nvapi.exe";
const wchar_t* appFriendlyName = L"Nvidia api test";
const bool threadedOptimization = false;
bool nvapiStatusOk(NvAPI_Status status)
{
if (status != NVAPI_OK)
{
// will need to not print these in prod, just return false
// full list of codes in nvapi_lite_common.h line 249
std::cout << "Status Code:" << status << std::endl;
NvAPI_ShortString szDesc = { 0 };
NvAPI_GetErrorMessage(status, szDesc);
printf("NVAPI Error: %s\n", szDesc);
return false;
}
return true;
}
void setNVUstring(NvAPI_UnicodeString& nvStr, const wchar_t* wcStr)
{
for (int i = 0; i < NVAPI_UNICODE_STRING_MAX; i++)
nvStr[i] = 0;
int i = 0;
while (wcStr[i] != 0)
{
nvStr[i] = wcStr[i];
i++;
}
}
void initNvidiaApplicationProfile()
{
NvAPI_Status status;
// if status does not equal NVAPI_OK (0) after initialization,
// either the system does not use an nvidia gpu, or something went
// so wrong that we're unable to use the nvidia api...therefore do nothing
/*
if (!nvapiStatusOk(NvAPI_Initialize()))
return;
*/
// for debugging use ^ in prod
if (!nvapiStatusOk(NvAPI_Initialize()))
{
std::cout << "Unable to initialize Nvidia api" << std::endl;
return;
}
else
{
std::cout << "Nvidia api initialized successfully" << std::endl;
}
// initialize session
NvDRSSessionHandle hSession;
if (!nvapiStatusOk(NvAPI_DRS_CreateSession(&hSession)))
return;
// load settings
if (!nvapiStatusOk(NvAPI_DRS_LoadSettings(hSession)))
return;
// check if application already exists
NvDRSProfileHandle hProfile;
NvAPI_UnicodeString nvAppName;
setNVUstring(nvAppName, appName);
NVDRS_APPLICATION app;
app.version = NVDRS_APPLICATION_VER_V1;
// documentation states this will return ::NVAPI_APPLICATION_NOT_FOUND, however I cannot
// find where that is defined anywhere in the headers...so not sure what's going to happen with this?
//
// This is returning NVAPI_EXECUTABLE_NOT_FOUND, which might be what it's supposed to return when it can't
// find an existing application, and the documentation is just outdated?
status = NvAPI_DRS_FindApplicationByName(hSession, nvAppName, &hProfile, &app);
if (!nvapiStatusOk(status))
{
// if status does not equal NVAPI_EXECUTABLE_NOT_FOUND, then something bad happened and we should not proceed
if (status != NVAPI_EXECUTABLE_NOT_FOUND)
{
NvAPI_Unload();
return;
}
// create application as it does not already exist
// Fill Profile Info
NVDRS_PROFILE profileInfo;
profileInfo.version = NVDRS_PROFILE_VER;
profileInfo.isPredefined = 0;
setNVUstring(profileInfo.profileName, profileName);
// Create Profile
//NvDRSProfileHandle hProfile;
if (!nvapiStatusOk(NvAPI_DRS_CreateProfile(hSession, &profileInfo, &hProfile)))
{
NvAPI_Unload();
return;
}
// Fill Application Info, can't re-use app variable for some reason
NVDRS_APPLICATION app2;
app2.version = NVDRS_APPLICATION_VER_V1;
app2.isPredefined = 0;
setNVUstring(app2.appName, appName);
setNVUstring(app2.userFriendlyName, appFriendlyName);
setNVUstring(app2.launcher, L"");
setNVUstring(app2.fileInFolder, L"");
// Create Application
if (!nvapiStatusOk(NvAPI_DRS_CreateApplication(hSession, hProfile, &app2)))
{
NvAPI_Unload();
return;
}
}
// update profile settings
NVDRS_SETTING setting;
setting.version = NVDRS_SETTING_VER;
setting.settingId = OGL_THREAD_CONTROL_ID;
setting.settingType = NVDRS_DWORD_TYPE;
setting.settingLocation = NVDRS_CURRENT_PROFILE_LOCATION;
setting.isCurrentPredefined = 0;
setting.isPredefinedValid = 0;
setting.u32CurrentValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;
setting.u32PredefinedValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;
// load settings
if (!nvapiStatusOk(NvAPI_DRS_SetSetting(hSession, hProfile, &setting)))
{
NvAPI_Unload();
return;
}
// save changes
if (!nvapiStatusOk(NvAPI_DRS_SaveSettings(hSession)))
{
NvAPI_Unload();
return;
}
// disable in prod
std::cout << "Nvidia application profile updated successfully" << std::endl;
NvAPI_DRS_DestroySession(hSession);
// unload the api as we're done with it
NvAPI_Unload();
}
int main()
{
// if building for anything other than windows, we'll need to not call this AND have
// some preprocessor logic to not include any of the api code. No linux love apparently...so
// that's going to be a thing we'll have to figure out down the road -_-
initNvidiaApplicationProfile();
std::cin.get();
return 0;
}

C++ Map Iteration and Stack Corruption

I am trying to use a system of maps to store and update data for a chat server. The application is multithreaded and uses a lock system to prevent multiple threads from accessing the data at the same time.
The problem is this: when a client is removed from the map individually, it is OK. However, when I try to call multiple closes, it leaves some entries in memory. If I at any point call ::clear() on the map, it causes a debug assertion error with "Iterator not compatible" or similar. The code works the first time (tested with 80+ consoles connected), but because it leaves chunks behind, it will not work again. I have tried researching ways around this, and I have written systems to halt code execution until each process has completed. I appreciate any help, and I have attached the relevant code snippets.
//portion of server code that handles shutting down
DWORD WINAPI runserver(void *params) {
runserverPARAMS *p = (runserverPARAMS*)params;
/*Server stuff*/
serverquit = 0;
//client based cleanup
vector<int> tokill;
map<int,int>::iterator it = clientsockets.begin();
while(it != clientsockets.end()) {
tokill.push_back(it->first);
++it;
}
for(;;) {
for each (int x in tokill) {
clientquit[x] = 1;
while(clientoffline[x] != 1) {
//halting execution until the thread has terminated
}
destoryclient(x);
}
}
//client thread based cleanup complete.
return 0;
}
//clientioprelim
DWORD WINAPI clientioprelim(void* params) {
CLIENTthreadparams *inparams = (CLIENTthreadparams *)params;
/*Socket stuff*/
for(;;) {
/**/
}
else {
if(clientquit[inparams->clientid] == 1)
break;
}
}
clientoffline[inparams->clientid] = 1;
return 0;
}
int LOCKED; //exported as extern via libraries.h so it's visible to other source files
void destoryclient(int clientid) {
for(;;) {
if(LOCKED == 0) {
LOCKED = 1;
shutdown(clientsockets[clientid], 2);
closesocket(clientsockets[clientid]);
if((clientsockets.count(clientid) != 0) && (clientsockets.find(clientid) != clientsockets.end()))
clientsockets.erase(clientsockets.find(clientid));
if((clientname.count(clientid) != 0) && (clientname.find(clientid) != clientname.end()))
clientname.erase(clientname.find(clientid));
if((clientusername.count(clientid) != 0) && (clientusername.find(clientid) != clientusername.end()))
clientusername.erase(clientusername.find(clientid));
if((clientaddr.count(clientid) != 0) && (clientaddr.find(clientid) != clientaddr.end()))
clientaddr.erase(clientusername.find(clientid));
if((clientcontacts.count(clientid) != 0) && (clientcontacts.find(clientid) != clientcontacts.end()))
clientcontacts.erase(clientcontacts.find(clientid));
if((clientquit.count(clientid) != 0) && (clientquit.find(clientid) != clientquit.end()))
clientquit.erase(clientquit.find(clientid));
if((clientthreads.count(clientid) != 0) && (clientthreads.find(clientid) != clientthreads.end()))
clientthreads.erase(clientthreads.find(clientid));
LOCKED = 0;
break;
}
}
return;
}
Are you really using an int for locking, or was that just a simplification of the code? If you really use an int, this won't work: the critical section can be entered twice (or more) simultaneously if both threads check the variable before one of them assigns to it (simplified). See mutexes on Wikipedia for reference. You could use some sort of mutex provided by Windows, or Boost.Thread, instead of the int.
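For illustration, a minimal sketch of that suggestion using std::mutex (boost::mutex offers essentially the same interface on pre-C++11 toolchains); the shared int and the busy-wait loop disappear:

#include <map>
#include <mutex>
#include <winsock2.h>   // shutdown(), closesocket()

// Assumed to be the same globals as in the question.
extern std::map<int,int> clientsockets;
std::mutex clientsMutex;   // guards all of the client maps

void destoryclient(int clientid)
{
    // The lock is acquired here and released automatically when the
    // function returns, even on an early return or an exception.
    std::lock_guard<std::mutex> lock(clientsMutex);

    shutdown(clientsockets[clientid], 2);
    closesocket(clientsockets[clientid]);

    // map::erase(key) does nothing if the key is absent, so the
    // count()/find() checks are unnecessary.
    clientsockets.erase(clientid);
    // ... erase clientid from clientname, clientusername, clientaddr,
    // clientcontacts, clientquit and clientthreads in the same way ...
}

Every other piece of code that reads or writes these maps (the server shutdown loop and the client threads) would need to lock the same mutex.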