Nvidia graphics driver causing noticeable frame stuttering - c++

OK, I've been researching this issue for a few days now, so let me go over what I know so far, which leads me to believe this might be an issue with NVIDIA's driver and not my code.
Basically my game starts stuttering after running for a few seconds (random frames take 70ms instead of 16ms, on a fairly regular pattern). This ONLY happens if a setting called "Threaded Optimization" is enabled in the NVIDIA Control Panel (latest drivers, Windows 10). Unfortunately this setting is enabled by default, and I'd rather not have people tweak their settings to get an enjoyable experience.
The game is not CPU or GPU intensive (2ms a frame with vsync off). It's not calling any OpenGL functions that need to synchronize data, and it's not streaming any buffers or reading data back from the GPU or anything. It's about the simplest possible renderer.
The problem was always there; it only became noticeable when I added in fmod for audio. fmod is not the cause of this (more on that later in the post).
Trying to debug the problem with NVIDIA Nsight made it go away: clicking "Start Collecting Data" instantly stops the stuttering. No dice there.
In the profiler, a lot of CPU time is spent in "nvoglv32.dll". This worker thread only spawns if Threaded Optimization is on. I suspect a synchronization issue, so I debug with Visual Studio's Concurrency Visualizer.
A-HA!
Investigating these blocks of CPU time on the NVIDIA thread, the earliest named function I can get in its call stack is "CreateToolhelp32Snapshot", followed by a lot of time spent in Thread32Next. I noticed Thread32Next in the profiler when looking at CPU times earlier, so this does seem like I'm on the right track.
So it looks like the NVIDIA driver is periodically grabbing a snapshot of the whole process for some reason. What could possibly be the reason, why is it doing this, and how do I stop it?
This also explains why the problem became noticeable once I added in fmod: the driver is grabbing info for all of the process's threads, and fmod spawns a lot of threads.
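For context, the Toolhelp pattern showing up in that call stack usually looks something like the sketch below (my own illustration of how CreateToolhelp32Snapshot and Thread32Next are typically used, not the driver's actual code); the walk takes longer the more threads the process owns:
#include <windows.h>
#include <tlhelp32.h>
#include <cstdio>

// Minimal sketch: enumerate every thread owned by the current process the way
// a Toolhelp snapshot is normally walked. With many threads (e.g. fmod's
// worker threads) this loop has correspondingly more entries to visit.
void DumpProcessThreads()
{
    HANDLE hSnap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (hSnap == INVALID_HANDLE_VALUE)
        return;
    THREADENTRY32 te = { 0 };
    te.dwSize = sizeof(te);
    if (Thread32First(hSnap, &te))
    {
        do
        {
            if (te.th32OwnerProcessID == GetCurrentProcessId())
                printf("thread id: %lu\n", te.th32ThreadID);
        } while (Thread32Next(hSnap, &te));
    }
    CloseHandle(hSnap);
}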
Any help? Is this just a bug in NVIDIA's driver, or is there something I can do to fix it other than telling people to disable Threaded "Optimization"?
Edit 1: The same issue occurs with current NVIDIA drivers on my laptop too, so I'm not crazy.
Edit 2: The same issue also occurs on version 362 (the previous major version) of NVIDIA's driver.

... or is there something I can do to fix it other than telling people to disable Threaded "Optimization"?
Yes.
You can create a custom "Application Profile" for your game using NVAPI and disable the "Threaded Optimization" setting in it.
There is a PDF on NVIDIA's site with some help and code examples regarding NVAPI usage.
In order to see and manage all your NVIDIA profiles I recommend using NVIDIA Inspector. It is more convenient than the default NVIDIA Control Panel.
Also, here is my code example, which creates an "Application Profile" with "Threaded Optimization" disabled:
#include <stdlib.h>
#include <stdio.h>
#include <nvapi.h>
#include <NvApiDriverSettings.h>
const wchar_t* profileName = L"Your Profile Name";
const wchar_t* appName = L"YourGame.exe";
const wchar_t* appFriendlyName = L"Your Game Casual Name";
const bool threadedOptimization = false;
void CheckError(NvAPI_Status status)
{
if (status == NVAPI_OK)
return;
NvAPI_ShortString szDesc = {0};
NvAPI_GetErrorMessage(status, szDesc);
printf("NVAPI error: %s\n", szDesc);
exit(-1);
}
void SetNVUstring(NvAPI_UnicodeString& nvStr, const wchar_t* wcStr)
{
for (int i = 0; i < NVAPI_UNICODE_STRING_MAX; i++)
nvStr[i] = 0;
int i = 0;
while (wcStr[i] != 0)
{
nvStr[i] = wcStr[i];
i++;
}
}
int main(int argc, char* argv[])
{
NvAPI_Status status;
NvDRSSessionHandle hSession;
status = NvAPI_Initialize();
CheckError(status);
status = NvAPI_DRS_CreateSession(&hSession);
CheckError(status);
status = NvAPI_DRS_LoadSettings(hSession);
CheckError(status);
// Fill Profile Info
NVDRS_PROFILE profileInfo;
profileInfo.version = NVDRS_PROFILE_VER;
profileInfo.isPredefined = 0;
SetNVUstring(profileInfo.profileName, profileName);
// Create Profile
NvDRSProfileHandle hProfile;
status = NvAPI_DRS_CreateProfile(hSession, &profileInfo, &hProfile);
CheckError(status);
// Fill Application Info
NVDRS_APPLICATION app;
app.version = NVDRS_APPLICATION_VER_V1;
app.isPredefined = 0;
SetNVUstring(app.appName, appName);
SetNVUstring(app.userFriendlyName, appFriendlyName);
SetNVUstring(app.launcher, L"");
SetNVUstring(app.fileInFolder, L"");
// Create Application
status = NvAPI_DRS_CreateApplication(hSession, hProfile, &app);
CheckError(status);
// Fill Setting Info
NVDRS_SETTING setting;
setting.version = NVDRS_SETTING_VER;
setting.settingId = OGL_THREAD_CONTROL_ID;
setting.settingType = NVDRS_DWORD_TYPE;
setting.settingLocation = NVDRS_CURRENT_PROFILE_LOCATION;
setting.isCurrentPredefined = 0;
setting.isPredefinedValid = 0;
setting.u32CurrentValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;
setting.u32PredefinedValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;
// Set Setting
status = NvAPI_DRS_SetSetting(hSession, hProfile, &setting);
CheckError(status);
// Apply (or save) our changes to the system
status = NvAPI_DRS_SaveSettings(hSession);
CheckError(status);
printf("Success.\n");
NvAPI_DRS_DestroySession(hSession);
return 0;
}
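Note that you also have to link against the NVAPI static import library that ships with the SDK. With MSVC this can be done directly in source (a sketch; the library names are the ones distributed with the NVAPI SDK):
#pragma comment(lib, "nvapi.lib")   // 32-bit builds; use "nvapi64.lib" for x64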

Thanks first to subGlitch for his answer. Based on that proposal, I've made a safer version, which lets you cache and change the threaded optimization setting and then restore it afterward.
The code is below:
#include <stdlib.h>
#include <stdio.h>
#include <nvapi.h>
#include <NvApiDriverSettings.h>
enum NvThreadOptimization {
NV_THREAD_OPTIMIZATION_AUTO = 0,
NV_THREAD_OPTIMIZATION_ENABLE = 1,
NV_THREAD_OPTIMIZATION_DISABLE = 2,
NV_THREAD_OPTIMIZATION_NO_SUPPORT = 3
};
bool NvAPI_OK_Verify(NvAPI_Status status)
{
if (status == NVAPI_OK)
return true;
NvAPI_ShortString szDesc = {0};
NvAPI_GetErrorMessage(status, szDesc);
printf("NVAPI error: %s\n", szDesc);
return false;
}
NvThreadOptimization GetNVidiaThreadOptimization()
{
NvAPI_Status status;
NvDRSSessionHandle hSession;
NvThreadOptimization threadOptimization = NV_THREAD_OPTIMIZATION_NO_SUPPORT;
status = NvAPI_Initialize();
if(!NvAPI_OK_Verify(status))
return threadOptimization;
status = NvAPI_DRS_CreateSession(&hSession);
if(!NvAPI_OK_Verify(status))
return threadOptimization;
status = NvAPI_DRS_LoadSettings(hSession);
if(!NvAPI_OK_Verify(status))
{
NvAPI_DRS_DestroySession(hSession);
return threadOptimization;
}
NvDRSProfileHandle hProfile;
status = NvAPI_DRS_GetBaseProfile(hSession, &hProfile);
if(!NvAPI_OK_Verify(status))
{
NvAPI_DRS_DestroySession(hSession);
return threadOptimization;
}
NVDRS_SETTING originalSetting;
originalSetting.version = NVDRS_SETTING_VER;
status = NvAPI_DRS_GetSetting(hSession, hProfile, OGL_THREAD_CONTROL_ID, &originalSetting);
if(NvAPI_OK_Verify(status))
{
threadOptimization = (NvThreadOptimization)originalSetting.u32CurrentValue;
}
NvAPI_DRS_DestroySession(hSession);
return threadOptimization;
}
void SetNVidiaThreadOptimization(NvThreadOptimization threadedOptimization)
{
NvAPI_Status status;
NvDRSSessionHandle hSession;
if(threadedOptimization == NV_THREAD_OPTIMIZATION_NO_SUPPORT)
return;
status = NvAPI_Initialize();
if(!NvAPI_OK_Verify(status))
return;
status = NvAPI_DRS_CreateSession(&hSession);
if(!NvAPI_OK_Verify(status))
return;
status = NvAPI_DRS_LoadSettings(hSession);
if(!NvAPI_OK_Verify(status))
{
NvAPI_DRS_DestroySession(hSession);
return;
}
NvDRSProfileHandle hProfile;
status = NvAPI_DRS_GetBaseProfile(hSession, &hProfile);
if(!NvAPI_OK_Verify(status))
{
NvAPI_DRS_DestroySession(hSession);
return;
}
NVDRS_SETTING setting;
setting.version = NVDRS_SETTING_VER;
setting.settingId = OGL_THREAD_CONTROL_ID;
setting.settingType = NVDRS_DWORD_TYPE;
setting.u32CurrentValue = (EValues_OGL_THREAD_CONTROL)threadedOptimization;
status = NvAPI_DRS_SetSetting(hSession, hProfile, &setting);
if(!NvAPI_OK_Verify(status))
{
NvAPI_DRS_DestroySession(hSession);
return;
}
status = NvAPI_DRS_SaveSettings(hSession);
NvAPI_OK_Verify(status);
NvAPI_DRS_DestroySession(hSession);
}
Based on the two interfaces (Get/Set) above, you can save the original setting and restore it when your application exits. That way, your change to disable threaded optimization only impacts your own application.
static NvThreadOptimization s_OriginalNVidiaThreadOptimization = NV_THREAD_OPTIMIZATION_NO_SUPPORT;
// Set
s_OriginalNVidiaThreadOptimization = GetNVidiaThreadOptimization();
if( s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_NO_SUPPORT
&& s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_DISABLE)
{
SetNVidiaThreadOptimization(NV_THREAD_OPTIMIZATION_DISABLE);
}
//Restore
if( s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_NO_SUPPORT
&& s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_DISABLE)
{
SetNVidiaThreadOptimization(s_OriginalNVidiaThreadOptimization);
}

Hate to state the obvious but I feel like it needs to be said.
Threaded optimization is notorious for causing stuttering in many games, even those that take advantage of multithreading. Unless your application works well with the threaded optimization setting, the only logical answer is to tell your users to disable it. If users are stubborn and don't want to do that, that's their fault.
The only bug in recent memory I can think of is that older versions of the NVIDIA driver caused applications with threaded optimization running in Wine to crash, but that's unrelated to the stuttering issue you describe.

Building off of subGlitch's answer, the following checks whether an application profile already exists and, if so, updates the existing profile instead of creating a new one. It is also encapsulated in a function that bypasses the logic if the NVIDIA API is not found on the system (AMD/Intel users) or if an issue is encountered that prevents modifying the profile:
#include <iostream>
#include <nvapi.h>
#include <NvApiDriverSettings.h>
const wchar_t* profileName = L"Application for testing nvidia api";
const wchar_t* appName = L"nvapi.exe";
const wchar_t* appFriendlyName = L"Nvidia api test";
const bool threadedOptimization = false;
bool nvapiStatusOk(NvAPI_Status status)
{
if (status != NVAPI_OK)
{
// will need to not print these in prod, just return false
// full list of codes in nvapi_lite_common.h line 249
std::cout << "Status Code:" << status << std::endl;
NvAPI_ShortString szDesc = { 0 };
NvAPI_GetErrorMessage(status, szDesc);
printf("NVAPI Error: %s\n", szDesc);
return false;
}
return true;
}
void setNVUstring(NvAPI_UnicodeString& nvStr, const wchar_t* wcStr)
{
for (int i = 0; i < NVAPI_UNICODE_STRING_MAX; i++)
nvStr[i] = 0;
int i = 0;
while (wcStr[i] != 0)
{
nvStr[i] = wcStr[i];
i++;
}
}
void initNvidiaApplicationProfile()
{
NvAPI_Status status;
// if status does not equal NVAPI_OK (0) after initialization,
// either the system does not use an nvidia gpu, or something went
// so wrong that we're unable to use the nvidia api...therefore do nothing
/*
if (!nvapiStatusOk(NvAPI_Initialize()))
return;
*/
// in production, use the silent version commented out above; the version below logs for debugging
if (!nvapiStatusOk(NvAPI_Initialize()))
{
std::cout << "Unable to initialize Nvidia api" << std::endl;
return;
}
else
{
std::cout << "Nvidia api initialized successfully" << std::endl;
}
// initialize session
NvDRSSessionHandle hSession;
if (!nvapiStatusOk(NvAPI_DRS_CreateSession(&hSession)))
return;
// load settings
if (!nvapiStatusOk(NvAPI_DRS_LoadSettings(hSession)))
return;
// check if application already exists
NvDRSProfileHandle hProfile;
NvAPI_UnicodeString nvAppName;
setNVUstring(nvAppName, appName);
NVDRS_APPLICATION app;
app.version = NVDRS_APPLICATION_VER_V1;
// documentation states this will return ::NVAPI_APPLICATION_NOT_FOUND, however I cannot
// find where that is defined anywhere in the headers...so not sure what's going to happen with this?
//
// This is returning NVAPI_EXECUTABLE_NOT_FOUND, which might be what it's supposed to return when it can't
// find an existing application, and the documentation is just outdated?
status = NvAPI_DRS_FindApplicationByName(hSession, nvAppName, &hProfile, &app);
if (!nvapiStatusOk(status))
{
// if status does not equal NVAPI_EXECUTABLE_NOT_FOUND, then something bad happened and we should not proceed
if (status != NVAPI_EXECUTABLE_NOT_FOUND)
{
NvAPI_Unload();
return;
}
// create application as it does not already exist
// Fill Profile Info
NVDRS_PROFILE profileInfo;
profileInfo.version = NVDRS_PROFILE_VER;
profileInfo.isPredefined = 0;
setNVUstring(profileInfo.profileName, profileName);
// Create Profile
//NvDRSProfileHandle hProfile;
if (!nvapiStatusOk(NvAPI_DRS_CreateProfile(hSession, &profileInfo, &hProfile)))
{
NvAPI_Unload();
return;
}
// Fill Application Info, can't re-use app variable for some reason
NVDRS_APPLICATION app2;
app2.version = NVDRS_APPLICATION_VER_V1;
app2.isPredefined = 0;
setNVUstring(app2.appName, appName);
setNVUstring(app2.userFriendlyName, appFriendlyName);
setNVUstring(app2.launcher, L"");
setNVUstring(app2.fileInFolder, L"");
// Create Application
if (!nvapiStatusOk(NvAPI_DRS_CreateApplication(hSession, hProfile, &app2)))
{
NvAPI_Unload();
return;
}
}
// update profile settings
NVDRS_SETTING setting;
setting.version = NVDRS_SETTING_VER;
setting.settingId = OGL_THREAD_CONTROL_ID;
setting.settingType = NVDRS_DWORD_TYPE;
setting.settingLocation = NVDRS_CURRENT_PROFILE_LOCATION;
setting.isCurrentPredefined = 0;
setting.isPredefinedValid = 0;
setting.u32CurrentValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;
setting.u32PredefinedValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;
// load settings
if (!nvapiStatusOk(NvAPI_DRS_SetSetting(hSession, hProfile, &setting)))
{
NvAPI_Unload();
return;
}
// save changes
if (!nvapiStatusOk(NvAPI_DRS_SaveSettings(hSession)))
{
NvAPI_Unload();
return;
}
// disable in prod
std::cout << "Nvidia application profile updated successfully" << std::endl;
NvAPI_DRS_DestroySession(hSession);
// unload the api as we're done with it
NvAPI_Unload();
}
int main()
{
// if building for anything other than windows, we'll need to not call this AND have
// some preprocessor logic to not include any of the api code. No linux love apparently...so
// that's going to be a thing we'll have to figure out down the road -_-
initNvidiaApplicationProfile();
std::cin.get();
return 0;
}
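As a sketch of the platform guard mentioned in the comments above (assuming the usual _WIN32 macro; on non-Windows builds the NVAPI headers and the call simply compile away):
// Sketch: keep the NVAPI-dependent code Windows-only.
#ifdef _WIN32
#include <nvapi.h>
#include <NvApiDriverSettings.h>
void initNvidiaApplicationProfile();   // implemented as above
#endif
int main()
{
#ifdef _WIN32
    // Only attempt to touch the NVIDIA driver profile on Windows builds.
    initNvidiaApplicationProfile();
#endif
    return 0;
}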

Related

WNetOpenEnum returns ERROR_NETWORK_UNREACHABLE for the "Microsoft Windows Network" node

Our program has a piece of code that calculates the list of computers on our local network. It uses the Windows Networking API (WNetOpenEnum/WNetEnumResource) to unwind the network. For many years, the resulting list was identical to the one that can be seen in Windows Explorer under the "Network" entry. However, recently we have noticed that the same code returns an empty list. During debugging I found that WNetOpenEnum returns error 1231 (ERROR_NETWORK_UNREACHABLE) when it is called for the "Microsoft Windows Network" under the root node.
I have to mention, though I'm pretty sure it has nothing to do with the matter, that the network unwinding is done in multiple threads to avoid possible delays in the main GUI thread. Each time a node of type RESOURCEUSAGE_CONTAINER is encountered, a new worker thread is launched. The thread function calls the following procedure:
DWORD WINAPI EnumNetwork(NETRESOURCE_M* lpNR)
{
const int BUF_SIZE = 16384; // 16K is a good size.
HANDLE hEnum;
DWORD Result;
// Call the WNetOpenEnum function to begin the enumeration.
Result = ::WNetOpenEnum(RESOURCE_GLOBALNET, // all network
RESOURCETYPE_ANY, // all resource types
0, // enumerate all
(LPNETRESOURCE)lpNR,// parent resource
&hEnum); // enumeration handle
if (Result != NO_ERROR) // -> for "Microsoft Windows Network" Result = 1231
return Result;
std::vector<std::wstring> SrvList;
// Allocate buffer for enumeration.
LPNETRESOURCE lpEnumNR = (LPNETRESOURCE)new char[BUF_SIZE];
if (lpEnumNR == 0)
Result = ERROR_OUTOFMEMORY;
else
{
while (1)
{
::ZeroMemory(lpEnumNR, BUF_SIZE); // Initialize the buffer.
DWORD NumEntries = -1; // Enumerate all entries.
DWORD BufSize = BUF_SIZE;
// Call WNetEnumResource to continue the enumeration.
Result = ::WNetEnumResource(hEnum, // enumeration handle
&NumEntries,// number of entries to enumerate
lpEnumNR, // array of resources to return
&BufSize); // buffer size
if (Result == NO_ERROR)
{
// If the call succeeds, loop through the array.
for (unsigned i = 0; i < NumEntries; ++i)
{
if (lpEnumNR[i].dwDisplayType == RESOURCEDISPLAYTYPE_SERVER)
{
// Collect servers.
LPCWSTR SrvName = lpEnumNR[i].lpRemoteName;
if (PathHelpers::IsFullPath(SrvName))
SrvList.push_back(SrvName);
}
else if ((lpEnumNR[i].dwUsage & RESOURCEUSAGE_CONTAINER) &&
lpEnumNR[i].lpRemoteName != 0)
{
TCHAR PathBuf[1024] = {0};
if (lpNR && lpNR->Path)
{
_tcscpy(PathBuf, lpNR->Path);
::PathAddBackslash(PathBuf);
}
_tcscat(PathBuf, lpEnumNR[i].lpRemoteName);
if (RegisterServer(PathBuf))
{
// Start new thread for recursive enumeration.
NETRESOURCE_M* lpChildNR = DeepCopyNR(&lpEnumNR[i], PathBuf);
ExploreNetwork(lpChildNR); // -> this starts a worker thread
}
else
{
GetLogger().LogMessage(
_T("Cycles found while unwinding network: %s"), PathBuf);
}
}
}
}
else
{
if (Result == ERROR_NO_MORE_ITEMS)
Result = NO_ERROR;
break;
}
} // end while
delete [] (char*)lpEnumNR;
} // end if
::WNetCloseEnum(hEnum);
if (!SrvList.empty())
NotifyServerAdded(SrvList);
return Result;
}
where NETRESOURCE_M is the structure
struct NETRESOURCE_M
{
NETRESOURCE NR;
LPTSTR Path;
};
Trying to figure out what could have caused such a sudden change in behavior, I found through a Google search that a few years ago Microsoft disabled the SMB1 protocol, which could affect Network Discovery. However, I can't believe they could have broken their own API without saying a word in the documentation.
EDIT: At the same time, Windows Explorer has a bunch of computers under its "Network" node. In the network settings, the network type is "Domain", and the network discovery is ON. Services "Function Discovery Provider Host" and "Function Discovery Resources Publication" are running. Windows OS build is 19042.685.
Edit 2: The Sysinternals "ShareEnum" tool also fails, with the error "No domains or workgroups were found on your network". Because of this, and because some time ago our company moved all of its computers to a different network, I get the feeling that the problem is in the network configuration: for example, the network is declared as "Domain", but the computers were not enrolled in that domain. I do not understand that area well, but something along those lines.

QtBluetooth Win10, how to check if bluetooth adapter is available and ON?

I'm using QtBluetooth under Win10. Works fine.
However, as my app is deployed both on laptops (that may or may not have a BT adapter) and desktops (that are likely not to have an adapter), I'd like to programmatically check if the adapter is available or not (present and enabled).
Considering the documentation, I tested 4 functions:
bool isBluetoothAvailable1()
{
return !QBluetoothLocalDevice::allDevices().empty();
}
bool isBluetoothAvailable2()
{
QBluetoothLocalDevice localDevice;
return localDevice.isValid();
}
bool isBluetoothAvailable3()
{
std::shared_ptr<QLowEnergyController> created( QLowEnergyController::createPeripheral() );
if ( created )
{
if ( !created->localAddress().isNull() )
return true;
}
return false;
}
bool isBluetoothAvailable4()
{
std::shared_ptr<QLowEnergyController> created( QLowEnergyController::createCentral( QBluetoothDeviceInfo() ) );
if ( created )
{
if ( !created->localAddress().isNull() )
return true;
}
return false;
}
But when I run my code on a Win10 laptop, they all return false, even though I can scan for and connect to a remote device using the QtBluetooth API.
What's the right method to know if a BLE adapter is available?
The correct solution would be to use isBluetoothAvailable1(), because the call to allDevices() lists all connected Bluetooth adapters. However, this does not work on Windows.
I do not fully understand their reasoning, but there are 2 Windows implementations of this function in Qt.
https://code.qt.io/cgit/qt/qtconnectivity.git/tree/src/bluetooth/qbluetoothlocaldevice_win.cpp?h=5.15.2
https://code.qt.io/cgit/qt/qtconnectivity.git/tree/src/bluetooth/qbluetoothlocaldevice_winrt.cpp?h=5.15.2
And by default it uses the one that always returns an empty list (qbluetoothlocaldevice_win.cpp):
QList<QBluetoothHostInfo> QBluetoothLocalDevice::allDevices()
{
QList<QBluetoothHostInfo> localDevices;
return localDevices;
}
The simplest solution is to use the code from the other Windows implementation that works (qbluetoothlocaldevice_winrt.cpp)
#include <Windows.h>
#include <BluetoothAPIs.h>
QList<QBluetoothHostInfo> allDevices()
{
BLUETOOTH_FIND_RADIO_PARAMS params;
::ZeroMemory(&params, sizeof(params));
params.dwSize = sizeof(params);
QList<QBluetoothHostInfo> foundAdapters;
HANDLE hRadio = nullptr;
if (const HBLUETOOTH_RADIO_FIND hSearch = ::BluetoothFindFirstRadio(&params, &hRadio)) {
for (;;) {
BLUETOOTH_RADIO_INFO radio;
::ZeroMemory(&radio, sizeof(radio));
radio.dwSize = sizeof(radio);
const DWORD retval = ::BluetoothGetRadioInfo(hRadio, &radio);
::CloseHandle(hRadio);
if (retval != ERROR_SUCCESS)
break;
QBluetoothHostInfo adapterInfo;
adapterInfo.setAddress(QBluetoothAddress(radio.address.ullLong));
adapterInfo.setName(QString::fromWCharArray(radio.szName));
foundAdapters << adapterInfo;
if (!::BluetoothFindNextRadio(hSearch, &hRadio))
break;
}
::BluetoothFindRadioClose(hSearch);
}
return foundAdapters;
}
You will also need to link the necessary libraries, Bthprops and ws2_32.
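For example (a sketch; with MSVC the libraries can be pulled in from source, while with qmake you would add them to the .pro file instead):
// MSVC: link directly from source
#pragma comment(lib, "Bthprops.lib")
#pragma comment(lib, "ws2_32.lib")
or, in a qmake project file:
LIBS += -lBthprops -lws2_32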

Visual C++, Windows Update Interface (IUpdate) <wuapi.h>, get_MsrcSeverity

I'm probably just blind, but I cannot see any errors here (and I have been looking at this issue for days now...).
I am trying to get the Patch Priority (Severity) from the Windows Update Interface using the following piece of code in Visual Studio:
#include "stdafx.h"
#include <wuapi.h>
#include <iostream>
#include <ATLComTime.h>
#include <wuerror.h>
using namespace std;
int _tmain(int argc, _TCHAR* argv[])
{
HRESULT hr;
hr = CoInitialize(NULL);
IUpdateSession* iUpdate;
IUpdateSearcher* searcher;
ISearchResult* results;
BSTR criteria = SysAllocString(L"IsInstalled=0");
hr = CoCreateInstance(CLSID_UpdateSession, NULL, CLSCTX_INPROC_SERVER, IID_IUpdateSession, (LPVOID*)&iUpdate);
hr = iUpdate->CreateUpdateSearcher(&searcher);
wcout << L"Searching for updates ..."<<endl;
hr = searcher->Search(criteria, &results);
SysFreeString(criteria);
switch(hr)
{
case S_OK:
wcout<<L"List of applicable items on the machine:"<<endl;
break;
case WU_E_LEGACYSERVER:
wcout<<L"No server selection enabled"<<endl;
return 0;
case WU_E_INVALID_CRITERIA:
wcout<<L"Invalid search criteria"<<endl;
return 0;
}
IUpdateCollection *updateList;
IUpdateCollection *bundledUpdates;
IUpdate *updateItem;
IUpdate *bundledUpdateItem;
LONG updateSize;
LONG bundledUpdateSize;
BSTR updateName;
BSTR severity;
results->get_Updates(&updateList);
updateList->get_Count(&updateSize);
if (updateSize == 0)
{
wcout << L"No updates found"<<endl;
}
for (LONG i = 0; i < updateSize; i++)
{
updateList->get_Item(i,&updateItem);
updateItem->get_Title(&updateName);
severity = NULL;
updateItem->get_MsrcSeverity(&severity);
if (severity != NULL)
{
wcout << L"update severity: " << severity << endl;
}
wcout<<i+1<<" - " << updateName << endl;
// bundled updates
updateItem->get_BundledUpdates(&bundledUpdates);
bundledUpdates->get_Count(&bundledUpdateSize);
if (bundledUpdateSize != 0)
{
// iterate through bundled updates
for (LONG ii = 0; ii < bundledUpdateSize; ii++)
{
bundledUpdates->get_Item(ii, &bundledUpdateItem);
severity = NULL;
bundledUpdateItem->get_MsrcSeverity(&severity);
if (severity != NULL)
{
wcout << L" bundled update severity: " << severity << endl;
}
}
}
}
::CoUninitialize();
wcin.get();
return 0;
}
So here's the issue: updateItem->get_MsrcSeverity(&severity); never returns anything in severity. If I capture the result code in an HRESULT, it is always S_OK.
Link to MSDN IUpdate MsrcSeverity: http://msdn.microsoft.com/en-us/library/windows/desktop/aa386906(v=vs.85).aspx
Can you see what I am doing obviously wrong or is the get_MsrcSeverity function currently broken?
#EDIT: Changed the code to iterate through "BundledUpdates" as suggested.
#EDIT2: the code now outputs the severity value of the updateItem as well as the bundledUpdatesItem, but it's always NULL.
I also know there is one important update waiting for my computer, according to Windows Update in the Control Panel: KB2858725.
#EDIT3: Okay, it turns out KB2858725 is not a security update and therefore has no severity rating from Microsoft. But how does Windows Update then categorize updates as "important" and "optional", as you can see in the Control Panel?
Thank you for any hints!
// Markus
I've been struggling with the exact same problem for hours now... I finally figured out that Microsoft only seems to set the MsrcSeverity value on some updates. For general Windows updates, it's usually null. For most security updates it's set to one of:
"Critical"
"Moderate"
"Important"
"Low"
It seems as though the "Unspecified" value is never used, though it is documented in MSDN (http://msdn.microsoft.com/en-us/library/microsoft.updateservices.administration.msrcseverity(v=vs.85).aspx).
I wrote a small C# program to list all available updates and their reported severity. It requires a reference to WUApiLib:
using System;
using WUApiLib;
namespace EnumerateUpdates
{
class Program
{
static void Main(string[] args)
{
var session = new UpdateSession();
var searcher = session.CreateUpdateSearcher();
searcher.Online = false;
// Find all updates that have not yet been installed
var result = searcher.Search("IsInstalled=0 And IsHidden=0");
foreach (dynamic update in result.Updates)
{
Console.WriteLine(update.Title + ": " + update.Description);
Console.WriteLine("Severity is " + update.MsrcSeverity);
Console.WriteLine();
}
Console.ReadLine();
}
}
}
Hope this helps someone!
Look for bundled updates in the list. A bundle itself does not have some of the properties or methods, as described on this page:
Remarks: If the BundledUpdates property contains an IUpdateCollection, some properties and methods of the update may only be available on the bundled updates, for example, DownloadContents or CopyFromCache.
The update you are processing is a bundle. Extract the individual updates from the bundle; they will have the property you are looking for.
The IUpdate interface has a method, get_BundledUpdates, which gets you an IUpdateCollection; if the size of this collection is greater than 0, it is a bundled update.
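In C++ that check looks roughly like the sketch below (a sketch only, reusing the variable style of the question's code; MsrcSeverity may still be NULL on child updates that are not security updates):
// Sketch: treat an update as a bundle when BundledUpdates has entries,
// and read MsrcSeverity from each child update instead of from the bundle.
IUpdateCollection* bundledUpdates = NULL;
LONG bundledCount = 0;
if (SUCCEEDED(updateItem->get_BundledUpdates(&bundledUpdates)) && bundledUpdates != NULL)
{
    bundledUpdates->get_Count(&bundledCount);
    if (bundledCount > 0) // this update is a bundle
    {
        for (LONG i = 0; i < bundledCount; ++i)
        {
            IUpdate* child = NULL;
            BSTR childSeverity = NULL;
            bundledUpdates->get_Item(i, &child);
            child->get_MsrcSeverity(&childSeverity);
            if (childSeverity != NULL)
                wcout << L"bundled update severity: " << childSeverity << endl;
        }
    }
}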

Is there a way to detect if a monitor is plugged in?

I have a custom application written in C++ that controls the resolution and other settings on a monitor connected to an embedded system. Sometimes the system is booted headless and run via VNC, but a monitor can be plugged in later (post boot). If that happens, the monitor is fed no video until it is enabled. I have found that calling "displayswitch /clone" brings the monitor up, but I need to know when the monitor is connected. I have a timer that runs every 5 seconds and looks for the monitor, but I need some API call that can tell me whether the monitor is connected.
Here is a bit of pseudocode to describe what I'm after (what is executed when the timer expires every 5 seconds):
if (monitor connected)
{
    ShellExecute("displayswitch.exe /clone");
}
else
{
    // Do nothing
}
I have tried GetSystemMetrics(SM_CMONITORS) to get the number of monitors, but it returns 1 whether or not the monitor is connected. Any other ideas?
Thanks!
Try the following code
BOOL IsDisplayConnected(int displayIndex = 0)
{
DISPLAY_DEVICE device;
device.cb = sizeof(DISPLAY_DEVICE);
return EnumDisplayDevices(NULL, displayIndex, &device, 0);
}
This will return true if Windows identifies a display device with index (AKA identity) 0 (this is what the display control panel uses internally); otherwise, it will return false. So by checking the first possible index (which I marked as the default argument), you can find out whether any display device is connected (or at least identified by Windows, which is essentially what you're looking for).
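For example, the question's 5-second timer could then be reduced to something like this (a sketch; OnMonitorPollTimer is just a hypothetical name for whatever your timer invokes):
void OnMonitorPollTimer()   // hypothetical: called every 5 seconds, as in the question
{
    static bool cloneEnabled = false;
    if (!cloneEnabled && IsDisplayConnected(0))
    {
        // A display was detected: mirror the desktop onto it, once.
        ShellExecuteW(NULL, L"open", L"displayswitch.exe", L"/clone", NULL, SW_HIDE);
        cloneEnabled = true;
    }
}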
It seems that there is some kind of "default monitor" even if no real monitor is connected.
The function below works for me (tested on an Intel NUC and a Surface 5 tablet).
The idea is to get the device ID and check whether it contains the string "default_monitor"; if it does, no real monitor is attached.
bool hasMonitor()
{
// Check if we have a monitor
bool has = false;
// Iterate over all displays and check if we have a valid one.
// If the device ID contains the string default_monitor no monitor is attached.
DISPLAY_DEVICE dd;
dd.cb = sizeof(dd);
int deviceIndex = 0;
while (EnumDisplayDevices(0, deviceIndex, &dd, 0))
{
std::wstring deviceName = dd.DeviceName;
int monitorIndex = 0;
while (EnumDisplayDevices(deviceName.c_str(), monitorIndex, &dd, 0))
{
size_t len = _tcslen(dd.DeviceID);
for (size_t i = 0; i < len; ++i)
dd.DeviceID[i] = _totlower(dd.DeviceID[i]);
has = has || (len > 10 && _tcsstr(dd.DeviceID, L"default_monitor") == nullptr);
++monitorIndex;
}
++deviceIndex;
}
return has;
}

C++ - SQLite3 leaks handles in multithread environment

I wrote a simple program that spawns 10 threads. Each thread opens a database (common to all the threads), or creates it (with the "Write-Ahead Log" option) if the open fails, creates a table in the database, and then goes into an infinite loop in which it adds one row at a time to its table. I found out that the program leaks about 2 handles every 5 minutes. I tried a tool called Memory Verify, which tells me that the leaked handles are SQLite3 file locks (line 34034 in version 3.7.13), but I am not sure whether the bug is in SQLite or in the way I use it.
I haven't specified any compiler options when building SQLite3, so it is built as Multi-Thread, and as far as I understand Multi-Thread should work fine in my case because every thread has its own SQLite connection.
To open or create a database I use the following code:
bool Create()
{
int iFlags = 0;
iFlags = iFlags | SQLITE_OPEN_READWRITE | SQLITE_OPEN_NOMUTEX | SQLITE_OPEN_CREATE;
return sqlite3_open_v2(dbName_sm.c_str(), &pHandle_m, iFlags, 0) == SQLITE_OK;
}
bool Open()
{
int iFlags = 0;
iFlags = iFlags | SQLITE_OPEN_READWRITE | SQLITE_OPEN_NOMUTEX;
return sqlite3_open_v2(dbName_sm.c_str(), &pHandle_m, iFlags, 0) == SQLITE_OK;
}
The hard loop in every thread calls ExecuteQuery which does prepare, step and finalize of an INSERT statement:
bool ExecuteQuery(const std::string& statement)
{
bool res = Prepare(statement);
if(!res)
{
return false;
}
SQLiteStatus status = Step();
Finalize();
res = (ESuccess == status || EDatabaseDone == status);
return res;
}
bool Prepare(const std::string& statement)
{
return sqlite3_prepare_v2(pHandle_m, statement.c_str(), -1, &pStmt_m, 0) == SQLITE_OK;
}
enum SQLiteStatus { ESuccess, EDatabaseDone, EDatabaseTimeout, EDatabaseError };
SQLiteStatus Step()
{
int iRet = sqlite3_step(pStmt_m);
if (iRet == SQLITE_DONE)
{
return EDatabaseDone;
}
else if (iRet == SQLITE_BUSY)
{
return EDatabaseTimeout;
}
else if (iRet != SQLITE_ROW)
{
return EDatabaseError;
}
return ESuccess;
}
bool Finalize()
{
int iRet = sqlite3_finalize(pStmt_m);
pStmt_m = 0;
return iRet == SQLITE_OK;
}
Do you guys see any mistake in my code or is it a known issue in SQLite? I tried to google it for a couple of days but I couldn't find anything about it.
Thank you very much for your help.
Regards,
Andrea
P.S. I forgot to say that I am running my test on a WinXP 64bit PC, the compiler is VS2010, the application is compiled in 32bit, SQLite version is 3.7.13...
Check whether you call sqlite3_reset after every sqlite3_step, because this is one case that might cause leaks. After preparing a statement with sqlite3_prepare and executing it with sqlite3_step, you always need to reset it with sqlite3_reset.
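Applied to the question's code, that would look roughly like this (a sketch; the names are taken from the question):
bool ExecuteQuery(const std::string& statement)
{
    if (!Prepare(statement))
        return false;
    SQLiteStatus status = Step();
    sqlite3_reset(pStmt_m);   // reset the statement after stepping, as suggested above
    Finalize();
    return (ESuccess == status || EDatabaseDone == status);
}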