QtBluetooth Win10, how to check if bluetooth adapter is available and ON?

I'm using QtBluetooth under Win10. Works fine.
However, as my app is deployed both on laptops (which may or may not have a BT adapter) and desktops (which are likely not to have one), I'd like to programmatically check whether the adapter is available (present and enabled).
Based on the documentation, I tested four functions:
bool isBluetoothAvailable1()
{
    return !QBluetoothLocalDevice::allDevices().empty();
}

bool isBluetoothAvailable2()
{
    QBluetoothLocalDevice localDevice;
    return localDevice.isValid();
}

bool isBluetoothAvailable3()
{
    std::shared_ptr<QLowEnergyController> created( QLowEnergyController::createPeripheral() );
    if ( created )
    {
        if ( !created->localAddress().isNull() )
            return true;
    }
    return false;
}

bool isBluetoothAvailable4()
{
    std::shared_ptr<QLowEnergyController> created( QLowEnergyController::createCentral( QBluetoothDeviceInfo() ) );
    if ( created )
    {
        if ( !created->localAddress().isNull() )
            return true;
    }
    return false;
}
But when I run my code on a Win10 laptop, they all return false! Even though I can search for and connect to a remote device using the QBluetooth API.
What's the right method to know if a BLE adapter is available?

The correct solution would be to use isBluetoothAvailable1(), because the call to allDevices() lists all connected Bluetooth adapters. However, this does not work on Windows.
I do not fully understand the reasoning, but there are two Windows implementations of this function in Qt:
https://code.qt.io/cgit/qt/qtconnectivity.git/tree/src/bluetooth/qbluetoothlocaldevice_win.cpp?h=5.15.2
https://code.qt.io/cgit/qt/qtconnectivity.git/tree/src/bluetooth/qbluetoothlocaldevice_winrt.cpp?h=5.15.2
And by default it uses the one that always returns an empty list (qbluetoothlocaldevice_win.cpp):
QList<QBluetoothHostInfo> QBluetoothLocalDevice::allDevices()
{
    QList<QBluetoothHostInfo> localDevices;
    return localDevices;
}
The simplest solution is to reuse the code from the other Windows implementation, the one that works (qbluetoothlocaldevice_winrt.cpp):
#include <Windows.h>
#include <BluetoothAPIs.h>

QList<QBluetoothHostInfo> allDevices()
{
    BLUETOOTH_FIND_RADIO_PARAMS params;
    ::ZeroMemory(&params, sizeof(params));
    params.dwSize = sizeof(params);

    QList<QBluetoothHostInfo> foundAdapters;

    HANDLE hRadio = nullptr;
    if (const HBLUETOOTH_RADIO_FIND hSearch = ::BluetoothFindFirstRadio(&params, &hRadio)) {
        for (;;) {
            BLUETOOTH_RADIO_INFO radio;
            ::ZeroMemory(&radio, sizeof(radio));
            radio.dwSize = sizeof(radio);

            const DWORD retval = ::BluetoothGetRadioInfo(hRadio, &radio);
            ::CloseHandle(hRadio);
            if (retval != ERROR_SUCCESS)
                break;

            QBluetoothHostInfo adapterInfo;
            adapterInfo.setAddress(QBluetoothAddress(radio.address.ullLong));
            adapterInfo.setName(QString::fromWCharArray(radio.szName));
            foundAdapters << adapterInfo;

            if (!::BluetoothFindNextRadio(hSearch, &hRadio))
                break;
        }
        ::BluetoothFindRadioClose(hSearch);
    }
    return foundAdapters;
}
You will also need to link the necessary libraries, Bthprops.lib and Ws2_32.lib.
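For MSVC builds, one way to pull those libraries in is with linker pragmas next to the includes; a minimal sketch (qmake users would instead add something like LIBS += -lbthprops -lws2_32 to the .pro file):

// MSVC-specific: link the Bluetooth and Winsock import libraries
// required by the Win32 Bluetooth calls above.
#pragma comment(lib, "Bthprops.lib")
#pragma comment(lib, "Ws2_32.lib")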

Related

What is the right approach to fire an event in a C++ DLL and handle it in a client (C++ or C# application that consumes the DLL)?

I am trying to build a DLL that reads Raw Input, processes the data, and streams it to a client application that consumes this DLL. Both my DLL and the client application will be running in the background, or in the system tray. To achieve this behavior, I plan to use events. While trying to learn the right approach, I found conflicting guidance online and had trouble differentiating between the options and picking the right one.
This article on event handling in native C++ suggests setting up an event source and an event receiver using the event_source and event_receiver attributes. Though this approach might work in my situation, this documentation on event handling states that event handling, though supported for native C++, is deprecated and will not be supported in future releases, which implies I should not use it. At the same time, this documentation on handling COM events looks like one potentially correct approach; however, I am not able to fully understand the code there. Are there better examples or documentation that I could learn from?
Finally, this article on using C++ events in CLI discusses what I understand is a slightly different approach. The code below shows my approach for registering for input from a Precision Touchpad. I then want to override the WindowProc callback function to read input in the background. Finally, I want my DLL to raise my own events when a touch begins, is in progress, and ends. An application (running in the background) consuming this DLL should then be able to listen for the events I raise and perform actions.
What is the right, non-deprecated approach to fire events from a C++ DLL and handle them in the client?
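For context (a hedged sketch, not code from the question): one common, non-deprecated pattern is for the DLL to expose plain C-style callback registration, which both C++ clients and C# clients (via P/Invoke delegates) can consume. All names below are hypothetical:

// touch_events.h -- hypothetical DLL interface sketch
#pragma once

// Event kinds the DLL raises; plain ints keep the ABI C-compatible.
enum TouchEventKind { TOUCH_BEGIN = 0, TOUCH_MOVE = 1, TOUCH_END = 2 };

extern "C" {
    // Callback supplied by the client; __stdcall so a C# delegate can marshal to it.
    typedef void (__stdcall *TouchEventCallback)(int kind, int x, int y);

    // The client registers a callback; passing nullptr unregisters it.
    __declspec(dllexport) void __stdcall RegisterTouchCallback(TouchEventCallback cb);
}

// touch_events.cpp -- the DLL stores the pointer and invokes it when input arrives.
static TouchEventCallback g_touchCallback = nullptr;

void __stdcall RegisterTouchCallback(TouchEventCallback cb) { g_touchCallback = cb; }

// Wherever the DLL decodes a touch event (e.g. in its WindowProc):
//   if (g_touchCallback) g_touchCallback(TOUCH_BEGIN, x, y);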
My Code
// Skipping headers and error handling for brevity.
static std::vector<RAWINPUTDEVICELIST> getRawInputDevices()
{
    std::vector<RAWINPUTDEVICELIST> devices(64);
    while (true) {
        UINT numDevices = static_cast<UINT>(devices.size());
        UINT ret = GetRawInputDeviceList(&devices[0], &numDevices, sizeof(RAWINPUTDEVICELIST));
        if (ret != (UINT)-1) {
            devices.resize(ret);
            return devices;
        }
        else if (GetLastError() == ERROR_INSUFFICIENT_BUFFER) {
            devices.resize(numDevices);
        }
        else {
            throw win32_error();
        }
    }
}

static RID_DEVICE_INFO getDeviceInfo(HANDLE hDevice)
{
    RID_DEVICE_INFO info;
    info.cbSize = sizeof(RID_DEVICE_INFO);
    UINT deviceSize = sizeof(RID_DEVICE_INFO);
    // GetRawInputDeviceInfo returns (UINT)-1 on failure.
    if (GetRawInputDeviceInfo(hDevice, RIDI_DEVICEINFO, &info, &deviceSize) == (UINT)-1)
    {
        throw win32_error();
    }
    return info;
}

bool isPrecisionTouchAvailable()
{
    std::vector<RAWINPUTDEVICELIST> devices = getRawInputDevices();
    for (std::size_t i = 0; i < devices.size(); ++i)
    {
        RID_DEVICE_INFO info = getDeviceInfo(devices[i].hDevice);
        // Note: the device type to compare against is RIM_TYPEHID;
        // RIM_INPUT is a WM_INPUT wParam value, not a device type.
        if (info.dwType == RIM_TYPEHID
            && info.hid.usUsagePage == HID_USAGE_PAGE_DIGITIZER
            && info.hid.usUsage == HID_USAGE_DIGITIZER_TOUCH_PAD)
        {
            return true;
        }
    }
    return false;
}

bool registerTouchpadForInput(HWND hwnd)
{
    bool registrationStatus = false;
    // Test the returned value, not the function pointer: the original code
    // compared isPrecisionTouchAvailable itself against 0, which is always true.
    if (!isPrecisionTouchAvailable())
    {
        return registrationStatus;
    }
    else {
        RAWINPUTDEVICE rid[1]; // One entry for Precision Touch when the application is in the
                               // background. Array because I plan to register more devices like
                               // the keyboard once I get things working.
        rid[0].usUsagePage = HID_USAGE_PAGE_DIGITIZER;  // for touchpad
        rid[0].usUsage = HID_USAGE_DIGITIZER_TOUCH_PAD; // usage ID for touchpad
        rid[0].dwFlags = RIDEV_INPUTSINK;               // to receive events even in the background
        rid[0].hwndTarget = hwnd;                       // RIDEV_INPUTSINK requires a valid window handle
        if (RegisterRawInputDevices(rid, 1, sizeof(rid[0])) == FALSE)
        {
            GetLastError(); // inspect the error code here
            //throw "error registering touchpad to get data.";
        }
        else { registrationStatus = true; }
    }
    return registrationStatus;
}
When my application launches, I want to call registerTouchpadForInput and let the user know that they will not be able to use my application if they do not have a Precision Touchpad on their device.
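For reference, a minimal hedged sketch of the WM_INPUT handling such a WindowProc override might do; this is the standard Raw Input read pattern, not code from the question:

#include <Windows.h>
#include <vector>

LRESULT CALLBACK WindowProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_INPUT)
    {
        // First call queries the required buffer size; the second call fills it.
        UINT size = 0;
        GetRawInputData((HRAWINPUT)lParam, RID_INPUT, nullptr, &size, sizeof(RAWINPUTHEADER));
        if (size > 0)
        {
            std::vector<BYTE> buffer(size);
            if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, buffer.data(), &size, sizeof(RAWINPUTHEADER)) == size)
            {
                const RAWINPUT* raw = reinterpret_cast<const RAWINPUT*>(buffer.data());
                if (raw->header.dwType == RIM_TYPEHID)
                {
                    // Decode the HID report here and raise touch begin/move/end
                    // events through whatever mechanism the DLL exposes.
                }
            }
        }
        // Fall through: the WM_INPUT docs say DefWindowProc must still run for cleanup.
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}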

Nvidia graphics driver causing noticeable frame stuttering

OK, I've been researching this issue for a few days now, so let me go over what I know so far, which leads me to believe this might be an issue with NVidia's driver and not my code.
Basically my game starts stuttering after running a few seconds (random frames take 70ms instead of 16ms, in a fairly regular pattern). This ONLY happens if a setting called "Threaded Optimization" is enabled in the Nvidia control panel (latest drivers, Windows 10). Unfortunately this setting is enabled by default, and I'd rather not have people tweak their settings to get an enjoyable experience.
The game is not CPU or GPU intensive (2ms a frame without vsync on). It's not calling any OpenGL functions that need to synchronize data, and it's not streaming any buffers or reading data back from the GPU or anything. About the simplest possible renderer.
The problem was always there; it only started becoming noticeable when I added in FMOD for audio. FMOD is not the cause of this (more on that later in the post).
Trying to debug the problem with NVidia Nsight made the problem go away. "Start Collecting Data" instantly causes the stuttering to go away. No dice here.
In the profiler, a lot of CPU time is spent in "nvoglv32.dll". This worker thread only spawns if Threaded Optimization is on. I suspect a synchronization issue, so I debug with the Visual Studio Concurrency Viewer.
A-HA!
Investigating these blocks of CPU time on the Nvidia thread, the earliest named function I can get in their call stack is "CreateToolhelp32Snapshot", followed by a lot of time spent in Thread32Next. I noticed Thread32Next in the profiler when looking at CPU times earlier, so this does seem like I'm on the right track.
So it looks like the Nvidia driver is periodically grabbing a snapshot of the whole process for some reason? What could possibly be the reason, why is it doing this, and how do I stop it?
Also, this explains why the problem started becoming noticeable once I added in FMOD: the driver is grabbing info for all the process's threads, and FMOD spawns a lot of threads.
Any help? Is this just a bug in Nvidia's driver, or is there something I can do to fix it other than telling people to disable Threaded "Optimization"?
edit 1: The same issue occurs with current Nvidia drivers on my laptop too. So I'm not crazy.
edit 2: The same issue occurs on version 362 (the previous major version) of Nvidia's driver.
... or is there something I can do to fix it other than telling people to disable Threaded "Optimization"?
Yes.
You can create a custom "Application Profile" for your game using NVAPI and disable the "Threaded Optimization" setting in it.
There is a .PDF file on NVIDIA's site with some help and code examples regarding NVAPI usage.
In order to see and manage all your NVIDIA profiles I recommend using NVIDIA Inspector. It is more convenient than the default NVIDIA Control Panel.
Also, here is my code example which creates "Application Profile" with "Threaded Optimization" disabled:
#include <stdlib.h>
#include <stdio.h>
#include <nvapi.h>
#include <NvApiDriverSettings.h>

const wchar_t* profileName     = L"Your Profile Name";
const wchar_t* appName         = L"YourGame.exe";
const wchar_t* appFriendlyName = L"Your Game Casual Name";
const bool threadedOptimization = false;

void CheckError(NvAPI_Status status)
{
    if (status == NVAPI_OK)
        return;

    NvAPI_ShortString szDesc = {0};
    NvAPI_GetErrorMessage(status, szDesc);
    printf("NVAPI error: %s\n", szDesc);
    exit(-1);
}

void SetNVUstring(NvAPI_UnicodeString& nvStr, const wchar_t* wcStr)
{
    for (int i = 0; i < NVAPI_UNICODE_STRING_MAX; i++)
        nvStr[i] = 0;

    int i = 0;
    while (wcStr[i] != 0)
    {
        nvStr[i] = wcStr[i];
        i++;
    }
}

int main(int argc, char* argv[])
{
    NvAPI_Status status;
    NvDRSSessionHandle hSession;

    status = NvAPI_Initialize();
    CheckError(status);

    status = NvAPI_DRS_CreateSession(&hSession);
    CheckError(status);

    status = NvAPI_DRS_LoadSettings(hSession);
    CheckError(status);

    // Fill Profile Info
    NVDRS_PROFILE profileInfo;
    profileInfo.version = NVDRS_PROFILE_VER;
    profileInfo.isPredefined = 0;
    SetNVUstring(profileInfo.profileName, profileName);

    // Create Profile
    NvDRSProfileHandle hProfile;
    status = NvAPI_DRS_CreateProfile(hSession, &profileInfo, &hProfile);
    CheckError(status);

    // Fill Application Info
    NVDRS_APPLICATION app;
    app.version = NVDRS_APPLICATION_VER_V1;
    app.isPredefined = 0;
    SetNVUstring(app.appName, appName);
    SetNVUstring(app.userFriendlyName, appFriendlyName);
    SetNVUstring(app.launcher, L"");
    SetNVUstring(app.fileInFolder, L"");

    // Create Application
    status = NvAPI_DRS_CreateApplication(hSession, hProfile, &app);
    CheckError(status);

    // Fill Setting Info
    NVDRS_SETTING setting;
    setting.version = NVDRS_SETTING_VER;
    setting.settingId = OGL_THREAD_CONTROL_ID;
    setting.settingType = NVDRS_DWORD_TYPE;
    setting.settingLocation = NVDRS_CURRENT_PROFILE_LOCATION;
    setting.isCurrentPredefined = 0;
    setting.isPredefinedValid = 0;
    setting.u32CurrentValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;
    setting.u32PredefinedValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;

    // Set Setting
    status = NvAPI_DRS_SetSetting(hSession, hProfile, &setting);
    CheckError(status);

    // Apply (or save) our changes to the system
    status = NvAPI_DRS_SaveSettings(hSession);
    CheckError(status);

    printf("Success.\n");

    NvAPI_DRS_DestroySession(hSession);
    return 0;
}
Thanks to subGlitch's answer. Based on that proposal, here is a safer version, which lets you cache the current thread-optimization setting, change it, and then restore it afterward.
The code is below:
#include <stdlib.h>
#include <stdio.h>
#include <nvapi.h>
#include <NvApiDriverSettings.h>

enum NvThreadOptimization {
    NV_THREAD_OPTIMIZATION_AUTO       = 0,
    NV_THREAD_OPTIMIZATION_ENABLE     = 1,
    NV_THREAD_OPTIMIZATION_DISABLE    = 2,
    NV_THREAD_OPTIMIZATION_NO_SUPPORT = 3
};

bool NvAPI_OK_Verify(NvAPI_Status status)
{
    if (status == NVAPI_OK)
        return true;

    NvAPI_ShortString szDesc = {0};
    NvAPI_GetErrorMessage(status, szDesc);
    printf("NVAPI error: %s\n", szDesc);
    return false;
}

NvThreadOptimization GetNVidiaThreadOptimization()
{
    NvAPI_Status status;
    NvDRSSessionHandle hSession;
    NvThreadOptimization threadOptimization = NV_THREAD_OPTIMIZATION_NO_SUPPORT;

    status = NvAPI_Initialize();
    if (!NvAPI_OK_Verify(status))
        return threadOptimization;

    status = NvAPI_DRS_CreateSession(&hSession);
    if (!NvAPI_OK_Verify(status))
        return threadOptimization;

    status = NvAPI_DRS_LoadSettings(hSession);
    if (!NvAPI_OK_Verify(status))
    {
        NvAPI_DRS_DestroySession(hSession);
        return threadOptimization;
    }

    NvDRSProfileHandle hProfile;
    status = NvAPI_DRS_GetBaseProfile(hSession, &hProfile);
    if (!NvAPI_OK_Verify(status))
    {
        NvAPI_DRS_DestroySession(hSession);
        return threadOptimization;
    }

    NVDRS_SETTING originalSetting;
    originalSetting.version = NVDRS_SETTING_VER;
    status = NvAPI_DRS_GetSetting(hSession, hProfile, OGL_THREAD_CONTROL_ID, &originalSetting);
    if (NvAPI_OK_Verify(status))
    {
        threadOptimization = (NvThreadOptimization)originalSetting.u32CurrentValue;
    }

    NvAPI_DRS_DestroySession(hSession);
    return threadOptimization;
}

void SetNVidiaThreadOptimization(NvThreadOptimization threadedOptimization)
{
    NvAPI_Status status;
    NvDRSSessionHandle hSession;

    if (threadedOptimization == NV_THREAD_OPTIMIZATION_NO_SUPPORT)
        return;

    status = NvAPI_Initialize();
    if (!NvAPI_OK_Verify(status))
        return;

    status = NvAPI_DRS_CreateSession(&hSession);
    if (!NvAPI_OK_Verify(status))
        return;

    status = NvAPI_DRS_LoadSettings(hSession);
    if (!NvAPI_OK_Verify(status))
    {
        NvAPI_DRS_DestroySession(hSession);
        return;
    }

    NvDRSProfileHandle hProfile;
    status = NvAPI_DRS_GetBaseProfile(hSession, &hProfile);
    if (!NvAPI_OK_Verify(status))
    {
        NvAPI_DRS_DestroySession(hSession);
        return;
    }

    NVDRS_SETTING setting;
    setting.version         = NVDRS_SETTING_VER;
    setting.settingId       = OGL_THREAD_CONTROL_ID;
    setting.settingType     = NVDRS_DWORD_TYPE;
    setting.u32CurrentValue = (EValues_OGL_THREAD_CONTROL)threadedOptimization;

    status = NvAPI_DRS_SetSetting(hSession, hProfile, &setting);
    if (!NvAPI_OK_Verify(status))
    {
        NvAPI_DRS_DestroySession(hSession);
        return;
    }

    status = NvAPI_DRS_SaveSettings(hSession);
    NvAPI_OK_Verify(status);

    NvAPI_DRS_DestroySession(hSession);
}
Based on the two interfaces (Get/Set) above, you can save the original setting and restore it when your application exits. That way, disabling thread optimization only impacts your own application.
static NvThreadOptimization s_OriginalNVidiaThreadOptimization = NV_THREAD_OPTIMIZATION_NO_SUPPORT;

// Set
s_OriginalNVidiaThreadOptimization = GetNVidiaThreadOptimization();
if (s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_NO_SUPPORT
    && s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_DISABLE)
{
    SetNVidiaThreadOptimization(NV_THREAD_OPTIMIZATION_DISABLE);
}

// Restore
if (s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_NO_SUPPORT
    && s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_DISABLE)
{
    SetNVidiaThreadOptimization(s_OriginalNVidiaThreadOptimization);
}
Hate to state the obvious but I feel like it needs to be said.
Threaded optimization is notorious for causing stuttering in many games, even those that take advantage of multithreading. Unless your application works well with the threaded optimization setting, the only logical answer is to tell your users to disable it. If users are stubborn and don't want to do that, that's their fault.
The only bug in recent memory I can think of is that older versions of the Nvidia driver caused applications with threaded optimization running in Wine to crash, but that's unrelated to the stuttering issue you describe.
Building off of subGlitch's answer, the following checks whether an application profile already exists and, if so, updates the existing profile instead of creating a new one. It is also encapsulated in a function that bypasses the logic if the NVIDIA API is not found on the system (AMD/Intel users) or an issue is encountered that prevents modifying the profile:
#include <iostream>
#include <nvapi.h>
#include <NvApiDriverSettings.h>

const wchar_t* profileName     = L"Application for testing nvidia api";
const wchar_t* appName         = L"nvapi.exe";
const wchar_t* appFriendlyName = L"Nvidia api test";
const bool threadedOptimization = false;

bool nvapiStatusOk(NvAPI_Status status)
{
    if (status != NVAPI_OK)
    {
        // will need to not print these in prod, just return false
        // full list of codes in nvapi_lite_common.h line 249
        std::cout << "Status Code:" << status << std::endl;
        NvAPI_ShortString szDesc = { 0 };
        NvAPI_GetErrorMessage(status, szDesc);
        printf("NVAPI Error: %s\n", szDesc);
        return false;
    }
    return true;
}

void setNVUstring(NvAPI_UnicodeString& nvStr, const wchar_t* wcStr)
{
    for (int i = 0; i < NVAPI_UNICODE_STRING_MAX; i++)
        nvStr[i] = 0;

    int i = 0;
    while (wcStr[i] != 0)
    {
        nvStr[i] = wcStr[i];
        i++;
    }
}

void initNvidiaApplicationProfile()
{
    NvAPI_Status status;

    // if status does not equal NVAPI_OK (0) after initialization,
    // either the system does not use an nvidia gpu, or something went
    // so wrong that we're unable to use the nvidia api...therefore do nothing
    /*
    if (!nvapiStatusOk(NvAPI_Initialize()))
        return;
    */
    // for debugging; use ^ in prod
    if (!nvapiStatusOk(NvAPI_Initialize()))
    {
        std::cout << "Unable to initialize Nvidia api" << std::endl;
        return;
    }
    else
    {
        std::cout << "Nvidia api initialized successfully" << std::endl;
    }

    // initialize session
    NvDRSSessionHandle hSession;
    if (!nvapiStatusOk(NvAPI_DRS_CreateSession(&hSession)))
        return;

    // load settings
    if (!nvapiStatusOk(NvAPI_DRS_LoadSettings(hSession)))
        return;

    // check if the application already exists
    NvDRSProfileHandle hProfile;
    NvAPI_UnicodeString nvAppName;
    setNVUstring(nvAppName, appName);

    NVDRS_APPLICATION app;
    app.version = NVDRS_APPLICATION_VER_V1;

    // documentation states this will return ::NVAPI_APPLICATION_NOT_FOUND, however I cannot
    // find where that is defined anywhere in the headers...so not sure what's going to happen with this?
    //
    // This is returning NVAPI_EXECUTABLE_NOT_FOUND, which might be what it's supposed to return when it can't
    // find an existing application, and the documentation is just outdated?
    status = NvAPI_DRS_FindApplicationByName(hSession, nvAppName, &hProfile, &app);
    if (!nvapiStatusOk(status))
    {
        // if status does not equal NVAPI_EXECUTABLE_NOT_FOUND, then something bad happened and we should not proceed
        if (status != NVAPI_EXECUTABLE_NOT_FOUND)
        {
            NvAPI_Unload();
            return;
        }

        // create the application as it does not already exist

        // Fill Profile Info
        NVDRS_PROFILE profileInfo;
        profileInfo.version = NVDRS_PROFILE_VER;
        profileInfo.isPredefined = 0;
        setNVUstring(profileInfo.profileName, profileName);

        // Create Profile
        if (!nvapiStatusOk(NvAPI_DRS_CreateProfile(hSession, &profileInfo, &hProfile)))
        {
            NvAPI_Unload();
            return;
        }

        // Fill Application Info; can't re-use the app variable for some reason
        NVDRS_APPLICATION app2;
        app2.version = NVDRS_APPLICATION_VER_V1;
        app2.isPredefined = 0;
        setNVUstring(app2.appName, appName);
        setNVUstring(app2.userFriendlyName, appFriendlyName);
        setNVUstring(app2.launcher, L"");
        setNVUstring(app2.fileInFolder, L"");

        // Create Application
        if (!nvapiStatusOk(NvAPI_DRS_CreateApplication(hSession, hProfile, &app2)))
        {
            NvAPI_Unload();
            return;
        }
    }

    // update profile settings
    NVDRS_SETTING setting;
    setting.version             = NVDRS_SETTING_VER;
    setting.settingId           = OGL_THREAD_CONTROL_ID;
    setting.settingType         = NVDRS_DWORD_TYPE;
    setting.settingLocation     = NVDRS_CURRENT_PROFILE_LOCATION;
    setting.isCurrentPredefined = 0;
    setting.isPredefinedValid   = 0;
    setting.u32CurrentValue     = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;
    setting.u32PredefinedValue  = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;

    // set the setting
    if (!nvapiStatusOk(NvAPI_DRS_SetSetting(hSession, hProfile, &setting)))
    {
        NvAPI_Unload();
        return;
    }

    // save changes
    if (!nvapiStatusOk(NvAPI_DRS_SaveSettings(hSession)))
    {
        NvAPI_Unload();
        return;
    }

    // disable in prod
    std::cout << "Nvidia application profile updated successfully" << std::endl;

    NvAPI_DRS_DestroySession(hSession);

    // unload the api as we're done with it
    NvAPI_Unload();
}

int main()
{
    // if building for anything other than windows, we'll need to not call this AND have
    // some preprocessor logic to not include any of the api code. No linux love apparently...so
    // that's going to be a thing we'll have to figure out down the road -_-
    initNvidiaApplicationProfile();
    std::cin.get();
    return 0;
}

How can I determine whether a process is 32 or 64 bit?

Given a Windows process handle, how can I determine, using C++ code, whether the process is 32 bit or 64 bit?
If you have a process handle, use IsWow64Process().
If IsWow64Process() reports true, the process is 32-bit running on a 64-bit OS.
If IsWow64Process() reports false (or does not exist in kernel32.dll), then the process is either 32-bit running on a 32-bit OS, or is 64-bit running on a 64-bit OS. To know if the OS itself is 32-bit or 64-bit, use GetNativeSystemInfo() (or GetSystemInfo() if GetNativeSystemInfo() is not available in kernel32.dll).
BOOL IsWow64(HANDLE process)
{
    BOOL bIsWow64 = FALSE;

    typedef BOOL (WINAPI *LPFN_ISWOW64PROCESS) (HANDLE, PBOOL);
    LPFN_ISWOW64PROCESS fnIsWow64Process;
    fnIsWow64Process = (LPFN_ISWOW64PROCESS)GetProcAddress(GetModuleHandle(TEXT("kernel32")), "IsWow64Process");

    if (NULL != fnIsWow64Process)
    {
        if (!fnIsWow64Process(process, &bIsWow64))
        {
            // handle error
        }
    }
    return bIsWow64;
}
bool IsX86Process(HANDLE process)
{
    SYSTEM_INFO systemInfo = { 0 };
    GetNativeSystemInfo(&systemInfo);

    // x86 environment
    if (systemInfo.wProcessorArchitecture == PROCESSOR_ARCHITECTURE_INTEL)
        return true;

    // Check if the process is an x86 process that is running on an x64 environment.
    // IsWow64 returns true if the process is an x86 process.
    return IsWow64(process);
}
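A hedged usage sketch for the two helpers above, checking the current process (assumes they are in scope along with <Windows.h>):

#include <cstdio>

int main()
{
    // GetCurrentProcess() returns a pseudo-handle; no CloseHandle is needed.
    if (IsX86Process(GetCurrentProcess()))
        std::printf("This process is 32-bit.\n");
    else
        std::printf("This process is 64-bit.\n");
    return 0;
}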
If you have a handle to the module, you can do this (ImageNtHeader is declared in DbgHelp.h and requires linking Dbghelp.lib):
IMAGE_NT_HEADERS* headers = ImageNtHeader(handle);
if (headers->FileHeader.Machine == IMAGE_FILE_MACHINE_I386)
{
    // module is x86
}
else if (headers->FileHeader.Machine == IMAGE_FILE_MACHINE_AMD64)
{
    // module is x64
}
I took help from my own answer.
Try
#include <Windows.h>

enum class process_architecture
{
    nun,
    x32,
    x64
};

enum class windows_architecture
{
    x32,
    x64
};

// Note: the original had a stray "process::" qualifier here with no
// matching class definition; the call site below uses the free function.
windows_architecture get_windows_architecture()
{
#ifdef _WIN64
    return windows_architecture::x64;
#else
    return windows_architecture::x32;
#endif
}

process_architecture get_process_architecture(DWORD id)
{
    BOOL is_wow_64 = FALSE;
    HANDLE h_process = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, false, id);
    if (!h_process) return process_architecture::nun;

    BOOL result = IsWow64Process(h_process, &is_wow_64);
    CloseHandle(h_process);

    if (!result) return process_architecture::nun;
    if (is_wow_64) return process_architecture::x32;
    else if (get_windows_architecture() == windows_architecture::x32) return process_architecture::x32;
    else return process_architecture::x64;
}
If you do not want to use the Windows API, try the following. Note that this only reports the bitness of your own process (a pointer is 8 bytes in a 64-bit build and 4 bytes in a 32-bit build), so the result is effectively fixed at compile time:
#include <iostream>

int main()
{
    const int* pInt = nullptr;
    if (sizeof(pInt) == 8)
    {
        std::cout << "64 bit process";
    }
    else if (sizeof(pInt) == 4)
    {
        std::cout << "32 bit process";
    }
    return 0;
}

C++ - SQLite3 leaks handles in multithread environment

I wrote a simple program that spawns 10 threads. Each thread opens a database (common to all the threads), or creates it (with the "Write-Ahead Log" option) if the open fails, creates a table on the database, and then goes into an infinite loop in which it adds one row at a time into its table. I found out that the program leaks about 2 handles every 5 minutes. I tried a tool called Memory Verify, which tells me that the leaked handles are SQLite3 file locks (line 34034 in version 3.7.13), but I am not sure whether the bug is in SQLite or in the way I use it.
I haven't specified any compiler options to build SQLite3, so it is built as Multi-Thread, and as far as I understand, Multi-Thread should work fine in my case as every thread has its own SQLite connection.
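(For reference, the threading mode can also be requested explicitly at runtime; a minimal sketch, which must run before the first connection is opened:)

#include <sqlite3.h>

bool ConfigureMultiThreadMode()
{
    // Must be called before sqlite3_initialize() / the first sqlite3_open*().
    // SQLITE_CONFIG_MULTITHREAD is safe as long as no single connection is
    // shared between threads, matching the one-connection-per-thread design.
    return sqlite3_config(SQLITE_CONFIG_MULTITHREAD) == SQLITE_OK;
}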
To open or create a database I use the following code:
bool Create()
{
    int iFlags = 0;
    iFlags = iFlags | SQLITE_OPEN_READWRITE | SQLITE_OPEN_NOMUTEX | SQLITE_OPEN_CREATE;
    return sqlite3_open_v2(dbName_sm.c_str(), &pHandle_m, iFlags, 0) == SQLITE_OK;
}

bool Open()
{
    int iFlags = 0;
    iFlags = iFlags | SQLITE_OPEN_READWRITE | SQLITE_OPEN_NOMUTEX;
    return sqlite3_open_v2(dbName_sm.c_str(), &pHandle_m, iFlags, 0) == SQLITE_OK;
}
The hard loop in every thread calls ExecuteQuery, which prepares, steps, and finalizes an INSERT statement:
bool ExecuteQuery(const std::string& statement)
{
    bool res = Prepare(statement);
    if (!res)
    {
        return false;
    }

    SQLiteStatus status = Step();
    Finalize();

    res = (ESuccess == status || EDatabaseDone == status);
    return res;
}

bool Prepare(const std::string& statement)
{
    return sqlite3_prepare_v2(pHandle_m, statement.c_str(), -1, &pStmt_m, 0) == SQLITE_OK;
}

enum SQLiteStatus { ESuccess, EDatabaseDone, EDatabaseTimeout, EDatabaseError };

SQLiteStatus Step()
{
    int iRet = sqlite3_step(pStmt_m);
    if (iRet == SQLITE_DONE)
    {
        return EDatabaseDone;
    }
    else if (iRet == SQLITE_BUSY)
    {
        return EDatabaseTimeout;
    }
    else if (iRet != SQLITE_ROW)
    {
        return EDatabaseError;
    }
    return ESuccess;
}

bool Finalize()
{
    int iRet = sqlite3_finalize(pStmt_m);
    pStmt_m = 0;
    return iRet == SQLITE_OK;
}
Do you guys see any mistake in my code or is it a known issue in SQLite? I tried to google it for a couple of days but I couldn't find anything about it.
Thank you very much for your help.
Regards,
Andrea
P.S. I forgot to say that I am running my test on a WinXP 64bit PC, the compiler is VS2010, the application is compiled in 32bit, SQLite version is 3.7.13...
Check whether you call sqlite3_reset after every sqlite3_step, because this is one case that might cause leaks. After preparing a statement with sqlite3_prepare and executing it with sqlite3_step, you always need to reset it with sqlite3_reset.
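A hedged sketch of how that advice maps onto the question's own helpers (same member names as above; the reset releases the statement's resources before the finalize):

bool ExecuteQuery(const std::string& statement)
{
    if (!Prepare(statement))
        return false;

    SQLiteStatus status = Step();

    // Reset the statement after stepping so any resources/locks it still
    // holds are released, then finalize as before.
    sqlite3_reset(pStmt_m);
    Finalize();

    return (ESuccess == status || EDatabaseDone == status);
}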

Generate machine-specific key for Mac

On Windows, we generate a PC-specific unique key used to tie a license to a PC. It's a C++ app using wxWidgets, which is theoretically cross-platform compatible but has not been maintained on the Mac side. We use some Win32-specific code for generating a key... how might I do something comparable on the Mac?
Looking more into whitelionV's and blahdiblah's answers, I found this useful page:
Accessing the system serial number programmatically
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOKitLib.h>

// Returns the serial number as a CFString.
// It is the caller's responsibility to release the returned CFString when done with it.
void CopySerialNumber(CFStringRef *serialNumber)
{
    if (serialNumber != NULL) {
        *serialNumber = NULL;

        io_service_t platformExpert = IOServiceGetMatchingService(kIOMasterPortDefault,
            IOServiceMatching("IOPlatformExpertDevice"));

        if (platformExpert) {
            CFTypeRef serialNumberAsCFString =
                IORegistryEntryCreateCFProperty(platformExpert,
                                                CFSTR(kIOPlatformSerialNumberKey),
                                                kCFAllocatorDefault, 0);
            if (serialNumberAsCFString) {
                *serialNumber = serialNumberAsCFString;
            }
            IOObjectRelease(platformExpert);
        }
    }
}
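A hedged usage sketch for the snippet above, converting the returned CFString to a plain C string (the buffer size is an arbitrary assumption):

#include <stdio.h>

int main(void)
{
    CFStringRef serial = NULL;
    CopySerialNumber(&serial);
    if (serial) {
        char buf[64] = {0};
        if (CFStringGetCString(serial, buf, sizeof(buf), kCFStringEncodingUTF8))
            printf("Serial number: %s\n", buf);
        CFRelease(serial); // we own the reference per the Copy rule
    }
    return 0;
}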
Accessing the built-in MAC address programmatically
#include <stdio.h>
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOKitLib.h>
#include <IOKit/network/IOEthernetInterface.h>
#include <IOKit/network/IONetworkInterface.h>
#include <IOKit/network/IOEthernetController.h>

static kern_return_t FindEthernetInterfaces(io_iterator_t *matchingServices);
static kern_return_t GetMACAddress(io_iterator_t intfIterator, UInt8 *MACAddress, UInt8 bufferSize);

static kern_return_t FindEthernetInterfaces(io_iterator_t *matchingServices)
{
    kern_return_t          kernResult;
    CFMutableDictionaryRef matchingDict;
    CFMutableDictionaryRef propertyMatchDict;

    matchingDict = IOServiceMatching(kIOEthernetInterfaceClass);
    if (NULL == matchingDict) {
        printf("IOServiceMatching returned a NULL dictionary.\n");
    }
    else {
        propertyMatchDict = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
                                                      &kCFTypeDictionaryKeyCallBacks,
                                                      &kCFTypeDictionaryValueCallBacks);
        if (NULL == propertyMatchDict) {
            printf("CFDictionaryCreateMutable returned a NULL dictionary.\n");
        }
        else {
            CFDictionarySetValue(matchingDict, CFSTR(kIOPropertyMatchKey), propertyMatchDict);
            CFRelease(propertyMatchDict);
        }
    }

    kernResult = IOServiceGetMatchingServices(kIOMasterPortDefault, matchingDict, matchingServices);
    if (KERN_SUCCESS != kernResult) {
        printf("IOServiceGetMatchingServices returned 0x%08x\n", kernResult);
    }
    return kernResult;
}

static kern_return_t GetMACAddress(io_iterator_t intfIterator, UInt8 *MACAddress, UInt8 bufferSize)
{
    io_object_t   intfService;
    io_object_t   controllerService;
    kern_return_t kernResult = KERN_FAILURE;

    if (bufferSize < kIOEthernetAddressSize) {
        return kernResult;
    }
    bzero(MACAddress, bufferSize);

    while ((intfService = IOIteratorNext(intfIterator)))
    {
        CFTypeRef MACAddressAsCFData;
        kernResult = IORegistryEntryGetParentEntry(intfService,
                                                   kIOServicePlane,
                                                   &controllerService);
        if (KERN_SUCCESS != kernResult) {
            printf("IORegistryEntryGetParentEntry returned 0x%08x\n", kernResult);
        }
        else {
            MACAddressAsCFData = IORegistryEntryCreateCFProperty(controllerService,
                                                                 CFSTR(kIOMACAddress),
                                                                 kCFAllocatorDefault,
                                                                 0);
            if (MACAddressAsCFData) {
                CFShow(MACAddressAsCFData); // for display purposes only; output goes to stderr
                CFDataGetBytes(MACAddressAsCFData, CFRangeMake(0, kIOEthernetAddressSize), MACAddress);
                CFRelease(MACAddressAsCFData);
            }
            (void) IOObjectRelease(controllerService);
        }
        (void) IOObjectRelease(intfService);
    }
    return kernResult;
}

int main(int argc, char *argv[])
{
    kern_return_t kernResult = KERN_SUCCESS;
    io_iterator_t intfIterator;
    UInt8         MACAddress[kIOEthernetAddressSize];

    kernResult = FindEthernetInterfaces(&intfIterator);
    if (KERN_SUCCESS != kernResult) {
        printf("FindEthernetInterfaces returned 0x%08x\n", kernResult);
    }
    else {
        kernResult = GetMACAddress(intfIterator, MACAddress, sizeof(MACAddress));
        if (KERN_SUCCESS != kernResult) {
            printf("GetMACAddress returned 0x%08x\n", kernResult);
        }
        else {
            printf("This system's built-in MAC address is %02x:%02x:%02x:%02x:%02x:%02x.\n",
                   MACAddress[0], MACAddress[1], MACAddress[2], MACAddress[3], MACAddress[4], MACAddress[5]);
        }
    }
    (void) IOObjectRelease(intfIterator); // Release the iterator.
    return kernResult;
}
While the MAC address is, on the face of it, probably preferable as being more predictable, they warn that:
Netbooting introduces a wrinkle with systems with multiple built-in Ethernet ports. The primary Ethernet port on these systems is the one that is connected to the NetBoot server. This means that a search for the primary port may return either of the built-in MAC addresses depending on which port was used for netbooting. Note that "built-in" does not include Ethernet ports that reside on an expansion card.
It concerns me this might mean you don't always get the same value back?
You could just call system_profiler and look for "Serial Number"
/usr/sbin/system_profiler | grep "Serial Number (system)"
There might well be a programmatic way to get the same information, but I don't know it offhand.
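For what it's worth, a hedged sketch of one programmatic route: shelling out to the same pipeline with popen and capturing its output (this mirrors the command above rather than using a documented API):

#include <cstdio>
#include <string>

// Runs the system_profiler pipeline above and returns whatever line(s) it prints.
std::string readSerialNumberLine()
{
    std::string result;
    FILE* pipe = popen("/usr/sbin/system_profiler | grep \"Serial Number (system)\"", "r");
    if (!pipe)
        return result;

    char buf[256];
    while (fgets(buf, sizeof(buf), pipe))
        result += buf;

    pclose(pipe);
    return result;
}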
To uniquely identify any machine you could try to use the MAC address. The process, although not trivial, is quite simple. There are a lot of cross-platform open source libraries.
In fact you could try this Apple dev example