How to properly handle audio interruptions? - c++

I've created an OpenGL 3D game that uses OpenAL for audio playback, and I'm experiencing a problem: audio is lost if the "Home" button is pressed before the audio device has been initialized. I tried hooking into the audio session interruption handler, but my callback is never called, no matter whether I minimize or maximize my application. My "OpenALInterruptionListener" is never invoked.
What am I doing wrong?
AudioSessionInitialize(NULL, NULL, OpenALInterruptionListener, this);

void OpenALInterruptionListener(void* inClientData, UInt32 inInterruptionState)
{
    OpenALDevice* device = (OpenALDevice*)inClientData;
    if (inInterruptionState == kAudioSessionBeginInterruption)
    {
        alcSuspendContext(_context);
        alcMakeContextCurrent(_context);
        AudioSessionSetActive(false);
    }
    else if (inInterruptionState == kAudioSessionEndInterruption)
    {
        UInt32 sessionCategory = kAudioSessionCategory_AmbientSound;
        AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(sessionCategory), &sessionCategory);
        AudioSessionSetActive(true);
        alcMakeContextCurrent(_context);
        alcProcessContext(_context);
    }
}

Please note that there are currently known issues with audio interruptions on iOS. Begin-interruption notifications arrive fine, but end-interruption notifications do not always fire. A bug has been filed with Apple on this, and they have not responded.
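Until Apple fixes this, a workaround worth trying (my suggestion, not an official fix): also restore the audio session when the application returns to the foreground, since that notification does arrive reliably. A minimal sketch, assuming an OpenALDevice with a GetContext() accessor as in the code below, called from your applicationDidBecomeActive: handler:

// Fallback for the missing kAudioSessionEndInterruption notification.
// Call this from the app delegate's applicationDidBecomeActive: hook.
void RestoreOpenALAfterInterruption(OpenALDevice* device)
{
    UInt32 sessionCategory = kAudioSessionCategory_AmbientSound;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                            sizeof(sessionCategory), &sessionCategory);
    AudioSessionSetActive(true);
    alcMakeContextCurrent(device->GetContext());
    alcProcessContext(device->GetContext());
}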

Try passing NULL to alcMakeContextCurrent() when the interruption begins:

void OpenALInterruptionListener(void* inClientData, UInt32 inInterruptionState)
{
    OpenALDevice* device = (OpenALDevice*)inClientData;
    OSStatus nResult;
    if (inInterruptionState == kAudioSessionBeginInterruption)
    {
        alcMakeContextCurrent(NULL);
    }
    else if (inInterruptionState == kAudioSessionEndInterruption)
    {
        nResult = AudioSessionSetActive(true);
        if (nResult)
        {
            // "Error setting audio session active"
        }
        alcMakeContextCurrent(device->GetContext());
    }
}

Vulkan proper frame synchronization

I'm trying to synchronize frames in the Vulkan API, but I'm running into some weird problems. I implemented synchronization like this:
void RenderSystem::OnUpdate(const float deltaTime)
{
    uint32_t frameIndex{};
    auto result = SwapChain->AcquireNextImageIndex(PresentationCompleteSemaphore.get(),
                                                   nullptr,
                                                   &frameIndex);
    InFlightFences[frameIndex]->Wait();
    InFlightFences[frameIndex]->Reset();
    if (result == VK_ERROR_OUT_OF_DATE_KHR)
    {
        Recreate();
        return;
    }
    else if (result != VK_SUCCESS && result != VK_SUBOPTIMAL_KHR)
    {
        throw std::runtime_error("Error when acquiring next image...");
    }
    UpdateModelMatrix(deltaTime, frameIndex); // TODO: Remove this! For testing purposes only
    VkPipelineStageFlags waitStages[] = { VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT };
    GraphicsMainQueue.Submit({ TriangleCommandBuffers[frameIndex].get() },
                             { PresentationCompleteSemaphore.get() },
                             { RenderCompleteSemaphore.get() },
                             InFlightFences[frameIndex].get(),
                             waitStages);
    result = PresentationQueue.Present({ RenderCompleteSemaphore.get() },
                                       { SwapChain.get() },
                                       &frameIndex);
    if (result == VK_ERROR_OUT_OF_DATE_KHR || result == VK_SUBOPTIMAL_KHR || MainWindow->HasBeenResized())
        Recreate();
    else if (result != VK_SUCCESS)
        throw std::runtime_error("Failed to present result!");
}
And it works on Windows 10 like a charm. Unfortunately, on Linux Mint it doesn't work in some cases. First of all, moving the window on Linux is very laggy and sometimes freezes the whole OS for a second, but that's not the biggest problem. Closing the window calls vkDeviceWaitIdle and... it freezes the application. It never starts responding again, because it waits on the device forever. The validation layer doesn't report any problem with my code.
I partly solved this problem by moving the fence synchronization to the bottom of my function, but in my opinion it's a suboptimal solution, because I wait for the frame to finish rendering instead of preparing the next frame.
// ...
    if (result == VK_ERROR_OUT_OF_DATE_KHR || result == VK_SUBOPTIMAL_KHR || MainWindow->HasBeenResized())
        Recreate();
    else if (result != VK_SUCCESS)
        throw std::runtime_error("Failed to present result!");
    InFlightFences[frameIndex]->Wait();
    InFlightFences[frameIndex]->Reset();
}
How can I properly synchronize frames not only on Windows but also on Linux? What am I doing wrong? What am I missing?
You have only one set of semaphores. That means access to those semaphores may be improperly synchronized.
Let's see the code without the distractors:
AcquireNextImageIndex( PresentationCompleteSemaphore, frameIndex );
InFlightFences[frameIndex].WaitAndReset();
QSubmit( PresentationCompleteSemaphore, RenderCompleteSemaphore, InFlightFences[frameIndex] );
Present( RenderCompleteSemaphore, frameIndex );
Now, how do we know we can reuse PresentationCompleteSemaphore in Acquire? The Submit waits on (and unsignals) it, and must finish first. We could infer this from the fence, but the fence wait happens after the Acquire, so the semaphore might still be in use while Acquire tries to reuse it. This is a possible program flow:
AcquireNextImageIndex( PresentationCompleteSemaphore ) -> frameIndex = 0;
QSubmit( PresentationCompleteSemaphore, RenderCompleteSemaphore, InFlightFences[0] );
// hazard; QSubmit still might be waiting on PresentationCompleteSemaphore
AcquireNextImageIndex( PresentationCompleteSemaphore ) -> frameIndex = 1;
How do we know we can reuse RenderCompleteSemaphore? The QSubmit can only reuse it when Present is already done with it. Currently the only sane way to infer that is when Acquire hands back the same swapchain image. This is a possible program flow:
AcquireNextImageIndex( PresentationCompleteSemaphore ) -> frameIndex = 0;
QSubmit( PresentationCompleteSemaphore, RenderCompleteSemaphore, InFlightFences[0] );
Present( RenderCompleteSemaphore, 0 );
AcquireNextImageIndex( PresentationCompleteSemaphore ) -> frameIndex = 1;
// hazard; RenderCompleteSemaphore might still be waited on by Present
// which presented image 0, but we acquired image 1, so it might be async
QSubmit( PresentationCompleteSemaphore, RenderCompleteSemaphore, InFlightFences[1] );
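One way to fix both hazards, sketched below under the assumption that the wrapper types from the question behave like their Vulkan counterparts: keep one pair of semaphores per frame in flight (the AcquireSemaphores / RenderSemaphores arrays here are new, one entry per slot), index them by a frame counter rather than by the acquired image index, and wait on that slot's fence before reusing its semaphores:

static constexpr uint32_t MaxFramesInFlight = 2;
uint32_t currentFrame = 0; // advances once per OnUpdate call

void RenderSystem::OnUpdate(const float deltaTime)
{
    // The fence proves the submit that waited on AcquireSemaphores[currentFrame]
    // has finished, so this slot's semaphores are safe to reuse.
    InFlightFences[currentFrame]->Wait();

    uint32_t imageIndex{};
    auto result = SwapChain->AcquireNextImageIndex(AcquireSemaphores[currentFrame].get(),
                                                   nullptr,
                                                   &imageIndex);
    if (result == VK_ERROR_OUT_OF_DATE_KHR)
    {
        Recreate();
        return;
    }
    else if (result != VK_SUCCESS && result != VK_SUBOPTIMAL_KHR)
    {
        throw std::runtime_error("Error when acquiring next image...");
    }

    // Reset only once we know a submit will follow, so the early return above
    // cannot leave the fence unsignaled (which would deadlock the next Wait()).
    InFlightFences[currentFrame]->Reset();

    VkPipelineStageFlags waitStages[] = { VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT };
    GraphicsMainQueue.Submit({ TriangleCommandBuffers[imageIndex].get() },
                             { AcquireSemaphores[currentFrame].get() },
                             { RenderSemaphores[currentFrame].get() },
                             InFlightFences[currentFrame].get(),
                             waitStages);
    result = PresentationQueue.Present({ RenderSemaphores[currentFrame].get() },
                                       { SwapChain.get() },
                                       &imageIndex);
    // ... handle the present result as before ...

    currentFrame = (currentFrame + 1) % MaxFramesInFlight;
}

Strictly speaking, reuse of RenderSemaphores[currentFrame] is still only inferred from the fence, as noted above; allocating one render-complete semaphore per swapchain image (indexed by imageIndex) removes that last hazard.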

What is the right approach to fire an event in a C++ DLL and handle it in a client (C++ or C# application that consumes the DLL)?

I am trying to build a DLL that reads Raw Input, processes the data, and streams it to a client application that consumes the DLL. Both my DLL and the client application will run in the background, or in the system tray. To achieve this behavior, I plan to use events. While trying to learn the right approach, I found conflicting guidance online and had trouble differentiating between the options and picking the right one.
This article on event handling in native C++ suggests setting up an event source and event receiver using the event_source and event_receiver attributes. Though this approach might work in my situation, this documentation on event handling states that event handling, while supported for native C++, is deprecated and will not be supported in future releases. This implies that I should not use this approach. At the same time, this documentation on handling COM events looks like one potentially correct approach; however, I am not able to fully understand the code there. Are there better examples or documentation that I could learn from?
Finally, this article on using C++ events in CLI discusses what I understand is a slightly different approach. The code below shows my approach for registering for input from a Precision Touchpad. I then want to override the WindowProc callback function to read input in the background. Finally, I want my DLL to raise my own events when a touch begins, is in progress, and ends. An application (running in the background) consuming this DLL should then be able to listen for those events and perform actions.
What is the right, non-deprecated approach to fire events in a C++ DLL and handle them in the client? (A sketch of the kind of interface I have in mind follows my code below.)
My Code
// skipping headers and error handling for brevity.
static std::vector<RAWINPUTDEVICELIST> getRawInputDevices()
{
    std::vector<RAWINPUTDEVICELIST> devices(64);
    while (true) {
        UINT numDevices = devices.size();
        UINT ret = GetRawInputDeviceList(&devices[0], &numDevices, sizeof(RAWINPUTDEVICELIST));
        if (ret != (UINT)-1) {
            devices.resize(ret);
            return devices;
        }
        else if (GetLastError() == ERROR_INSUFFICIENT_BUFFER) {
            devices.resize(numDevices);
        }
        else {
            throw win32_error();
        }
    }
}
static RID_DEVICE_INFO getDeviceInfo(HANDLE HDevice)
{
    RID_DEVICE_INFO info;
    info.cbSize = sizeof(RID_DEVICE_INFO);
    UINT deviceSize = sizeof(RID_DEVICE_INFO);
    if (GetRawInputDeviceInfo(HDevice, RIDI_DEVICEINFO, &info, &deviceSize) == (UINT)-1)
    {
        throw win32_error();
    }
    return info;
}
bool isPrecisionTouchAvailable()
{
    bool isDigitizerAvailable = false;
    std::vector<RAWINPUTDEVICELIST> devices = getRawInputDevices();
    for (std::size_t i = 0; i < devices.size(); ++i)
    {
        RID_DEVICE_INFO info = getDeviceInfo(devices[i].hDevice);
        // note: the device type to test is RIM_TYPEHID (RIM_INPUT is a WM_INPUT wParam value, not a device type)
        if (info.dwType == RIM_TYPEHID && info.hid.usUsagePage == HID_USAGE_PAGE_DIGITIZER && info.hid.usUsage == HID_USAGE_DIGITIZER_TOUCH_PAD)
        {
            isDigitizerAvailable = true;
            break;
        }
    }
    return isDigitizerAvailable;
}
bool registerTouchpadForInput()
{
    bool registrationStatus = false;
    bool precisionTouchStatus = isPrecisionTouchAvailable();
    // was: if (isPrecisionTouchAvailable != 0), which compared the function pointer and always returned early
    if (!precisionTouchStatus)
    {
        return registrationStatus;
    }
    else {
        RAWINPUTDEVICE rid[1]; // one entry for precision touch while the application is in the background; an array because I plan to register more devices (e.g. keyboard) once I get things working
        rid[0].usUsagePage = HID_USAGE_PAGE_DIGITIZER;  // for touchpad
        rid[0].usUsage = HID_USAGE_DIGITIZER_TOUCH_PAD; // usage ID for touchpad
        rid[0].dwFlags = RIDEV_INPUTSINK;               // to receive events even in the background
        // note: RIDEV_INPUTSINK also requires rid[0].hwndTarget to be set to a valid window handle
        if (RegisterRawInputDevices(rid, 1, sizeof(rid[0])) == FALSE)
        {
            GetLastError(); // inspect the error code here
            //throw "error registering touchpad to get data.";
        }
        else { registrationStatus = true; }
    }
    return registrationStatus;
}
When my application launches, I want to call registerTouchpadForInput and let the user know that they will not be able to use my application if they do not have a Precision Touchpad on their device.
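For reference, here is the kind of interface I have in mind: the DLL would export a plain C registration function, and the client (C++ directly, or C# via P/Invoke) would supply a callback that the DLL invokes from its WindowProc. This is only a hypothetical sketch; none of these names come from an existing library.

// touchpad_events.h -- shared between the DLL and its clients
extern "C" {
    enum TouchPhase { TOUCH_BEGIN, TOUCH_MOVE, TOUCH_END };

    // Signature the client implements; userData is passed back verbatim.
    typedef void (__stdcall *TouchCallback)(TouchPhase phase, int x, int y, void* userData);

    // The DLL stores the pointer and calls it when touch events arrive.
    __declspec(dllexport) void __stdcall SetTouchCallback(TouchCallback callback, void* userData);
}

On the C# side this would map onto a delegate marked [UnmanagedFunctionPointer(CallingConvention.StdCall)] and passed through P/Invoke, keeping a reference to the delegate alive so the garbage collector does not collect it while the DLL can still call it.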

Figuring out a race condition

I am building a screen recorder and using ffmpeg to assemble the video from frames I get from Google Chrome. I get a green screen in the output video. I think there is a race condition between the threads, since I am not allowed to use the main thread to do the processing. Here is what the code looks like.
This function runs each time I get a new frame. I suspect that the buffers filled by avpicture_fill and vpx_codec_get_cx_data are being overwritten before write_ivf_frame_header and WriteFile are done.
I am thinking of creating a queue where this function pushes the pp::VideoFrame object, and another thread, guarded by a mutex, dequeues it and does the processing below (see the sketch after the code).
What is the best solution for this problem? And what is the optimal way of debugging it?
void EncoderInstance::OnGetFrame(int32_t result, pp::VideoFrame frame) {
    if (result != PP_OK)
        return;
    const uint8_t* data = static_cast<const uint8_t*>(frame.GetDataBuffer());
    pp::Size size;
    frame.GetSize(&size);
    uint32_t buffersize = frame.GetDataBufferSize();
    if (is_recording_) {
        vpx_codec_iter_t iter = NULL;
        const vpx_codec_cx_pkt_t *pkt;
        // copy the pixels into our "raw input" container.
        int bytes_filled = avpicture_fill(&pic_raw, data, AV_PIX_FMT_YUV420P, out_width, out_height);
        if (!bytes_filled) {
            Logger::Log("Cannot fill the raw input buffer");
            return;
        }
        if (vpx_codec_encode(&codec, &raw, frame_cnt, 1, flags, VPX_DL_REALTIME))
            die_codec(&codec, "Failed to encode frame");
        while ((pkt = vpx_codec_get_cx_data(&codec, &iter))) {
            switch (pkt->kind) {
            case VPX_CODEC_CX_FRAME_PKT:
                glb_app_thread.message_loop().PostWork(callback_factory_.NewCallback(&EncoderInstance::write_ivf_frame_header, pkt));
                glb_app_thread.message_loop().PostWork(callback_factory_.NewCallback(&EncoderInstance::WriteFile, pkt));
                break;
            default:
                break;
            }
        }
        frame_cnt++;
    }
    video_track_.RecycleFrame(frame);
    if (need_config_) {
        ConfigureTrack();
        need_config_ = false;
    } else {
        video_track_.GetFrame(
            callback_factory_.NewCallbackWithOutput(
                &EncoderInstance::OnGetFrame));
    }
}
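Here is the kind of queue I have in mind, as a minimal sketch; the FrameData type is illustrative, and the key point is to copy the pixels out of the pp::VideoFrame before recycling it, so the encoder never reads a buffer that Chrome is already overwriting:

#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <utility>
#include <vector>

// Owns a copy of the frame's pixels, so the pp::VideoFrame can be recycled
// immediately after Push().
struct FrameData {
    std::vector<uint8_t> pixels;
    // plus whatever else the encoder needs: size, timestamp, ...
};

class FrameQueue {
public:
    void Push(FrameData frame) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(frame));
        }
        cv_.notify_one();
    }

    // Blocks until a frame is available.
    FrameData Pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        FrameData frame = std::move(queue_.front());
        queue_.pop();
        return frame;
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<FrameData> queue_;
};

A worker thread would then loop on Pop() and run the avpicture_fill / vpx_codec_encode / write sequence, so each packet is fully written before the next frame is encoded. For debugging, logging std::this_thread::get_id() in each of the suspect functions is a cheap way to confirm which calls overlap.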

Nvidia graphics driver causing noticeable frame stuttering

OK, I've been researching this issue for a few days now, so let me go over what I know so far, which leads me to believe this might be an issue with Nvidia's driver and not my code.
Basically my game starts stuttering after running a few seconds (random frames take 70ms instead of 16ms, on a regular-ish pattern). This ONLY happens if a setting called "Threaded Optimization" is enabled in the Nvidia control panel (latest drivers, Windows 10). Unfortunately this setting is enabled by default, and I'd rather not have to make people tweak their settings to get an enjoyable experience.
The game is not CPU or GPU intensive (2ms a frame without vsync). It's not calling any OpenGL functions that need to synchronize data, and it's not streaming any buffers or reading data back from the GPU or anything. It's about the simplest possible renderer.
The problem was always there; it just only started becoming noticeable when I added in fmod for audio. fmod is not the cause of this (more on that later in the post).
Trying to debug the problem with Nvidia Nsight made the problem go away: "Start Collecting Data" instantly causes the stuttering to stop. No dice there.
In the profiler, a lot of CPU time is spent in "nvoglv32.dll". This thread only spawns if Threaded Optimization is on. I suspect a synchronization issue, so I debug with the Visual Studio Concurrency Visualizer.
A-HA!
Investigating these blocks of CPU time on the Nvidia thread, the earliest named function I can get in their call stack is CreateToolhelp32Snapshot, followed by a lot of time spent in Thread32Next. I noticed Thread32Next in the profiler when looking at CPU times earlier, so it does seem like I'm on the right track.
So it looks like the Nvidia driver is periodically grabbing a snapshot of the whole process's threads for some reason. What could possibly be the reason, why is it doing this, and how do I stop it?
This also explains why the problem started becoming noticeable once I added in fmod: the driver is grabbing info for all of the process's threads, and fmod spawns a lot of threads.
Any help? Is this just a bug in Nvidia's driver, or is there something I can do to fix it other than telling people to disable Threaded "Optimization"?
edit 1: The same issue occurs with current Nvidia drivers on my laptop too. So I'm not crazy.
edit 2: The same issue occurs on version 362 (the previous major version) of Nvidia's driver.
... or is there something I can do to fix it other than telling people to disable Threaded "Optimization"?
Yes.
You can create a custom "Application Profile" for your game using NVAPI and disable the "Threaded Optimization" setting in it.
There is a .PDF file on the NVIDIA site with some help and code examples regarding NVAPI usage.
To see and manage all your NVIDIA profiles, I recommend using NVIDIA Inspector. It is more convenient than the default NVIDIA Control Panel.
Also, here is my code example, which creates an "Application Profile" with "Threaded Optimization" disabled:
#include <stdlib.h>
#include <stdio.h>
#include <nvapi.h>
#include <NvApiDriverSettings.h>

const wchar_t* profileName     = L"Your Profile Name";
const wchar_t* appName         = L"YourGame.exe";
const wchar_t* appFriendlyName = L"Your Game Casual Name";
const bool     threadedOptimization = false;

void CheckError(NvAPI_Status status)
{
    if (status == NVAPI_OK)
        return;

    NvAPI_ShortString szDesc = {0};
    NvAPI_GetErrorMessage(status, szDesc);
    printf("NVAPI error: %s\n", szDesc);
    exit(-1);
}

void SetNVUstring(NvAPI_UnicodeString& nvStr, const wchar_t* wcStr)
{
    for (int i = 0; i < NVAPI_UNICODE_STRING_MAX; i++)
        nvStr[i] = 0;

    int i = 0;
    while (wcStr[i] != 0)
    {
        nvStr[i] = wcStr[i];
        i++;
    }
}

int main(int argc, char* argv[])
{
    NvAPI_Status status;
    NvDRSSessionHandle hSession;

    status = NvAPI_Initialize();
    CheckError(status);

    status = NvAPI_DRS_CreateSession(&hSession);
    CheckError(status);

    status = NvAPI_DRS_LoadSettings(hSession);
    CheckError(status);

    // Fill Profile Info
    NVDRS_PROFILE profileInfo;
    profileInfo.version = NVDRS_PROFILE_VER;
    profileInfo.isPredefined = 0;
    SetNVUstring(profileInfo.profileName, profileName);

    // Create Profile
    NvDRSProfileHandle hProfile;
    status = NvAPI_DRS_CreateProfile(hSession, &profileInfo, &hProfile);
    CheckError(status);

    // Fill Application Info
    NVDRS_APPLICATION app;
    app.version = NVDRS_APPLICATION_VER_V1;
    app.isPredefined = 0;
    SetNVUstring(app.appName, appName);
    SetNVUstring(app.userFriendlyName, appFriendlyName);
    SetNVUstring(app.launcher, L"");
    SetNVUstring(app.fileInFolder, L"");

    // Create Application
    status = NvAPI_DRS_CreateApplication(hSession, hProfile, &app);
    CheckError(status);

    // Fill Setting Info
    NVDRS_SETTING setting;
    setting.version = NVDRS_SETTING_VER;
    setting.settingId = OGL_THREAD_CONTROL_ID;
    setting.settingType = NVDRS_DWORD_TYPE;
    setting.settingLocation = NVDRS_CURRENT_PROFILE_LOCATION;
    setting.isCurrentPredefined = 0;
    setting.isPredefinedValid = 0;
    setting.u32CurrentValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;
    setting.u32PredefinedValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;

    // Set Setting
    status = NvAPI_DRS_SetSetting(hSession, hProfile, &setting);
    CheckError(status);

    // Apply (or save) our changes to the system
    status = NvAPI_DRS_SaveSettings(hSession);
    CheckError(status);

    printf("Success.\n");

    NvAPI_DRS_DestroySession(hSession);
    return 0;
}
Thanks to subGlitch's answer first; based on that proposal, here is a safer version, which enables you to cache and change the threaded optimization setting, then restore it afterward.
The code is as below:
#include <stdlib.h>
#include <stdio.h>
#include <nvapi.h>
#include <NvApiDriverSettings.h>

enum NvThreadOptimization {
    NV_THREAD_OPTIMIZATION_AUTO       = 0,
    NV_THREAD_OPTIMIZATION_ENABLE     = 1,
    NV_THREAD_OPTIMIZATION_DISABLE    = 2,
    NV_THREAD_OPTIMIZATION_NO_SUPPORT = 3
};

bool NvAPI_OK_Verify(NvAPI_Status status)
{
    if (status == NVAPI_OK)
        return true;

    NvAPI_ShortString szDesc = {0};
    NvAPI_GetErrorMessage(status, szDesc);
    printf("NVAPI error: %s\n", szDesc);
    return false;
}

NvThreadOptimization GetNVidiaThreadOptimization()
{
    NvAPI_Status status;
    NvDRSSessionHandle hSession;
    NvThreadOptimization threadOptimization = NV_THREAD_OPTIMIZATION_NO_SUPPORT;

    status = NvAPI_Initialize();
    if (!NvAPI_OK_Verify(status))
        return threadOptimization;

    status = NvAPI_DRS_CreateSession(&hSession);
    if (!NvAPI_OK_Verify(status))
        return threadOptimization;

    status = NvAPI_DRS_LoadSettings(hSession);
    if (!NvAPI_OK_Verify(status))
    {
        NvAPI_DRS_DestroySession(hSession);
        return threadOptimization;
    }

    NvDRSProfileHandle hProfile;
    status = NvAPI_DRS_GetBaseProfile(hSession, &hProfile);
    if (!NvAPI_OK_Verify(status))
    {
        NvAPI_DRS_DestroySession(hSession);
        return threadOptimization;
    }

    NVDRS_SETTING originalSetting;
    originalSetting.version = NVDRS_SETTING_VER;
    status = NvAPI_DRS_GetSetting(hSession, hProfile, OGL_THREAD_CONTROL_ID, &originalSetting);
    if (NvAPI_OK_Verify(status))
    {
        threadOptimization = (NvThreadOptimization)originalSetting.u32CurrentValue;
    }

    NvAPI_DRS_DestroySession(hSession);
    return threadOptimization;
}

void SetNVidiaThreadOptimization(NvThreadOptimization threadedOptimization)
{
    NvAPI_Status status;
    NvDRSSessionHandle hSession;

    if (threadedOptimization == NV_THREAD_OPTIMIZATION_NO_SUPPORT)
        return;

    status = NvAPI_Initialize();
    if (!NvAPI_OK_Verify(status))
        return;

    status = NvAPI_DRS_CreateSession(&hSession);
    if (!NvAPI_OK_Verify(status))
        return;

    status = NvAPI_DRS_LoadSettings(hSession);
    if (!NvAPI_OK_Verify(status))
    {
        NvAPI_DRS_DestroySession(hSession);
        return;
    }

    NvDRSProfileHandle hProfile;
    status = NvAPI_DRS_GetBaseProfile(hSession, &hProfile);
    if (!NvAPI_OK_Verify(status))
    {
        NvAPI_DRS_DestroySession(hSession);
        return;
    }

    NVDRS_SETTING setting;
    setting.version = NVDRS_SETTING_VER;
    setting.settingId = OGL_THREAD_CONTROL_ID;
    setting.settingType = NVDRS_DWORD_TYPE;
    setting.u32CurrentValue = (EValues_OGL_THREAD_CONTROL)threadedOptimization;

    status = NvAPI_DRS_SetSetting(hSession, hProfile, &setting);
    if (!NvAPI_OK_Verify(status))
    {
        NvAPI_DRS_DestroySession(hSession);
        return;
    }

    status = NvAPI_DRS_SaveSettings(hSession);
    NvAPI_OK_Verify(status);

    NvAPI_DRS_DestroySession(hSession);
}
Based on the two interfaces (Get/Set) above, you can save the original setting and restore it when your application exits. That way, disabling threaded optimization impacts only your own application. (A small RAII wrapper for this follows the snippet below.)
static NvThreadOptimization s_OriginalNVidiaThreadOptimization = NV_THREAD_OPTIMIZATION_NO_SUPPORT;

// Set
s_OriginalNVidiaThreadOptimization = GetNVidiaThreadOptimization();
if (s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_NO_SUPPORT
    && s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_DISABLE)
{
    SetNVidiaThreadOptimization(NV_THREAD_OPTIMIZATION_DISABLE);
}

// Restore
if (s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_NO_SUPPORT
    && s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_DISABLE)
{
    SetNVidiaThreadOptimization(s_OriginalNVidiaThreadOptimization);
}
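If you want the restore to run even on early returns or exceptions, the pair can be wrapped in a small RAII helper (a sketch on top of the Get/Set functions above; note it still won't run if the process is killed outright):

struct ScopedNvThreadOptimization {
    NvThreadOptimization original;

    ScopedNvThreadOptimization()
        : original(GetNVidiaThreadOptimization()) {
        if (original != NV_THREAD_OPTIMIZATION_NO_SUPPORT &&
            original != NV_THREAD_OPTIMIZATION_DISABLE)
            SetNVidiaThreadOptimization(NV_THREAD_OPTIMIZATION_DISABLE);
    }

    ~ScopedNvThreadOptimization() {
        if (original != NV_THREAD_OPTIMIZATION_NO_SUPPORT &&
            original != NV_THREAD_OPTIMIZATION_DISABLE)
            SetNVidiaThreadOptimization(original);
    }
};

Instantiating one of these at the top of main() then covers both the set and the restore.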
Hate to state the obvious but I feel like it needs to be said.
Threaded optimization is notorious for causing stuttering in many games, even those that take advantage of multithreading. Unless your application works well with the threaded optimization setting, the only logical answer is to tell your users to disable it. If users are stubborn and don't want to do that, that's their fault.
The only bug in recent memory I can think of is that older versions of the nvidia driver caused applications w/ threaded optimization running in Wine to crash, but that's unrelated to the stuttering issue you describe.
Building off of subGlitch's answer, the following checks whether an application profile already exists and, if so, updates the existing profile instead of creating a new one. It is also encapsulated into a callable function that bypasses the logic if the NVIDIA API is not found on the system (AMD/Intel users), or if an issue is encountered that prevents modifying the profile:
#include <iostream>
#include <nvapi.h>
#include <NvApiDriverSettings.h>

const wchar_t* profileName     = L"Application for testing nvidia api";
const wchar_t* appName         = L"nvapi.exe";
const wchar_t* appFriendlyName = L"Nvidia api test";
const bool     threadedOptimization = false;

bool nvapiStatusOk(NvAPI_Status status)
{
    if (status != NVAPI_OK)
    {
        // will need to not print these in prod, just return false
        // full list of codes in nvapi_lite_common.h line 249
        std::cout << "Status Code:" << status << std::endl;
        NvAPI_ShortString szDesc = { 0 };
        NvAPI_GetErrorMessage(status, szDesc);
        printf("NVAPI Error: %s\n", szDesc);
        return false;
    }
    return true;
}

void setNVUstring(NvAPI_UnicodeString& nvStr, const wchar_t* wcStr)
{
    for (int i = 0; i < NVAPI_UNICODE_STRING_MAX; i++)
        nvStr[i] = 0;

    int i = 0;
    while (wcStr[i] != 0)
    {
        nvStr[i] = wcStr[i];
        i++;
    }
}

void initNvidiaApplicationProfile()
{
    NvAPI_Status status;

    // if status does not equal NVAPI_OK (0) after initialization,
    // either the system does not use an nvidia gpu, or something went
    // so wrong that we're unable to use the nvidia api...therefore do nothing
    /*
    if (!nvapiStatusOk(NvAPI_Initialize()))
        return;
    */
    // for debugging; use ^ in prod
    if (!nvapiStatusOk(NvAPI_Initialize()))
    {
        std::cout << "Unable to initialize Nvidia api" << std::endl;
        return;
    }
    else
    {
        std::cout << "Nvidia api initialized successfully" << std::endl;
    }

    // initialize session
    NvDRSSessionHandle hSession;
    if (!nvapiStatusOk(NvAPI_DRS_CreateSession(&hSession)))
        return;

    // load settings
    if (!nvapiStatusOk(NvAPI_DRS_LoadSettings(hSession)))
        return;

    // check if application already exists
    NvDRSProfileHandle hProfile;
    NvAPI_UnicodeString nvAppName;
    setNVUstring(nvAppName, appName);
    NVDRS_APPLICATION app;
    app.version = NVDRS_APPLICATION_VER_V1;

    // documentation states this will return ::NVAPI_APPLICATION_NOT_FOUND, however I cannot
    // find where that is defined anywhere in the headers...so not sure what's going to happen with this?
    //
    // This is returning NVAPI_EXECUTABLE_NOT_FOUND, which might be what it's supposed to return when it can't
    // find an existing application, and the documentation is just outdated?
    status = NvAPI_DRS_FindApplicationByName(hSession, nvAppName, &hProfile, &app);
    if (!nvapiStatusOk(status))
    {
        // if status does not equal NVAPI_EXECUTABLE_NOT_FOUND, then something bad happened and we should not proceed
        if (status != NVAPI_EXECUTABLE_NOT_FOUND)
        {
            NvAPI_Unload();
            return;
        }

        // create application as it does not already exist

        // Fill Profile Info
        NVDRS_PROFILE profileInfo;
        profileInfo.version = NVDRS_PROFILE_VER;
        profileInfo.isPredefined = 0;
        setNVUstring(profileInfo.profileName, profileName);

        // Create Profile
        if (!nvapiStatusOk(NvAPI_DRS_CreateProfile(hSession, &profileInfo, &hProfile)))
        {
            NvAPI_Unload();
            return;
        }

        // Fill Application Info; can't re-use the app variable for some reason
        NVDRS_APPLICATION app2;
        app2.version = NVDRS_APPLICATION_VER_V1;
        app2.isPredefined = 0;
        setNVUstring(app2.appName, appName);
        setNVUstring(app2.userFriendlyName, appFriendlyName);
        setNVUstring(app2.launcher, L"");
        setNVUstring(app2.fileInFolder, L"");

        // Create Application
        if (!nvapiStatusOk(NvAPI_DRS_CreateApplication(hSession, hProfile, &app2)))
        {
            NvAPI_Unload();
            return;
        }
    }

    // update profile settings
    NVDRS_SETTING setting;
    setting.version = NVDRS_SETTING_VER;
    setting.settingId = OGL_THREAD_CONTROL_ID;
    setting.settingType = NVDRS_DWORD_TYPE;
    setting.settingLocation = NVDRS_CURRENT_PROFILE_LOCATION;
    setting.isCurrentPredefined = 0;
    setting.isPredefinedValid = 0;
    setting.u32CurrentValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;
    setting.u32PredefinedValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;

    // apply the setting
    if (!nvapiStatusOk(NvAPI_DRS_SetSetting(hSession, hProfile, &setting)))
    {
        NvAPI_Unload();
        return;
    }

    // save changes
    if (!nvapiStatusOk(NvAPI_DRS_SaveSettings(hSession)))
    {
        NvAPI_Unload();
        return;
    }

    // disable in prod
    std::cout << "Nvidia application profile updated successfully" << std::endl;

    NvAPI_DRS_DestroySession(hSession);

    // unload the api as we're done with it
    NvAPI_Unload();
}

int main()
{
    // if building for anything other than windows, we'll need to not call this AND have
    // some preprocessor logic to not include any of the api code. No linux love apparently...so
    // that's going to be a thing we'll have to figure out down the road -_-
    initNvidiaApplicationProfile();

    std::cin.get();
    return 0;
}

How do I read the pressure value from a graphics tablet stylus in Linux?

I am trying to add pressure sensitivity support to Synergy on Linux. I believe the first step should be to detect the pressure value on the server side. The stylus movement comes in as a MotionNotify event when XNextEvent is called. However, this line does not output a pressure value when the stylus is used:
case MotionNotify:
    XDeviceMotionEvent* motionEvent = reinterpret_cast<XDeviceMotionEvent*>(xevent);
    LOG((CLOG_INFO "tablet event: pressure=%d", motionEvent->axis_data[2]));
To solve this, I guessed that I might not be "subscribed" to such info, so following some examples I found on the web, I attempted to open the Wacom device:
void
CXWindowsScreen::openWacom()
{
    // init tablet (e.g. wacom)
    int deviceCount;
    XDeviceInfo* deviceInfo = XListInputDevices(m_display, &deviceCount);
    for (int i = 0; i < deviceCount; ++i) {
        if (CString(deviceInfo[i].name).find("stylus") != CString::npos) {
            LOG((CLOG_INFO "tablet device: name='%s', id=%d",
                deviceInfo[i].name, deviceInfo[i].id));
            XDevice* tabletStylusDevice = XOpenDevice(m_display, deviceInfo[i].id);
            if (tabletStylusDevice == NULL) {
                LOG((CLOG_ERR "failed to open tablet device"));
                return;
            }
            XEventClass eventClass;
            DeviceMotionNotify(tabletStylusDevice, m_tabletMotionEvent, eventClass);
            XSelectExtensionEvent(m_display, m_window, &eventClass, 1);
            LOG((CLOG_INFO "tablet motion event=%d class=%d",
                m_tabletMotionEvent, eventClass));
        }
    }
    XFreeDeviceList(deviceInfo);
}
This does indeed detect a Wacom device...
2012-01-30T11:15:59 INFO: tablet device: name='Wacom Intuos4 6x9 stylus', id=8
/home/nick/Projects/synergy/1.4/src/lib/platform/CXWindowsScreen.cpp,1144
2012-01-30T11:15:59 INFO: tablet motion event=105 class=2153
/home/nick/Projects/synergy/1.4/src/lib/platform/CXWindowsScreen.cpp,1157
But I have so far never received the stylus event 105 (m_tabletMotionEvent) from XNextEvent...
if (xevent->type == m_tabletMotionEvent) {
    XDeviceMotionEvent* motionEvent = reinterpret_cast<XDeviceMotionEvent*>(xevent);
    LOG((CLOG_INFO "tablet event: pressure=%d", motionEvent->axis_data[2]));
    return;
}
In other words, the above if never evaluates to true.
I hope someone can help me with this; I've been trying to solve it for weeks.
The next challenge will be to fake the pressure level on the Synergy client so that programs like GIMP will receive it (and I'm not even sure where to start with that).
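One avenue I am considering (untested, so treat this as a sketch): the newer XInput2 extension delivers per-device motion events with valuator data through GenericEvent cookies, which sidesteps the old extension-event plumbing above. I am assuming the pressure axis is valuator 2, which XIQueryDevice can confirm for a given stylus:

#include <cstdio>
#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

// Ask for XI_Motion events on the window. XIQueryVersion should be called
// once beforehand to negotiate XInput 2.x with the server.
void selectStylusMotion(Display* display, Window window)
{
    unsigned char bits[XIMaskLen(XI_LASTEVENT)] = {0};
    XIEventMask mask;
    mask.deviceid = XIAllDevices; // or the stylus device id found above
    mask.mask_len = sizeof(bits);
    mask.mask = bits;
    XISetMask(bits, XI_Motion);
    XISelectEvents(display, window, &mask, 1);
}

// Called from the XNextEvent loop. xiOpcode comes from
// XQueryExtension(display, "XInputExtension", &xiOpcode, ...).
void handleEvent(Display* display, XEvent* xevent, int xiOpcode)
{
    if (xevent->type != GenericEvent || xevent->xcookie.extension != xiOpcode)
        return;
    if (XGetEventData(display, &xevent->xcookie)) {
        if (xevent->xcookie.evtype == XI_Motion) {
            XIDeviceEvent* ev = reinterpret_cast<XIDeviceEvent*>(xevent->xcookie.data);
            // Valuators are sparse: only axes flagged in the mask appear in values[].
            const double* value = ev->valuators.values;
            for (int i = 0; i < ev->valuators.mask_len * 8; ++i) {
                if (XIMaskIsSet(ev->valuators.mask, i)) {
                    if (i == 2) // pressure axis on most Wacom styluses
                        printf("tablet event: pressure=%f\n", *value);
                    ++value;
                }
            }
        }
        XFreeEventData(display, &xevent->xcookie);
    }
}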