How to identify top-level X11 windows using xlib? - c++

I'm trying to get a list of all top-level desktop windows in an X11 session. Basically, I want a list of all the windows that are shown in the window manager's application-switching UI (commonly opened when the user presses Alt+Tab).
I've never done any X11 programming before, but so far I've managed to enumerate through the entire window list, with code that looks something like this:
void CSoftwareInfoLinux::enumerateWindows(Display *display, Window rootWindow)
{
    Window root;
    Window parent;
    Window *children;
    unsigned int nNumChildren;
    XTextProperty wmName;

    int status = XGetWMName(display, rootWindow, &wmName);
    if (status && wmName.value && wmName.nitems)
    {
        int i;
        char **list;

        status = XmbTextPropertyToTextList(display, &wmName, &list, &i);
        if (status >= Success && i && *list)
        {
            qDebug() << "Found window with name:" << (char*) *list;
        }

        status = XGetCommand(display, rootWindow, &list, &i);
        if (status >= Success && i && *list)
        {
            qDebug() << "... and Command:" << i << (char*) *list;
        }

        Window tf;
        status = XGetTransientForHint(display, rootWindow, &tf);
        if (status >= Success && tf)
        {
            qDebug() << "TF set!";
        }

        XWMHints *pHints = XGetWMHints(display, rootWindow);
        if (pHints)
        {
            qDebug() << "Flags:" << pHints->flags
                     << "Window group:" << pHints->window_group;
        }
    }

    status = XQueryTree(display, rootWindow, &root, &parent, &children, &nNumChildren);
    if (status == 0)
    {
        // Could not query the window tree any further, aborting
        return;
    }

    for (unsigned int i = 0; i < nNumChildren; i++)
    {
        enumerateWindows(display, children[i]);
    }

    // free the children list (if any) before returning
    if (children)
        XFree((char*) children);
}
enumerateWindows() is called initially with the root window.
This works, insofar as it prints out information about hundreds of windows. What I need is to work out which property I can interrogate to determine whether a given Window is a top-level desktop application window (I'm not sure what the official terminology is) or not.
Can anyone shed some light on this? All the reference documentation I've found for X11 programming has been terribly dry and hard to understand. Perhaps someone could point me to a better resource?

I have a solution!
Well, sort of.
If your window manager uses the Extended Window Manager Hints (EWMH), you can query the root window for the "_NET_CLIENT_LIST" atom. This returns a list of the client windows the window manager is managing. For more information, see here.
However, there are some issues with this. For a start, the window manager in use must support the EWMH. KDE and GNOME do, and I'm sure some others do as well, but there are surely many that don't. Also, I've noticed a few issues with KDE: some non-KDE applications don't get included in the list. For example, if you run xcalc under KDE, it won't show up in the list.
If anyone can provide any improvements on this method, I'd be glad to hear them. For reference, the code I'm using is listed below:
Atom a = XInternAtom(m_pDisplay, "_NET_CLIENT_LIST", True);
Atom actualType;
int format;
unsigned long numItems, bytesAfter;
unsigned char *data = 0;

int status = XGetWindowProperty(m_pDisplay,
                                rootWindow,
                                a,
                                0L,
                                (~0L),
                                False,
                                AnyPropertyType,
                                &actualType,
                                &format,
                                &numItems,
                                &bytesAfter,
                                &data);
if (status >= Success && numItems)
{
    // success - we have data: Format should always be 32:
    Q_ASSERT(format == 32);

    // format-32 property data is returned as an array of long,
    // even on 64-bit platforms, so don't cast to a 32-bit type:
    long *array = (long*) data;
    for (unsigned long k = 0; k < numItems; k++)
    {
        // get window Id:
        Window w = (Window) array[k];
        qDebug() << "Scanned client window:" << w;
    }
    XFree(data);
}

To expand on the previous solution, if you want to then get the window names:
// get window Id:
Window w = (Window) array[k];

char *name = NULL;
status = XFetchName(display, w, &name);
if (status >= Success)
{
    if (name == NULL)
        printf("Found: %lu NULL\n", w);
    else
        printf("Found: %lu %s\n", w, name);
}
if (name)
    XFree(name);
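One caveat: XFetchName() only reads the legacy WM_NAME property, and some toolkits publish the window title only in the UTF-8 _NET_WM_NAME property. A hedged sketch of reading that instead (error handling elided):

Atom nameAtom = XInternAtom(display, "_NET_WM_NAME", False);
Atom utf8     = XInternAtom(display, "UTF8_STRING", False);
Atom actualType;
int fmt;
unsigned long n, remaining;
unsigned char *title = NULL;

if (XGetWindowProperty(display, w, nameAtom, 0, (~0L), False, utf8,
                       &actualType, &fmt, &n, &remaining, &title) == Success
    && title)
{
    printf("Found: %lu %s\n", w, title);
    XFree(title);
}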

If you don't have to use Xlib, GDK's gdk_screen_get_window_stack() and gdk_window_get_window_type() may meet your needs.
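A minimal sketch of that approach, assuming GDK 3 on X11 (build flags via pkg-config gdk-3.0; on X11 the stack is read from the _NET_CLIENT_LIST_STACKING property, so this also requires an EWMH-compliant window manager):

#include <gdk/gdk.h>
#include <gdk/gdkx.h>
#include <cstdio>

int main(int argc, char *argv[])
{
    gdk_init(&argc, &argv);
    GdkScreen *screen = gdk_screen_get_default();

    // Bottom-to-top list of the window manager's toplevel windows.
    GList *stack = gdk_screen_get_window_stack(screen);
    for (GList *l = stack; l != NULL; l = l->next)
    {
        GdkWindow *w = GDK_WINDOW(l->data);
        printf("XID: 0x%lx type: %d\n",
               (unsigned long) gdk_x11_window_get_xid(w),
               (int) gdk_window_get_window_type(w));
        g_object_unref(w); // the returned list holds a reference per window
    }
    g_list_free(stack);
    return 0;
}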

Related

Why is if (fork() == 0) { getpid() } and a popen() process returning the same process id?

I want to know why the two process IDs match, when to my knowledge the getpid() inside the fork() is supposed to be from a different process than the one produced by popen().
I was informed that my code only works because of what I interpret might be a bug in Ubuntu-based distros such as Xubuntu, Lubuntu, and KDE neon (the distros I've tested so far). You can easily compile and test the code from here: https://github.com/time-killer-games/XTransientFor (ignore the x64 binary available at that link if you don't trust it). Arch, RedHat, etc. testers are especially welcome to give feedback if this happens to not work for them.
Here's a much more minimal approach to demonstrating the issue:
// USAGE: xprocesstest [command]
#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <X11/Xutil.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>
#include <thread>
#include <chrono>
#include <iostream>
#include <string>
using std::string;

static inline Window XGetActiveWindow(Display *display) {
  unsigned long window;
  unsigned char *prop;
  Atom actual_type, filter_atom;
  int actual_format, status;
  unsigned long nitems, bytes_after;

  int screen = XDefaultScreen(display);
  window = RootWindow(display, screen);
  filter_atom = XInternAtom(display, "_NET_ACTIVE_WINDOW", True);
  status = XGetWindowProperty(display, window, filter_atom, 0, 1000, False, AnyPropertyType,
                              &actual_type, &actual_format, &nitems, &bytes_after, &prop);
  unsigned long long_property = prop[0] + (prop[1] << 8) + (prop[2] << 16) + (prop[3] << 24);
  XFree(prop);
  return (Window)long_property;
}

static inline pid_t XGetActiveProcessId(Display *display) {
  unsigned long window = XGetActiveWindow(display);
  unsigned char *prop;
  Atom actual_type, filter_atom;
  int actual_format, status;
  unsigned long nitems, bytes_after;

  filter_atom = XInternAtom(display, "_NET_WM_PID", True);
  status = XGetWindowProperty(display, window, filter_atom, 0, 1000, False, AnyPropertyType,
                              &actual_type, &actual_format, &nitems, &bytes_after, &prop);
  unsigned long long_property = prop[0] + (prop[1] << 8) + (prop[2] << 16) + (prop[3] << 24);
  XFree(prop);
  return (pid_t)(long_property - 1);
}

int main(int argc, const char **argv) {
  if (argc == 2) {
    char *buffer = NULL;
    size_t buffer_size = 0;
    string str_buffer;
    FILE *file = popen(argv[1], "r");
    if (fork() == 0) {
      Display *display = XOpenDisplay(NULL);
      Window window = 0;
      unsigned i = 0;
      while (i < 10) {
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
        if (XGetActiveProcessId(display) == getpid()) {
          window = XGetActiveWindow(display);
          break;
        }
        i++;
      }
      if (window == XGetActiveWindow(display))
        std::cout << "process id's match!" << std::endl;
      else std::cout << "process id's don't match!" << std::endl;
      XCloseDisplay(display);
      exit(0);
    }
    while (getline(&buffer, &buffer_size, file) != -1)
      str_buffer += buffer;
    std::cout << str_buffer;
    free(buffer);
    pclose(file);
  }
}
Compile with:
cd "${0%/*}"
g++ -c -std=c++17 "xprocesstest.cpp" -fPIC -m64
g++ "xprocesstest.o" -o "xprocesstest" -fPIC -lX11
Run with:
cd "${0%/*}"
./xprocesstest "kdialog --getopenfilename"
You can replace the command in the quotes with any executable which sets the _NET_WM_PID atom.
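To check whether a given program sets that atom, you can run xprop with the property name (it will prompt you to click a window):

xprop _NET_WM_PID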
@thatotherguy explains the answer in his comment:
"Are you aware that due to your - 1 you're actually checking that the two processes have sequential pids? This is not surprising behavior on Linux. Differences between distros would be down to whether sh optimizes out the extra fork it uses to run the command"
static inline pid_t XGetActiveProcessId(Display *display) {
  unsigned long window = XGetActiveWindow(display);
  unsigned char *prop;
  Atom actual_type, filter_atom;
  int actual_format, status;
  unsigned long nitems, bytes_after;

  filter_atom = XInternAtom(display, "_NET_WM_PID", True);
  status = XGetWindowProperty(display, window, filter_atom, 0, 1000, False, AnyPropertyType,
                              &actual_type, &actual_format, &nitems, &bytes_after, &prop);
  unsigned long long_property = prop[0] + (prop[1] << 8) + (prop[2] << 16) + (prop[3] << 24);
  XFree(prop);
  return (pid_t)(long_property - 1);
}
I initially added that - 1 to the returned process ID because at the time I thought the function was returning the wrong process ID: I assumed the fork() was supposed to have the same process ID as the popen() child, which I later discovered wasn't the case. Subtracting one made two different, otherwise correct, process IDs incorrectly equal.
Here's the correct way to do what I intended in my original code, which led me to ask this question: I wanted to detect whether the fork() and popen() child processes stem from a common parent process (with the subtraction of one removed from the return of XGetActiveProcessId()):
#include <proc/readproc.h>
#include <cstring>
#include <string>
using std::string;

static inline pid_t GetParentPidFromPid(pid_t pid) {
  proc_t proc_info;
  pid_t ppid;
  memset(&proc_info, 0, sizeof(proc_info));
  PROCTAB *pt_ptr = openproc(PROC_FILLSTATUS | PROC_PID, &pid);
  if (readproc(pt_ptr, &proc_info) != 0) {
    ppid = proc_info.ppid;
    string cmd = proc_info.cmd;
    // skip the intermediate "sh" that popen() may spawn
    if (cmd == "sh")
      ppid = GetParentPidFromPid(ppid);
  } else ppid = 0;
  closeproc(pt_ptr);
  return ppid;
}
Using the above helper function and replacing the while loop in the original code with the following allows me to do what I was after:
while (i < 10) {
  std::this_thread::sleep_for(std::chrono::milliseconds(200));
  if (GetParentPidFromPid(XGetActiveProcessId(display)) == GetParentPidFromPid(getpid()) ||
      GetParentPidFromPid(GetParentPidFromPid(XGetActiveProcessId(display))) == GetParentPidFromPid(getppid())) {
    window = XGetActiveWindow(display);
    break;
  }
  i++;
}
As @thatotherguy also pointed out, on some distros sh will exec the command directly instead of keeping an intermediate process around, so the parent chain differs. To handle this, the if statement checks whether either the parent or the "grandparent" process IDs are equal, skipping any intermediate process whose cmd is sh.
The helper function needs the -lprocps linker flag, and the libprocps-dev package installed if you are on a Debian-based system. The package name will be different on other distros.
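For reference, a build line on a Debian-based system might look like this (file name taken from the example above):

g++ -std=c++17 xprocesstest.cpp -o xprocesstest -lX11 -lprocps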

Nvidia graphics driver causing noticeable frame stuttering

OK, I've been researching this issue for a few days now, so let me go over what I know so far, which leads me to believe this might be an issue with Nvidia's driver and not my code.
Basically, my game starts stuttering after running a few seconds (random frames take 70ms instead of 16ms, in a fairly regular pattern). This ONLY happens if a setting called "Threaded Optimization" is enabled in the Nvidia control panel (latest drivers, Windows 10). Unfortunately, this setting is enabled by default, and I'd rather not have people tweak their settings to get an enjoyable experience.
The game is not CPU or GPU intensive (2ms a frame without vsync on). It's not calling any OpenGL functions that need to synchronize data, and it's not streaming any buffers or reading data back from the GPU or anything. About the simplest possible renderer.
The problem was always there; it only started becoming noticeable when I added fmod for audio. fmod is not the cause of this (more later in the post).
Trying to debug the problem with Nvidia Nsight made the problem go away: "Start Collecting Data" instantly causes the stuttering to stop. No dice there.
In the profiler, a lot of CPU time is spent in "nvoglv32.dll". This thread only spawns if Threaded Optimization is on. I suspected a synchronization issue, so I debugged with the Visual Studio Concurrency Visualizer.
A-HA!
Investigating these blocks of CPU time on the Nvidia thread, the earliest named function I can get in their call stack is "CreateToolhelp32Snapshot", followed by a lot of time spent in Thread32Next. I noticed Thread32Next in the profiler when looking at CPU times earlier, so it does seem like I'm on the right track.
So it looks like the Nvidia driver is periodically grabbing a snapshot of the whole process for some reason. What could possibly be the reason, why is it doing this, and how do I stop it?
This also explains why the problem started becoming noticeable once I added fmod: the driver grabs info for all the process's threads, and fmod spawns a lot of threads.
Any help? Is this just a bug in Nvidia's driver, or is there something I can do to fix it other than telling people to disable Threaded "Optimization"?
edit 1: The same issue occurs with current Nvidia drivers on my laptop too, so I'm not crazy.
edit 2: The same issue occurs on version 362 (the previous major version) of Nvidia's driver.
"... or is there something I can do to fix it other than telling people to disable Threaded 'Optimization'?"
Yes.
You can create a custom "Application Profile" for your game using NVAPI and disable the "Threaded Optimization" setting in it.
There is a .PDF file on the NVIDIA site with help and code examples regarding NVAPI usage.
In order to see and manage all your NVIDIA profiles, I recommend NVIDIA Inspector; it is more convenient than the default NVIDIA Control Panel.
Also, here is my code example which creates "Application Profile" with "Threaded Optimization" disabled:
#include <stdlib.h>
#include <stdio.h>
#include <nvapi.h>
#include <NvApiDriverSettings.h>

const wchar_t* profileName     = L"Your Profile Name";
const wchar_t* appName         = L"YourGame.exe";
const wchar_t* appFriendlyName = L"Your Game Casual Name";
const bool     threadedOptimization = false;

void CheckError(NvAPI_Status status)
{
    if (status == NVAPI_OK)
        return;

    NvAPI_ShortString szDesc = {0};
    NvAPI_GetErrorMessage(status, szDesc);
    printf("NVAPI error: %s\n", szDesc);
    exit(-1);
}

void SetNVUstring(NvAPI_UnicodeString& nvStr, const wchar_t* wcStr)
{
    for (int i = 0; i < NVAPI_UNICODE_STRING_MAX; i++)
        nvStr[i] = 0;

    int i = 0;
    while (wcStr[i] != 0)
    {
        nvStr[i] = wcStr[i];
        i++;
    }
}

int main(int argc, char* argv[])
{
    NvAPI_Status status;
    NvDRSSessionHandle hSession;

    status = NvAPI_Initialize();
    CheckError(status);

    status = NvAPI_DRS_CreateSession(&hSession);
    CheckError(status);

    status = NvAPI_DRS_LoadSettings(hSession);
    CheckError(status);

    // Fill Profile Info
    NVDRS_PROFILE profileInfo;
    profileInfo.version = NVDRS_PROFILE_VER;
    profileInfo.isPredefined = 0;
    SetNVUstring(profileInfo.profileName, profileName);

    // Create Profile
    NvDRSProfileHandle hProfile;
    status = NvAPI_DRS_CreateProfile(hSession, &profileInfo, &hProfile);
    CheckError(status);

    // Fill Application Info
    NVDRS_APPLICATION app;
    app.version = NVDRS_APPLICATION_VER_V1;
    app.isPredefined = 0;
    SetNVUstring(app.appName, appName);
    SetNVUstring(app.userFriendlyName, appFriendlyName);
    SetNVUstring(app.launcher, L"");
    SetNVUstring(app.fileInFolder, L"");

    // Create Application
    status = NvAPI_DRS_CreateApplication(hSession, hProfile, &app);
    CheckError(status);

    // Fill Setting Info
    NVDRS_SETTING setting;
    setting.version = NVDRS_SETTING_VER;
    setting.settingId = OGL_THREAD_CONTROL_ID;
    setting.settingType = NVDRS_DWORD_TYPE;
    setting.settingLocation = NVDRS_CURRENT_PROFILE_LOCATION;
    setting.isCurrentPredefined = 0;
    setting.isPredefinedValid = 0;
    setting.u32CurrentValue    = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;
    setting.u32PredefinedValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;

    // Set Setting
    status = NvAPI_DRS_SetSetting(hSession, hProfile, &setting);
    CheckError(status);

    // Apply (or save) our changes to the system
    status = NvAPI_DRS_SaveSettings(hSession);
    CheckError(status);

    printf("Success.\n");

    NvAPI_DRS_DestroySession(hSession);
    return 0;
}
Thanks to subGlitch for his answer. Based on that proposal, here is a safer version, which enables you to cache the current threaded-optimization setting, change it, and restore it afterward.
The code is below:
#include <stdlib.h>
#include <stdio.h>
#include <nvapi.h>
#include <NvApiDriverSettings.h>

enum NvThreadOptimization {
    NV_THREAD_OPTIMIZATION_AUTO       = 0,
    NV_THREAD_OPTIMIZATION_ENABLE     = 1,
    NV_THREAD_OPTIMIZATION_DISABLE    = 2,
    NV_THREAD_OPTIMIZATION_NO_SUPPORT = 3
};

bool NvAPI_OK_Verify(NvAPI_Status status)
{
    if (status == NVAPI_OK)
        return true;

    NvAPI_ShortString szDesc = {0};
    NvAPI_GetErrorMessage(status, szDesc);
    printf("NVAPI error: %s\n", szDesc);
    return false;
}

NvThreadOptimization GetNVidiaThreadOptimization()
{
    NvAPI_Status status;
    NvDRSSessionHandle hSession;
    NvThreadOptimization threadOptimization = NV_THREAD_OPTIMIZATION_NO_SUPPORT;

    status = NvAPI_Initialize();
    if (!NvAPI_OK_Verify(status))
        return threadOptimization;

    status = NvAPI_DRS_CreateSession(&hSession);
    if (!NvAPI_OK_Verify(status))
        return threadOptimization;

    status = NvAPI_DRS_LoadSettings(hSession);
    if (!NvAPI_OK_Verify(status))
    {
        NvAPI_DRS_DestroySession(hSession);
        return threadOptimization;
    }

    NvDRSProfileHandle hProfile;
    status = NvAPI_DRS_GetBaseProfile(hSession, &hProfile);
    if (!NvAPI_OK_Verify(status))
    {
        NvAPI_DRS_DestroySession(hSession);
        return threadOptimization;
    }

    NVDRS_SETTING originalSetting;
    originalSetting.version = NVDRS_SETTING_VER;
    status = NvAPI_DRS_GetSetting(hSession, hProfile, OGL_THREAD_CONTROL_ID, &originalSetting);
    if (NvAPI_OK_Verify(status))
    {
        threadOptimization = (NvThreadOptimization)originalSetting.u32CurrentValue;
    }

    NvAPI_DRS_DestroySession(hSession);
    return threadOptimization;
}

void SetNVidiaThreadOptimization(NvThreadOptimization threadedOptimization)
{
    NvAPI_Status status;
    NvDRSSessionHandle hSession;

    if (threadedOptimization == NV_THREAD_OPTIMIZATION_NO_SUPPORT)
        return;

    status = NvAPI_Initialize();
    if (!NvAPI_OK_Verify(status))
        return;

    status = NvAPI_DRS_CreateSession(&hSession);
    if (!NvAPI_OK_Verify(status))
        return;

    status = NvAPI_DRS_LoadSettings(hSession);
    if (!NvAPI_OK_Verify(status))
    {
        NvAPI_DRS_DestroySession(hSession);
        return;
    }

    NvDRSProfileHandle hProfile;
    status = NvAPI_DRS_GetBaseProfile(hSession, &hProfile);
    if (!NvAPI_OK_Verify(status))
    {
        NvAPI_DRS_DestroySession(hSession);
        return;
    }

    NVDRS_SETTING setting;
    setting.version         = NVDRS_SETTING_VER;
    setting.settingId       = OGL_THREAD_CONTROL_ID;
    setting.settingType     = NVDRS_DWORD_TYPE;
    setting.u32CurrentValue = (EValues_OGL_THREAD_CONTROL)threadedOptimization;

    status = NvAPI_DRS_SetSetting(hSession, hProfile, &setting);
    if (!NvAPI_OK_Verify(status))
    {
        NvAPI_DRS_DestroySession(hSession);
        return;
    }

    status = NvAPI_DRS_SaveSettings(hSession);
    NvAPI_OK_Verify(status);

    NvAPI_DRS_DestroySession(hSession);
}
Based on the two interfaces (Get/Set) above, you can save the original setting and restore it when your application exits. That way, disabling threaded optimization only affects your own application.
static NvThreadOptimization s_OriginalNVidiaThreadOptimization = NV_THREAD_OPTIMIZATION_NO_SUPPORT;

// Set
s_OriginalNVidiaThreadOptimization = GetNVidiaThreadOptimization();
if (   s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_NO_SUPPORT
    && s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_DISABLE)
{
    SetNVidiaThreadOptimization(NV_THREAD_OPTIMIZATION_DISABLE);
}

// Restore
if (   s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_NO_SUPPORT
    && s_OriginalNVidiaThreadOptimization != NV_THREAD_OPTIMIZATION_DISABLE)
{
    SetNVidiaThreadOptimization(s_OriginalNVidiaThreadOptimization);
}
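One possible refinement (a sketch building on the two functions above, not part of the original answer): wrap the save/disable/restore sequence in a small RAII guard so the restore cannot be skipped by an early return:

struct NvThreadOptimizationGuard
{
    NvThreadOptimization saved;

    NvThreadOptimizationGuard()
        : saved(GetNVidiaThreadOptimization())
    {
        if (   saved != NV_THREAD_OPTIMIZATION_NO_SUPPORT
            && saved != NV_THREAD_OPTIMIZATION_DISABLE)
            SetNVidiaThreadOptimization(NV_THREAD_OPTIMIZATION_DISABLE);
    }

    ~NvThreadOptimizationGuard()
    {
        if (   saved != NV_THREAD_OPTIMIZATION_NO_SUPPORT
            && saved != NV_THREAD_OPTIMIZATION_DISABLE)
            SetNVidiaThreadOptimization(saved);
    }
};

Instantiating one guard at the top of main() then covers every exit path (except abnormal termination, which no in-process scheme covers).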
Hate to state the obvious, but I feel like it needs to be said.
Threaded optimization is notorious for causing stuttering in many games, even those that take advantage of multithreading. Unless your application works well with threaded optimization enabled, the only logical answer is to tell your users to disable it. If users are stubborn and don't want to do that, that's their fault.
The only bug in recent memory I can think of is that older versions of the Nvidia driver caused applications with threaded optimization running in Wine to crash, but that's unrelated to the stuttering issue you describe.
Building off of @subGlitch's answer, the following checks whether an application profile already exists, and if so updates the existing profile instead of creating a new one. It is also encapsulated in a function which bypasses the logic if the NVIDIA API is not found on the system (AMD/Intel users), or if an issue is encountered that prevents modifying the profile:
#include <iostream>
#include <nvapi.h>
#include <NvApiDriverSettings.h>

const wchar_t* profileName     = L"Application for testing nvidia api";
const wchar_t* appName         = L"nvapi.exe";
const wchar_t* appFriendlyName = L"Nvidia api test";
const bool     threadedOptimization = false;

bool nvapiStatusOk(NvAPI_Status status)
{
    if (status != NVAPI_OK)
    {
        // will need to not print these in prod, just return false
        // full list of codes in nvapi_lite_common.h line 249
        std::cout << "Status Code:" << status << std::endl;
        NvAPI_ShortString szDesc = { 0 };
        NvAPI_GetErrorMessage(status, szDesc);
        printf("NVAPI Error: %s\n", szDesc);
        return false;
    }
    return true;
}

void setNVUstring(NvAPI_UnicodeString& nvStr, const wchar_t* wcStr)
{
    for (int i = 0; i < NVAPI_UNICODE_STRING_MAX; i++)
        nvStr[i] = 0;

    int i = 0;
    while (wcStr[i] != 0)
    {
        nvStr[i] = wcStr[i];
        i++;
    }
}

void initNvidiaApplicationProfile()
{
    NvAPI_Status status;

    // if status does not equal NVAPI_OK (0) after initialization,
    // either the system does not use an nvidia gpu, or something went
    // so wrong that we're unable to use the nvidia api...therefore do nothing
    /*
    if (!nvapiStatusOk(NvAPI_Initialize()))
        return;
    */

    // for debugging; use ^ in prod
    if (!nvapiStatusOk(NvAPI_Initialize()))
    {
        std::cout << "Unable to initialize Nvidia api" << std::endl;
        return;
    }
    else
    {
        std::cout << "Nvidia api initialized successfully" << std::endl;
    }

    // initialize session
    NvDRSSessionHandle hSession;
    if (!nvapiStatusOk(NvAPI_DRS_CreateSession(&hSession)))
        return;

    // load settings
    if (!nvapiStatusOk(NvAPI_DRS_LoadSettings(hSession)))
        return;

    // check if application already exists
    NvDRSProfileHandle hProfile;
    NvAPI_UnicodeString nvAppName;
    setNVUstring(nvAppName, appName);

    NVDRS_APPLICATION app;
    app.version = NVDRS_APPLICATION_VER_V1;

    // documentation states this will return ::NVAPI_APPLICATION_NOT_FOUND, however I cannot
    // find where that is defined anywhere in the headers...so not sure what's going to happen with this?
    //
    // This is returning NVAPI_EXECUTABLE_NOT_FOUND, which might be what it's supposed to return when it can't
    // find an existing application, and the documentation is just outdated?
    status = NvAPI_DRS_FindApplicationByName(hSession, nvAppName, &hProfile, &app);
    if (!nvapiStatusOk(status))
    {
        // if status does not equal NVAPI_EXECUTABLE_NOT_FOUND, then something bad happened and we should not proceed
        if (status != NVAPI_EXECUTABLE_NOT_FOUND)
        {
            NvAPI_Unload();
            return;
        }

        // create the application as it does not already exist

        // Fill Profile Info
        NVDRS_PROFILE profileInfo;
        profileInfo.version = NVDRS_PROFILE_VER;
        profileInfo.isPredefined = 0;
        setNVUstring(profileInfo.profileName, profileName);

        // Create Profile
        //NvDRSProfileHandle hProfile;
        if (!nvapiStatusOk(NvAPI_DRS_CreateProfile(hSession, &profileInfo, &hProfile)))
        {
            NvAPI_Unload();
            return;
        }

        // Fill Application Info; can't re-use app variable for some reason
        NVDRS_APPLICATION app2;
        app2.version = NVDRS_APPLICATION_VER_V1;
        app2.isPredefined = 0;
        setNVUstring(app2.appName, appName);
        setNVUstring(app2.userFriendlyName, appFriendlyName);
        setNVUstring(app2.launcher, L"");
        setNVUstring(app2.fileInFolder, L"");

        // Create Application
        if (!nvapiStatusOk(NvAPI_DRS_CreateApplication(hSession, hProfile, &app2)))
        {
            NvAPI_Unload();
            return;
        }
    }

    // update profile settings
    NVDRS_SETTING setting;
    setting.version = NVDRS_SETTING_VER;
    setting.settingId = OGL_THREAD_CONTROL_ID;
    setting.settingType = NVDRS_DWORD_TYPE;
    setting.settingLocation = NVDRS_CURRENT_PROFILE_LOCATION;
    setting.isCurrentPredefined = 0;
    setting.isPredefinedValid = 0;
    setting.u32CurrentValue    = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;
    setting.u32PredefinedValue = threadedOptimization ? OGL_THREAD_CONTROL_ENABLE : OGL_THREAD_CONTROL_DISABLE;

    // set the setting
    if (!nvapiStatusOk(NvAPI_DRS_SetSetting(hSession, hProfile, &setting)))
    {
        NvAPI_Unload();
        return;
    }

    // save changes
    if (!nvapiStatusOk(NvAPI_DRS_SaveSettings(hSession)))
    {
        NvAPI_Unload();
        return;
    }

    // disable in prod
    std::cout << "Nvidia application profile updated successfully" << std::endl;

    NvAPI_DRS_DestroySession(hSession);

    // unload the api as we're done with it
    NvAPI_Unload();
}

int main()
{
    // if building for anything other than windows, we'll need to not call this AND have
    // some preprocessor logic to not include any of the api code. No linux love apparently...so
    // that's going to be a thing we'll have to figure out down the road -_-
    initNvidiaApplicationProfile();

    std::cin.get();
    return 0;
}

Detect USB hardware keylogger

I need to determine whether a hardware keylogger (HKL) is plugged in between the PC and its USB keyboard. It needs to be done in software, from user land. The wiki says that detecting an HKL in software is impossible, yet several methods exist. The best, and I think only, overview of this theme on the net is "Detecting Hardware Keyloggers" by Fabian Mihailowitsch (available on YouTube).
Using this overview I am developing a tool to detect USB hardware keyloggers. The author has already shared his sources for detecting PS/2 keyloggers, available here. So my task is to make it work for USB only.
As suggested, I am using the libusb library to interact with the USB devices in the system.
These are the methods I have chosen to detect an HKL:
1. Find the USB keyboard bugged by the HKL. Note that the HKL is usually invisible in the system's device list and in the list returned by libusb.
2. Detect a Keyghost HKL by doing an interrupt read from the USB HID device, sending a USB reset (libusb_reset_device), then reading again. If the data returned by the last read is not all zeros, a keylogger is detected. This is described on page 45 of Mihailowitsch's presentation.
3. Time measurement. The idea is to measure the send/receive time of control-transfer packets to the original keyboard, thousands of times. Once an HKL has been plugged in, the program measures the time again and compares it with the original value; with an HKL in the path it should be noticeably greater. The algorithm is (a rough sketch of this loop follows the list):
- Send an output report to the keyboard (as a control transfer, HID_REPORT_TYPE_OUTPUT 0x02)
- Wait for the ACKed packet
- Repeat the loop (10,000 times)
- Measure the time
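As a rough illustration of step 3's loop (a sketch only; it assumes devh is an already opened and claimed libusb_device_handle for the keyboard, and it ignores transfer errors):

#include <chrono>
#include <libusb.h>   // or <libusb-1.0/libusb.h>, depending on the install

// Average duration of one HID SET_REPORT control transfer, in milliseconds.
double MeasureAverageTransferTime(libusb_device_handle *devh, int iterations)
{
    unsigned char report[1] = { 0 };
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; i++)
    {
        // bmRequestType 0x21 = host-to-device | class | interface,
        // bRequest 0x09 = SET_REPORT, wValue high byte 0x02 = Output report
        libusb_control_transfer(devh, 0x21, 0x09, (0x02 << 8) | 0x00, 0,
                                report, sizeof(report), 1000 /* ms timeout */);
    }
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count() / iterations;
}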
Below is my code for each of the detection steps.
1. Find USB keyboard
libusb_device * UsbKeyboard::GetSpecifiedDevice(PredicateType pred)
{
    if (_usbDevices == nullptr) return nullptr;

    int i = 0;
    libusb_device *dev = nullptr;
    while ((dev = _usbDevices[i++]) != NULL)
    {
        struct libusb_device_descriptor desc;
        int r = libusb_get_device_descriptor(dev, &desc);
        if (r >= 0)
        {
            if (pred(desc))
                return dev;
        }
    }
    return nullptr;
}

libusb_device * UsbKeyboard::FindKeyboard()
{
    return GetSpecifiedDevice([&](libusb_device_descriptor &desc) {
        bool isKeyboard = false;
        auto dev_handle = libusb_open_device_with_vid_pid(_context, desc.idVendor, desc.idProduct);
        if (dev_handle != nullptr)
        {
            unsigned char buf[255] = "";
            // the product description contains 'Keyboard'; usually the string is 'USB Keyboard'
            if (libusb_get_string_descriptor_ascii(dev_handle, desc.iProduct, buf, sizeof(buf)) >= 0)
                isKeyboard = strstr((char*)buf, "Keyboard") != nullptr;
            libusb_close(dev_handle);
        }
        return isKeyboard;
    });
}
Here we iterate through all the USB devices in the system and check their product strings. On my system this string for the keyboard is 'USB Keyboard' (obviously).
Is matching the product string a reliable way to detect a keyboard? Are there other ways?
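For what it's worth, one possible alternative (a sketch, untested against an HKL): rather than matching the product string, which is vendor-chosen free text, inspect the interface descriptors. Per the USB HID specification, a boot-protocol keyboard reports bInterfaceClass 3 (HID), bInterfaceSubClass 1 (boot) and bInterfaceProtocol 1 (keyboard):

bool IsBootKeyboard(libusb_device *dev)
{
    libusb_config_descriptor *cfg = nullptr;
    if (libusb_get_active_config_descriptor(dev, &cfg) != 0)
        return false;

    bool found = false;
    for (int i = 0; i < cfg->bNumInterfaces && !found; i++)
    {
        for (int a = 0; a < cfg->interface[i].num_altsetting && !found; a++)
        {
            const libusb_interface_descriptor &id = cfg->interface[i].altsetting[a];
            found = id.bInterfaceClass    == LIBUSB_CLASS_HID
                 && id.bInterfaceSubClass == 1    // boot interface subclass
                 && id.bInterfaceProtocol == 1;   // keyboard protocol
        }
    }
    libusb_free_config_descriptor(cfg);
    return found;
}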
2. Detect Keyghost HKL using Interrupt read
int UsbKeyboard::DetectKeyghost(libusb_device *kbdev)
{
    int r, i;
    int transferred;
    unsigned char answer[PACKET_INT_LEN];
    unsigned char question[PACKET_INT_LEN];
    for (i = 0; i < PACKET_INT_LEN; i++) question[i] = 0x40 + i;

    libusb_device_handle *devh = nullptr;
    if ((r = libusb_open(kbdev, &devh)) < 0)
    {
        ShowError("Error open device", r);
        return r;
    }

    r = libusb_set_configuration(devh, 1);
    if (r < 0)
    {
        ShowError("libusb_set_configuration error ", r);
        goto out;
    }
    printf("Successfully set usb configuration 1\n");

    r = libusb_claim_interface(devh, 0);
    if (r < 0)
    {
        ShowError("libusb_claim_interface error ", r);
        goto out;
    }

    r = libusb_interrupt_transfer(devh, 0x81, answer, PACKET_INT_LEN,
                                  &transferred, TIMEOUT);
    if (r < 0)
    {
        ShowError("Interrupt read error ", r);
        goto out;
    }
    if (transferred < PACKET_INT_LEN)
    {
        ShowError("Interrupt transfer short read %", r);
        goto out;
    }

    for (i = 0; i < PACKET_INT_LEN; i++) {
        if (i % 8 == 0)
            printf("\n");
        printf("%02x, %02x; ", question[i], answer[i]);
    }
    printf("\n");

out:
    libusb_close(devh);
    return 0;
}
I get this error on libusb_interrupt_transfer:

libusb: error [hid_submit_bulk_transfer] HID transfer failed: [5] Access denied
Interrupt read error - Input/Output Error (LIBUSB_ERROR_IO) (GetLastError() - 1168)

No clue why it is 'access denied' and then an I/O error; GetLastError() returns 1168, which means 'Element not found' (what element?). Looking for help here.
3. Time measurement: send an output report and wait for the ACK packet
int UsbKeyboard::SendOutputReport(libusb_device *kbdev)
{
    const int PACKET_INT_LEN = 1;
    int r, i;
    unsigned char answer[PACKET_INT_LEN];
    unsigned char question[PACKET_INT_LEN];
    for (i = 0; i < PACKET_INT_LEN; i++) question[i] = 0x30 + i;
    for (i = 1; i < PACKET_INT_LEN; i++) answer[i] = 0;

    libusb_device_handle *devh = nullptr;
    if ((r = libusb_open(kbdev, &devh)) < 0)
    {
        ShowError("Error open device", r);
        return r;
    }

    r = libusb_set_configuration(devh, 1);
    if (r < 0)
    {
        ShowError("libusb_set_configuration error ", r);
        goto out;
    }
    printf("Successfully set usb configuration 1\n");

    r = libusb_claim_interface(devh, 0);
    if (r < 0)
    {
        ShowError("libusb_claim_interface error ", r);
        goto out;
    }
    printf("Successfully claim interface\n");

    r = libusb_control_transfer(devh, CTRL_OUT, HID_SET_REPORT, (HID_REPORT_TYPE_OUTPUT << 8) | 0x00, 0, question, PACKET_INT_LEN, TIMEOUT);
    if (r < 0) {
        ShowError("Control Out error ", r);
        goto out;
    }

    r = libusb_control_transfer(devh, CTRL_IN, HID_GET_REPORT, (HID_REPORT_TYPE_INPUT << 8) | 0x00, 0, answer, PACKET_INT_LEN, TIMEOUT);
    if (r < 0) {
        ShowError("Control In error ", r);
        goto out;
    }

out:
    libusb_close(devh);
    return 0;
}
The error is the same as for the interrupt read:

Control Out error - Input/Output Error (LIBUSB_ERROR_IO) (GetLastError() - 1168)

How can I fix this? Also, how do I wait for the ACK packet?
Thank you.
UPDATE:
I've spent a day searching and debugging, so currently my only remaining problem is sending the Output report via libusb_control_transfer. The second method (interrupt read) is unnecessary to implement, because Windows denies reading from a USB device with ReadFile.
Only the libusb part is left; here is the code I want to make work (from the third example):
// sending Output report (LED)
// ...
unsigned char buf[65];
buf[0] = 1;     // First byte is report number
buf[1] = 0x80;
r = libusb_control_transfer(devh, CTRL_OUT,
        HID_SET_REPORT/*0x9*/, (HID_REPORT_TYPE_OUTPUT/*0x2*/ << 8) | 0x00,
        0, buf, (uint16_t)2, 1000);
...
The error I've got:

[ 0.309018] [00001c0c] libusb: debug [_hid_set_report] Failed to Write HID Output Report: [1] Incorrect function
Control Out error - Input/Output Error (LIBUSB_ERROR_IO) (GetLastError() - 1168)

This error occurs right after the DeviceIoControl call in the libusb internals. What does "Incorrect function" mean there?

How to read the initial state of a MIDI Foot Controller?

I know MIDI allows me to read the state of a MIDI Foot Controller by catching a MIDI Message indicating a Control Change. But what if the user has not touched/changed the control yet? Am I still able to read the state/value? What would be the way to do that?
This is my code for catching MIDI messages using OS X CoreMIDI:
void initMidi()
{
    MIDIClientRef midiClient;
    MIDIPortRef inputPort;
    OSStatus status;
    MIDIEndpointRef src;

    status = MIDIClientCreate(CFSTR("testing"), NULL, NULL, &midiClient);
    if (status != noErr)
        NSLog(@"Error creating MIDI client: %d", status);

    status = MIDIInputPortCreate(midiClient, CFSTR("Input"), midiInputCallback, NULL, &inputPort);
    if (status != noErr)
        NSLog(@"Error creating MIDI input port: %d", status);

    ItemCount numOfDevices = MIDIGetNumberOfDevices();
    // just try to connect to every device
    for (ItemCount i = 0; i < numOfDevices; i++) {
        src = MIDIGetSource(i);
        status = MIDIPortConnectSource(inputPort, src, NULL);
    }
}

void midiInputCallback(const MIDIPacketList *list,
                       void *procRef,
                       void *srcRef)
{
    for (UInt32 i = 0; i < list->numPackets; i++) {
        const MIDIPacket *packet = &list->packet[i];
        for (UInt16 j = 0, size = 0; j < packet->length; j += size) {
            UInt8 status = packet->data[j];
            if      (status <  0xC0) size = 3;
            else if (status <  0xE0) size = 2;
            else if (status <  0xF0) size = 3;
            else if (status <  0xF3) size = 3;
            else if (status == 0xF3) size = 2;
            else                     size = 1;

            switch (status & 0xF0) {
            case 0xB0:
                NSLog(@"MIDI Control Changed: %d %d", packet->data[j + 1], packet->data[j + 2]);
                break;
            }
        }
    }
}
If you did not reset the device, and did not change a control, then your program does not know the state of a control until it receives a message.
Some devices might have vendor-specific commands to read the current state of a control, or to dump the entire state.
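For illustration, a hedged sketch of sending such a request with CoreMIDI. The bytes below are the standard MIDI Identity Request (F0 7E 7F 06 01 F7), which only asks the device to identify itself; a vendor-specific state dump uses the same sending pattern with manufacturer-defined bytes. outPort and dest are assumed to have been created with MIDIOutputPortCreate() and MIDIGetDestination():

#include <CoreMIDI/CoreMIDI.h>

void sendIdentityRequest(MIDIPortRef outPort, MIDIEndpointRef dest)
{
    const Byte request[] = { 0xF0, 0x7E, 0x7F, 0x06, 0x01, 0xF7 };

    Byte buffer[128];
    MIDIPacketList *list = (MIDIPacketList *)buffer;
    MIDIPacket *packet = MIDIPacketListInit(list);
    packet = MIDIPacketListAdd(list, sizeof(buffer), packet, 0,
                               sizeof(request), request);
    if (packet != NULL)
        MIDISend(outPort, dest, list); // any reply arrives via the input callback
}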
The short answer is: no, you cannot know the state until an event occurs.
The other answers are correct: if you have both IN and OUT connected to a controller that allows interrogation through SysEx messages (manufacturer specific), you can ask for the state.
To be more helpful:
The default state of all the controllers (that you are wanting to use) should be OFF on startup,
e.g. pitch bend centered, modulation at zero, sustain off, etc.
This has been the state of play since the 1980s, so it has not been a real problem.
If you have your foot down (on a pedal) before you start your app, you will be in sync the moment you release it.
Good luck

building a shell - IO trouble

I am working on a shell for a systems programming class. I have been having some trouble with file redirection. I just got output redirection to work, e.g. "ls > a"; however, when I type a command like "cat < a" into my shell, it deletes everything in the file. I feel like the problem stems from the second if statement: fdin = open(_inputFile, 0777).
If that is the case, a link to a recommended tutorial or other examples would be much appreciated.
On a side note, I included the entire function; however, at the point where it creates the pipe, I have not tested anything yet. I don't believe it works properly either, but that may be from a mistake in another file.
void Command::execute(){
    if (_numberOfSimpleCommands == 0) {
        prompt();
        return;
    }

    // save input/output
    int defaultin = dup(0);
    int defaultout = dup(1);

    // initial input
    int fdin;
    if (_inputFile) {
        fdin = open(_inputFile, 0777);
    } else {
        // use default input
        fdin = dup(defaultin);
    }

    // execution
    int pid;
    int fdout;
    for (int i = 0; i < _numberOfSimpleCommands; i++) {
        dup2(fdin, 0);
        close(fdin);

        // set output
        if (i == _numberOfSimpleCommands - 1) {
            if (_outFile) {
                fdout = creat(_outFile, 0666);
            } else {
                fdout = dup(defaultout);
            }
        } else {
            int fdpipe[2];
            pipe(fdpipe);
            fdout = fdpipe[0];
            fdin  = fdpipe[1];
        }
        dup2(fdout, 1);
        close(fdout);

        // create child
        pid = fork();
        if (pid == 0) {
            execvp(_simpleCommands[0]->_arguments[0], _simpleCommands[0]->_arguments);
            perror("-myshell");
            _exit(1);
        }
    }

    // restore IO defaults
    dup2(defaultin, 0);
    dup2(defaultout, 1);
    close(defaultin);
    close(defaultout);

    if (!_background) {
        waitpid(pid, 0, 0);
    }
}
Your call open(_inputFile, 0777) is incorrect. The second argument to open is supposed to contain a bitwise-OR'd combination of values that specify the access mode and file creation flags, among other things (O_RDONLY, O_WRONLY, etc.). Since you're passing 0777, that probably ends up containing both O_CREAT and O_TRUNC, which causes _inputFile to be erased. You probably want open(_inputFile, O_RDONLY).
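For illustration, a minimal sketch of the corrected input branch, using the member names from the question (O_RDONLY can neither create nor truncate the file):

#include <fcntl.h>   // for open() and the O_* flags

int fdin;
if (_inputFile) {
    fdin = open(_inputFile, O_RDONLY);
    if (fdin < 0) {
        perror("-myshell");
        return;
    }
} else {
    fdin = dup(defaultin);
}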