GLX Context Creation Error: GLXBadFBConfig - c++

I used glXCreateContext to create the contexts, but the function is deprecated and always results in an OpenGL 3.0 context, whereas I need at least version 4. Now, if I have understood it correctly, GLXContext glXCreateContextAttribsARB(Display* dpy, GLXFBConfig config, GLXContext share_context, Bool direct, const int* attrib_list); replaced glXCreateContext. The "new" function allows explicitly specifying the major version, minor version, profile et cetera in its attrib_list, for example like this:
int context_attribs[] =
{
    GLX_CONTEXT_MAJOR_VERSION_ARB, 4,
    GLX_CONTEXT_MINOR_VERSION_ARB, 5,
    GLX_CONTEXT_FLAGS_ARB,         GLX_CONTEXT_DEBUG_BIT_ARB,
    GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
    None
};
Then use the function:
glXCreateContextAttribsARB(dpy, config, NULL, true, context_attribs);
That is how I have done it in my program. The window is already created and dpy is a valid pointer to a Display. I have defined config like this:
// GLXFBConfig config; created at the beginning of the program
int attrib_list[] =
{
    GLX_RENDER_TYPE,  GLX_RGBA_BIT,
    GLX_RED_SIZE,     8,
    GLX_GREEN_SIZE,   8,
    GLX_BLUE_SIZE,    8,
    GLX_DEPTH_SIZE,   24,
    GLX_DOUBLEBUFFER, True,
    None
};
int nAttribs;
config = glXChooseFBConfig(dpy, 0, attrib_list, &nAttribs);
Checking with glxinfo, I have the correct visual for it; vi has been set to 0x120, which I can confirm with glxinfo | grep 0x120. It exactly fulfills the above.
So far, so good. But when running the application (compiling works fine), I get the following error:
X Error of failed request: GLXBadFBConfig
Major opcode of failed request: 152 (GLX)
Minor opcode of failed request: 34 ()
Serial number of failed request: 31
Current serial number in output stream: 31
Now, this is what the error is about:
If <config> does not support compatible OpenGL contexts providing the requested API major and minor version, forward-compatible flag, and debug context flag, GLXBadFBConfig is generated.
So, the problem is pretty straightforward, but I don't know how to solve it. What it essentially means is that no OpenGL context can be found that matches both the attributes I specified in attrib_list[] and the attributes in context_attribs. With glxinfo | grep Max I confirmed that my highest possible OpenGL version is 4.5. I would like to hear your advice on what I should do now. I have played around with the attributes in context_attribs for a while, but did not get anywhere. Maybe the problem really is somewhere else, or maybe my understanding of the GLX functions is flawed in general - please point it out if so!

The specification of GLX_ARB_create_context is clear about when a GLXBadFBConfig error may be returned:
* If <config> does not support compatible OpenGL contexts
providing the requested API major and minor version,
forward-compatible flag, and debug context flag, GLXBadFBConfig
is generated.
This may be confusing (as the error has nothing to do with the already created GLXFBConfig), but that's what we have. So the most obvious reason for the error you have is that your system doesn't actually support the OpenGL 4.5 Compatible Profile you have requested - it might, though, support an OpenGL 4.5 Core Profile or compatible/core profiles of lower versions. This is a pretty common case for Mesa drivers, which support OpenGL 3.3+ Core Profiles but only an OpenGL 3.0 Compatible Profile for many GPUs (though not all - some, like Radeons, get better Compatible Profile support).
If you are not yet familiar with the concept of OpenGL profiles, you can start here.
glxinfo shows information about both Core and Compatible profiles, which can be filtered like this:
glxinfo | grep -e "OpenGL version" -e "Core" -e "Compatible"
which returns this on a virtual Ubuntu 18.04 to me:
OpenGL core profile version string: 3.3 (Core Profile) Mesa 19.2.8
OpenGL version string: 3.1 Mesa 19.2.8
If your application really needs OpenGL 4.5 or higher, then just try creating a context with the GLX_CONTEXT_CORE_PROFILE_BIT_ARB bit instead of GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB and make sure you do not use any deprecated functionality.
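For example, the attribute list from the question could look like this with a Core Profile requested (a minimal sketch; the rest of the setup stays the same):
int context_attribs[] =
{
    GLX_CONTEXT_MAJOR_VERSION_ARB, 4,
    GLX_CONTEXT_MINOR_VERSION_ARB, 5,
    GLX_CONTEXT_FLAGS_ARB,         GLX_CONTEXT_DEBUG_BIT_ARB,
    GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB, // Core instead of Compatibility
    None
};
GLXContext ctx = glXCreateContextAttribsARB (dpy, config, NULL, True, context_attribs);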
Note that requesting a Compatible Profile of a specific version usually makes no sense - it is enough to just skip the version parameters to get the highest supported one, and then reject unsupported versions by checking GL_VERSION/GL_MAJOR_VERSION of the already created context, as was done in the days before profiles were introduced. In the case of a Core Profile, it might be tricky on some OpenGL drivers to request the highest supported version (i.e. without disabling functionality of versions higher than requested) - the following code snippet could be useful:
//! A dummy XError handler which just skips errors
static int xErrorDummyHandler (Display* , XErrorEvent* ) { return 0; }
...
// filled in elsewhere by the application
Window       aWindow    = ...;
Display*     aDisp      = ...;
GLXFBConfig  anFBConfig = ...;
XVisualInfo* aVisInfo   = ...;
bool toDebugContext = false;
GLXContext aGContext = NULL;

// checkGlExtension() is a small helper searching for a token within the extension string
const char* aGlxExts = glXQueryExtensionsString (aDisp, aVisInfo->screen);
if (!checkGlExtension (aGlxExts, "GLX_ARB_create_context_profile"))
{
    std::cerr << "GLX_ARB_create_context_profile is NOT supported\n";
    return;
}

// Replace the default XError handler to ignore errors.
// Warning - this is global for all threads!
typedef int (*xerrorhandler_t)(Display* , XErrorEvent* );
xerrorhandler_t anOldHandler = XSetErrorHandler (xErrorDummyHandler);

typedef GLXContext (*glXCreateContextAttribsARB_t)(Display* dpy, GLXFBConfig config,
                                                   GLXContext share_context, Bool direct,
                                                   const int* attrib_list);
glXCreateContextAttribsARB_t aCreateCtxProc =
    (glXCreateContextAttribsARB_t )glXGetProcAddress ((const GLubyte* )"glXCreateContextAttribsARB");

int aCoreCtxAttribs[] =
{
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 2,
    GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
    GLX_CONTEXT_FLAGS_ARB,         toDebugContext ? GLX_CONTEXT_DEBUG_BIT_ARB : 0,
    0, 0
};

// try to create a Core Profile of the highest OpenGL version (up to 4.6)
for (int aLowVer4 = 6; aLowVer4 >= 0 && aGContext == NULL; --aLowVer4)
{
    aCoreCtxAttribs[1] = 4;        // major version
    aCoreCtxAttribs[3] = aLowVer4; // minor version
    aGContext = aCreateCtxProc (aDisp, anFBConfig, NULL, True, aCoreCtxAttribs);
}
for (int aLowVer3 = 3; aLowVer3 >= 2 && aGContext == NULL; --aLowVer3)
{
    aCoreCtxAttribs[1] = 3;
    aCoreCtxAttribs[3] = aLowVer3;
    aGContext = aCreateCtxProc (aDisp, anFBConfig, NULL, True, aCoreCtxAttribs);
}
bool isCoreProfile = aGContext != NULL;
if (!isCoreProfile)
{
    std::cerr << "glXCreateContextAttribsARB() failed to create Core Profile\n";
}

// try to create a Compatible Profile
if (aGContext == NULL)
{
    int aCtxAttribs[] =
    {
        GLX_CONTEXT_FLAGS_ARB, toDebugContext ? GLX_CONTEXT_DEBUG_BIT_ARB : 0,
        0, 0
    };
    aGContext = aCreateCtxProc (aDisp, anFBConfig, NULL, True, aCtxAttribs);
}
XSetErrorHandler (anOldHandler);

// fallback to glXCreateContext() as the last resort
if (aGContext == NULL)
{
    aGContext = glXCreateContext (aDisp, aVisInfo, NULL, GL_TRUE);
    if (aGContext == NULL) { std::cerr << "glXCreateContext() failed\n"; }
}

Related

How can I get my total GPU memory using Qt's native OpenGL?

I'm trying to get the total amount of GPU memory from my video card using Qt's native OpenGL. I have tried hundreds of methods, but none of them work.
This is what I have at the moment:
QOpenGLContext context;
context.create();
QOffscreenSurface surface;
surface.setFormat(context.format());
surface.create();
QOpenGLFunctions func;
context.makeCurrent(&surface);
func.initializeOpenGLFunctions();
GLint total_mem_kb = 0;
func.glGetIntegerv(GL_GPU_MEM_INFO_TOTAL_AVAILABLE_MEM_NVX,&total_mem_kb);
qDebug()<<total_mem_kb;
The problem is that the variable total_mem_kb is always 0; it never receives the value from glGetIntegerv. By running this code I get 0. What can be the problem? Can you please give me a hint?
First and foremost, check if the NVX_gpu_memory_info extension is supported.
Note that the extension requires OpenGL 2.0 at least.
GLint count;
glGetIntegerv(GL_NUM_EXTENSIONS, &count);
for (GLint i = 0; i < count; ++i)
{
    const char *extension = (const char*)glGetStringi(GL_EXTENSIONS, i);
    if (!strcmp(extension, "GL_NVX_gpu_memory_info"))
        printf("%d: %s\n", i, extension);
}
I know you just said that you have an Nvidia graphics card, but that alone doesn't guarantee support. Additionally, if you also have an integrated graphics card, make sure you are actually using your dedicated graphics card.
If you have an Nvidia GeForce graphics card, then the following should result in something along the lines of "Nvidia" and "GeForce".
glGetString(GL_VENDOR);
glGetString(GL_RENDERER);
If it returns anything but "Nvidia" then you need to open your Nvidia Control Panel and set the preferred graphics card to your Nvidia graphics card.
After you've verified that it is the Nvidia graphics card and that the extension is supported, you can try getting the total and currently available memory:
GLint totalMemoryKb = 0;
glGetIntegerv(GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, &totalMemoryKb);
GLint currentMemoryKb = 0;
glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &currentMemoryKb);
I would also like to point out that the NVX_gpu_memory_info extension defines it as:
GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX
and not
GL_GPU_MEM_INFO_TOTAL_AVAILABLE_MEM_NVX
Note the MEMORY vs MEM difference.
So I suspect you've defined GL_GPU_MEM_INFO_TOTAL_AVAILABLE_MEM_NVX yourself, or you are relying on something else that has defined it. That means it could be wrongly defined or referring to something else entirely.
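If your headers don't provide these tokens at all, they can be defined manually; the values below are the ones given in the NVX_gpu_memory_info extension specification:
// Tokens from the GL_NVX_gpu_memory_info extension specification
#ifndef GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX
#define GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX         0x9047
#define GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX   0x9048
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
#endif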
I use the following:
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
LONG __stdcall glxGpuTotalMemory()
{
    GLint total_mem_kb = 0;
    glGetIntegerv(GL_GPU_MEM_INFO_TOTAL_AVAILABLE_MEM_NVX, &total_mem_kb);
    if (total_mem_kb == 0 && wglGetGPUIDsAMD)
    {
        UINT n = wglGetGPUIDsAMD(0, 0);
        UINT *ids = new UINT[n];
        size_t total_mem_mb = 0;
        wglGetGPUIDsAMD(n, ids);
        wglGetGPUInfoAMD(ids[0], WGL_GPU_RAM_AMD, GL_UNSIGNED_INT, sizeof(size_t), &total_mem_mb);
        total_mem_kb = total_mem_mb * 1024;
        delete[] ids;
    }
    return total_mem_kb;
}
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
LONG __stdcall glxGpuAvailMemory()
{
    GLint cur_avail_mem_kb = 0;
    glGetIntegerv(GL_GPU_MEM_INFO_CURRENT_AVAILABLE_MEM_NVX, &cur_avail_mem_kb);
    if (cur_avail_mem_kb == 0 && wglGetGPUIDsAMD)
    {
        glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, &cur_avail_mem_kb);
    }
    return cur_avail_mem_kb;
}

glewInit function throws error code 0x0000007F (C++) [duplicate]

I am trying to write the code from this tutorial. Here is the code of InitializeOGL():
bool Ogl::InitializeOGL(bool vSync)
{
    cout<<"Init OpenGL"<<endl;
    int pixelFormat;
    PIXELFORMATDESCRIPTOR pixelFormatDescriptor;
    int result;
    char *vendorChar, *rendererChar;

    hDC = GetDC(hWnd);
    if(!hDC)
        return false;

    pixelFormat = ChoosePixelFormat(hDC,&pixelFormatDescriptor);
    if(pixelFormat==0)
        return false;

    result = SetPixelFormat(hDC,pixelFormat,&pixelFormatDescriptor);
    if(result!=1)
        return false;

    HGLRC tempDeviceContext = wglCreateContext(hDC);
    wglMakeCurrent(hDC,tempDeviceContext);

    // glewExperimental = GL_TRUE;
    if(glewInit()!=GLEW_OK)
        return false;

    int attribList[5] =
    {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 1, 0
    };

    hGLRC = wglCreateContextAttribsARB(hDC,0,attribList);
    if(hGLRC!=NULL)
    {
        wglMakeCurrent(NULL,NULL);
        wglDeleteContext(tempDeviceContext);
        result = wglMakeCurrent(hDC,hGLRC);
        if(result!=1)
            return false;
    }

    vendorChar = (char*)glGetString(GL_VENDOR);
    rendererChar = (char*)glGetString(GL_RENDERER);
    strcpy_s(videoCardInfo,vendorChar);
    strcat_s(videoCardInfo,"-");
    strcat_s(videoCardInfo,rendererChar);

    if(vSync)
        result = wglSwapIntervalEXT(1);
    else
        result = wglSwapIntervalEXT(0);
    if(result!=1)
        return false;

    int glVersion[2] = {-1,-1};
    glGetIntegerv(GL_MAJOR_VERSION,&glVersion[0]);
    glGetIntegerv(GL_MINOR_VERSION,&glVersion[1]);

    cout<<"Initializing OpenGL"<<endl;
    cout<<"OpenGL version"<<glVersion[0]<<"."<<glVersion[1]<<endl;
    cout<<"GPU"<<videoCardInfo<<endl;
    return 0;
}
When I try to change the context version to OpenGL 3.1, wglCreateContextAttribsARB() crashes (the pointer for it is obtained correctly in LoadExtensions()). When I try to create an OpenGL 4.0 context, wglSwapIntervalEXT() crashes instead. My graphics card only handles OpenGL 3.1.
My question is: how do I successfully init an OpenGL context here? What do I have to do to create an OpenGL context of version 3.1?
There are a couple of things that need to be mentioned here:
1. Driver Version
If your graphics card / driver only supports OpenGL 3.1, then WGL_CONTEXT_MAJOR_VERSION_ARB and friends are generally going to be undefined. Before OpenGL 3.2 introduced core / compatibility profiles, context versions were not particularly meaningful.
This requires support for either WGL_ARB_create_context or WGL_ARB_create_context_profile.
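One way to check for that support at run time might look roughly like this (a sketch only; it assumes a temporary context is already current so that wglGetProcAddress can resolve the entry point, and the typedef name is local to the example):
// local typedef for the example; wglext.h provides an equivalent one
typedef const char* (WINAPI *WglGetExtStringProc)(HDC hdc);
WglGetExtStringProc pGetExtString =
    (WglGetExtStringProc)wglGetProcAddress("wglGetExtensionsStringARB");
bool hasCreateContext = false;
if (pGetExtString != NULL)
{
    const char* exts = pGetExtString(hDC);
    // naive substring check; a robust version would match whole tokens
    hasCreateContext = (exts != NULL && strstr(exts, "WGL_ARB_create_context") != NULL);
}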
2. Incorrect usage of ChoosePixelFormat and SetPixelFormat
PIXELFORMATDESCRIPTOR pixelFormatDescriptor; // <--- Uninitialized
At minimum, the Win32 API needs you to initialize the size field of this structure. In years past, the size of structures was used to determine the version of Windows that a particular piece of code was written for. These days structures like PIXELFORMATDESCRIPTOR are generally static in size because they are used by a part of Windows that is deprecated (GDI), but if you do not set the size you can still thoroughly confuse Windows. Furthermore, you need to flag your pixel format to support OpenGL to guarantee that you can use it to create an OpenGL render context.
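For reference, a minimally filled-in descriptor could look roughly like this (a sketch; the exact color/depth sizes are whatever your application needs):
PIXELFORMATDESCRIPTOR pfd = {};                  // zero everything first
pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);  // required by GDI
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;
pfd.iLayerType = PFD_MAIN_PLANE;
int pixelFormat = ChoosePixelFormat(hDC, &pfd);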
Also note that once you set the pixel format for a device context on Windows, it cannot be changed. Generally, this means if you want to create a dummy render context to initialize your extensions you should also create a dummy window with a dummy pixel format. After you initialize your extensions, you can use wglChoosePixelFormatARB (...) and its associated functions to select the pixel format for your main window's device context.
This is (was) particularly important back in the days before FBOs when you wanted to implement multi-sampling. You cannot get a multi-sample pixel format using ChoosePixelFormat (...), but you need to call ChoosePixelFormat (...) to set up the extension necessary to get a multi-sample pixel format. Kind of a catch-22.

clCreateContextFromType ends up in a SEGFAULT while execution

I am trying to create an OpenCL context on the platform which contains my graphics card, but when I call clCreateContextFromType() a SEGFAULT is thrown.
int main(int argc, char** argv)
{
    /*
    ...
    */
    cl_platform_id* someValidPlatformId;
    // creating heap space using malloc to store all platform ids
    getCLPlatforms(someValidPlatformId);
    // error handling for getCLPlatforms()

    // OCLPlatform(cl_platform_id platform)
    OCLPlatform platform = OCLPlatform(someValidPlatformId[0]);
    // OCLContext::OCL_GPU_DEVICE == CL_DEVICE_TYPE_GPU
    OCLContext context = OCLContext(platform, OCLContext::OCL_GPU_DEVICE);
    /*
    ...
    */
}

cl_platform_id* getCLPlatforms(cl_platform_id* platforms)
{
    cl_int errNum;
    cl_uint numPlatforms;

    numPlatforms = (cl_uint) getCLPlatformsCount(); // returns the platform count
                                                    // using clGetPlatformIDs()
                                                    // as described in the Khronos API
    if (numPlatforms == 0)
        return NULL;

    errNum = clGetPlatformIDs(numPlatforms, platforms, NULL);
    if (errNum != CL_SUCCESS)
        return NULL;

    return platforms;
}

OCLContext::OCLContext(OCLPlatform platform, unsigned int type)
{
    this->initialize(platform, type);
}

void OCLContext::initialize(OCLPlatform platform, unsigned int type)
{
    cl_int errNum;
    cl_context_properties contextProperties[] =
    {
        CL_CONTEXT_PLATFORM,
        (cl_context_properties)platform.getPlatformId(),
        0
    };

    cout << "a" << endl; std::flush(cout);

    this->context = clCreateContextFromType(contextProperties,
                                            (cl_device_type)type,
                                            &pfn_notify,
                                            NULL, &errNum);
    if (errNum != CL_SUCCESS)
        throw OCLContextException();

    cout << "b" << endl; std::flush(cout);
    /*
    ...
    */
}
The given type is CL_DEVICE_TYPE_GPU and also the platform contained by the cl_context_properties array is valid.
To debug the error I implemented the following pfn_notify() function described by the Khronos API:
static void pfn_notify(const char* errinfo,
                       const void* private_info,
                       size_t cb, void* user_data)
{
    fprintf(stderr, "OpenCL Error (via pfn_notify): %s\n", errinfo);
    flush(cout);
}
Here is the output shown by the shell:
$ ./OpenCLFramework.exe
a
Segmentation fault
The machine I am working with has the following properties:
Intel Core i5 2500 CPU
NVIDIA Geforce 210 GPU
OS: Windows 7
AMD APP SDK 3.0 Beta
IDE: Eclipse with gdb
It would be great if somebody knew an answer to this problem.
The problem seems to be solved now.
Injecting a valid cl_platform_id through gdb fixed the SEGFAULT. So I dug a little deeper, and the cause of the error was that I had saved the value as a standard primitive. When I called a function with this value cast to cl_platform_id, some functions failed to handle it. So it looks like it was a mix-up of types that led to this failure.
Now I store the value as cl_platform_id and cast it to a primitive when needed, not vice versa.
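In other words, roughly this (a minimal sketch with made-up variable names, just to illustrate the difference):
// problematic: keeping the handle in a generic integer type and casting it back later
// size_t storedId = (size_t)platformId;
// better: keep the handle in its proper type and only cast outwards when required
cl_platform_id storedId = platformId;
cl_context_properties contextProperties[] =
{
    CL_CONTEXT_PLATFORM,
    (cl_context_properties)storedId, // explicit cast only at the point of use
    0
};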
Thank you for your answers, and I apologize for the long radio silence on my part.

OpenCL crashing while dynamic linking?

I am trying to load the OpenCL library at run time so that the same exe can run on platforms which do not have OpenCL drivers without failing on unresolved symbols. I am using Qt to do this, but I don't think my problem is caused by Qt. Here is my function which checks whether OpenCL 1.1 is installed:
QLibrary *MyOpenCL::openCLLibrary = NULL;

bool MyOpenCL::loadOpenCL()
{
    if(openCLLibrary)
        return true;

    QLibrary *lib = new QLibrary("OpenCL");
    if(!lib->load())
        return false;

    bool result = false;
    typedef cl_int (*MyPlatorms)(cl_uint, cl_platform_id *, cl_uint *);
    MyPlatorms pobj = (MyPlatorms) lib->resolve("clGetPlatformIDs");
    if(pobj)
    {
        cl_uint nplatforms = 0;
        cl_uint myerr = pobj(0, NULL, &nplatforms);
        if((myerr == CL_SUCCESS) && (nplatforms > 0))
        {
            cl_platform_id *mplatforms = new cl_platform_id[nplatforms];
            myerr = pobj(nplatforms, mplatforms, NULL);

            typedef cl_int (*MyPlatformInfo)(cl_platform_id, cl_platform_info, size_t, void *, size_t *);
            MyPlatformInfo pinfoobj = (MyPlatformInfo) lib->resolve("clGetPlatformInfo");
            if(pinfoobj)
            {
                size_t size;
                for(unsigned int i = 0; i < nplatforms; i++)
                {
                    size = 0;
                    myerr = pinfoobj(mplatforms[i], CL_PLATFORM_VERSION, 0, NULL, &size); // size = 27
                    if(size < 1)
                        continue;

                    char *ver = new char[size];
                    myerr = pinfoobj(mplatforms[i], CL_PLATFORM_VERSION, size, ver, NULL);
                    qDebug() << endl << ver; // segmentation fault at this line
                    ...
}
As can be seen, Qt successfully resolved clGetPlatformIDs(). It even showed that there is 1 platform available. But when I pass the array to store the cl_platform_id values, it crashes.
Why is this happening?
EDIT:
I am using Qt 4.8.1 with MinGW compiler using OpenCL APP SDK 2.9.
I am using the OpenCL 1.1 header from Khronos website.
My laptop, which has Windows 7 64-bit, also has an ATI Radeon 7670m GPU with OpenCL 1.1 drivers.
The first parameter to clGetPlatformIDs is the number of elements the driver is allowed to write to the array pointed to by the second parameter.
In the first call, you are passing INT_MAX and NULL for these. I'd expect a crash here because you are telling the driver to go ahead and write through your NULL pointer.
You should pass 0 for the first parameter since all you are interested in is the returned third parameter value.
In the second call you at least pass valid memory for the second parameter, but you again pass INT_MAX. Here you should pass nplatforms since that is how much memory you allocated. For the third parameter, pass NULL since you don't need the return value (again).
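Put together, the two-step pattern described above would look roughly like this (a sketch reusing the pobj function pointer resolved in the question):
cl_uint nplatforms = 0;
cl_int err = pobj(0, NULL, &nplatforms);        // first call: only ask for the count
if (err == CL_SUCCESS && nplatforms > 0)
{
    cl_platform_id *mplatforms = new cl_platform_id[nplatforms];
    err = pobj(nplatforms, mplatforms, NULL);   // second call: fill exactly what was allocated
    // ... use mplatforms ...
    delete[] mplatforms;
}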

How do I get the version of a driver on Windows from C++

I'm looking for a programmatic way to get the version number of a driver. I want the same number that device manager shows in the driver properties for a device.
Background: I have an application that talks to some custom hardware. The device driver for the custom hardware has known bugs before a certain version number. I want the application to check the driver version and warn the user if they need to update it. The application runs on Windows XP and 7 and is written in C++.
A previous hack I used was to read the .sys file directly from system32/drivers and search for "FileVersion" directly. This is bad for many reasons. In particular it seems to need admin privileges on Windows 7.
I know the class GUID and the hardware ID (ie "USB\VID_1234&PID_5678").
The application currently uses SetupDiGetClassDevs, SetupDiEnumDeviceInterfaces and then SetupDiGetDeviceInterfaceDetail to get the "DevicePath". It then calls CreateFile with that path to talk to the driver.
It looks like I need to get a SP_DRVINFO_DATA structure from somewhere. I've tried various functions from setupapi.h, such as SetupDiGetDeviceInterfaceDetail. Here's some code I've tried that fails:
int main(void)
{
    HDEVINFO DeviceInfoSet = SetupDiGetClassDevs((LPGUID)&GUID_DEVINTERFACE_USBSPI, NULL, NULL,
                                                 DIGCF_PRESENT | DIGCF_DEVICEINTERFACE);
    SP_INTERFACE_DEVICE_DATA InterfaceDeviceData;
    InterfaceDeviceData.cbSize = sizeof(SP_INTERFACE_DEVICE_DATA);

    // Cycle through all devices.
    for (int i = 0; i < 32; i++)
    {
        if (!SetupDiEnumDeviceInterfaces(DeviceInfoSet, 0, (LPGUID)&GUID_DEVINTERFACE_USBSPI, i, &InterfaceDeviceData))
            break;

        PSP_DEVICE_INTERFACE_DETAIL_DATA DeviceInterfaceDetailData;
        DWORD RequiredSize;
        SetupDiGetDeviceInterfaceDetail(DeviceInfoSet, &InterfaceDeviceData, NULL, 0, &RequiredSize, NULL);
        DeviceInterfaceDetailData = (PSP_DEVICE_INTERFACE_DETAIL_DATA)HeapAlloc(GetProcessHeap(),
                                        HEAP_GENERATE_EXCEPTIONS | HEAP_ZERO_MEMORY, RequiredSize);
        try
        {
            DeviceInterfaceDetailData->cbSize = sizeof(SP_DEVICE_INTERFACE_DETAIL_DATA);
            SetupDiGetDeviceInterfaceDetail(DeviceInfoSet, &InterfaceDeviceData, DeviceInterfaceDetailData, RequiredSize, NULL, NULL);

            // Try to get the driver info. This part always fails with code
            // 259 (ERROR_NO_MORE_ITEMS).
            SP_DRVINFO_DATA drvInfo;
            drvInfo.cbSize = sizeof(SP_DRVINFO_DATA);
            if (!SetupDiEnumDriverInfo(DeviceInfoSet, NULL, SPDIT_CLASSDRIVER, i, &drvInfo))
                printf("error = %d\n", GetLastError());
            printf("Driver version is %08x %08x\n", drvInfo.DriverVersion >> 32, drvInfo.DriverVersion & 0xffffffff);
        }
        catch(...)
        {
            HeapFree(GetProcessHeap(), 0, DeviceInterfaceDetailData);
            throw;
        }
        HeapFree(GetProcessHeap(), 0, DeviceInterfaceDetailData);
    }
    SetupDiDestroyDeviceInfoList(DeviceInfoSet);
    return 0;
}
Edit - My updated code now looks like this:
HDEVINFO devInfoSet = SetupDiGetClassDevs(&GUID_DEVINTERFACE_USBSPI, NULL, NULL,
                                          DIGCF_PRESENT | DIGCF_DEVICEINTERFACE);
// Cycle through all devices.
for (int i = 0; ; i++)
{
    // Get the device info for this device
    SP_DEVINFO_DATA devInfo;
    devInfo.cbSize = sizeof(SP_DEVINFO_DATA);
    if (!SetupDiEnumDeviceInfo(devInfoSet, i, &devInfo))
        break;

    // Get the first info item for this driver
    SP_DRVINFO_DATA drvInfo;
    drvInfo.cbSize = sizeof(SP_DRVINFO_DATA);
    if (!SetupDiEnumDriverInfo(devInfoSet, &devInfo, SPDIT_COMPATDRIVER, 0, &drvInfo))
        printf("err - %d\n", GetLastError()); // Still fails with "no more items"
}
SetupDiDestroyDeviceInfoList(devInfoSet);
You're incorrectly reusing i as the index in SetupDiEnumDriverInfo. That should be an inner loop over the driver info elements of each device. As a result, you fail to retrieve driver info #0 for device #1.
Still, that doesn't explain why info #0 for device #0 fails. For that, you have to look at the second parameter of SetupDiEnumDriverInfo. That is a SP_DEVINFO_DATA structure for your device, but you leave it set to NULL. That gets you the list of drivers associated with the device class, not the device. I.e. that works for mice and USB sticks, which have class drivers. Your device probably has a vendor-specific driver, so you need the driver for that specific device.
As you asked a nearly identical question, I'll only post the link to my answer here:
Why does SetupDiEnumDriverInfo give two version numbers for my driver