I have two projectors that are usually unplugged from power. I use KODI to play videos; its C++ source is available, and I adapt it for my needs.
When running KODI, I switch the screens to extended mode:
SetDisplayConfig( 0, NULL, 0, NULL, SDC_TOPOLOGY_EXTEND | SDC_APPLY );
But this only works if both screens have already been detected by the system. When they have not been detected, I have to open the display settings and click "Detect" before running KODI.
How to do this detection in C++?
I tried:
bool IsDisplayConnected(int displayIndex)
{
    DISPLAY_DEVICE device = {};          // zero-initialize the structure
    device.cb = sizeof(DISPLAY_DEVICE);  // cb must be set before the call
    return EnumDisplayDevices(NULL, displayIndex, &device, 0) != FALSE;
}

IsDisplayConnected(0);
IsDisplayConnected(1);
But nothing happens; the system does not try to detect the screens.
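For context, EnumDisplayDevices is a pure query: it reads the configuration Windows already knows about and does not trigger a rescan the way the "Detect" button does, which would explain why nothing happens. A minimal sketch, using the documented DISPLAY_DEVICE_ATTACHED_TO_DESKTOP flag, that at least distinguishes a display the system knows about from one that is actually active on the desktop (IsDisplayActive is my own illustrative name):

#include <windows.h>

// Reports whether the display at displayIndex is known to the system AND
// currently part of the desktop. This reads cached state only; it cannot
// force Windows to re-detect a newly powered-on projector.
bool IsDisplayActive(int displayIndex)
{
    DISPLAY_DEVICE device = {};
    device.cb = sizeof(DISPLAY_DEVICE);
    if (!EnumDisplayDevices(NULL, displayIndex, &device, 0))
        return false; // index not present in the cached configuration
    return (device.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP) != 0;
}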
I want to capture an image frame from a video capture device on Windows 7
and work directly with the RGB data.
The code works, but when the device is off the application hangs and I can't kill the process.
There are two devices to select from: a webcam and a video capture device. With capDlgVideoSource a dialog sometimes opens where I can select one.
How do I avoid a deadlock when the device is off?
How do I put the video frame data directly into memory instead of a file? (Solved: copy the frame to the clipboard, then from the clipboard into memory.)
How can I explicitly select a device?
The Microsoft documentation is very sparse.
capDriverConnect(hCam, X) with X = 1, 2, 3 doesn't work; only 0 does.
It seems "0" addresses both devices.
capDlgVideoSource(hCam); sometimes lets me choose a driver in a dialog,
sometimes not.
Sometimes a dialog for selecting the device mysteriously pops up without my asking for it, sometimes not.
Can anybody help me make sense of this unclear behaviour?
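On the "explicitly select a device" question, a hedged sketch using capGetDriverDescription, which enumerates the installed VFW capture drivers by index (0 through 9); the index that shows the desired device is then the one to pass to capDriverConnect. This assumes a Unicode build, like the snippet below:

#include <windows.h>
#include <vfw.h>
#include <stdio.h>
#pragma comment(lib, "vfw32.lib")

// List the registered VFW capture drivers so that a specific index can be
// passed to capDriverConnect(hCam, index) instead of defaulting to 0.
void ListCaptureDrivers()
{
    wchar_t name[80], version[80];
    for (WORD index = 0; index < 10; ++index) // VFW allows indices 0..9
    {
        if (capGetDriverDescription(index, name, 80, version, 80))
            wprintf(L"driver %u: %s (%s)\n", index, name, version);
    }
}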
// create the preview window
HWND hCam = capCreateCaptureWindow(L"hoven", WS_CHILD, 0, 0, 0, 0, GetDesktopWindow(), 0);
if (hCam == NULL)
{
    printf("capCreateCaptureWindow Error!\n");
    return -1;
}

// here I get the deadlock when the device is off
if (!capDriverConnect(hCam, 0))
{
    printf("capDriverConnect Error!\n");
    return -1;
}

capGrabFrame(hCam);                 // grab a single frame
capFileSaveDIB(hCam, L"shot.bmp");  // save it to disk as a DIB (bitmap)
capDriverDisconnect(hCam);
DestroyWindow(hCam);
return 0;
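On the deadlock: capDriverConnect is a blocking SendMessage into the driver, so no timeout can be applied on the calling thread itself. A hedged sketch of a workaround (my own idea, not an official API): perform the connect on a worker thread and give up after a timeout, treating the device as off. The hung thread is abandoned rather than killed, which leaks it but keeps the process responsive:

#include <windows.h>
#include <vfw.h>
#pragma comment(lib, "vfw32.lib")

// Worker: create the capture window and try to connect; this is the call
// that can block indefinitely when the device is powered off.
static DWORD WINAPI ConnectProbe(LPVOID param)
{
    bool* connected = static_cast<bool*>(param);
    HWND hCam = capCreateCaptureWindow(L"probe", WS_CHILD, 0, 0, 0, 0,
                                       GetDesktopWindow(), 0);
    if (hCam != NULL)
    {
        *connected = capDriverConnect(hCam, 0) != FALSE; // may hang here
        if (*connected)
            capDriverDisconnect(hCam);
        DestroyWindow(hCam);
    }
    return 0;
}

// Returns true only if the driver connected within timeoutMs.
bool IsCaptureDeviceAvailable(DWORD timeoutMs)
{
    // Heap-allocate the flag so a hung, abandoned thread can never write
    // into a dead stack frame; on timeout the allocation leaks on purpose.
    bool* connected = new bool(false);
    HANDLE thread = CreateThread(NULL, 0, ConnectProbe, connected, 0, NULL);
    if (thread == NULL)
    {
        delete connected;
        return false;
    }
    DWORD wait = WaitForSingleObject(thread, timeoutMs);
    CloseHandle(thread);
    if (wait != WAIT_OBJECT_0)
        return false; // still hung: abandon the thread and the flag
    bool result = *connected;
    delete connected;
    return result;
}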
Good afternoon,
I have a barebone Direct3D App that works on a host PC, but fails to initialize DirectX while running via remote desktop.
I traced the failure to this call:
result = adapterOutput->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM, DXGI_ENUM_MODES_INTERLACED, &numModes, NULL);
if (FAILED(result))
{
    return false;
}
It fails with:
result = 0x887A0022 (DXGI_ERROR_NOT_CURRENTLY_AVAILABLE): "A resource is not available at the time of the call, but may become available later."
The full initialization code is from Rastertek tutorials, found here:
http://www.rastertek.com/dx11tut03.html
Does anyone know a workaround for this problem?
Remote Desktop involves some corner cases; keep in mind it is sometimes using the 'Microsoft Basic Renderer' (a.k.a. the software WARP driver). See this blog post.
You can also guard your use of GetDisplayModeList in the remote scenario by detecting it in the first place. For example, the legacy DXUT sample framework did this in its enumeration code:
// Manually create a mode for the current screen resolution for the remote session.
if (0 != GetSystemMetrics(SM_REMOTESESSION))
{
    DEVMODE DevMode;
    DevMode.dmSize = sizeof(DEVMODE);
    if (EnumDisplaySettings(nullptr, ENUM_CURRENT_SETTINGS, &DevMode))
    {
        NumModes = 1;
        pDesc[0].Width = DevMode.dmPelsWidth;
        pDesc[0].Height = DevMode.dmPelsHeight;
        pDesc[0].Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        pDesc[0].RefreshRate.Numerator = 0;
        pDesc[0].RefreshRate.Denominator = 0;
        pDesc[0].ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_PROGRESSIVE;
        pDesc[0].Scaling = DXGI_MODE_SCALING_CENTERED;
    }
}
You also can't use 'full-screen exclusive' mode in remote desktop:
if (GetSystemMetrics(SM_REMOTESESSION) != 0)
{
    sd.Windowed = TRUE;
}
You don't really need to use GetDisplayModeList at all. Just pick a reasonable starting size, or start your window 'maximized'. See the directx-vs-templates for an approach that just uses the 'native resolution' of the desktop for both windowed and 'fake full screen'; it all works well over remote desktop too.
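A minimal sketch of that idea (my paraphrase, not the templates' exact code): size everything from the current desktop resolution and let DXGI choose the refresh rate; hwnd is assumed to be your window:

// Use the desktop's native resolution instead of GetDisplayModeList.
DXGI_SWAP_CHAIN_DESC sd = {};
sd.BufferDesc.Width = GetSystemMetrics(SM_CXSCREEN);
sd.BufferDesc.Height = GetSystemMetrics(SM_CYSCREEN);
sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
sd.BufferDesc.RefreshRate.Numerator = 0;   // 0/0: let DXGI pick a default
sd.BufferDesc.RefreshRate.Denominator = 0;
sd.SampleDesc.Count = 1;
sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
sd.BufferCount = 2;
sd.OutputWindow = hwnd;
sd.Windowed = TRUE; // required over Remote Desktop anyway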
Another 'corner-case' with remote desktop is "raw input" for mouse. See the implementation of Mouse from the DirectX Tool Kit.
Not technically a solution, but the problem was in the refresh-rate initialization; bypassing it with a try/catch block allowed me to run with a default refresh rate via remote desktop. Everything else initialized without issues.
I develop an application which shows something like a video in its window. I use the technologies described in Introducing Direct2D 1.1. In my case the only difference is that eventually I create a bitmap using
ID2D1DeviceContext::CreateBitmap
then I use
ID2D1Bitmap::CopyFromMemory
to copy raw RGB data to it and then I call
ID2D1DeviceContext::DrawBitmap
to draw the bitmap. I use the high-quality cubic interpolation mode D2D1_INTERPOLATION_MODE_HIGH_QUALITY_CUBIC for scaling to get the best picture, but in some cases (RDP, Citrix, virtual machines, etc.) it is very slow and consumes a lot of CPU. This happens because a non-hardware video adapter is used in those cases. So for non-hardware adapters I am trying to turn off the interpolation and use faster methods. The problem is that I cannot reliably check whether the system has a true hardware adapter.
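For reference, a condensed sketch of the frame path just described; m_d2dContext, frameWidth, frameHeight, frameData, frameStride, and destRect are placeholder names, and HRESULT checks are omitted:

#include <d2d1_1.h>
#include <d2d1_1helper.h>
#include <wrl/client.h>

// One-time setup: a GPU bitmap the size of the video frame.
Microsoft::WRL::ComPtr<ID2D1Bitmap1> bitmap;
D2D1_BITMAP_PROPERTIES1 props = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_NONE,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_IGNORE));
m_d2dContext->CreateBitmap(D2D1::SizeU(frameWidth, frameHeight),
                           nullptr, 0, &props, &bitmap);

// Per frame: upload the raw pixels, then draw scaled with the chosen mode.
bitmap->CopyFromMemory(nullptr, frameData, frameStride);
m_d2dContext->DrawBitmap(bitmap.Get(), &destRect, 1.0f,
                         D2D1_INTERPOLATION_MODE_HIGH_QUALITY_CUBIC, nullptr);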
When I call D3D11CreateDevice, I use it with D3D_DRIVER_TYPE_HARDWARE, but on virtual machines it typically returns "Microsoft Basic Render Driver", which is a software driver that does not use the GPU (it consumes CPU instead). So currently I check the vendor ID: if the vendor is AMD (ATI), NVIDIA, or Intel, I use the cubic interpolation; otherwise I use the fastest method, which does not consume much CPU.
Microsoft::WRL::ComPtr<IDXGIDevice> dxgiDevice;
if (SUCCEEDED(m_pD3dDevice->QueryInterface(...)))
{
    Microsoft::WRL::ComPtr<IDXGIAdapter> adapter;
    if (SUCCEEDED(dxgiDevice->GetAdapter(&adapter)))
    {
        DXGI_ADAPTER_DESC desc;
        if (SUCCEEDED(adapter->GetDesc(&desc)))
        {
            if (desc.VendorId == 0x10DE ||  // NVIDIA
                desc.VendorId == 0x1002 ||  // AMD (0x1022 ?)
                desc.VendorId == 0x8086)    // Intel (0x163C, 0x8087 ?)
            {
                bSupported = true;
            }
        }
    }
}
It works for physical (console) Windows sessions, even in virtual machines. But in RDP sessions IDXGIAdapter still returns the real vendor on physical machines, yet the GPU is not used (I can see this via Process Hacker 2 and AMD System Monitor in the case of an ATI Radeon), so I still get high CPU consumption with the cubic interpolation. In the case of an RDP session to Windows 7 with an ATI Radeon, it is 10% higher than via the physical console.
Or am I mistaken, and RDP somehow does use GPU resources, which is why it returns a real hardware adapter via IDXGIAdapter::GetDesc?
DirectDraw
I also looked at the DirectX Diagnostic Tool. The "DirectDraw Acceleration" info field seems to return exactly what I need: for physical (console) sessions it says "Enabled"; for RDP and virtual-machine sessions (without hardware video acceleration) it says "Not Available". I looked at the sources, and in theory I could use the same verification algorithm. But it is for DirectDraw, which I do not use in my application; I would like something directly tied to ID3D11Device, IDXGIDevice, IDXGIAdapter and so on.
IDXGIAdapter1::GetDesc1 and DXGI_ADAPTER_FLAG
I also tried to use IDXGIAdapter1::GetDesc1 and check the flags.
Microsoft::WRL::ComPtr<IDXGIDevice> dxgiDevice;
if (SUCCEEDED(m_pD3dDevice->QueryInterface(...)))
{
    Microsoft::WRL::ComPtr<IDXGIAdapter> adapter;
    if (SUCCEEDED(dxgiDevice->GetAdapter(&adapter)))
    {
        Microsoft::WRL::ComPtr<IDXGIAdapter1> adapter1;
        if (SUCCEEDED(adapter->QueryInterface(__uuidof(IDXGIAdapter1), reinterpret_cast<void**>(adapter1.GetAddressOf()))))
        {
            DXGI_ADAPTER_DESC1 desc;
            if (SUCCEEDED(adapter1->GetDesc1(&desc)))
            {
                // desc.Flags:
                // DXGI_ADAPTER_FLAG_NONE = 0,
                // DXGI_ADAPTER_FLAG_REMOTE = 1,
                // DXGI_ADAPTER_FLAG_SOFTWARE = 2,
                // DXGI_ADAPTER_FLAG_FORCE_DWORD = 0xffffffff
            }
        }
    }
}
Results for the DXGI_ADAPTER_FLAG_SOFTWARE flag:
Virtual machine, RDP, Windows Server 2012 (Microsoft Basic Render Driver) -> (0x02) DXGI_ADAPTER_FLAG_SOFTWARE
Physical Windows 10 (Intel video) -> (0x00) DXGI_ADAPTER_FLAG_NONE
Physical Windows 7 (ATI Radeon) -> (0x00) DXGI_ADAPTER_FLAG_NONE
RDP Windows 10 (Intel video) -> (0x00) DXGI_ADAPTER_FLAG_NONE
RDP Windows 7 (ATI Radeon) -> (0x00) DXGI_ADAPTER_FLAG_NONE
In the case of an RDP session on a real machine with a hardware adapter, Flags == 0, but as I can see via Process Hacker 2 the GPU is not used; at least on Windows 7 with an ATI Radeon I see higher CPU usage in an RDP session. So it looks like DXGI_ADAPTER_FLAG_SOFTWARE is set only for the Microsoft Basic Render Driver, and the issue is not solved.
The question
Is there a correct way to check whether a real hardware video card (GPU) is used for the current Windows session? Or maybe it is possible to check whether a specific interpolation mode of ID2D1DeviceContext::DrawBitmap has a hardware implementation and uses the GPU in the current session?
UPD
This topic is not about detecting RDP or Citrix sessions, nor about detecting whether the application runs inside a virtual machine. I already have all those checks and use linear interpolation in those cases. The topic is about detecting whether a real GPU is used to display the desktop in the current Windows session. I am looking for a more sophisticated solution that makes the decision using features of DirectX and DXGI.
If you want to detect the Microsoft Basic Renderer, the best option is to use its VID/PID combination:
ComPtr<IDXGIDevice> dxgiDevice;
if (SUCCEEDED(device.As(&dxgiDevice)))
{
    ComPtr<IDXGIAdapter> adapter;
    if (SUCCEEDED(dxgiDevice->GetAdapter(&adapter)))
    {
        DXGI_ADAPTER_DESC desc;
        if (SUCCEEDED(adapter->GetDesc(&desc)))
        {
            if ((desc.VendorId == 0x1414) && (desc.DeviceId == 0x8c))
            {
                // WARNING: Microsoft Basic Render Driver is active.
                // Performance of this application may be unsatisfactory.
                // Please ensure that your video card is Direct3D10/11 capable
                // and has the appropriate driver installed.
            }
        }
    }
}
See Microsoft Docs and Anatomy of Direct3D 11 Create Device
You will probably find for testing/debugging that you don't want to explicitly block these scenarios, but you do want to give the user some kind of warning or notice that they are using software rather than hardware rendering.
Remote Desktop detection from Win32 classic desktop applications is better done directly via GetSystemMetrics( SM_REMOTESESSION ).
See Microsoft Docs
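Putting the two checks together for the original use case, a small sketch (isWarp being the result of the VID/PID test above):

// Fall back to cheap linear filtering on WARP or in a remote session;
// keep high-quality cubic only when a real GPU renders the session.
D2D1_INTERPOLATION_MODE mode =
    (isWarp || GetSystemMetrics(SM_REMOTESESSION) != 0)
        ? D2D1_INTERPOLATION_MODE_LINEAR
        : D2D1_INTERPOLATION_MODE_HIGH_QUALITY_CUBIC;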
Answering a three-year-old question, as I struggled with this myself.
I had to go through the registry. The first step is to find the adapter LUID in the registry in order to get the adapter GUID:
private string GetAdapterGuid(long luid)
{
    var directXRegistryKey = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\Microsoft\DirectX");
    if (directXRegistryKey == null)
        return "";

    var subKeyNames = directXRegistryKey.GetSubKeyNames();
    foreach (var subKeyName in subKeyNames)
    {
        var subKey = directXRegistryKey.OpenSubKey(subKeyName);
        if (subKey.GetValueKind("AdapterLuid") != RegistryValueKind.QWord)
            continue;

        var luidValue = (long)subKey.GetValue("AdapterLuid");
        if (luidValue == luid)
            return subKeyName;
    }

    return "";
}
Once you have that GUID, you can look up the details of the graphics card under HKLM like this. If it is virtual, the service name will be "INDIRECTKMD":
private bool IsVirtualAdapter(string adapterGuid)
{
    var videoRegistryKey = Registry.LocalMachine.OpenSubKey($@"SYSTEM\CurrentControlSet\Control\Video\{adapterGuid}\Video");
    if (videoRegistryKey == null)
        return false;

    if (videoRegistryKey.GetValueKind("Service") != RegistryValueKind.String)
        return false;

    var serviceName = (string)videoRegistryKey.GetValue("Service");
    return serviceName.ToUpper() == "INDIRECTKMD";
}
Checking the service name felt easier than parsing the DeviceDesc value.
My use case needed the GUID separately, so I split the logic into two functions; you could merge them into one.
This only detects RDP/MSTSC; other virtual adapters may need additional service names. Or you could try to detect only NVIDIA/AMD/Intel driver names... up to you.
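Since the rest of this thread is C++, here is a hedged C++ translation of the first lookup, against the same registry layout; the LUID would come from DXGI_ADAPTER_DESC::AdapterLuid of the adapter in use:

#include <windows.h>
#include <string>

// Returns the subkey name (the adapter GUID) whose AdapterLuid matches, or
// an empty string if none does.
std::wstring GetAdapterGuid(LUID luid)
{
    LONGLONG target = (static_cast<LONGLONG>(luid.HighPart) << 32) | luid.LowPart;
    HKEY hKey;
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, L"SOFTWARE\\Microsoft\\DirectX",
                      0, KEY_READ, &hKey) != ERROR_SUCCESS)
        return L"";

    std::wstring result;
    for (DWORD i = 0;; ++i)
    {
        wchar_t name[256];
        DWORD nameLen = 256;
        if (RegEnumKeyExW(hKey, i, name, &nameLen,
                          nullptr, nullptr, nullptr, nullptr) != ERROR_SUCCESS)
            break; // no more subkeys

        LONGLONG value = 0;
        DWORD size = sizeof(value);
        if (RegGetValueW(hKey, name, L"AdapterLuid", RRF_RT_REG_QWORD,
                         nullptr, &value, &size) == ERROR_SUCCESS &&
            value == target)
        {
            result = name;
            break;
        }
    }
    RegCloseKey(hKey);
    return result;
}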
I am posting here for the first time; I hope this will be clear and readable.
I am trying to programmatically test for the presence of USB devices on an embedded system, given a specific HCD and port path, using C++ and Visual Studio 2008.
The idea is to pass the port number and the HCD value as the parameters of the function, and the function returns true or false to indicate the connection status.
I have written code that populates the root hub and proves that the device attached to port 1 of the root hub is a hub, using the DeviceIsHub flag from usbioctl.h.
However, when I attempt to enumerate the USB hub attached to port 1 of the root hub, so that I can test the connection status of its downstream ports (ports 1 and 2 of this USB hub), my code does not find out how many downstream ports the hub has.
I checked USBVIEW/TreeView; both applications tell me that the devices are there,
but I am not sure which IOCTL command code to use to enumerate the downstream ports so I can check the connection status.
The structure of the device, based on USB View and the USB tree, is the following:
The root hub has 7 ports; only the first port is in use.
A USB hub (with four available ports) is attached to the first port of the root hub.
Two USB devices (a USB mouse and a USB keyboard) are attached to ports 1 and 2 of that USB hub.
I have tried IOCTL_USB_GET_CONNECTION_INFORMATION, IOCTL_USB_GET_CONNECTION_NAME,
IOCTL_USB_GET_CONNECTION_INFORMATION_EX, and IOCTL_USB_PORT_CONNECTOR_PROPERTIES
(which is not supported here; it is only available from Windows 8, and it is exactly the IOCTL they use to enumerate the ports).
Please ignore the MessageBoxes; they are there so I can check the control path and determine which route was taken.
Below is the code I wrote while attempting to enumerate/populate the USB hub. I did not include the root-hub code because it would make this snippet too big.
My questions mainly reside in the enumeration process of the secondary USB hub, I believe.
I just checked the registry key of the device: the USB hub is enumerated and present, since its information shows up under HKLM\System\CurrentControlSet\Enum\USB in regedit. I believe I am simply not enumerating it correctly within my test application.
Any help would be greatly appreciated.
Update
The part I am most concerned about is the pair of DeviceIoControl calls that get the size and then the actual name of that USB hub.
It currently only works with IOCTL_USB_GET_NODE_INFORMATION. Any other IOCTL intended to retrieve the hub name fails at the first DeviceIoControl, where I try to get the size of the hub name so I know how much memory to allocate
for it.
Update Part 2
From reading the usbview open-source code, I believe I need to enumerate the host controller and all of the devices first, before checking for the presence of a device. My conclusion is that without enumerating the controller, the walk only gets so far down the tree (at best the second layer, which is where the external hub is attached in my case).
I am currently attempting to enumerate the other devices and controllers in the hope of reaching the third layer of devices (see also the sketch after the code below). I will keep updating this thread until I either figure out the problem myself or someone answers my questions.
// We are connected to the external hub; now begin its configuration and enumeration.
ULONG kBytes = 0;
USB_HUB_NAME SubHubName;

// Create a handle for the external hub driver.
char Name[16];
wsprintf(Name, "\\\\.\\HCD%d", HcdSub);
HANDLE SubHub = CreateFile(Name,
                           GENERIC_WRITE,
                           FILE_SHARE_WRITE,
                           NULL,
                           OPEN_EXISTING,
                           0,
                           NULL);

// Check whether the handle was created successfully.
if (SubHub == INVALID_HANDLE_VALUE)
{
    MessageBox(NULL, "SubHandle Fail", "TEST", MB_OK);
    return false;
}

// Query the sub/external hub for its structure; this should tell us the
// number of downstream ports we need to enumerate.
ioctlSuccess = DeviceIoControl(SubHub,
                               IOCTL_USB_GET_NODE_INFORMATION,
                               0,
                               0,
                               &SubHubName,
                               sizeof(SubHubName),
                               &kBytes,
                               NULL);

// If the command failed, close the handle and return false.
if (!ioctlSuccess)
{
    CloseHandle(SubHub);
    MessageBox(NULL, "sub hub size fail", "TEST", MB_OK);
    return false;
}

// Prepare to receive the sub-hub name; ActualLength is the size in bytes.
kBytes = SubHubName.ActualLength;
USB_HUB_NAME *subHubNameW = (USB_HUB_NAME *) malloc(kBytes);

// If the allocation failed, clean up and return false.
if (subHubNameW == NULL)
{
    CloseHandle(SubHub);
    MessageBox(NULL, "SUBHubNameW=NULL", "TEST", MB_OK);
    return false;
}

// Send the command to retrieve the name.
ioctlSuccess = DeviceIoControl(SubHub,
                               IOCTL_USB_GET_NODE_INFORMATION,
                               NULL,
                               0,
                               subHubNameW,
                               kBytes,
                               &kBytes,
                               NULL);

// We no longer need this handle.
CloseHandle(SubHub);
if (!ioctlSuccess)
{
    free(subHubNameW);
    MessageBox(NULL, "GET NODE INFO FAIL", "TEST", MB_OK);
    return false;
}

// Convert the sub-hub name from wide characters to char.
MessageBox(NULL, "BEGIN CONVERSION", "TEST", MB_OK);
kBytes = wcslen(subHubNameW->HubName) + 1;
char *subhubname = (char *) malloc(sizeof(char) * kBytes);
wcstombs(subhubname, subHubNameW->HubName, kBytes);

// We no longer need subHubNameW; the information is now in subhubname.
free(subHubNameW);

// Build the device path used to open a handle to the sub-hub driver.
int SubhdnSize = strlen(subhubname) + sizeof("\\\\.\\");
char *subhubnamelength = (char *) malloc(sizeof(char) * SubhdnSize);
sprintf(subhubnamelength, "\\\\.\\%s", subhubname);

// We no longer need subhubname, so free it.
free(subhubname);

// Attempt to open a handle for enumerating the ports on this hub.
HANDLE ExternalHub = CreateFile(subhubnamelength,
                                GENERIC_WRITE,
                                FILE_SHARE_WRITE,
                                NULL,
                                OPEN_EXISTING,
                                0,
                                NULL);

// We no longer need subhubnamelength, so free it.
free(subhubnamelength);

// Check whether the handle was created successfully; if not, return false.
if (ExternalHub == INVALID_HANDLE_VALUE)
{
    MessageBox(NULL, "EXT handle fail", "TEST", MB_OK);
    return false;
}
}
USB_NODE_CONNECTION_ATTRIBUTES *PortConnection =
    (USB_NODE_CONNECTION_ATTRIBUTES *) malloc(sizeof(USB_NODE_CONNECTION_ATTRIBUTES));
PortConnection->ConnectionIndex = Port;

ioctlSuccess = DeviceIoControl(ExternalHub,
                               IOCTL_USB_GET_NODE_CONNECTION_ATTRIBUTES,
                               PortConnection,
                               sizeof(USB_NODE_CONNECTION_ATTRIBUTES),
                               PortConnection,
                               sizeof(USB_NODE_CONNECTION_ATTRIBUTES),
                               &kBytes,
                               NULL);

if (!ioctlSuccess)
{
    MessageBox(NULL, "DEVICE CONNECTION FAIL", "TEST", MB_OK);
    return false;
}

if (PortConnection->ConnectionStatus != DeviceConnected)
{
    printf("The Connection Status Returns: %d\n", PortConnection->ConnectionStatus);
    return false;
}
Enumerating USB on Windows 7 will fail if UAC is enabled and the application doesn't have sufficient privileges; Windows Embedded 7 may fail on a similar task as well.
It is also possible to enumerate through the registry.
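A hedged sketch of that registry-backed route using SetupAPI, which surfaces the same HKLM\SYSTEM\CurrentControlSet\Enum\USB data but can be restricted to devices that are actually present right now:

#include <windows.h>
#include <setupapi.h>
#include <stdio.h>
#pragma comment(lib, "setupapi.lib")

// Print the instance ID of every USB device currently present.
void ListPresentUsbDevices()
{
    HDEVINFO devs = SetupDiGetClassDevsW(NULL, L"USB", NULL,
                                         DIGCF_ALLCLASSES | DIGCF_PRESENT);
    if (devs == INVALID_HANDLE_VALUE)
        return;

    SP_DEVINFO_DATA info = { sizeof(SP_DEVINFO_DATA) };
    for (DWORD i = 0; SetupDiEnumDeviceInfo(devs, i, &info); ++i)
    {
        wchar_t id[512];
        if (SetupDiGetDeviceInstanceIdW(devs, &info, id, 512, NULL))
            wprintf(L"%s\n", id); // e.g. USB\VID_xxxx&PID_xxxx\serial
    }
    SetupDiDestroyDeviceInfoList(devs);
}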
You may not be able to enumerate anything. Normally the USB device is either a bus device or a child of a bus device. Since the bus device is not removable, it is automatically enumerated during boot by acpi.sys, the root device; if it is a bus device attached below some other device, that parent has to undergo an event that makes it detect the device and then enumerate it, after which Plug and Play automatically looks for the device type's driver and loads it if installed. If the USB hardware has, say, 4 physical connectors controlled from a central device, there could be a USB driver/miniport driver pair. In that case the miniport driver is a child of the USB driver, enumerated by it and attached to it. The miniport's job is to know the actual hardware; IRPs and all other I/O come from its parent. If there are several physical connections there may be additional child drivers. They all operate the hardware, and there is hot plugging: when the firmware detects the installation of a USB device, it communicates this to the miniport driver and on down the stack to the bus, and ultimately it is processed by Plug and Play. Plug and Play then has the miniport driver enumerate its device, for which it needs a hardware ID. It can then find the device's driver and load it, after which the device is installed and ready for operation.
Without knowing the device stack, it is not clear which device this refers to. Keep in mind the driver stack may not reflect the actual hardware topology. There are also logical devices that do not represent any piece of hardware. All of these would have to be taken into account, and you need to know which device corresponds to the end of the node.
One little detail: I'm not as familiar with user-mode APIs and drivers as I am with the kernel, but it seems wsprintf's second argument controls the format of the output buffer from the input. It should be %[]%[], where [] is a symbol that represents the format: it would format the first character according to the first symbol, then the second character with the second symbol. It seems you are doing something different.
A final note: it appears wsprintf() is now deprecated in favor of other APIs.