Detect connected monitor in Windows 10 - C++

I have two projectors that are usually disconnected from power. I use KODI to play videos; its C++ sources are available and I adapt them for my needs.
When running KODI, I switch screens to extended mode.
SetDisplayConfig( 0, NULL, 0, NULL, SDC_TOPOLOGY_EXTEND | SDC_APPLY );
But this only works if both screens are detected by the system. When the screens are not detected, I have to open the display settings and click "Detect" before running KODI.
How to do this detection in C++?
I tried:
bool IsDisplayConnected(int displayIndex)
{
    // Note: EnumDisplayDevices only enumerates devices the system already knows about;
    // it does not trigger a new detection pass.
    DISPLAY_DEVICE device = {};
    device.cb = sizeof(DISPLAY_DEVICE);
    return EnumDisplayDevices(NULL, displayIndex, &device, 0);
}

IsDisplayConnected(0);
IsDisplayConnected(1);
But nothing happens: the system does not try to detect the screens.

Related

Read from video capture device deadlock

I want to capture an image frame from a video capture device on Windows 7 and work directly with the RGB data.
The code works, but when the device is off the application hangs and I can't even kill the process.
There are two devices to select from: a webcam and a video capture device. With capDlgVideoSource a dialog sometimes opens and I can select one.
How do I avoid a deadlock when the device is off?
How do I put the video frame data directly into memory instead of a file? Solved by copying the frame to the clipboard and then copying it into memory (a sketch follows the code below).
How can I explicitly select a device?
The Microsoft documentation is very sparse.
capDriverConnect(hCam, X) with X = 1, 2, 3 doesn't work; 0 is the only index that works.
It seems "0" addresses both devices.
capDlgVideoSource(hCam); sometimes lets me choose a driver in a dialog, sometimes not.
Sometimes a device-selection dialog mysteriously pops up without my asking for it, and sometimes it does not.
Can anybody help me make sense of this unclear behaviour?
// create the preview window
HWND hCam = capCreateCaptureWindow(L"hoven", WS_CHILD, 0, 0, 0, 0, GetDesktopWindow(), 0);
if (hCam == NULL)
{
    printf("capCreateCaptureWindow Error !\n");
    return -1;
}

// here I get the deadlock when the device is off
if (!capDriverConnect(hCam, 0))
{
    printf("capDriverConnect Error !\n");
    return -1;
}

capGrabFrame(hCam);
capFileSaveDIB(hCam, L"shot.bmp");
capDriverDisconnect(hCam);
DestroyWindow(hCam);
return 0;
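For reference, here is a minimal sketch of the clipboard route mentioned above (the helper name GrabFrameToMemory is made up; it assumes hCam has already been created and connected as in the code above). capEditCopy places the current frame on the clipboard as a CF_DIB, which can then be copied into a buffer:
#include <windows.h>
#include <vfw.h>
#include <vector>
#pragma comment(lib, "vfw32.lib")

std::vector<BYTE> GrabFrameToMemory(HWND hCam)
{
    std::vector<BYTE> dib;

    capGrabFrame(hCam);   // grab a single frame into the capture window
    capEditCopy(hCam);    // put that frame on the clipboard as a packed DIB

    if (OpenClipboard(NULL))
    {
        HANDLE hDib = GetClipboardData(CF_DIB);   // BITMAPINFO header followed by the pixel bits
        if (hDib != NULL)
        {
            SIZE_T size = GlobalSize(hDib);
            BYTE* src = static_cast<BYTE*>(GlobalLock(hDib));
            if (src != NULL)
            {
                dib.assign(src, src + size);      // copy the whole DIB into our own memory
                GlobalUnlock(hDib);
            }
        }
        CloseClipboard();
    }
    return dib;   // empty on failure
}
The returned buffer starts with the BITMAPINFOHEADER, so the RGB data can be read from it directly without going through a file.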

DirectX 11 GetDisplayModeList() fails in Remote Desktop Connection

Good afternoon,
I have a bare-bones Direct3D app that works on the host PC, but fails to initialize DirectX when running via remote desktop.
I traced the failure to this call:
result = adapterOutput->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM, DXGI_ENUM_MODES_INTERLACED, &numModes, NULL);
if (FAILED(result))
{
    return false;
}
It fails with:
result = 0x887A0022 (DXGI_ERROR_NOT_CURRENTLY_AVAILABLE): A resource is not available at the time of the call, but may become available later.
The full initialization code is from Rastertek tutorials, found here:
http://www.rastertek.com/dx11tut03.html
Does anyone know a workaround for this problem?
Remote Desktop involves some corner-cases, and keep in mind it's sometimes using the 'Microsoft Basic Renderer' (a.k.a. the software WARP driver). See this blog post.
You can also guard your use of GetDisplayModeList in the remote scenario by detecting it in the first place. For example, the legacy DXUT sample framework did this in its enumeration code:
// mode for the current screen resolution for the remote session.
if (0 != GetSystemMetrics(SM_REMOTESESSION))
{
    DEVMODE DevMode;
    DevMode.dmSize = sizeof(DEVMODE);
    if (EnumDisplaySettings(nullptr, ENUM_CURRENT_SETTINGS, &DevMode))
    {
        NumModes = 1;
        pDesc[0].Width = DevMode.dmPelsWidth;
        pDesc[0].Height = DevMode.dmPelsHeight;
        pDesc[0].Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        pDesc[0].RefreshRate.Numerator = 0;
        pDesc[0].RefreshRate.Denominator = 0;
        pDesc[0].ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_PROGRESSIVE;
        pDesc[0].Scaling = DXGI_MODE_SCALING_CENTERED;
    }
}
You also can't use 'full-screen exclusive' mode in remote desktop:
if (GetSystemMetrics(SM_REMOTESESSION) != 0)
{
    sd.Windowed = TRUE;
}
You don't really need to use GetDisplayModeList at all. Just pick a reasonable starting size or start your window 'maximized'. See the directx-vs-templates for an approach that just uses the 'native resolution' of the desktop for both windowed and 'fake full screen'. It all works well with remote desktop too.
Another 'corner-case' with remote desktop is "raw input" for mouse. See the implementation of Mouse from the DirectX Tool Kit.
Not technically a solution, but the problem was in the refresh-rate initialization; bypassing it with a try{}-catch{} block allowed me to run with a default refresh rate via remote desktop. Everything else initialized without issues.
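A sketch of the same bypass without exceptions, assuming the Rastertek-style setup (QueryRefreshRate is a made-up helper name): if GetDisplayModeList fails, as it does over remote desktop, skip mode enumeration and keep a 0/0 refresh rate, the same default the DXUT snippet above uses:
#include <dxgi.h>
#include <vector>

// Query the refresh rate for the requested resolution, but tolerate GetDisplayModeList
// failing under Remote Desktop (DXGI_ERROR_NOT_CURRENTLY_AVAILABLE). On failure the
// numerator/denominator stay 0/0, which lets DXGI pick a default for a windowed swap chain.
void QueryRefreshRate(IDXGIOutput* adapterOutput, UINT width, UINT height,
                      UINT& numerator, UINT& denominator)
{
    numerator = 0;
    denominator = 0;

    UINT numModes = 0;
    HRESULT hr = adapterOutput->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM,
                                                   DXGI_ENUM_MODES_INTERLACED,
                                                   &numModes, NULL);
    if (FAILED(hr) || numModes == 0)
        return;   // remote session: keep the 0/0 default

    std::vector<DXGI_MODE_DESC> modes(numModes);
    hr = adapterOutput->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM,
                                           DXGI_ENUM_MODES_INTERLACED,
                                           &numModes, modes.data());
    if (FAILED(hr))
        return;

    for (const DXGI_MODE_DESC& m : modes)   // pick the rate for the requested size
    {
        if (m.Width == width && m.Height == height)
        {
            numerator = m.RefreshRate.Numerator;
            denominator = m.RefreshRate.Denominator;
        }
    }
}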

How to check if a true hardware video adapter is used

I develop an application which shows something like a video in its window. I use technologies which are described here Introducing Direct2D 1.1. In my case the only difference is that eventually I create a bitmap using
ID2D1DeviceContext::CreateBitmap
then I use
ID2D1Bitmap::CopyFromMemory
to copy raw RGB data to it and then I call
ID2D1DeviceContext::DrawBitmap
to draw the bitmap. I use the high-quality cubic interpolation mode D2D1_INTERPOLATION_MODE_HIGH_QUALITY_CUBIC for scaling to get the best picture, but in some cases (RDP, Citrix, virtual machines, etc.) it is very slow and causes very high CPU consumption, because a non-hardware video adapter is used there. So for non-hardware adapters I try to turn off the interpolation and use faster methods. The problem is that I cannot reliably check whether the system has a true hardware adapter.
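For context, the drawing path described above looks roughly like this (a sketch only; DrawFrame and its parameter names are placeholders, and bSupported is the flag set by the hardware check discussed below):
#include <d2d1_1.h>

// Fall back to linear interpolation when no real GPU was detected.
void DrawFrame(ID2D1DeviceContext* ctx, ID2D1Bitmap* frame,
               const D2D1_RECT_F& destRect, bool bSupported)
{
    const D2D1_INTERPOLATION_MODE mode = bSupported
        ? D2D1_INTERPOLATION_MODE_HIGH_QUALITY_CUBIC   // best quality when a real GPU is present
        : D2D1_INTERPOLATION_MODE_LINEAR;              // cheaper fallback for software adapters

    ctx->DrawBitmap(frame,      // bitmap filled earlier via ID2D1Bitmap::CopyFromMemory
                    &destRect,  // destination rectangle in DIPs
                    1.0f,       // opacity
                    mode,
                    NULL);      // no source rectangle: draw the whole bitmap
}
On a software adapter the linear mode keeps the CPU usage reasonable at the cost of scaling quality.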
When I call D3D11CreateDevice, I use it with D3D_DRIVER_TYPE_HARDWARE, but on virtual machines it typically returns "Microsoft Basic Render Driver", which is a software driver and does not use the GPU (it consumes CPU instead). So currently I check the vendor ID: if the vendor is AMD (ATI), NVIDIA or Intel, I use the cubic interpolation; otherwise I use the fastest method, which does not consume much CPU.
Microsoft::WRL::ComPtr<IDXGIDevice> dxgiDevice;
if (SUCCEEDED(m_pD3dDevice->QueryInterface(...)))
{
    Microsoft::WRL::ComPtr<IDXGIAdapter> adapter;
    if (SUCCEEDED(dxgiDevice->GetAdapter(&adapter)))
    {
        DXGI_ADAPTER_DESC desc;
        if (SUCCEEDED(adapter->GetDesc(&desc)))
        {
            if (desc.VendorId == 0x10DE ||   // NVIDIA
                desc.VendorId == 0x1002 ||   // AMD (0x1022 ?)
                desc.VendorId == 0x8086)     // Intel (0x163C, 0x8087 ?)
            {
                bSupported = true;
            }
        }
    }
}
It works for a physical (console) Windows session, even in virtual machines. But for RDP sessions on real machines IDXGIAdapter still reports the hardware vendor, yet the GPU is not used (I can see that via Process Hacker 2 and, for the ATI Radeon, AMD System Monitor), so I still get high CPU consumption with the cubic interpolation. For an RDP session to Windows 7 with an ATI Radeon it is about 10% higher than via the physical console.
Or am I mistaken, and RDP somehow does use GPU resources, which is why it returns a real hardware adapter via IDXGIAdapter::GetDesc?
DirectDraw
Also I looked at the DirectX Diagnostic Tool. It looks like the "DirectDraw Acceleration" info field returns exactly what I need. For physical (console) sessions it says "Enabled"; for RDP and virtual machine sessions (without hardware video acceleration) it says "Not Available". I looked at the sources and could in theory reuse the verification algorithm, but it is for DirectDraw, which I do not use in my application. I would like to use something directly linked to ID3D11Device, IDXGIDevice, IDXGIAdapter and so on.
IDXGIAdapter1::GetDesc1 and DXGI_ADAPTER_FLAG
I also tried to use IDXGIAdapter1::GetDesc1 and check the flags.
Microsoft::WRL::ComPtr<IDXGIDevice> dxgiDevice;
if (SUCCEEDED(m_pD3dDevice->QueryInterface(...)))
{
    Microsoft::WRL::ComPtr<IDXGIAdapter> adapter;
    if (SUCCEEDED(dxgiDevice->GetAdapter(&adapter)))
    {
        Microsoft::WRL::ComPtr<IDXGIAdapter1> adapter1;
        if (SUCCEEDED(adapter->QueryInterface(__uuidof(IDXGIAdapter1), reinterpret_cast<void**>(adapter1.GetAddressOf()))))
        {
            DXGI_ADAPTER_DESC1 desc;
            if (SUCCEEDED(adapter1->GetDesc1(&desc)))
            {
                // desc.Flags
                // DXGI_ADAPTER_FLAG_NONE = 0,
                // DXGI_ADAPTER_FLAG_REMOTE = 1,
                // DXGI_ADAPTER_FLAG_SOFTWARE = 2,
                // DXGI_ADAPTER_FLAG_FORCE_DWORD = 0xffffffff
            }
        }
    }
}
Information about the DXGI_ADAPTER_FLAG_SOFTWARE flag
Virtual Machine RDP Win Serv 2012 (Microsoft Basic Render Driver) -> (0x02) DXGI_ADAPTER_FLAG_SOFTWARE
Physical Win 10 (Intel Video) -> (0x00) DXGI_ADAPTER_FLAG_NONE
Physical Win 7 (ATI Radeon) -> (0x00) DXGI_ADAPTER_FLAG_NONE
RDP Win 10 (Intel Video) -> (0x00) DXGI_ADAPTER_FLAG_NONE
RDP Win 7 (ATI Radeon) -> (0x00) DXGI_ADAPTER_FLAG_NONE
For an RDP session on a real machine with a hardware adapter, Flags == 0, yet as Process Hacker 2 shows, the GPU is not used; at least on Windows 7 with an ATI Radeon I see higher CPU usage in an RDP session. So it looks like DXGI_ADAPTER_FLAG_SOFTWARE is set only for the Microsoft Basic Render Driver, and the issue is not solved.
The question
Is there a correct way to check whether a real hardware video card (GPU) is used for the current Windows session? Or perhaps it is possible to check whether a specific interpolation mode of ID2D1DeviceContext::DrawBitmap has a hardware implementation and uses the GPU for the current session?
UPD
The topic is not about detecting RDP or Citrix sessions, nor about detecting whether the application runs inside a virtual machine. I already have all those checks and use linear interpolation in those cases. The topic is about detecting whether a real GPU is used for the current Windows session to display the desktop. I am looking for a more sophisticated solution that makes the decision using features of DirectX and DXGI.
If you want to detect the Microsoft Basic Renderer, the best option is to use its VID/PID combo:
ComPtr<IDXGIDevice> dxgiDevice;
if (SUCCEEDED(device.As(&dxgiDevice)))
{
    ComPtr<IDXGIAdapter> adapter;
    if (SUCCEEDED(dxgiDevice->GetAdapter(&adapter)))
    {
        DXGI_ADAPTER_DESC desc;
        if (SUCCEEDED(adapter->GetDesc(&desc)))
        {
            if ((desc.VendorId == 0x1414) && (desc.DeviceId == 0x8c))
            {
                // WARNING: Microsoft Basic Render Driver is active.
                // Performance of this application may be unsatisfactory.
                // Please ensure that your video card is Direct3D10/11 capable
                // and has the appropriate driver installed.
            }
        }
    }
}
See Microsoft Docs and Anatomy of Direct3D 11 Create Device
You will probably find for testing/debugging that you don't want to explicitly block these scenarios, but you do want to provide some kind of warning or notice feedback to the user that they are using software rather than hardware rendering.
Remote Desktop detection from Win32 classic desktop applications is better done directly via GetSystemMetrics( SM_REMOTESESSION ).
See Microsoft Docs
Answering a 3-year-old question, as I struggled with this myself.
I had to go through the registry. The first step is to find the adapter LUID in the registry in order to get the adapter GUID:
private string GetAdapterGuid(long luid)
{
    var directXRegistryKey = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\Microsoft\DirectX");
    if (directXRegistryKey == null)
        return "";

    var subKeyNames = directXRegistryKey.GetSubKeyNames();
    foreach (var subKeyName in subKeyNames)
    {
        var subKey = directXRegistryKey.OpenSubKey(subKeyName);
        if (subKey.GetValueKind("AdapterLuid") != RegistryValueKind.QWord)
            continue;

        var luidValue = (long)subKey.GetValue("AdapterLuid");
        if (luidValue == luid)
            return subKeyName;
    }
    return "";
}
Once you have that GUID, you can look up the details of the graphics card in HKLM like this. If the adapter is virtual, the service name will be "INDIRECTKMD":
private bool IsVirtualAdapter(string adapterGuid)
{
    var videoRegistryKey = Registry.LocalMachine.OpenSubKey($@"SYSTEM\CurrentControlSet\Control\Video\{adapterGuid}\Video");
    if (videoRegistryKey == null)
        return false;

    if (videoRegistryKey.GetValueKind("Service") != RegistryValueKind.String)
        return false;

    var serviceName = (string)videoRegistryKey.GetValue("Service");
    return serviceName.ToUpper() == "INDIRECTKMD";
}
Checking the service name felt easier than parsing the DeviceDesc value.
My use case involved having the GUID ready, so I split the logic into two functions; you could merge them into one.
This also only detects RDP/MSTSC; additional service names might be needed for other virtual adapters. Or you could try to detect only NVIDIA/AMD/Intel driver names... up to you.
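For completeness, the adapter LUID that the lookup above expects can be obtained on the C++ side from the same IDXGIAdapter used earlier in this thread (a sketch; it assumes the AdapterLuid QWORD in the registry is simply the LUID's high and low parts combined):
#include <windows.h>
#include <dxgi.h>

// Read the adapter LUID from DXGI and pack it into the 64-bit value stored
// under the AdapterLuid QWORD described above.
LONGLONG GetAdapterLuid(IDXGIAdapter* adapter)
{
    DXGI_ADAPTER_DESC desc = {};
    if (FAILED(adapter->GetDesc(&desc)))
        return 0;

    LARGE_INTEGER li;
    li.LowPart  = desc.AdapterLuid.LowPart;
    li.HighPart = desc.AdapterLuid.HighPart;
    return li.QuadPart;
}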

Getting started with Thorlabs APT

I'm hoping someone else out there has experience programming an APT - DC Servo controller.
My client wants a custom solution, so using the ActiveX control isn't viable.
I think once I can figure out how to send a basic message, I will be able to follow the API well enough, but I'm having difficulties getting started... and the documentation doesn't seem to clearly state how to actually send messages to the controller.
I.e., am I supposed to be using the FTDI interface, with FT_Write/FT_Read calls, to operate the device?
I've run the following code, which goes through the initial setup but fails on the very last line, where I try to flash the LED.
// The following is per the user manual for the Thorlabs device.
ftHandle = FT_W32_CreateFile(SerialNumber.c_str(),
                             GENERIC_READ | GENERIC_WRITE,
                             0,
                             0,
                             OPEN_EXISTING,
                             FILE_ATTRIBUTE_NORMAL | FILE_FLAG_OVERLAPPED | FT_OPEN_BY_SERIAL_NUMBER,
                             0); // Open device by serial number
assert(ftHandle != INVALID_HANDLE_VALUE);

// Set baud rate to 115200.
const int uBaudRate = 115200;
auto ftStatus = FT_SetBaudRate(ftHandle, (ULONG)uBaudRate);
assert(ftStatus == FT_OK);

// 8 data bits, 1 stop bit, no parity.
ftStatus = FT_SetDataCharacteristics(ftHandle, FT_BITS_8, FT_STOP_BITS_1, FT_PARITY_NONE);
assert(ftStatus == FT_OK);

// Pre-purge dwell 50 ms.
Sleep(50);
// Purge the device.
ftStatus = FT_Purge(ftHandle, FT_PURGE_RX | FT_PURGE_TX);
assert(ftStatus == FT_OK);
// Post-purge dwell 50 ms.
Sleep(50);

ftStatus = FT_ResetDevice(ftHandle);
assert(ftStatus == FT_OK);

// Set flow control to RTS/CTS.
ftStatus = FT_SetFlowControl(ftHandle, FT_FLOW_RTS_CTS, 0, 0);
// Set RTS.
ftStatus = FT_SetRts(ftHandle);
assert(ftStatus == FT_OK);

// Let's flash the LED: MGMSG_MOD_IDENTIFY.
BYTE buf[6] = { 0x23, 0x2, 0, 0, 0x21, 0x1 };
DWORD written = 0;
/*******************/
ftStatus = FT_Write(ftHandle, buf, (DWORD)6, &written); // 4 = FT_IO_ERROR
assert(ftStatus == FT_OK); // this is where I'm failing
/*******************/
For reference, I'm building a 32-bit application, working on a 64-bit laptop.
Fixed by using FT_OpenEx instead of FT_W32_CreateFile.
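A minimal sketch of that fix, assuming the same SerialNumber string and the D2XX header (ftd2xx.h) used by the setup code above:
// Open the controller by serial number with FT_OpenEx instead of FT_W32_CreateFile.
FT_HANDLE ftHandle = NULL;
FT_STATUS ftStatus = FT_OpenEx((PVOID)SerialNumber.c_str(),
                               FT_OPEN_BY_SERIAL_NUMBER,
                               &ftHandle);
assert(ftStatus == FT_OK);
// ...then continue with FT_SetBaudRate, FT_SetDataCharacteristics, etc. as before.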

Change 2nd Monitor Display Setting to Duplicate

I am attempting to programmatically make the 2nd monitor show a duplicate display. My function below should change the 2nd monitor's display to 'duplicate display', i.e., make the 2nd monitor display everything that is on the 1st/primary monitor.
My problem: when I run my function it successfully finds the 2nd monitor and changes that monitor's display x coordinate to 0 (i.e. the left edge of the primary monitor screen) by changing the DEVMODE dmPosition.x property. Both of my monitors refresh themselves (they go black and then reshow their screen), but the 2nd monitor still has the extended display instead of a duplicate display.
Any ideas how I can make my 2nd Monitor have a duplicate display?
Some relevant information:
- My 2nd monitor is an LCD TV and is connected to my laptop via HDMI
- My function code is exactly the same as the example on this MSDN page that describes how to attach a 2nd monitor without having to restart. I have changed LINE 30 though.
- I am aware I can change the display on Windows 7 using one WinAPI function call (sketched below for reference), but I need my program to work on Windows 2000 and up.
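(That Windows 7+ one-liner is presumably SetDisplayConfig with the clone topology, the counterpart of the SDC_TOPOLOGY_EXTEND call from the first question; shown only for reference, since it is not available on Windows 2000/XP:)
// Windows 7 and later only: switch the whole desktop to the clone (duplicate) topology.
SetDisplayConfig(0, NULL, 0, NULL, SDC_TOPOLOGY_CLONE | SDC_APPLY);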
// From http://support.microsoft.com/kb/308216/en-gb Title: You must restart...
BOOL TVManager::AddUnattachedDisplayDeviceToDesktop()
{
    DWORD DispNum = 0;
    DISPLAY_DEVICE DisplayDevice;
    DEVMODE defaultMode;
    HDC hdc;
    int nWidth;
    BOOL bFoundSecondary = FALSE;

    hdc = GetDC(0);
    nWidth = GetDeviceCaps(hdc, HORZRES);
    ReleaseDC(0, hdc);

    // Initialize DisplayDevice.
    ZeroMemory(&DisplayDevice, sizeof(DisplayDevice));
    DisplayDevice.cb = sizeof(DisplayDevice);

    // Get display devices.
    while ((EnumDisplayDevices(NULL, DispNum, &DisplayDevice, 0)) && (bFoundSecondary == FALSE))
    {
        ZeroMemory(&defaultMode, sizeof(DEVMODE));
        defaultMode.dmSize = sizeof(DEVMODE);
        if (!EnumDisplaySettings((LPTSTR)DisplayDevice.DeviceName, ENUM_REGISTRY_SETTINGS, &defaultMode))
        {
            printf("1\n");
            return FALSE; // Store default failed
        }

        if (!(DisplayDevice.StateFlags & DISPLAY_DEVICE_PRIMARY_DEVICE))
        {
            // Found the first secondary device.
            _tprintf(_T("Found the first secondary device: Name: %s, Pos: %d, Width: %d\n"), DisplayDevice.DeviceName, defaultMode.dmPosition.x, nWidth);
            bFoundSecondary = TRUE;
            defaultMode.dmPosition.x = 0; // LINE CHANGED: ONLY CHANGE FROM MSDN'S CODE
            defaultMode.dmFields = DM_POSITION;
            ChangeDisplaySettingsEx((LPTSTR)DisplayDevice.DeviceName, &defaultMode, NULL, CDS_NORESET | CDS_UPDATEREGISTRY, NULL);
            _tprintf(_T("Check for error: %u\n"), GetLastError()); // prints "Check for error: 0", i.e. no error occurred

            // A second call to ChangeDisplaySettings updates the monitor.
            ChangeDisplaySettings(NULL, 0);
            _tprintf(_T("Check for error: %u\n"), GetLastError()); // prints "Check for error: 0", i.e. no error occurred
        }

        // Reinitialize DisplayDevice.
        ZeroMemory(&DisplayDevice, sizeof(DisplayDevice));
        DisplayDevice.cb = sizeof(DisplayDevice);
        DispNum++;
    } // End while over the display devices.
    return TRUE;
}
Windows XP and earlier uses a different display driver model (XPDM) from Vista and later (WDDM). Mirroring on XPDM depends very much on your graphics card vendor. The general idea is that for extending the desktop, you provide an extend driver; for mirroring a portion of the desktop, you provide a mirror driver.
In most cases, each extend driver is responsible for one output on your graphics card. Let's say you have a dual-DVI card; then you should see two extend drivers in your Device Manager, each responsible for one of the DVI ports. When you want your monitor to extend the desktop, you enable the extend driver and give it a sensible location.
Mirroring is trickier. This is where the behaviour can vary a bit between the different card vendors. From the perspective of the OS, this is what's happening. The extend driver associated with the graphics card port is disabled. The mirror driver is enabled if it was not enabled already. The mirror driver is then placed at (0, 0). Then some trickery happens inside your graphics card/driver and the monitor is showing what's inside the mirror driver's screen buffer.
In order to set a monitor into mirror mode on XPDM, you need to find the extend driver it is currently showing content from and disable it. This may be all you have to do. Some vendors will automatically do the rest for you and start mirroring the primary display. Some vendors will do whatever your monitor was doing last before it was put into extend mode. If you find your monitor not showing anything, you can try to enable the mirror driver. If you manage to find the mirror driver and enable it, what happens after that is anyone's guess. There isn't a universal way to wire up a monitor to a mirror driver.