I'm trying to use the DirectX Desktop Duplication API. I tried running the examples from
http://www.codeproject.com/Tips/1116253/Desktop-Screen-Capture-on-Windows-via-Windows-Desk
And from
https://code.msdn.microsoft.com/windowsdesktop/Desktop-Duplication-Sample-da4c696a
Both of these are examples of screen capture using DXGI.
I have an NVIDIA GeForce GTX 1060 with Windows 10 Pro on the machine, with an Intel Core i7-6700HQ processor.
These examples work perfectly fine when NVIDIA Control Panel > 3D Settings is set to "Auto-select processor".
However, if I manually set the setting to the NVIDIA graphics card, the samples stop working.
The error occurs at the following line:
//IDXGIOutput1* DxgiOutput1
hr = DxgiOutput1->DuplicateOutput(m_Device, &m_DeskDupl);
The error in hr (HRESULT) is DXGI_ERROR_UNSUPPORTED (0x887A0004).
I'm new to DirectX and I don't know what the issue is here. Is DirectX desktop duplication not supported on NVIDIA?
If that's the case, is there a way to select a particular processor at the start of the program, so that the program can run with any settings?
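(For reference, explicitly selecting an adapter at device-creation time would look roughly like the sketch below, using the standard DXGI enumeration calls. Error handling is omitted and the helper name is illustrative; note that on a hybrid system DuplicateOutput may still refuse an adapter that does not own the output, which is exactly what's in question.)

// Sketch: create the D3D11 device on an explicitly chosen adapter.
// CreateDeviceOnAdapter is an illustrative helper, not part of the samples.
#include <dxgi1_2.h>
#include <d3d11.h>

ID3D11Device* CreateDeviceOnAdapter(UINT adapterIndex)
{
    IDXGIFactory1* factory = nullptr;
    CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&factory);

    IDXGIAdapter1* adapter = nullptr;
    if (factory->EnumAdapters1(adapterIndex, &adapter) == DXGI_ERROR_NOT_FOUND) {
        factory->Release();
        return nullptr;
    }

    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc); // desc.Description names the GPU (Intel vs. NVIDIA)

    // An explicit adapter requires D3D_DRIVER_TYPE_UNKNOWN.
    ID3D11Device* device = nullptr;
    D3D11CreateDevice(adapter, D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, nullptr);

    adapter->Release();
    factory->Release();
    return device;
}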
Edit:
After looking around, I asked the developer (Evgeny Pereguda) of the second sample project on codeproject.com.
Here's a link to the discussion:
https://www.codeproject.com/Tips/1116253/Desktop-Screen-Capture-on-Windows-via-Windows-Desk?msg=5319978#xx5319978xx
I'm posting a screenshot of the discussion on codeproject.com in case the original link goes down.
I also found an answer on Stack Overflow which unequivocally suggested that this cannot be done with the Desktop Duplication API, referring to a ticket on Microsoft's support site: https://support.microsoft.com/en-us/help/3019314/error-generated-when-desktop-duplication-api-capable-application-is-ru
Quote from the ticket:
This issue occurs because the DDA does not support being run against the discrete GPU on a Microsoft Hybrid system. By design, the call fails together with error code DXGI_ERROR_UNSUPPORTED in such a scenario.
However, there are some applications which efficiently duplicate the desktop on Windows in both modes (integrated and discrete graphics) on my machine (https://www.youtube.com/watch?v=bjE6qXd6Itw).
I have looked into the installation folder of Virtual Desktop on my machine and can see the following DLLs of interest:
SharpDX.D3DCompiler.dll
SharpDX.Direct2D1.dll
SharpDX.Direct3D10.dll
SharpDX.Direct3D11.dll
SharpDX.Direct3D9.dll
SharpDX.dll
SharpDX.DXGI.dll
SharpDX.Mathematics.dll
This is probably an indication that the application uses DXGI to duplicate the desktop, or maybe the application is capable of selecting a specific processor before it starts.
Anyway, the question remains: is there any other efficient method of duplicating the desktop in both modes?
The likely cause is an internal limitation of the Desktop Duplication API, described in "Error generated when Desktop Duplication API-capable application is run against discrete GPU":
... when the application tries to duplicate the desktop image against the discrete GPU on a Microsoft Hybrid system, the application may not run correctly, or it may generate one of the following errors:
Failed to create windows swapchain with 0x80070005
CDesktopCaptureDWM: IDXGIOutput1::DuplicateOutput failed: 0x887a0004
The article does not suggest any workaround other than using a different GPU (without more specific detail as to whether that is at all achievable programmatically):
To work around this issue, run the application on the integrated GPU instead of on the discrete GPU on a Microsoft Hybrid system.
Microsoft introduced a registry value that can be set programmatically to control which GPU an application runs on. Full answer here.
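For illustration, a minimal sketch of setting that value from code, assuming the per-application UserGpuPreferences key introduced in Windows 10 1803 (the helper name is mine):

#include <windows.h>

// Illustrative helper: registers a GPU preference for the given executable.
// GpuPreference: 1 = power saving (integrated), 2 = high performance (discrete).
bool SetGpuPreference(const wchar_t* exePath, int preference)
{
    wchar_t data[32];
    swprintf_s(data, L"GpuPreference=%d;", preference);
    return RegSetKeyValueW(HKEY_CURRENT_USER,
                           L"Software\\Microsoft\\DirectX\\UserGpuPreferences",
                           exePath, REG_SZ, data,
                           (DWORD)((wcslen(data) + 1) * sizeof(wchar_t)))
           == ERROR_SUCCESS;
}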
OpenGL and Windows Remote don't play along nicely.
Solutions for this are dependent on the use case and answers are fragmented across the vast depths of the net.
This is a write-up I wish existed when I started researching this, both for coders and non-coders.
Problem:
An RDP session of Windows does not expose the graphics card, at least not directly. For instance, you cannot change the desktop resolution, and graphics card drivers usually just disable their settings menus. Creating an OpenGL context higher than v1.1 fails because of this. The advice often given in support IRCs, "don't use Windows Remote", is unfortunately not an option for many: in many corporate environments Windows Remote is a constantly used tool, and an app has to work there as well.
Non-coder workarounds
You can start the OpenGL program while it still sees the graphics card, let it create an OpenGL context, and then connect via Windows Remote. This always works, as Windows Remote just transfers the window content. This can be accomplished by:
A batch script that closes the session and starts the program, allowing you to connect to the program already running (typically a call to tscon %SESSIONNAME% /dest:console, which hands the session back to the local console before launching the program). (Source)
Using VNC or another tool to remote into the machine, start the program, and then switch to Windows Remote. (Simple VNC program, also with a portable client)
Coder workarounds
(Only for OpenGL ES) Translate OpenGL to DirectX. DirectX works under Windows Remote flawlessly and even has a software-rendering fallback built into DX11 if something fails.
Use the ANGLE project to do this at run time. This is what Qt officially suggests you do and how Chrome and Firefox implement WebGL. (Source)
Switch to software rendering as a fallback. Some CAD software, like 3dsMax, does this for instance:
Under SDL2 you can use SDL_CreateSoftwareRenderer; see the sketch after this list. (Source)
Under GLFW, version 3.3 will ship OSMesa (Mesa's off-screen rendering); in the meantime you can build the GitHub version with -DGLFW_USE_OSMESA=TRUE, but I personally still struggle to get that running. (Source)
Directly use Mesa's LLVMpipe for a fast software OpenGL implementation. (Source)
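A minimal sketch of the SDL2 software-renderer fallback mentioned above (assumes SDL2 is installed; error handling omitted):

// Sketch: render through SDL2's CPU renderer, no GPU driver required.
#include <SDL.h>

int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* win = SDL_CreateWindow("software fallback",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Surface* surface = SDL_GetWindowSurface(win);
    SDL_Renderer* ren = SDL_CreateSoftwareRenderer(surface);

    SDL_SetRenderDrawColor(ren, 32, 32, 32, 255);
    SDL_RenderClear(ren);
    SDL_UpdateWindowSurface(win); // blit the software-rendered surface

    SDL_Delay(2000);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}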
Misc:
Use OpenGL 1.1: Windows has a built-in implementation of OpenGL 1.1 and earlier. Some game engines have a built-in fallback to this and thus work under Windows Remote.
Apparently there is a middleware that allows even OpenGL 4 over Windows Remote, but it's part of a bigger package and is a commercial solution. (Source)
Any other solutions or corrections are greatly appreciated.
Nvidia now provides OpenGL-accelerated Remote Desktop for GeForce: https://www.khronos.org/news/permalink/nvidia-provides-opengl-accelerated-remote-desktop-for-geforce-5e88fc2035e342.98417181
According to this article, it seems that RDP now handles newer versions of Direct3D and OpenGL on Windows 10 and Windows Server 2016, but this is disabled by default by Group Policy.
I suppose that for performance reasons the hardware graphics card is disabled by default, and RDP uses a software-emulated graphics driver that provides only some baseline features.
I stumbled upon this problem when trying to run Ultimaker Cura over standard Remote Desktop from a Windows 10 client to a Windows 10 host. Cura complained "cannot initialize OpenGL 2.0 context". I also noticed that Repetier-Host's preview window ran terribly slowly, and Repetier detected only an OpenGL 1.1 card. That pretty much fits the "only baseline features" description.
By running gpedit.msc, navigating to
Local Computer Policy\Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Remote Session Environment
and enabling
Use hardware graphics adapters for all Remote Desktop Services sessions
I was able to run Ultimaker Cura with no issues. Repetier-Host now reports OpenGL 4.6, and everything finally runs as fast as it should.
Note from genpfault:
As usual, this policy is kept in the HKLM registry hive, under
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services
Set the REG_DWORD value bEnumerateHWBeforeSW to 1 to turn on GPU use in RDP.
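The same setting from code, as a sketch (it writes HKLM, so the process must run elevated; the function name is mine):

#include <windows.h>

// Illustrative helper: sets the policy value genpfault describes above.
bool EnableHwGpuForRdp()
{
    DWORD enable = 1; // 1 = enumerate hardware GPUs before the software driver
    return RegSetKeyValueW(HKEY_LOCAL_MACHINE,
               L"SOFTWARE\\Policies\\Microsoft\\Windows NT\\Terminal Services",
               L"bEnumerateHWBeforeSW", REG_DWORD,
               &enable, sizeof(enable)) == ERROR_SUCCESS;
}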
OpenGL works great over RDP with professional Nvidia cards, without anything like virtual machines or RemoteFX. For Quadro cards (Quadro 4000 tested) you need driver 377.xx. For the M60 you can use the same driver. If you want to use the latest driver with the M60, you have to change the driver mode to WDDM (see c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.1.pdf). There may be licensing problems in this last case.
Some people recommend using tscon.exe if you can: https://stackoverflow.com/a/45723167/32453, or using a scheduler to do it on native hardware: https://stackoverflow.com/a/41839102/32453, or creating a group policy:
https://community.esri.com/thread/225251-enabling-gpu-rendering-on-windows-server-2016-windows-10-rdp
You could also try copying opengl32.dll (or opengl64.dll) into your executable's directory (https://blender.stackexchange.com/a/73014); a newer version of the DLL is available at https://fdossena.com/?p=mesa/index.frag
Remote Desktop and OpenGL do not play well together. When you connect to a Windows box, the OpenGL driver is unloaded and you end up with software emulation of OpenGL.
When you disconnect from the Windows box, the OpenGL driver is not reloaded. This causes issues when you are running tests on the machine, as you have to physically log in to the machine to reset the drivers.
The solution I ended up using was to:
Disable Remote Desktop.
Delete all other remote desktop software, because if it is used to log in remotely, the currently loaded set of drivers may be messed up.
Install NoMachine.
NoMachine is my personal favourite (when it does not play up) for a number of reasons:
Hardware-accelerated compression (video of the desktop).
Works on Windows and Linux.
Works well on low-bandwidth connections, especially if the client and server have the necessary hardware for compressing the data stream.
On Linux you get your desktop as you last left it when you were sitting in front of the machine.
On Windows it does not affect OpenGL.
Currently free for personal and commercial use; do check the licence in case that has changed.
When NoMachine plays up it hogs the CPU, but this happens rarely, and it is in active development.
Others to consider:
TurboVNC
TightVNC
TeamViewer - only free for personal use.
I'm developing a small video capture library on top of videoInput (a thin wrapper around DirectShow), and lately I encountered a tricky issue.
The library captures and saves video frames in its internal format, using code to this effect:
if (VI->setupDevice(m_deviceIndex, width, height)) {
    // ... checks for frame size etc.
    // ...
    // the two flags ask videoInput to swap red/blue and flip the image
    auto pixels = VI->getPixels(m_deviceIndex, true, true);
}
This code was built in VS 2017 using vc140/sdk8.1 and it worked fine on a range of different machines running Windows 7, 8.1 and 10, which included typical office desktops and laptops, several development machines, a highly restrictive production desktop and VirtualBox guests.
Then we discovered that on one Windows 7 computer videoInput yields black frames (null pixels), even though the camera itself works properly with other applications. We tested several different camera models to the same effect.
I built the DirectShow samples from the official Microsoft repository and discovered that on startup they fail with hr=0x80070005 (access denied), regardless of whether they run elevated. Here's where the error occurs (amcap.cpp, line 787).
Since the official samples supposedly work out of the box, I suspected a compatibility bug in later versions of the SDK/MSVC and tried compiling with VS 2010, but that didn't help. I also tried different capture back ends, using the Windows Media Foundation sample from the same repository as well as OpenCV with FFmpeg, all to the same effect.
Then we discovered another machine, running Windows 10, which had exactly the same problem, indicating that this is not an issue of backwards compatibility. Meanwhile, the same builds were working fine on my test machines, and third-party applications like Webcamoid were working fine on the problematic PCs.
My best guess is that there's some kind of compatibility flag or permission which has to be granted, since the camera works just fine with third-party software, but I have no idea where to look for it. And Windows 7 doesn't have camera permission settings to begin with, if I'm not mistaken.
Now, does anyone have any idea what on Earth might be wrong? I would greatly appreciate any advice.
Thanks.
Problem solved.
The problem turned out to be Kaspersky Endpoint Security, which has an option to restrict video streaming for unknown applications. This is why camera apps from the Store worked fine (they are trusted by default) while our application didn't.
Caveat emptor.
I'm new to CUDA, but I have spent some time on computing, and I have GeForces at home and a Tesla (same generation) in the office.
At home I have two GPUs installed in the same computer: one is a GK110 (compute capability 3.5), the other a GF110 (compute capability 2.0). I prefer to use the GK110 for computation tasks ONLY and the GF110 for display, UNLESS I tell it to do computation. Is there a way to do this through a driver setting, or do I still need to rewrite some of my code?
Also, if I understand correctly, if the display port of the GK110 is not connected, then the annoying Windows timeout detection will not try to reset it even if the computation runs for a very long time?
By the way, my CUDA code is compiled for both compute_35 and compute_20 so it can run on both GPUs, but I plan to use features exclusive to the GK110, so in the future the code may not be able to run on the GF110 at all. The OS is Windows 7.
With a GeForce GTX Titan (or any GeForce product) on Windows, I don't believe there is a way to prevent the GPU from appearing in the system in WDDM mode, which means that Windows will build a display driver stack on it even if the card has no physical display attached. So you may be stuck with the Windows TDR mechanism; you could try experimenting with it to confirm that. (The Windows TDR behavior can be modified via registry hacking.)
Regarding steering CUDA tasks to the GTX Titan, the display driver control panel should have a selectable setting for this. It may be in the "Manage 3D settings" area or some other area depending on which driver you have. When you find the appropriate settings area, there will be a selection entitled something like CUDA - GPUs which will probably be set to "All". If you change the "Global Presets" selection to "Base Profile" you should be able to change this CUDA-GPUs setting. Clicking on it should give you a selection of "All" or a set of checkboxes for each GPU detected. If you uncheck the GF110 device and check the GK110 device, then CUDA programs that do not select a particular GPU via cudaSetDevice() should be steered to the GK110 device based on this checkbox selection. You may want to experiment with this as well to confirm.
Other than that, as mentioned in the comments, you can always use a programmatic method: query the device properties and select the device that reports itself as a cc 3.5 device.
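A short sketch of that approach with the standard CUDA runtime calls (the function name is illustrative):

#include <cuda_runtime.h>

// Picks the first device with compute capability >= 3.5 (the GK110 here);
// returns its index, or -1 if none is found.
int selectCc35Device()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        if (prop.major > 3 || (prop.major == 3 && prop.minor >= 5)) {
            cudaSetDevice(i); // subsequent CUDA calls in this thread use this GPU
            return i;
        }
    }
    return -1;
}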
I have just recently installed Windows 8, and I tried to compile and build a simple C++ game project in VS 2010, but it runs at 5 fps. On Windows 7 it runs at a solid 60 fps. Nothing has changed in the code; there is just horrible slowdown.
I have updated my video drivers, but there is still horrible lag. I thought the problem had to do with compatibility issues between Windows 8 and OpenGL, but I can't find anything to confirm this. I was wondering if anyone else has had this problem, and if you have solved it.
I would recommend you test your graphics card and drivers first. All sorts of driver issues can arise when you upgrade operating systems. One of the best tests is to download Cinebench and see how it performs; Cinebench will evaluate your OpenGL performance. If you get poor results, then you know it's a hardware/driver issue and not an issue with your application.
If the Cinebench results are good, then you can move on to the recommendations made by @Robert Rouhani (comments).
http://www.maxon.net/products/cinebench/overview.html
What sort of video card do you have in the Win8 machine?
If it's a laptop you might be battling nVidia Optimus (or an equivalent technology). Basically, programs have to tell the OS in advance that they want to use the video card, or they get defaulted to the low-power GPU embedded in the CPU (note: an over-simplification).
If this is the case, there are some options in the nVidia control panel that let you create a profile telling the OS to run your app with the discrete GPU rather than the embedded one.
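If you control the application's source, there are also exported globals, documented by NVIDIA for Optimus (AMD has an equivalent), that ask the driver for the discrete GPU; they must live in the executable itself, not in a DLL:

#include <windows.h>

// Hybrid-graphics drivers check these exports at process start.
extern "C" {
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;        // NVIDIA Optimus
    __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;  // AMD PowerXpress
}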
Hi everyone. I am a newbie to CUDA. I am wondering whether CUDA can be used in combination with ActiveX technology, so that the resulting OCX or DLL file can be used in a web page.
For example, with CUDA we could easily simulate a fluid of particles; if we combine CUDA and ActiveX technology, we could see those fluid particles in a web page. Am I right?
What's more, will there be problems when I simulate lots of particles?
Thank you very much.
I think that if ActiveX could access your GPU at such a low level as running your arbitrary CUDA code, it would be a big security risk. If, on the other hand, ActiveX could perform some of its computations on the GPU through some higher-level interface, that would be safer, but it is Microsoft who would have to implement it, not you.
A trusted ActiveX control can do anything. So yes, you could theoretically spin up the CUDA runtime and go to town with the GPU. You would need to distribute the CUDA runtime with the ActiveX control, but everything else you need would already be installed, assuming the user has an nVidia GPU. FWIW, distributing cudart.dll is permissible per the EULA of the CUDA Developer Toolkit.
Since, last I read, you cannot statically link against cudart.dll, you would need to distribute that dependency along with your ActiveX control using a CAB file. Details on creating CAB files can be found on MSDN. Then again, that forum post is from 2008, so maybe newer versions of cudart can be statically linked now; you might want to give it a try.
First and foremost, it runs on the client machine, which means the client needs to have a CUDA-enabled graphics card (nVidia only).