OpenGL (v >= 3) application on a remote machine - C++

Is there a way to start an application with OpenGL >= 3 on a remote machine?
Local and remote machine run on Linux.
More precisely, I have the following problem:
I have an application that uses Qt for GUI stuff and OpenGL for 3D rendering.
I want to start this application on several remote machines because the program does some very time-consuming computation.
Thus, I created a version of my program that does not raise a window. I use QGuiApplication, QOffscreenSurface, and a framebuffer object as the render target.
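Roughly, the offscreen setup looks like this (simplified sketch; error handling omitted, and the FBO size and requested version are just placeholders):

    #include <QGuiApplication>
    #include <QOffscreenSurface>
    #include <QOpenGLContext>
    #include <QOpenGLFramebufferObject>
    #include <QSurfaceFormat>

    int main(int argc, char *argv[])
    {
        QGuiApplication app(argc, argv);

        QSurfaceFormat fmt;
        fmt.setVersion(4, 4);                        // request a >= 3 core profile
        fmt.setProfile(QSurfaceFormat::CoreProfile);

        QOpenGLContext ctx;
        ctx.setFormat(fmt);
        ctx.create();

        QOffscreenSurface surface;
        surface.setFormat(ctx.format());
        surface.create();
        ctx.makeCurrent(&surface);

        // Render target: a framebuffer object instead of a window.
        QOpenGLFramebufferObject fbo(1024, 1024, QOpenGLFramebufferObject::CombinedDepthStencil);
        fbo.bind();
        // ... issue GL commands, read the result back with fbo.toImage() ...
        fbo.release();

        return 0;
    }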
BUT: When I start the application on a remote machine (ssh -Y remotemachine01 myapp) I only get OpenGL version 2.1.2. When I start the application locally (on the same machine) I get OpenGL 4.4. I suppose the X forwarding is the problem.
So I need a way to avoid X forwarding.

Right now there's no clean solution, sorry.
GLX (the OpenGL extension to X11 which does the forwarding stuff) is only specified up to OpenGL-2.1, hence your inability to forward an OpenGL-3 context. This is actually a ridiculous situation, because the "OpenGL-3 way" is much better suited for indirect rendering than old-fashioned OpenGL-2.1 and earlier. Khronos really needs to get their act together and specify GLX-3.
Your best bet would be either to fall back to a software renderer on the remote side combined with some form of X compression, or to use Xpra backed by an on-GPU X11 server; however, that works for only a single user at a time.
In the not-too-far future the upcoming Linux graphics driver models will allow remote GPU rendering by multiple users sharing graphics resources. But we're not there yet.

Related

Current state and solutions for OpenGL over Windows Remote [closed]

OpenGL and Windows Remote don't play along nicely.
Solutions for this are dependent on the use case and answers are fragmented across the vast depths of the net.
This is a write-up I wish existed when I started researching this, both for coders and non-coders.
Problem:
An RDP session on Windows does not expose the graphics card, at least not directly. For instance you cannot change the desktop resolution, and graphics card drivers usually just disable their settings menus. Creating an OpenGL context higher than v1.1 fails because of this. The advice often given, especially in support IRCs, of "Don't use Windows Remote" is unfortunately not an option for many: in many corporate environments Windows Remote is a constantly used tool, and an app has to work there as well.
Non-Coder workarounds
You can start the OpenGL program with access to the graphics card, let it create an OpenGL context, and then connect via Windows Remote. This always works, as Windows Remote just transfers the window content. This can be accomplished by:
A batch script that closes the session and starts the program, allowing you to connect to the program already running. (Source)
Using VNC or another tool to remote into the machine, start the program, and then switch to Windows Remote. (Simple VNC program, also with a portable client)
Coder workarounds
(Only for OpenGL ES) Translate OpenGL to DirectX. DirectX works flawlessly under Windows Remote and even has a software rendering fallback built into DX11 if something fails.
Use the ANGLE project to do this at run-time. This is what Qt officially suggests you do and how Chrome and Firefox implement WebGL. (Source)
Switch to software rendering as a fallback. Some CAD software like 3dsMax does this, for instance:
Under SDL2 you can use SDL_CreateSoftwareRenderer; see the sketch after this list. (Source)
GLFW version 3.3 will ship with OSMesa support (Mesa's off-screen rendering); in the meantime you can build the GitHub version with -DGLFW_USE_OSMESA=TRUE, but I personally still struggle to get that running. (Source)
Directly use Mesa's LLVMpipe for a fast software OpenGL implementation. (Source)
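As a concrete example of the SDL2 item above, a minimal software-renderer setup might look like this (a sketch; the surface size and format are arbitrary, and this uses SDL's 2D render API rather than an OpenGL context):

    #include <SDL.h>

    int main(int, char**)
    {
        SDL_Init(SDL_INIT_VIDEO);

        // Render into a plain memory surface instead of a GPU-backed window.
        SDL_Surface *target = SDL_CreateRGBSurfaceWithFormat(
            0, 640, 480, 32, SDL_PIXELFORMAT_RGBA32);
        SDL_Renderer *renderer = SDL_CreateSoftwareRenderer(target);

        SDL_SetRenderDrawColor(renderer, 32, 32, 32, 255);
        SDL_RenderClear(renderer);
        // ... draw, then read pixels out of `target` or save it with SDL_SaveBMP ...

        SDL_DestroyRenderer(renderer);
        SDL_FreeSurface(target);
        SDL_Quit();
        return 0;
    }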
Misc:
Use OpenGL 1.1: Windows has a built-in implementation of OpenGL 1.1 and earlier. Some game engines have a built-in fallback to this and thus work under Windows Remote (a version-check sketch follows below).
Apparently there is middleware that allows even OpenGL 4 over Windows Remote, but it is part of a bigger package and is a commercial solution. (Source)
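For the OpenGL 1.1 item above, a rough sketch of such a fallback check (the version/vendor strings are what Windows' built-in GDI implementation typically reports; treat this as a heuristic, not an official API):

    #include <GL/gl.h>
    #include <cstdio>
    #include <cstring>

    // Call with a current OpenGL context.
    bool only_legacy_gl()
    {
        const char *version = reinterpret_cast<const char *>(glGetString(GL_VERSION));
        const char *vendor  = reinterpret_cast<const char *>(glGetString(GL_VENDOR));
        std::printf("GL_VERSION=%s GL_VENDOR=%s\n", version, vendor);
        // The GDI software implementation reports "1.1.0" / "Microsoft Corporation".
        return version && std::strncmp(version, "1.1", 3) == 0;
    }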
Any other solutions or corrections are greatly appreciated.
NVIDIA now provides an OpenGL-accelerated Remote Desktop for GeForce: https://www.khronos.org/news/permalink/nvidia-provides-opengl-accelerated-remote-desktop-for-geforce-5e88fc2035e342.98417181
According to this article it seems that now RDP handles newer versions of Direct3D and OpenGL on Windows 10 and Windows Server 2016, but by default it is disabled by Group Policy.
I suppose that for performance reasons, using a hardware graphics card is disabled, and RDP uses a software-emulated graphics card driver that provides only some baseline features.
I stumbled upon this problem when trying to run Ultimaker Cura over standard Remote Desktop from a Windows 10 client to a Windows 10 host. Cura shouted "cannot initialize OpenGL 2.0 context". I also noticed that Repetier-Host's preview window ran terribly slowly and that it detected only an OpenGL 1.1 card. That pretty much fits the "only baseline features" description.
By running gpedit.msc then navigating to
Local Computer Policy\Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Remote Session Environment
and changing the value of
Use hardware graphics adapters for all Remote Desktop Services sessions
I was able to run Ultimaker Cura successfully with no issues, Repetier-Host now reports OpenGL 4.6, and everything finally runs as fast as it should.
Note from genpfault:
As usual, this policy is kept in the HKLM registry hive, under
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services
Set the REG_DWORD value bEnumerateHWBeforeSW to 1 to turn ON the use of GPUs in RDP.
OpenGL works great over RDP with professional NVIDIA cards, without anything like virtual machines or RemoteFX. For Quadro cards (Quadro 4000 tested) you need driver 377.xx. For the M60 you can use the same driver. If you want to use the latest driver with the M60, you have to change the driver mode to WDDM mode (see c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.1.pdf). It is possible that there are some licensing problems in this last case.
Some people recommend using tscon.exe if you can (https://stackoverflow.com/a/45723167/32453), using a scheduler to run the program on native hardware (https://stackoverflow.com/a/41839102/32453), or creating a group policy:
https://community.esri.com/thread/225251-enabling-gpu-rendering-on-windows-server-2016-windows-10-rdp
You could also try copying opengl32.dll (or opengl64.dll) into your executable's directory (https://blender.stackexchange.com/a/73014); a newer version of the DLL is available at https://fdossena.com/?p=mesa/index.frag
Remote Desktop and OpenGL do not play very well together. When you connect to a Windows box the OpenGL driver is unloaded and you end up with software emulation of OpenGL.
When you disconnect from the Windows box the OpenGL driver is not reloaded. This causes issues when you are running tests on the machine, as you have to physically log in to the machine to reset the drivers.
The solution I ended up using was to:
Disable Remote Desktop.
Delete all other remote desktop access software, because if it is used for logging in remotely the currently loaded set of drivers may be messed up.
Install NoMachine
NoMachine is my personal favourite (when it does not play up) for a number of reasons:
Hardware acceleration of compression (video of desktop).
Works on Windows and Linux.
Works well on low-bandwidth connections especially if the client and server have the necessary hardware for compression of the data stream.
On Linux you get your desktop as you last left it when you were sitting in front of the machine.
On Windows it does not affect OpenGL.
Currently free for personal and commercial use; do check the licence in case that has changed.
When NoMachine plays up it hogs the CPU, but this happens rarely. It is, however, in active development.
Others to consider:
TurboVNC
TightVNC
TeamViewer - only free for personal use.

OpenGL rendering on a server [duplicate]

Do I need an X server to do OpenGL rendering on a server? And if so, why? What does X do for me beyond piping my rendering commands to the graphics card driver?
I'm not clear on the relationship between X and OpenGL. I've searched the internet but couldn't find a concise answer.
If it matters, assume a minimal modern distribution, like a headless Ubuntu 13 machine.
With the current drivers: Yes.
And if so why?
Because the X server is the host for the actual graphics driver talking to the GPU. At the moment Linux GPU drivers require an X server that gives them an environment to live in and a channel to the kernel interfaces through which to talk to the GPU.
On the DRI/DRM/Gallium front a new driver model has been created that allows using the GPU without an X server, for example through the EGL API. However, only a small range of GPUs is supported by this right now: most Intel and AMD ones, but none from NVIDIA.
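For illustration, with a driver that supports it, a context can be created through EGL without touching GLX. This is only a minimal sketch; whether eglGetDisplay works headless, and which GL version you get, depends entirely on the driver:

    #include <EGL/egl.h>
    #include <GL/gl.h>
    #include <cstdio>

    int main()
    {
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        eglInitialize(dpy, nullptr, nullptr);
        eglBindAPI(EGL_OPENGL_API);

        const EGLint cfgAttribs[] = {
            EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint numCfg = 0;
        eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &numCfg);

        // A small pbuffer as a dummy surface; real work goes into your own FBOs.
        const EGLint pbAttribs[] = { EGL_WIDTH, 64, EGL_HEIGHT, 64, EGL_NONE };
        EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbAttribs);

        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, nullptr);
        eglMakeCurrent(dpy, surf, surf, ctx);

        std::printf("GL_VERSION: %s\n", glGetString(GL_VERSION));

        eglTerminate(dpy);
        return 0;
    }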
I'm not clear on the relationship between X and OpenGL
I covered that in detail in the SO answers found at https://stackoverflow.com/a/7967211/524368 and https://stackoverflow.com/a/8777891/524368
In short, the X server acts like a "proxy" to the GPU. You send the X server commands like "open a window" or "draw a line there". There is an extension to the X protocol called GLX, in which each OpenGL command gets translated into a stream of GLX/X opcodes and the X server executes those commands on the GPU on behalf of the calling client. In addition, most OpenGL/GLX implementations provide a mechanism to bypass the X server if the client process can actually talk directly to the GPU (because it runs on the same machine as the X server and has permission to access the kernel API); this is called Direct Rendering. It still requires the X server for opening the window, creating the context, and general housekeeping, however.
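A quick way to see which path you are on is to ask GLX whether a context is direct (small sketch, assuming you already have a Display and a GLXContext):

    #include <GL/glx.h>
    #include <cstdio>

    void report_direct(Display *dpy, GLXContext ctx)
    {
        // True:  the client talks to the GPU directly (same machine, sufficient permissions).
        // False: commands are serialized as GLX opcodes and executed by the X server.
        std::printf("direct rendering: %s\n", glXIsDirect(dpy, ctx) ? "yes" : "no");
    }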
Update due to comment
Also, if you can live without GPU acceleration, you can use Mesa3D with OSMesa (off-screen Mesa) mode and the LLVMpipe software rasterizer.
With Linux 3.12: not anymore.
Offscreen rendering is what DRM render nodes are for, according to the commit. See the developer's blog for a better explanation.
TLDR:
A render node (/dev/dri/renderD<num>) appears as a GPU with no screens attached.
As for how exactly one is supposed to make use of this, the (kernel) developer only has very general advice for userspace infrastructure. Nevertheless, it is fair to assume the feature to be nothing short of a show-enabler for Wayland and Mir, as clients won't be able to render on-screen anymore.
The Wikipedia entry has some more pointers.
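As an illustration of the idea (not a complete recipe), opening a render node and bringing up EGL on it can look roughly like this, assuming Mesa with GBM, EGL 1.5, and the surfaceless-context extension; the device path is just an example:

    #include <fcntl.h>
    #include <unistd.h>
    #include <gbm.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    int main()
    {
        // A render node: GPU access without any attached screen (and without an X server).
        int fd = open("/dev/dri/renderD128", O_RDWR);
        struct gbm_device *gbm = gbm_create_device(fd);

        EGLDisplay dpy = eglGetPlatformDisplay(EGL_PLATFORM_GBM_MESA, gbm, nullptr);
        eglInitialize(dpy, nullptr, nullptr);
        eglBindAPI(EGL_OPENGL_API);

        const EGLint cfgAttribs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_NONE };
        EGLConfig cfg;
        EGLint n = 0;
        eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &n);

        // Requires EGL_KHR_surfaceless_context; rendering then goes into your own FBOs.
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, nullptr);
        eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

        // ... offscreen rendering here ...

        eglTerminate(dpy);
        gbm_device_destroy(gbm);
        close(fd);
        return 0;
    }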

glfwInit fails when launched by IIS 7

I am making an app that creates images from a 3D scene.
I use the GLFW and GLEW libraries.
I want to call this app from a web service.
My app runs well when I launch it from the .exe file, but when it is launched by IIS 7 it crashes when glCreateShader is called, and it seems that glfwInit fails.
I put the .dll path in an environment variable.
Any ideas?
The OpenGL implementations you can usually find on a computer assume a GPU to be available. In general, network services like web servers are run in an environment configuration that doesn't give access to a GPU, hence OpenGL is not available to them either.
Furthermore, often for security reasons, all API functions that deal with UI element creation (like windows and device contexts) are disabled as well.
Update:
You could drop GLFW and use OSMesa to create a pure offscreen, windowless OpenGL context that rasterizes using a CPU-only implementation. OSMesa has to be custom built and linked into your program, and when doing so it will not be able to fall back (effortlessly) to a GPU-accelerated OpenGL implementation.
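To make that concrete, a minimal OSMesa setup could look like this (a sketch; sizes and formats are placeholders, and the program must be linked against OSMesa rather than the system OpenGL):

    #include <GL/osmesa.h>
    #include <GL/gl.h>
    #include <vector>
    #include <cstdio>

    int main()
    {
        const int width = 512, height = 512;
        std::vector<unsigned char> buffer(width * height * 4);   // RGBA target in plain memory

        OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 24, 8, 0, nullptr);
        if (!OSMesaMakeCurrent(ctx, buffer.data(), GL_UNSIGNED_BYTE, width, height)) {
            std::fprintf(stderr, "OSMesaMakeCurrent failed\n");
            return 1;
        }

        std::printf("GL_VERSION: %s\n", glGetString(GL_VERSION));
        // ... regular GL calls here; the rendered image ends up in `buffer` ...

        OSMesaDestroyContext(ctx);
        return 0;
    }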

Set up an OpenGL 4 build / unit-test server?

I'm trying to find a solution for setting up an OpenGL build server. My preference would be a virtual or cloud server, but as far as I can see those only go up to OpenGL 3.0/3.1 using software rendering. I have a server running Windows, but my tests are Linux-specific and I'd have to run them in a VM, which as far as I know also only supports OpenGL 3.1.
So, is it possible to set up an OpenGL 4 build/unit-test server?
The OpenGL specification does not include any pixel-perfect guarantees. This means your tests may fail just by switching to another GPU or even to another version of the driver.
So you have to be specific: test not the result of rendering, but the result of the math that immediately precedes the submission of the primitives to the API.
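As a trivial illustration of that approach (GLM is used here purely as an example math library, not something prescribed by the answer):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <cassert>

    int main()
    {
        // The transforms that would feed the API...
        glm::mat4 proj = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);
        glm::mat4 view = glm::lookAt(glm::vec3(0, 0, 5), glm::vec3(0), glm::vec3(0, 1, 0));

        // ...are checked on a known vertex instead of comparing rendered pixels.
        glm::vec4 clip = proj * view * glm::vec4(0, 0, 0, 1);
        assert(clip.w > 0.0f);            // point lies in front of the camera
        assert(clip.z / clip.w < 1.0f);   // and inside the far plane
        return 0;
    }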

Simulate a non-existent 3D driver

I'm currently developing a 3D-based application (in C++, if that matters). To test special circumstances, I also need to test the behaviour when no 3D interface could be loaded (e.g., glutInit() failed).
The environment is currently Linux, so a Linux-based solution would be preferable.
How would I test a case where no 3D Interface could be created, without unloading the binary 3D driver from my kernel (which is nVidia)?
Try running the application under something like a VNC server or Xnest. Those generally don't support OpenGL.
Run it under a virtual machine using VMware or VirtualBox.