I connect to a remote Ubuntu 12.04 64-bit machine with the vncviewer application. But when I run an OpenGL application, it shows this exception:
Caught exception GLShader::GLShader: GL_ARB_shader_objects not supported while initializing rendering windows
But if I connect a monitor to the remote computer, it works fine and shows the OpenGL application.
Is there any way to make the OpenGL application run in the remote window through vncviewer? Thanks!
UPDATE:
On the remote Ubuntu 12.04 64-bit server, the ~/.vnc/xstartup file is as follows (screenshot in the original post).
The VNC Viewer client runs on a Windows 7 32-bit system (screenshot in the original post).
Usually on Linux the VNC server is a dedicated variant of the Xorg X11 server (Xvnc) that uses a software-based renderer backend and has no GPU acceleration. I guess you're using an NVidia GPU with the NVidia proprietary drivers, or an AMD GPU with the AMD proprietary drivers, because otherwise the Mesa softpipe implementation would have kicked in.
If you really want to use the GPU, you'll have to VNC into a running X11 session, with the x11vnc server started inside it.
Update
First things first: for the GPU to work, an X server must be running with its output sent to the display connectors. Sorry, the current driver model doesn't allow for a purely off-screen GPU-accelerated X11 server; this is not a limitation of the hardware, just of the Xorg X11 server implementation. This also means that whatever you're doing will be visible to whoever connects a monitor to the machine. At least we can make sure that nobody messes with the mouse and keyboard.
Create a custom /etc/X11/xorg.vnc.conf consisting of this:
Section "ServerFlags"
Option "AllowEmptyInput" "true"
Option "AutoAddDevices" "off"
Option "DontZap" "false"
Option "DontVTSwitch" "true"
Option "HandleSpecialKeys" "Never"
EndSection
Section "Device"
Identifier "DeviceGPU"
Driver "nvidia"
EndSection
Next, implement a script that starts everything you want to run in that particular X11 session. Most of the time this would be something that launches the x11vnc server and then execs into the desktop environment, e.g.
#!/bin/sh
# start the VNC server for this display, then hand control to the desktop environment
x11vnc -display $DISPLAY &
exec startxfce4 # or whatever
I refer you to the x11vnc manpage for how to configure authentication.
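For example, a minimal password-protected invocation might look roughly like this (display :1 and the password file location are assumptions; adapt them to your setup):
# prompts for a password and stores it in ~/.vnc/passwd
x11vnc -storepasswd
# attach to the running display and require that password
x11vnc -display :1 -rfbauth ~/.vnc/passwd -forever -shared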
Lastly you should check that the Xorg server binary is SUID root; the NVidia driver is still not making full use of KMS and depends on the X server being started with full privileges.
Once these prerequisites are met, you can start an X11 session that supports VNC using
xinit $FULL_PATH_TO_YOUR_SESSION_SCRIPT -- $DISPLAY -config xorg.vnc.conf
where $DISPLAY is a free X11 display number.
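For example, if the session script from above were saved as /usr/local/bin/vnc-session.sh (a made-up path) and display :1 is free, the invocation would be:
xinit /usr/local/bin/vnc-session.sh -- :1 -config xorg.vnc.conf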
I'm having some difficulty using Mayavi mlab in Python with the PyCharm IDE from the MNE-Python environment. I access the Conda environment with Mayavi and VTK over a VNC session, using an Xvnc server, from my local macOS machine to a Linux cluster machine.
The error I get when opening an mlab window is:
ERROR: In ../Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 754
vtkXOpenGLRenderWindow (0x556e13b32670): Unable to find a valid OpenGL 3.2 or later implementation. Please update your video card driver to the latest version. If you are using Mesa please make sure you have version 11.2 or later and make sure your driver in Mesa supports OpenGL 3.2 such as llvmpipe or openswr. If you are on windows and using Microsoft remote desktop note that it only supports OpenGL 3.2 with nvidia quadro cards. You can use other remoting software such as nomachine to avoid this issue.
It seems that using VirtualGL to intercept VTK's OpenGL calls from PyCharm is one possible solution. Has anyone successfully overcome this issue of using Mayavi mlab over a VNC session? What are your solutions?
I am unable to reproduce this. Have you made sure that the machine you are accessing has up-to-date graphics drivers and/or Mesa available and loaded? As an example, I am using TurboVNC to access a remote Ubuntu 18.04 machine, and I am able to produce the Spherical Harmonics gallery example through both the regular and the envisage Mayavi backends. However, with the default settings of TurboVNC, the rendered scene has artefacts and saving the scene as PNG yields a black image, which I guess is a consequence of Mesa (llvmpipe) being used. If I either start TurboVNC with -extension GLX, or simply prepend vglrun to the python3 command, which calls VirtualGL, then hardware OpenGL is used and the rendered scene is flawless. I attach the screenshot and saved figures below.
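For reference, the two variants described above boil down to something like this (the install path and script name are placeholders):
# start TurboVNC, passing -extension GLX as mentioned above
/opt/TurboVNC/bin/vncserver -extension GLX
# or, inside the VNC session, run the example through VirtualGL
vglrun python3 spherical_harmonics_example.py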
I had the same problem a few days ago. After exchanging emails with the HPC staff, our solution is quite simple:
export MESA_GL_VERSION_OVERRIDE=3.2
I am using RealVNC Viewer to access the HPC system, and I run the Mayavi API through VSCode.
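In practice the whole workaround is just the override followed by whatever launches Mayavi, e.g. (the script name is a placeholder):
export MESA_GL_VERSION_OVERRIDE=3.2
python my_mayavi_script.py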
I also recommend PyVista, which seems more Pythonic in many respects. I was able to save all the plots in PyVista, and they look great.
The new problem is that I cannot save the mlab plot using either the API or the interactive scene; so far I only get a black figure.
There is an answer related to saving ("mayavi mlab.savefig() gives an empty image"), but it has not worked well over VNC for me so far.
OpenGL and Windows Remote don't play along nicely.
Solutions for this are dependent on the use case and answers are fragmented across the vast depths of the net.
This is a write-up I wish existed when I started researching this, both for coders and non-coders.
Problem:
A Windows RDP session does not expose the graphics card, at least not directly. For instance, you cannot change the desktop resolution, and graphics-card drivers usually just disable their settings menus. Creating an OpenGL context higher than v1.1 fails because of this. The advice "Don't use Windows Remote", often given especially in support IRCs, is unfortunately not an option for many: in many corporate environments Windows Remote is a constantly used tool, and an app has to work there as well.
Non-Coder workarounds
You can start the OpenGL program while it can still see the graphics card, let it create an OpenGL context, and then connect via Windows Remote. This always works, as Windows Remote just transfers the window content. This can be accomplished by:
A batch script that closes the session and starts the program, allowing you to connect to the already-running program. (Source)
Using VNC or another tool to remote into the machine, starting the program, and then switching to Windows Remote. (A simple VNC program, also with a portable client)
Coder workarounds
(Only for OpenGL ES) Translate OpenGL to DirectX. DirectX works flawlessly under Windows Remote and even has a software-rendering fallback built into DX11 if something fails.
Use the ANGLE project to do this at run time. This is what Qt officially suggests and how Chrome and Firefox implement WebGL. (Source)
Switch to software rendering as a fallback. Some CAD software, like 3dsMax, does this, for instance:
Under SDL2 you can use SDL_CreateSoftwareRenderer (Source)
GLFW version 3.3 will ship with OSMesa support (Mesa's off-screen rendering); in the meantime you can build the GitHub version with -DGLFW_USE_OSMESA=TRUE (see the build sketch after this list), but I personally still struggle to get that running. (Source)
Directly use Mesa's LLVMpipe for a fast software OpenGL implementation. (Source)
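As an illustration of the GLFW option mentioned above, a build of the GitHub sources with OSMesa enabled might look like this (an untested sketch):
git clone https://github.com/glfw/glfw.git
cd glfw && mkdir build && cd build
cmake -DGLFW_USE_OSMESA=TRUE ..
make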
Misc:
Use OpenGL 1.1: Windows has a built-in implementation of OpenGL 1.1 and earlier. Some game engines have a built-in fallback to this and thus work under Windows Remote.
Apparently there is middleware that allows even OpenGL 4 over Windows Remote, but it's part of a bigger package and is a commercial solution. (Source)
Any other solutions or corrections are greatly appreciated.
[10] Nvidia -> https://www.khronos.org/news/permalink/nvidia-provides-opengl-accelerated-remote-desktop-for-geforce-5e88fc2035e342.98417181
According to this article, it seems that RDP now handles newer versions of Direct3D and OpenGL on Windows 10 and Windows Server 2016, but this is disabled by default by Group Policy.
I suppose that for performance reasons, using a hardware graphics card is disabled, and RDP uses a software-emulated graphics card driver that provides only some baseline features.
I stumbled upon this problem when trying to run Ultimaker CURA over standard Remote Desktop from a Windows 10 client to a Windows 10 host. Cura shouted "cannot initialize OpenGL 2.0 context". I also noticed that Repetier Host's "preview" window runs terribly slow, and Repetier detects only an OpenGL 1.1 card. Pretty much fits the "only baseline features" description.
By running gpedit.msc, then navigating to
Local Computer Policy\Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Remote Session Environment
and changing the value of
Use hardware graphics adapters for all Remote Desktop Services sessions
I was able to successfully run Ultimaker CURA with no issues, Repetier-Host now reports OpenGL 4.6, and everything finally runs as fast as it should.
Note from genpfault:
As usual, this policy is kept in the registry under
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services
Set the REG_DWORD value bEnumerateHWBeforeSW to 1 to turn ON GPU use in RDP.
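For reference, setting that value from an elevated command prompt could look like this (a new RDP session should then pick it up):
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v bEnumerateHWBeforeSW /t REG_DWORD /d 1 /f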
OpenGL works great over RDP with professional Nvidia cards, without anything like virtual machines or RemoteFX. For Quadro (Quadro 4000 tested) you need driver 377.xx. For the M60 you can use the same driver. If you want to use the latest driver with the M60, you have to change the driver mode to WDDM mode (see c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.1.pdf). There may be some licensing problems in this last case.
Some people recommend using "tscon.exe" if you can: https://stackoverflow.com/a/45723167/32453 or using a scheduler to do it on native hardware: https://stackoverflow.com/a/41839102/32453 or creating a group policy:
https://community.esri.com/thread/225251-enabling-gpu-rendering-on-windows-server-2016-windows-10-rdp
Maybe copy opengl32.dll (or opengl64.dll) to your executable's directory: https://blender.stackexchange.com/a/73014; a newer version of the DLL is available at https://fdossena.com/?p=mesa/index.frag
Remote Desktop and OpenGL do not play well together. When you connect to a Windows box, the OpenGL driver is unloaded and you end up with a software emulation of OpenGL.
When you disconnect from the Windows box, the OpenGL driver is not reloaded. This causes issues when you are running tests on the machine, as you have to physically log in to the machine to reset the drivers.
The solution I ended up using was to:
Disable Remote Desktop.
Delete all other remote-desktop software, because if it is used to log in remotely, the currently loaded set of drivers may get messed up.
Install NoMachine
NoMachine is my personal favourite (when it does not play up) for a number of reasons:
Hardware acceleration of compression (video of desktop).
Works on Windows and Linux.
Works well on low-bandwidth connections especially if the client and server have the necessary hardware for compression of the data stream.
On Linux you get your desktop as you last left it when you were sitting in front of the machine.
On Windows it does not affect OpenGL.
Currently free for personal and commercial use. Do check the licence in case that has changed.
When NoMachine plays up it hogs the CPU, but this happens rarely. It is, however, in active development.
Others to consider:
TurboVNC
TightVNC
TeamViewer - only free for personal use.
I run an X server on a Windows 7 machine with OpenGL 4.4. From there I ssh -Y to a remote machine where I start an OpenGL application. (For what it's worth, the network connection is very fast; I have turned off compression and use the arcfour and blowfish-cbc ciphers for speed.)
glxgears runs, but not very smoothly. It reports it is doing 6000+ FPS, though.
However, MATLAB fails to use hardware OpenGL rendering. I read the docs, and they mention it requires OpenGL version 2.1. When I run glxinfo in the SSH terminal, it tells me:
GLX version: 1.4
OpenGL version string: 1.4 (4.4.0 - Build 10.18.15.4279)
I don't know the technical details of GLX, but does this mean that the OpenGL version supported over SSH is limited to 1.4? I understand that the latest version of GLX is quite old, compared to the progress of OpenGL.
I run a X server on a Windows 7 machine with OpenGL 4.4
The first problem starts with this. An X11 server on Windows is just another program running there, and it ultimately turns X11 commands into Win32 GDI calls. X11 itself does not "know" OpenGL; that's why there's the GLX extension. And GLX is an interesting beast: the X11 servers for Windows all implement only a very basic baseline of OpenGL commands, enough to support the essentials.
But that's only half of your problem…
From there I ssh -Y to a remote machine where I start an OpenGL application.
Doing this kind of thing always invokes indirect rendering, where all commands have to be sent as a GLX opcode command stream. And unfortunately (for you), GLX opcodes have been specified only up to OpenGL-2.1, and full GLX support is mandatory only up to OpenGL-1.4. OpenGL-1.5 introduced vertex buffer objects, which add quite a lot of complications for indirect rendering contexts, so GLX implementations may opt not to support them for indirect rendering.
On Linux, at least, the proprietary NVidia drivers and client libraries have full indirect OpenGL-2.1 support. But the X11 server you're running on Windows, and likely its client library, don't.
Is there a way to start an application with OpenGL >= 3 on a remote machine?
Local and remote machine run on Linux.
More precisely, I have the following problem:
I have an application that uses Qt for GUI stuff and OpenGL for 3D rendering.
I want to start this application on several remote machines because the program does some very time consuming computation.
Thus, I created a version of my program that does not raise a window. I use QGuiApplication, QOffscreenSurface, and a framebuffer object as the render target.
BUT: When I start the application on a remote machine (ssh -Y remotemachine01 myapp), I only get OpenGL version 2.1.2. When I start the application locally on the same machine, I get OpenGL 4.4. I suppose X forwarding is the problem.
So I need a way to avoid X forwarding.
Right now there's no clean solution, sorry.
GLX (the OpenGL extension to X11, which does the forwarding stuff) is only specified up to OpenGL-2.1, hence your inability to forward an OpenGL-3 context. This is actually a ridiculous situation, because the "OpenGL-3 way" is much better suited for indirect rendering than old-fashioned OpenGL-2.1 and earlier. Khronos really needs to get its act together and specify GLX-3.
Your best bet would be either to fall back to a software renderer on the remote side combined with some form of X compression, or to use Xpra backed by an on-GPU X11 server; however, that works for only a single user at a time.
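For the Xpra route, a rough sketch might look like this (user, host, and display number are placeholders; this shadows the GPU-backed display instead of starting a software one):
# on the remote machine, where the real X server runs on :0
xpra shadow :0
# on the local machine
xpra attach ssh://user@remotehost/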
In the not-too-far future, the upcoming Linux graphics driver models will allow remote GPU rendering by multiple users sharing graphics resources. But we're not there yet.
DISCLAIMER:
I see that some suggestions for the exact same question come up; however, that (similar) post was migrated to Super User and seems to have been removed. I would still like to post my question here, because I consider it software/programming-related enough not to post it on Super User (the line between a software and a hardware issue is sometimes vague).
I am running a very simple OpenGL program in Code::Blocks in VirtualBox, with Ubuntu 11.10 installed on an SSD. Whenever I build & run the program I get these errors:
OpenGL Warning: XGetVisualInfo returned 0 visuals for 0x232dbe0
OpenGL Warning: Retry with 0x802 returned 0 visuals
Segmentation fault
From what I have gathered myself so far this is VirtualBox related. I need to set
LIBGL_ALWAYS_INDIRECT=1
In other words, enabling indirect rendering via X.org rather than communicating directly with the hardware. This issue is probably not related to the fact that I have an ATI card, as I have a laptop with an ATI card that runs the same program flawlessly.
Still, I don't dare say that my GPU being an ATI plays no role at all. Nor am I sure whether the drivers are correctly installed (System info -> Graphics -> graphics driver says: Chromium).
Any help on HOW to set LIBGL_ALWAYS_INDIRECT=1 would be greatly appreciated. I simply lack the knowledge of where to put this command or where/how to execute it in the terminal.
Sources:
https://forums.virtualbox.org/viewtopic.php?f=3&t=30964
https://www.virtualbox.org/ticket/6848
EDIT: in the terminal, type:
export LIBGL_ALWAYS_INDIRECT=1
To verify that direct rendering is off:
glxinfo | grep direct
However, the problem persists. I still get the OpenGL warnings and the segmentation fault mentioned above.
I ran into this same problem running the Bullet Physics OpenGL demos on Ubuntu 12.04 inside VirtualBox. Rather than using indirect rendering, I was able to solve the problem by modifying the GLUT window-creation code in my source, as described here: https://groups.google.com/forum/?fromgroups=#!topic/comp.graphics.api.opengl/Oecgo2Fc9Zc.
This entailed replacing the original
...
glutCreateWindow(title);
...
with
...
/* only create the window if the requested display mode is actually available;
   the glutGet call itself appears to have side effects that avoid the crash (see below) */
if (!glutGet(GLUT_DISPLAY_MODE_POSSIBLE))
{
    exit(1);
}
glutCreateWindow(title);
...
as described in the link. It's not clear to me why this should correct the segfault issue; apparently glutGet has some side effects beyond retrieving state values. It could be a quirk of freeglut's implementation of GLUT.
If you look at the /etc/environment file, you can see a couple of variables exposed there; this will give you an idea of how to expose that environment variable across the entire system. You could also try putting it in either ~/.profile or ~/.bash_profile, depending on your needs.
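For a per-user setup, this is what it might amount to (assuming a Bash login shell):
echo 'export LIBGL_ALWAYS_INDIRECT=1' >> ~/.profile
# log out and back in, then verify:
glxinfo | grep direct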
The real question in my mind is: did you install the Guest Additions for Ubuntu? You shouldn't need to install any ATI drivers in your guest, as VirtualBox won't expose the actual physical graphics hardware to your VM. You can configure your guest to support 3D acceleration in the virtual machine settings (make sure you power off the VM first) under the Display section. You will probably want to boost the allocated video memory; 64 MB or 128 MB should be plenty, depending on your needs.
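The same Display settings can also be applied from the host's command line with VBoxManage while the VM is powered off (the VM name is a placeholder):
VBoxManage modifyvm "Ubuntu 11.10" --accelerate3d on --vram 128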