I am trying to use NVIDIA's EGL library to render an OpenGL-based simulator in headless mode. However, I find that rendering on the server with EGL is the same speed as rendering on the server with xvfb, which only performs software-based rendering. I'm wondering whether EGL might not be properly detecting my NVIDIA GPU.
What is the best way to check if EGL and OpenGL are properly detecting my GPU?
I tried the following command and found that the renderer is software-based rather than GPU-based, but I am unsure whether this is because I am calling glxinfo under xvfb:
xvfb-run -a -s "-screen 0 1400x900x24 +extension RANDR -noreset" -- glxinfo | grep "renderer"
which outputs OpenGL renderer string: llvmpipe (LLVM 10.0.0, 256 bits).
However, I find that rendering on the server with EGL is the same speed as rendering on the server with xvfb,
If your program is using EGL, then it makes zero difference whether you run it in an X server or not. The whole purpose of EGL is to completely bypass the necessity for an X server. And for an offscreen-EGL-enabled program it doesn't matter whether it's running inside an X environment (be it Xvfb, or a proper Xorg with the nvidia driver), because it doesn't care about X.
I am calling glxinfo under xvfb:
glxinfo will never tell you about EGL support.
In your own program, after establishing the OpenGL context use glGetString to obtain information about the OpenGL implementation you're running on.
If in your own program, with EGL, glGetString(GL_VENDOR) and glGetString(GL_RENDERER) tell you Nvidia, then you're GPU accelerated.
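As a minimal, untested sketch of such a check (it assumes your EGL implementation offers pbuffer-capable configs for desktop OpenGL; on a purely headless NVIDIA box you may instead have to pick the GPU explicitly via the EGL_EXT_platform_device extension):

/* check_egl.c - create a small EGL + OpenGL context and print the renderer.
 * Build (assumed): gcc check_egl.c -lEGL -lGL */
#include <EGL/egl.h>
#include <GL/gl.h>
#include <stdio.h>

int main(void)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, NULL, NULL)) {
        fprintf(stderr, "failed to initialize EGL\n");
        return 1;
    }

    static const EGLint cfg_attribs[] = {
        EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n = 0;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

    static const EGLint pbuf_attribs[] = { EGL_WIDTH, 64, EGL_HEIGHT, 64, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbuf_attribs);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
    eglMakeCurrent(dpy, surf, surf, ctx);

    /* If these report NVIDIA, the context is GPU accelerated;
     * "llvmpipe" or "softpipe" means software rendering. */
    printf("GL_VENDOR   = %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER = %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION  = %s\n", (const char *)glGetString(GL_VERSION));

    eglTerminate(dpy);
    return 0;
}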
Related
I am doing some work on Google Cloud Platform, which is to say I log in over ssh. When I run a script (mayavi/test_drawline.py) I got from others, it tells me:
ERROR: In /work/standalone-x64-build/VTKsource/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 797 vtkXOpenGLRenderWindow (0x3987b00): GL version 2.1 with the gpu_shader4 extension is not supported by your graphics driver but is required for the new OpenGL rendering backend. Please update your OpenGL driver. If you are using Mesa please make sure you have version 10.6.5 or later and make sure your driver in Mesa supports OpenGL 3.2.
So I think I need to upgrade my Mesa. Before doing that, glxinfo showed:
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL version string: 1.4 (2.1 Mesa 10.5.4)
I followed the instructions from How to upgrade mesa, but glxinfo didn't change.
I also tried to compile Mesa from source, following the instructions from the Mesa official website, Compiling and Installing, using Building with autoconf (Linux/Unix/X11). Everything seemed OK, so it looked like I had installed the newest Mesa.
However, when I run glxinfo | grep version again, it still shows:
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL version string: 1.4 (2.1 Mesa 10.5.4)
I have tried rebooting, but it doesn't help.
So, does anyone know how to solve it?
Thank you!
The OpenGL version reported depends on the available Mesa version only to a second degree. You're being reported GLX 1.4 and OpenGL 1.4, which is an absolute baseline version dating back over 15 years. So this is not a Mesa version problem.
What is far more likely is that you're trying to create an OpenGL context in a system configuration which simply can't do more than OpenGL-1.4 without resorting to software rendering. One reason for that could be that you're connecting via SSH using X11 forwarding. In that case all OpenGL commands would be tunneled through the X11 connection (GLX) to your local machine and executed there. However, GLX is very limited in its OpenGL version profile capabilities. Technically it supports up to OpenGL-2.1 (which is the last OpenGL version that defines GLX transport opcodes for all of its functions), but a given configuration might support less.
If the remote machine does have a GPU, you should use it. A couple of years ago this would have meant running an Xorg server there. Not anymore: with NVidia GPUs you can use headless EGL, and with Intel and AMD GPUs you can also use headless EGL, or use GBM/DRI to create a headless GPU-accelerated OpenGL context. Of course this requires a GPU to be available on the remote end.
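As a hedged illustration of the headless EGL route (not code from the question; it assumes the EGL_EXT_device_enumeration and EGL_EXT_platform_device extensions, which NVIDIA's libEGL exposes), enumerating GPUs and initializing EGL on one of them without any X server could look roughly like this:

/* list_egl_devices.c - enumerate EGL devices and initialize the first one.
 * Build (assumed): gcc list_egl_devices.c -lEGL */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <stdio.h>

int main(void)
{
    /* Extension entry points have to be fetched at run time. */
    PFNEGLQUERYDEVICESEXTPROC queryDevices =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");
    if (!queryDevices || !getPlatformDisplay) {
        fprintf(stderr, "EGL device extensions not available\n");
        return 1;
    }

    EGLDeviceEXT devices[8];
    EGLint num_devices = 0;
    queryDevices(8, devices, &num_devices);
    printf("EGL devices found: %d\n", num_devices);
    if (num_devices < 1)
        return 1;

    /* Take the first device here; a real program would inspect all of them. */
    EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT, devices[0], NULL);
    EGLint major = 0, minor = 0;
    if (eglInitialize(dpy, &major, &minor))
        printf("initialized EGL %d.%d on device 0 without an X server\n", major, minor);
    return 0;
}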
If you don't have a GPU on the remote site, you must use a software implementation, which with Mesa unfortunately doesn't work over a forwarded X11 session. Your best bet would be running something like Xpra or Xvnc (i.e. some kind of remote framebuffer), where the X server runs on the remote end, so that the GLX connection terminates there and not on your local machine.
Or you somehow coax the program you're building into using OSMesa (Off-Screen Mesa), but that requires an OpenGL context setup entirely different from what's done with GLX, so your VTK application may not work out of the box with that.
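For completeness, a rough OSMesa sketch (untested; it assumes a Mesa build with OSMesa enabled and linking against -lOSMesa) shows how different that setup is: there is no display connection at all, only a user-supplied pixel buffer.

/* osmesa_check.c - pure software off-screen rendering with OSMesa. */
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int width = 640, height = 480;
    void *buffer = malloc(width * height * 4);      /* RGBA8 framebuffer in RAM */

    OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 24, 8, 0, NULL);
    if (!ctx || !OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, width, height)) {
        fprintf(stderr, "OSMesa context setup failed\n");
        return 1;
    }

    /* Typically reports a software renderer such as llvmpipe. */
    printf("GL_RENDERER = %s\n", (const char *)glGetString(GL_RENDERER));

    /* ... issue GL calls here; the resulting pixels land in `buffer` ... */

    OSMesaDestroyContext(ctx);
    free(buffer);
    return 0;
}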
I'm trying to use Xming to render software using OpenGL, running on the same machine in WSL / Windows bash.
This works fine for some really small demos; however, once I try something like glmark2, it fails because the OpenGL version seems to be reported incorrectly.
glxinfo | grep OpenGL reports this:
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 970M/PCIe/SSE2
OpenGL version string: 1.4 (4.5.0 NVIDIA 382.05)
If I let xming run on my internal graphics card (using a laptop), it reports
OpenGL vendor string: Intel
OpenGL renderer string: Intel(R) HD Graphics 4600
OpenGL version string: 1.4 (4.3.0 - Build 20.19.15.4568)
The weird part is the 1.4 in front of 4.5.0 NVIDIA 382.05.
The OpenGL support is definitely at least version 3, because a demo using GLSL shaders that require newer OpenGL runs, but the version string is kinda garbage.
The problem you're running into is that the GLX portion of XMing only supports up to OpenGL-1.4. The part inside the parentheses is the version string as reported by the system's native OpenGL implementation. However, since XMing (so far) lacks the capability to reliably pass on anything beyond OpenGL-1.4, it will simply tell you "all I guarantee to support is OpenGL 1.4, but the system I'm running on could actually do …".
Maybe some day someone goes through the effort to implement a fully featured dynamic GLX←→WGL wrapper.
I have finally successfully compiled a Qt app (C++) using OpenGL on a CentOS 7 machine. The application was originally developed for Windows.
I have an OpenGL scene that is showing a black screen. It works if I compile the project with the Windows version of Qt in a Windows environment.
All controls and functionalities are working, except I cannot see the result in the OpenGL scene. After a few searches I discovered it might be a 3D acceleration problem, and I have been advised to try disabling it.
I am using the Mesa libraries on a CentOS system:
glxinfo | grep vendor
server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
OpenGL vendor string: VMware, Inc.
and I can see that 3D acceleration is on:
glxinfo | grep rendering
direct rendering: Yes
How do I disable it?
Use the environment variable LIBGL_ALWAYS_SOFTWARE=1. It disables hardware acceleration. From the Mesa3D documentation:
LIBGL_ALWAYS_SOFTWARE - if set, always use software rendering
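As a usage sketch (the binary name here is just a placeholder for your Qt application), you would launch it with the variable set in its environment, e.g. LIBGL_ALWAYS_SOFTWARE=1 ./your_qt_app, or export LIBGL_ALWAYS_SOFTWARE=1 in the shell before starting it. The variable is read when Mesa initializes, so it has to be present in the environment before the program starts.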
I have had a lot of problems / confusion setting up my laptop to work for OpenGL programming / the running of OpenGL programs.
My laptop has one of these very clever (too clever for me) designs where the Intel CPU has a graphics processor on chip, and there is also a dedicated graphics card. Specifically, the CPU is a 3630QM, with "HD Graphics 4000" (a very exciting name, I am sure), and the "proper" Graphics Processor is a Nvidia GTX 670MX.
Theoretically, according to Wikipedia, the HD Graphics Chip (Intel), under Linux, supports OpenGL 3.1, if the correct drivers are installed. (They probably aren't.)
According to NVIDIA, the 670MX can support OpenGL 4.1, so ideally I would like to develop and execute on this GPU.
Do I have drivers installed that enable me to execute OpenGL 4.1 code on the NVIDIA GPU? Answer: probably not; currently I use this "optirun" thing to execute OpenGL programs on the dedicated GPU. See this link for the process I followed to set up my computer.
My question is: I know how to run a compiled program on the 670MX (that would be 'optirun ./programname'), but how can I find out what OpenGL version the installed graphics drivers on my system will support? Running 'glxinfo | grep -i opengl' in a terminal tells me that the Intel chip supports OpenGL version 3.0. See the information below:
ed#kubuntu1304-P151EMx:~$ glxinfo | grep -i opengl
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
OpenGL version string: 3.0 Mesa 9.1.3
OpenGL shading language version string: 1.30
OpenGL extensions:
How do I do the same or similar thing to find out what support is available under 'optirun', and what version of OpenGL is supported?
Update
Someone suggested I use glGetString() to find this information: I am now completely confused!
Without optirun, the supported OpenGL version is '3.0 Mesa 9.1.3', so version 3, which is what I expected. However, under optirun, the supported OpenGL version is '4.3.0 NVIDIA 313.30', so version 4.3?! How can it be version 4.3 if the hardware specification from NVIDIA states only version 4.1 is supported?
You can just run glxinfo under optirun:
optirun glxinfo | grep -i opengl
Both cards have different feature sets, so it's normal to get different OpenGL versions. As for the 4.3 versus 4.1 puzzle: the reported version comes from the installed driver, and NVIDIA drivers routinely expose a higher OpenGL version on a given GPU than the figure printed in its original specifications, so 4.3 on a GTX 670MX with driver 313.30 is plausible.
I'm currently porting an open-source OpenGL game to OpenGL ES. The target device runs Linux and has a relatively weak CPU (ARM11 family, with FPU). It has an OpenGL ES accelerator but not an OpenGL one.
Initially I want to get the existing OpenGL-GLX-X11 implementation to run, using an accelerated OpenGL instance on another Linux machine - for example, an Athlon X2 with a Radeon X1650 Pro. This will help to verify that there are no serious CPU bottlenecks that need to be sorted out at a high level.
I have managed to set up SSH forwarding of the X11 connection. The glxinfo and glxgears programs run, but the latter has very poor performance (8fps) compared to a locally running glxgears (60fps with vsync). The glxinfo report stated that Direct Rendering is being used, which tells me that the local (to the ARM device) software renderer is being used.
What I want to happen is for OpenGL commands to be sent to the Athlon X2 machine and accelerated using the Radeon. I believe I need to turn on Indirect Rendering for this. However, setting LIBGL_ALWAYS_INDIRECT=1 does not change anything. For example:
arm$ LIBGL_ALWAYS_INDIRECT=1 glxinfo | fgrep rendering
direct rendering: Yes
arm$
The ARM device is running Gentoo Linux. What is the best way to force what I want to happen?
The glxinfo and glxgears programs run, but the latter has very poor performance (8fps) compared to a locally running glxgears (60fps with vsync). The glxinfo report stated that Direct Rendering is being used, which tells me that the local (to the ARM device) software renderer is being used.
I'm a bit puzzled by this. If you see the OpenGL output on the remote display, then this would mean that pictures are being transmitted instead of GLX commands. That in turn would mean that the libGL.so on your host device is X11-aware for its output, but won't use GLX.
Could you please determine which package contributes the libGL.so on your ARM device? I suggest you install a separate libGL.so with only GLX command generation and LD_PRELOAD that.
Mesa3D can be configured to build a GLX command stream generator library.
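As a hedged pointer for the Gentoo side (the library path is an assumption and may differ on your system): running equery belongs /usr/lib/libGL.so (from the gentoolkit package) should reveal which package installed your current libGL.so. Once you have built a GLX-only libGL.so, launching the game along the lines of LD_PRELOAD=/path/to/glx/libGL.so ./game should make it emit a GLX command stream over the forwarded X11 connection instead of rendering locally.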