Disable graphics acceleration on CentOS 7 - Mesa libraries - OpenGL

I have finally successfully compiled a Qt app (C++) using OpenGL on a CentOS 7 machine. The application was originally developed for Windows.
My OpenGL scene shows only a black screen. It works if I compile the project with the Windows version of Qt in a Windows environment.
All controls and functionality work, except that I cannot see anything in the OpenGL scene. After a few searches, I discovered it might be a 3D acceleration problem, and I have been advised to try disabling it.
I am using the Mesa libraries on a CentOS system:
glxinfo | grep vendor
server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
OpenGL vendor string: VMware, Inc.
and I can see that 3D acceleration is on:
glxinfo | grep rendering
direct rendering: Yes
How do I disable it?

Set the environment variable LIBGL_ALWAYS_SOFTWARE=1. It disables hardware acceleration. From the Mesa3D documentation:
LIBGL_ALWAYS_SOFTWARE - if set, always use software rendering
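For example, you can export the variable for the whole shell session, or set it for a single invocation; a minimal sketch (the binary name ./myqtapp is a placeholder for your application):

```shell
# Set per invocation only (./myqtapp is a placeholder binary name):
#   LIBGL_ALWAYS_SOFTWARE=1 ./myqtapp
# Or export it for every GL program started from this shell:
export LIBGL_ALWAYS_SOFTWARE=1
echo "LIBGL_ALWAYS_SOFTWARE=$LIBGL_ALWAYS_SOFTWARE"
# With the variable set, `glxinfo | grep renderer` should name a
# software rasterizer such as llvmpipe or softpipe.
```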

Related

New Mesa installed but glxinfo shows the older one

I am doing some work on Google Cloud Platform, that is to say I log in via SSH. When I run a script (mayavi/test_drawline.py) from someone else, it tells me:
ERROR: In /work/standalone-x64-build/VTKsource/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 797 vtkXOpenGLRenderWindow (0x3987b00): GL version 2.1 with the gpu_shader4 extension is not supported by your graphics driver but is required for the new OpenGL rendering backend. Please update your OpenGL driver. If you are using Mesa please make sure you have version 10.6.5 or later and make sure your driver in Mesa supports OpenGL 3.2.
So I think I need to upgrade my Mesa. Before the upgrade, glxinfo shows:
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL version string: 1.4 (2.1 Mesa 10.5.4)
I followed the instructions from How to upgrade mesa, but the glxinfo output didn't change.
Then I tried to compile Mesa from source, following the instructions on the official Mesa website, Compiling and Installing, using Building with autoconf (Linux/Unix/X11). Everything went fine, and it seemed that I had installed the newest Mesa.
However, when I run glxinfo | grep version again, it still shows:
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL version string: 1.4 (2.1 Mesa 10.5.4)
I have tried rebooting, but it didn't help.
So, does anyone know how to solve it?
Thank you!
The OpenGL version reported depends on the installed Mesa version only to a second degree. You report GLX 1.4 and OpenGL 1.4, which is an absolute baseline version dating back over 15 years. So this is not a Mesa version problem.
What is far more likely is that you're trying to create an OpenGL context in a system configuration which simply can't do more than OpenGL 1.4 without resorting to software rendering. One reason for that could be that you're connecting via SSH with X11 forwarding. In that case all OpenGL commands are tunneled through the X11 connection (GLX) to your local machine and executed there. However, GLX is very limited in its OpenGL version and profile capabilities. Technically it supports up to OpenGL 2.1 (the last OpenGL version that defines GLX transport opcodes for all of its functions), but a given configuration may support less.
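A quick way to check whether you are in that situation is to look at DISPLAY on the remote machine; this sketch assumes the common sshd convention of allocating forwarded displays from :10 upward:

```shell
# Under SSH X11 forwarding, DISPLAY usually looks like
# "localhost:10.0"; a local session typically shows ":0" or ":1".
echo "DISPLAY=$DISPLAY"

case "$DISPLAY" in
    localhost:*) echo "likely a forwarded X11 connection" ;;
    :*)          echo "local X server" ;;
    *)           echo "no or unusual DISPLAY" ;;
esac
```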
If the remote machine does have a GPU, you should use it. A couple of years ago this would have meant running an Xorg server there; not anymore. With NVIDIA GPUs you can use headless EGL. With Intel and AMD GPUs you can also use headless EGL, or use GBM/DRI to create a headless, GPU-accelerated OpenGL context. Of course this requires a GPU to be available on the remote end.
If you don't have a GPU on the remote side, you must use a software implementation, which with Mesa unfortunately doesn't work over a forwarded X11 session. Your best bet is running something like Xpra or Xvnc (i.e. some kind of remote framebuffer), where the X server runs on the remote end, so that the GLX connection terminates there rather than on your local machine.
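A lighter-weight variant of the same idea is a virtual framebuffer on the remote host; a hedged sketch (package names vary by distribution, and the script path is taken from the question):

```shell
# On the remote host (not over ssh -X), run the script against a
# virtual framebuffer so the GLX connection terminates remotely.
# On CentOS the package is xorg-x11-server-Xvfb.
if command -v xvfb-run >/dev/null 2>&1; then
    # xvfb-run starts Xvfb on a free display, points DISPLAY at it,
    # and runs the given command against that server.
    xvfb-run -s "-screen 0 1280x1024x24" python mayavi/test_drawline.py
else
    echo "xvfb-run not found; install xorg-x11-server-Xvfb first"
fi
```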
Alternatively, you could coax the program you're building into using OSMesa (Off-Screen Mesa), but that requires an OpenGL context setup entirely different from what's done with GLX, so your VTK application may not work with it out of the box.

Xming 7.7.0.23 OpenGL version reported incorrectly

I'm trying to use Xming to render software that uses OpenGL, running on the same machine in WSL / Windows bash.
This works fine for some really small demos; however, once I try something like glmark2, it fails, apparently because the OpenGL version is reported incorrectly.
glxinfo | grep OpenGL reports this:
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 970M/PCIe/SSE2
OpenGL version string: 1.4 (4.5.0 NVIDIA 382.05)
If I let xming run on my internal graphics card (using a laptop), it reports
OpenGL vendor string: Intel
OpenGL renderer string: Intel(R) HD Graphics 4600
OpenGL version string: 1.4 (4.3.0 - Build 20.19.15.4568)
The weird part is the 1.4 in front of 4.5.0 NVIDIA 382.05.
OpenGL support is definitely at least version 3, because a demo using GLSL shaders that require newer OpenGL runs, but the version string is kinda garbage.
The problem you're running into is that the GLX portion of Xming supports only up to OpenGL 1.4. The part inside the parentheses is the version string as reported by the system's native OpenGL implementation. However, since Xming (so far) lacks the capability to reliably pass on anything beyond OpenGL 1.4, it simply tells you: "all I guarantee to support is OpenGL 1.4, but the system I'm running on could actually do …".
Maybe some day someone will go through the effort of implementing a fully featured dynamic GLX←→WGL wrapper.
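The two numbers can be pulled apart mechanically; a small sketch (the sample string is the one from the question):

```shell
# Xming reports "<GLX-capped version> (<native version>)".
ver='1.4 (4.5.0 NVIDIA 382.05)'

glx_part=$(printf '%s\n' "$ver" | sed 's/ *(.*//')
native_part=$(printf '%s\n' "$ver" | sed 's/.*(\(.*\))/\1/')

echo "GLX-visible: $glx_part"     # what Xming guarantees: 1.4
echo "native:      $native_part"  # what the Windows driver can do
```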

Windows 7, Qt 5.7, Intel (HD) Graphics Family, OpenGL v3.1

In trying to run a Qt app on a Windows 7 laptop, it prints the following (and keeps repeating it):
shader compilation failed:
"Failed to create D3D shaders.\n"
QOpenGLShader::link: Failed to create D3D shaders.
Failed to create D3D shaders.
QOpenGLShaderProgram::uniformLocation( matrix ): shader program is not linked
QOpenGLShaderProgram::uniformLocation( color ): shader program is not linked
QOpenGLShaderProgram::uniformLocation( textureScale ): shader program is not linked
QOpenGLShaderProgram::uniformLocation( dpr ): shader program is not linked
QOpenGLShader::link: Failed to create D3D shaders.
The output above is from the 'Application Output' window in QtCreator.
I am using Windows 7 with an Intel (HD) Graphics Family GPU: driver version 8.15.10.2559, OpenGL version 3.1.
The OpenGL version is from OpenGL Extensions Viewer.
The driver version was retrieved from Screen Resolution->Advanced Settings->Properties->Driver.
Edit 1:
The app runs on Linux (Ubuntu 14.04). I moved it without any modifications to my Windows 7 laptop. I'm using Qt 5.7 on both laptops (Linux & Windows). It built and ran on Linux without any complaints, and it built on Windows 7 without any complaints. I thought I might have an OpenGL version issue, but OpenGL Extensions Viewer says that my system has OpenGL 3.1.
This error log
shader compilation failed:
"Failed to create D3D shaders.\n"
QOpenGLShader::link: Failed to create D3D shaders.
strongly suggests that the Qt build you're using relies on the ANGLE emulation layer (OpenGL implemented on top of Direct3D). Qt for Windows comes in two build variants, and the ANGLE variant is the more widely deployed one, because the drivers installed by default on Windows lack modern OpenGL support… which makes applications that require modern OpenGL features, like Qt5, fail. Hence there's a Qt5 build variant that contains this emulation layer.
In your case you probably want to install (or may already have installed) the native OpenGL driver for your GPU (obtainable directly from the vendor's website) and use a Qt build configured for the native OpenGL implementation (Qt built with the -opengl desktop configuration). See also QT and native OpenGL support in MS Windows.
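If the driver is installed but the ANGLE path keeps getting picked, note that Qt built with -opengl dynamic chooses its backend at runtime and honors the QT_OPENGL environment variable (values: desktop, angle, software). A sketch, assuming such a dynamic build (myapp.exe is a placeholder):

```shell
# On Windows cmd.exe you would run:
#   set QT_OPENGL=desktop
#   myapp.exe
# POSIX-shell form of the same variable, for illustration:
export QT_OPENGL=desktop
echo "QT_OPENGL=$QT_OPENGL"
# desktop  = native OpenGL driver
# angle    = Direct3D-backed ANGLE layer
# software = software rasterizer
```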

Can I use EGL in OSX?

I am trying to use the Cairo library in a C++ application on Mac, utilizing its GL acceleration. (I ran the same tests with its Quartz backend, but the performance was disappointing.) It says it supports EGL and GLX. Use of GLX requires (externally installed) XQuartz and opens an X window, so I lean towards EGL:
Apple's programming guide pages say to use NSOpenGL*, which this page and others say uses CGL.
This (2012) page says Mac has EAGL, which is only similar to EGL (I suppose it refers to iOS, not Mac, as its EAGL reference links to iOS help pages).
ANGLE says it supports EGL, but it is for Direct3D on Windows, as I understand it(?)
GLFW v3 is also said to support it (in future releases?), but via GLX, it is said (?).
Mali says it has a simulator for Mac, but I don't know whether it is accelerated or only for its own hardware (it also says it supports only a subset of EGL on different platforms).
Most of the links refer to mobile platforms when EGL is used. I am using Mac OS 10.8 and Xcode 4.6. What is the current situation, and how can I (if I can) use EGL on Mac now?
Here it is:
https://github.com/SRA-SiliconValley/cairogles/
Clone cairo and check out branch nsgl. This cairo is our fork of cairo 1.12.14, with the following enhancements over upstream cairo:
support for OpenGL ES 3.0, and for the OpenGL ES 2.0 ANGLE MSAA extension
new convex tessellator for circle fills for the MSAA compositor
new cairo API - cairo_rounded_rectangle() - optimized for the MSAA compositor
support for Gaussian blur for four backends: GL/GLES, quartz, xcb and image
support for drop shadow and inset for four backends: GL/GLES, quartz, xcb and image, with a shadow cache
support for faster stroking when stroke width = 1 - we call this hairline stroke
integration with NSOpenGL
various bug fixes and optimizations.
On Mac OS X, you have two choices: GLX or NSOpenGL - they are mutually exclusive. You can get Mesa GLX from MacPorts.
To compile for NSOpenGL:
./configure --prefix=your_install_location --enable-gl=yes --enable-nsgl=yes --enable-glx=no --enable-egl=no
To compile for GLX:
./configure --prefix=your_install_location --enable-gl=yes --enable-glx=yes --enable-nsgl=no --enable-egl=no
If you are interested in EGL (not available on Mac, but Mesa 9.1+ on Linux and various embedded platforms has EGL):
./configure --prefix=your_install_location --enable-gl=no --enable-egl=yes --enable-glesv2=yes --enable-glesv3=no   (compiles for GLES2 drivers)
./configure --prefix=your_install_location --enable-gl=no --enable-egl=yes --enable-glesv2=no --enable-glesv3=yes   (compiles for GLESv3 drivers; Mesa 9.1+ has GLESv3)
You can set CFLAGS="-g" for debugging or CFLAGS="-O2" for optimization.
cairo GL/GLES has 3 GL compositors (rendering paths for the GL/GLES backend). The default is the span compositor, which is a software simulation of antialiasing and is slow. If your driver supports MSAA, use the MSAA compositor. To do so, export CAIRO_GL_COMPOSITOR=msaa in the terminal, or call setenv() in your program.
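For example, from a terminal (the application name is a placeholder):

```shell
# Select cairo's MSAA compositor for the GL/GLES backend; only
# useful if the GL driver actually supports multisampling.
export CAIRO_GL_COMPOSITOR=msaa
echo "CAIRO_GL_COMPOSITOR=$CAIRO_GL_COMPOSITOR"
# Then launch your cairo-gl application from this same shell, e.g.:
#   ./my_cairo_app        (placeholder name)
```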
I have sample code showing cairo for quartz, xcb, image, glx, egl and nsgl. If you are interested, I can send it.
Any bug reports/patches are welcome. I have not had time to get WGL (MS Windows) to work yet. Additionally, it would be nice to have a D3D backend for cairo; I just don't have time to do it - it is on the todo list.
Enjoy
Yes, cairo has been ported to use NSOpenGL. I can show you how, and send sample code if you are interested. Performance is much faster than Quartz GL.
You definitely can use ANGLE:
#define GL_GLEXT_PROTOTYPES
#include <GLES2/gl2.h>
#include <EGL/egl.h>

How can I check which version of OpenGL is supported on a Linux system with optirun?

I have had a lot of problems / confusion setting up my laptop to work for OpenGL programming / the running of OpenGL programs.
My laptop has one of these very clever (too clever for me) designs where the Intel CPU has a graphics processor on chip, and there is also a dedicated graphics card. Specifically, the CPU is a 3630QM, with "HD Graphics 4000" (a very exciting name, I am sure), and the "proper" Graphics Processor is a Nvidia GTX 670MX.
Theoretically, according to Wikipedia, the HD Graphics Chip (Intel), under Linux, supports OpenGL 3.1, if the correct drivers are installed. (They probably aren't.)
According to NVIDIA, the 670MX can support OpenGL 4.1, so ideally I would like to develop and execute on this GPU.
Do I have drivers installed that enable me to execute OpenGL 4.1 code on the NVIDIA GPU? Answer: probably not; currently I use this "optirun" program to execute OpenGL programs on the dedicated GPU. See this link for the process I followed to set up my computer.
My question is: I know how to run a compiled program on the 670MX - that would be 'optirun ./programname' - but how can I find out what OpenGL version the installed graphics drivers on my system support? Running 'glxinfo | grep -i opengl' in a terminal tells me that the Intel chip supports OpenGL version 3.0. See the information below:
ed#kubuntu1304-P151EMx:~$ glxinfo | grep -i opengl
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
OpenGL version string: 3.0 Mesa 9.1.3
OpenGL shading language version string: 1.30
OpenGL extensions:
How do I do the same or similar thing to find out what support is available under 'optirun', and what version of OpenGL is supported?
Update
Someone suggested I use glGetString() to find this information, and now I am completely confused!
Without optirun, the supported OpenGL version is '3.0 Mesa 9.1.3', so version 3, which is what I expected. However, under optirun, the supported OpenGL version is '4.3.0 NVIDIA 313.30', so version 4.3?! How can it be version 4.3 if the hardware specification from NVIDIA states that only version 4.1 is supported?
You can just run glxinfo under optirun:
optirun glxinfo | grep -i opengl
The two GPUs have different capabilities, so it's normal to get different OpenGL versions. As for 4.3 versus the 4.1 on NVIDIA's spec sheet: vendor spec pages usually list the OpenGL version supported at the product's launch, and newer drivers can expose later OpenGL versions on the same hardware.
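Running the same query with and without optirun shows both stacks side by side; a sketch, guarded so it is harmless on machines where the tools are not installed:

```shell
# Intel/Mesa stack (default X server):
if command -v glxinfo >/dev/null 2>&1; then
    glxinfo | grep "OpenGL version"
fi
# NVIDIA stack via Bumblebee's optirun:
if command -v optirun >/dev/null 2>&1; then
    optirun glxinfo | grep "OpenGL version"
fi
echo "query done"
```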