Forwarding accelerated OpenGL (GLX) via SSH

I'm currently porting an open-source OpenGL game to OpenGL ES. The target device runs Linux and has a relatively weak CPU (ARM11 family, with FPU). It has an OpenGL ES accelerator but not an OpenGL one.
Initially I want to get the existing OpenGL-GLX-X11 implementation to run, using an accelerated OpenGL instance on another Linux machine - for example, an Athlon X2 with a Radeon X1650 Pro. This will help to verify that there are no serious CPU bottlenecks that need to be sorted out at a high level.
I have managed to set up SSH forwarding of the X11 connection. The glxinfo and glxgears programs run, but the latter has very poor performance (8fps) compared to a locally running glxgears (60fps with vsync). The glxinfo report stated that Direct Rendering is being used, which tells me that the local (to the ARM device) software renderer is being used.
What I want to happen is for OpenGL commands to be sent to the Athlon X2 machine and accelerated using the Radeon. I believe I need to turn on Indirect Rendering for this. However, setting LIBGL_ALWAYS_INDIRECT=1 does not change anything. For example:
arm$ LIBGL_ALWAYS_INDIRECT=1 glxinfo | fgrep rendering
direct rendering: Yes
arm$
The ARM device is running Gentoo Linux. What is the best way to force what I want to happen?

The glxinfo and glxgears programs run, but the latter has very poor performance (8fps) compared to a locally running glxgears (60fps with vsync). The glxinfo report stated that Direct Rendering is being used, which tells me that the local (to the ARM device) software renderer is being used.
I'm a bit puzzled by this. If you see the OpenGL output on the remote display, that would mean that finished pictures are being transmitted instead of GLX commands. That in turn would mean that the libGL.so on your ARM device is X11-aware for its output, but does not use GLX.
Could you please determine which package provides the libGL.so on your ARM device? I suggest you install a separate libGL.so that only generates the GLX command stream and LD_PRELOAD that.
Mesa3D can be configured to build such a GLX command-stream generator library.

Related

CUDA Manjaro Nvidia gtx 970 Bus error (core dumped) [duplicate]

I am trying to use CUDA with GTX 570.
I am using Ubuntu 14.04 and CUDA has been installed successfully.
I think I should run the desktop/GUI on the on-board VGA
and use the GTX 570 solely for CUDA, but that doesn't seem to be working. (I set the on-board VGA as the default in the BIOS, but after installing CUDA, Ubuntu only provides a GUI on the GTX 570's output.)
So, is it okay to use GTX 570 for both gui and CUDA? What is the standard way to use it?
If your on-board VGA is still active at boot time and only goes dark when Ubuntu loads, then it should be possible, via a rearrangement of your xorg.conf file, to get Ubuntu to use the on-board VGA for the display as well. In that case you would remove all references to the GTX 570 from your xorg.conf; this is the best approach.
You can use the GTX570 for both display and CUDA.
There will be two areas of limitation:
Interactivity - when running CUDA apps, your display will be unresponsive. For learning purposes, most CUDA kernels run for significantly less than 1 second, so this is not likely to be much of an issue for you (the display will freeze while the CUDA kernel is running). But if you want to run longer CUDA kernels, your system will be unresponsive during that time, and you may even run into Linux watchdog timeout issues. This document may also be interesting reading for you.
Debugging - when no X server is using the GTX 570, it can easily be used for debugging. However, you will not be able to debug your CUDA apps (e.g. set breakpoints in CUDA device code) when the GUI/display is also running on the GTX 570.

How to check if OpenGL/EGL detects GPU on a headless server?

I am trying to use NVIDIA's EGL library to render an OpenGL-based simulator in headless mode. However, I find that rendering on the server with EGL is the same speed as rendering on the server with xvfb, which only performs software-based rendering. I'm wondering if my EGL might not be properly detecting my NVIDIA GPU.
What is the best way to check if EGL and OpenGL are properly detecting my GPU?
I tried the following command and found that the renderer is software-based rather than GPU-based, but I am unsure whether this is because I am calling glxinfo under xvfb:
xvfb-run -a -s "-screen 0 1400x900x24 +extension RANDR -noreset" -- glxinfo | grep "renderer",
which outputs OpenGL renderer string: llvmpipe (LLVM 10.0.0, 256 bits).
However, I find that rendering on the server with EGL is the same speed as rendering on the server with xvfb,
If your program is using EGL, then it makes zero difference whether you run it in an X server or not. The whole purpose of EGL is to completely bypass the necessity for an X server. For an offscreen-EGL-enabled program it doesn't matter if it's running inside an X environment (be it Xvfb, or a proper Xorg with the nvidia driver), because it doesn't care about X.
I am calling glxinfo with an xvfb:
glxinfo will never tell you about EGL support.
In your own program, after establishing the OpenGL context, use glGetString to obtain information about the OpenGL implementation you're running on.
If, in your own program with EGL, glGetString(GL_VENDOR) and glGetString(GL_RENDERER) report NVIDIA, then you're GPU accelerated.
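For illustration, here is a minimal sketch of such a check (an assumed example, not from the original answer: it presumes a context has already been made current, e.g. via eglMakeCurrent(), and the helper name print_gl_info is just for this snippet):

#include <GLES2/gl2.h>   /* glGetString() is also declared in GL/gl.h for desktop GL */
#include <stdio.h>

/* Call after the context has been created and made current. */
static void print_gl_info(void)
{
    /* "NVIDIA" here means the GPU driver is in use; strings such as
       "llvmpipe" or "softpipe" indicate Mesa software rendering. */
    printf("GL_VENDOR:   %s\n", (const char *) glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *) glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *) glGetString(GL_VERSION));
}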

New Mesa installed but glxinfo shows the older one

I am doing some work on Google Cloud Platform, which is to say I log in via SSH. When I run a script (mayavi/test_drawline.py) from others, it tells me:
ERROR: In /work/standalone-x64-build/VTKsource/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 797 vtkXOpenGLRenderWindow (0x3987b00): GL version 2.1 with the gpu_shader4 extension is not supported by your graphics driver but is required for the new OpenGL rendering backend. Please update your OpenGL driver. If you are using Mesa please make sure you have version 10.6.5 or later and make sure your driver in Mesa supports OpenGL 3.2.
So I think I need to upgrade my Mesa. Before that, glxinfo shows:
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL version string: 1.4 (2.1 Mesa 10.5.4)
I followed the instructions from How to upgrade mesa, but the glxinfo output didn't change.
Then I tried to compile Mesa from source, following the instructions from the Mesa official website, Compiling and Installing. I used
Building with autoconf (Linux/Unix/X11). Everything seemed OK, and it appeared that I had installed the newest Mesa.
However, when I run glxinfo | grep version again, it still looks like this:
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL version string: 1.4 (2.1 Mesa 10.5.4)
I have tried rebooting, but it doesn't help.
So, does anyone know how to solve it?
Thank you!
The OpenGL version reported depends on the available Mesa version only to a second degree. You're reporting GLX 1.4 and OpenGL 1.4, which is an absolute baseline version dating back over 15 years. So this is not a Mesa version problem.
What is far more likely is that you're trying to create an OpenGL context in a system configuration which simply can't do more than OpenGL 1.4 without resorting to software rendering. One reason for that could be that you're connecting via SSH with X11 forwarding. In that case all OpenGL commands are tunneled through the X11 connection (GLX) to your local machine and executed there. However, GLX is very limited in its OpenGL version and profile capabilities. Technically it supports up to OpenGL 2.1 (the last OpenGL version that defines GLX transport opcodes for all of its functions), but a given configuration might support less.
If the remote machine does have a GPU, you have to use that. A couple of years ago this would have meant running an Xorg server there. Not anymore. With NVidia GPUs you can use headless EGL. With Intel and AMD GPUs you can also use headless EGL, or use GBM/DRI to create a headless, GPU-accelerated OpenGL context. Of course this requires a GPU to be available on the remote end.
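For illustration, a minimal headless-EGL sketch (an assumed example rather than part of the original answer: it presumes a working EGL driver, links with -lEGL -lGLESv2, and on NVIDIA a fully headless setup may additionally need the EGL_EXT_platform_device path via eglGetPlatformDisplayEXT):

#include <EGL/egl.h>
#include <GLES2/gl2.h>
#include <stdio.h>

int main(void)
{
    /* No X server involved: talk to EGL directly. */
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    EGLint major, minor;
    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, &major, &minor)) {
        fprintf(stderr, "eglInitialize failed\n");
        return 1;
    }
    printf("EGL %d.%d, vendor: %s\n", major, minor, eglQueryString(dpy, EGL_VENDOR));

    static const EGLint cfg_attribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint ncfg = 0;
    if (!eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &ncfg) || ncfg == 0) {
        fprintf(stderr, "no usable EGLConfig found\n");
        return 1;
    }

    static const EGLint pbuf_attribs[] = { EGL_WIDTH, 64, EGL_HEIGHT, 64, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbuf_attribs);

    static const EGLint ctx_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attribs);
    eglMakeCurrent(dpy, surf, surf, ctx);

    /* "llvmpipe" or "softpipe" here means software rendering, not the GPU. */
    printf("GL_RENDERER: %s\n", (const char *) glGetString(GL_RENDERER));

    eglTerminate(dpy);
    return 0;
}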
If you don't have a GPU on the remote site, you must use some software implementation, which with Mesa unfortunately doesn't work over a forwarded X11 session. Your best bet would be running something like Xpra or Xvnc (i.e. some kind of remote framebuffer), where the X server runs on the remote end, so that the GLX connection terminates there and not on your local machine.
Or you could somehow coax the program you're building into using OSMesa (Off-Screen Mesa), but that requires an OpenGL context setup entirely different from what's done with GLX, so your VTK application may not work out of the box with that.
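For reference, a minimal OSMesa sketch (again an assumed example: Mesa built with OSMesa support and linked with -lOSMesa; it renders into a plain memory buffer with no X server or GPU involved):

#include <GL/osmesa.h>
#include <GL/gl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int width = 640, height = 480;
    unsigned char *buffer = malloc(width * height * 4);

    OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
    if (!ctx || !OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, width, height)) {
        fprintf(stderr, "OSMesa context creation failed\n");
        return 1;
    }

    printf("GL_RENDERER: %s\n", (const char *) glGetString(GL_RENDERER));
    /* ... draw here, then read the pixels straight out of 'buffer' ... */

    OSMesaDestroyContext(ctx);
    free(buffer);
    return 0;
}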

Can I use EGL in OSX?

I am trying to use the Cairo library in a C++ application, utilizing its GL acceleration on Mac. (I ran the same tests with its Quartz backend, but the performance was disappointing.) It says it supports EGL and GLX. Use of GLX requires (externally installed) XQuartz and opens an X window, so I lean towards EGL:
Apple's programming guide pages say to use NSOpenGL*, which this page and others say uses CGL.
This (2012) page says Mac has EAGL, which is only similar to EGL (I suppose it refers to iOS, not Mac, as its EAGL reference links to iOS help pages).
ANGLE says it supports EGL, but it is for Direct3D on Windows, as I understand it(?)
GLFW v3 is also said to support it (in future releases?), but via GLX, it is said(?).
Mali says it has a simulator for Mac, but I don't know whether it is accelerated or only meant for its own hardware (it also says it only supports a subset of EGL on different platforms).
Most of the links refer to mobile when EGL is used. I am using Mac OS 10.8 and Xcode 4.6. What is the current situation? How can I (if I can) use EGL on Mac (now)?
Here it is:
https://github.com/SRA-SiliconValley/cairogles/
Clone cairo and check out branch nsgl. This cairo is our fork of cairo 1.12.14, which has the following enhancements vs. the upstream cairo:
support for OpenGL ES 3.0, and support for the OpenGL ES 2.0 ANGLE MSAA extension
a new convex tessellator for circle fills for the MSAA compositor
a new cairo API - cairo_rounded_rectangle() - optimized for the MSAA compositor
Gaussian blur support for four backends: GL/GLES, quartz, xcb and image
drop shadow and inset support for four backends: GL/GLES, quartz, xcb and image, with a shadow cache
faster stroke when the stroke width = 1 - we call it hairline stroke
NSOpenGL integration
various bug fixes and optimizations.
On Mac OS X you have two choices: GLX or NSOpenGL; they are mutually exclusive. You can get Mesa GLX from MacPorts.
To compile for NSOpenGL: ./configure --prefix=your_install_location --enable-gl=yes --enable-nsgl=yes --enable-glx=no --enable-egl=no
To compile for GLX: ./configure --prefix=your_install_location --enable-gl=yes --enable-glx=yes --enable-nsgl=no --enable-egl=no
If you are interested in EGL (not available on Mac, but Mesa 9.1+ on Linux and various embedded platforms have EGL), do:
./configure --prefix=your_install_location --enable-gl=no --enable-egl=yes --enable-glesv2=yes --enable-glesv3=no (this compiles for GLES2 drivers)
./configure --prefix=your_install_location --enable-gl=no --enable-egl=yes --enable-glesv2=no --enable-glesv3=yes (this compiles for GLES3 drivers; Mesa 9.1+ has GLES3)
You can set CFLAGS="-g" for debugging or CFLAGS="-O2" for optimization.
Cairo GL/GLES has 3 GL compositors (rendering paths for the GL/GLES backend). The default is the span compositor, which is a software simulation of AA and is slow. If your driver supports MSAA, use the MSAA compositor. To use the MSAA compositor, you can export CAIRO_GL_COMPOSITOR=msaa in the terminal, or call setenv() in your program.
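For example, a minimal sketch of the setenv() route (a hypothetical snippet; the call must happen before the cairo GL device and surfaces are created):

#include <stdlib.h>

int main(void)
{
    /* Equivalent to `export CAIRO_GL_COMPOSITOR=msaa`; the third argument (1)
       means overwrite the variable if it is already set. */
    setenv("CAIRO_GL_COMPOSITOR", "msaa", 1);

    /* ... create the cairo GL device and surfaces as usual ... */
    return 0;
}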
I have sample code showing cairo for quartz, xcb, image, glx, egl or nsgl. If you are interested, I can send it to you.
Any bug reports/patches are welcome. I have not had time to get WGL (MS Windows) to work yet. Additionally, it would be nice to have a D3D backend for cairo; I just don't have time to do that - it's on the todo list.
Enjoy
Yes, cairo has been ported to use NSOpenGL. I will show you how, and sample code, if you are interested. Performance is much faster than Quartz GL.
You definitely can use ANGLE:
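// These EGL/GLES2 headers (and the matching libEGL/libGLESv2 libraries to link against) come from the ANGLE build.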
#define GL_GLEXT_PROTOTYPES
#include <GLES2/gl2.h>
#include <EGL/egl.h>

OpenGL call flow in an Ubuntu guest in VirtualBox

I compiled the glxgears.c demo code by linking it with the libGL.so provided by Mesa.
This build is for an Ubuntu guest in VirtualBox; the host is Windows 7.
When I run my demo code in Ubuntu, at run time it accesses VBoxOGL.so, which is provided by the VirtualBox Guest Additions, and uses the host's 3D hardware acceleration.
If I rename VBoxOGL.so to some other name, my demo code does not use hardware acceleration but falls back to software rendering.
Can you tell me how my demo code is connected to VBoxOGL.so?
I need the flow: demo code -> libGL.so -> how? -> VBoxOGL.so -> h/w. As these two libraries are not linked together at compile time, I am not sure how libGL calls are directed to VBoxOGL.so.
Please help me understand the flow, and which library or module does this redirection.
I don't know the internals of VirtualBox, but my best guess would be that they either LD_PRELOAD that .so in place of libGL.so, or that it's implemented as a Mesa state tracker and acts on the back side of Mesa, as other DRI2-based GPU drivers do.
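To illustrate the general mechanism only (a toy sketch, not VirtualBox's actual code; the library path and the "glFlush" symbol lookup are placeholders), a dispatcher library can pick a driver .so at runtime with dlopen()/dlsym():

#include <dlfcn.h>
#include <stdio.h>

typedef void (*gl_func)(void);

int main(void)
{
    /* A libGL-like dispatcher could try the vendor driver first ... */
    void *driver = dlopen("VBoxOGL.so", RTLD_NOW | RTLD_LOCAL);
    if (!driver) {
        /* ... and fall back to a software path if it is missing or renamed. */
        fprintf(stderr, "driver not found (%s), using software rendering\n", dlerror());
        return 1;
    }

    /* Look up an entry point by name and forward the call to it. */
    gl_func entry = (gl_func) dlsym(driver, "glFlush");
    if (entry)
        entry();

    dlclose(driver);
    return 0;
}

(Compile with -ldl.)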