Windowless OpenGL Context in Apache2 Module - c++

I'm trying to develop an Apache2 module that utilizes OpenGL to perform off-screen rendering and dynamically generate images that I can then send back to the client.
Apache2 is running on an Ubuntu 12.04 machine and I created a test module that renders a quad and stores the frame as an image to disk using OpenGL/GLX. But when the module receives a client request, it crashes at XOpenDisplay(0) with a segmentation fault. Any ideas what could be going wrong?
Edit:
All the examples I have seen talk about using a pixel buffer (PBuffer). As far as I know, these are deprecated and FBOs should be used instead. Can someone explain how to create a context and use FBOs to perform off-screen rendering?

While technically it's perfectly possible to do windowless, display-server-less, off-screen, GPU-accelerated rendering with OpenGL, in practice it's impossible these days, because you need a display environment to actually get access to the GPU. Fortunately the structure of graphics systems is changing (hybrid graphics, display compositors). Mesa already provides an off-screen context creation mode (OSMesa), but it's far from feature-complete.
So right now, you'll need some kind of display-server drawable to work with, on which you can bind a context. X11 offers two kinds of GPU-accelerated drawables: Windows and PBuffers. You can use FBOs with either (PBuffers are technically Windows that cannot be mapped to the root window and that have an off-screen canvas). The easiest way to go is to create a regular window on an X server but never show it; you can still create an OpenGL context on it and create FBOs, as shown in numerous tutorials. But for OpenGL to work, the X server you use must be active, hold the console, and be configured to use the GPU (theoretically, with newer hybrid-graphics-capable X servers and drivers, it should be possible to configure the X server to use a dummy display device and the GPU as a secondary device for accelerated rendering, but I have never tried that so far).
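A minimal sketch of that "unmapped window plus FBO" route follows (untested here, error handling omitted; it assumes a running, GPU-backed X server, and on non-Mesa drivers the FBO entry points would have to be resolved through glXGetProcAddress or a loader such as GLEW):
#define GL_GLEXT_PROTOTYPES   // with Mesa this exposes the FBO entry points directly
#include <X11/Xlib.h>
#include <GL/glx.h>

int main()
{
    Display *dpy = XOpenDisplay(NULL);          // requires a running X server
    static int visAttribs[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), visAttribs);

    XSetWindowAttributes swa = {};
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen), vi->visual, AllocNone);
    // Create the window but never call XMapWindow(), so it stays invisible.
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 640, 480, 0,
                               vi->depth, InputOutput, vi->visual, CWColormap, &swa);

    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    glXMakeCurrent(dpy, win, ctx);

    // From here on, render into an FBO instead of the (invisible) window framebuffer.
    GLuint fbo, tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 640, 480, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
    // ... draw here, then glReadPixels() and encode/send the image ...

    glXMakeCurrent(dpy, None, NULL);
    glXDestroyContext(dpy, ctx);
    XDestroyWindow(dpy, win);
    XCloseDisplay(dpy);
    return 0;
}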

Related

RPI OpenGL PWM display driver

So I'm building a system based on a Raspberry Pi 4 running Linux (image created through Buildroot) driving an LED matrix (64x32 RGB), and I'm very confused about the Linux software stack. I'd like to be able to use OpenGL to render at a small resolution and then transfer the result to a driver that actually drives the LED matrix.
I've read about DRM, KMS, GEM and other systems and I've concluded the best way to go about it would be to have the following working scheme:
User space:    App
                |  (OpenGL)
                v
Kernel space:  DRM --GEM--> LED device driver
                                  |
                                  v
Hardware:                    LED matrix
Some of this may not make a lot of sense since the concepts are still confusing to me.
Essentially, the app would make OpenGL calls that generate frames; those frames could be mapped to DRM buffers, which could be shared with the LED device driver, which would then drive the LEDs in the matrix.
Would something like this be the best way about it?
I could just program some dumb buffer cpu implementation but I'd rather take this as a learning experience.
OpenGL renders into a buffer (called the "framebuffer") that is usually displayed on the screen. Rendering into an off-screen buffer, as the name implies, does not render onto the screen but into an array which can then be read by C/C++ code. There is one indirection on modern operating systems: usually you have multiple windows visible on your screen, so the application can't render onto the screen itself but only into a buffer managed by the windowing system, which then composites everything into one final image. On Linux with Wayland, multiple Wayland clients create and draw into buffers that the Wayland compositor then composites.
If you only need the output of your own application, just render into an off-screen buffer.
If you want to capture another application's output, read its framebuffer by writing your own Wayland compositor. Note that this may be hard (I've never done it), especially if you want to use hardware acceleration.
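To make the off-screen buffer idea concrete, here is a rough sketch of rendering into a framebuffer object and reading the pixels back into CPU memory. It assumes an OpenGL context is already current (for example one created through EGL/GBM on the Pi); the function name is a placeholder and error checking is omitted:
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <vector>

// Hypothetical helper: render one frame off screen and return the raw RGBA pixels.
std::vector<unsigned char> renderFrameOffscreen(int width, int height)
{
    GLuint fbo, color;
    glGenRenderbuffers(1, &color);
    glBindRenderbuffer(GL_RENDERBUFFER, color);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color);

    glViewport(0, 0, width, height);
    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... normal OpenGL draw calls go here ...

    // Read the finished frame back; this buffer is what the LED driver would consume.
    std::vector<unsigned char> pixels(width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    glDeleteFramebuffers(1, &fbo);
    glDeleteRenderbuffers(1, &color);
    return pixels;
}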

How do I optimize my OpenGL textures for Remote Desktop/ANGLE?

I display a 2D texture in OpenGL using Qt.
Unfortunately I have found out that I need to support running my application via Remote Desktop to a Windows 7 PC. In this case I need to use the OpenGL ES 2.0 API (ANGLE).
Due to low bandwidth my 2D visualization seems to be lagging.
My texture may have higher resolution than the screen so that it needs to be minified.
When not using Remote Desktop, my approach has been to specify a very detailed texture and let the graphics card do the minification.
However, now I am wondering whether the OpenGL calls are executed in software locally rather than on the remote machine, in which case the textures would have to be transmitted via TCP/IP?
Does this mean that I should do minification myself before using the textures?
As an example instead of using a 2048x2048 texture I may bin 2x2 pixels in C++ and upload a 1024x1024 texture.
Alternatively I could use glGenerateMipmap?
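For reference, the glGenerateMipmap route would look roughly like this (a sketch assuming an ES 2.0/ANGLE context is current; the helper name and the rgba buffer are placeholders, and ES 2.0 only mipmaps power-of-two textures):
#include <GLES2/gl2.h>

// Hypothetical helper: upload the full-resolution image and let the GPU
// build the minified mipmap levels instead of binning pixels on the CPU.
GLuint uploadWithMipmaps(const unsigned char *rgba, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    glGenerateMipmap(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    return tex;
}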
I feel several terms are being confused here: RDP just transfers the entire remote desktop to you, whatever is on it, so no, the OpenGL calls are not executed in software locally. Hence, unfortunately, reducing the texture size in your app will not help, even if you remove the texture entirely (try it). RDP is not really suitable for real-time animation.
Your app had better run locally on the user's machine, so it is better to think about how to distribute your OpenGL app to users.
If you cannot install your app on the users' machines, or give them an installation kit, then maybe turning your app into a browser app is a better option.
WebGL exists for exactly this kind of application, and it is a standard too:
https://www.khronos.org/webgl/

Can EGL application run in console mode?

I want to implement an OpenGL application which generates images that I then view via a webpage.
The application is intended to run on a Linux server which has no display and no X Windows, but does have a GPU.
I know that EGL can use a pixmap or a pbuffer as a render target,
but the function eglGetDisplay worries me: it sounds like I still need an attached display to make it work?
Does EGL work without a display and without X Windows or Wayland?
This is a recurring question. TL;DR: With the current Linux graphics driver model it is impossible to use the GPU with traditional drivers without running an X server. If the GPU is supported by KMS+DRM+DRI you can do it. (EDIT:) Also, in 2016 Nvidia finally introduced truly headless OpenGL support in their drivers through EGL.
The long story is that technically GPUs are perfectly capable of rendering to an offscreen buffer without a display being attached or a graphics server running. However, due to the history of graphics driver and environment development, this is not possible yet, and has not been possible for a long time. The assumption back then (when graphics was first introduced to Linux) was: "The graphics device is there to deliver a picture to a screen." That a graphics card could be used as an accelerating coprocessor was not even a figment of an idea.
Add to this, that until a few years ago, the Linux kernel itself had no idea how to talk to graphics devices (other than a dumb framebuffer somewhere in the system's address space). The X server was what talked to GPUs, so you needed that to run. And the first X server developers made the assumption that there is a person between keyboard and chair.
So what are your options:
Short term, if you're using an NVidia GPU: just start an X server. You don't need a full-blown desktop environment; you can even save yourself the trouble of starting a window manager. Just have the X server claim the VT and be active. There is now support for headless OpenGL contexts through EGL in the Nvidia drivers.
If you're using an AMD or Intel GPU you can talk directly to it, either through EGL or using KMS (Google for something called kmscube; when trying it, make sure you switch away from your X server to a text VT first, otherwise you'll crash the X server). I've not tried it yet, but it should be possible to adjust the kmscube example so that it uses the GPU to render into an offscreen buffer, without switching the VT to graphics mode or producing any graphics output on the display framebuffer at all.
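For the headless EGL path mentioned above, the rough shape is the device-platform route. This is a sketch only, not a complete program: it assumes the EGL_EXT_platform_device, EGL_EXT_device_enumeration and EGL_KHR_surfaceless_context extensions are available, and it omits all error checking:
#include <EGL/egl.h>
#include <EGL/eglext.h>

int main()
{
    // Resolve the extension entry points (they are not part of core EGL).
    PFNEGLQUERYDEVICESEXTPROC eglQueryDevicesEXT =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC eglGetPlatformDisplayEXT =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");

    // Enumerate GPUs and open an EGL display on the first one, with no window system at all.
    EGLDeviceEXT devices[8];
    EGLint numDevices = 0;
    eglQueryDevicesEXT(8, devices, &numDevices);
    EGLDisplay dpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT, devices[0], NULL);
    eglInitialize(dpy, NULL, NULL);

    static const EGLint cfgAttribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &n);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);

    // With EGL_KHR_surfaceless_context no surface is needed at all;
    // otherwise create a small pbuffer with eglCreatePbufferSurface first.
    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

    // ... create FBOs, render off screen, read the pixels back ...

    eglTerminate(dpy);
    return 0;
}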
As datenwolf said, you can create a framebuffer without using X on AMD and Intel GPUs. I am using an AMD graphics card with EGL and I am able to create a framebuffer and draw into it. With the Mesa library, configured without X, you can achieve this.

Remote off-screen rendering (Linux / no GUI)

The situation is as follows:
There is a remote Linux server (no GUI), which builds the OpenGL scene.
Objective: Transfer generated image(s) to client windows machine
There are a few things about off-screen rendering I cannot understand; I have read a lot of literature, but it is still not clear to me:
Using GLUT implies setting the DISPLAY variable. If I understand correctly, this means remote rendering via X11. If I run an X11 server (XWin server) on the Windows machine, everything works. If I try to run without a rendering server, then: freeglut (./WFWorkspace): failed to open display 'localhost:11.0'. In any case, X11 is not suitable.
Do I need to create a graphics context (hardware rendering support is required)?
How can I create a graphics context on the Linux server without GLUT/X11?
Framebuffer object - is it suitable for my task, and is a graphics context necessary for it?
What is the most efficient way to solve this problem (rendering requires hardware support)?
Not an important issue, but nevertheless:
Pixel buffer object. I plan to use it to increase the read performance of GPU memory. Is it worthwhile for my task?
You need to modify your program to use OSMesa - it's a "null display" driver used by Mesa for software rendering. Consider this answer to a near-duplicate question as a starter:
https://stackoverflow.com/a/8442800/2702398
For a full example, you can check out the examples in the Mesa distribution itself, such as this: http://cgit.freedesktop.org/mesa/demos/tree/src/osdemos/osdemo.c
Update
It appears that VirtualGL (http://www.virtualgl.org) supports remote rendering of the OpenGL/GLX protocol and serves rendered pixmaps to the client over VNC (whereupon the VNC head can trivially be made virtual).
If you want to use the full OpenGL spec, use X11 to create a context. Here is a tutorial showing how you can do this:
http://arrayfire.com/remote-off-screen-rendering-with-opengl/

Print an OpenGL Texture to File without Display?

I'm trying to use OpenGL to help with processing Kinect depth map input into an image. At the moment we're using the Kinect as a basic motion sensor, and the program counts how many people walk by and takes a screen shot each time it detects someone new.
The problem is that I need to get this program to run without access to a display. We're wanting to run it remotely over SSH, and the network traffic from other services is going to be too much for X11 forwarding to be a good idea. Attaching a display to the machine running the program is a possibility, but we're wanting to avoid doing that for energy consumption reasons.
The program does generate a 2D texture object for OpenGL, and usually just uses GLUT to render it before I read the pixels back and output them to a .PNG file using FreeImage. The issue I'm running into is that once the GLUT function calls are removed, all the .PNG files contain are black boxes.
I'm using the OpenNI and NITE drivers for the Kinect. The programming language is C++, and I'm needing to use Ubuntu 10.04 due to the hardware limitations of the target device.
I've tried using OSMesa and framebuffer objects, but I am a complete OpenGL newbie, so I haven't gotten OSMesa to render properly in place of the GLUT functions, and my compiler can't find any of the OpenGL framebuffer functions in GL/glext.h or GL/gl.h.
I know that textures can be read into a program from image files, and all I want to output is a single 2-D texture. Is there a way to skip the headache of off-screen rendering in this case and print a texture directly to an image file without needing OpenGL to render it first?
The OSMesa library is neither a drop-in replacement for GLUT, nor can the two work together. If you only need the off-screen rendering part, without interaction, you have to implement a simple event loop yourself.
For example:
/* init OSMesa (needs GL/osmesa.h and linking against libOSMesa) */
#include <GL/osmesa.h>
#include <cstdlib>

OSMesaContext mContext;
void *mBuffer;
size_t mWidth = 640;    // choose the off-screen image size
size_t mHeight = 480;
// Create an RGBA context with a 16-bit depth buffer, no stencil, no accum buffer
mContext = OSMesaCreateContextExt( OSMESA_RGBA, 16, 0, 0, NULL );
// The color buffer lives in ordinary client memory, so it has to be allocated by us
mBuffer = malloc(mWidth * mHeight * 4 * sizeof(GLubyte));
OSMesaMakeCurrent(mContext, mBuffer, GL_UNSIGNED_BYTE, mWidth, mHeight);
After this snippet you can use normal OpenGL calls to render, and after a glFinish() call the results can be accessed through the mBuffer pointer.
In your event loop you can call your usual onDisplay, onIdle, etc. callbacks.
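For example, a stand-in for the GLUT main loop could look roughly like this (a sketch: onIdle, onDisplay and saveBufferAsPNG are placeholders for whatever the GLUT-based code already does in its callbacks and with FreeImage):
bool running = true;
while (running) {
    onIdle();        // e.g. poll the Kinect and update the scene state
    onDisplay();     // issue the usual OpenGL draw calls
    glFinish();      // make sure rendering into mBuffer has completed
    saveBufferAsPNG(mBuffer, mWidth, mHeight);   // e.g. via FreeImage
    // set running = false once the capture session should stop
}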
We're wanting to run it remotely over SSH, and the network traffic from other services is going to be too much for X11 forwarding to be a good idea.
If you forward X11 and create the OpenGL context on that display, OpenGL traffic will go over the network no matter whether a window is visible or not. So what you actually need to do (if you want to make use of GPU-accelerated OpenGL) is to start an X server on the remote machine and keep its VT active (i.e. the X server must be the program that "owns" the display). Then your program can make a connection to that X server only. But this requires using Xlib. Some time ago fungus wrote a minimalistic Xlib example; I extended it a bit so that it makes use of FBConfigs. You can find it here: https://github.com/datenwolf/codesamples/blob/master/samples/OpenGL/x11argb_opengl/x11argb_opengl.c
In your case you should render to an FBO or a PBuffer. Never use a visible window framebuffer to render stuff that's to be stored away! If you create an OpenGL window, as with the code I linked, use an FBO. Creating a GLX PBuffer is not unlike creating a GLX window, except that it will be off-screen.
The trick is not to use the default X display (of your SSH forward) but a separate connection to the local X server. The key is this line:
Xdisplay = XOpenDisplay(NULL);
Instead of NULL, you'd pass the display string of the local server there. To make this work you'll also need to (manually) add an xauth entry to, or disable xauth on, the OpenGL rendering server.
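For example (assuming the headless X server on the rendering machine runs as display ":0"; use whatever display number it was actually started with):
#include <X11/Xlib.h>
// Connect to the local X server that owns the GPU, not the SSH-forwarded display.
Display *Xdisplay = XOpenDisplay(":0");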
You can use glGetTexImage to read a texture back from OpenGL.
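A minimal sketch of that readback (desktop GL only, since glGetTexImage does not exist in OpenGL ES; the helper name and parameters are placeholders for your own texture id and size):
#include <GL/gl.h>
#include <vector>

// Hypothetical helper: copy a 2D texture's contents back into CPU memory.
std::vector<unsigned char> readTexture(GLuint textureId, int width, int height)
{
    std::vector<unsigned char> pixels(width * height * 4);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;   // hand this to FreeImage to write the PNG
}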