Does EGL require a GPU?

I am trying to do server-side rendering for a problem I am working on. EGL provides a way to create an OpenGL context without the need for a windowing system. I have been able to render offscreen successfully using EGL on my laptop, but when I try to run the code on a DigitalOcean instance, EGL fails to initialize. The ability to run this code on a compute resource from a cloud provider is one of the use cases I need to support.
I want to know whether EGL is a viable approach, but I don't understand why it is failing. Does it require a GPU? Is this a problem with running on a virtual machine?
The following code reproduces the problem I am experiencing:
#include <EGL/egl.h>
#include <assert.h>

int main(int argc, char** argv) {
    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    assert(display != EGL_NO_DISPLAY);

    EGLBoolean result = eglInitialize(display, NULL, NULL);
    //assert(result != EGL_FALSE);

    EGLint errcode = eglGetError();
    assert(errcode == EGL_SUCCESS);
    return 0;
}
The error code returned after calling eglInitialize is EGL_NOT_INITIALIZED and, according to the header, this means "EGL is not initialized, or could not be initialized, for the specified EGL display connection." The default display is returned without error, so I assume the problem is that it could not be initialized, and I am trying to work out why.

If you want to use EGL with hardware acceleration, you need a GPU, so a server without a GPU provides little benefit.
If you still want to render on the server in software while using the OpenGL API, look into Mesa's software implementation (llvmpipe).
But once you are rendering in software anyway, you can also consider entirely different approaches, such as a software ray tracer like POV-Ray.
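If the machine has no GPU at all, one option worth trying (an assumption beyond what this answer covers: it requires a Mesa build that ships llvmpipe and the EGL_MESA_platform_surfaceless extension) is to ask for Mesa's surfaceless platform explicitly instead of the default display. A minimal sketch:
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <assert.h>

int main(void) {
    /* EGL_EXT_platform_base entry point, declared in eglext.h */
    PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");
    assert(getPlatformDisplay != NULL);

    /* Ask Mesa for its surfaceless platform; no window system, no GPU needed
     * when llvmpipe is the renderer. */
    EGLDisplay display =
        getPlatformDisplay(EGL_PLATFORM_SURFACELESS_MESA, EGL_DEFAULT_DISPLAY, NULL);
    assert(display != EGL_NO_DISPLAY);

    EGLBoolean ok = eglInitialize(display, NULL, NULL);
    assert(ok == EGL_TRUE);

    eglTerminate(display);
    return 0;
}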

Related

How can I use OpenGL to render to memory without requiring any windowing system library?

I would like to use OpenGL (version 1.5) to render images to memory without displaying them on screen (I can, for example, just save them as image files or render them as ASCII in the terminal). I do not want any I/O. I've found similar questions on SO, but none that address my specific further requirements, which follow.
Now I think I could use a library like GLX and tell it not to open any window, but I also don't want my code to depend on any windowing-system library such as X11, because my program simply doesn't do anything with windows or I/O. I don't see why it should be burdened by a dependency on the X Window System (some systems simply don't have it; they may have no graphical interface at all). My program should only depend on an OpenGL driver.
I understand that for this I need to create an OpenGL context, which is not part of OpenGL and is platform-dependent, so I might need some library for creating an OpenGL render-to-memory context, ideally in a multiplatform way (i.e. abstracting away the platform-dependent parts). Does anything like this exist? (I am not interested in any proprietary, GPU-specific or driver-specific software; the program should run on any GPU that supports the given OpenGL version.) Is there something else I should consider?
Basically, I want my program to be very minimal and not burdened by what it doesn't need: all it needs is a generic OpenGL driver to render an image into memory, and it should work on any system that has such a driver.
Thank you.
Depending on the operating system you're using and the availability of drivers, you can do pure, headless, GPU accelerated OpenGL rendering using EGL. Nvidia has a nice developer blog about how to do it at https://developer.nvidia.com/blog/egl-eye-opengl-visualization-without-x-server/
The gist of it is to create an EGL context on a display device without associating it with an output. Source (copied directly from the linked article):
#include <EGL/egl.h>

static const EGLint configAttribs[] = {
    EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_DEPTH_SIZE, 8,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
    EGL_NONE
};

static const int pbufferWidth = …;
static const int pbufferHeight = …;

static const EGLint pbattr[] = {
    EGL_WIDTH, pbufferWidth,
    EGL_HEIGHT, pbufferHeight,
    EGL_NONE,
};

int main(int argc, char *argv[])
{
    EGLDisplay eglDpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);

    EGLint major, minor;
    eglInitialize(eglDpy, &major, &minor);

    EGLint numConfigs;
    EGLConfig eglCfg;
    eglChooseConfig(eglDpy, configAttribs, &eglCfg, 1, &numConfigs);

    EGLSurface eglSurf = eglCreatePbufferSurface(eglDpy, eglCfg, pbattr);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext eglCtx = eglCreateContext(eglDpy, eglCfg, EGL_NO_CONTEXT, NULL);

    eglMakeCurrent(eglDpy, eglSurf, eglSurf, eglCtx);

    do_opengl_stuff();

    eglTerminate(eglDpy);
    return 0;
}
If you don't have access to EGL, but your OS and your GPU are supported by Linux DRM/DRI, you can go the KMS/GBM route and work with framebuffer objects obtained through the extension mechanism (with Mesa you can just use them as if they were core functionality, even with OpenGL-1.x). The kmscube demo has a "surfaceless" mode which demonstrates doing exactly that.
In short: EGL is the "clean" way to do it. KMS is the "hacky" way to do it.
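For reference, here is a minimal sketch of the render-node/GBM variant of that route. It assumes a Linux box exposing /dev/dri/renderD128 (the exact path varies) and an EGL stack with EGL 1.5, EGL_KHR_platform_gbm and EGL_KHR_surfaceless_context; all actual rendering then goes into FBOs you create yourself (error checking omitted):
#include <fcntl.h>
#include <gbm.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

int main(void)
{
    /* Open a DRM render node directly - no display server involved. */
    int fd = open("/dev/dri/renderD128", O_RDWR);
    struct gbm_device *gbm = gbm_create_device(fd);

    /* EGL 1.5 entry point; eglGetPlatformDisplayEXT works the same way. */
    EGLDisplay dpy = eglGetPlatformDisplay(EGL_PLATFORM_GBM_KHR, gbm, NULL);
    eglInitialize(dpy, NULL, NULL);
    eglBindAPI(EGL_OPENGL_API);

    static const EGLint cfgAttribs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_NONE };
    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &n);

    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);

    /* No window, no pbuffer: needs EGL_KHR_surfaceless_context. */
    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

    /* ... create FBOs and render into them ... */

    eglTerminate(dpy);
    gbm_device_destroy(gbm);
    return 0;
}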
Another option, probably completely outside of your scope right now, would be to use Vulkan, where strictly speaking, headless rendering is the "default", and methods for getting stuff on-screen are actual extensions to the specification:
VK_KHR_wayland_surface
VK_KHR_xcb_surface
VK_KHR_xlib_surface
VK_KHR_win32_surface

Windowless OpenGL Context in Apache2 Module

I'm trying to develop an Apache2 module that utilizes OpenGL to perform off-screen rendering and dynamically generate images that I can then send back to the client.
Apache2 is running on an Ubuntu 12.04 machine and I created a test module that renders a quad and stores the frame as an image to disk using OpenGL/GLX. But when the module receives a client request, it crashes at XOpenDisplay(0) with a segmentation fault. Any ideas what could be going wrong?
Edit:
All the examples I have seen talk about using a pixel buffer (PBuffer). As far as I know, these are deprecated and FBOs should be used instead. Can someone explain how to create a context and use FBOs to perform off-screen rendering?
While it is technically perfectly possible to do windowless, display-server-less, GPU-accelerated off-screen rendering with OpenGL, in practice it is impossible these days, because you need a display environment to actually get access to the GPU. Fortunately, the structure of graphics systems is changing (hybrid graphics, display compositors). Mesa already provides an off-screen context creation mode (OSMesa), but it is far from feature-complete.
So right now you will need some kind of display-server drawable to bind a context to. X11 offers two kinds of GPU-accelerated drawables: windows and PBuffers. You can use FBOs with either (PBuffers are technically windows that cannot be mapped to the root window and have an off-screen canvas). The easiest way to go is to create a regular window on an X server but never map (show) it; you can still create an OpenGL context on it and create FBOs, as shown in numerous tutorials. But for OpenGL to work, the X server you use must be active, hold the console, and be configured to use the GPU. (Theoretically, with newer hybrid-graphics-capable X servers and drivers, it should be possible to configure the X server to use a dummy display device and the GPU as a secondary device for accelerated rendering, but I have never tried that so far.)
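For reference, a minimal sketch of the PBuffer variant described above, using plain GLX 1.3 calls. It assumes an X server is reachable through DISPLAY; error checking is omitted:
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);          /* still needs a running X server */

    static const int fbAttribs[] = {
        GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
        GLX_RENDER_TYPE,   GLX_RGBA_BIT,
        GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
        None
    };
    int count = 0;
    GLXFBConfig *configs = glXChooseFBConfig(dpy, DefaultScreen(dpy), fbAttribs, &count);

    /* A tiny 1x1 pbuffer is enough; real rendering goes into FBOs. */
    static const int pbAttribs[] = { GLX_PBUFFER_WIDTH, 1, GLX_PBUFFER_HEIGHT, 1, None };
    GLXPbuffer pbuffer = glXCreatePbuffer(dpy, configs[0], pbAttribs);

    GLXContext ctx = glXCreateNewContext(dpy, configs[0], GLX_RGBA_TYPE, NULL, True);
    glXMakeContextCurrent(dpy, pbuffer, pbuffer, ctx);

    /* From here on, create FBOs and render as usual. */
    return 0;
}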

Remote off-screen rendering (Linux / no GUI)

The situation is as follows:
There is a remote Linux server (no GUI), which builds the OpenGL scene.
Objective: transfer the generated image(s) to the client Windows machine.
There are some things about offscreen rendering that I cannot work out; I have read a lot of literature but still do not understand them well:
Using GLUT implies setting the DISPLAY variable, which, if I understand correctly, means remote rendering via X11. If I run an X11 server (XWin) on the Windows machine, everything works. If I try to run without the rendering server, I get: freeglut (./WFWorkspace): failed to open display 'localhost:11.0'. In any case, X11 is not suitable.
Do I need to create a graphics context (hardware rendering support is required)?
How can I create a graphics context on the Linux server without GLUT/X11?
Is a framebuffer object suitable for my task, and does it need a graphics context?
What is the most efficient way to solve this problem (rendering requires hardware support)?
Not an important issue, but nevertheless:
Pixel buffer objects: I plan to use them to increase the performance of reading back GPU memory. Are they worthwhile within my task?
You need to modify your program to use OSMesa - it's a "null display" driver used by Mesa for software rendering. Consider this answer to a near-duplicate question as a starter:
https://stackoverflow.com/a/8442800/2702398
For a full example, you can check out the examples in the Mesa distribution itself, such as this: http://cgit.freedesktop.org/mesa/demos/tree/src/osdemos/osdemo.c
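A minimal OSMesa sketch along those lines (the API is declared in GL/osmesa.h; error checking omitted):
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <stdlib.h>

int main(void)
{
    const int width = 640, height = 480;
    unsigned char *buffer = (unsigned char *)malloc(width * height * 4);

    /* RGBA context with a 16-bit depth buffer, no stencil/accum. */
    OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 16, 0, 0, NULL);
    OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, width, height);

    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... draw, then read or save `buffer` ... */
    glFinish();

    OSMesaDestroyContext(ctx);
    free(buffer);
    return 0;
}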
Update
It appears that VirtualGL (http://www.virtualgl.org) supports remote rendering of the OpenGL/GLX protocol and serves the rendered pixmaps to the client over VNC (whereupon the VNC head can trivially be made virtual).
If you want to use the full OpenGL spec, use X11 to create the context. Here is a tutorial showing how you can do this:
http://arrayfire.com/remote-off-screen-rendering-with-opengl/

SDL Won't Use OpenGL driver, defaults to DirectX

I am using SDL 1.2 for a project. It renders things just fine, but I want to add some small pixel-shader effects. All of the examples for this show using the OpenGL driver for SDL's video subsystem.
So I start the video subsystem with opengl as the driver and tell SDL_SetVideoMode() to use SDL_OPENGL. When I run the program, it now crashes on the SDL_SetVideoMode() call, which worked fine without forcing OpenGL.
I went back and ran the program without forcing OpenGL, dumped out SDL_VideoDriverName(), and it says I am using the "directx" driver.
My question is two-pronged: what is wrong, such that it doesn't like the opengl driver, and how do I get SDL to use OpenGL without problems here? Or, how do I get the SDL surface into DirectX so I can apply pixel-shader effects?
I would prefer to use OpenGL, as it would be easier to port the code to other platforms.
As an example, here is code that breaks when I try to force the OpenGL driver:
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <SDL.h>

INT WINAPI WinMain( HINSTANCE hInst, HINSTANCE, LPSTR strCmdLine, INT )
{
    SDL_putenv("SDL_VIDEODRIVER=opengl");
    SDL_Init( SDL_INIT_EVERYTHING );
    SDL_VideoInit("opengl", 0);
    SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 ); // crashes here
    SDL_Surface *mWindow = SDL_SetVideoMode(1024, 768, 32, SDL_HWSURFACE|SDL_HWPALETTE|SDL_DOUBLEBUF|SDL_OPENGL);
    SDL_Quit();
    return 0;
}
SDL without the OpenGL option uses DirectX to obtain direct access to a 3D drawing surface. Using OpenGL triggers a whole different code path in SDL, and when using OpenGL with SDL you can no longer use the SDL surface for direct access to the pixels. It is very likely your program crashes because you are still trying to access the surface directly.
Anyway, if you want to use pixel shaders, you can no longer use the direct pixel-buffer access provided by plain SDL; you have to do everything through OpenGL then.
Update
Some of the parameters you give to SDL are mutually exclusive. Also, the driver name given to SDL_VideoInit makes no sense when used together with OpenGL (it is only relevant together with DirectDraw, to select a specific output device).
Also, because you already called SDL_Init(SDL_INIT_EVERYTHING), the call to SDL_VideoInit is redundant and possibly actively harmful.
See this for a fully working OpenGL example:
http://sdl.beuc.net/sdl.wiki/Using_OpenGL_With_SDL
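For comparison, a stripped-down sketch of the working path, using only standard SDL 1.2 calls: no "opengl" driver name, no SDL_VideoInit, no surface flags, just SDL_OPENGL on the video mode (window size here is arbitrary):
#include <SDL.h>
#include <SDL_opengl.h>

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_Surface *screen = SDL_SetVideoMode(1024, 768, 32, SDL_OPENGL);

    /* All drawing now goes through OpenGL, not the SDL surface. */
    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);
    SDL_GL_SwapBuffers();

    SDL_Quit();
    return 0;
}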

Cross-platform renderer in OpenGL ES

I'm writing a cross-platform renderer. I want to use it on Windows, Linux, Android, and iOS.
Do you think that it is a good idea to avoid absolute abstraction and write it directly in OpenGL ES 2.0?
As far as I know, I should be able to compile it on PC against standard OpenGL with only small changes in the code that handles the context and the connection to the windowing system.
Do you think that it is a good idea to avoid absolute abstraction and write it directly in OpenGL ES 2.0?
Your principal difficulties with this will be dealing with those parts of the ES 2.0 specification which are not actually the same as OpenGL 2.1.
For example, you just can't shove ES 2.0 shaders through a desktop GLSL 1.20 compiler. ES 2.0 shaders use things like precision qualifiers, which are illegal constructs in GLSL 1.20.
You can, however, #define around them, but this requires a bit of manual intervention: you will have to insert an #ifdef into the shader source file. There are shader compilation tricks you can do to make this a bit easier.
Indeed, because GL ES uses a completely different set of extensions (though some are mirrors and subsets of desktop GL extensions), you may want to do this.
Every GLSL shader (desktop or ES) needs a "preamble". The first non-comment thing in a shader needs to be a #version declaration, and even that differs: desktop GLSL 1.20 expects #version 120, while GLSL ES 2.0 expects #version 100. After that comes the #extension list (if any), which enables extensions needed by the shader.
Since GL ES uses different extensions from desktop GL, you will need to change this extension list. And since odds are good you're going to need more GLSL ES extensions than desktop GL 2.1 extensions, these lists won't just be 1:1 mappings, but completely different lists.
My suggestion is to employ the ability to give GLSL shaders multiple strings. That is, your actual shader files do not have any preamble stuff. They only have the actual definitions and functions. The main body of the shader.
When running on GL ES, you have a global preamble that you will affix to the beginning of the shader. You will have a different global preamble in desktop GL. The code would look like this:
GLuint shader = glCreateShader(/*shader type*/);
const char *shaderList[2];
shaderList[0] = GetGlobalPreambleString(); //Gets preamble for the right platform
shaderList[1] = LoadShaderFile(); //Get the actual shader file
glShaderSource(shader, 2, shaderList, NULL);
The preamble can also include a platform-specific #define. User-defined of course. That way, you can #ifdef code for different platforms.
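As an illustration, the two preambles could look like the sketch below (the exact contents and the PLATFORM_* names are made-up examples, and GetGlobalPreambleString() would return one or the other); the #defines in the desktop preamble strip the ES precision qualifiers that GLSL 1.20 rejects:
/* Hypothetical preambles for the multi-string scheme shown above. */
static const char *preambleGLES =
    "#version 100\n"
    "#define PLATFORM_GLES 1\n";

static const char *preambleDesktop =
    "#version 120\n"
    "#define lowp\n"
    "#define mediump\n"
    "#define highp\n"
    "#define PLATFORM_DESKTOP 1\n";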
There are other differences between the two. For example, while valid ES 2.0 texture uploading function calls will work fine in desktop GL 2.1, they will not necessarily be optimal. Things that would upload fine on big-endian machines like all mobile systems will require some bit twiddling from the driver in little-endian desktop machines. So you may want to have a way to specify different pixel transfer parameters on GL ES and desktop GL.
Also, there are different sets of extensions in ES 2.0 and desktop GL 2.1 that you will want to take advantage of. While many of them try to mirror one another (OES_framebuffer_object is a subset of EXT_framebuffer_object), you may run afoul of similar "not quite a subset" issues like those mentioned above.
In my humble experience, the best approach for this kind of requirement is to develop your engine in a pure C flavor, with no additional layers on top of it.
I am the main developer of the PATRIA 3D engine, which is based on the basic principle you just mentioned in terms of portability, and we achieved this by building the engine on top of basic standard libraries only.
The effort to compile the code on the different platforms is then very minimal.
The actual effort to port the entire solution can be calculated depending on the components you want to embed in your engine.
For example:
Standard C:
Engine 3D
Game Logic
Game AI
Physics
+
Window interface (GLUT, EGL, etc.) - depends on the platform; it could be GLUT for desktop and EGL for mobile devices.
Human interface - depends on the port: Java for Android, Objective-C for iOS, whatever you prefer on desktop.
Sound manager - depends on the port
Market services - depends on the port
In this way you can re-use 95% of your effort in a seamless way.
We adopted this solution for our engine, and so far it has really been worth the initial investment.
Here are the results of my experience implementing OpenGL ES 2.0 support for various platforms on which my commercial mapping and routing library runs.
The rendering class is designed to run in a separate thread. It has a reference to the object containing the map data and the current view information, and uses mutexes to avoid conflicts when reading that information at the time of drawing. It maintains a cache of OpenGL ES vector data in graphics memory.
All the rendering logic is written in C++ and is used on all the following platforms.
Windows (MFC)
Use the ANGLE library: link to libEGL.lib and libGLESv2.lib and ensure that the executable has access to the DLLs libEGL.dll and libGLESv2.dll. The C++ code creates a thread that redraws the graphics at a suitable rate (e.g., 25 times a second).
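As a rough sketch of what that setup looks like (the function name, attribute choices and error handling here are illustrative, not taken from the library; ANGLE's libEGL accepts an HDC as the native display type and an HWND as the native window type):
// hwnd is the MFC view's window handle.
#include <windows.h>
#include <EGL/egl.h>

EGLContext CreateAngleContext(HWND hwnd, EGLDisplay* outDisplay, EGLSurface* outSurface)
{
    EGLDisplay dpy = eglGetDisplay(GetDC(hwnd));
    eglInitialize(dpy, NULL, NULL);

    const EGLint cfgAttribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_DEPTH_SIZE, 24,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n = 0;
    eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &n);

    EGLSurface surf = eglCreateWindowSurface(dpy, cfg, hwnd, NULL);

    const EGLint ctxAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctxAttribs);

    // Make the context current on the thread that does the drawing.
    eglMakeCurrent(dpy, surf, surf, ctx);

    *outDisplay = dpy;
    *outSurface = surf;
    return ctx;
}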
Windows (.NET and WPF)
Use a C++/CLI wrapper to create an EGL context and to call the C++ rendering code that is used directly in the MFC implementation. The C++ code creates a thread that redraws the graphics at a suitable rate (e.g., 25 times a second).
Windows (UWP)
Create the EGL context in the UWP app code and call the C++ rendering code via a C++/CX wrapper. You will need to use a SwapChainPanel and create your own render loop running in a different thread. See the GLUWP project for sample code.
Qt on Windows, Linux and Mac OS
Use a QOpenGLWidget as your window. Use the Qt OpenGL ES wrapper to create the EGL context, then call the C++ rendering code in your paintGL() function.
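A sketch of such a widget, where MyRenderer and its Draw()/SetViewport() methods are stand-ins for your own C++ rendering class (QOpenGLWidget creates and manages the context, which is current in all three callbacks):
#include <QOpenGLWidget>
#include <memory>

// Stand-in for the C++ rendering class described in this answer.
struct MyRenderer {
    void SetViewport(int /*w*/, int /*h*/) { /* glViewport(...) etc. */ }
    void Draw() { /* OpenGL ES drawing calls */ }
};

class MapWidget : public QOpenGLWidget {
public:
    explicit MapWidget(QWidget *parent = nullptr) : QOpenGLWidget(parent) {}

protected:
    void initializeGL() override { m_renderer.reset(new MyRenderer()); }
    void resizeGL(int w, int h) override { m_renderer->SetViewport(w, h); }
    void paintGL() override { m_renderer->Draw(); }

private:
    std::unique_ptr<MyRenderer> m_renderer;
};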
Android
Create a renderer class implementing android.opengl.GLSurfaceView.Renderer. Create a JNI wrapper for the C++ rendering object. Create the C++ rendering object in your onSurfaceCreated() function. Call the C++ rendering object's drawing function in your onDrawFrame() function. You will need to import the following libraries for your renderer class:
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView.Renderer;
Create a view class derived from GLSurfaceView. In your view class's constructor first set up your EGL configuration:
setEGLContextClientVersion(2); // use OpenGL ES 2.0
setEGLConfigChooser(8,8,8,8,24,0);
then create an instance of your renderer class and call setRenderer to install it.
iOS
Use the METALAngle library, not GLKit, which Apple has deprecated and will eventually no longer support.
Create an Objective C++ renderer class to call your C++ OpenGL ES drawing logic.
Create a view class derived from MGLKView. In your view class's drawRect() function, create a renderer object if it doesn't yet exist, then call its drawing function. That is, your drawRect function should be something like:
-(void)drawRect:(CGRect)rect
{
    if (m_renderer == nil && m_my_other_data != nil)
        m_renderer = [[MyRenderer alloc] init:m_my_other_data];
    if (m_renderer)
        [m_renderer draw];
}
In your app you'll need a view controller class that creates the OpenGL context and sets it up, using code like this:
MGLContext* opengl_context = [[MGLContext alloc] initWithAPI:kMGLRenderingAPIOpenGLES2];
m_view = [[MyView alloc] initWithFrame:aBounds context:opengl_context];
m_view.drawableDepthFormat = MGLDrawableDepthFormat24;
self.view = m_view;
self.preferredFramesPerSecond = 30;
Linux
It is easiest to use Qt on Linux (see above), but it is also possible to use the GLFW framework. In your app class's constructor, call glfwCreateWindow to create a window and store it as a data member. Call glfwMakeContextCurrent to make the EGL context current, then create a data member holding an instance of your renderer class; something like this:
m_window = glfwCreateWindow(1024,1024,"My Window Title",nullptr,nullptr);
glfwMakeContextCurrent(m_window);
m_renderer = std::make_unique<CMyRenderer>();
Add a Draw function to your app class:
bool MapWindow::Draw()
{
    if (glfwWindowShouldClose(m_window))
        return false;

    m_renderer->Draw();

    /* Swap front and back buffers */
    glfwSwapBuffers(m_window);
    return true;
}
Your main() function will then be:
int main(void)
{
    /* Initialize the library */
    if (!glfwInit())
        return -1;

    // Create the app.
    MyApp app;

    /* Draw continuously until the user closes the window */
    while (app.Draw())
    {
        /* Poll for and process events */
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
Shader incompatibilities
There are incompatibilities in the shader language accepted by the various OpenGL ES 2.0 implementations. I overcome these in the C++ code using the following conditionally compiled code in my CompileShader function:
const char* preamble = "";
#if defined(_POSIX_VERSION) && !defined(ANDROID) && !defined(__ANDROID__) && !defined(__APPLE__) && !defined(__EMSCRIPTEN__)
// for Ubuntu using Qt or GLFW
preamble = "#version 100\n";
#elif defined(USING_QT) && defined(__APPLE__)
// On the Mac #version doesn't work so the precision qualifiers are suppressed.
preamble = "#define lowp\n#define mediump\n#define highp\n";
#endif
The preamble is then prefixed to the shader code.