When making an in-process compositor for embedded use, EGL's eglBindTexImage looks like a promising solution for letting the compositor use OpenGL itself without touching any client's OpenGL context: the compositor would serve each client a pbuffer and an associated GL context, with the pbuffer secretly bound to a texture object in the compositor's own GL context.
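Roughly, the mechanism I have in mind looks like this (a sketch only; GLES2 is assumed, the config, sizes, and function name are placeholders, and error handling is omitted):

#include <EGL/egl.h>
#include <GLES2/gl2.h>

/* Sketch: compositor side. `dpy` is an initialized EGLDisplay and `cfg`
   a pbuffer-capable, texture-bindable EGLConfig chosen elsewhere. */
GLuint bind_client_pbuffer(EGLDisplay dpy, EGLConfig cfg, EGLSurface *out_pbuf)
{
    /* Pbuffer handed to the client; its color buffer can later be
       bound as a texture in the compositor's context. */
    static const EGLint pbuf_attribs[] = {
        EGL_WIDTH,  256,                       /* placeholder size */
        EGL_HEIGHT, 256,
        EGL_TEXTURE_FORMAT, EGL_TEXTURE_RGBA,
        EGL_TEXTURE_TARGET, EGL_TEXTURE_2D,
        EGL_NONE
    };
    EGLSurface pbuf = eglCreatePbufferSurface(dpy, cfg, pbuf_attribs);
    *out_pbuf = pbuf;

    /* ... client renders into `pbuf` with its own context ... */

    /* In the compositor's (current) context: attach the pbuffer's
       color buffer to a texture object. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    eglBindTexImage(dpy, pbuf, EGL_BACK_BUFFER);
    /* composite using `tex`, then call
       eglReleaseTexImage(dpy, pbuf, EGL_BACK_BUFFER)
       before the client renders its next frame */
    return tex;
}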
However, I'd like this to work on the host too, which runs X. The closest thing I can find in the extensions list is GLX_EXT_texture_from_pixmap, but then I'd have to use a pixmap instead of a pbuffer and deal with a bunch of caveats such as the y-direction.
Any alternatives I might have missed?
Related
I'm attempting to learn SDL2 and am having difficulties from a practical perspective. I feel like I have a good understanding of SDL windows, renderers, and textures from an abstract perspective. However, I feel like I need to know more about what's going on under the hood to use them appropriately.
For example, when creating a texture I am required to provide a reference to a renderer. I find this odd. A texture seems like a resource that is loaded into VRAM. Why should I need to give a resource a reference to a renderer? I understand why it would be necessary to give a renderer a reference to a texture, but the reverse doesn't make any sense to me.
So that leads to another question. Since each texture requires a renderer, should each texture have its own dedicated renderer, or should multiple textures share a renderer?
I feel like there are consequences going down one route versus the other.
Short Answers
I believe the reason an SDL_Texture requires a renderer is that some backend implementations (OpenGL, for example) have contexts (which is essentially what an SDL_Renderer is), and the image data must be associated with that particular context. You cannot use a texture created in one context inside another.
As for your other question: no, you don't need or want a renderer for each texture. For the same reason (contexts), that would probably only produce correct results with the software backend.
As #keltar correctly points out, none of the renderers will work with a texture that was created with a different renderer, due to a check in SDL_RenderCopy. However, this is strictly an API requirement to keep things consistent. My point above is that even if that check were absent it would not work for backends such as OpenGL, while there is no technical reason it would not work for the software renderer.
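To make that concrete, here is a minimal sketch of the intended usage (SDL2 C API; the window size, pixel format, and texture sizes are arbitrary choices, and error checking is omitted): one renderer, several textures created against it.

#include "SDL.h"

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("demo",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    /* One renderer: this is (roughly) the backend context. */
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    /* Many textures, all created against the same renderer. */
    SDL_Texture *a = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGBA8888,
                                       SDL_TEXTUREACCESS_STATIC, 64, 64);
    SDL_Texture *b = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGBA8888,
                                       SDL_TEXTUREACCESS_TARGET, 64, 64);

    SDL_RenderClear(ren);
    SDL_RenderCopy(ren, a, NULL, NULL);   /* fine: `a` belongs to `ren` */
    /* SDL_RenderCopy(someOtherRenderer, a, ...) would fail the
       renderer check mentioned above. */
    SDL_RenderPresent(ren);

    SDL_DestroyTexture(a);
    SDL_DestroyTexture(b);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}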
Some Details about SDL_Renderer
Remember that SDL_Renderer is an abstract interface over multiple possible backends (OpenGL, OpenGL ES, D3D, Metal, Software, more?). Each of these may have restrictions on sharing data between contexts, and therefore SDL has to limit itself in the same way to maintain sanity.
Example of OpenGL restrictions
Here is a good resource for general restrictions and platform dependent functionality on OpenGL contexts.
As you can see from that page, sharing between contexts has restrictions.
Sharing can only occur in the same OpenGL implementation
This means that you certainly can't share between an SDL_Renderer using OpenGL and a different SDL_Renderer using another backend.
You can share data between different OpenGL Contexts
...
This is done using OS Specific extensions
Since SDL is cross-platform, this means they would have to write special code for each platform to support this, and some OpenGL implementations may not support it at all, so it's better for SDL to simply not support it either.
each extra render context has a major impact on the application's performance
While not a restriction, this is a reason why adding support for sharing textures is not worthwhile for SDL.
Final note: the 'S' in SDL stands for "simple". If you need to share data between contexts, SDL is simply the wrong tool for the job.
If all I want to do is some rendering with OpenGL functions, without even creating a window, do I still need to use libraries like GLX to bind OpenGL to the platform's windowing system?
If I don't need to, then where is the OpenGL context created? Normally I would use functions like glXCreateContext to create one, and if I remember right, every OpenGL program needs a context. So there seems to be a contradiction.
Hope someone can clarify this for me.
OpenGL itself (the specification) does not impose any requirements on window-system integration or on where and how a render context can be obtained. It is perfectly legal for an OpenGL implementation to offer off-screen context creation. The practical question is: which OpenGL implementations do this, and what's the API for that?
On Linux with DRI/DRM/Mesa, a window- and screen-less OpenGL context can be created with the GBM API/library on KMS-supported GPUs.
Mesa also has an off-screen variant (OSMesa), which at the moment only does software-based rendering (llvmpipe or softpipe), but it might gain GPU support later.
EGL (the Khronos cross-platform API for context management) also offers windowless/screenless context creation options, which drivers may optionally support. At least the NVidia proprietary drivers do support it: https://devblogs.nvidia.com/parallelforall/egl-eye-opengl-visualization-without-x-server/
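As a rough illustration of the EGL route (a sketch only: it uses the default display rather than the EGLDevice enumeration described in the NVidia article, assumes the driver supports surfaceless contexts, and omits error handling):

#include <EGL/egl.h>
#include <GL/gl.h>

int main(void)
{
    /* Default display for brevity; a truly headless setup would
       enumerate EGLDevices via EGL_EXT_platform_device instead. */
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);
    eglBindAPI(EGL_OPENGL_API);

    const EGLint cfg_attribs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_NONE };
    EGLConfig cfg; EGLint n;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);

    /* No window or pbuffer at all: requires EGL_KHR_surfaceless_context;
       rendering then goes into FBOs you create yourself. */
    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    /* ... create FBOs, render, glReadPixels ... */

    eglDestroyContext(dpy, ctx);
    eglTerminate(dpy);
    return 0;
}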
In the Android NDK, is it possible to make OpenGL ES 1.1 work with the typical Java-side GLSurfaceView pattern (overriding GLSurfaceView.Renderer methods such as onDrawFrame, onSurfaceCreated, etc.) while creating the framebuffer, color and depth renderbuffers, and VBOs on the C++ side?
I am trying to create them using this:
void ES1Renderer::on_surface_created() {
// Create default framebuffer object. The backing will be allocated for the current layer in -resizeFromLayer
glGenFramebuffersOES(1, &defaultFramebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);
// Create color renderbuffer object.
glGenRenderbuffersOES(1, &colorRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer);
// create depth renderbuffer object.
glGenRenderbuffersOES(1, &depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
}
However, it seems that this code does not get hold of the context appropriately; the context is, I think, created when the GLSurfaceView and renderer are initialized (Java side).
I am no expert on either the NDK or OpenGL ES, but I have to port an iOS app that uses OpenGL ES 1.1, and I aim to reuse as much code as I can. Since the app also uses platform-specific UI components (buttons, lists, etc.) while drawing GL graphics, I thought this would be the best way to go. However, I am now considering using a native activity, though I am not sure what its relationship with the other Java components would be.
Absolutely, yes. The standard approach is that you create a GLSurfaceView like you would when using OpenGL from Java, create and hook up your GLSurfaceView.Renderer implementation, and let the rendering thread start up.
From your Renderer methods, like onSurfaceCreated() and onDrawFrame(), you can now call the JNI functions that invoke functions in your native code. In those native functions, you can make any OpenGL API calls your heart desires. For example, in the function you call from onSurfaceCreated() you might create some objects and set up some initial state. In the function you call from onSurfaceChanged(), you might set up your viewport and projection. In the function you call from onDrawFrame(), you do your rendering.
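For illustration, the native side might look roughly like this (a sketch; the package/class name com.example.GameRenderer and the native method names are placeholders):

#include <jni.h>
#include <GLES/gl.h>

/* Hypothetical Java class com.example.GameRenderer whose
   GLSurfaceView.Renderer callbacks delegate to these functions. */
JNIEXPORT void JNICALL
Java_com_example_GameRenderer_nativeOnSurfaceCreated(JNIEnv *env, jobject thiz)
{
    /* Called from onSurfaceCreated(): the GL context is current on this thread. */
    glEnable(GL_DEPTH_TEST);
    /* create framebuffers, renderbuffers, VBOs, load textures, ... */
}

JNIEXPORT void JNICALL
Java_com_example_GameRenderer_nativeOnSurfaceChanged(JNIEnv *env, jobject thiz,
                                                     jint w, jint h)
{
    /* Called from onSurfaceChanged(): set up viewport/projection. */
    glViewport(0, 0, w, h);
}

JNIEXPORT void JNICALL
Java_com_example_GameRenderer_nativeOnDrawFrame(JNIEnv *env, jobject thiz)
{
    /* Called from onDrawFrame(): still on the rendering thread. */
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* draw ... */
}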
You can even make OpenGL calls from both Java and native code. The Java OpenGL API is just a very thin layer around the native functions. It doesn't make a difference if the functions are called from native code or through the Java API.
The only thing you need to watch out for is that you invoke all your native code that makes OpenGL API calls from the GLSurfaceView.Renderer implementations of onSurfaceCreated(), onSurfaceChanged() and onDrawFrame(). When these methods are called, you are in the rendering thread, and have a current OpenGL context. If native OpenGL code is invoked from anywhere else, chances are that you are in the wrong thread and/or you do not have a current OpenGL context.
There are of course more complex setups where you create your own OpenGL contexts, make them current explicitly, etc. But I would strongly recommend sticking with the simple approach above unless you have a very good reason to need something more. For most standard OpenGL rendering, what I described should be perfectly sufficient.
I'm trying to develop an Apache2 module that utilizes OpenGL to perform off-screen rendering and dynamically generate images that I can then send back to the client.
Apache2 is running on an Ubuntu 12.04 machine and I created a test module that renders a quad and stores the frame as an image to disk using OpenGL/GLX. But when the module receives a client request, it crashes at XOpenDisplay(0) with a segmentation fault. Any ideas what could be going wrong?
Edit:
All the examples I have seen talk about using a pixel buffer (PBuffer). As far as I know, these are deprecated and FBOs should be used instead. Can someone explain how to create a context and use FBOs to perform off-screen rendering?
While technically it's perfectly possible to do windowless, display-server-less, GPU-accelerated off-screen rendering with OpenGL, in practice it's impossible these days because you need a display environment to actually get access to the GPU. Fortunately, the structure of graphics systems is changing (hybrid graphics, display compositors). Mesa already provides an off-screen context creation mode (OSMesa), but it's far from feature-complete.
So right now you'll need some kind of display-server drawable to work with, on which you can bind a context. X11 offers two kinds of GPU-accelerated drawables: Windows and PBuffers. You can use FBOs with either (PBuffers are technically Windows that cannot be mapped to the root window and have an off-screen canvas). The easiest way to go is to create a regular window on an X server but never map it; you can still create an OpenGL context on it and create FBOs, as shown in numerous tutorials. But for OpenGL to work, the X server you use must be active (hold the console) and be configured to use the GPU (theoretically, with newer hybrid-graphics-capable X servers and drivers it should be possible to configure the X server to use a dummy display device and the GPU as a secondary device for accelerated rendering, but I have never tried that so far).
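A rough sketch of that "unmapped window" route (it assumes the GL 3.0 framebuffer entry points are exported by libGL, as with Mesa; otherwise load them via glXGetProcAddress; error handling omitted):

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <GL/glx.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);            /* needs a running X server */
    int attribs[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    /* The window is created but never mapped (no XMapWindow call). */
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, vi->screen),
                                     0, 0, 16, 16, 0, 0, 0);

    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    glXMakeCurrent(dpy, win, ctx);

    /* Render into an FBO instead of the (invisible) window. */
    GLuint fbo, color;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glGenRenderbuffers(1, &color);
    glBindRenderbuffer(GL_RENDERBUFFER, color);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 640, 480);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, color);

    /* ... draw, then glReadPixels() to fetch the generated image ... */

    glXMakeCurrent(dpy, None, NULL);
    glXDestroyContext(dpy, ctx);
    XDestroyWindow(dpy, win);
    XCloseDisplay(dpy);
    return 0;
}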
The situation is as follows:
There is a remote Linux server (no GUI), which builds the OpenGL scene.
Objective: Transfer the generated image(s) to the client Windows machine
There are some things about off-screen rendering I cannot understand; I have read a lot of literature, but they are still not clear to me:
Using GLUT implies setting the DISPLAY variable. If I understand correctly, that means remote rendering via X11. If I run an X11 server (XWin) on the Windows machine, everything works. If I try to run without the rendering server, I get: freeglut (./WFWorkspace): failed to open display 'localhost:11.0'. In any case, X11 is not suitable.
Do I need to create a graphics context (hardware rendering support is required)?
How can I create a graphics context on Linux server without GLUT/x11?
Framebuffer object: is it suitable for my task, and is a graphics context necessary for it?
What is the most efficient way to solve this problem (rendering requires hardware support)?
Not an important issue, but nevertheless:
Pixel buffer object: I plan to use it to speed up reading back from GPU memory. Is it worthwhile for my task?
You need to modify your program to use OSMesa - it's a "null display" driver used by Mesa for software rendering. Consider this answer to a near-duplicate question as a starter:
https://stackoverflow.com/a/8442800/2702398
For a full example, you can check out the examples in the Mesa distribution itself, such as this: http://cgit.freedesktop.org/mesa/demos/tree/src/osdemos/osdemo.c
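For orientation, the core of such an OSMesa program looks roughly like this (a sketch in the spirit of osdemo.c; the buffer size and format are placeholders, and error checking is omitted):

#include <GL/osmesa.h>
#include <GL/gl.h>
#include <stdlib.h>

int main(void)
{
    const int width = 640, height = 480;

    /* RGBA context with a 16-bit depth buffer, no stencil/accum. */
    OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 16, 0, 0, NULL);
    unsigned char *buffer = malloc(width * height * 4);
    OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, width, height);

    glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... draw the scene ... */
    glFinish();

    /* `buffer` now contains the rendered RGBA pixels, ready to be
       encoded and sent to the Windows client. */
    OSMesaDestroyContext(ctx);
    free(buffer);
    return 0;
}

Note that this path is software-only rendering; if hardware acceleration is a hard requirement, the X11/VirtualGL options below are more appropriate.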
Update
It appears that VirtualGL (http://www.virtualgl.org) supports remote rendering of the OpenGL/GLX protocol and serves the rendered pixmaps to the client over VNC (and the VNC head can trivially be made virtual).
If you want to use the full OpenGL spec, use X11 to create the context. Here is a tutorial showing how you can do this:
http://arrayfire.com/remote-off-screen-rendering-with-opengl/