Framebuffers with the GLSurfaceView pattern in OpenGL ES 1.1 on the Android NDK (C++)

In the Android NDK, is it possible to make OpenGL ES 1.1 work with the typical Java-side GLSurfaceView pattern (overriding the GLSurfaceView.Renderer methods onDrawFrame, onSurfaceCreated, etc.) while creating and using the framebuffer, the color and depth renderbuffers, and VBOs on the C++ side?
I am trying to create them like this:
void ES1Renderer::on_surface_created() {
    // Create the default framebuffer object. The backing will be allocated
    // for the current layer in -resizeFromLayer.
    glGenFramebuffersOES(1, &defaultFramebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);

    // Create the color renderbuffer object.
    glGenRenderbuffersOES(1, &colorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer);

    // Create the depth renderbuffer object.
    glGenRenderbuffersOES(1, &depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
}
However, it seems that this code does not get the appropriate context, which I believe is created on the Java side when the GLSurfaceView and the renderer are initialized.
I am an expert on neither the NDK nor OpenGL ES, but I have to port an iOS app that uses OpenGL ES 1.1, and I aim to reuse as much code as I can. Since the app also uses platform-specific UI components (buttons, lists, etc.) alongside the GL graphics, I thought this would be the best way to go. However, I am now considering using a native activity instead, though I am not sure how it would relate to the other Java components.

Absolutely, yes. The standard approach is that you create a GLSurfaceView like you would when using OpenGL from Java, create and hook up your GLSurfaceView.Renderer implementation, and let the rendering thread start up.
From your Renderer methods, like onSurfaceCreated() and onDrawFrame(), you can now call the JNI functions that invoke functions in your native code. In those native functions, you can make any OpenGL API calls your heart desires. For example, in the function you call from onSurfaceCreated() you might create some objects and set up some initial state. In the function you call from onSurfaceChanged(), you might set up your viewport and projection. In the function you call from onDrawFrame(), you do your rendering.
You can even make OpenGL calls from both Java and native code. The Java OpenGL API is just a very thin layer around the native functions. It doesn't make a difference if the functions are called from native code or through the Java API.
The only thing you need to watch out for is that you invoke all your native code that makes OpenGL API calls from the GLSurfaceView.Renderer implementations of onSurfaceCreated(), onSurfaceChanged() and onDrawFrame(). When these methods are called, you are in the rendering thread, and have a current OpenGL context. If native OpenGL code is invoked from anywhere else, chances are that you are in the wrong thread and/or you do not have a current OpenGL context.
There are of course more complex setups where you create your own OpenGL contexts, make them current explicitly, and so on. But I would strongly recommend sticking with the simple approach above unless you have a very good reason to need something more. For most standard OpenGL rendering, what I described should be perfectly sufficient.
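As a minimal sketch of that structure, the native side might look like the following; the Java class name (com.example.MyRenderer) and the function names are assumptions for illustration, not something the question prescribes:
#include <jni.h>
#include <GLES/gl.h> // OpenGL ES 1.1 header in the NDK

extern "C" {

// Called from onSurfaceCreated(): the rendering thread is current and an
// EGL context exists, so creating GL objects (FBOs, VBOs, ...) is safe here.
JNIEXPORT void JNICALL
Java_com_example_MyRenderer_nativeOnSurfaceCreated(JNIEnv*, jobject)
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
}

// Called from onSurfaceChanged(): set up the viewport and projection.
JNIEXPORT void JNICALL
Java_com_example_MyRenderer_nativeOnSurfaceChanged(JNIEnv*, jobject,
                                                   jint width, jint height)
{
    glViewport(0, 0, width, height);
}

// Called from onDrawFrame(): do the actual rendering.
JNIEXPORT void JNICALL
Java_com_example_MyRenderer_nativeOnDrawFrame(JNIEnv*, jobject)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw with VBOs, framebuffers, etc. ...
}

} // extern "C"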

Related

Is using legacy OpenGL (Windows's 1.1 implementation) bad practice?

I'm currently developing a game-engine API/framework. So far I have an OpenGL 4.6 context where I load all the functions myself (no OpenGL wrapper). The engine is still under heavy construction, but some 3D stuff is already possible. The engine was targeted primarily at drawing 3D objects, but I'm also considering 2D (though I haven't started implementing it yet).
Right now I'm creating a kind of user-interface framework (which can be used instead of the 3D or 2D engine and vice versa) "like" MFC, WinForms... you name it. For that I'm using Windows's OpenGL 1.1 implementation, and that's where we get to my actual question.
Is using such an old OpenGL, or any legacy OpenGL version, considered bad practice? I know this question may be opinion-based, but especially in my case I think it's the easiest way to create reusable controls; besides that, it's super interesting to create controls like a button all by myself, since you have to think about everything. And I don't think Microsoft will remove it from their API soon; if they did, we wouldn't be able to get an OpenGL context at all, if I'm not completely wrong about that.
Additional information:
The framework is meant to be used by choosing one of the modes the engine provides, such as 3D, 2D or a "normal" user interface.
The reason I chose Windows's OpenGL implementation for the user interface is that it also makes it possible to create a user interface on Windows XP.
I currently use functions like glBegin(), glEnd(), glVertex2f() and glColor3f() to draw a button, and wglUseFontBitmaps(), glPushAttrib(), glListBase(), glCallLists() and glPopAttrib() to draw the text.
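For illustration, drawing the button face looks roughly like this (the coordinates and colors are made up):
void DrawButtonFace(float x, float y, float w, float h)
{
    glColor3f(0.8f, 0.8f, 0.8f); // flat grey face
    glBegin(GL_QUADS);
    glVertex2f(x,     y);
    glVertex2f(x + w, y);
    glVertex2f(x + w, y + h);
    glVertex2f(x,     y + h);
    glEnd();
}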
Regarding the OpenGL context, I use NeHe's way of getting one:
http://nehe.gamedev.net/tutorial/creating_an_opengl_window_(win32)/13001/
So for the user interface I don't use the 4.6 context but the 1.1 one.

Intercept OpenGL calls and create multiple viewpoints and viewports

I want to create a layer between any OpenGL-based application and the original OpenGL library. It should seamlessly intercept the OpenGL calls made by the application, then either render and send images to the display or send the OpenGL stream to a rendering cluster.
I have completed my opengl32.dll to replace the original library, but I don't know what to do next.
How do I convert OpenGL calls into images, and what is an OpenGL stream?
For an accurate description, see the OpenGL Wrapper.
First and foremost, OpenGL is not a library; it's an API. The opengl32.dll you have on your system is a library that provides the API and acts as an anchoring point for the actual graphics driver to attach to programs.
Next, it's a terrible idea to intercept OpenGL calls and turn them into something different, like multiple viewports. It may work for the fixed-function pipeline, but as soon as shaders get involved it will break the program you hooked into. OpenGL is designed as an API for drawing things to the screen; it's not a scene graph. Programs expect that when they make OpenGL calls, an image will be produced in a pixel buffer according to their drawing commands. If you hook into that process and wildly alter the outcome, any graphics algorithm that relies on the visual result of previous rendering for its following steps will break. For example, any form of shadow mapping will be broken by what you do.
Also, things like multiple-viewport hacks will likely not work if the program does frustum culling internally before making the actual OpenGL calls. Again, this is because OpenGL is a drawing API, not a scene graph.
In the end, yes, you can hook into OpenGL, but whatever you do, you must make sure that the OpenGL calls made by the application are executed according to the specification. There is an authoritative OpenGL specification for a reason, namely that programs rely on it to get predictable results. A sketch of such a pass-through hook follows.
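Here is a hedged sketch of a single forwarded entry point in a replacement opengl32.dll; the renamed original library ("opengl32_original.dll") and the logging hook are assumptions for illustration:
#include <windows.h>

typedef unsigned int GLbitfield;
typedef void (WINAPI *PFNGLCLEAR)(GLbitfield);
static PFNGLCLEAR real_glClear = nullptr;

extern "C" __declspec(dllexport) void WINAPI glClear(GLbitfield mask)
{
    if (!real_glClear)
    {
        // Load the renamed original library and look up the real entry point.
        HMODULE real = LoadLibraryA("opengl32_original.dll");
        real_glClear = (PFNGLCLEAR)GetProcAddress(real, "glClear");
    }
    // Record or stream the call here, then execute it faithfully so the
    // application still gets the result the specification promises.
    real_glClear(mask);
}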
OpenGL almost certainly lets you do what you want without crazy modifications. Multiple viewpoints can be rendered by doing the following in your render function:
glViewport(0, 0, window_width, window_height / 2);                 // view 1: bottom half of the window
// Do all of your rendering for the first camera.
glViewport(0, window_height / 2, window_width, window_height / 2); // view 2: top half of the window
glMatrixMode(GL_MODELVIEW);
// Redo your modelview matrix for a different viewpoint here, then re-render the scene.
It's as simple as rendering twice into two areas that you specify with glViewport. If you Google around, you can find more detailed tutorials. I strongly recommend against messing with OpenGL's internals: a good deal of it is implemented by the graphics driver, and you should really just use what you're given. Chances are that if you're modifying it, you're doing it wrong; OpenGL probably already offers a far better way to do what you need.
Good luck!

Some OpenGL function calls are not available when developing a PS3 game

I am currently taking a Game Console Programming module at Sunderland University.
What they teach in this module is OpenGL and the Phyre Engine for developing PS3 games.
The fact that the PS3 SDK is not freely available (it is quite expensive) makes it really difficult for me to find my way around when a problem arises.
Apparently, the PS3 framework doesn't support most of the GL function calls, such as glGenLists, glBegin, glEnd and so on.
glBegin(GL_QUADS);
    glTexCoord2f(TEXTURE_SIZE, m_fTextureOffset);
    glVertex3f(-100, 0, -100);
    // some more vertices
glEnd();
I get errors when debugging in PS3 debug mode at glBegin, glEnd and glTexCoord2f.
Is there any way to get around this, such as a different way of drawing objects?
Most games developed for the PS3 don't use OpenGL at all but are programmed "on the metal", i.e. they make direct use of the GPU without an intermediate, abstract API. Yes, there is an OpenGL-esque API for the PS3, but it is actually based on OpenGL ES.
In OpenGL ES there is no immediate mode. Immediate mode is the cumbersome method of passing geometry to OpenGL by starting a primitive with glBegin, chaining calls that set vertex attribute state, submitting each vertex with its position via glVertex, and finishing with glEnd. Nobody wants to use this, especially not on a system with limited resources.
You have the geometry data in memory anyway, so why not simply point OpenGL at what's already there? That's exactly what vertex arrays do: you give OpenGL pointers telling it where to find the data (the generic glVertexAttribPointer in modern OpenGL, or in the old fixed-function pipeline the predefined attributes via glVertexPointer, glTexCoordPointer, glNormalPointer and glColorPointer) and then have it draw a whole batch of geometry with glDrawElements or glDrawArrays, as in the sketch below.
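A hedged sketch of the quad from the question drawn with the fixed-function vertex arrays; the vertex data is padded out from the snippet above, so the values are illustrative:
static const GLfloat positions[] = {
    -100.0f, 0.0f, -100.0f,
     100.0f, 0.0f, -100.0f,
     100.0f, 0.0f,  100.0f,
    -100.0f, 0.0f,  100.0f,
};
static const GLfloat texcoords[] = {
    0.0f, 0.0f,   1.0f, 0.0f,   1.0f, 1.0f,   0.0f, 1.0f,
};

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, positions);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
// GL_QUADS does not exist in OpenGL ES, so draw the quad as a triangle fan.
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);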
In modern OpenGL the drawing process is controlled by user-programmable shaders. In fixed-function OpenGL all you can do is parameterize an inflationary number of state variables.
The OpenGL used by the PlayStation 3 is a variant of OpenGL ES 1.0 (according to Wikipedia, with some features of ES 2.0).
http://www.khronos.org/opengles/1_X
has the specification. There are no glBegin/glEnd functions in it. Those fixed-pipeline functions are deprecated in desktop OpenGL too (deprecated in OpenGL 3.0 and removed from the core profile in 3.1) in favor of things like VBOs, so there probably isn't much point in learning to work with them anyway.
If you are using PhyreEngine, you should generally avoid calling the graphics API directly, as PhyreEngine sits on top of different APIs on different platforms.
On PC it uses GL (or D3D), but on PS3 it uses a lower-level API. So even if you used GL ES functionality, and even if it compiled, it would likely not work. It is not surprising that you are seeing errors when building for PS3.
Ideally you should use PhyreEngine's own drawing pipeline, which is platform-agnostic. If you stick to that API, you can in principle compile your code for any supported platform.
There is a limit to how much I can say about PhyreEngine publicly (sorry), but if you are on a university course, your university should have access to the official support forums, where you can get more specific help.
If you really must target the underlying graphics API directly, be aware that you may need to write or modify your code per platform, and that you will need to 'play nice' with any context state that PhyreEngine relies on.

Cross-platform renderer in OpenGL ES

I'm writing a cross-platform renderer. I want to use it on Windows, Linux, Android and iOS.
Do you think it is a good idea to avoid full abstraction and write directly against OpenGL ES 2.0?
As far as I know, I should be able to compile it on PC against standard OpenGL, with only small changes in the code that handles the context and the connection to the windowing system.
Do you think it is a good idea to avoid full abstraction and write directly against OpenGL ES 2.0?
Your principal difficulty will be dealing with the parts of the ES 2.0 specification that are not actually the same as OpenGL 2.1.
For example, you just can't shove ES 2.0 shaders through a desktop GLSL 1.20 compiler. ES 2.0 uses things like precision qualifiers, which are illegal constructs in GLSL 1.20.
You can, however, #define around them, but this requires a bit of manual intervention: you will have to insert an #ifdef into the shader source file. There are shader compilation tricks you can use to make this a bit easier.
Indeed, because GL ES uses a completely different set of extensions (though some are mirrors and subsets of desktop GL extensions), you may want to do this anyway.
Every GLSL shader (desktop or ES) needs a "preamble". The first non-comment thing in a shader needs to be a #version declaration. The version differs between desktop GL 2.1 and GL ES 2.0: desktop GLSL 1.20 uses #version 120, while GLSL ES uses #version 100. The next problem is what comes after: the #extension list (if any), which enables the extensions needed by the shader.
Since GL ES uses different extensions from desktop GL, you will need to change this extension list. And since the odds are good that you'll need more GLSL ES extensions than desktop GL 2.1 extensions, the lists won't map 1:1; they will be completely different.
My suggestion is to employ the ability to give GLSL shaders multiple source strings. Your actual shader files then contain no preamble at all, only the actual definitions and functions: the main body of the shader.
When running on GL ES, you affix a global ES preamble to the beginning of the shader; on desktop GL you use a different global preamble. The code would look like this:
GLuint shader = glCreateShader(/*shader type*/);
const char *shaderList[2];
shaderList[0] = GetGlobalPreambleString(); //Gets preamble for the right platform
shaderList[1] = LoadShaderFile(); //Get the actual shader file
glShaderSource(shader, 2, shaderList, NULL);
The preamble can also include platform-specific #defines (user-defined, of course), so that you can #ifdef code for the different platforms.
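As a concrete illustration, a GetGlobalPreambleString() matching the snippet above might look something like this sketch; the COMPILING_FOR_GLES macro and the platform #defines are assumptions, and the version directives follow the GLSL 1.20 / GLSL ES 1.00 split mentioned earlier:
const char* GetGlobalPreambleString()
{
#ifdef COMPILING_FOR_GLES
    return "#version 100\n"              // GLSL ES 1.00
           "#define PLATFORM_GLES 1\n";
#else
    return "#version 120\n"              // desktop GLSL 1.20
           "#define PLATFORM_DESKTOP 1\n"
           // Precision qualifiers do not exist in GLSL 1.20,
           // so define them away for desktop builds.
           "#define lowp\n"
           "#define mediump\n"
           "#define highp\n";
#endif
}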
There are other differences between the two. For example, while valid ES 2.0 texture upload calls will work fine in desktop GL 2.1, they will not necessarily be optimal there: a pixel format that a mobile GPU consumes directly may require some bit twiddling by the driver on a desktop machine. So you may want a way to specify different pixel transfer parameters on GL ES and on desktop GL.
Also, there are different sets of extensions in ES 2.0 and desktop GL 2.1 that you will want to take advantage of. While many of them try to mirror one another (OES_framebuffer_object is a subset of EXT_framebuffer_object), you may run afoul of similar "not quite a subset" issues like those mentioned above.
In my humble experience, the best approach for this kind of requirement is to develop your engine in pure C, with no additional layers on top of it.
I am the main developer of the PATRIA 3D engine, which is based on exactly the portability principle you mention, and we achieved it by building the tool on basic standard libraries only.
The effort to compile the code on the different platforms is then minimal.
The actual effort to port the entire solution depends on the components you want to embed in your engine.
For example:
Standard C:
Engine 3D
Game Logic
Game AI
Physics
+
Window interface (GLUT, EGL, etc.): depends on the platform; it could be GLUT for desktop and EGL for mobile devices.
Human interface: depends on the port; Java for Android, Objective-C for iOS, whatever you prefer on desktop.
Sound manager: depends on the port.
Market services: depends on the port.
This way you can reuse 95% of your effort seamlessly.
We adopted this solution for our engine, and so far it has really been worth the initial investment.
Here are the results of my experience implementing OpenGL ES 2.0 support for various platforms on which my commercial mapping and routing library runs.
The rendering class is designed to run in a separate thread. It has a reference to the object containing the map data and the current view information, and uses mutexes to avoid conflicts when reading that information at the time of drawing. It maintains a cache of OpenGL ES vector data in graphics memory.
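A minimal sketch of that locking pattern, with all names illustrative rather than taken from the actual library:
#include <mutex>

struct ViewState { double x, y, zoom; }; // whatever the current view needs

class Renderer
{
public:
    void SetView(const ViewState& v)   // called from the UI thread
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_view = v;
    }
    void Draw()                        // called from the rendering thread
    {
        ViewState snapshot;
        {
            // Copy the shared state under the lock, then render without holding it.
            std::lock_guard<std::mutex> lock(m_mutex);
            snapshot = m_view;
        }
        // ... issue the OpenGL ES 2.0 calls using 'snapshot' ...
    }
private:
    std::mutex m_mutex;
    ViewState m_view{};
};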
All the rendering logic is written in C++ and is used on all the following platforms.
Windows (MFC)
Use the ANGLE library: link to libEGL.lib and libGLESv2.lib and ensure that the executable has access to the DLLs libEGL.dll and libGLESv2.dll. The C++ code creates a thread that redraws the graphics at a suitable rate (e.g., 25 times a second).
Windows (.NET and WPF)
Use a C++/CLI wrapper to create an EGL context and to call the C++ rendering code that is used directly in the MFC implementation. The C++ code creates a thread that redraws the graphics at a suitable rate (e.g., 25 times a second).
Windows (UWP)
Create the EGL context in the UWP app code and call the C++ rendering code via a C++/CX wrapper. You will need to use a SwapChainPanel and create your own render loop running in a different thread. See the GLUWP project for sample code.
Qt on Windows, Linux and Mac OS
Use a QOpenGLWidget as your window. Use the Qt OpenGL ES wrapper to create the EGL context, then call the C++ rendering code in your paintGL() function.
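A minimal sketch of that arrangement, assuming the CMyRenderer class used in the GLFW example further below:
#include <QOpenGLWidget>
#include <memory>

class MapWidget: public QOpenGLWidget
{
protected:
    void initializeGL() override
    {
        // The context is current here, so the renderer can create GL resources.
        m_renderer = std::make_unique<CMyRenderer>();
    }
    void paintGL() override
    {
        m_renderer->Draw(); // issues the OpenGL ES 2.0 calls
    }
private:
    std::unique_ptr<CMyRenderer> m_renderer;
};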
Android
Create a renderer class implementing android.opengl.GLSurfaceView.Renderer. Create a JNI wrapper for the C++ rendering object. Create the C++ rendering object in your onSurfaceCreated() function and call its drawing function in your onDrawFrame() function. You will need the following imports in your renderer class:
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView.Renderer;
Create a view class derived from GLSurfaceView. In your view class's constructor first set up your EGL configuration:
setEGLContextClientVersion(2); // use OpenGL ES 2.0
setEGLConfigChooser(8,8,8,8,24,0);
then create an instance of your renderer class and call setRenderer to install it.
iOS
Use the MetalANGLE library, not GLKit, which Apple has deprecated and will eventually stop supporting.
Create an Objective C++ renderer class to call your C++ OpenGL ES drawing logic.
Create a view class derived from MGLKView. In your view class's drawRect() function, create a renderer object if it doesn't yet exist, then call its drawing function. That is, your drawRect function should be something like:
-(void)drawRect:(CGRect)rect
{
    if (m_renderer == nil && m_my_other_data != nil)
        m_renderer = [[MyRenderer alloc] init:m_my_other_data];
    if (m_renderer)
        [m_renderer draw];
}
In your app you'll need a view controller class that creates the OpenGL context and sets it up, using code like this:
MGLContext* opengl_context = [[MGLContext alloc] initWithAPI:kMGLRenderingAPIOpenGLES2];
m_view = [[MyView alloc] initWithFrame:aBounds context:opengl_context];
m_view.drawableDepthFormat = MGLDrawableDepthFormat24;
self.view = m_view;
self.preferredFramesPerSecond = 30;
Linux
It is easiest to use Qt on Linux (see above), but it is also possible to use the GLFW framework. In your app class's constructor, call glfwCreateWindow to create a window and store it as a data member. Call glfwMakeContextCurrent to make the EGL context current, then create a data member holding an instance of your renderer class; something like this:
m_window = glfwCreateWindow(1024,1024,"My Window Title",nullptr,nullptr);
glfwMakeContextCurrent(m_window);
m_renderer = std::make_unique<CMyRenderer>();
Add a Draw function to your app class:
bool MapWindow::Draw()
{
    if (glfwWindowShouldClose(m_window))
        return false;
    m_renderer->Draw();
    /* Swap front and back buffers */
    glfwSwapBuffers(m_window);
    return true;
}
Your main() function will then be:
int main(void)
{
    /* Initialize the library */
    if (!glfwInit())
        return -1;

    // Create the app.
    MyApp app;

    /* Draw continuously until the user closes the window */
    while (app.Draw())
    {
        /* Poll for and process events */
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
Shader incompatibilities
There are incompatibilities in the shader language accepted by the various OpenGL ES 2.0 implementations. I overcome these in the C++ code using the following conditionally compiled code in my CompileShader function:
const char* preamble = "";
#if defined(_POSIX_VERSION) && !defined(ANDROID) && !defined(__ANDROID__) && !defined(__APPLE__) && !defined(__EMSCRIPTEN__)
// for Ubuntu using Qt or GLFW
preamble = "#version 100\n";
#elif defined(USING_QT) && defined(__APPLE__)
// On the Mac #version doesn't work so the precision qualifiers are suppressed.
preamble = "#define lowp\n#define mediump\n#define highp\n";
#endif
The preamble is then prefixed to the shader code.
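In code, the prefixing step can be as simple as the following sketch, where shader and shader_code stand in for the variables of the surrounding CompileShader function:
std::string full_source = std::string(preamble) + shader_code;
const char* text = full_source.c_str();
glShaderSource(shader, 1, &text, nullptr);
glCompileShader(shader);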

Is it possible to hack GTK to render to an OpenGL texture?

I'm writing an OpenGL game and want native-looking GUI elements. I was wondering whether anyone has successfully hacked GTK+ using GtkOffscreenWindow and gtk_offscreen_window_get_pixbuf to render to an OpenGL texture, and whether this would have reasonable performance, considering the repeated re-uploading of texture data every time the GUI is updated.
While this is certainly possible, I'd instead use a real OpenGL widget toolkit like Clutter. If you want to render GTK+ with OpenGL, I'd start by creating a new GDK backend (X11/OpenGL or something like that) that (re-)implements all the GDK drawing functions using OpenGL. A nice side effect would be that all GTK+ windows would then allow ordinary OpenGL rendering too, i.e. there would be no more need for a GtkGLWidget class.
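For reference, the upload step the question describes might look roughly like the following sketch; offscreen and gui_texture are assumed to exist already, and the code assumes the pixbuf rows are tightly packed (check gdk_pixbuf_get_rowstride() otherwise):
GdkPixbuf* pixbuf = gtk_offscreen_window_get_pixbuf(GTK_OFFSCREEN_WINDOW(offscreen));
int width  = gdk_pixbuf_get_width(pixbuf);
int height = gdk_pixbuf_get_height(pixbuf);
GLenum format = gdk_pixbuf_get_has_alpha(pixbuf) ? GL_RGBA : GL_RGB;

glBindTexture(GL_TEXTURE_2D, gui_texture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
// Re-uploads the whole GUI every time it changes; this is the cost
// the question is asking about.
glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0,
             format, GL_UNSIGNED_BYTE, gdk_pixbuf_get_pixels(pixbuf));
g_object_unref(pixbuf);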