I'm writing a cross-platform renderer. I want to use it on Windows, Linux, Android, and iOS.
Do you think that it is a good idea to avoid absolute abstraction and write it directly in OpenGL ES 2.0?
As far as I know, I should be able to compile it on PC against standard OpenGL with only small changes in the code that handles the context and the connection to the windowing system.
Your principal difficulties with this will be dealing with the parts of the ES 2.0 specification that are not actually the same as OpenGL 2.1.
For example, you just can't shove ES 2.0 shaders through a desktop GLSL 1.20 compiler. ES 2.0 shaders use constructs such as precision qualifiers, which are illegal in GLSL 1.20.
You can, however, #define around them, though this requires a bit of manual intervention: you have to insert an #ifdef into the shader source file. There are shader compilation tricks you can use to make this a bit easier.
Indeed, because GL ES uses a completely different set of extensions (though some are mirrors and subsets of desktop GL extensions), you may want to do this.
Every GLSL shader (desktop or ES) needs to have a "preamble". The first non-comment item in a shader needs to be a #version declaration, and note that these differ: desktop GLSL 1.20 shaders use #version 120, while GLSL ES 1.00 shaders (GL ES 2.0) use #version 100. After the version comes the #extension list (if any), which enables the extensions needed by the shader.
Since GL ES uses different extensions from desktop GL, you will need to change this extension list. And since odds are good that you'll need more GLSL ES extensions than desktop GL 2.1 extensions, the lists won't just be a 1:1 mapping; they'll be completely different.
My suggestion is to exploit glShaderSource's ability to take multiple strings. That is, your actual shader files contain no preamble at all; they hold only the definitions and functions that make up the main body of the shader.
When running on GL ES, you prepend a global ES preamble to the shader; on desktop GL, you prepend a different global preamble. The code would look like this:
GLuint shader = glCreateShader(/*shader type*/);
const char *shaderList[2];
shaderList[0] = GetGlobalPreambleString(); //Gets preamble for the right platform
shaderList[1] = LoadShaderFile(); //Get the actual shader file
glShaderSource(shader, 2, shaderList, NULL);
The preamble can also include a platform-specific, user-defined #define, so that you can #ifdef shader code for different platforms.
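To make this concrete, the two preambles might look like the sketch below (the DESKTOP_GL and MOBILE_GLES macro names are placeholders you would choose yourself, and the exact #version, #extension, and precision lines depend on your shaders):

// Desktop GL 2.1 preamble: GLSL 1.20 has no precision qualifiers, so define them away.
static const char* g_desktopPreamble =
    "#version 120\n"
    "#define DESKTOP_GL 1\n"
    "#define lowp\n"
    "#define mediump\n"
    "#define highp\n";

// GL ES 2.0 preamble. Fragment shaders in ES need a default float precision,
// so declare it here rather than in the shared shader body.
static const char* g_esPreamble =
    "#version 100\n"
    "precision mediump float;\n"
    "#define MOBILE_GLES 1\n";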
There are other differences between the two. For example, texture upload calls that are valid in ES 2.0 will work in desktop GL 2.1, but they will not necessarily be optimal there: pixel formats that are natural for mobile GPUs may need to be converted by the driver on desktop hardware. So you may want a way to specify different pixel transfer parameters on GL ES and desktop GL.
Also, there are different sets of extensions in ES 2.0 and desktop GL 2.1 that you will want to take advantage of. While many of them try to mirror one another (OES_framebuffer_object is a subset of EXT_framebuffer_object), you may run afoul of similar "not quite a subset" issues like those mentioned above.
In my humble experience, the best approach for this kind of requirement is to develop your engine in pure C, with no additional layers on top of it.
I am the main developer of the PATRIA 3D engine, which is based on the portability principle you just mentioned; we achieved it by building the tool on the basic standard libraries alone.
The effort to then compile your code on the different platforms is minimal.
The actual effort to port the entire solution depends on the components you want to embed in your engine.
For example:
Standard C (shared across all platforms):
3D engine
Game logic
Game AI
Physics
Plus, per platform:
Window interface (GLUT, EGL, etc.): depends on the platform; typically GLUT for desktop and EGL for mobile devices.
Human interface: depends on the port; Java for Android, Objective-C for iOS, whatever toolkit you use on desktop.
Sound manager: depends on the port.
Market services: depends on the port.
In this way, you can reuse 95% of your effort in a seamless way.
We have adopted this solution for our engine, and so far it has really been worth the initial investment.
Here are the results of my experience implementing OpenGL ES 2.0 support for various platforms on which my commercial mapping and routing library runs.
The rendering class is designed to run in a separate thread. It has a reference to the object containing the map data and the current view information, and uses mutexes to avoid conflicts when reading that information at the time of drawing. It maintains a cache of OpenGL ES vector data in graphics memory.
All the rendering logic is written in C++ and is used on all the following platforms.
Windows (MFC)
Use the ANGLE library: link to libEGL.lib and libGLESv2.lib and ensure that the executable has access to the DLLs libEGL.dll and libGLESv2.dll. The C++ code creates a thread that redraws the graphics at a suitable rate (e.g., 25 times a second).
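A minimal sketch of such a render thread follows; the drawFrame callback, display, and surface are assumptions standing in for your own rendering call and the EGL objects you created through ANGLE, and the 40 ms sleep gives roughly 25 frames per second:

#include <EGL/egl.h>
#include <atomic>
#include <chrono>
#include <functional>
#include <thread>

// Runs until 'running' is cleared; drawFrame wraps whatever C++ rendering call you use.
void RenderLoop(std::function<void()> drawFrame,
                EGLDisplay display, EGLSurface surface,
                std::atomic<bool>& running)
{
    while (running)
    {
        drawFrame();                       // issue the GL ES calls for one frame
        eglSwapBuffers(display, surface);  // present the frame through ANGLE's EGL
        std::this_thread::sleep_for(std::chrono::milliseconds(40));  // ~25 fps
    }
}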
Windows (.NET and WPF)
Use a C++/CLI wrapper to create an EGL context and to call the C++ rendering code that is used directly in the MFC implementation. The C++ code creates a thread that redraws the graphics at a suitable rate (e.g., 25 times a second).
Windows (UWP)
Create the EGL context in the UWP app code and call the C++ rendering code via a C++/CX wrapper. You will need to use a SwapChainPanel and create your own render loop running in a different thread. See the GLUWP project for sample code.
Qt on Windows, Linux and Mac OS
Use a QOpenGLWidget as your window. Use the Qt OpenGL ES wrapper to create the EGL context, then call the C++ rendering code in your paintGL() function.
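A minimal sketch of such a widget, assuming a MyRenderer class of your own that wraps the shared C++ rendering code and exposes a Draw() method:

#include <QOpenGLWidget>
#include <memory>

// MyRenderer is assumed to be your own class wrapping the OpenGL ES drawing code.
class MapWidget : public QOpenGLWidget
{
public:
    explicit MapWidget(QWidget* parent = nullptr) : QOpenGLWidget(parent) {}

protected:
    void initializeGL() override
    {
        // The context is current here, so GL objects can be created safely.
        m_renderer = std::make_unique<MyRenderer>();
    }

    void paintGL() override
    {
        m_renderer->Draw();  // call the shared C++ rendering code
    }

private:
    std::unique_ptr<MyRenderer> m_renderer;
};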
Android
Create a renderer class implementing android.opengl.GLSurfaceView.Renderer. Create a JNI wrapper for the C++ rendering object (a native-side sketch appears at the end of this section). Create the C++ rendering object in your onSurfaceCreated() function, and call the C++ rendering object's drawing function in your onDrawFrame() function. You will need to import the following classes in your renderer class:
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView.Renderer;
Create a view class derived from GLSurfaceView. In your view class's constructor first set up your EGL configuration:
setEGLContextClientVersion(2); // use OpenGL ES 2.0
setEGLConfigChooser(8,8,8,8,24,0);
then create an instance of your renderer class and call setRenderer to install it.
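On the native side, the JNI wrapper called from those renderer methods might look roughly like this; the Java package and class name com.example.MapRenderer and the MyRenderer C++ class are placeholders for your own names:

#include <jni.h>

// MyRenderer is assumed to be your own C++ class wrapping the OpenGL ES 2.0 drawing code.
static MyRenderer* g_renderer = nullptr;

extern "C" JNIEXPORT void JNICALL
Java_com_example_MapRenderer_nativeOnSurfaceCreated(JNIEnv*, jobject)
{
    // The GL context is current on this thread, so it is safe to create GL objects here.
    delete g_renderer;
    g_renderer = new MyRenderer();
}

extern "C" JNIEXPORT void JNICALL
Java_com_example_MapRenderer_nativeOnDrawFrame(JNIEnv*, jobject)
{
    if (g_renderer)
        g_renderer->Draw();  // issue the GL ES calls for one frame
}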
iOS
Use the MetalANGLE library rather than GLKit; Apple has deprecated GLKit and will eventually stop supporting it.
Create an Objective C++ renderer class to call your C++ OpenGL ES drawing logic.
Create a view class derived from MGLKView. In your view class's drawRect() function, create a renderer object if it doesn't yet exist, then call its drawing function. That is, your drawRect function should be something like:
-(void)drawRect:(CGRect)rect
{
    if (m_renderer == nil && m_my_other_data != nil)
        m_renderer = [[MyRenderer alloc] init:m_my_other_data];
    if (m_renderer)
        [m_renderer draw];
}
In your app you'll need a view controller class that creates the OpenGL context and sets it up, using code like this:
MGLContext* opengl_context = [[MGLContext alloc] initWithAPI:kMGLRenderingAPIOpenGLES2];
m_view = [[MyView alloc] initWithFrame:aBounds context:opengl_context];
m_view.drawableDepthFormat = MGLDrawableDepthFormat24;
self.view = m_view;
self.preferredFramesPerSecond = 30;
Linux
It is easiest to use Qt on Linux (see above), but it's also possible to use the GLFW framework. In your app class's constructor, call glfwCreateWindow to create a window and store it as a data member. Call glfwMakeContextCurrent to make the EGL context current, then create a data member holding an instance of your renderer class; something like this:
m_window = glfwCreateWindow(1024,1024,"My Window Title",nullptr,nullptr);
glfwMakeContextCurrent(m_window);
m_renderer = std::make_unique<CMyRenderer>();
Add a Draw function to your app class:
bool MapWindow::Draw()
{
    if (glfwWindowShouldClose(m_window))
        return false;

    m_renderer->Draw();

    /* Swap front and back buffers */
    glfwSwapBuffers(m_window);
    return true;
}
Your main() function will then be:
int main(void)
{
    /* Initialize the library */
    if (!glfwInit())
        return -1;

    // Create the app.
    MyApp app;

    /* Draw continuously until the user closes the window */
    while (app.Draw())
    {
        /* Poll for and process events */
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
Shader incompatibilities
There are incompatibilities in the shader language accepted by the various OpenGL ES 2.0 implementations. I overcome these in the C++ code using the following conditionally compiled code in my CompileShader function:
const char* preamble = "";
#if defined(_POSIX_VERSION) && !defined(ANDROID) && !defined(__ANDROID__) && !defined(__APPLE__) && !defined(__EMSCRIPTEN__)
// for Ubuntu using Qt or GLFW
preamble = "#version 100\n";
#elif defined(USING_QT) && defined(__APPLE__)
// On the Mac #version doesn't work so the precision qualifiers are suppressed.
preamble = "#define lowp\n#define mediump\n#define highp\n";
#endif
The preamble is then prefixed to the shader code.
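For example, the prefixing itself can be as simple as the following fragment, where shader_source stands for the loaded shader text (a sketch, not the library's actual code):

std::string full_source = std::string(preamble) + shader_source;
const char* text = full_source.c_str();
glShaderSource(shader, 1, &text, nullptr);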
Related
I am developing a project using modern OpenGL through OpenTK. I want to use the Gwen dot net GUI library in my project. Unfortunately, Gwen dot net uses old OpenGL for its widget rendering. I have tried merging modern OpenGL and Gwen dot net and have so far been unsuccessful. Before I waste my time debugging my code, I would like to know: is it possible to merge old OpenGL and modern OpenGL?
If you create a compatibility profile context, it should support all legacy functionality. From the OpenGL 4.3 compatibility profile specification, section 1.2.4:
Older generations of graphics hardware were not programmable using shaders,
although they were configurable by setting state controlling specific details of their
operation. The compatibility profile of OpenGL continues to support the legacy
OpenGL commands developed for such fixed-function hardware, although they
are typically implemented by writing shaders which reproduce the operation of
such hardware. Fixed-function OpenGL commands and operations are described
as alternative interfaces following descriptions of the corresponding shader stages.
These days mixing old style and new style OpenGL is best avoided. On MS Windows and Linux you can, but weird stuff tends to happen.
For MacOS, Apple have declared that they're not going to support compatibility contexts at all, so you can't mix.
Since you're stuck with the GUI toolkit, I would try to isolate all your new style OpenGL code in a separate context and render to an offscreen target, then blit that to the main display.
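A rough sketch of that offscreen idea on a 3.x context is shown below; the width, height, and the actual drawing call are placeholders, and if the GUI toolkit owns a separate context, the colour texture would additionally have to be created in a share group visible to both contexts:

// Sketch only: draw the modern-GL scene into an offscreen FBO, then blit it into the
// default framebuffer that the GUI toolkit presents. Requires GL 3.0+ (or the
// framebuffer_object/framebuffer_blit extensions).
const int width = 800, height = 600;   // match your window size

GLuint color = 0, fbo = 0;
glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);   // avoid mipmap incompleteness

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);

// ... render the modern-GL content here ...

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   // 0 = default (window) framebuffer
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);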
The OpenTK renderer for GWEN is a separate class. Just rewrite it the modern way; there's no problem with that.
I am a student who has been learning C++ and OpenGL for 5 months now, and we have touched some advanced topics over that time, starting from basic OpenGL (glBegin/glEnd), then vertex arrays, then VBOs, then shaders, etc. Our professor has made us build up our graphics engine over time from the first class, and every now and then he asks us to stop using one or another deprecated feature and move on to the newer versions.
Now, as part of the current assignment, he has asked us to get rid of everything prior to OpenGL ES 2.0. Our codebase is fairly large, and I was wondering if I could restrict OpenGL to 2.0 and above only, so that using those deprecated features would actually fail at compile time and I can make sure all of them are out of my engine.
When you initialize your OpenGL context, you can pass hints to the context to request a specific context version. For example, using the GLFW library:
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
GLFWwindow* window = glfwCreateWindow(res_width, res_height, window_name, monitor, NULL);
This will fail (in the case of GLFW, it returns a NULL window) if the OpenGL library doesn't support ES 2.0. Your platform's native EGL (or WGL, GLX, AGL, etc.) functions offer this functionality.
I am discovering that GL needs different settings on different systems to run correctly. For instance, on my desktop if I request a version 3.3 core-profile context everything works. On a Mac, I have to set the forward-compatibility flag as well. My netbook claims to only support version 2.1, but has most of the 3.1 features as extensions. (So I have to request 2.1.)
My question is, is there a general way to determine:
Whether all of the GL features my application needs are supported, and
What combination of version, profile (core or not), and forward compatibility flag I need to make it work?
I don't think there is any magic bullet here. It's pretty much going to be up to you to do the appropriate runtime checking and provide alternate paths that make the most of the available features.
When I'm writing an OpenGL app, I usually define a "caps" table, like this:
struct GLInfo {
// GL extensions/features as booleans for direct access:
int hasVAO; // GL_ARB_vertex_array_object........: Has VAO support? Allows faster switching between vertex formats.
int hasPBO; // GL_ARB_pixel_buffer_object........: Supports Pixel Buffer Objects?
int hasFBO; // GL_ARB_framebuffer_object.........: Supports Framebuffer objects for offscreen rendering?
int hasGPUMemoryInfo; // GL_NVX_gpu_memory_info............: GPU memory info/stats.
int hasDebugOutput; // GL_ARB_debug_output...............: Allows GL to generate debug and profiling messages.
int hasExplicitAttribLocation; // GL_ARB_explicit_attrib_location...: Allows the use of "layout(location = N)" in GLSL code.
int hasSeparateShaderObjects; // GL_ARB_separate_shader_objects....: Allows creation of "single stage" programs that can be combined at use time.
int hasUniformBuffer; // GL_ARB_uniform_buffer_object......: Uniform buffer objects (UBOs).
int hasTextureCompressionS3TC; // GL_EXT_texture_compression_s3tc...: Direct use of S3TC compressed textures.
int hasTextureCompressionRGTC; // GL_ARB_texture_compression_rgtc...: Red/Green texture compression: RGTC/AT1N/ATI2N.
int hasTextureFilterAnisotropic; // GL_EXT_texture_filter_anisotropic.: Anisotropic texture filtering!
};
Where I place all the feature information collected at startup with glGet and by testing function pointers. Then, when using a feature/function, I always check first for availability, providing a fallback if possible. E.g.:
if (glInfo.hasVAO)
{
    draw_with_VAO();
}
else
{
    draw_with_VB_only();
}
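A sketch of how such a table might be populated on a 2.x context by scanning the extension string follows; the QueryCaps and HasExtension names are placeholders, a robust version would match whole tokens rather than substrings, and on 3.0+ contexts you would use glGetStringi instead:

#include <cstring>

// On a 2.x context the extension list is one space-separated string.
static bool HasExtension(const char* name)
{
    const char* all = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return all != nullptr && std::strstr(all, name) != nullptr;
}

void QueryCaps(GLInfo& info)
{
    info.hasVAO = HasExtension("GL_ARB_vertex_array_object");
    info.hasPBO = HasExtension("GL_ARB_pixel_buffer_object");
    info.hasFBO = HasExtension("GL_ARB_framebuffer_object");
    // ...and so on for the remaining fields.
}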
Of course, there are some minimal features that you might decide the hardware must have in order to run your software, e.g. at least OpenGL 2.1 with support for GLSL v120. This is perfectly normal and expected.
As for the differences between the core and compatibility profiles: if you wish to support both, there are some OpenGL features you will have to avoid. For example, always draw using VBOs rather than client-side arrays (see the sketch below); VBOs are the norm in the core profile and existed before it via extensions.
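For reference, sourcing vertex data from a VBO rather than a client-side pointer looks roughly like this sketch; attribute location 0 and the triangle data are arbitrary, and on a core profile a VAO must also be bound:

// One 2D triangle, uploaded to a buffer object once at load time.
GLfloat verts[] = { 0.0f, 0.5f,   -0.5f, -0.5f,   0.5f, -0.5f };

GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

// At draw time: attribute 0 reads from the bound VBO, not from client memory.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (const void*)0);
glDrawArrays(GL_TRIANGLES, 0, 3);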
In the Android NDK, is it possible to make OpenGL ES 1.1 work with the typical Java-side GLSurfaceView pattern (overriding GLSurfaceView.Renderer methods such as onDrawFrame, onSurfaceCreated, etc.) while using framebuffers, color and depth renderbuffers, and VBOs on the C++ side?
I am trying to create them using this:
void ES1Renderer::on_surface_created() {
    // Create default framebuffer object. The backing will be allocated for the current layer in -resizeFromLayer.
    glGenFramebuffersOES(1, &defaultFramebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);

    // Create color renderbuffer object.
    glGenRenderbuffersOES(1, &colorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer);

    // Create depth renderbuffer object.
    glGenRenderbuffersOES(1, &depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
}
However, it seems that this does not pick up the context correctly; I think the context is created when the GLSurfaceView and renderer are initialized (on the Java side).
I am no expert on either the NDK or OpenGL ES, but I have to port an iOS app that uses OpenGL ES 1.1, and I aim to reuse as much code as I can. Since the app also uses platform-specific UI components (buttons, lists, etc.) alongside the GL graphics, I thought this would be the best way to go. However, I am now considering using a native activity, though I am not sure what its relationship with the other Java components would be.
Absolutely, yes. The standard approach is that you create a GLSurfaceView like you would when using OpenGL from Java, create and hook up your GLSurfaceView.Renderer implementation, and let the rendering thread start up.
From your Renderer methods, like onSurfaceCreated() and onDrawFrame(), you can now call the JNI functions that invoke functions in your native code. In those native functions, you can make any OpenGL API calls your heart desires. For example, in the function you call from onSurfaceCreated() you might create some objects and set up some initial state. In the function you call from onSurfaceChanged(), you might set up your viewport and projection. In the function you call from onDrawFrame(), you do your rendering.
You can even make OpenGL calls from both Java and native code. The Java OpenGL API is just a very thin layer around the native functions. It doesn't make a difference if the functions are called from native code or through the Java API.
The only thing you need to watch out for is that you invoke all your native code that makes OpenGL API calls from the GLSurfaceView.Renderer implementations of onSurfaceCreated(), onSurfaceChanged() and onDrawFrame(). When these methods are called, you are in the rendering thread, and have a current OpenGL context. If native OpenGL code is invoked from anywhere else, chances are that you are in the wrong thread and/or you do not have a current OpenGL context.
There are of course more complex setups where you create your own OpenGL contexts, make them current explicitly, etc. But I would strongly recommend to stick with the simple approach above unless you have a very good reason why you need something more. For most standard OpenGL rendering, what I described should be perfectly sufficient.
I am working on a gaming framework of sorts, and am a newcomer to OpenGL. Most books seem to not give a terribly clear answer to this question, and I want to develop on my desktop using OpenGL, but execute the code in an OpenGL ES 2.0 environment. My question is twofold then:
If I target my framework for OpenGL on the desktop, will it just run without modification in an OpenGL ES 2.0 environment?
If not, then is there a good emulator out there, PC or Mac; is there a script that I can run that will convert my OpenGL code into OpenGL ES code, or flag things that won't work?
It's been about three years since I was last doing any ES work, so I may be out of date or simply remembering some stuff incorrectly.
No, targeting desktop OpenGL does not equal targeting OpenGL ES, because ES is a subset. ES does not implement the immediate-mode functions (glBegin()/glEnd(), glVertex*(), ...); vertex arrays are the main way of sending data into the pipeline.
Additionally, it depends on which profile you are targeting: at least in the Lite profile, ES does not need to implement floating-point functions. Instead you get fixed-point functions; think of 32-bit integers where the first 16 bits are the digits before the decimal point and the following 16 bits are the digits after it.
In other words, even simple code might be unportable if it uses floats (you'd have to replace calls to gl*f() functions with calls to gl*x() functions).
See how you might solve this problem in Trolltech's example (specifically the qtwidget.cpp file; it's a Qt example, but still...). You'll see they make this call:
q_glClearColor(f2vt(0.1f), f2vt(0.1f), f2vt(0.2f), f2vt(1.0f));
This is meant to replace the floating-point glClearColor() call. Additionally, they use the macro f2vt() ("float to vertex type"), which automagically converts the argument from float to the correct data type.
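In 16.16 fixed point, converting a float is just a scale by 2^16, so f2vt() boils down to something like the sketch below (the real SDK macro may differ in its details):

// GLfixed (from the OpenGL ES headers) is a 32-bit value with 16 integer and 16 fractional bits.
#define f2vt(f) ((GLfixed)((f) * 65536.0f))
// For example, f2vt(1.0f) == 65536 and f2vt(0.5f) == 32768.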
While developing some small demos for a company three years ago, I had success working with PowerVR's SDK. It's for Visual C++ under Windows; I haven't tried it under Linux (there was no need, since I was working on a company PC).
A small update to reflect my recent experiences with ES. (June 7th 2011)
Today's platforms probably don't use the Lite profile, so you probably don't have to worry about fixed-point arithmetic.
When porting your desktop code for mobile (e.g. iOS), quite probably you'll have to do primarily these, and not much else:
replace glBegin()/glEnd() with vertex arrays
replace some calls such as glClearDepth() and glDepthRange() with their ES counterparts glClearDepthf() and glDepthRangef()
rewrite your windowing and input system
if targeting OpenGL ES 2.0 to get shader functionality, you'll now have to completely replace the fixed-function pipeline's built-in behavior with shaders, at least with basic ones that reimplement the fixed-function pipeline
Really important: if your mobile system is at all memory-constrained, you really want to look into using texture compression for your graphics chip; for example, on iOS devices you'll be uploading PVRTC-compressed data to the chip
In OpenGL ES 2.0, which is what new gadgets use, you also have to provide your own vertex and fragment shaders because the old fixed-function pipeline is gone. This means doing any shading calculations yourself, which can be quite complex, but you can find existing implementations in GLSL tutorials.
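To give a feel for what that means, the untextured fixed-function basics boil down to a model-view-projection transform plus a constant colour; a minimal GLSL ES 1.00 sketch, embedded here as C++ string literals with placeholder names, looks like this:

// Minimal GLSL ES 1.00 shaders: transform each vertex by a model-view-projection
// matrix and paint everything one constant colour.
static const char* kVertexShader =
    "attribute vec4 a_position;\n"
    "uniform mat4 u_mvp;\n"
    "void main() {\n"
    "    gl_Position = u_mvp * a_position;\n"
    "}\n";

static const char* kFragmentShader =
    "precision mediump float;\n"
    "uniform vec4 u_color;\n"
    "void main() {\n"
    "    gl_FragColor = u_color;\n"
    "}\n";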
Still, as GLES is a subset of desktop OpenGL, it is possible to run the same program on both platforms.
I know of two projects to provide GL translation between desktop and ES:
glshim: Substantial fixed pipeline to 1.x support, basic ES 2.x support.
Regal: Anything to ES 2.x.
From my understanding, OpenGL ES is a subset of OpenGL. I think that if you refrain from using immediate-mode stuff like glBegin() and glEnd(), you should be all right. I haven't done much with OpenGL in the past couple of months, but when I was working with ES 1.0, all the code I had learned from standard OpenGL worked as long as I didn't use glBegin/glEnd.
I know the iPhone simulator runs OpenGL ES code. I'm not sure about the Android one.
Here is a Windows emulator.
Option 3) You could use a library like Qt to handle your OpenGL code using their built in wrapper functions. This gives you the option of using one code base (or minimally different code bases) for OpenGL and building for most any platform you want. You wouldn't need to port it for each different platform you wanted to support. Qt can even choose the OpenGL context based on the functions that you use.