Forcing OpenGL 2.0 and above in C++

I am a student who has been learning C++ and OpenGL for five months now, and we have touched some advanced topics over the course of time, starting from basic OpenGL like glBegin/glEnd, to vertex arrays, to VBOs, to shaders, etc. Our professor has made us build up our graphics engine over time from the first class, and every now and then he asks us to stop using one or another deprecated feature and move on to the newer versions.
Now as part of the current assignment, he asked us to get rid of everything prior to OpenGL ES 2.0. Our codebase is fairly large, and I was wondering if I could restrict OpenGL to 2.0 and above only, so that using those deprecated features would actually fail, and I can make sure they are all out of my engine.

When you initialize your OpenGL context, you can pass hints to request a specific context version. For example, using the GLFW library:
// Request an OpenGL ES 2.0 (or later) context from GLFW.
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
GLFWwindow* window = glfwCreateWindow(res_width, res_height, window_name, monitor, NULL);
This will fail (in the case of GLFW, by returning a NULL window) if the OpenGL library doesn't support ES 2.0. If you are not using GLFW, your platform's native EGL (or WGL, GLX, AGL, etc.) functions offer the same functionality.
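For reference, the same request made directly through EGL looks roughly like this (a minimal sketch with error handling omitted):
EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
eglInitialize(display, NULL, NULL);
// Ask only for configs that can back an ES 2.0 context.
const EGLint config_attribs[] = {
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_NONE
};
EGLConfig config;
EGLint num_configs = 0;
eglChooseConfig(display, config_attribs, &config, 1, &num_configs);
// Request a client version 2 context; this fails on drivers without ES 2.0.
const EGLint context_attribs[] = {
    EGL_CONTEXT_CLIENT_VERSION, 2,
    EGL_NONE
};
EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, context_attribs);
// context == EGL_NO_CONTEXT here means ES 2.0 is unsupported.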

Related

Can I use both deprecated OpenGL and modern OpenGL in a single rendering window?

I am developing a project using modern OpenGL through OpenTK. I want to use the Gwen dot net GUI library in my project. Unfortunately, Gwen dot net uses old OpenGL for its widget rendering. I have tried merging both modern OpenGL and Gwen dot net and so far have been unsuccessful. Before I waste my time debugging my code, I would like to know: is it possible to merge both old OpenGL and modern OpenGL?
If you create a compatibility profile context, it should support all legacy functionality; see the sketch after this quote. From the OpenGL 4.3 compatibility spec, section 1.2.4:
Older generations of graphics hardware were not programmable using shaders, although they were configurable by setting state controlling specific details of their operation. The compatibility profile of OpenGL continues to support the legacy OpenGL commands developed for such fixed-function hardware, although they are typically implemented by writing shaders which reproduce the operation of such hardware. Fixed-function OpenGL commands and operations are described as alternative interfaces following descriptions of the corresponding shader stages.
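In practice, requesting such a context is a few lines at creation time. A minimal sketch with GLFW (the asker's OpenTK has its own way of selecting the context profile; the window size and title here are placeholders):
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
// The compatibility profile keeps the legacy fixed-function entry points alive.
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
GLFWwindow* window = glfwCreateWindow(800, 600, "mixed old and new GL", NULL, NULL);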
These days, mixing old-style and new-style OpenGL is best avoided. On MS Windows and Linux you can, but weird stuff tends to happen.
For macOS, Apple has declared that it will not support compatibility contexts at all, so there you can't mix.
Since you're stuck with the GUI toolkit, I would try to isolate all your new style OpenGL code in a separate context and render to an offscreen target, then blit that to the main display.
The OpenTK renderer for GWEN is a separate class. Just rewrite it the modern way; there's no problem with that.

How did I just use an OpenGL 3 feature in a 1.1 context?

I just started programming in OpenGL a few weeks ago, and as people suggested to me, I used GLFW as my window handler and GLEW as my extensions handler. So I go through the whole process of making a vertex buffer with three points to draw a triangle and passing it to OpenGL to draw, and I compile and run. No triangle draws, presumably because I didn't have any shaders. So I think to myself, "Why don't I lower my OpenGL version through the context creation using GLFW?" and I did that, from OpenGL 3.3 to 1.1, and sure enough, there's a triangle. Success, I thought. Then I remembered an article saying that vertex buffers were only introduced in OpenGL 3, so how have I possibly used an OpenGL 3 feature in a 1.1 context?
The graphics driver is free to give you a context which is a different version than what you requested, as long as they are compatible. For example, you may get a v3.0 context even if you ask for a v1.1 context, as OpenGL 3.0 does not change or remove any features from OpenGL 1.1.
Additionally, oftentimes the only difference between OpenGL versions is which extensions the GPU must support. If you have a v1.1 context but ARB_vertex_buffer_object is supported, then you will still be able to use VBOs (though you may need to append the ARB suffix to the function names), as in the sketch below.
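A sketch of that with GLEW (the extension macro and the ARB-suffixed entry points are the real ones; the triangle data is only illustrative):
if (GLEW_ARB_vertex_buffer_object) {
    GLfloat tri[9] = { -1, 0, 0,  1, 0, 0,  0, 1, 0 }; // illustrative data
    GLuint vbo;
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    // Same semantics as glBufferData, exposed through the extension's names.
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(tri), tri, GL_STATIC_DRAW_ARB);
}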

How to learn OpenGL the right way using http://www.opengl-tutorial.org/?

I have set out to learn OpenGL using this tutorial.
I followed the instructions, installed the libraries and compiled the tutorial source code and when I tried to run it I got:
Failed to open GLFW window. If you have an Intel GPU, they are not 3.3
compatible. Try the 2.1 version of the tutorials.
So I checked out the FAQ on this particular issue and got this advice:
However, I do not fully understand this advice. I have a five-year-old laptop with Ubuntu 13.10 and a Mobile Intel® GM45 Express Chipset (x86/MMX/SSE2). According to the FAQ, OpenGL 3.3 is not supported for me. The FAQ suggests that I learn OpenGL 3.3 anyhow.
But how can I learn it without actually running the code?
Is there a way to emulate OpenGl 3.3 somehow on older hardware?
I think the sad truth is that you have to update your hardware. It's relatively cheap on desktop computers (3.3-capable GPUs can be had for coffee money, really), but on mobile you are more limited, I guess.
The available emulators, like ANGLE or the ARM Mali one, focus mostly on ES, and the latter requires 3.2/3.3 support anyway.
That being said, you absolutely can learn OpenGL without running the code, although it's certainly less fun. Aside from GL 2.1, I'd explore WebGL too; maybe it's not cutting edge, but it's fun enough for a lot of people to dig it.
Perhaps you can set out to learn OpenGL 2.1 instead; however, I wouldn't recommend sticking with it! A great deal changed in OpenGL 3.0, where a lot of the old functionality you could use in v2.1 became deprecated.
Modern versions of the OpenGL specification force developers to use the 'programmable pipeline' via shader programs in order to render.
Although v2.1 supports some shader features, it also retains the 'fixed-function pipeline' for rendering (a modern counterpart is sketched below for contrast).
Personally, I started learning OpenGL through its Java bindings (this may simplify things compared to using the Windows API directly). However, no matter which bindings you use, the OpenGL specification remains the same. All implementations of OpenGL require you to create some window/display to render to and to respond to some basic rendering events (initialization and window resize, for example).
Within the fixed-function pipeline, you can make calls such as the following to render a triangle to the screen. The vertices and colors for those vertices are described within the glBegin/End block.
glBegin(GL_TRIANGLES);
    glColor3d(1, 0, 0);
    glVertex3d(-1, 0, 0);
    glColor3d(0, 1, 0);
    glVertex3d(1, 0, 0);
    glColor3d(0, 0, 1);
    glVertex3d(0, 1, 0);
glEnd();
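For contrast, a rough sketch of the same triangle under the programmable pipeline (this assumes a GL 3.3 core context and an already-compiled shader program prog that reads position and color from attribute locations 0 and 1):
float verts[] = {
    // x, y, z,  r, g, b
    -1, 0, 0,  1, 0, 0,
     1, 0, 0,  0, 1, 0,
     0, 1, 0,  0, 0, 1,
};
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
// Interleaved layout: 3 floats of position, then 3 floats of color.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
glUseProgram(prog);
glDrawArrays(GL_TRIANGLES, 0, 3);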
Here are some links you may want to visit to learn more:
- OpenGL Version History
- Swiftless Tutorials (I highly recommend this one!)
- Lighthouse 3D (good for GLSL)
- Java OpenGL Tutorial

Cross-platform renderer in OpenGL ES

I'm writing a cross-platform renderer. I want to use it on Windows, Linux, Android, and iOS.
Do you think that it is a good idea to avoid absolute abstraction and write it directly in OpenGL ES 2.0?
As far as I know, I should be able to compile it on PC against standard OpenGL with only small changes in the code that handles the context and the connection to the windowing system.
Do you think that it is a good idea to avoid absolute abstraction and write it directly in OpenGL ES 2.0?
Your principal difficulty with this will be dealing with those parts of the ES 2.0 specification which are not actually the same as OpenGL 2.1.
For example, you just can't shove ES 2.0 shaders through a desktop GLSL 1.20 compiler. ES 2.0 shaders use things like precision qualifiers, which are illegal constructs in GLSL 1.20.
You can however #define around them, but this requires a bit of manual intervention: you will have to insert an #ifdef into the shader source file. There are shader compilation tricks you can do to make this a bit easier.
Indeed, because GL ES uses a completely different set of extensions (though some are mirrors and subsets of desktop GL extensions), you may want to do this.
Every GLSL shader (desktop or ES) needs to have a "preamble". The first non-comment thing in a shader needs to be a #version declaration. Note that the directive differs between the two targets: desktop GLSL 1.20 uses #version 120, while GLSL ES 1.00 (the ES 2.0 shading language) uses #version 100. The problem is what comes next: the #extension list (if any), which enables extensions needed by the shader.
Since GL ES uses different extensions from desktop GL, you will need to change this extension list. And since odds are good you're going to need more GLSL ES extensions than desktop GL 2.1 extensions, these lists won't just be 1:1 mapping, but completely different lists.
My suggestion is to employ the ability to give GLSL shaders multiple strings. That is, your actual shader files do not have any preamble stuff. They only have the actual definitions and functions. The main body of the shader.
When running on GL ES, you have a global preamble that you will affix to the beginning of the shader. You will have a different global preamble in desktop GL. The code would look like this:
GLuint shader = glCreateShader(/*shader type*/);
const char *shaderList[2];
shaderList[0] = GetGlobalPreambleString(); //Gets preamble for the right platform
shaderList[1] = LoadShaderFile(); //Get the actual shader file
glShaderSource(shader, 2, shaderList, NULL);
The preamble can also include a platform-specific #define. User-defined of course. That way, you can #ifdef code for different platforms.
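For illustration, the two preambles might look like this (the #define names are arbitrary assumptions, not anything mandated by GL):
// Desktop GL 2.1 preamble: GLSL 1.20, with ES precision qualifiers defined away.
static const char *desktop_preamble =
    "#version 120\n"
    "#define DESKTOP_GL 1\n"
    "#define lowp\n"
    "#define mediump\n"
    "#define highp\n";
// GL ES 2.0 preamble: GLSL ES 1.00.
static const char *es_preamble =
    "#version 100\n"
    "#define MOBILE_GLES 1\n";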
There are other differences between the two. For example, while valid ES 2.0 texture uploading function calls will work fine in desktop GL 2.1, they will not necessarily be optimal there: pixel formats that the hardware consumes directly on mobile GPUs may require some bit twiddling by the driver on desktop machines. So you may want to have a way to specify different pixel transfer parameters on GL ES and desktop GL.
Also, there are different sets of extensions in ES 2.0 and desktop GL 2.1 that you will want to take advantage of. While many of them try to mirror one another (OES_framebuffer_object is a subset of EXT_framebuffer_object), you may run afoul of similar "not quite a subset" issues like those mentioned above.
In my humble experience, the best approach for this kind of requirement is to develop your engine in a pure C flavor, with no additional layers on it.
I am the main developer of the PATRIA 3D engine, which is based on the principle you just mentioned in terms of portability, and we achieved this by developing the tool on basic standard libraries.
The effort to compile your code on the different platforms is then very minimal.
The actual effort to port the entire solution depends on the components you want to embed in your engine.
For example:
Standard C:
- 3D engine
- Game logic
- Game AI
- Physics

Plus the platform-dependent components:
- Window interface (GLUT, EGL, etc.): could be GLUT for desktop and EGL for mobile devices
- Human interface: depends on the port (Java for Android, Objective-C for iOS, whatever the desktop version uses)
- Sound manager: depends on the port
- Market services: depends on the port
In this way, you can re-use 95% of your efforts in a seamless way.
We have adopted this solution for our engine, and so far it has been well worth the initial investment.
Here are the results of my experience implementing OpenGL ES 2.0 support for various platforms on which my commercial mapping and routing library runs.
The rendering class is designed to run in a separate thread. It has a reference to the object containing the map data and the current view information, and uses mutexes to avoid conflicts when reading that information at the time of drawing. It maintains a cache of OpenGL ES vector data in graphics memory.
All the rendering logic is written in C++ and is used on all the following platforms.
Windows (MFC)
Use the ANGLE library: link to libEGL.lib and libGLESv2.lib and ensure that the executable has access to the DLLs libEGL.dll and libGLESv2.dll. The C++ code creates a thread that redraws the graphics at a suitable rate (e.g., 25 times a second).
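A minimal sketch of such a redraw thread (renderer, display, and surface are assumed to exist already; the 40 ms sleep gives roughly 25 frames a second):
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> running{true};
std::thread render_thread([&] {
    while (running) {
        renderer.Draw();                  // shared C++ rendering code
        eglSwapBuffers(display, surface); // present through ANGLE's EGL
        std::this_thread::sleep_for(std::chrono::milliseconds(40)); // ~25 fps
    }
});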
Windows (.NET and WPF)
Use a C++/CLI wrapper to create an EGL context and to call the C++ rendering code that is used directly in the MFC implementation. The C++ code creates a thread that redraws the graphics at a suitable rate (e.g., 25 times a second).
Windows (UWP)
Create the EGL context in the UWP app code and call the C++ rendering code via a C++/CX wrapper. You will need to use a SwapChainPanel and create your own render loop running in a different thread. See the GLUWP project for sample code.
Qt on Windows, Linux and Mac OS
Use a QOpenGLWidget as your window. Use the Qt OpenGL ES wrapper to create the EGL context, then call the C++ rendering code in your paintGL() function.
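A minimal sketch of that widget, reusing the CMyRenderer class that appears later in this answer:
#include <QOpenGLWidget>
#include <memory>

class MapWidget : public QOpenGLWidget {
protected:
    void initializeGL() override { m_renderer = std::make_unique<CMyRenderer>(); }
    // Qt makes the context current before calling paintGL().
    void paintGL() override { m_renderer->Draw(); }
private:
    std::unique_ptr<CMyRenderer> m_renderer;
};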
Android
Create a renderer class implementing android.opengl.GLSurfaceView.Renderer. Create a JNI wrapper for the C++ rendering object. Create the C++ rendering object in your onSurfaceCreated() function. Call the C++ rendering object's drawing function in your onDrawFrame() function. You will need to import the following libraries for your renderer class:
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView.Renderer;
Create a view class derived from GLSurfaceView. In your view class's constructor first set up your EGL configuration:
setEGLContextClientVersion(2); // use OpenGL ES 2.0
setEGLConfigChooser(8,8,8,8,24,0);
then create an instance of your renderer class and call setRenderer to install it.
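The JNI wrapper mentioned above boils down to a few C-linkage entry points; a sketch (the Java package and method names are hypothetical):
#include <jni.h>

static CMyRenderer *g_renderer = nullptr;

extern "C" JNIEXPORT void JNICALL
Java_com_example_MyRenderer_nativeSurfaceCreated(JNIEnv *, jobject) {
    g_renderer = new CMyRenderer(); // called from onSurfaceCreated()
}

extern "C" JNIEXPORT void JNICALL
Java_com_example_MyRenderer_nativeDrawFrame(JNIEnv *, jobject) {
    g_renderer->Draw(); // called from onDrawFrame()
}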
iOS
Use the MetalANGLE library, not GLKit, which Apple has deprecated and will eventually stop supporting.
Create an Objective-C++ renderer class to call your C++ OpenGL ES drawing logic.
Create a view class derived from MGLKView. In your view class's drawRect() function, create a renderer object if it doesn't yet exist, then call its drawing function. That is, your drawRect function should be something like:
-(void)drawRect:(CGRect)rect
{
    if (m_renderer == nil && m_my_other_data != nil)
        m_renderer = [[MyRenderer alloc] init:m_my_other_data];
    if (m_renderer)
        [m_renderer draw];
}
In your app you'll need a view controller class that creates the OpenGL context and sets it up, using code like this:
MGLContext* opengl_context = [[MGLContext alloc] initWithAPI:kMGLRenderingAPIOpenGLES2];
m_view = [[MyView alloc] initWithFrame:aBounds context:opengl_context];
m_view.drawableDepthFormat = MGLDrawableDepthFormat24;
self.view = m_view;
self.preferredFramesPerSecond = 30;
Linux
It is easiest to use Qt on Linux (see above), but it's also possible to use the GLFW framework. In your app class's constructor, call glfwCreateWindow to create a window and store it as a data member. Call glfwMakeContextCurrent to make the EGL context current, then create a data member holding an instance of your renderer class; something like this:
m_window = glfwCreateWindow(1024,1024,"My Window Title",nullptr,nullptr);
glfwMakeContextCurrent(m_window);
m_renderer = std::make_unique<CMyRenderer>();
Add a Draw function to your app class:
bool MyApp::Draw()
{
    if (glfwWindowShouldClose(m_window))
        return false;
    m_renderer->Draw();
    /* Swap front and back buffers */
    glfwSwapBuffers(m_window);
    return true;
}
Your main() function will then be:
int main(void)
{
    /* Initialize the library */
    if (!glfwInit())
        return -1;
    // Create the app.
    MyApp app;
    /* Draw continuously until the user closes the window */
    while (app.Draw())
    {
        /* Poll for and process events */
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
Shader incompatibilities
There are incompatibilities in the shader language accepted by the various OpenGL ES 2.0 implementations. I overcome these in the C++ code using the following conditionally compiled code in my CompileShader function:
const char* preamble = "";
#if defined(_POSIX_VERSION) && !defined(ANDROID) && !defined(__ANDROID__) && !defined(__APPLE__) && !defined(__EMSCRIPTEN__)
// for Ubuntu using Qt or GLFW
preamble = "#version 100\n";
#elif defined(USING_QT) && defined(__APPLE__)
// On the Mac #version doesn't work so the precision qualifiers are suppressed.
preamble = "#define lowp\n#define mediump\n#define highp\n";
#endif
The preamble is then prefixed to the shader code.

What is the relationship between EGL and OpenGL?

I'm writing an implementation of OpenVG and OpenGL ES in Go, both of which depend on the Khronos EGL API, supposedly to ease portability, I guess.
I'm writing an implementation of OpenVG on top of OpenGL ES for fun and educational reasons - I haven't done a lot of rendering work and I'd like to learn more about the open APIs and practice implementing well defined standards (easier to see if I got the right results).
As I understand it, EGL provides a standard API for retrieving a drawing context (or whatever it's rightly called), instead of using one of the multiple OS-provided APIs (GLX, WGL, etc.).
I have a hard time believing Khronos would go through such effort and leave standard OpenGL out of the loop, but the thing is, I haven't found how, or whether, OpenGL (the real deal) interfaces with EGL, or if it's only OpenGL ES. If OpenGL ES can use the drawing context from EGL, would standard OpenGL also work?
I'm really new to all of this which is why I'm excited but the real project I'm doing is a Go widget toolkit that utilizes OpenVG for its drawing operations and uses hardware acceleration wherever possible.
If OpenVG, OpenGL and OpenGL ES depend on EGL, I think my question can be answered with "yes" or "no". Just keep in mind that I dove into this subject head-first last night.
Does OpenGL use or depend on EGL?
Off topic, but there is no EGL tag. Should there be?
You can bind EGL_OPENGL_API as the current API for your thread via eglBindAPI(EGLenum api); a subsequent eglCreateContext will then create an OpenGL rendering context.
From the EGL spec, p42:
Some of the functions described in this section make use of the current rendering API, which is set on a per-thread basis by calling EGLBoolean eglBindAPI(EGLenum api); api must specify one of the supported client APIs, either EGL_OPENGL_API, EGL_OPENGL_ES_API, or EGL_OPENVG_API.
The caveat is that an EGL implementation is well within its rights not to support EGL_OPENGL_API, and instead to generate an EGL_BAD_PARAMETER error if you try to bind it.
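In code, the bind-then-create sequence, including that caveat, is short (a sketch; display and config come from the usual eglInitialize/eglChooseConfig setup):
if (eglBindAPI(EGL_OPENGL_API) == EGL_FALSE) {
    // Implementation refuses desktop GL; eglGetError() reports EGL_BAD_PARAMETER.
}
// With EGL_OPENGL_API bound, this creates a desktop OpenGL context.
EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL);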
It's also hard to link to libGL without picking up the AGL/WGL/GLX cruft; the ABI on those platforms requires that libGL provide those entry points. Depending on what platform you're playing with, this may or may not be a problem.
Does OpenGL use or depend on EGL?
No. You can run OpenGL without EGL.
But it is possible for an EGL implementation to be capable of creating a desktop OpenGL context. That's because eglBindAPI(EGLenum api) accepts EGL_OPENGL_API, EGL_OPENGL_ES_API, or EGL_OPENVG_API.
But if you ask:
Does OpenGL-ES use or depend on EGL?
The answer is yes, but there are exceptions.
Currently (2015), several implementations of OpenGL ES rely on EGL to create graphics contexts: Google ANGLE, PowerVR, ARM Mali, Adreno, AMD, Mesa, etc.
But in recent releases of the NVIDIA and Intel drivers you can also request OpenGL ES contexts directly, where the extensions WGL_EXT_create_context_es_profile and WGL_EXT_create_context_es2_profile are available (Windows); a sketch follows below. The same applies on Unix platforms where the GLX_EXT_create_context_es_profile and GLX_EXT_create_context_es2_profile extensions are available.
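For instance, with WGL_EXT_create_context_es2_profile present, requesting an ES 2.0 context on Windows looks roughly like this (a sketch; wglCreateContextAttribsARB must first be loaded via wglGetProcAddress, and hdc is an existing device context):
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 2,
    WGL_CONTEXT_MINOR_VERSION_ARB, 0,
    // Provided by WGL_EXT_create_context_es2_profile:
    WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_ES2_PROFILE_BIT_EXT,
    0
};
HGLRC context = wglCreateContextAttribsARB(hdc, NULL, attribs);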
The intent of EGL is to ease developers' lives by creating a portable and standard way to initialize supported graphics APIs and obtain rendering contexts, without worrying about platform-specific issues such as WGL, GLX, etc. That is a problem for EGL implementers, not the application programmer.
There is no relationship between OpenGL and EGL. EGL generally does not run on desktops, and there is no ability to create a desktop OpenGL context through EGL.
OpenGL contexts are instead created and managed by platform-specific APIs. On Windows, the WGL API is used. On X11-based platforms, GLX is used. And so forth.
There was some noise last year from Khronos about creating a version of EGL that could work on the desktop and create OpenGL contexts, but thus far nothing has come of it.