How do I design my GL application for portability? - opengl

I am discovering that GL needs different settings on different systems to run correctly. For instance, on my desktop if I request a version 3.3 core-profile context everything works. On a Mac, I have to set the forward-compatibility flag as well. My netbook claims to only support version 2.1, but has most of the 3.1 features as extensions. (So I have to request 2.1.)
My question is, is there a general way to determine:
Whether all of the GL features my application needs are supported, and
What combination of version, profile (core or not), and forward compatibility flag I need to make it work?

I don't think there is any magic bullet here. It's pretty much going to be up to you to do the appropriate runtime checking and provide alternate paths that make the most out of the available features.
When I'm writing an OpenGL app, I usually define a "caps" table, like this:
struct GLInfo {
    // GL extensions/features as booleans for direct access:
    int hasVAO;                      // GL_ARB_vertex_array_object........: Has VAO support? Allows faster switching between vertex formats.
    int hasPBO;                      // GL_ARB_pixel_buffer_object........: Supports Pixel Buffer Objects?
    int hasFBO;                      // GL_ARB_framebuffer_object.........: Supports Framebuffer objects for offscreen rendering?
    int hasGPUMemoryInfo;            // GL_NVX_gpu_memory_info............: GPU memory info/stats.
    int hasDebugOutput;              // GL_ARB_debug_output...............: Allows GL to generate debug and profiling messages.
    int hasExplicitAttribLocation;   // GL_ARB_explicit_attrib_location...: Allows the use of "layout(location = N)" in GLSL code.
    int hasSeparateShaderObjects;    // GL_ARB_separate_shader_objects....: Allows creation of "single stage" programs that can be combined at use time.
    int hasUniformBuffer;            // GL_ARB_uniform_buffer_object......: Uniform buffer objects (UBOs).
    int hasTextureCompressionS3TC;   // GL_EXT_texture_compression_s3tc...: Direct use of S3TC compressed textures.
    int hasTextureCompressionRGTC;   // GL_ARB_texture_compression_rgtc...: Red/Green texture compression: RGTC/ATI1N/ATI2N.
    int hasTextureFilterAnisotropic; // GL_EXT_texture_filter_anisotropic.: Anisotropic texture filtering!
};
Where I place all the feature information collected at startup with glGet and by testing function pointers. Then, when using a feature/function, I always check first for availability, providing a fallback if possible. E.g.:
if (glInfo.hasVAO)
{
    draw_with_VAO();
}
else
{
    draw_with_VB_only();
}
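For completeness, the table above can be filled in at startup along these lines (a rough sketch; hasGLExtension is a hypothetical helper that searches the GL_EXTENSIONS string, or iterates glGetStringi(GL_EXTENSIONS, i) on a core profile):
void queryGLInfo(GLInfo & info)
{
    // When loading entry points manually, also verify that the function
    // pointers are non-null before setting a flag.
    info.hasVAO           = hasGLExtension("GL_ARB_vertex_array_object");
    info.hasPBO           = hasGLExtension("GL_ARB_pixel_buffer_object");
    info.hasFBO           = hasGLExtension("GL_ARB_framebuffer_object");
    info.hasDebugOutput   = hasGLExtension("GL_ARB_debug_output");
    info.hasUniformBuffer = hasGLExtension("GL_ARB_uniform_buffer_object");
    info.hasTextureFilterAnisotropic = hasGLExtension("GL_EXT_texture_filter_anisotropic");
    // ...and so on for the remaining flags.
}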
Of course, there are some minimal features that you might decide the hardware must have in order to run your software. E.g.: it must have at least OpenGL 2.1 with support for GLSL v120. This is perfectly normal and expected.
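A startup check for that kind of baseline might look roughly like this (a sketch; fatalError is a placeholder for whatever error handling you use):
int major = 0, minor = 0;
sscanf((const char *)glGetString(GL_VERSION), "%d.%d", &major, &minor);
if (major < 2 || (major == 2 && minor < 1))
{
    fatalError("This application requires OpenGL 2.1 / GLSL 1.20 or newer.");
}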
Regarding the differences between the core and compatibility profiles: if you wish to support both, there are some OpenGL features that you will have to avoid. For example, always draw using VBOs; they are the norm in Core and existed before that via extensions.
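For reference, drawing from a VBO instead of client-side arrays boils down to this (a sketch; vertices, vertexCount and the attribute layout are illustrative, and on a core profile you also need a VAO bound, see hasVAO above):
GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);   // attribute 0 = position in the bound program
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void *)0);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);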

Related

What is the best way to copy the lower resolution mips of a texture in OpenGL?

The OpenGL application I maintain has to run on systems with little VRAM.
In some cases, at runtime, I can guarantee that a specific texture won't need high resolution. My textures are in compressed format.
What I would like to do is to discard the highest resolution mips to save VRAM.
I thought of doing something like this pseudocode:
u32 fullResTex = loadTextureAndUploadToGPU("my_texture.ktx");
generateMipmaps(fullResTex);
u32 lowResTex = createTexture();
copyMipsLowestMips(fullResTex -> lowResTex);
destroy(fullResTex)
I'm not sure which function of the OpenGL API can help me with "copyMipsLowestMips".
I have found a few that could potentially serve the purpose: glCopyTexImage2D,
glCopyImageSubData.
I have seen that glCopyImageSubData requires OpenGL 4.3 though. I would prefer to avoid using versions later than 3.3 but I might be able to use the function if available through an extension.
What would be the best way to approach this problem?
I ended up using glCopyImageSubData and it seems to work fine. It's the only function that was able to do what I needed. It's a pity that it requires OpenGL 4.3.
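For anyone with the same problem, the copy can be sketched roughly like this: a 2D texture whose full-resolution size is width x height with numLevels mips is assumed, the destination storage is allocated with glTexStorage2D (GL 4.2 or ARB_texture_storage), and the variable names are illustrative:
GLuint dstTex = 0;
glGenTextures(1, &dstTex);
glBindTexture(GL_TEXTURE_2D, dstTex);
// Allocate storage only for the levels we keep.
GLsizei dstW = std::max(1, width >> skipLevels);
GLsizei dstH = std::max(1, height >> skipLevels);
glTexStorage2D(GL_TEXTURE_2D, numLevels - skipLevels, internalFormat, dstW, dstH);
for (GLint level = skipLevels; level < numLevels; ++level)
{
    GLsizei w = std::max(1, width >> level);
    GLsizei h = std::max(1, height >> level);
    // Core in GL 4.3, also exposed by GL_ARB_copy_image / GL_EXT_copy_image.
    glCopyImageSubData(srcTex, GL_TEXTURE_2D, level, 0, 0, 0,
                       dstTex, GL_TEXTURE_2D, level - skipLevels, 0, 0, 0,
                       w, h, 1);
}
glDeleteTextures(1, &srcTex);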

Vulkan: Geometry Shader Validation incorrect?

I am currently using an NVIDIA GeForce GTX 780 (from Gigabyte, if that matters - I don't know how much this could be affected by the onboard BIOS. I also have two of them installed, but since Vulkan has no SLI support I only use one device at a time in my code; however, SLI is activated in the NVIDIA control center. I use the official driver version 375.63). That GPU is, of course, fully capable of geometry shaders.
I am using a geometry shader with the Vulkan API and it works all right and does everything I expect it to do. However, I get the following validation layer report: #[SC]: Shader requires VkPhysicalDeviceFeatures::geometryShader but is not enabled on the device.
Is this a bug? Does anyone have similar issues?
PS: http://vulkan.gpuinfo.org/displayreport.php?id=777#features says the support for "Geometry Shader" is "true", as expected. I am using the Vulkan 1.0.30.0 SDK.
Vulkan features work differently from OpenGL extensions. In OpenGL, if an extension is supported, then it's always active. In Vulkan, the fact that a feature is available is not sufficient. When you create a VkDevice, you must explicitly ask for all features you intend to use.
If you didn't ask for the Geometry Shader feature, then you can't use GS's, even if the VkPhysicalDevice advertises support for it.
So the sequence of steps should be to check to see if a VkPhysicalDevice supports the features you want to use, then supply those features in VkDeviceCreateInfo::pEnabledFeatures when you call vkCreateDevice.
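In rough code terms (a sketch using the C API; queueCreateInfo is assumed to be set up elsewhere):
VkPhysicalDeviceFeatures supported{};
vkGetPhysicalDeviceFeatures(physicalDevice, &supported);
if (!supported.geometryShader)
{
    // Fall back or abort: the device genuinely cannot do geometry shaders.
}
VkPhysicalDeviceFeatures enabled{};   // enable only the features you actually use
enabled.geometryShader = VK_TRUE;
VkDeviceCreateInfo deviceInfo{};
deviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
deviceInfo.queueCreateInfoCount = 1;
deviceInfo.pQueueCreateInfos = &queueCreateInfo;   // assumed to exist already
deviceInfo.pEnabledFeatures = &enabled;
VkDevice device = VK_NULL_HANDLE;
vkCreateDevice(physicalDevice, &deviceInfo, nullptr, &device);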
Since Vulkan doesn't do validation checking on most of its inputs, the actual driver will likely assume you enabled the feature and just do what it normally would. But it is not required to do so; using a feature which has not been enabled is undefined behavior. So the validation layer is right to stop you.

How to render to OpenGL from Vulkan?

Is it possible to render to OpenGL from Vulkan?
It seems nVidia has something:
https://lunarg.com/faqs/mix-opengl-vulkan-rendering/
Can it be done for other GPUs?
Yes, it's possible if the Vulkan implementation and the OpenGL implementation both have the appropriate extensions available.
There is an example app in the Vulkan Samples repository which uses OpenGL to render a simple shadertoy to a texture, and then uses that texture in a Vulkan-rendered window.
Although your question seems to suggest you want to do the reverse (render to something using Vulkan and then display the results using OpenGL), the same concepts apply: populate a texture in one API, use synchronization to ensure the GPU work is complete, and then use the texture in the other API. You can also do the same thing with buffers, so for instance you could use Vulkan for compute operations and then use the results in an OpenGL render.
Requirements
Doing this requires that both the OpenGL and Vulkan implementations support the required extensions. According to this site, these extensions are widely supported across OS versions and GPU vendors, as long as you're working with a recent (> 1.0.51) version of Vulkan.
You need the External Objects extensions for OpenGL and the External Memory/Fence/Semaphore extensions for Vulkan.
The Vulkan side of the extensions allow you to allocate memory, create semaphores or fences while marking the resulting objects as exportable. The corresponding GL extensions allow you to take the objects and manipulate them with new GL commands which allow you to wait on fences, signal and wait on semaphores, or use Vulkan allocated memory to back an OpenGL texture. By using such a texture in an OpenGL framebuffer, you can pretty much render whatever you want to it, and then use the rendered results in Vulkan.
Export / Import example code
For example, on the Vulkan side, when you're allocating memory for an image you can do this...
vk::Image image;
... // create the image as normal
vk::MemoryRequirements memReqs = device.getImageMemoryRequirements(image);
vk::MemoryAllocateInfo memAllocInfo;
vk::ExportMemoryAllocateInfo exportAllocInfo{
    vk::ExternalMemoryHandleTypeFlagBits::eOpaqueWin32
};
memAllocInfo.pNext = &exportAllocInfo;
memAllocInfo.allocationSize = memReqs.size;
memAllocInfo.memoryTypeIndex = context.getMemoryType(
    memReqs.memoryTypeBits, vk::MemoryPropertyFlagBits::eDeviceLocal);
vk::DeviceMemory memory = device.allocateMemory(memAllocInfo);
device.bindImageMemory(image, memory, 0);
HANDLE sharedMemoryHandle = device.getMemoryWin32HandleKHR({
    memory, vk::ExternalMemoryHandleTypeFlagBits::eOpaqueWin32
});
This is using the C++ interface and is using the Win32 variation of the extensions. For Posix platforms there are alternative methods for getting file descriptors instead of WIN32 handles.
The sharedMemoryHandle is the value that you'll need to pass to OpenGL, along with the actual allocation size. On the GL side you can then do this...
// These values should be populated by the Vulkan code
HANDLE sharedMemoryHandle;
GLuint64 sharedMemorySize;
// Create a 'memory object' in OpenGL, and associate it with the memory
// allocated in Vulkan
GLuint mem = 0;
glCreateMemoryObjectsEXT(1, &mem);
glImportMemoryWin32HandleEXT(mem, sharedMemorySize,
    GL_HANDLE_TYPE_OPAQUE_WIN32_EXT, sharedMemoryHandle);
// Having created the memory object we can now create a texture and use
// the memory object for backing it
GLuint color = 0;
glCreateTextures(GL_TEXTURE_2D, 1, &color);
// The internalFormat here should correspond to the format of
// the Vulkan image. Similarly, the w & h values should correspond to
// the extent of the Vulkan image
glTextureStorageMem2DEXT(color, 1, GL_RGBA8, w, h, mem, 0);
Synchronization
The trickiest bit here is synchronization. The Vulkan specification requires images to be in certain states (layouts) before corresponding operations can be performed on them. So in order to do this properly (based on my understanding), you would need to...
In Vulkan, create a command buffer that transitions the image to ColorAttachmentOptimal layout
Submit the command buffer so that it signals a semaphore that has similarly been exported to OpenGL
In OpenGL, use the glWaitSemaphoreEXT function to cause the GL driver to wait for the transition to complete.
Note that this is a GPU-side wait, so the function will not block at all. It's similar to glWaitSync (as opposed to glClientWaitSync) in this regard.
Execute your GL commands that render to the framebuffer
Signal a different exported Semaphore on the GL side with the glSignalSemaphoreEXT function
In Vulkan, execute another image layout transition from ColorAttachmentOptimal to ShaderReadOnlyOptimal
Submit the transition command buffer with the wait semaphore set to the one you just signaled from the GL side.
That would be the optimal path. Alternatively, the quick and dirty method would be to do the Vulkan transition, then execute queue and device waitIdle commands to ensure the work is done, execute the GL commands, followed by glFlush and glFinish to ensure the GPU is done with that work, and then resume your Vulkan commands. This is more of a brute-force approach and will likely produce poorer performance than doing the proper synchronization.
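For the GL side of that handshake, the semaphore usage might look roughly like this (a sketch using GL_EXT_semaphore / GL_EXT_semaphore_win32; readyHandle and completeHandle are assumed to come from vkGetSemaphoreWin32HandleKHR, color is the shared texture from the earlier snippet, and renderToSharedFramebuffer stands for your own GL draw calls):
GLuint glReady = 0, glComplete = 0;
glGenSemaphoresEXT(1, &glReady);
glGenSemaphoresEXT(1, &glComplete);
glImportSemaphoreWin32HandleEXT(glReady, GL_HANDLE_TYPE_OPAQUE_WIN32_EXT, readyHandle);
glImportSemaphoreWin32HandleEXT(glComplete, GL_HANDLE_TYPE_OPAQUE_WIN32_EXT, completeHandle);
// GPU-side wait for Vulkan's layout transition, render, then signal back.
GLenum srcLayout = GL_LAYOUT_COLOR_ATTACHMENT_EXT;
glWaitSemaphoreEXT(glReady, 0, nullptr, 1, &color, &srcLayout);
renderToSharedFramebuffer();
GLenum dstLayout = GL_LAYOUT_SHADER_READ_ONLY_EXT;
glSignalSemaphoreEXT(glComplete, 0, nullptr, 1, &color, &dstLayout);
glFlush();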
NVIDIA has created an OpenGL extension, NV_draw_vulkan_image, which can render a VkImage in OpenGL. It even has some mechanisms for interacting with Vulkan semaphores and the like.
However, according to the documentation, you must bypass all Vulkan layers, since layers can modify non-dispatchable handles and the OpenGL extension doesn't know about said modifications. And their recommended means of doing so is to use the glGetVkProcAddrNV for all of your Vulkan functions.
Which also means that you can't get access to any debugging that relies on Vulkan layers.
There is some more information in this more recent slide deck from SIGGRAPH 2016. Slides 63-65 describe how to blit a Vulkan image to an OpenGL backbuffer. My opinion is that it may have been pretty easy for NVIDIA to support this since the Vulkan driver is contained in libGL.so (on Linux). So it may not have been that hard to give the Vulkan image handle to the GL side of the driver and have it be useful.
As another answer pointed out, there are still no official registered multi-vendor interop extensions. This approach just works on NVIDIA.

Cross-platform renderer in OpenGL ES

I'm writing a cross-platform renderer. I want to use it on Windows, Linux, Android, and iOS.
Do you think that it is a good idea to avoid absolute abstraction and write it directly in OpenGL ES 2.0?
As far as I know, I should be able to compile it on PC against standard OpenGL with only small changes in the code that handles context creation and the connection to the windowing system.
Do you think that it is a good idea to avoid absolute abstraction and write it directly in OpenGL ES 2.0?
Your principal difficulties with this will be dealing with those parts of the ES 2.0 specification that are not actually the same as OpenGL 2.1.
For example, you just can't shove ES 2.0 shaders through a desktop GLSL 1.20 compiler. In ES 2.0, you use things like specifying precision; those are illegal constructs in GLSL 1.20.
You can however #define around them, but this requires a bit of manual intervention. You will have to insert a #ifdef into the shader source file. There are shader compilation tricks you can do to make this a bit easier.
Indeed, because GL ES uses a completely different set of extensions (though some are mirrors and subsets of desktop GL extensions), you may want to do this.
Every GLSL shader (desktop or ES) needs to have a "preamble". The first non-comment thing in a shader needs to be a #version declaration. Desktop GLSL 1.20 uses #version 120, while GLSL ES 1.00 (the ES 2.0 shading language) uses #version 100, so even the version line differs between the two targets. After that comes the #extension list (if any), which enables the extensions needed by the shader.
Since GL ES uses different extensions from desktop GL, you will need to change this extension list. And since odds are good you're going to need more GLSL ES extensions than desktop GL 2.1 extensions, these lists won't just be 1:1 mapping, but completely different lists.
My suggestion is to employ the ability to give GLSL shaders multiple strings. That is, your actual shader files do not have any preamble stuff. They only have the actual definitions and functions. The main body of the shader.
When running on GL ES, you have a global preamble that you will affix to the beginning of the shader. You will have a different global preamble in desktop GL. The code would look like this:
GLuint shader = glCreateShader(/*shader type*/);
const char *shaderList[2];
shaderList[0] = GetGlobalPreambleString(); //Gets preamble for the right platform
shaderList[1] = LoadShaderFile(); //Get the actual shader file
glShaderSource(shader, 2, shaderList, NULL);
The preamble can also include a platform-specific #define. User-defined of course. That way, you can #ifdef code for different platforms.
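For instance, the two preambles might look something like this (the exact contents are illustrative, not prescriptive):
// Desktop GL 2.1 build: strip ES precision qualifiers and tag the platform.
static const char * g_preambleDesktopGL =
    "#version 120\n"
    "#define DESKTOP_GL 1\n"
    "#define lowp\n"
    "#define mediump\n"
    "#define highp\n";
// GL ES 2.0 build: declare a default float precision and tag the platform.
static const char * g_preambleGLES2 =
    "#version 100\n"
    "#define MOBILE_GLES 1\n"
    "precision mediump float;\n";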
There are other differences between the two. For example, while valid ES 2.0 texture uploading calls will work fine in desktop GL 2.1, they will not necessarily be optimal: pixel formats and data types that a mobile driver can consume directly may need conversion or swizzling by a desktop driver. So you may want to have a way to specify different pixel transfer parameters on GL ES and desktop GL.
Also, there are different sets of extensions in ES 2.0 and desktop GL 2.1 that you will want to take advantage of. While many of them try to mirror one another (OES_framebuffer_object is a subset of EXT_framebuffer_object), you may run afoul of similar "not quite a subset" issues like those mentioned above.
In my humble experience, the best approach to this kind of requirement is to develop your engine in plain C, with no additional layers on top of it.
I am the main developer of the PATRIA 3D engine, which is based on the portability principle you just mentioned; we achieved it by building the engine on nothing but basic standard libraries.
The effort to then compile the code on the different platforms is minimal.
The actual effort to port the entire solution depends on the components you want to embed in your engine.
For example:
Standard C:
Engine 3D
Game Logic
Game AI
Physics
+
Window interface (GLUT, EGL, etc.) - depends on the platform; for example, GLUT for desktop and EGL for mobile devices.
Human interface - depends on the port: Java for Android, Objective-C for iOS, whatever you use on desktop.
Sound manager - depends on the port.
Market services - depends on the port.
In this way, you can re-use 95% of your effort in a seamless way.
We adopted this solution for our engine and so far it has really been worth the initial investment.
Here are the results of my experience implementing OpenGL ES 2.0 support for various platforms on which my commercial mapping and routing library runs.
The rendering class is designed to run in a separate thread. It has a reference to the object containing the map data and the current view information, and uses mutexes to avoid conflicts when reading that information at the time of drawing. It maintains a cache of OpenGL ES vector data in graphics memory.
All the rendering logic is written in C++ and is used on all the following platforms.
Windows (MFC)
Use the ANGLE library: link to libEGL.lib and libGLESv2.lib and ensure that the executable has access to the DLLs libEGL.dll and libGLESv2.dll. The C++ code creates a thread that redraws the graphics at a suitable rate (e.g., 25 times a second).
Windows (.NET and WPF)
Use a C++/CLI wrapper to create an EGL context and to call the C++ rendering code that is used directly in the MFC implementation. The C++ code creates a thread that redraws the graphics at a suitable rate (e.g., 25 times a second).
Windows (UWP)
Create the EGL context in the UWP app code and call the C++ rendering code via a C++/CX wrapper. You will need to use a SwapChainPanel and create your own render loop running in a different thread. See the GLUWP project for sample code.
Qt on Windows, Linux and Mac OS
Use a QOpenGLWidget as your window. Use the Qt OpenGL ES wrapper to create the EGL context, then call the C++ rendering code in your paintGL() function.
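A minimal widget along those lines might look like this (a sketch; the class and member names are illustrative, and CMyRenderer stands for the shared C++ rendering class used in the other ports):
class MapWidget : public QOpenGLWidget
{
protected:
    void initializeGL() override
    {
        // Create the shared renderer once the GL context exists.
        m_renderer = std::make_unique<CMyRenderer>();
    }
    void paintGL() override
    {
        m_renderer->Draw();   // call into the shared C++ rendering code
    }
private:
    std::unique_ptr<CMyRenderer> m_renderer;
};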
Android
Create a renderer class implementing android.opengl.GLSurfaceView.Renderer. Create a JNI wrapper for the C++ rendering object. Create the C++ rendering object in your onSurfaceCreated() function. Call the C++ rendering object's drawing function in your onDrawFrame() function. You will need to import the following libraries for your renderer class:
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView.Renderer;
Create a view class derived from GLSurfaceView. In your view class's constructor first set up your EGL configuration:
setEGLContextClientVersion(2); // use OpenGL ES 2.0
setEGLConfigChooser(8,8,8,8,24,0);
then create an instance of your renderer class and call setRenderer to install it.
iOS
Use the METALAngle library, not GLKit, which Apple has deprecated and will eventually no longer support.
Create an Objective C++ renderer class to call your C++ OpenGL ES drawing logic.
Create a view class derived from MGLKView. In your view class's drawRect() function, create a renderer object if it doesn't yet exist, then call its drawing function. That is, your drawRect function should be something like:
-(void)drawRect:(CGRect)rect
{
    if (m_renderer == nil && m_my_other_data != nil)
        m_renderer = [[MyRenderer alloc] init:m_my_other_data];
    if (m_renderer)
        [m_renderer draw];
}
In your app you'll need a view controller class that creates the OpenGL context and sets it up, using code like this:
MGLContext* opengl_context = [[MGLContext alloc] initWithAPI:kMGLRenderingAPIOpenGLES2];
m_view = [[MyView alloc] initWithFrame:aBounds context:opengl_context];
m_view.drawableDepthFormat = MGLDrawableDepthFormat24;
self.view = m_view;
self.preferredFramesPerSecond = 30;
Linux
It is easiest to use Qt on Linux (see above), but it's also possible to use the GLFW framework. In your app class's constructor, call glfwCreateWindow to create a window and store it as a data member. Call glfwMakeContextCurrent to make the context current, then create a data member holding an instance of your renderer class; something like this:
m_window = glfwCreateWindow(1024,1024,"My Window Title",nullptr,nullptr);
glfwMakeContextCurrent(m_window);
m_renderer = std::make_unique<CMyRenderer>();
Add a Draw function to your app class:
bool MyApp::Draw()
{
    if (glfwWindowShouldClose(m_window))
        return false;
    m_renderer->Draw();
    /* Swap front and back buffers */
    glfwSwapBuffers(m_window);
    return true;
}
Your main() function will then be:
int main(void)
{
    /* Initialize the library */
    if (!glfwInit())
        return -1;
    // Create the app.
    MyApp app;
    /* Draw continuously until the user closes the window */
    while (app.Draw())
    {
        /* Poll for and process events */
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
Shader incompatibilities
There are incompatibilities in the shader language accepted by the various OpenGL ES 2.0 implementations. I overcome these in the C++ code using the following conditionally compiled code in my CompileShader function:
const char* preamble = "";
#if defined(_POSIX_VERSION) && !defined(ANDROID) && !defined(__ANDROID__) && !defined(__APPLE__) && !defined(__EMSCRIPTEN__)
// for Ubuntu using Qt or GLFW
preamble = "#version 100\n";
#elif defined(USING_QT) && defined(__APPLE__)
// On the Mac #version doesn't work so the precision qualifiers are suppressed.
preamble = "#define lowp\n#define mediump\n#define highp\n";
#endif
The preamble is then prefixed to the shader code.
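The prefixing itself can be as simple as this (a sketch; shaderText and shader are assumed to exist already):
std::string source = std::string(preamble) + shaderText;
const char * src = source.c_str();
glShaderSource(shader, 1, &src, nullptr);
glCompileShader(shader);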

OpenGL: How to select correct mipmapping method automatically?

I'm having problems mipmapping textures on different hardware. I use the following code:
char *exts = (char *)glGetString(GL_EXTENSIONS);
if (strstr(exts, "SGIS_generate_mipmap") == NULL) {
    // use gluBuild2DMipmaps()
} else {
    // use GL_GENERATE_MIPMAP
}
But on some cards GL_GENERATE_MIPMAP is reported as supported when it's not, so the graphics card tries to read memory from where the mipmaps are supposed to be and ends up rendering other textures into those mip levels.
I tried glGenerateMipmapEXT(GL_TEXTURE_2D), but it makes all my textures white, even though I enabled GL_TEXTURE_2D before using that function (as instructed).
I could just use gluBuild2DMipmaps() for everyone, since it works. But I don't want to make new cards load 10x slower because there are two users with really old cards.
So how do you choose mipmap method correctly?
glGenerateMipmap has been part of core OpenGL since version 3.0; it is not just an extension there.
You have the following options (a sketch combining them follows below):
Check the OpenGL version; if it is at least as recent as the first version that supports glGenerateMipmap, use glGenerateMipmap.
(I'd recommend this one) OpenGL 1.4 through 2.1 supports glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE) (see this), which generates mipmaps automatically from the base level. This was deprecated in OpenGL 3, but you should still be able to use it.
Use GLee or GLEW and call gleeIsSupported / glewIsSupported to check for the extension.
Also, I think that instead of relying on extensions it is easier to stick to the OpenGL specification. A lot of hardware supports OpenGL 3, so you should be able to get most of the required functionality (shaders, mipmaps, framebuffer objects, geometry shaders) as part of the core specification, not as extensions.
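A rough sketch of that decision (version-string parsing simplified, error handling omitted):
int major = 0, minor = 0;
sscanf((const char *)glGetString(GL_VERSION), "%d.%d", &major, &minor);
if (major >= 3)
{
    // Core since GL 3.0: upload the base level, then generate the chain.
    glGenerateMipmap(GL_TEXTURE_2D);
}
else if (major == 2 || (major == 1 && minor >= 4))
{
    // GL 1.4..2.1: ask the driver to generate mipmaps at upload time.
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    // ...then call glTexImage2D() for the base level as usual.
}
else
{
    // Last resort for very old implementations:
    // gluBuild2DMipmaps(GL_TEXTURE_2D, internalFormat, w, h, format, type, pixels);
}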
If drivers lie, there's not much you can do about it. Also remember that glGenerateMipmapEXT is part of the GL_EXT_framebuffer_object extension.
What you are doing wrong is checking for the SGIS_generate_mipmap extension while using GL_GENERATE_MIPMAP, an enum that belongs to core OpenGL - but that's not really the problem.
The issue you describe sounds like a very horrible OpenGL implementation bug; I would bypass it by using gluBuild2DMipmaps on those cards (keep a list of them and check it at startup).