difference between vulkan context and opengl context - opengl

Does Vulkan have the concept of a context, and if so, is it similar to an OpenGL context? And what is a Vulkan instance, actually?
I am trying to understand the concept of a context in both the Vulkan and OpenGL senses. I also want to know what a Vulkan instance actually is.

An OpenGL context represents:
1. An interface for issuing a single sequence of rendering commands for a particular piece of hardware.
2. The current set of state that will be used for any rendering operations on that hardware.
3. A set of objects which are usable by rendering operations, as well as the ability to interact with them. Some of these objects may be shared with other contexts.
Vulkan doesn't have a single thing that represents all of these. There is no "Vulkan context". And indeed, a Vulkan instance doesn't represent any of the above.
See, while an OpenGL context defines an interface for a particular piece of hardware, it doesn't define how to create that context. External APIs exist for creating a context. And these external APIs are entirely platform-dependent.
Vulkan wants to both define the hardware interface and provide a framework for creating one. That framework will be platform-dependent to some degree, but only to the degree that is absolutely necessary.
Specifically, the only platform-dependent part of dealing with Vulkan is interacting with an OS-provided display system (ie: a window). Everything else is platform-neutral.
A Vulkan instance is the interface for interacting with the available hardware. You use a Vulkan instance to ask what Vulkan-capable hardware is available, what its capabilities and limitations are, and how to create a Vulkan device from that hardware (and, optionally, to attach it to an OS-provided display system).
Individual Vulkan implementations on your machine register themselves with the Vulkan instance system. This allows the instance implementation to talk to them and ask them those questions.
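As a hedged illustration (names simplified, error handling omitted), creating an instance and asking it about the available hardware looks roughly like this:
#include <vulkan/vulkan.h>
#include <vector>

VkInstance createInstanceAndListGpus()
{
    VkApplicationInfo appInfo{};
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo createInfo{};
    createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;

    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&createInfo, nullptr, &instance);

    // Ask the instance which Vulkan-capable GPUs are available.
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    // Each physical device can be queried for its capabilities and limits.
    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props{};
        vkGetPhysicalDeviceProperties(gpu, &props);
        // props.deviceName, props.limits, ... describe the hardware.
    }
    return instance;
}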
The closest thing Vulkan has to an OpenGL context is a Vulkan logical device (VkDevice). These are built from one or more "physical devices" (representing actual GPUs), and they provide the interface for interacting with the rendering system. But even that doesn't really do most of the OpenGL context stuff. It only really does #3.
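A minimal sketch of building a logical device from one of those physical devices, assuming a suitable queue family index has already been found via vkGetPhysicalDeviceQueueFamilyProperties:
VkDevice createLogicalDevice(VkPhysicalDevice gpu, uint32_t queueFamilyIndex)
{
    // Request a single queue from the chosen queue family.
    float priority = 1.0f;
    VkDeviceQueueCreateInfo queueInfo{};
    queueInfo.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
    queueInfo.queueFamilyIndex = queueFamilyIndex;
    queueInfo.queueCount = 1;
    queueInfo.pQueuePriorities = &priority;

    VkDeviceCreateInfo deviceInfo{};
    deviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    deviceInfo.queueCreateInfoCount = 1;
    deviceInfo.pQueueCreateInfos = &queueInfo;

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(gpu, &deviceInfo, nullptr, &device);

    // Queues are retrieved from the device; they are the paths through
    // which command buffers are later submitted for execution.
    VkQueue queue = VK_NULL_HANDLE;
    vkGetDeviceQueue(device, queueFamilyIndex, 0, &queue);
    (void)queue;
    return device;
}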
Command buffers represent the current state for rendering commands, as well as the rendering commands themselves. But you can have many separate command buffers, and they are not executed immediately.
Executing command buffers is done through queues, which come from devices. A queue is a separate path of execution for command buffers.
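A rough sketch of that flow (command pool, command buffer, queue submission), with error checking omitted:
void recordAndSubmit(VkDevice device, VkQueue queue, uint32_t queueFamilyIndex)
{
    // Command buffers are allocated from a pool tied to a queue family.
    VkCommandPoolCreateInfo poolInfo{};
    poolInfo.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
    poolInfo.queueFamilyIndex = queueFamilyIndex;
    VkCommandPool pool = VK_NULL_HANDLE;
    vkCreateCommandPool(device, &poolInfo, nullptr, &pool);

    VkCommandBufferAllocateInfo allocInfo{};
    allocInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
    allocInfo.commandPool = pool;
    allocInfo.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
    allocInfo.commandBufferCount = 1;
    VkCommandBuffer cmd = VK_NULL_HANDLE;
    vkAllocateCommandBuffers(device, &allocInfo, &cmd);

    // Record commands; nothing executes yet.
    VkCommandBufferBeginInfo beginInfo{};
    beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    vkBeginCommandBuffer(cmd, &beginInfo);
    // ... vkCmdBindPipeline, vkCmdDraw, etc. go here ...
    vkEndCommandBuffer(cmd);

    // Only submission through a queue makes the GPU execute the work.
    VkSubmitInfo submit{};
    submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &cmd;
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
    vkQueueWaitIdle(queue);
}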
Vulkan is not like OpenGL. You shouldn't try to map Vulkan constructs onto OpenGL constructs (except when they literally have the same name, like "texture" or "shader" or whatever). Take Vulkan for what it is, not for how it looks like OpenGL.

No. Vulkan uses a much more mainstream paradigm: object-oriented programming.
A Vulkan instance is an object. It is created, destroyed, encapsulates its data, and has methods.
The glEnable function is a good exemplar of what "context" means in OpenGL. Once you set a state, it applies from that point forward, forever, unless it is changed to something else (i.e. subsequent function calls operate in a "context" established by previous function calls). This leads to endless frustration:
// ok lets draw something with State 1
glEnable( STATE1 );
glEnable( STATE2 );
glEnable( STATE3 );
glDraw();
//... lot of stuff potentially happens here
// ok lets draw something with State 2 which differs from State 1 in STATE3
glDisable( STATE3 );
// that is pretty brittle, right; I can't be sure of anything that happened beforehand
// so I had better just double-check everything:
assert( glIsEnabled(STATE1) );
assert( glIsEnabled(STATE2) );
assert( glIsDisabled(STATE4) );
//...
assert( glIsDisabled(STATE69) );
glDraw();
//... lot of stuff potentially happens here
// Ok now I want to draw things again with State 1 again:
glEnable( STATE3 );
// But I don't even remember what state any previous work did set,
// so I might as well try to set everything just to be safe:
glEnable( STATE1 );
glEnable( STATE2 );
glDisable( STATE4 );
//...
glDisable( STATE69 );
// But Extensions can always add new state, so I am pretty much screwed
// and guaranteed bugs sooner or later because someone set STATE420_EXT and didn't clear it
glDraw();
This is trivial to manage in Vulkan in a modern, less error-prone, object-oriented way:
VkGraphicsPipeline pipe1, pipe2;
VkPipelineCreateInfo pi{};
pi.state1 = enable;
pi.state2 = enable;
pi.state3 = enable;
vkCreateGraphicsPipeline( device, &pi, &pipe1 );
pi.state3 = disable;
vkCreateGraphicsPipeline( device, &pi, &pipe2 );
// ...
// draw with State 1:
vkCmdBindPipeline( cmdBuff, pipe1 );
vkCmdDraw( cmdBuff );
// draw with State 2:
vkCmdBindPipeline( cmdBuff, pipe2 );
vkCmdDraw( cmdBuff );
// draw with State 1 again:
vkCmdBindPipeline( cmdBuff, pipe1 );
vkCmdDraw( cmdBuff );
Note this is C, so the object is the first parameter, rather than appearing before a dot as it would in, e.g., class-based C++.

Related

How to render to OpenGL from Vulkan?

Is it possible to render to OpenGL from Vulkan?
It seems NVIDIA has something:
https://lunarg.com/faqs/mix-opengl-vulkan-rendering/
Can it be done for other GPUs?
Yes, it's possible if the Vulkan implementation and the OpenGL implementation both have the appropriate extensions available.
Here is a screenshot from an example app in the Vulkan Samples repository which uses OpenGL to render a simple shadertoy to a texture, and then uses that texture in a Vulkan rendered window.
Although your question seems to suggest you want to do the reverse (render to something using Vulkan and then display the results using OpenGL), the same concepts apply: populate a texture in one API, use synchronization to ensure the GPU work is complete, and then use the texture in the other API. You can also do the same thing with buffers, so for instance you could use Vulkan for compute operations and then use the results in an OpenGL render.
Requirements
Doing this requires that both the OpenGL and Vulkan implementations support the required extensions. However, according to this site, these extensions are widely supported across OS versions and GPU vendors, as long as you're working with a recent (> 1.0.51) version of Vulkan.
You need the External Objects extension for OpenGL and the External Memory/Fence/Semaphore extensions for Vulkan.
The Vulkan side of the extensions allows you to allocate memory and create semaphores or fences while marking the resulting objects as exportable. The corresponding GL extensions allow you to take those objects and manipulate them with new GL commands that let you wait on fences, signal and wait on semaphores, and use Vulkan-allocated memory to back an OpenGL texture. By using such a texture in an OpenGL framebuffer, you can render pretty much whatever you want to it, and then use the rendered results in Vulkan.
Export / Import example code
For example, on the Vulkan side, when you're allocating memory for an image you can do this...
vk::Image image;
... // create the image as normal
vk::MemoryRequirements memReqs = device.getImageMemoryRequirements(image);
vk::MemoryAllocateInfo memAllocInfo;
// Mark the allocation as exportable via an opaque Win32 handle
vk::ExportMemoryAllocateInfo exportAllocInfo{
    vk::ExternalMemoryHandleTypeFlagBits::eOpaqueWin32
};
memAllocInfo.pNext = &exportAllocInfo;
memAllocInfo.allocationSize = memReqs.size;
memAllocInfo.memoryTypeIndex = context.getMemoryType(
    memReqs.memoryTypeBits, vk::MemoryPropertyFlagBits::eDeviceLocal);
vk::DeviceMemory memory;
memory = device.allocateMemory(memAllocInfo);
device.bindImageMemory(image, memory, 0);
// Export the allocation as a handle that OpenGL can import
HANDLE sharedMemoryHandle = device.getMemoryWin32HandleKHR({
    memory, vk::ExternalMemoryHandleTypeFlagBits::eOpaqueWin32
});
This is using the C++ interface and the Win32 variation of the extensions. For POSIX platforms there are alternative methods for getting file descriptors instead of Win32 handles.
The sharedMemoryHandle is the value that you'll need to pass to OpenGL, along with the actual allocation size. On the GL side you can then do this...
// These values should be populated by the Vulkan code
HANDLE sharedMemoryHandle;
GLuint64 sharedMemorySize;
// Create a 'memory object' in OpenGL, and associate it with the memory
// allocated in Vulkan
GLuint mem;
glCreateMemoryObjectsEXT(1, &mem);
glImportMemoryWin32HandleEXT(mem, sharedMemorySize,
    GL_HANDLE_TYPE_OPAQUE_WIN32_EXT, sharedMemoryHandle);
// Having created the memory object we can now create a texture and use
// the memory object for backing it
GLuint color;
glCreateTextures(GL_TEXTURE_2D, 1, &color);
// The internalFormat here should correspond to the format of
// the Vulkan image. Similarly, the w & h values should correspond to
// the extent of the Vulkan image
glTextureStorageMem2DEXT(color, 1, GL_RGBA8, w, h, mem, 0);
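For reference, a hedged sketch of the POSIX file-descriptor variant mentioned above; it assumes the same image/memory setup as the Win32 example and that the relevant extension entry points (VK_KHR_external_memory_fd, GL_EXT_memory_object_fd) are available:
// Vulkan side: request an exportable allocation backed by an opaque fd
// instead of a Win32 handle.
vk::ExportMemoryAllocateInfo exportAllocInfoFd{
    vk::ExternalMemoryHandleTypeFlagBits::eOpaqueFd };
memAllocInfo.pNext = &exportAllocInfoFd;
// ... allocate and bind as above ...
int sharedFd = device.getMemoryFdKHR(
    { memory, vk::ExternalMemoryHandleTypeFlagBits::eOpaqueFd });

// OpenGL side: import the fd into a memory object. Note that a successful
// import transfers ownership of the fd to the GL implementation.
GLuint memFd;
glCreateMemoryObjectsEXT(1, &memFd);
glImportMemoryFdEXT(memFd, sharedMemorySize, GL_HANDLE_TYPE_OPAQUE_FD_EXT, sharedFd);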
Synchronization
The trickiest bit here is synchronization. The Vulkan specification requires images to be in certain states (layouts) before corresponding operations can be performed on them. So in order to do this properly (based on my understanding), you would need to...
In Vulkan, create a command buffer that transitions the image to ColorAttachmentOptimal layout
Submit the command buffer so that it signals a semaphore that has similarly been exported to OpenGL
In OpenGL, use the glWaitSemaphoreEXT function to cause the GL driver to wait for the transition to complete.
Note that this is a GPU side wait, so the function will not block at all. It's similar to glWaitSync (as opposed to glClientWaitSync) in this regard.
Execute your GL commands that render to the framebuffer
Signal a different exported Semaphore on the GL side with the glSignalSemaphoreEXT function
In Vulkan, execute another image layout transition from ColorAttachmentOptimal to ShaderReadOnlyOptimal
Submit the transition command buffer with the wait semaphore set to the one you just signaled from the GL side.
That would be the optimal path. Alternatively, the quick and dirty method would be to do the Vulkan transition and then execute queue and device waitIdle commands to ensure the work is done, execute the GL commands, followed by glFlush & glFinish to ensure the GPU is done with that work, and then resume your Vulkan commands. This is more of a brute-force approach and will likely produce poorer performance than doing the proper synchronization.
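For illustration, here is a hedged sketch of what the GL side of the semaphore handshake might look like, using GL_EXT_semaphore / GL_EXT_semaphore_win32. The handles readyHandle and completeHandle are assumptions: they would come from Vulkan semaphores created as exportable and exported with vkGetSemaphoreWin32HandleKHR, and color is the shared texture created earlier:
HANDLE readyHandle, completeHandle;   // exported from Vulkan semaphores (assumption)
GLuint glReady, glComplete;           // GL names for the two shared semaphores
glGenSemaphoresEXT(1, &glReady);
glGenSemaphoresEXT(1, &glComplete);
glImportSemaphoreWin32HandleEXT(glReady, GL_HANDLE_TYPE_OPAQUE_WIN32_EXT, readyHandle);
glImportSemaphoreWin32HandleEXT(glComplete, GL_HANDLE_TYPE_OPAQUE_WIN32_EXT, completeHandle);

// Per frame: wait for Vulkan's layout transition, render, then signal back.
GLenum srcLayout = GL_LAYOUT_COLOR_ATTACHMENT_EXT;
glWaitSemaphoreEXT(glReady, 0, nullptr, 1, &color, &srcLayout);    // GPU-side wait
// ... draw into the framebuffer backed by the shared texture 'color' ...
GLenum dstLayout = GL_LAYOUT_SHADER_READ_ONLY_EXT;
glSignalSemaphoreEXT(glComplete, 0, nullptr, 1, &color, &dstLayout);
glFlush();   // make sure the signal operation is actually submitted to the GPU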
NVIDIA has created an OpenGL extension, NV_draw_vulkan_image, which can render a VkImage in OpenGL. It even has some mechanisms for interacting with Vulkan semaphores and the like.
However, according to the documentation, you must bypass all Vulkan layers, since layers can modify non-dispatchable handles and the OpenGL extension doesn't know about those modifications. Their recommended means of doing so is to use glGetVkProcAddrNV for all of your Vulkan functions.
Which also means that you can't get access to any debugging that relies on Vulkan layers.
There is some more information in this more recent slide deck from SIGGRAPH 2016. Slides 63-65 describe how to blit a Vulkan image to an OpenGL backbuffer. My opinion is that it may have been pretty easy for NVIDIA to support this since the Vulkan driver is contained in libGL.so (on Linux). So it may not have been that hard to give the Vulkan image handle to the GL side of the driver and have it be useful.
As another answer pointed out, there are still no official registered multi-vendor interop extensions. This approach just works on NVIDIA.

How to isolate my own OpenGL calls inside a third-party process?

I am writing a small tool that draws an OpenGL overlay on top of a game which is closed source. The game uses SDL, so I am just hooking into SDL_GL_SwapWindow and doing my own stuff. However, this kind of hooking results in some side effects in the game itself. I found a solution that basically wraps my own calls with the deprecated glPushAttrib/glPopAttrib, but this solves only half of the problems. I am still getting random texture flickering in the game (I mean the game's textures; mine show fine). What could be the reason for this flickering? Can my own textures interfere with the game's textures? Do I need to isolate my own calls, and how can I do it?
What could be the reason of this flickering?
If the game uses shaders, then glPushAttrib / glPopAttrib will not take care of all the state you may be clobbering. The attribute stack has been deprecated, and the program may use states that are either not covered by it, or where certain attribute bits in the compatibility profile have been reused or expanded to cover further state. I recommend not using the attribute stack at all, because it's hard to get right.
Can my own textures interfere with game textures?
Yes. Say you left a 2D texture active in a texture unit that's later being used for a 1D texture. If the host program does not use shaders, then the GL_TEXTURE_2D will take precedence over the GL_TEXTURE_1D. It's an (IMHO poor) design choice of OpenGL that you can have multiple texture targets bound to the same texture unit at the same time, and which one is used to deliver texels depends on the individual targets' precedence.
Do I need to isolate my own calls
Yes.
and how can I do it?
Two possible solutions:
Create a separate OpenGL context for just your own stuff. Use {wgl,glX}GetCurrentContext and {wglGetCurrentDC, glXGetCurrentDrawable} to retrieve the OpenGL context and drawable active at the moment you're "jumping" in. If you don't have a context already, you can use the drawable just retrieved to create a matching OpenGL context, optionally with namespace sharing. Switch to your context, draw your stuff, and switch back to the host program's context (see the sketch below). – Major drawback: switching OpenGL contexts is quite expensive.
Before switching state around, use glGet… to retrieve the state active before doing so and restore the old state before returning to the host program.
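A minimal sketch of the first approach on Windows (WGL); the function name drawOverlay and the lazily created myContext are illustrative assumptions, and error handling is omitted:
#include <windows.h>

static HGLRC myContext = nullptr;   // our private rendering context

void drawOverlay()
{
    // Remember what the host program had current.
    HDC   hostDC = wglGetCurrentDC();
    HGLRC hostRC = wglGetCurrentContext();

    // Lazily create our own (unshared) context against the host's drawable.
    if (!myContext)
        myContext = wglCreateContext(hostDC);

    // Switch to our context, draw, then hand control back to the host.
    wglMakeCurrent(hostDC, myContext);
    // ... overlay rendering with our own state and objects ...
    wglMakeCurrent(hostDC, hostRC);
}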

FrameBuffers with GLSurfaceView pattern in OpenGLES 1.1 on android ndk

In the Android NDK, is it possible to make OpenGL ES 1.1 work with the typical Java-side GLSurfaceView pattern (overriding methods from GLSurfaceView.Renderer such as onDrawFrame, onSurfaceCreated, etc.) while using the frame, color and depth buffers and VBOs on the C++ side?
I am trying to create them using this:
void ES1Renderer::on_surface_created() {
    // Create default framebuffer object. The backing will be allocated for the current layer in -resizeFromLayer
    glGenFramebuffersOES(1, &defaultFramebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);
    // Create color renderbuffer object.
    glGenRenderbuffersOES(1, &colorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, colorRenderbuffer);
    // Create depth renderbuffer object.
    glGenRenderbuffersOES(1, &depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
}
However, it seems that this does not pick up the context appropriately; I think the context is created on the Java side when the GLSurfaceView and renderer are initialized.
I am no expert on either the NDK or OpenGL ES, but I have to port an iOS app that uses OpenGL ES 1.1, and I aim to reuse as much code as I can. Since the app also leverages platform-specific UI components (buttons, lists, etc.) while drawing GL graphics, I thought this would be the best way to go. However, I am now considering using a native activity, though I am not sure what the relationship with the other Java components would be.
Absolutely, yes. The standard approach is that you create a GLSurfaceView like you would when using OpenGL from Java, create and hook up your GLSurfaceView.Renderer implementation, and let the rendering thread start up.
From your Renderer methods, like onSurfaceCreated() and onDrawFrame(), you can now call the JNI functions that invoke functions in your native code. In those native functions, you can make any OpenGL API calls your heart desires. For example, in the function you call from onSurfaceCreated() you might create some objects and set up some initial state. In the function you call from onSurfaceChanged(), you might set up your viewport and projection. In the function you call from onDrawFrame(), you do your rendering.
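For illustration, the native side might look roughly like this; the package/class name (com.example.MyRenderer) and method names are assumptions, and the Java Renderer simply declares and calls the corresponding native methods from its callbacks:
#include <jni.h>
#include <GLES/gl.h>

extern "C" {

JNIEXPORT void JNICALL
Java_com_example_MyRenderer_nativeOnSurfaceCreated(JNIEnv*, jobject) {
    // Create textures, VBOs, set initial state; the GL context is current here.
}

JNIEXPORT void JNICALL
Java_com_example_MyRenderer_nativeOnSurfaceChanged(JNIEnv*, jobject, jint w, jint h) {
    glViewport(0, 0, w, h);   // set up viewport/projection for the new size
}

JNIEXPORT void JNICALL
Java_com_example_MyRenderer_nativeOnDrawFrame(JNIEnv*, jobject) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... per-frame rendering ...
}

} // extern "C"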
You can even make OpenGL calls from both Java and native code. The Java OpenGL API is just a very thin layer around the native functions. It doesn't make a difference if the functions are called from native code or through the Java API.
The only thing you need to watch out for is that you invoke all your native code that makes OpenGL API calls from the GLSurfaceView.Renderer implementations of onSurfaceCreated(), onSurfaceChanged() and onDrawFrame(). When these methods are called, you are in the rendering thread, and have a current OpenGL context. If native OpenGL code is invoked from anywhere else, chances are that you are in the wrong thread and/or you do not have a current OpenGL context.
There are of course more complex setups where you create your own OpenGL contexts, make them current explicitly, etc. But I would strongly recommend to stick with the simple approach above unless you have a very good reason why you need something more. For most standard OpenGL rendering, what I described should be perfectly sufficient.

Beating the state machine

I'm working on a plugin for a scripting language that allows the user to access the OpenGL 1.1 command set. On top of that, all functions of the scripting language's own gfx command set are transparently redirected to appropriate OpenGL calls. Normally, the user should use either the OpenGL command set or the scripting language's inbuilt gfx command set which basically contains just your typical 2D drawing commands like DrawLine(), DrawRectangle(), DrawPolygon(), etc.
Under certain conditions, however, the user might want to mix calls to the OpenGL and the inbuilt gfx command sets. This leads to the problem that my OpenGL implementations of inbuilt commands like DrawLine(), DrawRectangle(), DrawPolygon(), etc. have to be able to deal with whatever state the OpenGL state machine might currently be in.
Therefore, my idea was to first save all state information on the stack, then prepare a clean OpenGL context needed for my implementations of commands like DrawLine(), etc. and then restore the original state. E.g. something like this:
glPushAttrib(GL_ALL_ATTRIB_BITS);
glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS);
glPushMatrix();
....prepare OpenGL context for my needs.... --> problem: see below #2
....do drawing....
glPopMatrix();
glPopClientAttrib();
glPopAttrib();
Doing it like this, however, leads to several problems:
glPushAttrib() doesn't push all attributes; e.g. pixel pack and unpack state, render mode state, and select and feedback state are not pushed. Also, extension states are not saved. Extension states are not important, as my plugin is not designed to support extensions. Saving and restoring the other information (pixel pack and unpack) could probably be implemented manually using glGet().
Big problem: How should I prepare the OpenGL context after having saved all state information? I could save a copy of a "clean" state on the stack right after OpenGL's initialization and then try to pop this stack but for this I'd need a function to move data inside the stack, i.e. I'd need a function to copy or move a saved state from the back to the top of stack so that I can pop it. But I didn't see a function that can accomplish this...
It's probably going to be very slow but this is something I could live with because the user should not mix OpenGL and inbuilt gfx calls. If he does nevertheless, he will have to live with a very poor performance.
After these introductory considerations I'd finally like to present my question: Is it possible to "beat" the OpenGL state machine somehow? By "beating" I mean the following: Is it possible to completely save all current state information, then restore the default state and prepare it for my needs and do the drawing, and then finally restore the complete previous state again so that everything is exactly as it was before. For example, an OpenGL based version of the scripting language's DrawLine() command would do something like this then:
1. Save all current state information
2. Restore default state, set up a 2D projection matrix
3. Draw the line
4. Restore all saved state information so that the state is exactly the same as before
Is that possible somehow? It doesn't matter if it's very slow as long as it is 100% guaranteed to put the state into exactly the same state as it was before.
You can simply use different contexts, especially if you do not care about performance. Just keep one context for your internal gfx operations and another one the user might mess with, and bind the appropriate one to your window (and thread).
The way you describe it, it looks like you never want to share objects with the user's GL stuff, so simple "unshared" contexts will do fine. All you seem to want to share is the framebuffer - and the GL framebuffer (including back and front color buffers, depth buffer, stencil, etc.) is part of the drawable/window, not the context - so you get access to it with any context when you make that context current. Changing contexts mid-frame is not a problem.

Restoring OpenGL State

I want to write a general purpose utility function that will use an OpenGL Framebuffer Object to create a texture that can be used by some OpenGL program for whatever purpose a third party programmer would like.
Let's say, for argument's sake, the function looks like:
void createSpecialTexture(GLuint textureID)
{
    MyOpenGLState state;
    saveOpenGLState(state);
    setStateToDefault();
    doMyDrawing();
    restoreOpenGLState(state);
}
What should MyOpenGLState, saveOpenGLState(state), setStateToDefault() and restoreOpenGLState(state) look like to ensure that doMyDrawing will behave correctly and that nothing that I do in doMyDrawing will affect anything else that a third party developer might be doing?
The problem that has been biting me is that OpenGL has a lot of implicit state and I am not sure I am capturing it all.
Update: My main concern is OpenGL ES 2.0, but I thought I would ask the question more generally.
Don't use a framebuffer object to render your texture. Rather, create a new EGLContext and EGLSurface. You can then use eglBindTexImage to turn your EGLSurface into a texture. This way you are guaranteed that state from doMyDrawing will not pollute the main GL context and vice versa.
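A hedged sketch of that approach (an EGL pbuffer surface bound as a texture); display, config and textureId are assumed to exist already, the chosen EGLConfig must support EGL_BIND_TO_TEXTURE_RGBA, and error handling is omitted:
#include <EGL/egl.h>
#include <GLES2/gl2.h>

EGLint attribs[] = {
    EGL_WIDTH,          256,
    EGL_HEIGHT,         256,
    EGL_TEXTURE_FORMAT, EGL_TEXTURE_RGBA,
    EGL_TEXTURE_TARGET, EGL_TEXTURE_2D,
    EGL_NONE
};
EGLSurface pbuffer = eglCreatePbufferSurface(display, config, attribs);

// Render into the pbuffer with its own context, then bind its color buffer
// as the contents of the currently bound texture in the main context.
glBindTexture(GL_TEXTURE_2D, textureId);
eglBindTexImage(display, pbuffer, EGL_BACK_BUFFER);
// ... use the texture ...
eglReleaseTexImage(display, pbuffer, EGL_BACK_BUFFER);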
As for saving and restoring, glPushAttrib and glPopAttrib will get you very far.
You cannot, however, restore GL to a default state. But since doMyDrawing() uses and/or modifies only state that should be known to you, you can just set that state to the values you need.
That depends a lot on what you do in your doMyDrawing(), but basically you have to restore every state that you change in that function. Without a look at what is going on inside doMyDrawing(), it is impossible to guess what you have to restore.
If modifications to the projection or modelview matrix are done inside doMyDrawing(), remember to initially push both the GL_PROJECTION and GL_MODELVIEW matrices through glPushMatrix and restore them after the drawing through glPopMatrix. Any other state that you modify can be pushed and popped through glPushAttrib and the right attribute bits. Remember also to unbind any texture, FBO, PBO, etc. that you might bind inside doMyDrawing().
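A small sketch of the matrix save/restore pattern described above (fixed-function pipeline only; doMyDrawing() is the hypothetical drawing function from the question):
// Save both matrices before drawing.
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glMatrixMode(GL_MODELVIEW);
glPushMatrix();

doMyDrawing();   // may freely change both matrices

// Restore them afterwards, in reverse order.
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();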