Should each texture have its own dedicated renderer in SDL?

I'm attempting to learn SDL2 and am having difficulties from a practical perspective. I feel like I have a good understanding of SDL windows, renderers, and textures from an abstract perspective. However, I feel like I need to know more about what's going on under the hood to use them appropriately.
For example, when creating a texture I am required to provide a reference to a renderer, which I find odd. A texture seems like a resource that is loaded into VRAM, so why should a resource need a reference to a renderer? I understand why a renderer would need a reference to a texture, but the reverse doesn't make sense to me.
So that leads to another question. Since each texture requires a renderer, should each texture have its own dedicated renderer, or should multiple textures share a renderer?
I feel like there are consequences to going down one route versus the other.

Short Answers
I believe the reason an SDL_Texture requires a renderer is that some backend implementations (OpenGL?) have contexts (which is essentially what an SDL_Renderer is), and the image data must be associated with that particular context. You cannot use a texture created in one context inside another.
As for your other question: no, you don't need or want a renderer for each texture. That would probably only produce correct results with the software backend, for the same reason (contexts).
As @keltar correctly points out, none of the renderers will work with a texture that was created by a different renderer, due to a check in SDL_RenderCopy. However, this is strictly an API requirement to keep things consistent; my point above is that even if that check were absent, it would not work for backends such as OpenGL, though there is no technical reason it would not work for the software renderer.
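To make that concrete, here is a minimal sketch of the usual pattern: one window, one renderer, and every texture created against that same renderer (error handling trimmed; assumes SDL2 plus SDL_image for loading, and the file names are placeholders):

#include <SDL.h>
#include <SDL_image.h>

int main(int argc, char *argv[]) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    // Both textures are created against the SAME renderer, so that
    // renderer can draw both of them.
    SDL_Texture *texA = IMG_LoadTexture(ren, "a.png");
    SDL_Texture *texB = IMG_LoadTexture(ren, "b.png");

    SDL_RenderClear(ren);
    SDL_RenderCopy(ren, texA, NULL, NULL);  // would fail if texA came from another renderer
    SDL_RenderCopy(ren, texB, NULL, NULL);
    SDL_RenderPresent(ren);
    SDL_Delay(2000);

    SDL_DestroyTexture(texA);
    SDL_DestroyTexture(texB);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}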
Some Details about SDL_Renderer
Remember that SDL_Renderer is an abstract interface over multiple possible backends (OpenGL, OpenGL ES, D3D, Metal, software, more?). Each of these may have restrictions on sharing data between contexts, and therefore SDL has to limit itself in the same way to stay sane.
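For instance, you can ask SDL at runtime which backend a given renderer uses (a small sketch, given an SDL_Renderer *ren; it prints e.g. "opengl", "direct3d", or "software"):

SDL_RendererInfo info;
if (SDL_GetRendererInfo(ren, &info) == 0) {
    SDL_Log("renderer backend: %s", info.name);
}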
Example of OpenGL restrictions
Here is a good resource for general restrictions and platform dependent functionality on OpenGL contexts.
As you can see from that page, sharing between contexts has restrictions.
Sharing can only occur in the same OpenGL implementation
This means that you certainly can't share between an SDL_Renderer using OpenGL and a different SDL_Renderer using another backend.
You can share data between different OpenGL Contexts
...
This is done using OS Specific extensions
Since SDL is cross-platform, this means they would have to write special code for each platform to support it, and not all OpenGL implementations support it anyway, so it's better for SDL simply not to support it either.
each extra render context has a major impact on the application's performance
While not a restriction, this is a reason why adding support for sharing textures is not worthwhile for SDL.
Final note: the "S" in SDL stands for "Simple". If you need to share data between contexts, SDL is simply the wrong tool for the job.

Related

C++ OpenGL wrapper: interface similar to fixed pipeline, can export .collada

I have opengl code that uses the fixed pipeline.
Killing two birds with one stone, I need a wrapper that can help me with the following two tasks:
Convert the code to the new shader-based pipeline with minimal effort.
I have a class that calls OpenGL functions such as glBegin (with triangles/lines), glVertex, glPushMatrix, glTranslate, glColor, and gluSphere.
Ideally, I'd like it to derive from a class that supplies these functions in the base class. Behind the scenes, it would use the same high-level logic as the fixed pipeline.
I'd like to export an OpenGL scene to .collada to load in an external renderer.
OpenGL is low-level rendering and has no concept of a scene. As this reddit post puts it:
"You realize that you have to write a shim to capture all API calls
you are interested in to do that. Then, when finally, a draw call is
emitted you have to parse every single vertex and collect the data
from all over the memory from the buffers that you have recorded from
the APi calls that set up VAOs, VBOs and IBOs. Then you have to parse
the shader source code so that you can see which uniforms and vertex
attributes contribute to vertex clip coordinate generation. Then you
also have to synthesize/guess which outputs are normal, color, texture
coordinate and so on from the shader source if the resulting program
even have those in .obj file format-wise.
This gets even more complicated if Compute is used to generate data
inside the GPU for any of the buffers. If geometry or tessellator is
used then you also have to implement one of those so that you get
accurate outputs from the vertex processing. TL;DR - you have to write
your own OpenGL 4.5 driver that does exactly the same things a real
hardware driver would do. Good luck with that."
However, my scene is simple, using only the fixed-pipeline operations listed above.
I'd like the wrapper to keep track of those calls and construct a scene that can be exported.
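Roughly, I imagine the wrapper looking something like this (just a hypothetical sketch; FixedPipelineRecorder and all of its members are names I made up):

#include <vector>

struct Vec3 { float x, y, z; };

// Mimics the fixed-pipeline calls my code already makes, but records
// everything into a structure that a .collada exporter could walk later.
class FixedPipelineRecorder {
public:
    void begin(int primitiveType) { current_.type = primitiveType; }
    void vertex(float x, float y, float z) {
        current_.vertices.push_back({x, y, z});  // record instead of glVertex3f
    }
    void end() { meshes_.push_back(current_); current_ = {}; }
    void exportCollada(const char *path) { /* walk meshes_, emit <geometry> elements */ }

private:
    struct Mesh { int type = 0; std::vector<Vec3> vertices; };
    Mesh current_;
    std::vector<Mesh> meshes_;
};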
--
EDIT: Since library recommendations are off-topic, I'll ask the following questions instead.
What I need seems like something obvious enough that many people should have found it useful. Since I can't find a library that accomplishes it, I'm wondering: is my approach unreasonable?
More specifically, how do people port their legacy OpenGL code? Do they rewrite the relevant parts from scratch, or does everyone implement their own wrapper, as I suggested?
And what about constructing a scene to export to .collada?
Also posted at:
https://community.khronos.org/t/c-opengl-wrapper-interface-similar-to-fixed-pipeline-can-export-collada/105829
Although there are some parts of legacy OpenGL that are not optimized in current drivers (like glDrawPixels, the raster drawing operations, and indexed color mode), between modern hardware and the modest requirements of legacy applications, legacy OpenGL code runs well enough on modern systems.
The main reason to "modernize" legacy OpenGL code is wanting to make use of modern features. Any sort of "wrapper" will just run into the same kind of design problems that the OpenGL API itself ran into between OpenGL-1.5 and OpenGL-2.1: lots of built-in variables, default state, implicit actions, and so on. This is difficult to document properly, and even more difficult to use reliably, which is why you usually don't find these kinds of wrappers.
If you find yourself in the situation that you absolutely must port your legacy code to modern OpenGL, e.g. to be interoperable with core contexts, then your best course of action is a proper rewrite: replace immediate-mode calls with filling vertex buffers, and replace calls to glTexEnv…, glMaterial…, glLight… with loading appropriate shaders and setting their uniforms.
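The immediate-mode part of that rewrite is fairly mechanical. A minimal sketch for one triangle (context creation and GL function loading omitted; shaderProgram is assumed to be compiled elsewhere, with the position attribute at location 0):

// Legacy immediate mode:
//   glBegin(GL_TRIANGLES);
//   glVertex3f(...); glVertex3f(...); glVertex3f(...);
//   glEnd();
//
// Modern equivalent: upload the vertices once, then draw from the buffer.
GLfloat verts[] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f,
};

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

// attribute 0 = position (matches "layout(location = 0)" in the vertex shader)
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);

// per frame:
glUseProgram(shaderProgram);
glDrawArrays(GL_TRIANGLES, 0, 3);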
Or, if you want a quick and dirty method: just create two contexts, a modern one and a legacy one, and switch between them; often you can establish "list" sharing between them.

Is it possible to render one half of a scene by OpenGL and other half by DirectX

My straight answer would be NO. But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They used video editing software. They recorded two nearly deterministic run-throughs of their engine and spliced them together.
As for the question posed by your title: not within the same window. It may be possible within the same application using two windows, but you'd be better off with two separate applications.
Yes, it is possible. I did this as an experiment for a graduate course; I implemented half of a deferred shading graphics engine in OpenGL and the other half in D3D10. You can share surfaces between OpenGL and D3D contexts using the appropriate vendor extensions.
Does it have any practical applications? Not many that I can think of. I just wanted to prove that it could be done :)
I digress, however. That video is just a side-by-side of two separately recorded videos of the Haven benchmark running in the two different APIs.
My straight answer would be NO.
My straight answer would be "probably yes, but you definitely don't want to do that."
But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They prerendered the video and simply combined the two recordings in a video editor. Because the camera follows a fixed path, that is easy to do.
Anyway, you could render both (DirectX/OpenGL) scenes into offscreen buffers and then combine them, using either API to render the final result. You would read the data back from the render buffer in one API and transfer it into a renderable buffer in the other API. The dumbest way to do this is through system memory (which will be VERY slow), but it is possible that some vendors (NVIDIA, in particular) provide extensions for this scenario.
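The system-memory roundtrip from GL to a D3D9 texture might look roughly like this (a sketch under the assumption of a 640x480 BGRA target and a lockable texture; d3dTex is a placeholder for an IDirect3DTexture9*):

// Read the OpenGL result back into system memory...
std::vector<unsigned char> pixels(640 * 480 * 4);
glReadPixels(0, 0, 640, 480, GL_BGRA, GL_UNSIGNED_BYTE, pixels.data());

// ...then copy it row by row into the lockable D3D9 texture.
D3DLOCKED_RECT rect;
if (SUCCEEDED(d3dTex->LockRect(0, &rect, NULL, 0))) {
    for (int y = 0; y < 480; ++y) {
        memcpy((unsigned char *)rect.pBits + y * rect.Pitch,
               &pixels[(479 - y) * 640 * 4],  // GL rows are bottom-up
               640 * 4);
    }
    d3dTex->UnlockRect(0);
}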
On the Windows platform you could also place two child windows/panels side by side on the main window (so you'd get the same effect as in that YouTube video), and create an OpenGL context for one of them and a DirectX device for the other. Unless there's some restriction I'm not aware of, that should work, because in order to render 3D graphics you need a window with a handle (HWND). However, the two windows will be completely independent of each other and will not share resources, so you'll need twice the memory for textures alone to run them both.

Draw DirectX/OpenGL Graphics on an existing graphics application

First off, let me apologize right off the bat in case this has already been answered; I might just be searching for it under unusual terms.
I am looking to draw 2D graphics in an application that uses DirectX to draw its own graphics (a game). I will be doing that by injecting a DLL into the application (that part I have no questions about; I can do that) and drawing my graphics. But not being really good at DirectX/OpenGL, I have a couple of fundamental questions to ask.
1) In order to draw graphics on that window, will I need to get a pre-existing context from the process memory, some sort of handle to the drawing scene?
2) If the application uses DirectX, can I use OpenGL graphics on it?
Please let me know how I can approach this. Any details will be appreciated :-)
Thank you in advance.
Your approach of injecting a DLL is indeed the right way to go. Programs like FRAPS use the same approach. I can't tell you about the method for Direct3D, but for OpenGL you'd do roughly the following:
First you must hook the functions wglMakeCurrent, glFinish and wglSwapBuffers of opengl32.dll so that your DLL notices when an OpenGL context is selected for drawing; pass their calls through to the OS. When wglMakeCurrent is called, use the function GetPixelFormat to find out whether the window is double-buffered or not. Also use the glGet… OpenGL calls to find out which version of OpenGL context you're dealing with: with a legacy OpenGL context you must use different methods for drawing your overlay than with a modern OpenGL-3-or-later core context.
In the case of a double-buffered window, use your hook on wglSwapBuffers to perform your additional OpenGL drawing operations; OpenGL is just pens and brushes (in the form of points, lines and triangles) drawing on a canvas. Then pass the call through to the real wglSwapBuffers to make everything visible.
In the case of a single-buffered context, the function to hook is glFinish instead of wglSwapBuffers.
Drawing 2D with OpenGL is as simple as disabling depth testing and using an orthographic projection matrix. You can change OpenGL state whenever you wish; just make sure you restore everything to its original condition before you leave the hook.
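For a legacy context, the hook body might look roughly like this (a sketch; RealSwapBuffers is a placeholder for the trampoline to the original function, and the saved state is abbreviated):

#include <windows.h>
#include <GL/gl.h>

// Placeholder: pointer to the original wglSwapBuffers, filled in by your hook.
static BOOL (WINAPI *RealSwapBuffers)(HDC) = NULL;

BOOL WINAPI HookedSwapBuffers(HDC hdc) {
    // Save the state we are about to change.
    glPushAttrib(GL_ENABLE_BIT | GL_TRANSFORM_BIT | GL_CURRENT_BIT);
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, 1, 0, 1, -1, 1);        // simple 0..1 2D coordinate system
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);

    // Draw the overlay: a small quad in a corner of the window.
    glColor3f(1.0f, 0.0f, 0.0f);
    glBegin(GL_QUADS);
    glVertex2f(0.02f, 0.90f); glVertex2f(0.25f, 0.90f);
    glVertex2f(0.25f, 0.98f); glVertex2f(0.02f, 0.98f);
    glEnd();

    // Restore everything before handing control back.
    glPopMatrix();                      // modelview
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glPopAttrib();

    return RealSwapBuffers(hdc);        // pass through to the real call
}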
"1) In order to draw graphics on that window, will I need to get a pre-existing context from the process memory, some sort of handle to the drawing scene?"
Yes, you need to make sure your hooks catch the important context creation functions.
For example, all variations of CreateDevice in D3D are interesting to you.
You didn't mention which DirectX you are using, but there are some differences between the versions.
For example, in DirectX 9 you'd mostly be interested in functions that:
1. Create/return IDirect3DSwapChain9 objects
2. Create/return IDirect3DDevice9,IDirect3DDevice9Ex objects
In newer versions of DirectX, that functionality was split into (mostly) Device, DeviceContext, and DXGI.
If you are on a "specific mission", share which DirectX version you are targeting.
Apart from catching all the objects needed to allow your own rendering, you also want to catch all presentation events ("SwapBuffers" in GL, "Present" in DX), because that's the time at which you want to add your overlay.
Since it seems that you are attempting to render an overlay on top of DX applications, allow me to warn you that making a truly generic solution (one that works on all games) isn't easy, mostly due to the need to support different DX versions along with the numerous ways to create devices and swap chains. If you are focused on a specific game/application it is, naturally, much easier.
"2. If the application uses DirectX, can I use OpenGL graphics on it?"
Well, first of all: yes, it's possible.
The terminology you want to search for is OpenGL/DirectX interoperability (or "interop" for short).
Here's an example:
https://sites.google.com/site/snippetsanddriblits/OpenglDxInterop
I don't know whether the extension they used is available only on nVidia devices - check it.
Another thing: you need a really good motivation to do this; generally I would simply stick with DX for both hooking and rendering. I assume that internal interop between different DX versions is the better option. I'd personally probably go with DirectX 9 for your own rendering code. Of course, if you only need to support a single DirectX version, no interop is needed.
Bonus:
If you ever need to generate full wrappers of C++ classes, a quick-and-dirty DLL wrapper, or just a general global function hook, feel free to use this lib that I created:
http://code.google.com/p/hookit/
It's far from a fully tested tool, just something I hacked together over two days, but I found it super useful.
Note that in your case I recommend just using VTable hooking; you'll probably have to hardcode the function's offset into the table, but that's not likely to change.
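For illustration, a VTable hook on IDirect3DDevice9::Present might look roughly like this (a sketch; the slot index 17 for Present is an assumption you should verify against your d3d9 headers, and RealPresent is a placeholder):

#include <windows.h>
#include <d3d9.h>

// Saved pointer to the original Present.
static HRESULT (STDMETHODCALLTYPE *RealPresent)(IDirect3DDevice9 *,
    const RECT *, const RECT *, HWND, const RGNDATA *) = NULL;

static HRESULT STDMETHODCALLTYPE HookedPresent(IDirect3DDevice9 *dev,
    const RECT *src, const RECT *dst, HWND wnd, const RGNDATA *dirty) {
    // ... draw your overlay with `dev` here ...
    return RealPresent(dev, src, dst, wnd, dirty);  // pass through
}

void InstallPresentHook(IDirect3DDevice9 *dev) {
    // The vtable is the first pointer-sized field of a COM object.
    void **vtable = *reinterpret_cast<void ***>(dev);
    const int kPresentIndex = 17;  // assumed slot of Present in the d3d9 vtable

    DWORD oldProtect;
    VirtualProtect(&vtable[kPresentIndex], sizeof(void *),
                   PAGE_EXECUTE_READWRITE, &oldProtect);
    RealPresent = reinterpret_cast<decltype(RealPresent)>(vtable[kPresentIndex]);
    vtable[kPresentIndex] = reinterpret_cast<void *>(&HookedPresent);
    VirtualProtect(&vtable[kPresentIndex], sizeof(void *),
                   oldProtect, &oldProtect);
}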
Good luck :)

Multiple QGLWidgets with a single openGL Context in C++

I'm writing an application that is composed of multiple (16-32) plots that are updated several times a second and are drawn using OpenGL. Until now I've done most of the prototyping of the plots with GLUT. However, I'd like to adopt a full-fledged framework like Qt, and I'm getting ready to write a test QGLWidget.
Before I get started I'd like to figure out whether it's possible for multiple QGLWidgets to share a single OpenGL context. If so, are there any specifics I need to keep track of when sharing an OpenGL context between widgets?
whether it's possible for multiple QGLWidgets to share a single OpenGL context?
This is not possible to answer in general, because it depends on the platform in question: on X11/GLX it is indeed possible to use an indirect context with multiple drawables, but the context can be active on only one drawable at a time.
However:
It is also possible (and it is the recommended way to do this) to have multiple contexts share their data. In the very first versions of OpenGL the only shareable data was display lists, hence this is still called "list sharing". In current versions of OpenGL the shared data also includes textures, pixel buffer objects and vertex buffer objects. Framebuffer objects cannot be shared, but since textures can be used as FBO attachments, that's no big deal.
QGLWidget provides a straightforward API for sharing context data between QGLWidgets' contexts.
Yes, it is possible to share an OpenGL context by using the QGLWidget constructor that takes a shareWidget argument.
If so, are there any specifics I need to keep track of when sharing an OpenGL context between widgets?
I am not sure, but I don't think there is anything special you need to take care of.
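For reference, here's a minimal sketch of using that share constructor (my own illustration, not from the Qt docs; the second argument names the widget whose context data is shared):

#include <QApplication>
#include <QGLWidget>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);

    QGLWidget first;                // owns the "primary" context
    QGLWidget second(0, &first);    // shares textures/buffers with first

    // Qt can tell you whether sharing actually succeeded.
    if (!second.isSharing()) {
        qWarning("context sharing not available on this platform");
    }

    first.show();
    second.show();
    return app.exec();
}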

Sharing a texture between direct3d and opengl?

I know mixing OpenGL and DirectX is not recommended, but I'm trying to build a bridge between two different applications that use separate graphics APIs, and I'm hoping there is a technique for sharing data, specifically textures.
I have a texture that is created in Direct3D like this:
d3_device->CreateTexture(width, height, 1,
                         D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8,
                         D3DPOOL_DEFAULT, &texture, NULL);
Is there any way I can use this texture from OpenGL without taking a roundtrip through system memory?
YES. As previously posted (see below), there should exist at least one solution.
I found two possible solutions:
On NVIDIA cards, a new extension was integrated in the 256 drivers; see http://developer.download.nvidia.com/opengl/specs/WGL_NV_DX_interop.txt
DXGI is the driving force compositing all windows in Vista and Windows 7; see msdn.microsoft.com/en-us/library/ee913554.aspx
I have no experience with either solution yet, but I hope to find some time to test one of them. The first one seems to me to be the easier of the two.
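For the first option, the WGL_NV_DX_interop flow looks roughly like this (a sketch; fetching the wglDX* entry points with wglGetProcAddress is omitted, and texture/d3_device are the objects from the question):

// Tie the D3D9 device and texture to the current OpenGL context...
HANDLE interopDev = wglDXOpenDeviceNV(d3_device);

GLuint glTex;
glGenTextures(1, &glTex);
HANDLE interopTex = wglDXRegisterObjectNV(interopDev, texture, glTex,
                                          GL_TEXTURE_2D,
                                          WGL_ACCESS_READ_ONLY_NV);

// ...then lock around every use from the GL side.
wglDXLockObjectsNV(interopDev, 1, &interopTex);
glBindTexture(GL_TEXTURE_2D, glTex);
// ... draw with the shared texture, no system-memory roundtrip ...
wglDXUnlockObjectsNV(interopDev, 1, &interopTex);

// Cleanup when done.
wglDXUnregisterObjectNV(interopDev, interopTex);
wglDXCloseDeviceNV(interopDev);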
[I think it should be possible. In recent Windows versions (Vista and 7) one can see a preview of any window's content in the taskbar (whether it's GDI, Direct3D, or OpenGL).
To my knowledge, OpenGL previews were not supported in earlier Windows versions. So at least in the newer versions there should be a way to couple or share render contexts, even between different processes...
This is also true for other modern platforms, which share render contexts system-wide to achieve various rendering effects.]
I think it is not possible, as the two have different models of a texture.
You cannot access the texture memory directly without either DirectX or OpenGL.
The other way around: if it were possible, you would have to retrieve the texture address, its pitch, width and other (hardware-dependent) memory layout information, create a dummy texture in the other system, and push the retrieved data into that freshly created texture object. This is not possible.
Obviously this will not work on any decent hardware, and even if it did, it would not be very portable.
I don't think it's possible without downloading the data into host memory and re-uploading it into device memory.
It's possible now.
Use the ANGLE OpenGL implementation instead of native OpenGL.
You can share a Direct3D texture via the EGL_ANGLE_d3d_texture_client_buffer extension.
https://github.com/microsoft/angle/wiki/Interop-with-other-DirectX-code#demo
No.
Think of it like sharing an image between Photoshop and another image viewer. You would need a memory management library that both applications shared.