To check extension availability, I need to use GL.isExtensionAvailable. To get the GL object, I need to create a GLCanvas and get the GL instance in init() or display().
Is there a way to check the extension availability even before I create the window, at the beginning of main()?
I guess you are out of luck. The availability of an extension may change depending on which video card drives the screen where you want to display your GL content, so you cannot reliably get that information before creating the GL context. You may be able to create an offscreen context just to query it, but the result may differ from that of a context bound to a window.
It's possible to call GLContext.getCurrent().getPlatformExtensionsString() very early, but it returns a non-null value only once the OpenGL context has been made current at least once, and only on the appropriate thread. Don't forget to call GLProfile.initSingleton() before calling GLContext.getCurrent().
However, pqnet's comment is correct. Numerous computers (especially modern laptops) have several graphics cards and hard-to-follow mechanisms (for example Optimus) that switch between them depending on power consumption or the performance profile ("high performance" or not).
Moreover, different drivers might be supported (the crappy GDI renderer versus the true OpenGL driver under Windows), and several profiles are often supported (forward-compatible and backward-compatible profiles, ES profiles even on desktop machines), ... JOGL does its best to pick the most capable one, but it can use different ones for offscreen and onscreen rendering. The first OpenGL context used by GLProfile and the one used by the first created drawable can be very different.
This isn't a problem specific to JOGL. My suggestion helps you find out which extensions are available with the default device. You can also use GLProfile.glAvailabilityToString() and GLProfile.getDefault().
N.B.: I assume that you use at least JOGL 2.3.1; maintenance of JOGL 1 stopped about 5 years ago.
I'm trying to query a Windows machine, using C++, for a list of available graphics cards.
This SO question has an answer (from moxize) which provides one way (d3d9.h):
get-the-graphics-card-model
And this one provides another (dxgi.h): dxgi enumadapters
When I tried each, I found the dxgi method above listed all the cards whilst the d3d9 one seemed only to provide one of them, depending on the selection of the "preferred graphics processor" in the NVIDIA control panel.
I'm struggling to understand the difference between these two programmatic routes: what does each one provide, and what is each meant to be used for?
The DirectX Graphics Infrastructure (DXGI) was introduced with Vista. It basically factored all the enumeration, display and adapter management, and presentation stuff out of Direct3D. That way, all sorts of graphics APIs can coexist without a need to have separate mechanisms for these common tasks in each of them. It allows, e.g., all the Direct3D APIs (>= 10) to only be concerned with drawing 3D content into buffers and not care about where these buffers come from, or whether and how they are going to be displayed.
The old Direct3D 9 API still has its own interface for adapter enumeration. If I remember correctly, Direct3D 9 used to enumerate only adapters that actually had a display connected, most likely because the API didn't really support headless rendering, so it wouldn't make sense to try to use an adapter without an output. DXGI, on the other hand, operates on a more complete picture of the whole video and presentation network on your machine. Most importantly, it differentiates between adapters (graphics cards) and outputs (displays connected to an adapter). I assume you're running on a laptop or some other machine with both an integrated and a dedicated GPU? Switching the "preferred graphics processor" in the driver control panel will, most likely, change which of the two GPUs is (logically) connected to the display, and Direct3D 9 will then only ever enumerate that one…
I'm attempting to learn SDL2 and am having difficulties from a practical perspective. I feel like I have a good understanding of SDL windows, renderers, and textures from an abstract perspective. However, I feel like I need to know more about what's going on under the hood to use them appropriately.
For example, when creating a texture I am required to provide a reference to a renderer. I find this odd. A texture seems like a resource that is loaded into VRAM. Why should I need to give a resource a reference to a renderer? I understand why a renderer would need a reference to a texture, but the reverse doesn't make sense to me.
So that leads to another question. Since each texture requires a renderer, should each texture have its own dedicated renderer, or should multiple textures share a renderer?
I feel like there are consequences going down one route versus the other.
Short Answers
I believe the reason an SDL_Texture requires a renderer is that some backend implementations (OpenGL?) have contexts (which is essentially what an SDL_Renderer is), and the image data must be associated with that particular context. You cannot use a texture created in one context inside another.
As for your other question: no, you don't need or want a renderer for each texture. That would probably only produce correct results with the software backend, for the same reason (contexts).
As @keltar correctly points out, no renderer will work with a texture that was created by a different renderer, due to a check in SDL_RenderCopy. However, that is strictly an API requirement to keep things consistent; my point above is that even if the check were absent, it still would not work for backends such as OpenGL, although there is no technical reason it wouldn't work for the software renderer.
Some Details about SDL_Renderer
Remember that SDL_Renderer is an abstract interface over multiple possible backends (OpenGL, OpenGL ES, D3D, Metal, software, more?). Each of these may have restrictions on sharing data between contexts, and therefore SDL has to limit itself in the same way to maintain sanity.
Example of OpenGL restrictions
Here is a good resource for general restrictions and platform dependent functionality on OpenGL contexts.
As you can see from that page, sharing between contexts has restrictions:
Sharing can only occur in the same OpenGL implementation
This means you certainly can't share between an SDL_Renderer using OpenGL and a different SDL_Renderer using another backend.
You can share data between different OpenGL Contexts
...
This is done using OS Specific extensions
Since SDL is cross-platform, this means they would have to write special code for each platform to support it, and not all OpenGL implementations support it anyway, so it's better for SDL simply not to support it either.
each extra render context has a major impact on the application's performance
While not a restriction, this is another reason why adding support for sharing textures is not worthwhile for SDL.
Final Note: the 'S' in SDL stands for "simple". If you need to share data between contexts SDL is simply the wrong tool for the job.
My straight answer would be NO. But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They used video editing software. They recorded two nearly deterministic run-throughs of their engine and spliced them together.
As for the question posed by your title, not within the same window. It may be possible within the same application from two windows, but you'd be better off with two separate applications.
Yes, it is possible. I did this as an experiment for a graduate course; I implemented half of a deferred shading graphics engine in OpenGL and the other half in D3D10. You can share surfaces between OpenGL and D3D contexts using the appropriate vendor extensions.
Does it have any practical applications? Not many that I can think of. I just wanted to prove that it could be done :)
I digress, however. That video is just a side-by-side of two separately recorded videos of the Haven benchmark running in the two different APIs.
My straight answer would be NO.
My straight answer would be "probably yes, but you definitely don't want to do that."
But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They prerendered the video and simply combined the two recordings in a video editor. Because the camera follows a fixed path, that is easy to do.
Anyway, you could render both the DirectX and OpenGL scenes into offscreen buffers and then combine them with either API to render the final result. You would read the data from the render buffer in one API and transfer it into a renderable buffer in the other. The dumbest way to do this is through system memory (which will be VERY slow), but it is possible that some vendors (NVIDIA in particular) provide extensions for this scenario.
On the Windows platform you could also place two child windows/panels side by side on the main window (giving you the same effect as in that YouTube video), and create an OpenGL context for one of them and a DirectX device for the other. Unless there's some restriction I'm not aware of, that should work, because to render 3D graphics you need a window with a handle (HWND). However, both windows will be completely independent of each other and will not share resources, so you'll need twice the memory for textures alone to run them both.
First off, let me just apologize right off the bat in case this is already answered, because I might just be searching it under irregular search terms.
I am looking to draw 2D graphics in an application that uses DirectX to draw its own graphics (A game). I will be doing that by injecting a DLL into the application (that part I have no questions about, I can do that), and drawing my graphics. But not being really good at DirectX/OpenGL, I have a couple of fundamental questions to ask.
1) In order to draw graphics on that window, will I need to get a pre-existing context from the process memory, some sort of handle to the drawing scene?
2) If the application uses DirectX, can I use OpenGL graphics on it?
Please let me know as to how I can approach this. Any details will be appreciated :-)
Thank you in advance.
Your approach of injecting a DLL is indeed the right way to go. Programs like FRAPS use the same approach. I can't tell you about the method for Direct3D, but for OpenGL you'd do roughly the following:
First you must hook into the functions wglMakeCurrent, glFinish, and wglSwapBuffers of opengl32.dll so that your DLL notices when an OpenGL context is selected for drawing. Pass their calls through to the OS. When wglMakeCurrent is called, use the function GetPixelFormat to find out whether the window is double-buffered or not. Also use the glGet… OpenGL calls to find out which version of OpenGL context you're dealing with. If you have a legacy OpenGL context, you must use different methods for drawing your overlay than for a modern OpenGL 3 or later core context.
In the case of a double-buffered window, use your hook on wglSwapBuffers to perform further OpenGL drawing operations. OpenGL is just pens and brushes (in the form of points, lines, and triangles) drawing on a canvas. Then pass through the wglSwapBuffers call to make everything visible.
In the case of a single-buffered context, the function to hook is glFinish instead of wglSwapBuffers.
Drawing 2D with OpenGL is as simple as disabling depth buffering and using an orthographic projection matrix. You can change OpenGL state whenever you want to; just make sure you restore everything to its original condition before you leave the hooks.
"1) In order to draw graphics on that window, will I need to get a pre-existing context from the process memory, some sort of handle to the drawing scene?"
Yes, you need to make sure your hooks catch the important context creation functions.
For example, all variations of CreateDevice in d3d are interesting to you.
You didn't mention which DirectX you are using, but there are some differences between the versions.
For example, in DirectX 9 you'd mostly be interested in functions that:
1. Create/return IDirect3DSwapChain9 objects
2. Create/return IDirect3DDevice9,IDirect3DDevice9Ex objects
In newer versions of DirectX, that code was split into (mostly) Device, DeviceContext, and DXGI.
If you are on a "specific mission", share which DirectX version you are addressing.
Apart from catching all the objects needed to allow your own rendering, you also want to catch all presentation events ("SwapBuffers" in GL, "Present" in DX), because that's when you want to add your overlay.
Since it seems that you are attempting to render an overlay on top of DX applications, allow me to warn you that making a truly generic solution (one that works on all games) isn't easy, mostly due to the need to support different DX versions along with the numerous ways to create devices.
If you are focused on a specific game/application it is, naturally, much easier.
"2. If the application uses DirectX, can I use OpenGL graphics on it?"
Well, first of all yes. It's possible.
The terminology you want to search for is OpenGL/DirectX interoperability (interop for short).
Here's an example:
https://sites.google.com/site/snippetsanddriblits/OpenglDxInterop
I don't know if the extension they used is only available in nVidia devices or not - check it.
Another thing: you need a really good motivation to do this; generally I would simply stick with DX for both hooking and rendering.
I assume that internal interop between different DX versions is the better option.
I'd personally probably go with DirectX9 for your own rendering code.
Of course, if you only need to support a single DirectX version, no interop needed.
Bonus:
If you ever need to generate full wrappers of C++ classes, a quick-and-dirty DLL wrapper, or just a general global function hook, feel free to use this lib that I created:
http://code.google.com/p/hookit/
It's far from a fully tested tool, just something I hacked together in two days, but I found it super useful.
Note that in your case I recommend VTable hooking; you'll probably have to hardcode the function's offset into the table, but that's not likely to change.
Good luck :)
I know mixing OpenGL and DirectX is not recommended but I'm trying to build a bridge between two different applications that use separate graphics API:s and I'm hoping there is a technique for sharing data, specifically textures.
I have a texture that is created in Direct3D like this:
d3_device->CreateTexture(width, height,
                         1, D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8,
                         D3DPOOL_DEFAULT, &texture, NULL);
Is there any way I can use this texture from OpenGL without taking a roundtrip through system memory?
YES. As previously posted (see below), there should exist at least one solution.
I found two possible solutions:
On NVIDIA cards, a new extension was integrated in the 256 series drivers. See http://developer.download.nvidia.com/opengl/specs/WGL_NV_DX_interop.txt
DXGI is the driving force used to composite all windows in Vista and Windows 7. See msdn.microsoft.com/en-us/library/ee913554.aspx
I have no experience with either solution yet, but I hope to find some time to test one of them. To me, the first seems the easier one.
[I think it should be possible. In recent Windows versions (Vista and 7) one can see a preview of any window's content in the taskbar (whether it's GDI, Direct3D, or OpenGL).
To my knowledge, OpenGL previews were not supported in earlier Windows versions. So at least in the newer versions there should be a way to couple or share render contexts, even between different processes...
This is also true for other modern platforms, which share render contexts system-wide to implement various rendering effects.]
I think it is not possible, as the two APIs have different models of a texture. You cannot access the texture memory directly without going through either DirectX or OpenGL.
Looking at it the other way around: if it were possible, you would retrieve the texture's address, pitch, width, and other (hardware-dependent) memory-layout information, create a dummy texture in the other system, and push the retrieved data into that newly created texture object. But this is not possible: it will not work on any decent hardware, and even if it did, it would not be portable.
I don't think it's possible without downloading the data into host memory and re-uploading it into device memory.
It's possible now.
Use the ANGLE OpenGL implementation instead of native OpenGL. You can share a Direct3D texture with the EGL_ANGLE_d3d_texture_client_buffer extension.
https://github.com/microsoft/angle/wiki/Interop-with-other-DirectX-code#demo
No.
Think of it like sharing an image between Photoshop and another image viewer: you would need a memory-management library that both applications share.