This is probably a stupid question, but I can't find good examples of how to approach this, or whether it's even possible. I just finished a project where I used GDI to BitBlt onto a DIB buffer and then swap that onto the screen HDC, basically making my own swapchain while drawing with OpenGL.
So then I thought: can I do the same thing using DirectX 11? But I can't seem to find where the DIB/buffer I would need to modify even lives.
Am I even thinking about this correctly? Any ideas on how to handle this?
Yes, you can. NVIDIA exposes the vendor-specific extensions WGL_NV_DX_interop and WGL_NV_DX_interop2. With these extensions, you can directly access a DirectX surface (while it resides on the GPU) and render to it from an OpenGL context. There should be minimal (driver-only) overhead for this operation, and the CPU will almost never be involved.
Note that while this is a vendor-specific extension, Intel GPUs support it as well.
However, don't do this simply for the fun of it or if you control all the source code for your application. This kind of interop scenario is meant for cases where you have two legacy/complicated codebases and interop is a cheaper/better option than porting all the logic to the other API.
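If you do go that route, the registration/lock dance looks roughly like this. This is an untested sketch, not a drop-in implementation: it assumes you already have a working ID3D11Device and the D3D texture you want to draw into, plus a current GL context on the same GPU; error handling, cleanup and caching of the wglGetProcAddress lookups are all left out.

    // Sketch only: assumes an existing ID3D11Device*, an ID3D11Texture2D* you
    // want to render into, and a current OpenGL context on the same GPU.
    // Error handling omitted; wglext.h (Khronos) provides the NV_DX_interop typedefs.
    #include <windows.h>
    #include <d3d11.h>
    #include <GL/gl.h>
    #include "wglext.h"

    void renderIntoD3DTexture(ID3D11Device* d3dDevice, ID3D11Texture2D* d3dTex)
    {
        // Load the interop entry points (requires a current GL context).
        auto openDevice    = (PFNWGLDXOPENDEVICENVPROC)    wglGetProcAddress("wglDXOpenDeviceNV");
        auto registerObj   = (PFNWGLDXREGISTEROBJECTNVPROC)wglGetProcAddress("wglDXRegisterObjectNV");
        auto lockObjects   = (PFNWGLDXLOCKOBJECTSNVPROC)   wglGetProcAddress("wglDXLockObjectsNV");
        auto unlockObjects = (PFNWGLDXUNLOCKOBJECTSNVPROC) wglGetProcAddress("wglDXUnlockObjectsNV");

        // Tie the GL context to the D3D device and wrap the D3D texture
        // in a GL texture name.
        HANDLE interopDev = openDevice(d3dDevice);

        GLuint glTex = 0;
        glGenTextures(1, &glTex);
        HANDLE interopTex = registerObj(interopDev, d3dTex, glTex,
                                        GL_TEXTURE_2D, WGL_ACCESS_READ_WRITE_NV);

        // While locked, the texture is usable from GL (e.g. as an FBO attachment).
        lockObjects(interopDev, 1, &interopTex);
        //   ... attach glTex to a framebuffer object and draw with OpenGL ...
        unlockObjects(interopDev, 1, &interopTex);
        // After unlocking, D3D11 can present or sample the texture again.
    }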
Yeah, you can do it; both OpenGL and D3D support writable textures and let you lock (map) them to get at the pixel data.
Simply render your scene in OpenGL to a texture, read its pixel data back, copy that data into the locked D3D texture, unlock it, then do whatever you want with the texture.
Performance would be dreadful, of course: you're stalling the GPU multiple times in a single "operation" and forcing it to synchronize with the CPU (which shuttles the data) and the bus (for the memory transfers). And there would be absolutely no benefit at all. But if you really want to try it, you can do it.
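If you're curious what that round trip looks like, here's a rough sketch. It assumes the formats and sizes already match (RGBA8) and that the D3D11 destination texture was created with D3D11_USAGE_DYNAMIC / D3D11_CPU_ACCESS_WRITE; error handling is left out.

    // Sketch only: read a GL texture back to the CPU, then write it into a
    // mappable D3D11 texture of the same size and format.
    #include <vector>
    #include <cstring>
    #include <d3d11.h>
    #include <GL/gl.h>

    void copyGLTextureToD3D(GLuint glTex, int width, int height,
                            ID3D11DeviceContext* ctx, ID3D11Texture2D* dst)
    {
        // 1) Stall the GPU and read the GL texture back into CPU memory.
        std::vector<unsigned char> pixels(width * height * 4);
        glBindTexture(GL_TEXTURE_2D, glTex);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

        // 2) Map ("lock") the D3D texture and copy row by row, since the
        //    driver's RowPitch may be larger than width * 4.
        D3D11_MAPPED_SUBRESOURCE mapped = {};
        ctx->Map(dst, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
        for (int y = 0; y < height; ++y)
        {
            std::memcpy(static_cast<unsigned char*>(mapped.pData) + y * mapped.RowPitch,
                        pixels.data() + y * width * 4,
                        width * 4);
        }
        ctx->Unmap(dst, 0);
    }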
I have a texture in Unity which I will modify frequently.
Now there are two options:
I can make changes to the texture by calling SetPixels and then calling Texture2D.Apply. I think Apply actually copies the data from the CPU to the GPU.
The other option is to modify the texture in native code by getting the texture's native handle and updating it with glTexSubImage2D.
Now, I read that Apply copies only the changed pixels to the GPU, not the full texture, but I really doubt that is possible. If it is true, does that mean calling Texture2D.Apply == glTexSubImage2D in terms of performance?
If not, what should I use if I need good performance? I don't really want to go to the native side, as I would then have to maintain native code for each of the graphics APIs Unity supports, like OpenGL, DX, etc.
Texture2D.Apply() and glTexSubImage2D are both used to update a texture. They perform the same action, but there are differences between them.
GetPixels, SetPixels and Texture2D.Apply() are done on the CPU.
You should only use GetPixels, SetPixels and Texture2D.Apply() if you need the individual pixels on the CPU. A good example of this is when you want to send the texture data over the network.
glTexSubImage2D updates the texture on the GPU and does not require SetPixels or GetPixels.
glTexSubImage2D is much faster than GetPixels, SetPixels and Texture2D.Apply().
"If not, what should I use if I need good performance? I don't really want to go to the native side, as I would then have to maintain native code for each of the graphics APIs Unity supports, like OpenGL,"
You mentioned that you will be modifying the image frequently, so do not use GetPixels, SetPixels and Texture2D.Apply(). I know it is the easiest solution but it is very slow.
For the best performance:
1. Use glTexSubImage2D
Pass Texture.GetNativeTexturePtr() to the native C++ side as an IntPtr, then use glTexSubImage2D to modify the texture directly (a rough sketch of this is shown after these two options). I noticed that most of your questions are about C++ and OpenGL, so this shouldn't be hard for someone like you.
As for supporting different graphics APIs, the first one to support is OpenGL, because it is available on all major platforms. From the Editor, change the Graphics API to OpenGL, then start coding. It should work on Windows, Mac, Linux, Android and iOS. If you want to support Direct3D, Metal and Vulkan, you can add them too, but you don't have to; OpenGL is enough for this.
2. Use Shaders
You can combine Unity shaders and compute shaders and still get more performance than glTexSubImage2D, because everything happens on the GPU instead of the CPU. I personally find shaders complicated, so #1 should be your priority.
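Here is a rough sketch of what the native side of option 1 could look like. The exported function name and pixel format are my own choices, and it only applies when the player is running on an OpenGL graphics API (where GetNativeTexturePtr() returns the GL texture name); in a real plugin you'd normally issue the call from Unity's render thread (e.g. via GL.IssuePluginEvent), which is skipped here.

    // Sketch only: a hypothetical native plugin entry point for option 1.
    // `textureHandle` is the value returned by Texture2D.GetNativeTexturePtr(),
    // which on OpenGL is simply the GLuint texture name.
    #include <GL/gl.h>
    #include <cstdint>

    extern "C" __declspec(dllexport)
    void UpdateTexturePixels(intptr_t textureHandle,
                             int width, int height,
                             const unsigned char* rgba32Pixels)
    {
        GLuint tex = static_cast<GLuint>(textureHandle);

        glBindTexture(GL_TEXTURE_2D, tex);
        // Upload the whole level-0 image; pass smaller offsets/sizes to update
        // only a region. No SetPixels/Apply round trip on the C# side.
        glTexSubImage2D(GL_TEXTURE_2D, 0,
                        0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, rgba32Pixels);
    }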
Yes, glTexSubImage2D can be used to update a smaller rectangular portion of a larger texture.
Both SDL and Game Maker have the concept of surfaces: images that you can modify on the fly and then display. I'm using OpenGL 1 and I'd like to know whether OpenGL has this concept of a surface.
The only ways I came up with were:
Every frame create / destroy a new texture based on needs.
Every frame, update said texture based on needs.
These approaches don't seem very performant, but I see no alternative. Maybe this is how they are implemented in the engines mentioned.
Yes, these two are the ways you would do it in OpenGL 1.0. I don't think there are any other means as far as the 1.0 spec is concerned.
Link: https://www.opengl.org/registry/doc/glspec10.pdf
Do note that textures are stored in device memory (on the GPU), which is fast to access for shading, while both approaches above copy data between host (CPU) memory and device memory. The performance hit is therefore the cost of the host-to-device copy.
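For illustration, the second approach (reuse one texture, update it every frame) looks roughly like this in old-style GL. Note that glTexSubImage2D only appeared in OpenGL 1.1; under a strict 1.0 context you would have to re-specify the whole image with glTexImage2D instead.

    // Sketch of the "update the same texture every frame" approach in legacy
    // OpenGL. Assumes an RGBA8 CPU buffer the same size as the texture.
    #include <GL/gl.h>

    void drawSurface(GLuint tex, int w, int h, const unsigned char* rgbaPixels)
    {
        glBindTexture(GL_TEXTURE_2D, tex);

        // Host -> device copy: this is where the performance cost lives.
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);

        // Display it as a textured quad (immediate mode, GL 1.x style).
        glEnable(GL_TEXTURE_2D);
        glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(-1, -1);
            glTexCoord2f(1, 0); glVertex2f( 1, -1);
            glTexCoord2f(1, 1); glVertex2f( 1,  1);
            glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();
    }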
Why are you limited to the OpenGL 1.0 spec? If you can go higher, you start getting more options.
Use GLSL shaders to read the content of one texture and write the result directly into another. The processing is done on the GPU, and a device-to-device copy is as fast as it gets.
Use CUDA: map the texture to a CUDA array and modify its contents from a kernel. Or use OpenCL for non-NVIDIA cards.
This is the better scenario: as long as the modification can be executed in parallel, it will benefit.
That said, I would suggest trying the CPU-copy method first, as it might be fast enough for your needs. Host-device copies keep getting faster with newer hardware, and you may well reach real-time 60 fps or higher even with the copy, unless you plan to do this for a lot of textures.
I want to find a way to send all the geometry from an OpenGL framebuffer to a remote computer, which would then do the rendering. This would allow me to have very complex simulations running on some kind of big supercomputer while the rendering is done on a small mobile device or simply a cheap client machine.
Before starting to dig into my code, I thought it would be relatively easy: just copy the vertex arrays and send them over the network, using boost::serialization for example, and that's it. But my geometry is encapsulated, which prevents me from accessing it from where I want to.
I have been able to render into a framebuffer instead of rendering directly to the screen, though, and I was wondering whether there is any way to retrieve data from OpenGL's FBOs.
First, your terminology is wrong: Frame Buffer Objects are encapsulations of off-screen images/surfaces and don't hold geometry.
Second: what you imagine has already been implemented by the VirtualGL project (however, it's stuck at a rather old OpenGL profile and doesn't support modern GPUs).
Also, X11/GLX has always supported indirect OpenGL operation, i.e. a remote machine sends OpenGL commands to the local display server, which is probably exactly what you have in mind. But this has a major drawback: network bandwidth becomes the bottleneck.
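So what you can actually pull out of an FBO is the rendered image, not the geometry that produced it. A minimal readback sketch, assuming color attachment 0 is a width x height RGBA image (glBindFramebuffer and friends need GL 3.0 or an extension loader such as GLEW or glad):

    // Sketch only: read the color attachment of an FBO back into CPU memory.
    #include <vector>
    #include <GL/glew.h>   // provides glBindFramebuffer etc.

    std::vector<unsigned char> readBackFBO(GLuint fbo, int width, int height)
    {
        std::vector<unsigned char> pixels(width * height * 4);

        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
        glReadBuffer(GL_COLOR_ATTACHMENT0);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);

        // `pixels` is now an ordinary CPU buffer you could compress and send
        // over the network (which is essentially what VirtualGL does).
        return pixels;
    }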
I was moving from SDL to SDL2 and was confused by the 'renderer & texture' system it introduces.
Back in SDL, the most frequent operation was creating Surfaces and BlitSurface-ing them onto the screen. Now there seems to be a trend of using renderers and textures. However, this looks extremely slow (in terms of coding overhead) from my point of view. Why can't I just load_BMP and BlitSurface as before? What good does this whole window-renderer-texture thing bring?
I have read a couple of threads (What is a SDL renderer?) but am still a little confused.
So..
Will the old Surface way still work in SDL2?
What is the point of Renderer & Texture? (It could be about hardware acceleration, according to a little googling, but I'm not sure what that means.)
You might want to take a look at the migration guide for SDL2, it provides information on the new way of dealing with 2D graphics.
The point of using textures instead of surfaces is that textures live on the GPU (they get loaded into video memory), while surfaces live in system memory and are processed by the CPU; since GPUs are much better suited than CPUs to handling graphics, textures are faster. Also, the renderer hides the underlying API used (it could be D3D, OpenGL, or something else).
You can still load surfaces, but you'll have to convert them to textures before rendering them, or use the SDL_UpdateWindowSurface and SDL_GetWindowSurface functions; the documentation for the latter includes an example of how to use them.
As for the SDL2 approach being slow, as you assert, I don't agree: you set up the window and renderer once, load your textures much like you loaded surfaces, copy them to the renderer instead of blitting, and finally call SDL_RenderPresent instead of SDL_Flip. Not that different, really :)
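Here's a bare-bones sketch of that flow; error checks are stripped out and "image.bmp" is just a placeholder asset name.

    // Sketch of the SDL2 window -> renderer -> texture flow described above.
    #include <SDL.h>

    int main(int argc, char* argv[])
    {
        SDL_Init(SDL_INIT_VIDEO);

        SDL_Window*   window   = SDL_CreateWindow("demo",
                                     SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                     640, 480, 0);
        SDL_Renderer* renderer = SDL_CreateRenderer(window, -1,
                                     SDL_RENDERER_ACCELERATED);

        // Load a surface as before, then turn it into a GPU-side texture.
        SDL_Surface* surface = SDL_LoadBMP("image.bmp");
        SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_FreeSurface(surface);               // CPU copy no longer needed

        SDL_bool running = SDL_TRUE;
        while (running)
        {
            SDL_Event e;
            while (SDL_PollEvent(&e))
                if (e.type == SDL_QUIT) running = SDL_FALSE;

            SDL_RenderClear(renderer);
            SDL_RenderCopy(renderer, texture, NULL, NULL);  // the "blit" equivalent
            SDL_RenderPresent(renderer);                    // replaces SDL_Flip
        }

        SDL_DestroyTexture(texture);
        SDL_DestroyRenderer(renderer);
        SDL_DestroyWindow(window);
        SDL_Quit();
        return 0;
    }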
But, take a look at the migration guide, it has the information you're looking for.
First off, let me apologize right off the bat in case this has already been answered, because I might just be searching for it under unusual search terms.
I am looking to draw 2D graphics in an application that uses DirectX to draw its own graphics (a game). I will be doing that by injecting a DLL into the application (that part I have no questions about, I can do that) and drawing my graphics. But since I'm not really good at DirectX/OpenGL, I have a couple of fundamental questions to ask.
1) In order to draw graphics on that window, will I need to get a pre-existing context from the process memory, some sort of handle to the drawing scene?
2) If the application uses DirectX, can I use OpenGL graphics on it?
Please let me know as to how I can approach this. Any details will be appreciated :-)
Thank you in advance.
Your approach of injecting a DLL is indeed the right way to go; programs like FRAPS use the same approach. I can't tell you the method for Direct3D, but for OpenGL you'd do roughly the following things:
First you must hook the functions wglMakeCurrent, glFinish and wglSwapBuffers of opengl32.dll so that your DLL notices when an OpenGL context is selected for drawing. Pass their calls through to the OS. When wglMakeCurrent is called, use GetPixelFormat to find out whether the window is double buffered or not. Also use the glGet… OpenGL calls to find out which version of OpenGL context you're dealing with: with a legacy OpenGL context you must use different methods for drawing your overlay than with a modern OpenGL 3 or later core context.
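For example, inside your wglMakeCurrent hook you might inspect the context roughly like this (sketch only; `hdc` is the device context the application passed to wglMakeCurrent):

    // Sketch: run inside the wglMakeCurrent hook, after forwarding the call.
    #include <windows.h>
    #include <GL/gl.h>

    void inspectContext(HDC hdc)
    {
        // Is the window double buffered? That decides whether we later draw
        // from the wglSwapBuffers hook or from the glFinish hook.
        PIXELFORMATDESCRIPTOR pfd = {};
        DescribePixelFormat(hdc, GetPixelFormat(hdc), sizeof(pfd), &pfd);
        bool doubleBuffered = (pfd.dwFlags & PFD_DOUBLEBUFFER) != 0;

        // Which GL version is this? Legacy (1.x/2.x) contexts can use the
        // fixed-function overlay approach; core profiles need shaders.
        const char* version = reinterpret_cast<const char*>(glGetString(GL_VERSION));

        // ... stash doubleBuffered/version somewhere your swap hook can see ...
        (void)doubleBuffered; (void)version;
    }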
In the case of a double-buffered window, use your hook on wglSwapBuffers to perform your additional OpenGL drawing operations. OpenGL is just pens and brushes (in the form of points, lines and triangles) drawing on a canvas. Then pass the wglSwapBuffers call through to make everything visible.
In the case of a single-buffered context, the function to hook is glFinish instead of wglSwapBuffers.
Drawing 2D with OpenGL is as simple as disabling depth testing and using an orthographic projection matrix. You can change OpenGL state whenever you want to; just make sure you restore everything to its original condition before you leave the hooks.
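A rough sketch of such an overlay pass for a legacy (compatibility) context, called from the wglSwapBuffers hook just before forwarding the real call; the attribute/matrix stacks used for save/restore don't exist in core profiles, so there you'd have to track and restore state yourself:

    // Sketch: 2D overlay drawing for a legacy (compatibility) context.
    #include <GL/gl.h>

    void drawOverlay(int windowWidth, int windowHeight)
    {
        // Save the state we are about to touch.
        glPushAttrib(GL_ENABLE_BIT | GL_DEPTH_BUFFER_BIT | GL_CURRENT_BIT);
        glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
        glOrtho(0, windowWidth, windowHeight, 0, -1, 1);   // pixel coordinates
        glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

        glDisable(GL_DEPTH_TEST);
        glDisable(GL_TEXTURE_2D);

        // The actual 2D overlay: a simple colored rectangle in the corner.
        glColor3f(1.0f, 0.0f, 0.0f);
        glBegin(GL_QUADS);
            glVertex2f(10,  10);
            glVertex2f(110, 10);
            glVertex2f(110, 40);
            glVertex2f(10,  40);
        glEnd();

        // Restore everything before returning to the game's code.
        glMatrixMode(GL_MODELVIEW);  glPopMatrix();
        glMatrixMode(GL_PROJECTION); glPopMatrix();
        glMatrixMode(GL_MODELVIEW);
        glPopAttrib();
    }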
"1) In order to draw graphics on that window, will I need to get a pre-existing context from the process memory, some sort of handle to the drawing scene?"
Yes, you need to make sure your hooks catch the important context creation functions.
For example, all variations of CreateDevice in d3d are interesting to you.
You didn't mention which DirectX you are using, but there are some differences between the versions.
For example, in DirectX 9 you'd mostly be interested in functions that:
1. Create/return IDirect3DSwapChain9 objects
2. Create/return IDirect3DDevice9 / IDirect3DDevice9Ex objects
In newer versions of DirectX that functionality was split across (mostly) Device, DeviceContext and DXGI.
If you are on a "specific mission" share which directx version you are addressing.
Apart from catching all the objects needed to allow your own rendering, you also want to catch all presentation events ("SwapBuffers" in GL, "Present" in DX), because that's when you want to add your overlay.
Since it seems that you are attempting to render an overlay on top of DX applications, allow me to warn you that making a truly generic solution (one that works on all games) isn't easy, mostly due to the need to support different DX versions along with the numerous ways to create devices and swap chains.
If you are focused on a specific game/application it is, naturally, much easier.
"2. If the application uses DirectX, can I use OpenGL graphics on it?"
Well, first of all: yes, it's possible.
The terminology you want to search for is OpenGL DirectX interoperability (or "interop" for short).
Here's an example:
https://sites.google.com/site/snippetsanddriblits/OpenglDxInterop
I don't know whether the extension they used is available only on NVIDIA devices; check that.
Another thing: you need a really good reason to do this. Generally I would simply stick with DX for both hooking and rendering.
I assume that internal interop between different DX versions is a better option.
I'd personally probably go with DirectX 9 for your own rendering code.
Of course, if you only need to support a single DirectX version, no interop is needed.
Bonus:
If you ever need to generate full wrappers of C++ classes, a quick-and-dirty DLL wrapper, or just a general global function hook, feel free to use this lib that I created:
http://code.google.com/p/hookit/
It's far from a fully tested tool, just something I hacked together in two days, but I found it super useful.
Note that in your case I recommend just using VTable hooking; you'll probably have to hardcode the function's offset into the table, but that's not likely to change.
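For reference, the core of a VTable hook is just swapping one pointer in the table. This is a rough sketch only: the slot index you pass in is something you have to determine yourself (from the SDK header or a disassembler), and a real hook also has to worry about calling conventions, threading and device resets.

    // Sketch only: overwrite one entry of a COM object's vtable on Windows.
    // `iface` is e.g. an IDirect3DDevice9* obtained from your CreateDevice hook;
    // `index` is the (hard-coded) slot number of the method you want to intercept.
    #include <windows.h>

    void* hookVTableEntry(void* iface, int index, void* replacement)
    {
        // A COM interface pointer is a pointer to a vtable pointer.
        void** vtable = *reinterpret_cast<void***>(iface);

        // Make the vtable page writable, swap the slot, restore protection.
        DWORD oldProtect = 0;
        VirtualProtect(&vtable[index], sizeof(void*), PAGE_READWRITE, &oldProtect);
        void* original = vtable[index];
        vtable[index] = replacement;
        VirtualProtect(&vtable[index], sizeof(void*), oldProtect, &oldProtect);

        return original;   // call this from your replacement to keep the game running
    }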
Good luck :)