How do I get the exact number of render targets currently bound to the output-merger stage in DirectX 11?
I'm using OMGetRenderTargets to retrieve the render targets and depth stencil view, and after my render step I want to restore the previous render targets with OMSetRenderTargets.
But OMGetRenderTargets doesn't return the actual number of RTVs currently bound, and I haven't found any info about this in the documentation.
It worked perfectly with exactly one render target, but if more are bound and I don't know the exact number, it doesn't work at all.
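For reference, here is roughly the save/restore pattern I have working for the single-RTV case (a sketch; variable names are mine):

ID3D11RenderTargetView* savedRTV = nullptr;
ID3D11DepthStencilView* savedDSV = nullptr;
context->OMGetRenderTargets(1, &savedRTV, &savedDSV); // fills at most 1 slot
// ... my render step ...
context->OMSetRenderTargets(1, &savedRTV, savedDSV);  // restore the previous target
if (savedRTV) savedRTV->Release(); // OMGetRenderTargets AddRef's what it returns
if (savedDSV) savedDSV->Release();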
Is this even possible?
I'm trying to get ImGui working in my engine but I'm having some trouble "overlaying" it over my cube mesh. I split the two into separate command buffers, like this:
std::array<VkCommandBuffer, 2> cmdbuffers = { commandBuffers[imageIndex], imguicmdbuffers[imageIndex] };
And then in my queue submit info I set the command buffer count to 2 and pass it the data like so:
submitInfo.commandBufferCount = 2;
submitInfo.pCommandBuffers = cmdbuffers.data();
But what happens now is that it only renders ImGui, or, if I switch the order in the array, only the cube, never both. Is it because they share the same render pass? I changed the VkRenderPassBeginInfo clear color to double-check, and indeed it either clears yellow and draws ImGui or clears red and draws the cube. I've tried setting the clear alpha to 0, but that doesn't work and seems like a hack anyway. I feel like I lack understanding of how command buffers are submitted and executed and how they're tied to render passes and framebuffers, so what's up?
Given the following statements (that is, assuming they are accurate):
they share the same render pass
in my queue submit info I put the command buffer count to 2
VkRenderPassBeginInfo clear color
Certain things about the nature of your rendering become apparent (things you didn't directly state or provide code for). First, you are submitting two separate command buffers directly to the queue. Only primary command buffers can be submitted to the queue.
Second, by the nature of render passes, a render pass instance cannot span primary command buffers. So you must have two render pass instances.
Third, you specify that you can change the clear color of the image when you begin the render pass instance. Ergo, the render pass must specify that the image gets cleared as its load-op.
From all of this, I conclude that you are beginning the same VkRenderPass twice. A render pass that, as previously deduced, is set to clear the image at the beginning of the render pass instance. Which will dutifully happen both times, the second of which will wipe out everything that was rendered to that image beforehand.
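Put concretely, the recorded work amounts to this (a sketch; the names are placeholders):

vkCmdBeginRenderPass(cubeCB, &beginInfo, VK_SUBPASS_CONTENTS_INLINE);  // load-op CLEAR wipes the image
// ... cube draws ...
vkCmdEndRenderPass(cubeCB);
vkCmdBeginRenderPass(imguiCB, &beginInfo, VK_SUBPASS_CONTENTS_INLINE); // load-op CLEAR wipes it again, erasing the cube
// ... ImGui draws ...
vkCmdEndRenderPass(imguiCB);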
Basically, you have two rendering operations, using a render pass that's set to destroy the data created by any previous rendering operation to the images it uses. That's not going to work.
You have a few ways of resolving this.
My preferred way would be to start employing secondary command buffers. I don't know if ImGui can be given a CB to record its data into. But if it can, I would suggest making it record its data into a secondary CB. You can then execute that secondary CB into the appropriate subpass of your renderpass. And thus, you don't submit two primary CBs; you only submit one.
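A sketch of that approach, assuming ImGui will record into a command buffer you hand it (all names here are placeholders):

VkCommandBufferInheritanceInfo inherit = {};
inherit.sType      = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO;
inherit.renderPass = renderPass; // the pass this secondary CB will execute within
inherit.subpass    = 0;
VkCommandBufferBeginInfo begin = {};
begin.sType            = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
begin.flags            = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT;
begin.pInheritanceInfo = &inherit;
vkBeginCommandBuffer(imguiSecondaryCB, &begin);
// ... ImGui records its draws here ...
vkEndCommandBuffer(imguiSecondaryCB);
// In the primary CB. Note: with this contents flag, *all* drawing in the subpass
// must come from secondary CBs, so the cube draws would move into one as well.
vkCmdBeginRenderPass(primaryCB, &renderPassInfo, VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS);
VkCommandBuffer secondaries[] = { cubeSecondaryCB, imguiSecondaryCB };
vkCmdExecuteCommands(primaryCB, 2, secondaries);
vkCmdEndRenderPass(primaryCB);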
Alternatively, you can make a new VkRenderPass which doesn't clear the previous image; it should load the image data instead. Your second rendering operation would use that render pass, while your initial one would retain the clear load-op.
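The key difference in that second render pass is the color attachment description; roughly this (a sketch, assuming the first pass leaves the image in COLOR_ATTACHMENT_OPTIMAL):

VkAttachmentDescription color = {};
color.format        = swapchainImageFormat;                     // same format as the first pass
color.samples       = VK_SAMPLE_COUNT_1_BIT;
color.loadOp        = VK_ATTACHMENT_LOAD_OP_LOAD;               // preserve what was rendered before
color.storeOp       = VK_ATTACHMENT_STORE_OP_STORE;
color.initialLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL; // must not be UNDEFINED when loading
color.finalLayout   = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;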
Worst-case scenario, you can have the second operation render to an entirely different image, and then merge it with the primary image in a third rendering operation.
I'd like to implement color picking in DirectX 12. So basically what I am trying to do is render to two render targets simultaneously. The first render target should contain the normal rendering, while the second should contain the objectID.
To render to two render targets, I think all you need to do is set them with OMSetRenderTargets.
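Something like this, I assume (a sketch; the descriptor handles are placeholders):

D3D12_CPU_DESCRIPTOR_HANDLE rtvs[2] = { sceneRTV, objectIdRTV };
commandList->OMSetRenderTargets(2, rtvs, FALSE, &dsvHandle);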
Question 1: How do you specify which shader or pipeline state object should be used for a specific render target? Like how do you say render_target_0 should be rendered with shader_0, render_target_1 should be rendered with shader_1?
Question 2: How do you read a pixel from the frame buffer after it has been rendered? Is it like in DirectX 11, using CopySubresourceRegion and then Map? Do you need to use a readback heap? Do you need a resource barrier, fence, or some other synchronisation primitive to keep the CPU and GPU from using the frame buffer resource at the same time?
I tried googling for the answers but didn't get very far because DirectX 12 is pretty new and there aren't a lot of examples, tutorials, or open source projects for DirectX 12 yet.
Thanks for your help in advance.
Extra special bonus points for code examples.
Since I didn't get any response on Stack Overflow, I cross-posted on gamedev.net and got a good response there: http://www.gamedev.net/topic/674529-d3d12-color-picking-questions/
For anyone finding this in the future, I'll just copy red75prime's answer from the GameDev forums here.
red75prime's answer:
Question 1 is not specific to D3D12. Use one pixel shader with multiple outputs; see "Rendering to multiple textures with one pass in DirectX 11".
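For illustration, the D3D12 side of that looks roughly like this (a sketch; the formats are assumptions):

psoDesc.NumRenderTargets = 2;
psoDesc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM; // normal rendering
psoDesc.RTVFormats[1] = DXGI_FORMAT_R32_UINT;       // objectID
// The matching pixel shader writes one value per target (HLSL, as a comment):
//   struct PSOut { float4 color : SV_Target0; uint id : SV_Target1; };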
Question 2: Yes to all.
Pseudocode:
ID3D12GraphicsCommandList *gl = ...;
ID3D12CommandQueue *gqueue = ...;
ID3D12Resource *render_target, *read_back_texture;
...
// Draw scene
gl->DrawInstanced(...);
// Make ready for copy
gl->ResourceBarrier(render_target, RENDER_TARGET, COPY_SOURCE);
//gl->ResourceBarrier(read_back_texture, GENERIC_READ, COPY_DEST);
// Copy
gl->CopyTextureRegion(...);
// Make render_target ready for Present(), read_back_texture for Map()
gl->ResourceBarrier(render_target, COPY_SOURCE, PRESENT);
//gl->ResourceBarrier(read_back_texture, COPY_DEST, GENERIC_READ);
gl->Close(); // It's easy to forget
gqueue->ExecuteCommandLists(gl);
// Instruct GPU to signal when command list is done.
gqueue->Signal(fence, ...);
// Wait until the GPU completes drawing.
// This is inefficient: it doesn't allow the GPU and CPU to work in parallel.
// It's here just to keep the example simple.
wait_for_fence(fence, ...);
// Also, you can map the texture once and keep a pointer to the mapped data.
read_back_texture->Map(...);
// Read texture data
...
read_back_texture->Unmap();
EDIT: I added "gl->Close()" to the code.
EDIT2: State transitions for read_back_texture are unnecessary. Resources in read-back heap must always have state COPY_DEST.
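For reference, each shorthand ResourceBarrier line in the pseudocode corresponds to a real transition barrier along these lines (a sketch using the same names):

D3D12_RESOURCE_BARRIER barrier = {};
barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
barrier.Transition.pResource   = render_target;
barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_COPY_SOURCE;
gl->ResourceBarrier(1, &barrier);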
I am rendering a scene with a very simple GLSL path tracer. I create a map in screen space using some of the information from the path tracing. In the first frame the map looks very noisy (because it is mostly 0s), but in the next frame I change the sample, and I would like the new map to be averaged with the previous one.
I know there is the accumulation buffer, but it has been deprecated, and I think it would be a costly solution for such a simple feature.
In my shader I currently render the map and a final image, and display the second one. In the next frame I want to compute a new map and accumulate it with the previous one in order to render the new frame. Ideally I would like to accumulate 5 subsequent frames.
I hope I made the question clear enough. I don't want to compute all the samples and the accumulation in a single shader pass, because I want the program to remain interactive, even if noisy.
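To make it concrete, the kind of ping-pong accumulation I have in mind is roughly this (a sketch; all names are mine):

GLuint accumTex[2], accumFBO[2]; // two accumulation textures, one FBO each
int readIdx = 0, writeIdx = 1;
// Each frame: write the new running average into one texture while sampling the other.
glBindFramebuffer(GL_FRAMEBUFFER, accumFBO[writeIdx]);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, accumTex[readIdx]);           // previous average as input
glUniform1i(prevMapLocation, 0);
glUniform1f(weightLocation, 1.0f / float(frameIndex + 1)); // running-average weight
drawFullscreenQuad(); // the shader does: mix(previousAverage, newSample, weight)
std::swap(readIdx, writeIdx);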
I am very new to 3D programming, namely with DirectX. I have been trying to follow tutorials on how to do basic things, and I have been looking at the samples provided by Microsoft. One of the big questions I have had is how to tell which calculations should be done in the actual game code and which should be done in HLSL. I have not been able to work out what should go where, because it looks to me like you could have almost all of the calculation code in your shader file, or you could have it all in the executable code and send only the bare minimum to the pixel and vertex shaders. How can one tell which code should go where? If you need an example, I'll try to find one.
"Code" - CPU code
"HLSL" - GPU code
Basically, you want everything that is pure graphics to happen on the GPU. That is, when the information about what you want to render has been sent to the GPU, it should take over and use that information to generate the final image.
You want the CPU to say to the GPU, "this is what I want to render, and here is everything you need to make it happen," and then tell it "this is how you render it".
Some examples (not a complete or final list in any way):
CPU:
Anything dealing with window opening/closing/resizing
User input from mouse, keyboard
Reading and setting configuration
Generating and updating view matrices
Application logic
Setting up and initializing rendering (textures, buffers etc)
Generating vertex data (position, texture coordinates etc)
Creating graphic entities (triangles, textures, colors etc)
Handling animation (timestepping, swapping buffers)
Sending updated data to the GPU for each frame
GPU:
Use the view matrices to put things in the right place on the screen (see the sketch after this list)
Interpolate from vertex data to fragment data
Shading (usually, this is the most complicated part)
Calculate and write final pixel color
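For example, the view-matrix items above play out like this (a sketch using DirectXMath and a D3D11-style constant buffer update; names are placeholders):

// CPU: build the combined matrix once per frame and upload it.
XMMATRIX view = XMMatrixLookAtLH(eye, at, up);
XMMATRIX proj = XMMatrixPerspectiveFovLH(fovY, aspect, nearZ, farZ);
XMMATRIX wvp  = XMMatrixTranspose(world * view * proj); // transposed for HLSL's default packing
context->UpdateSubresource(constantBuffer, 0, nullptr, &wvp, 0, 0);
// GPU: the vertex shader just applies it (HLSL, as a comment):
//   output.pos = mul(float4(input.pos, 1.0f), worldViewProj);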
Instead of using glGenTextures() to get an unused texture ID, can I randomly choose a number, say 99999, and use it?
I will, of course, query:
glIsTexture( m_texture )
and make sure it's false before proceeding.
Here's some background:
I am developing an image slideshow app for the mac. Previewing the slideshow is flawless. To save the slideshow, I am rendering to an FBO. I create an AGL context, instantiate a texture with glGenTextures() and render to a frame buffer. All's well except for a minor issue.
After I've saved the slideshow and return to the main window, all my image thumbnails are grey, i.e. the textures have been cleared.
I have investigated it and found out that the image thumbnails and my FBO texture somehow have the same texture ID. When I delete my FBO texture at the end of the slideshow saving operation, the thumbnail textures are also lost. This is weird because I have an AGL context during saving, and the main UI has another AGL context, presumably created in the background by Core Image and over which I have no control.
So my options, as I see them now, are to:
Not delete the FBO texture.
Randomly choose a high texture ID in the hopes that the main UI will not be using it.
Actually, I read that you do not necessarily need to delete your texture if you are deleting the AGL OpenGL context, because deleting your OpenGL context automatically deletes all associated textures. Is this true? If so, option 1 makes more sense.
I do realize that there are funny things happening here that I can't really explain. For example, after I've saved my image slideshow to an .mov file, I delete the context, which was created in the same class. By rights, it should not affect textures created in another context. But it does, and I seriously can't explain that.
Allow me to answer your basic question first: yes, it is legal in OpenGL 2.1 to simply pick a texture name arbitrarily and use it as a texture. Note that it is not legal in 3.2 core. The fact that the OpenGL ARB removed this ability should help illustrate whether it is a good idea or not.
Now, you seem to have run into an odd bug. glGenTextures should never return a texture name that is already in use; after all, that is what it is for. You appear to somehow be getting names that are already in use, and I don't know how that happens.
An OpenGL context, when destroyed, will delete all resources specific to that context. If you have created a second context that shares resources with the first, deleting the second context will not delete the shared resources (textures, buffers, etc). It will delete un-shared resources (VAOs, etc), but everything else will remain.
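If the two contexts are meant to share textures, that has to be requested when the second context is created; with AGL it is the share argument (a sketch; the names are placeholders):

// Passing the main context as the share parameter makes texture names visible in both.
AGLContext savingContext = aglCreateContext(pixelFormat, mainContext);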
I would suggest not creating and deleting the FBO texture constantly. Just create it on application startup. Worst-case, you'll have to reallocate storage for the object as needed (via glTexImage2D), if you need a different size or something.
To anybody who has this problem: make sure your context is fully initialized; until it is, glGenTextures will always return 0. I didn't realize what was happening at first, because 0 still seems to be accepted as a texture ID.
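A cheap guard for that case (a sketch):

GLuint tex = 0;
glGenTextures(1, &tex);
assert(tex != 0 && "glGenTextures returned 0 - is a GL context current?");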