I'm looking at the documentation for IDXGIKeyedMutex and I'm a bit unsure regarding the following:
You must call the ReleaseSync method when you are done rendering to a
surface.
My question is: what does "when you are done rendering" mean? Is it after I remove the texture as the render target for the immediate context, when I call Flush on the immediate context, or do I need some other form of GPU fence/sync before I can call ReleaseSync?
Also why is D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX preferred over D3D11_RESOURCE_MISC_SHARED?
You should call IDXGIKeyedMutex::ReleaseSync after you have issued the ID3D11DeviceContext::Draw calls, or any other calls that issue GPU commands writing to the buffer (e.g. ID3D11DeviceContext::CopyResource). You don't need to explicitly call Flush. For a sample that uses AcquireSync/ReleaseSync, please look at http://code.msdn.microsoft.com/DXGISyncSharedSurf
D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX is preferred over D3D11_RESOURCE_MISC_SHARED because it can be used with D3D11_RESOURCE_MISC_SHARED_NTHANDLE, which provides better security for cross-process surface sharing.
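To make the ordering concrete, here is a minimal sketch (not taken from the linked sample) of wrapping the rendering commands in AcquireSync/ReleaseSync; sharedTexture, context, rtv and vertexCount are placeholders assumed to exist:
// Sketch only: acquire the keyed mutex, issue the GPU commands that write to the
// shared surface, then release. sharedTexture must have been created with
// D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX.
Microsoft::WRL::ComPtr<IDXGIKeyedMutex> keyedMutex;
HRESULT hr = sharedTexture->QueryInterface(IID_PPV_ARGS(&keyedMutex));
if (SUCCEEDED(hr) && SUCCEEDED(keyedMutex->AcquireSync(0, INFINITE)))
{
    // All GPU commands that write to the surface go between Acquire and Release.
    context->OMSetRenderTargets(1, rtv.GetAddressOf(), nullptr);
    context->Draw(vertexCount, 0);

    // "Done rendering" simply means the write commands have been recorded on the
    // immediate context; no Flush or extra fence is required before this call.
    keyedMutex->ReleaseSync(1);  // release with the key the consumer will wait on
}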
// Contains the list of secondary command buffers to be submitted
std::vector<VkCommandBuffer> secondaryCommandBuffers;
// Inheritance info for the secondary command buffers
VkCommandBufferInheritanceInfo inheritanceInfo = {};
...
inheritanceInfo.renderPass = renderpass;    // <-------------
inheritanceInfo.framebuffer = frameBuffer;  // <-------------
...
VkCommandBufferBeginInfo secondaryCommandBufferBeginInfo = {};
...
secondaryCommandBufferBeginInfo.pInheritanceInfo = &inheritanceInfo;
...
vkBeginCommandBuffer(secondaryCommandBuffers[i], &secondaryCommandBufferBeginInfo);
vkEndCommandBuffer(secondaryCommandBuffers[i]);
// renderPassBeginInfo for the primary command buffer
VkRenderPassBeginInfo renderPassBeginInfo = {};
...
renderPassBeginInfo.renderPass = renderPass;   // <-------------
renderPassBeginInfo.framebuffer = frameBuffer; // <-------------
vkBeginCommandBuffer(primaryCommandBuffer, &cmdBufInfo);
vkCmdBeginRenderPass(primaryCommandBuffer, &renderPassBeginInfo, VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS);
vkCmdExecuteCommands(primaryCommandBuffer, static_cast<uint32_t>(secondaryCommandBuffers.size()), secondaryCommandBuffers.data());
vkEndCommandBuffer(primaryCommandBuffer);
Why do the secondary command buffers already set the framebuffer and render pass, when the framebuffer and render pass are also set for the primary command buffer?
Must they be set to the same values?
Secondary command buffers which contain rendering commands must execute within a render pass, and wholly within a specific subpass of that render pass (hence VkCommandBufferInheritanceInfo::subpass). This is their purpose.
The framebuffer parameter is optional and may be VK_NULL_HANDLE.
The render pass model essentially requires all aspects of command generation to be able to determine what is going on. How rendering commands get generated depends in many ways on which subpass of which render pass is being used. That's central to the whole mechanism.
The primary/secondary CB distinction allows the structure of a render pass operation to be defined within a primary command buffer (which contains the begin, end, and subpass switching operations), while the actual rendering commands can be built in secondary command buffers on other threads. But in order for those other threads to do their jobs, they have to know about how they're being used. Hence the need for the render pass.
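For illustration, here is a hedged sketch of recording one secondary command buffer that will execute inside subpass 0 of a known render pass (renderPass, framebuffer, secondaryCmdBuf and pipeline are assumed to already exist):
// Sketch only: the secondary command buffer must be told, at record time, which
// render pass (and subpass) it will execute inside.
VkCommandBufferInheritanceInfo inheritance = {};
inheritance.sType       = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO;
inheritance.renderPass  = renderPass;   // must be compatible with the render pass begun in the primary
inheritance.subpass     = 0;            // the subpass the commands will run in
inheritance.framebuffer = framebuffer;  // optional (may be VK_NULL_HANDLE), but providing it may help

VkCommandBufferBeginInfo beginInfo = {};
beginInfo.sType            = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
beginInfo.flags            = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT;  // whole buffer is inside a render pass
beginInfo.pInheritanceInfo = &inheritance;

vkBeginCommandBuffer(secondaryCmdBuf, &beginInfo);
vkCmdBindPipeline(secondaryCmdBuf, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
vkCmdDraw(secondaryCmdBuf, 3, 1, 0, 0);
vkEndCommandBuffer(secondaryCmdBuf);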
That's an API design question. And any answer by a non-insider would be opinion-based at best.
The most accurate answer (which nobody seems to like to hear) is simply: because the Vulkan specification requires you to.
On the face of it, the secondary command buffer is recorded well before vkCmdBeginRenderPass. So if the driver might need to know (or there is some advantage in knowing) the render pass environment at vkBeginCommandBuffer of the secondary, then the render pass handle must be provided at that point.
As for VkFramebuffer, the specification says it is optional, but providing it may yield better performance.
Must they be set to the same values?
The relevant valid usage statements (VUs) are:
If vkCmdExecuteCommands is being called within a render pass instance, the render passes specified in the pBeginInfo::pInheritanceInfo::renderPass members of the vkBeginCommandBuffer commands used to begin recording each element of pCommandBuffers must be compatible with the current render pass.
and
If vkCmdExecuteCommands is being called within a render pass instance, and any element of pCommandBuffers was recorded with VkCommandBufferInheritanceInfo::framebuffer not equal to VK_NULL_HANDLE, that VkFramebuffer must match the VkFramebuffer used in the current render pass instance
I have a touchscreen controller (which is an I2C slave) that I need to enable via ACPI. This should be done by calling the _PS0 ACPI method. I call this method using AcpiEvaluateObject with no arguments and no return values.
AcpiEvaluateObject(nullptr, (ACPI_STRING)"\\_SB.I2C4._PS0", nullptr, nullptr); // returns AE_OK
AcpiEvaluateObject(nullptr, (ACPI_STRING)"\\_SB.I2C4.TCS2._PS0", nullptr, nullptr); // returns AE_AML_UNINITIALIZED_ARG
When calling this method on the parent object (I2C4) everything goes fine, but calling it on the touchscreen controller (TCS2) fails. What also makes me wonder is that it returns AE_AML_UNINITIALIZED_ARG even though it doesn't take any arguments (according to the DSDT).
Calling the _CRS method on the same object also works without any problems. I also looked at how the Linux kernel changes ACPI power states, and it uses the exact same mechanism: it boils down to the use of acpi_evaluate_object in acpi_dev_pm_explicit_set, which also seems to work on the touchscreen device.
I'm not using Linux, but Genode and the Acpica library.
What am I missing to successfully enable the touchscreen device via ACPI? Is there something the Linux kernel initializes implicitly (I couldn't find anything like that)?
I am working on an augmented reality project. The user should be able to use the webcam and see the captured video with a cube drawn on each frame.
This is where I get stuck: when I try to use the glBindTexture(GL_TEXTURE_2D, texture_background) method, I get this error:
(
ArgumentError: argument 2: : wrong type
GLUT Display callback with (),{} failed: returning None argument 2: : wrong type
)
I am completely stuck and have no idea what to do. The project is done in Python 2.7, using OpenCV and PyOpenGL 3.1.0.
You can find the code at this link: click here
Thanks in advance.
Interesting error! So I played around with your source code (by the way, in the future you should probably just add the code directly to your question instead of as a separate link), and the issue is actually just one of variable scope, not GLUT or OpenGL usage. The problem is that your texture_background variable does not exist within the scope of the _draw_scene() function. To verify this, simply try calling print texture_background in your _draw_scene() function and you will find it returns None rather than the desired integer texture identifier.
The simple hack-y solution is to just call global texture_background before using it within your _handle_input() function. You will also need to define texture_background = None in the main scope of your program (underneath your ##FOR CUBE FROM OPENGL comment). The same global comment applies for x_axis and z_axis.
That being said, that solution is not really that great. The rigid structure required by GLUT with all of these predefined glut* functions makes it hard to structure code the way you might want to in terms of initializing your app. I would suggest, if you are not forced to use GLUT, to use a more flexible alternative, such as pygame, pysdl2 or pyqt, to create your context instead.
I am designing a game engine in DirectX 11 and I had a question about the ID3D11DeviceContext::IASetInputLayout function. From what I can find in the documentation, there is no mention of what the function will do if you set an input layout on the device that has previously been set. In context, if I were to do the following:
//this assumes dc is a valid ID3D11DeviceContext pointer and that
//ia is a valid ID3D11InputLayout pointer.
dc->IASetInputLayout(ia);
//other program lines: drawing, setting vertex shaders/pixel shaders, etc.
dc->IASetInputLayout(ia);
//continue execution
would this incur a performance penalty through device state switching, or would the runtime recognize that the input layout is the same as the one already set and return early?
While I also cannot find anything documenting what happens if the input layout is already set, you could get a pointer to the currently bound input layout by calling ID3D11DeviceContext::IAGetInputLayout, or do the check internally by keeping your own reference; that way you avoid making the call on your ID3D11DeviceContext object at all.
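If you go the "keep your own reference" route, a small hedged sketch of such a redundant-state filter might look like this (the class and member names are made up for illustration):
// Sketch of a redundant-state filter around IASetInputLayout; this is not a
// D3D11 facility, just a thin wrapper that remembers the last layout it bound.
class InputLayoutCache
{
public:
    explicit InputLayoutCache(ID3D11DeviceContext* dc) : m_dc(dc) {}

    void SetInputLayout(ID3D11InputLayout* layout)
    {
        if (layout == m_current)
            return;                       // already bound: skip the API call
        m_dc->IASetInputLayout(layout);
        m_current = layout;
    }

private:
    ID3D11DeviceContext* m_dc = nullptr;
    ID3D11InputLayout*   m_current = nullptr;
};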
As far as I know, it should detect that nothing has changed and ignore the call. But it can easily be tested: just call this method 10,000 times per frame and see how badly the FPS drops :)
for (int i = 0; i < Number_Of_queries; i++)
{
    glBeginQueryARB(GL_SAMPLES_PASSED_ARB, queries[i]);
    RenderBoundingBox(i);  // "Box[i]" in the original pseudocode: draw the bounding box for object i
    glEndQueryARB(GL_SAMPLES_PASSED_ARB);
}
I'm curious about the method suggested in GPU Gems 1 for occlusion culling, where a certain number of queries are performed. Using the method described you can't test individual boxes against each other, so are you supposed to do the following?
Test Box A -> Render Box A
Test Box B -> Render Box B
Test Box C -> Render Box C
and so on...
I'm not sure if I understand you correctly, but isn't this one of the drawbacks of the naive implementation: first render all boxes (without writing to the depth buffer) and then use the query results to check every object? Your suggestion to use the query result of a single box immediately is an even more naive approach, as it stalls the pipeline. If you read that chapter (assuming you are referring to chapter 29) further, they present a simple technique that overcomes the disadvantages of both naive approaches: just render everything normally and use the query results from the previous frame.
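Very roughly, a hedged sketch of that frame-delayed pattern (queries[] are assumed to have been created with glGenQueries; RenderBoundingBox and RenderObject are placeholders):
// Sketch only: use last frame's query result to decide whether to draw the object,
// and issue this frame's query on the bounding box for use next frame.
for (int i = 0; i < numObjects; ++i)
{
    GLuint available = 0, samples = 1;   // default to "visible" if the result isn't ready
    glGetQueryObjectuiv(queries[i], GL_QUERY_RESULT_AVAILABLE, &available);
    if (available)
        glGetQueryObjectuiv(queries[i], GL_QUERY_RESULT, &samples);

    if (samples > 0)
        RenderObject(i);                 // object was (probably) visible last frame

    // Issue the occlusion query for next frame without touching color/depth output.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glBeginQuery(GL_SAMPLES_PASSED, queries[i]);
    RenderBoundingBox(i);
    glEndQuery(GL_SAMPLES_PASSED);
    glDepthMask(GL_TRUE);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
}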
I think (it would have been good to link the GPU Gems article...) you are confused about the somewhat asynchronous nature of queries, as described in extensions like this one:
http://developer.download.nvidia.com/opengl/specs/GL_NV_conditional_render.txt
If I recall correctly, there are also other extensions for checking the availability of a result without blocking.
As Christian Rau points out, doing just "query, wait for result, do stuff based on result" might stall and might not yield any gain because of that, depending on how much work is in "do stuff". In fact, doing the query and waiting for it to make the round trip just to save a single draw call is most likely not going to help at all.