I am trying to get picking working in DirectX 9 and I am having some trouble. It works fine when I am rendering my mesh in software, but I get errors when rendering with a shader.
I can be off the mesh and it is still detected as a hit (see the image for more detail).
I am stopping the animation controllers and updating the frame matrices, but still no joy with the picking.
http://tweetphoto.com/a7vtajzt
Any help much appreciated; this has been driving me nuts for two days now.
Regards,
Mark
Not to worry, I found out that I was missing a mesh clone call that I was doing in software mode but not in hardware mode.
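For reference, here is a minimal sketch of what a mesh clone followed by a D3DXIntersect picking test can look like; the names d3dDevice, originalMesh, rayOrigin and rayDirection are assumptions for illustration, not taken from my actual code.

    // Hypothetical sketch: clone the mesh so the picking path works against the
    // same vertex data/layout in hardware mode as it did in software mode.
    LPD3DXMESH clonedMesh = nullptr;
    HRESULT hr = originalMesh->CloneMeshFVF(
        D3DXMESH_MANAGED,            // options
        originalMesh->GetFVF(),      // keep the same vertex format
        d3dDevice,                   // IDirect3DDevice9* (assumed)
        &clonedMesh);

    if (SUCCEEDED(hr))
    {
        BOOL  hit       = FALSE;
        DWORD faceIndex = 0;
        float u = 0.0f, v = 0.0f, dist = 0.0f;

        // rayOrigin/rayDirection are assumed to already be in the mesh's model space.
        D3DXIntersect(clonedMesh, &rayOrigin, &rayDirection,
                      &hit, &faceIndex, &u, &v, &dist, nullptr, nullptr);

        clonedMesh->Release();
    }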
I'm currently working on an external plugin for Unity3D which uses NVAPI and 3D Vision. NVAPI has two API calls to turn active stereo on and off:
NvAPI_Stereo_Deactivate
NvAPI_Stereo_Activate
Whenever I try to toggle stereo on or off, it crashes at a random time with the following exception:
Unity Player [version: Unity 2017.1.0f3 (472613c02cf7)]
nvwgf2umx.dll caused an Access Violation (0xc0000005) in module nvwgf2umx.dll at 0033:6f9981d8.
The crash can happen on the third try or any later try. My current assumption is that it has something to do with some value accessed by the DLL. The problem is that since it's NVIDIA internal, I have no access to it.
I have already tried other simple measures such as turning VSync off and setting quality to maximum in Manage 3D Settings, but none of them help.
I did come across a similar issue on the NVIDIA developer forums, but it seems there is no answer to it. Any suggestions or help regarding this would be greatly appreciated.
Also, here is the link to the error log.
I have managed to fix the above issue in a roundabout way. Instead of using
NvAPI_Stereo_Deactivate
NvAPI_Stereo_Activate
functions to turn 3D Vision on and off, I pass the render texture to the mono eye via NvAPI_Stereo_SetActiveEye while the mono camera is active, and while stereo is active I pass the left and right render textures to the left eye and right eye respectively. Toggling seems to work properly this way. I have also noticed that calling NvAPI_Stereo_IsActivated in a loop causes the same access violation, so only use NvAPI_Stereo_SetActiveEye to set the eye and don't mess around with the other NVAPI native functions. One downside of this approach is that the 3D emitter stays on until the application exits (for my project this is acceptable). I hope this helps anyone who comes across this problem in the future. Do update the answer if anyone has a better solution; that would be nice.
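For anyone who wants to see the shape of this workaround in the native plugin, here is a minimal sketch; stereoHandle is assumed to have been created earlier (e.g. with NvAPI_Stereo_CreateHandleFromIUnknown on the D3D device), and the draw calls are placeholders rather than my actual code.

    #include "nvapi.h"

    // Assumed to exist already: a StereoHandle created from the D3D device.
    extern StereoHandle stereoHandle;

    // While stereo is "on": send each eye its own render texture.
    void RenderStereoFrame()
    {
        NvAPI_Stereo_SetActiveEye(stereoHandle, NVAPI_STEREO_EYE_LEFT);
        // ... draw the left-eye render texture ...

        NvAPI_Stereo_SetActiveEye(stereoHandle, NVAPI_STEREO_EYE_RIGHT);
        // ... draw the right-eye render texture ...
    }

    // While stereo is "off": keep 3D Vision active but feed the mono camera's
    // texture instead of calling NvAPI_Stereo_Deactivate.
    void RenderMonoFrame()
    {
        NvAPI_Stereo_SetActiveEye(stereoHandle, NVAPI_STEREO_EYE_MONO);
        // ... draw the mono render texture ...
    }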
I have been developing this game in C++ in Visual Studio using DirectX 12. I used the Debug build configuration during development and the graphics were smooth as butter.
When I was preparing to publish the game on the Windows Store so I could share it with friends as play testers, I switched to the Release build configuration. As soon as I did that, I started getting this flicker of the background color coming through my wall meshes.
Here is a short video that shows the flicker.
Here is a longer video that I made before switching to Release build configuration that shows there is no flicker.
I am new to DirectX 12; this project was my teacher. I studied Microsoft's Direct3D 12 Graphics documentation, and I studied the DirectX 12 templates in Visual Studio.
I felt quite pleased that I was able to learn DirectX 12 well enough to produce this game as well as I did. Then came the Release build and the flicker, and I am at a loss.
Is this likely to be a shader issue, a command queue issue, a texture issue, or something else?
DirectX 12 is an API designed for graphics experts, and it provides a great deal of application control compared to, say, Direct3D 11. The cost of that control is that you, the application developer, are responsible for getting everything right, making sure it works across a broad range of hardware, and robustly handling stress scenarios and error cases yourself.
There are numerous ways you can get 'blinking' effects in DirectX 12. A common one is failing to keep your graphics memory (constants, IBs, VBs, etc.) unchanged between the time you call Draw and the time the actual draw completes on the GPU, which often happens a few frames later. This synchronization is a key challenge of using the API properly. For an example solution, see GraphicsMemory in DirectX Tool Kit for DirectX 12.
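As a rough illustration of that synchronization, here is a minimal per-frame fence sketch in the spirit of the Visual Studio DirectX 12 templates; the members m_commandQueue, m_swapChain, m_fence, m_fenceEvent, m_fenceValues and m_frameIndex are placeholders for illustration, not taken from your project.

    // Called after Present: make sure the GPU is done with the next frame slot's
    // resources (constant buffers, upload heaps, ...) before the CPU reuses them.
    void MoveToNextFrame()
    {
        const UINT64 currentFenceValue = m_fenceValues[m_frameIndex];
        m_commandQueue->Signal(m_fence.Get(), currentFenceValue);

        // Advance to the next back-buffer slot.
        m_frameIndex = m_swapChain->GetCurrentBackBufferIndex();

        // If the GPU is still executing the commands recorded for this slot,
        // wait here instead of overwriting memory the GPU may still be reading.
        if (m_fence->GetCompletedValue() < m_fenceValues[m_frameIndex])
        {
            m_fence->SetEventOnCompletion(m_fenceValues[m_frameIndex], m_fenceEvent);
            WaitForSingleObjectEx(m_fenceEvent, INFINITE, FALSE);
        }

        m_fenceValues[m_frameIndex] = currentFenceValue + 1;
    }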
If you are new to DirectX development, I strongly advise starting with DirectX 11. It's basically the same functionality, but takes care of buffer renaming, resource barriers, fences, VRAM overcommit, etc.
I am having a lot of trouble following Frank Luna's DirectX 11 3D programming book, and I am currently up to chapter 7, section 2. I have imported the skull model file and begun to render it. The strange thing is that when I render it, the back faces appear to be drawn over the front-facing ones; I am fairly sure that is what is happening. But I am putting forth this question for help and guidance on where I may be going wrong. I will edit this post to include my code if that is required to figure out where I am going wrong. Thanks a lot! (Photos attached.)
Photo - Facing the Skull, Slightly to the Left
Photo - Above the Skull, Facing Downwards
EDIT: I have set a breakpoint in my code after the first draw call loop, and at that point no faces show through the front ones, so the issue is not present in that frame; but when I continue to the next frame, that is when the problems start.
These kinds of problems are sometimes related to the setup of the CullMode in the D3D11_RASTERIZER_DESC. Try changing from D3D11_CULL_BACK to D3D11_CULL_FRONT or vice versa. Also have a look at the FrontCounterClockwise option of the D3D11_RASTERIZER_DESC.
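For example, a minimal rasterizer-state setup could look like the following; device and context stand for your usual ID3D11Device and ID3D11DeviceContext pointers.

    D3D11_RASTERIZER_DESC rd = {};
    rd.FillMode = D3D11_FILL_SOLID;
    rd.CullMode = D3D11_CULL_BACK;       // try D3D11_CULL_FRONT if the faces look inverted
    rd.FrontCounterClockwise = FALSE;    // FALSE = clockwise-wound triangles are front-facing (the D3D default)
    rd.DepthClipEnable = TRUE;

    ID3D11RasterizerState* rasterState = nullptr;
    HRESULT hr = device->CreateRasterizerState(&rd, &rasterState);
    if (SUCCEEDED(hr))
    {
        context->RSSetState(rasterState);
    }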
I figured out what had been causing my grief. I have been using the DirectX Tool Kit's SpriteFont and SpriteBatch classes with DrawString to display an FPS counter at the top-left corner of the screen, and that must have been changing the device state used by my ID3D11DeviceContext::DrawIndexed calls. Thank you for all your input and brainstorming!
I'm having a really annoying problem with my application that is a school project. I'm using deferred rendering and I'm trying to add positions from the light's pov as a new g-buffer texture, and the depth buffer texture as a shader resource in the light pass. I handle all of the g-buffer textures in the exact same way.
My problem is that these new shader resources are nowhere to be found on the GPU!
I'm using RenderDoc to debug my application, and there I can see everything being written to these new resources just fine, and the call to bind them as shader resources looks good as well, but I still only have the 4 resources in the light pass that I had before.
My code is an absolute mess, and there's a lot(!) of it. So if you want to see something specific to be able to help me, I can post it.
I'd be really happy if I just got some tips as to how you go about debugging this kind of problem, and even happier if someone knows what the problem is.
Thanks in advance!
There are two essential first steps for debugging DirectX applications:
Make sure that you check, at runtime, the return value of every function that returns an HRESULT. If it were safe to ignore the return value, the function would return void. When checking the HRESULT, don't compare with == S_OK; use the FAILED or SUCCEEDED macro instead. A nice option for 'fast fail' errors is to use C++ exception handling via a ThrowIfFailed helper.
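A minimal ThrowIfFailed along those lines might look like this (a sketch, not the exact helper from any particular sample):

    #include <stdexcept>
    #include <winerror.h>   // HRESULT, FAILED

    inline void ThrowIfFailed(HRESULT hr)
    {
        if (FAILED(hr))
        {
            // A fuller version would wrap hr in a custom exception type so the
            // failing code can be logged or inspected in the debugger.
            throw std::runtime_error("Direct3D call returned a failing HRESULT");
        }
    }

    // Usage:
    // ThrowIfFailed(device->CreateShaderResourceView(tex.Get(), &srvDesc, srv.GetAddressOf()));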
Enable the DirectX Debug Device and look for errors, warnings, and other messages in the debug output window. For details on enabling it, see Anatomy of Direct3D 11 Create Device and Direct3D SDK Debug Layer Tricks.
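A sketch of enabling the debug layer at device creation, along the lines of those articles (the rest of the setup is omitted, and your engine's initialization will differ):

    #include <d3d11.h>
    #include <wrl/client.h>

    Microsoft::WRL::ComPtr<ID3D11Device>        device;
    Microsoft::WRL::ComPtr<ID3D11DeviceContext> context;

    UINT creationFlags = 0;
    #ifdef _DEBUG
    creationFlags |= D3D11_CREATE_DEVICE_DEBUG;   // sends errors/warnings to the debug output window
    #endif

    HRESULT hr = D3D11CreateDevice(
        nullptr,                     // default adapter
        D3D_DRIVER_TYPE_HARDWARE,
        nullptr,
        creationFlags,
        nullptr, 0,                  // default feature level list
        D3D11_SDK_VERSION,
        device.GetAddressOf(),
        nullptr,
        context.GetAddressOf());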
This may not solve your problem, but can really help avoid some silly mistakes that cost you a lot of time to track down.
If you are new to DirectX development, take a look at the DirectX Tool Kit tutorials
For debugging DirectX 11 applications, the Visual Studio Graphics Diagnostics feature in VS 2012 or later is a good option. You likely meet the license requirements to use the VS 2013 or VS 2015 Community edition, which includes all features, including Graphics Diagnostics. See MSDN.
I have an OpenGL test application that is producing incredibly unusual results. When I start up the application it may or may not feature a severe graphical bug.
It might produce an image like this:
http://i.imgur.com/JwPoDrh.jpg
Or like this:
http://i.imgur.com/QEYwhBY.jpg
Or just the correct image, like this:
http://i.imgur.com/zUJbwCM.jpg
The scene consists of one spinning colored cube (made of 12 triangles) with a simple shader on it that colors the pixels based on the absolute value of their model space coordinates. The junk faces appear to spin with the cube as though they were attached to it and often junk triangles or quads flash on the screen briefly as though they were rendered in 2D.
The thing I find most unusual about this is that the behavior is highly inconsistent: starting the exact same application repeatedly, without me changing anything else on the system, produces different results, sometimes bugged and sometimes not, and the arrangement of the junk faces isn't consistent either.
I can't really post source code for the application as it is very lengthy and the actual OpenGL calls are spread out across many wrapper classes and such.
This is occurring under the following conditions:
Windows 10 64 bit OS (although I have observed very similar behavior under Windows 8.1 64 bit).
AMD FX-9590 CPU (Clocked at 4.7GHz on an ASUS Sabertooth 990FX).
AMD Radeon HD 7970 GPU (it is a couple of years old, and occasionally areas of the screen in 3D applications become scrambled, but nothing on the scale of what I'm experiencing here).
Using SDL (https://www.libsdl.org/) for window and context creation.
Using GLEW (http://glew.sourceforge.net/) for OpenGL.
Using OpenGL versions 1.0, 3.3 and 4.3 (I'm assuming SDL is indeed creating the versions I instructed it to; a sketch of how that can be checked follows this list).
AMD Catalyst driver version 15.7.1 (Driver Packaging Version listed as 15.20.1062.1004-150803a1-187674C, although again I have seen very similar behavior on much older drivers).
Catalyst Control Center lists my OpenGL version as 6.14.10.13399.
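To check that, here is a minimal sketch (not my actual initialization code) of requesting a core context with SDL2 and printing what the driver actually created:

    #include <cstdio>
    #include <GL/glew.h>
    #include <SDL.h>

    int main(int argc, char* argv[])
    {
        SDL_Init(SDL_INIT_VIDEO);

        // Request a specific context version/profile before creating the window.
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

        SDL_Window* window = SDL_CreateWindow("gl version check",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
            1280, 720, SDL_WINDOW_OPENGL);
        SDL_GLContext context = SDL_GL_CreateContext(window);

        glewExperimental = GL_TRUE;
        glewInit();

        // Print what the driver actually gave us.
        printf("GL_VERSION:  %s\n", glGetString(GL_VERSION));
        printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));

        SDL_GL_DeleteContext(context);
        SDL_DestroyWindow(window);
        SDL_Quit();
        return 0;
    }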
This looks like a broken graphics card to me, most likely some problem with the memory (either the memory itself or a soldering problem). Artifacts like the ones you see can happen if, for some reason, setting the address for a memory operation does not fully settle, or does not happen at all, before the read starts; that can happen due to a bad connection between the GPU and the memory (failed solder joints) or because the memory itself has failed.
Solution: buy a new graphics card. You may try resoldering it using a reflow process; there are tutorials on how to do this yourself, but a proper reflow oven gives better results.