I'm following the DirectX 11 Series 2 tutorial on rastertek.com. I'm currently on Tutorial 3 (Initializing DirectX), and for some reason the D3D11CreateDeviceAndSwapChain call keeps failing when I run the program. I have followed the steps to link the Windows 10 SDK libraries and includes to the project from here (rastertek.com/dx12tut01.html), and my GPU is an Nvidia 780 Ti. I've also tried his pre-compiled .exe and that works fine. What's the problem? Let me know if screenshots are needed!
//request BGRA support; add the debug layer only in debug builds
UINT flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
#if defined( DEBUG ) || defined( _DEBUG )
flags |= D3D11_CREATE_DEVICE_DEBUG;
#endif
//create the swap chain, d3d device, and d3d device context
result = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, flags, &featureLevel, 1, D3D11_SDK_VERSION, &swapChainDesc, &m_swapChain, &m_device, NULL, &m_deviceContext);
if (FAILED(result)) {
return false;
}
Well, in my case I have a laptop with an Intel HD 4400 and a GeForce 840M (Windows 10 + Visual Studio 2015), and I had set featureLevel to 11.1 manually. It turned out that the Intel GPU supports that feature level, but the GeForce supports at most 11.0. Anyway, it's still not clear what your problem is or what the error message says, so you can try a lower feature level and pass 0 as the flags.
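A minimal sketch of what I mean, reusing the swap chain description and member variables from your snippet (treat it as a sketch, not a drop-in fix): request an array of feature levels and let the runtime pick the highest one the GPU supports, with the flags set to 0 for now to rule out the debug layer.
//request a list of feature levels instead of hard-coding 11.1
const D3D_FEATURE_LEVEL requestedLevels[] = {
    D3D_FEATURE_LEVEL_11_1,
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0,
};
D3D_FEATURE_LEVEL obtainedLevel;
//flags = 0 for now, to rule out the debug layer as the cause
HRESULT result = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
    requestedLevels, _countof(requestedLevels), D3D11_SDK_VERSION,
    &swapChainDesc, &m_swapChain, &m_device, &obtainedLevel, &m_deviceContext);
//runtimes that predate 11.1 reject an array containing 11_1 with E_INVALIDARG, so retry without it
if (result == E_INVALIDARG)
    result = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
        &requestedLevels[1], _countof(requestedLevels) - 1, D3D11_SDK_VERSION,
        &swapChainDesc, &m_swapChain, &m_device, &obtainedLevel, &m_deviceContext);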
I'm trying to initialize Direct3D 11 in C++. On machines that have Visual Studio installed (all of them running Windows 10), it runs fine.
On other computers (without Visual Studio installed, Windows 10 and 7) it returns E_INVALIDARG.
The returned P_FeatureLevelsSupported value says 0 on those computers. On mine it says D3D_FEATURE_LEVEL_11_1.
So I guess it has something to do with the DirectX installation, or maybe the SDK is missing (but wouldn't that be strange? :D).
By running dxdiag, I know that those machines support DirectX 11.0.
Is there something I need to install?
The software has to run on our clients' PCs.
The code that causes the error:
const D3D_FEATURE_LEVEL lvl[] = { D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0,
D3D_FEATURE_LEVEL_9_3, D3D_FEATURE_LEVEL_9_2, D3D_FEATURE_LEVEL_9_1,
};
D3D_FEATURE_LEVEL P_FeatureLevelsSupported;
//see microsoft documentation, we use 11_1 or 11_0 if 11_1 is not supported by the client machine
//https://learn.microsoft.com/en-us/windows/desktop/direct3d11/overviews-direct3d-11-devices-initialize
result = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, D3D11_CREATE_DEVICE_DEBUG, lvl, _countof(lvl), D3D11_SDK_VERSION, &swapChainDesc, &swapChain, &device, &P_FeatureLevelsSupported, &deviceContext);
if (result == E_INVALIDARG) //retry without FEATURE_LEVEL_11_1, which older runtimes reject
    result = D3D11CreateDeviceAndSwapChain(NULL,
        D3D_DRIVER_TYPE_HARDWARE,
        NULL,
        D3D11_CREATE_DEVICE_DEBUG,
        &lvl[1],
        _countof(lvl) - 1,
        D3D11_SDK_VERSION,
        &swapChainDesc,
        &swapChain,
        &device,
        &P_FeatureLevelsSupported,
        &deviceContext);
Thanks in advance :)
You are asking for a debug device by passing D3D11_CREATE_DEVICE_DEBUG. For that to succeed you must have D3D11*SDKLayers.dll installed, which you probably have on your dev machines but not on the client machines. See the Microsoft documentation for details, which includes:
Debug Layer
The debug layer provides extensive additional parameter and consistency validation (such as validating shader linkage and resource binding, validating parameter consistency, and reporting error descriptions).
To create a device that supports the debug layer, you must install the DirectX SDK (to get D3D11SDKLayers.dll), and then specify the D3D11_CREATE_DEVICE_DEBUG flag when calling the D3D11CreateDevice function or the D3D11CreateDeviceAndSwapChain function. If you run your application with the debug layer enabled, the application will be substantially slower. But, to ensure that your application is clean of errors and warnings before you ship it, use the debug layer. For more info, see Using the debug layer to debug apps.
Note: For Windows 8, to create a device that supports the debug layer, install the Windows Software Development Kit (SDK) for Windows 8 to get D3D11_1SDKLayers.dll.
If you don't need a debug device on a customer machine, just remove that flag.
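A rough sketch of that, using the variables from your snippet: request the debug layer only in debug builds, and if device creation still fails because the SDK layers are missing on the target machine, retry without the flag.
UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG; //only request the debug layer in debug builds
#endif
result = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, flags,
    lvl, _countof(lvl), D3D11_SDK_VERSION,
    &swapChainDesc, &swapChain, &device, &P_FeatureLevelsSupported, &deviceContext);
//if the debug layer DLL is not installed the call fails (newer runtimes report
//DXGI_ERROR_SDK_COMPONENT_MISSING), so drop the flag and try again
if (FAILED(result) && (flags & D3D11_CREATE_DEVICE_DEBUG))
{
    flags &= ~D3D11_CREATE_DEVICE_DEBUG;
    result = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, flags,
        lvl, _countof(lvl), D3D11_SDK_VERSION,
        &swapChainDesc, &swapChain, &device, &P_FeatureLevelsSupported, &deviceContext);
}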
Whenever I call glBufferStorage(...), the subsequent glBindBuffer(...) always crashes. For example:
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 1);
glBufferStorage(GL_SHADER_STORAGE_BUFFER, sizeof(unsigned int) * 100, NULL, GL_DYNAMIC_STORAGE_BIT | GL_MAP_WRITE_BIT | GL_MAP_READ_BIT );
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 2); // <- CRASH HERE!
If I remove the glBufferStorage(...) call, the subsequent glBindBuffer calls don't crash!
This code was working normally on my desktop with a GTX 650 Ti and a Phenom II X6, with OpenGL installed via NuGet in VS2015 (the nupengl.core package). Then I pasted the entire project folder to my notebook (GeForce 740M / i7), removed the OpenGL NuGet package and reinstalled it.
How can I investigate what is wrong? Is this a logic error or maybe a GPU driver error?
I figured it out.
As stated, I moved my project from my desktop to my laptop. The laptop has newer OpenGL support than the desktop, but it was running the program on the integrated graphics (Intel HD Graphics) instead of the dedicated GeForce 740M GPU.
So my OpenGL program was executing on a device that does not support some newer OpenGL features, like the GL_SHADER_STORAGE_BUFFER target, and that's why it crashed.
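If anyone else hits this, a quick sanity check along these lines would have shown the problem right away. This is only a sketch: it assumes GLEW (part of the nupengl.core package) and a current context, and NvOptimusEnablement is NVIDIA's documented export for asking Optimus laptops to prefer the dedicated GPU.
#include <cstdio>
#include <GL/glew.h>
//ask the NVIDIA Optimus driver to run this executable on the dedicated GPU
extern "C" { __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001; }
void checkBufferStorageSupport()
{
    //shows which GPU the context actually runs on (Intel HD Graphics vs GeForce 740M)
    printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
    //glBufferStorage needs OpenGL 4.4 or the ARB_buffer_storage extension
    if (!GLEW_VERSION_4_4 && !GLEW_ARB_buffer_storage)
        printf("glBufferStorage is not supported by this context\n");
}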
I'm trying to create a D3D12 device as specified in
https://msdn.microsoft.com/en-us/library/dn899120%28v=vs.85%29.aspx
I have an NVIDIA GTX 670, Windows 10 preview build 9926, and the latest Windows SDK (build 10041).
I also have the latest NVIDIA beta driver, and the GeForce system information reports a DirectX 12 runtime.
Calling
ID3D12Device* device;
HRESULT hr = D3D12CreateDevice(NULL, D3D_DRIVER_TYPE::D3D_DRIVER_TYPE_HARDWARE,
D3D12_CREATE_DEVICE_FLAG::D3D12_CREATE_DEVICE_NONE,
D3D_FEATURE_LEVEL::D3D_FEATURE_LEVEL_11_0, D3D12_SDK_VERSION, __uuidof(ID3D12Device), (void**)&device);
returns an HRESULT with the E_NOINTERFACE error code.
Strangely, calling:
ID3D12Object* device;
HRESULT hr = D3D12CreateDevice(NULL, D3D_DRIVER_TYPE::D3D_DRIVER_TYPE_HARDWARE,
D3D12_CREATE_DEVICE_FLAG::D3D12_CREATE_DEVICE_NONE,
D3D_FEATURE_LEVEL::D3D_FEATURE_LEVEL_11_0, D3D12_SDK_VERSION, __uuidof(ID3D12Object), (void**)&device);
returns a valid object, but I'm not able to use QueryInterface to get a valid device object from it afterwards.
Please note I already tried using LoadLibrary/GetProcAddress instead of the d3d12 headers, which returns the same error code.
You should always use matching OS and SDK builds, because APIs can change between preview builds. Since you are using the SDK for build 10041, you should also update Windows 10 to build 10041: open the Settings app, check for a new Windows 10 build, and install it.
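For anyone reading this later: the preview-era signature with D3D12_CREATE_DEVICE_FLAG did not survive into the shipping API. On the released Windows 10 SDK the call takes just an adapter, a minimum feature level and the requested interface, roughly like this (link against d3d12.lib):
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;
ComPtr<ID3D12Device> device;
//nullptr selects the default adapter; 11_0 is the minimum feature level the device must support
HRESULT hr = D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));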
I am trying to run this OpenCL example on Ubuntu 10.04.
My graphics card is an NVIDIA GeForce GTX 480. I have installed the latest NVIDIA driver and CUDA toolkit manually.
The program compiles without any errors, so linking against libOpenCL works. The application also runs, but the output is very strange (mostly zeros and some random numbers). Debugging shows that
clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
returns -1001 (CL_PLATFORM_NOT_FOUND_KHR, i.e. no usable ICD was found).
Google and Stack Overflow told me that the reason may be a missing nvidia.icd in /etc/OpenCL/vendors. It was not there, so I added /etc/OpenCL/vendors/nvidia.icd with the following line:
libnvidia-opencl.so.1
I have also tried some variants (absolute paths, etc.), but nothing solved the problem. Right now I have no idea what else I can try. Any suggestions?
EDIT: I have installed the Intel OpenCL SDK and copied its .icd into /etc/OpenCL/vendors, and the application works fine for
clGetDeviceIDs( platform_id, CL_DEVICE_TYPE_DEFAULT, 1,
&device_id, &ret_num_devices);
For
clGetDeviceIDs( platform_id, CL_DEVICE_TYPE_GPU, 1,
&device_id, &ret_num_devices);
I get the error -1 (CL_DEVICE_NOT_FOUND).
EDIT:
I have noticed one thing in the console when executing the application. After executing the line
cl_int ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
the application gives me the output
modprobe: ERROR: ../libkmod/libkmod-module.c:809 kmod_module_insert_module() could not find module by name='nvidia_331_uvm'
modprobe: ERROR: could not insert 'nvidia_331_uvm': Function not implemented
There seems to be a conflict with an older driver version, since I am using the 340 series:
cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module 340.32 Tue Aug 5 20:58:26 PDT 2014
Maybe I should try removing Ubuntu's own NVIDIA drivers once more and reinstalling the latest one manually again?
EDIT:
The old driver was the problem. Somehow it wasn't removed properly, so I removed it again with
apt-get remove nvidia-331 nvidia-opencl-icd-331 nvidia-libopencl1-331
and now it works. I hope this helps someone with similar problems.
The above-mentioned problems occurred due to a driver conflict. If you have a similar problem, read the edits above for the solution.
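To double-check an ICD setup like this, a small standalone enumeration program is handy. Here is a minimal sketch using only core OpenCL 1.x calls (build with something like gcc test.c -lOpenCL); it just prints every platform and how many GPU devices each one exposes:
#include <stdio.h>
#include <CL/cl.h>
int main(void)
{
    cl_uint num_platforms = 0;
    cl_int err = clGetPlatformIDs(0, NULL, &num_platforms);
    if (err != CL_SUCCESS || num_platforms == 0) {
        printf("clGetPlatformIDs failed: %d\n", err); //-1001 means no usable ICD was found
        return 1;
    }
    if (num_platforms > 8) num_platforms = 8;
    cl_platform_id platforms[8];
    clGetPlatformIDs(num_platforms, platforms, NULL);
    for (cl_uint i = 0; i < num_platforms; ++i) {
        char name[256] = {0};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        cl_uint num_gpus = 0;
        err = clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU, 0, NULL, &num_gpus);
        //err is -1 (CL_DEVICE_NOT_FOUND) when the platform has no GPU devices
        printf("platform %u: %s, GPU devices: %u (err %d)\n", i, name, num_gpus, err);
    }
    return 0;
}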
I am trying to have the .hlsl files in my C++ DirectX 11 project compiled offline, at build time. Visual Studio compiles the .hlsl files into .cso files. I am now having trouble reading SM 5.0 .cso files into my program to create a shader.
Here is the code I have to create a pixel shader from a precompiled .cso file:
ID3D11PixelShader* PS;
ID3DBlob* PS_Buffer;
//read the compiled bytecode from disk, then create and bind the pixel shader
D3DReadFileToBlob(L"PixelShader.cso", &PS_Buffer);
d3d11Device->CreatePixelShader(PS_Buffer->GetBufferPointer(), PS_Buffer->GetBufferSize(), NULL, &PS);
d3d11DevCon->PSSetShader(PS, 0, 0);
The pixel shader is simply supposed to return blue pixels. It has been compiled to Shader Model 5.0 by changing its compile settings in Visual Studio 11. However, upon running the program, the triangle I am trying to render doesn't show up, despite there being no build or runtime error messages.
When compiling at runtime using:
D3DCompileFromFile(L"PixelShader.hlsl", 0, 0, "main", "ps_5_0", 0, 0, &PS_Buffer, 0);
instead of D3DReadFileToBlob(...), there is still no triangle rendered.
However, when using Shader Model 4.0, whether compiled at build time or at runtime, a blue triangle does show up, which signifies that it works.
Here is the code in my PixelShader.hlsl file. By any chance could it be too outdated for Shader Model 5.0 and be the root of my problem?
float4 main() : SV_TARGET
{
return float4(0.0f, 0.0f, 1.0f, 1.0f);
}
I am using the Windows SDK libraries and not the June 2010 DirectX SDK.
My graphics card is an NVIDIA GeForce 8800 GTS.
Shader Model 5 is supported by the GeForce 400 series and newer for NVIDIA cards, and the Radeon 5000 series and newer for AMD cards. Your GeForce 8800 only supports Shader Model 4, which is why the shader doesn't run. To get around this you can use the reference device (by using D3D_DRIVER_TYPE_REFERENCE), but this will be quite slow. Alternatively, you can use the WARP device if you have the Windows 8 preview (which adds WARP support for D3D_FEATURE_LEVEL_11_0); this will perform better than the reference device, but still nowhere near running on the hardware device.
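A sketch of that fallback, reusing the question's variable names (swapChainDesc and SwapChain are assumed to exist alongside d3d11Device and d3d11DevCon): ask for a WARP device at feature level 11_0 first, and if that fails, keep the hardware device and compile the shader with a profile it can actually run.
//to run SM 5.0 on hardware that only reaches feature level 10_0, ask for WARP
//(or D3D_DRIVER_TYPE_REFERENCE) explicitly instead of D3D_DRIVER_TYPE_HARDWARE
const D3D_FEATURE_LEVEL wanted[] = { D3D_FEATURE_LEVEL_11_0 };
D3D_FEATURE_LEVEL obtained;
HRESULT hr = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_WARP, NULL, 0,
    wanted, _countof(wanted), D3D11_SDK_VERSION, &swapChainDesc,
    &SwapChain, &d3d11Device, &obtained, &d3d11DevCon);
//without Windows 8, WARP tops out at 10_1, so fall back to the hardware device and
//compile the shader with ps_4_0, which an 8800 GTS (feature level 10_0) can run
if (FAILED(hr))
{
    hr = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
        NULL, 0, D3D11_SDK_VERSION, &swapChainDesc,
        &SwapChain, &d3d11Device, &obtained, &d3d11DevCon);
    if (SUCCEEDED(hr) && obtained < D3D_FEATURE_LEVEL_11_0)
        D3DCompileFromFile(L"PixelShader.hlsl", 0, 0, "main", "ps_4_0", 0, 0, &PS_Buffer, 0);
}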