DirectX 12 D3DCompileFromFile doesn't work - c++

I'm learning DirectX 12, and I'm getting an error from the D3DCompileFromFile() function.
void Shader::CompileShader(const std::wstring& path, const std::string& name, const std::string& version, ID3DBlob* shaderBlob) {
#if defined(_DEBUG)
    // Enable better shader debugging with the graphics debugging tools.
    UINT compileFlags = D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION;
#else
    UINT compileFlags = 0;
#endif
    ThrowIfFailed(D3DCompileFromFile(path.c_str(), nullptr, D3D_COMPILE_STANDARD_FILE_INCLUDE, name.c_str(), version.c_str(), compileFlags, 0, &shaderBlob, &mErrBlob));
}
Inspecting mErrBlob in the debugger showed "Information not available, no symbols loaded for D3DCompiler_47.dll".
So I enabled the Microsoft Symbol Server option under Debug > Options > Debugging > Symbols. Then I got another message for mErrBlob: "No type information available in symbol file for D3DCompiler_47.dll".
HRESULT value is E_FAIL.
I don't know how to fix this error anymore.
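For what it's worth, both quoted messages come from the debugger's symbol loader, not from the HLSL compiler, so they don't describe the compile failure itself. When D3DCompileFromFile returns E_FAIL, the error blob usually holds the compiler diagnostics as an ANSI string; a minimal sketch of dumping it before throwing (assuming mErrBlob is the ID3DBlob* member used in the snippet above):

HRESULT hr = D3DCompileFromFile(path.c_str(), nullptr, D3D_COMPILE_STANDARD_FILE_INCLUDE,
                                name.c_str(), version.c_str(), compileFlags, 0,
                                &shaderBlob, &mErrBlob);
if (FAILED(hr) && mErrBlob != nullptr)
{
    // The blob contents are the HLSL compiler's own error messages.
    OutputDebugStringA(static_cast<const char*>(mErrBlob->GetBufferPointer()));
}
ThrowIfFailed(hr);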

Related

netsh.exe error on initializing NDI with custom console

I have an application that works with NDI. But when I initialize it, an error occurs: a window with the title "netsh.exe - Application Error" and the description "The application was unable to start correctly (0xc0000142). Click OK to close the application". If I skip this error, all the required NDI functionality works fine, but the error shouldn't occur anyway. I also found that the error is caused by my "custom console" usage - my application is a GUI application, and in some cases I want to see a console window next to it.
A very simplified example that still contains the problem:
#include <iostream>
#include <Windows.h>
#include "ndi/Processing.NDI.Lib.h"

int main()
{
    FreeConsole();
    AllocConsole();
    SetConsoleActiveScreenBuffer(CreateConsoleScreenBuffer(GENERIC_WRITE | GENERIC_READ, 0, NULL, CONSOLE_TEXTMODE_BUFFER, NULL));

    HMODULE m_NDIHandler = LoadLibraryA("Processing.NDI.Lib.x64.dll");
    const NDIlib_v5* (*NDIlib_v5_load)(void) = NULL;
    if (m_NDIHandler)
    {
        *((FARPROC*)&NDIlib_v5_load) = GetProcAddress(m_NDIHandler, "NDIlib_v5_load");
    }
    const NDIlib_v5* m_NDILib = NDIlib_v5_load();
    m_NDILib->initialize();
    return 0;
}
(Later I'm using WriteConsole for some purposes.) Could you please tell me what is wrong with my code? The error occurs on m_NDILib->initialize();
I updated my NDI to the latest version (5.1.3.0) and the problem was fixed.
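Independent of the version fix: the loader snippet calls NDIlib_v5_load() unconditionally, so it would also crash if LoadLibraryA or GetProcAddress failed. A defensive sketch of the same loading sequence (assuming the NDI 5 headers, where initialize() returns a success flag):

HMODULE ndi_module = LoadLibraryA("Processing.NDI.Lib.x64.dll");
if (!ndi_module)
{
    std::cerr << "failed to load the NDI runtime, error " << GetLastError() << "\n";
    return 1;
}
auto load_fn = reinterpret_cast<const NDIlib_v5* (*)(void)>(
    GetProcAddress(ndi_module, "NDIlib_v5_load"));
if (!load_fn)
{
    std::cerr << "NDIlib_v5_load not found in the DLL\n";
    return 1;
}
const NDIlib_v5* ndi = load_fn();
if (!ndi || !ndi->initialize())
{
    std::cerr << "NDI initialization failed\n";
    return 1;
}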

Reading large files on Linux vs Windows (mmap vs CreateFileMapping/MapViewOfFile)

I have to read some data line by line from a large file (more than 7GB), it contains a list of vertex coordinates and face to vertex connectivity information to form a mesh. I am also learning how to use open, mmap on Linux and CreateFileA, CreateFileMapping, MapViewOfFile on Windows. Both Linux and Windows versions are 64bit compiled.
When I am on Linux (using Docker), with g++-10 test.cpp -O3 -std=c++17 I get around 6s.
When I am on Windows (my actual PC), with MSVC (version 19.29.30037 x64) cl test.cpp /EHsc /O3 /std:c++17 I get 13s, and with clang++-11 (from the Visual Studio Build Tools) I get 11s.
Both systems (same PC, but one is using Docker) use the exact same code, except for generating the const char* that represents the memory array and the uint64_t size that represents the memory size.
This is the way I switch platforms:
// includes for using a specific platform API
#ifdef _WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
// using windows handle void*
#define handle_type HANDLE
#else
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
// using file descriptors
#define handle_type int
#endif
Specifically the code for getting the memory in an array of char-s is:
using uint_t = std::size_t;

// open the file -----------------------------------------------------------------------------
handle_type open(const std::string& filename) {
#ifdef _WIN32
    // see windows file mapping api for parameter explanation
    return ::CreateFileA(filename.c_str(), GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); // private access
#else
    return ::open(filename.c_str(), O_RDONLY);
#endif
}

// get the memory size to later have a bound for reading -------------------------------------
uint_t memory_size(handle_type fid) {
#ifdef _WIN32
    LARGE_INTEGER size{};
    if (!::GetFileSizeEx(fid, &size)) {
        std::cerr << "file not found\n";
        return size.QuadPart;
    }
    return size.QuadPart;
#else
    struct stat sb;
    // get the file stats and check if not zero size
    if (fstat(fid, &sb)) {
        std::cerr << "file not found\n";
        return decltype(sb.st_size){};
    }
    return sb.st_size;
#endif
}

// get the actual char array to access memory ------------------------------------------------
const char* memory_map(handle_type fid, uint_t memory_size) {
#ifdef _WIN32
    HANDLE mapper = ::CreateFileMapping(fid, NULL, PAGE_READONLY, 0, 0, NULL);
    return reinterpret_cast<const char*>(::MapViewOfFile(mapper, FILE_MAP_READ, 0, 0, memory_size));
#else
    return reinterpret_cast<const char*>(::mmap(NULL, memory_size, PROT_READ, MAP_PRIVATE, fid, 0));
#endif
}
I am completely new to this sort of parsing and was wondering if I am doing something wrong in choosing the parameters in the Windows API (to mimic the behaviour of mmap), or if the difference in time is a matter of compilers/systems that I simply have to accept.
The actual time to open, get the memory size, and memory-map the file is negligible on both Linux and Windows; the rest of the code is identical, as it only operates on the const char* and the size_t info.
Thanks for taking the time to read. Any tip is greatly appreciated and sorry if anything is unclear.
Maybe you should take a look at https://github.com/alitrack/mman-win32, which is an mmap implementation for Windows. That way you don't need to write different code for Windows.
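Another knob worth trying, since the file is parsed front to back, is telling the OS that access is sequential; by default Windows prefetches mapped views fairly conservatively. A hedged sketch built on the helpers above (PrefetchVirtualMemory requires Windows 8 or later; whether these hints close the 6s-vs-13s gap on this workload is untested):

// wrap memory_map with a sequential-access hint on each platform
const char* memory_map_sequential(handle_type fid, uint_t memory_size) {
    const char* data = memory_map(fid, memory_size);
#ifdef _WIN32
    // Windows 8+: ask the memory manager to page the range in ahead of use.
    WIN32_MEMORY_RANGE_ENTRY range{const_cast<char*>(data), memory_size};
    ::PrefetchVirtualMemory(::GetCurrentProcess(), 1, &range, 0);
#else
    // Tell the kernel the mapping will be read sequentially (read-ahead hint).
    ::madvise(const_cast<char*>(data), memory_size, MADV_SEQUENTIAL);
#endif
    return data;
}

On the Windows side, passing FILE_FLAG_SEQUENTIAL_SCAN instead of FILE_ATTRIBUTE_NORMAL in the CreateFileA call is the corresponding documented hint for the file cache.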

Exception: device is nullptr. read access violation in D3D11

Hi everyone.
I wrote a small 3D scene two years ago; it's only 1700 lines of source code (excluding .h files). Coming back to it on GitHub and running my app, I found a pretty interesting bug in debug mode.
The debugger throws an exception when calling CreateBuffer for the vertex buffer:
auto result = device->CreateBuffer(&vertex_buffer_desc, &vertex_data, &vertex_buffer);
if(FAILED(result))
    return false;
Basically, the debugger says the (D3D11) device is nullptr, which can't be the case, because everything works fine when running in non-debug mode. But when I define UINT create_device_flag = D3D11_CREATE_DEVICE_DEBUG; before creating the device, this exception is thrown: read access violation. device is nullptr. A few days later, I still cannot find out what is wrong, because the order in which the pointers are defined is correct.
Here's Main.cpp:
#include <StdAfx.h>
#include <Window.h>
#include <FPSCamera.h>
#include <DirectInput8.h>
#include <D3D11Renderer.h>
#include <Terrain.h>
#include <TerrainShader.h>

using namespace bm;

int __stdcall WinMain(HINSTANCE, HINSTANCE, char*, int)
{
    auto resource_directory_name = L"..\\..\\..\\Resource\\"s;
    auto terrain_name = L"terrain"s;
    auto dds_file_extension = L".dds"s;
    auto hlsl_file_extension = L".hlsl"s;
    std::wstring resources[] = {resource_directory_name + L"heightmap.bmp"s,
                                resource_directory_name + terrain_name + dds_file_extension,
                                resource_directory_name + terrain_name + L"_bump"s + dds_file_extension,
                                resource_directory_name + terrain_name + L"_vs"s + hlsl_file_extension,
                                resource_directory_name + terrain_name + L"_ps"s + hlsl_file_extension};

    constexpr auto ENABLE_FULLSCREEN = false;
    constexpr auto ENABLE_VSYNC = false;
    constexpr auto SCREEN_WIDTH = 1366;
    constexpr auto SCREEN_HEIGHT = 768;

    auto window = std::make_shared<bm::Window>(SCREEN_WIDTH, SCREEN_HEIGHT, ENABLE_FULLSCREEN);
    window->registerClass();
    window->create();

    auto d3d11_renderer = std::make_shared<bm::D3D11Renderer>(SCREEN_WIDTH, SCREEN_HEIGHT, ENABLE_FULLSCREEN, window->getHandle(), ENABLE_VSYNC);
    // Exception is thrown in the following pointer, but the d3d11 device should already be initialized.
    auto terrain = std::make_shared<bm::Terrain>(d3d11_renderer->getDevice(), resources[0].c_str(), resources[1].c_str(), resources[2].c_str());
    auto terrain_shader = std::make_shared<bm::TerrainShader>(d3d11_renderer->getDevice(), resources[3].c_str(), resources[4].c_str());

    auto fps_camera = std::make_shared<bm::FPSCamera>(static_cast<float>(SCREEN_WIDTH), static_cast<float>(SCREEN_HEIGHT));
    fps_camera->setPosition(500.f, 75.f, 400.f);
    fps_camera->setRotation(20.f, 30.f, 0.f); // in degrees.

    auto direct_input_8 = std::make_shared<bm::DirectInput8>(window->getHandle());
    constexpr float CLEAR_COLOR[] = {0.84f, 0.84f, 1.f, 1.f};

    while(window->update())
    {
        direct_input_8->update(fps_camera->getMoveLeftRight(), fps_camera->getMoveBackForward(), fps_camera->getYaw(), fps_camera->getPitch());
        fps_camera->update();
        d3d11_renderer->clearScreen(CLEAR_COLOR);
        terrain->render(d3d11_renderer->getDeviceContext());
        terrain_shader->render(d3d11_renderer->getDeviceContext(),
                               terrain->getIndexCount(),
                               fps_camera->getWorld(),
                               fps_camera->getView(),
                               fps_camera->getProjection(),
                               {0.82f, 0.82f, 0.82f, 1.0f},
                               {-0.0f, -1.0f, 0.0f},
                               terrain->getColorTexture(),
                               terrain->getNormalMapTexture());
        d3d11_renderer->swapBuffers();
    }
    return 0;
}
P.S.
I know there's a six-year-old question on the site: CreateBuffer throwing an "Access violation reading location"
But it hardly explains anything, since I have no global variables or pointers. I'd like to correct my old mistake, so I'll be glad to provide more details if needed.
Sorry, but this is all abstraction code, so we can't see the actual place where you are calling D3D11CreateDevice.
That said, the symptom you describe sounds like you don't have the Debug Device SDK layer installed on your operating system. You are likely also failing to check for a FAILED HRESULT from D3D11CreateDevice.
DWORD createDeviceFlags = 0;
#ifdef _DEBUG
createDeviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif

ComPtr<ID3D11Device> device;
ComPtr<ID3D11DeviceContext> context;
D3D_FEATURE_LEVEL fl;
HRESULT hr = D3D11CreateDevice( nullptr, D3D_DRIVER_TYPE_HARDWARE,
    nullptr, createDeviceFlags, nullptr,
    0, D3D11_SDK_VERSION, &device, &fl, &context );
if (FAILED(hr))
    ...
On a system without the Debug Device SDK layer installed, this will fail in _DEBUG.
On Windows 8.x or Windows 10, installing the legacy DirectX SDK does not install any debug runtime.
For Windows 8.x you can get the Direct3D 11 Debug Runtime by installing the Windows 8.x SDK or Windows 10 SDK.
For Windows 10, you get the Direct3D Debug Runtime by installing the Windows optional feature named Graphics Tools. This is version specific, so make sure the Graphics Tools feature you have enabled matches your Windows 10 release. See this blog post.
See Anatomy of Direct3D 11 Create Device
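A common defensive pattern (a sketch; how device creation is wrapped inside your D3D11Renderer may differ) is to retry without the debug flag when the debug layer is missing, and to stop on failure instead of using a null device later:

HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE,
    nullptr, createDeviceFlags, nullptr,
    0, D3D11_SDK_VERSION, &device, &fl, &context);
if (FAILED(hr) && (createDeviceFlags & D3D11_CREATE_DEVICE_DEBUG))
{
    // Typically DXGI_ERROR_SDK_COMPONENT_MISSING (or E_FAIL on older systems)
    // when the debug layer is not installed: fall back to a plain device.
    createDeviceFlags &= ~D3D11_CREATE_DEVICE_DEBUG;
    hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE,
        nullptr, createDeviceFlags, nullptr,
        0, D3D11_SDK_VERSION, &device, &fl, &context);
}
if (FAILED(hr))
    return false; // never touch the device pointer after a failed create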

I got 100 errors when including ntddk.h and wdm.h in my program

I tried this code to block a process before execution, but I got 100 errors when including ntddk.h and wdm.h in C++. How do I solve this?
Then I got errors like expected a ')' on lines 14 and 22 of my code.
So what should I do to get rid of the 100 errors?
#include <ntstatus.h>
#include <DbgEng.h>
#include <Windows.h>
#include <ntddk.h>
#include <wdm.h>

int main()
{
    PEPROCESS process1;
    process1 = IoGetCurrentProcess();
    HANDLE ProcessId = PsGetCurrentProcessId();
    PS_CREATE_NOTIFY_INFO CreateInfo;
    PCREATE_PROCESS_NOTIFY_ROUTINE_EX(process1, ProcessId, CreateInfo);
    PCUNICODE_STRING ImageFileName;
    NTSTATUS CreationStatus;
    CreateInfo.CreationStatus = STATUS_ACCESS_DENIED;
    ImageFileName = CreateInfo.ImageFileName;
    if (ImageFileName == (PCUNICODE_STRING)L"firefox.exe")
    {
        NTSTATUS result;
        result = PsSetCreateProcessNotifyRoutineEx(PCREATE_PROCESS_NOTIFY_ROUTINE_EX(process1, ProcessId, CreateInfo), FALSE);
        if (result)
        {
            printf("blocked");
        }
    }
    return 0;
}
These are lines 14 and 22, the ones producing the expected a ')' errors:
PCREATE_PROCESS_NOTIFY_ROUTINE_EX(process1, ProcessId, CreateInfo);
result=PsSetCreateProcessNotifyRoutineEx(PCREATE_PROCESS_NOTIFY_ROUTINE_EX(process1, ProcessId, CreateInfo), FALSE);
Don't mix SDK and DDK headers/libraries in one executable.
If you write a driver, don't include Windows.h. Driver code is not Win32 code.
If you want to create a process in a suspended state from another Win32 process, use the CREATE_SUSPENDED process creation flag in CreateProcess() (or a similar Win32 call).
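A minimal user-mode sketch of that flag in action (the notepad.exe path is just a placeholder target):

#include <windows.h>
#include <stdio.h>

int main()
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmd[] = L"C:\\Windows\\System32\\notepad.exe"; // placeholder

    if (!CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE,
                        CREATE_SUSPENDED, nullptr, nullptr, &si, &pi))
    {
        printf("CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }
    // The process exists but its initial thread has not started running yet.
    // Inspect or modify it here, then resume it - or terminate it to "block" it.
    ResumeThread(pi.hThread);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}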
If you want to deny process creation for a particular process from a driver, check this StackOverflow question for the boilerplate code.

Disable LFH in Windows?

I'm looking to disable the LFH for an application I'm trying to debug. I'm able to rebuild and redeploy the application, but I cannot attach a debugger or set any gflags.
What's a good way to disable the LFH with these constraints? Is there maybe an attribute I can modify on the executable itself? Or some startup code I can add to the program?
On Vista and Win7, I think you can disable the Low-Fragmentation Heap on a per-executable basis with the Application Compatibility Toolkit.
On XP, the documentation suggests you don't get an LFH by default, so it is probably your C++ runtime library (which you haven't named) that is turning it on. And it cannot be disabled once enabled, so check the documentation for your particular runtime library to see if you can tell it not to enable the LFH, or whether there's another version of the runtime library you can link with that doesn't enable it.
See also this thread on the Microsoft forums
Based on Michael Burr's comment above about the IMAGE_LOAD_CONFIG_DIRECTORY containing GlobalFlagsSet, I wrote the following code to demonstrate enabling the correct GlobalFlag to disable the Low Fragmentation Heap. One caveat about writing your own IMAGE_LOAD_CONFIG_DIRECTORY at compile time is that it disables SafeSEH.
// editloadconfig.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <windows.h>
#include <tchar.h>
#include <stdio.h>

/*
typedef struct {
    DWORD Size;
    DWORD TimeDateStamp;
    WORD  MajorVersion;
    WORD  MinorVersion;
    DWORD GlobalFlagsClear;
    DWORD GlobalFlagsSet;
    DWORD CriticalSectionDefaultTimeout;
    DWORD DeCommitFreeBlockThreshold;
    DWORD DeCommitTotalFreeThreshold;
    DWORD LockPrefixTable;            // VA
    DWORD MaximumAllocationSize;
    DWORD VirtualMemoryThreshold;
    DWORD ProcessHeapFlags;
    DWORD ProcessAffinityMask;
    WORD  CSDVersion;
    WORD  Reserved1;
    DWORD EditList;                   // VA
    DWORD SecurityCookie;             // VA
    DWORD SEHandlerTable;             // VA
    DWORD SEHandlerCount;
} IMAGE_LOAD_CONFIG_DIRECTORY32, *PIMAGE_LOAD_CONFIG_DIRECTORY32;
*/

extern "C"
IMAGE_LOAD_CONFIG_DIRECTORY _load_config_used = { 0x48, 0, 0, 0, 0, 0x00000020 /* enable heap free checking */ };
// change the last value to 0 to not enable any globalflags

#define HEAP_STANDARD 0
#define HEAP_LAL 1
#define HEAP_LFH 2
#define SIZE 100

int _tmain(int argc, _TCHAR* argv[])
{
    BOOL bResult;
    HANDLE hHeap;
    ULONG HeapInformation;
    void* allocb[0x12+1];

    // based on the "Understanding the LFH" paper at
    // http://illmatics.com/Understanding_the_LFH.pdf
    int i = 0;
    for(i = 0; i < 0x12; i++)
    {
        printf("Allocation 0x%02x for 0x%02x bytes\n", i, SIZE);
        allocb[i] = HeapAlloc(GetProcessHeap(), 0x0, SIZE);
    }
    printf("Allocation 0x%02x for 0x%02x bytes\n", i, SIZE);
    printf("\tFirst serviced by the LFH\n");
    allocb[i] = HeapAlloc(GetProcessHeap(), 0x0, SIZE);
    // LFH is now activated so the query below will return 0 or 2.

    // sample code from MSDN for querying heap information
    //
    // Get a handle to the default process heap.
    //
    hHeap = GetProcessHeap();
    if (hHeap == NULL) {
        _tprintf(TEXT("Failed to retrieve default process heap with LastError %d.\n"),
                 GetLastError());
        return 1;
    }

    //
    // Query heap features that are enabled.
    //
    bResult = HeapQueryInformation(hHeap,
                                   HeapCompatibilityInformation,
                                   &HeapInformation,
                                   sizeof(HeapInformation),
                                   NULL);
    if (bResult == FALSE) {
        _tprintf(TEXT("Failed to retrieve heap features with LastError %d.\n"),
                 GetLastError());
        return 1;
    }

    //
    // Print results of the query.
    //
    _tprintf(TEXT("HeapCompatibilityInformation is %d.\n"), HeapInformation);
    switch(HeapInformation)
    {
    case HEAP_STANDARD:
        _tprintf(TEXT("The default process heap is a standard heap.\n"));
        break;
    case HEAP_LAL:
        _tprintf(TEXT("The default process heap supports look-aside lists.\n"));
        break;
    case HEAP_LFH:
        _tprintf(TEXT("The default process heap has the low-fragmentation ") \
                 TEXT("heap enabled.\n"));
        break;
    default:
        _tprintf(TEXT("Unrecognized HeapInformation reported for the default ") \
                 TEXT("process heap.\n"));
        break;
    }
    return 0;
}
You can use the gflags.exe tool that's included with the WDK (maybe also the SDK via the Debugging Tools for Windows package) to manipulate a subset of the gflags in the executable image's PE header. Just go to the "Image File" tab in 'gflags.exe'.
As pointed out by jcopenha in a comment, it looks like gflags.exe does not manipulate the PE file header (I was relying on information from "Windows Internals, Fifth Edition" in Chapter 9's "Heap Debugging Features" section) - apparently it only manipulates the "Image File Execution Options" registry key.
However, it may still be possible to set (or clear) the gflags bits for a particular executable image - see the docs for the IMAGE_LOAD_CONFIG_DIRECTORY structure, in particular the GlobalFlagsClear and GlobalFlagsSet fields:
GlobalFlagsClear - The global flags that control system behavior. For more information, see Gflags.exe.
GlobalFlagsSet - The global flags that control system behavior. For more information, see Gflags.exe.
You can dump these fields with dumpbin (or link /dump) using the /loadconfig option:
C:\temp>dumpbin /loadconfig test.exe
Microsoft (R) COFF/PE Dumper Version 10.00.40219.01
Copyright (C) Microsoft Corporation. All rights reserved.
Dump of file test.exe
File Type: EXECUTABLE IMAGE
Section contains the following load config:
00000048 size
0 time date stamp
0.00 Version
0 GlobalFlags Clear
0 GlobalFlags Set // <=======
0 Critical Section Default Timeout
// remainder of dump snipped...
You can get the RVA of the "Load Configuration Directory" using dumpbin /headers:
C:\temp>dumpbin /headers test.exe
Microsoft (R) COFF/PE Dumper Version 10.00.40219.01
Copyright (C) Microsoft Corporation. All rights reserved.
Dump of file test.exe
// ...
OPTIONAL HEADER VALUES
// ...
142B0 [ 40] RVA [size] of Load Configuration Directory
// ^^^^^ ^^^
// ...
As a point of interest, the /loadconfig and /headers options disagree on the size of the structure (for the record, it looks like the /headers info isn't right).
Unfortunately, I'm unaware of a PE editor that directly supports these fields - you'll probably have to use a hex editor (or the hex-editing feature of a PE editor) to change them. The RVA of the IMAGE_LOAD_CONFIG_DIRECTORY structure should help you find it in the hex editor.
I believe that setting one or more of the heap debugging flags in the image header (maybe any of them, but you might have to experiment) will disable the low fragmentation heap. But I haven't tested whether or not setting bits in these fields actually works. If you try this, please let us know how it fares.
The simplest way, if you can't change the configuration on the machine, is to set the heap information programmatically.
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366705(v=vs.85).aspx
I believe you can disable the LFH heap programmatically this way, although I haven't tried it.
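For reference, a sketch of the HeapSetInformation call pattern that page describes. Note that MSDN only documents value 2 (enable the LFH) for HeapCompatibilityInformation, so whether any value can turn the LFH back off is exactly as untested as this answer says:

#include <windows.h>
#include <stdio.h>

int main()
{
    // HeapCompatibilityInformation values: 0 = standard, 1 = look-aside lists, 2 = LFH.
    // Setting 2 is the documented use; other values are speculative.
    ULONG heap_compat = 2;
    if (!HeapSetInformation(GetProcessHeap(),
                            HeapCompatibilityInformation,
                            &heap_compat, sizeof(heap_compat)))
    {
        printf("HeapSetInformation failed: %lu\n", GetLastError());
        return 1;
    }
    return 0;
}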