Hi everyone.
I wrote a small 3D scene two years ago; it's only 1700 lines of source code (excluding .h files). Now, coming back to GitHub and running my app, I found a pretty interesting bug in debug mode.
The debugger throws an exception when calling CreateBuffer for the vertex buffer:
auto result = device->CreateBuffer(&vertex_buffer_desc, &vertex_data, &vertex_buffer);
if(FAILED(result))
return false;
Basically, the debugger says the (d3d11) device is nullptr, which can't be the case, because everything works fine when running in non-debug mode. But when I define UINT create_device_flag = D3D11_CREATE_DEVICE_DEBUG; before creating the device, I get this exception: read access violation. device is nullptr. A few days later, I still cannot figure out what is wrong, because the order in which the pointers are defined is correct.
Here's Main.cpp:
#include <StdAfx.h>
#include <Window.h>
#include <FPSCamera.h>
#include <DirectInput8.h>
#include <D3D11Renderer.h>
#include <Terrain.h>
#include <TerrainShader.h>
using namespace bm;
using namespace std::string_literals; // for the L"..."s literals below
int __stdcall WinMain(HINSTANCE, HINSTANCE, char*, int)
{
auto resource_directory_name = L"..\\..\\..\\Resource\\"s;
auto terrain_name = L"terrain"s;
auto dds_file_extension = L".dds"s;
auto hlsl_file_extension = L".hlsl"s;
std::wstring resources[] = {resource_directory_name + L"heightmap.bmp"s,
resource_directory_name + terrain_name + dds_file_extension,
resource_directory_name + terrain_name + L"_bump"s + dds_file_extension,
resource_directory_name + terrain_name + L"_vs"s + hlsl_file_extension,
resource_directory_name + terrain_name + L"_ps"s + hlsl_file_extension};
constexpr auto ENABLE_FULLSCREEN = false;
constexpr auto ENABLE_VSYNC = false;
constexpr auto SCREEN_WIDTH = 1366;
constexpr auto SCREEN_HEIGHT = 768;
auto window = std::make_shared<bm::Window>(SCREEN_WIDTH, SCREEN_HEIGHT, ENABLE_FULLSCREEN);
window->registerClass();
window->create();
auto d3d11_renderer = std::make_shared<bm::D3D11Renderer>(SCREEN_WIDTH, SCREEN_HEIGHT, ENABLE_FULLSCREEN, window->getHandle(), ENABLE_VSYNC);
// The exception is thrown for the following pointer, but the d3d11 device should already be initialized.
auto terrain = std::make_shared<bm::Terrain>(d3d11_renderer->getDevice(), resources[0].c_str(), resources[1].c_str(), resources[2].c_str());
auto terrain_shader = std::make_shared<bm::TerrainShader>(d3d11_renderer->getDevice(), resources[3].c_str(), resources[4].c_str());
auto fps_camera = std::make_shared<bm::FPSCamera>(static_cast<float>(SCREEN_WIDTH), static_cast<float>(SCREEN_HEIGHT));
fps_camera->setPosition(500.f, 75.f, 400.f);
fps_camera->setRotation(20.f, 30.f, 0.f); // in degrees.
auto direct_input_8 = std::make_shared<bm::DirectInput8>(window->getHandle());
constexpr float CLEAR_COLOR[] = {0.84f, 0.84f, 1.f, 1.f};
while(window->update())
{
direct_input_8->update(fps_camera->getMoveLeftRight(), fps_camera->getMoveBackForward(), fps_camera->getYaw(), fps_camera->getPitch());
fps_camera->update();
d3d11_renderer->clearScreen(CLEAR_COLOR);
terrain->render(d3d11_renderer->getDeviceContext());
terrain_shader->render(d3d11_renderer->getDeviceContext(),
terrain->getIndexCount(),
fps_camera->getWorld(),
fps_camera->getView(),
fps_camera->getProjection(),
{0.82f, 0.82f, 0.82f, 1.0f},
{-0.0f, -1.0f, 0.0f},
terrain->getColorTexture(),
terrain->getNormalMapTexture());
d3d11_renderer->swapBuffers();
}
return 0;
}
P.S.
I know there's a six-year-old article on the website: CreateBuffer throwing an "Access violation reading location"
But it hardly explains anything, since I have no global variables or pointers. I'd like to correct my old mistake, so I'll be glad to provide more details if needed.
Sorry, but this is all abstraction code, so we can't see the actual place where you are calling D3D11CreateDevice.
That said, the symptom you describe sounds like you don't have the right Debug Device SDK layer installed on your operating system. You are likely also failing to check for a FAILED HRESULT from D3D11CreateDevice.
DWORD createDeviceFlags = 0;
#ifdef _DEBUG
createDeviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif
ComPtr<ID3D11Device> device;
ComPtr<ID3D11DeviceContext> context;
D3D_FEATURE_LEVEL fl;
HRESULT hr = D3D11CreateDevice( nullptr, D3D_DRIVER_TYPE_HARDWARE,
nullptr, createDeviceFlags, nullptr,
0, D3D11_SDK_VERSION, &device, &fl, &context );
if (FAILED(hr))
...
On a system without the Debug Device SDK layer installed, this will fail in _DEBUG.
On Windows 8.x or Windows 10, installing the legacy DirectX SDK does not install any debug runtime.
For Windows 8.x you can get the Direct3D 11 Debug Runtime by installing the Windows 8.x SDK or Windows 10 SDK.
For Windows 10, you get the Direct3D Debug Runtime by installing a Windows optional feature named Graphics Tools. This is version-specific, so make sure the Graphics Tools you have enabled match your Windows release. See this blog post
See Anatomy of Direct3D 11 Create Device
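If you want the app to still start on machines without the debug layer, a common pattern is to retry without the debug flag when the debug create fails. A sketch along the lines of the snippet above (ComPtr comes from <wrl/client.h>):
DWORD createDeviceFlags = 0;
#ifdef _DEBUG
createDeviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif
ComPtr<ID3D11Device> device;
ComPtr<ID3D11DeviceContext> context;
D3D_FEATURE_LEVEL fl;
HRESULT hr = D3D11CreateDevice( nullptr, D3D_DRIVER_TYPE_HARDWARE,
    nullptr, createDeviceFlags, nullptr,
    0, D3D11_SDK_VERSION, &device, &fl, &context );
if (FAILED(hr) && (createDeviceFlags & D3D11_CREATE_DEVICE_DEBUG))
{
    // The debug layer is likely not installed on this machine; retry without it.
    createDeviceFlags &= ~D3D11_CREATE_DEVICE_DEBUG;
    hr = D3D11CreateDevice( nullptr, D3D_DRIVER_TYPE_HARDWARE,
        nullptr, createDeviceFlags, nullptr,
        0, D3D11_SDK_VERSION, &device, &fl, &context );
}
if (FAILED(hr))
{
    // Still failed: report hr rather than touching a null device.
}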
Related
I'm learning DirectX 12, and I got an error from the D3DCompileFromFile() function.
void Shader::CompileShader(const std::wstring& path, const std::string& name, const std::string& version, ID3DBlob* shaderBlob) {
#if defined(_DEBUG)
// Enable better shader debugging with the graphics debugging tools.
UINT compileFlags = D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION;
#else
UINT compileFlags = 0;
#endif
ThrowIfFailed(D3DCompileFromFile(path.c_str(), nullptr, D3D_COMPILE_STANDARD_FILE_INCLUDE, name.c_str(), version.c_str(), compileFlags, 0, &shaderBlob, &mErrBlob));
}
In the debugger, mErrBlob showed "Information not available, no symbols loaded for D3DCompiler_47.dll".
So I checked the Microsoft Symbol Server option under Debug > Options > Debugging > Symbols. Then mErrBlob showed another message: "No type information available in symbol file for D3DCompiler_47.dll".
The HRESULT value is E_FAIL.
I don't know how to fix this error anymore.
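For what it's worth, those two strings are Visual Studio symbol-loading notices, not compiler output; the HLSL compiler writes its actual diagnostics into the error blob as a plain string. A minimal sketch of dumping it, reusing mErrBlob from the snippet above (note also that shaderBlob is taken by value there, so &shaderBlob fills a local copy the caller never sees; an ID3DBlob** out-parameter is the usual fix):
HRESULT hr = D3DCompileFromFile(path.c_str(), nullptr,
    D3D_COMPILE_STANDARD_FILE_INCLUDE, name.c_str(), version.c_str(),
    compileFlags, 0, &shaderBlob, &mErrBlob);
if (FAILED(hr) && mErrBlob != nullptr)
{
    // The blob holds a null-terminated string from the HLSL compiler.
    OutputDebugStringA(static_cast<const char*>(mErrBlob->GetBufferPointer()));
}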
I'm using Visual Studio 2019 Community with C++17 and Vulkan SDK 1.2.148.1
#define GLFW_INCLUDE_VULKAN
#include <GLFW/glfw3.h>
//other vulkan stuff here
VkInstance instance;
uint32_t count;
VkInstanceCreateInfo createInfo{};
createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
createInfo.ppEnabledExtensionNames = glfwGetRequiredInstanceExtensions(&count);
createInfo.enabledExtensionCount = count;
createInfo.enabledLayerCount = 0;
vkCreateInstance(&createInfo, nullptr, &instance);
After creating the instance, VkWin32SurfaceCreateInfoKHR is still not available. The code fails at:
VkWin32SurfaceCreateInfoKHR createInfo{};
Full code on pastebin. The error is on line 110.
VkWin32SurfaceCreateInfoKHR is platform-specific for Windows, so in order to use it you need to define VK_USE_PLATFORM_WIN32_KHR somewhere in your project.
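A minimal sketch of what that looks like; the macro has to be defined before any Vulkan header is pulled in, including indirectly through GLFW:
// Define before any Vulkan include so vulkan.h also pulls in vulkan_win32.h
// (and <windows.h>), which is what declares VkWin32SurfaceCreateInfoKHR.
#define VK_USE_PLATFORM_WIN32_KHR
#define GLFW_INCLUDE_VULKAN
#include <GLFW/glfw3.h>
Alternatively, add it as a compiler definition (e.g. /DVK_USE_PLATFORM_WIN32_KHR) so every translation unit agrees.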
After my first successful attempt at a 3D engine using Java and OpenGL (LWJGL3), I have decided to try my hand at Vulkan, using C++.
I have barely any experience with C/C++, and I am aware of Vulkan's steep learning curve. This is, however, not a problem.
I decided to follow this tutorial: https://vulkan-tutorial.com/Introduction
It showed me how to create a new project with Vulkan using Xcode (as I am on macOS Mojave). I would, however, like to continue the rest of the tutorial using CLion, as I will be switching between multiple operating systems.
I tried my hand at creating a CLion project and succeeded in making my first CMakeLists file; however, something seems to be wrong. The file currently consists of the following:
cmake_minimum_required(VERSION 3.12)
project(VulkanTesting)
set(CMAKE_CXX_STANDARD 14)
add_executable(VulkanTesting main.cpp)
include_directories(/usr/local/include)
include_directories(/Users/[username]/Documents/Vulkan/SDK/vulkansdk-macos-1.1.92.1/macOS/include)
target_link_libraries(VulkanTesting /usr/local/lib/libglfw.3.3.dylib)
target_link_libraries(VulkanTesting /Users/[username]/Documents/Vulkan/SDK/vulkansdk-macos-1.1.92.1/macOS/lib/libvulkan.1.dylib)
target_link_libraries(VulkanTesting /Users/[username]/Documents/Vulkan/SDK/vulkansdk-macos-1.1.92.1/macOS/lib/libvulkan.1.1.92.dylib)
# Don't know if I need the next two lines
link_directories(/usr/local/lib)
link_directories(/Users/[username]/Documents/Vulkan/SDK/vulkansdk-macos-1.1.92.1/macOS/lib)
The reason I showed the above file will become apparent in the question.
The 'Program' so far is the following:
#define GLFW_INCLUDE_VULKAN
#include <GLFW/glfw3.h>
#include <iostream>
#include <stdexcept>
#include <functional>
#include <cstdlib>
#include <vector>
const int WIDTH = 800;
const int HEIGHT = 600;
class HelloTriangleApplication {
public:
void run() {
initWindow();
initVulkan();
mainLoop();
cleanup();
}
private:
GLFWwindow* window;
VkInstance instance;
void initWindow(){
glfwInit();
glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);
glfwWindowHint(GLFW_RESIZABLE, GLFW_FALSE);
window = glfwCreateWindow(WIDTH, HEIGHT, "My first Vulkan window", nullptr, nullptr);
}
void initVulkan() {
createInstance();
}
void createInstance(){
// Instantiate Application Info
VkApplicationInfo applicationInfo = {};
applicationInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
applicationInfo.pApplicationName = "Hello Triangle";
applicationInfo.applicationVersion = VK_MAKE_VERSION(1,0,0);
applicationInfo.pEngineName = "No Engine";
applicationInfo.engineVersion = VK_MAKE_VERSION(1,0,0);
applicationInfo.apiVersion = VK_API_VERSION_1_0;
// Instantiate Instance Creation Info
VkInstanceCreateInfo createInfo = {};
createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
createInfo.pApplicationInfo = &applicationInfo;
// Get GLFW platform specific extensions
uint32_t glfwExtensionCount = 0;
const char** glfwExtensions;
glfwExtensions = glfwGetRequiredInstanceExtensions(&glfwExtensionCount);
// Fill in required extensions in Instance Creation Info
createInfo.enabledExtensionCount = glfwExtensionCount;
createInfo.ppEnabledExtensionNames = glfwExtensions;
// For validation layers, this is a later step in the tutorial.
createInfo.enabledLayerCount = 0;
// Create the Vulkan instance, and check if it was successful.
VkResult result = vkCreateInstance(&createInfo, nullptr, &instance);
if(result != VK_SUCCESS){
std::cout << "glfwExtensionCount: " << glfwExtensionCount << "\n";
std::cout << "glfwExtensionNames: " << &glfwExtensions << "\n";
std::cout << "result: " << result << "\n";
throw std::runtime_error("Failed to create Vulkan Instance");
}
}
void mainLoop() {
while(!glfwWindowShouldClose(window)){
glfwPollEvents();
}
}
void cleanup() {
glfwDestroyWindow(window);
glfwTerminate();
}
};
int main() {
HelloTriangleApplication app;
try {
app.run();
} catch (const std::exception& e) {
std::cerr << e.what() << std::endl;
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
The problem I am having is that when I try to run the program, it will not create a VkInstance. The function returns VK_ERROR_INCOMPATIBLE_DRIVER. Now, I doubt that the driver is in fact incompatible as I have run the demo applications that came with the Vulkan SDK for one, and for another I have been able to run the exact same 'program' in XCode. When I investigated the problem a bit further, I noticed that the glfwGetRequiredInstanceExtensions function returns no extensions when the program is run in CLion like this, but does return one in the XCode equivalent.
This all leads me to believe that I have done something wrong in linking the libraries/frameworks in the CMake file, because I am aware that Vulkan is not directly supported on macOS, but instead (somehow?) passes through a layer to communicate with Metal.
Do I need to specify a way for the program to pass its Vulkan functionality through a Metal layer, and is this done automagically in XCode, or is there another problem with my approach?
Any help would be greatly appreciated!
You might want to look at the MacOS Getting Started Guide on the LunarXchange website and in your SDK. There is a section at the end that shows how to use CMake to build a Vulkan app and run it on MacOS. You also may want to use the FindVulkan CMake module instead of manually setting the include directories and the target link libraries.
But my first guess about your specific problem is that you may not be setting the VK_ICD_FILENAMES environment variable. You are correct in your observation that there is no direct support for Vulkan. Instead, the support is provided by the MoltenVK library which is treated as a Vulkan driver. But this "driver" is not installed in any system directory by the SDK. The SDK is just unzipped in your home directory structure, so you must tell the Vulkan loader where to find it via this environment variable.
Again, the CMake section at the end of the Getting Started Guide demonstrates the use of this environment variable. And the entire guide goes into additional detail about how the various Vulkan and MoltenVK components work.
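A rough sketch of what that CMakeLists could look like with the FindVulkan module (assuming CMake 3.7 or newer, which ships the module; the SDK and GLFW paths are the ones from the question):
cmake_minimum_required(VERSION 3.12)
project(VulkanTesting)
set(CMAKE_CXX_STANDARD 14)
# FindVulkan honors the VULKAN_SDK environment variable, e.g.
#   export VULKAN_SDK=/Users/[username]/Documents/Vulkan/SDK/vulkansdk-macos-1.1.92.1/macOS
find_package(Vulkan REQUIRED)
add_executable(VulkanTesting main.cpp)
target_link_libraries(VulkanTesting Vulkan::Vulkan /usr/local/lib/libglfw.3.3.dylib)
# At run time, tell the loader where MoltenVK's ICD manifest lives, e.g.
#   export VK_ICD_FILENAMES=$VULKAN_SDK/etc/vulkan/icd.d/MoltenVK_icd.json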
I've been having trouble with a game I've been working on: once I added music, it started segfaulting in my frequently called texture-loading code, 5-30 seconds after the music started playing. The best explanation I could come up with was some sort of memory corruption. After a good week of unsuccessfully trying to debug it (trying things like GFlags pageheap), I managed to cut it down to the following code, which still exhibits the problem.
Sometimes this segfaults with the callstack going through SDL2_mixer.dll, but mostly it occurs in the SDL_CreateTextureFromSurface call, due to the renderer being in a bad state. numTextures gets to between 15000-40000 on my machine (Windows 10 x64, with program compiled for x86).
My gut tells me that there's an issue in my environment or code, rather than an issue in SDL itself, but I'm at a loss. Any help or insights would be greatly appreciated.
#include <SDL_image.h>
#include <SDL_mixer.h>
#include <cassert>
int main(int argc, char* argv[])
{
assert(SDL_Init(SDL_INIT_EVERYTHING) == 0);
SDL_Window * pWindow_ = SDL_CreateWindow(
"", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 640, 480, 0x0);
assert(pWindow_ != nullptr);
SDL_Renderer * pRenderer_ = SDL_CreateRenderer(pWindow_, -1, 0);
assert(pRenderer_ != nullptr);
assert(Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 512) == 0);
Mix_Music * pMusic = Mix_LoadMUS("sounds/tranquility.wav");
assert(pMusic != nullptr);
assert(Mix_PlayMusic(pMusic, -1) == 0);
SDL_Surface * pSurface = IMG_Load("images/caution.png");
assert(pSurface != nullptr);
SDL_Texture * pTexture = SDL_CreateTextureFromSurface(pRenderer_, pSurface);
assert(pTexture != nullptr);
int numTextures = 0;
while (true)
{
numTextures += 10;
assert(pTexture != nullptr);
SDL_DestroyTexture(pTexture);
pTexture = SDL_CreateTextureFromSurface(pRenderer_, pSurface);
assert(pTexture != nullptr);
}
}
The solution turned out to be to update to the latest version of SDL (2.0.3 -> 2.0.5).
I started developing the project in question with an engine code base which I upgraded from SDL 1.2 to 2.0 about 2 years ago, when the latest version was 2.0.3.
When I recently added sound and music, I took the latest SDL_mixer, and didn't think to update SDL to the latest 2.0.5.
After getting the latest development and runtime libraries for SDL (and SDL_image and SDL_mixer for good measure), the problem disappeared.
I'm not entirely satisfied with this. I'm quite surprised that the newer SDL_mixer linked successfully with an older SDL, if they were not compatible. In addition, I can't find any resources online that suggest any compatibility issues. Therefore, I have an uneasy feeling there may have been something else going on, which was resolved incidentally by the upgrade.
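One way to catch this kind of DLL skew early is to log the SDL version the program was compiled against next to the one it actually loaded. A small sketch using the stock version APIs:
#include <SDL.h>
#include <SDL_mixer.h>

void logSdlVersions()
{
    SDL_version compiled;
    SDL_version linked;
    SDL_VERSION(&compiled);   // version of the headers we built against
    SDL_GetVersion(&linked);  // version of the DLL loaded at run time
    const SDL_version * mixLinked = Mix_Linked_Version();
    SDL_Log("SDL compiled %d.%d.%d, linked %d.%d.%d, SDL_mixer linked %d.%d.%d",
            compiled.major, compiled.minor, compiled.patch,
            linked.major, linked.minor, linked.patch,
            mixLinked->major, mixLinked->minor, mixLinked->patch);
}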
I'm working on a C++ project using Visual Studio 2010 on Windows. I'm linking dynamically against x264 which I built myself as a shared library using MinGW following the guide at
http://www.ayobamiadewole.com/Blog/Others/x264compilation.aspx
The strange thing is that my x264 code works perfectly sometimes. Then when I change some line of code (or even just the comments in the file!) and recompile, everything crashes on the line
encoder_ = x264_encoder_open(&param);
With the message
Access violation reading location 0x00000000
I'm not doing anything funky at all, so it's probably not my code that is wrong; I guess something is going wrong with the linking, or maybe with how I compiled x264.
The full initialization code:
x264_param_t param = { 0 };
if (x264_param_default_preset(&param, "ultrafast", "zerolatency") < 0) {
throw KStreamerException("x264_param_default_preset failed");
}
param.i_threads = 1;
param.i_width = 640;
param.i_height = 480;
param.i_fps_num = 10;
param.i_fps_den = 1;
encoder_ = x264_encoder_open(&param); // <-----
if (encoder_ == 0) {
throw KStreamerException("x264_encoder_open failed");
}
x264_picture_alloc(&pic_, X264_CSP_I420, 640, 480);
Edit: It turns out that it always works in Release mode, and when using superfast instead of ultrafast it also works in Debug mode 100% of the time. Could it be that ultrafast mode is doing some crazy optimizations that the debugger doesn't like?
I've run into this problem too, with libx264-120.
libx264-120 was built on MinGW with the configure options below.
$ ./configure --disable-cli --enable-shared --extra-ldflags=-Wl,--output-def=libx264-120.def --enable-debug --enable-win32thread
platform: X86
system: WINDOWS
cli: no
libx264: internal
shared: yes
static: no
asm: yes
interlaced: yes
avs: yes
lavf: no
ffms: no
gpac: no
gpl: yes
thread: win32
filters: crop select_every
debug: yes
gprof: no
strip: no
PIC: no
visualize: no
bit depth: 8
chroma format: all
$ make -j8
lib /def:libx264-120.def /machine:x86
#include "stdafx.h"
#include <iostream>
#include <cassert>
using namespace std;
#include <stdint.h>
extern "C"{
#include <x264.h>
}
int _tmain(int argc, _TCHAR* argv[])
{
int width(640);
int height(480);
int err(-1);
x264_param_t x264_param = {0};
//x264_param_default(&x264_param);
err =
x264_param_default_preset(&x264_param, "veryfast", "zerolatency");
assert(0==err);
x264_param.i_threads = 8;
x264_param.i_width = width;
x264_param.i_height = height;
x264_param.i_fps_num = 60;//fps;
x264_param.i_fps_den = 1;
// Intra refres:
x264_param.i_keyint_max = 60;//fps;
x264_param.b_intra_refresh = 1;
//Rate control:
x264_param.rc.i_rc_method = X264_RC_CRF;
x264_param.rc.f_rf_constant = 25;
x264_param.rc.f_rf_constant_max = 35;
//For streaming:
x264_param.b_repeat_headers = 1;
x264_param.b_annexb = 1;
err = x264_param_apply_profile(&x264_param, "baseline");
assert(0==err);
x264_t *x264_encoder = x264_encoder_open(&x264_param);
x264_encoder = x264_encoder;
x264_encoder_close( x264_encoder );
getchar();
return 0;
}
This program sometimes succeeds, but often fails on x264_encoder_open with the access violation.
There is little information about this on Google, and how to initialize x264_param_t and how to use x264_encoder_open remain unclear.
The behavior seems to be caused by x264's setting values, but I can't pin them down without reading some open-source programs that use libx264.
Also, this access violation seems not to occur on the first execution, nor when compiling with MinGW's gcc (e.g. gcc -o test test.c -lx264; ./test).
Given this behavior, I think the DLL version of libx264 built with MinGW's gcc is doing something strange with its resources.
I had the same problem. The only way I was able to fix it was to build the x264 DLL without the asm option (i.e. specify --disable-asm).
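For reference, that would make the configure line from the answer above look something like this (same MinGW setup assumed):
$ ./configure --disable-cli --enable-shared --extra-ldflags=-Wl,--output-def=libx264-120.def --disable-asm
$ make -j8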