I am a beginner to DirectX 12 (and to game programming in general) and am studying from the Microsoft documentation. When using the function ThrowIfFailed() I get an error from IntelliSense in the VS2015 editor:
This declaration has no storage class or type specifier.
Can anyone help?
As you are new to DirectX programming, I strongly recommend starting with DirectX 11 rather than DirectX 12. DirectX 12 assumes you are already an expert DirectX 11 developer and is quite an unforgiving API. It's absolutely worth learning if you plan to be a graphics developer, but starting with DX12 instead of DX11 is a huge undertaking. See the DirectX Tool Kit tutorials for DX11 and/or DX12.
In modern DirectX sample code and in the VS DirectX templates, Microsoft uses a standard helper function, ThrowIfFailed. It's not part of the OS or system headers; it's just defined in the local project's precompiled header file (pch.h):
#include <exception>

namespace DX
{
    inline void ThrowIfFailed(HRESULT hr)
    {
        if (FAILED(hr))
        {
            // Set a breakpoint on this line to catch DirectX API errors
            throw std::exception();
        }
    }
}
For COM programming, you must check all HRESULT values for failure at runtime. If it is safe to ignore the return value of a particular DirectX 11 or DirectX 12 API, it will return void instead. You generally use ThrowIfFailed for 'fast fail' scenarios (i.e. your program can't recover if the function fails).
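For example, wrapping a typical Direct3D 12 call looks like this (a minimal sketch of my own; CreateDeviceExample and the device-creation parameters are purely illustrative):

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

void CreateDeviceExample()
{
    Microsoft::WRL::ComPtr<ID3D12Device> device;

    // If device creation fails, ThrowIfFailed turns the failing HRESULT into a C++ exception.
    DX::ThrowIfFailed(
        D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device)));
}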
Note the recommendation is to use C++ exception handling (i.e. /EHsc), which is the default compiler setting in the VS templates. On the x64 and ARM platforms this is implemented very efficiently without any additional code overhead; legacy x86 requires some additional prologue/epilogue code that the compiler generates. Most of the "FUD" around exception handling in native code is based on experience with the older asynchronous structured exception handling (a.k.a. /EHa), which severely hampers the code optimizer.
See this wiki page for a bit more detail and usage information. You should also read the page on ComPtr.
In my version of the Direct3D Game VS Templates on GitHub, I use a slightly enhanced version of ThrowIfFailed which you could also use:
#include <cstdio>
#include <exception>

namespace DX
{
    // Helper class for COM exceptions
    class com_exception : public std::exception
    {
    public:
        com_exception(HRESULT hr) : result(hr) {}

        const char* what() const noexcept override
        {
            static char s_str[64] = {};
            sprintf_s(s_str, "Failure with HRESULT of %08X",
                static_cast<unsigned int>(result));
            return s_str;
        }

    private:
        HRESULT result;
    };

    // Helper utility converts D3D API failures into exceptions.
    inline void ThrowIfFailed(HRESULT hr)
    {
        if (FAILED(hr))
        {
            throw com_exception(hr);
        }
    }
}
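A possible usage sketch (my own illustration, not part of the template; InitializeExample is a hypothetical function name): catching the exception near the top of your initialization path lets you log the failing HRESULT.

#include <windows.h>
#include <dxgi1_4.h>
#include <wrl/client.h>

void InitializeExample()
{
    try
    {
        Microsoft::WRL::ComPtr<IDXGIFactory4> factory;
        DX::ThrowIfFailed(CreateDXGIFactory1(IID_PPV_ARGS(&factory)));
        // ... create the device, swap chain, and so on.
    }
    catch (const DX::com_exception& e)
    {
        // e.what() contains the formatted HRESULT string produced above.
        OutputDebugStringA(e.what());
    }
}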
This error occurs because some of your code ends up outside of any function.
Your error is right here:
void D3D12HelloTriangle::LoadPipeline() {
#if defined(_DEBUG) { //<= this brace is simply ignored because it is on a preprocessor line
    ComPtr<ID3D12Debug> debugController;
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debugController)))) {
        debugController->EnableDebugLayer();
    }
} // So this one closes the method LoadPipeline
#endif

// From here on, you are outside of any function
ComPtr<IDXGIFactory4> factory;
ThrowIfFailed(CreateDXGIFactory1(IID_PPV_ARGS(&factory)));
To correct it:
void D3D12HelloTriangle::LoadPipeline() {
#if defined(_DEBUG)
    { //<= just put this brace on its own line
        ComPtr<ID3D12Debug> debugController;
        if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debugController)))) {
            debugController->EnableDebugLayer();
        }
    } // Now this brace closes the debug-only block, not the method
#endif

    // From here on, you are still inside LoadPipeline
    ComPtr<IDXGIFactory4> factory;
    ThrowIfFailed(CreateDXGIFactory1(IID_PPV_ARGS(&factory)));
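As a side note, you could also drop the extra scope entirely, which avoids the problem altogether (a sketch, not from the original answer; debugController then simply lives until the end of LoadPipeline):

void D3D12HelloTriangle::LoadPipeline() {
#if defined(_DEBUG)
    // No extra block at all: the brace that the preprocessor line used to swallow is gone.
    ComPtr<ID3D12Debug> debugController;
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debugController)))) {
        debugController->EnableDebugLayer();
    }
#endif

    ComPtr<IDXGIFactory4> factory;
    ThrowIfFailed(CreateDXGIFactory1(IID_PPV_ARGS(&factory)));
    // ...
}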
Related
Preface and the problem
I'm currently studying the C++ programming language and game programming.
At the moment, I'm working on a simple game engine just to practice API 'consistency' and architecture, which is why the idea of mimicking the C# 'Program' class came up.
C# Entry point:
class Program
{
    static void Main(string[] args)
    {
        // Do stuff.
    }
}
C++ analogue required:
class Program
{
public:
    static void Main()
    {
        // Do stuff. 'args' analogue can be ignored, if necessary.
    }
};
Is it possible to somehow, using linker options, redefine entry point to be a static class method?
Related experience and my theories on this topic
The main reason why I think this should be possible is demonstrated by the following piece of code (which compiled successfully with mingw-w64).
#include <iostream>

class Main
{
public:
    static void Foo() { std::cout << "Main::Foo\n"; }
};

void localFoo() { std::cout << "localFoo\n"; }

void callFunc(void (*funcToCall)())
{
    funcToCall();
}

int main()
{
    callFunc(localFoo);
    callFunc(Main::Foo); // Proves that Main::Foo has the same interface as localFoo.
    return 0;
}
(This refers to the Win32 API.) I abstracted the Win32 API into classes and used the window procedure as a static member of a class. It was perfectly acceptable to Win32's WNDCLASS, and I could even use static members of my class inside that procedure.
The conclusion I drew: static fields and methods are technically no different from global variables and functions, and can therefore replace some code that dates back to C (the default entry point, for example).
Notes
Both MinGW and MSVC (Visual Studio or cmd) solutions are acceptable.
The author of the post is extremely grateful for any information provided :3
Is it possible to somehow, using linker options, redefine entry point to be a static class method?
No. Not if you want to use the C++ runtime library, at any rate. main (or WinMain) is called by the runtime library once it has completed initialising itself, and that call is hard-coded in the runtime library itself.
The MSVC linker lets you specify an alternative entry point with the /ENTRY switch (see here), but if you do that you will bypass the runtime library initialisation code and that will break things.
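If the goal is simply the C#-style structure rather than literally changing the entry point, a common workaround (my own sketch, not part of the original answer) is to keep the standard main and delegate to the static method:

class Program
{
public:
    static void Main()
    {
        // Do stuff.
    }
};

int main()
{
    // By the time main runs, the C++ runtime has already initialised itself,
    // so delegating to the static method is safe.
    Program::Main();
    return 0;
}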
I'm trying to write a C++/CLI wrapper for the IO Industries Core2 DVR, which will then be used by LabVIEW. The company provided an SDK with all the headers (written in C++) and the Boost library. I've managed to build a wrapper that compiles, and LabVIEW is able to see the function through the .NET palette.
// ManagedProject.h
#pragma once

#include "core_api_helper.h"
#include "core_api.h"

using namespace System;
using namespace CoreApi;

namespace ManagedProject {

    // Setup class
    public ref class Setup
    {
    private:

    public:
        unsigned int initializeTest();
    };
}
// This is the DLL wrapper.
#include "stdafx.h"
#include "ManagedProject.h"
#include "core_api_helper.h"
#include "core_api.h"
#include "resource.h"

using namespace CoreApi;
using namespace Common;
using namespace ManagedProject;

// Global handles
// A handle to the Core API
InstanceHandle g_hApi;
// A handle to the Core API device collection
DeviceCollectionHandle g_hCoreDeviceCollection;

unsigned int Setup::initializeTest()
{
    try
    {
        // Initialize the Core API (must be called before any other Core API functions).
        // Returns a handle to the Core API.
        g_hApi = Instance::initialize();

        // Get a collection of Core devices.
        g_hCoreDeviceCollection = g_hApi->deviceCollection();

        unsigned int deviceCount = g_hCoreDeviceCollection->deviceCount();
        return deviceCount;
    }
    catch (GeneralException& e)
    {
        e.what();
        return 3;
    }
}
However, when I run LabVIEW through Visual Studio 2015 in debug mode, I run into the problem below, and what is returned to LabVIEW is the 3 from the catch block.
[Screenshot: first break in debug mode (NULL ptr)]
NOTE: InstanceHandle is a shared_ptr.
As can be seen, the variable is a NULL pointer, and the same thing happens for g_hCoreDeviceCollection as well. I think I need to instantiate it with new, but I'm a little unsure since InstanceHandle is a shared_ptr.
Any help would be much appreciated.
C++/CLI has a great feature called mixed mode: you can use both managed and native data types in the same code (in the same C++/CLI class). Try using the objects from that SDK, written in C++, directly in your wrapper.
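Purely as an illustration (my assumptions: InstanceHandle is a native shared_ptr typedef from the SDK, and error handling is omitted), a minimal mixed-mode sketch could hold the native handle through a pointer member instead of a global, since a ref class cannot contain a native object by value:

public ref class Setup
{
public:
    Setup() : m_hApi(new InstanceHandle()) {}
    ~Setup() { this->!Setup(); }                    // destructor
    !Setup() { delete m_hApi; m_hApi = nullptr; }   // finalizer

    unsigned int initializeTest()
    {
        // Keep the API handle alive for the lifetime of the wrapper object.
        *m_hApi = Instance::initialize();
        return (*m_hApi)->deviceCollection()->deviceCount();
    }

private:
    InstanceHandle* m_hApi;  // native handle held through a pointer
};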
I'm currently using Boost.Test and I'm wondering whether it is possible to switch off memory leak detection when compiling in DEBUG mode.
I don't want to use the command-line parameter --detect_memory_leak=0. I'm looking for a kind of #define parameter that switches off the memory leak detection feature in DEBUG mode.
It would also be acceptable to switch off memory leak detection by defining a certain compiler switch. I'm currently using Microsoft Visual Studio 2010.
#define BOOST_TEST_DETECT_MEMORY_LEAK 0 // Preprocessor switch I'm looking for!
#define BOOST_TEST_MODULE MyUnitTest
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_SUITE(MySuite);

BOOST_AUTO_TEST_CASE(MyUnitTest) {
    /// Following code has a memory leak
    /// ....
}

BOOST_AUTO_TEST_SUITE_END()
Just found out that possibly the best way to turn off the detection of memory leaks is to include the following code snippet into one's tests.
#include <boost/test/debug.hpp>

struct GlobalFixture {
    GlobalFixture() {
        boost::debug::detect_memory_leaks(false);
    }
    ~GlobalFixture() { }
};

BOOST_GLOBAL_FIXTURE(GlobalFixture);
Still, I was not able to switch the detection of memory leaks on and off for individual tests.
You can directly set the environment variable BOOST_TEST_DETECT_MEMORY_LEAK to 0, or use putenv:
#include <cstdlib>
//...

BOOST_AUTO_TEST_CASE(MyUnitTest) {
    putenv("BOOST_TEST_DETECT_MEMORY_LEAK=0");
    //...
}
Edit
As you're using Visual Studio 2010, you can try _putenv or _wputenv:
#include <stdlib.h>
//...

BOOST_AUTO_TEST_CASE(MyUnitTest) {
    _putenv("BOOST_TEST_DETECT_MEMORY_LEAK=0");
    //...
}
Otherwise, I found a function detect_memory_leaks in the Boost documentation, but it seems to be available only in recent Boost versions.
_CrtSetDbgFlag(0);
is the only thing that mostly worked for me. Leak messages came out, but not enough to make me wait.
Here's some code detailing everything I tried:
#include <crtdbg.h>   // for _CrtSetDbgFlag

struct GlobalFixture
{
    GlobalFixture()
    {
        // This doesn't seem to do anything
        // boost::debug::detect_memory_leaks(false);

        // This either
        //_putenv("BOOST_TEST_DETECT_MEMORY_LEAK=0");

        // This total hack also does nothing
        // using namespace boost::unit_test::runtime_config;
        // const_cast<boost::runtime::arguments_store&>(argument_store()).set(btrt_detect_mem_leaks, 0);

        // This gets rid of most of the messages
        _CrtSetDbgFlag(0);
    }
};
Why not use the _DEBUG macro?
#ifdef _DEBUG
#define BOOST_TEST_DETECT_MEMORY_LEAK 0
#endif
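Whether Boost honours a BOOST_TEST_DETECT_MEMORY_LEAK preprocessor define depends on the Boost version, so as a sketch (my own combination of the _DEBUG idea with the environment-variable approach shown above, not verified against every Boost version), you could tie the setting to debug builds in a global fixture added to the existing test file:

#include <cstdlib>

struct LeakDetectionFixture
{
    LeakDetectionFixture()
    {
#ifdef _DEBUG
        // Disable Boost.Test leak detection only for debug builds.
        _putenv("BOOST_TEST_DETECT_MEMORY_LEAK=0");
#endif
    }
};

BOOST_GLOBAL_FIXTURE(LeakDetectionFixture);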
I am getting warnings when I try to include <boost/thread.hpp> in C++ Builder. For every unit where I include it, C++ Builder shows these two lines:
thread_heap_alloc.hpp(59): W8128 Can't import a function being defined
thread_heap_alloc.hpp(69): W8128 Can't import a function being defined
I have already tried a few things, but nothing worked.
It compiles correctly; however, it's getting on my nerves. Why is this message being shown?
The lines in question are:
#include <boost/config/abi_prefix.hpp>

namespace boost
{
    namespace detail
    {
        inline BOOST_THREAD_DECL void* allocate_raw_heap_memory(unsigned size)
        {
            void* const heap_memory=detail::win32::HeapAlloc(detail::win32::GetProcessHeap(),0,size);
            if(!heap_memory)
            {
                throw std::bad_alloc();
            }
            return heap_memory;
        }

        inline BOOST_THREAD_DECL void free_raw_heap_memory(void* heap_memory)
        {
            BOOST_VERIFY(detail::win32::HeapFree(detail::win32::GetProcessHeap(),0,heap_memory)!=0);
        }
where line 59 is the { below the first BOOST_THREAD_DECL, and line 69 is the corresponding one in the second function. It looks like BOOST_THREAD_DECL is not defined properly or is mis-defined; trying to follow it through the Boost code is not that easy.
This is Boost 1.39.
Add #define BOOST_THREAD_USE_LIB before including thread.hpp.
This is what I tested:
#define BOOST_THREAD_USE_LIB

extern "C"
{
    namespace boost
    {
        void tss_cleanup_implemented( void )
        {
            /*
            This function's sole purpose is to cause a link error in cases where
            automatic tss cleanup is not implemented by Boost.Threads as a
            reminder that user code is responsible for calling the necessary
            functions at the appropriate times (and for implementing an a
            tss_cleanup_implemented() function to eliminate the linker's
            missing symbol error).

            If Boost.Threads later implements automatic tss cleanup in cases
            where it currently doesn't (which is the plan), the duplicate
            symbol error will warn the user that their custom solution is no
            longer needed and can be removed.*/
        }
    }
}
#include <boost/thread.hpp>
Then set 'Link with Dynamic RTL' and 'Link with Runtime Packages'.
This does a clean build and starts a thread properly.
Given two IDL definitions: (I'm only implementing a client, the server side is fixed.)
// Version 1.2
module Server {
    interface IObject {
        void Foo1();
        void Foo2() raises(EFail);
        string Foo3();
        // ...
    };
};

// Version 2.3
module Server {
    interface IObject {
        // no longer available: void Foo1();
        void Foo2(string x) raises(ENotFound, EFail); // incompatible change
        wstring Foo3();
        // ...
    };
};
(Edit Note: added Foo3 method that cannot be overloaded because the return type changed.)
Is it somehow possible to compile both stub code files in the same C++ CORBA Client App?
Using the defaults of an IDL compiler, the above two IDL definitions will result in stub code that cannot be compiled into the same C++ module, as you'd get multiple definition errors from the linker. The client however needs to be able to talk to both server versions.
What are possible solutions?
(Note: We're using omniORB)
(Adding answer from one Stefan Gustafsson, posted in comp.object.corba 2011-03-08)
If you look at it as a C++ problem instead of a CORBA problem, the solution is C++ namespaces. You could try to wrap the different implementations in different C++ namespaces, like:
namespace v1 {
#include "v1/foo.h" // From foo.idl version 1
}
namespace v2 {
#include "v2/foo.h" // from foo.idl version 2
}
To be able to compile the C++ proxy/stub code, you need to create C++ main files like:
// foo.cpp
namespace v1 {
#include "v1/foo_proxy.cpp" // filename depend on IDL compiler
}
namespace v2 {
#include "v2/foo_proxy.cpp"
}
This will prevent the C++ linker from complaining, since the names will be different. Of course, you could run into problems with C++ compilers not supporting nested namespaces.
A second solution is to implement the invocation using DII (the Dynamic Invocation Interface). You could write a C++ class like:
class ServerCall {
    void foo2_v1() {
        // create request
        // invoke
    }
    void foo2_v2(String arg) {
        // create_list
        // add_value("x", value, ARG_IN)
        // create_request
        // invoke
    }
};
By using DII you can create any invocation you like, and you keep full control of your client code.
I think this is a good idea, but I haven't been able to try it out yet, so there may lurk some unexpected surprises with regard to things no longer being in the global namespace.
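For illustration, the version-2.3 call through DII might look roughly like this (my own sketch using the standard CORBA C++ mapping, not from the original post; error handling is elided):

void foo2_v2(CORBA::Object_ptr obj, const char* x)
{
    // Build the request dynamically instead of using generated stub code.
    CORBA::Request_var req = obj->_request("Foo2");
    req->add_in_arg() <<= x;                 // the string parameter added in version 2.3
    req->set_return_type(CORBA::_tc_void);
    req->invoke();

    if (req->env()->exception())
    {
        // Map ENotFound / EFail to whatever the client expects.
    }
}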
What comes to my mind is splitting the client code into separate libraries, one for each version.
Then you can select the correct client depending on the version to be used.
In a recent project we handled this by introducing a service layer with no dependency on the CORBA IDL.
For example:
class ObjectService
{
public:
    virtual ~ObjectService() {}   // virtual destructor so implementations can be deleted through the base
    virtual void Foo1() = 0;
    virtual void Foo2() = 0;
    virtual void Foo2(const std::string &x) = 0;
};
For each version, create a class derived from ObjectService and implement the operations by calling into the CORBA object. Each derived class must live in a separate library; a sketch of one such implementation follows below.
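Purely as an illustration (my assumptions: the generated 1.2 stubs put the interface in namespace Server, and the class is compiled in its own library against those stubs):

#include <stdexcept>
#include <string>

// Implementation for server version 1.2.
class ServiceObjectImpl12 : public ObjectService
{
public:
    explicit ServiceObjectImpl12(CORBA::Object_ptr obj)
        : m_object(Server::IObject::_narrow(obj)) {}

    void Foo1() { m_object->Foo1(); }
    void Foo2() { m_object->Foo2(); }
    void Foo2(const std::string &x)
    {
        // Version 1.2 has no Foo2(string); fail loudly rather than silently ignore the call.
        throw std::logic_error("Foo2(x) is not available on a 1.2 server");
    }

private:
    Server::IObject_var m_object;
};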
In the client implementation, you only operate on instances of ObjectService.
CORBA::Object_var remoteObject=... // How to get the remote object depends on your project
ObjectService *serviceObject=0;

// Create a service object matching the remote object version.
// Again, this is project specific.
switch (getRemoteObjectVersion(remoteObject))
{
case VERSION_1_2:
    serviceObject=new ServiceObjectImpl12(remoteObject);
    break;
case VERSION_2_3:
    serviceObject=new ServiceObjectImpl23(remoteObject);
    break;
default:
    // No matching version found, throw exception?
    break;
}

// Access the remote object through the service object.
serviceObject->Foo2("42");