When I try to load a compiled shader from memory, I get the error: one or more arguments are invalid. The shader compiles successfully, but after the D3DCompileFromFile() call something in memory seems to be incorrect and the ID3DBlob interface does not receive the right values for some reason.
ID3DBlob* pBlobFX = NULL;
ID3DBlob* pErrorBlob = NULL;
hr = D3DCompileFromFile(str, NULL, NULL, NULL, "fx_5_0", NULL, NULL, &pBlobFX, &pErrorBlob); // OK
if (FAILED(hr))
{
    if (pErrorBlob != NULL)
        OutputDebugStringA((char *)pErrorBlob->GetBufferPointer());
    SAFE_RELEASE(pErrorBlob);
    return hr;
}

// Create the effect
hr = D3DX11CreateEffectFromMemory(pBlobFX->GetBufferPointer(), pBlobFX->GetBufferSize(), 0, pd3dDevice, ppEffect); // Error: E_INVALIDARG One or more arguments are invalid
The legacy DirectX SDK version of Effects for Direct3D 11 only included D3DX11CreateEffectFromMemory for creating effects which required using a compiled shader binary blob loaded by the application.
The latest GitHub version includes the other expected functions:
D3DX11CreateEffectFromFile which loads a compiled binary blob from disk and then creates an effect from it.
D3DX11CompileEffectFromMemory which uses the D3DCompile API to compile a fx file in memory and then create an effect from it.
D3DX11CompileEffectFromFile which uses the D3DCompile API to compile a provided fx file and then create an effect from it.
Using D3DX11CompileEffectFromFile, instead of doing it manually as the original poster tried, is the easiest solution here.
The original library wanted to strongly encourage using build-time rather than run-time compilation of effects. Given that the primary use of Effects 11 today is developer education, this was unnecessarily difficult for new developers to use so the GitHub version now includes all four possible options for creating effects.
Note: The fx_5_0 profile in the HLSL compiler is deprecated, but it is required in order to use Effects 11.
Related
I am trying to load an image resource using the LoadImageA() function, yet it doesn't work and I don't understand why.
Here's a bit of my code :
bool isRessource = IS_INTRESOURCE(107);
// Load the resource to the HGLOBAL.
HGLOBAL imageResDataHandle = LoadImageA(
NULL,
MAKEINTRESOURCEA(107),
IMAGE_BITMAP,
0,
0,
LR_SHARED
);
HRESULT hr = (imageResDataHandle ? S_OK : E_FAIL);
The image I want to load is a bitmap saved in the resources, and represented as such within resources.h:
#define IDB_BITMAP1 107
When I execute the code, isRessource is equal to true, yet hr is equal to E_FAIL.
Any idea as to why this is happening? I am using Visual Studio 2019, and I made the image using Gimp.
After making the same image with the same format in another application (I used Krita) and importing it again, the image finally loads with the same code (I only changed the reference to the resource). I guess bitmaps made with GIMP just won't work in Visual Studio (I tried most of GIMP's bitmap formats).
The first search result for the keywords LoadImage gimp is enough to answer this question. Here is the useful part:
The bitmap exported by GIMP has a broken header. Specifically, the code seems to not write the RGBA masks, which AFAIK are not optional in a BITMAPV5HEADER. This misaligns and changes the size of the entire extended header, incidentally making it look like a BITMAPV4HEADER, which explains why most programs will still open it fine. Without having done any testing, I'd guess LoadImage() is more picky about the values in this extended header; returning NULL is how it indicates failure.
By the way, when you import the bitmap, doesn't the IDE warn you that the format of the image is unknown?
After testing, using LoadImage to load such an image returns NULL, and GetLastError also returns 0.
I need to use a Windows function like GetModuleHandle or GetModuleFileName to find out whether a specific DLL is loaded in the same process where my code is executing.
One module I'm looking for is System.Windows.Forms.dll, but even when it is loaded in the process (here you can see it using Process Explorer)...
GetModuleHandle still will not find it!
HMODULE modHandle = GetModuleHandle(L"System.Windows.Forms.dll");
GetLastError() returns ERROR_MOD_NOT_FOUND
If the function succeeds, the return value is a handle to the specified module.
If the function fails, the return value is NULL.
I think it may be something to do with how the CLR loads these dlls. I see a note on LoadLibraryEx that if the LOAD_LIBRARY_AS_DATAFILE flag is used then:
If this value is used, the system maps the file into the calling process's virtual address space as if it were a data file. Nothing is done to execute or prepare to execute the mapped file. Therefore, you cannot call functions like GetModuleFileName, GetModuleHandle or GetProcAddress with this DLL.
Maybe this is my problem, but regardless of the cause: does anyone know a way to find a managed .NET DLL in a process using native/C++ code?
Thanks!
EDIT:
Based on suggestions from Castorix in the comments I tried to use EnumProcessModules:
HMODULE modules[100];
HANDLE hProcess = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, GetCurrentProcessId());
if (hProcess)
{
    DWORD bytesNeeded;
    BOOL rc = EnumProcessModules(hProcess, modules, sizeof(modules), &bytesNeeded);
    if (rc)
    {
        int count = (int)(bytesNeeded / sizeof(HMODULE));
        for (int i = 0; i < count; i++)
        {
            wchar_t moduleName[260];
            GetModuleFileName(modules[i], moduleName, 260);
        }
    }
    CloseHandle(hProcess); // only close the handle if OpenProcess succeeded
}
This code finds a lot of the modules, but not System.Windows.Forms.dll.
OK, this is an attempt at an answer (or really just a too-long comment, sorry).
Personally, I have never seen managed .NET DLLs in the Process Explorer pane, though I might not have been looking hard or often enough. However, what I can (and always could) see are the NGENed images (*.ni.dll).
Note also the presence of System.Data.dll here, which is not NGENed, but is a mixed mode assembly and contains native code as well as managed code.
So one could conclude, that you can only see NGENed and mixed mode "assemblies" here, because they are still loaded by LoadLibrary or LoadLibraryEx.
Also note my comment, which I reproduce here for easier access:
I think the CLR does not use LoadLibrary, which would explain why you cannot "see" them using the APIs you described. In fact, CLR 4 Does Not Use LoadLibrary to Load Assemblies is a blog entry that is relevant. You could always check the sources (CoreCLR, but it shouldn't matter) for how it is done in particular. I have no really good place, but you could start here and go from there. Use the ICorDebug interface instead.
Here are some relevant quotes from the blog entry linked above:
You may be asking yourself: …who cares? Well, first of all it's good to know. I haven't noticed a public service announcement to the above. It is an implementation detail, however—CLR assemblies are not even guaranteed to be implemented using files, not to mention DLL files in a specific format that are loaded using the LoadLibrary Win32 API.

However, there are several tools and scenarios which have come to rely on the fact that the CLR loads assemblies using LoadLibrary. For example, up to CLR 4, if you wanted to know which .NET assemblies were loaded in your process, a fairly reliable heuristic would be to fire up Sysinternals Process Explorer and look at the DLLs view of a given process. This doesn't work for CLR 4, as you can see here:
Frankly, I don't know how Process Explorer manages to show assemblies (not NGENed and not mixed mode) in your case, apart from the possibility that you are watching a CLR 2 process. However, mind you that PE does not only use Win32 APIs. It also uses WMI and probably the CLR directly for more information. For example, the "Process Properties/.NET Assemblies" and "Process Properties/.NET Performance" tabs most likely use ICorDebug/ICorProfile and performance counters/ETW respectively.
You might need to use one of those interfaces as well, or something else from the unmanaged Debugging API or the unmanaged API in general.
Whatever it is, I don't think that EnumProcessModules, etc. will get you there for reasons above.
To add to the above answer and provide relevant code: it was not possible to use a native function like EnumProcessModules to detect the non-NGENed .NET DLLs, and instead I had to use C++ interfaces to the CLR.
There is a lot more info here: https://blogs.msdn.microsoft.com/calvin_hsia/2013/12/05/use-reflection-from-native-c-code-to-run-managed-code/ The code most relevant to this particular question was:
HRESULT GetAssemblyFromAppDomain(_AppDomain* pAppDomain, LPCWSTR wszAssemblyName, _Deref_out_opt_ _Assembly **ppAssembly)
{
*ppAssembly = NULL;
// get the assemblies into a safearray
SAFEARRAY *pAssemblyArray = NULL;
HRESULT hr = pAppDomain->GetAssemblies(&pAssemblyArray);
if (FAILED(hr))
{
return hr;
}
// put the safearray into a smart ptr, so it gets released
CComSafeArray<IUnknown*> csaAssemblies;
csaAssemblies.Attach(pAssemblyArray);
size_t cchAssemblyName = wcslen(wszAssemblyName);
long cAssemblies = csaAssemblies.GetCount();
for (long i=0; i<cAssemblies; i++)
{
CComPtr<_Assembly> spAssembly;
spAssembly = csaAssemblies[i];
if (spAssembly == NULL)
continue;
CComBSTR cbstrAssemblyFullName;
hr = spAssembly->get_FullName(&cbstrAssemblyFullName);
if (FAILED(hr))
continue;
// is it the one we want?
if (cbstrAssemblyFullName != NULL &&
_wcsnicmp(cbstrAssemblyFullName,
wszAssemblyName,
cchAssemblyName) == 0)
{
*ppAssembly = spAssembly.Detach();
hr = S_OK;
break;
}
}
if (*ppAssembly == 0)
{
hr = E_FAIL;
}
return hr;
}
There's some information on the CLR interfaces here:
ICLRMetaHost
ICLRRuntimeInfo
ICorRuntimeHost
_AppDomain
_Assembly
I am writing a DirectShow application which connects a file source to an MPEG-4 DMO.
The graph looks like:
File Source -> DMO Wrapper Filter -> Video Renderer.
Here are my questions:
1. How can I add a file source filter to the graph? I got this piece of code, which GraphEdit Plus generated. Is this piece of code correct? I see that it uses CComPtr, which needs atlbase.h. With VS2010 Express edition I don't have the ATL headers.
LPCOLESTR srcFile1 = L"C:\\Users\\shyam\\Downloads\\sample.avi";
CComPtr<IBaseFilter> pBaseFilter;
hr = pBaseFilter.CoCreateInstance(CLSID_AsyncReader);
CComQIPtr<IFileSourceFilter> pFileSourceFilter = pBaseFilter;
ATLASSERT(pFileSourceFilter);
pFileSourceFilter->Load(srcFile1, NULL);
hr = pGB->AddFilter(pBaseFilter, L"File Source (Async.)");
2. I manually downloaded atlbase.h from the net and I am encountering several build errors. What can be done in this case?
Please help me move in the right direction!
Thanks,
Shyam
The code generated above is correct. To get rid of your compilation errors, download and install the latest Windows SDK; it should have the correct ATL headers.
It's possible to write C++ code for DirectShow without ATL, but I strongly recommend against it unless you like spaghetti with leaks. Here's what your code would look like:
IBaseFilter* pBaseFilter = NULL;
hr = CoCreateInstance(CLSID_AsyncReader, NULL, CLSCTX_INPROC_SERVER, IID_IBaseFilter, (void**)&pBaseFilter);
IFileSourceFilter* pFileSourceFilter = NULL;
hr = pBaseFilter->QueryInterface(IID_IFileSourceFilter, (void**)&pFileSourceFilter);
ASSERT(pFileSourceFilter != NULL);
hr = pFileSourceFilter->Load(L"C:\\Users\\shyam\\Downloads\\sample.avi", NULL);
if (pFileSourceFilter)
    pFileSourceFilter->Release();
hr = pGB->AddFilter(pBaseFilter, L"AsyncReader"); // AddFilter lives on the graph builder, not the source filter
You also need to check hr for errors at every step.
The latest Windows SDK may not have all the DirectShow interfaces, so I suggest the Microsoft Windows SDK Update for Windows Vista (for qedit.h). But seriously, please don't write DirectShow or COM code without ATL; even DirectShowLib in C# would be easier for a simple app.
I'm planning to rewrite a small C++ OpenGL font library I made a while back using FreeType 2, since I recently discovered the changes in newer OpenGL versions. My code uses immediate mode and some function calls I'm pretty sure are deprecated now, e.g. glLineStipple.

I would very much like to support a range of OpenGL versions, such that the code uses e.g. VBOs when possible and falls back on immediate mode if nothing else is available, and so forth. I'm not sure how to go about it though. AFAIK, you can't do a compile-time check, since you need a valid OpenGL context created at runtime. So far, I've come up with the following proposals (with inspiration from other threads/sites):

Use GLEW to make runtime checks in the drawing functions and to check for function support (e.g. glLineStipple)
Use some #defines and other preprocessor directives that can be specified at compile time to compile different versions that work with different OpenGL versions
Compile different versions supporting different OpenGL versions and supply each as a separate download
Ship the library with a script (Python/Perl) that checks the OpenGL version on the system (if possible/reliable) and makes the appropriate modifications to the source so it fits the user's version of OpenGL
Target only newer OpenGL versions and drop support for anything below

I'm probably going to use GLEW anyhow to easily load extensions.
FOLLOW-UP:
Based on your very helpful answers, I tried to whip up a few lines based on my old code; here's a snippet (not tested/finished). I declare the appropriate function pointers in the config header, then when the library is initialized I try to get the right function pointers. If VBOs fail (pointers null), I fall back to display lists (deprecated in 3.0) and then finally to vertex arrays. Should I (maybe?) also check for available ARB extensions if e.g. VBOs fail to load, or is that too much work? Would this be a solid approach? Comments are appreciated :)
#if defined(WIN32) || defined(_WIN32) || defined(__WIN32__)
#define OFL_WINDOWS
// other stuff...
#ifndef OFL_USES_GLEW
// Check which extensions are supported
#else
// Declare vertex buffer object extension function pointers
PFNGLGENBUFFERSPROC glGenBuffers = NULL;
PFNGLBINDBUFFERPROC glBindBuffer = NULL;
PFNGLBUFFERDATAPROC glBufferData = NULL;
PFNGLVERTEXATTRIBPOINTERPROC glVertexAttribPointer = NULL;
PFNGLDELETEBUFFERSPROC glDeleteBuffers = NULL;
PFNGLMULTIDRAWELEMENTSPROC glMultiDrawElements = NULL;
PFNGLBUFFERSUBDATAPROC glBufferSubData = NULL;
PFNGLMAPBUFFERPROC glMapBuffer = NULL;
PFNGLUNMAPBUFFERPROC glUnmapBuffer = NULL;
#endif
#elif some_other_system
Init function:
#ifdef OFL_WINDOWS
bool loaded = true;
// Attempt to load vertex buffer object extensions
loaded = ((glGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers")) != NULL && loaded);
loaded = ((glBindBuffer = (PFNGLBINDBUFFERPROC)wglGetProcAddress("glBindBuffer")) != NULL && loaded);
loaded = ((glBufferData = (PFNGLBUFFERDATAPROC)wglGetProcAddress("glBufferData")) != NULL && loaded);
loaded = ((glVertexAttribPointer = (PFNGLVERTEXATTRIBPOINTERPROC)wglGetProcAddress("glVertexAttribPointer")) != NULL && loaded);
loaded = ((glDeleteBuffers = (PFNGLDELETEBUFFERSPROC)wglGetProcAddress("glDeleteBuffers")) != NULL && loaded);
loaded = ((glMultiDrawElements = (PFNGLMULTIDRAWELEMENTSPROC)wglGetProcAddress("glMultiDrawElements")) != NULL && loaded);
loaded = ((glBufferSubData = (PFNGLBUFFERSUBDATAPROC)wglGetProcAddress("glBufferSubData")) != NULL && loaded);
loaded = ((glMapBuffer = (PFNGLMAPBUFFERPROC)wglGetProcAddress("glMapBuffer")) != NULL && loaded);
loaded = ((glUnmapBuffer = (PFNGLUNMAPBUFFERPROC)wglGetProcAddress("glUnmapBuffer")) != NULL && loaded);
if (!loaded)
    std::cout << "OFL: Current OpenGL context does not support vertex buffer objects" << std::endl;
else {
    #define OFL_USES_VBOS
    std::cout << "OFL: Loaded vertex buffer object extensions successfully" << std::endl;
    return true;
}
if (glMajorVersion >= 3) {
    std::cout << "OFL: Using vertex arrays" << std::endl;
    #define OFL_USES_VERTEX_ARRAYS
} else {
    // Display lists were deprecated in 3.0 (although still available through ARB extensions)
    std::cout << "OFL: Using display lists" << std::endl;
    #define OFL_USES_DISPLAY_LISTS
}
#elif some_other_system
First of all, and you're going to be safe with this one because it's supported everywhere: rewrite your font renderer to use vertex arrays. It's only a small step from VAs to VBOs, but VAs are supported everywhere. You only need a small set of extension functions, so it may make sense to do the loading manually rather than depend on GLEW; linking GLEW statically just for that is huge overkill.
Then put the calls into wrapper functions that you refer to through function pointers, so that you can switch render paths that way. For example, add a function stipple_it or so, which internally either calls glLineStipple or builds and sets the appropriate fragment shader for it.
Similar for glVertexPointer vs. glVertexAttribPointer.
If you do want to make every check by hand, you won't get away without some #defines, because Android/iOS only support OpenGL ES and there the runtime checks would be different.
The run-time checks are also almost unavoidable because (from personal experience) there are a lot of caveats with different drivers from different hardware vendors (for anything above OpenGL 1.0, of course).
"Target only newer OpenGL versions and drop support for anything below" would be a viable option, since most of the videocards by ATI/nVidia and even Intel support some version of OpenGL 2.0+ which is roughly equivalent to the GL ES 2.0.
GLEW is a good way to ease the GL extension fetching. Still, there are issues with the GL ES on embedded platforms.
Now the loading procedure:
On Win32/Linux, just check the function pointer for not being NULL and use the extension string from GL to know what is supported on this concrete hardware.
The "loading" for iOS/Android/MacOSX would be just storing the pointers or even "do-nothing". Android is a different beast, here you have static pointers, but the need to check extension. Even after these checks you might not be sure about some things that are reported as "working" (I'm talking about "noname" Android devices or simple gfx hardware). So you will add your own(!) checks based on the name of the videocard.
The OSX/iOS OpenGL implementation "just works". If you're running on 10.5 you get GL 2.1; on 10.6, 2.1 plus some extensions which make it almost like 3.1/3.2; on 10.7, a 3.2 Core Profile. No GL 4.0 (which is mostly an evolution of 3.2) on Macs yet.
If you're interested in my personal opinion, then I'm mostly from the "reinvent everything" camp and over the years we've been using some autogenerated extension loaders.
Most important, you're on the right track: the rewrite to VBO/VA/Shaders/NoFFP would give you a major performance boost.
I'm trying to detect if a DVD-RAM media is empty or not, with C++ on Windows. The simplest choice is to use IMAPI (version 2) - boilerplate code omitted:
IMAPI_FORMAT2_DATA_MEDIA_STATE state;
HRESULT hr;
// ... Initialize an MsftDiscFormat2Data COM object and put recorder
hr = format->get_CurrentMediaStatus( &state );
// ... Verify returned status ...
return (state & IMAPI_FORMAT2_DATA_MEDIA_STATE_BLANK);
This code usually works perfectly. However, with DVD-RAM it gives the wrong result: the only flag set in the returned state is IMAPI_FORMAT2_DATA_MEDIA_STATE_OVERWRITE_ONLY (= 0x1).
On Windows Vista 32 bit it works as expected.
Does anyone knows the reason for this result? Is there any workaround?
You can use the IDiscFormat2::get_MediaHeuristicallyBlank method from the IDiscFormat2 interface. It attempts to determine whether the media is blank using heuristics (mainly for DVD+RW and DVD-RAM media).
VARIANT_BOOL vbBlank;
hr = format->get_MediaHeuristicallyBlank(&vbBlank);
if (VARIANT_TRUE == vbBlank)
    Log("The media is blank.");
To determine whether the drive reports the current media as physically blank, you can use the IDiscFormat2::get_MediaPhysicallyBlank method.
As for the reason for the different behavior between Windows 7 x64 and Windows Vista x86: the IMAPIv2 versions may differ between those systems. You may want to update your Vista machine with the latest Image Mastering API v2.0 update package to get the same results on both systems.