IDirectDrawSurface7::Blt returned E_INVALIDARG - c++

Step 1:
Image* image = NULL;
image = Bitmap::FromFile(m_lpwFPSImagePath[i], TRUE);

DDSURFACEDESC2 ddsd;
DDCOLORKEY ddck;
ZeroMemory( &ddsd, sizeof( ddsd ) );
ddsd.dwSize = sizeof( ddsd );
ddsd.dwFlags = DDSD_CAPS | DDSD_WIDTH | DDSD_HEIGHT;
ddsd.ddsCaps.dwCaps = DDSCAPS_OFFSCREENPLAIN | DDSCAPS_VIDEOMEMORY;
ddsd.dwWidth = image->GetWidth();
ddsd.dwHeight = image->GetHeight();

hr = m_pDevice->CreateSurface( &ddsd, &m_pFPSTexture, NULL );
if( hr != DD_OK )
{
    if( hr == DDERR_OUTOFVIDEOMEMORY )
    {
        ddsd.ddsCaps.dwCaps = DDSCAPS_OFFSCREENPLAIN | DDSCAPS_SYSTEMMEMORY;
        hr = m_pDevice->CreateSurface( &ddsd, &m_pFPSTexture, NULL );
    }
}
Step 2:
RECT SrcRect = { 0, 0, fTexWidth, fTexHeight };
RECT DstRect = { 0, 0, 60, 20 };
hr = m_pPrimarySurf->Blt( &DstRect, m_pFPSTexture, &SrcRect, DDBLT_WAIT, NULL );
Note:
The image size is 3170 x 64.
Why does m_pPrimarySurf->Blt(...) return E_INVALIDARG?
Thanks!

This happened to me too. I fixed it by changing the driver type: when creating the DirectDraw object, I specified that software-only rendering should be used, as described in the DirectDrawCreate function documentation on MSDN:

LPDIRECTDRAW dd;
HRESULT const dd_created = DirectDrawCreate( reinterpret_cast<GUID*>( DDCREATE_EMULATIONONLY ), &dd, nullptr );

I'm running an x86 application on Windows 10 x64 version 10.0.18363.1082, inside VirtualBox 5.2.42_Ubuntu r137960, on Ubuntu x64 18.04.5 LTS, on a Lenovo laptop with Intel UHD Graphics 620 (WHL GT2) graphics.

Related

Empty outputInfoList for second graphic card

I have two discrete video adapters in my PC: a GTX 1060 and an RTX 2080 Ti.
I would like to use the second one for my DXUT app. I found the command-line argument -adapter# to specify it; however, my program crashes when I run it with the -adapter1 option (1 is the adapter ordinal for the RTX 2080).
I started debugging and found the following issue: EnumOutputs immediately returns DXGI_ERROR_NOT_FOUND.
For the GTX 1060, the first EnumOutputs call returns a valid output.
Code section:
HRESULT CD3D11Enumeration::EnumerateOutputs( _In_ CD3D11EnumAdapterInfo* pAdapterInfo )
{
    HRESULT hr;
    IDXGIOutput* pOutput;
    for( int iOutput = 0; ; ++iOutput )
    {
        pOutput = nullptr;
        // next line immediately returns DXGI_ERROR_NOT_FOUND for the RTX 2080
        hr = pAdapterInfo->m_pAdapter->EnumOutputs( iOutput, &pOutput );
        if( DXGI_ERROR_NOT_FOUND == hr )
        {
            return S_OK;
        }
        ...
    }
}

hr = EnumerateOutputs( pAdapterInfo );
if( FAILED( hr ) || pAdapterInfo->outputInfoList.empty() ) // fails here because the second condition is true
{
    delete pAdapterInfo;
    continue;
}
Does anyone know how to fix this problem?
All drivers are up to date.
P.S. I also tried to select the graphics card via Windows and the GeForce software, but that seems applicable only to laptops with both an integrated and a discrete card.
Oh my God...
The problem is that only the GTX 1060 is connected to my monitor.
My tutor explained to me that it's impossible to present the frame buffer in this case.
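The takeaway can be sketched without any DXGI calls: EnumOutputs only reports outputs for an adapter that actually drives a display, so a launcher should fall back to an adapter with at least one output. A minimal, platform-neutral sketch of that selection logic (the helper name and the output-count vector are mine, not part of DXUT):

```cpp
#include <cstddef>
#include <vector>

// Given the number of outputs EnumOutputs reported for each adapter
// ordinal, return the first ordinal that actually drives a display,
// or -1 if no adapter has an attached output.
int firstAdapterWithOutput(const std::vector<std::size_t>& outputCounts) {
    for (std::size_t i = 0; i < outputCounts.size(); ++i)
        if (outputCounts[i] > 0)
            return static_cast<int>(i);
    return -1;
}
```

With the setup above (monitor on the GTX 1060, headless RTX 2080), this would pick the GTX 1060's ordinal; -adapter1 fails precisely because the RTX 2080's output count is zero.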

Implement Microsoft Speech Platform languages in SAPI 5

I created a little program in C++ that uses the SAPI library. In my code, I list the voices installed on my system. When I run it, it reports 11 voices, but only 8 are installed, and the only voice that speaks is Microsoft Anna. I checked this in the registry (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech\Voices).
I have several languages installed, in particular voices from the Microsoft Speech Platform, but none of them can be used.
Furthermore, when I change the voice ID, I get an unhandled-exception error; I think this is because the chosen ID does not exist.
Here is my code
#include "stdafx.h"

int main( int argc, char* argv[] )
{
    CComPtr<ISpObjectToken> cpVoiceToken;
    CComPtr<IEnumSpObjectTokens> cpEnum;
    ISpVoice* pVoice = NULL;
    ULONG count = 0;

    if( FAILED( ::CoInitialize( NULL ) ) )
        return FALSE;

    HRESULT hr = CoCreateInstance( CLSID_SpVoice, NULL, CLSCTX_ALL,
                                   IID_ISpVoice, ( void** )&pVoice );
    if( SUCCEEDED( hr ) )
    {
        // Enumerate voices
        hr = SpEnumTokens( SPCAT_VOICES, NULL /*L"Gender=Female"*/, NULL, &cpEnum );
        printf( "Success\n" );
    }
    else
    {
        printf( "Failed to initialize SAPI\n" );
    }

    if( SUCCEEDED( hr ) )
    {
        // Get the number of voices
        hr = cpEnum->GetCount( &count );
        printf( "TTS voices found: %lu\n", count );
    }
    else
    {
        printf( "Failed to enumerate voices\n" );
        hr = S_OK;
    }

    if( SUCCEEDED( hr ) )
    {
        cpVoiceToken.Release();
        cpEnum->Item( 3, &cpVoiceToken ); // 3 is the index of the voice
        pVoice->SetVoice( cpVoiceToken );
        hr = pVoice->Speak( L"You have selected Microsoft Server Speech Text to Speech Voice (en-GB, Hazel) as the computer's default voice.", 0, NULL ); // speak the sentence
        pVoice->Release();
        pVoice = NULL;
    }

    ::CoUninitialize();
    system( "PAUSE" );
}
The only voice that works is Microsoft Anna, and I don't understand why. If the other voices were not available, the program wouldn't report so many (11). I wonder whether the Microsoft Speech Platform voices are compatible with SAPI at all.
After many tries and failures, I managed to find the answer to my problem.
I had been compiling my program as Win32, so I switched the target to x64 and recompiled the solution. After changing the voice ID, the Microsoft Speech Platform voices worked. This means the Microsoft Speech Platform voices are 64-bit voices, while Microsoft Anna is a 32-bit voice.
The following post inspired me.
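Independently of the bitness issue, the unhandled exception on a bad voice ID can be avoided by validating the hard-coded index against GetCount() before calling cpEnum->Item(). A minimal sketch of that guard as a plain function (the name pickVoiceOrdinal is mine, not part of SAPI):

```cpp
// Return the requested voice ordinal if the enumeration contains it,
// fall back to the first voice (ordinal 0) if it does not, and return
// -1 when no voices were enumerated at all.
long pickVoiceOrdinal(unsigned long requested, unsigned long count) {
    if (count == 0)
        return -1;  // nothing to speak with
    return requested < count ? static_cast<long>(requested) : 0;
}
```

In the code above, one would call cpEnum->Item() only when the result is non-negative, instead of passing a hard-coded 3 unconditionally.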

Direct3D11 Screenshot crash

I'm trying to take a screenshot (every second, without saving to disk) of a Direct3D11 application. The code works fine on my PC (Intel CPU, Radeon GPU), but crashes after a few iterations on two others (Intel CPU + Intel integrated GPU, Intel CPU + Nvidia GPU).
void extractBitmap(void* texture) {
    if (texture) {
        ID3D11Texture2D* d3dtex = (ID3D11Texture2D*)texture;
        ID3D11Texture2D* pNewTexture = NULL;

        D3D11_TEXTURE2D_DESC desc;
        d3dtex->GetDesc(&desc);
        desc.BindFlags = 0;
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
        desc.Usage = D3D11_USAGE_STAGING;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;

        HRESULT hRes = D3D11Device->CreateTexture2D(&desc, NULL, &pNewTexture);
        if (FAILED(hRes)) {
            printCon(std::string("CreateTexture2D FAILED:" + format_error(hRes)).c_str());
            if (hRes == DXGI_ERROR_DEVICE_REMOVED)
                printCon(std::string("DXGI_ERROR_DEVICE_REMOVED -- " + format_error(D3D11Device->GetDeviceRemovedReason())).c_str());
        }
        else {
            if (pNewTexture) {
                D3D11DeviceContext->CopyResource(pNewTexture, d3dtex);
                // Working with the texture
                pNewTexture->Release();
            }
        }
    }
    return;
}
D3D11SwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast< void** >(&pBackBuffer));
extractBitmap(pBackBuffer);
pBackBuffer->Release();
Crash log:
CreateTexture2D FAILED:887a0005
DXGI_ERROR_DEVICE_REMOVED -- 887a0020
Once I comment out D3D11DeviceContext->CopyResource(pNewTexture, d3dtex);, the code works fine on all three PCs.
Got it working. I was running my code on a separate thread. After I moved it into the hooked Present(), the application no longer crashed.
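The fix makes sense because the immediate ID3D11DeviceContext is not thread-safe: all calls on it should come from a single thread. A portable sketch of the hand-off pattern implied by the answer, where the once-per-second timer thread merely raises a flag and the hooked Present() performs the copy on the render thread (the names are hypothetical):

```cpp
#include <atomic>

// Set by the once-per-second timer thread; consumed by the render thread.
std::atomic<bool> g_captureRequested{false};

// Timer thread: request a screenshot, never touch the device context.
void requestCapture() {
    g_captureRequested.store(true);
}

// Render thread, at the top of the hooked Present(): returns true at most
// once per request, so CopyResource runs on the context's own thread.
bool shouldCaptureThisFrame() {
    return g_captureRequested.exchange(false);
}
```

Inside the hooked Present(), one would call extractBitmap(pBackBuffer) only when shouldCaptureThisFrame() returns true.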

Why does MFTEnumEx() corrupt the stack?

Below is some dummy code for enumerating the available multiplexers. On my system there is only one mux available (as expected).
When I call MFTEnumEx(), the function succeeds, but the stack gets corrupted; that's why I added the 64 KB buffer. 16 bytes are written at offset 16. I tried this code on two different machines (both Windows 10) with the same result. Can somebody explain this?
BYTE buff[ 65536 ];
HRESULT hr;

hr = CoInitialize( NULL );
ATLASSERT( SUCCEEDED( hr ) );

hr = MFStartup( MF_VERSION, MFSTARTUP_FULL );
ATLASSERT( SUCCEEDED( hr ) );

IMFActivate** ppActivate = NULL;
UINT numActivate = 0;
hr = MFTEnumEx( MFT_CATEGORY_MULTIPLEXER,
                MFT_ENUM_FLAG_SYNCMFT | MFT_ENUM_FLAG_ASYNCMFT | MFT_ENUM_FLAG_HARDWARE |
                MFT_ENUM_FLAG_FIELDOFUSE | MFT_ENUM_FLAG_LOCALMFT | MFT_ENUM_FLAG_TRANSCODE_ONLY,
                NULL,
                NULL,
                &ppActivate,
                &numActivate );

How to get width and height of directshow webcam video stream

I found a bit of code that gives me access to the raw pixel data from my webcam. However, I also need the image width, height, and pixel format, and preferably the data stride (pitch, memory padding, or whatever you want to call it), in case it is ever something other than width * bytes-per-pixel.
#include <windows.h>
#include <dshow.h>

#pragma comment(lib,"Strmiids.lib")

#define DsHook(a,b,c) if (!c##_) { INT_PTR* p=b+*(INT_PTR**)a; VirtualProtect(&c##_,4,PAGE_EXECUTE_READWRITE,&no);\
    *(INT_PTR*)&c##_=*p; VirtualProtect(p, 4,PAGE_EXECUTE_READWRITE,&no); *p=(INT_PTR)c; }

// Here you get the image data in buf / len. Process it before calling Receive_, because the renderer deallocates it.
HRESULT ( __stdcall * Receive_ ) ( void* inst, IMediaSample *smp );
HRESULT __stdcall Receive ( void* inst, IMediaSample *smp ) {
    BYTE* buf; smp->GetPointer(&buf); DWORD len = smp->GetActualDataLength();
    //AM_MEDIA_TYPE* info;
    //smp->GetMediaType(&info);
    HRESULT ret = Receive_ ( inst, smp );
    return ret;
}

int WINAPI WinMain(HINSTANCE inst,HINSTANCE prev,LPSTR cmd,int show){
    HRESULT hr = CoInitialize(0); MSG msg={0}; DWORD no;

    IGraphBuilder* graph = 0; hr = CoCreateInstance( CLSID_FilterGraph, 0, CLSCTX_INPROC, IID_IGraphBuilder, (void**)&graph );
    IMediaControl* ctrl = 0;  hr = graph->QueryInterface( IID_IMediaControl, (void**)&ctrl );

    ICreateDevEnum* devs = 0; hr = CoCreateInstance( CLSID_SystemDeviceEnum, 0, CLSCTX_INPROC, IID_ICreateDevEnum, (void**)&devs );
    IEnumMoniker* cams = 0;   hr = devs ? devs->CreateClassEnumerator( CLSID_VideoInputDeviceCategory, &cams, 0 ) : 0;
    IMoniker* mon = 0;        hr = cams->Next( 1, &mon, 0 );      // get the first capture device found (webcam?)
    IBaseFilter* cam = 0;     hr = mon->BindToObject( 0, 0, IID_IBaseFilter, (void**)&cam );
    hr = graph->AddFilter( cam, L"Capture Source" );              // add the webcam to the graph as the source

    IEnumPins* pins = 0;      hr = cam ? cam->EnumPins( &pins ) : 0; // we need the output pin to autogenerate the rest
    IPin* pin = 0;            hr = pins ? pins->Next( 1, &pin, 0 ) : 0; // of the graph via graph->Render
    hr = graph->Render( pin ); // the graph builder now builds the whole filter chain, including MJPG decompression on some webcams

    IEnumFilters* fil = 0;    hr = graph->EnumFilters( &fil );    // from all the newly added filters
    IBaseFilter* rnd = 0;     hr = fil->Next( 1, &rnd, 0 );       // we find the last one (the renderer),
    hr = rnd->EnumPins( &pins );   // because the data we are interested in is pumped into the renderer's input pin
    hr = pins->Next( 1, &pin, 0 ); // via the Receive member of the IMemInputPin interface
    IMemInputPin* mem = 0;    hr = pin->QueryInterface( IID_IMemInputPin, (void**)&mem );

    DsHook( mem, 6, Receive ); // so we redirect it to our own proc to grab the image data

    hr = ctrl->Run();
    while ( GetMessage( &msg, 0, 0, 0 ) ) {
        TranslateMessage( &msg );
        DispatchMessage( &msg );
    }
}
Bonus points if you can tell me how to get this thing not to render a window, but still give me access to the image data.
That's really ugly. Please don't do that. Insert a pass-through filter like the sample grabber instead (as I replied to your other post on the same topic). Connecting the sample grabber to the null renderer gets you the bits in a clean, safe way without rendering the image.
To get the stride, you need to get the media type, either through ISampleGrabber or IPin::ConnectionMediaType. The format block will be either a VIDEOINFOHEADER or a VIDEOINFOHEADER2 (check the format GUID). The BITMAPINFOHEADER members biWidth and biHeight define the bitmap dimensions (and hence the stride). If the source RECT is not empty, it defines the relevant image area within the bitmap.
I'm going to have to wash my hands now after touching this post.
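For the common case where the format block describes an uncompressed RGB bitmap, the stride follows the standard DIB rule: each row is padded up to a multiple of 4 bytes. A small, platform-neutral sketch of that calculation (the function name is mine, not part of DirectShow):

```cpp
#include <cstdint>

// Stride in bytes of a DIB row: biWidth pixels at biBitCount bits each,
// rounded up to the next 4-byte (DWORD) boundary.
std::int32_t dibStrideBytes(std::int32_t biWidth, std::uint16_t biBitCount) {
    return ((biWidth * biBitCount + 31) / 32) * 4;
}
```

For a 24-bit bitmap 2 pixels wide this gives 8 bytes rather than 6; for 32-bit formats the padding is always zero.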
I feel sorry for you. When this interface was created, it was probably not designed by the best programmers.
// Here you get the image data in buf / len. Process it before calling Receive_, because the renderer deallocates it.
BITMAPINFOHEADER bmpInfo; // current bitmap header info
int stride;

HRESULT ( __stdcall * Receive_ ) ( void* inst, IMediaSample *smp );
HRESULT __stdcall Receive ( void* inst, IMediaSample *smp )
{
    BYTE* buf; smp->GetPointer(&buf); DWORD len = smp->GetActualDataLength();
    HRESULT ret = Receive_ ( inst, smp );

    AM_MEDIA_TYPE* info;
    HRESULT hr = smp->GetMediaType( &info );
    if ( hr != S_OK )
    {
        // TODO: error
    }
    else
    {
        if ( info->formattype == FORMAT_VideoInfo )
        {
            const VIDEOINFOHEADER* vi = reinterpret_cast<VIDEOINFOHEADER*>( info->pbFormat );
            const BITMAPINFOHEADER& bmiHeader = vi->bmiHeader;

            //! now bmiHeader.biWidth contains the data stride (in pixels)
            stride = bmiHeader.biWidth;
            bmpInfo = bmiHeader;

            int width = vi->rcTarget.right - vi->rcTarget.left;
            //! replace the data stride by the actual width
            if ( width != 0 )
                bmpInfo.biWidth = width;
        }
        else
        {
            // unsupported format
        }
    }
    DeleteMediaType( info );
    return ret;
}
Here's how to add the null renderer that suppresses the rendering window. Add this directly after creating the IGraphBuilder*:

// create the null renderer and add it to the graph
IBaseFilter* m_pNULLRenderer = 0;
hr = CoCreateInstance( CLSID_NullRenderer, NULL, CLSCTX_INPROC_SERVER, IID_IBaseFilter, (void**)&m_pNULLRenderer );
hr = graph->AddFilter( m_pNULLRenderer, L"Null Renderer" );
That DsHook hack is the only elegant DirectShow code of which I am aware.
In my experience, the DirectShow API is a convoluted nightmare, requiring hundreds of lines of code for even the simplest operation, and the adoption of a whole programming paradigm just to access your web camera. So if this code does the job for you, as it did for me, use it and enjoy having fewer lines of code to maintain.