How to define an audio output programmatically on Windows 10? - c++

I'm developing a C++ application that uses the Microsoft Speech API (SAPI). I have written numerous functions related to Text-To-Speech, among them a function that lists the available audio outputs and a function that sets the audio output.
I started developing this program on Windows 7, but I have now switched to Windows 10. However, the function that sets the audio output no longer works. I didn't edit a thing in my code, and on Windows 7 it worked perfectly.
Here is the code that lists the available audio outputs:
int getAudioOut( int auOut ) //get audio outputs function
{
if( SUCCEEDED( hr ) )
{
//Enumerate Audio Outputs
hr = SpEnumTokens( SPCAT_AUDIOOUT, NULL, NULL, &cpEnum );
cpEnum->GetCount( &vCount );
cpEnum->Item( saveAudio, &cpAudioOutToken );
SpGetDescription( cpAudioOutToken, &dynStr );
printf( "Defined audio output is: %ls\n\n", dynStr );
dynStr.Clear();
//Loop through the audio output list and enumerate them all
for( audioOut = 0; audioOut < vCount; audioOut++ ) //iterate over all outputs (safe when vCount is 0)
{
cpAudioOutToken.Release();
cpEnum->Item( audioOut, &cpAudioOutToken );
SpGetDescription( cpAudioOutToken, &dynStr );
printf( "Defined Audio Output %i - %ls\n", audioOut, dynStr );
dynStr.Clear();
}
printf( "\n" );
audioOut = saveAudio;
cpEnum.Release();
cpAudioOutToken.Release();
}
else
{
printf( "Could not enumerate available audio outputs\n" );
}
return true;
}
Here is the code that sets the audio output:
int setAudioOut( int auOut ) //define audio output function
{
if( SUCCEEDED( hr ) )
{
hr = SpEnumTokens( SPCAT_AUDIOOUT, NULL, NULL, &cpEnum );
cpEnum->GetCount( &vCount );
size_t nOut = auOut;
if( nOut >= vCount )
{
cout << "Not so many audio outputs available! Try again\n" << endl;
}
else
{
cout << "Success" << endl;
}
ULONG audioOut = static_cast<ULONG>( nOut ); //convert nOut to ULONG audioOut
cpEnum->Item( audioOut, &cpAudioOutToken );
SpGetDescription( cpAudioOutToken, &dynStr );
printf( "You chose %ls\n\n", dynStr );
cpVoice->SetOutput( cpAudioOutToken, TRUE ); //Initialization of the Audio Output
dynStr.Clear();
cpEnum.Release();
cpAudioOutToken.Release();
saveAudio = audioOut; //define saveAudio to audioOut value
}
else
{
printf( "Could not set audio output\n" );
}
return true;
}
When I start my program and call the getAudioOut function, I get the following listing:
The first line shows the default audio output, and the two lines below it are the available outputs. On Windows 7, when I set the second audio output (Lautsprecher / Kopfhörer) as the default, no sound comes out of the first one (Digitalaudio), which makes sense. However, on Windows 10 I reproduced the same procedure and it doesn't work: the audio output always stays the one defined in the Windows audio menu.
My question is, has anyone experienced this issue? Is there an alternative way to set an audio output programmatically?
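For reference, SetOutput does not have to receive the token itself: the sphelper.h helper SpCreateObjectFromToken can create an ISpAudio object from the enumerated token, and that object can be passed to SetOutput instead. A rough sketch, reusing the globals from the code above and not verified on Windows 10:
CComPtr<ISpAudio> cpAudio;
//create the audio object described by the chosen token (sphelper.h helper)
hr = SpCreateObjectFromToken( cpAudioOutToken, &cpAudio );
if( SUCCEEDED( hr ) )
{
    hr = cpVoice->SetOutput( cpAudio, TRUE ); //hand SetOutput the audio object instead of the token
}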

I edited the code as #NikolayShmyrev suggested, but it didn't change a thing. However, I continued to dig into the problem and found out that the issue came from another function. Indeed, when I switched from Windows 7 to Windows 10, I encountered other problems with the speech synthesis function and the speech-to-WAV-file function. When I started the program and called the Text-To-Speech function, everything worked fine. When I then called the Speech2Wav function, it worked too. However, when I called the Text-To-Speech function again, the variable HRESULT hr = S_OK; changed its value and no sound was played. The value of hr was set to -2147200968, which corresponds to error 0x80045038: SPERR_STREAM_CLOSED (source/list of error codes).
To solve this issue, I had to set an audio output with cpVoice->SetOutput( cpAudioOutToken, TRUE ); inside the Text-To-Speech function.
This brings us back to the problem I stated above. When I set the audio output in the setAudioOut function, I release the token at the end with cpAudioOutToken.Release();
However, I reuse the same variable in the Text-To-Speech function. Its value was empty because I had released it when setting the audio output, which is why the audio output was always set to the default. To solve the problem, I assign the value of cpAudioOutToken to another variable called cpSpeechOutToken before releasing it.
Here is the code for the setAudioOut function:
int setAudioOut( int auOut ) //define audio output function
{
if( SUCCEEDED( hr ) )
{
hr = SpEnumTokens( SPCAT_AUDIOOUT, NULL, NULL, &cpEnum );
cpEnum->GetCount( &vCount );
size_t nOut = auOut;
if( nOut >= vCount )
{
cout << "Not so many audio outputs available! Try again\n" << endl;
return 0;
}
else
{
cout << "Success" << endl;
}
ULONG audioOut = static_cast<ULONG>( nOut ); //convert nOut to ULONG audioOut
cpEnum->Item( audioOut, &cpAudioOutToken );
SpGetDescription( cpAudioOutToken, &dynStr );
printf( "You chose %ls\n\n", dynStr );
cpVoice->SetOutput( cpAudioOutToken, TRUE ); //Initialization of the Audio Output
dynStr.Clear();
cpEnum.Release();
cpSpeechOutToken = cpAudioOutToken;
cpAudioOutToken.Release();
saveAudio = audioOut; //define saveAudio to audioOut value
}
else
{
printf( "Could not set audio output\n" );
}
return true;
}
Here is the code for the Text-To-Speech function:
int ttsSpeak( const char* text ) //Text to Speech speaking function
{
if( SUCCEEDED( hr ) )
{
string xmlSentence( text );
hr = SpEnumTokens( SPCAT_VOICES_WIN10, NULL, NULL, &cpEnum );
//Replace SPCAT_VOICES_WIN10 with SPCAT_VOICES if you want to use it on Windows 7
cpEnum->Item( saveVoice, &cpVoiceToken ); //get saveVoice token defined at line 175
cpVoice->SetVoice( cpVoiceToken ); //Initialization of the voice
//string strText( text );
int wchars_num = MultiByteToWideChar( CP_ACP, 0, xmlSentence.c_str(), -1, NULL, 0 );
wchar_t* wstr = new wchar_t[ wchars_num ];
MultiByteToWideChar( CP_ACP, 0, xmlSentence.c_str(), -1, wstr, wchars_num );
printf( "Text To Speech processing\n" );
cpVoice->SetOutput( cpSpeechOutToken, TRUE );
hr = cpVoice->Speak( wstr, SVSFIsXML, NULL );
saveText = xmlSentence.c_str();
cpEnum.Release();
cpVoiceToken.Release();
delete[] wstr; //free the wide-character buffer allocated above
}
else
{
printf( "Could not speak entered text\n" );
}
return true;
}
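Since the root cause turned out to be hr silently becoming SPERR_STREAM_CLOSED, a defensive check can make the failure visible and re-bind the output. A minimal sketch, meant to be dropped in right after the Speak call above and using the same names (the single retry is an assumption, not something SAPI requires):
if( hr == SPERR_STREAM_CLOSED ) //0x80045038, the error described above
{
    hr = cpVoice->SetOutput( cpSpeechOutToken, TRUE ); //re-attach the saved audio output
    if( SUCCEEDED( hr ) )
    {
        hr = cpVoice->Speak( wstr, SVSFIsXML, NULL ); //retry the utterance once
    }
}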

Related

How to get word list using ISpLexicon::GetWords?

I'm developing a Text-To-Speech application using Microsoft SAPI. I found out that it is possible to add customized pronunciations of words to a dictionary (correct me if I'm wrong). I implemented a function that adds words to this dictionary. Here is my code:
int addPrononciation( const char* addPron, const char* phon )
{
hr = cpLexicon.CoCreateInstance( CLSID_SpLexicon );
hr = cpContainerLexicon.CoCreateInstance( CLSID_SpLexicon );
hr = SpEnumTokens( SPCAT_VOICES, NULL, NULL, &cpEnum );
cpEnum->Item( saveVoice, &cpVoiceToken ); //get saveVoice token defined at line 136
cpVoice->SetVoice( cpVoiceToken ); //Initialization of the voice
hr = cpContainerLexicon->AddLexicon( cpLexicon, eLEXTYPE_APP );
langId = MAKELANGID( LANG_ENGLISH, SUBLANG_ENGLISH_US );
hr = SpCreatePhoneConverter( langId, NULL, NULL, &cpPhoneConv );
int wchars_num = MultiByteToWideChar( CP_ACP, 0, addPron, -1, NULL, 0 );
wchar_t* pronWstr = new wchar_t[ wchars_num ];
MultiByteToWideChar( CP_ACP, 0, addPron, -1, pronWstr, wchars_num );
int phonWchars_num = MultiByteToWideChar( CP_ACP, 0, phon, -1, NULL, 0 );
wchar_t* phonWstr = new wchar_t[ phonWchars_num ];
MultiByteToWideChar( CP_ACP, 0, phon, -1, phonWstr, phonWchars_num );
if(SUCCEEDED( hr ))
{
hr = cpPhoneConv->PhoneToId( phonWstr, wszId );
hr = cpVoice->Speak( phonWstr, SPF_DEFAULT, NULL );
hr = cpLexicon->AddPronunciation( pronWstr, langId, SPPS_Noun, wszId );
hr = cpVoice->Speak( pronWstr, SPF_DEFAULT, NULL );
if( SUCCEEDED( hr ) )
{
printf( "Success\n" );
}
else
{
printf( "Failed\n" );
}
}
cpEnum.Release();
cpVoiceToken.Release();
cpContainerLexicon.Release();
cpLexicon.Release();
cpPhoneConv.Release();
delete[] pronWstr; //free the wide-character buffers allocated above
delete[] phonWstr;
return true;
}
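A call then looks roughly like this (the phone string is only an illustrative guess at the American English SAPI phone set, not a verified transcription, and it assumes ttsSpeak accepts plain text):
addPrononciation( "GIF", "jh ih f" ); //add a custom pronunciation for "GIF"
ttsSpeak( "GIF" );                    //the word should now use the added pronunciation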
Now I would like to list these words using ISpLexicon::GetWords.
I already read the documentation on the Microsoft website and tried to implement the function, but I can't figure out how to initialize the variable spWordList.
Here is my code:
ZeroMemory( &spWordList, sizeof( spWordList ) );
if( SUCCEEDED( hr ) )
{
hr = cpLexicon->GetWords( eLEXTYPE_APP, &dwGeneration, &dwCookie, &spWordList );
printf( "Words: %ls\n", spWordList ); //print words but the output is null
}
CoTaskMemFree( spWordList.pvBuffer );
I'm trying to print the words, but the output is null. I think the spWordList variable is not initialized. Here is a screenshot of the variable values.
How can I initialize it?
I found out how to initialize spWordList: you have to replace eLEXTYPE_APP with eLEXTYPE_USER. However, you can keep both of them, as I did. Below is an example of how it lists the words.
ZeroMemory( &spWordList, sizeof( spWordList ) );
hr = S_FALSE;
if( hr == S_FALSE )
{
hr = cpLexicon->GetWords( eLEXTYPE_USER | eLEXTYPE_APP, &dwGeneration, &dwCookie, &spWordList );
for( spWord = spWordList.pFirstWord; spWord != NULL; spWord = spWord->pNextWord )
{
for( spWordPron = spWord->pFirstWordPronunciation; spWordPron != NULL; spWordPron = spWordPron->pNextWordPronunciation )
{
printf( "Words in dictionnary: %i\n", dwGeneration );
printf( "Word: %ls\n", spWord->pszWord );
//you can also display the pronunciation of words if you wish
}
}
}
CoTaskMemFree( spWordList.pvBuffer );
In the code, I loop through the entire dictionary. Notice that the listed words are displayed in no particular order. I'll update my answer if I find other important information about ISpLexicon::GetWords.
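As the comment in the inner loop hints, the stored pronunciation can also be printed by converting the phone IDs back to text with ISpPhoneConverter::IdToPhone. A sketch for the inner loop, reusing cpPhoneConv from the addPrononciation function and assuming its LANGID matches the word's:
WCHAR pronText[ SP_MAX_PRON_LENGTH ]; //buffer sized with SP_MAX_PRON_LENGTH from the SAPI headers
if( SUCCEEDED( cpPhoneConv->IdToPhone( spWordPron->szPronunciation, pronText ) ) )
{
    printf( "Pronunciation: %ls\n", pronText );
}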

Implement Microsoft Speech Platform languages in SAPI 5

I created a little program in C++ that uses the SAPI library. In my code, I list the number of voices installed on my system. When I run it, it reports 11 voices, but only 8 are installed and the only voice that speaks is Microsoft Anna. I checked this in the registry (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech\Voices).
I have several languages installed, especially languages from the Microsoft Speech Platform, but none of them can be used.
Furthermore, when I change the voice ID, I get an unhandled exception error, and I think it is because the chosen ID does not exist.
Here is my code:
#include "stdafx.h"
int main( int argc, char* argv[] )
{
CComPtr<ISpObjectToken> cpVoiceToken;
CComPtr<IEnumSpObjectTokens> cpEnum;
ISpVoice * pVoice = NULL;
ULONG count = 0;
string str;
if( FAILED( ::CoInitialize( NULL ) ) )
return FALSE;
HRESULT hr = CoCreateInstance( CLSID_SpVoice, NULL, CLSCTX_ALL,
IID_ISpVoice, ( void ** )&pVoice );
if( SUCCEEDED( hr ) )
{
//Enumerate Voices
hr = SpEnumTokens( SPCAT_VOICES, NULL /*L"Gender=Female"*/, NULL, &cpEnum);
printf( "Success\n" );
}
else
{
printf( "Failed to initialize SAPI" );
}
if( SUCCEEDED( hr ) )
{
//Get number of voices
hr = cpEnum->GetCount( &count );
printf( "TTS voices found: %i\n", count );
}
else
{
printf( "Failed to enumerate voices" );
hr = S_OK;
}
if( SUCCEEDED( hr ) )
{
cpVoiceToken.Release();
cpEnum->Item( 3, &cpVoiceToken ); //3 represents the ID of the voice
pVoice->SetVoice( cpVoiceToken );
hr = pVoice->Speak( L"You have selected Microsoft Server Speech Text to Speech Voice (en-GB, Hazel) as the computer's default voice.", 0, NULL ); //speak sentence
pVoice->Release();
pVoice = NULL;
}
::CoUninitialize();
system( "PAUSE" );
}
The only voice that works is Microsoft Anna, and I don't understand why. If the other voices were not available, the program wouldn't report that there are so many (11). I wonder whether the Microsoft Speech Platform languages are compatible with SAPI.
After many tries and failures, I managed to find an answer to my problem.
I had been compiling my program as Win32, so I changed the target to x64 and recompiled the solution. After changing the voice ID in my program, the voices from the Microsoft Speech Platform worked. This means that the MS Speech Platform languages I installed are 64-bit voices, while Microsoft Anna is a 32-bit voice.
The following post inspired me.
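For anyone who wants to double-check which runtime a voice belongs to, SpEnumTokens also accepts the Speech Platform's own registry category instead of SPCAT_VOICES. A sketch (the v11.0 path below is written from memory, so treat it as an assumption, and the enumeration only sees voices matching the bitness you build for):
CComPtr<IEnumSpObjectTokens> cpPlatformEnum;
ULONG platformCount = 0;
hr = SpEnumTokens( L"HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Speech Server\\v11.0\\Voices",
                   NULL, NULL, &cpPlatformEnum );
if( SUCCEEDED( hr ) && SUCCEEDED( cpPlatformEnum->GetCount( &platformCount ) ) )
{
    printf( "Microsoft Speech Platform voices found: %lu\n", platformCount );
}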

Using xaudio2 and a parallel port together

I am using C++ to code a neuroscience experiment in my research lab. We are studying tactile perception, and we use a parallel port to trigger our brain stimulating device. The timing is very important.
We recently started using xaudio2 to play very simple WAV files, which are used to trigger our vibrotactile stimulators (for example, our "tactile" stimuli are 100 and 200 Hz sounds with a duration of 100 ms, that move a piezo-electric stimulator that is placed on the hand).
Our problem is that we need to send out 3 commands to the brain stimulator via the parallel port: once 40 ms before the tactile stimulus, once 10 ms after the start of the stimulus, and a third time 60 ms into the stimulus. Remember, the tactile stimulus lasts 100 ms.
However, the way XAudio2 triggers the sound is that it plays the wave and blocks until it is finished. As a consequence, the program ignores the two parallel port commands that should be sent during the stimulus.
Does anybody know how I can make sure the tactile stimulus still plays for the entirety of its 100 ms duration while also sending the parallel port commands during it? I am using the MSDN XAudio2 samples as the basic structure for playing the WAV files, and the PlayWave function is the one that "blocks" any other input while the WAV file is playing -- but I can't figure out how to modify it so that it will also issue my parallel port commands (which are Out32(888,1)) while a sound is being played.
Thank you!
Here is the code for the PlayWave function:
//--------------------------------------------------------------------------------------
// Name: PlayWave
// Desc: Plays a wave and blocks until the wave finishes playing
//--------------------------------------------------------------------------------------
_Use_decl_annotations_
HRESULT PlayWave( IXAudio2* pXaudio2, LPCWSTR szFilename )
{
//
// Locate the wave file
//
WCHAR strFilePath[MAX_PATH];
HRESULT hr = FindMediaFileCch( strFilePath, MAX_PATH, szFilename );
if( FAILED( hr ) )
{
wprintf( L"Failed to find media file: %s\n", szFilename );
return hr;
}
//
// Read in the wave file
//
std::unique_ptr<uint8_t[]> waveFile;
DirectX::WAVData waveData;
if ( FAILED( hr = DirectX::LoadWAVAudioFromFileEx( strFilePath, waveFile, waveData ) ) )
{
wprintf( L"Failed reading WAV file: %#X (%s)\n", hr, strFilePath );
return hr;
}
//
// Play the wave using a XAudio2SourceVoice
//
// Create the source voice
IXAudio2SourceVoice* pSourceVoice;
if( FAILED( hr = pXaudio2->CreateSourceVoice( &pSourceVoice, waveData.wfx ) ) )
{
wprintf( L"Error %#X creating source voice\n", hr );
return hr;
}
// Submit the wave sample data using an XAUDIO2_BUFFER structure
XAUDIO2_BUFFER buffer = {0};
buffer.pAudioData = waveData.startAudio;
buffer.Flags = XAUDIO2_END_OF_STREAM; // tell the source voice not to expect any data after this buffer
buffer.AudioBytes = waveData.audioBytes;
if ( waveData.loopLength > 0 )
{
buffer.LoopBegin = waveData.loopStart;
buffer.LoopLength = waveData.loopLength;
buffer.LoopCount = 1; // We'll just assume we play the loop twice
}
#if (_WIN32_WINNT < 0x0602 /*_WIN32_WINNT_WIN8*/)
if ( waveData.seek )
{
XAUDIO2_BUFFER_WMA xwmaBuffer = {0};
xwmaBuffer.pDecodedPacketCumulativeBytes = waveData.seek;
xwmaBuffer.PacketCount = waveData.seekCount;
if( FAILED( hr = pSourceVoice->SubmitSourceBuffer( &buffer, &xwmaBuffer ) ) )
{
wprintf( L"Error %#X submitting source buffer (xWMA)\n", hr );
pSourceVoice->DestroyVoice();
return hr;
}
}
#else
if ( waveData.seek )
{
wprintf( L"This platform does not support xWMA or XMA2\n" );
pSourceVoice->DestroyVoice();
return hr;
}
#endif
else if( FAILED( hr = pSourceVoice->SubmitSourceBuffer( &buffer ) ) )
{
wprintf( L"Error %#X submitting source buffer\n", hr );
pSourceVoice->DestroyVoice();
return hr;
}
hr = pSourceVoice->Start( 0 );
// Let the sound play
BOOL isRunning = TRUE;
while( SUCCEEDED( hr ) && isRunning )
{
XAUDIO2_VOICE_STATE state;
pSourceVoice->GetState( &state );
isRunning = ( state.BuffersQueued > 0 ) != 0;
// Wait till the escape key is pressed
if( GetAsyncKeyState( VK_ESCAPE ) )
break;
Sleep( 10 );
}
// Wait till the escape key is released
while( GetAsyncKeyState( VK_ESCAPE ) )
Sleep( 10 );
pSourceVoice->DestroyVoice();
return hr;
}
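One way to hit the three trigger times without giving up the existing structure is to send the parallel port commands from the same polling loop that PlayWave already uses, based on the elapsed time since Start(). The fragment below is only an outline: Out32 and the port address 888 come from the question, the trigger values are placeholders, timeGetTime() needs mmsystem.h and winmm.lib, and for tighter timing you would want timeBeginPeriod(1) or QueryPerformanceCounter instead of plain Sleep.
//Sketch: replace the "Let the sound play" loop with a timed version.
//t = 0 is the onset of the tactile stimulus (pSourceVoice->Start).
Out32( 888, 1 );                          //trigger sent 40 ms before the stimulus
Sleep( 40 );
DWORD startTime = timeGetTime();          //needs #include <mmsystem.h> and winmm.lib
hr = pSourceVoice->Start( 0 );
bool sent10 = false, sent60 = false;
BOOL isRunning = TRUE;
while( SUCCEEDED( hr ) && isRunning )
{
    DWORD elapsed = timeGetTime() - startTime;
    if( !sent10 && elapsed >= 10 ) { Out32( 888, 1 ); sent10 = true; } //+10 ms trigger
    if( !sent60 && elapsed >= 60 ) { Out32( 888, 1 ); sent60 = true; } //+60 ms trigger
    XAUDIO2_VOICE_STATE state;
    pSourceVoice->GetState( &state );
    isRunning = ( state.BuffersQueued > 0 ) != 0;
    Sleep( 1 );                           //poll often enough for roughly 1 ms granularity
}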

Printing Html file ERROR with Winspool API

I have a simple application written in C++ for printing HTML documents. The program is based on the Raw Printer C++ example from MSDN.
When I use a LEXMARK printer with a PostScript driver it works, but when I use an HP LaserJet P3015 it does not, and my document is printed with the HTML tags.
I tried some settings for the HP printer but had no success.
I'm using the "RAW" datatype.
Why does the Lexmark printer work while the HP printer does not?
My OS is Windows 7 x64.
int main()
{
HANDLE hPrinter;
BOOL lOpen = OpenPrinter((LPSTR)printerName,&hPrinter,NULL);
if( !lOpen )
{
printLastSOError(os_errorMsg);
return -3;
};
// prepare printer to send data (RAW)
DOC_INFO_1 DocInfo;
DocInfo.pDocName = "teste printer";
DocInfo.pOutputFile = NULL;
DocInfo.pDatatype = "RAW";
// Start Job Printer
DWORD dwJob = StartDocPrinter( hPrinter, 1, (LPBYTE)&DocInfo );
if( dwJob == 0 )
{
printLastSOError( os_errorMsg );
ClosePrinter(hPrinter);
return -4;
};
// Execute JOB
char buff[bufferSize];
DWORD nReaded;
while( ReadFile(hFile,buff,bufferSize,&nReaded,0) )
{
if( nReaded ==0 )
break;
DWORD dwBytesWritten;
if( !WritePrinter( hPrinter, buff, nReaded, &dwBytesWritten ) )
{
printLastSOError(os_errorMsg);
break;
};
};
EndDocPrinter(hPrinter);
ClosePrinter(hPrinter);
return 0;
}
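With the "RAW" datatype the spooler passes the bytes to the printer unmodified, so nothing ever renders the HTML: what comes out depends entirely on how each device interprets the raw text, which is presumably why the two printers behave differently. One possible workaround (a sketch, not tested against the P3015) is to let the handler registered for .html render and print the file via the shell instead of writing raw bytes; note that the plain "print" verb targets the default printer, while "printto" accepts a printer name as a parameter:
#include <windows.h>
#include <shellapi.h>

//Sketch: print an HTML file through its registered handler instead of sending RAW bytes.
//The path is only an example; error handling is minimal on purpose.
bool printHtmlViaShell( const char* htmlPath )
{
    SHELLEXECUTEINFOA sei = { sizeof( sei ) };
    sei.fMask  = SEE_MASK_NOCLOSEPROCESS;
    sei.lpVerb = "print";              //handled by the application registered for .html
    sei.lpFile = htmlPath;             //e.g. "C:\\temp\\teste.html"
    sei.nShow  = SW_HIDE;
    if( !ShellExecuteExA( &sei ) )
        return false;
    if( sei.hProcess )
    {
        WaitForSingleObject( sei.hProcess, INFINITE ); //wait until the job has been handed off
        CloseHandle( sei.hProcess );
    }
    return true;
}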

MS Kinect FaceTracker creating IFTResult

I have a fairly simple application that contains the following:
context->mFaceTracker = FTCreateFaceTracker();
hr = context->mFaceTracker->Initialize( &mVideoCameraConfig, &mDepthCameraConfig, NULL, NULL );
which works fine and returns S_OK and mFaceTracker is (as far as I can tell) initialized properly. However, the next line is:
hr = context->mFaceTracker->CreateFTResult( &context->mFTResult );
which always returns FT_ERROR_UNINITIALIZED, doesn't allocate the pointer, and has me puzzled. I've tried many different strategies to get this to work, from changing how the threading for the device and detector works to changing my FTcontext object from a class to a struct to match the samples, all with no success. The Kinect SDK samples all work fine, but trying to use them in my own application doesn't seem to, despite my closely mirroring how they initialize the device and the Face Tracker.
I'm curious whether anyone else has run into this or similar problems when initializing either IFTFaceTracker or the IFTResult. Also, I'm curious how else I can test the IFTFaceTracker for correct initialization, other than checking the HRESULT that Initialize() returns. Thanks in advance.
Edit:
I've had a few request for more code. It's built on Cinder and is using this block for Cinder: https://github.com/BanTheRewind/Cinder-KinectSdk
I can't post all of the code, but I've posted at least most of the relevant Kinect initialization code here:
void Kinect::start( const DeviceOptions &deviceOptions )
{
if ( !mCapture ) {
// Copy device options
mDeviceOptions = deviceOptions;
string deviceId = mDeviceOptions.getDeviceId();
int32_t index = mDeviceOptions.getDeviceIndex();
// Clamp device index
if ( index >= 0 ) {
index = math<int32_t>::clamp( index, 0, math<int32_t>::max( getDeviceCount() - 1, 0 ) );
}
// Initialize device instance
long hr = S_OK;
if ( index >= 0 ) {
hr = NuiCreateSensorByIndex( index, &mSensor );
if ( FAILED( hr ) ) {
trace( "Unable to create device instance " + toString( index ) + ": " );
error( hr );
return;
}
} else if ( deviceId.length() > 0 ) {
_bstr_t id = deviceId.c_str();
hr = NuiCreateSensorById( id, &mSensor );
if ( FAILED( hr ) ) {
trace( "Unable to create device instance " + deviceId + ":" );
error( hr );
return;
}
} else {
trace( "Invalid device name or index." );
return;
}
// Check device
hr = mSensor != 0 ? mSensor->NuiStatus() : E_NUI_NOTCONNECTED;
if ( hr == E_NUI_NOTCONNECTED ) {
error( hr );
return;
}
// Get device name and index
if ( mSensor != 0 ) {
mDeviceOptions.setDeviceIndex( mSensor->NuiInstanceIndex() );
BSTR id = ::SysAllocString( mSensor->NuiDeviceConnectionId() );
_bstr_t idStr( id );
if ( idStr.length() > 0 ) {
std::string str( idStr );
mDeviceOptions.setDeviceId( str );
}
::SysFreeString( id );
} else {
index = -1;
deviceId = "";
}
flags |= NUI_INITIALIZE_FLAG_USES_COLOR;
}
hr = mSensor->NuiInitialize( flags );
if ( FAILED( hr ) ) {
trace( "Unable to initialize device " + mDeviceOptions.getDeviceId() + ":" );
error( hr );
return;
}
hr = mSensor->NuiSkeletonTrackingEnable( 0, flags );
if ( FAILED( hr ) ) {
trace( "Unable to initialize skeleton tracking for device " + mDeviceOptions.getDeviceId() + ": " );
error( hr );
return;
}
mIsSkeletonDevice = true;
mThread = CreateThread(NULL, 0, &Kinect::StaticThread, (PVOID)this, 0, 0);
}
}
DWORD WINAPI Kinect::StaticThread(PVOID lpParam)
{
Kinect* device = static_cast<Kinect*>(lpParam);
if (device)
{
return device->run();
}
return 0;
}
void run() {
if(mSensor) {
if(mEnabledFaceTracking)
{
if(mNeedFaceTracker) {
mFaceTracker = new FaceTracker(
mDeviceOptions.getVideoSize().x,
mDeviceOptions.getVideoSize().y,
mDeviceOptions.getDepthSize().x,
mDeviceOptions.getDepthSize().y,
1.0,
1 );
mNeedFaceTracker = false;
}
// make sure we have both color && depth buffers to work with
if(newDepth || newVideo)
{
FT_SENSOR_DATA sensorData(mFTColorImage, mFTDepthImage);
FT_VECTOR3D hint[2]; // this is initialized elsewhere
mFaceTracker->checkFaces( (NUI_SKELETON_FRAME*) &skeletonFrame, mFTColorImage, mFTDepthImage, 1.0, 0);
if(mFaceTracker->getNumFaces() > 0) {
cout << " we have a face " << mFaceTracker->getNumFaces() << endl;
mNewFaceTrackData = true;
mFaceData.clear();
for( int i = 0; i < mFaceTracker->getNumFaces(); i++) {
Face newFace;
mFaceTracker->getProjectedShape(0, newFace.scale, newFace.rotation, newFace.transform, newFace.screenPositions);
mFaceData.push_back(newFace);
}
}
}
}
Sleep( 8 );
}
}
It looks like you never call NuiImageStreamOpen() (or omitted it from the code sample). Here is a snippet from the SingleFace sample, KinectSensor.cpp, in the Init method:
hr = NuiImageStreamOpen(
colorType,
colorRes,
0,
2,
m_hNextVideoFrameEvent,
&m_pVideoStreamHandle );
if (FAILED(hr))
{
return hr;
}
hr = NuiImageStreamOpen(
depthType,
depthRes,
(bNearMode)? NUI_IMAGE_STREAM_FLAG_ENABLE_NEAR_MODE : 0,
2,
m_hNextDepthFrameEvent,
&m_pDepthStreamHandle );
Calling those before you call CreateFTResult() may fix the uninitialized error.
Additionally, you call CreateThread() and then call run(), but there is no while loop, so that thread will exit almost immediately, certainly without giving the Kinect enough time to start providing data to the face tracker.
It also doesn't look like you have included the thread or event loop that checks the sensor for new data, updates mFTColorImage and mFTDepthImage, and sets the newDepth and newVideo flags. This could be the same thread you create above (provided you add a while loop, and ignoring performance or other classes needing the Kinect data), or it could be a separate thread, as in the SingleFace Kinect SDK sample.
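To make that concrete, the worker thread body would need to look roughly like the sketch below. The member names are the ones from the question; mRunning is a hypothetical shutdown flag, and how newDepth/newVideo get set still depends on your frame-grabbing code. Note also that the original condition uses ||, while the comment above it asks for both buffers, i.e. &&.
DWORD Kinect::run()
{
    while( mRunning )                      //keep the thread alive until shutdown
    {
        if( mSensor && mEnabledFaceTracking )
        {
            if( mNeedFaceTracker )
            {
                //create mFaceTracker exactly as in the original code
                mNeedFaceTracker = false;
            }
            if( newDepth && newVideo )     //wait for a fresh color + depth pair
            {
                mFaceTracker->checkFaces( (NUI_SKELETON_FRAME*) &skeletonFrame,
                                          mFTColorImage, mFTDepthImage, 1.0, 0 );
                newDepth = newVideo = false; //consume the frame pair
            }
        }
        Sleep( 8 );
    }
    return 0;
}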