Packaging LEFT and RIGHT channel data - C++

I am decoding FLAC audio into memory and passing the decoded audio data to OpenAL via:
void alBufferData (ALuint bufferName, ALenum format, const ALvoid *data, ALsizei size, ALsizei frequency);
The decoded audio goes into my std::vector<FLAC__int32> data_, in which I am attempting to package the LEFT and RIGHT channels (AL_FORMAT_STEREO16). However, I don't understand how to store/align these channels within my data_ vector.
So I have the libFLAC virtual callback member function:
FLAC__StreamDecoderWriteStatus
Source::write_callback (
    FLAC__Frame const* _frame, FLAC__int32 const *const _buffer[])
{
    for (size_t i(0); i < _frame->header.blocksize; i++) {
        data_[index_] = _buffer[0][i]; // channel audio on the left
        ++index_;
        data_[index_] = _buffer[1][i]; // what about the right channel?
    }
    return FLAC__STREAM_DECODER_WRITE_STATUS_CONTINUE;
}
At the moment, during audio playback, I am hearing only the LEFT channel. There is static after the sound has finished playing, which I assume is the missing RIGHT channel data. How do I get the RIGHT channel to work as well?
Also, this is the metadata callback signature as per libFLAC:
void
Source::metadata_callback (const ::FLAC__StreamMetadata *metadata)
{
    total_samples_ = metadata->data.stream_info.total_samples;
    rate_ = metadata->data.stream_info.sample_rate;
    channels_ = metadata->data.stream_info.channels;
    bps_ = metadata->data.stream_info.bits_per_sample;
    switch (bps_) {
    case 16:
        if (channels_ > 1) {
            format_ = AL_FORMAT_STEREO16;
        } else {
            format_ = AL_FORMAT_MONO16;
        }
        break;
    case 8:
        if (channels_ > 1) {
            format_ = AL_FORMAT_STEREO8;
        } else {
            format_ = AL_FORMAT_MONO8;
        }
        break;
    default:
        break;
    }
    size_ = (ALuint)(rate_ * channels_ * (bps_ / 8));
    data_.resize(total_samples_);
    index_ = 0;
}

A solution that works is to use the struct below as the vector's element type:
struct Data
{
    FLAC__int16 channelLeft_;
    FLAC__int16 channelRight_;
};
std::vector<Source::Data> data_;
then assign size_ like so:
size_ = total_samples_ * sizeof(Source::Data);
Finally, the data loop becomes:
for (size_t i(0); i < _frame->header.blocksize; i++) {
    data_[index_].channelLeft_ = _buffer[0][i];
    data_[index_].channelRight_ = _buffer[1][i];
    ++index_;
}
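An equivalent layout, for reference, is a flat interleaved vector, since AL_FORMAT_STEREO16 expects 16-bit samples alternating left/right; a minimal sketch (note data_ would then need total_samples_ * channels_ elements rather than total_samples_):
std::vector<FLAC__int16> data_; // interleaved: L, R, L, R, ...

for (size_t i(0); i < _frame->header.blocksize; i++) {
    data_[index_++] = (FLAC__int16)_buffer[0][i]; // left sample
    data_[index_++] = (FLAC__int16)_buffer[1][i]; // right sample
}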

Related

Intel OneAPI video decoding memory leak when using C++/CLI

I am trying to use Intel OneAPI/OneVPL to decode a stream I receive from an RTSP camera in C#, but when I run the code I get an enormous memory leak: around 100-200 MB per run, and a run happens roughly once every second.
When I've collected a GoP from the camera (where I know the first data is a keyframe), I pass it as a byte array to my C++/CLI and C++ code.
Here I expect it to decode all the frames and return the decoded images. It receives 30 frames and returns 16 decoded images, but it has a memory leak.
I've tried the Visual Studio memory profiler, and all I can tell from it is that unmanaged memory is my problem. I've tried overriding operator new and operator delete inside videoHandler.cpp to track and compare all allocations and deallocations, and as far as I can tell everything is handled correctly in there. I cannot see any class that gets instantiated and is not cleaned up. I think my issue is in the C++/CLI class videoHandlerWrapper.cpp. Am I missing something obvious?
videoHandlerWrapper.cpp
array<imgFrameWrapper^>^ videoHandlerWrapper::decode(array<System::Byte>^ byteArray)
{
    array<imgFrameWrapper^>^ returnFrames = gcnew array<imgFrameWrapper^>(30);
    {
        std::vector<imgFrame> frames(30); //Output from the decoding process. imgFrame implements a destructor that frees the data when it goes out of scope
        std::vector<unsigned char> bytes(byteArray->Length); //Input for the decoding process
        Marshal::Copy(byteArray, 0, IntPtr((unsigned char*)(&((bytes)[0]))), byteArray->Length); //Copy from managed (C#) to unmanaged (C++)
        int status = _pVideoHandler->decode(bytes, frames); //Decode
        for (size_t i = 0; i < frames.size(); i++)
        {
            if (frames[i].size > 0)
                returnFrames[i] = gcnew imgFrameWrapper(frames[i].size, frames[i].bytes);
        }
    }
    //PrintMemoryUsage();
    return returnFrames;
}
videoHandler.cpp
#define BITSTREAM_BUFFER_SIZE 2000000 //TODO Maybe higher or lower bitstream buffer. Thorough testing has been done at 2000000
int videoHandler::decode(std::vector<unsigned char> bytes, std::vector<imgFrame> &frameData)
{
int result = -1;
bool isStillGoing = true;
mfxBitstream bitstream = { 0 };
mfxSession session = NULL;
mfxStatus sts = MFX_ERR_NONE;
mfxSurfaceArray* outSurfaces = nullptr;
mfxU32 framenum = 0;
mfxU32 numVPPCh = 0;
mfxVideoChannelParam* mfxVPPChParams = nullptr;
void* accelHandle = NULL;
mfxVideoParam mfxDecParams = {};
mfxVersion version = { 0, 1 };
//variables used only in 2.x version
mfxConfig cfg = NULL;
mfxLoader loader = NULL;
mfxVariant inCodec = {};
std::vector<mfxU8> input_buffer;
// Initialize VPL session for any implementation of HEVC/H265 decode
loader = MFXLoad();
VERIFY(NULL != loader, "MFXLoad failed -- is implementation in path?");
cfg = MFXCreateConfig(loader);
VERIFY(NULL != cfg, "MFXCreateConfig failed");
inCodec.Type = MFX_VARIANT_TYPE_U32;
inCodec.Data.U32 = MFX_CODEC_AVC;
sts = MFXSetConfigFilterProperty(
cfg,
(mfxU8*)"mfxImplDescription.mfxDecoderDescription.decoder.CodecID",
inCodec);
VERIFY(MFX_ERR_NONE == sts, "MFXSetConfigFilterProperty failed for decoder CodecID");
sts = MFXCreateSession(loader, 0, &session);
VERIFY(MFX_ERR_NONE == sts, "Not able to create VPL session");
// Print info about implementation loaded
version = ShowImplInfo(session);
//VERIFY(version.Major > 1, "Sample requires 2.x API implementation, exiting");
if (version.Major == 1) {
mfxVariant ImplValueSW;
ImplValueSW.Type = MFX_VARIANT_TYPE_U32;
ImplValueSW.Data.U32 = MFX_IMPL_TYPE_SOFTWARE;
MFXSetConfigFilterProperty(cfg, (mfxU8*)"mfxImplDescription.Impl", ImplValueSW);
sts = MFXCreateSession(loader, 0, &session);
VERIFY(MFX_ERR_NONE == sts, "Not able to create VPL session");
}
// Convenience function to initialize available accelerator(s)
accelHandle = InitAcceleratorHandle(session);
bitstream.MaxLength = BITSTREAM_BUFFER_SIZE;
bitstream.Data = (mfxU8*)calloc(bytes.size(), sizeof(mfxU8));
VERIFY(bitstream.Data, "Not able to allocate input buffer");
bitstream.CodecId = MFX_CODEC_AVC;
std::copy(bytes.begin(), bytes.end(), bitstream.Data);
bitstream.DataLength = static_cast<mfxU32>(bytes.size());
memset(&mfxDecParams, 0, sizeof(mfxDecParams));
mfxDecParams.mfx.CodecId = MFX_CODEC_AVC;
mfxDecParams.IOPattern = MFX_IOPATTERN_OUT_SYSTEM_MEMORY;
sts = MFXVideoDECODE_DecodeHeader(session, &bitstream, &mfxDecParams);
VERIFY(MFX_ERR_NONE == sts, "Error decoding header\n");
numVPPCh = 1;
mfxVPPChParams = new mfxVideoChannelParam[numVPPCh];
for (mfxU32 i = 0; i < numVPPCh; i++) {
mfxVPPChParams[i] = {};
}
//mfxVPPChParams[0].VPP.FourCC = mfxDecParams.mfx.FrameInfo.FourCC;
mfxVPPChParams[0].VPP.FourCC = MFX_FOURCC_BGRA;
mfxVPPChParams[0].VPP.ChromaFormat = MFX_CHROMAFORMAT_YUV420;
mfxVPPChParams[0].VPP.PicStruct = MFX_PICSTRUCT_PROGRESSIVE;
mfxVPPChParams[0].VPP.FrameRateExtN = 30;
mfxVPPChParams[0].VPP.FrameRateExtD = 1;
mfxVPPChParams[0].VPP.CropW = 1920;
mfxVPPChParams[0].VPP.CropH = 1080;
//Set value directly if input and output is the same.
mfxVPPChParams[0].VPP.Width = 1920;
mfxVPPChParams[0].VPP.Height = 1080;
//// USED TO RESIZE. IF INPUT IS THE SAME AS OUTPUT THIS WILL MAKE IT SHIFT A BIT. 1920x1080 becomes 1920x1088.
//mfxVPPChParams[0].VPP.Width = ALIGN16(mfxVPPChParams[0].VPP.CropW);
//mfxVPPChParams[0].VPP.Height = ALIGN16(mfxVPPChParams[0].VPP.CropH);
mfxVPPChParams[0].VPP.ChannelId = 1;
mfxVPPChParams[0].Protected = 0;
mfxVPPChParams[0].IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY | MFX_IOPATTERN_OUT_SYSTEM_MEMORY;
mfxVPPChParams[0].ExtParam = NULL;
mfxVPPChParams[0].NumExtParam = 0;
sts = MFXVideoDECODE_VPP_Init(session, &mfxDecParams, &mfxVPPChParams, numVPPCh); //This causes a MINOR memory leak!
outSurfaces = new mfxSurfaceArray;
while (isStillGoing == true) {
sts = MFXVideoDECODE_VPP_DecodeFrameAsync(session,
&bitstream,
NULL,
0,
&outSurfaces); //Big memory leak. 100MB per run of the while loop.
switch (sts) {
case MFX_ERR_NONE:
// decode output
if (framenum >= 30)
{
isStillGoing = false;
break;
}
sts = WriteRawFrameToByte(outSurfaces->Surfaces[1], &frameData[framenum]);
VERIFY(MFX_ERR_NONE == sts, "Could not write 1st vpp output");
framenum++;
break;
case MFX_ERR_MORE_DATA:
// The function requires more bitstream at input before decoding can proceed
isStillGoing = false;
break;
case MFX_ERR_MORE_SURFACE:
// The function requires more frame surface at output before decoding can proceed.
// This applies to external memory allocations and should not be expected for
// a simple internal allocation case like this
break;
case MFX_ERR_DEVICE_LOST:
// For non-CPU implementations,
// Cleanup if device is lost
break;
case MFX_WRN_DEVICE_BUSY:
// For non-CPU implementations,
// Wait a few milliseconds then try again
break;
case MFX_WRN_VIDEO_PARAM_CHANGED:
// The decoder detected a new sequence header in the bitstream.
// Video parameters may have changed.
// In external memory allocation case, might need to reallocate the output surface
break;
case MFX_ERR_INCOMPATIBLE_VIDEO_PARAM:
// The function detected that video parameters provided by the application
// are incompatible with initialization parameters.
// The application should close the component and then reinitialize it
break;
case MFX_ERR_REALLOC_SURFACE:
// Bigger surface_work required. May be returned only if
// mfxInfoMFX::EnableReallocRequest was set to ON during initialization.
// This applies to external memory allocations and should not be expected for
// a simple internal allocation case like this
break;
default:
printf("unknown status %d\n", sts);
isStillGoing = false;
break;
}
}
sts = MFXVideoDECODE_VPP_Close(session); // Helps massively! Halves the memory leak speed. Closes internal structures and tables.
VERIFY(MFX_ERR_NONE == sts, "Error closing VPP session\n");
result = 0;
end:
printf("Decode and VPP processed %d frames\n", framenum);
// Clean up resources - It is recommended to close components first, before
// releasing allocated surfaces, since some surfaces may still be locked by
// internal resources.
if (mfxVPPChParams)
delete[] mfxVPPChParams;
if (outSurfaces)
delete outSurfaces;
if (bitstream.Data)
free(bitstream.Data);
if (accelHandle)
FreeAcceleratorHandle(accelHandle);
if (loader)
MFXUnload(loader);
return result;
}
imgFrameWrapper.h
public ref class imgFrameWrapper
{
private:
size_t size;
array<System::Byte>^ bytes;
public:
imgFrameWrapper(size_t u_size, unsigned char* u_bytes);
~imgFrameWrapper();
!imgFrameWrapper();
size_t get_size();
array<System::Byte>^ get_bytes();
};
imgFrameWrapper.cpp
imgFrameWrapper::imgFrameWrapper(size_t u_size, unsigned char* u_bytes)
{
size = u_size;
bytes = gcnew array<System::Byte>(size);
Marshal::Copy((IntPtr)u_bytes, bytes, 0, size);
}
imgFrameWrapper::~imgFrameWrapper()
{
}
imgFrameWrapper::!imgFrameWrapper()
{
}
size_t imgFrameWrapper::get_size()
{
return size;
}
array<System::Byte>^ imgFrameWrapper::get_bytes()
{
return bytes;
}
imgFrame.h
struct imgFrame
{
int size;
unsigned char* bytes;
~imgFrame()
{
if (bytes)
delete[] bytes;
}
};
The MFXVideoDECODE_VPP_DecodeFrameAsync() function creates internal memory surfaces for the processing.
You should release those surfaces.
This link mentions it:
https://spec.oneapi.com/onevpl/latest/API_ref/VPL_structs_decode_vpp.html#_CPPv415mfxSurfaceArray
mfxStatus (*Release)(struct mfxSurfaceArray *surface_array)
Decrements the internal reference counter of the surface. (*Release) should be called after using the (*AddRef) function to add a surface or when allocation logic requires it.
And please check this sample.
https://github.com/oneapi-src/oneVPL/blob/master/examples/hello-decvpp/src/hello-decvpp.cpp
In particular, see the WriteRawFrame_InternalMem() function in https://github.com/oneapi-src/oneVPL/blob/17968d8d2299352f5a9e09388d24e81064c81c87/examples/util/util/util.h
It shows how to release the surfaces.
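Applied to the MFX_ERR_NONE branch of your loop, releasing could look roughly like the sketch below (based on the spec excerpt and the hello-decvpp sample; note that the runtime supplies the mfxSurfaceArray itself, so the new mfxSurfaceArray / delete outSurfaces pair in your code should not be needed):
sts = WriteRawFrameToByte(outSurfaces->Surfaces[1], &frameData[framenum]);
VERIFY(MFX_ERR_NONE == sts, "Could not write 1st vpp output");
framenum++;
// Release each surface returned by DecodeFrameAsync, then release the
// array itself, so the runtime can recycle its internal allocations.
for (mfxU32 i = 0; i < outSurfaces->NumSurfaces; i++) {
    mfxFrameSurface1* surf = outSurfaces->Surfaces[i];
    surf->FrameInterface->Release(surf);
}
outSurfaces->Release(outSurfaces);
break;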

Traverse a raw video more efficiently

I have a raw video file, and I made a Qt app that reads it frame by frame. With large raw files, when I press the button that advances to the next frame, there is a big delay of almost one second.
Here is the code that reads a frame from the raw file:
void RawVideoReader::getFrame(int offset)
{
    std::cout << "getFrame" << std::endl;
    file.seek((unsigned long long int)(((unsigned long long int)width * (unsigned long long int)height) * (unsigned long long int)offset));
    QByteArray array = file.read(width * height);
    const std::size_t count = array.size();
    // note: std::unique_ptr<unsigned char> calls delete, not delete[];
    // std::unique_ptr<unsigned char[]> would be needed for an array
    hex = std::unique_ptr<unsigned char>(new unsigned char[count]);
    std::memcpy(hex.get(), array.constData(), count);
}
You can read directly into the buffer you desire - the question is: why do you want to manage this memory buffer using unique_ptr? QByteArray already does that job. Furthermore, you probably want to keep the same buffer, and not reallocate it over and over.
class RawVideoReader : ... {
    QByteArray frame;
    const uint8_t *frameData() const { return frame.isEmpty() ? nullptr : reinterpret_cast<const uint8_t*>(frame.constData()); }
    size_t frameSize() const { return static_cast<size_t>(frame.size()); }
    ...
};
bool RawVideoReader::getFrame(int frameNo) {
    qDebug() << __FUNCTION__;
    frame.resize(width * height * 1);
    file.seek(qint64(frame.size()) * qint64(frameNo));
    auto const hadRead = file.read(frame.data(), frame.size());
    return hadRead == frame.size();
}
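Hypothetical usage (a sketch; it assumes the class holds an open QFile named file and valid width/height members):
RawVideoReader reader;             // file, width, height set up elsewhere
if (reader.getFrame(42)) {         // seek straight to frame 42
    const uint8_t *px = reader.frameData();
    size_t n = reader.frameSize(); // width * height bytes
    // hand px/n to the display code; the buffer is reused, not
    // reallocated, by the next getFrame() call
}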

Waiting time of thread switches systematically between 0 and 30000 microseconds for the same task

I'm writing a little console game engine, and for better performance I wanted two threads (or more, but two for this task) using two buffers: one thread draws the next frame into the first buffer while the other thread reads the current frame from the second buffer. Then the buffers get swapped.
Of course I can only swap them once both threads have finished their task, and the drawing/writing thread happens to be the one that waits. But the time it waits alternates more or less systematically between two values; here are a few of the measurements I made (in microseconds):
0, 36968, 0, 36260, 0, 35762, 0, 38069, 0, 36584, 0, 36503
It's pretty obvious that this is not a coincidence, but I haven't been able to figure out what the problem is, as this is the first time I'm using threads.
Here is the code; ask for more if you need it, I think it's too much to post all of it:
header-file (Manager currently only adds a pointer to my WinAppBase-class):
class SwapChain : Manager
{
WORD *pScreenBuffer1, *pScreenBuffer2, *pWritePtr, *pReadPtr, *pTemp;
bool isRunning, writingFinished, readingFinished, initialized;
std::mutex lockWriting, lockReading;
std::condition_variable cvWriting, cvReading;
DWORD charsWritten;
COORD startPosition;
int screenBufferWidth;
// THREADS (USES NORMAL THREAD AS SECOND THREAD)
void ReadingThread();
// THIS FUNCTION IS ONLY FOR INTERN USE
void SwapBuffers();
public:
// USE THESE TO CONTROL WHEN THE BUFFERS GET SWAPPED
void BeginDraw();
void EndDraw();
// PUT PIXEL | INLINED FOR BETTER PERFORMANCE
inline void PutPixel(short xPos, short yPos, WORD color)
{
this->pWritePtr[(xPos * 2) + yPos * screenBufferWidth] = color;
this->pWritePtr[(xPos * 2) + yPos * screenBufferWidth + 1] = color;
}
// GENERAL CONTROL OVER SWAP CHAIN
void Initialize();
void Run();
void Stop();
// CONSTRUCTORS
SwapChain(WinAppBase * pAppBase);
virtual ~SwapChain();
};
Cpp-file
SwapChain::SwapChain(WinAppBase * pAppBase)
:
Manager(pAppBase)
{
this->isRunning = false;
this->initialized = false;
this->pReadPtr = NULL;
this->pScreenBuffer1 = NULL;
this->pScreenBuffer2 = NULL;
this->pWritePtr = NULL;
this->pTemp = NULL;
this->charsWritten = 0;
this->startPosition = { 0, 0 };
this->readingFinished = 0;
this->writingFinished = 0;
this->screenBufferWidth = this->pAppBase->screenBufferInfo.dwSize.X;
}
SwapChain::~SwapChain()
{
this->Stop();
if (_CrtIsValidHeapPointer(pReadPtr))
delete[] pReadPtr;
if (_CrtIsValidHeapPointer(pScreenBuffer1))
delete[] pScreenBuffer1;
if (_CrtIsValidHeapPointer(pScreenBuffer2))
delete[] pScreenBuffer2;
if (_CrtIsValidHeapPointer(pWritePtr))
delete[] pWritePtr;
}
void SwapChain::ReadingThread()
{
while (this->isRunning)
{
this->readingFinished = 0;
WriteConsoleOutputAttribute(
this->pAppBase->consoleCursor,
this->pReadPtr,
this->pAppBase->screenBufferSize,
this->startPosition,
&this->charsWritten
);
memset(this->pReadPtr, 0, this->pAppBase->screenBufferSize);
this->readingFinished = true;
this->cvWriting.notify_all();
if (!this->writingFinished)
{
std::unique_lock<std::mutex> lock(this->lockReading);
this->cvReading.wait(lock);
}
}
}
void SwapChain::SwapBuffers()
{
this->pTemp = this->pReadPtr;
this->pReadPtr = this->pWritePtr;
this->pWritePtr = this->pTemp;
this->pTemp = NULL;
}
void SwapChain::BeginDraw()
{
this->writingFinished = false;
}
void SwapChain::EndDraw()
{
TimePoint tpx1, tpx2;
tpx1 = Clock::now();
if (!this->readingFinished)
{
std::unique_lock<std::mutex> lock2(this->lockWriting);
this->cvWriting.wait(lock2);
}
tpx2 = Clock::now();
POST_DEBUG_MESSAGE(std::chrono::duration_cast<std::chrono::microseconds>(tpx2 - tpx1).count(), "EndDraw waiting time");
SwapBuffers();
this->writingFinished = true;
this->cvReading.notify_all();
}
void SwapChain::Initialize()
{
if (this->initialized)
{
POST_DEBUG_MESSAGE(Result::CUSTOM, "multiple initialization");
return;
}
this->pScreenBuffer1 = (WORD *)malloc(sizeof(WORD) * this->pAppBase->screenBufferSize);
this->pScreenBuffer2 = (WORD *)malloc(sizeof(WORD) * this->pAppBase->screenBufferSize);
for (int i = 0; i < this->pAppBase->screenBufferSize; i++)
{
this->pScreenBuffer1[i] = 0x0000;
}
for (int i = 0; i < this->pAppBase->screenBufferSize; i++)
{
this->pScreenBuffer2[i] = 0x0000;
}
this->pWritePtr = pScreenBuffer1;
this->pReadPtr = pScreenBuffer2;
this->initialized = true;
}
void SwapChain::Run()
{
this->isRunning = true;
std::thread t1(&SwapChain::ReadingThread, this);
t1.detach();
}
void SwapChain::Stop()
{
this->isRunning = false;
}
This is where I run the SwapChain-class from:
void Application::Run()
{
this->engine.graphicsmanager.swapChain.Initialize();
Sprite<16, 16> sprite(&this->engine);
sprite.LoadSprite("engine/resources/TestData.xml", "root.test.sprites.baum");
this->engine.graphicsmanager.swapChain.Run();
int a, b, c;
for (int i = 0; i < 60; i++)
{
this->engine.graphicsmanager.swapChain.BeginDraw();
for (c = 0; c < 20; c++)
{
for (a = 0; a < 19; a++)
{
for (b = 0; b < 10; b++)
{
sprite.Print(a * 16, b * 16);
}
}
}
this->engine.graphicsmanager.swapChain.EndDraw();
}
this->engine.graphicsmanager.swapChain.Stop();
_getch();
}
The for-loops above simply draw the sprite 20 times from the top-left corner to the bottom-right corner of the console - the buffers don't get swapped during that, and that again for a total of 60 times (so the buffers get swapped 60 times).
sprite.Print uses the PutPixel function of SwapChain.
Here is the WinAppBase class (which consists more or less of global-like variables):
class WinAppBase
{
public:
// SCREENBUFFER
CONSOLE_SCREEN_BUFFER_INFO screenBufferInfo;
long screenBufferSize;
// CONSOLE
DWORD consoleMode;
HWND consoleWindow;
HANDLE consoleCursor;
HANDLE consoleInputHandle;
HANDLE consoleHandle;
CONSOLE_CURSOR_INFO consoleCursorInfo;
RECT consoleRect;
COORD consoleSize;
// FONT
CONSOLE_FONT_INFOEX fontInfo;
// MEMORY
char * pUserAccessDataPath;
public:
void reload();
WinAppBase();
virtual ~WinAppBase();
};
There are no errors, simply this alternating waiting time.
Maybe you'd like to start by checking whether I did the synchronisation of the threads correctly? I'm not exactly sure how to use a mutex or condition variables, so it might come from that.
Apart from that it works fine; the sprites are shown as they should be.
The clock you are using may have limited resolution. Here is a random example of a clock provided by Microsoft with 15 ms (15000 microsecond) resolution: Why are .NET timers limited to 15 ms resolution?
If one thread is often waiting for the other, it is entirely possible (assuming the above clock resolution) that it sometimes waits two clock ticks and sometimes none. Maybe your clock only has 30 ms resolution. We really can't tell from the code. Do you get more precise measurements elsewhere with this clock?
There are also other systems in play such as the OS scheduler or whatever controls your std::threads. That one is (hopefully) much more granular, but how all these interactions play out doesn't have to be obvious or intuitive.
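One way to check is to measure the smallest step the clock actually reports; a minimal, self-contained sketch (substitute the engine's Clock for steady_clock):
#include <chrono>
#include <iostream>

int main()
{
    using Clock = std::chrono::steady_clock; // swap in the clock you measure with
    auto t0 = Clock::now();
    auto t1 = t0;
    while (t1 == t0)      // spin until the reported time actually changes
        t1 = Clock::now();
    std::cout << "smallest observable step: "
              << std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()
              << " us\n";
}
If that prints tens of thousands of microseconds, the alternating 0/36000 readings are simply the measurement granularity, not the actual wait times.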

Pops / clicks when stopping and starting DirectX sound synth in C++ / MFC

I have made a soft synthesizer in Visual Studio 2012 with C++, MFC and DirectX. Despite having added code to rapidly fade out the sound, I am experiencing popping/clicking when stopping playback (and also when starting).
I copied the DirectX code from this project: http://www.codeproject.com/Articles/7474/Sound-Generator-How-to-create-alien-sounds-using-m
I'm not sure if I'm allowed to cut and paste all the code from The Code Project. Basically, I use the Player class from that project as is; the instance of this class is called m_player in my code. The Stop member function of that class calls the Stop function of the LPDIRECTSOUNDBUFFER:
void Player::Stop()
{
    DWORD status;
    if (m_lpDSBuffer == NULL)
        return;
    HRESULT hres = m_lpDSBuffer->GetStatus(&status);
    if (FAILED(hres))
        EXCEP(DirectSoundErr::GetErrDesc(hres), "Player::Stop GetStatus");
    if ((status & DSBSTATUS_PLAYING) == DSBSTATUS_PLAYING)
    {
        hres = m_lpDSBuffer->Stop();
        if (FAILED(hres))
            EXCEP(DirectSoundErr::GetErrDesc(hres), "Player::Stop Stop");
    }
}
Here is the notification code (with some supporting code) in my project that fills the sound buffer. Note that the rend function always returns a double between -1 and 1, and that m_ev_smps = 441, m_n_evs = 3 and m_ev_sz = 882. subInit is called from OnInitDialog:
#define FD_STEP 0.0005
#define SC_NOT_PLYD 0
#define SC_PLYNG 1
#define SC_FD_OUT 2
#define SC_FD_IN 3
#define SC_STPNG 4
#define SC_STPD 5
bool CMainDlg::subInit()
// initialises various variables and the sound player
{
Player *pPlayer;
SOUNDFORMAT format;
std::vector<DWORD> events;
int t, buf_sz;
try
{
pPlayer = new Player();
pPlayer->SetHWnd(m_hWnd);
m_player = pPlayer;
m_player->Init();
format.NbBitsPerSample = 16;
format.NbChannels = 1;
format.SamplingRate = 44100;
m_ev_smps = 441;
m_n_evs = 3;
m_smps = new short[m_ev_smps];
m_smp_scale = (int)pow(2, format.NbBitsPerSample - 1);
m_max_tm = (int)((double)m_ev_smps / (double)(format.SamplingRate * 1000)); // note: this truncates to 0; (m_ev_smps * 1000) / SamplingRate was probably intended
m_ev_sz = m_ev_smps * format.NbBitsPerSample/8;
buf_sz = m_ev_sz * m_n_evs;
m_player->CreateSoundBuffer(format, buf_sz, 0);
m_player->SetSoundEventListener(this);
for(t = 0; t < m_n_evs; t++)
events.push_back((int)((t + 1)*m_ev_sz - m_ev_sz * 0.95));
m_player->CreateEventReadNotification(events);
m_status = SC_NOT_PLYD;
}
catch(MATExceptions &e)
{
MessageBox(e.getAllExceptionStr().c_str(), "Error initializing the sound player");
EndDialog(IDCANCEL);
return FALSE;
}
return TRUE;
}
void CMainDlg::Stop()
// stop playing
{
m_player->Stop();
m_status = SC_STPD;
}
void CMainDlg::OnBnClickedStop()
// causes fade out
{
m_status = SC_FD_OUT;
}
void CMainDlg::OnSoundPlayerNotify(int ev_num)
// render some sound samples and check for errors
{
ScopeGuardMutex guard(&m_mutex);
int s, end, begin, elapsed;
if (m_status != SC_STPNG)
{
begin = GetTickCount();
try
{
for(s = 0; s < m_ev_smps; s++)
{
m_smps[s] = (int)(m_synth->rend() * 32768 * m_fade);
if (m_status == SC_FD_IN)
{
m_fade += FD_STEP;
if (m_fade > 1)
{
m_fade = 1;
m_status = SC_PLYNG;
}
}
else if (m_status == SC_FD_OUT)
{
m_fade -= FD_STEP;
if (m_fade < 0)
{
m_fade = 0;
m_status = SC_STPNG;
}
}
}
}
catch(MATExceptions &e)
{
OutputDebugString(e.getAllExceptionStr().c_str());
}
try
{
m_player->Write(((ev_num + 1) % m_n_evs)*m_ev_sz, (unsigned char*)m_smps, m_ev_sz);
}
catch(MATExceptions &e)
{
OutputDebugString(e.getAllExceptionStr().c_str());
}
end = GetTickCount();
elapsed = end - begin;
if(elapsed > m_max_tm)
m_warn_msg.Format(_T("Warning! compute time: %dms"), elapsed);
else
m_warn_msg.Format(_T("compute time: %dms"), elapsed);
}
if (m_status == SC_STPNG)
Stop();
}
It seems like the buffer does not always sound out fully when the stop button is clicked. I don't have any specific code to wait for the sound buffer to finish playing before the DirectX Stop is called. Other than that, sound playback works just fine, so at least I am initialising the player correctly and the notification code is working in that respect.
Try replacing 32768 with 32767. I'm not by any means sure this is your issue, but it could overflow the positive short int range (assuming your audio is 16-bit) and cause a "pop".
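A clamped conversion also makes the cast safe against small overshoots; a minimal sketch (assuming rend() stays within [-1, 1] apart from rounding):
// Clamp before converting to a 16-bit sample so values at or above
// +1.0 cannot wrap around to a large negative number.
inline short toSample16(double v)
{
    if (v >  1.0) v =  1.0;
    if (v < -1.0) v = -1.0;
    return (short)(v * 32767.0);
}
// usage in the render loop: m_smps[s] = toSample16(m_synth->rend() * m_fade);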
I got rid of the pops/clicks when stopping playback by filling the buffer with zeros after the fade out. However, I still get pops when re-starting playback, despite filling with zeros and then fading back in (it is frustrating).
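A minimal sketch of that zero-fill, assuming it replaces the render loop in OnSoundPlayerNotify once the fade has completed (m_status == SC_STPNG):
// Write one event's worth of silence so the buffer drains to zero
// before Player::Stop() is finally called.
memset(m_smps, 0, m_ev_smps * sizeof(short));
m_player->Write(((ev_num + 1) % m_n_evs) * m_ev_sz, (unsigned char*)m_smps, m_ev_sz);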

How to write a Live555 FramedSource to allow me to stream H.264 live

I've been trying to write a class that derives from FramedSource in Live555 that will allow me to stream live data from my D3D9 application to an MP4 or similar.
What I do each frame is grab the backbuffer into system memory as a texture, convert it from RGB to YUV420P, encode it using x264, and then ideally pass the NAL packets on to Live555. I made a class called H264FramedSource that derives from FramedSource, basically by copying the DeviceSource file; instead of the input being an input file, I've made it a NAL packet which I update each frame.
I'm quite new to codecs and streaming, so I could be doing everything completely wrong. In each doGetNextFrame() should I be grabbing the NAL packet and doing something like
memcpy(fTo, nal->p_payload, nal->i_payload)
I assume that the payload is my frame data in bytes? If anybody has an example of a class they derived from FramedSource that might at least be close to what I'm trying to do I would love to see it, this is all new to me and a little tricky to figure out what's happening. Live555's documentation is pretty much the code itself which doesn't exactly make it easy for me to figure out.
OK, I finally got some time to spend on this and got it working! I'm sure there are others who will be begging to know how to do it, so here it is.
You will need your own FramedSource to take each frame, encode it, and prepare it for streaming; I will provide some of the source code for this soon.
Essentially, throw your FramedSource into the H264VideoStreamDiscreteFramer, then throw that into the H264RTPSink. Something like this:
scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);
framedSource = H264FramedSource::createNew(*env, 0,0);
h264VideoStreamDiscreteFramer
= H264VideoStreamDiscreteFramer::createNew(*env, framedSource);
// initialise the RTP Sink stuff here, look at
// testH264VideoStreamer.cpp to find out how
videoSink->startPlaying(*h264VideoStreamDiscreteFramer, NULL, videoSink);
env->taskScheduler().doEventLoop();
Now, in your main render loop, hand the backbuffer you've saved to system memory over to your FramedSource so it can be encoded etc. For more info on how to set up the encoding, check out this answer: How does one encode a series of images into H264 using the x264 C API?
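As a sketch, that hand-off can be a single call into the source each frame (pH264FramedSource and backbufferPixels are hypothetical names; 3 bytes per pixel matches the RGB24 input fed to sws_scale below):
// called once per rendered frame, from the render loop:
pH264FramedSource->AddToBuffer(backbufferPixels, W * H * 3);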
My implementation is very much in a hacky state and is yet to be optimised at all; my D3D application runs at around 15 fps due to the encoding, ouch, so I will have to look into that. But for all intents and purposes this StackOverflow question is answered, because I was mostly after how to stream it. I hope this helps other people.
As for my FramedSource it looks a little something like this
concurrent_queue<x264_nal_t> m_queue;
SwsContext* convertCtx;
x264_param_t param;
x264_t* encoder;
x264_picture_t pic_in, pic_out;
EventTriggerId H264FramedSource::eventTriggerId = 0;
unsigned H264FramedSource::FrameSize = 0;
unsigned H264FramedSource::referenceCount = 0;
int W = 720;
int H = 960;
H264FramedSource* H264FramedSource::createNew(UsageEnvironment& env,
unsigned preferredFrameSize,
unsigned playTimePerFrame)
{
return new H264FramedSource(env, preferredFrameSize, playTimePerFrame);
}
H264FramedSource::H264FramedSource(UsageEnvironment& env,
unsigned preferredFrameSize,
unsigned playTimePerFrame)
: FramedSource(env),
fPreferredFrameSize(fMaxSize),
fPlayTimePerFrame(playTimePerFrame),
fLastPlayTime(0),
fCurIndex(0)
{
if (referenceCount == 0)
{
}
++referenceCount;
x264_param_default_preset(&param, "veryfast", "zerolatency");
param.i_threads = 1;
param.i_width = 720;
param.i_height = 960;
param.i_fps_num = 60;
param.i_fps_den = 1;
// Intra refres:
param.i_keyint_max = 60;
param.b_intra_refresh = 1;
//Rate control:
param.rc.i_rc_method = X264_RC_CRF;
param.rc.f_rf_constant = 25;
param.rc.f_rf_constant_max = 35;
param.i_sps_id = 7;
//For streaming:
param.b_repeat_headers = 1;
param.b_annexb = 1;
x264_param_apply_profile(&param, "baseline");
encoder = x264_encoder_open(&param);
pic_in.i_type = X264_TYPE_AUTO;
pic_in.i_qpplus1 = 0;
pic_in.img.i_csp = X264_CSP_I420;
pic_in.img.i_plane = 3;
x264_picture_alloc(&pic_in, X264_CSP_I420, 720, 960);
convertCtx = sws_getContext(720, 960, PIX_FMT_RGB24, 720, 960, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
if (eventTriggerId == 0)
{
eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
}
}
H264FramedSource::~H264FramedSource()
{
--referenceCount;
if (referenceCount == 0)
{
// Reclaim our 'event trigger'
envir().taskScheduler().deleteEventTrigger(eventTriggerId);
eventTriggerId = 0;
}
}
void H264FramedSource::AddToBuffer(uint8_t* buf, int surfaceSizeInBytes)
{
uint8_t* surfaceData = (new uint8_t[surfaceSizeInBytes]);
memcpy(surfaceData, buf, surfaceSizeInBytes);
int srcstride = W*3;
sws_scale(convertCtx, &surfaceData, &srcstride,0, H, pic_in.img.plane, pic_in.img.i_stride);
x264_nal_t* nals = NULL;
int i_nals = 0;
int frame_size = -1;
frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);
static bool finished = false;
if (frame_size >= 0)
{
static bool alreadydone = false;
if(!alreadydone)
{
x264_encoder_headers(encoder, &nals, &i_nals);
alreadydone = true;
}
for(int i = 0; i < i_nals; ++i)
{
m_queue.push(nals[i]);
}
}
delete [] surfaceData;
surfaceData = NULL;
envir().taskScheduler().triggerEvent(eventTriggerId, this);
}
void H264FramedSource::doGetNextFrame()
{
deliverFrame();
}
void H264FramedSource::deliverFrame0(void* clientData)
{
((H264FramedSource*)clientData)->deliverFrame();
}
void H264FramedSource::deliverFrame()
{
x264_nal_t nalToDeliver;
if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0) {
if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
// This is the first frame, so use the current time:
gettimeofday(&fPresentationTime, NULL);
} else {
// Increment by the play time of the previous data:
unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
fPresentationTime.tv_sec += uSeconds/1000000;
fPresentationTime.tv_usec = uSeconds%1000000;
}
// Remember the play time of this data:
fLastPlayTime = (fPlayTimePerFrame*fFrameSize)/fPreferredFrameSize;
fDurationInMicroseconds = fLastPlayTime;
} else {
// We don't know a specific play time duration for this data,
// so just record the current time as being the 'presentation time':
gettimeofday(&fPresentationTime, NULL);
}
if(!m_queue.empty())
{
m_queue.wait_and_pop(nalToDeliver);
uint8_t* newFrameDataStart = (uint8_t*)(nalToDeliver.p_payload);
unsigned newFrameSize = nalToDeliver.i_payload;
// Deliver the data here:
if (newFrameSize > fMaxSize) {
fFrameSize = fMaxSize;
fNumTruncatedBytes = newFrameSize - fMaxSize;
}
else {
fFrameSize = newFrameSize;
}
memcpy(fTo, newFrameDataStart, fFrameSize);
FramedSource::afterGetting(this);
}
}
Oh, and for those who want to know what my concurrent queue is, here it is, and it works brilliantly: http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
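For reference, a minimal sketch of the interface used above (the linked article has the full version):
#include <condition_variable>
#include <mutex>
#include <queue>

template<typename T>
class concurrent_queue
{
    std::queue<T> q_;
    mutable std::mutex m_;
    std::condition_variable cv_;
public:
    void push(T v)
    {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one(); // wake a consumer blocked in wait_and_pop
    }
    bool empty() const
    {
        std::lock_guard<std::mutex> lk(m_);
        return q_.empty();
    }
    void wait_and_pop(T& v)
    {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); }); // guards against spurious wakeups
        v = std::move(q_.front());
        q_.pop();
    }
};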
Enjoy and good luck!
The deliverFrame method lacks the following check at its start:
if (!isCurrentlyAwaitingData()) return; // we're not ready for the data yet
See DeviceSource.cpp in the LIVE555 source.