AudioConverterNew returned -50 - c++

I have a little issue regarding the use of the AudioQueue services.
I have followed the guide available on Apple's website, but when I try to start and run the AudioQueue, I get a message telling me that "AudioConverterNew returned -50".
Now, I know that the -50 error code means there is a bad parameter. However, what I don't know is which parameter is the bad one (thank you so much, Apple...)!
So, here's my code.
Here are the parameters of my class, named cPlayerCocoa
AudioQueueRef mQueue;
AudioQueueBufferRef mBuffers[NUMBER_BUFFERS]; // NUMBER_BUFFERS = 3
uint32 mBufferByteSize;
AudioStreamBasicDescription mDataFormat;
Here's the first function :
static void
BuildBuffer( void* iAQData, AudioQueueRef iAQ, AudioQueueBufferRef iBuffer )
{
cPlayerCocoa* player = (cPlayerCocoa*) iAQData;
player->HandleOutputBuffer( iAQ, iBuffer );
}
It casts the user-data pointer back to a cPlayerCocoa and calls the HandleOutputBuffer function, which fills the audio buffers:
void
cPlayerCocoa::HandleOutputBuffer( AudioQueueRef iAQ, AudioQueueBufferRef iBuffer )
{
if( mContinue )
{
xassert( iBuffer->mAudioDataByteSize == 32768 );
int startSample = mPlaySampleCurrent;
int result = 0;
int samplecount = 32768 / ( mSoundData->BytesPerSample() ); // BytesPerSample, in my case, returns 4
tErrorCode error = mSoundData->ReadData( (int16*)(iBuffer->mAudioData), samplecount, &result, startSample );
AudioQueueEnqueueBuffer( mQueue, iBuffer, 0, 0 ); // I'm using CBR data (PCM), hence the 0 passed into the AudioQueueEnqueueBuffer.
if( result != samplecount )
mContinue = false;
startSample += result;
}
else
{
AudioQueueStop( mQueue, false );
}
}
In this next function, the AudioQueue is created and then started.
I begin by initialising the data format, then I create the AudioQueue and allocate the three buffers.
Once the buffers are allocated, I start the AudioQueue and then run the loop.
void
cPlayerCocoa::ThreadEntry()
{
int samplecount = 32768 / ( mSoundData->BytesPerSample() );
mDataFormat.mSampleRate = mSoundData->SamplingRate(); // Returns 44100
mDataFormat.mFormatID = kAudioFormatLinearPCM;
mDataFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
mDataFormat.mBytesPerPacket = 32768;
mDataFormat.mFramesPerPacket = samplecount;
mDataFormat.mBytesPerFrame = mSoundData->BytesPerSample(); // BytesPerSample returns 4.
mDataFormat.mChannelsPerFrame = 2;
mDataFormat.mBitsPerChannel = uint32(mSoundData->BitsPerChannel());
mDataFormat.mReserved = 0;
AudioQueueNewOutput( &mDataFormat, BuildBuffer, this, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &mQueue );
for( int i = 0; i < NUMBER_BUFFERS; ++i )
{
AudioQueueAllocateBuffer( mQueue, mBufferByteSize, &mBuffers[i] );
HandleOutputBuffer( mQueue, mBuffers[i] );
}
AudioQueueStart( mQueue, NULL ); // I want the queue to start playing immediately, so I pass NULL
do {
CFRunLoopRunInMode( kCFRunLoopDefaultMode, 0.25, false );
} while ( !NeedStopASAP() );
AudioQueueDispose( mQueue, true );
}
The call to AudioQueueStart returns -50 (bad parameter) and I can't figure out what's wrong...
I would really appreciate some help, thanks in advance :-)

I think your ASBD is suspect. PCM formats have predictable values for mBytesPerPacket, mBytesPerFrame, and mFramesPerPacket. For normal 16-bit interleaved signed 44.1 kHz stereo audio the ASBD would look like:
AudioStreamBasicDescription asbd = {
    .mFormatID = kAudioFormatLinearPCM,
    .mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
    .mSampleRate = 44100,
    .mChannelsPerFrame = 2,
    .mBitsPerChannel = 16,
    .mBytesPerPacket = 4,
    .mFramesPerPacket = 1,
    .mBytesPerFrame = 4,
    .mReserved = 0
};
AudioConverterNew returns -50 when one of the ASBDs is unsupported. There is no PCM format where mBytesPerPacket should be 32768, which is why you're getting the error.
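For reference, these fields can also be derived from the channel count and bit depth instead of being hard-coded. A minimal sketch, assuming interleaved 16-bit signed samples and reusing the question's names (BuildBuffer, mQueue, inside something like cPlayerCocoa::ThreadEntry()):
AudioStreamBasicDescription fmt = { 0 };
fmt.mSampleRate       = 44100;
fmt.mFormatID         = kAudioFormatLinearPCM;
fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
fmt.mChannelsPerFrame = 2;
fmt.mBitsPerChannel   = 16;
fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * (fmt.mBitsPerChannel / 8); // 2 ch * 2 bytes = 4
fmt.mFramesPerPacket  = 1;                  // always 1 for uncompressed PCM
fmt.mBytesPerPacket   = fmt.mBytesPerFrame; // 4, not the AudioQueue buffer size

OSStatus err = AudioQueueNewOutput(&fmt, BuildBuffer, this,
                                   CFRunLoopGetCurrent(), kCFRunLoopCommonModes,
                                   0, &mQueue);
if (err != noErr)
{
    // -50 (kAudio_ParamError) here usually means the ASBD fields are inconsistent.
}
Checking the OSStatus returned by each call (AudioQueueNewOutput, AudioQueueAllocateBuffer, AudioQueueStart) also helps narrow down which step actually rejects the parameters.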


How to get only capture or playback devices using pjsua2

I am trying to get the devices from pjsua2. I managed to enumerate all devices, but I could not split them into capture devices and playback devices.
void AudioController::load(){
Endpoint ep;
ep.libCreate();
// Initialize endpoint
EpConfig ep_cfg;
ep.libInit( ep_cfg );
AudDevManager &manager = ep.audDevManager();
manager.refreshDevs();
this->input.clear();
const AudioDevInfoVector &list = manager.enumDev();
for(unsigned int i = 0;list.size() != i;i++){
AudioDevInfo * info = list[i];
GtAudioDevice * a = new GtAudioDevice();
a->name = info->name.c_str();
a->deviceId = i;
qDebug() << info->name.c_str();
qDebug() << info->driver.c_str();
qDebug() << info->caps;
this->input.append(a);
}
ep.libDestroy();
}
This is my output:
Wave mapper
WMME
23
Microfone (Dispositivo de High
WMME
3
Alto-falantes (Dispositivo de H
WMME
21
You can check the fields inputCount and outputCount inside AudioDevInfo.
According to the documentation:
unsigned inputCount
Maximum number of input channels supported by this device. If the
value is zero, the device does not support input operation (i.e. it is
a playback only device).
And
unsigned outputCount
Maximum number of output channels supported by this device. If the
value is zero, the device does not support output operation (i.e. it
is an input only device).
So you could do something like this:
for(unsigned int i = 0;list.size() != i;i++){
AudioDevInfo * info = list[i];
GtAudioDevice * a = new GtAudioDevice();
a->name = info->name.c_str();
a->deviceId = i;
if (info->inputCount > 0) {
a->captureDevice = true;
}
if (info->outputCount > 0) {
a->playbackDevice = true;
}
this->input.append(a);
}
Reference: http://www.pjsip.org/pjsip/docs/html/structpj_1_1AudioDevInfo.htm
Alternatively, you can check the caps (capabilities) field. Something like this:
for (unsigned int i = 0; i < list.size(); i++)
{
    AudioDevInfo * info = list[i];
    if ((info->caps & PJMEDIA_AUD_DEV_CAP_OUTPUT_LATENCY) != 0)
    {
        // Playback devices come here
    }
    if ((info->caps & PJMEDIA_AUD_DEV_CAP_INPUT_LATENCY) != 0)
    {
        // Capture devices come here
    }
}
caps is a bitmask combined from these possible values:
enum pjmedia_aud_dev_cap {
PJMEDIA_AUD_DEV_CAP_EXT_FORMAT = 1,
PJMEDIA_AUD_DEV_CAP_INPUT_LATENCY = 2,
PJMEDIA_AUD_DEV_CAP_OUTPUT_LATENCY = 4,
PJMEDIA_AUD_DEV_CAP_INPUT_VOLUME_SETTING = 8,
PJMEDIA_AUD_DEV_CAP_OUTPUT_VOLUME_SETTING = 16,
PJMEDIA_AUD_DEV_CAP_INPUT_SIGNAL_METER = 32,
PJMEDIA_AUD_DEV_CAP_OUTPUT_SIGNAL_METER = 64,
PJMEDIA_AUD_DEV_CAP_INPUT_ROUTE = 128,
PJMEDIA_AUD_DEV_CAP_INPUT_SOURCE = 128,
PJMEDIA_AUD_DEV_CAP_OUTPUT_ROUTE = 256,
PJMEDIA_AUD_DEV_CAP_EC = 512,
PJMEDIA_AUD_DEV_CAP_EC_TAIL = 1024,
PJMEDIA_AUD_DEV_CAP_VAD = 2048,
PJMEDIA_AUD_DEV_CAP_CNG = 4096,
PJMEDIA_AUD_DEV_CAP_PLC = 8192,
PJMEDIA_AUD_DEV_CAP_MAX = 16384
}
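If you want two separate lists rather than flags on a single list, the same inputCount/outputCount test can be used to sort the devices directly. A rough sketch, where capture and playback are hypothetical containers alongside the question's input list:
// Sketch: sort enumerated devices into hypothetical `capture` and `playback` lists
// based on inputCount/outputCount (a device supporting both ends up in both lists).
const AudioDevInfoVector &list = manager.enumDev();
for (unsigned int i = 0; i < list.size(); i++)
{
    AudioDevInfo * info = list[i];
    GtAudioDevice * a = new GtAudioDevice();
    a->name = info->name.c_str();
    a->deviceId = i;
    if (info->inputCount > 0)
        this->capture.append(a);   // device can record
    if (info->outputCount > 0)
        this->playback.append(a);  // device can play back
}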

Audio distorted with VST plugin

I had to plug a simple VST host into pre-existing software that manages ASIO audio streams. Despite the lack of documentation, I managed to do so; however, once I load the plugin I get a badly distorted audio signal back.
The VST I'm using works properly with other VST hosts, so it's probably some kind of bug in my code. However, when I disable the "PROCESS" mode of the plugin (my stream still goes through the plugin, it simply does not get processed), the audio comes back exactly as I sent it, without any noise or distortion.
One thing I'm slightly concerned about is the type of the data used: the ASIO driver fills an __int32 buffer while the plugin wants a float buffer.
It's really frustrating, as I have reviewed my code countless times and it seems fine.
Here is the code of the class I'm using; please note that some numbers are temporarily hard-coded to help debugging.
VSTPlugIn::VSTPlugIn(const char* fullDirectoryName, const char* ID)
: plugin(NULL)
, blocksize(128) // TODO
, sampleRate(44100.0F) // TODO
, hostID(ID)
{
this->LoadPlugin(fullDirectoryName);
this->ConfigurePluginCallbacks();
this->StartPlugin();
out = new float*[2];
for (int i = 0; i < 2; ++i)
{
out[i] = new float[128];
memset(out[i], 0, 128);
}
}
void VSTPlugIn::LoadPlugin(const char* path)
{
HMODULE modulePtr = LoadLibrary(path);
if(modulePtr == NULL)
{
printf("Failed trying to load VST from '%s', error %d\n", path, GetLastError());
plugin = NULL;
}
// vst 2.4 export name
vstPluginFuncPtr mainEntryPoint = (vstPluginFuncPtr)GetProcAddress(modulePtr, "VSTPluginMain");
// if "VSTPluginMain" was not found, search for "main" (backwards compatibility mode)
if(!mainEntryPoint)
{
mainEntryPoint = (vstPluginFuncPtr)GetProcAddress(modulePtr, "main");
}
// Instantiate the plugin
plugin = mainEntryPoint(hostCallback);
}
void VSTPlugIn::ConfigurePluginCallbacks()
{
// Check plugin's magic number
// If incorrect, then the file either was not loaded properly, is not a
// real VST plugin, or is otherwise corrupt.
if(plugin->magic != kEffectMagic)
{
printf("Plugin's magic number is bad. Plugin will be discarded\n");
plugin = NULL;
}
// Create dispatcher handle
this->dispatcher = (dispatcherFuncPtr)(plugin->dispatcher);
// Set up plugin callback functions
plugin->getParameter = (getParameterFuncPtr)plugin->getParameter;
plugin->processReplacing = (processFuncPtr)plugin->processReplacing;
plugin->setParameter = (setParameterFuncPtr)plugin->setParameter;
}
void VSTPlugIn::StartPlugin()
{
// Set some default properties
dispatcher(plugin, effOpen, 0, 0, NULL, 0);
dispatcher(plugin, effSetSampleRate, 0, 0, NULL, sampleRate);
dispatcher(plugin, effSetBlockSize, 0, blocksize, NULL, 0.0f);
this->ResumePlugin();
}
void VSTPlugIn::ResumePlugin()
{
dispatcher(plugin, effMainsChanged, 0, 1, NULL, 0.0f);
}
void VSTPlugIn::SuspendPlugin()
{
dispatcher(plugin, effMainsChanged, 0, 0, NULL, 0.0f);
}
void VSTPlugIn::ProcessAudio(float** inputs, float** outputs, long numFrames)
{
plugin->processReplacing(plugin, inputs, out, 128);
memcpy(outputs, out, sizeof(float) * 128);
}
EDIT: Here's the code I use to interface my sw with the VST Host
// Copying the outer buffer in the inner container
for(unsigned i = 0; i < bufferLenght; i++)
{
float f;
f = ((float) buff[i]) / (float) std::numeric_limits<int>::max();
if( f > 1 ) f = 1;
if( f < -1 ) f = -1;
samples[0][i] = f;
}
// DO JOB
for(auto it = inserts.begin(); it != inserts.end(); ++it)
{
(*it)->ProcessAudio(samples, samples, bufferLenght);
}
// Copying the result back into the buffer
for(unsigned i = 0; i < bufferLenght; i++)
{
float f = samples[0][i];
int intval;
f = f * std::numeric_limits<int>::max();
if( f > std::numeric_limits<int>::max() ) f = std::numeric_limits<int>::max();
if( f < std::numeric_limits<int>::min() ) f = std::numeric_limits<int>::min();
intval = (int) f;
buff[i] = intval;
}
where "buff" is defined as "__int32* buff"
I'm guessing that when you call f = std::numeric_limits<int>::max() (and the related min() case on the line below), this might cause overflow. Have you tried f = std::numeric_limits<int>::max() - 1?
The same goes for the code snippet above with f = ((float) buff[i]) / (float) std::numeric_limits<int>::max(); I'd also subtract one there to avoid a potential overflow later on.
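A different way to sidestep the overflow entirely is to convert with a fixed scale of 2^31 and clamp in double precision before casting back. A rough sketch using the question's buff / samples / bufferLenght names:
// Sketch: __int32 <-> float conversion with a fixed 2^31 scale and explicit clamping.
const double kScale = 2147483648.0; // 2^31

// __int32 -> float (feeding the plugin)
for (unsigned i = 0; i < bufferLenght; i++)
    samples[0][i] = (float)((double)buff[i] / kScale);   // roughly in [-1.0, 1.0)

// float -> __int32 (back into the ASIO buffer)
for (unsigned i = 0; i < bufferLenght; i++)
{
    double v = (double)samples[0][i] * kScale;
    if (v >  2147483647.0) v =  2147483647.0;   // clamp before casting to avoid overflow
    if (v < -2147483648.0) v = -2147483648.0;
    buff[i] = (__int32)v;
}
Doing the arithmetic in double also avoids the precision loss of casting INT_MAX to float in the first place.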

vp9 encoder returns a null packet

I am using this code to encode a video stream with VP8, and I decided to give VP9 a try, so I changed everything that referred to vp8 to vp9.
But the VP9 encoder always returns a null packet, even though it doesn't return any error.
Here is the code I am using for configuration:
vpx_codec_err_t error = vpx_codec_enc_config_default(vpx_codec_vp9_cx(), &enc_cfg, 0);
if(error != VPX_CODEC_OK)
return error;
enc_cfg.g_timebase.den = fps;
enc_cfg.rc_undershoot_pct = 95;
enc_cfg.rc_target_bitrate = bitrate;
enc_cfg.g_error_resilient = 1;
enc_cfg.kf_max_dist = 999999;
enc_cfg.rc_buf_initial_sz = 4000;
enc_cfg.rc_buf_sz = 6000;
enc_cfg.rc_buf_optimal_sz = 5000;
enc_cfg.rc_end_usage = VPX_CBR;
enc_cfg.g_h = height;
enc_cfg.g_w = width;
enc_cfg.rc_min_quantizer = 4;
enc_cfg.rc_max_quantizer = 56;
enc_cfg.g_threads = 4;
enc_cfg.g_pass = VPX_RC_ONE_PASS;
error = vpx_codec_enc_init(&codec, vpx_codec_vp9_cx(), &enc_cfg, 0);
if(error != VPX_CODEC_OK)
return error;
vpx_img_alloc(&vpx_image,VPX_IMG_FMT_I420 , width, height, 1);
configured = true;
return VPX_CODEC_OK;
And here is the code for the encoding:
libyuv::RAWToI420(frame, vpx_image.d_w * 3, vpx_image.planes[VPX_PLANE_Y],vpx_image.stride[VPX_PLANE_Y],
vpx_image.planes[VPX_PLANE_U], vpx_image.stride[VPX_PLANE_U], vpx_image.planes[VPX_PLANE_V],
vpx_image.stride[VPX_PLANE_V], vpx_image.d_w, vpx_image.d_h);
const vpx_codec_cx_pkt_t *pkt;
vpx_codec_err_t error = vpx_codec_encode(&codec, &vpx_image, 0, 1, 0, VPX_DL_GOOD_QUALITY);
if(error != VPX_CODEC_OK)
return vector<byte>();
vpx_codec_iter_t iter = NULL;
if((pkt = vpx_codec_get_cx_data(&codec, &iter)))//always return null ?
{
if(pkt->kind == VPX_CODEC_CX_FRAME_PKT)
{
int length = pkt->data.frame.sz;
byte* buf = (byte*) pkt->data.frame.buf;
vector<byte> data(buf, buf + length);
return data;
}
return vector<byte>();
}
return vector<byte>();
The code works fine if I use VP8 instead of VP9. Any help is welcome.
Just came across this post because I faced the same problem. Just for others to know: I solved it by setting
enc_cfg.g_lag_in_frames = 0;
This prevents the encoder from buffering up to the default of 25 frames (look-ahead lag) before it produces any output.
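In other words, the setting belongs in the configuration block before vpx_codec_enc_init. If you do keep the default lag, the buffered frames only come out once the encoder is flushed with a NULL image at end of stream. A rough sketch of both, reusing the question's enc_cfg / codec names (the flush loop follows the pattern of libvpx's sample encoders):
// Option 1: no look-ahead buffering, every vpx_codec_encode() call can produce a packet.
enc_cfg.g_lag_in_frames = 0;
error = vpx_codec_enc_init(&codec, vpx_codec_vp9_cx(), &enc_cfg, 0);

// Option 2: keep the lag, but flush the encoder at end of stream by passing a NULL image
// until no more packets come out.
bool got_data;
do
{
    got_data = false;
    if (vpx_codec_encode(&codec, NULL, -1, 1, 0, VPX_DL_GOOD_QUALITY) != VPX_CODEC_OK)
        break;
    vpx_codec_iter_t iter = NULL;
    const vpx_codec_cx_pkt_t *pkt;
    while ((pkt = vpx_codec_get_cx_data(&codec, &iter)) != NULL)
    {
        if (pkt->kind == VPX_CODEC_CX_FRAME_PKT)
            got_data = true;   // drained a buffered frame
    }
} while (got_data);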

Setting media type ignored on output pin

I'm building a DirectShow graph.
I have a video capture filter that connects its output pin to the input pin of a SampleGrabber.
Before I connect the two pins, I configure the output pin like this:
HRESULT GraphBuilder::applyVideoFormat()
{
if( !pVideoCaptureFilter_ )
return E_FAIL;
CComPtr<IPin> pPin;
pPin.Attach( findCategoryPin( pVideoCaptureFilter_, PINDIR_OUTPUT, PIN_CATEGORY_CAPTURE ) );
if( !pPin )
return E_FAIL;
CComQIPtr<IAMStreamConfig> pConfig( pPin );
if( !pConfig )
return E_FAIL;
AM_MEDIA_TYPE mt = { 0 };
VIDEOINFOHEADER vih = { 0 };
if( videoStandard_ == AnalogVideo_NTSC_M )
vih.AvgTimePerFrame = 333667;
else
vih.AvgTimePerFrame = 400000;
vih.bmiHeader.biSize = sizeof( BITMAPINFOHEADER );
vih.bmiHeader.biWidth = captureResolution_.cx;
vih.bmiHeader.biHeight = captureResolution_.cy;
vih.bmiHeader.biPlanes = 1;
vih.bmiHeader.biBitCount = 16;
vih.bmiHeader.biCompression = mmioFOURCC('Y','U','Y','2');
vih.bmiHeader.biSizeImage = vih.bmiHeader.biWidth * vih.bmiHeader.biHeight * 2;
mt.majortype = MEDIATYPE_Video;
mt.subtype = MEDIASUBTYPE_YUY2;
mt.bFixedSizeSamples = TRUE;
mt.bTemporalCompression = FALSE;
mt.lSampleSize = vih.bmiHeader.biSizeImage;
mt.formattype = FORMAT_VideoInfo;
mt.cbFormat = sizeof( VIDEOINFOHEADER );
mt.pbFormat = (BYTE*)&vih;
mt.pUnk = NULL;
return pConfig->SetFormat( &mt ); // SUCCESS - always
}
I also configure the sample grabber, although I know it will consider only the major type and subtype. It doesn't care about the rest.
HRESULT GraphBuilder::configureVideoSampleGrabber( ISampleGrabber * const pSampleGrabber )
{
AM_MEDIA_TYPE mt = { 0 };
VIDEOINFOHEADER vih = { 0 };
if( videoStandard_ == AnalogVideo_NTSC_M )
vih.AvgTimePerFrame = 333667;
else
vih.AvgTimePerFrame = 400000;
vih.bmiHeader.biSize = sizeof( BITMAPINFOHEADER );
vih.bmiHeader.biWidth = captureResolution_.cx;
vih.bmiHeader.biHeight = captureResolution_.cy;
vih.bmiHeader.biPlanes = 1;
vih.bmiHeader.biBitCount = 16;
vih.bmiHeader.biCompression = mmioFOURCC('Y','U','Y','2');
vih.bmiHeader.biSizeImage = vih.bmiHeader.biWidth * vih.bmiHeader.biHeight * 2;
mt.majortype = MEDIATYPE_Video;
mt.subtype = MEDIASUBTYPE_YUY2;
mt.bFixedSizeSamples = TRUE;
mt.bTemporalCompression = FALSE;
mt.lSampleSize = vih.bmiHeader.biSizeImage;
mt.formattype = FORMAT_VideoInfo;
mt.cbFormat = sizeof( VIDEOINFOHEADER );
mt.pbFormat = (BYTE*)&vih;
mt.pUnk = NULL;
return pSampleGrabber->SetMediaType( &mt ); // SUCCESS - always
}
This code is called once the filters are in the graph, but BEFORE they are connected. All return values are 0.
In this example, the captureResolution_.cx = 352 and captureResolution_.cy = 240.
Now, the question is: why do I always get the default 720x480 through the SampleGrabber instead of 352x240? I configured the capture pin to deliver 352x240.
What you are doing is about right; there are, however, a few things you want to change.
You definitely want to review your Sample Grabber initialization. You don't need a fully specified media type set on it. Instead, you want a partial media type with only the major type and subtype initialized (and the rest zeroed out). The rationale is the following.
The Sample Grabber is just a helper filter that you insert and configure to insist on a specific media type. In particular, it cannot negotiate the upstream connection type in any way other than comparing each attempted type against its internal reference type and accepting or rejecting it based on the result of that comparison. That said, the Sample Grabber cannot help set the resolution, but it can reject the connection if the media type differs in some unimportant field. It is sufficient to require video/YUY2 on the Sample Grabber; the rest of the format is the responsibility of the video source.
To make sure this media type is acceptable to the video source, you can always connect it interactively without the Sample Grabber and review all fields of the effective connection media type.
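Concretely, the Sample Grabber configuration can then be reduced to a partial media type along these lines (a sketch of the idea, not a drop-in replacement for the function above):
// Sketch: require only MEDIATYPE_Video / MEDIASUBTYPE_YUY2 on the Sample Grabber and
// leave the format block empty, so the capture pin's format decides the resolution.
HRESULT GraphBuilder::configureVideoSampleGrabber( ISampleGrabber * const pSampleGrabber )
{
    AM_MEDIA_TYPE mt = { 0 };
    mt.majortype  = MEDIATYPE_Video;
    mt.subtype    = MEDIASUBTYPE_YUY2;
    mt.formattype = GUID_NULL;   // no VIDEOINFOHEADER attached
    return pSampleGrabber->SetMediaType( &mt );
}
The resolution itself then comes solely from IAMStreamConfig::SetFormat on the capture pin.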

How to write a Live555 FramedSource to allow me to stream H.264 live

I've been trying to write a class that derives from FramedSource in Live555 that will allow me to stream live data from my D3D9 application to an MP4 or similar.
What I do each frame is grab the backbuffer into system memory as a texture, then convert it from RGB to YUV420P, then encode it using x264, then ideally pass the NAL packets on to Live555. I made a class called H264FramedSource that derives from FramedSource, basically by copying the DeviceSource file. Instead of the input being an input file, I've made it a NAL packet which I update each frame.
I'm quite new to codecs and streaming, so I could be doing everything completely wrong. In each doGetNextFrame(), should I be grabbing the NAL packet and doing something like
memcpy(fTo, nal->p_payload, nal->i_payload)
I assume that the payload is my frame data in bytes? If anybody has an example of a class they derived from FramedSource that is at least close to what I'm trying to do, I would love to see it; this is all new to me and a little tricky to figure out. Live555's documentation is pretty much the code itself, which doesn't exactly make it easy to work out what's happening.
Ok, I finally got some time to spend on this and got it working! I'm sure there are others who will be begging to know how to do it so here it is.
You will need your own FramedSource to take each frame, encode it, and prepare it for streaming; I provide some of the source code for this below.
Essentially throw your FramedSource into the H264VideoStreamDiscreteFramer, then throw this into the H264RTPSink. Something like this:
scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);
framedSource = H264FramedSource::createNew(*env, 0,0);
h264VideoStreamDiscreteFramer
= H264VideoStreamDiscreteFramer::createNew(*env, framedSource);
// initialise the RTP Sink stuff here, look at
// testH264VideoStreamer.cpp to find out how
videoSink->startPlaying(*h264VideoStreamDiscreteFramer, NULL, videoSink);
env->taskScheduler().doEventLoop();
Now in your main render loop, throw your backbuffer (which you've saved to system memory) over to your FramedSource so it can be encoded etc. For more info on how to set up the encoding, check out this answer: How does one encode a series of images into H264 using the x264 C API?
My implementation is still very much in a hacky state and has yet to be optimised at all; my D3D application runs at around 15 fps due to the encoding, ouch, so I will have to look into this. But for all intents and purposes this question is answered, because I was mostly after how to stream it. I hope this helps other people.
As for my FramedSource, it looks a little something like this:
concurrent_queue<x264_nal_t> m_queue;
SwsContext* convertCtx;
x264_param_t param;
x264_t* encoder;
x264_picture_t pic_in, pic_out;
EventTriggerId H264FramedSource::eventTriggerId = 0;
unsigned H264FramedSource::FrameSize = 0;
unsigned H264FramedSource::referenceCount = 0;
int W = 720;
int H = 960;
H264FramedSource* H264FramedSource::createNew(UsageEnvironment& env,
unsigned preferredFrameSize,
unsigned playTimePerFrame)
{
return new H264FramedSource(env, preferredFrameSize, playTimePerFrame);
}
H264FramedSource::H264FramedSource(UsageEnvironment& env,
unsigned preferredFrameSize,
unsigned playTimePerFrame)
: FramedSource(env),
fPreferredFrameSize(fMaxSize),
fPlayTimePerFrame(playTimePerFrame),
fLastPlayTime(0),
fCurIndex(0)
{
if (referenceCount == 0)
{
}
++referenceCount;
x264_param_default_preset(&param, "veryfast", "zerolatency");
param.i_threads = 1;
param.i_width = 720;
param.i_height = 960;
param.i_fps_num = 60;
param.i_fps_den = 1;
// Intra refres:
param.i_keyint_max = 60;
param.b_intra_refresh = 1;
//Rate control:
param.rc.i_rc_method = X264_RC_CRF;
param.rc.f_rf_constant = 25;
param.rc.f_rf_constant_max = 35;
param.i_sps_id = 7;
//For streaming:
param.b_repeat_headers = 1;
param.b_annexb = 1;
x264_param_apply_profile(&param, "baseline");
encoder = x264_encoder_open(&param);
pic_in.i_type = X264_TYPE_AUTO;
pic_in.i_qpplus1 = 0;
pic_in.img.i_csp = X264_CSP_I420;
pic_in.img.i_plane = 3;
x264_picture_alloc(&pic_in, X264_CSP_I420, 720, 960);
convertCtx = sws_getContext(720, 960, PIX_FMT_RGB24, 720, 960, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
if (eventTriggerId == 0)
{
eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
}
}
H264FramedSource::~H264FramedSource()
{
--referenceCount;
if (referenceCount == 0)
{
// Reclaim our 'event trigger'
envir().taskScheduler().deleteEventTrigger(eventTriggerId);
eventTriggerId = 0;
}
}
void H264FramedSource::AddToBuffer(uint8_t* buf, int surfaceSizeInBytes)
{
uint8_t* surfaceData = (new uint8_t[surfaceSizeInBytes]);
memcpy(surfaceData, buf, surfaceSizeInBytes);
int srcstride = W*3;
sws_scale(convertCtx, &surfaceData, &srcstride,0, H, pic_in.img.plane, pic_in.img.i_stride);
x264_nal_t* nals = NULL;
int i_nals = 0;
int frame_size = -1;
frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);
static bool finished = false;
if (frame_size >= 0)
{
static bool alreadydone = false;
if(!alreadydone)
{
x264_encoder_headers(encoder, &nals, &i_nals);
alreadydone = true;
}
for(int i = 0; i < i_nals; ++i)
{
m_queue.push(nals[i]);
}
}
delete [] surfaceData;
surfaceData = NULL;
envir().taskScheduler().triggerEvent(eventTriggerId, this);
}
void H264FramedSource::doGetNextFrame()
{
deliverFrame();
}
void H264FramedSource::deliverFrame0(void* clientData)
{
((H264FramedSource*)clientData)->deliverFrame();
}
void H264FramedSource::deliverFrame()
{
x264_nal_t nalToDeliver;
if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0) {
if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
// This is the first frame, so use the current time:
gettimeofday(&fPresentationTime, NULL);
} else {
// Increment by the play time of the previous data:
unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
fPresentationTime.tv_sec += uSeconds/1000000;
fPresentationTime.tv_usec = uSeconds%1000000;
}
// Remember the play time of this data:
fLastPlayTime = (fPlayTimePerFrame*fFrameSize)/fPreferredFrameSize;
fDurationInMicroseconds = fLastPlayTime;
} else {
// We don't know a specific play time duration for this data,
// so just record the current time as being the 'presentation time':
gettimeofday(&fPresentationTime, NULL);
}
if(!m_queue.empty())
{
m_queue.wait_and_pop(nalToDeliver);
uint8_t* newFrameDataStart = (uint8_t*)0xD15EA5E;
newFrameDataStart = (uint8_t*)(nalToDeliver.p_payload);
unsigned newFrameSize = nalToDeliver.i_payload;
// Deliver the data here:
if (newFrameSize > fMaxSize) {
fFrameSize = fMaxSize;
fNumTruncatedBytes = newFrameSize - fMaxSize;
}
else {
fFrameSize = newFrameSize;
}
memcpy(fTo, newFrameDataStart, fFrameSize); // copy at most fMaxSize bytes; the rest is reported via fNumTruncatedBytes
FramedSource::afterGetting(this);
}
}
Oh and for those who want to know what my concurrent queue is, here it is, and it works brilliantly http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
Enjoy and good luck!
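In case that link ever goes away, a minimal standard-library version with the same push / empty / wait_and_pop interface used above might look like this (a sketch, not the exact class from the article):
#include <queue>
#include <mutex>
#include <condition_variable>

template<typename T>
class concurrent_queue
{
public:
    void push(const T& value)
    {
        {
            std::lock_guard<std::mutex> lock(mMutex);
            mQueue.push(value);
        }
        mCond.notify_one();   // wake up one waiting consumer
    }
    bool empty() const
    {
        std::lock_guard<std::mutex> lock(mMutex);
        return mQueue.empty();
    }
    void wait_and_pop(T& value)
    {
        std::unique_lock<std::mutex> lock(mMutex);
        mCond.wait(lock, [this]{ return !mQueue.empty(); }); // block until data arrives
        value = mQueue.front();
        mQueue.pop();
    }
private:
    mutable std::mutex mMutex;
    std::condition_variable mCond;
    std::queue<T> mQueue;
};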
The deliverFrame method lacks the following check at its start:
if (!isCurrentlyAwaitingData()) return;
see DeviceSource.cpp in the LIVE555 source.
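For clarity, the guard goes at the very top of deliverFrame(), before any presentation-time handling:
void H264FramedSource::deliverFrame()
{
    // Do nothing if the downstream reader hasn't asked for data yet
    // (same pattern as DeviceSource.cpp in the LIVE555 source).
    if (!isCurrentlyAwaitingData()) return;

    // ... presentation-time handling and NAL delivery as shown above ...
}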