Playing generated audio data has noise and periodic clicking in the sound - C++

I am writing an application that plays sound coming from hardware (like a ring buffer filled with a sine wave of a certain frequency). Everything works, and I can play back the generated sound correctly, except for a periodic clicking (maybe at the end of the buffer?) and noise.
I initialize and run the buffer:
void Audiooutput::InitializeAudioParameters()
{
    Audio_DataWritten = 0;
    Audio_fragments = 4;
    Audio_channels = 2;
    Audio_BufferSize = 256;
    Audio_Samplerate = 8000;
    Audio_ResamplingFactor = 1;
    Audio_Framesize = 2; // (SND_PCM_FORMAT_S16_LE / 8)
    Audio_frames = Audio_BufferSize / Audio_Framesize * Audio_fragments;
    snd_pcm_uframes_t size;

    err = snd_pcm_hw_params_any(pcmPlaybackHandle, hw_params);
    err = snd_pcm_hw_params_set_rate_resample(pcmPlaybackHandle, hw_params, 1);
    err = snd_pcm_hw_params_set_format(pcmPlaybackHandle, hw_params, SND_PCM_FORMAT_S16_LE);
    err = snd_pcm_hw_params_set_channels(pcmPlaybackHandle, hw_params, Audio_channels);
    err = snd_pcm_hw_params_set_rate_near(pcmPlaybackHandle, hw_params, &Audio_Samplerate, 0);
    // qDebug() << a1.sprintf(" % d \t snd_pcm_hw_params_set_rate: %s", Audio_Samplerate, snd_strerror(err));

    if ((err = snd_pcm_hw_params_set_periods_near(pcmPlaybackHandle, hw_params, &Audio_fragments, 0)) < 0)
        qDebug() << a1.sprintf("Error setting # fragments to %d: %s\n", Audio_fragments, snd_strerror(err));
    else
        qDebug() << a1.sprintf("setting # fragments to %d: %s\n", Audio_fragments, snd_strerror(err));

    err = snd_pcm_hw_params_get_buffer_size(hw_params, &size);
    if ((err = snd_pcm_hw_params_set_buffer_size_near(pcmPlaybackHandle, hw_params, &Audio_frames)) < 0)
        qDebug() << a1.sprintf("Error setting buffer_size %d frames: %s", Audio_frames, snd_strerror(err));
    else
        qDebug() << a1.sprintf("setting Buffersize to %d --> %d: %s\n", Audio_BufferSize, Audio_frames, snd_strerror(err));

    Audio_BufferSize = Audio_frames;
    if ((err = snd_pcm_hw_params(pcmPlaybackHandle, hw_params)) < 0)
        qDebug() << a1.sprintf("Error setting HW params: %s", snd_strerror(err));
    Q_ASSERT(err >= 0);
}
void Audiooutput::ProduceAudioOutput(int n, int mmodes, int totalMModeGates,
                                     short *sinusValue, short *cosinusValue)
{
    for (int audioSample = 0; audioSample < n; audioSample += Audio_ResamplingFactor) {
        currentposition = (int)(m_Audio.generalPos % (Audio_BufferSize / 2));
        if (currentposition == 0) {
            QueueAudioBuffer();
            m_Audio.currentPos = 0;
        }
        m_Audio.generalPos++;
        AudioData[currentposition * 2] = (short)(sinusValue[audioSample]);
        AudioData[currentposition * 2 + 1] = (short)(cosinusValue[audioSample]);
    }
}

void Audiooutput::QueueAudioBuffer()
{
    snd_pcm_prepare(pcmPlaybackHandle);
    Audio_DataWritten += snd_pcm_writei(pcmPlaybackHandle, AudioData, Audio_BufferSize);
}
Changing the audio buffer size or the number of fragments also changes the clicking period.
Can anyone help me with this issue?
I also checked the first and last values. They are always different.
OS: Ubuntu 11
More detail:
The amount of received data is dynamic and depends on various parameters, but I always play a fixed-size chunk, e.g. 128, 256 or 512 values.
// I get the audio data from hardware (in a timer loop)
audiobuffersize = 256;
short *AudioData = new short[256];
int generalAudioSample = 0;

void CollectDataFromHw()
{
    ...
    int n = 0;
    n = GetData(buf1, buf2); // buf1 = new short[MAX_SHRT]
    if (n > 0)
        FillAudioBuffer(n, buf1, buf2);
    ...
}
-------------------------------------------
void FillAudioBuffer(int n, short *buf1, short *buf2)
{
    for (int audioSample = 0; audioSample < n; audioSample++) {
        iCurrentAudioSample = (int)(generalAudioSample % (audiobuffersize / 2));
        if (iCurrentAudioSample == 0) {
            snd_pcm_writei(pcmPlaybackHandle, AudioData, audiobuffersize);
            memset(AudioData, 0x00, audiobuffersize * sizeof(short));
        }
        generalAudioSample++;
        AudioData[iCurrentAudioSample * 2] = (short)(buf1[audioSample]);
        AudioData[iCurrentAudioSample * 2 + 1] = (short)(buf2[audioSample]);
    }
}
I also changed the audio buffer size. If I set it to a bigger size, I get some echo in addition to the clicks.
Any ideas?
//-----------------------
The problem is snd_pcm_prepare(pcmPlaybackHandle): every call of this function produces a click in the sound!
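A common ALSA pattern (a sketch, not taken from the post) is to call snd_pcm_prepare() only when recovering from an underrun, rather than before every write; preparing an already-running stream restarts it, which is audible:
// Sketch: write first, and only re-prepare when ALSA reports an
// underrun (-EPIPE, from <errno.h>).
snd_pcm_sframes_t written = snd_pcm_writei(pcmPlaybackHandle, AudioData, Audio_BufferSize);
if (written == -EPIPE) {
    snd_pcm_prepare(pcmPlaybackHandle); // recover from the underrun, then retry once
    written = snd_pcm_writei(pcmPlaybackHandle, AudioData, Audio_BufferSize);
}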

Can't test the source code, but I think that the high-frequency clicks you hear are discontinuities in the sound wave. You have to ensure that the looping period (or buffer size) is a multiple of the wave period.
Check whether the first and last values of the buffer are almost the same (+/- 1, for example). Their distance determines the amplitude of the unwanted click.
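As an illustration (a sketch with hypothetical names, using the sample rate and buffer size from the question), the generated frequency can be snapped so that a whole number of periods fits into one buffer:
#include <cmath>

// Sketch: snap the requested frequency so that an integer number of
// sine periods fits exactly into one loop of the buffer; the first
// and last samples of the buffer then line up.
double SnapFrequency(double requestedHz, int framesPerBuffer, int sampleRate)
{
    double periods = std::round(requestedHz * framesPerBuffer / sampleRate);
    if (periods < 1.0) periods = 1.0;
    return periods * sampleRate / framesPerBuffer;
}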

Solved.
The buffer was played several times before it was filled with the data.
Stupid error in the code: a missing parenthesis --> audio_buffersize/2 <--
and therefore if (iCurrentAudioSample == 0) was very often true!
iCurrentAudioSample = (int)(generalAudioSample % (audio_buffersize / 2));
if (iCurrentAudioSample == 0)
{
    writetoaudioStream(audiobuffer);
}

Related

rd_kafka_offsets_for_times() returns error -186 (Local: Invalid argument or configuration.)

I'm trying to get the offset (high watermark offset) for a partition using rd_kafka_offsets_for_times().
Here is a code snippet:
rd_kafka_topic_partition_t* pt0 = rd_kafka_topic_partition_new(topic, 0);
pt0->offset = 0;
pt0->metadata = 0;
pt0->metadata_size = 0;
pt0->opaque = 0;
pt0->err = 0;
pt0->_private = 0;
rd_kafka_topic_partition_list_t* partition_list = rd_kafka_topic_partition_list_new(1);
partition_list->elems = pt0;
rd_kafka_resp_err_t offsets_err = rd_kafka_offsets_for_times(rk, partition_list, 5000);
if (offsets_err != RD_KAFKA_RESP_ERR_NO_ERROR)
{
printf("ERROR: Failed to get offsets: %d: %s.\n", offsets_err,
rd_kafka_err2str(offsets_err));
}
else
{
printf("Successfully got offset.\n");
}
When I run it, I get this error:
ERROR: Failed to get offsets: -186: Local: Invalid argument or configuration.
How do I correct my code to be able to get the offset for the partition?
You need to use rd_kafka_topic_partition_list_add to add a topic partition to the topic partition list. For example:
rd_kafka_topic_partition_list_t* partition_list = rd_kafka_topic_partition_list_new(1);
rd_kafka_topic_partition_t* pt0 = rd_kafka_topic_partition_list_add(partition_list, topic, 0);
pt0->offset = ~0; // set to max integer value
rd_kafka_resp_err_t offsets_err = rd_kafka_offsets_for_times(rk, partition_list, 10000);
if (offsets_err != RD_KAFKA_RESP_ERR_NO_ERROR)
{
printf("ERROR: Failed to get offsets: %d: %s.\n", offsets_err, rd_kafka_err2str(offsets_err));
}
else
{
printf("Successfully got high watermark offset %d.\n", pt0->offset);
}
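As a side note (standard librdkafka cleanup, not shown in the original answer), the list should be destroyed when you are done; this also frees the elements created by rd_kafka_topic_partition_list_add:
// Frees the list together with every partition element it owns.
rd_kafka_topic_partition_list_destroy(partition_list);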

More efficient way for reading CAN data in while loop

I have 3 devices which send 8 bytes of data over a CAN interface. To read the buffer from CAN, I am using a while loop which looks something like this:
void CanServer::ReadFromCAN() {
data_from_buffer_.clear();
can_frame frame;
read_can_port_ = read(soc_, &frame, sizeof(struct can_frame));
if (read_can_port_ < 0) return;
id_ = frame.can_id&0x1FFFFFFF;
dlc_ = frame.can_dlc;
for (const auto& byte : frame.data)
data_from_buffer_.push_back(byte);
}
while (ros::ok()) {
    std_msgs::Int32MultiArray tachometer_array;
    std::vector<__u8> data_from_can;
    /***
     * Read for the Radar1
     */
    this->ReadFromCAN();
    if (read_can_port_ < 0) continue;
    //ROS_INFO("Read from CAN");
    if (id_ == can_id::RadarFrame1) {
        for (int i = 0; i < dlc_; i++) {
            radar1_bytes_[i] = data_from_buffer_[i];
            radar1_buffer_.push_back(data_from_buffer_[i]);
        }
        if (IsMagicWord(radar1_bytes_, 0)) {
            frame_id = "radar1_link";
            this->PulbishRadarPCL(frame_id, radar1_pub_, radar1_buffer_, 0);
            radar1_buffer_.clear();
            canFrame_.can_dlc = 0;
        }
    }
    if (id_ == can_id::RadarFrame2) {
        for (int i = 0; i < dlc_; i++) {
            radar2_bytes_[i] = data_from_buffer_[i];
            radar2_buffer_.push_back(data_from_buffer_[i]);
        }
        if (IsMagicWord(radar2_bytes_, 1)) {
            frame_id = "radar2_link";
            this->PulbishRadarPCL(frame_id, radar2_pub_, radar2_buffer_, 1);
            radar2_buffer_.clear();
            canFrame_.can_dlc = 0;
        }
    }
    if (id_ == can_id::RadarFrame3) {
        for (int i = 0; i < dlc_; i++) {
            radar3_bytes_[i] = data_from_buffer_[i];
            radar3_buffer_.push_back(data_from_buffer_[i]);
        }
        if (IsMagicWord(radar3_bytes_, 2)) {
            frame_id = "radar3_link";
            this->PulbishRadarPCL(frame_id, radar3_pub_, radar3_buffer_, 2);
            radar3_buffer_.clear();
            canFrame_.can_dlc = 0;
        }
    }
    rate.sleep();
}
where rate.sleep() is similar to the sleep() function in C++.
Right now I am running this while loop at 5 MHz; however, I think this is overkill, and I am getting almost 100% CPU usage on one core.
I tried playing around with the delay time, but I think this is highly inefficient, and I wonder: is there any other way to handle this?
It turns out that poll is what you need. Here is my example.
First, create a pollfd structure from the <poll.h> header on Linux. I decided to make it a class member, but you can create it however you like:
pollfd poll_;
poll_.fd = soc_;
poll_.events = POLLIN;
poll_.revents = 0;
Here, soc_ is the socket and POLLIN means that you want to read from it.
Then, in my while loop, instead of delaying, I just call this at the beginning of each iteration:
poll_int = poll(&poll_, 1, 100);
if (poll_int <= 0) continue;
poll() returns the number of descriptors that are ready (1 here, if the socket is readable), and I set a timeout of 100 ms (just a round number; I know the data arrive at a much higher rate).
With that, you will only read data from the socket whenever poll returns a value greater than 0.
Results? 3% CPU usage. And if more data flows through the socket, poll keeps up, so this is a scalable way of reading something like a CAN bus.
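Put together, a minimal self-contained sketch of the same pattern (assuming a SocketCAN descriptor soc, as in the question):
#include <poll.h>
#include <unistd.h>
#include <linux/can.h>

// Sketch: block until the CAN socket is readable (or 100 ms pass),
// then read a single frame. Returns true only on a complete frame.
bool ReadFrameWithPoll(int soc, can_frame &frame)
{
    pollfd pfd;
    pfd.fd = soc;
    pfd.events = POLLIN; // wake when data is available to read
    pfd.revents = 0;
    int ready = poll(&pfd, 1, 100); // 100 ms timeout
    if (ready <= 0)
        return false; // timeout (0) or error (-1)
    return read(soc, &frame, sizeof(frame)) == sizeof(frame);
}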

How is it possible to get only capture or playback devices using pjsua2

I am trying to get the devices from pjsua2. I managed to get all devices, but I could not split them into capture devices and playback devices.
void AudioController::load(){
Endpoint ep;
ep.libCreate();
// Initialize endpoint
EpConfig ep_cfg;
ep.libInit( ep_cfg );
AudDevManager &manager = ep.audDevManager();
manager.refreshDevs();
this->input.clear();
const AudioDevInfoVector &list = manager.enumDev();
for (unsigned int i = 0; i != list.size(); i++) {
AudioDevInfo * info = list[i];
GtAudioDevice * a = new GtAudioDevice();
a->name = info->name.c_str();
a->deviceId = i;
qDebug() << info->name.c_str();
qDebug() << info->driver.c_str();
qDebug() << info->caps;
this->input.append(a);
}
ep.libDestroy();
}
This is my output:
Wave mapper
WMME
23
Microfone (Dispositivo de High
WMME
3
Alto-falantes (Dispositivo de H
WMME
21
You can check the fields inputCount and outputCount inside AudioDevInfo.
According to the documentation:
unsigned inputCount
Maximum number of input channels supported by this device. If the
value is zero, the device does not support input operation (i.e. it is
a playback only device).
And
unsigned outputCount
Maximum number of output channels supported by this device. If the
value is zero, the device does not support output operation (i.e. it
is an input only device).
So you could do something like this:
for (unsigned int i = 0; i != list.size(); i++) {
AudioDevInfo * info = list[i];
GtAudioDevice * a = new GtAudioDevice();
a->name = info->name.c_str();
a->deviceId = i;
if (info->inputCount > 0) {
a->captureDevice = true;
}
if (info->outputCount > 0) {
a->playbackDevice = true;
}
this->input.append(a);
}
Reference: http://www.pjsip.org/pjsip/docs/html/structpj_1_1AudioDevInfo.htm
Another way: you can check the caps (capabilities) field. Something like this:
for (unsigned int i = 0; i < list.size(); i++)
{
    AudioDevInfo *info = list[i];
    if ((info->caps & PJMEDIA_AUD_DEV_CAP_OUTPUT_LATENCY) != 0)
    {
        // Playback devices come here
    }
    if ((info->caps & PJMEDIA_AUD_DEV_CAP_INPUT_LATENCY) != 0)
    {
        // Capture devices come here
    }
}
caps is combined from these possible values:
enum pjmedia_aud_dev_cap {
    PJMEDIA_AUD_DEV_CAP_EXT_FORMAT = 1,
    PJMEDIA_AUD_DEV_CAP_INPUT_LATENCY = 2,
    PJMEDIA_AUD_DEV_CAP_OUTPUT_LATENCY = 4,
    PJMEDIA_AUD_DEV_CAP_INPUT_VOLUME_SETTING = 8,
    PJMEDIA_AUD_DEV_CAP_OUTPUT_VOLUME_SETTING = 16,
    PJMEDIA_AUD_DEV_CAP_INPUT_SIGNAL_METER = 32,
    PJMEDIA_AUD_DEV_CAP_OUTPUT_SIGNAL_METER = 64,
    PJMEDIA_AUD_DEV_CAP_INPUT_ROUTE = 128,
    PJMEDIA_AUD_DEV_CAP_INPUT_SOURCE = 128,
    PJMEDIA_AUD_DEV_CAP_OUTPUT_ROUTE = 256,
    PJMEDIA_AUD_DEV_CAP_EC = 512,
    PJMEDIA_AUD_DEV_CAP_EC_TAIL = 1024,
    PJMEDIA_AUD_DEV_CAP_VAD = 2048,
    PJMEDIA_AUD_DEV_CAP_CNG = 4096,
    PJMEDIA_AUD_DEV_CAP_PLC = 8192,
    PJMEDIA_AUD_DEV_CAP_MAX = 16384
};

Pops / clicks when stopping and starting DirectX sound synth in C++ / MFC

I have made a soft synthesizer in Visual Studio 2012 with C++, MFC and DirectX. Despite having added code to rapidly fade out the sound, I am experiencing popping / clicking when stopping playback (and also when starting).
I copied the DirectX code from this project: http://www.codeproject.com/Articles/7474/Sound-Generator-How-to-create-alien-sounds-using-m
I'm not sure if I'm allowed to cut and paste all the code from the Code Project. Basically I use the Player class from that project as is, the instance of this class is called m_player in my code. The Stop member function in that class calls the Stop function of LPDIRECTSOUNDBUFFER:
void Player::Stop()
{
DWORD status;
if (m_lpDSBuffer == NULL)
return;
HRESULT hres = m_lpDSBuffer->GetStatus(&status);
if (FAILED(hres))
EXCEP(DirectSoundErr::GetErrDesc(hres), "Player::Stop GetStatus");
if ((status & DSBSTATUS_PLAYING) == DSBSTATUS_PLAYING)
{
hres = m_lpDSBuffer->Stop();
if (FAILED(hres))
EXCEP(DirectSoundErr::GetErrDesc(hres), "Player::Stop Stop");
}
}
Here is the notification code (with some supporting code) in my project that fills the sound buffer. Note that the rend function always returns a double between -1 and 1, m_ev_smps = 441, m_n_evs = 3 and m_ev_sz = 882. subInit is called from OnInitDialog:
#define FD_STEP 0.0005
#define SC_NOT_PLYD 0
#define SC_PLYNG 1
#define SC_FD_OUT 2
#define SC_FD_IN 3
#define SC_STPNG 4
#define SC_STPD 5
bool CMainDlg::subInit()
// initialises various variables and the sound player
{
Player *pPlayer;
SOUNDFORMAT format;
std::vector<DWORD> events;
int t, buf_sz;
try
{
pPlayer = new Player();
pPlayer->SetHWnd(m_hWnd);
m_player = pPlayer;
m_player->Init();
format.NbBitsPerSample = 16;
format.NbChannels = 1;
format.SamplingRate = 44100;
m_ev_smps = 441;
m_n_evs = 3;
m_smps = new short[m_ev_smps];
m_smp_scale = (int)pow(2, format.NbBitsPerSample - 1);
m_max_tm = (int)((double)m_ev_smps * 1000 / format.SamplingRate); // ms of audio per event
m_ev_sz = m_ev_smps * format.NbBitsPerSample/8;
buf_sz = m_ev_sz * m_n_evs;
m_player->CreateSoundBuffer(format, buf_sz, 0);
m_player->SetSoundEventListener(this);
for(t = 0; t < m_n_evs; t++)
events.push_back((int)((t + 1)*m_ev_sz - m_ev_sz * 0.95));
m_player->CreateEventReadNotification(events);
m_status = SC_NOT_PLYD;
}
catch(MATExceptions &e)
{
MessageBox(e.getAllExceptionStr().c_str(), "Error initializing the sound player");
EndDialog(IDCANCEL);
return FALSE;
}
return TRUE;
}
void CMainDlg::Stop()
// stop playing
{
m_player->Stop();
m_status = SC_STPD;
}
void CMainDlg::OnBnClickedStop()
// causes fade out
{
m_status = SC_FD_OUT;
}
void CMainDlg::OnSoundPlayerNotify(int ev_num)
// render some sound samples and check for errors
{
ScopeGuardMutex guard(&m_mutex);
int s, end, begin, elapsed;
if (m_status != SC_STPNG)
{
begin = GetTickCount();
try
{
for(s = 0; s < m_ev_smps; s++)
{
m_smps[s] = (int)(m_synth->rend() * 32768 * m_fade);
if (m_status == SC_FD_IN)
{
m_fade += FD_STEP;
if (m_fade > 1)
{
m_fade = 1;
m_status = SC_PLYNG;
}
}
else if (m_status == SC_FD_OUT)
{
m_fade -= FD_STEP;
if (m_fade < 0)
{
m_fade = 0;
m_status = SC_STPNG;
}
}
}
}
catch(MATExceptions &e)
{
OutputDebugString(e.getAllExceptionStr().c_str());
}
try
{
m_player->Write(((ev_num + 1) % m_n_evs)*m_ev_sz, (unsigned char*)m_smps, m_ev_sz);
}
catch(MATExceptions &e)
{
OutputDebugString(e.getAllExceptionStr().c_str());
}
end = GetTickCount();
elapsed = end - begin;
if(elapsed > m_max_tm)
m_warn_msg.Format(_T("Warning! compute time: %dms"), elapsed);
else
m_warn_msg.Format(_T("compute time: %dms"), elapsed);
}
if (m_status == SC_STPNG)
Stop();
}
It seems like the buffer does not always play out fully when the stop button is clicked. I don't have any specific code that waits for the sound buffer to finish playing before the DirectX Stop is called. Other than that, sound playback works just fine, so at least I am initialising the player correctly, and the notification code is working in that respect.
Try replacing 32768 with 32767. Not by any means sure this is your issue, but it could overflow the positive short int range (assuming your audio is 16-bit) and cause a "pop".
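To illustrate the point (a sketch, not from the original answer): scaling by 32767 and clamping keeps a full-scale positive sample from wrapping around:
// Sketch: convert a [-1, 1] double sample to a 16-bit short without
// overflowing SHRT_MAX (32767) on positive full-scale input.
inline short ToSample16(double v, double fade)
{
    double scaled = v * fade * 32767.0;
    if (scaled > 32767.0) scaled = 32767.0;
    if (scaled < -32768.0) scaled = -32768.0;
    return (short)scaled;
}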
I got rid of the pops / clicks when stopping playback by filling the buffer with zeros after the fade-out. However, I still get pops when re-starting playback, despite filling with zeros and then fading back in (it is frustrating).
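For reference, a sketch of that zero-fill (reusing the buffer names from the question; the exact placement is an assumption):
// Sketch: once the fade-out completes, write one event's worth of
// silence so the hardware never replays stale samples.
memset(m_smps, 0, m_ev_smps * sizeof(short));
m_player->Write(((ev_num + 1) % m_n_evs) * m_ev_sz, (unsigned char*)m_smps, m_ev_sz);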

How to write a Live555 FramedSource to allow me to stream H.264 live

I've been trying to write a class that derives from FramedSource in Live555 that will allow me to stream live data from my D3D9 application to an MP4 or similar.
What I do each frame is grab the backbuffer into system memory as a texture, then convert it from RGB -> YUV420P, then encode it using x264, then ideally pass the NAL packets on to Live555. I made a class called H264FramedSource that derived from FramedSource basically by copying the DeviceSource file. Instead of the input being an input file, I've made it a NAL packet which I update each frame.
I'm quite new to codecs and streaming, so I could be doing everything completely wrong. In each doGetNextFrame(), should I be grabbing the NAL packet and doing something like
memcpy(fTo, nal->p_payload, nal->i_payload)
I assume that the payload is my frame data in bytes? If anybody has an example of a class derived from FramedSource that is at least close to what I'm trying to do, I would love to see it; this is all new to me and a little tricky to figure out. Live555's documentation is pretty much the code itself, which doesn't exactly make it easy.
Ok, I finally got some time to spend on this and got it working! I'm sure there are others who will be begging to know how to do it so here it is.
You will need your own FramedSource to take each frame, encode, and prepare it for streaming, I will provide some of the source code for this soon.
Essentially, feed your FramedSource into an H264VideoStreamDiscreteFramer, then feed that into the H264RTPSink. Something like this:
scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);
framedSource = H264FramedSource::createNew(*env, 0, 0);
h264VideoStreamDiscreteFramer = H264VideoStreamDiscreteFramer::createNew(*env, framedSource);
// initialise the RTP Sink stuff here, look at
// testH264VideoStreamer.cpp to find out how
videoSink->startPlaying(*h264VideoStreamDiscreteFramer, NULL, videoSink);
env->taskScheduler().doEventLoop();
Now, in your main render loop, hand the backbuffer you saved to system memory over to your FramedSource so it can be encoded, etc. For more info on how to set up the encoding, check out this answer: How does one encode a series of images into H264 using the x264 C API?
My implementation is very much in a hacky state and is yet to be optimised at all; my D3D application runs at around 15 fps due to the encoding, ouch, so I will have to look into that. But for all intents and purposes this StackOverflow question is answered, because I was mostly after how to stream it. I hope this helps other people.
As for my FramedSource it looks a little something like this
concurrent_queue<x264_nal_t> m_queue;
SwsContext* convertCtx;
x264_param_t param;
x264_t* encoder;
x264_picture_t pic_in, pic_out;
EventTriggerId H264FramedSource::eventTriggerId = 0;
unsigned H264FramedSource::FrameSize = 0;
unsigned H264FramedSource::referenceCount = 0;
int W = 720;
int H = 960;
H264FramedSource* H264FramedSource::createNew(UsageEnvironment& env,
unsigned preferredFrameSize,
unsigned playTimePerFrame)
{
return new H264FramedSource(env, preferredFrameSize, playTimePerFrame);
}
H264FramedSource::H264FramedSource(UsageEnvironment& env,
unsigned preferredFrameSize,
unsigned playTimePerFrame)
: FramedSource(env),
fPreferredFrameSize(fMaxSize),
fPlayTimePerFrame(playTimePerFrame),
fLastPlayTime(0),
fCurIndex(0)
{
if (referenceCount == 0)
{
}
++referenceCount;
x264_param_default_preset(&param, "veryfast", "zerolatency");
param.i_threads = 1;
param.i_width = 720;
param.i_height = 960;
param.i_fps_num = 60;
param.i_fps_den = 1;
// Intra refres:
param.i_keyint_max = 60;
param.b_intra_refresh = 1;
//Rate control:
param.rc.i_rc_method = X264_RC_CRF;
param.rc.f_rf_constant = 25;
param.rc.f_rf_constant_max = 35;
param.i_sps_id = 7;
//For streaming:
param.b_repeat_headers = 1;
param.b_annexb = 1;
x264_param_apply_profile(&param, "baseline");
encoder = x264_encoder_open(&param);
pic_in.i_type = X264_TYPE_AUTO;
pic_in.i_qpplus1 = 0;
pic_in.img.i_csp = X264_CSP_I420;
pic_in.img.i_plane = 3;
x264_picture_alloc(&pic_in, X264_CSP_I420, 720, 960);
convertCtx = sws_getContext(720, 960, PIX_FMT_RGB24, 720, 960, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
if (eventTriggerId == 0)
{
eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
}
}
H264FramedSource::~H264FramedSource()
{
--referenceCount;
if (referenceCount == 0)
{
// Reclaim our 'event trigger'
envir().taskScheduler().deleteEventTrigger(eventTriggerId);
eventTriggerId = 0;
}
}
void H264FramedSource::AddToBuffer(uint8_t* buf, int surfaceSizeInBytes)
{
uint8_t* surfaceData = (new uint8_t[surfaceSizeInBytes]);
memcpy(surfaceData, buf, surfaceSizeInBytes);
int srcstride = W*3;
sws_scale(convertCtx, &surfaceData, &srcstride,0, H, pic_in.img.plane, pic_in.img.i_stride);
x264_nal_t* nals = NULL;
int i_nals = 0;
int frame_size = -1;
frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);
static bool finished = false;
if (frame_size >= 0)
{
static bool alreadydone = false;
if(!alreadydone)
{
x264_encoder_headers(encoder, &nals, &i_nals);
alreadydone = true;
}
for(int i = 0; i < i_nals; ++i)
{
m_queue.push(nals[i]);
}
}
delete [] surfaceData;
surfaceData = NULL;
envir().taskScheduler().triggerEvent(eventTriggerId, this);
}
void H264FramedSource::doGetNextFrame()
{
deliverFrame();
}
void H264FramedSource::deliverFrame0(void* clientData)
{
((H264FramedSource*)clientData)->deliverFrame();
}
void H264FramedSource::deliverFrame()
{
x264_nal_t nalToDeliver;
if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0) {
if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
// This is the first frame, so use the current time:
gettimeofday(&fPresentationTime, NULL);
} else {
// Increment by the play time of the previous data:
unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
fPresentationTime.tv_sec += uSeconds/1000000;
fPresentationTime.tv_usec = uSeconds%1000000;
}
// Remember the play time of this data:
fLastPlayTime = (fPlayTimePerFrame*fFrameSize)/fPreferredFrameSize;
fDurationInMicroseconds = fLastPlayTime;
} else {
// We don't know a specific play time duration for this data,
// so just record the current time as being the 'presentation time':
gettimeofday(&fPresentationTime, NULL);
}
if(!m_queue.empty())
{
m_queue.wait_and_pop(nalToDeliver);
uint8_t* newFrameDataStart = (uint8_t*)(nalToDeliver.p_payload);
unsigned newFrameSize = nalToDeliver.i_payload;
// Deliver the data here:
if (newFrameSize > fMaxSize) {
fFrameSize = fMaxSize;
fNumTruncatedBytes = newFrameSize - fMaxSize;
}
else {
fFrameSize = newFrameSize;
}
memcpy(fTo, newFrameDataStart, fFrameSize); // copy at most fMaxSize bytes (fFrameSize was clamped above)
FramedSource::afterGetting(this);
}
}
Oh, and for those who want to know what my concurrent queue is, here it is, and it works brilliantly: http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
Enjoy and good luck!
The deliverFrame method lacks the following check at its start:
if (!isCurrentlyAwaitingData()) return;
See DeviceSource.cpp in the LIVE library.
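In other words, a sketch of the placement:
void H264FramedSource::deliverFrame()
{
    if (!isCurrentlyAwaitingData()) return; // the sink has not asked for data yet
    x264_nal_t nalToDeliver;
    // ... rest of the method unchanged
}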