NVIDIA GeForce GTX 1660 Ti graphics card. I am using the Video Codec SDK samples (video-sdk-samples) for HEVC encoding with buffer format NV_ENC_BUFFER_FORMAT_YUV444. Encoding at 1920x1080 and 3840x2160 succeeds, but at 2560x1440 encoding fails with NV_ENC_ERR_INVALID_PARAM. Where does the problem lie?
void NvEncoder::DoEncode(NV_ENC_INPUT_PTR inputBuffer, std::vector<bool> &vKeyFrame, std::vector<std::vector<uint8_t>> &vPacket, NV_ENC_PIC_PARAMS *pPicParams)
{
NV_ENC_PIC_PARAMS picParams = {};
if (pPicParams)
{
picParams = *pPicParams;
}
picParams.version = NV_ENC_PIC_PARAMS_VER;
picParams.pictureStruct = NV_ENC_PIC_STRUCT_FRAME;
picParams.inputBuffer = inputBuffer;
picParams.bufferFmt = GetPixelFormat();
picParams.inputWidth = GetEncodeWidth();
picParams.inputHeight = GetEncodeHeight();
picParams.outputBitstream = m_vBitstreamOutputBuffer[m_iToSend % m_nEncoderBuffer];
picParams.completionEvent = m_vpCompletionEvent[m_iToSend % m_nEncoderBuffer];
NVENCSTATUS nvStatus = m_nvenc.nvEncEncodePicture(m_hEncoder, &picParams);
if (nvStatus == NV_ENC_SUCCESS || nvStatus == NV_ENC_ERR_NEED_MORE_INPUT)
{
m_iToSend++;
GetEncodedPacket(m_vBitstreamOutputBuffer, vKeyFrame, vPacket, false);
}
else
{
printf("picParams.inputWidth = %d,picParams.inputHeight = %d \n", picParams.inputWidth, picParams.inputHeight);
NVENC_THROW_ERROR("nvEncEncodePicture API failed", nvStatus);
}
}
Error:
nvEncEncodePicture returns NV_ENC_ERR_INVALID_PARAM
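For reference (my own diagnostic sketch, not part of the original post), one way to narrow an NV_ENC_ERR_INVALID_PARAM down is to query the HEVC capability values through the same API function list before encoding, to rule out width/height or YUV444 limits. This assumes it runs inside the NvEncoder sample class, where m_nvenc and m_hEncoder are the members used in DoEncode above:
// Hedged sketch: query a few HEVC capability values via the NVENC function list.
NV_ENC_CAPS_PARAM capsParam = { NV_ENC_CAPS_PARAM_VER };
int maxWidth = 0, maxHeight = 0, yuv444 = 0;
capsParam.capsToQuery = NV_ENC_CAPS_WIDTH_MAX;
m_nvenc.nvEncGetEncodeCaps(m_hEncoder, NV_ENC_CODEC_HEVC_GUID, &capsParam, &maxWidth);
capsParam.capsToQuery = NV_ENC_CAPS_HEIGHT_MAX;
m_nvenc.nvEncGetEncodeCaps(m_hEncoder, NV_ENC_CODEC_HEVC_GUID, &capsParam, &maxHeight);
capsParam.capsToQuery = NV_ENC_CAPS_SUPPORT_YUV444_ENCODE;
m_nvenc.nvEncGetEncodeCaps(m_hEncoder, NV_ENC_CODEC_HEVC_GUID, &capsParam, &yuv444);
printf("HEVC caps: max %d x %d, YUV444 support = %d\n", maxWidth, maxHeight, yuv444);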
My preset:
m_stEncodeConfig.encodeCodecConfig.hevcConfig.sliceMode = 3u;
m_stEncodeConfig.encodeCodecConfig.hevcConfig.sliceModeData = (uint32_t)m_stEncodeStreamInfo.nMaxSliceNum; //4
m_stCreateEncodeParams.reportSliceOffsets = 1;
m_stCreateEncodeParams.enableSubFrameWrite = 1;
Code that processes the output:
NV_ENC_LOCK_BITSTREAM lockBitstreamData;
memset(&lockBitstreamData, 0, sizeof(lockBitstreamData));
lockBitstreamData.version = NV_ENC_LOCK_BITSTREAM_VER;
lockBitstreamData.outputBitstream = pEncodeBuffer->stOutputBfr.hBitstreamBuffer;
lockBitstreamData.doNotWait = 1u;
std::vector<uint32_t> arrSliceOffset(m_stEncodeConfig.encodeCodecConfig.hevcConfig.sliceModeData);
lockBitstreamData.sliceOffsets = arrSliceOffset.data();
while (true)
{
NVENCSTATUS status = m_pEncodeAPI->nvEncLockBitstream(m_hEncoder, &lockBitstreamData);
auto tick = int(std::chrono::steady_clock::now().time_since_epoch().count() / 1000000);
if (status == NVENCSTATUS::NV_ENC_SUCCESS)
{
if (lockBitstreamData.hwEncodeStatus == 2)
{
static std::ofstream of("slice.h265", std::ios::trunc | std::ios::binary);
of.write((char*)lockBitstreamData.bitstreamBufferPtr, lockBitstreamData.bitstreamSizeInBytes);
of.flush();
break;
}
NVENCAPI_CALL_CHECK(m_pEncodeAPI->nvEncUnlockBitstream(m_hEncoder, lockBitstreamData.outputBitstream));
}
else
{
break;
}
}
Playing the bitstream:
ffplay -i slice.h265
Output: Packet corrupt
arrSliceOffset[0] is always 255.
Watching the memory in the VS debugger and comparing with enableSubFrameWrite = 0, bitstreamSizeInBytes is smaller than the amount of valid data. Is this a bug, or am I missing some detail? Can anybody tell me how to use enableSubFrameWrite correctly?
I am currently working with iOS and Objective-C++ for the first time. I'm coming from C/C++, so please excuse my rough coding in the examples below.
I am trying to live-stream the microphone audio of my iOS device over TCP; the iOS device acts as a server and sends the data to all clients that connect.
To do so, I first use AVCaptureDevice and requestAccessForMediaType:AVMediaTypeAudio to request access to the microphone (along with the required entry in Info.plist).
Then I create an AVCaptureSession* using the function below:
AVCaptureSession* createBasicARecordingSession(aReceiver* ObjectReceivingAudioFrames){
AVCaptureSession* s = [[AVCaptureSession alloc] init];
AVCaptureDevice* aDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput* aInput = NULL;
if([aDevice lockForConfiguration:NULL] == YES && aDevice){
aInput = [AVCaptureDeviceInput deviceInputWithDevice:aDevice error:nil];
[aDevice unlockForConfiguration];
}
else if(!aDevice){
fprintf(stderr, "[d] could not create device. (%p)\n", aDevice);
return NULL;
}
else{
fprintf(stderr, "[d] could not lock device.\n");
return NULL;
}
if(!aInput){
fprintf(stderr, "[d] could not create input.\n");
return NULL;
}
AVCaptureAudioDataOutput* aOutput = [[AVCaptureAudioDataOutput alloc] init];
dispatch_queue_t aQueue = dispatch_queue_create("aQueue", NULL);
if(!aOutput){
fprintf(stderr, "[d] could not create output.\n");
return NULL;
}
[aOutput setSampleBufferDelegate:ObjectReceivingAudioFrames queue:aQueue];
// the below line does only work on macOS
//aOutput.audioSettings = settings;
[s beginConfiguration];
if([s canAddInput:aInput]){
[s addInput:aInput];
}
else{
fprintf(stderr, "[d] could not add input.\n");
return NULL;
}
if([s canAddOutput:aOutput]){
[s addOutput:aOutput];
}
else{
fprintf(stderr, "[d] could not add output.\n");
return NULL;
}
[s commitConfiguration];
return s;
}
The aReceiver* class receives the audio frames provided by the AVCaptureAudioDataOutput* object and stores them in a std::vector.
(I'm adding that code as an image because I could not get it formatted right...)
Then I start the AVCaptureSession* using [audioSession startRunning].
When a TCP client connects, I first create an AudioConverterRef and two AudioStreamBasicDescriptions to convert the audio frames to AAC; see below:
AudioStreamBasicDescription asbdIn, asbdOut;
AudioConverterRef converter;
asbdIn.mFormatID = kAudioFormatLinearPCM;
//asbdIn.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbdIn.mFormatFlags = 12;
asbdIn.mSampleRate = 44100;
asbdIn.mChannelsPerFrame = 1;
asbdIn.mFramesPerPacket = 1;
asbdIn.mBitsPerChannel = 16;
//asbdIn.mBytesPerFrame = (asbdIn.mBitsPerChannel / 8) * asbdIn.mBitsPerChannel;
asbdIn.mBytesPerFrame = 2;
asbdIn.mBytesPerPacket = asbdIn.mBytesPerFrame;
asbdIn.mReserved = 0;
asbdOut.mFormatID = kAudioFormatMPEG4AAC;
asbdOut.mFormatFlags = 0;
asbdOut.mSampleRate = 44100;
asbdOut.mChannelsPerFrame = 1;
asbdOut.mFramesPerPacket = 1024;
asbdOut.mBitsPerChannel = 0;
//asbdOut.mBytesPerFrame = (asbdOut.mBitsPerChannel / 8) * asbdOut.mBitsPerChannel;
asbdOut.mBytesPerFrame = 0;
asbdOut.mBytesPerPacket = asbdOut.mBytesPerFrame;
asbdOut.mReserved = 0;
OSStatus err = AudioConverterNew(&asbdIn, &asbdOut, &converter);
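Not part of the original code, but for what it's worth: once the converter exists, the AAC output bitrate can be set explicitly (otherwise the converter picks a default). A small hedged example, with 64000 chosen arbitrarily:
UInt32 outputBitrate = 64000; // bits per second; example value, adjust as needed
err = AudioConverterSetProperty(converter, kAudioConverterEncodeBitRate,
                                sizeof(outputBitrate), &outputBitrate);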
Then I create an AudioBufferList* to store the encoded frames:
while(audioInput.locked){ // audioInput is my aReceiver*
usleep(0.2 * 1000000);
}
audioInput.locked = true;
UInt32 RequestedPackets = 8192;
//AudioBufferList* aBufferList = (AudioBufferList*)malloc(sizeof(AudioBufferList));
AudioBufferList* aBufferList = static_cast<AudioBufferList*>(calloc(1, offsetof(AudioBufferList, mBuffers) + (sizeof(AudioBuffer) * 1)));
aBufferList->mNumberBuffers = 1;
aBufferList->mBuffers[0].mNumberChannels = asbdIn.mChannelsPerFrame;
aBufferList->mBuffers[0].mData = static_cast<void*>(calloc(RequestedPackets, asbdIn.mBytesPerFrame));
aBufferList->mBuffers[0].mDataByteSize = asbdIn.mBytesPerFrame * RequestedPackets;
Then I iterate over the frames stored in the std::vector mentioned earlier and pass them to AudioConverterFillComplexBuffer(). After conversion, I concatenate all encoded frames into one NSMutableData, which I then write() to the socket connected to the client.
long aBufferListSize = audioInput.aBufferList.size();
while(aBufferListSize > 0){
err = AudioConverterFillComplexBuffer(converter, feedAFrames, static_cast<void*>(&audioInput.aBufferList[audioInput.aBufferList.size() - aBufferListSize]), &RequestedPackets, aBufferList, NULL);
NSMutableData* encodedData = [[NSMutableData alloc] init];
long encodedDataLen = 0;
for(int i = 0; i < aBufferList->mNumberBuffers; i++){
Float32* frame = (Float32*)aBufferList->mBuffers[i].mData;
[encodedData appendBytes:frame length:aBufferList->mBuffers[i].mDataByteSize];
encodedDataLen += aBufferList->mBuffers[i].mDataByteSize;
}
unsigned char* encodedDataBytes = (unsigned char*)[encodedData bytes];
fprintf(stderr, "[d] got %li encoded bytes to send...\n", encodedDataLen);
long bytes = write(Client->GetFD(), encodedDataBytes, encodedDataLen);
fprintf(stderr, "[d] written %li of %li bytes.\n", bytes, encodedDataLen);
usleep(0.2 * 1000000);
aBufferListSize--;
}
audioInput.aBufferList.clear();
audioInput.locked = false;
Below is the feedAFrames() callback used in the AudioConverterFillComplexBuffer() call:
(again, this is an image of the code, for the same reason as above)
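Since that callback is only shown as an image, here is a hedged sketch of what an AudioConverterComplexInputDataProc of this shape typically looks like; the InputFrame struct and its fields are hypothetical stand-ins for whatever the std::vector actually stores:
#include <AudioToolbox/AudioToolbox.h>
// Hypothetical container for one captured PCM chunk.
struct InputFrame {
    void*  data;         // interleaved 16-bit mono PCM
    UInt32 byteSize;     // size of data in bytes
    UInt32 packetCount;  // byteSize / asbdIn.mBytesPerPacket
};
static OSStatus feedAFrames(AudioConverterRef inConverter,
                            UInt32* ioNumberDataPackets,
                            AudioBufferList* ioData,
                            AudioStreamPacketDescription** outPacketDescription,
                            void* inUserData)
{
    InputFrame* frame = static_cast<InputFrame*>(inUserData);
    if (frame == NULL || frame->packetCount == 0) {
        *ioNumberDataPackets = 0;
        return 1; // any non-zero status means "no more input right now"
    }
    // Hand the PCM to the converter; the memory must stay valid until the
    // converter asks for more data or AudioConverterFillComplexBuffer returns.
    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0].mNumberChannels = 1;
    ioData->mBuffers[0].mData = frame->data;
    ioData->mBuffers[0].mDataByteSize = frame->byteSize;
    *ioNumberDataPackets = frame->packetCount;
    if (outPacketDescription != NULL)
        *outPacketDescription = NULL; // PCM input needs no packet descriptions
    frame->packetCount = 0; // consumed
    return noErr;
}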
Steps 5 to 7 are repeated until the TCP connection is closed.
Each step runs without any noticeable error (I know I could include far better error handling here), and I do get data out of steps 3 and 7. However, what comes out at the end does not seem to be AAC.
As I'm rather new to all of this, I'm really not sure where my error is; I'm sure there are several things I got wrong. It is hard to find suitable example code for what I'm trying to do, and the above is the best I could come up with so far from what I found, combined with the Apple developer documentation.
I hope someone will take some time to explain what I did wrong and how I can get this to work. Thanks for reading this far!
I'm using this code to encode a video stream with VP8, and I decided to give VP9 a try, so I changed everything that referenced vp8 to vp9.
But the VP9 encoder always returns a null packet, although it doesn't report any error.
Here is the code I'm using for configuration:
vpx_codec_err_t error = vpx_codec_enc_config_default(vpx_codec_vp9_cx(), &enc_cfg, 0);
if(error != VPX_CODEC_OK)
return error;
enc_cfg.g_timebase.den = fps;
enc_cfg.rc_undershoot_pct = 95;
enc_cfg.rc_target_bitrate = bitrate;
enc_cfg.g_error_resilient = 1;
enc_cfg.kf_max_dist = 999999;
enc_cfg.rc_buf_initial_sz = 4000;
enc_cfg.rc_buf_sz = 6000;
enc_cfg.rc_buf_optimal_sz = 5000;
enc_cfg.rc_end_usage = VPX_CBR;
enc_cfg.g_h = height;
enc_cfg.g_w = width;
enc_cfg.rc_min_quantizer = 4;
enc_cfg.rc_max_quantizer = 56;
enc_cfg.g_threads = 4;
enc_cfg.g_pass = VPX_RC_ONE_PASS;
error = vpx_codec_enc_init(&codec, vpx_codec_vp9_cx(), &enc_cfg, 0);
if(error != VPX_CODEC_OK)
return error;
vpx_img_alloc(&vpx_image,VPX_IMG_FMT_I420 , width, height, 1);
configured = true;
return VPX_CODEC_OK;
And the code for encoding:
libyuv::RAWToI420(frame, vpx_image.d_w * 3, vpx_image.planes[VPX_PLANE_Y],vpx_image.stride[VPX_PLANE_Y],
vpx_image.planes[VPX_PLANE_U], vpx_image.stride[VPX_PLANE_U], vpx_image.planes[VPX_PLANE_V],
vpx_image.stride[VPX_PLANE_V], vpx_image.d_w, vpx_image.d_h);
const vpx_codec_cx_pkt_t *pkt;
vpx_codec_err_t error = vpx_codec_encode(&codec, &vpx_image, 0, 1, 0, VPX_DL_GOOD_QUALITY);
if(error != VPX_CODEC_OK)
return vector<byte>();
vpx_codec_iter_t iter = NULL;
if((pkt = vpx_codec_get_cx_data(&codec, &iter)))//always return null ?
{
if(pkt->kind == VPX_CODEC_CX_FRAME_PKT)
{
int length = pkt->data.frame.sz;
byte* buf = (byte*) pkt->data.frame.buf;
vector<byte> data(buf, buf + length);
return data;
}
return vector<byte>();
}
return vector<byte>();
The code works fully if I use VP8 instead of VP9. Any help is welcome.
I just came across this post because I faced the same problem. For others who hit it: I solved it by setting
enc_cfg.g_lag_in_frames = 0;
By default the encoder is allowed to buffer (lag) up to 25 frames before it produces any output; setting this to 0 disables that behaviour.
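For reference, in the configuration code above the assignment would go right before the vpx_codec_enc_init() call. Alternatively, the lag can be kept and any buffered frames drained at end of stream by encoding a NULL image; a minimal sketch of that flush loop (my addition, not from the original answer):
/* Option 1: disable the lag before initialising the encoder */
enc_cfg.g_lag_in_frames = 0;
error = vpx_codec_enc_init(&codec, vpx_codec_vp9_cx(), &enc_cfg, 0);

/* Option 2: keep the lag and flush at end of stream by passing a NULL image
 * until the encoder stops producing packets. */
for (;;) {
    if (vpx_codec_encode(&codec, NULL, 0, 1, 0, VPX_DL_GOOD_QUALITY) != VPX_CODEC_OK)
        break;
    vpx_codec_iter_t iter = NULL;
    const vpx_codec_cx_pkt_t* pkt;
    bool got_packet = false;
    while ((pkt = vpx_codec_get_cx_data(&codec, &iter)) != NULL) {
        got_packet = true;
        if (pkt->kind == VPX_CODEC_CX_FRAME_PKT) {
            /* append pkt->data.frame.buf (pkt->data.frame.sz bytes) to the output */
        }
    }
    if (!got_packet)
        break; /* encoder fully drained */
}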
I've been trying to write a class derived from FramedSource in Live555 that will allow me to stream live data from my D3D9 application to an MP4 or similar.
What I do each frame is grab the backbuffer into system memory as a texture, convert it from RGB to YUV420P, encode it with x264, and then ideally pass the NAL packets on to Live555. I made a class called H264FramedSource derived from FramedSource, basically by copying the DeviceSource file. Instead of the input being an input file, it is a NAL packet which I update each frame.
I'm quite new to codecs and streaming, so I could be doing everything completely wrong. In each doGetNextFrame(), should I be grabbing the NAL packet and doing something like
memcpy(fTo, nal->p_payload, nal->i_payload)
I assume the payload is my frame data in bytes? If anybody has an example of a class derived from FramedSource that is at least close to what I'm trying to do, I would love to see it; this is all new to me and a little tricky to figure out what's happening. Live555's documentation is pretty much the code itself, which doesn't exactly make it easy to follow.
OK, I finally got some time to spend on this and got it working! I'm sure there are others who will want to know how to do it, so here it is.
You will need your own FramedSource to take each frame, encode it, and prepare it for streaming; I will provide some of the source code for this soon.
Essentially, feed your FramedSource into an H264VideoStreamDiscreteFramer, then feed that into an H264RTPSink. Something like this:
scheduler = BasicTaskScheduler::createNew();
env = BasicUsageEnvironment::createNew(*scheduler);
framedSource = H264FramedSource::createNew(*env, 0,0);
h264VideoStreamDiscreteFramer
= H264VideoStreamDiscreteFramer::createNew(*env, framedSource);
// initialise the RTP Sink stuff here, look at
// testH264VideoStreamer.cpp to find out how
videoSink->startPlaying(*h264VideoStreamDiscreteFramer, NULL, videoSink);
env->taskScheduler().doEventLoop();
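The comment above defers the RTP sink setup to testH264VideoStreamer.cpp; condensed from that demo, the sink creation looks roughly like the sketch below (multicast address, ports, TTL and payload type 96 are the demo's defaults, not anything required here, and newer live555 releases use a slightly different address type):
// Condensed sketch of the "initialise the RTP Sink stuff" step,
// following live555's testH264VideoStreamer.cpp.
struct in_addr destinationAddress;
destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);
const Port rtpPort(18888);
const unsigned char ttl = 255;
Groupsock* rtpGroupsock = new Groupsock(*env, destinationAddress, rtpPort, ttl);
rtpGroupsock->multicastSendOnly(); // needed for source-specific multicast
OutPacketBuffer::maxSize = 100000; // room for large H.264 NAL units
RTPSink* videoSink = H264VideoRTPSink::createNew(*env, rtpGroupsock, 96);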
Now, in your main render loop, hand the backbuffer you've saved to system memory over to your FramedSource so it can be encoded, etc. For more info on how to set up the encoding, check out this answer: How does one encode a series of images into H264 using the x264 C API?
My implementation is still very much in a hacky state and hasn't been optimised at all; my D3D application runs at around 15 fps due to the encoding, ouch, so I will have to look into that. But for all intents and purposes this StackOverflow question is answered, because I was mostly after how to stream it. I hope this helps other people.
As for my FramedSource, it looks a little something like this:
concurrent_queue<x264_nal_t> m_queue;
SwsContext* convertCtx;
x264_param_t param;
x264_t* encoder;
x264_picture_t pic_in, pic_out;
EventTriggerId H264FramedSource::eventTriggerId = 0;
unsigned H264FramedSource::FrameSize = 0;
unsigned H264FramedSource::referenceCount = 0;
int W = 720;
int H = 960;
H264FramedSource* H264FramedSource::createNew(UsageEnvironment& env,
unsigned preferredFrameSize,
unsigned playTimePerFrame)
{
return new H264FramedSource(env, preferredFrameSize, playTimePerFrame);
}
H264FramedSource::H264FramedSource(UsageEnvironment& env,
unsigned preferredFrameSize,
unsigned playTimePerFrame)
: FramedSource(env),
fPreferredFrameSize(fMaxSize),
fPlayTimePerFrame(playTimePerFrame),
fLastPlayTime(0),
fCurIndex(0)
{
if (referenceCount == 0)
{
}
++referenceCount;
x264_param_default_preset(&param, "veryfast", "zerolatency");
param.i_threads = 1;
param.i_width = 720;
param.i_height = 960;
param.i_fps_num = 60;
param.i_fps_den = 1;
// Intra refres:
param.i_keyint_max = 60;
param.b_intra_refresh = 1;
//Rate control:
param.rc.i_rc_method = X264_RC_CRF;
param.rc.f_rf_constant = 25;
param.rc.f_rf_constant_max = 35;
param.i_sps_id = 7;
//For streaming:
param.b_repeat_headers = 1;
param.b_annexb = 1;
x264_param_apply_profile(&param, "baseline");
encoder = x264_encoder_open(&param);
pic_in.i_type = X264_TYPE_AUTO;
pic_in.i_qpplus1 = 0;
pic_in.img.i_csp = X264_CSP_I420;
pic_in.img.i_plane = 3;
x264_picture_alloc(&pic_in, X264_CSP_I420, W, H);
convertCtx = sws_getContext(W, H, PIX_FMT_RGB24, W, H, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
if (eventTriggerId == 0)
{
eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
}
}
H264FramedSource::~H264FramedSource()
{
--referenceCount;
if (referenceCount == 0)
{
// Reclaim our 'event trigger'
envir().taskScheduler().deleteEventTrigger(eventTriggerId);
eventTriggerId = 0;
}
}
void H264FramedSource::AddToBuffer(uint8_t* buf, int surfaceSizeInBytes)
{
uint8_t* surfaceData = (new uint8_t[surfaceSizeInBytes]);
memcpy(surfaceData, buf, surfaceSizeInBytes);
int srcstride = W*3;
sws_scale(convertCtx, &surfaceData, &srcstride,0, H, pic_in.img.plane, pic_in.img.i_stride);
x264_nal_t* nals = NULL;
int i_nals = 0;
int frame_size = -1;
frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);
static bool finished = false;
if (frame_size >= 0)
{
static bool alreadydone = false;
if(!alreadydone)
{
x264_encoder_headers(encoder, &nals, &i_nals);
alreadydone = true;
}
for(int i = 0; i < i_nals; ++i)
{
m_queue.push(nals[i]);
}
}
delete [] surfaceData;
surfaceData = NULL;
envir().taskScheduler().triggerEvent(eventTriggerId, this);
}
void H264FramedSource::doGetNextFrame()
{
deliverFrame();
}
void H264FramedSource::deliverFrame0(void* clientData)
{
((H264FramedSource*)clientData)->deliverFrame();
}
void H264FramedSource::deliverFrame()
{
x264_nal_t nalToDeliver;
if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0) {
if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
// This is the first frame, so use the current time:
gettimeofday(&fPresentationTime, NULL);
} else {
// Increment by the play time of the previous data:
unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
fPresentationTime.tv_sec += uSeconds/1000000;
fPresentationTime.tv_usec = uSeconds%1000000;
}
// Remember the play time of this data:
fLastPlayTime = (fPlayTimePerFrame*fFrameSize)/fPreferredFrameSize;
fDurationInMicroseconds = fLastPlayTime;
} else {
// We don't know a specific play time duration for this data,
// so just record the current time as being the 'presentation time':
gettimeofday(&fPresentationTime, NULL);
}
if(!m_queue.empty())
{
m_queue.wait_and_pop(nalToDeliver);
uint8_t* newFrameDataStart = (uint8_t*)0xD15EA5E;
newFrameDataStart = (uint8_t*)(nalToDeliver.p_payload);
unsigned newFrameSize = nalToDeliver.i_payload;
// Deliver the data here:
if (newFrameSize > fMaxSize) {
fFrameSize = fMaxSize;
fNumTruncatedBytes = newFrameSize - fMaxSize;
}
else {
fFrameSize = newFrameSize;
}
memcpy(fTo, nalToDeliver.p_payload, nalToDeliver.i_payload);
FramedSource::afterGetting(this);
}
}
Oh, and for those who want to know what my concurrent queue is, here it is, and it works brilliantly: http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
Enjoy and good luck!
The deliverFrame method lacks the following check at its start:
if (!isCurrentlyAwaitingData()) return;
See DeviceSource.cpp in the LIVE555 sources.
I'm writing an application which plays a sound obtained from hardware (like a ring buffer filled with a sine wave of a certain frequency). Everything works fine and I can play back the generated sound correctly, except for a periodic clicking (maybe at the end of the buffer?) and some noise.
I initialize and run the buffer:
void Audiooutput::InitializeAudioParameters()
{
Audio_DataWritten = 0;
Audio_fragments = 4;
Audio_channels = 2;
Audio_BufferSize = 256;
Audio_Samplerate = 8000;
Audio_ResamplingFactor = 1;
Audio_Framesize = 2;
// (SND_PCM_FORMAT_S16_LE / 8);
Audio_frames = Audio_BufferSize / Audio_Framesize * Audio_fragments;
snd_pcm_uframes_t size;
err = snd_pcm_hw_params_any(pcmPlaybackHandle, hw_params);
err = snd_pcm_hw_params_set_rate_resample(pcmPlaybackHandle, hw_params, 1);
// qDebug()<<a1.sprintf(" % d \t snd_pcm_hw_params_set_rate: %s",Audio_Samplerate,snd_strerror(err));
err =
snd_pcm_hw_params_set_format(pcmPlaybackHandle, hw_params,
SND_PCM_FORMAT_S16_LE);
err =
snd_pcm_hw_params_set_channels(pcmPlaybackHandle, hw_params,
Audio_channels);
err = snd_pcm_hw_params_set_rate_near(pcmPlaybackHandle, hw_params, &Audio_Samplerate, 0);
// qDebug()<<a1.sprintf(" % d \t snd_pcm_hw_params_set_rate: %s",Audio_Samplerate,snd_strerror(err));
if ((err =
snd_pcm_hw_params_set_periods_near(pcmPlaybackHandle, hw_params,
&Audio_fragments, 0)) < 0) {
qDebug() << a1.sprintf("Error setting # fragments to %d: %s\n",
Audio_fragments, snd_strerror(err));
} else
qDebug() << a1.sprintf("setting # fragments to %d: %s\n",
Audio_fragments, snd_strerror(err));
err = snd_pcm_hw_params_get_buffer_size(hw_params, &size);
if ((err =
snd_pcm_hw_params_set_buffer_size_near(pcmPlaybackHandle,
hw_params,
&Audio_frames)) < 0) {
qDebug() << a1.
sprintf("Error setting buffer_size %d frames: %s",
Audio_frames, snd_strerror(err));
} else
qDebug() << a1.sprintf("setting Buffersize to %d --> %d: %s\n",
Audio_BufferSize, Audio_frames,
snd_strerror(err));
Audio_BufferSize = Audio_frames;
if ((err = snd_pcm_hw_params(pcmPlaybackHandle, hw_params)) < 0) {
qDebug() << a1.sprintf("Error setting HW params: %s",
snd_strerror(err));
}
Q_ASSERT(err >= 0);
}
void Audiooutput::ProduceAudioOutput(int n, int mmodes, int totalMModeGates,
short *sinusValue, short *cosinusValue)
{
for (int audioSample = 0; audioSample < n;
audioSample += Audio_ResamplingFactor) {
currentposition =
(int)(m_Audio.generalPos % (Audio_BufferSize / 2));
if (currentposition == 0) {
QueueAudioBuffer();
m_Audio.currentPos = 0;
}
m_Audio.generalPos++;
AudioData[currentposition * 2] =
(short)(sinusValue[audioSample]);
AudioData[currentposition * 2 + 1] =
(short)(cosinusValue[audioSample]);
}
}
void Audiooutput::QueueAudioBuffer()
{
snd_pcm_prepare(pcmPlaybackHandle);
Audio_DataWritten +=
snd_pcm_writei(pcmPlaybackHandle, AudioData, Audio_BufferSize);
}
Changing the audio buffer size or the number of fragments also changes the clicking period.
Can anyone help me with this issue?
I also checked the first and last values; they are always different.
OS: Ubuntu 11
More detail:
The count of received data is dynamic and changes depending on different parameters, but I always play a fixed chunk, e.g. 128, 256, or 512 values.
// I get the Audiodata from a hardware (in a Timerloop)
audiobuffersize = 256;
short *AudioData = new short[256];
int generalAudioSample = 0;
void CollectDataFromHw()
{
...
int n = 0;
n = GetData(buf1,buf2);//buf1 = new short[MAX_SHRT]
if(n > 0)
FillAudioBuffer(n,buf1,buf2)
...
}
-------------------------------------------
void FillAudioBuffer(int n, short*buf1, short*buf2)
{
for(int audioSample = 0;audioSample < n; audioSample++){
iCurrentAudioSample = (int)(generalAudioSample % (audiobuffersize/2));
if(iCurrentAudioSample == 0) {
snd_pcm_writei(pcmPlaybackHandle,AudioData,audiobuffersize );
memset(AudioData,0x00,audiobuffersize*sizeof(short));
}
generalAudioSample++;
AudioData[iCurrentAudioSample * 2] = (short)(buf1[audioSample]);
AudioData[iCurrentAudioSample * 2 + 1] = (short)(buf2[audioSample]);
}
}
I changed the audiobuffersize as well. If I set it to a bigger size, I get some echo in addition to the clicks.
Any idea?
//-----------------------
The problem is
snd_pcm_prepare(pcmPlaybackHandle);
Every call of this function produces a click in the sound!
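Not from the original post, but for context: snd_pcm_prepare() resets the PCM device, so calling it before every write restarts the stream audibly. A common pattern, sketched here assuming the handle is already configured as above, is to write directly and only re-prepare when an underrun (-EPIPE) is reported:
snd_pcm_sframes_t written = snd_pcm_writei(pcmPlaybackHandle, AudioData, Audio_BufferSize);
if (written == -EPIPE) {
    // underrun: only now re-prepare the device, then retry the write
    snd_pcm_prepare(pcmPlaybackHandle);
    written = snd_pcm_writei(pcmPlaybackHandle, AudioData, Audio_BufferSize);
} else if (written < 0) {
    qDebug() << "snd_pcm_writei failed:" << snd_strerror(written);
}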
I can't test the source code, but I think the high-frequency clicks you hear are discontinuities in the sound wave. You have to ensure that the looping period (or buffer size) is a multiple of the wave period.
Check whether the first and last values of the buffer are almost the same (+/- 1, for example). Their distance determines the amplitude of the unwanted click.
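As a sketch of that check (the helper name and the divisibility test are my illustration, not from the answer): a buffer of N frames at sample rate R holds a whole number of periods of an f Hz sine exactly when N*f is divisible by R, so looping it produces no jump:
// Hypothetical helper: true if `frames` samples at `rate` Hz contain a whole
// number of periods of a `freq` Hz sine, so the loop point is continuous.
static bool bufferIsPeriodAligned(unsigned frames, unsigned rate, unsigned freq)
{
    return (static_cast<unsigned long long>(frames) * freq) % rate == 0;
}
// Example: 256 frames at 8000 Hz with a 1000 Hz sine -> 256*1000 % 8000 == 0, no click.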
Solved.
The buffer was being played several times before it was filled with data, due to a stupid error in the code: a missing parenthesis around --> audio_buffersize/2 <--
and therefore if(iCurrentAudioSample == 0) was true far too often!
iCurrentAudioSample = (int)(generalAudioSample % (audio_buffersize/2));
if(iCurrentAudioSample == 0)
{
writetoaudioStream(audiobuffer);
}