process video stream from memory buffer - c++

I need to parse a video stream (MPEG-TS) from a proprietary network protocol (which I already know how to do), and then I would like to use OpenCV to process the video stream into frames. I know how to use cv::VideoCapture from a file or from a standard URL, but I would like to set up OpenCV to read from a buffer (or buffers) in memory where I can store the video stream data until it is needed. Is there a way to set up a callback method (or any other interface) so that I can still use the cv::VideoCapture object? Is there a better way to accomplish processing the video without writing it out to a file and then re-reading it? I would also entertain using FFmpeg directly if that is a better choice. I think I can convert AVFrames to Mat if needed.
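For the AVFrame-to-Mat conversion mentioned at the end of the question, here is a minimal sketch using libswscale; the helper name is my own, it assumes a reasonably recent FFmpeg (AV_PIX_FMT_* naming) and a decoded frame plus its codec context, and BGR24 matches OpenCV's default channel order:

extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}
#include <opencv2/core/core.hpp>

// Sketch only: convert a decoded AVFrame to a 3-channel BGR cv::Mat.
cv::Mat avframe_to_mat(const AVFrame *frame, const AVCodecContext *codec_ctx)
{
    SwsContext *sws = sws_getContext(codec_ctx->width, codec_ctx->height, codec_ctx->pix_fmt,
                                     codec_ctx->width, codec_ctx->height, AV_PIX_FMT_BGR24,
                                     SWS_BILINEAR, NULL, NULL, NULL);
    cv::Mat mat(codec_ctx->height, codec_ctx->width, CV_8UC3);
    uint8_t *dst_data[4] = { mat.data, NULL, NULL, NULL };
    int dst_linesize[4] = { static_cast<int>(mat.step), 0, 0, 0 };
    sws_scale(sws, frame->data, frame->linesize, 0, codec_ctx->height, dst_data, dst_linesize);
    sws_freeContext(sws);
    return mat;
}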

I had a similar need recently. I was looking for a way in OpenCV to play a video that was already in memory, but without ever having to write the video file to disk. I found out that the FFMPEG interface already supports this through av_open_input_stream. There is just a little more prep work required compared to the av_open_input_file call used in OpenCV to open a file.
Between the following two websites I was able to piece together a working solution using the ffmpeg calls. Please refer to the information on these websites for more details:
http://ffmpeg.arrozcru.org/forum/viewtopic.php?f=8&t=1170
http://cdry.wordpress.com/2009/09/09/using-custom-io-callbacks-with-ffmpeg/
To get it working in OpenCV, I ended up adding a new function to the CvCapture_FFMPEG class:
virtual bool openBuffer( unsigned char* pBuffer, unsigned int bufLen );
I provided access to it through a new API call in the highgui DLL, similar to cvCreateFileCapture. The new openBuffer function is basically the same as the open( const char* _filename ) function with the following difference:
err = av_open_input_file(&ic, _filename, NULL, 0, NULL);
is replaced by:
ic = avformat_alloc_context();
// Args: internal I/O buffer and its size, write_flag = 0 (read-only),
// the opaque pointer handed to the callbacks (here the source buffer itself),
// then the read, write and seek callbacks.
ic->pb = avio_alloc_context(pBuffer, bufLen, 0, pBuffer, read_buffer, NULL, NULL);
if (!ic->pb) {
    // handle error
}

// Need to probe buffer for input format unless you already know it
AVProbeData probe_data;
probe_data.buf_size = (bufLen < 4096) ? bufLen : 4096;
probe_data.filename = "stream";
probe_data.buf = (unsigned char *) malloc(probe_data.buf_size);
memcpy(probe_data.buf, pBuffer, probe_data.buf_size);

// The second argument is the is_opened flag; trying both covers demuxers
// probed with and without an opened file.
AVInputFormat *pAVInputFormat = av_probe_input_format(&probe_data, 1);
if (!pAVInputFormat)
    pAVInputFormat = av_probe_input_format(&probe_data, 0);

// cleanup
free(probe_data.buf);
probe_data.buf = NULL;

if (!pAVInputFormat) {
    // handle error
}

pAVInputFormat->flags |= AVFMT_NOFILE;
err = av_open_input_stream(&ic, ic->pb, "stream", pAVInputFormat, NULL);
Also, make sure to call av_close_input_stream in the CvCapture_FFMPEG::close() function instead of av_close_input_file in this situation.
I defined the read_buffer callback function that is passed to avio_alloc_context as:
static int read_buffer(void *opaque, uint8_t *buf, int buf_size)
{
    // This function must fill buf with data and return the number of bytes copied.
    // opaque is the pointer passed as the 4th parameter to avio_alloc_context.
    // Note: this naive version always copies from the start of the source buffer,
    // which is only adequate when the whole stream fits in the very first read.
    memcpy(buf, opaque, buf_size);
    return buf_size;
}
This solution assumes the entire video is contained in a memory buffer and would probably have to be tweaked to work with streaming data.
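For streaming, or any case where FFmpeg invokes the callback more than once, a sketch of an offset-tracking variant; the BufferData struct is purely illustrative, and its address would be passed as the opaque (4th) argument to avio_alloc_context instead of pBuffer:

struct BufferData {
    const uint8_t *base;  // start of the in-memory video data
    size_t size;          // total number of bytes available
    size_t offset;        // current read position, advanced on every call
};

static int read_buffer(void *opaque, uint8_t *buf, int buf_size)
{
    BufferData *bd = (BufferData *)opaque;
    size_t remaining = bd->size - bd->offset;
    if (remaining == 0)
        return AVERROR_EOF;  // tell FFmpeg the stream has ended
    size_t copy = (size_t)buf_size < remaining ? (size_t)buf_size : remaining;
    memcpy(buf, bd->base + bd->offset, copy);
    bd->offset += copy;
    return (int)copy;
}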
So that's it! Btw, I'm using OpenCV version 2.1 so YMMV.

Code to do something similar to the above, for OpenCV 4.2.0, is at:
https://github.com/jcdutton/opencv
Branch: 4.2.0-jcd1
Load the entire file into RAM, pointed to by buffer and of size buffer_size.
Sample code:
VideoCapture d_reader1;
d_reader1.open_buffer(buffer, buffer_size);
d_reader1.read(input1);
The above code reads the first frame of video.
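A sketch of reading the rest of the stream, assuming the fork's capture object behaves like a normal cv::VideoCapture once open_buffer() has succeeded:

cv::Mat frame;
while (d_reader1.read(frame)) {
    if (frame.empty())
        break;
    // process the frame here
}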

Related

WASAPI captured packets do not align

I'm trying to visualize a sound wave captured by WASAPI loopback, but I find that the packets I record do not form a smooth wave when put together.
My understanding of how the WASAPI capture client works is that when I call pCaptureClient->GetBuffer(&pData, &numFramesAvailable, &flags, NULL, NULL), the buffer pData is filled from the front with numFramesAvailable datapoints. Each datapoint is a float, and they alternate by channel. Thus, to get all available datapoints I should cast pData to a float pointer and take the first channels * numFramesAvailable values. Once I release the buffer and call GetBuffer again, it provides the next packet. I would assume that these packets follow on from each other, but that doesn't seem to be the case.
My guess is that either I'm making an incorrect assumption about the format of the audio data in pData, or the capture client is missing or overlapping frames. But I have no idea how to check either.
To make the code below as brief as possible I've removed things like error status checking and cleanup.
Initialization of capture client:
const CLSID CLSID_MMDeviceEnumerator = __uuidof(MMDeviceEnumerator);
const IID IID_IMMDeviceEnumerator = __uuidof(IMMDeviceEnumerator);
const IID IID_IAudioClient = __uuidof(IAudioClient);
const IID IID_IAudioCaptureClient = __uuidof(IAudioCaptureClient);

IMMDeviceEnumerator *pDeviceEnumerator = NULL;
IMMDevice *pDeviceEndpoint = NULL;
IAudioClient *pAudioClient = NULL;
IAudioCaptureClient *pCaptureClient = NULL;
int channels;

// Initialize audio device endpoint
CoInitialize(nullptr);
CoCreateInstance(CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL, IID_IMMDeviceEnumerator, (void**)&pDeviceEnumerator);
pDeviceEnumerator->GetDefaultAudioEndpoint(eRender, eConsole, &pDeviceEndpoint);

// Init audio client
WAVEFORMATEX *pwfx = NULL;
REFERENCE_TIME hnsRequestedDuration = 10000000;
REFERENCE_TIME hnsActualDuration;
pDeviceEndpoint->Activate(IID_IAudioClient, CLSCTX_ALL, NULL, (void**)&pAudioClient);
pAudioClient->GetMixFormat(&pwfx);
pAudioClient->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK, hnsRequestedDuration, 0, pwfx, NULL);
channels = pwfx->nChannels;
pAudioClient->GetService(IID_IAudioCaptureClient, (void**)&pCaptureClient);
pAudioClient->Start(); // Start recording.
Capture of packets (note that std::mutex packet_buffer_mutex and vector<vector<float>> packet_buffer are already defined and used by another thread to safely display the data):
UINT32 packetLength = 0;
BYTE *pData = NULL;
UINT32 numFramesAvailable;
DWORD flags;
int max_packets = 8;
std::unique_lock<std::mutex> write_guard(packet_buffer_mutex, std::defer_lock);

while (true) {
    pCaptureClient->GetNextPacketSize(&packetLength);
    while (packetLength != 0)
    {
        // Get the available data in the shared buffer.
        pData = NULL;
        pCaptureClient->GetBuffer(&pData, &numFramesAvailable, &flags, NULL, NULL);
        if (flags & AUDCLNT_BUFFERFLAGS_SILENT)
        {
            pData = NULL; // Tell CopyData to write silence.
        }

        write_guard.lock();
        if (packet_buffer.size() == max_packets) {
            packet_buffer.pop_back();
        }
        if (pData) {
            float *pfData = (float*)pData;
            packet_buffer.emplace(packet_buffer.begin(), pfData, pfData + channels * numFramesAvailable);
        } else {
            packet_buffer.emplace(packet_buffer.begin());
        }
        write_guard.unlock();

        pCaptureClient->ReleaseBuffer(numFramesAvailable);
        pCaptureClient->GetNextPacketSize(&packetLength);
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
I store the packets in a vector<vector<float>> (where each vector<float> is a packet), removing the last one and inserting the newest at the start so I can iterate over them in order.
Below is the result of a captured sine wave, plotting alternating values so it only represents a single channel. It is clear where the packets are being stitched together.
Something is playing a sine wave to Windows; you're recording the sine wave back in the audio loopback; and the sine wave you're getting back isn't really a sine wave.
You're almost certainly running into glitches. The most likely causes of glitching are:
Whatever is playing the sine wave to Windows isn't getting data to Windows in time, so the buffer is running dry.
Whatever is reading the loopback data out of Windows isn't reading the data in time, so the buffer is filling up.
Something is going wrong in between playing the sine wave to Windows and reading it back.
It is possible that more than one of these are happening.
The IAudioCaptureClient::GetBuffer call will tell you if you read the data too late. In particular it will set *pdwFlags so that the AUDCLNT_BUFFERFLAGS_DATA_DISCONTINUITY bit is set.
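For example, a minimal check using the variables from the capture loop in the question (the handling is left as a placeholder):

pCaptureClient->GetBuffer(&pData, &numFramesAvailable, &flags, NULL, NULL);
if (flags & AUDCLNT_BUFFERFLAGS_DATA_DISCONTINUITY) {
    // A glitch occurred between this packet and the previous one;
    // count or log it here so glitches can be matched to the visual artifacts.
}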
Looking at your code, I see you're doing the following things between GetBuffer and ReleaseBuffer:
Waiting on a lock
Sometimes doing something called "pop_back"
Doing something called "emplace"
I quote from the IAudioCaptureClient::GetBuffer documentation:
Clients should avoid excessive delays between the GetBuffer call that acquires a packet and the ReleaseBuffer call that releases the packet. The implementation of the audio engine assumes that the GetBuffer call and the corresponding ReleaseBuffer call occur within the same buffer-processing period. Clients that delay releasing a packet for more than one period risk losing sample data.
In particular you should NEVER DO ANY OF THE FOLLOWING between GetBuffer and ReleaseBuffer because eventually they will cause a glitch:
Wait on a lock
Wait on any other operation
Read from or write to a file
Allocate memory
Instead, pre-allocate a bunch of memory before calling IAudioClient::Start. As each buffer arrives, write to this memory. On the side, have a regularly scheduled work item that takes written memory and writes it to disk or whatever you're doing with it.
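A minimal sketch of that approach, assuming a single capture (producer) thread and a single display (consumer) thread; the ring size is an arbitrary illustration, not a WASAPI requirement:

#include <atomic>
#include <vector>

// Pre-allocated before IAudioClient::Start(); never resized afterwards.
static std::vector<float> ring(48000 * 2 * 10); // ~10 s of 48 kHz stereo
static std::atomic<size_t> writePos{0};

// Called between GetBuffer and ReleaseBuffer: copy only, no locks, no allocation.
void store_packet(const float *pfData, size_t frames, size_t channels)
{
    size_t count = frames * channels;
    size_t pos = writePos.load(std::memory_order_relaxed);
    for (size_t i = 0; i < count; ++i)
        ring[(pos + i) % ring.size()] = pfData[i];
    writePos.store(pos + count, std::memory_order_release);
}

The display thread can then load writePos with memory_order_acquire and read everything behind it, so the capture thread never waits.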

pjsip capture and play pcm data

I have some embedded devices that have no audio device by default. They communicate with each other via an FPGA. So my question is: how do I capture/play back audio from pjsip in PCM in order to send/receive it with the FPGA?
I know that there are pjmedia_mem_player_create() and pjmedia_mem_capture_create(), but I can't seem to find any good info on using these functions.
I tried the following piece of code, but an assertion failed because one of the function's parameters is empty.
Error:
pjmedia_mem_capture_create: Assertion `pool && buffer && size && clock_rate && channel_count && samples_per_frame && bits_per_sample && p_port' failed.
Note: I'm mainly using pjsua2 for everything else, like registrations and transports. Also, the default audio is set to null with ep.audDevManager().setNullDev(), as without this, making/receiving a call would simply fail?!
void MyCall::onCallMediaState(OnCallMediaStateParam &prm)
{
    CallInfo ci = getInfo();
    pj_caching_pool_init(&cp, &pj_pool_factory_default_policy, 0);
    pj_pool_t *pool = pj_pool_create(&cp.factory, "POOLNAME", 2000, 2000, NULL);
    void *buffer;            // note: never allocated; this is what trips the assertion
    pjmedia_port *prt;
#define CLOCK_RATE 8000
#define CHANNELS 1
#define SAMPLES_PER_FRAME 480
#define BITS_PER_SAMPLE 16
    pjmedia_mem_capture_create(pool,               // Pool
                               buffer,             // Buffer
                               2000,               // Buffer size
                               CLOCK_RATE,
                               CHANNELS,
                               SAMPLES_PER_FRAME,
                               BITS_PER_SAMPLE,
                               0,                  // Options
                               &prt);              // The returned port
}
UPDATE
The assertion failed because the buffer variable doesn't have any memory allocated to it. Allocate twice the number of samples per frame to have sufficient memory:
buffer = pj_pool_zalloc(pool, 960);
Also, a callback needs to be registered with pjmedia_mem_capture_set_eof_cb2() (the 2 at the end is necessary for PJSIP 2.10 or later). Apparently from there the buffer can be used. It's just that my implementation doesn't execute the callback at the moment.
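For reference, a sketch of what that registration might look like; the callback name is mine, and the signature follows the pjmedia docs, where the cb2 variant's callback returns void instead of pj_status_t:

/* Called when the capture buffer is full. */
static void on_capture_full(pjmedia_port *port, void *usr_data)
{
    /* usr_data is the buffer passed to pjmedia_mem_capture_create();
       it now holds one buffer's worth of PCM samples to forward to the FPGA. */
}

/* after pjmedia_mem_capture_create(): */
pjmedia_mem_capture_set_eof_cb2(prt, buffer, on_capture_full);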
Looks like I found the solution. I have modified your code and written a simple program in C with the pjsua API to dump every frame to a file. Sorry for the mess, I'm not proficient in C:
pjsua_call_info ci;
pjsua_call_get_info(call_id, &ci);

pjsua_conf_port_info cpi;
pjsua_conf_get_port_info(ci.conf_slot, &cpi);

pj_pool_t *pool = pjsua_pool_create("POOLNAME", 2000, 2000);
pjmedia_port *prt;
unsigned buf_size = cpi.bits_per_sample * cpi.samples_per_frame / 8;
void *buffer = pj_pool_zalloc(pool, buf_size);
pjsua_conf_port_id port_id;

pjmedia_mem_capture_create(pool,
                           buffer,
                           buf_size,
                           cpi.clock_rate,
                           cpi.channel_count,
                           cpi.samples_per_frame,
                           cpi.bits_per_sample,
                           0,
                           &prt);
pjmedia_mem_capture_set_eof_cb(prt, buffer, dump_incoming_frames);
pjsua_conf_add_port(pool, prt, &port_id);
pjsua_conf_connect(ci.conf_slot, port_id); // connect port with conference
/////// dumping frames ///////
static pj_status_t dump_incoming_frames(pjmedia_port *port, void *usr_data)
{
    pj_size_t buf_size = pjmedia_mem_capture_get_size(port);
    char *data = usr_data;
    ...
    fwrite(data, sizeof(data[0]), buf_size, fptr);
    ...
}
The documentation says pjmedia_mem_capture_set_eof_cb is deprecated, but I couldn't get pjmedia_mem_capture_set_eof_cb2 to work: buf_size was 0 on every call of dump_incoming_frames, so I stayed with the deprecated function. I also achieved the same result by creating a custom port.
I hope you can adapt it easily to your C++/pjsua2 code.
UPD:
I have modified PJSIP and packaged the audio in/out streaming into proper PJSUA2/Media classes so it can be called from Python. Full code is here.

C++/C FFmpeg artifact build up across video frames

Context:
I am building a recorder for capturing video and audio in separate threads (using Boost thread groups) using FFmpeg 2.8.6 on Ubuntu 16.04. I followed the demuxing_decoding example here: https://www.ffmpeg.org/doxygen/2.8/demuxing_decoding_8c-example.html
Video capture specifics:
I am reading H264 off a Logitech C920 webcam and writing the video to a raw file. The issue I notice with the video is that there seems to be a build-up of artifacts across frames until a particular frame resets them. Here are my frame-grabbing and decoding functions:
// Used for injecting decoding functions for different media types, allowing
// for a generic decode loop
typedef std::function<int(AVPacket*, int*, int)> PacketDecoder;

/**
 * Decodes a video packet.
 * If the decoding operation is successful, returns the number of bytes decoded,
 * else returns the result of the decoding process from ffmpeg
 */
int decode_video_packet(AVPacket *packet,
                        int *got_frame,
                        int cached)
{
    int ret = 0;
    int decoded = packet->size;
    *got_frame = 0;

    // Decode video frame
    ret = avcodec_decode_video2(video_decode_context,
                                video_frame, got_frame, packet);
    if (ret < 0) {
        // FFmpeg users should use av_err2str
        char errbuf[128];
        av_strerror(ret, errbuf, sizeof(errbuf));
        std::cerr << "Error decoding video frame " << errbuf << std::endl;
        decoded = ret;
    } else {
        if (*got_frame) {
            video_frame->pts = av_frame_get_best_effort_timestamp(video_frame);

            // Write to log file
            AVRational *time_base = &video_decode_context->time_base;
            log_frame(video_frame, time_base,
                      video_frame->coded_picture_number, video_log_stream);

#if( DEBUG )
            std::cout << "Video frame " << ( cached ? "(cached)" : "" )
                      << " coded:" << video_frame->coded_picture_number
                      << " pts:" << video_frame->pts << std::endl;
#endif

            /* Copy decoded frame to destination buffer:
             * this is required since rawvideo expects non-aligned data */
            av_image_copy(video_dest_attr.video_destination_data,
                          video_dest_attr.video_destination_linesize,
                          (const uint8_t **)(video_frame->data),
                          video_frame->linesize,
                          video_decode_context->pix_fmt,
                          video_decode_context->width,
                          video_decode_context->height);

            // Write to rawvideo file
            fwrite(video_dest_attr.video_destination_data[0],
                   1,
                   video_dest_attr.video_destination_bufsize,
                   video_out_file);

            // Unref the refcounted frame
            av_frame_unref(video_frame);
        }
    }
    return decoded;
}
/**
 * Grabs frames in a loop and decodes them using the specified decoding function
 */
int process_frames(AVFormatContext *context,
                   PacketDecoder packet_decoder)
{
    int ret = 0;
    int got_frame;
    AVPacket packet;

    // Initialize packet, set data to NULL, let the demuxer fill it
    av_init_packet(&packet);
    packet.data = NULL;
    packet.size = 0;

    // read frames from the file
    for (;;) {
        ret = av_read_frame(context, &packet);
        if (ret < 0) {
            if (ret == AVERROR(EAGAIN)) {
                continue;
            } else {
                break;
            }
        }

        // Convert timing fields to the decoder timebase
        unsigned int stream_index = packet.stream_index;
        av_packet_rescale_ts(&packet,
                             context->streams[stream_index]->time_base,
                             context->streams[stream_index]->codec->time_base);

        AVPacket orig_packet = packet;
        do {
            ret = packet_decoder(&packet, &got_frame, 0);
            if (ret < 0) {
                break;
            }
            packet.data += ret;
            packet.size -= ret;
        } while (packet.size > 0);
        av_free_packet(&orig_packet);

        if (stop_recording == true) {
            break;
        }
    }

    // Flush cached frames
    std::cout << "Flushing frames" << std::endl;
    packet.data = NULL;
    packet.size = 0;
    do {
        packet_decoder(&packet, &got_frame, 1);
    } while (got_frame);

    av_log(0, AV_LOG_INFO, "Done processing frames\n");
    return ret;
}
Questions:
How do I go about debugging the underlying issue?
Is it possible that running the decoding code in a thread other than the one in which the decoding context was opened is causing the problem?
Am I doing something wrong in the decoding code?
Things I have tried/found:
I found this thread that is about the same problem here: FFMPEG decoding artifacts between keyframes
(I cannot post samples of my corrupted frames due to privacy issues, but the image linked to in that question depicts the same issue I have)
However, the answer to the question is posted by the OP without specific details about how the issue was fixed. The OP only mentions that he wasn't 'preserving the packets correctly', but nothing about what was wrong or how to fix it. I do not have enough reputation to post a comment seeking clarification.
I was initially passing the packet into the decoding function by value, but switched to passing by pointer on the off chance that the packet freeing was being done incorrectly.
I found another question about debugging decoding issues, but couldn't find anything conclusive: How is video decoding corruption debugged?
I'd appreciate any insight. Thanks a lot!
[EDIT] In response to Ronald's answer, I am adding a little more information that wouldn't fit in a comment:
I am only calling decode_video_packet() from the thread processing video frames; the other thread processing audio frames calls a similar decode_audio_packet() function. So only one thread calls the function. I should mention that I have set the thread_count in the decoding context to 1, failing which I would get a segfault in malloc.c while flushing the cached frames.
I can see this being a problem if the process_frames and the frame decoder function were run on separate threads, which is not the case. Is there a specific reason why it would matter if the freeing is done within the function, or after it returns? I believe the freeing function is passed a copy of the original packet because multiple decode calls would be required for an audio packet in case the decoder doesn't decode the entire audio packet.
A general problem is that the corruption does not occur all the time. I can debug better if it is deterministic. Otherwise, I can't even say if a solution works or not.
A few things to check:
are you running multiple threads that are calling decode_video_packet()? If you are: don't do that! FFmpeg has built-in support for multi-threaded decoding, and you should let FFmpeg do threading internally and transparently.
you are calling av_free_packet() right after calling the frame decoder function, but at that point it may not yet have had a chance to copy the contents. You should probably let decode_video_packet() free the packet instead, after calling avcodec_decode_video2().
General debugging advice:
run it without any threading and see if that works;
if it does, and with threading it fails, use thread debuggers such as tsan or helgrind to help in finding race conditions that point to your code.
it can also help to know whether the output you're getting is reproducible (this suggests a non-threading-related bug in your code) or changes from one run to the other (this suggests a race condition in your code).
And yes, the periodic clean-ups are because of keyframes.

SDL2 & SMPEG2 - Empty sound buffer trying to read a MP3

I'm trying to load an MP3 into a buffer using the SMPEG2 library, which comes with SDL2. Every SMPEG function call returns without error, but when I'm done, the sound buffer is full of zeros.
Here's the code :
bool LoadMP3(char* filename)
{
bool success = false;
const Uint32 Mp3ChunkLen = 4096;
SMPEG* mp3;
SMPEG_Info infoMP3;
Uint8 * ChunkBuffer;
Uint32 MP3Length = 0;
// Allocate a chunk buffer
ChunkBuffer = (Uint8*)malloc(Mp3ChunkLen);
SDL_RWops *mp3File = SDL_RWFromFile(filename, "rb");
if (mp3File != NULL)
{
mp3 = SMPEG_new_rwops(mp3File, &infoMP3, 1, 0);
if(mp3 != NULL)
{
if(infoMP3.has_audio)
{
Uint32 readLen;
// Inform the MP3 of the output audio specifications
SMPEG_actualSpec(mp3, &asDeviceSpecs); // static SDL_AudioSpec asDeviceSpecs; containing valid values after a call to SDL_OpenAudioDevice
// Enable the audio and disable the video.
SMPEG_enableaudio(mp3, 1);
SMPEG_enablevideo(mp3, 0);
// Play the MP3 once to get the size of the needed finale buffer
SMPEG_play(mp3);
while ((readLen = SMPEG_playAudio(mp3, ChunkBuffer, Mp3ChunkLen)) > 0)
{
MP3Length += readLen;
}
SMPEG_stop(mp3);
if(MP3Length > 0)
{
// Reallocate the buffer with the new length (if needed)
if (MP3Length != Mp3ChunkLen)
{
ChunkBuffer = (Uint8*)realloc(ChunkBuffer, MP3Length);
}
// Replay the entire MP3 into the new ChunkBuffer.
SMPEG_rewind(mp3);
SMPEG_play(mp3);
bool readBackSuccess = (MP3Length == SMPEG_playAudio(mp3, ChunkBuffer, MP3Length));
SMPEG_stop(mp3);
if(readBackSuccess)
{
// !!! Here, ChunkBuffer contains only zeros !!!
success = true;
}
}
}
SMPEG_delete(mp3);
mp3 = NULL;
}
SDL_RWclose(mp3File);
mp3File = NULL;
}
free(ChunkBuffer);
return success;
}
The code's largely based on SDL_Mixer, which I cannot use for my project because of its limitations.
I know Ogg Vorbis would be a better choice of file format, but I'm porting a very old project, and it worked entirely with MP3s.
I'm sure the sound system is initialized correctly because I can play WAV files just fine. It's initialized with a frequency of 44100, 2 channels, 1024 samples, and the AUDIO_S16SYS format (the latter being, as I understood from the SMPEG source, mandatory).
I've calculated the anticipated buffer size, based on the bitrate, the amount of data in the MP3 and the OpenAudioDevice audio specs, and everything is consistent.
I cannot figure why everything but the buffer data seems to be working.
UPDATE #1
Still trying to figure out what's wrong, I thought the MP3 support itself might not be working, so I tried the following snippet:
SMPEG *mpeg;
SMPEG_Info info;
mpeg = SMPEG_new(filename,&info, 1);
SMPEG_play(mpeg);
do { SDL_Delay(50); } while(SMPEG_status(mpeg) == SMPEG_PLAYING);
SMPEG_delete(mpeg);
The MP3 played. So the decoding is actually working. But that's not what I need; I really need the sound buffer data so I can send it to my mixer.
After much tinkering, research, and digging through the SMPEG source code, I realized that I had to pass 1 as the sdl_audio parameter to the SMPEG_new_rwops function.
The comment found in smpeg.h is misleading :
The sdl_audio parameter indicates if SMPEG should initialize the SDL audio subsystem. If not, you will have to use the SMPEG_playaudio() function below to extract the decoded data.
Since the audio subsystem was already initialized and I was using the SMPEG_playAudio() function, I had no reason to think I needed this parameter to be non-zero. In SMPEG, this parameter triggers the audio decompression at opening time, but even though I called SMPEG_enableaudio(mp3, 1); afterwards, the data is never reparsed. This might be a bug or a shady feature.
I had another problem with the freesrc parameter which needed to be 0, since I freed the SDL_RWops object myself.
For future reference, once ChunkBuffer has the MP3 data, it needs to pass through SDL_BuildAudioCVT/SDL_ConvertAudio if it's to be played through an already opened audio device.
The final working code is :
// bool ReadMP3ToBuffer(char* filename)
bool success = false;
const Uint32 Mp3ChunkLen = 4096;
SDL_AudioSpec mp3Specs;
SMPEG* mp3;
SMPEG_Info infoMP3;
Uint8 * ChunkBuffer;
Uint32 MP3Length = 0;
// Allocate a chunk buffer
ChunkBuffer = (Uint8*)malloc(Mp3ChunkLen);
memset(ChunkBuffer, 0, Mp3ChunkLen);
SDL_RWops *mp3File = SDL_RWFromFile(filename, "rb"); // filename is a char* passed to the function.
if (mp3File != NULL)
{
mp3 = SMPEG_new_rwops(mp3File, &infoMP3, 0, 1);
if(mp3 != NULL)
{
if(infoMP3.has_audio)
{
Uint32 readLen;
// Get the MP3 audio specs for later conversion
SMPEG_wantedSpec(mp3, &mp3Specs);
SMPEG_enablevideo(mp3, 0);
// Play the MP3 once to get the size of the needed buffer in relation with the audio specs
SMPEG_play(mp3);
while ((readLen = SMPEG_playAudio(mp3, ChunkBuffer, Mp3ChunkLen)) > 0)
{
MP3Length += readLen;
}
SMPEG_stop(mp3);
if(MP3Length > 0)
{
// Reallocate the buffer with the new length (if needed)
if (MP3Length != Mp3ChunkLen)
{
ChunkBuffer = (Uint8*)realloc(ChunkBuffer, MP3Length);
memset(ChunkBuffer, 0, MP3Length);
}
// Replay the entire MP3 into the new ChunkBuffer.
SMPEG_rewind(mp3);
SMPEG_play(mp3);
bool readBackSuccess = (MP3Length == SMPEG_playAudio(mp3, ChunkBuffer, MP3Length));
SMPEG_stop(mp3);
if(readBackSuccess)
{
SDL_AudioCVT convertedSound;
// NOTE : static SDL_AudioSpec asDeviceSpecs; containing valid values after a call to SDL_OpenAudioDevice
if(SDL_BuildAudioCVT(&convertedSound, mp3Specs.format, mp3Specs.channels, mp3Specs.freq, asDeviceSpecs.format, asDeviceSpecs.channels, asDeviceSpecs.freq) >= 0)
{
Uint32 newBufferLen = MP3Length*convertedSound.len_mult;
// Make sure the audio length is a multiple of a sample size to avoid sound clicking
int sampleSize = ((asDeviceSpecs.format & 0xFF)/8)*asDeviceSpecs.channels;
newBufferLen &= ~(sampleSize-1);
// Allocate the new buffer and proceed with the actual conversion.
convertedSound.buf = (Uint8*)malloc(newBufferLen);
memcpy(convertedSound.buf, ChunkBuffer, MP3Length);
convertedSound.len = MP3Length;
if(SDL_ConvertAudio(&convertedSound) == 0)
{
// Save convertedSound.buf and convertedSound.len_cvt for future use in your mixer code.
// Dont forget to free convertedSound.buf once it's not used anymore.
success = true;
}
}
}
}
}
SMPEG_delete(mp3);
mp3 = NULL;
}
SDL_RWclose(mp3File);
mp3File = NULL;
}
free(ChunkBuffer);
return success;
NOTE: Some MP3 files I tried lost a few milliseconds and cut off too early during playback when I resampled them with this code. Some others didn't. I could reproduce the same behaviour in Audacity, so I'm not sure what's going on. There may still be a bug in my code, a bug in SMPEG, or it may be a known issue with the MP3 format itself. If someone can provide an explanation in the comments, that would be great!

How to read YUV8 data from avi file?

I have an AVI file that contains uncompressed gray video data, and I need to extract frames from it. The size of the file is 22 GB.
How do I do that?
I have already tried ffmpeg, but it gives me a "could not find codec parameters for video stream" message, since there is no codec at work, just raw frames.
Since OpenCV just uses ffmpeg to read video, that rules out OpenCV as well.
The only path that seems to be left is to try and dig into the raw data, but i do not know how.
Edit: this is the code I use to read from the file (via ffmpeg's libavformat API, which is what OpenCV uses internally). The failure occurs inside the second if. Running the ffmpeg binary on the file also fails with the message above (could not find codec parameters, etc.).
/* register all formats and codecs */
av_register_all();

/* open input file, and allocate format context */
if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
    fprintf(stderr, "Could not open source file %s\n", src_filename);
    ret = 1;
    goto end;
}
fmt_ctx->seek2any = true;

/* retrieve stream information */
int res = avformat_find_stream_info(fmt_ctx, NULL);
if (res < 0) {
    fprintf(stderr, "Could not find stream information\n");
    ret = 1;
    goto end;
}
Edit:
Here is the sample code I tried to make the extraction work: pastebin. The result I get is an unchanging buffer after every call to AVIStreamRead.
If you do not need cross-platform functionality, the Video for Windows (VFW) API is a good alternative (http://msdn.microsoft.com/en-us/library/windows/desktop/dd756808(v=vs.85).aspx). I won't include the entire program here since there's quite a lot to do, but you should be able to figure it out from the reference link. Basically, you do an AVIFileOpen, then get the video stream via AVIFileGetStream with streamtypeVIDEO (or alternatively do both at once with AVIStreamOpenFromFile), and then read samples from the stream with AVIStreamRead. If you get to a point where you fail, I can try to help, but it should be pretty straightforward.
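A minimal sketch of that sequence (the file name is a placeholder; HRESULT checks are omitted for brevity):

#include <windows.h>
#include <vfw.h>
#pragma comment(lib, "vfw32.lib")

AVIFileInit();                                       // must precede all other VFW calls
PAVIFILE pFile = NULL;
AVIFileOpen(&pFile, TEXT("video.avi"), OF_READ, NULL);

PAVISTREAM ppavi = NULL;
AVIFileGetStream(pFile, &ppavi, streamtypeVIDEO, 0); // first video stream

AVISTREAMINFO si;
AVIStreamInfo(ppavi, &si, sizeof(si));               // si.dwStart/si.dwLength bound the sample indexes

// ... read the samples with AVIStreamRead, as in the EDIT below ...

AVIStreamRelease(ppavi);
AVIFileRelease(pFile);
AVIFileExit();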
Also, I'm not sure why ffmpeg is failing; I have been doing raw AVI reading with ffmpeg without any codecs involved. Can you post which ffmpeg call actually fails?
EDIT:
For the issue that you are seeing when the read data size is 0: the AVI file has N slots for frames in each second, where N is the fps of the video. In real life the samples won't come at exactly that speed (e.g. IP surveillance cameras), so the actual data sample indexes can be non-continuous, like 1, 5, 11, ..., and VFW inserts empty samples between them (that is where you read a sample with a zero size). What you have to do is call AVIStreamRead with NULL as the buffer and 0 as the size until bRead is not 0, or you run past the last sample. When you get an actual size, you can then call AVIStreamRead on that sample index with the buffer pointer and size. I usually do compressed video so I don't use the suggested size, but at least according to your code snippet I would do something like this:
...
bRead = 0;
do
{
    aviOpRes = AVIStreamRead(ppavi, smpS, 1, NULL, 0, &bRead, &smpN);
} while (bRead == 0 && ++smpS < si.dwLength + si.dwStart);
if (smpS >= si.dwLength + si.dwStart)
    break;

PUCHAR tempBuffer = new UCHAR[bRead];
aviOpRes = AVIStreamRead(ppavi, smpS, 1, tempBuffer, bRead, &bRead, &smpN);
/* do whatever you need */
delete[] tempBuffer; // array delete, since it was allocated with new[]
...
EDIT 2:
Since this may come in handy to someone (or to you, to make a choice between VFW and FFMPEG), I also updated your FFMPEG example so that it parses the same file (sorry for the code quality since it lacks error checking, but I guess you can see the logical flow):
/* register all formats and codecs */
av_register_all();

AVFormatContext *fmt_ctx = NULL;

/* open input file, and allocate format context */
const char *src_filename = "E:\\Output.avi";
if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
    fprintf(stderr, "Could not open source file %s\n", src_filename);
    abort();
}

/* retrieve stream information */
int res = avformat_find_stream_info(fmt_ctx, NULL);
if (res < 0) {
    fprintf(stderr, "Could not find stream information\n");
    abort();
}

int video_stream_index = 0; /* video stream is usually 0, but better to look it up in case it's not first */
for (; video_stream_index < (int)fmt_ctx->nb_streams; ++video_stream_index)
{
    if (fmt_ctx->streams[video_stream_index]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
        break;
}
if (video_stream_index == (int)fmt_ctx->nb_streams)
    abort();

AVPacket *packet = new AVPacket;
av_init_packet(packet);
while (av_read_frame(fmt_ctx, packet) == 0)
{
    if (packet->stream_index == video_stream_index)
        printf("Sample nr %lld\n", (long long)packet->pts);
    av_free_packet(packet);
}
Basically, you open the context and read packets from it. You will get both audio and video packets, so you should check whether the packet belongs to the stream of interest. FFMPEG will save you the trouble of dealing with empty frames and give you only samples that have data in them.
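If the goal is simply to dump the raw gray frames, a hypothetical variant of the read loop above (the output file name is illustrative): since the video is uncompressed, each video packet's payload should be exactly one frame:

FILE *out = fopen("frames_gray8.raw", "wb");
while (av_read_frame(fmt_ctx, packet) == 0)
{
    if (packet->stream_index == video_stream_index)
        fwrite(packet->data, 1, packet->size, out); /* one raw frame per packet */
    av_free_packet(packet);
}
fclose(out);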