cvRetrieveFrame crashes - C++

I'm trying to write some simple OpenCV code that creates a capture and retrieves the first frame from it.
CvCapture *m_pCapfile = cvCreateFileCapture(m_aviFileName.c_str());
if (m_pCapfile)
m_frames = cvRound(cvGetCaptureProperty(m_pCapfile, CV_CAP_PROP_FRAME_COUNT));
cvSetCaptureProperty(m_pCapfile, CV_CAP_PROP_POS_FRAMES, 0);
int ret = cvGrabFrame( m_pCapfile);
IplImage *cap = cvRetrieveFrame( m_pCapfile);
In m_frames I have 153, which is the correct number of frames as far as I know.
cvGrabFrame returns 1 into ret; however, cvRetrieveFrame crashes.
I tried using cvCaptureFromFile and cvCaptureFromAVI instead of cvCreateFileCapture.
In both cases the cvRetrieveFrame method crashes.
Any ideas?
Thanks

CvCapture *m_pCapfile = cvCreateFileCapture(m_aviFileName.c_str());
Shouldn't this be a CvCapture, like the following?
CvCapture *m_pCapfile = cvCreateFileCapture(m_aviFileName.c_str());
I think you need to change the code to what I have suggested. Also, if this is your complete code, make sure that during the loop in which you are retrieving the frames you are not calling cvReleaseCapture(). That method should only be called at the end, once you have grabbed all the frames (or the specified number of frames you want).
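For reference, a minimal sketch of the grab/retrieve/release pattern described above (old OpenCV C API; error handling trimmed and the processing step is a placeholder):
CvCapture* cap = cvCreateFileCapture(m_aviFileName.c_str());
if (cap)
{
    int frames = cvRound(cvGetCaptureProperty(cap, CV_CAP_PROP_FRAME_COUNT));
    for (int i = 0; i < frames; i++)
    {
        if (!cvGrabFrame(cap))
            break;                               // no more frames to grab
        IplImage* img = cvRetrieveFrame(cap);    // owned by the capture, do not free it
        if (!img)
            break;
        // process img here; use cvCloneImage(img) if you need to keep a copy
    }
    cvReleaseCapture(&cap);                      // released only once, at the end
}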

Related

Fixing Real Time Audio with PortAudio in Windows 10

A couple of years ago I created an application that processes audio by downmixing a 6-channel or 8-channel signal (a.k.a. 5.1 or 7.1) to matrix-encoded stereo. For that purpose I used the PortAudio library with great results. This is an example of the open-stream call and the callback that downmixes a 7.1 signal:
Pa_OpenStream(&Flujo, &inputParameters, &outParameters, SAMPLE_RATE, 1, paClipOff, ptrFunction, NULL);
Notice the framesPerBuffer value of just one (1). This is my callback function:
int downmixed8channels(const void *input, void *output, unsigned long framesPerBuffer, const PaStreamCallbackTimeInfo * info, PaStreamCallbackFlags state, void * userData)
{
(void)userData;
(void)info;
(void)state;
(void)framesPerBuffer;
float *ptrInput = (float*)input;
float *ptrOutput = (float*)output;
/*This is a struct to identify samples*/
AudioSamples->L = ptrInput[0];
AudioSamples->R = ptrInput[1];
AudioSamples->C = ptrInput[2];
AudioSamples->LFE = ptrInput[3];
AudioSamples->RL = ptrInput[4];
AudioSamples->RR = ptrInput[5];
AudioSamples->SL = ptrInput[6];
AudioSamples->SR = ptrInput[7];
Encoder->Encode8Channels(AudioSamples->L,
AudioSamples->R,
AudioSamples->C,
AudioSamples->LFE,
AudioSamples->SL,
AudioSamples->SR,
AudioSamples->RL,
AudioSamples->RR);
ptrOutput[0] = Encoder->getLT();
ptrOutput[1] = Encoder->getRT();
return paContinue;
}
As you can see, the index into the input and output buffers corresponds to a discrete channel; in the case of the output, 0 = left channel and 1 = right channel. This used to work well until Windows 10 2004 arrived; since I updated my system to this new version, my audio glitches and I get artifacts like these:
Those are captures of the sound from the channel-test window in the Windows audio device panel. From the images it is clear my program is dropping frames, so my first attempt at a fix was to use a buffer larger than one to hold samples, process them, and then send them. The reason I did not use a buffer size larger than one in the first place was that the program would drop frames.
But before implementing that, I did a proof of concept with no audio processing at all, just a simple pass of data from input to output. For that I set the output channelCount parameter to 8, just like the input, resulting in something as simple as this:
for (int i = 0; i < framesPerBuffer /*1000*/; i++)
{
ptrOutput[i] = ptrInput[i];
}
But the program was still dropping samples.
Next I used two callbacks: one for writing into a buffer and a second one to read it and send it to the output.
(void)info;
(void)userData;
(void)state;
(void)output;
float* ptrInput = (float*)input;
for (int i = 0; i < FRAME_SIZE; i++)
{
buffer_input[i] = ptrInput[i];
}
return paContinue;
Callback to store.
(void)info;
(void)userData;
(void)state;
(void)input;
float* ptrOutput = (float*)output;
for (int i = 0; i < FRAME_SIZE; i += 6)
{
AudioSamples->L = buffer_input[i];
AudioSamples->R = buffer_input[i + 1];
AudioSamples->C = buffer_input[i + 2];
AudioSamples->LFE = buffer_input[i + 3];
AudioSamples->SL = buffer_input[i + 4];
AudioSamples->SR = buffer_input[i + 5];
Encoder->Encoder(AudioSamples->L, AudioSamples->R, AudioSamples->C, AudioSamples->LFE,
AudioSamples->SL, AudioSamples->SR);
bufferTransformed[w++] = Encoder->getLT();
bufferTransformed[w++] = Encoder->getRT();
}
w = 0;
for (int i = 0; i < FRAME_REDUCED; i++)
{
ptrOutput[i] = bufferTransformed[i];
}
return paContinue;
Callback for processing
The processing callback uses a reduced frames-per-buffer value, since 2 channels is less than eight and, in PortAudio, a frame seems to be composed of one sample for each audio channel.
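To make that concrete, here is a small illustrative calculation (the constants are assumptions, not values from the original program):
// One PortAudio frame = one sample per channel, so an interleaved float buffer
// for framesPerBuffer frames holds framesPerBuffer * channelCount samples.
const unsigned long framesPerBuffer = 256;   // illustrative value
const int inputChannels = 8;                 // 7.1 input
const int outputChannels = 2;                // stereo output
const size_t inputSamples = framesPerBuffer * inputChannels;    // 2048 floats per callback
const size_t outputSamples = framesPerBuffer * outputChannels;  // 512 floats per callback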
This also did not work, and the first problem is: how do I synchronize the two callbacks? After all of this, what recommendation or advice can you give me to solve this issue?
Notes: the sample rate must be the same for both devices (I implemented logic in the program to ensure this), and the bit depth is also the same; I am using paFloat32.
The PortAudio build is the modified one used by Audacity, since I wanted to use their implementation of WASAPI loopback.
Thanks very much in advance!
At the end of the day I did not have to change my callback functions in any way. What solved it was increasing the ".suggestedLatency" parameter of the input and output parameters to 1.0; even the device's defaultLowOutputLatency and defaultHighOutputLatency values were causing too much glitching. I tested until 1.0 turned out to be the sweet spot; higher values did not seem to improve things.
TL;DR: Increase suggestedLatency until the glitching is gone.
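For reference, a minimal sketch of where that value is set (standard PortAudio API; the device indices and channel counts are illustrative, while Flujo, SAMPLE_RATE and ptrFunction are the names from the question):
PaStreamParameters inputParameters = {};
inputParameters.device = Pa_GetDefaultInputDevice();
inputParameters.channelCount = 8;                  // 7.1 capture
inputParameters.sampleFormat = paFloat32;
inputParameters.suggestedLatency = 1.0;            // the value that stopped the glitching here
inputParameters.hostApiSpecificStreamInfo = NULL;

PaStreamParameters outParameters = {};
outParameters.device = Pa_GetDefaultOutputDevice();
outParameters.channelCount = 2;                    // stereo output
outParameters.sampleFormat = paFloat32;
outParameters.suggestedLatency = 1.0;
outParameters.hostApiSpecificStreamInfo = NULL;

Pa_OpenStream(&Flujo, &inputParameters, &outParameters, SAMPLE_RATE,
              framesPerBuffer, paClipOff, ptrFunction, NULL);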

Lib Caffe (C++): input_blobs()[0] causes bottom shape error after first call

I am trying to use Lib Caffe to extract features from images, so I can use it for other purposes on my project.
My Caffe Neural Network works fine for the first image, but at the second image, it throws me the following error:
Check failed: bottom[0]->shape() == bottom[i]->shape() bottom[0]: 87122736 0 85536896 0 (37632), bottom[1]: 1 3 112 112 (37632)
Both the first and second images are the same, and their shape is (112 x 112) with 3 channels.
The code
Caffe::set_mode(Caffe::CPU);
my_net.reset(new caffe::Net<float>(arch, caffe::TEST));
my_net->CopyTrainedLayersFrom(model);
for(int i=0; i<num_images;i++) {
Blob<float> *my_blob = my_net->input_blobs()[0];
// accessing blob attributes just for debugging purposes
// 1 after first call OK, 87122736 at second call (?)
int batch_size = my_blob->num();
// 3 after first call OK, 0 at second call (?)
int channels = my_blob->channels();
// 112 after first call OK, 85536896 at second call (?)
int height = my_blob->height();
// 112 after first call OK, 0 at second call (?)
int width = my_blob->width();
my_blob->set_cpu_data(images[i]);
my_net->Forward();
//delete my_blob; <-- IT WAS CAUSING THE PROBLEM
}
How can I just feed the Net so I can run it over the next images? How do I make input_blobs()[0] point to the same memory block as in the first attempt? As you can see, the attributes are different for the same call to input_blobs()[0].
SOLUTION FOUND
In a previous release of Caffe, I used to allocate a new input blob, set its data, push it into a bottom vector, and then use it as below:
// allocate new input blob
Blob<float> *feature_blob = new Blob<float>(1, 1, width, height);
// set its data
feature_blob->set_cpu_data(image[i]);
// push it back into a bottom vector
std::vector<Blob<float>*> feature_bottom;
feature_bottom.push_back(feature_blob);
// use the blob, forwarding the network
loss = 0;
const std::vector<Blob<float>*>& feature_points(
feature_extraction_net->Forward(feature_bottom, loss));
// deallocate memory
feature_bottom.clear();
delete feature_blob;
The problem is that with a newer release of Caffe I don't need to deallocate that memory; this was actually the root of the problem. I just don't delete the blob anymore.
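A minimal sketch of the resulting per-image loop with a recent Caffe release (names follow the question; images[i] is assumed to be a float buffer of size 1 x 3 x 112 x 112):
caffe::Blob<float>* input = my_net->input_blobs()[0];  // owned by the net
for (int i = 0; i < num_images; i++)
{
    input->set_cpu_data(images[i]);   // hand the image data to the net
    my_net->Forward();                // run inference; do NOT delete 'input'
    // read the features from my_net->output_blobs() here
}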
The solution is also posted in the library's Git forum (solution).

ffmpeg C API - creating queue of frames

I have created, using the C API of ffmpeg, a C++ application that reads frames from a file and writes them to a new file. Everything works fine as long as I write the frames immediately to the output. In other words, the following structure of the program produces the correct result (I am only giving pseudocode for now; if needed I can also post some real snippets, but the classes I have created for handling the ffmpeg functionality are quite large):
AVFrame* frame = av_frame_alloc();
int got_frame;
// readFrame returns 0 if file is ended, got frame = 1 if
// a complete frame has been extracted
while(readFrame(inputfile,frame, &got_frame)) {
if (got_frame) {
// I actually do some processing here
writeFrame(outputfile,frame);
}
}
av_frame_free(&frame);
The next step was to parallelize the application and, as a consequence, frames are not written immediately after they are read (I do not want to go into the details of the parallelization). In this case problems arise: there is some flickering in the output, as if some frames were repeated randomly. However, the number of frames and the duration of the output video remain correct.
What I am trying to do now is to completely separate the reading from the writing in the serial implementation, in order to understand what is going on. I am creating a queue of pointers to frames:
std::queue<AVFrame*> queue;
int ret = 1, got_frame;
while (ret) {
AVFrame* frame = av_frame_alloc();
ret = readFrame(inputfile,frame,&got_frame);
if (got_frame)
queue.push(frame);
}
To write frames to the output file I do:
while (!queue.empty()) {
frame = queue.front();
queue.pop();
writeFrame(outputFile,frame);
av_frame_free(&frame);
}
The result in this case is an output video with the correct duration and number of frames that is only a repetition of the last 3 (I think) frames of the video.
My guess is that something goes wrong because in the first case I always use the same memory location for reading frames, while in the second case I allocate many different frames.
Any suggestions on what could be the problem?
Ah, so I'm assuming that readFrame() is a wrapper around libavformat's av_read_frame() and libavcodec's avcodec_decode_video2(), is that right?
From the documentation:
When AVCodecContext.refcounted_frames is set to 1, the frame is
reference counted and the returned reference belongs to the caller.
The caller must release the frame using av_frame_unref() when the
frame is no longer needed.
and:
When
AVCodecContext.refcounted_frames is set to 0, the returned reference
belongs to the decoder and is valid only until the next call to this
function or until closing or flushing the decoder.
From this it follows that you need to set AVCodecContext.refcounted_frames to 1. The default is 0, so my gut feeling is that setting it to 1 will fix your problem. Don't forget to use av_frame_unref() on the pictures after use to prevent memory leaks, and also don't forget to free your AVFrame in this loop if got_frame = 0, again to prevent memory leaks:
while (ret) {
AVFrame* frame = av_frame_alloc();
ret = readFrame(inputfile,frame,&got_frame);
if (got_frame)
queue.push(frame);
else
av_frame_free(&frame);
}
(Or, alternatively, you could implement some cache for frame so that you only reallocate it if the previous object was pushed into the queue.)
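A minimal sketch of the suggested refcounted_frames setup (the codec lookup and the rest of the demuxing/decoding code are assumed to live in your existing wrappers):
AVCodecContext* cctx = avcodec_alloc_context3(codec);
cctx->refcounted_frames = 1;               // decoded frames now belong to the caller
if (avcodec_open2(cctx, codec, NULL) < 0) {
    // handle the error
}
// ... decode and queue frames as before; once a queued frame has been written out:
av_frame_unref(frame);                     // or av_frame_free(&frame) to free the struct as well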
There's nothing obviously wrong with your pseudocode. The problem almost certainly lies in how you lock the queue between threads.
Your memory allocation seems sane to me. Do you maybe do something else in between reading and writing the frames?
Is queue the same queue in the routines that read and write the frames?

How to get the latest frames in ffmpeg, not the next frame

I have an application which connects to an RTSP camera and processes some of the frames of video. Depending on the camera resolution and frame rate, I don't need to process all the frames, and sometimes my processing takes a while. I've designed things so that when a frame is read, it's passed off to a work queue for another thread to deal with. However, depending on system load/resolution/frame rate/network/file system/etc., I occasionally find cases where the program doesn't keep up with the camera.
I've found with ffmpeg (I'm using the latest git drop from mid-October and running on Windows) that being a couple of seconds behind is fine and you keep getting the next frame, next frame, etc. However, once you get, say, 15-20 seconds behind, the frames you get from ffmpeg occasionally have corruption. That is, what is returned as the next frame often has graphical glitches (streaking of the bottom of the frame, etc.).
What I'd like to do is put in a check, somehow, to detect if I'm more than X frames behind the live stream and, if so, flush the cached frames out and start fetching the latest/current frames.
My current snippet of the frame-buffer reading thread (C++):
while(runThread)
{
av_init_packet(&(newPacket));
int errorCheck = av_read_frame(context, &(newPacket));
if (errorCheck < 0)
{
// error
}
else
{
int frameFinished = 0;
int decodeCode = avcodec_decode_video2(ccontext, actualFrame, &frameFinished, &newPacket);
if (decodeCode <0)
{
// error
}
else if (decodeCode == 0)
{
// no frame could be decompressed / decoded / etc
}
else if ((decodeCode > 0) && (frameFinished))
{
// do my processing / copy the frame off for later processing / etc
}
else
{
// decoded some data, but frame was not finished...
// Save data and reconstitute the pieces somehow??
// Given that we free the packet, I doubt there is any way to use this partial information
}
av_free_packet(&(newPacket));
}
}
I've googled and looked through the ffmpeg documentation for some function I can call to flush things and enable me to catch up, but I can't seem to find anything. This same sort of solution would be needed if you wanted to only occasionally monitor a video source (e.g., if you only wanted to snag one frame per second or per minute). The only thing I could come up with is disconnecting from the camera and reconnecting. However, I still need a way to detect whether the frames I am receiving are old.
Ideally, I'd be able to do something like this :
while(runThread)
{
av_init_packet(&(newPacket));
// Not a real function, but I'd like to do something like this
if (av_check_frame_buffer_size(context) > 30_frames)
{
// flush frame buffer.
av_frame_buffer_flush(context);
}
int errorCheck = av_read_frame(context, &(newPacket));
...
}
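Since no such av_check_frame_buffer_size() exists, one way to approximate the check is to compare each packet's timestamp with wall-clock time and treat the stream as stale once the gap grows too large. A rough sketch (the helper name and threshold are illustrative, not part of the ffmpeg API):
#include <chrono>

// Returns true if this packet lags real time by more than maxLagSeconds,
// judged by its PTS (converted with the stream's time_base) versus the
// wall-clock time elapsed since the first timestamped packet.
static bool frameIsStale(const AVPacket& pkt, const AVStream* stream, double maxLagSeconds)
{
    static double mediaStart = -1.0;
    static std::chrono::steady_clock::time_point wallStart;
    if (pkt.pts == AV_NOPTS_VALUE)
        return false;                      // cannot tell without a timestamp
    double pts = pkt.pts * av_q2d(stream->time_base);
    if (mediaStart < 0.0)
    {
        mediaStart = pts;
        wallStart = std::chrono::steady_clock::now();
        return false;
    }
    double mediaElapsed = pts - mediaStart;
    double wallElapsed = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - wallStart).count();
    return (wallElapsed - mediaElapsed) > maxLagSeconds;
}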

Detecting an unplugged capture device (OpenCV)

I'm attempting to detect if my capture camera gets unplugged. My assumption was that a call to cvQueryFrame would return NULL; however, it continues to return the last valid frame.
Does anyone know how to detect camera plug/unplug events with OpenCV? This seems so rudimentary... what am I missing?
There is no API function to do that, unfortunately.
However, my suggestion is that you create another thread that simply calls cvCaptureFromCAM() and checks its result (inside a loop). If the camera gets disconnected, it should return NULL.
I'll paste some code just to illustrate my idea:
// This code should be executed on another thread!
while (1)
{
CvCapture* capture = NULL;
capture = cvCaptureFromCAM(-1); // or whatever parameter you are already using
if (!capture)
{
std::cout << "!!! Camera got disconnected !!!!" << std::endl;
break;
}
// I'm not sure if releasing it will have any effect on the other thread
cvReleaseCapture(&capture);
}
Thanks #karlphillip for pointing me in the right direction. Running calls to cvCaptureFromCAM in a separate thread works. When the camera gets unplugged, the return value is NULL.
However, it appears that this function is not thread-safe, but a simple mutex preventing simultaneous calls to cvCaptureFromCAM seems to do the trick. I used boost::thread for this example, but one could tweak this easily.
At global scope:
// Create a mutex used to prevent simultaneous calls to cvCaptureFromCAM
boost::shared_mutex mtxCapture;
// Flag to notify when we're done.
// NOTE: not bothering w/mutex for this example for simplicity's sake
bool done = false;
Entry point goes something like this:
int main()
{
// Create the work and the capture monitoring threads
boost::thread workThread(&Work);
boost::thread camMonitorThread(&CamMonitor);
while (! done)
{
// Do whatever
}
// Wait for threads to close themselves
workThread.join();
camMonitorThread.join();
return 0;
}
The work thread is simple. The only caveat is that you need to lock the mutex so you don't get simultaneous calls to cvCaptureFromCAM.
// Work Thread
void Work()
{
CvCapture * capture = NULL;
mtxCapture.lock(); // Lock calls to cvCaptureFromCAM
capture = cvCaptureFromCAM(-1); // Get the capture object
mtxCapture.unlock(); // Release lock on calls to cvCaptureFromCAM
//TODO: check capture != NULL...
while (! done)
{
// Do work
}
// Release capture
cvReleaseCapture(&capture);
}
And finally, the capture monitoring thread, as suggested by #karlphillip, except with a locked call to cvCaptureFromCAM. In my tests, the calls to cvReleaseCapture were quite slow, so I put a call to cvWaitKey at the end of the loop because I don't want the overhead of constantly checking.
void CamMonitor()
{
while (! done)
{
CvCapture * capture = NULL;
mtxCapture.lock(); // Lock calls to cvCaptureFromCAM
capture = cvCaptureFromCAM(-1); // Get the capture object
mtxCapture.unlock(); // Release lock on calls to cvCaptureFromCAM
if (capture == NULL)
done = true; // NOTE: not a thread-safe write...
else
cvReleaseCapture(&capture);
// Wait a while, we don't need to be constantly checking.
cvWaitKey(2500);
}
}
I will probably end up implementing a ready-state flag which will be able to detect if the camera gets plugged back in. But that's out of the scope of this example. Hope someone finds this useful. Thanks again, #karlphillip.
This still seems to be an issue.
Another solution would be to compare the returned data with the previous frame. For a working camera there should always be some flickering; if the data is identical, you can be sure the camera was unplugged.
Martin
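A minimal sketch of that comparison with the C++ API (illustrative; it relies on sensor noise making consecutive live frames differ):
#include <opencv2/opencv.hpp>
#include <iostream>

cv::VideoCapture cap(0);
cv::Mat prev, curr;
cap >> prev;
for (;;)
{
    cap >> curr;
    if (curr.empty())
        break;                              // no frame delivered at all
    if (prev.size() == curr.size() && prev.type() == curr.type() &&
        cv::norm(prev, curr, cv::NORM_L1) == 0)
    {
        std::cout << "Identical frames - camera probably unplugged" << std::endl;
        break;
    }
    curr.copyTo(prev);
}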
I think I have a good workaround for this problem. I create an auxiliary Mat of zeros with the same resolution as the camera output. I assign it to the Mat that will receive the captured frame just before the capture, and at the end I check the norm of this Mat. If it is equal to zero, it means no new frame was captured from the camera.
VideoCapture cap(0);
if(!cap.isOpened()) return -1;
Mat frame;
cap >> frame;
Mat emptyFrame = Mat::zeros((int)cap.get(CV_CAP_PROP_FRAME_HEIGHT), (int)cap.get(CV_CAP_PROP_FRAME_WIDTH), CV_32F);
for(;;)
{
frame = emptyFrame;
cap >> frame;
if (norm(frame) == 0) break;
}