OpenCV hangs on VideoCapture grab() - c++

I am trying to write a program to capture images from 2 webcams simultaneously (or near simultaneously), but sometimes when I run my program it starts to hang. What I mean by that is that the FPS drops so low that there is a good 5 - 10 seconds between each image capture. I decided to make a much smaller program that uses only the code I thought might be causing the problem, so I could isolate the source. Sure enough, my small program causes the same problems, but I am stumped as to what is causing them. Most of the time it will run without fault, but sometimes it will exhibit the same hanging symptoms anywhere from 10 seconds to 1 minute into running the code. No errors are raised, but from the output of my program I am confident that VideoCapture's grab() is the call that is slowing down.
I am running this on OS X, with two external webcams through a USB hub, OpenCV version 10.4.11_1, and in C++. I don't think the USB hub is causing the problem. Quite frankly, it takes so long to tell whether it will freeze that it is difficult to troubleshoot. I would get rid of the USB hub, but I need it in the end, and I know bandwidth is not the issue: I can run multiple (I have tried 4) instances of a different OpenCV test program that captures from a single camera, with all cameras attached through the USB hub.
I wonder if there is an internal buffer in the VideoCapture class that is filling up, or some other internal issue, because I can't find much documentation on VideoCapture's grab() function to work out what it is actually doing that takes so long.
Thanks for reading my lengthy description. Here is my code:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(){
    VideoCapture vc1(1);
    VideoCapture vc2(2);
    Timer tmr;                      // Timer class shown below
    Mat img1;
    Mat img2;
    namedWindow("WINDOW1", CV_WINDOW_NORMAL);
    namedWindow("WINDOW2", CV_WINDOW_NORMAL);
    waitKey(1);
    int count = 0;
    while (true){
        tmr.reset();
        vc1.grab();
        vc2.grab();
        cout << "Double grab time(" << ++count << "): " << tmr.elapsed() << endl;
        tmr.reset();
        vc1.retrieve(img1);
        vc2.retrieve(img2);
        cout << "Double retrieve time: " << tmr.elapsed() << endl;
        imshow("WINDOW1", img1);
        imshow("WINDOW2", img2);
        if (waitKey(25) == 27){
            cout << "Quit" << endl;
            break;
        }
    }
    return 0;
}
using this Timer class from an SO post:
#include <chrono>

class Timer
{
public:
    Timer() : beg_(clock_::now()) {}
    void reset() { beg_ = clock_::now(); }
    double elapsed() const {
        return std::chrono::duration_cast<second_>
            (clock_::now() - beg_).count();
    }
private:
    typedef std::chrono::high_resolution_clock clock_;
    typedef std::chrono::duration<double, std::ratio<1> > second_;
    std::chrono::time_point<clock_> beg_;
};
and compiled with:
clang++ `pkg-config --libs --cflags opencv` -o test test.cpp
I just can't imagine I am the only one who has run into this or will, so if I find anything out I will be sure to post it. In the meantime I would be eternally grateful for some help.
Thanks

I have a partial solution in case anyone else runs into this problem. I was able to stop my program from freezing by using different webcams. Originally I used two webcams called "Creative Live! Cam Chat HD, 5.7MP", which otherwise seem to work perfectly; after replacing them with two Logitech C920s I was able to get it to work. (Or so it seems. I have been using them for about 1.5 months now, and the one time I saw it freeze like it did before was while I was adding code to resize the video based on CLI input in a multithreaded program. I was also getting segmentation faults at the time, so it is not exactly strong evidence.)
If I find out why the Logitech cameras work when the others didn't I will post a reply, but my advice would be to try using different webcams if anyone runs into a similar problem.
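In case it helps anyone narrow a similar problem down, here is a minimal diagnostic sketch (separate from my program above) that times each camera's grab() on its own, so you can see which device is the one stalling. The camera indices 1 and 2 are assumed from my setup above.
#include <opencv2/opencv.hpp>
#include <chrono>
#include <iostream>

int main() {
    cv::VideoCapture vc1(1), vc2(2);    // same indices as in the question
    if (!vc1.isOpened() || !vc2.isOpened()) {
        std::cerr << "Could not open one of the cameras" << std::endl;
        return 1;
    }
    for (int i = 0; i < 1000; ++i) {
        auto t0 = std::chrono::steady_clock::now();
        vc1.grab();                     // time camera 1 on its own
        auto t1 = std::chrono::steady_clock::now();
        vc2.grab();                     // time camera 2 on its own
        auto t2 = std::chrono::steady_clock::now();
        std::cout << "grab1: " << std::chrono::duration<double>(t1 - t0).count()
                  << " s, grab2: " << std::chrono::duration<double>(t2 - t1).count()
                  << " s" << std::endl;
    }
    return 0;
}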

Related

OpenCV 4.5.2 takes a long time (>100ms) to retrieve a single frame from a webcam, C++ on Windows 10

I've been having a tough time getting my webcam working quickly with OpenCV. Frames take a very long time to read (a recorded average of 124 ms across 500 frames). I've tried on three different computers (running Windows 10) with a Logitech C922 webcam. The most recent machine I tested on has a Ryzen 9 3950X with 32 GB of RAM, so there is no lack of power.
Here is the code:
cv::VideoCapture cap = cv::VideoCapture(m_cameraNum);

// Check if camera opened successfully
if (!cap.isOpened())
{
    m_logger->critical("Error opening video stream or file\n\r");
    return -1;
}

bool result = true;
result &= cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
result &= cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);

bool ready = false;
std::vector<std::string> timeLog;
timeLog.reserve(50000);
int i = 0;
while (i < 500)
{
    auto start = std::chrono::system_clock::now();
    cv::Mat img;
    ready = cap.read(img);

    // If the frame couldn't be read, skip it and try again
    if (!ready)
    {
        timeLog.push_back("continue");
        continue;
    }
    i++;
    auto end = std::chrono::system_clock::now();
    timeLog.push_back(std::to_string(std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()));
}

for (auto& entry : timeLog)
    m_logger->info(entry);

cap.release();
return 0;
Notice that I write the elapsed times to a log file at the end of execution. The average time is 124 ms for both debug and release, and there was not a single instance of "continue" across half a dozen runs.
It doesn't matter whether I use USB 2 or USB 3 ports (the camera is USB 2) or whether I run a debug or a release build; the log file shows anywhere from 110 ms to 130 ms per frame. The camera works fine in other apps; OBS can get a smooth 1080@30fps or 720@60fps.
Stepping through the debugger and doing a lot of Googling, I've learned the following about my system:
The backend chosen by default is DSHOW. GStreamer and FFMPEG are also available.
DSHOW uses FFMPEG somehow (it needs the FFMPEG DLL), but I cannot use FFMPEG directly through OpenCV. Attempting to use cv::VideoCapture(m_cameraNum, cv::CAP_FFMPEG) always fails. It seems like OpenCV's interface to FFMPEG is only capable of opening video files.
Microsoft really screwed up camera devices in Windows a few years back, not sure if this is related to my problem.
Here's a short list of the fixes I have tried, most taken from older SO posts:
result &= cap.set(cv::CAP_PROP_FRAME_COUNT, 30); // Returns false, does nothing
result &= cap.set(cv::CAP_PROP_CONVERT_RGB, 0); // Returns true, does nothing
result &= cap.set(cv::CAP_PROP_MODE, cv::VideoWriter::fourcc('M', 'J', 'P', 'G')); // Returns false, does nothing
Set registry key from http://alax.info/blog/1693 that should disable the new Windows camera server.
Updated from 4.5.0 to 4.5.2, no change.
Asked device manager to find a newer driver, no newer driver found.
I'm out of ideas. Any help?
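Not an answer, but one variation that isn't in the list above and might be worth a try: requesting MJPG through CAP_PROP_FOURCC (rather than CAP_PROP_MODE) and picking the backend explicitly when opening the device. This is a sketch only; whether the driver honours it is another matter, and property sets can silently fail, so read the properties back with cap.get(). The name cameraNum below stands in for the question's m_cameraNum.
#include <opencv2/opencv.hpp>
#include <iostream>

// Sketch: open with an explicit backend and request MJPG via CAP_PROP_FOURCC.
cv::VideoCapture openCamera(int cameraNum)
{
    cv::VideoCapture cap(cameraNum, cv::CAP_DSHOW);  // or cv::CAP_MSMF
    cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);
    cap.set(cv::CAP_PROP_FPS, 30);

    // Read back what actually got applied; sets can silently fail.
    std::cout << "fourcc: " << cap.get(cv::CAP_PROP_FOURCC)
              << " fps: "   << cap.get(cv::CAP_PROP_FPS) << std::endl;
    return cap;
}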

RtAudio + Qt : duplex not working with RME Fireface on Linux

This is my first post on Stack Overflow; I hope I'm doing this right.
I'm new to C++.
I've been playing with RtAudio and Qt (on linux, desktop and raspberry pi).
Backend is ALSA.
Audio out went fine both on my desktop computer (RME Fireface UCX in Class Compliant mode) and on the Raspberry Pi 3 (with HifiBerry and PiSound).
Lately, I tried to add audio input support to my program.
I read the duplex tutorial on RtAudio website, and tried to implement it inside my code.
As soon as I added the input StreamParameters to openStream, I got a very crackly sound.
The StreamStatus is fine in the callback, though...
I tried creating an empty C++ project and simply copying the RtAudio tutorial.
Sadly, the problem remains...
I added this to my project file in Qt Creator
LIBS += -lpthread -lasound
I think my issue is similar to this one, but I couldn't find how (or if) it was solved
I tried different buffer sizes (from 64 to 4096 and more); the cracks become less audible as the buffer size increases, but they are still present.
Do you know of anything that should be done for RtAudio in duplex mode that might solve this? It seems that the buffer size does not behave the same way in duplex mode.
Edit:
Out of curiosity (and despair), I tried even lower buffer sizes with the canonical example from the RtAudio help: it turns out that buffer sizes of 1, 2, 4 and 8 frames remove the cracks...
As soon as I use 16 frames, the sound is awful.
Even 15 frames works; I really don't get what's going on.
Code sample:
RtAudio::StreamOptions options;
options.flags |= RTAUDIO_SCHEDULE_REALTIME;
RtAudio::StreamParameters params_in, params_out;
params_in.deviceId = 3;
params_in.nChannels = 2;
params_out.deviceId = 3;
params_out.nChannels = 2;
When output only, it works:
try {
    audio.openStream(
        &params_out,
        NULL,
        RTAUDIO_SINT16,
        48000,
        &buffer_frames,
        &inout,
        (void *) &buffer_bytes,
        &options
    );
}
catch (RtAudioError& e) {
    std::cout << "Error while opening stream" << std::endl;
    e.printMessage();
    exit(0);
}
Cracks appear when changing NULL to &params_in:
try {
    audio.openStream(
        &params_out,
        &params_in,
        RTAUDIO_SINT16,
        48000,
        &buffer_frames,
        &inout,
        (void *) &buffer_bytes,
        &options
    );
}
catch (RtAudioError& e) {
    std::cout << "Error while opening stream" << std::endl;
    e.printMessage();
    exit(0);
}
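(For reference, the inout callback passed above is not shown in the question; in RtAudio's duplex tutorial it is a straight pass-through along these lines, where the user-data pointer is assumed to hold the byte count of one buffer, i.e. frames * channels * bytes per sample:)
#include <cstring>
#include <iostream>
#include "RtAudio.h"

// Pass-through duplex callback in the style of the RtAudio duplex tutorial.
// userData is assumed to point at buffer_bytes (frames * channels * bytes per sample).
int inout(void *outputBuffer, void *inputBuffer, unsigned int /*nBufferFrames*/,
          double /*streamTime*/, RtAudioStreamStatus status, void *userData)
{
    if (status)
        std::cout << "Stream over/underflow detected." << std::endl;
    unsigned int *bytes = static_cast<unsigned int *>(userData);
    std::memcpy(outputBuffer, inputBuffer, *bytes);   // copy input straight to output
    return 0;
}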
Thank you for your help
Answering my own question.
I redid my tests from scratch on the Raspberry Pi 3 / PiSound.
It turns out I must have done something wrong the first time. The canonical example from RtAudio (and the input implementation I did for my program) works well at buffer sizes of 64, 128, etc.
The desktop build still has crackly sound, but works with weird buffer sizes (like 25, 30 or 27). The problem most likely comes from the Fireface UCX, which is not well supported on Linux (even in Class Compliant mode).
Thank you for your help, and sorry if I wasted your time.

SDL - How to play audio asynchronously in C++ without stopping code execution?

I am developing a clone of Asteroids in pure C++, and for that purpose I need to add sounds for different events, such as when a bullet is fired or when an explosion occurs. The issue, however, is that I don't have any experience with audio libraries.
I am using Simple DirectMedia Layer (SDL) and wrote a function named playsound() to play a sound when a certain event occurs. The problem is that when an event occurs and playsound() is called, code execution stops until the sound has played out in full or until I return from the function (I delay the return using a delay function).
What I want is for the sound to play in the background without creating any lag for the rest of the game. I am developing on Ubuntu 16.04, so I can't use the Windows PlaySound() function with its ASYNC flag either.
Here is the function:
void playsound(string path) {
    // Initialize SDL.
    if (SDL_Init(SDL_INIT_AUDIO) < 0)
        return;

    // local variables
    Uint32 wav_length;      // length of our sample
    Uint8 *wav_buffer;      // buffer containing our audio file
    SDL_AudioSpec wav_spec;

    if (SDL_LoadWAV(path.c_str(), &wav_spec, &wav_buffer, &wav_length) == NULL) {
        return;
    }

    SDL_AudioDeviceID deviceId = SDL_OpenAudioDevice(NULL, 0, &wav_spec, NULL, 0);
    SDL_QueueAudio(deviceId, wav_buffer, wav_length);
    SDL_PauseAudioDevice(deviceId, 0);
    SDL_Delay(50);
    SDL_CloseAudioDevice(deviceId);
    SDL_FreeWAV(wav_buffer);
    SDL_Quit();
}
Your delay is stopping your code from executing: 50 ms of delay is almost 2 frames at 33 ms per frame, or 3 frames at 16 ms per frame. A frame drop here and there might not be a problem, but you can see how calling several sounds in succession will slow your program down.
This is how I play sounds in my engine, using SDL2_mixer (this is for short sounds; for music there is another method called Mix_PlayMusic). It might be helpful to you. I have no lag (and I don't use any sleeps or delays in my code). Once you call play(), the sound is played in full, unless something else is pausing your code.
#pragma once

#include <string>
#include <memory>
#include <SDL2/SDL_mixer.h>

class sample {
public:
    sample(const std::string &path, int volume);
    void play();
    void play(int times);
    void set_volume(int volume);

private:
    std::unique_ptr<Mix_Chunk, void (*)(Mix_Chunk *)> chunk;
};
And the cpp file
#include <sample.h>

sample::sample(const std::string &path, int volume)
    : chunk(Mix_LoadWAV(path.c_str()), Mix_FreeChunk) {
    if (!chunk.get()) {
        // LOG("Couldn't load audio sample: ", path);
    }
    Mix_VolumeChunk(chunk.get(), volume);
}

// -1 here means we let SDL_mixer pick the first channel that is free.
// If no channel is free it'll return an error code.
void sample::play() {
    Mix_PlayChannel(-1, chunk.get(), 0);
}

void sample::play(int times) {
    Mix_PlayChannel(-1, chunk.get(), times - 1);
}

void sample::set_volume(int volume) {
    Mix_VolumeChunk(chunk.get(), volume);
}
Notice that I don't need to thread my model; every time something triggers a sound, the program keeps executing. (I guess SDL_mixer plays in the main SDL thread.)
For this to work, where you init SDL you'll also have to init the mixer as
if (Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 1024) < 0) {
    // Error message if we can't initialize
}

// Amount of channels (max number of sounds playing at the same time)
Mix_AllocateChannels(32);
And an example of how to play a sound would be
// at some point loaded a sample s with sample(path to wave mp3 or whatever)
s.play();
A few remarks. You don't need to use the code as it is (but you can); it is more of a simple example of using SDL2_mixer.
This means some functionality is lacking. You might want tighter handling of sounds; for example, to stop a sound mid-play (for some reason), you can do that by playing your sounds on different channels and using the Mix_HaltChannel function, and the play() function could receive the channel where you want the sound played.
All these functions return error values; for example, if no unreserved channel is available, Mix_PlayChannel will return an error code.
Another thing to keep in mind is that if you play the same sound several times at once it will start to get blurry, and you might not notice that the same sound is being played again. So you could add an integer to sample to limit how many times a sample can be played at once.
If you REALLY want to move your mixer/audio off the main SDL thread (and still only use SDL), you can spawn a new SDL context in a thread and signal it in some way to play audio.
You want to load all necessary assets when initializing the game. Then, when you want to play them, they are already in memory and there will be no lag. You could also play the sounds in a separate thread so they don't block your main thread.
There are several tools in C++ for asynchronous operations. You can try the simplest one, std::async:
auto handle = std::async(std::launch::async,
                         playsound, std::string{"/path/to/cute/sound"});
// Some other stuff. Your game logic isn't blocked here.
handle.get(); // This can actually block.
You should specify the std::launch::async flag, which means that a new thread will be used, followed by the callable to be executed and its parameters. Don't forget to include the <future> header.
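A slightly fuller sketch of the same idea, assuming the playsound() function from the question. One caveat worth knowing: with std::launch::async, the future returned by std::async also waits for the task in its destructor, so dropping the handle immediately would block just like get().
#include <future>
#include <string>

void playsound(std::string path);   // the function from the question

int main() {
    // Launch the sound on its own thread; the game loop below isn't blocked.
    auto handle = std::async(std::launch::async, playsound,
                             std::string{"/path/to/cute/sound"});

    // ... run game logic here ...

    // get() (or the future's destructor) waits for playsound to finish.
    handle.get();
    return 0;
}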

opencv videocapture default setting

I am using a MacBook and have a program written in C++ that extracts successive frames from the webcam. The extracted frames are then grayscaled and smoothed using OpenCV functions. After that I use CVNorm to find the relative difference between 2 frames. I am using the VideoCapture class.
I found out that the frame rate is 30 fps, and using CVNorm the relative difference between successive frames is less than 200 most of the time.
I am trying to do the same thing in Xcode so as to implement the program on an iPad. This time I am using AVCaptureSession; the same steps are performed, but I realize that the relative difference between 2 frames is much higher (>600).
Thus I would like to know the default settings of the VideoCapture class. I know that I can edit the settings using cvSetCaptureProperty, but I cannot find their default values. I would then compare them with the settings of the AVCaptureSession and hopefully find out why there is such a huge difference in CVNorm between these 2 approaches to extracting frames.
Thanks in advance.
OpenCV's VideoCapture class is just a simple wrapper for capturing video from cameras or for reading video files. It is built upon several multimedia frameworks (AVFoundation, DirectShow, FFmpeg, V4L, GStreamer, etc.) and totally hides them from the outside. The problem comes from here: it is really hard to achieve the same capture behaviour under different platforms and multimedia frameworks. The common set of capture properties is poor, and setting their values is only a suggestion rather than a requirement.
In summary, the default properties can differ under different platforms, but in case of AV Foundation framework:
The cvCreateCameraCapture_AVFoundation(int index) function will create a CvCapture instance under iOS, which is defined in cap_qtkit.mm. It seems you are not able to change the sampling rate; only CV_CAP_PROP_FRAME_WIDTH, CV_CAP_PROP_FRAME_HEIGHT and DISABLE_AUTO_RESTART are supported.
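Whatever backend ends up being used, one way to see which defaults it actually picked is simply to read the properties back after opening the capture. A quick sketch with the C++ API (some backends report 0 for properties they don't expose):
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);                  // default camera
    if (!cap.isOpened()) return 1;

    // Read back what the backend actually chose as its defaults.
    std::cout << "width:  " << cap.get(CV_CAP_PROP_FRAME_WIDTH)  << std::endl;
    std::cout << "height: " << cap.get(CV_CAP_PROP_FRAME_HEIGHT) << std::endl;
    std::cout << "fps:    " << cap.get(CV_CAP_PROP_FPS)          << std::endl;
    return 0;
}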
The grabFrame() implementation is below. I'm absolutely not an Objective-C expert, but it seems like it waits until the capture updates the image or a timeout occurs.
bool CvCaptureCAM::grabFrame() {
    return grabFrame(5);
}

bool CvCaptureCAM::grabFrame(double timeOut) {
    NSAutoreleasePool* localpool = [[NSAutoreleasePool alloc] init];

    double sleepTime = 0.005;
    double total = 0;

    [NSTimer scheduledTimerWithTimeInterval:100 target:nil selector:@selector(doFireTimer:) userInfo:nil repeats:YES];
    while (![capture updateImage] && (total += sleepTime) <= timeOut) {
        [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:sleepTime]];
    }

    [localpool drain];
    return total <= timeOut;
}

Error in ffmpeg when reading from UDP stream

I'm trying to process frames from a UDP stream using ffmpeg. Everything will run fine for a while, but av_read_frame() will always eventually return either AVERROR_EXIT (Immediate exit requested) or -5 (Error number -5 occurred) while the stream should still be running fine. Right before the error it always prints the following messages to the console:
[mpeg2video @ 0caf6600] ac-tex damaged at 14 10
[mpeg2video @ 0caf6600] Warning MVs not available
[mpeg2video @ 0caf6600] concealing 800 DC, 800 AC, 800 MV errors in I frame
(the numbers in the message vary from run to run)
I have a suspicion that the error is related to calling av_read_frame() too quickly. If I let it run as fast as possible, I usually get an error within 10-20 frames, but if I put a sleep before reading, it will run fine for a minute or so and then exit with an error. I realize this is hacky and assume there is a better solution. Bottom line: is there a way to dynamically check whether av_read_frame() is ready to be called, or a way to suppress the error?
Pseudo code of what I'm doing is below. Thanks in advance for the help!
void getFrame()
{
    // wait here?? seems hacky...
    // boost::this_thread::sleep(boost::posix_time::milliseconds(25));

    int av_read_frame_error = av_read_frame(m_input_format_context, &m_input_packet);

    if (av_read_frame_error == 0) {
        // DO STUFF - this all works fine when it gets here
    }
    else {
        // error
        char errorBuf[AV_ERROR_MAX_STRING_SIZE];
        av_make_error_string(errorBuf, AV_ERROR_MAX_STRING_SIZE, av_read_frame_error);
        cout << "FFMPEG Input Stream Exit Code: " << av_read_frame_error << " Message: " << errorBuf << endl;
    }
}
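(Aside: a hedged sketch of one way to avoid tuning the sleep by hand is to open the context in non-blocking mode and treat AVERROR(EAGAIN) as "no packet ready yet" rather than as a failure. The sketch below uses plain function parameters in place of the member variables from the pseudo code above, and whether the demuxer actually honours AVFMT_FLAG_NONBLOCK depends on the input and the FFmpeg build, so treat it as an idea to test, not a fix.)
extern "C" {
#include <libavformat/avformat.h>
}
#include <cerrno>
#include <iostream>

// Sketch only: non-blocking read with explicit handling of "try again later".
void getFrameNonBlocking(AVFormatContext *ctx, AVPacket *pkt)
{
    // Ideally set once, right after the format context is created.
    ctx->flags |= AVFMT_FLAG_NONBLOCK;

    int err = av_read_frame(ctx, pkt);
    if (err == 0) {
        // process the packet here, then unref/free it
    } else if (err == AVERROR(EAGAIN)) {
        // no packet available yet -- come back later instead of sleeping blindly
    } else {
        char errorBuf[AV_ERROR_MAX_STRING_SIZE];
        av_make_error_string(errorBuf, AV_ERROR_MAX_STRING_SIZE, err);
        std::cout << "av_read_frame failed: " << errorBuf << std::endl;
    }
}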
Incoming frames need to be handled in a callback function, so the mechanism should be such that a callback gets called whenever there is a new frame. That way there is no need to manually fine-tune the delay.
Disclaimer: I have not used ffmpeg APIs.