I have three periodic real-time tasks that execute simultaneously, each on a different processor (core), for video capturing using OpenCV.
To synchronize the three tasks, I use an array of cv::Mat and swap the indices each time to avoid a data race. Thanks to the answers to my previous post on buffer reading/writing synchronization, I came up with this solution:
cv::Mat frame[3];
int frame_index_write = 0;
int frame_index_read = 1;
int frame_index_read_ = 2;
// Advance an index to the next buffer slot, wrapping around after the third
int SwapIndex(int *fi){
    return *fi = (*fi + 1) % 3;
}
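For reference, each call rotates an index one slot forward (0 → 1 → 2 → 0), so as long as the writer and the two readers each advance exactly once per captured frame, they keep pointing at three distinct slots:

int idx = 0;
SwapIndex(&idx); // idx == 1
SwapIndex(&idx); // idx == 2
SwapIndex(&idx); // idx == 0 again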
Now, the first task grabs a capture, stores it in the buffer, and broadcasts a signal to the other tasks so they can get the captured frame:
while (1)
{
    capture.grab();
    capture.retrieve(frame[frame_index_write], CHANNEL);
    SwapIndex(&frame_index_write);
    pthread_cond_broadcast(&synch_condition); /* after capturing the frame, signal the displaying tasks */
}
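Note that pthread_cond_wait() must be entered with its mutex already locked, and the writer side normally holds the same mutex around the shared-buffer update and the broadcast. A sketch of the capture loop with that locking spelled out (frame_rw being the mutex the display tasks wait on):

while (1)
{
    pthread_mutex_lock(&frame_rw);
    capture.grab();
    capture.retrieve(frame[frame_index_write], CHANNEL);
    SwapIndex(&frame_index_write);
    pthread_cond_broadcast(&synch_condition); // wake both display tasks
    pthread_mutex_unlock(&frame_rw);
}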
The second task gets the captured frame and displays it:
while (1)
{
    pthread_cond_wait(&synch_condition, &frame_rw); /* wait for the capturing func to send a signal */
    if (frame[frame_index_read].data)
    {
        cv::imshow(CAPTURED_IMAGE_WINDOW_NAME, frame[frame_index_read]);
        SwapIndex(&frame_index_read);
        cv::waitKey(1);
    }
    else
    {
        std::cout << "Frame reading error" << std::endl;
    }
}
The third task gets the captured frame and applies edge detection before displaying it:
while (1)
{
    pthread_cond_wait(&synch_condition, &frame_rw); /* wait for the capturing func to send a signal */
    if (frame[frame_index_read_].data)
    {
        cv::cvtColor(frame[frame_index_read_], gray_capture, cv::COLOR_BGR2GRAY);
        cv::blur(gray_capture, detected_edges, cv::Size(3, 3));
        cv::Canny(detected_edges, detected_edges, 0, 100, 3);
        cv::imshow(EDGE_IMAGE_WINDOW_NAME, detected_edges);
        SwapIndex(&frame_index_read_);
        cv::waitKey(1);
    }
    else
    {
        std::cout << "Frame reading error" << std::endl;
    }
}
My code seems to work and the results are as expected, but when I stop the execution from the terminal I get the following output indicating an error:
VIDIOC_DQBUF: Invalid argument
OpenCV Error: Bad argument (Unknown array type) in cvarrToMat, file /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/core/src/matrix.cpp, line 943
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/core/src/matrix.cpp:943: error: (-5) Unknown array type in function cvarrToMat
Is there a way to handle this kind of error?
The complete code is hosted on GitHub.
Any help on this topic would be greatly appreciated.
I am testing out using the maximilian library with JUCE. I am trying to use the maxiSample feature and I have implemented it exactly how the example code says to. Whenever I run the standalone app, I get the error "External Headphones (8): EXC_BAD_ACCESS (code=1, address=0x0)" and it gives me a breakpoint at line 747 of maximilian.cpp. It's not my headphones as it does the same thing with any playback device. Truly at a loss.
I've attached my MainComponent.cpp below. Any advice would be great, thank you!
#include "MainComponent.h"
#include "maximilian.h"
//==============================================================================
MainComponent::MainComponent()
{
// Make sure you set the size of the component after
// you add any child components.
setSize (800, 600);
// Some platforms require permissions to open input channels so request that here
if (juce::RuntimePermissions::isRequired (juce::RuntimePermissions::recordAudio)
&& ! juce::RuntimePermissions::isGranted (juce::RuntimePermissions::recordAudio))
{
juce::RuntimePermissions::request (juce::RuntimePermissions::recordAudio,
[&] (bool granted) { setAudioChannels (granted ? 2 : 0, 2); });
}
else
{
// Specify the number of input and output channels that we want to open
setAudioChannels (2, 2);
}
}
MainComponent::~MainComponent()
{
// This shuts down the audio device and clears the audio source.
shutdownAudio();
sample1.load("/Users/(username)/JuceTestPlugins/maxiSample/Source/kick.wav");
}
//==============================================================================
void MainComponent::prepareToPlay (int samplesPerBlockExpected, double sampleRate)
{
// This function will be called when the audio device is started, or when
// its settings (i.e. sample rate, block size, etc) are changed.
// You can use this function to initialise any resources you might need,
// but be careful - it will be called on the audio thread, not the GUI thread.
// For more details, see the help for AudioProcessor::prepareToPlay()
}
void MainComponent::getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill)
{
// Your audio-processing code goes here!
// For more details, see the help for AudioProcessor::getNextAudioBlock()
// Right now we are not producing any data, in which case we need to clear the buffer
// (to prevent the output of random noise)
//bufferToFill.clearActiveBufferRegion();
for(int sample = 0; sample < bufferToFill.buffer->getNumSamples(); ++sample){
//float sample2 = sample1.
//float wave = tesOsc.sinewave(200);
//double sample2 = sample1.play();
// leftSpeaker[sample] = (0.25 * wave);
// rightSpeaker[sample] = leftSpeaker[sample];
double *output;
output[0] = sample1.play();
output[1] = output[0];
}
}
void MainComponent::releaseResources()
{
// This will be called when the audio device stops, or when it is being
// restarted due to a setting change.
// For more details, see the help for AudioProcessor::releaseResources()
}
//==============================================================================
void MainComponent::paint (juce::Graphics& g)
{
// (Our component is opaque, so we must completely fill the background with a solid colour)
g.fillAll (getLookAndFeel().findColour (juce::ResizableWindow::backgroundColourId));
// You can add your drawing code here!
}
void MainComponent::resized()
{
// This is called when the MainContentComponent is resized.
// If you add any child components, this is where you should
// update their positions.
}
Can't say for sure, but a couple of things catch my attention.
In getNextAudioBlock() you are dereferencing invalid pointers:
double *output;
output[0] = sample1.play();
output[1] = output[0];
The pointer variable output is uninitialised and will probably be filled with garbage or zeros, which makes the program write to invalid memory. This problem is most likely what causes the EXC_BAD_ACCESS. The method is called from the realtime audio thread, so you get a crash on a non-main thread (in this case the thread of External Headphones (8)).
It is also not clear to me what exactly you're trying to do here, so it's hard for me to say how it should be. What I can say is that assigning the result of sample1.play() to a plain double, without ever writing it into the JUCE buffer, looks suspicious.
Normally, when dealing with juce::AudioSourceChannelInfo you would get pointers to the audio buffers like so:
auto** bufferPointer = bufferToFill.buffer->getArrayOfWritePointers();
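From there, a minimal sketch of what the loop could look like, assuming sample1.play() returns one sample per call (as in the maximilian examples):

void MainComponent::getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill)
{
    // Writable pointers into the left and right output channels
    auto* left  = bufferToFill.buffer->getWritePointer (0, bufferToFill.startSample);
    auto* right = bufferToFill.buffer->getWritePointer (1, bufferToFill.startSample);

    for (int sample = 0; sample < bufferToFill.numSamples; ++sample)
    {
        // play() advances the sample's playhead and returns the next value
        auto out = (float) sample1.play();
        left[sample]  = out;
        right[sample] = out;
    }
}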
Further, you are loading a file inside the destructor of MainComponent. This at least is suspicious: why would you load a file during destruction?
MainComponent::~MainComponent()
{
    // This shuts down the audio device and clears the audio source.
    shutdownAudio();
    sample1.load("/Users/(username)/JuceTestPlugins/maxiSample/Source/kick.wav");
}
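Loading it once in the constructor (or in prepareToPlay()) would be the usual approach, e.g.:

MainComponent::MainComponent()
{
    // Load the sample up front so it is ready before audio starts
    sample1.load("/Users/(username)/JuceTestPlugins/maxiSample/Source/kick.wav");
    // ... the existing setSize() / setAudioChannels() code ...
}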
So I have this function called preprocess() where I call cv::cvtColor, cv::GaussianBlur and cv::inRange. The first two work perfectly, but for some reason cv::inRange doesn't. However, calling it in the scope where I call the preprocess() function makes it work. Why is this happening and how can I solve it? This is the code for the preprocess function:
int preprocess(cv::Mat target, int flag, int debug)
{
    if (!debug)
    {
        if (flag == BLUR) cv::GaussianBlur(target, target, cv::Size(3, 3), 3, 0);
        if (flag == TO_HSV) cv::cvtColor(target, target, cv::COLOR_BGR2HSV);
        if (flag == ISOLATE) cv::inRange(target, LOWER_LIMIT, UPPER_LIMIT, target);
        return OK;
    }
    // if debugging is enabled, convert the image based on the flag, then output it
    preprocess(target, flag, 0);
    OUTPUT = target;
    std::this_thread::sleep_for(std::chrono::seconds(5));
    return OK;
}
This is the snippet that causes problems:
// process the image based on the flag
if (!preprocess(map, BLUR, debug)) std::cout << "error while processing image: blur" << std::endl;
if (!preprocess(map, TO_HSV, debug)) std::cout << "error while processing image: to_hsv" << std::endl;
if (!preprocess(map, ISOLATE, debug)) std::cout << "error while processing image: isolate" << std::endl; // this doesn't work
cv::inRange(map, LOWER_LIMIT, UPPER_LIMIT, map); // this works
OUTPUT = map;
std::this_thread::sleep_for(std::chrono::seconds(5));
The debug parameter simply updates my global OUTPUT object so I can display step by step what happens to my image, and the flags are macros that decide what I want to do inside the preprocess() function. map and target are the same cv::Mat object.
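A likely explanation, sketched under the assumption that nothing else touches map concurrently: target is a copy of the caller's cv::Mat header. GaussianBlur and cvtColor keep the same size and type, so they write into the shared pixel buffer and the caller sees the result, while cv::inRange must allocate a new single-channel mask and therefore only rebinds the local target header. Passing the header by reference would make that rebinding visible to the caller:

// hypothetical fix: take the Mat header by reference so that operations
// which reallocate the output (like cv::inRange) update the caller's map too
int preprocess(cv::Mat &target, int flag, int debug);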
I have two tasks (threads), each running on a different processor (core). The first captures an image repeatedly using OpenCV's VideoCapture.
I only used these two lines for the capture:
cv::Mat frame;
capture.read(frame);
Now I want to display the captured image using the second task. After executing the imshow function within the second task's code:
cv::imshow("Display window", frame);
I got the following output error :
OpenCV Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/highgui/src/window.cpp, line 304
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/highgui/src/window.cpp:304: error: (-215) size.width>0 && size.height>0 in function imshow
So, how can I avoid this error?
The complete code is hosted on GitHub.
cv::VideoCapture::read() returns a bool indicating whether the read was successful.
You are passing an empty frame to cv::imshow(). Try checking if the read was successful before trying to show it.
cv::Mat frame;
if (capture.read(frame))
{
    cv::imshow("Display window", frame); // imshow also needs a window name
}
EDIT
OP posted a link to the code. In his program frame is declared as a global variable. In line 120 capture.read(frame) writes into the frame, and in line 140 imshow(frame) reads from the frame without using a mutex - that's a data race. Correct code should be along the lines of:
#include <mutex>
#include <opencv2/opencv.hpp>

std::mutex mutex;
cv::Mat frame;

// in the capture thread
{
    mutex.lock();
    capture.read(frame);
    mutex.unlock();
}

// in the display thread
{
    mutex.lock();
    cv::imshow("Display window", frame);
    mutex.unlock();
}
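The same code with std::lock_guard is a bit safer, since the mutex is released automatically even if an exception is thrown:

{
    std::lock_guard<std::mutex> lock(mutex);
    capture.read(frame);
}

{
    std::lock_guard<std::mutex> lock(mutex);
    cv::imshow("Display window", frame);
}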
The problem with your code is that there is a data race. Imagine the display thread runs first: it locks the frame and tries to display it before anything has been read, so you can see the problem.
If you want a synchronized solution, you can use a pthread condition variable: the display function waits until an image has been read and the read function signals it. Otherwise you are going to have an active wait.
// in the declaration part
// Declaration of thread condition variable
pthread_cond_t cond1 = PTHREAD_COND_INITIALIZER;

// in the display function
ptask DisplyingImageTask()
{
    int task_job = 0;
    while (1)
    {
        std::cout << "The job " << task_job << " of Task T" << ptask_get_index()
                  << " is running on core " << sched_getcpu() << " at time : "
                  << ptask_gettime(MILLI) << std::endl;
        cv::namedWindow("Display window", cv::WINDOW_AUTOSIZE);
        pthread_mutex_lock(&frame_rw);
        // wait for the read function to send a signal
        pthread_cond_wait(&cond1, &frame_rw);
        cv::imshow("Display window", frame);
        cv::waitKey(1);
        pthread_mutex_unlock(&frame_rw);
        ptask_wait_for_period();
        task_job++;
    }
}

// in the Read function
ptask CapturingImageTask()
{
    int task_job = 0;
    while (1)
    {
        std::cout << "The job " << task_job << " of Task T" << ptask_get_index()
                  << " is running on core " << sched_getcpu() << " at time : "
                  << ptask_gettime(MILLI) << std::endl;
        pthread_mutex_lock(&frame_rw);
        capture.read(frame);
        // after capturing the frame, signal the display function;
        // everything should be synchronized as you want
        pthread_cond_signal(&cond1);
        pthread_mutex_unlock(&frame_rw);
        ptask_wait_for_period();
        task_job++;
    }
}
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture cap(0); // open the default camera
    while (1) {
        Mat frame;
        cap >> frame;
        imshow("frame", frame);
        waitKey(); // blocks until a key is pressed
    }
}
You can try this. If you call waitKey() with no argument, the code waits for a key press before grabbing and showing the next frame.
As others have mentioned, try using a mutex.
You can also check a condition on the cv::Mat before trying to display it:
if (frame.data)
    imshow("window", frame);
This checks that the frame to be displayed has data, and thus avoids the error.
Again, this condition only avoids the imshow error; it does not solve the original problem which, as mentioned in other answers, is a data race between the two threads.
Context:
I am building a recorder for capturing video and audio in separate threads (using Boost thread groups) using FFmpeg 2.8.6 on Ubuntu 16.04. I followed the demuxing_decoding example here: https://www.ffmpeg.org/doxygen/2.8/demuxing_decoding_8c-example.html
Video capture specifics:
I am reading H264 off a Logitech C920 webcam and writing the video to a raw file. The issue I notice with the video is that there seems to be a build-up of artifacts across frames until a particular frame resets. Here are my frame-grabbing and decoding functions:
// Used for injecting decoding functions for different media types, allowing
// for a generic decode loop
typedef std::function<int(AVPacket*, int*, int)> PacketDecoder;

/**
 * Decodes a video packet.
 * If the decoding operation is successful, returns the number of bytes decoded,
 * else returns the result of the decoding process from ffmpeg
 */
int decode_video_packet(AVPacket *packet,
                        int *got_frame,
                        int cached){
    int ret = 0;
    int decoded = packet->size;
    *got_frame = 0;

    // Decode video frame
    ret = avcodec_decode_video2(video_decode_context,
                                video_frame, got_frame, packet);
    if (ret < 0) {
        // FFmpeg users should use av_err2str
        char errbuf[128];
        av_strerror(ret, errbuf, sizeof(errbuf));
        std::cerr << "Error decoding video frame " << errbuf << std::endl;
        decoded = ret;
    } else {
        if (*got_frame) {
            video_frame->pts = av_frame_get_best_effort_timestamp(video_frame);

            // Write to log file
            AVRational *time_base = &video_decode_context->time_base;
            log_frame(video_frame, time_base,
                      video_frame->coded_picture_number, video_log_stream);

#if( DEBUG )
            std::cout << "Video frame " << ( cached ? "(cached)" : "" )
                      << " coded:" << video_frame->coded_picture_number
                      << " pts:" << video_frame->pts << std::endl;
#endif

            /* Copy decoded frame to destination buffer:
             * this is required since rawvideo expects non-aligned data */
            av_image_copy(video_dest_attr.video_destination_data,
                          video_dest_attr.video_destination_linesize,
                          (const uint8_t **)(video_frame->data),
                          video_frame->linesize,
                          video_decode_context->pix_fmt,
                          video_decode_context->width,
                          video_decode_context->height);

            // Write to rawvideo file
            fwrite(video_dest_attr.video_destination_data[0],
                   1,
                   video_dest_attr.video_destination_bufsize,
                   video_out_file);

            // Unref the refcounted frame
            av_frame_unref(video_frame);
        }
    }
    return decoded;
}
/**
 * Grabs frames in a loop and decodes them using the specified decoding function
 */
int process_frames(AVFormatContext *context,
                   PacketDecoder packet_decoder) {
    int ret = 0;
    int got_frame;
    AVPacket packet;

    // Initialize packet, set data to NULL, let the demuxer fill it
    av_init_packet(&packet);
    packet.data = NULL;
    packet.size = 0;

    // read frames from the file
    for (;;) {
        ret = av_read_frame(context, &packet);
        if (ret < 0) {
            if (ret == AVERROR(EAGAIN)) {
                continue;
            } else {
                break;
            }
        }

        // Convert timing fields to the decoder timebase
        unsigned int stream_index = packet.stream_index;
        av_packet_rescale_ts(&packet,
                             context->streams[stream_index]->time_base,
                             context->streams[stream_index]->codec->time_base);

        AVPacket orig_packet = packet;
        do {
            ret = packet_decoder(&packet, &got_frame, 0);
            if (ret < 0) {
                break;
            }
            packet.data += ret;
            packet.size -= ret;
        } while (packet.size > 0);
        av_free_packet(&orig_packet);

        if (stop_recording == true) {
            break;
        }
    }

    // Flush cached frames
    std::cout << "Flushing frames" << std::endl;
    packet.data = NULL;
    packet.size = 0;
    do {
        packet_decoder(&packet, &got_frame, 1);
    } while (got_frame);

    av_log(0, AV_LOG_INFO, "Done processing frames\n");
    return ret;
}
Questions:
How do I go about debugging the underlying issue?
Is it possible that running the decoding code in a thread other than the one in which the decoding context was opened is causing the problem?
Am I doing something wrong in the decoding code?
Things I have tried/found:
I found this thread that is about the same problem here: FFMPEG decoding artifacts between keyframes
(I cannot post samples of my corrupted frames due to privacy issues, but the image linked to in that question depicts the same issue I have)
However, the answer to the question is posted by the OP without specific details about how the issue was fixed. The OP only mentions that he wasn't 'preserving the packets correctly', but nothing about what was wrong or how to fix it. I do not have enough reputation to post a comment seeking clarification.
I was initially passing the packet into the decoding function by value, but switched to passing by pointer on the off chance that the packet freeing was being done incorrectly.
I found another question about debugging decoding issues, but couldn't find anything conclusive: How is video decoding corruption debugged?
I'd appreciate any insight. Thanks a lot!
[EDIT] In response to Ronald's answer, I am adding a little more information that wouldn't fit in a comment:
I am only calling decode_video_packet() from the thread processing video frames; the other thread processing audio frames calls a similar decode_audio_packet() function. So only one thread calls the function. I should mention that I have set the thread_count in the decoding context to 1, failing which I would get a segfault in malloc.c while flushing the cached frames.
I can see this being a problem if process_frames and the frame decoder function were run on separate threads, which is not the case. Is there a specific reason why it would matter whether the freeing is done within the function or after it returns? I believe the decoding function is passed a copy of the original packet because multiple decode calls may be required for an audio packet, in case the decoder doesn't decode the entire audio packet.
A general problem is that the corruption does not occur all the time. I can debug better if it is deterministic. Otherwise, I can't even say if a solution works or not.
A few things to check:
are you running multiple threads that are calling decode_video_packet()? If you are: don't do that! FFmpeg has built-in support for multi-threaded decoding, and you should let FFmpeg do threading internally and transparently; see the sketch after this list.
you are calling av_free_packet() right after calling the frame decoder function, but at that point it may not yet have had a chance to copy the contents. You should probably let decode_video_packet() free the packet instead, after calling avcodec_decode_video2().
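To illustrate the first point, FFmpeg's internal threading is configured on the codec context before it is opened; a minimal sketch (decoder standing in for the AVCodec you found for the stream):

// Let FFmpeg run its own decoding thread pool instead of your threads
video_decode_context->thread_count = 0; // 0 = pick automatically
video_decode_context->thread_type = FF_THREAD_FRAME | FF_THREAD_SLICE;
avcodec_open2(video_decode_context, decoder, NULL);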
General debugging advice:
run it without any threading and see if that works;
if it does, and with threading it fails, use thread debuggers such as tsan or helgrind to help in finding race conditions that point to your code.
it can also help to know whether the output you're getting is reproducible (this suggests a non-threading-related bug in your code) or changes from one run to the other (this suggests a race condition in your code).
And yes, the periodic clean-ups are because of keyframes.
I'm using OpenCV 3. Grabbing a frame using VideoCapture with an IP camera blocks if the camera gets disconnected from the network or there is an issue with a frame.
I first check videoCapture.isOpened(). If it is open, I tried these methods, but nothing seems to work:
1) grabber >> frame
if (grabber.isOpened()) {
    grabber >> frame;
    // DO SOMETHING WITH FRAME
}
2) grab/retrieve
if (grabber.isOpened()) {
    if (!grabber.grab()) {
        cout << "failed to grab from camera" << endl;
    } else {
        if (grabber.retrieve(frame, 0)) {
            // DO SOMETHING WITH FRAME
        } else {
            // SHOW ERROR
        }
    }
}
3) read
if (grabber.isOpened()) {
    if (!grabber.read(frame)) {
        cout << "Unable to retrieve frame from video stream." << endl;
    }
    else {
        // DO SOMETHING WITH FRAME
    }
}
The video stream gets stuck at some point when grabbing a frame with all of the previous options; each one blocks but doesn't exit or return any error.
Do you know if there is a way to handle or solve this? Maybe some validations, try/catch or timer?
This issue is solved by this merge, but unfortunately opencv_ffmpeg.dll has not been released yet.
You can find an updated opencv_ffmpeg.dll here and test it.
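In the meantime, one workaround is to run the blocking read on its own thread and enforce a timeout yourself. A sketch (readWithTimeout is a hypothetical helper, not part of OpenCV; note the caveat in the comments):

#include <chrono>
#include <future>
#include <thread>
#include <opencv2/opencv.hpp>

// Returns false if the camera did not deliver a frame within timeout_ms.
// Caveat: on timeout the detached worker is still stuck inside read(), so
// grabber and frame must outlive it; treat a timeout as "reopen the camera".
bool readWithTimeout(cv::VideoCapture &grabber, cv::Mat &frame, int timeout_ms)
{
    std::packaged_task<bool()> task([&grabber, &frame] { return grabber.read(frame); });
    auto result = task.get_future();
    std::thread(std::move(task)).detach(); // detached so a stuck read() cannot block us
    if (result.wait_for(std::chrono::milliseconds(timeout_ms)) != std::future_status::ready)
        return false;
    return result.get();
}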