I want to build a basic video player with OpenCV in C++. Users should be able to move freely between frames of the video (typically with a slider). So I wrote and tested two methods, and both have problems.
My first solution:
int frameIdx = 0; /// This is accessed by another thread
cv::VideoCapture cap("video.mp4");
while (true) {
    cv::Mat frame;
    cap.set(cv::CAP_PROP_POS_FRAMES, frameIdx);
    cap.read(frame);
    showFrameToWindow(frame);
    frameIdx++;
}
My second solution:
int frameIdx = 0; /// This is accessed by another thread
std::vector<cv::Mat> buffer;
cv::VideoCapture cap("video.mp4");
while (true) {
    cv::Mat frame;
    cap >> frame;
    if (frame.empty()) break;
    buffer.push_back(frame);
}
while (true) {
    cv::Mat frame = buffer[frameIdx].clone();
    showFrameToWindow(frame);
    frameIdx++;
}
The first solution is too slow. I think there is overhead in cap.read(cv::Mat); on my computer it cannot play video at more than 20~30 fps.
The second solution is satisfactory for speed, but it requires a lot of memory.
So I imagine: what if I change the buffer from std::vector to std::queue, limit its size, and update it in another thread while playing the video?
I'm not sure it's going to work, and I wonder whether there is a common algorithm for seeking in a large video file. Any comments will save me. Thanks.
I developed my second solution further: a limited-size frame buffer handled by another thread.
/// These variables are accessed from all threads.
#define BUF_MAX 64
bool s_videoFileLoaded;
double s_videoFileHz; // milliseconds per frame
int s_videoFileFrameCount;
bool s_seek;
int s_seekingIndex; // index into the video file, bound to the UI
std::queue<cv::Mat> s_frameBuffer;
std::queue<int> s_indexBuffer;
std::mutex s_mtx;
///
/// !!! When the seek slider in the UI is moved manually,
/// !!! #s_seek is set to true.
///
/// Start of main thread.
cv::VideoCapture cap("video.mp4");
s_videoFileLoaded = true;
s_videoFileFrameCount = cap.get(cv::CAP_PROP_FRAME_COUNT);
s_videoFileHz = 1000.0 / cap.get(cv::CAP_PROP_FPS);
s_seekingIndex = 0;
runThread( std::bind(&_videoHandlingThreadLoop, cap) );
Timer timer;
while (s_videoFileLoaded) {
    timer.restart();
    {
        std::lock_guard<std::mutex> lock(s_mtx);
        if (s_frameBuffer.empty())
            continue;
        cv::Mat frame = s_frameBuffer.front();
        s_seekingIndex = s_indexBuffer.front();
        s_frameBuffer.pop();
        s_indexBuffer.pop();
        showFrameToWindow(frame);
    }
    int remain = s_videoFileHz - timer.elapsed();
    if (remain > 0) Sleep(remain);
}
void _videoHandlingThreadLoop(cv::VideoCapture& cap) {
    s_seek = true;
    int frameIndex = -1;
    while (s_videoFileLoaded) {
        if (s_frameBuffer.size() > BUF_MAX) {
            Sleep(s_videoFileHz * BUF_MAX);
            continue;
        }
        // Check whether it's time to seek.
        if (s_seek) {
            std::lock_guard<std::mutex> lock(s_mtx);
            // Clear the buffers.
            s_frameBuffer = std::queue<cv::Mat>();
            s_indexBuffer = std::queue<int>();
            frameIndex = s_seekingIndex;
            cap.set(cv::CAP_PROP_POS_FRAMES, frameIndex);
            s_seek = false;
        }
        // Read a frame from the file and push it to the buffer.
        cv::Mat frame;
        if (cap.read(frame)) {
            std::lock_guard<std::mutex> lock(s_mtx);
            s_frameBuffer.push(frame);
            s_indexBuffer.push(frameIndex);
            frameIndex++;
        }
        // Check whether we reached the end of the file.
        if (frameIndex >= s_videoFileFrameCount) {
            s_seekingIndex = 0;
            s_seek = true;
        }
    }
}
This worked: I could play the video file at a stable playback speed. But there is still some lag when seeking manually.
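As a side note, the Sleep-based polling between the two threads above can also be replaced with condition variables, which remove most of the polling latency. Below is a minimal, self-contained sketch of such a bounded queue; it is an illustration, not the code above: int stands in for cv::Mat, and the capacity plays the role of BUF_MAX.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Bounded frame queue: the producer blocks when full, the consumer
// blocks when empty. No Sleep-based polling is needed.
struct FrameQueue {
    std::queue<int> q;                 // int stands in for cv::Mat
    std::mutex mtx;
    std::condition_variable notFull, notEmpty;
    size_t cap;
    bool done = false;

    explicit FrameQueue(size_t c) : cap(c) {}

    void push(int frame) {
        std::unique_lock<std::mutex> lk(mtx);
        notFull.wait(lk, [&]{ return q.size() < cap || done; });
        if (done) return;
        q.push(frame);
        notEmpty.notify_one();
    }

    // Returns false once the queue is drained and stop() was called.
    bool pop(int &frame) {
        std::unique_lock<std::mutex> lk(mtx);
        notEmpty.wait(lk, [&]{ return !q.empty() || done; });
        if (q.empty()) return false;
        frame = q.front();
        q.pop();
        notFull.notify_one();
        return true;
    }

    void stop() {
        std::lock_guard<std::mutex> lk(mtx);
        done = true;
        notFull.notify_all();
        notEmpty.notify_all();
    }
};
```

On a seek, the reader thread would call stop()/recreate (or add a clear() under the same mutex) before repositioning the capture, just as the queues are cleared in the code above.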
Related
By using the following code, I can successfully capture a frame in a separate thread and return it to main(). But now I cannot write to the frame/cv::Mat (contour in main()) for display. If I do not use a separate thread for the webcam, I can successfully write to the frame/cv::Mat. What am I doing wrong?
VideoCapture cap(0);

class Camera
{
public:
    Camera(void);
    ~Camera(void);
    Mat captureVideo(void);
private:
    Mat frame;
    double dWidth;
    double dHeight;
    double fps;
};

Camera::Camera(void) {
    int isrunning = 0;
    usleep(10);
    if (!cap.open(2))
    {
        cout << "Cannot open the video camera.\n" << endl;
    }
    else
    {
        isrunning = 1;
    }
    if (isrunning == 0) {
        this->~Camera();
    }
    cap >> frame;
}

Camera::~Camera(void) {
    cap.release();
}

Mat Camera::captureVideo(void) {
    cap >> frame;
    return frame;
}
Camera cam1;
const int frameBuffer = 20; // frame buffer limit
std::vector<cv::Mat> frameStack;
int stopSig = 0; // global stop signal

void grabFrame(void) {
    Mat frame;
    ::frameStack.clear();
    while (!::stopSig) {
        frame = ::cam1.captureVideo();
        // 1. Remove one frame from the back if the stack has more than 2 frames.
        if (::frameStack.size() > 2) {
            ::frameStack.pop_back();
        }
        // 2. Add a frame at the front of the stack if the stack is not full.
        if (::frameStack.size() < ::frameBuffer) {
            ::frameStack.push_back(frame); // put the new frame on the stack in RAM
        } else {
            ::frameStack.clear();
        }
    }
    return;
}
int main(int argc, char* argv[])
{
    Mat fr, outImg; // captured single frames
    Mat contour;    // video stream showing contours of objects
    ::frameStack.clear();
    thread t1(grabFrame);
    cv::namedWindow("MyWindow", 1);
    while (1) {
        if (::frameStack.size() >= 2) {
            contour = ::frameStack.back();
            circle(contour, Point(150,150), 50, Scalar(0,255,255), cv::FILLED, 8); // PROBLEM -> NO EFFECT
            putText(contour, "some text", Point(100,100), FONT_HERSHEY_DUPLEX, 1, Scalar(0,143,143), 2); // PROBLEM -> NO EFFECT
            imshow("Contour Video", contour);
        }
        if (waitKey(1) == 27)
        {
            ::stopSig = 1; // signal the thread to end its run
            frameStack.clear();
            break;
        }
    }
    t1.join();
    return 0;
}
In your example, you are sharing the stopSig, frameStack and frameBuffer variables between the two threads.
When you use shared memory between threads, you have to properly synchronize access to it. One reason synchronization is needed is to prevent two threads from modifying a variable at the same time. Another reason is visibility: a write to a variable by one thread can be cached inside the CPU cache, and a thread running on another CPU core won't see the change.
Perhaps the easiest way to synchronize memory access between threads is with std::mutex. I'm not going to post the whole code here, but you can read about mutexes in C++ online.
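As a rough illustration of what that synchronization could look like for this case (illustrative only: int stands in for cv::Mat, and the 20-frame limit mirrors the question's frameBuffer constant; with real cv::Mat you would also clone() under the lock, since Mats share their pixel data, which is likely why drawing on the frame "has no effect"):

```cpp
#include <atomic>
#include <cassert>
#include <mutex>
#include <thread>
#include <vector>

std::mutex frameMutex;        // guards frameStack
std::vector<int> frameStack;  // int stands in for cv::Mat
std::atomic<int> stopSig{0};  // atomic: safe to touch without the mutex

void grabFrame() {
    int frame = 0;
    while (!stopSig.load()) {
        ++frame; // stand-in for cam1.captureVideo()
        std::lock_guard<std::mutex> lock(frameMutex);
        if (frameStack.size() >= 20)
            frameStack.erase(frameStack.begin()); // drop the oldest frame
        frameStack.push_back(frame);
    }
}

// Called from main(): take the lock, copy the newest frame, then release
// the lock before drawing on it, so the grabber never touches our copy.
bool latestFrame(int &out) {
    std::lock_guard<std::mutex> lock(frameMutex);
    if (frameStack.empty()) return false;
    out = frameStack.back(); // with cv::Mat this should be .back().clone()
    return true;
}
```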
I'm implementing a 3D VR player, and for the video decoding I'm using libVLC.
To render the video I have a texture buffer and a memory buffer; when VLC decodes a frame, the data is copied to the texture by the main loop.
I've tested two different mechanisms for locking the memory buffer while VLC is dumping a frame. First I tried a mutex, as the examples show, but the performance was awful. I also tested a lock-free mechanism using atomic operations; the performance is not perfect, but better.
The best results come from not locking the buffer at all: it plays very smoothly, but sometimes the main loop and the VLC loop get desynchronized, and then tearing and stuttering become noticeable.
So my question is: what is the best approach to locking the buffer? Is there any way to avoid losing performance?
Here are the important code parts:
-Video context struct
typedef struct _videoContext
{
    libvlc_instance_t* instance;
    libvlc_media_t* media;
    libvlc_media_player_t* player;
    unsigned char* pixeldata;
    unsigned char currentFrame;
    int width;
    int height;
    int duration;
    int time;
    int volume;
    bool finished;
    std::atomic<bool> lock;
} videoContext;
-Video event functions
static void *lock(void *data, void **p_pixels)
{
    videoContext* context = (videoContext*)data;
    bool expected = false;
    while (!context->lock.compare_exchange_strong(expected, true))
    {
        expected = false; // compare_exchange_strong overwrites 'expected' on failure
        Sleep(0);
    }
    p_pixels[0] = context->pixeldata;
    return NULL;
}

static void unlock(void *data, void *id, void *const *p_pixels)
{
    videoContext* context = (videoContext*)data;
    context->time = libvlc_media_player_get_time(context->player);
    context->lock.store(false);
    context->currentFrame++;
}
-Main loop function
bool exchangeExpected = false; // must be reset to false before each attempt
if (vctx->currentFrame != currentFrame && vctx->lock.compare_exchange_strong(exchangeExpected, true))
{
    currentFrame = vctx->currentFrame;
    Texture::setData(&videoTexture, vctx->pixeldata, vctx->width, vctx->height);
    vctx->lock.store(false);
}
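Not from the question, but one middle ground between a blocking mutex and a hand-rolled atomic spinlock is std::mutex::try_lock on the render side: if the decoder currently holds the buffer, the renderer simply keeps the previous texture for one more frame instead of waiting. A minimal sketch (CopyFn is a stand-in for the Texture::setData call):

```cpp
#include <cassert>
#include <mutex>
#include <thread>

// Render-side helper: copy the shared buffer only if the decoder is not
// currently writing to it; otherwise skip this frame and keep rendering
// the last texture. The render loop never blocks.
template <typename CopyFn>
bool tryCopyFrame(std::mutex &bufferMutex, CopyFn copy) {
    if (!bufferMutex.try_lock())
        return false; // decoder is mid-frame: reuse the previous texture
    copy();           // e.g. Texture::setData(...)
    bufferMutex.unlock();
    return true;
}
```

The decoder's lock/unlock callbacks would take the same mutex with a plain lock_guard, so only the render side ever skips.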
Please help me handle this problem:
OpenCV Error: Insufficient memory (Failed to allocate 921604 bytes) in
unknown function, file
........\ocv\opencv\modules\core\src\alloc.cpp, line 52
One of my methods uses cv::Mat::clone() and pointers. The code is below: there is a timer that fires every 100 ms, and in the timer event I call this method:
void DialogApplication::filterhijau(const Mat &image, Mat &result) {
    cv::Mat resultfilter = image.clone();
    int nlhijau = image.rows;
    int nchijau = image.cols * image.channels();
    for (int j = 0; j < nlhijau; j++) {
        uchar *data2 = resultfilter.ptr<uchar>(j); // address of each row in the result
        for (int i = 0; i < nchijau; i += 3) {     // step by 3: one BGR pixel per iteration
            *data2++ = 0;   // B element
            *data2++ = 255; // G element
            *data2++ = 0;   // R element
        }
        // free(data2); // I added this line but the program hung
    }
    cv::addWeighted(resultfilter, 0.3, image, 0.5, 0, resultfilter);
    result = resultfilter;
}
The clone() method of cv::Mat performs a deep copy of the data. The problem is that each call to filterhijau() allocates a new image, so after hundreds of calls your application will have occupied hundreds of MB (if not GB), eventually throwing the Insufficient Memory error.
You need to redesign your current approach so that it uses less RAM.
I faced this error before and solved it by reducing the size of the images while reading them, sacrificing some resolution.
It was something like this in Python:
import cv2

# Open the video
cap = cv2.VideoCapture(videoName + '.mp4')
images = []  # list to collect the frames
i = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # Reduce the resolution to save memory
    frame = cv2.resize(frame, (900, 900))
    # Append the frame to the list
    images.append(frame)
    i += 1
cap.release()
N.B. I know it's not the optimal solution to the problem, but it was enough for me.
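The saving is easy to estimate: a decoded BGR frame costs width × height × 3 bytes, so a 1920×1080 frame is about 6.2 MB while a 900×900 frame is about 2.4 MB, and buffering a whole clip multiplies that by the frame count. A tiny helper for the arithmetic:

```cpp
#include <cassert>
#include <cstdint>

// Approximate memory cost of one decoded BGR frame (3 bytes per pixel).
std::uint64_t frameBytes(int width, int height) {
    return static_cast<std::uint64_t>(width) * height * 3;
}
```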
This code snippet is supposed to save part of a video whose range is defined by start and end. There is an array of structures (data[i]) that holds the starting and ending frame of each video shot in the original video. There are a total of 8 shots.
for (int i = 0; i < finalCount - 1; ++i) {
    capture = cvCaptureFromAVI("Stats\\Shots\\Cricketc1.avi");
    assert(capture);
    int frame_number = 0;
    int start = data[i].start_frame;
    int end = data[i].end_frame;
    char shotname[100];
    strcpy_s(shotname, "shot_");
    char shot_id[30];
    _itoa_s(data[i].shot_no, shot_id, 10);
    strcat_s(shotname, shot_id);
    strcat_s(shotname, ".avi");
    IplImage* image = NULL;
    CvVideoWriter* writer = NULL;
    writer = cvCreateVideoWriter(shotname, CV_FOURCC('i','Y','U','V'), fps, cvSize(width, height), 1);
    assert(writer);
    while (frame_number >= start && frame_number < end) {
        image = cvQueryFrame(capture);
        assert(image);
        cvWriteFrame(writer, image);
    }
    cvReleaseImage(&image);
    cvReleaseVideoWriter(&writer);
    cvReleaseCapture(&capture);
    cout << shotname << " saved ..." << endl;
}
After running the program 8 video files are created that have a size of 6kb and do not run. I have tried various codecs like divx, mjpg, mpg2, iyuv etc but all give the same result.
In your while loop, frame_number is never incremented. Since you say the program actually executes and creates the files, this means nothing in your while loop ever runs; otherwise you'd be stuck in an infinite loop, because frame_number would always be 0.
I would advise you to initialize frame_number to start instead of 0. There's also no reason for it to exist outside the scope of the loop, so a for loop seems more appropriate:
int start = data[i].start_frame;
int end = data[i].end_frame;
...
for (int frame_number = start; frame_number < end; frame_number++) {
    image = cvQueryFrame(capture);
    assert(image);
    cvWriteFrame(writer, image);
}
If Gunther Fox's answer doesn't help, try a different codec. It's very strange, but in my situation IYUV did not work at all while some other codecs worked fine, yet I couldn't read the files while debugging... For me, MS Video and Radius Cinepak always work fine (writing and reading), IYUV does not work at all, and other codecs write and read, but not while debugging.
I am having an issue with ffmpeg again. I'm a newbie with ffmpeg, and I can't find a good, up-to-date tutorial...
This time, when I play a video with ffmpeg, it plays too fast; ffmpeg is ignoring the FPS. I don't want to handle that with a fixed thread sleep, because the videos have different FPS values.
I created a thread; here is the loop:
AVPacket framepacket;
while (av_read_frame(formatContext, &framepacket) >= 0) {
    pausecontrol.lock();
    // Is it a video or an audio frame?
    if (framepacket.stream_index == gotVideoCodec) {
        int framereaded;
        // Video? OK.
        avcodec_decode_video2(videoCodecContext, videoFrame, &framereaded, &framepacket);
        // Did we get a complete frame?
        if (framereaded && doit) {
            AVRational millisecondbase = {1, 1000};
            int f_number = framepacket.dts;
            int f_time = av_rescale_q(framepacket.dts, formatContext->streams[gotVideoCodec]->time_base, millisecondbase);
            currentTime = f_time;
            currentFrameNumber = f_number;
            int stWidth = videoCodecContext->width;
            int stHeight = videoCodecContext->height;
            SwsContext *ctx = sws_getContext(stWidth, stHeight, videoCodecContext->pix_fmt, stWidth,
                                             stHeight, PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
            if (ctx != 0) {
                sws_scale(ctx, videoFrame->data, videoFrame->linesize, 0, videoCodecContext->height, videoFrameRGB->data, videoFrameRGB->linesize);
                QImage framecapsule = QImage(stWidth, stHeight, QImage::Format_RGB888);
                for (int y = 0; y < stHeight; y++) {
                    memcpy(framecapsule.scanLine(y), videoFrameRGB->data[0] + y * videoFrameRGB->linesize[0], stWidth * 3);
                }
                emit newFrameReady(framecapsule);
                sws_freeContext(ctx);
            }
        }
    }
    if (framepacket.stream_index == gotAudioCodec) {
        // Audio? OK.
    }
    pausecontrol.unlock();
    av_free_packet(&framepacket);
}
Any ideas?
The simplest solution is to use a delay based on the FPS value:
firstFrame = true;
for (;;)
{
    // decoding, color conversion, etc.
    if (!firstFrame)
    {
        const double frameDuration = 1000.0 / frameRate;
        duration_t actualDelay = get_local_time() - lastTime;
        if (frameDuration > actualDelay)
            sleep(frameDuration - actualDelay);
    }
    else
        firstFrame = false;
    emit newFrameReady(framecapsule);
    lastTime = get_local_time();
}
get_local_time() and duration_t are abstract.
A more accurate method is to use the time stamp of each frame, but the idea is the same.
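A concrete version of the pseudocode above, using std::chrono so get_local_time() and duration_t become real types (the decode/convert/emit step is left as a comment, since it depends on the caller's pipeline):

```cpp
#include <cassert>
#include <chrono>
#include <thread>

// Emit frameCount frames at roughly frameRate frames per second by
// sleeping away whatever is left of each frame period.
void paceFrames(double frameRate, int frameCount) {
    using clock = std::chrono::steady_clock;
    const auto frameDuration =
        std::chrono::duration<double, std::milli>(1000.0 / frameRate);
    auto lastTime = clock::now();
    for (int i = 0; i < frameCount; ++i) {
        // ... decode, color conversion, emit newFrameReady(...) here ...
        auto elapsed = clock::now() - lastTime;
        if (elapsed < frameDuration)
            std::this_thread::sleep_for(frameDuration - elapsed);
        lastTime = clock::now();
    }
}
```

Because the sleep subtracts the time already spent decoding, slow frames do not accumulate drift the way a fixed sleep would.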