In my program, I'm reading from a webcam or a video file via OpenCV and displaying the frames via Qt.
I get the fps from the video properties and set the timer accordingly.
I have no problem reading the videos, and the fps calibration is good (since the webcam reports 0 fps, I set it to 30).
But when I record the images, I set the output video's fps to the same as the original video; yet when I read the result in VLC or even Windows Media Player, the video is accelerated.
The most curious thing is that when I play the recorded video in my program, the fps is good and the video isn't accelerated.
Here's how I do it:
Constructor()
{
// Initializing the video resources
cv::VideoCapture capture;
cv::VideoWriter writer;
cv::Mat frame;
int fps;
if (webcam)
{
capture.open(0);
fps = 30;
}
else
{
capture.open(inputFilePath);
fps = static_cast<int>(capture.get(CV_CAP_PROP_FPS));
}
// .avi; note that frame is still empty here, so frame.size() is 0x0:
// the writer must be opened with the size of an actual captured frame
writer = cv::VideoWriter(outputFilePath, CV_FOURCC('M', 'J', 'P', 'G'), fps, frame.size());
// Defining the refresh timer
QTimer *timer = new QTimer();
connect(timer, SIGNAL(timeout()), this, SLOT(updateFeed()));
timer->start(1000 / fps);
}
updateFeed()
{
// ... display the image in a QLabel ... at a reasonable speed.
writer.write(frame);
}
Now, is there anything I'm doing wrong?
Thanks
I'm using OpenCV with ffmpeg support to read a RTSP stream coming from an IP camera and then to write the frames to a video. The problem is that the frame size is 2816x2816 at 20 fps i.e. there's a lot of data coming in.
I noticed that there was a significant delay in the stream, so I set the buffer size of the cv::VideoCapture object to 1, because I thought the frames might just be getting stuck in the buffer instead of being grabbed and processed. This, however, just caused frames to be dropped instead.
My next move was to experiment a bit with the frame size/fps and the encoding of the video that I'm writing. All of those things helped to improve the situation, but in the long run I still have to use a frame size of 2816x2816 and support up to 20 fps, so sadly I can't set them lower.
That's where my question comes in: given the fact that the camera stream is going to be either h264 or h265, which one would be read faster by the cv::VideoCapture object? And how should I encode the video I'm writing in order to minimize the time spent decoding/encoding frames?
Here's the code I'm using, for reference:
using namespace cv;
int main(int argc, char** argv)
{
VideoCapture cap;
cap.set(CAP_PROP_BUFFERSIZE, 1); // internal buffer will now store only one frame
if (!cap.open("rtsp://admin:admin#1.1.1.1:554/stream")) {
return -1;
}
VideoWriter videoWr;
Mat frame;
cap >> frame;
//int fourcc = cv::VideoWriter::fourcc('x', '2', '6', '4'); //I was trying different options
int fourcc = cv::VideoWriter::fourcc('M', 'J', 'P', 'G');
videoWr = cv::VideoWriter("test_video.avi", fourcc, 20, frame.size(), true);
namedWindow("test", WINDOW_NORMAL);
cv::resizeWindow("test", 1024, 768);
for (;;)
{
cap >> frame;
if (frame.empty()) break; // end of video stream
imshow("test", frame);
if (waitKey(10) == 27) break;
videoWr << frame;
}
return 0;
}
I am struggling with writing a video player that uses OpenCV to read frames from a video and display them in a QWidget.
This is my code:
// video caputre is opened here
...
void VideoPlayer::run()
{
int sleep = static_cast<int>(1000 / video_capture_.get(CV_CAP_PROP_FPS));
forever
{
QScopedPointer<cv::Mat> frame(new cv::Mat);
if(!video_capture_.read(*frame))
break;
cv::resize(*frame, *frame, cv::Size(640, 360), 0, 0, cv::INTER_CUBIC);
cv::cvtColor(*frame, *frame, CV_BGR2RGB);
// pass the Mat's stride explicitly; rows are not always 4-byte aligned
QImage image(frame->data, frame->cols, frame->rows, static_cast<int>(frame->step), QImage::Format_RGB888);
emit signalFrame(image.copy()); // deep copy: the Mat backing 'image' dies at the end of this iteration
msleep(sleep); // wait before we read another frame
}
}
and on the QWidget side, I am just taking this image and drawing it in paintEvent.
It just looks to me like the sleep parameter doesn't play an important role here. However much I decrease it (to get more FPS), the video is just not smooth.
I had pretty much given up on this approach because it doesn't work, but I wanted to ask here one more time, just to be sure: am I doing something wrong here?
I am using the following code for capturing video frames from a USB webcam using OpenCV 3 in MS VC++ 2012. But the problem is that sometimes I am able to display the captured frames at 30 fps, and sometimes I get black frames with a very low fps (or a high delay). In other words, the program behaves randomly. Do you know how I can solve this problem? I tried different solutions suggested on Stack Overflow and elsewhere, but none of them solved it.
VideoCapture v(1);
v.set(CV_CAP_PROP_FRAME_WIDTH, 720);
v.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
if(!v.isOpened()){
cout << "Error opening video stream or file" << endl;
return;
}
Mat Image;
namedWindow("win",1);
while(1){
v >> Image;
imshow("win", Image);
}
try this:
while(1){
v >> Image;
imshow("win", Image);
char c=waitKey(10);//add a 10ms delay per frame to sync with cam fps
if(c=='b')
{
break;//break when b is pressed
}
}
I have a problem with playing a video file: why is it in slow motion?
How can I make it play at normal speed?
#include"opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap("eye.mp4"); // open the video file
if (!cap.isOpened()) // check if we succeeded
return -1;
namedWindow("Video", 1);
while (1)
{
Mat frame;
cap >> frame;
imshow("Video", frame);
if (waitKey(10) == 'c')
break;
}
return 0;
}
VideoCapture isn't built for playback; it's just a way to grab frames from a video file or camera. Other libraries that support playback, such as GStreamer or DirectShow, set a clock that controls the playback, so it can be configured to play as fast as possible or at the original framerate.
In your snippet, the interval between frames comes from the time it takes to read a frame plus the waitKey(10). Try using waitKey(1); it should at least play faster. Ideally, you would use waitKey(1000 / fps), with fps read from CAP_PROP_FPS.
I am using Qt Creator to implement an application that plays a video; by clicking a button, I will save the frame that is being shown. Then I will process that frame with OpenCV.
Having a video displayed with QMediaPlayer, how can I extract a frame from it? I should then be able to convert that frame to an OpenCV Mat image.
Thanks
QMediaPlayer *player = new QMediaPlayer();
QVideoProbe *probe = new QVideoProbe;
connect(probe, SIGNAL(videoFrameProbed(QVideoFrame)), this, SLOT(processFrame(QVideoFrame)));
probe->setSource(player); // Returns true, hopefully.
The processFrame slot:
void processFrame(QVideoFrame const&) {
if (isButtonClicked == false) return;
isButtonClicked = false;
...
process frame
...
}
QVideoProbe reference
QVideoFrame reference
You can use QVideoFrame::bits() to access the pixel data and process the image with OpenCV (remember to call QVideoFrame::map() first, and unmap() when done).