OpenCV network (IP) camera frames per second slow after initial burst - c++

EDIT: Upgrading to OpenCV 2.4.2 and FFmpeg 0.11.1 seems to have solved all the errors and connection problems, but it still hasn't solved the frame-rate slow-down.
I am using the default OpenCV package in Ubuntu 12.04, which I believe is 2.3.1. I am connecting to a Foscam FI8910W, which streams MJPEG. I have seen people say that the best way is to use opencv+libjpeg+curl, since it is faster than the gstreamer solution. However, I can only occasionally (50% of the time) connect to the camera from OpenCV as it is built and get a video stream. This stream starts out at around 30 fps for about 1 s but then slows down to 5-10 fps. The project I am working on will require 6 of these cameras, preferably running at 15-30 fps (faster is better).
Here are my questions:
1. Is this a problem that is fixed in 2.4.2, so that I should just upgrade?
2. If not, any ideas why I get a short burst and then it slows down?
3. Is the best solution still to use curl+libjpeg?
4. I see lots of people who say that solutions have been posted, but very few actual links to posts with solutions. Having all the actual solutions (both curl and gstreamer) referenced in one place would be very handy, as per http://opencv-users.1802565.n2.nabble.com/IP-camera-solution-td7345005.html.
Here is my code:
VideoCapture cap;
cap.open("http://10.10.1.10/videostream.asf?user=test&pwd=1234&resolution=32");
Mat frame;
cap >> frame;
VideoWriter wr; // HEADER_HEIGHT, coutPrep, actValueStr and the font* settings are defined elsewhere
wr.open("test.avi", CV_FOURCC('P','I','M','1'), 29.92, frame.size(), true);
if(!wr.isOpened())
{
    cout << "Video writer open failed" << endl;
    return -1;
}
Mat dst = Mat::zeros(frame.rows + HEADER_HEIGHT, frame.cols, CV_8UC3);
Mat roi(dst, Rect(0, HEADER_HEIGHT-1, frame.cols, frame.rows));
Mat head(dst, Rect(0, 0, frame.cols, HEADER_HEIGHT));
Mat zhead = Mat::zeros(head.rows, head.cols, CV_8UC3);
namedWindow("test", 1);
time_t tnow;
tm *tS;
double t1 = (double)getTickCount();
double t2;
for(int i = 0; ; i++) // infinite loop (the original i > -1 condition eventually overflows)
{
    cap >> frame;
    if(!frame.data)
        break;
    tnow = time(0);
    tS = localtime(&tnow);
    frame.copyTo(roi);
    std::ostringstream L1, L2;
    L1 << tS->tm_year+1900 << " " << coutPrep << tS->tm_mon+1 << " ";
    L1 << coutPrep << tS->tm_mday << " ";
    L1 << coutPrep << tS->tm_hour;
    L1 << ":" << coutPrep << tS->tm_min << ":" << coutPrep << tS->tm_sec;
    actValueStr = L1.str();
    zhead.copyTo(head);
    putText(dst, actValueStr, Point(0, HEADER_HEIGHT/2), fontFace, fontScale, Scalar(0,255,0), fontThickness, 8);
    L2 << "Frame: " << i;
    t2 = (double)getTickCount();
    L2 << " " << (t2 - t1)/getTickFrequency()*1000. << " ms";
    t1 = (double)getTickCount();
    actValueStr = L2.str();
    putText(dst, actValueStr, Point(0, HEADER_HEIGHT), fontFace, fontScale, Scalar(0,255,0), fontThickness, 8);
    imshow("test", dst);
    wr << dst; // write frame to file
    cout << "Frame: " << i << endl;
    if(waitKey(30) >= 0)
        break;
}
Here are the errors listed when it runs correctly:
Opening 10.10.1.10
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later.
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later.
[asf @ 0x701de0] max_analyze_duration reached
[asf @ 0x701de0] Estimating duration from bitrate, this may be inaccurate
[asf @ 0x701de0] ignoring invalid packet_obj_size (21084 656 21720 21740)
[asf @ 0x701de0] freeing incomplete packet size 21720, new 21696
[asf @ 0x701de0] ff asf bad header 0 at:1029744
[asf @ 0x701de0] ff asf skip 678 (unknown stream)
[asf @ 0x701de0] ff asf bad header 45 at:1030589
[asf @ 0x701de0] packet_obj_size invalid
[asf @ 0x701de0] ff asf bad header 29 at:1049378
[asf @ 0x701de0] packet_obj_size invalid
[asf @ 0x701de0] freeing incomplete packet size 21820, new 21684
[asf @ 0x701de0] freeing incomplete packet size 21684, new 21836
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later.
Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later.
[asf @ 0x701de0] Estimating duration from bitrate, this may be inaccurate
Successfully opened network camera
[swscaler @ 0x8cf400] No accelerated colorspace conversion found from yuv422p to bgr24.
Output #0, avi, to 'test.avi':
Stream #0.0: Video: mpeg1video (hq), yuv420p, 640x480, q=2-31, 19660 kb/s, 90k tbn, 29.97 tbc
[swscaler @ 0x9d25c0] No accelerated colorspace conversion found from yuv422p to bgr24.
Frame: 0
[swscaler @ 0xa89f20] No accelerated colorspace conversion found from yuv422p to bgr24.
Frame: 1
[swscaler @ 0x7f7840] No accelerated colorspace conversion found from yuv422p to bgr24.
Frame: 2
[swscaler @ 0xb9e6c0] No accelerated colorspace conversion found from yuv422p to bgr24.
Frame: 3
Frame: 3
Sometimes it hangs after the first "Estimating duration from bitrate" statement.

Have you tried removing the code that writes to disk? I've seen very similar performance issues with USB cameras when a disk buffer fills up. Great framerate at first, and then it drops dramatically.
If that is the issue, another option is to change your compression codec to something that compresses more significantly.
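To put rough numbers on the disk-buffer theory (a back-of-the-envelope sketch; the 10:1 MJPEG compression ratio is an assumption for illustration, not a measured value):

```python
# Rough estimate of the disk bandwidth needed to record uncompressed
# 640x480 BGR frames at 30 fps, versus an assumed ~10:1 MJPEG ratio.
width, height, channels = 640, 480, 3
fps = 30

raw_bytes_per_sec = width * height * channels * fps
print(f"raw: {raw_bytes_per_sec / 1e6:.1f} MB/s per camera")  # ~27.6 MB/s

mjpeg_ratio = 10  # assumed compression ratio; real MJPEG varies with content
print(f"mjpeg: {raw_bytes_per_sec / mjpeg_ratio / 1e6:.1f} MB/s per camera")

# Six cameras, as in the question above:
print(f"6 cameras raw: {6 * raw_bytes_per_sec / 1e6:.1f} MB/s")
```

Sustained writes in the tens of MB/s can indeed outrun a slow disk once the OS write cache fills, which matches the "fast at first, then drops" pattern.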

A fast initial FPS that changes to a slow FPS would suggest that the camera is increasing exposure time to compensate for a poorly lit subject. The camera is analyzing the first few frames and then adjusting the exposure time accordingly.
It seems that actual FPS is a combination of two things:
Hardware Limitations (defines the max FPS)
Required Exposure Time (defines the min FPS)
The hardware may have the bandwidth required to transfer X FPS, but poorly lit conditions may require an exposure time that slows down the actual FPS. For example, if each frame needs to be exposed for 0.1 seconds, the fastest possible FPS will be 10.
To test for this, measure the FPS with the camera pointed at a poorly lit subject and compare it to the FPS with the camera pointed at a well lit subject. Be sure to exaggerate the lighting conditions and give the camera a few seconds to detect the required exposure.
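As a sketch of that relationship (the numbers here are illustrative, not measured from the Foscam):

```python
# Effective FPS is capped by both the hardware limit and the exposure time:
# the camera cannot deliver frames faster than 1 / exposure_time.
def effective_fps(hardware_fps, exposure_time_s):
    """Upper bound on achievable FPS given an exposure time in seconds."""
    return min(hardware_fps, 1.0 / exposure_time_s)

print(effective_fps(30, 0.001))  # well lit: hardware-limited, 30.0
print(effective_fps(30, 0.1))    # poorly lit: exposure-limited, 10.0
```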

Related

Streaming Logitec C922 at 60fps with C++

I would like to capture images with a webcam (i.e. a Logitech C922) in C++. Has anyone succeeded in capturing images with this webcam at 60 fps and 720p? I read the code in the following thread and added cap.set(CAP_PROP_FPS, 60), but the frame rate stayed at about 30 fps.
How to set camera fps in opencv?
Then I posted the same question there, but the forum is under maintenance:
http://answers.opencv.org/question/172992/streaming-logitec-c922-at-60fps-with-c/
I added both of the proposed snippets to my code.
As a result, the measured fps was about 33.3, but it became 60.0 when I used cap.set(CAP_PROP_EXPOSURE, -5) (because I'm in the office and it's night here).
I tried lower values for CAP_PROP_EXPOSURE (e.g. -10), but the fps didn't change.
Also, the image shown with imshow clearly wasn't being updated at 60 fps.
Is there anything I can do?
This is the code I used.
VideoCapture cap(0); // capture the video from web cam
if (!cap.isOpened()) // if not success, exit program
{
    cout << "Cannot open the web cam" << endl;
    return -1;
}
cap.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M','J','P','G'));
cap.set(CAP_PROP_FPS, 60);
cap.set(CAP_PROP_EXPOSURE, -5);
cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 720);
cout << cap.get(CAP_PROP_FPS) << endl;
namedWindow("img");
time_t cap_start, cap_end;
Mat frame;
double MAX_FRAME_NUM = 100;
time(&cap_start);
for (int n = 0; n < MAX_FRAME_NUM; n++) {
    cap >> frame;
}
time(&cap_end);
double fps = MAX_FRAME_NUM / difftime(cap_end, cap_start);
cout << "fps:" << fps << endl;
cv::waitKey(0);
Environment information: OpenCV 3.3.0, OS: Windows 10 Pro, IDE: Visual Studio 2017, CPU: i7-7560U, RAM: 16 GB, USB: 3.0
Best regards, gellpro
I stumbled upon the same issue with this camera.
My environment is Ubuntu 18.04, python 3.6.5 and OpenCV 3.4.
I found this solution from your first link to be working:
cap.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M', 'J', 'P', 'G'));
For python, the code I use is:
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M','J','P','G'))
cap.set(cv2.CAP_PROP_FPS, 60)

OpenCV VideoCapture lag due to the capture buffer

I am capturing video through a webcam which gives a mjpeg stream.
I did the video capture in a worker thread.
I start the capture like this:
const std::string videoStreamAddress = "http://192.168.1.173:80/live/0/mjpeg.jpg?x.mjpeg";
qDebug() << "start";
cap.open(videoStreamAddress);
qDebug() << "really started";
cap.set(CV_CAP_PROP_FRAME_WIDTH, 720);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 576);
The camera feeds the stream at 20 fps.
But if I do the reading at 20 fps like this:
if (!cap.isOpened()) return;
Mat frame;
cap >> frame; // get a new frame from camera
mutex.lock();
m_imageFrame = frame;
mutex.unlock();
Then there is a 3+ second lag.
The reason is that the captured video is first stored in a buffer. When I first start the camera, the buffer accumulates frames, but I have not read them out yet, so reading from the buffer always gives me old frames.
The only solution I have now is to read the buffer at 30 fps so it drains quickly and there's no more serious lag.
Is there any other possible solution so that I could clean/flush the buffer manually each time I start the camera?
OpenCV Solution
According to this source, you can set the buffersize of a cv::VideoCapture object.
cv::VideoCapture cap;
cap.set(CV_CAP_PROP_BUFFERSIZE, 3); // internal buffer will now store only 3 frames
// rest of your code...
There is an important limitation however:
CV_CAP_PROP_BUFFERSIZE Amount of frames stored in internal buffer memory (note: only supported by DC1394 v 2.x backend currently)
Update from comments. In newer versions of OpenCV (3.4+), the limitation seems to be gone and the code uses scoped enumerations:
cv::VideoCapture cap;
cap.set(cv::CAP_PROP_BUFFERSIZE, 3);
Hackaround 1
If the solution does not work, take a look at this post that explains how to hack around the issue.
In a nutshell: the time needed to query a frame is measured; if it is too low, it means the frame was read from the buffer and can be discarded. Continue querying frames until the time measured exceeds a certain limit. When this happens, the buffer was empty and the returned frame is up to date.
(The answer on the linked post shows: returning a frame from the buffer takes about 1/8th the time of returning an up to date frame. Your mileage may vary, of course!)
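The Hackaround 1 logic can be sketched like this (a toy simulation rather than real capture code; the simulated timings and the 0.02 s threshold are assumptions chosen to illustrate the idea for a ~20 fps camera):

```python
# Simulated per-frame grab times: buffered frames return almost instantly,
# while the first "live" frame has to wait for the camera (~1/20 s at 20 fps).
simulated_grab_times = [0.004, 0.004, 0.005, 0.004, 0.048]

def drain_buffer(grab_times, threshold=0.02):
    """Discard frames until a grab takes longer than `threshold` seconds,
    which indicates the buffer is empty and the frame is up to date."""
    for index, grab_time in enumerate(grab_times):
        if grab_time > threshold:
            return index  # this frame is the fresh one
    return len(grab_times) - 1  # never exceeded threshold: settle for the last

fresh = drain_buffer(simulated_grab_times)
print(f"discarded {fresh} buffered frames; frame {fresh} is live")
```

In real code the list of timings would be replaced by timing each cap.grab() call as it happens.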
Hackaround 2
A different solution, inspired by this post, is to create a third thread that grabs frames continuously at high speed to keep the buffer empty. This thread should use the cv::VideoCapture.grab() to avoid overhead.
You could use a simple spin-lock to synchronize reading frames between the real worker thread and the third thread.
Guys, this is a pretty crude and wasteful solution, but the accepted answer didn't help me for some reason. (The code is in Python, but the essence is clear.)
# vcap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
data = np.zeros((1140, 2560))
image = plt.imshow(data)
while True:
    vcap = cv2.VideoCapture("rtsp://admin:#192.168.3.231")
    ret, frame = vcap.read()
    image.set_data(frame)
    plt.pause(0.5)  # any other consuming operation
    vcap.release()
An implementation of Hackaround 2 from Maarten's answer, using Python. It starts a thread and keeps the latest frame from camera.read() as a class attribute. A similar strategy can be used in C++.
import threading
import cv2

# Define the thread that will continuously pull frames from the camera
class CameraBufferCleanerThread(threading.Thread):
    def __init__(self, camera, name='camera-buffer-cleaner-thread'):
        self.camera = camera
        self.last_frame = None
        super(CameraBufferCleanerThread, self).__init__(name=name)
        self.daemon = True  # so the thread doesn't keep the process alive on exit
        self.start()

    def run(self):
        while True:
            ret, self.last_frame = self.camera.read()

# Start the camera
camera = cv2.VideoCapture(0)
# Start the cleaning thread
cam_cleaner = CameraBufferCleanerThread(camera)
# Use the frame whenever you want
while True:
    if cam_cleaner.last_frame is not None:
        cv2.imshow('The last frame', cam_cleaner.last_frame)
    cv2.waitKey(10)
You can check whether grabbing the frame took an appreciable amount of time: if it did, the frame came from the camera rather than the buffer. It is quite simple to code, though a bit unreliable; potentially, this loop could spin forever.
#include <chrono>

using Clock = std::chrono::high_resolution_clock;
using FloatSeconds = std::chrono::duration<float>;

// ...
while (true) {
    Clock::time_point time_start = Clock::now();
    camera.grab();
    // A grab that took more than half a frame period means the buffer
    // was empty and this frame is up to date.
    if (FloatSeconds(Clock::now() - time_start).count() * camera.get(cv::CAP_PROP_FPS) > 0.5) {
        break;
    }
}
camera.retrieve(dst_image);
The code uses C++11.
There is an option to drop old buffers if you use a GStreamer pipeline: the appsink drop=true option "drops old buffers when the buffer queue is filled". In my particular case there is an occasional delay during live stream processing, so I need to get the latest frame on each VideoCapture.read call.
#include <chrono>
#include <thread>

#include <opencv4/opencv2/highgui.hpp>

static constexpr const char * const WINDOW = "1";

void video_test() {
    // It doesn't work properly without the `drop=true` option
    cv::VideoCapture video("v4l2src device=/dev/video0 ! videoconvert ! videoscale ! videorate ! video/x-raw,width=640 ! appsink drop=true", cv::CAP_GSTREAMER);
    if(!video.isOpened()) {
        return;
    }
    cv::namedWindow(
        WINDOW,
        cv::WINDOW_GUI_NORMAL | cv::WINDOW_NORMAL | cv::WINDOW_KEEPRATIO
    );
    cv::resizeWindow(WINDOW, 700, 700);
    cv::Mat frame;
    const std::chrono::seconds sec(1);
    while(true) {
        if(!video.read(frame)) {
            break;
        }
        std::this_thread::sleep_for(sec);
        cv::imshow(WINDOW, frame);
        cv::waitKey(1);
    }
}
If you know the framerate of your camera, you can use that information (e.g. 30 frames per second) to grab frames until a grab takes longer than one frame period.
It works because if the grab call becomes delayed (i.e. takes more time than the nominal frame interval), it means every frame in the buffer has been consumed and OpenCV has to wait for the next frame to arrive from the camera.
import time

while True:
    prev_time = time.time()
    ref = vid.grab()
    if (time.time() - prev_time) > 0.030:  # something around 33 FPS
        break
ret, frame = vid.retrieve()

OpenCV - Dramatically increase frame rate from playback

In OpenCV, is there a way to dramatically increase the frame rate of a video (.mp4)? I have tried two methods to increase the playback speed of a video, including:
Increasing the frame rate:
cv.SetCaptureProperty(cap, cv.CV_CAP_PROP_FPS, int XFRAMES)
Skipping frames:
for( int i = 0; i < playbackSpeed; i++ ){originalImage = frame.grab();}
&
video.set (CV_CAP_PROP_POS_FRAMES, (double)nextFrameNumber);
is there another way to achieve the desired results? Any suggestions would be greatly appreciated.
Update
Just to clarify, the playback speed is NOT slow; I am just searching for a way to make it much faster.
You're using the old API (cv.CaptureFromFile) to capture from a video file.
If you use the new C++ API, you can grab frames at the rate you want. Here is a very simple sample code :
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap("filename"); // put your filename here
namedWindow("Playback",1);
int delay = 15; // 15 ms between frame acquisitions = 2x fast-forward
while(true)
{
Mat frame;
cap >> frame;
imshow("Playback", frame);
if(waitKey(delay) >= 0) break;
}
return 0;
}
Basically, you just grab a frame in each loop iteration and wait between frames using waitKey(). The number of milliseconds you wait between frames sets your speedup. Here, I put 15 ms, so this example will play a 30 fps video at approximately twice the speed (minus the time necessary to grab a frame).
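The relationship between the waitKey() delay and the resulting speedup can be sketched like this (assuming the time to grab and display a frame is negligible):

```python
def playback_delay_ms(video_fps, speedup):
    """Delay to pass to waitKey() to play `video_fps` footage `speedup`x faster."""
    return 1000.0 / (video_fps * speedup)

print(playback_delay_ms(30, 2))  # ~16.7 ms, close to the 15 ms used above
print(playback_delay_ms(30, 4))  # ~8.3 ms for 4x fast-forward
```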
Another option, if you really want control over how and what you grab from video files, is to use the GStreamer API to grab the images and then convert them to OpenCV for image processing. You can find some info about this in this post: MJPEG streaming and decoding

OpenCV - FPS from phone camera not correct

I have multiple recorded video samples; when I run these through my program it returns the FPS among other things. It is accurate enough for all of my video samples (see table below), but when I run a video sample taken with my smartphone it returns an FPS of 90000. This happens with every video captured with my smartphone, so it is not just a problem with a single video file.
File           Actual FPS   OpenCV FPS   ffmpeg FPS
action-60fps   60           59           60
action-24fps   24           24           24
phone_panning  29           90000        29
What is causing this problem?
EDIT: Managed to forget to add my code...
VideoCapture capture(argv[1]);
Mat frame;
if(capture.isOpened()) {
    int fps = capture.get(CV_CAP_PROP_FPS);
    int width = capture.get(CV_CAP_PROP_FRAME_WIDTH);
    int height = capture.get(CV_CAP_PROP_FRAME_HEIGHT);
    cout << "FPS: " << fps << ", width: " << width << ", height: " << height << endl;
    VideoWriter writer("output.mpg",
        CV_FOURCC('P','I','M','1'), fps, cvSize(width, height), 0); // 0 means gray, 1 means color
    if(writer.isOpened()) {
        while(true) {
            capture >> frame;
            if(frame.empty()) { // stop instead of processing an empty frame
                break;
            }
            imshow("Video", frame);
            Mat frame_gray = frame.clone();
            cvtColor(frame, frame_gray, CV_BGR2GRAY); // OpenCV frames are BGR, not RGB
            writer << frame_gray;
            int key = waitKey(25);
            if((char)key == 'q') {
                break;
            }
        }
    }
}
I had the same problem with OpenCV calculating the FPS and the number of frames in a video. (It was returning 90,000 for the FPS and 5,758,245 for the frame count of a 64-second video!)
According to this answer:
OpenCV captures only a fraction of the frames from a video file
it's an OpenCV issue and they are working on it. The suspicious 90,000 figure is the standard 90 kHz MPEG timebase being reported in place of the real frame rate.
Another cause can be a problem with the file header; mine was caused by converting the video format. I solved it by using the original mp4 video format instead of avi.
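A quick sanity check of those numbers (the 29 fps figure is taken from the phone_panning row in the question's table; this is plain arithmetic, not OpenCV API code):

```python
# If the reported "FPS" is really the 90 kHz MPEG timebase, then the reported
# "frame count" is actually a tick count, and dividing the two should
# recover the true duration of the clip.
reported_fps = 90000       # ticks per second (MPEG 90 kHz timebase)
reported_frames = 5758245  # actually ticks, not frames
real_fps = 29              # from the phone_panning row above

duration_s = reported_frames / reported_fps
print(f"duration: {duration_s:.1f} s")              # ~64 s, matching the video
print(f"true frames: {duration_s * real_fps:.0f}")  # ~1855 actual frames
```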

copy frames from video file recording

I start a recording session from a webcam, and then I want to make another file containing the 200 frames already recorded, followed by further realtime frames from the webcam.
I wrote this to get frames from the "Blob.avi" file that is still being recorded:
VideoWriter writeVideo;
VideoCapture savedVideo;
savedVideo.open("E:\\Blob.avi");
int startFrame = savedVideo.get(CV_CAP_PROP_FRAME_COUNT);
cout << " startFrame:" << startFrame;
startFrame -= 200;
savedVideo.set(CV_CAP_PROP_POS_FRAMES, startFrame);
cout << "-" << startFrame << " ms: " << savedVideo.get(CV_CAP_PROP_POS_MSEC) << endl;
sprintf_s(filename, 150, "E:\\FramesTest.avi");
writeVideo.open(filename, xvid, 10, Size(640,480)); // xvid fourcc defined elsewhere
Mat tempPic;
for( int i = 0; i < 200; i++ ){
    savedVideo >> tempPic;
    writeVideo.write(tempPic);
}
//then add realtime frames.
//then add realtime frames.
The problem is that it can't read the total number of frames in the file that is still being recorded: it gives me 0 for the frame count and also for the ms position. As a result, the new file starts recording realtime frames without first copying the 200 frames from the other file.
I think the problem is that the video file is still in use.
So how can I copy some frames from a video file that has not been released?
EDIT:
I must add a delay of at least 30 ms:
for( int i = 0; i < 200; i++ ){
    savedVideo >> tempPic;
    writeVideo.write(tempPic);
    Sleep(30);
}
This way it works, even though it starts from frame 0 rather than from 200 frames before the last one.
But it's very slow: I lose about 10 seconds of realtime capture, partly because each frame is re-encoded.
So I think it would be better to record the realtime capture first and then add the 200 frames at the beginning of the newly created file.
How can I achieve this, preferably without re-encoding, ideally at the file level?
The purpose of this is to produce a video whenever blob detection fires, including a few seconds of video from before the blob was triggered.
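One way to sidestep the shared-file problem entirely (a sketch of an alternative design, not a fix for reading an in-use AVI): keep the most recent 200 frames in an in-memory ring buffer, and when a blob is detected, write the buffered frames first and then continue with live frames. The helper names below are hypothetical; integers stand in for frames so the logic is easy to follow.

```python
from collections import deque

PRE_TRIGGER_FRAMES = 200

# A deque with maxlen automatically discards the oldest frame once full.
ring = deque(maxlen=PRE_TRIGGER_FRAMES)

def on_new_frame(frame, triggered, writer):
    """Call once per captured frame. `writer` is any object with a write() method."""
    if triggered:
        # Flush the pre-trigger history first, then write the live frame.
        while ring:
            writer.write(ring.popleft())
        writer.write(frame)
    else:
        ring.append(frame)

# Toy demonstration with integers standing in for frames:
class ListWriter:
    def __init__(self):
        self.frames = []
    def write(self, f):
        self.frames.append(f)

w = ListWriter()
for f in range(250):
    on_new_frame(f, triggered=(f >= 240), writer=w)
print(len(w.frames))  # 200 buffered + 10 live = 210
print(w.frames[0])    # oldest buffered frame: 40
```

This avoids re-encoding from disk and never touches the still-open recording file, at the cost of holding 200 frames in RAM.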