I am capturing video through a webcam which gives an MJPEG stream.
I do the video capture in a worker thread.
I start the capture like this:
const std::string videoStreamAddress = "http://192.168.1.173:80/live/0/mjpeg.jpg?x.mjpeg";
qDebug() << "start";
cap.open(videoStreamAddress);
qDebug() << "really started";
cap.set(CV_CAP_PROP_FRAME_WIDTH, 720);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 576);
The camera feeds the stream at 20 fps.
But if I read at 20 fps like this:
if (!cap.isOpened()) return;
Mat frame;
cap >> frame; // get a new frame from camera
mutex.lock();
m_imageFrame = frame;
mutex.unlock();
Then there is a lag of 3+ seconds.
The reason is that the captured video is first stored in a buffer. When I first start the camera, the buffer accumulates frames but I do not read them out, so if I read from the buffer it always gives me old frames.
The only solution I have now is to read the buffer at 30 fps so it drains quickly and there is no more serious lag.
Is there any other possible solution, so that I could clean/flush the buffer manually each time I start the camera?
OpenCV Solution
According to this source, you can set the buffer size of a cv::VideoCapture object.
cv::VideoCapture cap;
cap.set(CV_CAP_PROP_BUFFERSIZE, 3); // internal buffer will now store only 3 frames
// rest of your code...
There is an important limitation however:
CV_CAP_PROP_BUFFERSIZE Amount of frames stored in internal buffer memory (note: only supported by DC1394 v 2.x backend currently)
Update from comments. In newer versions of OpenCV (3.4+), the limitation seems to be gone and the code uses scoped enumerations:
cv::VideoCapture cap;
cap.set(cv::CAP_PROP_BUFFERSIZE, 3);
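Whether the property is actually honored still depends on the capture backend, so it is worth verifying. A minimal check (the device index 0 is just a placeholder):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);                          // placeholder device/stream
    bool accepted = cap.set(cv::CAP_PROP_BUFFERSIZE, 3);
    // some backends return false here, others accept the call but silently ignore it,
    // so also read the value back
    std::cout << "set accepted: " << accepted
              << ", reported buffer size: " << cap.get(cv::CAP_PROP_BUFFERSIZE) << std::endl;
    return 0;
}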
Hackaround 1
If the solution does not work, take a look at this post that explains how to hack around the issue.
In a nutshell: the time needed to query a frame is measured; if it is too low, it means the frame was read from the buffer and can be discarded. Continue querying frames until the time measured exceeds a certain limit. When this happens, the buffer was empty and the returned frame is up to date.
(The answer on the linked post shows: returning a frame from the buffer takes about 1/8th the time of returning an up to date frame. Your mileage may vary, of course!)
Hackaround 2
A different solution, inspired by this post, is to create a third thread that grabs frames continuously at high speed to keep the buffer empty. This thread should use cv::VideoCapture::grab() to avoid overhead.
You could use a simple spin-lock to synchronize reading frames between the real worker thread and the third thread.
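Here is a minimal C++ sketch of that idea, assuming a std::atomic_flag serves as the spin-lock (the names and structure are my own, not code from the linked post):

#include <atomic>
#include <thread>
#include <opencv2/opencv.hpp>

std::atomic_flag capture_lock = ATOMIC_FLAG_INIT;  // spin-lock guarding the VideoCapture
std::atomic<bool> running{true};

// Third thread: keep the driver buffer empty by grabbing as fast as possible.
void buffer_drainer(cv::VideoCapture &cap)
{
    while (running) {
        while (capture_lock.test_and_set(std::memory_order_acquire)) {}  // spin
        cap.grab();                                    // grab() only, no decoding overhead
        capture_lock.clear(std::memory_order_release);
    }
}

// Worker thread: take the lock and read one up-to-date frame only when it is actually needed.
bool read_latest(cv::VideoCapture &cap, cv::Mat &frame)
{
    while (capture_lock.test_and_set(std::memory_order_acquire)) {}      // spin
    cap.grab();
    bool ok = cap.retrieve(frame);
    capture_lock.clear(std::memory_order_release);
    return ok;
}

Because the drainer thread only ever calls grab(), frames are decoded only when the worker asks for one.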
Guys, this is a pretty stupid and nasty solution, but the accepted answer didn't help me for some reason. (The code is in Python, but the essence is pretty clear.)
import cv2
import numpy as np
import matplotlib.pyplot as plt

# vcap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
data = np.zeros((1140, 2560))
image = plt.imshow(data)

while True:
    # re-open the stream on every iteration so no stale frames accumulate in the buffer
    vcap = cv2.VideoCapture("rtsp://admin:#192.168.3.231")
    ret, frame = vcap.read()
    image.set_data(frame)
    plt.pause(0.5)  # any other consuming operation
    vcap.release()
An implementation of Hackaround 2 from Maarten's answer, using Python. It starts a thread and keeps the latest frame from camera.read() as a class attribute. A similar strategy can be implemented in C++.
import threading
import cv2

# Define the thread that will continuously pull frames from the camera
class CameraBufferCleanerThread(threading.Thread):
    def __init__(self, camera, name='camera-buffer-cleaner-thread'):
        self.camera = camera
        self.last_frame = None
        super(CameraBufferCleanerThread, self).__init__(name=name)
        self.start()

    def run(self):
        while True:
            ret, self.last_frame = self.camera.read()

# Start the camera
camera = cv2.VideoCapture(0)

# Start the cleaning thread
cam_cleaner = CameraBufferCleanerThread(camera)

# Use the frame whenever you want
while True:
    if cam_cleaner.last_frame is not None:
        cv2.imshow('The last frame', cam_cleaner.last_frame)
    cv2.waitKey(10)
You can check whether grabbing the frame took a bit of time (which means it came from the camera rather than the buffer). It is quite simple to code, though a bit unreliable; potentially, this code could lead to a deadlock.
#include <chrono>

using Clock = std::chrono::high_resolution_clock;
using TimePoint = std::chrono::time_point<Clock>;
using FloatSeconds = std::chrono::duration<float>;

// ...
while (true) {
    TimePoint time_start = Clock::now();
    camera.grab();
    float seconds = std::chrono::duration_cast<FloatSeconds>(Clock::now() - time_start).count();
    // if grabbing took more than half a frame period, the frame came from the camera, not the buffer
    if (seconds * camera.get(cv::CAP_PROP_FPS) > 0.5) {
        break;
    }
}
camera.retrieve(dst_image);
The code uses C++11.
There is an option to drop old buffers if you use a GStreamer pipeline: the appsink drop=true option "drops old buffers when the buffer queue is filled". In my particular case there is an occasional delay during live-stream processing, so I need the latest frame on each VideoCapture.read call.
#include <chrono>
#include <thread>
#include <opencv4/opencv2/highgui.hpp>
static constexpr const char * const WINDOW = "1";
void video_test() {
// It doesn't work properly without `drop=true` option
cv::VideoCapture video("v4l2src device=/dev/video0 ! videoconvert ! videoscale ! videorate ! video/x-raw,width=640 ! appsink drop=true", cv::CAP_GSTREAMER);
if(!video.isOpened()) {
return;
}
cv::namedWindow(
WINDOW,
cv::WINDOW_GUI_NORMAL | cv::WINDOW_NORMAL | cv::WINDOW_KEEPRATIO
);
cv::resizeWindow(WINDOW, 700, 700);
cv::Mat frame;
const std::chrono::seconds sec(1);
while(true) {
if(!video.read(frame)) {
break;
}
std::this_thread::sleep_for(sec);
cv::imshow(WINDOW, frame);
cv::waitKey(1);
}
}
If you know the frame rate of your camera, you can use this information (e.g. 30 frames per second) to grab frames until the grab rate drops below it.
It works because once the grab function becomes delayed (i.e. takes longer than the standard frame interval), it means you have consumed every frame inside the buffer and OpenCV needs to wait for the next frame to come from the camera.
import time
import cv2

vid = cv2.VideoCapture(0)  # assumption: your device index or stream URL goes here

while True:
    prev_time = time.time()
    ref = vid.grab()
    if (time.time() - prev_time) > 0.030:  # something around 33 FPS
        break
ret, frame = vid.retrieve()
Related
Let's say we have a time-consuming process that takes about 2 seconds to complete on any frame.
As we capture our frames with VideoCapture, frames are stored in a buffer behind the scenes (it may even happen in the camera itself), and when the process on the nth frame completes, the next read grabs the (n+1)th frame. However, I would like to grab frames in real time, not in order (i.e. skip the intermediate frames).
For example, you can test the example below:
cv::Mat frame;
cv::VideoCapture cap("rtsp url");
while (true) {
cap.read(frame);
cv::imshow("s",frame);
cv::waitKey(2000); //artificial delay
}
Run the above example and you will see that the frame shown belongs to the past, not the present. I am wondering how to skip those stale frames.
NEW METHOD
After you create the VideoCapture, you can set the following property in OpenCV:
VideoCapture cap(0);
cap.set(CV_CAP_PROP_BUFFERSIZE, 1); // now the OpenCV buffer holds just one frame
USING THREAD
The task was completed using multi-threaded programming. What is the point of this approach? If you are using a weak processor like a Raspberry Pi, or you have a heavy algorithm that takes a long time to run, you may experience this problem, leading to a large lag in your frames.
This problem was solved in Qt by using the QtConcurrent class, which makes it easy to run a thread with very little code. Basically, we run our main thread as the processing unit and run a second thread to continuously capture frames, so when the main thread finishes processing one specific frame, it asks the second thread for another frame. In the second thread it is very important to capture frames without delay; therefore, if the processing unit takes two seconds and a frame is captured in 0.2 seconds, we will lose the intermediate frames.
The project is structured as follows:
1.main.cpp
#include <opencv2/opencv.hpp>
#include <new_thread.h>              // custom class that grabs frames (we dedicate a new thread to it)
#include <QtConcurrent/QtConcurrent>
#include <QThread>                   // needed to generate an artificial delay, which in a real situation comes from a weak processor, etc.
using namespace cv;

int main()
{
    Mat frame;
    new_thread th;                                    // create an object of the new_thread class
    QtConcurrent::run(&th, &new_thread::get_frame);   // start a thread with QtConcurrent
    QThread::msleep(2000);                            // the camera needs some time to start (open)
    while(true)
    {
        frame = th.return_frame();
        imshow("frame", frame);
        waitKey(20);
        QThread::msleep(2000);                        // artificial delay, or a big process
    }
}
2.new_thread.h
#include <opencv2/opencv.hpp>
using namespace cv;

class new_thread
{
public:
    new_thread();          // constructor
    void get_frame(void);  // grab frames continuously
    Mat return_frame();    // return the latest frame to main.cpp

private:
    VideoCapture cap;
    Mat frame; // note: shared between the grabbing thread and the caller without locking
};
3.new_thread.cpp
#include <new_thread.h>
#include <QThread>

new_thread::new_thread()
{
    cap.open(0); // open the camera when the class is constructed
}

void new_thread::get_frame() // grab-frame method
{
    while(1) // while(1) because we want to grab frames continuously
    {
        cap.read(frame);
    }
}

Mat new_thread::return_frame()
{
    return frame; // whenever main.cpp needs an updated frame it can request the last frame using this method
}
I have a system that typically runs at a scan time of 100 Hz (10 ms) and performs time-critical tasks. I'm trying to add a camera with OpenCV to capture an image for quality control once in a while (it depends on when a user interacts with the system, so it can be anywhere from 10-second pauses to minutes).
Here is what my code is doing:
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
UMat frame;
for(;;){
if (timing_variable_100Hz){
cap >> frame; // get a new frame from camera
*Do something time critical*
if(some_criteria_is_met){
if(!frame.empty()) imwrite( "Image.jpg", frame);
}
}
}
return 0;
}
Now the issue I'm having is that cap >> frame takes a lot of time.
My scan time regularly runs around 3 ms and now it's at 40 ms. My question is: is there any way to open the camera, capture once, and then not have to capture every frame afterwards until I have to? I tried to move the cap >> frame inside the if(some_criteria_is_met) block, which allowed me to capture the first image correctly, but the second image, taken a few minutes later, was a single frame past the first captured image (I hope that makes sense).
Thanks
The problem is that your camera has a frame rate of less than 100 fps, probably 30 fps (judging by the delay you measured), so grab waits for a new frame to be available.
Since there is no way to do a non-blocking read in OpenCV, I think your best option is to do the video grabbing in another thread.
Something like this, if you use C++11 (this is an example, not sure it is entirely correct):
#include <atomic>
#include <thread>
#include <opencv2/opencv.hpp>
using namespace cv;

void camera_loop(std::atomic<bool> &capture, std::atomic<bool> &stop)
{
    VideoCapture cap(0);
    Mat frame;
    while(!stop)
    {
        cap.grab();
        if(capture)
        {
            cap.retrieve(frame);
            // do whatever you do with the frame
            capture = false;
        }
    }
}

int main()
{
    std::atomic<bool> capture{false}, stop{false};
    std::thread camera_thread(camera_loop, std::ref(capture), std::ref(stop));
    for(;;)
    {
        // do something time critical
        if(some_criteria_is_met)
        {
            capture = true;
        }
    }
    stop = true;
    camera_thread.join();
}
It doesn't answer your question of "is there any way to open the camera, capture, and then not have to capture every frame afterwards until I have to?", but here is a suggestion:
You could have the cap >> frame call in a background thread that is responsible only for capturing frames.
Once the frame is in memory, push it to some sort of shared cyclic queue to be accessed from the main thread.
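A rough sketch of what such a shared cyclic queue could look like, assuming a fixed capacity and a mutex for synchronization (all names here are illustrative, not an existing API):

#include <cstddef>
#include <deque>
#include <mutex>
#include <opencv2/opencv.hpp>

// Keeps only the most recent `capacity` frames; older ones are dropped.
class FrameQueue
{
public:
    explicit FrameQueue(std::size_t capacity) : capacity_(capacity) {}

    void push(const cv::Mat &frame)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (frames_.size() == capacity_)
            frames_.pop_front();            // drop the oldest frame
        frames_.push_back(frame.clone());   // clone so the capture thread can reuse its Mat
    }

    bool popLatest(cv::Mat &out)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        if (frames_.empty())
            return false;
        out = frames_.back();               // take the newest frame
        frames_.clear();                    // discard anything older
        return true;
    }

private:
    std::size_t capacity_;
    std::deque<cv::Mat> frames_;
    std::mutex mutex_;
};

The capture thread calls push() after every cap >> frame; the main thread calls popLatest() only when it actually needs an image, so the time-critical loop never blocks on the camera.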
I am having issues with reading a recording I made using the Recorder class in OpenNI2 with the ASUS Xtion PRO LIVE. The problem is that once every ~50 frames a wrong frame is read from the recording; I verified this by storing the generated image (converted to an OpenCV matrix) as a .png image (using the OpenCV imwrite function) with an index. My code is shown below. I also tried the code posted in the "OpenNI2: Reading in .oni Recording" question, but that doesn't work either. The video streams, device, VideoFrameRefs and the playback controller are all initialized in an initialization function; I can post that as well if necessary.
openni::VideoStream depthStream;
openni::VideoStream colorStream;
openni::Device device;
openni::VideoFrameRef m_depthFrame;
openni::VideoFrameRef m_colorFrame;
openni::PlaybackControl *controler;
typedef struct _ImagePair {
cv::Mat GrayscaleImage;
cv::Mat DepthImage;
cv::Mat RGBImage;
float AverageDistance;
std::string FileName;
int index;
}ImagePair;
ImagePair SensorData::getNextRawImagePair(){
ImagePair result;
openni::Status rc;
std::cout<<"Getting next raw image pair...";
rc = controler->seek(depthStream, rawPairIndex);
if(!correctStatus(rc,"Seek depth"))return result;
openni::VideoStream** m_streams = new openni::VideoStream*[2];
m_streams[0] = &depthStream;
m_streams[1] = &colorStream;
bool newDepth=false,newColor=false;
while(!newDepth || !newColor){
int changedIndex;
openni::Status rc = openni::OpenNI::waitForAnyStream(m_streams, 2, &changedIndex);
if (rc != openni::STATUS_OK)
{
printf("Wait failed\n");
//return 1;
}
switch (changedIndex)
{
case 0:
rc = depthStream.readFrame(&m_depthFrame);
if(!correctStatus(rc,"Read depth")){
return result;
}
newDepth = true;
break;
case 1:
rc = colorStream.readFrame(&m_colorFrame);
if(!correctStatus(rc,"Read color")){
return result;
}
newColor = true;
break;
default:
printf("Error in wait\n");
}
}
//convert rgb to bgr cv::matrix
cv::Mat cv_image;
const openni::RGB888Pixel* colorImageBuffer = (const openni::RGB888Pixel*)m_colorFrame.getData();
cv_image.create(m_colorFrame.getHeight(), m_colorFrame.getWidth(), CV_8UC3);
memcpy( cv_image.data, colorImageBuffer,3*m_colorFrame.getHeight()*m_colorFrame.getWidth()*sizeof(uint8_t) );
//convert to BGR opencv color order
cv::cvtColor(cv_image,cv_image,cv::COLOR_RGB2BGR);
result.RGBImage = cv_image.clone();
//convert to grayscale
cv::cvtColor(cv_image,cv_image,cv::COLOR_BGR2GRAY);
result.GrayscaleImage =cv_image.clone();
//convert depth to cv::Mat INT16
const openni::DepthPixel* depthImageBuffer = (const openni::DepthPixel*)m_depthFrame.getData();
cv_image.create(m_depthFrame.getHeight(), m_depthFrame.getWidth(), CV_16UC1);
memcpy( cv_image.data, depthImageBuffer,m_depthFrame.getHeight()*m_depthFrame.getWidth()*sizeof(INT16) );
result.DepthImage = cv_image.clone();
result.index = rawPairIndex;
rawPairIndex++;
std::cout<<"done"<<std::endl;
return result;
}
I also tried it without the waitForAnyStream part, but that only made it worse. The file I am trying to load is over 1 GB; I am not sure whether that is a problem. The behaviour seems random because the indices of the wrong images are not always the same.
UPDATE:
I changed the seek function to seek in the color stream instead of the depth stream. I also ensured that each stream is only waited for once in the waitForAnyStream function by setting the corresponding pointer in m_streams to null.
I found out that it is possible to get the actual frame index of each frame (.getFrameIndex), so I was able to compare the indices. After getting 1000 images there were 18 wrong color images, 17 of which had an index error of 53 and 1 had an error of 46. The depth images had an almost constant index error of 9.
The second thing I found was that after adding a sleep of 1 ms (I also tried 10 ms, but the results were the same) before the waitForAnyStream call, the returned indices change significantly. The indices don't make large jumps anymore, but the color images have a standard offset of 53 and the depth images have an offset of 9.
When I change the seek function to seek in the depth stream, then with the delay the color stream has a constant error of 46 and the depth stream has an error of 1. Without the delay the color stream has an error of 45 and the depth stream has no error, with occasional spikes of errors of 1.
I also looked at the indices when I don't use seek and waitForAnyStream but just do it as proposed in an answer to "OpenNI2: Reading in .oni Recording". This shows that when the file is read by just calling m_readframe multiple times, the first color frame has index 59 instead of 0. After checking, it turns out that frame 0 does exist and is an earlier frame than frame 59. So just opening the file and using readframe doesn't work at all. There are, however, no index errors, just like when I added the sleep to my own implementation.
UPDATE 2:
I have come to the conclusion that either the OpenNI2 library doesn't work properly or I have installed it incorrectly. This is because I am also having problems setting the Xtion to a resolution of 640x480 for both streams; when I do this, the depth image only gets updated once every ~100 seconds. I have written a quick fix for my original problem by simply filtering out the frames whose indices are wrong and continuing with the next image (a rough sketch of this filter follows below).
I would still like to know how to fix this properly, so if you know, please tell me.
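For illustration, a very rough sketch of such a filter, under the assumption that a "wrong" frame is one whose recorded index (from getFrameIndex) does not follow the previously accepted one; this is my interpretation, not the exact fix used above:

// Hypothetical filter inside the reading loop (lastColorIndex holds the index of the previous accepted frame):
int colorIndex = m_colorFrame.getFrameIndex();
if (colorIndex != lastColorIndex + 1) {
    // out-of-sequence frame read from the .oni recording: discard this pair and read the next one
} else {
    lastColorIndex = colorIndex;
    // ... use the image pair as usual
}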
UPDATE 3:
The code I am using for setting the resolution and FPS for recording is:
//colorstream
openni::VideoMode color_videoMode = m_colorStream.getVideoMode();
color_videoMode.setResolution(WIDTH,HEIGHT);
color_videoMode.setFps(30);
m_colorStream.setVideoMode(color_videoMode);
//depth stream
openni::VideoMode depth_videoMode = m_depthStream.getVideoMode();
depth_videoMode.setResolution(WIDTH,HEIGHT);
depth_videoMode.setFps(30);
m_depthStream.setVideoMode(depth_videoMode);
I forgot to mention that I also tested the resolution by running the sampleviewer example program (I think; it was a few months ago) and changing the resolution in the XML config file. There the color images were shown fine, but the depth images updated very rarely.
In OpenCV, is there a way to dramatically increase the frame rate of a video (.mp4)? I have tried two methods to increase the playback speed of a video, including:
Increasing the frame rate:
cv.SetCaptureProperty(cap, cv.CV_CAP_PROP_FPS, int XFRAMES)
Skipping frames:
for( int i = 0; i < playbackSpeed; i++ ){originalImage = frame.grab();}
&
video.set (CV_CAP_PROP_POS_FRAMES, (double)nextFrameNumber);
Is there another way to achieve the desired result? Any suggestions would be greatly appreciated.
Update
Just to clarify, the playback speed is NOT slow; I am just searching for a way to make it much faster.
You're using the old API (cv.CaptureFromFile) to capture from a video file.
If you use the new C++ API, you can grab frames at the rate you want. Here is a very simple code sample:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap("filename"); // put your filename here
namedWindow("Playback",1);
int delay = 15; // 15 ms between frame acquisitions = 2x fast-forward
while(true)
{
Mat frame;
cap >> frame;
imshow("Playback", frame);
if(waitKey(delay) >= 0) break;
}
return 0;
}
Basically, you just grab a frame in each loop iteration and wait between frames using waitKey(). The number of milliseconds you wait between frames sets your speedup. Here I put 15 ms, so this example will play a 30 fps video at approximately twice the speed (minus the time necessary to grab a frame).
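If you want a specific speed-up factor instead of a hard-coded 15 ms, you can derive the delay from the file's own frame rate. A small sketch, reusing the cap object from the sample above (the speedup value is a placeholder, and depending on your OpenCV version the constant is CAP_PROP_FPS or CV_CAP_PROP_FPS):

#include <algorithm>
// ...
double fps = cap.get(CAP_PROP_FPS);      // frame rate stored in the file
double speedup = 2.0;                    // placeholder: play twice as fast
int delay = std::max(1, static_cast<int>(1000.0 / (fps * speedup)));
// pass `delay` to waitKey() in the playback loop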
Another option, if you really want control over how and what you grab from video files, is to use the GStreamer API to grab the images and then convert them to OpenCV for image processing. You can find some info about this in this post: MJPEG streaming and decoding.
I am taking input from the camera. To be more clear, I added a photo:
two cameras connected to the same USB port
with OpenCV as follows:
#define CamLeft 2
#define CamRight 0
#define WIN_L "win_l"
#define WIN_R "win_r"
int main(int argc, const char * argv[])
{
VideoCapture capLeft(CamLeft);
bool opened = capLeft.isOpened();
if(!opened /*|| !capRight.isOpened()*/) // check if we succeeded
return -1;
Mat edges;
namedWindow(WIN_L,1);
for(;;)
{
Mat frameL;
Mat frameR;
capLeft >> frameL; // get a new frame from camera
cvtColor(frameL, edges, CV_RGB2RGBA);
imshow(WIN_L, edges);
if(waitKey(30) >= 0) break;
}
return 0;
}
So I am creating a window named "win_l" (which stands for "window left") and processing the video capture. It works well. Now I upgraded my code to support another camera, like this:
int main(int argc, const char * argv[])
{
VideoCapture capLeft(CamLeft);
VideoCapture capRight(CamRight);
bool opened = capLeft.isOpened();
if(!opened /*|| !capRight.isOpened()*/) // check if we succeeded
return -1;
Mat edges;
namedWindow(WIN_L,1);
namedWindow(WIN_R,1);
for(;;)
{
Mat frameL;
Mat frameR;
capLeft >> frameL; // get a new frame from camera
cvtColor(frameL, edges, CV_RGB2RGBA);
imshow(WIN_L, edges);
imshow(WIN_R, edges);
if(waitKey(30) >= 0) break;
}
return 0;
}
But then I don't see the debugger hit this line: bool opened.... Is this the correct way to capture from two cameras?
I suspect a USB bandwidth allocation problem. I am not very sure that this is the cause, but it usually is for two separate cameras, each with its own USB cable.
But still, do give it a shot. Some methods to work around the problem: 1) put a Sleep(ms) between your capture calls; 2) use a lower resolution, which reduces the bandwidth used by each camera; 3) use the MJPEG format (compressed frames).
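For points 2) and 3), a minimal sketch of the corresponding VideoCapture settings (OpenCV 3+ constants; on 2.x use the CV_CAP_PROP_* and CV_FOURCC equivalents; whether they take effect depends on the backend and driver, so treat this as something to verify):

#include <opencv2/opencv.hpp>

// Reduce the USB bandwidth a camera needs, before you start reading frames from it.
void reduceBandwidth(cv::VideoCapture &cap)
{
    // 2) lower resolution -> fewer bytes per frame on the bus
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 320);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 240);
    // 3) request compressed MJPEG frames instead of raw YUYV (honored by V4L2/DirectShow, not by every backend)
    cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
}

// usage in your main(): reduceBandwidth(capLeft); reduceBandwidth(capRight);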
EDIT:
I came across this website, WEBCAM AND USB DRIVERS, thought you might like to read it.
Anyway, I am not sure whether both cameras are able to run concurrently through only one USB port, and the reason is as mentioned (I had actually forgotten about it, my bad):
USB: Universal Serial Bus, as the name suggests, is a serial data transmission, meaning it can only send data to/from the computer for one device at a time.
What is happening in your code is a deadlock, causing your program not to progress, even on one camera.
Two reasons that cause this:
1) I suspect that the drivers for the two cameras are constantly overriding each other, each sending an I/O Request Packet (IRP); the "pipe" keeps receiving and processing each request, and the system keeps deallocating each resource immediately after it is allocated. This goes on and on.
2) There is starvation of the 2nd camera: if it is not allocated the resource, the program cannot progress. Hence the system is stuck in a loop where the 2nd camera keeps requesting the resource.
However, it would be weird for somebody to build this two-webcam device on a single serial bus when each camera needs a channel/pipe of its own to communicate with the system. Maybe you should check with the manufacturer directly? Perhaps they came up with a workaround, which is why they built this camera.
If not, the device seems pretty redundant. I can only think of a security purpose: one camera configured for night view and one configured for day view.