I am trying to display a video stream with OpenCV, but I am having horrible problems with framerate. My video source can put out a maximum of 60 fps, but I have limited it to 30. The issue is that I am receiving it at only about 2 fps.
I have simplified my program down as far as possible to make it easier to read:
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <stdio.h>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
Mat image1;
int k;
const char* right_cam_gst = "nvcamerasrc sensor-id=0 ! video/x-raw(memory:NVMM),\
width=(int)640,\
height=(int)360,\
format=(string)I420,\
framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw,\
format=(string)I420 ! videoconvert ! video/x-raw,\
format=(string)BGR ! appsink";
VideoCapture cap1 = VideoCapture(right_cam_gst);
for (;;)
{
cap1 >> image1;
imshow("image1", image1);
if(waitKey(1) == 27)
break;
}
}
This should grab and display the image as fast as the stream allows, right?
Thanks for the help guys!
EDIT: It looks like even if I simply display an image as fast as possible, it only refreshes at about 1 fps. That rules the camera out entirely.
SYSTEM: Ubuntu on an NVIDIA Jetson TX1
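For reference (this is my own addition, not part of the original post), a minimal way to measure the display rate actually being achieved, so the camera can be ruled in or out, is to count frames against cv::getTickCount():

#include <opencv2/opencv.hpp>
#include <stdio.h>

int main()
{
    cv::VideoCapture cap(0); // or the GStreamer pipeline string from above
    if (!cap.isOpened())
        return -1;

    cv::Mat frame;
    int frames = 0;
    double t0 = (double)cv::getTickCount();
    for (;;)
    {
        cap >> frame;
        if (frame.empty())
            break;
        cv::imshow("fps test", frame);
        if (cv::waitKey(1) == 27)
            break;
        if (++frames % 30 == 0)
        {
            // elapsed wall-clock time since the first frame
            double seconds = ((double)cv::getTickCount() - t0) / cv::getTickFrequency();
            printf("%.1f fps\n", frames / seconds);
        }
    }
    return 0;
}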
Found the answer! It looks like even though my Ethernet connection was fast, the display was somehow going through a remote X server (I'm not sure exactly how). See this post: https://devtalk.nvidia.com/default/topic/1025856/very-slow-framerate-jetson-tx1-and-opencv/?offset=8
I disabled the X server and plugged straight in, and I got the full 60 FPS.
Related
I am going through the book Learning OpenCV 3 and testing out video example 2-3. I could edit, compile and run it, but the problem is that it closes down immediately.
// DisplayPicture.cpp : Defines the entry point for the console application.
//
//#include "opencv2/opencv.hpp" // Include file for every supported OpenCV function
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/videoio.hpp>
#include <stdio.h>
#include <string>

using namespace cv;
using namespace std;

int main(int argc, char** argv) {
    namedWindow("video3", WINDOW_AUTOSIZE);
    VideoCapture cap;
    cap.open(string(argv[1]));

    Mat frame;
    for (;;) {
        cap >> frame;
        //waitKey(30);
        if (frame.empty()) {
            break; // end of film
        }
        imshow("video3", frame);
    }
    return 0;
}
I found that my computer processed the data too fast: it could not read the next frame quickly enough, so if (frame.empty()) became true, the program reached the break statement, and it ended.
By adding a waitKey of 30 ms before showing the frame, the video program works very well; at least I can view the video. Since this example is from the 'bible', it should work, but it does not on my computer.
I am running an MSI GT72 2PE with an NVIDIA GTX 880M; not sure if that matters.
I assume that adding a waitKey(30) is not appropriate, so I am seeking suggestions as to what could be done differently.
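For what it's worth (this sketch is mine, not the book's code): imshow only actually draws when the HighGUI event loop runs, and waitKey is what drives that loop, so some waitKey call is needed in any case. A common compromise is to derive the delay from the frame rate the file itself reports rather than hard-coding 30 ms; the window name and the fallback delay below are my own choices:

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char** argv) {
    if (argc < 2) {
        cerr << "usage: " << argv[0] << " <video file>" << endl;
        return -1;
    }

    VideoCapture cap(argv[1]);
    if (!cap.isOpened()) {
        cerr << "Could not open " << argv[1] << endl;
        return -1;
    }

    // Use the frame rate stored in the file; fall back to ~30 fps
    // if the container does not report one.
    double fps = cap.get(CAP_PROP_FPS);
    int delay = (fps > 0) ? cvRound(1000.0 / fps) : 33;

    namedWindow("video3", WINDOW_AUTOSIZE);
    Mat frame;
    for (;;) {
        cap >> frame;
        if (frame.empty())
            break;                // end of film
        imshow("video3", frame);
        if (waitKey(delay) == 27)
            break;                // ESC quits; waitKey also lets the window redraw
    }
    return 0;
}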
I am trying to stream a combined video stream, taken from two webcams and processed in OpenCV (the two frames are combined side by side), to an Android app.
I am trying to use RTSP to send the video stream from OpenCV to Android (using a GStreamer pipeline).
But I am stuck on how to send the .sdp file configuration to the client (the file name is live.sdp). Here is the code I have so far:
//basic
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
//opencv libraries
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/opencv.hpp"

using namespace cv;
using namespace std;

int main(int argc, char **argv){
    Mat im1;
    Mat im2;
    VideoCapture cap1(1);
    VideoCapture cap2(2);

    // GStreamer pipeline: encode to H.264, mux into MPEG-TS, payload as RTP
    // and push over UDP to port 5000
    VideoWriter video;
    video.open("appsrc ! videoconvert ! x264enc noise-reduction=10000 tune=zerolatency byte-stream=true threads=4 ! mpegtsmux ! rtpmp2tpay send-config=true config-interval=10 pt=96 ! udpsink host=localhost port=5000 -v"
               , 0, (double)20, Size(1280, 480), true);
    if(video.isOpened()){
        cout << "Video Writer is opened!" << endl;
    }else{
        cout << "Video Writer is Closed!" << endl;
        return -1;
    }

    while(1){
        cap1.grab();
        cap2.grab();
        bool bSuccess1 = cap1.read(im2);
        bool bSuccess2 = cap2.read(im1);
        if(!bSuccess1 || !bSuccess2){
            break; // a camera stopped delivering frames
        }

        // place the two frames side by side in a single image
        Size sz1 = im1.size();
        Size sz2 = im2.size();
        Mat im3(sz1.height, sz1.width + sz2.width, CV_8UC3);
        Mat left(im3, Rect(0, 0, sz1.width, sz1.height));
        im1.copyTo(left);
        Mat right(im3, Rect(sz1.width, 0, sz2.width, sz2.height));
        im2.copyTo(right);

        video << im3;
        //imshow("im3", im3);
        if(waitKey(10) == 27){
            break;
        }
    }

    cap1.release();
    cap2.release();
    video.release();
    return 0;
}
And the .sdp file configuration:
v=0
m=video 5000 RTP/AVP 96
c=IN IP4 localhost
a=rtpmap:96 MP2T/90000
I can play the stream with VLC locally, using
vlc live.sdp
but not over the network, using
vlc rtsp://localhost:5000/live.sdp
(presumably because the pipeline only pushes raw RTP over UDP to port 5000, so nothing is actually serving RTSP at that URL).
Using GStreamer with OpenCV solved the problem, but there is still a small lag.
I have this code, which captures an image from a webcam using OpenCV:
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <stdio.h>
using namespace std;
using namespace cv;
int main()
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened()) // check if we succeeded
        return -1;

    Mat meter_image;
    cap >> meter_image;
    imwrite("/boneCV-master/img.jpg", meter_image);
    return 0;
}
I get the following image as output.
Previously it was working fine, and I don't know what is happening. I tried the simplest code samples I could find by googling, but nothing worked. Please let me know what could be wrong with it.
Thanks in advance.
EDIT
One thing I forgot to mention is that I am working on a BeagleBone Black. This same code works fine on my Mac.
Maybe adding a frame check will help:
Mat meter_image;
while(meter_image.empty()){
    cap >> meter_image;
}
But there is a risk of an infinite loop if the camera never delivers a frame.
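A bounded retry avoids that risk; this is only a sketch, and maxAttempts is an arbitrary number I chose, not something from the original answer:

Mat meter_image;
const int maxAttempts = 30; // arbitrary retry limit
int attempts = 0;
while (meter_image.empty() && attempts < maxAttempts) {
    cap >> meter_image;     // keep trying until a non-empty frame arrives
    ++attempts;
}
if (meter_image.empty()) {
    cout << "No frame after " << maxAttempts << " attempts" << endl;
    return -1;
}
imwrite("/boneCV-master/img.jpg", meter_image);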
I am using OpenCV 3.0.0-rc1 on an Ubuntu 14.04 LTS guest in VirtualBox with a Windows 8 host. I have an extremely simple program (from the OpenCV documentation) to read frames from a webcam (a Logitech C170). Unfortunately, it doesn't work (I have tried three different webcams): it throws a "select timeout" error every couple of seconds and reads a frame, but the frame is black. Any ideas?
The code is the following:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
using namespace std;
using namespace cv;
// Main
int main(int argc, char **argv) {
    /* webcam setup */
    VideoCapture stream;
    stream.open(0);

    // check if video device has been initialized
    if (!stream.isOpened()) {
        fprintf(stderr, "Could not open Webcam device");
        return -1;
    }

    int image_width = 640; // image resolution
    int image_height = 480;
    Mat colorImage, currentImage;
    bool loop = true;

    /* infinite loop for video stream */
    while (loop) {
        loop = stream.read(colorImage);                  // read webcam stream
        cvtColor(colorImage, currentImage, CV_BGR2GRAY); // color to gray for current image
        imshow("Matches", currentImage);
        if (waitKey(30) >= 0)
            break;
    } // end stream while-loop

    return 0;
}
I found the problem: when using a webcam, make sure to connect it to the virtual machine using Devices->Webcams and NOT Devices->USB. Even though the webcam is detected as video0 when attaching it via Devices->USB, for some reason it does not work.
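Independent of that VirtualBox fix, the read loop can be made a bit more defensive so that cvtColor is never called on an empty frame while the device is timing out; this is just a sketch of the idea, not part of the original answer:

/* infinite loop for video stream */
for (;;) {
    if (!stream.read(colorImage) || colorImage.empty())
        continue;                                    // read failed or timed out; skip this frame
    cvtColor(colorImage, currentImage, CV_BGR2GRAY); // color to gray for current image
    imshow("Matches", currentImage);
    if (waitKey(30) >= 0)
        break;
}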
I'm capturing frames from a webcam using OpenCV in a C++ app, both on my Windows machine and on a Raspberry Pi (ARM, Debian Wheezy). The problem is CPU usage: I only need to process a frame about every 2 seconds, so no real-time live view is needed. But how do I achieve that? Which of the options below would you suggest?
Grab each frame, but process only some: This helps a bit. I get the most recent frames but this option has no significant impact on the CPU usage (less than 25%)
Grab/Process each frame but sleep: Good impact on CPU usage, but the frames that I get are old (5-10sec)
Create/Destroy VideoCapture in each cycle: After some cycles the application crashes - even though VideoCapture is cleaned up correctly.
Any other idea?
Thanks in advance
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <vector>
#include <unistd.h>
#include <stdio.h>
using namespace std;
int main(int argc, char *argv[])
{
    cv::VideoCapture cap(0); // 0=default, -1=any camera, 1..99=your camera
    if(!cap.isOpened())
    {
        cout << "No camera detected" << endl;
        return 0;
    }

    // set resolution & frame rate (FPS)
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 320);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
    cap.set(CV_CAP_PROP_FPS, 5);

    int i = 0;
    cv::Mat frame;
    for(;;)
    {
        if (!cap.grab())
            continue;

        // Version 1: dismiss frames
        i++;
        if (i % 50 != 0)
            continue;
        if (!cap.retrieve(frame) || frame.empty())
            continue;

        // ToDo: manipulate your frame (image processing)

        if (cv::waitKey(255) == 27)
            break; // stop on ESC key

        // Version 2: sleep
        //sleep(1);
    }
    return 0;
}
Create/Destroy VideoCapture in each cycle: not tested yet.
It may be a bit troublesome on Windows (and maybe on other operating systems too): the first frame grabbed after creating a VideoCapture is usually black or gray, but the second frame should be fine :)
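A very short sketch of that per-cycle open/read/release idea, including the first-frame discard (my own illustration, equally untested, and it does not address the crash mentioned in the question):

cv::VideoCapture cap(0);
cv::Mat frame;
cap >> frame;  // first frame after opening: frequently black or gray, throw it away
cap >> frame;  // second frame: should be a real image
cap.release();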
Other ideas:
- A modified version of idea no. 2: after the sleep, grab 2 frames. The first frame may be old, but the second should be new. This is untested and I'm generally not sure about it, but it is easy to check (see the sketch after this list).
- Alternatively, after the sleep you could grab frames in a loop (without sleeping) until you grab the same frame twice, but that may be hard to achieve, especially on a Raspberry Pi.
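A rough sketch of the "sleep, then grab two frames" idea from the first bullet (untested, as said above; the 2-second interval just mirrors the question):

#include <opencv2/opencv.hpp>
#include <unistd.h>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
    {
        std::cout << "No camera detected" << std::endl;
        return 0;
    }

    cv::Mat frame;
    for (;;) // add your own exit condition here
    {
        sleep(2);          // process roughly every 2 seconds
        cap.grab();        // discard the (possibly stale) buffered frame
        if (!cap.grab())   // grab again to get a fresh one
            continue;
        if (!cap.retrieve(frame) || frame.empty())
            continue;

        // ToDo: manipulate your frame (image processing)
    }
    return 0;
}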