OpenCV very slow - webcam - c++

I'm using OpenCV to send camera bytes via UDP to another computer.
The problem is that the camera delivers only 15 fps. If I send a static picture instead, it runs at over 200 fps.
My camera supports 30 fps, so does anyone know why this happens?
Here's my code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdio.h>
#include <fstream>
#include <arpa/inet.h>
#include <unistd.h>

using namespace std;
using namespace cv;

void createSocket(int port, const char* ip);
void sendData(const vector<uchar>& buff);

int _socket;
struct sockaddr_in serverAdress, clientAdress;

// argv[1] = IP address, argv[2] = port, argv[3] = JPEG quality
int main(int argc, char** argv)
{
    if (argc != 5)
    {
        cout << "Error\n";
        return -1;
    }
    createSocket(atoi(argv[2]), argv[1]);

    vector<uchar> buff;
    vector<int> param(2);
    param[0] = CV_IMWRITE_JPEG_QUALITY;
    param[1] = atoi(argv[3]);

    // VideoCapture
    VideoCapture cap(1);
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 360);
    if (!cap.isOpened())
    {
        cout << "Error\n";
        return -1;
    }

    Mat frame;
    while (true)
    {
        cap >> frame;
        // The original cvtColor(frame, frame, CV_8U) call was removed:
        // CV_8U is a depth flag, not a color-conversion code (its value, 0,
        // happens to mean BGR->BGRA). The captured frame is already 8-bit
        // BGR, which is what imencode expects.
        imencode(".jpeg", frame, buff, param);
        cout << "coded file size (jpg): " << buff.size() << endl;
        sendData(buff);
    }
    return 0;
}

void createSocket(int port, const char* ip)
{
    _socket = socket(AF_INET, SOCK_DGRAM, 0);
    if (_socket == -1)
    {
        cout << "Error";
        return;
    }
    serverAdress.sin_family = AF_INET;
    serverAdress.sin_addr.s_addr = inet_addr(ip);
    serverAdress.sin_port = htons(port);
}

void sendData(const vector<uchar>& buff)
{
    // A vector's storage is already contiguous, so there is no need to
    // copy it into a separate char array first.
    sendto(_socket, buff.data(), buff.size(), 0,
           (struct sockaddr*)&serverAdress, sizeof(serverAdress));
}

While 30 FPS might be a theoretical maximum for your camera, it might only be reachable outside on a sunny day. I have a sneaking suspicion that the sub-par acquisition frequency is caused by insufficient lighting: the sensor simply needs more time to catch enough photons to produce an image of the desired quality. I'm not sure what your acquisition conditions are, but things might improve significantly with more intense light. If you have access to the internal settings of the camera, you could also try reducing the exposure time. That would, however, require an increase in gain to keep proper contrast, and the images will exhibit more noise.
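If the driver exposes them, exposure and gain can be adjusted through VideoCapture::set. A minimal sketch, assuming a backend that honours these properties; the numeric values are made-up examples, since valid ranges and units are entirely driver-dependent, and some drivers may additionally require auto-exposure to be disabled (CV_CAP_PROP_AUTO_EXPOSURE) before a manual value takes effect:

```cpp
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::VideoCapture cap(1);
    if (!cap.isOpened())
        return 0;

    // Hypothetical values -- units and ranges vary per driver/backend,
    // and set() returns false if the backend ignores the property.
    cap.set(CV_CAP_PROP_EXPOSURE, -6); // shorter exposure -> faster frame delivery
    cap.set(CV_CAP_PROP_GAIN, 50);     // raise gain to keep contrast (adds noise)

    return 0;
}
```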

I believe UDP has a maximum theoretical payload of approximately 65 KB per datagram, and most applications keep datagrams much smaller than that (near the 1500-byte Ethernet MTU) for safer transmission. An uncompressed 640x360 frame in 24-bit color is about 675 KB, so even after JPEG compression a frame can easily exceed what fits into a single datagram, and sendto will fail outright for buffers larger than the UDP maximum. It is therefore unrealistic to expect the frame rate of your transmission through a constrained pipe to equal the frame rate at which the camera records. If you want a faster frame rate at the receiver, either reduce the resolution of your video, or use an appropriate library such as GStreamer, which offers H.264 compression of the video before transmission.
Another option would be to perform some processing on each frame via OpenCV (such as a simple filter), which should help reduce the size of each encoded frame.
Also, what do you mean by "if I send a picture, it works with over 200fps"? That statement does not appear to make sense.
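If an encoded frame does not fit into one datagram, the buffer has to be split across several sendto calls. A rough sketch of such chunking, reusing the socket setup from the question's code; the 1400-byte chunk size and the lack of any sequencing/reassembly header are simplifying assumptions (a real protocol would need both so the receiver can put frames back together):

```cpp
#include <algorithm>
#include <cstddef>
#include <netinet/in.h>
#include <sys/socket.h>
#include <vector>

// Number of datagrams needed for `total` bytes at `chunk` bytes each.
std::size_t numChunks(std::size_t total, std::size_t chunk)
{
    return (total + chunk - 1) / chunk; // ceiling division
}

// Send `buff` in chunks of at most `chunk` bytes over an already-created
// UDP socket. Assumes `sock` and `addr` are set up as in createSocket().
void sendChunked(int sock, const sockaddr_in& addr,
                 const std::vector<unsigned char>& buff,
                 std::size_t chunk = 1400) // stays under a typical 1500-byte MTU
{
    for (std::size_t off = 0; off < buff.size(); off += chunk)
    {
        std::size_t len = std::min(chunk, buff.size() - off);
        sendto(sock, buff.data() + off, len, 0,
               (const sockaddr*)&addr, sizeof(addr));
    }
}
```

The receiver side (tracking frame id and chunk index to reassemble each JPEG) is omitted here.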

Related

c++ laser detection using frame by frame analysis

I am trying to create a program that will open my camera, take a frame, and locate a laser point. I want to analyse each frame and find a red laser dot. Ideally, I would insert a line at the x and y coordinates of the centroid of the laser dot, updated with each new frame. The camera I am using is fairly high quality and takes pictures of size 2048x2048, so going through each pixel individually takes far too long; I need this software to run fairly quickly. I already have the code to open the camera and acquire frames.
laser image
What is the best way to write the code to locate the central point of the laser point within each frame?
Thanks,
#include <stdio.h>
#include <iostream>
#include "xiApiPlusOcv.hpp"
#include "opencv2/opencv.hpp"

using namespace cv;
using namespace std;

#define EXPECTED_IMAGES 200

int main(int argc, char* argv[])
{
    try
    {
        xiAPIplusCameraOcv cam;

        // Retrieve a handle to the camera device
        printf("Opening first camera...\n");
        cam.OpenFirst();
        cam.SetExposureTime(10000); // 10000 us = 10 ms
        // Note: the default parameters of each camera might differ between API versions
        cam.SetImageDataFormat(XI_RGB24);

        printf("Starting acquisition...\n");
        cam.StartAcquisition();

        for (int images = 0; images < EXPECTED_IMAGES; images++)
        {
            Mat cv_mat_image = cam.GetNextImageOcvMat();
            Mat outImg;
            cv::resize(cv_mat_image, outImg, cv::Size(), .35, .35);
            imshow("image", outImg);
            waitKey(20); // give HighGUI time to draw the frame
        }
        cam.StopAcquisition();
        cam.Close();
        printf("Done\n");
        waitKey(500);
    }
    catch (xiAPIplus_Exception& exp)
    {
        printf("Error:\n");
        exp.PrintError();
        waitKey(2000);
    }
    return 0;
}

OpenCV Stitching in real-time with Overlapping Stationary Cameras extremely slow

I'm trying to use the OpenCV Stitcher class to stitch frames from two cameras. I would just like to do this in the simplest way to start with and then get into the details of the Stitcher class.
The code is really simple, not many lines of code.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include "opencv2/imgcodecs.hpp"
#include "opencv2/stitching.hpp"
#include <opencv2/opencv.hpp>
#include <iostream>
#include <fstream>
#include <conio.h> // may have to modify this line if not using Windows

using namespace std;
using namespace cv;
//using namespace cv::detail;

bool try_use_gpu = true;
Stitcher::Mode mode = Stitcher::PANORAMA; //Stitcher::SCANS;
vector<Mat> imgs;

int main()
{
    // initialize and allocate memory to load the video stream from camera
    cv::VideoCapture camera0(2);
    cv::VideoCapture camera1(3);
    if (!camera0.isOpened()) return 1;
    if (!camera1.isOpened()) return 1;

    //Mat output_frame;
    cv::Mat3b output_frame;
    cv::Stitcher stitcher = cv::Stitcher::createDefault(true);
    //Ptr<Stitcher> stitcher = Stitcher::create(mode, try_use_gpu);

    Mat camera0_frame_resized;
    Mat camera1_frame_resized;

    while (true)
    {
        // grab and retrieve each frame of the video sequentially
        cv::Mat3b camera0_frame;
        cv::Mat3b camera1_frame;
        camera0 >> camera0_frame;
        camera1 >> camera1_frame;

        resize(camera0_frame, camera0_frame_resized, Size(640, 480));
        resize(camera1_frame, camera1_frame_resized, Size(640, 480));
        imgs.push_back(camera0_frame_resized);
        imgs.push_back(camera1_frame_resized);

        Stitcher::Status status = stitcher.stitch(imgs, output_frame);
        output_frame.empty();
        if (status != Stitcher::OK)
        {
            cout << "Can't stitch images, error code = " << int(status) << endl;
            return -1;
        }

        cv::imshow("Camera0", camera0_frame);
        cv::imshow("Camera1", camera1_frame);
        cv::imshow("Stitch", output_frame);

        // wait for 5 milliseconds
        int c = cvWaitKey(5);
        // exit the loop if the user presses the "Esc" key (ASCII value 27)
        if (27 == char(c)) break;
    }
    return 0;
}
I am using OpenCV 3.2 and Visual Studio Community Edition 2017 running on Windows 10.
The issue is that it is extremely slow: it seems to stitch the first frame, then it kind of gets stuck and nothing else happens; after seconds, maybe a minute, the next frames appear.
I am running on an extremely fast CPU and a top-of-the-line GPU.
I was not expecting fast stitching, but this is just way too slow, which gets me thinking that I am doing something wrong.
Any ideas on why it stitches only the first frame and then gets stuck?

VideoCapture select timeout with OpenCV 3.0.0-rc1

I am using OpenCV 3.0.0-rc1 on an Ubuntu 14.04 LTS guest in VirtualBox with a Windows 8 host. I have an extremely simple program, taken from the OpenCV documentation, to read frames from a webcam (Logitech C170). Unfortunately, it doesn't work (I have tried 3 different webcams): it throws a "select timeout" error every couple of seconds and reads a frame, but the frame is black. Any ideas?
The code is the following:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>

using namespace std;
using namespace cv;

// Main
int main(int argc, char **argv)
{
    /* webcam setup */
    VideoCapture stream;
    stream.open(0);
    // check if the video device has been initialized
    if (!stream.isOpened())
    {
        fprintf(stderr, "Could not open Webcam device");
        return -1;
    }

    int image_width = 640; // image resolution
    int image_height = 480;
    Mat colorImage, currentImage;
    bool loop = true;

    /* infinite loop for video stream */
    while (loop)
    {
        loop = stream.read(colorImage); // read webcam stream
        if (!loop || colorImage.empty())
            break; // guard against an empty frame before cvtColor
        cvtColor(colorImage, currentImage, CV_BGR2GRAY); // color to gray for current image
        imshow("Matches", currentImage);
        if (waitKey(30) >= 0) break;
    } // end stream while-loop
    return 0;
}
I found the problem: when using a webcam, make sure to connect it to the virtual machine using Devices->Webcams and NOT Devices->USB. Even though the webcam is detected as video0 when attached via Devices->USB, for some reason it does not work.

How to get frame feed of a video in OpenCV?

I need to get the frame feed of the video from OpenCV. My code runs well but I need to get the frames it is processing at each ms.
I am using cmake on Linux.
My code:
#include "cv.h"
#include "highgui.h"

using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    namedWindow("feed", 1);
    for (;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        imshow("feed", frame);
        if (waitKey(1) >= 0) break;
    }
    return 0;
}
I'm assuming you want to store the frames. I would recommend std::vector (as GPPK recommends). std::vector allows you to dynamically create an array. The push_back(Mat()) function adds an empty Mat object to the end of the vector and the back() function returns the last element in the array (which allows cap to write to it).
The code would look like this:
#include "cv.h"
#include "highgui.h"
#include <vector>

using namespace cv;
using namespace std; // Usually not recommended, but you're doing this with cv anyway

int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    vector<Mat> frame;
    namedWindow("feed", 1);
    for (;;)
    {
        frame.push_back(Mat());
        cap >> frame.back(); // get a new frame from camera
        imshow("feed", frame.back()); // show the most recent frame
        // Usually recommended to wait for 30 ms
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
Note that you can fill your RAM very quickly like this. For example, if you're grabbing 640x480 RGB frames every 30ms, you will hit 2GB of RAM in around 70s.
std::vector is a very useful container to know, and I would recommend checking out a tutorial on it if it is unfamiliar to you.
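If only the most recent frames are needed, a small bounded-buffer helper avoids that unbounded growth. A sketch; the helper name and the cap of 100 frames below are arbitrary choices, not an OpenCV facility:

```cpp
#include <cstddef>
#include <deque>

// Push a value, then discard the oldest entries so the container
// never holds more than maxSize elements.
template <typename T>
void pushBounded(std::deque<T>& buf, const T& value, std::size_t maxSize)
{
    buf.push_back(value);
    while (buf.size() > maxSize)
        buf.pop_front();
}
```

In the capture loop this would look like `pushBounded(frames, frame.clone(), 100)`; the clone() matters because `cap >> frame` may reuse the same underlying buffer between reads.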

OpenCV slowing down WebCam capture

I'm capturing frames from a webcam using OpenCV in a C++ app, both on my Windows machine and on a Raspberry Pi (ARM, Debian Wheezy). The problem is the CPU usage. I only need to process a frame about every 2 seconds, so no real-time live view. But how to achieve that? Which one would you suggest?
Grab each frame, but process only some: This helps a bit. I get the most recent frames but this option has no significant impact on the CPU usage (less than 25%)
Grab/Process each frame but sleep: Good impact on CPU usage, but the frames that I get are old (5-10sec)
Create/Destroy VideoCapture in each cycle: After some cycles the application crashes - even though VideoCapture is cleaned up correctly.
Any other idea?
Thanks in advance
#include <opencv2/opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <vector>
#include <unistd.h>
#include <stdio.h>

using namespace std;

int main(int argc, char *argv[])
{
    cv::VideoCapture cap(0); // 0=default, -1=any camera, 1..99=your camera
    if (!cap.isOpened())
    {
        cout << "No camera detected" << endl;
        return 0;
    }

    // set resolution & frame rate (FPS)
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 320);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
    cap.set(CV_CAP_PROP_FPS, 5);

    int i = 0;
    cv::Mat frame;
    for (;;)
    {
        if (!cap.grab())
            continue;

        // Version 1: dismiss frames
        i++;
        if (i % 50 != 0)
            continue;
        if (!cap.retrieve(frame) || frame.empty())
            continue;

        // ToDo: manipulate your frame (image processing)

        if (cv::waitKey(255) == 27)
            break; // stop on ESC key

        // Version 2: sleep
        //sleep(1);
    }
    return 0;
}
Create/Destroy VideoCapture in each cycle: not yet tested.
It may be a bit troublesome on Windows (and maybe on other operating systems too): the first frame grabbed after creating a VideoCapture is usually black or gray. The second frame should be fine :)
Other ideas:
- Modified idea no. 2: after the sleep, grab 2 frames. The first frame may be old, but the second one should be fresh. This isn't tested and generally I'm not sure about it, but it's easy to check.
- Alternatively, after the sleep you could grab frames in a while loop (without sleeping) until you grab the same frame twice (but that may be hard to achieve, especially on a Raspberry Pi).