Select Timeout error in Ubuntu 14.04 with camera OpenCV - C++

I am trying to use a webcam for my project. I use OpenCV 2.4.11 and a Logicool (Logitech) C270 webcam.
I wrote the very simple code shown below.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <stdio.h>

using namespace std;
using namespace cv;

int main()
{
    Mat frame;
    VideoCapture capture(0);      // open the default camera
    if (!capture.isOpened())      // check if we succeeded
        return -1;

    namedWindow("CVtest", 1);
    while (true) {
        capture >> frame;         // grab a frame from the camera
        imshow("CVtest", frame);
        waitKey(20);
    }
    return 0;
}
but this code does not work for me. I get these error messages:
jiehunt#ubuntu:~/work/test$ ./test.out
init done
opengl support available
select timeout
select timeout
I found this page, Select Timeout error in Ubuntu - Opencv, and I followed its steps:
sudo rmmod uvcvideo
sudo modprobe uvcvideo nodrop=1 timeout=6000 quirks=0x80
That did not work for me either.
When I turned on uvcvideo tracing, I got these error messages:
[13087.385542] uvcvideo: Marking buffer as bad (error bit set).
[13087.385545] uvcvideo: Frame complete (EOF found).
Now I don't know what else to try.
My camera's supported formats are shown below:
jiehunt#ubuntu:~/work/test$ v4l2-ctl --list-formats
ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'YUYV'
    Name        : YUV 4:2:2 (YUYV)

    Index       : 1
    Type        : Video Capture
    Pixel Format: 'MJPG' (compressed)
    Name        : MJPEG
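Given that the camera advertises a compressed MJPG mode, one thing worth experimenting with from the OpenCV side is requesting that format (or a lower resolution) to reduce USB bandwidth, which is a commonly reported cause of select timeouts. This is only an assumption, not a confirmed fix, and whether the V4L backend honors the requests depends on how OpenCV 2.4 was built:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    VideoCapture capture(0);
    if (!capture.isOpened())
        return -1;

    // Experimental: ask for the compressed MJPG format and a modest
    // resolution to lower the USB bandwidth the C270 needs. The backend
    // may silently ignore these requests depending on the build.
    capture.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M', 'J', 'P', 'G'));
    capture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    capture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);

    Mat frame;
    namedWindow("CVtest", 1);
    while (true) {
        capture >> frame;
        if (frame.empty())
            break;
        imshow("CVtest", frame);
        waitKey(20);
    }
    return 0;
}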

Related

Problem displaying live video in a Qt C++ application

I want to show a live stream of the camera connected to a Raspberry Pi in a Qt application (OS: Linux). After googling, I found out I must display the video inside a QLabel. When displaying a still image there's no problem and everything works fine, but when I want to display the live stream inside the QLabel, the live-stream window opens separately (not inside the QLabel). Would you tell me how to solve this problem? Here's my code:
void Dialog::on_Preview_clicked()
{
    command = "raspistill";
    args << "-o" << "/home/pi/Pictures/Preview/" + Date1.currentDateTime().toString() + ".jpg"
         << "-t" << QString::number(20000);
    Pic.start(command, args, QIODevice::ReadOnly);   // launch raspistill as an external process

    QPixmap pix("//home//pi//Pictures//Preview//test.jpg");
    ui->label_2->setPixmap(pix);
    ui->label_2->setScaledContents(true);
}
This code opens the video-capture screen and captures an image after 20 seconds. The only problem is that the capture screen (which could be used as a live stream) isn't being displayed inside "label_2". Is there any way to do this without using the OpenCV library? If not, tell me how to do it using OpenCV.
Thanks
It is pretty simple in OpenCV:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    VideoCapture cap(0);        // open the default camera
    if (!cap.isOpened())        // check if we succeeded
        return -1;

    Mat edges;
    namedWindow("edges", 1);
    for (;;)
    {
        Mat frame;
        cap >> frame;           // get a new frame from the camera
        imshow("edges", frame);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
Streaming the camera with OpenCV and showing it in a QLabel is possible.
When QCamera is not working and the project already uses OpenCV, cv::VideoCapture can stream the video instead of QCamera.
The problem can be decomposed into several steps. Basically, we need to:
Create a QThread for the streaming (don't let the GUI thread be blocked).
In the sub-thread, use cv::VideoCapture to capture each frame into a cv::Mat.
Convert the cv::Mat to a QImage (how to convert an opencv cv::Mat to qimage), as sketched below.
Pass the QImage frame from the sub-thread to the main GUI thread.
Paint the QImage on the QLabel.
I put the complete demo code on GitHub; it can paint the frame on a QLabel and on a QML VideoOutput.
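A minimal sketch of the Mat-to-QImage conversion step, assuming the capture delivers the usual 8-bit BGR frames (the GitHub demo may handle more formats):

#include <opencv2/core/core.hpp>
#include <QImage>

// Wrap the BGR data, reorder it to RGB, and detach from the Mat's buffer.
// The deep copy matters: otherwise the QImage keeps pointing at memory that
// the capture loop will overwrite on the next frame.
QImage matToQImage(const cv::Mat& mat)
{
    QImage view(mat.data, mat.cols, mat.rows,
                static_cast<int>(mat.step), QImage::Format_RGB888);
    return view.rgbSwapped();   // rgbSwapped() returns a new, detached image
}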

OpenCV: isOpened() always fails

I am taking my first steps with OpenCV and I am trying to run this piece of code. It is supposed to open the specified video in a new window and wait for the user to press ESC. I tried passing both the relative and absolute path to VideoCapture but VideoCapture::isOpened() always fails. Why is this happening?
If I pass 0 to VideoCapture and do NOT call isOpened(), then I get a nice little window.
Note that I am using VS15 and OpenCV 2.4 (with the x86 libs)
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(path_to_video); // open the video file
// VideoCapture cap(0);
if(!cap.isOpened()) // check if we succeeded
return -1;
namedWindow("Video",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
imshow("Video", frame);
if(waitKey(30) >= 0) break;
}
return 0;
}
EDIT: I solved this by reinstalling OpenCV and creating a new Visual Studio project. The above code miraculously started working.
If I pass 0 to VideoCapture and do NOT call isOpened(), then I get a
nice little window. Why is this happening?
Because the VideoCapture class has two different constructors. The one that takes a string attempts to read from a file. The one that takes an integer attempts to read from a device. Passing 0 to the second version specifies the default device / camera.
VideoCapture::open: Open a video file or a capturing device for video capturing.

C++: bool VideoCapture::open(const string& filename)
C++: bool VideoCapture::open(int device)

Parameters:
    filename – name of the opened video file (e.g. video.avi) or image
               sequence (e.g. img_%02d.jpg, which will read samples like
               img_00.jpg, img_01.jpg, img_02.jpg, ...)
    device – id of the opened video capturing device (i.e. a camera index).
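A minimal sketch contrasting the two overloads (the file name here is just a placeholder, not a real file):

#include "opencv2/opencv.hpp"

using namespace cv;

int main()
{
    VideoCapture fromFile("video.avi");   // string overload: tries to open a video file
    VideoCapture fromCamera(0);           // int overload: opens capture device 0, the default camera

    // In both cases isOpened() reports whether the open actually succeeded.
    return (fromFile.isOpened() && fromCamera.isOpened()) ? 0 : -1;
}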

Opencv capture video from multiple cameras

I am verifying multiple video captures with OpenCV 3.4. There are 3 cameras in use: one camera built into the laptop and 2 USB cameras connected to two separate USB ports. I can't make the video captures happen; it always throws exceptions like:
libv4l2: error setting pixformat: Device or resource busy
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline) in cvCaptureFromCAM_GStreamer, file /opt/opencv/modules/videoio/src/cap_gstreamer.cpp, line 890
VIDEOIO(cvCreateCapture_GStreamer(CV_CAP_GSTREAMER_V4L2, reinterpret_cast<char *>(index))): raised OpenCV exception:
/opt/opencv/modules/videoio/src/cap_gstreamer.cpp:890: error: (-2) GStreamer: unable to start pipeline in function cvCaptureFromCAM_GStreamer
libv4l2: error setting pixformat: Device or resource busy
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline) in cvCaptureFromCAM_GStreamer, file /opt/opencv/modules/videoio/src/cap_gstreamer.cpp, line 890
VIDEOIO(cvCreateCapture_GStreamer(CV_CAP_GSTREAMER_V4L2, reinterpret_cast<char *>(index))): raised OpenCV exception:
/opt/opencv/modules/videoio/src/cap_gstreamer.cpp:890: error: (-2) GStreamer: unable to start pipeline in function cvCaptureFromCAM_GStreamer
cap1 doesn't work
Source code is quite simple:
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;

int main(int, char**)
{
    VideoCapture cap0(0);   // open the default camera
    VideoCapture cap1(1);
    VideoCapture cap2(2);

    if (!cap0.isOpened()) {
        std::cout << "cap0 doesn't work" << std::endl;
        return -1;
    }
    if (!cap1.isOpened()) {
        std::cout << "cap1 doesn't work" << std::endl;
        return -1;
    }
    if (!cap2.isOpened()) {
        std::cout << "cap2 doesn't work" << std::endl;
        return -1;
    }

    Mat frame0;
    Mat frame1;
    Mat frame2;
    for (;;)
    {
        cap0 >> frame0;     // get a new frame from each camera
        cap1 >> frame1;
        cap2 >> frame2;
        imshow("Video0", frame0);
        imshow("Video1", frame1);
        imshow("Video2", frame2);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
All cameras are recognized:
crw-rw----+ 1 root video 81, 0 Jan 14 09:05 /dev/video0
crw-rw----+ 1 root video 81, 1 Jan 14 09:30 /dev/video1
crw-rw----+ 1 root video 81, 2 Jan 14 10:11 /dev/video2
I am using Ubuntu 14.04.
Any idea how to make multiple video captures happen?
The first thing with multiple USB cameras, and unrelated to OpenCV, is USB bandwidth: a single USB port rarely supports more than one camera's bandwidth, so it is recommended to connect each camera to a separate USB port, and NOT to a USB hub.
The second thing (my experience on Ubuntu 16.04 and 18.04; I haven't tested on Windows) is that some cameras take two indexes, so that with two cameras you can try opening camera index 0 for the first camera and camera index 2 for the second:
VideoCapture camA(0);
VideoCapture camB(2);
And third, it is good practice to control the apiPreference for the camera stream. For example, to use Linux V4L2 and remove GStreamer from the video pipeline, use this instead of the above (OpenCV must have been built with V4L2 support, e.g. the WITH_V4L CMake option):
VideoCapture camA(0, CAP_V4L2);
VideoCapture camB(2, CAP_V4L2);
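Putting those three points together, a minimal sketch for two cameras (assuming an OpenCV version that accepts the apiPreference constructor argument, as above, and that the second camera really does show up at index 2 on your machine):

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;

int main()
{
    // Force the V4L2 backend so GStreamer stays out of the pipeline,
    // and skip index 1 in case the first camera claims two device nodes.
    VideoCapture camA(0, CAP_V4L2);
    VideoCapture camB(2, CAP_V4L2);

    if (!camA.isOpened() || !camB.isOpened()) {
        std::cout << "could not open both cameras" << std::endl;
        return -1;
    }

    Mat frameA, frameB;
    for (;;)
    {
        camA >> frameA;
        camB >> frameB;
        if (frameA.empty() || frameB.empty()) break;
        imshow("camA", frameA);
        imshow("camB", frameB);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}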

Open USB camera with OpenCV and stream to rtsp server

I have a Logitech C920 camera connected via USB to an NVIDIA TX1. I am trying to both stream the camera feed over RTSP to a server and do some computer vision in OpenCV. I managed to read H264 video from the USB camera in OpenCV:
#include <iostream>
#include "opencv/cv.h"
#include <opencv2/opencv.hpp>
#include "opencv/highgui.h"

using namespace cv;
using namespace std;

int main()
{
    Mat img;
    VideoCapture cap;
    int heightCamera = 720;
    int widthCamera = 1280;

    // Start video capture on device 0
    cap.open(0);

    // Check if we succeeded
    if (!cap.isOpened())
    {
        cout << "Unable to open camera" << endl;
        return -1;
    }

    // Set frame width and height
    cap.set(CV_CAP_PROP_FRAME_WIDTH, widthCamera);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, heightCamera);
    cap.set(CV_CAP_PROP_FOURCC, CV_FOURCC('X','2','6','4'));

    // Set camera FPS
    cap.set(CV_CAP_PROP_FPS, 30);

    while (true)
    {
        // Copy the current frame to an image
        cap >> img;

        // Show the video stream
        imshow("Video stream", img);
        waitKey(1);
    }

    // Release the video stream
    cap.release();
    return 0;
}
I have also streamed the USB camera to an RTSP server using ffmpeg:
ffmpeg -f v4l2 -input_format h264 -timestamps abs -video_size hd720 -i /dev/video0 -c:v copy -c:a none -f rtsp rtsp://10.52.9.104:45002/cameraTx1
I tried to google how to combine these two functions, i.e. open the USB camera in OpenCV and use OpenCV to stream H264 RTSP video. However, all I can find is people trying to open an RTSP stream in OpenCV.
Has anyone successfully streamed H264 RTSP video using OpenCV with ffmpeg?
Best regards
Sondre
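One commonly used approach (not confirmed in this thread, just a sketch) is to keep OpenCV for the capture and vision work and pipe the processed frames to a separate ffmpeg process that publishes the RTSP stream. The sketch below assumes ffmpeg is on the PATH, reuses the RTSP URL from the question, and re-encodes with libx264 rather than copying the camera's H264 stream, since OpenCV only hands back decoded frames; the raw-video size and frame rate must match what the capture actually delivers.

#include <cstdio>
#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 720);

    // Launch ffmpeg reading raw BGR frames from stdin and publishing RTSP.
    FILE* ff = popen(
        "ffmpeg -f rawvideo -pixel_format bgr24 -video_size 1280x720 "
        "-framerate 30 -i - -c:v libx264 -preset ultrafast -f rtsp "
        "rtsp://10.52.9.104:45002/cameraTx1", "w");
    if (!ff) return -1;

    Mat img;
    while (true)
    {
        cap >> img;
        if (img.empty()) break;

        // ... run the computer-vision processing on img here ...

        // Push the raw frame bytes into ffmpeg's stdin.
        fwrite(img.data, 1, img.total() * img.elemSize(), ff);
    }

    pclose(ff);
    return 0;
}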

Capture image with OpenCV without GUI (on console Linux)

I am using console Linux and I have a camera-capture application. I need to capture an image without a GUI (the camera should start, capture some images, save them to disk and close). The following code works well on my laptop but doesn't start on the console. Any suggestions?
#include "cv.h"
#include "highgui.h"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
Mat frame;
namedWindow("feed",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
imshow("feed", frame);
imwrite("/home/zaif/output.png", frame);
if(waitKey(1) >= 0) break;
}
return 0;
}
After the release of OpenCV 2.4.6 there were bug fixes for video capture on Linux. Go straight to 2.4.6.2 and you should get the fixes. Specifically, this revision is probably the relevant fix for you, although there were a number of other revisions pertaining to video capture on Android that might affect Linux compilation too.
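On the GUI-less requirement itself: the namedWindow/imshow/waitKey calls are the only parts of the question's code that need a display, so one option (a sketch under that assumption, separate from the fix referred to above) is to drop the HighGUI window entirely and just grab and save frames; the numbered output path here is only an illustration:

#include <opencv2/opencv.hpp>
#include <cstdio>

using namespace cv;

int main()
{
    VideoCapture cap(0);                // open the default camera
    if (!cap.isOpened())
        return -1;

    Mat frame;
    for (int i = 0; i < 10; ++i)        // grab a handful of frames
    {
        cap >> frame;
        if (frame.empty())
            continue;

        char name[64];
        std::snprintf(name, sizeof(name), "/home/zaif/output_%02d.png", i);
        imwrite(name, frame);           // write straight to disk, no window needed
    }
    return 0;                           // the destructor releases the camera
}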