I have a Logitech C920 camera connected via USB to an NVIDIA TX1. I am trying to stream the camera feed over RTSP to a server while simultaneously doing some computer vision in OpenCV. I managed to read H264 video from the USB camera in OpenCV:
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    Mat img;
    VideoCapture cap;
    int heightCamera = 720;
    int widthCamera = 1280;

    // Start video capture on device 0
    cap.open(0);

    // Check if we succeeded
    if (!cap.isOpened())
    {
        cout << "Unable to open camera" << endl;
        return -1;
    }

    // Set frame width and height, and request H264 from the camera
    cap.set(CAP_PROP_FRAME_WIDTH, widthCamera);
    cap.set(CAP_PROP_FRAME_HEIGHT, heightCamera);
    cap.set(CAP_PROP_FOURCC, VideoWriter::fourcc('X','2','6','4'));

    // Set camera FPS
    cap.set(CAP_PROP_FPS, 30);

    while (true)
    {
        // Copy the current frame to an image
        cap >> img;

        // Show the video stream
        imshow("Video stream", img);
        if (waitKey(1) >= 0) break;
    }

    // Release video stream
    cap.release();
    return 0;
}
I have also streamed the USB camera to an RTSP server using ffmpeg:
ffmpeg -f v4l2 -input_format h264 -timestamps abs -video_size hd720 -i /dev/video0 -c:v copy -c:a none -f rtsp rtsp://10.52.9.104:45002/cameraTx1
I tried to google how to combine these two functions, i.e. open the USB camera in OpenCV and use OpenCV to stream the H264 video over RTSP. However, all I can find is people trying to open an RTSP stream in OpenCV.
Has anyone successfully streamed H264 video over RTSP using OpenCV with ffmpeg?
Best regards
Sondre
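One way to combine the two is to let OpenCV do the capture and the computer vision, and pipe the raw frames into an ffmpeg child process that encodes them and publishes to the same RTSP URL as the command above. Below is a minimal sketch, not a tested TX1 solution: the popen approach, the libx264 settings and the fixed 1280x720 BGR raw format are assumptions (on the TX1 you would likely swap in the hardware encoder), and the frames are assumed to be continuous Mats.

#include <cstdio>
#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened()) return -1;
    cap.set(CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(CAP_PROP_FRAME_HEIGHT, 720);
    cap.set(CAP_PROP_FPS, 30);

    // ffmpeg reads raw BGR24 frames on stdin, encodes them to H264,
    // and pushes the result to the RTSP server.
    FILE *ffmpeg = popen(
        "ffmpeg -f rawvideo -pixel_format bgr24 -video_size 1280x720 "
        "-framerate 30 -i - -c:v libx264 -preset ultrafast -tune zerolatency "
        "-f rtsp rtsp://10.52.9.104:45002/cameraTx1", "w");
    if (!ffmpeg) return -1;

    Mat frame;
    while (cap.read(frame))
    {
        // ... run the computer-vision code on 'frame' here ...

        // Forward the processed frame to ffmpeg.
        fwrite(frame.data, 1, frame.total() * frame.elemSize(), ffmpeg);

        imshow("Video stream", frame);
        if (waitKey(1) >= 0) break;
    }
    pclose(ffmpeg);
    return 0;
}

Note that this re-encodes on the CPU instead of copying the camera's H264 stream as the ffmpeg command above does; that is the price of getting decoded frames into OpenCV first.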
Related
I am verifying multiple video captures with OpenCV 3.4. There are 3 cameras in use: one built-in camera in the laptop and 2 USB cameras connected to two separate USB ports. I can't make the video captures happen; it always throws an exception:
libv4l2: error setting pixformat: Device or resource busy
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline) in cvCaptureFromCAM_GStreamer, file /opt/opencv/modules/videoio/src/cap_gstreamer.cpp, line 890
VIDEOIO(cvCreateCapture_GStreamer(CV_CAP_GSTREAMER_V4L2, reinterpret_cast<char *>(index))): raised OpenCV exception:
/opt/opencv/modules/videoio/src/cap_gstreamer.cpp:890: error: (-2) GStreamer: unable to start pipeline in function cvCaptureFromCAM_GStreamer
libv4l2: error setting pixformat: Device or resource busy
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline) in cvCaptureFromCAM_GStreamer, file /opt/opencv/modules/videoio/src/cap_gstreamer.cpp, line 890
VIDEOIO(cvCreateCapture_GStreamer(CV_CAP_GSTREAMER_V4L2, reinterpret_cast<char *>(index))): raised OpenCV exception:
/opt/opencv/modules/videoio/src/cap_gstreamer.cpp:890: error: (-2) GStreamer: unable to start pipeline in function cvCaptureFromCAM_GStreamer
cap1 doesn't work
The source code is quite simple:
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int, char**)
{
    VideoCapture cap0(0); // open the default camera
    VideoCapture cap1(1);
    VideoCapture cap2(2);
    if (!cap0.isOpened()) {
        std::cout << "cap0 doesn't work" << std::endl;
        return -1;
    }
    if (!cap1.isOpened()) {
        std::cout << "cap1 doesn't work" << std::endl;
        return -1;
    }
    if (!cap2.isOpened()) {
        std::cout << "cap2 doesn't work" << std::endl;
        return -1;
    }
    Mat frame0;
    Mat frame1;
    Mat frame2;
    for (;;)
    {
        cap0 >> frame0; // get a new frame from each camera
        cap1 >> frame1;
        cap2 >> frame2;
        imshow("Video0", frame0);
        imshow("Video1", frame1);
        imshow("Video2", frame2);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
All cameras are recognized:
crw-rw----+ 1 root video 81, 0 Jan 14 09:05 /dev/video0
crw-rw----+ 1 root video 81, 1 Jan 14 09:30 /dev/video1
crw-rw----+ 1 root video 81, 2 Jan 14 10:11 /dev/video2
I am using Ubuntu 14.04.
Any idea how to make multiple video captures happen?
The first issue with multiple USB cameras, unrelated to OpenCV, is USB bandwidth: a single USB port rarely provides enough bandwidth for more than one camera stream, so it is recommended to connect each camera to a separate USB port, and NOT to a USB hub.
The second thing (my experience on Ubuntu 16.04 and 18.04; I haven't tested on Windows) is that some cameras take up two indexes, so if there are two cameras, you can try opening camera index 0 for the first camera and camera index 2 for the second:
VideoCapture camA(0);
VideoCapture camB(2);
And third, it is good practice to control the apiPreference for the camera stream. For example, to use Linux V4L2 and remove GStreamer from the video pipeline, use this instead of the above (OpenCV must have been built with WITH_V4L enabled):
VideoCapture camA(0, CAP_V4L2);
VideoCapture camB(2, CAP_V4L2);
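Putting the three points together, a minimal sketch for two cameras on separate ports (the indexes 0 and 2 follow the double-index behaviour described above and may differ on your machine; the 640x480 resolution is an arbitrary choice to stay inside the USB bandwidth budget):

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    // Force the V4L2 backend so GStreamer stays out of the pipeline.
    VideoCapture camA(0, CAP_V4L2);
    VideoCapture camB(2, CAP_V4L2);
    if (!camA.isOpened() || !camB.isOpened()) {
        std::cout << "failed to open one of the cameras" << std::endl;
        return -1;
    }

    // Keep the resolution modest to reduce per-camera USB bandwidth.
    camA.set(CAP_PROP_FRAME_WIDTH, 640);
    camA.set(CAP_PROP_FRAME_HEIGHT, 480);
    camB.set(CAP_PROP_FRAME_WIDTH, 640);
    camB.set(CAP_PROP_FRAME_HEIGHT, 480);

    Mat frameA, frameB;
    for (;;) {
        camA >> frameA;
        camB >> frameB;
        imshow("camA", frameA);
        imshow("camB", frameB);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}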
I have purchased an ELP-1MP2CAM001, which shows up as two webcam devices on Windows. If I open the Windows default "Camera" app and Skype, I can display the feeds from both the left and right cameras at the same time, so I don't think it is a USB bandwidth issue with two cameras coming into the same port.
I'm using fairly standard code (shown below) to open both of these feeds, and it works successfully if I use two standard Microsoft HD3000 webcams instead of the single stereo camera.
I've tried a range of numbers as the argument to cap2(), so I don't think the second feed is hiding at index 10 or anything weird like that.
My questions are:
There must be some sort of on-board hub for the ELP cameras; do I need to do something different in OpenCV?
Could it be that both frames are accessible through cap(0)? This seems unlikely to me.
This question says I don't need to do anything special, but obviously I'm missing something.
Any help on this would be great.
Code:
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0);  // open the first camera
    VideoCapture cap2(1); // open the second camera
    cap.set(CAP_PROP_FRAME_WIDTH, 240);
    cap.set(CAP_PROP_FRAME_HEIGHT, 120);
    cap2.set(CAP_PROP_FRAME_WIDTH, 240);
    cap2.set(CAP_PROP_FRAME_HEIGHT, 120);
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    if (!cap2.isOpened()) // check if we succeeded
        return -1;
    Mat frame;
    Mat frame2;
    namedWindow("Frame", 1);
    namedWindow("Frame2", 1);
    for (;;)
    {
        cap >> frame; // get a new frame from each camera
        imshow("Frame", frame);
        cap2 >> frame2;
        imshow("Frame2", frame2);
        if (waitKey(30) >= 0) break; // finish on any key press
    }
    // the cameras will be deinitialized automatically by the VideoCapture destructors
    return 0;
}
I have the same camera as this, and I ran into the same problem before. Try changing the order of your code as below:
VideoCapture cap(0);
cap.set(CAP_PROP_FRAME_WIDTH, 240);
cap.set(CAP_PROP_FRAME_HEIGHT, 120);
VideoCapture cap2(1);
cap2.set(CAP_PROP_FRAME_WIDTH, 240);
cap2.set(CAP_PROP_FRAME_HEIGHT, 120);
I think it's a problem with USB bandwidth. In your code, you opened both cameras at full resolution at the beginning and only then changed their resolutions.
When you call VideoCapture cap(0);, cap opens at its default resolution of 1280*720 and already occupies most of the bandwidth. As a result, VideoCapture cap2(1); won't open camera cap2 successfully. Setting each camera to a low resolution immediately after opening it frees enough bandwidth for the next one.
Hope it helps.
According to the VideoCapture documentation, there is a function called cv::VideoCapture::grab:
The primary use of the function is in multi-camera environments, especially when the cameras do not have hardware synchronization. That is, you call VideoCapture::grab() for each camera and after that call the slower method VideoCapture::retrieve() to decode and get frame from each camera
You can try that with:
cap.grab();
cap.retrieve(...);
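In the two-camera loop from the question above, that would look like this sketch (replacing the cap >> frame and cap2 >> frame2 lines):

    // Grab both frames first, as close together in time as possible...
    cap.grab();
    cap2.grab();
    // ...then do the slower decode and copy for each camera.
    cap.retrieve(frame);
    cap2.retrieve(frame2);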
I am using OpenCV 3.0.0-rc1 on Ubuntu 14.04 LTS as a guest in VirtualBox with a Windows 8 host. I have an extremely simple program (from the OpenCV documentation) to read frames from a webcam (Logitech C170). Unfortunately, it doesn't work (I have tried 3 different webcams). It throws a "select timeout" error every couple of seconds and reads a frame, but the frame is black. Any ideas?
The code is the following:
#include <cstdio>
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

// Main
int main(int argc, char **argv) {
    /* webcam setup */
    VideoCapture stream;
    stream.open(0);
    // check if the video device has been initialized
    if (!stream.isOpened()) {
        fprintf(stderr, "Could not open webcam device");
        return -1;
    }
    int image_width = 640; // intended image resolution (currently unused)
    int image_height = 480;
    Mat colorImage, currentImage;
    /* infinite loop for the video stream */
    while (stream.read(colorImage)) { // read webcam stream; stop when a read fails
        cvtColor(colorImage, currentImage, COLOR_BGR2GRAY); // convert the current frame to grayscale
        imshow("Matches", currentImage);
        if (waitKey(30) >= 0) break;
    } // end of stream while-loop
    return 0;
}
I found the problem: when using a webcam, make sure to connect it to the virtual machine using Devices->Webcams and NOT Devices->USB. Even though the webcam is detected as video0 when attaching it via Devices->USB, for some reason it does not work.
I have a problem with playing a video file: why is it in slow motion?
How can I make it play at normal speed?
#include"opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap("eye.mp4");
// open the default camera
if (!cap.isOpened())
// check if we succeeded
return -1;
namedWindow("Video", 1);
while (1)
{
Mat frame;
cap >> frame;
imshow("Video", frame);
if (waitKey(10) == 'c')
break;
}
return 0;
}
VideoCapture isn't built for playback; it's just a way to grab frames from a video file or camera. Libraries that support playback, such as GStreamer or DirectShow, set a clock that controls the playback, so it can be configured to play as fast as possible or at the original frame rate.
In your snippet, the interval between frames comes from the time it takes to read a frame plus the waitKey(10). Try using waitKey(1); it should at least play faster. Ideally, you would wait 1000/fps milliseconds between frames.
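For example, a minimal sketch that paces playback from the file's own metadata (assuming the container reports a sensible FPS; the ~30 fps fallback is an arbitrary choice):

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    VideoCapture cap("eye.mp4");
    if (!cap.isOpened())
        return -1;

    // Query the frame rate stored in the file and derive the per-frame delay.
    double fps = cap.get(CAP_PROP_FPS);
    int delay = (fps > 0) ? cvRound(1000.0 / fps) : 33; // fall back to ~30 fps

    Mat frame;
    while (cap.read(frame))
    {
        imshow("Video", frame);
        if (waitKey(delay) == 'c') break;
    }
    return 0;
}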
I am using Linux from the console and I have a camera capture application. I need to capture an image without a GUI (the camera should start, capture some images, save them to disk and close). The following code works well on my laptop but doesn't start from the console. Any suggestions?
#include "cv.h"
#include "highgui.h"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
Mat frame;
namedWindow("feed",1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
imshow("feed", frame);
imwrite("/home/zaif/output.png", frame);
if(waitKey(1) >= 0) break;
}
return 0;
}
After the release of OpenCV 2.4.6 there were bug fixes for video capture on Linux. Go straight to 2.4.6.2 and you should get the fixes. Specifically, this revision is probably the relevant fix for you, although there were a number of other revisions pertaining to video capture on Android that might affect Linux compilation too.
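Independently of the OpenCV version, note that a console-only program should not create any windows at all: namedWindow, imshow and waitKey all require a GUI. A minimal headless sketch (the device index, the warm-up frame count and the output path are assumptions carried over from your code):

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened())
        return -1;

    Mat frame;
    // Grab a few warm-up frames so the camera's auto-exposure can settle.
    for (int i = 0; i < 10; ++i)
        cap >> frame;

    if (frame.empty())
        return -1;

    imwrite("/home/zaif/output.png", frame); // save to disk, no GUI calls
    return 0; // the camera is released by the VideoCapture destructor
}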