C++ OpenCV: get encoded webcam stream

I am currently working on a project that captures video from a webcam and sends the encoded stream via UDP for real-time streaming.
#include "opencv2/highgui/highgui.hpp"
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char* argv[])
{
    VideoCapture cap(0); // open the video camera no. 0

    double dWidth = cap.get(CV_CAP_PROP_FRAME_WIDTH);   // get the width of frames of the video
    double dHeight = cap.get(CV_CAP_PROP_FRAME_HEIGHT); // get the height of frames of the video

    while (1)
    {
        Mat frame;
        bool bSuccess = cap.read(frame); // read a new frame from video

        if (!bSuccess) // if not successful, break loop
        {
            cout << "Cannot read a frame from video stream" << endl;
            break;
        }
    }
    return 0;
}
Some people say that the frame obtained from cap.read(frame) is already the decoded frame; I have no idea how and when that happens. What I want is the encoded frame or stream. What should I do to get it? Should I encode it again?

According to the docs, calling VideoCapture::read() is equivalent to calling VideoCapture::grab() then VideoCapture::retrieve().
The docs for the retrieve function say it does indeed decode the frame.
Why not just use the decoded frame? Presumably you'd be decoding it at the far end in any case.

The OpenCV API does not give access to the encoded frames.
You will have to use a more low-level library, probably device- and platform-dependent. If your OS is Linux, Video4Linux2 may be an option; there must be equivalent libraries for Windows/macOS. You may also have a look at mjpg-streamer, which does something very similar to what you want to achieve (on Linux only).
Note that the exact encoding of the image will depend on your webcam: some USB webcams support MJPEG compression (or even H.264), but others are only able to send raw data (usually in a YUV colorspace).
Another option is to grab the decoded image with OpenCV and re-encode it, for example with imencode. It has the advantages of simplicity and portability, but re-encoding the image will use more resources.
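If you do re-encode and push the result over UDP, keep in mind that one encoded frame is usually larger than a single datagram, so you will need to split it into chunks and reassemble them at the far end. A minimal sketch of such framing (this is not part of OpenCV; the 1400-byte payload size and the 8-byte header layout are my own assumptions, and byte order is host order for brevity):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Split one encoded frame into datagram-sized chunks.
// Each chunk carries a small header: frame id (4 bytes),
// chunk index (2 bytes), chunk count (2 bytes).
std::vector<std::vector<uint8_t>> packetize(uint32_t frameId,
                                            const std::vector<uint8_t>& encoded,
                                            size_t maxPayload = 1400)
{
    const uint16_t count =
        static_cast<uint16_t>((encoded.size() + maxPayload - 1) / maxPayload);
    std::vector<std::vector<uint8_t>> packets;
    for (uint16_t i = 0; i < count; ++i) {
        size_t off = static_cast<size_t>(i) * maxPayload;
        size_t len = std::min(maxPayload, encoded.size() - off);
        std::vector<uint8_t> p(8 + len);
        std::memcpy(p.data(),     &frameId, 4);
        std::memcpy(p.data() + 4, &i,       2);
        std::memcpy(p.data() + 6, &count,   2);
        std::memcpy(p.data() + 8, encoded.data() + off, len);
        packets.push_back(std::move(p));
    }
    return packets;
}

// Receiver side: strip the headers and concatenate the payloads in order.
// (A real receiver would use the header fields to reorder/drop late chunks.)
std::vector<uint8_t> reassemble(const std::vector<std::vector<uint8_t>>& packets)
{
    std::vector<uint8_t> out;
    for (const auto& p : packets)
        out.insert(out.end(), p.begin() + 8, p.end());
    return out;
}
```

In practice you would pass imencode's output buffer to packetize() and send each chunk with sendto(); the header is what lets the receiver cope with UDP's lack of message boundaries across datagrams.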

Related

USB camera encoded data stream

Currently, I am working on a streaming project. I need to grab frames from a USB camera and send them over TCP.
To open the USB camera video stream I'm using cv::VideoCapture. This allows me to read already-decoded frames. According to this question I understood that there is no way to get encoded frame data using cv::VideoCapture, so I need to encode each frame again using cv::imencode and send it however I need. The problem is that I can only encode frames to the specific formats listed here, and if I use either .jpg or .png the file size is still quite big, and on the receiving side the frame rate is very poor.
My question is: is there any way to get MJPEG- or H.264-encoded data directly,
or can you suggest a better way to encode frames?
OpenCV 3.4.3, camera RICOH THETA V, language C++.
My code:
void Streamer::start()
{
    cv::Mat img;
    cv::VideoCapture cap(0);
    cap.set(CV_CAP_PROP_FOURCC, CV_FOURCC('H', '2', '6', '4'));
    if (!cap.isOpened())
        throw std::invalid_argument("No device found.");

    // imencode expects (flag, value) pairs, e.g. the PNG compression level
    // (0-9); CV_LOAD_IMAGE_COLOR is an imread flag and does not belong here
    std::vector<int> format_params;
    format_params.push_back(CV_IMWRITE_PNG_COMPRESSION);
    format_params.push_back(3);

    for (;;)
    {
        cap.read(img);
        cv::imencode(".png", img, buffer_, format_params);
        std::string strbuf(buffer_.begin(), buffer_.end());
        server_->sendString(socket, strbuf);
    }
    cap.release();
}

OpenCV multiple cameras single image

I'm trying to access multiple USB cameras in OpenCV on macOS 10.11.
My goal is to connect up to 20 cameras to the PC via USB quad-channel extensions and take single images. I do not need live streaming.
I tried the following code, and I can take a single image from all cameras (currently only 3, via one USB controller).
The question is: does OpenCV stream live video from the USB cameras all the time, or does grab() store an image on the camera which can then be retrieved with retrieve()?
I couldn't find any information on whether OpenCV applies the grab() command to its internal video buffer or to the camera.
int main(int argument_number, char* argument[])
{
    std::vector<int> cameraIDs{0, 1, 2};
    std::vector<cv::VideoCapture> cameraCaptures;
    std::vector<std::string> nameCaptures{"a", "b", "c"};

    // Load all cameras
    for (int i = 0; i < cameraIDs.size(); i++)
    {
        cv::VideoCapture camera(cameraIDs[i]);
        if (!camera.isOpened()) return 1;
        camera.set(CV_CAP_PROP_FRAME_WIDTH, 640);
        camera.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
        cameraCaptures.push_back(camera);
    }

    cv::namedWindow("a");

    while (true)
    {
        int c = cv::waitKey(2);
        if (27 == char(c)) // if ESC pressed: grab new images and display them
        {
            for (std::vector<cv::VideoCapture>::iterator it = cameraCaptures.begin(); it != cameraCaptures.end(); it++)
            {
                (*it).grab();
            }
            int i = 0;
            for (std::vector<cv::VideoCapture>::iterator it = cameraCaptures.begin(); it != cameraCaptures.end(); it++)
            {
                cv::Mat3b frame;
                (*it).retrieve(frame);
                cv::imshow(nameCaptures[i++], frame);
            }
        }
    }
    return 0;
}
Could you please make your requirement clearer: do you only want single frames from the feeds, or do you want the streams to be connected all the time?
OpenCV camera capture is always in running mode unless you release the capture device. So if you only want one frame from a device, it's better to release that device once you have retrieved the frame.
Another point: instead of using grab() and retrieve() in a multi-camera environment, it's better to use read(), which combines both of the above methods and reduces the overhead of decoding the streams. So if you want, say, the frame at the 2-second position from each of the cameras, then in the time domain they are captured pretty close together: the frame from cam1 at 2.0s, the frame from cam2 at 2.00001s, and the frame from cam3 at 2.00015s (time multiplexing / multithreading, handled internally by OpenCV).
I hope the explanation is clear.
Thanks

Writing a video file using H.264 compression in OpenCV

How do I write a video using H.264 compression with the VideoWriter class in OpenCV? I basically want to get a video from the webcam and save it after a character is pressed. The output video file is huge when using MPEG-4 Part 2 compression.
You can certainly use the VideoWriter class, but you need to use the correct FourCC code that represents the H.264 standard. FourCC stands for Four Character Code, an identifier for a video codec, compression format, colour or pixel format used in media files.
Specifically, when you create a VideoWriter object, you specify the FourCC code when constructing it. Consult the OpenCV docs for more details: http://docs.opencv.org/trunk/modules/highgui/doc/reading_and_writing_images_and_video.html#videowriter-videowriter
I'm assuming you're using C++, and so the definition of the VideoWriter constructor is:
VideoWriter::VideoWriter(const String& filename, int fourcc,
double fps, Size frameSize, bool isColor=true)
filename is the output video file, fourcc is the FourCC code for the codec you wish to use, fps is the desired frame rate, frameSize is the desired dimensions of the video, and isColor specifies whether or not you want the video to be in colour. Even though FourCC uses four characters, OpenCV has a utility that parses FourCC and outputs a single integer ID which is used as a lookup to write the correct video format to file. You use the CV_FOURCC function and specify four single characters, each corresponding to a single character in the FourCC code of the codec you want. Note that CV_FOURCC is for OpenCV 2.x; it is recommended you use cv::VideoWriter::fourcc for OpenCV 3.x and beyond.
Specifically, you would call it like this:
int fourcc = CV_FOURCC('X', 'X', 'X', 'X');           // OpenCV 2.x
int fourcc = VideoWriter::fourcc('X', 'X', 'X', 'X'); // OpenCV 3.x and beyond
Replace each X with a character belonging to the FourCC (in order). Because you want the H.264 standard, you would create a VideoWriter object like so:
#include <iostream>                    // for standard I/O
#include <string>                      // for strings
#include <opencv2/core/core.hpp>       // Basic OpenCV structures (cv::Mat)
#include <opencv2/highgui/highgui.hpp> // Video write

using namespace std;
using namespace cv;

int main()
{
    VideoWriter outputVideo; // For writing the video

    int width = ...;              // Declare width here
    int height = ...;             // Declare height here
    Size S = Size(width, height); // Declare Size structure

    // Open up the video for writing
    const string filename = ...; // Declare name of file here

    // Declare FourCC code - OpenCV 2.x
    // int fourcc = CV_FOURCC('H','2','6','4');

    // Declare FourCC code - OpenCV 3.x and beyond
    int fourcc = VideoWriter::fourcc('H','2','6','4');

    // Declare FPS here
    double fps = ...;
    outputVideo.open(filename, fourcc, fps, S);

    // Put your processing code here
    // ...

    // Logic to write frames here... see below for more details
    // ...

    return 0;
}
Alternatively, you could simply do this when declaring your VideoWriter object:
VideoWriter outputVideo(filename, fourcc, fps, S);
If you use the above, it's not required that you call open, as this will automatically open the writer for writing frames to file.
If you're not sure whether H.264 is supported on your computer, specify -1 as the FourCC code; a window should pop up when you run the code that displays all of the video codecs available on your computer. Note that this only works on Windows; Linux and macOS don't show this window when you specify -1. In other words:
VideoWriter outputVideo(filename, -1, fps, S);
You can choose whichever codec is most suitable should H.264 not exist on your computer. Once that is done, OpenCV will create the right FourCC code to pass into the VideoWriter constructor so that you get a VideoWriter instance that writes that type of video to file.
Once you have a frame ready, stored in frm for writing to the file, you can do either:
outputVideo << frm;
OR
outputVideo.write(frm);
As a bonus, here's a tutorial on how to read/write videos in OpenCV: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html - It's written for Python, but what's good to know is that near the bottom of the link there is a list of FourCC codes that are known to work for each operating system. By the way, the FourCC code they specify for the H.264 standard is actually 'X','2','6','4', so if 'H','2','6','4' doesn't work, replace H with X.
Another small note: if you are using Mac OS, then what you need to use is 'A','V','C','1' or 'M','P','4','V'. From experience, 'H','2','6','4' or 'X','2','6','4' doesn't seem to work when specifying the FourCC code.
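For reference, a FourCC is nothing more than the four characters packed into a 32-bit integer, least significant byte first. A standalone sketch of that packing (my own helper name, shown without OpenCV, but equivalent to what CV_FOURCC / cv::VideoWriter::fourcc compute):

```cpp
#include <cstdint>

// Pack four characters into a 32-bit FourCC, least significant byte first.
// This mirrors the arithmetic behind OpenCV's CV_FOURCC macro and
// cv::VideoWriter::fourcc.
constexpr int32_t fourcc(char c1, char c2, char c3, char c4)
{
    return  static_cast<int32_t>(static_cast<uint8_t>(c1))        |
           (static_cast<int32_t>(static_cast<uint8_t>(c2)) << 8)  |
           (static_cast<int32_t>(static_cast<uint8_t>(c3)) << 16) |
           (static_cast<int32_t>(static_cast<uint8_t>(c4)) << 24);
}
```

So fourcc('H','2','6','4') yields 0x34363248: the ASCII bytes of "H264" read from the low byte upward. This is why an integer, not a string, is what the VideoWriter constructor accepts.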
As rayryeng states, you need to pass the correct FourCC code to VideoWriter.
For H.264 most people use AVC, which would look like this:
cv2.VideoWriter_fourcc('a','v','c','1')
mp4v also seems to work for many. If neither works, check your list of available codecs.

Video from 2 cameras (for Stereo Vision) using OpenCV, but one of them is lagging

I'm trying to create Stereo Vision using 2 logitech C310 webcams.
But the result is not good enough: one of the videos lags behind the other.
Here is my openCV program using VC++ 2010:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    try
    {
        VideoCapture cap1;
        VideoCapture cap2;

        cap1.open(0);
        cap1.set(CV_CAP_PROP_FRAME_WIDTH, 1040.0);
        cap1.set(CV_CAP_PROP_FRAME_HEIGHT, 920.0);

        cap2.open(1);
        cap2.set(CV_CAP_PROP_FRAME_WIDTH, 1040.0);
        cap2.set(CV_CAP_PROP_FRAME_HEIGHT, 920.0);

        for (;;)
        {
            Mat frame, frame1;
            cap1 >> frame;
            cap2 >> frame1;

            transpose(frame, frame);
            flip(frame, frame, 1);
            transpose(frame1, frame1);
            flip(frame1, frame1, 1);

            imshow("Img1", frame);
            imshow("Img2", frame1);

            if (waitKey(1) == 'q')
                break;
        }
        cap1.release();
        cap2.release();
    }
    catch (cv::Exception& e)
    {
        cout << e.what() << endl;
    }
    return 0;
}
How can I avoid the lagging?
You're probably saturating the USB bus.
Try plugging one camera in the front and the other in the back (in the hope of landing on different buses),
or reduce the frame size / FPS to generate less traffic.
I'm afraid you can't do it like this. The OpenCV VideoCapture is really only meant for testing; it uses the simplest underlying operating-system features and doesn't really try to do anything clever.
In addition, simple webcams aren't very controllable or sync-able, even if you can find a lower-level API to talk to them.
If you need to use simple USB webcams for a project, the easiest way is to have an external timed LED flashing at a few hertz, detect the light in each camera, and use that to sync the frames.
I know this post is getting quite old, but I had to deal with the same problem recently, so...
I don't think you were saturating the USB bus. If you were, you would have had an explicit message in the terminal. Actually, the creation of a VideoCapture object is quite slow, and I'm fairly sure that's the reason for your lag: you initialize your first VideoCapture object cap1, cap1 starts grabbing frames, you initialize your second VideoCapture cap2, cap2 starts grabbing frames, and only then do you start getting your frames from cap1 and cap2. But the first frame stored by cap1 is older than the one stored by cap2, so... you've got a lag.
What you should do, if you really want to use OpenCV for this, is add some threads: one dealing with left frames and the other with right frames, each doing nothing but saving the last frame received (so you'll always deal with the newest frames only). When you want frames, you just take them from these threads.
I've done a little something you can use if you need it here.
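The "one thread per camera, keep only the newest frame" idea above can be sketched without any camera at all. In this sketch std::thread stands in for the grab loop and a plain int stands in for a cv::Mat; the class and function names are my own, not taken from the linked code:

```cpp
#include <functional>
#include <mutex>
#include <thread>

// Holds only the most recent frame a grabber thread has produced,
// so the consumer never works on stale, queued-up frames.
class LatestFrame {
public:
    void store(int frame) {
        std::lock_guard<std::mutex> lock(m_);
        frame_ = frame;
        has_frame_ = true;
    }
    // Returns false until the grabber has stored at least one frame.
    bool load(int& out) const {
        std::lock_guard<std::mutex> lock(m_);
        if (!has_frame_) return false;
        out = frame_;
        return true;
    }
private:
    mutable std::mutex m_;
    int frame_ = 0;
    bool has_frame_ = false;
};

// Simulated grab loop: stores frames 1..n as fast as it can,
// overwriting older ones. A real version would call cap.read() here.
void grab_loop(LatestFrame& slot, int n) {
    for (int i = 1; i <= n; ++i)
        slot.store(i);
}
```

With one LatestFrame per camera and one grab_loop thread each, the main loop simply calls load() on both slots, which pairs up the two newest frames instead of two frames buffered at different start times.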

Reading frames from CIF video in OpenCV 2.4.2

The problem is the following:
There is a CIF video file that is supposed to be processed with OpenCV.
Unfortunately, I am not able to read frames from this video. The following code
cv::VideoCapture cap = cv::VideoCapture("foreman.cif");
if (!cap.isOpened()) {
    std::cout << "Open file error" << std::endl;
    return -1;
}
prints Open file error to the console.
Is there any way to grab frames from a CIF video using OpenCV?
I don't think OpenCV can read raw YUV streams directly.
You can use MEncoder to convert it to an AVI, compressed or not.
Or you can use regular C/C++ to read the data blocks, copy them into an image, and use cvCvtColor() to convert YUV to BGR.
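For the "read the data blocks yourself" route: a raw CIF file in the common I420 (planar YUV 4:2:0) layout is just a sequence of fixed-size frames, so all the offsets can be computed up front. A sketch of that bookkeeping (assuming I420; your file's layout may differ):

```cpp
#include <cstddef>

// CIF resolution.
constexpr std::size_t kWidth  = 352;
constexpr std::size_t kHeight = 288;

// I420 layout: a full-resolution Y plane followed by
// quarter-resolution U and V planes.
constexpr std::size_t kYSize     = kWidth * kHeight;             // luma bytes
constexpr std::size_t kUSize     = (kWidth / 2) * (kHeight / 2); // chroma bytes
constexpr std::size_t kVSize     = kUSize;
constexpr std::size_t kFrameSize = kYSize + kUSize + kVSize;     // bytes per frame

// Byte offset of frame n inside the raw file.
constexpr std::size_t frame_offset(std::size_t n) { return n * kFrameSize; }

// Offsets of each plane inside one frame.
constexpr std::size_t y_offset() { return 0; }
constexpr std::size_t u_offset() { return kYSize; }
constexpr std::size_t v_offset() { return kYSize + kUSize; }
```

From there you would read kFrameSize bytes per frame into a buffer, wrap it in a single-channel cv::Mat of size (kHeight * 3 / 2) x kWidth, and call cv::cvtColor with the YUV-I420-to-BGR conversion code to get a displayable image.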