Create an RTP/RTSP or HTTP stream using OpenCV frames - C++

I have a custom board which takes an input stream from an IP camera, and the application performs facial detection on the video stream using OpenCV.
My use case is to provide an output stream over the network that can be opened with VLC on any device connected to the same network.
I tried writing OpenCV frames through VideoWriter:
VideoWriter outStream("/home/user/frames/frame.mjpg", CV_FOURCC('M','J','P','G'), 25 /* an actual frame rate, not the CAP_PROP_FPS constant */, img.size(), true);
if (outStream.isOpened()) {
    outStream.write(img);
}
and creating a stream using mjpg_streamer like:
mjpg_streamer -i "input_file.so -f /home/user/frames" -o "output_http.so -w /usr/local/www -p 5241"
But the above approach introduces a lot of latency.
I can't use imshow because my hardware has no video output port.
Here is my code : https://pastebin.com/s66xGjAC

I would suggest using imwrite() to save JPEG images into the directory watched by mjpg_streamer. Write low-quality JPEGs: set CV_IMWRITE_JPEG_QUALITY to the lowest value that still satisfies your requirement.
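A minimal sketch of that idea (the quality value of 40 and the file name are assumptions; img is the processed frame from your code, and the directory is the one passed to input_file.so):

#include <opencv2/opencv.hpp>
#include <vector>

// Write each processed frame as a low-quality JPEG into the folder
// that mjpg_streamer's input_file.so plugin is watching.
std::vector<int> jpegParams = { CV_IMWRITE_JPEG_QUALITY, 40 }; // lower value = smaller files, less latency
cv::imwrite("/home/user/frames/frame.jpg", img, jpegParams);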

Related

Use Source Reader to get H264 samples from webcam source

When using the Source Reader I can get decoded YUV samples from an mp4 file source (example code).
How can I do the opposite with a webcam source, i.e. use the Source Reader to provide encoded H264 samples? My webcam supports the RGB24 and I420 pixel formats, and I can get H264 samples if I manually wire up the H264 MFT transform. But it seems as if the Source Reader should be able to take care of the transform for me. I get an error whenever I attempt to set an MF_MT_SUBTYPE of MFVideoFormat_H264 on the Source Reader.
Sample snippet is shown below and the full example is here.
// Get the first available webcam.
CHECK_HR(MFCreateAttributes(&videoConfig, 1), "Error creating video configuration.");
// Request video capture devices.
CHECK_HR(videoConfig->SetGUID(
MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID), "Error initialising video configuration object.");
CHECK_HR(videoConfig->SetGUID(MF_MT_SUBTYPE, WMMEDIASUBTYPE_I420),
"Failed to set video sub type to I420.");
CHECK_HR(MFEnumDeviceSources(videoConfig, &videoDevices, &videoDeviceCount), "Error enumerating video devices.");
CHECK_HR(videoDevices[WEBCAM_DEVICE_INDEX]->GetAllocatedString(MF_DEVSOURCE_ATTRIBUTE_FRIENDLY_NAME, &webcamFriendlyName, &nameLength),
"Error retrieving video device friendly name.\n");
wprintf(L"First available webcam: %s\n", webcamFriendlyName);
CHECK_HR(videoDevices[WEBCAM_DEVICE_INDEX]->ActivateObject(IID_PPV_ARGS(&pVideoSource)),
"Error activating video device.");
CHECK_HR(MFCreateAttributes(&pAttributes, 1),
"Failed to create attributes.");
// Adding this attribute creates a video source reader that will handle
// colour conversion and avoid the need to manually convert between RGB24 and RGB32 etc.
CHECK_HR(pAttributes->SetUINT32(MF_SOURCE_READER_ENABLE_VIDEO_PROCESSING, 1),
"Failed to set enable video processing attribute.");
CHECK_HR(pAttributes->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video), "Failed to set major video type.");
// Create a source reader.
CHECK_HR(MFCreateSourceReaderFromMediaSource(
pVideoSource,
pAttributes,
&pVideoReader), "Error creating video source reader.");
MFCreateMediaType(&pSrcOutMediaType);
CHECK_HR(pSrcOutMediaType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video), "Failed to set major video type.");
CHECK_HR(pSrcOutMediaType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264), "Error setting video sub type.");
CHECK_HR(pSrcOutMediaType->SetUINT32(MF_MT_AVG_BITRATE, 240000), "Error setting average bit rate.");
CHECK_HR(pSrcOutMediaType->SetUINT32(MF_MT_INTERLACE_MODE, 2), "Error setting interlace mode.");
CHECK_HR(pVideoReader->SetCurrentMediaType((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, NULL, pSrcOutMediaType),
"Failed to set media type on source reader.");
CHECK_HR(pVideoReader->GetCurrentMediaType((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, &pFirstOutputType),
"Error retrieving current media type from first video stream.");
std::cout << "Source reader output media type: " << GetMediaTypeDescription(pFirstOutputType) << std::endl << std::endl;
Output:
bind returned success
First available webcam: Logitech QuickCam Pro 9000
Failed to set media type on source reader. Error: C00D5212.
finished.
The Source Reader does not look like a suitable API here. It is an API that implements "half of the pipeline", which includes the necessary decoding but not encoding. The other half is the Sink Writer API, which is capable of handling encoding and can encode H.264.
Another option, unless you are developing a UWP project, is the Media Session API, which implements a pipeline end to end.
Even though technically (in theory) you could have an encoding MFT as part of a Source Reader pipeline, the Source Reader API itself is not flexible enough to add encoding-style transforms based on requested media types.
So, one solution could be to have the Source Reader read with the necessary decoding (e.g. up to RGB32 or NV12 video frames), then a Sink Writer to manage encoding, with an appropriate media sink on its end (or a Sample Grabber as the media sink). Another solution is to put the Media Foundation primitives into a Media Session pipeline, which can manage both the decoding and encoding parts connected together.
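To make the split concrete, here is a rough, untested sketch of the Sink Writer half (not your code; the output URL "capture.mp4" and WIDTH, HEIGHT, FPS are placeholder assumptions). The Source Reader keeps delivering uncompressed frames (e.g. NV12), and the Sink Writer owns the H.264 encoder:

// Create a Sink Writer that encodes to H.264 (here into an MP4 file; a custom
// media sink or Sample Grabber could sit in its place for network output).
IMFSinkWriter *pWriter = NULL;
IMFMediaType *pOutType = NULL, *pInType = NULL;
DWORD sinkStreamIndex = 0;

CHECK_HR(MFCreateSinkWriterFromURL(L"capture.mp4", NULL, NULL, &pWriter),
    "Failed to create sink writer.");

// The encoded (output) type the writer should produce.
CHECK_HR(MFCreateMediaType(&pOutType), "Failed to create output media type.");
CHECK_HR(pOutType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video), "Failed to set major type.");
CHECK_HR(pOutType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264), "Failed to set subtype.");
CHECK_HR(pOutType->SetUINT32(MF_MT_AVG_BITRATE, 240000), "Failed to set bitrate.");
CHECK_HR(pOutType->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive), "Failed to set interlace mode.");
CHECK_HR(MFSetAttributeSize(pOutType, MF_MT_FRAME_SIZE, WIDTH, HEIGHT), "Failed to set frame size.");
CHECK_HR(MFSetAttributeRatio(pOutType, MF_MT_FRAME_RATE, FPS, 1), "Failed to set frame rate.");
CHECK_HR(pWriter->AddStream(pOutType, &sinkStreamIndex), "Failed to add stream.");

// The uncompressed (input) type, matching what the Source Reader delivers.
CHECK_HR(MFCreateMediaType(&pInType), "Failed to create input media type.");
CHECK_HR(pInType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video), "Failed to set major type.");
CHECK_HR(pInType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_NV12), "Failed to set subtype.");
CHECK_HR(MFSetAttributeSize(pInType, MF_MT_FRAME_SIZE, WIDTH, HEIGHT), "Failed to set frame size.");
CHECK_HR(MFSetAttributeRatio(pInType, MF_MT_FRAME_RATE, FPS, 1), "Failed to set frame rate.");
CHECK_HR(pWriter->SetInputMediaType(sinkStreamIndex, pInType, NULL), "Failed to set input media type.");

CHECK_HR(pWriter->BeginWriting(), "Failed to begin writing.");
// In the capture loop: pVideoReader->ReadSample(...) to get an uncompressed IMFSample,
// then pWriter->WriteSample(sinkStreamIndex, pSample) reusing the reader's timestamps,
// and pWriter->Finalize() when done.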
Now, your use case is clearer.
For me, your MFWebCamRtp is the best-optimized way of doing: webcam Source Reader -> encoding -> RTP streaming.
But you are experiencing presentation clock issues, synchronization issues, or unsynchronized audio/video. Am I right?
So you tried the Sample Grabber Sink, and now the Source Reader, as I suggested. Of course, you may think that a Media Session will be able to do it better.
I think so too, but extra work will be needed.
Here is what I would do in your case (a rough topology sketch follows the list):
Code a custom RTP Sink
Create a topology with webcam source, h264 encoder, your custom RTP Sink
Add your topology to a MediaSession
Use the Media Session to play the topology
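A very rough sketch of that topology wiring (untested, error handling omitted; pWebcamSource, its presentation/stream descriptors, pEncoderActivate, pMyRtpSinkActivate and pMediaSession are all assumed to have been created elsewhere):

IMFTopology *pTopology = NULL;
IMFTopologyNode *pSourceNode = NULL, *pEncoderNode = NULL, *pSinkNode = NULL;

MFCreateTopology(&pTopology);

// Source node for the webcam's video stream.
MFCreateTopologyNode(MF_TOPOLOGY_SOURCESTREAM_NODE, &pSourceNode);
pSourceNode->SetUnknown(MF_TOPONODE_SOURCE, pWebcamSource);
pSourceNode->SetUnknown(MF_TOPONODE_PRESENTATION_DESCRIPTOR, pPresentationDescriptor);
pSourceNode->SetUnknown(MF_TOPONODE_STREAM_DESCRIPTOR, pVideoStreamDescriptor);

// Transform node hosting the H.264 encoder MFT.
MFCreateTopologyNode(MF_TOPOLOGY_TRANSFORM_NODE, &pEncoderNode);
pEncoderNode->SetObject(pEncoderActivate);

// Output node for the custom RTP media sink.
MFCreateTopologyNode(MF_TOPOLOGY_OUTPUT_NODE, &pSinkNode);
pSinkNode->SetObject(pMyRtpSinkActivate);

pTopology->AddNode(pSourceNode);
pTopology->AddNode(pEncoderNode);
pTopology->AddNode(pSinkNode);
pSourceNode->ConnectOutput(0, pEncoderNode, 0);
pEncoderNode->ConnectOutput(0, pSinkNode, 0);

// Hand the topology to the Media Session and start it.
pMediaSession->SetTopology(0, pTopology);
PROPVARIANT varStart;
PropVariantInit(&varStart);
pMediaSession->Start(&GUID_NULL, &varStart);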
If you want a network-stream sink sample, see this: MFSkJpegHttpStreamer
This is old, but it's a good start. That program also uses Winsock, like yours.
You should be aware that the RTP protocol uses UDP, which is a very good way to end up with synchronization issues... and, I would guess, your main problem.
Here is what I think: you are trying to compensate for the weaknesses of the RTP protocol (UDP) with Media Foundation's audio/video synchronization management. I think you will just fail with this approach.
I think your main problem is the RTP protocol.
EDIT
No, I'm not having synchronisation issues. The Source Reader and Sample Grabber both provide correct timestamps which I can use in the RTP header. Likewise there are no problems with RTP/UDP etc.; that's the bit I do know about. My questions originate from a desire to understand the most efficient (least amount of plumbing code) and most flexible solution. And yes, it does look like a custom sink writer is the optimal solution.
Again, things are clearer. If you need help with a custom RTP sink, I'll be there.

Stream OpenCV cv::Mat image to website (html5 page)

I have C++ code running on a Raspberry Pi that uses OpenCV to process the camera input (shape and color detection). Here is the thread where I capture the images from my Pi camera:
(variable names are in French, sorry about that)
Mat imgOriginal;
VideoCapture camera;
int largeur = camPartage->getLargeur();
int hauteur = camPartage->getHauteur();
camera.open(0);
if ( !camera.isOpened() )
{
    screen->dispStr(10,1,"Cannot open the web cam");
}
else
{
    screen->dispStr(10,1,"Open the web cam");
    camera.set(CV_CAP_PROP_FRAME_WIDTH,largeur);
    camera.set(CV_CAP_PROP_FRAME_HEIGHT,hauteur);
    camera.set(CV_CAP_PROP_FPS,30);
}
while(1)
{
    if(largeur != camPartage->getLargeur() || hauteur != camPartage->getHauteur())
    {
        largeur = camPartage->getLargeur();
        hauteur = camPartage->getHauteur();
        camera.set(CV_CAP_PROP_FRAME_WIDTH,largeur);
        camera.set(CV_CAP_PROP_FRAME_HEIGHT,hauteur);
    }
    camera.grab();
    camera.retrieve(imgOriginal);
    camPartage->setImageCam(imgOriginal); //shared object
    if(thread.destruction == DESTRUCTION_SYNCHRONE)
    {
        pthread_testcancel();
    }
    usleep(20000);
}
Now, I want to stream those images to my website hosted on another Raspberry Pi. I have looked into GStreamer, FFmpeg and sockets, but I didn't find any good C++ example that worked for me. I'm trying to get the lowest latency possible.
Some people suggested using raspistill, but I can't open the camera in another program since it's already opened by OpenCV.
If you need more information, let me know; any help is appreciated.
If you need to stream your camera images from an RPi over the network, there are many approaches, depending on your needs.
One approach is to use high-level applications like MJPG streamer, RPi IP Camera, etc.
Another approach is to stream camera images over the network (via RTP, UDP, etc.) with GStreamer, FFmpeg, raspistill, etc. With this approach, you need a receiver app to consume the stream (e.g. FFmpeg).
There is also the approach you already stated in your question: access the camera directly, capture the images, then transfer them manually over the network. With this approach you have more freedom to modify the design (adding your own compression, encryption, etc.), but you have to take care of the network protocol yourself.
In your example, you can transfer each frame over the network with a simple TCP/IP socket, or you can build a simple web server. It is obvious that you can't access the camera from two apps at the same time. You could use v4l2loopback to create multiple camera interfaces and access them from multiple apps, but that won't solve your problem.
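A sketch of that socket approach (assumed names, POSIX sockets, OpenCV 2.x-style constants): compress each cv::Mat to JPEG with imencode() and push it through an already-connected TCP socket, prefixed with its length so the receiver knows how many bytes to read:

#include <opencv2/opencv.hpp>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <cstdint>
#include <vector>

// Send one frame over a connected TCP socket:
// a 4-byte big-endian length prefix followed by the JPEG bytes.
bool sendFrame(int sock, const cv::Mat& frame)
{
    std::vector<uchar> jpeg;
    std::vector<int> params = { CV_IMWRITE_JPEG_QUALITY, 60 }; // lower quality = smaller frames, lower latency
    if (!cv::imencode(".jpg", frame, jpeg, params))
        return false;

    uint32_t len = htonl(static_cast<uint32_t>(jpeg.size()));
    if (send(sock, &len, sizeof(len), 0) != (ssize_t)sizeof(len))
        return false;
    return send(sock, jpeg.data(), jpeg.size(), 0) == (ssize_t)jpeg.size();
}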
There are good projects like rpi-webrtc-streamer and streameye which use low-level protocols to transfer images.

OpenCV cannot connect to video stream - lack of some codec?

I use the IPCamera application on my Android phone to share the video from its camera over the LAN. I can access it in a browser on my PC, and that works fine.
However, I want OpenCV to capture this video stream from the IP address by typing
VideoCapture cap("http://admin:admin#192.168.0.11:8081/?action=stream?dummy=param.mjpg");
while( cap.isOpened() )
{
    Mat frame;
    if ( !cap.read(frame) )
        break;
    cout << "Connected!!";
    imshow("lalala", frame);
    int k = waitKey(10);
    if ( k == 27 )
        break;
}
and I got an error.
The codec actually used by the phone is MJPEG (I read it from the application on my mobile). I don't know if OpenCV supports this: is the problem that the mobile application uses some unusual codec, that my PC lacks it, or is my C++/OpenCV code wrong?
OpenCV on a PC can capture the video stream from your mobile phone.
It looks like you are using the right kind of connection string; this one, for example, is what I use for an RTSP stream in my case:
VideoCapture capture("rtsp://USER:PASS#xxx.xxx.xxx.xxx/axis-media/media.amp?camera=2");
Probably you don't have FFMPEG installed correctly. You need to reinstall OpenCV: first install FFMPEG, and build OpenCV after that.
In OpenCV 3.0.0 and 3.1, try adding
#include <opencv2\videoio.hpp>
#include <opencv2\imgcodecs.hpp>
Some tips on how to install FFmpeg, along with a C++ sample on Debian Linux, can be found here: Code and tips and tricks
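One quick sanity check (a small stand-alone sketch) is to print OpenCV's build information and look for an "FFMPEG: YES" line; if it says NO, the HTTP/MJPEG stream will most likely fail to open through VideoCapture:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Dumps compiler flags and the list of enabled backends (look for "FFMPEG: YES").
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}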

How to make an .avi or .mp4 file from JPEG frames?

I'm working with an IP camera, and I have JPEG frames and audio data (PCM) from the camera.
Now I want to create a video file (with both audio and video) in .avi or .mp4 format from the above data.
I searched and learned that the ffmpeg library can do it, but I don't know how to use ffmpeg for this.
Can you suggest some sample code or the ffmpeg functions to do it?
If your objective is to write a C++ app to do this, please disregard this answer; I'll just leave it here for future reference. If not, here's how you can do it in bash:
First, make sure your images are in a format that is easy for ffmpeg to handle. You can copy the images to a different directory:
mkdir tmp
x=1; for i in *jpg; do counter=$(printf %03d $x); cp "$i" tmp/img"$counter".jpg; x=$(($x+1)); done
Copy your audio data to the tmp directory and encode the video. Let's say your camera took a picture every ten seconds:
cd tmp
ffmpeg -i audio.wav -f image2 -r 0.1 -i img%03d.jpg -vcodec msmpeg4v2 -intra out.avi
Where -r 0.1, given as an input option before the image sequence, sets the input framerate to 0.1, i.e. one frame every 10 seconds.
The possible issues here are:
Your audio/video might go slightly out of sync unless you calculate your desired framerate carefully in advance. You should be able to get the length of the audio (or video) using ffmpeg and some grep magic. Even so, sync might be an issue with longer clips.
If you have more than 999 images, the %03d format will not be enough; change the 3 to the desired length of the index.
The video will inherit its length from the longer of the two streams; you can restrict it using the -t switch:
-t duration - Restrict the transcoded/captured video sequence to the duration specified in seconds. "hh:mm:ss[.xxx]" syntax is also supported.

Saving H.264 RTP stream without re-encoding?

My C++ application receives an H.264 RTP video stream.
Right now it decodes the stream, saves it into a YUV file, and later I use ffmpeg to re-encode the file into something suitable to watch on a Windows PC (e.g. MPEG-4 AVI).
Shouldn't it be possible to save the H.264 stream into an AVI (or similar) container without having to decode and re-encode it? That would require an H.264 decoder on the PC to watch it, but it should be much more efficient.
How could that be done ? Are there any libraries supporting that ?
Using ffmpeg is correct, but the answers posted so far don't look right to me.
The correct switch should be:
-vcodec copy
Your program could pipe the RTP itself through ffmpeg, even invoking it via popen3().
It seems that you need to use an intermediate SDP file; I speculate that you can specify a file you created as a named pipe, or with tmpfile(), which your application writes to, using the file as an intermediary.
The command line would be something like:
int p[3];
const char* const out_fmt = "avi";
// "sdp" as the input format is an assumption (the original left it blank); the idea is that
// ffmpeg reads the session description from the intermediate SDP file and copies the video untouched.
const char* cmd[] = {"ffmpeg","-f","sdp","-i",temp_sdp_filename,"-vcodec","copy","-f",out_fmt,"-",NULL};
if(-1 == popen3(p,cmd)) ...
// write the rtp that you receive to p[STDIN_FILENO]
// read the avi from p[STDOUT_FILENO]
// read any messages and error text from p[STDERR_FILENO]
I believe that in this circumstance ffmpeg is clever enough to repackage the container (RTP stream vs. AVI) without transcoding the video and audio (this is what the -vcodec copy switch does); therefore, you'd have no loss of quality and it would be blazingly fast.