How to write a Mat/image to an RTMP stream in OpenCV - C++

I can open an RTSP video stream using VideoCapture and read the frames.
This is the URL of the RTSP stream:
rtsp://username:password@xxx.xxx.xxx.xxx:554/Streaming/Channels/102
Now I want to send/write the image/Mat back to an output stream (an RTMP stream over the LAN).
I have already set up the RTMP server (NGINX), and as a test I downloaded FFmpeg. Running the following command (in CMD) works well and successfully reads the stream and writes it to the NGINX server:
ffmpeg -i rtsp://username:password@xxx.xxx.xxx.xxx:554/Streaming/Channels/102 -vcodec copy -acodec copy -f flv rtmp://xxx.xxx.xxx.xxx:1395/mylive/test
And if I then put this rtmp://xxx.xxx.xxx.xxx:1395/mylive/test URL into an HTML video tag, the video can be opened.
So the question is: how can I push the image through RTMP (NGINX) from my code after processing?
Any suggestions? Thanks in advance!
Edit:
I used the VideoWriter class in the following ways, but none of them worked:
VideoWriter writer;
writer.open("rtmp://xxx.xxx.xxx.xxx:1395/mylive/test", CAP_FFMPEG, 0, 25,
            Size(640, 480));
//writer.open("ffmpeg -vcodec copy -acodec copy -f flv
//rtmp://xxx.xxx.xxx.xxx:1395/mylive/test", CAP_FFMPEG, 0, 25,
//            Size(640, 480));
// or without CAP_FFMPEG
if (!writer.isOpened()) {
    std::cout << "open error" << std::endl;
}
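A common workaround, since VideoWriter builds often cannot publish to an rtmp:// target directly, is to pipe raw frames into an ffmpeg child process that encodes and pushes to NGINX. A minimal, untested sketch of that idea follows; the URLs are taken from the question above, while the 640x480 size, 25 fps, libx264, and the popen call (use _popen on Windows) are assumptions:
#include <cstdio>
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("rtsp://username:password@xxx.xxx.xxx.xxx:554/Streaming/Channels/102");
    cv::Mat frame;

    // Spawn ffmpeg as a child process; it reads raw BGR frames from stdin,
    // encodes them with libx264, and pushes FLV to the NGINX RTMP server.
    const char* cmd =
        "ffmpeg -f rawvideo -pixel_format bgr24 -video_size 640x480 "
        "-framerate 25 -i - -c:v libx264 -pix_fmt yuv420p -f flv "
        "rtmp://xxx.xxx.xxx.xxx:1395/mylive/test";
    FILE* pipe = popen(cmd, "w");   // _popen(cmd, "wb") on Windows
    if (!pipe) return 1;

    while (cap.read(frame)) {
        // ... process the frame here ...
        cv::resize(frame, frame, cv::Size(640, 480));  // must match -video_size
        fwrite(frame.data, 1, frame.total() * frame.elemSize(), pipe);
    }
    pclose(pipe);
    return 0;
}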

Related

Change the default audio and video codec loaded by avformat_alloc_output_context2

I'm using the ffmpeg library for live streaming via RTMP. I want to know how to specify my choice of audio and video codec for a particular format in avformat_alloc_output_context2.
In Detail:
The following command works perfectly for me.
ffmpeg -re -stream_loop -1 -i ~/Downloads/Microsoft_Surface.mp4 -vcodec copy -c:a aac -b:a 160k -ar 44100 -strict -2 -f flv -flvflags no_duration_filesize rtmp://192.168.1.7/live/surface
In the output, I have set my audio codec to AAC and copied the video codec from the input, which is H264.
I want to emulate this with the library, but I don't know how.
avformat_alloc_output_context2(&_ctx, NULL, "flv", NULL);
The above code sets the oformat audio codec to ADPCM_SWF and the video codec to FLV1. How can I change those to AAC and H264?
So far I have used av_guess_format to construct an AVOutputFormat. It accepts only the format as input, and I don't know where to specify the audio and video codecs.
AVOutputFormat* output_format = av_guess_format("flv", NULL, NULL);
I also tried giving a filename to avformat_alloc_output_context2 with the rest of the parameters NULL.
AVOutputFormat* output_format = av_guess_format(NULL, "flv_acc_sample.flv", NULL);
This file has AAC audio and H264 video, but ffmpeg still loads the oformat with the ADPCM_SWF audio and FLV1 video codecs.
I searched Stack Overflow for similar questions but could not find the solution I was looking for.
Any hint/guidance is hugely appreciated. Thank you.
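For what it's worth, the audio_codec and video_codec fields on AVOutputFormat are only the muxer's defaults; the codecs actually written are the ones declared on the streams you create on the context. A minimal sketch of that idea, using the RTMP URL from the question, with error handling and encoder setup omitted:
extern "C" {
#include <libavformat/avformat.h>
}

int main() {
    AVFormatContext* ctx = NULL;
    avformat_alloc_output_context2(&ctx, NULL, "flv",
                                   "rtmp://192.168.1.7/live/surface");
    if (!ctx) return 1;

    // Video stream: declare H264, e.g. when copying packets from the input.
    AVStream* vst = avformat_new_stream(ctx, NULL);
    vst->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    vst->codecpar->codec_id   = AV_CODEC_ID_H264;

    // Audio stream: declare AAC; when transcoding, an AAC encoder would be
    // opened separately and its parameters copied into codecpar.
    AVStream* ast = avformat_new_stream(ctx, NULL);
    ast->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
    ast->codecpar->codec_id   = AV_CODEC_ID_AAC;

    // ... open ctx->pb with avio_open(), write header, packets, trailer ...
    avformat_free_context(ctx);
    return 0;
}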

Remux mp4 file containing data stream

I'm developing an app that needs to clone an MP4 video file with all its streams using the FFmpeg C++ API, and I have successfully made it work based on the FFmpeg remuxing example.
This works great for video and audio streams, but when the video includes a data stream (actually a QuickTime time code, according to MediaInfo) I get this error:
Output #0, mp4, to 'C:\Users\user\Desktop\shortOut.mp4':
Stream #0:0: Video: hevc (Main 10) (hev1 / 0x31766568), yuv420p10le(tv,progressive), 3840x2160 [SAR 1:1 DAR 16:9], q=2-31, 1208 kb/s
Stream #0:1: Audio: mp3 (mp4a / 0x6134706D), 48000 Hz, stereo, s16p, 32s
Stream #0:2: Data: none (tmcd / 0x64636D74), 0 kb/s
[mp4 @ 0000000071edf600] Could not find tag for codec none in stream #2, codec not currently supported in container
I’ve found this happens in the call to avformat_write_header().
It makes sense that if FFmpeg doesn't know the codec it can't write it into the header, but I found out that using the ffmpeg command line I can make it work perfectly by stream-copying, something like:
ffmpeg -i input.mp4 -c:v copy -c:a copy -c:d copy output.mp4
I have been analyzing the ffmpeg.c implementation to try to understand how it does a stream copy, but it's been very painful following the huge pipeline.
What would be a proper way to remux a data stream of this type with the FFmpeg C++ API? Any tips or pointers?
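One detail worth checking, offered as an untested guess: the remuxing example zeroes codec_tag on every output stream so the muxer picks its own tag, and for a codec it does not know (codec none) that lookup is exactly what fails in avformat_write_header. Preserving the input fourcc for the data stream may let the mp4 muxer pass the tmcd tag through:
// Inside the per-stream loop of the remuxing example, after
// avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar):
if (in_stream->codecpar->codec_type == AVMEDIA_TYPE_DATA) {
    // Unknown codec: keep the original tag (tmcd) instead of letting
    // the muxer search its tag tables and fail.
    out_stream->codecpar->codec_tag = in_stream->codecpar->codec_tag;
} else {
    out_stream->codecpar->codec_tag = 0;
}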

How to stream an OpenCV Mat with ffmpeg in C++

I'm new here. I'm trying to stream some images processed with OpenCV over a LAN using ffmpeg.
I saw this:
Pipe raw OpenCV images to FFmpeg
but it doesn't work for me; it produces only noise. I think the data I'm sending is not in the right format.
I also had a look at this:
How to send opencv video's to ffmpeg
but (looking at the last answer) the option -f jpeg_pipe gives me an error.
What I do now:
I have an RGB Mat called "composedImage".
I send it to output with:
std::cout << composedImage;
The output is the pixel values separated by commas.
Then I call:
./example | ffmpeg -f rawvideo -pixel_format bgr24 -video_size 160x120 -framerate 20 -i - -f udp://192.168.1.79:1234
I tried to read it using VLC (it didn't work) and with ffplay:
ffplay -f rawvideo -pixel_format gray -video_size 160x120 -framerate 30 udp://192.168.1.79:1234
Here it seems so easy:
http://ffmpeg.org/ffmpeg-formats.html#rawvideo
I have also tried writing the image to a file and sending it, but I get errors. Probably it tries to send before the image is complete.
Thank you for your help.
I managed to stream, albeit with potato quality; maybe some ffmpeg guru can help out with the ffmpeg commands.
#include <iostream>
#include <opencv2/opencv.hpp>
using namespace cv;
int main() {
    VideoCapture cap("Your video or stream goes here");
    Mat frame;
    std::ios::sync_with_stdio(false);
    while (cap.read(frame)) {
        // Dump the raw BGR bytes of the frame to stdout in one call;
        // this is what ffmpeg's rawvideo demuxer expects on the pipe.
        std::cout.write(reinterpret_cast<const char*>(frame.data),
                        frame.total() * frame.elemSize());
    }
    return 0;
}
And then pipe it to ffmpeg like:
./test | ffmpeg -f rawvideo -pixel_format bgr24 -video_size 1912x796 -re -framerate 20 -i - -f mpegts -preset ultrafast udp://127.0.0.1:1234
And play it with ffplay, mpv, or whatever:
ffplay udp://127.0.0.1:1234
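As a follow-up on the quality: with -f mpegts and no codec given, ffmpeg falls back to its default MPEG-2 video encoder, where -preset has no effect. Forcing x264 with low-latency settings may help; this variant is untested and the 2M bitrate is an arbitrary choice:
./test | ffmpeg -f rawvideo -pixel_format bgr24 -video_size 1912x796 -re -framerate 20 -i - -c:v libx264 -preset ultrafast -tune zerolatency -b:v 2M -f mpegts udp://127.0.0.1:1234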

Create an RTP/RTSP or HTTP stream using OpenCV frames

I have a custom board that takes an input stream from an IP camera, and the application performs facial detection on the input video stream using OpenCV.
My use case is to provide an output stream over the network that is accessible through VLC on any device connected to the same network.
I tried writing OpenCV frames through VideoWriter:
VideoWriter outStream("/home/user/frames/frame.mjpg", CV_FOURCC('M','J','P','G'), CAP_PROP_FPS, img.size(), true);
if (outStream.isOpened()) {
    outStream.write(img);
}
and creating a stream using mjpg_streamer like:
mjpg_streamer -i "input_file.so -f /home/user/frames" -o "output_http.so -w /usr/local/www -p 5241"
But the above process shows a lot of latency.
I can't use imshow as my hardware does not have any video output port.
Here is my code : https://pastebin.com/s66xGjAC
I would suggest using imwrite() to save JPEG images in the directory watched by mjpg_streamer. Write low-quality JPEGs: set CV_IMWRITE_JPEG_QUALITY to the lowest value that satisfies your requirement.
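A minimal sketch of that suggestion, assuming an OpenCV 3+ build (IMWRITE_JPEG_QUALITY is the modern name of CV_IMWRITE_JPEG_QUALITY); the output path comes from the question, while the camera URL and the quality value 30 are placeholders:
#include <cstdio>
#include <vector>
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("rtsp://your-ip-camera-url");  // placeholder source
    cv::Mat img;
    std::vector<int> params = { cv::IMWRITE_JPEG_QUALITY, 30 };  // low quality
    while (cap.read(img)) {
        // ... run the facial detection on img ...
        // Write to a temporary name, then rename, so mjpg_streamer never
        // reads a half-written file.
        cv::imwrite("/home/user/frames/frame.tmp.jpg", img, params);
        std::rename("/home/user/frames/frame.tmp.jpg",
                    "/home/user/frames/frame.jpg");
    }
    return 0;
}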

Set rtsp_flags to listen in ffmpeg C code

I am trying to create a client/server application to stream and then receive video over RTSP using the ffmpeg libraries. I am done with the client part, which streams the video, and I can receive the video in ffplay using the following command:
ffplay -rtsp_flags listen rtsp://127.0.0.1:8556/live.sdp
My problem is that I want to receive the video in C code, and I need to set the rtsp_flags option there. Can anyone please help?
P.S. I cannot use ffserver because I am working on Windows, and ffserver is not available for Windows as far as I know.
You need to add the option when opening the stream:
AVDictionary *d = NULL;                      // "create" an empty dictionary
av_dict_set(&d, "rtsp_flags", "listen", 0);  // add an entry

// open rtsp
if (avformat_open_input(&ifcx, sFileInput, NULL, &d) != 0) {
    printf("ERROR: Cannot open input file\n");
    return EXIT_FAILURE;
}
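A small follow-up: avformat_open_input consumes the options it recognizes and leaves any unmatched entries in the dictionary, so free it after the call:
av_dict_free(&d); // releases whatever avformat_open_input did not consume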