FFmpeg TS null packet transmission - C++

I am trying to transmit TS packets over Ethernet. I am using C++ and the FFmpeg libraries. At the moment I can send an HEVC-encoded TS stream over Ethernet successfully, but the output data rate varies. I want to maintain an (approximately) constant data rate.
I am using "av_interleaved_write_frame()" to transmit the TS packets.
I know this can be achieved using NULL packet transmission. Can anyone tell me how to do this using ffmpeg?
Thank you.

What you are trying to achieve is called constant bitrate (CBR): you should set minrate, maxrate and bitrate to the same value to get it.
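For the null-packet side specifically: FFmpeg's mpegts muxer has a muxrate option, and when it is set the muxer pads the output with null packets (PID 0x1FFF) to hold that constant mux rate. A minimal sketch, assuming enc_ctx and fmt_ctx are already allocated and that your encoder honors the rate-control fields:

extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
}

// Sketch: pin the encoder's rate-control window to one value and ask the
// mpegts muxer to pad to a constant mux rate with null packets.
void configure_cbr(AVCodecContext *enc_ctx, AVFormatContext *fmt_ctx,
                   int64_t target_bps)
{
    enc_ctx->bit_rate       = target_bps;   // same value for all three
    enc_ctx->rc_min_rate    = target_bps;
    enc_ctx->rc_max_rate    = target_bps;
    enc_ctx->rc_buffer_size = target_bps;   // ~1 second VBV buffer

    // muxrate must exceed the elementary-stream rate plus TS overhead;
    // the 20% headroom here is a guess you would tune for your stream.
    av_opt_set_int(fmt_ctx->priv_data, "muxrate",
                   target_bps + target_bps / 5, 0);
}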
See these similar questions for more detailed examples:
https://superuser.com/a/314355/329216
How to force Constant Bit Rate using FFMPEG
And an interesting external link:
https://support.octoshape.com/entries/25126002-Encoding-best-practices-using-ffmpeg

Writing multithreaded video and audio packets with FFmpeg

I couldn't find any information on the way av_interleaved_write_frame deals with video and audio packets.
I have multiple audio and video packets coming from two threads. Each thread calls write_video_frame or write_audio_frame, locks a mutex, initializes an AVPacket and writes data to an .avi file.
Initialization of AVCodecContext and AVFormatContext is OK.
-- Edit 1 --
Audio and video are coming from an external source (microphone and camera) and are captured as raw data without any compression (even for video).
I use H.264 to encode the video and no compression for the audio (PCM).
Audio captured is: 16-bit, 44100 Hz, stereo
Video captured is 25 FPS
Question:
1) Is it a problem if I write multiple video packets at once (let's say 25 packets/sec) and just one audio packet/sec?
Answer: Apparently not; av_interleaved_write_frame should be able to manage that kind of data as long as pts and dts are well managed.
This means I call av_interleaved_write_frame 25 times per second for video writing and just once per second for audio writing. Could this be a problem? If it is, how can I deal with this scenario?
2) How can I manage pts and dts in this case? It seems to be a problem in my application since I cannot correctly render the .avi file. Can I use real timestamps for both video and audio?
Answer: The best thing to do here is to use the timestamps taken when capturing the audio/video as the pts and dts for this kind of application. So these are not exactly real timestamps (from a wall clock) but media capture timestamps.
Thank you for your valuable advice.
av_interleaved_write_frame writes output packets in such a way that they are properly interleaved (possibly queueing them internally). "Properly interleaved" depends on the container format, but usually it means that the DTS stamps of the packets in the output file are monotonically increasing.
av_interleaved_write_frame, like most FFmpeg APIs, should never be called simultaneously by two threads with the same AVFormatContext. I assume you make sure of that with a mutex. If you do, then it doesn't matter whether the application is multithreaded or not.
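A minimal sketch of that locking pattern (the function name is illustrative):

#include <mutex>
extern "C" {
#include <libavformat/avformat.h>
}

std::mutex g_write_mutex; // guards the shared AVFormatContext

// Both the audio and the video thread call this instead of calling
// av_interleaved_write_frame directly.
int write_packet_locked(AVFormatContext *fmt_ctx, AVPacket *pkt)
{
    std::lock_guard<std::mutex> lock(g_write_mutex);
    return av_interleaved_write_frame(fmt_ctx, pkt);
}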
Is it a problem if I write multiple video packets at once (let's say 25 packets/sec) and just one audio packet/sec?
It is not a problem in general, but most audio codecs can't output one-second-long audio packets. Which codec do you use?
How can I manage pts and dts in this case? Can I use real timestamps for both video and audio?
The same way as you would in a single-threaded application. DTS values are usually generated by codecs from the pts. The pts usually comes from a capture device or decoder together with the corresponding audio/video data.
Real timestamps might be OK to use, but it really depends on how and when you are acquiring them. Please elaborate on what exactly you are trying to do. Where is the audio/video data coming from?
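To make capture timestamps usable as pts/dts, rescale them into the stream's time_base. A sketch, assuming the capture clock counts microseconds since the start of the session and the encoder produces no B-frames:

extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>
}

// capture_us: timestamp taken when the frame / sample block was grabbed.
void stamp_packet(AVPacket *pkt, int64_t capture_us, AVStream *st)
{
    AVRational us = {1, 1000000};  // microsecond clock
    pkt->pts = av_rescale_q(capture_us, us, st->time_base);
    pkt->dts = pkt->pts;           // valid only with no B-frames
}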

Avcodec: generate an Opus header for a stream

I'm using Opus with avcodec to encode sound and stream it using my own protocol.
It works with the MP2 codec so far, but when I switch to Opus, I get this error:
[opus @ 1b06d040] Error parsing the packet header.
I suppose that, unlike MP2, I need to generate a header for my Opus-encoded data stream, but I don't know how.
Can someone explain to me how to do that? Thanks.
This error comes from ff_opus_parse_packet() failing. That function handles the raw Opus packet header: what the specification calls the 'TOC' (table-of-contents) byte, plus the optional subframe lengths. The error means libavcodec couldn't find the packet duration where it expected it.
So your custom protocol is probably corrupting the data, returning the wrong data length, or you're otherwise not splitting the Opus packet out of your framing layer correctly.
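If you want to sanity-check what your framing layer hands to the decoder, the TOC byte is easy to inspect by hand (layout per RFC 6716, section 3.1):

#include <cstdint>
#include <cstdio>

// Dump the fields of an Opus TOC byte for debugging.
void dump_toc(const uint8_t *pkt, size_t len)
{
    if (len < 1) { puts("empty packet"); return; }
    uint8_t toc    = pkt[0];
    int     config = toc >> 3;      // 0..31: mode, bandwidth, frame size
    int     stereo = (toc >> 2) & 1;
    int     code   = toc & 0x3;     // 0: 1 frame; 1-2: 2 frames;
                                    // 3: a frame-count byte follows
    printf("config=%d stereo=%d frame-count-code=%d\n", config, stereo, code);
}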
You don't need to invent your own protocol if you don't want to. There are two established designs: Opus over RTP, for interactive use (like live chat, where latency matters), is documented in RFC 7587. For HTTP streaming, file storage for recording, playback and other applications like that, use the Ogg container, documented here. There are implementations of both of these in libavformat. See rtpenc.c, oggenc.c and oggparseopus.c if you're curious about the details.
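If you go the Ogg route, libavformat writes the OpusHead/OpusTags headers for you from the encoder's extradata. A sketch with error checks omitted, assuming enc_ctx is an already-opened FFmpeg Opus encoder context:

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

AVFormatContext *open_ogg_opus(const char *path, AVCodecContext *enc_ctx)
{
    AVFormatContext *fmt_ctx = nullptr;
    avformat_alloc_output_context2(&fmt_ctx, nullptr, "ogg", path);

    AVStream *st = avformat_new_stream(fmt_ctx, nullptr);
    // Copies codec parameters including extradata (the OpusHead header),
    // which the Ogg muxer uses to emit the ID and comment pages.
    avcodec_parameters_from_context(st->codecpar, enc_ctx);
    st->time_base = AVRational{1, 48000};  // Opus always runs at 48 kHz

    avio_open(&fmt_ctx->pb, path, AVIO_FLAG_WRITE);
    avformat_write_header(fmt_ctx, nullptr);
    return fmt_ctx;  // then av_interleaved_write_frame() per encoded packet
}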

Resample PCM network stream to 8000Hz 8-bit mono via libsndfile sf_open_virtual function

My goal is to take a PCM stream in Node.js that is, for example, 44100 Hz 16-bit stereo, and resample it to 8000 Hz 8-bit mono, to then be encoded into Opus and streamed.
My thought was to try making bindings for libsndfile in C++ and using the sf_open_virtual function for resampling on the stream. However:
How can I reply to its callback function requesting a certain amount of data (found here: http://www.mega-nerd.com/libsndfile/api.html#open_virtual) if my program is still receiving data from the network? Do I just let it hang in a loop until the loop detects that the buffer is a certain percent full?
Since the PCM data is going to be headerless, how can I specify the format type for libsndfile to expect?
Or am I over-complicating things totally?
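On the headerless-format question: with SF_FORMAT_RAW you describe the stream yourself in SF_INFO, so libsndfile never looks for a header. A sketch, assuming the incoming stream really is 44100 Hz 16-bit stereo and that vio wraps your own network buffer:

#include <sndfile.h>

// The SF_VIRTUAL_IO callbacks (read/seek/tell/get_filelen) are supplied
// by you and would pull from the buffer your network code fills.
SNDFILE *open_raw_stream(SF_VIRTUAL_IO *vio, void *user_data)
{
    SF_INFO info = {};
    info.samplerate = 44100;
    info.channels   = 2;
    info.format     = SF_FORMAT_RAW | SF_FORMAT_PCM_16; // no header parsed
    return sf_open_virtual(vio, SFM_READ, &info, user_data);
}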

Write RTP Stream Data to file

I have written an application which triggers an IP camera to stream its data (MPEG-4) over RTP. This works fine so far: I set up and start the stream with the corresponding RTSP commands (DESCRIBE, SETUP and PLAY).
While streaming, I receive the usual Sender Reports and send my own Receiver Reports; everything is working fine here.
Now, with the application mentioned above, I do NOT read the stream. I have separate hardware which just logs all the traffic going over the Ethernet (a little like Wireshark). When the whole streaming session is finished, I can download those logs from my hardware and extract data from them.
So what I have then is a logfile with all the data from the RTP stream as raw data.
My question now is: how do I write this appropriately into an MPEG-4 file? I know this is a very broad question and I don't expect a step-by-step tutorial, but I am a bit overwhelmed and don't know where to start. If I just memcpy all the payload from the RTP messages sequentially into an MPEG-4 file, it doesn't work. I am also a bit confused by SDP and related details.
Maybe someone has a link or some help for me?
You should first read RFC 3016, which describes the RTP payload format for MPEG-4 streams; then you'll know how to extract the MPEG-4 frames from the RTP stream.
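The core idea in RFC 3016 is that one MPEG-4 access unit may be split across several RTP packets, with the marker bit set on the last fragment, so you must reassemble frames rather than memcpy payloads blindly. A rough sketch, assuming a plain 12-byte RTP header with no CSRC entries or header extensions:

#include <cstdint>
#include <cstring>
#include <vector>

struct RtpView {
    bool           marker;       // set on the last fragment of an access unit
    uint16_t       seq;          // for reordering / loss detection
    const uint8_t *payload;
    size_t         payload_len;
};

RtpView parse_rtp(const uint8_t *pkt, size_t len)
{
    RtpView v{};
    v.marker      = (pkt[1] & 0x80) != 0;             // M bit
    v.seq         = uint16_t((pkt[2] << 8) | pkt[3]);
    v.payload     = pkt + 12;                         // skip the fixed header
    v.payload_len = len - 12;
    return v;
}

// Accumulate fragments until the marker bit, then emit one complete frame.
void feed(std::vector<uint8_t> &au, const RtpView &v,
          void (*emit_frame)(const uint8_t *, size_t))
{
    au.insert(au.end(), v.payload, v.payload + v.payload_len);
    if (v.marker) {
        emit_frame(au.data(), au.size());
        au.clear();
    }
}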
I actually changed from MPEG-4 to H.264; it was a little easier to write a video file that way. For H.264, this answer covers it pretty well:
How to process raw UDP packets so that they can be decoded by a decoder filter in a directshow source filter

Service a live OpenCV H.264 stream through Live555 on Windows

Totally new to this! As the title says, I'm trying to serve a stream from OpenCV through Live555 using H.264 that is captured from a webcam.
I've tried something like:
#define LOCALADDRESS "rtsp://localhost:8081" // Address media is served
#define FOURCCCODEC CV_FOURCC('H','2','6','4') // H.264 codec
#define FPS 25 // Frame rate things run at
m_writer = cvCreateVideoWriter(LOCALADDRESS, FOURCCCODEC, FPS, cvSize(VIDEOWIDTH, VIDEOHEIGHT));
since reading an RTSP stream is done similarly:
CvCapture *capture = cvCreateFileCapture(LOCALADDRESS);
which doesn't work, so I'm turning to Live555. How do I feed a CvCapture encoded in H.264 to be served by Live555? There doesn't seem to be a straightforward way to serve a bytestream from one to the other, or perhaps I'm missing something.
There really isn't a straight-forward way I know of; certainly nothing that will happen in anything less than a few hundred lines of code.
I'm assuming you want to use an on-demand RTSP server (this is where the server's just sitting there, waiting for a client to connect, and then it starts streaming when the client establishes a connection and makes a request)? If so, this item in the Live555 FAQ applies.
However, Live555 is a weird (possibly misguided?) library, so it's unfortunately a bit more complicated than that. Live555 uses a single thread of operation with an event loop, so what you'll have to do is shove your raw bytestream into a buffer or queue; then, in your subsession class for streaming H.264, check whether there's data available in the queue and, if so, pass it along. If not, schedule another check in a few milliseconds. You'll also need to strip off any NALU start codes before you pass the units along to Live555.
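A rough sketch of that polling pattern, assuming a FramedSource subclass and an application-owned queue of H.264 NAL units (start codes already stripped; locking between your capture thread and the event loop is omitted here):

#include <FramedSource.hh>
#include <cstring>
#include <deque>
#include <vector>

extern std::deque<std::vector<uint8_t> > g_nalQueue; // filled by the capture thread

class QueueSource : public FramedSource {
public:
    static QueueSource *createNew(UsageEnvironment &env) {
        return new QueueSource(env);
    }

private:
    QueueSource(UsageEnvironment &env) : FramedSource(env) {}

    virtual void doGetNextFrame() {
        if (g_nalQueue.empty()) {
            // The event loop must never block: re-check in ~5 ms.
            nextTask() = envir().taskScheduler().scheduleDelayedTask(
                5000, (TaskFunc *)retry, this);
            return;
        }
        std::vector<uint8_t> &nal = g_nalQueue.front();
        fFrameSize = nal.size() > fMaxSize ? fMaxSize : unsigned(nal.size());
        memcpy(fTo, nal.data(), fFrameSize);
        g_nalQueue.pop_front();
        FramedSource::afterGetting(this); // hand the NAL unit to Live555
    }

    static void retry(void *clientData) {
        ((QueueSource *)clientData)->doGetNextFrame();
    }
};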