RTP H.264 save and replay - c++

We are interested in saving an H.264 stream and replaying it. Has anyone had experience saving H.264 using WinPcap and replaying it? We were able to save and replay H.263, but the same logic does not work for H.264.
We also tried the rtpdump tool to save the H.264 stream, but we were unable to replay it in that format.
Thanks in advance.

An H.264 stream is usually sent as a Transport Stream (TS). If you want to save it to a file, you need to demux it and then mux it into a format suitable for file storage, for example MP4.
You will probably need to disable B-frames in your encoder. Saving an RTP H.264 stream didn't work for me with B-frames enabled.
I also advise using a low keyint value, because the dump will only be decodable after the first keyframe.
You can use VLC to save the incoming stream with this command:
vlc -I rc rtp://@:4444 ":sout=#std{access=file,mux=mp4,dst=output.mp4}" :ipv4
Replace 4444 with the port number.
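
If you control the encoder, here is a sketch of those two settings using libx264 (my assumption; the preset, tune, and resolution are placeholders, not values from this answer):

#include <x264.h>

int main() {
    x264_param_t param;
    /* "zerolatency" already avoids B-frames; i_bframe = 0 makes it explicit. */
    x264_param_default_preset(&param, "veryfast", "zerolatency");
    param.i_width = 640;        /* placeholder resolution */
    param.i_height = 480;
    param.i_bframe = 0;         /* no B-frames, per the advice above */
    param.i_keyint_max = 30;    /* short GOP: a keyframe at least every 30 frames */
    x264_t *enc = x264_encoder_open(&param);
    if (enc) x264_encoder_close(enc);
    return 0;
}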

Related

Gstreamer H264 RTP

I am using GStreamer 1.0 to capture and display video broadcast by an MGW ACE encoder (or by VLC itself), using RTP with H.264.
I have read that the sender's SPS and PPS information is needed in order to decode.
Both pieces of information are added in the sprop-parameter-sets parameter.
But if I can't get that information, is there any way I can decode and display without adding that parameter?
My pipeline is the following:
gst-launch-1.0 -vvv udpsrc port=9001 caps="application/x-rtp, media=(string)video" ! rtph264depay ! decodebin ! autovideosink
I have verified that between two different hosts, one sending and the other receiving through GStreamer, this works: I can send and receive without problems.
But when I try to receive video from an MGW ACE encoder or from VLC itself, I cannot display it.
Some RTP streaming scenarios repeat SPS/PPS periodically in-band before each IDR frame, but I believe that is done for convenience in those particular cases. If I remember correctly, RTP defines SPS/PPS transmission to occur out of band, via the SDP information.
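
If you have the sprop-parameter-sets value from some out-of-band channel, you can hand it to the depayloader through udpsrc's caps. A minimal C++ sketch, assuming GStreamer 1.x (the port, the decoder element, and the base64 string are placeholders, not values from this question):

#include <gst/gst.h>

int main(int argc, char **argv) {
    gst_init(&argc, &argv);

    GstElement *pipeline = gst_parse_launch(
        "udpsrc name=src port=9001 ! rtph264depay ! avdec_h264 ! autovideosink",
        NULL);

    /* Out-of-band SPS/PPS: the base64 value below is a placeholder;
       substitute the real sprop-parameter-sets string from the sender's SDP. */
    GstCaps *caps = gst_caps_new_simple("application/x-rtp",
        "media", G_TYPE_STRING, "video",
        "clock-rate", G_TYPE_INT, 90000,
        "encoding-name", G_TYPE_STRING, "H264",
        "sprop-parameter-sets", G_TYPE_STRING, "Z0LAHtkA8WXA,aMuMsg==",
        NULL);
    GstElement *src = gst_bin_get_by_name(GST_BIN(pipeline), "src");
    g_object_set(src, "caps", caps, NULL);
    gst_caps_unref(caps);
    gst_object_unref(src);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}

If you control the sender, setting config-interval=1 on rtph264pay makes it re-send SPS/PPS in-band before each keyframe, so the receiver needs no sprop-parameter-sets at all.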

How can I send arbitrary data as part of an Ogg or Matroska stream in a GStreamer application?

I have live audio and video data that I can send as either an Ogg or a Matroska (WebM) stream. I also have dynamic metadata that will be sent from the server to the client and should be correlated with the A/V streams; for example, the exact server time at which an audio packet was recorded.
I attempted to hack this with Kate encoding, but that appears to send all the subtitle data at once at the beginning, rather than dynamically as it happens. If you can tell me how to send well-correlated dynamic subtitle data, then that's a viable solution as well.

Write RTP Stream Data to file

I have written an application which triggers an IP camera to stream its data (MPEG-4) over RTP. This works fine so far: I set up and start the stream with the corresponding RTSP commands (DESCRIBE, SETUP and PLAY).
While streaming, I receive the usual Sender Reports and send my own Receiver Reports; everything is working fine here.
Now, with the application mentioned above, I do NOT read the stream. I have separate hardware which just logs everything going over the Ethernet (a little bit like Wireshark). When the streaming is finished, I can download those logs from the hardware and extract data from them.
So what I have then is a logfile with all the data from the RTP stream as raw data.
My question now is: how do I write this into an MPEG-4 file properly? I know this is a very broad question and I don't expect a step-by-step tutorial, but I am a bit overwhelmed and don't know where to start. If I just memcpy all the payload from the RTP packets sequentially into an MPEG-4 file, it doesn't work. I am also a bit confused by SDP and the like.
Maybe someone has a link or some pointers for me?
You should first read RFC 3016, which describes the RTP payload format for MPEG-4 streams; then you'll know how to extract MPEG-4 frames from the RTP stream.
I actually switched from MPEG-4 to H.264; it turned out to be a bit easier to write a video file that way. For H.264, this answer covers it pretty well:
How to process raw UDP packets so that they can be decoded by a decoder filter in a directshow source filter
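
For reference, a minimal sketch of that approach (my own, not taken from the linked answer): walk the logged packets in RTP sequence-number order, strip each RTP header, and write the contained H.264 NAL units (RFC 6184) into a raw Annex B file with 0x00000001 start codes. This handles only single NAL unit packets and FU-A fragments; STAP-A aggregation, padding, and loss handling are omitted:

#include <cstdint>
#include <cstdio>

static const std::uint8_t kStartCode[4] = {0, 0, 0, 1};

/* 'packet' is one UDP payload from the log (packets must already be
   sorted by RTP sequence number); 'out' is an open .h264 file ("wb"). */
void DepacketizeH264(const std::uint8_t *packet, std::size_t len, std::FILE *out) {
    if (len < 12) return;                         /* RTP fixed header = 12 bytes */
    std::size_t header = 12 + 4 * (packet[0] & 0x0F);  /* skip CSRC entries */
    if (packet[0] & 0x10) {                       /* header extension present? */
        if (len < header + 4) return;
        header += 4 + 4 * ((packet[header + 2] << 8) | packet[header + 3]);
    }
    if (len <= header) return;

    const std::uint8_t *p = packet + header;
    std::size_t n = len - header;
    std::uint8_t nalType = p[0] & 0x1F;

    if (nalType >= 1 && nalType <= 23) {          /* single NAL unit packet */
        std::fwrite(kStartCode, 1, 4, out);
        std::fwrite(p, 1, n, out);
    } else if (nalType == 28 && n >= 2) {         /* FU-A fragment */
        if (p[1] & 0x80) {                        /* start bit: rebuild NAL header */
            std::uint8_t nalHeader = (p[0] & 0xE0) | (p[1] & 0x1F);
            std::fwrite(kStartCode, 1, 4, out);
            std::fwrite(&nalHeader, 1, 1, out);
        }
        std::fwrite(p + 2, 1, n - 2, out);        /* fragment payload */
    }
}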

How can I play gtalk rtp payload data for video using codec h264?

I am dealing with the RTP packets of gtalk video. I want to reconstruct the video from the gtalk RTP payload data. According to my research, gtalk uses the H.264 codec for video.
I concatenated all of the RTP payloads sent with the gtalk video and tried to play the result with ffplay using the command "ffplay -f h264 filename", but I can't see
anything, and I get this error: "Could not find codec parameters (Video: h264, yuv420p)". I think my mistake is in how I combine the RTP payloads. How can I play this payload?
Thanks for your help.
Cheers.
It could be that you need the sequence and picture parameter sets (SPS and PPS), which are often transferred during session setup. Standard protocols used for this are RTSP and SIP, though I have no clue whether gtalk uses either of these.
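
Two things are plausibly wrong with naively concatenating the payloads: the dump has no Annex B start codes between NAL units (see the depacketizer sketch in the previous question), and it is missing the SPS/PPS, which is exactly what "Could not find codec parameters" hints at. If you can recover the parameter sets from the session setup, here is a hedged sketch of prepending them to the dump (the byte values below are placeholders, not real parameter sets):

#include <cstdint>
#include <cstdio>

int main() {
    /* Placeholder bytes; real SPS/PPS come from the session setup,
       e.g. base64-decoded from an SDP's sprop-parameter-sets. */
    static const std::uint8_t kSps[] = {0x67, 0x42, 0xC0, 0x1E};
    static const std::uint8_t kPps[] = {0x68, 0xCE, 0x38, 0x80};
    static const std::uint8_t sc[4] = {0, 0, 0, 1};

    std::FILE *out = std::fopen("with_headers.h264", "wb");
    if (!out) return 1;
    std::fwrite(sc, 1, 4, out);
    std::fwrite(kSps, 1, sizeof kSps, out);
    std::fwrite(sc, 1, 4, out);
    std::fwrite(kPps, 1, sizeof kPps, out);
    /* ...then append the depacketized Annex B stream and retry
       "ffplay -f h264 with_headers.h264". */
    std::fclose(out);
    return 0;
}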

parse userdata field of all key frames of MPEG header from rtp video stream using GStreamer

How do I parse an MPEG stream using GStreamer? I need to process the user_data field of only the key frames (not P-frames) of an MPEG stream.
The MPEG stream arrives over RTP. I am able to display the video using a GStreamer pipeline, but my final requirement is to parse the user_data field of all key frames and overlay that information onto the displayed video.
I solved this by using a fakesink in the pipeline and connecting a "handoff" callback function, as sketched below.
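
A minimal sketch of that approach, assuming GStreamer 1.x and MPEG-4 part 2 video over RTP (the port, the caps, and the elements are assumptions; the actual user_data extraction is left as a stub):

#include <gst/gst.h>

/* Called by fakesink once per parsed frame. Skip delta frames, then
   scan the key frame for the user_data start code (00 00 01 B2). */
static void on_handoff(GstElement *sink, GstBuffer *buf, GstPad *pad,
                       gpointer user_data) {
    if (GST_BUFFER_FLAG_IS_SET(buf, GST_BUFFER_FLAG_DELTA_UNIT))
        return;                       /* not a key frame */
    GstMapInfo map;
    if (gst_buffer_map(buf, &map, GST_MAP_READ)) {
        /* TODO: search map.data / map.size for 00 00 01 B2 and
           extract the user_data bytes that follow it. */
        gst_buffer_unmap(buf, &map);
    }
}

int main(int argc, char **argv) {
    gst_init(&argc, &argv);
    GstElement *pipe = gst_parse_launch(
        "udpsrc port=5000 caps=\"application/x-rtp, media=(string)video, "
        "encoding-name=(string)MP4V-ES, clock-rate=(int)90000\" "
        "! rtpmp4vdepay ! mpeg4videoparse ! fakesink name=sink signal-handoffs=true",
        NULL);
    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipe), "sink");
    g_signal_connect(sink, "handoff", G_CALLBACK(on_handoff), NULL);
    gst_element_set_state(pipe, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}

To keep the video on screen while harvesting user_data, replace the fakesink branch with a tee that feeds both the fakesink and an autovideosink.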