GStreamer H264 RTP

I am using GStreamer 1.0 to capture and display video broadcast by an MGW ACE encoder (or by VLC itself), using RTP with H264.
I have read that the sender's SPS and PPS information is needed in order to decode.
Both pieces of information are carried in the sprop-parameter-sets parameter.
But if I can't get that information, is there any way I can decode and display the stream without adding that parameter?
My pipeline is the following:
gst-launch-1.0 -vvv udpsrc port=9001 caps="application/x-rtp, media=(string)video" ! rtph264depay ! decodebin ! autovideosink
I have verified that between two different hosts, one sending and the other receiving through GStreamer, there is no problem: I can send and receive without issue.
But when I try to receive video from an MGW ACE encoder or from VLC itself, I cannot display it.

Some RTP streaming setups repeat SPS/PPS periodically in-band before each IDR frame. However, I believe they do so only as a convenience for that particular case. If I remember correctly, RTP defines SPS/PPS transmission to occur out of band, via the SDP information.
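If the encoder does repeat SPS/PPS in-band, inserting h264parse after the depayloader may be enough to pick them up, so decoding can start at the next IDR frame without sprop-parameter-sets. A minimal sketch (caps values beyond the question's are assumptions):

gst-launch-1.0 -vvv udpsrc port=9001 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" ! rtph264depay ! h264parse ! decodebin ! autovideosink

If you can get the parameter sets out of band (e.g. from the SDP), they can be supplied directly in the caps; the base64 strings below are placeholders for your encoder's actual SPS/PPS:

gst-launch-1.0 -vvv udpsrc port=9001 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)\"<base64-SPS>,<base64-PPS>\"" ! rtph264depay ! h264parse ! decodebin ! autovideosink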

Related

Gstreamer RTSP webcam server

I want to read the feed from a webcam and host an RTSP stream without encoding the feed. I have access to a high-bandwidth network, but the CPUs are very low-end and have other tasks to fulfill, which is why I want to skip the encoding/decoding steps to save CPU usage. Before jumping to RTSP I tried a simple MJPG stream and tried to skip the jpegenc (JPEG encoding) step, as it can be done directly with a simple gst pipeline:
gst-launch-1.0 -v autovideosrc ! videoconvert ! videoscale ! video/x-raw,format=I420,width=800,height=600,framerate=25/1 ! rtpjpegpay ! udpsink host=10.0.1.10 port=5000
However, I got a warning:
WARNING: erroneous pipeline: could not link videoscale0 to
rtpjpegpay0, rtpjpegpay0 can't handle caps video/x-raw,
format=(string)I420, width=(int)800, height=(int)600,
framerate=(fraction)25/1
I'm new to GStreamer and not sure whether this is possible or how to move forward. The same command above works if I include the JPEG encoding. Any suggestions would be appreciated.
rtpjpegpay is an element that takes in a Motion JPEG stream and translates it to RTP. The input you're giving it, however, is video/x-raw, which means it is unencoded rather than encoded with Motion JPEG. If you want to use this element, you'll first have to encode the video to Motion JPEG, using something like jpegenc.
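As a concrete sketch, the pipeline from the question should link once jpegenc sits between videoscale and the payloader (host and port are taken from the question; untested against this exact setup):

gst-launch-1.0 -v autovideosrc ! videoconvert ! videoscale ! video/x-raw,format=I420,width=800,height=600,framerate=25/1 ! jpegenc ! rtpjpegpay ! udpsink host=10.0.1.10 port=5000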
Like @vermaete already mentions: if you really, really don't want to encode your video, you can use something like rtpvrawpay, which will translate your raw video into RTP packets. However, sending raw, unencoded video over the network is not really advisable (it is barely workable if you have a bad connection, and impossible if you plan on sending it over the Internet). You might also end up using a lot of CPU just to get everything payloaded properly and sent to your network card.
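For completeness, a minimal raw-video sketch along those lines, reusing the resolution, host, and port from the question. Note that rtpvrawpay/rtpvrawdepay carry no negotiation, so the receiver must restate the format in its caps (as strings, per RFC 4175):

Sender:
gst-launch-1.0 -v autovideosrc ! videoconvert ! videoscale ! video/x-raw,format=I420,width=800,height=600,framerate=25/1 ! rtpvrawpay ! udpsink host=10.0.1.10 port=5000

Receiver:
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:0, depth=(string)8, width=(string)800, height=(string)600" ! rtpvrawdepay ! videoconvert ! autovideosink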

Search for i-frame in RTP Packet

I am implementing RTSP in C# using an Axis IP camera. Everything is working fine, but when I try to display the video, the first few frames have lots of green patches. I suspect the issue is that I am not sending an I-frame first to the client.
Hence, I want to know the algorithm required to detect an I-frame in an RTP packet.
When initiating an RTSP session, the server normally starts the RTP stream with config data followed by the first I-frame.
It is conceivable that your Axis camera is set to "always multicast" - in this case the RTSP communication leads to an SDP description which tells the client all the necessary network and streaming details for receiving the multicast stream.
Since the multicast stream is always present, you most probably receive some P- or B-frames first (depending on GOP size).
You can detect these P/B-frames in your RTP client the same way you were detecting the I-frames, as suggested by Ralf: by identifying them via the NAL unit type. Simply skip all frames in the RTP client until you receive the first I-frame.
Now you can forward all following frames to the decoder.
Or you have to change your camera settings!
jens.
PS: don't forget that you have fragmentation in your RTP stream - that means that besides the RTP header there is some fragmentation information. Before identifying a frame you have to reassemble it.
It depends on the video media type. If you take H.264, for instance, you would look at the NAL unit header to check the NAL unit type.
The green patches can indeed be caused by not having received an I-frame first.
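To make that concrete, here is a minimal sketch of the NAL-unit-type check, assuming H.264 packetized per RFC 6184 and payloads with the RTP header already stripped (STAP-A aggregation packets, type 24, are not handled):

def h264_nal_type(payload):
    """Return (nal_type, starts_nal) for one RTP payload, or None."""
    if not payload:
        return None
    nal_type = payload[0] & 0x1F        # low 5 bits of the first payload byte
    if nal_type == 28:                  # FU-A: one fragment of a larger NAL unit
        if len(payload) < 2:
            return None
        fu_header = payload[1]
        start = bool(fu_header & 0x80)  # S bit is set on the first fragment only
        return fu_header & 0x1F, start  # the real type lives in the FU header
    return nal_type, True               # single NAL unit packet

def starts_idr(payload):
    """True if this packet begins an IDR slice (type 5), an SPS (7), or a PPS (8)."""
    info = h264_nal_type(payload)
    return info is not None and info[1] and info[0] in (5, 7, 8)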

How can I play gtalk rtp payload data for video using codec h264?

I am dealing with RTP packets from Gtalk video. I want to reconstruct the video from the Gtalk RTP payload data. According to my research, Gtalk uses the H.264 codec for video.
I combined all the RTP payloads sent with the Gtalk video and wanted to play the result with ffplay using the command "ffplay -f h264 "filename"", but I can't see
anything, and I get this error: "Could not find codec parameters (Video: h264, yuv420p)". I think my mistake is in how I combine the RTP payloads. How can I play this payload?
Thanks for your help.
Cheers.
It could be that you need the sequence and picture parameter sets (SPS and PPS), which are often transferred during session setup. Standard protocols used for this are RTSP and SIP, though I have no clue whether Gtalk uses either of these.
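On top of that, ffplay -f h264 expects an Annex B byte stream: every NAL unit prefixed with a start code, with SPS/PPS appearing before the first IDR slice. A minimal sketch, assuming you have already reassembled the NAL units from the RTP fragments and obtained the SPS/PPS bytes during session setup (all names here are hypothetical):

START_CODE = b"\x00\x00\x00\x01"

def write_annexb(filename, sps, pps, nal_units):
    # The decoder needs the parameter sets before the first IDR slice.
    with open(filename, "wb") as f:
        f.write(START_CODE + sps)
        f.write(START_CODE + pps)
        for nal in nal_units:           # reassembled NALs in decoding order
            f.write(START_CODE + nal)

# afterwards: ffplay -f h264 output.h264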

RTP H.264 save and replay

We are interested in saving an H.264 stream and replaying it. Has anyone had experience saving H.264 using WinPcap and replaying it? We were able to save H.263 and replay it, but the same logic does not work for H.264.
We also tried the rtpdump tool to save the H.264 stream, but we were unable to replay it in that format.
Thanks in advance.
An H.264 stream is usually sent as a transport stream (TS). If you want to save it to a file, then you need to demux it and then mux it into a format suitable for file storage, for example MP4.
You will probably need to disable B-frames in your encoder; saving an RTP H.264 stream didn't work for me with B-frames enabled.
I also advise using a low keyint value, because the dump will only be readable after the first keyframe.
You can use VLC to save the incoming stream with this command:
vlc -I rc rtp://@:4444 :sout=#std{access=file,mux=mp4,dst=output.mp4} :ipv4
Replace 4444 with the port number.
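If GStreamer is an option, an equivalent capture-to-MP4 sketch (port and caps are assumptions; -e makes mp4mux finalize the file cleanly on Ctrl-C, and h264parse gives the muxer properly framed H.264):

gst-launch-1.0 -e udpsrc port=4444 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" ! rtph264depay ! h264parse ! mp4mux ! filesink location=output.mp4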

parse userdata field of all key frames of MPEG header from rtp video stream using GStreamer

How do I parse an MPEG stream using GStreamer? I need to process the user-data field of only the key frames (not P-frames) of the MPEG stream.
The MPEG stream arrives over RTP. I am able to display the video using a GStreamer pipeline, but my final requirement is to parse the user-data field of all key frames and overlay that information onto the displayed video.
Use a fakesink in the pipeline and add a "handoff" callback function.
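A minimal sketch of that approach in Python, assuming an MPEG-2 transport stream over RTP; the port, caps, and pipeline elements are assumptions, and the actual user-data extraction is left as a placeholder. Key frames are the buffers that do not carry the DELTA_UNIT flag:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

def on_handoff(sink, buf, pad):
    # Key frames are delivered without the DELTA_UNIT flag.
    if not buf.has_flags(Gst.BufferFlags.DELTA_UNIT):
        ok, info = buf.map(Gst.MapFlags.READ)
        if ok:
            data = bytes(info.data)  # scan `data` for the user-data field here
            buf.unmap(info)

pipeline = Gst.parse_launch(
    'udpsrc port=5000 caps="application/x-rtp, media=(string)video, '
    'clock-rate=(int)90000, encoding-name=(string)MP2T" '
    '! rtpmp2tdepay ! tsdemux ! mpegvideoparse '
    '! fakesink name=tap signal-handoffs=true'
)
pipeline.get_by_name("tap").connect("handoff", on_handoff)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()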