Playing demuxed audio/video packets using GStreamer

I am a newbie in multimedia. I have an application that receives demuxed audio/video packets. I checked the GStreamer tutorials, but most of the examples are based on a URL or a muxed stream.
Does GStreamer provide an interface where one can pass demuxed audio/video buffers to it for playback?

appsrc is capable of pushing data from your application into a GStreamer pipeline.
The pipeline can be:
your demuxed data -> appsrc ! some-decoder ! some-sink
Here is an example and some info about appsrc.
You can also check some more examples here.
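To make this concrete, here is a minimal sketch of the video branch using the GStreamer C API (compiled as C++). The caps, the choice of h264parse/avdec_h264, and the feed_video_packet helper are assumptions for illustration only; adapt them to whatever codec your demuxer actually outputs.

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

// Sketch only: push already-demuxed H.264 access units into a playback pipeline.
static GstElement *pipeline  = nullptr;
static GstAppSrc  *video_src = nullptr;

void setup_pipeline() {
    gst_init(nullptr, nullptr);
    pipeline = gst_parse_launch(
        "appsrc name=vsrc is-live=true format=time "
        "! h264parse ! avdec_h264 ! videoconvert ! autovideosink",
        nullptr);
    video_src = GST_APP_SRC(gst_bin_get_by_name(GST_BIN(pipeline), "vsrc"));

    // Tell appsrc what it will receive (assumed: H.264 byte-stream, one access unit per buffer).
    GstCaps *caps = gst_caps_from_string(
        "video/x-h264, stream-format=(string)byte-stream, alignment=(string)au");
    gst_app_src_set_caps(video_src, caps);
    gst_caps_unref(caps);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
}

// Call this for every demuxed video packet your application receives.
void feed_video_packet(const guint8 *data, gsize size, GstClockTime pts) {
    GstBuffer *buf = gst_buffer_new_allocate(nullptr, size, nullptr);
    gst_buffer_fill(buf, 0, data, size);
    GST_BUFFER_PTS(buf) = pts;                // timestamp from your demuxer, if you have one
    gst_app_src_push_buffer(video_src, buf);  // appsrc takes ownership of the buffer
}

An audio branch works the same way with a second appsrc feeding the matching parser/decoder, and gst_app_src_end_of_stream() signals EOS once the demuxer runs out of data.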

Related

GStreamer: Input from osxaudiosrc with more than 8 channels

I am trying to receive audio from an OSX audio device with 64 input channels (Soundflower64 in this example) and record it to a multichannel WAV file.
I get the first 8 channels without problems with this basic command:
gst-launch-1.0 osxaudiosrc device=78 ! wavenc ! filesink location=audio.wav
But I found no way to widen this pipeline to 64 channels. Nothing seems to work...
Can it be done or is this an inherent limitation of GStreamer?
This still looks like a hardcoded limit in GStreamer: https://github.com/GStreamer/gst-plugins-good/blob/972184f434c3212aa313eacb9025869d050e91c3/sys/osxaudio/gstosxcoreaudio.h#L67
Edit: I made a pull request to GStreamer to increase this limit to 64, which has now been released in v1.20. See https://gitlab.freedesktop.org/gstreamer/gstreamer/-/blob/1.20/subprojects/gst-plugins-good/sys/osxaudio/gstosxcoreaudio.h#L66
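Assuming a GStreamer build that contains that fix (1.20 or newer) and a device that really exposes 64 inputs, explicitly requesting the channel count with a capsfilter should be the way to go. Below is an untested sketch that runs the equivalent pipeline from code; the device index is copied from the question, and the channel-mask hint may or may not be needed for your device.

#include <gst/gst.h>

// Untested sketch: capture 64 channels from osxaudiosrc (needs GStreamer >= 1.20,
// where the CoreAudio channel limit was raised). For unpositioned multichannel
// layouts you may additionally need channel-mask=(bitmask)0x0 in the caps.
int main(int argc, char **argv) {
    gst_init(&argc, &argv);
    GError *err = nullptr;
    GstElement *pipeline = gst_parse_launch(
        "osxaudiosrc device=78 "
        "! audio/x-raw,channels=64 "
        "! wavenc ! filesink location=audio.wav",
        &err);
    if (pipeline == nullptr) {
        g_printerr("Pipeline error: %s\n", err->message);
        return 1;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(nullptr, FALSE));
    return 0;
}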

Gstreamer H264 RTP

I am using GStreamer 1.0 to capture and display video broadcast by an MGW ACE encoder (or by VLC itself), using RTP with H.264.
I have read that the sender's SPS and PPS information is needed in order to decode.
Both are carried in the sprop-parameter-sets parameter.
But if I can't get that information, is there any way I can decode and display without adding that parameter?
My receiving pipeline is the following:
gst-launch-1.0 -vvv udpsrc port=9001 caps="application/x-rtp, media=(string)video" ! rtph264depay ! decodebin ! autovideosink
I have verified that between two different hosts, one sending and the other receiving through GStreamer, there is no problem: I can send and receive without issues.
But when I try to receive video from an MGW ACE encoder or from VLC itself, I cannot display it.
Some RTP streaming scenarios repeat SPS/PPS periodically in-band before each IDR frame. However, I believe they do that for convenience in that particular case. If I remember correctly, the RTP payload format for H.264 defines SPS/PPS transmission to occur out of band, via the SDP information.
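If you can get hold of the parameter sets at all (for instance from the SDP that VLC or the encoder publishes), one option is to put them into the udpsrc caps yourself. The sketch below is untested; the base64 strings are placeholders rather than real values, and the payload type and clock-rate are the common defaults, which may differ for your sender.

#include <gst/gst.h>

// Untested sketch: receive RTP/H.264 when SPS/PPS are not repeated in-band.
// <base64-SPS>/<base64-PPS> are placeholders; copy the real values from the
// sprop-parameter-sets attribute of the sender's SDP.
int main(int argc, char **argv) {
    gst_init(&argc, &argv);
    GstElement *pipeline = gst_parse_launch(
        "udpsrc name=src port=9001 "
        "! rtph264depay ! h264parse ! decodebin ! autovideosink",
        nullptr);
    GstElement *src = gst_bin_get_by_name(GST_BIN(pipeline), "src");

    GstCaps *caps = gst_caps_from_string(
        "application/x-rtp, media=(string)video, clock-rate=(int)90000, "
        "encoding-name=(string)H264, payload=(int)96, "
        "sprop-parameter-sets=(string)\"<base64-SPS>,<base64-PPS>\"");
    g_object_set(src, "caps", caps, NULL);
    gst_caps_unref(caps);
    gst_object_unref(src);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(nullptr, FALSE));
    return 0;
}

If you control the sender instead, the usual alternative is to have it repeat SPS/PPS in-band (with a GStreamer sender that would be rtph264pay config-interval=-1); h264parse on the receiving side will then pick them up without any sprop-parameter-sets in the caps.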

Gstreamer RTSP webcam server

I want to read the feed from a webcam and host an RTSP stream without encoding the feed. I have access to a high-bandwidth network, but the CPUs are very low end and have other tasks to fulfill, which is why I want to skip the encoding/decoding steps to save CPU usage. Before jumping to RTSP, I tried a simple MJPEG stream and tried to skip the jpegenc (JPEG encoding) step, since it can be done directly with a simple gst pipeline:
gst-launch-1.0 -v autovideosrc ! videoconvert ! videoscale ! video/x-raw,format=I420,width=800,height=600,framerate=25/1 ! rtpjpegpay ! udpsink host=10.0.1.10 port=5000
However, I got a warning:
WARNING: erroneous pipeline: could not link videoscale0 to
rtpjpegpay0, rtpjpegpay0 can't handle caps video/x-raw,
format=(string)I420, width=(int)800, height=(int)600,
framerate=(fraction)25/1
I'm new to GStreamer and not sure if this is possible or how to move forward. The same command above works if I include the JPEG encoding. Any suggestions would be appreciated.
rtpjpegpay is an element that takes in a Motion JPEG stream and translates it to RTP. The input you're giving it, however, is video/x-raw, which means it is unencoded rather than encoded with Motion JPEG. If you want to use this element, you'll first have to encode the video to Motion JPEG, using something like jpegenc.
Like @vermaete already mentions: if you really, really don't want to encode your video, you can use something like rtpvrawpay, which will translate your raw video into RTP packets. However, sending raw, unencoded video over the network is not really advisable (and not even workable if you have a bad connection, or outright impossible if you plan on sending it over the Internet). You might also end up using a lot of CPU just to get everything payloaded properly and sent to your network card.
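For completeness, this is roughly what the rtpvrawpay variant of the question's pipeline would look like when run from code; it is an untested sketch, with the host, port and video format simply copied from the question.

#include <gst/gst.h>

// Untested sketch: send raw (unencoded) I420 video over RTP with rtpvrawpay.
// Mind the bandwidth: 800x600 I420 at 25 fps is already ~144 Mbit/s of payload.
int main(int argc, char **argv) {
    gst_init(&argc, &argv);
    GstElement *sender = gst_parse_launch(
        "autovideosrc ! videoconvert ! videoscale "
        "! video/x-raw,format=I420,width=800,height=600,framerate=25/1 "
        "! rtpvrawpay ! udpsink host=10.0.1.10 port=5000",
        nullptr);
    gst_element_set_state(sender, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(nullptr, FALSE));
    return 0;
}

The receiver then needs the full raw-video caps on its udpsrc, since rtpvrawpay cannot signal them in-band; the usual trick is to copy the application/x-rtp caps that the sender negotiates (gst-launch-1.0 -v prints them).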

Write gstreamer source with opencv

My goal is to write a GigEVision-to-GStreamer application.
The first approach was to read the frames via a GigEVision API and then send them via GStreamer as a raw RTP/UDP stream.
This stream can then be received by any gstreamer application.
Here is a minimal example for a webcam: https://github.com/tik0/mat2gstreamer
The drawback of this is a lot of serialization and deserialization when the packets are sent via UDP to the next application.
So the question: is it possible to easily write a GStreamer source with OpenCV to overcome these drawbacks? (Or do you have any other suggestions?)
Greetings
I think I've found the best solution for my given setup (where the data is exchanged between applications on the same PC).
Just using the plugin for shared memory allows data exchange with minimal effort.
So my OpenCV pipeline looks like:
appsrc ! shmsink socket-path=/tmp/foo sync=true wait-for-connection=false
And any other receiver (in this case gstreamer-1.0) looks like:
gst-launch-1.0 shmsrc socket-path=/tmp/foo ! video/x-raw, format=BGR ,width=<myWidth>,height=<myHeight>,framerate=<myFps> ! videoconvert ! autovideosink
Works very nicely, even with multiple clients attached.
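For reference, the OpenCV side of this setup can be a cv::VideoWriter opened with the GStreamer backend (this needs a reasonably recent OpenCV built with GStreamer support). In the sketch below the webcam capture, frame size and frame rate are placeholders standing in for the GigEVision grab described in the question.

#include <opencv2/opencv.hpp>

// Sketch: feed BGR frames from OpenCV into the shmsink pipeline from the answer.
// Width/height/fps must match what the shmsrc receiver puts in its caps.
int main() {
    const int width = 640, height = 480;
    const double fps = 25.0;

    cv::VideoCapture cap(0);  // placeholder for the GigEVision frame source
    cv::VideoWriter writer(
        "appsrc ! shmsink socket-path=/tmp/foo sync=true wait-for-connection=false",
        cv::CAP_GSTREAMER, 0 /* raw, no fourcc */, fps,
        cv::Size(width, height), true /* isColor: BGR frames */);

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::resize(frame, frame, cv::Size(width, height));
        writer.write(frame);  // lands in the shared memory segment as video/x-raw,format=BGR
    }
    return 0;
}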

gstreamer get video playing event

I am quite new to GStreamer and trying to get some metrics on an existing pipeline. The pipeline is set up as 'appsrc ! queue ! mpegvideoparse ! avdec_mpeg2video ! deinterlace ! videobalance ! xvimagesink'.
xvimagesink only has a sink pad, and I am not sure where or how its output is connected, but I am interested in knowing when the actual video device/buffer displays the first I-frame and the video starts rolling.
The application sets the pipeline state to 'playing' quite early on, so, listening on this event does not help.
Regards,
Check out GST_MESSAGE_STREAM_START and pad probes. However, I am not sure what exactly you want: at the GStreamer level you can only detect the moment when a buffer is handled by some element, not when it is actually displayed.
xvimagesink has no source pad (output), only a sink pad (input).
You can read about preroll here: http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-preroll.txt
Be sure to read the GStreamer manual first:
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html
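To illustrate the probe idea: a one-shot buffer probe on the sink pad of xvimagesink fires when the first decoded frame reaches the sink, which is as far as GStreamer can see (the actual on-screen display happens beyond it). This is only a sketch; the element name "videosink" is a placeholder for however your xvimagesink is named. Waiting for GST_MESSAGE_ASYNC_DONE on the bus is a coarser alternative that tells you the sink has prerolled its first frame.

#include <gst/gst.h>

// Sketch: report when the first buffer arrives at the video sink's sink pad.
static GstPadProbeReturn first_buffer_cb(GstPad *, GstPadProbeInfo *, gpointer) {
    g_print("first frame reached the video sink\n");
    return GST_PAD_PROBE_REMOVE;  // one-shot: remove the probe after the first buffer
}

void watch_first_frame(GstElement *pipeline) {
    // "videosink" is a placeholder; give your xvimagesink a name and look it up here.
    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "videosink");
    GstPad *pad = gst_element_get_static_pad(sink, "sink");
    gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_BUFFER,
                      first_buffer_cb, nullptr, nullptr);
    gst_object_unref(pad);
    gst_object_unref(sink);
}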