What is codec_data in GStreamer?

I am new to GStreamer and trying to debug an issue with the AAC codec. I found different codec_data values in different scenarios. Following are the caps I got from the different scenarios.
src caps: audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, level=(string)1, base-profile=(string)lc, profile=(string)lc, codec_data=(buffer)131056e59d4800, rate=(int)24000, channels=(int)2
setcaps: audio/mpeg, mpegversion=(int)4, codec_data=(string)11900800, stream-format=(string)raw, framed=(boolean)true, enable-svp=(string)true, rate=(int)48000, channels=(int)2
Could you please help me understand what codec_data is?

codec_data contains additional data used to initialize the decoder, e.g. information about the sample rate and the number of channels in the stream.
You can parse this data according to the codec being used; check that codec's specification for the exact format. For AAC (audio/mpeg with mpegversion=4), codec_data is the AudioSpecificConfig defined in ISO/IEC 14496-3.
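As a rough sketch of that parsing for the AAC case above (a minimal standalone example; the byte values are the two codec_data buffers from the question): the first two bytes of the AudioSpecificConfig pack 5 bits of audioObjectType, 4 bits of samplingFrequencyIndex, and 4 bits of channelConfiguration, ignoring the escape/extension cases.
#include <cstddef>
#include <cstdint>
#include <cstdio>
// Decode the leading fields of an AAC AudioSpecificConfig (ISO/IEC 14496-3).
void parse_audio_specific_config(const uint8_t *cd, size_t len) {
    if (len < 2) return;
    unsigned object_type = cd[0] >> 3;                           // 2 = AAC-LC
    unsigned freq_index  = ((cd[0] & 0x07) << 1) | (cd[1] >> 7); // 4-bit index
    unsigned channels    = (cd[1] >> 3) & 0x0F;
    static const int rates[] = {96000, 88200, 64000, 48000, 44100, 32000,
                                24000, 22050, 16000, 12000, 11025, 8000, 7350};
    // Indices 13-14 are reserved; 15 means an explicit 24-bit rate follows.
    int rate = (freq_index < 13) ? rates[freq_index] : -1;
    printf("objectType=%u rate=%d channels=%u\n", object_type, rate, channels);
}
int main() {
    const uint8_t a[] = {0x13, 0x10, 0x56, 0xE5, 0x9D, 0x48, 0x00}; // first caps
    const uint8_t b[] = {0x11, 0x90, 0x08, 0x00};                   // second caps
    parse_audio_specific_config(a, sizeof a); // objectType=2 rate=24000 channels=2
    parse_audio_specific_config(b, sizeof b); // objectType=2 rate=48000 channels=2
    return 0;
}
Both buffers decode to AAC-LC with 2 channels, at 24000 Hz and 48000 Hz respectively, matching the rate and channels fields in the caps; the trailing bytes of the longer buffer carry GASpecificConfig flags and an optional sync extension and can be ignored for this basic check.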

Related

How to write ROS AudioData message into wav file?

I'm using a ReSpeaker Mic Array v2.0 on my robot. I used the following git repo: https://github.com/furushchev/respeaker_ros.git to capture the audio received by the speaker. I subscribed to its raw audio ROS topic /audio, which is just byte-array data (http://docs.ros.org/noetic/api/audio_common_msgs/html/msg/AudioData.html)
How can I write the AudioData message's uint8[] data into a wav file in C++? I would like to play the wav file by other means afterwards.
I saw that the ROS audio_common library example uses GStreamer to do the writing, but I'm quite confused after reading the code (https://github.com/ros-drivers/audio_common/blob/master/audio_capture/src/audio_capture.cpp)
The example you saw uses GStreamer's alsasrc to capture audio from the mic in this line:
_source = gst_element_factory_make("alsasrc", "source");
So the GStreamer pipeline internally captures and handles the audio byte array and, when the input parameters are dst_type=="filesink" and format=="wave", encodes it with
_filter = gst_element_factory_make("wavenc", "filter");
and creates the .wav file with
_sink = gst_element_factory_make("filesink", "sink");
On the other hand, running that code with the input parameters dst_type=="appsink" and format=="wave" captures the audio bytes in the same way but, instead of writing them to a file, publishes them on the ROS topic /audio.
If you cannot (for any reason) use this code with the input parameters dst_type=="filesink" and format=="wave", I suppose you will need to use GStreamer's appsrc element and feed it the bytes from your AudioData messages. In that case, the rest of the GStreamer pipeline for encoding and writing to file should remain the same as in the example; a sketch of that approach follows.
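Here is a minimal sketch of that appsrc approach. It assumes the /audio bytes are raw S16LE PCM at 16000 Hz, mono; check your respeaker_ros configuration, since the caps must match what the topic actually carries. The function names (init_pipeline, on_audio, stop) are illustrative; on_audio is what you would call from your ROS subscriber callback with msg->data.
// build: g++ wav_writer.cpp $(pkg-config --cflags --libs gstreamer-app-1.0)
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>
#include <cstdint>
#include <vector>
static GstElement *pipeline = nullptr;
static GstElement *src = nullptr;
void init_pipeline() {
  gst_init(nullptr, nullptr);
  // appsrc replaces alsasrc; the wavenc ! filesink tail matches the example.
  pipeline = gst_parse_launch(
      "appsrc name=src format=time do-timestamp=true "
      "! wavenc ! filesink location=out.wav", nullptr);
  src = gst_bin_get_by_name(GST_BIN(pipeline), "src");
  // Caps are an assumption: raw 16-bit little-endian PCM, 16 kHz, mono.
  GstCaps *caps = gst_caps_new_simple("audio/x-raw",
      "format", G_TYPE_STRING, "S16LE",
      "rate", G_TYPE_INT, 16000,
      "channels", G_TYPE_INT, 1,
      "layout", G_TYPE_STRING, "interleaved", nullptr);
  gst_app_src_set_caps(GST_APP_SRC(src), caps);
  gst_caps_unref(caps);
  gst_element_set_state(pipeline, GST_STATE_PLAYING);
}
// Call from the /audio subscriber callback with the message's uint8[] data.
void on_audio(const uint8_t *data, size_t len) {
  GstBuffer *buf = gst_buffer_new_allocate(nullptr, len, nullptr);
  gst_buffer_fill(buf, 0, data, len);
  gst_app_src_push_buffer(GST_APP_SRC(src), buf); // takes ownership of buf
}
// Send EOS so wavenc can finalize the WAV header, then shut down.
void stop() {
  gst_app_src_end_of_stream(GST_APP_SRC(src));
  GstBus *bus = gst_element_get_bus(pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
      (GstMessageType)(GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
  if (msg) gst_message_unref(msg);
  gst_object_unref(bus);
  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(src);
  gst_object_unref(pipeline);
}
int main() {
  init_pipeline();
  std::vector<uint8_t> silence(16000 * 2, 0); // one second of S16LE mono silence
  on_audio(silence.data(), silence.size());
  stop();
  return 0;
}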

Missing element: MPEG4-GENERIC audio RTP depayloader (GStreamer)

When I try to record an RTSP stream with audio and video using GStreamer, I get the error above. Recording only video works, but when the audio pipeline is added the file size becomes zero and the above error is displayed. The following is also printed:
Missing element: MPEG4-GENERIC audio RTP depayloader
WARNING: from element /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0: No decoder available for type 'application/x-rtp, media=(string)audio, payload=(int)96, clock-rate=(int)48000, encoding-name=(string)MPEG4-GENERIC, streamtype=(string)5, profile-level-id=(string)1, mode=(string)aac-hbr, sizelength=(string)13, indexlength=(string)3, indexdeltalength=(string)3, config=(string)1188, a-tool=(string)"LIVE555\ Streaming\ Media\ v2016.01.29", a-type=(string)broadcast, x-qt-text-nam=(string)"KMStreaming\ Server", x-qt-text-inf=(string)ch01, clock-base=(uint)3130203504, seqnum-base=(uint)34845, npt-start=(guint64)0, play-speed=(double)1, play-scale=(double)1, ssrc=(uint)3216157947'.
Additional debug info:
gsturidecodebin.c(921): unknown_type_cb (): /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0
There are two different MPEG-4 audio RTP formats in the wild: MP4A-LATM and MPEG4-GENERIC. See RFC 3016 and RFC 3640, respectively.
It looks like GStreamer only supports MP4A-LATM. So basically, yes, the format you are trying to receive is not supported.

GStreamer: VBI data stream decoding

I have a question related to processing a VBI data stream through the 'teletextdec' plug-in.
Basically, I've already verified the functionality of the following configuration:
gst-launch-1.0 -v -m filesrc location=sample.mpeg ! tsdemux ! teletextdec ! videoconvert ! ximagesink
where sample.mpeg is a video file containing not only video data but also VBI data. The result is presented as video frames with the decoded VBI data overlaid.
At the moment, I'd like to reproduce a similar configuration, but with the video and VBI data separated from each other (I need to work with two data sources: one providing video data, the other providing VBI data).
In other words, we have the following two data sources:
/dev/vbi0 - a character device that outputs binary data
/dev/video0 - a v4l2 device that outputs RAW video data
I understand that, generally, I need something like this:
avfvideosrc name=src ! videomixer src.vbi_01 ! teletextdec
Does anyone have similar experience?
Could anyone explain how to correctly mix and decode RAW video data together with VBI data?
Thanks in advance!

Is there any way to set rtph264pay's profile-level-id in gst-plugins-good 1.8?

I use GStreamer to transcode VP8 to H.264 and send it to Chrome, but the profile-level-id doesn't match. So, is there any way to set rtph264pay's profile-level-id in gst-plugins-good 1.8?
If you add an h264parse element after your H.264 encoder, it will identify the profile and level from the stream and put them into the caps, which rtph264pay can then use.
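A minimal sketch of that arrangement, with videotestsrc and x264enc as stand-ins for your actual VP8-decode and H.264-encode chain (host and port are placeholders):
gst-launch-1.0 videotestsrc ! x264enc ! h264parse ! rtph264pay pt=96 ! udpsink host=127.0.0.1 port=5000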

How do I use gstreamer to encode an ffv1 file?

I'd like to encode a video with GStreamer to an FFV1 (FFmpeg's lossless video format) file. However, I cannot work out what type of muxing to use. If I run this:
gst-launch videotestsrc ! ffenc_ffv1 ! filesink location="test.ffv1"
Then the thing runs OK, but the resulting file doesn't appear to be a valid video file. When creating Theora videos, I've previously used "theoraenc ! oggmux ! filesink" in the pipeline, and this works. However, oggmux doesn't work here. What container format should I be using, and what is the correct gst-launch fudge to use?
Cheers.
This does not seem to be supported in the version I have installed. You can check for your version by saving the output of gst-xmlinspect to a file and searching for video/x-ffv in it. The elements where this MIME type is mentioned are:
avidemux
ogmvideoparse
ffdec_ffv1
ffenc_ffv1
So it seems this is supported by the AVI demuxer but not by any muxer.
PS: The MIME type can be found with gst-inspect ffenc_ffv1.
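For what it's worth, on newer GStreamer 1.x installs the element names changed (ffenc_ffv1 became avenc_ffv1 when the FFmpeg plugins moved to gst-libav), and recent matroskamux versions list video/x-ffv among their sink caps, so a Matroska pipeline along these lines should work; verify against your installed elements with gst-inspect-1.0 matroskamux first:
gst-launch-1.0 videotestsrc num-buffers=300 ! videoconvert ! avenc_ffv1 ! matroskamux ! filesink location=test.mkv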