adding audio to appsrc video pipeline - gstreamer

I'm using appsrc to generate an HLS stream; this is my working pipeline:
appsrc->videoconvert->openh264enc->h264parse->mpegtsmux->hlssink
However, I'd also like to generate some audio via audiotestsrc and feed it into mpegtsmux, which would look like the following:
audiotestsrc->lamemp3enc->mpegtsmux
audiotestsrc and lamemp3enc only have 'always' pads, so I link the two just like my other video elements.
When it comes to linking lamemp3enc's 'always' src pad to a requested 'sink_%d' pad on mpegtsmux, gst_pad_link reports no issue:
//Returns 0
gst_pad_link(h264ParsePad, mpegtsmuxSinkPad);
//Returns 0
gst_pad_link(audioEncPad, mpegtsmuxSinkPad);
//Returns 0
gst_pad_link(mpegtsmuxSrcPad, hlssinkPad);
But running the app results in pipeline failure with
"Internal data stream error."
Removing the audioEncPad link makes the stream work as before, but of course without audio. How should I go about doing this?

A few things needed to be done:
1. Use aacparse
2. Clean the solution
3. Link voaacenc with aacparse
#2 caused me a lot of torment, since everything theoretically should've worked. D'oh.
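For anyone landing here later, this is roughly what the fixed audio branch looks like in C (a sketch, not my exact code; pipeline and mpegtsmux stand for whatever you already created for the video branch):

/* Corrected audio branch: audiotestsrc ! voaacenc ! aacparse ! mpegtsmux */
GstElement *audioSrc   = gst_element_factory_make("audiotestsrc", NULL);
GstElement *audioEnc   = gst_element_factory_make("voaacenc",     NULL);
GstElement *audioParse = gst_element_factory_make("aacparse",     NULL);

gst_bin_add_many(GST_BIN(pipeline), audioSrc, audioEnc, audioParse, NULL);

/* These elements only have 'always' pads, so element-level linking is enough. */
gst_element_link_many(audioSrc, audioEnc, audioParse, NULL);

/* Request an audio sink pad from the muxer and link aacparse to it. */
GstPad *muxAudioPad = gst_element_get_request_pad(mpegtsmux, "sink_%d");
GstPad *parseSrcPad = gst_element_get_static_pad(audioParse, "src");
if (gst_pad_link(parseSrcPad, muxAudioPad) != GST_PAD_LINK_OK)
    g_printerr("Failed to link aacparse to mpegtsmux\n");
gst_object_unref(parseSrcPad);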

Related

QtGStreamer not producing expected output on streaming from camera

I am trying to use QtGStreamer to stream camera frames and render them onto a QML window. I have a simple GStreamer pipeline which works fine when I use gst-launch-1.0:
gst-launch-1.0 autovideosrc ! videoscale ! video/x-raw, width=480,height=270 ! xvimagesink -e
Now I create a corresponding QtGStreamer pipeline as:
void Streamer::startStreaming()
{
    if (!m_streaming_pipeline) {
        m_streaming_pipeline = QGst::Pipeline::create();
        if (m_streaming_pipeline) {
            QGst::ElementPtr source = QGst::ElementFactory::make("autovideosrc");
            QGst::ElementPtr scale = QGst::ElementFactory::make("videoscale");
            scale->setProperty("caps", QGst::Caps::fromString("video/x-raw, width=480,height=270"));
            if (m_videoSink) {
                m_videoSink->setProperty("sync", false);
                m_streaming_pipeline->add(source, scale, m_videoSink);
                source->link(scale);
                scale->link(m_videoSink);
                QGst::BusPtr bus = m_streaming_pipeline->bus();
                bus->addSignalWatch();
                QGlib::connect(bus, "message", this, &Streamer::onBusMessage);
                m_streaming_pipeline->setState(QGst::StatePlaying);
                qDebug() << "Done";
            }
        }
    }
}
So first off, this is really slow. While the original gstreamer command runs easily at 30 frames/second, this is running at a couple of frames per second. I also get this output on the console when I set GST_DEBUG=3
0:00:08.661824920 23980 0x2ac6370 WARN v4l2bufferpool gstv4l2bufferpool.c:540:gst_v4l2_buffer_pool_set_config:<autovideosrc0-actual-src-v4l:pool:src> libv4l2 converter detected, disabling CREATE_BUFS
0:00:08.665945185 23980 0x2ac6370 WARN v4l2bufferpool gstv4l2bufferpool.c:748:gst_v4l2_buffer_pool_start:<autovideosrc0-actual-src-v4l:pool:src> Uncertain or not enough buffers, enabling copy threshold
Another thing I noticed is that in the frames that do get rendered, the colour scheme looks almost flipped. So it seems that something along the line is also swapping the colour channels.
EDIT
I figured out that I needed to add a capsfilter to get the correct format. So adding something like:
QGst::ElementPtr capsfilter = QGst::ElementFactory::make("capsfilter", "capsfilter");
capsfilter->setProperty("caps", QGst::Caps::fromString("video/x-raw, width=1920, height=1080, format=RGB, framerate=30/1"));
and then adding it via
m_streaming_pipeline->add(source, capsfilter, scale, m_videoSink);
and subsequently linking it solved the problem.
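In plain C GStreamer terms (just a sketch with made-up variable names; the QtGStreamer link() calls follow the same order), the fix boils down to putting the capsfilter right after the source:

/* source ! capsfilter ! videoscale ! sink, with the caps forced right after the source */
GstElement *capsfilter = gst_element_factory_make("capsfilter", NULL);
GstCaps *caps = gst_caps_from_string(
    "video/x-raw, width=1920, height=1080, format=RGB, framerate=30/1");
g_object_set(capsfilter, "caps", caps, NULL);
gst_caps_unref(caps);

gst_bin_add_many(GST_BIN(pipeline), source, capsfilter, scale, sink, NULL);
gst_element_link_many(source, capsfilter, scale, sink, NULL);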
Although now my question is: how does GStreamer pick a valid format in my original pipeline?

Stream Icecast using Gstreamer

I'm designing a program to stream from an Icecast server (radio.clarkson.edu). Ultimately it will be written in Python3, but for now I'm using gst-launch to test the pipeline. I've been working on Debian Jessie and using gstreamer-1.0. Using a file on Wikimedia, I was able to play it pretty easily using:
url=https://upload.wikimedia.org/wikipedia/commons/0/0c/Muriel-Nguyen-Xuan-Korsakov-Flight-of-the-bumblebee.flac.oga
gst-launch-1.0 -v souphttpsrc location=$url ! decodebin ! audioconvert ! audioresample ! alsasink
Running the same commands with my stream, I get the output:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:src: caps = text/uri-list
Missing element: text/uri-list decoder
ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0: Your GStreamer installation is missing a plug-in.
Additional debug info:
gstdecodebin2.c(3977): gst_decode_bin_expose (): /GstPipeline:pipeline0/GstDecodeBin:decodebin0:
no suitable plugins found
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:src: caps = "NULL"
Freeing pipeline ...
I have tried too many other pipelines to put on one post, but I can answer any other questions.
Thank you
By now you have probably solved the problem, but still, here's an idea: text/uri-list indicates that you didn't hand an actual stream to GStreamer, but rather a (textual) playlist that contains the stream addresses. I guess GStreamer can't handle those by itself, so you need to parse the playlist beforehand and then hand an actual audio stream address to it.
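As a rough sketch of that parsing step (assuming the playlist text has already been downloaded into a string; real playlists may be .m3u or .pls and need smarter handling):

#include <glib.h>

/* Very naive text/uri-list parsing: return the first non-empty, non-comment line. */
static gchar *first_uri_from_playlist(const gchar *playlist_text)
{
    gchar **lines = g_strsplit(playlist_text, "\n", -1);
    gchar *uri = NULL;
    for (gint i = 0; lines[i] != NULL && uri == NULL; i++) {
        gchar *line = g_strstrip(lines[i]);
        if (*line != '\0' && *line != '#')
            uri = g_strdup(line);
    }
    g_strfreev(lines);
    return uri;
}

The returned URI would then be set as the location on souphttpsrc (or passed to gst-launch) instead of the playlist URL.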

Segmented mp4 in GStreamer

I have this pipeline:
gst-launch-1.0 rtspsrc location=rtsp://ip/cam ! rtph264depay ! h264parse ! mp4mux fragment-duration=10000 streamable=1 ! multifilesink next-file=2 location=file-%03d.mp4
The first segment plays well, the others do not. When I inspect the structure of a damaged mp4, I see an interesting bug:
MOOV
Some data
MOOF
MDAT
MOOF
MDAT
The most interesting thing is the "Some data" block. There is no header data; the bytes are simply there. Judging by the block size, I think it is an MDAT. If I work out the size of the block and prepend an MDAT header to it, the file immediately becomes valid and plays. But that unknown piece still can't be played, because there is no MOOF header before it.
The problem occurs with both mp4mux and qtmux. Tested on GStreamer 1.1.0 and 1.2.2; all results are identical.
Am I using multifilesink incorrectly?
If you take a look at the documentation for multifilesink you will find the answer:
It is not possible to use this element to create independently playable mp4 files, use the splitmuxsink element for that instead. ...
So use splitmuxsink, and don't forget to send EOS when you are done so that the last file is finished correctly.
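Roughly like this in C (a sketch using gst_parse_launch; the RTSP URL and file pattern are taken from the question, and max-size-time is in nanoseconds):

/* Build the pipeline with splitmuxsink instead of mp4mux + multifilesink. */
GError *error = NULL;
GstElement *pipeline = gst_parse_launch(
    "rtspsrc location=rtsp://ip/cam ! rtph264depay ! h264parse ! "
    "splitmuxsink location=file-%03d.mp4 max-size-time=10000000000",
    &error);

gst_element_set_state(pipeline, GST_STATE_PLAYING);

/* ... when you want to stop: send EOS and wait for it, so the last file is finalized. */
gst_element_send_event(pipeline, gst_event_new_eos());
GstBus *bus = gst_element_get_bus(pipeline);
GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                             GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
if (msg)
    gst_message_unref(msg);
gst_object_unref(bus);
gst_element_set_state(pipeline, GST_STATE_NULL);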
Update
It looks like splitmuxsink didn't exist yet at the time the question was asked.
Can this be reproduced using videotestsrc instead of rtsp?
Try replacing your h264 receiving and depayloading with "videotestsrc num-buffers= ! x264enc ! mp4mux ..."
This might be a bug, please file it at https://bugzilla.gnome.org/enter_bug.cgi?product=GStreamer so it gets proper attention from maintainers.
Also, how are you trying to play it?
Thanks

dynamically replacing elements in a playing gstreamer pipeline

I'm looking for the correct technique, if one exists, for dynamically replacing an element in a running gstreamer pipeline. I have a gstreamer based c++ app and the pipeline it creates looks like this (using gst-launch syntax) :
souphttpsrc location="http://localhost/local.ts" ! mpegtsdemux name=d ! queue ! mpeg2dec ! xvimagesink d. ! queue ! a52dec ! pulsesink
During the middle of playback (i.e. GST_STATE_PLAYING is the pipeline state and the user is happily watching video), I need to remove souphttpsrc from the pipeline and create a new souphttpsrc, or even a new neonhttpsource, and then immediately add that back into the pipeline and continue playback of the same uri source stream at the same time position where playback was before we performed this operation. The user might see a small delay and that is fine.
We've barely figured out how to remove and replace the source, and we need more understanding. Here's our best attempt thus far:
// Tear down and remove the old source
gst_element_unlink(source, demuxer);
gst_element_set_state(source, GST_STATE_NULL);
gst_bin_remove(GST_BIN(pipeline), source);
// Create a replacement, add it to the pipeline and relink it
source = gst_element_factory_make("souphttpsrc", "src");
g_object_set(G_OBJECT(source), "location", url, NULL);
gst_bin_add(GST_BIN(pipeline), source);
gst_element_link(source, demuxer);
gst_element_sync_state_with_parent(source);
This doesn't work perfectly: the new source starts playing back from the beginning, and (I assume) the rest of the pipeline is waiting for buffers with the correct timestamps, because after several seconds playback picks back up. I have tried seeking the source in multiple ways, but nothing has worked.
I need to know the correct way to do this. It would be nice to know a general technique, if one exists, as well, in case we wanted to dynamically replace the decoder or some other element.
thanks
I think this may be what you are looking for:
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-block.txt
(starting at line 115)
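That document describes the 0.10 blocking API (gst_pad_set_blocked_async); on GStreamer 1.0 the same idea is expressed with a blocking pad probe. A rough sketch, reusing the variable names from your snippet (source, demuxer, pipeline and url are assumed to be accessible; not a tested drop-in):

/* Block the old source's src pad, then do the swap from the main loop. */
static gboolean
do_source_swap (gpointer user_data)
{
    /* Going to NULL also releases the blocked pad, and gst_bin_remove unlinks. */
    gst_element_set_state (source, GST_STATE_NULL);
    gst_bin_remove (GST_BIN (pipeline), source);

    source = gst_element_factory_make ("souphttpsrc", "src");
    g_object_set (G_OBJECT (source), "location", url, NULL);
    gst_bin_add (GST_BIN (pipeline), source);
    gst_element_link (source, demuxer);
    gst_element_sync_state_with_parent (source);
    return G_SOURCE_REMOVE;
}

static GstPadProbeReturn
src_blocked_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    /* The pad is now blocked; schedule the swap on the main loop instead of
     * doing it from this streaming thread. */
    g_idle_add (do_source_swap, NULL);
    return GST_PAD_PROBE_OK;   /* stay blocked until the old source is shut down */
}

static void
start_source_replacement (void)
{
    GstPad *srcpad = gst_element_get_static_pad (source, "src");
    gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
                       src_blocked_cb, NULL, NULL);
    gst_object_unref (srcpad);
}

Note that the probe callback only fires once data is actually flowing, and this only covers the swap itself; resuming at the previous position would still need a seek on top.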

Gstreamer: Pausing/resuming video in RTP streams

I'm constructing a gstreamer pipeline that receives two RTP streams from an networked source:
ILBC Audio stream + corresponding RTCP stream
H263 Video stream + corresponding RTCP stream
Everything is put into one gstreamer pipeline so it will use the RTCP from both streams to synchronize audio/video. So far I've come up with this (using gst-launch for prototyping):
gst-launch -vvv gstrtpbin name=rtpbin
udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-2000" port=40000 ! rtpbin.recv_rtp_sink_0
rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink
udpsrc port=40001 ! rtpbin.recv_rtcp_sink_0
rtpbin.send_rtcp_src_0 ! udpsink port=40002 sync=false async=false
udpsrc caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)PCMU,encoding-params=(string)1,octet-align=(string)1" port=60000 rtpbin.recv_rtp_sink_1
rtpbin. ! rtppcmudepay ! autoaudiosink
udpsrc port=60001 ! rtpbin.recv_rtcp_sink_1
rtpbin.send_rtcp_src_1 ! udpsink port=60002 sync=false async=false
This pipeline works well if the networked source starts out sending both video and audio. If the video stream is paused later on, GStreamer will still play back the audio and will even start playing back the video again when the networked source resumes the video stream.
My problem, however, is that if the networked source starts out with only an audio stream (video might be added later on), the pipeline seems to pause/freeze until the video stream starts as well.
Since video is optional in my application (and can be added/removed at will by the user), is there any way I can hook up, for instance, a videotestsrc that provides some kind of fallback video data to keep the pipeline running when there is no networked video data?
I've tried experimenting with videotestsrc and a thing called videomixer, but I think that mixer still requires both streams to be alive. Any feedback is greatly appreciated!
Here is a simple function for pause/resume by swapping bins. The following example shows the logic for changing the destination bin on the fly; it does not completely stop the pipeline, which I believe is what you are after. Similar logic could be used for source bins: you could remove your network source bin and its related demuxer/decoder bins and add videotestsrc bins instead.
private static void dynamic_bin_replacement(Pipeline pipe, Element src_bin, Element dst_bin_new, Element dst_bin_old) {
    pipe.pause();
    // swap the destination bin while the pipeline is paused
    src_bin.unlink(dst_bin_old);
    pipe.remove(dst_bin_old);
    pipe.add(dst_bin_new);
    dst_bin_new.syncStateWithParent();
    src_bin.link(dst_bin_new);
    // bring the pipeline back up
    pipe.ready();
    pipe.play();
}
The other technique you may want to try is "PADLOCKING" (pad blocking). Please take a look at the following posts:
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-block.txt
and
http://web.archiveorange.com/archive/v/8yxpz7FmOlGqxVYtkPb4
and
Adding and removing audio sources to/from GStreamer pipeline on-the-go
UPDATE
Try the output-selector and input-selector elements, as they seem to be a better alternative. I found them the most reliable and have had immense luck with them. I use fakesink or fakesrc, respectively, as the other end of the selector.
The valve element is another alternative that I found doesn't even need fakesink or fakesrc; it is also extremely reliable.
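For example, an input-selector in front of the video sink can switch between the live RTP branch and a videotestsrc fallback. A sketch (using GStreamer 1.0 names; on 0.10 the request pads are called sink%d, and pipeline, videosink and liveBranch stand for your own elements):

/* Put an input-selector in front of the video sink, with a videotestsrc fallback. */
GstElement *selector = gst_element_factory_make ("input-selector", NULL);
GstElement *fallback = gst_element_factory_make ("videotestsrc", NULL);

gst_bin_add_many (GST_BIN (pipeline), selector, fallback, NULL);
gst_element_link (selector, videosink);

/* One request pad per input. */
GstPad *livePad     = gst_element_get_request_pad (selector, "sink_%u");
GstPad *fallbackPad = gst_element_get_request_pad (selector, "sink_%u");

GstPad *liveSrc     = gst_element_get_static_pad (liveBranch, "src");  /* e.g. the H.263 decoder */
GstPad *fallbackSrc = gst_element_get_static_pad (fallback, "src");
gst_pad_link (liveSrc, livePad);
gst_pad_link (fallbackSrc, fallbackPad);

/* Show the test pattern while no network video is arriving... */
g_object_set (selector, "active-pad", fallbackPad, NULL);
/* ...and switch to the live pad once RTP video shows up. */
g_object_set (selector, "active-pad", livePad, NULL);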
Also, the correct state transition order for a media file source is:
NULL -> READY -> PAUSED -> PLAYING (Upwards)
PLAYING -> PAUSED -> READY -> NULL (Downwards)
The order in my example above should be corrected: ready() should come before pause(). Also, I would tend to think that un-linking should be performed after the null() state and not after pause(). I haven't tried these changes, but theoretically they should work.
See the following link for detailed info
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-states.txt?h=BRANCH-RELEASE-0_10_19