QtGStreamer not producing expected output on streaming from camera

I am trying to use QtGStreamer to stream camera frames and render them onto a QML window. I have a simple GStreamer pipeline which works fine when I use gst-launch-1.0:
gst-launch-1.0 autovideosrc ! videoscale ! video/x-raw, width=480,height=270 ! xvimagesink -e
Now I create a corresponding QtGStreamer pipeline as:
void Streamer::startStreaming()
{
    if (!m_streaming_pipeline) {
        m_streaming_pipeline = QGst::Pipeline::create();
        if (m_streaming_pipeline) {
            QGst::ElementPtr source = QGst::ElementFactory::make("autovideosrc");
            QGst::ElementPtr scale = QGst::ElementFactory::make("videoscale");
            scale->setProperty("caps", QGst::Caps::fromString("video/x-raw, width=480,height=270"));
            if (m_videoSink) {
                m_videoSink->setProperty("sync", false);
                m_streaming_pipeline->add(source, scale, m_videoSink);
                source->link(scale);
                scale->link(m_videoSink);
                QGst::BusPtr bus = m_streaming_pipeline->bus();
                bus->addSignalWatch();
                QGlib::connect(bus, "message", this, &Recorder::onBusMessage);
                m_streaming_pipeline->setState(QGst::StatePlaying);
                qDebug() << "Done";
            }
        }
    }
}
So first off, this is really slow. While the original gst-launch command easily runs at 30 frames per second, this runs at only a couple of frames per second. I also get this output on the console when I set GST_DEBUG=3:
0:00:08.661824920 23980 0x2ac6370 WARN v4l2bufferpool gstv4l2bufferpool.c:540:gst_v4l2_buffer_pool_set_config:<autovideosrc0-actual-src-v4l:pool:src> libv4l2 converter detected, disabling CREATE_BUFS
0:00:08.665945185 23980 0x2ac6370 WARN v4l2bufferpool gstv4l2bufferpool.c:748:gst_v4l2_buffer_pool_start:<autovideosrc0-actual-src-v4l:pool:src> Uncertain or not enough buffers, enabling copy threshold
Another thing I noticed is that in the frames that do get rendered, the colour scheme looks flipped, so it seems that something along the line is also swapping the colour channels.
EDIT
I figured out that I needed to add a capsfilter to get the correct format. So adding something like:
QGst::ElementPtr capsfilter = QGst::ElementFactory::make("capsfilter", "capsfilter");
capsfilter->setProperty("caps", QGst::Caps::fromString("video/x-raw, width=1920, height=1080, format=RGB, framerate=30/1"));
and then adding it via
m_streaming_pipeline->add(source, capsfilter, scale, m_videoSink);
and subsequently linking it solved the problem.
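For completeness, here is a minimal sketch of how the corrected construction might look, assuming the same m_streaming_pipeline and m_videoSink members as in startStreaming() above:
QGst::ElementPtr source     = QGst::ElementFactory::make("autovideosrc");
QGst::ElementPtr capsfilter = QGst::ElementFactory::make("capsfilter", "capsfilter");
QGst::ElementPtr scale      = QGst::ElementFactory::make("videoscale");

// Pin the negotiated format explicitly instead of setting caps on videoscale.
capsfilter->setProperty("caps",
    QGst::Caps::fromString("video/x-raw, width=1920, height=1080, format=RGB, framerate=30/1"));

m_streaming_pipeline->add(source, capsfilter, scale, m_videoSink);
source->link(capsfilter);
capsfilter->link(scale);
scale->link(m_videoSink);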
Although now my question is: how does GStreamer pick a valid format in my original pipeline?

Related

Adding timestamps to HEVC with ffmpeg library, mismatch in timestamps when run with Gstreamer pipeline

We are encoding sample frames using NvEncoder with HEVC. Since the HEVC frames do not have any timestamps, in order to perform seek operations on the video we wrote a remuxer in C++ that creates timestamps for the frames in an orderly fashion and writes the encoded frames into a video container (mp4, mov). The output video in an mp4 container looks fine when played with ffplay, and the timestamps seem correct when checked with ffprobe. However, when we try to play the video in a GStreamer pipeline, the 2nd and 3rd frames seem to have exactly the same timestamp, so when the video is played the 3rd frame is skipped and the 2nd frame is shown twice. We cannot tolerate any frame loss, so we need to solve this problem, which we think is due to an incompatibility between FFmpeg and GStreamer regarding frame timestamps. I can also provide the source code of our remuxer and example outputs if that would help.
I used the following Gstreamer pipeline to play the mp4:
gst-launch-1.0 filesrc location=5_fps.mp4 ! qtdemux name=demux demux.video_0 ! queue ! decodebin ! videoconvert ! videoscale ! autovideosink
The following command also gives the same mismatched frame timestamps:
ffmpeg -i 5_fps.bin -vcodec copy -acodec copy 5_fps.mp4
Many thanks!
Edit: I am adding the part of the remuxer where each frame from the input stream is read and timestamps are added.
int frame_no = -1; // starting with -1 gives the same ffprobe results as command-line ffmpeg container conversion; starting with 0 again causes the same timestamp problem
while (1) {
    AVStream *in_stream, *out_stream;
    _status = av_read_frame(_ifmt_ctx, &_pkt);
    if (_status < 0)
        break;
    in_stream = _ifmt_ctx->streams[_pkt.stream_index];
    if (_pkt.stream_index >= _stream_mapping_size ||
        _stream_mapping[_pkt.stream_index] < 0) {
        av_packet_unref(&_pkt);
        continue;
    }
    double inputFPS = av_q2d(in_stream->r_frame_rate);
    double outputFPS = av_q2d(in_stream->r_frame_rate);
    _pkt.stream_index = _stream_mapping[_pkt.stream_index];
    out_stream = _ofmt_ctx->streams[_pkt.stream_index];
    _pkt.pts = frame_no * in_stream->time_base.den / inputFPS;
    _pkt.dts = _pkt.pts;
    _pkt.duration = in_stream->time_base.den / inputFPS;
    _pkt.pos = -1;
    std::cout << "rescaled pts: " << _pkt.pts << " dts: " << _pkt.dts << " frame no: " << frame_no << std::endl;
    std::cout << "input time_base den: " << in_stream->time_base.den << " output time_base den: " << out_stream->time_base.den << std::endl;
    frame_no++;
    _status = av_interleaved_write_frame(_ofmt_ctx, &_pkt);
    if (_status < 0) {
        cout << "Error muxing packet\n";
        break;
    }
    av_packet_unref(&_pkt);
}
I first tried this method, where each frame's timestamp (pts and dts) is incremented by the packet duration. At first I thought this would not work since B-frames are decoded in a different order, so I first tried videos with no B-frames. However, when I tried videos with B-frames, it worked again; I expected the decoded frames to come out in a different order, but that was not the case. The only issue is that the second and third frames appear to have the same timestamps in GStreamer (not in FFmpeg); other than these two frames the remaining video plays just fine. Overall, I am also confused that B-frames do not cause any frame-order problem.
Example encoded input, example output video if you want to examine the frames. (I don't know if it's okay to share files over google drive, please correct me if there is a better way to share, or not.)
As @AlanBirtles mentioned, assigning timestamps to B-frames in my naive way is not correct at all. I assumed that since the video was playable, the timestamps were somehow corrected by FFmpeg or GStreamer, and I did not relate my main problem to this. However, when I converted the container of a video with no B-frames, the problem of the 3rd frame being lost went away. So I understand that I should either set the timestamps of B-frames correctly or use videos without B-frames. Even though it is not a viable long-term solution, for the time being we will not use B-frames; in the future I will try to reimplement the remuxer so that any video is remuxed correctly.
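For reference, here is a minimal sketch (my assumption of a cleaner variant, not the asker's final code) of generating the timestamps directly in the output stream's time base, which is what av_interleaved_write_frame() expects; as discussed above, setting pts == dts is only valid for streams without B-frames:
extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/rational.h>
#include <libavutil/mathematics.h>
}

// Hypothetical helper: timestamps for the n-th frame of a constant-frame-rate
// stream, expressed in the output (muxer) stream's time base.
static void set_cfr_timestamps(AVPacket *pkt, int64_t frame_no,
                               AVRational framerate, AVRational out_tb)
{
    AVRational frame_dur = av_inv_q(framerate);                 // 1/fps seconds per frame
    pkt->pts      = av_rescale_q(frame_no, frame_dur, out_tb);
    pkt->dts      = pkt->pts;                                   // only valid without B-frames
    pkt->duration = av_rescale_q(1, frame_dur, out_tb);
    pkt->pos      = -1;
}

// Inside the loop above it would be called roughly as:
//   set_cfr_timestamps(&_pkt, frame_no, in_stream->r_frame_rate, out_stream->time_base);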

Gstreamer Appsink not getting Data from the Pipeline

I am designing a pipeline to encode video frames from an OpenCV application (captured from a webcam) to video/x-h264, send them over the network, and decode them on another device of a different type (probably a Raspberry Pi) back to a proper RGB stream for my project.
For this I am supposed to use a hardware-accelerated encoder and decoder.
Since the whole scenario is huge, the current development is being done on an Intel machine using the GStreamer VAAPI plugins (vaapiencode_h264 and vaapidecode). Note also that we must NOT use any of the networking plugins like TCPServer or UDPServer.
For this I have used the below pipeline for my purpose:
On the Encoder End:
appsrc name=applicationSource ! videoconvert ! video/x-raw, format=I420, width=640, height=480,framerate=30/1, pixel-aspect-ratio=1/1,interlace-mode=progressive ! vaapiencode_h264 bitrate=600 tune=high-compression ! h264parse config-interval=1 ! appsink name=applicationSink sync=false
The appsrc part works perfectly well, while the appsink part is having some issues.
The appsink part of this pipeline has been set with the below caps:
"video/x-h264, format=(string){avc,avc3,byte-stream },alignment=(string){au,nal};video/mpeg, mpegversion=(int)2, profile=(string)simple"
The code for data extraction from my appsink is:
bool HWEncoder::grabData()
{
    // initial checks..
    if (!cameraPipeline)
    {
        GST_ERROR("ERROR AS TO NO PIPE FOUND ... Stopping FRAME GRAB HERE !! ");
        return false;
    }
    if (gst_app_sink_is_eos(GST_APP_SINK(applicationSink)))
    {
        GST_WARNING("APP SINK GAVE US AN EOS! BAILING OUT ");
        return false;
    }
    if (sample)
    {
        cout << "sample available ... unrefing it ! " << endl;
        gst_sample_unref(sample);
    }
    sample = gst_app_sink_pull_sample(GST_APP_SINK(applicationSink));
    if (!sample)
    {
        GST_WARNING("No valid sample");
        return false; // no valid sample pulled !
    }
    sink_buffer = gst_sample_get_buffer(sample);
    if (!sink_buffer)
    {
        GST_ERROR("No Valid Buffer ");
        return false;
    }
    return true;
}
After bringing up the pipeline and checking for the buffer filling up in my appsink, I am getting stuck indefinitely at the following line of my code:
sample = gst_app_sink_pull_sample (GST_APP_SINK(applicationSink));
I have the following questions:
1) Are my caps for the appsink correct? If not, how can I determine the right caps for them?
2) Is there something wrong in my pipeline above?
How can I fix this issue with the appsink?
Any kind of help would be useful!
Thanks !!
Just a guess (I had similar problems): the problem with having appsink and appsrc in the same pipeline may be that when you fill/empty one of them, it blocks the other (more on that below).
appsink and appsrc will block when they are full/empty - this is normal, desired behaviour. There is the drop option for appsink, and for appsrc there is the block option - but using these may be just a workaround and you will get glitches in your stream. The proper solution is to handle the synchronisation between appsrc and appsink in a better way.
You can react to the appsrc signals enough-data and need-data - this is our way (a sketch of the signal hookup follows the snippet below). We also fiddled with the properties of appsrc: is-live, do-timestamp and buffer size (this may or may not help you):
g_object_set(src->appsrc,
             "stream-type", GST_APP_STREAM_TYPE_STREAM,
             "format", GST_FORMAT_TIME,
             "do-timestamp", TRUE,
             "is-live", TRUE,
             "block", TRUE,
             NULL);
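As an illustration (not the original answerer's code), here is a minimal sketch of hooking up those two signals, assuming a small context struct with a feeding flag that the pushing loop checks before pushing the next buffer into appsrc:
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

// Hypothetical context; only the "feeding" flag matters for this sketch.
struct FeedCtx {
    gboolean feeding;
};

// Fired when appsrc's internal queue runs low: resume pushing buffers.
static void on_need_data(GstAppSrc *appsrc, guint length, gpointer user_data)
{
    static_cast<FeedCtx *>(user_data)->feeding = TRUE;
}

// Fired when appsrc's internal queue is full: stop pushing for a while.
static void on_enough_data(GstAppSrc *appsrc, gpointer user_data)
{
    static_cast<FeedCtx *>(user_data)->feeding = FALSE;
}

static void hook_feed_signals(GstElement *appsrc, FeedCtx *ctx)
{
    g_signal_connect(appsrc, "need-data", G_CALLBACK(on_need_data), ctx);
    g_signal_connect(appsrc, "enough-data", G_CALLBACK(on_enough_data), ctx);
}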
Why do they block each other?
Because (I guess) you process appsink and appsrc at the same time in the main application thread. When one of appsink/appsrc blocks the thread, there is no one left to handle the processing for the other one. So when appsink is blocked because it does not have any data, there is no one that can feed appsrc with new data - thus the endless deadlock.
We also implemented a non-blocking version of the appsink pull_sample method, but it was just a workaround and resulted in more problems than solutions.
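For reference only, newer GStreamer releases expose such a non-blocking pull directly; a minimal sketch, assuming GStreamer >= 1.10 and the applicationSink element from the question (it does not by itself fix the appsrc/appsink scheduling described above):
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

// Returns NULL after the timeout (or at EOS) instead of blocking forever
// like gst_app_sink_pull_sample(); the caller unrefs a non-NULL sample.
static GstSample *try_grab_sample(GstElement *appsink)
{
    GstSample *sample = gst_app_sink_try_pull_sample(GST_APP_SINK(appsink),
                                                     100 * GST_MSECOND);
    if (!sample && gst_app_sink_is_eos(GST_APP_SINK(appsink)))
        GST_WARNING("appsink reached EOS");
    return sample;
}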
If you want to debug what is happening, you can add a GST_DEBUG entry for appsrc/appsink (I do not remember what they were), you can add callbacks on the mentioned enough-data and need-data signals, or you may add queues and enable GST_DEBUG=queue_dataflow:5 to see which queue fills up first, etc. This is always helpful when debugging a "data deadlock".

How to check the type of a newly added pad?

My pipeline scheme (dynamic linking):
videotestsrc OR audiotestsrc ! decodebin ! queue ! autovideosink OR autoaudiosink
I am trying to use this advice to check which type of data I got (video/audio), but if I use decodebin as a demuxer I just get "src_0" instead of "audio" or "video". How can I check the pad type so that I can link the right element for playback? Maybe I can use one universal element for both audio and video playback, like playsink (but it does not work for video)?
You can get the caps of the newly added pad and check if it contains audio or video caps (or something else).
Try with:
gst_pad_get_current_caps (pad);
or:
gst_pad_get_allowed_caps (pad);
If you are using GStreamer 0.10 (which is 3+ years obsolete and unmaintained), you have:
gst_pad_get_caps_reffed (pad);
Then just check the returned caps to see whether it is audio or video, by getting the structure from the caps and checking if its name starts with video or audio.
/* There might be multiple structures depending on how you do it,
* but usually checking one in this case is enough */
structure = gst_caps_get_structure (caps, 0);
name = gst_structure_get_name (structure);
if (g_str_has_prefix (name, "video/")) {
...
} else if (g_str_has_prefix (name, "audio/")) {
...
}
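Putting it together, here is a minimal sketch of a pad-added handler; the AppData struct and the queue fields are illustrative assumptions, not something from the question:
#include <gst/gst.h>

struct AppData {
    GstElement *video_queue;   // queue in front of autovideosink
    GstElement *audio_queue;   // queue in front of autoaudiosink
};

static void on_pad_added(GstElement *decodebin, GstPad *pad, gpointer user_data)
{
    AppData *app = static_cast<AppData *>(user_data);

    GstCaps *caps = gst_pad_get_current_caps(pad);
    if (!caps)
        caps = gst_pad_query_caps(pad, NULL);   // fall back to the possible caps

    const GstStructure *structure = gst_caps_get_structure(caps, 0);
    const gchar *name = gst_structure_get_name(structure);

    GstPad *sinkpad = NULL;
    if (g_str_has_prefix(name, "video/"))
        sinkpad = gst_element_get_static_pad(app->video_queue, "sink");
    else if (g_str_has_prefix(name, "audio/"))
        sinkpad = gst_element_get_static_pad(app->audio_queue, "sink");

    if (sinkpad) {
        if (!gst_pad_is_linked(sinkpad))
            gst_pad_link(pad, sinkpad);
        gst_object_unref(sinkpad);
    }
    gst_caps_unref(caps);
}

// Hook-up: g_signal_connect(decodebin, "pad-added", G_CALLBACK(on_pad_added), &app);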

dynamically replacing elements in a playing gstreamer pipeline

I'm looking for the correct technique, if one exists, for dynamically replacing an element in a running GStreamer pipeline. I have a GStreamer-based C++ app, and the pipeline it creates looks like this (using gst-launch syntax):
souphttpsrc location="http://localhost/local.ts" ! mpegtsdemux name=d ! queue ! mpeg2dec ! xvimagesink d. ! queue ! a52dec ! pulsesink
During the middle of playback (i.e. GST_STATE_PLAYING is the pipeline state and the user is happily watching video), I need to remove souphttpsrc from the pipeline and create a new souphttpsrc, or even a new neonhttpsource, and then immediately add that back into the pipeline and continue playback of the same uri source stream at the same time position where playback was before we performed this operation. The user might see a small delay and that is fine.
We've barely figured out how to remove and replace the source, and we need more understanding. Here's our best attempt thus far:
gst_element_unlink(source, demuxer);
gst_element_set_state(source, GST_STATE_NULL);
gst_bin_remove(GST_BIN(pipeline), source);
source = gst_element_factory_make("souphttpsrc", "src");
g_object_set(G_OBJECT(source), "location", url, NULL);
gst_bin_add(GST_BIN(pipeline), source);
gst_element_link(source, demuxer);
gst_element_sync_state_with_parent(source);
This doesn't work perfectly: the new source plays back from the beginning, and the rest of the pipeline (I assume) waits for buffers with the right timestamps, because playback only picks back up after several seconds. I tried seeking the source in multiple ways but nothing has worked.
I need to know the correct way to do this. It would be nice to know a general technique, if one exists, as well, in case we wanted to dynamically replace the decoder or some other element.
thanks
I think this may be what you are looking for:
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-block.txt
(starting at line 115)
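In GStreamer 1.x the blocking described in that document is exposed through blocking pad probes (gst_pad_add_probe). Below is a rough sketch of how it could be applied to the pipeline above; all struct and field names are assumptions, and the actual swap is deferred to the main loop so it does not run inside the source's streaming thread:
#include <gst/gst.h>

// Everything the swap needs; purely illustrative.
struct SwapCtx {
    GstElement *pipeline;
    GstElement *source;     // current souphttpsrc, replaced in do_swap()
    GstElement *demuxer;
    const char *url;
    GstPad     *blocked_pad;
};

// Runs in the application's main context, not in a streaming thread.
static gboolean do_swap(gpointer user_data)
{
    SwapCtx *ctx = static_cast<SwapCtx *>(user_data);

    gst_element_unlink(ctx->source, ctx->demuxer);
    gst_element_set_state(ctx->source, GST_STATE_NULL);
    gst_bin_remove(GST_BIN(ctx->pipeline), ctx->source);   // old pad and probe go away

    ctx->source = gst_element_factory_make("souphttpsrc", "src");
    g_object_set(G_OBJECT(ctx->source), "location", ctx->url, NULL);
    gst_bin_add(GST_BIN(ctx->pipeline), ctx->source);
    gst_element_link(ctx->source, ctx->demuxer);
    gst_element_sync_state_with_parent(ctx->source);

    gst_object_unref(ctx->blocked_pad);
    return G_SOURCE_REMOVE;
}

// Called from the streaming thread once dataflow on the pad is blocked.
static GstPadProbeReturn on_blocked(GstPad *pad, GstPadProbeInfo *info,
                                    gpointer user_data)
{
    g_main_context_invoke(NULL, do_swap, user_data);
    return GST_PAD_PROBE_OK;   // stay blocked until the old source is removed
}

// Install the blocking probe on the old source's src pad.
static void replace_source_later(SwapCtx *ctx)
{
    ctx->blocked_pad = gst_element_get_static_pad(ctx->source, "src");
    gst_pad_add_probe(ctx->blocked_pad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
                      on_blocked, ctx, NULL);
}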

Gstreamer: Pausing/resuming video in RTP streams

I'm constructing a GStreamer pipeline that receives two RTP streams from a networked source:
ILBC Audio stream + corresponding RTCP stream
H263 Video stream + corresponding RTCP stream
Everything is put into one gstreamer pipeline so it will use the RTCP from both streams to synchronize audio/video. So far I've come up with this (using gst-launch for prototyping):
gst-launch -vvv gstrtpbin name=rtpbin
udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-2000" port=40000 ! rtpbin.recv_rtp_sink_0
rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink
udpsrc port=40001 ! rtpbin.recv_rtcp_sink_0
rtpbin.send_rtcp_src_0 ! udpsink port=40002 sync=false async=false
udpsrc caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)PCMU,encoding-params=(string)1,octet-align=(string)1" port=60000 rtpbin.recv_rtp_sink_1
rtpbin. ! rtppcmudepay ! autoaudiosink
udpsrc port=60001 ! rtpbin.recv_rtcp_sink_1
rtpbin.send_rtcp_src_1 ! udpsink port=60002 sync=false async=false
This pipeline works well if the networked source starts out sending both video and audio. If the video stream is paused later on, GStreamer will still play back audio and will even start playing the video again when the networked source resumes the video stream.
My problem, however, is that if the networked source starts out with only an audio stream (video might be added later on), the pipeline seems to pause/freeze until the video stream starts as well.
Since video is optional (and can be added/removed at will by the user) in my application, is there any way I can hook up, for instance, a 'videotestsrc' that provides some kind of fallback video data to keep the pipeline running when there is no networked video data?
I've tried experimenting with 'videotestsrc' and a thing called 'videomixer', but I think that mixer still requires both streams to be alive. Any feedback is greatly appreciated!
I present a simple function for pause/resume by changing bins. In the following example I provide the logic to change the destination bin on the fly, dynamically. This does not completely stop the pipeline, which I believe is what you are after. Similar logic could be used for source bins: there you could remove your network source bin and the related decoder/demuxer bins and add videotestsrc bins.
private static void dynamic_bin_replacement(Pipeline pipe, Element src_bin, Element dst_bin_new, Element dst_bin_old) {
    pipe.pause();
    src_bin.unlink(dst_bin_old);
    pipe.remove(dst_bin_old);
    pipe.add(dst_bin_new);
    dst_bin_new.syncStateWithParent();
    src_bin.link(dst_bin_new);
    pipe.ready();
    pipe.play();
}
The other logic you may want to try is pad blocking ("PADLOCKING"). Please take a look at the following posts:
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-block.txt
and
http://web.archiveorange.com/archive/v/8yxpz7FmOlGqxVYtkPb4
and
Adding and removing audio sources to/from GStreamer pipeline on-the-go
UPDATE
Try the output-selector and input-selector elements, as they seem to be a better alternative. I found them most reliable and have had immense luck with them. I use fakesink or fakesrc, respectively, as the other end of the selector.
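As an illustration (not from the original answer), switching an input-selector between a live branch and a fallback branch comes down to changing its active-pad property; the selector and pad variables below are assumptions (the request pads would have been saved when the two branches were linked):
#include <gst/gst.h>

// fallback_pad / live_pad are the selector's request pads that the
// videotestsrc branch and the RTP branch were linked to when the pipeline
// was built (e.g. via gst_element_get_request_pad(selector, "sink_%u")).
static void show_fallback(GstElement *selector, GstPad *fallback_pad)
{
    g_object_set(selector, "active-pad", fallback_pad, NULL);
}

static void show_live(GstElement *selector, GstPad *live_pad)
{
    g_object_set(selector, "active-pad", live_pad, NULL);
}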
The valve element is another alternative, which I found doesn't even need fakesink or fakesrc. It is also extremely reliable.
Also, the correct state transition order for a media file source is:
NULL -> READY -> PAUSED -> PLAYING (upwards)
PLAYING -> PAUSED -> READY -> NULL (downwards)
My order in the above example should be corrected so that ready() comes before pause(). Also, I would tend to think the un-linking should be performed after the null() state and not after pause(). I haven't tried these changes, but theoretically they should work.
See the following link for detailed info
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-states.txt?h=BRANCH-RELEASE-0_10_19