GStreamer: pass two streams through a single element - C++

Is it possible to pass two streams through a single element? I have two streams:

Stream 1: data needs to be extracted from it; afterwards the stream can be destroyed in the element or passed through to a sink.
Stream 2: a video stream that will be edited based on the data extracted from stream 1, then passed through to autovideosink.

GStreamer Core Library version 1.16.2, element written in C.
chain functions:
static GstFlowReturn
gst_test2_chain (GstPad * pad, GstObject * parent, GstBuffer * buf)
{
  Gsttest2 *filter;

  filter = GST_TEST2 (parent);

  /* just push out the incoming buffer without touching it */
  return gst_pad_push (filter->srcpad, buf);
}
// second pad's chain function
static GstFlowReturn
gst_test2_chain2 (GstPad * pad, GstObject * parent, GstBuffer * buf)
{
  Gsttest2 *filter;

  g_print ("\ninside chain2\n");
  filter = GST_TEST2 (parent);

  return gst_pad_push (filter->srcpad2, buf);
}
//Pad templates:
static GstStaticPadTemplate src_factory = GST_STATIC_PAD_TEMPLATE ("src",
    GST_PAD_SRC,
    GST_PAD_ALWAYS,
    GST_STATIC_CAPS ("video/x-raw")
    );
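The second stream needs its own pad templates and pads in the element as well. A minimal sketch of what that could look like (the "sink2"/"src2" pad names and field names are assumptions, and the caps are kept as video/x-raw to match the current test setup):

static GstStaticPadTemplate src2_factory = GST_STATIC_PAD_TEMPLATE ("src2",
    GST_PAD_SRC,
    GST_PAD_ALWAYS,
    GST_STATIC_CAPS ("video/x-raw")
    );

static GstStaticPadTemplate sink2_factory = GST_STATIC_PAD_TEMPLATE ("sink2",
    GST_PAD_SINK,
    GST_PAD_ALWAYS,
    GST_STATIC_CAPS ("video/x-raw")
    );

/* in the instance init function: create the second pads, attach the second
 * chain function to the second sink pad, and add both pads to the element */
filter->sinkpad2 = gst_pad_new_from_static_template (&sink2_factory, "sink2");
gst_pad_set_chain_function (filter->sinkpad2,
    GST_DEBUG_FUNCPTR (gst_test2_chain2));
gst_element_add_pad (GST_ELEMENT (filter), filter->sinkpad2);

filter->srcpad2 = gst_pad_new_from_static_template (&src2_factory, "src2");
gst_element_add_pad (GST_ELEMENT (filter), filter->srcpad2);

(Both templates also have to be registered in the class init, e.g. with gst_element_class_add_static_pad_template.)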
Extracting the data from one stream and editing the other works fine. I am currently using two video/x-raw src and sink pads for testing, but the stream used for extracting the data would eventually be meta/x-klv. Using a single sink and source pad works fine with videotestsrc, but trying to use both pairs of pads results in pipeline errors (unable to link, or syntax errors). Does GStreamer support sending two streams through a single element? Would it be simpler to destroy the buffers of the no-longer-needed stream inside the element?
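For what it's worth, GStreamer elements can expose several sink and src pads (muxers, demuxers and aggregators all do), but gst-launch then has to be told explicitly which pad to link, otherwise it fails with "could not link" or syntax errors. A sketch of the test pipeline, assuming the element is registered as test2 and its pads are named sink/sink2/src/src2 (those names are assumptions):

gst-launch-1.0 test2 name=t \
    videotestsrc ! t.sink \
    videotestsrc pattern=ball ! t.sink2 \
    t.src ! autovideosink \
    t.src2 ! fakesink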

Related

How can I get frames using GStreamer?

I'm a beginner at using GStreamer to handle some input videos. I have already built the pipeline using GStreamer to transcode the videos, but the last part I cannot do is how to get those batches of frames and apply some custom image processing techniques to them.
Input Videos -----> GStreamer Pipeline -----> Task: Apply some Image Processing Techniques
I've been searching for a solution to this problem on the Internet but cannot find one, and the more I search, the more confused I get.
appsink is the right element for you. Enable its "emit-signals" property and listen for the "new-sample" signal; then you can get access to the buffer.
The full documentation is here:
https://gstreamer.freedesktop.org/documentation/tutorials/basic/short-cutting-the-pipeline.html?gi-language=c
You have to create the appsink element, enable "emit-signals", then register a "new-sample" callback like this:
g_signal_connect (data.app_sink, "new-sample", G_CALLBACK (new_sample), &data);
static GstFlowReturn new_sample (GstElement *sink, CustomData *data) {
  GstSample *sample;

  /* Retrieve the buffer */
  g_signal_emit_by_name (sink, "pull-sample", &sample);
  if (sample) {
    /* The only thing we do in this example is print a * to indicate a received buffer */
    g_print ("*");
    gst_sample_unref (sample);
    return GST_FLOW_OK;
  }
  return GST_FLOW_ERROR;
}
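For completeness, a sketch of creating the appsink and enabling signal emission before connecting the callback (the data struct holding the appsink pointer is an assumption taken from the tutorial):

data.app_sink = gst_element_factory_make ("appsink", "app_sink");
/* "new-sample" is only emitted when emit-signals is TRUE */
g_object_set (data.app_sink, "emit-signals", TRUE, NULL);
g_signal_connect (data.app_sink, "new-sample", G_CALLBACK (new_sample), &data);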
Now you can retrieve the buffer from the sample (with gst_sample_get_buffer) instead of calling g_print:
https://gstreamer.freedesktop.org/documentation/gstreamer/gstsample.html?gi-language=c
Then read the data inside the buffer:
GstBuffer *buf = gst_sample_get_buffer (sample);
GstMapInfo info;

if (gst_buffer_map (buf, &info, GST_MAP_READ)) {
  /* info.data points to the buffer content, info.size is its length */
  gst_buffer_unmap (buf, &info);
}
/* the buffer is owned by the sample: unref the sample, not the buffer */
Best regards.

How to use splitmuxsink in a dynamic pipeline

What is the correct way of using splitmuxsink in a dynamic pipeline?
Previously I used filesink to record (no problems whatsoever), but there is a requirement to save the file in segments, so I tried to use splitmuxsink in a dynamic pipeline (recording starts and stops asynchronously). In doing so I have faced two problems.

1. When I try to stop the recording, I block the recording queue with an idle pad probe and launch a callback that unlinks the recording branch (send EOS, set the elements in the recording bin to NULL, then remove the bin). I set a downstream event probe to be notified that the EOS has reached the splitmuxsink sink pad before I do step 2 (set the elements to NULL).
However, the end result is that I still get an empty last file (0 bytes). It seems the pipeline is not yet closed or has some other problem. As a workaround I split the video immediately when the recording stops (though I lose a few frames).
How should one stop a dynamic branch?
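For reference, the stop sequence described above might look roughly like the following sketch (the RECORD struct and its tee_src_pad field are assumptions based on the code below; the point being sketched is that splitmuxsink has to receive and handle the EOS before the bin is set to NULL):

static GstPadProbeReturn
stop_record_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  RECORD *rec = user_data;
  GstPad *sinkpad;

  /* the pad is idle now: unlink the recording bin from the tee */
  gst_pad_unlink (rec->tee_src_pad, rec->ghost_pad);

  /* push EOS into the bin so splitmuxsink can finalize the current file */
  sinkpad = gst_element_get_static_pad (rec->queue, "sink");
  gst_pad_send_event (sinkpad, gst_event_new_eos ());
  gst_object_unref (sinkpad);

  /* wait for the EOS to reach splitmuxsink (bus message or pad probe) before
   * setting the recording bin to GST_STATE_NULL and removing it */
  return GST_PAD_PROBE_REMOVE;
}

/* installed on the tee's request pad feeding the branch, e.g.:
 *   gst_pad_add_probe (records.tee_src_pad, GST_PAD_PROBE_TYPE_IDLE,
 *       stop_record_cb, &records, NULL);
 */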
2. When I create the recording bin at the moment recording starts (utilizing the pad-added signal to connect the recording bin once the tee pad is created): previously I created the recording bin during normal start-up (not inside the GLib loop I created) and that worked, but with the present approach splitmuxsink's internal filesink ends up in a locked state.
How should I work around this? What causes the locked state?
Here is my code
/// create record bin when the tee's request pad appears
static void
pad_added (GstElement * self,
    GstPad * new_pad,
    gpointer user_data)
{
  char *pad_name = gst_pad_get_name (new_pad);

  /* tee request pads are named "src_%u" (src_0, src_1, ...) */
  if (g_str_has_prefix (pad_name, "src_")) {
    //RECORD records;
    records.recording = gst_bin_new ("recording");
    records.queue = gst_element_factory_make ("queue", "queue");
    records.enc = gst_element_factory_make ("vpuenc_h264", "enc");
    records.parser = gst_element_factory_make ("h264parse", "parser");
    records.sink = gst_element_factory_make ("splitmuxsink", "sink");

    // Add the elements to the recording bin
    gst_bin_add_many (GST_BIN (records.recording),
        records.queue,
        records.enc,
        records.parser,
        records.sink, NULL);

    // link up the recording elements
    gst_element_link_many (records.queue,
        records.enc,
        records.parser,
        records.sink, NULL);

    g_object_set (G_OBJECT (records.sink),
        //"location","video_%d.mp4",
        "max-size-time", (guint64) 10L * GST_SECOND,
        "async-handling", TRUE,
        "async-finalize", TRUE,
        NULL);

    records.queue_sink_pad = gst_element_get_static_pad (records.queue, "sink");
    records.ghost_pad = gst_ghost_pad_new ("sink", records.queue_sink_pad);
    gst_pad_set_active (records.ghost_pad, TRUE);
    gst_element_add_pad (GST_ELEMENT (records.recording), records.ghost_pad);

    g_signal_connect (records.sink, "format-location",
        (GCallback) format_location_callback,
        &records);
  }
  g_free (pad_name);
}
gboolean cmd_loop()
{
  // other cmd not shown here
  if(RECORD)
  {
    // request a tee src pad; this step will trigger the pad_added function
    tee_src_pad = gst_element_get_request_pad (tee, "src_%u");
    // ....other function
  }
  return TRUE; /* keep the timeout source firing */
}
int main()
{
  // add the pad-added signal response
  g_signal_connect(tee, "pad-added", G_CALLBACK(pad_added), NULL);

  // use to construct the loop (cycle every 1s)
  GSource* source = g_timeout_source_new(1000);
  // set function to watch for command
  g_source_set_callback(source,
      (GSourceFunc)cmd_loop,
      NULL,
      NULL);
  // the source must be attached to a main context before it starts firing
  g_source_attach(source, NULL);
}

How to play raw char* buffer with Gstreamer?

I have a char* buffer that I read from a video.mp4 file. The buffer has size 4096.
I tried to create a GstBuffer from the char* buffer:
GstBuffer* tmpBuf = gst_buffer_new_wrapped(data, size);
dataBuffer = gst_buffer_copy(tmpBuf);
Then I push this buffer to the appsrc
GstElement* source = gst_bin_get_by_name (GST_BIN (consumer), "source");
gst_app_src_push_buffer (GST_APP_SRC (source), dataBuffer);
gst_object_unref (source);
The consumer pipeline was created in the following way:
gchar* videoConsumerString = g_strdup_printf ("appsrc max-buffers=5 drop=false name=source ! decodebin ! xvimagesink");
consumer = gst_parse_launch (videoConsumerString, NULL);
gst_element_set_state (consumer, GST_STATE_NULL);
g_free (videoConsumerString);
After creating the pipeline I set its state to GST_STATE_NULL.
When I start playing I set its state to GST_STATE_PLAYING.
But in the output I get this error:
ERROR from element mpegvparse0: No valid frames found before end of stream
I tried changing the size of the char* buffer and using different elements in the pipeline (e.g. ffmpegcolorspace, videoconvert, some others), but that did not resolve the issue.
If I run with GST_DEBUG=3, I get a lot of warnings:
0:00:00.064480642 4059 0x12c66d0 WARN codecparsers_mpegvideo gstmpegvideoparser.c:887:gst_mpeg_video_packet_parse_picture_header: Unsupported picture type : 0
I use GStreamer 1.0.
Has anybody faced such a problem?
P.S. I can't read the data from the file with GStreamer directly. I can only read buffers from the file with fread and then try to play them.
Maybe I have to set some specific fixed size for the read buffer?
I solved this problem.
Unexpectedly for me, it was in the creation of the GstBuffer.
The correct way to create such a buffer from data (char*) with a known size is:
GstBuffer* buffer = gst_buffer_new_allocate(NULL, size, NULL);
gst_buffer_fill(buffer, 0, data, size);
Thank you for your help!
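Putting it together, a sketch of a full read-and-push loop (the helper name, file path handling and chunk size are assumptions for illustration):

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>
#include <stdio.h>

/* read fixed-size chunks from a file and push them into appsrc */
static void
push_file_to_appsrc (GstElement * appsrc, const char *path)
{
  FILE *f = fopen (path, "rb");
  char chunk[4096];
  size_t n;

  if (!f)
    return;

  while ((n = fread (chunk, 1, sizeof (chunk), f)) > 0) {
    GstBuffer *buffer = gst_buffer_new_allocate (NULL, n, NULL);
    gst_buffer_fill (buffer, 0, chunk, n);
    /* appsrc takes ownership of the pushed buffer */
    gst_app_src_push_buffer (GST_APP_SRC (appsrc), buffer);
  }
  gst_app_src_end_of_stream (GST_APP_SRC (appsrc));
  fclose (f);
}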

streaming H.264 over RTP with libavformat

I've been trying over the past week to implement H.264 streaming over RTP, using x264 as an encoder and libavformat to pack and send the stream. Problem is, as far as I can tell it's not working correctly.
Right now I'm just encoding random data (x264_picture_alloc) and extracting NAL frames from libx264. This is fairly simple:
x264_picture_t pic_out;
x264_nal_t* nals;
int num_nals;
int frame_size = x264_encoder_encode(this->encoder, &nals, &num_nals, this->pic_in, &pic_out);
if (frame_size <= 0)
{
    return frame_size;
}

// push NALs into the queue
for (int i = 0; i < num_nals; i++)
{
    // create a NAL storage unit
    NAL nal;
    nal.size = nals[i].i_payload;
    nal.payload = new uint8_t[nal.size];
    memcpy(nal.payload, nals[i].p_payload, nal.size);

    // push the storage into the NAL queue
    {
        // lock and push the NAL to the queue
        boost::mutex::scoped_lock lock(this->nal_lock);
        this->nal_queue.push(nal);
    }
}
nal_queue is used for safely passing frames over to a Streamer class which will then send the frames out. Right now it's not threaded, as I'm just testing to try to get this to work. Before encoding individual frames, I've made sure to initialize the encoder.
But I don't believe x264 is the issue, as I can see frame data in the NALs it returns back.
Streaming the data is accomplished with libavformat, which is first initialized in a Streamer class:
Streamer::Streamer(Encoder* encoder, string rtp_address, int rtp_port, int width, int height, int fps, int bitrate)
{
    this->encoder = encoder;

    // initialize the AV context
    this->ctx = avformat_alloc_context();
    if (!this->ctx)
    {
        throw runtime_error("Couldn't initialize AVFormat output context");
    }

    // get the output format
    this->fmt = av_guess_format("rtp", NULL, NULL);
    if (!this->fmt)
    {
        throw runtime_error("Unsuitable output format");
    }
    this->ctx->oformat = this->fmt;

    // try to open the RTP stream
    snprintf(this->ctx->filename, sizeof(this->ctx->filename), "rtp://%s:%d", rtp_address.c_str(), rtp_port);
    if (url_fopen(&(this->ctx->pb), this->ctx->filename, URL_WRONLY) < 0)
    {
        throw runtime_error("Couldn't open RTP output stream");
    }

    // add an H.264 stream
    this->stream = av_new_stream(this->ctx, 1);
    if (!this->stream)
    {
        throw runtime_error("Couldn't allocate H.264 stream");
    }

    // initialize codec
    AVCodecContext* c = this->stream->codec;
    c->codec_id = CODEC_ID_H264;
    c->codec_type = AVMEDIA_TYPE_VIDEO;
    c->bit_rate = bitrate;
    c->width = width;
    c->height = height;
    c->time_base.den = fps;
    c->time_base.num = 1;

    // write the header
    av_write_header(this->ctx);
}
This is where things seem to go wrong. av_write_header above seems to do absolutely nothing; I've used wireshark to verify this. For reference, I use Streamer streamer(&enc, "10.89.6.3", 49990, 800, 600, 30, 40000); to initialize the Streamer instance, with enc being a reference to an Encoder object used to handle x264 previously.
Now when I want to stream out a NAL, I use this:
// grab a NAL
NAL nal = this->encoder->nal_pop();
cout << "NAL popped with size " << nal.size << endl;
// initialize a packet
AVPacket p;
av_init_packet(&p);
p.data = nal.payload;
p.size = nal.size;
p.stream_index = this->stream->index;
// send it out
av_write_frame(this->ctx, &p);
At this point, I can see RTP data appearing over the network, and it looks like the frames I've been sending, even including a little copyright blob from x264. But, no player I've used has been able to make any sense of the data. VLC quits wanting an SDP description, which apparently isn't required.
I then tried to play it through gst-launch:
gst-launch udpsrc port=49990 ! rtph264depay ! decodebin ! xvimagesink
This will sit waiting for UDP data, but when it is received, I get:
ERROR: element /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: No RTP
format was negotiated. Additional debug info:
gstbasertpdepayload.c(372): gst_base_rtp_depayload_chain ():
/GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: Input buffers
need to have RTP caps set on them. This is usually achieved by setting
the 'caps' property of the upstream source element (often udpsrc or
appsrc), or by putting a capsfilter element before the depayloader and
setting the 'caps' property on that. Also see
http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/gst/rtp/README
As I'm not using GStreamer to do the streaming itself, I'm not quite sure what it means by RTP caps. But it makes me wonder whether I'm sending enough information over RTP to describe the stream. I'm pretty new to video and I feel like there's some key thing I'm missing here. Any hints?
H.264 is an encoding standard. It specifies how video data is compressed and stored in a format that can be decompressed into a video stream at a later point.
RTP is a transmission protocol. It specifies the format and order of packets that can carry audio/video data that was encoded by an arbitrary encoder.
GStreamer expects to receive data that conforms to the RTP protocol. Is your expectation that libavformat will produce RTP packets immediately readable by GStreamer warranted? Maybe GStreamer expects an additional stream description that would enable it to accept and decode the streamed packets using the proper decoder; maybe it requires an additional RTSP exchange, or the SDP stream descriptor file?
The error message states pretty clearly that an RTP format has not been negotiated. caps is short for capabilities: the receiver needs to know the transmitter's capabilities to set up the receiving/decoding machinery correctly.
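As a quick test on the GStreamer side you can supply the missing caps to udpsrc by hand; a sketch (the payload number and clock-rate are assumptions and have to match what the sender actually emits):

gst-launch udpsrc port=49990 \
    caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" \
    ! rtph264depay ! decodebin ! xvimagesink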
I strongly suggest trying at least to create an SDP file for your RTP stream. libavformat should be able to do it for you.
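A sketch of dumping the SDP from libavformat once the stream is set up (depending on the libavformat version the call is av_sdp_create or the older avf_sdp_create; where exactly it would sit in your Streamer is an assumption):

// after av_write_header(this->ctx): ctx is the AVFormatContext built above
char sdp[2048];
AVFormatContext* contexts[1] = { ctx };
if (av_sdp_create(contexts, 1, sdp, sizeof(sdp)) == 0)
{
    // write the text to e.g. stream.sdp and open that file in VLC,
    // or use it to derive the caps for GStreamer's udpsrc
    printf("%s\n", sdp);
}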

process video stream from memory buffer

I need to parse a video stream (MPEG-TS) from a proprietary network protocol (which I already know how to do), and then I would like to use OpenCV to process the video stream into frames. I know how to use cv::VideoCapture from a file or from a standard URL, but I would like to set up OpenCV to read from a buffer (or buffers) in memory where I can store the video stream data until it is needed. Is there a way to set up a callback method (or any other interface) so that I can still use the cv::VideoCapture object? Is there a better way to accomplish processing the video without writing it out to a file and then re-reading it? I would also entertain using FFMPEG directly if that is a better choice. I think I can convert AVFrames to Mat if needed.
I had a similar need recently. I was looking for a way in OpenCV to play a video that was already in memory, but without ever having to write the video file to disk. I found out that the FFMPEG interface already supports this through av_open_input_stream. There is just a little more prep work required compared to the av_open_input_file call used in OpenCV to open a file.
Between the following two websites I was able to piece together a working solution using the ffmpeg calls. Please refer to the information on these websites for more details:
http://ffmpeg.arrozcru.org/forum/viewtopic.php?f=8&t=1170
http://cdry.wordpress.com/2009/09/09/using-custom-io-callbacks-with-ffmpeg/
To get it working in OpenCV, I ended up adding a new function to the CvCapture_FFMPEG class:
virtual bool openBuffer( unsigned char* pBuffer, unsigned int bufLen );
I provided access to it through a new API call in the highgui DLL, similar to cvCreateFileCapture. The new openBuffer function is basically the same as the open( const char* _filename ) function with the following difference:
err = av_open_input_file(&ic, _filename, NULL, 0, NULL);
is replaced by:
ic = avformat_alloc_context();
ic->pb = avio_alloc_context(pBuffer, bufLen, 0, pBuffer, read_buffer, NULL, NULL);
if(!ic->pb) {
    // handle error
}

// Need to probe buffer for input format unless you already know it
AVProbeData probe_data;
probe_data.buf_size = (bufLen < 4096) ? bufLen : 4096;
probe_data.filename = "stream";
probe_data.buf = (unsigned char *) malloc(probe_data.buf_size);
memcpy(probe_data.buf, pBuffer, probe_data.buf_size);

AVInputFormat *pAVInputFormat = av_probe_input_format(&probe_data, 1);
if(!pAVInputFormat)
    pAVInputFormat = av_probe_input_format(&probe_data, 0);

// cleanup
free(probe_data.buf);
probe_data.buf = NULL;

if(!pAVInputFormat) {
    // handle error
}
pAVInputFormat->flags |= AVFMT_NOFILE;

err = av_open_input_stream(&ic, ic->pb, "stream", pAVInputFormat, NULL);
Also, make sure to call av_close_input_stream in the CvCapture_FFMPEG::close() function instead of av_close_input_file in this situation.
I defined the read_buffer callback function that is passed to avio_alloc_context as:
static int read_buffer(void *opaque, uint8_t *buf, int buf_size)
{
    // This function must fill the buffer with data and return the number of bytes copied.
    // opaque is the pointer to private_data in the call to avio_alloc_context (4th param).
    // Note: as written it always copies from the start of the memory buffer and does not
    // track a read offset, so it only works when the whole video is already in memory.
    memcpy(buf, opaque, buf_size);
    return buf_size;
}
This solution assumes the entire video is contained in a memory buffer and would probably have to be tweaked to work with streaming data.
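For streaming data, one possible tweak (a sketch; the BufferData struct is an assumption, not part of OpenCV or FFmpeg) is to pass a small state struct as the opaque pointer and track the read position:

// hypothetical state passed as the opaque pointer to avio_alloc_context
typedef struct {
    unsigned char *base;   // start of the in-memory video data
    size_t size;           // total number of bytes available
    size_t pos;            // current read offset
} BufferData;

static int read_buffer(void *opaque, uint8_t *buf, int buf_size)
{
    BufferData *bd = (BufferData *) opaque;
    size_t remaining = bd->size - bd->pos;
    size_t to_copy = ((size_t) buf_size < remaining) ? (size_t) buf_size : remaining;

    if (to_copy == 0)
        return AVERROR_EOF;   // no more data available

    memcpy(buf, bd->base + bd->pos, to_copy);
    bd->pos += to_copy;
    return (int) to_copy;
}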
So that's it! Btw, I'm using OpenCV version 2.1 so YMMV.
Code doing something similar to the above, for OpenCV 4.2.0, is at:
https://github.com/jcdutton/opencv
Branch: 4.2.0-jcd1
Load the entire file into a RAM buffer pointed to by buffer, of size buffer_size.
Sample code:
VideoCapture d_reader1;
d_reader1.open_buffer(buffer, buffer_size);
d_reader1.read(input1);
The above code reads the first frame of video.