How to play raw char* buffer with Gstreamer? - c++

I have a char* buffer that I read from a video.mp4 file. The buffer has a size of 4096 bytes.
I tried to create a GstBuffer from the char* buffer:
GstBuffer* tmpBuf = gst_buffer_new_wrapped(data, size);
GstBuffer* dataBuffer = gst_buffer_copy(tmpBuf);
Then I push this buffer to the appsrc
GstElement* source = gst_bin_get_by_name (GST_BIN (consumer), "source");
gst_app_src_push_buffer (GST_APP_SRC (source), dataBuffer);
gst_object_unref (source);
The consumer pipeline was created in the following way:
gchar* videoConsumerString = g_strdup_printf ("appsrc max-buffers=5 drop=false name=source ! decodebin ! xvimagesink");
consumer = gst_parse_launch (videoConsumerString, NULL);
gst_element_set_state (consumer, GST_STATE_NULL);
g_free (videoConsumerString);
After creating the pipeline I set its state to GST_STATE_NULL.
When I start playing I set its state to GST_STATE_PLAYING.
But in the output I get this error:
ERROR from element mpegvparse0: No valid frames found before end of stream
I tried changing the size of the char* buffer and using different elements in the pipeline (e.g. ffmpegcolorspace, videoconvert and some others), but that did not resolve the issue.
When run with GST_DEBUG=3, I get a lot of warnings:
0:00:00.064480642 4059 0x12c66d0 WARN codecparsers_mpegvideo gstmpegvideoparser.c:887:gst_mpeg_video_packet_parse_picture_header: Unsupported picture type : 0
I use GStreamer 1.0.
Has anybody faced such a problem?
P.S. I can't read the data from the file with GStreamer itself; I can only read buffers from the file with fread and then try to play them.
Maybe I have to use some specific fixed size for the buffers I read?

I solved this problem.
Unexpectedly for me, the problem was in how the GstBuffer was created.
The correct way to create such a buffer from data (char*) with a known size is:
GstBuffer* buffer = gst_buffer_new_allocate(NULL, size, NULL);
gst_buffer_fill(buffer, 0, data, size);
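For completeness, a minimal read-and-push loop built around this looks roughly as follows (a sketch: file is the FILE* opened for video.mp4 and source is the appsrc retrieved from the pipeline, as in the question):
char data[4096];
size_t size;

while ((size = fread(data, 1, sizeof(data), file)) > 0) {
    /* allocate a GstBuffer and copy the freshly read bytes into it */
    GstBuffer* buffer = gst_buffer_new_allocate(NULL, size, NULL);
    gst_buffer_fill(buffer, 0, data, size);
    /* gst_app_src_push_buffer takes ownership of the buffer */
    gst_app_src_push_buffer(GST_APP_SRC(source), buffer);
}
gst_app_src_end_of_stream(GST_APP_SRC(source));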
Thank you for your help!

Related

How can I get frame by using Gstreamer?

I'm a beginner at using GStreamer to handle some input videos. I have already built a pipeline with GStreamer to transcode the videos, but the last part I cannot figure out is how to get those batches of frames and apply some custom image processing techniques for my task.
Input Videos -----> Gstreamer Pipeline -----> Task: Apply some Image Processing Techniques
I've been searching for a solution to this problem on the Internet but cannot find one, and the more I search, the more confused I am.
appsink is the right element for you. You can enable the "emit-signals" property and listen for the "new-sample" signal. Then you can get access to the buffer.
Here is the full documentation:
https://gstreamer.freedesktop.org/documentation/tutorials/basic/short-cutting-the-pipeline.html?gi-language=c
You have to create an appsink element, enable "emit-signals", then register a "new-sample" callback like this:
g_signal_connect (data.app_sink, "new-sample", G_CALLBACK (new_sample), &data);
static GstFlowReturn new_sample (GstElement *sink, CustomData *data) {
  GstSample *sample;

  /* Retrieve the buffer */
  g_signal_emit_by_name (sink, "pull-sample", &sample);
  if (sample) {
    /* The only thing we do in this example is print a * to indicate a received buffer */
    g_print ("*");
    gst_sample_unref (sample);
    return GST_FLOW_OK;
  }
  return GST_FLOW_ERROR;
}
Now, instead of the g_print, you can retrieve the buffer from the sample with gst_sample_get_buffer:
https://gstreamer.freedesktop.org/documentation/gstreamer/gstsample.html?gi-language=c
Then read the data inside the buffer:
GstMapInfo info;
if (gst_buffer_map (buf, &info, GST_MAP_READ)) {
  /* info.data points to the buffer content, info.size is its length */
  gst_buffer_unmap (buf, &info);
}
Note that the buffer returned by gst_sample_get_buffer() is owned by the sample, so unreffing the sample is enough; do not unref the buffer separately.
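Putting the pieces together, a complete callback might look roughly like this (a minimal sketch; CustomData is whatever struct you pass to g_signal_connect):
static GstFlowReturn new_sample (GstElement *sink, CustomData *data) {
  GstSample *sample = NULL;

  g_signal_emit_by_name (sink, "pull-sample", &sample);
  if (!sample)
    return GST_FLOW_ERROR;

  GstBuffer *buf = gst_sample_get_buffer (sample);  /* owned by the sample */
  GstMapInfo info;
  if (gst_buffer_map (buf, &info, GST_MAP_READ)) {
    /* info.data / info.size: the raw frame bytes; run your image processing here */
    gst_buffer_unmap (buf, &info);
  }
  gst_sample_unref (sample);
  return GST_FLOW_OK;
}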
Best regards.

mux klv data with h264 by mpegtsmux

I need to mux KLV metadata into an H.264 stream. I have created an application, but the stream plays only as long as KLV data is being inserted. When I stop pushing KLV data the whole stream stops. What is the right way to mux asynchronous KLV data with mpegtsmux?
The KLV data needs to be inserted into the following working pipeline:
v4l2src input-src=Camera ! videorate drop-only=true ! 'video/x-raw, format=(string)NV12, width=1920, height=1088, framerate=25/1' ! ce_h264enc target-bitrate=6000000 idrinterval=25 intraframe-interval=60 ! queue ! mpegtsmux alignment=7 ! udpsink host=192.168.0.1 port=3000 -v
This pipeline is assembled in the application. To insert the KLV metadata, an appsrc is created:
appSrc = gst_element_factory_make("appsrc", nullptr);
gst_app_src_set_caps (GST_APP_SRC (appSrc), gst_caps_new_simple("meta/x-klv", "parsed", G_TYPE_BOOLEAN, TRUE, "sparse", G_TYPE_BOOLEAN, TRUE, nullptr));
g_object_set(appSrc, "format", GST_FORMAT_TIME, nullptr);
Then appsrc is linked to the pipeline:
gst_bin_add(GST_BIN(pipeline), appSrc);
gst_element_link(appSrc, mpegtsmux);
Here is the push function:
void AppSrc::pushData(const std::string &data)
{
    GstBuffer *buffer = gst_buffer_new_allocate(nullptr, data.size(), nullptr);
    GstMapInfo map;
    GstClock *clock;
    GstClockTime abs_time, base_time;

    gst_buffer_map (buffer, &map, GST_MAP_WRITE);
    memcpy(map.data, data.data(), data.size());
    gst_buffer_unmap (buffer, &map);

    GST_OBJECT_LOCK (element);
    clock = GST_ELEMENT_CLOCK (element);
    base_time = GST_ELEMENT (element)->base_time;
    gst_object_ref (clock);
    GST_OBJECT_UNLOCK (element);

    abs_time = gst_clock_get_time (clock);
    gst_object_unref (clock);

    GST_BUFFER_PTS (buffer) = abs_time - base_time;
    GST_BUFFER_DURATION (buffer) = gst_util_uint64_scale_int (1, GST_SECOND, 1);

    gst_app_src_push_buffer(GST_APP_SRC(element), buffer);
}
The GStreamer version is 1.6.1.
What can be wrong with my code? I'd appreciate your help.
I can push dummy KLV packets to keep the video stream going, but I don't want to pollute the outgoing stream and I am sure there should be a more delicate solution.
I have found that I can send a stream-start event with GST_STREAM_FLAG_SPARSE, which should be appropriate for sparse streams such as subtitles. But as a result I get no output at all.
GstEvent* stream_start = gst_event_new_stream_start("klv-04");
gst_event_set_stream_flags(stream_start, GST_STREAM_FLAG_SPARSE);
GstPad* pad = gst_element_get_static_pad(GST_ELEMENT(element), "src");
gst_pad_push_event (pad, stream_start);
While debugging I found that after applying the following patch to GStreamer and using GST_STREAM_FLAG_SPARSE, the stream doesn't stop when the appsrc stops pushing packets.
diff --git a/libs/gst/base/gstcollectpads.c b/libs/gst/base/gstcollectpads.c
index 8edfe41..14f9926 100644
--- a/libs/gst/base/gstcollectpads.c
+++ b/libs/gst/base/gstcollectpads.c
@@ -1440,7 +1440,8 @@ gst_collect_pads_recalculate_waiting (GstCollectPads * pads)
if (!GST_COLLECT_PADS_STATE_IS_SET (data, GST_COLLECT_PADS_STATE_WAITING)) {
/* start waiting */
gst_collect_pads_set_waiting (pads, data, TRUE);
- result = TRUE;
+ if (!GST_COLLECT_PADS_STATE_IS_SET (data, GST_COLLECT_PADS_STATE_LOCKED))
+ result = TRUE;
}
}
}
Anyway, the receiver stops updating the screen 10 seconds after the last KLV packet.
This is a bit of an old thread, but in my experience, if there is no queue between the appsrc and the muxer, you will get this behavior. I would change your:
gst_element_link(appSrc, mpegtsmux);
To this:
gst_element_link(appSrc, appSrcQueue);
gst_element_link(appSrcQueue, mpegtsmux);
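(This assumes the queue element is created and added to the bin first, along these lines; the element name is arbitrary and pipeline is the bin from the question:)
GstElement *appSrcQueue = gst_element_factory_make("queue", "appSrcQueue");
gst_bin_add(GST_BIN(pipeline), appSrcQueue);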
I'm not sure whether mpegtsmux has the capability or not, but the muxer we have used has a property called do-timestamping, and when that was set to TRUE we had a better experience.
Another tip: use the gst-inspect tool to see what options each element has.
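For example, to list every property and pad template of the muxer (gst-inspect-1.0, since the question uses GStreamer 1.6):
gst-inspect-1.0 mpegtsmux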

Gstreamer. Write appsink to filesink

I have written code for appsrc to appsink and it works; I can see the actual buffers. The stream is encoded in H264 (vpuenc codec=avc). Now I want to save it to a file (filesink). How do I approach that?
app:
int main(int argc, char *argv[]) {
    gst_init (NULL, NULL);

    GstElement *pipeline, *sink;
    gchar *descr;
    GError *error = NULL;
    GstAppSink *appsink;

    descr = g_strdup_printf (
        "mfw_v4lsrc device=/dev/video1 capture_mode=0 ! " // grab from mipi camera
        "ffmpegcolorspace ! vpuenc codec=avc ! "
        "appsink name=sink"
    );
    pipeline = gst_parse_launch (descr, &error);
    if (error != NULL) {
        g_print ("could not construct pipeline: %s\n", error->message);
        g_error_free (error);
        exit (-1);
    }

    gst_element_set_state(pipeline, GST_STATE_PAUSED);

    sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");
    appsink = (GstAppSink *) sink;
    gst_app_sink_set_max_buffers ( appsink, 2); // limit number of buffers queued
    gst_app_sink_set_drop( appsink, true );     // drop old buffers in queue when full

    gst_element_set_state (pipeline, GST_STATE_PLAYING);

    int i = 0;
    while( !gst_app_sink_is_eos(appsink) )
    {
        GstBuffer *buffer = gst_app_sink_pull_buffer(appsink);
        uint8_t* data = (uint8_t*)GST_BUFFER_DATA(buffer);
        uint32_t size = GST_BUFFER_SIZE(buffer);
        gst_buffer_unref(buffer);
    }

    return 0;
}
If, as mentioned in the comments, what you actually want to know is how to do a network video stream in GStreamer, you should probably close this question because you're on the wrong path. You don't need to use an appsink or filesink for that. What you'll want to investigate are the GStreamer elements related to RTP, RTSP, RTMP, MPEG-TS, or even MJPEG (if your image size is small enough).
Here are two basic send/receive video stream pipelines:
gst-launch-0.10 v4l2src ! ffmpegcolorspace ! videoscale ! video/x-raw-yuv,width=640,height=480 ! vpuenc ! h264parse ! rtph264pay ! udpsink host=localhost port=5555
gst-launch-0.10 udpsrc port=5555 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! ffdec_h264 ! videoconvert ! ximagesink
In this situation you don't write your own while loop. You register callbacks and wait for buffers (GStreamer 0.10) to arrive. If you're using GStreamer 1.0, you use samples instead of buffers. Samples are a huge pain in the ass compared to buffers but oh well.
Register the callback:
GstAppSinkCallbacks* appsink_callbacks = (GstAppSinkCallbacks*)malloc(sizeof(GstAppSinkCallbacks));
appsink_callbacks->eos = NULL;
appsink_callbacks->new_preroll = NULL;
appsink_callbacks->new_sample = app_sink_new_sample;
gst_app_sink_set_callbacks(GST_APP_SINK(appsink), appsink_callbacks, (gpointer)pointer_to_data_passed_to_the_callback, free);
And your callback:
GstFlowReturn app_sink_new_sample(GstAppSink *sink, gpointer user_data) {
    prog_data* pd = (prog_data*)user_data;

    GstSample* sample = gst_app_sink_pull_sample(sink);
    if(sample == NULL) {
        return GST_FLOW_ERROR;
    }

    GstBuffer* buffer = gst_sample_get_buffer(sample);
    GstMemory* memory = gst_buffer_get_all_memory(buffer);
    GstMapInfo map_info;

    if(! gst_memory_map(memory, &map_info, GST_MAP_READ)) {
        gst_memory_unref(memory);
        gst_sample_unref(sample);
        return GST_FLOW_ERROR;
    }

    //render using map_info.data
    gst_memory_unmap(memory, &map_info);
    gst_memory_unref(memory);
    gst_sample_unref(sample);

    return GST_FLOW_OK;
}
You can keep your while loop as it is--using gst_app_sink_is_eos()--but make sure to put a sleep in it. Most of the time I use something like the following instead:
GMainLoop* loop = g_main_loop_new(NULL, FALSE);
g_main_loop_run(loop);
g_main_loop_unref(loop);
Note: Unless you need to do something special with the data you can use the "filesink" element directly.
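For instance, a variant of the question's pipeline that dumps the encoded stream straight to disk could look like this (a sketch using the i.MX-specific elements from the question; it writes a raw H.264 byte-stream, so add a muxer such as matroskamux before the filesink if you want a playable container):
mfw_v4lsrc device=/dev/video1 capture_mode=0 ! ffmpegcolorspace ! vpuenc codec=avc ! filesink location=output.h264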
A simpler option would be to write to the file directly in the appsink itself, i.e. when you get the buffer callback, write the data to the file, and make sure you close the file on EOS.
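With the 1.0-style callback above that boils down to something like the following (a sketch; pd->outfile is assumed to be a FILE* opened before starting the pipeline and closed on EOS):
fwrite(map_info.data, 1, map_info.size, pd->outfile);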
Hope that helps.

Gstreamer appsrc: odd behaviour of need-data callback

I'm implementing a GStreamer media player with my own data source using appsrc. Everything works fine except one thing:
When the stream reaches its end, the callback emits the "end-of-stream" signal. The signal-emitting function g_signal_emit_by_name(appsrc, "end-of-stream", &ret) returns the GstFlowReturn value GST_FLOW_OK. But then appsrc calls my need-data callback again, so it emits the "end-of-stream" signal again. This time the GstFlowReturn value is (-3), which is GST_FLOW_UNEXPECTED. I assume it does not expect an "end-of-stream" signal when it has already received one, but why does it request more data then? Maybe it is because I didn't set the size value of the stream?
Gstreamer version is 0.10.
Callback function code (appsrc type is seekable btw):
static void cb_need_data (GstElement *appsrc, guint size, gpointer user_data)
{
    GstBuffer *buffer;
    GstFlowReturn ret;
    AppsrcData* data = static_cast<AppsrcData*>(user_data);

    buffer = gst_buffer_new_and_alloc(size);
    int read = fread(GST_BUFFER_DATA(buffer), 1, size, data->file);
    GST_BUFFER_SIZE(buffer) = read;

    g_signal_emit_by_name (appsrc, "push-buffer", buffer, &ret);
    if (ret != GST_FLOW_OK) {
        /* something wrong, stop pushing */
        g_printerr("GST_FLOW != OK, return value is %d\n", ret);
        g_main_loop_quit (data->loop);
    }

    if(feof(data->file) || read == 0)
    {
        g_signal_emit_by_name(appsrc, "end-of-stream", &ret);
        if (ret != GST_FLOW_OK) {
            g_printerr("EOF reached, GST_FLOW != OK, return value is %d\nAborting...", ret);
            g_main_loop_quit (data->loop);
        }
    }
}
You should make some corrections to your code (if they are not there already) that should alleviate your issue and help the overall application:
Never try to send a buffer without first checking that it actually has data. Simply check the buffer data and length to make sure that the data is not NULL and that the length is > 0.
You can flag that the stream has ended in your user_data. When you send your EOS, set an item in your user data to indicate that it has been sent, and if the appsrc requests more data, simply check whether EOS has already been sent and do not push anything else (see the sketch after this list).
Listen for the EOS on your pipeline bus so that you can destroy the stream and close the loop when the EOS message is handled; that way you can be sure your media sink has received the EOS, and you can safely dispose of the pipeline and loop without losing any data.
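A minimal sketch of the second point, assuming a boolean eos_sent field is added to the AppsrcData struct from the question:
static void cb_need_data (GstElement *appsrc, guint size, gpointer user_data)
{
    AppsrcData* data = static_cast<AppsrcData*>(user_data);
    if (data->eos_sent)          /* EOS already signalled: ignore further requests */
        return;

    GstBuffer *buffer = gst_buffer_new_and_alloc(size);
    int read = fread(GST_BUFFER_DATA(buffer), 1, size, data->file);
    if (read > 0) {
        GstFlowReturn ret;
        GST_BUFFER_SIZE(buffer) = read;
        g_signal_emit_by_name (appsrc, "push-buffer", buffer, &ret);
    }
    /* the "push-buffer" action signal does not take ownership, so drop our ref */
    gst_buffer_unref (buffer);

    if (read <= 0 || feof(data->file)) {
        GstFlowReturn ret;
        g_signal_emit_by_name (appsrc, "end-of-stream", &ret);
        data->eos_sent = TRUE;   /* remember that EOS was already sent */
    }
}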
Have you tried the method gst_app_src_end_of_stream()? I'm not sure what return code you should use after invoking it, but it should be either GST_FLOW_OK or GST_FLOW_UNEXPECTED.
In GStreamer 1.x you return GST_FLOW_EOS.
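For reference, calling it directly looks like this (a sketch; appsrc is the element from the question, and gst/app/gstappsrc.h must be included):
#include <gst/app/gstappsrc.h>

GstFlowReturn ret = gst_app_src_end_of_stream (GST_APP_SRC (appsrc));
/* returns GST_FLOW_OK when the EOS was queued successfully */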

streaming H.264 over RTP with libavformat

I've been trying over the past week to implement H.264 streaming over RTP, using x264 as an encoder and libavformat to pack and send the stream. Problem is, as far as I can tell it's not working correctly.
Right now I'm just encoding random data (x264_picture_alloc) and extracting NAL frames from libx264. This is fairly simple:
x264_picture_t pic_out;
x264_nal_t* nals;
int num_nals;
int frame_size = x264_encoder_encode(this->encoder, &nals, &num_nals, this->pic_in, &pic_out);

if (frame_size <= 0)
{
    return frame_size;
}

// push NALs into the queue
for (int i = 0; i < num_nals; i++)
{
    // create a NAL storage unit
    NAL nal;
    nal.size = nals[i].i_payload;
    nal.payload = new uint8_t[nal.size];
    memcpy(nal.payload, nals[i].p_payload, nal.size);

    // push the storage into the NAL queue
    {
        // lock and push the NAL to the queue
        boost::mutex::scoped_lock lock(this->nal_lock);
        this->nal_queue.push(nal);
    }
}
nal_queue is used for safely passing frames over to a Streamer class which will then send the frames out. Right now it's not threaded, as I'm just testing to try to get this to work. Before encoding individual frames, I've made sure to initialize the encoder.
But I don't believe x264 is the issue, as I can see frame data in the NALs it returns back.
Streaming the data is accomplished with libavformat, which is first initialized in a Streamer class:
Streamer::Streamer(Encoder* encoder, string rtp_address, int rtp_port, int width, int height, int fps, int bitrate)
{
    this->encoder = encoder;

    // initalize the AV context
    this->ctx = avformat_alloc_context();
    if (!this->ctx)
    {
        throw runtime_error("Couldn't initalize AVFormat output context");
    }

    // get the output format
    this->fmt = av_guess_format("rtp", NULL, NULL);
    if (!this->fmt)
    {
        throw runtime_error("Unsuitable output format");
    }
    this->ctx->oformat = this->fmt;

    // try to open the RTP stream
    snprintf(this->ctx->filename, sizeof(this->ctx->filename), "rtp://%s:%d", rtp_address.c_str(), rtp_port);
    if (url_fopen(&(this->ctx->pb), this->ctx->filename, URL_WRONLY) < 0)
    {
        throw runtime_error("Couldn't open RTP output stream");
    }

    // add an H.264 stream
    this->stream = av_new_stream(this->ctx, 1);
    if (!this->stream)
    {
        throw runtime_error("Couldn't allocate H.264 stream");
    }

    // initalize codec
    AVCodecContext* c = this->stream->codec;
    c->codec_id = CODEC_ID_H264;
    c->codec_type = AVMEDIA_TYPE_VIDEO;
    c->bit_rate = bitrate;
    c->width = width;
    c->height = height;
    c->time_base.den = fps;
    c->time_base.num = 1;

    // write the header
    av_write_header(this->ctx);
}
This is where things seem to go wrong. av_write_header above seems to do absolutely nothing; I've used wireshark to verify this. For reference, I use Streamer streamer(&enc, "10.89.6.3", 49990, 800, 600, 30, 40000); to initialize the Streamer instance, with enc being a reference to an Encoder object used to handle x264 previously.
Now when I want to stream out a NAL, I use this:
// grab a NAL
NAL nal = this->encoder->nal_pop();
cout << "NAL popped with size " << nal.size << endl;
// initalize a packet
AVPacket p;
av_init_packet(&p);
p.data = nal.payload;
p.size = nal.size;
p.stream_index = this->stream->index;
// send it out
av_write_frame(this->ctx, &p);
At this point, I can see RTP data appearing over the network, and it looks like the frames I've been sending, even including a little copyright blob from x264. But, no player I've used has been able to make any sense of the data. VLC quits wanting an SDP description, which apparently isn't required.
I then tried to play it through gst-launch:
gst-launch udpsrc port=49990 ! rtph264depay ! decodebin ! xvimagesink
This will sit waiting for UDP data, but when it is received, I get:
ERROR: element /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: No RTP
format was negotiated. Additional debug info:
gstbasertpdepayload.c(372): gst_base_rtp_depayload_chain ():
/GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: Input buffers
need to have RTP caps set on them. This is usually achieved by setting
the 'caps' property of the upstream source element (often udpsrc or
appsrc), or by putting a capsfilter element before the depayloader and
setting the 'caps' property on that. Also see
http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/gst/rtp/README
As I'm not using GStreamer to do the streaming itself, I'm not quite sure what it means by RTP caps. But it makes me wonder if I'm not sending enough information over RTP to describe the stream. I'm pretty new to video and I feel like there's some key thing I'm missing here. Any hints?
H.264 is an encoding standard. It specifies how video data is compressed and stored in a format that can be decompressed into a video stream at a later point.
RTP is a transmission protocol. It specifies format and order of packets that can carry audio-video data that was encoded by an arbitrary encoder.
GStreamer expects to receive data that conforms to the RTP protocol. Is your expectation that libavformat will produce RTP packets immediately readable by GStreamer warranted? Maybe GStreamer expects an additional stream description that would enable it to accept and decode the streamed packets using the proper decoder? Maybe it requires an additional RTSP exchange or an SDP stream descriptor file?
The error message states pretty clearly that an RTP format has not been negotiated. caps are short-hand for capabilities. The receiver needs to know the transmitter's capabilities to set up the receiving/decoding machinery correctly.
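In GStreamer terms that means giving udpsrc explicit RTP caps; something along these lines should satisfy the depayloader (a sketch: the payload number and clock-rate are assumptions and must match what your sender actually emits):
gst-launch udpsrc port=49990 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! xvimagesink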
I strongly suggest trying at least to create an SDP file for your RTP stream. libavformat should be able to do it for you.
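A sketch of how that could look inside the Streamer constructor, after av_write_header (assuming a libavformat version that provides av_sdp_create; older releases call it avf_sdp_create):
char sdp[2048];
AVFormatContext* contexts[1] = { this->ctx };

// Generate an SDP description of the RTP session and write it to a file;
// VLC (or GStreamer) can then open stream.sdp to learn the stream parameters.
if (av_sdp_create(contexts, 1, sdp, sizeof(sdp)) == 0)
{
    FILE* f = fopen("stream.sdp", "w");
    if (f)
    {
        fputs(sdp, f);
        fclose(f);
    }
}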