Make h264 source seekable - gstreamer

I have an h.264 stream that I'm trying to make seekable.
Once the stream has ended, the video is recorded, but by default it won't be seekable. I know that making the source streamable disables seekability, but I was wondering whether there's a way to inject index information at the end of the stream, or any other way to make the video seekable.
How I'm setting up the GstBus:
eosBus = gst_element_get_bus(pipeline);
gst_bus_add_signal_watch (eosBus);
g_signal_connect (eosBus, "message", (GCallback) message_cb, pipeline);
The way I'm handling EOS:
static void gst_native_stop_recording (JNIEnv* env, jobject thiz) {
  gst_element_send_event (pipeline, gst_event_new_eos ());
  gst_element_set_state (pipeline, GST_STATE_PAUSED);
  // unlink elements ...
}
Here's how I set up the callback message function (which is never triggered):
static void message_cb (GstBus * bus, GstMessage * message, gpointer user_data)
{
  switch (GST_MESSAGE_TYPE (message)) {
    case GST_MESSAGE_EOS: {
      GST_DEBUG ("Got EOS\n");
      break;
    }
    default:
      break;
  }
}
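For context on why the callback may never fire: sinks only post the EOS message once the EOS event has travelled through the whole pipeline, and forcing the pipeline to PAUSED immediately after sending the event can keep that from happening. A common shutdown pattern (a minimal sketch of mine, not from the post; pipeline is assumed to end in a muxer such as mp4mux) is to send EOS and then block on the bus until the EOS message arrives, so the muxer gets a chance to write its index before the file is closed:

#include <gst/gst.h>

/* Sketch: send EOS, wait for it to drain through the pipeline, then shut down. */
static void stop_recording (GstElement *pipeline)
{
  gst_element_send_event (pipeline, gst_event_new_eos ());

  /* Block until the EOS (or an error) reaches the bus. */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      (GstMessageType) (GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
  if (msg)
    gst_message_unref (msg);
  gst_object_unref (bus);

  /* Only now is it safe to take the pipeline down. */
  gst_element_set_state (pipeline, GST_STATE_NULL);
}

With a muxer like mp4mux, letting the EOS reach the sink is exactly what triggers writing the index data, which in turn is what makes the finished file seekable.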

Related

Get gstreamer bus messages using non-static message handler

I have created a program using GStreamer which listens on several ports (say 5) for RTP packets.
Now I have created a class (say GstClass) which creates the pipeline and has a callback function that listens to the bus messages (I need this message system to shut down the pipeline after a certain timeout).
The main function looks like this: 2 threads are created with 2 objects, and GstFunc is called in both threads. The first function listens on port 5000 and the second on port 5008.
int main() {
  char filepath1[] = "/home/rohan/Tornado1.raw";
  char filepath2[] = "/home/rohan/Tornado2.raw";
  unsigned int port1 = 5000;
  unsigned int port2 = 5008;
  GstClass GstObj1;
  GstClass GstObj2;
  boost::thread thrd1 { &GstClass::GstFunc, &GstObj1, filepath1, &port1 };
  boost::thread thrd2 { &GstClass::GstFunc, &GstObj2, filepath2, &port2 };
  thrd1.join();
  thrd2.join();
  return 0;
}
The class GstClass looks like this:
class GstClass {
protected:
  // some other variables...
  GMainLoop *msLoop;
public:
  gboolean bus_call(GstBus *bus, GstMessage *message, gpointer data);
  void GstFunc(char *filepath, unsigned int *port);
};
For a detailed view of the function, please look at this example; replace int main (int argc, char *argv[]) with void GstFunc(char *filepath, unsigned int *port), with the appropriate changes.
GstFunc looks like this:
void GstFunc (char *filepath, unsigned int *port)
{
  GMainLoop *loop;
  GstElement *pipeline, *source, *conv, *sink;
  GstBus *bus;
  guint bus_watch_id;

  gst_init (NULL, NULL);
  loop = g_main_loop_new (NULL, FALSE);

  /* Create gstreamer elements */
  pipeline = gst_pipeline_new ("audio-player");
  source = gst_element_factory_make ("autoaudiosrc", "audiosource");
  conv = gst_element_factory_make ("audioconvert", "converter");
  sink = gst_element_factory_make ("autoaudiosink", "audio-output");
  if (!pipeline || !source || !conv || !sink) {
    g_printerr ("One element could not be created. Exiting.\n");
    return;
  }

  /* we add a message handler */
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  bus_watch_id = gst_bus_add_watch (bus, bus_call, NULL);
  gst_object_unref (bus);

  gst_bin_add_many (GST_BIN (pipeline), source, conv, sink, NULL);
  gst_element_link_many (source, conv, sink, NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (loop);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (GST_OBJECT (pipeline));
  g_source_remove (bus_watch_id);
  g_main_loop_unref (loop);
}
Now the dilemma I am facing is with the (in the example) static function bus_call(...).
Since I am creating 2 pipelines in 2 different threads listening on 2 different ports, I cannot have this function be static (shared between the objects). How can I make these 2 pipelines disjoint from each other? Or how can I make this static bus_call(...) non-static?
Simply removing the static keyword didn't help, and gives this error:
error: invalid use of non-static member function ‘gboolean GstClass::bus_call(GstBus*, GstMessage*, gpointer)’
A few important points:
I have referred to this document, which says: "To use a bus, attach a message handler to the bus of a pipeline using gst_bus_add_watch()."
The gst_bus_add_watch() in GstClass::GstFunc() (which calls back bus_call) maps to the header file gstbus.h, and the declaration is simply:
GST_API
guint gst_bus_add_watch(GstBus * bus, GstBusFunc func, gpointer user_data);
My initial guess is that gst_bus_add_watch expects the 2nd parameter to be a static function, though I am not sure why. What can be done here?
*********************** Question Edit 2 ***********************
Is it possible to add an argument to bus_call, like gboolean bus_call(GstBus *bus, GstMessage *message, gpointer data, **SOME POINTER TO THE OBJECT**)?
This way the function would remain static while having a pointer to the object calling it, and could act upon that object (say, close that object's pipeline).
I don't think you can get away with what you want. The signature for GstBusFunc(), the callback, is a pointer to function, not a pointer to member function. They're different things. (I've also failed with std::bind, FWIW.)
I've done something very similar to what you describe, though not quite the same, and I took a different approach that might help you. You can use a static method, but you must pass a pointer to your pipeline class to gst_bus_add_watch. Inside your busCallback you dereference the pointer and off you go! You may need to implement some kind of locking scheme as well.
class MyPipeline {
  GstElement *m_pipeline;
public:
  MyPipeline(...);
  static gboolean busCallback(GstBus *bus, GstMessage *msg, gpointer p);
};
MyPipeline::MyPipeline(...)
{
  // create pipeline...
  m_pipeline = ...;

  // bus callback, pass 'this' as arg for callback
  GstBus *bus = gst_pipeline_get_bus(GST_PIPELINE(m_pipeline));
  gst_bus_add_watch(bus, &MyPipeline::busCallback, this);
  gst_object_unref(bus);
  // ...
}
gboolean MyPipeline::busCallback(GstBus *, GstMessage *msg, gpointer p)
{
  // get lock if needed...
  // recover your class instance
  MyPipeline *myPipeline = (MyPipeline *)p;
  // do what you need to, free lock
  return TRUE;
}
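Applied to the GstClass from the question, the same trick looks roughly like this (a sketch under the question's own names; only the EOS case is shown):

class GstClass {
protected:
  GMainLoop *msLoop;
public:
  // static trampoline: 'data' carries the object pointer
  static gboolean bus_call(GstBus *bus, GstMessage *message, gpointer data) {
    GstClass *self = static_cast<GstClass *>(data);
    if (GST_MESSAGE_TYPE(message) == GST_MESSAGE_EOS)
      g_main_loop_quit(self->msLoop);  // shut down this object's loop only
    return TRUE;
  }
  void GstFunc(char *filepath, unsigned int *port);
};

and inside GstFunc the watch is registered with the object as user data:

bus_watch_id = gst_bus_add_watch(bus, &GstClass::bus_call, this);

Because each thread passes its own this, the two pipelines stay fully disjoint even though they share one static callback.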

Place external live video frames from a non-supported V4L camera into GStreamer (Qt, C++, QThreads)

OS: Ubuntu 14.04
SDK: Qt
GStreamer: > 1.0
I am wondering how I would put continuously captured frames from a non-supported V4L camera into GStreamer.
Actually, my task is to grab frames from the camera and use only GStreamer to send them to a different computer via UDP. But at the moment, I just want to display them on my machine.
What I did so far:
a) Implemented code in Qt for an IDS camera that captures frames and displays them in Qt as a live stream.
b) Separately, I have written (or rather copied) code that displays a live stream via GStreamer using a webcam that supports V4L.
Now, as I mentioned, I want to use GStreamer to display the captured frames in the Qt environment.
I have developed code in Qt 5.5 which makes use of multithreading to run separate threads for GStreamer, frame capture, and the GUI. The code has become quite long, but I will try my best to keep it minimal here.
Issue: when I run the code with debug messages added, I can see frames continuously coming from the other thread into the main one; the GStreamer function starts successfully, and the very first time I get a debug message from cb_need_data, but nothing after that.
The source code is shown below.
The streaming class:
class StreamG : public QObject
{
  Q_OBJECT
public:
  explicit StreamG(QObject *parent = 0);
  bool addLinkElements();
  static void cb_need_data(GstElement *appsrc,
                           guint unused_size,
                           gpointer user_data);
  static GMainLoop *loop;
  static char* bufferFrame;
signals:
  void sigFinish();
public slots:
  void start();
  void stop();
private:
  GstElement *pipeline, *source, *sink, *convert;
  GstBus *bus;
  GstMessage *msg;
  GstStateChangeReturn ret;
};
The streaming .cpp file is below:
GMainLoop* StreamG::loop;
char* StreamG::bufferFrame = NULL; // this will take buffer frames from the other function

void StreamG::cb_need_data (GstElement *appsrc,
                            guint unused_size,
                            gpointer user_data)
{
  qDebug() << " cb_need_data is called ...";
  static GstClockTime timestamp = 0;
  GstBuffer *buffer;
  guint size;
  GstFlowReturn ret;
  guchar *data1;
  GstMapInfo map;

  data1 = (guchar *)bufferFrame;
  size = 385*288*2;
  if (data1)
  {
    buffer = gst_buffer_new_allocate (NULL, size, NULL);
    gst_buffer_map (buffer, &map, GST_MAP_WRITE);
    memcpy ((guchar *)map.data, data1, gst_buffer_get_size (buffer));
    gst_buffer_unmap (buffer, &map);

    GST_BUFFER_PTS (buffer) = timestamp;
    GST_BUFFER_DURATION (buffer) = gst_util_uint64_scale_int (1, GST_SECOND, 2);
    timestamp += GST_BUFFER_DURATION (buffer);

    g_signal_emit_by_name (appsrc, "push-buffer", buffer, &ret);
    gst_buffer_unref (buffer);
    if (ret != GST_FLOW_OK)
    {
      // something wrong, stop pushing
      g_debug ("push buffer returned %d for %d bytes \n", ret, size);
      g_main_loop_quit (loop);
    }
  }
}
static gboolean bus_call (GstBus *bus, GstMessage *msg, gpointer data)
{
  GMainLoop *loop = (GMainLoop *) data;
  switch (GST_MESSAGE_TYPE (msg)) {
    case GST_MESSAGE_EOS:
      g_print ("End of stream\n");
      qDebug() << " end of msg in gstreamer";
      g_main_loop_quit (loop);
      break;
    case GST_MESSAGE_ERROR: {
      gchar *debug;
      GError *error;
      gst_message_parse_error (msg, &error, &debug);
      g_free (debug);
      g_printerr ("Error: %s\n", error->message);
      qDebug() << " end of msg in gstreamer";
      g_error_free (error);
      g_main_loop_quit (loop);
      break;
    }
    default:
      break;
  }
  return TRUE;
}
StreamG::StreamG(QObject *parent) : QObject(parent)
{
  // Initialize GStreamer
  gst_init (NULL, NULL);
  loop = g_main_loop_new (NULL, FALSE);

  // Create the elements
  source = gst_element_factory_make ("appsrc", "source");
  sink = gst_element_factory_make ("autovideosink", "sink");
  convert = gst_element_factory_make ("videoconvert", "convert");
  g_assert (convert);
  pipeline = gst_pipeline_new ("test-pipeline");

  /* g_object_set (G_OBJECT (source), "caps",
      gst_caps_new_simple ("video/x-raw",
          "format", G_TYPE_STRING, "RGB",
          "width", G_TYPE_INT, 640,
          "height", G_TYPE_INT, 360,
          "framerate", GST_TYPE_FRACTION, 1, 1,
          NULL), NULL); */
  g_object_set (G_OBJECT (source), "caps",
      gst_caps_new_simple ("video/x-raw",
          "format", G_TYPE_STRING, "RGB",
          "width", G_TYPE_INT, 640,
          "height", G_TYPE_INT, 360, NULL), NULL);
}
void StreamG::start()
{
  addLinkElements();
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  // Iterate
  g_print ("Running...Gstreamer\n");
  g_main_loop_run (loop);

  // Out of the main loop, clean up nicely
  g_print ("Returned, stopping playback\n");
  gst_element_set_state (pipeline, GST_STATE_NULL);
}
void StreamG::stop()
{
  g_print ("Deleting pipeline\n");
  g_main_loop_quit (loop);
  gst_object_unref (GST_OBJECT (pipeline));
  gst_object_unref (bus);
  g_main_loop_unref (loop);
  emit sigFinish();
}
bool StreamG::addLinkElements()
{
  if (!pipeline || !source || !sink || !convert)
  {
    g_printerr ("Not all elements could be created.\n");
    return false;
  }

  // g_object_set (G_OBJECT (source), "device", "/dev/video0", NULL);
  gst_bin_add_many (GST_BIN (pipeline), source, sink, convert, NULL);
  if (gst_element_link (convert, sink) != TRUE)
  {
    g_printerr ("Elements could not be linked: convert - sink.\n");
    gst_object_unref (pipeline);
    return false;
  }
  if (gst_element_link (source, convert) != TRUE)
  {
    g_printerr ("Elements could not be linked: source - convert.\n");
    gst_object_unref (pipeline);
    return false;
  }
  g_print ("Linked all the Elements together\n");

  /* we add a message handler */
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  gst_bus_add_watch (bus, bus_call, loop);

  g_object_set (G_OBJECT (source),
      "stream-type", 0,
      "format", GST_FORMAT_TIME, NULL);
  g_signal_connect (source, "need-data", G_CALLBACK (cb_need_data), NULL);
  return true;
}
Functions in the main widget (I have placed only the important member variables and functions here):
class UEYEMain : public QWidget
{
  Q_OBJECT
public:
  int openCamera (bool bStartLive);
  INT _GetImageID (char* pbuf);
  bool _AllocImages();              // function for IDS camera
  void onLive();                    // function for IDS camera
  void transferLastFrameToGstream();
private slots:
  void eventreceived (int event);   // slot which receives frames, copied into the StreamG static variable
private:
  Ui::UEYEMain *ui;
  .......
  .......
  StreamG* StreamingG;
  QElapsedTimer m_Time;
  QRgb m_table[256];
  int m_nUpdateTicks;
  QThread* threadForStream;
  char *m_pLastBuffer;
  EventThread *m_pEvFrame;          // another thread to receive frames
  void ProcessFrame();              // function called on receiving frames
  void DrawImage (char *pBuffer);   // draws the image to a Qt widget; I use it for testing purposes
};
void UEYEMain::eventreceived (int event)
{
  bool bUpdateCameraList = false;
  switch (event)
  {
    // ... some other cases
    case IS_SET_EVENT_FRAME:
      qDebug() << " new frame received";
      if (!m_hCamera)
      {
        break;
      }
      ProcessFrame ();
      break;
    default:
      break;
  }
}
void UEYEMain::transferLastFrameToGstream()
{
  //memcpy( StreamingG->bufferFrame, m_pLastBuffer, sizeof(m_pLastBuffer) );
  if (m_pLastBuffer) // just pointing the buffer to the StreamG variable
  {
    StreamingG->bufferFrame = m_pLastBuffer;
  }
}
void UEYEMain::ProcessFrame ()
{
  INT dummy = 0;
  char *pLast = NULL, *pMem = NULL;

  qDebug() << " counter for frame recv -->" << countFrameDebug;
  countFrameDebug++;
  is_GetActSeqBuf (m_hCamera, &dummy, &pMem, &pLast);
  m_pLastBuffer = pLast;
  if (m_bReady)
  {
    m_bReady = FALSE;
    update();
    if (m_pLastBuffer)
    {
      int nTicks = 0;
      // Frame rate limit ?
      if (m_nUpdateTicks > 0)
      {
        nTicks = m_Time.elapsed();
        bDraw = (nTicks >= m_nUpdateTicks) ? true : false;
      }
      if (bDraw)
      {
        nDisplayed++;
        m_Time.restart();
        transferLastFrameToGstream();
        //DrawImage(m_pLastBuffer); // this func successfully streams video to the Qt widget
      }
    }
  }
}
void UEYEMain::onLive()
{
  INT nRet = 1;
  time_t start;
  static char str[64];

  if (!m_bLive)
  {
    m_bLive = TRUE;
    m_bReady = TRUE;
    is_CaptureVideo (m_hCamera, IS_DONT_WAIT);
    threadForStream->start();
  }
}
The above function onLive() is called from workThreadFinished, a slot connected to another thread's finished() signal:
connect(m_workThread, SIGNAL(finished()), this, SLOT(workThreadFinished()), Qt::QueuedConnection);
The following is the output I get; I don't see StreamG::cb_need_data being called more than once.
no of camera detected : 1
started event 2 detection!// this thread acquire frames
started event 8 detection!
Linked all the Elements together // gst
Running...Gstreamer //gstreamer
cb_need_data is called ... // gstreamer
new frame received //
counter for frame recv --> 0
new frame received
counter for frame recv --> 1
new frame received
counter for frame recv --> 2
new frame received
........... and so on
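A side observation of mine (not part of the original post): the appsrc caps declare 640x360 RGB, which is 640*360*3 = 691,200 bytes per frame, while cb_need_data pushes size = 385*288*2 = 221,760 bytes. Downstream negotiation problems from such a mismatch are one plausible reason the pipeline stalls after the first buffer. A sketch of keeping the two consistent, assuming the camera really delivers 640x360 24-bit RGB at some known rate (30 fps here is an assumption):

/* One set of constants drives both the caps and the buffer size,
 * so appsrc pushes exactly one frame per buffer. */
#define FRAME_WIDTH      640
#define FRAME_HEIGHT     360
#define BYTES_PER_PIXEL  3    /* "RGB" = 24 bits per pixel */

const guint frame_size = FRAME_WIDTH * FRAME_HEIGHT * BYTES_PER_PIXEL;

g_object_set (G_OBJECT (source), "caps",
    gst_caps_new_simple ("video/x-raw",
        "format", G_TYPE_STRING, "RGB",
        "width", G_TYPE_INT, FRAME_WIDTH,
        "height", G_TYPE_INT, FRAME_HEIGHT,
        "framerate", GST_TYPE_FRACTION, 30, 1,   /* assumed rate */
        NULL), NULL);

Whatever the camera's true format is (the 385*288*2 size suggests a 16-bit format rather than RGB), the caps and the pushed buffer size have to describe the same thing.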

How to access pipeline through GMainLoop?

I've got an application reading a movie file, and I would like to reset the stream to its initial position when it reaches the end of the stream.
So I've got the usual structure; I added a bus to watch the events:
bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
gst_bus_add_watch (bus, bus_call, loop);
gst_object_unref (bus);
And here is a snippet of the bus_call function:
static gboolean bus_call (GstBus *bus,
                          GstMessage *msg,
                          gpointer data)
{
  GMainLoop *loop = (GMainLoop *) data;
  switch (GST_MESSAGE_TYPE (msg))
  {
    case GST_MESSAGE_EOS:
      g_print ("End of stream\n");
      g_main_loop_quit (loop);
      break;
    default:
      break;
  }
  return TRUE;
}
So for now, when I reach the end of the stream, I just quit the loop.
Can I access my pipeline through the loop?
Thanks for reading; please let me know if I'm trying to do something impossible.
PS: I want to avoid making my pipeline a global variable, or passing bus_call a structure containing my pipeline and loop, because it feels wrong.
My goal was to auto-rewind when the end of the stream was reached. The user could then hit the play button to play it again.
So I skirted the problem and modified the callback function of my play button.
Now it detects the current stream position and compares it to the length of the stream:
gint64 streamPosition, streamLength;
GstFormat format = GST_FORMAT_TIME;

gst_element_query_position (pipeline, &format, &streamPosition);
gst_element_query_duration (pipeline, &format, &streamLength);

if (streamPosition == streamLength)
  stopIt (widget, pipeline);
If the current stream position equals the stream length, we are at the end of the stream, so I call a function to rewind it.
Feels really jerky, but that's the "best" solution I have for now.
I'm still open to suggestions.
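One alternative worth sketching (my suggestion, not from the thread): the idiomatic GLib way is to pass whatever bus_call needs through its user_data pointer; despite feeling wrong, a small context struct is the conventional closure mechanism in C. With the pipeline available, EOS can trigger a flushing seek back to zero instead of quitting the loop (CustomData is a hypothetical name):

typedef struct {
  GstElement *pipeline;
  GMainLoop  *loop;
} CustomData;  /* hypothetical container for what bus_call needs */

static gboolean bus_call (GstBus *bus, GstMessage *msg, gpointer data)
{
  CustomData *d = (CustomData *) data;
  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_EOS) {
    /* flushing seek back to 0 instead of quitting the loop */
    gst_element_seek_simple (d->pipeline, GST_FORMAT_TIME,
        GST_SEEK_FLAG_FLUSH, 0);
  }
  return TRUE;
}

/* setup: gst_bus_add_watch (bus, bus_call, &data); */

This keeps the pipeline running and avoids polling position against duration in the play-button callback.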

Gstreamer appsink receiving buffers much slower than real time on CARMA board

I am relatively new to asking questions on Stack Overflow, but I will do my best to explain the problem thoroughly.
I am currently using an Axis IP camera to deliver live video to a CARMA board. GStreamer takes these frames using an RTSP client, performs an RTP depayload, and then decodes the h.264 images that are being sent from the camera. When I perform this process on my computer (currently equipped with an i7 processor) there is no lag and the stream is output to the screen in real time, updating at a rate of 30 Hz.
The problem arises when I switch over to the CARMA board I am working on. Instead of displaying in real time, the appsink receives buffers at a rate much slower than normal. More specifically, instead of receiving buffers at a rate of 30 Hz, it only receives them at about 10 Hz on average, even when no other processing is occurring on the board. It should also be noted that no frames are dropped; the appsink receives all buffers, just not in real time. Any insight as to why this is occurring is greatly appreciated. I have checked that timestamps are not the issue (i.e. the rate at which the appsink receives buffers does not change whether or not I am using a GST timestamp). The CARMA board is currently running Ubuntu 11.04 and compiling with GCC. Below are some code snippets and their respective explanations.
Some definitions
#define APPSINK_CAPS "video/x-raw-yuv,format=(fourcc)I420"
#define RTSP_URI "rtsp://(ipaddress)/axis-media/media.amp?videocodec=h264"
#define RTSP_LATENCY 0
#define RTSP_BUFFER_MODE 0
#define RTSP_RTP_BLOCKSIZE 65536
GStreamer pipeline set-up code:
/* Initialize GStreamer */
gst_init (&argc, &argv);

/* Create the elements */
data.rtspsrc = gst_element_factory_make ("rtspsrc", NULL);
data.rtph264depay = gst_element_factory_make ("rtph264depay", NULL);
data.nv_omx_h264dec = gst_element_factory_make ("nv_omx_h264dec", NULL);
data.appsink = gst_element_factory_make ("appsink", NULL);
if (!data.rtspsrc || !data.rtph264depay || !data.nv_omx_h264dec || !data.appsink) {
  g_printerr ("Not all elements could be created.\n");
  return -1;
}

/* Set element properties */
g_object_set (data.rtspsrc, "location", RTSP_URI,
    "latency", RTSP_LATENCY,
    "buffer-mode", RTSP_BUFFER_MODE,
    "rtp-blocksize", RTSP_RTP_BLOCKSIZE,
    NULL);
g_object_set (data.rtph264depay, "byte-stream", FALSE, NULL);
g_object_set (data.nv_omx_h264dec, "use-timestamps", TRUE, NULL);

/* Configure appsink. This plugin will allow us to access buffer data */
GstCaps *appsink_caps;
appsink_caps = gst_caps_from_string (APPSINK_CAPS);
g_object_set (data.appsink, "emit-signals", TRUE,
    "caps", appsink_caps,
    NULL);
g_signal_connect (data.appsink, "new-buffer", G_CALLBACK (appsink_new_buffer), &data);
gst_caps_unref (appsink_caps);

/* Create the empty pipeline */
data.pipeline = gst_pipeline_new ("test-pipeline");
if (!data.pipeline) {
  g_printerr ("Pipeline could not be created.\n");
  return -1;
}

/* Build the pipeline */
/* Note that we are NOT linking the source at this point. We will do it later. */
gst_bin_add_many (GST_BIN (data.pipeline),
    data.rtspsrc,
    data.rtph264depay,
    data.nv_omx_h264dec,
    data.appsink,
    NULL);
if (gst_element_link (data.rtph264depay, data.nv_omx_h264dec) != TRUE) {
  g_printerr ("rtph264depay and nv_omx_h264dec could not be linked.\n");
  gst_object_unref (data.pipeline);
  return -1;
}
if (gst_element_link (data.nv_omx_h264dec, data.appsink) != TRUE) {
  g_printerr ("nv_omx_h264dec and appsink could not be linked.\n");
  gst_object_unref (data.pipeline);
  return -1;
}

/* Connect to the pad-added signal (CALLBACK!) */
g_signal_connect (data.rtspsrc, "pad-added", G_CALLBACK (pad_added_handler), &data);

/* Add a probe to perform hashing on the H.264 bytestream */
GstPad *rtph264depay_src_pad = gst_element_get_static_pad (data.rtph264depay, "src");
gst_pad_add_buffer_probe (rtph264depay_src_pad, G_CALLBACK (hash_and_report), (gpointer) &data);
gst_object_unref (rtph264depay_src_pad); // unreference the source pad

/* Start playing */
ret = gst_element_set_state (data.pipeline, GST_STATE_PLAYING);
if (ret == GST_STATE_CHANGE_FAILURE) {
  g_printerr ("Unable to set the pipeline to the playing state.\n");
  gst_object_unref (data.pipeline);
  return -1;
}
/* Wait until error or EOS */
bus = gst_element_get_bus (data.pipeline);
do {
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      (GstMessageType) (GST_MESSAGE_STATE_CHANGED | GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

  /* Parse message */
  if (msg != NULL) {
    GError *err;
    gchar *debug_info;
    switch (GST_MESSAGE_TYPE (msg)) {
      case GST_MESSAGE_ERROR:
        gst_message_parse_error (msg, &err, &debug_info);
        g_printerr ("Error received from element %s: %s\n", GST_OBJECT_NAME (msg->src), err->message);
        g_printerr ("Debugging information: %s\n", debug_info ? debug_info : "none");
        g_clear_error (&err);
        g_free (debug_info);
        terminate = TRUE;
        break;
      case GST_MESSAGE_EOS:
        g_print ("End-Of-Stream reached.\n");
        break;
      case GST_MESSAGE_STATE_CHANGED:
        /* We are only interested in state-changed messages from the pipeline */
        if (GST_MESSAGE_SRC (msg) == GST_OBJECT (data.pipeline)) {
          GstState old_state, new_state, pending_state;
          gst_message_parse_state_changed (msg, &old_state, &new_state, &pending_state);
          g_print ("Pipeline state changed from %s to %s:\n",
              gst_element_state_get_name (old_state), gst_element_state_get_name (new_state));
        }
        break;
      default:
        /* we should not reach here because we only asked for ERRORs, EOS and state changes */
        g_printerr ("Unexpected message received.\n");
        break;
    }
    gst_message_unref (msg);
  }
} while (!terminate);
Now the pad_added_handler:
/* This function will be called by the pad-added signal */
// Thread 1
static void pad_added_handler (GstElement *src, GstPad *new_pad, CustomData *data) {
  GstPad *sink_pad = gst_element_get_static_pad (data->rtph264depay, "sink");
  GstPadLinkReturn ret;
  GstCaps *new_pad_caps = NULL;
  GstStructure *new_pad_struct = NULL;
  const gchar *new_pad_type = NULL;

  g_print ("Received new pad '%s' from '%s':\n", GST_PAD_NAME (new_pad), GST_ELEMENT_NAME (src));

  /* Check the new pad's type */
  new_pad_caps = gst_pad_get_caps (new_pad);
  new_pad_struct = gst_caps_get_structure (new_pad_caps, 0);
  new_pad_type = gst_structure_get_name (new_pad_struct);
  if (!g_str_has_prefix (new_pad_type, "application/x-rtp")) {
    g_print ("  It has type '%s' which is not RTP. Ignoring.\n", new_pad_type);
    goto exit;
  }

  /* If our converter is already linked, we have nothing to do here */
  if (gst_pad_is_linked (sink_pad)) {
    g_print ("  We are already linked. Ignoring.\n");
    goto exit;
  }

  /* Attempt the link */
  ret = gst_pad_link (new_pad, sink_pad);
  if (GST_PAD_LINK_FAILED (ret)) {
    g_print ("  Type is '%s' but link failed.\n", new_pad_type);
  } else {
    g_print ("  Link succeeded (type '%s').\n", new_pad_type);
  }

exit:
  /* Unreference the new pad's caps, if we got them */
  if (new_pad_caps != NULL)
    gst_caps_unref (new_pad_caps);

  /* Unreference the sink pad */
  gst_object_unref (sink_pad);
}
And now the handler that is called every time the appsink receives a buffer. This is the function that I believe (though I am not certain) is not being called in real time, leading me to think some processing I am doing takes too much time before another buffer can be processed:
// Called when appsink receives a buffer: Thread 1
void appsink_new_buffer (GstElement *sink, CustomData *data) {
  GstBuffer *buffer;

  /* Retrieve the buffer */
  g_signal_emit_by_name (sink, "pull-buffer", &buffer);
  if (buffer) {
    (((CustomData*)data)->appsink_buffer_count)++;

    // push buffer onto queue, to be processed in a different thread
    if (GstBufferQueue->size() > GSTBUFFERQUEUE_SIZE) {
      // error message
      printf ("GstBufferQueue is full!\n");
      // release buffer
      gst_buffer_unref (buffer);
    } else {
      // push onto queue
      GstBufferQueue->push(buffer);
      // activate thread
      connectionDataAvailable_GstBufferQueue.notify_all();
    }
  }
}
A link to the camera I am using:
http://www.axis.com/products/cam_p1357/index.htm
Hope this helps. I will continue to investigate this problem myself and provide updates as they come. Let me know if you need any other information and I look forward to reading your responses!
Thanks
So apparently the problem was not the program (i.e. the software design), but rather that the hardware on the CARMA board could not keep up with the amount of processing I was doing. In other words, the Tegra 3 processor on the CARMA was insufficient for the task. Possible solutions are to cut down the processing I am doing on the CARMA board or to upgrade to a different board. I hope this helps people appreciate both the limited processing power available on smaller devices, and that processors (specifically those, like the Tegra 3, that implement the system-on-a-chip model) may not currently have the computational power required to keep up with projects or systems that demand large real-time calculations.
To put it short: be careful what you buy! Do your best to ensure that what you are purchasing is right for the project. That being said, don't be scared to try new devices. Despite not being able to do what I wanted, I learned more than I could have ever expected. After all, computer science is just continuous learning :p

Pushing images into a gstreamer pipeline

After playing around with some toy applications, exploring the documentation, and googling around (including the mailing list archives), I am still puzzled by what I would think is a rather common use case.
I have existing code that generates images (in memory), and I would like to push these images into a GStreamer pipeline (to create an FLV video at the end).
I could not find an "obvious way to do it". My best guess would be to dig into the source code of GstMultiFileSrc and its parent GstPushSrc to figure it out.
Could any of you point me to the "obvious way" of doing this?
Is there any related piece of documentation/tutorial/example on this?
Once I have the input right, the rest is a piece of cake, thanks to GStreamer's awesomeness!
(something like "my magic input -> ffmpegcolorspace ! ffenc_flv ! flvmux ! filesink location=desktop.flv")
Thanks for your answers.
GStreamer uses plugins to do everything. Plugins that create data or take it from an external source are called "src" plugins.
The generic src plugin for injecting application-generated data into a pipeline is called appsrc. The API provided by appsrc is documented as part of the App Library.
Here's one example that demonstrates feeding appsrc with generated images: gdk-gstappsrc-stream.c. It seems to be derived from some test code in the GStreamer source tree: here.
Another approach would be to create your own src plugin. Look at the goom music visualization plugin for an example that seems to work in a way similar to what you have specified.
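To make the appsrc route concrete, here is a minimal sketch (mine, not from the answer) in the GStreamer 0.10 idiom used elsewhere in this thread. It pushes one generated 24-bit RGB frame into an appsrc that has already been given matching video/x-raw-rgb caps; the function name and the solid-gray fill are placeholders, only the push mechanics matter:

#include <string.h>
#include <gst/gst.h>

/* Sketch: push one application-generated RGB frame into appsrc. */
static void push_one_frame (GstElement *appsrc, gint width, gint height)
{
  GstFlowReturn ret;
  guint size = width * height * 3;   /* 24-bit RGB */

  GstBuffer *buffer = gst_buffer_new_and_alloc (size);
  /* fill with your image data; solid gray here as a stand-in */
  memset (GST_BUFFER_DATA (buffer), 0x80, size);

  /* hand the buffer to appsrc; it takes its own reference */
  g_signal_emit_by_name (appsrc, "push-buffer", buffer, &ret);
  gst_buffer_unref (buffer);

  if (ret != GST_FLOW_OK)
    g_warning ("push-buffer failed: %d", ret);
}

Driving this from a need-data handler (as the code below does with g_idle_add) gives you a pull-style loop, which is generally easier than writing a full src plugin.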
I found a (maybe) solution to this (I get the images with OpenCV), but I get an error from the pipeline: ERROR from element mysource: Internal data flow error.
Debugging info: gstbasesrc.c(2574): gst_base_src_loop (): /GstPipeline:pipeline0/GstAppSrc:mysource:
streaming task paused, reason not-negotiated (-4)
This is the code:
typedef struct _App App;

struct _App {
  GstElement *pipeline;
  GstElement *appsrc;
  GMainLoop *loop;
  guint sourceid;
  GTimer *timer;
};

App s_app;
CvCapture *capture;

static gboolean read_data (App *app) {
  GstFlowReturn ret;
  GstBuffer *buffer = gst_buffer_new();
  IplImage* frame = cvQueryFrame(capture);

  GST_BUFFER_DATA(buffer) = (uchar*)frame->imageData;
  GST_BUFFER_SIZE(buffer) = frame->width*frame->height*sizeof(uchar*);

  g_signal_emit_by_name(app->appsrc, "push-buffer", buffer, &ret);
  gst_buffer_unref(buffer);

  if (ret != GST_FLOW_OK) {
    GST_DEBUG("Error feeding buffer");
    return FALSE;
  }
  return TRUE;
}
static void start_feed (GstElement* pipeline, guint size, App* app) {
  if (app->sourceid == 0) {
    GST_DEBUG("Feeding");
    app->sourceid = g_idle_add((GSourceFunc) read_data, app);
  }
}

static void stop_feed (GstElement* pipeline, App* app) {
  if (app->sourceid != 0) {
    GST_DEBUG("Stop feeding");
    g_source_remove(app->sourceid);
    app->sourceid = 0;
  }
}
static gboolean
bus_message (GstBus * bus, GstMessage * message, App * app)
{
  GST_DEBUG ("got message %s",
      gst_message_type_get_name (GST_MESSAGE_TYPE (message)));

  switch (GST_MESSAGE_TYPE (message)) {
    case GST_MESSAGE_ERROR: {
      GError *err = NULL;
      gchar *dbg_info = NULL;
      gst_message_parse_error (message, &err, &dbg_info);
      g_printerr ("ERROR from element %s: %s\n",
          GST_OBJECT_NAME (message->src), err->message);
      g_printerr ("Debugging info: %s\n", (dbg_info) ? dbg_info : "none");
      g_error_free (err);
      g_free (dbg_info);
      g_main_loop_quit (app->loop);
      break;
    }
    case GST_MESSAGE_EOS:
      g_main_loop_quit (app->loop);
      break;
    default:
      break;
  }
  return TRUE;
}
int main (int argc, char* argv[]) {
  App *app = &s_app;
  GError *error = NULL;
  GstBus *bus;
  GstCaps *caps;

  capture = cvCaptureFromCAM(0);
  gst_init (&argc, &argv);

  /* create a mainloop to get messages and to handle the idle handler that will
   * feed data to appsrc. */
  app->loop = g_main_loop_new (NULL, TRUE);
  app->timer = g_timer_new();

  app->pipeline = gst_parse_launch("appsrc name=mysource ! video/x-raw-rgb,width=640,height=480,bpp=24,depth=24 ! ffmpegcolorspace ! videoscale method=1 ! theoraenc bitrate=150 ! tcpserversink host=127.0.0.1 port=5000", NULL);
  g_assert (app->pipeline);

  bus = gst_pipeline_get_bus (GST_PIPELINE (app->pipeline));
  g_assert (bus);

  /* add watch for messages */
  gst_bus_add_watch (bus, (GstBusFunc) bus_message, app);

  /* get the appsrc */
  app->appsrc = gst_bin_get_by_name (GST_BIN (app->pipeline), "mysource");
  g_assert (app->appsrc);
  g_assert (GST_IS_APP_SRC (app->appsrc));
  g_signal_connect (app->appsrc, "need-data", G_CALLBACK (start_feed), app);
  g_signal_connect (app->appsrc, "enough-data", G_CALLBACK (stop_feed), app);

  /* set the caps on the source */
  caps = gst_caps_new_simple ("video/x-raw-rgb",
      "bpp", G_TYPE_INT, 24,
      "depth", G_TYPE_INT, 24,
      "width", G_TYPE_INT, 640,
      "height", G_TYPE_INT, 480,
      NULL);
  gst_app_src_set_caps (GST_APP_SRC (app->appsrc), caps);

  /* go to playing and wait in a mainloop. */
  gst_element_set_state (app->pipeline, GST_STATE_PLAYING);

  /* this mainloop is stopped when we receive an error or EOS */
  g_main_loop_run (app->loop);

  GST_DEBUG ("stopping");
  gst_element_set_state (app->pipeline, GST_STATE_NULL);
  gst_object_unref (bus);
  g_main_loop_unref (app->loop);
  cvReleaseCapture (&capture);
  return 0;
}
Any idea???
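A possible explanation (my reading of the code, not a confirmed answer): not-negotiated usually means the caps don't fully describe the data flowing through. Here GST_BUFFER_SIZE is set to frame->width*frame->height*sizeof(uchar*), i.e. 8 bytes per pixel on a 64-bit machine, while the caps promise 24-bit RGB (3 bytes per pixel, so 640*480*3 = 921,600 bytes per frame); in 0.10, video/x-raw-rgb caps also typically need endianness, the red/green/blue masks, and a framerate before they are considered fixed. In addition, cvQueryFrame reuses its internal buffer, so the data should be copied into the GstBuffer rather than aliased. A corrected read_data sketch under those assumptions:

static gboolean read_data (App *app) {
  GstFlowReturn ret;
  IplImage *frame = cvQueryFrame (capture);
  /* 24-bit pixels are 3 bytes each, not sizeof(uchar*) */
  guint size = frame->width * frame->height * 3;

  /* copy the frame: cvQueryFrame reuses its internal buffer */
  GstBuffer *buffer = gst_buffer_new_and_alloc (size);
  memcpy (GST_BUFFER_DATA (buffer), frame->imageData, size);

  g_signal_emit_by_name (app->appsrc, "push-buffer", buffer, &ret);
  gst_buffer_unref (buffer);

  return (ret == GST_FLOW_OK);
}

Note also that OpenCV delivers BGR rather than RGB, so the masks in the caps (or a channel swap) would have to account for that.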
You might try hacking imagefreeze to do what you want. appsrc might also do it.