Creating a new GstElement object (tee) - GStreamer

I have created a new GstElement object for a tee. The code is below:
GstElement *teeElement = gst_element_factory_make ("tee", "camera_tee");
But the GstElement is not created at all. What could be the reason for this?
What library and/or header file has to be included for this purpose?
Regards,
iSight

I had to initialize GStreamer using gst_init() before creating any elements. The header to include is <gst/gst.h>, and the program links against the GStreamer core library (pkg-config package gstreamer-1.0, or gstreamer-0.10 on older installs).
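A minimal sketch of the fix (the tee element ships with GStreamer's core elements):

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  /* gst_init() must run before any element can be created */
  gst_init (&argc, &argv);

  GstElement *teeElement = gst_element_factory_make ("tee", "camera_tee");
  if (teeElement == NULL) {
    g_printerr ("Failed to create the tee element\n");
    return -1;
  }

  gst_object_unref (teeElement);
  return 0;
}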

Related

How to generate fragmented mp4 files programmatically

I have an .h264 live streaming media server, and I want to mux .h264 frames into fragmented mp4 files. I am wondering whether any library supports this.
As far as I know, ffmpeg.exe and Bento4 can do this, but I want to use a library inside my code, not execute another process.
To specify my point, I want to generate fragmented mp4 files, which could be achieved by executing ffmpeg.exe like below,
ffmpeg -i xx.h264 \
  -vcodec copy -an -f mp4 -reset_timestamps 0 \
  -movflags empty_moov+default_base_moof+frag_keyframe -loglevel quiet \
  xxx.mp4
I want to mux mp4 files in my code, not create another process to do it.
Thanks.
In more detail, using libavformat (the library behind ffmpeg):
AVDictionary* opts = NULL;
av_dict_set(&opts, "movflags", "frag_keyframe+empty_moov", 0);
and then later:
//init muxer, write output file header
avformat_write_header(formatContext, &opts);
where formatContext is a pointer to the AVFormatContext obtained when the output file is opened using the avformat_alloc_output_context2 and avio_open functions.
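A minimal sketch of how those calls fit together (error checking omitted; the function name and file name are illustrative, and the codecpar field assumes a reasonably recent FFmpeg, 3.1+):

extern "C" {
#include <libavformat/avformat.h>
}

void open_fragmented_mp4 (const char *filename)
{
    AVFormatContext *formatContext = NULL;
    avformat_alloc_output_context2(&formatContext, NULL, "mp4", filename);
    avio_open(&formatContext->pb, filename, AVIO_FLAG_WRITE);

    // One video stream; fill the remaining codec parameters
    // (width, height, extradata with SPS/PPS) from your H.264 source
    AVStream *stream = avformat_new_stream(formatContext, NULL);
    stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    stream->codecpar->codec_id   = AV_CODEC_ID_H264;

    // The movflags option is what makes the output fragmented
    AVDictionary *opts = NULL;
    av_dict_set(&opts, "movflags", "frag_keyframe+empty_moov", 0);
    avformat_write_header(formatContext, &opts);
    av_dict_free(&opts);

    // For each incoming H.264 frame: wrap it in an AVPacket with
    // pkt.stream_index = stream->index and call
    // av_interleaved_write_frame(formatContext, &pkt);

    // When the stream ends:
    av_write_trailer(formatContext);
    avio_closep(&formatContext->pb);
    avformat_free_context(formatContext);
}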

GStreamer reading camera resolution

I'm trying to read the resolutions supported by a camera using GStreamer and the camerabin2 plugin. The problem is that I'm getting NULL.
#include <gst/gst.h>
#include <stdio.h>

#define gstRef(element) { gst_object_ref(GST_OBJECT(element)); gst_object_sink(GST_OBJECT(element)); }

int main(int argc, char *argv[]) {
    gst_init (&argc, &argv);
    GstElement *m_camerabin = gst_element_factory_make("camerabin2", "camerabin2");
    gstRef(m_camerabin);
    GstCaps *supportedCaps = 0;
    g_object_get(G_OBJECT(m_camerabin), "image-capture-supported-caps",
                 &supportedCaps, NULL);
    char *c = gst_caps_to_string(supportedCaps);
    printf("%s\n", c);
    return 0;
}
Is there a better way to get the supported resolutions? Should I use a different plugin?
Thanks.
I haven't used this element, but in GStreamer the resolutions normally won't be available to your code until the element is placed in a pipeline and the pipeline is "played". Then the elements are activated, connect to each other, and make this information available.
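Something along these lines (an untested sketch continuing the code above; whether READY is enough or PLAYING is required depends on the element):

/* Activate the element so that its caps become available */
gst_element_set_state (m_camerabin, GST_STATE_PLAYING);
/* Block until the state change has actually completed */
gst_element_get_state (m_camerabin, NULL, NULL, GST_CLOCK_TIME_NONE);

g_object_get (G_OBJECT (m_camerabin),
              "image-capture-supported-caps", &supportedCaps, NULL);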
Hate to link and run, but you may want to start here.
https://gitorious.org/gstreamer-camerabin2/gst-plugins-bad/source/28540988b25f493274762d394c55a4beded5e428:tests/examples/camerabin2
I haven't used camerabin2, but I strongly suggest using GstDeviceMonitor. With GstDeviceMonitor you can access every device connected to the PC: not only microphones and speakers but also cameras. Furthermore, you can access all the information about the camera devices, such as resolution, supported formats, fps, etc.
You will use:
GList* devices = gst_device_monitor_get_devices(mMonitor);
Then you need to extract the information from the GList*. I cannot share the whole code because of company policies, so I'll just give you the clue; see the sketch below.
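A minimal sketch of that extraction (variable names are illustrative; GstDeviceMonitor requires GStreamer 1.4 or later):

GstDeviceMonitor *mMonitor = gst_device_monitor_new ();
/* Only report video capture devices (cameras) */
gst_device_monitor_add_filter (mMonitor, "Video/Source", NULL);
gst_device_monitor_start (mMonitor);

GList *devices = gst_device_monitor_get_devices (mMonitor);
for (GList *it = devices; it != NULL; it = it->next) {
  GstDevice *device = GST_DEVICE (it->data);
  gchar *name = gst_device_get_display_name (device);
  GstCaps *caps = gst_device_get_caps (device); /* resolutions, formats, fps */
  gchar *desc = gst_caps_to_string (caps);
  g_print ("%s: %s\n", name, desc);
  g_free (desc);
  gst_caps_unref (caps);
  g_free (name);
}
g_list_free_full (devices, gst_object_unref);
gst_device_monitor_stop (mMonitor);
gst_object_unref (mMonitor);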
Suggested reference for the GstDeviceMonitor code:
https://gstreamer.freedesktop.org/documentation/gstreamer/gstdevicemonitor.html?gi-language=c

Initializing an openni::VideoStream object without Kinect plugged in

I'm using OpenNI2 to capture Kinect depth data.
In order to initialize m_depth, I have to use some methods of the openni::VideoStream class, like this:
openni::VideoStream m_depth;
openni::Device device;
const char* device_uri;
openni::Status ret;
device_uri = openni::ANY_DEVICE;
ret = openni::STATUS_OK;
ret = openni::OpenNI::initialize();
ret = device.open(device_uri);
ret = m_depth.create(device, openni::SENSOR_DEPTH);
The problem is that I want to initialize the m_depth object without the Kinect plugged in. Of course I can't, because the methods of this class, like m_depth.create, don't work without a device.
Is there a way to do that?
You can try using an .ONI file (a dummy one could work) to initialize it.
Quoting the OpenNI2 documentation:
Later, this file can be used to initialize a file Device, and used to
play back the same data that was recorded.
Opening a file device is done by passing its path as the uri to the
Device::open() method
So, you can change this line
device_uri = openni::ANY_DEVICE;
to the path of the dummy ONI file...
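For example (the path here is just a placeholder for your own recording):

device_uri = "/path/to/recording.oni"; // a previously recorded (dummy) ONI file
ret = device.open(device_uri);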
I don't think there is another way in OpenNI2 to create a depth stream, and it actually doesn't make sense to create a stream without a camera, unless you want to use the coordinate converter class...
In OpenNI 1.x you can try using mock depth (though I didn't manage to make it work correctly).

VP8-DirectShowFilter: QueryInterface results in E_NOINTERFACE (C++)

I am new to DirectShow and C++. I am trying to capture video from a source and encode it with VP8. To accomplish this I'm using the DirectShow filters from https://code.google.com/p/webm/downloads/list
My Filtergraph is working and consists of these four filters:
recorder -> WebM VP8 Encoder Filter -> WebM Muxer Filter -> FileWriter
The problem is that I need to change the properties of the VP8 Encoder filter. With GraphEdit I can change, for example, the target bitrate, but I don't know how to do this programmatically in C++ (I don't want to use the property page).
I also downloaded the source code and included the file vp8encoder\vp8encoderfilter.hpp. This led to the problem that I needed to include the vp8encoderidl.h file. At first I did not find this file in the source folder, so I downloaded it from somewhere on the internet. Later I saw the IDL folder containing a vp8encoder.idl file, which I added to my project, compiled, and included the resulting vp8encoder_h.h file. In both cases (with the code from the internet or from the generated header) I can compile my project and record video. So I tried to get the IVP8Encoder interface from the DirectShow filter:
// Instantiate encoder filter
hr = CoCreateInstance(__uuidof(IVP8Encoder), NULL, CLSCTX_INPROC_SERVER, IID_IBaseFilter, (void**)&pVideoEncoder);
//Get Interface
IVP8Encoder *iEncoder = NULL;
hr = pVideoEncoder->QueryInterface(__uuidof(IVP8Encoder), (void**)&iEncoder);
The QueryInterface() method returns E_NOINTERFACE. So I think the first parameter is probably not correct, but I have no idea which parameter is needed instead.
I appreciate your help and thanks in advance!
You instantiate classes (CLSID_VP8Encoder), and classes implement interfaces (IBaseFilter, IVP8Encoder). The first parameter of CoCreateInstance must therefore be a class ID, not an interface ID.
Your code should be:
IBaseFilter *pVideoEncoder = NULL;
hr = CoCreateInstance(CLSID_VP8Encoder, NULL, CLSCTX_INPROC_SERVER,
                      IID_IBaseFilter, (void**) &pVideoEncoder);

IVP8Encoder *iEncoder = NULL;
hr = pVideoEncoder->QueryInterface(__uuidof(IVP8Encoder), (void**) &iEncoder);
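With error checking and cleanup added, the same idea looks roughly like this (the encoder settings themselves are set through the methods declared in vp8encoder.idl):

IBaseFilter *pVideoEncoder = NULL;
HRESULT hr = CoCreateInstance(CLSID_VP8Encoder, NULL, CLSCTX_INPROC_SERVER,
                              IID_IBaseFilter, (void**) &pVideoEncoder);
if (SUCCEEDED(hr)) {
    IVP8Encoder *iEncoder = NULL;
    hr = pVideoEncoder->QueryInterface(__uuidof(IVP8Encoder), (void**) &iEncoder);
    if (SUCCEEDED(hr)) {
        // configure the encoder here, e.g. the target bitrate,
        // using the setters declared in vp8encoder.idl
        iEncoder->Release();
    }
    pVideoEncoder->Release();
}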

Gstreamer - using a variable for video path

I have a question concerning GStreamer and the path to the video (uri).
In order to test my code, I used to set the path to my video directly in the C++ source code, this way:
data.pipeline = gst_parse_launch ("playbin2 uri=file:///D:/video", NULL);
But now I am using a user interface (wxWidgets) to get the path of the video that the user wants to play. The path is now in a variable, m_txtVideoPath, and I don't know how to launch the video using this variable instead of D:/video.
Thanks in advance for your answer !
You have to construct the pipeline description with the user-defined filename, rather than hardcoding everything.
This is very basic string handling; you might want to consult a beginner's tutorial for your programming language of choice.
e.g.
std::string pipeline = "playbin2";
pipeline += " uri=file://" + m_txtVideoPath;
std::cout << "PIPELINE: " << pipeline << std::endl; // for debugging
data.pipeline = gst_parse_launch (pipeline.c_str(), NULL);
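If the rest of your code is plain C rather than C++, GLib's g_strdup_printf does the same job (a sketch; videoPath stands for the user-supplied path as a C string):

gchar *desc = g_strdup_printf ("playbin2 uri=file://%s", videoPath);
g_print ("PIPELINE: %s\n", desc); /* for debugging */
data.pipeline = gst_parse_launch (desc, NULL);
g_free (desc);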