Suppose I have an application written with GStreamer that has a pipeline for processing video data. For simplicity let's assume it looks like this:
appsrc->identity->appsink
While an application with this hardcoded pipeline provides some functionality, I can imagine that users of my application might want to replace the identity element with arbitrarily complex pipelines (still exposing an interface with one sink and one source with defined capabilities). Does GStreamer provide any functionality that would allow injecting whole pipelines into my application? It would be great if the pipeline could be defined with gst-launch syntax, but C code is also fine. Or do I need to resort to some generic mechanism for writing plugins?
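One mechanism that appears to fit, sketched minimally here (the description string and function name are placeholders, not from the question): gst_parse_bin_from_description() builds a single bin, with ghost pads, from a gst-launch-style string, and that bin can then be linked between appsrc and appsink like any ordinary element.

    #include <gst/gst.h>

    // Minimal sketch: splice a user-supplied gst-launch-style description
    // between appsrc and appsink. Assumes gst_init() was already called.
    static GstElement *build_pipeline(const char *user_description) {
        GError *error = NULL;
        // ghost_unlinked_pads=TRUE makes the bin expose ghost pads for its
        // unlinked sink/source pads, so it links like a single element.
        GstElement *user_bin =
            gst_parse_bin_from_description(user_description, TRUE, &error);
        if (!user_bin) {
            g_printerr("Invalid description: %s\n", error->message);
            g_clear_error(&error);
            return NULL;
        }

        GstElement *pipeline = gst_pipeline_new("pipeline");
        GstElement *src  = gst_element_factory_make("appsrc",  "src");
        GstElement *sink = gst_element_factory_make("appsink", "sink");
        gst_bin_add_many(GST_BIN(pipeline), src, user_bin, sink, NULL);
        gst_element_link_many(src, user_bin, sink, NULL);
        return pipeline;
    }

    // e.g. build_pipeline("videoconvert ! videoflip method=clockwise")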
In my code, I currently have a pipeline description as a string and use gst_parse_launch(...) to build the pipeline; everything is working great.
However, now I am interested in setting some properties on one of the elements in the pipeline, specifically the pipeline's sink element (in my case autovideosink). I would like to set the enable-last-sample property, but autovideosink doesn't have that property. Thus my question is: how can I determine which video sink autovideosink has resolved to, so that I can set this property?
My application is written in C++.
One way to find out what it resolved to is to use the awesome pipeline graph debug feature. For example:
GST_DEBUG_BIN_TO_DOT_FILE(yourPipeline, GST_DEBUG_GRAPH_SHOW_ALL, file_name)
See GST_DEBUG_BIN_TO_DOT_FILE for details.
You can then render that graphviz graph and inspect your pipeline (including all bin-children).
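Note that the macro only writes a file if the GST_DEBUG_DUMP_DOT_DIR environment variable is set before the program starts; a small sketch of the usual flow (names are placeholders):

    // Dump the pipeline graph, e.g. once the pipeline has reached PLAYING.
    // Run with: GST_DEBUG_DUMP_DOT_DIR=/tmp ./your-app
    GST_DEBUG_BIN_TO_DOT_FILE(GST_BIN(pipeline), GST_DEBUG_GRAPH_SHOW_ALL,
                              "pipeline-playing");
    // Then render the resulting /tmp/pipeline-playing.dot with graphviz:
    //   dot -Tpng /tmp/pipeline-playing.dot -o pipeline.png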
autovideosink implements the GstChildProxy interface:
https://gstreamer.freedesktop.org/documentation/gstreamer/gstchildproxy.html?gi-language=c
You should be able to set things directly via this interface, or hook into its callbacks when a new child is added.
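For example (a minimal sketch, assuming GStreamer 1.x): connect to the child-added signal that GstChildProxy provides, and set the property once the concrete sink appears, guarding against children that lack it:

    #include <gst/gst.h>

    // Called whenever autovideosink adds its resolved child sink.
    static void on_child_added(GstChildProxy *proxy, GObject *child,
                               gchar *name, gpointer user_data) {
        // Only set the property if this child actually has it.
        if (g_object_class_find_property(G_OBJECT_GET_CLASS(child),
                                         "enable-last-sample"))
            g_object_set(child, "enable-last-sample", FALSE, NULL);
    }

    // After creating (or retrieving) the sink:
    GstElement *sink = gst_element_factory_make("autovideosink", "sink");
    g_signal_connect(sink, "child-added", G_CALLBACK(on_child_added), NULL);

If the pipeline comes from gst_parse_launch(), the autovideosink can first be retrieved with gst_bin_get_by_name(), provided it was given a name in the description string.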
I would like to write a GStreamer pipeline that mixes the audio from two sources. I then want to be able to select it as the audio source in an app on my computer (e.g. Discord), such that the mixed audio plays as if it were coming from my mic.
Getting the mixing right seems simple enough, but it looks like I need something like Virtual Audio Cable to achieve the second part. Is there a way to do this entirely in GStreamer, or with something more lightweight than installing Virtual Audio Cable?
There is no Windows API (yet) for creating a virtual audio endpoint that applications could interact with as if it were a real device. Consequently, there is no GStreamer wrapper over this non-existent API either.
Doing it without Virtual Audio Cable would still require installing an audio driver in which you provide your own endpoint for Windows to expose to applications.
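The mixing half, at least, works entirely in GStreamer; a hedged sketch using audiomixer (the sources here, a default WASAPI capture device plus a test tone, are placeholders, and the output goes to the local speakers, not to a virtual mic):

    rem One command; ^ continues a line in cmd.exe.
    gst-launch-1.0 audiomixer name=mix ! audioconvert ! audioresample ! autoaudiosink ^
        wasapisrc ! audioconvert ! mix. ^
        audiotestsrc wave=sine freq=440 ! audioconvert ! mix.

Routing the result into something Discord can pick as a microphone is exactly the part that needs the driver described above.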
I am trying to write a pro music/audio processing application, and I would like to be able to interact with the audio inputs/outputs at a very low level - ideally something allowing me to apply effects to the audio inputs and output this in real-time, similar to programs like Logic, Ableton etc.
I have written a pretty basic program that detects audio endpoint devices and can change their volumes using the MMDevice interface, but this is nowhere near the functionality I would like.
I have learned from the Microsoft docs that the four core-audio APIs are:
MMDevice
WASAPI
DeviceTopology
EndpointVolume
but it doesn't seem like any of these have the capabilities that I need. I'm thinking that I will need to be able to interact with the speakers at the level of setting the position of the membrane at a given time.
Is this even possible? If so, what can I use to do this?
The Windows Audio Session API (WASAPI) is the best bet for this purpose. It allows interaction with audio endpoints and setting up audio streams (which are streams of data that you can send or receive in real time). A good example is here.
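A minimal shared-mode render sketch (error handling and the actual sample writing are omitted; every call returns an HRESULT that real code must check):

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audioclient.h>
    #pragma comment(lib, "ole32.lib")

    int main() {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);

        // Find the default render endpoint (the same devices MMDevice enumerates).
        IMMDeviceEnumerator *enumerator = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void **)&enumerator);
        IMMDevice *device = nullptr;
        enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

        // Open an audio client on it and ask for the engine's mix format.
        IAudioClient *client = nullptr;
        device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr,
                         (void **)&client);
        WAVEFORMATEX *format = nullptr;
        client->GetMixFormat(&format);

        // 10 ms shared-mode buffer (duration is in 100 ns units); exclusive
        // mode and event-driven buffers are the route to lower latency.
        client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0, 100000, 0, format, nullptr);

        IAudioRenderClient *render = nullptr;
        client->GetService(__uuidof(IAudioRenderClient), (void **)&render);

        UINT32 frames = 0;
        client->GetBufferSize(&frames);
        BYTE *data = nullptr;
        render->GetBuffer(frames, &data);
        // ... write `frames` frames of audio in `format` into `data` here ...
        render->ReleaseBuffer(frames, AUDCLNT_BUFFERFLAGS_SILENT);
        client->Start();
        // Real code loops: GetCurrentPadding() / GetBuffer() / ReleaseBuffer().
        return 0;
    }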
I would like to implement a GStreamer pipeline for video streaming without using a v4l2 driver in Linux. The thing is, the video frames are already in RAM (the VDMA core, which is configured by a different OS on a different core, takes care of that). I also had difficulties debugging some DMA slave errors, which always appeared after a DMA completion callback.
Therefore I would be happy if I did not have to use a v4l2 driver in order to run GStreamer on top.
I have found this plugin from Bosch that fits my case:
https://github.com/igel-oss/v4l-gst
My question is whether somebody has experience with this approach and whether it is feasible.
Another question is how to configure the source in the GStreamer pipeline, as it is not a /dev/videoXXX device but rather a memory location or even a BMP file.
Thanks, Mihaita
You could use appsrc and repeatedly call gst_app_src_push_buffer(). Your application then has full freedom to read the video data from anywhere it likes: memory, files, etc. See also the relevant section of the GStreamer Application Development Manual.
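A minimal sketch of the push side, assuming the appsrc caps have already been configured to describe the raw frames:

    #include <gst/gst.h>
    #include <gst/app/gstappsrc.h>

    // Push one frame that already sits in RAM into the pipeline.
    static void push_frame(GstAppSrc *appsrc, const guint8 *frame, gsize size) {
        GstBuffer *buf = gst_buffer_new_allocate(NULL, size, NULL);
        gst_buffer_fill(buf, 0, frame, size);
        // push_buffer takes ownership of buf and returns GST_FLOW_OK on success.
        GstFlowReturn ret = gst_app_src_push_buffer(appsrc, buf);
        if (ret != GST_FLOW_OK)
            g_printerr("push failed: %s\n", gst_flow_get_name(ret));
    }

To avoid the copy, gst_buffer_new_wrapped_full() can wrap the existing memory instead, as long as its lifetime is managed through the supplied destroy notify.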
If you want more flexibility, like using the video source in several applications, you should consider implementing your own custom GStreamer element.
I am working on a project where I need to read a USB camera's input, apply some effects to it, and then send that data to a virtual camera so it can be accessed by Skype etc.
I have compiled and used the vcam filter, and I was also able to make a few changes in the FillBuffer method. I now need to know whether it is possible to send data to the vcam filter from another application, or whether I need to write another filter.
The vcam project you currently have as a template is the interface to other video-consuming applications like Skype, that is, applications which use the DirectShow API to access video capture devices and which match your filter in platform/bitness.
You are responsible for developing the rest of the filter: either you access the real device right in your filter (which simplifies the task greatly; this is the code you put into FillBuffer to generate video from another source), or you implement interprocess communication so that your FillBuffer implementation can transfer data from another application.
Neither vcam nor any of the standard DirectShow samples offers functionality covering interprocess communication, and you might also need to deal with other complications: one application but multiple filter instances consuming video, platform mismatch, etc.
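A hedged sketch of the interprocess-communication variant, assuming a Vivek-style CSourceStream descendant (the class name CVCamStream, the section name, the frame layout, and the missing synchronization, e.g. a named mutex, are all placeholders): the producing application creates a named shared-memory section with CreateFileMappingW() and writes frames into it, and FillBuffer copies the latest frame out.

    #include <windows.h>
    #include <cstring>

    static BYTE *g_sharedFrame = nullptr;

    // Map the producer's section once (read side; the producer created it).
    static void OpenSharedFrame(SIZE_T frameSize) {
        HANDLE mapping = OpenFileMappingW(FILE_MAP_READ, FALSE,
                                          L"Local\\MyVcamFrame");
        if (mapping)
            g_sharedFrame = (BYTE *)MapViewOfFile(mapping, FILE_MAP_READ,
                                                  0, 0, frameSize);
    }

    HRESULT CVCamStream::FillBuffer(IMediaSample *pSample) {
        BYTE *dst = nullptr;
        pSample->GetPointer(&dst);
        long size = pSample->GetSize();
        if (g_sharedFrame)
            memcpy(dst, g_sharedFrame, size);  // latest frame from the producer
        else
            memset(dst, 0, size);              // no producer yet: emit black
        return S_OK;
    }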
See also:
How to implement a "source filter" for splitting camera video based on Vivek's vcam?