I've found that the standard way to feed GStreamer data from another application is to launch GStreamer with
gst-launch-1.0 fdsrc ! ...
and push the data to GStreamer's stdin.
But it turns out the fdsrc plugin is missing from the Windows build of GStreamer. Is there an equivalent source element for Windows?
Or is there some other way to feed data to GStreamer on Windows? Something like named pipes, etc.
GStreamer has an element called 'appsrc' which can be used to feed data into pipelines from external applications.
Similarly, there is an 'appsink' element that can be used to output data from GStreamer pipelines to external applications.
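As a minimal sketch (GStreamer 1.x, C/C++; the fakesink placeholder and the pushed payload are just illustrations, replace them with your real downstream chain and data), pushing bytes through appsrc looks roughly like this:

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "appsrc name=src ! fakesink", &error);  /* replace fakesink with your chain */
  if (pipeline == NULL) {
    g_printerr ("Parse error: %s\n", error->message);
    return 1;
  }
  GstElement *src = gst_bin_get_by_name (GST_BIN (pipeline), "src");

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Push one buffer; a real application would do this repeatedly from
     whatever produces its data. */
  const char data[] = "example payload";
  GstBuffer *buf = gst_buffer_new_allocate (NULL, sizeof (data), NULL);
  gst_buffer_fill (buf, 0, data, sizeof (data));
  gst_app_src_push_buffer (GST_APP_SRC (src), buf);  /* takes ownership of buf */
  gst_app_src_end_of_stream (GST_APP_SRC (src));

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (src);
  gst_object_unref (pipeline);
  return 0;
}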
Context: I have an audio device running Mopidy, which outputs to a GStreamer pipeline. My device has an interface for an equalizer - for this I've set up my ALSA config to go through the ALSA equalizer, and the GStreamer pipeline targets this. The code that handles the interface uses the Python alsamixer bindings to apply the values.
This works, but the ALSA equalizer is a bit janky and has a very narrow range before it distorts the audio. GStreamer has an equalizer plugin which I think is better; I can implement this as per the example launch line:
gst-launch-1.0 filesrc location=song.ogg ! oggdemux ! vorbisdec ! audioconvert ! equalizer-10bands band2=3.0 ! alsasink
However, I want to be able to dynamically change the band0-band9 parameters while the stream is playing - either via Python or from the command line. I'm not sure which direction to look in - is this possible?
Properties of an element can be set via the g_object_set() function. Whether they can be changed on the fly or only while the pipeline is stopped depends on the element's implementation.
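As a minimal sketch (C/C++; the element name "eq", the gain value and the timing are assumptions for illustration), changing a band gain while the pipeline plays could look like this:

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "filesrc location=song.ogg ! oggdemux ! vorbisdec ! audioconvert ! "
      "equalizer-10bands name=eq ! alsasink", &error);
  if (pipeline == NULL) {
    g_printerr ("Parse error: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* After a couple of seconds, change band2 on the fly. */
  g_usleep (2 * G_USEC_PER_SEC);
  GstElement *eq = gst_bin_get_by_name (GST_BIN (pipeline), "eq");
  g_object_set (eq, "band2", 6.0, NULL);   /* gain in dB */
  gst_object_unref (eq);

  g_usleep (5 * G_USEC_PER_SEC);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}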
Hello everyone,
I am trying to implement low-latency video streaming using WebRTC. I write my code in C++ (WebSocket handling, etc.) and use only the WebRTC signalling server, which is written in Python (ref1).
When I use a webcam, I don't have any problem streaming video to the client; however, when I try to use a FLIR camera, I run into a lot of problems during implementation.
There are a few questions in my mind I'd like to clear up. I hope you can give me some recommendations.
Is there a specific data type that I should pipe into WebRTC as a source? I would just like to know what kind of data I should send as a source to WebRTC.
I tried sending a still image to check whether my WebRTC implementation works properly (without the webcam), and it gives me the error "Pipeline is empty". What can cause this problem? This is actually the main reason I would like to understand the data type etc. - so I know what exactly I should pipe into WebRTC.
ref1: https://github.com/centricular/gstwebrtc-demos/tree/master/signalling
P.S.:
The client and the Jetson Nano are on the same network.
The signalling server is running on the Jetson Nano.
Running gst-inspect-1.0 webrtcbin shows that the capabilities of both the source and sink pads of this element are just application/x-rtp.
Therefore, anything you feed into webrtcbin must already be RTP-payloaded, and if you want to consume webrtcbin's source pads you will need to pipe them into an RTP depayloader such as rtph264depay for video or rtpopusdepay for audio.
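As a minimal receive-side sketch (GStreamer 1.x, C/C++; the H.264 decode chain is an assumption - match the depayloader and decoder to your negotiated codec, and signalling is not shown), linking webrtcbin's dynamically created source pads into a depayloader could look like this:

#include <gst/gst.h>

/* Called when webrtcbin exposes a new src pad carrying application/x-rtp.
   We link it into a depayloader so downstream elements see raw H.264. */
static void
on_pad_added (GstElement *webrtc, GstPad *pad, GstElement *pipeline)
{
  if (GST_PAD_DIRECTION (pad) != GST_PAD_SRC)
    return;

  /* Hypothetical receive chain; adjust to the negotiated codec. */
  GstElement *depay = gst_element_factory_make ("rtph264depay", NULL);
  GstElement *dec   = gst_element_factory_make ("avdec_h264", NULL);
  GstElement *sink  = gst_element_factory_make ("autovideosink", NULL);

  gst_bin_add_many (GST_BIN (pipeline), depay, dec, sink, NULL);
  gst_element_link_many (depay, dec, sink, NULL);
  gst_element_sync_state_with_parent (depay);
  gst_element_sync_state_with_parent (dec);
  gst_element_sync_state_with_parent (sink);

  GstPad *sinkpad = gst_element_get_static_pad (depay, "sink");
  gst_pad_link (pad, sinkpad);
  gst_object_unref (sinkpad);
}

/* Usage:
   g_signal_connect (webrtcbin, "pad-added", G_CALLBACK (on_pad_added), pipeline); */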
I use GStreamer to stream my webcam over a wireless network.
I use an ARM board for streaming and receive on my PC.
I want to import the received video into Qt for use with OpenCV.
I stream the video using this command:
./capture -c 10000 -o | gst-launch-0.10 -v -e filesrc location=/dev/fd/0 ! h264parse ! rtph264pay ! tcpserversink host=127.0.0.1 port=8080
and for receiving:
gst-launch udpsrc port=1234 ! "application/x-rtp, payload=127" ! rtph264depay ! ffdec_h264 ! xvimagesink sync=false
What should I do to use the received video in Qt?
I want to use it for image processing.
You need to write your own application for receiving instead of using gst-launch. For that, refer to the documentation at gstreamer.freedesktop.org, especially the application development manual: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html
For using the video inside a Qt window you can use the GstXOverlay/GstVideoOverlay interface to tell xvimagesink where to draw the video (the window id).
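As a minimal sketch (C++/Qt, assuming a GStreamer 1.x build where the interface is GstVideoOverlay; the function name here is just for illustration), it boils down to one call:

#include <gst/gst.h>
#include <gst/video/videooverlay.h>
#include <QWidget>

/* Tell the video sink to render into an existing Qt widget. */
void attach_sink_to_widget (GstElement *videosink, QWidget *widget)
{
  /* winId() returns the native window handle (an X11 Window on Linux). */
  gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (videosink),
                                       (guintptr) widget->winId ());
}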
There is an opencv plugin in GStreamer that wraps a few functions/filters from OpenCV. If the function you're interested in isn't implemented, you can wrap it in a new GStreamer element. Or you can write a buffer probe to modify the buffers from your application and do the processing by calling OpenCV yourself; a sketch of this follows.
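A minimal buffer-probe sketch (GStreamer 1.x; where you attach the probe and what you do with the mapped data are up to your application):

#include <gst/gst.h>

/* Runs on the streaming thread for every buffer passing the pad. */
static GstPadProbeReturn
on_buffer (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstMapInfo map;

  if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
    /* map.data / map.size expose the raw frame; hand it to OpenCV here.
       For in-place modification you need a writable buffer mapped with
       GST_MAP_READWRITE instead. */
    gst_buffer_unmap (buf, &map);
  }
  return GST_PAD_PROBE_OK;
}

/* Usage: attach to the pad of interest, e.g. the video sink's sink pad. */
/* gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, on_buffer, NULL, NULL); */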
From your gst-launch lines I can see that you are using version 0.10. If possible you should consider moving to the 1.x versions, as 0.10 is obsolete and unmaintained. If you must stick to 0.10, pay attention when looking for the docs to make sure you are reading the correct documentation for your version.
The best solution is to use QtGstreamer to capture and stream videos with GStreamer in a Qt environment. The main advantage is that you can put your pipeline description into a string and the library will do the hard work for you, so you avoid coding all the pipeline components yourself. However, you will have to code your own sink to use the captured frames with OpenCV.
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/qt-gstreamer/html/index.html
I have been running a few experiments with GStreamer using the gst-launch utility. However, ultimately, the aim is to implement this same functionality in my own application using the GStreamer libraries.
The problem is that it's rather difficult (at least for someone not used to the GStreamer API) to "port" what I test on the command line to C/C++ code.
An example of a command that I may need to port is:
gst-launch filesrc location="CLIP8.mp4" ! decodebin2 ! jpegenc ! multifilesink location="test%d.jpg"
What's the most straightforward way/approach to take such a command and write it in C in my own app?
Also, as a side question: how could I replace the multifilesink and instead do this work in memory? (I'm using OpenCV to perform a few calculations on a given image that should be extracted from the video.) Is it possible to decode directly to memory and use it right away without first saving to the filesystem? It could (and should) be sequential - I mean it would only move on to the next frame after I'm done processing the current one, so that I wouldn't have to keep thousands of frames in memory.
What do you say?
I found the solution. There's a function built into GStreamer that parses gst-launch arguments and returns a pipeline. The function is called gst_parse_launch and is documented here: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstParse.html
I haven't tested it, but it's probably the fastest way to convert what has been tested on the command line to C/C++ code.
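As a minimal sketch (C/C++, GStreamer 1.x, so decodebin in place of 0.10's decodebin2; the added videoconvert is an assumption to help caps negotiation into jpegenc), the question's command ports roughly like this:

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "filesrc location=CLIP8.mp4 ! decodebin ! videoconvert ! jpegenc ! "
      "multifilesink location=test%d.jpg", &error);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Run until EOS or an error is posted on the bus. */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      (GstMessageType) (GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
  gst_message_unref (msg);
  gst_object_unref (bus);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}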
You could always pop open the source of gst-launch and grab the bits that parse out the command-line and turn it into a GStreamer pipeline.
That way you can just pass in the "command line" as a string, and the function will return a complete pipeline for you.
By the way, there is an interesting GStreamer element that provides a good way to integrate a processing pipeline into your (C/C++) application: appsink
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-libs/html/gst-plugins-base-libs-appsink.html
With this one you can basically retrieve the frames from the pipeline as a big C array and do whatever you want with them. You set up a callback function, which will be activated every time a new frame is available from the pipeline thread...
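A minimal sketch of such a callback (GStreamer 1.x; how you create the appsink and enable the signal are noted in the comments, the processing itself is left as an assumption):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

/* Fires on the streaming thread every time the appsink has a new frame. */
static GstFlowReturn
on_new_sample (GstAppSink *sink, gpointer user_data)
{
  GstSample *sample = gst_app_sink_pull_sample (sink);
  GstBuffer *buf = gst_sample_get_buffer (sample);
  GstMapInfo map;

  if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
    /* map.data / map.size expose the raw frame; hand it to OpenCV here. */
    gst_buffer_unmap (buf, &map);
  }
  gst_sample_unref (sample);
  return GST_FLOW_OK;
}

/* Usage: create the appsink with emit-signals enabled, e.g. in a parsed
   pipeline "... ! appsink name=sink emit-signals=true", then connect:
   g_signal_connect (sink, "new-sample", G_CALLBACK (on_new_sample), NULL); */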
I would like to create a GStreamer application for streaming video/audio over a wireless network. For the codec I will use H.264. Please advise: for transmitting the data, should I use MPEG2-TS or RTP? I am not sure where I should start writing the application.
I will work in C/C++ on Ubuntu. Please help.
Thanks in advance.
RTP is commonly used. You can have a look at the examples under gst-plugins-good/tests/examples/rtp/.
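For instance, a rough sender/receiver pair over RTP (host, port, encoder and caps here are assumptions to adapt to your setup):
gst-launch-1.0 v4l2src ! videoconvert ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=192.168.1.10 port=5000
gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96" ! rtph264depay ! avdec_h264 ! autovideosink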