Context: I have an audio device running Mopidy, which outputs to a GStreamer pipeline. The device has an interface for an equalizer; for this I've set up my ALSA config to go through the ALSA equalizer plugin, which the GStreamer pipeline targets. The code that handles the interface uses the Python alsamixer bindings to apply the values.
This works, but the ALSA equalizer is a bit janky and has a very narrow range before it distorts the audio. GStreamer has an equalizer plugin which I think is better; I can use it as per this example launch line:
gst-launch-1.0 filesrc location=song.ogg ! oggdemux ! vorbisdec ! audioconvert ! equalizer-10bands band2=3.0 ! alsasink
However, I want to be able to change the band0-band9 parameters dynamically while the stream is playing - either via Python or from the command line. I'm not sure which direction to look in - is this possible?
Properties of a plugin can be set via the g_object_set() function. Whether they can be changed on the fly or only when the pipeline is stopped depends on the plugin's implementation.
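For illustration, here's a minimal C sketch of that idea with equalizer-10bands, whose band properties are meant to be adjusted at runtime. The audiotestsrc/autoaudiosink pipeline, the timing and the dB values are placeholders rather than your Mopidy setup; from Python via PyGObject the equivalent call would be eq.set_property("band0", 6.0) on the same named element.
/* Minimal sketch: adjust equalizer-10bands while the stream keeps playing.
 * audiotestsrc/autoaudiosink stand in for the real source and ALSA sink. */
#include <gst/gst.h>

static gboolean
bump_bands (gpointer user_data)
{
  GstElement *eq = GST_ELEMENT (user_data);
  /* band0..band9 are double properties in dB; these values are illustrative */
  g_object_set (eq, "band0", 6.0, "band1", 3.0, "band9", -6.0, NULL);
  return G_SOURCE_REMOVE;   /* run only once */
}

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *pipeline = gst_parse_launch (
      "audiotestsrc ! audioconvert ! equalizer-10bands name=eq ! autoaudiosink",
      NULL);
  GstElement *eq = gst_bin_get_by_name (GST_BIN (pipeline), "eq");

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* change the EQ two seconds into playback, without stopping the pipeline */
  g_timeout_add_seconds (2, bump_bands, eq);

  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}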
Related
I found something like a game capture plugin for GStreamer that uses OBS GameCapture.
But the developers stopped providing support a long time ago, and they didn't even leave an example pipeline.
I'm fairly new to GStreamer and I've been messing around with the code, but I couldn't create a pipeline to run it.
Can someone help me create a sample pipeline with GStreamer?
gst-inspect-1.0 libgstgamecapture.dll
On Windows, you could use dx9screencapsrc to capture the screen. For example:
gst-launch-1.0 dx9screencapsrc x=100 y=100 width=320 height=240 ! videoconvert ! autovideosink
I use GStreamer to stream my webcam over a wireless network.
I use an ARM board for streaming and receive on my PC.
I want to import the received video into Qt for use with OpenCV.
I stream the video using this command:
./capture -c 10000 -o | gst-launch-0.10 -v -e filesrc location=/dev/fd/0 ! h264parse ! rtph264pay ! tcpserversink host=127.0.0.1 port=8080
and to receive:
gst-launch udpsrc port=1234 ! "application/x-rtp, payload=127" ! rtph264depay ! ffdec_h264 ! xvimagesink sync=false
What should I do to use the received video in Qt?
I want to use it for image processing.
You need to write your own application for receiving instead of using gst-launch. For that, refer to the documentation at gstreamer.freedesktop.org, especially the application development manual: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html
For using the video inside a Qt window you can use the GstXOverlay/GstVideoOverlay interface to tell xvimagesink where to draw the video (the window id).
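As a rough sketch of that approach (shown with the 1.x GstVideoOverlay API; 0.10's GstXOverlay has an equivalent call), where the window handle would come from QWidget::winId() on the Qt side:
/* Sketch: tell a video sink to render into an existing window.
 * The handle is whatever the Qt side obtains from QWidget::winId(). */
#include <gst/gst.h>
#include <gst/video/videooverlay.h>

static void
embed_video_in_window (GstElement *video_sink, guintptr window_handle)
{
  /* xvimagesink (and most other video sinks) implement GstVideoOverlay */
  gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (video_sink),
      window_handle);
}
In a full application this is usually done in response to the sink's prepare-window-handle bus message, but for a pipeline you build yourself, calling it directly on the sink also works.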
There is an OpenCV plugin in GStreamer that wraps a few functions/filters from OpenCV. If the function you're interested in isn't implemented, you can wrap it in a new GStreamer element. Or you can write a buffer probe to modify the buffers from your application and do the processing by calling OpenCV yourself.
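A buffer probe looks roughly like this with the 1.x API (in 0.10 the equivalent is gst_pad_add_buffer_probe()); the element and pad probed here are only an example:
#include <gst/gst.h>

/* Called for every buffer flowing through the probed pad; this is where a
 * raw frame could be handed to OpenCV. */
static GstPadProbeReturn
frame_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstMapInfo map;

  if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
    /* map.data / map.size hold the raw frame data */
    gst_buffer_unmap (buf, &map);
  }
  return GST_PAD_PROBE_OK;
}

/* Attach it, e.g. to the sink pad of the video sink. */
static void
attach_frame_probe (GstElement *video_sink)
{
  GstPad *pad = gst_element_get_static_pad (video_sink, "sink");
  gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, frame_probe, NULL, NULL);
  gst_object_unref (pad);
}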
From your gst-launch lines I can see that you are using version 0.10. If possible you should consider moving to the 1.x versions, as 0.10 is obsolete and unmaintained. If you must stick to 0.10, pay attention when looking up the docs to make sure you are reading the correct documentation for your version.
The best solution is to use QtGstreamer to capture and stream video using GStreamer in a Qt environment. The main advantage is that you can put your pipeline description into a string and the library does the hard work for you, so you avoid having to code all the pipeline components yourself. However, you will have to code your own sink to use the captured frames with OpenCV.
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/qt-gstreamer/html/index.html
I'm looking to create a Real-time Transport/Streaming Protocol (RT(S)P) server using the GStreamer API in C++ (on a Linux platform), with the possibility of sending out data encoded by a custom encoder/decoder.
So far I have a simple server working using the following tutorial:
http://www.ip-sense.com/linuxsense/how-to-develop-a-rtsp-server-in-linux-using-gstreamer/
The next step would be to find a way to do so with raw images and then with my custom encoder.
Can anyone point me towards a tutorial/example of something similar and perhaps explain which of both RTSP and RTP (or both?) would be best to use for this?
To use a custom encoder/decoder you would need to write your own GStreamer plugin.
If you look at lines 83 to 85 of the tutorial code, it defines a GStreamer pipeline.
gst_rtsp_media_factory_set_launch (factory, "( "
    "videotestsrc ! video/x-raw-yuv,width=320,height=240,framerate=10/1 ! "
    "x264enc ! queue ! rtph264pay name=pay0 pt=96 "
    "audiotestsrc ! audio/x-raw-int,rate=8000 ! alawenc ! rtppcmapay name=pay1 pt=97 " ")");
Here the pipeline is using x264enc, an H.264 encoder. After writing a GStreamer plugin you can change the above pipeline to use your encoder instead.
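For reference, a minimal server in the spirit of that tutorial looks roughly like this with the current (1.x) gst-rtsp-server API; the 0.10 API used in the linked tutorial differs slightly, and x264enc is the part you would swap for your custom encoder element once its plugin is installed:
/* Minimal RTSP server sketch in the style of gst-rtsp-server's test-launch
 * example; build against gstreamer-rtsp-server-1.0 and replace x264enc with
 * your custom encoder element once its plugin is installed. */
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);

  GstRTSPServer *server = gst_rtsp_server_new ();
  GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points (server);
  GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new ();

  /* the payloader must be named pay0 (pay1, ... for additional streams) */
  gst_rtsp_media_factory_set_launch (factory,
      "( videotestsrc ! video/x-raw,width=320,height=240,framerate=10/1 ! "
      "x264enc ! rtph264pay name=pay0 pt=96 )");

  gst_rtsp_mount_points_add_factory (mounts, "/test", factory);
  g_object_unref (mounts);

  gst_rtsp_server_attach (server, NULL);
  g_print ("stream ready at rtsp://127.0.0.1:8554/test\n");
  g_main_loop_run (loop);
  return 0;
}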
I have been making a few experiments with GStreamer by using the gst-launch utility. However, ultimately, the aim is to implement this same functionality on my own application using GStreamer libraries.
The problem is that it's ultimately difficult (at least for someone who is not used to the GStreamer API) to "port" what I test on the command line to C/C++ code.
An example of a command that I may need to port is:
gst-launch filesrc location="CLIP8.mp4" ! decodebin2 ! jpegenc ! multifilesink location="test%d.jpg"
What's the most straightforward way/approach to take such a command and write it in C in my own app?
Also, as a side question, how could I replace the multifilesink and do this work in memory instead? (I'm using OpenCV to perform a few calculations on an image extracted from the video.) Is it possible to decode directly to memory and use it right away without first saving to the filesystem? It could (and should) be sequential, meaning it would only move on to the next frame after I'm done processing the current one, so that I wouldn't have to keep thousands of frames in memory.
What do you say?
I found the solution. There's a function built into GStreamer that parses gst-launch arguments and returns a pipeline. The function is called gst_parse_launch and is documented here: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstParse.html
I haven't tested it, but it's possibly the fastest solution to convert what I have been testing on the command line to C/C++ code.
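As a sketch, the command from the question maps to something like this (using the 1.x element names, i.e. decodebin instead of decodebin2, with a videoconvert added for safety):
/* Sketch: build the pipeline from the question with gst_parse_launch() and
 * run it until end-of-stream. */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "filesrc location=CLIP8.mp4 ! decodebin ! videoconvert ! "
      "jpegenc ! multifilesink location=test%d.jpg", &error);
  if (pipeline == NULL) {
    g_printerr ("Parse error: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* block until EOS or an error is posted on the bus */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg != NULL)
    gst_message_unref (msg);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (bus);
  gst_object_unref (pipeline);
  return 0;
}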
You could always pop open the source of gst-launch and grab the bits that parse out the command-line and turn it into a GStreamer pipeline.
That way you can just pass in the "command line" as a string, and the function will return a complete pipeline for you.
By the way, there is an interesting GStreamer element that provides a good way to integrate a processing pipeline into your (C/C++) application: appsink
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-libs/html/gst-plugins-base-libs-appsink.html
With this one you can basically retrieve the frames from the pipeline into a big C array and do whatever you want with them. You set up a callback function, which will be invoked every time a new frame is available from the pipeline thread...
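Here is a rough sketch of the appsink route with the 1.x API; the input file, the BGR caps and the element names are assumptions. Since the callback runs in the streaming thread, processing a frame inside it naturally holds the pipeline back until you are done, which matches the sequential requirement from the question.
/* appsink sketch: pull decoded frames into the application one at a time. */
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

static GstFlowReturn
on_new_sample (GstAppSink *appsink, gpointer user_data)
{
  GstSample *sample = gst_app_sink_pull_sample (appsink);
  GstBuffer *buf = gst_sample_get_buffer (sample);
  GstMapInfo map;

  if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
    /* map.data holds the raw BGR frame; process it (e.g. with OpenCV) here */
    gst_buffer_unmap (buf, &map);
  }
  gst_sample_unref (sample);
  return GST_FLOW_OK;
}

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *pipeline = gst_parse_launch (
      "filesrc location=CLIP8.mp4 ! decodebin ! videoconvert ! "
      "video/x-raw,format=BGR ! appsink name=sink", NULL);
  GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");

  GstAppSinkCallbacks callbacks = { .new_sample = on_new_sample };
  gst_app_sink_set_callbacks (GST_APP_SINK (sink), &callbacks, NULL, NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* wait until the file has been fully processed */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg != NULL)
    gst_message_unref (msg);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  return 0;
}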
I have three videos:
a lecture that was filmed with a video camera
a video of the desktop capture of the computer used in the lecture
and the video of the whiteboard
I want to create a final video with those three components each taking up a certain region of the screen.
Is there open-source software that would allow me to do this (mencoder, ffmpeg, VirtualDub...)? Which do you recommend?
Or is there a C/C++ API that would enable me to create something like that programmatically?
Edit: There will be multiple recorded lectures in the future. This means that I need a generic/automated solution.
I'm currently checking out if I could write an application with GStreamer to do this job. Any comments on that?
Solved!
I succeeded in doing this with GStreamer's videomixer element. I use the gst-launch syntax to create a pipeline and then load it with gst_parse_launch. It's a really productive way to implement complex pipelines.
Here's a pipeline that takes two incoming video streams and a logo image, blends them into one stream, and then duplicates it so that it is simultaneously displayed and saved to disk.
desktop. ! queue
! ffmpegcolorspace
! videoscale
! video/x-raw-yuv,width=640,height=480
! videobox right=-320
! ffmpegcolorspace
! vmix.sink_0
webcam. ! queue
! ffmpegcolorspace
! videoscale
! video/x-raw-yuv,width=320,height=240
! vmix.sink_1
logo. ! queue
! jpegdec
! ffmpegcolorspace
! videoscale
! video/x-raw-yuv,width=320,height=240
! vmix.sink_2
vmix. ! t.
t. ! queue
! ffmpegcolorspace
! ffenc_mpeg2video
! filesink location="recording.mpg"
t. ! queue
! ffmpegcolorspace
! dshowvideosink
videotestsrc name="desktop"
videotestsrc name="webcam"
multifilesrc name="logo" location="logo.jpg"
videomixer name=vmix
sink_0::xpos=0 sink_0::ypos=0 sink_0::zorder=0
sink_1::xpos=640 sink_1::ypos=0 sink_1::zorder=1
sink_2::xpos=640 sink_2::ypos=240 sink_2::zorder=2
tee name="t"
It can be done with ffmpeg; I've done it myself. That said, it is technically complex. But then again, that is what any other software you might use is going to do at its core.
The process works like this:
Demux audio from source 1 to raw wav
Demux audio from source 2
Demux audio from source 3
Demux video from source 1 to MPEG1
Demux video from source 2
Demux video from source 3
Concatenate audio 1 + audio 2 + audio 3
Concatenate video 1 + video 2 + video 3
Mux audio 123 and video 123 into target
Encode to the target format
I think what surprises folks is that you can literally concatenate two raw PCM wav audio files, and the result is valid. What really, really surprises people is that you can do the same with MPEG1/h.261 video.
Like I've said, I've done it. There are some specifics left out, but it most definitely works. My program was done as a bash script driving ffmpeg. While I've never used the ffmpeg C API, I don't see why you couldn't use it to do the same thing.
It's a highly educational project to do, if you are inclined. If your goal is just to slap some videos together for a one-off project, then maybe using a GUI tool is a better idea.
If you just want to combine footage into a single video and crop the video, I'd use VirtualDub.
You can combine multiple video files/streams into one picture with VLC.
There is a command-line interface, so you can script/automate it.
http://wiki.videolan.org/Mosaic
AviSynth can do it rather easily; look under the Mosaic section of its documentation for an example.
I've used ffmpeg quite a bit and I have never stumbled upon this functionality, but that doesn't mean it isn't there. You could always do it yourself in C or C++ with libavformat and libavcodec (the ffmpeg libraries) if you're looking for a project, but you will have to get your hands very dirty compositing the video yourself. If you are just looking to get the video done and not tinker with code, definitely use a pre-made tool like AviSynth or VirtualDub.