GStreamer RT(S)P server with custom encoder in C++

I'm looking to create a Real-time Transport/Streaming Protocol (RT(S)P) server using the GStreamer API in C++ (on a Linux platform), with the possibility to send out data encoded by a custom encoder/decoder.
So far I have a simple server working using the following tutorial:
http://www.ip-sense.com/linuxsense/how-to-develop-a-rtsp-server-in-linux-using-gstreamer/
The next step would be to find a way to do the same with raw images and then with my custom encoder.
Can anyone point me towards a tutorial/example of something similar, and perhaps explain which of RTSP and RTP (or both?) would be best to use for this?

To use a custom encoder/decoder you would need to write your own GStreamer plugin.
If you look at lines 83 to 85 in the tutorial code, it is defining a GStreamer pipeline:
gst_rtsp_media_factory_set_launch (factory, "( "
    "videotestsrc ! video/x-raw-yuv,width=320,height=240,framerate=10/1 ! "
    "x264enc ! queue ! rtph264pay name=pay0 pt=96 ! "
    "audiotestsrc ! audio/x-raw-int,rate=8000 ! alawenc ! rtppcmapay name=pay1 pt=97 "
    ")");
Here the pipeline is using x264enc, an H.264 encoder. After writing a GStreamer plugin you can change the above pipeline to use your encoder instead.
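As a rough, 1.x-style sketch of what such a server could look like once your plugin is installed (here "myencoder" is a hypothetical element name standing in for whatever your plugin registers; rtpgstpay is used as a generic RTP payloader since a custom format has no standard payloader):

#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);

  GstRTSPServer *server = gst_rtsp_server_new ();
  GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points (server);
  GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new ();

  /* "myencoder" is a placeholder for the element your plugin registers.
   * rtpgstpay wraps arbitrary GStreamer buffers in RTP, which is useful
   * when no standard RTP payloader exists for your custom format. */
  gst_rtsp_media_factory_set_launch (factory,
      "( videotestsrc ! video/x-raw,width=320,height=240,framerate=10/1 ! "
      "myencoder ! rtpgstpay name=pay0 pt=96 )");

  gst_rtsp_mount_points_add_factory (mounts, "/test", factory);
  g_object_unref (mounts);

  gst_rtsp_server_attach (server, NULL);  /* stream served at rtsp://<host>:8554/test */
  g_main_loop_run (loop);
  return 0;
}

As for RTSP versus RTP: they are not alternatives. RTSP is the control/session protocol the client uses to set up and start playback, while the media itself is carried over RTP, so a gst-rtsp-server based setup like the one above uses both.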

Related

How to get a sample pipeline for a plugin made for GStreamer?

I found something like a game capture plugin for GStreamer that uses OBS GameCapture.
But support for it stopped a long time ago, and the authors did not even leave an example pipeline.
I'm fairly new to GStreamer and I've been messing around with the code, but couldn't create a pipeline to run it.
Can someone help me create a sample pipeline for it?
gst-inspect-1.0 libgstgamecapture.dll
On Windows, you could use dx9screencapsrc to capture the screen. For example:
gst-launch-1.0 dx9screencapsrc x=100 y=100 width=320 height=240 ! videoconvert ! autovideosink

Dynamically change GStreamer plugin (equalizer) parameters

Context: I have an audio device running Mopidy, which outputs to a GStreamer pipeline. My device has an interface for an equalizer - for this I've set up my ALSA config to go through the ALSA equalizer - the GStreamer pipeline targets this. The code that handles the interface uses the Python alsamixer bindings to set the values.
This works, but the ALSA equalizer is a bit janky and has a very narrow range before it distorts the audio. GStreamer has an equalizer plugin which I think is better; I can implement this as per the example launch line:
gst-launch-1.0 filesrc location=song.ogg ! oggdemux ! vorbisdec ! audioconvert ! equalizer-10bands band2=3.0 ! alsasink
However, I want to be able to dynamically change band0-band9 parameters while the stream is playing - either via python or from the command line. I'm not sure what direction to look - is this possible?
Properties of an element can be set via the g_object_set() function. Whether they can be changed on the fly or only while the pipeline is stopped depends on the element's implementation.
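As a minimal sketch, assuming the pipeline was built with gst_parse_launch and the launch string contained "... equalizer-10bands name=eq ..." (the band gains of equalizer-10bands can be adjusted while the pipeline is playing):

#include <gst/gst.h>

/* Assumes 'pipeline' was created with gst_parse_launch() and the equalizer
 * was given name=eq in the launch string. band0..band9 are gains in dB. */
void set_band_gain (GstElement *pipeline, const char *band, double gain_db)
{
  GstElement *eq = gst_bin_get_by_name (GST_BIN (pipeline), "eq");
  if (eq != NULL) {
    g_object_set (eq, band, gain_db, NULL);  /* e.g. set_band_gain (pipeline, "band2", 6.0); */
    gst_object_unref (eq);
  }
}

From Python the same thing can be done through GObject introspection by calling set_property() on the element.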

Converting mp4 files and streaming them to a Viewer with the Raspberry Pi

Heads up: the goal of my project is to replace a regular Intel Core PC with a Raspberry Pi 4.
I have a camera simulation that runs on an Intel PC pretty well. It takes MP4 files and encodes them into JPEG with jpegenc. Using GStreamer and its plugins, namely avdec_h264 and qtdemux, this works pretty well.
There is also the option to use vaapih264dec and its JPEG encoder counterpart. This is useful because CPU usage is very high with the non-hardware-accelerated plugins; the program works on the Pi as well, but with only 4 cameras we are at 100% usage on all 4 cores.
Now I have been researching quite a lot, and the first answer was to use omxh264dec, since that is the RPi's counterpart to vaapih264dec (or so I'm assuming). I can't get this to work, and every time I try anything different the pipeline simply won't build.
I have tried:
- Swapping the demuxer
- Changing the decoder and encoder (no combination other than the CPU-based ones seemed to work)
- Asking on the GStreamer forum (was just told that it doesn't work that way, but got no clue as to where to start looking elsewhere)
- Building the pipeline outside the whole program, but even that doesn't seem to work with omxh264dec
Pipeline:
gst-launch-1.0 filesrc location=/home/pi/test.mp4 ! qtdemux ! h264parse ! omxh264dec ! autovideosink
gives this error:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstQTDemux:qtdemux0: Internal data s$
Additional debug info:
qtdemux.c(6073): gst_qtdemux_loop (): /GstPipeline:pipeline0/GstQTDemux:qtdemux$
streaming stopped, reason not-negotiated (-4)
So my question really is: is it somehow possible to use GStreamer to stream OMX-decoded footage, and if not, how can I reduce the CPU load of my program so my RPi doesn't end up dying?
The Raspberry Pi supports only 1080p60 H.264 high-profile encode/decode.
You can check the profile of test.mp4 by running this pipeline on a PC:
gst-launch-1.0 filesrc location=/home/pi/test.mp4 ! qtdemux ! h264parse ! avdec_h264 ! autovideosink -v

Import GStreamer video into Qt + OpenCV

I use GStreamer to stream my webcam over a wireless network.
I use an ARM board for streaming and receive on my PC.
I want to import the received video into Qt for use with OpenCV.
I stream the video using this command:
./capture -c 10000 -o | gst-launch-0.10 -v -e filesrc location=/dev/fd/0 ! h264parse ! rtph264pay ! tcpserversink host=127.0.0.1 port=8080
and for receiving:
gst-launch udpsrc port=1234 ! "application/x-rtp, payload=127" ! rtph264depay ! ffdec_h264 ! xvimagesink sync=false
What should I do to use the received video in Qt? I want to use it for image processing.
You need to write your own application for receiving instead of using gst-launch. For that, refer to the documentation at gstreamer.freedesktop.org, especially the application development manual: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/index.html
For using the video inside a Qt window you can use the GstXOverlay/GstVideoOverlay interface to tell xvimagesink where to draw the video (the window ID).
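For example, a minimal 1.x-style sketch (in 0.10 the equivalent calls live in the GstXOverlay interface), assuming you already have pointers to the xvimagesink element and the QWidget that should display the video:

#include <gst/gst.h>
#include <gst/video/videooverlay.h>
#include <QWidget>

/* Tell the video sink to render into an existing Qt widget by handing it
 * the widget's native window handle. */
void embed_video_in_widget (GstElement *videosink, QWidget *widget)
{
  gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (videosink),
      (guintptr) widget->winId ());
}

Ideally this is done before the pipeline reaches PLAYING, or from a bus sync handler when the sink posts its prepare-window-handle message.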
There is an OpenCV plugin in GStreamer that wraps a few functions/filters from OpenCV. If the function you are interested in isn't implemented, you can wrap it in a new GStreamer element. Or you can write a buffer probe to modify the buffers from your application and do the processing by calling OpenCV yourself.
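A rough 1.x-style sketch of the buffer-probe approach, assuming the branch you tap into has been converted to BGR (e.g. videoconvert ! video/x-raw,format=BGR) with tightly packed rows, so the data can be wrapped in a cv::Mat without copying:

#include <gst/gst.h>
#include <opencv2/imgproc.hpp>

static GstPadProbeReturn
process_frame (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  buf = gst_buffer_make_writable (buf);

  /* Read the frame size from the pad's negotiated caps. */
  gint width = 0, height = 0;
  GstCaps *caps = gst_pad_get_current_caps (pad);
  gst_structure_get_int (gst_caps_get_structure (caps, 0), "width", &width);
  gst_structure_get_int (gst_caps_get_structure (caps, 0), "height", &height);
  gst_caps_unref (caps);

  GstMapInfo map;
  if (gst_buffer_map (buf, &map, GST_MAP_READWRITE)) {
    /* Wrap the raw BGR data in a cv::Mat (no copy) and process it in place. */
    cv::Mat frame (height, width, CV_8UC3, map.data);
    cv::GaussianBlur (frame, frame, cv::Size (9, 9), 0);
    gst_buffer_unmap (buf, &map);
  }

  GST_PAD_PROBE_INFO_DATA (info) = buf;  /* hand the writable buffer back */
  return GST_PAD_PROBE_OK;
}

/* Attach the probe, e.g. to the sink pad of the video sink: */
void attach_probe (GstElement *videosink)
{
  GstPad *pad = gst_element_get_static_pad (videosink, "sink");
  gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, process_frame, NULL, NULL);
  gst_object_unref (pad);
}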
From your gst-launch lines I can see that you are using version 0.10. If possible you should consider moving to the 1.x versions, as 0.10 is obsolete and unmaintained. If you must stick with 0.10, pay attention when looking things up to make sure you are reading the documentation for your version.
The best solution is to use QtGStreamer to capture and stream video with GStreamer in a Qt environment. The main advantage is that you can put your pipeline description into a string and the library will do the heavy lifting for you, so you avoid coding all the pipeline components yourself. However, you will have to code your own sink to use the captured frames with OpenCV.
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/qt-gstreamer/html/index.html

Combine multiple videos into one

I have three videos:
a lecture that was filmed with a video camera
a video of the desktop capture of the computer used in the lecture
and the video of the whiteboard
I want to create a final video with those three components taking up a certain region of the screen.
Is there open-source software that would allow me to do this (MEncoder, FFmpeg, VirtualDub, ...)? Which do you recommend?
Or is there a C/C++ API that would enable me to create something like that programmatically?
Edit: There will be multiple recorded lectures in the future. This means that I need a generic/automated solution.
I'm currently checking out if I could write an application with GStreamer to do this job. Any comments on that?
Solved!
I succeeded in doing this with GStreamer's videomixer element. I use the gst-launch syntax to create a pipeline and then load it with gst_parse_launch. It's a really productive way to implement complex pipelines.
Here's a pipeline that takes two incoming video streams and a logo image, blends them into one stream and then duplicates it, so that it is simultaneously displayed and saved to disk.
desktop. ! queue
! ffmpegcolorspace
! videoscale
! video/x-raw-yuv,width=640,height=480
! videobox right=-320
! ffmpegcolorspace
! vmix.sink_0
webcam. ! queue
! ffmpegcolorspace
! videoscale
! video/x-raw-yuv,width=320,height=240
! vmix.sink_1
logo. ! queue
! jpegdec
! ffmpegcolorspace
! videoscale
! video/x-raw-yuv,width=320,height=240
! vmix.sink_2
vmix. ! t.
t. ! queue
! ffmpegcolorspace
! ffenc_mpeg2video
! filesink location="recording.mpg"
t. ! queue
! ffmpegcolorspace
! dshowvideosink
videotestsrc name="desktop"
videotestsrc name="webcam"
multifilesrc name="logo" location="logo.jpg"
videomixer name=vmix
sink_0::xpos=0 sink_0::ypos=0 sink_0::zorder=0
sink_1::xpos=640 sink_1::ypos=0 sink_1::zorder=1
sink_2::xpos=640 sink_2::ypos=240 sink_2::zorder=2
tee name="t"
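For reference, here is a minimal 1.x-style sketch of how a description like this is loaded from code with gst_parse_launch (shortened to two test sources feeding a videomixer; the full description above is passed in exactly the same way, as one long string):

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "videotestsrc pattern=smpte ! video/x-raw,width=320,height=240 ! "
      "videomixer name=mix sink_1::xpos=320 ! videoconvert ! autovideosink "
      "videotestsrc pattern=ball ! video/x-raw,width=320,height=240 ! mix.",
      &error);
  if (pipeline == NULL) {
    g_printerr ("Failed to parse pipeline: %s\n", error->message);
    g_clear_error (&error);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  GMainLoop *loop = g_main_loop_new (NULL, FALSE);
  g_main_loop_run (loop);
  return 0;
}

(The pipeline in the answer uses 0.10 element names; in current GStreamer, ffmpegcolorspace has become videoconvert and videomixer has been superseded by compositor.)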
It can be done with ffmpeg; I've done it myself. That said, it is technically complex - but then again, that is what any other software you might use would be doing at its core.
The process works like this:
Demux audio from source 1 to raw wav
Demux audio from source 2
Demux audio from source 3
Demux video from source 1 to MPEG1
Demux video from source 2
Demux video from source 3
Concatenate audio 1 + audio 2 + audio 3
Concatenate video 1 + video 2 + video 3
Mux audio 123 and video 123 into target
encode to target format
I think what surprises folks is that you can literally concatenate two raw PCM WAV audio files and the result is valid. What really, really surprises people is that you can do the same with MPEG-1/H.261 video.
Like I've said, I've done it. There are some specifics left out, but it most definitely works. My program was done as a bash script around ffmpeg. While I've never used the ffmpeg C API, I don't see why you could not use it to do the same thing.
It's a highly educational project to do, if you are so inclined. If your goal is just to slap some videos together for a one-off project, then maybe using a GUI tool is a better idea.
If you just want to combine footage into a single video and crop the video, I'd use VirtualDub.
You can combine multiple video files/streams into one picture with VLC:
there is a command-line interface, so you can script/automate it.
http://wiki.videolan.org/Mosaic
AviSynth can do it rather easily. Look here under the Mosaic section for an example.
I've used ffmpeg quite a bit and I have never stumbled upon this functionality, but that doesn't mean it isn't there. You can always do it yourself in C or C++ with libavformat and libavcodec (the ffmpeg libraries) if you're looking for a project, but you will have to get your hands very dirty compositing the video yourself. If you just want to get the video done and not tinker with code, definitely use a pre-made tool like AviSynth or VirtualDub.