I want to use Qt to create a simple GUI application that can play a local video file. I could use Phonon, which does all the work behind the scenes, but I need a little more control. I have already succeeded in implementing a GStreamer pipeline using the decodebin and autovideosink elements. Now I want to channel the output into a Qt widget.
Has anyone ever succeeded in doing this? (I suppose so since there are Qt-based video players that build upon GStreamer.) Can someone point me in the right direction on how to do it?
Note: This question is similar to my previously posted question on how to connect Qt with an incoming RTP stream. That turned out to be quite challenging. This question should be easier to answer, I think.
Update 1
Patrice's suggestion to use libVLC is very helpful already. Here's a somewhat cleaner version of the code found on VLC's website:
Sample for Qt + libVLC.
However, my original question remains: How do I connect GStreamer to a Qt widget?
Update 2
After some experimentation I ended up with this working sample. It depends on GstWidget.h and GstWidget.cpp from my own little GstSupport library. Note, however, that it is currently only tested on the Mac version of Qt.
To connect GStreamer to your QWidget, get the window handle using QWidget::winId() and pass it to gst_x_overlay_set_xwindow_id().
Rough sample code:
// Create an X11 video sink and bring it to READY so the overlay interface can be used.
GstElement *sink = gst_element_factory_make("xvimagesink", "sink");
gst_element_set_state(sink, GST_STATE_READY);
// Flush pending X11 commands before handing the window over to GStreamer.
QApplication::syncX();
// Tell the sink to render into the widget's native window.
gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(sink), widget->winId());
Also, you will want your widget to be backed by a native window, which is achieved by setting the Qt::AA_NativeWindows attribute at the application level or the Qt::WA_NativeWindow attribute at the widget level.
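For example (a two-line sketch, where widget is whatever QWidget you render into):

// Make sure the widget is backed by a native window before asking for its id.
widget->setAttribute(Qt::WA_NativeWindow);
WId xid = widget->winId(); // creates the native window if it does not exist yet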
Since Phonon is based on GStreamer, the place to look for details is the Phonon source tree (available here: http://gitorious.org/phonon/import/trees/master). For a video player you are most likely going to need a video display widget, such as gstreamer/videowidget.h (cpp), which in turn uses the X11 renderer (gstreamer/x11renderer.h, cpp). The sink used is xvimagesink, falling back to ximagesink if the first cannot be created.
The basic trick is to overlay the VideoWidget with the video output. The X11 handle needed to do this is retrieved using the QWidget::winId method, which is platform specific (as are the sinks, so no biggie).
Also, if overlay is unavailable, a QWidgetVideoSink is used, which converts the video stream into individual frames for the WidgetRenderer class. This class, in turn, makes the current frame available as a QImage object, ready for any type of processing.
So to answer your question - use either overlays (as X11Renderer) or extract individual QImages from the video stream (as QWidgetVideoSink).
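For the QImage route, the consuming side can be as simple as repainting whatever frame is current. A minimal sketch (the frame-delivery plumbing of QWidgetVideoSink/WidgetRenderer is assumed to call setFrame; only the painting is shown):

#include <QWidget>
#include <QPainter>
#include <QImage>

class FrameWidget : public QWidget {
public:
    // Called by whatever renderer produces the frames.
    void setFrame(const QImage &frame) {
        m_frame = frame;
        update(); // schedule a repaint
    }
protected:
    void paintEvent(QPaintEvent *) {
        QPainter p(this);
        if (!m_frame.isNull())
            p.drawImage(rect(), m_frame); // scale the frame to the widget size
    }
private:
    QImage m_frame;
};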
VLC itself is a Qt-based video player (since version 0.99). It can also stream or read a stream. You can find all the information you need here: http://wiki.videolan.org/Developers_Corner. You only have to create an instance of the player and associate it with a widget. Then you have full control over the player.
I have already tested it (on Linux and Windows) playing local music and video files and it works fine.
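For reference, a minimal sketch of that pattern against the libVLC C API (function names from libVLC 1.1+; the media path and the widget are placeholders, and error handling is omitted):

#include <vlc/vlc.h>
#include <QWidget>

void playInWidget(QWidget *widget, const char *path) {
    // Create the engine, a media item and a player.
    libvlc_instance_t *vlc = libvlc_new(0, NULL);
    libvlc_media_t *media = libvlc_media_new_path(vlc, path);
    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);

    // Hand the widget's native window to the player.
#if defined(Q_OS_WIN)
    libvlc_media_player_set_hwnd(player, (void *)widget->winId());
#elif defined(Q_OS_MAC)
    libvlc_media_player_set_nsobject(player, (void *)widget->winId());
#else
    libvlc_media_player_set_xwindow(player, widget->winId());
#endif

    libvlc_media_player_play(player);
}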
Give it a try and see by yourself.
Hope that helps.
Edit:
It seems that if you want to use VLC, you need to write or find (I do not know if one exists) a GStreamer codec, as explained on the videolan wiki. I think that is what I would do.
Related
I'm trying to develop a little application in which you can load an mp3 file and play it at variable speeds! (I know it already exists :-) )
I'm using Qt and C++. I already have the basic player, but I'm stuck on the rate, because I want to change it smoothly (like in Mixxx) without stopping playback! QMediaPlayer always stops when I change the value, which creates a gap in the sound. Also, I don't want the pitch to change!
I already found something called "SoundTouch", but now I'm completely clueless about what to do with it: how to process my mp3 data and how to get it to the player! The SoundTouch library is capable of doing what I want; I got that from the samples on its homepage.
How do I import the mp3 file so I can process it with the SoundTouch functions?
How can I play the output from the SoundTouch function? (Perhaps QMediaPlayer can do the job?)
How is this done live? I guess I need some kind of stream, so I can change the speed during playback and keep playing without gaps. Graphically, in my head, it is something that sits between the data and the player, which all data passes through live, with a small buffer (20-50 ms or so) behind it to avoid gaps while processing future data.
Any help appreciated! I'm also open to any solution other than SoundTouch, as long as I can stay with Qt/C++!
(Second thing: I want to view a waveform overview as well as a moving part of it (around the actual position in the song), so I could also use hints on how to get the waveform data.)
Thanks in advance!
As of now (Qt 5.5) this is impossible to do with QMediaPlayer only. You need to do the following:
Decode the audio using GStreamer, FFmpeg or the (new) QAudioDecoder: http://doc.qt.io/qt-5/qaudiodecoder.html - this will give you a raw PCM stream;
Apply SoundTouch or some other time-stretching library to this raw data to change the tempo without affecting the pitch. If GPL is ok, take a look at http://nsound.sourceforge.net/examples/index.html; if you develop proprietary stuff, STK might be a better choice: https://ccrma.stanford.edu/software/stk/
Output the modified data to the audio device using QAudioOutput.
This strategy uses Qt as much as possible and gives you the best platform coverage (you still lose Android, though, as it does not support QAudioOutput).
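A rough sketch of how the three steps glue together (assuming 16-bit stereo PCM at 44.1 kHz, a SoundTouch build with integer samples, a hypothetical file name, and a running Qt event loop; buffering and error handling are stripped to the bone):

#include <QAudioDecoder>
#include <QAudioOutput>
#include <QAudioBuffer>
#include <QAudioFormat>
#include <SoundTouch.h>

static soundtouch::SoundTouch st;
static QAudioDecoder decoder;
static QIODevice *out = nullptr; // push-mode device returned by QAudioOutput::start()

void setupPipeline() {
    QAudioFormat fmt;
    fmt.setSampleRate(44100);
    fmt.setChannelCount(2);
    fmt.setSampleSize(16);
    fmt.setCodec("audio/pcm");
    fmt.setSampleType(QAudioFormat::SignedInt);

    st.setSampleRate(44100);
    st.setChannels(2);
    st.setTempo(1.25f); // 25% faster, pitch unchanged

    static QAudioOutput output(fmt);
    out = output.start(); // push mode

    decoder.setAudioFormat(fmt);
    decoder.setSourceFilename("song.mp3"); // hypothetical path
    QObject::connect(&decoder, &QAudioDecoder::bufferReady, [] {
        // Step 2: run each decoded buffer through SoundTouch
        // (assumes SoundTouch was built with integer samples, SAMPLETYPE == short).
        QAudioBuffer buf = decoder.read();
        st.putSamples(buf.constData<short>(), buf.frameCount());
        // Step 3: drain whatever SoundTouch has ready into the audio device.
        short scratch[8192]; // 4096 stereo frames
        uint n;
        while ((n = st.receiveSamples(scratch, 4096)) > 0)
            out->write(reinterpret_cast<const char *>(scratch),
                       n * 2 * sizeof(short));
    });
    decoder.start();
}

In a real player you would throttle the writes against QAudioOutput::bytesFree() instead of pushing blindly; that is also what keeps the playback gap-free.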
We have a requirement to let users record a video of our 3D application. I can already grab the individual rendered frames, so this question is specifically about how to write frames into a video file.
I don't think writing each frame as a separate file and post-processing is a workable option.
I can look at options to record to a simple video file for later optimising/encoding, or writing directly to a sensibly encoded format.
FFmpeg was suggested in another post, but it looks a bit daunting to me. Is it the best option? If not, what can you suggest? We can work with LGPL but not full GPL.
We're working on Windows (Win32, not MFC) in C++. Sample/pseudo code with your recommended library is very much appreciated... basically I'm after how to do 3 functions (a rough sketch follows the list):
startRecording() does whatever initialization is needed
recordFrame() takes a pointer to frame data and encodes it, ideally with timing data
endRecording() finalizes the video file, shuts down the video system, etc.
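For what it's worth, here is a condensed sketch of those three functions on top of FFmpeg's libav* libraries (current-style API; the function names, the MPEG-4 encoder choice and the frame type are illustrative, the RGB-to-YUV420P conversion via libswscale is omitted, and all error checking is stripped):

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

static AVFormatContext *fmtCtx = nullptr;
static AVCodecContext *encCtx = nullptr;
static AVStream *videoStream = nullptr;
static int64_t frameIndex = 0;

void startRecording(const char *filename, int width, int height, int fps) {
    avformat_alloc_output_context2(&fmtCtx, nullptr, nullptr, filename);
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
    videoStream = avformat_new_stream(fmtCtx, nullptr);
    encCtx = avcodec_alloc_context3(codec);
    encCtx->width = width;
    encCtx->height = height;
    encCtx->time_base = AVRational{1, fps};  // pts counted in 1/fps units
    encCtx->pix_fmt = AV_PIX_FMT_YUV420P;    // convert your RGB frames to this
    if (fmtCtx->oformat->flags & AVFMT_GLOBALHEADER)
        encCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    avcodec_open2(encCtx, codec, nullptr);
    avcodec_parameters_from_context(videoStream->codecpar, encCtx);
    videoStream->time_base = encCtx->time_base;
    avio_open(&fmtCtx->pb, filename, AVIO_FLAG_WRITE);
    avformat_write_header(fmtCtx, nullptr);
}

static void drainPackets() {
    AVPacket *pkt = av_packet_alloc();
    while (avcodec_receive_packet(encCtx, pkt) == 0) {
        av_packet_rescale_ts(pkt, encCtx->time_base, videoStream->time_base);
        pkt->stream_index = videoStream->index;
        av_interleaved_write_frame(fmtCtx, pkt);
    }
    av_packet_free(&pkt);
}

void recordFrame(AVFrame *frame) {  // a YUV420P AVFrame of the configured size
    frame->pts = frameIndex++;      // timing data, in time_base units
    avcodec_send_frame(encCtx, frame);
    drainPackets();
}

void endRecording() {
    avcodec_send_frame(encCtx, nullptr); // flush the encoder
    drainPackets();
    av_write_trailer(fmtCtx);
    avcodec_free_context(&encCtx);
    avio_closep(&fmtCtx->pb);
    avformat_free_context(fmtCtx);
}

MPEG-4 part 2 is used here because libavcodec's native encoder for it is LGPL; an H.264 encode via libx264 would pull the build into GPL territory.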
Check out the sources of Taksi on SourceForge: http://taksi.sourceforge.net/
You need 2 things:
1. A codec to compress the frames.
2. A container file format, like AVI or MPG.
Taksi uses the old Video for Windows API and AVI rather than the newer COM APIs, but it still might work for you.
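For comparison, a rough sketch of the same three functions over the Video for Windows AVIFile API that Taksi uses (link against vfw32.lib; frames are assumed to be uncompressed, bottom-up 24-bit BGR as AVI expects, and error handling is omitted):

#include <windows.h>
#include <vfw.h>

static PAVIFILE file = nullptr;
static PAVISTREAM stream = nullptr;

void startRecording(const char *filename, int width, int height, int fps) {
    AVIFileInit();
    AVIFileOpenA(&file, filename, OF_WRITE | OF_CREATE, NULL);
    AVISTREAMINFOA si = {};
    si.fccType = streamtypeVIDEO;
    si.dwScale = 1;
    si.dwRate = fps;  // frames per second = dwRate / dwScale
    SetRect(&si.rcFrame, 0, 0, width, height);
    AVIFileCreateStreamA(file, &stream, &si);
    BITMAPINFOHEADER bih = {};
    bih.biSize = sizeof(bih);
    bih.biWidth = width;
    bih.biHeight = height;
    bih.biPlanes = 1;
    bih.biBitCount = 24;        // 24-bit RGB frames
    bih.biCompression = BI_RGB; // uncompressed; post-process/encode later
    AVIStreamSetFormat(stream, 0, &bih, sizeof(bih));
}

void recordFrame(long index, void *pixels, long sizeBytes) {
    AVIStreamWrite(stream, index, 1, pixels, sizeBytes, AVIIF_KEYFRAME, NULL, NULL);
}

void endRecording() {
    AVIStreamRelease(stream);
    AVIFileRelease(file);
    AVIFileExit();
}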
My application displays video and audio, and I want to add a recording feature.
I've considered FFmpeg, but I have to compile my application with VS, so I can't use it. So I'm trying to do it with GStreamer, but I'm not finding any example or guide on how to create a video. Any help?
(I can also consider using any other alternatives, but they must be cross-platform).
The Application Development Manual explains very well how to use GStreamer from your code. Try to read it first.
Then you can experiment with the gst-launch tool to build a pipeline, and execute it from your application using the gst_parse_launch function.
You can expose more details of your problem if you want a more helpful answer.
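A minimal sketch of that approach (element names are from the GStreamer 0.10 series; on 1.x you would use videoconvert instead of ffmpegcolorspace, and x264enc/mp4mux assume the corresponding plugins are installed):

#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    GError *error = NULL;

    // Same syntax as on the gst-launch command line.
    GstElement *pipeline = gst_parse_launch(
        "videotestsrc num-buffers=300 ! ffmpegcolorspace "
        "! x264enc ! mp4mux ! filesink location=out.mp4", &error);
    if (!pipeline) {
        g_printerr("Parse error: %s\n", error->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Block until the stream ends or an error occurs.
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}

In your application you would replace videotestsrc with an appsrc element and push your own frames into it.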
I'm trying to get the mpg123 audio decoder to work with Qt on Windows. How do I play the decoded audio data at the right speed with the QtMultimedia module in push mode? Currently I'm using a simple timer to play the audio, but it's not a very efficient way to do it; if I do anything else at the same time, the audio gets all distorted. Is there a better way to send the decoded data to the audio output? It would be nice if anyone could point me to good examples using the QtMultimedia module and the QAudioOutput class. I've tried to figure out the Qt example project "audiooutput", but it seems that it's also using a timer to send audio to the output in push mode... I hope I'm not being too confusing.
I also had to figure that out, and I would suggest using the Phonon framework to do this.
It uses Windows Media Player as host on Windows, QuickTime on Mac, and some KDE stuff on Linux.
So it's pretty platform-independent.
If you need more low-level functionality, you should take a look at an open-source project called PortAudio. It's very easy to use, and you can manipulate or even fill buffers from code.
I used it to build an oscillator.
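A minimal callback-driven PortAudio sketch (the library pulls samples from you, so no timer is needed; the sine generator here stands in for your decoded mpg123 data):

#include <portaudio.h>
#include <cmath>

// PortAudio calls this whenever the device needs more samples.
static int audioCallback(const void *input, void *output,
                         unsigned long frameCount,
                         const PaStreamCallbackTimeInfo *timeInfo,
                         PaStreamCallbackFlags statusFlags, void *userData) {
    float *out = static_cast<float *>(output);
    double *phase = static_cast<double *>(userData);
    for (unsigned long i = 0; i < frameCount; ++i) {
        float s = static_cast<float>(std::sin(*phase));
        *out++ = s; // left
        *out++ = s; // right
        *phase += 2.0 * 3.14159265358979 * 440.0 / 44100.0; // 440 Hz tone
    }
    return paContinue;
}

int main() {
    Pa_Initialize();
    double phase = 0.0;
    PaStream *stream = nullptr;
    // 0 input channels, 2 output channels, 32-bit float samples, 44.1 kHz.
    Pa_OpenDefaultStream(&stream, 0, 2, paFloat32, 44100, 256,
                         audioCallback, &phase);
    Pa_StartStream(stream);
    Pa_Sleep(2000); // play for two seconds
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}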
Hope that helps!
Best,
guitarflow
I want to create a Qt widget that can play an incoming RTP stream carrying H264-encoded video and no audio.
My basic plan for implementation is this:
Create a Phonon MediaSource object (Stream type).
Connect it to a QIODevice subclass that provides the data.
Obtain the video data using either:
The JRTPLIB client library
The GStreamer gstrtpbin plugin. This plugin takes care of depayloading the packets and decoding the video. Maybe this improves the chances that Phonon will recognize the data.
My environment:
Ubuntu 9.10
Qt 4.6
My questions:
Is my approach a good one? Perhaps I'm overlooking a more obvious or simple solution?
I'm currently experiencing this issue: when trying to play the video stream, the state of the MediaObject changes to ErrorState with errorType FatalError. Can anyone tell me what I'm doing wrong?
Edit
One solution I found is using libVLC in combination with Qt, which I learned about in this thread. Here's a code sample for the interested.
I'm still looking for a Phonon-based solution.
Ideally, I would only need to provide an SDP file and the job would be done.
I was able to get it to work using the libVLC solution. I can't guarantee that it is the best solution, though, as I simply stopped looking after that.
Here's a link to the libVLC sample.
The way I understand it, Phonon works (at least on Windows) by Qt providing a Phonon backend plugin for DirectShow (\plugins\phonon_backend\phonon_ds94.dll), or GStreamer in your case. You would then either obtain or write your own DirectShow filter that can accept RTP streams as a source. DirectShow takes care of the decoding, and Phonon will take care of the rendering.
So if the backend works, the application code is as simple as:
// Build the Phonon graph: media object -> video widget.
Phonon::MediaObject *media = new Phonon::MediaObject();
Phonon::VideoWidget *video = new Phonon::VideoWidget();
Phonon::createPath(media, video);
// 'source' is the Phonon::MediaSource wrapping your stream.
media->setCurrentSource(source);
media->play();
It seems that the problem lies with the GStreamer backend accepting RTP as a source. Can you play back that source in standalone GStreamer without any problems?
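For example, something along these lines from a terminal (a hypothetical 0.10-era receive pipeline; the port and caps must match your sender, and ffdec_h264 comes from the gst-ffmpeg plugins):

gst-launch udpsrc port=5000 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264" ! rtph264depay ! ffdec_h264 ! autovideosink

If that plays, the stream itself is fine and the problem is on the Phonon side.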