I want to develop an application to play sounds from web radio streams.
I first did it using the following code:
mRequest.setUrl(QUrl(QString("http://dummy.com/file.mp3")));
mPlayer.setMedia(mRequest);
mPlayer.play();
It works fine, but I want to do it by passing a custom QIODevice, because I have to process the stream in a specific way before playing it.
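What I have in mind is something like this (just a sketch; MyFilterDevice is a hypothetical QIODevice subclass that would do my processing, and the stream overload of setMedia() is exactly the part whose platform support I'm unsure about):

// MyFilterDevice is a hypothetical QIODevice subclass that wraps the
// incoming network data and processes the bytes before the player sees them.
MyFilterDevice *device = new MyFilterDevice(this);
device->open(QIODevice::ReadOnly);

// QMediaPlayer::setMedia() takes an optional QIODevice* stream argument;
// whether the backend actually reads from it depends on the platform.
mPlayer.setMedia(QMediaContent(mRequest), device);
mPlayer.play();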
I found some examples describing how to do that, but they don't seem to work.
I then found an interesting link here describing an issue with streaming on Mac OS using Qt (QMediaplayer streaming from a custom QIODevice with encryption on Mac OS (10.9)) and what seems to explain the problem (Supported media player features).
So, assuming I want to get a stream from a web radio and process the received bytes before playing them, what Qt class would you advise me to use?
Thanks in advance for your help.
I'm making call software using Qt and Qml, and I need to get the microphone feed from Qml running as WebGL to the C++ side, or ideally straight to GStreamer via the server.
I already have a Qt program as the client, using GStreamer to push the audio stream to the server. GStreamer, of course, doesn't reach the WebGL client side, though. I've found that you can get permission to use the mic/camera from Qml, but I haven't found any example actually grabbing the stream from there. I've also checked out WebRTC. It seems like it could work with Qml, and I have found some examples using it with GStreamer, but I haven't been able to get the combination of WebRTC and GStreamer working, even with the examples.
So the full question:
How can I get the audio out of Qml running as WebGL? Is there a way within Qt, or do I have to go through WebRTC? If so, is there a simpler or more beginner-friendly example than Nirbheek's gstwebrtc demos for connecting WebRTC to GStreamer?
Not the answer I wanted, but this ended up working in my case:
As the C++ side is also running another Qt GUI (Qml, to be specific), I can use a WebEngineView with HTML and JavaScript and never bother the C++ implementation with GStreamer for WebRTC. So currently I'm running PeerJS on both sides of the connection, with PeerJS' signaling server in between.

I'd have preferred to use C++ with GStreamer to connect to WebRTC, but I can't find another easy way to connect the browser's audio to Qt.
edit: I apologise, this answer doesn't work in the end. I have been testing the programs on a single computer, so I didn't realize that WebGL-hosted Qml doesn't run WebEngineView's JavaScript on the frontend, but on the backend instead.
I am developing a C++/Qt application which interacts with a Tektronix TDS2002 oscilloscope via USB. The oscilloscope appears as a "USB Test and Measurement device (IVI)".
Currently I use the TekVISA library supplied by the oscilloscope's vendor. It works, but it is huge, old, buggy and poorly maintained. Therefore I would like to bypass the library and interface with the device directly.
So far I have found this simple library: https://github.com/xyphro/WinUsbTmc It is exactly what I am looking for, but it uses libusb, which requires installing a device filter, and it is described as more of a development tool than a customer solution. Do you have any experience with this?
What is the easiest way to interact with a USB Test and Measurement device in Windows/C++/Qt?
Thank you for your suggestions :)
You need a USB driver. My oscilloscope works with the driver included in this VISA package (the driver can be extracted very easily): http://www.keysight.com/main/software.jspx?cc=CZ&lc=eng&nid=-11143.0.00&id=2504667&pageMode=CV I assume all USB TMC devices can use the same driver, but I have no way to check this.
The USB driver can be accessed via standard Windows functions. The guys on this forum were really close:
https://forum.tek.com/viewtopic.php?f=568&t=137573 and also this document was very useful: http://www.ivifoundation.org/downloads/Class%20Specifications/Ivi-6%202_USBTMC_2010-03-23.doc
You cannot write commands to the oscilloscope directly - the data you send and receive carries a header which has to be in the correct format, otherwise the oscilloscope ignores the message. See the reading and writing implementation in this simple library: https://github.com/xyphro/WinUsbTmc I didn't use this library because it depends on libusb, which uses a kind of device filter, and I personally do not like this concept (and in addition I have a genuine working driver).
The data you read also carries a simple header. To make sure the header structure lines up with the input data, you should first flush the input buffer. Then you issue a read request (using a write command - see the WinUsbTmc library above) and finally you receive the data and map the header onto its beginning.
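To make the header format concrete, here is a minimal sketch of the 12-byte Bulk-OUT header from the USBTMC specification linked above (illustrative only, not my production code; the resulting packet goes to whatever WinUSB/driver write call you end up using):

#include <cstdint>
#include <cstring>
#include <vector>

// Build a DEV_DEP_MSG_OUT packet: 12-byte USBTMC header + payload,
// padded to a multiple of 4 bytes as the specification requires.
std::vector<uint8_t> makeDevDepMsgOut(uint8_t tag, const char *cmd, uint32_t len)
{
    std::vector<uint8_t> pkt(12 + ((len + 3) & ~uint32_t(3)), 0);
    pkt[0] = 0x01;                        // MsgID: DEV_DEP_MSG_OUT
    pkt[1] = tag;                         // bTag, must change between transfers
    pkt[2] = static_cast<uint8_t>(~tag);  // bTagInverse
    pkt[4] = len & 0xFF;                  // TransferSize, little-endian,
    pkt[5] = (len >> 8) & 0xFF;           // counts payload bytes only
    pkt[6] = (len >> 16) & 0xFF;
    pkt[7] = (len >> 24) & 0xFF;
    pkt[8] = 0x01;                        // bmTransferAttributes: EOM bit set
    std::memcpy(pkt.data() + 12, cmd, len);
    return pkt;
}

// Usage: auto pkt = makeDevDepMsgOut(1, "*IDN?\n", 6);

A read request (REQUEST_DEV_DEP_MSG_IN, MsgID 0x02) has the same 12-byte layout, with TransferSize meaning the maximum number of bytes you are willing to read back.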
I hope this will help somebody :)
With regards
klasyc
Currently I am trying to use the libspotify library to write a Windows Spotify player. I'm new to audio streaming, but not to video streaming.
I have the basics working for most of the user data and track info, but the problem I'm having is that I can't figure out how to render the raw PCM data on Windows.
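As far as I can tell, libspotify hands me the audio through the music_delivery session callback, roughly like this (a sketch; g_pcmQueue and its tryWrite() are placeholders for whatever buffer the Windows audio API would drain):

#include <libspotify/api.h>

static int SP_CALLCONV music_delivery(sp_session *session,
                                      const sp_audioformat *format,
                                      const void *frames, int num_frames)
{
    if (num_frames == 0)
        return 0; // discontinuity (e.g. after a seek): flush the buffer here

    // 16-bit interleaved PCM: bytes = frames * channels * 2
    const int bytes = num_frames * format->channels * 2;
    const int queued = g_pcmQueue.tryWrite(frames, bytes); // hypothetical ring buffer

    // Return how many frames were consumed; libspotify re-offers the rest.
    return queued / (format->channels * 2);
}

So the real question is what should sit on the other side of that buffer.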
I've been looking at the Jukebox example, but it doesn't compile on Windows, and I would like to keep this app a native Windows app. The example uses OpenAL to render the stream, and I'm wondering if that's the best solution (vs. something like the Windows Audio Session API).
Seems like playing back a track should be straightforward. Am I missing something? It's turning out to be quite difficult.
I did see this post, which is a little discouraging:
Getting the examples in libspotify to work under Windows 7
Any help or direction on this topic would be very much appreciated! Working samples even better. :)
I'm trying to get the mpg123 audio decoder to work with Qt on Windows. How do I play the decoded audio data at the right speed with the QtMultimedia module in push mode? Currently I'm using a simple timer to play the audio, but it's not a very efficient way to do it; if I do anything else at the same time, the audio gets all distorted. Is there a better way to send the decoded data to the audio output? It would be nice if anyone could point me to good examples using the QtMultimedia module and the QAudioOutput class. I've tried to figure out the Qt example project "audiooutput", but it seems that it also uses a timer to send audio to the output in push mode. Hope that I'm not too confusing.
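For reference, my current approach boils down to this (a stripped-down sketch in Qt 5 syntax, not my real code; decodeMore() is a hypothetical wrapper that pulls decoded PCM out of mpg123):

// 16-bit stereo PCM at 44.1 kHz, matching what mpg123 decodes to.
QAudioFormat fmt;
fmt.setSampleRate(44100);
fmt.setChannelCount(2);
fmt.setSampleSize(16);
fmt.setCodec("audio/pcm");
fmt.setByteOrder(QAudioFormat::LittleEndian);
fmt.setSampleType(QAudioFormat::SignedInt);

QAudioOutput *out = new QAudioOutput(fmt, this);
QIODevice *dev = out->start(); // push mode: we write into this device

QTimer *timer = new QTimer(this);
connect(timer, &QTimer::timeout, this, [=]() {
    // Only write what the device can take right now.
    while (out->bytesFree() >= out->periodSize()) {
        const QByteArray pcm = decodeMore(out->periodSize()); // hypothetical mpg123 wrapper
        if (pcm.isEmpty())
            break; // decoder has nothing ready yet
        dev->write(pcm);
    }
});
timer->start(20); // fixed interval: anything that blocks the event loop
                  // delays this timer and the buffer underruns (distortion)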
I also had to figure this out, and I would suggest using the Phonon framework to do it.
It uses Windows Media Player as host on Windows, QuickTime on Mac, and some KDE stuff on Linux, so it's pretty platform-independent.
If you need more low-level functionality, you should take a look at an open-source project called PortAudio. It's very easy to use, and you can manipulate or even fill buffers from code.
I used it to build an oscillator.
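To give an idea, a minimal version of that oscillator looks roughly like this (a sketch against the PortAudio C API, not my original code): PortAudio pulls samples from a callback you provide, so the timing is handled for you.

#include <portaudio.h>
#include <cmath>

// Called by PortAudio whenever the device needs more samples.
static int sineCallback(const void *, void *output, unsigned long frameCount,
                        const PaStreamCallbackTimeInfo *,
                        PaStreamCallbackFlags, void *userData)
{
    float *out = static_cast<float *>(output);
    double *phase = static_cast<double *>(userData);
    for (unsigned long i = 0; i < frameCount; ++i) {
        *out++ = static_cast<float>(std::sin(*phase)); // mono sine wave
        *phase += 2.0 * 3.14159265 * 440.0 / 44100.0;  // 440 Hz at 44.1 kHz
    }
    return paContinue;
}

int main()
{
    double phase = 0.0;
    PaStream *stream = nullptr;
    Pa_Initialize();
    Pa_OpenDefaultStream(&stream, 0, 1, paFloat32, 44100, 256,
                         sineCallback, &phase);
    Pa_StartStream(stream);
    Pa_Sleep(2000); // let it play for two seconds
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}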
Hope that helps!
Best,
guitarflow
I want to create a Qt widget that can play incoming RTP streams where the video is encoded as H264 and contains no audio.
My basic plan for implementation is this (a rough sketch of the wiring follows the list):
Create a Phonon MediaSource object (Stream type).
Connect it with a QIODevice subclass that provides the data
Obtain the video data using either:
The JRTPLIB client library
The GStreamer gstrtpbin plugin. This plugin takes care of depayloading the packets and decoding the video. Maybe this improves the chances that Phonon will recognize the data.
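In code, the wiring of steps 1 and 2 would look roughly like this (RtpDevice is a hypothetical QIODevice subclass fed by one of the two options above):

RtpDevice *rtpDevice = new RtpDevice;   // hypothetical QIODevice subclass
rtpDevice->open(QIODevice::ReadOnly);
Phonon::MediaSource source(rtpDevice);  // becomes a Stream-type MediaSource
// ... then hand "source" to a MediaObject connected to a VideoWidget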
My environment:
Ubuntu 9.10
Qt 4.6
My questions:
Is my approach a good one? Perhaps I'm overlooking a more obvious or simple solution?
I'm currently experiencing this issue: when trying to play the video stream, the state of the MediaObject turns to ErrorState with errorType FatalError. Can anyone tell me what I'm doing wrong?
Edit
One solution I found is using libVLC in combination with Qt, which I learned about in this thread. Here's a code sample for the interested.
I'm still looking for a Phonon-based solution.
Ideally I would only need to provide an SDP file and the job is done.
I was able to get it to work using the libVLC solution. I can't guarantee that this is the best solution, though, as I simply stopped looking after that.
Here's a link to the libVLC sample.
The way I understand it, Phonon works (at least on Windows) by having Qt provide a Phonon backend plugin for DirectShow (\plugins\phonon_backend\phonon_ds94.dll), or GStreamer in your case. You would then either obtain or write your own DirectShow filter which can accept RTP streams as a source. DirectShow takes care of the decoding, and Phonon takes care of the rendering.
So if the backend works, the application code is as simple as:
Phonon::MediaObject *media = new Phonon::MediaObject();
Phonon::VideoWidget *video = new Phonon::VideoWidget();
Phonon::createPath(media, video);
media->setCurrentSource(source);
media->play();
It seems that the problem lies with the GStreamer backend accepting RTP as a source. Can you play back that source in standalone GStreamer without any problems?