I am currently trying to use the libspotify library to write a Windows Spotify player. I'm new to audio streaming but not to video streaming.
I have the basics working for most of the user data and track info, but the problem I'm having is that I can't figure out how to render the raw PCM data on Windows.
I've been looking at the Jukebox example, but it doesn't compile on Windows, and I would like to keep this app a native Windows app. That example uses OpenAL to render the stream, and I'm wondering whether that's the best solution (versus something like the Windows Audio Session API).
Seems like playing back a track should be straightforward. Am I missing something? It's turning out to be quite difficult.
I did see this post, which is a little discouraging:
Getting the examples in libspotify to work under Windows 7
Any help or direction on this topic would be very much appreciated! Working samples even better. :)
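For reference, here is the rough direction I have been experimenting with: a minimal, untested sketch that takes the PCM handed to libspotify's music_delivery callback and queues it with the legacy waveOut API (buffer recycling and error handling omitted for brevity):

#include <windows.h>
#include <mmsystem.h>                       // link against winmm.lib
#include <libspotify/api.h>
#include <cstdlib>
#include <cstring>

static HWAVEOUT g_out = NULL;

int SP_CALLCONV music_delivery(sp_session *sess, const sp_audioformat *fmt,
                               const void *frames, int num_frames)
{
    if (num_frames == 0)
        return 0;                           // discontinuity: flush if needed

    if (g_out == NULL) {                    // lazily open the output device
        WAVEFORMATEX wfx = {0};
        wfx.wFormatTag      = WAVE_FORMAT_PCM;
        wfx.nChannels       = (WORD)fmt->channels;
        wfx.nSamplesPerSec  = fmt->sample_rate;
        wfx.wBitsPerSample  = 16;           // SP_SAMPLETYPE_INT16_NATIVE_ENDIAN
        wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
        wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;
        waveOutOpen(&g_out, WAVE_MAPPER, &wfx, 0, 0, CALLBACK_NULL);
    }

    int bytes = num_frames * fmt->channels * (int)sizeof(short);
    WAVEHDR *hdr = (WAVEHDR *)calloc(1, sizeof(WAVEHDR));
    hdr->lpData         = (LPSTR)malloc(bytes);
    hdr->dwBufferLength = bytes;
    memcpy(hdr->lpData, frames, bytes);
    waveOutPrepareHeader(g_out, hdr, sizeof(WAVEHDR));
    waveOutWrite(g_out, hdr, sizeof(WAVEHDR)); // real code must free hdr later
    return num_frames;                      // tell libspotify we consumed all
}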
Related
For the past few years I have used my own audio engine, written against the waveOutOpen API. Now I would like to port it to Linux / MacOS, and I am thinking about using OpenAL.
What would be an efficient way to port this to Linux without changing too much code?
I mix the audio data from WAV files and apply effects such as 3D positioning, looping, frequency change, and echo.
From what I've seen, OpenAL seems similar, but I don't have a very broad view of audio APIs.
Could someone with audio programming experience point me in the right direction?
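From the documentation I've skimmed so far, my features seem to map onto OpenAL source properties roughly like this (a minimal, untested sketch; echo is not in core OpenAL and would apparently need the EFX extension or custom DSP):

#include <AL/al.h>
#include <AL/alc.h>

void play_positioned(const short *pcm, int bytes, int freq)
{
    ALCdevice  *dev = alcOpenDevice(NULL);           // default output device
    ALCcontext *ctx = alcCreateContext(dev, NULL);
    alcMakeContextCurrent(ctx);

    ALuint buf, src;
    alGenBuffers(1, &buf);
    alBufferData(buf, AL_FORMAT_MONO16, pcm, bytes, freq);

    alGenSources(1, &src);
    alSourcei(src, AL_BUFFER, buf);
    alSource3f(src, AL_POSITION, 1.0f, 0.0f, -2.0f); // 3D position
    alSourcef(src, AL_PITCH, 1.2f);                  // frequency change
    alSourcei(src, AL_LOOPING, AL_TRUE);             // looping
    alSourcePlay(src);                               // mixing is done for you

    // when done: alDeleteSources/alDeleteBuffers, then tear down the context
}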
Simply put, I want my C++/CX XAML Windows 8 app to output continuous synthesized sound (not sound effects). However, I've been looking all over the Web and I cannot figure out how to get the system to feed it buffers of PCM samples (or better, have it ask me for them through callbacks) for them to be played. I would use the old waveOut* APIs, but they are banned in Store app development.
So, what is the simplest way to do this? Please note that I am not interested in playing media files (.wav, .mp3) or web audio streaming.
Thanks in advance.
You need to use WASAPI, which is enabled in Windows Store apps. This article will get you started with how to use the API to render audio. One annoyance is that WASAPI devices generally don't resample for you, so you'll have to be willing to go with what the device is using (probably 44.1 kHz or 48 kHz) or do the resampling yourself (for which you can make use of the Resampler Media Foundation transform).
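To give an idea of the shape of the API, here is a minimal sketch of a shared-mode render loop (shown with the desktop-style IMMDeviceEnumerator activation for brevity; in a Store app you have to obtain the IAudioClient via ActivateAudioInterfaceAsync instead, but the buffer-filling part is the same):

#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

void render_loop()
{
    CoInitializeEx(NULL, COINIT_MULTITHREADED);

    IMMDeviceEnumerator *enu = NULL;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void **)&enu);
    IMMDevice *dev = NULL;
    enu->GetDefaultAudioEndpoint(eRender, eConsole, &dev);

    IAudioClient *client = NULL;
    dev->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void **)&client);

    WAVEFORMATEX *wfx = NULL;
    client->GetMixFormat(&wfx);              // the device picks the format
    client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                       10000000 /* 1 s in 100-ns units */, 0, wfx, NULL);

    IAudioRenderClient *render = NULL;
    client->GetService(__uuidof(IAudioRenderClient), (void **)&render);

    UINT32 total = 0;
    client->GetBufferSize(&total);
    client->Start();

    for (;;) {
        UINT32 padding = 0;
        client->GetCurrentPadding(&padding); // frames still waiting to play
        UINT32 avail = total - padding;
        BYTE *data = NULL;
        render->GetBuffer(avail, &data);
        // Write 'avail' frames of PCM in the mix format into 'data' here;
        // the SILENT flag below just zero-fills for demonstration.
        render->ReleaseBuffer(avail, AUDCLNT_BUFFERFLAGS_SILENT);
        Sleep(10);
    }
}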
I have an SDL app that works under Linux, Mac and Windows. It's something like a media player, and it can play audio just fine. I'd like to add an audio recording feature to it, but I'd like to encode the recording to MP3 in real time. Can anyone point me to an example of how I can use LibLame, LibSoX, or possibly some other library to achieve this?
-- OR --
I'm also willing to rewrite the whole thing into something easier to manage than C++. I've looked at Kivy and at Love2D, which uses Lua, but audio recording is still an issue there. If you know ANY toolkit that:
is cross platform
helps you build GUI using your own graphics
can play AND record mp3 files
ideally can operate under framebuffer (no X Window server under Linux)
Please let me know. I'm looking at Python + Pygame + PyAudio; it can do graphics and output sound, but it still can't record MP3s, only WAVs. Is there any way to integrate LAME into this to make it work?
FMOD can play practically anything and can handle audio input as well, although integrating an entire audio engine may be overkill for your project.
It's free for non-commercial usage.
As for encoding, LAME is definitely the de-facto choice for MP3.
There's a very simple library called lame_enc.dll which wraps LAME's capabilities in a simple API. It's Windows-only, but you could look at its source for a good reference on how to use LAME.
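If you want to call LAME directly instead, its C API is small. A minimal sketch for interleaved 16-bit stereo PCM might look like this (read_pcm_somehow is a hypothetical stand-in for wherever your capture data comes from, e.g. SDL's audio callback):

#include <lame/lame.h>
#include <cstdio>

extern int read_pcm_somehow(short *pcm, int frames); // hypothetical capture helper

int main()
{
    lame_t lame = lame_init();
    lame_set_in_samplerate(lame, 44100);
    lame_set_num_channels(lame, 2);
    lame_set_brate(lame, 128);                 // 128 kbit/s
    lame_init_params(lame);

    FILE *mp3 = fopen("out.mp3", "wb");
    short pcm[2 * 1152];                       // stereo samples, interleaved
    unsigned char buf[16384];

    // In a real app this loop would live in your audio capture callback.
    while (read_pcm_somehow(pcm, 1152)) {
        int n = lame_encode_buffer_interleaved(lame, pcm, 1152,
                                               buf, sizeof(buf));
        fwrite(buf, 1, n, mp3);
    }
    int n = lame_encode_flush(lame, buf, sizeof(buf));
    fwrite(buf, 1, n, mp3);

    fclose(mp3);
    lame_close(lame);
    return 0;
}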
I'm trying to get the mpg123 audio decoder to work with Qt on Windows. How do I play the decoded audio data at the right speed with the QtMultimedia module in push mode? Currently I'm using a simple timer to play the audio, but it's not a very efficient way to do it; if I do anything else at the same time, the audio gets all distorted. Is there a better way to send the decoded data to the audio output? It would be nice if anyone could point me to some nice examples using the QtMultimedia module and the QAudioOutput class. I've tried to figure out the Qt example project "audiooutput", but it seems that it's also using a timer to send audio to the output in push mode. Hope that I'm not too confusing.
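For reference, the alternative I have been reading about is pull mode, where QAudioOutput drains a QIODevice at its own pace so no timer is needed. A rough, untested sketch (Mpg123Device and decode_into are hypothetical wrappers around the decoder):

#include <QtMultimedia/QAudioOutput>
#include <QtMultimedia/QAudioFormat>
#include <QIODevice>

extern qint64 decode_into(char *dst, qint64 maxlen); // hypothetical mpg123 wrapper

class Mpg123Device : public QIODevice
{
protected:
    qint64 readData(char *data, qint64 maxlen)
    {
        // Decode up to 'maxlen' bytes with mpg123 and copy them into 'data'.
        return decode_into(data, maxlen);
    }
    qint64 writeData(const char *, qint64) { return -1; } // output only
};

void start_playback()
{
    QAudioFormat format;                   // must match the decoded PCM
    format.setFrequency(44100);            // Qt 4 naming; setSampleRate in Qt 5
    format.setChannels(2);
    format.setSampleSize(16);
    format.setCodec("audio/pcm");
    format.setByteOrder(QAudioFormat::LittleEndian);
    format.setSampleType(QAudioFormat::SignedInt);

    Mpg123Device *dev = new Mpg123Device;
    dev->open(QIODevice::ReadOnly);

    QAudioOutput *out = new QAudioOutput(format);
    out->start(dev);                       // pull mode: Qt calls readData()
}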
I also had to figure that out, and I would suggest using the Phonon framework to do this.
It uses Windows Media Player as host on Windows, QuickTime on Mac and some KDE stuff on Linux.
So it's pretty platform independent.
If you need more low-level functionality, you should take a look at an open-source project called PortAudio. It's very easy to use, and it lets you manipulate or even fill buffers from your own code.
I used it to build an oscillator.
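A minimal sketch of such an oscillator with PortAudio's callback API (a 440 Hz sine, assuming a 44.1 kHz default output device) looks roughly like this:

#include <portaudio.h>
#include <cmath>

static int sineCallback(const void *, void *output,
                        unsigned long frameCount,
                        const PaStreamCallbackTimeInfo *,
                        PaStreamCallbackFlags, void *userData)
{
    float  *out   = (float *)output;
    double *phase = (double *)userData;
    for (unsigned long i = 0; i < frameCount; ++i) {
        *out++ = (float)(0.2 * sin(*phase));          // mono, modest volume
        *phase += 2.0 * 3.14159265358979 * 440.0 / 44100.0;
    }
    return paContinue;                                // keep the stream running
}

int main()
{
    double phase = 0.0;
    Pa_Initialize();
    PaStream *stream;
    Pa_OpenDefaultStream(&stream, 0 /* inputs */, 1 /* output */, paFloat32,
                         44100, 256, sineCallback, &phase);
    Pa_StartStream(stream);
    Pa_Sleep(2000);                                   // play for two seconds
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}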
Hope that helps!
Best,
guitarflow
I want to create a Qt widget that can play incoming RTP streams where the video is encoded as H264 and contains no audio.
My basic plan for implementation is this:
Create a Phonon MediaSource object (Stream type).
Connect it with a QIODevice subclass that provides the data.
Obtain the video data using either:
The JRTPLIB client library
The GStreamer gstrtpbin plugin. This plugin takes care of depayloading the packets and decoding the video. Maybe this improves the chances that Phonon will recognize the data.
My environment:
Ubuntu 9.10
Qt 4.6
My questions:
Is my approach a good one? Perhaps I'm overlooking a more obvious or simple solution?
I'm currently experiencing this issue: when trying to play the video stream, the state of the MediaObject turns to ErrorState with errorType FatalError. Can anyone tell me what I'm doing wrong?
Edit
One solution I found is using libVLC in combination with Qt, which I learned about in this thread. Here's a code sample for the interested.
I'm still looking for a Phonon-based solution.
Ideally I would only need to provide an SDP file and the job would be done.
I was able to get it to work using the libVLC solution. I can't guarantee that this is the best solution though, as I simply stopped looking after that.
Here's a link to the libVLC sample.
The way I understand it, Phonon works (at least on Windows) through backend plugins: Qt provides a Phonon backend plugin for DirectShow (\plugins\phonon_backend\phonon_ds94.dll), and one for GStreamer in your case. You would then either obtain or write your own DirectShow filter which can accept RTP streams as a source. DirectShow takes care of the decoding, and Phonon will take care of the rendering.
So if the backend works, the application code is as simple as:
Phonon::MediaObject *media = new Phonon::MediaObject();
Phonon::VideoWidget *video = new Phonon::VideoWidget();
Phonon::createPath(media, video);   // route the decoded frames into the widget
media->setCurrentSource(source);    // 'source' is your Phonon::MediaSource
media->play();
It seems that the problem lies with the GStreamer backend accepting RTP as a source. Can you play back that source in standalone GStreamer without any problems?
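For example, with the gstreamer-0.10 tools of that era, something along these lines should show the stream if the plugins are in place (the port, caps, and element names depend on your sender and installed plugins):

gst-launch-0.10 udpsrc port=5000 \
    caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" \
    ! gstrtpbin ! rtph264depay ! ffdec_h264 ! autovideosink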