I am searching for an open-source, cross-platform audio input/output library that is free for commercial use. My search finally boiled down to SDL (1.3) and PortAudio. SDL supports a large number of platforms and sound systems, but it is not very well optimized and it is difficult to extend with any functionality which may be required at later stages. PortAudio, on the other hand, has everything I need. Your opinions on using audio input/output libraries would be appreciated.
You answered it yourself:
"PortAudio has everything I need"
and
"SDL supports a large number of platforms and sound systems, but it is not very well optimized and it is difficult to extend with any functionality which may be required at later stages."
Why look any further?
I have a question about sound.
I have used sound libraries like OpenAL in my projects before.
What I need is insight into which underlying OS APIs these libraries use. Even if each library provides an easy way to manipulate the input file according to its format, the very basic "raw bytes, send to the driver" function has to exist somewhere.
I mean, surely there has to be a default API (one for Windows, another for Linux) that these libraries use. I don't suppose they talk to each sound card's driver directly, so the OS has to somehow do the magic. Am I correct?
Now, I know DirectX supports sound (although I have never used it), but DirectX isn't installed by default on Windows, so I suppose it doesn't count. And I have no idea what happens on Linux; I would like to know about both.
I know it's probably impractical not to use a dedicated library, and I don't really intend to go without one, but I'm curious about this subject.
So please indulge me.
So, for basic graphics it's OpenGL and DirectX... But what about sound?
Thank you in advance.
Each major platform has a number of APIs that allow you to work with sound. On Windows and Mac there are native sound APIs that the OS uses by default, as well as others that are either non-standard or deprecated.
Have a look at the diagram HERE; it has a useful breakdown of many of the major sound APIs across the major platforms.
In addition to each platform's native sound APIs, there are also many cross-platform APIs that encapsulate the way each native API works in order to let you write portable audio software.
For example, there is PortAudio, which is a well-known C API.
There is also RtAudio, a C++ API for sound, though it is written in a somewhat older C++ style in my opinion (it does not take advantage of post-C++11 features).
I am currently working on my own, more modern C++11 audio API, which can be found HERE. At the moment my API is a thin wrapper around PortAudio that lets you work with audio in a more modern C++ way.
Keep in mind that the library you choose will also depend on what kind of audio work you intend to do. All of the libraries listed above deal with real-time audio processing and do not deal with audio files. If working with audio files is what you are after, you could use libsndfile, which is a popular open-source sound-file manipulation library.
From the context of your question it sounds like you have been dealing with sound primarily in a game-dev context. It's worth mentioning that working with sound through the libraries I have suggested so far happens at a much lower level than simply calling one function to play a sound file.
How OpenAL interacts with the OS is a question best answered by reading the OpenAL documentation.
I would also suggest looking into basic digital audio theory as well as digital signal processing. There are many free resources available online for either subject.
EDIT:
In regard to how audio APIs work: the average audio API sits on several layers of abstraction between the programmer and the sound card. Typically the programmer is given a buffer of audio data stored as an array of values, having requested a specific set of parameters that the system will use for playback (sampling rate, buffer size, number of channels). The programmer does their work on the audio data and hands the output buffer over to the API, which in turn eventually hands the buffer's data over to a device driver written specifically for the installed sound card. That driver will have been implemented against an interface specified by the platform the driver targets. This is why, when you install a new sound card, you may be required to install drivers for it: by installing the drivers you give the OS-level API a means of communicating with the device.
(There is a lot more going on than can easily be explained, and I'm sure I have missed a few steps, but I hope this is a good enough explanation to get started.)
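To make the buffer hand-off concrete, here is a minimal sketch of the pattern using PortAudio's callback API: you request the playback parameters up front, and the library repeatedly asks your callback to fill an output buffer. (The 440 Hz tone and all the parameter values are just example choices; error handling is omitted.)

```cpp
#include <cmath>
#include <portaudio.h>

// PortAudio calls this whenever the device needs another buffer of samples.
static int sineCallback(const void* /*input*/, void* output,
                        unsigned long frameCount,
                        const PaStreamCallbackTimeInfo* /*timeInfo*/,
                        PaStreamCallbackFlags /*statusFlags*/,
                        void* userData) {
    float* out = static_cast<float*>(output);
    double* phase = static_cast<double*>(userData);
    for (unsigned long i = 0; i < frameCount; ++i) {
        out[i] = static_cast<float>(std::sin(*phase));     // mono 440 Hz tone
        *phase += 2.0 * 3.14159265358979 * 440.0 / 44100.0;
    }
    return paContinue;   // keep the stream running
}

int main() {
    double phase = 0.0;
    Pa_Initialize();
    PaStream* stream = nullptr;
    // The "requested parameters": 0 inputs, 1 output channel,
    // 32-bit float samples, 44.1 kHz, 256 frames per buffer.
    Pa_OpenDefaultStream(&stream, 0, 1, paFloat32, 44100.0, 256,
                         sineCallback, &phase);
    Pa_StartStream(stream);
    Pa_Sleep(2000);        // let the callback run for two seconds
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
}
```

From here, the API and the driver take care of delivering each filled buffer to the hardware at the right time.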
I am developing an H.264 hardware-accelerated video decoder for Android. So far, I've come across these libraries: MediaCodec, Stagefright, OpenMAX IL, OpenMAX AL and FFmpeg. After a bit of research, I've found that -
I found a great resource on using Stagefright with FFmpeg, but I cannot use FFmpeg because its license is quite restrictive for distributed software. (Or is it possible to drop FFmpeg from this approach?)
I cannot use MediaCodec since it's a Java API, and I would have to call it via JNI from the C++ layer, which is relatively slow and which I am not allowed to do.
I cannot use OpenMAX AL, as it only supports decoding an MPEG-2 transport stream via a buffer queue. This rules out passing raw H.264 NALUs or other media formats for that matter.
Now the only options left are Stagefright and OpenMAX IL. I came to know that Stagefright uses the OpenMAX (OMX) interface. So should I go with Stagefright or OpenMAX IL? Which will be more promising?
Also, I came to know that Android's hardware-accelerated decoders are vendor specific and every vendor has their own OMX interfacing APIs. Is that true? If so, do I need to write a vendor-specific implementation in the case of OpenMAX IL? What about Stagefright: is it hardware agnostic or hardware dependent? If there is no way to write a hardware-independent implementation using Stagefright or OpenMAX IL, I need to support at least Qualcomm's Snapdragon, Samsung's Exynos and Tegra 4.
Note that I need to decode an H.264 Annex B stream, and I expect decoded data after decoding, which I will send to my video rendering pipeline. So basically, I only need the decoder module.
I am really confused. Please help point me in the right direction. Thanks in advance!
EDIT
My software is for commercial purposes and the source code is private as well. And I am also restricted from using FFmpeg by the client. :)
You really should go with MediaCodec. Calling Java methods via JNI does have some overhead, but you should keep in mind what order of magnitude that overhead is. If you called a function per pixel, the overhead of JNI calls might be problematic, but to use MediaCodec you only make a few calls per frame, and the overhead there is negligible.
See e.g. http://git.videolan.org/?p=vlc.git;a=blob;f=modules/codec/omxil/mediacodec_jni.c;h=57df9889c97706436823a4960206e323565e221c;hb=b31df501269b56c65327be181cdca3df48946fb1 as an example of using MediaCodec from C code via JNI. As others have also gone this way, I can assure you that the JNI overhead is not a reason to consider other APIs than MediaCodec.
Using Stagefright or OMX directly is problematic; the ABI differs between platform versions (so you can either target only one version, or compile multiple times targeting different versions and package it all up in one package), and you would have to deal with a lot of device-specific quirks, while MediaCodec should (and on modern versions does) work the same across all devices.
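For reference, this is roughly the per-frame call pattern involved, sketched here with the NDK's AMediaCodec C API, which mirrors the Java MediaCodec API call-for-call. (This native API only appeared later, in API level 21, so treat it as an illustration of how few calls happen per frame, not as a drop-in for older targets.)

```cpp
#include <media/NdkMediaCodec.h>
#include <cstdint>
#include <cstring>

// Feed one Annex B access unit and drain any ready output frame.
// Three or four codec calls per frame -- this is the entire per-frame
// native/JNI surface, hence the negligible overhead.
void decodeFrame(AMediaCodec* codec, const uint8_t* nalu, size_t size,
                 int64_t ptsUs) {
    ssize_t inIdx = AMediaCodec_dequeueInputBuffer(codec, 10000 /*us*/);
    if (inIdx >= 0) {
        size_t capacity = 0;
        uint8_t* buf = AMediaCodec_getInputBuffer(codec, inIdx, &capacity);
        memcpy(buf, nalu, size);
        AMediaCodec_queueInputBuffer(codec, inIdx, 0, size, ptsUs, 0);
    }
    AMediaCodecBufferInfo info;
    ssize_t outIdx = AMediaCodec_dequeueOutputBuffer(codec, &info, 0);
    if (outIdx >= 0) {
        // A decoded frame is ready; hand it to the rendering pipeline here.
        AMediaCodec_releaseOutputBuffer(codec, outIdx, false /*render*/);
    }
}
```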
"I found a great resource on using Stagefright with FFmpeg, but I cannot use FFmpeg because its license is quite restrictive for distributed software."
That's not true. FFmpeg is LGPL, so you can use it in a commercially redistributable application.
However, you might be using modules of FFmpeg that are GPL-licensed, e.g. libx264. In that case, your program must be GPL-compliant.
But even that is not bad for distributing software -- it just means that you need to give your customers (who should be kings, anyway) access to the source code of the application they are paying for, and that you are not allowed to restrict their freedoms. Not a bad deal, IMHO.
"Also, I came to know that Android's hardware-accelerated decoders are vendor specific and every vendor has their own OMX interfacing APIs. Is that true?"
Obviously, yes. If you need hardware acceleration, someone has to write a program that makes your specific hardware accelerate something.
It's recently been suggested to me that I should
"skip low level apis entirely for now and just use some high level libraries built on top of them. Because building on plain opengl/directx is a lot of work, even for an experienced programmer"
Can anyone suggest some, or a place where I can find some that will suit me? Thanks!
It really depends on what you're trying to do. Many people opt for something like SDL (Simple DirectMedia Layer), which is an abstraction over OpenGL/DirectDraw/GDI (and more), but it's still fairly low-level. It works natively with C++.
Simple DirectMedia Layer is a cross-platform multimedia library designed to provide low level access to audio, keyboard, mouse, joystick, 3D hardware via OpenGL, and 2D video framebuffer. It is used by MPEG playback software, emulators, and many popular games, including the award winning Linux port of "Civilization: Call To Power."
http://www.libsdl.org/
One advantage of choosing a very popular library like this one is that there's a TON of example work out there.
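To give a feel for how little boilerplate SDL needs, here is a minimal sketch using the classic SDL 1.2 API (the version most of that example work targets); error checking is omitted:

```cpp
#include <SDL/SDL.h>

int main(int argc, char** argv) {
    SDL_Init(SDL_INIT_VIDEO);
    // SDL picks the platform backend (DirectX/GDI on Windows, X11 on Linux).
    SDL_Surface* screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
    SDL_FillRect(screen, NULL, SDL_MapRGB(screen->format, 0, 0, 128));
    SDL_Flip(screen);   // present the frame
    SDL_Delay(2000);    // keep the window up for two seconds
    SDL_Quit();
    return 0;
}
```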
IMO, in terms of abstracting away the platform you're working on (i.e., getting a context, getting keyboard/mouse input, etc.), GLFW beats all.
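For instance, a complete context-plus-input skeleton (sketched here against the modern GLFW 3 API) fits in a dozen lines:

```cpp
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return 1;
    // One call gives you a window plus an OpenGL context.
    GLFWwindow* win = glfwCreateWindow(640, 480, "demo", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);
    while (!glfwWindowShouldClose(win)) {
        if (glfwGetKey(win, GLFW_KEY_ESCAPE) == GLFW_PRESS)
            glfwSetWindowShouldClose(win, 1);   // keyboard input, portably
        glClear(GL_COLOR_BUFFER_BIT);           // ... draw your frame here
        glfwSwapBuffers(win);
        glfwPollEvents();                       // pump keyboard/mouse events
    }
    glfwTerminate();
}
```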
But people are often looking for more than a way to open a window. More often than not, what they're looking for is an implementation of what's called a scene graph. A good one will abstract just about everything one can do in GL into an intuitive tree structure (technically a graph, but it's often easier to consider it a tree). And nearly all the libraries in this category provide context-opening, model-loading, and debugging capabilities of their own, for completeness.
Some of the popular libraries in this category are OpenSceneGraph and Ogre3D. Horde3D looks promising as well but it hasn't had an 'official' release yet.
I need to analyze the system's output sound at runtime. OS: Linux. The first thing I need is to get the different frequency values. Programming language: C++.
One semi-portable* way that comes to mind for grabbing all the sound from multiple sources is PulseAudio. One of the PulseAudio modules (module-pipe-sink) provides a pipe sink. Hopefully all your outputs will be PulseAudio-compatible; nearly everything that plays nicely with ALSA should be fine. You should then be able to just read from that pipe to get your input.
You can then use a library like FFTW (first suggested by Thomas' answer) for the fast Fourier transform, assuming this is what you mean by "get the frequency values".
*In this case, semi-portable means working with many sound cards, not working with different OSes, though there is a WinXP version of PulseAudio (I haven't tried it myself).
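As a rough sketch of how the two pieces fit together (the FIFO path below is whatever file= argument you gave module-pipe-sink, and 16-bit little-endian stereo at 44.1 kHz is assumed to match the sink's format):

```cpp
#include <cstdio>
#include <cstdint>
#include <cmath>
#include <fftw3.h>

int main() {
    const int N = 4096;                       // samples per analysis block
    // Example path; use the file= you configured for module-pipe-sink.
    FILE* pipe = fopen("/tmp/pulse.fifo", "rb");
    if (!pipe) return 1;

    int16_t raw[N * 2];                       // interleaved stereo s16le
    double* in = fftw_alloc_real(N);
    fftw_complex* out = fftw_alloc_complex(N / 2 + 1);
    fftw_plan plan = fftw_plan_dft_r2c_1d(N, in, out, FFTW_ESTIMATE);

    if (fread(raw, sizeof(int16_t), N * 2, pipe) == N * 2) {
        for (int i = 0; i < N; ++i)           // mix down to mono, normalize
            in[i] = (raw[2 * i] + raw[2 * i + 1]) / 65536.0;
        fftw_execute(plan);
        for (int k = 0; k < N / 2 + 1; ++k) { // magnitude per frequency bin
            double mag = std::hypot(out[k][0], out[k][1]);
            printf("%8.1f Hz : %f\n", k * 44100.0 / N, mag);
        }
    }

    fftw_destroy_plan(plan);
    fftw_free(in);
    fftw_free(out);
    fclose(pipe);
}
```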
The question is a bit vague, but here's some potentially useful information.
PCM-encoded WAV files are pretty easy to parse; you don't really need a library for that.
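For instance, the canonical 44-byte PCM WAV header can be read as a single struct (a sketch that assumes the simple canonical chunk layout and skips the chunk-walking a fully robust parser would do):

```cpp
#include <cstdint>
#include <cstdio>

// Canonical PCM WAV header (RIFF + fmt + data chunks, no extensions).
#pragma pack(push, 1)
struct WavHeader {
    char     riff[4];        // "RIFF"
    uint32_t riffSize;
    char     wave[4];        // "WAVE"
    char     fmt[4];         // "fmt "
    uint32_t fmtSize;        // 16 for plain PCM
    uint16_t audioFormat;    // 1 = PCM
    uint16_t numChannels;
    uint32_t sampleRate;
    uint32_t byteRate;
    uint16_t blockAlign;
    uint16_t bitsPerSample;
    char     data[4];        // "data"
    uint32_t dataSize;       // bytes of PCM samples that follow
};
#pragma pack(pop)

int main() {
    FILE* f = fopen("input.wav", "rb");   // example filename
    if (!f) return 1;
    WavHeader h;
    if (fread(&h, sizeof h, 1, f) != 1) return 1;
    printf("%u Hz, %u channel(s), %u bits, %u data bytes\n",
           h.sampleRate, h.numChannels, h.bitsPerSample, h.dataSize);
    fclose(f);                             // the samples follow the header
}
```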
For the frequency analysis, I would use FFTW to do the Fourier transform.
I am going to start a game in about 3 weeks, and I would really like the game to also run on at least one other platform (Linux, macOS), but my team thinks that's a lot of work. I am up for it, but I wanted to know what things I should watch out for that won't port to Linux (apart from Windows-specific APIs like DirectSound)?
I've been reading online, and the Windows "_s" functions like sprintf_s appear to exist only on Windows; is this correct, or are they implemented on Linux as well?
No, the _s functions are NOT implemented in the standard GNU C library.
(At least, grepping the include files for 'sprintf_s' turns up nothing at all.)
It might be worth looking at cross-platform libraries like Boost and APR to do some of the heavy lifting.
A sample of specific things to look for (a sketch for the threading and synchronization items follows the list):
Input/Output (DirectX / SDL / OpenGL)
Win32/windows.h functionality (CreateThread, etc.)
Using Windows controls in the UI
Synchronization primitives (critical sections, events)
File paths (directory separators, root names)
Wide char implementations (wchar_t is 16-bit on Windows, 32-bit on Linux)
No MFC support on Linux (CString, etc.)
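For the threading and synchronization items, a minimal sketch of the portable route: C++11 standard primitives in place of the Win32 ones.

```cpp
#include <mutex>
#include <thread>

std::mutex m;              // replaces a Win32 CRITICAL_SECTION
int sharedCounter = 0;

void worker() {
    std::lock_guard<std::mutex> lock(m);   // EnterCriticalSection equivalent
    ++sharedCounter;
}

int main() {
    std::thread t1(worker);   // replaces CreateThread
    std::thread t2(worker);
    t1.join();
    t2.join();
}
```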
If I were you, I would use one of the available frameworks out there that handle platform independence.
I wrote a 3D game as a hobby project with a friend of mine, with the server in Java and the clients running on Windows and Linux. We ended up using Ogre as the 3D engine and OpenAL as the sound engine, both platform independent and available under the LGPL.
The only things I really had to write separately were the socket handling, reading the config from the file system, and the initialization of the system. Compared to the rest of the program, that was almost nothing.
The most time-consuming part will be setting up the entire project to compile under Windows and Linux (or Mac), especially if you're concentrating on one and only occasionally checking the other for problems. If someone on your team checks regularly for these problems as they are introduced, you won't have much overhead from that either.
All in all, compared to programming the game itself, adapting it to different platforms is almost no effort if all the frameworks used are well-written, platform-independent systems.
Try to encapsulate any non-standard extensions like DirectX, OpenGL, SDL, etc. Then you only have to rewrite those parts for each platform.
I would also make it playable on one OS before even thinking of porting.
As for the "safe" functions: they are non-standard, and only almost safe :)
Endianness is something to look out for.
Endianness is the order of the bytes within a multi-byte value. Some platforms are big-endian while some are little-endian.
This can affect how cross-platform your program is, but the biggest impact is on network communications: you have to convert from one endianness to the other before sending or after receiving a network message.
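A minimal sketch of that conversion with the standard htonl/ntohl helpers (the value being sent is just an example):

```cpp
#include <cstdint>
#include <cstdio>
#ifdef _WIN32
#include <winsock2.h>    // htonl/ntohl on Windows
#else
#include <arpa/inet.h>   // htonl/ntohl on POSIX systems
#endif

int main() {
    uint32_t score = 0x12345678;    // example value to transmit
    uint32_t wire = htonl(score);   // host order -> network (big-endian) order
    // ... send `wire` over the socket; the receiver then undoes it:
    uint32_t back = ntohl(wire);    // network order -> receiver's host order
    std::printf("%08x -> %08x -> %08x\n", score, wire, back);
}
```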
If you focus on gameplay, design a game, and then implement it, the porting should not be especially onerous. If you implement it simultaneously on several platforms, it should be straightforward.
But if you focus on effects, design something that you feel is going to "blow the others out of the water," and then try to paste a game idea on top, you are doomed.
So really it is up to you.
I don't know much about Windows APIs, but I would set up a daily (or on-commit) fully automatic build system for all the platforms you want to support. If you develop something on your Windows box that doesn't work on the others, your build system should notify you with "failed build on platform X; see logfile/attachment/whatnot for details". It will catch a lot of cross-platform issues. Unit tests will help as well.
Whether or not targeting multiple platforms from the start is a good idea is another question.
Personally I'd start developing on another platform and then see about porting it to windows at a later time ;-)
Just remember that you are creating a model of a game that does not depend on the details of any operating system. Your game depends on state management and algorithms, which don't depend on the OS. The key is to write your game logic without dependencies on specific libraries, which means a lot of encapsulation.
You shouldn't call sprintf_s directly; you should write a routine, class, or macro that can be swapped based on the platform. Don't use DWORD when you can use a class or typedef that can be tailored to different platforms.
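For example, a thin wrapper along these lines (the function name is just an illustration) keeps the platform difference in one place:

```cpp
#include <cstdarg>
#include <cstddef>
#include <cstdio>

// Hypothetical wrapper: game code calls one name, and the
// platform-specific "safe" variant is chosen at compile time.
int my_snprintf(char* buf, size_t size, const char* fmt, ...) {
    va_list args;
    va_start(args, fmt);
#ifdef _WIN32
    int n = vsnprintf_s(buf, size, _TRUNCATE, fmt, args);  // MSVC secure CRT
#else
    int n = vsnprintf(buf, size, fmt, args);               // standard C
#endif
    va_end(args);
    return n;
}
```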
For instance, if you were making a football game, the algorithms for throwing the ball, running, tackling, and the positions of the players could be written completely in standard C++ without platform dependencies. If your encapsulation is good, you could dump the state of your game to a file and display it separately with a rendering program.
If you truly want to do cross-platform development easily, I would suggest using one of the already-built cross-platform engines like Unity, or one of the Garage Games products such as Torque Game Builder (2D).
I have virtually zero experience with either, so I can't tell you which is better, but the Torque Game Builder demo couldn't get through the first tutorial without problems, and they don't answer tech-support questions in their forums as they claim to, so I can say avoid them if you are a novice at game design like myself. The big selling point of Garage Games was supposed to be their great support, but I saw zero support and in fact only a bunch of "Hey, anybody here?" posts with no answers, so I guess they have pretty much given up on supporting their products.
http://unity3d.com/
http://www.garagegames.com/
I'm surprised nobody has mentioned libSDL and OpenGL, because most cross-platform games were written using those libraries.
If your game is 2D, you can use libSDL. A good example of a game written with it is The Battle for Wesnoth. SDL uses DirectX on Windows; it's just a thin wrapper over it.
If your game is 3D, use OpenGL. For example, Quake 3 uses that library. You can find tons of examples and documentation for it. Of course, there are many libraries that wrap OpenGL so you don't have to do the low-level stuff; look into OGRE, Crystal Space, etc.
As for the compatibility of basic C/C++ libraries and functions, it's best to install some Linux distribution and simply check the man page for each function to see whether it exists. Or you can look it up on the Internet.