Hardware-accelerated audio decoding with OpenAL - C++

Is it possible to use the iPhone's hardware accelerated decoding of mp3s and AAC when using the OpenAL library?
I suppose there are two possible approaches if this is possible.
iPhone specific OpenAL extensions.
iPhone APIs to decode audio into raw bytes.
I have two specific use cases.
Completely decode a short sound bite.
Piecewise decode a larger sound file so it can be streamed into OpenAL rather than loaded all at once.
Update
Boy! No one's got an answer for this? Does Apple's NDA stifle these kinds of questions? What's going on? Surely someone else using OpenAL has wanted better audio performance.

There is at least one hardware (or hardware-assisted) decoder in all iPhone device models. It can be used to convert mp3 and AAC files into raw PCM bytes via the Audio Queue Services API. From there you can process those bytes or hand them to OpenAL.
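For the short-sound-bite case, here is a minimal sketch of the decode-to-PCM step, feeding the result to an OpenAL buffer. It uses the ExtAudioFile API from AudioToolbox rather than Audio Queue Services (both sit on top of the same Core Audio codecs); error handling is trimmed, and the fixed 44.1 kHz stereo 16-bit client format is an assumption:

    #include <AudioToolbox/AudioToolbox.h>
    #include <OpenAL/al.h>
    #include <vector>

    // Decode a compressed file (mp3/AAC) into 16-bit PCM and hand it to an OpenAL buffer.
    // The caller supplies a CFURLRef to the audio file and an already-generated AL buffer.
    bool LoadIntoOpenALBuffer(CFURLRef url, ALuint alBuffer)
    {
        ExtAudioFileRef file = NULL;
        if (ExtAudioFileOpenURL(url, &file) != noErr)
            return false;

        // Ask Core Audio to convert to interleaved signed 16-bit PCM, 44.1 kHz stereo.
        AudioStreamBasicDescription pcm = {0};
        pcm.mSampleRate       = 44100;
        pcm.mFormatID         = kAudioFormatLinearPCM;
        pcm.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        pcm.mChannelsPerFrame = 2;
        pcm.mBitsPerChannel   = 16;
        pcm.mBytesPerFrame    = pcm.mChannelsPerFrame * (pcm.mBitsPerChannel / 8);
        pcm.mFramesPerPacket  = 1;
        pcm.mBytesPerPacket   = pcm.mBytesPerFrame;
        ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat, sizeof(pcm), &pcm);

        // Pull converted PCM out of the file in chunks.
        std::vector<char> data;
        char chunk[32 * 1024];
        for (;;) {
            AudioBufferList bufs;
            bufs.mNumberBuffers = 1;
            bufs.mBuffers[0].mNumberChannels = pcm.mChannelsPerFrame;
            bufs.mBuffers[0].mDataByteSize   = sizeof(chunk);
            bufs.mBuffers[0].mData           = chunk;

            UInt32 frames = sizeof(chunk) / pcm.mBytesPerFrame;
            if (ExtAudioFileRead(file, &frames, &bufs) != noErr || frames == 0)
                break;
            data.insert(data.end(), chunk, chunk + frames * pcm.mBytesPerFrame);
        }
        ExtAudioFileDispose(file);

        if (data.empty())
            return false;
        alBufferData(alBuffer, AL_FORMAT_STEREO16, &data[0], (ALsizei)data.size(), (ALsizei)pcm.mSampleRate);
        return alGetError() == AL_NO_ERROR;
    }

For the streaming use case, the same ExtAudioFileRead loop can fill a handful of smaller OpenAL buffers that are rotated with alSourceQueueBuffers/alSourceUnqueueBuffers instead of one big buffer.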

AFAIK, there is no hardware audio decoder in the original iPhone, the 3G, or the 3GS. This might have changed with the iPhone 4, but I have not heard anything to make me believe so.

Related

UWP Hardware Video Decoding - DirectX12 vs Media Foundation

I would like to use DirectX 12 to load each frame of an H264 file into a texture and render it. There is, however, little to no information on doing this, and the Microsoft website has only limited, superficial documentation.
Media Foundation has plenty of examples and offers hardware-enabled decoding. Is Media Foundation a wrapper around DirectX, or is it doing something else?
If not, how much less optimised would the Media Foundation equivalent be compared with a DX12 approach?
Essentially, what are the big differences between Media Foundation and DirectX12 Video Decoding?
I am already using DirectX 12 in my engine so this is specifically regarding DX12.
Thanks in advance.
Hardware video decoding comes from the DXVA (DXVA2) API. Its DirectX 11 evolution is the D3D11 Video Device part of the D3D11 API. Microsoft provides wrappers over the hardware-accelerated decoders in the form of Media Foundation API primitives, such as the H.264 Video Decoder. This decoder offers use of hardware decoding capabilities as well as a fallback to software decoding.
Note that even though Media Foundation is available for UWP development, your options are limited and you are not offered primitives like the transform mentioned above directly. However, if you use higher-level APIs (the Media Foundation Source Reader API in particular), you can leverage hardware-accelerated video decoding in your UWP application.
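As a rough illustration of that Source Reader path, here is a sketch of the wiring that opts into hardware-accelerated decoding (it assumes MFStartup has already been called and that a Direct3D 11 device is available; CreateHardwareReader is just a made-up helper name):

    #include <mfapi.h>
    #include <mfidl.h>
    #include <mfreadwrite.h>
    #include <d3d11.h>
    // link against mfplat.lib, mfreadwrite.lib, mfuuid.lib

    HRESULT CreateHardwareReader(ID3D11Device* d3d11Device, const wchar_t* url, IMFSourceReader** outReader)
    {
        UINT resetToken = 0;
        IMFDXGIDeviceManager* manager = NULL;
        HRESULT hr = MFCreateDXGIDeviceManager(&resetToken, &manager);
        if (FAILED(hr)) return hr;

        // Hand the D3D11 (not D3D12) device to Media Foundation.
        hr = manager->ResetDevice(d3d11Device, resetToken);
        if (FAILED(hr)) { manager->Release(); return hr; }

        IMFAttributes* attrs = NULL;
        MFCreateAttributes(&attrs, 2);
        attrs->SetUnknown(MF_SOURCE_READER_D3D_MANAGER, manager);
        attrs->SetUINT32(MF_SOURCE_READER_ENABLE_ADVANCED_VIDEO_PROCESSING, TRUE);

        // The reader picks a hardware-capable decoder MFT when one is available.
        hr = MFCreateSourceReaderFromURL(url, attrs, outReader);
        attrs->Release();
        manager->Release();
        return hr;
    }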
The Media Foundation implementation offers interoperability with Direct3D 11, in particular for video encoding/decoding, but not with Direct3D 12. You will not be able to use Media Foundation and DirectX 12 together out of the box; you will have to implement Direct3D 11/12 interop to transfer the data between the APIs (or, where applicable, use shared access to the same GPU data).
Alternatively, you will have to step down to the underlying ID3D12VideoDevice::CreateVideoDecoder, which is a further evolution of the DXVA2 and Direct3D 11 video decoding APIs mentioned above, with similar usage.
Unfortunately, while Media Foundation is notorious for poor documentation and a difficult start to development, Direct3D 12 video decoding has practically zero information, and you will have to enjoy the feeling of being a pioneer.
Either way, all of the above are relatively thin wrappers over the same hardware-assisted video decoding implementation, with the same great performance. I would recommend taking the Media Foundation path and implementing 11/12 interop if/when it becomes necessary.
You will get a lot of D3D12 errors caused by Media Foundation if you pass a D3D12 device to IMFDXGIDeviceManager::ResetDevice.
The errors can be avoided if you call IMFSourceReader::ReadSample slowly. It doesn't matter whether you use the method in sync or async mode, and how slowly depends on the machine running the program: I use ::Sleep(1) between ReadSample calls when playing a network stream in sync mode, and ::Sleep(3) when playing a local mp4 file on my machine.
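A sketch of what that paced synchronous loop can look like, using a reader created as in the earlier sketch; the ::Sleep(1) value is simply the figure quoted above and will likely need tuning per machine:

    // Synchronous ReadSample loop with a small pause between calls.
    for (;;) {
        DWORD streamIndex = 0, flags = 0;
        LONGLONG timestamp = 0;
        IMFSample* sample = NULL;
        HRESULT hr = reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0,
                                        &streamIndex, &flags, &timestamp, &sample);
        if (FAILED(hr) || (flags & MF_SOURCE_READERF_ENDOFSTREAM))
            break;
        if (sample) {
            // ... hand the decoded sample to your renderer ...
            sample->Release();
        }
        ::Sleep(1);   // pacing value is machine-dependent, as described above
    }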
Don't ask who I am. My name is 'the pioneer'.

C++ ffmpeg real-time video transmission

I am a student currently working on my final project. Our project focuses on research into a new type of network coding. My task now is to set up a real-time video transmission to test the network coding. I have learned some ffmpeg and OpenCV and have finished a C++ program that divides the video into frames and sends it frame by frame. However, done this way, the transmitted data (the frames) is much larger than the original video file. My professor advised me to find the keyframes and the inter-frame differences of the video (MJPEG format), so that only the keyframes and inter-frame diffs are transmitted instead of all the frames with their large amount of redundancy, thereby reducing the transmitted data. I have no idea how to do this in C++ with ffmpeg or OpenCV. Can anyone give any advice?
For my old program, please refer to: C++ Video streaming and transmission
I would recommend against using ffmpeg/libav* at all, and instead recommend using libx264 directly. With x264 you get greater control over NALU slice sizes, as well as lower encoder latency by utilizing callbacks.
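For illustration, here is a sketch of how such an encoder might be configured through the x264 API; the specific numbers (1400-byte slices, baseline profile, veryfast preset) are assumptions chosen for a UDP-style transport, not values taken from the answer:

    #include <stdint.h>
    extern "C" {
    #include <x264.h>
    }

    // Configure x264 for low-latency streaming with bounded slice (NAL unit) sizes.
    x264_t* OpenStreamingEncoder(int width, int height, int fps)
    {
        x264_param_t p;
        x264_param_default_preset(&p, "veryfast", "zerolatency");
        p.i_width   = width;
        p.i_height  = height;
        p.i_fps_num = fps;
        p.i_fps_den = 1;
        p.i_csp     = X264_CSP_I420;
        p.i_slice_max_size = 1400;   // keep each NAL unit under a typical UDP payload size
        p.b_repeat_headers = 1;      // resend SPS/PPS with every keyframe
        p.b_annexb         = 1;      // Annex-B start codes, convenient for raw transport
        x264_param_apply_profile(&p, "baseline");
        return x264_encoder_open(&p);
    }

    // Per frame: fill an x264_picture_t with I420 data, then call
    //     x264_nal_t* nals; int nnal;
    //     x264_encoder_encode(enc, &nals, &nnal, &picIn, &picOut);
    // and send each nals[i].p_payload / nals[i].i_payload over the network.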
Two questions whose answers may already help you:
How are you interfacing with ffmpeg from C++? ffmpeg generally refers to the command-line tool; from C++ you generally use the individual libraries that make up ffmpeg. You should use libavcodec to encode your frames and possibly libavformat to packetize them into a container format (see the sketch after this list).
Which codec do you use?
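If you stay with the ffmpeg libraries rather than raw x264, the libavcodec side of question 1 looks roughly like this (modern send/receive API on a recent FFmpeg; OpenEncoder is a made-up helper name, and the zerolatency tune assumes the libx264 backend):

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>
    }

    // Open an H.264 encoder for raw YUV420P frames.
    AVCodecContext* OpenEncoder(int w, int h, int fps)
    {
        const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        AVCodecContext* ctx = avcodec_alloc_context3(codec);
        ctx->width    = w;
        ctx->height   = h;
        AVRational tb = { 1, fps };
        ctx->time_base = tb;
        ctx->pix_fmt  = AV_PIX_FMT_YUV420P;
        ctx->gop_size = fps;                                   // roughly one keyframe per second
        av_opt_set(ctx->priv_data, "tune", "zerolatency", 0);  // libx264-specific option
        if (avcodec_open2(ctx, codec, NULL) < 0) return NULL;
        return ctx;
    }

    // Per frame:
    //     avcodec_send_frame(ctx, frame);
    //     AVPacket* pkt = av_packet_alloc();
    //     while (avcodec_receive_packet(ctx, pkt) == 0) {
    //         /* transmit pkt->data, pkt->size */
    //         av_packet_unref(pkt);
    //     }

The encoder then handles the keyframe/inter-frame split for you: packets flagged AV_PKT_FLAG_KEY carry keyframes, everything else carries only the differences.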

How to output multiple sounds with SDL?

I have a library to decode some audio data into PCM and it works fine with Alsa.
I chose SDL to abstract the audio output because SDL is platform-independent. I rewrote the code to use SDL to output the audio and it works. However, I want to output multiple sounds simultaneously, and SDL only supports one sound at a time.
What should I do?
I can use another audio library if it is free, lightweight and supports Linux, Windows XP and Android 2.3.
EDIT: Instead of decoding the entire audio data and filling the audio buffer all at once, I have to fill the buffer partially on each iteration. A loop or a callback function that supplies the next audio frames to play is the way to do this.
SDL_Mixer is the way to go if you're using SDL. It can play multiple sounds at a time, although only one music track.
You can download SDL_Mixer, and get documentation, at http://www.libsdl.org/projects/SDL_mixer/ (or google SDL_Mixer).
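A minimal sketch of that approach (shot.wav and blip.wav are placeholder files, and the chunk size and channel count are arbitrary choices):

    #include <SDL.h>
    #include <SDL_mixer.h>

    int main(int argc, char** argv)
    {
        SDL_Init(SDL_INIT_AUDIO);
        // 44.1 kHz, default sample format, stereo, 1024-sample chunks.
        Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 1024);
        Mix_AllocateChannels(16);                    // up to 16 sounds mixed at once

        Mix_Chunk* shot = Mix_LoadWAV("shot.wav");
        Mix_Chunk* blip = Mix_LoadWAV("blip.wav");
        Mix_PlayChannel(-1, shot, 0);                // -1 = first free channel
        Mix_PlayChannel(-1, blip, 0);                // both sounds play simultaneously

        SDL_Delay(2000);
        Mix_FreeChunk(shot);
        Mix_FreeChunk(blip);
        Mix_CloseAudio();
        SDL_Quit();
        return 0;
    }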

Best way to load in a video and grab images using C++

I am looking for a fast way to load a video file and create images from it at certain intervals (every second, every minute, every hour, etc.).
I tried using DirectShow, but it just ran too slow for me to open the video file, move to a certain location, grab the data and save it out to an image, even when I disabled the reference clock. I tried OpenCV, but it has trouble opening AVI files unless I know the exact codec information, so if there is a way to get the codec information out of OpenCV I may give it another shot. I tried FFMPEG, but I don't have as much control over it as I would wish.
Any advice would be greatly appreciated. This is being developed on a Windows box since it has to be hosted on a Windows box.
MPEG-4 is not an intra-coded format, so you can't just jump to a random frame and decode it on its own; most frames only encode the differences from one or more other frames. I suspect your decoding is slow because when you land on such a frame, several other frames it depends on have to be decoded first.
One way to improve performance would be to determine which frames are keyframes (or sometimes also called 'sync' points) and limit your decoding to those frames, since these can be decoded on their own.
I'm not very familiar with DirectShow capabilities, but I would expect it has some API to expose sync points.
Also, I should mention that the QuickTime SDK on Windows is possibly another good option for decoding frames from movies. You should first check that your AVI movies play correctly in the QuickTime Player. The QT SDK does expose sync points; see the section "Finding Interesting Times" in the QT SDK documentation.
ffmpeg's libavformat might work for ya...
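To tie the keyframe advice and the libavformat suggestion together, here is a rough sketch (recent FFmpeg API, minimal error handling) that seeks to the keyframe at or before a target time using AVSEEK_FLAG_BACKWARD and decodes one frame from there; GrabFrameAt is a made-up helper name:

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }

    bool GrabFrameAt(const char* path, double seconds)
    {
        AVFormatContext* fmt = NULL;
        if (avformat_open_input(&fmt, path, NULL, NULL) < 0) return false;
        avformat_find_stream_info(fmt, NULL);

        const AVCodec* dec = NULL;
        int v = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
        AVCodecContext* ctx = avcodec_alloc_context3(dec);
        avcodec_parameters_to_context(ctx, fmt->streams[v]->codecpar);
        avcodec_open2(ctx, dec, NULL);

        // AVSEEK_FLAG_BACKWARD lands on the nearest keyframe before the target,
        // so decoding can start cleanly without needing earlier reference frames.
        int64_t ts = (int64_t)(seconds / av_q2d(fmt->streams[v]->time_base));
        av_seek_frame(fmt, v, ts, AVSEEK_FLAG_BACKWARD);
        avcodec_flush_buffers(ctx);

        AVPacket* pkt = av_packet_alloc();
        AVFrame* frame = av_frame_alloc();
        bool got = false;
        while (!got && av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == v && avcodec_send_packet(ctx, pkt) == 0)
                got = (avcodec_receive_frame(ctx, frame) == 0);  // frame->data[] now holds the image
            av_packet_unref(pkt);
        }

        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        avformat_close_input(&fmt);
        return got;
    }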

on-line recording with ffmpeg

Is this possible? Has anyone tried doing online recording of audio and video (of the screen) with ffmpeg? I have read everything Google can find about ffmpeg on the net. The recording variant I tried loads the CPU to 100%, but it still can't convert frames at a speed that keeps up with how fast the frames are being recorded; the audio comes out fine, but the video drops frames.
Recording audio/video of the screen is possible with ffmpeg; people do this for screencasting. Performance depends on the hardware in use, the codecs used and various other factors.
See this post (or this one) for some further advice and command line use.
This pretty much depends on the codec used, the frame size/complexity and obviously the capabilities of the computer doing the compression. You can try a low complexity codec like MJPEG, which might improve your experience.
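For reference, a capture command along those lines might look like this (assuming a Linux desktop with X11 and ALSA; on Windows the gdigrab and dshow inputs play the same roles, and the sizes/rates are placeholders):

    ffmpeg -f x11grab -framerate 25 -video_size 1280x720 -i :0.0 \
           -f alsa -i default \
           -c:v mjpeg -q:v 5 -c:a pcm_s16le capture.avi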