Swift 3 AudioToolbox: how do I allocate a PCM playback buffer with AudioQueueAllocateBuffer?

I am following https://github.com/AlesTsurko/LearningCoreAudioWithSwift2.0/tree/master/CH05_Player
to play back a frequency, but that example is written in Swift 2.
"Get microphone input using Audio Queue in Swift 3" resolved many of the porting issues, but it covers recording.
I am stuck at allocating buffers for the audio queue:
var ringBuffers = [AudioQueueBufferRef](repeating:nil, count:3)
AudioQueueAllocateBuffer(inQueue!, bufferSize, &ringBuffers[0])
This gives the following errors:
main.swift:152:29: Expression type '[AudioQueueBufferRef]' is ambiguous without more context
main.swift:153:20: Cannot pass immutable value as inout argument: implicit conversion from 'AudioQueueBufferRef' to 'AudioQueueBufferRef?' requires a temporary
--After Spads' answer--
var ringBuffers = [AudioQueueBufferRef?](repeating:nil, count:3)
let status = AudioQueueAllocateBuffer(inQueue!, bufferSize, &ringBuffers[0])
print("\(status.description)")
prints
vm_map failed: 0x4 ((os/kern) invalid argument)
4
The audio stream description I have used is:
inFormat = AudioStreamBasicDescription(
    mSampleRate: Double(sampleRate),
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: UInt32(numChannels * MemoryLayout<UInt16>.size),
    mFramesPerPacket: 1,
    mBytesPerFrame: UInt32(numChannels * MemoryLayout<UInt16>.size),
    mChannelsPerFrame: UInt32(numChannels),
    mBitsPerChannel: UInt32(8 * MemoryLayout<UInt16>.size),
    mReserved: UInt32(0)
)
AudioQueueNewOutput(&inFormat, AQOutputCallback, &player, nil, nil, 0, &inQueue)

Shouldn't you have an array of AudioQueueBufferRef? instead of AudioQueueBufferRef?
var ringBuffers = [AudioQueueBufferRef?](repeating:nil, count:3)
AudioQueueAllocateBuffer(inQueue!, bufferSize, &ringBuffers[0])
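Building on that, here is a minimal sketch of allocating all three buffers with the optional-element array, assuming the inQueue and inFormat values from the question; the 0.5-second sizing of bufferSize is an illustrative assumption, not from the original post. A vm_map failed: 0x4 ((os/kern) invalid argument) result typically points at a bad byte-size argument rather than at the array type, so it is worth printing bufferSize before the call.
// Sketch: allocate three ring buffers (buffer sizing is assumed, not from the question)
let secondsPerBuffer = 0.5
let bufferSize = UInt32(inFormat.mSampleRate * secondsPerBuffer) * inFormat.mBytesPerFrame

var ringBuffers = [AudioQueueBufferRef?](repeating: nil, count: 3)
for i in 0..<ringBuffers.count {
    let status = AudioQueueAllocateBuffer(inQueue!, bufferSize, &ringBuffers[i])
    guard status == noErr, ringBuffers[i] != nil else {
        print("AudioQueueAllocateBuffer \(i) failed: \(status)")
        break
    }
    // Prime each buffer (fill and enqueue) before AudioQueueStart,
    // as the CH05 player example does with its output callback.
}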

Related

Vulkan: command buffer in use after vkDeviceWaitIdle

During initialization of my program I want to use some single-time command buffers for image layout transitions, acceleration structure building, etc.
However, I can't seem to free a command buffer once it has finished.
VkCommandBuffer AppContext::singleTimeCommandBuffer() const {
    VkCommandBuffer ret;
    auto allocInfo = vks::initializers::commandBufferAllocateInfo(vkCommandPool, VK_COMMAND_BUFFER_LEVEL_PRIMARY, 1);
    vkCheck(vkAllocateCommandBuffers(vkDevice, &allocInfo, &ret));
    auto beginInfo = vks::initializers::commandBufferBeginInfo();
    beginInfo.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
    vkCheck(vkBeginCommandBuffer(ret, &beginInfo));
    return ret;
}

void AppContext::endSingleTimeCommands(VkCommandBuffer cmdBuffer) const {
    vkCheck(vkEndCommandBuffer(cmdBuffer));
    auto submitInfo = vks::initializers::submitInfo(&cmdBuffer);
    vkQueueSubmit(queues.graphics, 1, &submitInfo, VK_NULL_HANDLE);
    vkQueueWaitIdle(queues.graphics);
    // Overkill, I know
    vkDeviceWaitIdle(vkDevice);
    vkFreeCommandBuffers(vkDevice, vkCommandPool, 1, &cmdBuffer);
}
Which produces the following validation error:
VUID-vkFreeCommandBuffers-pCommandBuffers-00047(ERROR / SPEC): msgNum: 448332540 - Validation Error: [ VUID-vkFreeCommandBuffers-pCommandBuffers-00047 ] Object 0: handle = 0x5586acaeff78, type = VK_OBJECT_TYPE_COMMAND_BUFFER; | MessageID = 0x1ab902fc | Attempt to free VkCommandBuffer 0x5586acaeff78[] which is in use. The Vulkan spec states: All elements of pCommandBuffers must not be in the pending state (https://vulkan.lunarg.com/doc/view/1.2.182.0/linux/1.2-extensions/vkspec.html#VUID-vkFreeCommandBuffers-pCommandBuffers-00047)
Objects: 1
[0] 0x5586acaeff78, type: 6, name: NULL
I don't see how this makes sense, since vkQueueWaitIdle as well as vkDeviceWaitIdle should ensure that the command buffer is not in the pending state. Am I misunderstanding the Vulkan spec, or might I have stumbled upon a bug in the video driver or perhaps the validation layer?
You aren't checking the return values of vkQueueSubmit(), vkQueueWaitIdle(), or vkDeviceWaitIdle(). Are any of them failing? That could cause this error; wrapping those calls in your existing vkCheck helper, as you already do for the allocate/begin/end calls, would tell you.

FlatBuffers from TypeScript to C++ not working

I have a flatbuffer schema for a message:
table NodeConstructionInfo {
    type:string (id: 0, required);
    name:string (id: 1, required);
}

table AddNodeRequest {
    graphId:string (id: 0, required);
    node:NodeConstructionInfo (id: 1, required);
}
which I construct (write) in TypeScript and receive (read) in C++:
let builder = new flatbuffers.Builder(356);
let offGraphId = builder.createString("2992ebff-c950-4184-8876-5fe6ac029aa5");
let offType = builder.createString("MySuperDuperNode");
let offName = builder.createString("DummyNode");
sz.NodeConstructionInfo.startNodeConstructionInfo(builder);
sz.NodeConstructionInfo.addName(builder, offName);
sz.NodeConstructionInfo.addType(builder, offType);
let off = sz.NodeConstructionInfo.endNodeConstructionInfo(builder);
sz.AddNodeRequest.startAddNodeRequest(builder);
sz.AddNodeRequest.addGraphId(builder, offGraphId);
sz.AddNodeRequest.addNode(builder, off);
off = sz.AddNodeRequest.endAddNodeRequest(builder);
builder.finish(off);
let requestPayload = builder.asUint8Array();
In C++ I receive the 356 bytes (requestPayload) and try to verify the buffer by doing:
flatbuffers::Verifier v(buffer.getData(), buffer.getSize());
v.VerifyBuffer<AddNodeRequest>();
which always fails in <flatbuffers/flatbuffers.h> at:
template<typename T>
bool VerifyBufferFromStart(const char *identifier, const uint8_t *start)
{
    ...
    // Call T::Verify, which must be in the generated code for this type.
    auto o = VerifyOffset(start);  // <-- fails HERE: the first flatbuffers::uoffset_t read is 0 (I don't know why)
    ...
}
Am I missing some important detail?
The buffer looks like
PostData received:
'\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,
\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,
\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,
\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,
\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,
\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,
\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,
\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,
\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,
\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,
\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,\x00,
\x00,\f,\x00,\x00,\x00,\b,\x00,\f,\x00,\b,\x00,\x04,\x00,\b,\x00,\x00,\x00,\x10,\x00,\x00,\x00,#,\x00,\x00,
\x00,\b,\x00,\f,\x00,\x04,\x00,\b,\x00,\b,\x00,\x00,\x00, ,\x00,\x00,\x00,\x04,\x00,\x00,\x00,\x10,\x00,\x00,
\x00,M,y,S,u,p,e,r,D,u,p,e,r,N,o,d,e,\x00,\x00,\x00,\x00,\t,\x00,\x00,\x00,D,u,m,m,y,N,o,d,e,
\x00,\x00,\x00,$,\x00,\x00,\x00,2,9,9,2,e,b,f,f,-,c,9,5,0,-,4,1,8,4,-,8,8,7,6,-,5,f,e,6,a,c,0,2,9,a,a,5,
\x00,\x00,\x00,\x00
Reading messages in TypeScript that were written from C++ works, oddly enough.
FlatBuffers version: 1.9.0
As your buffer dump shows, the problem is that it contains a lot of leading zeros. A FlatBuffer while being constructed actually may contain leading zeroes (since it is being constructed in a larger buffer backwards), but asUint8Array normally takes care of trimming that down to just the array you need. So either you're not actually using asUint8Array in your actual code, or the zeroes are being pre-pended by some other code.
The problem is the position and how the data is sent in the POST. The buffer must be sliced at the position specified in the ByteBuffer, and the POST must be sent as a Blob.
builder.finish(end);
var buffer: flatbuffers.ByteBuffer = builder.dataBuffer();
var data: Uint8Array = buffer.bytes().slice(buffer.position());
this.http.post(environment.apiRoot + "hello", new Blob([data])).subscribe(
    () => {},
    (error) => {}
);

Core Audio: specify which audio track to decode

I am able to successfully get the decoded PCM data of an audio file using the Core Audio API. Below is the reduced code that shows how I do that:
CFStringRef urlStr = CFStringCreateWithCString(kCFAllocatorDefault, "file.m4a", kCFStringEncodingUTF8);
CFURLRef urlRef = CFURLCreateWithFileSystemPath(NULL, urlStr, kCFURLPOSIXPathStyle, false);
ExtAudioFileOpenURL(urlRef, &m_audioFile);
bzero(&m_outputFormat, sizeof(AudioStreamBasicDescription));
m_outputFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
m_outputFormat.mSampleRate = m_inputFormat.mSampleRate;
m_outputFormat.mFormatID = kAudioFormatLinearPCM;
m_outputFormat.mChannelsPerFrame = m_inputFormat.mChannelsPerFrame;
m_outputFormat.mBytesPerFrame = sizeof(short) * m_outputFormat.mChannelsPerFrame;
m_outputFormat.mBitsPerChannel = sizeof(short) * 8;
m_outputFormat.mFramesPerPacket = 1;
m_outputFormat.mBytesPerPacket = m_outputFormat.mBytesPerFrame * m_outputFormat.mFramesPerPacket;
ExtAudioFileSetProperty(m_audioFile, kExtAudioFileProperty_ClientDataFormat, sizeof(m_outputFormat), &m_outputFormat);
short* transformData = new short[sampleCount];
AudioBufferList fillBufList;
fillBufList.mNumberBuffers = 1;
fillBufList.mBuffers[0].mNumberChannels = channels;
fillBufList.mBuffers[0].mDataByteSize = m_sampleCount * sizeof(short);
fillBufList.mBuffers[0].mData = (void*)(&transformData[0]);
ExtAudioFileRead(m_audioFile, &m_frameCount, &fillBufList);
How can I specify which audio track I want to decode (suppose the media file contains more than one)?
One method is to decode all tracks and then extract (copy) the desired track data (every other sample for interleaved stereo, etc.) into another buffer, array, or file. Compared to the decode time, the extra copy time is insignificant.
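To illustrate the copy step described above, here is a small sketch (in Swift, for consistency with the rest of this digest; the function name and types are illustrative, not from the answer) that pulls one channel out of an interleaved Int16 buffer:
// Extract one channel from interleaved PCM by copying every channelCount-th sample.
func extractChannel(from interleaved: [Int16], channel: Int, channelCount: Int) -> [Int16] {
    var out = [Int16]()
    out.reserveCapacity(interleaved.count / channelCount)
    var i = channel
    while i < interleaved.count {
        out.append(interleaved[i])  // sample i belongs to this channel
        i += channelCount
    }
    return out
}
// e.g. the left channel of interleaved stereo:
// let left = extractChannel(from: decodedSamples, channel: 0, channelCount: 2)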

Swift 3: how to supply a C-style 'void *' object to iOS frameworks for use with callbacks? [duplicate]

This question already has answers here:
How to cast self to UnsafeMutablePointer<Void> type in swift
(4 answers)
Closed 5 years ago.
Foundation is chock full of functions that take an opaque void *info and later vend it back. In pre-ARC Objective-C days, you could retain an object, supply it, and then release it when it was handed back to your callback.
For example,
CGDataProviderRef CGDataProviderCreateWithData(void *info, const void *data, size_t size, CGDataProviderReleaseDataCallback releaseData);
typedef void (*CGDataProviderReleaseDataCallback)(void *info, const void *data, size_t size);
In this case, you could supply a retained object in info, then release it in the callback (after appropriate casting).
How would I do this in Swift?
With assistance from Quinn 'The Eskimo' at Apple I found out how to do this. Given an object:
let pixelBuffer: CVPixelBuffer
get a pointer:
Get an unmanaged object after retaining it:
let fooU: Unmanaged<CVPixelBuffer> = Unmanaged.passRetained(pixelBuffer)
Convert it to a raw pointer
let foo: UnsafeMutableRawPointer = fooU.toOpaque()
Recover the object while releasing it:
Convert the raw pointer to an unmanaged typed object
let ptr: Unmanaged<CVPixelBuffer> = Unmanaged.fromOpaque(foo)
Recover the actual object while releasing it
let pixelBuffer: CVPixelBuffer = ptr.takeRetainedValue()
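As a minimal round-trip sketch of that pairing (the Context class here is illustrative, not from the app code): each passRetained must be balanced by exactly one takeRetainedValue, or the object leaks.
final class Context { var frameCount = 0 }

// +1 retain; the raw pointer can now cross a C API boundary as void *info
let info: UnsafeMutableRawPointer = Unmanaged.passRetained(Context()).toOpaque()

// ... later, typically inside the C callback that receives `info` back:
let context = Unmanaged<Context>.fromOpaque(info).takeRetainedValue()  // balances the +1
context.frameCount += 1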
The following code has been tested in an app. Note that without Apple's help I'd never have figured this out, hence the Q&A. Hope it helps someone!
Also, note the use of @convention(c), something I'd never seen before!
let fooU: Unmanaged<CVPixelBuffer> = Unmanaged.passRetained(pixelBuffer)
let foo: UnsafeMutableRawPointer = fooU.toOpaque()
/* Either "bar" works */
/* let bar: @convention(c) (UnsafeMutableRawPointer?, UnsafeRawPointer, Int) -> Swift.Void = { */
let bar: CGDataProviderReleaseDataCallback = { (_ pixelPtr: UnsafeMutableRawPointer?, _ data: UnsafeRawPointer, _ size: Int) in
    if let pixelPtr = pixelPtr {
        let ptr: Unmanaged<CVPixelBuffer> = Unmanaged.fromOpaque(pixelPtr)
        let pixelBuffer: CVPixelBuffer = ptr.takeRetainedValue()
        CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        DispatchQueue.main.async {
            print("UNLOCKED IT!")
        }
    }
}
let val: CVReturn = CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
if val == kCVReturnSuccess,
   let sourceBaseAddr = CVPixelBufferGetBaseAddress(pixelBuffer),
   let provider = CGDataProvider(dataInfo: foo, data: sourceBaseAddr, size: sourceRowBytes * height, releaseData: bar)
{
    let colorspace = CGColorSpaceCreateDeviceRGB()
    let image = CGImage(width: width, height: height, bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: sourceRowBytes,
                        space: colorspace, bitmapInfo: bitmapInfo, provider: provider, decode: nil,
                        shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)
    /* CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0)) */
    return image
} else {
    return nil
}
Quinn recently updated the Apple Forums thread on this, stating that somehow this technique never made it into either of the two Apple Swift documents, and that he just filed a radar to get it added. So you won't find this info anywhere else (well, at least for now!).

Error with an 8 kHz sample rate when reading real-time audio buffers

With the following code I get an error:
var engine = AVAudioEngine()
let input = engine.inputNode!
let bus = 0
let mixer = AVAudioMixerNode()
engine.attach(mixer)
engine.connect(input, to: mixer, format: input.outputFormat(forBus: 0))
// pcmFormatFloat64 -- pcmFormatFloat32
let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 8000, channels: 1, interleaved: true)
mixer.installTap(onBus: bus, bufferSize: 512, format: fmt) { (buffer, time) -> Void in
    // 8kHz buffers!
    print(buffer.format)
}
try! engine.start()
ERROR : kAudioUnitErr_TooManyFramesToProcess : inFramesToProcess=1024, mMaxFramesPerSlice=768
With a sample rate of 44.1 kHz everything is fine, but not with 8 kHz.
What is wrong with this code?
I encountered the same error message at 8 kHz and fixed it by setting the preferred sample rate and I/O buffer duration on my audio session instance:
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setPreferredSampleRate(sampleRate)
try audioSession.setPreferredIOBufferDuration(TimeInterval(processingInterval))
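For completeness, here is a minimal sketch of how those calls fit into session setup before starting the engine; the play-and-record category and the 64 ms duration (512 frames at 8 kHz, matching the tap's buffer size) are assumptions, not part of the original answer.
import AVFoundation

let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)  // category is an assumption
    try audioSession.setPreferredSampleRate(8000)
    try audioSession.setPreferredIOBufferDuration(0.064)  // 512 frames / 8000 Hz
    try audioSession.setActive(true)
} catch {
    print("Audio session setup failed: \(error)")
}
// Attach the mixer, install the tap, and start the AVAudioEngine after the session is active.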