QCRenderer.renderAtTime fails with 'invalid framebuffer operation' - OpenGL

I initialize my QCRenderer like this:
let glPFAttributes: [NSOpenGLPixelFormatAttribute] = [
    UInt32(NSOpenGLPFABackingStore),
    UInt32(0)
]
let glPixelFormat = NSOpenGLPixelFormat(attributes: glPFAttributes)
if glPixelFormat == nil {
    println("Pixel Format is nil")
    return
}
let openGLView = NSOpenGLView(frame: glSize, pixelFormat: glPixelFormat)
let openGLContext = NSOpenGLContext(format: glPixelFormat, shareContext: nil)
let qcRenderer = QCRenderer(openGLContext: openGLContext, pixelFormat: glPixelFormat, file: compositionPath)
Further down in the code, I call renderAtTime like this:
if !qcRenderer.renderAtTime(frameTime, arguments: nil) {
    println("Rendering failed at \(frameTime)s.")
    return
}
which always produces this error message:
2014-10-30 15:30:50.976 HQuartzRenderer[3996:692255] *** Message from <QCClear = 0x100530590 "Clear_1">:
OpenGL error 0x0506 (invalid framebuffer operation)
2014-10-30 15:30:50.976 HQuartzRenderer[3996:692255] *** Message from <QCClear = 0x100530590 "Clear_1">:
Execution failed at time 0.000
2014-10-30 15:30:50.976 HQuartzRenderer[3996:692255] *** Message from <QCPatch = 0x100547860 "(null)">:
Execution failed at time 0.000
Rendering failed at 0.0s.
The Quartz Composition is just a simple GLSL shader which runs fine in Quartz Composer. (A screenshot of the Quartz Composer window was attached here.)
There isn't much on the internet about this that I could find. I hope someone here knows something that might help.
By the way, I know that I could just initialize the QCRenderer like
let qcRenderer = QCRenderer(offScreenWithSize: size, colorSpace: CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB), composition: qcComposition)
but I want to take advantage of my GPU's multisampling capabilities to get an antialiased image. That's gotta be more performant than rendering at 4x size, then manually downsizing the image.
Edit: Changed pixel format to
let glPFAttributes: [NSOpenGLPixelFormatAttribute] = [
    UInt32(NSOpenGLPFAAccelerated),
    UInt32(NSOpenGLPFADoubleBuffer),
    UInt32(NSOpenGLPFANoRecovery),
    UInt32(NSOpenGLPFABackingStore),
    UInt32(NSOpenGLPFAColorSize), UInt32(128),
    UInt32(NSOpenGLPFADepthSize), UInt32(24),
    UInt32(NSOpenGLPFAOpenGLProfile),
    UInt32(NSOpenGLProfileVersion3_2Core),
    UInt32(0)
]
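For the multisampling goal mentioned above, the pixel format would also need the multisample attributes. Here is a rough sketch using the standard AppKit constants (the exact attribute combination is an assumption on my part, not something verified against QCRenderer):
let glPFAttributesMSAA: [NSOpenGLPixelFormatAttribute] = [
    UInt32(NSOpenGLPFAAccelerated),
    UInt32(NSOpenGLPFADoubleBuffer),
    UInt32(NSOpenGLPFAColorSize), UInt32(32),
    UInt32(NSOpenGLPFADepthSize), UInt32(24),
    // Request hardware multisampling: one sample buffer, 4 samples per pixel.
    UInt32(NSOpenGLPFAMultisample),
    UInt32(NSOpenGLPFASampleBuffers), UInt32(1),
    UInt32(NSOpenGLPFASamples), UInt32(4),
    UInt32(0)
]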

Related

iOS playing audio in iOS 13 throws C++ exception, freezes app

TL;DR:
The code below (all ten lines of it, apart from debugging):
- fails (UI freezes) on iOS 13.x (Simulator)
- succeeds (audio plays) on 14.x (Simulator and devices)
I don't have any devices with iOS 13.x. But...analytics from live apps suggest it is failing in the field on both iOS 13 and 14 devices. False positives? (See line of code commented with $$$.)
Steps To Reproduce
Create a new SwiftUI project that can run in iOS 13. Replace the text in ContentView.swift with the code below. Add an audio resource named clip.mp3. Build and run.
I am using Xcode 12.4, macOS 11.1, Swift 5.
See Also
Apple Dev Forum 1   // Unresolved
Apple Dev Forum 2   // Attributed to beta iOS/Xcode
Stackoverflow 1   // Unresolved
Stackoverflow 2   // Refers to next link
Apple Dev Forum 3   // Claims fixed in Xcode 12b5
[...and many more...]
Code
import SwiftUI
import AVKit

struct ContentView: View {
    var body: some View {
        Text("Boo!").onAppear { playClip() }
    }
}

var clipDelegate: AudioTimerDelegate!  // Keep a strong reference so ARC doesn't deallocate it.
var player: AVAudioPlayer!             // Ditto.

func playClip() {
    let u = Bundle.main.url(forResource: "clip", withExtension: "mp3")!
    player = try! AVAudioPlayer(contentsOf: u)
    clipDelegate = AudioTimerDelegate()  // Wait till now to instantiate, for correct timing.
    player.delegate = clipDelegate
    player.prepareToPlay()
    NSLog("*** Starting clip play")      // NSLog so we get a timestamp.
    player.play()
    // Wait 5 seconds and see if audioPlayerDidFinishPlaying ran.
    DispatchQueue.main.asyncAfter(deadline: .now() + 5) {
        if let d = clipDelegate.clipDuration {
            NSLog("*** Caller clip duration = \(d)")
        } else {
            NSLog("!!! Caller found nil clip duration")
            // $$$ In live app, post audio-freeze event to analytics.
        }
    }
}

class AudioTimerDelegate: NSObject, AVAudioPlayerDelegate {
    private var startTime: Double
    var clipDuration: Double?

    override init() {
        self.startTime = CFAbsoluteTimeGetCurrent()
        super.init()
    }

    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        clipDuration = CFAbsoluteTimeGetCurrent() - startTime
        NSLog("*** Delegate clip duration = \(clipDuration!)")
    }
}
Console Output
Simulator iOS 14.4
The audio plays and the Console (edited for brevity) reads:
14:33:17 [plugin] AddInstanceForFactory: No factory registered for ... F8BB1C28-...
14:33:17 *** Starting clip play
14:33:19 *** Delegate clip duration = 1.692...
14:33:22 *** Caller clip duration = 1.692...
I gather that the first line is innocuous and related to the Simulator's sound drivers (see, for example, the related question "Is anyone else getting this console message with AVAudioPlayer in Xcode 11 (and 11.1)?").
Device 14.4
Results are the same, without the AddInstanceForFactory complaint.
Simulator 13.6
Audio never sounds, the delegate callback never runs, and in the Console I get:
14:30:10 [plugin] AddInstanceForFactory: No factory registered for ... F8BB1C28-...
14:30:11 HALB_IOBufferManager_Client::GetIOBuffer: the stream index is out of range
14:30:11 HALB_IOBufferManager_Client::GetIOBuffer: the stream index is out of range
14:30:11 [aqme] AQME.h:254:IOProcFailure: AQDefaultDevice (1): output stream 0: null buffer
14:30:11 [aqme] AQMEIO_HAL.cpp:1774:IOProc: EXCEPTION thrown (-50): error != 0
14:30:26 [aqme] AQMEIO.cpp:179:AwaitIOCycle: timed out after 15.000s (0 1); suspension count=0 (IOSuspensions: )
14:30:26 CA_UISoundClient.cpp:241:StartPlaying_block_invoke: CA_UISoundClientBase::StartPlaying: AddRunningClient failed (status = -66681).
14:30:26 *** Starting clip play
14:30:26 HALB_IOBufferManager_Client::GetIOBuffer: the stream index is out of range
14:30:26 HALB_IOBufferManager_Client::GetIOBuffer: the stream index is out of range
14:30:26 [aqme] AQME.h:254:IOProcFailure: AQDefaultDevice (1): output stream 0: null buffer
14:30:26 [aqme] AQMEIO_HAL.cpp:1774:IOProc: EXCEPTION thrown (-50): error != 0
14:30:41 [aqme] AQMEIO.cpp:179:AwaitIOCycle: timed out after 15.000s (1 2); suspension count=0 (IOSuspensions: )
14:30:46 !!! Caller found nil clip duration
Remarks
It seems that there are two fifteen-second delays in the failure case, matching the two AwaitIOCycle "timed out after 15.000s" lines at 14:30:26 and 14:30:41 in the log above.
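One thing worth trying (an assumption on my part, not a confirmed fix for this 13.x failure) is to explicitly configure and activate the audio session before creating the AVAudioPlayer, e.g. at the top of playClip():
// Assumption: explicit AVAudioSession setup; untested against this particular hang.
do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback)
    try session.setActive(true)
} catch {
    NSLog("!!! Audio session setup failed: \(error)")
}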

How do I extract a GL texture id from a BufferRef using the gstreamer crates?

I am working on a tool to use GL to render frames from a video onto a texture-mapped mesh. I already have a GL app working with a single image (PNG). Now I am trying to use gstreamer to decode the video.
I started with the appsink example.
I have gotten as far as piping the decoded video through glupload into an appsink. Now I need to convert the BufferRef I get from appsink.pull_sample().get_buffer() into a GL texture id (a u32) so I can pass it to GL functions like gl::BindTexture(gl::TEXTURE_2D, tex). I used set_caps() on the appsink to ensure the buffer has feature memory:GLMemory, so it better be a texture and not off-GPU.
How do I extract a GL texture id from a BufferRef using Rust's gstreamer and gstreamer-* crates?
Retrieving the texture from a GstGLMemory in C requires mapping the GstGLMemory itself with the special GST_MAP_GL flag. That specific interface for mapping an OpenGL texture does not currently have an analogue in Rust. There is some work in a related area in https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/merge_requests/581 to help improve the situation with GStreamer OpenGL usage in Rust.
If you only need readable access to the texture, there is an extension trait VideoFrameGLExt on VideoFrame that can give you access to the OpenGL texture. The glupload example in the gstreamer-rs repository, https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/blob/master/examples/src/bin/glupload.rs, shows VideoFrameGLExt in use; the trait itself is currently implemented in https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/blob/master/gstreamer-gl/src/gl_video_frame.rs.
Something like the following should work for read-only access:
// buffer: gst::Buffer
// info: gst::VideoInfo
if let Ok(frame) = gst_video::VideoFrame::from_buffer_readable_gl(buffer, &info) {
    if let Some(texture) = frame.get_texture_id(0) {
        // use texture somehow
    }
}
If instead you also need to write to the texture, that is currently not exposed and manual bindings would need to be written.
The code I eventually got working was
// Peek at memory block `idx` of the buffer and return it as GstGLMemory if it holds GL memory.
fn get_gl_memory(bref: &BufferRef, idx: u32) -> Option<*mut GstGLMemory> {
    unsafe {
        let n = gst_sys::gst_buffer_n_memory(bref.as_ptr() as *mut _);
        if idx >= n {
            return None;
        }
        let mem = gst_sys::gst_buffer_peek_memory(bref.as_ptr() as *mut _, idx);
        if 0 != gst_gl_sys::gst_is_gl_memory(mem) {
            Some(mem as *mut _)
        } else {
            None
        }
    }
}
//
let gl_mem = get_gl_memory(buffer, 0).unwrap();
let gl_mem = unsafe { &*gl_mem };
let tex_id = gl_mem.tex_id;
although the solution from ystreet00 works great if you have convenient access to the gst::VideoInfo.

Using multiple audio input devices with AVAudioEngine

I've got several USB audio input devices connected and I want to select more than one of them to connect to the mixer. The way I understand it, I will not be able to use AVAudioInputNode for this purpose.
I could accomplish this by using AVFoundation to read samples from each device and then scheduling them on a player, but it seems to me that this might be easier done with AVAudioUnit. Unfortunately, I have not been able to make that work.
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_HALOutput;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

[AVAudioUnit instantiateWithComponentDescription:desc
                                          options:kAudioComponentInstantiation_LoadInProcess
                                completionHandler:^(AVAudioUnit *xx, NSError *error) {
    AudioUnitSetProperty(xx.audioUnit, kAudioOutputUnitProperty_CurrentDevice,
                         kAudioUnitScope_Global, 0, &inputDeviceID, sizeof(inputDeviceID));
    [_engine attachNode:xx];
    AVAudioMixerNode *mixer = [_engine mainMixerNode];
    [_engine connect:xx to:mixer format:[xx outputFormatForBus:0]];
}];
This fails with the following error:
[central] 54: ERROR: >avae> AVAudioEngineGraph.mm:1235: AddNode:
required condition is false: inImpl != nil && !IsIONode(inAVNode)
Any suggestions would be appreciated.

Selecting attitude reference frame in iOS 10 CoreMotion using Swift 3

I am trying to use CMMotionManager to update the attitude of a camera viewpoint in SceneKit. I am able to get the following code, using the default reference frame, to work.
manager.deviceMotionUpdateInterval = 0.01
manager.startDeviceMotionUpdates(to: motionQueue, withHandler: { deviceManager, error in
    if (deviceManager?.attitude) != nil {
        let rotation = deviceManager?.attitude.quaternion
        OperationQueue.main.addOperation {
            self.cameraNode.rotation = SCNVector4(rotation!.x, rotation!.y, rotation!.z, rotation!.w)
        }
    }
})
I am however unable to get startDeviceMotionUpdates to work with a selected reference frame as shown below:
manager.deviceMotionUpdateInterval = 0.01
manager.startDeviceMotionUpdates(using: CMAttitudeReferenceFrameXMagneticNorthZVertical, to: motionQueue, withHandler: { deviceManager, error in
    if (deviceManager?.attitude) != nil {
        let rotation = deviceManager?.attitude.quaternion
        OperationQueue.main.addOperation {
            self.cameraNode.rotation = SCNVector4(rotation!.x, rotation!.y, rotation!.z, rotation!.w)
        }
    }
})
The error I receive is:
Use of unresolved identifier 'CMAttitudeReferenceFrameXMagneticNorthZVertical'
I get similar error messages for other reference frames as well. Can anyone shed any light on the use of the using: parameter for the startDeviceMotionUpdates function? All the examples I have found are for older versions of Swift or Objective-C, so it is quite possible that this is simply an issue with not understanding Swift 3 syntax.
After some additional fiddling, I figured out that the using: argument expects a member of the new CMAttitudeReferenceFrame struct, i.e. it should be passed as:
manager.deviceMotionUpdateInterval = 0.01
manager.startDeviceMotionUpdates(using: CMAttitudeReferenceFrame.xMagneticNorthZVertical,
                                 to: motionQueue,
                                 withHandler: { deviceManager, error in
    if (deviceManager?.attitude) != nil {
        let rotation = deviceManager?.attitude.quaternion
        OperationQueue.main.addOperation {
            self.cameraNode.rotation = SCNVector4(rotation!.x, rotation!.y, rotation!.z, rotation!.w)
        }
    }
})
This is a change from earlier versions, which allowed the direct use of constants such as CMAttitudeReferenceFrameXMagneticNorthZVertical.
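As a side note, CMAttitudeReferenceFrame is an option set, and the magnetic-north frames need the magnetometer, so it can be worth checking availability before requesting one. A small sketch (the handler closure here is assumed to be the same one used above):
if CMMotionManager.availableAttitudeReferenceFrames().contains(.xMagneticNorthZVertical) {
    manager.startDeviceMotionUpdates(using: .xMagneticNorthZVertical, to: motionQueue, withHandler: handler)
} else {
    // Fall back to a reference frame that does not require the magnetometer.
    manager.startDeviceMotionUpdates(using: .xArbitraryZVertical, to: motionQueue, withHandler: handler)
}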

AVAudioUnit (OS X) render block only called for certain sample rates

I'm having trouble getting AVAudioEngine (OS X) to play nice with all sample rates.
Here's my code for building the connections:
- (void)makeAudioConnections {
    auto hardwareFormat = [self.audioEngine.outputNode outputFormatForBus:0];
    auto format = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:hardwareFormat.sampleRate channels:2];
    NSLog(@"format: %@", format);
    @try {
        [self.audioEngine connect:self.avNode to:self.audioEngine.mainMixerNode format:format];
        [self.audioEngine connect:self.audioEngine.inputNode to:self.avNode format:format];
    } @catch (NSException* e) {
        NSLog(@"exception: %@", e);
    }
}
On my audio interface, the render callback is called for 44.1, 48, and 176.4 kHz. It is not called for 96 or 192 kHz. On the built-in audio, the callback is called for 44.1, 48, and 88 kHz, but not 96 kHz.
My AU's allocateRenderResourcesAndReturnError is being called for 96kHz. No errors are returned.
- (BOOL)allocateRenderResourcesAndReturnError:(NSError * _Nullable *)outError {
    if (![super allocateRenderResourcesAndReturnError:outError]) {
        return NO;
    }
    _inputBus.allocateRenderResources(self.maximumFramesToRender);
    _sampleRate = _inputBus.bus.format.sampleRate;
    return YES;
}
Here's my AU's init method, which is mostly just cut & paste from Apple's AUv3 demo:
- (instancetype)initWithComponentDescription:(AudioComponentDescription)componentDescription
                                     options:(AudioComponentInstantiationOptions)options
                                       error:(NSError **)outError {
    self = [super initWithComponentDescription:componentDescription options:options error:outError];
    if (self == nil) {
        return nil;
    }

    // Initialize a default format for the busses.
    AVAudioFormat *defaultFormat = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100. channels:2];

    // Create the input and output busses.
    _inputBus.init(defaultFormat, 8);
    _outputBus = [[AUAudioUnitBus alloc] initWithFormat:defaultFormat error:nil];

    // Create the input and output bus arrays.
    _inputBusArray = [[AUAudioUnitBusArray alloc] initWithAudioUnit:self busType:AUAudioUnitBusTypeInput busses:@[_inputBus.bus]];
    _outputBusArray = [[AUAudioUnitBusArray alloc] initWithAudioUnit:self busType:AUAudioUnitBusTypeOutput busses:@[_outputBus]];

    self.maximumFramesToRender = 256;
    return self;
}
To keep things simple, I'm setting the sample rate before starting the app.
I'm not sure where to begin tracking this down.
Update
Here's a small project which reproduces the issue I'm having:
Xcode project to reproduce issue
You'll get errors pulling from the input at certain sample rates.
On my built-in audio running at 96kHz the render block is called with alternating 511 and 513 frame counts and errors -10863 (kAudioUnitErr_CannotDoInCurrentContext) and -10874 (kAudioUnitErr_TooManyFramesToProcess) respectively. Increasing maximumFramesToRender doesn't seem to help.
Update 2
I simplified my test down to just connecting the input to the main mixer:
[self.audioEngine connect:self.audioEngine.inputNode to:self.audioEngine.mainMixerNode format:nil];
I tried explicitly setting the format argument.
This still will not play through at 96kHz. So I'm thinking this may be a bug in AVAudioEngine.
For play-through with AVAudioEngine, the input and output hardware formats and all the connection formats must be at the same sample rate. So the following should work.
AVAudioFormat *outputHWFormat = [self.audioEngine.outputNode outputFormatForBus:0];
AVAudioFormat *inputHWFormat = [self.audioEngine.inputNode inputFormatForBus:0];

if (inputHWFormat.sampleRate == outputHWFormat.sampleRate) {
    [self.audioEngine connect:self.audioEngine.inputNode to:self.audioEngine.mainMixerNode format:inputHWFormat];
    [self.audioEngine connect:self.audioEngine.mainMixerNode to:self.audioEngine.outputNode format:inputHWFormat];
}