So, I want a Mac app that shows a live video preview from the iSight camera while letting the user take snapshots. It's basically the Still Motion Demo, except that it has to work inside a Qt app. That is, I have .mm files that mostly follow C++ structure, with occasional Obj-C messaging.
Currently, I'm having problems integrating the QTCaptureView with the rest of the Qt module.
Right now I have managed to place it into my Qt GUI through QMacCocoaViewContainer, and I want to resize it to an appropriate size; since I can't use Interface Builder any more, I have to do this in code. I've tried setting its frame and bounds right after creating it, but it makes no difference.
CCaptureViewWidget::CCaptureViewWidget(QWidget *parent) :
    QMacCocoaViewContainer(0, parent)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Create the capture view with an explicit frame, then try forcing
    // the frame and bounds as well.
    NSRect kVideoFrame = NSMakeRect(100.0f, 100.0f, 400.0f, 300.0f);
    QTCaptureView *camView = [[QTCaptureView alloc] initWithFrame:kVideoFrame];
    [camView setFrame:kVideoFrame];
    [camView setBounds:kVideoFrame];

    // Still reports a 0x0 size at this point (see below).
    NSRect ourPreviewBounds = [camView previewBounds];

    // Green fill color, just to prove the view pointer is valid.
    NSColor *pFillColor = [NSColor colorWithDeviceRed:0.0f green:1.0f blue:0.0f alpha:1.0f];
    [camView setFillColor:pFillColor];
    [camView setPreservesAspectRatio:YES];
    [camView setNeedsDisplay:YES];

    // QMacCocoaViewContainer retains the view, so release our reference.
    setCocoaView(camView);
    [camView release];

    [pool release];
}
As far as I can tell from the debugger in Xcode, ourPreviewBounds has a size of 0x0, even after the setFrame and setBounds calls. And just to prove that camView is valid, I did manage to change the fill color of the QTCaptureView.
I've read that overriding QTCaptureView's previewBounds might be an option, but I haven't been able to find any working examples of that.
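For reference, this is the kind of override I imagine; it's a rough, untested sketch, and the subclass name is made up:
// Untested sketch: subclass QTCaptureView and force previewBounds to
// cover the whole view. The class name is only for illustration.
@interface MyCaptureView : QTCaptureView
@end

@implementation MyCaptureView
- (NSRect)previewBounds
{
    // Report the full bounds of the view instead of the default
    // (apparently zero-sized) preview rectangle.
    return [self bounds];
}
@end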
If anyone can give me a hint as to how to resize a QTCaptureView outside of Interface Builder, purely in code, I would very much appreciate it.
I'm building a cross-platform app, and one of those platforms is macOS.
The core of my code is written in C++ and Obj-C++.
I create a window like this:
NSWindow* window = [[NSWindow alloc] initWithContentRect:NSMakeRect(x, y, width, height) styleMask:macosWindowStyle backing:NSBackingStoreBuffered defer:NO];
but I wanted to listen for events from the window. I could have subclassed it, but I chose not to.
The other way to get events from the NSWindow is to set a delegate.
Now, since my code is C++ (in Obj-C++ files), I couldn't have a C++ class conform to an Obj-C protocol.
So I created an Obj-C header with a class that implements NSWindowDelegate.
Here it is:
@interface SomeClass : NSObject <NSWindowDelegate>
@end
I overrode windowShouldClose: as follows:
- (BOOL)windowShouldClose:(NSWindow *)sender {
    NSLog(@"Hello!");
    return YES;
}
and in my Obj-C++ file, I did this:
NSWindow* window = [[NSWindow alloc] initWithContentRect:NSMakeRect(x, y, width, height) styleMask:macosWindowStyle backing:NSBackingStoreBuffered defer:NO];
SomeClass* someClass = [[SomeClass alloc] init];
[window setDelegate:someClass];
However, when I pressed the close button, nothing happened.
I then tested the same thing in Swift, with the same result.
I then realized that the delegate was being destroyed, because NSWindow only keeps a weak reference to it.
My question is, how do I get around this?
OK, I figured it out.
For some reason I thought I could not have Obj-C class pointers in my Obj-C++ code; I thought that was one of its limitations. Since I can, the fix is simply to keep a strong reference to the delegate so it doesn't get deallocated.
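A minimal sketch of what that looks like; the wrapper class and member names are made up for illustration and assume ARC:
// Sketch: keep a strong reference to the delegate alongside the window,
// since NSWindow's delegate property is weak. Names are illustrative only.
class MacWindow {
public:
    void create(int x, int y, int width, int height, NSUInteger styleMask) {
        m_window = [[NSWindow alloc] initWithContentRect:NSMakeRect(x, y, width, height)
                                               styleMask:styleMask
                                                 backing:NSBackingStoreBuffered
                                                   defer:NO];
        m_delegate = [[SomeClass alloc] init];   // strong reference lives here
        [m_window setDelegate:m_delegate];       // the window only holds it weakly
    }

private:
    NSWindow  *m_window   = nil;   // Obj-C pointers are fine as members of a
    SomeClass *m_delegate = nil;   // C++ class in an Obj-C++ file (under ARC)
};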
I've got a project that is a mix of pure C++ and Objective-C++, in order to incorporate some C++ libraries.
I've tried adding a basic SCNScene into the mix (by basic I mean a scene containing a single box node and nothing else). Every time, I get the error:
Assertion failed: (renderSize.x != 0), function -[SCNRenderContextMetal _setupDescriptor:forPass:isFinalTechnique:], file /BuildRoot/Library/Caches/com.apple.xbs/Sources/SceneKit/SceneKit-332.6/sources/Core3DRuntime/NewRenderer/SCNRenderContextMetal.mm, line 688.
Does anyone know what causes this, and if so, how can I get around it?
EDIT:
In my ViewController.mm I've got:
self.sceneView = [[SCNView alloc] initWithFrame:frame];
self.sceneView.scene = [SCNScene scene];
SCNNode *cube = [SCNNode nodeWithGeometry:[SCNBox boxWithWidth:1.0 height:1.0 depth:1.0 chamferRadius:0]];
cube.geometry.firstMaterial.diffuse.contents = [UIColor redColor];
[self.sceneView.scene.rootNode addChildNode:cube];
[self.view addSubview:self.sceneView];
It sounds like you are setting up your SceneKit scene with a storyboard.
If so, recent versions of the SDK require that you set constraints on your views, or else they end up with zero size. It might just be a matter of setting constraints on your SceneKit view.
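For instance, a rough sketch of pinning the view with Auto Layout; this is illustrative only and assumes the SCNView has already been created:
// Sketch: pin the SCNView to its superview with Auto Layout so it can
// never end up with a zero size.
self.sceneView.translatesAutoresizingMaskIntoConstraints = NO;
[self.view addSubview:self.sceneView];
[NSLayoutConstraint activateConstraints:@[
    [self.sceneView.topAnchor constraintEqualToAnchor:self.view.topAnchor],
    [self.sceneView.bottomAnchor constraintEqualToAnchor:self.view.bottomAnchor],
    [self.sceneView.leadingAnchor constraintEqualToAnchor:self.view.leadingAnchor],
    [self.sceneView.trailingAnchor constraintEqualToAnchor:self.view.trailingAnchor]
]];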
You also need to set the frame to something valid if it isn't already, e.g.
CGRect frame = [[UIScreen mainScreen] applicationFrame];
I discovered that SceneKit throws a fit if you set the SCNView frame to CGRectZero. There has to be at least 1 pixel of rendering real estate. Simple as that.
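In other words, something along these lines (a sketch, not code from the original project):
// Sketch: make sure the frame is non-zero before creating the SCNView,
// otherwise SceneKit's Metal renderer hits the renderSize assertion.
CGRect frame = self.view.bounds;
if (CGRectIsEmpty(frame)) {
    // Fall back to the screen bounds so there is at least one pixel to render into.
    frame = [UIScreen mainScreen].bounds;
}
self.sceneView = [[SCNView alloc] initWithFrame:frame];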
I am porting an app to Qt and am having trouble integrating the Syphon framework (http://syphon.v002.info/), which is used to share video streams between applications via the GPU (Mac OS X only).
Two C++ implementations of Syphon are available, one for Cinder (github/astellato/Cinder-Syphon) and one for openFrameworks (github/astellato/ofxSyphon). I started from the Cinder implementation (both are quite similar) and tried to port it to Qt, but I can't find a way to create a QOpenGLTexture from an already created texture.
Here is the code from Cinder-Syphon that I'm trying to get working (file syphonClient.mm):
void syphonClient::bind()
{
    NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];
    if(bSetup)
    {
        [(SyphonNameboundClient*)mClient lockClient];
        SyphonClient *client = [(SyphonNameboundClient*)mClient client];

        latestImage = [client newFrameImageForContext:CGLGetCurrentContext()];
        NSSize texSize = [(SyphonImage*)latestImage textureSize];
        GLuint m_id = [(SyphonImage*)latestImage textureName];

        mTex = ci::gl::Texture::create(GL_TEXTURE_RECTANGLE_ARB, m_id,
                                       texSize.width, texSize.height, true);
        mTex->setFlipped();
        mTex->bind();
    }
    else
        std::cout << "syphonClient is not setup, or is not properly connected to server. Cannot bind.\n";
    [pool drain];
}
In the Qt version I'm writing, mTex is of type QOpenGLTexture, but I couldn't find an equivalent of mTex = ci::gl::Texture::create(GL_TEXTURE_RECTANGLE_ARB, m_id, texSize.width, texSize.height, true);, that is, a way to create a QOpenGLTexture from an existing texture ID without having to allocate storage again.
Did I miss something in the Qt OpenGL API, or is it just not possible? If it isn't, I guess I will have to use direct OpenGL calls or pull in the whole Cinder OpenGL API.
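For what it's worth, the direct-OpenGL fallback I have in mind would look roughly like this; it's just a sketch, not a tested port:
// Sketch: keep the raw texture name from Syphon and bind it with plain
// OpenGL calls instead of wrapping it in a QOpenGLTexture.
NSSize texSize = [(SyphonImage*)latestImage textureSize];
GLuint texId   = [(SyphonImage*)latestImage textureName];

glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, texId);
// ... draw a textured quad here; note that GL_TEXTURE_RECTANGLE_ARB uses
// non-normalized texture coordinates (0..width, 0..height) ...
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);
glDisable(GL_TEXTURE_RECTANGLE_ARB);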
I am programming with Cocos2d 3.0 now. In Cocos2d 2.0, we could use the following code to add accelerometer support to an app, but that example was based on the CCLayer class, which has been deprecated in Cocos2d 3.0, and UIAccelerometer has been replaced by CMMotionManager since iOS 5.0. So I am wondering: how do I do this in Cocos2d 3.0? I googled for a while and didn't find anything useful.
-(id) init
{
    if ((self = [super init]))
    {
        // ...
        self.isAccelerometerEnabled = YES;
        // ...
    }
    return self;
}

-(void) accelerometer:(UIAccelerometer *)accelerometer
        didAccelerate:(UIAcceleration *)acceleration
{
    // ...
}
We've written a tutorial on exactly this: https://www.makegameswith.us/gamernews/371/accelerometer-with-cocos2d-30-and-ios-7
You need to use the CoreMotion framework.
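In short, the approach looks roughly like this; the property name and update interval below are only illustrative, not taken from the tutorial:
// Sketch: poll CMMotionManager from the scene's update loop
// (Cocos2d 3.0 calls -update: every frame). Names are illustrative.
#import <CoreMotion/CoreMotion.h>

// In the scene's init:
self.motionManager = [[CMMotionManager alloc] init];
self.motionManager.accelerometerUpdateInterval = 1.0 / 60.0;
[self.motionManager startAccelerometerUpdates];

// In -update:(CCTime)delta:
CMAccelerometerData *data = self.motionManager.accelerometerData;
if (data) {
    CGFloat tiltX = data.acceleration.x;   // use this to move your node
}

// In -onExit (remember to call [super onExit]):
[self.motionManager stopAccelerometerUpdates];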
Well, there are two problems with the tutorial example given above:
1. CMMotionManager should be a single (shared) instance.
2. The acceleration data becomes positive or negative depending on the orientation of the device, so you also need to register the scene as an observer of the device-orientation-change notification.
If you don't want to handle this overhead yourself, you can use the CCAccelerometer class; it solves both problems.
HOW TO USE
1. Add the CoreMotion framework to your project from Build Phases.
2. Copy the CCAccelerometer.h and CCAccelerometer.m files into your project.
3. Import CCAccelerometer.h in your Prefix.pch.
4. Implement <CCSharedAccelerometerDelegate> in the CCScene where you want to use the accelerometer.
5. Create the shared instance in the init method by simply calling [CCAccelerometer sharedAccelerometer];
6. Start the accelerometer in -(void)onEnterTransitionDidFinish by calling [[CCAccelerometer sharedAccelerometer] startUpdateForScene:self];
7. Define the delegate method -(void)acceleroMeterDidAccelerate:(CMAccelerometerData*)accelerometerData in your scene.
8. Stop the accelerometer in -(void)onExitTransitionDidStart by calling [[CCAccelerometer sharedAccelerometer] stopUpdateForScene:self];
You can find the example project on GitHub; a rough sketch assembled from these steps follows.
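(This sketch is untested and only reuses the method names listed above; the scene class name is made up.)
// Untested sketch assembled from the steps above.
@interface GameScene : CCScene <CCSharedAccelerometerDelegate>
@end

@implementation GameScene

- (id)init
{
    if ((self = [super init])) {
        [CCAccelerometer sharedAccelerometer];   // create the shared instance
    }
    return self;
}

- (void)onEnterTransitionDidFinish
{
    [super onEnterTransitionDidFinish];
    [[CCAccelerometer sharedAccelerometer] startUpdateForScene:self];
}

- (void)acceleroMeterDidAccelerate:(CMAccelerometerData *)accelerometerData
{
    // React to accelerometerData.acceleration.x / .y / .z here.
}

- (void)onExitTransitionDidStart
{
    [[CCAccelerometer sharedAccelerometer] stopUpdateForScene:self];
    [super onExitTransitionDidStart];
}

@end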
Here is an example (this one uses the Cocos2d-x C++ API):
// Enable the accelerometer, then register an acceleration event listener.
Device::setAccelerometerEnabled(true);
auto accelerometerListener = EventListenerAcceleration::create([this](Acceleration* acc, Event* event)
{
    // acc->x, acc->y and acc->z hold the current acceleration values.
});
getEventDispatcher()->addEventListenerWithSceneGraphPriority(accelerometerListener, this);
There is also a video tutorial: https://www.youtube.com/watch?v=Xk6lXK6trxU
I have made a button so that when the user presses it and a particular row (or rows) is selected, it does something.
So far I have this:
if ([pickerView selectedRowInComponent:0]) {
    [mailComposerTwo setToRecipients:[NSArray arrayWithObjects:@"email@blah.com", nil]];
}
It works on its own, but when I repeat the if statement multiple times it crashes.
Any way of making it work?
Any help appreciated, thanks.
The problem probably lies with your mail composer, not the picker view. When you show the composer, make sure you only create it if it hasn't already been created.
Also, make sure you release it after you present it:
MFMailComposeViewController *picker = [[MFMailComposeViewController alloc] init];
picker.mailComposeDelegate = self;
// ... configure the picker ...
[rootContainer presentModalViewController:picker animated:YES];
[picker release];
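For the "only create it once" part, a rough sketch; this assumes the composer is kept in a retained property called mailComposer, and is illustrative rather than part of the original answer:
// Sketch: create the composer lazily so repeated button presses reuse the
// same instance instead of allocating a new one each time.
if (self.mailComposer == nil) {
    self.mailComposer = [[[MFMailComposeViewController alloc] init] autorelease];
    self.mailComposer.mailComposeDelegate = self;
}
[self.mailComposer setToRecipients:[NSArray arrayWithObjects:@"email@blah.com", nil]];
[rootContainer presentModalViewController:self.mailComposer animated:YES];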
NSMutableArray *finalList = [[NSMutableArray alloc] init];
// put all your if statements
if ([pickerView selectedRowInComponent:0])
{
    [finalList addObjectsFromArray:@[@"email@address.com", @"second@address.com", ...]];
}
if ([pickerView selectedRowInComponent:1])
{
    [finalList addObjectsFromArray:@[@"another@address.com", @"fourth@address.com", ...]];
}
// end of if statements
[mailComposerTwo setToRecipients:finalList];
[self presentViewController:yourInitializedMessageController animated:YES completion:^{ NSLog(@"message controller is presented"); }];
This makes a single setToRecipients: call instead of continually reassigning, which for some odd reason is causing your exception. Note that presentModalViewController:animated: has been deprecated as of iOS 6.0, I believe.
NOTE: make the message controller a property of the main view controller. That is good practice, so it is not released by iOS while you may still need to bring it back up. However, if you use MFMessageComposeViewController, iOS keeps the messenger allocated (running in a thread somewhere), so initializing a view controller for it is quick.
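For example, a hypothetical property declaration on the presenting view controller (names are illustrative only):
// Sketch: keep the composer as a retained property so it stays alive
// between presentations.
@interface MainViewController : UIViewController <MFMailComposeViewControllerDelegate>
@property (nonatomic, retain) MFMailComposeViewController *mailComposerTwo;
@end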