mouseMoved function not called when I move the mouse? - swift3

I am trying to find the mouse coordinates within an SKScene; however, the mouseMoved function is not being called. (This is in a Swift Playground, by the way.) I even added a print statement to check whether the function was being called at all, but it prints absolutely nothing.
This is how I set up my NSTrackingArea:
let options = [NSTrackingAreaOptions.mouseMoved, NSTrackingAreaOptions.activeInKeyWindow, NSTrackingAreaOptions.activeAlways, NSTrackingAreaOptions.inVisibleRect, ] as NSTrackingAreaOptions
let tracker = NSTrackingArea(rect: viewFrame, options: options, owner: self.view, userInfo: nil)
self.view?.addTrackingArea(tracker)
And here is the mouseMoved function (the one that is not being called):
override public func mouseMoved(with event: NSEvent) {
    point = event.location(in: self)
    print(point)
}
Is there a reason that mouseMoved isn't being called?

I created a playground with the following code (and only that code):
import AppKit
import SpriteKit
import PlaygroundSupport
class Scene: SKScene {
    override public func mouseMoved(with event: NSEvent) {
        let point = event.location(in: self)
        print(point)
    }
}
let frame = CGRect(x:0, y:0, width:1920, height:1080)
let view = SKView(frame:frame)
let scene = Scene(size: CGSize(width: 1080, height: 1080))
scene.backgroundColor = #colorLiteral(red: 0.4078431373, green: 0.7843137255, blue: 0.6509803922, alpha: 1)
scene.scaleMode = .aspectFit
let options = [NSTrackingAreaOptions.mouseMoved, NSTrackingAreaOptions.activeInKeyWindow, NSTrackingAreaOptions.activeAlways, NSTrackingAreaOptions.inVisibleRect, ] as NSTrackingAreaOptions
let tracker = NSTrackingArea(rect:frame, options: options, owner:view, userInfo: nil)
view.addTrackingArea(tracker)
PlaygroundPage.current.needsIndefiniteExecution = true
view.presentScene(scene)
PlaygroundPage.current.liveView = view
Then, I opened the playground Timeline view by clicking the "Show the Assistant Editor" button in the toolbar. I also opened the Debug area so that I could see the console.
At that point, the Timeline view showed a green view. I moved my mouse pointer over the green view and I could see the mouse coordinates being printed out in the console. So, as far as I can tell, the above code works fine.
Could you please try the code at your end and see what happens?
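In case it helps, the same setup can also live inside the scene itself rather than at the top level of the playground. Here is a minimal sketch of that variant, assuming the same tracking options as above:
class Scene: SKScene {
    override public func didMove(to view: SKView) {
        // Register the tracking area once the scene is presented in the view.
        let options: NSTrackingAreaOptions = [.mouseMoved, .activeInKeyWindow, .activeAlways, .inVisibleRect]
        let tracker = NSTrackingArea(rect: view.bounds, options: options, owner: view, userInfo: nil)
        view.addTrackingArea(tracker)
    }

    override public func mouseMoved(with event: NSEvent) {
        print(event.location(in: self))
    }
}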

Related

UIKit pinch gesture in a mixed SwiftUI / UIKit environment presents issues with scaleEffect, anchor and offset

Apple provides some elegant sample code for managing pinch gestures in a UIKit environment, which can be downloaded directly from Apple. In this sample code you will see three coloured rectangles that can each be panned, pinched and rotated. I will focus mainly on an issue with the pinch gesture.
My problem arises when trying to make this code work in a mixed environment: UIKit gestures created in a UIViewRepresentable's Coordinator talk to a model class, which in turn publishes values that trigger redraws in SwiftUI. Passing the data doesn't seem to be an issue, but the behaviour on the SwiftUI side is not what I expect.
Specifically, the pinch gesture shows an unexpected jump when the gesture starts. The bigger the scale, the more noticeable this quirk is. I also noticed that the anchor position and the previous anchor position seem to affect this behaviour (but I'm not sure exactly how).
Here is Apple's code for a UIKit environment:
func pinchPiece(_ pinchGestureRecognizer: UIPinchGestureRecognizer) {
    guard pinchGestureRecognizer.state == .began || pinchGestureRecognizer.state == .changed,
          let piece = pinchGestureRecognizer.view else {
        return
    }
    adjustAnchor(for: pinchGestureRecognizer)
    let scale = pinchGestureRecognizer.scale
    piece.transform = piece.transform.scaledBy(x: scale, y: scale)
    pinchGestureRecognizer.scale = 1 // Clear scale so that it is the right delta next time.
}

private func adjustAnchor(for gestureRecognizer: UIGestureRecognizer) {
    guard let piece = gestureRecognizer.view, gestureRecognizer.state == .began else {
        return
    }
    let locationInPiece = gestureRecognizer.location(in: piece)
    let locationInSuperview = gestureRecognizer.location(in: piece.superview)
    let anchorX = locationInPiece.x / piece.bounds.size.width
    let anchorY = locationInPiece.y / piece.bounds.size.height
    piece.layer.anchorPoint = CGPoint(x: anchorX, y: anchorY)
    piece.center = locationInSuperview
}
A piece in Apple's code is one of the rectangles we see in the sample code. In my code, a piece is a UIKit object living in a UIViewRepresentable; I call it uiView, and it holds all the gestures that it responds to:
@objc func pinch(_ gesture: UIPinchGestureRecognizer) {
    guard gesture.state == .began || gesture.state == .changed,
          let uiView = gesture.view else {
        return
    }
    adjustAnchor(for: gesture)
    parent.model.scale *= gesture.scale
    gesture.scale = 1
}

private func adjustAnchor(for gesture: UIPinchGestureRecognizer) {
    guard let uiView = gesture.view, gesture.state == .began else {
        return
    }
    let locationInUIView = gesture.location(in: uiView)
    let locationInSuperview = gesture.location(in: uiView.superview)
    let anchorX = locationInUIView.x / uiView.bounds.size.width
    let anchorY = locationInUIView.y / uiView.bounds.size.height
    parent.model.anchor = CGPoint(x: anchorX, y: anchorY)
    // parent.model.offset = CGSize(width: locationInSuperview.x, height: locationInSuperview.y)
}
The parent.model refers to the model class that comes through an EnvironmentObject directly into the UIViewRepresentable struct.
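The model class itself isn't shown; purely for context, here is a minimal sketch of what it might look like, inferred only from the properties referenced in this question (scale, anchor and offset). Note that the coordinator above assigns a CGPoint to the anchor while scaleEffect expects a UnitPoint, so the real model presumably stores or converts the anchor as a UnitPoint; the sketch below uses UnitPoint directly.
import SwiftUI

// Hypothetical model, inferred from the properties used in the question.
final class Model: ObservableObject {
    @Published var scale: CGFloat = 1.0
    @Published var anchor: UnitPoint = .center // scaleEffect's anchor parameter is a UnitPoint
    @Published var offset: CGSize = .zero
}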
On the SwiftUI side of things, ContentView looks like this (for clarity I'm using just one CustomUIView instead of the three pieces in Apple's code):
struct ContentView: View {
    @EnvironmentObject var model: Model
    var body: some View {
        CustomUIView()
            .frame(width: 300, height: 300)
            .scaleEffect(model.scale, anchor: model.anchor)
            .offset(model.offset)
    }
}
As soon as you try to pinch on the CustomUIView, the rectangle jumps a little, as if it were not correctly applying an initial translation to compensate for the anchor. The scaling does appear to follow the anchor, and the offset seems to be applied correctly when panning.
One odd hint: the initial jump seems to go in the direction of the anchor but stops halfway there, effectively not reaching the right translation and making the CustomUIView jump under your fingers. The closer you pinch to the previous anchor, the less noticeable the jump is.
Any help on this one would be greatly appreciated!
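No answer is included here, but one way this kind of jump is often explained: .scaleEffect(_:anchor:) implicitly translates the view by anchor * size * (1 - scale), so relocating the anchor while the scale is not 1 changes that translation and the view appears to jump. Below is a hedged sketch of a possible compensation, not from the original post; it assumes the model stores the anchor as a UnitPoint and that the offset is applied after scaleEffect, as in the ContentView above.
// Hypothetical variant of the coordinator's adjustAnchor(for:) that compensates the
// offset so the already-scaled view does not move when the anchor is relocated.
private func adjustAnchor(for gesture: UIPinchGestureRecognizer) {
    guard let uiView = gesture.view, gesture.state == .began else { return }

    let location = gesture.location(in: uiView)
    let newAnchor = UnitPoint(x: location.x / uiView.bounds.width,
                              y: location.y / uiView.bounds.height)
    let oldAnchor = parent.model.anchor
    let scale = parent.model.scale

    // scaleEffect translates by anchor * size * (1 - scale); cancel the change in that
    // translation caused by switching from oldAnchor to newAnchor.
    parent.model.offset.width += (newAnchor.x - oldAnchor.x) * uiView.bounds.width * (scale - 1)
    parent.model.offset.height += (newAnchor.y - oldAnchor.y) * uiView.bounds.height * (scale - 1)

    parent.model.anchor = newAnchor
}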

SwiftUI subview going off of the window

As a SwiftUI beginner, I have just recently started creating my first macOS app. However, when I tried to use NSVisualEffectView to blur the background, the contentView I was using went off the screen and wasn't visible at all. It looked something like this:
So I tried fixing the size of the text in the contentView by changing it to Text("Hello world!").frame(width: 700, height: 500), and the screen then looked like this:
With this change, the tiny shape of "Hello world!" can be seen in the bottom-left corner. However, unless I position the text at the top right of the contentView, I can't seem to move it. Does anyone know how to fix this?
For reference, here are the contents of AppDelegate.swift:
import SwiftUI
@main
class AppDelegate: NSObject, NSApplicationDelegate {
    var window: NSWindow!

    func applicationDidFinishLaunching(_ aNotification: Notification) {
        // Create the SwiftUI view that provides the window contents.
        let contentView = ContentView()
        let visualEffect = NSVisualEffectView()
        visualEffect.blendingMode = .behindWindow
        visualEffect.state = .active
        visualEffect.material = .fullScreenUI
        visualEffect.addSubview(NSHostingView(rootView: contentView))

        // Create the window and set the content view.
        window = NSWindow(
            contentRect: .zero,
            styleMask: [.titled, .closable, .miniaturizable, .resizable, .fullSizeContentView],
            backing: .buffered, defer: false)
        window.isReleasedWhenClosed = false
        window.center()
        window.setFrameAutosaveName("Main Window")
        window.contentView = visualEffect
        window.makeKeyAndOrderFront(nil)
        window.titlebarAppearsTransparent = true
        window.titleVisibility = .hidden
    }

    func applicationWillTerminate(_ aNotification: Notification) {
        // Insert code here to tear down your application
    }
}
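No answer is included here, but two details in the code above stand out: the NSHostingView is added as a subview without a frame or constraints, and the window is created with a zero contentRect. As a hedged sketch (the 480x300 size is just a placeholder, not from the original post), those lines are often written more like this:
// Hypothetical adjustment: pin the hosting view to the visual effect view and give
// the window a non-zero content rect so the SwiftUI content fills the window.
let hostingView = NSHostingView(rootView: contentView)
hostingView.translatesAutoresizingMaskIntoConstraints = false
visualEffect.addSubview(hostingView)
NSLayoutConstraint.activate([
    hostingView.leadingAnchor.constraint(equalTo: visualEffect.leadingAnchor),
    hostingView.trailingAnchor.constraint(equalTo: visualEffect.trailingAnchor),
    hostingView.topAnchor.constraint(equalTo: visualEffect.topAnchor),
    hostingView.bottomAnchor.constraint(equalTo: visualEffect.bottomAnchor)
])

window = NSWindow(
    contentRect: NSRect(x: 0, y: 0, width: 480, height: 300),
    styleMask: [.titled, .closable, .miniaturizable, .resizable, .fullSizeContentView],
    backing: .buffered, defer: false)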

Swift 3 submit form - UITextField changes only after focusing field again

I am working on a login view in Xcode/Swift 3 and trying to change the border color of a UITextField when validation of the text field fails. The UITextField should get a red border color.
The problem is that if I enter an email, then a password, and then press the submit button, I have to focus the email text field again before it gets a red border.
This is my LoginViewController.swift so far:
import Foundation
import UIKit
class LoginViewController: UIViewController, UITextFieldDelegate {

    @IBOutlet weak var userEmailTextField: UITextField!
    @IBOutlet weak var userPasswordTextField: UITextField!

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    // login button action
    @IBAction func loginButtonTabbed(_ sender: Any) {
        // getting values from text fields
        let userEmail = userEmailTextField.text
        let userPassword = userPasswordTextField.text

        // set endpoint data
        let requestURL = NSURL(string: Constants.apiUrl)

        // creating a task to send the post request
        var request = URLRequest(url: requestURL as! URL)
        request.httpMethod = "POST"
        let postString = "cmd=addUser&email="+userEmail!+"&password="+userPassword!
        request.httpBody = postString.data(using: .utf8)

        let task = URLSession.shared.dataTask(with: request) { data, response, error in
            guard let data = data, error == nil else { // check for fundamental networking error
                print("error=\(error)")
                return
            }
            if let httpStatus = response as? HTTPURLResponse, httpStatus.statusCode != 200 { // check for http errors
                print("statusCode should be 200, but is \(httpStatus.statusCode)")
                print("response = \(response)")
            }
            do {
                let json = try? JSONSerialization.jsonObject(with: data, options: [])
                // store json response to dictionary
                if let dictionary = json as? [String: Any] {
                    // check if we got validation errors
                    if let nestedDictionary = dictionary["validation"] as? [String: Any] {
                        // display validation messages on device
                        if let emailMsg = nestedDictionary["Email"] as? String { // change color of textfield
                            self.userEmailTextField.errorField()
                        }
                    }
                }
            } catch let error as NSError {
                print(error)
            }
        }
        // executing the task
        task.resume()
    }
}
and here is the UITextField extension, UITextField.swift:
import Foundation
import UIKit
extension UITextField {
    func errorField() {
        self.layer.borderColor = UIColor(red: 255/255.0, green: 59/255.0, blue: 48/255.0, alpha: 1.0).cgColor
        self.layer.borderWidth = 1.0
    }
}
When you're doing a network call, the completion handler runs in the background, so in order to do any kind of UI update you need to be on the main queue. Just put the self.userEmailTextField.errorField() call inside DispatchQueue.main.async {...} so the border updates immediately.
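For example, the relevant part of the completion handler above would become something like this:
if let emailMsg = nestedDictionary["Email"] as? String { // change color of textfield
    DispatchQueue.main.async {
        // UIKit views must only be touched from the main queue.
        self.userEmailTextField.errorField()
    }
}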
Also, it looks like you haven't tested your code very thoroughly. Why do I say that? Even with your current code the border would still turn red, but only after something like 6-7 seconds (it could take less or more for you), because the update is being run from a background thread.
What I don't understand is why tapping on the textField again brings the red border up right away. Here's what I'm guessing happens:
From the background thread you update the model, i.e. change the textField's border color, which queues the UI/view to be updated. But since we're on a background queue, that UI update can take a few seconds to happen.
But then you tapped on the textField right away and forced a quick read of the textField and all its properties, including the border, from the main thread (actual user touches are always handled on the main thread). Even though the border isn't red on screen yet, it is already red in the model, so that read picks it up and the color changes to red immediately.

TVOS adjustsImageWhenAncestorFocused Size

Is it possible to adjust the size/frame of a UIImageView when it is focused using imgView.adjustsImageWhenAncestorFocused = true?
I've been scouring the web but can't find anything that would set the zoom size of the effect; it seems to just be some default value.
It seems you can't do that. And I think Apple doesn't allow doing so for a reason.
There are very detailed Human Interface Guidelines for tvOS. They recommend spacing and item sizes for grid layouts with different numbers of columns, so that the viewing experience is optimal:
"The following grid layouts provide an optimal viewing experience. Be sure to use appropriate spacing between unfocused rows and columns to prevent overlap when an item is brought into focus."
I guess the "default" frame for the focused UIImageView takes these recommended item sizes into account. And Apple doesn't allow to change it, because it might cause issues, like other grid items being overlapped.
So you can't modify the frame of focused UIImageView, but you can access it indirectly - by using focusedFrameGuide property.
You can adjust the size via imgView.transform. If your imgView is inside another view (e.g. inside a UICollectionViewCell), you can use the code below to scale the image down by 10% when it receives focus:
override func didUpdateFocus(in context: UIFocusUpdateContext, with coordinator: UIFocusAnimationCoordinator) {
    super.didUpdateFocus(in: context, with: coordinator)
    if context.nextFocusedView === self {
        coordinator.addCoordinatedAnimations({
            self.imgView.transform = self.transform.scaledBy(x: 0.9, y: 0.9)
        }, completion: nil)
    }
    if context.previouslyFocusedView === self {
        coordinator.addCoordinatedAnimations({
            self.imgView.transform = .identity
        }, completion: nil)
    }
}
You can also calculate the system focus scale for a UIImageView with adjustsImageWhenAncestorFocused = true using the following code:
let xScale = imgView.focusedFrameGuide.layoutFrame.size.width / imgView.frame.size.width
let yScale = imgView.focusedFrameGuide.layoutFrame.size.height / imgView.frame.size.height
If you want to cancel that scaling when focusing on a UIImageView with adjustsImageWhenAncestorFocused = true, use:
override func didUpdateFocus(in context: UIFocusUpdateContext, with coordinator: UIFocusAnimationCoordinator) {
    super.didUpdateFocus(in: context, with: coordinator)
    let xScale = imgView.focusedFrameGuide.layoutFrame.size.width / imgView.frame.size.width
    let yScale = imgView.focusedFrameGuide.layoutFrame.size.height / imgView.frame.size.height
    if context.nextFocusedView === self {
        coordinator.addCoordinatedAnimations({
            self.imgView.transform = self.transform.scaledBy(x: 1 / xScale, y: 1 / yScale)
        }, completion: nil)
    }
    if context.previouslyFocusedView === self {
        coordinator.addCoordinatedAnimations({
            self.imgView.transform = .identity
        }, completion: nil)
    }
}
P.S. Don't forget to set clipsToBounds = false on the UIImageView.

Keyboard & textfields layout

I'm posting this to hopefully help people who have encountered the same problem as I have, and to get an answer to one question.
In one UIViewController I have multiple text fields, some located above the keyboard and some beneath it. I also have a toolbar above the keyboard to toggle through the UITextFields.
This is what the view looks like:
To set the right origin for the text fields, I have combined code from different answers into the following.
To register for the keyboard show and hide notifications:
override func viewDidLoad() {
    super.viewDidLoad()
    NotificationCenter.default.addObserver(self, selector: #selector(keyboardWillShow), name: .UIKeyboardWillShow, object: nil)
    NotificationCenter.default.addObserver(self, selector: #selector(keyboardWillHide), name: .UIKeyboardWillHide, object: nil)
}
KeyboardWillShow selector method:
func keyboardWillShow(_ notification: Notification) {
    if let userInfo = notification.userInfo,
       let keyboard = (userInfo[UIKeyboardFrameBeginUserInfoKey] as? NSValue)?.cgRectValue.size {
        let keyboardHeight = self.view.frame.height - keyboard.height
        for (_, textField) in textFields.enumerated() {
            let textFieldBottom = textField.frame.origin.y + textField.frame.height
            let dif = textFieldBottom - keyboardHeight
            if textFieldBottom >= keyboardHeight {
                if textField.isFirstResponder {
                    self.view.frame.origin.y = 0
                    self.view.frame.origin.y -= dif
                }
            }
        }
    }
}
KeyboardWillHide selector method:
func keyboardWillHide(_ notification: Notification) {
    view.frame.origin.y = 0
}
My question is: when I quickly tap one of the arrows in my toolbar, the code does not move the view the necessary distance, so the correct UITextField does not end up just above the keyboard.
Example: "First Incorrect Answer" is the first responder. I tap the > arrow twice quickly. "Second Incorrect Answer" will then be the UITextField just above the keyboard, while "Third Incorrect Answer" will be the first responder and not visible to the user.
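No answer is included here, but as a hedged sketch (not from the original post), one more robust variant of keyboardWillShow computes the offset only from the current first responder, uses the keyboard's end frame, and always measures from the view's unshifted position so quick first-responder changes don't accumulate stale offsets. It assumes the text fields are direct subviews of the controller's view.
func keyboardWillShow(_ notification: Notification) {
    guard let userInfo = notification.userInfo,
          let keyboardFrame = (userInfo[UIKeyboardFrameEndUserInfoKey] as? NSValue)?.cgRectValue,
          let activeField = textFields.first(where: { $0.isFirstResponder }) else { return }

    // Keyboard top measured in the view's own, unshifted coordinate space.
    let keyboardTop = view.bounds.height - keyboardFrame.height
    let fieldBottom = activeField.frame.maxY

    // Shift just enough to expose the active field; otherwise move back to 0.
    let overlap = fieldBottom - keyboardTop
    view.frame.origin.y = overlap > 0 ? -overlap : 0
}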