TextField animation is gone in Swift 3

Why does the animation I set on the text field's bounds property disappear as soon as I start typing inside the text field?
Here is the code:
// textField delegate
func textFieldDidBeginEditing(_ textField: UITextField) {
    UIView.animate(withDuration: 0.4, delay: 0.0, usingSpringWithDamping: 0.6, initialSpringVelocity: 0, options: .curveEaseInOut, animations: {
        textField.layer.shadowOpacity = 6
        textField.bounds.size.width += 15
        textField.bounds.size.height += 15
    }, completion: nil)
}
How can I fix this issue?
Thanks a lot.

Although "animation is gone" is a bit unclear, two things stand out. First, UIView.animate does not animate standalone layer properties such as shadowOpacity; setting them inside the animation block applies them immediately, without animation (you'd need a CABasicAnimation for those).
Second, if the text field's frame is set by Auto Layout constraints, the next layout pass (which typing can trigger) resets the bounds you changed. You should edit the constraints' constants instead of changing the frame or bounds directly.
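A minimal sketch of the constraint-based approach, assuming hypothetical `widthConstraint` and `heightConstraint` outlets drive the text field's size (substitute your actual constraint references):

```swift
// Sketch: animate constraint constants instead of mutating bounds directly.
// `widthConstraint` and `heightConstraint` are assumed outlets, not part of
// the original question's code.
func textFieldDidBeginEditing(_ textField: UITextField) {
    widthConstraint.constant += 15
    heightConstraint.constant += 15
    UIView.animate(withDuration: 0.4, delay: 0.0,
                   usingSpringWithDamping: 0.6, initialSpringVelocity: 0,
                   options: .curveEaseInOut, animations: {
        // Animating the layout pass keeps the new size stable across the
        // layout passes triggered while typing.
        textField.superview?.layoutIfNeeded()
    }, completion: nil)
}
```

Alternatively, a transform (e.g. `textField.transform = CGAffineTransform(scaleX: 1.1, y: 1.1)`) survives layout passes, since Auto Layout does not reset a view's transform.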

Related

UIKit pinch gesture in a mixed SwiftUI / UIKit environment presents issues with scaleEffect, anchor and offset

Apple provides some elegant sample code for managing pinch gestures in a UIKit environment, which can be downloaded directly from Apple. In this sample code you see three coloured rectangles that can each be panned, pinched and rotated. I will focus mainly on an issue with the pinch gesture.
My problem arises when trying to make this code work in a mixed environment: UIKit gestures created on a UIViewRepresentable's Coordinator talk to a model class that in turn publishes values that trigger redraws in SwiftUI. Passing data doesn't seem to be an issue, but the behaviour on the SwiftUI side is not what I expect.
Specifically, the pinch gesture shows an unexpected jump when starting the gesture. When the scale is bigger, this quirky effect is more noticeable. I also noticed that the anchor position and the previous anchor position seem to affect this behaviour (but I'm not sure how exactly).
Here is Apple's code for a UIKit environment:
func pinchPiece(_ pinchGestureRecognizer: UIPinchGestureRecognizer) {
    guard pinchGestureRecognizer.state == .began || pinchGestureRecognizer.state == .changed,
          let piece = pinchGestureRecognizer.view else {
        return
    }
    adjustAnchor(for: pinchGestureRecognizer)
    let scale = pinchGestureRecognizer.scale
    piece.transform = piece.transform.scaledBy(x: scale, y: scale)
    pinchGestureRecognizer.scale = 1 // Clear scale so that it is the right delta next time.
}

private func adjustAnchor(for gestureRecognizer: UIGestureRecognizer) {
    guard let piece = gestureRecognizer.view, gestureRecognizer.state == .began else {
        return
    }
    let locationInPiece = gestureRecognizer.location(in: piece)
    let locationInSuperview = gestureRecognizer.location(in: piece.superview)
    let anchorX = locationInPiece.x / piece.bounds.size.width
    let anchorY = locationInPiece.y / piece.bounds.size.height
    piece.layer.anchorPoint = CGPoint(x: anchorX, y: anchorY)
    piece.center = locationInSuperview
}
A piece in Apple's code is one of the rectangles in the sample. In my code, a piece is a UIKit object living in a UIViewRepresentable; I call it uiView, and it holds all the gestures it responds to:
@objc func pinch(_ gesture: UIPinchGestureRecognizer) {
    guard gesture.state == .began || gesture.state == .changed,
          let uiView = gesture.view else {
        return
    }
    adjustAnchor(for: gesture)
    parent.model.scale *= gesture.scale
    gesture.scale = 1
}
private func adjustAnchor(for gesture: UIPinchGestureRecognizer) {
    guard let uiView = gesture.view, gesture.state == .began else {
        return
    }
    let locationInUIView = gesture.location(in: uiView)
    let locationInSuperview = gesture.location(in: uiView.superview)
    let anchorX = locationInUIView.x / uiView.bounds.size.width
    let anchorY = locationInUIView.y / uiView.bounds.size.height
    parent.model.anchor = CGPoint(x: anchorX, y: anchorY)
    // parent.model.offset = CGSize(width: locationInSuperview.x, height: locationInSuperview.y)
}
The parent.model refers to the model class that comes through an EnvironmentObject directly into the UIViewRepresentable struct.
In the SwiftUI side of things, ContentView looks like this (for clarity I'm just using one CustomUIView instead of the three pieces of Apple's code):
struct ContentView: View {
    @EnvironmentObject var model: Model
    var body: some View {
        CustomUIView()
            .frame(width: 300, height: 300)
            .scaleEffect(model.scale, anchor: model.anchor)
            .offset(model.offset)
    }
}
As soon as you try to pinch on the CustomUIView, the rectangle jumps a little as if it would not be correctly applying an initial translation to compensate for the anchor. The scaling does appear to work according to the anchor and the offset seems to be applied correctly when panning.
One odd hint: the initial jump seems to go in the direction of the anchor but stops halfway there, effectively not reaching the right translation and making the CustomUIView jump under your fingers. As you keep pinching closer to the previous anchor, the jump is less noticeable.
Any help on this one would be greatly appreciated!
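One direction to investigate, sketched under the assumption that the jump comes from moving scaleEffect's anchor without compensating the offset: when the anchor changes while the scale is not 1, the rendered frame shifts, so shift the offset by the same amount in the opposite direction. This is an unverified sketch against the question's assumed `parent.model` (note that `scaleEffect(_:anchor:)` takes a `UnitPoint`, not a `CGPoint`):

```swift
// Hypothetical sketch: compensate the offset when the anchor changes so
// that re-anchoring does not visually move the view.
private func adjustAnchor(for gesture: UIPinchGestureRecognizer) {
    guard let uiView = gesture.view, gesture.state == .began else { return }
    let location = gesture.location(in: uiView)
    let newAnchor = UnitPoint(x: location.x / uiView.bounds.width,
                              y: location.y / uiView.bounds.height)
    let scale = parent.model.scale
    let size = uiView.bounds.size
    // Moving the anchor by Δ (in unit coordinates) at scale s shifts the
    // rendered frame by Δ * size * (1 - s); add Δ * size * (s - 1) to the
    // offset to cancel that shift.
    let dx = (newAnchor.x - parent.model.anchor.x) * size.width * (scale - 1)
    let dy = (newAnchor.y - parent.model.anchor.y) * size.height * (scale - 1)
    parent.model.offset.width += dx
    parent.model.offset.height += dy
    parent.model.anchor = newAnchor
}
```

The "halfway" jump described above is consistent with only part of this compensation happening implicitly, but I have not verified the formula against Apple's sample.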

Custom NSAttributedString.Key: everything looks normal, but it doesn't show up and has no effect

I have searched all over the network for this. I created a custom class sizeFont that inherits from UIFont. It can use func systemFont, printing looks normal, and the attribute is set.
But it has no effect on the display.
My configuration:
macOS 11.1
iOS 14.3
Xcode 12.3
I have tried the following:
1. Creating a custom NSAttributedString.Key — no effect.
2. Both the simulator and a real device — no effect.
Here is my code:
import SwiftUI

struct ContentView: View {
    var body: some View {
        UIkitTextView()
            .padding()
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}

extension NSAttributedString.Key {
    static let textStyle: NSAttributedString.Key = .init("textStyle")
}

struct UIkitTextView: UIViewRepresentable {
    var fullString: NSMutableAttributedString = NSMutableAttributedString(string: "Hello, World")

    func makeUIView(context: Context) -> UITextView {
        let view = UITextView()
        let attributedtest: [NSAttributedString.Key: Any] = [
            .sizefont: UIFont.systemFont(ofSize: 72),
            .foregroundColor: UIColor.red,
        ]
        fullString.setAttributes(attributedtest, range: NSRange(location: 0, length: 5))
        view.attributedText = fullString
        print("\(fullString.attributedSubstring(from: NSRange(location: 0, length: 5)))")
        return view
    }

    func updateUIView(_ uiView: UITextView, context: Context) {
    }
}

class sizeFont: UIFont {
}

extension NSAttributedString.Key {
    static let sizefont: NSAttributedString.Key = .init(rawValue: "sizeFont")
}
Thanks
My only experience with attributed text is converting an integer into a smoothly scaled image.
This is how I did it; perhaps it can help:
func imageOf(_ val: Int, backgroundColor: UIColor = .gray, foregroundColor: UIColor = .yellow) -> UIImage {
    let t: String = (val == 0) ? " " : String(val)
    let attributes = [
        NSAttributedString.Key.foregroundColor: foregroundColor,
        NSAttributedString.Key.backgroundColor: backgroundColor,
        NSAttributedString.Key.font: UIFont.systemFont(ofSize: 70)
    ]
    let textSize = t.size(withAttributes: attributes)
    let renderer = UIGraphicsImageRenderer(size: textSize)
    let newImage = renderer.image(actions: { _ in t.draw(at: CGPoint(x: 0, y: 0), withAttributes: attributes) })
    return newImage
}
Ask yourself this:
How is iOS supposed to know what effect to apply for your new NSAttributedString.Key? How should it render it?
By reading only the value (and not the key) and acting according to its type? If so, how could iOS know whether a UIColor value is meant for NSAttributedString.Key.foregroundColor or for NSAttributedString.Key.backgroundColor?
In other words, you added information, but no one reads your mind and knows what to do with it.
You'd have to work with the CoreText framework; there is useful information there.
As seen there, you can go further with an NSLayoutManager. For instance, at some point drawUnderline(forGlyphRange:underlineType:baselineOffset:lineFragmentRect:lineFragmentGlyphRange:containerOrigin:) will be called, and it will render the underline.
Do you see the logic now: who calls that method, and with which parameters? Applied to your new key/value, which code would call the corresponding method?
I guess the value should end up in NSAttributedString.Key.font, so you might write a method to convert it into the correct key. Maybe it's meant as a toggle (between the currently set font and the value of sizeFont?), but given the name, why not just store a CGFloat value?
You might ask a new question explaining the effect you want. But this answer should answer "why it's not working", or rather "why it's not doing anything" (as for "working", I don't know what it's supposed to do in the first place).
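If the intent was simply to carry a font in the custom key and have it rendered, one workaround (a sketch, not the only approach) is to translate the custom attribute into the standard .font attribute before assigning the string, since UITextView only renders the standard keys:

```swift
// Sketch: copy the value stored under the custom `.sizefont` key into the
// standard `.font` key, which the text system actually renders.
func resolvingCustomFonts(_ source: NSAttributedString) -> NSAttributedString {
    let result = NSMutableAttributedString(attributedString: source)
    let fullRange = NSRange(location: 0, length: source.length)
    source.enumerateAttribute(.sizefont, in: fullRange) { value, range, _ in
        if let font = value as? UIFont {
            // The custom key stays in place for your own bookkeeping; the
            // standard key is what makes the font visible.
            result.addAttribute(.font, value: font, range: range)
        }
    }
    return result
}
```

With this in place, `view.attributedText = resolvingCustomFonts(fullString)` would show the large red text that the question expected.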

mouseMoved function not called when I move the mouse?

I am trying to find the mouse coordinates within an SKScene; however, the mouseMoved function is not being called. (This is in a Swift Playground, by the way.) I even added a print statement to check whether the function was being called at all, but it prints absolutely nothing.
This is how I set up my NSTrackingArea:
let options = [NSTrackingAreaOptions.mouseMoved, NSTrackingAreaOptions.activeInKeyWindow, NSTrackingAreaOptions.activeAlways, NSTrackingAreaOptions.inVisibleRect] as NSTrackingAreaOptions
let tracker = NSTrackingArea(rect: viewFrame, options: options, owner: self.view, userInfo: nil)
self.view?.addTrackingArea(tracker)
And here is the mouseMoved function (the one that is not being called):
override public func mouseMoved(with event: NSEvent) {
    point = event.location(in: self)
    print(point)
}
Is there a reason that mouseMoved isn't being called?
I created a playground with the following code (and only that code):
import AppKit
import SpriteKit
import PlaygroundSupport

class Scene: SKScene {
    override public func mouseMoved(with event: NSEvent) {
        let point = event.location(in: self)
        print(point)
    }
}

let frame = CGRect(x: 0, y: 0, width: 1920, height: 1080)
let view = SKView(frame: frame)
let scene = Scene(size: CGSize(width: 1080, height: 1080))
scene.backgroundColor = #colorLiteral(red: 0.4078431373, green: 0.7843137255, blue: 0.6509803922, alpha: 1)
scene.scaleMode = .aspectFit

let options = [NSTrackingAreaOptions.mouseMoved, NSTrackingAreaOptions.activeInKeyWindow, NSTrackingAreaOptions.activeAlways, NSTrackingAreaOptions.inVisibleRect] as NSTrackingAreaOptions
let tracker = NSTrackingArea(rect: frame, options: options, owner: view, userInfo: nil)
view.addTrackingArea(tracker)

PlaygroundPage.current.needsIndefiniteExecution = true
view.presentScene(scene)
PlaygroundPage.current.liveView = view
Then, I opened the playground Timeline view by clicking the "Show the Assistant Editor" button in the toolbar. I also opened the Debug area so that I could see the console.
At that point, the Timeline view showed a green view. I moved my mouse pointer over the green view and I could see the mouse coordinates being printed out in the console. So, as far as I can tell, the above code works fine.
Could you please try the code at your end and see what happens?

TVOS adjustsImageWhenAncestorFocused Size

Is it possible to adjust the size/frame of a UIImageView when it is focused using imgView.adjustsImageWhenAncestorFocused = true?
I've been scouring the web but can't find anything that would set the zoom size of the effect; it seems to just be some default value.
It seems you can't do that, and I think Apple doesn't allow it for a reason.
There are very detailed Human Interface Guidelines for tvOS. They recommend spacing and item sizes for grid layouts with different numbers of columns, so that the viewing experience is optimal:
The following grid layouts provide an optimal viewing experience. Be sure to use appropriate spacing between unfocused rows and columns to prevent overlap when an item is brought into focus.
I guess the "default" frame for the focused UIImageView takes these recommended item sizes into account, and Apple doesn't allow changing it because it might cause issues, like other grid items being overlapped.
So you can't modify the frame of the focused UIImageView, but you can access it indirectly, via the focusedFrameGuide property.
You can adjust the size via imgView.transform. If your imgView is inside another view (e.g. inside a UICollectionViewCell), you can use the code below to scale the image down by 10% when it receives focus:
override func didUpdateFocus(in context: UIFocusUpdateContext, with coordinator: UIFocusAnimationCoordinator) {
    super.didUpdateFocus(in: context, with: coordinator)
    if context.nextFocusedView === self {
        coordinator.addCoordinatedAnimations({
            self.imgView.transform = self.transform.scaledBy(x: 0.9, y: 0.9)
        }, completion: nil)
    }
    if context.previouslyFocusedView === self {
        coordinator.addCoordinatedAnimations({
            self.imgView.transform = .identity
        }, completion: nil)
    }
}
You can also calculate the system focus scale for a UIImageView with adjustsImageWhenAncestorFocused = true using the following code:
let xScale = imgView.focusedFrameGuide.layoutFrame.size.width / imgView.frame.size.width
let yScale = imgView.focusedFrameGuide.layoutFrame.size.height / imgView.frame.size.height
If you want to cancel the scaling when focusing on a UIImageView with adjustsImageWhenAncestorFocused = true, use:
override func didUpdateFocus(in context: UIFocusUpdateContext, with coordinator: UIFocusAnimationCoordinator) {
    super.didUpdateFocus(in: context, with: coordinator)
    let xScale = imgView.focusedFrameGuide.layoutFrame.size.width / imgView.frame.size.width
    let yScale = imgView.focusedFrameGuide.layoutFrame.size.height / imgView.frame.size.height
    if context.nextFocusedView === self {
        coordinator.addCoordinatedAnimations({
            self.imgView.transform = self.transform.scaledBy(x: 1 / xScale, y: 1 / yScale)
        }, completion: nil)
    }
    if context.previouslyFocusedView === self {
        coordinator.addCoordinatedAnimations({
            self.imgView.transform = .identity
        }, completion: nil)
    }
}
P.S. Don't forget to set clipsToBounds = false on the UIImageView.

'CGAffineTransformIdentity' is unavailable in Swift

I came across this error when trying to adapt some animations to Swift 3 syntax.
UIView.animate(withDuration: duration, delay: 0.0, usingSpringWithDamping: 0.5,
               initialSpringVelocity: 0.8, options: [], animations: {
    fromView.transform = offScreenLeft
    toView.transform = CGAffineTransformIdentity
}, completion: { finished in
    transitionContext.completeTransition(true)
})
and got this:
'CGAffineTransformIdentity' is unavailable in Swift
I found this link, which suggested that "The global constant was moved into a static property, and the Swift 3 migrator, as you've discovered, failed to correct for that," and that you can simply change the code to:
toView.transform = CGAffineTransform.identity
EDIT
or even simpler:
toView.transform = .identity