Is it possible to adjust the size/frame of a UIImageView when it is focused, using imgView.adjustsImageWhenAncestorFocused = true?
I've been scouring the web but can't find anything that would set the zoom size of the effect; it seems to just be some default value.
It seems you can't do that. And I think Apple doesn't allow doing so for a reason.
There are very detailed Human Interface Guidelines for tvOS. They recommend spacing and item sizes for grid layouts with different numbers of columns, so that the viewing experience is optimal:
The following grid layouts provide an optimal viewing experience. Be sure to use appropriate spacing between unfocused rows and columns to prevent overlap when an item is brought into focus.
I guess the "default" frame for the focused UIImageView takes these recommended item sizes into account, and Apple doesn't allow changing it because doing so might cause issues, like other grid items being overlapped.
So you can't modify the frame of the focused UIImageView, but you can access it indirectly via the focusedFrameGuide property.
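For example, a small sketch of reading the guide (note that its layoutFrame is only meaningful once the view has been laid out):

// Reading the size the system will give the image view when focused.
// Assumes imgView.adjustsImageWhenAncestorFocused == true.
let focusedSize = imgView.focusedFrameGuide.layoutFrame.size
let normalSize = imgView.frame.size
print("Horizontal focus zoom: \(focusedSize.width / normalSize.width)")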
You can adjust the size via imgView.transform. If your imgView is inside another view (e.g. inside a UICollectionViewCell), you can use the code below to scale the image down by 10% when it receives focus:
override func didUpdateFocus(in context: UIFocusUpdateContext, with coordinator: UIFocusAnimationCoordinator) {
    super.didUpdateFocus(in: context, with: coordinator)

    if context.nextFocusedView === self {
        coordinator.addCoordinatedAnimations({
            // Scale relative to identity, not to the cell's current transform.
            self.imgView.transform = CGAffineTransform(scaleX: 0.9, y: 0.9)
        }, completion: nil)
    }
    if context.previouslyFocusedView === self {
        coordinator.addCoordinatedAnimations({
            self.imgView.transform = .identity
        }, completion: nil)
    }
}
You can also calculate the system focus scale for a UIImageView with adjustsImageWhenAncestorFocused = true using the following code:
let xScale = imgView.focusedFrameGuide.layoutFrame.size.width / imgView.frame.size.width
let yScale = imgView.focusedFrameGuide.layoutFrame.size.height / imgView.frame.size.height
If you want to cancel the scale effect when focusing on a UIImageView with adjustsImageWhenAncestorFocused = true, use:
override func didUpdateFocus(in context: UIFocusUpdateContext, with coordinator: UIFocusAnimationCoordinator) {
    super.didUpdateFocus(in: context, with: coordinator)

    let xScale = imgView.focusedFrameGuide.layoutFrame.size.width / imgView.frame.size.width
    let yScale = imgView.focusedFrameGuide.layoutFrame.size.height / imgView.frame.size.height

    if context.nextFocusedView === self {
        coordinator.addCoordinatedAnimations({
            // Counteract the system zoom by applying the inverse scale.
            self.imgView.transform = CGAffineTransform(scaleX: 1 / xScale, y: 1 / yScale)
        }, completion: nil)
    }
    if context.previouslyFocusedView === self {
        coordinator.addCoordinatedAnimations({
            self.imgView.transform = .identity
        }, completion: nil)
    }
}
P.S. Don't forget to set clipsToBounds = false on UIImageView
Apple provides some elegant code for managing pinch gestures in a UIKit environment, which can be downloaded directly from Apple. In this sample code you will see three coloured rectangles that can each be panned, pinched and rotated. I will focus mainly on an issue with the pinch gesture.
My problem arises when trying to make this code work in a mixed environment, using UIKit gestures created on a UIViewRepresentable's Coordinator that talk to a model class, which in turn publishes values that trigger redraws in SwiftUI. Passing data doesn't seem to be an issue, but the behaviour on the SwiftUI side is not what I expect.
Specifically, the pinch gesture shows an unexpected jump when starting the gesture. When the scale is bigger, this quirky effect is more noticeable. I also noticed that the anchor position and the previous anchor position seem to affect this behaviour (but I'm not sure how exactly).
Here is Apple's code for a UIKit environment:
func pinchPiece(_ pinchGestureRecognizer: UIPinchGestureRecognizer) {
    guard pinchGestureRecognizer.state == .began || pinchGestureRecognizer.state == .changed,
          let piece = pinchGestureRecognizer.view else {
        return
    }
    adjustAnchor(for: pinchGestureRecognizer)

    let scale = pinchGestureRecognizer.scale
    piece.transform = piece.transform.scaledBy(x: scale, y: scale)
    pinchGestureRecognizer.scale = 1 // Clear scale so that it is the right delta next time.
}

private func adjustAnchor(for gestureRecognizer: UIGestureRecognizer) {
    guard let piece = gestureRecognizer.view, gestureRecognizer.state == .began else {
        return
    }

    let locationInPiece = gestureRecognizer.location(in: piece)
    let locationInSuperview = gestureRecognizer.location(in: piece.superview)
    let anchorX = locationInPiece.x / piece.bounds.size.width
    let anchorY = locationInPiece.y / piece.bounds.size.height
    piece.layer.anchorPoint = CGPoint(x: anchorX, y: anchorY)
    piece.center = locationInSuperview
}
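As an aside, the last two lines are the crucial part: changing layer.anchorPoint alone would shift the view on screen, so Apple re-centers it under the fingers. A more general compensation (a sketch of the standard technique, not part of Apple's sample) looks like this:

// Moves the layer's anchor point without moving the view on screen:
// shift `position` by the same amount the anchor change would shift the view.
func setAnchorPoint(_ anchorPoint: CGPoint, for view: UIView) {
    let newPoint = CGPoint(x: view.bounds.width * anchorPoint.x,
                           y: view.bounds.height * anchorPoint.y)
        .applying(view.transform)
    let oldPoint = CGPoint(x: view.bounds.width * view.layer.anchorPoint.x,
                           y: view.bounds.height * view.layer.anchorPoint.y)
        .applying(view.transform)
    view.layer.position.x += newPoint.x - oldPoint.x
    view.layer.position.y += newPoint.y - oldPoint.y
    view.layer.anchorPoint = anchorPoint
}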
A piece in Apple's code is one of the rectangles we see in the sample code. In my code, a piece is a UIKit object living in a UIViewRepresentable; I call it uiView and it holds all the gestures it responds to:
@objc func pinch(_ gesture: UIPinchGestureRecognizer) {
    guard gesture.state == .began || gesture.state == .changed,
          gesture.view != nil else {
        return
    }
    adjustAnchor(for: gesture)
    parent.model.scale *= gesture.scale
    gesture.scale = 1
}
private func adjustAnchor(for gesture: UIPinchGestureRecognizer) {
    guard let uiView = gesture.view, gesture.state == .began else {
        return
    }

    let locationInUIView = gesture.location(in: uiView)
    let locationInSuperview = gesture.location(in: uiView.superview)
    let anchorX = locationInUIView.x / uiView.bounds.size.width
    let anchorY = locationInUIView.y / uiView.bounds.size.height
    // scaleEffect expects a UnitPoint anchor
    parent.model.anchor = UnitPoint(x: anchorX, y: anchorY)
    // parent.model.offset = CGSize(width: locationInSuperview.x, height: locationInSuperview.y)
}
The parent.model refers to the model class that comes through an EnvironmentObject directly into the UIViewRepresentable struct.
In the SwiftUI side of things, ContentView looks like this (for clarity I'm just using one CustomUIView instead of the three pieces of Apple's code):
struct ContentView: View {
    @EnvironmentObject var model: Model

    var body: some View {
        CustomUIView()
            .frame(width: 300, height: 300)
            .scaleEffect(model.scale, anchor: model.anchor)
            .offset(model.offset)
    }
}
As soon as you try to pinch on the CustomUIView, the rectangle jumps a little, as if it were not correctly applying an initial translation to compensate for the anchor. The scaling does appear to work according to the anchor, and the offset seems to be applied correctly when panning.
One odd hint: the initial jump seems to go in the direction of the anchor but stops halfway there, effectively not reaching the right translation and making the CustomUIView jump under your fingers. As you keep pinching closer to the previous anchor, the jump is less noticeable.
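That hypothesis can be tested. Under scaleEffect(scale, anchor:), moving the anchor from a1 to a2 while a scale is applied shifts the content by (a2 − a1) × size × (1 − scale), so the offset has to absorb the opposite amount. A revised adjustAnchor along these lines (my sketch only, assuming model.anchor is a UnitPoint and model.offset a CGSize):

// A sketch, not a verified fix: compensate the offset when the scale anchor
// moves mid-gesture, so the already-scaled view stays visually in place.
private func adjustAnchor(for gesture: UIPinchGestureRecognizer) {
    guard let uiView = gesture.view, gesture.state == .began else { return }

    let location = gesture.location(in: uiView)
    let newAnchor = UnitPoint(x: location.x / uiView.bounds.width,
                              y: location.y / uiView.bounds.height)

    // Add the opposite of the shift caused by the anchor change.
    let scale = parent.model.scale
    parent.model.offset.width += (newAnchor.x - parent.model.anchor.x) * uiView.bounds.width * (scale - 1)
    parent.model.offset.height += (newAnchor.y - parent.model.anchor.y) * uiView.bounds.height * (scale - 1)
    parent.model.anchor = newAnchor
}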
Any help on this one would be greatly appreciated!
My app requests JSON data (latitude, longitude, and other information about a place) and then displays it on a map in the form of clickable annotations. I'm receiving around 30,000 of those, so as you can imagine, the app can get a little "laggy".
The solution I think would fit the app best is to show those annotations only at a certain zoom level (for example, when the user zooms in so that only one city is visible at once, the annotations show up). Since there are a lot of them, showing all 30,000 would probably crash the app, which is why I also aim to show only those close to where the user zoomed in.
The code below immediately shows all annotations at once, at all zoom levels. Is there a way to adapt it to do the things I described above?
struct Map: UIViewRepresentable {

    @EnvironmentObject var model: ContentModel
    @ObservedObject var data = FetchData()

    var locations: [MKPointAnnotation] {
        var annotations = [MKPointAnnotation]()
        // Loop through all places
        for place in data.dataList {
            // If the place has a lat and long, create an annotation
            if let lat = place.latitude, let long = place.longitude {
                // Create an annotation
                let a = MKPointAnnotation()
                a.coordinate = CLLocationCoordinate2D(latitude: Double(lat)!, longitude: Double(long)!)
                a.title = place.address ?? ""
                annotations.append(a)
            }
        }
        return annotations
    }

    func makeUIView(context: Context) -> MKMapView {
        let mapView = MKMapView()
        mapView.delegate = context.coordinator
        // Show the user on the map
        mapView.showsUserLocation = true
        mapView.userTrackingMode = .followWithHeading
        return mapView
    }

    func updateUIView(_ uiView: MKMapView, context: Context) {
        // Remove all annotations
        uiView.removeAnnotations(uiView.annotations)
        // HERE'S WHERE I SHOW THE ANNOTATIONS
        uiView.showAnnotations(self.locations, animated: true)
    }

    // The coordinator parameter must match the Coordinator type, or this is never called
    static func dismantleUIView(_ uiView: MKMapView, coordinator: Coordinator) {
        uiView.removeAnnotations(uiView.annotations)
    }

    // MARK: Coordinator Class
    func makeCoordinator() -> Coordinator {
        return Coordinator(map: self)
    }

    class Coordinator: NSObject, MKMapViewDelegate {
        var map: Map

        init(map: Map) {
            self.map = map
        }

        func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? {
            // Don't treat the user as an annotation
            if annotation is MKUserLocation {
                return nil
            }
            // Check for a reusable annotation view
            var annotationView = mapView.dequeueReusableAnnotationView(withIdentifier: Constants.annotationReusedId)
            // If none found, create a new one
            if annotationView == nil {
                annotationView = MKMarkerAnnotationView(annotation: annotation, reuseIdentifier: Constants.annotationReusedId)
                annotationView!.canShowCallout = true
                annotationView!.rightCalloutAccessoryView = UIButton(type: .detailDisclosure)
            } else {
                // Carry on with the reusable annotation view
                annotationView!.annotation = annotation
            }
            return annotationView
        }
    }
}
I've been searching for an answer for a while now and have found nothing that works well. I imagine there's a way to get the visible map rect and then use it as a condition in the Map struct, but I don't know how to do that. Thanks for reading this far!
Your delegate can implement mapView(_:regionDidChangeAnimated:) to be notified when the user finishes a gesture that changes the map's visible region. It can implement mapViewDidChangeVisibleRegion(_:) to be notified while the gesture is happening.
You can get the map's visible region by asking it for its region property. Regarding zoom levels, the region documentation says this:
The region encompasses both the latitude and longitude point on which the map is centered and the span of coordinates to display. The span values provide an implicit zoom value for the map. The larger the displayed area, the lower the amount of zoom. Similarly, the smaller the displayed area, the greater the amount of zoom.
Your updateUIView method recalculates the locations array every time SwiftUI calls it (because locations is a computed property). You should check how often SwiftUI is calling updateUIView and decide whether you need to cache the locations array.
If you want to efficiently find the locations in the visible region, try storing the locations in a quadtree.
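For illustration, here is a minimal quadtree sketch (my own, not part of the answer above; the MKMapRect geometry calls are real MapKit API, the QuadTree type itself is illustrative):

import MapKit

// A minimal quadtree over MKMapRect. Insert all 30,000 annotations once,
// then query only the visible rect on each region change.
final class QuadTree {
    private let capacity = 8
    private let rect: MKMapRect
    private var stored: [MKPointAnnotation] = []
    private var children: [QuadTree]?

    init(rect: MKMapRect = .world) { self.rect = rect }

    @discardableResult
    func insert(_ annotation: MKPointAnnotation) -> Bool {
        guard rect.contains(MKMapPoint(annotation.coordinate)) else { return false }
        if stored.count < capacity && children == nil {
            stored.append(annotation)
            return true
        }
        if children == nil { subdivide() }
        // contains(where:) short-circuits at the first child that accepts the point
        return children!.contains { $0.insert(annotation) }
    }

    func annotations(in searchRect: MKMapRect) -> [MKPointAnnotation] {
        guard rect.intersects(searchRect) else { return [] }
        var found = stored.filter { searchRect.contains(MKMapPoint($0.coordinate)) }
        children?.forEach { found += $0.annotations(in: searchRect) }
        return found
    }

    private func subdivide() {
        let w = rect.size.width / 2
        let h = rect.size.height / 2
        children = [
            QuadTree(rect: MKMapRect(x: rect.minX,     y: rect.minY,     width: w, height: h)),
            QuadTree(rect: MKMapRect(x: rect.minX + w, y: rect.minY,     width: w, height: h)),
            QuadTree(rect: MKMapRect(x: rect.minX,     y: rect.minY + h, width: w, height: h)),
            QuadTree(rect: MKMapRect(x: rect.minX + w, y: rect.minY + h, width: w, height: h)),
        ]
    }
}

// Usage: tree.annotations(in: mapView.visibleMapRect)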
Finally figured that out...
The Coordinator class can implement mapView(_:regionDidChangeAnimated:) (as @rob mayoff said), which gets called after the user finishes a gesture that changes the map's visible region. When that happens, the annotations on the map and their array are updated. It looks something like this...
func mapView(_ mapView: MKMapView, regionDidChangeAnimated animated: Bool) {
    if mapView.region.span.latitudeDelta < <Double that represents zoom> && mapView.region.span.longitudeDelta < <Double that represents zoom> {
        mapView.removeAnnotations(mapView.annotations)
        mapView.addAnnotations(map.getLocations(center: mapView.region.center))
    }
}
... the phrases in < > (the doubles missing from the if statement) are to be replaced with your own code (the greater the double, the smaller the zoom level needed to show the annotations). The array of annotations is updated by a function defined in the Map struct, which looks like this...
func getLocations(center: CLLocationCoordinate2D) -> [MKPointAnnotation] {
    var annotations = [MKPointAnnotation]()
    let annotationSpanIndex: Double = model.latlongDelta * 10 * 0.035
    // Loop through all places
    for place in data.dataList {
        // If the place has a lat and long, create an annotation
        if let lat = place.latitude, let long = place.longitude {
            // Create annotations only for places within a certain region
            if Double(lat)! >= center.latitude - annotationSpanIndex
                && Double(lat)! <= center.latitude + annotationSpanIndex
                && Double(long)! >= center.longitude - annotationSpanIndex
                && Double(long)! <= center.longitude + annotationSpanIndex {
                // Create an annotation
                let a = MKPointAnnotation()
                a.coordinate = CLLocationCoordinate2D(latitude: Double(lat)!, longitude: Double(long)!)
                a.title = place.address ?? ""
                annotations.append(a)
            }
        }
    }
    return annotations
}
... where annotationSpanIndex determines how big a region around the center point the annotations will be shown in (the greater the index, the bigger the region). Ideally this region should be slightly larger than the visible area at the zoom level at which the annotations appear.
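Alternatively (a sketch of my own, not from the answer above), you can let the map do the region math by filtering against visibleMapRect directly; allAnnotations and the 0.2 span threshold below are illustrative:

func mapView(_ mapView: MKMapView, regionDidChangeAnimated animated: Bool) {
    // Only show annotations when zoomed in far enough (example threshold).
    guard mapView.region.span.latitudeDelta < 0.2 else {
        mapView.removeAnnotations(mapView.annotations)
        return
    }
    let visible = mapView.visibleMapRect
    let onScreen = allAnnotations.filter { visible.contains(MKMapPoint($0.coordinate)) }
    mapView.removeAnnotations(mapView.annotations)
    mapView.addAnnotations(onScreen)
}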
I was wondering how one can get the DragGesture velocity?
I understand the formula works and how to manually get it, but when I do so it is nowhere near what Apple returns (at least sometimes it's very different).
I have the following code snippet
struct SecondView: View {
    @State private var lastValue: DragGesture.Value?

    private var dragGesture: some Gesture {
        DragGesture()
            .onChanged { (value) in
                self.lastValue = value
            }
            .onEnded { (value) in
                if let lastValue = self.lastValue {
                    let timeDiff = value.time.timeIntervalSince(lastValue.time)
                    print("Actual \(value)") // <- A
                    print("Calculated: \((value.translation.height - lastValue.translation.height) / timeDiff)") // <- B
                }
            }
    }

    var body: some View {
        Color.red
            .frame(width: 50, height: 50)
            .gesture(self.dragGesture)
    }
}
From above:
A will output something like Value(time: 2001-01-02 16:37:14 +0000, location: (250.0, -111.0), startLocation: (249.66665649414062, 71.0), velocity: SwiftUI._Velocity<__C.CGSize>(valuePerSecond: (163.23212105439427, 71.91841849340494)))
B will output something like Calculated: 287.6736739736197
Note: from A, I am looking at the second value in valuePerSecond, which is the y velocity.
Depending on how you drag, the results will be either different or the same. Apple provides the velocity as a property just like .startLocation and .endLocation, but unfortunately there is no public way for me to access it (at least none that I know of), so I have to calculate it myself. Theoretically my calculations are correct, but they are very different from Apple's. So what is the problem here?
This is another take on extracting the velocity from DragGesture.Value. It’s a bit more robust than parsing the debug description as suggested in the other answer but still has the potential to break.
import SwiftUI

extension DragGesture.Value {

    /// The current drag velocity.
    ///
    /// While the velocity value is contained in the value, it is not publicly available and we
    /// have to apply tricks to retrieve it. The following code accesses the underlying value via
    /// the `Mirror` type.
    internal var velocity: CGSize {
        let valueMirror = Mirror(reflecting: self)
        for valueChild in valueMirror.children {
            if valueChild.label == "velocity" {
                let velocityMirror = Mirror(reflecting: valueChild.value)
                for velocityChild in velocityMirror.children {
                    if velocityChild.label == "valuePerSecond" {
                        if let velocity = velocityChild.value as? CGSize {
                            return velocity
                        }
                    }
                }
            }
        }

        fatalError("Unable to retrieve velocity from \(Self.self)")
    }
}
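Usage is then just value.velocity inside the gesture callbacks. As a hedge, note that newer SwiftUI releases (iOS 17 and later, as far as I know) expose a public velocity property on DragGesture.Value, which makes this workaround unnecessary there and would collide with the extension above.

// Usage sketch for the Mirror-based extension above.
DragGesture()
    .onEnded { value in
        print("Velocity: \(value.velocity) points/second") // CGSize
    }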
Just like this:
let sss = "\(value)"
// Intercept the string
let start = sss.range(of: "valuePerSecond: (")
let end = sss.range(of: ")))")
let arr = String(sss[(start!.upperBound)..<(end!.lowerBound)]).components(separatedBy: ",")
print(Double(arr.first!)!)
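A slightly safer take on the same trick (my sketch; it still depends on the private debug-description format, so it returns nil instead of crashing when the format changes):

// Parses both velocity components out of the debug description.
func velocity(from value: DragGesture.Value) -> CGSize? {
    let description = "\(value)"
    guard let start = description.range(of: "valuePerSecond: ("),
          let end = description.range(of: ")", range: start.upperBound..<description.endIndex) else {
        return nil
    }
    let parts = description[start.upperBound..<end.lowerBound]
        .components(separatedBy: ", ")
        .compactMap { Double($0) }
    guard parts.count == 2 else { return nil }
    return CGSize(width: parts[0], height: parts[1])
}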
The video I'm playing does not take up the entire area of the UIView (named videoView), which has a gray color (see the iPhone 7 Plus simulator screenshot).
Most of the answers claim that I need to either set the frame to the bounds (of the UIView) or set videoGravity to AVLayerVideoGravityResizeAspectFill. I've tried both, but for some reason it still does not fill the space entirely.
var avPlayer: AVPlayer!
var avPlayerLayer: AVPlayerLayer!
var paused: Bool = false

@IBOutlet weak var videoView: UIView!

override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view.

    let theURL = Bundle.main.url(forResource: "HOTDOG", withExtension: "mp4")

    avPlayer = AVPlayer(url: theURL!)
    avPlayerLayer = AVPlayerLayer(player: avPlayer)
    avPlayerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    avPlayerLayer.frame = videoView.layer.bounds
    videoView.layer.insertSublayer(avPlayerLayer, at: 0)
}
Any help will be appreciated. :)
After a long time, I found the answer.
The code below should be moved into viewDidAppear(), like so:
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)

    // Resizing the frame
    avPlayerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    avPlayerLayer.frame = videoView.layer.bounds
    videoView.layer.insertSublayer(avPlayerLayer, at: 0)

    avPlayer.play()
    paused = false
}
The layout was designed for the iPhone SE (small screen), so when it was tested on a bigger screen, the layer took the originally set size from Auto Layout and shaped the video according to that. Moving the code into viewDidAppear() means the frame is set after Auto Layout has already resized the view for the actual screen.
Just move the frame line avPlayerLayer.frame = videoView.layer.bounds into viewDidLayoutSubviews like this:
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    avPlayerLayer.frame = videoView.layer.bounds
}
The rest should stay in the viewDidLoad function, just like you did.
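A small refinement worth noting (my addition, not from the answer above): layer frame changes made in viewDidLayoutSubviews can animate implicitly during rotation, and wrapping the update in a CATransaction suppresses that:

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // Disable implicit animation so the layer snaps to the new size during rotation.
    CATransaction.begin()
    CATransaction.setDisableActions(true)
    avPlayerLayer.frame = videoView.layer.bounds
    CATransaction.commit()
}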
Is there a more efficient way to animate text shivering with a typewriter effect, all in one SKLabelNode? I'm trying to achieve the effect in games like Undertale, where the words appear typewriter-style while shivering at the same time.
So far I've only been able to achieve it like this:
class TextEffectScene: SKScene {

    var typeWriterLabel: SKLabelNode?
    var shiveringText_L: SKLabelNode?
    var shiveringText_O: SKLabelNode?
    var shiveringText_S: SKLabelNode?
    var shiveringText_E: SKLabelNode?
    var shiveringText_R: SKLabelNode?
    var button: SKSpriteNode?

    override func sceneDidLoad() {
        button = self.childNode(withName: "//button") as? SKSpriteNode
        self.scaleMode = .aspectFill // Very important for ensuring that the screen sizes do not change after transitioning to other scenes
        typeWriterLabel = self.childNode(withName: "//typeWriterLabel") as? SKLabelNode
        shiveringText_L = self.childNode(withName: "//L") as? SKLabelNode
        shiveringText_O = self.childNode(withName: "//O") as? SKLabelNode
        shiveringText_S = self.childNode(withName: "//S") as? SKLabelNode
        shiveringText_E = self.childNode(withName: "//E") as? SKLabelNode
        shiveringText_R = self.childNode(withName: "//R") as? SKLabelNode
    }

    // Typewriter-style animation
    override func didMove(to view: SKView) {
        fireTyping()
        shiveringText_L?.run(SKAction.repeatForever(SKAction.init(named: "shivering")!))
        shiveringText_O?.run(SKAction.repeatForever(SKAction.init(named: "shivering2")!))
        shiveringText_S?.run(SKAction.repeatForever(SKAction.init(named: "shivering3")!))
        shiveringText_E?.run(SKAction.repeatForever(SKAction.init(named: "shivering4")!))
        shiveringText_R?.run(SKAction.repeatForever(SKAction.init(named: "shivering5")!))
    }

    let myText = Array("You just lost the game :)".characters)
    var myCounter = 0
    var timer: Timer?

    func fireTyping() {
        typeWriterLabel?.text = ""
        timer = Timer.scheduledTimer(timeInterval: 0.5, target: self, selector: #selector(TextEffectScene.typeLetter), userInfo: nil, repeats: true)
    }

    func typeLetter() {
        if myCounter < myText.count {
            typeWriterLabel?.text = (typeWriterLabel?.text!)! + String(myText[myCounter])
            // let randomInterval = Double((arc4random_uniform(8) + 1)) / 20 // Random typing speed
            timer?.invalidate()
            timer = Timer.scheduledTimer(timeInterval: 0.2, target: self, selector: #selector(TextEffectScene.typeLetter), userInfo: nil, repeats: false)
        } else {
            timer?.invalidate() // Stop the timer
        }
        myCounter += 1
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touch = touches.first
        if let location = touch?.location(in: self) {
            if (button?.contains(location))! {
                print("doggoSceneLoaded")
                let transition = SKTransition.fade(withDuration: 0.5)
                let newScene = SKScene(fileNamed: "GameScene") as! GameScene
                self.view?.presentScene(newScene, transition: transition)
            }
        }
    }
}
As you can see, I had to animate each individual label node in the word "loser" to create this effect.
For those who may be interested: for Swift 4, I've put together a GitHub project around this specific request, called SKAdvancedLabelNode.
You can find all the sources here.
Usage:
// horizontal alignment: left
var advLabel = SKAdvancedLabelNode(fontNamed: "Optima-ExtraBlack")
advLabel.name = "advLabel"
advLabel.text = labelTxt
advLabel.fontSize = 20.0
advLabel.fontColor = .green
advLabel.horizontalAlignmentMode = .left
addChild(advLabel)
advLabel.position = CGPoint(x: frame.width / 2.5, y: frame.height * 0.70)
advLabel.sequentiallyBouncingZoom(delay: 0.3, infinite: true)
Something I have a lot of experience with... There is no way to do this properly outside of what you are already doing. My solution (for a text game) was to use NSAttributedString alongside Core Animation, which allows you to have crazy good animations over UILabels, then adding the UILabels in over top of SpriteKit.
I was working on a better SKLabel subclass, but ultimately gave up on it after I realized that there was no way to get the kerning right without a lot more work.
It is possible to use an SKSpriteNode and have a view as a texture, then you would just update the texture every frame, but this requires even more timing / resources.
The best way to do this is in the SpriteKit editor, the way you have been doing it. If you need a lot of animated text, then you need to use UIKit and NSAttributedString alongside Core Animation for fancy things.
This is a huge, massive oversight IMO and is a considerable drawback to SpriteKit. SKLabelNode SUCKS.
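For what it's worth, a minimal sketch of the UILabel-plus-Core-Animation route described above (one label per character; the key path and animation properties are standard Core Animation, everything else is illustrative):

import UIKit

// Each character gets its own UILabel overlaid on the SpriteKit view,
// with a small repeating vertical jitter applied to its layer.
func shiverAnimation() -> CAAnimation {
    let shiver = CAKeyframeAnimation(keyPath: "position.y")
    shiver.values = [0, -2, 0, 2, 0]   // points of vertical offset
    shiver.duration = 0.15
    shiver.isAdditive = true           // offsets are relative to the laid-out position
    shiver.repeatCount = .infinity
    return shiver
}

// Usage, for each character label:
// label.layer.add(shiverAnimation(), forKey: "shiver")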
As I said in a comment, you can subclass SKNode and use it to generate your labels for each character. You then store the labels in an array for future reference.
I've thrown something together quickly and it works pretty well. I had to play a little bit with positioning so it looks decent, because spaces were a bit too small. Also, the horizontal alignment of each label has to be .left, or else it will all be crooked.
Anyway, it's super easy to use! Go give it a try!
Here is a link to the gist I just created.
https://gist.github.com/sonoblaise/e3e1c04b57940a37bb9e6d9929ccce27