SwiftUI: image size remains the same when zooming the background

I am using part of the following repo: https://github.com/pd95/CS193p-EmojiArt
I have modified some of the code, because I am using images instead of emojis, but I can't figure out the sizing part.
The emojis use a single size value for both width and height, but my images use different values for width and height (all images share the same dimensions).
When I zoom the page, the images do not resize.
I haven't changed the size or zoom logic from the mentioned repo.
Does anybody have an idea how I can fix this?
Updating with example code.
The size is stored as an Int:
var size: Int
There is a scaleInstrument function:
func scaleInstrument(_ instrument: StageManager.Instrument, by scale: CGFloat) {
    if let index = stageManager.instruments.firstIndex(matching: instrument) {
        stageManager.instruments[index].size = Int((CGFloat(stageManager.instruments[index].size) * scale).rounded(.toNearestOrEven))
    }
}
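Worth noting: because size is stored as an Int, every zoom rounds to a whole number, so repeated scaling can drift away from the original size. A minimal plain-Swift sketch of the same rounding, outside any SwiftUI context (scaledSize is a hypothetical name, not from the repo):

```swift
import Foundation

// Hypothetical standalone version of the rounding used in scaleInstrument,
// to show that Int storage can drift under repeated zooming.
func scaledSize(_ size: Int, by scale: Double) -> Int {
    Int((Double(size) * scale).rounded(.toNearestOrEven))
}
```

For example, halving 70 twice gives 35 and then 18 (17.5 rounds to the even neighbor), and doubling back twice lands on 72 rather than the original 70.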
And the zoomScale / zoomGesture functions:
@GestureState private var gestureZoomScale: CGFloat = 1.0

private var zoomScale: CGFloat {
    document.steadyStateZoomScale * (hasSelection ? 1 : gestureZoomScale)
}

private func zoomScale(for instrument: StageManager.Instrument) -> CGFloat {
    if isInstrumentSelected(instrument) {
        return document.steadyStateZoomScale * gestureZoomScale
    } else {
        return zoomScale
    }
}

private func zoomGesture() -> some Gesture {
    MagnificationGesture()
        .updating($gestureZoomScale) { latestGestureScale, gestureZoomScale, _ in
            gestureZoomScale = latestGestureScale
        }
        .onEnded { finalGestureScale in
            if self.hasSelection {
                self.selectedInstrumentIDs.forEach { instrumentId in
                    if let instrument = self.document.instruments.first(where: { $0.id == instrumentId }) {
                        self.document.scaleInstrument(instrument, by: finalGestureScale)
                    }
                }
            } else {
                self.document.steadyStateZoomScale *= finalGestureScale
            }
        }
}
I hope this is sufficient to explain the issue I have.

I managed to fix this with the following code change:
From:
.frame(width: 140, height: 70)
To:
.frame(width: 140 * self.zoomScale(for: instrument), height: 70 * self.zoomScale(for: instrument))
Now the images resize according to the zoom.
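Generalizing that fix: the base dimensions are simply multiplied by the per-instrument zoom. A tiny helper along these lines (hypothetical, not from the repo) keeps the .frame modifier readable:

```swift
import Foundation

// Hypothetical helper: scales a fixed base size by the current zoom factor.
func zoomedFrame(width: Double, height: Double, zoom: Double) -> (width: Double, height: Double) {
    (width * zoom, height * zoom)
}
```

With it, the frame values fed to the modifier come from one place, e.g. zoomedFrame(width: 140, height: 70, zoom: self.zoomScale(for: instrument)).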

Related

Animate an animation's speed

I want the effect of a rotating record that has some ease to it whenever it starts and stops rotating. In the code below, the trigger is the isRotating Bool.
But I guess it's not possible to animate the speed of an animation?
struct PausableRotatingButtonStyle: ButtonStyle {
    var isRotating: Bool
    @State private var speed: Double = 1.0
    @State private var degrees: Double = 0.0

    var foreverAnimation: Animation {
        Animation.linear(duration: 2)
            .repeatForever(autoreverses: false)
            .speed(speed)
    }

    func makeBody(configuration: Configuration) -> some View {
        VStack {
            Text("speed: \(speed.description)")
            configuration.label
                .rotationEffect(Angle(degrees: degrees))
                .animation(foreverAnimation)
                .onAppear {
                    degrees = 360.0
                }
                .onChange(of: isRotating) { value in
                    withAnimation(.linear) {
                        speed = value ? 1 : 0
                    }
                }
        }
    }
}
struct TestRotatingButtonStyle_Previews: PreviewProvider {
    static var previews: some View {
        TestRotatingButtonStyle()
    }

    struct TestRotatingButtonStyle: View {
        @State private var isPlaying: Bool = true

        var body: some View {
            VStack {
                Button {
                    isPlaying.toggle()
                } label: {
                    Text("💿")
                        .font(.system(size: 200))
                }
                .buttonStyle(PausableRotatingButtonStyle(isRotating: isPlaying))
            }
        }
    }
}
If the .easeOut and .spring options don't cut it, you can make a timing curve. This function accepts x and y values for two control points (c0, c1).
These points define anchors that shape a cubic animation timing curve between the start and end points of your animation. (Just like drawing a path between 0,0 and 1,1. If this still sounds like gibberish, look at the objc.io link below for visuals.)
Image("wheel")
    .animation(.timingCurve(0, 0.5, 0.25, 1, duration: 2))
An ease-in-out type curve could be .timingCurve(0.17, 0.67, 0.83, 0.67)
https://cubic-bezier.com/#.42,0,.58,1
You can read more via the objc.io guys.
https://www.objc.io/blog/2019/09/26/swiftui-animation-timing-curves/
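To build intuition for what those four numbers do, here is a small plain-Swift sketch (a hypothetical TimingCurve type, not the SwiftUI API) that evaluates the same kind of cubic Bézier easing: given a normalized time x, it solves for the curve parameter by bisection and returns the eased progress.

```swift
import Foundation

// Hypothetical sketch: evaluates a cubic Bézier timing curve with fixed
// endpoints (0,0) and (1,1) and control points (c0x, c0y), (c1x, c1y),
// i.e. the same shape .timingCurve(c0x, c0y, c1x, c1y) describes.
struct TimingCurve {
    let c0x, c0y, c1x, c1y: Double

    // One axis of the cubic Bézier for parameter t in 0...1
    // (the p0 = 0 and p3 = 1 terms are folded in).
    private func bezier(_ t: Double, _ p1: Double, _ p2: Double) -> Double {
        let u = 1 - t
        return 3 * u * u * t * p1 + 3 * u * t * t * p2 + t * t * t
    }

    // Solve x(t) = x by bisection (x(t) is monotonic for valid control
    // points), then return the eased progress y(t).
    func progress(atTime x: Double) -> Double {
        var lo = 0.0, hi = 1.0
        for _ in 0..<60 {                       // 60 halvings: ample precision
            let mid = (lo + hi) / 2
            if bezier(mid, c0x, c1x) < x { lo = mid } else { hi = mid }
        }
        return bezier((lo + hi) / 2, c0y, c1y)
    }
}
```

For the cubic-bezier.com ease-in-out curve (0.42, 0, 0.58, 1), the progress at the halfway point is exactly 0.5 by symmetry, while early times map to smaller progress (the ease-in) and late times to larger progress (the ease-out).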
Edit re: comment on speed
While a timing curve is the intended API, you might be able to change speed in response to a binding from a GeometryEffect progress reporter.
In the animation below, I apply or remove the shadow beneath the ball based on the progress of the vertical sine-wave-travel GeometryEffect. The progress value is between 0 and 1. (Takeoff/flight/landing is achieved by another boolean and an animation curve for the x-axis offset.)
/// Ball
.modifier(BouncingWithProgressBinding(
    currentEffect: $currentEffectSize,  // % completion
    axis: .vertical,
    offsetMax: flightHeight,
    iterationProgress: iteration
).ignoredByLayout())

struct BouncingWithProgressBinding: GeometryEffect {
    @Binding var currentEffect: CGFloat  // % completion
    var axis: Axis
    var offsetMax: CGFloat
    var iterationProgress: Double

    var animatableData: Double {
        get { iterationProgress }
        set { iterationProgress = newValue }
    }

    func effectValue(size: CGSize) -> ProjectionTransform {
        let progress = iterationProgress - floor(iterationProgress)
        let curvePosition = cos(2 * progress * .pi)
        let effectSize = (curvePosition + 1) / (.pi * 1.25)
        let translation = offsetMax * CGFloat(1 - effectSize)
        DispatchQueue.main.async { currentEffect = CGFloat(1 - effectSize) }
        if axis == .horizontal {
            return ProjectionTransform(CGAffineTransform(translationX: translation, y: 0))
        } else {
            return ProjectionTransform(CGAffineTransform(translationX: 0, y: translation))
        }
    }
}
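The interesting part of effectValue is pure math, so it can be sketched and checked without SwiftUI. A hypothetical standalone function mirroring the body above:

```swift
import Foundation

// Hypothetical plain-Swift model of the math inside effectValue(size:):
// maps an iteration progress to a vertical offset and the reported effect value.
func bounceOffset(iterationProgress: Double, offsetMax: Double) -> (offset: Double, effect: Double) {
    let progress = iterationProgress - floor(iterationProgress)  // wrap to 0..<1
    let curvePosition = cos(2 * progress * .pi)                  // 1 at cycle ends, -1 mid-cycle
    let effectSize = (curvePosition + 1) / (.pi * 1.25)          // 0 at mid-cycle
    return (offsetMax * (1 - effectSize), 1 - effectSize)
}
```

At progress 0.5 the cosine reaches -1, effectSize becomes 0, and the translation equals the full offsetMax; the reported effect value is what the view observes through the binding to decide on the shadow.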

How to use geometry reader so that the view does not expand?

I have used GeometryReader like this:
GeometryReader { r in
    ScrollView {
        Text("SomeText").frame(width: r.size.width / 2)
    }
}
The problem is that the reader expands vertically, much like Spacer().
Is there any way I can make it not do this?
After googling around I found this answer here.
Create this new struct
struct SingleAxisGeometryReader<Content: View>: View {
    private struct SizeKey: PreferenceKey {
        static var defaultValue: CGFloat { 10 }
        static func reduce(value: inout CGFloat, nextValue: () -> CGFloat) {
            value = max(value, nextValue())
        }
    }

    @State private var size: CGFloat = SizeKey.defaultValue
    var axis: Axis = .horizontal
    var alignment: Alignment = .center
    let content: (CGFloat) -> Content

    var body: some View {
        content(size)
            .frame(maxWidth: axis == .horizontal ? .infinity : nil,
                   maxHeight: axis == .vertical ? .infinity : nil,
                   alignment: alignment)
            .background(GeometryReader { proxy in
                Color.clear.preference(key: SizeKey.self,
                                       value: axis == .horizontal ? proxy.size.width : proxy.size.height)
            })
            .onPreferenceChange(SizeKey.self) { size = $0 }
    }
}
And then use it like this:
SingleAxisGeometryReader { width in  // for horizontal
    // stuff here
}
or
SingleAxisGeometryReader(axis: .vertical) { height in  // for vertical
    // stuff here
}
This approach is generic, so it works for either axis with no code change.
Since a background is always fitted to the actual view size, you can use this trick: put the GeometryReader in a background, which does not change the size of the view itself.
ScrollView {
    // ...
}.background(
    GeometryReader { r in
        // stuff
    }
)
It's somewhat unclear what you're actually trying to do with the views if it's not the code you gave at the top. Regarding that code, though, you can swap the positions of the GeometryReader and the ScrollView. What the GeometryReader does is find the frame of the available space, and it fills it. Inside a ScrollView, the actual height is 0. So, this:
ScrollView {
    GeometryReader { r in
        Text("SomeText").frame(width: r.size.width / 2)
    }
}

How can I read the width of a view inside an array of views without using GeometryReader?

This is my array of Rectangles. Can I read the size of these Rectangles, for example the width, without using GeometryReader?
var body: some View {
    let arrayOfRec = [Rectangle().frame(width: 100, height: 100), Rectangle().frame(width: 200, height: 200)]
    VStack {
        ForEach(0..<arrayOfRec.count) { index in
            arrayOfRec[index]
        }
    }
}
Unfortunately, it does not seem possible to get the width and height of the Rectangle without a GeometryReader in pure SwiftUI. I tried the methods attached to that Shape, but none of them yielded the size.
I don't know what the use case is, but it is possible to make a custom object that holds the width and height of the rectangle.
struct Rectangle_TEST {
    var rectangle: Rectangle
    var width: CGFloat
    var height: CGFloat

    init(width: CGFloat, height: CGFloat) {
        self.width = width
        self.height = height
        self.rectangle = Rectangle()
    }

    func getView(color: Color) -> some View {
        return self.rectangle
            .fill(color)
            .frame(width: self.width, height: self.height)
    }
}

struct ContentView: View {
    var body: some View {
        let arrayOfRec: [Rectangle_TEST] = [Rectangle_TEST(width: 100, height: 100), Rectangle_TEST(width: 200, height: 200)]
        VStack {
            ForEach(0..<arrayOfRec.count) { index in
                Text("Width: \(arrayOfRec[index].width), Height: \(arrayOfRec[index].height)")
                arrayOfRec[index].getView(color: .red)
            }
        }
    }
}
Above, I made a custom object that stores the width and height so they can be accessed individually, plus a function that returns a view to which you can apply whatever modifiers are needed. But currently, I don't think it is possible to get the dimensions without a GeometryReader in SwiftUI.

How to display an image for one second in SwiftUI?

I want to show an image for one second when the player achieves the goal. I thought about using an alert, but it would slow down the game. I just want the image to stay at the top of the screen for one second and then disappear until the next achievement.
Example code is below:
var TapNumber = 0

func ScoreUp() {
    TapNumber += 1
    if TapNumber == 100 {
        showImage()
    }
}

func showImage() {
    // this is the function I want to create, but I do not know how:
    // show image("YouEarnedAPointImage") for one second
}
Here is a demo of a possible approach:
struct DemoShowImage1Sec: View {
    @State private var showingImage = false

    var body: some View {
        ZStack {
            VStack {
                Text("Main Content")
                Button("Simulate") { self.showImage() }
            }
            if showingImage {
                Image(systemName: "gift.fill")
                    .resizable()
                    .frame(width: 100, height: 100)
                    .background(Color.yellow)
            }
        }
    }

    private func showImage() {
        self.showingImage = true
        DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
            self.showingImage = false
        }
    }
}

SwiftUI: Drawing rectangles around elements recognized with Firebase ML Kit

I am currently trying to draw boxes around the text recognized by Firebase ML Kit, on top of the image.
So far I have not had success: I can't see any boxes at all, as they are all rendered offscreen. I was looking at this article for reference: https://medium.com/swlh/how-to-draw-bounding-boxes-with-swiftui-d93d1414eb00 and also at this project: https://github.com/firebase/quickstart-ios/blob/master/mlvision/MLVisionExample/ViewController.swift
This is the view where the boxes should be shown:
struct ImageScanned: View {
    var image: UIImage
    @Binding var rectangles: [CGRect]
    @State var viewSize: CGSize = .zero

    var body: some View {
        // TODO: fix scaling
        ZStack {
            Image(uiImage: image)
                .resizable()
                .scaledToFit()
                .overlay(
                    GeometryReader { geometry in
                        ZStack {
                            ForEach(self.transformRectangles(geometry: geometry)) { rect in
                                Rectangle()
                                    .path(in: CGRect(
                                        x: rect.x,
                                        y: rect.y,
                                        width: rect.width,
                                        height: rect.height))
                                    .stroke(Color.red, lineWidth: 2.0)
                            }
                        }
                    }
                )
        }
    }

    private func transformRectangles(geometry: GeometryProxy) -> [DetectedRectangle] {
        var rectangles: [DetectedRectangle] = []
        let imageViewWidth = geometry.frame(in: .global).size.width
        let imageViewHeight = geometry.frame(in: .global).size.height
        let imageWidth = image.size.width
        let imageHeight = image.size.height
        let imageViewAspectRatio = imageViewWidth / imageViewHeight
        let imageAspectRatio = imageWidth / imageHeight
        let scale = (imageViewAspectRatio > imageAspectRatio)
            ? imageViewHeight / imageHeight : imageViewWidth / imageWidth
        let scaledImageWidth = imageWidth * scale
        let scaledImageHeight = imageHeight * scale
        let xValue = (imageViewWidth - scaledImageWidth) / CGFloat(2.0)
        let yValue = (imageViewHeight - scaledImageHeight) / CGFloat(2.0)
        var transform = CGAffineTransform.identity.translatedBy(x: xValue, y: yValue)
        transform = transform.scaledBy(x: scale, y: scale)
        for rect in self.rectangles {
            let rectangle = rect.applying(transform)
            rectangles.append(DetectedRectangle(width: rectangle.width, height: rectangle.height, x: rectangle.minX, y: rectangle.minY))
        }
        return rectangles
    }
}
struct DetectedRectangle: Identifiable {
    var id = UUID()
    var width: CGFloat = 0
    var height: CGFloat = 0
    var x: CGFloat = 0
    var y: CGFloat = 0
}
This is the view it is nested in:
struct StartScanView: View {
    @State var showCaptureImageView: Bool = false
    @State var image: UIImage? = nil
    @State var rectangles: [CGRect] = []

    var body: some View {
        ZStack {
            if showCaptureImageView {
                CaptureImageView(isShown: $showCaptureImageView, image: $image)
            } else {
                VStack {
                    Button(action: {
                        self.showCaptureImageView.toggle()
                    }) {
                        Text("Start Scanning")
                    }
                    // show here View with rectangles on top of image
                    if self.image != nil {
                        ImageScanned(image: self.image ?? UIImage(), rectangles: $rectangles)
                    }
                    Button(action: {
                        self.processImage()
                    }) {
                        Text("Process Image")
                    }
                }
            }
        }
    }

    func processImage() {
        let scaledImageProcessor = ScaledElementProcessor()
        if image != nil {
            scaledImageProcessor.process(in: image!) { text in
                for block in text.blocks {
                    for line in block.lines {
                        for element in line.elements {
                            self.rectangles.append(element.frame)
                        }
                    }
                }
            }
        }
    }
}
The tutorial's calculation made the rectangles too big, and the sample project's made them too small (similarly for the height).
Unfortunately, I can't find out which size Firebase uses to determine the element's frame.
Without calculating the width and height at all, the rectangles seem to have roughly (though not exactly) the size they are supposed to have, which leads me to assume that ML Kit's size calculation is not done in proportion to image.size.width/height.
This is how I changed the ForEach loop:
Image(uiImage: uiimage!).resizable().scaledToFit().overlay(
    GeometryReader { (geometry: GeometryProxy) in
        ForEach(self.blocks, id: \.self) { (block: VisionTextBlock) in
            Rectangle()
                .path(in: block.frame.applying(self.transformMatrix(geometry: geometry, image: self.uiimage!)))
                .stroke(Color.purple, lineWidth: 2.0)
        }
    }
)
Instead of passing the x, y, width, and height, I am passing the return value of the transformMatrix function to the path function.
My transformMatrix function is:
private func transformMatrix(geometry: GeometryProxy, image: UIImage) -> CGAffineTransform {
    let imageViewWidth = geometry.size.width
    let imageViewHeight = geometry.size.height
    let imageWidth = image.size.width
    let imageHeight = image.size.height
    let imageViewAspectRatio = imageViewWidth / imageViewHeight
    let imageAspectRatio = imageWidth / imageHeight
    let scale = (imageViewAspectRatio > imageAspectRatio)
        ? imageViewHeight / imageHeight
        : imageViewWidth / imageWidth
    // The image view's `contentMode` is `scaleAspectFit`, which scales the image to fit
    // the image view while maintaining the aspect ratio. Multiply by `scale` to map
    // from the image's original size.
    let scaledImageWidth = imageWidth * scale
    let scaledImageHeight = imageHeight * scale
    let xValue = (imageViewWidth - scaledImageWidth) / CGFloat(2.0)
    let yValue = (imageViewHeight - scaledImageHeight) / CGFloat(2.0)
    var transform = CGAffineTransform.identity.translatedBy(x: xValue, y: yValue)
    transform = transform.scaledBy(x: scale, y: scale)
    return transform
}
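Since the transform above is plain arithmetic, the aspect-fit mapping can be sanity-checked in isolation. A hypothetical plain-Swift sketch of the same math (Doubles instead of CGFloat, with the transform applied as scale-then-translate, matching translatedBy followed by scaledBy):

```swift
import Foundation

// Hypothetical standalone version of the aspect-fit math in transformMatrix,
// so the image-to-view coordinate mapping can be checked without UIKit.
func aspectFitMapping(viewWidth: Double, viewHeight: Double,
                      imageWidth: Double, imageHeight: Double)
    -> (scale: Double, tx: Double, ty: Double) {
    let viewAspect = viewWidth / viewHeight
    let imageAspect = imageWidth / imageHeight
    // Fit the limiting dimension while preserving the aspect ratio.
    let scale = viewAspect > imageAspect ? viewHeight / imageHeight
                                         : viewWidth / imageWidth
    // Center the scaled image inside the view.
    let tx = (viewWidth - imageWidth * scale) / 2
    let ty = (viewHeight - imageHeight * scale) / 2
    return (scale, tx, ty)
}

// Apply the mapping to a point in image coordinates: scale first, then translate.
func apply(_ m: (scale: Double, tx: Double, ty: Double),
           x: Double, y: Double) -> (x: Double, y: Double) {
    (x * m.scale + m.tx, y * m.scale + m.ty)
}
```

For a 1000×500 image in a 200×200 view, the scale is 0.2 and the image is centered vertically with a 50-point offset, so an element at (100, 100) in image coordinates lands at (20, 70) in view coordinates.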
ML Kit has a QuickStart app showing exactly what you are trying to do: recognizing the text and drawing a rectangle around the text. Here is the Swift code:
https://github.com/firebase/quickstart-ios/tree/master/mlvision/MLVisionExample