How to position CGRect paths neatly - SwiftUI

In my macOS app I create an array of objects based on the Cell class below:
import SwiftUI

class Cell {
    var cell = Path()
    var live = false

    private let gridOrigin = (CGFloat(10), CGFloat(10))
    private var x = CGFloat(0)
    private var y = CGFloat(0)

    func createCell(column: CGFloat, row: CGFloat, cellSize: CGFloat) {
        x = gridOrigin.0 + column * cellSize
        y = gridOrigin.1 + row * cellSize
        cell.addRect(CGRect(x: x, y: y, width: cellSize, height: cellSize))
        cell.closeSubpath()
        live = Bool.random()
    }
}
and I use this code to display them:
VStack(spacing: 0) {
    ForEach(0..<50) { column in
        HStack(spacing: 0) {
            ForEach(0..<50) { row in
                let cellIndex = column * 50 + row
                if cells[cellIndex].live {
                    cells[cellIndex].cell.fill().foregroundColor(.green)
                } else {
                    cells[cellIndex].cell.fill().foregroundColor(.blue)
                }
            }
        }
    }
}
Unfortunately the rectangles are drawn with spaces between them (horizontal and vertical), even though the individual rect coordinates are calculated and set properly. Note that the spaces change dynamically as I resize the window. I tried to use LazyVGrid but it didn't help.
Do you have any hint how to draw rects without spaces?
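A hedged guess at the cause: each Path already encodes an absolute position (gridOrigin plus column/row offsets), yet the paths are also laid out by the nested HStack/VStack containers, so the stack layout is added on top of the baked-in coordinates and drifts as the window resizes. A minimal sketch of one way around this, drawing all cells in a single ZStack so every path shares one coordinate space (the 50×50 = 2,500-element cells array is taken from the code above):

ZStack {
    // All 2,500 paths share one coordinate space, so the absolute
    // CGRect baked into each path lands exactly where it was computed.
    ForEach(0..<2500, id: \.self) { index in
        cells[index].cell
            .fill(cells[index].live ? Color.green : Color.blue)
    }
}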

Related

Swift Charts - Is there a way to query the y value of a LineMark at a specific x value?

So I have a chart displaying line data at equal x intervals, with varying y values - something like this:
let ints = Array(0...10)
Chart {
    ForEach(ints, id: \.self) { int in
        let someRandomYValue = Int.random(in: 0...50)
        LineMark(
            x: .value("x", int),
            y: .value("y", someRandomYValue),
            series: .value("Series", "Line")
        )
    }
}
And I need to put an overlay exactly in the middle of two consecutive points. The overlay needs to touch the line.
So I need the x and the y value to place the overlay.
It's easy enough to get the x value - I just need to split the difference between my two x values. But I can't figure out how to get the y value.
Is there any way to "query" the y value of a line mark at a specified x value? Maybe something like this:
let queriedY = LineMark(series: "Line").yValue(at: 8.5)
You can use .chartOverlay, which gives you a ChartProxy that can be queried for the position of chart values.
Here is an example:
import Charts
import SwiftUI

struct ContentView: View {
    let data: [(Int, Int)] = {
        (0...15).map { ($0, Int.random(in: 0...50)) }
    }()

    @State private var selectedX = 6

    var body: some View {
        VStack {
            Chart {
                ForEach(data.indices, id: \.self) { i in
                    let (x, y) = data[i]
                    LineMark(
                        x: .value("x", x),
                        y: .value("y", y)
                    )
                }
            }
            .frame(height: 300)
            .chartOverlay { proxy in
                let pos1 = proxy.position(for: (x: selectedX, y: data[selectedX].1)) ?? .zero
                let pos2 = proxy.position(for: (x: selectedX + 1, y: data[selectedX + 1].1)) ?? .zero
                Text("x: \(selectedX), y: \(data[selectedX].1)")
                    .padding()
                    .background(.gray.opacity(0.5))
                    .position(x: (pos1.x + pos2.x) / 2, y: (pos1.y + pos2.y) / 2)
            }
            // data.count - 2 keeps the selectedX + 1 lookup above in bounds
            Stepper("", value: $selectedX, in: 0...data.count - 2)
        }
        .padding()
    }
}
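If you need the y value at an arbitrary x between two data points (as in the hypothetical yValue(at: 8.5) above), a minimal sketch of plain linear interpolation over the data array; the helper name is mine, not a Charts API:

// Hypothetical helper: linearly interpolate y between the two data
// points surrounding a fractional x. Assumes `data` is sorted by x.
func interpolatedY(at x: Double, in data: [(Int, Int)]) -> Double? {
    guard let upper = data.firstIndex(where: { Double($0.0) >= x }) else { return nil }
    guard upper > 0 else { return Double(data[0].1) }
    let (x0, y0) = data[upper - 1]
    let (x1, y1) = data[upper]
    let t = (x - Double(x0)) / Double(x1 - x0)
    return Double(y0) + t * Double(y1 - y0)
}

The interpolated y could then be fed back through proxy.position(for:) to get a screen point, the same way the integer values are used above.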

SwiftUI: image size remains the same when zooming background

I am using part of the following repo: https://github.com/pd95/CS193p-EmojiArt
I have modified some of the code, as I am using images instead of emojis, but I can't figure out the sizing part.
The emojis use a single number for both width and height, but my images use different values for width and height (all images share the same dimensions).
When I zoom the page, the images do not resize.
I haven't changed the size or zoom logic from the mentioned repo.
Does somebody have an idea how I can fix that?
Updating with example code
Size is set as an Int:
var size: Int
There is a scaleInstrument function:
func scaleInstrument(_ instrument: StageManager.Instrument, by scale: CGFloat) {
    if let index = stageManager.instruments.firstIndex(matching: instrument) {
        stageManager.instruments[index].size = Int((CGFloat(stageManager.instruments[index].size) * scale).rounded(.toNearestOrEven))
    }
}
And the zoomScale / zoomGesture functions:
@GestureState private var gestureZoomScale: CGFloat = 1.0

private var zoomScale: CGFloat {
    document.steadyStateZoomScale * (hasSelection ? 1 : gestureZoomScale)
}

private func zoomScale(for instrument: StageManager.Instrument) -> CGFloat {
    if isInstrumentSelected(instrument) {
        return document.steadyStateZoomScale * gestureZoomScale
    } else {
        return zoomScale
    }
}

private func zoomGesture() -> some Gesture {
    MagnificationGesture()
        .updating($gestureZoomScale) { latestGestureScale, gestureZoomScale, _ in
            gestureZoomScale = latestGestureScale
        }
        .onEnded { finalGestureScale in
            if self.hasSelection {
                self.selectedInstrumentIDs.forEach { instrumentId in
                    if let instrument = self.document.instruments.first(where: { $0.id == instrumentId }) {
                        self.document.scaleInstrument(instrument, by: finalGestureScale)
                    }
                }
            } else {
                self.document.steadyStateZoomScale *= finalGestureScale
            }
        }
}
I hope this is sufficient to explain the issue I have.
I managed to do this, using the following code change:
From:
.frame(width: 140, height: 70)
To:
.frame(width: 140 * self.zoomScale(for: instrument), height: 70 * self.zoomScale(for: instrument))
Now the images resize according to the zoom.
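An alternative sketch (an assumption of mine, not from the repo): scale the rendered view with .scaleEffect instead of recomputing the frame, which keeps the base frame constant. instrumentImage(for:) is a hypothetical helper standing in for whatever produces the image view:

// Hypothetical alternative: scale the view rather than its frame.
instrumentImage(for: instrument)
    .frame(width: 140, height: 70)
    .scaleEffect(self.zoomScale(for: instrument))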

Animate an animation's speed

I want the effect of a rotating record that has some ease to it whenever it starts and stops rotating. In the code below the trigger is the isRotating Bool.
But I guess it's not possible to animate the speed of an animation?
import SwiftUI

struct PausableRotatingButtonStyle: ButtonStyle {
    var isRotating: Bool

    @State private var speed: Double = 1.0
    @State private var degrees: Double = 0.0

    var foreverAnimation: Animation {
        Animation.linear(duration: 2)
            .repeatForever(autoreverses: false)
            .speed(speed)
    }

    func makeBody(configuration: Configuration) -> some View {
        VStack {
            Text("speed: \(speed.description)")
            configuration.label
                .rotationEffect(Angle(degrees: degrees))
                .animation(foreverAnimation)
                .onAppear {
                    degrees = 360.0
                }
                .onChange(of: isRotating) { value in
                    withAnimation(.linear) {
                        speed = value ? 1 : 0
                    }
                }
        }
    }
}
struct TestRotatingButtonStyle_Previews: PreviewProvider {
    static var previews: some View {
        TestRotatingButtonStyle()
    }

    struct TestRotatingButtonStyle: View {
        @State private var isPlaying: Bool = true

        var body: some View {
            VStack {
                Button {
                    isPlaying.toggle()
                } label: {
                    Text("💿")
                        .font(.system(size: 200))
                }
                .buttonStyle(PausableRotatingButtonStyle(isRotating: isPlaying))
            }
        }
    }
}
If .easeOut and .spring options don't cut it, you can make a timing curve. This function accepts x and y values for two control points (c0, c1).
These points define anchors that stretch a cubic animation timing curve between the start and end points of your animation. (Just like drawing a path between (0,0) and (1,1). If this still sounds like gibberish, look at the objc.io link below for visuals.)
Image("wheel")
.animation(.timingCurve(0, 0.5, 0.25, 1, duration: 2))
An ease-in-out type curve could be .timingCurve(0.17, 0.67, 0.83, 0.67)
https://cubic-bezier.com/#.42,0,.58,1
You can read more via the objc.io guys.
https://www.objc.io/blog/2019/09/26/swiftui-animation-timing-curves/
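For example, a minimal sketch applying the classic CSS ease-in-out curve (0.42, 0, 0.58, 1) from the cubic-bezier.com link above, reusing the degrees state from the question's button style:

withAnimation(.timingCurve(0.42, 0, 0.58, 1, duration: 2)) {
    degrees += 360  // one eased revolution of the record
}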
Edit re: comment on speed
While a timing curve is the intended API here, you might be able to change speed in response to a binding from a GeometryEffect progress reporter.
In the animation below, I apply or remove the shadow beneath the ball based on the progress of the vertical sine-wave-travel GeometryEffect. The progress value is between 0 and 1. (Takeoff/flight/landing is achieved by another boolean and an animation curve for the x-axis offset.)
/// Ball
.modifier(BouncingWithProgressBinding(
    currentEffect: $currentEffectSize,  // % completion
    axis: .vertical,
    offsetMax: flightHeight,
    iterationProgress: iteration
).ignoredByLayout())
struct BouncingWithProgressBinding: GeometryEffect {
    @Binding var currentEffect: CGFloat  // % completion
    var axis: Axis
    var offsetMax: CGFloat
    var iterationProgress: Double

    var animatableData: Double {
        get { iterationProgress }
        set { iterationProgress = newValue }
    }

    func effectValue(size: CGSize) -> ProjectionTransform {
        let progress = iterationProgress - floor(iterationProgress)
        let curvePosition = cos(2 * progress * .pi)
        let effectSize = (curvePosition + 1) / (.pi * 1.25)
        let translation = offsetMax * CGFloat(1 - effectSize)
        DispatchQueue.main.async { currentEffect = CGFloat(1 - effectSize) }
        if axis == .horizontal {
            return ProjectionTransform(CGAffineTransform(translationX: translation, y: 0))
        } else {
            return ProjectionTransform(CGAffineTransform(translationX: 0, y: translation))
        }
    }
}

SwiftUI: Drawing rectangles around elements recognized with Firebase ML Kit

I am currently trying to draw boxes around the text that was recognized with Firebase ML Kit, on top of the image.
So far I have not had success: I can't see any boxes at all, as they are all drawn offscreen. I was looking at this article for reference: https://medium.com/swlh/how-to-draw-bounding-boxes-with-swiftui-d93d1414eb00 and also at that project: https://github.com/firebase/quickstart-ios/blob/master/mlvision/MLVisionExample/ViewController.swift
This is the view where the boxes should be shown:
import SwiftUI

struct ImageScanned: View {
    var image: UIImage
    @Binding var rectangles: [CGRect]
    @State var viewSize: CGSize = .zero

    var body: some View {
        // TODO: fix scaling
        ZStack {
            Image(uiImage: image)
                .resizable()
                .scaledToFit()
                .overlay(
                    GeometryReader { geometry in
                        ZStack {
                            ForEach(self.transformRectangles(geometry: geometry)) { rect in
                                Rectangle()
                                    .path(in: CGRect(
                                        x: rect.x,
                                        y: rect.y,
                                        width: rect.width,
                                        height: rect.height))
                                    .stroke(Color.red, lineWidth: 2.0)
                            }
                        }
                    }
                )
        }
    }

    private func transformRectangles(geometry: GeometryProxy) -> [DetectedRectangle] {
        var rectangles: [DetectedRectangle] = []
        let imageViewWidth = geometry.frame(in: .global).size.width
        let imageViewHeight = geometry.frame(in: .global).size.height
        let imageWidth = image.size.width
        let imageHeight = image.size.height
        let imageViewAspectRatio = imageViewWidth / imageViewHeight
        let imageAspectRatio = imageWidth / imageHeight
        let scale = (imageViewAspectRatio > imageAspectRatio)
            ? imageViewHeight / imageHeight : imageViewWidth / imageWidth
        let scaledImageWidth = imageWidth * scale
        let scaledImageHeight = imageHeight * scale
        let xValue = (imageViewWidth - scaledImageWidth) / CGFloat(2.0)
        let yValue = (imageViewHeight - scaledImageHeight) / CGFloat(2.0)
        var transform = CGAffineTransform.identity.translatedBy(x: xValue, y: yValue)
        transform = transform.scaledBy(x: scale, y: scale)
        for rect in self.rectangles {
            let rectangle = rect.applying(transform)
            rectangles.append(DetectedRectangle(width: rectangle.width, height: rectangle.height, x: rectangle.minX, y: rectangle.minY))
        }
        return rectangles
    }
}

struct DetectedRectangle: Identifiable {
    var id = UUID()
    var width: CGFloat = 0
    var height: CGFloat = 0
    var x: CGFloat = 0
    var y: CGFloat = 0
}
This is the view it is nested in:
struct StartScanView: View {
    @State var showCaptureImageView: Bool = false
    @State var image: UIImage? = nil
    @State var rectangles: [CGRect] = []

    var body: some View {
        ZStack {
            if showCaptureImageView {
                CaptureImageView(isShown: $showCaptureImageView, image: $image)
            } else {
                VStack {
                    Button(action: {
                        self.showCaptureImageView.toggle()
                    }) {
                        Text("Start Scanning")
                    }
                    // show here View with rectangles on top of image
                    if self.image != nil {
                        ImageScanned(image: self.image ?? UIImage(), rectangles: $rectangles)
                    }
                    Button(action: {
                        self.processImage()
                    }) {
                        Text("Process Image")
                    }
                }
            }
        }
    }

    func processImage() {
        let scaledImageProcessor = ScaledElementProcessor()
        if image != nil {
            scaledImageProcessor.process(in: image!) { text in
                for block in text.blocks {
                    for line in block.lines {
                        for element in line.elements {
                            self.rectangles.append(element.frame)
                        }
                    }
                }
            }
        }
    }
}
The calculation from the tutorial made the rectangles too big, and the one from the sample project made them too small. (Similar for the height.)
Unfortunately I can't find out which size Firebase uses to determine the elements' frames.
This is how it looks: (screenshot omitted)
Without calculating the width and height at all, the rectangles seem to have about the size they are supposed to have (though not exactly), which gives me the assumption that ML Kit's size calculation is not done in proportion to image.size.height/width.
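One way to test that assumption (a debugging sketch of mine, using the element and image from processImage above): print the raw ML Kit frame next to the image's point and pixel sizes to see which coordinate space the frames are reported in.

// Debugging sketch: compare an ML Kit element frame against the
// image's size in points and in pixels (points * scale).
print("element frame:", element.frame)
print("image size (points):", image.size)
print("image size (pixels):", CGSize(width: image.size.width * image.scale,
                                     height: image.size.height * image.scale))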
This is how I changed the ForEach loop:
Image(uiImage: uiimage!)
    .resizable()
    .scaledToFit()
    .overlay(
        GeometryReader { (geometry: GeometryProxy) in
            ForEach(self.blocks, id: \.self) { (block: VisionTextBlock) in
                Rectangle()
                    .path(in: block.frame.applying(self.transformMatrix(geometry: geometry, image: self.uiimage!)))
                    .stroke(Color.purple, lineWidth: 2.0)
            }
        }
    )
Instead of passing the x, y, width and height, I am passing the return value of the transformMatrix function to the path function.
My transformMatrix function is:
private func transformMatrix(geometry: GeometryProxy, image: UIImage) -> CGAffineTransform {
    let imageViewWidth = geometry.size.width
    let imageViewHeight = geometry.size.height
    let imageWidth = image.size.width
    let imageHeight = image.size.height
    let imageViewAspectRatio = imageViewWidth / imageViewHeight
    let imageAspectRatio = imageWidth / imageHeight
    let scale = (imageViewAspectRatio > imageAspectRatio) ?
        imageViewHeight / imageHeight :
        imageViewWidth / imageWidth
    // Image view's `contentMode` is `scaleAspectFit`, which scales the image
    // to fit the size of the image view by maintaining the aspect ratio.
    // Multiply by `scale` to get the image's original size.
    let scaledImageWidth = imageWidth * scale
    let scaledImageHeight = imageHeight * scale
    let xValue = (imageViewWidth - scaledImageWidth) / CGFloat(2.0)
    let yValue = (imageViewHeight - scaledImageHeight) / CGFloat(2.0)
    var transform = CGAffineTransform.identity.translatedBy(x: xValue, y: yValue)
    transform = transform.scaledBy(x: scale, y: scale)
    return transform
}
and the output is: (screenshot omitted)
ML Kit has a QuickStart app showing exactly what you are trying to do: recognizing the text and drawing a rectangle around the text. Here is the Swift code:
https://github.com/firebase/quickstart-ios/tree/master/mlvision/MLVisionExample

Cut image in pieces Swift 3 / Ambiguous use of init(CGImage:scale:orientation:)

I am trying to follow this discussion. The suggested solution was written for Swift 2. I have updated it to Swift 3 and got the error "Ambiguous use of init(CGImage:scale:orientation:)" for the line:
images.append(UIImage(CGImage: tileCgImage, scale: image.scale, orientation: image.imageOrientation))
Have you any idea how to repair it? Here is the code:
import UIKit

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    func slice(image: UIImage, into howMany: Int) -> [UIImage] {
        let width: CGFloat
        let height: CGFloat
        switch image.imageOrientation {
        case .left, .leftMirrored, .right, .rightMirrored:
            width = image.size.height
            height = image.size.width
        default:
            width = image.size.width
            height = image.size.height
        }
        let tileWidth = Int(width / CGFloat(howMany))
        let tileHeight = Int(height / CGFloat(howMany))
        let scale = Int(image.scale)
        var images = [UIImage]()
        let cgImage = image.cgImage!
        var adjustedHeight = tileHeight
        var y = 0
        for row in 0 ..< howMany {
            if row == (howMany - 1) {
                adjustedHeight = Int(height) - y
            }
            var adjustedWidth = tileWidth
            var x = 0
            for column in 0 ..< howMany {
                if column == (howMany - 1) {
                    adjustedWidth = Int(width) - x
                }
                let origin = CGPoint(x: x * scale, y: y * scale)
                let size = CGSize(width: adjustedWidth * scale, height: adjustedHeight * scale)
                let tileCgImage = cgImage.cropping(to: CGRect(origin: origin, size: size))!
                images.append(UIImage(CGImage: tileCgImage, scale: image.scale, orientation: image.imageOrientation))
                x += tileWidth
            }
            y += tileHeight
        }
        return images
    }
}
Just wanted to make sure that Rob's comment gets highlighted, since that seems to be the correct answer. To add to it: as of Swift 4, the method signature remains what Rob mentioned.
Rob:
"In Swift 3, the first label to that function is now cgImage:, not CGImage:. See init(cgImage:scale:orientation:)."
For example:
let resultUIImg = UIImage(cgImage: someCGImg!, scale: origUIImg.scale, orientation: origUIImg.imageOrientation)
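Applied to the slice(image:into:) function from the question, the line inside the inner loop then becomes:

images.append(UIImage(cgImage: tileCgImage, scale: image.scale, orientation: image.imageOrientation))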
In Swift 3 you can use it like this:
let image: UIImage = UIImage(cgImage: cgImage)