Adding custom marks to a SwiftUI Chart using a Path

I want to use a custom shape that I have drawn with a path as the marks of my chart. So instead of points or bars or whatever marking the data, I want the mark to be a specific symbol I have drawn. I am quite new to Swift, so I apologise if I add unnecessary information.
So I have a graph that looks like this:
graph image
here is the code for it:
Chart {
    PointMark(
        x: .value("Boat", "Pace Boat"),
        y: .value("Pace Boat", viewModel.duration * (Double(viewModel.split) / paceBoat))
    )
    PointMark(
        x: .value("Boat", "You"),
        y: .value("Your Pace", viewModel.boat.last?.1.distance ?? 0)
    )
}
This is the code for the way I produce the shape:
struct BoatShape: Shape {
    func path(in rect: CGRect) -> Path {
        var path = Path()
        path.move(to: CGPoint(x: rect.midX, y: rect.minY))
        // ... do drawing blah
        path.closeSubpath()
        return path
    }
}
I have seen that I might be able to use the ChartSymbolShape protocol, but I can't find much on the internet about its implementation, and I am quite new to Swift. Any direction would be much appreciated.
Additionally:
I would like to be able to add text over the top of the shape, just sort of pinning it to the x and y of the shape on the graph. Although this is not the main priority.

Basically, what you should do is use any of the marks provided by Apple, but set its opacity to 0 so that it doesn't appear, and then attach an annotation to that mark in which you render your own view.
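A minimal sketch of that idea for one of the marks from the question, reusing BoatShape (the frame size, color, and label text are placeholders); the overlaid Text also covers the follow-up about pinning text to the mark's x and y:

Chart {
    PointMark(
        x: .value("Boat", "You"),
        y: .value("Your Pace", viewModel.boat.last?.1.distance ?? 0)
    )
    .opacity(0) // hide the built-in point
    .annotation(position: .overlay) {
        ZStack {
            BoatShape()
                .fill(.blue)                  // placeholder styling
                .frame(width: 30, height: 30) // placeholder size
            Text("You")                       // text pinned to the mark's position
                .font(.caption2)
        }
    }
}

The invisible PointMark keeps the chart's scales and axes working off the real data, while the overlay annotation draws whatever view you like at that data point.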

Related

SwiftUI scale path when Image size change

In my project I'm drawing a box over an image using a custom drawing gesture.
Image(uiImage: image!)
    .resizable()
    .scaledToFit()
    .onTouch(type: .all, limitToBounds: true, perform: updateLocation) // custom touch
    .overlay(
        ForEach(paths) { container in
            // draw the bounding box
            container.path
                .stroke(Color.red, lineWidth: 4)
        }
    )
My issue is the following:
when my image changes dimensions to fit the view, I want to scale the bounding box I drew up or down so that it keeps the same proportions.
Before the image scale change:
After the image scales up:
As you can see in the second screenshot, when the image frame becomes bigger, the path drawn on it changes location and dimensions.
How can I solve this issue?
I tried the following code, run when the picture frame dimensions change:
.onPreferenceChange(ViewRectKey.self) { rects in
    pictureFrame = rects.first
    // temp to apply scale
    guard let path = paths.last else { return }
    paths.removeAll()
    print("------------")
    let pa = path.path.applying(CGAffineTransform(scaleX: 5, y: 5)) // 5 just for testing, need to use correct scale factor
    let cont = PathContainer(id: UUID(), path: pa)
    paths.append(cont)
}
but I don't know, first, how to calculate what the scale factor should be, and second, how to keep the position where I initially drew my path.
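One possible approach, assuming the paths are stored in the image view's own coordinate space (origin at the top-left) and that pictureFrame holds the frame they were last drawn in, is to rescale every stored path by the ratio between the old and new frames whenever the preference key reports a change:

.onPreferenceChange(ViewRectKey.self) { rects in
    guard let newFrame = rects.first else { return }

    // Rescale the existing paths from the previous frame to the new one,
    // then remember the new frame for the next change.
    if let oldFrame = pictureFrame, oldFrame.width > 0, oldFrame.height > 0 {
        let scaleX = newFrame.width / oldFrame.width
        let scaleY = newFrame.height / oldFrame.height
        paths = paths.map { container in
            PathContainer(id: container.id,
                          path: container.path.applying(CGAffineTransform(scaleX: scaleX, y: scaleY)))
        }
    }
    pictureFrame = newFrame
}

Because the transform scales about the view's origin, both the size and the position of each box scale together, which should keep the box over the same part of the image as long as the image stays anchored at the same origin.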

Layering views using ForEach and offset using ZStack to create a stack of poker chips (SwiftUI)

I'm trying to create a stack of poker chips in SwiftUI. I've got a ChipView which is a view of a single chip. I'm now trying to layer ChipViews to create a stack of poker chips. I'm having trouble building a for loop that follows the rules of a view builder.
I want to do the following:
ZStack {
    for index in 0..<chipCount {
        ChipView()
            .offset(CGSize(width: index * 5, height: index * 5))
    }
}
But I know that I can't do that in a view builder. I know I should use ForEach, but I can't work out how. I could build an array of indexes and use ForEach(indexArray) { index in ... }, but that seems very clunky and unsatisfying.
I feel like this is trivially easy, but I couldn't find a solution from googling.
So... it turns out that it is trivially easy. I found the answer to my question right here: How to have a dynamic List of Views using SwiftUI
My code now looks like this:
ZStack {
    ForEach(0..<chipCount) { index in
        ChipView()
            .offset(CGSize(width: index * 5, height: index * 5))
    }
}
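One caveat, not covered above: if chipCount can change at runtime, SwiftUI warns about using a non-constant range in ForEach, and an explicit id is usually needed, along the lines of:

ZStack {
    // id: \.self lets SwiftUI diff the chips when chipCount changes at runtime
    ForEach(0..<chipCount, id: \.self) { index in
        ChipView()
            .offset(CGSize(width: index * 5, height: index * 5))
    }
}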

Problems with CIImageAccumulator from MTKView texture

I want to capture the output of an MTKView, via the view's texture, into a CIImageAccumulator to achieve a gradual painting build-up effect. The problem is that the accumulator seems to be messing with the color/alpha/colorspace of the original, as shown below:
From the image above, the way I capture the darker-looking brushstroke is via the view's currentDrawable.texture property:
lastSubStrokeCIImage = CIImage(mtlTexture: self.currentDrawable!.texture, options: nil)!.oriented(CGImagePropertyOrientation.downMirrored)
subStrokeUIView.image = UIImage(ciImage: lastSubStrokeCIImage)
Now, once I take the same image and pipe it into a CIImageAccumulator for later processing (I only do this once per drawing segment), I get the brighter-looking result shown in the upper portion of the attachment:
lazy var ciSubCurveAccumulator: CIImageAccumulator = {
    [unowned self] in
    return CIImageAccumulator(
        extent: CGRect(x: 0, y: 0,
                       width: self.frame.width * self.contentScaleFactor,
                       height: self.frame.height * self.window!.screen.scale),
        format: kCIFormatBGRA8)
}()!

ciSubCurveAccumulator.setImage(lastSubStrokeCIImage)
strokeUIView.image = UIImage(ciImage: ciSubCurveAccumulator.image())
I have tried using a variety of kCIFormats in the CIImageAccumulator definition, all to no avail. What is the CIImageAccumulator doing to mess with the original, and how can I fix it? Note that I intend to use ciSubCurveAccumulator to gradually build up a continuous brushstroke of consistent color. For simplicity of the question, I'm not showing the accumulating part. This problem is stopping me dead in my tracks.
Any suggestions would be kindly appreciated.
The problem came down to two things: one, I needed to set up the MTLRenderPipelineDescriptor() for "composite over" blending, and two, I needed to introduce a CIFilter to hold the intermediate composite over the accumulating CIImageAccumulator. This CIFilter also needed to be set up as CISourceOverCompositing. Below is a snippet of code that captures all of the above:
// set up the CIImageAccumulator
lazy var ciSubCurveAccumulator: CIImageAccumulator = {
    [unowned self] in
    return CIImageAccumulator(
        extent: CGRect(x: 0, y: 0,
                       width: self.frame.width * self.contentScaleFactor,
                       height: self.frame.height * self.window!.screen.scale),
        format: kCIFormatBGRA8)
}()!

let lastSubStrokeCIImage = CIImage(mtlTexture: self.currentDrawable!.texture, options: nil)!
    .oriented(CGImagePropertyOrientation.downMirrored)

let compositeFilter: CIFilter = CIFilter(name: "CISourceOverCompositing")!
compositeFilter.setValue(lastSubStrokeCIImage, forKey: kCIInputImageKey)                    // foreground image
compositeFilter.setValue(ciSubCurveAccumulator.image(), forKey: kCIInputBackgroundImageKey) // background image

let bboxChunkSubCurvesScaledAndYFlipped = CGRect(...) // capture the part of the texture that was drawn to

// comp bbox with latest updates
ciSubCurveAccumulator.setImage(compositeFilter.value(forKey: kCIOutputImageKey) as! CIImage,
                               dirtyRect: bboxChunkSubCurvesScaledAndYFlipped)
Wrapping the above bit of code inside draw() allows gradual painting to accumulate quite nicely. Hopefully this helps someone at some point.
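The MTLRenderPipelineDescriptor part mentioned above is not shown in the snippet. A rough sketch of what "composite over" blending on the pipeline's color attachment might look like (vertex/fragment setup omitted; the blend factors assume a non-premultiplied source, and a premultiplied texture would use .one for the source factors instead):

let pipelineDescriptor = MTLRenderPipelineDescriptor()
// vertex and fragment functions, sample count, etc. configured elsewhere

pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
pipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineDescriptor.colorAttachments[0].rgbBlendOperation = .add
pipelineDescriptor.colorAttachments[0].alphaBlendOperation = .add
pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .sourceAlpha
pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha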

AVCapture and zooming of previewLayer in Swift

I have a camera app which allows the user to both take pictures and record video. The iPhone is attached to a medical otoscope using an adapter, so the video that is captured is very small (about the size of a dime). I need to be able to zoom the video to fill the screen, but have not been able to figure out how to do so.
I found this answer here on SO that uses ObjC, but I have not had success in translating it to Swift. I am very close but am getting stuck. Here is my code for handling a UIPinchGestureRecognizer:
@IBAction func handlePinchGesture(sender: UIPinchGestureRecognizer) {
    var initialVideoZoomFactor: CGFloat = 0.0
    if (sender.state == UIGestureRecognizerState.began) {
        initialVideoZoomFactor = (captureDevice?.videoZoomFactor)!
    } else {
        let scale: CGFloat = min(max(1, initialVideoZoomFactor * sender.scale), 4)
        CATransaction.begin()
        CATransaction.setAnimationDuration(0.01)
        previewLayer?.transform = CGAffineTransform(scaleX: scale, y: scale)
        CATransaction.commit()
        if ((captureDevice?.lockForConfiguration()) != nil) {
            captureDevice?.videoZoomFactor = scale
            captureDevice?.unlockForConfiguration()
        }
    }
}
This line...
previewLayer?.transform = CGAffineTransform(scaleX: scale, y: scale)
... gives me the error "Cannot assign value of type 'CGAffineTransform' to type 'CATransform3D'". I'm trying to figure this out, but my attempts to fix it have been unfruitful.
Figured it out: Changed the problematic line to:
previewLayer?.setAffineTransform(CGAffineTransform(scaleX: scale, y: scale))
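For completeness, a sketch of the whole handler with that fix applied. Note two other points not part of the original question: initialVideoZoomFactor has to live outside the handler (e.g. as a property), otherwise it is reset to 0 on every gesture callback, and lockForConfiguration() throws, so it needs try:

// Stored as a property so it survives across gesture callbacks.
private var initialVideoZoomFactor: CGFloat = 1.0

@IBAction func handlePinchGesture(sender: UIPinchGestureRecognizer) {
    guard let device = captureDevice else { return }
    if sender.state == .began {
        initialVideoZoomFactor = device.videoZoomFactor
    } else {
        let scale = min(max(1, initialVideoZoomFactor * sender.scale), 4)
        CATransaction.begin()
        CATransaction.setAnimationDuration(0.01)
        previewLayer?.setAffineTransform(CGAffineTransform(scaleX: scale, y: scale))
        CATransaction.commit()
        do {
            try device.lockForConfiguration()
            device.videoZoomFactor = scale
            device.unlockForConfiguration()
        } catch {
            print("Could not lock device for configuration: \(error)")
        }
    }
}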

SCNProgram - video input

How can I attach a video input to a SCNProgram in SceneKit?
Without using a custom program, I could do:
func set(video player: AVPlayer, on node: SCNNode) {
    let size = player.currentItem!.asset.tracks(
        withMediaType: AVMediaTypeVideo).first!.naturalSize

    let videoNode = SKVideoNode(avPlayer: player)
    videoNode.position = CGPoint(x: size.width / 2, y: size.height / 2)
    videoNode.size = size

    let canvasScene = SKScene()
    canvasScene.size = size
    canvasScene.addChild(videoNode)

    let material = SCNMaterial()
    material.diffuse.contents = canvasScene
    node.geometry?.materials = [material]
}
which renders the video into an SKScene and uses it as the input for an SCNMaterial.
I'd like to use an SCNProgram on the node, but I could not figure out how to attach the player input. I don't mind if the solution doesn't use an SKScene for intermediate rendering; it actually sounds even better if it's possible to do without one.
Have you tried using AVPlayerLayer, which is a subclass of CALayer? You can feed a CALayer to the contents property of an SCNMaterialProperty.
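A rough sketch of that suggestion. The "videoTexture" argument name and the layer size are assumptions, and with a Metal-based SCNProgram the material property is expected to be picked up by the shader argument of the same name via key-value coding:

func set(video player: AVPlayer, on node: SCNNode, program: SCNProgram) {
    // AVPlayerLayer is a CALayer, so it can serve as material property contents.
    let playerLayer = AVPlayerLayer(player: player)
    playerLayer.frame = CGRect(x: 0, y: 0, width: 1024, height: 512) // placeholder size

    let material = SCNMaterial()
    material.program = program

    // "videoTexture" is assumed to match a texture argument in the custom shader.
    let videoProperty = SCNMaterialProperty(contents: playerLayer)
    material.setValue(videoProperty, forKey: "videoTexture")

    node.geometry?.materials = [material]
    player.play()
}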