Problems with CIImageAccumulator from MTKView texture - swift3

I want to capture the output of an MTKView, via the view's texture, into a CIImageAccumulator to achieve a gradual paint build-up effect. The problem is that the accumulator seems to be messing with the color/alpha/colorspace of the original, as shown below:
From the image above, the way I capture the darker-looking brushstroke is via the view's currentDrawable.texture property:
lastSubStrokeCIImage = CIImage(mtlTexture: self.currentDrawable!.texture, options: nil)!.oriented(CGImagePropertyOrientation.downMirrored)
subStrokeUIView.image = UIImage(ciImage: lastSubStrokeCIImage)
Now, once I take the same image and pipe it into a CIImageAccumulator for later processing (I only do this once per drawing segment), the result is the brighter-looking image shown in the upper portion of the attachment:
lazy var ciSubCurveAccumulator: CIImageAccumulator =
{
    [unowned self] in
    return CIImageAccumulator(extent: CGRect(x: 0, y: 0, width: self.frame.width * self.contentScaleFactor, height: self.frame.height * self.window!.screen.scale), format: kCIFormatBGRA8)
}()!
ciSubCurveAccumulator.setImage(lastSubStrokeCIImage)
strokeUIView.image = UIImage(ciImage: ciSubCurveAccumulator.image())
I have tried a variety of kCIFormat values in the CIImageAccumulator definition, all to no avail. What is the CIImageAccumulator doing to mess with the original, and how can I fix it? Note that I intend to use ciSubCurveAccumulator to gradually build up a continuous brushstroke of consistent color. For simplicity of the question, I'm not showing the accumulating part. This problem is stopping me dead in my tracks.
Any suggestions would be appreciated.

The problem came down to two things: one, I needed to set up the MTLRenderPipelineDescriptor() for source-over compositing, and two, I needed to introduce a CIFilter to hold intermediate composites over the accumulating CIImageAccumulator. This CIFilter also needed to be set up for source-over compositing (CISourceOverCompositing). Below is a snippet of code that captures all of the above:
lazy var ciSubCurveAccumulator: CIImageAccumulator =
{
    [unowned self] in
    return CIImageAccumulator(extent: CGRect(x: 0, y: 0, width: self.frame.width * self.contentScaleFactor, height: self.frame.height * self.window!.screen.scale), format: kCIFormatBGRA8)
}()! // set up the CIImageAccumulator
let lastSubStrokeCIImage = CIImage(mtlTexture: self.currentDrawable!.texture, options: nil)!.oriented(CGImagePropertyOrientation.downMirrored)
let compositeFilter : CIFilter = CIFilter(name: "CISourceOverCompositing")!
compositeFilter.setValue(lastSubStrokeCIImage, forKey: kCIInputImageKey) // foreground image
compositeFilter.setValue(ciSubCurveAccumulator.image(), forKey: kCIInputBackgroundImageKey) // background image
let bboxChunkSubCurvesScaledAndYFlipped = CGRect(...) // capture the part of the texture that was drawn to
ciSubCurveAccumulator.setImage(compositeFilter.value(forKey: kCIOutputImageKey) as! CIImage, dirtyRect: bboxChunkSubCurvesScaledAndYFlipped) // comp bbox with latest updates
Wrapping the above bit of code inside draw() allows gradual painting to accumulate quite nicely. Hopefully this helps someone at some point.
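One detail the snippet doesn't show is the MTLRenderPipelineDescriptor setup mentioned above. A minimal sketch of what source-over blending on the descriptor's color attachment typically looks like (my reconstruction, not the original code):
let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
// enable blending so each new stroke composites over what is already in the texture
pipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineDescriptor.colorAttachments[0].rgbBlendOperation = .add
pipelineDescriptor.colorAttachments[0].alphaBlendOperation = .add
// the standard source-over pair for premultiplied alpha
pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .one
pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha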

Related

adding custom marks to SwiftUI chart using path

I want to use a custom shape that I have drawn using a path as the marks of my chart. So instead of points or bars or whatever marking the data, I want the marks to be a specific symbol I have drawn. I am quite new to Swift, so I apologise if I add unnecessary information.
So I have a graph that looks like this:
graph image
here is the code for it:
Chart {
    PointMark(x: .value("Boat", "Pace Boat"), y: .value("Pace Boat", viewModel.duration * (Double(viewModel.split) / paceBoat)))
    PointMark(x: .value("Boat", "You"), y: .value("Your Pace", viewModel.boat.last?.1.distance ?? 0))
}
This is code for the way I produce a shape:
struct BoatShape: Shape {
    func path(in rect: CGRect) -> Path {
        var path = Path()
        path.move(to: CGPoint(x: rect.midX, y: rect.minY))
        // ... do drawing blah
        path.closeSubpath()
        return path
    }
}
I have seen that I might be able to use the ChartSymbolShape protocol, but I can't find much on the internet about implementing it, and I am quite new to Swift. Any direction would be much appreciated.
Additionally:
I would like to be able to add text over the top of the shape, pinning it to the x and y of the shape on the graph, although this is not the main priority.
Basically, what you should do is use any of the marks provided by Apple, but set their opacity to 0 so that they don't appear, and then use an annotation in which you render your own view.
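A minimal sketch of that idea, reusing the BoatShape from the question (the y-value here is a placeholder):
Chart {
    PointMark(x: .value("Boat", "You"), y: .value("Your Pace", 1200))
        .opacity(0) // hide the built-in point symbol
        .annotation(position: .overlay) {
            // render the custom shape where the mark would have been,
            // with the optional text pinned on top of it
            VStack(spacing: 2) {
                Text("You").font(.caption2)
                BoatShape()
                    .fill(.blue)
                    .frame(width: 24, height: 24)
            }
        }
}
Since the annotation content is an arbitrary view, this also covers the follow-up about stacking text over the shape.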

SwiftUI scale path when Image size changes

In my project I'm drawing a box over an image using a custom drawing gesture.
Image(uiImage: image!)
    .resizable()
    .scaledToFit()
    .onTouch(type: .all, limitToBounds: true, perform: updateLocation) // custom touch
    .overlay(
        ForEach(paths) { container in
            // draw the bounding box
            container.path
                .stroke(Color.red, lineWidth: 4)
        }
    )
My issue is the following: when my image changes dimension to fit the view, I want to scale the bounding box I drew up or down so that it keeps the same proportion.
Before the image scale changes:
After the image scales up:
As you can see in the second screenshot, when the image frame becomes bigger, the path drawn on it changes location and dimension.
How can I solve this issue?
I tried the following code when the picture frame dimensions change:
.onPreferenceChange(ViewRectKey.self) { rects in
    pictureFrame = rects.first
    // temporary code to apply the scale
    guard let path = paths.last else { return }
    paths.removeAll()
    print("------------")
    let pa = path.path.applying(CGAffineTransform(scaleX: 5, y: 5)) // 5 just for testing; need the correct scale factor
    let cont = PathContainer(id: UUID(), path: pa)
    paths.append(cont)
}
But I don't know how to calculate what the scale factor should be, nor how to keep the position where I initially drew my path.
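A minimal sketch of one possible approach (untested; it assumes pictureFrame still holds the previous frame when the preference fires, and that the paths were recorded in the image view's own coordinate space):
.onPreferenceChange(ViewRectKey.self) { rects in
    guard let newFrame = rects.first else { return }
    if let oldFrame = pictureFrame, oldFrame.width > 0, oldFrame.height > 0 {
        // ratio between the new and the old image frames
        let scaleX = newFrame.width / oldFrame.width
        let scaleY = newFrame.height / oldFrame.height
        // scaling about the origin rescales every point of the path,
        // so the box's offset inside the image is preserved as well
        paths = paths.map { container in
            PathContainer(id: container.id,
                          path: container.path.applying(CGAffineTransform(scaleX: scaleX, y: scaleY)))
        }
    }
    pictureFrame = newFrame
}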

How to scale CAShapeLayer to different UIImageView

I have a UIImageView that you can tap on and it draws a circle. I store the location of the circles in an Array of Dictionaries. This allows me to "replay" the drawing of the circles. However, when the UIImageView is a different size from the original, the circles don't scale to the new UIImageView.
How can I get the circles to scale? For demonstration purposes, the top picture is the size of the UIImageView used for input and the second one is the size for replay.
Inputting the circles:
Replaying the circles (the circles should be in the blue UIImageView):
import Foundation
import UIKit

class DrawPuck {
    func drawPuck(circle: CGPoint, circleColour: CGColor, circleSize: CGFloat, imageView: UIImageView) {
        let circleBezierPath = UIBezierPath(arcCenter: CGPoint(x: circle.x, y: circle.y), radius: circleSize, startAngle: 0, endAngle: CGFloat(M_PI * 2), clockwise: true)
        let shapeLayer = CAShapeLayer()
        shapeLayer.path = circleBezierPath.cgPath
        // change the fill color
        shapeLayer.fillColor = circleColour
        // you can change the stroke color
        shapeLayer.strokeColor = UIColor.white.cgColor
        // you can change the line width
        shapeLayer.lineWidth = 0.5
        imageView.layer.addSublayer(shapeLayer)
    }
}
I was able to resolve this with CATransform3DMakeScale. As long as I keep the aspect ratio of the original image, it works great.
let width = yellowImageView.frame.width / blueImageView.frame.width
let height = yellowImageView.frame.height / blueImageView.frame.height
// note the argument order: sx takes the width ratio, sy the height ratio
shapeLayer.transform = CATransform3DMakeScale(width, height, 1.0)
view.layer.addSublayer(shapeLayer)
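An alternative to transforming the layer is to scale the stored circle data itself at replay time. A sketch, with a hypothetical storedCircles array and key names standing in for the question's array of dictionaries:
// scale factors from the input (yellow) view to the replay (blue) view
let scaleX = blueImageView.frame.width / yellowImageView.frame.width
let scaleY = blueImageView.frame.height / yellowImageView.frame.height

for entry in storedCircles { // storedCircles: [[String: Any]], hypothetical
    guard let center = entry["center"] as? CGPoint,
          let size = entry["size"] as? CGFloat else { continue }
    let scaledCenter = CGPoint(x: center.x * scaleX, y: center.y * scaleY)
    DrawPuck().drawPuck(circle: scaledCenter,
                        circleColour: UIColor.red.cgColor,
                        circleSize: size * min(scaleX, scaleY), // keep the pucks round
                        imageView: blueImageView)
}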

SceneKit snapshot crashes if I put it in trackMotion, but not otherwise - timing issue?

I have a view with a SceneKit scene in the background and an AVCaptureVideoPreviewLayer in the foreground. I want a semi-transparent version of the background to be visible over the camera. So, based on an older thread here, I added a UIImageView and put this in an action:
let background = sceneView.snapshot()
let cameraview = background.cgImage!.cropping(to: overlayView.bounds)
overlayView.image = UIImage.init(cgImage: cameraview!, scale: 1, orientation: .upMirrored)
This works. The problem is that I have to track the motion of the phone so that the grid aligns with the real world. So...
func trackMotion(motion: CMDeviceMotion?, error: Error?) {
    guard let motion = motion else { return }
    guard self.camera != nil else { return }
    self.camera.orientation = motion.gaze(atOrientation: UIApplication.shared.statusBarOrientation)
    let background = sceneView.snapshot()
    let cameraview = background.cgImage!.cropping(to: overlayView.bounds)
    overlayView.image = UIImage.init(cgImage: cameraview!, scale: 1, orientation: .upMirrored)
}
I get an EXC_BAD_ACCESS on sceneView.snapshot(). I note again, it works perfectly if I just put the exact same code in an action. I suspect this is some sort of timing issue, because if I put a breakpoint there it does not crash; instead, the SceneKit view is all black even after I continue.
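If the cause is that the CoreMotion handler runs off the main thread, the fix may be as simple as hopping to the main queue before snapshotting (an assumption on my part, not a confirmed diagnosis):
func trackMotion(motion: CMDeviceMotion?, error: Error?) {
    guard let motion = motion else { return }
    guard self.camera != nil else { return }
    self.camera.orientation = motion.gaze(atOrientation: UIApplication.shared.statusBarOrientation)
    // assumption: motion updates arrive on the queue passed to
    // startDeviceMotionUpdates, so snapshotting and UIKit work move to main
    DispatchQueue.main.async {
        let background = self.sceneView.snapshot()
        if let cropped = background.cgImage?.cropping(to: self.overlayView.bounds) {
            self.overlayView.image = UIImage(cgImage: cropped, scale: 1, orientation: .upMirrored)
        }
    }
}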
Any advice?

raphael pie chart always blue when there is only 1 value (how to set color of a pie with one slice)

I am having an issue with raphael pie charts. The data I am using is dynamic, and in some instances, only 1 value is returned, meaning the whole chart is filled, as it is the ONLY slice. The problem is that when there is only 1 value, it ignores my color designation.
For example: Below is the creation of a raphael pie chart with 2 values, and each slice has the proper color designated in the "colors" section:
var r = Raphael("holder");
r.piechart(160, 136, 120, [100,200],{colors: ["#000","#cecece"]});
This works fine, and I get two properly sized slices, one black, and one grey.
However, the example below creates one full pie, ALWAYS filled with blue, regardless of my color setting:
var r = Raphael("holder");
r.piechart(160, 136, 120, [100],{colors: ["#000"]});
In this situation, I really need that full pie to be black, as it is set in "colors"
Am I doing something wrong, or is this a bug?
IMO it's a bug, because when the pie has only one slice its color is hard-coded...
Here is how I solved it (all I did was use the colors arg if it exists...).
In g.pie.js, after line 47, add this:
var my_color = chartinst.colors[0];
if (opts.colors !== undefined) {
    my_color = opts.colors[0];
}
Then, in the following line (line 48 in the original js file):
series.push(paper.circle(cx, cy, r).attr({ fill: chartinst.colors[0]....
replace the chartinst.colors[0] with my_color
That's it. The result looks like this:
if (len == 1) {
    var my_color = chartinst.colors[0];
    if (opts.colors !== undefined) {
        my_color = opts.colors[0];
    }
    series.push(paper.circle(cx, cy, r).attr({ fill: my_color, ....
You've probably figured this out on your own since this question is already a day old... but you can "trick" Raphael into rendering a black unit by special-casing datasets of one to add an infinitesimal second value. So, given an array data with your data points...
if ( data.length == 1 )
data.push( 0.000001 );
canvas.piechart(250, 250, 120, data, {colors: ["#000", "#CECECE", "#F88" /*, ... */ ] });
The tiny sliver will still be rendered as a single-pixel line in the 180 degree position, but you could probably fudge that by playing with your color palette.
Yes, it's a trick. I don't believe gRaphael's behavior is buggy so much as poorly implemented (single-element datasets are obviously special cased since they produce a circle instead of a path as they would in all other cases).
An easy way, for me, without editing g.pie.js:
var r = Raphael('st_diagram');
r.piechart(140, 140, 137, [100, 0.0001], {
    colors: ['#9ae013', '#9ae013'],
    strokewidth: 0
});