SCNProgram - video input - opengl

How can I attach a video input to a SCNProgram in SceneKit?
Without using a custom program, I could do:
func set(video player: AVPlayer, on node: SCNNode) {
    let size = player.currentItem!.asset.tracks(
        withMediaType: AVMediaTypeVideo).first!.naturalSize
    let videoNode = SKVideoNode(avPlayer: player)
    videoNode.position = CGPoint(x: size.width / 2, y: size.height / 2)
    videoNode.size = size
    let canvasScene = SKScene()
    canvasScene.size = size
    canvasScene.addChild(videoNode)
    let material = SCNMaterial()
    material.diffuse.contents = canvasScene
    node.geometry?.materials = [material]
}
which renders the video to an SKScene and uses it as the input for an SCNMaterial.
I'd like to use an SCNProgram on the node, but I could not figure out how to attach the player input. I don't mind if the solution skips the intermediate SKScene rendering. It actually sounds even better if it's possible to do without it.

Have you tried using AVPlayerLayer, which is a subclass of CALayer? You can feed a CALayer to the contents property of an SCNMaterialProperty.
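A minimal sketch of that approach, assuming you already have an AVPlayer (the layer size here is an arbitrary placeholder):

```swift
import AVFoundation
import SceneKit

func setVideoLayer(from player: AVPlayer, on node: SCNNode) {
    // AVPlayerLayer is a CALayer subclass that renders the player's output
    let playerLayer = AVPlayerLayer(player: player)
    playerLayer.frame = CGRect(x: 0, y: 0, width: 1024, height: 512) // placeholder size
    playerLayer.videoGravity = .resizeAspectFill

    // SCNMaterialProperty.contents accepts a CALayer directly,
    // so no intermediate SKScene is needed
    let material = SCNMaterial()
    material.diffuse.contents = playerLayer
    node.geometry?.materials = [material]

    player.play()
}
```

This avoids the SKScene round-trip entirely; whether the layer also works as an input to a custom SCNProgram would still need to be verified on-device.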

Related

adding custom marks to SwiftUI chart using path

I want to use a custom shape that I have drawn with a path for the marks of my chart. So instead of points or bars or whatever marking the data, I want the mark to be a specific symbol I have drawn. I am quite new to Swift, so I apologise if I add unnecessary information.
So I have a graph that looks like this:
graph image
here is the code for it:
Chart {
    PointMark(x: .value("Boat", "Pace Boat"),
              y: .value("Pace Boat", viewModel.duration * (Double(viewModel.split) / paceBoat)))
    PointMark(x: .value("Boat", "You"),
              y: .value("Your Pace", viewModel.boat.last?.1.distance ?? 0))
}
This is the code for how I produce a shape:
struct BoatShape: Shape {
    func path(in rect: CGRect) -> Path {
        var path = Path()
        path.move(to: CGPoint(x: rect.midX, y: rect.minY))
        //... do drawing blah
        path.closeSubpath()
        return path
    }
}
I have seen that I might be able to use the ChartSymbolShape protocol, but I can't find much on the internet about implementing it, and I am quite new to Swift. Any direction would be much appreciated.
Additionally, I would like to be able to add text over the top of the shape, pinning it to the shape's x and y position on the graph, although this is not the main priority.
Basically, what you should do is use any of the marks provided by Apple, but set its opacity to 0 so that it doesn't appear, and then use an annotation in which you render your own view.
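A sketch of that idea; the data value and the simple triangle standing in for the asker's drawing are placeholders:

```swift
import SwiftUI
import Charts

// Placeholder for the custom shape from the question: a simple triangle
struct BoatShape: Shape {
    func path(in rect: CGRect) -> Path {
        var path = Path()
        path.move(to: CGPoint(x: rect.midX, y: rect.minY))
        path.addLine(to: CGPoint(x: rect.maxX, y: rect.maxY))
        path.addLine(to: CGPoint(x: rect.minX, y: rect.maxY))
        path.closeSubpath()
        return path
    }
}

struct BoatChart: View {
    var body: some View {
        Chart {
            // The built-in mark is made invisible...
            PointMark(x: .value("Boat", "You"), y: .value("Pace", 42.0))
                .opacity(0)
                // ...and an overlay annotation draws the custom symbol plus a label
                .annotation(position: .overlay) {
                    VStack(spacing: 2) {
                        Text("You").font(.caption)
                        BoatShape()
                            .fill(.blue)
                            .frame(width: 24, height: 24)
                    }
                }
        }
    }
}
```

The annotation is positioned relative to the (invisible) mark, so both the shape and the text track the mark's x and y on the chart.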

Turn NSGradient into NSColor in Swift

I am trying to make an MKPolyline for a SwiftUI map that shows a person's location over a day, and I want the line to be a gradient changing from blue at the first point of their route to green at the last point. I have this code:
renderer.strokeColor = NSGradient(colors: [NSColor.blue, NSColor.green])
I have also tried
renderer.strokeColor = NSColor(NSGradient(colors: [NSColor.blue, NSColor.green]))
and
renderer.strokeColor = NSColor(Color(Gradient(colors: [Color.blue, Color.green])))
but these all return errors about converting gradients into colors. Thanks!
Thanks to @vadian, I did this:
if let routePolyline = overlay as? MKPolyline {
    let renderer = MKGradientPolylineRenderer(polyline: routePolyline)
    renderer.setColors([NSColor.blue, NSColor.green], locations: [])
    renderer.lineWidth = 2
    return renderer
}
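For context, a sketch of where that snippet would live — the map view delegate's renderer callback (the delegate class name here is hypothetical):

```swift
import MapKit

class MapDelegate: NSObject, MKMapViewDelegate {
    func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
        if let routePolyline = overlay as? MKPolyline {
            // MKGradientPolylineRenderer interpolates the colors along the line;
            // an empty locations array spreads them evenly from start to end
            let renderer = MKGradientPolylineRenderer(polyline: routePolyline)
            renderer.setColors([NSColor.blue, NSColor.green], locations: [])
            renderer.lineWidth = 2
            return renderer
        }
        // Fallback for any other overlay type
        return MKOverlayRenderer(overlay: overlay)
    }
}
```

The key point is that the gradient lives on the renderer, not on `strokeColor`, which only accepts a plain color.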

Problems with CIImageAccumulator from MTKView texture

I want to capture the output of an MTKView, via the view's texture, into a CIImageAccumulator to achieve a gradual painting build-up effect. The problem is that the accumulator seems to be messing with the color/alpha/colorspace of the original, as shown below:
From the image above, the way I capture the darker-looking brushstroke is via the view's currentDrawable.texture property:
lastSubStrokeCIImage = CIImage(mtlTexture: self.currentDrawable!.texture, options: nil)!.oriented(CGImagePropertyOrientation.downMirrored)
subStrokeUIView.image = UIImage(ciImage: lastSubStrokeCIImage)
Now, once I take the same image and pipe it into the CIImageAccumulator for later processing (I only do this once per drawing segment), the result is the brighter-looking image shown in the upper portion of the attachment:
lazy var ciSubCurveAccumulator: CIImageAccumulator = { [unowned self] in
    return CIImageAccumulator(extent: CGRect(x: 0, y: 0,
                                             width: self.frame.width * self.contentScaleFactor,
                                             height: self.frame.height * self.window!.screen.scale),
                              format: kCIFormatBGRA8)
}()!
ciSubCurveAccumulator.setImage(lastSubStrokeCIImage)
strokeUIView.image = UIImage(ciImage: ciSubCurveAccumulator.image())
I have tried a variety of kCIFormat values in the CIImageAccumulator definition, all to no avail. What is the CIImageAccumulator doing to mess with the original, and how can I fix it? Note that I intend to use ciSubCurveAccumulator to gradually build up a continuous brushstroke of consistent color. For simplicity of the question, I'm not showing the accumulating part. This problem is stopping me dead in my tracks.
Any suggestions would be appreciated.
The problem came down to two things: one, I needed to set up the MTLRenderPipelineDescriptor for source-over compositing, and two, I needed to introduce a CIFilter to hold an intermediate composite over the accumulating CIImageAccumulator. This CIFilter also needed to be set up for source-over compositing (CISourceOverCompositing). Below is a snippet of code that captures all of the above:
// set up the CIImageAccumulator
lazy var ciSubCurveAccumulator: CIImageAccumulator = { [unowned self] in
    return CIImageAccumulator(extent: CGRect(x: 0, y: 0,
                                             width: self.frame.width * self.contentScaleFactor,
                                             height: self.frame.height * self.window!.screen.scale),
                              format: kCIFormatBGRA8)
}()!

let lastSubStrokeCIImage = CIImage(mtlTexture: self.currentDrawable!.texture, options: nil)!
    .oriented(CGImagePropertyOrientation.downMirrored)
let compositeFilter: CIFilter = CIFilter(name: "CISourceOverCompositing")!
compositeFilter.setValue(lastSubStrokeCIImage, forKey: kCIInputImageKey) // foreground image
compositeFilter.setValue(ciSubCurveAccumulator.image(), forKey: kCIInputBackgroundImageKey) // background image
let bboxChunkSubCurvesScaledAndYFlipped = CGRect(...) // capture the part of the texture that was drawn to
ciSubCurveAccumulator.setImage(compositeFilter.value(forKey: kCIOutputImageKey) as! CIImage,
                               dirtyRect: bboxChunkSubCurvesScaledAndYFlipped) // comp bbox with latest updates
Wrapping the above bit of code inside draw() allows gradual painting to accumulate quite nicely. Hopefully this helps someone at some point.
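For the first point, a sketch of a pipeline descriptor configured for source-over blending; the function parameters and pixel format are assumptions, and the rest of the pipeline setup is elided:

```swift
import Metal

func makePipelineDescriptor(vertex: MTLFunction, fragment: MTLFunction) -> MTLRenderPipelineDescriptor {
    let descriptor = MTLRenderPipelineDescriptor()
    descriptor.vertexFunction = vertex
    descriptor.fragmentFunction = fragment

    let attachment = descriptor.colorAttachments[0]
    attachment?.pixelFormat = .bgra8Unorm
    // Classic source-over: result = src + (1 - srcAlpha) * dst
    attachment?.isBlendingEnabled = true
    attachment?.rgbBlendOperation = .add
    attachment?.alphaBlendOperation = .add
    attachment?.sourceRGBBlendFactor = .one
    attachment?.sourceAlphaBlendFactor = .one
    attachment?.destinationRGBBlendFactor = .oneMinusSourceAlpha
    attachment?.destinationAlphaBlendFactor = .oneMinusSourceAlpha
    return descriptor
}
```

Source factors of `.one` assume the brush texture uses premultiplied alpha, which matches what Core Image expects on the accumulator side.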

SpriteKit hexagonal map: tile detection in a finishing SKAction

I ask this question because I didn't find any solution for this kind of issue. In fact, hex map support is not very popular.
I'm making a game with the SpriteKit framework. I use SKTileMapNode with a hexagonal map, with one tile set of four tile groups.
The player node moves across the tiles. What I want is that when he moves onto specific tiles, some event is triggered (a print, a function, an SKTransition), but for the moment I'm stuck on just detecting those tiles.
I set up the user data (as a Bool) and read it in the code, but nothing happens, even with a touch event on a tile.
extension GameScene {
    func move(theXAmount: CGFloat, theYAmount: CGFloat, theAnimation: String) {
        let wait: SKAction = SKAction.wait(forDuration: 0.05)
        let walkAnimation: SKAction = SKAction(named: theAnimation, duration: moveSpeed)!
        let moveAction: SKAction = SKAction.moveBy(x: theXAmount, y: theYAmount, duration: moveSpeed)
        let group: SKAction = SKAction.group([walkAnimation, moveAction])
        let finish: SKAction = SKAction.run {
            let position = self.thePlayer.position
            let column = self.galaxieMapnode.tileColumnIndex(fromPosition: position)
            let row = self.galaxieMapnode.tileRowIndex(fromPosition: position)
            if let tile: SKTileDefinition = self.galaxieMapnode.tileDefinition(atColumn: column, row: row) {
                if let tileBoolData = tile.userData?["wormholeTile"] as? Bool {
                    if tileBoolData == true {
                        print("Wormhole touched")
                    }
                }
            } else {
                print("different tile")
            }
        }
    }
}
Only the "different tile" output is fired.
Any help is welcome.
Link for image example : the thing I want
I think you want to run your finish block after the other actions have completed? You can do this as a sequence.
var sequence = [SKAction]()
let action = SKAction.move(to: location, duration: 1)
let completionHandler = SKAction.run {
    // Check tile info here
}
sequence += [action, completionHandler]
self.player.run(SKAction.sequence(sequence))
Typically in a hexagonal map you would move the player to an adjacent tile, check the tile's attributes, then continue with the next move action. This requires the graph and pathfinding functionality from GameplayKit.
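A sketch of that tile-by-tile pattern, leaving the pathfinding aside; the `path`, `player`, and `map` parameters and the "wormholeTile" key are assumptions based on the question:

```swift
import SpriteKit

func run(path: [(column: Int, row: Int)], player: SKSpriteNode, map: SKTileMapNode) {
    var sequence = [SKAction]()
    for step in path {
        // Move to the centre of the next tile on the path...
        let destination = map.centerOfTile(atColumn: step.column, row: step.row)
        sequence.append(SKAction.move(to: destination, duration: 0.3))
        // ...then check its attributes before the next move begins
        sequence.append(SKAction.run {
            if let tile = map.tileDefinition(atColumn: step.column, row: step.row),
               tile.userData?["wormholeTile"] as? Bool == true {
                print("Wormhole touched")
            }
        })
    }
    player.run(SKAction.sequence(sequence))
}
```

Because each check runs as its own action in the sequence, the tile query happens when the player has actually arrived on the tile, rather than while a group action is still mid-flight.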

Create an SKSpriteNode that blurs the nodes below

I want to create a Sprite that has an alpha value and sits on top of some other nodes.
let objects = SKSpriteNode(imageNamed: "objects")
let blurredOverlay = SKSpriteNode(imageNamed: "overlay")
addChild(objects)
addChild(blurredOverlay)
My intention is to add a visual effect to the blurredOverlay node so that only the nodes overlapped by it show the blur effect.
Anyone with an idea?
This answer takes and modifies code from: Add glowing effect to an SKSpriteNode
For your solution:
Swift 3
let objects = SKSpriteNode(imageNamed: "objects")
let blurredOverlay = SKSpriteNode(imageNamed: "overlay")
let effectNode = SKEffectNode()
effectNode.shouldRasterize = true
effectNode.zPosition = 1
effectNode.alpha = 0.5
effectNode.addChild(SKSpriteNode(texture: blurredOverlay.texture))
effectNode.filter = CIFilter(name: "CIGaussianBlur", withInputParameters: ["inputRadius":30])
objects.addChild(effectNode)
addChild(objects)
Example (the upper tree has the blurredOverlay):
In the example, the images "objects" and "overlay" are the same image.