I'd like to be able to take a screenshot of a SwiftUI view in an XCTest.
I've tried things like the Hacking with Swift extension: https://www.hackingwithswift.com/quick-start/swiftui/how-to-convert-a-swiftui-view-to-an-image
However, my use case is slightly different: I need it to run in an XCTest. I've also seen pointfreeco's snapshot testing library, but I want to understand why what I've written produces either a black or an empty image.
I've also tried using a CADisplayLink to capture during a display loop, but the image is still empty. I feel I'm missing something fundamental.
Can anyone offer any help? Thank you
import SwiftUI
import XCTest

final class MyTests: XCTestCase {
    func test_screenshot_view() throws {
        let swiftUIView = Button {
        } label: {
            Text("Hello, World!")
        }
        .frame(width: 140, height: 56)

        let controller = UIHostingController(rootView: swiftUIView)
        let window = UIWindow()
        window.rootViewController = controller

        let view = controller.view
        let targetSize = controller.view.intrinsicContentSize
        view!.bounds = CGRect(origin: .zero, size: targetSize)
        view!.backgroundColor = UIColor.yellow

        UIGraphicsBeginImageContextWithOptions(view!.bounds.size, view!.isOpaque, 0)
        view!.drawHierarchy(in: view!.bounds, afterScreenUpdates: true)
        let snapshotImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        // snapshotImage is either black or empty
    }
}
Just an update: this seems to work, but it has some sizing issues:
import SwiftUI
import CoreGraphics

extension View {
    func asImage() -> UIImage {
        let controller = UIHostingController(rootView: self)
        let view = controller.view
        let targetSize = controller.view.intrinsicContentSize
        let bounds = CGRect(origin: .zero, size: targetSize)
        let window = UIWindow(frame: bounds)
        window.rootViewController = controller
        window.makeKeyAndVisible()
        view?.bounds = bounds
        view?.backgroundColor = .clear
        let image = controller.view.asImage()
        return image
    }
}

extension UIView {
    func asImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
    }
}
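For reference, here's a sketch of how that extension might be driven from a test, attaching the result to the test report so it can be inspected in Xcode (the assertion is just a sanity check):

import SwiftUI
import XCTest

final class SnapshotTests: XCTestCase {
    func test_snapshot_button() throws {
        let swiftUIView = Button {
        } label: {
            Text("Hello, World!")
        }
        .frame(width: 140, height: 56)

        let snapshot = swiftUIView.asImage()

        // Attach the image to the test results for manual inspection.
        let attachment = XCTAttachment(image: snapshot)
        attachment.lifetime = .keepAlways
        add(attachment)

        XCTAssertGreaterThan(snapshot.size.width, 0)
    }
}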
I can't find a way to make a UIImageView wrapped in a UIViewRepresentable size itself to fit its frame. It always grows beyond the screen no matter what contentMode, clipping, or explicit framing I apply. (The image's dimensions are much larger than the device's screen frame.)
To clarify: I need to use UIImageView because of some subview positioning I have to do down the line, and various other reasons.
Here's a pared-down example:
struct ImageView: UIViewRepresentable {
    var image: UIImage

    func makeUIView(context: Context) -> some UIView {
        let imageView = UIImageView()
        imageView.image = image
        imageView.backgroundColor = .red
        imageView.contentMode = .scaleAspectFit
        imageView.clipsToBounds = true
        imageView.frame = CGRect(x: 0, y: 0, width: 300, height: 400)
        return imageView
    }

    func updateUIView(_ uiView: UIViewType, context: Context) {
    }
}
Then this is how I'm trying to implement it:
struct ContentView: View {
    var body: some View {
        ImageView(image: UIImage(named: "full-ux-bg-image")!)
            //.frame(width: 300, height: 400, alignment: .center) // This is just a test of explicit sizing
            .padding()
    }
}
Any ideas how to make this work? I want it to fit in the SwiftUI view without going over.
A UIImageView comes with default constraints for content hugging and compression resistance; to be able to size the view externally we need to lower those priorities (and never set a frame on a representable's view, just in case).
Here is the fixed variant (tested with Xcode 14 / iOS 16):
func makeUIView(context: Context) -> some UIView {
    let imageView = UIImageView()
    imageView.image = image
    imageView.backgroundColor = .red
    imageView.contentMode = .scaleAspectFit
    imageView.clipsToBounds = true
    imageView.setContentHuggingPriority(.defaultLow, for: .vertical)
    imageView.setContentHuggingPriority(.defaultLow, for: .horizontal)
    imageView.setContentCompressionResistancePriority(.defaultLow, for: .vertical)
    imageView.setContentCompressionResistancePriority(.defaultLow, for: .horizontal)
    return imageView
}
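With those priorities lowered, the explicit frame from SwiftUI wins. A minimal usage sketch, reusing the asker's ContentView:

struct ContentView: View {
    var body: some View {
        ImageView(image: UIImage(named: "full-ux-bg-image")!)
            .frame(width: 300, height: 400, alignment: .center) // now actually constrains the UIImageView
            .padding()
    }
}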
I have some experience with SwiftUI, but am new to UIKit.
I'd like to carry the zoom and position over from one instance of a UIViewRepresentable-wrapped UIScrollView to another. So, basically, the user scrolls and zooms, and later, in another branch of the view hierarchy, I want to start zoomed in at that same zoom and position. I can't get it to work, though, even after many attempts.
Below is my makeUIView function, where I try to set the desired position and zoom (after some initial setup).
func makeUIView(context: Context) -> UIScrollView {
    // set up the UIScrollView
    let scrollView = UIScrollView()
    scrollView.delegate = context.coordinator
    scrollView.bouncesZoom = true
    scrollView.delaysContentTouches = false
    scrollView.maximumZoomScale = 0.85 * screenScale * 10
    scrollView.minimumZoomScale = 0.85 * screenScale

    // create a UIHostingController to hold our SwiftUI content
    let hostedView = context.coordinator.hostingController.view!
    hostedView.translatesAutoresizingMaskIntoConstraints = true
    hostedView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    hostedView.frame = scrollView.bounds
    scrollView.addSubview(hostedView)

    /*
     Here I add the zoom and position
     */
    scrollView.zoomScale = 0.85 * screenScale

    // add zoom and content offset
    if let zoomScale = zoomScale, let contentOffset = contentOffset {
        scrollView.contentOffset = contentOffset
        // make sure it is within the bounds
        var newZoomScale = zoomScale
        if zoomScale < scrollView.minimumZoomScale {
            print("too small")
            newZoomScale = scrollView.minimumZoomScale
        } else if zoomScale > scrollView.maximumZoomScale {
            print("too large")
            newZoomScale = scrollView.maximumZoomScale
        }
        scrollView.setContentOffset(contentOffset, animated: true)
        scrollView.setZoomScale(newZoomScale, animated: true)
    }
    return scrollView
}
The way I get the zoom and contentOffset in the first place is to grab the values from the Coordinator in the first ScrollView instance using the code below. As far as I can tell this works well, and I get updates with sensible values after zooming or scrolling. The first snippet contains the makeCoordinator function, where I initialize the coordinator with methods from an environment object (which then updates said object). The second snippet contains the Coordinator.
func makeCoordinator() -> Coordinator {
    return Coordinator(hostingController: UIHostingController(rootView: self.content),
                       userScrolledAction: drawingModel.userScrollAction,
                       userZoomedAction: drawingModel.userZoomAction)
}

class Coordinator: NSObject, UIScrollViewDelegate {
    var hostingController: UIHostingController<Content>
    let userScrolledAction: (CGPoint) -> Void
    let userZoomedAction: (CGFloat) -> Void

    init(hostingController: UIHostingController<Content>,
         userScrolledAction: @escaping (CGPoint) -> Void,
         userZoomedAction: @escaping (CGFloat) -> Void) {
        self.hostingController = hostingController
        self.userScrolledAction = userScrolledAction
        self.userZoomedAction = userZoomedAction
    }

    func viewForZooming(in scrollView: UIScrollView) -> UIView? {
        return hostingController.view
    }

    func scrollViewDidScroll(_ scrollView: UIScrollView) {
        userScrolledAction(scrollView.contentOffset)
    }

    func scrollViewDidZoom(_ scrollView: UIScrollView) {
        userZoomedAction(scrollView.zoomScale)
    }
}
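For completeness, here is a sketch of the model side implied by the snippets above. Only userScrollAction and userZoomAction appear in the question; DrawingModel and the stored-property names are hypothetical:

class DrawingModel: ObservableObject {
    // Hypothetical storage for the last-known scroll state.
    @Published var savedContentOffset: CGPoint?
    @Published var savedZoomScale: CGFloat?

    func userScrollAction(_ offset: CGPoint) {
        savedContentOffset = offset
    }

    func userZoomAction(_ scale: CGFloat) {
        savedZoomScale = scale
    }
}

The second scroll view instance can then read savedContentOffset and savedZoomScale back in its makeUIView, as the question attempts.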
I want to open a 3D model and make its background transparent, so that I can see the UI behind the SceneView. I've tried this code, but the SceneView becomes white, not transparent.
struct ModelView: View {
    var body: some View {
        ZStack {
            Text("Behind Text Behind Text Behind Text")
            SceneView(
                scene: { () -> SCNScene in
                    let scene = SCNScene()
                    scene.background.contents = UIColor.clear
                    return scene
                }(),
                pointOfView: { () -> SCNNode in
                    let cameraNode = SCNNode()
                    cameraNode.camera = SCNCamera()
                    cameraNode.position = SCNVector3(x: 0, y: 0, z: 10)
                    return cameraNode
                }(),
                options: [
                    .allowsCameraControl,
                    .temporalAntialiasingEnabled,
                ]
            )
        }
    }
}
I use Xcode 12.5 and an iPhone 8.
EDIT 1:
Thanks to the comments below, I decided to try new approaches, but they still don't work.
Approach #1
First, I tried to create a MySceneView using SCNView through UIViewRepresentable:
struct MySceneView: UIViewRepresentable {
    typealias UIViewType = SCNView
    typealias Context = UIViewRepresentableContext<MySceneView>

    func updateUIView(_ uiView: UIViewType, context: Context) {}

    func makeUIView(context: Context) -> UIViewType {
        let view = SCNView()
        view.allowsCameraControl = true
        view.isTemporalAntialiasingEnabled = true
        view.autoenablesDefaultLighting = true
        view.scene = MySceneView.scene
        return view
    }

    static let scene: SCNScene = {
        let scene = SCNScene(named: "art.scnassets/man.obj")!
        scene.background.contents = UIColor.clear
        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(x: 0, y: 0, z: 10)
        scene.rootNode.addChildNode(cameraNode)
        return scene
    }()
}
Approach #2
I tried using SpriteView, here is the code:
ZStack {
    Text("Behind Text Behind Text Behind Text")
    SpriteView(scene: { () -> SKScene in
        let scene = SKScene()
        scene.backgroundColor = UIColor.clear
        let model = SK3DNode(viewportSize: .init(width: 200, height: 200))
        model.scnScene = MySceneView.scene
        scene.addChild(model)
        return scene
    }(), options: [.allowsTransparency])
}
Update:
A much simpler solution is to use UIViewRepresentable: create an SCNView and set its backgroundColor to .clear.
Old:
Thanks George_E, your idea with SpriteKit worked perfectly. Here is the code:
SpriteView(scene: { () -> SKScene in
    let scene = SKScene()
    scene.backgroundColor = UIColor.clear
    let model = SK3DNode(viewportSize: .init(width: 200, height: 200))
    model.scnScene = {
        let scene = SCNScene(named: "art.scnassets/man.obj")!
        scene.background.contents = UIColor.clear
        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(x: 0, y: 0, z: 10)
        scene.rootNode.addChildNode(cameraNode)
        return scene
    }()
    scene.addChild(model)
    return scene
}(), options: [.allowsTransparency])
I wasn't able to find a fully working snippet here, but thanks to the answers from Arutyun I managed to put together a working solution without the need for SpriteKit.
import SceneKit
import SwiftUI

struct MySceneView: UIViewRepresentable {
    typealias UIViewType = SCNView
    typealias Context = UIViewRepresentableContext<MySceneView>

    func updateUIView(_ uiView: UIViewType, context: Context) {}

    func makeUIView(context: Context) -> UIViewType {
        let view = SCNView()
        view.backgroundColor = UIColor.clear // this is key!
        view.allowsCameraControl = true
        view.autoenablesDefaultLighting = true
        // load the object here, could load a .scn file too
        view.scene = SCNScene(named: "model.obj")!
        return view
    }
}
And use it just like a regular view:
import SwiftUI

struct ContentView: View {
    var body: some View {
        ZStack {
            // => background views here
            MySceneView()
                .frame( // set frame as required
                    maxWidth: .infinity,
                    maxHeight: .infinity,
                    alignment: .center
                )
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
With SwiftUI:
It is slightly different in SwiftUI when using a SpriteView.
To implement a transparent SpriteView in SwiftUI, you have to use the options parameter:
1. Configure your SKScene with a clear background (view AND scene)
2. Configure your SpriteView with the correct option
// 1. configure your scene in 'didMove'
override func didMove(to view: SKView) {
    self.backgroundColor = .clear
    view.backgroundColor = .clear
}
and most importantly:
// 2. configure your SpriteView with 'allowsTransparency'
SpriteView(scene: YourSKScene(), options: [.allowsTransparency])
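Putting the two pieces together, a minimal self-contained sketch (YourSKScene is the placeholder name from the snippet above):

import SpriteKit
import SwiftUI

class YourSKScene: SKScene {
    override func didMove(to view: SKView) {
        // Clear both the scene and its hosting SKView.
        backgroundColor = .clear
        view.backgroundColor = .clear
    }
}

struct TransparentSpriteExample: View {
    var body: some View {
        ZStack {
            Text("Behind Text")
            SpriteView(scene: YourSKScene(size: CGSize(width: 300, height: 300)),
                       options: [.allowsTransparency])
        }
    }
}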
I didn't like using SpriteKit to make a SceneKit scene's background transparent, because you completely lose access to the SCNView. Here is what I believe to be the correct approach.
Create a SceneKit scene file named GameScene.scn
Drop your 3D object into the GameScene
Use the code below
/// The SCNView view
struct GameSceneView: UIViewRepresentable {
    @ObservedObject var viewModel: GameSceneViewModel

    func makeUIView(context: UIViewRepresentableContext<GameSceneView>) -> SCNView {
        let view = SCNView()
        view.backgroundColor = viewModel.backgroundColor
        view.allowsCameraControl = viewModel.allowsCameraControls
        view.autoenablesDefaultLighting = viewModel.autoenablesDefaultLighting
        view.scene = viewModel.scene
        return view
    }

    func updateUIView(_ uiView: SCNView, context: UIViewRepresentableContext<GameSceneView>) {}
}
/// The view model supplying the SCNScene and its properties
class GameSceneViewModel: ObservableObject {
    @Published var scene: SCNScene?
    @Published var backgroundColor: UIColor
    @Published var allowsCameraControls: Bool
    @Published var autoenablesDefaultLighting: Bool

    init(
        sceneName: String = "GameScene.scn",
        cameraName: String = "camera",
        backgroundColor: UIColor = .clear,
        allowsCameraControls: Bool = true,
        autoenablesDefaultLighting: Bool = true
    ) {
        self.scene = SCNScene(named: sceneName)
        self.backgroundColor = backgroundColor
        self.allowsCameraControls = allowsCameraControls
        self.autoenablesDefaultLighting = autoenablesDefaultLighting
        scene?.background.contents = backgroundColor
        scene?.rootNode.childNode(withName: cameraName, recursively: false)
    }
}
/// Usage
struct ContentView: View {
    var body: some View {
        VStack {
            GameSceneView(viewModel: GameSceneViewModel())
        }
        .background(Color.blue)
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
How can I use GLKView in SwiftUI? I'm using CIFilter but would like to apply filters through GLKit / OpenGL. Any ideas?
import SwiftUI
import CoreImage.CIFilterBuiltins

struct ContentView: View {
    @State private var image: Image?

    var body: some View {
        VStack {
            image?
                .resizable()
                .scaledToFit()
        }
        .onAppear(perform: loadImage)
    }

    func loadImage() {
        guard let inputImage = UIImage(named: "squirrel") else {
            return
        }
        let ciImage = CIImage(image: inputImage)

        let context = CIContext()
        let blur = CIFilter.gaussianBlur()
        blur.inputImage = ciImage
        blur.radius = 20

        guard let outputImage = blur.outputImage else {
            return
        }

        if let cgImg = context.createCGImage(outputImage, from: ciImage!.extent) {
            let uiImg = UIImage(cgImage: cgImg)
            image = Image(uiImage: uiImg)
        }
    }
}
Here's a working GLKView in SwiftUI using UIViewControllerRepresentable.
A few things to keep in mind.
GLKit was deprecated with the release of iOS 12, nearly 2 years ago. While I hope Apple won't kill it anytime soon (way too many apps still use it), they recommend using Metal or an MTKView instead. Most of the technique here is still the way to go for SwiftUI.
I worked with SwiftUI in hopes of making my next CoreImage app a "pure" SwiftUI app, until I had too many UIKit needs to bring in. I stopped working on this around Beta 6. The code works but is clearly not production-ready. The repo for this is here.
I'm more comfortable working with models instead of putting code for things like using a CIFilter directly in my views. I'll assume you know how to create a view model and have it be an EnvironmentObject. If not, look at my code in the repo.
Your code references a SwiftUI Image view - I never found any documentation suggesting it uses the GPU (as a GLKView does), so you won't find anything like that in my code. If you are looking for real-time performance when changing attributes, I found this to work very well.
Starting with a GLKView, here's my code:
class ImageView: GLKView {
    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    public var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
    }

    override public func draw(_ rect: CGRect) {
        if let image = image {
            // letterbox/pillarbox the image so it renders as scaleAspectFit
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            rgb = (0, 0, 0)
            rgb = myClearColor.rgb() // rgb() is a helper extension from the repo
            glClearColor(Float(rgb.0!)/256.0, Float(rgb.1!)/256.0, Float(rgb.2!)/256.0, 0.0)
            glClear(0x00004000) // GL_COLOR_BUFFER_BIT
            // set the blend mode to "source over" so that CI will use that
            glEnable(0x0BE2)       // GL_BLEND
            glBlendFunc(1, 0x0303) // GL_ONE, GL_ONE_MINUS_SRC_ALPHA
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
This is very old production code, taken from objc.io issue 21, dated February 2015! Of note is that it encapsulates a CIContext, needs its own clear color defined before you use its draw method, and renders the image as scaleAspectFit. If you try using this in UIKit, it'll likely work perfectly.
Next, a "wrapper" UIViewController:
class ImageViewVC: UIViewController {
    var model: Model!
    var imageView = ImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view = imageView
        NotificationCenter.default.addObserver(self, selector: #selector(updateImage), name: .updateImage, object: nil)
    }

    override func viewDidLayoutSubviews() {
        imageView.setNeedsDisplay()
    }

    override func traitCollectionDidChange(_ previousTraitCollection: UITraitCollection?) {
        if traitCollection.userInterfaceStyle == .light {
            imageView.clearColor = UIColor.white
        } else {
            imageView.clearColor = UIColor.black
        }
    }

    @objc func updateImage() {
        imageView.image = model.ciFinal
        imageView.setNeedsDisplay()
    }
}
I did this for a few reasons, pretty much adding up to the fact that I'm not a Combine expert.
First, note that the view model (model) cannot access the EnvironmentObject directly. That's a SwiftUI construct, and UIKit doesn't know about it. I think an ObservableObject may work, but I never found the right way to do it.
Second, note the use of NotificationCenter. I spent a week last year trying to get Combine to "just work" - particularly in the opposite direction, having a UIButton tap notify my model of a change - and found that this is really the easiest way. It's even easier than using delegate methods.
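One detail these snippets assume: .updateImage is not a built-in notification name, so an extension along these lines (the raw-value string is arbitrary) must exist somewhere in the project:

extension Notification.Name {
    // Custom name used by the observer in viewDidLoad and the post in ContentView.
    static let updateImage = Notification.Name("updateImage")
}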
Next, exposing the VC as a representable:
struct GLKViewerVC: UIViewControllerRepresentable {
    @EnvironmentObject var model: Model
    let glkViewVC = ImageViewVC()

    func makeUIViewController(context: Context) -> ImageViewVC {
        return glkViewVC
    }

    func updateUIViewController(_ uiViewController: ImageViewVC, context: Context) {
        glkViewVC.model = model
    }
}
The only thing of note is that this is where I set the model variable in the VC. I'm sure it's possible to get rid of the VC entirely and have a UIViewRepresentable, but I'm more comfortable with this setup.
Next, my model:
import Combine
import UIKit

class Model: ObservableObject {
    var objectWillChange = PassthroughSubject<Void, Never>()
    var uiOriginal: UIImage?
    var ciInput: CIImage?
    var ciFinal: CIImage?

    init() {
        uiOriginal = UIImage(named: "vermont.jpg")
        // resizeToBoundingSquare and rotateImage are helper extensions from the linked repo
        uiOriginal = uiOriginal!.resizeToBoundingSquare(640)
        ciInput = CIImage(image: uiOriginal!)?.rotateImage()
        let filter = CIFilter(name: "CIPhotoEffectNoir")
        filter?.setValue(ciInput, forKey: "inputImage")
        ciFinal = filter?.outputImage
    }
}
Nothing to see here at all, but understand that in SceneDelegate, where you instantiate this, the init will run and set up the filtered image.
Finally, ContentView:
struct ContentView: View {
    @EnvironmentObject var model: Model

    var body: some View {
        VStack {
            GLKViewerVC()
            Button(action: {
                self.showImage()
            }) {
                VStack {
                    Image(systemName: "tv").font(Font.body.weight(.bold))
                    Text("Show image").font(Font.body.weight(.bold))
                }
                .frame(width: 80, height: 80)
            }
        }
    }

    func showImage() {
        NotificationCenter.default.post(name: .updateImage, object: nil, userInfo: nil)
    }
}
SceneDelegate instantiates the view model, which now has the altered CIImage, and the button beneath the GLKView (an instance of GLKViewerVC, which is just a SwiftUI View) sends a notification to update the image.
Apple's WWDC 2022 contained a tutorial/video entitled "Display EDR Content with Core Image, Metal, and SwiftUI" which describes how to blend Core Image with Metal and SwiftUI. It points to some new sample code entitled "Generating an Animation with a Core Image Render Destination" (here).
While it doesn't address your question about using GLKView, it does provide some elegant, clean, Apple-sanctioned code for using Metal within SwiftUI.
This sample project is very CoreImage-centric (which matches your background with CIFilter), but I wish Apple would post more sample-code examples showing Metal integrated with SwiftUI.
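While the sample itself is the best reference, here is a minimal sketch of the general shape: a CIImage rendered through Metal in SwiftUI via an MTKView wrapped in a UIViewRepresentable. This is my own condensation, not Apple's sample code; MetalCIView and Renderer are made-up names, and aspect-ratio handling is omitted.

import CoreImage
import MetalKit
import SwiftUI

struct MetalCIView: UIViewRepresentable {
    var image: CIImage

    func makeCoordinator() -> Renderer { Renderer(image: image) }

    func makeUIView(context: Context) -> MTKView {
        let view = MTKView(frame: .zero, device: context.coordinator.device)
        view.framebufferOnly = false // CIContext must be able to write to the drawable texture
        view.enableSetNeedsDisplay = true
        view.isPaused = true         // draw on demand, not on a timer
        view.delegate = context.coordinator
        return view
    }

    func updateUIView(_ uiView: MTKView, context: Context) {
        context.coordinator.image = image
        uiView.setNeedsDisplay()
    }

    class Renderer: NSObject, MTKViewDelegate {
        var image: CIImage
        let device = MTLCreateSystemDefaultDevice()!
        lazy var queue = device.makeCommandQueue()!
        lazy var ciContext = CIContext(mtlDevice: device)

        init(image: CIImage) { self.image = image }

        func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

        func draw(in view: MTKView) {
            guard let drawable = view.currentDrawable,
                  let buffer = queue.makeCommandBuffer() else { return }
            // Render the CIImage directly into the drawable's texture.
            let bounds = CGRect(origin: .zero, size: view.drawableSize)
            ciContext.render(image, to: drawable.texture, commandBuffer: buffer,
                             bounds: bounds, colorSpace: CGColorSpaceCreateDeviceRGB())
            buffer.present(drawable)
            buffer.commit()
        }
    }
}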
I have a simple test application and I want to pan an image inside its view. It will not pan or zoom and I can't see what's wrong with my code.
I have followed this tutorial but implemented it in code. I've made the image width the same as the height so I can pan without necessarily zooming.
Here is my code
import UIKit

class ViewController: UIViewController, UIScrollViewDelegate {
    let scrollView: UIScrollView = {
        let scrollView = UIScrollView()
        return scrollView
    }()

    let zoomImageView: UIImageView = {
        let imageView = UIImageView()
        imageView.isUserInteractionEnabled = true
        imageView.clipsToBounds = true
        imageView.contentMode = .scaleAspectFill
        imageView.image = #imageLiteral(resourceName: "lighthouse")
        return imageView
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .white

        let screenSize = UIScreen.main.bounds
        let screenHeight = screenSize.height

        scrollView.frame = CGRect(x: 0, y: 0, width: screenHeight, height: screenHeight)
        scrollView.delegate = self
        scrollView.minimumZoomScale = 1.0
        scrollView.maximumZoomScale = 3.0

        zoomImageView.frame = CGRect(x: 0, y: 0, width: screenHeight, height: screenHeight)

        scrollView.addSubview(self.zoomImageView)
        view.addSubview(scrollView)
    }

    func viewForZooming(in scrollView: UIScrollView) -> UIView? {
        return zoomImageView
    }
}
Search your code for the term contentSize. You don't see it, do you? But the most fundamental fact about how a scroll view works is this: a scroll view without a contentSize cannot scroll (i.e. "pan", as you put it). In particular, the content size must be larger than the scroll view's own bounds size along an axis (height or width, or both) for it to scroll along that axis. Otherwise, there is nothing to scroll.
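Applied to the code above, a sketch of a minimal fix (the factor of 2 is arbitrary; any content size larger than the scroll view's bounds will do):

override func viewDidLoad() {
    super.viewDidLoad()
    view.backgroundColor = .white

    let screenHeight = UIScreen.main.bounds.height

    scrollView.frame = CGRect(x: 0, y: 0, width: screenHeight, height: screenHeight)
    scrollView.delegate = self
    scrollView.minimumZoomScale = 1.0
    scrollView.maximumZoomScale = 3.0

    // Make the image larger than the scroll view and tell the scroll view
    // how big its content is - without this, there is nothing to pan.
    zoomImageView.frame = CGRect(x: 0, y: 0, width: screenHeight * 2, height: screenHeight * 2)
    scrollView.contentSize = zoomImageView.frame.size

    scrollView.addSubview(zoomImageView)
    view.addSubview(scrollView)
}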