GLKView in SwiftUI?

How can I use GLKView in SwiftUI? I'm using CIFilter but would like to apply filters through GLKit / OpenGL. Any ideas?
import SwiftUI
import CoreImage.CIFilterBuiltins // for CIFilter.gaussianBlur()

struct ContentView: View {
    @State private var image: Image?

    var body: some View {
        VStack {
            image?
                .resizable()
                .scaledToFit()
        }
        .onAppear(perform: loadImage)
    }

    func loadImage() {
        guard let inputImage = UIImage(named: "squirrel"),
              let ciImage = CIImage(image: inputImage) else {
            return
        }
        let context = CIContext()
        let blur = CIFilter.gaussianBlur()
        blur.inputImage = ciImage
        blur.radius = 20
        guard let outputImage = blur.outputImage else {
            return
        }
        // Crop back to the original extent; the blur pads the image outward.
        if let cgImg = context.createCGImage(outputImage, from: ciImage.extent) {
            let uiImg = UIImage(cgImage: cgImg)
            image = Image(uiImage: uiImg)
        }
    }
}

Here's a working GLKView in SwiftUI using UIViewControllerRepresentable.
A few things to keep in mind.
GLKit was deprecated with the release of iOS 12, nearly two years ago. While I hope Apple won't kill it anytime soon (way too many apps still use it), they recommend using Metal or an MTKView instead. Most of the technique here is still the way to go for SwiftUI.
I worked with SwiftUI hoping to make my next CoreImage app a "pure" SwiftUI app, until I had too many UIKit needs to bring in. I stopped working on this around Beta 6. The code works but is clearly not production ready. The repo for this is here.
I'm more comfortable working with models instead of putting code for things like using a CIFilter directly in my views. I'll assume you know how to create a view model and have it be an EnvironmentObject. If not, look at my code in the repo.
Your code references a SwiftUI Image view. I never found any documentation suggesting it uses the GPU (as a GLKView does), so you won't find anything like that in my code. If you are looking for real-time performance when changing filter attributes, I found this approach to work very well.
Starting with a GLKView, here's my code:
import GLKit

class ImageView: GLKView {
    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    public var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
    }

    override public func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            // Letterbox or pillarbox the draw frame so the image renders as scaleAspectFit.
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            // rgb() is a UIColor extension from the repo that returns 0-255 components.
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!) / 255.0, Float(rgb.1!) / 255.0, Float(rgb.2!) / 255.0, 0.0)
            glClear(0x00004000) // GL_COLOR_BUFFER_BIT
            // Set the blend mode to "source over" so that CI will use that.
            glEnable(0x0BE2) // GL_BLEND
            glBlendFunc(1, 0x0303) // GL_ONE, GL_ONE_MINUS_SRC_ALPHA
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
This is very old production code, taken from objc.io issue 21, dated February 2015! Of note: it encapsulates a CIContext, needs its own clear color defined before using its draw method, and renders an image as scaleAspectFit. If you try using this in UIKit, it'll likely work perfectly.
Next, a "wrapper" UIViewController:
class ImageViewVC: UIViewController {
    var model: Model!
    var imageView = ImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view = imageView
        NotificationCenter.default.addObserver(self, selector: #selector(updateImage), name: .updateImage, object: nil)
    }

    override func viewDidLayoutSubviews() {
        imageView.setNeedsDisplay()
    }

    override func traitCollectionDidChange(_ previousTraitCollection: UITraitCollection?) {
        if traitCollection.userInterfaceStyle == .light {
            imageView.clearColor = UIColor.white
        } else {
            imageView.clearColor = UIColor.black
        }
    }

    @objc func updateImage() {
        imageView.image = model.ciFinal
        imageView.setNeedsDisplay()
    }
}
I did this for a few reasons, pretty much adding up to the fact that I'm not a Combine expert.
First, note that the view controller cannot access the EnvironmentObject directly. That's a SwiftUI construct and UIKit doesn't know about it. I think an ObservableObject may work, but I never found the right way to do it; an untested sketch of what I had in mind is below.
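For what it's worth, this is roughly that route, assuming the model calls objectWillChange.send() whenever ciFinal changes (the bind method is my hypothetical name, not something from the repo):
import Combine

// Hypothetical alternative inside ImageViewVC, replacing the NotificationCenter observer:
private var cancellable: AnyCancellable?

func bind(to model: Model) {
    self.model = model
    cancellable = model.objectWillChange
        .receive(on: DispatchQueue.main)
        .sink { [weak self] _ in
            // Re-render whenever the model reports a change.
            self?.imageView.image = self?.model.ciFinal
            self?.imageView.setNeedsDisplay()
        }
}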
Second, note the use of NotificationCenter. I spent a week last year trying to get Combine to "just work" - particularly in the opposite direction, having a UIButton tap notify my model of a change - and found that this is really the easiest way. It's even easier than using delegate methods.
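One thing the code above quietly assumes: .updateImage is not a system notification, so it needs a Notification.Name extension somewhere (mine lives in the repo; this is the one-liner it amounts to):
extension Notification.Name {
    static let updateImage = Notification.Name("updateImage")
}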
Next, exposing the VC as a representable:
struct GLKViewerVC: UIViewControllerRepresentable {
    @EnvironmentObject var model: Model
    let glkViewVC = ImageViewVC()

    func makeUIViewController(context: Context) -> ImageViewVC {
        return glkViewVC
    }

    func updateUIViewController(_ uiViewController: ImageViewVC, context: Context) {
        glkViewVC.model = model
    }
}
The only thing of note is that this is where I set the model variable in the VC. I'm sure it's possible to get rid of the VC entirely and use a UIViewRepresentable instead - a sketch of that follows - but I'm more comfortable with this setup.
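An untested sketch of that VC-free version (GLKViewer is my hypothetical name; ImageView is the GLKView subclass from above):
struct GLKViewer: UIViewRepresentable {
    @EnvironmentObject var model: Model

    func makeUIView(context: Context) -> ImageView {
        let view = ImageView()
        view.clearColor = UIColor.white
        return view
    }

    func updateUIView(_ uiView: ImageView, context: Context) {
        // Push the latest filtered image into the GLKView whenever SwiftUI updates.
        uiView.image = model.ciFinal
        uiView.setNeedsDisplay()
    }
}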
Next, my model:
import Combine

class Model: ObservableObject {
    var objectWillChange = PassthroughSubject<Void, Never>()

    var uiOriginal: UIImage?
    var ciInput: CIImage?
    var ciFinal: CIImage?

    init() {
        uiOriginal = UIImage(named: "vermont.jpg")
        // resizeToBoundingSquare and rotateImage are helper extensions from the repo.
        uiOriginal = uiOriginal!.resizeToBoundingSquare(640)
        ciInput = CIImage(image: uiOriginal!)?.rotateImage()
        let filter = CIFilter(name: "CIPhotoEffectNoir")
        filter?.setValue(ciInput, forKey: "inputImage")
        ciFinal = filter?.outputImage
    }
}
Nothing much to see here, but understand that in SceneDelegate, where you instantiate this model, the init will run and set up the filtered image. A sketch of that wiring follows.
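For completeness, roughly what the SceneDelegate wiring looks like (a sketch; the repo's version may differ in details):
func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
    let model = Model() // init runs here, producing the filtered CIImage
    let contentView = ContentView().environmentObject(model)
    if let windowScene = scene as? UIWindowScene {
        let window = UIWindow(windowScene: windowScene)
        window.rootViewController = UIHostingController(rootView: contentView)
        self.window = window
        window.makeKeyAndVisible()
    }
}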
Finally, ContentView:
struct ContentView: View {
    @EnvironmentObject var model: Model

    var body: some View {
        VStack {
            GLKViewerVC()
            Button(action: {
                self.showImage()
            }) {
                VStack {
                    Image(systemName: "tv").font(Font.body.weight(.bold))
                    Text("Show image").font(Font.body.weight(.bold))
                }
                .frame(width: 80, height: 80)
            }
        }
    }

    func showImage() {
        NotificationCenter.default.post(name: .updateImage, object: nil, userInfo: nil)
    }
}
SceneDelegate instantiates the view model, which now has the altered CIImage, and the button beneath the GLKView (an instance of GLKViewerVC, which is just a SwiftUI View) sends a notification to update the image.

Apple's WWDC 2022 contained a tutorial/video entitled "Display EDR Content with Core Image, Metal, and SwiftUI" which describes how to blend Core Image with Metal and SwiftUI. It points to some new sample code entitled "Generating an Animation with a Core Image Render Destination" (here).
While it doesn't address your question about using GLKView, it does provide some elegant, clean, Apple-sanctioned code for using Metal within SwiftUI.
This sample project is very CoreImage-centric (which matches your background with CIFilter), but I wish Apple would post more sample code showing Metal integrated with SwiftUI. The rough shape of the approach is sketched below.
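A heavily abridged sketch (my own condensation, not Apple's sample code) of hosting an MTKView in SwiftUI and letting a CIContext render into its drawable:
import SwiftUI
import MetalKit
import CoreImage

struct MetalCIView: UIViewRepresentable {
    let image: CIImage

    func makeCoordinator() -> Renderer { Renderer() }

    func makeUIView(context: Context) -> MTKView {
        let view = MTKView(frame: .zero, device: context.coordinator.device)
        view.framebufferOnly = false   // Core Image must be able to write to the drawable
        view.isPaused = true           // draw on demand only
        view.enableSetNeedsDisplay = true
        view.delegate = context.coordinator
        return view
    }

    func updateUIView(_ uiView: MTKView, context: Context) {
        context.coordinator.image = image
        uiView.setNeedsDisplay()
    }

    class Renderer: NSObject, MTKViewDelegate {
        var image: CIImage?
        let device = MTLCreateSystemDefaultDevice()!
        lazy var queue = device.makeCommandQueue()!
        lazy var ciContext = CIContext(mtlDevice: device)

        func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

        func draw(in view: MTKView) {
            guard let image = image,
                  let drawable = view.currentDrawable,
                  let buffer = queue.makeCommandBuffer() else { return }
            // Production code would also handle scaling and vertical orientation here.
            ciContext.render(image,
                             to: drawable.texture,
                             commandBuffer: buffer,
                             bounds: CGRect(origin: .zero, size: view.drawableSize),
                             colorSpace: CGColorSpaceCreateDeviceRGB())
            buffer.present(drawable)
            buffer.commit()
        }
    }
}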

Related

UIHostingController doesn't update its size properly

I am using UIHostingController in one of the apps I'm working on and I have a problem. The embedded SwiftUI view changes its height dynamically, but I can't figure out how to propagate that change to the view it is embedded in.
The problem doesn't seem to be in my implementation, as even the most basic one has this issue.
The UIViewController is written like this:
class ViewController: UIViewController {
    var hostingController = UIHostingController(rootView: GrowingView())

    override func viewDidLoad() {
        super.viewDidLoad()
        prepareHostingController()
    }

    func prepareHostingController() {
        view.addSubview(hostingController.view)
        hostingController.view.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            hostingController.view.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            hostingController.view.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            hostingController.view.topAnchor.constraint(equalTo: view.topAnchor, constant: 100)
        ])
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
    }
}
and the SwiftUI View is like this:
struct GrowingView: View {
    @State var height: CGFloat = 100

    var body: some View {
        Button(action: tap) {
            Rectangle()
                .foregroundColor(.red)
                .frame(height: height)
        }
    }

    func tap() {
        height = 200
    }
}
Is there anything obvious I'm missing, or is this just behaviour of UIHostingController that I can do nothing about? It seems like the latter shouldn't be the case.

Why doesn't my AVCaptureVideoPreviewLayer render?

I'm trying to implement a camera preview view in SwiftUI, for which I have the following code:
import SwiftUI
import AVFoundation
struct CameraPreview: UIViewRepresentable {
    let session: AVCaptureSession

    func makeUIView(context: Context) -> UIView {
        let view = UIView()
        view.backgroundColor = .gray
        let videoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
        videoPreviewLayer.frame = view.bounds
        videoPreviewLayer.videoGravity = .resizeAspectFill
        videoPreviewLayer.connection?.videoOrientation = .portrait
        view.layer.addSublayer(videoPreviewLayer)
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        for layer in uiView.layer.sublayers ?? [] {
            layer.frame = uiView.bounds
        }
    }
}
However, I do see the gray background color that I set on the view, but it never starts showing the camera output. I've set up an AVCaptureVideoDataOutputSampleBufferDelegate class and I can see the frames being captured and processed, yet for some reason it does not start rendering the output.
I have this other snippet that DOES render the output, but it does so by making the preview layer the view's backing layer, which is what I want to avoid. Here's the code that works:
struct CameraPreview: UIViewRepresentable {
    let session: AVCaptureSession

    func makeUIView(context: Context) -> UIView {
        let view = VideoView()
        view.backgroundColor = .gray
        view.previewLayer.session = session
        view.previewLayer.videoGravity = .resizeAspectFill
        view.previewLayer.connection?.videoOrientation = .portrait
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        for layer in uiView.layer.sublayers ?? [] {
            layer.frame = uiView.bounds
        }
    }

    class VideoView: UIView {
        override class var layerClass: AnyClass {
            AVCaptureVideoPreviewLayer.self
        }
        var previewLayer: AVCaptureVideoPreviewLayer {
            layer as! AVCaptureVideoPreviewLayer
        }
    }
}
Some examples I found suggest I should be able to show the preview the way I do in the first example. I've tried initializing the session with inputs both before and after the preview view is created, with the same result. Am I missing anything? Am I failing to retain the layer, or is there a special configuration for the session to look out for? To make it work I simply swap the implementations, and the one with the inner class does render.
Any help is really appreciated.
Some resources:
https://nsscreencast.com/episodes/296-camera-capture-preview-layer-sample-buffer
https://www.appcoda.com/avfoundation-swift-guide/
https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture

How to present a full screen AVPlayerViewController in SwiftUI

In SwiftUI, it seems like the best way to set up an AVPlayerViewController is to use UIViewControllerRepresentable, somewhat like this...
struct PlayerViewController: UIViewControllerRepresentable {
    var videoURL: URL?

    private var player: AVPlayer {
        return AVPlayer(url: videoURL!)
    }

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.modalPresentationStyle = .fullScreen
        controller.player = player
        controller.player?.play()
        return controller
    }

    func updateUIViewController(_ playerController: AVPlayerViewController, context: Context) {
    }
}
However, from the documentation, it seems the only way to show this controller full screen is to present it using a sheet.
.sheet(isPresented: $showingDetail) {
    PlayerViewController(videoURL: URL(string: "..."))
        .edgesIgnoringSafeArea(.all)
}
This doesn't give you a full-screen video with a dismiss button, but a modal sheet that can be swiped away instead.
In standard, non-SwiftUI code, the natural approach would be to present this controller...
let controller = PlayerViewController(videoURL: URL(string: "..."))
self.present(controller, animated: true)
...but SwiftUI doesn't have a self.present. What would be the best way to present a full-screen video in SwiftUI?
Instead of a sheet I would use a ZStack (with a custom transition if needed), like below:
ZStack {
    // ... your other content below
    if showingDetail { // covers full screen above all
        PlayerViewController(videoURL: URL(string: "..."))
            .edgesIgnoringSafeArea(.all)
            //.transition(AnyTransition.move(edge: .bottom).animation(.default)) // if needed
    }
}
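For completeness, showingDetail is just ordinary state driving the ZStack, something like this (the container name is mine):
struct PlayerContainerView: View {
    @State private var showingDetail = false

    var body: some View {
        ZStack {
            Button("Play") {
                withAnimation { self.showingDetail = true }
            }
            if showingDetail { // covers full screen above all
                PlayerViewController(videoURL: URL(string: "..."))
                    .edgesIgnoringSafeArea(.all)
            }
        }
    }
}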

Trouble making a custom UIView aspect scale fit/fill with SwiftUI

There is no public API in SwiftUI to respond to the resizable modifier of the View protocol. Only Image in SwiftUI works with .resizable(). A custom UIView, such as a UIView for GIFs, is not resizable right now.
I use SDWebImageSwiftUI AnimatedImage, which is backed by the UIKit view SDAnimatedImageView. AnimatedImage does not respond to .resizable(), .scaledToFit(), .aspectRatio(contentMode: .fit), etc. WebImage is backed by a SwiftUI Image, so it works fine.
import SwiftUI
import SDWebImageSwiftUI
struct ContentView: View {
    let url = URL(string: "https://media.giphy.com/media/H62DGtBRwgbrxWXh6t/giphy.gif")!

    var body: some View {
        VStack {
            AnimatedImage(url: url)
                .scaledToFit()
                .frame(width: 100, height: 100)
            WebImage(url: url)
                .scaledToFit()
                .frame(width: 100, height: 100)
        }
    }
}
Not sure if it's an Apple bug. I expect a custom view like SDWebImageSwiftUI's AnimatedImage to respond to SwiftUI size-related modifiers like .scaledToFit().
Related issue: https://github.com/SDWebImage/SDWebImageSwiftUI/issues/3
SwiftUI uses the compression resistance priority and the content hugging priority to decide what resizing is possible.
If you want to resize a view below its intrinsic content size, you need to reduce the compression resistance priority.
Example:
func makeUIView(context: Context) -> UIView {
    let imageView = UIImageView(image: UIImage(named: "yourImage")!)
    imageView.setContentCompressionResistancePriority(.defaultLow, for: .horizontal)
    imageView.setContentCompressionResistancePriority(.defaultLow, for: .vertical)
    return imageView
}
This will allow you to set .frame(width:height:) to any size you want.
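Wrapped in a representable (the names here are mine, for illustration), the SwiftUI side can then shrink the view freely:
struct ResizableImage: UIViewRepresentable {
    func makeUIView(context: Context) -> UIImageView {
        let imageView = UIImageView(image: UIImage(named: "yourImage")!)
        // Without these, SwiftUI refuses to size the view below its intrinsic content size.
        imageView.setContentCompressionResistancePriority(.defaultLow, for: .horizontal)
        imageView.setContentCompressionResistancePriority(.defaultLow, for: .vertical)
        return imageView
    }

    func updateUIView(_ uiView: UIImageView, context: Context) {}
}

struct DemoView: View {
    var body: some View {
        ResizableImage()
            .frame(width: 100, height: 100) // smaller than the image's intrinsic size
    }
}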
Finally found a solution: make a UIView wrapper around the SDAnimatedImageView or UIImageView, then override layoutSubviews() to set the frame of the subview.
Here is my full code.
SDWebImageSwiftUI has also released a new version which uses a wrapper to solve this problem.
class ImageModel: ObservableObject {
    @Published var url: URL?
    @Published var contentMode: UIView.ContentMode = .scaleAspectFill
}

struct WebImage: UIViewRepresentable {
    @ObservedObject var imageModel = ImageModel()

    func makeUIView(context: UIViewRepresentableContext<WebImage>) -> ImageView {
        let uiView = ImageView(imageModel: imageModel)
        return uiView
    }

    func updateUIView(_ uiView: ImageView, context: UIViewRepresentableContext<WebImage>) {
        uiView.imageView.sd_setImage(with: imageModel.url)
        uiView.imageView.contentMode = imageModel.contentMode
    }

    func url(_ url: URL?) -> Self {
        imageModel.url = url
        return self
    }

    func scaledToFit() -> Self {
        imageModel.contentMode = .scaleAspectFit
        return self
    }

    func scaledToFill() -> Self {
        imageModel.contentMode = .scaleAspectFill
        return self
    }
}

class ImageView: UIView {
    let imageView = UIImageView()

    init(imageModel: ImageModel) {
        super.init(frame: .zero)
        addSubview(imageView)
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        imageView.frame = bounds
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
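Usage then looks like this - note that the custom url/scaledToFit modifiers have to come before the standard SwiftUI ones, because they return the WebImage itself:
WebImage()
    .url(URL(string: "https://media.giphy.com/media/H62DGtBRwgbrxWXh6t/giphy.gif"))
    .scaledToFit()
    .frame(width: 100, height: 100)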

How can I launch a SwiftUI View without navigation backtracking?

I want to launch a View as a standalone View without the navigation hierarchy. The reason that I don't want to use a NavigationButton is that I don't want the user to return to the calling form.
I have tried the following approach, which is similar to how the first view is launched in SceneDelegate, but nothing happens:
let appDelegate = UIApplication.shared.delegate as! AppDelegate
let window = appDelegate.getNewWindow()
window.rootViewController = UIHostingController(rootView: NewView())
window.makeKeyAndVisible()
I have a legitimate reason not to use the navigation UI; I'm leaving the explanation out to keep this short. I'm avoiding Storyboards to keep this as simple as possible.
Thank you for any solution suggestions.
This is how I accomplished loading a new root View.
I added the following to the AppDelegate code:
@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?
    ...

    func loadNewRootSwiftUIView(rootViewController: UIViewController) {
        let window = UIWindow(frame: UIScreen.main.bounds)
        window.rootViewController = rootViewController
        self.window = window
        window.makeKeyAndVisible()
    }
}
And placed the following in my form:
struct LaunchView: View {
    var body: some View {
        VStack {
            Button(
                action: {
                    LaunchLoginView()
                },
                label: {
                    Text("Login")
                }
            )
        }
    }
}

func LaunchLoginView() {
    let appDelegate = UIApplication.shared.delegate as! AppDelegate
    let vController = UIHostingController(rootView: LoginView())
    appDelegate.loadNewRootSwiftUIView(rootViewController: vController)
}