UIVisualEffectView does not work in iOS 14 Widget - swiftui

I put an image in my widget and it fills the widget view.
Then I added a small view that should show a blurred version of the image behind it
(the image and the small view are in a ZStack).
I tried a few approaches (something like Option 2 from Is there a method to blur a background in SwiftUI?)
but the result looks like this:
I think the yellow box means 'VisualEffectView doesn't work in WidgetKit.'
So I wonder: is there another technique to show a small view with a blurred version of the image behind it?

I think that yellow box means 'VisualEffectView doesn't work in
WidgetKit.'
Yes, specifically because you can't use UIViewRepresentable in Widgets. See:
Display UIViewRepresentable in WidgetKit
It means that the only option is to use SwiftUI code. A possible solution is here (Option 1):
Is there a method to blur a background in SwiftUI?
struct WidgetEntryView: View {
    var entry: Provider.Entry

    var body: some View {
        ZStack {
            Image("testImage")
                .blur(radius: 10)
        }
        .edgesIgnoringSafeArea(.all)
    }
}

I found a workaround:
first, place the original picture;
then place a copy of the picture with a Gaussian filter applied on top of it;
and add clipShape to the filtered image.
The object passed to clipShape needs to conform to the Shape protocol, like:
struct MyShape: Shape {
    func path(in rect: CGRect) -> Path {
        RoundedRectangle(cornerRadius: 10.0).path(...)
    }
}
I found the Gaussian filter code at
https://gist.github.com/Zedd0202/8d3e567161d0c92e7d585bb74e926413#file-applyblur_usingclamp-swift
Pseudocode:
ZStack {
    Image("image")
    Image("image") // blurred copy
        .clipShape(YourShape())
        .frame(...)
        .padding(...)
}
---
extension UIImage {
    func applyBlur_usingClamp(radius: CGFloat) -> UIImage {
        let context = CIContext()
        guard let ciImage = CIImage(image: self),
              let clampFilter = CIFilter(name: "CIAffineClamp"),
              let blurFilter = CIFilter(name: "CIGaussianBlur") else {
            return self
        }
        clampFilter.setValue(ciImage, forKey: kCIInputImageKey)
        blurFilter.setValue(clampFilter.outputImage, forKey: kCIInputImageKey)
        blurFilter.setValue(radius, forKey: kCIInputRadiusKey)
        guard let output = blurFilter.outputImage,
              let cgimg = context.createCGImage(output, from: ciImage.extent) else {
            return self
        }
        return UIImage(cgImage: cgimg)
    }
}
---
struct YourShape: Shape {
    func path(in rect: CGRect) -> Path {
        RoundedRectangle(cornerRadius: 10.0)
            .path(in: CGRect(...))
    }
}
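Putting the pieces together, a rough sketch of the whole workaround might look like this (the view name and "testImage" asset are illustrative; it assumes the applyBlur_usingClamp extension above). Note the clipped copy will not be pixel-aligned with the background in this simple sketch; the original answer tunes frame/padding values to the widget size.

```swift
import SwiftUI

// Illustrative sketch only: BlurBoxView and "testImage" are assumed names.
struct BlurBoxView: View {
    // Blur once up front; calling applyBlur_usingClamp on every render
    // would be wasteful (and risky inside a memory-limited widget).
    let original = UIImage(named: "testImage")!

    var body: some View {
        let blurred = original.applyBlur_usingClamp(radius: 100)
        return ZStack {
            Image(uiImage: original)
                .resizable()
                .aspectRatio(contentMode: .fill)
            Image(uiImage: blurred)
                .resizable()
                .aspectRatio(contentMode: .fill)
                .clipShape(RoundedRectangle(cornerRadius: 10))
                .padding(40) // inset the "small blurred box" from the edges
        }
    }
}
```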
then you will get something like this
Update
It looks like widgets have some kind of memory limit.
If you run this code on a real device it may crash (most of the time the simulator works fine, in my case).
You can resolve this by adjusting the radius value passed to applyBlur_usingClamp (250 crashed, 100 is fine for me).

Related

How to apply a blur to a view with a mask?

I need to implement an onboarding like in the image below. I can't apply the blur while excluding the icon. I know how to do it using UIViewRepresentable, but I want to reach the goal using SwiftUI 2.0 (min iOS 14.0). Is there a way to create such a masked blur without UIViewRepresentable?
UPD. I have a view hierarchy. I need to blur it (a Gaussian blur effect with radius 5), cover it with a black tint with opacity 0.3, and apply a mask with a hole. Everything is OK, but the blur modifier also applies the effect to the element in the hole. It must not be blurred (like the "Flag" icon in the screenshot). I can't separate this element from the view hierarchy. This onboarding view modifier must be reusable across the app.
There are two ways of doing so.
Either use the blur just on the background:
struct ContentView: View {
    var body: some View {
        VStack {
            Icon()
            Spacer()
        }.background(Image("cat").blur(radius: 2.5))
    }
}
Or, when you have an entire view in the background, you can use a ZStack and just blur the view you want blurred. Make sure the view/element you don't want blurred is on top. Like so:
struct ContentView2: View {
    var body: some View {
        ZStack {
            Image("cat").blur(radius: 2.5)
            VStack {
                Icon()
                Spacer()
            }
        }
    }
}
Both times I used an entire view for the icon, which is probably a little over-engineered, but it gives you the idea:
struct Icon: View {
    var body: some View {
        Image(systemName: "pencil.circle.fill")
            .resizable()
            .frame(width: 50, height: 50)
            .padding()
            .foregroundColor(.red)
    }
}
Both giving you this as result:
I found a solution to my problem. I used some code from other answers and wrote my own OnboardingViewModifier. I used the content from the ViewModifier body function twice in a ZStack: the first one is the original view, the second one is blurred and masked. It gave me the needed result.
extension Path {
    var reversed: Path {
        let reversedCGPath = UIBezierPath(cgPath: cgPath)
            .reversing()
            .cgPath
        return Path(reversedCGPath)
    }
}

struct ShapeWithHole: Shape {
    let hole: CGRect
    let cornerRadius: CGFloat

    func path(in rect: CGRect) -> Path {
        var path = Rectangle().path(in: rect)
        path.addPath(RoundedRectangle(cornerRadius: cornerRadius).path(in: hole).reversed)
        return path
    }
}

struct OnboardingViewModifier<DescriptionView>: ViewModifier where DescriptionView: View {
    let hole: CGRect
    let isPresented: Bool
    @ViewBuilder let descriptionOverlay: () -> DescriptionView

    func body(content: Content) -> some View {
        content
            .disabled(isPresented)
            .overlay(overlay(content))
    }

    @ViewBuilder
    func overlay(_ content: Content) -> some View {
        if isPresented {
            ZStack {
                content
                    .blur(radius: 5)
                    .disabled(isPresented)
                Color.black.opacity(0.3)
                descriptionOverlay()
            }
            .compositingGroup()
            .mask(ShapeWithHole(hole: hole, cornerRadius: 25))
            .ignoresSafeArea(.all)
        }
    }
}

extension View {
    func onboardingWithHole<DescriptionView>(
        isPresented: Bool,
        hole: CGRect,
        @ViewBuilder descriptionOverlay: @escaping () -> DescriptionView) -> some View where DescriptionView: View {
        modifier(OnboardingViewModifier(hole: hole, isPresented: isPresented, descriptionOverlay: descriptionOverlay))
    }
}
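For reference, a hypothetical usage of the modifier (the view, state variable, and hole rect below are illustrative, not from the original answer):

```swift
import SwiftUI

// Illustrative usage sketch of onboardingWithHole.
struct HomeView: View {
    @State private var showOnboarding = true  // assumed state

    var body: some View {
        Text("App content")
            .onboardingWithHole(
                isPresented: showOnboarding,
                hole: CGRect(x: 40, y: 120, width: 60, height: 60)  // the un-blurred area
            ) {
                Text("Tap the flag to get started")
                    .foregroundColor(.white)
            }
    }
}
```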

How do I make SwiftUI show entire vertical content on iPad?

I have a SwiftUI application that was laid out using an iPhone. Now when I run it on an iPad, it appears to fill the entire width of the screen, but much of the view content is cutoff on the top and bottom. The top level view contains a container (which can hold any number of different views, based on navigation) and a splash view, which times out after the animation. Is there a way to tell it to honor the size required to fit all of the vertical views, and auto-size the width?
This is the top level view. I can post more, but that is a lot of code.
import SwiftUI

struct ContentView: View {
    @State var showSplash = true

    var body: some View {
        ZStack() {
            ContainerView()
            SplashView()
                .opacity(showSplash ? 1 : 0)
                .onAppear {
                    DispatchQueue.main.asyncAfter(deadline: .now() + 3.5) {
                        withAnimation() {
                            self.showSplash = false
                            splashDidFinish()
                        }
                    }
                }
        }.onAppear {
            NSLog(".onAppear()")
        }
    }

    func splashDidFinish() {
        NotificationCenter.default.post(name: NSNotification.Name(rawValue: "checkApplicationReady"), object: nil)
    }
}
I was able to fix it using:
.aspectRatio(contentMode: .fit)
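A sketch of where that modifier might sit, assuming it is applied to the top-level ZStack from the question:

```swift
import SwiftUI

// Sketch: the question's ContentView with the fix applied at the top level.
struct FittedContentView: View {
    var body: some View {
        ZStack {
            ContainerView()
            SplashView()
        }
        .aspectRatio(contentMode: .fit)  // fit all vertical content; width auto-sizes
    }
}
```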

How to present a full screen AVPlayerViewController in SwiftUI

In SwiftUI, it seems like the best way to set up an AVPlayerViewController is to use the UIViewControllerRepresentable in a fashion somewhat like this...
struct PlayerViewController: UIViewControllerRepresentable {
    var videoURL: URL?

    private var player: AVPlayer {
        return AVPlayer(url: videoURL!)
    }

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.modalPresentationStyle = .fullScreen
        controller.player = player
        controller.player?.play()
        return controller
    }

    func updateUIViewController(_ playerController: AVPlayerViewController, context: Context) {
    }
}
However, from the documentation it seems that the only way to show this controller full screen is to present it using a sheet.
.sheet(isPresented: $showingDetail) {
    PlayerViewController(videoURL: URL(string: "..."))
        .edgesIgnoringSafeArea(.all)
}
This doesn't give you a full-screen video with a dismiss button but a sheet modal which can be swiped away instead.
In standard non-SwiftUI Swift, it would seem like the best way would be to present this controller...
let controller = PlayerViewController(videoURL: URL(string: "..."))
self.present(controller, animated: true)
...but SwiftUI doesn't have a self.present as part of it. What would be the best way to present a full-screen video in SwiftUI?
Instead of a sheet I would use a solution with ZStack (with a custom transition if needed), like below:
ZStack {
    // ... your other content below
    if showingDetail { // covers full screen above all
        PlayerViewController(videoURL: URL(string: "..."))
            .edgesIgnoringSafeArea(.all)
            //.transition(AnyTransition.move(edge: .bottom).animation(.default)) // if needed
    }
}
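Another option on iOS 14+ is fullScreenCover, which presents over the entire screen and, unlike sheet, cannot be swiped away by default. A minimal sketch (the launcher view and button label are illustrative):

```swift
import SwiftUI

// Illustrative sketch: present the question's PlayerViewController
// with iOS 14's fullScreenCover instead of a sheet.
struct VideoLauncherView: View {
    @State private var showingDetail = false

    var body: some View {
        Button("Play video") { showingDetail = true }
            .fullScreenCover(isPresented: $showingDetail) {
                PlayerViewController(videoURL: URL(string: "..."))
                    .edgesIgnoringSafeArea(.all)
            }
    }
}
```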

SwiftUI - animating a new image inside the current view

I have a View where I use a Picture(image) subview to display an image, which can come in different height and width formats.
The reference to the image is extracted from an array, which allows me to display different images in my View by varying the reference. SwiftUI rearranges the content of the view for each new image.
I would like an animation on this image, say a scale effect, when the image is displayed.
1) I need a first .animation(nil) to avoid animating the former image (otherwise I get an ugly fade out and aspect-ratio deformation). This seems like the right fix.
2) But then I have a problem with the scaleEffect modifier (even if I set scale = 1, where it should do nothing):
the animation moves from image 1 to image 2 by imposing that the top-left corner of image 2 starts from the position of the top-left corner of image 1, which, with different widths and heights, causes an unwanted translation of the image center.
This is reproduced in the code below, where for demo purposes I'm using system images (which are not prone to bug 1)).
How can I avoid that?
3) In the demo code below, I trigger the new image with a button, which allows me to use an action to handle the "scale" modification and achieve the desired effect explicitly. However, in my real code the image modification is triggered by a change in another view.
SwiftUI knows that, hence I can use an implicit .animation modifier.
However, I can't figure out how to impose a reset of "scale" for any new image and perform my desired effect.
If I use onAppear(my code), it only works for the first image displayed, and not the following ones.
In the real code, I have a Picture(image) view, and Picture(image.animation()) does not compile.
Any idea how to achieve the action from the Button in the code below with an implicit animation?
Thanks
import SwiftUI

let portrait = Image(systemName: "square.fill")
let landscape = Image(systemName: "square.fill")

struct ContentView: View {
    @State var modified = false
    @State var scale: CGFloat = 1

    var body: some View {
        return VStack(alignment: .center) {
            Pictureclip(bool: $modified)
                .animation(nil)
                .scaleEffect(scale)
                .animation(.easeInOut(duration: 1))
            Button(action: {
                self.modified.toggle()
                self.scale = 1.1
                DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
                    self.scale = 1
                }
            }) {
                Text("Tap here")
                    .animation(.linear)
            }
        }
    }
}

struct Pictureclip: View {
    @Binding var bool: Bool

    var body: some View {
        if bool == true {
            return portrait
                .resizable()
                .frame(width: 100, height: 150)
                .foregroundColor(.green)
        } else {
            return landscape
                .resizable()
                .frame(width: 150, height: 100)
                .foregroundColor(.red)
        }
    }
}
I have a semi-answer to my question, namely points 1 & 2 (here with reference to two JPEG images in the asset catalog):
import SwiftUI

let portrait = Image("head")
let landscape = Image("sea")

struct ContentView: View {
    @State var modified = false
    @State var scale: CGFloat = 0.95

    var body: some View {
        VStack() {
            GeometryReader { geo in
                VStack {
                    Picture(bool: self.modified)
                        .frame(width: geo.size.width * self.scale)
                }
            }
            Spacer()
            Button(action: {
                self.scale = 0.95
                self.modified.toggle()
                withAnimation(.easeInOut(duration: 0.5)) {
                    self.scale = 1
                }
            }) {
                Text("Tap here")
            }
        }
    }
}

struct Picture: View {
    var bool: Bool

    var body: some View {
        if bool == true {
            return portrait
                .resizable().aspectRatio(contentMode: .fit)
                .padding(.all, 6.0)
        } else {
            return landscape
                .resizable().aspectRatio(contentMode: .fit)
                .padding(.all, 6.0)
        }
    }
}
This solution enables scaling without distorting the aspect ratio of the new image during the animation. But it does not work in code where the image update is triggered in another view. I guess I have to restructure my code, either to solve my problem or to expose it more clearly.
Edit: a quick-and-dirty solution is to put the triggering code (here the action code in the button) in the other view. Namely, put in view B the code that animates view A, with a state variable passed to it (here, "scale"). I'm sure there are cleaner ways, but at least this works.
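For the externally triggered case, iOS 14's onChange modifier may help: the animating view can watch the state that drives the image and reset the scale itself. A sketch (AnimatedPicture is an illustrative name; Picture is the view from the code above):

```swift
import SwiftUI

// Sketch: re-run the scale animation whenever the image selection
// changes, even when the change originates in another view.
struct AnimatedPicture: View {
    var bool: Bool  // driven by some other view
    @State private var scale: CGFloat = 0.95

    var body: some View {
        Picture(bool: bool)
            .scaleEffect(scale)
            .onChange(of: bool) { _ in
                scale = 0.95  // reset without animation
                withAnimation(.easeInOut(duration: 0.5)) {
                    scale = 1
                }
            }
    }
}
```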
I am not sure about it, but maybe it can be helpful for you.
Use a custom Binding. I use it like this:
let binding = Binding<String>(get: {
    self.storage
}, set: { newValue in
    self.textOfPrimeNumber = ""
    self.storage = newValue
    let _ = primeFactorization(n: Int(self.storage)!, k: 2, changeable: &self.textOfPrimeNumber)
})

GLKView in SwiftUI?

How can I use GLKView in SwiftUI? I'm using CIFilter but would like to apply filters through GLKit / OpenGL. Any ideas?
struct ContentView: View {
    @State private var image: Image?

    var body: some View {
        VStack {
            image?
                .resizable()
                .scaledToFit()
        }
        .onAppear(perform: loadImage)
    }

    func loadImage() {
        guard let inputImage = UIImage(named: "squirrel") else {
            return
        }
        let ciImage = CIImage(image: inputImage)
        let context = CIContext()
        let blur = CIFilter.gaussianBlur()
        blur.inputImage = ciImage
        blur.radius = 20
        guard let outputImage = blur.outputImage else {
            return
        }
        if let cgImg = context.createCGImage(outputImage, from: ciImage!.extent) {
            let uiImg = UIImage(cgImage: cgImg)
            image = Image(uiImage: uiImg)
        }
    }
}
Here's a working GLKView in SwiftUI using UIViewControllerRepresentable.
A few things to keep in mind.
GLKit was deprecated with the release of iOS 12, nearly 2 years ago. While I hope Apple won't kill it anytime soon (way too many apps still use it), they recommend using Metal or an MTKView instead. Most of the technique here is still the way to go for SwiftUI.
I worked with SwiftUI in hopes of making my next CoreImage app be a "pure" SwiftUI app until I had too many UIKit needs to bring in. I stopped working on this around Beta 6. The code works but is clearly not production ready. The repo for this is here.
I'm more comfortable working with models instead of putting code for things like using a CIFilter directly in my views. I'll assume you know how to create a view model and have it be an EnvironmentObject. If not look at my code in the repo.
Your code references a SwiftUI Image view - I never found any documentation that suggests it uses the GPU (as a GLKView does) so you won't find anything like that in my code. If you are looking for real-time performance when changing attributes, I found this to work very well.
Starting with a GLKView, here's my code:
class ImageView: GLKView {
    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    public var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }
    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
    }

    override public func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            rgb = (0, 0, 0)
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!)/256.0, Float(rgb.1!)/256.0, Float(rgb.2!)/256.0, 0.0)
            glClear(0x00004000)
            // set the blend mode to "source over" so that CI will use that
            glEnable(0x0BE2)
            glBlendFunc(1, 0x0303)
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
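The aspect-fit math inside draw(_:) can be pulled out into a pure helper, which makes it easy to unit test (a sketch; aspectFitRect is not part of the original code):

```swift
import Foundation

// Compute the letterboxed frame that draw(_:) derives inline:
// scale the image to fit `bounds` while preserving aspect ratio,
// centering along the constrained axis.
func aspectFitRect(image: CGSize, inside bounds: CGSize) -> CGRect {
    var frame = CGRect(origin: .zero, size: bounds)
    let imageAR = image.width / image.height
    let viewAR = bounds.width / bounds.height
    if imageAR > viewAR {
        // image is proportionally wider: shrink height and center vertically
        frame.origin.y += (frame.height - frame.width / imageAR) / 2
        frame.size.height = frame.width / imageAR
    } else {
        // image is proportionally taller: shrink width and center horizontally
        frame.origin.x += (frame.width - frame.height * imageAR) / 2
        frame.size.width = frame.height * imageAR
    }
    return frame
}
```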
This is very old production code, taken from objc.io issue 21 dated February 2015! Of note is that it encapsulates a CIContext, needs its own clear color defined before using its draw method, and renders the image as scaleAspectFit. If you try using this in UIKit, it'll likely work perfectly.
Next, a "wrapper" UIViewController:
class ImageViewVC: UIViewController {
    var model: Model!
    var imageView = ImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view = imageView
        NotificationCenter.default.addObserver(self, selector: #selector(updateImage), name: .updateImage, object: nil)
    }

    override func viewDidLayoutSubviews() {
        imageView.setNeedsDisplay()
    }

    override func traitCollectionDidChange(_ previousTraitCollection: UITraitCollection?) {
        if traitCollection.userInterfaceStyle == .light {
            imageView.clearColor = UIColor.white
        } else {
            imageView.clearColor = UIColor.black
        }
    }

    @objc func updateImage() {
        imageView.image = model.ciFinal
        imageView.setNeedsDisplay()
    }
}
I did this for a few reasons, pretty much adding up to the fact that I'm not a Combine expert.
First, note that the view model (model) cannot access the EnvironmentObject directly. That's a SwiftUI object and UIKit doesn't know about it. I think an ObservableObject *may* work, but I never found the right way to do it.
Second, note the use of NotificationCenter. I spent a week last year trying to get Combine to "just work" - particularly in the opposite direction of having a UIButton tap notify my model of a change - and found that this is really the easiest way. It's even easier than using delegate methods.
Next, exposing the VC as a representable:
struct GLKViewerVC: UIViewControllerRepresentable {
    @EnvironmentObject var model: Model
    let glkViewVC = ImageViewVC()

    func makeUIViewController(context: Context) -> ImageViewVC {
        return glkViewVC
    }

    func updateUIViewController(_ uiViewController: ImageViewVC, context: Context) {
        glkViewVC.model = model
    }
}
The only thing of note is that here's where I set the model variable in the VC. I'm sure it's possible to get rid of the VC entirely and have a UIViewRepresentable, but I'm more comfortable with this set up.
Next, my model:
class Model: ObservableObject {
    var objectWillChange = PassthroughSubject<Void, Never>()
    var uiOriginal: UIImage?
    var ciInput: CIImage?
    var ciFinal: CIImage?

    init() {
        uiOriginal = UIImage(named: "vermont.jpg")
        uiOriginal = uiOriginal!.resizeToBoundingSquare(640)
        ciInput = CIImage(image: uiOriginal!)?.rotateImage()
        let filter = CIFilter(name: "CIPhotoEffectNoir")
        filter?.setValue(ciInput, forKey: "inputImage")
        ciFinal = filter?.outputImage
    }
}
Nothing to see here at all, but understand that in SceneDelegate, where you instantiate this, it will trigger the init and set up the filtered image.
Finally, ContentView:
struct ContentView: View {
    @EnvironmentObject var model: Model

    var body: some View {
        VStack {
            GLKViewerVC()
            Button(action: {
                self.showImage()
            }) {
                VStack {
                    Image(systemName: "tv").font(Font.body.weight(.bold))
                    Text("Show image").font(Font.body.weight(.bold))
                }
                .frame(width: 80, height: 80)
            }
        }
    }

    func showImage() {
        NotificationCenter.default.post(name: .updateImage, object: nil, userInfo: nil)
    }
}
SceneDelegate instantiates the view model, which now has the altered CIImage, and the button beneath the GLKView (an instance of GLKViewerVC, which is just a SwiftUI View) will send a notification to update the image.
Apple's WWDC 2022 contained a tutorial/video entitled "Display EDR Content with Core Image, Metal, and SwiftUI" which describes how to blend Core Image with Metal and SwiftUI. It points to some new sample code entitled "Generating an Animation with a Core Image Render Destination" (here).
While it doesn't address your question about using GLKView, it does provide some elegant, clean, Apple-sanctioned code for using Metal within SwiftUI.
This sample project is very CoreImage-centric (which matches your background with CIFilter), but I wish Apple would post more sample-code examples showing Metal integrated with SwiftUI.