To play videos in my project, I have created a custom video player (i.e. AVPlayerLayer instead of AVPlayerViewController) so I can have custom controls. The problem is that when I try to take a screenshot of the video player, it returns a black image.
This is the code I have used to take the screenshot:
func captureScreen() -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, UIScreen.main.scale)
    view.layer.render(in: UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
Any suggestions would be greatly helpful!
If you need to take a snapshot of a view containing an AVPlayerLayer, to use in animations/freeze states, you can also use view.snapshotView(afterScreenUpdates:).
This returns a view with a screenshot of your view.
Handy in combination with UIVisualEffectView! :)
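A minimal sketch of that approach, assuming videoView is the view hosting your AVPlayerLayer (the overlay timing is just an illustration):
if let snapshot = videoView.snapshotView(afterScreenUpdates: false) {
    snapshot.frame = videoView.frame
    // Overlay the snapshot, e.g. to freeze the frame during a transition.
    videoView.superview?.addSubview(snapshot)

    // Remove it once the underlying content is ready again.
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.3) {
        snapshot.removeFromSuperview()
    }
}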
Try this, hope it will help you...
extension UIView {
    public func getSnapshotImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, 0)
        self.drawHierarchy(in: self.bounds, afterScreenUpdates: false)
        let snapshotImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return snapshotImage
    }
}
Usage:
let snapshot = self.VideoPlayerView.getSnapshotImage()
Since you are using AVPlayer rather than AVAssetReader + AVAssetReaderTrackOutput (which can grab frames while playing), I guess this will help you:
if let item = player.currentItem {
    let imageGenerator = AVAssetImageGenerator(asset: item.asset)
    imageGenerator.appliesPreferredTrackTransform = true
    if let cgImage = try? imageGenerator.copyCGImage(at: item.currentTime(), actualTime: nil) {
        let image = UIImage(cgImage: cgImage)
    }
}
Tip: you can get that item any way you like (e.g. the same one you used when creating the AVPlayer). The optional bindings are just for the safety the compiler suggests; you can force-unwrap, of course.
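For reuse, here's a minimal sketch that wraps the snippet above in a helper function (the function name is just an illustration, not an official API):
func currentFrameSnapshot(of player: AVPlayer) -> UIImage? {
    // Works for file-based assets; returns nil if no item is loaded
    // or the frame could not be generated.
    guard let item = player.currentItem else { return nil }
    let generator = AVAssetImageGenerator(asset: item.asset)
    generator.appliesPreferredTrackTransform = true
    guard let cgImage = try? generator.copyCGImage(at: item.currentTime(), actualTime: nil) else { return nil }
    return UIImage(cgImage: cgImage)
}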
I am trying to use the ARKit image-detection functionality in a SwiftUI project and am having trouble implementing the renderer. This is what has happened so far:
In Xcode 11.2 one can start a new ARKit project using SwiftUI. The UIViewRepresentable protocol is used in the ARViewContainer struct, which returns an ARView. An ARView object is created inside that struct, and this "arView" does have a "session" object.
I think I can set up this (AR) session object the way it used to work with SceneKit:
struct ScanARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        //let arView = MyARView(frame: .zero)
        // changed this line to the following to have an own renderer
        let arView = ARView(frame: .zero, cameraMode: ARView.CameraMode.ar, automaticallyConfigureSession: false)

        guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else {
            fatalError("Missing expected asset catalog resources.")
        }

        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages = referenceImages
        arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
        return arView
    }

    // (updateUIView is also required by UIViewRepresentable; an empty stub compiles)
    func updateUIView(_ uiView: ARView, context: Context) {}
}
The code compiles without complaint, and when I run it on the phone the AR session starts and seems to do something.
The next step would be to change the renderer to show detected images. In SceneKit one needed to make use of ARSCNViewDelegate (Image detection results). This is where I got stuck.
I first tried to create my own MyARView class to get access to ARSessionDelegate, hoping to be able to implement the didAdd-anchor callbacks.
class MyARView: ARView, ARSessionDelegate {
    required init(frame: CGRect) {
        super.init(frame: frame, cameraMode: ARView.CameraMode.ar, automaticallyConfigureSession: false)
        self.session.delegate = self
    }
}
Then I wanted to use this "new" class in the ARViewContainer struct:
let arView = MyARView(frame: .zero, cameraMode: ARView.CameraMode.ar, automaticallyConfigureSession: false)
//old : let arView = ARView(frame: .zero, cameraMode: ARView.CameraMode.ar, automaticallyConfigureSession: false)
But the compiler complains: "Type 'ARViewContainer' does not conform to protocol 'UIViewRepresentable'".
Or I get the complaint "Type of expression is ambiguous without more context" when declaring
let arView = MyARView(...)
Does anybody know how to do this correctly?
I think I found good inspiration here on how to "catch" the delegate callbacks:
ARKit & Reality composer - how to Anchor scene using image coordinates
Thanks to Mark D
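For anyone landing here, a minimal sketch of the usual UIViewRepresentable coordinator pattern for receiving the ARSessionDelegate callbacks without subclassing ARView (this is my reading of the linked approach, not verbatim from it):
struct ScanARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero, cameraMode: .ar, automaticallyConfigureSession: false)
        // Route session callbacks to the coordinator instead of a subclass.
        arView.session.delegate = context.coordinator

        guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else {
            fatalError("Missing expected asset catalog resources.")
        }
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages = referenceImages
        arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator() }

    class Coordinator: NSObject, ARSessionDelegate {
        func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
            for case let imageAnchor as ARImageAnchor in anchors {
                print("Detected image:", imageAnchor.referenceImage.name ?? "unnamed")
                // Place content for the detected image here.
            }
        }
    }
}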
I have been gradually getting my head around filters, but I can't for the life of me get this one to work, and there are minimal articles about this specific filter.
I have a mask (PNG) with a gradient between white and black, and I can't get the filter to blur an image called bg.png. The app crashes with:
'NSUnknownKeyException', reason: '[ setValue:forUndefinedKey:]: this class is not key value coding-compliant for the key inputMaskImage.'
let mask = CIImage(image: UIImage(named: "mask.png")!)
let context = CIContext()
let filter = CIFilter(name: "CIMaskedVariableBlur")
filter?.setValue(mask, forKey: kCIInputMaskImageKey)
let image = CIImage(image: UIImage(named: "bg.png")!)
filter?.setValue(image, forKey: kCIInputImageKey)
let result = filter?.outputImage!
let cgImage = context.createCGImage(result!, from:(result?.extent)!)
let outputImg = UIImage(cgImage: cgImage!)
bgImage.image = outputImg
I have been playing around with different ways of doing it, but they all give the same error, and I assume it has to do with the mask type?... I have no idea!
If you are targeting iOS 8, the key kCIInputMaskImageKey won't work; it's only available on iOS 9 or later. But the good news is you can get things working on iOS 8 by typing in the names of the keys as string literals. I usually do this anyway. Here's a function that should work for you:
func applyMaskedVariableBlur(image: UIImage, mask: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIMaskedVariableBlur")
    // Convert the UIImages to CIImages and set them as the filter inputs.
    let ciInput = CIImage(image: image)
    let ciMask = CIImage(image: mask)
    filter?.setValue(ciInput, forKey: "inputImage")
    filter?.setValue(ciMask, forKey: "inputMask")
    // Render the output CIImage as a CGImage first, then wrap it with the
    // original scale and orientation so the UIImage displays correctly.
    let ciOutput = filter?.outputImage
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciOutput!, from: (ciOutput?.extent)!)
    return UIImage(cgImage: cgImage!, scale: image.scale, orientation: image.imageOrientation)
}
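Usage might look like this (the asset names come from the question; the force-unwraps assume both images exist in the bundle). If you also want to control the blur strength, the filter accepts an "inputRadius" key, e.g. filter?.setValue(10, forKey: "inputRadius"):
let blurred = applyMaskedVariableBlur(image: UIImage(named: "bg.png")!, mask: UIImage(named: "mask.png")!)
bgImage.image = blurred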
I've created a UIView and set a background image on it using the following code:
slot1_view?.backgroundColor = UIColor(patternImage: UIImage(named: "drum_cut.png")!)
It works fine on iPhone, but on iPad the background image becomes tiled (repeated).
How can I change its content mode to something that fills the entire UIView?
Please help!
Try this code:
Repeated:
var img = UIImage(named: "bg.png")
view.backgroundColor = UIColor(patternImage: img!)
Stretched:
var img = UIImage(named: "bg.png")
view.layer.contents = img?.cgImage
Hope it helps!
Try this
let backgroundImage = UIImageView(frame: UIScreen.main.bounds)
backgroundImage.image = UIImage(named: "RubberMat")
backgroundImage.contentMode = UIViewContentMode.scaleAspectFill
self.view.insertSubview(backgroundImage, at: 0)
If you're using Swift 5, try this in a button-tap handler. You might need to adjust the layout margins of the UIView.
myView.contentMode = UIView.ContentMode.scaleToFill
myView.layer.contents = UIImage(named: "bg.png")?.cgImage
self.view.bringSubviewToFront(myView)
Swift 5
This works fine for a view background image:
let backgroundImage = UIImageView(frame: UIScreen.main.bounds)
backgroundImage.image = AppImageAssets.appAssets.appBackgroundImage()
backgroundImage.contentMode = UIView.ContentMode.scaleAspectFill
self.view.insertSubview(backgroundImage, at: 0)
The video I'm playing does not take up the entire area of the UIView (named videoView), which has a gray color: iPhone 7 Plus Simulator Screenshot
Most of the answers claim that I need to either set the layer's frame to the UIView's bounds or set videoGravity to AVLayerVideoGravityResizeAspectFill. I've tried both, but for some reason the video still does not fill the space entirely.
var avPlayer: AVPlayer!
var avPlayerLayer: AVPlayerLayer!
var paused: Bool = false

@IBOutlet weak var videoView: UIView!

override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view.
    let theURL = Bundle.main.url(forResource: "HOTDOG", withExtension: "mp4")

    avPlayer = AVPlayer(url: theURL!)
    avPlayerLayer = AVPlayerLayer(player: avPlayer)
    avPlayerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    avPlayerLayer.frame = videoView.layer.bounds
    videoView.layer.insertSublayer(avPlayerLayer, at: 0)
}
Any help will be appreciated. :)
After a long time I found the answer.
The code below should be moved into viewDidAppear(), like this:
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    // Resizing the frame
    avPlayerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    avPlayerLayer.frame = videoView.layer.bounds
    videoView.layer.insertSublayer(avPlayerLayer, at: 0)
    avPlayer.play()
    paused = false
}
The layout was designed for the iPhone SE (a small screen), so when it was tested on a bigger screen the app took the originally set size from Auto Layout and shaped the layer according to that. By the time viewDidAppear() runs, the view has been resized to the new constraints, so the layer gets the correct bounds.
Just move the frame line, avPlayerLayer.frame = videoView.layer.bounds, into viewDidLayoutSubviews, like this:
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    avPlayerLayer.frame = videoView.layer.bounds
}
The rest can stay in viewDidLoad, just as you have it.
I am using Swift 3. When I hit a specific string while looping through iterated GeoJSON data (jsonObj[i]), I need it to load the right image file. I am able to return the right pass type and load my markers onto the map correctly; I just can't seem to figure out how to get the app to load the right color per marker type.
I have tried a switch statement, but with no luck. I have looked at the if-statement documentation, among many other Stack Overflow posts and tutorials.
var pass : String?
var brown = UIImage(named: "brown.png")
var purple = UIImage(named: "purple.png")
var green = UIImage(named: "green.png")
var image : UIImage?
The block of code below is in the for loop, so each record that is iterated through is loaded into a marker as a title and subtitle. With this code it reads the first marker color, "purple", and loads all points with the purple marker instead of selecting based on the pass1/pass2 value that is returned.
var pass = jsonObj[i]["properties"]["Pass"]
print("propertiesPassString: ", pass)

if pass == "pass1" {
    print("Pass1!")
    image = purple
} else if pass == "pass2" {
    print("Pass2")
    image = brown
} else if pass == "" {
    print("Pass3")
    image = green
}
Here is the func that loads the marker. I'm not sure if this is needed, but it applies the "image" variable (the custom marker) in each iteration of the for loop. This is largely copy-and-pasted from the Mapbox iOS SDK.
func mapView(_ mapView: MGLMapView, annotationCanShowCallout annotation: MGLAnnotation) -> Bool {
    // Always try to show a callout when an annotation is tapped.
    return true
}

func mapView(_ mapView: MGLMapView, imageFor annotation: MGLAnnotation) -> MGLAnnotationImage? {
    // Try to reuse the existing ‘point’ annotation image, if it exists.
    var annotationImage = mapView.dequeueReusableAnnotationImage(withIdentifier: "point")

    if annotationImage == nil {
        print("annotationImage")
        image = image?.withAlignmentRectInsets(UIEdgeInsets(top: 0, left: 0, bottom: (image?.size.height)!/2, right: 0))
        // Initialize the ‘point’ annotation image with the UIImage we just loaded.
        annotationImage = MGLAnnotationImage(image: image!, reuseIdentifier: "point")
    }

    //image = nil
    return annotationImage!
}
If you have any advice/suggestions for a newer Swift programmer, I would certainly appreciate it. I've been stuck on this for several days now. I am still learning how to use Stack Overflow as well, so please let me know if there is a better way to go about posting this question; I am open to constructive critiques.