I'm facing a weird crash when long-pressing and then tapping a link inside a UITextView. Below is my code for handling the touch event on the link.
func textView(_ textView: UITextView, shouldInteractWith URL: URL, in characterRange: NSRange) -> Bool {
    let termsAndConditions: TRTermsAndConditionsViewController = TRTermsAndConditionsViewController(nibName: "TRTermsAndConditionsViewController", bundle: nil)
    let navigationTermsAndConditions = TRBaseNavigationViewController(rootViewController: termsAndConditions)
    self.present(navigationTermsAndConditions, animated: true, completion: nil)
    return false
}
I'm getting the error below:
*** Assertion failure in -[TRADFRI.TRTextViewNonEditable startInteractionWithLinkAtPoint:], /BuildRoot/Library/Caches/com.apple.xbs/Sources/UIKit/UIKit-3512.29.5/UITextView_LinkInteraction.m:377
I googled a lot and went through these links as well (link1, link2), but without success. I have tried the solutions given by "Sukhrob" and "ryanphillipthomas" on link1 and the solutions given by "nate.m" and "chrismorris" on link2. What's weirder is that I'm getting this crash on devices that support 3D Touch, like the iPhone 6S and iPhone 6S Plus (with iOS 9 or above). Can anybody help me out with this issue?
In the end I resolved this issue with some help from this link.
I removed the link attribute from the NSAttributedString and used only the underline attribute. Using a tap gesture recognizer, I detect the index of the character the user tapped; if that index lies within my hyperlink range, I open the URL, as sketched below.
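A minimal sketch of that workaround (hyperlinkRange and termsAndConditionsURL are placeholder names, not from my project):

// In viewDidLoad(), attach a tap recognizer to the text view:
let tap = UITapGestureRecognizer(target: self, action: #selector(textViewTapped(_:)))
textView.addGestureRecognizer(tap)

// Find the tapped character index and check it against the underlined range.
@objc func textViewTapped(_ recognizer: UITapGestureRecognizer) {
    guard let textView = recognizer.view as? UITextView else { return }

    var location = recognizer.location(in: textView)
    location.x -= textView.textContainerInset.left
    location.y -= textView.textContainerInset.top

    // Index of the character under the finger
    let index = textView.layoutManager.characterIndex(for: location,
                                                      in: textView.textContainer,
                                                      fractionOfDistanceBetweenInsertionPoints: nil)

    // hyperlinkRange is the NSRange you underlined as the "link"
    if NSLocationInRange(index, hyperlinkRange) {
        UIApplication.shared.open(termsAndConditionsURL)
        // ...or present the terms and conditions view controller here instead
    }
}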
I am developing an iOS app that will write files to an external device. I have used a UIDocumentPickerViewController with the following code:
let documentPickerController = UIDocumentPickerViewController(forExporting: usbSendungURL)
documentPickerController.delegate = self
self.present(documentPickerController, animated: true, completion: nil)
Which successfully copied to my device.
However, as these files can be somewhat large, I decided that I needed a UIProgressView. Unfortunately the document picker doesn't offer this functionality, so I changed my code as follows:
let documentPickerController = UIDocumentPickerViewController(forOpeningContentTypes: [.folder])
documentPickerController.delegate = self
self.present(documentPickerController, animated: true, completion: nil)
That enabled me to pick up the selected folder in the delegate:
func documentPicker(_ controller: UIDocumentPickerViewController, didPickDocumentsAt urls: [URL]) {
    documentPickerUrls = urls
    moveFilesToUSB(urls: urls)
}
I then appended the filenames as appropriate and used FileManager's moveItem function:
try FileManager.default.moveItem(at: usbSendungURL[0], to: url1!)
Which enabled me to install the necessary timer functions for the progress updates.
Unfortunately FileManager appears not to be able to write to an external device: I am receiving an error message stating that I don't have authorisation to write to the device, even though the document picker does.
Is there a way around this issue?
Any help would be appreciated.
The URLs returned by UIDocumentPickerViewController are security-scoped, and access to the directory requires calling url.startAccessingSecurityScopedResource() and url.stopAccessingSecurityScopedResource(). This is explained clearly in Providing Access to Directories.
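A minimal sketch of that pattern, reusing the names from the question (moveFilesToUSB and usbSendungURL come from the code above; the error handling is illustrative):

func moveFilesToUSB(urls: [URL]) {
    guard let folderURL = urls.first else { return }

    // The picked folder URL is security-scoped; claim access before writing to it.
    guard folderURL.startAccessingSecurityScopedResource() else {
        print("Could not access the security-scoped folder")
        return
    }
    defer { folderURL.stopAccessingSecurityScopedResource() }

    let destination = folderURL.appendingPathComponent(usbSendungURL[0].lastPathComponent)
    do {
        try FileManager.default.moveItem(at: usbSendungURL[0], to: destination)
    } catch {
        print("Move failed: \(error)")
    }
}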
I'm trying to recognize Munchkin cards from the card game. I've been trying a variety of image recognition APIs (Google Vision API, vize.ai, Azure's Computer Vision API and more), but none of them seem to work well.
They're able to recognize one of the cards when only one appears in the demo image, but when two cards appear together they fail to identify one or the other.
I've trained the APIs with a set of about 40 different images per card, with different angles, backgrounds and lighting.
I've also tried using OCR (via the Google Vision API), which works only for some cards, probably due to the small letters and lack of detail on some cards.
Does anyone know of a way I can teach one of these APIs (or another) to read these cards better? Or perhaps a different way to recognize the cards?
The desired outcome is that a user captures an image while playing the game and the application works out which cards are in front of them and returns the results.
Thank you.
What a coincidence! I've recently done something very similar – link to video – with great success! Specifically, I was trying to recognise and track Chinese-language Munchkin cards to replace them with English ones. I used iOS's ARKit 2 (requires an iPhone 6S or higher; or a relatively new iPad; and isn't supported on desktop).
I basically just followed the Augmented Reality Photo Frame demo 41 minutes into WWDC 2018's What's New in ARKit 2 presentation. My code below is a minor adaptation of theirs (merely replacing the target with a static image rather than a video). The tedious part was scanning all the cards in both languages, cropping them out, and adding them as AR resources...
Here's my source code, ViewController.swift:
import UIKit
import SceneKit
import ARKit
import Foundation
class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Set the view's delegate
        sceneView.delegate = self

        // Show statistics such as fps and timing information
        sceneView.showsStatistics = true

        sceneView.scene = SCNScene()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Create a configuration
        let configuration = ARImageTrackingConfiguration()

        guard let trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "card_scans", bundle: Bundle.main) else {
            print("Could not load images")
            return
        }

        // Setup configuration
        configuration.trackingImages = trackingImages
        configuration.maximumNumberOfTrackedImages = 16

        // Run the view's session
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)

        // Pause the view's session
        sceneView.session.pause()
    }

    // MARK: - ARSCNViewDelegate

    // Create and configure nodes for anchors added to the view's session.
    public func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        let node = SCNNode()

        if let imageAnchor = anchor as? ARImageAnchor {
            // Create a plane matching the physical size of the detected card
            let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                                 height: imageAnchor.referenceImage.physicalSize.height)

            print("Asset identified as: \(anchor.name ?? "nil")")

            // Set a UIImage as the plane's texture
            plane.firstMaterial?.diffuse.contents = UIImage(named: "replacementImage.png")

            let planeNode = SCNNode(geometry: plane)

            // Rotate the plane to match the anchor
            planeNode.eulerAngles.x = -.pi / 2

            node.addChildNode(planeNode)
        }

        return node
    }

    func session(_ session: ARSession, didFailWithError error: Error) {
        // Present an error message to the user
    }

    func sessionWasInterrupted(_ session: ARSession) {
        // Inform the user that the session has been interrupted, for example, by presenting an overlay
    }

    func sessionInterruptionEnded(_ session: ARSession) {
        // Reset tracking and/or remove existing anchors if consistent tracking is required
    }
}
Unfortunately, I met a limitation: card recognition becomes rife with false positives the more cards you add as AR targets to distinguish from (to clarify: not the number of targets simultaneously onscreen, but the library size of potential targets). While a 9-target library performed with 100% success rate, it didn't scale to a 68-target library (which is all the Munchkin treasure cards). The app tended to flit between 1-3 potential guesses when faced with each target. Seeing the poor performance, I didn't take the effort to add all 168 Munchkin cards in the end.
I used Chinese cards as the targets, which are all monochrome; I believe it could have performed better if I'd used the English cards as targets (as they are full-colour, and thus have richer histograms), but on my initial inspection of a 9-card set in each language, I was receiving as many warnings for the AR resources being hard to distinguish for English as I was for Chinese. So I don't think the performance would improve so far as to scale reliably to the full 168-card set.
Unity's Vuforia would be another option to approach this, but again has a hard limit of 50-100 targets. With (an eye-wateringly expensive) commercial licence, you can delegate target recognition to cloud computers, which could be a viable route for this approach.
Thanks for investigating the OCR and ML approaches – they would've been my next ports of call. If you find any other promising approaches, please do leave a message here!
You are going in the wrong direction. As I understand it, you have an image, and inside that image there are several Munchkin cards (two in your example). It is not only a recognition problem; card detection is also needed. So your task should be divided into a card detection task and a card text recognition task.
For each task you can use the following approach (a Swift sketch of both steps follows the list):
1. Card detection
   Simple color segmentation
   (If you have enough time and patience, train an SSD to detect the cards.)
2. Card text recognition
   Use Tesseract with an English dictionary.
   (You could add a card-rotation step to improve accuracy.)
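If you are on iOS, here is a rough Swift sketch of both steps using Apple's Vision framework (VNDetectRectanglesRequest for card detection on iOS 11+, and VNRecognizeTextRequest on iOS 13+ as an on-device alternative to Tesseract); the function name and thresholds are illustrative only:

import UIKit
import Vision

func detectCardsAndReadText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { completion([]); return }

    // 1. Card detection: find rectangular regions that could be cards.
    let rectangleRequest = VNDetectRectanglesRequest()
    rectangleRequest.maximumObservations = 8     // expect a handful of cards per photo
    rectangleRequest.minimumAspectRatio = 0.5    // roughly card-shaped regions
    rectangleRequest.minimumConfidence = 0.6

    // 2. Text recognition: read whatever text is visible in the frame.
    let textRequest = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    textRequest.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([rectangleRequest, textRequest])
        // rectangleRequest.results now holds VNRectangleObservation bounding boxes;
        // you could crop each detected card and run the text request per crop instead.
    }
}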
Hope that helps.
You can try this: https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts/csharp#OCR. It will detect text, and you can then apply your own logic (based on the detected text) to handle actions.
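For reference, a rough Swift sketch of calling that OCR endpoint directly; the endpoint URL and subscription key below are placeholders, and the response is parsed per the documented regions / lines / words structure:

import Foundation

// Placeholders: substitute your own Azure Computer Vision endpoint and key.
let ocrEndpoint = "https://<your-resource>.cognitiveservices.azure.com/vision/v3.2/ocr?language=en&detectOrientation=true"
let subscriptionKey = "<your-subscription-key>"

func recognizeCardText(imageData: Data, completion: @escaping (String) -> Void) {
    var request = URLRequest(url: URL(string: ocrEndpoint)!)
    request.httpMethod = "POST"
    request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
    request.setValue(subscriptionKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key")
    request.httpBody = imageData

    URLSession.shared.dataTask(with: request) { data, _, error in
        guard let data = data, error == nil,
              let jsonObject = try? JSONSerialization.jsonObject(with: data),
              let json = jsonObject as? [String: Any],
              let regions = json["regions"] as? [[String: Any]] else {
            completion("")
            return
        }
        // Flatten regions -> lines -> words into a single string.
        let words = regions
            .flatMap { $0["lines"] as? [[String: Any]] ?? [] }
            .flatMap { $0["words"] as? [[String: Any]] ?? [] }
            .compactMap { $0["text"] as? String }
        completion(words.joined(separator: " "))
    }.resume()
}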
My app has:
App ID - push notifications enabled,
Provisioning profile (Development) - push notifications enabled,
Target - Capabilities - push notifications enabled,
Target - Background Modes - background fetch and remote notifications enabled.
In AppDelegate:
1. import UserNotifications
2. class AppDelegate: UIResponder, UIApplicationDelegate, UINavigationControllerDelegate, UNUserNotificationCenterDelegate
3. Request notification authorization and register for remote notifications in didFinishLaunchingWithOptions:
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
    if #available(iOS 10.0, *) {
        let center = UNUserNotificationCenter.current()
        center.delegate = self
        center.requestAuthorization(options: [.badge, .sound, .alert], completionHandler: { (granted, error) in
            if error == nil {
                //UIApplication.shared.registerForRemoteNotifications()
                application.registerForRemoteNotifications()
            } else {
                print(error?.localizedDescription ?? "requestAuthorization failed")
            }
        })
    } else {
        registerForPushNotifications(application)
        // Fallback on earlier versions
    }
    return true
}
But didRegisterForRemoteNotificationsWithDeviceToken is not getting called; instead, didFailToRegisterForRemoteNotificationsWithError is being called.
func application(_ application: UIApplication, didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
This is the error I'm getting in didFailToRegisterForRemoteNotificationsWithError:
Domain=NSCocoaErrorDomain Code=3000 "no valid 'aps-environment' entitlement string found for application" UserInfo={NSLocalizedDescription=no valid 'aps-environment' entitlement string found for application}
Am I missing something?
I also faced the same issue in Xcode 8, and I resolved it by selecting the option under Capabilities -> Push Notifications: "Add the Push Notifications entitlement to your entitlements file".
Once you select that option, an entitlements file is added to your project folder.
As of Xcode 8, entitlements are set from your local entitlements file rather than from the provisioning profile you created on the Apple Developer Portal. The entitlements now need to be added to your Xcode build under Capabilities in addition to being in your provisioning profile.
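For reference, after enabling the capability the generated .entitlements file should contain the aps-environment key that the error message complains about; its value is development for debug builds and becomes production when the app is distributed. A minimal example:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>aps-environment</key>
    <string>development</string>
</dict>
</plist>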
Hope it works for you!!!
I ran into a similar kind of problem when I updated to the latest versions of Xcode and Sierra.
I got it fixed after adding the Apple ID associated with that project under Xcode -> Preferences (Accounts). Try it.
I also performed all the steps you have followed for supporting iOS 10.
I have some code that worked fine prior to upgrading to Swift 3 and Xcode 8.0.
print("Thumb", self.theTempPath!)
video["videoThumbnail"] = CKAsset(fileURL: self.theTempPath! as URL)
produces this in the Console
Thumb /Users/prw/Documents/thumbTemp.jpg
2016-09-27 10:32:06.140 PA Places Data[2386:68875] Non-file URL
The print statement is for debugging only.
It appears to me that theTempPath! is a path to a file, so I am at a loss as to how to address the issue. Execution does not halt, but nothing happens after the CKAsset statement.
Can anyone explain what might be causing the issue?
You can use the absoluteURL property of NSURL; it will return a URL object. See the Apple documentation for more detail.
if let url = self.theTempPath!.absoluteURL {
    video["videoThumbnail"] = CKAsset(fileURL: url)
}
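If theTempPath actually wraps a plain filesystem path (as the printed /Users/prw/Documents/thumbTemp.jpg suggests), a further sketch, assuming that is the case, is to build a file URL from the path explicitly, since CKAsset requires a file URL:

// Sketch: rebuild the URL as a file:// URL from the stored path.
if let path = self.theTempPath?.path {
    let fileURL = URL(fileURLWithPath: path)
    video["videoThumbnail"] = CKAsset(fileURL: fileURL)
}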
I am trying to write a hybrid app for Android using VS 2013 Update 3 and the Multi-Device Hybrid App extension (Cordova v3.5.0). Everything is working well except the Media Capture plugin. I am calling navigator.device.capture.captureImage(MediaCaptureSuccess, MediaCaptureError, { limit: 3 }), which opens up the camera app. I can take a picture, but when I click OK on the device, my error callback is executed with CaptureError.CAPTURE_INTERNAL_ERR and no other information.
I have tried switching to org.apache.cordova.media-capture#0.3.4 (currently using 0.3.1), but when I try to compile, I get a plugman error when it tries to retrieve it. I have searched the debug output for clues, and the only thing I found was the line "Unknown permission android.permission.RECORD_VIDEO in package...", but that seems to be a valid user permission. When I look at the capture.java generated by the build, I can see that this error is returned if an IOException occurs.
Does anyone have any suggestions on how to fix this or what to check next?
Try this plugin
Config:
<vs:feature>org.apache.cordova.camera#0.3.0</vs:feature>
JS:
navigator.camera.getPicture(onSuccess, onFail, {
quality: 30,
destinationType: Camera.DestinationType.FILE_URI,
saveToPhotoAlbum: true
});