Show notification when beacon is in region - swift3

I want to show a single notification when a beacon enters the region. I used the following code:
func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
    if region is CLBeaconRegion {
        BeaconNotificationFound()
    }
}
The location manager and the region are set up correctly. The only problem is that nothing happens when this function should fire.
What can I do?
Here is the setup code that precedes it:
let locationManager = CLLocationManager()
let region = CLBeaconRegion(proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!, identifier: "AirLocate")

override func viewDidLoad() {
    super.viewDidLoad()
    // Is the app allowed to use the location?
    locationManager.delegate = self
    if CLLocationManager.authorizationStatus() != CLAuthorizationStatus.authorizedWhenInUse {
        locationManager.requestWhenInUseAuthorization()
    }
    locationManager.startRangingBeacons(in: region)
    locationManager.startMonitoring(for: region)
}

It sounds like you are not getting callbacks to the didEnterRegion method. A few things to check:
Make sure your Info.plist contains an entry like below, otherwise it won't be able to prompt you for location access: <key>NSLocationWhenInUseUsageDescription</key><string>This app needs to access your location so it can tell when you are near a beacon.</string>
Make sure you are prompted for location access and that you have actually granted it. Check in Settings -> [Your app name] -> Location, and verify that under "ALLOW LOCATION ACCESS" the checkmark is not next to Never.
Make sure bluetooth is on.
Force a region transition: bring your app to the foreground and turn off the beacon for at least 60 seconds so that a region exit is registered. Once this is done, turn the beacon back on.
If none of the above works, use an off-the-shelf beacon detector app like Locate or AirLocate to verify that it can see your beacon while it is transmitting. If you use one of these apps, you must configure your UUID in the app before it will detect the beacon.
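For completeness, here is a rough sketch of what a helper like BeaconNotificationFound() could look like using the UserNotifications framework; the title and body text are illustrative, and it assumes notification permission has already been requested via UNUserNotificationCenter.requestAuthorization:
import UserNotifications

// Illustrative implementation of the question's BeaconNotificationFound() helper:
// posts a single local notification when the beacon region is entered.
func BeaconNotificationFound() {
    let content = UNMutableNotificationContent()
    content.title = "Beacon nearby"
    content.body = "You have entered the AirLocate beacon region."

    // A nil trigger delivers the notification immediately.
    let request = UNNotificationRequest(identifier: "beacon-entered",
                                        content: content,
                                        trigger: nil)
    UNUserNotificationCenter.current().add(request) { error in
        if let error = error {
            print("Failed to schedule notification: \(error)")
        }
    }
}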

Related

iOS Copy Files to external device

I am developing an iOS app that will write files to an external device. I have set up and used UIDocumentPickerViewController with the following code:
let documentPickerController = UIDocumentPickerViewController(forExporting: usbSendungURL)
documentPickerController.delegate = self
self.present(documentPickerController, animated: true, completion: nil)
This successfully copied the files to my device.
However, as these files can be somewhat large, I decided that I needed to add a UIProgressView. Unfortunately the document picker doesn't offer this, so I changed my code as follows:
let documentPickerController = UIDocumentPickerViewController(forOpeningContentTypes: [.folder])
documentPickerController.delegate = self
self.present(documentPickerController, animated: true, completion: nil)
That enabled me to pick up the selected folder in the delegate:
func documentPicker(_ controller: UIDocumentPickerViewController, didPickDocumentsAt urls: [URL]) {
    documentPickerUrls = urls
    moveFilesToUSB(urls: urls)
}
I then append the file names as appropriate and use FileManager's moveItem function:
try FileManager.default.moveItem(at: usbSendungURL[0], to: url1!)
Which enabled me to install the necessary timer functions for the progress updates.
Unfortunately FileManager appears not to be able to write to an external device: I am receiving an error message stating that I don't have authorisation to write to the device, although DocumentPicker does.
Is there a way around this issue?
Any help would be appreciated.
The URLs returned by UIDocumentPickerViewController are security-scoped and access to the directory requires the use of url.startAccessingSecurityScopedResource() and url.stopAccessingSecurityScopedResource(). This is explained clearly in Providing Access To Directories.
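For illustration, a minimal sketch of how the move might look inside a moveFilesToUSB-style helper, assuming urls comes from the didPickDocumentsAt callback above and sourceURL stands in for the question's usbSendungURL entries:
import Foundation

func moveFilesToUSB(urls: [URL]) {
    guard let folderURL = urls.first else { return }

    // The picked folder is security-scoped; access must be bracketed like this.
    guard folderURL.startAccessingSecurityScopedResource() else {
        print("Could not access the security-scoped folder")
        return
    }
    defer { folderURL.stopAccessingSecurityScopedResource() }

    // Illustrative source file; the real app would iterate over its own URLs.
    let sourceURL = FileManager.default.temporaryDirectory.appendingPathComponent("export.dat")
    let destinationURL = folderURL.appendingPathComponent(sourceURL.lastPathComponent)

    do {
        try FileManager.default.moveItem(at: sourceURL, to: destinationURL)
    } catch {
        print("Move failed: \(error)")
    }
}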

How do I fetch HomeKit values for usage in iOS 14 Widgets?

I am writing a HomeKit app that successfully shows live data from my supported accessories in-app. I can read single values (HMCharacteristic.readValue) or use notifications to stay updated (HMCharacteristic.enableNotification).
Now I want to implement Widgets that show this data on the user's Home Screen. This consists of four steps:
1. A dynamic Intent fetches all the registered (and supported) Accessories from the HMHomeManager and enables the user to select one of them to be shown on the Widget.
2. Inside the IntentTimelineProvider's getTimeline function I can then again use the HMHomeManager to retrieve the Accessory I want to display on the Widget (based on the Accessory's UUID, which is stored inside getTimeline's configuration parameter, the Intent).
3. Still inside the getTimeline function I can choose the Services and Characteristics I need for displaying the Accessory's Widget from the HMHomeManager.
Up until here everything works fine.
However, when I try to read the values from the Characteristics I chose before using HMCharacteristic.readValue, the callback contains an error stating
Error Domain=HMErrorDomain Code=80 "Missing entitlement for API."
The Widget's Info.plist contains the 'Privacy - HomeKit Usage Description' field and the Target has the HomeKit capability.
After some research I came up with the following theory: Obviously the whole WidgetKit API runs my code in background. And it seems like HomeKit does not allow access from a background context. Well, it does allow access to Homes/Services/Characteristics, but it does not allow reading or writing on Characteristics (I guess to make sure App developers use HomeKit Automations and don't try to implement custom automations that are controlled by some background process of their app running on the iPhone).
My (simplified) getTimeline code:
func getTimeline(for configuration: SelectAccessoryIntent, in context: Context, completion: @escaping (Timeline<Entry>) -> ()) {
    // id stores the uuid of the accessory that was chosen by the user using the dynamic Intent
    if let id = configuration.accessory?.identifier {
        // Step 2.: fetch the accessory
        // hm is a HMHomeManager
        let hm = HomeStore.shared.homeManager
        // take a short nap until the connection to the local HomeKit instance is established (otherwise hm.homes will create an empty array on first call)
        sleep(1)
        let accessories = hm.homes.flatMap({ h in h.accessories })
        if let a = accessories.filter({ a in a.uniqueIdentifier.uuidString == id }).first {
            // a holds our HMAccessory
            // Step 3.: select the characteristic I want
            // obviously the real code chooses a specific characteristic
            let s: HMService = a.services.first!
            let c: HMCharacteristic = s.characteristics.first!
            // Step 4.: read the characteristic's value
            c.readValue(completionHandler: { err in
                if let error = err {
                    print(error)
                } else {
                    print(c.value ?? "nil")
                }
                // complete with timeline
                completion(Timeline(entries: [RenderAccessoryEntry(date: Date(), configuration: configuration, value: c.value)], policy: .atEnd))
            })
        }
    }
}
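(As a side note on the sleep(1) workaround: an alternative sketch, not part of my actual code, would be to wait for the home manager's delegate callback instead of sleeping. HomeLoader and its completion-based API below are purely illustrative.)
import HomeKit

// Illustrative wrapper that delivers the homes once HomeKit has loaded them,
// instead of sleeping and hoping the data is ready.
final class HomeLoader: NSObject, HMHomeManagerDelegate {
    private let manager = HMHomeManager()
    private var completion: (([HMHome]) -> Void)?

    func loadHomes(_ completion: @escaping ([HMHome]) -> Void) {
        self.completion = completion
        manager.delegate = self
    }

    // Called by HomeKit once the home data has been synced to this process.
    func homeManagerDidUpdateHomes(_ manager: HMHomeManager) {
        completion?(manager.homes)
        completion = nil
    }
}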
My questions:
First: Is my theory correct?
If so: What can I do? Are there any entitlements that allow me to access HomeKit in background or similar? Do I need to perform the readValue call elsewhere? Or is it just impossible to use the HomeKit API with WidgetKit with the current versions of HomeKit/WidgetKit/iOS and best I can do is hope they introduce this capability at some point in the future?
If not: What am I missing?

recognize playing cards in an image

I'm trying to recognize Munchkin cards from the card game. I've been trying to use a variety of image recognition APIs (Google Vision API, vize.ai, Azure's Computer Vision API and more), but none of them seems to work well.
They're able to recognize a card when only one appears in the demo image, but when two cards appear together they fail to identify one or the other.
I've trained the APIs with a set of about 40 different images per card, with different angles, backgrounds and lighting.
I've also tried OCR (via the Google Vision API), which works only for some cards, probably due to the small lettering and lack of detail on some cards.
Does anyone know of a way I can teach one of these APIs (or another) to read these cards better, or perhaps recognize the cards in a different way?
The desired outcome is that a user captures an image while playing the game and the application understands which cards are in front of them and returns the results.
Thank you.
What a coincidence! I've recently done something very similar – link to video – with great success! Specifically, I was trying to recognise and track Chinese-language Munchkin cards to replace them with English ones. I used iOS's ARKit 2 (requires an iPhone 6S or higher; or a relatively new iPad; and isn't supported on desktop).
I basically just followed the Augmented Reality Photo Frame demo 41 minutes into WWDC 2018's What's New in ARKit 2 presentation. My code below is a minor adaptation of theirs (merely replacing the target with a static image rather than a video). The tedious part was scanning all the cards in both languages, cropping them out, and adding them as AR resources...
Here's my source code, ViewController.swift:
import UIKit
import SceneKit
import ARKit
import Foundation

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Set the view's delegate
        sceneView.delegate = self

        // Show statistics such as fps and timing information
        sceneView.showsStatistics = true

        sceneView.scene = SCNScene()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Create a configuration
        let configuration = ARImageTrackingConfiguration()

        guard let trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "card_scans", bundle: Bundle.main) else {
            print("Could not load images")
            return
        }

        // Setup configuration
        configuration.trackingImages = trackingImages
        configuration.maximumNumberOfTrackedImages = 16

        // Run the view's session
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)

        // Pause the view's session
        sceneView.session.pause()
    }

    // MARK: - ARSCNViewDelegate

    // Override to create and configure nodes for anchors added to the view's session.
    public func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        let node = SCNNode()

        if let imageAnchor = anchor as? ARImageAnchor {
            // Create a plane
            let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                                 height: imageAnchor.referenceImage.physicalSize.height)

            print("Asset identified as: \(anchor.name ?? "nil")")

            // Set UIImage as the plane's texture
            plane.firstMaterial?.diffuse.contents = UIImage(named: "replacementImage.png")

            let planeNode = SCNNode(geometry: plane)

            // Rotate the plane to match the anchor
            planeNode.eulerAngles.x = -.pi / 2

            node.addChildNode(planeNode)
        }

        return node
    }

    func session(_ session: ARSession, didFailWithError error: Error) {
        // Present an error message to the user
    }

    func sessionWasInterrupted(_ session: ARSession) {
        // Inform the user that the session has been interrupted, for example, by presenting an overlay
    }

    func sessionInterruptionEnded(_ session: ARSession) {
        // Reset tracking and/or remove existing anchors if consistent tracking is required
    }
}
Unfortunately, I met a limitation: card recognition becomes rife with false positives the more cards you add as AR targets to distinguish from (to clarify: not the number of targets simultaneously onscreen, but the library size of potential targets). While a 9-target library performed with 100% success rate, it didn't scale to a 68-target library (which is all the Munchkin treasure cards). The app tended to flit between 1-3 potential guesses when faced with each target. Seeing the poor performance, I didn't take the effort to add all 168 Munchkin cards in the end.
I used Chinese cards as the targets, which are all monochrome; I believe it could have performed better if I'd used the English cards as targets (as they are full-colour, and thus have richer histograms), but on my initial inspection of a 9-card set in each language, I was receiving as many warnings for the AR resources being hard to distinguish for English as I was for Chinese. So I don't think the performance would improve so far as to scale reliably to the full 168-card set.
Unity's Vuforia would be another option to approach this, but again has a hard limit of 50-100 targets. With (an eye-wateringly expensive) commercial licence, you can delegate target recognition to cloud computers, which could be a viable route for this approach.
Thanks for investigating the OCR and ML approaches – they would've been my next ports of call. If you find any other promising approaches, please do leave a message here!
You are going in the wrong direction. As I understand it, you have an image, and inside that image there are several Munchkin cards (2 in your example). This is not only a "recognition" problem; "card detection" is also needed. So your task should be divided into a card detection task and a card text recognition task.
For each task you can use the following approach:
1. Card detection task
Simple color segmentation
(if you have enough time and patience, train an SSD to detect the cards)
2. Card's text recognition
Use Tesseract with an English dictionary; a rough sketch of this step is shown below.
(You could add a card-rotation step to improve accuracy.)
Hope that helps.
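Here is a rough sketch of the text-recognition step, using Apple's Vision framework as an on-device alternative to Tesseract (the function name, and the assumption that the card has already been detected and cropped, are mine):
import Vision
import UIKit

// Recognize the text on a cropped card image with Vision's text recognizer.
// The card-detection/cropping step is assumed to have happened already.
func recognizeCardText(in cardImage: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = cardImage.cgImage else {
        completion([])
        return
    }

    let request = VNRecognizeTextRequest { request, error in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Keep the top candidate string for each detected text region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(lines)
    }
    request.recognitionLevel = .accurate
    request.recognitionLanguages = ["en-US"]

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try handler.perform([request])
        } catch {
            completion([])
        }
    }
}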
You can try this: https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts/csharp#OCR. It will detect text, and you can then apply your own logic (based on the detected text) to handle actions.

Multipeer Connectivity Not Connecting Programmatically

I am creating an iOS/macOS app that uses remote control functionality via the Multipeer Connectivity framework. Since the device to be remotely monitored and controlled will run over an extended period of time, it's not viable to use the automatic view controller methods, because the monitoring device may be locked or go to sleep and then drop the connection. So I'm using the programmatic approach, so that when the monitoring devices lose the connection, they will automatically pair up again when they are unlocked/woken up and the app is started again. My connection works fine using the view controller method but not the programmatic delegate approach. The advertising, browsing and inviting works fine, but when the invitation is accepted on the remote side I get several errors and then a failed connection. What's weird is that several of the errors are GCKSession errors.
So why is it trying to use the GameCenter framework? And why is it failing after accepting the invitation? Could it just be a bug in the Xcode 8 / Swift 3 /iOS 10 / macOS Sierra Beta SDKs?
[ViceroyTrace] [ICE][ERROR] ICEStopConnectivityCheck() found no ICE check with call id (2008493930)
[GCKSession] Wrong connection data. Participant ID from remote connection data = 6FBBAE66, local participant ID = 3A4C626C
[MCSession] GCKSessionEstablishConnection failed (FFFFFFFF801A0020)
Peer Changing
Failed
[GCKSession] Not in connected state, so giving up for participant [77B72F6A] on channel [0]
Here is the code from my connection class
func startAdvertisingWithoutUI() {
    if advertiserService == nil {
        advertiserService = MCNearbyServiceAdvertiser(peer: LMConnectivity.peerID, discoveryInfo: nil, serviceType: "mlm-timers")
        advertiserService?.delegate = self
        session.delegate = self
    }
    advertiserService?.startAdvertisingPeer()
}

func browserForNearbyDevices() {
    if browserService == nil {
        browserService = MCNearbyServiceBrowser(peer: LMConnectivity.peerID, serviceType: "mlm-timers")
        browserService?.delegate = self
        session.delegate = self
    }
    browserService?.startBrowsingForPeers()
}

func sendInvitation(to peer: MCPeerID) {
    browserService?.invitePeer(peer, to: session, withContext: nil, timeout: 60)
}

func advertiser(_ advertiser: MCNearbyServiceAdvertiser, didReceiveInvitationFromPeer peerID: MCPeerID, withContext context: Data?, invitationHandler: @escaping (Bool, MCSession?) -> Void) {
    let trustedNames = GetPreferences.trustedRemoteDevices
    for name in trustedNames {
        if name == peerID.displayName {
            invitationHandler(true, session)
            return
        }
    }
    invitationHandler(false, session)
}
None of the above worked for me.
I resolved it only by disabling encryption...
let session = MCSession(peer:myPeerId, securityIdentity: nil, encryptionPreference: MCEncryptionPreference.none)
When the peerID used to make the session and the peerID used to make the advertiser or browser do not match, I get this part of the error.
[GCKSession] Wrong connection data. Participant ID from remote connection data = 6FBBAE66, local participant ID = 3A4C626C
Once peerIDs match, that part of the error goes away.
There might still be some other connection problems though.
I found out what was wrong. I was vending the MCPeerID object that I pass into the MCSession instances as a computed class property instead of storing it as a stored property. So I changed it to a stored instance property and everything started working! Thanks Tanya for pointing me in the direction of the MCPeerID object.
Old Code
// Class Properties
static var localPeer : MCPeerID { return MCPeerID(displayName: GetPreferences.deviceName!) }
New Code
// Instance Properties
let localPeer = MCPeerID (displayName: GetPreferences.deviceName!)
The problem for me was that I never set the delegate of MCSession. I got all the same error messages that the OP mentioned, which made me think the connection was broken, but really I just forgot to set the delegate. After setting the delegate, all the error messages still printed, but otherwise my delegate methods got called normally upon receiving a message!
I've inflicted this problem on myself twice. Hopefully this helps someone reading along!
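For anyone else setting this up, a minimal sketch of the peer-ID and session wiring with the delegate set; the class name and the minimal delegate stubs are illustrative, not the OP's code:
import MultipeerConnectivity
import UIKit

final class ConnectionManager: NSObject, MCSessionDelegate {
    // A single stored peer ID, reused for the session, advertiser and browser.
    let peerID = MCPeerID(displayName: UIDevice.current.name)

    lazy var session: MCSession = {
        let session = MCSession(peer: peerID, securityIdentity: nil, encryptionPreference: .required)
        session.delegate = self   // easy to forget; without it no callbacks arrive
        return session
    }()

    // MARK: - MCSessionDelegate (minimal stubs)
    func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {}
    func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didReceiveResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {}
    func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, at localURL: URL?, withError error: Error?) {}
}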
I got it to work with tvOS 10.0 beta with this...
peerID = MCPeerID(displayName: UIDevice.current.name)
Although I am still seeing these errors...
2016-09-08 10:13:43.016600 PeerCodeATV[208:51135] [ViceroyTrace] [ICE][ERROR] ICEStopConnectivityCheck() found no ICE check with call id (847172408)
2016-09-08 10:13:47.577645 PeerCodeATV[208:51155] [GCKSession] SSLHandshake returned with error [-9819].
Same problem for me with an app I've had on the iTunes Store for years.
The latest 10.1 beta update now seems to fix the Bluetooth issue with my app without any change to my code.

UIDocumentPickerViewController not loading document from iCloud on device

I just tried to add iOS's document picker (UIDocumentPickerViewController) as directed by the Document Picker Programming Guide.
I'm trying to Import (i.e. copy locally...hopefully the simplest case) a file using the iCloud document provider (I've tested others such as DropBox and Google Drive and they work fine). Here's how I call up the document picker menu:
...
UIDocumentMenuViewController *documentPickerMenu = [[UIDocumentMenuViewController alloc]
    initWithDocumentTypes:@[@"public.text", @"public.data"]
    inMode:UIDocumentPickerModeImport];
documentPickerMenu.delegate = self;
[self presentViewController:documentPickerMenu animated:YES completion:nil];
...
and here is how I bring up the provider-specific document picker:
- (void)documentMenu:(UIDocumentMenuViewController *)documentMenu didPickDocumentPicker:(UIDocumentPickerViewController *)documentPicker
{
    documentPicker.delegate = self;
    [self presentViewController:documentPicker animated:YES completion:nil];
}
and here is my delegate function, which never gets called (i.e. never see log line, breakpoint never hit):
- (void)documentPicker:(UIDocumentPickerViewController *)controller didPickDocumentAtURL:(NSURL *)url {
    if (controller.documentPickerMode == UIDocumentPickerModeImport) {
        NSLog(@"Opened %@", url.path);
    }
}
I tested it and it worked fine on the iOS simulator. However when I load it on an iOS device (an iPad), the document picker comes up fine, shows me valid iCloud documents, but when I pick one all I see in the console is:
plugin com.apple.UIKit.fileprovider.default invalidated
And my UIDocumentPickerDelegate method documentPicker:didPickDocumentAtURL: is never called.
I believe I have properly provisioned the application with the iCloud entitlements.
Can someone please help?