Watch falls asleep during active HKWorkoutSession - SwiftUI

I read data from the accelerometer (CMMotionManager) and from a workout session (HKWorkoutSession) and transfer it to the phone in real time, but at a random moment the watch falls asleep.
In Info.plist I have WKBackgroundModes set to workout-processing. The strap is fastened tightly; at first I thought the watch was losing skin contact and that this was the cause. When I implemented the same functionality earlier with WatchKit there was no such problem, but now with SwiftUI the problem appears.
do {
    let workoutConfiguration = HKWorkoutConfiguration()
    workoutConfiguration.activityType = .mindAndBody
    workoutConfiguration.locationType = .unknown
    self.session = try HKWorkoutSession(healthStore: self.healthStore, configuration: workoutConfiguration)
    self.builder = self.session?.associatedWorkoutBuilder()
    self.builder?.dataSource = HKLiveWorkoutDataSource(healthStore: self.healthStore, workoutConfiguration: workoutConfiguration)
    self.session?.delegate = self
    self.builder?.delegate = self
    // Timer used to poll and print the current state every second
    self.timerHealth = Timer.scheduledTimer(timeInterval: 1, target: self, selector: #selector(self.getHealth), userInfo: nil, repeats: true)
    self.session?.startActivity(with: self.startDate)
    self.builder?.beginCollection(withStart: self.startDate) { (success, error) in
        guard success else {
            print(error?.localizedDescription ?? "beginCollection failed")
            return
        }
    }
} catch {
    print(error.localizedDescription)
    return
}
The timer prints the current time; at a random moment the output stops and resumes only after the screen is turned on.
Apple's documentation says that with the workout-processing background mode enabled the app will continue running in the background, but it does not. How do I set up background execution? What did I miss?

You could get suspended when running in the background if your app uses a lot of CPU. See https://developer.apple.com/documentation/healthkit/workouts_and_activity_rings/running_workout_sessions
To maintain high performance on Apple Watch, you must limit the amount
of work your app performs in the background. If your app uses an
excessive amount of CPU while in the background, watchOS may suspend
it. Use Xcode’s CPU report tool or the time profiler in Instruments to
test your app’s CPU usage. The system also generates a log with a
backtrace whenever it terminates your app.
It would be good to check whether your SwiftUI app is doing more work than your WatchKit-based one, causing the suspension. You should also see a log file saved on the watch that could indicate this. It'll look like a crash log but should note that CPU time was exceeded.
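If the one-second Timer in the question exists only to poll session state, one way to cut background CPU is to rely on the HKWorkoutSession delegate instead, which fires only on actual state changes. A minimal sketch, assuming the surrounding class already conforms to HKWorkoutSessionDelegate:
import HealthKit

// Sketch: replace per-second polling with delegate callbacks.
// These are the two required HKWorkoutSessionDelegate methods.
func workoutSession(_ workoutSession: HKWorkoutSession,
                    didChangeTo toState: HKWorkoutSessionState,
                    from fromState: HKWorkoutSessionState,
                    date: Date) {
    // Runs only when the session state actually changes,
    // so no periodic work happens while backgrounded.
    print("Session state: \(fromState.rawValue) -> \(toState.rawValue) at \(date)")
}

func workoutSession(_ workoutSession: HKWorkoutSession,
                    didFailWithError error: Error) {
    print("Workout session failed: \(error.localizedDescription)")
}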

Related

How do I fetch HomeKit values for usage in iOS 14 Widgets?

I am writing a HomeKit app that successfully shows live data from my supported accessories in-app. I can read single values (HMCharacteristic.readValue) or use notifications to stay updated (HMCharacteristic.enableNotification).
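For context, a minimal sketch of those two in-app access patterns (the function wrapper is just for illustration):
import HomeKit

// Sketch: one-shot read vs. subscribing to updates on a characteristic.
func observe(_ characteristic: HMCharacteristic) {
    // One-shot read; on success the fresh value lands in characteristic.value.
    characteristic.readValue { error in
        if let error = error {
            print("Read failed: \(error)")
        } else {
            print("Current value: \(String(describing: characteristic.value))")
        }
    }
    // Subscribe to updates, which arrive through HMAccessoryDelegate's
    // accessory(_:service:didUpdateValueFor:) callback.
    characteristic.enableNotification(true) { error in
        if let error = error {
            print("Could not enable notifications: \(error)")
        }
    }
}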
Now I want to implement Widgets that show this data on the user's Home Screen. This consists of four steps:
1. A dynamic Intent fetches all the registered (and supported) accessories from the HMHomeManager and lets the user select one of them to be shown on the Widget.
2. Inside the IntentTimelineProvider's getTimeline function I can then again use the HMHomeManager to retrieve the accessory I want to display on the Widget (based on the accessory's UUID, which is stored inside getTimeline's configuration parameter, the Intent).
3. Still inside the getTimeline function I choose the Services and Characteristics I need for displaying the accessory's Widget from the HMHomeManager.
4. Still inside the getTimeline function, I read the values of the chosen Characteristics.
Up until here (steps 1-3) everything works fine.
However, when I try to perform step 4 and read the values from the Characteristics using HMCharacteristic.readValue, the callback contains an error stating
Error Domain=HMErrorDomain Code=80 "Missing entitlement for API."
The Widget's Info.plist contains the 'Privacy - HomeKit Usage Description' field and the Target has the HomeKit capability.
After some research I came up with the following theory: obviously the whole WidgetKit API runs my code in the background, and it seems like HomeKit does not allow access from a background context. Well, it does allow access to Homes/Services/Characteristics, but it does not allow reading or writing Characteristic values (I guess to make sure app developers use HomeKit Automations and don't try to implement custom automations controlled by some background process of their app running on the iPhone).
My (simplified) getTimeline code:
func getTimeline(for configuration: SelectAccessoryIntent, in context: Context, completion: @escaping (Timeline<Entry>) -> ()) {
    // id stores the uuid of the accessory that was chosen by the user using the dynamic Intent
    if let id = configuration.accessory?.identifier {
        // Step 2.: fetch the accessory
        // hm is a HMHomeManager
        let hm = HomeStore.shared.homeManager
        // take a short nap until the connection to the local HomeKit instance is established
        // (otherwise hm.homes will return an empty array on first call)
        sleep(1)
        let accessories = hm.homes.flatMap({ h in h.accessories })
        if let a = accessories.filter({ a in a.uniqueIdentifier.uuidString == id }).first {
            // a holds our HMAccessory
            // Step 3.: select the characteristic I want
            // obviously the real code chooses a specific characteristic
            let s: HMService = a.services.first!
            let c: HMCharacteristic = s.characteristics.first!
            // Step 4.: read the characteristic's value
            c.readValue(completionHandler: { err in
                if let error = err {
                    print(error)
                } else {
                    print(c.value ?? "nil")
                }
                // complete with timeline
                completion(Timeline(entries: [RenderAccessoryEntry(date: Date(), configuration: configuration, value: c.value)], policy: .atEnd))
            })
        }
    }
}
My questions:
First: Is my theory correct?
If so: what can I do? Are there any entitlements that allow me to access HomeKit in the background, or similar? Do I need to perform the readValue call elsewhere (see the sketch after these questions)? Or is it just impossible to use the HomeKit API with WidgetKit in the current versions of HomeKit/WidgetKit/iOS, and the best I can do is hope they introduce this capability at some point in the future?
If not: What am I missing?
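One "elsewhere" I could imagine: read the values in the main app while it is foregrounded and cache them where the Widget can see them. A minimal sketch; the App Group identifier and helper names are placeholders:
import Foundation

// Sketch: share cached values between the app and the Widget via an App Group.
// "group.com.example.home" is a placeholder identifier.
let sharedDefaults = UserDefaults(suiteName: "group.com.example.home")

// In the main app, after a successful HMCharacteristic.readValue:
func cache(_ value: Any?, for characteristicID: UUID) {
    sharedDefaults?.set(String(describing: value), forKey: characteristicID.uuidString)
}

// In the Widget's getTimeline, instead of calling readValue:
func cachedValue(for characteristicID: UUID) -> String? {
    sharedDefaults?.string(forKey: characteristicID.uuidString)
}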

Experiencing deadlocks when using the Hikari transactor for Doobie with ZIO

I'm using Doobie in a ZIO application, and sometimes I get deadlocks (a total freeze of the application). That can happen if I run my app on only one core, or if I reach the maximum number of parallel connections to the database.
My code looks like:
def mkTransactor(cfg: DatabaseConfig): RManaged[Blocking, Transactor[Task]] =
  ZIO.runtime[Blocking].toManaged_.flatMap { implicit rt =>
    val connectEC  = rt.platform.executor.asEC
    val transactEC = rt.environment.get.blockingExecutor.asEC
    HikariTransactor
      .fromHikariConfig[Task](
        hikari(cfg),
        connectEC,
        Blocker.liftExecutionContext(transactEC)
      )
      .toManaged
  }
private def hikari(cfg: DatabaseConfig): HikariConfig = {
  val config = new com.zaxxer.hikari.HikariConfig
  config.setJdbcUrl(cfg.url)
  config.setSchema(cfg.schema)
  config.setUsername(cfg.user)
  config.setPassword(cfg.pass)
  config
}
I also set the leak detection parameter on Hikari (config.setLeakDetectionThreshold(10000L)), and I get leak errors that are not explained by the time taken to process DB queries.
There is a good explanation in the Doobie documentation about the execution contexts and the expectations for each: https://tpolecat.github.io/doobie/docs/14-Managing-Connections.html#about-transactors
According to the docs, the "execution context for awaiting connection to the database" (connectEC in the question) should be bounded.
ZIO, by default, has only two thread pools:
- zio-default-async – bounded
- zio-default-blocking – unbounded
So it is quite natural to believe that we should use zio-default-async since it is bounded.
Unfortunately, zio-default-async makes an assumption that its operations never, ever block. This is extremely important because it's the execution context used by the ZIO interpreter (its runtime) to run. If you block on it, you can actually block the evaluation progression of the ZIO program. This happens more often when there's only one core available.
The problem is that the execution context for awaiting DB connection is meant to block, waiting for free space in the Hikari connection pool. So we should not be using zio-default-async for this execution context.
The next question is: does it make sense to create a new thread pool and corresponding execution context just for connectEC? There is nothing forbidding you to do so, but it is likely not necessary, for three reasons:
1. You want to avoid creating thread pools, especially since you likely have several already created by your web framework, DB connection pool, scheduler, etc. Each thread pool has its cost, for example:
   - more for the JVM to manage;
   - more OS resources consumed;
   - thread switching, which is expensive in terms of performance;
   - a more complex application runtime to understand (complex thread dumps, etc.).
2. ZIO's thread pool ergonomics are already well optimized for their usage.
3. At the end of the day, you will have to manage your timeouts somewhere, and the connection is not the part of the system most likely to have enough information to know how long it should wait: different interactions (i.e., in the outer parts of your app, nearer to the use points) may require different timeout/retry logic.
All that being said, we found a configuration that works very well in an application running in production:
// zio.interop.catz._ provides a `zioContextShift`
val xa = (for {
// our transaction EC: wait for aquire/release connections, must accept blocking operations
te <- ZIO.access[Blocking](_.get.blockingExecutor.asEC)
} yield {
Transactor.fromDataSource[Task](datasource, te, Blocker.liftExecutionContext(te))
}).provide(ZioRuntime.environment).runNow
def transactTask[T](query: Transactor[Task] => Task[T]): Task[T] = {
query(xa)
}
I made a drawing of how the Doobie and ZIO execution contexts map to each other: https://docs.google.com/drawings/d/1aJAkH6VFjX3ENu7gYUDK-qqOf9-AQI971EQ4sqhi2IY
UPDATE: I created a repo with 3 examples of this pattern's usage (mixed app, pure app, ZLayer app) here: https://github.com/fanf/test-zio-doobie
Any feedback is welcome.

Recognize playing cards in an image

I'm trying to recognize Munchkin cards from the card game. I've been trying a variety of image-recognition APIs (Google Vision API, vize.ai, Azure's Computer Vision API and more), but none of them seem to work well.
They're able to recognize one of the cards when only one appears in the demo image, but when several appear together they fail to identify one or the other.
I've trained the APIs with a set of about 40 different images per card, with different angles, backgrounds and lighting.
I've also tried OCR (via the Google Vision API), which works only for some cards, probably due to the small letters and lack of detail on some cards.
Does anyone know of a way I can teach one of these APIs (or another) to read these cards better, or perhaps recognize cards in a different way?
The desired outcome is a user capturing an image while playing the game, and the application understanding which cards are in front of them and returning the results.
Thank you.
What a coincidence! I've recently done something very similar – link to video – with great success! Specifically, I was trying to recognise and track Chinese-language Munchkin cards to replace them with English ones. I used iOS's ARKit 2 (requires an iPhone 6S or higher; or a relatively new iPad; and isn't supported on desktop).
I basically just followed the Augmented Reality Photo Frame demo 41 minutes into WWDC 2018's What's New in ARKit 2 presentation. My code below is a minor adaptation to theirs (merely replacing the target with a static image rather than a video). The tedious part was scanning all the cards in both languages, cropping them out, and adding them as AR resources...
Here's my source code, ViewController.swift:
import UIKit
import SceneKit
import ARKit
import Foundation

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Set the view's delegate
        sceneView.delegate = self
        // Show statistics such as fps and timing information
        sceneView.showsStatistics = true
        sceneView.scene = SCNScene()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Create a configuration
        let configuration = ARImageTrackingConfiguration()
        guard let trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "card_scans", bundle: Bundle.main) else {
            print("Could not load images")
            return
        }
        // Set up the configuration
        configuration.trackingImages = trackingImages
        configuration.maximumNumberOfTrackedImages = 16
        // Run the view's session
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Pause the view's session
        sceneView.session.pause()
    }

    // MARK: - ARSCNViewDelegate

    // Create and configure nodes for anchors added to the view's session.
    public func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        let node = SCNNode()
        if let imageAnchor = anchor as? ARImageAnchor {
            // Create a plane matching the physical size of the detected card
            let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                                 height: imageAnchor.referenceImage.physicalSize.height)
            print("Asset identified as: \(anchor.name ?? "nil")")
            // Set a UIImage as the plane's texture
            plane.firstMaterial?.diffuse.contents = UIImage(named: "replacementImage.png")
            let planeNode = SCNNode(geometry: plane)
            // Rotate the plane to match the anchor
            planeNode.eulerAngles.x = -.pi / 2
            node.addChildNode(planeNode)
        }
        return node
    }

    func session(_ session: ARSession, didFailWithError error: Error) {
        // Present an error message to the user
    }

    func sessionWasInterrupted(_ session: ARSession) {
        // Inform the user that the session has been interrupted, for example, by presenting an overlay
    }

    func sessionInterruptionEnded(_ session: ARSession) {
        // Reset tracking and/or remove existing anchors if consistent tracking is required
    }
}
Unfortunately, I met a limitation: card recognition becomes rife with false positives the more cards you add as AR targets to distinguish between (to clarify: not the number of targets simultaneously onscreen, but the library size of potential targets). While a 9-target library performed with a 100% success rate, this didn't scale to a 68-target library (all the Munchkin treasure cards). The app tended to flit between one and three potential guesses when faced with each target. Seeing the poor performance, I didn't put in the effort to add all 168 Munchkin cards in the end.
I used Chinese cards as the targets, which are all monochrome; I believe it could have performed better if I'd used the English cards as targets (as they are full-colour, and thus have richer histograms), but on my initial inspection of a 9-card set in each language, I was receiving as many warnings for the AR resources being hard to distinguish for English as I was for Chinese. So I don't think the performance would improve so far as to scale reliably to the full 168-card set.
Unity's Vuforia would be another option for approaching this, but again it has a hard limit of 50-100 targets. With an (eye-wateringly expensive) commercial licence, you can delegate target recognition to cloud computers, which could be a viable route for this approach.
Thanks for investigating the OCR and ML approaches – they would've been my next ports of call. If you find any other promising approaches, please do leave a message here!
You are going in the wrong direction. As I understand it, you have an image, and inside that image there are several Munchkin cards (2 in your example). It is not only "recognition"; "card detection" is also needed. So your task should be divided into a card-detection task and a text-recognition task.
For each task you can use the following approach:
1. Card detection
   - simple colour segmentation
   - (if you have enough time and patience, train an SSD to detect cards)
2. Card text recognition
   - use Tesseract with an English dictionary
   - (you could add a card-rotation step to improve accuracy)
Hope that helps.
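Since the surrounding discussion is iOS-centric, here is a minimal sketch of the text-recognition step using Apple's Vision framework as an alternative to Tesseract (the function name and image are illustrative; the image is assumed to be an already-cropped card):
import Vision
import UIKit

// Sketch: OCR a cropped card image with Vision's text recognizer.
func recognizeCardText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Keep the single best candidate for each detected text region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(lines)
    }
    request.recognitionLevel = .accurate
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
The recognized lines can then be matched against the known card titles.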
You can try this: https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts/csharp#OCR. It will detect text, and then you can apply your own logic (based on the detected text) to handle actions.

Chrome Dev Tools: Divergence/error between flame chart and memory graph in "Timeline" profiling

While trying to debug long-running, memory-leaking code, I observed a discrepancy between the memory graph and the flame chart. I suspected that this is a "natural" reading error.
I tried to reproduce this behaviour with very simplified code and was successful...
The above chart was recorded while profiling this code:
window.onload = function() {
    var count = 0;
    function addDelayed() {
        count++;
        if (count > 50) {
            return;
        }
        var x = document.createElement("div");
        x.addEventListener("click", function() {
        });
        setTimeout(function() {
            addDelayed();
        }, 1000);
    }
    setTimeout(function() {
        addDelayed();
    }, 10000);
};
I've zoomed in on an arbitrary listener increase to know when it occurred:
I expected the node and listener rise to be at about the middle of the Function Call, not after it.
Can I assume that this is a measuring error, or am I forgetting to take something else into account?
This was recorded with Chrome 43.0.2357.125 (64-bit) (but the behaviour can be observed with older versions too)
The Timeline captures the number of event listeners/DOM nodes right after the timer-fired event has finished. We do the same for many other events, so the step will be at the end of the corresponding event. Showing it somewhere in the middle of the event would be unfair and imprecise, as we don't know the exact moment when the number changed. On the other hand, tracking each individual node/listener creation/deletion would result in much heavier instrumentation overhead, which we want to avoid.

SKPaymentQueue addPayment doesn't always trigger native confirm dialog

OK, I'm implementing IAP in an iOS app, and only some products in the store actually trigger the native purchase-handling dialogs.
Background:
The app uses cocos2d-x with JavaScript bindings for cross-platform support. We're dropping into native iOS code to implement the store handling.
These calls all work correctly:
[[SKPaymentQueue defaultQueue] addTransactionObserver:self];
[SKPaymentQueue canMakePayments];
[[SKProductsRequest alloc] initWithProductIdentifiers:productIdentifiers];
A note on the last one: all product IDs are checked and return as valid in the productsRequest:didReceiveResponse: callback, but only if I don't include the bundle ID in the identifiers that get sent. Most examples I saw said this was needed, but if it is included they all return as invalidProductIdentifiers. Could this be indicative of a problem?
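For comparison, a minimal sketch (in Swift, with placeholder product IDs) of a products request using bare identifiers without the bundle-ID prefix:
import StoreKit

// Sketch: query product info using bare product identifiers
// (no bundle-ID prefix). The identifiers below are placeholders.
final class ProductChecker: NSObject, SKProductsRequestDelegate {
    private var request: SKProductsRequest?

    func check() {
        let identifiers: Set<String> = ["coins_100", "premium_upgrade"]
        let request = SKProductsRequest(productIdentifiers: identifiers)
        request.delegate = self
        self.request = request  // keep a strong reference while in flight
        request.start()
    }

    func productsRequest(_ request: SKProductsRequest, didReceive response: SKProductsResponse) {
        print("Valid: \(response.products.map { $0.productIdentifier })")
        print("Invalid: \(response.invalidProductIdentifiers)")
    }
}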
So currently some products bring up the native purchase-confirmation dialog after their (previously verified) IDs are passed to [[SKPaymentQueue defaultQueue] addPayment:payment]. Most of them simply do nothing afterwards: no callback on paymentQueue:updatedTransactions:, no error code, no crash.
I can't see a pattern for why some work and most don't. At least one consumable, non-consumable and subscription each work, so I don't think it's the product type. I found that if I break and step through the code, pausing after [[SKPaymentQueue defaultQueue] addPayment:payment], there's a small chance a few products work more often, although it's not consistent. This led me to think it may be a threading issue, but you can see what I've tried below and it didn't help.
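To confirm whether any callback arrives at all, here is a minimal transaction-observer sketch (shown in Swift for brevity; the project itself is Objective-C) that logs every state change:
import StoreKit

// Sketch: log every transaction state change to verify that
// StoreKit calls back at all after addPayment.
final class PaymentLogger: NSObject, SKPaymentTransactionObserver {
    func paymentQueue(_ queue: SKPaymentQueue,
                      updatedTransactions transactions: [SKPaymentTransaction]) {
        for transaction in transactions {
            print("\(transaction.payment.productIdentifier) -> state \(transaction.transactionState.rawValue)")
            switch transaction.transactionState {
            case .purchased, .failed, .restored:
                // Finish terminal transactions so they don't clog the queue
                // (in a real app, deliver content before finishing).
                queue.finishTransaction(transaction)
            default:
                break
            }
        }
    }
}

// Usage: keep a strong reference and register early in the app's lifetime.
// let logger = PaymentLogger()
// SKPaymentQueue.default().add(logger)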
Things I've tried:
Reading around SO and elsewhere, people suggested changing test users, clearing the queue with [[SKPaymentQueue defaultQueue] finishTransaction:transaction], and that Apple's sandbox server sometimes 'has issues'. But none of this fixed it, and it strikes me as odd that I'm not getting crashes or errors; it just doesn't react at all to certain product IDs.
Here's the actual call with some things I've tried:
- (void)purchaseProductWithId:(const char*)item_code
{
    /** OCCASIONALLY MAY NEED TO CLEAR THE QUEUE **
    NSArray *transactions = [[SKPaymentQueue defaultQueue] transactions];
    for (id transaction in transactions) {
        [[SKPaymentQueue defaultQueue] finishTransaction:transaction];
    }// */
    // dispatch_async(dispatch_get_main_queue(), ^{
    SKPayment *payment = [SKPayment paymentWithProductIdentifier:[NSString stringWithUTF8String:item_code]];
    // [[SKPaymentQueue defaultQueue] performSelectorOnMainThread:@selector(addPayment:) withObject:payment waitUntilDone:NO];
    [[SKPaymentQueue defaultQueue] addPayment:payment];
    // });
}
If there's any other code that could be useful let me know.
Thanks for your help.
Edit:
I've added the hasAddObserver check from this question and that's not the problem either.
Turns out it was a temporary thing. I'd hate to accuse Apple's sandbox servers of being flaky, but nothing was changed and then days later it suddenly worked.
So if you have a similar issue maybe take a break and come back to it later?