How can I (or can I even) find out which iCloud database a CKRecord belongs to?
There should be four possibilities at this time:
private
public
shared
no database connection yet, if created on device
There is nothing in CKRecord that directly tells you what database it came from. But you always know the database from the source of the record.
If you receive a subscription notification or fetch changes, you know the database from the CKQueryNotification, CKRecordZoneNotification, or the newer CKDatabaseNotification.
Of course, if you are creating a new CKRecord, there is no database yet, but you know which database you intend to use (public, private, or the newer shared).
Certainly if you get a CKRecord from performing a query, you know which database was used for the query operation.
If you need to persist a local copy of the record and then later you need to load that local copy and send it to the proper database, then your local copy must include data telling you which database it belongs to. Since you can always determine the source of a CKRecord when you obtain the record, this isn't an issue.
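Since CloudKit does not record the scope on the record itself, one simple approach is to store the scope alongside the archived record. A minimal Swift sketch (the LocalRecord type and its property names are our own, not CloudKit API):

```swift
import CloudKit

// A local wrapper pairing a record's system fields with the database
// scope it belongs to (or the scope you intend to use for a new record).
struct LocalRecord {
    let scope: CKDatabase.Scope   // .public, .private, or .shared
    let systemFieldsData: Data    // archived via encodeSystemFields(with:)
}

func makeLocalRecord(for record: CKRecord, from scope: CKDatabase.Scope) -> LocalRecord {
    // Archive only CloudKit's system fields; app-specific fields
    // can be persisted separately.
    let archiver = NSKeyedArchiver(requiringSecureCoding: true)
    record.encodeSystemFields(with: archiver)
    return LocalRecord(scope: scope, systemFieldsData: archiver.encodedData)
}

// Later, pick the correct database from the saved scope:
func database(for local: LocalRecord, in container: CKContainer) -> CKDatabase {
    return container.database(with: local.scope)
}
```

When you later load the local copy, `container.database(with:)` gives you back the right CKDatabase to save to.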
Related
I use NSPersistentCloudKitContainer to save objects in CoreData + CloudKit. I have integrated a sharing function that moves an object to a separate zone to share using UICloudSharingController, as described in https://developer.apple.com/wwdc21/10015
When the user stops sharing, I want the object in the shared zone to be deleted, and moved back to the CoreData + CloudKit standard private zone. Deleting the CKShare and its zone is done using the following method:
/**
 Delete the Core Data objects and the records in the CloudKit record zone associated with the share.
 */
func purgeObjectsAndRecords(with share: CKShare, in persistentStore: NSPersistentStore? = nil) {
    guard let store = (persistentStore ?? share.persistentStore) else {
        print("\(#function): Failed to find the persistent store for share: \(share)")
        return
    }
    persistentContainer.purgeObjectsAndRecordsInZone(with: share.recordID.zoneID, in: store) { (zoneID, error) in
        if let error = error {
            print("\(#function): Failed to purge objects and records: \(error)")
        }
    }
}
How do I deep copy the CKShare back to the private zone before deleting it?
I am not sure if I understand you correctly, but I will try an answer:
Assume a record hierarchy or a zone in the private database should be shared by the owner.
When sharing is initiated a user is invited to share the data, and a CKShare record is initialized in the owner's private database.
When a user accepts the invitation, the CKShare record and the shared data are made accessible for the user via the user's shared database. They are not copied to the shared database; the shared database is just a window to the private database of the owner. However if the shared database is mirrored by CoreData + CloudKit to a persistent store of the share user, NSManagedObjects are created for the shared data.
When the owner or the user stops sharing, these NSManagedObjects should normally no longer be accessible to the user. In principle this is also handled by iCloud mirroring: the shared database is no longer a window into the owner's data, i.e. it no longer contains the CKShare record or the shared data, and mirroring therefore deletes them from the user's persistent store. But this may take a while. To delete the local copies of the shared data faster, one can call persistentContainer.purgeObjectsAndRecordsInZone.
Now to your question, which I do not understand: what do you mean by "deep copy the CKShare back to the private zone"? The owner's private database has never been modified (apart from updating the user's status in the CKShare record or, when sharing stops for the last user, the deletion of the CKShare record). So there is no need to copy the CKShare record back. Nor has the user's private database been modified.
The only situation where a "deep copy" back to the private database makes sense to me is when the share user wants to keep the shared data even after sharing stops. If you want to do this, you would have to copy all shared objects as soon as they become available, i.e. as soon as they are mirrored from the iCloud shared database to the local persistent store. You could listen for the .NSPersistentStoreRemoteChange notification to do the copying. purgeObjectsAndRecordsInZone would then delete only the originals, not the copies.
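Such a copy could start from a sketch like the following (an assumption-laden outline, not Apple's API for this: the copyAttributes name is ours, only attributes are copied, and a real "deep copy" would also have to walk relationships):

```swift
import CoreData

// Shallow-copies the attributes of a shared object into a new object in
// another context (e.g. one backed by the private store), so the data
// survives a later purge of the shared zone. Relationships are omitted.
func copyAttributes(of object: NSManagedObject,
                    into context: NSManagedObjectContext) -> NSManagedObject {
    let entityName = object.entity.name!
    let copy = NSEntityDescription.insertNewObject(forEntityName: entityName,
                                                   into: context)
    for (name, _) in object.entity.attributesByName {
        copy.setValue(object.value(forKey: name), forKey: name)
    }
    return copy
}
```

You would call this from your .NSPersistentStoreRemoteChange handler for each newly mirrored shared object; purgeObjectsAndRecordsInZone would then delete only the originals.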
EDIT:
Let's take an example:
A user, called here "owner", has some owner records in a CoreData persistent store that is mirrored to iCloud to the owner's private database.
During setup, iOS creates in the owner's private database a new zone "com.apple.coredata.cloudkit.zone".
Assume first that no records are shared.
Then iOS will update the persistent store of all devices logged in to the same iCloud account:
Local changes are exported to this zone in the owner's private database, and iCloud changes are imported to the owner's persistent store.
Now assume that the owner invites another user, called here "participant", to share either an owner's record hierarchy or an owner's zone.
Then a CKShare record is created in the owner's private database that specifies the sharing details, i.e. what is shared by whom.
The participant, who has the same app, has some participant's records in the participant's persistent store that is mirrored to the participant's private database.
During setup, iOS creates in the participant's private database a new zone "com.apple.coredata.cloudkit.zone".
When the participant accepts the owner's invitation to share data, iOS maps the owner's shared data to the participant's shared database. "Mapping" means that the owner's data that are in the com.apple.coredata.cloudkit.zone zone in the owner's private database appear now in the participant's shared database in a new zone "com.apple.coredata.cloudkit.zone". Together with the shared data, the CKShare record of the owner's private database is also mapped to the participant's shared database.
This zone is now mirrored by iOS to the participant's persistent store.
For the owner, nothing has changed except the CKShare record.
When the owner or the participant stops sharing, the mapping of the owner's data in the owner's private database is terminated, i.e. the owner's shared data no longer appear in the participant's shared database.
Since they are deleted for the participant (but not for the owner), this is mirrored to the participant's persistent store and the shared records are deleted in the participant's persistent store. However, this takes a while. In order to delete the shared data immediately, one can use persistentContainer.purgeObjectsAndRecordsInZone when sharing is terminated.
I hope this clarified the situation!
I see that data objects in the Baqend data storage are versioned.
Are the previous versions accessible?
Does the system store the ID of the user who wrote/updated the object?
Just curious what that is doing and if I need to try to create my own log or if there is something built in.
Specifically, I have administrators who will manually verify some data that is put into the database, then change a specific field to "approved". We need to know the ID of the last person that modified the data.
Baqend does not keep previous versions, and it only collects access logs.
To save which user updated an object, you can, for example, use an update handler to save the user reference in your object.
I have a new team member who belongs in a team, but the member doesn't have an ID yet because the record hasn't been sent to the server. How do I tell the team that the member belongs to it, and have it appear in the team list, before sending the member to the server and getting a unique ID assigned?
I think ember is geared towards saving the record to the back end, and then on the success callback creating your relationship and save it again. This ensures that ember-data remains a slave to the backend, ensuring data integrity.
You could look at creating an ID in Ember, but this certainly sounds like "here be dragons", as you would need to purge your ember-data store and get the real records and IDs from the server.
This process could be simplified by placing a flag on your model to say that it has a generated ID, but as I said here be dragons.
The safest option is to either just establish relationships once a record has been saved to the back end, or, if offline is a real concern, use something like Ember Pouch to keep a synced local copy of your data store; this will make the whole issue of resolving IDs a little more consistent.
Finally you could look into some sort of localStorage man in the middle to sync with your db, it has been discussed in this SO question.
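The save-first approach can be sketched framework-agnostically; here `backend` is a stand-in for the real adapter/store (Ember Data's actual calls, such as record.save() returning a promise, follow the same shape):

```javascript
// "Save first, then relate": persist the member so the server assigns a
// real ID, and only then add it to the team's relationship.
function saveThenRelate(member, team, backend) {
  return backend.save(member).then((saved) => {
    // The relationship now uses the server-issued ID, not a client guess.
    team.memberIds.push(saved.id);
    return backend.save(team);
  });
}

// Minimal in-memory backend standing in for the real server.
const backend = {
  nextId: 1,
  save(record) {
    if (record.id == null) record.id = this.nextId++;
    return Promise.resolve(record);
  },
};

const team = { id: 100, memberIds: [] };
saveThenRelate({ id: null, name: "Alice" }, team, backend).then(() => {
  console.log(team.memberIds); // contains the server-issued ID
});
```

Displaying the unsaved member in the team list before the save completes is then purely a UI concern, kept separate from the persisted relationship.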
I am using Microsoft Sync Framework 4.0 for syncing SQL Server database tables with an SQLite database on the iPad side.
Before making any database schema changes in the SQL Server database, we have to deprovision the database tables. Also, after making the schema changes, we reprovision the tables.
Now in this process, the tracking tables (i.e. the syncing information) get deleted.
I want the tracking table information to be restored after reprovisioning.
How can this be done? Is it possible to make DB changes without deprovisioning?
E.g., the application is in version 2.0 and syncing is working fine. In the next version 3.0, I want to make some DB changes. In the process of deprovisioning and reprovisioning, the tracking info gets deleted, so all the tracking information from the previous version is lost. I do not want to lose the tracking info. How can I restore this tracking information from the previous version?
I believe we will have to write custom code or a trigger to store the tracking information before deprovisioning. Could anyone suggest a suitable method or provide some useful links regarding this issue?
The provisioning process should automatically populate the tracking table for you; you don't have to copy and reload it yourself.
Now, if you think the tracking table is where the framework stores what was previously synced, the answer is no.
The tracking table simply stores what was inserted/updated/deleted. It's used for change enumeration. The information on what was previously synced is stored in the scope_info table.
When you deprovision, you wipe out this sync metadata. When you sync, it's as if the two replicas had never synced before; thus you will encounter conflicts as the framework tries to apply rows that already exist on the destination.
You can find information here on how to "hack" the Sync Fx created objects to effect some types of schema changes:
Modifying Sync Framework Scope Definition – Part 1 – Introduction
Modifying Sync Framework Scope Definition – Part 2 – Workarounds
Modifying Sync Framework Scope Definition – Part 3 – Workarounds – Adding/Removing Columns
Modifying Sync Framework Scope Definition – Part 4 – Workarounds – Adding a Table to an existing scope
Let's say I have one table "User" that I want to sync.
A tracking table "User_tracking" will be created, and some sync information will be present in it after syncing.
When I make any DB changes, this tracking table "User_tracking" will be deleted and the tracking info will be lost during the deprovisioning-reprovisioning process.
My workaround:
Before deprovisioning, I will write a script to copy all the "User_tracking" data into another temporary table, "User_tracking_1", so all the existing tracking info is preserved. When I reprovision the table, a new tracking table "User_tracking" will be created.
After reprovisioning, I will copy the data from "User_tracking_1" back into "User_tracking" and then delete the contents of "User_tracking_1".
The User_tracking info will be restored.
Is this the right approach?
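That workaround might look like this in T-SQL (a sketch only: table names come from the example above, SELECT INTO assumes the backup table does not yet exist, and the plain INSERT ... SELECT assumes the reprovisioned tracking table has the same column layout as before):

```sql
-- 1. Before deprovisioning: keep a copy of the tracking data.
SELECT * INTO User_tracking_backup FROM User_tracking;

-- 2. Deprovision, apply the schema changes, reprovision.
--    (Done via the Sync Framework provisioning APIs, not shown here.)

-- 3. After reprovisioning: restore the saved tracking rows and clean up.
INSERT INTO User_tracking SELECT * FROM User_tracking_backup;
DROP TABLE User_tracking_backup;
```

If the schema change alters the tracking table's columns, the INSERT would need an explicit column list mapping old columns to new ones.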
I am using SQLite to store my data, and I have two databases. In my application, each time a new request comes in, I attach the first db to the second db. The problem is that if two requests come in, it reports that the db is already in use (it tries to attach twice with the same alias name 'db'). I want to know if there is any way to check whether a database is already attached.
PRAGMA database_list;
outputs a result set with the full list of attached databases. Each row has three columns: the sequence number, the database name, and the database file (empty if the database is not associated with a file). The primary database is always named main, and the temporary db is always named temp.
sqlite> attach "foo.db" as foo;
sqlite> pragma database_list;
0|main|
2|foo|/Users/me/tmp/foo.db
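The same pragma can be checked programmatically to guard against a duplicate attach. A sketch using Python's built-in sqlite3 module (standing in here for whatever SQLite binding the application actually uses):

```python
import sqlite3

def is_attached(conn, name):
    """Return True if a database with this alias is already attached."""
    # PRAGMA database_list rows are (seq, name, file).
    rows = conn.execute("PRAGMA database_list").fetchall()
    return any(row[1] == name for row in rows)

conn = sqlite3.connect(":memory:")
if not is_attached(conn, "db"):
    conn.execute("ATTACH DATABASE ':memory:' AS db")
print(is_attached(conn, "db"))  # True
```

Wrapping the ATTACH in this check makes a second request on the same connection a no-op instead of an error.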
I assume you are reusing the same connection to the database for multiple requests. Because databases are attached to the connection object, attaching fails for the second and subsequent requests on the same connection. The solution, I think, is to attach the database immediately after a new connection is made, not each time a request is received.