I am using NSUbiquitousKeyValueStore to store user preferences in iCloud. To make sure I understand how NSUbiquitousKeyValueStore works, I successfully stored a String value and monitored it on a separate device. The following explains my configuration.
I started by observing the Notification that is posted when the NSUbiquitousKeyValueStore changes externally:
NSUbiquitousKeyValueStore.didChangeExternallyNotification
Then, I set a String value taken from a UITextField's input:
let store = NSUbiquitousKeyValueStore.default
store.set(text, forKey: key) // stage the value in the in-memory store
store.synchronize() // ask the system to push the in-memory values to iCloud
To verify that this works, I created a UIAlertController in the method that responds to the notification. The alert appeared on the secondary device within six seconds of setting the value on the primary device. However, the alert never appeared on the primary device after setting the value.
After reading the documentation for NSUbiquitousKeyValueStore, I could not find a reason why the primary device would not also receive the notification after updating the NSUbiquitousKeyValueStore.
Do I need to save the updated values locally before setting them in the NSUbiquitousKeyValueStore? I could persist the values in UserDefaults before persisting them in NSUbiquitousKeyValueStore. However, that would require me to revert the values in UserDefaults if an error occurred while persisting them to the NSUbiquitousKeyValueStore. And if the primary device is never notified of the changes it persists itself, would I even be notified of such an error through the notification? I struggle to believe that I would, given that I receive no notifications on success.
Is there something that I am missing, or is this the expected behavior?
Perhaps the documentation has been updated (it's been over a year since you asked).
"This notification is sent only upon a change received from iCloud; it is not sent when your app sets a value."
I assume that means that because no change is observed (the local device and iCloud are already in sync), you don't get a notification.
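In other words, the device that performs the write has to treat its own change as already applied, and rely on the external-change notification only for changes made elsewhere. A language-agnostic sketch of that pattern (plain TypeScript for illustration; SyncedStore and its methods are hypothetical, not an Apple API):

```typescript
type ChangeHandler = (key: string, value: string) => void;

class SyncedStore {
  private values = new Map<string, string>();
  constructor(private onChange: ChangeHandler) {}

  // Local write: the cloud will NOT notify us about this,
  // so we invoke the change handler ourselves.
  set(key: string, value: string): void {
    this.values.set(key, value);
    this.onChange(key, value);
  }

  // Remote write: this is the path the external-change notification drives.
  receiveRemote(key: string, value: string): void {
    this.values.set(key, value);
    this.onChange(key, value);
  }

  get(key: string): string | undefined {
    return this.values.get(key);
  }
}

// Usage: both local and remote writes reach the same handler.
const seen: string[] = [];
const store = new SyncedStore((k, v) => seen.push(`${k}=${v}`));
store.set("theme", "dark");            // local write -> handler fires
store.receiveRemote("theme", "light"); // remote notification -> handler fires
```

With this shape there is nothing to revert on failure either: the local value is authoritative until a remote notification replaces it.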
Related
I have an alert configured to send an email when the number of cloud-function executions that finished in a status other than 'error' or 'ok' is above 0 (grouped by function name).
The way I defined the alert is:
And the secondary aggregator is delta.
The problem is that once the alert is open, the filters seem not to matter anymore: the alert stays open because it sees the cloud function being triggered and finishing with any status (even 'ok' keeps it open, as long as the function is triggered often enough).
ATM the only solution I can think of is to define a log-based metric that does the counting itself, so the alert can be based on that custom metric instead of the built-in one.
Is there something that I'm missing?
Edit:
Adding another image to show what I think might be the problem:
The image above shows that the graph won't go down to 0 but stays at 1, which is not how other, normal incidents behave.
According to the official documentation:
"Monitoring automatically closes an incident when it observes that the condition is no longer met or when 7 days have passed without an observation that the condition is still being met."
That made me think that there are times where the condition is not relevant to make it close the incident. Which is confirmed here:
"If measurements are missing (for example, if there are no HTTP requests for a couple of minutes), the policy uses the last recorded value to evaluate conditions."
A lack of HTTP requests therefore isn't a reason to close the incident, because the policy keeps using the last recorded value (the one that triggered the alert).
So using alerts on HTTP-request metrics is fine, but you need to close the incidents yourself. If you want them to close automatically, I think it would be better to use a custom metric instead.
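The counting that the custom log-based metric would have to encode is simple to state: count only finishes whose status is neither 'ok' nor 'error', grouped by function name. A rough sketch of that logic (plain TypeScript; the types are illustrative, not a GCP API):

```typescript
interface ExecutionLog {
  functionName: string;
  status: string; // e.g. "ok", "error", "timeout", "crash"
}

// Count executions that finished in a status other than "ok" or "error",
// grouped by function name - the condition the alert is meant to express.
function abnormalFinishCounts(logs: ExecutionLog[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const log of logs) {
    if (log.status === "ok" || log.status === "error") continue;
    counts.set(log.functionName, (counts.get(log.functionName) ?? 0) + 1);
  }
  return counts;
}

// Usage: only the "timeout" finish is counted; "ok" and "error" are not.
const counts = abnormalFinishCounts([
  { functionName: "resizeImage", status: "ok" },
  { functionName: "resizeImage", status: "timeout" },
  { functionName: "sendEmail", status: "error" },
]);
```

Because the metric itself would be 0 whenever no abnormal finishes occur, an alert on it can drop back below the threshold and close on its own.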
Just wanted to know from a high level how I would accomplish this.
I thought that when a user opens the application, I will keep track of the last opened time in a Dynamo DB table.
Then I could have a background worker periodically check whether anybody hasn't used the app in 3 or 4 days and send a push notification, e.g., "you haven't used your app in a while, why don't you open it up and do XYZ."
From a very high level, there are two possible ways:
1.) Local notifications (you don't need AWS for this):
You can schedule a local notification every time the user opens the app (or better, every time the user brings the app to the foreground). It works like: user opens app -> cancel the old scheduled notification, if any -> schedule a new notification for "in 3 or 4 days" -> ready :-)
You can use something like this: https://github.com/zo0r/react-native-push-notification (see the section Scheduled Notifications).
2.) You could do it with remote notifications (https://aws.amazon.com/sns/):
You can go the way you proposed. Then you have to store an entry in your db with the push notification token of the device and the last time the app was opened. Your worker then has to check and send the push message to the device using a service like SNS.
I would recommend 1.) over 2.) because you are independent of the user's internet connection when capturing the app-opening info. With 2.), you can miss the opening info when the user opens the app without an internet connection. Also, 2.) is more expensive than 1.) as your app scales.
An advantage of 2.) would be that you are more flexible about when and what you send in your notification, since you can edit it on the server side. With 1.), it is coded into your app (at least until you build a synchronization mechanism for the variables) :-)
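The cancel-and-reschedule flow from option 1.) can be sketched independently of any particular library (the Scheduler interface below is a hypothetical stand-in for whatever API your notification library exposes, not the real react-native-push-notification API):

```typescript
// Hypothetical scheduler interface; an adapter would map these calls
// onto your push-notification library of choice.
interface Scheduler {
  cancel(id: string): void;
  schedule(id: string, message: string, fireDate: Date): void;
}

const REMINDER_ID = "come-back-reminder";
const THREE_DAYS_MS = 3 * 24 * 60 * 60 * 1000;

// Call this every time the app comes to the foreground:
// drop the previous reminder and push the fire date 3 days out.
function rescheduleReminder(scheduler: Scheduler, now: Date = new Date()): Date {
  scheduler.cancel(REMINDER_ID);
  const fireDate = new Date(now.getTime() + THREE_DAYS_MS);
  scheduler.schedule(
    REMINDER_ID,
    "You haven't used your app in a while, why don't you open it up and do XYZ.",
    fireDate,
  );
  return fireDate;
}

// Usage with a trivial in-memory scheduler:
const pending = new Map<string, Date>();
const fake: Scheduler = {
  cancel: (id) => { pending.delete(id); },
  schedule: (id, _msg, date) => { pending.set(id, date); },
};
const fired = rescheduleReminder(fake, new Date(0));
```

Using a fixed notification id is what makes the reminder replace itself instead of piling up one notification per app open.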
We have a remote event receiver associated with a list and hooked to all events there. When you update a list item using the OOB SharePoint page, the event receiver is executed; the web service that takes care of the follow-up actions works nicely. However, when you update the item using CSOM code, e.g. in a simple console application, nothing happens. The event receiver is not called at all. I found this issue on both SP 2013 and 2016.
I will not post any code, as it is irrelevant: the item is updated using the standard approach and the values actually change in the list item; only the event receiver is not fired. To put it simply:
item updated manually from site -> event receiver fired
item updated via CSOM -> event receiver not fired.
I remember a similar issue on SharePoint 2010 when using server-side code and the system account. Could it be that, behind the scenes, the web service called by CSOM (e.g. list.asmx) is also using the system account to make changes? It's just a hypothesis...
So after deeper investigation and much trial and error, we found out it was indeed an issue with the code in our event receiver. For some strange reason, the original developers were checking the Title field in the after properties and cancelling the code if it was not present. I guess it was probably an attempt to prevent looping calls.
One lesson learned: when updating via CSOM, the after-event properties contain only the fields that were altered by the CSOM code. Keep this in mind in case you need values other than the ones you are updating; you may need to copy and assign them again just because of this.
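The "copy and assign them again" workaround boils down to merging the sparse after properties over the item's current values before your receiver logic runs. A rough sketch (plain TypeScript; not the SharePoint API):

```typescript
type Fields = Record<string, string>;

// CSOM-triggered events surface only the changed fields in the after
// properties; merge them over the current item so downstream checks
// (like the Title check above) see every field.
function effectiveAfterProperties(current: Fields, after: Fields): Fields {
  return { ...current, ...after };
}

// Usage: a CSOM update that touched only "Status" still yields a Title.
const currentItem = { Title: "Quarterly report", Status: "Draft" };
const afterProps = { Status: "Published" };
const merged = effectiveAfterProperties(currentItem, afterProps);
```

The changed fields win the merge, so the result reflects the update while filling in everything the CSOM call left out.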
I am experimenting with turning a more traditional ember-data based app into a real-time app that uses websockets to keep multiple instances in sync.
My first attempt involves sending any updated record back to all open sessions that have accessed the record, so that they all have the latest copy. This includes the session that initiated the change, which means that after I call record.save() in the client, I get the updated copy back both from the REST API and from the websocket. The client end of the websocket simply calls store.pushPayload(data) to update the store.
This causes problems because the record might be inFlight at the time, and I get the error:
Attempted to handle event `pushedData` on [...] while in state root.deleted.inFlight.
I have several ideas:
Somehow prevent the client from receiving its own records back and only send them to other websocket connections.
Somehow synchronize access to the store so that when I call pushPayload the affected records are not in-flight.
Both of these seem rather complicated and I was hoping there's an established means of keeping multiple Ember apps up-to-date.
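For what it's worth, idea 1 can also be handled on the client: tag each outbound save with a client ID and drop inbound messages that carry your own tag, so a record never collides with its own in-flight echo. A minimal sketch (plain TypeScript; SyncClient and the message shape are hypothetical, not Ember API):

```typescript
interface SyncMessage {
  clientId: string;
  payload: unknown;
}

class SyncClient {
  constructor(
    readonly clientId: string,
    private apply: (payload: unknown) => void, // e.g. wraps store.pushPayload
  ) {}

  // Wrap outbound saves so the server can echo the originating client.
  tagOutbound(payload: unknown): SyncMessage {
    return { clientId: this.clientId, payload };
  }

  // Drop our own echoes; only apply records changed elsewhere.
  handleInbound(msg: SyncMessage): boolean {
    if (msg.clientId === this.clientId) return false;
    this.apply(msg.payload);
    return true;
  }
}

// Usage: the echo of our own save is dropped, a remote change is applied.
const applied: unknown[] = [];
const client = new SyncClient("session-42", (p) => applied.push(p));
const own = client.tagOutbound({ id: 1, title: "draft" });
client.handleInbound(own);
client.handleInbound({ clientId: "session-7", payload: { id: 1 } });
```

This keeps the server broadcast logic simple (send to everyone) while the originating session relies on the REST response it already gets from record.save().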
We have a very simple AppFabric setup where there are two clients -- let's call them Server A and Server B. Server A is also the lead cache host, and both Server A and B have a local cache enabled. We'd like to be able to make an update to an item from Server B and have that change propagate to the local cache of Server A within 30 seconds (for example).
As I understand it, there appear to be two different ways of getting changes propagated to the client:
Set a timeout on the client cache to evict items every X seconds. On next request for the item it will get the item from the host cache since the local cache doesn't have the item
Enable notifications and effectively subscribe to get updates from the cache host
If my requirement is to get updates to all clients within 30 seconds, then setting a timeout of less than 30 seconds on the local cache appears to be the only choice with option #1 above. Given the size of the cache, it would be inefficient to evict everything (99.99% of which probably hasn't changed in the last 30 seconds).
I think what we need to implement is option #2 above, but I'm not sure I understand how this works. I've read all of the msdn documentation (http://msdn.microsoft.com/en-us/library/ee808091.aspx) and have looked at some examples but it is still unclear to me whether it is really necessary to write custom code or if this is only if you want to do extra handling.
So my question is: is it necessary to add code to your existing application if you want updates propagated to all local caches via notifications, or is the callback feature just a bonus way of adding extra handling when a notification is pushed down? Can I just enable notifications, set the appropriate polling interval on the client, and have things just work?
It seems like the default behavior (when Notifications are enabled) should be to pull down fresh items automatically at each polling interval.
I ran some tests and am happy to say that you do NOT need to write any code to keep all clients in sync, provided you enable notifications on the cache in the cluster config.
In the client config you need to set sync="NotificationBased" on the localCache element.
The clientNotification element (and its pollInterval attribute) in the client config tells the client how often it should check for new notifications on the server. In this case, every 15 seconds the client will check for notifications and pull down any items that have changed.
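Mechanically, that notification-based sync is a poll loop: at each interval the client asks the host for changes it hasn't seen yet and evicts the affected local entries, so the next read goes back to the host. A rough, cache-agnostic sketch (plain TypeScript; all names are illustrative, not the AppFabric API):

```typescript
interface ChangeNotification {
  sequence: number;
  key: string;
}

// Stand-in for the cache host: returns notifications after a sequence number.
type NotificationSource = (afterSequence: number) => ChangeNotification[];

class LocalCache {
  private entries = new Map<string, string>();
  private lastSequence = 0;

  put(key: string, value: string): void {
    this.entries.set(key, value);
  }

  get(key: string): string | undefined {
    return this.entries.get(key);
  }

  // One polling tick: fetch notifications we haven't seen and evict
  // the affected local entries; returns how many entries were evicted.
  poll(source: NotificationSource): number {
    const changes = source(this.lastSequence);
    for (const change of changes) {
      this.entries.delete(change.key);
      this.lastSequence = Math.max(this.lastSequence, change.sequence);
    }
    return changes.length;
  }
}

// Usage: Server A caches a value, Server B changes it on the host,
// and A's next poll evicts the stale local copy.
const cache = new LocalCache();
cache.put("price:42", "10.00");
const hostLog: ChangeNotification[] = [{ sequence: 1, key: "price:42" }];
const evicted = cache.poll((after) => hostLog.filter((n) => n.sequence > after));
```

Tracking the last-seen sequence number is what keeps repeated polls from re-evicting entries that haven't changed again, which is why this scales better than blindly expiring the whole local cache every 30 seconds.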
I'm guessing the callback logic that you can add to your app is just in case you want to add your own special handling (like emailing the president every time an item changes in the cache).