AppFabric Syncing Local Caches

We have a very simple AppFabric setup with two clients -- let's call them Server A and Server B. Server A is also the lead cache host, and both Server A and B have a local cache enabled. We'd like to be able to update an item from Server B and have that change propagate to the local cache of Server A within, say, 30 seconds.
As I understand it, there appear to be two different ways of getting changes propagated to the clients:
1. Set a timeout on the local cache to evict items every X seconds. On the next request for an item, the client fetches it from the cache host, since the local cache no longer has it.
2. Enable notifications and effectively subscribe to get updates from the cache host.
If my requirement is to get updates to all clients within 30 seconds, then setting a timeout of less than 30 seconds on the local cache appears to be the only choice with option #1. Given the size of the cache, this would be inefficient: it would evict the entire cache, 99.99% of which probably hasn't changed in the last 30 seconds.
I think what we need to implement is option #2 above, but I'm not sure I understand how this works. I've read all of the MSDN documentation (http://msdn.microsoft.com/en-us/library/ee808091.aspx) and have looked at some examples, but it is still unclear to me whether it is really necessary to write custom code, or whether that is only needed if you want to do extra handling.
So my question is: is it necessary to add code to your existing application if you want updates propagated to all local caches via notifications, or is the callback feature just a bonus way of adding extra handling when a notification is pushed down? Can I just enable notifications, set the appropriate polling interval on the client, and have things just work?
It seems like the default behavior (when notifications are enabled) should be to pull down fresh items automatically at each polling interval.

I ran some tests and am happy to say that you do NOT need to write any code to keep all clients in sync. You need to enable notifications on the cache in the cluster configuration, and then make two changes in the client config, as sketched below.
In the client config you set sync="NotificationBased" on the <localCache> element.
The <clientNotification> element in the client config tells the client how often it should check for new notifications on the server. With pollInterval="15", every 15 seconds the client will check for notifications and invalidate any locally cached items that have changed (fresh copies are fetched from the cluster on the next request).
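For reference, a minimal sketch of the client-side configuration (the host name, port, and object counts are illustrative placeholders):

    <dataCacheClient>
      <hosts>
        <host name="ServerA" cachePort="22233"/>
      </hosts>
      <localCache isEnabled="true"
                  sync="NotificationBased"
                  objectCount="10000"
                  ttlValue="300"/>
      <!-- every 15 seconds, ask the cluster for queued notifications -->
      <clientNotification pollInterval="15" maxQueueLength="10000"/>
    </dataCacheClient>

On the cluster side, notifications also have to be enabled for the named cache itself; if I remember correctly, that is done with Set-CacheConfig -NotificationsEnabled $true from the AppFabric PowerShell module.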
I'm guessing the callback logic that you can add to your app is just in case you want to add your own special logic (like emailing the president every time an item changes in the cache).

Related

Race condition for Microservice architecture [CosmosDB]

We have a microservice-based architecture; let's say the frontend and backend are completely isolated. The backend microserviceA exposes a REST endpoint which basically calls a ThirdParty service and updates a record in Cosmos DB. This microservice is deployed on a Kubernetes cluster and can therefore run as multiple replicas for load balancing. As mentioned before, the frontend is isolated and consumes the exposed endpoint.
Problem:
The frontend has been written in such a manner that if the response is not obtained within a certain time frame, or if a network failure occurs, it retries the endpoint. It has been observed that in some rare scenarios (it doesn't matter which) the UI makes multiple calls (mostly 2) one after another, milliseconds apart. Here comes the race condition in the backend logic.
If the first call reaches ThirdParty first and obtains a success response, the second call will get a failure (because the first one was already a success). We cannot change the behaviour of ThirdParty.
Taking the above scenario as the base: if the second call (the failure) updates the DB first and reaches the UI first, the UI takes this as a failure (whereas the first call was actually a success) and takes the failure actions.
If the success call makes it to the UI first, everything works fine.
Possible solutions I can think of:
1) Put a cache as the source of truth, mapping apiCall -> status (a concrete sketch follows this list of options):

    if (entry not present in cache) {
        put entry in cache with status NULL (or something) and a specific TTL
        acquire lock on the entry {
            if (status is SUCCESS) return successResponse
            make ThirdParty call
            update DB
            update cache
            release lock
        }
    } else {
        acquire lock on the entry {
            make ThirdParty call
            update DB
            update cache
            release lock
        }
    }

It seems like the else block will never be executed, though.
2) Only in case of failure, instead of updating the DB straight away, thread.sleep(10000) a couple of times in the hope that another thread updates the DB with a success response. If there is still no success, return a failure and update the DB.
3) Put a poller on the UI side: if the response is a failure, poll a couple more times in the hope that the status changes, and if it doesn't, take the failure actions.
4) Optimistic locking for the Cosmos record:
https://cosmosdb.github.io/labs/dotnet/labs/10-concurrency-control.html
I'm not sure how this can help, though.
Let's say both API calls read the record at version 0. The second API call updates the DB record first; since the version has not changed, the update succeeds, and the DB now holds Failure as the value. The first API call then tries to update it, finds a version mismatch, so its update does not go through; another attempt is then made to update the DB, since that call was a success. (In case of failure, no retry of the DB update is made.) But the second API call still reaches the UI first, and the UI again takes the failure action. The UI would require a poller in such cases. And if the UI requires a poller anyway, why do we need the optimistic locking in the first place? :)
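For concreteness, here is a minimal sketch of option 1, assuming Redis as the cache (the call_third_party and update_db helpers are hypothetical). An atomic SET NX would also settle the doubt about the else branch: exactly one of the duplicate calls claims the entry, so the other genuinely takes the else path and just waits:

    import time
    import redis  # assuming Redis as the cache

    r = redis.Redis()

    def handle(api_call_id):
        key = f"call:{api_call_id}"
        # Atomic check-and-put: only the first of the duplicate requests
        # gets True here, so there is no check-then-act race.
        if r.set(key, "PENDING", nx=True, ex=60):
            status = call_third_party(api_call_id)   # hypothetical helper
            update_db(api_call_id, status)           # hypothetical helper
            r.set(key, status, ex=60)
            return status
        # Duplicate request: wait briefly for the first call to finish,
        # then report whatever it produced instead of hitting ThirdParty.
        for _ in range(20):
            status = r.get(key)
            if status and status != b"PENDING":
                return status.decode()
            time.sleep(0.5)
        return "FAILURE"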
I don't know Cosmos DB functionality well. If Cosmos provides some functionality to handle this, please be kind enough to share.
What would be the best way to handle this kind of scenario?
It seems that in your application design you have made it necessary to wait for each execution to finish before you fire the next one. I am not debating whether this is good or bad, that's a different discussion, but it seems the only option you have in this case is to fire all your DB updates in a synchronous manner.
Optimistic locking is very good for ensuring that the document you are updating has not been updated while your code did other things, but it will not help your UI issue here.
I think you need to abstract the UI in order to make this work properly; otherwise you are stuck running things in synchronous mode.
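For what it's worth, the optimistic-locking part itself looks roughly like this with the azure-cosmos Python SDK (a sketch; the account, database, container and field names are hypothetical):

    from azure.core import MatchConditions
    from azure.cosmos import CosmosClient, exceptions

    # Hypothetical endpoint, key, database and container names.
    client = CosmosClient("https://myaccount.documents.azure.com:443/",
                          credential="<key>")
    container = client.get_database_client("appdb").get_container_client("records")

    def update_status(record_id, partition_key, new_status):
        item = container.read_item(item=record_id, partition_key=partition_key)
        item["status"] = new_status
        try:
            # The replace succeeds only if the _etag is unchanged, i.e. nobody
            # else updated the record between our read and this write.
            container.replace_item(
                item=record_id,
                body=item,
                etag=item["_etag"],
                match_condition=MatchConditions.IfNotModified,
            )
            return True
        except exceptions.CosmosHttpResponseError as e:
            if e.status_code == 412:  # precondition failed: we lost the race
                return False
            raise

As noted above, this only guarantees that a stale write loses; it does not stop the losing (failure) response from reaching the UI first, so the UI-side ordering problem remains.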

Django expire cache every N'th HTTP request

I have a Django view whose response can be cached; however, the cache needs to be recycled every 100th time the view is called by an HTTP request.
I cannot use interval-based (time-based) caching here, since how quickly 100 requests accumulate varies with traffic.
How would I implement this? Are there other nice methods, apart from maintaining a counter (in the DB)?
Here are some ideas / feedback:
You're going to have to centralize something if you need the count to be exact. The Redis idea in the linked solution below looks OK if you can't put it in the main DB; if Redis is in your stack, I'd use that. If the 100 requests can be per user and you're using sessions, you could attach a counter to the session.
implementing a counter that counts requests with django
Not centralizing the counter outside of the webserver would mean your app needs to be, and stay, single-threaded to keep counts in memory. The count would also reset whenever the server restarted. Not a great idea IMO...
If you really can't make it work with anything else, you could hack something like a request counter on your load balancer (...if the load balancer is a single machine you control, and you're comfortable doing that) and pass it as a header for Django to read.
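As a minimal sketch of the centralized-counter idea using Django's own cache framework (the key names and the expensive_render helper are hypothetical; incr is atomic on the Redis and Memcached backends):

    from django.core.cache import cache
    from django.http import HttpResponse

    COUNTER_KEY = "my_view_hits"     # hypothetical key names
    PAYLOAD_KEY = "my_view_payload"

    def my_view(request):
        cache.add(COUNTER_KEY, 0, timeout=None)  # no-op if the counter exists
        hits = cache.incr(COUNTER_KEY)           # atomic on Redis/Memcached
        if hits % 100 == 0:
            cache.delete(PAYLOAD_KEY)            # recycle on every 100th request
        payload = cache.get(PAYLOAD_KEY)
        if payload is None:
            payload = expensive_render()         # hypothetical expensive work
            cache.set(PAYLOAD_KEY, payload)
        return HttpResponse(payload)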

AppFabric local cache notifications

I want to check that my understanding of AppFabric local cache invalidation is correct.
Assume I have notification-based invalidation set up on my local cache, and the default polling interval of 5 minutes.
1. Which way does the polling occur? I believe the local cache polls the distributed cache to check for notifications; is this correct?
2. Does that mean that if a change occurs in the distributed cache, it could be up to 5 minutes before that item in the local cache is invalidated, depending on when the last sync occurred?
3. Is there any way to see the last-synced time, through PowerShell or another mechanism?
1. Yes, the local cache polls the server every pollInterval, and the interval can be customized (see the snippet below).
2. Yes, that's correct.
3. I have doubts about PowerShell. Maybe there will be some trace events if you use Set-CacheLogging, but I didn't try it. What will definitely work is to subscribe to cache notifications right in the code and put a breakpoint in the callback.
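The interval mentioned in point 1 is set in the client configuration; a minimal sketch (300 seconds is the 5-minute default):

    <clientNotification pollInterval="300" maxQueueLength="10000"/>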

Windows Server AppFabric Cache Time-out based invalidation callback

I am using Windows Server AppFabric Caching in our application with local cache enabled.
It is configured as follows:

    <localCache isEnabled="true" sync="TimeoutBased" objectCount="1000" ttlValue="120"/>
I have set up time-out based invalidation with a time-out interval of 120 seconds.
With this configuration, the local cache will drop items from its in-memory store after 120 seconds and retrieve them from the cache cluster on the next request. Is it possible to add a callback that fires whenever the local cache has to hit the cache cluster to retrieve an item, instead of serving it locally?
Unfortunately, there is no way to know whether data was fetched locally or from the cluster. There are cache server notifications, but they are not reliable.
In your scenario, a good approach could be the Read-Through and Write-Behind feature. It does not fit all situations, but you can take a quick look.
Here are some links :
http://msdn.microsoft.com/en-us/library/hh377669.aspx
http://blogs.msdn.com/b/prathul/archive/2011/12/06/appfabric-cache-read-from-amp-write-to-database-read-through-write-behind.aspx

Maximum time between an asynchronous call and response (web-services)

Are there any best practices that dictate the maximum time between an asynchronous call and its corresponding response?
Basically I have a process that takes a long time to run (e.g. 5 minutes).
Option 1: I could expose the process as an asynchronous call. In that case the user calls my service and then, at some later time, I respond with the process status.
Option 2: I could set up the system so that there is a one-way operation on my web service that begins the process and immediately returns an id for it. I could then mandate that the consumer provide a one-way operation that I can call to report back when the process is done.
The first option is easier, as I don't have to mandate anything from the caller. The second seems better, as I can report back at any time (5 minutes to years later).
As I have complete control over the caller, and it's an internally available service, I am leaning towards option 2.
So I am wondering whether there are any time limits imposed on async calls (can they span days? if not, what is the best practice?), and whether option 2 is a standard pattern.
References would be extremely useful.
Option #2 is better, as it's more event-driven.
However, there is also an Option #3: the client issues a request to the server; the server queues the request and responds with an id; the client then checks back every so often, passing the request id, to see if it has completed.
This way you don't have to depend on the client being available when the request is completed.
I'd probably mix options #2 and #3 and let the client choose whether they want an event fired on their side or whether they just want to check back later. A sketch of the check-back half follows.
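A minimal sketch of the check-back (#3) half in Python with the requests library; the /jobs endpoints are hypothetical:

    import time
    import requests

    BASE = "https://example.internal/api"   # hypothetical internal service

    def run_long_process(payload, poll_every=5, timeout=600):
        # Submit the job; the server queues it and returns an id immediately.
        job_id = requests.post(f"{BASE}/jobs", json=payload).json()["id"]
        deadline = time.monotonic() + timeout
        # Check back every so often, passing the id, until it completes.
        while time.monotonic() < deadline:
            status = requests.get(f"{BASE}/jobs/{job_id}").json()["status"]
            if status in ("done", "failed"):
                return status
            time.sleep(poll_every)
        raise TimeoutError(f"job {job_id} still running after {timeout}s")

Because the connection is only open for the duration of each short poll, no long-lived channel has to be held, which sidesteps the timeout concern discussed in the update below.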
UPDATE
Rajah has asked about the maximum time between an async request and its response. For a web application, this is typically measured in seconds. Most servers have timeout values that default to somewhere around 30 seconds; personally, I think this is too long.
Consider that an async call requires the communications channel between the client and server to stay open for the duration. How many of those channels can a single server handle? More to the point, how many channels would you have to maintain as requests are made? This can become quite outrageous, even if you do control both ends.
Whatever is hosting your services will determine the maximum amount of time a request can be kept open. Again, every server I've seen measures this in seconds.