I am using Windows Server AppFabric Caching in our application with local cache enabled.
It is configured as follows:
<localCache isEnabled="true" sync="TimeoutBased" objectCount="1000" ttlValue="120"/>
I have set up time-out based invalidation with a time-out interval of 120 seconds.
As per this configuration, the local cache will evict items from the in-memory cache every 120 seconds and fetch them again from the cache cluster. Is it possible to add a callback that fires whenever the local cache has to hit the cache cluster to retrieve an item instead of serving it locally?
Unfortunately, there is no way to know whether data was fetched from the local cache or from the cluster. There are cache server notifications, but they are not reliable.
In your scenario, a good approach could be the Read-Through and Write-Behind feature. It does not fit all situations, but you can take a quick look.
Here are some links:
http://msdn.microsoft.com/en-us/library/hh377669.aspx
http://blogs.msdn.com/b/prathul/archive/2011/12/06/appfabric-cache-read-from-amp-write-to-database-read-through-write-behind.aspx
My use case is this:
I have X instances of App Engine NodeJS backend.
I use Cloud Memorystore for Redis as the caching layer.
To minimize Redis use and improve performance, each backend instance also has its own local cache.
The rules are like this when a request comes in to a backend instance (a rough sketch follows the list):
first check local cache
if found, use that.
if not found, look in Redis cache
update local cache with data from Redis
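For illustration, a rough sketch of that lookup chain on each instance; the ioredis client and the helper names here are just placeholders, not my actual code:

import Redis from "ioredis";

const redis = new Redis(); // Memorystore connection details omitted for brevity
const localCache = new Map<string, { value: string; expiresAt: number }>();
const LOCAL_TTL_MS = 30_000; // local cache times out much faster than the Redis cache

async function getCached(key: string): Promise<string | null> {
  // 1. check the local cache first
  const hit = localCache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value;

  // 2. not found locally, look in Redis
  const fromRedis = await redis.get(key);
  if (fromRedis !== null) {
    // 3. update the local cache with the data from Redis
    localCache.set(key, { value: fromRedis, expiresAt: Date.now() + LOCAL_TTL_MS });
  }
  return fromRedis;
}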
The local cache times out much quicker than the Redis cache, but I would still like to be able to send a "clear cache" message to all instances.
The entire thing works extremely well, but I'd like to be able to update the local caches by sending a pub/sub message which in turn would be received by all backend instances, clearing the local caches.
Yes, I could opt not to use local caching and rely on Redis Memorystore alone, but that would mean network traffic for every lookup, slower responses, and more Redis resource use.
Is this possible at all, or is there only 1 receiver for each pushed message?
I know a receiver can opt not to ack a message, but how do I make sure all instances have received it, and how do I know when that has happened so the last instance can send the ack? It seems impossible.
Your use case is uncommon, but IMO, you have 2 solutions:
Use Redis
Store a "clear cache" datetime in Redis. On each request, the App Engine instance reads that datetime. If the local value (the datetime from the previous check, or null if there has been no previous check, as on a new instance) is behind the Redis value, clear the local cache; otherwise, continue.
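A minimal sketch of that check, assuming an ioredis client and a key name of my own choosing (clear-cache-at):

import Redis from "ioredis";

const redis = new Redis(); // Memorystore connection details omitted
let lastSeenClearAt: string | null = null; // value from the previous check, null on a new instance

// Call this at the start of each request, before consulting the local cache.
async function maybeClearLocalCache(localCache: Map<string, unknown>): Promise<void> {
  const clearAt = await redis.get("clear-cache-at"); // e.g. an ISO timestamp set by whoever requests the flush
  if (clearAt !== null && clearAt !== lastSeenClearAt) {
    localCache.clear();        // the local value is behind the Redis value
    lastSeenClearAt = clearAt;
  }
}

// To request a flush from anywhere:
// await redis.set("clear-cache-at", new Date().toISOString());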
Use PubSub
(Not my preferred option.) When an instance starts, create a pull subscription on a Pub/Sub topic and pull messages from it. When you receive one, clear the local cache.
With Pub/Sub, when you publish a message to a topic, it is duplicated to every subscription. With this pattern, every instance has its own subscription and therefore receives the message.
However, you could hit the quota on the number of subscriptions. To limit that, set the subscription expiration to 1 day, so that a subscription is cleaned up automatically after 1 day without any subscriber.
I am not sure there is a tear-down notification on App Engine instances that would give you time to delete the subscription cleanly.
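A rough sketch of the Pub/Sub variant with the Node client library; the topic and subscription names are placeholders and error handling is omitted:

import { PubSub } from "@google-cloud/pubsub";
import { randomUUID } from "crypto";

const pubsub = new PubSub();

async function listenForCacheClears(localCache: Map<string, unknown>): Promise<void> {
  // One subscription per instance, expiring after 1 day without a subscriber.
  const [subscription] = await pubsub
    .topic("clear-cache")
    .createSubscription(`clear-cache-${randomUUID()}`, {
      expirationPolicy: { ttl: { seconds: 24 * 60 * 60 } },
    });

  subscription.on("message", (message) => {
    localCache.clear(); // every instance receives its own copy of the message
    message.ack();
  });
}

// Publishing side:
// await pubsub.topic("clear-cache").publishMessage({ data: Buffer.from("clear") });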
I have hosted my Node app on Cloud Run, and all of my requests are served within 300 - 600 ms. But one endpoint gets data from a 3rd-party service, so that request takes 1.2 s - 2.5 s to complete.
My doubts regarding this are
Are 1.2 s - 2.5 s requests suitable for Cloud Run? Or is there any rule that requests should be completed within xx ms?
Also see the screenshot: I got a message along with the request in the logs, "The request caused a new container instance to be started and may thus take longer and use more CPU than a typical request"
What caused a new container instance to be started?
Is there any alternative or work around to handle long requests?
Any advice / suggestions would be greatly appreciated.
Thanks in advance.
I don't think that will be an issue unless you're worried about the cost of the CPU/memory time, which honestly should only matter if you're getting 10k+ requests/day. So it probably doesn't matter, and Cloud Run can handle that just fine (my own app serves requests longer than that with no problem).
It's possible that your service was "scaled to zero", meaning that there were no containers left running to serve requests. In that case, it would be necessary to start up a new instance and wait for whatever initialization/startup costs are associated with that process. It's also possible that it was auto-scaled because all other instances were at their request limits. Make sure that your setting for max concurrent requests per instance is set greater than one; Node/Express can handle multiple requests at once. Plus, you'll only get charged for the total time spent, not per request.
In situations where you get very long (30 seconds, minutes+) operations, it may be a good idea to switch to some different data transfer method. You could use polling, where the client makes a request every 5 seconds and checks if the response is ready. You could also switch to some kind of push-based system like WebSockets, but Cloud Run doesn't have support for that.
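If you do go the polling route, a minimal client-side sketch might look like this (the endpoint path, host, and status codes are made up for illustration):

// Poll a hypothetical /jobs/:id endpoint every 5 seconds until the result is ready.
async function pollForResult(jobId: string, intervalMs = 5000): Promise<unknown> {
  for (;;) {
    const res = await fetch(`https://my-service.a.run.app/jobs/${jobId}`);
    if (res.ok) return res.json();                        // 200: result is ready
    if (res.status !== 202) throw new Error(`unexpected status ${res.status}`);
    await new Promise((r) => setTimeout(r, intervalMs));  // 202: still working, wait and retry
  }
}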
TL;DR: longer requests (~10-30 seconds) should be fine unless you're worried about the cost of the increased compute time they may incur at scale.
We have a very simple AppFabric setup where there are two clients -- let's call them Server A and Server B. Server A is also the lead cache host, and both Server A and B have a local cache enabled. We'd like to be able to make an update to an item from Server B and have that change propagate to the local cache of Server A within 30 seconds (for example).
As I understand it, there appear to be two different ways of getting changes propagated to the client:
Set a timeout on the client cache to evict items every X seconds. On the next request for the item, the client will get it from the host cache since the local cache no longer has it.
Enable notifications and effectively subscribe to get updates from the cache host
If my requirement is to get updates to all clients within 30 seconds, then setting a timeout of less than 30 seconds on the local cache appears to be the only choice if going with option #1 above. Due to the size of the cache, it would be inefficient to evict all of it (99.99% of which probably hasn't changed in the last 30 seconds).
I think what we need to implement is option #2 above, but I'm not sure I understand how this works. I've read all of the msdn documentation (http://msdn.microsoft.com/en-us/library/ee808091.aspx) and have looked at some examples but it is still unclear to me whether it is really necessary to write custom code or if this is only if you want to do extra handling.
So my question is: is it necessary to add code to your existing application if you want updates propagated to all local caches via notifications, or is the callback feature just a bonus way of adding extra handling or code when a notification is pushed down? Can I just enable notifications and set the appropriate polling interval at the client, and things will just work?
It seems like the default behavior (when Notifications are enabled) should be to pull down fresh items automatically at each polling interval.
I ran some tests and am happy to say that you do NOT need to write any code to ensure that all clients are kept in sync. On the cluster side, notifications have to be enabled for the cache (the notificationsEnabled="true" attribute on the cache's entry in the cluster config).
In the client config you need to set sync="NotificationBased" on the <localCache> element.
The <clientNotification> element in the client config (its pollInterval attribute) tells the client how often it should check for new notifications on the server. In this case, with a pollInterval of 15, the client will check for notifications every 15 seconds and pull down any items that have changed.
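Putting those pieces together, the client-side section would look roughly like this; the host name, port, and values are from this example rather than a canonical config, so adapt them to your environment:

<dataCacheClient>
  <localCache isEnabled="true" sync="NotificationBased" objectCount="1000" ttlValue="300"/>
  <clientNotification pollInterval="15" maxQueueLength="10000"/>
  <hosts>
    <host name="ServerA" cachePort="22233"/>
  </hosts>
</dataCacheClient>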
I'm guessing the callback logic that you can add to your app is just in case you want to add your own special logic (like emailing the president every time an item changes in the cache).
I want to see if my understanding of AppFabric local cache invalidation is correct.
Assume I have notification based invalidation set up on my local cache
The default polling interval is 5 minutes
Which way does the polling occur? I believe the local cache polls the distributed cache to check for notifications, is this correct?
Does that mean that if a change occurs to the distributed cache it could be anywhere up to 5 minutes before that item in the local cache is invalidated depending on when the last sync occurred?
Is there any way to see the last synched time, through powershell or another mechanism?
Yes, the local cache polls the server every pollInterval. The interval can be customized.
Yes, that's correct
I doubt it is possible through PowerShell. Maybe there will be some trace events if you use Set-CacheLogging, but I didn't try. What will definitely work is to subscribe to cache notifications right in the code and put a breakpoint in the callback.
We are experiencing very high response latency with Redis, to the point of not being able to get any output when using the INFO command through redis-cli.
This server handles requests from around 200 concurrent processes, but it does not store much information (at least to our knowledge). When the server is responsive, the INFO command reports used memory of around 20 - 30 MB.
When running top on the server, during periods of high response latency, CPU usage hovers around 95 - 100%.
What are some possible causes for this kind of behavior?
It is difficult to propose an explanation based only on the provided data, but here is my guess. I suppose that you have already checked the obvious latency sources (the ones linked to persistence), that no Redis command is hogging the CPU in the slow log, and that the size of the job data pickled by Python-rq is not huge.
According to the documentation, Python-rq inserts the jobs into Redis as hash objects and lets Redis expire the related keys (500 seconds seems to be the default value) to get rid of the jobs. If you have some serious throughput, at some point you will have many items in Redis waiting to be expired. Their number will be high compared to the number of pending jobs.
You can check this point by looking at the number of items to be expired in the result of the INFO command.
Redis expiration is based on a lazy mechanism (applied when a key is accessed) and an active mechanism based on key sampling, which is run in the event loop (in pseudo-background mode, every 100 ms). The point is that while the active expiration mechanism is running, no Redis command can be processed.
To avoid impacting the performance of the client applications too much, only a limited number of keys are processed each time the active mechanism is triggered (by default, 10 keys). However, if more than 25% of the sampled keys are found to be expired, it tries to expire more keys and loops. This is how this probabilistic algorithm automatically adapts its activity to the number of keys Redis has to expire.
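A simplified pseudocode model of that adaptive loop, written in TypeScript purely for illustration; the sample size and the 25% threshold come from the description above, and everything else is a stand-in rather than the real Redis internals:

const SAMPLE_SIZE = 10;      // keys examined per iteration, per the description above
const LOOP_THRESHOLD = 0.25; // keep looping while more than 25% of the sample was expired

function activeExpireCycle(expires: Map<string, number>): void {
  let expiredRatio: number;
  do {
    const sample = randomKeys(expires, SAMPLE_SIZE);
    const expired = sample.filter((key) => expires.get(key)! <= Date.now());
    expired.forEach((key) => expires.delete(key)); // remove keys found to be expired
    expiredRatio = sample.length ? expired.length / sample.length : 0;
  } while (expiredRatio > LOOP_THRESHOLD);         // work adapts to how many keys are expiring
}

function randomKeys(expires: Map<string, number>, n: number): string[] {
  // naive shuffle, good enough for a sketch
  return [...expires.keys()].sort(() => Math.random() - 0.5).slice(0, n);
}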
When many keys are to be expired, this adaptive algorithm can impact the performance of Redis significantly though. You can find more information here.
My suggestion would be to try to prevent Python-rq from delegating item cleanup to Redis by setting expiration. This is a poor design for a queuing system anyway.
I don't think reducing the TTL is the right way to avoid CPU usage when Redis expires keys.
Didier makes a good point that the current architecture of Python-rq delegates job cleanup to Redis by using the key-expire feature. And surely, as Didier said, it is not the best way (this is only used when result_ttl is greater than 0).
The problem should then arise when you have a set of keys/jobs whose expiration dates are close to one another, which could happen when you have bursts of job creation.
But Python-rq sets the expiration on a key only when the job has been finished by a worker, so this doesn't quite make sense: the expirations should be spread out over time, with enough space between them to avoid this situation.