Troubleshooting AppFabric Scaling Issues (Intermittent ErrorCode<ERRCA0017>:SubStatus<ES0006> Errors)

We've implemented Windows Server AppFabric caching for our web application. Initially, we were able to use the cache without any issues. We then increased traffic roughly 100-fold and began experiencing intermittent exceptions. The exceptions occur roughly once every two days, for about a minute at a time.
Our configuration:
9 web servers inserting/retrieving objects in cache:
Mostly short-lived, ~500-byte operational-type objects
Using 1 named region
Objects stored with tags
Retrieved in bulk for a given tag (a sketch of this access pattern follows the configuration summary below)
Cache Cluster:
1 host (lead) AppFabric 1.1 (version reported by get-cachehost is 3)
SQL configuration provider
96GB of RAM on host, the default 50% (48GB) allocated to AppFabric
Cache Host Config
Cache Client Config
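For context, a minimal sketch of the access pattern described above, assuming the dataCacheClient section referenced in the client config and hypothetical region/key/tag names:

using System;
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

class CacheAccessSketch
{
    static void Main()
    {
        // Reads the dataCacheClient section from the application config.
        DataCache cache = new DataCacheFactory().GetCache("default");
        cache.CreateRegion("ops"); // hypothetical named region; returns false if it already exists

        // Store a small operational object under a tag.
        var tags = new List<DataCacheTag> { new DataCacheTag("batch-42") }; // hypothetical tag
        cache.Put("item-1", "~500-byte payload", tags, "ops");

        // Bulk retrieval for a given tag (this enumeration is what appears as
        // GetNextBatch in the stack traces below).
        foreach (KeyValuePair<string, object> kv in cache.GetObjectsByTag(new DataCacheTag("batch-42"), "ops"))
        {
            Console.WriteLine("{0} => {1}", kv.Key, kv.Value);
        }
    }
}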
The errors in the order that they occur (the exceptions occur on each of the nine web servers during the one-minute period):
System.Net.Sockets.SocketException : An existing connection was forcibly closed by the remote host
Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode<ERRCA0016>:SubStatus<ES0001>:The connection was terminated, possibly due to server or network problems or serialized Object size is greater than MaxBufferSize on server. Result of the request is unknown. ---> System.ServiceModel.CommunicationException: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:15:00'. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
--- End of inner exception stack trace ---
at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
at System.ServiceModel.Channels.FramingDuplexSessionChannel.EndReceive(IAsyncResult result)
at Microsoft.ApplicationServer.Caching.WcfClientChannel.CompleteProcessing(IAsyncResult result)
--- End of inner exception stack trace ---
at Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ResponseBody respBody)
at Microsoft.ApplicationServer.Caching.DataCache.GetNextBatch(String region, DataCacheTag[] tags, GetByTagsOperation op, IMonitoringListener listener, Byte[][]& state, Boolean& more)
at Microsoft.ApplicationServer.Caching.CacheEnumerator.MoveNext()
at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
at System.Linq.Enumerable.<ExceptIterator>d__99`1.MoveNext()
at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
Microsoft.ApplicationServer.Caching.DataCacheException:
ErrorCode<ERRCA0017>:SubStatus<ES0006>:There is a temporary failure. Please retry later. (One or more specified cache servers are unavailable, which could be caused by busy network or servers. For on-premises cache clusters, also verify the following conditions. Ensure that security permission has been granted for this client account, and check that the AppFabric Caching Service is allowed through the firewall on all cache hosts. Also the MaxBufferSize on the server must be greater than or equal to the serialized object size sent from the client.)
at Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ResponseBody respBody)
at Microsoft.ApplicationServer.Caching.DataCache.GetNextBatch(String region, DataCacheTag[] tags, GetByTagsOperation op, IMonitoringListener listener, Byte[][]& state, Boolean& more)
at Microsoft.ApplicationServer.Caching.CacheEnumerator.MoveNext()
at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
at System.Linq.Enumerable.<ExceptIterator>d__99`1.MoveNext()
at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
Microsoft.ApplicationServer.Caching.DataCacheException:
ErrorCode<ERRCA0018>:SubStatus<ES0001>:The request timed out.
at Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ResponseBody respBody)
at Microsoft.ApplicationServer.Caching.DataCache.GetNextBatch(String region, DataCacheTag[] tags, GetByTagsOperation op, IMonitoringListener listener, Byte[][]& state, Boolean& more)
at Microsoft.ApplicationServer.Caching.CacheEnumerator.MoveNext()
at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
at System.Linq.Enumerable.<ExceptIterator>d__99`1.MoveNext()
at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
We have also created a tracelog session on the caching server to capture more information for diagnosing the issue - any suggestions on how to analyze it would be appreciated (I can make it available if need be).
We also monitored various AppFabric, CLR, and network performance counters; below is a screenshot of the event as it occurs:
Thanks in advance for any thoughts or advice you can share on resolving this issue.
UPDATE 1
The following exceptions occur continuously on the AppFabric caching server during the intermittent errors (extracted from trace logs):
System.ServiceModel.CommunicationException: The socket connection was aborted because an asynchronous send to the socket did not complete within the allotted timeout of 00:00:00.0082078. The time allotted to this operation may have been a portion of a longer timeout. ---> System.ObjectDisposedException: The socket connection has been disposed. Object name: 'System.ServiceModel.Channels.SocketConnection'.
--- End of inner exception stack trace ---
at System.ServiceModel.Channels.SocketConnection.ThrowIfNotOpen()
at System.ServiceModel.Channels.SocketConnection.BeginRead(Int32 offset, Int32 size, TimeSpan timeout, WaitCallback callback, Object state)
at System.ServiceModel.Channels.SessionConnectionReader.BeginReceive(TimeSpan timeout, WaitCallback callback, Object state)
at System.ServiceModel.Channels.SynchronizedMessageSource.ReceiveAsyncResult.PerformOperation(TimeSpan timeout)
at System.ServiceModel.Channels.SynchronizedMessageSource.SynchronizedAsyncResult`1..ctor(SynchronizedMessageSource syncSource, TimeSpan timeout, AsyncCallback callback, Object state)
at System.ServiceModel.Channels.FramingDuplexSessionChannel.BeginReceive(TimeSpan timeout, AsyncCallback callback, Object state)
at Microsoft.ApplicationServer.Caching.WcfServerChannel.CompleteProcessing(IAsyncResult result)

System.ServiceModel.CommunicationObjectAbortedException: The communication object, System.ServiceModel.Channels.ServerSessionPreambleConnectionReader+ServerFramingDuplexSessionChannel, cannot be used for communication because it has been Aborted.
at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
at System.ServiceModel.Channels.FramingDuplexSessionChannel.OnEndSend(IAsyncResult result)
at Microsoft.ApplicationServer.Caching.ReplyContext.EndSend(IAsyncResult result)

System.ServiceModel.CommunicationObjectFaultedException: The communication object, System.ServiceModel.Channels.ServerSessionPreambleConnectionReader+ServerFramingDuplexSessionChannel, cannot be used for communication because it is in the Faulted state.
at System.ServiceModel.Channels.CommunicationObject.ThrowIfDisposedOrNotOpen()
at System.ServiceModel.Channels.OutputChannel.Send(Message message, TimeSpan timeout)
at Microsoft.ApplicationServer.Caching.ReplyContext.Reply(Message message, TimeSpan timeout)

System.TimeoutException: Sending to via http://www.w3.org/2005/08/addressing/anonymous timed out after 00:00:15. The time allotted to this operation may have been a portion of a longer timeout. ---> System.TimeoutException: Cannot claim lock within the allotted timeout of 00:00:15. The time allotted to this operation may have been a portion of a longer timeout.
--- End of inner exception stack trace ---
at System.ServiceModel.Channels.FramingDuplexSessionChannel.OnSend(Message message, TimeSpan timeout)
at System.ServiceModel.Channels.OutputChannel.Send(Message message, TimeSpan timeout)
at Microsoft.ApplicationServer.Caching.ReplyContext.Reply(Message message, TimeSpan timeout)
UPDATE 2
After another day of troubleshooting we took the following actions which produced some improvement:
Based on this and this, we increased maxConnectionsToServer to 3. As a result we gained 50% more client requests/sec as recorded by the AppFabric Caching:Cache perf counter, but the intermittent errors did not stop occurring.
We increased maxBufferSize and maxBufferPoolSize to 2147483647 (Int32.MaxValue) in the cache server configuration. So far we are able to handle 300x traffic volume without errors. We will continue to increase traffic volume and monitor. More updates to follow.
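For reference, a minimal sketch of the client-side counterparts of these settings via DataCacheFactoryConfiguration (the server-side values live in the cache host's configuration; the host name and port here are hypothetical):

using System;
using Microsoft.ApplicationServer.Caching;

class ClientConfigSketch
{
    static DataCache Connect()
    {
        var config = new DataCacheFactoryConfiguration();
        config.Servers = new[] { new DataCacheServerEndpoint("cachehost01", 22233) }; // hypothetical host
        config.MaxConnectionsToServer = 3; // more parallel channels per client process
        config.TransportProperties = new DataCacheTransportProperties
        {
            MaxBufferSize = Int32.MaxValue,     // allow large serialized objects
            MaxBufferPoolSize = Int32.MaxValue
        };
        return new DataCacheFactory(config).GetCache("default");
    }
}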
UPDATE 3
We added two more hosts with 16GB each to the cluster and enabled high availability mode (via Secondaries=1). Currently the original host remains in the cluster with 96GB of RAM - all hosts have cacheSize = 12GB. On the cache clients we increased MaxConnectionsToServer to 12 (1 per core). Below are our findings:
Occasionally we get (once or twice every 10 minutes):
ErrorCode<ERRCA0017>:SubStatus<ES0005>:There is a temporary failure. Please retry later. (There was a contention on the store.)
ErrorCode<ERRCA0017>:SubStatus<ES0004>:There is a temporary failure. Please retry later. (Replication queue was full. This may happen during reconfiguration of cluster hosts.)
The original 96GB cache host still experiences the 1-minute outages described above; the new cache hosts haven't experienced them.
We plan to remove 80GB of RAM from the original cache host. More updates to follow.
UPDATE 4
The problem seems to have been solved by reducing the amount of RAM in the cache hosts to 16GB. We no longer see the intermittent errors with traffic increased to 400x. The case seems to be closed. Now on to the next issue: high availability.

Have you installed http://support.microsoft.com/kb/983182 and http://support.microsoft.com/kb/2527387?
In your code, do you check the exception for the RetryLater error code?
catch (DataCacheException ex2)
{
    if (ex2.ErrorCode == DataCacheErrorCode.RetryLater)
    {
        continue; // back off and retry; assumes a surrounding retry loop (full sketch below)
    }
    throw; // any other error code is not retryable here
}
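For completeness, a minimal bounded retry loop around the tag lookup, assuming hypothetical region/tag names (a sketch, not the original poster's code):

using System;
using System.Collections.Generic;
using System.Threading;
using Microsoft.ApplicationServer.Caching;

static class RetrySketch
{
    public static List<KeyValuePair<string, object>> GetByTagWithRetry(DataCache cache)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                // Materialize the enumeration so transient failures surface here.
                return new List<KeyValuePair<string, object>>(
                    cache.GetObjectsByTag(new DataCacheTag("batch-42"), "ops")); // hypothetical names
            }
            catch (DataCacheException ex2)
            {
                if (ex2.ErrorCode != DataCacheErrorCode.RetryLater || attempt >= 3)
                    throw;
                Thread.Sleep(200 * attempt); // simple linear backoff before retrying
            }
        }
    }
}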
The use of a named region forces the server to push the values of that named region to a single server instead of spreading out the hashes across all of your cache servers. ("To provide this added search functionality, objects in a region are limited to a single cache host." http://msdn.microsoft.com/en-us/library/ee790985(v=azure.10).aspx )
What I would recommend is that you shard your named region across two or more servers in a cluster - that is, use several named regions, since each region is pinned to a single host. This way you limit the exceptions to a smaller host while it is running the GC and trying to find more RAM to store objects and tags; a sketch follows.
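A minimal sketch of that idea, assuming hypothetical region names and a fixed shard count; because each named region lives on exactly one host, hashing keys across several regions lets a multi-host cluster spread the data:

using System;
using Microsoft.ApplicationServer.Caching;

static class RegionSharding
{
    const int RegionCount = 3; // assumption: roughly one region per cache host

    public static string RegionFor(string key)
    {
        // Mask off the sign bit so the modulo is always non-negative.
        return "ops_" + ((key.GetHashCode() & 0x7fffffff) % RegionCount); // hypothetical name prefix
    }

    public static void Put(DataCache cache, string key, object value, DataCacheTag[] tags)
    {
        string region = RegionFor(key);
        cache.CreateRegion(region); // returns false if the region already exists
        cache.Put(key, value, tags, region);
    }
}

The trade-off is that tag-based bulk reads must then fan out across all the regions and merge the results.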

Reposting an answer given by Jeff-ITGuy on social.msdn.microsoft.com:
You appear to be encountering an issue nearly identical to one I'm working on with Microsoft at the moment. If it's the same issue, it is probably caused by GC taking a long time and causing delays in AppFabric's response time. From your perf counters it looked like GC time shot up when you started getting the problem, so it probably is the same issue.
This issue is being actively investigated by Microsoft. In the meantime, in order to alleviate the problem (at least per our findings), you can run more servers with less memory (shrinking the memory space that GC works against) and you can increase the RequestTimeout on your client. By default that is set to 15000 (15 secs), but we have tried raising it to 30000, which helped eliminate some of the issues. This is NOT a good long-term solution in my opinion, just passing on information. I've seen the issue with servers having only 24GB of memory (with 12GB for cache), and it only got noticeably better when we tried 8GB servers with 4GB set for cache - that made GC do MUCH better.
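A minimal sketch of raising the client RequestTimeout programmatically, assuming a hypothetical host name (the same value can also be set declaratively via the requestTimeout attribute of the dataCacheClient config section):

using System;
using Microsoft.ApplicationServer.Caching;

class TimeoutSketch
{
    static DataCacheFactory CreateFactory()
    {
        var config = new DataCacheFactoryConfiguration();
        config.Servers = new[] { new DataCacheServerEndpoint("cachehost01", 22233) }; // hypothetical host
        config.RequestTimeout = TimeSpan.FromMilliseconds(30000); // up from the 15000 ms default
        return new DataCacheFactory(config);
    }
}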
Hope that helps, but if this is the issue I think it is there's no solution yet.
It did help; the intermittent errors stopped after we reduced the cache host RAM to 16GB.

The fix for this issue is currently available here:
http://support.microsoft.com/kb/2787717

Related

Spring Data Neo4J - Unable to acquire connection from pool within configured maximum time

We have a Reactive REST API using Spring Data Neo4j (Spring Boot v2.7.5) deployed to Kubernetes. When running a stress test to determine the breaking point, once the volume of requests the service can handle has been breached, the service does not auto-recover, even after the load has dropped to a level the service can handle.
After the service has fallen over the Neo4J health indicator shows:
“org.neo4j.driver.exceptions.ClientException: Unable to acquire connection from the pool within configured maximum time of 60000ms”
With respect to connection/configuration settings we are using defaults configured by SDN.
Observations:
Up until the point at which the service breaks, only a small number of connections are utilised; at the point at which it breaks, the number of connections in use jumps up to the max pool size and the above-mentioned error is observed. No matter how much time passes (even well beyond the max connection lifetime), the service is unable to acquire a connection from the pool. Upon manually shutting down and restarting the service/pod, the service returns to a healthy state.
As an interim solution we now check the Neo4J health indicator, if the mentioned error is present the liveness state is set to down which triggers Kubernetes to restart the service automatically. However, I’m wondering if there is an underlying issue with the connections in the pool not getting ‘cleaned up’?
You can take a look at this discussion https://github.com/spring-projects/spring-data-neo4j/issues/2632
I had the same issue. The problem is that either Spring Framework or the Neo4j reactive transaction manager doesn't close connections in a complex reactive flow, e.g. when there are a lot of inner calls/mappings and an exception is thrown somewhere inside.
So as a workaround you can add @Transactional in such places to avoid multiple transactions being created.

A timeout occurred while waiting for memory resources to execute the query in resource pool 'SloDWPool'

I have a series of Azure SQL Data Warehouse databases (for our development/evaluation purposes). Due to a recent unplanned extended outage (an issue with the Tenant Ring associated with some of these databases), I decided to resume the canary queries I had been running before but had quiesced for a couple of months due to frequent exceptions.
The canary queries are not running particularly frequently on any specific database - say, every 15 minutes. On one database, I've received two indications of issues completing the canary query within 24 hours. The error is:
Msg 110802, Level 16, State 1, Server adwscdev1, Line 1
110802;An internal DMS error occurred that caused this operation to fail. Details: A timeout occurred while waiting for memory resources to execute the query in resource pool 'SloDWPool' (2000000007). Rerun the query.
This database is under essentially no load, running at more than 100 DWU.
Other databases on the same logical server may be running under a load, but I have not seen the error on them.
What is the explanation for this error?
Please open a support ticket for this issue; support will have full access to the DMS logs and will be able to see exactly what is going on. This behavior is not expected.
While I agree a support case would be reasonable, I think you should also try scaling up to, say, DWU400 and retrying. I would also consider trying largerc or xlargerc on DWU100 and DWU400, as described here. Note that a larger resource class gets more memory and resources per query.
Run the following then retry your query:
EXEC sp_addrolemember 'largerc', 'yourLoginName'

Counting number of requests per second generated by JMeter client

This is how the application setup goes:
2 c4.8xlarge instances
10 m4.4xlarge jmeter clients generating load. Each client used 70 threads
While conducting a load test on a simple GET request (a 685-byte page), I came across an issue of reduced throughput after some time of test run. A throughput of about 18000 requests/sec is reached with 700 threads, remains at this level for 40 minutes, and then drops. The thread count remains 700 throughout the test. I have executed tests with different load patterns but the results have been the same.
The application response time is considerably low throughout the test:
According to the ELB monitor, there is a reduction in the number of requests (and I suppose, hence, the lower throughput):
There are no errors encountered during the test run. I also set a connect timeout on the HTTP request, but still no errors.
I discussed this issue with AWS support at length, and according to them I am not blocked by any network limit during test execution.
Given that the number of threads remains constant during the test run, what are these threads doing? Is there a metric I can check to find out the number of requests generated (not Hits/sec) by a JMeter client instance?
Testplan - http://justpaste.it/qyb0
Try adding the following Test Elements:
HTTP Cache Manager
and especially DNS Cache Manager as it might be the situation where all your threads are hitting only one c4.8xlarge instance while the remaining one is idle. See The DNS Cache Manager: The Right Way To Test Load Balanced Apps article for explanation and details.

How to survive a database outage?

I have a web service built using Spring, Hibernate, and c3p0. I also have a service-wide cache (which holds the results of every request ever made to the service) that can be used to return results when the service isn't able to (for whatever reason). The cache might return stale results when the database is down, but that's OK.
I recently faced a database outage and my service came to a crashing halt.
I want the clients of my service to survive database outages happening ever again in future.
For that, I need my service to:
Handle new incoming requests like this: quickly say that the database is down and throw some exception (fail fast).
Requests already being processed: don't last longer than x seconds. How do I make the thread handling the request get interrupted somehow?
Cache the whole database in memory for read-only purposes (is this insane?).
There are some observations that I made:
If there are one or more connections with status ESTABLISHED, then an attempt to check out a new connection is not made. It seems that any one connection with status ESTABLISHED is handed over to the thread receiving the request. This thread then just hangs until the database comes back up.
I would want to make this request fail fast by knowing, before handing a connection over to a thread, whether the db is up or not. If it is not, the service should throw an exception instead of hanging.
If there is no connection with status ESTABLISHED, then the request fails in 10 secs with the exception "Could not checkout a new connection". This is due to my checkout timeout being set to 10s.
If the service was processing a request when the db went down and then makes a call to the db, the thread making the call gets stuck forever. It resumes execution only after the db comes back.
I would like to interrupt the thread after, say, x seconds, whether or not it was able to complete the request.
Are there ways to accomplish what I seek?
Thanks in advance.

AppFabric Cache Crashes

The Scenario:
We have written a back-end service that periodically checks for new rows in the database and inserts them into the AppFabric cache. We have been using this approach for about two months.
We are using a single server machine (no cluster) to store data in the cache in different regions. The "default" cache has been divided into 3 regions for 3 different environments. Each environment accesses this machine for cached data from its designated cache region.
It was working fine until the following started happening a few days ago.
We are getting the following exception:
ErrorCode<ERRCA0016>:SubStatus<ES0001>:The connection was terminated, possibly due to server or network problems or serialized Object size is greater than MaxBufferSize on server. Result of the request is unknown.
Subsequent access to the cache throws the following exception:
ErrorCode<ERRCA0017>:SubStatus<ES0001>:There is a temporary failure. Please retry later.
After 4 or 5 tries, we get the following exception:
ErrorCode<ERRCA0018>:SubStatus<ES0001>:The request timed out.
After all this, access to the cache throws the following exception:
ErrorCode<ERRCA0005>:SubStatus<ES0001>:Region referred to does not exist. Use CreateRegion API to fix the error.
Looking into the first error (ERRCA0016): we checked whether the serialized object being stored is larger than MaxBufferSize, but objects of larger size had previously been kept in the cache, so this does not seem to be the problem.
What can be the exact problem that can be occurring ?
EDITED:
We viewed the event logs for Windows AppFabric Cache. This is what we found upon digging.
These are some of the most frequent error log entries:
Source :
AppFabricCachingService.Failfast
Param :
Lease with external store expired: Microsoft.Fabric.Federation.ExternalRingStateStoreException: Lease already expired at Microsoft.Fabric.Data.ExternalStoreAuthority.UpdateNode(NodeInfo nodeInfo, TimeSpan timeout) at Microsoft.Fabric.Federation.SiteNode.PerformExternalRingStateStoreOperations(Boolean& canFormRing, Boolean isInsert, Boolean isJoining)
General :
AppFabric Caching service crashed.
Lease with external store expired: Microsoft.Fabric.Federation.ExternalRingStateStoreException:
Lease already expired
   at Microsoft.Fabric.Data.ExternalStoreAuthority.UpdateNode(NodeInfo nodeInfo, TimeSpan timeout)
   at Microsoft.Fabric.Federation.SiteNode.PerformExternalRingStateStoreOperations(Boolean& canFormRing, Boolean isInsert, Boolean isJoining)
Source:
AppFabricCachingService.Crash
Param :
System.Runtime.CallbackException: Async Callback threw an exception. ---> System.IdentityModel.Tokens.SecurityTokenValidationException: The service does not allow you to log on anonymously. at ......
Probable Causes
Upon searching for the above event log errors, we found that this could be caused by the following problem: the cache server is a separate machine, and the SQL configuration provider it used had its database on a different server. So while retrieving the cache configuration from the SQL database, there could be failures creating a connection between the cache server and the database server. We therefore moved the cache configuration from SQL to XML, but we still got these errors:
ErrorCode ERRCA0017 : SubStatus ES0001: There is a temporary failure. Please retry later.
ErrorCode ERRCA0005 : SubStatus ES0001 : Region referred to does not exist. Use CreateRegion API to fix the error.
Upon some more digging, we are guessing that the problem could be this:
Whenever a machine that has not been granted access permissions tries to access the AppFabric cache, it retries a number of times and then AppFabric stops working. After granting access to the cache using the Grant-CacheAllowedClientAccount PowerShell cmdlet, the machine is now able to access the cache. We will have to monitor for a few days.
Could this be a valid reason?
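Regardless of the root cause, one client-side mitigation for the ERRCA0005 error is to recreate the region on demand, since named regions (and their contents) are lost when the caching service crashes or restarts. A minimal sketch, assuming a hypothetical region name:

using Microsoft.ApplicationServer.Caching;

static class RegionSafety
{
    public static object GetOrMiss(DataCache cache, string key)
    {
        try
        {
            return cache.Get(key, "env1"); // hypothetical region name
        }
        catch (DataCacheException ex)
        {
            if (ex.ErrorCode != DataCacheErrorCode.RegionDoesNotExist)
                throw;
            cache.CreateRegion("env1"); // region was lost in the crash; recreate it
            return null;                // treat this read as a cache miss to repopulate
        }
    }
}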