AppFabric session state provider - LRU eviction has been disabled for the cache used to store Session - appfabric

I'm using AppFabric to share session state between two or more web applications, but I'm getting this error: "Expected the Session item 'FULL_NAME' to be present in the cache, but it could not be found, indicating that the current Session is corrupted. Please verify that LRU eviction has been disabled for the cache used to store Session."
My configuration:
<dataCacheClient>
<hosts>
<host name="CACHE1" cachePort="22233" />
<host name="CACHE2" cachePort="22233" />
<host name="CACHE3" cachePort="22233" />
</hosts>
<machineKey validationKey="C7415df6847D0C0B5146F5605B5973EBD59kjh67EE6414ECD534A95F528F153B6B5F42CFFA9EBF65B2169F7DA5D801C0F9053454A159505253DC687A" decryptionKey="3AE9EE73F1A2781B73DEC6C3fgdgdfD28E0C730284DD314118DA8B" validation="SHA1" decryption="AES" />
<sessionState timeout="40" mode="Custom" customProvider="AFCacheSessionStateProvider">
<providers>
<add name="AFCacheSessionStateProvider" type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache" cacheName="XXXXX" shareID="YYYYY" retryCount="10" useBlobMode="false" />
</providers>
</sessionState>
Does anyone know what the problem is?

It's not the ASP.NET config you need to check; it's the cache itself.
On a cache host, run Get-CacheConfig XXXXX and check the EvictionType in the output, which the docs show looking like this:
CacheName : Cache1
TimeToLive : 10 mins
CacheType : Partitioned
Secondaries : 0
IsExpirable : True
EvictionType : LRU
NotificationsEnabled : False
For more information on eviction settings, see the Expiration and Eviction documentation. An EvictionType of LRU is exactly what the error message is warning about. If your eviction type is already not LRU, check cache memory usage instead, as that page states:
When the memory consumption of the cache service on a cache server
exceeds the low watermark threshold, AppFabric starts evicting objects
that have already expired. When memory consumption exceeds the high
watermark threshold, objects are evicted from memory, regardless of
whether they have expired or not, until memory consumption goes back
down to the low watermark.
There's an expiration troubleshooting page with more info.
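If Get-CacheConfig does report LRU, eviction can be turned off with Set-CacheConfig. A sketch, using the cache name and 40-minute session timeout from the question's config (adjust both to your setup; my recollection is that the cluster must be stopped before changing cache configuration):

```powershell
# Run in the Caching Administration Windows PowerShell console on a cache host.
Stop-CacheCluster
Set-CacheConfig -CacheName XXXXX -EvictionType None -TimeToLive 40
Start-CacheCluster
Get-CacheConfig XXXXX   # verify EvictionType now reads None
```
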

Related

Data Stored in Redis cache is not found

We use a Redis cache in a Web API project to add information to the cache.
We use the same Redis connection string in an Azure WebJob to read the data the Web API project cached, but the WebJob is not getting the cached data: it returns null for that region.
<add key="RedisCacheStorageHours" value="8" />
Please help.
As I have tested, when the connection to the Redis cache fails, no error is raised; the returned value is simply null.
Here is an article about using Azure Cache for Redis with a .NET application.
The detailed steps are as follows:
1. Set the connection string in app.config:
<appSettings>
<add key="CacheConnection" value="<cache-name>.redis.cache.windows.net,abortConnect=false,ssl=true,password=<access-key>"/>
</appSettings>
2. Install the StackExchange.Redis NuGet package.
3. Use the following code to retrieve cached data:
// Requires: using System; using System.Configuration; using StackExchange.Redis;
// A single lazily-created ConnectionMultiplexer is shared by the whole app.
static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(
    () => ConnectionMultiplexer.Connect(ConfigurationManager.AppSettings["CacheConnection"]));

static void Main(string[] args)
{
    IDatabase cache = lazyConnection.Value.GetDatabase();
    string cacheCommand = "GET Message";
    Console.WriteLine("\nCache command : " + cacheCommand + " or StringGet()");
    Console.WriteLine("Cache response : " + cache.StringGet("Message").ToString());
    lazyConnection.Value.Dispose();
}
Once this works locally, you can publish the console app as an Azure WebJob and it will read the cached data as you want.

Recommended Heap Configuration

I can't choose an optimal configuration for WildFly. I have a DigitalOcean droplet with 2 GB RAM and 1 vCPU, running a social-media application with MongoDB (log in and see your followers). The app triggers the Firebase Cloud Messaging service every 15 minutes; after FCM sends the notification, clients send requests to the server, which performs some DB read/write operations. The problem is that the server stops responding roughly every 2-3 hours, so I have to restart it. Watching the memory-usage graph, usage climbs slowly after a reboot, but it always climbs. Is this a WildFly configuration issue, or can you suggest anything else? Nginx sits in front of WildFly.
Wildfly conf:
bin/standalone.conf
if [ "x$JAVA_OPTS" = "x" ]; then
JAVA_OPTS="-Xms768m -Xmx1024m -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true"
JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true -Duser.timezone=GMT+3"
else
echo "JAVA_OPTS already set in environment; overriding default settings with values: $JAVA_OPTS"
fi
domain/configuration/domain.xml
<server-group name="main-server-group" profile="full">
<jvm name="default">
<heap size="1024m" max-size="1536m"/>
</jvm>
<socket-binding-group ref="full-sockets"/>
</server-group>
<server-group name="other-server-group" profile="full-ha">
<jvm name="default">
<heap size="1024m" max-size="1536m"/>
</jvm>
<socket-binding-group ref="full-ha-sockets"/>
</server-group>
domain/configuration/host.xml
<jvms>
<jvm name="default">
<heap size="1024m" max-size="1536m"/>
<jvm-options>
<option value="-server"/>
<option value="-XX:MetaspaceSize=96m"/>
<option value="-XX:MaxMetaspaceSize=256m"/>
<option value="--add-exports=java.base/sun.nio.ch=ALL-UNNAMED"/>
</jvm-options>
</jvm>
</jvms>
Thank you.
Have you tried setting the garbage collector in standalone.conf (located under libexec/bin in my install)?
I switched to the G1 garbage collector and it sorted out all my "out of memory" problems on WildFly 10/11. Now using it with 12.
http://www.oracle.com/technetwork/tutorials/tutorials-1876574.html
# G1 Garbage Collector
JAVA_OPTS="-server -Xms2g -Xmx8g"
JAVA_OPTS="$JAVA_OPTS -XX:+UseG1GC"
JAVA_OPTS="$JAVA_OPTS -XX:MaxGCPauseMillis=200"
JAVA_OPTS="$JAVA_OPTS -XX:InitiatingHeapOccupancyPercent=45"
JAVA_OPTS="$JAVA_OPTS -XX:G1ReservePercent=25"
JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCDateStamps -verbose:gc -XX:+PrintGCDetails -Xloggc:/Users/NOTiFY/IdeaProjects/MigrationTool/garbage-collection.log"
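The 2g/8g heap above is sized for my machine and won't fit the asker's 2 GB droplet. A scaled-down variant for standalone.conf (the sizes are assumptions to tune under load, not recommendations; the GC-logging flags are the Java 8 spellings used above):

```shell
# G1 settings scaled for a 2 GB host: leave headroom for the OS, MongoDB and Nginx
JAVA_OPTS="-server -Xms512m -Xmx1024m -XX:MaxMetaspaceSize=256m"
JAVA_OPTS="$JAVA_OPTS -XX:+UseG1GC -XX:MaxGCPauseMillis=200"
# GC log helps tell a leak (heap floor keeps rising) from normal sawtooth growth
JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -Xloggc:/var/log/wildfly/gc.log"
```

If the heap floor in the GC log still climbs between full collections, the cause is likely an application leak rather than WildFly configuration.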

HttpHandler low concurrent request processing

I have a simple .NET 4.5 HttpHandler that does not seem to scale beyond 10 concurrent requests. I'm expecting a much higher figure. The handler does nothing more than sleep for a second and return a simple string.
concurrent requests requests/minute
1 60
8 480
9 540
10 600
11 600
12 600
15 600
32 600
512 600
I've got a 64-bit Windows 7 machine with a 4-core i7, 32 GB of RAM and an SSD, and the machine is idle during all tests. IIS is configured out of the box. I've also tried running this with an app pool created by IISTuner from CodePlex, with no change in the result.
Changing IsReusable between true/false does nothing.
I've added an entry to web.config to disable session state. I'm using SoapUI for the testing, and have the close connections after each request flag set (and can see this reflected in the http headers). Again, nothing seems to change.
Amending the number of processes per app pool does raise the number, but still gives nowhere near the throughput I'm expecting (hundreds/thousands of concurrent requests).
Here is the handler:
using System.Threading;
using System.Web;

namespace Proposal01
{
    // Must be public and match the type name registered in web.config.
    public class TestHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            Thread.Sleep(1000);
            context.Response.Write("I slept for 1s");
        }

        public bool IsReusable { get { return true; } }
    }
}
Here is the associated web.config file:
<?xml version="1.0"?>
<configuration>
<system.web>
<compilation debug="true" targetFramework="4.5" />
<httpRuntime targetFramework="4.5" />
<sessionState mode="Off" timeout ="20" cookieless="false"></sessionState>
</system.web>
<system.webServer>
<handlers>
<add name="Proposal01_Test" verb="*"
path="*.test"
type="Proposal01.TestHandler, Proposal01"
resourceType="Unspecified" />
</handlers>
</system.webServer>
</configuration>
What am I missing?
The problem is as @Damien states (thank you again): a non-server OS limitation in Windows 7/IIS 7.5 caps concurrent request processing. I never suspected it, as we've had Java services on the same machine scale into the thousands of concurrent requests. Yey MS.

Windows Server AppFabric Cache Time-out based invalidation callback

I am using Windows Server AppFabric Caching in our application with local cache enabled.
It is configured as follows:
<localCache isEnabled="true" sync="TimeoutBased" objectCount="1000" ttlValue="120"/>
I have setup time-out based invalidation with time-out interval of 120 seconds.
As per this configuration, the local cache removes items from the in-memory cache every 120 seconds and retrieves them again from the cache cluster. Is it possible to add a callback that fires whenever the local cache goes to the cache cluster for an item instead of serving it locally?
Unfortunately, there is no way to know whether data was fetched locally or from the cluster. There are cache server notifications, but they are not reliable.
In your scenario, a good approach could be the Read-Through and Write-Behind feature. It does not fit every situation, but you can take a quick look.
Here are some links :
http://msdn.microsoft.com/en-us/library/hh377669.aspx
http://blogs.msdn.com/b/prathul/archive/2011/12/06/appfabric-cache-read-from-amp-write-to-database-read-through-write-behind.aspx

AppFabric Error ERRCA0017 SubStatus ES0006

Just installed Windows Server AppFabric 1.1 on my Windows 7 box and I'm trying to write some code in a console application against the cache client API as part of an evaluation of AppFabric. My app.config looks like the following:
<configSections>
<section name="dataCacheClient" type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" allowLocation="true" allowDefinition="Everywhere" />
</configSections>
<dataCacheClient>
<hosts>
<host name="pa-chouse2" cachePort="22233" />
</hosts>
</dataCacheClient>
I created a new cache and added my domain user account as an allowed client account using the PowerShell cmdlet Grant-CacheAllowedClientAccount. I'm creating a new DataCache instance like so:
using (DataCacheFactory cacheFactory = new DataCacheFactory())
{
this.cache = cacheFactory.GetDefaultCache();
}
When I call DataCache.Get, I end up with the following exception:
ErrorCode<ERRCA0017>:SubStatus<ES0006>:There is a temporary failure. Please retry later. (One or more specified cache servers are unavailable, which could be caused by busy network or servers. For on-premises cache clusters, also verify the following conditions. Ensure that security permission has been granted for this client account, and check that the AppFabric Caching Service is allowed through the firewall on all cache hosts. Also the MaxBufferSize on the server must be greater than or equal to the serialized object size sent from the client.)
I'd be very grateful if anyone could point out what I'm missing to get this working.
Finally figured out what my problem was after nearly ripping out what little hair I have left. Initially, I was creating my DataCache like so:
using (DataCacheFactory cacheFactory = new DataCacheFactory())
{
this.cache = cacheFactory.GetDefaultCache();
}
It turns out that the DataCache didn't like having the DataCacheFactory that created it disposed. Once I refactored the code so that the DataCacheFactory stayed in scope as long as I needed the DataCache, it worked as expected.
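One way to apply that fix is to hold the factory and the cache in static fields so the factory lives as long as the app does. A sketch against the AppFabric client API (the class name CacheHolder is mine, not from the question):

```csharp
using Microsoft.ApplicationServer.Caching;

public static class CacheHolder
{
    // Keep the factory alive for the app's lifetime: disposing it
    // invalidates any DataCache it handed out.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();

    public static readonly DataCache Cache = Factory.GetDefaultCache();
}
```

Callers then use CacheHolder.Cache directly and never wrap the factory in a using block.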