Creating a High Availability AppFabric Cache Cluster

Is there anything needed to enable High Availability besides setting Secondaries=1 in the cluster configuration, specifically in the cache client configuration?
Our configuration:
Cache Cluster (3 Windows Server Enterprise hosts using a SQL configuration provider):
Cache Clients
With the above configuration, we see primary and secondary regions created on the three hosts; however, when one of the hosts is stopped, the following exceptions occur:
ErrorCode<ERRCA0018>:SubStatus<ES0001>:The request timed out.
An existing connection was forcibly closed by the remote host
No connection could be made because the target machine actively refused it 192.22.0.34:22233
An existing connection was forcibly closed by the remote host
Isn't the point of High Availability to be able to handle hosts going down without interrupting service? We are using a named region - does this break High Availability? I read somewhere that named regions can only exist on one host (I did verify that a secondary does exist on another host). I feel like we're missing something in the cache client configuration that would enable High Availability; any insight on the matter would be greatly appreciated.

High Availability is about protecting the data, not making it available every second (hence the retry exceptions). When a cache host goes down, you get an exception and are supposed to retry. During that time, access to HA caches may throw a retry exception back to you while the cluster is busy shuffling data around and creating an extra copy. Regions complicate this further, since a larger chunk has to be copied before the data is HA again.
Also, the client keeps a connection to all cache hosts, so when one goes down it throws an exception to tell you something happened.
Basically, when one host goes down, AppFabric freaks out until two copies of all data exist again in the HA caches. We created a small layer in front of it to handle this retry logic and dropped the servers one at a time to make sure it handled all scenarios, so that our app kept working and was just a tad slower.
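This isn't our actual code, but a minimal sketch of the kind of layer we mean, using the AppFabric client API (the class name, cache name handling and retry/back-off numbers are just illustrative):
using System;
using System.Threading;
using Microsoft.ApplicationServer.Caching;

public class RetryingCache
{
    private readonly DataCache _cache;

    public RetryingCache(string cacheName)
    {
        // The factory is expensive to create, so build it once and keep the cache reference.
        var factory = new DataCacheFactory();
        _cache = factory.GetCache(cacheName);
    }

    public void Put(string key, object value)
    {
        const int maxRetries = 3;
        for (var attempt = 0; ; attempt++)
        {
            try
            {
                _cache.Put(key, value);
                return;
            }
            catch (DataCacheException)
            {
                if (attempt >= maxRetries)
                {
                    throw;
                }
                // The cluster is busy re-balancing after a host failure; back off and try again.
                Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            }
        }
    }
}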

After opening a ticket with Microsoft we narrowed it down to having a static DataCacheFactory object.
public class AppFabricCacheProvider : ICacheProvider
{
    private static readonly object Locker = new object();
    private static AppFabricCacheProvider _instance;
    private static DataCache _cache;

    private AppFabricCacheProvider()
    {
    }

    public static AppFabricCacheProvider GetInstance()
    {
        lock (Locker)
        {
            if (_instance == null)
            {
                _instance = new AppFabricCacheProvider();
                var factory = new DataCacheFactory();
                _cache = factory.GetCache("AdMatter");
            }
        }
        return _instance;
    }

    ...
}
Looking at the trace logs from AppFabric, the clients keep trying to connect to all of the hosts and never handle the fact that one has gone down. Resetting IIS on the clients forces a new DataCacheFactory to be created (in our App_Start) and stops the exceptions.
The MS engineers agreed that this approach follows best practices (we also found several articles about this: see link and link).
They are continuing to investigate a solution for us. In the meantime we have come up with the following temporary workaround, where we force a new DataCacheFactory object to be created whenever we encounter one of the above exceptions.
public class AppFabricCacheProvider : ICacheProvider
{
    private const int RefreshWindowMinutes = -5;

    private static readonly object Locker = new object();
    private static AppFabricCacheProvider _instance;
    private static DataCache _cache;
    private static DateTime _lastRefreshDate;

    private AppFabricCacheProvider()
    {
    }

    public static AppFabricCacheProvider GetInstance()
    {
        lock (Locker)
        {
            if (_instance == null)
            {
                _instance = new AppFabricCacheProvider();
                var factory = new DataCacheFactory();
                _cache = factory.GetCache("AdMatter");
                _lastRefreshDate = DateTime.UtcNow;
            }
        }
        return _instance;
    }

    private static void ForceRefresh()
    {
        lock (Locker)
        {
            // Throttle: only rebuild the factory if the last refresh was more than 5 minutes ago.
            if (_instance != null && DateTime.UtcNow.AddMinutes(RefreshWindowMinutes) > _lastRefreshDate)
            {
                var factory = new DataCacheFactory();
                _cache = factory.GetCache("AdMatter");
                _lastRefreshDate = DateTime.UtcNow;
            }
        }
    }

    ...

    public T Put<T>(string key, T value)
    {
        try
        {
            _cache.Put(key, value);
        }
        catch (SocketException)
        {
            ForceRefresh();
            _cache.Put(key, value);
        }
        return value;
    }
}
Will update this thread when we learn more.

Related

Update an instance variable in BPS

Using WSO2 BPS 3.6.0 - is there a (standard) way to update an instance variable in an already running instance?
The reason behind this is that the client passes incorrect data at process initialization; the client may later fix its data, but the process instance remembers the wrong values.
I believe I could still update the data directly in the database, but I wouldn't like to see process admins messing with the database.
Edit:
I am working with the BPEL engine and my idea is to update a variable not from a process design, but as a corrective action (admin console? api?)
Thank you for all ideas.
You are setting the instance variables during process initialization based on the client's request.
For your requirement, the variables need to be retrieved from the current request. You can do this by using the execution entity to read the data, instead of the instance variables that were set during process initialization.
Refer to the example below:
public class SampleTask implements JavaDelegate {

    public void execute(DelegateExecution execution) throws Exception {
        // getVariable returns Object, so cast it to the expected type
        String userId = (String) execution.getVariable("userId");
        // perform your logic here
    }
}
If you want to keep using the instance variables, I suggest changing the instance variable during process execution:
public class SampleTask implements JavaDelegate {

    private String userId;

    public void execute(DelegateExecution execution) throws Exception {
        String newUserId = (String) execution.getVariable("userId");
        setUserId(newUserId);
        // perform your logic here
    }

    public void setUserId(String userId) {
        this.userId = userId;
    }

    public String getUserId() {
        return userId;
    }
}

Eclipse RAP Multi-client but single server thread

I understand how RAP creates scopes, with a specific thread for each client, and so on. I also understand that the application scope is unique among several clients; however, I don't know how to access that scope in a single-threaded manner.
I would like to have a server side (with access to databases and so on) that runs as a single execution, so that it has global knowledge of all transactions and requests from clients are executed in sequence instead of in parallel.
Currently I am accessing the application context as follows from the UI:
synchronized (MyServer.class) {
    ApplicationContext appContext = RWT.getApplicationContext();
    MyServer myServer = (MyServer) appContext.getAttribute("myServer");
    if (myServer == null) {
        myServer = new MyServer();
        appContext.setAttribute("myServer", myServer);
    }
    myServer.doSomething(RWTUtils.getSessionID());
}
Even if I access the myServer object there and trigger requests, the execution will still run in the UI thread.
For now, the only way to ensure the sequencing is to use synchronized in my server, as follows:
public class MyServer {

    String text = "";

    public void doSomething(String string) {
        try {
            synchronized (this) {
                System.out.println("doSomething - start :" + string);
                text += "[" + string + "]";
                System.out.println("text: " + text);
                Thread.sleep(10000);
                System.out.println("text: " + text);
                System.out.println("doSomething - stop :" + string);
            }
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
Is there a better way to not have to manage the thread synchronization myself?
Any help is welcome
EDIT:
To better explain myself, here is what I mean. Either I trust the database to handle multiple requests properly and also handle the other shared knowledge between clients myself in a synchronized manner (example A), or I find a solution where another thread handles both the knowledge and the database (example B). Of course, the problem with B is that one client may block the others, but this can be managed with background threads for long actions; most of them will be no problem. My initial question was: is there maybe already some specific thread in the application scope that does example B, or is example A actually the way to go?
Conclusion (so far)
Basically, option A) is the way to go. Database access will require connection pooling, and shared information will require thoughtful synchronization of key objects. The main attention has to go into the database design and the synchronization of objects, to ensure that two clients cannot write incompatible data at the same time (e.g. contradicting entries that make the result depend on the write order).
First of all, the way that you create MyServer in the first snippet is not thread safe. You are likely to create more than one instance of MyServer.
You need to synchronize the creation of MyServer, like this for example:
synchronized (MyServer.class) {
    MyServer myServer = (MyServer) appContext.getAttribute("myServer");
    if (myServer == null) {
        myServer = new MyServer();
        appContext.setAttribute("myServer", myServer);
    }
}
See also this post How to implement thread-safe lazy initialization? for other possible solutions.
Furthermore, your code is calling doSomething() on the client thread (i.e. the UI thread), which will cause each client to wait until pending requests of other clients are processed. The client UI will become unresponsive.
To solve this problem, your code should call doSomething() (or any other long-running operation, for that matter) from a background thread (see also Threads in RAP).
When the background thread has finished, you should use Server Push to update the UI.

How to ignore wsit-client.xml when calling a web service, if it exists

The application I am working on calls many web services. Just recently I integrated another web service that requires a wsit-client.xml for SOAP authentication.
That is working now, but all the other SOAP services have stopped working.
Whenever any of them is being called, I see messages like
INFO: WSP5018: Loaded WSIT configuration from file: jar:file:/opt/atlasconf/atlas.20130307/bin/soap-sdd-1.0.0.jar!/META-INF/wsit-client.xml.
I suspect this is what is causing the Service calls to fail.
How can I cause the wsit-client.xml to be ignored for certain SOAP service calls?
Thanks
Fixed it by using a Container and a ResourceLoader to configure a dynamic location for the wsit-client.xml, so that it is not loaded automatically. To do that, I first implemented a Container for the app as shown below:
import java.net.URL;

import com.sun.xml.ws.api.ResourceLoader;
import com.sun.xml.ws.api.server.Container;

public class WsitClientConfigurationContainer extends Container {

    private static final String CLIENT_CONFIG = "custom/location/wsit-client.xml";

    // Redirect any lookup of the WSIT client configuration to our custom location.
    private final ResourceLoader loader = new ResourceLoader() {
        public URL getResource(String resource) {
            return getClass().getClassLoader().getResource(CLIENT_CONFIG);
        }
    };

    @Override
    public <T> T getSPI(Class<T> spiType) {
        if (spiType == ResourceLoader.class) {
            return spiType.cast(loader);
        }
        return null;
    }
}
Then, to use it in the code, I do this:
URL WSDL_LOCATION = this.getClass().getResource("/path/to/wsdl/mysvc.wsdl");
WSService.InitParams initParams = new WSService.InitParams();
initParams.setContainer(new WsitClientConfigurationContainer());
secGtwService = WSService.create(WSDL_LOCATION, SECGTWSERVICE_QNAME, initParams);
And it works like magic

AppFabric Cache standalone mode?

As an ISV I'd like to be able to program my middle tier using the AppFabric Caching Service, but then be able to deploy in small (single server) environments without needing to have AppFabric cache server(s) deployed. It also seems natural to me that an "in-memory only" version of the cache client would be ideal for standalone development.
However, all the research I've done so far implies that I have to load a real cache server to make some of the APIs work at all, and that the current "Local" option does not fit the bill for what I want.
It seems to me that what I'm looking for would work similarly to ASP.NET session state, in that the out-of-the-box mechanism is in-memory, and then you can choose to configure the older external process provider, or the SQL provider, and now the AppFabric provider, giving better and better scalability as you move up. This works great for ASP.NET session state.
Am I correct in thinking that there is no equivalent solution for programming and deploying in a "small" environment for AppFabric caching?
There's a number of issues raised in this question, let's see if we can tackle them...
First and foremost, as Frode correctly points out you can run an AppFabric instance quite happily on one server - it's what I do most of the time for playing around with the API. Obviously the High Availability feature isn't going to be, well, available, but from the question itself I think you've already accepted that.
Secondly, you can't use the AppFabric API against the Local cache - the local cache is only there to save an AppFabric client trips across the wire to a dedicated AppFabric cache server.
Now, to configurable caches, which I think is the most interesting part. What I think you want to do here is separate the operations on the cache from the cache itself into a generic interface; you then write your code against the interface at design time, and at runtime you create a cache based on information from your app.config/web.config.
So let's start by defining our interface:
public interface IGenericCache
{
    void Add(string key, object value);
    void Remove(string key);
    object Get(string key);
    void Update(string key, object value);
}
And now we can define a couple of implementations, one using the MemoryCache and one using AppFabric.
using System.Runtime.Caching;

class GenericMemoryCache : IGenericCache
{
    // A single cache instance for the class; creating a new MemoryCache on every
    // call would mean each operation works against a different, empty cache.
    private readonly MemoryCache cache = new MemoryCache("GenericMemoryCache");

    public void Add(string key, object value)
    {
        cache.Add(key, value, ObjectCache.InfiniteAbsoluteExpiration);
    }

    public void Remove(string key)
    {
        cache.Remove(key);
    }

    public object Get(string key)
    {
        return cache.Get(key);
    }

    public void Update(string key, object value)
    {
        cache.Set(key, value, ObjectCache.InfiniteAbsoluteExpiration);
    }
}
using Microsoft.ApplicationServer.Caching;

class GenericAppFabricCache : IGenericCache
{
    private DataCacheFactory factory;
    private DataCache cache;

    public GenericAppFabricCache()
    {
        factory = new DataCacheFactory();
        cache = factory.GetCache("GenericAppFabricCache");
    }

    public void Add(string key, object value)
    {
        cache.Add(key, value);
    }

    public void Remove(string key)
    {
        cache.Remove(key);
    }

    public object Get(string key)
    {
        return cache.Get(key);
    }

    public void Update(string key, object value)
    {
        cache.Put(key, value);
    }
}
And we could go on and write IGenericCache implementations with the ASP.NET Cache, NCache, memcached...
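For instance, a minimal sketch of what the ASP.NET Cache version might look like, using HttpRuntime.Cache (the class name is just illustrative):
using System.Web;
using System.Web.Caching;

class GenericAspNetCache : IGenericCache
{
    // HttpRuntime.Cache is a single, application-wide cache instance.
    private readonly Cache cache = HttpRuntime.Cache;

    public void Add(string key, object value)
    {
        cache.Insert(key, value);
    }

    public void Remove(string key)
    {
        cache.Remove(key);
    }

    public object Get(string key)
    {
        return cache.Get(key);
    }

    public void Update(string key, object value)
    {
        // Insert overwrites an existing entry with the same key.
        cache.Insert(key, value);
    }
}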
Now we add a factory class that uses reflection to create an instance of one of these caches based on values from the app.config/web.config.
using System.Configuration;
using System.Reflection;

class CacheFactory
{
    private static IGenericCache cache;

    public static IGenericCache GetCache()
    {
        if (cache == null)
        {
            // Read the assembly and class names from the config file
            string assemblyName = ConfigurationManager.AppSettings["CacheAssemblyName"];
            string className = ConfigurationManager.AppSettings["CacheClassName"];

            // Load the assembly, and then instantiate the implementation of IGenericCache
            Assembly assembly = Assembly.LoadFrom(assemblyName);
            cache = (IGenericCache) assembly.CreateInstance(className);
        }
        return cache;
    }
}
Anywhere the client code needs to use the cache, all that is needed is a call to CacheFactory.GetCache, and the cache specified in the config file will be returned. The client doesn't need to know which cache it is, because the client code is written entirely against the interface. That means you can scale out your caching simply by changing the settings in the config file.
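For example, the client side might look like this (a sketch only; the appSettings key names match the CacheFactory above, and the assembly/class values are placeholders):
// app.config / web.config:
// <appSettings>
//   <add key="CacheAssemblyName" value="MyCaches.dll" />
//   <add key="CacheClassName" value="MyCaches.GenericAppFabricCache" />
// </appSettings>

// Client code stays the same no matter which implementation is configured.
IGenericCache cache = CacheFactory.GetCache();
cache.Add("greeting", "hello");
string greeting = (string) cache.Get("greeting");
Swapping from the in-memory cache to AppFabric then becomes a config change rather than a code change.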
Essentially what we've written here is a plugin model for caching, but be aware that you're trading features for flexibility. The interface has to be more or less the lowest common denominator - you lose the ability to use, say, AppFabric's concurrency models or the tagging API.
There's an excellent and more complete discussion of programming against interfaces in this article.
We have one setup where we run AppFabric Cache on just one server...

Static caches in web services

Is this the right way to initialize a static cache object in a web service?
public class someclass
{
    private static Cache cache;

    static someclass()
    {
        cache = HttpContext.Current.Cache;
    }
}
More Info:
It seems like I receive more than one cache object from the web service. Each call creates a new request that only lasts for the duration of that call. If I move to a different machine, it creates a new request (and, I think, a new web service object) that returns a new cache (I can see two different caches being returned in the sniffer). By forcing it to be static I was hoping to have only one, but to no avail; it doesn't work.
This looks good to me - especially if you are going to wrap the current context and expose properties for cache values, like this:
public static class CacheManager
{
    public static Boolean Foo
    {
        get { return (Boolean) HttpContext.Current.Cache["Foo"]; }
        set { HttpContext.Current.Cache["Foo"] = value; }
    }

    // etc...
}
You don't really need to create a private reference to the current cache unless you are only doing so to save on typing. Also notice that I made the class static as well.
Why not just access it directly using HttpContext.Current.Cache?
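For example (a minimal sketch; the "Foo" key and the bool cast are just illustrative):
// Store and read a value straight from the ASP.NET cache, no wrapper class required.
HttpContext.Current.Cache["Foo"] = true;
bool foo = (bool) HttpContext.Current.Cache["Foo"];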