Static caches in web services

Is this the right way to initialize a static cache object in a web service?
public class someclass
{
    private static Cache cache;

    static someclass()
    {
        cache = HttpContext.Current.Cache;
    }
}
More Info:
It seems like I receive more than one cache object from the web service. Each call creates a new request that only lasts for the duration of that call. If I move to a different machine, it creates a new request (and, I think, a new web service object) that returns a new cache (I can see two different caches being returned in the sniffer). By forcing the field to be static I was hoping to have only one, but to no avail - it doesn't work.

This looks good to me - especially if you are going to wrap the Current.Context and expose properties for cache values like this:
public static class CacheManager
{
    public static Boolean Foo
    {
        get { return (Boolean)HttpContext.Current.Cache["Foo"]; }
        set { HttpContext.Current.Cache["Foo"] = value; }
    }

    // etc...
}
You don't really need to create a private reference to the current cache unless you are only doing so to save on typing. Also notice that I made the class static as well.
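For example, calling code then reads and writes through the typed property instead of touching the cache directly (a small usage sketch of the CacheManager above):
// Somewhere in a page or web method:
CacheManager.Foo = true;            // stores the value under HttpContext.Current.Cache["Foo"]
Boolean isFoo = CacheManager.Foo;   // reads it back, already cast to Boolean by the property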

Why not just access it directly using HttpContext.Current.Cache?
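A minimal sketch of that, assuming an ASMX service whose method lives in a class derived from WebService (the method name and the "Foo" key are only illustrative):
[WebMethod]
public string GetCachedValue()
{
    // HttpContext.Current.Cache is the same application-wide cache on every request
    object cached = HttpContext.Current.Cache["Foo"];
    return cached != null ? cached.ToString() : "not cached";
}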

Related

Update an instance variable in BPS

Using WSO2 BPS 3.6.0 - is there a (standard) way to update an instance variable in an already running instance?
The reason behind this is that the client passes incorrect data at process initialization; the client may later fix its data, but the process instance remembers the wrong values.
I believe I could still update the data in the database, but I wouldn't like to see process admins messing with the database.
Edit:
I am working with the BPEL engine, and my idea is to update a variable not from the process design, but as a corrective action (admin console? API?)
Thank you for all ideas.
You are setting the instance variables during process initialization based on the client's request.
For your requirement, the variables need to be retrieved from the request at execution time instead. You can do this by using the execution entity to read the data rather than the instance variables that were set during process initialization.
Refer to the example below:
public class SampleTask implements JavaDelegate {

    public void execute(DelegateExecution execution) throws Exception {
        String userId = (String) execution.getVariable("userId");
        // perform your logic here
    }
}
If you want to keep using the instance variables, I suggest you change the instance variable during process execution.
public class SampleTask implements JavaDelegate {

    private String userId;

    public void execute(DelegateExecution execution) throws Exception {
        String newUserId = (String) execution.getVariable("userId");
        setUserId(newUserId);
        // perform your logic here
    }

    public void setUserId(String userId) {
        this.userId = userId;
    }

    public String getUserId() {
        return userId;
    }
}

SFDC Apex Code: Access class level static variable from "Future" method

I need to do a callout to a web service from my Apex controller class. To do this, I have an async method annotated with @future(callout=true). The web service call needs to reference an object that gets populated in the save call from the VF page.
Since static (future) calls do not allow objects to be passed in as method arguments, I was planning to add the data to a static Map and access that in my static method to do the web service callout. However, the static Map object is getting re-initialized and is null in the static method.
I would really appreciate it if anyone can give me some pointers on how to address this issue.
Thanks!
Here is the code snippet:
private static Map<String, WidgetModels.LeadInformation> leadsMap;
....
......
public PageReference save() {
    if (leadsMap == null) {
        leadsMap = new Map<String, WidgetModels.LeadInformation>();
    }
    leadsMap.put(guid, widgetLead);
}
// make async call to Widget web service
saveWidgetCallInformation(guid);
// async call to widget web service
@future(callout=true)
public static void saveWidgetCallInformation(String guid) {
    WidgetModels.LeadInformation cachedLeadInfo =
        (WidgetModels.LeadInformation) leadsMap.get(guid);
    .....
    // call web service
}
@future runs in a totally separate execution context. It won't have access to any history of how it was called (meaning all static variables are reset, you start with fresh governor limits, etc. - like a new action initiated by the user).
The only thing it will "know" is the method parameters that were passed to it. And you can't pass whole objects, you need to pass primitives (Integer, String, DateTime etc) or collections of primitives (List, Set, Map).
If you can access all the info you need from the database - just pass a List<Id> for example and query it.
If you can't - you can cheat by serializing your objects and passing them as List<String>. Check the documentation around JSON class or these 2 handy posts:
https://developer.salesforce.com/blogs/developer-relations/2013/06/passing-objects-to-future-annotated-methods.html
https://gist.github.com/kevinohara80/1790817
Side note - can you rethink your flow? If the starting point is Visualforce you can skip the #future step. Do the callout first and then the DML (if needed). That way the usual "you have uncommitted work pending" error won't be triggered. This thing is there not only to annoy developers ;) It's there to make you rethink your design. You're asking the application to have open transaction & lock on the table(s) for up to 2 minutes. And you're giving yourself extra work - will you rollback your changes correctly when the insert went OK but callout failed?
By reversing the order of operations (callout first, then the DML) you're making it simpler - there was no save attempt to DB so there's nothing to roll back if the save fails.

How to set Azure WebJob queue name at runtime?

I am developing an Azure WebJobs executable that I would like to use with multiple Azure websites. Each web site would need its own Azure Storage queue.
The problem I see is that the ProcessQueueMessage requires the queue name to be defined statically as an attribute of the first parameter inputText. I would rather have the queue name be a configuration property of the running Azure Website instance, and have the job executable read that at runtime when it starts up.
Is there any way to do this?
This can now be done. Simply create an INameResolver to allow you to resolve any string surrounded in % (percent) signs. For example, if this is your function with a queue name specified:
public static void WriteLog([QueueTrigger("%logqueue%")] string logMessage)
{
    Console.WriteLine(logMessage);
}
Notice how there are % (percent) signs around the string logqueue. This means the job system will try to resolve the name using an INameResolver which you can create and then register with your job.
Here is an example of a resolver that will just take the string specified in the percent signs and look it up in your AppSettings in the config file:
public class QueueNameResolver : INameResolver
{
    public string Resolve(string name)
    {
        return ConfigurationManager.AppSettings[name].ToString();
    }
}
And then in your Program.cs file, you just need to wire this up:
var host = new JobHost(new JobHostConfiguration
{
    NameResolver = new QueueNameResolver()
});
host.RunAndBlock();
This is probably an old question, but in case anyone else stumbles across this post: this is now supported by passing a JobHostConfiguration object into the JobHost constructor.
http://azure.microsoft.com/en-gb/documentation/articles/websites-dotnet-webjobs-sdk-storage-queues-how-to/#config
A slightly better implementation of the name resolver that avoids fetching from configuration every time. It uses a Dictionary to store the config values once retrieved.
using Microsoft.Azure.WebJobs;
using System.Collections.Generic;
using System.Configuration;

public class QueueNameResolver : INameResolver
{
    private static Dictionary<string, string> keys = new Dictionary<string, string>();

    public string Resolve(string name)
    {
        if (!keys.ContainsKey(name))
        {
            keys.Add(name, ConfigurationManager.AppSettings[name].ToString());
        }
        return keys[name];
    }
}
Unfortunately, that is not possible. You can use the IBinder interface to bind dynamically to a queue but you will not have the triggering mechanism for it.
Basically, the input queue name has to be hardcoded if you want triggers. For output, you can use the previously mentioned interface.
Here is a sample for IBinder. The sample binds a blob dynamically but you can do something very similar for queues.
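As a rough sketch of the imperative approach (an assumption-laden outline rather than the linked sample: it follows the documented blob pattern and assumes the queue output binding accepts an ICollector<string> the same way, with "outputqueue" being a made-up appSettings key):
public static void ProcessWorkItem(
    [QueueTrigger("workitems")] string message,  // the trigger queue name still has to be known up front
    IBinder binder)
{
    // Resolve the output queue name at runtime, e.g. from configuration
    string queueName = ConfigurationManager.AppSettings["outputqueue"];

    // Imperatively bind an output collector to that queue and write to it
    var output = binder.Bind<ICollector<string>>(new QueueAttribute(queueName));
    output.Add(message);
}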

Creating a High Availability AppFabric Cache Cluster

Is there anything aside from setting Secondaries=1 in the cluster configuration to enable HighAvailability, specifically on the cache client configuration?
Our configuration:
Cache Cluster (3 Windows Enterprise hosts using a SQL configuration provider):
Cache Clients
With the above configuration, we see primary and secondary regions created on the three hosts; however, when one of the hosts is stopped, the following exceptions occur:
ErrorCode<ERRCA0018>:SubStatus<ES0001>:The request timed out.
An existing connection was forcibly closed by the remote host
No connection could be made because the target machine actively refused it 192.22.0.34:22233
An existing connection was forcibly closed by the remote host
Isn't the point of High Availability to be able to handle hosts going down without interrupting service? We are using a named region - does this break High Availability? I read somewhere that named regions can only exist on one host (I did verify that a secondary does exist on another host). I feel like we're missing something in the cache client configuration that would enable High Availability; any insight on the matter would be greatly appreciated.
High Availability is about protecting the data, not making it available every second (hence the retry exceptions). When a cache host goes down, you get an exception and are supposed to retry. During that time, access to HA caches may throw a retry exception back at you while the cluster is busy shuffling data around and creating an extra copy. Regions complicate this further, since a larger chunk has to be copied before the data is HA again.
Also the client keeps a connection to all cache hosts so when one goes down it throws up the exception that something happened.
Basically, when one host goes down, AppFabric freaks out until two copies of all data exist again in the HA caches. We created a small layer in front of it to handle this logic, and dropped the servers one at a time to make sure it handled all scenarios, so that our app kept working but was just a tad slower.
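The "small layer" mentioned above is essentially a retry wrapper. A minimal sketch of that idea (the retry count, delay, and names here are assumptions, not the actual production code):
public static class CacheRetry
{
    // Retry a cache operation a few times while the cluster re-establishes copies after a host failure
    public static T Execute<T>(Func<T> operation, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (DataCacheException)
            {
                if (attempt >= maxAttempts)
                {
                    throw;
                }
                Thread.Sleep(TimeSpan.FromMilliseconds(200 * attempt));
            }
        }
    }
}
// Usage: var value = CacheRetry.Execute(() => cache.Get("someKey"));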
After opening a ticket with Microsoft we narrowed it down to having a static DataCacheFactory object.
public class AppFabricCacheProvider : ICacheProvider
{
    private static readonly object Locker = new object();
    private static AppFabricCacheProvider _instance;
    private static DataCache _cache;

    private AppFabricCacheProvider()
    {
    }

    public static AppFabricCacheProvider GetInstance()
    {
        lock (Locker)
        {
            if (_instance == null)
            {
                _instance = new AppFabricCacheProvider();
                var factory = new DataCacheFactory();
                _cache = factory.GetCache("AdMatter");
            }
        }
        return _instance;
    }

    ...
}
Looking at the trace logs from AppFabric, the clients are still trying to connect to all the hosts without handling hosts going down. Resetting IIS on the clients forces a new DataCacheFactory to be created (in our App_Start) and stops the exceptions.
The MS engineers agreed that this approach was the best-practice way (we also found several articles about this: see link and link).
They are continuing to investigate a solution for us. In the meantime we have come up with the following temporary workaround, where we force a new DataCacheFactory object to be created whenever we encounter one of the above exceptions.
public class AppFabricCacheProvider : ICacheProvider
{
    private const int RefreshWindowMinutes = -5;

    private static readonly object Locker = new object();
    private static AppFabricCacheProvider _instance;
    private static DataCache _cache;
    private static DateTime _lastRefreshDate;

    private AppFabricCacheProvider()
    {
    }

    public static AppFabricCacheProvider GetInstance()
    {
        lock (Locker)
        {
            if (_instance == null)
            {
                _instance = new AppFabricCacheProvider();
                var factory = new DataCacheFactory();
                _cache = factory.GetCache("AdMatter");
                _lastRefreshDate = DateTime.UtcNow;
            }
        }
        return _instance;
    }

    private static void ForceRefresh()
    {
        lock (Locker)
        {
            if (_instance != null && DateTime.UtcNow.AddMinutes(RefreshWindowMinutes) > _lastRefreshDate)
            {
                var factory = new DataCacheFactory();
                _cache = factory.GetCache("AdMatter");
                _lastRefreshDate = DateTime.UtcNow;
            }
        }
    }

    ...

    public T Put<T>(string key, T value)
    {
        try
        {
            _cache.Put(key, value);
        }
        catch (SocketException)
        {
            ForceRefresh();
            _cache.Put(key, value);
        }
        return value;
    }
}
Will update this thread when we learn more.

AppFabric Cache standalone mode?

As an ISV I'd like to be able to program my middle tier using the AppFabric Caching Service, but then be able to deploy in small (single server) environments without the need to have AppFabric Cache Server(s) deployed. It also seems natural to me that a "in-memory only" version of the cache client would be ideal for standalone development.
However, all the research I've done so far implies that I have to load a real cache server to make some of the APIs work at all, and that the current "Local" option does not fit the bill for what I want.
It seems to me that what I'm looking for would work similarly to aspx session cache, in that the out of the box mechanism is in-memory, and then you can choose to configure the older external process provider, or the sql provider, and now the AppFabric provider, giving better and better scalability as you move up. This works great for aspx session.
Am I correct in thinking that there is no equivalent solution for programming and deploying in a "small" environment for AppFabric caching?
There's a number of issues raised in this question, let's see if we can tackle them...
First and foremost, as Frode correctly points out you can run an AppFabric instance quite happily on one server - it's what I do most of the time for playing around with the API. Obviously the High Availability feature isn't going to be, well, available, but from the question itself I think you've already accepted that.
Secondly, you can't use the AppFabric API against the Local cache - the local cache is only there to save an AppFabric client trips across the wire to a dedicated AppFabric cache server.
Now, to configurable caches, which I think is the most interesting part. What I think you want to do here is separate the operations on the cache from the cache itself into a generic interface, and then you write your code against the interface at design time, and at runtime you create a cache based on information from your app.config/web.config.
So let's start by defining our interface:
public interface IGenericCache
{
    void Add(string key, object value);
    void Remove(string key);
    Object Get(string key);
    void Update(string key, object value);
}
And now we can define a couple of implementations, one using the MemoryCache and one using AppFabric.
using System.Runtime.Caching;

class GenericMemoryCache : IGenericCache
{
    // Use one shared MemoryCache instance; creating a new MemoryCache per call
    // would give each operation its own, empty cache.
    private static readonly MemoryCache cache = new MemoryCache("GenericMemoryCache");

    public void Add(string key, object value)
    {
        cache.Add(key, value, ObjectCache.InfiniteAbsoluteExpiration);
    }

    public void Remove(string key)
    {
        cache.Remove(key);
    }

    public object Get(string key)
    {
        return cache.Get(key);
    }

    public void Update(string key, object value)
    {
        cache.Set(key, value, ObjectCache.InfiniteAbsoluteExpiration);
    }
}
using Microsoft.ApplicationServer.Caching;

class GenericAppFabricCache : IGenericCache
{
    private DataCacheFactory factory;
    private DataCache cache;

    public GenericAppFabricCache()
    {
        factory = new DataCacheFactory();
        cache = factory.GetCache("GenericAppFabricCache");
    }

    public void Add(string key, object value)
    {
        cache.Add(key, value);
    }

    public void Remove(string key)
    {
        cache.Remove(key);
    }

    public object Get(string key)
    {
        return cache.Get(key);
    }

    public void Update(string key, object value)
    {
        cache.Put(key, value);
    }
}
And we could go on and write IGenericCache implementations with the ASP.NET Cache, NCache, memcached...
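For instance, an implementation over the classic ASP.NET cache could look roughly like this (a sketch only; it assumes the code runs inside an ASP.NET application where HttpRuntime.Cache is available):
using System.Web;

class GenericAspNetCache : IGenericCache
{
    public void Add(string key, object value)
    {
        // HttpRuntime.Cache is a single application-wide cache
        HttpRuntime.Cache.Insert(key, value);
    }

    public void Remove(string key)
    {
        HttpRuntime.Cache.Remove(key);
    }

    public object Get(string key)
    {
        return HttpRuntime.Cache.Get(key);
    }

    public void Update(string key, object value)
    {
        // Insert overwrites an existing entry with the same key
        HttpRuntime.Cache.Insert(key, value);
    }
}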
Now we add a factory class that uses reflection to create an instance of one of these caches based on values from the app.config/web.config.
using System.Configuration;
using System.Reflection;

class CacheFactory
{
    private static IGenericCache cache;

    public static IGenericCache GetCache()
    {
        if (cache == null)
        {
            // Read the assembly and class names from the config file
            string assemblyName = ConfigurationManager.AppSettings["CacheAssemblyName"];
            string className = ConfigurationManager.AppSettings["CacheClassName"];

            // Load the assembly, and then instantiate the implementation of IGenericCache
            Assembly assembly = Assembly.LoadFrom(assemblyName);
            cache = (IGenericCache)assembly.CreateInstance(className);
        }
        return cache;
    }
}
Anywhere the client code needs to use the cache, all that is needed is a call to CacheFactory.GetCache, and the cache specified in the config file will be returned. The client doesn't need to know which cache it is, because the client code is all written against the interface, which means you can scale out your caching simply by changing the settings in the config file.
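To make the wiring concrete, usage might look like this (a sketch; the assembly name, class name, and appSettings keys are just the illustrative ones the CacheFactory above expects):
// app.config / web.config (shown as a comment):
//   <appSettings>
//     <add key="CacheAssemblyName" value="MyCompany.Caching.dll" />
//     <add key="CacheClassName" value="GenericAppFabricCache" />
//   </appSettings>

// Client code only ever sees the interface:
IGenericCache cache = CacheFactory.GetCache();
cache.Add("greeting", "hello");
object cached = cache.Get("greeting");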
Essentially what we've written here is a plugin model for caching, but be aware that you're trading off flexibility for features. The interface has to be more or less the lowest common denominator - you lose the ability to use, say, AppFabric's concurrency models or the tagging API.
There's an excellent and more complete discussion of programming against interfaces in this article.
We have one setup where we run app fabric cache on just one server...