Using WSO2 BPS 3.6.0 - is there a (standard) way to update an instance variable in an already running instance?
The reason behind this is: the client passes incorrect data at process initialization; the client may fix its data later, but the process instance remembers the wrong values.
I believe I could still update the data directly in the database, but I wouldn't like to see process admins messing with the database.
Edit:
I am working with the BPEL engine, and my idea is to update a variable not from the process design but as a corrective action (admin console? API?).
Thank you for all ideas.
You are setting the instance variables during process initialization based on the client's request.
For your requirement, the variables need to be retrieved from the current request instead. You can do this by using the execution entity to read the data rather than the instance variables that were set during process initialization.
Refer to the example below:
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;

public class SampleTask implements JavaDelegate {
    public void execute(DelegateExecution execution) throws Exception {
        // getVariable returns Object, so a cast is needed
        String userId = (String) execution.getVariable("userId");
        // perform your logic here
    }
}
If you want to keep using the instance variables, I suggest changing the instance variable during process execution:
public class SampleTask implements JavaDelegate {
    private String userId;

    public void execute(DelegateExecution execution) throws Exception {
        String newUserId = (String) execution.getVariable("userId");
        setUserId(newUserId);
        // perform your logic here
    }

    public void setUserId(String userId) {
        this.userId = userId;
    }

    public String getUserId() {
        return userId;
    }
}
I am writing an application where the Client issues commands to a web service (CQRS)
The client is written in C#
The client uses a WCF Proxy to send the messages
The client uses the async pattern to call the web service
The client can issue multiple requests at once.
My problem is that sometimes the client simply issues too many requests and the service starts returning that it is too busy.
Here is an example. I am registering orders, and they can range from a handful up to a few thousand.
var taskList = Orders.Select(order => _cmdSvc.ExecuteAsync(order))
.ToList();
await Task.WhenAll(taskList);
Basically, I call ExecuteAsync for every order and get a Task back. Then I just await for them all to complete.
I don't really want to fix this server-side because no matter how much I tune it, the client could still kill it by sending for example 10,000 requests.
So my question is: can I configure the WCF client in any way so that it simply accepts all the requests but sends at most, say, 20 at a time, automatically dispatching the next one as each completes? Or is the Task I get back tied to the actual HTTP request, and can it therefore not be returned until the request has actually been dispatched?
If this is the case and the WCF client simply cannot do this for me, I have the idea of decorating the WCF client with a class that queues commands, returns a Task (using TaskCompletionSource), and then makes sure that no more than, say, 20 requests are active at a time. I know this will work, but I would like to ask if anyone knows of a library or class that does something like this?
This is kind of like throttling, but not exactly: I don't want to limit how many requests I can send in a given period of time, but rather how many requests can be active at any given time.
Based on @PanagiotisKanavos' suggestion, here is how I solved this.
RequestLimitCommandService acts as a decorator for the actual service, which is passed into the constructor as innerSvc. When someone calls ExecuteAsync, a completion source is created and posted to the ActionBlock along with the command; the caller then gets back a Task from the completion source.
The ActionBlock will then call the processing method. This method sends the command to the web service. Depending on what happens, it uses the completion source either to notify the original sender that the command was processed successfully or to attach the exception that occurred.
public class RequestLimitCommandService : IAsyncCommandService
{
    private class ExecutionToken
    {
        public TaskCompletionSource<bool> Source { get; }
        public ICommand Command { get; }

        public ExecutionToken(TaskCompletionSource<bool> source, ICommand command)
        {
            Source = source;
            Command = command;
        }
    }

    private readonly IAsyncCommandService _innerSvc;
    private readonly ActionBlock<ExecutionToken> _block;

    public RequestLimitCommandService(IAsyncCommandService innerSvc, int maxDegreeOfParallelism)
    {
        _innerSvc = innerSvc;
        var options = new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism };
        _block = new ActionBlock<ExecutionToken>(Execute, options);
    }

    // Note: an explicit interface implementation must not have an access modifier.
    Task IAsyncCommandService.ExecuteAsync(ICommand command)
    {
        var source = new TaskCompletionSource<bool>();
        var token = new ExecutionToken(source, command);
        _block.Post(token);
        return source.Task;
    }

    private async Task Execute(ExecutionToken token)
    {
        try
        {
            await _innerSvc.ExecuteAsync(token.Command);
            token.Source.SetResult(true);
        }
        catch (Exception ex)
        {
            token.Source.SetException(ex);
        }
    }
}
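For completeness, a sketch of how the decorator might be wired up on the client side (WcfCommandService here is a hypothetical WCF-backed implementation of IAsyncCommandService):

    // Wrap the real client, allowing at most 20 in-flight requests.
    IAsyncCommandService throttled =
        new RequestLimitCommandService(new WcfCommandService(), maxDegreeOfParallelism: 20);

    // Callers use it exactly like the undecorated service; the ActionBlock
    // queues everything beyond the 20 currently executing commands.
    var taskList = Orders.Select(order => throttled.ExecuteAsync(order)).ToList();
    await Task.WhenAll(taskList);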
I am defining a workflow using WSO2 BPMN, which will be integrated with our application. When instantiating a process (from our application), I need to pass some information from our application to a BPMN task, which will be displayed in the task screen. That information must be persisted across all the tasks of the workflow. How can I achieve this?
Yes, you can do that by introducing form variables for each task (user tasks). If it is a service task, you have to write a custom Java class to read the variables, and then add its class name to the Main config tab as a property in the Activiti designer. A sample custom Java class that reads process variables through a service task is shown below. As you can see, employeeSalary and workingPeriod are two variables that can be passed from a particular process (from your application). You can set variables for the following tasks by invoking the execution.setVariable("variableName", value) method.
package org.wso2.bps.serviceTask;

import java.util.Random;

import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;

/**
 * Service task to calculate the bonus for employees.
 */
public class App implements JavaDelegate {
    public void execute(DelegateExecution execution) throws Exception {
        int salary = Integer.parseInt((String) execution.getVariable("employeeSalary"));
        int numOfWorkingDays = Integer.parseInt((String) execution.getVariable("workingPeriod"));
        Random randomGenerator = new Random();
        int value = randomGenerator.nextInt(10);
        int result = salary + (numOfWorkingDays * value);
        execution.setVariable("result", result);
    }
}
Hope this resolves your issue.
I am developing an Azure WebJobs executable that I would like to use with multiple Azure websites. Each web site would need its own Azure Storage queue.
The problem I see is that the ProcessQueueMessage requires the queue name to be defined statically as an attribute of the first parameter inputText. I would rather have the queue name be a configuration property of the running Azure Website instance, and have the job executable read that at runtime when it starts up.
Is there any way to do this?
This can now be done. Simply create an INameResolver to allow you to resolve any string surrounded in % (percent) signs. For example, if this is your function with a queue name specified:
public static void WriteLog([QueueTrigger("%logqueue%")] string logMessage)
{
    Console.WriteLine(logMessage);
}
Notice how there are % (percent) signs around the string logqueue. This means the job system will try to resolve the name using an INameResolver which you can create and then register with your job.
Here is an example of a resolver that will just take the string specified in the percent signs and look it up in your AppSettings in the config file:
public class QueueNameResolver : INameResolver
{
    public string Resolve(string name)
    {
        // AppSettings[name] is already a string (or null if the key is missing),
        // so no ToString() call is needed
        return ConfigurationManager.AppSettings[name];
    }
}
And then in your Program.cs file, you just need to wire this up:
var host = new JobHost(new JobHostConfiguration
{
    NameResolver = new QueueNameResolver()
});
host.RunAndBlock();
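For the resolution to work, the name inside the percent signs must exist as an app setting; for example (the queue name value here is illustrative):

    <configuration>
      <appSettings>
        <!-- %logqueue% in the QueueTrigger attribute resolves to this value -->
        <add key="logqueue" value="my-log-queue" />
      </appSettings>
    </configuration>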
This is probably an old question, but in case anyone else stumbles across this post: this is now supported by passing a JobHostConfiguration object into the JobHost constructor.
http://azure.microsoft.com/en-gb/documentation/articles/websites-dotnet-webjobs-sdk-storage-queues-how-to/#config
A slightly better implementation of the name resolver, which avoids fetching from configuration every time. It uses a dictionary to cache the config values once retrieved.
using Microsoft.Azure.WebJobs;
using System.Collections.Concurrent;
using System.Configuration;

public class QueueNameResolver : INameResolver
{
    // ConcurrentDictionary keeps the cache safe if Resolve is called concurrently
    private static readonly ConcurrentDictionary<string, string> keys =
        new ConcurrentDictionary<string, string>();

    public string Resolve(string name)
    {
        return keys.GetOrAdd(name, key => ConfigurationManager.AppSettings[key]);
    }
}
Unfortunately, that is not possible. You can use the IBinder interface to bind dynamically to a queue but you will not have the triggering mechanism for it.
Basically, the input queue name has to be hardcoded if you want triggers. For output, you can use the previously mentioned interface.
Here is a sample for IBinder. The sample binds a blob dynamically but you can do something very similar for queues.
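As a sketch of the IBinder approach for queue output (assuming the WebJobs SDK 1.x API; the "input" trigger queue and the OutputQueueName app setting are hypothetical names):

    public static void ProcessMessage(
        [QueueTrigger("input")] string message, // the trigger queue name still has to be fixed (or %resolved%)
        IBinder binder)
    {
        // Choose the output queue at runtime, e.g. from configuration
        string queueName = ConfigurationManager.AppSettings["OutputQueueName"];
        var collector = binder.Bind<ICollector<string>>(new QueueAttribute(queueName));
        collector.Add(message);
    }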
Is there anything aside from setting Secondaries=1 in the cluster configuration to enable HighAvailability, specifically on the cache client configuration?
Our configuration:
Cache Cluster (3 windows enterprise hosts using a SQL configuration provider):
Cache Clients
With the above configuration, we see primary and secondary regions created on the three hosts; however, when one of the hosts is stopped, the following exceptions occur:
ErrorCode<ERRCA0018>:SubStatus<ES0001>:The request timed out.
An existing connection was forcibly closed by the remote host
No connection could be made because the target machine actively refused it 192.22.0.34:22233
An existing connection was forcibly closed by the remote host
Isn't the point of High Availability to be able to handle hosts going down without interrupting service? We are using a named region - does this break High Availability? I read somewhere that named regions can only exist on one host (I did verify that a secondary does exist on another host). I feel like we're missing something in the cache client configuration that would enable High Availability; any insight on the matter would be greatly appreciated.
High Availability is about protecting the data, not making it available every second (hence the retry exceptions). When a cache host goes down, you get an exception and are supposed to retry. During that time, access to HA caches may throw a retry exception back at you while the cluster is busy shuffling data around and creating an extra copy. Regions complicate this further, since a larger chunk has to be copied before the region is HA again.
Also, the client keeps a connection to all cache hosts, so when one goes down it throws an exception indicating that something happened.
Basically, when one host goes down, AppFabric freaks out until two copies of all data exist again in the HA caches. We created a small layer in front of it to handle this logic, and dropped the servers one at a time to make sure it handled all scenarios, so that our app kept working and was just a tad slower.
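A minimal sketch of such a retry layer (our actual implementation differs; the backoff and attempt count are illustrative):

    public static class CacheRetry
    {
        // Retries an AppFabric operation when the cluster asks the client to retry
        public static T Execute<T>(Func<T> operation, int maxAttempts = 3)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    return operation();
                }
                catch (DataCacheException ex)
                {
                    if (ex.ErrorCode != DataCacheErrorCode.RetryLater || attempt >= maxAttempts)
                        throw;
                    Thread.Sleep(200 * attempt); // simple linear backoff
                }
            }
        }
    }

    // usage: var value = CacheRetry.Execute(() => cache.Get("someKey"));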
After opening a ticket with Microsoft we narrowed it down to having a static DataCacheFactory object.
public class AppFabricCacheProvider : ICacheProvider
{
    private static readonly object Locker = new object();
    private static AppFabricCacheProvider _instance;
    private static DataCache _cache;

    private AppFabricCacheProvider()
    {
    }

    public static AppFabricCacheProvider GetInstance()
    {
        lock (Locker)
        {
            if (_instance == null)
            {
                _instance = new AppFabricCacheProvider();
                var factory = new DataCacheFactory();
                _cache = factory.GetCache("AdMatter");
            }
        }
        return _instance;
    }

    ...
}
Looking at the tracelogs from AppFabric, the clients are still trying to connect to all the hosts without handling hosts going down. Resetting IIS on the clients forces a new DataCacheFactory to be created (in our App_Start) and stops the exceptions.
The MS engineers agreed that this approach was the best-practice way (we also found several articles about this: see link and link).
They are continuing to investigate a solution for us. In the meantime, we have come up with the following temporary workaround, where we force a new DataCacheFactory object to be created if we encounter one of the above exceptions.
public class AppFabricCacheProvider : ICacheProvider
{
    private const int RefreshWindowMinutes = -5;

    private static readonly object Locker = new object();
    private static AppFabricCacheProvider _instance;
    private static DataCache _cache;
    private static DateTime _lastRefreshDate;

    private AppFabricCacheProvider()
    {
    }

    public static AppFabricCacheProvider GetInstance()
    {
        lock (Locker)
        {
            if (_instance == null)
            {
                _instance = new AppFabricCacheProvider();
                var factory = new DataCacheFactory();
                _cache = factory.GetCache("AdMatter");
                _lastRefreshDate = DateTime.UtcNow;
            }
        }
        return _instance;
    }

    private static void ForceRefresh()
    {
        lock (Locker)
        {
            // Only recreate the factory if the last refresh was more than 5 minutes ago
            if (_instance != null && DateTime.UtcNow.AddMinutes(RefreshWindowMinutes) > _lastRefreshDate)
            {
                var factory = new DataCacheFactory();
                _cache = factory.GetCache("AdMatter");
                _lastRefreshDate = DateTime.UtcNow;
            }
        }
    }

    ...

    public T Put<T>(string key, T value)
    {
        try
        {
            _cache.Put(key, value);
        }
        catch (SocketException)
        {
            ForceRefresh();
            _cache.Put(key, value);
        }
        return value;
    }
}
Will update this thread when we learn more.
As an ISV I'd like to be able to program my middle tier using the AppFabric Caching Service, but then be able to deploy in small (single server) environments without the need to have AppFabric Cache Server(s) deployed. It also seems natural to me that a "in-memory only" version of the cache client would be ideal for standalone development.
However, all the research I've done so far implies that I have to load a real cache server to make some of the APIs work at all, and that the current "Local" option does not fit the bill for what I want.
It seems to me that what I'm looking for would work similarly to aspx session cache, in that the out of the box mechanism is in-memory, and then you can choose to configure the older external process provider, or the sql provider, and now the AppFabric provider, giving better and better scalability as you move up. This works great for aspx session.
Am I correct in thinking that there is no equivalent solution for programming and deploying in a "small" environment for AppFabric caching?
There are a number of issues raised in this question; let's see if we can tackle them...
First and foremost, as Frode correctly points out you can run an AppFabric instance quite happily on one server - it's what I do most of the time for playing around with the API. Obviously the High Availability feature isn't going to be, well, available, but from the question itself I think you've already accepted that.
Secondly, you can't use the AppFabric API against the Local cache - the local cache is only there to save an AppFabric client trips across the wire to a dedicated AppFabric cache server.
Now, to configurable caches, which I think is the most interesting part. What I think you want to do here is separate the operations on the cache from the cache itself into a generic interface, and then you write your code against the interface at design time, and at runtime you create a cache based on information from your app.config/web.config.
So let's start by defining our interface:
public interface IGenericCache
{
    void Add(string key, object value);
    void Remove(string key);
    object Get(string key);
    void Update(string key, object value);
}
And now we can define a couple of implementations, one using the MemoryCache and one using AppFabric.
using System.Runtime.Caching;

class GenericMemoryCache : IGenericCache
{
    // Use a single shared instance: creating a new MemoryCache in every method
    // would give each call its own empty cache, so nothing would ever be found.
    private static readonly MemoryCache cache = new MemoryCache("GenericMemoryCache");

    public void Add(string key, object value)
    {
        cache.Add(key, value, new CacheItemPolicy());
    }

    public void Remove(string key)
    {
        cache.Remove(key);
    }

    public object Get(string key)
    {
        return cache.Get(key);
    }

    public void Update(string key, object value)
    {
        cache.Set(key, value, new CacheItemPolicy());
    }
}
using Microsoft.ApplicationServer.Caching;

class GenericAppFabricCache : IGenericCache
{
    private DataCacheFactory factory;
    private DataCache cache;

    public GenericAppFabricCache()
    {
        factory = new DataCacheFactory();
        cache = factory.GetCache("GenericAppFabricCache");
    }

    public void Add(string key, object value)
    {
        cache.Add(key, value);
    }

    public void Remove(string key)
    {
        cache.Remove(key);
    }

    public object Get(string key)
    {
        return cache.Get(key);
    }

    public void Update(string key, object value)
    {
        cache.Put(key, value);
    }
}
And we could go on and write IGenericCache implementations with the ASP.NET Cache, NCache, memcached...
Now we add a factory class that uses reflection to create an instance of one of these caches based on values from the app.config/web.config.
using System.Configuration;
using System.Reflection;

class CacheFactory
{
    private static IGenericCache cache;

    public static IGenericCache GetCache()
    {
        if (cache == null)
        {
            // Read the assembly and class names from the config file
            string assemblyName = ConfigurationManager.AppSettings["CacheAssemblyName"];
            string className = ConfigurationManager.AppSettings["CacheClassName"];

            // Load the assembly, and then instantiate the implementation of IGenericCache
            Assembly assembly = Assembly.LoadFrom(assemblyName);
            cache = (IGenericCache) assembly.CreateInstance(className);
        }
        return cache;
    }
}
Anywhere the client code needs to use the cache, all that is needed is a call to CacheFactory.GetCache, and the cache specified in the config file will be returned, but the client doesn't need to know which cache it is because the client code is all written against the interface. Which means that you can scale out your caching simply by changing the settings in the config file.
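Client code and configuration might then look like this (the assembly/class values and the Customer type are illustrative):

    // app.config:
    // <appSettings>
    //   <add key="CacheAssemblyName" value="MyApp.Caching.dll" />
    //   <add key="CacheClassName" value="MyApp.Caching.GenericMemoryCache" />
    // </appSettings>

    IGenericCache cache = CacheFactory.GetCache();
    cache.Add("customer:42", customer);
    var cached = (Customer) cache.Get("customer:42");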
Essentially, what we've written here is a plugin model for caching, but be aware that you're trading features for flexibility. The interface has to be more or less the lowest common denominator: you lose the ability to use, say, AppFabric's concurrency models or the tagging API.
There's an excellent and more complete discussion of programming against interfaces in this article.
We have one setup where we run AppFabric cache on just one server...