Accessing process instance variables from ProcessInstanceQuery - camunda

What is the proper way to query process instance variables in Camunda?
In Activiti there is a getProcessVariables() method available on org.activiti.engine.runtime.ProcessInstance, but it was removed from org.camunda.bpm.engine.runtime.ProcessInstance.

Camunda introduced a new, separate query for process instance variables:
VariableInstance v =
    runtimeService.createVariableInstanceQuery()
        .processInstanceIdIn(pId)
        .variableName("myVariable")
        .singleResult();
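The returned VariableInstance exposes the stored value via getValue(). As a minimal sketch (assuming pId holds the process instance id and the variable exists), the value can also be fetched directly through the RuntimeService:

// read the value from the VariableInstance returned by the query
Object value = v.getValue();

// alternatively, fetch the variable directly by process instance (execution) id
Object sameValue = runtimeService.getVariable(pId, "myVariable");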

Camunda, how to inject subprocess with specific parameters from main process

I have a Camunda flow with a Call Activity (sequential multi-instance); the Call Activity calls several subflows based on a list of process keys (ids) in a certain order.
For instance, if I get the list ["flow-1", "flow-2"], then flow-1.bpmn and flow-2.bpmn are executed.
Also in scope is flow-specific data, added to the scope in "Read LOT Configuration", for instance [{"name": "flow-1", "identifier": "some-data"}, {"name": "flow-2", "identifier": "some other data"}].
I would like the call activity to determine that for flow-1, I need to send the flow-1 related object along.
I do not want to send the entire collection, but only the flow specific data.
How can I achieve this?
Some ideas:
a) use the element variable from the call activity settings as key to extract the correct data element in a data mapping
b) surround the call activity with a multi-instance embedded sub process. In this scope you will have the element variable (processId), which can then be used to perform delegate variable mapping (https://docs.camunda.org/manual/7.16/reference/bpmn20/subprocesses/call-activity/#delegation-of-variable-mapping); a sketch is shown after this list
c) pass the processID as data and fetch the configuration for the particular process inside its sub process implementation only
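For option (b), here is a minimal sketch of a delegate variable mapping class, assuming the element variable is named processId and the configuration list from "Read LOT Configuration" is stored in a variable named lotConfiguration as a list of maps; the class and variable names are illustrative, not taken from the original model:

import java.util.List;
import java.util.Map;

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.DelegateVariableMapping;
import org.camunda.bpm.engine.delegate.VariableScope;
import org.camunda.bpm.engine.variable.VariableMap;

public class FlowConfigMapping implements DelegateVariableMapping {

    @Override
    @SuppressWarnings("unchecked")
    public void mapInputVariables(DelegateExecution superExecution, VariableMap subVariables) {
        // element variable provided by the multi-instance loop (assumed name)
        String processId = (String) superExecution.getVariable("processId");
        // full configuration list produced by "Read LOT Configuration" (assumed name and shape)
        List<Map<String, Object>> configs =
                (List<Map<String, Object>>) superExecution.getVariable("lotConfiguration");

        // pass only the entry that belongs to the current flow into the called process
        for (Map<String, Object> config : configs) {
            if (processId.equals(config.get("name"))) {
                subVariables.putValue("flowConfig", config);
                break;
            }
        }
    }

    @Override
    public void mapOutputVariables(DelegateExecution superExecution, VariableScope subInstance) {
        // nothing to map back for this sketch
    }
}

The class would then be referenced on the call activity, for example via camunda:variableMappingClass, as described in the linked documentation.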

org.wso2.carbon.context.CarbonContext.getThreadLocalCarbonContext() returns non initialized object when process step is async

Using WSO2 BPS 3.6.0.
I have code like this in our process step:
import org.wso2.carbon.context.CarbonContext;
CarbonContext cctx = CarbonContext.getThreadLocalCarbonContext();
String domain = cctx.getTenantDomain();
If the step is marked as Exclusive, this code returns the correct value.
If the step is marked as Asynchronous, I get domain = null.
As a result, this behavior breaks access to other Carbon properties and the registry.
The problem seems to be this ThreadLocal data holder in org.wso2.carbon.context.internal.CarbonContextDataHolder, which does not return an initialized holder for my async thread:
private static ThreadLocal<CarbonContextDataHolder> currentContextHolder =
        new ThreadLocal<CarbonContextDataHolder>() {
            protected CarbonContextDataHolder initialValue() {
                return new CarbonContextDataHolder(null);
            }
        };
The questions:
How do I get the Carbon domain and registry in my process step when it is asynchronous?
Is there perhaps a way to initialize my thread so that the Carbon registry can be used?
PS: As a workaround I use an Exclusive step prior to the long-running Async step in my process to evaluate the required Carbon-dependent properties.
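To illustrate the workaround, here is a minimal sketch of such an Exclusive step that captures the tenant domain into a process variable before the async step runs. It assumes a BPMN service task backed by an Activiti-style JavaDelegate (WSO2 BPS 3.6.0 ships the Activiti engine); the variable name tenantDomain and the class name are chosen for illustration:

import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;
import org.wso2.carbon.context.CarbonContext;

public class CaptureTenantDomainDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) {
        // this step is Exclusive, so it runs in the request thread where the
        // thread-local CarbonContext has been initialized
        String domain = CarbonContext.getThreadLocalCarbonContext().getTenantDomain();

        // store the value as a process variable so the later async step can read it
        execution.setVariable("tenantDomain", domain);
    }
}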

Shared Doctrine EntityManager service

We are using Symfony for our projects and there's something about Doctrine that I can't get on with.
Doctrine's entity manager (let's call it 'em' in the following) is a shared service, so when I inject em into multiple services, they share exactly the same instance of em. It is simpler if I introduce an example right away to explain what I want to ask. Consider the following:
$service1 = $this->get('vendor_test.service_one'); // $service1 has a private entity manager property
$service2 = $this->get('vendor_test.service_two'); // $service2 also has a private entity manager property
$entity1 = $service1->getEntityById(1); // getEntityById() queries for an entity with the given id and returns it, so it is in the managed list of service1's entity manager
$entity2 = $service2->getEntityById(2); // entity1 and entity2 are not necessarily of the same class
$entity1
    ->setProperty1('aaaa')
    ->setProperty2($service2->updateDateTime($entity2)) // updateDateTime() updates a datetime field of the passed entity (here entity2), calls $this->entityManager->flush(), and returns the datetime
    ->setProperty3('bbbb');
$service1->save(); // calls $this->entityManager->flush(), so it should update the managed entities (in this case entity1)
So the question is: if the entityManager objects of service1 and service2 are the same instance, and therefore share the same internal list of managed entities, then when $service2->updateDateTime($entity2) calls entityManager->flush(), does it flush $entity1 as well? Is $entity1, with Property1 set to 'aaaa', flushed midway and updated in the database, and then flushed again in a second step when $service1->save() is called?
Hope I managed to draw up what I mean and what I want to ask.
As I tested and confirmed with someone more competent, the answer is yes: everywhere I use the entity manager, the same list of managed entities is shared. The way to overcome the problem mentioned in the question is to pass the specific entity to be flushed to the entity manager; all the others will remain untouched.
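As a minimal sketch (assuming Doctrine ORM 2.x, where EntityManager::flush() accepts a single entity to restrict the changeset computation; the setter name setUpdatedAt() is assumed for illustration), updateDateTime() could flush only the entity it touched:

// inside service2: only $entity2's changes are written, $entity1 stays untouched
public function updateDateTime($entity)
{
    $now = new \DateTime();
    $entity->setUpdatedAt($now);
    $this->entityManager->flush($entity);

    return $now;
}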

Lifetime and scope of class and instance variables in django

While there are quite a few questions and answers on here already about the lifetime of different variables within Python, I am looking for how they translate into the Django environment in terms of application scope and endpoint scope. Here is a simple version of what I am making, and I want to ensure that it will behave the way I am expecting it to.
my_cache/models/GlobalCache.py:
# This class should be global to the entire application and only
# load when the server is started.
class GlobalCache(object):
    _cache = {}

    @classmethod
    def fetch(cls):
        return cls._cache

    @classmethod
    def flush(cls):
        cls._cache = {}

    @classmethod
    def load_cache(cls, files_to_load_data_from):
        for file in files_to_load_data_from:
            cls._cache[file] = <load file and process its data into an entry>
my_cache/models/InstanceCache.py:
from .GlobalCache import GlobalCache

# This class will contain a reference to the global cache and use it to look
# up entries.
class InstanceCache(object):
    def __init__(self, name=None):
        self._name = name
        self._cache = GlobalCache.fetch()

    def fetch_file_data(self, file_name):
        cache_entry = self._cache.get(file_name, None)
        if cache_entry is None:
            raise EntryNotFoundException()
        return ReadOnlyInterfaceObject(cache_entry)
The intent is for GlobalCache to have a cls._cache value that persists as long as the server is running. Calling GlobalCache.flush() drops its global reference to the data it was tracking, and calling GlobalCache.load_cache(files_to_load_data_from) populates a fresh copy of the data from those files.
The InstanceCache object is then intended to hold a reference to the current version of the data and return read-only objects for the different data sets identified by their original file name.
From my testing this seems to work, though I do not really have the InstanceCache object per se. I can load the global cache, retrieve read-only objects from it, then flush the global cache and load it with new data. The original read-only objects still return the values they were originally loaded with; new requests use the new data values.
What I want to confirm is that GlobalCache will exist as long as the server is running and will only alter its data through direct calls to flush() and load_cache(), and that when I hit an endpoint and create an InstanceCache, it will keep a reference to the original data only as long as it exists. When execution of the endpoint is done, I would expect it to go out of scope, removing the reference to the global cache; if that was the last reference, the old data goes away and only the new/current data is kept. If it matters, I am running Python 2.7.6 and Django 1.5.12. Solutions that require an upgrade may be useful as well, but an upgrade is not an immediate option for me.
The answer here is a maybe, and it also depends a lot on which app server you are using to run Django (in particular whether you are running multi-process).
So, generally speaking, yes, the GlobalCache will retain its cached contents for the lifetime of the process it is in after it has been initialized.
An InstanceCache, on the other hand, is only guaranteed to be garbage collected at some time after there are no more references to it. Garbage collection is a deep field, and there are often teams of people that work on the algorithms, so going into exact scenarios is probably outside the scope of an answer on SO. A popular implementation of Python is PyPy, and you can read more about the garbage collection used in PyPy here.
That said, please remember that most app servers are multi-process. Both uWSGI and gunicorn spin up child processes to serve requests. So even though GlobalCache is a singleton in its process, there may be several processes, each with its own GlobalCache. This GlobalCache will ultimately be garbage collected/cleaned up when its process exits, and both uWSGI and gunicorn will usually kill child processes after a child has served some number of HTTP requests.
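For example, with gunicorn the recycling behavior can be made explicit in its Python configuration file; the file name and numbers below are illustrative, not recommendations:

# gunicorn.conf.py (assumed name): each of the 4 workers is a separate process
# with its own copy of GlobalCache; after 1000 requests a worker is replaced,
# and the new process starts with an empty cache until load_cache() is called again
workers = 4
max_requests = 1000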

Setting Connection Parameters via ADO for SQL Server

Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries.
We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query and then the queries can return information between date ranges specified in our application.
We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL.
I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code.
Solution #1: set all variables before running Recordset->OpenForward
Connection->Execute("SET @GetStartDate = ...");
Connection->Execute("SET @GetEndDate = ...");
// Repeat for all parameters
Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement?
Solution #2: create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL
// Command has been previously been created
ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate");
ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate");
// Set values and attach etc...
What I would like to know is whether there is something like:
Connection->SetParameter("GetStartDate", "20090101");
Connection->SetParameter("GetEndDate", "20100101");
where the values persist for the lifetime of the connection, and the SQL can use something like @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.
Since no one has ventured an answer, I'm guessing there isn't an elegant solution. That said:
Global cursors persist for the duration of the connection and can be accessed from any SQL batch or stored proc, so you could execute this once on the connection:
DECLARE KludgeKursor CURSOR GLOBAL STATIC FOR
SELECT StartDate = '2010-01-01', EndDate = '2010-04-30'
OPEN KludgeKursor
and in your stored procedures:
--get the values
DECLARE @StartDate datetime, @EndDate datetime
FETCH FIRST FROM GLOBAL KludgeKursor
    INTO @StartDate, @EndDate

--go crazy
SELECT @StartDate, @EndDate
Each connection sees only its own values, so the same stored procs can be used with different connections/values. The global cursor is automatically deallocated when the connection ends.