Camunda delegateExecution's getVariable throws ENGINE-03040 No serializer defined for variable instance

I'm new to Camunda. I've defined a BPMN process for a set of tasks and created the necessary delegates to handle the activities.
As part of the business process I need to check a pre-qualification condition before proceeding to the next activity. If the pre-qualification succeeds on the first run, the flow completes normally. If it fails, the process should wait and periodically re-check the condition. When Camunda waits for 2 seconds and then re-triggers the pre-qualification activity, I get the following error and the instance fails.
Stacktrace:
org.camunda.bpm.engine.ProcessEngineException: ENGINE-03040 No serializer defined for variable instance 'org.camunda.bpm.engine.impl.persistence.entity.util.TypedValueField#44ba275a'
at org.camunda.bpm.engine.impl.db.EnginePersistenceLogger.serializerNotDefinedException(EnginePersistenceLogger.java:387)
at org.camunda.bpm.engine.impl.persistence.entity.util.TypedValueField.ensureSerializerInitialized(TypedValueField.java:207)
at org.camunda.bpm.engine.impl.persistence.entity.util.TypedValueField.getSerializer(TypedValueField.java:194)
at org.camunda.bpm.engine.impl.persistence.entity.util.TypedValueField.getTypedValue(TypedValueField.java:105)
at org.camunda.bpm.engine.impl.persistence.entity.VariableInstanceEntity.getTypedValue(VariableInstanceEntity.java:276)
at org.camunda.bpm.engine.impl.core.variable.scope.AbstractVariableScope.getValueFromVariableInstance(AbstractVariableScope.java:146)
at org.camunda.bpm.engine.impl.core.variable.scope.AbstractVariableScope.getVariable(AbstractVariableScope.java:133)
at org.camunda.bpm.engine.impl.core.variable.scope.AbstractVariableScope.getVariable(AbstractVariableScope.java:129)
We have been struggling with this for quite some time now. Any help would be appreciated.
Thanks.
Saravan

Related

Camunda set Execution Variable

I'm trying to set a process variable in the Task Listener of a human task using a Groovy script on the 'create' event in a Camunda BPMN workflow.
execution.setVariable('newUserType',"RMAOFF1");
But it gives me the error: "The task does not exist or the corresponding process instance could not be resumed successfully."
Any help most appreciated.
In a Task Listener you don't have an execution (DelegateExecution).
But you do have a task (DelegateTask delegateTask).
So your example becomes: task.setVariable('newUserType', "RMAOFF1")
By the way, this would also work in an expression: ${task.setVariable('newUserType', "RMAOFF1")}
For more info see https://docs.camunda.org/manual/7.18/user-guide/process-applications/process-application-event-listeners/
The signature of a TaskListener is void notify(DelegateTask task). I guess you are accidentally using an ExecutionListener, but that does not have a "create" event.

Airflow task to refer to multiple previous tasks?

Is there a way I can have a task require the completion of multiple upstream tasks which are still able to finish independently?
download_fcr --> process_fcr --> load_fcr
download_survey --> process_survey --> load_survey
create_dashboard should require load_fcr and load_survey to successfully complete.
I do not want to force anything in the 'survey' task chain to require anything from the 'fcr' task chain to complete. I want them to process in parallel and still complete even if one fails. However, the dashboard task requires both to finish loading to the database before it should start.
fcr    *-->*-->*
                \
                 ---> create_dashboard
                /
survey *-->*-->*
You can pass a list of tasks to set_upstream or set_downstream. In your case, if you specifically want to use set_upstream, you could describe your dependencies as:
create_dashboard.set_upstream([load_fcr, load_survey])
load_fcr.set_upstream(process_fcr)
process_fcr.set_upstream(download_fcr)
load_survey.set_upstream(process_survey)
process_survey.set_upstream(download_survey)
Have a look at Airflow's source code: even when you pass just one task object to set_upstream, it actually wraps it in a list before doing anything.
download_fcr.set_downstream(process_fcr)
process_fcr.set_downstream(load_fcr)
download_survey.set_downstream(process_survey)
process_survey.set_downstream(load_survey)
load_survey.set_downstream(create_dashboard)
load_fcr.set_downstream(create_dashboard)
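The fan-in behavior above can be sketched without Airflow at all. The Task class and ready helper below are illustrative stand-ins (not Airflow's real API); they just show how passing a list to set_upstream records several upstream dependencies on one task, and why create_dashboard only becomes runnable once both chains have finished loading:

```python
# Toy sketch of Airflow's upstream-dependency wiring (illustrative only).
class Task:
    def __init__(self, task_id):
        self.task_id = task_id
        self.upstream = set()

    def set_upstream(self, task_or_list):
        # Airflow performs the same normalization: a single task is
        # wrapped in a list before the dependencies are recorded.
        tasks = task_or_list if isinstance(task_or_list, list) else [task_or_list]
        for t in tasks:
            self.upstream.add(t)

def ready(task, completed):
    """A task may run once every one of its upstream tasks has completed."""
    return all(up in completed for up in task.upstream)

load_fcr = Task("load_fcr")
load_survey = Task("load_survey")
create_dashboard = Task("create_dashboard")
create_dashboard.set_upstream([load_fcr, load_survey])

print(ready(create_dashboard, {load_fcr}))               # False: survey chain not done
print(ready(create_dashboard, {load_fcr, load_survey}))  # True: both chains loaded
```

Note that neither chain depends on the other, so they still run in parallel; only the dashboard task waits on both.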

Loopback: return error from beforeValidation hook

I need to perform custom validation of an instance before saving it to a MySQL DB, so I run an (async) check inside the beforeValidate model hook.
MyModel.beforeValidate = function (next) {
  // async check that finally calls next() or next(new Error('fail'))
};
But when the check fails and I pass an Error object to next, execution continues anyway.
Is there any way to stop execution and respond to the client with an error?
This is a known bug in the framework, see https://github.com/strongloop/loopback/issues/614
I am working on a new hook implementation that will not have issues like the one you have experienced, see loopback-datasource-juggler#367 and the pull request loopback-datasource-juggler#403

AWS SWF - IllegalStateException: No context found. (method called outside the workflow definition)

I am writing an AWS SWF application using the Flow Framework and I am getting an IllegalStateException: No context found, which means the method is called outside of the workflow definition code. It is thrown while running the following code:
private DecisionContextProvider contextProvider =
        new DecisionContextProviderImpl();
private WorkflowClock clock =
        contextProvider.getDecisionContext().getWorkflowClock();
Why am I getting this error and how to get rid of it?
This exception is thrown by getDecisionContext() when you call it outside of a workflow (it should only be called somewhere in the call hierarchy of your workflow implementation, i.e., your WorkflowImpl).
To avoid getting that error, you should only call getDecisionContext() while inside of a workflow or its constructor. The object only gets set in those circumstances (by the simple workflow framework), and doesn't exist outside of workflow execution, hence the IllegalStateException.

How to increase deploy timeout limit at AWS Opsworks?

I would like to increase the deploy timeout in a stack layer that hosts many apps (AWS OpsWorks).
Currently I get the following error:
[2014-05-05T22:27:51+00:00] ERROR: Running exception handlers
[2014-05-05T22:27:51+00:00] ERROR: Exception handlers complete
[2014-05-05T22:27:51+00:00] FATAL: Stacktrace dumped to /var/lib/aws/opsworks/cache/chef-stacktrace.out
[2014-05-05T22:27:51+00:00] ERROR: deploy[/srv/www/lakers_test] (opsworks_delayed_job::deploy line 65) had an error: Mixlib::ShellOut::CommandTimeout: Command timed out after 600s:
Thanks in advance.
First of all, as mentioned in this ticket reporting a similar issue, the OpsWorks guys recommend trying to speed up the call first (there's always room for optimization).
If that doesn't work, we can go down the rabbit hole: this gets called, which in turn calls Mixlib::ShellOut.new, which happens to have a timeout option that you can pass in the initializer!
Now you can use an OpsWorks custom cookbook to override the original method and pass the corresponding timeout option. OpsWorks merges the contents of its base cookbooks with the contents of your custom cookbook, so you only need to add and edit a single file in your custom cookbook: opsworks_commons/libraries/shellout.rb:
module OpsWorks
  module ShellOut
    extend self

    # This would be your new default timeout, in seconds.
    DEFAULT_OPTIONS = { timeout: 900 }

    def shellout(command, options = {})
      cmd = Mixlib::ShellOut.new(command, DEFAULT_OPTIONS.merge(options))
      cmd.run_command
      cmd.error!
      [cmd.stderr, cmd.stdout].join("\n")
    end
  end
end
Notice how the only additions are DEFAULT_OPTIONS and merging those options into the Mixlib::ShellOut.new call.
An improvement to this method would be making the timeout configurable via a Chef attribute, which you could in turn update via your custom JSON in the OpsWorks interface. That means passing the timeout attribute in the initial OpsWorks::ShellOut.shellout call rather than in the method definition, but it depends on how the shellout method actually gets called...
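One detail worth noting about DEFAULT_OPTIONS.merge(options) above: in Ruby's Hash#merge, entries from the argument win, so a caller-supplied timeout overrides the 900-second default. A quick Python analogue of that precedence (illustrative only, not part of the cookbook):

```python
# Caller-supplied options override the defaults, mirroring Ruby's
# DEFAULT_OPTIONS.merge(options): later entries win.
DEFAULT_OPTIONS = {"timeout": 900}

def merged(options):
    # dict unpacking applies left to right, so `options` takes precedence
    return {**DEFAULT_OPTIONS, **options}

print(merged({}))                 # {'timeout': 900}  - default applies
print(merged({"timeout": 1200}))  # {'timeout': 1200} - caller override wins
```

This is why the cookbook change is safe: call sites that already pass an explicit timeout keep their value, and only calls with no timeout pick up the new default.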