Camunda Process Definition Deployment via API

I'm trying to deploy a process definition from a file using the following code
DeploymentBuilder deploymentBuilder = repositoryService.createDeployment().name(definitionName);
deploymentBuilder.addInputStream(definitionName, definitionFileInputStream);
String deploymentId = deploymentBuilder.deploy().getId();
System.out.println(deploymentId);
The above code runs successfully and the new deploymentId is printed out.
Later, I tried to list the deployed process definitions using the following code
List<ProcessDefinition> definitions = repositoryService.createProcessDefinitionQuery().list();
System.out.println(definitions.size());
The above code runs successfully but the output is always 0.
I've done some investigation and found that the ACT_GE_BYTEARRAY table contains an entry with the corresponding deploymentId, and that the BYTES_ column holds the contents of the definition file.
I have also found that there is no corresponding entry in the ACT_RE_PROCDEF table.
Is there something missing? From the API and the examples I found, it seems that the above code should suffice, or is there a missing step?
Thanks for your help

It turned out that the issue was related to definitionName (thanks thorben!), as it has to end with either .bpmn20.xml or .bpmn.
After further testing, the suffix is only required for the definitionName passed to addInputStream:
deploymentBuilder.addInputStream(definitionName, definitionFileInputStream);
The definitionName passed to name() can be left without the suffix:
repositoryService.createDeployment().name(definitionName);
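For illustration, a minimal sketch of a deployment that satisfies the naming rule (the resource and deployment names below are made up; repositoryService and definitionFileInputStream are as in the question):
// The resource name must end with .bpmn20.xml or .bpmn so the engine parses it as a BPMN process.
String resourceName = "invoice-process.bpmn20.xml"; // hypothetical name
String deploymentId = repositoryService.createDeployment()
    .name("invoice-deployment") // no suffix needed on the deployment name
    .addInputStream(resourceName, definitionFileInputStream)
    .deploy()
    .getId();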

It seems that you forgot the isExecutable flag on your deployed process definitions. Please check whether your process model contains an isExecutable flag. If you use the Camunda Modeler, simply set this option in the properties panel of the process.
If you call #deploy() with non-executable definitions, a deployment will be created, but the process definitions are not deployed since they are not executable.
In the latest version of the Camunda platform (7.7), a new method called #deployWithResult() was added to the DeploymentBuilder. This method returns the deployed process definitions, so it is easy to check whether any process definitions were actually deployed.
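A rough sketch of that check, assuming the 7.7 API (DeploymentWithDefinitions and getDeployedProcessDefinitions() are my reading of that API, so treat them as an approximation):
DeploymentWithDefinitions deployment = repositoryService.createDeployment()
    .name(definitionName)
    .addInputStream(definitionName, definitionFileInputStream)
    .deployWithResult();
// Empty (or null) if nothing was parsed as an executable process definition.
List<ProcessDefinition> deployed = deployment.getDeployedProcessDefinitions();
System.out.println(deployed == null ? 0 : deployed.size());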

(ROS) Failed to create global planner

My setup is ROS Melodic on Ubuntu 18.04.
I want to simulate a turtlebot3 moving with my own global planner and have been following this tutorial to get started: http://wiki.ros.org/navigation/Tutorials/Writing%20A%20Global%20Path%20Planner%20As%20Plugin%20in%20ROS#Running_the_Plugin_on_the_Turtlebot. The tutorial seems to be made for ROS Hydro, but as it was the best source of guidance I could find, I hoped it would work.
The error I'm having is:
Failed to create the global_planner/GlobalPlanner planner, are you sure it is properly registered and that the containing library is built? Exception: MultiLibraryClassLoader: Could not create object of class type global_planner::GlobalPlanner as no factory exists for it. Make sure that the library exists and was explicitly loaded through MultiLibraryClassLoader::loadLibrary()
To my knowledge I've followed the tutorial as closely as possible, with only a few things done differently because I wanted to test it, couldn't do as the tutorial asked, or thought it wouldn't impact the results. What I have done differently is:
I use the carrot_planner.h and carrot_planner.cpp files in the tutorial section 1 to test that it works before trying with my own code to avoid confusion about where possible errors come from. It's not 'different' from the tutorial to my knowledge, but figured I'd mention it. They are placed in catkin_ws/src/carrot_planner/src/global_planner/
The ROS package I'm working from is in catkin_ws/src and is called carrot_planner. In tutorial step 1.3 I use add_library(global_planner_lib src/global_planner/carrot_planner.cpp). I wouldn't imagine that affects the results either.
In section 3 of the tutorial it mentions that 'First, you need to copy the package that contains your global planner (in our case global_planner) into the catkin workspace of your Turtlebot (e.g. catkin_ws).' Since my package was already in catkin_ws/src/ I haven't moved it since I guess I didn't need to.
I've altered the 'move_base.launch' file in '/opt/ros/melodic/share/turtlebot3_navigation/launch/' instead of the 'move_base.launch.xml' in '/opt/ros/hydro/share/turtlebot_navigation/launch/includes/', as there doesn't seem to be a destination '...turtlebot3_navigation/launch/includes/'. There are files in launch, but no includes folder. Maybe that's a difference from Hydro to Melodic, I don't know. There may be a whole lot of things that need to be done differently from the tutorial when using Melodic or turtlebot3, but I don't know.
I haven't made my own launch file for bringup of the turtlebot, but have instead followed this tutorial (https://emanual.robotis.com/docs/en/platform/turtlebot3/nav_simulation/) to guide me with turtlebot3. After finishing this step in the global planner tutorial, 'Save and close the move_base.launch.xml. Note that the name of the planner is global_planner/GlobalPlanner the same specified in global_planner_plugin.xml. Now, you are ready to use your new planner', I tested whether it worked by running 'roslaunch turtlebot3_gazebo turtlebot3_world.launch' and then 'roslaunch turtlebot3_navigation turtlebot3_navigation.launch map_file:=$HOME/map.yaml', which led to the error shown above. I have created the map.yaml, so there's no misunderstanding about whether that's missing.
I would be very glad for any help, thank you ^^
Edit: My system only had 'navfn' on it, not 'global_planner' or 'carrot_planner', if that makes a difference.
After looking over the code I found a solution. It doesn't make everything work perfectly yet, but seems to solve the immediate problem.
The problem was that in my 'global_planner_plugin.xml' I just used the code provided in the tutorial:
<library path="lib/libglobal_planner_lib">
<class name="global_planner/GlobalPlanner" type="global_planner::GlobalPlanner" base_class_type="nav_core::BaseGlobalPlanner">
<description>This is a global planner plugin by iroboapp project.</description>
</class>
</library>
But in the carrot_planner.cpp file it says:
PLUGINLIB_EXPORT_CLASS(carrot_planner::CarrotPlanner, nav_core::BaseGlobalPlanner)
Changing type="global_planner::GlobalPlanner" to type="carrot_planner::CarrotPlanner" and then launching turtlebot3 no longer gives that error.
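For reference, a sketch of what the adjusted global_planner_plugin.xml then looks like (the name attribute stays as whatever move_base is configured to load; the type must match the class given to PLUGINLIB_EXPORT_CLASS):
<library path="lib/libglobal_planner_lib">
<class name="global_planner/GlobalPlanner" type="carrot_planner::CarrotPlanner" base_class_type="nav_core::BaseGlobalPlanner">
<description>This is a global planner plugin by iroboapp project.</description>
</class>
</library>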

Get ScriptOrigin from v8::Module

It seems trivial, but I've searched far and wide.
I'm using this resource to make v8 run with ES Modules and I'm trying to implement my own search/load algorithm. Thus far, I've managed to make a simple system which loads a file from a known location; however, I'd like to implement external modules. This means that the known location is actually unknown throughout the application. Take the following directory tree as an example:
~/
- index.js
import 'module1_index'; // This is successfully resolved to /libs/module1/module1_index.js
/libs/module1/
- module1_index.js
export * from './lib.js' // This import fails because it is looking for ./lib.js in ~/source
- lib.js
export /* literally anything */
The above example begins by executing the index.js file from ~. When module1_index.js is executed, lib.js is looked for from ~ and consequently fails. In order to address this, the files must be looked for relative to the file currently being executed; however, I have not found a means to do this.
First Attempt
I'm given the opportunity to look for the file in the callResolve method (main.cpp:280):
v8::MaybeLocal<v8::Module> callResolve(v8::Local<v8::Context> context, v8::Local<v8::String> specifier, v8::Local<v8::Module> referrer)
or in loadModule (main.cpp:197)
v8::MaybeLocal<v8::Module> loadModule(char code[], char name[], v8::Local<v8::Context> cx)
however, as mentioned, I have found no function by which to extract the ScriptOrigin from the module. I should mention that when files are successfully resolved, the ScriptOrigin is initialized with the exact path to the file and is reliable.
Second Attempt
I set up a stack, which keeps track of the current file being executed. Every import which is made is pushed onto the stack. Once the file has finished executing, it is popped. This also did not work, as there was no way to reliably determine when the file had finished executing.
It seems that the loadModule function does just that: loads. It does not execute, so I cannot pop after the module has loaded, as the imports are not fully resolved. The checkModule/execModule functions are only invoked on dynamic imports, making them useless for determining the completion of a static import.
I'm at a loss. I'm not familiar enough with v8 to know where to look, although I have dug through some NodeJS source code looking for an implementation, to no avail.
Any pointers are greatly appreciated.
Thanks.
Jake.
I don't know much about module resolution, but looking at V8's sources, I can see an example mapping a v8::Module to a std::string absolute_path, which sounds like what you're looking for. I'm not copying the whole code here, because the way it uses custom metadata is a bit involved; the short story is that it keeps a std::unordered_map to keep data about each module's source on the side. (I wonder if it would be possible to use Module::ScriptId() as that map's key, for simplification.)
Code search finds a bunch more example uses of InstantiateModule, mostly in tests. Tests often serve as useful examples/documentation :-)
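A minimal sketch of that side-table idea, assuming the map is keyed on Module::ScriptId() and that loadModule fills it in right after compiling each module (the callResolve signature is the one from the linked example):
#include <filesystem>
#include <string>
#include <unordered_map>
#include <v8.h>

// Filled in loadModule after compilation:
//   module->ScriptId()  ->  absolute path of the file the module was compiled from.
static std::unordered_map<int, std::string> module_paths;

v8::MaybeLocal<v8::Module> callResolve(v8::Local<v8::Context> context,
                                       v8::Local<v8::String> specifier,
                                       v8::Local<v8::Module> referrer) {
  v8::String::Utf8Value spec(context->GetIsolate(), specifier);
  // Resolve the specifier relative to the directory of the importing file,
  // not relative to the directory the process was started from.
  std::filesystem::path base =
      std::filesystem::path(module_paths[referrer->ScriptId()]).parent_path();
  std::filesystem::path target = (base / *spec).lexically_normal();
  // ... read the file at `target` and hand its source to loadModule as before ...
  return v8::MaybeLocal<v8::Module>(); // placeholder for the loadModule call
}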

RestKit entity mapping for UnitTests does not work

I'm trying to create my RKEntityMapping outside of my UnitTest. The problem I have is it only works if I create it inside my test. For example, this works:
RKEntityMapping *accountListMapping = [RKEntityMapping mappingForEntityForName:@"CustomerListResponse" inManagedObjectStore:_sut.managedObjectStore];
[accountListMapping addAttributeMappingsFromDictionary:@{@"count": @"pageCount",
@"page": @"currentPage",
@"pages": @"pages"}];
While the following does not work. The call to accountListMapping returns exactly what is shown above, using the same managed object store:
RKEntityMapping *accountListMapping = [_sut accountListMapping];
When the RKEntityMapping is created in _sut I get this error:
<unknown>:0: error: -[SBAccountTests testAccountListFetch] : 0x9e9cd10: failed with error:
Error Domain=org.restkit.RestKit.ErrorDomain Code=1007 "Cannot perform a mapping operation
with a nil destination object." UserInfo=0x8c64490 {NSLocalizedDescription=Cannot perform
a mapping operation with a nil destination object.}
I'm assuming the nil destination object it is referring to is destinationObject:nil.
RKMappingTest *maptest = [RKMappingTest testForMapping:accountListMapping
sourceObject:_parsedJSON
destinationObject:nil];
Make sure that the file you have created has target membership of both your main target and your test target. You can find this by:
clicking on the .m file of your class
opening the utilities toolbar (the one on the right)
ticking both targets in the target membership section.
This is because if your class does not have target membership in your test target, the test target actually creates its own copy of the class you have created, meaning it has a different binary file from the main target. That copy ends up using the test target's version of the RestKit binary rather than the main project's RestKit. The isKindOfClass check then fails when RestKit tries to see whether the mapping you passed is of type RKObjectMapping from the main project, because it is of type RKObjectMapping from the test target's version of RestKit, so your mapping doesn't get used and you get your crash.
At least this is my understanding of how the LLVM compiler works. I'm new to iOS dev, so please feel free to correct me if I got something wrong.
This problem may also be caused by duplicated class definitions when RestKit components are included individually for multiple targets using CocoaPods.
For more information on this, have a look at this answer.
I used a category on the mapped object, for example:
RestKitMappings+SomeClass
+ (RKObjectMapping*)responsemappings {
// build and return the RKObjectMapping for SomeClass here
return mappings;
}
Now this category has to be included in the test target as well, otherwise the mapping will not be passed.
When you're running a test you aren't using the entire mapping infrastructure, so RestKit isn't going to create a destination object for you. It's only going to test the mapping itself. So you need to provide all three pieces of information to the test method or it can't work.
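A minimal sketch of providing that destination object yourself, assuming the CustomerListResponse entity and the managed object store from the question:
CustomerListResponse *destination =
    [NSEntityDescription insertNewObjectForEntityForName:@"CustomerListResponse"
                                   inManagedObjectContext:_sut.managedObjectStore.mainQueueManagedObjectContext];
RKMappingTest *maptest = [RKMappingTest testForMapping:accountListMapping
                                          sourceObject:_parsedJSON
                                     destinationObject:destination];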

Coldfusion and SVNKit Log

I am trying to use SVNKit to get a Log of the SVN Entries in Coldfusion. I downloaded the latest SVNKit jar files and threw them in the lib folder under WEB-INF/lib.
Here is my code that should return an array of log entries, but this code is returning a null pointer exception in ColdFusion 9.0.2.
SVNURL = createObject('java','org.tmatesoft.svn.core.SVNURL').parseURIEncoded(svnurl);
drf = createObject("java","org.tmatesoft.svn.core.internal.io.dav.DAVRepositoryFactory");
drf.setup();
rf = drf.create(SVNURL);
SVNWCUtil = createObject("java","org.tmatesoft.svn.core.wc.SVNWCUtil");
authManager = SVNWCUtil.createDefaultAuthenticationManager(user,pass);
rf.setAuthenticationManager(authManager);
log = rf.log(JavaCast("String[]",[]),JavaCast("null",""),JavaCast("long",10),JavaCast("long",15),true,true);
rf.closeSession();
When running this code, I receive the following Error.
The system has attempted to use an undefined value, which usually indicates a programming error, either in your code or some system code.
Null Pointers are another name for undefined values.
Which points to this line..
log = rf.log(JavaCast("String[]",[]),JavaCast("null",""),JavaCast("long",10),JavaCast("long",15),true,true);
I moved this same code over to Railo, and everything is running fine. I just cannot see why ACF is choking on the log() method.
I was using the Printing Out Repository History example on the SVNKit Wiki to start me off.
Any suggestions on getting it to work in Adobe Coldfusion would be greatly appreciated. I did not test on CF10.
I wasn't using the JavaCast("boolean",true) for the last two arguments in the log() function. After that, everything worked fine.
Got to remember to check and use JavaCast()!
log = rf.log(JavaCast("String[]",[]),JavaCast("null",""),JavaCast("long",10),JavaCast("long",15),JavaCast("boolean",true),JavaCast("boolean",true));

How to dynamically build a new protobuf from a set of already defined descriptors?

At my server, we receive Self Described Messages (as defined here... which, by the way, wasn't all that easy as there aren't any 'good' examples of this in C++).
At this point I am having no issue creating messages from these self-described ones. I can take the FileDescriptorSet, go through each FileDescriptorProto, adding each to a DescriptorPool (using BuildFile, which also gives me every defined FileDescriptor).
From here I can create any of the messages which were defined in the FileDescriptorSet with a DynamicMessageFactory instanced with the DP and calling GetPrototype (which is very easy to do as our SelfDescribedMessage required the messages full_name() and thus we can call the FindMessageTypeByName method of the DP, giving us the properly encoded Message Prototype).
The question is how I can take each already defined Descriptor or message and dynamically BUILD a 'master' message that contains all of the defined messages as nested messages. This would primarily be used for saving the current state of the messages. Currently we're handling this by just instancing a type of each message in the server (to keep a central state across different programs). But when we want to 'save off' the current state, we're forced to stream them to disk as defined here. They're streamed one message at a time (with a size prefix). We'd like to have ONE message (one to rule them all) instead of the steady stream of separate messages. This can be used for other things once it is worked out (network-based shared state with optimized and easy serialization).
Since we already have the cross-linked and defined Descriptors, one would think there would be an easy way to build 'new' messages from those already defined ones. So far the solution has eluded us. We've tried creating our own DescriptorProto and adding new fields of the type from our already defined Descriptors but got lost (haven't deep dived into this one yet). We've also looked at possibly adding them as extensions (unknown at this time how to do so). Do we need to create our own DescriptorDatabase (also unknown at this time how to do so)?
Any insights?
Linked example source on BitBucket.
Hopefully this explanation will help.
I am attempting to dynamically build a Message from a set of already defined Messages. The set of already defined messages is created by using the "self-described" method explained (briefly) in the official C++ protobuf tutorial (i.e. these messages are not available in compiled form). This newly defined message will need to be created at runtime.
I have tried using the straight Descriptors for each message and attempted to build a FileDescriptorProto. I have also tried looking at the DescriptorDatabase methods. Both with no luck. I am currently attempting to add these defined messages as an extension to another message (even though at compile time those defined messages and their 'descriptor-set' were not classified as extending anything), which is where the example code starts.
You need a protobuf::DynamicMessageFactory:
{
using namespace google;
protobuf::DynamicMessageFactory dmf;
protobuf::Message* actual_msg = dmf.GetPrototype(some_desc)->New();
const protobuf::Reflection* refl = actual_msg->GetReflection();
const protobuf::FieldDescriptor* fd = some_desc->FindFieldByName("someField");
refl->SetString(actual_msg, fd, "whee");
...
cout << actual_msg->DebugString() << endl;
}
I was able to solve this problem by dynamically creating a .proto file and loading it with an Importer.
The only requirement is for each client to send across its proto file (only needed at init... not during full execution); the server then saves each proto file to a temp directory. An alternative, if possible, is to just point the server to a central location that holds all of the needed proto files.
This was done by first using a DiskSourceTree to map actual path locations to in-program virtual ones, then building the .proto file to import every proto file that was sent across AND define an optional field in a 'master message'.
After the master.proto has been saved to disk, I import it with the Importer. Now, using the Importer's DescriptorPool and a DynamicMessageFactory, I'm able to reliably generate the whole message under one message. I will be putting an example of what I am describing up later on tonight or tomorrow.
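In the meantime, a rough illustration of the idea (file, package, and message names here are all hypothetical):
// master.proto, generated at runtime from the proto files the clients sent across
import "client_a.proto";
import "client_b.proto";
message MasterState
{
optional client_a.SomeMessage a_state = 1;
optional client_b.OtherMessage b_state = 2;
}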
If anyone has any suggestions on how to make this process better or how to do it differently, please say so.
I will be leaving this question unanswered up until the bounty is about to expire just in case someone else has a better solution.
What about serializing all the messages into strings, and making the master message a sequence of (byte) strings, a la
message MessageSet
{
required FileDescriptorSet proto_files = 1;
repeated bytes serialized_sub_message = 2;
}
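A short C++ sketch of how that suggestion could be used, assuming MessageSet has been compiled (or built dynamically as above) and that the FileDescriptorSet and the current state messages already exist in the server:
#include <fstream>
#include <vector>
#include <google/protobuf/message.h>
#include <google/protobuf/descriptor.pb.h>

void SaveState(const google::protobuf::FileDescriptorSet& file_descriptor_set,
               const std::vector<const google::protobuf::Message*>& state_messages) {
  MessageSet set;
  // Carry the self-described schema along with the serialized state.
  set.mutable_proto_files()->CopyFrom(file_descriptor_set);
  for (const google::protobuf::Message* msg : state_messages)
    set.add_serialized_sub_message(msg->SerializeAsString());
  // One message holding everything; write it wherever the state should be saved.
  std::ofstream out("state.bin", std::ios::binary);
  set.SerializeToOstream(&out);
}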