How to deploy a specific CMMN file to a specific process engine in Camunda?

The documentation shows me how to create a ProcessEngineConfiguration and a ProcessEngine, and then it shows me how to modify process instances. I don't know how to manage my process and case definition files with the RepositoryService. Is there any example?

You have several ways of deploying your diagrams to the engine.
The easiest is to have a ProcessApplication on the classpath and a processes.xml file in src/main/resources/META-INF (can be empty). Camunda will then scan your library and deploy all processes on startup.
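For example (a rough sketch; the package, class name, and annotation value are placeholders), a process application for a servlet container can be as small as this, next to an empty META-INF/processes.xml:

    package com.example;

    import org.camunda.bpm.application.ProcessApplication;
    import org.camunda.bpm.application.impl.ServletProcessApplication;

    // On startup, Camunda scans this application's classpath and deploys
    // every *.bpmn, *.cmmn and *.dmn resource it finds.
    @ProcessApplication("my-process-app")
    public class MyProcessApplication extends ServletProcessApplication {
        // intentionally empty
    }

In an embedded or Spring setup you would extend EmbeddedProcessApplication or SpringProcessApplication instead.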
The second option, though I personally would not advise it, is to use the engine-spring module and activate auto-deployment.
And as a third option, you can still deploy manually by using either repositoryService.createDeployment().addClasspathResource(...).deploy() or the REST API.
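A minimal sketch of the manual variant (the resource path and deployment name are made up; it assumes a default engine configured via camunda.cfg.xml on the classpath):

    import org.camunda.bpm.engine.ProcessEngine;
    import org.camunda.bpm.engine.ProcessEngines;
    import org.camunda.bpm.engine.RepositoryService;
    import org.camunda.bpm.engine.repository.Deployment;

    public class ManualCmmnDeployment {

        public static void main(String[] args) {
            // Look up the default engine, then get the RepositoryService.
            ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();
            RepositoryService repositoryService = engine.getRepositoryService();

            // Deploy a single CMMN file from the classpath.
            Deployment deployment = repositoryService
                    .createDeployment()
                    .name("manual-cmmn-deployment")
                    .addClasspathResource("cases/loan-approval.cmmn")
                    .enableDuplicateFiltering(true) // skip if the resource is unchanged
                    .deploy();

            System.out.println("Deployed: " + deployment.getId());
        }
    }

The same builder also offers addInputStream(...) and addString(...) if the file does not live on the classpath.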

Related

I'm learning Camunda. How can I save a process XML document into act_ge_bytearray without deploying?

I want to save my process definition. But so far, I only know that my process files will be saved when deploying.
But I just want to save without deploying.
How can I do this?
Your requirement is not fully clear. If you just want to save the file without deploying it, you can save it to a location of your choice from the Desktop Modeler, or download it from the Web Modeler, both without deploying.
There is no need to store this file in the Camunda runtime if you do not want to deploy and use it. Feel free to store it, e.g., in Git.
You should not write to the Camunda tables directly, circumventing the API, as Jan already pointed out.
You can use the Camunda Web Modeler to manage and store process models and other artifacts without deploying them. For Camunda 7 you can use Cawemo.com; for Camunda 8 you can use the SaaS version of the Web Modeler available under camunda.io, or a beta version of the Web Modeler for self-managed installations (https://docs.camunda.io/docs/self-managed/web-modeler/installation/).
It is just an XML file, so save it wherever you want. Typically, it will be stored in a Git repository, because the model is a versioned part of your codebase.

How to best deploy SNMP in existing application?

I have an existing Windows desktop application written in C++ that needs to add support for SNMP so that a few pieces of status information are available on some SNMP OIDs. I found the net-snmp project and have been trying to understand how this can best fit into the existing program.
Questions:
Do I need to run snmpd, or can I just integrate the agent code into my application? I would prefer that starting my application does everything necessary rather than worry about deploying and running multiple processes, but the documentation doesn't speak much about doing this. The net-snmp agent daemon tutorial has an option for running the sample code as the full-agent rather than sub-agent, but I'm not sure about any limitations of doing this.
What would the pros/cons be of running a full agent in my application vs. using snmpd and putting a subagent in my application? Is there a third option I should also consider?
If I can integrate the full agent into the existing program, how do I pass it a configuration file via the API? Can I avoid the config file altogether by passing these parameters in via function calls instead?

Cloud Foundry triggers if application was created

Is there a possibility that Cloud Foundry triggers a function if a new application is pushed to the platform?
I would like to trigger some internal functions, like registration on the API gateway. I know that I can pull the information from the events API https://apidocs.cloudfoundry.org/224/events/list_all_events.html. But is it also possible via push?
The closest thing I can think of to what you're asking is the profile script.
https://docs.cloudfoundry.org/devguide/deploy-apps/deploy-app.html#profile
The note about the Java buildpack not supporting .profile scripts is incorrect. It's a platform feature, so all buildpacks support them. The difference with Java apps is that you're probably pushing a JAR or WAR file, so it's harder to make sure the file is placed in the correct location. The location of the file is everything.
When your application starts, the platform will first run the .profile script packaged with your application, if it exists. It's a standard shell script and you can do whatever you like in this file.
The only caveat is that your application will not start until this script completes successfully (i.e. exit 0). Thus you have a limited amount of time for that script to run and your application to start. How much time, you ask? That is configured by cf push -t and is in seconds. You can also set it in your manifest.yml with the timeout attribute.
Time (in seconds) allowed to elapse between starting up an app and the first healthy response from the app
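For illustration only (the gateway URL, payload, and app name are placeholders, not something the platform provides), a .profile next to your app plus a manifest.yml excerpt that raises the timeout might look like this:

    # .profile -- run by the platform in the app container before the
    # application's start command. The registration endpoint below is
    # hypothetical; exit status 0 lets the app start either way.
    curl -sf -X POST "https://gateway.example.com/register" \
        -d '{"app":"my-app"}' \
        || echo ".profile: registration failed, continuing"

    # manifest.yml (excerpt) -- allow more time for .profile plus app startup
    applications:
    - name: my-app
      timeout: 180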
This is also something that each application needs to include. I suppose you could also use a custom buildpack to add that file, if you wanted to have it added across multiple applications. There's no easy way to add it for all apps though.
Hope that helps!

Run a command on Cloud Foundry in its own container

As you can see in the official Cloud Foundry docs:
https://docs.cloudfoundry.org/devguide/using-tasks.html
A task is an application or script whose code is included as part of a deployed application, but runs independently in its own container.
I'd like to know if there's a way to run commands and manipulate files directly in the main container of an application without using an SSH connection or the manifest file.
Thanks
No. Tasks run in their own container, so they cannot affect other running tasks or running application instances. That's the designed behavior.
If you need to make changes to your application, you should look at one of the following:
Use a .profile script. This will let you execute operations prior to your application starting up. It runs for every application instance that is started (I do not believe it runs for tasks) so the operation will be consistently applied.
While not recommended, you can technically background a process from the .profile script that keeps running for the lifetime of your app (see the sketch after these options). This is not recommended because nothing monitors that process, and if it crashes it won't cause your app container to restart.
Integrate with your buildpack. Some buildpacks, like the PHP buildpack, provide hooks for you to integrate and add behavior to staging. For other buildpacks, you can fork the buildpack and make it do whatever you want. This includes changing the command that's returned by the buildpack, which tells the platform how to execute your droplet and ultimately what runs in the container.
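A rough sketch of that (not recommended) backgrounding trick, where my-helper.sh stands in for whatever you want to keep running:

    # .profile -- backgrounds a helper alongside the app instance.
    # Nothing monitors this process; if it crashes, the app container
    # is NOT restarted.
    nohup ./my-helper.sh > helper.log 2>&1 &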
You can technically modify a running app instance with cf ssh, but I wouldn't recommend it. It's something you should really only do for troubleshooting or maybe testing. Definitely not production apps. If you feel like you need to modify a running app instance for some reason, I'd suggest looking at the reasons why and look for alternative ways of accomplishing your goals.

Where is Appropriate to Put AWS Keys

I'm learning about Strongloop, it's pretty good so far.
Question: What is the appropriate place to put AWS keys? config.json? And how would I access them from my application?
Thanks
Ideally you would not put those credentials in any file that is committed. I usually find environment variables to be the best balance of convenience and security.
If you are using strong-pm, then you would do this with slc ctl env-set. If you are using some other supervisor, then you'll need to consult its docs.
A lot of times it is enough to use Upstart or systemd directly, which both make it fairly easy to set environment variables in the service process.
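For example (the service name and key values are placeholders), with strong-pm you set the variables on the deployed service, and with systemd you put them in the unit file; your application then reads them from process.env (e.g. process.env.AWS_ACCESS_KEY_ID), or the AWS SDK picks up the standard variable names automatically:

    # strong-pm: set environment variables for a deployed service
    slc ctl env-set my-app AWS_ACCESS_KEY_ID=AKIA... AWS_SECRET_ACCESS_KEY=...

    # systemd: equivalent settings in the service unit (excerpt)
    [Service]
    Environment=AWS_ACCESS_KEY_ID=AKIA...
    Environment=AWS_SECRET_ACCESS_KEY=...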
In addition to the answer above, you can handle these keys in your release procedure.
In our product, all such entries are kept in a config file that is deployed from a shared folder.
Let me elaborate.
We keep local config files in Git, and separate config files on the production servers in a folder named "shared". Whenever a tagged release is deployed from Git, the files from the shared folder overwrite these config files.