I'm learning Camunda. How do I save a process XML document into act_ge_bytearray without deploying?

I want to save my process definition. But so far, I only know that my process files will be saved when deploying.
But I just want to save without deploying.
How can I do this?

Your requirement is not fully clear. If you just want to save the file without deploying it, you can save it to a location of your choice from the Desktop Modeler, or download it from the Web Modeler; neither requires a deployment.
There is no need to store this file in the Camunda runtime if you do not want to deploy and use it. Feel free to store it e.g. in Git.
You should not write to the Camunda tables directly, circumventing the API, as Jan already pointed out.
You can use the Camunda Web Modeler to manage and store process models and other artifacts without deploying them. For Camunda 7 you can use Cawemo.com; for Camunda 8 you can use the SaaS version of the Web Modeler available under camunda.io, or a beta version of the Web Modeler for self-managed installations (https://docs.camunda.io/docs/self-managed/web-modeler/installation/).

It is just an XML file ... just save it wherever you want. Typically, it will be stored in a Git repository, because the model is a versioned part of your codebase.

Related

How to deploy a specific CMMN file to a specific process engine in Camunda?

The document shows me how to create ProcessEngineConfiguration and ProcessEngine, and then it shows me how to modify process instances. I don't know how to manage my process and case definition files with RepositoryService. Is there any example?
You have several ways of deploying your diagrams to the engine.
The easiest is to have a ProcessApplication on the classpath and a processes.xml file in src/main/resources/META-INF (can be empty). Camunda will then scan your library and deploy all processes on startup.
The second option, though I personally would not advise it, is to use the engine-spring module and activate auto-deployment.
And as a third option, you can still deploy manually, either by calling repositoryService.createDeployment().addClasspathResource(...).deploy() or by using the REST API.
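For the REST API route, here is a minimal Python sketch against the Camunda 7 endpoint POST /deployment/create; the engine URL, deployment name, and file name are assumptions for illustration, so adjust them to your setup:

import requests

ENGINE_REST = "http://localhost:8080/engine-rest"  # default local Camunda 7 REST endpoint (adjust as needed)

with open("process.bpmn", "rb") as bpmn_file:
    response = requests.post(
        f"{ENGINE_REST}/deployment/create",
        data={
            "deployment-name": "my-deployment",    # arbitrary name for the deployment
            "enable-duplicate-filtering": "true",  # skip creating a new deployment if nothing changed
        },
        files={"process.bpmn": bpmn_file},         # one multipart part per resource to deploy
    )

response.raise_for_status()
print(response.json()["id"])  # id of the newly created deployment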

Get the names of artifacts inside a group from Nexus repository

I want to get all the artifact names under a particular group in my Nexus repository.
I have tried the Lucene web API for this, using a URL like:
http://localhost:8080/nexus/service/local/lucene/search?g=my.group.name
But in the XML response I see that the artifacts are listed from the index section, which also contains deleted artifacts. I don't want the deleted artifact names.
How can I achieve this? Is there any web API that supports this?
You can write a small plugin to retrieve the GAVs. The "crawling" example from this location does pretty much what you need already:
https://github.com/sonatype/nexus-example-plugins/
The ArtifactDiscoveryListener.java is called for every GAV in a repository. The plugin contributes a scheduled task, so it's easy to run.
You can find more information about developing plugins here:
https://books.sonatype.com/nexus-book/reference/plugdev.html

wagtail cms content deploy to production

I am studying the popular Django CMS framework Wagtail and have come to a question: how do you deploy your developed content, like pages/documents/images, to production environments?
I am puzzled because this content (like pages) is saved in the database; essentially it is just database table rows rather than resources in a Git repo. So if I develop a simple web site in my dev environment, deploying it to prod is not as simple as a git push. What is the best practice for this?
I read some code from Torchbox; there are some database dump and record-pulling tasks using Fabric, but I am not sure if that is the preferred way, and I cannot fully understand them.
Or, for a production site, is it assumed that everyone adds content there and prod is the source of truth, so there is no need for "content deployment" at all, only schema changes via South migrations and other static resources?
Please help if anyone has experience with this and can provide guidance.
Thanks
On our (Torchbox) sites, all content entry usually happens on the production site, so we don't need to push any database content as part of our regular deployments. Many of our sites have tens or even hundreds of editors, so it would be almost impossible to synchronise the content across multiple installations of the site.
Whenever we need to transfer content from one installation to another (for example, deploying the production site for the first time, or pulling a snapshot of the live site to help with development), we use the PostgreSQL pg_dump command to make a SQL dump of the complete database, then restore it at the destination using the psql command. Tools like Fabric can be used to automate this, but this isn't essential.
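A minimal sketch of that dump-and-restore step, using plain Python subprocess calls rather than Fabric (the connection string, database name, and file paths are placeholders):

import subprocess

# Dump the complete source database to a SQL file with pg_dump ...
subprocess.run(
    ["pg_dump", "--no-owner",
     "--dbname=postgresql://deploy@prod-host/mysite",  # placeholder connection string
     "-f", "mysite.sql"],
    check=True,
)

# ... then restore it at the destination with psql.
subprocess.run(["createdb", "mysite"], check=True)  # create an empty target database first
subprocess.run(["psql", "--dbname=mysite", "-f", "mysite.sql"], check=True)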

Good way to deploy a django app with an asynchronous script running outside of the app

I am building a small financial web app with django. The app requires that the database has a complete history of prices, regardless of whether someone is currently using the app. These prices are freely available online.
The way I am currently handling this is by simultaneously running a separate Python script (outside of Django) which downloads the price data and records it in the Django database using the sqlite3 module.
My plan for deployment is to run the app on an AWS EC2 instance, change the permissions of the folder where the db file resides, and separately run the download script.
Is this a good way to deploy this sort of app? What are the downsides?
Is there a better way to handle the asynchronous downloads and the deployment? (PythonAnywhere?)
You can write the daemon code and follow this approach to push data to the DB as soon as you get it from the Internet. Since your daemon would be running independently of Django, you would need to take care of data-synchronisation issues as well. One possible solution could be to use a DateTimeField in your Django model with auto_now_add=True, which will give you an idea of when the data was entered into the DB. Hope this helps you or someone else looking for a similar answer.
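A minimal sketch of that idea; the project, app, and model names (mysite, prices, Price) are made up for illustration:

# models.py -- price model with an automatic entry timestamp
from django.db import models

class Price(models.Model):
    symbol = models.CharField(max_length=16)
    value = models.DecimalField(max_digits=12, decimal_places=4)
    # auto_now_add stamps the row when it is inserted, so both the daemon and
    # the web app can tell how fresh each data point is.
    created_at = models.DateTimeField(auto_now_add=True)

# downloader.py -- standalone daemon script run outside the web process
import os
from decimal import Decimal

import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
django.setup()  # lets the script use the Django ORM instead of raw sqlite3

from prices.models import Price

Price.objects.create(symbol="AAPL", value=Decimal("123.4500"))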

Automated deployment of DSS datasource configuration

We have a "mavenized" project with several containers (wso2esb, wso2dss, tomcat) and many components to deploy to them.
We are trying to find a way to deploy the datasource configuration for all our DSS services but I notice it is stored in its own DB (H2).
Do you know if there is any way to declare something like an XML file in order to create the datasource in the DSS in an automated way?
I looked at the documentation but did not find anything useful for automatic deployment (meaning without using the admin pages).
Yeah, you can use the Carbon data source configuration file, datasources.properties, to provide this information. This file should be located at $SERVER_ROOT/repository/conf.
A sample of this configuration file can be found in the BPS sources.
After the data sources are defined this way, you can use them via the data source type "carbon data source" from data services.
You can easily deploy artifacts with the hot deployment functionality in WSO2 Servers by simply copying them to a specific directory in the server.
For the Data Services Server you can copy the .dbs files (in your case with the help of Maven) to the $WSO2DSS_HOME/repository/deployment/server/dataservices directory. Similarly, for BPELs it's $WSO2BPS_HOME/repository/deployment/server/bpel.
For CAR files created with Carbon Studio, it's $WSO2CARBON_HOME/repository/deployment/server/carbonapps.
For ESB configs, it's $WSO2ESB_HOME/repository/deployment/server/synapse-configs.
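For example, a minimal sketch of that copy step for a data service artifact (the DSS install path and the .dbs file name are placeholders; in a real build the same copy is typically wired into Maven):

import shutil

DSS_HOME = "/opt/wso2dss"  # placeholder install location

# Dropping the .dbs file into the hot-deployment directory is enough for the
# Data Services Server to pick it up and deploy it automatically.
shutil.copy("target/MyDataService.dbs",
            f"{DSS_HOME}/repository/deployment/server/dataservices/")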