Controlling Version of Deployed Camunda BPM

Every time I modify and deploy a process, the version number increases. I understand why it increases, but is there a way to force a predefined version so that deployments only override that version? The reason is that even for small bug fixes, I don't want the version to change.

Are you talking about production or development?
In dev, you can configure processes.xml so that all instances and old versions of the process are removed:
<process-archive>
  <properties>
    <property name="isDeleteUponUndeploy">true</property>
  </properties>
</process-archive>
On production, you would not want to delete running or completed instances. You might want to migrate running instances to the next version, but that is not generic; it depends on the process and the changes made. Make sure to read the process versioning / version migration section of the user guide.
A third approach would be to work with calls to services (expressions/delegates/listeners) instead of hard-coding values in the BPMN model. If, for example, you write "${price > 500}" on an exclusive gateway flow, you will get a new process version when you deploy a "fix" with the value "1000". If you design your process application so that it calls "${myPriceCalculator.limitExceeded(price)}" instead, you can deploy a new WAR while the process remains untouched.
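A minimal sketch of that idea, assuming a hypothetical bean registered under the name "myPriceCalculator" (e.g. as a Spring bean) so the engine's expression language can resolve it:

```java
// Hypothetical bean backing the expression ${myPriceCalculator.limitExceeded(price)}.
// The threshold lives in code (or external configuration), not in the BPMN model,
// so changing it does not create a new process definition version.
public class MyPriceCalculator {

    private long limit = 500L;

    // Called from the gateway's flow condition instead of a hard-coded comparison.
    public boolean limitExceeded(long price) {
        return price > limit;
    }

    // A "bug fix" now means changing this value and redeploying the WAR,
    // leaving the deployed process definition untouched.
    public void setLimit(long limit) {
        this.limit = limit;
    }
}
```

The bean name and method signature here are illustrative; the point is that the gateway condition delegates to code that can change independently of the model.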

No, this does not work. You can only deploy a new version and delete the old one.

The Camunda REST API will let you deploy and delete deployment versions. You just have to pass the deployment id.
If you are running a separate Camunda process engine (server), the REST endpoint to delete a deployment is:
http://localhost:8080/engine-rest/deployment/fa9af59a-382b-11ea-96d8-5edcd02b4f71
or, if your Camunda process engine is integrated into a Spring Boot application, the URL is:
http://localhost:8080/rest/deployment/fa9af59a-382b-11ea-96d8-5edcd02b4f71
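For example, with curl (the deployment id is the one from the URLs above; the cascade=true parameter additionally removes process instances belonging to the deployment):

```shell
# DELETE the deployment; cascade=true also removes its process instances
# (requires a running engine at this address).
curl -X DELETE \
  "http://localhost:8080/engine-rest/deployment/fa9af59a-382b-11ea-96d8-5edcd02b4f71?cascade=true"
```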
Or
You will have a processes.xml file in the resources folder of your application. You can set isDeleteUponUndeploy to true, so that on every undeployment of the workflow, your workflow data is deleted:
<process-archive>
  <properties>
    <property name="isDeleteUponUndeploy">true</property>
  </properties>
</process-archive>
Or
You can also delete from the Camunda UI; the link is: http://localhost:8080/app/cockpit/default/#/dashboard
Go to Deployments, select your deployed version, and click on delete version.

TEIID Springboot integration

I read about Teiid and liked it, but because it has changed a lot, I initially had some problems dealing with it. What I understand now is the following:
1- Teiid on WildFly and Thorntail are going to be obsolete, so I will not use them.
2- Teiid Spring Boot and OpenShift are the most active projects now, so I chose one of them: Spring Boot.
My reasons for using Teiid on Spring Boot are the following:
1- Integrate my different schemas in a microservice architecture pattern, to solve the problem of data integrity across all services.
2- Create a standalone data virtualization layer (data warehouse) over my internal database systems, to be used for reporting.
For the reporting system, I created a DDL VDB file and was able to deploy the Spring Boot application with JDBC enabled, using the existing simple-java-client to connect to it. But when I tried to use Apache Superset to build my reports in a BI application, with ODBC enabled via the Postgres protocol, I always get the error below:
Connection failed (psycopg2.OperationalError) TEIID30528 javax.transaction.SystemException: The system is only setup for spring managed transactions. If you need Teiid to manage transactions, then a third-party transaction manager like narayana-spring-boot-starter needs to be configured.
DETAIL: org.teiid.jdbc.TeiidSQLException: TEIID30528 javax.transaction.SystemException: The system is only setup for spring managed transactions. If you need Teiid to manage transactions, then a third-party transaction manager like narayana-spring-boot-starter needs to be configured.
I tried to integrate with Narayana, but I couldn't configure it.
Sorry for this long question, but I need to know:
1- Is the above approach good, or should I try other things?
2- Does the above error have a fix, or is there another suggested BI tool I can use with Teiid?
Thanks in advance.
From what I can say (https://teiid.io/blog/post-2020-3), Teiid won't be obsolete on WildFly. Teiid currently supports both WildFly and Spring Boot, as can be found on their pages. If you are looking for Java EE components and the best integration with other JBoss projects (e.g. Narayana), then you could check the WildFly version of Teiid. It is true that Thorntail development efforts are decreasing in favor of Quarkus, and it seems (as the post above mentions) that Teiid is considering Quarkus support. But WildFly is(!) still supported.
On your question about setting up a third-party transaction manager for Spring Boot, with Narayana in particular, you should check the information from the Snowdrop project and the README of the Narayana Spring Boot integration.
The only thing that should be needed to configure the Narayana starter is to add it to your pom.xml:
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-jta-narayana</artifactId>
</dependency>
If there is a need to configure Narayana further, the configuration properties start with spring.jta.narayana.
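For example, in application.properties (these property names come from the Spring Boot Narayana support; the values here are purely illustrative):

```properties
# Directory for the transaction logs (generic JTA property)
spring.jta.log-dir=target/transaction-logs
# Narayana-specific settings
spring.jta.narayana.transaction-manager-id=1
spring.jta.narayana.default-timeout=60
```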
Thanks a lot for your help. By adding the dependency below to the POM file:
<dependency>
  <groupId>me.snowdrop</groupId>
  <artifactId>narayana-spring-boot-starter</artifactId>
  <version>2.1.0</version>
</dependency>
and adding narayana.dbcp.enabled=true and spring.jta.enabled=true to application.properties, I was able to build some graphs using Apache Superset.
Also, as long as WildFly is still supported, I will go with the WildFly solution, as it is more suitable for hosting multiple VDBs in a single container.
Thanks a lot.

Create a new GCP project from existing

I created a project on GCP. It has a Postgres database, a Node.js App Engine web app, and some other stuff. Now I am developing the app, and when everything is set up and running nicely I'd like to clone this project somehow and create a staging and a production environment/project.
So my project is now called dev-awesomeapp. Can I somehow make a staging-awesomeapp for staging and an awesomeapp for production from my existing dev-awesomeapp?
Edit: there is another question from 2017 that asks the same thing, but maybe it's possible now, 2.5 years later?
You can't, but if you don't want to configure everything from the beginning each time, you can use "infrastructure as code" with tools like Deployment Manager or Terraform.
This can help you replicate your infrastructure; moreover, it can be really helpful for automating any architectural changes if you use it in a CI/CD pipeline, making your release phase quicker and more reliable :)
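As a minimal Terraform sketch, one project per environment could look like this (project names match the question; org_id is a placeholder):

```hcl
# One google_project resource per environment; the same module/config
# can be reused so staging and production stay identical to dev.
resource "google_project" "staging" {
  name       = "staging-awesomeapp"
  project_id = "staging-awesomeapp"
  org_id     = "123456789" # placeholder
}

resource "google_project" "production" {
  name       = "awesomeapp"
  project_id = "awesomeapp"
  org_id     = "123456789" # placeholder
}
```

The database, App Engine app, and other resources would be declared the same way, so each environment is created from the same definitions rather than cloned.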

Revert failed cloud foundry deploy

I'm automating app deployment to Cloud Foundry, and in the start command I run a DB migration. What can happen is that the migration fails, and as a result the app is dead. Is there some predefined strategy that can be used to roll back to the last working deployment, or should I manually store the last working version, check for failure, and in that case redeploy the stored version?
The typical strategy used to deploy apps on Cloud Foundry is blue/green. This generally works like this:
Push the new app under a new name & host, like my-app-new.
Test the app & make sure it works.
When you're satisfied, change the route mapping from the old app to the new app.
Delete the old app & optionally rename the new app.
Step #3 is where the cut-over happens. Prior to that all traffic keeps flowing to the old app.
This is documented more here.
https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html
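Assuming hypothetical app names my-app (old) and my-app-new on a shared domain example.com, the steps above map to cf CLI commands roughly like this:

```shell
# 1. Push the new version under a temporary name/host
cf push my-app-new

# 2. Test my-app-new on its temporary route, then cut over:
# 3. Map the production route to the new app, unmap it from the old one
cf map-route my-app-new example.com --hostname my-app
cf unmap-route my-app example.com --hostname my-app

# 4. Delete the old app and optionally take over its name
cf delete my-app
cf rename my-app-new my-app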
I'd say this often works well, but sometimes there are problems. Where this breaks down is with steps #1 and #2: if your app cannot have multiple instances of itself running, or if migrations to your service are so different that the old app breaks when you update the database. It definitely helps if you keep this strategy in mind as you develop your app.
Aside from that, which has historically been the way to go, you could take a look at the new v3 API functionality. With v3, apps now retain multiple versions of a droplet. With this, you can rollback to a previous version of a droplet.
http://v3-apidocs.cloudfoundry.org/version/3.36.0/index.html#droplets
You can run cf v3-droplets to see the available droplets and cf v3-set-droplet to change the droplet being used.
That said, this will only rollback the droplet. It would not rollback a service like a database schema. If you need to do that, you'd need reverse migrations or perhaps even restore from a backup.
Hope that helps!
I work on very similar automation processes.
Daniel has explained the process very well. I think you're looking for the blue-green deployment methodology
1) Read up on blue green deploy here:
https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html
2) Look at this plugin or implement blue green deploy manually:
https://github.com/contraband/autopilot
3) Blue-green restage plugin (a nice to have, in case you need to restage the app but not cause any downtime to the clients):
https://github.com/orange-cloudfoundry/cf-plugin-bg-restage
It works by creating a temporary app and copying the env vars/routes/code from the working app to the temp app.
The temp app accepts traffic while the original app is being restaged.
The traffic then moves back to the original app after it is restaged, and the temporary app is deleted.

Deploying Web job with appropriate environments variable

We are trying to deploy a WebJob via Octopus. We have different Event Hub keys saved in the variables, and we expect the WebJob to pick up the right key depending on the environment it is being deployed to. Has anyone done this before? Any advice on setting up configurations in Octopus?
<========== UPDATE ===========>
We were being careless and didn't set up our Octopus process to transform the configuration variables. You should be able to do so by clicking 'Configure variables' in the process step.
I don't think it being deployed via Octopus is all that relevant here. Generally, a .NET WebJob is able to access Azure App Setting using standard configuration API.
If that is not working for you, please update your question to clarify what you tried, and specifically what didn't work.

How to handle DB migration using AWS deployment tools

Amazon Web Services offer a number of continuous deployment and management tools such as Elastic Beanstalk, OpsWorks, Cloud Formation and Code Deploy depending on your needs. The basic idea being to facilitate code deployment and upgrade with zero downtime. They also help manage best architectural practice using AWS resources.
For simplicity, let's assume a basic architecture with a two-tier structure: a collection of application servers behind a load balancer, and then a persistence layer using a multi-AZ RDS DB.
The actual code upgrade across a fleet of instances (app servers) is easy to understand. As a very simplistic overview, the AWS service upgrades each node in turn, handing connections off so the instance in question is not being used.
However, I can't understand how DB upgrades are managed. Assume that we are going from version 1.0.0 to 2.0.0 of an application and that there is a requirement to change the DB structure. Normally you would use a script or a library like Flyway to perform the upgrade. However, if there is a fleet of servers to upgrade there is a point where both 1.0.0 and 2.0.0 applications exist across the fleet each requiring a different DB structure.
I need to understand how this is actually achieved (high level) to know what the best way/time of performing the DB migration is. I guess there are a couple of ways they could be achieving this but I am struggling to see how they can do it and allow both 1.0.0 and 2.0.0 to persist data without loss.
Perhaps they migrate the DB structure with the first app node upgrade and at the same time create a cached version of the 1.0.0 structure. Users connected to the 1.0.0 app persist using the cached version of the DB, and users connected to the 2.0.0 app persist to the newly migrated DB. Once all the app nodes are migrated, the cached data is merged into the DB.
It seems unlikely they can do this as the merge would be pretty complex but I can't see another way. Any pointers/help would be appreciated.
This is a common problem to encounter once your application infrastructure gets into multiple application nodes. In the olden days, you could take your application offline for "maintenance windows" during which you could:
Replace application with a "System Maintenance, back soon" page.
Perform database migrations (schema and/or data)
Deploy new application code
Put application back online
In 2015, and really for several years before that, this approach has not been acceptable. Your users expect 24/7 operation, so there must be a better way. Of course there is: the answer is a series of patterns for database refactoring.
The basic concept to always keep in mind is to assume you have to maintain two concurrent versions of your application, and there can be no breaking changes between these two versions. This means that you have a current application (v1.0.0) currently in production and (v2.0.0) that is scheduled to be deployed. Both these versions must work on the same schema. Once v2.0.0 is fully deployed across all application servers, you can then develop v3.0.0 that allows you to complete any final database changes.
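As an illustrative sketch of this expand/contract pattern, assume a hypothetical users table where v2.0.0 wants to split a single name column into first_name and last_name:

```sql
-- Deployed alongside v2.0.0: purely additive, so v1.0.0 keeps working.
ALTER TABLE users ADD COLUMN first_name VARCHAR(100);
ALTER TABLE users ADD COLUMN last_name VARCHAR(100);

-- Backfill the new columns from the old one so v2.0.0 can read them;
-- v1.0.0 continues writing only to "name" in the meantime.
-- (String-splitting syntax varies by database; shown schematically.)
UPDATE users SET first_name = /* first part of name */ name,
                 last_name  = /* second part of name */ name;

-- Only once v2.0.0 is on every node, i.e. alongside v3.0.0: contract.
ALTER TABLE users DROP COLUMN name;
```

During the mixed-fleet window, both versions work against the expanded schema; the breaking removal is deferred to the release after the one that stops using the old column.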