How to update Camunda DMN table at runtime?

I have a DMN table created with a few rules and deployed to Camunda.
How do we update a DMN table programmatically at runtime and add a new rule once it is already deployed?

When you change the DMN table but keep the decision key, a deployment will create a new revision of the table.
So yes, you can update DMN tables at runtime.
You can do so using either the REST API or the Java API.
The Java API relies on the RepositoryService#createDeployment builder. The concrete implementation depends on where your files are stored and how you read them. Here is an example:
// "resourceName" must end with .dmn so the engine treats the resource as a
// decision; "instanceAsString" holds the updated DMN XML.
Deployment deployment = repositoryService.createDeployment()
    .addString(resourceName, instanceAsString)
    .deploy();
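For the REST API, here is a minimal sketch in Python (the engine-rest endpoint, deployment name, and my-decision.dmn file are placeholders):

import requests

# Re-deploying the changed DMN file creates a new version of the decision,
# as long as the decision key inside the file stays the same.
with open("my-decision.dmn", "rb") as dmn_file:  # placeholder file name
    response = requests.post(
        "http://localhost:8080/engine-rest/deployment/create",  # placeholder host
        data={
            "deployment-name": "my-dmn-deployment",
            # Skip the deployment if the resource did not actually change:
            "deploy-changed-only": "true",
        },
        files={"my-decision.dmn": dmn_file},
    )

response.raise_for_status()
print(response.json()["id"])  # id of the new deployment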

Related

aws amplify insert data on creation

I have the following question.
I have a table called Networkoperator in my app.
This table should be filled on creation. I have a JSON file with the data that I need to insert.
How is this achievable?
I was thinking of using the CDK to trigger a Lambda only on creation.
The Amplify documentation has a plugins page that lists a bunch of 3rd party plugins. There are two plugins that help seed DB data, but they both appear a bit out of date.
I'd be inclined to make a custom category (amplify add custom) and make it a dependency of the database table (ideally created and managed by Amplify). Then you can do anything you like in the 'build' step of the custom category's package.json. The aws-cdk-dynamodb-seeder package looks like it would do the trick (but was also published 2 years ago). Within the custom category, you can pull in the DB table name, etc, to avoid hardcoding.
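If you go the Lambda route instead, here is a rough sketch of a seeding handler in Python with boto3 (the table name, key schema, and seed-data.json file are assumptions for illustration):

import json
import os

import boto3

# The table name would normally be injected via the environment; the
# fallback here is a placeholder.
TABLE_NAME = os.environ.get("TABLE_NAME", "Networkoperator")

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)

def handler(event, context):
    # seed-data.json is bundled with the Lambda; each entry must match the
    # table's key schema.
    with open("seed-data.json") as f:
        items = json.load(f)

    # batch_writer batches the writes and retries unprocessed items.
    with table.batch_writer() as batch:
        for item in items:
            batch.put_item(Item=item)

    return {"seeded": len(items)}

One way to run this only on creation is to wire the Lambda up as a CloudFormation custom resource, so it fires on the stack's Create event.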

Build versioning for django

I would like to build versioning for my Django models. I have already tried django-reversion, but I don't think it fits my use case, and I have problems with many-to-many through models and multi-table inheritance.
My use case:
I have a multi-tenant web app. Every tenant has their own pool of resources. The tenants can create documents that reference the resources.
Here is a simple diagram:
Now, on every update of the document, or of a resource referenced in the document, I want to create a version of the document.
The version should show all changes to the document and the referenced resources.
But on reverting to a version, only the direct values of the document should be reverted, not the resources.
For example, a document:
Now I edit the document and delete the resource_1 with the id 1. I also change the name of the resource_1 with the id 2.
When I revert this document to the first version, it should look like this:
But how can I achieve this?
I think I can use MongoDB to store a complete version of the document as serialized JSON data on every update, and create a signal for the resources so that when one changes, I can check the corresponding documents and create a version of the document as well.
Then I would create a new collection in MongoDB for each tenant.
The resources themselves should also be versioned, as should other models in my web app, but that is not important for this example.
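A rough sketch of the signal part in Python/Django (the Document and Resource models, their relations, and the pymongo wiring are all assumptions for illustration):

from django.core import serializers
from django.db.models.signals import post_save
from django.dispatch import receiver
from pymongo import MongoClient

from myapp.models import Document, Resource  # hypothetical models

mongo = MongoClient()  # connection details omitted

def store_version(document):
    # Snapshot the document together with its current resources and store
    # it in the tenant's own collection.
    snapshot = {
        "document": serializers.serialize("json", [document]),
        "resources": serializers.serialize("json", document.resources.all()),
    }
    mongo["versions"][f"tenant_{document.tenant_id}"].insert_one(snapshot)

@receiver(post_save, sender=Document)
def document_saved(sender, instance, **kwargs):
    store_version(instance)

@receiver(post_save, sender=Resource)
def resource_saved(sender, instance, **kwargs):
    # A resource change also creates a version of every document that
    # references it.
    for document in instance.documents.all():
        store_version(document)

Reverting then means deserializing only the "document" part of a snapshot and saving it, which leaves the resources untouched.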

Translation of a text column in BigQuery

I have a table in BigQuery containing consumers' reviews, some of which are in local languages, and I need to use a translation API to translate them and add a new column to the existing table holding the translated reviews. I was wondering whether I can automate this task, e.g. using the Google Translate API in BigQuery...
An alternative solution: if the customer reviews contain only a limited set of comments, you can create a BigQuery function that replaces values.
Sample code is available in a GitHub repository.
If you want to use an external API in BigQuery, like a Language Translation API, you can use Remote Functions (a recent release).
In this GitHub repo you can see how to wrap the Azure Translator API (the same way you can use the Google Translate API) into a SQL function and use it in your queries.
Once you have created the translation SQL function, you can write an UPDATE statement (and run it periodically using scheduled queries) to achieve what you want:
UPDATE mytable
SET translated_review_text = translation_function(review_text)
WHERE translated_review_text IS NULL
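Under the hood, a remote function forwards batched rows to a Cloud Function (or Cloud Run service) and expects one reply per input row. A minimal sketch of such a backend in Python using the Google Translate client (the function name and target language are assumptions):

import json

from google.cloud import translate_v2 as translate

client = translate.Client()

def translate_reviews(request):
    # BigQuery sends {"calls": [[arg1, ...], ...]} and expects back
    # {"replies": [...]} with one entry per call, in the same order.
    calls = request.get_json()["calls"]
    replies = []
    for (review_text,) in calls:
        result = client.translate(review_text, target_language="en")
        replies.append(result["translatedText"])
    return json.dumps({"replies": replies})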

How to manage schema changes to a BigQuery table via Terraform

We currently create a BigQuery table with a pre-defined schema using the following Terraform resource:
https://www.terraform.io/docs/providers/google/r/bigquery_table.html
The dev team decided to modify the schema by adding another column, so we are planning to make this change in the above Terraform script.
What would be the best way to manage such schema migrations in production environments, given that the table data must be retained while the schema migration is performed?
It seems you cannot modify the schema of a table and retain its data using Terraform. Instead, you can use the bq command-line tool for this: https://cloud.google.com/bigquery/docs/managing-table-schemas#bq.
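Equivalently, the BigQuery client library can append a column while retaining the data; a short sketch in Python (the table and column names are placeholders):

from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table("my-project.my_dataset.my_table")  # placeholder id

# Appending a NULLABLE column is a metadata-only change: existing rows are
# kept and simply read as NULL in the new column.
schema = list(table.schema)
schema.append(bigquery.SchemaField("new_column", "STRING", mode="NULLABLE"))
table.schema = schema

client.update_table(table, ["schema"])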
Looks like there was a fix for it -
https://github.com/hashicorp/terraform-provider-google/issues/8503

How to create an audit mapplet in Informatica?

I want to create an audit that can be re-used across multiple mappings to capture the source record count and the target record count, where the source database is Oracle and the target database is SQL Server.
We are using it for source-to-staging mappings.
It's all there in the metadata tables. There's no need to add anything that would make your loads longer and more complex.
You can review this Framework for some ideas.
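For instance, the repository view REP_SESS_LOG already records per-session source and target row counts. A sketch of reading them from Python (the DSN and the exact view/column names are assumptions; check them against your PowerCenter repository version):

import pyodbc  # any DB-API driver for the repository database will do

conn = pyodbc.connect("DSN=infa_repo")  # placeholder connection

# REP_SESS_LOG is a standard repository view; column names can differ
# slightly between PowerCenter versions.
query = """
    SELECT subject_area,
           workflow_name,
           session_name,
           successful_source_rows,
           successful_rows AS successful_target_rows
    FROM rep_sess_log
    ORDER BY actual_start DESC
"""

for row in conn.cursor().execute(query):
    print(row)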